hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
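
Each row in this schema stores one source file in `content`, alongside repository metadata (`max_stars_*`, `max_issues_*`, `max_forks_*`) and per-file quality signals (`qsc_*`). A minimal sketch of loading and inspecting such a dump — the Parquet filename and the pandas route are assumptions on my part; the schema itself does not fix a storage format:

```python
import pandas as pd

# Load the dump and inspect one row (the filename is an assumption).
df = pd.read_parquet("data.parquet")
row = df.iloc[0]
print(row["max_stars_repo_name"], row["lang"], row["size"])
print(row["content"][:200])  # first 200 characters of the stored source file
```
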
8b35d9982525705ce805c06ded1df966dff59ab7 | 3,057 | py | Python | backend/app/api/users.py | vacuumlabs/cowork-reservation | 5b4d275f6e2d1b1877b3b57f5550f27ed2b3f641 | [
"MIT"
] | null | null | null | backend/app/api/users.py | vacuumlabs/cowork-reservation | 5b4d275f6e2d1b1877b3b57f5550f27ed2b3f641 | [
"MIT"
] | 21 | 2021-10-11T11:43:19.000Z | 2022-02-27T19:18:56.000Z | backend/app/api/users.py | vacuumlabs/cowork-reservation | 5b4d275f6e2d1b1877b3b57f5550f27ed2b3f641 | [
"MIT"
] | 2 | 2021-10-16T11:52:43.000Z | 2021-11-15T16:25:08.000Z | import json
from flask import jsonify, request, make_response, render_template
from flask.blueprints import Blueprint
from app.daos import tenant_dao
from app.firebase_utils import *  # wildcard import; supplies have_claims() used below
from app.services import user_service
from app.user_dao import user_dao
users_bp = Blueprint("users_bp", __name__)
@users_bp.route("/users", methods=["GET"])
def get_multiple():
accessible_roles = ["SUPER_ADMIN","TENANT_ADMIN"]
returned_value = have_claims(request.headers.get("Authorization"),accessible_roles)
if returned_value["have_access"]:
params = user_service.url_args_to_query_params_dict(request.args, False)
values_for_return = user_dao.get_users(
returned_value,
params['filters'],
params['sort'],
params['range']
)
return user_service.response(values_for_return['data'],values_for_return['count'], 200)
else:
return user_service.response(status_code=403)
@users_bp.route("/users", methods=["POST"])
def create_tenant_admin():
accessible_roles = ["SUPER_ADMIN","TENANT_ADMIN"]
returned_value = have_claims(request.headers.get("Authorization"),accessible_roles)
if returned_value["have_access"]:
data = request.json
success = user_dao.create_user(**data)
if success:
return user_service.response(success,status_code=200)
else:
return user_service.response(status_code=500)
else:
return user_service.response(status_code=403)
@users_bp.route("/users/<id>", methods=["DELETE"])
def del_tenant(id):
accessible_roles = ["SUPER_ADMIN","TENANT_ADMIN"]
returned_value = have_claims(request.headers.get("Authorization"),accessible_roles)
if returned_value["have_access"]:
'''success = user_dao.delete_user(id)
if success:
return user_service.response() '''
        return user_service.response(user_dao.update_user(returned_value, id, {"role":"", "tenantId": ""}))
return user_service.response(status_code=403)
@users_bp.route("/users/<id>", methods=["PUT"])
def update_tenant_admin(id):
accessible_roles = ["SUPER_ADMIN", "TENANT_ADMIN", "USER"]
returned_value = have_claims(request.headers.get("Authorization"),accessible_roles)
if returned_value["have_access"]:
data = request.json
success = user_dao.update_user(returned_value, id, data)
if success:
return user_service.response(success,status_code=200)
else:
return user_service.response(status_code=500)
return user_service.response(status_code=403)
@users_bp.route("/users/<id>", methods=["GET"])
def get_one_user(id):
accessible_roles = ["SUPER_ADMIN","TENANT_ADMIN", "USER"]
returned_value = have_claims(request.headers.get("Authorization"),accessible_roles)
if returned_value["have_access"]:
return user_service.response(user_dao.get_one(returned_value, id))
else:
return user_service.response(status_code=403) | 42.458333 | 105 | 0.690546 | 378 | 3,057 | 5.277778 | 0.193122 | 0.082707 | 0.110777 | 0.162907 | 0.674687 | 0.659148 | 0.610025 | 0.609023 | 0.586466 | 0.586466 | 0 | 0.012111 | 0.189728 | 3,057 | 72 | 106 | 42.458333 | 0.793298 | 0 | 0 | 0.52381 | 0 | 0 | 0.119201 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.079365 | false | 0 | 0.111111 | 0 | 0.380952 | 0.031746 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
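
A minimal sketch of exercising the blueprint above (assuming the `app` package layout implied by its imports, and that the core views behave as shown; `have_claims` comes from the wildcard `app.firebase_utils` import, so without a valid Firebase token the endpoints should answer 403):

```python
from flask import Flask
from app.api.users import users_bp  # the module shown above

app = Flask(__name__)
app.register_blueprint(users_bp)

with app.test_client() as client:
    # Illustrative request; "<id-token>" is a placeholder, not a real token.
    resp = client.get("/users", headers={"Authorization": "Bearer <id-token>"})
    print(resp.status_code)
```
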
8b3a84c2a16392cfb5f300789bb38d4d26b81c9c | 1,185 | py | Python | desktop/core/ext-py/docutils-0.14/test/test_parsers/test_parser.py | kokosing/hue | 2307f5379a35aae9be871e836432e6f45138b3d9 | [
"Apache-2.0"
] | 5,079 | 2015-01-01T03:39:46.000Z | 2022-03-31T07:38:22.000Z | desktop/core/ext-py/docutils-0.14/test/test_parsers/test_parser.py | zks888/hue | 93a8c370713e70b216c428caa2f75185ef809deb | [
"Apache-2.0"
] | 1,623 | 2015-01-01T08:06:24.000Z | 2022-03-30T19:48:52.000Z | desktop/core/ext-py/docutils-0.14/test/test_parsers/test_parser.py | zks888/hue | 93a8c370713e70b216c428caa2f75185ef809deb | [
"Apache-2.0"
] | 2,033 | 2015-01-04T07:18:02.000Z | 2022-03-28T19:55:47.000Z | #! /usr/bin/env python
# $Id: test_parser.py 7463 2012-06-22 19:49:51Z milde $
# Author: Stefan Rank <strank(AT)strank(DOT)info>
# Copyright: This module has been placed in the public domain.
"""
Tests for basic functionality of parser classes.
"""
import sys
import unittest
import DocutilsTestSupport # must be imported before docutils
import docutils
from docutils import parsers, utils, frontend
from docutils._compat import b
class RstParserTests(unittest.TestCase):
def test_inputrestrictions(self):
parser_class = parsers.get_parser_class('rst')
parser = parser_class()
document = utils.new_document('test data', frontend.OptionParser(
components=(parser, )).get_default_values())
if sys.version_info < (3,):
# supplying string input is supported, but only if ascii-decodable
self.assertRaises(UnicodeDecodeError,
parser.parse, b('hol%s' % chr(224)), document)
else:
# input must be unicode at all times
self.assertRaises(TypeError, parser.parse, b('hol'), document)
if __name__ == '__main__':
unittest.main()
| 31.184211 | 78 | 0.664979 | 143 | 1,185 | 5.377622 | 0.664336 | 0.042913 | 0.031209 | 0.039012 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 0.238819 | 1,185 | 37 | 79 | 32.027027 | 0.82816 | 0.308861 | 0 | 0 | 0 | 0 | 0.034783 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.052632 | false | 0 | 0.315789 | 0 | 0.421053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
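
For context, the same reST parser that the test above retrieves via `parsers.get_parser_class('rst')` can also be driven through docutils' high-level API. A minimal sketch — the HTML writer is my choice for illustration, not the test's:

```python
from docutils.core import publish_string

# Parse a small reStructuredText snippet and render it to HTML.
html = publish_string(source="Hello *docutils* world.", writer_name="html")
print(html[:80])
```
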
8b3b6823cfef19d1d56f714c0f6996299375bb5f | 41,975 | py | Python | aloha/aloha_object.py | TDHTTTT/MadGraph_Windows | 8a0be5befed650b6adcb9825c1b57af907c0167a | [
"NCSA"
] | null | null | null | aloha/aloha_object.py | TDHTTTT/MadGraph_Windows | 8a0be5befed650b6adcb9825c1b57af907c0167a | [
"NCSA"
] | null | null | null | aloha/aloha_object.py | TDHTTTT/MadGraph_Windows | 8a0be5befed650b6adcb9825c1b57af907c0167a | [
"NCSA"
] | null | null | null | ################################################################################
#
# Copyright (c) 2010 The MadGraph5_aMC@NLO Development team and Contributors
#
# This file is a part of the MadGraph5_aMC@NLO project, an application which
# automatically generates Feynman diagrams and matrix elements for arbitrary
# high-energy processes in the Standard Model and beyond.
#
# It is subject to the MadGraph5_aMC@NLO license which should accompany this
# distribution.
#
# For more information, visit madgraph.phys.ucl.ac.be and amcatnlo.web.cern.ch
#
################################################################################
## Diagram of Class
##
## Variable <--- aloha_lib.Variable
## |
## +- LorentzObject <--- Gamma
## |
## +- Sigma
## |
## +- P
##
## list <--- AddVariable
## |
## +- MultVariable <--- MultLorentz
##
## list <--- LorentzObjectRepresentation <-- ConstantObject
##
################################################################################
from __future__ import division
import aloha.aloha_lib as aloha_lib
import aloha
import cmath
#===============================================================================
# P (Momenta)
#===============================================================================
class L_P(aloha_lib.LorentzObject):
""" Helas Object for an Impulsion """
contract_first = 1
def __init__(self, name, lorentz1, particle):
self.particle = particle
aloha_lib.LorentzObject.__init__(self, name,[lorentz1], [],['P%s'%particle])
aloha_lib.KERNEL.add_tag((name,))
def create_representation(self):
self.sub0 = aloha_lib.DVariable('P%s_0' % self.particle)
self.sub1 = aloha_lib.DVariable('P%s_1' % self.particle)
self.sub2 = aloha_lib.DVariable('P%s_2' % self.particle)
self.sub3 = aloha_lib.DVariable('P%s_3' % self.particle)
self.representation= aloha_lib.LorentzObjectRepresentation(
{(0,): self.sub0, (1,): self.sub1, \
(2,): self.sub2, (3,): self.sub3},
self.lorentz_ind, [])
class P(aloha_lib.FactoryLorentz):
""" Helas Object for an Impulsion """
object_class = L_P
#def __init__(self, lorentz1, particle):
@classmethod
def get_unique_name(self, lorentz1, particle):
return '_P^%s_%s' % (particle, lorentz1)
#===============================================================================
# Pslash
#===============================================================================
class L_PSlash(aloha_lib.LorentzObject):
""" Gamma Matrices """
#gamma0 = [[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]
#gamma1 = [[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]]
#gamma2 = [[0, 0, 0, -complex(0,1)],[0, 0, complex(0,1), 0],
# [0, complex(0,1), 0, 0], [-complex(0,1), 0, 0, 0]]
#gamma3 = [[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]]
#
#gamma = [gamma0, gamma1, gamma2, gamma3]
def __init__(self, name, spin1, spin2, particle):
self.particle = particle
aloha_lib.LorentzObject.__init__(self,name,[], [spin1, spin2])
def create_representation(self):
"""create representation"""
p0 = aloha_lib.DVariable('P%s_0' % self.particle)
p1 = aloha_lib.DVariable('P%s_1' % self.particle)
p2 = aloha_lib.DVariable('P%s_2' % self.particle)
p3 = aloha_lib.DVariable('P%s_3' % self.particle)
gamma = {
(0, 0): 0, (0, 1): 0, (0, 2): p0-p3, (0, 3): -1*p1+1j*p2,
(1, 0): 0, (1, 1): 0, (1, 2): -1*p1-1j*p2, (1, 3): p0+p3,
(2, 0): p0+p3, (2, 1): p1-1j*p2, (2, 2): 0, (2, 3): 0,
(3, 0): p1+1j*p2, (3, 1): p0-p3, (3, 2): 0, (3, 3): 0}
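        # Note: this dict is pslash = gamma^mu p_mu written out in the Weyl
        # (chiral) basis of the gamma matrices sketched in the comments above.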
self.representation = aloha_lib.LorentzObjectRepresentation(gamma,
self.lorentz_ind,self.spin_ind)
class PSlash(aloha_lib.FactoryLorentz):
object_class = L_PSlash
@classmethod
def get_unique_name(self, spin1, spin2, particle):
return '_P%s/_%s_%s' % (particle, spin1,spin2)
#===============================================================================
# Mass
#===============================================================================
class L_Mass(aloha_lib.LorentzObject):
""" Helas Object for a Mass"""
def __init__(self, name, particle):
self.particle = particle
aloha_lib.LorentzObject.__init__(self, name,[], [])
def create_representation(self):
mass = aloha_lib.DVariable('M%s' % self.particle)
self.representation = aloha_lib.LorentzObjectRepresentation(
mass, self.lorentz_ind, self.spin_ind)
class Mass(aloha_lib.FactoryLorentz):
object_class = L_Mass
@classmethod
def get_unique_name(self, particle):
return '_M%s' % particle
#===============================================================================
# Mass
#===============================================================================
class L_Coup(aloha_lib.LorentzObject):
""" Helas Object for a Mass"""
def __init__(self, name, nb):
self.nb = nb
aloha_lib.LorentzObject.__init__(self, name,[], [])
def create_representation(self):
coup = aloha_lib.Variable('COUP%s' % self.nb)
self.representation = aloha_lib.LorentzObjectRepresentation(
coup, self.lorentz_ind, self.spin_ind)
class Coup(aloha_lib.FactoryLorentz):
object_class = L_Coup
@classmethod
def get_unique_name(self, nb):
return 'coup%s' % nb
#===============================================================================
# FCT
#===============================================================================
class L_FCT(aloha_lib.LorentzObject):
""" Helas Object for a Mass"""
def __init__(self, name, id):
self.fctid = id
aloha_lib.LorentzObject.__init__(self, name,[], [])
def create_representation(self):
var = aloha_lib.Variable('FCT%s' % self.fctid)
self.representation = aloha_lib.LorentzObjectRepresentation(
var, self.lorentz_ind, self.spin_ind)
class FCT(aloha_lib.FactoryLorentz):
object_class = L_FCT
@classmethod
def get_unique_name(self, name):
return '_FCT%s' % name
#===============================================================================
# OverMass2
#===============================================================================
class L_OverMass2(aloha_lib.LorentzObject):
""" Helas Object for 1/M**2 """
def __init__(self, name, particle):
self.particle = particle
aloha_lib.LorentzObject.__init__(self, name, [], [], tags=['OM%s' % particle])
def create_representation(self):
mass = aloha_lib.DVariable('OM%s' % self.particle)
self.representation = aloha_lib.LorentzObjectRepresentation(
mass, self.lorentz_ind, self.spin_ind)
class OverMass2(aloha_lib.FactoryLorentz):
object_class = L_OverMass2
@classmethod
def get_unique_name(self, particle):
return '_OM2_%s' % particle
#===============================================================================
# Width
#===============================================================================
class L_Width(aloha_lib.LorentzObject):
""" Helas Object for an Impulsion """
def __init__(self, name, particle):
self.particle = particle
aloha_lib.LorentzObject.__init__(self, name, [], [])
def create_representation(self):
width = aloha_lib.DVariable('W%s' % self.particle)
self.representation= aloha_lib.LorentzObjectRepresentation(
width, self.lorentz_ind, self.spin_ind)
class Width(aloha_lib.FactoryLorentz):
object_class = L_Width
@classmethod
def get_unique_name(self, particle):
return '_W%s' % particle
#===============================================================================
# Param
#===============================================================================
class L_Param(aloha_lib.LorentzObject):
""" Object for a Model Parameter """
def __init__(self, Lname, name):
self.varname = name
aloha_lib.LorentzObject.__init__(self, name, [], [])
def create_representation(self):
param = aloha_lib.Variable( self.varname, aloha_lib.ExtVariable)
self.representation= aloha_lib.LorentzObjectRepresentation(
param, [], [])
class Param(aloha_lib.FactoryLorentz):
object_class = L_Param
@classmethod
def get_unique_name(self, name):
        if name == 'Pi':
            aloha_lib.KERNEL.has_pi = True
return 'Param_%s' % name
#===============================================================================
# Scalar
#===============================================================================
class L_Scalar(aloha_lib.LorentzObject):
""" Helas Object for a Spinor"""
def __init__(self, name, particle):
self.particle = particle
aloha_lib.LorentzObject.__init__(self, name, [], [])
def create_representation(self):
rep = aloha_lib.Variable('S%s_1' % self.particle)
self.representation= aloha_lib.LorentzObjectRepresentation(
rep, [], [])
class Scalar(aloha_lib.FactoryLorentz):
object_class = L_Scalar
@classmethod
def get_unique_name(self,particle):
return '_S%s' % particle
#===============================================================================
# Spinor
#===============================================================================
class L_Spinor(aloha_lib.LorentzObject):
""" Helas Object for a Spinor"""
contract_first = 1
def __init__(self, name, spin1, particle, prefactor=1):
self.particle = particle
aloha_lib.LorentzObject.__init__(self, name,[], [spin1])
def create_representation(self):
self.sub0 = aloha_lib.Variable('F%s_1' % self.particle)
self.sub1 = aloha_lib.Variable('F%s_2' % self.particle)
self.sub2 = aloha_lib.Variable('F%s_3' % self.particle)
self.sub3 = aloha_lib.Variable('F%s_4' % self.particle)
self.representation= aloha_lib.LorentzObjectRepresentation(
{(0,): self.sub0, (1,): self.sub1, \
(2,): self.sub2, (3,): self.sub3},
[],self.spin_ind)
class Spinor(aloha_lib.FactoryLorentz):
""" Helas Object for a Spinor"""
object_class = L_Spinor
@classmethod
def get_unique_name(self,spin1, particle):
return '_F%s_%s' % (particle,spin1)
#===============================================================================
# Vector
#===============================================================================
class L_Vector(aloha_lib.LorentzObject):
""" Helas Object for a Vector"""
contract_first = 1
def __init__(self, name, lorentz, particle):
self.particle = particle
aloha_lib.LorentzObject.__init__(self, name, [lorentz], [])
def create_representation(self):
self.sub0 = aloha_lib.Variable('V%s_1' % self.particle)
self.sub1 = aloha_lib.Variable('V%s_2' % self.particle)
self.sub2 = aloha_lib.Variable('V%s_3' % self.particle)
self.sub3 = aloha_lib.Variable('V%s_4' % self.particle)
self.representation= aloha_lib.LorentzObjectRepresentation(
{(0,): self.sub0, (1,): self.sub1, \
(2,): self.sub2, (3,): self.sub3},
self.lorentz_ind, [])
class Vector(aloha_lib.FactoryLorentz):
object_class = L_Vector
@classmethod
def get_unique_name(self, lor, particle):
return '_V%s_%s' % (particle, lor)
#===============================================================================
# Spin3/2
#===============================================================================
class L_Spin3Half(aloha_lib.LorentzObject):
""" Helas Object for a Spin2"""
def __init__(self, name, lorentz, spin, particle):
self.particle = particle
aloha_lib.LorentzObject.__init__(self, name, [lorentz], [spin])
def create_representation(self):
self.sub00 = aloha_lib.Variable('R%s_1' % self.particle)
self.sub01 = aloha_lib.Variable('R%s_2' % self.particle)
self.sub02 = aloha_lib.Variable('R%s_3' % self.particle)
self.sub03 = aloha_lib.Variable('R%s_4' % self.particle)
self.sub10 = aloha_lib.Variable('R%s_5' % self.particle)
self.sub11 = aloha_lib.Variable('R%s_6' % self.particle)
self.sub12 = aloha_lib.Variable('R%s_7' % self.particle)
self.sub13 = aloha_lib.Variable('R%s_8' % self.particle)
self.sub20 = aloha_lib.Variable('R%s_9' % self.particle)
self.sub21 = aloha_lib.Variable('R%s_10' % self.particle)
self.sub22 = aloha_lib.Variable('R%s_11' % self.particle)
self.sub23 = aloha_lib.Variable('R%s_12' % self.particle)
self.sub30 = aloha_lib.Variable('R%s_13' % self.particle)
self.sub31 = aloha_lib.Variable('R%s_14' % self.particle)
self.sub32 = aloha_lib.Variable('R%s_15' % self.particle)
self.sub33 = aloha_lib.Variable('R%s_16' % self.particle)
rep = {(0,0): self.sub00, (0,1): self.sub01, (0,2): self.sub02, (0,3): self.sub03,
(1,0): self.sub10, (1,1): self.sub11, (1,2): self.sub12, (1,3): self.sub13,
(2,0): self.sub20, (2,1): self.sub21, (2,2): self.sub22, (2,3): self.sub23,
(3,0): self.sub30, (3,1): self.sub31, (3,2): self.sub32, (3,3): self.sub33}
self.representation= aloha_lib.LorentzObjectRepresentation( rep, \
self.lorentz_ind, self.spin_ind)
class Spin3Half(aloha_lib.FactoryLorentz):
object_class = L_Spin3Half
@classmethod
def get_unique_name(self, lor, spin, part):
return 'Spin3Half%s^%s_%s' % (part, lor, spin)
#===============================================================================
# Spin2
#===============================================================================
class L_Spin2(aloha_lib.LorentzObject):
""" Helas Object for a Spin2"""
def __init__(self, name, lorentz1, lorentz2, particle):
self.particle = particle
aloha_lib.LorentzObject.__init__(self, name, [lorentz1, lorentz2], [])
def create_representation(self):
self.sub00 = aloha_lib.Variable('T%s_1' % self.particle)
self.sub01 = aloha_lib.Variable('T%s_2' % self.particle)
self.sub02 = aloha_lib.Variable('T%s_3' % self.particle)
self.sub03 = aloha_lib.Variable('T%s_4' % self.particle)
self.sub10 = aloha_lib.Variable('T%s_5' % self.particle)
self.sub11 = aloha_lib.Variable('T%s_6' % self.particle)
self.sub12 = aloha_lib.Variable('T%s_7' % self.particle)
self.sub13 = aloha_lib.Variable('T%s_8' % self.particle)
self.sub20 = aloha_lib.Variable('T%s_9' % self.particle)
self.sub21 = aloha_lib.Variable('T%s_10' % self.particle)
self.sub22 = aloha_lib.Variable('T%s_11' % self.particle)
self.sub23 = aloha_lib.Variable('T%s_12' % self.particle)
self.sub30 = aloha_lib.Variable('T%s_13' % self.particle)
self.sub31 = aloha_lib.Variable('T%s_14' % self.particle)
self.sub32 = aloha_lib.Variable('T%s_15' % self.particle)
self.sub33 = aloha_lib.Variable('T%s_16' % self.particle)
rep = {(0,0): self.sub00, (0,1): self.sub01, (0,2): self.sub02, (0,3): self.sub03,
(1,0): self.sub10, (1,1): self.sub11, (1,2): self.sub12, (1,3): self.sub13,
(2,0): self.sub20, (2,1): self.sub21, (2,2): self.sub22, (2,3): self.sub23,
(3,0): self.sub30, (3,1): self.sub31, (3,2): self.sub32, (3,3): self.sub33}
self.representation= aloha_lib.LorentzObjectRepresentation( rep, \
self.lorentz_ind, [])
class Spin2(aloha_lib.FactoryLorentz):
object_class = L_Spin2
@classmethod
def get_unique_name(self, lor1, lor2, part):
return 'Spin2^%s_%s_%s' % (part, lor1, lor2)
#===============================================================================
# Gamma
#===============================================================================
class L_Gamma(aloha_lib.LorentzObject):
""" Gamma Matrices """
#gamma0 = [[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]
#gamma1 = [[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]]
#gamma2 = [[0, 0, 0, -complex(0,1)],[0, 0, complex(0,1), 0],
# [0, complex(0,1), 0, 0], [-complex(0,1), 0, 0, 0]]
#gamma3 = [[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]]
#
#gamma = [gamma0, gamma1, gamma2, gamma3]
gamma = { #Gamma0
(0, 0, 0): 0, (0, 0, 1): 0, (0, 0, 2): 1, (0, 0, 3): 0,
(0, 1, 0): 0, (0, 1, 1): 0, (0, 1, 2): 0, (0, 1, 3): 1,
(0, 2, 0): 1, (0, 2, 1): 0, (0, 2, 2): 0, (0, 2, 3): 0,
(0, 3, 0): 0, (0, 3, 1): 1, (0, 3, 2): 0, (0, 3, 3): 0,
#Gamma1
(1, 0, 0): 0, (1, 0, 1): 0, (1, 0, 2): 0, (1, 0, 3): 1,
(1, 1, 0): 0, (1, 1, 1): 0, (1, 1, 2): 1, (1, 1, 3): 0,
(1, 2, 0): 0, (1, 2, 1): -1, (1, 2, 2): 0, (1, 2, 3): 0,
(1, 3, 0): -1, (1, 3, 1): 0, (1, 3, 2): 0, (1, 3, 3): 0,
#Gamma2
(2, 0, 0): 0, (2, 0, 1): 0, (2, 0, 2): 0, (2, 0, 3): -1j,
(2, 1, 0): 0, (2, 1, 1): 0, (2, 1, 2): 1j, (2, 1, 3): 0,
(2, 2, 0): 0, (2, 2, 1): 1j, (2, 2, 2): 0, (2, 2, 3): 0,
(2, 3, 0): -1j, (2, 3, 1): 0, (2, 3, 2): 0, (2, 3, 3): 0,
#Gamma3
(3, 0, 0): 0, (3, 0, 1): 0, (3, 0, 2): 1, (3, 0, 3): 0,
(3, 1, 0): 0, (3, 1, 1): 0, (3, 1, 2): 0, (3, 1, 3): -1,
(3, 2, 0): -1, (3, 2, 1): 0, (3, 2, 2): 0, (3, 2, 3): 0,
(3, 3, 0): 0, (3, 3, 1): 1, (3, 3, 2): 0, (3, 3, 3): 0
}
def __init__(self, name, lorentz, spin1, spin2):
aloha_lib.LorentzObject.__init__(self,name,[lorentz], [spin1, spin2])
def create_representation(self):
self.representation = aloha_lib.LorentzObjectRepresentation(self.gamma,
self.lorentz_ind,self.spin_ind)
class Gamma(aloha_lib.FactoryLorentz):
object_class = L_Gamma
@classmethod
def get_unique_name(self, lor, spin1, spin2):
return 'Gamma^%s_%s_%s' % (lor, spin1, spin2)
#===============================================================================
# Sigma
#===============================================================================
class L_Sigma(aloha_lib.LorentzObject):
""" Sigma Matrices """
#zero = [[0,0,0,0]]*4
#i = complex(0,1)
#sigma01 = [[ 0, -i, 0, 0], [-i, 0, 0, 0], [0, 0, 0, i], [0, 0, i, 0]]
#sigma02 = [[ 0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]]
#sigma03 = [[-i, 0, 0, 0], [0, i, 0, 0], [0, 0, i, 0], [0, 0, 0, -i]]
#sigma12 = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]
#sigma13 = [[0, i, 0, 0], [-i, 0, 0, 0], [0, 0, 0, i], [0, 0, -i, 0]]
#sigma23 = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
#def inv(matrice):
# out=[]
# for i in range(4):
# out2=[]
# out.append(out2)
# for j in range(4):
# out2.append(-1*matrice[i][j])
# return out
#
#sigma =[[zero, sigma01, sigma02, sigma03], \
# [inv(sigma01), zero, sigma12, sigma13],\
# [inv(sigma02), inv(sigma12), zero, sigma23],\
# [inv(sigma03), inv(sigma13), inv(sigma23), zero]]
sigma={(0, 2, 0, 1): -0.5, (3, 1, 2, 0): 0, (3, 2, 3, 1): 0, (1, 3, 1, 3): 0,
(2, 3, 3, 2): 0.5, (2, 1, 3, 1): 0, (0, 2, 2, 1): 0, (3, 1, 0, 0): 0,
(2, 3, 3, 1): 0, (3, 3, 1, 2): 0, (3, 1, 0, 3): 0, (1, 1, 0, 3): 0,
(0, 1, 2, 2): 0, (3, 2, 3, 2): -0.5, (2, 1, 0, 1): 0, (3, 3, 3, 3): 0,
(1, 1, 2, 2): 0, (2, 2, 3, 2): 0, (2, 1, 2, 1): 0, (0, 1, 0, 3): 0,
(2, 1, 2, 2): -0.5, (1, 2, 2, 1): 0, (2, 2, 1, 3): 0, (0, 3, 1, 3): 0,
(3, 0, 3, 2): 0, (1, 2, 0, 1): 0, (3, 0, 3, 1): 0, (0, 0, 2, 2): 0,
(1, 2, 0, 2): 0, (2, 0, 0, 3): 0, (0, 0, 2, 1): 0, (0, 3, 3, 2): 0,
(3, 0, 1, 1): -0.5j, (3, 2, 0, 1): -0.5, (1, 0, 1, 0): 0.5j, (0, 0, 0, 1): 0,
(0, 2, 1, 1): 0, (3, 1, 3, 2): 0.5j, (3, 2, 2, 1): 0, (1, 3, 2, 3): 0.5j,
(1, 0, 3, 0): 0, (3, 2, 2, 2): 0, (0, 2, 3, 1): 0, (1, 0, 3, 3): 0,
(2, 3, 2, 1): 0, (0, 2, 3, 2): -0.5, (3, 1, 1, 3): 0, (1, 1, 1, 3): 0,
(1, 3, 0, 2): 0, (2, 3, 0, 1): 0.5, (1, 1, 1, 0): 0, (2, 3, 0, 2): 0,
(3, 3, 0, 3): 0, (1, 1, 3, 0): 0, (0, 1, 3, 3): 0, (2, 2, 0, 1): 0,
(2, 1, 1, 0): 0, (3, 3, 2, 2): 0, (2, 3, 1, 0): 0.5, (2, 2, 2, 3): 0,
(0, 3, 0, 3): 0, (0, 1, 1, 2): 0, (0, 3, 0, 0): -0.5j, (2, 3, 1, 1): 0,
(1, 2, 3, 0): 0, (2, 0, 1, 3): 0, (0, 0, 3, 1): 0, (0, 3, 2, 0): 0,
(2, 3, 1, 2): 0, (2, 0, 1, 0): -0.5, (1, 2, 1, 0): 0, (3, 0, 0, 2): 0,
(1, 0, 0, 2): 0, (0, 0, 1, 1): 0, (1, 2, 1, 3): 0, (2, 3, 1, 3): 0,
(2, 0, 3, 0): 0, (0, 0, 1, 2): 0, (1, 3, 3, 3): 0, (3, 2, 1, 0): -0.5,
(1, 3, 3, 0): 0, (1, 0, 2, 3): -0.5j, (0, 2, 0, 0): 0, (3, 1, 2, 3): -0.5j,
(3, 2, 3, 0): 0, (1, 3, 1, 0): -0.5j, (3, 2, 3, 3): 0, (0, 2, 2, 0): 0,
(2, 3, 3, 0): 0, (3, 3, 1, 3): 0, (0, 2, 2, 3): 0.5, (3, 1, 0, 2): 0,
(1, 1, 0, 2): 0, (3, 3, 1, 0): 0, (0, 1, 2, 3): 0.5j, (1, 1, 0, 1): 0,
(2, 1, 0, 2): 0, (0, 1, 2, 0): 0, (3, 3, 3, 0): 0, (1, 1, 2, 1): 0,
(2, 2, 3, 3): 0, (0, 1, 0, 0): 0, (2, 2, 3, 0): 0, (2, 1, 2, 3): 0,
(1, 2, 2, 2): 0.5, (2, 2, 1, 0): 0, (0, 3, 1, 2): 0, (0, 3, 1, 1): 0.5j,
(3, 0, 3, 0): 0, (1, 2, 0, 3): 0, (2, 0, 0, 2): 0, (0, 0, 2, 0): 0,
(0, 3, 3, 1): 0, (3, 0, 1, 0): 0, (2, 0, 0, 1): 0.5, (3, 2, 0, 2): 0,
(3, 0, 1, 3): 0, (1, 0, 1, 3): 0, (0, 0, 0, 0): 0, (0, 2, 1, 2): 0,
(3, 1, 3, 3): 0, (0, 0, 0, 3): 0, (1, 3, 2, 2): 0, (3, 1, 3, 0): 0,
(3, 2, 2, 3): -0.5, (1, 3, 2, 1): 0, (1, 0, 3, 2): -0.5j, (2, 3, 2, 2): 0,
(0, 2, 3, 3): 0, (3, 1, 1, 0): 0.5j, (1, 3, 0, 1): 0.5j, (1, 1, 1, 1): 0,
(2, 1, 3, 2): 0, (2, 3, 0, 3): 0, (3, 3, 0, 2): 0, (1, 1, 3, 1): 0,
(3, 3, 0, 1): 0, (2, 1, 3, 3): 0.5, (0, 1, 3, 2): 0.5j, (1, 1, 3, 2): 0,
(2, 1, 1, 3): 0, (3, 0, 2, 1): 0, (0, 1, 3, 1): 0, (3, 3, 2, 1): 0,
(2, 2, 2, 2): 0, (0, 1, 1, 1): 0, (2, 2, 2, 1): 0, (0, 3, 0, 1): 0,
(3, 0, 2, 2): -0.5j, (1, 2, 3, 3): -0.5, (0, 0, 3, 2): 0, (0, 3, 2, 1): 0,
(2, 0, 1, 1): 0, (2, 2, 0, 0): 0, (0, 3, 2, 2): 0.5j, (3, 0, 0, 3): 0,
(1, 0, 0, 3): 0, (1, 2, 1, 2): 0, (2, 0, 3, 1): 0, (1, 0, 0, 0): 0,
(0, 0, 1, 3): 0, (2, 0, 3, 2): 0.5, (3, 2, 1, 3): 0, (1, 3, 3, 1): 0,
(1, 0, 2, 0): 0, (2, 2, 0, 2): 0, (0, 2, 0, 3): 0, (3, 1, 2, 2): 0,
(1, 3, 1, 1): 0, (3, 1, 2, 1): 0, (2, 2, 0, 3): 0, (3, 0, 0, 1): 0,
(1, 3, 1, 2): 0, (2, 3, 3, 3): 0, (0, 2, 2, 2): 0, (3, 1, 0, 1): -0.5j,
(3, 3, 1, 1): 0, (1, 1, 0, 0): 0, (2, 1, 0, 3): 0, (0, 1, 2, 1): 0,
(3, 3, 3, 1): 0, (2, 1, 0, 0): -0.5, (1, 1, 2, 0): 0, (3, 3, 3, 2): 0,
(0, 1, 0, 1): -0.5j, (1, 1, 2, 3): 0, (2, 2, 3, 1): 0, (2, 1, 2, 0): 0,
(0, 1, 0, 2): 0, (1, 2, 2, 3): 0, (2, 0, 2, 1): 0, (2, 2, 1, 1): 0,
(1, 2, 2, 0): 0, (2, 2, 1, 2): 0, (0, 3, 1, 0): 0, (3, 0, 3, 3): 0.5j,
(2, 1, 3, 0): 0, (1, 2, 0, 0): 0.5, (0, 0, 2, 3): 0, (0, 3, 3, 0): 0,
(2, 0, 0, 0): 0, (3, 2, 0, 3): 0, (0, 3, 3, 3): -0.5j, (3, 0, 1, 2): 0,
(1, 0, 1, 2): 0, (3, 2, 0, 0): 0, (0, 2, 1, 3): 0, (1, 0, 1, 1): 0,
(0, 0, 0, 2): 0, (0, 2, 1, 0): 0.5, (3, 1, 3, 1): 0, (3, 2, 2, 0): 0,
(1, 3, 2, 0): 0, (1, 0, 3, 1): 0, (2, 3, 2, 3): 0.5, (0, 2, 3, 0): 0,
(3, 1, 1, 1): 0, (2, 3, 2, 0): 0, (1, 3, 0, 0): 0, (3, 1, 1, 2): 0,
(1, 1, 1, 2): 0, (1, 3, 0, 3): 0, (2, 3, 0, 0): 0, (2, 0, 2, 0): 0,
(3, 3, 0, 0): 0, (1, 1, 3, 3): 0, (2, 1, 1, 2): 0, (0, 1, 3, 0): 0,
(3, 3, 2, 0): 0, (2, 1, 1, 1): 0.5, (2, 0, 2, 2): 0, (3, 3, 2, 3): 0,
(0, 1, 1, 0): -0.5j, (2, 2, 2, 0): 0, (0, 3, 0, 2): 0, (3, 0, 2, 3): 0,
(0, 1, 1, 3): 0, (2, 0, 2, 3): -0.5, (1, 2, 3, 2): 0, (3, 0, 2, 0): 0,
(0, 0, 3, 3): 0, (1, 2, 3, 1): 0, (2, 0, 1, 2): 0, (0, 0, 3, 0): 0,
(0, 3, 2, 3): 0, (3, 0, 0, 0): 0.5j, (1, 2, 1, 1): -0.5, (1, 0, 0, 1): 0.5j,
(0, 0, 1, 0): 0, (2, 0, 3, 3): 0, (3, 2, 1, 2): 0, (1, 3, 3, 2): -0.5j,
(1, 0, 2, 1): 0, (3, 2, 1, 1): 0, (0, 2, 0, 2): 0, (1, 0, 2, 2): 0}
def __init__(self, name, lorentz1, lorentz2, spin1, spin2):
aloha_lib.LorentzObject.__init__(self, name, [lorentz1, lorentz2], \
[spin1, spin2])
def create_representation(self):
self.representation = aloha_lib.LorentzObjectRepresentation(self.sigma,
self.lorentz_ind,self.spin_ind)
class Sigma(aloha_lib.FactoryLorentz):
object_class = L_Sigma
@classmethod
def get_unique_name(self, lorentz1, lorentz2, spin1, spin2):
return 'Sigma_[%s,%s]^[%s,%s]' % (spin1, spin2, lorentz1, lorentz2)
#===============================================================================
# Gamma5
#===============================================================================
class L_Gamma5(aloha_lib.LorentzObject):
#gamma5 = [[-1, 0, 0, 0, 0], [0, -1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
gamma5 = {(0,0): -1, (0,1): 0, (0,2): 0, (0,3): 0,\
(1,0): 0, (1,1): -1, (1,2): 0, (1,3): 0,\
(2,0): 0, (2,1): 0, (2,2): 1, (2,3): 0,\
(3,0): 0, (3,1): 0, (3,2): 0, (3,3): 1}
def __init__(self, name, spin1, spin2):
aloha_lib.LorentzObject.__init__(self, name, [], [spin1, spin2])
def create_representation(self):
self.representation = aloha_lib.LorentzObjectRepresentation(self.gamma5,
self.lorentz_ind,self.spin_ind)
class Gamma5(aloha_lib.FactoryLorentz):
object_class = L_Gamma5
@classmethod
def get_unique_name(self, spin1, spin2):
return 'Gamma5_%s_%s' % (spin1, spin2)
#===============================================================================
# Conjugate Matrices
#===============================================================================
class L_C(aloha_lib.LorentzObject):
    # [[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]]
Cmetrix = {(0,0): 0, (0,1): -1, (0,2): 0, (0,3): 0,\
(1,0): 1, (1,1): 0, (1,2): 0, (1,3): 0,\
(2,0): 0, (2,1): 0, (2,2): 0, (2,3): 1,\
(3,0): 0, (3,1): 0, (3,2): -1, (3,3): 0}
def __init__(self, name, spin_list):
# spin_list is automatically ordered. The sign for the symmetrization
# is set in the Factory routine
aloha_lib.LorentzObject.__init__(self, name, [], spin_list)
def create_representation(self):
self.representation = aloha_lib.LorentzObjectRepresentation(self.Cmetrix,
self.lorentz_ind,self.spin_ind)
class C(aloha_lib.FactoryLorentz):
object_class = L_C
def __new__(cls, spin1, spin2):
spin_list = [spin1, spin2]
spin_list.sort()
sign = give_sign_perm(spin_list, [spin1, spin2])
name = cls.get_unique_name(spin_list)
if sign == 1:
return aloha_lib.FactoryVar.__new__(cls, name, cls.object_class, spin_list)
else:
out = aloha_lib.FactoryVar.__new__(cls, name, cls.object_class, spin_list)
out.prefactor = -1
return out
@classmethod
def get_unique_name(cls, spin_list):
return "C_%s_%s" % tuple(spin_list)
#===============================================================================
# EPSILON
#===============================================================================
#Helpfull function
def give_sign_perm(perm0, perm1):
"""Check if 2 permutations are of equal parity.
Assume that both permutation lists are of equal length
and have the same elements. No need to check for these
conditions.
"""
assert len(perm0) == len(perm1)
perm1 = list(perm1) ## copy this into a list so we don't mutate the original
perm1_map = dict((v, i) for i,v in enumerate(perm1))
transCount = 0
for loc, p0 in enumerate(perm0):
p1 = perm1[loc]
if p0 != p1:
sloc = perm1_map[p0] # Find position in perm1
perm1[loc], perm1[sloc] = p0, p1 # Swap in perm1
perm1_map[p0], perm1_map[p1] = loc, sloc # Swap the map
transCount += 1
# Even number of transposition means equal parity
return -2 * (transCount % 2) + 1
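# Illustrative checks (not part of the original module):
#   give_sign_perm([0, 1, 2, 3], [1, 0, 2, 3]) == -1   (one transposition: odd)
#   give_sign_perm([0, 1, 2, 3], [1, 0, 3, 2]) == +1   (two transpositions: even)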
# Practical definition of Epsilon
class L_Epsilon(aloha_lib.LorentzObject):
""" The fully anti-symmetric object in Lorentz-Space """
def give_parity(self, perm):
"""return the parity of the permutation"""
assert set(perm) == set([0,1,2,3])
i1 , i2, i3, i4 = perm
        # Sign of the permutation (see e.g. Wikipedia): the product of all
        # pairwise differences of a permutation of {0,1,2,3} is always +/-12,
        # so dividing by 12 normalizes it to +/-1.
return -self.sign * ((i2-i1) * (i3-i1) *(i4-i1) * (i3-i2) * (i4-i2) *(i4-i3))/12
# DEFINE THE REPRESENTATION OF EPSILON
def __init__(self, name, lorentz1, lorentz2, lorentz3, lorentz4):
lorentz_list = [lorentz1 , lorentz2, lorentz3, lorentz4]
#order_lor = list(lorentz_list)
#order_lor.sort()
#self.sign = give_sign_perm(order_lor, lorentz_list)
self.sign=1
aloha_lib.LorentzObject.__init__(self, name, lorentz_list, [])
def create_representation(self):
if not hasattr(self, 'epsilon'):
# init all element to zero
epsilon = dict( ((l1, l2, l3, l4), 0)
for l1 in range(4) \
for l2 in range(4) \
for l3 in range(4) \
for l4 in range(4))
# update non trivial one
epsilon.update(dict(
((l1, l2, l3, l4), self.give_parity((l1,l2,l3,l4)))
for l1 in range(4) \
for l2 in range(4) if l2 != l1\
for l3 in range(4) if l3 not in [l1,l2]\
for l4 in range(4) if l4 not in [l1,l2,l3]))
L_Epsilon.epsilon = epsilon
self.representation = aloha_lib.LorentzObjectRepresentation(self.epsilon,
self.lorentz_ind,self.spin_ind)
class Epsilon(aloha_lib.FactoryLorentz):
object_class = L_Epsilon
@classmethod
def get_unique_name(cls,l1,l2,l3,l4):
return '_EPSILON_%s_%s_%s_%s' % (l1,l2,l3,l4)
#===============================================================================
# Metric
#===============================================================================
class L_Metric(aloha_lib.LorentzObject):
metric = {(0,0): 1, (0,1): 0, (0,2): 0, (0,3): 0,\
(1,0): 0, (1,1): -1, (1,2): 0, (1,3): 0,\
(2,0): 0, (2,1): 0, (2,2): -1, (2,3): 0,\
(3,0): 0, (3,1): 0, (3,2): 0, (3,3): -1}
#[[1, 0, 0,0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
def __init__(self, name, lorentz1, lorentz2):
aloha_lib.LorentzObject.__init__(self,name,[lorentz1, lorentz2], [])
def create_representation(self):
self.representation = aloha_lib.LorentzObjectRepresentation(self.metric,
self.lorentz_ind,self.spin_ind)
class Metric(aloha_lib.FactoryLorentz):
object_class = L_Metric
@classmethod
def get_unique_name(cls,l1,l2):
if l1<l2:
return '_ETA_%s_%s' % (l1,l2)
else:
return '_ETA_%s_%s' % (l2,l1)
#===============================================================================
# Identity
#===============================================================================
class L_Identity(aloha_lib.LorentzObject):
#identity = [[1, 0, 0,0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
identity = {(0,0): 1, (0,1): 0, (0,2): 0, (0,3): 0,\
(1,0): 0, (1,1): 1, (1,2): 0, (1,3): 0,\
(2,0): 0, (2,1): 0, (2,2): 1, (2,3): 0,\
(3,0): 0, (3,1): 0, (3,2): 0, (3,3): 1}
def __init__(self, name, spin1, spin2):
aloha_lib.LorentzObject.__init__(self, name, [],[spin1, spin2])
def create_representation(self):
self.representation = aloha_lib.LorentzObjectRepresentation(self.identity,
self.lorentz_ind,self.spin_ind)
class Identity(aloha_lib.FactoryLorentz):
object_class = L_Identity
@classmethod
def get_unique_name(self, spin1, spin2):
return 'Id_%s_%s' % (spin1, spin2)
#===============================================================================
# IdentityL
#===============================================================================
class L_IdentityL(aloha_lib.LorentzObject):
#identity = [[1, 0, 0,0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
identity = {(0,0): 1, (0,1): 0, (0,2): 0, (0,3): 0,\
(1,0): 0, (1,1): 1, (1,2): 0, (1,3): 0,\
(2,0): 0, (2,1): 0, (2,2): 1, (2,3): 0,\
(3,0): 0, (3,1): 0, (3,2): 0, (3,3): 1}
def __init__(self, name, l1, l2):
aloha_lib.LorentzObject.__init__(self, name, [l1,l2], [])
def create_representation(self):
self.representation = aloha_lib.LorentzObjectRepresentation(self.identity,
self.lorentz_ind,self.spin_ind)
class IdentityL(aloha_lib.FactoryLorentz):
object_class = L_IdentityL
@classmethod
def get_unique_name(self, l1, l2):
return 'IdL_%s_%s' % (l1, l2)
#===============================================================================
# ProjM
#===============================================================================
class L_ProjM(aloha_lib.LorentzObject):
""" A object for (1-gamma5)/2 """
#projm = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
projm= {(0,0): 1, (0,1): 0, (0,2): 0, (0,3): 0,\
(1,0): 0, (1,1): 1, (1,2): 0, (1,3): 0,\
(2,0): 0, (2,1): 0, (2,2): 0, (2,3): 0,\
(3,0): 0, (3,1): 0, (3,2): 0, (3,3): 0}
def __init__(self,name, spin1, spin2):
"""Initialize the object"""
aloha_lib.LorentzObject.__init__(self, name, [], [spin1, spin2])
def create_representation(self):
self.representation = aloha_lib.LorentzObjectRepresentation(self.projm,
self.lorentz_ind,self.spin_ind)
class ProjM(aloha_lib.FactoryLorentz):
object_class = L_ProjM
@classmethod
def get_unique_name(self, spin1, spin2):
return 'PROJM_%s_%s' % (spin1, spin2)
#===============================================================================
# ProjP
#===============================================================================
class L_ProjP(aloha_lib.LorentzObject):
"""A object for (1+gamma5)/2 """
#projp = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
projp = {(0,0): 0, (0,1): 0, (0,2): 0, (0,3): 0,\
(1,0): 0, (1,1): 0, (1,2): 0, (1,3): 0,\
(2,0): 0, (2,1): 0, (2,2): 1, (2,3): 0,\
(3,0): 0, (3,1): 0, (3,2): 0, (3,3): 1}
def __init__(self,name, spin1, spin2):
"""Initialize the object"""
aloha_lib.LorentzObject.__init__(self, name, [], [spin1, spin2])
def create_representation(self):
self.representation = aloha_lib.LorentzObjectRepresentation(self.projp,
self.lorentz_ind, self.spin_ind)
class ProjP(aloha_lib.FactoryLorentz):
object_class = L_ProjP
@classmethod
def get_unique_name(self, spin1, spin2):
return 'PROJP_%s_%s' % (spin1, spin2)
#===============================================================================
# Denominator Propagator
#===============================================================================
class DenominatorPropagator(aloha_lib.LorentzObject):
"""The Denominator of the Propagator"""
def __new__(cls, particle):
name = 'DenomP%s' % particle
return aloha_lib.Variable.__new__(cls, name)
def __init__(self, particle):
if self:
return
self.particle = particle
aloha_lib.LorentzObject.__init__(self, [], [])
def get_unique_name(self,*args):
return 'DenomP%s' % self.particle
def simplify(self):
"""Return the Denominator in a abstract way"""
mass = Mass(self.particle)
width = Width(self.particle)
denominator = P('i1', self.particle) * P('i1', self.particle) - \
mass * mass + complex(0,1) * mass* width
return denominator
def create_representation(self):
"""Create the representation for the Vector propagator"""
object = self.simplify()
self.representation = object.expand()
#===============================================================================
# Numerator Propagator
#===============================================================================
SpinorPropagatorout = lambda spin1, spin2, particle: -1 * (Gamma('mu', spin1, spin2) * \
P('mu', particle) - Mass(particle) * Identity(spin1, spin2))
SpinorPropagatorin = lambda spin1, spin2, particle: (Gamma('mu', spin1, spin2) * \
P('mu', particle) + Mass(particle) * Identity(spin1, spin2))
VectorPropagator = lambda l1, l2, part: complex(0,1)*(-1 * Metric(l1, l2) + OverMass2(part) * \
Metric(l1,'I3')* P('I3', part) * P(l2, part))
VectorPropagatorMassless= lambda l1, l2, part: complex(0,-1) * Metric(l1, l2)
Spin3halfPropagatorin = lambda mu, nu, s1, s2, part: (\
-1/3 * (Gamma(mu,s1,-2) + Identity(s1, -2) * P(mu, part) * Mass(part) * OverMass2(part))* \
(PSlash(-2,-3, part) - Identity(-2,-3) * Mass(part)) * \
( Gamma(nu, -3, s2)+ Mass(part) * OverMass2(part) * Identity(-3, s2) * P(nu, part) ) - \
(PSlash(s1,s2, part) + Mass(part) * Identity(s1,s2)) * (Metric(mu, nu) - OverMass2(part) * P(mu, part) * P(nu,part)))
Spin3halfPropagatorout = lambda mu, nu, s1, s2, part: ( \
-1/3 * (Gamma(mu,s1,-2) - Identity(s1, -2) * P(mu, part) * Mass(part) * OverMass2(part))* \
(-1*PSlash(-2,-3, part) - Identity(-2,-3) * Mass(part)) * \
( Gamma(nu, -3, s2)- Mass(part) * OverMass2(part) * Identity(-3, s2) * P(nu, part) ) - \
(-1*PSlash(s1,s2, part)
+ Mass(part) * Identity(s1,s2)) * (Metric(mu, nu) - OverMass2(part) * P(mu, part) * P(nu,part)))
Spin3halfPropagatorMasslessOut = lambda mu, nu, s1, s2, part: Gamma(nu, s1,-1) * PSlash(-1,-2, part) * Gamma(mu,-2, s2)
Spin3halfPropagatorMasslessIn = lambda mu, nu, s1, s2, part: -1 * Gamma(mu, s1,-1) * PSlash(-1,-2, part) * Gamma(nu,-2, s2)
Spin2masslessPropagator = lambda mu, nu, alpha, beta: 1/2 *( Metric(mu, alpha)* Metric(nu, beta) +\
Metric(mu, beta) * Metric(nu, alpha) - Metric(mu, nu) * Metric(alpha, beta))
Spin2Propagator = lambda mu, nu, alpha, beta, part: Spin2masslessPropagator(mu, nu, alpha, beta) + \
- 1/2 * OverMass2(part) * (Metric(mu,alpha)* P(nu, part) * P(beta, part) + \
Metric(nu, beta) * P(mu, part) * P(alpha, part) + \
Metric(mu, beta) * P(nu, part) * P(alpha, part) + \
Metric(nu, alpha) * P(mu, part) * P(beta , part) )+ \
1/6 * (Metric(mu,nu) + 2 * OverMass2(part) * P(mu, part) * P(nu, part)) * \
(Metric(alpha,beta) + 2 * OverMass2(part) * P(alpha, part) * P(beta, part))
| 40.555556 | 123 | 0.445289 | 5,617 | 41,975 | 3.204202 | 0.05964 | 0.04745 | 0.028003 | 0.01778 | 0.707023 | 0.649572 | 0.543894 | 0.467774 | 0.396989 | 0.292644 | 0 | 0.102297 | 0.287362 | 41,975 | 1,034 | 124 | 40.594778 | 0.499382 | 0.204407 | 0 | 0.284173 | 0 | 0 | 0.017211 | 0.000641 | 0 | 0 | 0 | 0 | 0.003597 | 1 | 0.138489 | false | 0 | 0.007194 | 0.039568 | 0.350719 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
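
For context: `L_Epsilon.give_parity` above relies on the fact that the product of all pairwise differences of a permutation of {0,1,2,3} always has magnitude 12. A minimal standalone check of that property (pure Python, independent of `aloha_lib`):

```python
from itertools import permutations

def parity(p):
    # Product of all pairwise differences; +/-12 for any permutation of {0,1,2,3}.
    i1, i2, i3, i4 = p
    return ((i2 - i1) * (i3 - i1) * (i4 - i1) * (i3 - i2) * (i4 - i2) * (i4 - i3)) // 12

assert all(abs(parity(p)) == 1 for p in permutations(range(4)))
assert parity((0, 1, 2, 3)) == 1 and parity((1, 0, 2, 3)) == -1
```
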
8b438e05322fd68c73974bbc580bd7c882aa4fa3 | 2,314 | py | Python | tuples.py | sumon-dey/PythonConcepts | a014656232ff97161f995bf0124f6ce9daf42496 | [
"Apache-2.0"
] | 6 | 2019-07-21T13:07:31.000Z | 2021-02-24T10:36:06.000Z | tuples.py | sumon-dey/PythonConcepts | a014656232ff97161f995bf0124f6ce9daf42496 | [
"Apache-2.0"
] | null | null | null | tuples.py | sumon-dey/PythonConcepts | a014656232ff97161f995bf0124f6ce9daf42496 | [
"Apache-2.0"
] | 3 | 2021-02-20T11:01:27.000Z | 2021-07-22T10:58:34.000Z | # Python Tuples
# Ordered, Immutable collection of items which allows Duplicate Members
# We can put the data, which will not change throughout the program, in a Tuple
# Tuples can be called as "Immutable Python Lists" or "Constant Python Lists"
employeeTuple = ("Sam", "Sam", "Mike", "John", "Harry", "Tom", "Sean", "Justin")
# to check the variable type
print(type(employeeTuple))
# to check whether the type is "Tuple" or not
print(isinstance(employeeTuple, tuple))
# to print all the elements in the Tuple
for employeeName in employeeTuple:
print("Employee: " + employeeName)
print("**********************************************************")
# Other functions
# to display an element using index
print(employeeTuple[0])
# to display the length of the Tuple
print(len(employeeTuple))
# employeeTuple[2] = "David" # This will throw a TypeError since Tuple cannot be modified
print(employeeTuple)
print("**********************************************************")
# we can use the tuple() constructor to create a tuple
employeeName2 = tuple(("Richard", "Henry", "Brian"))
print(employeeName2)
# we can also omit the use of brackets to create a tuple
employeeName3 = "David", "Michael", "Shaun"
print(employeeName3)
print(type(employeeName3))
print("**********************************************************")
# Difference between a Tuple and a String
myStr = ("Sam") # This is a String
print(type(myStr))
# This is a Tuple (for a Tuple, comma is mandatory) with one item
myTuple1 = ("Sam",)
print(type(myTuple1))
print(len(myTuple1))
# This is an empty Tuple
myTuple2 = ()
print(type(myTuple2))
print(len(myTuple2))
print("**********************************************************")
# Value Swapping using Tuple
myNumber1 = 2
myNumber2 = 3
myNumber1, myNumber2 = myNumber2, myNumber1
print(myNumber1)
print(myNumber2)
print("**********************************************************")
# Nested Tuples
employeeName4 = employeeName3, ("Raj", "Vinith")
print(employeeName4)
print("**********************************************************")
# Tuple Sequence Packing
packed_tuple = 1, 2, "Python"
print(packed_tuple)
# Tuple Sequence Unpacking
number1, number2, string1 = packed_tuple
print(number1)
print(number2)
print(string1)
print("**********************************************************")
| 33.536232 | 90 | 0.609767 | 262 | 2,314 | 5.374046 | 0.427481 | 0.025568 | 0.012784 | 0.019886 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016691 | 0.119706 | 2,314 | 68 | 91 | 34.029412 | 0.674521 | 0.372083 | 0 | 0.170732 | 0 | 0 | 0.351748 | 0.283916 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.682927 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
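
A small runnable addendum to the `TypeError` note in the file above, written standalone so it runs on its own (the exact error message wording varies across Python versions):

```python
employeeTuple = ("Sam", "Sam", "Mike")
try:
    employeeTuple[2] = "David"   # the assignment the file leaves commented out
except TypeError as error:
    print("Tuples are immutable:", error)
```
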
8c663c96b9101af26692648dfb0ca8488e020977 | 332 | py | Python | stPlus/__init__.py | qiaochen/stPlus | 06294c6cf63d6d81555ffdd4d19c571c49805ff2 | [
"MIT"
] | 7 | 2021-01-23T07:13:21.000Z | 2022-02-19T09:15:20.000Z | stPlus/__init__.py | qiaochen/stPlus | 06294c6cf63d6d81555ffdd4d19c571c49805ff2 | [
"MIT"
] | 1 | 2021-08-22T00:59:03.000Z | 2021-08-22T00:59:03.000Z | stPlus/__init__.py | qiaochen/stPlus | 06294c6cf63d6d81555ffdd4d19c571c49805ff2 | [
"MIT"
] | 6 | 2020-11-29T02:01:01.000Z | 2022-03-26T08:56:50.000Z | #!/usr/bin/env python
"""
# Authors: Shengquan Chen, Boheng Zhang, Xiaoyang Chen
# Created Time : Sat 28 Nov 2020 08:31:29 PM CST
# File Name: __init__.py
# Description: stPlus: reference-based enhancement of spatial transcriptomics
"""
__author__ = "Xiaoyang Chen"
__email__ = "xychen20@mails.tsinghua.edu.cn"
from .model import *
| 23.714286 | 77 | 0.737952 | 46 | 332 | 5.152174 | 0.934783 | 0.101266 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04947 | 0.14759 | 332 | 13 | 78 | 25.538462 | 0.787986 | 0.674699 | 0 | 0 | 0 | 0 | 0.434343 | 0.30303 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
8c7565fa05e188eaf9fcca6bca1db9405321a1a7 | 2,275 | py | Python | tmp_mail/__init__.py | KMikeeU/temp-mail | 257155b60baee8ed8e24776db3e5492f42e2c10b | [
"MIT"
] | 2 | 2018-12-09T18:36:03.000Z | 2019-05-01T12:54:41.000Z | tmp_mail/__init__.py | KMikeeU/temp-mail | 257155b60baee8ed8e24776db3e5492f42e2c10b | [
"MIT"
] | null | null | null | tmp_mail/__init__.py | KMikeeU/temp-mail | 257155b60baee8ed8e24776db3e5492f42e2c10b | [
"MIT"
] | 3 | 2018-12-09T18:36:07.000Z | 2019-10-22T11:35:09.000Z | import requests
from bs4 import BeautifulSoup
import re
import time
import _thread
class Client():
base = "https://temp-mail.org/en"
def __init__(self, address: str = None):
self.address = address
self.session = requests.Session()
realisticBrowser = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36"
}
if self.address != None:
self.session.cookies.set("mail", self.address.replace("@", "%40"))
self.session.headers.update(realisticBrowser)
self.session.get(self.base)
try:
self.address = self.session.cookies["mail"].replace("%40", "@")
except:
raise ValueError("The email address you supplied is not supported on temp-mail")
self.recent = []
	def checkloop(self, callback=lambda m: m, run_async=True):
		# 'async' became a reserved keyword in Python 3.7, so the original
		# parameter name is renamed to run_async to keep this runnable.
		if run_async:
			_thread.start_new_thread(self.checkloop, (), {"callback": callback, "run_async": False})
else:
self.check()
time.sleep(2)
while True:
ilen = len(self.recent)
self.check()
if len(self.recent) > ilen:
for i in self.recent[ilen:]:
callback(i)
time.sleep(10)
def check(self):
r = self.session.get(self.base + "/option/refresh/")
soup = BeautifulSoup(r.text, "html.parser")
mails = []
for mail in soup.find('tbody').findChildren("tr"):
info = mail.findChildren("td")[0].findChildren()[0]
view_url = info["href"]
subject = info["title"]
by = re.findall(r".*<(.+)>", info.decode_contents())
creq = self.session.get(view_url)
contentsoup = BeautifulSoup(creq.text, "html.parser")
content = contentsoup.find("div", {"class":"pm-text"}).decode_contents()
themail = Email(author=by, content=content,to=self.address,subject=subject)
mails.append(themail)
self.recent = mails
return mails
class Email():
def __init__(self, author=None, content=None, to=None, subject=None):
self.author = author
self.content = content
self.to = to
self.subject = subject
def __str__(self):
return str({"author":self.author, "content": self.content, "to": self.to, "subject":self.subject})
def links(self):
soup = BeautifulSoup(self.content, "html.parser")
lnks = []
for link in soup.findAll("a"):
lnks.append(link["href"])
return lnks | 27.743902 | 134 | 0.670769 | 313 | 2,275 | 4.814696 | 0.399361 | 0.051095 | 0.02787 | 0.023889 | 0.029197 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020516 | 0.164396 | 2,275 | 82 | 135 | 27.743902 | 0.771699 | 0 | 0 | 0.030769 | 0 | 0.015385 | 0.158681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.076923 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
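
A minimal usage sketch of the client above. This scraper is tightly coupled to temp-mail.org's live markup, so the selectors may need updating if the site changes:

```python
client = Client()                      # temp-mail assigns a random address
print("Inbox address:", client.address)

for mail in client.check():            # one synchronous poll of the inbox
    print(mail.subject, "->", mail.links())
```
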
8c771cbba76c700d1db7d0599ef23c47728f8fc7 | 803 | py | Python | socialnetworks/moves/views.py | gGonz/django-socialnetworks | 3f6c577efafd6ed5eb8b5cb60d9ee6a36920581d | [
"Apache-2.0"
] | 5 | 2015-06-18T03:30:28.000Z | 2017-11-04T21:34:20.000Z | socialnetworks/moves/views.py | gGonz/django-socialnetworks | 3f6c577efafd6ed5eb8b5cb60d9ee6a36920581d | [
"Apache-2.0"
] | 2 | 2015-04-25T00:06:19.000Z | 2015-04-30T22:42:40.000Z | socialnetworks/moves/views.py | gGonz/django-socialnetworks | 3f6c577efafd6ed5eb8b5cb60d9ee6a36920581d | [
"Apache-2.0"
] | 4 | 2015-06-11T18:28:04.000Z | 2016-09-07T15:08:09.000Z | from .clients import MovesAppClient
from .settings import SETUP_URL_NAME
from ..core import views
class MovesAppDialogRedirect(views.OAuthDialogRedirectView):
"""
View that handles the redirects for the Moves app authorization dialog.
"""
client_class = MovesAppClient
class MovesAppCallback(views.OAuthCallbackView):
"""
View that handles the Moves app OAuth callback.
"""
client_class = MovesAppClient
class MovesAppSetup(views.OAuthSetupView):
"""
View that handles the setup of a retrieved Moves app account.
"""
client_class = MovesAppClient
setup_url = SETUP_URL_NAME
class MovesAppOAuthDisconnect(views.OAuthDisconnectView):
"""
View that handles the disconnect of a Moves app account.
"""
client_class = MovesAppClient
| 24.333333 | 75 | 0.73599 | 87 | 803 | 6.689655 | 0.425287 | 0.054983 | 0.103093 | 0.123711 | 0.137457 | 0.137457 | 0 | 0 | 0 | 0 | 0 | 0 | 0.198007 | 803 | 32 | 76 | 25.09375 | 0.903727 | 0.296389 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
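
A minimal sketch of routing the views above in a `urls.py` of the same Django vintage, assuming the core views are standard class-based views; the URL paths and names here are assumptions for illustration, not taken from the package:

```python
from django.conf.urls import url

from socialnetworks.moves.views import (MovesAppDialogRedirect, MovesAppCallback,
                                        MovesAppSetup, MovesAppOAuthDisconnect)

urlpatterns = [
    url(r'^login/$', MovesAppDialogRedirect.as_view(), name='moves_login'),
    url(r'^callback/$', MovesAppCallback.as_view(), name='moves_callback'),
    url(r'^setup/$', MovesAppSetup.as_view(), name='moves_setup'),
    url(r'^disconnect/$', MovesAppOAuthDisconnect.as_view(), name='moves_disconnect'),
]
```
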
8c801a155d630ce0c04e5998293ea5519ef48632 | 444 | py | Python | bookorbooks/account/migrations/0003_alter_customuser_identity_number.py | talhakoylu/SummerInternshipBackend | 4ecedf5c97f73e3d32d5a534769e86aac3e4b6d3 | [
"MIT"
] | 1 | 2021-08-10T22:24:17.000Z | 2021-08-10T22:24:17.000Z | bookorbooks/account/migrations/0003_alter_customuser_identity_number.py | talhakoylu/SummerInternshipBackend | 4ecedf5c97f73e3d32d5a534769e86aac3e4b6d3 | [
"MIT"
] | null | null | null | bookorbooks/account/migrations/0003_alter_customuser_identity_number.py | talhakoylu/SummerInternshipBackend | 4ecedf5c97f73e3d32d5a534769e86aac3e4b6d3 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.5 on 2021-07-13 09:12
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('account', '0002_auto_20210712_1851'),
    ]

    operations = [
        migrations.AlterField(
            model_name='customuser',
            name='identity_number',
            field=models.CharField(max_length=11, unique=True, verbose_name='Kimlik Numarası'),
        ),
    ]
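
# For reference, this AlterField corresponds to a model field roughly like the
# sketch below ('Kimlik Numarası' is Turkish for 'identity number'; the base
# class and surrounding model body are assumptions, not the full CustomUser
# definition):
#
#     class CustomUser(AbstractUser):
#         identity_number = models.CharField(
#             max_length=11, unique=True, verbose_name='Kimlik Numarası')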
| 23.368421 | 95 | 0.630631 | 49 | 444 | 5.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0.256757 | 444 | 18 | 96 | 24.666667 | 0.727273 | 0.101351 | 0 | 0 | 1 | 0 | 0.176322 | 0.057935 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8c805e1adcc6b31a87a479f4115ffa71c7a3bbab | 8,678 | py | Python | Code/Python/ImageProcessing/StereoDepth/featureMatchStereo.py | Nailim/shuttler | a12ea89a1c6b289079ce61ebf8bf3361696f10b2 | [
"MIT"
] | null | null | null | Code/Python/ImageProcessing/StereoDepth/featureMatchStereo.py | Nailim/shuttler | a12ea89a1c6b289079ce61ebf8bf3361696f10b2 | [
"MIT"
] | null | null | null | Code/Python/ImageProcessing/StereoDepth/featureMatchStereo.py | Nailim/shuttler | a12ea89a1c6b289079ce61ebf8bf3361696f10b2 | [
"MIT"
] | null | null | null | # for new opencv
#import os,sys
#os.chdir(os.path.expanduser('~/opencv-2.4.6.1/lib'))
#sys.path.append(os.path.expanduser('~/opencv-2.4.6.1/lib/python2.7/dist-packages'))
# before starting
#export PYTHONPATH=~/opencv-2.4.6.1/lib/python2.7/dist-packages
import os
#import cv
import cv2
import math
import argparse
import numpy as np
global inputParser # just a reminder, it's used as a global variable
global inputArgs # just a reminder, it's used as a global variable
def parseInput() :
global inputParser
global inputArgs
inputParser = argparse.ArgumentParser(description='Match features between two stereo images.')
inputParser.add_argument('-l', '--left', dest='left', action='store', default="", type=str, help='left image')
inputParser.add_argument('-r', '--right', dest='right', action='store', default="", type=str, help='right image')
inputParser.add_argument('-n', '--name', dest='name', action='store', default="fm_out", type=str, help='name of the current set (used to save output values)')
inputParser.add_argument('-f', '--feature', dest='feature', action='store', default="sift", type=str, help='feature to use: sift, surf, orb, brisk')
inputParser.add_argument('-m', '--match', dest='match', action='store', default="bf", type=str, help='match using: bf (bruteforce), flann')
inputParser.add_argument('-p', '--proportion', dest='proportion', action='store', default=0.25, type=float, help='Lowe\'s distance test ratio')
inputParser.add_argument('-s', '--stddev', dest='stddev', action='store', default=0.0, type=float, help='max standard deviation between angles of each point pair (stereo cheat, 0.0 don\'t; use, > 0.0 use this)')
inputParser.add_argument('-d', '--debug', action='store_true', help='debug output')
inputArgs = inputParser.parse_args()
def processInput() :
print ""
if inputArgs.left == "" or inputArgs.right == "":
print "Missing images!"
quit()
# here we go ...
# load image pair
img_l = cv2.imread(inputArgs.left)
img_r = cv2.imread(inputArgs.right)
if img_l == None or img_r == None:
print "Missing images!"
quit()
# we like them gray
gray_l = cv2.cvtColor(img_l, cv2.COLOR_BGR2GRAY)
gray_r = cv2.cvtColor(img_r, cv2.COLOR_BGR2GRAY)
# which decetor are we using
if inputArgs.feature == 'sift':
detector = cv2.SIFT()
norm = cv2.NORM_L2
elif inputArgs.feature == 'surf':
detector = cv2.SURF(800)
norm = cv2.NORM_L2
elif inputArgs.feature == 'orb':
detector = cv2.ORB(400)
norm = cv2.NORM_HAMMING
elif inputArgs.feature == 'brisk':
detector = cv2.BRISK()
norm = cv2.NORM_HAMMING
else:
print "Wrong feature detector!"
quit()
# how are we matching detected features
if inputArgs.match == 'bf':
matcher = cv2.BFMatcher(norm)
elif inputArgs.match == 'flann':
# borrowed from: https://github.com/Itseez
FLANN_INDEX_KDTREE = 1 # bug: flann enums are missing
FLANN_INDEX_LSH = 6
flann_params = []
if norm == cv2.NORM_L2:
flann_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
else:
flann_params = dict(algorithm = FLANN_INDEX_LSH,
table_number = 6, # 12
key_size = 12, # 20
multi_probe_level = 1) #2
matcher = cv2.FlannBasedMatcher(flann_params, {}) # bug : need to pass empty dict (#1329)
print "Using: " + inputArgs.feature + " with " + inputArgs.match
print ""
print "detecting ..."
# find the keypoints and descriptors
kp_l, des_l = detector.detectAndCompute(gray_l, None)
kp_r, des_r = detector.detectAndCompute(gray_r, None)
print "Left image features: " + str(len(kp_l))
print "Right image features: " + str(len(kp_l))
print ""
# visualization
if inputArgs.debug == 1:
# left
img_l_tmp = img_l.copy()
#for kp in kp_l:
# x = int(kp.pt[0])
# y = int(kp.pt[1])
# cv2.circle(img_l_tmp, (x, y), 2, (0, 0, 255))
img_l_tmp = cv2.drawKeypoints(img_l_tmp, kp_l, img_l_tmp, (0, 0, 255), cv2.DRAW_MATCHES_FLAGS_DEFAULT)
head, tail = os.path.split(inputArgs.left)
cv2.imwrite(head+"/"+"feat_"+tail, img_l_tmp)
# right
img_r_tmp = img_r.copy()
#for kp in kp_r:
# x = int(kp.pt[0])
# y = int(kp.pt[1])
# cv2.circle(img_r_tmp, (x, y), 2, (0, 0, 255))
img_r_tmp = cv2.drawKeypoints(img_r_tmp, kp_r, img_r_tmp, (0, 0, 255), cv2.DRAW_MATCHES_FLAGS_DEFAULT)
head, tail = os.path.split(inputArgs.right)
cv2.imwrite(head+"/"+"feat_"+tail, img_r_tmp)
print "matching ..."
# match
raw_matches = matcher.knnMatch(des_l, trainDescriptors = des_r, k = 2)
print "Raw matches: " + str(len(raw_matches))
# filter matches: per Lowe's ratio test
filtered_matches = []
mkp_l = []
mkp_r = []
for m in raw_matches:
if len(m) == 2 and m[0].distance < m[1].distance * inputArgs.proportion:
filtered_matches.append(m)
mkp_l.append( kp_l[m[0].queryIdx] )
mkp_r.append( kp_r[m[0].trainIdx] )
print "Filtered matches: " + str(len(filtered_matches))
# visualization
if inputArgs.debug == 1:
# draw points
img_l_tmp = cv2.drawKeypoints(img_l_tmp, mkp_l, img_l_tmp, (255, 0, 0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
head, tail = os.path.split(inputArgs.left)
#cv2.imwrite(head+"/"+"feat_"+tail, img_l_tmp)
img_r_tmp = cv2.drawKeypoints(img_r_tmp, mkp_r, img_r_tmp, (255, 0, 0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
head, tail = os.path.split(inputArgs.right)
#cv2.imwrite(head+"/"+"feat_"+tail, img_r_tmp)
# merge image side by side
h_l, w_l = img_l_tmp.shape[:2]
h_r, w_r = img_r_tmp.shape[:2]
img_tmp = np.zeros((max(h_l, h_l), w_r+w_r, 3), np.uint8)
img_tmp[:h_l, :w_l] = img_l_tmp
img_tmp[:h_r, w_l:w_l+w_r] = img_r_tmp
# draw lines
for m in filtered_matches:
cv2.line(img_tmp, (int(round(kp_l[m[0].queryIdx].pt[0])), int(round(kp_l[m[0].queryIdx].pt[1]))), (int(w_l + round(kp_r[m[0].trainIdx].pt[0])), int(round(kp_r[m[0].trainIdx].pt[1]))), (255, 0, 0), 1)
cv2.imwrite(inputArgs.name + "_features.jpg", img_tmp)
# filter matches: per direction (since it's a stereo pair, most of the points should have the same angle between them)
if inputArgs.stddev != 0.0:
ang_stddev = 360.0
stddev = 180.0
while abs(stddev) > inputArgs.stddev:
ang_stddev = stddev
raw_matches = [] # silly !!!
for m in filtered_matches: # silly !!!
raw_matches.append(m) # silly !!!
filtered_matches = []
mkp_l = []
mkp_r = []
ang = []
for m in raw_matches:
xDiff = kp_r[m[0].trainIdx].pt[0] - kp_l[m[0].queryIdx].pt[0] #p2.x - p1.x
yDiff = kp_r[m[0].trainIdx].pt[1] - kp_l[m[0].queryIdx].pt[1] #p2.y - p1.y
#print math.degrees(math.atan2(yDiff,xDiff))
ang.append(math.degrees(math.atan2(yDiff,xDiff)))
mean = np.mean(ang)
differences = [(value - mean)**2 for value in ang]
stddev = np.mean(differences) ** 0.5
#print mean
#print stddev
ang = []
for m in raw_matches:
xDiff = kp_r[m[0].trainIdx].pt[0] - kp_l[m[0].queryIdx].pt[0] #p2.x - p1.x
yDiff = kp_r[m[0].trainIdx].pt[1] - kp_l[m[0].queryIdx].pt[1] #p2.y - p1.y
ang_tmp = math.degrees(math.atan2(yDiff,xDiff))
if (mean + stddev) > (mean - stddev):
if (mean + stddev) >= ang_tmp and (mean - stddev) <= ang_tmp:
filtered_matches.append(m)
mkp_l.append( kp_l[m[0].queryIdx] )
mkp_r.append( kp_r[m[0].trainIdx] )
ang.append(math.degrees(math.atan2(yDiff,xDiff)))
else:
if (mean + stddev) <= ang_tmp and (mean - stddev) >= ang_tmp:
filtered_matches.append(m)
mkp_l.append( kp_l[m[0].queryIdx] )
mkp_r.append( kp_r[m[0].trainIdx] )
ang.append(math.degrees(math.atan2(yDiff,xDiff)))
##print np.median(ang)
mean = np.mean(ang)
differences = [(value - mean)**2 for value in ang]
stddev = np.mean(differences) ** 0.5
#print mean
#print stddev
if (abs(ang_stddev) - abs(stddev)) < 0.001:
break
print "Filtered matches cheat: " + str(len(filtered_matches))
mkp_pairs = zip(mkp_l, mkp_r)
file = open(inputArgs.name + "_kp.txt", "w")
for p in mkp_pairs:
# left x , left y ; right x , right y
file.write(str(p[0].pt[0]) + "," + str(p[0].pt[1]) + ";" + str(p[1].pt[0]) + "," + str(p[1].pt[1]) + "\n")
file.close()
# visualization
if inputArgs.debug == 1:
# draw lines
for m in filtered_matches:
cv2.line(img_tmp, (int(round(kp_l[m[0].queryIdx].pt[0])), int(round(kp_l[m[0].queryIdx].pt[1]))), (int(w_l + round(kp_r[m[0].trainIdx].pt[0])), int(round(kp_r[m[0].trainIdx].pt[1]))), (0, 255, 0), 1)
cv2.imwrite(inputArgs.name + "_features.jpg", img_tmp)
if __name__ == "__main__": # this is not a module
parseInput() # what do we have to do
processInput() # doing what we have to do
print "" # for estetic output
| 35.276423 | 212 | 0.659484 | 1,414 | 8,678 | 3.889675 | 0.193777 | 0.008364 | 0.015273 | 0.01 | 0.453273 | 0.433091 | 0.379273 | 0.353455 | 0.320545 | 0.312364 | 0 | 0.031516 | 0.173658 | 8,678 | 245 | 213 | 35.420408 | 0.735462 | 0.169279 | 0 | 0.411392 | 0 | 0 | 0.102309 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.031646 | null | null | 0.094937 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8c8584e4a637146aa64ba5592088fa5de2ab6771 | 7,747 | py | Python | examples/scripts/shapelet_transform.py | wilsonify/sktime | 68395d44bd3f46b0801c506e23e889dd54999d29 | [
"BSD-3-Clause"
] | null | null | null | examples/scripts/shapelet_transform.py | wilsonify/sktime | 68395d44bd3f46b0801c506e23e889dd54999d29 | [
"BSD-3-Clause"
] | null | null | null | examples/scripts/shapelet_transform.py | wilsonify/sktime | 68395d44bd3f46b0801c506e23e889dd54999d29 | [
"BSD-3-Clause"
] | null | null | null | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.11.1
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Shapelets and the Shapelet Transform with sktime
#
# Introduced in [1], a shapelet is a time series subsequence that is identified as being representative of class membership. Shapelets are a powerful approach for measuring _phase-independent_ similarity between time series; they can occur at any point within a series and offer _interpretable_ results for how matches occur. The original research extracted shapelets to build a decision tree classifier.
#
# The example below illustrates how leaf shape can be represented as a one-dimensional time series (blue line) to distinguish between two species.[2]
#
# <img src = "img/leaf_types.png">
# <img src = "img/verdena_shapelet.png">
#
# The highlighted red subsection of the time series (i.e., a "subsequence") above is the shapelet that distinguishes *Verbena urticifolia* from *Urtica dioica*.
#
# ## The Shapelet Transform
#
# Much research emphasis has been placed on shapelet-based approaches for time series classification (TSC) since the original research was proposed. The current state-of-the-art for shapelets is the **shapelet transform** (ST) [3, 4]. The transform improves upon the original use of shapelets by separating shapelet extraction from the classification algorithm, allowing interpretable phase-independent classification of time series with any standard classification algorithm (such as random/rotation forest, neural networks, nearest neighbour classifications, ensembles of all, etc.). To facilitate this, rather than recursively assessing data for the best shapelet, the transform evaluates candidate shapelets in a single procedure to rank them based on information gain. Then, given a set of _k_ shapelets, a time series can be transformed into _k_ features by calculating the distance from the series to each shapelet. By transforming a dataset in this manner any vector-based classification algorithm can be applied to a shapelet-transformed time series problem while the interpretability of shapelets is maintained through the ranked list of the _best_ shapelets during transformation.
#
# Shapelets can provide interpretable results, as seen in the figure below:
#
# <img src = "img/leaves_shapelets.png">
#
# The shapelet has "discovered" where the two plant species distinctly differ. *Urtica dioica* has a stem that connects to the leaf at almost 90 degrees, whereas the stem of *Verbena urticifolia* connects to the leaf at a wider angle.
#
# Having found a shapelet, its distance to the nearest matching subsequence in every object in the database can be recorded. Finally, a simple decision tree classifier can be built to determine whether an object $Q$ has a subsequence within a certain distance from shapelet $I$.
#
# <img src = "img/shapelet_classifier.png">
#
# #### References
# [1] Ye, Lexiang, and Eamonn Keogh. "Time series shapelets: a novel technique that allows accurate, interpretable and fast classification." Data mining and knowledge discovery 22, no. 1-2 (2011): 149-182.
#
# [2] Ye, Lexiang, and Eamonn Keogh. "Time series shapelets: a new primitive for data mining." In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 947-956. 2009.
#
# [3] Lines, Jason, Luke M. Davis, Jon Hills, and Anthony Bagnall. "A shapelet transform for time series classification." In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 289-297. ACM, 2012.
#
# [4] Hills, Jon, Jason Lines, Edgaras Baranauskas, James Mapp, and Anthony Bagnall. "Classification of time series by shapelet transformation." Data Mining and Knowledge Discovery 28, no. 4 (2014): 851-881.
#
# [5] Bostrom, Aaron, and Anthony Bagnall. "Binary shapelet transform for multiclass time series classification." In Transactions on Large-Scale Data-and Knowledge-Centered Systems XXXII, pp. 24-46. Springer, Berlin, Heidelberg, 2017.
#
# ## Example: The Shapelet Transform in sktime
#
# The following workbook demonstrates a full workflow of using the shapelet transform in `sktime` with a `scikit-learn` classifier with the [OSU Leaf](http://www.timeseriesclassification.com/description.php?Dataset=OSULeaf) dataset, which consists of one dimensional outlines of six leaf classes: *Acer Circinatum*, *Acer Glabrum*, *Acer Macrophyllum*, *Acer Negundo*, *Quercus Garryana* and *Quercus Kelloggii*.
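#
# To make the transform step concrete first: the sketch below computes the
# distance from a series to a shapelet as the minimum Euclidean distance over
# all sliding windows. This is a simplified illustration only (the helper name
# and demo data are made up here; it is not sktime's internal implementation).

# +
import numpy as np


def shapelet_distance_sketch(series, shapelet):
    """Minimum sliding-window Euclidean distance between a series and a shapelet."""
    length = len(shapelet)
    windows = np.array([series[i : i + length] for i in range(len(series) - length + 1)])
    return np.min(np.linalg.norm(windows - np.asarray(shapelet), axis=1))


# a series transformed by k shapelets becomes a length-k feature vector
demo_series = np.sin(np.linspace(0, 6, 60))
demo_shapelets = [demo_series[10:20], np.ones(10)]
print([shapelet_distance_sketch(demo_series, s) for s in demo_shapelets])
# -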
# + tags=[]
from sktime.datasets import load_osuleaf
from sktime.transformations.panel.shapelets import ContractedShapeletTransform
train_x, train_y = load_osuleaf(split="train", return_X_y=True)
test_x, test_y = load_osuleaf(split="test", return_X_y=True)
# + tags=[]
# How long (in minutes) to extract shapelets for.
# This is a simple lower-bound initially;
# once time is up, no further shapelets will be assessed
time_contract_in_mins = 1
# The initial number of shapelet candidates to assess per training series.
# If all series are visited and time remains on the contract then another
# pass of the data will occur
initial_num_shapelets_per_case = 10
# Whether or not to print on-going information about shapelet extraction.
# Useful for demo/debugging
verbose = 2
st = ContractedShapeletTransform(
    time_contract_in_mins=time_contract_in_mins,
    num_candidates_to_sample_per_case=initial_num_shapelets_per_case,
    verbose=verbose,
)

st.fit(train_x, train_y)
st.fit(train_x, train_y)
# + tags=[]
# %matplotlib inline
import matplotlib.pyplot as plt
# for each extracted shapelet (in descending order of quality/information gain)
for s in st.shapelets[0:5]:

    # summary info about the shapelet
    print(s)

    # plot the series that the shapelet was extracted from
    plt.plot(train_x.iloc[s.series_id, 0], "gray")

    # overlay the shapelet onto the full series
    plt.plot(
        list(range(s.start_pos, (s.start_pos + s.length))),
        train_x.iloc[s.series_id, 0][s.start_pos : s.start_pos + s.length],
        "r",
        linewidth=3.0,
    )

    plt.show()
# + tags=[]
# for each extracted shapelet (in descending order of quality/information gain)
for i in range(0, min(len(st.shapelets), 5)):
    s = st.shapelets[i]

    # summary info about the shapelet
    print("#" + str(i) + ": " + str(s))

    # overlay shapelets
    plt.plot(
        list(range(s.start_pos, (s.start_pos + s.length))),
        train_x.iloc[s.series_id, 0][s.start_pos : s.start_pos + s.length],
    )

plt.show()
# + tags=[]
import time
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sktime.datasets import load_osuleaf
train_x, train_y = load_osuleaf(split="train", return_X_y=True)
test_x, test_y = load_osuleaf(split="test", return_X_y=True)
# example pipeline with 1 minute time limit
pipeline = Pipeline(
    [
        (
            "st",
            ContractedShapeletTransform(
                time_contract_in_mins=time_contract_in_mins,
                num_candidates_to_sample_per_case=10,
                verbose=False,
            ),
        ),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ]
)
start = time.time()
pipeline.fit(train_x, train_y)
end_build = time.time()
preds = pipeline.predict(test_x)
end_test = time.time()
print("Results:")
print("Correct:")
correct = sum(preds == test_y)
print("\t" + str(correct) + "/" + str(len(test_y)))
print("\t" + str(correct / len(test_y)))
print("\nTiming:")
print("\tTo build: " + str(end_build - start) + " secs")
print("\tTo predict: " + str(end_test - end_build) + " secs")
# -
| 46.668675 | 1,192 | 0.741448 | 1,112 | 7,747 | 5.073741 | 0.368705 | 0.023041 | 0.012761 | 0.014179 | 0.234314 | 0.19213 | 0.160936 | 0.157391 | 0.157391 | 0.142148 | 0 | 0.015104 | 0.171034 | 7,747 | 165 | 1,193 | 46.951515 | 0.863438 | 0.694462 | 0 | 0.28125 | 0 | 0 | 0.04302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.109375 | 0 | 0.109375 | 0.140625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8c87ba42d9c4fcf728e4c4af9e88b377f8f22da4 | 3,405 | py | Python | nodepool/tests/test_shade_integration.py | Juniper/nodepool | 3de0b20faa6f90437eb62c56ec23c59c6e15b5fa | [
"Apache-2.0"
] | null | null | null | nodepool/tests/test_shade_integration.py | Juniper/nodepool | 3de0b20faa6f90437eb62c56ec23c59c6e15b5fa | [
"Apache-2.0"
] | null | null | null | nodepool/tests/test_shade_integration.py | Juniper/nodepool | 3de0b20faa6f90437eb62c56ec23c59c6e15b5fa | [
"Apache-2.0"
] | 2 | 2019-03-19T12:34:13.000Z | 2020-02-05T10:31:48.000Z | # Copyright (C) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import fixtures
import voluptuous
import yaml
from nodepool import config as nodepool_config
from nodepool import provider_manager
from nodepool import tests
class TestShadeIntegration(tests.IntegrationTestCase):
    def _cleanup_cloud_config(self):
        os.remove(self.clouds_path)

    def _use_cloud_config(self, config):
        config_dir = fixtures.TempDir()
        self.useFixture(config_dir)
        self.clouds_path = os.path.join(config_dir.path, 'clouds.yaml')
        self.useFixture(fixtures.MonkeyPatch(
            'os_client_config.config.CONFIG_FILES',
            [self.clouds_path]))
        with open(self.clouds_path, 'w') as h:
            yaml.safe_dump(config, h)
        self.addCleanup(self._cleanup_cloud_config)

    def test_nodepool_provider_config_bad(self):
        # nodepool doesn't support clouds.yaml-less config anymore
        # Assert that we get a nodepool error and not an os-client-config
        # error.
        self.assertRaises(
            voluptuous.MultipleInvalid,
            self.setup_config, 'integration_noocc.yaml')

    def test_nodepool_occ_config(self):
        configfile = self.setup_config('integration_occ.yaml')
        auth_data = {'username': 'os_real',
                     'project_name': 'os_real',
                     'password': 'os_real',
                     'auth_url': 'os_real'}
        occ_config = {'clouds': {'real-cloud': {'auth': auth_data}}}
        self._use_cloud_config(occ_config)

        config = nodepool_config.loadConfig(configfile)
        self.assertIn('real-provider', config.providers)
        pm = provider_manager.get_provider(
            config.providers['real-provider'], use_taskmanager=False)
        pm.start()
        self.assertEqual(pm._client.auth, auth_data)

    def test_nodepool_occ_config_reload(self):
        configfile = self.setup_config('integration_occ.yaml')
        auth_data = {'username': 'os_real',
                     'project_name': 'os_real',
                     'password': 'os_real',
                     'auth_url': 'os_real'}
        occ_config = {'clouds': {'real-cloud': {'auth': auth_data}}}
        self._use_cloud_config(occ_config)

        pool = self.useNodepool(configfile, watermark_sleep=1)
        pool.updateConfig()
        provider_manager = pool.config.provider_managers['real-provider']
        self.assertEqual(provider_manager._client.auth, auth_data)

        # update the config
        auth_data['password'] = 'os_new_real'
        os.remove(self.clouds_path)
        with open(self.clouds_path, 'w') as h:
            yaml.safe_dump(occ_config, h)

        pool.updateConfig()
        provider_manager = pool.config.provider_managers['real-provider']
        self.assertEqual(provider_manager._client.auth, auth_data)
| 37.833333 | 75 | 0.669016 | 420 | 3,405 | 5.211905 | 0.35 | 0.029237 | 0.038374 | 0.035633 | 0.350845 | 0.315212 | 0.315212 | 0.315212 | 0.315212 | 0.315212 | 0 | 0.003444 | 0.232599 | 3,405 | 89 | 76 | 38.258427 | 0.83429 | 0.214097 | 0 | 0.421053 | 0 | 0 | 0.131678 | 0.021821 | 0 | 0 | 0 | 0 | 0.087719 | 1 | 0.087719 | false | 0.052632 | 0.122807 | 0 | 0.22807 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8c89c45428c0846d584df54eae048d3cd2046b39 | 1,944 | py | Python | src/pybbox/bboxApiURL.py | BrunoMalard/pybbox | 08a4437fcaa5ab6034cdad2822abf2b7e812ab6e | [
"MIT"
] | null | null | null | src/pybbox/bboxApiURL.py | BrunoMalard/pybbox | 08a4437fcaa5ab6034cdad2822abf2b7e812ab6e | [
"MIT"
] | null | null | null | src/pybbox/bboxApiURL.py | BrunoMalard/pybbox | 08a4437fcaa5ab6034cdad2822abf2b7e812ab6e | [
"MIT"
] | null | null | null | from .bboxConstant import BboxConstant
import netaddr as net
import socket
class BboxAPIUrl:
"""
Used to handle API url
"""
API_PREFIX = "api/v1"
def __init__(self, api_class, api_method, ip=BboxConstant.DEFAULT_LOCAL_IP):
"""
:param api_class: string
:param api_method: string
:param ip: string
:return:
"""
self.api_class = api_class
self.api_method = api_method
self.ip = ip
self.authentication_type = None
self.url = None
self.build_url_request()
def get_api_class(self):
return self.api_class
def get_api_method(self):
return self.api_method
def get_ip(self):
return self.ip
def get_url(self):
return self.url
def get_authentication_type(self):
return self.authentication_type
def set_api_name(self, api_class, api_method):
self.api_class = api_class
self.api_method = api_method
self.build_url_request()
def build_url_request(self):
"""
Build the url to use for making a call to the Bbox API
:return: url string
"""
# Check if the ip is LAN or WAN
if net.IPAddress(socket.gethostbyname(self.ip)).is_private():
url = BboxConstant.DEFAULT_BBOX_URL
self.authentication_type = BboxConstant.AUTHENTICATION_TYPE_LOCAL
else:
url = "https://{}:{}".format(self.ip,
BboxConstant.DEFAULT_REMOTE_PORT)
self.authentication_type = BboxConstant.AUTHENTICATION_TYPE_REMOTE
if self.api_class is None:
url = "{}/{}".format(url, self.API_PREFIX)
else:
url = "{}/{}/{}".format(url, self.API_PREFIX, self.api_class)
if self.api_method is None:
self.url = url
else:
self.url = "{}/{}".format(url, self.api_method)
| 27.771429 | 80 | 0.596708 | 239 | 1,944 | 4.610879 | 0.230126 | 0.088929 | 0.076225 | 0.054446 | 0.314882 | 0.22323 | 0.083485 | 0.083485 | 0.083485 | 0.083485 | 0 | 0.000746 | 0.310185 | 1,944 | 69 | 81 | 28.173913 | 0.821029 | 0.105967 | 0 | 0.214286 | 0 | 0 | 0.022506 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.190476 | false | 0 | 0.071429 | 0.119048 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
8c91071f0ddeb89b2d1a133336badc4c01678885 | 2,550 | py | Python | azure-keyvault/azure/keyvault/key_vault_authentication.py | azuresdkci1x/azure-sdk-for-python-1722 | e08fa6606543ce0f35b93133dbb78490f8e6bcc9 | [
"MIT"
] | 1 | 2018-11-09T06:16:34.000Z | 2018-11-09T06:16:34.000Z | azure-keyvault/azure/keyvault/key_vault_authentication.py | azuresdkci1x/azure-sdk-for-python-1722 | e08fa6606543ce0f35b93133dbb78490f8e6bcc9 | [
"MIT"
] | null | null | null | azure-keyvault/azure/keyvault/key_vault_authentication.py | azuresdkci1x/azure-sdk-for-python-1722 | e08fa6606543ce0f35b93133dbb78490f8e6bcc9 | [
"MIT"
] | 1 | 2018-11-09T06:17:41.000Z | 2018-11-09T06:17:41.000Z | #---------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
#---------------------------------------------------------------------------------------------
import urllib
from requests.auth import AuthBase
import requests
from msrest.authentication import Authentication
from . import HttpBearerChallenge
from . import HttpBearerChallengeCache as ChallengeCache
class KeyVaultAuthBase(AuthBase):

    def __init__(self, authorization_callback):
        self._callback = authorization_callback
        self._token = None

    def __call__(self, request):
        # attempt to pre-fetch challenge if cached
        if self._callback:
            challenge = ChallengeCache.get_challenge_for_url(request.url)
            if challenge:
                # if challenge cached, retrieve token and update the request
                self.set_authorization_header(request, challenge)
            else:
                # send the request to retrieve the challenge, then request the token and update
                # the request
                # TODO: wire up commons flag for things like Fiddler, logging, etc.
                response = requests.Session().send(request)
                if response.status_code == 401:
                    auth_header = response.headers['WWW-Authenticate']
                    if HttpBearerChallenge.is_bearer_challenge(auth_header):
                        challenge = HttpBearerChallenge(response.request.url, auth_header)
                        ChallengeCache.set_challenge_for_url(response.request.url, challenge)
                        self.set_authorization_header(request, challenge)
        return request

    def set_authorization_header(self, request, challenge):
        auth = self._callback(
            challenge.get_authorization_server(),
            challenge.get_resource(),
            challenge.get_scope())
        request.headers['Authorization'] = '{} {}'.format(auth[0], auth[1])


class KeyVaultAuthentication(Authentication):

    def __init__(self, authorization_callback):
        super(KeyVaultAuthentication, self).__init__()
        self.auth = KeyVaultAuthBase(authorization_callback)
        self._callback = authorization_callback

    def signed_session(self):
        session = super(KeyVaultAuthentication, self).signed_session()
        session.auth = self.auth
        return session
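
# A usage sketch: the authorization callback receives the authorization
# server, resource and scope from the bearer challenge and must return a
# (token_type, access_token) pair. The ADAL-based callback below is one
# plausible way to obtain credentials, not something this module mandates,
# and the client id/secret values are placeholders:
#
#     import adal
#
#     def auth_callback(server, resource, scope):
#         ctx = adal.AuthenticationContext(server)
#         token = ctx.acquire_token_with_client_credentials(
#             resource, 'my-client-id', 'my-client-secret')
#         return token['tokenType'], token['accessToken']
#
#     credentials = KeyVaultAuthentication(auth_callback)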
| 43.220339 | 95 | 0.620784 | 237 | 2,550 | 6.468354 | 0.375527 | 0.068493 | 0.048924 | 0.031311 | 0.184605 | 0.125245 | 0 | 0 | 0 | 0 | 0 | 0.002585 | 0.241569 | 2,550 | 58 | 96 | 43.965517 | 0.790072 | 0.232549 | 0 | 0.153846 | 0 | 0 | 0.017454 | 0 | 0 | 0 | 0 | 0.017241 | 0 | 1 | 0.128205 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8c99299cba6a52cf3dabace97db1a9fc2ccbe36e | 1,969 | py | Python | test/territories_test.py | AColocho/us-states | 1b97d9d4d1784ad6e297ffe1df2927eab3e86546 | [
"BSL-1.0"
] | 3 | 2020-12-31T02:08:01.000Z | 2022-01-11T07:21:22.000Z | test/territories_test.py | AColocho/us-states | 1b97d9d4d1784ad6e297ffe1df2927eab3e86546 | [
"BSL-1.0"
] | 14 | 2020-12-23T00:06:20.000Z | 2022-01-10T16:43:43.000Z | test/territories_test.py | AColocho/us-states | 1b97d9d4d1784ad6e297ffe1df2927eab3e86546 | [
"BSL-1.0"
] | 3 | 2020-12-28T08:47:00.000Z | 2022-01-10T15:27:01.000Z | from states import Territories_Abbreviated
from states import Territories_Full_Name
from states import Uninhabited_Territories
from states import Associated_States
from states import Territories
import unittest
# Note: unittest's default discovery only collects methods whose names start
# with "test", so the checks below are named accordingly.
class Abbreviated_Test(unittest.TestCase):
    def test_length(self):
        """
        Checks there are only 5 territories.
        """
        all_territories = Territories_Abbreviated().all_territories
        self.assertEqual(5, len(all_territories), 'There are either fewer or more than 5 territories recognized.')


class Full_Name_Test(unittest.TestCase):
    def test_length(self):
        """
        Checks there are only 5 territories.
        """
        all_territories = Territories_Full_Name().all_territories
        self.assertEqual(5, len(all_territories), 'There are either fewer or more than 5 territories recognized.')


class Uninhabited_Territories_Test(unittest.TestCase):
    def test_length(self):
        """
        Checks there are only 7 territories.
        """
        all_territories = Uninhabited_Territories().all_territories
        self.assertEqual(7, len(all_territories), 'There are either fewer or more than 7 territories recognized.')


class Associated_States_Test(unittest.TestCase):
    def test_length(self):
        """
        Checks there are only 3 territories.
        """
        all_territories = Associated_States().pacific_abbreviated
        self.assertEqual(3, len(all_territories), 'There are either fewer or more than 3 territories recognized.')


class Territories_Test(unittest.TestCase):
    def test_search_returns_info(self):
        """
        Checks that the search function returns the correct value.
        """
        PR = Territories().get_territory_info('Puerto Rico')
        full_name = PR['full_name']
        self.assertTrue(full_name == 'Puerto Rico', 'Search function is not retrieving the right value. Used Puerto Rico as a test.')


if __name__ == "__main__":
    unittest.main()
| 39.38 | 133 | 0.705942 | 236 | 1,969 | 5.677966 | 0.237288 | 0.114925 | 0.059701 | 0.085821 | 0.473134 | 0.473134 | 0.435821 | 0.435821 | 0.435821 | 0.435821 | 0 | 0.007747 | 0.213306 | 1,969 | 50 | 134 | 39.38 | 0.857327 | 0.10259 | 0 | 0.206897 | 0 | 0 | 0.21368 | 0 | 0 | 0 | 0 | 0 | 0.172414 | 1 | 0.172414 | false | 0 | 0.206897 | 0 | 0.551724 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
8ca1def1e3fb9c5420539abd418d03fb101d97fb | 16,206 | py | Python | lib/jnpr/healthbot/swagger/models/hb_graphs_query.py | Juniper/healthbot-py-client | 49f0884b5d01ac8430aa7ed4c9acb4e7a2b717a6 | [
"Apache-2.0"
] | 10 | 2019-10-23T12:54:37.000Z | 2022-02-07T19:24:30.000Z | docs/jnpr_healthbot_swagger/swagger_client/models/hb_graphs_query.py | Juniper/healthbot-py-client | 49f0884b5d01ac8430aa7ed4c9acb4e7a2b717a6 | [
"Apache-2.0"
] | 5 | 2019-09-30T04:29:25.000Z | 2022-02-16T12:21:06.000Z | docs/jnpr_healthbot_swagger/swagger_client/models/hb_graphs_query.py | Juniper/healthbot-py-client | 49f0884b5d01ac8430aa7ed4c9acb4e7a2b717a6 | [
"Apache-2.0"
] | 4 | 2019-09-30T01:17:48.000Z | 2020-08-25T07:27:54.000Z | # coding: utf-8
"""
Paragon Insights APIs
API interface for PI application # noqa: E501
OpenAPI spec version: 4.0.0
Contact: healthbot-feedback@juniper.net
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class HbGraphsQuery(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'group_name': 'str',
'group_type': 'str',
'device_name': 'str',
'measurement_name': 'str',
'measurement_type': 'str',
'transformation': 'str',
'field_name': 'str',
'field_type': 'str',
'field_aggregation': 'str',
'where': 'HbGraphsQueryWhere',
'group_by_interval': 'str',
'group_by_fill': 'str',
'group_by_tag_key': 'str',
'retention_policy': 'str'
}
attribute_map = {
'group_name': 'group_name',
'group_type': 'group_type',
'device_name': 'device_name',
'measurement_name': 'measurement_name',
'measurement_type': 'measurement_type',
'transformation': 'transformation',
'field_name': 'field_name',
'field_type': 'field_type',
'field_aggregation': 'field_aggregation',
'where': 'where',
'group_by_interval': 'group_by_interval',
'group_by_fill': 'group_by_fill',
'group_by_tag_key': 'group_by_tag_key',
'retention_policy': 'retention_policy'
}
def __init__(self, group_name=None, group_type=None, device_name=None, measurement_name=None, measurement_type=None, transformation=None, field_name=None, field_type=None, field_aggregation=None, where=None, group_by_interval=None, group_by_fill=None, group_by_tag_key=None, retention_policy=None): # noqa: E501
"""HbGraphsQuery - a model defined in Swagger""" # noqa: E501
self._group_name = None
self._group_type = None
self._device_name = None
self._measurement_name = None
self._measurement_type = None
self._transformation = None
self._field_name = None
self._field_type = None
self._field_aggregation = None
self._where = None
self._group_by_interval = None
self._group_by_fill = None
self._group_by_tag_key = None
self._retention_policy = None
self.discriminator = None
if group_name is not None:
self.group_name = group_name
if group_type is not None:
self.group_type = group_type
if device_name is not None:
self.device_name = device_name
if measurement_name is not None:
self.measurement_name = measurement_name
if measurement_type is not None:
self.measurement_type = measurement_type
if transformation is not None:
self.transformation = transformation
if field_name is not None:
self.field_name = field_name
if field_type is not None:
self.field_type = field_type
if field_aggregation is not None:
self.field_aggregation = field_aggregation
if where is not None:
self.where = where
if group_by_interval is not None:
self.group_by_interval = group_by_interval
if group_by_fill is not None:
self.group_by_fill = group_by_fill
if group_by_tag_key is not None:
self.group_by_tag_key = group_by_tag_key
if retention_policy is not None:
self.retention_policy = retention_policy
@property
def group_name(self):
"""Gets the group_name of this HbGraphsQuery. # noqa: E501
Device/Network group name # noqa: E501
:return: The group_name of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._group_name
@group_name.setter
def group_name(self, group_name):
"""Sets the group_name of this HbGraphsQuery.
Device/Network group name # noqa: E501
:param group_name: The group_name of this HbGraphsQuery. # noqa: E501
:type: str
"""
self._group_name = group_name
@property
def group_type(self):
"""Gets the group_type of this HbGraphsQuery. # noqa: E501
Device/Network group type # noqa: E501
:return: The group_type of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._group_type
@group_type.setter
def group_type(self, group_type):
"""Sets the group_type of this HbGraphsQuery.
Device/Network group type # noqa: E501
:param group_type: The group_type of this HbGraphsQuery. # noqa: E501
:type: str
"""
allowed_values = ["device", "network"] # noqa: E501
if group_type not in allowed_values:
raise ValueError(
"Invalid value for `group_type` ({0}), must be one of {1}" # noqa: E501
.format(group_type, allowed_values)
)
self._group_type = group_type
@property
def device_name(self):
"""Gets the device_name of this HbGraphsQuery. # noqa: E501
label name # noqa: E501
:return: The device_name of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._device_name
@device_name.setter
def device_name(self, device_name):
"""Sets the device_name of this HbGraphsQuery.
label name # noqa: E501
:param device_name: The device_name of this HbGraphsQuery. # noqa: E501
:type: str
"""
self._device_name = device_name
@property
def measurement_name(self):
"""Gets the measurement_name of this HbGraphsQuery. # noqa: E501
Measurement name (topic/rule name) # noqa: E501
:return: The measurement_name of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._measurement_name
@measurement_name.setter
def measurement_name(self, measurement_name):
"""Sets the measurement_name of this HbGraphsQuery.
Measurement name (topic/rule name) # noqa: E501
:param measurement_name: The measurement_name of this HbGraphsQuery. # noqa: E501
:type: str
"""
self._measurement_name = measurement_name
@property
def measurement_type(self):
"""Gets the measurement_type of this HbGraphsQuery. # noqa: E501
Measurement type: Field table/Trigger table/Rollup table # noqa: E501
:return: The measurement_type of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._measurement_type
@measurement_type.setter
def measurement_type(self, measurement_type):
"""Sets the measurement_type of this HbGraphsQuery.
Measurement type: Field table/Trigger table/Rollup table # noqa: E501
:param measurement_type: The measurement_type of this HbGraphsQuery. # noqa: E501
:type: str
"""
allowed_values = ["Field table", "Trigger table", "Rollup table"] # noqa: E501
if measurement_type not in allowed_values:
raise ValueError(
"Invalid value for `measurement_type` ({0}), must be one of {1}" # noqa: E501
.format(measurement_type, allowed_values)
)
self._measurement_type = measurement_type
@property
def transformation(self):
"""Gets the transformation of this HbGraphsQuery. # noqa: E501
Transformation value for query # noqa: E501
:return: The transformation of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._transformation
@transformation.setter
def transformation(self, transformation):
"""Sets the transformation of this HbGraphsQuery.
Transformation value for query # noqa: E501
:param transformation: The transformation of this HbGraphsQuery. # noqa: E501
:type: str
"""
allowed_values = ["derivative", "spread", "non-negative-derivative", "difference", "cumulative-sum", "elapsed"] # noqa: E501
if transformation not in allowed_values:
raise ValueError(
"Invalid value for `transformation` ({0}), must be one of {1}" # noqa: E501
.format(transformation, allowed_values)
)
self._transformation = transformation
@property
def field_name(self):
"""Gets the field_name of this HbGraphsQuery. # noqa: E501
Field name of a measurement # noqa: E501
:return: The field_name of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._field_name
@field_name.setter
def field_name(self, field_name):
"""Sets the field_name of this HbGraphsQuery.
Field name of a measurement # noqa: E501
:param field_name: The field_name of this HbGraphsQuery. # noqa: E501
:type: str
"""
self._field_name = field_name
@property
def field_type(self):
"""Gets the field_type of this HbGraphsQuery. # noqa: E501
Field type of the measurement (int, float, string, uint) # noqa: E501
:return: The field_type of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._field_type
@field_type.setter
def field_type(self, field_type):
"""Sets the field_type of this HbGraphsQuery.
Field type of the measurement (int, float, string, uint) # noqa: E501
:param field_type: The field_type of this HbGraphsQuery. # noqa: E501
:type: str
"""
self._field_type = field_type
@property
def field_aggregation(self):
"""Gets the field_aggregation of this HbGraphsQuery. # noqa: E501
Data aggregation type of the field/key # noqa: E501
:return: The field_aggregation of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._field_aggregation
@field_aggregation.setter
def field_aggregation(self, field_aggregation):
"""Sets the field_aggregation of this HbGraphsQuery.
Data aggregation type of the field/key # noqa: E501
:param field_aggregation: The field_aggregation of this HbGraphsQuery. # noqa: E501
:type: str
"""
allowed_values = ["mean", "mode", "median", "count", "sum", "integral", "distinct"] # noqa: E501
if field_aggregation not in allowed_values:
raise ValueError(
"Invalid value for `field_aggregation` ({0}), must be one of {1}" # noqa: E501
.format(field_aggregation, allowed_values)
)
self._field_aggregation = field_aggregation
@property
def where(self):
"""Gets the where of this HbGraphsQuery. # noqa: E501
:return: The where of this HbGraphsQuery. # noqa: E501
:rtype: HbGraphsQueryWhere
"""
return self._where
@where.setter
def where(self, where):
"""Sets the where of this HbGraphsQuery.
:param where: The where of this HbGraphsQuery. # noqa: E501
:type: HbGraphsQueryWhere
"""
self._where = where
@property
def group_by_interval(self):
"""Gets the group_by_interval of this HbGraphsQuery. # noqa: E501
Group by interval of the query # noqa: E501
:return: The group_by_interval of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._group_by_interval
@group_by_interval.setter
def group_by_interval(self, group_by_interval):
"""Sets the group_by_interval of this HbGraphsQuery.
Group by interval of the query # noqa: E501
:param group_by_interval: The group_by_interval of this HbGraphsQuery. # noqa: E501
:type: str
"""
self._group_by_interval = group_by_interval
@property
def group_by_fill(self):
"""Gets the group_by_fill of this HbGraphsQuery. # noqa: E501
Group by fill value of the query # noqa: E501
:return: The group_by_fill of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._group_by_fill
@group_by_fill.setter
def group_by_fill(self, group_by_fill):
"""Sets the group_by_fill of this HbGraphsQuery.
Group by fill value of the query # noqa: E501
:param group_by_fill: The group_by_fill of this HbGraphsQuery. # noqa: E501
:type: str
"""
allowed_values = ["fill(null)", "none"] # noqa: E501
if group_by_fill not in allowed_values:
raise ValueError(
"Invalid value for `group_by_fill` ({0}), must be one of {1}" # noqa: E501
.format(group_by_fill, allowed_values)
)
self._group_by_fill = group_by_fill
@property
def group_by_tag_key(self):
"""Gets the group_by_tag_key of this HbGraphsQuery. # noqa: E501
Group by tag key value of the query # noqa: E501
:return: The group_by_tag_key of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._group_by_tag_key
@group_by_tag_key.setter
def group_by_tag_key(self, group_by_tag_key):
"""Sets the group_by_tag_key of this HbGraphsQuery.
Group by tag key value of the query # noqa: E501
:param group_by_tag_key: The group_by_tag_key of this HbGraphsQuery. # noqa: E501
:type: str
"""
self._group_by_tag_key = group_by_tag_key
@property
def retention_policy(self):
"""Gets the retention_policy of this HbGraphsQuery. # noqa: E501
Retention policy name # noqa: E501
:return: The retention_policy of this HbGraphsQuery. # noqa: E501
:rtype: str
"""
return self._retention_policy
@retention_policy.setter
def retention_policy(self, retention_policy):
"""Sets the retention_policy of this HbGraphsQuery.
Retention policy name # noqa: E501
:param retention_policy: The retention_policy of this HbGraphsQuery. # noqa: E501
:type: str
"""
self._retention_policy = retention_policy
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(HbGraphsQuery, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, HbGraphsQuery):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
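
# A construction sketch using the kwargs accepted by __init__ above (the
# values shown are illustrative; note that group_type must be one of
# "device"/"network", measurement_type one of the three table types, and
# group_by_fill one of "fill(null)"/"none"):
#
#     q = HbGraphsQuery(group_name='core-routers', group_type='device',
#                       measurement_type='Field table',
#                       field_aggregation='mean',
#                       group_by_interval='5m', group_by_fill='none')
#     print(q.to_dict())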
| 31.776471 | 316 | 0.614217 | 1,931 | 16,206 | 4.931124 | 0.08493 | 0.068053 | 0.111741 | 0.101449 | 0.595778 | 0.478051 | 0.414199 | 0.358118 | 0.263495 | 0.16026 | 0 | 0.023215 | 0.300938 | 16,206 | 509 | 317 | 31.8389 | 0.817283 | 0.357584 | 0 | 0.092511 | 1 | 0 | 0.121158 | 0.002552 | 0 | 0 | 0 | 0 | 0 | 1 | 0.14978 | false | 0 | 0.013216 | 0 | 0.264317 | 0.008811 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8ca571729a1d490bef3f077d7c0408ec944e7fe6 | 24,209 | py | Python | src/stitcher.py | ahelsing/geni-tools | 0799f715ad88a1a11b069e8208f012abef23cbd8 | [
"MIT"
] | 3 | 2015-05-05T13:03:27.000Z | 2021-11-17T11:16:38.000Z | src/stitcher.py | ahelsing/geni-tools | 0799f715ad88a1a11b069e8208f012abef23cbd8 | [
"MIT"
] | null | null | null | src/stitcher.py | ahelsing/geni-tools | 0799f715ad88a1a11b069e8208f012abef23cbd8 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
#----------------------------------------------------------------------
# Copyright (c) 2013-2016 Raytheon BBN Technologies
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and/or hardware specification (the "Work") to
# deal in the Work without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Work, and to permit persons to whom the Work
# is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Work.
#
# THE WORK IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE WORK OR THE USE OR OTHER DEALINGS
# IN THE WORK.
#----------------------------------------------------------------------
'''Stitching client: Call the Stitching Computation Service to expand a single request RSpec.
Then use Omni to allocate / createsliver reservations at all necessary aggregates. Return
the combined manifest RSpec.'''
# Call this just like omni:
# $ python ./src/stitcher.py -o createsliver <valid slice name> <path to RSpec file>
# (assuming a valid omni_config in the usual spots)
# 'createsliver' or 'allocate' commands with an RSpec that requires stitching will be processed
# by the stitcher code.
# All other calls will be passed directly to Omni.
# All calls are APIv2 (hard-coded) currently.
# Input request RSpec does _not_ need a stitching extension, but should
# be a single RSpec for all resources that you want in your slice.
# To create a request that needs stitching, include at least 1 <link> element with
# more than 1 different <component_manager> elements (and no
# shared_vlan element or link_type other than VLAN)
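# For example (schematic only; the URNs and client_ids here are made up), a
# link such as the following spans two aggregates and so will be stitched:
#   <link client_id="link-am1-am2">
#     <component_manager name="urn:publicid:IDN+am1.example.net+authority+cm"/>
#     <component_manager name="urn:publicid:IDN+am2.example.net+authority+cm"/>
#     <interface_ref client_id="node1:if0"/>
#     <interface_ref client_id="node2:if0"/>
#   </link>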
# Selected known issues / todos
# - Thread calls to omni
# - Support AM API v3
# - Consolidate constants
# - Fully handle a VLAN_UNAVAILABLE error from an AM
# - Fully handle negotiating among AMs for a VLAN tag to use
# As in when the returned suggestedVLANRange is not what was requested
# - fakeMode is incomplete
# - Tune counters, sleep durations, etc
# - Return a struct with detailed results (not just comments in manifest)
# - Return a struct on errors
# - Get AM URLs from the Clearinghouse
# - Use Authentication with the SCS
# - Support Stitching schema v2
# - Time out omni calls in case an AM hangs
# - opts.warn is used to suppress omni output. Clean that up. A scriptMode option?
# - Implement confirmSafeRequest to ensure no dangerous requests are made
# - Handle known EG error messages
# - Loop checking to see if EG sliverstatus says success or failure
import json
import logging
import logging.handlers
import optparse
import os
import sys
import gcf.oscript as omni
from gcf.omnilib.util import OmniError, AMAPIError
from gcf.omnilib.stitchhandler import StitchingHandler
from gcf.omnilib.stitch.utils import StitchingError, prependFilePrefix
from gcf.omnilib.stitch.objects import Aggregate
import gcf.omnilib.stitch.objects
#from gcf.omnilib.stitch.objects import DCN_AM_RETRY_INTERVAL_SECS as DCN_AM_RETRY_INTERVAL_SECS
# URL of the SCS service
SCS_URL = "https://geni-scs.net.internet2.edu:8443/geni/xmlrpc"
DEFAULT_CAPACITY = 20000 # in Kbps
# Call is the way another script might call this.
# It initializes the logger, options, config (using omni functions),
# and then dispatches to the stitch handler
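# For example, a hypothetical wrapper script could invoke it as:
#   import stitcher
#   stitcher.call(['-o', 'createsliver', 'myslice', 'stitch-request.rspec'])
# (the argument list and file names above are illustrative only)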
def call(argv, options=None):
if options is not None and not options.__class__==optparse.Values:
raise OmniError("Invalid options argument to call: must be an optparse.Values object")
if argv is None or not type(argv) == list:
raise OmniError("Invalid argv argument to call: must be a list")
##############################################################################
# Get a parser from omni that understands omni options
##############################################################################
parser = omni.getParser()
# update usage for help message
omni_usage = parser.get_usage()
parser.set_usage("\n" + "GENI Omni Stitching Tool\n" + "Copyright (c) 2013-2016 Raytheon BBN Technologies\n" +
omni_usage+
"\nstitcher.py reserves multi-aggregate fully bound topologies, including stitching, if the call is createsliver or allocate; else it just calls Omni.\n")
##############################################################################
# Add additional optparse.OptionParser style options
# Be sure not to re-use options already in use by omni for
# different meanings, otherwise you'll raise an OptionConflictError
##############################################################################
parser.add_option("--defaultCapacity", default=DEFAULT_CAPACITY,
type="int", help="Default stitched link capacity in Kbps - default is 20000 meaning ~20Mbps")
parser.add_option("--excludehop", metavar="HOP_EXCLUDE", action="append",
help="Hop URN to exclude from any path")
parser.add_option("--includehop", metavar="HOP_INCLUDE", action="append",
help="Hop URN to include on every path - use with caution")
parser.add_option("--includehoponpath", metavar="HOP_INCLUDE PATH", action="append", nargs=2,
help="Hop URN to include and then path (link client_id) to include it on")
parser.add_option("--fixedEndpoint", default=False, action="store_true",
help="RSpec uses a static endpoint - add a fake node with an interface on every link")
parser.add_option("--noExoSM", default=False, action="store_true",
help="Always use local ExoGENI racks, not the ExoSM, where possible (default %default)")
parser.add_option("--useExoSM", default=False, action="store_true",
help="Always use the ExoGENI ExoSM, not the individual EG racks, where possible (default %default)")
parser.add_option("--fileDir", default=None,
help="Directory for all output files generated. By default some files go in /tmp, some in the CWD, some in ~/.gcf.")
parser.add_option("--logFileCount", default=5, type="int",
help="Number of backup log files to keep, Default %default")
parser.add_option("--ionRetryIntervalSecs", type="int",
help="Seconds to sleep before retrying at DCN aggregates (default: %default)",
default=gcf.omnilib.stitch.objects.DCN_AM_RETRY_INTERVAL_SECS)
parser.add_option("--ionStatusIntervalSecs", type="int",
help="Seconds to sleep between sliverstatus calls at DCN aggregates (default %default)",
default=30)
parser.add_option("--noReservation", default=False, action="store_true",
help="Do no reservations: just generate the expanded request RSpec (default %default)")
parser.add_option("--scsURL",
help="URL to the SCS service. Default: Value of 'scs_url' in omni_config or " + SCS_URL,
default=None)
parser.add_option("--timeout", default=0, type="int",
help="Max minutes to allow stitcher to run before killing a reservation attempt (default %default minutes, 0 means no timeout).")
parser.add_option("--noAvailCheck", default=False, action="store_true",
help="Disable checking current VLAN availability where possible.")
parser.add_option("--genRequest", default=False, action="store_true",
help="Generate and save an expanded request RSpec, but do no reservation.")
parser.add_option("--noDeleteAtEnd", default=False, action="store_true",
help="On failure or Ctrl-C do not delete any reservations completed at some aggregates (default %default).")
parser.add_option("--noTransitAMs", default=False, action="store_true",
help="Do not reserve resources at intermediate / transit aggregates; allow experimenter to manually complete the circuit (default %default).")
parser.add_option("--noSCS", default=False, action="store_true",
help="Do not call the SCS to expand or add a stitching extension. Use this only if supplying any needed stitching extension and the SCS would fail your request. (default %default).")
parser.add_option("--fakeModeDir",
help="Developers only: If supplied, use canned server responses from this directory",
default=None)
parser.add_option("--savedSCSResults", default=None,
help="Developers only: Use this saved file of SCS results instead of calling SCS (saved previously using --debug)")
parser.add_option("--useSCSSugg", default=False, action="store_true",
help="Developers only: Always use the VLAN tags the SCS suggests, not 'any'.")
parser.add_option("--noEGStitching", default=False, action="store_true",
help="Developers only: Use GENI stitching, not ExoGENI stitching.")
parser.add_option("--noEGStitchingOnLink", metavar="LINK_ID", action="append",
help="Developers only: Use GENI stitching on this particular link only, not ExoGENI stitching.")
# parser.add_option("--script",
# help="If supplied, a script is calling this",
# action="store_true", default=False)
# Put our logs in a different file by default
parser.set_defaults(logoutput='stitcher.log')
# Configure stitcher with a specific set of configs by default
# First, set the default logging config file
lcfile = os.path.join(sys.path[0], os.path.join("gcf","stitcher_logging.conf"))
# Windows & Mac binaries do not get the .conf file in the proper archive apparently
# And even if they did, it appears the logging stuff can't readily read .conf files
# from that archive.
# Solution 1 that fails (no pkg_resources on windows so far, needs the file in the .zip)
# lcfile = pkg_resources.resource_filename("gcf", "stitcher_logging.conf")
# Solution2 is to use pkgutil to read the file from the archive
# And write it to a temp file that the logging stuff can use.
# Note this requires finding some way to get the file into the archive
# With whatever I do, I want to read the file direct from source per above if possible
if not os.path.exists(lcfile):
# File didn't exist as a regular file among python source
# Try it where py2exe (Windows) puts resources (one directory up, parallel to zip).
lcfile = os.path.join(os.path.normpath(os.path.join(sys.path[0], '..')), os.path.join("gcf","stitcher_logging.conf"))
if not os.path.exists(lcfile):
# File didn't exist in dir parallel to zip of source
# Try one more up, but no gcf sub-directory - where py2app (Mac) puts it.
lcfile = os.path.join(os.path.normpath(os.path.join(os.path.join(sys.path[0], '..'), '..')), "stitcher_logging.conf")
if not os.path.exists(lcfile):
# Now we'll try a couple approaches to read the .conf file out of a source zip
# And put it in a temp directory
tmpdir = os.path.normpath(os.getenv("TMPDIR", os.getenv("TMP", "/tmp")))
if tmpdir and tmpdir != "" and not os.path.exists(tmpdir):
os.makedirs(tmpdir)
lcfile = os.path.join(tmpdir, "stitcher_logging.conf")
try:
# This approach requires the .conf be in the source.zip (e.g. library.zip, python27.zip)
# On Windows (py2exe) this isn't easy apparently. But it happens by default on Mac (py2app)
# Note that could be a manual copy & paste possibly
import pkgutil
lconf = pkgutil.get_data("gcf", "stitcher_logging.conf")
with open(lcfile, 'w') as file:
file.write(lconf)
#print "Read config with pkgutils %s" % lcfile
except Exception, e:
#print "Failed to read .conf file using pkgutil: %s" % e
# If we didn't get the file in the archive, use the .py version
# I find this solution distasteful
from gcf import stitcher_logging_deft
try:
with open(lcfile, 'w') as file:
file.write(stitcher_logging_deft.DEFT_STITCHER_LOGGING_CONFIG)
except Exception, e2:
sys.exit("Error configuring logging: Could not write (from python default) logging config file %s: %s" % (lcfile, e2))
#print "Read from logging config from .py into tmp file %s" % lcfile
parser.set_defaults(logconfig=lcfile)
# Have omni use our parser to parse the args, manipulating options as needed
options, args = omni.parse_args(argv, parser=parser)
# If there is no fileDir, then we try to write to the CWD. In some installations, that will
    # fail. So test writing to CWD. If that fails, set fileDir to a temp dir to write files there.
if not options.fileDir:
testfile = None
handle = None
try:
import tempfile
handle, testfile = tempfile.mkstemp(dir='.')
#print "Can write to CWD: created %s" % testfile
os.close(handle)
except Exception, e:
#print "Cannot write to CWD '%s' for output files: %s" % (os.path.abspath('.'), e)
tmpdir = os.path.normpath(os.getenv("TMPDIR", os.getenv("TMP", "/tmp")))
if tmpdir and tmpdir != "" and not os.path.exists(tmpdir):
os.makedirs(tmpdir)
testfile1 = None
handle1 = None
try:
import tempfile
handle1, testfile1 = tempfile.mkstemp(dir=tmpdir)
os.close(handle1)
options.fileDir = tmpdir
except Exception, e1:
sys.exit("Cannot write to temp directory '%s' for output files. Try setting `--fileDir` to point to a writable directory. Error: %s'" % (tmpdir, e1))
finally:
try:
os.unlink(testfile1)
except Exception, e2:
pass
finally:
try:
os.unlink(testfile)
except Exception, e2:
pass
# Create the dirs for fileDir option as needed
if options.fileDir:
fpDir = os.path.normpath(os.path.expanduser(options.fileDir))
if fpDir and fpDir != "":
if not fpDir.endswith(os.sep):
fpDir += os.sep
fpd2 = os.path.abspath(fpDir)
if not os.path.exists(fpd2):
try:
os.makedirs(fpd2)
except Exception, e:
sys.exit("Failed to create '%s' for saving files per --fileDir option: %s" % (fpd2, e))
if not os.path.isdir(fpd2):
sys.exit("Path specified in '--fileDir' is not a directory: %s" % fpd2)
testfile = None
handle = None
try:
import tempfile
handle, testfile = tempfile.mkstemp(dir=fpd2)
os.close(handle)
except Exception, e:
sys.exit("Cannot write to directory '%s' specified by '--fileDir': %s" % (fpDir, e))
finally:
try:
os.unlink(testfile)
except Exception, e2:
pass
options.fileDir = fpDir
options.logoutput = os.path.normpath(os.path.join(os.path.abspath(options.fileDir), options.logoutput))
# Set up the logger
# First, rotate the logfile if necessary
if options.logoutput:
options.logoutput = os.path.normpath(os.path.expanduser(options.logoutput))
if options.logoutput and os.path.exists(options.logoutput) and options.logFileCount > 0:
backupCount = options.logFileCount
bfn = options.logoutput
# Code from python logging.handlers.RotatingFileHandler.doRollover()
for i in range(backupCount - 1, 0, -1):
sfn = "%s.%d" % (bfn, i)
dfn = "%s.%d" % (bfn, i + 1)
if os.path.exists(sfn):
if os.path.exists(dfn):
os.remove(dfn)
os.rename(sfn, dfn)
dfn = bfn + ".1"
if os.path.exists(dfn):
os.remove(dfn)
if os.path.exists(bfn):
try:
os.rename(bfn, dfn)
except OSError, e:
# Issue #824 partial solution
if "being used by another process" in str(e):
                    # On Windows this happens when another stitcher instance is running in the same directory and has stitcher.log open:
# WindowsError: [Error 32] The process cannot access the file because it is being used by another process
sys.exit("Error: Is another stitcher process running in this directory? Run stitcher from a different directory, or re-run with the option `--fileDir <separate directory for this run's output files>`")
else:
raise
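    # Illustration: with logFileCount = 3 the rename chain above is
    #   stitcher.log.2 -> stitcher.log.3, stitcher.log.1 -> stitcher.log.2,
    #   and finally stitcher.log -> stitcher.log.1.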
# Then have Omni configure the logger
try:
omni.configure_logging(options)
except Exception, e:
sys.exit("Failed to configure logging: %s" % e)
# Now that we've configured logging, reset this to None
# to avoid later log messages about configuring logging
options.logconfig = None
logger = logging.getLogger("stitcher")
if options.fileDir:
logger.info("All files will be saved in the directory '%s'", os.path.abspath(options.fileDir))
# We use the omni config file
# First load the agg nick cache
# First, suppress all but WARN+ messages on console
if not options.debug:
lvl = logging.INFO
handlers = logger.handlers
if len(handlers) == 0:
handlers = logging.getLogger().handlers
for handler in handlers:
if isinstance(handler, logging.StreamHandler):
lvl = handler.level
handler.setLevel(logging.WARN)
break
config = omni.load_agg_nick_config(options, logger)
# Load custom config _after_ system agg_nick_cache,
# which also sets omni_defaults
config = omni.load_config(options, logger, config)
if config.has_key('omni_defaults') and config['omni_defaults'].has_key('scs_url'):
if options.scsURL is not None:
logger.debug("Ignoring omni_config default SCS URL of '%s' because commandline specified '%s'", config['omni_defaults']['scs_url'], options.scsURL)
else:
options.scsURL = config['omni_defaults']['scs_url']
logger.debug("Using SCS URL from omni_config: %s", options.scsURL)
else:
if options.scsURL is None:
options.scsURL = SCS_URL
logger.debug("Using SCS URL default: %s", SCS_URL)
else:
logger.debug("Using SCS URL from commandline: %s", options.scsURL)
if not options.debug:
handlers = logger.handlers
if len(handlers) == 0:
handlers = logging.getLogger().handlers
for handler in handlers:
if isinstance(handler, logging.StreamHandler):
handler.setLevel(lvl)
break
#logger.info("Using AM API version %d", options.api_version)
# Make any file prefix be part of the output file prefix so files go in the right spot
if options.prefix and options.fileDir:
pIsDir = (options.prefix and options.prefix.endswith(os.sep))
if not os.path.isabs(options.prefix):
options.prefix = os.path.normpath(os.path.join(options.fileDir, options.prefix))
else:
# replace any directory in prefix and use the fileDir
options.prefix = prependFilePrefix(options.fileDir, options.prefix)
if pIsDir:
options.prefix += os.sep
elif options.fileDir:
options.prefix = options.fileDir
# logger.debug("--prefix is now %s", options.prefix)
# Create the dirs needed for options.prefix if specified
if options.prefix:
fpDir = os.path.normpath(os.path.expanduser(os.path.dirname(options.prefix)))
if fpDir and fpDir != "" and not os.path.exists(fpDir):
try:
os.makedirs(fpDir)
except Exception, e:
sys.exit("Failed to create '%s' for saving files per --prefix option: %s" % (fpDir, e))
if options.fakeModeDir:
if not os.path.isdir(options.fakeModeDir):
logger.error("Got Fake Mode Dir %s that is not a directory!", options.fakeModeDir)
raise StitchingError("Fake Mod path not a directory: %s" % options.fakeModeDir)
else:
logger.info("Running with Fake Mode Dir %s", options.fakeModeDir)
Aggregate.PAUSE_FOR_DCN_AM_TO_FREE_RESOURCES_SECS = options.ionRetryIntervalSecs
Aggregate.SLIVERSTATUS_POLL_INTERVAL_SEC = options.ionStatusIntervalSecs
nondefOpts = omni.getOptsUsed(parser, options)
if options.debug:
logger.info(omni.getSystemInfo() + "\nStitcher: " + omni.getOmniVersion())
logger.info("Running stitcher ... %s Args: %s" % (nondefOpts, " ".join(args)))
else:
# Force this to the debug log file only
logger.debug(omni.getSystemInfo() + "\nStitcher: " + omni.getOmniVersion())
logger.debug("Running stitcher ... %s Args: %s" % (nondefOpts, " ".join(args)))
omni.checkForUpdates(config, logger)
if options.defaultCapacity < 1:
logger.warn("Specified a tiny default link capacity of %dKbps!", options.defaultCapacity)
# FIXME: Warn about really big capacities too?
if options.useExoSM and options.noExoSM:
sys.exit("Cannot specify both useExoSM and noExoSM")
if options.useExoSM and options.noEGStitching:
sys.exit("Cannot specify both useExoSM and noEGStitching")
if options.useExoSM and options.noEGStitchingOnLink:
sys.exit("Cannot specify both useExoSM and noEGStitchingOnLink")
if options.noExoSM:
if not options.noEGStitching:
logger.debug("Per options avoiding ExoSM. Therefore, not using EG Stitching")
options.noEGStitching = True
# Note that the converse is not true: You can require noEGStitching and still use
# the ExoSM, assuming we edit the request to the ExoSM carefully.
if options.noTransitAMs:
logger.info("Per options not completing reservations at transit / SCS added aggregates")
if not options.noDeleteAtEnd:
logger.debug(" ... therefore setting noDeleteAtEnd")
options.noDeleteAtEnd = True
handler = StitchingHandler(options, config, logger)
return handler.doStitching(args)
# Goal of main is to call the 'call' method and print the result
def main(argv=None):
if argv is None:
argv = sys.argv[1:]
# FIXME: Print other header stuff?
try:
text, item = call(argv)
# FIXME: If called from a script, then anything here?
# if options.script:
# return json
# return result
# else:
print text
except AMAPIError, ae:
if ae.returnstruct and isinstance(ae.returnstruct, dict) and ae.returnstruct.has_key('code'):
if isinstance(ae.returnstruct['code'], int) or isinstance(ae.returnstruct['code'], str):
sys.exit(int(ae.returnstruct['code']))
if isinstance(ae.returnstruct['code'], dict) and ae.returnstruct['code'].has_key('geni_code'):
sys.exit(int(ae.returnstruct['code']['geni_code']))
sys.exit(ae)
except OmniError, oe:
sys.exit(oe)
if __name__ == "__main__":
sys.exit(main())
| 50.646444 | 221 | 0.638399 | 3,097 | 24,209 | 4.944785 | 0.223442 | 0.016847 | 0.024487 | 0.016521 | 0.235079 | 0.191132 | 0.137456 | 0.10239 | 0.071569 | 0.062688 | 0 | 0.004983 | 0.253873 | 24,209 | 477 | 222 | 50.752621 | 0.842828 | 0.285596 | 0 | 0.274914 | 0 | 0.027491 | 0.281515 | 0.01028 | 0 | 0 | 0 | 0.002096 | 0 | 0 | null | null | 0.010309 | 0.058419 | null | null | 0.003436 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8ca7949e5600af676c35fbf996b6b49821a26ed0 | 320 | py | Python | check_db_connection.py | sernote/python_training | 2e2fa1f71b551d12bb303a0abc31bd86a0a7b81d | [
"Apache-2.0"
] | null | null | null | check_db_connection.py | sernote/python_training | 2e2fa1f71b551d12bb303a0abc31bd86a0a7b81d | [
"Apache-2.0"
] | null | null | null | check_db_connection.py | sernote/python_training | 2e2fa1f71b551d12bb303a0abc31bd86a0a7b81d | [
"Apache-2.0"
] | null | null | null | from fixture.orm import ORMFixture
from model.group import Group
from fixture.contact import Contacthelper
db = ORMFixture(host='localhost', name='addressbook', user='root', password='')
try:
l = db.get_contact_list()
for item in l:
print(item.all_phones_from_page)
print(len(l))
finally:
pass
| 22.857143 | 79 | 0.715625 | 45 | 320 | 4.977778 | 0.688889 | 0.098214 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175 | 320 | 13 | 80 | 24.615385 | 0.848485 | 0 | 0 | 0 | 0 | 0 | 0.075 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.181818 | 0.272727 | 0 | 0.272727 | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8ca9b54baa338723ba065c5a2b4ac0c863288e7a | 973 | py | Python | ranking-jaccardian.py | bharathbs93/Article-Similarity-Search | 32835dbf3e62b93e369b2596d41ad6359c8f1601 | [
"MIT"
] | null | null | null | ranking-jaccardian.py | bharathbs93/Article-Similarity-Search | 32835dbf3e62b93e369b2596d41ad6359c8f1601 | [
"MIT"
] | null | null | null | ranking-jaccardian.py | bharathbs93/Article-Similarity-Search | 32835dbf3e62b93e369b2596d41ad6359c8f1601 | [
"MIT"
] | null | null | null | #Importing libraries that are needed for processing
import pandas as pd
from sklearn import preprocessing
import numpy as np
from sklearn.metrics.pairwise import pairwise_distances
# Read the CSV file of the term-document frequency matrix generated in R
input_data = pd.read_csv("finamatrix.csv", index_col=0, header=0)
# Normalize each feature to the [0, 1] range
x = preprocessing.MinMaxScaler()
data_standard = x.fit_transform(input_data)
# Calculating jaccardian similarity scores
jac_similarity = pairwise_distances(data_standard, metric='jaccard')
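# Note: scikit-learn converts the input to boolean for the 'jaccard' metric
# (non-zero -> True), so the MinMax-scaled values are effectively binarized, e.g.
#   pairwise_distances([[1, 0, 1], [1, 1, 0]], metric='jaccard')
#   -> [[0., 0.667], [0.667, 0.]]  (positions that differ / positions non-zero in either row)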
# Reshaping array
jac_similarity = jac_similarity.reshape(jac_similarity.size,1)
print jac_similarity[0:10]
# sorting scores
sorted_jac = np.sort(jac_similarity, axis = None)[::-1]
sorted_jac = np.roll(sorted_jac, -np.count_nonzero(np.isnan(jac_similarity)))
#Top three paired article scores are
top_jac = sorted_jac[0:3]
print sorted_jac[0:20]
print("top three jacardian score paired articles are")
print(top_jac)
| 27.027778 | 77 | 0.798561 | 146 | 973 | 5.157534 | 0.547945 | 0.12085 | 0.043825 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014002 | 0.119219 | 973 | 35 | 78 | 27.8 | 0.864644 | 0.26927 | 0 | 0 | 0 | 0 | 0.09375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cad26aa5c9bb0b79790bafff44921d3d31e5198 | 389 | py | Python | rest/migrations/0032_auto_20200819_2057.py | narcotis/Welbot-V2 | 7525216b61036f62d0be0b5ebb6d3476b73323c8 | [
"MIT"
] | 1 | 2021-06-04T03:28:06.000Z | 2021-06-04T03:28:06.000Z | rest/migrations/0032_auto_20200819_2057.py | narcotis/Welbot-V2 | 7525216b61036f62d0be0b5ebb6d3476b73323c8 | [
"MIT"
] | 2 | 2020-09-09T14:19:10.000Z | 2020-09-09T14:20:21.000Z | rest/migrations/0032_auto_20200819_2057.py | narcotis/Welbot-V2 | 7525216b61036f62d0be0b5ebb6d3476b73323c8 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.8 on 2020-08-19 11:57
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('rest', '0031_auto_20200819_2056'),
]
operations = [
migrations.AlterField(
model_name='culture_event',
name='fare',
field=models.CharField(max_length=200),
),
]
| 20.473684 | 51 | 0.601542 | 43 | 389 | 5.302326 | 0.860465 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122302 | 0.285347 | 389 | 18 | 52 | 21.611111 | 0.697842 | 0.115681 | 0 | 0 | 1 | 0 | 0.128655 | 0.067251 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cba3acfbc1a50238c1a027941fbc4491ae9149b | 1,178 | py | Python | ccfx/scripts/testthreadingutil.py | ytetsuwo/ccfinder-core | e20f390e8d26f900c68d8656b9cfb9cbaa3716d9 | [
"MIT"
] | 2 | 2019-10-27T08:01:19.000Z | 2021-12-20T07:53:02.000Z | ccfx/scripts/testthreadingutil.py | ytetsuwo/ccfinder-core | e20f390e8d26f900c68d8656b9cfb9cbaa3716d9 | [
"MIT"
] | 5 | 2019-05-02T16:36:39.000Z | 2019-05-12T16:04:45.000Z | ccfx/scripts/testthreadingutil.py | ytetsuwo/ccfinder-core | e20f390e8d26f900c68d8656b9cfb9cbaa3716d9 | [
"MIT"
] | 2 | 2019-08-04T13:21:51.000Z | 2021-03-07T00:18:36.000Z |
import threadingutil
import sys
import random
import time
random.seed(0)
def f(v): # this function must be declared at global scope, in order to make it visible to subprocess.
time.sleep(random.random() * 2.0)
return v * v
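# Note: the workers presumably import this module and look f up by name,
# which is why a lambda or a nested function would not work here.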
if __name__ == '__main__':
usage = "Usage: testthreadingutil.py [NUMWORKER [INPUTSIZE]]"
numWorker = 4
inputSize = 30
if len(sys.argv) >= 2:
if sys.argv[1] == "-h":
print usage
sys.exit(0)
numWorker = int(sys.argv[1])
if len(sys.argv) >= 3:
inputSize = int(sys.argv[2])
if len(sys.argv) >= 4:
print usage
sys.exit(1)
def genargslist(size):
for v in xrange(size):
yield ( v, )
t1 = time.time()
#for index, result in threadingutil.multithreading_iter(f, [ args for args in genargslist(inputSize) ], numWorker):
for index, result in threadingutil.multithreading_iter(f, genargslist(inputSize), numWorker):
print "index = ", index, ", result = ", result
print
print "NUMWORKER = %d, INPUTSIZE = %d" % ( numWorker, inputSize )
print "elapsed time: %g" % (time.time() - t1)
| 26.772727 | 119 | 0.596774 | 151 | 1,178 | 4.589404 | 0.397351 | 0.060606 | 0.034632 | 0.051948 | 0.138528 | 0.138528 | 0.138528 | 0.138528 | 0 | 0 | 0 | 0.018868 | 0.280136 | 1,178 | 43 | 120 | 27.395349 | 0.798349 | 0.173175 | 0 | 0.064516 | 0 | 0 | 0.129897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.129032 | null | null | 0.193548 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cbee8bc9e37824427f878a582ea24c70d8c204c | 1,400 | py | Python | slybot/slybot/utils.py | rmcwilliams2004/mapping | 55f97e61fc558d243a0806ae8bf4a26b080026de | [
"BSD-3-Clause"
] | 8 | 2015-03-16T20:30:50.000Z | 2021-09-21T13:05:46.000Z | slybot/slybot/utils.py | btomashvili/portia | 9039140269f2cfca588a1feec6cc793cba87f202 | [
"BSD-3-Clause"
] | 1 | 2018-10-24T09:29:00.000Z | 2018-10-24T09:29:00.000Z | slybot/slybot/utils.py | btomashvili/portia | 9039140269f2cfca588a1feec6cc793cba87f202 | [
"BSD-3-Clause"
] | 4 | 2015-02-01T01:17:45.000Z | 2022-03-04T06:07:25.000Z | from urlparse import urlparse
import os
import json
from scrapely.htmlpage import HtmlPage
def iter_unique_scheme_hostname(urls):
"""Return an iterator of tuples (scheme, hostname) over the given urls,
filtering dupes
"""
scheme_hostname = set()
for x in urls:
p = urlparse(x)
scheme_hostname.add((p.scheme, p.hostname))
return list(scheme_hostname)
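# A minimal usage sketch (hypothetical URLs):
#   iter_unique_scheme_hostname(["http://a.com/x", "http://a.com/y", "https://b.org/"])
#   -> [('http', 'a.com'), ('https', 'b.org')]  (order not guaranteed: built from a set)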
def open_project_from_dir(project_dir):
specs = {"spiders": {}}
with open(os.path.join(project_dir, "project.json")) as f:
specs["project"] = json.load(f)
with open(os.path.join(project_dir, "items.json")) as f:
specs["items"] = json.load(f)
with open(os.path.join(project_dir, "extractors.json")) as f:
specs["extractors"] = json.load(f)
for fname in os.listdir(os.path.join(project_dir, "spiders")):
if fname.endswith(".json"):
spider_name = os.path.splitext(fname)[0]
with open(os.path.join(project_dir, "spiders", fname)) as f:
try:
specs["spiders"][spider_name] = json.load(f)
except ValueError, e:
raise ValueError("Error parsing spider (invalid JSON): %s: %s" % (fname, e))
return specs
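# Illustrative project layout, inferred from the reads above:
#   project_dir/
#       project.json, items.json, extractors.json
#       spiders/<spider_name>.json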
def htmlpage_from_response(response):
return HtmlPage(response.url, response.headers, \
response.body_as_unicode(), encoding=response.encoding)
| 36.842105 | 96 | 0.638571 | 185 | 1,400 | 4.718919 | 0.356757 | 0.068729 | 0.057274 | 0.097365 | 0.187858 | 0.187858 | 0.148912 | 0.084765 | 0.084765 | 0.084765 | 0 | 0.000933 | 0.234286 | 1,400 | 37 | 97 | 37.837838 | 0.813433 | 0 | 0 | 0 | 0 | 0 | 0.103766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.133333 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cc0ae2e9a623354a047f26bbb36e199b6e49eb8 | 2,975 | py | Python | propara/utils/prostruct_predicted_json_to_tsv_grid.py | keisks/propara | 49fa8fe0481291df18b2c7b48e7ba1dafaad48e2 | [
"Apache-2.0"
] | 84 | 2018-06-02T02:00:53.000Z | 2022-03-13T12:17:42.000Z | propara/utils/prostruct_predicted_json_to_tsv_grid.py | keisks/propara | 49fa8fe0481291df18b2c7b48e7ba1dafaad48e2 | [
"Apache-2.0"
] | 3 | 2018-10-31T00:28:31.000Z | 2020-05-12T01:06:53.000Z | propara/utils/prostruct_predicted_json_to_tsv_grid.py | keisks/propara | 49fa8fe0481291df18b2c7b48e7ba1dafaad48e2 | [
"Apache-2.0"
] | 13 | 2018-09-14T20:37:51.000Z | 2021-03-23T09:24:49.000Z | import json
import sys
from pprint import pprint
from processes.data.propara_dataset_reader import Action
# Input: json format generated by ProparaPredictor
# "para_id": "1114",
# "sentence_texts": ["Rainwater falls onto the soil.", "The rainwater seeps into the soil.",...."],
# "participants": ["rainwater; water", "bedrock", "funnels", "caves"],
# "states": [["?", "soil", "soil", "bedrock", "bedrock", "bedrock", "bedrock", "bedrock"],....],
# "predicted_actions": ["MOVE", "NONE", "NONE", "NONE", "MOVE", ..., "CREATE", "CREATE"]
#
# Output: paraid \t sentence_id \t participant \t action \t before_val \t after_val
# This script converts the json format generated by ProparaPredictor into the partial-grids TSV format.
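# For illustration, one hypothetical output row built from the sample above:
#   1114<TAB>1<TAB>rainwater; water<TAB>MOVE<TAB>?<TAB>soil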
def get_before_after_val(action: Action, predicted_before_location: str, predicted_after_location: str):
if action == Action.CREATE:
return '-', '?'
elif action == Action.DESTROY:
return '?', '-'
elif action == Action.MOVE:
return predicted_before_location, predicted_after_location
elif action == Action.NONE:
return '?', '?'
def convert_predicted_json_to_partial_grids(infile_path: str, outfile_path: str):
out_file = open(outfile_path, "w")
for line in open(infile_path):
data = json.loads(line)
pprint(data)
para_id = data["para_id"]
participants = data["participants"]
actions_sentences_participants = data["top1_original"]
sentence_texts = data["sentence_texts"]
num_sentences = len(sentence_texts)
num_participants = len(participants)
predicted_after_locations = data["predicted_locations"] if "predicted_locations" in data and len(data["predicted_locations"]) > 0 \
else [['?' for _ in range(num_participants)] for _ in range(num_sentences)]
print(num_sentences)
print(num_participants)
for sentence_id in range(num_sentences):
for participant_id in range(num_participants):
predicted_before_location = predicted_after_locations[sentence_id-1][participant_id] if sentence_id > 0 else '?'
predicted_after_location = predicted_after_locations[sentence_id][participant_id]
action = Action(actions_sentences_participants[sentence_id][participant_id])
(before_val, after_val) = get_before_after_val(action, predicted_before_location, predicted_after_location)
out_file.write("\t".join([para_id,
str(sentence_id+1),
participants[participant_id],
action.name,
before_val,
after_val]) + "\n")
out_file.close()
if __name__ == '__main__':
infile = sys.argv[1]
outfile = sys.argv[2]
convert_predicted_json_to_partial_grids(infile_path=infile, outfile_path=outfile) | 43.75 | 139 | 0.644034 | 335 | 2,975 | 5.41791 | 0.277612 | 0.038567 | 0.050689 | 0.052893 | 0.17686 | 0.143251 | 0.048485 | 0.048485 | 0 | 0 | 0 | 0.004893 | 0.24437 | 2,975 | 68 | 140 | 43.75 | 0.802491 | 0.201008 | 0 | 0 | 1 | 0 | 0.052365 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0.088889 | 0 | 0.222222 | 0.088889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cc61c995bb4f660479b7eb7a1926b0e87dfaec1 | 894 | py | Python | wwwroot/cgi-bin/NetDict/format_conversion.py | fenshitianyue/WebDict | fc04133c3921d6c98f31c1e8608b6e1255088a10 | [
"MIT"
] | 1 | 2019-04-15T04:23:53.000Z | 2019-04-15T04:23:53.000Z | wwwroot/cgi-bin/NetDict/format_conversion.py | fenshitianyue/WebDict | fc04133c3921d6c98f31c1e8608b6e1255088a10 | [
"MIT"
] | null | null | null | wwwroot/cgi-bin/NetDict/format_conversion.py | fenshitianyue/WebDict | fc04133c3921d6c98f31c1e8608b6e1255088a10 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
import pymysql
import sys
reload(sys)
sys.setdefaultencoding('utf8')
base = {}
def write_file():
fp = open("/home/zanda/Desktop/PythonCode/new_formatted_data", "w+")
for word, meaning in base.items():
fp.write(word + "\3" + meaning + "\n")
fp.close()
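# Note: each line is written as "<word>\x03<meaning>"; the ETX control character
# ("\3") serves as a field separator that is unlikely to occur in dictionary text.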
def find(cursor):
try:
sql = "select * from mydict"
cursor.execute(sql)
if cursor.rowcount == 0:
print "查询为空"
else:
tmp = cursor.fetchall() # 获取查询的所有结果,即词库
for row in tmp: #遍历词库,将每一条数据添加到字典base中
base[row[0]] = row[1]
# base.update(row[0], row[1]) have bug
except:
print "查询失败"
if __name__ == '__main__':
db = pymysql.connect("localhost", "root", "nihao.","Dict", charset = "utf8")
cursor = db.cursor()
find(cursor)
write_file()
db.close()
| 23.526316 | 80 | 0.558166 | 109 | 894 | 4.46789 | 0.642202 | 0.036961 | 0.028747 | 0.032854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014019 | 0.281879 | 894 | 37 | 81 | 24.162162 | 0.744548 | 0.123043 | 0 | 0 | 0 | 0 | 0.156611 | 0.062901 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.071429 | null | null | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cc6690d4201d4fa3d78af7391584f7477660468 | 2,161 | py | Python | app/old/results_to_csv-mo.py | jpenney78/usabmx_results | 7301ba82c8c24f978d9d2196f6cb4311cb85b033 | [
"MIT"
] | null | null | null | app/old/results_to_csv-mo.py | jpenney78/usabmx_results | 7301ba82c8c24f978d9d2196f6cb4311cb85b033 | [
"MIT"
] | null | null | null | app/old/results_to_csv-mo.py | jpenney78/usabmx_results | 7301ba82c8c24f978d9d2196f6cb4311cb85b033 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from bs4 import BeautifulSoup
import urllib2
import re
import sys
import collections
#with open('grands.html') as f:
# soup = BeautifulSoup(f, 'html.parser')
url = sys.argv[-1]
page = urllib2.urlopen(url)
soup = BeautifulSoup(page, 'html.parser')
groups = soup.findAll('h4', class_='race-result-group')
days = soup.findAll('div', id=re.compile('^rday_'))
#print 'Race, Class, Place, Name, City, State, Sponsor'
print 'Name, Place, Class'
for day in days:
    title = day.find('h3', class_='race-result-title')
    # guard against days where the title element is missing
    title = title.text if title is not None else 'Unknown'
print '\n\n\nRACE: {}, ,'.format(title)
groups = day.findAll('h4', class_='race-result-group')
uls = day.findAll('ul', class_='race-result-list')
count = 0
places = collections.OrderedDict()
p = 1
while p < 8:
places[p] = 0
p += 1
for ul in uls:
group = groups[count].text
class_name = group.split('Total Riders')[0].rstrip()
# print class_name
for li in ul.findAll('li'):
(place, rider) = li.findAll('span')
rider_info = rider.text.rstrip().split(',')
rider_info_len = len(rider_info)
if rider_info_len >= 3:
if rider_info_len == 4:
rider_sponsor = rider_info[1]
else:
rider_sponsor = 'Privateer'
rider_name = rider_info[0]
rider_city = rider_info[-2]
rider_state = rider_info[-1]
if rider_state == ' MO':
#print '{},{},{},{},{},{},{}'.format(title, class_name, place.text, rider_name, rider_city, rider_state, rider_sponsor)
try:
places[int(place.text)] += 1
except:
places[int(place.text)] = 1
print '{}, {}, {}'.format(rider_name, place.text, class_name)
else:
pass
count += 1
print '\n\nPlace Count:'
for k,v in places.iteritems():
print '{} - {}'.format(k,v)
| 28.064935 | 139 | 0.541416 | 265 | 2,161 | 4.279245 | 0.343396 | 0.071429 | 0.05291 | 0.031746 | 0.084656 | 0.051146 | 0 | 0 | 0 | 0 | 0 | 0.014805 | 0.312355 | 2,161 | 76 | 140 | 28.434211 | 0.748318 | 0.162425 | 0 | 0.04 | 0 | 0 | 0.110679 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.02 | 0.1 | null | null | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8ccc3f085556e9b031c98cfec64f5d267b519580 | 3,771 | py | Python | include/fetchfile.py | dongniu/cadnano2 | 6805fe2af856c59b06373c0ee0142ad6bc286262 | [
"Unlicense"
] | 17 | 2015-02-07T03:46:49.000Z | 2021-09-25T09:23:41.000Z | include/fetchfile.py | scholer/cadnano2 | 0b8bba1ab3277ac9859ef78615890d351561784c | [
"Unlicense"
] | 2 | 2017-08-22T03:17:16.000Z | 2021-07-03T14:42:41.000Z | include/fetchfile.py | scholer/cadnano2 | 0b8bba1ab3277ac9859ef78615890d351561784c | [
"Unlicense"
] | 9 | 2015-09-06T22:41:38.000Z | 2022-03-27T13:57:37.000Z | #!/usr/bin/env python
# encoding: utf-8
# The MIT License
#
# Copyright (c) 2011 Wyss Institute at Harvard University
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# http://www.opensource.org/licenses/mit-license.php
from urllib2 import Request, urlopen, URLError, HTTPError
import sys
from os.path import basename, dirname, splitext, exists
import os
import shutil
import tarfile
def fetchFile(filename, baseurl, filemode='b', filetype='gz', filepath=None):
"""
    filepath - optional directory path in which to store the fetched file
"""
# create the url and the request
url = baseurl + '/' + filename
request = Request(url)
# Open the url
try:
f_url = urlopen(request)
print "downloading " + url
# Open our local file for writing
f_dest = open(filename, "w" + filemode)
# Write to our local file
f_dest.write(f_url.read())
f_dest.close()
# handle errors
except HTTPError, e:
print "HTTP Error:", e.code , url
except URLError, e:
print "URL Error:", e.reason , url
filename_out = filename
# unzip if possible
if filetype == 'gz':
# get the extracted folder name
filename_out = splitext(filename)[0]
temp = splitext(filename_out)
if temp[1] == '.tar':
filename_out = temp[0]
# open the archive
try:
            f_zip = tarfile.open(filename, mode='r')
        except tarfile.ReadError, e:
            print "unable to read archive", e
print "extracting " + filename_out
try:
if filepath:
# remove existing folder
if os.path.exists(filepath + '/' + filename_out):
print "file exists"
shutil.rmtree(filepath + '/' + filename_out)
else:
print "file does not exist", filename_out
f_zip.extractall(path=filepath)
else:
# remove existing folder
if os.path.exists(filename_out):
print "file exists"
shutil.rmtree(filename_out)
else:
print "file does not exist", filename_out
f_zip.extractall()
except tarfile.ExtractError, e:
print "unable to extract archive", e.code
f_zip.close()
# remove the archive
print "removing the downloaded archive", filename
os.remove(filename)
print "done"
return filename_out
# end def
if __name__ == '__main__':
argv = sys.argv
url = argv[1]
filename = basename(url)
base_url = dirname(url)
fetchFile(filename, base_url)
| 33.669643 | 79 | 0.629011 | 474 | 3,771 | 4.938819 | 0.421941 | 0.056386 | 0.011106 | 0.011961 | 0.113627 | 0.113627 | 0.113627 | 0.052114 | 0.052114 | 0.052114 | 0 | 0.003754 | 0.293556 | 3,771 | 111 | 80 | 33.972973 | 0.875 | 0.37974 | 0 | 0.172414 | 0 | 0 | 0.09399 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.103448 | null | null | 0.206897 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cccebd0d964c1db537e29204485613a1e3bfc4a | 4,410 | py | Python | pyjs/tests/test-report.py | allbuttonspressed/pyjs | c726fdead530eb63ee4763ae15daaa58d84cd58f | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2018-09-19T09:14:16.000Z | 2018-09-19T09:14:16.000Z | pyjs/tests/test-report.py | andreyvit/pyjamas | 1154abe3340a84dba7530b8174aaddecfc1a0944 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | pyjs/tests/test-report.py | andreyvit/pyjamas | 1154abe3340a84dba7530b8174aaddecfc1a0944 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2019-11-18T14:17:59.000Z | 2019-11-18T14:17:59.000Z | #!/usr/bin/env python
import sys
import difflib
differ = difflib.HtmlDiff()
class Coverage:
def __init__(self, testset_name):
self.testset_name = testset_name
self.lines = {}
def tracer(self, frame, event, arg):
lineno = frame.f_lineno
filename = frame.f_globals["__file__"]
if filename[-4:] in [".pyc", ".pyo"]:
filename = filename[:-1]
self.lines[filename][lineno] = self.lines.setdefault(filename, {}).get(lineno, 0) + 1
return self.tracer
def start(self):
sys.settrace(self.tracer)
def stop(self):
sys.settrace(None)
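    # Hypothetical usage sketch (the class is defined but not exercised below):
    #   cov = Coverage("pyjs tests"); cov.start(); run_tests(); cov.stop()
    #   cov.output("some_module.py")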
def output(self, *files):
print """
<html>
<head>
<title>Coverage for %s</title>
<style>
body {
color: #000;
background-color: #FFF;
}
h1, h2 {
font-family: sans-serif;
font-weight: normal;
}
td {
white-space: pre;
padding: 1px 5px;
font-family: monospace;
font-size: 10pt;
}
td.hit {
}
td.hit-line {
}
td.miss {
background-color: #C33;
}
td.miss-line {
background-color: #FCC;
}
td.ignore {
color: #999;
}
td.ignore-line {
color: #999;
}
td.lineno {
color: #999;
background-color: #EEE;
}
</style>
</head>
<body>
""" % self.testset_name
print """
<h1>Coverage for %s</h1>
""" % self.testset_name
for filename in files:
print """
<h2>%s</h2>
<table>
""" % filename
code = open(filename).readlines()
for lineno, line in enumerate(code):
count = self.lines[filename].get(lineno + 1, 0)
if count == 0:
if line.strip() in ["", "else:"] or line.strip().startswith("#"):
klass = "ignore"
else:
klass = "miss"
else:
klass = "hit"
klass2 = klass + "-line"
print """<tr><td class="lineno">%s</td><td class="%s">%s</td><td class="%s">%s</td></tr>""" % (lineno + 1, klass, count, klass2, line.strip("\n"))
print """
</table>
"""
print """
</body>
</html>
"""
print """
<html>
<head>
<style>
.diff_add { background: #9F9; }
.diff_sub { background: #F99; }
.diff_chg { background: #FF9; }
.diff_header { background: #DDD; padding: 0px 3px; }
.diff_next { padding: 0px 3px; }
table.diff {
font-family: monospace;
}
</style>
</head>
<body>
"""
def test(filename, module):
print "<h1>" + filename + "</h1>"
try:
output = pyjs.translate(filename + ".py", module)
desired_output = open(filename + ".js").read()
if output == desired_output:
print "<p>pass</p>"
else:
print differ.make_table(output.split("\n"), desired_output.split("\n"), context=True)
except Exception, e:
print "\texception", e
import sys
sys.path.append("..")
import pyjs
test("test001", "ui")
test("test002", "ui")
test("test003", "ui")
test("test004", "ui")
test("test005", "ui")
test("test006", "ui")
test("test007", "ui")
test("test008", "ui")
test("test009", "ui")
test("test010", None)
test("test011", None)
test("test012", None)
test("test013", "ui")
test("test014", None)
test("test015", None)
test("test016", None)
test("test017", None)
test("test018", None)
test("test019", None)
test("test020", None)
test("test021", None)
test("test022", None)
test("test023", None)
test("test024", None)
test("test025", None)
test("test026", None)
test("test027", None)
test("test028", None)
test("test029", None)
test("test030", None)
test("test031", None)
test("test032", None)
test("test033", None)
test("test034", None)
test("test035", None)
test("test036", None)
test("test037", None)
test("test038", None)
test("test039", None)
test("test040", None)
test("test041", None)
test("test042", None)
test("test043", None)
test("test044", None)
test("test045", None)
test("test046", None)
print """
</body>
</html>
""" | 23.089005 | 162 | 0.506576 | 480 | 4,410 | 4.604167 | 0.3625 | 0.126697 | 0.027149 | 0.00905 | 0.011312 | 0.011312 | 0.011312 | 0 | 0 | 0 | 0 | 0.061204 | 0.321995 | 4,410 | 191 | 163 | 23.089005 | 0.677926 | 0.004535 | 0 | 0.230769 | 0 | 0.005917 | 0.420957 | 0.015718 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.005917 | 0.023669 | null | null | 0.071006 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8ccf7d8c58166e297f3266ddf7b93c9e740f2087 | 1,582 | py | Python | pymilvus_orm/__init__.py | PahudPlus/pymilvus-orm | 78e2e38e71cff92ed6d243dcac85314230ce0fdc | [
"Apache-2.0"
] | null | null | null | pymilvus_orm/__init__.py | PahudPlus/pymilvus-orm | 78e2e38e71cff92ed6d243dcac85314230ce0fdc | [
"Apache-2.0"
] | null | null | null | pymilvus_orm/__init__.py | PahudPlus/pymilvus-orm | 78e2e38e71cff92ed6d243dcac85314230ce0fdc | [
"Apache-2.0"
] | null | null | null | # Copyright (C) 2019-2020 Zilliz. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
"""client module"""
from pkg_resources import get_distribution, DistributionNotFound
from .collection import Collection
from .connections import (
Connections,
connections,
add_connection,
list_connections,
get_connection_addr,
remove_connection,
connect,
get_connection,
disconnect
)
from .index import Index
from .partition import Partition
from .utility import (
loading_progress,
index_building_progress,
wait_for_loading_complete,
wait_for_index_building_complete,
has_collection,
has_partition,
list_collections,
)
from .search import SearchResult, Hits, Hit
from .types import DataType
from .schema import FieldSchema, CollectionSchema
from .future import SearchResultFuture, InsertFuture
__version__ = '0.0.0.dev'
try:
__version__ = get_distribution('pymilvus-orm').version
except DistributionNotFound:
# package is not installed
pass
| 29.849057 | 99 | 0.73641 | 195 | 1,582 | 5.820513 | 0.584615 | 0.052863 | 0.022907 | 0.028194 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011905 | 0.20354 | 1,582 | 52 | 100 | 30.423077 | 0.888889 | 0.385588 | 0 | 0 | 0 | 0 | 0.02199 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.030303 | 0.30303 | 0 | 0.30303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
8cd09fbcc54d1551c8b23bec979629192ae13b6e | 4,192 | py | Python | linux/lib/python2.7/dist-packages/samba/tests/ntacls.py | nmercier/linux-cross-gcc | a5b0028fd2b72ec036a4725e93ba29d73cb753a6 | [
"BSD-3-Clause"
] | 3 | 2015-10-31T10:39:25.000Z | 2019-04-27T20:19:33.000Z | linux/lib/python2.7/dist-packages/samba/tests/ntacls.py | nmercier/linux-cross-gcc | a5b0028fd2b72ec036a4725e93ba29d73cb753a6 | [
"BSD-3-Clause"
] | null | null | null | linux/lib/python2.7/dist-packages/samba/tests/ntacls.py | nmercier/linux-cross-gcc | a5b0028fd2b72ec036a4725e93ba29d73cb753a6 | [
"BSD-3-Clause"
] | null | null | null | # Unix SMB/CIFS implementation. Tests for ntacls manipulation
# Copyright (C) Matthieu Patou <mat@matws.net> 2009-2010
# Copyright (C) Andrew Bartlett 2012
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
"""Tests for samba.ntacls."""
from samba.ntacls import setntacl, getntacl, XattrBackendError
from samba.param import LoadParm
from samba.dcerpc import security
from samba.tests import TestCaseInTempDir, SkipTest
import os
class NtaclsTests(TestCaseInTempDir):
def test_setntacl(self):
lp = LoadParm()
acl = "O:S-1-5-21-2212615479-2695158682-2101375467-512G:S-1-5-21-2212615479-2695158682-2101375467-513D:(A;OICI;0x001f01ff;;;S-1-5-21-2212615479-2695158682-2101375467-512)"
open(self.tempf, 'w').write("empty")
lp.set("posix:eadb",os.path.join(self.tempdir,"eadbtest.tdb"))
setntacl(lp, self.tempf, acl, "S-1-5-21-2212615479-2695158682-2101375467")
os.unlink(os.path.join(self.tempdir,"eadbtest.tdb"))
def test_setntacl_getntacl(self):
lp = LoadParm()
acl = "O:S-1-5-21-2212615479-2695158682-2101375467-512G:S-1-5-21-2212615479-2695158682-2101375467-513D:(A;OICI;0x001f01ff;;;S-1-5-21-2212615479-2695158682-2101375467-512)"
open(self.tempf, 'w').write("empty")
lp.set("posix:eadb",os.path.join(self.tempdir,"eadbtest.tdb"))
setntacl(lp,self.tempf,acl,"S-1-5-21-2212615479-2695158682-2101375467")
facl = getntacl(lp,self.tempf)
anysid = security.dom_sid(security.SID_NT_SELF)
self.assertEquals(facl.as_sddl(anysid),acl)
os.unlink(os.path.join(self.tempdir,"eadbtest.tdb"))
def test_setntacl_getntacl_param(self):
lp = LoadParm()
acl = "O:S-1-5-21-2212615479-2695158682-2101375467-512G:S-1-5-21-2212615479-2695158682-2101375467-513D:(A;OICI;0x001f01ff;;;S-1-5-21-2212615479-2695158682-2101375467-512)"
open(self.tempf, 'w').write("empty")
setntacl(lp,self.tempf,acl,"S-1-5-21-2212615479-2695158682-2101375467","tdb",os.path.join(self.tempdir,"eadbtest.tdb"))
facl=getntacl(lp,self.tempf,"tdb",os.path.join(self.tempdir,"eadbtest.tdb"))
domsid=security.dom_sid(security.SID_NT_SELF)
self.assertEquals(facl.as_sddl(domsid),acl)
os.unlink(os.path.join(self.tempdir,"eadbtest.tdb"))
def test_setntacl_invalidbackend(self):
lp = LoadParm()
acl = "O:S-1-5-21-2212615479-2695158682-2101375467-512G:S-1-5-21-2212615479-2695158682-2101375467-513D:(A;OICI;0x001f01ff;;;S-1-5-21-2212615479-2695158682-2101375467-512)"
open(self.tempf, 'w').write("empty")
self.assertRaises(XattrBackendError, setntacl, lp, self.tempf, acl, "S-1-5-21-2212615479-2695158682-2101375467","ttdb", os.path.join(self.tempdir,"eadbtest.tdb"))
def test_setntacl_forcenative(self):
if os.getuid() == 0:
raise SkipTest("Running test as root, test skipped")
lp = LoadParm()
acl = "O:S-1-5-21-2212615479-2695158682-2101375467-512G:S-1-5-21-2212615479-2695158682-2101375467-513D:(A;OICI;0x001f01ff;;;S-1-5-21-2212615479-2695158682-2101375467-512)"
open(self.tempf, 'w').write("empty")
lp.set("posix:eadb", os.path.join(self.tempdir,"eadbtest.tdb"))
self.assertRaises(Exception, setntacl, lp, self.tempf ,acl,
"S-1-5-21-2212615479-2695158682-2101375467","native")
def setUp(self):
super(NtaclsTests, self).setUp()
self.tempf = os.path.join(self.tempdir, "test")
open(self.tempf, 'w').write("empty")
def tearDown(self):
os.unlink(self.tempf)
super(NtaclsTests, self).tearDown()
| 50.506024 | 179 | 0.70062 | 603 | 4,192 | 4.840796 | 0.25539 | 0.013703 | 0.020555 | 0.034258 | 0.642343 | 0.610483 | 0.583076 | 0.583076 | 0.559096 | 0.559096 | 0 | 0.216559 | 0.15291 | 4,192 | 82 | 180 | 51.121951 | 0.605463 | 0.187023 | 0 | 0.45283 | 0 | 0.09434 | 0.368576 | 0.30124 | 0 | 0 | 0.014767 | 0 | 0.075472 | 1 | 0.132075 | false | 0 | 0.09434 | 0 | 0.245283 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cd3b7d036cac276f28d4d2b1c5af63da168cbbd | 15,856 | py | Python | assignment2/comp411/classifiers/fc_net.py | kukalbriiwa7/COMP511_CS231n | 1537e98cdca43fad906e56a22f48d884523414b0 | [
"MIT"
] | 1 | 2022-02-06T19:35:05.000Z | 2022-02-06T19:35:05.000Z | assignment2/comp411/classifiers/fc_net.py | kukalbriiwa7/COMP511_CS231n | 1537e98cdca43fad906e56a22f48d884523414b0 | [
"MIT"
] | null | null | null | assignment2/comp411/classifiers/fc_net.py | kukalbriiwa7/COMP511_CS231n | 1537e98cdca43fad906e56a22f48d884523414b0 | [
"MIT"
] | null | null | null | from builtins import range
from builtins import object
import numpy as np
from comp411.layers import *
from comp411.layer_utils import *
class ThreeLayerNet(object):
"""
A three-layer fully-connected neural network with Leaky ReLU nonlinearity and
softmax loss that uses a modular layer design. We assume an input dimension
of D, a hidden dimension of tuple of (H1, H2) yielding the dimension for the
first and second hidden layer respectively, and perform classification over C classes.
The architecture should be affine - leakyrelu - affine - leakyrelu - affine - softmax.
Note that this class does not implement gradient descent; instead, it
will interact with a separate Solver object that is responsible for running
optimization.
The learnable parameters of the model are stored in the dictionary
self.params that maps parameter names to numpy arrays.
"""
def __init__(self, input_dim=3*32*32, hidden_dim=(64, 32), num_classes=10,
weight_scale=1e-3, reg=0.0, alpha=1e-3):
"""
Initialize a new network.
Inputs:
- input_dim: An integer giving the size of the input
- hidden_dim: A tuple giving the size of the first and second hidden layer respectively
- num_classes: An integer giving the number of classes to classify
- weight_scale: Scalar giving the standard deviation for random
initialization of the weights.
- reg: Scalar giving L2 regularization strength.
- alpha: negative slope of Leaky ReLU layers
"""
self.params = {}
self.reg = reg
self.alpha = alpha
############################################################################
# TODO: Initialize the weights and biases of the three-layer net. Weights #
# should be initialized from a Gaussian centered at 0.0 with #
# standard deviation equal to weight_scale, and biases should be #
# initialized to zero. All weights and biases should be stored in the #
# dictionary self.params, with first layer weights #
# and biases using the keys 'W1' and 'b1', second layer #
# weights and biases using the keys 'W2' and 'b2', #
# and third layer weights and biases using the keys 'W3' and 'b3. #
# #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
self.params['W1'] = weight_scale * np.random.randn(input_dim,hidden_dim[0])
self.params['W2'] = weight_scale * np.random.randn(hidden_dim[0],hidden_dim[1])
self.params['W3'] = weight_scale * np.random.randn(hidden_dim[1],num_classes)
self.params['b1'] = np.zeros(hidden_dim[0])
self.params['b2'] = np.zeros(hidden_dim[1])
self.params['b3'] = np.zeros(num_classes)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
def loss(self, X, y=None):
"""
Compute loss and gradient for a minibatch of data.
Inputs:
- X: Array of input data of shape (N, d_1, ..., d_k)
- y: Array of labels, of shape (N,). y[i] gives the label for X[i].
Returns:
If y is None, then run a test-time forward pass of the model and return:
- scores: Array of shape (N, C) giving classification scores, where
scores[i, c] is the classification score for X[i] and class c.
If y is not None, then run a training-time forward and backward pass and
return a tuple of:
- loss: Scalar value giving the loss
- grads: Dictionary with the same keys as self.params, mapping parameter
names to gradients of the loss with respect to those parameters.
"""
scores = None
############################################################################
# TODO: Implement the forward pass for the three-layer net, computing the #
# class scores for X and storing them in the scores variable. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
W1 = self.params['W1']
W2 = self.params['W2']
W3 = self.params['W3']
b1 = self.params['b1']
b2 = self.params['b2']
b3 = self.params['b3']
X2 , lrelu_cache1 = affine_lrelu_forward(X,W1,b1,{"alpha": self.alpha})
X3 , lrelu_cache2 = affine_lrelu_forward(X2,W2,b2,{"alpha": self.alpha})
scores, affine_cache = affine_forward(X3,W3,b3)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
# If y is None then we are in test mode so just return scores
if y is None:
return scores
loss, grads = 0, {}
############################################################################
# TODO: Implement the backward pass for the three-layer net. Store the loss#
# in the loss variable and gradients in the grads dictionary. Compute data #
# loss using softmax, and make sure that grads[k] holds the gradients for #
# self.params[k]. Don't forget to add L2 regularization! #
# #
# NOTE: To ensure that your implementation matches ours and you pass the #
# automated tests, make sure that your L2 regularization includes a factor #
# of 0.5 to simplify the expression for the gradient. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
loss, softmax_grad = softmax_loss(scores, y)
loss += 0.5 * self.reg * ( np.sum(W1 * W1) + np.sum(W2 * W2) + np.sum(W3 * W3) )
dx3, dw3, db3 = affine_backward(softmax_grad, affine_cache)
dx2, dw2, db2 = affine_lrelu_backward(dx3, lrelu_cache2)
dx1, dw1, db1 = affine_lrelu_backward(dx2, lrelu_cache1)
grads['W3'] = dw3 + self.reg * W3
grads['b3'] = db3
grads['W2'] = dw2 + self.reg * W2
grads['b2'] = db2
grads['W1'] = dw1 + self.reg * W1
grads['b1'] = db1
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return loss, grads
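# A minimal usage sketch (hypothetical shapes, not part of the assignment code):
#   model = ThreeLayerNet(input_dim=3072, hidden_dim=(64, 32), num_classes=10)
#   X = np.random.randn(5, 3072); y = np.random.randint(10, size=5)
#   loss, grads = model.loss(X, y)  # grads['W1'].shape == (3072, 64)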
class FullyConnectedNet(object):
"""
A fully-connected neural network with an arbitrary number of hidden layers,
LeakyReLU nonlinearities, and a softmax loss function. This will also implement
dropout optionally. For a network with L layers, the architecture will be
{affine - leakyrelu - [dropout]} x (L - 1) - affine - softmax
where dropout is optional, and the {...} block is repeated L - 1 times.
Similar to the ThreeLayerNet above, learnable parameters are stored in the
self.params dictionary and will be learned using the Solver class.
"""
def __init__(self, hidden_dims, input_dim=3*32*32, num_classes=10,
dropout=1, reg=0.0, alpha=1e-2,
weight_scale=1e-2, dtype=np.float32, seed=None):
"""
Initialize a new FullyConnectedNet.
Inputs:
- hidden_dims: A list of integers giving the size of each hidden layer.
- input_dim: An integer giving the size of the input.
- num_classes: An integer giving the number of classes to classify.
- dropout: Scalar between 0 and 1 giving dropout strength. If dropout=1 then
the network should not use dropout at all.
- reg: Scalar giving L2 regularization strength.
- alpha: negative slope of Leaky ReLU layers
- weight_scale: Scalar giving the standard deviation for random
initialization of the weights.
- dtype: A numpy datatype object; all computations will be performed using
this datatype. float32 is faster but less accurate, so you should use
float64 for numeric gradient checking.
- seed: If not None, then pass this random seed to the dropout layers. This
will make the dropout layers deterministic so we can gradient check the
model.
"""
self.use_dropout = dropout != 1
self.reg = reg
self.alpha = alpha
self.num_layers = 1 + len(hidden_dims)
self.dtype = dtype
self.params = {}
############################################################################
# TODO: Initialize the parameters of the network, storing all values in #
# the self.params dictionary. Store weights and biases for the first layer #
# in W1 and b1; for the second layer use W2 and b2, etc. Weights should be #
# initialized from a normal distribution centered at 0 with standard #
# deviation equal to weight_scale. Biases should be initialized to zero. #
# #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
dims = np.hstack((input_dim, hidden_dims, num_classes))
for i in range(self.num_layers):
self.params['W%d' % (i + 1)] = weight_scale * np.random.randn(dims[i], dims[i+1])
self.params['b%d' % (i + 1)] = np.zeros(dims[i+1])
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
# When using dropout we need to pass a dropout_param dictionary to each
# dropout layer so that the layer knows the dropout probability and the mode
# (train / test). You can pass the same dropout_param to each dropout layer.
self.dropout_param = {}
if self.use_dropout:
self.dropout_param = {'mode': 'train', 'p': dropout}
if seed is not None:
self.dropout_param['seed'] = seed
# Cast all parameters to the correct datatype
for k, v in self.params.items():
self.params[k] = v.astype(dtype)
def loss(self, X, y=None):
"""
Compute loss and gradient for the fully-connected net.
Input / output: Same as ThreeLayerNet above.
"""
X = X.astype(self.dtype)
mode = 'test' if y is None else 'train'
# Set train/test mode for dropout param since it
# behaves differently during training and testing.
if self.use_dropout:
self.dropout_param['mode'] = mode
scores = None
############################################################################
# TODO: Implement the forward pass for the fully-connected net, computing #
# the class scores for X and storing them in the scores variable. #
# #
# When using dropout, you'll need to pass self.dropout_param to each #
# dropout forward pass. #
# #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
hidden_num = self.num_layers - 1
scores = X
cache_history = []
L2reg = 0
for i in range(hidden_num):
scores, cache = affine_lrelu_forward(scores, self.params['W%d' % (i + 1)], self.params['b%d' % (i + 1)],{"alpha": self.alpha})
cache_history.append(cache)
if self.use_dropout:
scores, cache = dropout_forward(scores, self.dropout_param)
cache_history.append(cache)
L2reg += np.sum(self.params['W%d' % (i + 1)] ** 2)
i += 1
scores, cache = affine_forward(scores, self.params['W%d' % (i + 1)],
self.params['b%d' % (i + 1)])
cache_history.append(cache)
L2reg += np.sum(self.params['W%d' % (i + 1)] ** 2)
L2reg *= 0.5 * self.reg
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
# If test mode return early
if mode == 'test':
return scores
loss, grads = 0.0, {}
############################################################################
# TODO: Implement the backward pass for the fully-connected net. Store the #
# loss in the loss variable and gradients in the grads dictionary. Compute #
# data loss using softmax, and make sure that grads[k] holds the gradients #
# for self.params[k]. Don't forget to add L2 regularization! #
# #
# #
# NOTE: To ensure that your implementation matches ours and you pass the #
# automated tests, make sure that your L2 regularization includes a factor #
# of 0.5 to simplify the expression for the gradient. #
############################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
loss, dout = softmax_loss(scores, y)
loss += L2reg
dout, grads['W%d' % (i + 1)], grads['b%d' % (i + 1)] = affine_backward(dout, cache_history.pop())
grads['W%d' % (i + 1)] += self.reg * self.params['W%d' % (i + 1)]
i -= 1
while i >= 0:
if self.use_dropout:
dout = dropout_backward(dout, cache_history.pop())
dout, grads['W%d' % (i + 1)], grads['b%d' % (i + 1)] = affine_lrelu_backward(dout, cache_history.pop())
grads['W%d' % (i + 1)] += self.reg * self.params['W%d' % (i + 1)]
i -= 1
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
############################################################################
# END OF YOUR CODE #
############################################################################
return loss, grads
| 50.336508 | 138 | 0.48663 | 1,768 | 15,856 | 4.308258 | 0.172511 | 0.043324 | 0.023631 | 0.018905 | 0.48418 | 0.416831 | 0.37797 | 0.345805 | 0.31456 | 0.303269 | 0 | 0.017054 | 0.319564 | 15,856 | 314 | 139 | 50.496815 | 0.688942 | 0.495585 | 0 | 0.264706 | 0 | 0 | 0.023407 | 0 | 0 | 0 | 0 | 0.012739 | 0 | 1 | 0.039216 | false | 0 | 0.04902 | 0 | 0.147059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cd43ca8bd2a786d14de4833789bfb393dcd55d1 | 1,552 | py | Python | setup.py | PureTryOut/pico-wizard | d5ffe9a777a33c5f5ab5772db84315595d19a50a | [
"MIT"
] | 11 | 2021-02-05T13:49:44.000Z | 2022-02-24T14:14:46.000Z | setup.py | PureTryOut/pico-wizard | d5ffe9a777a33c5f5ab5772db84315595d19a50a | [
"MIT"
] | 26 | 2021-02-12T17:25:34.000Z | 2021-12-30T07:47:18.000Z | setup.py | PureTryOut/pico-wizard | d5ffe9a777a33c5f5ab5772db84315595d19a50a | [
"MIT"
] | 6 | 2021-06-21T17:37:15.000Z | 2022-01-15T16:07:47.000Z | # SPDX-FileCopyrightText: 2021 Anupam Basak <anupam.basak27@gmail.com>
#
# SPDX-License-Identifier: MIT
import setuptools
setuptools.setup(
    name="pico-wizard",
    version="0.1.0",
    author="Anupam Basak",
    author_email="anupam.basak27@gmail.com",
    description="A Post Installation COnfiguration tool",
    long_description="A Post Installation COnfiguration tool for Linux OSes",
    long_description_content_type="text/plain",
    scripts=["files/pico-wizard-script-runner"],
    entry_points={
        "console_scripts": [
            "pico-wizard = PicoWizard.__main__:__main__",
        ]
    },
    url="https://github.com/pico-wizard/pico-wizard",
    project_urls={
        "Bug Tracker": "https://github.com/pico-wizard/pico-wizard/issues",
        "Documentation": "https://github.com/pico-wizard/pico-wizard",
        "Source Code": "https://github.com/pico-wizard/pico-wizard",
    },
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License"
    ],
    license="MIT",
    install_requires=[
        ### PySide2 needs to be installed from the Manjaro repository;
        ### pip doesn't provide a prebuilt arm64 wheel
        # "pyside2"
    ],
    python_requires=">=3.6",
    package_data={
        "": [
            "*.qml",
            "**/*.qml",
            "**/*.svg",
            "**/*.svg.license",
            "**/*.sh",
            "**/qmldir",
            "PicoWizard/**/*.svg"
        ]
    },
    include_package_data=True,
)
| 29.846154 | 77 | 0.589562 | 158 | 1,552 | 5.651899 | 0.563291 | 0.12318 | 0.06271 | 0.080627 | 0.25308 | 0.25308 | 0.152296 | 0 | 0 | 0 | 0 | 0.015517 | 0.252577 | 1,552 | 51 | 78 | 30.431373 | 0.75431 | 0.125644 | 0 | 0.046512 | 0 | 0 | 0.448737 | 0.061664 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.023256 | 0 | 0.023256 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cda297e14662b818246dabf96df5f485117391e | 1,448 | py | Python | core/tests/test_models.py | maneeshbabu/recipe | 8cb02bf524f7676b11ec68fb4e6e518dea6b3456 | [
"MIT"
] | null | null | null | core/tests/test_models.py | maneeshbabu/recipe | 8cb02bf524f7676b11ec68fb4e6e518dea6b3456 | [
"MIT"
] | null | null | null | core/tests/test_models.py | maneeshbabu/recipe | 8cb02bf524f7676b11ec68fb4e6e518dea6b3456 | [
"MIT"
] | null | null | null | from django.test import TestCase
from django.contrib.auth import get_user_model
class ModelTestCase(TestCase):
    def test_create_user_with_email_successful(self):
        """Test creating a new user with email is successful"""
        email = "test@example.com"
        password = "test123"
        user = get_user_model().objects.create_user(email=email,
                                                    password=password)
        self.assertEqual(user.email, email)
        self.assertTrue(user.check_password(password))

    def test_new_user_email_normalized(self):
        """Test creating a new user with email is normalized"""
        email = "test@EXAMPLE.com"
        user = get_user_model().objects.create_user(email=email,
                                                    password='test123')
        self.assertEqual(user.email, email.lower())

    def test_new_user_invalid_email(self):
        """Test creating a new user with an invalid email raises an exception"""
        with self.assertRaises(ValueError):
            get_user_model().objects.create_user(email=None,
                                                 password='test123')

    def test_new_user_is_superuser(self):
        """Test creating a new superuser"""
        user = get_user_model().objects.create_superuser(
            email="test@example.com", password="test123")
        self.assertTrue(user.is_superuser)
        self.assertTrue(user.is_staff)
| 37.128205 | 73 | 0.619475 | 165 | 1,448 | 5.230303 | 0.248485 | 0.048667 | 0.069525 | 0.078795 | 0.473928 | 0.383546 | 0.271147 | 0.199305 | 0.199305 | 0.118192 | 0 | 0.011628 | 0.287293 | 1,448 | 38 | 74 | 38.105263 | 0.824612 | 0.131215 | 0 | 0.166667 | 0 | 0 | 0.061439 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.166667 | false | 0.25 | 0.083333 | 0 | 0.291667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8cddfb293ba557275c826d5a51078f1b1456befd | 5,980 | py | Python | CIF_Assembly_Entry_debug.py | cschlick/cif-assembly | 84c24b6d1dd4c878be8b9fba69d3a0c7e2d81382 | [
"MIT"
] | null | null | null | CIF_Assembly_Entry_debug.py | cschlick/cif-assembly | 84c24b6d1dd4c878be8b9fba69d3a0c7e2d81382 | [
"MIT"
] | null | null | null | CIF_Assembly_Entry_debug.py | cschlick/cif-assembly | 84c24b6d1dd4c878be8b9fba69d3a0c7e2d81382 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8

import sys

from mmtbx.ncs.ncs import ncs
from phenix.programs import map_symmetry as map_symmetry_program
from cctbx.maptbx.segment_and_split_map import run_get_ncs_from_map
from scitbx.array_family import flex  # needed for flex.double below
from iotbx import phil
from iotbx.data_manager import DataManager

# initialize data manager
dm = DataManager()

# file IO
# this map has origin at (0,0,0)
# dm.process_model_file("../6ui6_fit_in_corner_map.pdb")
# dm.process_real_map_file("../emd_20669_corner_zero.map")

# this map has origin at data.all()/2
dm.process_model_file("../6ui6.pdb")
dm.process_real_map_file("../emd_20669.map")

# Run Tom's map_symmetry tool
mm = dm.get_real_map()
mm.shift_origin()
mm.find_map_symmetry()
print(type(mm._ncs_object))  # this was not set...

# debug find_map_symmetry()
self = mm
include_helical_symmetry = False
symmetry_center = None
min_ncs_cc = None
symmetry = None
ncs_object = None
check_crystal_symmetry = True
only_proceed_if_crystal_symmetry = False

assert self.origin_is_zero()
self._warning_message = ""
self._ncs_cc = None
from cctbx.maptbx.segment_and_split_map import \
    run_get_ncs_from_map, get_params

if symmetry is None:
    symmetry = 'ALL'
if symmetry_center is None:
    # Most likely map center is (1/2,1/2,1/2) in full grid
    full_unit_cell = self.unit_cell_crystal_symmetry(
        ).unit_cell().parameters()[:3]
    symmetry_center = []
    for x, sc in zip(full_unit_cell, self.shift_cart()):
        # version 1 # original version gives incorrect symmetry
        # symmetry_center.append(0.5*x + sc)
        # version 2 # this finds I(b), the correct symmetry
        symmetry_center.append(0.5*x)
    symmetry_center = tuple(symmetry_center)

params = get_params(args=[],
                    symmetry=symmetry,
                    include_helical_symmetry=include_helical_symmetry,
                    symmetry_center=symmetry_center,
                    min_ncs_cc=min_ncs_cc,
                    return_params_only=True,
                    )

space_group_number = None
if check_crystal_symmetry and symmetry == 'ALL' and (not ncs_object):
    # See if we can narrow it down looking at intensities at low-res
    d_min = 0.05*self.crystal_symmetry().unit_cell().volume()**0.333
    map_coeffs = self.map_as_fourier_coefficients(d_min=d_min)
    from iotbx.map_model_manager import get_map_coeffs_as_fp_phi
    f_array_info = get_map_coeffs_as_fp_phi(map_coeffs, d_min=d_min,
                                            n_bins=15)
    ampl = f_array_info.f_array
    data = ampl.customized_copy(
        data=ampl.data(), sigmas=flex.double(ampl.size(), 1.))
    from mmtbx.scaling.twin_analyses import symmetry_issues
    si = symmetry_issues(data)
    cs_possibility = si.xs_with_pg_choice_in_standard_setting
    space_group_number = cs_possibility.space_group_number()
    # # necessary to remove or for me it will return None
    # if space_group_number < 2:
    #     space_group_number = None
    # if space_group_number is None and only_proceed_if_crystal_symmetry:
    #     return  # skip looking further
    params.reconstruction_symmetry.\
        must_be_consistent_with_space_group_number = space_group_number

new_ncs_obj, ncs_cc, ncs_score = run_get_ncs_from_map(
    params=params,
    map_data=self.map_data(),
    crystal_symmetry=self.crystal_symmetry(),
    out=sys.stdout,
    ncs_obj=ncs_object)

# Build cif model from ncs_obj
from iotbx import cif

model = dm.get_model()
h = model.get_hierarchy()
chains = [c.id for c in h.chains()]
# assumption: use the first (here the only) NCS group of the detected object
ncs_group = new_ncs_obj.ncs_groups()[0]
n_oper = ncs_group.n_ncs_oper()

# start cif building
builder = cif.builders.cif_model_builder()
builder.add_data_block("assembly_information")

# add pdbx_struct_assembly loop
headers = ['_pdbx_struct_assembly.id',
           '_pdbx_struct_assembly.details',
           '_pdbx_struct_assembly.method_details',
           '_pdbx_struct_assembly.oligomeric_details',
           '_pdbx_struct_assembly.oligomeric_count']
columns = [["1"], ["Symmetry assembly " + new_ncs_obj.get_ncs_name()], ["?"], ["?"], ["?"]]
builder.add_loop(headers, columns)

# add pdbx_struct_assembly_gen loop
headers = ['_pdbx_struct_assembly_gen.assembly_id',
           '_pdbx_struct_assembly_gen.oper_expression',
           '_pdbx_struct_assembly_gen.asym_id_list']
columns = [["1"], ["(1-" + str(n_oper) + ")"], [','.join(chains)]]
builder.add_loop(headers, columns)

# add pdbx_struct_oper_list loop
headers = ['_pdbx_struct_oper_list.id',
           '_pdbx_struct_oper_list.type',
           '_pdbx_struct_oper_list.name',
           '_pdbx_struct_oper_list.symmetry_operation',
           '_pdbx_struct_oper_list.matrix[1][1]',
           '_pdbx_struct_oper_list.matrix[1][2]',
           '_pdbx_struct_oper_list.matrix[1][3]',
           '_pdbx_struct_oper_list.vector[1]',
           '_pdbx_struct_oper_list.matrix[2][1]',
           '_pdbx_struct_oper_list.matrix[2][2]',
           '_pdbx_struct_oper_list.matrix[2][3]',
           '_pdbx_struct_oper_list.vector[2]',
           '_pdbx_struct_oper_list.matrix[3][1]',
           '_pdbx_struct_oper_list.matrix[3][2]',
           '_pdbx_struct_oper_list.matrix[3][3]',
           '_pdbx_struct_oper_list.vector[3]']
_id = list(range(1, n_oper + 1))
_type = ['point symmetry operation' for i in range(n_oper)]
_name = ['?' for i in range(n_oper)]
_symmetry_operation = ['?' for i in range(n_oper)]
info_columns = [_id, _type, _name, _symmetry_operation]
rotations = [[r[i] for r in ncs_group.rota_matrices_inv()] for i in range(9)]
#translations = [[t[i] for t in ncs_group.translations_orth_inv()] for i in range(3)]
translations = [[0.0 for t in ncs_group.translations_orth_inv()] for i in range(3)]  # debug, translations are not meaningful
numeric_columns = [rotations[0], rotations[1], rotations[2], translations[0],
                   rotations[3], rotations[4], rotations[5], translations[1],
                   rotations[6], rotations[7], rotations[8], translations[2]]
columns = info_columns + numeric_columns
builder.add_loop(headers, columns)

# Combine cif string with mmcif string.
from StringIO import StringIO
output = StringIO()
cif_model = builder.model()
cif_model = cif_model[cif_model.keys()[0]]
cif_model.show(indent="", out=output)
filestring = ("data_default\n"
              + output.getvalue().replace("data_information\n", "")
              + dm.get_model().model_as_mmcif().replace("data_default\n", ""))
dm.write_model_file(filestring, filename="../6ui6_processed.cif", overwrite=True)
| 32.150538 | 141 | 0.756355 | 936 | 5,980 | 4.464744 | 0.261752 | 0.064609 | 0.056951 | 0.073223 | 0.314669 | 0.199569 | 0.137832 | 0.09811 | 0.047858 | 0.047858 | 0 | 0.016939 | 0.121405 | 5,980 | 185 | 142 | 32.324324 | 0.778455 | 0 | 0 | 0.025862 | 0 | 0 | 0.20346 | 0.171952 | 0 | 0 | 0 | 0 | 0.008621 | 0 | null | null | 0 | 0.086207 | null | null | 0.008621 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8ce17a97e863e42c948a052d4db20da7594b80e9 | 6,771 | py | Python | EMLABPY/modules/makefinancialreports.py | TradeRES/toolbox-amiris-emlab | 11e6e7101bfbc0d71753e3892d4463c4955d2c34 | [
"Unlicense"
] | null | null | null | EMLABPY/modules/makefinancialreports.py | TradeRES/toolbox-amiris-emlab | 11e6e7101bfbc0d71753e3892d4463c4955d2c34 | [
"Unlicense"
] | null | null | null | EMLABPY/modules/makefinancialreports.py | TradeRES/toolbox-amiris-emlab | 11e6e7101bfbc0d71753e3892d4463c4955d2c34 | [
"Unlicense"
] | null | null | null | from domain.import_object import *
from modules.defaultmodule import DefaultModule
from domain.financialReports import FinancialPowerPlantReport
from domain.powerplant import PowerPlant
from domain.cashflow import CashFlow
from domain.technologies import *
import logging
class CreatingFinancialReports(DefaultModule):

    def __init__(self, reps):
        super().__init__("Creating Financial Reports", reps)
        reps.dbrw.stage_init_financial_results_structure()

    def act(self):
        # fuelPriceMap = {}
        # for substance in self.reps.substances:
        #     fuelPriceMap.update({substance: findLastKnownPriceForSubstance(substance)})
        # TODO WHY findAllPowerPlantsWhichAreNotDismantledBeforeTick(self.reps.current_tick - 2)
        self.createFinancialReportsForPowerPlantsAndTick(self.reps.power_plants, self.reps.current_tick)
        print("finished financial report")

    def createFinancialReportsForNewInvestments(self):
        self.createFinancialReportsForPowerPlantsAndTick(
            self.reps.findAllPowerPlantsWithConstructionStartTimeInTick(self.reps.current_tick),
            self.reps.current_tick)

    # todo -> probably this is needed only for operational power plants
    def createFinancialReportsForPowerPlantsAndTick(self, plants, tick):
        financialPowerPlantReports = []
        for plant in plants.values():
            financialPowerPlantReport = FinancialPowerPlantReport(plant.name, self.reps)
            financialPowerPlantReport.setTime(tick)
            financialPowerPlantReport.setPowerPlant(plant.name)
            totalSupply = plant.getAwardedPowerinMWh()
            financialPowerPlantReport.setProduction(totalSupply)
            financialPowerPlantReport.setSpotMarketRevenue(plant.ReceivedMoneyinEUR)
            financialPowerPlantReport.setProfit(plant.Profit)
            financialPowerPlantReports.append(financialPowerPlantReport)
        # stage all collected reports at once
        self.reps.dbrw.stage_financial_results(financialPowerPlantReports)

        # if plant.getFuelMix() is None:
        #     plant.setFuelMix(java.util.HashSet())
        # for share in plant.getFuelMix():
        #     amount = share.getShare() * totalSupply
        #     substance = share.getSubstance()
        #     substanceCost = findLastKnownPriceForSubstance(substance) * amount
        #     financialPowerPlantReport.setCommodityCosts(financialPowerPlantReport.getCommodityCosts() + substanceCost)

        # TODO add cash flows
        # cashFlows = self.reps.getCashFlowsForPowerPlant(plant, tick)
        # financialPowerPlantReport.setCo2Costs(self.calculateCO2CostsOfPowerPlant(cashFlows))
        # financialPowerPlantReport.setVariableCosts(financialPowerPlantReport.getCommodityCosts() + financialPowerPlantReport.getCo2Costs())
        # Determine fixed costs
        # financialPowerPlantReport.setFixedCosts(self.calculateFixedCostsOfPowerPlant(cashFlows))
        # financialPowerPlantReport.setFixedOMCosts(self.calculateFixedOMCostsOfPowerPlant(cashFlows))
        # financialPowerPlantReport.setStrategicReserveRevenue(self.calculateStrategicReserveRevenueOfPowerPlant(cashFlows))
        # financialPowerPlantReport.setCapacityMarketRevenue(self.calculateCapacityMarketRevenueOfPowerPlant(cashFlows))
        # financialPowerPlantReport.setCo2HedgingRevenue(self.calculateCO2HedgingRevenueOfPowerPlant(cashFlows))
        # financialPowerPlantReport.setOverallRevenue(financialPowerPlantReport.getCapacityMarketRevenue() + financialPowerPlantReport.getCo2HedgingRevenue() + financialPowerPlantReport.getSpotMarketRevenue() + financialPowerPlantReport.getStrategicReserveRevenue())
        # Calculate Full load hours
        # financialPowerPlantReport.setFullLoadHours(self.reps.calculateFullLoadHoursOfPowerPlant(plant, tick))

    # def calculateSpotMarketRevenueOfPowerPlant(self, cashFlows):
    #     toReturn = cashFlows.stream().filter(lambda p : p.getType() == emlab.gen.domain.contract.CashFlow.ELECTRICITY_SPOT).collect(java.util.stream.Collectors.summarizingDouble(emlab.gen.domain.contract.CashFlow::getMoney)).getSum()
    #     java.util.logging.Logger.getGlobal().finer("Income Spot " + toReturn)
    #     return toReturn

    # def calculateLongTermContractRevenueOfPowerPlant(self, cashFlows):
    #     toReturn = cashFlows.stream().filter(lambda p : p.getType() == emlab.gen.domain.contract.CashFlow.ELECTRICITY_LONGTERM).collect(java.util.stream.Collectors.summarizingDouble(emlab.gen.domain.contract.CashFlow::getMoney)).getSum()
    #     java.util.logging.Logger.getGlobal().finer("Income LT " + toReturn)
    #     return toReturn

    # def calculateStrategicReserveRevenueOfPowerPlant(self, cashFlows):
    #     toReturn = cashFlows.stream().filter(lambda p : p.getType() == emlab.gen.domain.contract.CashFlow.STRRESPAYMENT).collect(java.util.stream.Collectors.summarizingDouble(emlab.gen.domain.contract.CashFlow::getMoney)).getSum()
    #     java.util.logging.Logger.getGlobal().finer("Income strategic reserve " + toReturn)
    #     return toReturn

    # def calculateCapacityMarketRevenueOfPowerPlant(self, cashFlows):
    #     toReturn = cashFlows.stream().filter(lambda p : p.getType() == emlab.gen.domain.contract.CashFlow.CAPMARKETPAYMENT).collect(java.util.stream.Collectors.summarizingDouble(emlab.gen.domain.contract.CashFlow::getMoney)).getSum()
    #     java.util.logging.Logger.getGlobal().finer("Income Capacity market " + toReturn)
    #     return toReturn

    # def calculateCO2HedgingRevenueOfPowerPlant(self, cashFlows):
    #     toReturn = cashFlows.stream().filter(lambda p : p.getType() == emlab.gen.domain.contract.CashFlow.CO2HEDGING).collect(java.util.stream.Collectors.summarizingDouble(emlab.gen.domain.contract.CashFlow::getMoney)).getSum()
    #     java.util.logging.Logger.getGlobal().finer("Income CO2 Hedging" + toReturn)
    #     return toReturn

    # def calculateCO2CostsOfPowerPlant(self, list):
    #     return list.stream().filter(lambda p : (p.getType() == emlab.gen.domain.contract.CashFlow.CO2TAX) or (p.getType() == emlab.gen.domain.contract.CashFlow.CO2AUCTION) or (p.getType() == emlab.gen.domain.contract.CashFlow.NATIONALMINCO2)).mapToDouble(lambda p : p.getMoney()).sum()

    # def calculateFixedCostsOfPowerPlant(self, list):
    #     pass
    #     # return list.stream().filter(lambda p : (p.getType() == CashFlow.FIXEDOMCOST) or (p.getType() == CashFlow.LOAN) or (p.getType() == CashFlow.DOWNPAYMENT)).mapToDouble(lambda p : p.getMoney()).sum()

    # def calculateFixedOMCostsOfPowerPlant(self, list):
    #     pass
    #     # return list.stream().filter(lambda p : (p.getType() == CashFlow.FIXEDOMCOST)).mapToDouble(lambda p : p.getMoney()).sum()
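
    # Illustrative sketch (an assumption, not part of the original module): the
    # commented-out Java stream pipelines above translate to plain Python
    # generator expressions over CashFlow objects, e.g.:
    #
    #     def calculateFixedOMCostsOfPowerPlant(self, cash_flows):
    #         return sum(cf.getMoney() for cf in cash_flows
    #                    if cf.getType() == CashFlow.FIXEDOMCOST)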
| 69.091837 | 287 | 0.742136 | 572 | 6,771 | 8.746504 | 0.27972 | 0.020788 | 0.036378 | 0.057166 | 0.30002 | 0.30002 | 0.294024 | 0.280832 | 0.262842 | 0.262842 | 0 | 0.002449 | 0.155664 | 6,771 | 97 | 288 | 69.804124 | 0.87266 | 0.679073 | 0 | 0 | 0 | 0 | 0.024171 | 0 | 0 | 0 | 0 | 0.010309 | 0 | 1 | 0.142857 | false | 0 | 0.25 | 0 | 0.428571 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8ce2f056900cdd54d05f6b392b7b1f1788adf0cb | 3,174 | py | Python | guess_number.py | redlinger/Guess_Number | ce0a08159b66b9c5bf8e2c529f02fc2a5f7071b7 | [
"MIT"
] | null | null | null | guess_number.py | redlinger/Guess_Number | ce0a08159b66b9c5bf8e2c529f02fc2a5f7071b7 | [
"MIT"
] | null | null | null | guess_number.py | redlinger/Guess_Number | ce0a08159b66b9c5bf8e2c529f02fc2a5f7071b7 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Guess a Number between 1 and 100
Setup: The computer generates a random number between 1 and 100. The human's
goal is to guess that number in 5 or fewer guesses.
"""
import random
import time
# human number guess
hum_num = 0
# 5 tries to guess the right number
guesses = 5
# num_check =1 if guess right; =0 if guess wrong
num_check = 0
# variables to keep track of score
comp_wins = 0
comp_loss = 0
# function to check if guess is above, below, or equal to my number
def number_check(hum_num, comp_num, guess_left):
    # if input is valid, then see if number is below, above, or same
    if hum_num < comp_num:
        print('My number is above %d.' % hum_num)
        number_right = 0
    elif hum_num > comp_num:
        print('My number is below %d.' % hum_num)
        number_right = 0
    elif hum_num == comp_num:
        number_right = 1
    return number_right


# BEGIN GAME ##################################################################
print('Guess my number! Enter q to quit.')
print("I'm thinking of a number between 1 and 100.")
print("Guess what number I picked with 5 or fewer guesses and you win (nothing)")

# continue playing until player wants to quit
while hum_num != 'q':
    # print score
    print('Score:')
    print('Human', 'Computer', sep='\t')
    print('  %s \t   %s' % (comp_loss, comp_wins))
    guess_left = guesses
    num_check = 0
    # draw a number between 1 and 100
    draw = random.randrange(1, 101)
    while guess_left > 0:
        # get input from human
        print()
        hum_num = input('Enter number between 1 and 100: ')
        # end if user wants to quit
        if hum_num == 'q':
            print('Final Score:')
            print('Human', 'Computer', sep='\t')
            print('  %s \t   %s' % (comp_loss, comp_wins))
            break
        # if input is invalid, report error
        try:
            hum_num = int(hum_num)
        except ValueError:
            print('Invalid entry. Enter a number between 1 and 100')
            continue
        if hum_num > 100 or hum_num < 1:
            print('Invalid entry. Enter a number between 1 and 100')
        else:
            # check if human's guess is below, above, or equal to number
            num_check = number_check(hum_num=hum_num, comp_num=draw,
                                     guess_left=guess_left)
            # if guess is correct, report human win and go to next game
            if num_check == 1:
                print('My number is %i. You won this round!' % draw)
                comp_loss += 1
                break
            guess_left -= 1
            if guess_left > 1:
                print('You have %i guesses left' % guess_left)
            else:
                print('You have %i guess left' % guess_left)
    # if human is not able to guess correct in 5 tries, they lose
    if num_check == 0 and hum_num != 'q':
        print('My number is %i. You lost this round.' % draw)
        comp_wins += 1
    # short break so slow human can read and then report score
    time.sleep(2)
| 28.854545 | 82 | 0.538122 | 440 | 3,174 | 3.768182 | 0.270455 | 0.06152 | 0.050663 | 0.06152 | 0.252714 | 0.204463 | 0.130881 | 0.130881 | 0.104946 | 0.104946 | 0 | 0.02712 | 0.361059 | 3,174 | 109 | 83 | 29.119266 | 0.790434 | 0.272842 | 0 | 0.188679 | 1 | 0 | 0.218052 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018868 | false | 0 | 0.037736 | 0 | 0.075472 | 0.320755 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8ce83244a55e8bc68a89ac42669afba7cecbc338 | 1,961 | py | Python | tests/test__file_importer.py | johnmdelgado/SRE-Project | 4637c2fa5a7d93da96d1e14ab96fcab8b652f076 | [
"MIT"
] | null | null | null | tests/test__file_importer.py | johnmdelgado/SRE-Project | 4637c2fa5a7d93da96d1e14ab96fcab8b652f076 | [
"MIT"
] | null | null | null | tests/test__file_importer.py | johnmdelgado/SRE-Project | 4637c2fa5a7d93da96d1e14ab96fcab8b652f076 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
'''
FileName: test__file_importer.py
Author: John Delgado
Created Date: 8/7/2020
Version: 1.0 Initial Development

This is the testing file for the file_importer script
'''
import os
import sys
import inspect

functions_dir = os.path.dirname(os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))) + "/scripts"  # scripts directory
print(functions_dir)
sys.path.insert(0, functions_dir)
import file_importer
import unittest
import yaml

with open("../configs/config.yaml", "r") as ymlfile:
    config = yaml.safe_load(ymlfile)


class password_characters_test_case(unittest.TestCase):
    # ========================================================================================
    # Program terminating test cases
    # ========================================================================================
    def test_file_path_does_not_exist(self):
        # should raise because the file path does not exist
        test_string = "./test.txt"
        self.assertRaises(Exception, file_importer.file_importer, test_string, config["debugging"]["test_debug"])

    def test_file_is_not_txt_file(self):
        # should raise because the file is not a txt file
        testString = "../data/test.csv"
        self.assertRaises(Exception, file_importer.file_importer, testString, config["debugging"]["test_debug"])

    # ========================================================================================
    # Valid filepaths returning map test cases
    # ========================================================================================
    def test_default_file_path_from_config(self):
        # should return a map object for the valid default path
        testString = config["testing"]["sample_excluded_pw_filepath"]
        result = file_importer.file_importer(testString,
                                             config["debugging"]["test_debug"])
        self.assertIsInstance(result, object)


if __name__ == '__main__':
    unittest.main()
| 35.654545 | 137 | 0.570117 | 196 | 1,961 | 5.454082 | 0.484694 | 0.101029 | 0.039289 | 0.064546 | 0.302152 | 0.24696 | 0.177736 | 0.108513 | 0.108513 | 0 | 0 | 0.006109 | 0.165222 | 1,961 | 54 | 138 | 36.314815 | 0.646915 | 0.36206 | 0 | 0 | 0 | 0 | 0.126521 | 0.03974 | 0 | 0 | 0 | 0 | 0.12 | 1 | 0.12 | false | 0.04 | 0.36 | 0 | 0.52 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
8ce85526db08143d6f7b5fd20f02640f67bf4970 | 1,058 | py | Python | ProjectEuler/p049.py | TISparta/competitive-programming-solutions | 31987d4e67bb874bf15653565c6418b5605a20a8 | [
"MIT"
] | 1 | 2018-01-30T13:21:30.000Z | 2018-01-30T13:21:30.000Z | ProjectEuler/p049.py | TISparta/competitive-programming-solutions | 31987d4e67bb874bf15653565c6418b5605a20a8 | [
"MIT"
] | null | null | null | ProjectEuler/p049.py | TISparta/competitive-programming-solutions | 31987d4e67bb874bf15653565c6418b5605a20a8 | [
"MIT"
] | 1 | 2018-08-29T13:26:50.000Z | 2018-08-29T13:26:50.000Z |
# Execution time : 0.440223 seconds
# Solution Explanation
# We can simplily iterate through all the 4-digits numbers
# Then generate all the permutation of this number and check
# if the desirable sequence, distinct from the given one, is found
import time
width = 40
import itertools
import math
def solution():
isPrime = lambda p: p>=1000 and all(p%it!=0 for it in range(2,int(math.sqrt(p))+1))
for num in range(1488,10000):
v = [int(''.join(ch for ch in it)) for it in itertools.permutations(str(num))]
v.sort()
for it1 in range(len(v)):
for it2 in range(it1+1,len(v)):
r = v[it2] - v[it1]
if r > 0 and v[it1]!=1487 and v[it2]+r in v:
if isPrime(v[it1]) and isPrime(v[it2]) and isPrime(v[it2]+r):
return str(v[it1])+str(v[it2])+str(v[it2]+r)
if __name__=="__main__":
start_ = time.time()
print(' Answer -> %s '.center(width,'-') % ( solution() ))
print(' %f seconds '.center(width,'-') % ( time.time() - start_))
| 31.117647 | 87 | 0.591682 | 167 | 1,058 | 3.688623 | 0.449102 | 0.038961 | 0.024351 | 0.045455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057471 | 0.259924 | 1,058 | 33 | 88 | 32.060606 | 0.729246 | 0.222117 | 0 | 0 | 1 | 0 | 0.044118 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.157895 | 0 | 0.263158 | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cf1600e5f77551d25752ca057e78d7b4032d079 | 518 | py | Python | host/rdmem.py | flowswitch/phison | d9415a8d5c62354d09cd6410754c9d8bb65e164f | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | host/rdmem.py | flowswitch/phison | d9415a8d5c62354d09cd6410754c9d8bb65e164f | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | host/rdmem.py | flowswitch/phison | d9415a8d5c62354d09cd6410754c9d8bb65e164f | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | """read XDATA memory"""
import sys
import PyScsi as drv
import Phison as ph
from util import BinFile
if len(sys.argv) != 4:
    sys.exit("Read chip internal memory\nUsage: %s <file> <addr> <size>\nExample: %s ram.bin 0 0x10000" % (sys.argv[0], sys.argv[0]))

addr = int(sys.argv[2], 0)
size = int(sys.argv[3], 0)

disk = ph.FindDrive()
if not disk:
    sys.exit("No Phison devices found !")

drv.err_mode = drv.err_mode_raise
drv.open(disk)
print "Reading..."
BinFile.save(sys.argv[1], ph.ReadMemory(addr, size))
drv.close()
| 23.545455 | 130 | 0.69305 | 92 | 518 | 3.869565 | 0.532609 | 0.117978 | 0.044944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033408 | 0.133205 | 518 | 21 | 131 | 24.666667 | 0.759465 | 0 | 0 | 0 | 0 | 0.0625 | 0.248485 | 0 | 0 | 0 | 0.014141 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cf1b63f0ee0ab41801ca7a509ae506ce2873aca | 2,672 | py | Python | zapper/context.py | alfuananzo/zapper | 8a87e332ebdad8803058793c036a1426b9240b98 | [
"MIT"
] | 1 | 2019-05-07T09:56:49.000Z | 2019-05-07T09:56:49.000Z | zapper/context.py | alfuananzo/zapper | 8a87e332ebdad8803058793c036a1426b9240b98 | [
"MIT"
] | null | null | null | zapper/context.py | alfuananzo/zapper | 8a87e332ebdad8803058793c036a1426b9240b98 | [
"MIT"
] | null | null | null | #TODO: Kill scans if overwrite is set.
from zapper.helpers import report_to_cli
class context:
    def __init__(self, target, scope, api, force=False):
        """
        Control one Context entry of ZAP. A context has the following properties:

        Attributes:
            target: The target of the scan
            scope: The scope that is allowed to scan
            api: The ZAP api that it can call. Expects the zapper.api class
            force: Overwrite the context if a current scan is going
        """
        # Remove special chars from target so it can be set as context name
        self.context = target.replace('/', '')
        self.scope = scope
        self.api = api
        self.force = force

        try:
            # assumption: the importContext response carries the context id,
            # matching the newContext response handled below
            context_info = self.api.call('POST', 'JSON/context/action/importContext', {'zapapiformat': 'JSON', 'formMethod': 'POST', 'contextFile': '%s.context' % self.context}).json()
            report_to_cli("Found existing context %s, importing" % self.context)
            self.context_id = context_info['contextId']
            return
        except:
            contexts = self.api.call('GET', 'JSON/context/view/contextList/?zapapiformat=JSON&formMethod=GET').json()
            if self.context in contexts['contextList']:
                if self.force:
                    self.delete()
                else:
                    report_to_cli('ZAP is already scanning %s, exiting.' % self.context)
                    exit(1)
            context_info = self.api.call('POST', 'JSON/context/action/newContext', {'zapapiformat': 'JSON', 'formMethod': 'POST', 'contextName': self.context}).json()
            self.context_id = context_info['contextId']
            report_to_cli("Created new ZAP context %s with context ID %s" % (self.context, self.context_id))
            # Include the domain(s) into the context
            for scope_url in self.scope:
                self.api.call('POST', 'JSON/context/action/includeInContext', {'zapapiformat': 'JSON', 'formMethod': 'POST', 'contextName': self.context, 'regex': ".*" + scope_url + ".*"})

    def delete(self):
        self.api.call('POST', 'JSON/context/action/removeContext', {'zapapiformat': 'JSON', 'formMethod': 'POST', 'contextName': self.context})
        report_to_cli("Removed ZAP context %s" % self.context)

    def name(self):
        return self.context

    def id(self):
        return self.context_id

    def store(self):
        report_to_cli('Storing context %s' % self.context)
        self.api.call('POST', 'JSON/context/action/exportContext', {'zapapiformat': 'JSON', 'formMethod': 'POST', 'contextName': self.context, 'contextFile': '%s.context' % self.context})
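
# Illustrative usage sketch (an assumption, not part of the original module);
# the api object is whatever zapper.api instance the constructor docstring expects:
#
#   ctx = context('http://target.example/', ['target.example'], api, force=True)
#   print(ctx.name(), ctx.id())
#   ctx.store()   # export the context to <name>.context
#   ctx.delete()  # remove the context from ZAP again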
| 44.533333 | 188 | 0.611901 | 319 | 2,672 | 5.047022 | 0.310345 | 0.122981 | 0.040994 | 0.046584 | 0.342236 | 0.269565 | 0.228571 | 0 | 0 | 0 | 0 | 0.000505 | 0.258982 | 2,672 | 59 | 189 | 45.288136 | 0.812626 | 0.163922 | 0 | 0.057143 | 0 | 0 | 0.315037 | 0.105166 | 0 | 0 | 0 | 0.016949 | 0 | 1 | 0.142857 | false | 0 | 0.085714 | 0.057143 | 0.342857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cf7205decff15f66c7b82d0402b54b1a61e8918 | 476 | py | Python | setup.py | Emrys-Merlin/monitor_airquality | 40a734c8a2cc24d826b61f26e863a5c0913b7106 | [
"MIT"
] | null | null | null | setup.py | Emrys-Merlin/monitor_airquality | 40a734c8a2cc24d826b61f26e863a5c0913b7106 | [
"MIT"
] | null | null | null | setup.py | Emrys-Merlin/monitor_airquality | 40a734c8a2cc24d826b61f26e863a5c0913b7106 | [
"MIT"
] | null | null | null | from importlib.metadata import entry_points
from setuptools import find_packages, setup
setup(
    name='monitor_airquality',
    version='0.1',
    url='',
    author='Tim Adler',
    author_email='tim+github@emrys-merlin.de',
    description='Measure airquality using some sensors connected to a raspberry pi',
    packages=find_packages(),
    install_requires=[],
    entry_points={
        'console_scripts': ['monitor_airquality=monitor_airquality.main:main']
    }
)
| 26.444444 | 84 | 0.712185 | 56 | 476 | 5.875 | 0.714286 | 0.155015 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005089 | 0.17437 | 476 | 17 | 85 | 28 | 0.832061 | 0 | 0 | 0 | 0 | 0 | 0.384454 | 0.153361 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.133333 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cfbc2509615626710ec962d624548fc6620ce7a | 16,310 | py | Python | src/insulaudit/devices/clmm/proto.py | kakoni/insulaudit | 18fe0802bafe5764882ac4e65e472fdc840baa45 | [
"MIT"
] | 1 | 2020-11-28T13:23:58.000Z | 2020-11-28T13:23:58.000Z | src/insulaudit/devices/clmm/proto.py | kakoni/insulaudit | 18fe0802bafe5764882ac4e65e472fdc840baa45 | [
"MIT"
] | null | null | null | src/insulaudit/devices/clmm/proto.py | kakoni/insulaudit | 18fe0802bafe5764882ac4e65e472fdc840baa45 | [
"MIT"
] | null | null | null | import struct
import sys
import serial
import time
import logging
from pprint import pprint, pformat
import doctest
from insulaudit.core import Command
from insulaudit.clmm.usbstick import *
from insulaudit import lib
#logging.basicConfig( stream=sys.stdout )
log = logging.getLogger(__name__)
#log.setLevel( logging.DEBUG )
log.info( 'hello world' )
io = logging.getLogger('.'.join(['io', __name__, 'io' ]))
#io = log
#io.setLevel( logging.DEBUG )
"""
######################
#
# ComLink2
# pseudocode analysis of critical procedures
# there is some implicit OO going on
#
execute(command):
usbcommand.execute(self)
############################
#
# USB(Pump) Command Stuff
#
packSerialNumber:
return makePackedBCD(serial)
"""
"""
######################
#
# Pump
#
# every command needs:
# code, retries, params, length, pages
initDevice:
# cmdPowerControl Command(93, "rf power on", 2)
# cmdPowerControl.params = [ 1, 1 ]
# cmdPowerControl.retries = 0
# cmdReadErrorStatus = Command(117, "read pump error status")
# cmdReadState = Command(131, "Read Pump State")
# cmdReadTempBasal = Command(152, "Read Temporary Basal")
initDevice2
iniDevice2:
detectActiveBolus = Command(76, "set temp basal rate (bolus detection only)", 3)
detectActiveBolus.params = [ 0, 0, 0 ]
detectActiveBolus.retries = 0
detectActiveBolus:
# cmdDetectBolus
shutDownPump
if suspended:
shutDownPump2()
cmdCancelSuspend()
# turn rf power off
# retries 0
cmdOff = Command(93, "rf power off", [ 0 ], 2)
cmdOff.execute
shutDownPump2:
Command(91, "keypad push (ack)", [ 2 ], 1).execute
time.sleep(.500)
Command(91, "keypad push (esc)", [ 1 ], 1).execute
time.sleep(.500)
getNAKDescription:
# pass
# 2 params
Command(code, descr)
# 5: code, descr, bytesPerRecord, maxRecords, maxRetries
return Command(code, descr, 64, 1, 0)
# 3 params
Command(code, descr, paramCount):
# 5
#
com = Command(code, descr, 0, 1, 11)
com.paramCount = paramCount
numblocks = paramCount / 64 + 1
# 4 params
Command(code, descr, params, tail)
# 5
com = Command(code, descr, 0, 1, 11)
com.params = params
#com.paramCount
# 5 params
Command(code, descr, bytesPerRecord, maxRecords, ??):
# likely decompile error
# 7
Command(code, descr, bytesPerRecord, maxRecords, 0, 0, paramCount)
dataOffset = 0
cmdLength = 2
# 7 params
Command(code, descr, bytesPerRecord, maxRecords, address, addressLength, arg8):
offset = 2
if addressLength == 1:
cmdLength = 2 + addressLength
else:
cmdLength = 2 + addressLength + 1
retries = 2
# 511
execute:
result = None
for i in xrange(maxRetries)
# reset bytes read
response = usb.execute(self)
# handle stack trace
if response: break
return result
"""
"""
"""
class ProtocolError(Exception): pass


class DeviceCommsError(ProtocolError): pass


class AckError(DeviceCommsError): pass


def retry(block, retry=3, sleep=0):
    r = None
    for i in xrange(retry):
        log.info('retry:%s:%i' % (block, i))
        r = block()
        if r:
            return r
        if sleep:
            time.sleep(sleep)
    return r
class Link(core.CommBuffer):
    class ID:
        VENDOR = 0x0a21
        PRODUCT = 0x8001
    timeout = .100

    def __init__(self, port, timeout=None):
        super(type(self), self).__init__(port, timeout)

    def setTimeout(self, timeout):
        self.serial.setTimeout(timeout)

    def getTimeout(self):
        return self.serial.getTimeout()

    def initUSBComms(self):
        def init():
            init = False
            try:
                self.initCommunicationsIO()
                init = True
            except ProtocolError, e: pass
            return init
        if not retry(init):
            raise ProtocolError("could not init usb module")
        #self.initDevice()

    def getSignalStrength(self):
        result = self.readSignalStrength()
        signal = result[0]

    def readSignalStrength(self):
        result = self.sendComLink2Command(6, 0)
        # result[0] is signal strength
        log.info('%r:readSignalStrength:%s' % (self, int(result[0])))
        return result

    def initCommunicationsIO(self):
        # close/open serial
        self.readProductInfo()
        self.readSignalStrength()

    def endCommunicationsIO(self):
        self.readSignalStrength()
        self.readInterfaceStatistics()
        # close port
        self.close()

    def readProductInfo(self):
        result = self.sendComLink2Command(4)
        # 1/0/255
        log.info('readProductInfo:result')
        freq = result[5]
        info = self.decodeProductInfo(result)
        log.info('product info: %s' % pformat(info))
        # decodeInterface stats

    def decodeProductInfo(self, data):
        class F:
            body = data
        comm = USBProductInfo()
        comm.reply = F()
        comm.onACK()
        return comm.info

    def sendComLink2Command(self, msg, a2=0x00, a3=0x00):
        # generally commands are 3 bytes, most often CMD, 0x00, 0x00
        msg = bytearray([msg, a2, a3])
        io.info('sendComLink2Command:write')
        self.write(msg)
        return retry(self.checkAck, sleep=.100)
        # throw local usb exception

    def checkAck(self):
        time.sleep(.100)
        result = bytearray(self.read(64))
        if len(result) == 0:
            raise AckError('checkAck must have a response')
        io.info('checkAck:read')
        commStatus = result[0]
        # usable response
        #assert commStatus == 1
        if commStatus != 1:
            raise DeviceCommsError('\n'.join(["checkAck: bad response code",
                                              lib.hexdump(result[0:4])]))
        status = result[1]
        # status == 102 'f' NAK, look up NAK
        if status == 85:  # 'U'
            log.info('ACK OK')
            return result[3:]
        assert False, "NAK!!"

    def decodeIFaceStats(self, data):
        class F:
            body = data
        comm = InterfaceStats()
        comm.reply = F()
        comm.onACK()
        return comm.info

    def readInterfaceStatistics(self):
        # decode and log stats
        result = self.sendComLink2Command(5, 0)
        info = self.decodeIFaceStats(result)
        log.info("read radio Interface Stats: %s" % pformat(info))
        result = self.sendComLink2Command(5, 1)
        info = self.decodeIFaceStats(result)
        log.info("read stick Interface Stats: %s" % pformat(info))
#######################
#
#
#

def CRC8(data):
    return lib.CRC8.compute(data)


################################
# Remote Stuff
#
class BaseCommand(object):
    code = 0x00
    descr = "(error)"
    retries = 2
    timeout = 3
    params = []
    bytesPerRecord = 0
    maxRecords = 0
    effectTime = 1

    def __init__(self, code, descr, *args):
        self.code = code
        self.descr = descr
        self.params = []

    def __repr__(self):
        fields = ['descr', 'timeout', 'effectTime', 'code']
        details = [''] \
            + ["\t%8s: %s" % (f, str(getattr(self, f))) for f in fields]
        summary = '<{name}:{descr}>'.format(name=self.__class__.__name__,
                                            descr=self.descr)
        kwds = dict(details='\n'.join(map(str, details)), summary=summary)
        return "{summary}{details}".format(**kwds)

    def format(self):
        pass

    def allocateRawData(self):
        self.raw = self.bytesPerRecord * self.maxRecords
class Device(object):
    def __init__(self, link):
        self.link = link

    def execute(self, command):
        self.command = command
        #try:
        #    self.allocateRawData()
        #    self.sendAndRead()
        #except DeviceCommsError, e:
        #    raise
        errors = []
        that = self
        def execute():
            try:
                that.allocateRawData()
                that.sendAndRead()
                return True
            except DeviceCommsError, e:
                errors.append(e)
                return False
        if not retry(execute, sleep=.150):
            raise DeviceCommsError('\n'.join(["tried executing %s bunch of times and failed",
                                              "%s" % ('\n\t'.join(map(str, errors)))]) % self.command)

    def sendAndRead(self):
        self.sendDeviceCommand()
        time.sleep(self.command.effectTime)
        if self.expectedLength > 0:
            # in original code, this modifies the length tested in the previous if
            # statement
            self.command.data = self.readDeviceData()

    def sendDeviceCommand(self):
        packet = self.buildTransmitPacket()
        io.info('sendDeviceCommand:write:%r' % (self.command))
        self.link.write(packet)
        time.sleep(.500)
        code = self.command.code
        params = self.command.params
        if code != 93 or params[0] != 0:
            self.link.checkAck()

    def allocateRawData(self):
        self.command.allocateRawData()
        self.expectedLength = self.command.bytesPerRecord * self.command.maxRecords

    def readDeviceData(self):
        self.eod = False
        results = bytearray()
        while not self.eod:
            data = self.readDeviceDataIO()
            results.extend(data)
        return results

    def readDeviceDataIO(self):
        results = self.readData()
        lb, hb = results[5] & 0x7F, results[6]
        self.eod = (results[5] & 0x80) > 0
        resLength = lib.BangInt((lb, hb))
        assert resLength > 63, ("cmd low byte count:\n%s" % lib.hexdump(results))
        data = results[13:13 + resLength]
        assert len(data) == resLength
        crc = results[-1]
        # crc check
        log.info('readDeviceDataIO:msgCRC:%r:expectedCRC:%r:data:%r' % (crc, CRC8(data), data))
        assert crc == CRC8(data)
        return data

    def readData(self):
        bytesAvailable = self.getNumBytesAvailable()
        packet = [12, 0, lib.HighByte(bytesAvailable), lib.LowByte(bytesAvailable)]
        packet.append(CRC8(packet))
        response = self.writeAndRead(packet, bytesAvailable)
        # assert response.length > 14
        # assert (int(response[0]) == 2), repr(response)
        rcode = response[0]
        if len(response) < 14:
            raise DeviceCommsError('\n'.join(["readData: insufficientData",
                                              lib.hexdump(response)]))
        if rcode != 2:
            raise DeviceCommsError("readData: bad response code: %#04x" % rcode)
        # response[1] != 0 # interface number !=0
        # response[2] == 5 # timeout occurred
        # response[2] == 2 # NAK
        # response[2] # should be within 0..4
        log.info("readData ACK")
        return response

    def writeAndRead(self, msg, length):
        io.info("writeAndRead:")
        self.link.write(bytearray(msg))
        time.sleep(.300)
        self.link.setTimeout(self.command.timeout)
        return bytearray(self.link.read(length))

    def getNumBytesAvailable(self):
        #result = self.readStatus( )
        result = 0
        start = time.time()
        i = 0
        while result == 0 and time.time() - start < 4:
            log.debug('%r:getNumBytesAvailable:attempt:%s' % (self, i))
            result = self.readStatus()
            time.sleep(.100)
            i += 1
        log.info('getNumBytesAvailable:%s' % result)
        return result

    def readStatus(self):
        """
        result = False
        def fetch_status( ):
            res = self.link.sendComLink2Command(3)
            status = res[0] # 0 indicates success
            if status == 0:
                result = res
                return True
            return False
        if not retry(fetch_status) or not result or len(result) == 0:
            raise RFFailed("rf read header indicates failure")
        """
        result = self.link.sendComLink2Command(3)
        commStatus = result[0]  # 0 indicates success
        status = result[2]
        lb, hb = result[3], result[4]
        stat = StickStatusStruct(status)
        header = result[0:3]
        test = [StickStatusStruct(s) for s in header]
        log.info(test)
        log.info("HEADER:\n%s" % lib.hexdump(header))
        if 0 != commStatus:
            raise DeviceCommsError('\n'.join(["rf read header indicates failure",
                                              "%s" % lib.hexdump(header)]))
        assert commStatus == 0, ("command status not 0: %s:%s" % (commStatus, stat))
        bytesAvailable = lib.BangInt((lb, hb))
        self.status = status
        if (status & 0x1) > 0:
            return bytesAvailable
        return 0

    def buildTransmitPacket(self):
        return self.command.format()
class PumpCommand(BaseCommand):
    serial = '665455'
    #serial = '206525'
    params = []
    bytesPerRecord = 64
    maxRecords = 1
    retries = 2
    __fields__ = ['maxRecords', 'code', 'descr',
                  'serial', 'bytesPerRecord', 'params']

    def __init__(self, **kwds):
        for k in self.__fields__:
            value = kwds.get(k, getattr(self, k))
            setattr(self, k, value)

    def getData(self):
        return self.data

    def format(self):
        params = self.params
        code = self.code
        maxRetries = self.retries
        serial = list(bytearray(self.serial.decode('hex')))
        paramsCount = len(params)
        head = [1, 0, 167, 1]
        # serial
        packet = head + serial
        # paramCount 2 bytes
        packet.extend([(0x80 | lib.HighByte(paramsCount)),
                       lib.LowByte(paramsCount)])
        # not sure what this byte means
        button = 0
        # special case command 93
        if code == 93:
            button = 85
        packet.append(button)
        packet.append(maxRetries)
        # how many packets/frames/pages/flows will this take?
        responseSize = self.calcRecordsRequired()
        # really only 1 or 2?
        pages = responseSize
        if responseSize > 1:
            pages = 2
        packet.append(pages)
        packet.append(0)
        # command code goes here
        packet.append(code)
        packet.append(CRC8(packet))
        packet.extend(params)
        packet.append(CRC8(params))
        io.info(packet)
        return bytearray(packet)

    def calcRecordsRequired(self):
        length = self.bytesPerRecord * self.maxRecords
        i = length / 64
        j = length % 64
        if j > 0:
            return i + 1
        return i
class PowerControl(PumpCommand):
    """
    >>> PowerControl().format() == PowerControl._test_ok
    True
    """
    _test_ok = bytearray([0x01, 0x00, 0xA7, 0x01, 0x66, 0x54, 0x55, 0x80,
                          0x02, 0x55, 0x00, 0x00, 0x00, 0x5D, 0xE6, 0x01,
                          0x0A, 0xA2])
    code = 93
    descr = "RF Power On"
    params = [0x01, 0x0A]
    retries = 0
    maxRecords = 0
    timeout = 17
    effectTime = 17


class PowerControlOff(PowerControl):
    params = [0x00, 0x0A]


class ReadErrorStatus(PumpCommand):
    """
    >>> ReadErrorStatus().format() == ReadErrorStatus._test_ok
    True
    """
    _test_ok = bytearray([0x01, 0x00, 0xA7, 0x01, 0x66, 0x54, 0x55, 0x80,
                          0x00, 0x00, 0x02, 0x01, 0x00, 0x75, 0xD7, 0x00])
    code = 117
    descr = "Read Error Status any current alarms set?"
    params = []
    retries = 2
    maxRecords = 1


class ReadPumpState(PumpCommand):
    """
    >>> ReadPumpState().format() == ReadPumpState._test_ok
    True
    """
    _test_ok = bytearray([0x01, 0x00, 0xA7, 0x01, 0x66, 0x54, 0x55, 0x80,
                          0x00, 0x00, 0x02, 0x01, 0x00, 0x83, 0x2E, 0x00])
    code = 131
    descr = "Read Pump State"
    params = []
    retries = 2
    maxRecords = 1


class ReadPumpModel(PumpCommand):
    """
    >>> ReadPumpModel().format() == ReadPumpModel._test_ok
    True
    """
    code = 141
    descr = "Read Pump Model Number"
    params = []
    retries = 2
    maxRecords = 1
    _test_ok = bytearray([0x01, 0x00, 0xA7, 0x01, 0x66, 0x54, 0x55, 0x80,
                          0x00, 0x00, 0x02, 0x01, 0x00, 0x8D, 0x5B, 0x00])

    def getData(self):
        data = self.data
        length = data[0]
        msg = data[1:1 + length]
        self.model = msg
        return str(msg)
def initDevice(link):
    device = Device(link)

    comm = PowerControl()
    device.execute(comm)
    log.info('comm:%s:data:%s' % (comm, getattr(comm, 'data', None)))

    comm = ReadErrorStatus()
    device.execute(comm)
    log.info('comm:%s:data:%s' % (comm, getattr(comm, 'data', None)))

    comm = ReadPumpState()
    device.execute(comm)
    log.info('comm:%s:data:%s' % (comm, getattr(comm, 'data', None)))
    return device


def do_commands(device):
    comm = ReadPumpModel()
    device.execute(comm)
    log.info('comm:%s:data:%s' % (comm, getattr(comm.getData(), 'data', None)))
    log.info('REMOTE PUMP MODEL NUMBER: %s' % comm.getData())


def shutdownDevice(device):
    comm = PowerControlOff()
    device.execute(comm)
    log.info('comm:%s:data:%s' % (comm, getattr(comm, 'data', None)))


if __name__ == '__main__':
    io.info("hello world")
    doctest.testmod()
    port = None
    try:
        port = sys.argv[1]
    except IndexError, e:
        print "usage:\n%s /dev/ttyUSB0" % sys.argv[0]
        sys.exit(1)
    link = Link(port)
    link.initUSBComms()
    device = initDevice(link)
    do_commands(device)
    #shutdownDevice(device)
    link.endCommunicationsIO()
    #pprint( carelink( USBProductInfo( ) ).info )
| 25.604396 | 91 | 0.62385 | 1,915 | 16,310 | 5.273629 | 0.213577 | 0.01317 | 0.014259 | 0.010892 | 0.124963 | 0.096544 | 0.081493 | 0.068225 | 0.068225 | 0.055748 | 0 | 0.042555 | 0.240711 | 16,310 | 636 | 92 | 25.644654 | 0.772933 | 0.067014 | 0 | 0.169399 | 0 | 0 | 0.085438 | 0.016757 | 0 | 0 | 0.026333 | 0 | 0.013661 | 0 | null | null | 0.013661 | 0.027322 | null | null | 0.005464 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cfc14873a8e3827bbe126057b1811e7460db3fe | 300 | py | Python | examples/interrupts.py | nodesign/electripy | 6765dccd93b4d71e77ae258f560a1e2eb6645128 | [
"Unlicense",
"MIT"
] | 3 | 2017-06-20T11:50:47.000Z | 2019-10-28T15:14:53.000Z | examples/interrupts.py | nodesign/electripy | 6765dccd93b4d71e77ae258f560a1e2eb6645128 | [
"Unlicense",
"MIT"
] | null | null | null | examples/interrupts.py | nodesign/electripy | 6765dccd93b4d71e77ae258f560a1e2eb6645128 | [
"Unlicense",
"MIT"
] | null | null | null | from lib.electripy import *
print "Board name : ", getBoardName()
print "INTERRUPTS TEST **************************"
def hello(data):
    print "interrupt ", INTERRUPT_TYPE[data]

attachInterrupt(25, CHANGE, hello)

for a in range(0, 15):
    delay(1000)
    print a

detachInterrupt(25)
stop() | 17.647059 | 50 | 0.63 | 36 | 300 | 5.222222 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044534 | 0.176667 | 300 | 17 | 51 | 17.647059 | 0.716599 | 0 | 0 | 0 | 0 | 0 | 0.215947 | 0.086379 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.090909 | null | null | 0.363636 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cfc82cb9558f36c262b30e77dc6228a934d7ef7 | 15,691 | py | Python | Contents/Code/interface/menu.py | tomerblecher/Sub-Zero.bundle | afc7614095fe6b0e1d15f40671f8cc1de37b4402 | [
"MIT"
] | 2 | 2020-12-06T19:40:10.000Z | 2020-12-26T13:20:10.000Z | Contents/Code/interface/menu.py | PlexIL/Sub-Zero.bundle | e74af913f13b390fc5271def4fdc55066358d3e3 | [
"MIT"
] | null | null | null | Contents/Code/interface/menu.py | PlexIL/Sub-Zero.bundle | e74af913f13b390fc5271def4fdc55066358d3e3 | [
"MIT"
] | null | null | null | # coding=utf-8
import locale
import logging
import os
import platform
import traceback
import logger
import copy
from requests import HTTPError
from item_details import ItemDetailsMenu
from refresh_item import RefreshItem
from menu_helpers import add_incl_excl_options, dig_tree, set_refresh_menu_state, \
default_thumb, debounce, ObjectContainer, SubFolderObjectContainer, route
from main import fatality, InclExclMenu
from advanced import DispatchRestart
from subzero.constants import ART, PREFIX, DEPENDENCY_MODULE_NAMES
from support.extract import season_extract_embedded
from support.scheduler import scheduler
from support.config import config
from support.helpers import timestamp, df, display_language
from support.ignore import get_decision_list
from support.items import get_all_items, get_items_info, get_item_kind_from_rating_key, get_item, get_item_title
from support.i18n import _
# init GUI
ObjectContainer.art = R(ART)
ObjectContainer.no_cache = True
# default thumb for DirectoryObjects
DirectoryObject.thumb = default_thumb
Plugin.AddViewGroup("full_details", viewMode="InfoList", mediaType="items", type="list", summary=2)
@route(PREFIX + '/section/firstLetter/key', deeper=bool)
def FirstLetterMetadataMenu(rating_key, key, title=None, base_title=None, display_items=False, previous_item_type=None,
previous_rating_key=None):
"""
displays the contents of a section filtered by the first letter
:param rating_key: actually is the section's key
:param key: the firstLetter wanted
:param title: the first letter, or #
:param deeper:
:return:
"""
title = base_title + " > " + unicode(title)
oc = SubFolderObjectContainer(title2=title, no_cache=True, no_history=True)
items = get_all_items(key="first_character", value=[rating_key, key], base="library/sections", flat=False)
kind, deeper = get_items_info(items)
dig_tree(oc, items, MetadataMenu,
pass_kwargs={"base_title": title, "display_items": deeper, "previous_item_type": kind,
"previous_rating_key": rating_key})
return oc
@route(PREFIX + '/section/contents', display_items=bool)
def MetadataMenu(rating_key, title=None, base_title=None, display_items=False, previous_item_type=None,
previous_rating_key=None, message=None, header=None, randomize=None):
"""
displays the contents of a section based on whether it has a deeper tree or not (movies->movie (item) list; series->series list)
:param rating_key:
:param title:
:param base_title:
:param display_items:
:param previous_item_type:
:param previous_rating_key:
:return:
"""
title = unicode(title)
item_title = title
title = base_title + " > " + title
oc = SubFolderObjectContainer(title2=title, no_cache=True, no_history=True, header=header, message=message,
view_group="full_details")
current_kind = get_item_kind_from_rating_key(rating_key)
if display_items:
timeout = 30
show = None
# add back to series for season
if current_kind == "season":
timeout = 720
show = get_item(previous_rating_key)
oc.add(DirectoryObject(
key=Callback(MetadataMenu, rating_key=show.rating_key, title=show.title, base_title=show.section.title,
previous_item_type="section", display_items=True, randomize=timestamp()),
title=_(u"< Back to %s", show.title),
thumb=show.thumb or default_thumb
))
elif current_kind == "series":
# it shouldn't take more than 6 minutes to scan all of a series' files and determine the force refresh
timeout = 3600
items = get_all_items(key="children", value=rating_key, base="library/metadata")
kind, deeper = get_items_info(items)
dig_tree(oc, items, MetadataMenu,
pass_kwargs={"base_title": title, "display_items": deeper, "previous_item_type": kind,
"previous_rating_key": rating_key})
# we don't know exactly where we are here, only add ignore option to series
if current_kind in ("series", "season"):
item = get_item(rating_key)
sub_title = get_item_title(item)
add_incl_excl_options(oc, current_kind, title=sub_title, rating_key=rating_key, callback_menu=InclExclMenu)
# mass-extract embedded
if current_kind == "season" and config.plex_transcoder:
for lang in config.lang_list:
oc.add(DirectoryObject(
key=Callback(SeasonExtractEmbedded, rating_key=rating_key, language=lang,
base_title=show.section.title, display_items=display_items, item_title=item_title,
title=title,
previous_item_type=previous_item_type, with_mods=True,
previous_rating_key=previous_rating_key, randomize=timestamp()),
title=_(u"Extract missing %(language)s embedded subtitles", language=display_language(lang)),
summary=_("Extracts the not yet extracted embedded subtitles of all episodes for the current "
"season with all configured default modifications")
))
oc.add(DirectoryObject(
key=Callback(SeasonExtractEmbedded, rating_key=rating_key, language=lang,
base_title=show.section.title, display_items=display_items, item_title=item_title,
title=title, force=True,
previous_item_type=previous_item_type, with_mods=True,
previous_rating_key=previous_rating_key, randomize=timestamp()),
title=_(u"Extract and activate %(language)s embedded subtitles", language=display_language(lang)),
summary=_("Extracts embedded subtitles of all episodes for the current season "
"with all configured default modifications")
))
# add refresh
oc.add(DirectoryObject(
key=Callback(RefreshItem, rating_key=rating_key, item_title=title, refresh_kind=current_kind,
previous_rating_key=previous_rating_key, timeout=timeout * 1000, randomize=timestamp()),
title=_(u"Refresh: %s", item_title),
summary=_("Refreshes %(the_movie_series_season_episode)s, possibly searching for missing and picking up "
"new subtitles on disk", the_movie_series_season_episode=_(u"the %s" % current_kind))
))
oc.add(DirectoryObject(
key=Callback(RefreshItem, rating_key=rating_key, item_title=title, force=True,
refresh_kind=current_kind, previous_rating_key=previous_rating_key, timeout=timeout * 1000,
randomize=timestamp()),
title=_(u"Auto-Find subtitles: %s", item_title),
summary=_("Issues a forced refresh, ignoring known subtitles and searching for new ones")
))
else:
return ItemDetailsMenu(rating_key=rating_key, title=title, item_title=item_title)
return oc
@route(PREFIX + '/season/extract_embedded/{rating_key}/{language}')
def SeasonExtractEmbedded(**kwargs):
rating_key = kwargs.get("rating_key")
requested_language = kwargs.pop("language")
with_mods = kwargs.pop("with_mods")
item_title = kwargs.pop("item_title")
title = kwargs.pop("title")
force = kwargs.pop("force", False)
Thread.Create(season_extract_embedded, **{"rating_key": rating_key, "requested_language": requested_language,
"with_mods": with_mods, "force": force})
kwargs["header"] = _("Success")
kwargs["message"] = _(u"Extracting of embedded subtitles for %s triggered", title)
kwargs.pop("randomize")
return MetadataMenu(randomize=timestamp(), title=item_title, **kwargs)
@route(PREFIX + '/ignore_list')
def IgnoreListMenu():
ref_list = get_decision_list()
include = ref_list.store == "include"
list_title = _("Include list" if include else "Ignore list")
oc = SubFolderObjectContainer(title2=list_title, replace_parent=True)
for key in ref_list.key_order:
values = ref_list[key]
for value in values:
add_incl_excl_options(oc, key, title=ref_list.get_title(key, value), rating_key=value,
callback_menu=InclExclMenu)
return oc
@route(PREFIX + '/history')
def HistoryMenu():
from support.history import get_history
history = get_history()
oc = SubFolderObjectContainer(title2=_("History"), replace_parent=True)
for item in history.items[:100]:
possible_language = item.language
language_display = item.lang_name if not possible_language else display_language(possible_language)
oc.add(DirectoryObject(
key=Callback(ItemDetailsMenu, title=item.title, item_title=item.item_title,
rating_key=item.rating_key),
title=u"%s (%s)" % (item.item_title, _(item.mode_verbose)),
summary=_(u"%s in %s (%s, score: %s), %s", language_display, item.section_title,
_(item.provider_name), item.score, df(item.time)),
thumb=item.thumb or default_thumb
))
history.destroy()
return oc
@route(PREFIX + '/missing/refresh')
@debounce
def RefreshMissing(randomize=None):
scheduler.dispatch_task("SearchAllRecentlyAddedMissing")
header = "Refresh of recently added items with missing subtitles triggered"
return fatality(header=header, replace_parent=True)
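# recursively replace every value stored under `key` in a nested dict; used
# below to redact secrets before logging, e.g. (illustrative values only):
# replace_item({"sonarr": {"api_key": "secret"}}, "api_key", "x") -> {"sonarr": {"api_key": "x"}}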
def replace_item(obj, key, replace_value):
for k, v in obj.items():
if isinstance(v, dict):
obj[k] = replace_item(v, key, replace_value)
if key in obj:
obj[key] = replace_value
return obj
def check_connections():
# probe the configured Sonarr/Radarr ("drone" refiner) connections and log the results
Log.Debug("Checking connections ...")
log_buffer = []
try:
from subliminal_patch.refiners.drone import SonarrClient, RadarrClient
log_buffer.append(["----- Connections -----"])
for key, cls in [("sonarr", SonarrClient), ("radarr", RadarrClient)]:
if key in config.refiner_settings:
cname = key.capitalize()
try:
status = cls(**config.refiner_settings[key]).status(timeout=5)
except HTTPError, e:
if e.response.status_code == 401:
log_buffer.append(("%s: NOT WORKING - BAD API KEY", cname))
else:
log_buffer.append(("%s: NOT WORKING - %s", cname, traceback.format_exc()))
except:
log_buffer.append(("%s: NOT WORKING - %s", cname, traceback.format_exc()))
else:
if status and status["version"]:
log_buffer.append(("%s: OK - %s", cname, status["version"]))
else:
log_buffer.append(("%s: NOT WORKING - %s", cname))
except:
log_buffer.append(("Something went really wrong when evaluating Sonarr/Radarr: %s", traceback.format_exc()))
finally:
Core.log.setLevel(logging.DEBUG)
for entry in log_buffer:
Log.Debug(*entry)
Core.log.setLevel(logging.getLevelName(Prefs["log_level"]))
@route(PREFIX + '/ValidatePrefs', enforce_route=True)
def ValidatePrefs():
Core.log.setLevel(logging.DEBUG)
if Prefs["log_console"]:
Core.log.addHandler(logger.console_handler)
Log.Debug("Logging to console from now on")
else:
Core.log.removeHandler(logger.console_handler)
Log.Debug("Stop logging to console")
# cache the channel state
update_dict = False
restart = False
# reset pin
Dict["pin_correct_time"] = None
config.initialize()
if "channel_enabled" not in Dict:
update_dict = True
elif Dict["channel_enabled"] != config.enable_channel:
Log.Debug("Interface features %s, restarting plugin", "enabled" if config.enable_channel else "disabled")
update_dict = True
restart = True
if "plugin_pin_mode2" not in Dict:
update_dict = True
elif Dict["plugin_pin_mode2"] != Prefs["plugin_pin_mode2"]:
update_dict = True
restart = True
if update_dict:
Dict["channel_enabled"] = config.enable_channel
Dict["plugin_pin_mode2"] = Prefs["plugin_pin_mode2"]
Dict.Save()
if restart:
scheduler.stop()
DispatchRestart()
return
scheduler.setup_tasks()
scheduler.clear_task_data("MissingSubtitles")
set_refresh_menu_state(None)
Log.Debug("Validate Prefs called.")
# SZ config debug
Log.Debug("--- SZ Config-Debug ---")
for attr in [
"version", "app_support_path", "data_path", "data_items_path", "enable_agent",
"enable_channel", "permissions_ok", "missing_permissions", "fs_encoding",
"subtitle_destination_folder", "include", "include_exclude_paths", "include_exclude_sz_files",
"new_style_cache", "dbm_supported", "lang_list", "providers", "normal_subs", "forced_only", "forced_also",
"plex_transcoder", "refiner_settings", "unrar", "adv_cfg_path", "use_custom_dns",
"has_anticaptcha", "anticaptcha_cls", "mediainfo_bin"]:
value = getattr(config, attr)
if isinstance(value, dict):
d = replace_item(copy.deepcopy(value), "api_key", "xxxxxxxxxxxxxxxxxxxxxxxxx")
Log.Debug("config.%s: %s", attr, d)
continue
if attr in ("api_key",):
value = "xxxxxxxxxxxxxxxxxxxxxxxxx"
Log.Debug("config.%s: %s", attr, value)
for attr in ["plugin_log_path", "server_log_path"]:
value = getattr(config, attr)
if value:
access = os.access(value, os.R_OK)
if Core.runtime.os == "Windows":
try:
f = open(value, "r")
f.read(1)
f.close()
except:
access = False
Log.Debug("config.%s: %s (accessible: %s)", attr, value, access)
for attr in [
"subtitles.save.filesystem", ]:
Log.Debug("Pref.%s: %s", attr, Prefs[attr])
if "sonarr" in config.refiner_settings or "radarr" in config.refiner_settings:
Thread.Create(check_connections)
# fixme: check existence of and os access of logs
Log.Debug("----- Environment -----")
Log.Debug("Platform: %s", Core.runtime.platform)
Log.Debug("OS: %s", Core.runtime.os)
Log.Debug("Python: %s", platform.python_version())
for key, value in os.environ.iteritems():
if key.startswith("PLEX") or key.startswith("SZ_"):
if "TOKEN" in key:
outval = "xxxxxxxxxxxxxxxxxxx"
else:
outval = value
Log.Debug("%s: %s", key, outval)
Log.Debug("Locale: %s", locale.getdefaultlocale())
Log.Debug("-----------------------")
Log.Debug("Setting log-level to %s", Prefs["log_level"])
logger.register_logging_handler(DEPENDENCY_MODULE_NAMES, level=Prefs["log_level"])
Core.log.setLevel(logging.getLevelName(Prefs["log_level"]))
os.environ['U1pfT01EQl9LRVk'] = '789CF30DAC2C8B0AF433F5C9AD34290A712DF30D7135F12D0FB3E502006FDE081E'
return
| 41.510582 | 132 | 0.636734 | 1,816 | 15,691 | 5.290749 | 0.212004 | 0.044963 | 0.024771 | 0.018734 | 0.312968 | 0.258535 | 0.237927 | 0.222107 | 0.198585 | 0.194421 | 0 | 0.007128 | 0.257919 | 15,691 | 377 | 133 | 41.62069 | 0.818018 | 0.025811 | 0 | 0.219858 | 0 | 0 | 0.184196 | 0.025343 | 0 | 0 | 0 | 0.002653 | 0 | 0 | null | null | 0.007092 | 0.08156 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8cfca3b3f7b05a6aebdc8eea726b3a320cf8a4ca | 7,643 | py | Python | gsicrawler_pipeline.py | antoniofll/sefarad4.0-testing | 1b50f479ee503e5e23345ab0388eb6c2608ab73d | [
"Apache-2.0"
] | null | null | null | gsicrawler_pipeline.py | antoniofll/sefarad4.0-testing | 1b50f479ee503e5e23345ab0388eb6c2608ab73d | [
"Apache-2.0"
] | null | null | null | gsicrawler_pipeline.py | antoniofll/sefarad4.0-testing | 1b50f479ee503e5e23345ab0388eb6c2608ab73d | [
"Apache-2.0"
] | null | null | null | import luigi
from luigi import configuration
from luigi.s3 import S3Target, S3PathTask
import threading
from time import sleep
import os
import json
import imp
import random
import datetime
import uuid
from bottle import route, run, template, static_file, response, request, install
import urllib2
from rdflib import Graph # needed by SemanticTask.run (rdflib, with its JSON-LD plugin)
pending_analysis = {}
ids = {}
def return_json(result):
return json.dumps(result)
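# Pipeline sketch: ScrapUrlTask scrapes reviews from a venue page,
# SenpyAnalysisTask runs sentiment analysis over them, and SemanticTask
# converts the JSON-LD result to N3/RDF for indexing in Fuseki.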
class ScrapUrlTask(luigi.Task):
#http://www.yelp.com/biz/taqueria-cazadores-san-francisco-2
url = luigi.Parameter(default="https://es.foursquare.com/v/cafeter%C3%ADa-hd/4b5b0ca9f964a520d0e028e3")
unique_id = str(uuid.uuid1())
webpage = luigi.Parameter(default="foursquare")
analysis_type = luigi.Parameter(default="sentiments")
def output(self):
return luigi.LocalTarget(path='analysis/%s.scraper' % self.unique_id)
def run(self):
#content = urllib2.urlopen(self.url).read()
#print content
#scrap_url('yelp','sentiments',self.url)
rv = {'error':None, 'loading':True, 'uuid':self.unique_id,
'analysis_type' : self.analysis_type,
'webpage':self.webpage, 'scraping':True}
pending_analysis[self.unique_id] = rv
ids[0] = self.unique_id
#Start scraper
try:
filePath = 'analysis/%s.scraper' % self.unique_id
print "######## filePath: ",filePath
scraperImported = imp.load_source(self.webpage, 'scrapers/%s.py' % (self.webpage))
#scraperImported.startScraping, args=(self.url, filePath)
scraperTask = threading.Thread(target=scraperImported.startScraping, args=(self.url, filePath))
scraperTask.start()
scraperTask.join()
with open(filePath) as result:
with self.output().open('w') as output:
# copy the scraper's JSON output into the task target
output.write(result.read())
except Exception as e:
return {'error':'%s scraper doesn\'t exist' % (self.webpage), 'loading':False}
class SenpyAnalysisTask(luigi.Task):
unique_id = str(uuid.uuid1())
def requires(self):
return ScrapUrlTask()
def run(self):
print "Analisis"
#filePath = imp.load_source(ids[0], './analysis/%s.scraper' % (ids[0]))
'''
with self.input().open('r') as analysis_file:
scraped_reviews = json.loads(analysis_file.read())
analysis_types = {'sentiments'}
analysis_type = analysis_types[0]
self.startAnalysis(scraped_reviews,'sentiments',unique_id)
try:
resultPath = '%s/analysis/%s.analisis' % (os.getcwd(), self.unique_id)
importedAnalyzer = imp.load_source(analysis_types[0], '%s/analyzers/%s.py' % (os.getcwd(), analysis_types[0]))
analysisTask = threading.Thread(target=importedAnalyzer.analyze, args=(scraped_reviews, resultPath))
analysisTask.start()
analysisTask.join()
except Exception as e:
print '###### startAnalysis ' + str(e)
return {'error':'%s analyzer doesn\'t exist' %(analysis_types[0]), 'loading':False}
with open(resultPath) as result:
with self.output().open('w') as output:
json.dump(result, output)
'''
#GSI CRAWLER functions
def return_json(result):
return json.dumps(result)
def startScraper(webpage, url, unique_id):
try:
filePath = './analysis/%s.scraper' % (unique_id)
print "filePath: ",filePath
scraperImported = imp.load_source(webpage, './scrapers/%s.py' % (webpage))
scraperTask = threading.Thread(target=scraperImported.startScraping, args=(url, filePath))
scraperTask.start()
except Exception as e:
return {'error':'%s scraper doesn\'t exist' % (webpage), 'loading':False}
def scrap_url(nameWeb,analysisType, url):
#global scraping
webpage = nameWeb
analysis_type = analysisType
unique_id = str(uuid.uuid1())
#unique_id = 'test'
rv = {'error':None, 'loading':True, 'uuid':unique_id,
'analysis_type' : analysis_type,
'webpage':webpage, 'scraping':True}
pending_analysis[unique_id] = rv
startScraper(webpage, url, unique_id)
return return_json(rv)
def retrieve_info():
print "######Inside retrieve_info"
global pending_analysis
unique_id = 'test'
try:
analysis_info = pending_analysis[unique_id]
if(analysis_info['scraping']):
return checkScrapedFinishedAndStartAnalysis(analysis_info, unique_id)
#else:
#return checkAnalysisFinished(analysis_info, unique_id)
except Exception as e:
print '#### retrieve_info ' + str(e)
return return_json({'error':'No valid uuid', 'loading':False, 'uuid':unique_id})
def checkScrapedFinishedAndStartAnalysis(analysis_info, unique_id):
try:
print "######Inside checkScrapedFinishedAndStartAnalysis"
with open('analysis/%s.scraper'%unique_id, 'r') as analysis_file:
scraped_reviews = json.loads(analysis_file.read())
if('error' in scraped_reviews and scraped_reviews['error'] != None):
return return_json(scraped_reviews)
analysis_info['scraping'] = False
analysis_types = analysis_info['analysis_type'].split(",")
error = startAnalysis(scraped_reviews, analysis_types[0], unique_id)
del analysis_types[0]
analysis_info['analysis_type'] = ','.join(analysis_types)
if(error):
return return_json(error)
print "Analisis :",return_json(analysis_info)
return return_json(analysis_info)
except Exception as e:
print '###### checkScrapedFinishedAndStartAnalysis' + str(e)
return return_json(analysis_info)
def startAnalysis(scraped_reviews, analysis_type, unique_id):
try:
resultPath = '%s/analysis/%s.analisis' % (os.getcwd(), unique_id)
importedAnalyzer = imp.load_source(analysis_type, '%s/analyzers/%s.py' % (os.getcwd(), analysis_type))
analysisTask = threading.Thread(target=importedAnalyzer.analyze, args=(scraped_reviews, resultPath))
analysisTask.start()
return None
except Exception as e:
print '###### startAnalysis ' + str(e)
return {'error':'%s analyzer doesn\'t exist' %(analysis_type), 'loading':False}
def run():
print "Analisis"
unique_id='test'
with open('analysis/%s.scraper'%unique_id, 'r') as analysis_file:
scraped_reviews = json.loads(analysis_file.read())
analysis_types = {'sentiments'}
startAnalysis(scraped_reviews, 'sentiments', unique_id)
with open('analysis/%s.analisis' % unique_id) as analysis_file:
analysis_result = json.loads(analysis_file.read())
print return_json(analysis_result)
class SemanticTask(luigi.Task):
"""
This task loads JSON data contained in a :py:class:`luigi.target.Target` and transform into RDF file
to insert into Fuseki platform as a semantic
"""
#: date task parameter (default = today)
date = luigi.DateParameter(default=datetime.date.today())
file = str(random.randint(0,10000)) + datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
def output(self):
"""
Returns the target output for this task.
In this case, a successful execution of this task will create a file on the local filesystem.
:return: the target output for this task.
:rtype: object (:py:class:`luigi.target.Target`)
"""
return luigi.LocalTarget(path='analysis/%s.n3' % self.file)
def requires(self):
"""
This task's dependencies:
* :py:class:`~.SenpyTask`
:return: object (:py:class:`luigi.task.Task`)
"""
return SenpyAnalysisTask()
def run(self):
"""
Receive data from Senpy and transform them to RDF format in order to be indexed in Fuseki
"""
with self.input().open('r') as infile:
j = json.load(infile)
g = Graph().parse(data=j, format='json-ld')
with self.output().open('w') as output:
output.write(g.serialize(format='n3', indent=4))
if __name__ == '__main__':
luigi.run()
| 33.230435 | 114 | 0.697501 | 963 | 7,643 | 5.411215 | 0.214953 | 0.046056 | 0.018423 | 0.020725 | 0.466897 | 0.376511 | 0.305508 | 0.216081 | 0.200729 | 0.178852 | 0 | 0.006729 | 0.163941 | 7,643 | 229 | 115 | 33.375546 | 0.808764 | 0.057962 | 0 | 0.301471 | 0 | 0.007353 | 0.153123 | 0.020582 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.147059 | null | null | 0.088235 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5065c0ef1a0d7d57ef1fde52307819e6cc954dfe | 6,286 | py | Python | app/verbose.py | erikosmond/knights_tour | beb4cbb3a081d1c134ed180c6044e3ba7a4a6016 | [
"MIT"
] | null | null | null | app/verbose.py | erikosmond/knights_tour | beb4cbb3a081d1c134ed180c6044e3ba7a4a6016 | [
"MIT"
] | null | null | null | app/verbose.py | erikosmond/knights_tour | beb4cbb3a081d1c134ed180c6044e3ba7a4a6016 | [
"MIT"
] | 1 | 2019-05-04T13:03:53.000Z | 2019-05-04T13:03:53.000Z | class Verbose(object):
Initialized = False
def __init__(self, verbosity, show_info=False):
assert type(verbosity) is int, "verbose takes an integer value of 0-1023"
self.verbose_int = verbosity
self.info = """
bit 0[-1](1) - max/min values
bit 1[-2](2) - retrace
bit 2[-3](4) - visited positions
bit 3[-4](8) - board when it changes
bit 4[-5](16) - every move
bit 5[-6](32) - recording failed position
bit 6[-7](64) - progress - list of final positions (how many moves have been made)
bit 7[-8](128) - potential OBOB
bit 8[-9](256) - possible moves
bit 9[-10](512) - final positions
"""
#create a 10-bit string representing the verbose type mask
self.verbose = bin(verbosity)[2:].zfill(10)
self.show_info = show_info
#values for development and debugging
self.min_max_switch = int(self.verbose[-1])
self.retrace_switch = int(self.verbose[-2])
self.visited_positions_switch = int(self.verbose[-3])
self.board_switch = int(self.verbose[-4])
self.every_move_switch = int(self.verbose[-5])
self.failed_position_switch = int(self.verbose[-6])
self.progress_switch = int(self.verbose[-7])
self.potential_OBOB_switch = int(self.verbose[-8])
self.possible_moves_switch = int(self.verbose[-9])
#for final user to see resulting position
self.final_positions_switch = int(self.verbose[-10])
if self.show_info == True:
self._print_verbose_info()
Verbose.Initialized = True
def min_max(self, tour, largest_tour):
#as the tour progresses, will show the longest tour, and if the tour shrinks to a small size
new_max = False
if len(tour.knight.visited_positions) > largest_tour:
new_max = True
largest_tour = len(tour.knight.visited_positions)
if self.min_max_switch:
if new_max:
print "current largest tour", str(largest_tour)
elif len(tour.knight.visited_positions) in [1,2,3]:
print "size of the tour got pretty small with length ", str(len(tour.knight.visited_positions))
return largest_tour
def failed_position(self, old_position, failed_moves):
return # TODO: remove this short-circuit and rework the failed_position switch
if self.failed_position_switch:
print "\told position", old_position
for i in failed_moves:
print "\t\t failed move", i
def final_positions(self, chess_piece):
if self.final_positions_switch:
self.board(chess_piece, final=True)
def retrace(self, chess_piece):
if self.retrace_switch:
print "Retracing to ", chess_piece.visited_positions[-1]
if self.visited_positions_switch:
pass
#self.final_positions(chess_piece)
def potential_OBOB(self, tour):
if self.potential_OBOB_switch:
if len(tour.knight.visited_positions) == tour.board.size -1:
print "possible OBOB with len", len(tour.knight.visited_positions)
for pos in tour.knight.visited_positions:
print pos
def progress(self, count, chess_piece=None):
if self.progress_switch:
if count % 1 == 0: #was 10000
print str(count), "moves tried so far"
if chess_piece != None:
final_positions = []
for i in chess_piece.visited_positions:
final_positions.append(i.coordinate)
print final_positions
def every_move(self, move):
if self.every_move_switch:
print "moving to", move
def board(self, chess_piece, final=False):
if self.board_switch or final==True:
print "\n\n"
board = chess_piece.get_board()
for row in range(1, board.rows+1):
for column in range(1, board.columns+1):
knight_present = False
fail_present = False
for i in chess_piece.visited_positions:
if row == i.row and column == i.column:
print chess_piece.visited_positions.index(i), "\t",
knight_present = True
break
if knight_present == True:
continue
'''
for i in chess_piece.trials: #section commented out 11-2-13
#must convert coordinate back into position; should be able to get rid of the if statement
if type(i) is tuple and len(i) == 2:
i = Position(row=i[0], column=i[1], board=board, verbosity=0)
for j in chess_piece.get_failed_moves(i):
if row == j.row and column == j.column:
print "F\t",
fail_present = True
break
if fail_present == True:
break
'''
if knight_present == False:# and fail_present == False: #second check commented out 11-2-13
print "x\t",
print "\n"
print "\n\n"
#raw_input("press any key to continue")
def possible_moves(self, origin, moves):
if self.possible_moves_switch:
print "possible moves from position", origin
for move in moves:
print "\t", move
def _print_verbose_info(self):
if self.verbose_int > 0 and self.verbose_int != 512 and Verbose.Initialized == False:
print self.info
print "current verbose settings...\n"
for i in dir(self):
value = getattr(self, i)
if "switch" in str(i):
spacing = "\t"
for j in range(3-(len(i)/8)):
spacing += "\t"
print i, spacing, getattr(self, i)
print "\n" | 43.958042 | 114 | 0.544066 | 763 | 6,286 | 4.338139 | 0.230668 | 0.046526 | 0.039275 | 0.060423 | 0.144411 | 0.056798 | 0.019335 | 0 | 0 | 0 | 0 | 0.024121 | 0.366847 | 6,286 | 143 | 115 | 43.958042 | 0.807538 | 0.063793 | 0 | 0.054054 | 0 | 0.009009 | 0.148932 | 0 | 0 | 0 | 0 | 0 | 0.009009 | 0 | null | null | 0.009009 | 0 | null | null | 0.207207 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
506906ff6a85d3dd74305ee68d4ad5ec97a8dca5 | 5,216 | py | Python | usbcan/somebus.py | laigui/usbcan | a58e8f4a7d757fdadc4dd57a039cd4afa016d585 | [
"MIT"
] | null | null | null | usbcan/somebus.py | laigui/usbcan | a58e8f4a7d757fdadc4dd57a039cd4afa016d585 | [
"MIT"
] | null | null | null | usbcan/somebus.py | laigui/usbcan | a58e8f4a7d757fdadc4dd57a039cd4afa016d585 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
''' somebus.py: Somebus USBCAN-II adaptor driver class.
Copyright (C) 2019 Laigui Qin <laigui@gmail.com>'''
from ctypes import *
class VciInitConfig(Structure):
"""
INIT_CONFIG结构体定义了初始化CAN的配置
"""
_fields_ = [("AccCode", c_ulong), # 验收码,后面是数据类型
("AccMask", c_ulong), # 屏蔽码
("Reserved", c_ulong), # 保留
("Filter", c_ubyte), # 滤波使能。0=不使能,1=使能使能时,/
# 请参照SJA1000验收滤波器设置验收码和屏蔽码。
("Timing0", c_ubyte), # 波特率定时器0(BTR0)
("Timing1", c_ubyte), # 波特率定时器1(BTR1)
("Mode", c_ubyte)] # 模式。=0为正常模式,=1为只听模式, =2为自发自收模式
class VciCanObj(Structure):
"""
CAN_OBJ结构体表示帧的数据结构。 在发送函数Transmit和接收函数Receive中被用来传送CAN信息帧。
"""
_fields_ = [("ID", c_uint), # 报文帧ID'''
("TimeStamp", c_uint), # 接收到信息帧时的时间标识
("TimeFlag", c_ubyte), # 是否使用时间标识, 为1时TimeStamp有效
("SendType", c_ubyte), # 发送帧类型。=0时为正常发送,=1时为单次发送(不自动重发),/
# =2时为自发自收(用于测试CAN卡是否损坏) , =3时为单次自发自收(只发送一次, 用于自测试),/
# 只在此帧为发送帧时有意义。
("RemoteFlag", c_ubyte), # 是否是远程帧。=0时为数据帧,=1时为远程帧。
("ExternFlag", c_ubyte), # 是否是扩展帧。=0时为标准帧(11位帧ID),=1时为扩展帧(29位帧ID)。
("DataLen", c_ubyte), # 数据长度DLC(<=8), 即Data的长度
("Data", c_ubyte * 8), # CAN报文的数据。 空间受DataLen的约束。
("Reserved", c_ubyte * 3)] # 系统保留
class VciErrInfo(Structure):
"""
ERR_INFO结构体用于装载VCI库运行时产生的错误信息。 结构体将在ReadErrInfo函数中被填充。
"""
_fields_ = [("ErrCode", c_uint), # 错误码。 对应1.2 中的错误码定义。
("Passive_ErrData", c_ubyte), # 当产生的错误中有消极错误时表示为消极错误的错误标识数据
("ArLost_ErrData", c_ubyte)] # 当产生的错误中有仲裁丢失错误时表示为仲裁丢失错误的错误标识数据
class USBCAN():
"""
GCAN USBCAN adaptor
"""
STATUS_OK = 1
STATUS_ERR = 0
RECEIVE_ERR = 0xFFFFFFFF
def __init__(self):
pass
def OpenDevice(self, nDeviceType, nDeviceInd):
"""
:param nDeviceType:
:param nDeviceInd:
:return:
"""
dll = windll.LoadLibrary('./ECanVci64.dll') # load the vendor DLL
nReserved = 0
return dll.OpenDevice(nDeviceType, nDeviceInd, nReserved)
def InitCAN(self, DevType, DevIndex, CANIndex, pInitConfig):
"""
:param DevType:
:param DevIndex:
:param CANIndex:
:param pInitConfig:
:return:
"""
dll = windll.LoadLibrary('./ECanVci64.dll') # load the vendor DLL
return dll.InitCAN(DevType, DevIndex, CANIndex, pInitConfig)
def StartCAN(self, DevType, DevIndex, CANIndex):
"""
Starts one CAN channel of the USBCAN device; call it once per channel when the
device has several. After StartCAN returns, wait about 10 ms before calling Transmit.
:param DevType:
:param DevIndex:
:param CANIndex:
:return:
"""
dll = windll.LoadLibrary('./ECanVci64.dll') # load the vendor DLL
return dll.StartCAN(DevType, DevIndex, CANIndex)
def Transmit(self, DevType, DevIndex, CANIndex, pSend, Len):
"""
Returns the number of frames actually sent successfully.
:param DevType:
:param DevIndex:
:param CANIndex:
:param pSend:
:param Len:
:return:
"""
dll = windll.LoadLibrary('./ECanVci64.dll') # load the vendor DLL
return dll.Transmit(DevType, DevIndex, CANIndex, pSend, Len)
def Receive(self, DevType, DevIndex, CANIndex, pReceive, Len, WaitTime):
"""
Reads frames from the receive buffer of the specified CAN channel of the device.
:param DevType:
:param DevIndex:
:param CANIndex:
:param pReceive:
:param Len:
:param WaitTime:
:return:
"""
dll = windll.LoadLibrary('./ECanVci64.dll') # load the vendor DLL
return dll.Receive(DevType, DevIndex, CANIndex, pReceive, Len, WaitTime)
def CloseDevice(self, DevType, DevIndex):
"""
Closes the device.
:param DevType:
:param DevIndex:
:return:
"""
dll = windll.LoadLibrary('./ECanVci64.dll') # load the vendor DLL
return dll.CloseDevice(DevType, DevIndex)
def ClearBuffer(self, DevType, DevIndex, CANIndex):
"""
Clears the receive buffer of the specified CAN channel.
:param DevType:
:param DevIndex:
:param CANIndex:
:return:
"""
dll = windll.LoadLibrary('./ECanVci64.dll') # load the vendor DLL
return dll.ClearBuffer(DevType, DevIndex, CANIndex)
def ReadErrInfo(self, DevType, DevIndex, CANIndex, pErrInfo):
"""
:param DevType:
:param DevIndex:
:param CANIndex:
:param pErrInfo:
:return:
"""
dll = windll.LoadLibrary('./ECanVci64.dll') # load the vendor DLL
return dll.ReadErrInfo(DevType, DevIndex, CANIndex, pErrInfo)
def ReadCanStatus(self, DevType, DevIndex, CANIndex, pCANStatus):
"""
:param DevType:
:param DevIndex:
:param CANIndex:
:param pCANStatus:
:return:
"""
dll = windll.LoadLibrary('./ECanVci64.dll') # load the vendor DLL
return dll.ReadCanStatus(DevType, DevIndex, CANIndex, pCANStatus) | 31.612121 | 89 | 0.551189 | 435 | 5,216 | 6.524138 | 0.351724 | 0.057082 | 0.11346 | 0.082452 | 0.322058 | 0.300211 | 0.270613 | 0.178999 | 0.178999 | 0.064834 | 0 | 0.017684 | 0.327837 | 5,216 | 165 | 90 | 31.612121 | 0.791786 | 0.308474 | 0 | 0.157895 | 0 | 0 | 0.094744 | 0 | 0 | 0 | 0.003348 | 0 | 0 | 1 | 0.175439 | false | 0.035088 | 0.017544 | 0 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
50695771cffd15a76d764cc359fe32d149466b8a | 2,285 | py | Python | PyFlow/Ui/StyleSheetEditor.py | pedroCabrera/PyFlow | 8b439d9b47fff450e91c09d40c7b286e88cb624f | [
"MIT"
] | 7 | 2018-06-24T15:55:00.000Z | 2021-07-13T08:11:25.000Z | PyFlow/Ui/StyleSheetEditor.py | pedroCabrera/PyFlow | 8b439d9b47fff450e91c09d40c7b286e88cb624f | [
"MIT"
] | 32 | 2019-02-18T20:47:46.000Z | 2019-05-30T12:51:10.000Z | PyFlow/Ui/StyleSheetEditor.py | pedroCabrera/PyFlow | 8b439d9b47fff450e91c09d40c7b286e88cb624f | [
"MIT"
] | 5 | 2019-02-19T23:26:21.000Z | 2020-12-23T00:32:59.000Z | from Qt import QtWidgets
from Qt import QtCore
from widgets.pc_HueSlider import pc_HueSlider,pc_GradientSlider
if __name__ == '__main__':
import sys
sys.path.append("..")
import stylesheet
else:
from .. import stylesheet
from .. import resources
class StyleSheetEditor(QtWidgets.QWidget):
"""Style Sheet Editor"""
Updated = QtCore.Signal()
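# Updated is emitted whenever one of the sliders changes; hosts connect to it
# and re-apply getStyleSheet() (see the __main__ demo at the bottom of this file).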
def __init__(self,parent=None):
super(StyleSheetEditor, self).__init__(parent)
self.style = stylesheet.editableStyleSheet()
self.setLayout(QtWidgets.QVBoxLayout ())
self.mainGroup = QtWidgets.QGroupBox(self)
self.mainGroupLay = QtWidgets.QVBoxLayout(self.mainGroup)
mainLabel = QtWidgets.QLabel("Main Color Hue",parent =self.mainGroup)
self.main_hue = pc_HueSlider(self.mainGroup)
self.main_hue.valueChanged.connect(self.updateHue)
self.main_light = pc_GradientSlider(self.mainGroup)
self.main_light.valueChanged.connect(self.updateLight)
self.mainGroupLay.addWidget(mainLabel)
self.mainGroupLay.addWidget(self.main_hue)
self.mainGroupLay.addWidget(self.main_light)
self.bgColor = pc_GradientSlider(self)
self.bgColor.valueChanged.connect(self.updateBg)
self.layout().addWidget(self.mainGroup)
self.layout().addWidget(self.bgColor)
self.setColor(self.style.MainColor)
self.bgColor.setValue(0.196)
self.main_light.setValue(self.MainColor.lightnessF())
self.USETEXTUREBG = True
def setColor(self,color):
self.MainColor = color
self.main_hue.setColor(color)
def hue(self):
return self.main_hue.value()
def getStyleSheet(self):
return self.style.getStyleSheet()
def updateHue(self,value):
self.style.setHue(self.main_hue.value())
self.style.setLightness(self.main_light.value())
self.Updated.emit()
def updateLight(self,value):
self.main_hue.setLightness(self.main_light.value())
self.main_hue.update()
self.style.setLightness(self.main_light.value())
self.Updated.emit()
def updateBg(self,value):
self.style.setBg(self.bgColor.value())
self.Updated.emit()
if __name__ == '__main__':
import sys
app = QtWidgets.QApplication(sys.argv)
a = StyleSheetEditor()
def update():
print a.bgColor.value()
app.setStyleSheet( a.getStyleSheet() )
app.setStyleSheet( a.getStyleSheet() )
a.Updated.connect(update)
a.show()
sys.exit(app.exec_()) | 29.675325 | 71 | 0.760175 | 290 | 2,285 | 5.831034 | 0.262069 | 0.070964 | 0.05204 | 0.037256 | 0.17741 | 0.087522 | 0.067416 | 0.067416 | 0.067416 | 0.067416 | 0 | 0.00197 | 0.111597 | 2,285 | 77 | 72 | 29.675325 | 0.831034 | 0 | 0 | 0.171875 | 0 | 0 | 0.014147 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.125 | null | null | 0.015625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5069c552bceb85ec98f646d7fca3218a5cca5f2d | 1,511 | gyp | Python | test/win/compiler-flags/calling-convention.gyp | chlorm-forks/gyp | a8921fcaab1a18c8cf7e4ab09ceb940e336918ec | [
"BSD-3-Clause"
] | 2,151 | 2020-04-18T07:31:17.000Z | 2022-03-31T08:39:18.000Z | test/win/compiler-flags/calling-convention.gyp | chlorm-forks/gyp | a8921fcaab1a18c8cf7e4ab09ceb940e336918ec | [
"BSD-3-Clause"
] | 1,432 | 2017-06-21T04:08:48.000Z | 2020-08-25T16:21:15.000Z | test/win/compiler-flags/calling-convention.gyp | chlorm-forks/gyp | a8921fcaab1a18c8cf7e4ab09ceb940e336918ec | [
"BSD-3-Clause"
] | 338 | 2020-04-18T08:03:10.000Z | 2022-03-29T12:33:22.000Z | # Copyright (c) 2014 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
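# Each target compiles the same source with a different MSVC calling convention;
# VCCLCompilerTool.CallingConvention maps 0 = __cdecl (/Gd), 1 = __fastcall (/Gr),
# 2 = __stdcall (/Gz) and 3 = __vectorcall (/Gv, available from VS 2013 on).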
{
'targets': [
{
'target_name': 'test_cdecl',
'type': 'loadable_module',
'msvs_settings': {
'VCCLCompilerTool': {
'CallingConvention': 0,
},
},
'sources': [
'calling-convention.cc',
'calling-convention-cdecl.def',
],
},
{
'target_name': 'test_fastcall',
'type': 'loadable_module',
'msvs_settings': {
'VCCLCompilerTool': {
'CallingConvention': 1,
},
},
'sources': [
'calling-convention.cc',
'calling-convention-fastcall.def',
],
},
{
'target_name': 'test_stdcall',
'type': 'loadable_module',
'msvs_settings': {
'VCCLCompilerTool': {
'CallingConvention': 2,
},
},
'sources': [
'calling-convention.cc',
'calling-convention-stdcall.def',
],
},
],
'conditions': [
['MSVS_VERSION[0:4]>="2013"', {
'targets': [
{
'target_name': 'test_vectorcall',
'type': 'loadable_module',
'msvs_settings': {
'VCCLCompilerTool': {
'CallingConvention': 3,
},
},
'sources': [
'calling-convention.cc',
'calling-convention-vectorcall.def',
],
},
],
}],
],
}
| 22.552239 | 72 | 0.483786 | 116 | 1,511 | 6.155172 | 0.474138 | 0.190476 | 0.078431 | 0.123249 | 0.593838 | 0.593838 | 0.352941 | 0 | 0 | 0 | 0 | 0.014478 | 0.360026 | 1,511 | 66 | 73 | 22.893939 | 0.723888 | 0.09861 | 0 | 0.467742 | 0 | 0 | 0.469072 | 0.170103 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
506ad7660bd1038dfb11c4b2c1efad4b5755bce2 | 679 | py | Python | agents/base_agent.py | IanYHWu/msc_2021 | 0ae09ed392cce5fdf0e85d1f96b7af82900835f8 | [
"MIT"
] | null | null | null | agents/base_agent.py | IanYHWu/msc_2021 | 0ae09ed392cce5fdf0e85d1f96b7af82900835f8 | [
"MIT"
] | null | null | null | agents/base_agent.py | IanYHWu/msc_2021 | 0ae09ed392cce5fdf0e85d1f96b7af82900835f8 | [
"MIT"
] | null | null | null |
class BaseAgent(object):
"""
Class for the basic agent objects.
"""
def __init__(self,
env,
actor_critic,
storage,
device):
"""
env: (gym.Env) environment following the openAI Gym API
"""
self.env = env
self.actor_critic = actor_critic
self.storage = storage
self.device = device
self.t = 0
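# t: step counter; presumably incremented by subclasses as environment steps are taken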
def predict(self, obs, hidden_state, done):
"""
Predict the action with the given input
"""
pass
def optimize(self):
"""
Train the neural network model
"""
pass
| 19.4 | 63 | 0.483063 | 67 | 679 | 4.776119 | 0.567164 | 0.103125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002591 | 0.431517 | 679 | 34 | 64 | 19.970588 | 0.826425 | 0.237113 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.133333 | 0 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
506ec8d37920dc02fb0964cd084a15f385c1f491 | 2,210 | py | Python | pysnmp-with-texts/RBT-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/RBT-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/RBT-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module RBT-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/RBT-MIB
# Produced by pysmi-0.3.4 at Wed May 1 13:18:46 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, Integer, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "OctetString", "Integer", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueSizeConstraint, ValueRangeConstraint, SingleValueConstraint, ConstraintsUnion, ConstraintsIntersection = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueSizeConstraint", "ValueRangeConstraint", "SingleValueConstraint", "ConstraintsUnion", "ConstraintsIntersection")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
Counter32, TimeTicks, ModuleIdentity, Integer32, MibIdentifier, iso, Gauge32, MibScalar, MibTable, MibTableRow, MibTableColumn, IpAddress, ObjectIdentity, Unsigned32, Bits, NotificationType, enterprises, Counter64 = mibBuilder.importSymbols("SNMPv2-SMI", "Counter32", "TimeTicks", "ModuleIdentity", "Integer32", "MibIdentifier", "iso", "Gauge32", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "IpAddress", "ObjectIdentity", "Unsigned32", "Bits", "NotificationType", "enterprises", "Counter64")
DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention")
rbt = ModuleIdentity((1, 3, 6, 1, 4, 1, 17163))
rbt.setRevisions(('2009-09-23 00:00',))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
if mibBuilder.loadTexts: rbt.setRevisionsDescriptions(('Updated contact information',))
if mibBuilder.loadTexts: rbt.setLastUpdated('200909230000Z')
if mibBuilder.loadTexts: rbt.setOrganization('Riverbed Technology, Inc.')
if mibBuilder.loadTexts: rbt.setContactInfo(' Riverbed Technical Support support@riverbed.com')
if mibBuilder.loadTexts: rbt.setDescription('Riverbed Technology MIB')
products = MibIdentifier((1, 3, 6, 1, 4, 1, 17163, 1))
mibBuilder.exportSymbols("RBT-MIB", products=products, PYSNMP_MODULE_ID=rbt, rbt=rbt)
| 88.4 | 505 | 0.776923 | 237 | 2,210 | 7.236287 | 0.468354 | 0.080466 | 0.061224 | 0.069971 | 0.337026 | 0.221574 | 0.221574 | 0.208746 | 0.208746 | 0.208746 | 0 | 0.059724 | 0.083258 | 2,210 | 24 | 506 | 92.083333 | 0.786772 | 0.139367 | 0 | 0 | 0 | 0 | 0.325938 | 0.023244 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
50732e512e8aa60eef305eb3f7a68cdf0138b721 | 1,186 | py | Python | archive/lym_project/treeano-master/benchmarks/fractional_max_pooling.py | peterdonnelly1/u24_lymphocyte | dff7ceed404c38582feb81aa9b8a55d80ada0f77 | [
"BSD-3-Clause"
] | 45 | 2015-04-26T04:45:51.000Z | 2022-01-24T15:03:55.000Z | archive/lym_project/treeano-master/benchmarks/fractional_max_pooling.py | peterdonnelly1/u24_lymphocyte | dff7ceed404c38582feb81aa9b8a55d80ada0f77 | [
"BSD-3-Clause"
] | 8 | 2018-07-20T20:54:51.000Z | 2020-06-12T05:36:04.000Z | archive/lym_project/treeano-master/benchmarks/fractional_max_pooling.py | peterdonnelly1/u24_lymphocyte | dff7ceed404c38582feb81aa9b8a55d80ada0f77 | [
"BSD-3-Clause"
] | 22 | 2018-05-21T23:57:20.000Z | 2022-02-21T00:48:32.000Z | import numpy as np
import theano
import theano.tensor as T
import treeano.nodes as tn
from treeano.sandbox.nodes import fmp
fX = theano.config.floatX
# TODO change me
node = "fmp2"
compute_grad = True
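# The 1.414 used below is ~sqrt(2): each fractional max-pooling step roughly
# halves the feature-map area instead of quartering it like 2x2 max pooling.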
if node == "mp":
n = tn.MaxPool2DNode("mp", pool_size=(2, 2))
elif node == "fmp":
n = fmp.DisjointPseudorandomFractionalMaxPool2DNode("fmp1",
fmp_alpha=1.414,
fmp_u=0.5)
elif node == "fmp2":
n = fmp.OverlappingRandomFractionalMaxPool2DNode("fmp2",
pool_size=(1.414, 1.414))
else:
assert False
network = tn.SequentialNode(
"s",
[tn.InputNode("i", shape=(1, 1, 32, 32)),
n]
).network()
if compute_grad:
i = network["i"].get_vw("default").variable
s = network["s"].get_vw("default").variable
fn = network.function(["i"], [T.grad(s.sum(), i)])
else:
fn = network.function(["i"], ["s"])
x = np.random.randn(1, 1, 32, 32).astype(fX)
"""
20150924 results:
%timeit fn(x)
no grad:
mp: 33.7 us
fmp: 77.6 us
fmp2: 1.91 ms
with grad:
mp: 67.1 us
fmp: 162 us
fmp2: 2.66 ms
"""
| 21.178571 | 78 | 0.559022 | 164 | 1,186 | 3.993902 | 0.469512 | 0.018321 | 0.012214 | 0.018321 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074911 | 0.290894 | 1,186 | 55 | 79 | 21.563636 | 0.703924 | 0.011804 | 0 | 0.064516 | 0 | 0 | 0.042677 | 0 | 0 | 0 | 0 | 0.018182 | 0.032258 | 1 | 0 | false | 0 | 0.16129 | 0 | 0.16129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
507882446718b2c00a42bfdd52f4470e8480586d | 5,583 | py | Python | kolab/yk/yk2.py | KuramitsuLab/kolab | 91fa4bae4a440a15291ba2d2690e4e335cbfd21e | [
"MIT"
] | null | null | null | kolab/yk/yk2.py | KuramitsuLab/kolab | 91fa4bae4a440a15291ba2d2690e4e335cbfd21e | [
"MIT"
] | 1 | 2021-11-14T05:38:27.000Z | 2021-11-14T05:38:27.000Z | kolab/yk/yk2.py | KuramitsuLab/kolab | 91fa4bae4a440a15291ba2d2690e4e335cbfd21e | [
"MIT"
] | 7 | 2020-11-02T13:05:44.000Z | 2022-01-09T11:06:04.000Z | from os import read
import random
import sys
import pegtree as pg
import argparse
import csv
from pegtree.optimizer import optimize
peg = pg.grammar('yk.tpeg')
parse = pg.generate(peg)
parser = argparse.ArgumentParser(description='yk for Parameter Handling')
parser.add_argument('--notConv', action='store_true') # only tokenize the Python code, without conversion
parser.add_argument('--diff', action='store_true') # give variable names (name) and literals (val) distinct tokens
parser.add_argument('--shuffle', action='store_true') # assign the special tokens randomly (order is not considered)
parser.add_argument('--both', action='store_true') # emit both the shuffled and the unshuffled version
parser.add_argument('--files', nargs='*') # input files to process
args = parser.parse_args()
token_idx = list(range(1, 7))
def replace_as_special_parameter(s, mapped, token_idx=token_idx, tag=None): # mapped => {'df': '<A>'}
if s in mapped:
return mapped[s]
if tag == 'Name':
x = f'<name{token_idx[len(mapped)]}>'
elif tag == 'Value':
x = f'<val{token_idx[len(mapped)]}>'
else:
x = f'<var{token_idx[len(mapped)]}>'
mapped[s] = x
return x
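# e.g. starting from an empty mapping, replace_as_special_parameter('df', {}, tag='Name')
# returns '<name1>'; a second call with the same string returns the cached token.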
def convert_nothing(tok, doc, mapped, token_idx, diff):
s = str(tok)
if s == ';': # only ';' is rewritten, to the <sep> token
return '<sep>'
return s
def convert_all(tok, doc, mapped, token_idx, diff):
tag = tok.getTag()
s = str(tok)
if diff:
if tag == 'Name':
if s in doc:
in_idx = [i for i, x in enumerate(doc) if x == s]
flag = 0
for idx in in_idx:
try:
if '軸' in doc[idx+1] or '座標' in doc[idx+1]:
flag += 1
except:
pass
if len(in_idx) == flag:
return s
else:
return replace_as_special_parameter(s, mapped, token_idx, tag='Name')
else:
if s.startswith('.'):
s = '. ' + s[1:]
return s
if tag == 'Value':
if s in doc:
return replace_as_special_parameter(s, mapped, token_idx, tag='Value')
s_q1 = f"'{s[1:-1]}'"
if s_q1 in doc:
return replace_as_special_parameter(s_q1, mapped, token_idx, tag='Value')
s_q2 = f'"{s[1:-1]}"'
if s_q2 in doc:
return replace_as_special_parameter(s_q2, mapped, token_idx, tag='Value')
else:
if tag == 'Name':
if s in doc:
return replace_as_special_parameter(s, mapped, token_idx)
else:
if s.startswith('.'):
s = '. ' + s[1:]
return s
if tag == 'Value':
if s in doc:
return replace_as_special_parameter(s, mapped, token_idx)
s_q1 = f"'{s[1:-1]}'"
if s_q1 in doc:
return replace_as_special_parameter(s_q1, mapped, token_idx)
s_q2 = f'"{s[1:-1]}"'
if s_q2 in doc:
return replace_as_special_parameter(s_q2, mapped, token_idx)
return convert_nothing(tok, doc, mapped, token_idx, diff)
def make(code, doc0, convert=convert_all, token_idx=token_idx, diff=False):
mapped = {}
doc = []
for tok in parse(doc0):
s = str(tok)
if tok.getTag() == 'Raw':
q = f"'{s}'"
q2 = f'"{s}"'
if q in code:
doc.append(q)
continue
if q2 in code:
doc.append(q2)
continue
doc.append(s)
ws = [convert(tok, doc, mapped, token_idx, diff) for tok in parse(code)]
code = ' '.join(ws)
ws = []
for idx, tok in enumerate(doc):
if tok.strip() != '':
if tok in mapped:
try:
if '軸' in doc[idx+1] and '座標' in doc[idx+1]:
ws.append(tok)
else:
ws.append(mapped[tok])
except:
ws.append(mapped[tok])
else:
ws.append(tok)
doc = ' '.join(ws)
return code, doc
def read_tsv(input_filename, output_filename=None):
with open(input_filename) as f:
reader = csv.reader(f, delimiter='\t')
if output_filename != None:
writer = csv.writer(output_filename, delimiter='\t')
for row in reader:
code0 = None
if args.both:
token_idx0 = list(range(1, 7))
code0, doc0 = make(row[0], row[1], convert=convert_all , token_idx=token_idx0, diff=args.diff)
if args.shuffle or args.both:
random.shuffle(token_idx)
if args.notConv:
code, doc = make(row[0], row[1], convert=convert_nothing , token_idx=token_idx, diff=args.diff)
else:
code, doc = make(row[0], row[1], convert=convert_all, token_idx=token_idx, diff=args.diff)
if output_filename == None:
print(code, doc)
if code0 != None and code0 != code:
print(code0, doc0)
else:
writer.writerow([code, doc])
if code0 != None and code0 != code:
writer.writerow([code0, doc0])
if __name__ == '__main__':
if args.files != None:
for filename in args.files:
try:
read_tsv(filename, sys.stdout)
except:
read_tsv(filename)
else:
pass
| 32.649123 | 111 | 0.506538 | 697 | 5,583 | 3.908178 | 0.177905 | 0.076358 | 0.066814 | 0.082599 | 0.39464 | 0.383627 | 0.361233 | 0.32746 | 0.246696 | 0.232012 | 0 | 0.016023 | 0.373992 | 5,583 | 170 | 112 | 32.841176 | 0.763376 | 0.025793 | 0 | 0.380952 | 1 | 0 | 0.061315 | 0.016203 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034014 | false | 0.013605 | 0.047619 | 0 | 0.197279 | 0.013605 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
50826185ee7d341557b552bde864d1a94b521ad3 | 3,480 | py | Python | isc_dhcp_leases/test_lease6.py | dholl/python-isc-dhcp-leases | 7bd07a3a4a7965bd7a6e0a45de7b30914e1bbced | [
"MIT"
] | 111 | 2015-02-11T21:36:40.000Z | 2022-03-18T13:36:12.000Z | isc_dhcp_leases/test_lease6.py | dholl/python-isc-dhcp-leases | 7bd07a3a4a7965bd7a6e0a45de7b30914e1bbced | [
"MIT"
] | 36 | 2015-05-05T12:04:07.000Z | 2021-06-17T12:58:30.000Z | isc_dhcp_leases/test_lease6.py | dholl/python-isc-dhcp-leases | 7bd07a3a4a7965bd7a6e0a45de7b30914e1bbced | [
"MIT"
] | 52 | 2015-05-02T19:31:20.000Z | 2022-03-18T13:36:29.000Z | import datetime
from unittest import TestCase
from isc_dhcp_leases.iscdhcpleases import Lease6, utc
from freezegun import freeze_time
__author__ = 'Martijn Braam <martijn@brixit.nl>'
class TestLease6(TestCase):
def setUp(self):
self.lease_time = datetime.datetime(2015, 8, 18, 16, 55, 37, tzinfo=utc)
self.lease_data = {
'binding': 'state active',
'ends': 'never',
'preferred-life': '375',
'max-life': '600'
}
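# shared fixture: 'ends': 'never' keeps the lease valid forever unless a test
# overrides lease.end explicitly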
def test_init(self):
lease = Lease6("2001:610:600:891d::60", self.lease_data, self.lease_time,
"4dv\\352\\000\\001\\000\\001\\035f\\037\\342\\012\\000'\\000\\000\\000", "na")
self.assertEqual(lease.ip, "2001:610:600:891d::60")
self.assertEqual(lease.host_identifier, b"4dv\xea\x00\x01\x00\x01\x1df\x1f\xe2\n\x00'\x00\x00\x00")
self.assertEqual(lease.valid, True)
self.assertEqual(lease.iaid, 3933627444)
self.assertEqual(lease.duid, b"\x00\x01\x00\x01\x1df\x1f\xe2\n\x00'\x00\x00\x00")
self.assertEqual(lease.active, True)
self.assertEqual(lease.binding_state, 'active')
self.assertEqual(lease.preferred_life, 375)
self.assertEqual(lease.max_life, 600)
self.assertEqual(lease.last_communication, self.lease_time)
self.assertEqual(lease.type, Lease6.NON_TEMPORARY)
def test_repr(self):
lease = Lease6("2001:610:600:891d::60", self.lease_data, self.lease_time,
"4dv\\352\\000\\001\\000\\001\\035f\\037\\342\\012\\000'\\000\\000\\000", "na")
self.assertEqual(repr(lease), '<Lease6 2001:610:600:891d::60>')
def _test_valid(self, now=None):
lease = Lease6("2001:610:600:891d::60", self.lease_data, self.lease_time,
"4dv\\352\\000\\001\\000\\001\\035f\\037\\342\\012\\000'\\000\\000\\000", "na",
now=now)
self.assertTrue(lease.valid) # Lease is forever
lease.end = datetime.datetime(2015, 7, 6, 13, 57, 4, tzinfo=utc)
self.assertTrue(lease.valid) # Lease is before end
lease.end = lease.end - datetime.timedelta(hours=7)
self.assertFalse(lease.valid) # Lease is ended
@freeze_time("2015-07-6 8:15:0")
def test_valid_frozen(self):
self._test_valid()
def test_valid_historical(self):
self._test_valid(
now=datetime.datetime(2015, 7, 6, 8, 15, 0, tzinfo=utc))
def test_eq(self):
lease_a = Lease6("2001:610:600:891d::60", self.lease_data, self.lease_time,
"4dv\\352\\000\\001\\000\\001\\035f\\037\\342\\012\\000'\\000\\000\\000", "na")
lease_b = Lease6("2001:610:600:891d::60", self.lease_data, self.lease_time,
"4dv\\352\\000\\001\\000\\001\\035f\\037\\342\\012\\000'\\000\\000\\000", "na")
self.assertEqual(lease_a, lease_b)
lease_b.ip = "2001:610:600:891d::42"
self.assertNotEqual(lease_a, lease_b)
lease_b.ip = "2001:610:600:891d::60"
lease_b.host_identifier = "gd4\352\000\001\000\001\035b\037\322\012\000'\000\000\000"
self.assertNotEqual(lease_a, lease_b)
def test_naive_time(self):
with self.assertRaises(ValueError):
Lease6("2001:610:600:891d::60", self.lease_data, self.lease_time,
"4dv\\352\\000\\001\\000\\001\\035f\\037\\342\\012\\000'\\000\\000\\000", "na",
now=datetime.datetime.now())
| 43.5 | 107 | 0.60977 | 493 | 3,480 | 4.190669 | 0.229209 | 0.060987 | 0.060987 | 0.067764 | 0.508228 | 0.47241 | 0.407551 | 0.394482 | 0.394482 | 0.394482 | 0 | 0.201473 | 0.219828 | 3,480 | 79 | 108 | 44.050633 | 0.559484 | 0.014655 | 0 | 0.206349 | 0 | 0.142857 | 0.269197 | 0.230949 | 0 | 0 | 0 | 0 | 0.301587 | 1 | 0.126984 | false | 0 | 0.063492 | 0 | 0.206349 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5088c0c6396ddfd3443bddad31b38216e9bda0c1 | 1,330 | py | Python | python/periodic-web-scrapper/scraper/Scraper.py | MarioCodes/ProyectosClaseDAM | df568b4feda8bf3a6cf7cc8e81e7dfa4156dcfd9 | [
"Apache-2.0"
] | null | null | null | python/periodic-web-scrapper/scraper/Scraper.py | MarioCodes/ProyectosClaseDAM | df568b4feda8bf3a6cf7cc8e81e7dfa4156dcfd9 | [
"Apache-2.0"
] | 17 | 2019-06-14T12:30:46.000Z | 2022-02-18T11:38:50.000Z | python/periodic-web-scrapper/scraper/Scraper.py | MarioCodes/ProyectosClaseDAM | df568b4feda8bf3a6cf7cc8e81e7dfa4156dcfd9 | [
"Apache-2.0"
] | null | null | null | '''
Created on Apr 18, 2018
@author: msanchez
'''
from scraper.RequestScraper import RequestScraper
from scraper.HTMLFilter import HTMLFilter
from scraper.NewsFilter import NewsFilter
from scraper.utilities.WebUtilities import WebUtilities
class Scraper(object):
''' Full scrape operation: downloads the page at the given URL, checks the HTTP status code and, if it is OK, runs the scrape & filter steps.
'''
def __init__(self):
'''
Constructor
'''
def scrap(self):
web = self.__download()
result = list()
if(200 == web.status_code):
scraper = RequestScraper(web)
html_news_tags = scraper.scrap_news()
cleaned_tags = self.__clean(html_news_tags)
result = self.__filter(cleaned_tags)
else:
print("There was an error on download operation. Status code: ", str(web.status_code))
return result
def __download(self):
downloader = WebUtilities()
return downloader.download("https://www.heraldo.es/")
def __clean(self, html_tags):
tag_filter = HTMLFilter(html_tags)
return tag_filter.filter()
def __filter(self, unfiltered_tags):
matcher = NewsFilter(unfiltered_tags)
return matcher.search() | 30.930233 | 159 | 0.64812 | 149 | 1,330 | 5.577181 | 0.456376 | 0.052948 | 0.031288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009184 | 0.263158 | 1,330 | 43 | 160 | 30.930233 | 0.838776 | 0.158647 | 0 | 0 | 0 | 0 | 0.072022 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.192308 | false | 0 | 0.153846 | 0 | 0.538462 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
50897cd7ac40da6c45acf1571aad36057bdb2d94 | 2,877 | py | Python | cabot_alert_pushover/models.py | dnelson/cabot-alert-pushover | c10f34e59c9f289e60016e7240c4e6ada611f611 | [
"MIT"
] | null | null | null | cabot_alert_pushover/models.py | dnelson/cabot-alert-pushover | c10f34e59c9f289e60016e7240c4e6ada611f611 | [
"MIT"
] | 1 | 2015-02-09T22:26:48.000Z | 2015-02-09T22:26:48.000Z | cabot_alert_pushover/models.py | packetcollision/cabot-alert-pushover | c10f34e59c9f289e60016e7240c4e6ada611f611 | [
"MIT"
] | 1 | 2015-04-29T15:43:31.000Z | 2015-04-29T15:43:31.000Z | from django.db import models
from django.conf import settings
from django.template import Context, Template
from cabot.cabotapp.alert import AlertPlugin, AlertPluginUserData
from os import environ as env
import requests
pushover_alert_url = "https://api.pushover.net/1/messages.json"
pushover_template = "Service {{ service.name }} {% if service.overall_status == service.PASSING_STATUS %}is back to normal{% else %}reporting {{ service.overall_status }} status{% endif %}: {{ scheme }}://{{ host }}{% url 'service' pk=service.id %}."
class PushoverAlert(AlertPlugin):
name = "Pushover"
author = "Daniel Nelson"
def send_alert(self, service, users, duty_officers):
# Pushover handles repeat alerts, so we can skip them
if service.overall_status == service.old_overall_status:
return
for u in users:
alert = True
priority = 1
try:
data = AlertPluginUserData.objects.get(user=u, title=PushoverAlertUserData.name)
except AlertPluginUserData.DoesNotExist:
# no Pushover settings stored for this user; skip them
continue
if service.overall_status == service.WARNING_STATUS:
if not data.alert_on_warn:
alert = False
priority = 0
elif service.overall_status == service.ERROR_STATUS:
priority = 1
elif service.overall_status == service.CRITICAL_STATUS:
priority = 2
elif service.overall_status == service.PASSING_STATUS:
priority = 0
if service.old_overall_status == service.CRITICAL_STATUS:
# cancel the recurring crit
pass
else:
# something weird happened
alert = False
if not alert:
continue # skip this user but keep alerting the remaining ones
# now let's send
c = Context({
'service': service,
'host': settings.WWW_HTTP_HOST,
'scheme': settings.WWW_SCHEME,
'jenkins_api': settings.JENKINS_API,
})
message = Template(pushover_template).render(c)
self._send_pushover_alert(message, key=data.key, priority=priority)
def _send_pushover_alert(self, message, key, priority=0):
payload = {
'token':env['PUSHOVER_TOKEN'],
'user': key,
'priority': priority,
'title': 'Cabot ALERT',
'message': message,
}
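# Pushover priority 2 is the "emergency" level: the API keeps re-sending the
# alert every `retry` seconds until it is acknowledged or `expire` seconds pass.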
if priority == 2:
payload['retry'] = 60
payload['expire'] = 3600
r = requests.post(pushover_alert_url, data=payload)
class PushoverAlertUserData(AlertPluginUserData):
name = "Pushover Plugin"
key = models.CharField(max_length=32, blank=False, verbose_name="User/Group Key")
alert_on_warn = models.BooleanField(default=False)
| 35.518519 | 250 | 0.583246 | 296 | 2,877 | 5.523649 | 0.418919 | 0.07156 | 0.085627 | 0.099083 | 0.155352 | 0.04893 | 0 | 0 | 0 | 0 | 0 | 0.00826 | 0.326729 | 2,877 | 80 | 251 | 35.9625 | 0.835829 | 0.040667 | 0 | 0.163934 | 0 | 0.016393 | 0.149183 | 0.023956 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032787 | false | 0.065574 | 0.098361 | 0 | 0.278689 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
508e54ebec8de34c224abfb2a9e4b84d25629fa1 | 6,652 | py | Python | aws_automation/s3.py | VCCRI/Scavenger | a17f48b242a1cd8d626ad3a21439fa2c80b06533 | [
"MIT"
] | 4 | 2018-07-25T15:52:00.000Z | 2020-10-20T00:47:35.000Z | aws_automation/s3.py | VCCRI/Scavenger | a17f48b242a1cd8d626ad3a21439fa2c80b06533 | [
"MIT"
] | null | null | null | aws_automation/s3.py | VCCRI/Scavenger | a17f48b242a1cd8d626ad3a21439fa2c80b06533 | [
"MIT"
] | null | null | null | # a module that wraps some of the S3 commands
import boto3
from botocore.exceptions import ClientError
from boto3.s3.transfer import S3Transfer
import re
import os
# list the objects in a bucket, logging each key as it is found
def list_bucket(bucket_name, region):
s3 = boto3.resource('s3', region)
bucket = s3.Bucket(bucket_name)
object_list = []
try:
for key in bucket.objects.all():
print(key.key)
object_list.append(key.key)
except ClientError as e:
#print('code: {}, msg: {}, op name: {}'.format([
# e.error_code, e.error_message, e.operation_name]))
#print(e.msg)
print(str(e))
print(e.response)
except Exception as e:
# other response Error keys: Code, Message, BucketName
print(e.response['Error']['Code'])
print(str(e))
print(e.response)
print(e.response['ResponseMetadata']['HTTPStatusCode'])
return object_list
# get list of bucket contents
def get_bucket_list(bucket_name, region):
s3 = boto3.resource('s3', region)
bucket = s3.Bucket(bucket_name)
object_list = []
for key in bucket.objects.all():
object_list.append(key.key)
return object_list
# check bucket exists (efficient version)
# NOTE: the S3 bucket namespace is shared by all AWS users,
# therefore also need to check that we have rights to read & write (+list)
def bucket_exists(bucket, region):
s3 = boto3.resource('s3', region)
exists = True
try:
s3.meta.client.head_bucket(Bucket=bucket)
except ClientError as e:
# If a client error is thrown, then check that it was a 404 error.
# If it was a 404 error, then the bucket does not exist.
error_code = int(e.response['Error']['Code'])
if error_code == 404:
exists = False
return exists
#upload a file
def upload_file(bucket, region, source_file, dest_file):
client = boto3.client('s3', region)
transfer = S3Transfer(client)
transfer.upload_file(source_file, bucket, dest_file)
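# e.g. upload_file('my-bucket', 'us-east-1', '/tmp/report.csv', 'reports/report.csv')
# (bucket and file names here are hypothetical, for illustration only)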
# determine next unique number
def get_next_id(bucket_name, region, prefix):
"""
determines the next sequential numbering for a given folder prefix
e.g. prefix is "01092015Tue-"; if a file exists in the folder, then
it will be of the form "01092015Tue-xxx/somefilename.ext" - where
xxx is some number"; if there is such a file, then next folder will
be 01092015Tue-yyy - where yyy = xxx + 1; otherwise, next folder is
01092015Tue-1
Args:
prefix: a string that represents the absolute folder name
Returns: a string that represents the next folder name in the
sequence
"""
# added () to get group
pattern = re.compile(prefix + '([0-9]+)/')
ids = get_bucket_list(bucket_name, region)
next_num = 1
for name in ids:
match = pattern.match(name)
if match:
# there is only one bracketed group - the number
next_num = max(int(match.groups()[0]) + 1, next_num)
result = prefix + str(next_num)
# want to strip out any "directories" in path & just return id
return result.split('/')[-1]
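# e.g. if the bucket already holds "01092015Tue-3/a.txt", then
# get_next_id(bucket, region, "01092015Tue-") returns "01092015Tue-4"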
# return a list of bucket objects that match a given prefix
#TODO remove default bucket name
def list_by_prefix(bucket_name, region, prefix=''):
""" returns a list of names of bucket objects that start with a
given prefix
Args:
bucket_name: string - the name of the s3 bucket
prefix: string - the prefix of the name (key) of the bucket
objects
Returns:
a list of objects whose name (key) starts with the given prefix
"""
s3 = boto3.resource('s3', region)
bucket = s3.Bucket(bucket_name)
names = []
# osi - object summary iterator
for osi in bucket.objects.filter(
Prefix=prefix):
name = osi.key
names.append(name)
return names
# determine if a given object key exists in the bucket
def key_exists(bucket_name, region, key):
""" indicates if a key (object name) is in the bucket
Args:
bucket_name: string - the name of the s3 bucket
key: string - the name of the object key (file-name)
Returns:
True if key in bucket; False otherwise
"""
if key in list_by_prefix(bucket_name, region, key):
return True
return False
def get_timing_info(bucket_name, region, prefix):
""" gets the timing information for jobs - labelled start & finish
Returns:
a 3-tuple of (finish time, elapsed time string, task name string)
"""
s3 = boto3.resource('s3', region)
bucket = s3.Bucket(bucket_name)
start_dict = {}
finish_dict = {}
# osi - object summary iterator
for osi in bucket.objects.filter(
Prefix=prefix):
name = osi.key
last_mod = osi.last_modified
if 'start' in name:
start_dict[name] = last_mod
if 'finish' in name:
finish_dict[name] = last_mod
results = []
for name, finish_time in finish_dict.items():
start_name = name.replace('finish', 'start')
if start_name in start_dict:
elapsed = str(finish_time - start_dict[start_name])
results.append((finish_time, elapsed, name.replace('finish', 'task').split('/')[-1].
split('.')[0]))
return sorted(results)
# download files matching regex
def download_files(bucket_name, region, prefix='', suffix='', dest_dir=''):
""" downloads files who's path & name match given prefix & suffix
to specified dir
Args:
bucket_name: the name of the s3 bucket to download from
prefix: string - start of full path the s3 file
suffix: string - the end characters of the file (e.g. '.vcf')
dest_dir: string - the (local) directory to which the files are
downloaded
"""
# TODO better to raise ValueError??
assert (prefix or suffix), 'must have a value for either prefix or suffix'
# get rid of '/' at end of dir if exists
if dest_dir.endswith('/'):
dest_dir = dest_dir[:-1]
# create directory in case not exist
if dest_dir:
os.makedirs(dest_dir, exist_ok=True)
else:
# no dir provided - default to current dir
dest_dir = '.'
names = []
client = boto3.client('s3', region)
transfer = S3Transfer(client)
for name in list_by_prefix(bucket_name, region, prefix):
if name.endswith(suffix):
# remove any path from the file name
fname = name.split('/').pop()
# download the file
transfer.download_file(bucket_name, name, dest_dir + '/' + fname)
| 34.827225 | 96 | 0.637853 | 922 | 6,652 | 4.508677 | 0.238612 | 0.048112 | 0.038489 | 0.020447 | 0.237431 | 0.204234 | 0.159731 | 0.135675 | 0.1121 | 0.1121 | 0 | 0.018253 | 0.266987 | 6,652 | 190 | 97 | 35.010526 | 0.83429 | 0.399579 | 0 | 0.362745 | 0 | 0 | 0.040984 | 0 | 0 | 0 | 0 | 0.010526 | 0.009804 | 1 | 0.088235 | false | 0 | 0.04902 | 0 | 0.215686 | 0.068627 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
508f157fb5ef3d702783d7357c19874b1334a4e9 | 7,597 | py | Python | bot/exts/info/codeblock/_instructions.py | zwycl/bot | 862fb070a501ca45cccb481d62f079b3bdb1d16f | [
"MIT",
"BSD-3-Clause"
] | 1 | 2020-12-22T09:13:09.000Z | 2020-12-22T09:13:09.000Z | bot/exts/info/codeblock/_instructions.py | Ezamey/bot | 0b7a1be1dd57a464fc79fcb235b79d75bec43f99 | [
"MIT",
"BSD-3-Clause"
] | null | null | null | bot/exts/info/codeblock/_instructions.py | Ezamey/bot | 0b7a1be1dd57a464fc79fcb235b79d75bec43f99 | [
"MIT",
"BSD-3-Clause"
] | null | null | null | """This module generates and formats instructional messages about fixing Markdown code blocks."""
import logging
from typing import Optional
from bot.exts.info.codeblock import _parsing
log = logging.getLogger(__name__)
_EXAMPLE_PY = "{lang}\nprint('Hello, world!')" # Make sure to escape any Markdown symbols here.
_EXAMPLE_CODE_BLOCKS = (
"\\`\\`\\`{content}\n\\`\\`\\`\n\n"
"**This will result in the following:**\n"
"```{content}```"
)
def _get_example(language: str) -> str:
"""Return an example of a correct code block using `language` for syntax highlighting."""
# Determine the example code to put in the code block based on the language specifier.
if language.lower() in _parsing.PY_LANG_CODES:
log.trace(f"Code block has a Python language specifier `{language}`.")
content = _EXAMPLE_PY.format(lang=language)
elif language:
log.trace(f"Code block has a foreign language specifier `{language}`.")
# It's not feasible to determine what would be a valid example for other languages.
content = f"{language}\n..."
else:
log.trace("Code block has no language specifier.")
content = "\nHello, world!"
return _EXAMPLE_CODE_BLOCKS.format(content=content)
def _get_bad_ticks_message(code_block: _parsing.CodeBlock) -> Optional[str]:
"""Return instructions on using the correct ticks for `code_block`."""
log.trace("Creating instructions for incorrect code block ticks.")
valid_ticks = f"\\{_parsing.BACKTICK}" * 3
instructions = (
"It looks like you are trying to paste code into this channel.\n\n"
"You seem to be using the wrong symbols to indicate where the code block should start. "
f"The correct symbols would be {valid_ticks}, not `{code_block.tick * 3}`."
)
log.trace("Check if the bad ticks code block also has issues with the language specifier.")
addition_msg = _get_bad_lang_message(code_block.content)
if not addition_msg and not code_block.language:
addition_msg = _get_no_lang_message(code_block.content)
# Combine the back ticks message with the language specifier message. The latter will
# already have an example code block.
if addition_msg:
log.trace("Language specifier issue found; appending additional instructions.")
# The first line has double newlines which are not desirable when appending the msg.
addition_msg = addition_msg.replace("\n\n", " ", 1)
# Make the first character of the addition lower case.
instructions += "\n\nFurthermore, " + addition_msg[0].lower() + addition_msg[1:]
else:
log.trace("No issues with the language specifier found.")
example_blocks = _get_example(code_block.language)
instructions += f"\n\n**Here is an example of how it should look:**\n{example_blocks}"
return instructions
def _get_no_ticks_message(content: str) -> Optional[str]:
"""If `content` is Python/REPL code, return instructions on using code blocks."""
log.trace("Creating instructions for a missing code block.")
if _parsing.is_python_code(content):
example_blocks = _get_example("python")
return (
"It looks like you're trying to paste code into this channel.\n\n"
"Discord has support for Markdown, which allows you to post code with full "
"syntax highlighting. Please use these whenever you paste code, as this "
"helps improve the legibility and makes it easier for us to help you.\n\n"
f"**To do this, use the following method:**\n{example_blocks}"
)
else:
log.trace("Aborting missing code block instructions: content is not Python code.")
def _get_bad_lang_message(content: str) -> Optional[str]:
"""
Return instructions on fixing the Python language specifier for a code block.
    If `content` does not have a Python language specifier, return None.
If there's nothing wrong with the language specifier, return None.
"""
log.trace("Creating instructions for a poorly specified language.")
info = _parsing.parse_bad_language(content)
if not info:
log.trace("Aborting bad language instructions: language specified isn't Python.")
return
lines = []
language = info.language
if info.has_leading_spaces:
log.trace("Language specifier was preceded by a space.")
lines.append(f"Make sure there are no spaces between the back ticks and `{language}`.")
if not info.has_terminal_newline:
log.trace("Language specifier was not followed by a newline.")
lines.append(
f"Make sure you put your code on a new line following `{language}`. "
f"There must not be any spaces after `{language}`."
)
if lines:
lines = " ".join(lines)
example_blocks = _get_example(language)
# Note that _get_bad_ticks_message expects the first line to have two newlines.
return (
f"It looks like you incorrectly specified a language for your code block.\n\n{lines}"
f"\n\n**Here is an example of how it should look:**\n{example_blocks}"
)
else:
log.trace("Nothing wrong with the language specifier; no instructions to return.")
def _get_no_lang_message(content: str) -> Optional[str]:
"""
Return instructions on specifying a language for a code block.
If `content` is not valid Python or Python REPL code, return None.
"""
log.trace("Creating instructions for a missing language.")
if _parsing.is_python_code(content):
example_blocks = _get_example("python")
# Note that _get_bad_ticks_message expects the first line to have two newlines.
return (
"It looks like you pasted Python code without syntax highlighting.\n\n"
"Please use syntax highlighting to improve the legibility of your code and make "
"it easier for us to help you.\n\n"
f"**To do this, use the following method:**\n{example_blocks}"
)
else:
log.trace("Aborting missing language instructions: content is not Python code.")
def get_instructions(content: str) -> Optional[str]:
"""
Parse `content` and return code block formatting instructions if something is wrong.
Return None if `content` lacks code block formatting issues.
"""
log.trace("Getting formatting instructions.")
blocks = _parsing.find_code_blocks(content)
if blocks is None:
log.trace("At least one valid code block found; no instructions to return.")
return
if not blocks:
log.trace("No code blocks were found in message.")
instructions = _get_no_ticks_message(content)
else:
log.trace("Searching results for a code block with invalid ticks.")
block = next((block for block in blocks if block.tick != _parsing.BACKTICK), None)
if block:
log.trace("A code block exists but has invalid ticks.")
instructions = _get_bad_ticks_message(block)
else:
log.trace("A code block exists but is missing a language.")
block = blocks[0]
# Check for a bad language first to avoid parsing content into an AST.
instructions = _get_bad_lang_message(block.content)
if not instructions:
instructions = _get_no_lang_message(block.content)
if instructions:
instructions += "\nYou can **edit your original message** to correct your code block."
return instructions
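
# Example flow (hypothetical message text; get_instructions is the public
# entry point, the underscore-prefixed helpers are internal):
#   get_instructions("print('hello')")    # bare code, no ticks -> instructions
#   get_instructions(valid_fenced_block)  # nothing to fix      -> None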
| 41.064865 | 97 | 0.675925 | 1,034 | 7,597 | 4.848162 | 0.212766 | 0.052065 | 0.016756 | 0.023938 | 0.309196 | 0.219828 | 0.203072 | 0.174347 | 0.141632 | 0.107321 | 0 | 0.001033 | 0.235619 | 7,597 | 184 | 98 | 41.288043 | 0.862235 | 0.198499 | 0 | 0.17094 | 1 | 0 | 0.441981 | 0.029843 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0 | 0.025641 | 0 | 0.145299 | 0.008547 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
50993243321f041c53a88a278ea5aece495a4f45 | 9,453 | py | Python | pysnmp/TRANGO-APEX-TRAP-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 11 | 2021-02-02T16:27:16.000Z | 2021-08-31T06:22:49.000Z | pysnmp/TRANGO-APEX-TRAP-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 75 | 2021-02-24T17:30:31.000Z | 2021-12-08T00:01:18.000Z | pysnmp/TRANGO-APEX-TRAP-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module TRANGO-APEX-TRAP-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/TRANGO-APEX-TRAP-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 21:19:34 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, Integer, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "OctetString", "Integer", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsIntersection, ConstraintsUnion, SingleValueConstraint, ValueRangeConstraint, ValueSizeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsIntersection", "ConstraintsUnion", "SingleValueConstraint", "ValueRangeConstraint", "ValueSizeConstraint")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
Counter64, MibScalar, MibTable, MibTableRow, MibTableColumn, MibIdentifier, NotificationType, IpAddress, Gauge32, Unsigned32, TimeTicks, iso, ModuleIdentity, Bits, Counter32, Integer32, ObjectIdentity = mibBuilder.importSymbols("SNMPv2-SMI", "Counter64", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "MibIdentifier", "NotificationType", "IpAddress", "Gauge32", "Unsigned32", "TimeTicks", "iso", "ModuleIdentity", "Bits", "Counter32", "Integer32", "ObjectIdentity")
DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention")
MibScalar, MibTable, MibTableRow, MibTableColumn, apex, NotificationType, Unsigned32, ModuleIdentity, ObjectIdentity = mibBuilder.importSymbols("TRANGO-APEX-MIB", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "apex", "NotificationType", "Unsigned32", "ModuleIdentity", "ObjectIdentity")
class DisplayString(OctetString):
pass
trangotrap = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6))
trapReboot = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 1))
if mibBuilder.loadTexts: trapReboot.setStatus('current')
trapStartUp = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 2))
if mibBuilder.loadTexts: trapStartUp.setStatus('current')
traplock = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 3))
trapModemLock = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 3, 1))
if mibBuilder.loadTexts: trapModemLock.setStatus('current')
trapTimingLock = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 3, 2))
if mibBuilder.loadTexts: trapTimingLock.setStatus('current')
trapInnerCodeLock = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 3, 3))
if mibBuilder.loadTexts: trapInnerCodeLock.setStatus('current')
trapEqualizerLock = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 3, 4))
if mibBuilder.loadTexts: trapEqualizerLock.setStatus('current')
trapFrameSyncLock = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 3, 5))
if mibBuilder.loadTexts: trapFrameSyncLock.setStatus('current')
trapthreshold = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4))
trapmse = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 1))
trapMSEMinThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 1, 1))
if mibBuilder.loadTexts: trapMSEMinThreshold.setStatus('current')
trapMSEMaxThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 1, 2))
if mibBuilder.loadTexts: trapMSEMaxThreshold.setStatus('current')
trapber = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 2))
trapBERMinThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 2, 1))
if mibBuilder.loadTexts: trapBERMinThreshold.setStatus('current')
trapBERMaxThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 2, 2))
if mibBuilder.loadTexts: trapBERMaxThreshold.setStatus('current')
trapfer = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 3))
trapFERMinThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 3, 1))
if mibBuilder.loadTexts: trapFERMinThreshold.setStatus('current')
trapFERMaxThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 3, 2))
if mibBuilder.loadTexts: trapFERMaxThreshold.setStatus('current')
traprssi = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 4))
trapRSSIMinThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 4, 1))
if mibBuilder.loadTexts: trapRSSIMinThreshold.setStatus('current')
trapRSSIMaxThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 4, 2))
if mibBuilder.loadTexts: trapRSSIMaxThreshold.setStatus('current')
trapidutemp = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 5))
trapIDUTempMinThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 5, 1))
if mibBuilder.loadTexts: trapIDUTempMinThreshold.setStatus('current')
trapIDUTempMaxThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 5, 2))
if mibBuilder.loadTexts: trapIDUTempMaxThreshold.setStatus('current')
trapodutemp = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 6))
trapODUTempMinThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 6, 1))
if mibBuilder.loadTexts: trapODUTempMinThreshold.setStatus('current')
trapODUTempMaxThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 6, 2))
if mibBuilder.loadTexts: trapODUTempMaxThreshold.setStatus('current')
trapinport = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 7))
trapInPortUtilMinThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 7, 1))
if mibBuilder.loadTexts: trapInPortUtilMinThreshold.setStatus('current')
trapInPortUtilMaxThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 7, 2))
if mibBuilder.loadTexts: trapInPortUtilMaxThreshold.setStatus('current')
trapoutport = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 8))
trapOutPortUtilMinThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 8, 1))
if mibBuilder.loadTexts: trapOutPortUtilMinThreshold.setStatus('current')
trapOutPortUtilMaxThreshold = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 4, 8, 2))
if mibBuilder.loadTexts: trapOutPortUtilMaxThreshold.setStatus('current')
trapstandby = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 5))
trapStandbyLinkDown = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 5, 1))
if mibBuilder.loadTexts: trapStandbyLinkDown.setStatus('current')
trapStandbyLinkUp = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 5, 2))
if mibBuilder.loadTexts: trapStandbyLinkUp.setStatus('current')
trapSwitchover = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 5, 3))
if mibBuilder.loadTexts: trapSwitchover.setStatus('current')
trapeth = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 6))
trapethstatus = MibIdentifier((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 6, 1))
trapEth1StatusUpdate = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 6, 1, 1))
if mibBuilder.loadTexts: trapEth1StatusUpdate.setStatus('current')
trapEth2StatusUpdate = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 6, 1, 2))
if mibBuilder.loadTexts: trapEth2StatusUpdate.setStatus('current')
trapEth3StatusUpdate = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 6, 1, 3))
if mibBuilder.loadTexts: trapEth3StatusUpdate.setStatus('current')
trapEth4StatusUpdate = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 6, 1, 4))
if mibBuilder.loadTexts: trapEth4StatusUpdate.setStatus('current')
trapDownShift = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 8))
if mibBuilder.loadTexts: trapDownShift.setStatus('current')
trapRapidPortShutdown = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 9))
if mibBuilder.loadTexts: trapRapidPortShutdown.setStatus('current')
trapRPSPortUp = NotificationType((1, 3, 6, 1, 4, 1, 5454, 1, 60, 6, 10))
if mibBuilder.loadTexts: trapRPSPortUp.setStatus('current')
mibBuilder.exportSymbols("TRANGO-APEX-TRAP-MIB", trapber=trapber, trapEth3StatusUpdate=trapEth3StatusUpdate, DisplayString=DisplayString, trapFERMinThreshold=trapFERMinThreshold, trapStandbyLinkDown=trapStandbyLinkDown, trapInPortUtilMinThreshold=trapInPortUtilMinThreshold, trapMSEMinThreshold=trapMSEMinThreshold, trapRSSIMaxThreshold=trapRSSIMaxThreshold, traprssi=traprssi, trapStandbyLinkUp=trapStandbyLinkUp, trapIDUTempMinThreshold=trapIDUTempMinThreshold, trapRapidPortShutdown=trapRapidPortShutdown, trangotrap=trangotrap, trapStartUp=trapStartUp, trapMSEMaxThreshold=trapMSEMaxThreshold, trapSwitchover=trapSwitchover, traplock=traplock, trapethstatus=trapethstatus, trapEth2StatusUpdate=trapEth2StatusUpdate, trapodutemp=trapodutemp, trapinport=trapinport, trapReboot=trapReboot, trapthreshold=trapthreshold, trapmse=trapmse, trapEth4StatusUpdate=trapEth4StatusUpdate, trapIDUTempMaxThreshold=trapIDUTempMaxThreshold, trapFrameSyncLock=trapFrameSyncLock, trapOutPortUtilMinThreshold=trapOutPortUtilMinThreshold, trapInnerCodeLock=trapInnerCodeLock, trapfer=trapfer, trapTimingLock=trapTimingLock, trapFERMaxThreshold=trapFERMaxThreshold, trapstandby=trapstandby, trapModemLock=trapModemLock, trapInPortUtilMaxThreshold=trapInPortUtilMaxThreshold, trapOutPortUtilMaxThreshold=trapOutPortUtilMaxThreshold, trapoutport=trapoutport, trapODUTempMinThreshold=trapODUTempMinThreshold, trapDownShift=trapDownShift, trapBERMinThreshold=trapBERMinThreshold, trapRPSPortUp=trapRPSPortUp, trapEqualizerLock=trapEqualizerLock, trapeth=trapeth, trapRSSIMinThreshold=trapRSSIMinThreshold, trapEth1StatusUpdate=trapEth1StatusUpdate, trapidutemp=trapidutemp, trapODUTempMaxThreshold=trapODUTempMaxThreshold, trapBERMaxThreshold=trapBERMaxThreshold)
| 95.484848 | 1,742 | 0.756056 | 1,144 | 9,453 | 6.247378 | 0.131119 | 0.015111 | 0.020148 | 0.026305 | 0.353155 | 0.2762 | 0.2762 | 0.2762 | 0.2762 | 0.248216 | 0 | 0.100188 | 0.101449 | 9,453 | 98 | 1,743 | 96.459184 | 0.741229 | 0.035333 | 0 | 0 | 0 | 0 | 0.088474 | 0.00483 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.011111 | 0.077778 | 0 | 0.088889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
509af6bcf44faa152d90bad8b829b7b4847920a9 | 359 | py | Python | example/controller/tests/helper/security/web/csrf/verify/by_value.py | donghak-shin/dp-tornado | 095bb293661af35cce5f917d8a2228d273489496 | [
"MIT"
] | 18 | 2015-04-07T14:28:39.000Z | 2020-02-08T14:03:38.000Z | example/controller/tests/helper/security/web/csrf/verify/by_value.py | donghak-shin/dp-tornado | 095bb293661af35cce5f917d8a2228d273489496 | [
"MIT"
] | 7 | 2016-10-05T05:14:06.000Z | 2021-05-20T02:07:22.000Z | example/controller/tests/helper/security/web/csrf/verify/by_value.py | donghak-shin/dp-tornado | 095bb293661af35cce5f917d8a2228d273489496 | [
"MIT"
] | 11 | 2015-12-15T09:49:39.000Z | 2021-09-06T18:38:21.000Z | # -*- coding: utf-8 -*-
from dp_tornado.engine.controller import Controller
class ByValueController(Controller):
def get(self):
param_key = 'csrf'
if not self.helper.security.web.csrf.verify_token(controller=self, value=self.get_argument(param_key)):
return self.parent.finish_with_error(400)
self.finish('done')
| 23.933333 | 111 | 0.688022 | 46 | 359 | 5.217391 | 0.717391 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013793 | 0.192201 | 359 | 14 | 112 | 25.642857 | 0.813793 | 0.058496 | 0 | 0 | 0 | 0 | 0.02381 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
509bc06e0b8597b7dcfcd3d366bc4b0a1b33cced | 1,913 | py | Python | dis_snek/api/http/route.py | BoredManCodes/Dis-Snek | 662dbc3f86c133fd704c22d3d6d55af5ee1f6f5b | [
"MIT"
] | null | null | null | dis_snek/api/http/route.py | BoredManCodes/Dis-Snek | 662dbc3f86c133fd704c22d3d6d55af5ee1f6f5b | [
"MIT"
] | null | null | null | dis_snek/api/http/route.py | BoredManCodes/Dis-Snek | 662dbc3f86c133fd704c22d3d6d55af5ee1f6f5b | [
"MIT"
] | null | null | null | from typing import TYPE_CHECKING, Any, ClassVar, Optional
from urllib.parse import quote as _uriquote
if TYPE_CHECKING:
from dis_snek.models.discord.snowflake import Snowflake_Type
__all__ = ["Route"]
class Route:
BASE: ClassVar[str] = "https://discord.com/api/v9"
path: str
params: dict[str, str | int]
webhook_id: Optional["Snowflake_Type"]
webhook_token: Optional[str]
def __init__(self, method: str, path: str, **parameters: Any):
self.path: str = path
self.method: str = method
self.params = parameters
self.channel_id = parameters.get("channel_id")
self.guild_id = parameters.get("guild_id")
self.webhook_id = parameters.get("webhook_id")
self.webhook_token = parameters.get("webhook_token")
self.known_bucket: Optional[str] = None
def __eq__(self, other):
if isinstance(other, Route):
return self.rl_bucket == other.rl_bucket
return NotImplemented
def __hash__(self):
return hash(self.rl_bucket)
def __repr__(self):
return f"<Route {self.endpoint}>"
def __str__(self):
return self.endpoint
@property
def rl_bucket(self) -> str:
"""This route's full rate limit bucket"""
if self.known_bucket:
return self.known_bucket
if self.webhook_token:
return f"{self.webhook_id}{self.webhook_token}:{self.channel_id}:{self.guild_id}:{self.endpoint}"
return f"{self.channel_id}:{self.guild_id}:{self.endpoint}"
@property
def endpoint(self) -> str:
"""The endpoint for this route"""
return f"{self.method} {self.path}"
@property
def url(self) -> str:
"""The full url for this route"""
return f"{self.BASE}{self.path}".format_map(
{k: _uriquote(v) if isinstance(v, str) else v for k, v in self.params.items()}
)
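
if __name__ == "__main__":
    # Minimal usage sketch (hypothetical endpoint; any Discord-style path
    # with {placeholders} works the same way):
    route = Route("GET", "/channels/{channel_id}/messages", channel_id=1234)
    print(route.url)        # https://discord.com/api/v9/channels/1234/messages
    print(route.rl_bucket)  # identifies the rate-limit bucket this request hits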
| 28.984848 | 109 | 0.634605 | 250 | 1,913 | 4.632 | 0.288 | 0.041451 | 0.037997 | 0.046632 | 0.162349 | 0.1019 | 0.062176 | 0.062176 | 0 | 0 | 0 | 0.000692 | 0.244642 | 1,913 | 65 | 110 | 29.430769 | 0.800692 | 0.047569 | 0 | 0.066667 | 0 | 0.022222 | 0.161683 | 0.087486 | 0 | 0 | 0 | 0 | 0 | 1 | 0.177778 | false | 0 | 0.066667 | 0.066667 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
50a3bc0cc77af6e3f24e20ed927fe327a30a1517 | 695 | py | Python | Part-03-Understanding-Software-Crafting-Your-Own-Tools/models/edx-platform/lms/djangoapps/survey/admin.py | osoco/better-ways-of-thinking-about-software | 83e70d23c873509e22362a09a10d3510e10f6992 | [
"MIT"
] | 3 | 2021-12-15T04:58:18.000Z | 2022-02-06T12:15:37.000Z | Part-03-Understanding-Software-Crafting-Your-Own-Tools/models/edx-platform/lms/djangoapps/survey/admin.py | osoco/better-ways-of-thinking-about-software | 83e70d23c873509e22362a09a10d3510e10f6992 | [
"MIT"
] | null | null | null | Part-03-Understanding-Software-Crafting-Your-Own-Tools/models/edx-platform/lms/djangoapps/survey/admin.py | osoco/better-ways-of-thinking-about-software | 83e70d23c873509e22362a09a10d3510e10f6992 | [
"MIT"
] | 1 | 2019-01-02T14:38:50.000Z | 2019-01-02T14:38:50.000Z | """
Provide accessors to these models via the Django Admin pages
"""
from django import forms
from django.contrib import admin
from lms.djangoapps.survey.models import SurveyForm
class SurveyFormAdminForm(forms.ModelForm):
"""Form providing validation of SurveyForm content."""
class Meta:
model = SurveyForm
fields = ('name', 'form')
def clean_form(self):
"""Validate the HTML template."""
form = self.cleaned_data["form"]
SurveyForm.validate_form_html(form)
return form
class SurveyFormAdmin(admin.ModelAdmin):
"""Admin for SurveyForm"""
form = SurveyFormAdminForm
admin.site.register(SurveyForm, SurveyFormAdmin)
| 21.71875 | 60 | 0.700719 | 77 | 695 | 6.272727 | 0.571429 | 0.041408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204317 | 695 | 31 | 61 | 22.419355 | 0.873418 | 0.227338 | 0 | 0 | 0 | 0 | 0.023346 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.214286 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
50a5a6f4507fa5ed92be275eb4339130e457b023 | 6,369 | py | Python | calico/etcddriver/test/test_hwm.py | ozdanborne/felix | 5eff313e6498b3a7d775aa16cb09fd4578331701 | [
"Apache-2.0"
] | 6 | 2016-10-18T04:04:25.000Z | 2016-10-18T04:06:49.000Z | calico/etcddriver/test/test_hwm.py | ozdanborne/felix | 5eff313e6498b3a7d775aa16cb09fd4578331701 | [
"Apache-2.0"
] | 1 | 2021-06-01T21:45:37.000Z | 2021-06-01T21:45:37.000Z | calico/etcddriver/test/test_hwm.py | ozdanborne/felix | 5eff313e6498b3a7d775aa16cb09fd4578331701 | [
"Apache-2.0"
] | 2 | 2018-10-31T08:55:19.000Z | 2019-04-16T02:14:50.000Z | # -*- coding: utf-8 -*-
# Copyright (c) 2015-2016 Tigera, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
test_hwm
~~~~~~~~
Tests for high water mark tracking function.
"""
import logging
from unittest import TestCase
from mock import Mock, call, patch
from calico.etcddriver import hwm
from calico.etcddriver.hwm import HighWaterTracker
_log = logging.getLogger(__name__)
class TestHighWaterTracker(TestCase):
def setUp(self):
self.hwm = HighWaterTracker()
def test_mainline(self):
# Test merging of updates between a snapshot with etcd_index 10 and
# updates coming in afterwards with indexes 11, 12, ...
# We use prefix "/a/$" because $ is not allowed in the trie so it
# implicitly tests encoding/decoding is being properly applied.
old_hwm = self.hwm.update_hwm("/a/$/c", 9) # Pre-snapshot
self.assertEqual(old_hwm, None)
old_hwm = self.hwm.update_hwm("/b/c/d", 9) # Pre-snapshot
self.assertEqual(old_hwm, None)
old_hwm = self.hwm.update_hwm("/j/c/d", 9) # Pre-snapshot
self.assertEqual(old_hwm, None)
self.assertEqual(len(self.hwm), 3)
# While merging a snapshot we track deletions.
self.hwm.start_tracking_deletions()
# Send in some keys from the snapshot.
old_hwm = self.hwm.update_hwm("/a/$/c", 10) # From snapshot
self.assertEqual(old_hwm, 9)
old_hwm = self.hwm.update_hwm("/a/$/d", 10) # From snapshot
self.assertEqual(old_hwm, None)
old_hwm = self.hwm.update_hwm("/d/e/f", 10) # From snapshot
self.assertEqual(old_hwm, None)
self.assertEqual(len(self.hwm), 5)
# This key is first seen in the event stream, so the snapshot version
# should be ignored.
old_hwm = self.hwm.update_hwm("/a/h/i", 11) # From events
self.assertEqual(old_hwm, None)
old_hwm = self.hwm.update_hwm("/a/h/i", 10) # From snapshot
self.assertEqual(old_hwm, 11)
old_hwm = self.hwm.update_hwm("/a/h/i", 12) # From events
self.assertEqual(old_hwm, 11) # Still 11, snapshot ignored.
self.assertEqual(len(self.hwm), 6)
# Then a whole subtree gets deleted by the events.
deleted_keys = self.hwm.store_deletion("/a/$", 13)
self.assertEqual(set(deleted_keys), set(["/a/$/c", "/a/$/d"]))
self.assertEqual(len(self.hwm), 4)
        # But afterwards, when we see a snapshot key within the subtree, it
        # should be ignored.
old_hwm = self.hwm.update_hwm("/a/$/e", 10)
self.assertEqual(old_hwm, 13) # Returns the etcd_index of the delete.
# Then a new update from the event stream, recreates the directory.
old_hwm = self.hwm.update_hwm("/a/$/f", 14)
self.assertEqual(old_hwm, None)
self.assertEqual(len(self.hwm), 5)
# And subsequent updates are processed ignoring the delete.
old_hwm = self.hwm.update_hwm("/a/$/f", 15)
self.assertEqual(old_hwm, 14)
# However, snapshot updates from within the deleted subtree are still
# ignored.
old_hwm = self.hwm.update_hwm("/a/$/e", 10)
self.assertEqual(old_hwm, 13) # Returns the etcd_index of the delete.
old_hwm = self.hwm.update_hwm("/a/$/f", 10)
self.assertEqual(old_hwm, 13) # Returns the etcd_index of the delete.
old_hwm = self.hwm.update_hwm("/a/$/g", 10)
self.assertEqual(old_hwm, 13) # Returns the etcd_index of the delete.
self.assertEqual(len(self.hwm), 5)
        # But ones outside the deleted subtree are not.
old_hwm = self.hwm.update_hwm("/f/g", 10)
self.assertEqual(old_hwm, None)
# And subsequent updates are processed ignoring the delete.
old_hwm = self.hwm.update_hwm("/a/$/f", 16)
self.assertEqual(old_hwm, 15)
# End of snapshot: we stop tracking deletions, which should free up the
# resources.
self.hwm.stop_tracking_deletions()
self.assertEqual(self.hwm._deletion_hwms, None)
        # Then, subsequent updates should be handled normally.
old_hwm = self.hwm.update_hwm("/a/$/f", 17)
self.assertEqual(old_hwm, 16) # From previous event
old_hwm = self.hwm.update_hwm("/g/b/f", 18)
self.assertEqual(old_hwm, None) # Seen for the first time.
old_hwm = self.hwm.update_hwm("/d/e/f", 19)
self.assertEqual(old_hwm, 10) # From the snapshot.
self.assertEqual(len(self.hwm), 7)
# We should be able to find all the keys that weren't seen during
# the snapshot.
old_keys = self.hwm.remove_old_keys(10)
self.assertEqual(set(old_keys), set(["/b/c/d", "/j/c/d"]))
self.assertEqual(len(self.hwm), 5)
# They should now be gone from the index.
old_hwm = self.hwm.update_hwm("/b/c/d", 20)
self.assertEqual(old_hwm, None)
self.assertEqual(len(self.hwm), 6)
class TestKeyEncoding(TestCase):
def test_encode_key(self):
self.assert_enc_dec("/calico/v1/foo/bar", "/calico/v1/foo/bar/")
self.assert_enc_dec("/:_-./foo", "/:_-./foo/")
self.assert_enc_dec("/:_-.~/foo", "/:_-.%7E/foo/")
self.assert_enc_dec("/%/foo", "/%25/foo/")
self.assert_enc_dec(u"/\u01b1/foo", "/%C6%B1/foo/")
self.assertEqual(hwm.encode_key("/foo/"), "/foo/")
def assert_enc_dec(self, key, expected_encoding):
encoded = hwm.encode_key(key)
self.assertEqual(
encoded,
expected_encoding,
msg="Expected %r to encode as %r but got %r" %
(key, expected_encoding, encoded))
decoded = hwm.decode_key(encoded)
self.assertEqual(
decoded,
key,
msg="Expected %r to decode as %r but got %r" %
(encoded, key, decoded))
| 41.627451 | 79 | 0.633851 | 914 | 6,369 | 4.294311 | 0.257112 | 0.064204 | 0.053503 | 0.069554 | 0.414777 | 0.385223 | 0.318981 | 0.295287 | 0.267771 | 0.246369 | 0 | 0.021537 | 0.241796 | 6,369 | 152 | 80 | 41.901316 | 0.791261 | 0.332862 | 0 | 0.292135 | 0 | 0 | 0.084766 | 0 | 0 | 0 | 0 | 0 | 0.47191 | 1 | 0.044944 | false | 0 | 0.05618 | 0 | 0.123596 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
50ae8802ab2f67d8085eed2af76c35268d1f3319 | 3,167 | py | Python | flan/exports/awssqs.py | bretlowery/flan | b79319044fcdb2230ac090232e9056719cb09f17 | [
"MIT"
] | 3 | 2019-08-03T13:27:31.000Z | 2021-06-08T16:25:31.000Z | flan/exports/awssqs.py | bretlowery/flan | b79319044fcdb2230ac090232e9056719cb09f17 | [
"MIT"
] | 2 | 2020-09-24T10:44:55.000Z | 2021-06-25T15:31:24.000Z | flan/exports/awssqs.py | bretlowery/flan | b79319044fcdb2230ac090232e9056719cb09f17 | [
"MIT"
] | null | null | null | from flanexport import FlanExport, timeout_after
import os
import ast
try:
from boto.sqs import connection
from boto.sqs.message import Message
except ImportError:
    # boto is optional at import time; prepare() will fail without it
    pass
class AWSSQS(FlanExport):
def __init__(self, meta, config):
name = self.__class__.__name__
super().__init__(name, meta, config)
@timeout_after(10)
def prepare(self):
aws_access_key_id = self._getsetting('aws_access_key_id', checkenv=True)
aws_secret_access_key = self._getsetting('aws_secret_access_key', checkenv=True)
is_secure = self._getsetting('is_secure', erroronnone=False, defaultvalue=True)
port = self._getsetting('port', erroronnone=False)
proxy = self._getsetting('proxy', erroronnone=False)
proxy_port = self._getsetting('proxy_port', erroronnone=False)
proxy_user = self._getsetting('proxy_user', erroronnone=False)
proxy_pass = self._getsetting('proxy_pass', erroronnone=False)
region = self._getsetting('region', erroronnone=False)
        path = self._getsetting('path', defaultvalue="/")
security_token = self._getsetting('security_token', erroronnone=False)
        validate_certs = self._getsetting('validate_certs', defaultvalue=True)
profile_name = self._getsetting('profile_name', erroronnone=False)
queue_name = self._getsetting('queue_name', erroronnone=True, defaultvalue="flan")
sqs_message_attributes = self._getsetting('sqs_message_attributes', erroronnone=False)
if sqs_message_attributes:
self.sqs_message_attributes = ast.literal_eval(sqs_message_attributes)
else:
self.sqs_message_attributes = {}
try:
self.conn = connection.SQSConnection(
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
is_secure=is_secure,
port=port,
proxy=proxy,
proxy_port=proxy_port,
proxy_user=proxy_user,
proxy_pass=proxy_pass,
region=region,
path=path,
security_token=security_token,
validate_certs=validate_certs,
profile_name=profile_name
)
self.sender = self.conn.create_queue(queue_name, self._getsetting('timeout'))
except Exception as e:
self.logerr('Flan->%s connection to %s:%s failed: %s' %
(self.name, self.config["host"], self.config["port"], str(e)))
os._exit(1)
@timeout_after(10)
def send(self, data):
try:
m = Message()
m.message_attributes = self.sqs_message_attributes
m.set_body(data)
self.sender.write(m)
except Exception as e:
self.logerr('Flan->%s delivery failed: %s' % (self.name, str(e)))
return
@property
def closed(self):
return False
@timeout_after(10)
def close(self):
try:
self.conn.close()
        except Exception:
pass
return | 38.156627 | 94 | 0.610988 | 346 | 3,167 | 5.277457 | 0.234104 | 0.122673 | 0.07667 | 0.030668 | 0.098028 | 0.081051 | 0.036145 | 0.036145 | 0 | 0 | 0 | 0.003126 | 0.293022 | 3,167 | 83 | 95 | 38.156627 | 0.812416 | 0 | 0 | 0.213333 | 0 | 0 | 0.078598 | 0.013573 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0.066667 | 0.066667 | 0.013333 | 0.186667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
50c1727f4cbaa7d21dda9d008ae65222d11294d4 | 1,718 | py | Python | 2017/day25/day25.py | icemanblues/advent-of-code | eac937ac2762d1c8b8cec358a13af352e339446c | [
"Apache-2.0"
] | null | null | null | 2017/day25/day25.py | icemanblues/advent-of-code | eac937ac2762d1c8b8cec358a13af352e339446c | [
"Apache-2.0"
] | 2 | 2020-04-06T18:56:13.000Z | 2022-03-30T20:32:50.000Z | 2017/day25/day25.py | icemanblues/advent-of-code | eac937ac2762d1c8b8cec358a13af352e339446c | [
"Apache-2.0"
] | null | null | null | from typing import Set
day_num = "25"
day_title = "The Halting Problem"
def part1():
tape: Set[int] = set()
curr: int = 0
state: str = 'a'
for _ in range(12523873):
is_one = curr in tape
if state == 'a' and not is_one:
tape.add(curr)
curr += 1
state = 'b'
elif state == 'a' and is_one:
tape.add(curr)
curr -= 1
state = 'e'
elif state == 'b' and not is_one:
tape.add(curr)
curr += 1
state = 'c'
elif state == 'b' and is_one:
tape.add(curr)
curr += 1
state = 'f'
elif state == 'c' and not is_one:
tape.add(curr)
curr -= 1
state = 'd'
elif state == 'c' and is_one:
tape.remove(curr)
curr += 1
state = 'b'
elif state == 'd' and not is_one:
tape.add(curr)
curr += 1
state = 'e'
elif state == 'd' and is_one:
tape.remove(curr)
curr -= 1
state = 'c'
elif state == 'e' and not is_one:
tape.add(curr)
curr -= 1
state = 'a'
elif state == 'e' and is_one:
tape.remove(curr)
curr += 1
state = 'd'
elif state == 'f' and not is_one:
tape.add(curr)
curr += 1
state = 'a'
elif state == 'f' and is_one:
tape.add(curr)
curr += 1
state = 'c'
print("Part 1:", len(tape))
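

# The branch cascade above can equally be written as a transition table,
# which is easier to audit against a puzzle input. A sketch (not wired into
# main()): each entry maps (state, is_one) to (write_one, move, next_state),
# with the transitions copied verbatim from the branches in part1().
RULES = {
    ('a', False): (True, +1, 'b'), ('a', True): (True, -1, 'e'),
    ('b', False): (True, +1, 'c'), ('b', True): (True, +1, 'f'),
    ('c', False): (True, -1, 'd'), ('c', True): (False, +1, 'b'),
    ('d', False): (True, +1, 'e'), ('d', True): (False, -1, 'c'),
    ('e', False): (True, -1, 'a'), ('e', True): (False, +1, 'd'),
    ('f', False): (True, +1, 'a'), ('f', True): (True, +1, 'c'),
}


def part1_table() -> int:
    """Same computation as part1(), driven by the RULES table."""
    tape: Set[int] = set()
    curr, state = 0, 'a'
    for _ in range(12523873):
        write_one, move, state = RULES[(state, curr in tape)]
        if write_one:
            tape.add(curr)
        else:
            tape.discard(curr)
        curr += move
    return len(tape)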
def main():
print(f"Day {day_num}: {day_title}")
part1()
if __name__ == '__main__':
main()
| 23.534247 | 41 | 0.419092 | 214 | 1,718 | 3.242991 | 0.200935 | 0.09366 | 0.15562 | 0.242075 | 0.685879 | 0.685879 | 0.685879 | 0.600865 | 0.600865 | 0.371758 | 0 | 0.027957 | 0.458673 | 1,718 | 72 | 42 | 23.861111 | 0.71828 | 0 | 0 | 0.555556 | 0 | 0 | 0.05064 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031746 | false | 0 | 0.015873 | 0 | 0.047619 | 0.031746 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
50cf002b29b4d43070275df7465fc14466be2932 | 3,430 | py | Python | pcircle/globals.py | fwang2/ccss | f2521cc492c5f459363cbaf4a55c9c504762efa4 | [
"Apache-2.0"
] | 20 | 2015-10-02T14:43:17.000Z | 2020-01-21T15:17:42.000Z | pcircle/globals.py | fwang2/ccss | f2521cc492c5f459363cbaf4a55c9c504762efa4 | [
"Apache-2.0"
] | 37 | 2015-10-01T18:52:08.000Z | 2018-11-20T21:05:39.000Z | pcircle/globals.py | fwang2/ccss | f2521cc492c5f459363cbaf4a55c9c504762efa4 | [
"Apache-2.0"
] | 7 | 2015-12-04T02:36:16.000Z | 2018-08-31T18:16:39.000Z | class T:
WORK_REQUEST = 1
WORK_REPLY = 2
REDUCE = 3
BARRIER = 4
TOKEN = 7
class Tally:
total_dirs = 0
total_files = 0
total_filesize = 0
total_stat_filesize = 0
total_symlinks = 0
total_skipped = 0
total_sparse = 0
max_files = 0
total_nlinks = 0
total_nlinked_files = 0
total_0byte_files = 0
devfile_cnt = 0
devfile_sz = 0
    spcnt = 0  # stripe count, accounted per process
# ZFS
total_blocks = 0
class G:
ZERO = 0
ABORT = -1
WHITE = 50
BLACK = 51
NONE = -99
TERMINATE = -100
MSG = 99
MSG_VALID = True
MSG_INVALID = False
fmt1 = '%(asctime)s - %(levelname)s - %(rank)s:%(filename)s:%(lineno)d - %(message)s'
fmt2 = '%(asctime)s - %(rank)s:%(filename)s:%(lineno)d - %(message)s'
bare_fmt = '%(name)s - %(levelname)s - %(message)s'
mpi_fmt = '%(name)s - %(levelname)s - %(rank)s - %(message)s'
bare_fmt2 = '%(message)s'
str = {WHITE: "white", BLACK: "black", NONE: "not set", TERMINATE: "terminate",
ABORT: "abort", MSG: "message"}
KEY = "key"
VAL = "val"
logger = None
logfile = None
loglevel = "warn"
use_store = False
fix_opt = False
preserve = False
DB_BUFSIZE = 10000
memitem_threshold = 100000
tempdir = None
total_chunks = 0
rid = None
chk_file = None
chk_file_db = None
totalsize = 0
src = None
dest = None
args_src = None
args_dest = None
resume = None
reduce_interval = 30
reduce_enabled = False
verbosity = 0
am_root = False
copytype = 'dir2dir'
# Lustre file system
fs_lustre = None
lfs_bin = None
stripe_threshold = None
b0 = 0
b4k = 4 * 1024
b8k = 8 * 1024
b16k = 16 * 1024
b32k = 32 * 1024
b64k = 64 * 1024
b128k = 128 * 1024
b256k = 256 * 1024
b512k = 512 * 1024
b1m = 1024 * 1024
b2m = 2 * b1m
b4m = 4 * b1m
b8m = 8 * b1m
b16m = 16 * b1m
b32m = 32 * b1m
b64m = 64 * b1m
b128m = 128 * b1m
b256m = 256 * b1m
b512m = 512 * b1m
b1g = 1024 * b1m
b4g = 4 * b1g
b16g = 16 * b1g
b64g = 64 * b1g
b128g = 128 * b1g
b256g = 256 * b1g
b512g = 512 * b1g
b1tb = 1024 * b1g
b4tb = 4 * b1tb
FSZ_BOUND = 64 * b1tb
# 25 bins
bins = [b0, b4k, b8k, b16k, b32k, b64k, b128k, b256k, b512k,
b1m, b2m, b4m, b16m, b32m, b64m, b128m, b256m, b512m,
b1g, b4g, b64g, b128g, b256g, b512g, b1tb, b4tb]
# 17 bins, the last bin is special
# This is error-prone, to be refactored.
# bins_fmt = ["B1_000k_004k", "B1_004k_008k", "B1_008k_016k", "B1_016k_032k", "B1_032k_064k", "B1_064k_256k",
# "B1_256k_512k", "B1_512k_001m",
# "B2_001m_004m", "B2_m004_016m", "B2_016m_512m", "B2_512m_001g",
# "B3_001g_100g", "B3_100g_256g", "B3_256g_512g",
# "B4_512g_001t",
# "B5_001t_up"]
# GPFS
gpfs_block_size = ("256k", "512k", "b1m", "b4m", "b8m", "b16m", "b32m")
gpfs_block_cnt = [0, 0, 0, 0, 0, 0, 0]
gpfs_subs = (b256k/32, b512k/32, b1m/32, b4m/32, b8m/32, b16m/32, b32m/32)
dev_suffixes = [".C", ".CC", ".CU", ".H", ".CPP", ".HPP", ".CXX", ".F", ".I", ".II",
".F90", ".F95", ".F03", ".FOR", ".O", ".A", ".SO", ".S",
".IN", ".M4", ".CACHE", ".PY", ".PYC"]
| 25.597015 | 113 | 0.533819 | 473 | 3,430 | 3.697674 | 0.450317 | 0.030875 | 0.008576 | 0.009148 | 0.067467 | 0.038308 | 0.034305 | 0.034305 | 0.034305 | 0 | 0 | 0.195934 | 0.325948 | 3,430 | 133 | 114 | 25.789474 | 0.560554 | 0.139942 | 0 | 0 | 0 | 0.018868 | 0.131812 | 0.021798 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0.95283 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
50d091c9f0425cffc07e257ef10b1881ccd4fbda | 1,514 | py | Python | pedidos/migrations/0001_initial.py | tiagocordeiro/zumaq-partners | ba2c5d4257438ec062ef034096cd203efe58ef4a | [
"MIT"
] | 1 | 2019-02-13T11:01:25.000Z | 2019-02-13T11:01:25.000Z | pedidos/migrations/0001_initial.py | tiagocordeiro/zumaq-partners | ba2c5d4257438ec062ef034096cd203efe58ef4a | [
"MIT"
] | 619 | 2018-11-26T06:11:05.000Z | 2022-03-31T22:56:13.000Z | pedidos/migrations/0001_initial.py | tiagocordeiro/zumaq-partners | ba2c5d4257438ec062ef034096cd203efe58ef4a | [
"MIT"
] | 1 | 2020-03-12T16:34:13.000Z | 2020-03-12T16:34:13.000Z | # Generated by Django 2.1.7 on 2019-02-14 13:07
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Pedido',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, verbose_name='criado em')),
('modified', models.DateTimeField(auto_now=True, verbose_name='modificado em')),
('active', models.BooleanField(default=True, verbose_name='ativo')),
('status', models.IntegerField(blank=True, choices=[(0, 'Aberto'), (1, 'Enviado'), (2, 'Finalizado'), (3, 'Cancelado')], default=0, verbose_name='Situação')),
('parceiro', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='parceiro')),
],
options={
'verbose_name': 'pedido',
'verbose_name_plural': 'pedidos',
},
),
migrations.CreateModel(
name='PedidoItem',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
],
),
]
| 38.820513 | 174 | 0.602378 | 155 | 1,514 | 5.735484 | 0.483871 | 0.111361 | 0.050619 | 0.049494 | 0.177728 | 0.177728 | 0.177728 | 0.177728 | 0.177728 | 0.177728 | 0 | 0.017825 | 0.258917 | 1,514 | 38 | 175 | 39.842105 | 0.77451 | 0.029723 | 0 | 0.322581 | 1 | 0 | 0.121336 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.096774 | 0 | 0.225806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
50d1ff29cc95475cd6346e510fcbdb66477154e4 | 2,358 | py | Python | spyderlib/utils/debug.py | MarlaJahari/Marve | d8d18122c19f050429a91ef23edc87a85fecad1d | [
"MIT"
] | 1 | 2021-01-25T02:13:36.000Z | 2021-01-25T02:13:36.000Z | SMlib/utils/debug.py | koll00/Gui_SM | d02d28b20ef2ae1aa602b9bb52a6bb55fd66be9c | [
"MIT"
] | null | null | null | SMlib/utils/debug.py | koll00/Gui_SM | d02d28b20ef2ae1aa602b9bb52a6bb55fd66be9c | [
"MIT"
] | 1 | 2021-08-04T08:13:34.000Z | 2021-08-04T08:13:34.000Z | # -*- coding: utf-8 -*-
#
# Copyright © 2009-2010 Pierre Raybaut
# Licensed under the terms of the MIT License
# (see spyderlib/__init__.py for details)
"""Debug utilities"""
import inspect
import traceback
import time
def log_time(fd):
timestr = "Logging time: %s" % time.ctime(time.time())
print >>fd, "="*len(timestr)
print >>fd, timestr
print >>fd, "="*len(timestr)
print >>fd, ""
def log_last_error(fname, context=None):
"""Log last error in filename *fname* -- *context*: string (optional)"""
fd = open(fname, 'a')
log_time(fd)
if context:
print >>fd, "Context"
print >>fd, "-------"
print >>fd, ""
print >>fd, context
print >>fd, ""
print >>fd, "Traceback"
print >>fd, "---------"
print >>fd, ""
traceback.print_exc(file=fd)
print >>fd, ""
print >>fd, ""
def log_dt(fname, context, t0):
fd = open(fname, 'a')
log_time(fd)
print >>fd, "%s: %d ms" % (context, 10*round(1e2*(time.time()-t0)))
print >>fd, ""
print >>fd, ""
def caller_name(skip=2):
"""Get a name of a caller in the format module.class.method
`skip` specifies how many levels of stack to skip while getting caller
name. skip=1 means "who calls me", skip=2 "who calls my caller" etc.
An empty string is returned if skipped levels exceed stack height
"""
stack = inspect.stack()
start = 0 + skip
if len(stack) < start + 1:
return ''
parentframe = stack[start][0]
name = []
module = inspect.getmodule(parentframe)
# `modname` can be None when frame is executed directly in console
# TODO(techtonik): consider using __main__
if module:
name.append(module.__name__)
# detect classname
if 'self' in parentframe.f_locals:
# I don't know any way to detect call from the object method
# XXX: there seems to be no way to detect static method call - it will
# be just a function call
name.append(parentframe.f_locals['self'].__class__.__name__)
codename = parentframe.f_code.co_name
if codename != '<module>': # top level usually
name.append( codename ) # function or a method
del parentframe
return ".".join(name)
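
# Usage sketch (hypothetical names): when called from inside a helper,
# caller_name() reports whoever invoked that helper, e.g.
#   def log_helper():
#       print >>fd, caller_name()   # -> "mymodule.MyClass.method"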
| 31.026316 | 79 | 0.583121 | 307 | 2,358 | 4.37785 | 0.456026 | 0.088542 | 0.053571 | 0.0625 | 0.162946 | 0.136161 | 0.072917 | 0 | 0 | 0 | 0 | 0.012463 | 0.285411 | 2,358 | 75 | 80 | 31.44 | 0.78457 | 0.343511 | 0 | 0.297872 | 0 | 0 | 0.055398 | 0 | 0 | 0 | 0 | 0.013333 | 0 | 1 | 0.085106 | false | 0 | 0.06383 | 0 | 0.191489 | 0.382979 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
50d864ab62ce6a701debae2af6bac07b0dbf0ee8 | 9,439 | py | Python | libsolresol.py | MishaKlopukh/solresol-language | 075332912fdda7412e11c759c958997fee6c87a9 | [
"MIT"
] | null | null | null | libsolresol.py | MishaKlopukh/solresol-language | 075332912fdda7412e11c759c958997fee6c87a9 | [
"MIT"
] | null | null | null | libsolresol.py | MishaKlopukh/solresol-language | 075332912fdda7412e11c759c958997fee6c87a9 | [
"MIT"
] | null | null | null | import enum
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from scipy.io import wavfile
import json
jupyter_nb_mode = False
try:
assert jupyter_nb_mode
from IPython.display import Audio
except:
import sounddevice as sd
def Audio(array,rate=44100):
sd.play(array,rate)
class SolfegeSymbol(enum.Enum):
DO,Do,do,D,d,p,o = 1,1,1,1,1,1,1
RE,Re,re,R,r,k,e = 2,2,2,2,2,2,2
MI,Mi,mi,M,m,i = 3,3,3,3,3,3
FA,Fa,fa,F,f,a = 4,4,4,4,4,4
SOL,Sol,sol,So,so,S,s,u = 5,5,5,5,5,5,5,5
    LA,La,la,L,l,ai = 6,6,6,6,6,6
    SI,Si,si,TI,Ti,ti,T,t,au = 7,7,7,7,7,7,7,7,7
@property
def freq(self,octave=4):
notes = [261.63,293.66,329.63,349.23,392.00,440.00,493.88]
return notes[self.value-1]*(2**(octave-4))
@property
def shortname(self):
names = 'drmfslt'
return names[self.value-1]
@property
def sescons(self):
names = 'pkmfslt'
return names[self.value-1]
@property
def sesvowel(self):
names = list('oeiau')+['ai','au']
return names[self.value-1]
def makeglyph(self,xy,scale=1,color='black',weight=2,doubler=False):
x,y=xy
if doubler:
shape = [
patches.FancyArrowPatch((x-scale/2,y+2*scale/6),(x-scale/2,y+4*scale/6),arrowstyle='-',color=color,linewidth=weight),
patches.FancyArrowPatch((x-scale/6,y+scale/2),(x+scale/6,y+scale/2),arrowstyle='-',color=color,linewidth=weight),
patches.FancyArrowPatch((x-scale/2,y+2*scale/6),(x-scale/2,y+4*scale/6),arrowstyle='-',color=color,linewidth=weight),
patches.FancyArrowPatch((x-4*scale/6,y+scale/2),(x-scale/3,y+scale/2),arrowstyle='-',color=color,linewidth=weight),
patches.FancyArrowPatch((x-scale/2,y-scale/6),(x-scale/2,y+scale/6),arrowstyle='-',color=color,linewidth=weight),
patches.FancyArrowPatch((x-2*scale/6,y+scale/2),(x-4*scale/6,y+scale/2),arrowstyle='-',color=color,linewidth=weight),
patches.FancyArrowPatch((x-4*scale/6,y-scale/2),(x-scale/3,y-scale/2),arrowstyle='-',color=color,linewidth=weight),
][self.value-1]
attachment = xy
else:
shape, attachment = [
(patches.Circle((x+scale/2,y),scale/2,fill=False,color=color,linewidth=weight),(x+scale,y)),
(patches.FancyArrowPatch((x,y),(x,y-scale),arrowstyle='-',color=color,linewidth=weight),(x,y-scale)),
(patches.Arc((x+scale/2,y),scale,scale,theta1=0.0,theta2=180.0,color=color,linewidth=weight),(x+scale,y)),
(patches.FancyArrowPatch((x,y),(x+scale,y-scale),arrowstyle='-',color=color,linewidth=weight),(x+scale,y-scale)),
(patches.FancyArrowPatch((x,y),(x+scale,y),arrowstyle='-',color=color,linewidth=weight),(x+scale,y)),
(patches.Arc((x,y-scale/2),scale,scale,theta1=90.0,theta2=-90.0,color=color,linewidth=weight),(x,y-scale)),
(patches.FancyArrowPatch((x,y),(x+scale,y+scale),arrowstyle='-',color=color,linewidth=weight),(x+scale,y+scale))
][self.value-1]
return shape, attachment
def generate_note(frequency, duration, sample_rate=44100, amplitude=1, envelope_ratio=1/3):
fmul = 2*frequency*np.pi/sample_rate
note = np.sin(fmul*np.arange(sample_rate*duration))
env_time = int(envelope_ratio*sample_rate*duration)
envelope = np.concatenate((np.linspace(0,amplitude,env_time),amplitude*np.ones(int(sample_rate*duration-2*env_time)),np.linspace(amplitude,0,env_time)))
return note*envelope
class SolresolWord():
def __init__(self, word, syntax='default'):
if isinstance(word, list):
if isinstance(word[0], SolfegeSymbol):
self.word = word
elif isinstance(word[0], str):
self.word = [SolfegeSymbol[s] for s in word]
elif isinstance(word[0], int):
self.word = [SolfegeSymbol(i) for i in word]
elif isinstance(word, str):
if syntax in ['ses','s']:
self.word = [SolfegeSymbol[s] for s in word.replace('ai','l').replace('au','t')]
elif syntax in ['num','#',0]:
self.word = [SolfegeSymbol(int(s)) for s in word.strip('0')]
elif syntax in ['full','default']:
self.word = []
while len(word) > 0:
if word.lower().startswith('sol') and not word.lower().startswith('sola'):
self.word.append(SolfegeSymbol.SOL)
word = word[3:]
else:
self.word.append(SolfegeSymbol[word[:2]])
word = word[2:]
elif isinstance(word, int):
self.word = [SolfegeSymbol(int(s)) for s in oct(word)[2:].strip('0')]
def __repr__(self):
return f"{type(self).__name__}(['"+"','".join(smb.name for smb in self.word)+"'])"
def __getitem__(self,ix):
return self.word.__getitem__(ix)
def __len__(self):
return len(self.word)
def __iter__(self):
return iter(self.word)
@property
def ses(self):
if len(self) == 1:
return self.word[0].sesvowel
else:
return ''.join(ltr.sescons if ix%2==0 else ltr.sesvowel for ix,ltr in enumerate(self.word))
@property
def fulltext(self):
return ''.join(smb.name for smb in self.word).lower()
def __str__(self):
return self.fulltext
@property
def value(self):
return int(''.join(str(ltr.value) for ltr in self.word),8)
@property
def definition(self):
return solresol_dict[self.fulltext]
def __int__(self):
return self.value
def melody(self, note_len=0.2, amplitude=1, envelope_ratio=0.2, sample_rate=44100):
return np.concatenate([generate_note(ltr.freq,note_len,sample_rate,amplitude,envelope_ratio) for ltr in self.word])
def draw(self,ax,color='black',weight=2,startpos=(0,0)):
pos=startpos
for ix,ltr in enumerate(self.word):
if ltr==SolfegeSymbol.LA and (self.word[ix-1]==SolfegeSymbol.SI or self.word[ix-1]==SolfegeSymbol.DO) and ix>0:
pos = (pos[0]+0.5,pos[1]+0.5)
g,pos = ltr.makeglyph(pos,color=color,weight=weight,doubler=(ltr==self.word[ix-1] and ix>0))
ax.add_patch(g)
ax.axis('scaled')
ax.axis('off')
return pos[0]+2,startpos[1]
class Solresol():
def __init__(self, text, syntax='default'):
if isinstance(text,str):
text = text.translate(str.maketrans('','','!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'))
self.words = [SolresolWord(word,syntax) for word in text.split()]
elif isinstance(text,list):
self.words = [SolresolWord(word,syntax) for word in text]
elif isinstance(text,int):
sw = oct(text)[2:]
self.words = [SolresolWord(int(sw[i:i+5],8)) for i in range(0,len(sw),5)]
@property
def fulltext(self):
return ' '.join(word.fulltext for word in self.words)
@property
def ses(self):
return ' '.join(word.ses for word in self.words)
@property
def numlist(self):
return [int(word) for word in self.words]
@property
def value(self):
return int(''.join(oct(num)[2:].ljust(5,'0') for num in self.numlist),8)
def __int__(self):
return self.value
def __str__(self):
return self.fulltext
def __getitem__(self,ix):
return self.words.__getitem__(ix)
def __len__(self):
return len(self.words)
def __iter__(self):
return iter(self.words)
def __repr__(self):
return f"Solresol('{str(self)}')"
def melody(self, note_len=0.2, amplitude=1, envelope_ratio=0.2, gap_ratio=1, sample_rate=44100):
notes = []
for word in self.words:
notes.append(word.melody(note_len, amplitude, envelope_ratio, sample_rate))
notes.append(np.zeros(int(note_len*sample_rate*gap_ratio)))
return np.concatenate(notes)
def play(self, note_len=0.2, amplitude=1, envelope_ratio=0.2, gap_ratio=1):
return Audio(self.melody(note_len, amplitude, envelope_ratio, gap_ratio, 44100),rate=44100)
def draw(self,color='black',weight=2,subplot_mode=False,rowmax=5):
        if len(self) > 1 and subplot_mode:
            nrows = (len(self) - 1)//rowmax + 1
            ncols = min(len(self), rowmax)
            # np.ravel handles both 1-D and 2-D axes arrays from subplots
            fig,axs = plt.subplots(nrows, ncols)
            for word,ax in zip(self.words, np.ravel(axs)):
                word.draw(ax,color=color,weight=weight)
else:
fig,ax = plt.subplots()
pos = (0,0)
for word in self.words:
pos = word.draw(ax,color=color,weight=weight,startpos=pos)
return fig
def translate(self,alldefs=False,random=False,ix=0):
translation = []
for word in self.words:
if alldefs:
translation.append(f'{word.fulltext}: ({word.definition})')
else:
dfn = word.definition.split(',')
ix = np.random.randint(len(dfn)) if random else ix
translation.append(dfn[ix].strip())
return ' '.join(translation)
with open('solresol_dict.json') as f:
solresol_dict = json.load(f)
dictionary_url = "https://docs.google.com/spreadsheets/d/1-3lBxMURGN4AtGG846kuVGVNuEiHewCT88PiBahnODA/edit#gid=0"
| 44.523585 | 156 | 0.599534 | 1,353 | 9,439 | 4.097561 | 0.162602 | 0.033189 | 0.04798 | 0.063131 | 0.452381 | 0.397367 | 0.337843 | 0.266414 | 0.19733 | 0.173882 | 0 | 0.036847 | 0.238055 | 9,439 | 211 | 157 | 44.734597 | 0.73401 | 0 | 0 | 0.247475 | 0 | 0.030303 | 0.033796 | 0.004979 | 0 | 0 | 0 | 0 | 0.005051 | 1 | 0.176768 | false | 0 | 0.040404 | 0.106061 | 0.39899 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
50d889e431ae6bc543ceb74e6af10a5b1cfede00 | 4,351 | py | Python | src/icegrams/trie_build.py | vthorsteinsson/Icegrams | d20d137a7e2b3316f38dfd5ace826993704c2050 | [
"MIT"
] | 7 | 2019-03-04T18:14:37.000Z | 2019-03-29T02:57:02.000Z | src/icegrams/trie_build.py | vthorsteinsson/Icegrams | d20d137a7e2b3316f38dfd5ace826993704c2050 | [
"MIT"
] | 1 | 2020-09-04T13:19:38.000Z | 2020-09-06T20:14:58.000Z | src/icegrams/trie_build.py | vthorsteinsson/Icegrams | d20d137a7e2b3316f38dfd5ace826993704c2050 | [
"MIT"
] | 1 | 2019-05-31T11:45:51.000Z | 2019-05-31T11:45:51.000Z | """
Icegrams: A trigrams library for Icelandic
CFFI builder for _trie module
Copyright (C) 2020 Miðeind ehf.
Original author: Vilhjálmur Þorsteinsson
This software is licensed under the MIT License:
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
This module only runs at setup/installation time. It is invoked
from setup.py as requested by the cffi_modules=[] parameter of the
setup() function. It causes the _trie.*.so CFFI wrapper library
to be built from its source in trie.cpp.
"""
import os
import platform
import cffi
# Don't change the name of this variable unless you
# change it in setup.py as well
ffibuilder = cffi.FFI()
_PATH = os.path.dirname(__file__) or "."
WINDOWS = platform.system() == "Windows"
MACOS = platform.system() == "Darwin"
# What follows is the actual Python-wrapped C interface to trie.*.so
# It must be kept in sync with trie.h
declarations = """
typedef unsigned int UINT;
typedef uint8_t BYTE;
typedef uint32_t UINT32;
typedef uint64_t UINT64;
typedef void VOID;
UINT mapping(const BYTE* pbMap, const BYTE* pbWord);
UINT bitselect(const BYTE* pb, UINT n);
UINT retrieve(const BYTE* pb, UINT nStart, UINT n);
UINT lookupFrequency(const BYTE* pb,
UINT nQuantumSize, UINT nIndex);
UINT64 lookupMonotonic(const BYTE* pb,
UINT nQuantumSize, UINT nIndex);
VOID lookupPairMonotonic(const BYTE* pb,
UINT nQuantumSize, UINT nIndex,
UINT64* pn1, UINT64* pn2);
UINT64 lookupPartition(const BYTE* pb,
UINT nOuterQuantum, UINT nInnerQuantum, UINT nIndex);
VOID lookupPairPartition(const BYTE* pb,
UINT nQuantumSize, UINT nInnerQuantum, UINT nIndex,
UINT64* pn1, UINT64* pn2);
UINT searchMonotonic(const BYTE* pb,
UINT nQuantumSize, UINT nP1, UINT nP2, UINT64 n);
UINT searchMonotonicPrefix(const BYTE* pb,
UINT nQuantumSize, UINT nP1, UINT nP2, UINT64 n);
UINT searchPartition(const BYTE* pb,
UINT nOuterQuantum, UINT nInnerQuantum,
UINT nP1, UINT nP2, UINT64 n);
UINT searchPartitionPrefix(const BYTE* pb,
UINT nOuterQuantum, UINT nInnerQuantum,
UINT nP1, UINT nP2, UINT64 n);
"""
# Do the magic CFFI incantations necessary to get CFFI and setuptools
# to compile trie.cpp at setup time, generate a .so library and
# wrap it so that it is callable from Python and PyPy as _trie
if WINDOWS:
    extra_compile_args = ["/Zc:offsetof-"]
elif MACOS:
    os.environ["CFLAGS"] = "-stdlib=libc++"  # Fixes PyPy build on macOS 10.15.6+
    extra_compile_args = ["-mmacosx-version-min=10.7", "-stdlib=libc++"]
else:
    # Adding -O3 to the compiler arguments doesn't seem to make
    # any discernible difference in lookup speed
    extra_compile_args = ["-std=c++11"]
ffibuilder.set_source(
    "icegrams._trie",
    # trie.cpp is written in C++ but must export a pure C interface.
    # This is the reason for the "extern 'C' { ... }" wrapper.
    'extern "C" {\n' + declarations + "\n}\n",
    source_extension=".cpp",
    sources=["src/icegrams/trie.cpp"],
    extra_compile_args=extra_compile_args,
)
ffibuilder.cdef(declarations)
if __name__ == "__main__":
    ffibuilder.compile(verbose=False)
| 35.08871 | 81 | 0.69846 | 600 | 4,351 | 5.01 | 0.425 | 0.038922 | 0.040253 | 0.05489 | 0.160679 | 0.160679 | 0.137059 | 0.12342 | 0.07851 | 0.07851 | 0 | 0.016627 | 0.225925 | 4,351 | 123 | 82 | 35.373984 | 0.875891 | 0.495059 | 0 | 0.178571 | 0 | 0 | 0.70506 | 0.122892 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.053571 | 0 | 0.053571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
50d8d110901dc8d422a6e7035234a1d64625b3dc | 1,854 | py | Python | raspi/test/bork_dprc.py | lukaschoebel/bed | 4f3619809adab9156e530bfa71c426871749d53b | [
"MIT"
] | null | null | null | raspi/test/bork_dprc.py | lukaschoebel/bed | 4f3619809adab9156e530bfa71c426871749d53b | [
"MIT"
] | 1 | 2020-09-11T13:11:16.000Z | 2020-09-11T13:11:16.000Z | raspi/test/bork_dprc.py | lukaschoebel/bed | 4f3619809adab9156e530bfa71c426871749d53b | [
"MIT"
] | null | null | null | #!/usr/bin/python
import time
import random
import RPi.GPIO as GPIO
import firebase_admin
from firebase_admin import credentials, firestore
cred = credentials.Certificate("secrets/firestore-creds.json")
firebase_admin.initialize_app(cred)
db = firestore.client()
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
# reference to the firestore document
doc_ref = db.collection(u'current_measure').document(u'0')
count = 0
prev_inp = 1
def random_number(infested):
    """
    This function mocks the functionality of the detection.
    If tree is infested, generate number between 50-100.
    If tree is not infested, generate number between 0-50.
    Args:
        infested (bool): whether the tree is considered infested
    """
    if infested:
        return random.randint(51, 100)
    return random.randint(0, 50)
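# e.g. random_number(infested=True) returns an int in [51, 100],
# while random_number(infested=False) returns one in [0, 50].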
def trigger_detection(PIN_NO):
    """
    On button press, trigger the send process of the message.
    Args:
        PIN_NO (int): Pin number on raspi zero board
    """
    global prev_inp
    global count
    t1 = time.time()
    inp = GPIO.input(PIN_NO)
    duration = time.time() - t1  # how long the GPIO read itself took
    if (not prev_inp) and inp:
        count = count + 1
        print("Button pressed")
        print(round(duration, 2))
        if duration > 0.8:
            print("befallen")
        else:
            print("cool")
        print(count)
        # only update degree of infestation and duration
        doc_ref.update({
            u'duration': 5,
            u'infestation': random_number(infested=True),
            u'status': u'completed'
        })
    prev_inp = inp
    time.sleep(0.05)
if __name__ == "__main__":
    print("+++ borki initialized +++")
    try:
        while True:
            trigger_detection(18)
    except KeyboardInterrupt:
        GPIO.cleanup()
| 22.888889 | 62 | 0.639159 | 237 | 1,854 | 4.877637 | 0.50211 | 0.024221 | 0.034602 | 0.050173 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023965 | 0.257282 | 1,854 | 80 | 63 | 23.175 | 0.815541 | 0.053937 | 0 | 0 | 0 | 0 | 0.099564 | 0.020349 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.104167 | null | null | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
50db4e1da37404b4b48c1ff5cba8f3fb72458aa5 | 538 | py | Python | how-to-use-azureml/training-with-deep-learning/how-to-use-estimator/dummy_train.py | faxu/MachineLearningNotebooks | ea1b7599c3e6903873aa152dc5829afa080e885a | [
"MIT"
] | 1 | 2021-01-18T16:19:04.000Z | 2021-01-18T16:19:04.000Z | how-to-use-azureml/training-with-deep-learning/how-to-use-estimator/dummy_train.py | faxu/MachineLearningNotebooks | ea1b7599c3e6903873aa152dc5829afa080e885a | [
"MIT"
] | 1 | 2019-03-18T04:33:24.000Z | 2019-03-18T04:33:24.000Z | MachineLearningNotebooks/how-to-use-azureml/training-with-deep-learning/how-to-use-estimator/dummy_train.py | raina777/bp-demo | 064848299731d8388c3709b5b809788860b63fc5 | [
"MIT"
] | 2 | 2020-09-07T01:41:49.000Z | 2020-10-01T18:16:28.000Z | # Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
print("*********************************************************")
print("Hello Azure ML!")
try:
    from azureml.core import Run
    run = Run.get_context()
    print("Log Fibonacci numbers.")
    run.log_list('Fibonacci numbers', [0, 1, 1, 2, 3, 5, 8, 13, 21, 34])
    run.complete()
except:
    print("Warning: you need to install Azure ML SDK in order to log metrics.")
print("*********************************************************")
| 31.647059 | 79 | 0.524164 | 64 | 538 | 4.375 | 0.75 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028634 | 0.156134 | 538 | 16 | 80 | 33.625 | 0.588106 | 0.165428 | 0 | 0.181818 | 0 | 0 | 0.524664 | 0.255605 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.454545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
50e3b3228607ed7e01edb42284e3a1cc6116992e | 2,393 | py | Python | scripts/publisher.py | kavyadevd/Toy_Car_Simulation | 56609f0b1ae6eb0bb9578615a83bab9ae01d6247 | [
"MIT"
] | null | null | null | scripts/publisher.py | kavyadevd/Toy_Car_Simulation | 56609f0b1ae6eb0bb9578615a83bab9ae01d6247 | [
"MIT"
] | null | null | null | scripts/publisher.py | kavyadevd/Toy_Car_Simulation | 56609f0b1ae6eb0bb9578615a83bab9ae01d6247 | [
"MIT"
] | null | null | null | #!/usr/bin/python2
import rospy
from std_msgs.msg import String
from std_msgs.msg import Float64
def pub_sub(left_speed, right_speed):
    pub_right = rospy.Publisher('right_speed', Float64, queue_size=10)
    pub_left = rospy.Publisher('left_speed', Float64, queue_size=10)
    send = "The robot status: %s" % left_speed
    rospy.loginfo(send)
    pub_right.publish(right_speed)
    pub_left.publish(left_speed)
def move():
    # Starts a new node
    rospy.init_node('Motion')
    pub_movfr = rospy.Publisher('/car_control/joint_right_controller/command', Float64, queue_size=10)
    pub_movfl = rospy.Publisher('/car_control/joint_left_controller/command', Float64, queue_size=10)
    pub_movbr = rospy.Publisher('/car_control/joint_back_left_controller/command', Float64, queue_size=10)
    pub_movbl = rospy.Publisher('/car_control/joint_back_right_controller/command', Float64, queue_size=10)
    rate = rospy.Rate(10)  # 10hz
    stop = 0.0
    status = 0
    target_speed = 0.0
    left_speed = 0.0
    right_speed = 0.0
    left_dir = 1
    right_dir = 1
    control_speed = 0.0
    run = True
    mode = ""
    # Receiving the user's input
    print("Let's move the robot")
    speed = 5
    distance = 100
    left_speed = left_dir * speed
    right_speed = right_dir * speed
    while not rospy.is_shutdown():
        # Setting the current time for distance calculus
        t0 = rospy.Time.now().to_sec()
        current_distance = 0
        while not rospy.is_shutdown():
            # Loop to move the turtle in an specified distance
            while current_distance < distance:
                # Publish the velocity
                pub_movfl.publish(left_speed)
                pub_movfr.publish(right_speed)
                pub_movbl.publish(left_speed)
                pub_movbr.publish(right_speed)
                pub_sub(left_speed, right_speed)
                # Takes actual time to velocity calculus
                t1 = rospy.Time.now().to_sec()
                # Calculates distancePoseStamped
                current_distance = speed * (t1 - t0)
            # After the loop, stops the robot
            # Force the robot to stop
            pub_movfl.publish(stop)  # publish the turn command.
            pub_movfr.publish(stop)  # publish the turn command.
            pub_movbl.publish(stop)  # publish the turn command.
            pub_movbr.publish(stop)  # publish the turn command.
            pub_sub(stop, stop)
            run = False
if __name__ == '__main__':
    try:
        # Testing our function
        move()
    except rospy.ROSInterruptException:
        pass
| 30.679487 | 107 | 0.692018 | 338 | 2,393 | 4.66568 | 0.301775 | 0.051363 | 0.060875 | 0.068484 | 0.412175 | 0.268231 | 0.194673 | 0.053266 | 0 | 0 | 0 | 0.028147 | 0.213122 | 2,393 | 77 | 108 | 31.077922 | 0.809347 | 0.178437 | 0 | 0.037037 | 0 | 0 | 0.130769 | 0.092308 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.018519 | 0.055556 | null | null | 0.018519 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
50e5a28d8d3dff6cd20e630e0c995cafb0c49a98 | 11,544 | py | Python | bot.py | areapergrimm/therejectbot | 308a948d0f7fb38edbec55fe55aac62016c59bb7 | [
"MIT"
] | null | null | null | bot.py | areapergrimm/therejectbot | 308a948d0f7fb38edbec55fe55aac62016c59bb7 | [
"MIT"
] | null | null | null | bot.py | areapergrimm/therejectbot | 308a948d0f7fb38edbec55fe55aac62016c59bb7 | [
"MIT"
] | null | null | null | import discord
from discord.ext import commands
import random
import codenames
import trrutils
import asyncio
description = "The Reject Bot."
PREFIX = "."
CHANNELNAME = "spam"
bot = commands.Bot(command_prefix=PREFIX, description=description)
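# Note: this bot is written against the pre-1.0 ("async") discord.py API,
# which is where bot.say() and pass_context=True come from.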
TOKEN = open('TOKEN.txt', 'r').read()
PERMLIST = open('PERMLIST.txt', 'r')
gameBoard = None
gameDict = {}
lobbyDict = {}
mastersLink = False
gameProgress = False
teamLists = []
mastersList = []
teamTurn = False
mastersTurn = False
hintNumber = 0
teamDict = {False: "B", True: "I"}
remainingWords = [8, 7]
def permCheck(authorID):
    PERMLIST.seek(0)
    for line in PERMLIST:
        if str(authorID) == line.strip():
            return True
    return False
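# PERMLIST.txt is assumed to hold one privileged Discord user id per line,
# e.g.:
#   123456789012345678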
def checkTurn():
    if gameProgress == False:
        return "There is no game in progress!"
    strAnnounce = "Current Turn: "
    if teamTurn == False:
        strAnnounce += "**Bold** "
    else:
        strAnnounce += "__*Italic*__ "
    if mastersTurn == False:
        strAnnounce += "codemaster, " + str(teamLists[int(teamTurn)][0]) + ". Use .hint [hint word] [number of guesses] to register your hint. "
    else:
        strAnnounce += "team, " + str(teamLists[int(teamTurn)][1:]) + ". Use .guess [word] to register your guess, or .passturn to end your turn. "
    strAnnounce += "Words remaining: Bold ({}) - ({}) Italics".format(remainingWords[0], remainingWords[1])
    return strAnnounce
def checkGuess(guess):
    if guess in gameDict:
        gameDict[guess] = gameDict[guess].upper()
        if teamDict[teamTurn] == gameDict[guess]:
            return "Correct"
        elif teamDict[not teamTurn] == gameDict[guess]:
            return "Opponent"
        elif gameDict[guess] == "O":
            return "Incorrect"
        elif gameDict[guess] == "X":
            return "Death"
    return "Invalid"
def resetGame():
    global gameBoard
    global gameDict
    global lobbyDict
    global mastersLink
    global gameProgress
    global teamLists
    global mastersList
    global teamTurn
    global mastersTurn
    global hintNumber
    global teamDict
    global remainingWords
    gameBoard = None
    gameDict = {}
    lobbyDict = {}
    mastersLink = False
    gameProgress = False
    teamLists = []
    mastersList = []
    teamTurn = False
    mastersTurn = False
    hintNumber = 0
    teamDict = {False: "B", True: "I"}
    remainingWords = [8, 7]
@bot.event
async def on_ready():
    print('Logged in as')
    print(bot.user.name)
    print(bot.user.id)
    print('------')
@bot.command(pass_context=True)
async def exit(ctx):
    if (CHANNELNAME in ctx.message.channel.name) == False:
        return
    if permCheck(ctx.message.author.id) == True:
        await bot.say("Bye!")
        await bot.logout()
@bot.command(pass_context=True)
async def forcereset(ctx):
    if (CHANNELNAME in ctx.message.channel.name) == False:
        return
    if permCheck(ctx.message.author.id) == True:
        resetGame()
        await bot.say("Game reset.")
'''codewords'''
@bot.command(pass_context=True)
async def refresh(ctx):
    if (CHANNELNAME in ctx.message.channel.name) == False:
        return
    if permCheck(ctx.message.author.id) == True:
        tally = codenames.cleanup()
        await bot.say("The codewords file has been refreshed, with " + str(tally) + " words.")
@bot.command(pass_context=True)
async def blacklist(ctx, arg):
    if (CHANNELNAME in ctx.message.channel.name) == False:
        return
    if permCheck(ctx.message.author.id) == True:
        codenames.blacklist(arg)
        await bot.say(arg + " has been added to the blacklist. Use .refresh to remove blacklisted words from the game.")
@bot.command(pass_context=True)
async def whitelist(ctx, arg):
    if (CHANNELNAME in ctx.message.channel.name) == False:
        return
    if permCheck(ctx.message.author.id) == True:
        codenames.whitelist(arg)
        await bot.say(arg + " has been added to the whitelist. Use .refresh to add whitelisted words into the game.")
@bot.command(pass_context=True)
async def join(ctx, arg=None):
    if (CHANNELNAME in ctx.message.channel.name) == False:
        return
    authorMember = ctx.message.author
    if gameProgress == True:
        await bot.say("There is a game already in progress! Joining in the middle of the game is currently not implemented.")
        return
    if (arg in ["b", "i", "B", "I", "r", "R"]) == False:
        await bot.say(".join [team] requires a team preference: 'b', 'i', 'r'. r is for random assignment. Use capital letters ('B' or 'I') to preference being that team's codemaster.")
    elif authorMember in lobbyDict:
        lobbyDict[authorMember] = arg
        await bot.say(str(authorMember) + " has changed their team preference.")
    else:
        lobbyDict[authorMember] = arg
        await bot.say(str(authorMember) + " has joined the Codenames lobby.")
@bot.command(pass_context=True)
async def leave(ctx):
    authorMember = ctx.message.author
    if authorMember not in lobbyDict:
        await bot.say("You are not in the Codenames lobby.")
        return
    lobbyDict.pop(authorMember, None)
    await bot.say(str(authorMember) + " has left the Codenames lobby.")
@bot.command(pass_context=True)
async def lobby(ctx):
    if (CHANNELNAME in ctx.message.channel.name) == False:
        return
    lobbyStr = ""
    for member in lobbyDict:
        lobbyStr += (str(member) + " (" + str(lobbyDict[member]) + ") ")
    if lobbyStr == "":
        await bot.say("No-one in the lobby yet. Use '.join b' or '.join i' to join a team, or '.join r' to be randomly assigned. Use capital letters to preference being a codemaster.")
    else:
        await bot.say(lobbyStr)
@bot.command(pass_context=True)
async def start(ctx):
    if (CHANNELNAME in ctx.message.channel.name) == False:
        return
    global gameProgress
    global gameDict
    global gameBoard
    global teamLists
    global mastersLink
    if gameProgress == True:
        await bot.say("A game is running already!")
        return
    if ctx.message.author not in lobbyDict:
        await bot.say("You have not joined the Codenames lobby yet! Use '.join b' or '.join i' to join a team, or '.join r' to be randomly assigned. Use capital letters to preference being a codemaster.")
        return
    if len(lobbyDict) < 4:
        await bot.say("At least four players must join the lobby. Currently in the lobby: " + str(len(lobbyDict)) + ".")
        return
    teamLists = codenames.generateTeams(lobbyDict)
    if len(teamLists[0]) < 2:
        await bot.say("There are not enough players in **Bold** team! Use .join b to switch to or join them, or .join r to let the bot assign you.")
        return
    if len(teamLists[1]) < 2:
        await bot.say("There are not enough players in __*Italic*__ team! Use .join i to switch to or join them, or .join r to let the bot assign you.")
        return
    gameProgress = True
    gameDict = codenames.initGame()
    boardDrawn = codenames.drawBoard(gameDict)
    masterBoard = codenames.displayBoard(boardDrawn)
    mastersList.append(teamLists[0][0])
    mastersList.append(teamLists[1][0])
    for masterUser in mastersList:
        await bot.send_message(masterUser, "New game started! Good luck, codemaster. Private messages to this bot will automatically be sent to your opposing codemaster <This functionality still in development>.")
        await bot.send_message(masterUser, masterBoard)
    mastersLink = True
    announceStr = ("A game of Codenames has started. Going first, {} leads the **bold** team: {}. {} leads the __*italic*__ team: {}.").format(mastersList[0], teamLists[0][1:], mastersList[1], teamLists[1][1:])
    await bot.say(announceStr)
    gameBoard = codenames.displayBoard(gameDict)
    await bot.say(gameBoard)
    await bot.say(checkTurn())
@bot.command(pass_context=True)
async def hint(ctx, arg, param):
    if (CHANNELNAME in ctx.message.channel.name) == False:
        return
    global mastersTurn
    global hintNumber
    if mastersTurn == False:
        if ctx.message.author == mastersList[int(teamTurn)]:
            hintNumber = int(param) + 1
            mastersTurn = not mastersTurn
            await bot.say("The hint is " + arg + " for " + str(param) + " guesses, and 1 bonus guess. Use .guess to guess, and .passturn to end guessing.")
    else:
        await bot.say(checkTurn())
@bot.command(pass_context=True)
async def guess(ctx, arg):
    if (CHANNELNAME in ctx.message.channel.name) == False:
        return
    global mastersTurn
    global teamTurn
    global hintNumber
    if mastersTurn == True:
        if ctx.message.author in teamLists[int(teamTurn)][1:]:
            response = checkGuess(arg)
            if response == "Correct":
                hintNumber += -1
                remainingWords[int(teamTurn)] += -1
                if remainingWords[int(teamTurn)] == 0:
                    await bot.say("Game over! " + str(ctx.message.author) + " has revealed their last codeword. " + str(teamLists[int(teamTurn)][0]) + "'s team has won the game!")
                    resetGame()
                    return
                if hintNumber == 0:
                    await bot.say(arg + " is one of your words! You have no more guesses for this turn.")
                    mastersTurn = not mastersTurn
                    teamTurn = not teamTurn
                    newBoard = codenames.progressBoard(gameDict)
                    await bot.say(checkTurn())
                    await bot.say(newBoard)
                else:
                    await bot.say(arg + " is one of your words! Remaining guesses: " + str(hintNumber))
            elif response == "Incorrect":
                await bot.say(arg + " is a neutral word! Your turn has ended.")
                mastersTurn = not mastersTurn
                teamTurn = not teamTurn
                await bot.say(checkTurn())
            elif response == "Opponent":
                remainingWords[int(not teamTurn)] += -1
                if remainingWords[int(not teamTurn)] == 0:
                    await bot.say("Game over! " + str(ctx.message.author) + " has revealed their opponents' last codeword. " + str(teamLists[int(not teamTurn)][0]) + "'s team has won the game!")
                    resetGame()
                    return
                await bot.say(arg + " is one of your opponents' words! Your turn has ended.")
                mastersTurn = not mastersTurn
                teamTurn = not teamTurn
                await bot.say(checkTurn())
            elif response == "Death":
                newBoard = codenames.progressBoard(gameDict)
                await bot.say(newBoard)
                await bot.say("Game over! " + arg + " is the double agent! " + str(teamLists[int(teamTurn)][0]) + "'s team has lost!")
                resetGame()
            else:
                await bot.say(arg + " is not a valid guess! Try again.")
    else:
        await bot.say(checkTurn())
@bot.command(pass_context=True)
async def passturn(ctx):
    if (CHANNELNAME in ctx.message.channel.name) == False:
        return
    global mastersTurn
    global teamTurn
    if mastersTurn == True:
        if ctx.message.author in teamLists[int(teamTurn)][1:]:
            mastersTurn = not mastersTurn
            teamTurn = not teamTurn
            await bot.say(str(ctx.message.author) + " has passed the turn.")
            await bot.say(checkTurn())
@bot.command(pass_context=True)
async def turn(ctx):
    if (CHANNELNAME in ctx.message.channel.name) == False:
        return
    await bot.say(checkTurn())
'''end codewords'''
bot.run(TOKEN)
| 40.083333 | 214 | 0.610707 | 1,380 | 11,544 | 5.087681 | 0.173913 | 0.047856 | 0.062669 | 0.038883 | 0.484689 | 0.452785 | 0.430708 | 0.385273 | 0.381712 | 0.315197 | 0 | 0.004794 | 0.277287 | 11,544 | 287 | 215 | 40.222997 | 0.836749 | 0 | 0 | 0.43295 | 0 | 0.02682 | 0.228569 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015326 | false | 0.065134 | 0.022989 | 0 | 0.141762 | 0.015326 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
50f5d881c3d0d561188e06fdce8329c0abb29d96 | 1,537 | py | Python | rampwf/score_types/__init__.py | rth/ramp-workflow | e97a27235a8dbd68111ca6b0c9136ff35cab81f8 | [
"BSD-3-Clause"
] | null | null | null | rampwf/score_types/__init__.py | rth/ramp-workflow | e97a27235a8dbd68111ca6b0c9136ff35cab81f8 | [
"BSD-3-Clause"
] | 1 | 2020-01-18T09:47:03.000Z | 2020-01-20T15:33:11.000Z | rampwf/score_types/__init__.py | rth/ramp-workflow | e97a27235a8dbd68111ca6b0c9136ff35cab81f8 | [
"BSD-3-Clause"
] | null | null | null | from .accuracy import Accuracy
from .balanced_accuracy import BalancedAccuracy
from .base import BaseScoreType
from .brier_score import (
    BrierScore, BrierSkillScore, BrierScoreReliability, BrierScoreResolution)
from .clustering_efficiency import ClusteringEfficiency
from .classification_error import ClassificationError
from .combined import Combined
from .detection import (
    OSPA, SCP, DetectionPrecision, DetectionRecall, MADCenter, MADRadius,
    AverageDetectionPrecision, DetectionAveragePrecision)
from .f1_above import F1Above
from .macro_averaged_recall import MacroAveragedRecall
from .make_combined import MakeCombined
from .mare import MARE
from .negative_log_likelihood import NegativeLogLikelihood
from .normalized_gini import NormalizedGini
from .normalized_rmse import NormalizedRMSE
from .relative_rmse import RelativeRMSE
from .rmse import RMSE
from .roc_auc import ROCAUC
from .soft_accuracy import SoftAccuracy
__all__ = [
    'Accuracy',
    'BalancedAccuracy',
    'BaseScoreType',
    'BrierScore',
    'BrierScoreReliability',
    'BrierScoreResolution',
    'BrierSkillScore',
    'ClassificationError',
    'ClusteringEfficiency',
    'Combined',
    'DetectionPrecision',
    'DetectionRecall',
    'DetectionAveragePrecision',
    'F1Above',
    'MacroAveragedRecall',
    'MakeCombined',
    'MADCenter',
    'MADRadius',
    'MARE',
    'NegativeLogLikelihood',
    'NormalizedGini',
    'NormalizedRMSE',
    'OSPA',
    'RelativeRMSE',
    'RMSE',
    'ROCAUC',
    'SCP',
    'SoftAccuracy',
]
| 28.462963 | 77 | 0.762524 | 130 | 1,537 | 8.869231 | 0.430769 | 0.036427 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002322 | 0.159401 | 1,537 | 53 | 78 | 29 | 0.890093 | 0 | 0 | 0 | 0 | 0 | 0.232921 | 0.043591 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.365385 | 0 | 0.365385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
50fc5d678ce5f53a6ac5266cf971694049a80f34 | 6,799 | py | Python | server/roster_updater/lib/course.py | HeavenFox/coursepad | cd7b1224f563271a2b89817ae4f2eccdc164d46b | [
"Apache-2.0"
] | 33 | 2016-09-30T03:58:01.000Z | 2021-01-09T16:12:25.000Z | server/roster_updater/lib/course.py | HeavenFox/coursepad | cd7b1224f563271a2b89817ae4f2eccdc164d46b | [
"Apache-2.0"
] | 8 | 2017-01-30T23:27:45.000Z | 2022-02-18T05:15:59.000Z | server/roster_updater/lib/course.py | HeavenFox/coursepad | cd7b1224f563271a2b89817ae4f2eccdc164d46b | [
"Apache-2.0"
] | 7 | 2016-09-30T03:59:50.000Z | 2018-10-21T02:17:12.000Z | import re
# crawl roster
faulty_prof = {
    'Francis,J)': 'Francis,J (jdf2)',
    'Glathar,E)': 'Glathar,E',
    'Cady,B)': 'Cady,B'
}
section_types = set()
day_pattern = {
    'M': 1,
    'T': 1 << 1,
    'W': 1 << 2,
    'R': 1 << 3,
    'F': 1 << 4,
    'S': 1 << 5,
    'U': 1 << 6
}
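# Illustrative example: a "MWF" meeting pattern OR-ed together gives
# day_pattern['M'] | day_pattern['W'] | day_pattern['F'] == 1 + 4 + 16 == 21.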
def to_bool(s):
    return True if s == 'Y' else False

def to_list(node):
    return [a.text.strip() for a in node]

def set_if_truthy(obj, idx, value):
    if value:
        obj[idx] = value

def convert_crosslist(c):
    if c is None:
        return None
    if len(c) > 0:
        return [c.find('subject').text, int(c.find('catalog_nbr').text)]
    return None

def get_s(node):
    if node is None:
        return None
    return node.text

def maybe_float(s):
    if s.find('.') > -1:
        return float(s)
    return int(s)

def convert_units(s):
    return [maybe_float(a) for a in s.split('-')]
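# e.g. convert_units("3") -> [3] and convert_units("1-4.5") -> [1, 4.5].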
class CourseParser(object):
    def __init__(self):
        self.courses = []
        self.profs = set()

    def parse(self, node):
        raise NotImplementedError()
class CourseParserJson(CourseParser):
    def __init__(self):
        super(CourseParserJson, self).__init__()
        self.sessions = set()
        self.locations = set()
        self.facility = set()

    @staticmethod
    def crosslist(d):
        if d:
            return [d['subject'], int(d['catalogNbr']), d['type']]
        return None

    def convert_meeting(self, node, parent=None):
        obj = {}
        pattern = 0
        pattern_desc = node.get('pattern', '').replace('Su', 'U')
        if pattern_desc != 'TBA':
            for c in pattern_desc:
                pattern |= day_pattern[c]
        set_if_truthy(obj, 'ptn', pattern)
        facility = node.get('facilityDescrshort')
        if facility and facility != 'TBA':
            set_if_truthy(obj, 'bldg', facility[:3])
            set_if_truthy(obj, 'rm', facility[3:])
        set_if_truthy(obj, 'loc', node.get('facilityDescr'))
        set_if_truthy(obj, 'st', node.get('timeStart'))
        set_if_truthy(obj, 'et', node.get('timeEnd'))
        set_if_truthy(obj, 'sd', node.get('startDt'))
        set_if_truthy(obj, 'ed', node.get('endDt'))
        set_if_truthy(obj, 'profs', [s['netid'] for s in node.get('instructors', [])])
        set_if_truthy(obj, 'topic', node.get('meetingTopicDescription'))
        return obj

    def convert_section(self, node, parent=None):
        comp = node.get('ssrComponent')
        obj = {}
        obj['nbr'] = int(node.get('classNbr'))
        obj['sec'] = node.get('section')
        # obj['loc'] = node.get('location')
        # obj['campus'] = node.get('campus')
        set_if_truthy(obj, 'topic', node.get('topicDescription'))
        self.locations.add((node.get('location'), node.get('locationDescr'), node.get('campus'), node.get('campusDescr')))
        set_if_truthy(obj, 'mt', [self.convert_meeting(s, node) for s in node.get('meetings', [])])
        return comp, obj

    def parse(self, node):
        obj = {}
        obj['sub'] = node.get('subject')
        obj['nbr'] = int(node.get('catalogNbr'))
        obj['title'] = node.get('titleLong')
        for group in node.get('enrollGroups', []):
            course = obj.copy()
            if group['unitsMinimum'] == group['unitsMaximum']:
                course['unit'] = [group['unitsMaximum']]
            else:
                course['unit'] = [group['unitsMinimum'], group['unitsMaximum']]
            set_if_truthy(course, 'optcomp', group['componentsOptional'])
            set_if_truthy(course, 'session', group['sessionCode'])
            set_if_truthy(course, 'crosslists', [self.crosslist(d) for d in group.get('simpleCombinations', [])])
            secs = {}
            for sec in group['classSections']:
                comp, sec = self.convert_section(sec, group)
                if comp not in secs:
                    secs[comp] = []
                secs[comp].append(sec)
            course['secs'] = secs
            self.courses.append(course)
            self.sessions.add((group['sessionCode'], group['sessionBeginDt'], group['sessionEndDt'], group['sessionLong']))
class CourseParserXML(CourseParser):
    def __init__(self):
        self.courses = []
        self.profs = set()

    def parse(self, node):
        self.courses.append(self.convert_course(node))

    def parse_prof(self, name):
        if name in faulty_prof:
            name = faulty_prof[name]
        result = re.search(r'\((.+)\)', name)
        if result is None:
            print("warning: %s does not have a netid" % name)
            return name
        else:
            netid = result.group(1)
            self.profs.add(netid)
            return netid

    def convert_meeting(self, node):
        obj = {}
        pattern = 0
        pattern_desc = node.find('meeting_pattern_sdescr').text
        if pattern_desc != 'TBA':
            for c in pattern_desc:
                pattern |= day_pattern[c]
        set_if_truthy(obj, 'ptn', pattern)
        set_if_truthy(obj, 'bldg', node.find('building_code').text)
        set_if_truthy(obj, 'rm', node.find('room').text)
        set_if_truthy(obj, 'st', node.find('start_time').text)
        set_if_truthy(obj, 'et', node.find('end_time').text)
        set_if_truthy(obj, 'sd', node.find('start_date').text)
        set_if_truthy(obj, 'ed', node.find('end_date').text)
        set_if_truthy(obj, 'profs', [self.parse_prof(s) for s in to_list(node.find('instructors') or [])])
        return obj

    def convert_section(self, node):
        comp = node.get('ssr_component')
        obj = {}
        obj['nbr'] = int(node.get('class_number'))
        obj['sec'] = node.get('class_section')
        section_types.add(comp)
        set_if_truthy(obj, 'consent', get_s(node.find('consent_ldescr')))
        set_if_truthy(obj, 'note', get_s(node.find('notes')))
        set_if_truthy(obj, 'mt', [self.convert_meeting(s) for s in node.findall('meeting')])
        return comp, obj

    def convert_course(self, node):
        obj = {}
        obj['sub'] = node.get('subject')
        obj['nbr'] = int(node.get('catalog_nbr'))
        obj['unit'] = convert_units(node.find('units').text)
        obj['title'] = node.find('course_title').text
        set_if_truthy(obj, 'topics', to_list(node.find('topics')))
        set_if_truthy(obj, 'crosslists', [convert_crosslist(a) for a in node.find('crosslists') or []])
        set_if_truthy(obj, 'comeetings', [convert_crosslist(a) for a in node.find('comeetings') or []])
        secs = {}
        for sec in node.find('sections'):
            comp, sec = self.convert_section(sec)
            if comp not in secs:
                secs[comp] = []
            secs[comp].append(sec)
        obj['secs'] = secs
        return obj
| 29.820175 | 123 | 0.564348 | 860 | 6,799 | 4.310465 | 0.203488 | 0.040464 | 0.089021 | 0.101969 | 0.338279 | 0.289722 | 0.182358 | 0.149987 | 0.133261 | 0.114378 | 0 | 0.004261 | 0.27504 | 6,799 | 227 | 124 | 29.951542 | 0.747819 | 0.011914 | 0 | 0.279762 | 0 | 0 | 0.141389 | 0.006704 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.005952 | null | null | 0.005952 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
50ff2d5fa93361a61b58feca15f43ba6967b76f5 | 320 | py | Python | src/cart/migrations/0009_remove_cartitem_user.py | phisyche/rydhubsolutions | 50cd35c9f2f6530bcc19358beb6c91cda5287f4f | [
"MIT"
] | 5 | 2020-09-07T15:30:10.000Z | 2021-01-21T19:25:22.000Z | src/cart/migrations/0009_remove_cartitem_user.py | phisyche/rydhubsolutions | 50cd35c9f2f6530bcc19358beb6c91cda5287f4f | [
"MIT"
] | 21 | 2019-12-04T22:49:42.000Z | 2022-02-12T09:17:42.000Z | src/cart/migrations/0009_remove_cartitem_user.py | phisyche/rydhubsolutions | 50cd35c9f2f6530bcc19358beb6c91cda5287f4f | [
"MIT"
] | 4 | 2020-03-25T05:50:39.000Z | 2021-08-08T20:59:20.000Z | # Generated by Django 2.2.9 on 2020-01-29 16:31
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('cart', '0008_cartitem_user'),
]
operations = [
migrations.RemoveField(
model_name='cartitem',
name='user',
),
]
| 17.777778 | 47 | 0.58125 | 34 | 320 | 5.382353 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085202 | 0.303125 | 320 | 17 | 48 | 18.823529 | 0.735426 | 0.140625 | 0 | 0 | 1 | 0 | 0.124542 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0f9934026df2e6f4b91b0975aecdc12c43dbf68d | 6,417 | py | Python | health/models.py | atiro/obesitydata | b0cc0759db03ee07c63b3b254bf04f59aeef965d | [
"BSD-3-Clause"
] | null | null | null | health/models.py | atiro/obesitydata | b0cc0759db03ee07c63b3b254bf04f59aeef965d | [
"BSD-3-Clause"
] | 15 | 2015-07-27T21:43:25.000Z | 2015-07-29T21:34:48.000Z | health/models.py | atiro/obesitydata | b0cc0759db03ee07c63b3b254bf04f59aeef965d | [
"BSD-3-Clause"
] | null | null | null | from django.db import models
class Health(models.Model):
    MALE = 'M'
    FEMALE = 'F'
    ALL = 'A'
    GENDER_CHOICES = (
        (MALE, 'Male'),
        (FEMALE, 'Female'),
        (ALL, 'All')
    )

    class Meta:
        abstract = True
class HealthActivity(models.Model):
    MALE = 'M'
    FEMALE = 'F'
    ALL = 'A'
    GENDER_CHOICES = (
        (MALE, 'Male'),
        (FEMALE, 'Female'),
        (ALL, 'All')
    )
    AGE_16_TO_24 = '16-24'
    AGE_25_TO_34 = '25-34'
    AGE_35_TO_44 = '35-44'
    AGE_45_TO_54 = '45-54'
    AGE_55_TO_64 = '55-64'
    AGE_65_TO_74 = '65-74'
    AGE_75_PLUS = '75+'
    AGE_ALL = 'ALL'
    AGE_CHOICES = (
        (AGE_16_TO_24, '16-24'),
        (AGE_25_TO_34, '25-34'),
        (AGE_35_TO_44, '35-44'),
        (AGE_45_TO_54, '45-54'),
        (AGE_55_TO_64, '55-64'),
        (AGE_65_TO_74, '65-74'),
        (AGE_75_PLUS, '75+'),
        (AGE_ALL, 'All Ages'),
    )
    ACTIVITY_MEETS = 'Meets'
    ACTIVITY_SOME = 'Some'
    ACTIVITY_LOW = 'Low'
    ACTIVITY_BASES = 'Bases'
    ACTIVITY_CHOICES = (
        (ACTIVITY_MEETS, 'Meets Activity'),
        (ACTIVITY_SOME, 'Some Activity'),
        (ACTIVITY_LOW, 'Low Activity'),
        (ACTIVITY_BASES, 'Bases'),
    )
    year = models.IntegerField()
    gender = models.CharField(max_length=1, choices=GENDER_CHOICES, default=MALE)
    age = models.CharField(max_length=8, choices=AGE_CHOICES, default=AGE_16_TO_24)
    activity = models.CharField(max_length=5, choices=ACTIVITY_CHOICES, default=ACTIVITY_MEETS)
    percentage = models.FloatField(default=0.0)
class HealthWeight(models.Model):
    MALE = 'M'
    FEMALE = 'F'
    ALL = 'A'
    GENDER_CHOICES = (
        (MALE, 'Male'),
        (FEMALE, 'Female'),
        (ALL, 'All')
    )
    year = models.IntegerField()
    gender = models.CharField(max_length=1, choices=GENDER_CHOICES, default=MALE)
    weight_mean = models.FloatField()
    weight_stderr = models.FloatField()
    base = models.IntegerField()
class HealthBMI(models.Model):
    MALE = 'M'
    FEMALE = 'F'
    ALL = 'A'
    GENDER_CHOICES = (
        (MALE, 'Male'),
        (FEMALE, 'Female'),
        (ALL, 'All')
    )
    AGE_16_TO_24 = '16-24'
    AGE_25_TO_34 = '25-34'
    AGE_35_TO_44 = '35-44'
    AGE_45_TO_54 = '45-54'
    AGE_55_TO_64 = '55-64'
    AGE_65_TO_74 = '65-74'
    AGE_75_PLUS = '75+'
    AGE_ALL = 'ALL'
    AGE_CHOICES = (
        (AGE_16_TO_24, '16-24'),
        (AGE_25_TO_34, '25-34'),
        (AGE_35_TO_44, '35-44'),
        (AGE_45_TO_54, '45-54'),
        (AGE_55_TO_64, '55-64'),
        (AGE_65_TO_74, '65-74'),
        (AGE_75_PLUS, '75+'),
        (AGE_ALL, 'All Ages'),
    )
    BMI_UNDERWEIGHT = 'U'
    BMI_NORMAL = 'N'
    BMI_OVERWEIGHT = 'O'
    BMI_OBESE = 'B'
    BMI_MORBIDLY_OBESE = 'M'
    BMI_OVERWEIGHT_OBESE = 'W'
    BMI_MEAN = 'E'
    BMI_STDERR = 'S'
    BMI_BASE = 'A'
    BMI_ALL = 'L'
    BMI_CHOICES = (
        (BMI_UNDERWEIGHT, 'Underweight'),
        (BMI_NORMAL, 'Normal'),
        (BMI_OVERWEIGHT, 'Overweight'),
        (BMI_OBESE, 'Obese'),
        (BMI_MORBIDLY_OBESE, 'Morbidly Obese'),
        (BMI_OVERWEIGHT_OBESE, 'Overweight including obese'),
        (BMI_MEAN, 'Mean'),
        (BMI_STDERR, 'Std error of the mean'),
        (BMI_BASE, 'Base'),
        (BMI_ALL, 'All'),
    )
    year = models.IntegerField()
    gender = models.CharField(max_length=1, choices=GENDER_CHOICES, default=MALE)
    age = models.CharField(max_length=8, choices=AGE_CHOICES, default=AGE_16_TO_24)
    bmi = models.CharField(max_length=1, choices=BMI_CHOICES, default=BMI_NORMAL)
    percentage = models.FloatField(default=0.0)
class HealthFruitVeg(models.Model):
    MALE = 'M'
    FEMALE = 'F'
    ALL = 'A'
    GENDER_CHOICES = (
        (MALE, 'Male'),
        (FEMALE, 'Female'),
        (ALL, 'All')
    )
    AGE_16_TO_24 = '16-24'
    AGE_25_TO_34 = '25-34'
    AGE_35_TO_44 = '35-44'
    AGE_45_TO_54 = '45-54'
    AGE_55_TO_64 = '55-64'
    AGE_65_TO_74 = '65-74'
    AGE_75_PLUS = '75+'
    AGE_ALL = 'ALL'
    AGE_CHOICES = (
        (AGE_16_TO_24, '16-24'),
        (AGE_25_TO_34, '25-34'),
        (AGE_35_TO_44, '35-44'),
        (AGE_45_TO_54, '45-54'),
        (AGE_55_TO_64, '55-64'),
        (AGE_65_TO_74, '65-74'),
        (AGE_75_PLUS, '75+'),
        (AGE_ALL, 'All Ages'),
    )
    FRUITVEG_NONE = 'N'
    FRUITVEG_LESS_1 = '1'
    FRUITVEG_LESS_2 = '2'
    FRUITVEG_LESS_3 = '3'
    FRUITVEG_LESS_4 = '4'
    FRUITVEG_LESS_5 = '5'
    FRUITVEG_MORE_5 = '6'
    FRUITVEG_MEAN = 'M'
    FRUITVEG_STDERR = 'S'
    FRUITVEG_MEDIAN = 'D'
    FRUITVEG_BASE = 'B'
    FRUITVEG_CHOICES = (
        (FRUITVEG_NONE, 'No Fruit & Veg'),
        (FRUITVEG_LESS_1, 'Under 1 portion'),
        (FRUITVEG_LESS_2, '1-2 Portions'),
        (FRUITVEG_LESS_3, '2-3 Portions'),
        (FRUITVEG_LESS_4, '3-4 Portions'),
        (FRUITVEG_LESS_5, '4-5 Portions'),
        (FRUITVEG_MORE_5, '5+ Portions'),
        (FRUITVEG_MEAN, 'Mean Portions'),
        (FRUITVEG_STDERR, 'Standard error of the mean'),
        (FRUITVEG_MEDIAN, 'Median Portions'),
        (FRUITVEG_BASE, 'Base')
    )
    year = models.IntegerField()
    gender = models.CharField(max_length=1, choices=GENDER_CHOICES, default=MALE)
    age = models.CharField(max_length=8, choices=AGE_CHOICES, default=AGE_16_TO_24)
    fruitveg = models.CharField(max_length=1, choices=FRUITVEG_CHOICES, default=FRUITVEG_NONE)
    percentage = models.FloatField(default=0.0)
class HealthHealth(models.Model):
    MALE = 'M'
    FEMALE = 'F'
    ALL = 'A'
    GENDER_CHOICES = (
        (MALE, 'Male'),
        (FEMALE, 'Female'),
        (ALL, 'All')
    )
    HEALTH_VG = 'VG'
    HEALTH_VB = 'VB'
    HEALTH_ILL = 'ILL'
    HEALTH_SICK = 'SICK'
    HEALTH_ALL = 'ALL'
    HEALTH_BASE = 'BASE'
    HEALTH_CHOICES = (
        (HEALTH_VG, 'Very good/good health'),
        (HEALTH_VB, 'Very bad/bad health'),
        (HEALTH_ILL, 'At least one longstanding illness'),
        (HEALTH_SICK, 'Acute sickness'),
        (HEALTH_ALL, 'All'),
        (HEALTH_BASE, 'Bases'),
    )
    year = models.IntegerField()
    gender = models.CharField(max_length=1, choices=GENDER_CHOICES, default=MALE)
    health = models.CharField(max_length=4, choices=HEALTH_CHOICES, default=HEALTH_VG)
    percentage = models.FloatField(default=0.0)
| 25.164706 | 95 | 0.58532 | 845 | 6,417 | 4.142012 | 0.136095 | 0.025714 | 0.061714 | 0.082286 | 0.573429 | 0.548286 | 0.52 | 0.52 | 0.52 | 0.52 | 0 | 0.079284 | 0.268817 | 6,417 | 254 | 96 | 25.26378 | 0.666667 | 0.011532 | 0 | 0.536585 | 0 | 0 | 0.125256 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.004878 | 0 | 0.57561 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
0fb8484d9a2f11e82bc1b0d029f000c6539ffbe4 | 828 | py | Python | venv/Lib/site-packages/captcha/urls.py | bopopescu/diandian_online | 05eba122762c087623d42fad5352f6b67c73bcc5 | [
"BSD-3-Clause"
] | 3 | 2020-03-16T02:49:59.000Z | 2022-03-21T15:04:44.000Z | venv/Lib/site-packages/captcha/urls.py | bopopescu/diandian_online | 05eba122762c087623d42fad5352f6b67c73bcc5 | [
"BSD-3-Clause"
] | 1 | 2020-12-10T08:04:25.000Z | 2020-12-10T08:04:25.000Z | venv/Lib/site-packages/captcha/urls.py | bopopescu/diandian_online | 05eba122762c087623d42fad5352f6b67c73bcc5 | [
"BSD-3-Clause"
] | 2 | 2020-05-01T08:16:25.000Z | 2020-07-21T00:12:06.000Z | from django.conf.urls import url
from django.urls import path
from captcha import views
# urlpatterns = [
#     url(r'image/(?P<key>\w+)/$', views.captcha_image, name='captcha-image', kwargs={'scale': 1}),
#     url(r'image/(?P<key>\w+)@2/$', views.captcha_image, name='captcha-image-2x', kwargs={'scale': 2}),
#     url(r'audio/(?P<key>\w+).wav$', views.captcha_audio, name='captcha-audio'),
#     url(r'refresh/$', views.captcha_refresh, name='captcha-refresh'),
# ]
urlpatterns = [
    path('image/<slug:key>/', views.captcha_image, name='captcha-image', kwargs={'scale': 1}),
    path('image/<slug:key>@2/', views.captcha_image, name='captcha-image-2x', kwargs={'scale': 2}),
    path('audio/<slug:key>.wav', views.captcha_audio, name='captcha-audio'),
    path('refresh/', views.captcha_refresh, name='captcha-refresh'),
] | 51.75 | 104 | 0.660628 | 118 | 828 | 4.567797 | 0.211864 | 0.178108 | 0.12616 | 0.155844 | 0.693878 | 0.693878 | 0.64193 | 0.345083 | 0.345083 | 0.178108 | 0 | 0.01084 | 0.108696 | 828 | 16 | 105 | 51.75 | 0.719512 | 0.444444 | 0 | 0 | 0 | 0 | 0.288546 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
0fba0a7eb708f5b3bddbe1370f87cfb260a40450 | 428 | py | Python | users/migrations/0002_auto_20190529_1832.py | VladaDidko/skill- | 861c08376e2bc9b9a5a44e3a8560324ee53ce2d0 | [
"Unlicense"
] | null | null | null | users/migrations/0002_auto_20190529_1832.py | VladaDidko/skill- | 861c08376e2bc9b9a5a44e3a8560324ee53ce2d0 | [
"Unlicense"
] | 18 | 2019-05-28T17:20:34.000Z | 2022-03-11T23:50:12.000Z | users/migrations/0002_auto_20190529_1832.py | VladaDidko/skill- | 861c08376e2bc9b9a5a44e3a8560324ee53ce2d0 | [
"Unlicense"
] | 3 | 2019-05-27T09:51:54.000Z | 2019-12-12T20:35:29.000Z | # Generated by Django 2.2.1 on 2019-05-29 15:32
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('users', '0001_initial'),
    ]

    operations = [
        migrations.AlterField(
            model_name='profile',
            name='gender',
            field=models.CharField(choices=[('M', 'Male'), ('F', 'Female')], default='Ч', max_length=1),
        ),
    ]
| 22.526316 | 104 | 0.57243 | 47 | 428 | 5.148936 | 0.829787 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064103 | 0.271028 | 428 | 18 | 105 | 23.777778 | 0.711538 | 0.10514 | 0 | 0 | 1 | 0 | 0.112861 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0fbe97554004825caf136fddc2974680e7d04552 | 635 | py | Python | order/models/customers.py | fbsamples/cp_reference | 028b384767d06158a64be8cbb1af613e2f3c881e | [
"MIT"
] | 2 | 2021-09-05T04:21:33.000Z | 2021-11-03T20:56:46.000Z | order/models/customers.py | fbsamples/cp_reference | 028b384767d06158a64be8cbb1af613e2f3c881e | [
"MIT"
] | null | null | null | order/models/customers.py | fbsamples/cp_reference | 028b384767d06158a64be8cbb1af613e2f3c881e | [
"MIT"
] | null | null | null | # Copyright 2004-present, Facebook. All Rights Reserved.
from django.db import models
from core.models import BaseModel
class Customer(BaseModel):
    """Represent a single customer instance
    fields:
        store: the store this customer belongs to
        full_name: customer full name
        email: customer email
        addr: customer shipping addr
    """

    store = models.ForeignKey("shop.Store", null=True, on_delete=models.SET_NULL)
    full_name = models.CharField(max_length=255)
    email = models.CharField(max_length=255)
    addr = models.TextField(blank=True, null=True)

    def __str__(self):
        return self.full_name
| 27.608696 | 81 | 0.722835 | 84 | 635 | 5.333333 | 0.571429 | 0.071429 | 0.080357 | 0.107143 | 0.120536 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019493 | 0.192126 | 635 | 22 | 82 | 28.863636 | 0.853801 | 0.352756 | 0 | 0 | 0 | 0 | 0.026316 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.222222 | 0.111111 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
0fc3095b8ef6cc10b6acc06d5aa68037afc74cc5 | 692 | py | Python | greeting/cliente.py | javalisson/Sockets | 90068c0b5a4b2f21ca789177c3c445c671732a86 | [
"MIT"
] | 2 | 2017-04-26T11:17:56.000Z | 2017-12-05T01:55:20.000Z | greeting/cliente.py | javalisson/Sockets | 90068c0b5a4b2f21ca789177c3c445c671732a86 | [
"MIT"
] | 2 | 2017-02-22T12:35:13.000Z | 2017-03-29T12:44:22.000Z | greeting/cliente.py | javalisson/Sockets | 90068c0b5a4b2f21ca789177c3c445c671732a86 | [
"MIT"
] | 24 | 2017-02-22T12:26:04.000Z | 2020-10-13T05:19:53.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# adapted from https://wiki.python.org/moin/TcpCommunication
import socket
TCP_IP = '127.0.0.1'
TCP_PORT = 5005
BUFFER_SIZE = 1024
NOME = "Javalisson"
print ("[CLIENTE] Iniciando")
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print ("[CLIENTE] Conectando")
s.connect((TCP_IP, TCP_PORT))
print ("[CLIENTE] Enviando dados: \"" + NOME + "\"")
s.send(NOME.encode('utf-8'))
print ("[CLIENTE] Recebendo dados do servidor")
resposta = s.recv(BUFFER_SIZE)
print ("[CLIENTE] Dados recebidos em resposta do servidor: \"" + resposta.decode('utf-8') + "\"")
print ("[CLIENTE] Fechando conexão com o servidor")
s.close()
print ("[CLIENTE] Fim") | 25.62963 | 97 | 0.690751 | 98 | 692 | 4.795918 | 0.571429 | 0.178723 | 0.038298 | 0.068085 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028053 | 0.124277 | 692 | 27 | 98 | 25.62963 | 0.747525 | 0.144509 | 0 | 0 | 0 | 0 | 0.462712 | 0.040678 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.058824 | 0.411765 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
0fc3569e087d76a290fb758960cbd352ee6f27dc | 2,132 | py | Python | blog/migrations/0001_initial.py | jxtxzzw/eoj3 | 468c16ed6de8b9b542972d0e83b02fd2cfa35e4f | [
"MIT"
] | 1 | 2020-11-17T13:08:07.000Z | 2020-11-17T13:08:07.000Z | blog/migrations/0001_initial.py | zerolfx/eoj3 | 156060399d1c3e5f7bcdbf34eaffbe2be66e1b20 | [
"MIT"
] | 2 | 2020-09-23T21:27:55.000Z | 2021-06-25T15:24:46.000Z | blog/migrations/0001_initial.py | zerolfx/eoj3 | 156060399d1c3e5f7bcdbf34eaffbe2be66e1b20 | [
"MIT"
] | 1 | 2019-07-13T00:44:39.000Z | 2019-07-13T00:44:39.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10.4 on 2017-04-06 16:37
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('problem', '0001_initial'),
    ]

    operations = [
        migrations.CreateModel(
            name='Blog',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title', models.CharField(max_length=128, verbose_name='Title')),
                ('text', models.TextField(verbose_name='Text')),
                ('visible', models.BooleanField(default=False, verbose_name='Visible')),
                ('create_time', models.DateTimeField(auto_now_add=True, verbose_name='Created time')),
                ('edit_time', models.DateTimeField(auto_now=True, verbose_name='Edit time')),
                ('author', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'ordering': ['-edit_time'],
            },
        ),
        migrations.CreateModel(
            name='Comment',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('text', models.TextField(verbose_name='Text')),
                ('create_time', models.DateTimeField(auto_now_add=True, verbose_name='Created time')),
                ('author', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
                ('blog', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='blog.Blog')),
                ('problem', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='problem.Problem')),
            ],
            options={
                'ordering': ['-create_time'],
            },
        ),
    ]
| 42.64 | 125 | 0.604597 | 224 | 2,132 | 5.571429 | 0.34375 | 0.079327 | 0.05609 | 0.088141 | 0.544872 | 0.520833 | 0.466346 | 0.466346 | 0.466346 | 0.466346 | 0 | 0.015094 | 0.254221 | 2,132 | 49 | 126 | 43.510204 | 0.769811 | 0.031895 | 0 | 0.439024 | 1 | 0 | 0.110141 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.097561 | 0 | 0.195122 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0fc891bc1997dfbe1f1be5e5c5524986e39b1d74 | 5,345 | py | Python | validation_tests/case_studies/merewether/plot_results.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | 136 | 2015-05-07T05:47:43.000Z | 2022-02-16T03:07:40.000Z | validation_tests/case_studies/merewether/plot_results.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | 184 | 2015-05-03T09:27:54.000Z | 2021-12-20T04:22:48.000Z | validation_tests/case_studies/merewether/plot_results.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | 70 | 2015-03-18T07:35:22.000Z | 2021-11-01T07:07:29.000Z | from anuga.utilities import plot_utils as util
from matplotlib import pyplot as pyplot
import numpy
verbose= True
swwfile = 'merewether_1m.sww'
p=util.get_output(swwfile)
p2=util.get_centroids(p)
# Time index at last time
tindex = len(p2.time)-1
if verbose: print('calculating experimental transect')
x_data = [ 0.0, 3.0, 6.0, 9.0, 12.0, 15.0, 18.0, 21.0, 24.0, 27.0, 30.0, 33.0]
#vel = [ 0.0, 0.0, 1.1, 3.2, 3.4, 2.4, 3.2, 3.2, 3.7, 3.1, 0.4, 0.0]
vel_data = [ 0.0, 0.4, 3.1, 3.7, 3.2, 3.2, 2.4, 3.4, 3.2, 1.1, 0.0, 0.0]
#depth = [ 0.0, 0.0, 0.1, 0.5, 0.45, 0.4, 0.55, 0.1, 0.1, 0.05, 0.04, 0.0]
depth_data = [ 0.0, 0.04, 0.05, 0.1, 0.1, 0.55, 0.4, 0.45, 0.5, 0.1, 0.0, 0.0]
from scipy import interpolate
fvel = interpolate.interp1d(x_data, vel_data)
fdepth = interpolate.interp1d(x_data, depth_data)
if verbose: print('calculating model heights at observation points')
# Get nearest wet points to 'point observations'
point_observations = numpy.genfromtxt(
    'Observations/ObservationPoints.csv',
    delimiter=",", skip_header=1)
nearest_points = point_observations[:, 0]*0. - 1
for i in range(len(nearest_points)):
    # Compute distance of ANUGA points to observation, and
    # if the ANUGA point is dry then add a large value
    # Then find index of minimum
    n = ((p2.x+p2.xllcorner-point_observations[i, 0])**2 + \
         (p2.y+p2.yllcorner-point_observations[i, 1])**2 + \
         (p2.stage[tindex, :] <= p2.elev)*1.0e+06).argmin()
    nearest_points[i] = n
f = open('Stage_point_comparison.csv', 'w')
f.writelines('Field, ANUGA, TUFLOW, ANUGA minus Field, ANUGA minus TUFLOW \n')
if verbose: print(nearest_points.tolist())
for i in range(len(nearest_points)):
    po = point_observations[i, -2]
    tu = point_observations[i, -1]
    anuga_data = p2.stage[tindex, nearest_points.tolist()[i]]
    newline = str(round(po, 2)) + ', ' + str(round(anuga_data, 2)) + ', ' + str(tu) + ', ' + \
        str(round(anuga_data - po, 2)) + ', ' + str(round(anuga_data - tu, 2)) + '\n'
    f.writelines(newline)
f.flush()
f.close()
if verbose: print('Plot transect')
## Plot transect 1 [need to guess appropriate end points as these are not so
## clear from the report]
xx=util.near_transect(p2,[103, 100.], [130.,80.],tol=0.5)
xx2=xx[0]
pyplot.clf()
pyplot.figure(figsize=(16,10.5))
pyplot.subplot(121)
pyplot.scatter(p2.x, p2.y, c=p2.elev,edgecolors='none')
# Add nice elevation data
colVals = numpy.maximum(numpy.minimum(p2.elev, 25.), 19.)
util.plot_triangles(p, values = colVals, edgecolors='none')
pyplot.gca().set_aspect('equal')
pyplot.scatter(p2.x[xx2],p2.y[xx2],color='green')
pyplot.xlim( (40., 160.))
pyplot.ylim( (0.,140.))
pyplot.title('Transect points in green')
pyplot.subplot(222)
pyplot.scatter(xx[1],p2.vel[tindex,xx[0]],color='green',label='model')
pyplot.scatter(xx[1],fvel(xx[1]),color='blue',label='data')
pyplot.legend(loc='upper left')
#pyplot.xlim(0,25)
pyplot.title('Final flow speed along the transect')
pyplot.subplot(224)
pyplot.scatter(xx[1],p2.stage[tindex,xx[0]]-p2.elev[xx[0]],color='green',label='model')
pyplot.scatter(xx[1],fdepth(xx[1]),color='blue',label='data')
pyplot.legend(loc='upper left')
#pyplot.xlim(0,25)
pyplot.title('Final depth along the transect')
pyplot.savefig('Transect1.png', bbox_inches='tight')
if verbose: print('Plot velocity field')
pyplot.clf()
# Velocity vector plot
pyplot.figure(figsize=(16,22))
pyplot.scatter(p2.x,p2.y,c=(p2.elev>24.),edgecolors='none', s=0.2)
pyplot.gca().set_aspect('equal')
pyplot.xlim((100,180))
pyplot.ylim((100,210))
#k=range(0,len(p2.x),2) # Thin out the vectors for easier viewing
colVals = numpy.maximum(numpy.minimum(p2.elev, 25.), 19.)
util.plot_triangles(p, values = colVals, edgecolors='white')
k = range(len(p2.x))
# Thin out the triangles
#k = (((10.*(p2.x - p2.x.round())).round()%2 == 0.0)*((10.*(p2.y - p2.y.round())).round()%2 == 0.0)).nonzero()[0]
pyplot.quiver(p2.x[k], p2.y[k], p2.xvel[tindex, k], p2.yvel[tindex, k],
              scale_units='xy', units='xy', width=0.1,
              color='black', scale=1.0)
pyplot.savefig('velocity_stationary.png',dpi=100, bbox_inches='tight')
## Froude number plot
if verbose: print('Plot Froude number plot')
pyplot.clf()
pyplot.figure(figsize=(6,8))
froude_number = p2.vel[tindex]/(numpy.maximum(p2.height[tindex], 1.0e-03)*9.8)**0.5
froude_category = (froude_number>1.).astype(float) + (froude_number > 0.).astype(float)
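# froude_category takes the value 2 where the flow is supercritical (Fr > 1),
# 1 where it is subcritical but moving (0 < Fr <= 1), and 0 where Fr == 0.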
pyplot.scatter(p2.x,p2.y,edgecolors='none', s=0.2)
## Fake additions to plot to hack matplotlib legend
pyplot.scatter(0.,0., color='FireBrick',label='>1', marker='s')
pyplot.scatter(0.,0., color='PaleGreen',label='0-1', marker='s')
pyplot.scatter(0.,0., color='blue',label='0',marker='s')
pyplot.gca().set_aspect('equal')
util.plot_triangles(p, values = froude_category, edgecolors='none')
pyplot.xlim((p.x.min(), p.x.max()))
pyplot.ylim((p.y.min(), p.y.max()))
pyplot.title("Froude Number zones: 0, (0,1], or >1")
import matplotlib.patches as mpatches
#red_patch = mpatches.Patch(color='red', label='>1')
#green_patch = mpatches.Patch(color='green', label='(0-1]')
#blue_patch = mpatches.Patch(color='blue', label='0.')
#pyplot.legend(handles=[red_patch, green_patch, blue_patch], labels=['>1', '(0-1]', '0.'], loc='best')
pyplot.legend(loc='upper left')
pyplot.savefig('froudeNumber.png',dpi=100,bbox_inches='tight')
| 35.397351 | 113 | 0.674088 | 924 | 5,345 | 3.841991 | 0.257576 | 0.015211 | 0.009296 | 0.005634 | 0.280563 | 0.210423 | 0.155493 | 0.140282 | 0.124507 | 0.109859 | 0 | 0.072328 | 0.123106 | 5,345 | 150 | 114 | 35.633333 | 0.685086 | 0.197568 | 0 | 0.139785 | 0 | 0 | 0.147618 | 0.019479 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.053763 | null | null | 0.064516 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0fcc6d24d3d335350d5fc8d0e26b2a5c90342944 | 2,304 | py | Python | myproject/products/views.py | abhishekmorya/django-rest-assignment | 069fedafa2ba92e06157441500cd3f1fecf450e9 | [
"MIT"
] | null | null | null | myproject/products/views.py | abhishekmorya/django-rest-assignment | 069fedafa2ba92e06157441500cd3f1fecf450e9 | [
"MIT"
] | null | null | null | myproject/products/views.py | abhishekmorya/django-rest-assignment | 069fedafa2ba92e06157441500cd3f1fecf450e9 | [
"MIT"
] | null | null | null | from rest_framework import viewsets, filters, mixins
from rest_framework.response import Response
from products import models
from products import serializers
class CategoryApiView(viewsets.ReadOnlyModelViewSet):
    """API View for Category"""
    serializer_class = serializers.CategorySerializer
    queryset = models.Category.objects.all()
class SubCategoryApiView(viewsets.ReadOnlyModelViewSet):
    """API View for SubCategory"""
    serializer_class = serializers.SubCategorySerializer
    queryset = models.SubCategory.objects.all().order_by('-created_on')

    def retrieve(self, request, pk):
        category = models.Category.objects.filter(
            name=pk
        ).first()
        if category is None:
            return Response("Error: Category not found", 404)
        queryset = models.SubCategory.objects.filter(category=category.id)
        values = [x.to_dict() for x in queryset]
        return Response(values)
class ProductApiView(mixins.ListModelMixin, mixins.RetrieveModelMixin, mixins.CreateModelMixin, viewsets.GenericViewSet):
    """API View for Product"""
    serializer_class = serializers.ProductSerializer
    queryset = models.Product.objects.all().order_by('-created_on')
class ProductSubCategoryView(viewsets.ReadOnlyModelViewSet):
    """Products for a sub-category"""
    serializer_class = serializers.ProductSerializer
    queryset = models.Product.objects.all()

    def retrieve(self, request, pk):
        sub_category = models.SubCategory.objects.filter(
            name=pk
        ).first()
        if sub_category is None:
            return Response("Error: Sub Category not found", 404)
        queryset = models.Product.objects.filter(sub_category=sub_category.id)
        values = [x.to_dict() for x in queryset]
        return Response(values)
class ProductCategoryView(viewsets.ReadOnlyModelViewSet):
    """Product for a category"""
    serializer_class = serializers.ProductSerializer
    queryset = models.Product.objects.all()

    def retrieve(self, request, pk):
        category = models.Category.objects.filter(
            name=pk
        ).first()
        if category is None:
            return Response("Error: Category Not Found", 404)
        q = self.queryset.filter(
            category=category.id
        )
        values = [x.to_dict() for x in q]
        return Response(values)
| 31.135135 | 121 | 0.691406 | 243 | 2,304 | 6.481481 | 0.263374 | 0.062222 | 0.08254 | 0.071111 | 0.490794 | 0.44254 | 0.35619 | 0.35619 | 0.35619 | 0.249524 | 0 | 0.003313 | 0.213976 | 2,304 | 74 | 122 | 31.135135 | 0.866372 | 0.051215 | 0 | 0.434783 | 0 | 0 | 0.035169 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065217 | false | 0 | 0.086957 | 0 | 0.586957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
0fcd071e5471654de2c169fc870632e566ae5177 | 302 | py | Python | R function.py | haideraheem/Simple-Python-Programs | dc0a71e88adc7323d46e75168e8fd2db97eea775 | [
"CC0-1.0"
] | null | null | null | R function.py | haideraheem/Simple-Python-Programs | dc0a71e88adc7323d46e75168e8fd2db97eea775 | [
"CC0-1.0"
] | null | null | null | R function.py | haideraheem/Simple-Python-Programs | dc0a71e88adc7323d46e75168e8fd2db97eea775 | [
"CC0-1.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sat Oct 6 17:11:11 2018
@author: Haider Raheem
"""
a= float(input("Type a : "))
b= float(input("Type b : "))
c= float(input("Type c : "))
x= (a*b)+(b*c)+(c*a)
y= (a+b+c)
r= x/y
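# Illustrative check: for a=1, b=2, c=3 -> x = 2 + 6 + 3 = 11, y = 6,
# so r = 11/6 and the program prints 1.83.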
print( )
print("The result of the calculation is {0:.2f}".format(r)) | 20.133333 | 59 | 0.539735 | 56 | 302 | 2.910714 | 0.571429 | 0.184049 | 0.257669 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 0.211921 | 302 | 15 | 59 | 20.133333 | 0.62605 | 0.271523 | 0 | 0 | 0 | 0 | 0.336683 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0fce3a1010ed62a22f3091187ec07f0057d5fa5a | 449 | py | Python | src/selectedtests/app/controllers/health_controller.py | isabella232/selected-tests | 890cd5f39f5571d50f0406b4c25a1a2eef1006a3 | [
"Apache-2.0"
] | 2 | 2020-04-13T11:26:57.000Z | 2022-01-21T00:03:52.000Z | src/selectedtests/app/controllers/health_controller.py | mongodb/selected-tests | 467f71f1d45b06ac3cc5db252f18658f8cd93083 | [
"Apache-2.0"
] | 54 | 2019-09-26T18:56:34.000Z | 2022-03-12T01:07:00.000Z | src/selectedtests/app/controllers/health_controller.py | isabella232/selected-tests | 890cd5f39f5571d50f0406b4c25a1a2eef1006a3 | [
"Apache-2.0"
] | 6 | 2019-10-01T14:24:27.000Z | 2020-02-13T15:53:47.000Z | """Controller for the health endpoints."""
from fastapi import APIRouter
from pydantic import BaseModel
router = APIRouter()
class HealthCheckResponse(BaseModel):
    """Model for health check responses."""

    online: bool

@router.get("", response_model=HealthCheckResponse, description="Health check endpoint")
def health() -> HealthCheckResponse:
    """Get the current status of the service."""
    return HealthCheckResponse(online=True)
| 24.944444 | 88 | 0.74833 | 48 | 449 | 6.979167 | 0.625 | 0.065672 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144766 | 449 | 17 | 89 | 26.411765 | 0.872396 | 0.242762 | 0 | 0 | 0 | 0 | 0.064815 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
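A sketch of exercising this router with FastAPI's test client; the application wiring and the /health prefix are assumptions made for the example, not part of the controller module:

from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()
app.include_router(router, prefix="/health")

client = TestClient(app)
response = client.get("/health")
assert response.status_code == 200
assert response.json() == {"online": True}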
0fd0370ba84af5ab763ef6f4203eaefec7cafeb5 | 920 | py | Python | bookorbooks/account/migrations/0008_auto_20210714_1507.py | talhakoylu/SummerInternshipBackend | 4ecedf5c97f73e3d32d5a534769e86aac3e4b6d3 | [
"MIT"
] | 1 | 2021-08-10T22:24:17.000Z | 2021-08-10T22:24:17.000Z | bookorbooks/account/migrations/0008_auto_20210714_1507.py | talhakoylu/SummerInternshipBackend | 4ecedf5c97f73e3d32d5a534769e86aac3e4b6d3 | [
"MIT"
] | null | null | null | bookorbooks/account/migrations/0008_auto_20210714_1507.py | talhakoylu/SummerInternshipBackend | 4ecedf5c97f73e3d32d5a534769e86aac3e4b6d3 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.5 on 2021-07-14 15:07
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    dependencies = [
        ('account', '0007_alter_childlist_child'),
    ]

    operations = [
        migrations.AlterField(
            model_name='childprofile',
            name='user',
            field=models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, primary_key=True, related_name='user_child', serialize=False, to=settings.AUTH_USER_MODEL, verbose_name='Kullanıcı'),
        ),
        migrations.AlterField(
            model_name='parentprofile',
            name='user',
            field=models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, primary_key=True, related_name='user_parent', serialize=False, to=settings.AUTH_USER_MODEL, verbose_name='Kullanıcı'),
        ),
    ]
| 35.384615 | 202 | 0.686957 | 107 | 920 | 5.728972 | 0.457944 | 0.052202 | 0.068516 | 0.107667 | 0.50571 | 0.50571 | 0.50571 | 0.50571 | 0.50571 | 0.50571 | 0 | 0.025815 | 0.2 | 920 | 25 | 203 | 36.8 | 0.807065 | 0.048913 | 0 | 0.315789 | 1 | 0 | 0.120275 | 0.029782 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.157895 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0fdab97e2573fd7c25a3583d3b5d334aba1173ee | 1,067 | py | Python | allies/serializers.py | PaulLerner/Allies | 478a85d0b98b4865854b52585563e0e42855b101 | [
"MIT"
] | null | null | null | allies/serializers.py | PaulLerner/Allies | 478a85d0b98b4865854b52585563e0e42855b101 | [
"MIT"
] | null | null | null | allies/serializers.py | PaulLerner/Allies | 478a85d0b98b4865854b52585563e0e42855b101 | [
"MIT"
] | null | null | null | import numpy as np
import pickle
import struct
class Serializer:
    def serialize(self, model):
        """
        Serialize a model dict to a string of uint8 bytes
        :param model: a dict with model components
        :return: a np.array of uint8
        """
        raise NotImplementedError

    def deserialize(self, model):
        """
        Deserialize a model from a string of uint8 bytes
        :param model: a np.array of uint8
        :return: a dict with model components
        """
        raise NotImplementedError

class DummySerializer(Serializer):
    """
    Dummy serializer for debug purposes.
    We assume inputs in the model dict are just paths to pretrained models
    """
    def serialize(self, model):
        pkl = pickle.dumps(model)
        u8 = np.array(struct.unpack("{}B".format(len(pkl)), pkl), dtype=np.uint8)
        return u8

    def deserialize(self, model):
        pkl_after = struct.pack('{}B'.format(len(model)), *list(model))
        serialized_model = pickle.loads(pkl_after)
return serialized_model | 27.358974 | 81 | 0.633552 | 134 | 1,067 | 5.014925 | 0.41791 | 0.053571 | 0.047619 | 0.0625 | 0.202381 | 0.089286 | 0.089286 | 0.089286 | 0 | 0 | 0 | 0.009091 | 0.278351 | 1,067 | 39 | 82 | 27.358974 | 0.863636 | 0.328022 | 0 | 0.352941 | 0 | 0 | 0.009677 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0.176471 | 0 | 0.647059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
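A round-trip sketch for DummySerializer; the dict values stand in for paths to pretrained models, as the class docstring assumes:

serializer = DummySerializer()
model = {"acoustic": "/models/acoustic.bin", "lm": "/models/lm.bin"}

u8 = serializer.serialize(model)       # np.array of dtype uint8
restored = serializer.deserialize(u8)  # back to the original dict
assert restored == model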
0fdf08782d0c6126eba111656026fe30af178bba | 389 | py | Python | warpworks/views/index.py | storborg/warpworks | a41a0a5bab8b826157309f7d0bafbdcdff66505b | [
"MIT"
] | null | null | null | warpworks/views/index.py | storborg/warpworks | a41a0a5bab8b826157309f7d0bafbdcdff66505b | [
"MIT"
] | null | null | null | warpworks/views/index.py | storborg/warpworks | a41a0a5bab8b826157309f7d0bafbdcdff66505b | [
"MIT"
] | null | null | null | from __future__ import (absolute_import, division, print_function,
                         unicode_literals)

from pyramid.view import view_config

from pyweaving.generators.twill import twill

@view_config(route_name='index', renderer='index.html')
def index_view(request):
    draft = twill(2, warp_color=(200, 0, 0), weft_color=(90, 90, 90))
    return dict(draft_json=draft.to_json())
| 32.416667 | 69 | 0.722365 | 54 | 389 | 4.925926 | 0.62963 | 0.075188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0.167095 | 389 | 11 | 70 | 35.363636 | 0.783951 | 0 | 0 | 0 | 0 | 0 | 0.03856 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.625 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
0fdf761292ed2657c4ca262e9905100c59129c2e | 3,291 | py | Python | linux_odp/.waf-1.6.8-3e3391c5f23fbabad81e6d17c63a1b1e/waflib/Tools/cs.py | dproc/trex_odp_porting_integration | 84d5f27a7eab8186b68c5a2b1409d3d0f41f859b | [
"Apache-2.0"
] | null | null | null | linux_odp/.waf-1.6.8-3e3391c5f23fbabad81e6d17c63a1b1e/waflib/Tools/cs.py | dproc/trex_odp_porting_integration | 84d5f27a7eab8186b68c5a2b1409d3d0f41f859b | [
"Apache-2.0"
] | null | null | null | linux_odp/.waf-1.6.8-3e3391c5f23fbabad81e6d17c63a1b1e/waflib/Tools/cs.py | dproc/trex_odp_porting_integration | 84d5f27a7eab8186b68c5a2b1409d3d0f41f859b | [
"Apache-2.0"
] | null | null | null | #! /usr/bin/env python
# encoding: utf-8
# WARNING! Do not edit! http://waf.googlecode.com/git/docs/wafbook/single.html#_obtaining_the_waf_file
import sys
if sys.hexversion < 0x020400f0: from sets import Set as set
from waflib import Utils,Task,Options,Logs,Errors
from waflib.TaskGen import before_method,after_method,feature
from waflib.Tools import ccroot
from waflib.Configure import conf
ccroot.USELIB_VARS['cs']=set(['CSFLAGS','ASSEMBLIES','RESOURCES'])
ccroot.lib_patterns['csshlib']=['%s']
def apply_cs(self):
    cs_nodes=[]
    no_nodes=[]
    for x in self.to_nodes(self.source):
        if x.name.endswith('.cs'):
            cs_nodes.append(x)
        else:
            no_nodes.append(x)
    self.source=no_nodes
    bintype=getattr(self,'type',self.gen.endswith('.dll')and'library'or'exe')
    self.cs_task=tsk=self.create_task('mcs',cs_nodes,self.path.find_or_declare(self.gen))
    tsk.env.CSTYPE='/target:%s'%bintype
    tsk.env.OUT='/out:%s'%tsk.outputs[0].abspath()
    inst_to=getattr(self,'install_path',bintype=='exe'and'${BINDIR}'or'${LIBDIR}')
    if inst_to:
        mod=getattr(self,'chmod',bintype=='exe'and Utils.O755 or Utils.O644)
        self.install_task=self.bld.install_files(inst_to,self.cs_task.outputs[:],env=self.env,chmod=mod)

def use_cs(self):
    names=self.to_list(getattr(self,'use',[]))
    get=self.bld.get_tgen_by_name
    for x in names:
        try:
            y=get(x)
        except Errors.WafError:
            self.cs_task.env.append_value('CSFLAGS','/reference:%s'%x)
            continue
        y.post()
        tsk=getattr(y,'cs_task',None)or getattr(y,'link_task',None)
        if not tsk:
            self.bld.fatal('cs task has no link task for use %r'%self)
        self.cs_task.dep_nodes.extend(tsk.outputs)
        self.cs_task.set_run_after(tsk)
        self.cs_task.env.append_value('CSFLAGS','/reference:%s'%tsk.outputs[0].abspath())

def debug_cs(self):
    csdebug=getattr(self,'csdebug',self.env.CSDEBUG)
    if not csdebug:
        return
    node=self.cs_task.outputs[0]
    if self.env.CS_NAME=='mono':
        out=node.parent.find_or_declare(node.name+'.mdb')
    else:
        out=node.change_ext('.pdb')
    self.cs_task.outputs.append(out)
    try:
        self.install_task.source.append(out)
    except AttributeError:
        pass
    if csdebug=='pdbonly':
        val=['/debug+','/debug:pdbonly']
    elif csdebug=='full':
        val=['/debug+','/debug:full']
    else:
        val=['/debug-']
    self.cs_task.env.append_value('CSFLAGS',val)

class mcs(Task.Task):
    color='YELLOW'
    run_str='${MCS} ${CSTYPE} ${CSFLAGS} ${ASS_ST:ASSEMBLIES} ${RES_ST:RESOURCES} ${OUT} ${SRC}'

def configure(conf):
    csc=getattr(Options.options,'cscbinary',None)
    if csc:
        conf.env.MCS=csc
    conf.find_program(['csc','mcs','gmcs'],var='MCS')
    conf.env.ASS_ST='/r:%s'
    conf.env.RES_ST='/resource:%s'
    conf.env.CS_NAME='csc'
    if str(conf.env.MCS).lower().find('mcs')>-1:
        conf.env.CS_NAME='mono'

def options(opt):
    opt.add_option('--with-csc-binary',type='string',dest='cscbinary')

class fake_csshlib(Task.Task):
    color='YELLOW'
    inst_to=None
    def runnable_status(self):
        for x in self.outputs:
            x.sig=Utils.h_file(x.abspath())
        return Task.SKIP_ME

def read_csshlib(self,name,paths=[]):
    return self(name=name,features='fake_lib',lib_paths=paths,lib_type='csshlib')

feature('cs')(apply_cs)
before_method('process_source')(apply_cs)
feature('cs')(use_cs)
after_method('apply_cs')(use_cs)
feature('cs')(debug_cs)
after_method('apply_cs','use_cs')(debug_cs)
conf(read_csshlib) | 33.581633 | 102 | 0.726831 | 550 | 3,291 | 4.194545 | 0.316364 | 0.028609 | 0.039012 | 0.022107 | 0.086693 | 0.070655 | 0.070655 | 0.035544 | 0.035544 | 0 | 0 | 0.006325 | 0.087208 | 3,291 | 98 | 103 | 33.581633 | 0.761651 | 0.041629 | 0 | 0.075269 | 1 | 0.010753 | 0.167566 | 0 | 0 | 0 | 0.003174 | 0 | 0 | 1 | 0.075269 | false | 0.010753 | 0.064516 | 0.010753 | 0.236559 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
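A minimal wscript sketch showing how the tool above is typically consumed; the source file and target name are assumptions made for illustration:

def configure(conf):
    conf.load('cs')

def build(bld):
    # 'gen' picks the output name; apply_cs derives the target type
    # ('exe' vs 'library') from its extension
    bld(features='cs', source='hello.cs', gen='hello.exe')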
0fe195d068b46848f6a4f6f2244542d3be9fe7d3 | 930 | py | Python | tests/test_build.py | polishmatt/swp | e54a8f4ed13a48dd3a385c6d312edd6e4c86724a | [
"MIT"
] | 1 | 2017-02-13T23:09:46.000Z | 2017-02-13T23:09:46.000Z | tests/test_build.py | polishmatt/swp | e54a8f4ed13a48dd3a385c6d312edd6e4c86724a | [
"MIT"
] | 16 | 2016-11-03T02:50:17.000Z | 2017-01-24T04:35:42.000Z | tests/test_build.py | polishmatt/stawp | e54a8f4ed13a48dd3a385c6d312edd6e4c86724a | [
"MIT"
] | null | null | null |
import unittest
import filecmp
from stawp.build import Builder
class TestBuild(unittest.TestCase):

    def build_fixture(self, name, description):
        dest='/tmp/swp'
        source='tests/fixtures/%s/src' % name
        builder = Builder(dist=dest, base=source)
        builder.interpret()
        builder.render()
        cmp = filecmp.dircmp(dest, 'tests/fixtures/%s/dest' % name)
        for attr in ['left_only', 'right_only', 'common_funny', 'diff_files', 'funny_files']:
            self.assertEqual(len(getattr(cmp, attr)), 0, 'Failed ' + description + "\n" + attr + " - " + str(getattr(cmp, attr)))

    def test_build_min_copy(self):
        self.build_fixture('minimum-copy', 'copy file only')

    def test_build_min_page(self):
        self.build_fixture('minimum-page', 'minimum configuration with one page')

    def test_build_default(self):
        self.build_fixture('builder', 'default build behavior')
| 34.444444 | 129 | 0.658065 | 117 | 930 | 5.08547 | 0.487179 | 0.080672 | 0.060504 | 0.10084 | 0.090756 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001353 | 0.205376 | 930 | 26 | 130 | 35.769231 | 0.803789 | 0 | 0 | 0 | 0 | 0 | 0.233836 | 0.046336 | 0 | 0 | 0 | 0 | 0.052632 | 1 | 0.210526 | false | 0 | 0.157895 | 0 | 0.421053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0ff3a9bcd7a1323504d7ab075a98f49049806831 | 832 | py | Python | main.py | irahorecka/craigslist-housing-subscription | 389c325dc30526eaed4c2333f5dd4d60d7939a13 | [
"MIT"
] | null | null | null | main.py | irahorecka/craigslist-housing-subscription | 389c325dc30526eaed4c2333f5dd4d60d7939a13 | [
"MIT"
] | null | null | null | main.py | irahorecka/craigslist-housing-subscription | 389c325dc30526eaed4c2333f5dd4d60d7939a13 | [
"MIT"
] | null | null | null | """
Main file to find and send user Craigslist housing posts.
"""
import time
import users
import posts
import mail
def main():
    """Main app to execute subscription based email notifications."""
    users_json = users.get_users()
    # At start of subscription, drop all content and populate db without sending email
    posts.drop_contents()
    for _ in posts.get(users_json):
        pass
    while True:
        # Sleep for a day (-80 seconds) to fetch posts
        print("sleeping...")
        time.sleep(86320)
        # Get users' json file for every iteration - allows for updates during operation
        users_json = users.get_users()
        user_posts = zip(users_json, posts.get(users_json))
        for user, post in user_posts:
            mail.write_email(user, post)

if __name__ == "__main__":
    main()
| 26 | 88 | 0.661058 | 113 | 832 | 4.690265 | 0.522124 | 0.101887 | 0.067925 | 0.064151 | 0.083019 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011254 | 0.252404 | 832 | 31 | 89 | 26.83871 | 0.840836 | 0.388221 | 0 | 0.111111 | 0 | 0 | 0.038462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0.055556 | 0.222222 | 0 | 0.277778 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
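The magic number in the sleep is just a day minus a small budget for the fetch itself; spelled out as named constants (an illustrative refactor, not in the original):

SECONDS_PER_DAY = 24 * 60 * 60                      # 86400
FETCH_OVERHEAD = 80                                 # seconds reserved for fetching posts
SLEEP_INTERVAL = SECONDS_PER_DAY - FETCH_OVERHEAD   # 86320, as used above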
0ff6fb1c2b8d1844c202ef54247d1d380ce729a2 | 10,377 | py | Python | metron-deployment/packaging/ambari/metron-mpack/src/main/resources/common-services/METRON/CURRENT/package/scripts/rest_commands.py | slab14/metron | 52bd310fcce68dad15eead57f1113092a30d9791 | [
"Apache-2.0"
] | 1 | 2017-02-07T03:31:44.000Z | 2017-02-07T03:31:44.000Z | metron-deployment/packaging/ambari/metron-mpack/src/main/resources/common-services/METRON/CURRENT/package/scripts/rest_commands.py | slab14/metron | 52bd310fcce68dad15eead57f1113092a30d9791 | [
"Apache-2.0"
] | 2 | 2017-06-22T18:03:12.000Z | 2017-06-25T03:51:47.000Z | metron-deployment/packaging/ambari/metron-mpack/src/main/resources/common-services/METRON/CURRENT/package/scripts/rest_commands.py | slab14/metron | 52bd310fcce68dad15eead57f1113092a30d9791 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
"""
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
from datetime import datetime
from resource_management.core.logger import Logger
from resource_management.core.resources.system import Directory, Execute, File
from resource_management.libraries.functions import get_user_call_output
from resource_management.libraries.functions.format import format
from resource_management.libraries.functions.show_logs import show_logs
import metron_service
from metron_security import kinit
# Wrap major operations and functionality in this class
class RestCommands:
    __params = None
    __kafka_configured = False
    __kafka_acl_configured = False
    __hbase_configured = False
    __hbase_acl_configured = False

    def __init__(self, params):
        if params is None:
            raise ValueError("params argument is required for initialization")
        self.__params = params
        self.__kafka_configured = os.path.isfile(self.__params.rest_kafka_configured_flag_file)
        self.__kafka_acl_configured = os.path.isfile(self.__params.rest_kafka_acl_configured_flag_file)
        self.__hbase_configured = os.path.isfile(self.__params.rest_hbase_configured_flag_file)
        self.__hbase_acl_configured = os.path.isfile(self.__params.rest_hbase_acl_configured_flag_file)
        Directory(params.metron_rest_pid_dir,
                  mode=0755,
                  owner=params.metron_user,
                  group=params.metron_group,
                  create_parents=True
                  )
        Directory(params.metron_log_dir,
                  mode=0755,
                  owner=params.metron_user,
                  group=params.metron_group,
                  create_parents=True
                  )

    def __get_topics(self):
        return [self.__params.metron_escalation_topic]

    def is_kafka_configured(self):
        return self.__kafka_configured

    def is_kafka_acl_configured(self):
        return self.__kafka_acl_configured

    def is_hbase_configured(self):
        return self.__hbase_configured

    def is_hbase_acl_configured(self):
        return self.__hbase_acl_configured

    def set_kafka_configured(self):
        metron_service.set_configured(self.__params.metron_user, self.__params.rest_kafka_configured_flag_file, "Setting Kafka configured to True for rest")

    def set_kafka_acl_configured(self):
        metron_service.set_configured(self.__params.metron_user, self.__params.rest_kafka_acl_configured_flag_file, "Setting Kafka ACL configured to True for rest")

    def set_hbase_configured(self):
        metron_service.set_configured(self.__params.metron_user, self.__params.rest_hbase_configured_flag_file, "Setting HBase configured to True for rest")

    def set_hbase_acl_configured(self):
        metron_service.set_configured(self.__params.metron_user, self.__params.rest_hbase_acl_configured_flag_file, "Setting HBase ACL configured to True for rest")

    def init_kafka_topics(self):
        Logger.info('Creating Kafka topics for rest')
        metron_service.init_kafka_topics(self.__params, self.__get_topics())

    def init_kafka_acls(self):
        Logger.info('Creating Kafka ACLs for rest')
        # The following topics must be permissioned for the rest application list operation
        topics = self.__get_topics() + [self.__params.ambari_kafka_service_check_topic, self.__params.consumer_offsets_topic]
        metron_service.init_kafka_acls(self.__params, topics)
        groups = ['metron-rest']
        metron_service.init_kafka_acl_groups(self.__params, groups)

    def start_rest_application(self):
        """
        Start the REST application
        """
        Logger.info('Starting REST application')
        if self.__params.security_enabled:
            kinit(self.__params.kinit_path_local,
                  self.__params.metron_keytab_path,
                  self.__params.metron_principal_name,
                  execute_user=self.__params.metron_user)

        # Get the PID associated with the service
        pid_file = format("{metron_rest_pid_dir}/{metron_rest_pid}")
        pid = get_user_call_output.get_user_call_output(format("cat {pid_file}"), user=self.__params.metron_user, is_checked_call=False)[1]
        process_id_exists_command = format("ls {pid_file} >/dev/null 2>&1 && ps -p {pid} >/dev/null 2>&1")

        # Set the password with env variable instead of param to avoid it showing in ps
        cmd = format((
            "export METRON_JDBC_PASSWORD={metron_jdbc_password!p};"
            "export JAVA_HOME={java_home};"
            "export METRON_REST_CLASSPATH={metron_rest_classpath};"
            "export METRON_INDEX_CP={metron_indexing_classpath};"
            "export METRON_LOG_DIR={metron_log_dir};"
            "export METRON_PID_FILE={pid_file};"
            "export METRON_RA_INDEXING_WRITER={ra_indexing_writer};"
            "{metron_home}/bin/metron-rest.sh;"
            "unset METRON_JDBC_PASSWORD;"
        ))

        Execute(cmd,
                user=self.__params.metron_user,
                logoutput=True,
                not_if=process_id_exists_command,
                timeout=60)
        Logger.info('Done starting REST application')

    def stop_rest_application(self):
        """
        Stop the REST application
        """
        Logger.info('Stopping REST application')

        # Get the pid associated with the service
        pid_file = format("{metron_rest_pid_dir}/{metron_rest_pid}")
        pid = get_user_call_output.get_user_call_output(format("cat {pid_file}"), user=self.__params.metron_user, is_checked_call=False)[1]
        process_id_exists_command = format("ls {pid_file} >/dev/null 2>&1 && ps -p {pid} >/dev/null 2>&1")

        if self.__params.security_enabled:
            kinit(self.__params.kinit_path_local,
                  self.__params.metron_keytab_path,
                  self.__params.metron_principal_name,
                  execute_user=self.__params.metron_user)

        # Politely kill
        kill_cmd = ('kill', format("{pid}"))
        Execute(kill_cmd,
                sudo=True,
                not_if=format("! ({process_id_exists_command})")
                )

        # Violently kill
        hard_kill_cmd = ('kill', '-9', format("{pid}"))
        wait_time = 5
        Execute(hard_kill_cmd,
                not_if=format("! ({process_id_exists_command}) || ( sleep {wait_time} && ! ({process_id_exists_command}) )"),
                sudo=True,
                ignore_failures=True
                )

        try:
            # check if stopped the process, else fail the task
            Execute(format("! ({process_id_exists_command})"),
                    tries=20,
                    try_sleep=3,
                    )
        except:
            show_logs(self.__params.metron_log_dir, self.__params.metron_user)
            raise

        File(pid_file, action="delete")
        Logger.info('Done stopping REST application')

    def restart_rest_application(self, env):
        """
        Restart the REST application
        :param env: Environment
        """
        Logger.info('Restarting the REST application')
        self.stop_rest_application()
        self.start_rest_application()
        Logger.info('Done restarting the REST application')

    def status_rest_application(self, env):
        """
        Performs a status check for the REST application
        :param env: Environment
        """
        Logger.info('Status check the REST application')
        metron_service.check_http(
            self.__params.metron_rest_host,
            self.__params.metron_rest_port,
            self.__params.metron_user)

    def create_hbase_tables(self):
        Logger.info("Creating HBase Tables")
        metron_service.create_hbase_table(self.__params,
                                          self.__params.user_settings_hbase_table,
                                          self.__params.user_settings_hbase_cf)
        Logger.info("Done creating HBase Tables")
        self.set_hbase_configured()

    def set_hbase_acls(self):
        Logger.info("Setting HBase ACLs")
        if self.__params.security_enabled:
            kinit(self.__params.kinit_path_local,
                  self.__params.hbase_keytab_path,
                  self.__params.hbase_principal_name,
                  execute_user=self.__params.hbase_user)
        cmd = "echo \"grant '{0}', 'RW', '{1}'\" | hbase shell -n"
        add_rest_acl_cmd = cmd.format(self.__params.metron_user, self.__params.user_settings_hbase_table)
        Execute(add_rest_acl_cmd,
                tries=3,
                try_sleep=5,
                logoutput=False,
                path='/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin',
                user=self.__params.hbase_user
                )
        Logger.info("Done setting HBase ACLs")
        self.set_hbase_acl_configured()

    def service_check(self, env):
        """
        Performs a service check for the REST application
        :param env: Environment
        """
        Logger.info('Checking connectivity to REST application')
        metron_service.check_http(
            self.__params.metron_rest_host,
            self.__params.metron_rest_port,
            self.__params.metron_user)

        Logger.info('Checking Kafka topics for the REST application')
        metron_service.check_kafka_topics(self.__params, self.__get_topics())

        if self.__params.security_enabled:
            Logger.info('Checking Kafka topic ACL for the REST application')
            metron_service.check_kafka_acls(self.__params, self.__get_topics())

        Logger.info("REST application service check completed successfully")
| 40.694118 | 164 | 0.668979 | 1,272 | 10,377 | 5.081761 | 0.194182 | 0.085087 | 0.056931 | 0.040223 | 0.47401 | 0.377166 | 0.358447 | 0.317915 | 0.247215 | 0.247215 | 0 | 0.004233 | 0.248723 | 10,377 | 254 | 165 | 40.854331 | 0.82491 | 0.037776 | 0 | 0.218935 | 0 | 0.011834 | 0.184467 | 0.063039 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.011834 | 0.053254 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
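The class above is thin plumbing around Ambari's resource_management primitives. A sketch of how its lifecycle methods are typically driven from an Ambari service script; `params` stands for the params module Ambari injects and is not defined in this file:

commands = RestCommands(params)

if not commands.is_kafka_configured():
    commands.init_kafka_topics()
    commands.set_kafka_configured()

commands.start_rest_application()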
0ff9a9bae36874fc3a41a605b6109b76794154d0 | 991 | py | Python | headerstest.py | mzeinstra/openAnalyser | 859156117948eb15283c348e6f6025cae9352279 | [
"MIT"
] | 1 | 2021-06-28T09:39:43.000Z | 2021-06-28T09:39:43.000Z | headerstest.py | mzeinstra/openAnalyser | 859156117948eb15283c348e6f6025cae9352279 | [
"MIT"
] | null | null | null | headerstest.py | mzeinstra/openAnalyser | 859156117948eb15283c348e6f6025cae9352279 | [
"MIT"
] | null | null | null | from urllib.parse import urlparse
from bs4 import BeautifulSoup
import urllib3
from socket import timeout
import tldextract
import re
import traceback
import sys
import logging
import socket
import threading
from time import sleep
from collector import Collector
from checker import Checker
http = urllib3.PoolManager()
page = http.request('GET', "http://opennederland.nl", timeout=2)
print(page)
print("-------------------------------------------------------")
print(page.headers)
print("-------------------------------------------------------")
print(page.headers.keys())
print("-------------------------------------------------------")
print(page.headers.items())
print("-------------------------------------------------------")
t = page.headers.items()
print("-------------------------------------------------------")
d = dict((x, y) for x, y in t)
print (d)
print("-------------------------------------------------------")
if "X-Powered-By" in d:
print ("print " + d['X-Powered-By']) | 30.030303 | 64 | 0.491423 | 99 | 991 | 4.919192 | 0.424242 | 0.073922 | 0.086242 | 0.129363 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004405 | 0.083754 | 991 | 33 | 65 | 30.030303 | 0.531938 | 0 | 0 | 0.193548 | 0 | 0 | 0.389113 | 0.332661 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.451613 | 0 | 0.451613 | 0.387097 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
0ffb6d131a75f4d5e786c2e48eaefbaf6ae88520 | 326 | py | Python | app/blog/blog_entries/migrations/0002_remove_article_for_adult.py | Risoko/DRF-Auth-With-Blog-Entries | ba903d1eba1d1f774bfa3782d51d292430d84dbd | [
"MIT"
] | null | null | null | app/blog/blog_entries/migrations/0002_remove_article_for_adult.py | Risoko/DRF-Auth-With-Blog-Entries | ba903d1eba1d1f774bfa3782d51d292430d84dbd | [
"MIT"
] | null | null | null | app/blog/blog_entries/migrations/0002_remove_article_for_adult.py | Risoko/DRF-Auth-With-Blog-Entries | ba903d1eba1d1f774bfa3782d51d292430d84dbd | [
"MIT"
] | null | null | null | # Generated by Django 3.0.1 on 2020-01-08 08:47
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('blog_entries', '0001_initial'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='article',
            name='for_adult',
        ),
    ]
| 18.111111 | 47 | 0.588957 | 35 | 326 | 5.371429 | 0.828571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082969 | 0.297546 | 326 | 17 | 48 | 19.176471 | 0.737991 | 0.138037 | 0 | 0 | 1 | 0 | 0.143369 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0ffb9c15a6273824c51ed946ffd5452b3519ceb0 | 380 | py | Python | leetcode/easy/rotated-digits.py | vtemian/interviews-prep | ddef96b5ecc699a590376a892a804c143fe18034 | [
"Apache-2.0"
] | 8 | 2019-05-14T12:50:29.000Z | 2022-03-01T09:08:27.000Z | leetcode/easy/rotated-digits.py | vtemian/interviews-prep | ddef96b5ecc699a590376a892a804c143fe18034 | [
"Apache-2.0"
] | 46 | 2019-03-24T20:59:29.000Z | 2019-04-09T16:28:43.000Z | leetcode/easy/rotated-digits.py | vtemian/interviews-prep | ddef96b5ecc699a590376a892a804c143fe18034 | [
"Apache-2.0"
] | 1 | 2022-01-28T12:46:29.000Z | 2022-01-28T12:46:29.000Z | class Solution:
    def rotatedDigits(self, N: 'int') -> 'int':
        result = 0
        for nr in range(1, N + 1):
            ok = False
            for digit in str(nr):
                if digit in '347':
                    break
                if digit in '6952':
                    ok = True
            else:
                result += int(ok)
        return result
| 21.111111 | 47 | 0.386842 | 41 | 380 | 3.585366 | 0.609756 | 0.142857 | 0.122449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055249 | 0.523684 | 380 | 17 | 48 | 22.352941 | 0.756906 | 0 | 0 | 0 | 0 | 0 | 0.034211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
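The inner for-else carries the logic here: the else clause runs only when no digit from '347' forced a break (those digits are invalid after rotation), and ok records whether any digit actually changes under rotation (2, 5, 6 or 9). A quick sanity check of the solution above:

assert Solution().rotatedDigits(10) == 4  # the "good" numbers up to 10 are 2, 5, 6, 9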
0ffecfc820d549bb4fa3335755a6e86a03bd358c | 2,584 | py | Python | src/process_particle_sys.py | faycalki/tainted-paths | 81cecf6c1fba903ec3b8043e22652d222892609d | [
"MIT"
] | 4 | 2019-09-26T21:34:32.000Z | 2021-11-18T19:31:15.000Z | src/process_particle_sys.py | faycalki/tainted-paths | 81cecf6c1fba903ec3b8043e22652d222892609d | [
"MIT"
] | null | null | null | src/process_particle_sys.py | faycalki/tainted-paths | 81cecf6c1fba903ec3b8043e22652d222892609d | [
"MIT"
] | null | null | null | import sys
sys.dont_write_bytecode = True
from module_info import *
from module_particle_systems import *
from process_common import *
# Lav's export_dir tweak
export_dir = '%s/' % export_dir.replace('\\', '/').rstrip('/')
id_pos = 0
flags_pos = 1
mesh_name_pos = 2
num_particles_pos = 3
life_pos = 4
damping_pos = 5
gravity_pos = 6
turb_size_pos = 7
turb_wt_pos = 8
alpha_key_pos = 9
red_key_pos = alpha_key_pos + 2
green_key_pos = red_key_pos + 2
blue_key_pos = green_key_pos + 2
scale_key_pos = blue_key_pos + 2
emit_box_size_pos = scale_key_pos + 2
emit_velocity_pos = emit_box_size_pos + 1
emit_rndmness_pos = emit_velocity_pos + 1
angular_speed_pos = emit_rndmness_pos + 1
angular_damping_pos = angular_speed_pos + 1
def save_psys_keys(ofile, keys1, keys2):
    ofile.write("%f %f %f %f\n"%(keys1[0], keys1[1], keys2[0], keys2[1]))

def save_particle_systems():
    ofile = open(export_dir + "particle_systems.txt","w")
    ofile.write("particle_systemsfile version 1\n")
    ofile.write("%d\n"%len(particle_systems))
    for psys in particle_systems:
        ofile.write("psys_%s %d %s "%(psys[0], psys[1], psys[2]))
        ofile.write("%d %f %f %f %f %f \n"%(psys[num_particles_pos], psys[life_pos], psys[damping_pos], psys[gravity_pos], psys[turb_size_pos], psys[turb_wt_pos]))
        save_psys_keys(ofile,psys[alpha_key_pos],psys[alpha_key_pos+1])
        save_psys_keys(ofile,psys[red_key_pos],psys[red_key_pos+1])
        save_psys_keys(ofile,psys[green_key_pos],psys[green_key_pos+1])
        save_psys_keys(ofile,psys[blue_key_pos],psys[blue_key_pos+1])
        save_psys_keys(ofile,psys[scale_key_pos],psys[scale_key_pos+1])
        ofile.write("%f %f %f "%(psys[emit_box_size_pos][0],psys[emit_box_size_pos][1],psys[emit_box_size_pos][2]))
        ofile.write("%f %f %f "%(psys[emit_velocity_pos][0],psys[emit_velocity_pos][1],psys[emit_velocity_pos][2]))
        ofile.write("%f \n"%(psys[emit_rndmness_pos]))
        if (len(psys) >= (angular_speed_pos + 1)):
            ofile.write("%f "%(psys[angular_speed_pos]))
        else:
            ofile.write("0.0 ")
        if (len(psys) >= (angular_damping_pos + 1)):
            ofile.write("%f "%(psys[angular_damping_pos]))
        else:
            ofile.write("0.0 ")
        ofile.write("\n")
    ofile.close()

def save_python_header():
    ofile = open("./ID_particle_systems.py","w")
    for i_particle_system in xrange(len(particle_systems)):
        ofile.write("psys_%s = %d\n"%(particle_systems[i_particle_system][0],i_particle_system))
    ofile.close()

print "Exporting particle data..."
save_particle_systems()
save_python_header()
| 37.449275 | 159 | 0.700464 | 433 | 2,584 | 3.82679 | 0.196305 | 0.07242 | 0.012674 | 0.061557 | 0.258902 | 0.184671 | 0.161738 | 0.067592 | 0 | 0 | 0 | 0.023203 | 0.149381 | 2,584 | 68 | 160 | 38 | 0.730664 | 0.008514 | 0 | 0.1 | 0 | 0 | 0.086719 | 0.009375 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.066667 | null | null | 0.016667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ba050f61eda5b2d821dc6b5f12862b36a4acfe44 | 8,322 | py | Python | src/Tools/CodeGenerator/Plugins/SharedLibraryTestsPluginImpl/ScalarTypeInfos.py | Bhaskers-Blu-Org2/FeaturizersLibrary | 229ae38ea233bfb02a6ff92ec3a67c1751c58005 | [
"MIT"
] | 15 | 2019-12-14T07:54:18.000Z | 2021-03-14T14:53:28.000Z | src/Tools/CodeGenerator/Plugins/SharedLibraryTestsPluginImpl/ScalarTypeInfos.py | Bhaskers-Blu-Org2/FeaturizersLibrary | 229ae38ea233bfb02a6ff92ec3a67c1751c58005 | [
"MIT"
] | 30 | 2019-12-03T20:58:56.000Z | 2020-04-21T23:34:39.000Z | src/Tools/CodeGenerator/Plugins/SharedLibraryTestsPluginImpl/ScalarTypeInfos.py | microsoft/FeaturizersLibrary | 229ae38ea233bfb02a6ff92ec3a67c1751c58005 | [
"MIT"
] | 13 | 2020-01-23T00:18:47.000Z | 2021-10-04T17:46:45.000Z | # ----------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License
# ----------------------------------------------------------------------
"""Contains the scalar type info objects"""
import os
import textwrap
import CommonEnvironment
from CommonEnvironment import Interface
from Plugins.SharedLibraryTestsPluginImpl.TypeInfo import TypeInfo
# ----------------------------------------------------------------------
_script_fullpath = CommonEnvironment.ThisFullpath()
_script_dir, _script_name = os.path.split(_script_fullpath)
# ----------------------------------------------------------------------
# ----------------------------------------------------------------------
class _ScalarTypeInfo(TypeInfo):
    """Functionality common to all scalars"""

    # ----------------------------------------------------------------------
    # |
    # |  Public Properties
    # |
    # ----------------------------------------------------------------------
    @Interface.abstractproperty
    def CType(self):
        """C type"""
        raise Exception("Abstract property")

    # ----------------------------------------------------------------------
    # |
    # |  Public Methods
    # |
    # ----------------------------------------------------------------------
    def __init__(
        self,
        *args,
        member_type=None,
        **kwargs
    ):
        if member_type is None:
            return

        super(_ScalarTypeInfo, self).__init__(*args, **kwargs)
        self.RequiresOptionalType = self.IsOptional and self.TypeName not in ["float", "double"]

    # ----------------------------------------------------------------------
    @Interface.override
    def GetTransformInputArgs(
        self,
        input_name="input",
    ):
        if self.RequiresOptionalType:
            return "Microsoft::Featurizer::Traits<typename Microsoft::Featurizer::Traits<{cpp_type}>::nullable_type>::IsNull({input_name}) ? nullptr : &Microsoft::Featurizer::Traits<typename Microsoft::Featurizer::Traits<{cpp_type}>::nullable_type>::GetNullableValue({input_name})".format(
                cpp_type=self.CppType,
                input_name=input_name,
            )

        return input_name

    # ----------------------------------------------------------------------
    @Interface.override
    def GetTransformInputBufferArgs(
        self,
        input_name='input',
    ):
        if self.RequiresOptionalType:
            raise NotImplementedError("Not implemented yet")

        return "{name}.data(), {name}.size()".format(
            name=input_name,
        )

    # ----------------------------------------------------------------------
    @Interface.override
    def GetOutputInfo(
        self,
        invocation_template,
        result_name="result",
    ):
        result_name = "{}_value".format(result_name)

        if self.RequiresOptionalType:
            vector_type = "nonstd::optional<{}>".format(self.CppType)
            local_type = "{} *".format(self.CppType)
            statement = "{name} ? std::move(*{name}) : nonstd::optional<{type}>()".format(
                type=self.CppType,
                name=result_name,
            )
        else:
            vector_type = self.CppType
            local_type = self.CppType

            if self.TypeName == "bool":
                # vector<bool> doesn't support `emplace_back` on older compilers
                statement = result_name
            else:
                statement = "std::move({})".format(result_name)

        return self.Result(
            vector_type,
            [self.Type(local_type, result_name)],
            invocation_template.format(statement),
        )

# ----------------------------------------------------------------------
@Interface.staticderived
class Int8TypeInfo(_ScalarTypeInfo):
    TypeName = Interface.DerivedProperty("int8")
    CppType = Interface.DerivedProperty("std::int8_t")
    CType = Interface.DerivedProperty("int8_t")

# ----------------------------------------------------------------------
@Interface.staticderived
class Int16TypeInfo(_ScalarTypeInfo):
    TypeName = Interface.DerivedProperty("int16")
    CppType = Interface.DerivedProperty("std::int16_t")
    CType = Interface.DerivedProperty("int16_t")

# ----------------------------------------------------------------------
@Interface.staticderived
class Int32TypeInfo(_ScalarTypeInfo):
    TypeName = Interface.DerivedProperty("int32")
    CppType = Interface.DerivedProperty("std::int32_t")
    CType = Interface.DerivedProperty("int32_t")

# ----------------------------------------------------------------------
@Interface.staticderived
class Int64TypeInfo(_ScalarTypeInfo):
    TypeName = Interface.DerivedProperty("int64")
    CppType = Interface.DerivedProperty("std::int64_t")
    CType = Interface.DerivedProperty("int64_t")

# ----------------------------------------------------------------------
@Interface.staticderived
class UInt8TypeInfo(_ScalarTypeInfo):
    TypeName = Interface.DerivedProperty("uint8")
    CppType = Interface.DerivedProperty("std::uint8_t")
    CType = Interface.DerivedProperty("uint8_t")

# ----------------------------------------------------------------------
@Interface.staticderived
class UInt16TypeInfo(_ScalarTypeInfo):
    TypeName = Interface.DerivedProperty("uint16")
    CppType = Interface.DerivedProperty("std::uint16_t")
    CType = Interface.DerivedProperty("uint16_t")

# ----------------------------------------------------------------------
@Interface.staticderived
class UInt32TypeInfo(_ScalarTypeInfo):
    TypeName = Interface.DerivedProperty("uint32")
    CppType = Interface.DerivedProperty("std::uint32_t")
    CType = Interface.DerivedProperty("uint32_t")

# ----------------------------------------------------------------------
@Interface.staticderived
class UInt64TypeInfo(_ScalarTypeInfo):
    TypeName = Interface.DerivedProperty("uint64")
    CppType = Interface.DerivedProperty("std::uint64_t")
    CType = Interface.DerivedProperty("uint64_t")

# ----------------------------------------------------------------------
@Interface.staticderived
class FloatTypeInfo(_ScalarTypeInfo):
    TypeName = Interface.DerivedProperty("float")
    CppType = Interface.DerivedProperty("std::float_t")
    CType = Interface.DerivedProperty("float")

# ----------------------------------------------------------------------
@Interface.staticderived
class DoubleTypeInfo(_ScalarTypeInfo):
    TypeName = Interface.DerivedProperty("double")
    CppType = Interface.DerivedProperty("std::double_t")
    CType = Interface.DerivedProperty("double")

# ----------------------------------------------------------------------
@Interface.staticderived
class BoolTypeInfo(_ScalarTypeInfo):
    TypeName = Interface.DerivedProperty("bool")
    CppType = Interface.DerivedProperty("bool")
    CType = Interface.DerivedProperty("bool")
| 41.819095 | 290 | 0.43403 | 488 | 8,322 | 7.239754 | 0.262295 | 0.224172 | 0.084065 | 0.143221 | 0.084914 | 0.068497 | 0.068497 | 0.043589 | 0.043589 | 0.043589 | 0 | 0.009601 | 0.299087 | 8,322 | 198 | 291 | 42.030303 | 0.596091 | 0.229993 | 0 | 0.219512 | 0 | 0.00813 | 0.11569 | 0.043972 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04065 | false | 0 | 0.04065 | 0 | 0.487805 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ba07d0c821ac43a4b143f06257b21b47e9c896a7 | 4,061 | py | Python | src/dynamodb_encryption_sdk/materials/__init__.py | ajw-aws/aws-dynamodb-encryption-python | 4374ad5d87c7ea66c249c5f458cac6ff4465d437 | [
"Apache-2.0"
] | null | null | null | src/dynamodb_encryption_sdk/materials/__init__.py | ajw-aws/aws-dynamodb-encryption-python | 4374ad5d87c7ea66c249c5f458cac6ff4465d437 | [
"Apache-2.0"
] | 1 | 2021-03-20T05:42:35.000Z | 2021-03-20T05:42:35.000Z | src/dynamodb_encryption_sdk/materials/__init__.py | gwsu2008/aws-dynamodb-encryption-python | 80465a4b62b1198d519b0ba96da1643eb8af2c9a | [
"Apache-2.0"
] | null | null | null | # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Cryptographic materials are containers that provide delegated keys for cryptographic operations."""
import abc
import six
from dynamodb_encryption_sdk.delegated_keys import DelegatedKey # noqa pylint: disable=unused-import
try:  # Python 3.5.0 and 3.5.1 have incompatible typing modules
    from typing import Dict, Text  # noqa pylint: disable=unused-import
    from mypy_extensions import NoReturn  # noqa pylint: disable=unused-import
except ImportError:  # pragma: no cover
    # We only actually need these imports when running the mypy checks
    pass

__all__ = ("CryptographicMaterials", "EncryptionMaterials", "DecryptionMaterials")

@six.add_metaclass(abc.ABCMeta)
class CryptographicMaterials(object):
    """Base class for all cryptographic materials."""

    @abc.abstractproperty
    def material_description(self):
        # type: () -> Dict[Text, Text]
        """Material description to use with these cryptographic materials.

        :returns: Material description
        :rtype: dict
        """

    @abc.abstractproperty
    def encryption_key(self):
        # type: () -> DelegatedKey
        """Delegated key used for encrypting attributes.

        :returns: Encryption key
        :rtype: DelegatedKey
        """

    @abc.abstractproperty
    def decryption_key(self):
        # type: () -> DelegatedKey
        """Delegated key used for decrypting attributes.

        :returns: Decryption key
        :rtype: DelegatedKey
        """

    @abc.abstractproperty
    def signing_key(self):
        # type: () -> DelegatedKey
        """Delegated key used for calculating digital signatures.

        :returns: Signing key
        :rtype: DelegatedKey
        """

    @abc.abstractproperty
    def verification_key(self):
        # type: () -> DelegatedKey
        """Delegated key used for verifying digital signatures.

        :returns: Verification key
        :rtype: DelegatedKey
        """

class EncryptionMaterials(CryptographicMaterials):
    """Base class for all encryption materials."""

    @property
    def decryption_key(self):
        # type: () -> NoReturn
        """Encryption materials do not provide decryption keys.

        :raises NotImplementedError: because encryption materials do not contain decryption keys
        """
        raise NotImplementedError("Encryption materials do not provide decryption keys.")

    @property
    def verification_key(self):
        # type: () -> NoReturn
        """Encryption materials do not provide verification keys.

        :raises NotImplementedError: because encryption materials do not contain verification keys
        """
        raise NotImplementedError("Encryption materials do not provide verification keys.")

class DecryptionMaterials(CryptographicMaterials):
    """Base class for all decryption materials."""

    @property
    def encryption_key(self):
        # type: () -> NoReturn
        """Decryption materials do not provide encryption keys.

        :raises NotImplementedError: because decryption materials do not contain encryption keys
        """
        raise NotImplementedError("Decryption materials do not provide encryption keys.")

    @property
    def signing_key(self):
        # type: () -> NoReturn
        """Decryption materials do not provide signing keys.

        :raises NotImplementedError: because decryption materials do not contain signing keys
        """
        raise NotImplementedError("Decryption materials do not provide signing keys.")
| 33.01626 | 102 | 0.690717 | 436 | 4,061 | 6.392202 | 0.334862 | 0.047363 | 0.06028 | 0.06028 | 0.482957 | 0.391819 | 0.346609 | 0.312881 | 0.167923 | 0 | 0 | 0.004469 | 0.228515 | 4,061 | 122 | 103 | 33.286885 | 0.885094 | 0.561931 | 0 | 0.485714 | 0 | 0 | 0.178715 | 0.014726 | 0 | 0 | 0 | 0 | 0 | 1 | 0.257143 | false | 0.028571 | 0.171429 | 0 | 0.514286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
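The three base classes form a small contract: subclasses supply delegated keys through the abstract properties, and the Encryption/Decryption variants pre-fail the keys that do not apply to them. A sketch of a concrete provider; the class name and the way keys are stored are illustrative assumptions, not part of this module:

class StaticEncryptionMaterials(EncryptionMaterials):
    """Encryption materials backed by fixed delegated keys (illustrative)."""

    def __init__(self, encryption_key, signing_key, material_description=None):
        self._encryption_key = encryption_key  # a DelegatedKey instance
        self._signing_key = signing_key        # a DelegatedKey instance
        self._material_description = material_description or {}

    @property
    def material_description(self):
        return self._material_description

    @property
    def encryption_key(self):
        return self._encryption_key

    @property
    def signing_key(self):
        return self._signing_key

# decryption_key and verification_key are inherited from EncryptionMaterials
# and correctly raise NotImplementedError for this materials type.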
ba0acde2a7dda4e8e83ad9fe76009f222b95e173 | 1,011 | py | Python | binascii_.py | hlovatt/PyBoardTypeshedGenerator | 1d133cab16ea5d558b03175e6fa48b4a23b76136 | [
"MIT"
] | 5 | 2020-07-26T08:48:39.000Z | 2021-09-13T19:19:37.000Z | binascii_.py | hlovatt/PyBoardTypeshedGenerator | 1d133cab16ea5d558b03175e6fa48b4a23b76136 | [
"MIT"
] | null | null | null | binascii_.py | hlovatt/PyBoardTypeshedGenerator | 1d133cab16ea5d558b03175e6fa48b4a23b76136 | [
"MIT"
] | 1 | 2020-11-07T22:37:44.000Z | 2020-11-07T22:37:44.000Z | """
Generate `pyi` from corresponding `rst` docs.
"""
import rst
from rst2pyi import RST2PyI
__author__ = rst.__author__
__copyright__ = rst.__copyright__
__license__ = rst.__license__
__version__ = "7.2.0" # Version set by https://github.com/hlovatt/tag2ver
def binascii(shed: RST2PyI) -> None:
    shed.module(name="binascii", old="binary/ASCII conversions", end=r"Functions")
    shed.def_(
        old=r".. function:: hexlify(data, [sep])",
        new="def hexlify(data: bytes, sep: str | bytes = ..., /) -> bytes",
        indent=0,
    )
    shed.def_(
        old=r".. function:: unhexlify(data)",
        new="def unhexlify(data: str | bytes, /) -> bytes",
        indent=0,
    )
    shed.def_(
        old=r".. function:: a2b_base64(data)",
        new="def a2b_base64(data: str | bytes, /) -> bytes",
        indent=0,
    )
    shed.def_(
        old=r".. function:: b2a_base64(data)",
        new="def b2a_base64(data: bytes, /) -> bytes",
        indent=0,
    )
    shed.write(u_also=True)
| 27.324324 | 82 | 0.593472 | 124 | 1,011 | 4.540323 | 0.41129 | 0.049734 | 0.071048 | 0.078153 | 0.293073 | 0.222025 | 0.222025 | 0.222025 | 0.222025 | 0.222025 | 0 | 0.02987 | 0.238378 | 1,011 | 36 | 83 | 28.083333 | 0.701299 | 0.094955 | 0 | 0.275862 | 1 | 0 | 0.393605 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.068966 | 0 | 0.103448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ba0f201ff461642700e5183d5d5d537e2f37ac6f | 332,638 | py | Python | kvmagent/kvmagent/plugins/vm_plugin.py | zstackio/zstack-utility | 919d686d46c68836cbcad51ab0b8bf53bc88abda | [
"ECL-2.0",
"Apache-2.0"
] | 55 | 2017-02-10T07:55:21.000Z | 2021-09-01T00:59:36.000Z | kvmagent/kvmagent/plugins/vm_plugin.py | zstackio/zstack-utility | 919d686d46c68836cbcad51ab0b8bf53bc88abda | [
"ECL-2.0",
"Apache-2.0"
] | 106 | 2017-02-13T09:58:27.000Z | 2022-02-15T09:51:48.000Z | kvmagent/kvmagent/plugins/vm_plugin.py | zstackio/zstack-utility | 919d686d46c68836cbcad51ab0b8bf53bc88abda | [
"ECL-2.0",
"Apache-2.0"
] | 68 | 2017-02-13T11:02:01.000Z | 2021-12-16T11:02:01.000Z | '''
@author: Frank
'''
import contextlib
import os.path
import tempfile
import time
import datetime
import traceback
import xml.etree.ElementTree as etree
import re
import platform
import netaddr
import uuid
import shutil
import simplejson
import base64
import uuid
import json
import socket
from signal import SIGKILL
import syslog
import threading
import libvirt
import xml.dom.minidom as minidom
#from typing import List, Any, Union
from distutils.version import LooseVersion
import zstacklib.utils.ip as ip
import zstacklib.utils.ebtables as ebtables
import zstacklib.utils.iptables as iptables
import zstacklib.utils.lock as lock
from kvmagent import kvmagent
from kvmagent.plugins.baremetal_v2_gateway_agent import \
BaremetalV2GatewayAgentPlugin as BmV2GwAgent
from kvmagent.plugins.bmv2_gateway_agent import utils as bm_utils
from kvmagent.plugins.imagestore import ImageStoreClient
from zstacklib.utils import bash, plugin
from zstacklib.utils.bash import in_bash
from zstacklib.utils import jsonobject
from zstacklib.utils import lvm
from zstacklib.utils import shell
from zstacklib.utils import uuidhelper
from zstacklib.utils import xmlobject
from zstacklib.utils import misc
from zstacklib.utils import qemu_img
from zstacklib.utils import ebtables
from zstacklib.utils import vm_operator
from zstacklib.utils import pci
from zstacklib.utils.report import *
from zstacklib.utils.vm_plugin_queue_singleton import VmPluginQueueSingleton
from zstacklib.utils.libvirt_event_manager_singleton import LibvirtEventManager
from zstacklib.utils.libvirt_event_manager_singleton import LibvirtEventManagerSingleton
from distutils.version import LooseVersion
logger = log.get_logger(__name__)
HOST_ARCH = platform.machine()
DIST_NAME = platform.dist()[0]
ZS_XML_NAMESPACE = 'http://zstack.org'
etree.register_namespace('zs', ZS_XML_NAMESPACE)
GUEST_TOOLS_ISO_PATH = "/var/lib/zstack/guesttools/GuestTools.iso"
QMP_SOCKET_PATH = "/var/lib/libvirt/qemu/zstack"
PCI_ROM_PATH = "/var/lib/zstack/pcirom"
class RetryException(Exception):
    pass

class NicTO(object):
    def __init__(self):
        self.mac = None
        self.bridgeName = None
        self.deviceId = None

class RemoteStorageFactory(object):
    @staticmethod
    def get_remote_storage(cmd):
        if cmd.storageInfo and cmd.storageInfo.type == 'nfs':
            return NfsRemoteStorage(cmd)
        else:
            return SshfsRemoteStorage(cmd)

class RemoteStorage(object):
    def __init__(self, cmd):
        self.mount_point = tempfile.mkdtemp(prefix="zs-backup")

    def mount(self):
        raise Exception('function mount not be implemented')

    def umount(self):
        raise Exception('function umount not be implemented')

    def clean(self):
        linux.rmdir_if_empty(self.mount_point)

class NfsRemoteStorage(RemoteStorage):
    def __init__(self, cmd):
        super(NfsRemoteStorage, self).__init__(cmd)
        self.options = cmd.storageInfo.options
        self.url = cmd.storageInfo.url
        relative_work_dir = cmd.uploadDir.replace(os.path.normpath(cmd.bsPath), '').lstrip(os.path.sep)
        self.local_work_dir = os.path.join(self.mount_point, relative_work_dir)
        self.remote_work_dir = os.path.join(self.url, relative_work_dir)

    def mount(self):
        linux.mount(self.url, self.mount_point, self.options)

    def umount(self):
        if linux.is_mounted(path=self.mount_point):
            linux.umount(self.mount_point)

class SshfsRemoteStorage(RemoteStorage):
    def __init__(self, cmd):
        super(SshfsRemoteStorage, self).__init__(cmd)
        self.bandwidth = cmd.networkWriteBandwidth
        self.username = cmd.username
        self.hostname = cmd.hostname
        self.port = cmd.sshPort
        self.password = cmd.password
        self.dst_dir = cmd.uploadDir
        self.vm_uuid = cmd.vmUuid
        self.remote_work_dir = cmd.uploadDir
        self.local_work_dir = self.mount_point

    def mount(self):
        if 0 != linux.sshfs_mount_with_vm_uuid(self.vm_uuid, self.username, self.hostname, self.port,
                                               self.password, self.dst_dir, self.mount_point, self.bandwidth):
            raise kvmagent.KvmError("failed to prepare backup space for [vm:%s]" % self.vm_uuid)

    def umount(self):
        for i in xrange(6):
            if linux.fumount(self.mount_point, 5) == 0:
                break
            else:
                time.sleep(5)

class StartVmCmd(kvmagent.AgentCommand):
    @log.sensitive_fields("consolePassword")
    def __init__(self):
        super(StartVmCmd, self).__init__()
        self.vmInstanceUuid = None
        self.vmName = None
        self.memory = None
        self.cpuNum = None
        self.cpuSpeed = None
        self.bootDev = None
        self.rootVolume = None
        self.dataVolumes = []
        self.cacheVolumes = []
        self.isoPath = None
        self.nics = []
        self.timeout = None
        self.dataIsoPaths = None
        self.addons = None
        self.useBootMenu = True
        self.vmCpuModel = None
        self.emulateHyperV = False
        self.additionalQmp = True
        self.isApplianceVm = False
        self.systemSerialNumber = None
        self.bootMode = None
        self.consolePassword = None

class StartVmResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(StartVmResponse, self).__init__()

class PciAddressInfo():
    def __init__(self):
        self.type = None
        self.domain = None
        self.bus = None
        self.slot = None
        self.function = None

class AttchNicResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(AttchNicResponse, self).__init__()
        self.pciAddress = PciAddressInfo()

class GetVncPortCmd(kvmagent.AgentCommand):
    def __init__(self):
        super(GetVncPortCmd, self).__init__()
        self.vmUuid = None

class GetVncPortResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(GetVncPortResponse, self).__init__()
        self.port = None
        self.protocol = None
        self.vncPort = None
        self.spicePort = None
        self.spiceTlsPort = None
class ChangeCpuMemResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(ChangeCpuMemResponse, self).__init__()
        self.cpuNum = None
        self.memorySize = None
        self.vmUuid = None
class IncreaseCpuResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(IncreaseCpuResponse, self).__init__()
        self.cpuNum = None
        self.vmUuid = None

class IncreaseMemoryResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(IncreaseMemoryResponse, self).__init__()
        self.memorySize = None
        self.vmUuid = None

class StopVmCmd(kvmagent.AgentCommand):
    def __init__(self):
        super(StopVmCmd, self).__init__()
        self.uuid = None
        self.timeout = None

class StopVmResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(StopVmResponse, self).__init__()

class PauseVmCmd(kvmagent.AgentCommand):
    def __init__(self):
        super(PauseVmCmd, self).__init__()
        self.uuid = None
        self.timeout = None

class PauseVmResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(PauseVmResponse, self).__init__()

class ResumeVmCmd(kvmagent.AgentCommand):
    def __init__(self):
        super(ResumeVmCmd, self).__init__()
        self.uuid = None
        self.timeout = None

class ResumeVmResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(ResumeVmResponse, self).__init__()

class RebootVmCmd(kvmagent.AgentCommand):
    def __init__(self):
        super(RebootVmCmd, self).__init__()
        self.uuid = None
        self.timeout = None

class RebootVmResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(RebootVmResponse, self).__init__()

class DestroyVmCmd(kvmagent.AgentCommand):
    def __init__(self):
        super(DestroyVmCmd, self).__init__()
        self.uuid = None

class DestroyVmResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(DestroyVmResponse, self).__init__()

class VmSyncCmd(kvmagent.AgentCommand):
    def __init__(self):
        super(VmSyncCmd, self).__init__()

class VmSyncResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(VmSyncResponse, self).__init__()
        self.states = None
        self.vmInShutdowns = None

class AttachDataVolumeCmd(kvmagent.AgentCommand):
    def __init__(self):
        super(AttachDataVolumeCmd, self).__init__()
        self.volume = None
        self.uuid = None

class AttachDataVolumeResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(AttachDataVolumeResponse, self).__init__()

class DetachDataVolumeCmd(kvmagent.AgentCommand):
    def __init__(self):
        super(DetachDataVolumeCmd, self).__init__()
        self.volume = None
        self.uuid = None

class DetachDataVolumeResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(DetachDataVolumeResponse, self).__init__()

class MigrateVmResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(MigrateVmResponse, self).__init__()

class TakeSnapshotResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(TakeSnapshotResponse, self).__init__()
        self.newVolumeInstallPath = None
        self.snapshotInstallPath = None
        self.size = None

class TakeVolumeBackupCommand(kvmagent.AgentCommand):
    @log.sensitive_fields("password")
    def __init__(self):
        super(TakeVolumeBackupCommand, self).__init__()
        self.hostname = None
        self.username = None
        self.password = None
        self.sshPort = 22
        self.bsPath = None
        self.uploadDir = None
        self.vmUuid = None
        self.volume = None
        self.bitmap = None
        self.lastBackup = None
        self.networkWriteBandwidth = 0L
        self.volumeWriteBandwidth = 0L
        self.maxIncremental = 0
        self.mode = None
        self.storageInfo = None

class TakeVolumeBackupResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(TakeVolumeBackupResponse, self).__init__()
        self.backupFile = None
        self.parentInstallPath = None
        self.bitmap = None

class VolumeBackupInfo(object):
    def __init__(self, deviceId, bitmap, backupFile, parentInstallPath):
        self.deviceId = deviceId
        self.bitmap = bitmap
        self.backupFile = backupFile
        self.parentInstallPath = parentInstallPath

class TakeVolumesBackupsCommand(kvmagent.AgentCommand):
    @log.sensitive_fields("password")
    def __init__(self):
        super(TakeVolumesBackupsCommand, self).__init__()
        self.hostname = None
        self.username = None
        self.password = None
        self.sshPort = 22
        self.bsPath = None
        self.uploadDir = None
        self.vmUuid = None
        self.backupInfos = []
        self.deviceIds = []  # type:list[int]
        self.networkWriteBandwidth = 0L
        self.volumeWriteBandwidth = 0L
        self.maxIncremental = 0
        self.mode = None
        self.volumes = []
        self.storageInfo = None

class TakeVolumesBackupsResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(TakeVolumesBackupsResponse, self).__init__()
        self.backupInfos = []  # type: list[VolumeBackupInfo]

class TakeSnapshotsCmd(kvmagent.AgentCommand):
    snapshotJobs = None  # type: list[VolumeSnapshotJobStruct]

    def __init__(self):
        super(TakeSnapshotsCmd, self).__init__()
        self.snapshotJobs = []

class TakeSnapshotsResponse(kvmagent.AgentResponse):
    snapshots = None  # type: List[VolumeSnapshotResultStruct]

    def __init__(self):
        super(TakeSnapshotsResponse, self).__init__()
        self.snapshots = []

class CancelBackupJobsCmd(kvmagent.AgentCommand):
    def __init__(self):
        super(CancelBackupJobsCmd, self).__init__()
        self.vmUuid = None

class CancelBackupJobsResponse(kvmagent.AgentResponse):
    def __init__(self):
        super(CancelBackupJobsResponse, self).__init__()

class MergeSnapshotRsp(kvmagent.AgentResponse):
    def __init__(self):
        super(MergeSnapshotRsp, self).__init__()

class LogoutIscsiTargetRsp(kvmagent.AgentResponse):
    def __init__(self):
        super(LogoutIscsiTargetRsp, self).__init__()

class LoginIscsiTargetCmd(kvmagent.AgentCommand):
    @log.sensitive_fields("chapPassword")
    def __init__(self):
        super(LoginIscsiTargetCmd, self).__init__()
        self.hostname = None
        self.port = None  # type:int
        self.target = None
        self.chapUsername = None
        self.chapPassword = None

class LoginIscsiTargetRsp(kvmagent.AgentResponse):
    def __init__(self):
        super(LoginIscsiTargetRsp, self).__init__()

class ReportVmStateCmd(object):
    def __init__(self):
        self.hostUuid = None
        self.vmUuid = None
        self.vmState = None

class ReportVmShutdownEventCmd(object):
    def __init__(self):
        self.vmUuid = None

class ReportVmRebootEventCmd(object):
    def __init__(self):
        self.vmUuid = None

class CheckVmStateRsp(kvmagent.AgentResponse):
    def __init__(self):
        super(CheckVmStateRsp, self).__init__()
        self.states = {}

class CheckColoVmStateRsp(kvmagent.AgentResponse):
    def __init__(self):
        super(CheckColoVmStateRsp, self).__init__()
        self.state = None
        self.mode = None

class ChangeVmPasswordCmd(kvmagent.AgentCommand):
    @log.sensitive_fields("accountPerference.accountPassword")
    def __init__(self):
        super(ChangeVmPasswordCmd, self).__init__()
        self.accountPerference = AccountPerference()  # type:AccountPerference
        self.timeout = 0L

class ChangeVmPasswordRsp(kvmagent.AgentResponse):
    def __init__(self):
        super(ChangeVmPasswordRsp, self).__init__()
        self.accountPerference = None

class AccountPerference(object):
    def __init__(self):
        self.userAccount = None
        self.accountPassword = None
        self.vmUuid = None

class ReconnectMeCmd(object):
    def __init__(self):
        self.hostUuid = None
        self.reason = None

class FailOverCmd(object):
    def __init__(self):
        self.vmInstanceUuid = None
        self.hostUuid = None
        self.reason = None
        self.primaryVmFailure = None

class HotPlugPciDeviceCommand(kvmagent.AgentCommand):
    def __init__(self):
        super(HotPlugPciDeviceCommand, self).__init__()
        self.pciDeviceAddress = None
        self.vmUuid = None

class HotPlugPciDeviceRsp(kvmagent.AgentResponse):
    def __init__(self):
        super(HotPlugPciDeviceRsp, self).__init__()

class HotUnplugPciDeviceCommand(kvmagent.AgentCommand):
    def __init__(self):
        super(HotUnplugPciDeviceCommand, self).__init__()
        self.pciDeviceAddress = None
        self.vmUuid = None

class HotUnplugPciDeviceRsp(kvmagent.AgentResponse):
    def __init__(self):
        super(HotUnplugPciDeviceRsp, self).__init__()

class AttachPciDeviceToHostCommand(kvmagent.AgentCommand):
    def __init__(self):
        super(AttachPciDeviceToHostCommand, self).__init__()
        self.pciDeviceAddress = None

class AttachPciDeviceToHostRsp(kvmagent.AgentResponse):
    def __init__(self):
        super(AttachPciDeviceToHostRsp, self).__init__()

class DetachPciDeviceFromHostCommand(kvmagent.AgentCommand):
    def __init__(self):
        super(DetachPciDeviceFromHostCommand, self).__init__()
        self.pciDeviceAddress = None

class DetachPciDeviceFromHostRsp(kvmagent.AgentResponse):
    def __init__(self):
super(DetachPciDeviceFromHostRsp, self).__init__()
class KvmAttachUsbDeviceRsp(kvmagent.AgentResponse):
def __init__(self):
super(KvmAttachUsbDeviceRsp, self).__init__()
class KvmDetachUsbDeviceRsp(kvmagent.AgentResponse):
def __init__(self):
super(KvmDetachUsbDeviceRsp, self).__init__()
class ReloadRedirectUsbRsp(kvmagent.AgentResponse):
def __init__(self):
super(ReloadRedirectUsbRsp, self).__init__()
class CheckMountDomainRsp(kvmagent.AgentResponse):
def __init__(self):
super(CheckMountDomainRsp, self).__init__()
self.active = False
class KvmResizeVolumeCommand(kvmagent.AgentCommand):
def __init__(self):
super(KvmResizeVolumeCommand, self).__init__()
self.vmUuid = None
self.size = None
self.deviceId = None
class KvmResizeVolumeRsp(kvmagent.AgentResponse):
def __init__(self):
super(KvmResizeVolumeRsp, self).__init__()
class UpdateVmPriorityRsp(kvmagent.AgentResponse):
def __init__(self):
super(UpdateVmPriorityRsp, self).__init__()
class BlockStreamResponse(kvmagent.AgentResponse):
def __init__(self):
super(BlockStreamResponse, self).__init__()
class AttachGuestToolsIsoToVmCmd(kvmagent.AgentCommand):
def __init__(self):
super(AttachGuestToolsIsoToVmCmd, self).__init__()
self.vmInstanceUuid = None
self.needTempDisk = None
class AttachGuestToolsIsoToVmRsp(kvmagent.AgentResponse):
def __init__(self):
super(AttachGuestToolsIsoToVmRsp, self).__init__()
class DetachGuestToolsIsoFromVmCmd(kvmagent.AgentCommand):
def __init__(self):
super(DetachGuestToolsIsoFromVmCmd, self).__init__()
self.vmInstanceUuid = None
class DetachGuestToolsIsoFromVmRsp(kvmagent.AgentResponse):
def __init__(self):
super(DetachGuestToolsIsoFromVmRsp, self).__init__()
class IsoTo(object):
def __init__(self):
super(IsoTo, self).__init__()
self.path = None
self.imageUuid = None
self.deviceId = None
class AttachIsoCmd(object):
def __init__(self):
super(AttachIsoCmd, self).__init__()
self.iso = None
self.vmUuid = None
class DetachIsoCmd(object):
def __init__(self):
super(DetachIsoCmd, self).__init__()
self.vmUuid = None
self.deviceId = None
class GetVmGuestToolsInfoCmd(kvmagent.AgentCommand):
def __init__(self):
super(GetVmGuestToolsInfoCmd, self).__init__()
self.vmInstanceUuid = None
class GetVmGuestToolsInfoRsp(kvmagent.AgentResponse):
def __init__(self):
super(GetVmGuestToolsInfoRsp, self).__init__()
self.version = None
self.status = None
class GetVmFirstBootDeviceCmd(kvmagent.AgentCommand):
def __init__(self):
super(GetVmFirstBootDeviceCmd, self).__init__()
self.uuid = None
class GetVmFirstBootDeviceRsp(kvmagent.AgentResponse):
def __init__(self):
super(GetVmFirstBootDeviceRsp, self).__init__()
self.firstBootDevice = None
class FailColoPrimaryVmCmd(kvmagent.AgentCommand):
@log.sensitive_fields("targetHostPassword")
def __init__(self):
super(FailColoPrimaryVmCmd, self).__init__()
self.vmInstanceUuid = None
self.targetHostIp = None
self.targetHostPort = None
self.targetHostPassword = None
class GetVmDeviceAddressRsp(kvmagent.AgentResponse):
def __init__(self):
super(GetVmDeviceAddressRsp, self).__init__()
self.addresses = {} # type: dict[str, list[VmDeviceAddress]]
class VmDeviceAddress(object):
def __init__(self, uuid, device_type, address_type, address):
self.uuid = uuid
self.deviceType = device_type
self.addressType = address_type
self.address = address
class VncPortIptableRule(object):
def __init__(self):
self.host_ip = None
self.port = None
self.vm_internal_id = None
def _make_chain_name(self):
return "vm-%s-vnc" % self.vm_internal_id
@lock.file_lock('/run/xtables.lock')
def apply(self):
assert self.host_ip is not None
assert self.port is not None
assert self.vm_internal_id is not None
ipt = iptables.from_iptables_save()
chain_name = self._make_chain_name()
current_ip = linux.get_host_by_name(self.host_ip)
# get ipv4 subnet
output = shell.call('ip -o -f inet addr show | awk \'/scope global/ {print $4}\' | fgrep -w %s' % current_ip).splitlines()
if not output:
err = 'cannot get host ip with netmask for %s' % self.host_ip
logger.warn(err)
raise kvmagent.KvmError(err)
current_ip_with_netmask = output[0]
ipt.add_rule('-A INPUT -p tcp -m tcp --dport %s -j %s' % (self.port, chain_name))
ipt.add_rule('-A %s -d %s -j ACCEPT' % (chain_name, current_ip_with_netmask))
ipt.add_rule('-A %s ! -d %s -j REJECT --reject-with icmp-host-prohibited' % (chain_name, current_ip_with_netmask))
ipt.iptable_restore()
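# Illustrative result (not executed): for hypothetical values
# vm_internal_id=42, port=5900 and host address 10.0.0.5/24, apply() above
# effectively installs rules like:
#   -A INPUT -p tcp -m tcp --dport 5900 -j vm-42-vnc
#   -A vm-42-vnc -d 10.0.0.5/24 -j ACCEPT
#   -A vm-42-vnc ! -d 10.0.0.5/24 -j REJECT --reject-with icmp-host-prohibited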
@lock.file_lock('/run/xtables.lock')
def delete(self):
assert self.vm_internal_id is not None
ipt = iptables.from_iptables_save()
chain_name = self._make_chain_name()
ipt.delete_chain(chain_name)
ipt.iptable_restore()
def find_vm_internal_ids(self, vms):
internal_ids = []
namespace_used = is_namespace_used()
for vm in vms:
if namespace_used:
vm_id_node = find_zstack_metadata_node(etree.fromstring(vm.domain_xml), 'internalId')
if vm_id_node is None:
continue
vm_id = vm_id_node.text
else:
if not vm.domain_xmlobject.has_element('metadata.internalId'):
continue
vm_id = vm.domain_xmlobject.metadata.internalId.text_
if vm_id:
internal_ids.append(vm_id)
return internal_ids
@lock.file_lock('/run/xtables.lock')
def delete_stale_chains(self):
ipt = iptables.from_iptables_save()
tbl = ipt.get_table()
if not tbl:
ipt.iptable_restore()
return
vms = get_running_vms()
internal_ids = self.find_vm_internal_ids(vms)
# delete all vnc chains
chains = tbl.children[:]
for chain in chains:
if 'vm' in chain.name and 'vnc' in chain.name:
vm_internal_id = chain.name.split('-')[1]
if vm_internal_id not in internal_ids:
ipt.delete_chain(chain.name)
logger.debug('deleted a stale VNC iptable chain[%s]' % chain.name)
ipt.iptable_restore()
def e(parent, tag, value=None, attrib={}, usenamesapce = False):
if usenamesapce:
tag = '{%s}%s' % (ZS_XML_NAMESPACE, tag)
el = etree.SubElement(parent, tag, attrib)
if value:
el.text = value
return el
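# Illustrative usage of the e() helper above (not executed; values are
# hypothetical):
#   disk = etree.Element('disk', {'type': 'file', 'device': 'disk'})
#   e(disk, 'driver', None, {'name': 'qemu', 'type': 'qcow2'})
#   e(disk, 'source', None, {'file': '/path/to/volume.qcow2'})
#   etree.tostring(disk)
#   # -> '<disk type="file" device="disk"><driver name="qemu" type="qcow2"/>
#   #     <source file="/path/to/volume.qcow2"/></disk>'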
def find_namespace_node(root, path, name):
ns = {'zs': ZS_XML_NAMESPACE}
ps = path.split('.')
cnode = root
for p in ps:
cnode = cnode.find(p)
if cnode is None:
return None
return cnode.find('zs:%s' % name, ns)
def find_zstack_metadata_node(root, name):
zs = find_namespace_node(root, 'metadata', 'zstack')
if zs is None:
return None
return zs.find(name)
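# Illustrative sketch (not executed): the zstack metadata these helpers walk
# looks roughly like the following, with internalId as a plain child node:
#   <domain>
#     <metadata>
#       <zs:zstack xmlns:zs="<ZS_XML_NAMESPACE>">
#         <internalId>42</internalId>
#       </zs:zstack>
#     </metadata>
#   </domain>
# find_zstack_metadata_node(root, 'internalId') would return the internalId node.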
def find_domain_cdrom_address(domain_xml, target_dev):
domain_xmlobject = xmlobject.loads(domain_xml)
disks = domain_xmlobject.devices.get_children_nodes()['disk']
for d in disks:
if d.device_ != 'cdrom':
continue
if d.get_child_node('target').dev_ != target_dev:
continue
return d.get_child_node('address')
return None
def find_domain_first_boot_device(domain_xml):
domain_xmlobject = xmlobject.loads(domain_xml)
disks = domain_xmlobject.devices.get_child_node_as_list('disk')
ifaces = domain_xmlobject.devices.get_child_node_as_list('interface')
for d in disks:
if d.get_child_node('boot') is None:
continue
if d.device_ == 'disk' and d.get_child_node('boot').order_ == '1':
return "HardDisk"
if d.device_ == 'cdrom' and d.get_child_node('boot').order_ == '1':
return "CdRom"
for i in ifaces:
if i.get_child_node('boot') is None:
continue
if i.get_child_node('boot').order_ == '1':
return "Network"
devs = domain_xmlobject.os.get_child_node_as_list('boot')
if devs and devs[0].dev_ == 'cdrom':
return "CdRom"
return "HardDisk"
def compare_version(version1, version2):
def normalize(v):
return [int(x) for x in re.sub(r'(\.0+)*$','', v).split(".")]
return cmp(normalize(version1), normalize(version2))
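# Illustrative examples (not executed): normalize() drops trailing ".0"
# components, so versions differing only by trailing zeros compare equal,
# and components compare numerically rather than lexically:
#   compare_version('1.3.3.0', '1.3.3')  # -> 0
#   compare_version('2.12.0', '2.9.0')   # -> 1  (12 > 9)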
LIBVIRT_VERSION = linux.get_libvirt_version()
LIBVIRT_MAJOR_VERSION = LIBVIRT_VERSION.split('.')[0]
QEMU_VERSION = linux.get_qemu_version()
def is_namespace_used():
return compare_version(LIBVIRT_VERSION, '1.3.3') >= 0
def is_hv_freq_supported():
return compare_version(QEMU_VERSION, '2.12.0') >= 0
@linux.with_arch(todo_list=['x86_64'])
def is_ioapic_supported():
return compare_version(LIBVIRT_VERSION, '3.4.0') >= 0
def is_kylin402():
zstack_release = linux.read_file('/etc/zstack-release')
if zstack_release is None:
return False
return "kylin402" in zstack_release.splitlines()[0]
def is_spiceport_driver_supported():
# qemu-system-aarch64 does not support the char driver: spiceport
return shell.run("%s -h | grep 'chardev spiceport'" % kvmagent.get_qemu_path()) == 0
def is_virtual_machine():
product_name = shell.call("dmidecode -s system-product-name").strip()
return product_name == "KVM Virtual Machine" or product_name == "KVM"
def get_domain_type():
return "qemu" if HOST_ARCH == "aarch64" and is_virtual_machine() else "kvm"
def get_gic_version(cpu_num):
kernel_release = platform.release().split("-")[0]
if is_kylin402() and cpu_num <= 8 and LooseVersion(kernel_release) < LooseVersion('4.15.0'):
return 2
# Occasionally, libvirt might fail to list VM ...
def get_console_without_libvirt(vmUuid):
output = bash.bash_o("""ps x | awk '/qemu[-]kvm.*%s/{print $1, index($0, " -vnc ")}'""" % vmUuid).splitlines()
if len(output) != 1:
return None, None, None, None
pid, idx = output[0].split()
output = bash.bash_o(
"""lsof -p %s -aPi4 | awk '$8 == "TCP" { n=split($9,a,":"); print a[n] }'""" % pid).splitlines()
if len(output) < 1:
logger.warn("get_port_without_libvirt: no port found")
return None, None, None, None
# vnc uses one port, spice may use one or two ports, and vncAndSpice may use two or three ports.
output.sort()
if len(output) == 1 and int(idx) == 0:
protocol = "spice"
return protocol, None, int(output[0]), None
if len(output) == 1 and int(idx) != 0:
protocol = "vnc"
return protocol, int(output[0]), None, None
if len(output) == 2 and int(idx) == 0:
protocol = "spice"
return protocol, None, int(output[0]), int(output[1])
if len(output) == 2 and int(idx) != 0:
protocol = "vncAndSpice"
return protocol, int(output[0]), int(output[1]), None
if len(output) == 3:
protocol = "vncAndSpice"
return protocol, int(output[0]), int(output[1]), int(output[2])
logger.warn("get_port_without_libvirt: more than 3 ports")
return None, None, None, None
def check_vdi_port(vncPort, spicePort, spiceTlsPort):
if vncPort is None and spicePort is None and spiceTlsPort is None:
return False
if vncPort is not None and vncPort <= 0:
return False
if spicePort is not None and spicePort <= 0:
return False
if spiceTlsPort is not None and spiceTlsPort <= 0:
return False
return True
# get domain/bus/slot/function from pci device address
def parse_pci_device_address(addr):
domain = '0000' if len(addr.split(":")) == 2 else addr.split(":")[0]
bus = addr.split(":")[-2]
slot = addr.split(":")[-1].split(".")[0]
function = addr.split(".")[-1]
return domain, bus, slot, function
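# Illustrative examples (not executed), with hypothetical addresses: an
# address without a domain part defaults to domain '0000':
#   parse_pci_device_address('0000:03:00.1')  # -> ('0000', '03', '00', '1')
#   parse_pci_device_address('03:00.1')       # -> ('0000', '03', '00', '1')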
def get_machineType(machine_type):
if HOST_ARCH == "aarch64":
return "virt"
return machine_type if machine_type else "pc"
def get_sgio_value():
device_name = [x for x in os.listdir("/sys/block") if not x.startswith("loop")][0]
return "unfiltered" if os.path.isfile("/sys/block/{}/queue/unpriv_sgio".format(device_name)) else "filtered"
class LibvirtAutoReconnect(object):
conn = libvirt.open('qemu:///system')
if not conn:
raise Exception('unable to get libvirt connection')
evtMgr = LibvirtEventManagerSingleton()
libvirt_event_callbacks = {}
def __init__(self, func):
self.func = func
self.exception = None
@staticmethod
def add_libvirt_callback(id, cb):
cbs = LibvirtAutoReconnect.libvirt_event_callbacks.get(id, None)
if cbs is None:
cbs = []
LibvirtAutoReconnect.libvirt_event_callbacks[id] = cbs
cbs.append(cb)
@staticmethod
def register_libvirt_callbacks():
def reboot_callback(conn, dom, opaque):
cbs = LibvirtAutoReconnect.libvirt_event_callbacks.get(libvirt.VIR_DOMAIN_EVENT_ID_REBOOT)
if not cbs:
return
for cb in cbs:
try:
cb(conn, dom, opaque)
except:
content = traceback.format_exc()
logger.warn(content)
LibvirtAutoReconnect.conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_REBOOT, reboot_callback,
None)
def lifecycle_callback(conn, dom, event, detail, opaque):
cbs = LibvirtAutoReconnect.libvirt_event_callbacks.get(libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE)
if not cbs:
return
for cb in cbs:
try:
cb(conn, dom, event, detail, opaque)
except:
content = traceback.format_exc()
logger.warn(content)
LibvirtAutoReconnect.conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
lifecycle_callback, None)
def libvirtClosedCallback(conn, reason, opaque):
reasonStrings = (
"Error", "End-of-file", "Keepalive", "Client",
)
logger.debug("got libvirt closed callback: %s: %s" % (conn.getURI(), reasonStrings[reason]))
LibvirtAutoReconnect.conn.registerCloseCallback(libvirtClosedCallback, None)
# NOTE: the keepalive doesn't work on some libvirtd even when the versions are the same
# the error is like "the caller doesn't support keepalive protocol; perhaps it's missing event loop implementation"
# def start_keep_alive(_):
# try:
# LibvirtAutoReconnect.conn.setKeepAlive(5, 3)
# return True
# except Exception as e:
# logger.warn('unable to start libvirt keep-alive, %s' % str(e))
# return False
#
# if not linux.wait_callback_success(start_keep_alive, timeout=5, interval=0.5):
# raise Exception('unable to start libvirt keep-alive after 5 seconds, see the log for detailed error')
@lock.lock('libvirt-reconnect')
def _reconnect(self):
def test_connection():
try:
LibvirtAutoReconnect.conn.getLibVersion()
VmPlugin._reload_ceph_secret_keys()
return None
except libvirt.libvirtError as ex:
return ex
ex = test_connection()
if not ex:
# the connection is ok
return
# 2nd version: 2015
logger.warn("the libvirt connection is broken, there is no safeway to auto-reconnect without fd leak, we"
" will ask the mgmt server to reconnect us after self quit")
_stop_world()
# old_conn = LibvirtAutoReconnect.conn
# LibvirtAutoReconnect.conn = libvirt.open('qemu:///system')
# if not LibvirtAutoReconnect.conn:
# raise Exception('unable to get a libvirt connection')
#
# for cid in LibvirtAutoReconnect.callback_id:
# logger.debug("remove libvirt event callback[id:%s]" % cid)
# old_conn.domainEventDeregisterAny(cid)
#
# # stop old event manager
# LibvirtAutoReconnect.evtMgr.stop()
# # create a new event manager
# LibvirtAutoReconnect.evtMgr = LibvirtEventManager()
# LibvirtAutoReconnect.register_libvirt_callbacks()
#
# # try to close the old connection anyway
# try:
# old_conn.close()
# except Exception as ee:
# logger.warn('unable to close an old libvirt exception, %s' % str(ee))
# finally:
# del old_conn
#
# ex = test_connection()
# if ex:
# # unable to reconnect, raise the error
# raise Exception('unable to get a libvirt connection, %s' % str(ex))
#
# logger.debug('successfully reconnected to the libvirt')
def __call__(self, *args, **kwargs):
try:
return self.func(LibvirtAutoReconnect.conn)
except libvirt.libvirtError as ex:
err = str(ex)
if 'client socket is closed' in err or 'Broken pipe' in err or 'invalid connection' in err:
logger.debug('socket to the libvirt is broken[%s], try reconnecting' % err)
self._reconnect()
return self.func(LibvirtAutoReconnect.conn)
else:
raise
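# Illustrative usage (not executed): LibvirtAutoReconnect is used as a
# decorator over a function taking the shared connection; calling the wrapper
# runs it and transparently retries once on broken-socket errors:
#   @LibvirtAutoReconnect
#   def get_version(conn):
#       return conn.getLibVersion()
#   version = get_version()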
class IscsiLogin(object):
def __init__(self):
self.server_hostname = None
self.server_port = None
self.target = None
self.chap_username = None
self.chap_password = None
self.lun = 1
@lock.lock('iscsiadm')
def login(self):
assert self.server_hostname, "hostname cannot be None"
assert self.server_port, "port cannot be None"
assert self.target, "target cannot be None"
device_path = os.path.join('/dev/disk/by-path/', 'ip-%s:%s-iscsi-%s-lun-%s' % (
self.server_hostname, self.server_port, self.target, self.lun))
shell.call('iscsiadm -m discovery -t sendtargets -p %s:%s' % (self.server_hostname, self.server_port))
if self.chap_username and self.chap_password:
shell.call(
'iscsiadm --mode node --targetname "%s" -p %s:%s --op=update --name node.session.auth.authmethod --value=CHAP' % (
self.target, self.server_hostname, self.server_port))
shell.call(
'iscsiadm --mode node --targetname "%s" -p %s:%s --op=update --name node.session.auth.username --value=%s' % (
self.target, self.server_hostname, self.server_port, self.chap_username))
shell.call(
'iscsiadm --mode node --targetname "%s" -p %s:%s --op=update --name node.session.auth.password --value=%s' % (
self.target, self.server_hostname, self.server_port, self.chap_password))
shell.call('iscsiadm --mode node --targetname "%s" -p %s:%s --login' % (
self.target, self.server_hostname, self.server_port))
def wait_device_to_show(_):
return os.path.exists(device_path)
if not linux.wait_callback_success(wait_device_to_show, timeout=30, interval=0.5):
raise Exception('ISCSI device[%s] is not shown up after 30s' % device_path)
return device_path
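# Illustrative usage (not executed), with hypothetical portal/target values;
# login() returns the by-path device node once it shows up:
#   login = IscsiLogin()
#   login.server_hostname, login.server_port = '10.0.0.1', 3260
#   login.target = 'iqn.2015-01.io.zstack:target1'
#   login.login()
#   # -> '/dev/disk/by-path/ip-10.0.0.1:3260-iscsi-iqn.2015-01.io.zstack:target1-lun-1'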
class BlkIscsi(object):
def __init__(self):
self.is_cdrom = None
self.volume_uuid = None
self.chap_username = None
self.chap_password = None
self.device_letter = None
self.addressBus = None
self.addressUnit = None
self.server_hostname = None
self.server_port = None
self.target = None
self.lun = None
def _login_portal(self):
login = IscsiLogin()
login.server_hostname = self.server_hostname
login.server_port = self.server_port
login.target = self.target
login.chap_username = self.chap_username
login.chap_password = self.chap_password
return login.login()
def to_xmlobject(self):
# type: () -> etree.Element
device_path = self._login_portal()
if self.is_cdrom:
root = etree.Element('disk', {'type': 'block', 'device': 'cdrom'})
e(root, 'driver', attrib={'name': 'qemu', 'type': 'raw', 'cache': 'none'})
e(root, 'source', attrib={'dev': device_path})
e(root, 'target', attrib={'dev': self.device_letter})
if self.addressBus and self.addressUnit:
e(root, 'address', None,{'type' : 'drive', 'bus' : self.addressBus, 'unit' : self.addressUnit})
else:
root = etree.Element('disk', {'type': 'block', 'device': 'lun'})
e(root, 'driver', attrib={'name': 'qemu', 'type': 'raw', 'cache': 'none', 'discard':'unmap'})
e(root, 'source', attrib={'dev': device_path})
e(root, 'target', attrib={'dev': 'sd%s' % self.device_letter})
return root
@staticmethod
@lock.lock('iscsiadm')
def logout_portal(dev_path):
if not os.path.exists(dev_path):
return
device = os.path.basename(dev_path)
portal = device[3:device.find('-iscsi')]
target = device[device.find('iqn'):device.find('-lun')]
try:
shell.call('iscsiadm -m node --targetname "%s" --portal "%s" --logout' % (target, portal))
except Exception as e:
logger.warn('failed to logout device[%s], %s' % (dev_path, str(e)))
class IsoCeph(object):
def __init__(self):
self.iso = None
def to_xmlobject(self, target_dev, target_bus_type, bus=None, unit=None, bootOrder=None):
disk = etree.Element('disk', {'type': 'network', 'device': 'cdrom'})
source = e(disk, 'source', None, {'name': self.iso.path.lstrip('ceph:').lstrip('//'), 'protocol': 'rbd'})
if self.iso.secretUuid:
auth = e(disk, 'auth', attrib={'username': 'zstack'})
e(auth, 'secret', attrib={'type': 'ceph', 'uuid': self.iso.secretUuid})
for minfo in self.iso.monInfo:
e(source, 'host', None, {'name': minfo.hostname, 'port': str(minfo.port)})
e(disk, 'target', None, {'dev': target_dev, 'bus': target_bus_type})
if bus and unit:
e(disk, 'address', None, {'type': 'drive', 'bus': bus, 'unit': unit})
e(disk, 'readonly', None)
if bootOrder is not None and bootOrder > 0:
e(disk, 'boot', None, {'order': str(bootOrder)})
return disk
class BlkCeph(object):
def __init__(self):
self.volume = None
self.dev_letter = None
self.bus_type = None
def to_xmlobject(self):
disk = etree.Element('disk', {'type': 'network', 'device': 'disk'})
source = e(disk, 'source', None,
{'name': self.volume.installPath.lstrip('ceph:').lstrip('//'), 'protocol': 'rbd'})
if self.volume.secretUuid:
auth = e(disk, 'auth', attrib={'username': 'zstack'})
e(auth, 'secret', attrib={'type': 'ceph', 'uuid': self.volume.secretUuid})
for minfo in self.volume.monInfo:
e(source, 'host', None, {'name': minfo.hostname, 'port': str(minfo.port)})
dev_format = Vm._get_disk_target_dev_format(self.bus_type)
e(disk, 'target', None, {'dev': dev_format % self.dev_letter, 'bus': self.bus_type})
if self.volume.physicalBlockSize:
e(disk, 'blockio', None, {'physical_block_size': str(self.volume.physicalBlockSize)})
return disk
class VirtioCeph(object):
def __init__(self):
self.volume = None
self.dev_letter = None
def to_xmlobject(self):
disk = etree.Element('disk', {'type': 'network', 'device': 'disk'})
source = e(disk, 'source', None,
{'name': self.volume.installPath.lstrip('ceph:').lstrip('//'), 'protocol': 'rbd'})
if self.volume.secretUuid:
auth = e(disk, 'auth', attrib={'username': 'zstack'})
e(auth, 'secret', attrib={'type': 'ceph', 'uuid': self.volume.secretUuid})
for minfo in self.volume.monInfo:
e(source, 'host', None, {'name': minfo.hostname, 'port': str(minfo.port)})
e(disk, 'target', None, {'dev': 'vd%s' % self.dev_letter, 'bus': 'virtio'})
if self.volume.physicalBlockSize:
e(disk, 'blockio', None, {'physical_block_size': str(self.volume.physicalBlockSize)})
return disk
class VirtioSCSICeph(object):
def __init__(self):
self.volume = None
self.dev_letter = None
def to_xmlobject(self):
disk = etree.Element('disk', {'type': 'network', 'device': 'disk'})
source = e(disk, 'source', None,
{'name': self.volume.installPath.lstrip('ceph:').lstrip('//'), 'protocol': 'rbd'})
if self.volume.secretUuid:
auth = e(disk, 'auth', attrib={'username': 'zstack'})
e(auth, 'secret', attrib={'type': 'ceph', 'uuid': self.volume.secretUuid})
for minfo in self.volume.monInfo:
e(source, 'host', None, {'name': minfo.hostname, 'port': str(minfo.port)})
e(disk, 'target', None, {'dev': 'sd%s' % self.dev_letter, 'bus': 'scsi'})
e(disk, 'wwn', self.volume.wwn)
if self.volume.shareable:
e(disk, 'driver', None, {'name': 'qemu', 'type': 'raw', 'cache': 'none'})
e(disk, 'shareable')
if self.volume.physicalBlockSize:
e(disk, 'blockio', None, {'physical_block_size': str(self.volume.physicalBlockSize)})
return disk
class VirtioIscsi(object):
def __init__(self):
self.volume_uuid = None
self.chap_username = None
self.chap_password = None
self.device_letter = None
self.server_hostname = None
self.server_port = None
self.target = None
self.lun = None
def to_xmlobject(self):
root = etree.Element('disk', {'type': 'network', 'device': 'disk'})
e(root, 'driver', attrib={'name': 'qemu', 'type': 'raw', 'cache': 'none', 'discard':'unmap'})
if self.chap_username and self.chap_password:
auth = e(root, 'auth', attrib={'username': self.chap_username})
e(auth, 'secret', attrib={'type': 'iscsi', 'uuid': self._get_secret_uuid()})
source = e(root, 'source', attrib={'protocol': 'iscsi', 'name': '%s/%s' % (self.target, self.lun)})
e(source, 'host', attrib={'name': self.server_hostname, 'port': self.server_port})
e(root, 'target', attrib={'dev': 'sd%s' % self.device_letter, 'bus': 'scsi'})
e(root, 'shareable')
return root
def _get_secret_uuid(self):
root = etree.Element('secret', {'ephemeral': 'yes', 'private': 'yes'})
e(root, 'description', self.volume_uuid)
usage = e(root, 'usage', attrib={'type': 'iscsi'})
e(usage, 'target', self.target)
xml = etree.tostring(root)
logger.debug('create secret for virtio-iscsi volume:\n%s\n' % xml)
@LibvirtAutoReconnect
def call_libvirt(conn):
return conn.secretDefineXML(xml)
secret = call_libvirt()
secret.setValue(self.chap_password)
return secret.UUIDString()
@linux.retry(times=3, sleep_time=1)
def get_connect(src_host_ip):
conn = libvirt.open('qemu+tcp://{0}/system'.format(src_host_ip))
if conn is None:
logger.warn('unable to connect qemu on host {0}'.format(src_host_ip))
raise kvmagent.KvmError('unable to connect qemu on host %s' % (src_host_ip))
return conn
def get_vm_by_uuid(uuid, exception_if_not_existing=True, conn=None):
try:
# libvirt may not be able to find a VM when under a heavy workload, so we retry here
@LibvirtAutoReconnect
def call_libvirt(conn):
return conn.lookupByName(uuid)
@linux.retry(times=3, sleep_time=1)
def retry_call_libvirt():
if conn is None:
return call_libvirt()
else:
return conn.lookupByName(uuid)
vm = Vm.from_virt_domain(retry_call_libvirt())
logger.debug("find xm xml: %s" % vm.domain_xml)
return vm
except libvirt.libvirtError as e:
error_code = e.get_error_code()
if error_code == libvirt.VIR_ERR_NO_DOMAIN:
if exception_if_not_existing:
raise kvmagent.KvmError('unable to find vm[uuid:%s]' % uuid)
else:
return None
err = 'error happened when looking up vm[uuid:%(uuid)s], libvirt error code: %(error_code)s, %(e)s' % locals()
raise libvirt.libvirtError(err)
def get_vm_by_uuid_no_retry(uuid, exception_if_not_existing=True):
try:
# do not retry, to avoid the slow-VM-creation issue 4175
@LibvirtAutoReconnect
def call_libvirt(conn):
return conn.lookupByName(uuid)
vm = Vm.from_virt_domain(call_libvirt())
return vm
except libvirt.libvirtError as e:
error_code = e.get_error_code()
if error_code == libvirt.VIR_ERR_NO_DOMAIN:
if exception_if_not_existing:
raise kvmagent.KvmError('unable to find vm[uuid:%s]' % uuid)
else:
return None
err = 'error happened when looking up vm[uuid:%(uuid)s], libvirt error code: %(error_code)s, %(e)s' % locals()
raise libvirt.libvirtError(err)
def get_active_vm_uuids_states():
@LibvirtAutoReconnect
def call_libvirt(conn):
return conn.listDomainsID()
ids = call_libvirt()
uuids_states = {}
uuids_vmInShutdown = []
@LibvirtAutoReconnect
def get_domain(conn):
# 'i' is the enclosing for-loop's control variable;
# the closure picks it up through Python's scoping rules
try:
return conn.lookupByID(i)
except libvirt.libvirtError as ex:
if ex.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
return None
raise ex
for i in ids:
domain = get_domain()
if domain is None:
continue
uuid = domain.name()
if uuid.startswith("guestfs-"):
logger.debug("ignore the temp vm generate by guestfish.")
continue
if uuid == "ZStack Management Node VM":
logger.debug("ignore the vm used for MN HA.")
continue
(state, _, _, _, _) = domain.info()
if state == Vm.VIR_DOMAIN_SHUTDOWN:
uuids_vmInShutdown.append(uuid)
state = Vm.power_state[state]
uuids_states[uuid] = state
return uuids_states, uuids_vmInShutdown
def get_all_vm_states():
return get_active_vm_uuids_states()[0]
def get_all_vm_sync_states():
return get_active_vm_uuids_states()
def get_running_vms():
@LibvirtAutoReconnect
def get_all_ids(conn):
return conn.listDomainsID()
ids = get_all_ids()
vms = []
@LibvirtAutoReconnect
def get_domain(conn):
try:
return conn.lookupByID(i)
except libvirt.libvirtError as ex:
if ex.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
return None
raise ex
for i in ids:
domain = get_domain()
if domain is None:
continue
vm = Vm.from_virt_domain(domain)
vms.append(vm)
return vms
def get_cpu_memory_used_by_running_vms():
runnings = get_running_vms()
used_cpu = 0
used_memory = 0
for vm in runnings:
used_cpu += vm.get_cpu_num()
used_memory += vm.get_memory()
return (used_cpu, used_memory)
def cleanup_stale_vnc_iptable_chains():
VncPortIptableRule().delete_stale_chains()
def shared_block_to_file(sbkpath):
return sbkpath.replace("sharedblock:/", "/dev")
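# Illustrative example (not executed), with a hypothetical path:
#   shared_block_to_file('sharedblock://vg1/lv-uuid')  # -> '/dev/vg1/lv-uuid'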
class VmOperationJudger(object):
def __init__(self, op):
self.op = op
self.expected_events = {}
if self.op == VmPlugin.VM_OP_START:
self.expected_events[LibvirtEventManager.EVENT_STARTED] = LibvirtEventManager.EVENT_STARTED
elif self.op == VmPlugin.VM_OP_MIGRATE:
self.expected_events[LibvirtEventManager.EVENT_STOPPED] = LibvirtEventManager.EVENT_STOPPED
elif self.op == VmPlugin.VM_OP_STOP:
self.expected_events[LibvirtEventManager.EVENT_STOPPED] = LibvirtEventManager.EVENT_STOPPED
elif self.op == VmPlugin.VM_OP_DESTROY:
self.expected_events[LibvirtEventManager.EVENT_STOPPED] = LibvirtEventManager.EVENT_STOPPED
elif self.op == VmPlugin.VM_OP_REBOOT:
self.expected_events[LibvirtEventManager.EVENT_STARTED] = LibvirtEventManager.EVENT_STARTED
self.expected_events[LibvirtEventManager.EVENT_STOPPED] = LibvirtEventManager.EVENT_STOPPED
elif self.op == VmPlugin.VM_OP_SUSPEND:
self.expected_events[LibvirtEventManager.EVENT_SUSPENDED] = LibvirtEventManager.EVENT_SUSPENDED
elif self.op == VmPlugin.VM_OP_RESUME:
self.expected_events[LibvirtEventManager.EVENT_RESUMED] = LibvirtEventManager.EVENT_RESUMED
else:
raise Exception('unknown vm operation[%s]' % self.op)
def remove_expected_event(self, evt):
del self.expected_events[evt]
return len(self.expected_events)
def ignore_libvirt_events(self):
if self.op == VmPlugin.VM_OP_START:
return [LibvirtEventManager.EVENT_STARTED]
elif self.op == VmPlugin.VM_OP_MIGRATE:
return [LibvirtEventManager.EVENT_STOPPED, LibvirtEventManager.EVENT_UNDEFINED]
elif self.op == VmPlugin.VM_OP_STOP:
return [LibvirtEventManager.EVENT_STOPPED, LibvirtEventManager.EVENT_SHUTDOWN]
elif self.op == VmPlugin.VM_OP_DESTROY:
return [LibvirtEventManager.EVENT_STOPPED, LibvirtEventManager.EVENT_SHUTDOWN,
LibvirtEventManager.EVENT_UNDEFINED]
elif self.op == VmPlugin.VM_OP_REBOOT:
return [LibvirtEventManager.EVENT_STARTED, LibvirtEventManager.EVENT_STOPPED]
else:
raise Exception('unknown vm operation[%s]' % self.op)
def make_spool_conf(imgfmt, dev_letter, volume):
d = tempfile.gettempdir()
fname = "{0}_{1}".format(os.path.basename(volume.installPath), dev_letter)
fpath = os.path.join(d, fname) + ".conf"
vsize, _ = linux.qcow2_size_and_actual_size(volume.installPath)
with open(fpath, "w") as fd:
fd.write("device_type 0\n")
fd.write("local_storage_type 0\n")
fd.write("device_owner blockpmd\n")
fd.write("device_format {0}\n".format(imgfmt))
fd.write("cluster_id 1000\n")
fd.write("device_id {0}\n".format(ord(dev_letter)))
fd.write("device_uuid {0}\n".format(fname))
fd.write("mount_point {0}\n".format(volume.installPath))
fd.write("device_size {0}\n".format(vsize))
os.chmod(fpath, 0o600)
return fpath
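# Illustrative output (not executed): for a hypothetical qcow2 volume attached
# as device letter 'a', the generated conf file contains lines like:
#   device_type 0
#   local_storage_type 0
#   device_owner blockpmd
#   device_format qcow2
#   cluster_id 1000
#   device_id 97                 <- ord('a')
#   device_uuid <volume-basename>_a
#   mount_point /path/to/volume
#   device_size <virtual size reported for the qcow2>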
def is_spice_tls():
return bash.bash_r("grep '^[[:space:]]*spice_tls[[:space:]]*=[[:space:]]*1' /etc/libvirt/qemu.conf")
def get_dom_error(uuid):
try:
domblkerror = shell.call('virsh domblkerror %s' % uuid)
except:
return None
if 'No errors found' in domblkerror:
return None
return domblkerror.replace('\n', '')
class Vm(object):
VIR_DOMAIN_NOSTATE = 0
VIR_DOMAIN_RUNNING = 1
VIR_DOMAIN_BLOCKED = 2
VIR_DOMAIN_PAUSED = 3
VIR_DOMAIN_SHUTDOWN = 4
VIR_DOMAIN_SHUTOFF = 5
VIR_DOMAIN_CRASHED = 6
VIR_DOMAIN_PMSUSPENDED = 7
VM_STATE_NO_STATE = 'NoState'
VM_STATE_RUNNING = 'Running'
VM_STATE_PAUSED = 'Paused'
VM_STATE_SHUTDOWN = 'Shutdown'
VM_STATE_CRASHED = 'Crashed'
VM_STATE_SUSPENDED = 'Suspended'
ALLOW_SNAPSHOT_STATE = (VM_STATE_RUNNING, VM_STATE_PAUSED, VM_STATE_SHUTDOWN)
power_state = {
VIR_DOMAIN_NOSTATE: VM_STATE_NO_STATE,
VIR_DOMAIN_RUNNING: VM_STATE_RUNNING,
VIR_DOMAIN_BLOCKED: VM_STATE_RUNNING,
VIR_DOMAIN_PAUSED: VM_STATE_PAUSED,
VIR_DOMAIN_SHUTDOWN: VM_STATE_SHUTDOWN,
VIR_DOMAIN_SHUTOFF: VM_STATE_SHUTDOWN,
VIR_DOMAIN_CRASHED: VM_STATE_CRASHED,
VIR_DOMAIN_PMSUSPENDED: VM_STATE_SUSPENDED,
}
# IDE and SATA are not supported on aarch64/i440fx,
# so cdroms and volumes need to share sd[a-z]
#
# IDE is supported on x86_64/i440fx,
# so cdroms use hd[c-e] and
# virtio and virtioSCSI volumes share (sd[a-z] - sdc)
device_letter_config = {
'aarch64': 'abfghijklmnopqrstuvwxyz',
'mips64el': 'abfghijklmnopqrstuvwxyz',
'x86_64': 'abdefghijklmnopqrstuvwxyz'
}
DEVICE_LETTERS = device_letter_config[HOST_ARCH]
ISO_DEVICE_LETTERS = 'cde'
timeout_detached_vol = set()
@staticmethod
def get_device_unit(device_id):
# type: (int) -> int
if device_id >= len(Vm.DEVICE_LETTERS):
err = "exceeds max disk limit, device id[%s], but only 0 ~ %d are allowed" % (device_id, len(Vm.DEVICE_LETTERS) - 1)
logger.warn(err)
raise kvmagent.KvmError(err)
# e.g. sda -> unit 0, sdf -> unit 5, same as libvirt
return ord(Vm.DEVICE_LETTERS[device_id]) - ord(Vm.DEVICE_LETTERS[0])
@staticmethod
def get_iso_device_unit(device_id):
if device_id >= len(Vm.ISO_DEVICE_LETTERS):
err = "exceeds max iso limit, device id[%s], but only 0 ~ %d are allowed" % (device_id, len(Vm.ISO_DEVICE_LETTERS) - 1)
logger.warn(err)
raise kvmagent.KvmError(err)
return str(ord(Vm.ISO_DEVICE_LETTERS[device_id]) - ord(Vm.DEVICE_LETTERS[0]))
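# Illustrative values (not executed), assuming HOST_ARCH == 'x86_64' so that
# DEVICE_LETTERS == 'abdefghijklmnopqrstuvwxyz' ('c' is left out for cdroms):
#   Vm.get_device_unit(0)      # -> 0   (letter 'a')
#   Vm.get_device_unit(2)      # -> 3   (letter 'd'; unit is relative to 'a')
#   Vm.get_iso_device_unit(0)  # -> '2' (letter 'c' relative to 'a')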
timeout_object = linux.TimeoutObject()
@staticmethod
def set_device_address(disk_element, vol, vm_to_attach=None):
# type: (etree.Element, jsonobject.JsonObject, Vm) -> None
target = disk_element.find('target')
bus = target.get('bus') if target is not None else None
if bus == 'scsi':
occupied_units = vm_to_attach.get_occupied_disk_address_units(bus='scsi', controller=0) if vm_to_attach else []
default_unit = Vm.get_device_unit(vol.deviceId)
unit = default_unit if default_unit not in occupied_units else max(occupied_units) + 1
e(disk_element, 'address', None, {'type': 'drive', 'controller': '0', 'unit': str(unit)})
def __init__(self):
self.uuid = None
self.domain_xmlobject = None
self.domain_xml = None
self.domain = None
self.state = None
def wait_for_state_change(self, state):
try:
self.refresh()
except Exception as e:
if not state:
return True
raise e
if isinstance(state, list):
return self.state in state
else:
return self.state == state
def get_occupied_disk_address_units(self, bus, controller):
# type: (str, int) -> list[int]
result = []
for disk in self.domain_xmlobject.devices.get_child_node_as_list('disk'):
if not xmlobject.has_element(disk, 'address') or not xmlobject.has_element(disk, 'target'):
continue
if not disk.target.bus__ or not disk.target.bus_ == bus:
continue
if not disk.address.controller__ or not str(disk.address.controller_) == str(controller):
continue
result.append(int(disk.address.unit_))
return result
def get_cpu_num(self):
cpuNum = self.domain_xmlobject.vcpu.current__
if cpuNum:
return int(cpuNum)
else:
return int(self.domain_xmlobject.vcpu.text_)
def get_cpu_speed(self):
cputune = self.domain_xmlobject.get_child_node('cputune')
if cputune:
return int(cputune.shares.text_) / self.get_cpu_num()
else:
# TODO: return system cpu capacity
return 512
def get_memory(self):
return long(self.domain_xmlobject.currentMemory.text_) * 1024
def get_name(self):
return self.domain_xmlobject.description.text_
def refresh(self):
(state, _, _, _, _) = self.domain.info()
self.state = self.power_state[state]
self.domain_xml = self.domain.XMLDesc(0)
self.domain_xmlobject = xmlobject.loads(self.domain_xml)
self.uuid = self.domain_xmlobject.name.text_
def is_alive(self):
try:
self.domain.info()
return True
except:
return False
def _wait_for_vm_running(self, timeout=60, wait_console=True):
if not linux.wait_callback_success(self.wait_for_state_change, [self.VM_STATE_RUNNING, self.VM_STATE_PAUSED], interval=0.5,
timeout=timeout):
raise kvmagent.KvmError('unable to start vm[uuid:%s, name:%s], vm state is not changing to '
'running/paused after %s seconds' % (self.uuid, self.get_name(), timeout))
if not wait_console:
return
vnc_port = self.get_console_port()
def wait_vnc_port_open(_):
cmd = shell.ShellCmd('netstat -na | grep ":%s" > /dev/null' % vnc_port)
cmd(is_exception=False)
return cmd.return_code == 0
if not linux.wait_callback_success(wait_vnc_port_open, None, interval=0.5, timeout=30):
raise kvmagent.KvmError("unable to start vm[uuid:%s, name:%s]; its vnc port does"
" not open after 30 seconds" % (self.uuid, self.get_name()))
def _wait_for_vm_paused(self, timeout=60):
if not linux.wait_callback_success(self.wait_for_state_change, self.VM_STATE_PAUSED, interval=0.5,
timeout=timeout):
raise kvmagent.KvmError('unable to start vm[uuid:%s, name:%s], vm state is not changing to '
'paused after %s seconds' % (self.uuid, self.get_name(), timeout))
def reboot(self, cmd):
self.stop(timeout=cmd.timeout)
# set boot order
boot_dev = []
for bdev in cmd.bootDev:
xo = xmlobject.XmlObject('boot')
xo.put_attr('dev', bdev)
boot_dev.append(xo)
self.domain_xmlobject.os.replace_node('boot', boot_dev)
self.domain_xml = self.domain_xmlobject.dump()
self.start(cmd.timeout)
def restore(self, path):
@LibvirtAutoReconnect
def restore_from_file(conn):
return conn.restoreFlags(path, self.domain_xml)
restore_from_file()
def start(self, timeout=60, create_paused=False, wait_console=True):
# TODO: 1. enable hair_pin mode
logger.debug('creating vm:\n%s' % self.domain_xml)
@LibvirtAutoReconnect
def define_xml(conn):
return conn.defineXML(self.domain_xml)
flag = (0, libvirt.VIR_DOMAIN_START_PAUSED)[create_paused]
domain = define_xml()
self.domain = domain
self.domain.createWithFlags(flag)
if create_paused:
self._wait_for_vm_paused(timeout)
else:
self._wait_for_vm_running(timeout, wait_console)
def stop(self, strategy='grace', timeout=5, undefine=True):
def cleanup_addons():
for chan in self.domain_xmlobject.devices.get_child_node_as_list('channel'):
if chan.type_ == 'unix':
path = chan.source.path_
linux.rm_file_force(path)
def loop_shutdown(_):
try:
self.domain.shutdown()
except:
# domain has been shut down
pass
try:
return self.wait_for_state_change(self.VM_STATE_SHUTDOWN)
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
if error_code == libvirt.VIR_ERR_NO_DOMAIN:
return True
else:
raise
def iscsi_cleanup():
disks = self.domain_xmlobject.devices.get_child_node_as_list('disk')
for disk in disks:
if disk.type_ == 'block' and disk.device_ == 'lun':
BlkIscsi.logout_portal(disk.source.dev_)
def loop_undefine(_):
if not undefine:
return True
if not self.is_alive():
return True
def force_undefine():
try:
self.domain.undefine()
except:
logger.warn('cannot undefine the VM[uuid:%s]' % self.uuid)
pid = linux.find_process_by_cmdline(['qemu', self.uuid])
if pid:
# force to kill the VM
linux.kill_process(pid, is_exception=False)
try:
flags = 0
for attr in [ "VIR_DOMAIN_UNDEFINE_MANAGED_SAVE", "VIR_DOMAIN_UNDEFINE_SNAPSHOTS_METADATA", "VIR_DOMAIN_UNDEFINE_NVRAM" ]:
if hasattr(libvirt, attr):
flags |= getattr(libvirt, attr)
self.domain.undefineFlags(flags)
except libvirt.libvirtError as ex:
logger.warn('undefine domain[%s] failed: %s' % (self.uuid, str(ex)))
force_undefine()
return self.wait_for_state_change(None)
def loop_destroy(_):
try:
self.domain.destroy()
except:
# domain has been destroyed
pass
try:
return self.wait_for_state_change(self.VM_STATE_SHUTDOWN)
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
if error_code == libvirt.VIR_ERR_NO_DOMAIN:
return True
else:
raise
do_destroy, isPersistent = strategy in ('grace', 'cold'), self.domain.isPersistent()
if strategy == 'grace':
if linux.wait_callback_success(loop_shutdown, None, timeout=60):
do_destroy = False
iscsi_cleanup()
if do_destroy:
if not linux.wait_callback_success(loop_destroy, None, timeout=60):
logger.warn('failed to destroy vm, timeout after 60 secs')
raise kvmagent.KvmError('failed to stop vm, timeout after 60 secs')
cleanup_addons()
if strategy == 'force':
pid = linux.find_process_by_cmdline(['qemu', self.uuid])
if pid:
# force to kill the VM
try:
linux.kill_process(int(pid), 60, True, False)
except Exception as e:
logger.warn('failed to kill vm, timeout after 60 secs')
raise kvmagent.KvmError('failed to kill vm, timeout after 60 secs')
return
# undefine domain only if it is persistent
if not isPersistent:
return
if not linux.wait_callback_success(loop_undefine, None, timeout=60):
logger.warn('failed to undefine vm, timeout after 60 secs')
raise kvmagent.KvmError('failed to stop vm, timeout after 60 secs')
def destroy(self):
self.stop(strategy='cold')
def pause(self, timeout=5):
def loop_suspend(_):
try:
self.domain.suspend()
except:
pass
try:
return self.wait_for_state_change(self.VM_STATE_PAUSED)
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
if error_code == libvirt.VIR_ERR_NO_DOMAIN:
return True
else:
raise
if not linux.wait_callback_success(loop_suspend, None, timeout=10):
raise kvmagent.KvmError('failed to suspend vm ,timeout after 10 secs')
def resume(self, timeout=5):
def loop_resume(_):
try:
self.domain.resume()
except:
pass
try:
return self.wait_for_state_change(self.VM_STATE_RUNNING)
except libvirt.libvirtError as ex:
error_code = ex.get_error_code()
if error_code == libvirt.VIR_ERR_NO_DOMAIN:
return True
else:
raise
if not linux.wait_callback_success(loop_resume, None, timeout=60):
domblkerror = get_dom_error(self.uuid)
if domblkerror is None:
raise kvmagent.KvmError('failed to resume vm ,timeout after 60 secs')
else:
raise kvmagent.KvmError('failed to resume vm , because %s' % domblkerror)
def harden_console(self, mgmt_ip):
if is_namespace_used():
id_node = find_zstack_metadata_node(etree.fromstring(self.domain_xml), 'internalId')
id = id_node.text
else:
id = self.domain_xmlobject.metadata.internalId.text_
vir = VncPortIptableRule()
vir.vm_internal_id = id
vir.delete()
vir.host_ip = mgmt_ip
vir.port = self.get_console_port()
vir.apply()
def get_vdi_connect_port(self):
rsp = GetVncPortResponse()
for g in self.domain_xmlobject.devices.get_child_node_as_list('graphics'):
if g.type_ == 'vnc':
rsp.vncPort = g.port_
rsp.protocol = "vnc"
elif g.type_ == 'spice':
rsp.spicePort = g.port_
if g.hasattr('tlsPort_'):
rsp.spiceTlsPort = g.tlsPort_
rsp.protocol = "spice"
if rsp.vncPort is not None and rsp.spicePort is not None:
rsp.protocol = "vncAndSpice"
return rsp.protocol, rsp.vncPort, rsp.spicePort, rsp.spiceTlsPort
def get_console_port(self):
for g in self.domain_xmlobject.devices.get_child_node_as_list('graphics'):
if g.type_ == 'vnc' or g.type_ == 'spice':
return g.port_
def get_console_protocol(self):
for g in self.domain_xmlobject.devices.get_child_node_as_list('graphics'):
if g.type_ == 'vnc' or g.type_ == 'spice':
return g.type_
raise kvmagent.KvmError('no vnc console defined for vm[uuid:%s]' % self.uuid)
def attach_data_volume(self, volume, addons):
self._wait_vm_run_until_seconds(10)
self.timeout_object.wait_until_object_timeout('detach-volume-%s' % self.uuid)
self._attach_data_volume(volume, addons)
self.timeout_object.put('attach-volume-%s' % self.uuid, timeout=10)
@staticmethod
def set_volume_qos(addons, volumeUuid, volume_xml_obj):
if not addons:
return
for key in ["VolumeQos", "VolumeReadQos", "VolumeWriteQos"]:
vol_qos = addons[key]
if not vol_qos:
continue
qos = vol_qos[volumeUuid]
if not qos:
continue
if not qos.totalBandwidth and not qos.totalIops:
continue
mode = None
if key == 'VolumeQos':
mode = "total"
elif key == 'VolumeReadQos':
mode = "read"
elif key == 'VolumeWriteQos':
mode = "write"
iotune = e(volume_xml_obj, 'iotune')
if qos.totalBandwidth:
virsh_key = "%s_bytes_sec" % mode
e(iotune, virsh_key, str(qos.totalBandwidth))
if qos.totalIops:
virsh_key = "%s_iops_sec" % mode
e(iotune, virsh_key, str(qos.totalIops))
@staticmethod
def set_volume_serial_id(vol_uuid, volume_xml_obj):
if volume_xml_obj.get('type') != 'block' or volume_xml_obj.get('device') != 'lun':
e(volume_xml_obj, 'serial', vol_uuid)
def _attach_data_volume(self, volume, addons):
if volume.deviceId >= len(self.DEVICE_LETTERS):
err = "vm[uuid:%s] exceeds max disk limit, device id[%s], but only 0 ~ %d are allowed" % (self.uuid, volume.deviceId, len(self.DEVICE_LETTERS) - 1)
logger.warn(err)
raise kvmagent.KvmError(err)
def volume_native_aio(volume_xml_obj):
if not addons:
return
vol_aio = addons['NativeAio']
if not vol_aio:
return
drivers = volume_xml_obj.getiterator("driver")
if drivers is None or len(drivers) == 0:
return
drivers[0].set("io", "native")
def filebased_volume():
disk = etree.Element('disk', attrib={'type': 'file', 'device': 'disk'})
e(disk, 'driver', None, {'name': 'qemu', 'type': linux.get_img_fmt(volume.installPath), 'cache': volume.cacheMode})
e(disk, 'source', None, {'file': volume.installPath})
if volume.shareable:
e(disk, 'shareable')
if volume.useVirtioSCSI:
e(disk, 'target', None, {'dev': 'sd%s' % dev_letter, 'bus': 'scsi'})
e(disk, 'wwn', volume.wwn)
elif volume.useVirtio:
e(disk, 'target', None, {'dev': 'vd%s' % self.DEVICE_LETTERS[volume.deviceId], 'bus': 'virtio'})
else:
bus_type = self._get_controller_type()
dev_format = Vm._get_disk_target_dev_format(bus_type)
e(disk, 'target', None, {'dev': dev_format % dev_letter, 'bus': bus_type})
return disk
def scsilun_volume():
# default value of sgio is 'filtered'
# NOTE(weiw): scsi lun does not support aio or qos
disk = etree.Element('disk', attrib={'type': 'block', 'device': 'lun', 'sgio': get_sgio_value()})
e(disk, 'driver', None, {'name': 'qemu', 'type': 'raw'})
e(disk, 'source', None, {'dev': volume.installPath})
e(disk, 'target', None, {'dev': 'sd%s' % dev_letter, 'bus': 'scsi'})
return disk
def iscsibased_volume():
# type: () -> etree.Element
def virtio_iscsi():
vi = VirtioIscsi()
portal, vi.target, vi.lun = volume.installPath.lstrip('iscsi://').split('/')
vi.server_hostname, vi.server_port = portal.split(':')
vi.device_letter = dev_letter
vi.volume_uuid = volume.volumeUuid
vi.chap_username = volume.chapUsername
vi.chap_password = volume.chapPassword
return vi.to_xmlobject()
def blk_iscsi():
bi = BlkIscsi()
portal, bi.target, bi.lun = volume.installPath.lstrip('iscsi://').split('/')
bi.server_hostname, bi.server_port = portal.split(':')
bi.device_letter = dev_letter
bi.volume_uuid = volume.volumeUuid
bi.chap_username = volume.chapUsername
bi.chap_password = volume.chapPassword
return bi.to_xmlobject()
if volume.useVirtio:
return virtio_iscsi()
else:
return blk_iscsi()
def ceph_volume():
# type: () -> etree.Element
def virtio_ceph():
vc = VirtioCeph()
vc.volume = volume
vc.dev_letter = dev_letter
return vc.to_xmlobject()
def blk_ceph():
ic = BlkCeph()
ic.volume = volume
ic.dev_letter = dev_letter
ic.bus_type = self._get_controller_type()
return ic.to_xmlobject()
def virtio_scsi_ceph():
vsc = VirtioSCSICeph()
vsc.volume = volume
vsc.dev_letter = dev_letter
return vsc.to_xmlobject()
if volume.useVirtioSCSI:
return virtio_scsi_ceph()
else:
if volume.useVirtio:
return virtio_ceph()
else:
return blk_ceph()
def block_volume():
# type: () -> etree.Element
def blk():
disk = etree.Element('disk', {'type': 'block', 'device': 'disk', 'snapshot': 'external'})
e(disk, 'driver', None,
{'name': 'qemu', 'type': 'raw', 'cache': 'none', 'io': 'native'})
e(disk, 'source', None, {'dev': volume.installPath})
if volume.useVirtioSCSI:
e(disk, 'target', None, {'dev': 'sd%s' % dev_letter, 'bus': 'scsi'})
e(disk, 'wwn', volume.wwn)
else:
e(disk, 'target', None, {'dev': 'vd%s' % dev_letter, 'bus': 'virtio'})
return disk
return blk()
def spool_volume():
# type: () -> etree.Element
def blk():
imgfmt = linux.get_img_fmt(volume.installPath)
disk = etree.Element('disk', {'type': 'network', 'device': 'disk'})
e(disk, 'driver', None,
{'name': 'qemu', 'type': 'raw', 'cache': 'none', 'io': 'native'})
e(disk, 'source', None,
{'protocol': 'spool', 'name': make_spool_conf(imgfmt, dev_letter, volume)})
e(disk, 'target', None, {'dev': 'vd%s' % dev_letter, 'bus': 'virtio'})
return disk
return blk()
dev_letter = self._get_device_letter(volume, addons)
if volume.deviceType == 'iscsi':
disk_element = iscsibased_volume()
elif volume.deviceType == 'file':
disk_element = filebased_volume()
elif volume.deviceType == 'ceph':
disk_element = ceph_volume()
elif volume.deviceType == 'scsilun':
disk_element = scsilun_volume()
elif volume.deviceType == 'block':
disk_element = block_volume()
elif volume.deviceType == 'spool':
disk_element = spool_volume()
else:
raise Exception('unsupported volume deviceType[%s]' % volume.deviceType)
Vm.set_device_address(disk_element, volume, get_vm_by_uuid(self.uuid))
Vm.set_volume_qos(addons, volume.volumeUuid, disk_element)
Vm.set_volume_serial_id(volume.volumeUuid, disk_element)
volume_native_aio(disk_element)
xml = etree.tostring(disk_element)
logger.debug('attaching volume[%s] to vm[uuid:%s]:\n%s' % (volume.installPath, self.uuid, xml))
try:
# libvirt has a bug that attaching a volume right after the vm is created is likely to fail, so we retry three times here
@linux.retry(times=3, sleep_time=5)
def attach():
def wait_for_attach(_):
me = get_vm_by_uuid(self.uuid)
disk, _ = me._get_target_disk(volume, is_exception=False)
if not disk:
logger.debug('volume[%s] is still in process of attaching, wait it' % volume.installPath)
return bool(disk)
try:
self.domain.attachDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
if not linux.wait_callback_success(wait_for_attach, None, 5, 1):
raise Exception("cannot attach a volume[uuid: %s] to the vm[uuid: %s];"
"it's still not attached after 5 seconds" % (volume.volumeUuid, self.uuid))
except:
# check one more time
if not wait_for_attach(None):
raise
attach()
except libvirt.libvirtError as ex:
err = str(ex)
if 'Duplicate ID' in err:
err = ('unable to attach the volume[%s] to vm[uuid: %s], %s. This is a KVM issue, please reboot'
' the VM and try again' % (volume.volumeUuid, self.uuid, err))
elif 'No more available PCI slots' in err:
err = ('vm[uuid: %s] has no more PCI slots for volume[%s]. This is a Libvirt issue, please reboot'
' the VM and try again' % (self.uuid, volume.volumeUuid))
else:
err = 'unable to attach the volume[%s] to vm[uuid: %s], %s.' % (volume.volumeUuid, self.uuid, err)
logger.warn(linux.get_exception_stacktrace())
raise kvmagent.KvmError(err)
def _get_device_letter(self, volume, addons):
default_letter = Vm.DEVICE_LETTERS[volume.deviceId]
if not volume.useVirtioSCSI:
return default_letter
# usually device_letter_index equals device_id, but they are reversed when the volume uses VirtioSCSI because of ZSTAC-9641,
# so when a SCSI volume is attached again after being detached, its device_letter should keep its original name;
# otherwise the attach will fail with a duplicate device name.
def get_reversed_disks():
results = {}
for vol in addons.attachedDataVolumes:
_, disk_name = self._get_target_disk(vol)
if disk_name and disk_name[-1] != Vm.DEVICE_LETTERS[vol.deviceId]:
results[disk_name[-1]] = vol.deviceId
return results
# {actual_dev_letter: device_id_in_db}
# type: dict[str, int]
reversed_disks = get_reversed_disks()
if default_letter not in reversed_disks.keys():
return default_letter
else:
# letter has been occupied, so return reversed letter
logger.debug("reversed disk name: %s" % reversed_disks)
return Vm.DEVICE_LETTERS[reversed_disks[default_letter]]
def detach_data_volume(self, volume):
self._wait_vm_run_until_seconds(10)
self.timeout_object.wait_until_object_timeout('attach-volume-%s' % self.uuid)
self._detach_data_volume(volume)
self.timeout_object.put('detach-volume-%s' % self.uuid, timeout=10)
def _detach_data_volume(self, volume):
assert volume.deviceId != 0, 'the root volume must never be detached'
target_disk, disk_name = self._get_target_disk(volume, is_exception=False)
if not target_disk:
if self._volume_detach_timed_out(volume):
logger.debug('volume [installPath: %s] has been detached before' % volume.installPath)
self._clean_timeout_record(volume)
return
raise kvmagent.KvmError('unable to find data volume[%s] on vm[uuid:%s]' % (disk_name, self.uuid))
xmlstr = target_disk.dump()
logger.debug('detaching volume from vm[uuid:%s]:\n%s' % (self.uuid, xmlstr))
try:
# libvirt has a bug that detaching a volume right after the vm is created is likely to fail, so we retry three times here
@linux.retry(times=3, sleep_time=5)
def detach():
def wait_for_detach(_):
me = get_vm_by_uuid(self.uuid)
disk, _ = me._get_target_disk(volume, is_exception=False)
if disk:
logger.debug('volume[%s] is still in process of detaching, wait for it' % volume.installPath)
return not bool(disk)
try:
self.domain.detachDeviceFlags(xmlstr, libvirt.VIR_DOMAIN_AFFECT_LIVE)
if not linux.wait_callback_success(wait_for_detach, None, 5, 1):
raise Exception("unable to detach the volume[uuid:%s] from the vm[uuid:%s];"
"it's still attached after 5 seconds" %
(volume.volumeUuid, self.uuid))
except:
# check one more time
if not wait_for_detach(None):
self._record_volume_detach_timeout(volume)
logger.debug("detach timeout, record volume install path: %s" % volume.installPath)
raise
detach()
if self._volume_detach_timed_out(volume):
self._clean_timeout_record(volume)
logger.debug("detach success finally, remove record of volume install path: %s" % volume.installPath)
def logout_iscsi():
BlkIscsi.logout_portal(target_disk.source.dev_)
if volume.deviceType == 'iscsi':
if not volume.useVirtio:
logout_iscsi()
except libvirt.libvirtError as ex:
vm = get_vm_by_uuid(self.uuid)
logger.warn('vm dump: %s' % vm.domain_xml)
logger.warn(linux.get_exception_stacktrace())
raise kvmagent.KvmError(
'unable to detach volume[%s] from vm[uuid:%s], %s' % (volume.installPath, self.uuid, str(ex)))
def _record_volume_detach_timeout(self, volume):
Vm.timeout_detached_vol.add(volume.installPath + "-" + self.uuid)
def _volume_detach_timed_out(self, volume):
return volume.installPath + "-" + self.uuid in Vm.timeout_detached_vol
def _clean_timeout_record(self, volume):
Vm.timeout_detached_vol.remove(volume.installPath + "-" + self.uuid)
def _get_back_file(self, volume):
back = linux.qcow2_get_backing_file(volume)
return None if not back else back
def _get_backfile_chain(self, current):
back_files = []
def get_back_files(volume):
back_file = self._get_back_file(volume)
if not back_file:
return
back_files.append(back_file)
get_back_files(back_file)
get_back_files(current)
return back_files
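# Illustrative example (not executed), with hypothetical paths: for a chain
# top.qcow2 -> mid.qcow2 -> base.qcow2 (each backed by the next),
# _get_backfile_chain('/path/top.qcow2') returns
# ['/path/mid.qcow2', '/path/base.qcow2'] -- the current file itself excluded.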
@staticmethod
def ensure_no_internal_snapshot(volume):
if os.path.exists(volume) and shell.run("%s --backing-chain %s | grep 'Snapshot list:'"
% (qemu_img.subcmd('info'), volume)) == 0:
raise kvmagent.KvmError('found internal snapshot in the backing chain of volume[path:%s].' % volume)
# NOTE: code from Openstack nova
def _wait_for_block_job(self, disk_path, abort_on_error=False,
wait_for_job_clean=False):
"""Wait for libvirt block job to complete.
Libvirt may return either cur==end or an empty dict when
the job is complete, depending on whether the job has been
cleaned up by libvirt yet, or not.
:returns: True if still in progress
False if completed
"""
status = self.domain.blockJobInfo(disk_path, 0)
if status == -1 and abort_on_error:
raise kvmagent.KvmError('libvirt error while requesting blockjob info.')
try:
cur = status.get('cur', 0)
end = status.get('end', 0)
except Exception as e:
logger.warn(linux.get_exception_stacktrace())
return False
if wait_for_job_clean:
job_ended = not status
else:
job_ended = cur == end
return not job_ended
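# Illustrative polling loop (not executed): callers are expected to spin on
# the method until the block job finishes, e.g.
#   while self._wait_for_block_job(disk_path, abort_on_error=True):
#       time.sleep(0.5)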
def _get_target_disk_by_path(self, installPath, is_exception=True):
if installPath.startswith('sharedblock'):
installPath = shared_block_to_file(installPath)
for disk in self.domain_xmlobject.devices.get_child_node_as_list('disk'):
if not xmlobject.has_element(disk, 'source'):
continue
# file
if disk.source.file__ and disk.source.file_ == installPath:
return disk, disk.target.dev_
# ceph
if disk.source.name__ and disk.source.name_ in installPath:
return disk, disk.target.dev_
# 'block':
if disk.source.dev__ and disk.source.dev_ in installPath:
return disk, disk.target.dev_
if not is_exception:
return None, None
logger.debug('%s is not found on the vm[uuid:%s]' % (installPath, self.uuid))
raise kvmagent.KvmError('unable to find volume[installPath:%s] on vm[uuid:%s]' % (installPath, self.uuid))
def _get_all_volume_alias_names(self, volumes):
volumes.sort(key=lambda d: d.deviceId)
target_disk_alias_names = []
for volume in volumes:
target_disk, _ = self._get_target_disk(volume)
target_disk_alias_names.append(target_disk.alias.name_)
if len(volumes) != len(target_disk_alias_names):
raise Exception('not all disks have alias names, skipping rollback')
return target_disk_alias_names
def _get_target_disk(self, volume, is_exception=True):
if volume.installPath.startswith('sharedblock'):
volume.installPath = shared_block_to_file(volume.installPath)
for disk in self.domain_xmlobject.devices.get_child_node_as_list('disk'):
if not xmlobject.has_element(disk, 'source') and volume.deviceType != 'quorum':
continue
if volume.deviceType == 'iscsi':
if volume.useVirtio:
if disk.source.name__ and disk.source.name_ in volume.installPath:
return disk, disk.target.dev_
else:
if disk.source.dev__ and volume.volumeUuid in disk.source.dev_:
return disk, disk.target.dev_
elif volume.deviceType == 'file':
if disk.source.file__ and disk.source.file_ == volume.installPath:
return disk, disk.target.dev_
elif volume.deviceType == 'ceph':
if disk.source.name__ and disk.source.name_ in volume.installPath:
return disk, disk.target.dev_
elif volume.deviceType == 'scsilun':
if disk.source.dev__ and volume.installPath in disk.source.dev_:
return disk, disk.target.dev_
elif volume.deviceType == 'block':
if disk.source.dev__ and disk.source.dev_ in volume.installPath:
return disk, disk.target.dev_
elif volume.deviceType == 'quorum':
logger.debug("quorum file path is %s" % disk.backingStore.source.file_)
if disk.backingStore.source.file_ and disk.backingStore.source.file_ in volume.installPath:
disk.driver.type_ = "qcow2"
disk.source = disk.backingStore.source
return disk, disk.backingStore.source.file_
if not is_exception:
return None, None
logger.debug('%s is not found on the vm[uuid:%s], xml: %s' % (volume.installPath, self.uuid, self.domain_xml))
raise kvmagent.KvmError('unable to find volume[installPath:%s] on vm[uuid:%s]' % (volume.installPath, self.uuid))
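# a fault-tolerant (COLO) vm is modeled with 'quorum' type disks; see the
# coloPrimary/coloSecondary handling in from_StartVmCmd() below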
def _is_ft_vm(self):
return any(disk.type_ == "quorum" for disk in self.domain_xmlobject.devices.get_child_node_as_list('disk'))
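# resize_volume() below resolves the disk's alias name from the domain XML
# and drives the QEMU monitor through virsh; the issued command looks like
# (uuid, alias and size illustrative):
#   virsh qemu-monitor-command <vm-uuid> block_resize drive-virtio-disk0 10737418240B --hmp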
def resize_volume(self, volume, size):
device_id = volume.deviceId
target_disk, disk_name = self._get_target_disk(volume)
alias_name = target_disk.alias.name_
r, o, e = bash.bash_roe("virsh qemu-monitor-command %s block_resize drive-%s %sB --hmp"
% (self.uuid, alias_name, size))
logger.debug("resize volume[%s] of vm[%s]" % (alias_name, self.uuid))
if r != 0:
raise kvmagent.KvmError(
'unable to resize volume[id:{0}] of vm[uuid:{1}] because {2}'.format(device_id, self.uuid, e))
def take_live_volumes_delta_snapshots(self, vs_structs):
"""
:type vs_structs: list[VolumeSnapshotJobStruct]
:rtype: list[VolumeSnapshotResultStruct]
"""
disk_names = []
return_structs = []
memory_snapshot_struct = None
snapshot = etree.Element('domainsnapshot')
disks = e(snapshot, 'disks')
logger.debug(snapshot)
if len(vs_structs) == 0:
return return_structs
def get_size(install_path):
"""
:rtype: long
"""
return VmPlugin._get_snapshot_size(install_path)
logger.debug(vs_structs)
need_memory_snapshot = False
for vs_struct in vs_structs:
if vs_struct.live is False or vs_struct.full is True:
raise kvmagent.KvmError("volume %s is not live or full snapshot specified, "
"can not proceed")
if vs_struct.memory:
e(snapshot, 'memory', None, attrib={'snapshot': 'external', 'file': vs_struct.installPath})
need_memory_snapshot = True
snapshot_dir = os.path.dirname(vs_struct.installPath)
if not os.path.exists(snapshot_dir):
os.makedirs(snapshot_dir)
memory_snapshot_struct = vs_struct
continue
target_disk, disk_name = self._get_target_disk(vs_struct.volume)
if target_disk is None:
logger.debug("can not find %s" % vs_struct.volume.deviceId)
continue
snapshot_dir = os.path.dirname(vs_struct.installPath)
if not os.path.exists(snapshot_dir):
os.makedirs(snapshot_dir)
disk_names.append(disk_name)
d = e(disks, 'disk', None, attrib={'name': disk_name, 'snapshot': 'external', 'type': 'file'})
e(d, 'source', None, attrib={'file': vs_struct.installPath})
e(d, 'driver', None, attrib={'type': 'qcow2'})
return_structs.append(VolumeSnapshotResultStruct(
vs_struct.volumeUuid,
target_disk.source.file_,
vs_struct.installPath,
get_size(target_disk.source.file_)))
self.refresh()
for disk in self.domain_xmlobject.devices.get_child_node_as_list('disk'):
if disk.target.dev_ not in disk_names:
e(disks, 'disk', None, attrib={'name': disk.target.dev_, 'snapshot': 'no'})
xml = etree.tostring(snapshot)
logger.debug('creating live snapshot for vm[uuid:{0}] volumes[id:{1}]:\n{2}'.format(self.uuid, disk_names, xml))
snap_flags = libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA | libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_ATOMIC
if not need_memory_snapshot:
snap_flags |= libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY
try:
self.domain.snapshotCreateXML(xml, snap_flags)
if memory_snapshot_struct:
return_structs.append(VolumeSnapshotResultStruct(
memory_snapshot_struct.volumeUuid,
memory_snapshot_struct.installPath,
memory_snapshot_struct.installPath,
get_size(memory_snapshot_struct.installPath)))
return return_structs
except libvirt.libvirtError as ex:
logger.warn(linux.get_exception_stacktrace())
raise kvmagent.KvmError(
'unable to take live snapshot of vm[uuid:{0}] volumes[id:{1}], {2}'.format(self.uuid, disk_names, str(ex)))
def take_volume_snapshot(self, volume, install_path, full_snapshot=False):
device_id = volume.deviceId
target_disk, disk_name = self._get_target_disk(volume)
snapshot_dir = os.path.dirname(install_path)
if not os.path.exists(snapshot_dir):
os.makedirs(snapshot_dir)
previous_install_path = target_disk.source.file_
back_file_len = len(self._get_backfile_chain(previous_install_path))
# for RHEL, the base image's back_file_len == 1; for Ubuntu, back_file_len == 0
first_snapshot = full_snapshot and (back_file_len == 1 or back_file_len == 0)
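# take_delta_snapshot() builds a <domainsnapshot> document that snapshots
# only this disk externally and marks every other disk 'no'; the XML fed
# to snapshotCreateXML() looks roughly like (paths illustrative):
#   <domainsnapshot>
#     <disks>
#       <disk name='vda' snapshot='external' type='file'>
#         <source file='/path/to/new-top.qcow2'/>
#         <driver type='qcow2'/>
#       </disk>
#       <disk name='vdb' snapshot='no'/>
#     </disks>
#   </domainsnapshot>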
def take_delta_snapshot():
snapshot = etree.Element('domainsnapshot')
disks = e(snapshot, 'disks')
d = e(disks, 'disk', None, attrib={'name': disk_name, 'snapshot': 'external', 'type': 'file'})
e(d, 'source', None, attrib={'file': install_path})
e(d, 'driver', None, attrib={'type': 'qcow2'})
# QEMU 2.3 by default creates snapshots on all devices,
# but we only need one
self.refresh()
for disk in self.domain_xmlobject.devices.get_child_node_as_list('disk'):
if disk.target.dev_ != disk_name:
e(disks, 'disk', None, attrib={'name': disk.target.dev_, 'snapshot': 'no'})
xml = etree.tostring(snapshot)
logger.debug('creating snapshot for vm[uuid:{0}] volume[id:{1}]:\n{2}'.format(self.uuid, device_id, xml))
snap_flags = libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY | libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA
try:
self.domain.snapshotCreateXML(xml, snap_flags)
return previous_install_path, install_path
except libvirt.libvirtError as ex:
logger.warn(linux.get_exception_stacktrace())
raise kvmagent.KvmError(
'unable to take snapshot of vm[uuid:{0}] volume[id:{1}], {2}'.format(self.uuid, device_id, str(ex)))
def take_full_snapshot():
self.block_stream_disk(volume)
return take_delta_snapshot()
if first_snapshot:
# the first snapshot is always full snapshot
# at this moment, delta snapshot returns the original volume as full snapshot
return take_delta_snapshot()
if full_snapshot:
return take_full_snapshot()
else:
return take_delta_snapshot()
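# block_stream_disk() below calls virDomainBlockRebase() with base=None,
# i.e. a block stream that copies the entire backing chain into the active
# image; once the job completes the image should have no backing file left,
# which is double-checked via qcow2_get_backing_file()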
def block_stream_disk(self, volume):
target_disk, disk_name = self._get_target_disk(volume)
install_path = target_disk.source.file_
logger.debug('start block stream for disk %s' % disk_name)
self.domain.blockRebase(disk_name, None, 0, 0)
logger.debug('block stream for disk %s in processing' % disk_name)
def wait_job(_):
logger.debug('block stream is waiting for %s blockRebase job completion' % disk_name)
return not self._wait_for_block_job(disk_name, abort_on_error=True)
if not linux.wait_callback_success(wait_job, timeout=21600, ignore_exception_in_callback=True):
raise kvmagent.KvmError('block stream failed')
def wait_backing_file_cleared(_):
return not linux.qcow2_get_backing_file(install_path)
if not linux.wait_callback_success(wait_backing_file_cleared, timeout=60, ignore_exception_in_callback=True):
raise kvmagent.KvmError('block stream succeeded, but backing file is not cleared')
def list_blk_sources(self):
"""list domain blocks (aka. domblklist) -- but with sources only"""
tree = etree.fromstring(self.domain_xml)
res = []
for disk in tree.findall("devices/disk"):
for src in disk.findall("source"):
src_file = src.get("file")
if src_file is None:
continue
res.append(src_file)
return res
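# migrate() below composes the libvirt migration flags from the command:
# always live + peer2peer + undefine-source, plus optionally auto-converge,
# xbzrle compression, and a full or incremental non-shared disk copy when
# storage migration is requested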
def migrate(self, cmd):
if self.state == Vm.VM_STATE_SHUTDOWN:
raise kvmagent.KvmError('vm[uuid:%s] is stopped, cannot live migrate' % cmd.vmUuid)
current_hostname = linux.get_host_name()
if cmd.migrateFromDestination:
hostname = cmd.destHostIp.replace('.', '-')
else:
hostname = cmd.srcHostIp.replace('.', '-')
if current_hostname == 'localhost.localdomain' or current_hostname == 'localhost':
# set the hostname, otherwise the migration will fail
shell.call('hostname %s.zstack.org' % hostname)
destHostIp = cmd.destHostIp
destUrl = "qemu+tcp://{0}/system".format(destHostIp)
tcpUri = "tcp://{0}".format(destHostIp)
flag = (libvirt.VIR_MIGRATE_LIVE |
libvirt.VIR_MIGRATE_PEER2PEER |
libvirt.VIR_MIGRATE_UNDEFINE_SOURCE)
if cmd.autoConverge:
flag |= libvirt.VIR_MIGRATE_AUTO_CONVERGE
if cmd.xbzrle:
flag |= libvirt.VIR_MIGRATE_COMPRESSED
if cmd.storageMigrationPolicy == 'FullCopy':
flag |= libvirt.VIR_MIGRATE_NON_SHARED_DISK
elif cmd.storageMigrationPolicy == 'IncCopy':
flag |= libvirt.VIR_MIGRATE_NON_SHARED_INC
# to work around a libvirt bug (cf. RHBZ#1494454)
if LIBVIRT_MAJOR_VERSION >= 4:
if any(s.startswith('/dev/') for s in self.list_blk_sources()):
flag |= libvirt.VIR_MIGRATE_UNSAFE
if cmd.useNuma:
flag |= libvirt.VIR_MIGRATE_PERSIST_DEST
stage = get_task_stage(cmd)
timeout = 1800 if cmd.timeout is None else cmd.timeout
class MigrateDaemon(plugin.TaskDaemon):
def __init__(self, domain):
super(MigrateDaemon, self).__init__(cmd, 'MigrateVm', timeout)
self.domain = domain
def _get_percent(self):
try:
stats = self.domain.jobStats()
if libvirt.VIR_DOMAIN_JOB_DATA_REMAINING in stats and libvirt.VIR_DOMAIN_JOB_DATA_TOTAL in stats:
remain = stats[libvirt.VIR_DOMAIN_JOB_DATA_REMAINING]
total = stats[libvirt.VIR_DOMAIN_JOB_DATA_TOTAL]
if total == 0:
return
percent = min(99, 100.0 - remain * 100.0 / total)
return get_exact_percent(percent, stage)
except libvirt.libvirtError:
pass
except:
logger.debug(linux.get_exception_stacktrace())
def _cancel(self):
logger.debug('cancelling vm[uuid:%s] migration' % cmd.vmUuid)
self.domain.abortJob()
def __exit__(self, exc_type, exc_val, exc_tb):
super(MigrateDaemon, self).__exit__(exc_type, exc_val, exc_tb)
if exc_type == libvirt.libvirtError:
raise kvmagent.KvmError(
'unable to migrate vm[uuid:%s] to %s, %s' % (cmd.vmUuid, destUrl, str(exc_val)))
with MigrateDaemon(self.domain):
logger.debug('migrating vm[uuid:{0}] to dest url[{1}]'.format(self.uuid, destUrl))
self.domain.migrateToURI2(destUrl, tcpUri, None, flag, None, 0)
try:
logger.debug('migrating vm[uuid:{0}] to dest url[{1}]'.format(self.uuid, destUrl))
if not linux.wait_callback_success(self.wait_for_state_change, callback_data=None, timeout=timeout):
try: self.domain.abortJob()
except: pass
raise kvmagent.KvmError('timeout after %d seconds' % timeout)
except kvmagent.KvmError:
raise
except:
logger.debug(linux.get_exception_stacktrace())
logger.debug('successfully migrated vm[uuid:{0}] to dest url[{1}]'.format(self.uuid, destUrl))
def _interface_cmd_to_xml(self, cmd, action=None):
vhostSrcPath = cmd.addons['vhostSrcPath'] if cmd.addons else None
brMode = cmd.addons['brMode'] if cmd.addons else None
interface = Vm._build_interface_xml(cmd.nic, None, vhostSrcPath, action, brMode)
def addon():
if cmd.addons and cmd.addons['NicQos']:
qos = cmd.addons['NicQos']
Vm._add_qos_to_interface(interface, qos)
addon()
return etree.tostring(interface)
def _wait_vm_run_until_seconds(self, sec):
vm_pid = linux.find_process_by_cmdline([kvmagent.get_qemu_path(), self.uuid])
if not vm_pid:
raise Exception('cannot find pid for vm[uuid:%s]' % self.uuid)
up_time = linux.get_process_up_time_in_second(vm_pid)
def wait(_):
return linux.get_process_up_time_in_second(vm_pid) > sec
if up_time < sec and not linux.wait_callback_success(wait, timeout=60):
raise Exception("vm[uuid:%s] seems hang, its process[pid:%s] up-time is not increasing after %s seconds" %
(self.uuid, vm_pid, 60))
def attach_iso(self, cmd):
iso = cmd.iso
if iso.deviceId >= len(self.ISO_DEVICE_LETTERS):
err = 'vm[uuid:%s] exceeds max iso limit, device id[%s], but only 0 ~ %d are allowed' % (self.uuid, iso.deviceId, len(self.ISO_DEVICE_LETTERS) - 1)
logger.warn(err)
raise kvmagent.KvmError(err)
device_letter = self.ISO_DEVICE_LETTERS[iso.deviceId]
dev = self._get_iso_target_dev(device_letter)
bus = self._get_controller_type()
if iso.path.startswith('ceph'):
ic = IsoCeph()
ic.iso = iso
cdrom = ic.to_xmlobject(dev, bus)
else:
if iso.path.startswith('sharedblock'):
iso.path = shared_block_to_file(iso.path)
cdrom = etree.Element('disk', {'type': 'file', 'device': 'cdrom'})
e(cdrom, 'driver', None, {'name': 'qemu', 'type': 'raw'})
e(cdrom, 'source', None, {'file': iso.path})
e(cdrom, 'target', None, {'dev': dev, 'bus': bus})
e(cdrom, 'readonly', None)
xml = etree.tostring(cdrom)
if LIBVIRT_MAJOR_VERSION >= 4:
addr = find_domain_cdrom_address(self.domain.XMLDesc(0), dev)
ridx = xml.rindex('<')
xml = xml[:ridx] + addr.dump() + xml[ridx:]
logger.debug('attaching ISO to the vm[uuid:%s]:\n%s' % (self.uuid, xml))
try:
self.domain.updateDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
except libvirt.libvirtError as ex:
err = str(ex)
logger.warn('unable to attach the iso to the VM[uuid:%s], %s' % (self.uuid, err))
if "QEMU command 'change': error connecting: Operation not supported" in err:
raise Exception('cannot hotplug ISO to the VM[uuid:%s]. It is a libvirt bug: %s.'
' You can power off the vm and attach it again.' %
(self.uuid, 'https://bugzilla.redhat.com/show_bug.cgi?id=1541702'))
elif 'timed out waiting for disk tray status update' in err:
raise Exception(
'unable to attach the iso to the VM[uuid:%s]. It seems to have hit an internal error,'
' you can reboot the vm and try again' % self.uuid)
else:
raise Exception('unable to attach the iso to the VM[uuid:%s].' % self.uuid)
def check(_):
me = get_vm_by_uuid(self.uuid)
for disk in me.domain_xmlobject.devices.get_child_node_as_list('disk'):
if disk.device_ == "cdrom" and xmlobject.has_element(disk, 'source'):
if disk.target.dev__ and disk.target.dev_ == dev:
return True
return False
if not linux.wait_callback_success(check, None, 30, 1):
raise Exception('cannot attach the iso[%s] for the VM[uuid:%s]. The device is not present after 30s' %
(iso.path, cmd.vmUuid))
def detach_iso(self, cmd):
cdrom = None
for disk in self.domain_xmlobject.devices.get_child_node_as_list('disk'):
if disk.device_ == "cdrom":
cdrom = disk
break
if not cdrom:
return
device_letter = self.ISO_DEVICE_LETTERS[cmd.deviceId]
dev = self._get_iso_target_dev(device_letter)
bus = self._get_controller_type()
cdrom = etree.Element('disk', {'type': 'file', 'device': 'cdrom'})
e(cdrom, 'driver', None, {'name': 'qemu', 'type': 'raw'})
e(cdrom, 'target', None, {'dev': dev, 'bus': bus})
e(cdrom, 'readonly', None)
xml = etree.tostring(cdrom)
if LIBVIRT_MAJOR_VERSION >= 4:
addr = find_domain_cdrom_address(self.domain.XMLDesc(0), dev)
ridx = xml.rindex('<')
xml = xml[:ridx] + addr.dump() + xml[ridx:]
logger.debug('detaching ISO from the vm[uuid:%s]:\n%s' % (self.uuid, xml))
try:
self.domain.updateDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_DEVICE_MODIFY_FORCE)
except libvirt.libvirtError as ex:
err = str(ex)
logger.warn('unable to detach the iso from the VM[uuid:%s], %s' % (self.uuid, err))
if 'is locked' in err and 'eject' in err:
raise Exception(
'unable to detach the iso from the VM[uuid:%s]. It seems the ISO is still mounted in the operating system'
', please umount it first' % self.uuid)
else:
raise Exception(
'unable to detach the iso from the VM[uuid:%s]' % self.uuid)
def check(_):
me = get_vm_by_uuid(self.uuid)
for disk in me.domain_xmlobject.devices.get_child_node_as_list('disk'):
if disk.device_ == "cdrom" and xmlobject.has_element(disk, 'source') == False:
if disk.target.dev__ and disk.target.dev_ == dev:
return True
return False
if not linux.wait_callback_success(check, None, 30, 1):
raise Exception('cannot detach the cdrom from the VM[uuid:%s]. The device is still present after 30s' %
self.uuid)
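# the tuple-indexing trick below maps machine type/arch to the cdrom bus:
# max(is_q35, is_arm_or_mips * 2) evaluates to 0 -> 'ide' (pc),
# 1 -> 'sata' (q35) or 2 -> 'scsi' (aarch64/mips64el)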
def _get_controller_type(self):
is_q35 = 'q35' in self.domain_xmlobject.os.type.machine_
return ('ide', 'sata', 'scsi')[max(is_q35, (HOST_ARCH in ['aarch64', 'mips64el']) * 2)]
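# the helpers below map a device letter to a guest target dev name, e.g.
# ISO letter 'c' -> 'hdc' ('sdc' on aarch64/mips64el), and bus 'virtio'
# plus data volume letter 'b' -> 'vdb'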
@staticmethod
def _get_iso_target_dev(device_letter):
return "sd%s" % device_letter if (HOST_ARCH in ['aarch64', 'mips64el']) else 'hd%s' % device_letter
@staticmethod
def _get_disk_target_dev_format(bus_type):
return {'virtio': 'vd%s', 'scsi': 'sd%s', 'sata': 'hd%s', 'ide': 'hd%s'}[bus_type]
def hotplug_mem(self, memory_size):
mem_size = (memory_size - self.get_memory()) / 1024
xml = "<memory model='dimm'><target><size unit='KiB'>%d</size><node>0</node></target></memory>" % mem_size
logger.debug('hot plug memory: %d KiB' % mem_size)
try:
self.domain.attachDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
except libvirt.libvirtError as ex:
err = str(ex)
logger.warn('unable to hotplug memory in vm[uuid:%s], %s' % (self.uuid, err))
if "cannot set up guest memory" in err:
raise kvmagent.KvmError("No enough physical memory for guest")
elif "would exceed domain's maxMemory config" in err:
raise kvmagent.KvmError(err + "; please check if you have rebooted the VM to make NUMA take effect")
else:
raise kvmagent.KvmError(err)
return
def hotplug_cpu(self, cpu_num):
logger.debug('set cpus: %d cpus' % cpu_num)
try:
self.domain.setVcpusFlags(cpu_num, libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
except libvirt.libvirtError as ex:
err = str(ex)
logger.warn('unable to set cpus in vm[uuid:%s], %s' % (self.uuid, err))
if "requested vcpus is greater than max" in err:
err += "; please check if you have rebooted the VM to make NUMA take effect"
raise kvmagent.KvmError(err)
return
@linux.retry(times=3, sleep_time=5)
def _attach_nic(self, cmd):
def check_device(_):
self.refresh()
for iface in self.domain_xmlobject.devices.get_child_node_as_list('interface'):
if iface.mac.address_ == cmd.nic.mac:
# a VF nic doesn't have an internal name
if cmd.nic.pciDeviceAddress is not None:
return True
else:
return linux.is_network_device_existing(cmd.nic.nicInternalName)
return False
try:
if check_device(None):
return
xml = self._interface_cmd_to_xml(cmd, action='Attach')
logger.debug('attaching nic:\n%s' % xml)
if self.state == self.VM_STATE_RUNNING or self.state == self.VM_STATE_PAUSED:
self.domain.attachDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
else:
self.domain.attachDevice(xml)
if not linux.wait_callback_success(check_device, interval=0.5, timeout=30):
raise Exception('nic device does not show up after 30 seconds')
except:
# check one more time
if not check_device(None):
raise
def attach_nic(self, cmd):
self._wait_vm_run_until_seconds(10)
self.timeout_object.wait_until_object_timeout('%s-attach-nic' % self.uuid)
try:
self._attach_nic(cmd)
except libvirt.libvirtError as ex:
err = str(ex)
if 'Duplicate ID' in err:
err = ('unable to attach an L3 network to the vm[uuid:%s], %s. This is a KVM issue, please reboot'
' the vm and try again' % (self.uuid, err))
elif 'No more available PCI slots' in err:
err = ('vm[uuid: %s] has no more PCI slots for vm nic[mac:%s]. This is a libvirt issue, please reboot'
' the VM and try again' % (self.uuid, cmd.nic.mac))
else:
err = 'unable to attach an L3 network to the vm[uuid:%s], %s' % (self.uuid, err)
raise kvmagent.KvmError(err)
# within 10 seconds, no detach-nic operation can be performed,
# to work around a libvirt bug
self.timeout_object.put('%s-detach-nic' % self.uuid, timeout=10)
@linux.retry(times=3, sleep_time=5)
def _detach_nic(self, cmd):
def check_device(_):
self.refresh()
for iface in self.domain_xmlobject.devices.get_child_node_as_list('interface'):
if iface.mac.address_ == cmd.nic.mac:
return False
return shell.run('ip link show dev %s > /dev/null' % cmd.nic.nicInternalName) != 0
if check_device(None):
return
try:
xml = self._interface_cmd_to_xml(cmd, action='Detach')
logger.debug('detaching nic:\n%s' % xml)
if self.state == self.VM_STATE_RUNNING or self.state == self.VM_STATE_PAUSED:
self.domain.detachDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
else:
self.domain.detachDevice(xml)
if not linux.wait_callback_success(check_device, interval=0.5, timeout=10):
raise Exception('NIC device is still attached after 10 seconds. Please check the virtio driver, or stop the VM and detach again.')
except:
# check one more time
if not check_device(None):
logger.warn('failed to detach a nic[mac:%s], dump vm xml:\n%s' % (cmd.nic.mac, self.domain_xml))
raise
def detach_nic(self, cmd):
self._wait_vm_run_until_seconds(10)
self.timeout_object.wait_until_object_timeout('%s-detach-nic' % self.uuid)
self._detach_nic(cmd)
# within 10 seconds, no attach-nic operation can be performed,
# to work around a libvirt bug
self.timeout_object.put('%s-attach-nic' % self.uuid, timeout=10)
def update_nic(self, cmd):
self._wait_vm_run_until_seconds(10)
self.timeout_object.wait_until_object_timeout('%s-update-nic' % self.uuid)
self._update_nic(cmd)
self.timeout_object.put('%s-update-nic' % self.uuid, timeout=10)
def _update_nic(self, cmd):
if not cmd.nics:
return
def check_device(nic):
self.refresh()
for iface in self.domain_xmlobject.devices.get_child_node_as_list('interface'):
if iface.mac.address_ == nic.mac:
return linux.is_network_device_existing(nic.nicInternalName)
return False
def addon(nic_xml_object):
if cmd.addons and cmd.addons['NicQos'] and cmd.addons['NicQos'][nic.uuid]:
qos = cmd.addons['NicQos'][nic.uuid]
Vm._add_qos_to_interface(nic_xml_object, qos)
for nic in cmd.nics:
interface = Vm._build_interface_xml(nic)
addon(interface)
xml = etree.tostring(interface)
logger.debug('updating nic:\n%s' % xml)
if self.state == self.VM_STATE_RUNNING or self.state == self.VM_STATE_PAUSED:
self.domain.updateDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
else:
self.domain.updateDeviceFlags(xml)
if not linux.wait_callback_success(check_device, nic, interval=0.5, timeout=30):
raise Exception('nic device does not show up after 30 seconds')
def _check_qemuga_info(self, info):
if info:
for command in info["return"]["supported_commands"]:
if command["name"] == "guest-set-user-password":
if command["enabled"]:
return True
return False
def _wait_until_qemuga_ready(self, timeout, uuid):
finish_time = time.time() + (timeout / 1000)
while time.time() < finish_time:
state = get_all_vm_states().get(uuid)
if state != Vm.VM_STATE_RUNNING:
raise kvmagent.KvmError("vm's state is %s, not running" % state)
r, o, e = bash.bash_roe("virsh qemu-agent-command %s --cmd '{\"execute\":\"guest-info\"}'" % self.uuid)
if r != 0:
logger.warn("get guest info from vm[uuid:%s]: %s, %s" % (self.uuid, o, e))
else:
logger.debug("qga_json: %s" % o)
info = json.loads(o)['return']
if LooseVersion(info["version"]) < LooseVersion('2.3'):
raise kvmagent.KvmError("You need to install version 2.3 or above to support set user password ,qga current version is %s" % info["version"])
else:
return True
time.sleep(2)
raise kvmagent.KvmError("qemu-agent service is not ready in vm...")
def _escape_char_password(self, password):
escape_str = "\*\#\(\)\<\>\|\"\'\/\\\$\`\&\{\}"
des = ""
for c in list(password):
if c in escape_str:
des += "\\"
des += c
return des
def change_vm_password(self, cmd):
uuid = self.uuid
# check the vm state first, then choose the method in different way
state = get_all_vm_states().get(uuid)
timeout = 60000
if state == Vm.VM_STATE_RUNNING:
# before set-user-password, we must check if os ready in the guest
self._wait_until_qemuga_ready(timeout, uuid)
try:
escape_password = self._escape_char_password(cmd.accountPerference.accountPassword)
shell.call('virsh set-user-password %s %s %s' % (self.uuid,
cmd.accountPerference.userAccount,
escape_password))
except Exception as e:
logger.warn(e.message)
if e.message.find("child process has failed to set user password") >= 0:
logger.warn('user [%s] does not exist!' % cmd.accountPerference.userAccount)
raise kvmagent.KvmError('user [%s] does not exist on vm[uuid: %s]!' % (cmd.accountPerference.userAccount, uuid))
else:
raise e
else:
raise kvmagent.KvmError("vm is not running, cannot connect to qemu-ga")
def merge_snapshot(self, cmd):
target_disk, disk_name = self._get_target_disk(cmd.volume)
@linux.retry(times=3, sleep_time=3)
def do_pull(base, top):
logger.debug('start block rebase [active: %s, new backing: %s]' % (top, base))
# Double check (c.f. issue #1323)
def wait_previous_job(_):
logger.debug('merge snapshot is checking previous block job')
return not self._wait_for_block_job(disk_name, abort_on_error=True)
if not linux.wait_callback_success(wait_previous_job, timeout=21600, ignore_exception_in_callback=True):
raise kvmagent.KvmError('merge snapshot failed - pending previous block job')
self.domain.blockRebase(disk_name, base, 0)
def wait_job(_):
logger.debug('merging snapshot chain is waiting for blockRebase job completion')
return not self._wait_for_block_job(disk_name, abort_on_error=True)
if not linux.wait_callback_success(wait_job, timeout=21600):
raise kvmagent.KvmError('live merging snapshot chain failed, timeout after 6 hours')
# Double check (c.f. issue #757)
if self._get_back_file(top) != base:
raise kvmagent.KvmError('[libvirt bug] live merge snapshot failed')
logger.debug('end block rebase [active: %s, new backing: %s]' % (top, base))
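# fullRebase merges the entire chain into destPath (blockRebase with
# base=None, i.e. a full block stream); otherwise only the data between
# srcPath (the new backing file) and destPath (the active image) is pulled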
if cmd.fullRebase:
do_pull(None, cmd.destPath)
else:
do_pull(cmd.srcPath, cmd.destPath)
def take_volumes_shallow_backup(self, task_spec, volumes, dst_backup_paths):
if self._is_ft_vm():
self._take_volumes_top_drive_backup(task_spec, volumes, dst_backup_paths)
else:
self._take_volumes_shallow_block_copy(task_spec, volumes, dst_backup_paths)
def _take_volumes_top_drive_backup(self, task_spec, volumes, dst_backup_paths):
class DriveBackupDaemon(plugin.TaskDaemon):
def __init__(self, domain_uuid):
super(DriveBackupDaemon, self).__init__(task_spec, 'TakeVolumeBackup', report_progress=False)
self.domain_uuid = domain_uuid
def __exit__(self, exc_type, exc_val, exc_tb):
super(DriveBackupDaemon, self).__exit__(exc_type, exc_val, exc_tb)
os.unlink(tmp_workspace)
def _cancel(self):
logger.debug("cancel vm[uuid:%s] backup" % self.domain_uuid)
ImageStoreClient().stop_backup_jobs(self.domain_uuid)
def _get_percent(self):
pass
tmp_workspace = os.path.join(tempfile.gettempdir(), uuidhelper.uuid())
with DriveBackupDaemon(self.uuid):
self._do_take_volumes_top_drive_backup(volumes, dst_backup_paths, tmp_workspace)
def _do_take_volumes_top_drive_backup(self, volumes, dst_backup_paths, tmp_workspace):
args = {}
for volume in volumes:
target_disk, _ = self._get_target_disk(volume)
args[str(volume.deviceId)] = VmPlugin.get_backup_device_name(target_disk), 0
dst_workspace = os.path.join(os.path.dirname(dst_backup_paths['0']), 'workspace')
linux.mkdir(dst_workspace)
os.symlink(dst_workspace, tmp_workspace)
res = ImageStoreClient().top_backup_volumes(self.uuid, args.values(), tmp_workspace)
job_res = jsonobject.loads(res)
for device_id, dst_path in dst_backup_paths.items():
device_name = args[device_id][0]
back_path = os.path.join(dst_workspace, job_res[device_name].backupFile)
linux.mkdir(os.path.dirname(dst_path))
shutil.move(back_path, dst_path)
def _take_volumes_shallow_block_copy(self, task_spec, volumes, dst_backup_paths):
# type: (Vm, jsonobject.JsonObject, list[xmlobject.XmlObject], dict[str, str]) -> None
class VolumeInfo(object):
def __init__(self, dev_name):
self.dev_name = dev_name # type: str
self.end_time = None # type: float
class ShallowBackupDaemon(plugin.TaskDaemon):
def __init__(self, domain):
super(ShallowBackupDaemon, self).__init__(task_spec, 'TakeVolumeBackup', report_progress=False)
self.domain = domain
def _cancel(self):
logger.debug("cancel vm[uuid:%s] backup" % self.domain.name())
for v in volume_backup_info.values():
if self.domain.blockJobInfo(v.dev_name, 0):
self.domain.blockJobAbort(v.dev_name)
def _get_percent(self):
pass
volume_backup_info = {}
for volume in volumes:
target_disk, _ = self._get_target_disk(volume)
volume_backup_info[str(volume.deviceId)] = VolumeInfo(target_disk.target.dev_)
with ShallowBackupDaemon(self.domain):
self._do_take_volumes_shallow_block_copy(volume_backup_info, dst_backup_paths)
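# _do_take_volumes_shallow_block_copy() below drives one transient, shallow
# virDomainBlockCopy() per volume (top image only, nothing persisted in the
# domain XML), polls blockJobInfo() until cur == end on every disk, then
# briefly pauses the vm and aborts the jobs, which should leave each
# destination file as a consistent point-in-time copy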
def _do_take_volumes_shallow_block_copy(self, volume_backup_info, dst_backup_paths):
dom = self.domain
flags = libvirt.VIR_DOMAIN_BLOCK_COPY_TRANSIENT_JOB | libvirt.VIR_DOMAIN_BLOCK_COPY_SHALLOW
for device_id, v in volume_backup_info.items():
vol_dir = os.path.dirname(dst_backup_paths[device_id])
linux.mkdir(vol_dir)
logger.info("start copying {}/{} ...".format(self.uuid, v.dev_name))
dom.blockCopy(v.dev_name, "<disk type='file'><source file='{}'/><driver type='qcow2'/></disk>"
.format(dst_backup_paths[device_id]), None, flags)
while time.sleep(5) or any(not v.end_time for v in volume_backup_info.values()):
for v in volume_backup_info.values():
if v.end_time:
continue
info = dom.blockJobInfo(v.dev_name, 0)
if not info:
raise Exception('blockjob not found on disk: ' + v.dev_name)
elif info['cur'] == info['end']:
v.end_time = time.time()
logger.info("completed copying {}/{} ...".format(self.uuid, v.dev_name))
with vm_operator.TemporaryPauseVmOperator(dom):
for v in volume_backup_info.values():
dom.blockJobAbort(v.dev_name)
@staticmethod
def from_virt_domain(domain):
vm = Vm()
vm.domain = domain
(state, _, _, _, _) = domain.info()
vm.state = Vm.power_state[state]
vm.domain_xml = domain.XMLDesc(0)
vm.domain_xmlobject = xmlobject.loads(vm.domain_xml)
vm.uuid = vm.domain_xmlobject.name.text_
return vm
@staticmethod
def from_StartVmCmd(cmd):
use_numa = cmd.useNuma
machine_type = get_machineType(cmd.machineType)
if HOST_ARCH == "aarch64" and cmd.bootMode == 'Legacy':
raise kvmagent.KvmError("Aarch64 does not support legacy, please change boot mode to UEFI instead of Legacy on your VM or Image.")
if cmd.architecture and cmd.architecture != HOST_ARCH:
raise kvmagent.KvmError("Image architecture[{}] not matched host architecture[{}].".format(cmd.architecture, HOST_ARCH))
default_bus_type = ('ide', 'sata', 'scsi')[max(machine_type == 'q35', (HOST_ARCH in ['aarch64', 'mips64el']) * 2)]
elements = {}
def make_root():
root = etree.Element('domain')
root.set('type', get_domain_type())
root.set('xmlns:qemu', 'http://libvirt.org/schemas/domain/qemu/1.0')
elements['root'] = root
def make_memory_backing():
root = elements['root']
backing = e(root, 'memoryBacking')
e(backing, "hugepages")
e(backing, "nosharepages")
e(backing, "allocation", attrib={'mode': 'immediate'})
def make_cpu():
if use_numa:
root = elements['root']
tune = e(root, 'cputune')
def on_x86_64():
e(root, 'vcpu', '128', {'placement': 'static', 'current': str(cmd.cpuNum)})
# e(root,'vcpu',str(cmd.cpuNum),{'placement':'static'})
if cmd.nestedVirtualization == 'host-model':
cpu = e(root, 'cpu', attrib={'mode': 'host-model'})
e(cpu, 'model', attrib={'fallback': 'allow'})
elif cmd.nestedVirtualization == 'host-passthrough':
cpu = e(root, 'cpu', attrib={'mode': 'host-passthrough'})
e(cpu, 'model', attrib={'fallback': 'allow'})
elif cmd.nestedVirtualization == 'custom':
cpu = e(root, 'cpu', attrib={'mode': 'custom', 'match': 'minimum'})
e(cpu, 'model', cmd.vmCpuModel, attrib={'fallback': 'allow'})
else:
cpu = e(root, 'cpu')
# e(cpu, 'topology', attrib={'sockets': str(cmd.socketNum), 'cores': str(cmd.cpuOnSocket), 'threads': '1'})
mem = cmd.memory / 1024
e(cpu, 'topology', attrib={'sockets': '32', 'cores': '4', 'threads': '1'})
numa = e(cpu, 'numa')
e(numa, 'cell', attrib={'id': '0', 'cpus': '0-127', 'memory': str(mem), 'unit': 'KiB'})
def on_aarch64():
cpu = e(root, 'cpu', attrib={'mode': 'custom'})
e(cpu, 'model', 'host', attrib={'fallback': 'allow'})
mem = cmd.memory / 1024
e(cpu, 'topology', attrib={'sockets': '32', 'cores': '4', 'threads': '1'})
numa = e(cpu, 'numa')
e(numa, 'cell', attrib={'id': '0', 'cpus': '0-127', 'memory': str(mem), 'unit': 'KiB'})
def on_mips64el():
e(root, 'vcpu', '8', {'placement': 'static', 'current': str(cmd.cpuNum)})
# e(root,'vcpu',str(cmd.cpuNum),{'placement':'static'})
cpu = e(root, 'cpu', attrib={'mode': 'custom', 'match': 'exact', 'check': 'partial'})
e(cpu, 'model', 'Loongson-3A4000-COMP', attrib={'fallback': 'allow'})
mem = cmd.memory / 1024
e(cpu, 'topology', attrib={'sockets': '2', 'cores': '4', 'threads': '1'})
numa = e(cpu, 'numa')
e(numa, 'cell', attrib={'id': '0', 'cpus': '0-7', 'memory': str(mem), 'unit': 'KiB'})
eval("on_{}".format(HOST_ARCH))()
else:
root = elements['root']
# e(root, 'vcpu', '128', {'placement': 'static', 'current': str(cmd.cpuNum)})
e(root, 'vcpu', str(cmd.cpuNum), {'placement': 'static'})
tune = e(root, 'cputune')
# enable nested virtualization
def on_x86_64():
if cmd.nestedVirtualization == 'host-model':
cpu = e(root, 'cpu', attrib={'mode': 'host-model'})
e(cpu, 'model', attrib={'fallback': 'allow'})
elif cmd.nestedVirtualization == 'host-passthrough':
cpu = e(root, 'cpu', attrib={'mode': 'host-passthrough'})
e(cpu, 'model', attrib={'fallback': 'allow'})
elif cmd.nestedVirtualization == 'custom':
cpu = e(root, 'cpu', attrib={'mode': 'custom'})
e(cpu, 'model', cmd.vmCpuModel, attrib={'fallback': 'allow'})
else:
cpu = e(root, 'cpu')
return cpu
def on_aarch64():
if is_virtual_machine():
cpu = e(root, 'cpu')
e(cpu, 'model', 'cortex-a57')
else:
cpu = e(root, 'cpu', attrib={'mode': 'host-passthrough'})
e(cpu, 'model', attrib={'fallback': 'allow'})
return cpu
def on_mips64el():
cpu = e(root, 'cpu', attrib={'mode': 'custom', 'match': 'exact', 'check': 'partial'})
e(cpu, 'model', 'Loongson-3A4000-COMP', attrib={'fallback': 'allow'})
return cpu
cpu = eval("on_{}".format(HOST_ARCH))()
e(cpu, 'topology', attrib={'sockets': str(cmd.socketNum), 'cores': str(cmd.cpuOnSocket), 'threads': '1'})
if cmd.addons.cpuPinning:
for rule in cmd.addons.cpuPinning:
e(tune, 'vcpupin', attrib={'vcpu': str(rule.vCpu), 'cpuset': rule.pCpuSet})
def make_memory():
root = elements['root']
mem = cmd.memory / 1024
if use_numa:
e(root, 'maxMemory', str(34359738368), {'slots': str(16), 'unit': 'KiB'})
# e(root,'memory',str(mem),{'unit':'k'})
e(root, 'currentMemory', str(mem), {'unit': 'k'})
else:
e(root, 'memory', str(mem), {'unit': 'k'})
e(root, 'currentMemory', str(mem), {'unit': 'k'})
def make_os():
root = elements['root']
os = e(root, 'os')
host_arch = kvmagent.os_arch
def on_x86_64():
e(os, 'type', 'hvm', attrib={'machine': machine_type})
# if boot mode is UEFI
if cmd.bootMode == "UEFI":
e(os, 'loader', '/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd', attrib={'readonly': 'yes', 'type': 'pflash'})
e(os, 'nvram', '/var/lib/libvirt/qemu/nvram/%s.fd' % cmd.vmInstanceUuid, attrib={'template': '/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd'})
elif cmd.bootMode == "UEFI_WITH_CSM":
e(os, 'loader', '/usr/share/edk2.git/ovmf-x64/OVMF_CODE-with-csm.fd', attrib={'readonly': 'yes', 'type': 'pflash'})
e(os, 'nvram', '/var/lib/libvirt/qemu/nvram/%s.fd' % cmd.vmInstanceUuid, attrib={'template': '/usr/share/edk2.git/ovmf-x64/OVMF_VARS-with-csm.fd'})
elif cmd.addons['loaderRom'] is not None:
e(os, 'loader', cmd.addons['loaderRom'], {'type': 'rom'})
def on_aarch64():
def on_redhat():
e(os, 'type', 'hvm', attrib={'arch': 'aarch64', 'machine': machine_type})
e(os, 'loader', '/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw', attrib={'readonly': 'yes', 'type': 'pflash'})
e(os, 'nvram', '/var/lib/libvirt/qemu/nvram/%s.fd' % cmd.vmInstanceUuid, attrib={'template': '/usr/share/edk2/aarch64/vars-template-pflash.raw'})
def on_debian():
e(os, 'type', 'hvm', attrib={'arch': 'aarch64', 'machine': machine_type})
e(os, 'loader', '/usr/share/OVMF/QEMU_EFI-pflash.raw', attrib={'readonly': 'yes', 'type': 'rom'})
e(os, 'nvram', '/var/lib/libvirt/qemu/nvram/%s.fd' % cmd.vmInstanceUuid, attrib={'template': '/usr/share/OVMF/vars-template-pflash.raw'})
eval("on_{}".format(kvmagent.get_host_os_type()))()
def on_mips64el():
e(os, 'type', 'hvm', attrib={'arch': 'mips64el', 'machine': 'loongson3a'})
e(os, 'loader', '/usr/share/qemu/ls3a_bios.bin', attrib={'readonly': 'yes', 'type': 'rom'})
eval("on_{}".format(host_arch))()
if cmd.useBootMenu:
e(os, 'bootmenu', attrib={'enable': 'yes'})
if cmd.systemSerialNumber and HOST_ARCH != 'mips64el':
e(os, 'smbios', attrib={'mode': 'sysinfo'})
def make_sysinfo():
if not cmd.systemSerialNumber:
return
root = elements['root']
sysinfo = e(root, 'sysinfo', attrib={'type': 'smbios'})
system = e(sysinfo, 'system')
e(system, 'entry', cmd.systemSerialNumber, attrib={'name': 'serial'})
if cmd.chassisAssetTag is not None:
chassis = e(sysinfo, 'chassis')
e(chassis, 'entry', cmd.chassisAssetTag, attrib={'name': 'asset'})
def make_features():
root = elements['root']
features = e(root, 'features')
for f in ['apic', 'pae']:
e(features, f)
@linux.with_arch(todo_list=['x86_64'])
def make_acpi():
e(features, 'acpi')
make_acpi()
if cmd.kvmHiddenState is True:
kvm = e(features, "kvm")
e(kvm, 'hidden', None, {'state': 'on'})
if cmd.vmPortOff is True:
e(features, 'vmport', attrib={'state': 'off'})
if cmd.emulateHyperV is True:
hyperv = e(features, "hyperv")
e(hyperv, 'relaxed', attrib={'state': 'on'})
e(hyperv, 'vapic', attrib={'state': 'on'})
if is_hv_freq_supported(): e(hyperv, 'frequencies', attrib={'state': 'on'})
e(hyperv, 'spinlocks', attrib={'state': 'on', 'retries': '4096'})
e(hyperv, 'vendor_id', attrib={'state': 'on', 'value': 'ZStack_Org'})
# always set ioapic driver to kvm after libvirt 3.4.0
if is_ioapic_supported():
e(features, "ioapic", attrib={'driver': 'kvm'})
if get_gic_version(cmd.cpuNum) == 2:
e(features, "gic", attrib={'version': '2'})
def make_qemu_commandline():
if not os.path.exists(QMP_SOCKET_PATH):
os.mkdir(QMP_SOCKET_PATH)
root = elements['root']
qcmd = e(root, 'qemu:commandline')
vendor_id, model_name = linux.get_cpu_model()
if "hygon" in model_name.lower():
if isinstance(cmd.imagePlatform, str) and cmd.imagePlatform.lower() not in ["other", "paravirtualization"]:
e(qcmd, "qemu:arg", attrib={"value": "-cpu"})
e(qcmd, "qemu:arg", attrib={"value": "EPYC,vendor=AuthenticAMD,model_id={} Processor,+svm".format(" ".join(model_name.split(" ")[0:3]))})
else:
e(qcmd, "qemu:arg", attrib={"value": "-qmp"})
e(qcmd, "qemu:arg", attrib={"value": "unix:{}/{}.sock,server,nowait".format(QMP_SOCKET_PATH, cmd.vmInstanceUuid)})
args = cmd.addons['qemuCommandLine']
if args is not None:
for arg in args:
e(qcmd, "qemu:arg", attrib={"value": arg.strip('"')})
if cmd.useColoBinary:
e(qcmd, "qemu:arg", attrib={"value": '-L'})
e(qcmd, "qemu:arg", attrib={"value": '/usr/share/qemu-kvm/'})
if cmd.coloPrimary:
e(qcmd, "qemu:arg", attrib={"value": '-L'})
e(qcmd, "qemu:arg", attrib={"value": '/usr/share/qemu-kvm/'})
count = 0
primary_host_ip = cmd.addons['primaryVmHostIp']
for config in cmd.addons['primaryVmNicConfig']:
e(qcmd, "qemu:arg", attrib={"value": '-chardev'})
e(qcmd, "qemu:arg", attrib={"value": 'socket,id=zs-mirror-%s,host=%s,port=%s,server,nowait'
% (count, primary_host_ip, config.mirrorPort)})
e(qcmd, "qemu:arg", attrib={"value": '-chardev'})
e(qcmd, "qemu:arg", attrib={"value": 'socket,id=primary-in-s-%s,host=%s,port=%s,server,nowait'
% (count, primary_host_ip, config.primaryInPort)})
e(qcmd, "qemu:arg", attrib={"value": '-chardev'})
e(qcmd, "qemu:arg", attrib={"value": 'socket,id=secondary-in-s-%s,host=%s,port=%s,server,nowait'
% (count, primary_host_ip, config.secondaryInPort)})
e(qcmd, "qemu:arg", attrib={"value": '-chardev'})
e(qcmd, "qemu:arg", attrib={"value": 'socket,id=primary-in-c-%s,host=%s,port=%s,nowait'
% (count, primary_host_ip, config.primaryInPort)})
e(qcmd, "qemu:arg", attrib={"value": '-chardev'})
e(qcmd, "qemu:arg", attrib={"value": 'socket,id=primary-out-s-%s,host=%s,port=%s,server,nowait'
% (count, primary_host_ip, config.primaryOutPort)})
e(qcmd, "qemu:arg", attrib={"value": '-chardev'})
e(qcmd, "qemu:arg", attrib={"value": 'socket,id=primary-out-c-%s,host=%s,port=%s,nowait'
% (count, primary_host_ip, config.primaryOutPort)})
count += 1
e(qcmd, "qemu:arg", attrib={"value": '-monitor'})
e(qcmd, "qemu:arg", attrib={"value": 'tcp:%s:%s,server,nowait' % (primary_host_ip, cmd.addons['primaryMonitorPort'])})
elif cmd.coloSecondary:
e(qcmd, "qemu:arg", attrib={"value": '-L'})
e(qcmd, "qemu:arg", attrib={"value": '/usr/share/qemu-kvm/'})
count = 0
for config in cmd.addons['ftSecondaryVmNicConfig']:
e(qcmd, "qemu:arg", attrib={"value": '-chardev'})
e(qcmd, "qemu:arg", attrib={"value": 'socket,id=red-mirror-%s,host=%s,port=%s'
% (count, cmd.addons['primaryVmHostIp'], config.mirrorPort)})
e(qcmd, "qemu:arg", attrib={"value": '-chardev'})
e(qcmd, "qemu:arg", attrib={"value": 'socket,id=red-secondary-%s,host=%s,port=%s'
% (count, cmd.addons['primaryVmHostIp'], config.secondaryInPort)})
e(qcmd, "qemu:arg", attrib={"value": '-object'})
e(qcmd, "qemu:arg", attrib={"value": 'filter-redirector,id=fr-mirror-%s,netdev=hostnet%s,queue=tx,'
'indev=red-mirror-%s' % (count, count, count)})
e(qcmd, "qemu:arg", attrib={"value": '-object'})
e(qcmd, "qemu:arg", attrib={"value": 'filter-redirector,id=fr-secondary-%s,netdev=hostnet%s,'
'queue=rx,outdev=red-secondary-%s' % (count, count, count)})
e(qcmd, "qemu:arg", attrib={"value": '-object'})
e(qcmd, "qemu:arg", attrib={"value": 'filter-rewriter,id=rew-%s,netdev=hostnet%s,queue=all'
% (count, count)})
count += 1
block_replication_port = cmd.addons['blockReplicationPort']
secondary_vm_host_ip = cmd.addons['secondaryVmHostIp']
e(qcmd, "qemu:arg", attrib={"value": '-incoming'})
e(qcmd, "qemu:arg", attrib={"value": 'tcp:%s:%s' % (secondary_vm_host_ip, block_replication_port)})
secondary_monitor_port = cmd.addons['secondaryMonitorPort']
e(qcmd, "qemu:arg", attrib={"value": '-monitor'})
e(qcmd, "qemu:arg", attrib={"value": 'tcp:%s:%s,server,nowait' % (secondary_vm_host_ip, secondary_monitor_port)})
def make_devices():
root = elements['root']
devices = e(root, 'devices')
if cmd.addons and cmd.addons['qemuPath']:
e(devices, 'emulator', cmd.addons['qemuPath'])
else:
if cmd.coloPrimary or cmd.coloSecondary or cmd.useColoBinary:
e(devices, 'emulator', kvmagent.get_colo_qemu_path())
else:
e(devices, 'emulator', kvmagent.get_qemu_path())
@linux.with_arch(todo_list=['aarch64', 'mips64el'])
def set_keyboard():
keyboard = e(devices, 'input', None, {'type': 'keyboard', 'bus': 'usb'})
e(keyboard, 'address', None, {'type': 'usb', 'bus': '0', 'port': '2'})
def set_tablet():
tablet = e(devices, 'input', None, {'type': 'tablet', 'bus': 'usb'})
e(tablet, 'address', None, {'type': 'usb', 'bus': '0', 'port': '1'})
# no default usb controller and tablet device for appliance vm
if cmd.isApplianceVm:
e(devices, 'controller', None, {'type': 'usb', 'model': 'ehci'})
set_keyboard()
else:
set_keyboard()
set_tablet()
elements['devices'] = devices
def make_cdrom():
devices = elements['devices']
max_cdrom_num = len(Vm.ISO_DEVICE_LETTERS)
empty_cdrom_configs = None
if HOST_ARCH in ['aarch64', 'mips64el']:
# SCSI controller only supports 1 bus
empty_cdrom_configs = [
EmptyCdromConfig('sd%s' % Vm.ISO_DEVICE_LETTERS[0], '0', Vm.get_iso_device_unit(0)),
EmptyCdromConfig('sd%s' % Vm.ISO_DEVICE_LETTERS[1], '0', Vm.get_iso_device_unit(1)),
EmptyCdromConfig('sd%s' % Vm.ISO_DEVICE_LETTERS[2], '0', Vm.get_iso_device_unit(2))
]
else:
if cmd.fromForeignHypervisor:
cdroms = cmd.addons['FIXED_CDROMS']
if cdroms is None:
empty_cdrom_configs = [
EmptyCdromConfig('hd%s' % Vm.ISO_DEVICE_LETTERS[0], '0', '1')
]
else:
cdrom_device_id_list = cdroms.split(',')
empty_cdrom_configs = []
for i in xrange(len(cdrom_device_id_list)):
empty_cdrom_configs.append(
EmptyCdromConfig('hd%s' % Vm.ISO_DEVICE_LETTERS[i], str(i / 2), str(i % 2)))
elif machine_type == 'q35':
# bus 0 unit 0 is already used by the root volume if it is on sata
empty_cdrom_configs = [
EmptyCdromConfig('hd%s' % Vm.ISO_DEVICE_LETTERS[0], '0', '1'),
EmptyCdromConfig('hd%s' % Vm.ISO_DEVICE_LETTERS[1], '0', '2'),
EmptyCdromConfig('hd%s' % Vm.ISO_DEVICE_LETTERS[2], '0', '3'),
]
else: # machine_type=pc
# bus 0 unit 0 is already used by the root volume if it is on ide
empty_cdrom_configs = [
EmptyCdromConfig('hd%s' % Vm.ISO_DEVICE_LETTERS[0], '0', '1'),
EmptyCdromConfig('hd%s' % Vm.ISO_DEVICE_LETTERS[1], '1', '0'),
EmptyCdromConfig('hd%s' % Vm.ISO_DEVICE_LETTERS[2], '1', '1')
]
if len(empty_cdrom_configs) != max_cdrom_num:
logger.error('ISO_DEVICE_LETTERS or EMPTY_CDROM_CONFIGS config error')
def make_empty_cdrom(target_dev, bus, unit, bootOrder):
cdrom = e(devices, 'disk', None, {'type': 'file', 'device': 'cdrom'})
e(cdrom, 'driver', None, {'name': 'qemu', 'type': 'raw'})
e(cdrom, 'target', None, {'dev': target_dev, 'bus': default_bus_type})
e(cdrom, 'address', None, {'type': 'drive', 'bus': bus, 'unit': unit})
e(cdrom, 'readonly', None)
if bootOrder is not None and bootOrder > 0:
e(cdrom, 'boot', None, {'order': str(bootOrder)})
return cdrom
"""
if not cmd.bootIso:
for config in empty_cdrom_configs:
makeEmptyCdrom(config.targetDev, config.bus, config.unit)
return
"""
if not cmd.cdRoms:
return
for iso in cmd.cdRoms:
cdrom_config = empty_cdrom_configs[iso.deviceId]
if iso.isEmpty:
make_empty_cdrom(cdrom_config.targetDev, cdrom_config.bus, cdrom_config.unit, iso.bootOrder)
continue
if iso.path.startswith('ceph'):
ic = IsoCeph()
ic.iso = iso
devices.append(ic.to_xmlobject(cdrom_config.targetDev, default_bus_type, cdrom_config.bus, cdrom_config.unit, iso.bootOrder))
else:
cdrom = make_empty_cdrom(cdrom_config.targetDev, cdrom_config.bus, cdrom_config.unit, iso.bootOrder)
e(cdrom, 'source', None, {'file': iso.path})
def make_volumes():
devices = elements['devices']
# guarantee rootVolume is the first in the list
volumes = [cmd.rootVolume]
volumes.extend(cmd.dataVolumes)
# When platform=other and default_bus_type=ide, the maximum number of volumes is three
volume_ide_configs = [
VolumeIDEConfig('0', '0'),
VolumeIDEConfig('1', '1'),
VolumeIDEConfig('1', '0')
]
def quorumbased_volume(_dev_letter, _v):
def make_backingstore(volume_path):
disk = etree.Element('disk', {'type': 'quorum', 'device': 'disk', 'threshold': '1', 'mode': 'primary' if cmd.coloPrimary else 'secondary'})
paths = linux.qcow2_get_file_chain(volume_path)
if len(paths) == 0:
# could not read the qcow2 backing chain
raise Exception("could not read the qcow2 file chain of %s" % volume_path)
backingStore = None
for path in paths:
logger.debug('disk path %s' % path)
xml = etree.tostring(disk)
logger.debug('disk xml is %s' % xml)
if backingStore:
backingStore = e(backingStore, 'backingStore', None, {'type': 'file'})
else:
backingStore = e(disk, 'backingStore', None, {'type': 'file'})
e(backingStore, 'format', None, {'type': 'qcow2'})
xml = etree.tostring(disk)
logger.debug('disk xml is %s' % xml)
if cmd.coloSecondary:
e(backingStore, 'active', None, {'file': cmd.cacheVolumes[0].installPath})
e(backingStore, 'hidden', None, {'file': cmd.cacheVolumes[1].installPath})
e(backingStore, 'source', None, {'file': path})
return disk
disk = make_backingstore(_v.installPath)
if _v.useVirtio:
e(disk, 'target', None, {'dev': 'vd%s' % _dev_letter, 'bus': 'virtio'})
else:
dev_format = Vm._get_disk_target_dev_format(default_bus_type)
e(disk, 'target', None, {'dev': dev_format % _dev_letter, 'bus': default_bus_type})
if default_bus_type == "ide" and cmd.imagePlatform.lower() == "other":
allocate_ide_config(disk)
return disk
def filebased_volume(_dev_letter, _v):
disk = etree.Element('disk', {'type': 'file', 'device': 'disk', 'snapshot': 'external'})
if cmd.addons and cmd.addons['useDataPlane'] is True:
e(disk, 'driver', None, {'name': 'qemu', 'type': linux.get_img_fmt(_v.installPath), 'cache': _v.cacheMode, 'queues':'1', 'dataplane': 'on'})
else:
e(disk, 'driver', None, {'name': 'qemu', 'type': linux.get_img_fmt(_v.installPath), 'cache': _v.cacheMode})
e(disk, 'source', None, {'file': _v.installPath})
if _v.shareable:
e(disk, 'shareable')
if _v.useVirtioSCSI:
e(disk, 'target', None, {'dev': 'sd%s' % _dev_letter, 'bus': 'scsi'})
e(disk, 'wwn', _v.wwn)
return disk
if _v.useVirtio:
e(disk, 'target', None, {'dev': 'vd%s' % _dev_letter, 'bus': 'virtio'})
else:
dev_format = Vm._get_disk_target_dev_format(default_bus_type)
e(disk, 'target', None, {'dev': dev_format % _dev_letter, 'bus': default_bus_type})
if default_bus_type == "ide" and cmd.imagePlatform.lower() == "other":
allocate_ide_config(disk)
return disk
def iscsibased_volume(_dev_letter, _v):
def blk_iscsi():
bi = BlkIscsi()
# str.lstrip() strips a character set, not a prefix; slice off the scheme instead
portal, bi.target, bi.lun = _v.installPath[len('iscsi://'):].split('/')
bi.server_hostname, bi.server_port = portal.split(':')
bi.device_letter = _dev_letter
bi.volume_uuid = _v.volumeUuid
bi.chap_username = _v.chapUsername
bi.chap_password = _v.chapPassword
return bi.to_xmlobject()
def virtio_iscsi():
vi = VirtioIscsi()
portal, vi.target, vi.lun = _v.installPath[len('iscsi://'):].split('/')
vi.server_hostname, vi.server_port = portal.split(':')
vi.device_letter = _dev_letter
vi.volume_uuid = _v.volumeUuid
vi.chap_username = _v.chapUsername
vi.chap_password = _v.chapPassword
return vi.to_xmlobject()
if _v.useVirtio:
return virtio_iscsi()
else:
return blk_iscsi()
def ceph_volume(_dev_letter, _v):
def ceph_virtio():
vc = VirtioCeph()
vc.volume = _v
vc.dev_letter = _dev_letter
return vc.to_xmlobject()
def ceph_blk():
ic = BlkCeph()
ic.volume = _v
ic.dev_letter = _dev_letter
ic.bus_type = default_bus_type
return ic.to_xmlobject()
def ceph_virtio_scsi():
vsc = VirtioSCSICeph()
vsc.volume = _v
vsc.dev_letter = _dev_letter
return vsc.to_xmlobject()
def build_ceph_disk():
if _v.useVirtioSCSI:
disk = ceph_virtio_scsi()
if _v.shareable:
e(disk, 'shareable')
return disk
if _v.useVirtio:
return ceph_virtio()
else:
disk = ceph_blk()
if default_bus_type == "ide" and cmd.imagePlatform.lower() == "other":
allocate_ide_config(disk)
return disk
d = build_ceph_disk()
if _v.physicalBlockSize:
e(d, 'blockio', None, {'physical_block_size': str(_v.physicalBlockSize)})
return d
def spool_volume(_dev_letter, _v):
imgfmt = linux.get_img_fmt(_v.installPath)
disk = etree.Element('disk', {'type': 'network', 'device': 'disk'})
e(disk, 'driver', None,
{'name': 'qemu', 'type': 'raw', 'cache': 'none', 'io': 'native'})
e(disk, 'source', None,
{'protocol': 'spool', 'name': make_spool_conf(imgfmt, _dev_letter, _v)})
e(disk, 'target', None, {'dev': 'vd%s' % _dev_letter, 'bus': 'virtio'})
return disk
def block_volume(_dev_letter, _v):
disk = etree.Element('disk', {'type': 'block', 'device': 'disk', 'snapshot': 'external'})
e(disk, 'driver', None,
{'name': 'qemu', 'type': 'raw', 'cache': 'none', 'io': 'native'})
e(disk, 'source', None, {'dev': _v.installPath})
if _v.useVirtioSCSI:
e(disk, 'target', None, {'dev': 'sd%s' % _dev_letter, 'bus': 'scsi'})
e(disk, 'wwn', _v.wwn)
else:
e(disk, 'target', None, {'dev': 'vd%s' % _dev_letter, 'bus': 'virtio'})
return disk
def volume_qos(volume_xml_obj):
if not cmd.addons:
return
vol_qos = cmd.addons['VolumeQos']
if not vol_qos:
return
qos = vol_qos[v.volumeUuid]
if not qos:
return
if not qos.totalBandwidth and not qos.totalIops:
return
iotune = e(volume_xml_obj, 'iotune')
if qos.totalBandwidth:
e(iotune, 'total_bytes_sec', str(qos.totalBandwidth))
if qos.totalIops:
# e(iotune, 'total_iops_sec', str(qos.totalIops))
e(iotune, 'read_iops_sec', str(qos.totalIops))
e(iotune, 'write_iops_sec', str(qos.totalIops))
# e(iotune, 'read_iops_sec_max', str(qos.totalIops))
# e(iotune, 'write_iops_sec_max', str(qos.totalIops))
# e(iotune, 'total_iops_sec_max', str(qos.totalIops))
def volume_native_aio(volume_xml_obj):
if not cmd.addons:
return
vol_aio = cmd.addons['NativeAio']
if not vol_aio:
return
drivers = volume_xml_obj.getiterator("driver")
if drivers is None or len(drivers) == 0:
return
drivers[0].set("io", "native")
def allocate_ide_config(_disk):
if len(volume_ide_configs) == 0:
err = "insufficient IDE address."
logger.warn(err)
raise kvmagent.KvmError(err)
volume_ide_config = volume_ide_configs.pop(0)
e(_disk, 'address', None, {'type': 'drive', 'bus': volume_ide_config.bus, 'unit': volume_ide_config.unit})
if default_bus_type == "ide" and cmd.imagePlatform.lower() == "other":
Vm.DEVICE_LETTERS = Vm.DEVICE_LETTERS.replace('de', '')
volumes.sort(key=lambda d: d.deviceId)
scsi_device_ids = [v.deviceId for v in volumes if v.useVirtioSCSI]
for v in volumes:
if v.deviceId >= len(Vm.DEVICE_LETTERS):
err = "exceeds max disk limit, device id[%s], but only 0 ~ %d are allowed" % (v.deviceId, len(Vm.DEVICE_LETTERS) - 1)
logger.warn(err)
raise kvmagent.KvmError(err)
dev_letter = Vm.DEVICE_LETTERS[v.deviceId]
if v.useVirtioSCSI:
scsi_device_id = scsi_device_ids.pop()
if scsi_device_id >= len(Vm.DEVICE_LETTERS):
err = "exceeds max disk limit, device id[%s], but only 0 ~ %d are allowed" % (scsi_device_id, len(Vm.DEVICE_LETTERS) - 1)
logger.warn(err)
raise kvmagent.KvmError(err)
dev_letter = Vm.DEVICE_LETTERS[scsi_device_id]
if v.deviceType == 'quorum':
vol = quorumbased_volume(dev_letter, v)
elif v.deviceType == 'file':
vol = filebased_volume(dev_letter, v)
elif v.deviceType == 'iscsi':
vol = iscsibased_volume(dev_letter, v)
elif v.deviceType == 'ceph':
vol = ceph_volume(dev_letter, v)
elif v.deviceType == 'block':
vol = block_volume(dev_letter, v)
elif v.deviceType == 'spool':
vol = spool_volume(dev_letter, v)
else:
raise Exception('unknown volume deviceType: %s' % v.deviceType)
assert vol is not None, 'vol cannot be None'
Vm.set_device_address(vol, v)
if v.bootOrder is not None and v.bootOrder > 0 and v.deviceId == 0:
e(vol, 'boot', None, {'order': str(v.bootOrder)})
Vm.set_volume_qos(cmd.addons, v.volumeUuid, vol)
Vm.set_volume_serial_id(v.volumeUuid, vol)
volume_native_aio(vol)
devices.append(vol)
def make_nics():
if not cmd.nics:
return
def addon(nic_xml_object):
if cmd.addons and cmd.addons['NicQos'] and cmd.addons['NicQos'][nic.uuid]:
qos = cmd.addons['NicQos'][nic.uuid]
Vm._add_qos_to_interface(nic_xml_object, qos)
if cmd.coloPrimary or cmd.coloSecondary:
Vm._ignore_colo_vm_nic_rom_file_on_interface(nic_xml_object)
devices = elements['devices']
vhostSrcPath = cmd.addons['vhostSrcPath'] if cmd.addons else None
brMode = cmd.addons['brMode'] if cmd.addons else None
for index, nic in enumerate(cmd.nics):
interface = Vm._build_interface_xml(nic, devices, vhostSrcPath, 'Attach', brMode, index)
addon(interface)
def make_meta():
root = elements['root']
e(root, 'name', cmd.vmInstanceUuid)
if cmd.coloPrimary or cmd.coloSecondary:
e(root, 'iothreads', str(len(cmd.nics)))
e(root, 'uuid', uuidhelper.to_full_uuid(cmd.vmInstanceUuid))
e(root, 'description', cmd.vmName)
e(root, 'on_poweroff', 'destroy')
e(root, 'on_reboot', 'restart')
on_crash = cmd.addons['onCrash']
if on_crash is None:
on_crash = 'restart'
e(root, 'on_crash', on_crash)
meta = e(root, 'metadata')
zs = e(meta, 'zstack', usenamesapce=True)
e(zs, 'internalId', str(cmd.vmInternalId))
e(zs, 'hostManagementIp', str(cmd.hostManagementIp))
# <clock offset="utc" />
clock = e(root, 'clock', None, {'offset': cmd.clock})
# <rom bar='off'/>
if cmd.clock == 'localtime':
if cmd.clockTrack:
e(clock, 'timer', None, {'name': 'rtc', 'tickpolicy': 'catchup', 'track': cmd.clockTrack})
else:
e(clock, 'timer', None, {'name': 'rtc', 'tickpolicy': 'catchup'})
e(clock, 'timer', None, {'name': 'pit', 'tickpolicy': 'delay'})
e(clock, 'timer', None, {'name': 'hpet', 'present': 'no'})
e(clock, 'timer', None, {'name': 'hypervclock', 'present': 'yes'})
def make_vnc():
devices = elements['devices']
if cmd.consolePassword is None:
vnc = e(devices, 'graphics', None, {'type': 'vnc', 'port': '5900', 'autoport': 'yes'})
else:
vnc = e(devices, 'graphics', None,
{'type': 'vnc', 'port': '5900', 'autoport': 'yes', 'passwd': str(cmd.consolePassword)})
e(vnc, "listen", None, {'type': 'address', 'address': '0.0.0.0'})
def make_spice():
devices = elements['devices']
if cmd.consolePassword is None:
spice = e(devices, 'graphics', None, {'type': 'spice', 'port': '5900', 'autoport': 'yes'})
else:
spice = e(devices, 'graphics', None,
{'type': 'spice', 'port': '5900', 'autoport': 'yes', 'passwd': str(cmd.consolePassword)})
e(spice, "listen", None, {'type': 'address', 'address': '0.0.0.0'})
if is_spice_tls() == 0 and cmd.spiceChannels is not None:
for channel in cmd.spiceChannels:
e(spice, "channel", None, {'name': channel, 'mode': "secure"})
e(spice, "image", None, {'compression': 'auto_glz'})
e(spice, "jpeg", None, {'compression': 'always'})
e(spice, "zlib", None, {'compression': 'never'})
e(spice, "playback", None, {'compression': 'off'})
e(spice, "streaming", None, {'mode': cmd.spiceStreamingMode})
e(spice, "mouse", None, {'mode': 'client'})
e(spice, "filetransfer", None, {'enable': 'yes'})
e(spice, "clipboard", None, {'copypaste': 'yes'})
def make_folder_sharing():
devices = elements['devices']
chan = e(devices, 'channel', None, {'type': 'spiceport'})
e(chan, 'source', None, {'channel': 'org.spice-space.webdav.0'})
e(chan, 'target', None, {'type': 'virtio', 'name': 'org.spice-space.webdav.0'})
def make_usb_redirect():
devices = elements['devices']
e(devices, 'controller', None, {'type': 'usb', 'index': '0'})
# make sure there are three usb controllers, one each for USB 1.1/2.0/3.0
@linux.on_redhat_based(DIST_NAME)
@linux.with_arch(todo_list=['aarch64'])
def set_default():
# aarch64 CentOS only supports the default controller (qemu-xhci, USB 3.0) on the current qemu version (2.12_0-18)
e(devices, 'controller', None, {'type': 'usb', 'index': '1'})
e(devices, 'controller', None, {'type': 'usb', 'index': '2'})
return True
def set_usb2_3():
e(devices, 'controller', None, {'type': 'usb', 'index': '1', 'model': 'ehci'})
e(devices, 'controller', None, {'type': 'usb', 'index': '2', 'model': 'nec-xhci'})
# USB2.0 Controller for redirect
e(devices, 'controller', None, {'type': 'usb', 'index': '3', 'model': 'ehci'})
e(devices, 'controller', None, {'type': 'usb', 'index': '4', 'model': 'nec-xhci'})
def set_redirdev():
chan = e(devices, 'channel', None, {'type': 'spicevmc'})
e(chan, 'target', None, {'type': 'virtio', 'name': 'com.redhat.spice.0'})
e(chan, 'address', None, {'type': 'virtio-serial'})
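# the four redirected channels ride on the dedicated redirect controllers created by set_usb2_3(): bus 3 (ehci) and bus 4 (nec-xhci), two ports each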
redirdev1 = e(devices, 'redirdev', None, {'type': 'spicevmc', 'bus': 'usb'})
e(redirdev1, 'address', None, {'type': 'usb', 'bus': '3', 'port': '1'})
redirdev2 = e(devices, 'redirdev', None, {'type': 'spicevmc', 'bus': 'usb'})
e(redirdev2, 'address', None, {'type': 'usb', 'bus': '3', 'port': '2'})
redirdev3 = e(devices, 'redirdev', None, {'type': 'spicevmc', 'bus': 'usb'})
e(redirdev3, 'address', None, {'type': 'usb', 'bus': '4', 'port': '1'})
redirdev4 = e(devices, 'redirdev', None, {'type': 'spicevmc', 'bus': 'usb'})
e(redirdev4, 'address', None, {'type': 'usb', 'bus': '4', 'port': '2'})
if set_default():
return
set_usb2_3()
set_redirdev()
def make_video():
devices = elements['devices']
if HOST_ARCH == 'aarch64':
video = e(devices, 'video')
e(video, 'model', None, {'type': 'virtio'})
elif cmd.videoType != "qxl":
video = e(devices, 'video')
e(video, 'model', None, {'type': str(cmd.videoType)})
else:
for _ in range(cmd.VDIMonitorNumber):
video = e(devices, 'video')
if cmd.qxlMemory is not None:
e(video, 'model', None, {'type': str(cmd.videoType), 'ram': str(cmd.qxlMemory.ram), 'vram': str(cmd.qxlMemory.vram),
'vgamem': str(cmd.qxlMemory.vgamem)})
else:
e(video, 'model', None, {'type': str(cmd.videoType)})
def make_sound():
if cmd.consoleMode == 'spice' or cmd.consoleMode == 'vncAndSpice':
devices = elements['devices']
if cmd.soundType is not None:
e(devices, 'sound', None, {'model': str(cmd.soundType)})
else:
e(devices, 'sound', None, {'model': 'ich6'})
def make_graphic_console():
if cmd.consoleMode == 'spice':
make_spice()
elif cmd.consoleMode == "vnc":
make_vnc()
elif cmd.consoleMode == "vncAndSpice":
make_vnc()
make_spice()
else:
return
def make_addons():
if not cmd.addons:
return
devices = elements['devices']
channel = cmd.addons['channel']
if channel:
basedir = os.path.dirname(channel.socketPath)
linux.mkdir(basedir, 0777)
chan = e(devices, 'channel', None, {'type': 'unix'})
e(chan, 'source', None, {'mode': 'bind', 'path': channel.socketPath})
e(chan, 'target', None, {'type': 'virtio', 'name': channel.targetName})
cephSecretKey = cmd.addons['ceph_secret_key']
cephSecretUuid = cmd.addons['ceph_secret_uuid']
if cephSecretKey and cephSecretUuid:
VmPlugin._create_ceph_secret_key(cephSecretKey, cephSecretUuid)
pciDevices = cmd.addons['pciDevice']
if pciDevices:
make_pci_device(pciDevices)
mdevDevices = cmd.addons['mdevDevice']
if mdevDevices:
make_mdev_device(mdevDevices)
storageDevices = cmd.addons['storageDevice']
if storageDevices:
make_storage_device(storageDevices)
usbDevices = cmd.addons['usbDevice']
if usbDevices:
make_usb_device(usbDevices)
# FIXME: manage scsi device in one place.
def make_storage_device(storageDevices):
lvm.unpriv_sgio()
devices = elements['devices']
for volume in storageDevices:
if match_storage_device(volume.installPath):
disk = e(devices, 'disk', None, attrib={'type': 'block', 'device': 'lun', 'sgio': get_sgio_value()})
e(disk, 'driver', None, {'name': 'qemu', 'type': 'raw'})
e(disk, 'source', None, {'dev': volume.installPath})
e(disk, 'target', None, {'dev': 'sd%s' % Vm.DEVICE_LETTERS[volume.deviceId], 'bus': 'scsi'})
Vm.set_device_address(disk, volume)
def make_pci_device(pciDevices):
devices = elements['devices']
for pci in pciDevices:
addr, spec_uuid = pci.split(',')
ret, out, err = bash.bash_roe("virsh nodedev-detach pci_%s" % addr.replace(':', '_').replace('.', '_'))
if ret != 0:
raise kvmagent.KvmError('failed to nodedev-detach %s: %s, %s' % (addr, out, err))
if match_pci_device(addr):
hostdev = e(devices, "hostdev", None, {'mode': 'subsystem', 'type': 'pci', 'managed': 'no'})
e(hostdev, "driver", None, {'name': 'vfio'})
source = e(hostdev, "source")
e(source, "address", None, {
"domain": hex(0) if len(addr.split(":")) == 2 else hex(int(addr.split(":")[0], 16)),
"bus": hex(int(addr.split(":")[-2], 16)),
"slot": hex(int(addr.split(":")[-1].split(".")[0], 16)),
"function": hex(int(addr.split(":")[-1].split(".")[1], 16))
})
else:
raise kvmagent.KvmError(
'can not find pci device for address %s' % addr)
if spec_uuid:
rom_file = os.path.join(PCI_ROM_PATH, spec_uuid)
# only turn the rom bar on when the rom file exists
if os.path.exists(rom_file):
e(hostdev, "rom", None, {'bar': 'on', 'file': rom_file})
def make_mdev_device(mdevUuids):
devices = elements['devices']
for mdevUuid in mdevUuids:
hostdev = e(devices, "hostdev", None, {'mode': 'subsystem', 'type': 'mdev', 'model': 'vfio-pci', 'managed': 'yes'})
source = e(hostdev, "source")
# convert mdevUuid to 8-4-4-4-12 format
e(source, "address", None, { "uuid": uuidhelper.to_full_uuid(mdevUuid) })
def make_usb_device(usbDevices):
if HOST_ARCH in ['aarch64', 'mips64el']:
next_uhci_port = 3
else:
next_uhci_port = 2
next_ehci_port = 1
next_xhci_port = 1
devices = elements['devices']
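# usb device string: 8 colon-separated fields, as parsed below
#   bus:device:vendorId:productId:usbVersion:mode[:service:host]
# mode is 'PassThrough' or 'Redirect'; service/host only apply to Redirect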
for usb in usbDevices:
if match_usb_device(usb):
if usb.split(":")[5] == "PassThrough":
hostdev = e(devices, "hostdev", None, {'mode': 'subsystem', 'type': 'usb', 'managed': 'yes'})
source = e(hostdev, "source")
e(source, "address", None, {
"bus": str(int(usb.split(":")[0])),
"device": str(int(usb.split(":")[1]))
})
e(source, "vendor", None, {
"id": hex(int(usb.split(":")[2], 16))
})
e(source, "product", None, {
"id": hex(int(usb.split(":")[3], 16))
})
# get controller index from usbVersion
# eg. 1.1 -> 0
# eg. 2.0.0 -> 1
# eg. 3 -> 2
bus = int(usb.split(":")[4][0]) - 1
if bus == 0:
address = e(hostdev, "address", None, {'type': 'usb', 'bus': str(bus), 'port': str(next_uhci_port)})
next_uhci_port += 1
elif bus == 1:
address = e(hostdev, "address", None, {'type': 'usb', 'bus': str(bus), 'port': str(next_ehci_port)})
next_ehci_port += 1
elif bus == 2:
address = e(hostdev, "address", None, {'type': 'usb', 'bus': str(bus), 'port': str(next_xhci_port)})
next_xhci_port += 1
else:
raise kvmagent.KvmError('unknown usb controller %s' % bus)
if usb.split(":")[5] == "Redirect":
redirdev = e(devices, "redirdev", None, {'bus': 'usb', 'type': 'tcp'})
source = e(redirdev, "source", None, {'mode': 'connect', 'host': usb.split(":")[7], 'service': usb.split(":")[6]})
# get controller index from usbVersion
# eg. 1.1 -> 0
# eg. 2.0.0 -> 1
# eg. 3 -> 2
bus = int(usb.split(":")[4][0]) - 1
if bus == 0:
address = e(redirdev, "address", None,
{'type': 'usb', 'bus': str(bus), 'port': str(next_uhci_port)})
next_uhci_port += 1
elif bus == 1:
address = e(redirdev, "address", None,
{'type': 'usb', 'bus': str(bus), 'port': str(next_ehci_port)})
next_ehci_port += 1
elif bus == 2:
address = e(redirdev, "address", None,
{'type': 'usb', 'bus': str(bus), 'port': str(next_xhci_port)})
next_xhci_port += 1
else:
raise kvmagent.KvmError('unknown usb controller %s' % bus)
else:
raise kvmagent.KvmError('cannot find usb device %s' % usb)
#TODO(weiw) validate here
def match_storage_device(install_path):
return True
# TODO(WeiW) Validate here
def match_pci_device(addr):
return True
def match_usb_device(addr):
return len(addr.split(':')) == 8
def make_balloon_memory():
if cmd.addons['useMemBalloon'] is False:
return
devices = elements['devices']
b = e(devices, 'memballoon', None, {'model': 'virtio'})
e(b, 'stats', None, {'period': '10'})
if kvmagent.get_host_os_type() == "debian":
e(b, 'address', None, {'type': 'pci', 'controller': '0', 'bus': '0x00', 'slot': '0x04', 'function':'0x0'})
def make_console():
devices = elements['devices']
if cmd.consoleLogToFile:
logfilename = '%s-vm-kernel.log' % cmd.vmInstanceUuid
logpath = os.path.join(tempfile.gettempdir(), logfilename)
serial = e(devices, 'serial', None, {'type': 'file'})
e(serial, 'target', None, {'port': '0'})
e(serial, 'source', None, {'path': logpath})
console = e(devices, 'console', None, {'type': 'file'})
e(console, 'target', None, {'type': 'serial', 'port': '0'})
e(console, 'source', None, {'path': logpath})
else:
serial = e(devices, 'serial', None, {'type': 'pty'})
e(serial, 'target', None, {'port': '0'})
console = e(devices, 'console', None, {'type': 'pty'})
e(console, 'target', None, {'type': 'serial', 'port': '0'})
def make_sec_label():
root = elements['root']
e(root, 'seclabel', None, {'type': 'none'})
def make_controllers():
devices = elements['devices']
e(devices, 'controller', None, {'type': 'scsi', 'model': 'virtio-scsi'})
if machine_type in ['q35', 'virt']:
controller = e(devices, 'controller', None, {'type': 'sata', 'index': '0'})
e(controller, 'alias', None, {'name': 'sata'})
e(controller, 'address', None, {'type': 'pci', 'domain': '0', 'bus': '0', 'slot': '0x1f', 'function': '2'})
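# index layout below: 0 = pcie-root, 1 = dmi-to-pci-bridge,
# the next predefinedPciBridgeNum indices = pci-bridge,
# and the remainder (up to pciePortNums + 3 in total) = pcie-root-port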
pci_idx_generator = iter(xrange(cmd.pciePortNums + 3))
e(devices, 'controller', None, {'type': 'pci', 'model': 'pcie-root', 'index': str(next(pci_idx_generator))})
e(devices, 'controller', None, {'type': 'pci', 'model': 'dmi-to-pci-bridge', 'index': str(next(pci_idx_generator))})
for _ in xrange(cmd.predefinedPciBridgeNum):
e(devices, 'controller', None, {'type': 'pci', 'model': 'pci-bridge', 'index': str(next(pci_idx_generator))})
for i in pci_idx_generator:
e(devices, 'controller', None, {'type': 'pci', 'model': 'pcie-root-port', 'index': str(i)})
else:
if not cmd.predefinedPciBridgeNum or HOST_ARCH == 'mips64el':
return
for i in xrange(cmd.predefinedPciBridgeNum):
e(devices, 'controller', None, {'type': 'pci', 'index': str(i + 1), 'model': 'pci-bridge'})
make_root()
make_meta()
make_cpu()
make_memory()
make_os()
make_sysinfo()
make_features()
make_devices()
make_video()
make_sound()
make_nics()
make_volumes()
if not cmd.addons or cmd.addons['noConsole'] is not True:
make_graphic_console()
make_addons()
make_balloon_memory()
make_console()
make_sec_label()
make_controllers()
if is_spiceport_driver_supported() and cmd.consoleMode in ["spice", "vncAndSpice"] and not cmd.coloPrimary and not cmd.coloSecondary:
make_folder_sharing()
# appliance vm doesn't need any cdrom or usb controller
if not cmd.isApplianceVm:
make_cdrom()
if not cmd.coloPrimary and not cmd.coloSecondary and not cmd.useColoBinary:
make_usb_redirect()
if cmd.additionalQmp:
make_qemu_commandline()
if cmd.useHugePage:
make_memory_backing()
root = elements['root']
xml = etree.tostring(root)
vm = Vm()
vm.uuid = cmd.vmInstanceUuid
if cmd.addons["userDefinedXml"] is not None:
vm.domain_xml = base64.b64decode(cmd.addons["userDefinedXml"])
vm.domain_xmlobject = xmlobject.loads(vm.domain_xml)
else:
vm.domain_xml = xml
vm.domain_xmlobject = xmlobject.loads(xml)
return vm
@staticmethod
def _build_interface_xml(nic, devices=None, vhostSrcPath=None, action=None, brMode=None, index=0):
if nic.pciDeviceAddress is not None:
iftype = 'hostdev'
device_attr = {'type': iftype, 'managed': 'yes'}
elif vhostSrcPath is not None:
iftype = 'vhostuser'
device_attr = {'type': iftype}
else:
iftype = 'bridge'
device_attr = {'type': iftype}
if devices:
interface = e(devices, 'interface', None, device_attr)
else:
interface = etree.Element('interface', attrib=device_attr)
e(interface, 'mac', None, attrib={'address': nic.mac})
e(interface, 'alias', None, {'name': 'net%s' % nic.nicInternalName.split('.')[1]})
if iftype != 'hostdev':
e(interface, 'mtu', None, attrib={'size': '%d' % nic.mtu})
if iftype == 'hostdev':
domain, bus, slot, function = parse_pci_device_address(nic.pciDeviceAddress)
source = e(interface, 'source')
e(source, 'address', None, attrib={'type': 'pci', 'domain': '0x' + domain, 'bus': '0x' + bus, 'slot': '0x' + slot, 'function': '0x' + function})
e(interface, 'driver', None, attrib={'name': 'vfio'})
if nic.vlanId is not None:
vlan = e(interface, 'vlan')
e(vlan, 'tag', None, attrib={'id': str(nic.vlanId)})
elif iftype == 'vhostuser':
if brMode != 'mocbr':
e(interface, 'source', None, attrib={'type': 'unix', 'path': vhostSrcPath, 'mode': 'client'})
e(interface, 'driver', None, attrib={'queues': '16', 'vhostforce': 'on'})
else:
e(interface, 'source', None, attrib={'type': 'unix', 'path': '/var/run/phynic{}'.format(index+1), 'mode':'server'})
e(interface, 'driver', None, attrib={'queues': '8'})
else:
e(interface, 'source', None, attrib={'bridge': nic.bridgeName})
e(interface, 'target', None, attrib={'dev': nic.nicInternalName})
if nic.pci is not None and (iftype == 'bridge' or iftype == 'vhostuser'):
e(interface, 'address', None, attrib={'type': nic.pci.type, 'domain': nic.pci.domain, 'bus': nic.pci.bus, 'slot': nic.pci.slot, "function": nic.pci.function})
else:
e(interface, 'address', None, attrib={'type': "pci"})
if nic.ips and iftype == 'bridge':
ip4Addr = None
ip6Addrs = []
for addr in nic.ips:
version = netaddr.IPAddress(addr).version
if version == 4:
ip4Addr = addr
else:
ip6Addrs.append(addr)
# ipv4 nic
if ip4Addr is not None and len(ip6Addrs) == 0:
filterref = e(interface, 'filterref', None, {'filter': 'clean-traffic'})
e(filterref, 'parameter', None, {'name': 'IP', 'value': ip4Addr})
elif ip4Addr is None and len(ip6Addrs) > 0: # ipv6 nic
filterref = e(interface, 'filterref', None, {'filter': 'zstack-clean-traffic-ipv6'})
for addr6 in ip6Addrs:
e(filterref, 'parameter', None, {'name': 'GLOBAL_IP', 'value': addr6})
e(filterref, 'parameter', None, {'name': 'LINK_LOCAL_IP', 'value': ip.get_link_local_address(nic.mac)})
else: # dual stack nic
filterref = e(interface, 'filterref', None, {'filter': 'zstack-clean-traffic-ip46'})
e(filterref, 'parameter', None, {'name': 'IP', 'value': ip4Addr})
for addr6 in ip6Addrs:
e(filterref, 'parameter', None, {'name': 'GLOBAL_IP', 'value': addr6})
e(filterref, 'parameter', None, {'name': 'LINK_LOCAL_IP', 'value': ip.get_link_local_address(nic.mac)})
if iftype != 'hostdev':
if nic.driverType:
e(interface, 'model', None, attrib={'type': nic.driverType})
elif nic.useVirtio:
e(interface, 'model', None, attrib={'type': 'virtio'})
else:
e(interface, 'model', None, attrib={'type': 'e1000'})
if nic.driverType == 'virtio' and nic.vHostAddOn.queueNum != 1:
e(interface, 'driver', None, attrib={'name': 'vhost', 'txmode': 'iothread', 'ioeventfd': 'on', 'event_idx': 'off', 'queues': str(nic.vHostAddOn.queueNum), 'rx_queue_size': str(nic.vHostAddOn.rxBufferSize) if nic.vHostAddOn.rxBufferSize is not None else '256', 'tx_queue_size': str(nic.vHostAddOn.txBufferSize) if nic.vHostAddOn.txBufferSize is not None else '256'})
if nic.bootOrder is not None and nic.bootOrder > 0:
e(interface, 'boot', None, attrib={'order': str(nic.bootOrder)})
@in_bash
@lock.file_lock('/run/xtables.lock')
def _config_ebtable_rules_for_vfnics():
VF_NIC_MAC = nic.mac
CHAIN_NAME = 'ZSTACK-VF-NICS'
EBTABLES_CMD = ebtables.get_ebtables_cmd()
if action == 'Attach':
if bash.bash_r(EBTABLES_CMD + ' -L {{CHAIN_NAME}} > /dev/null 2>&1') != 0:
bash.bash_r(EBTABLES_CMD + ' -N {{CHAIN_NAME}}')
if bash.bash_r(EBTABLES_CMD + ' -L FORWARD | grep -- "-j {{CHAIN_NAME}}" > /dev/null') != 0:
bash.bash_r(EBTABLES_CMD + ' -I FORWARD -j {{CHAIN_NAME}}')
if bash.bash_r(EBTABLES_CMD + ' -L {{CHAIN_NAME}} --Lmac2 | grep -- "-p IPv4 -s {{VF_NIC_MAC}} --ip-proto udp --ip-sport 67:68 -j ACCEPT" > /dev/null') != 0:
bash.bash_r(EBTABLES_CMD + ' -I {{CHAIN_NAME}} -p IPv4 -s {{VF_NIC_MAC}} --ip-proto udp --ip-sport 67:68 -j ACCEPT')
elif action == 'Detach':
# FIXME: when a vm is destroyed no vnic-detach function is called, leaving some garbage rules behind
if bash.bash_r(EBTABLES_CMD + ' -L {{CHAIN_NAME}} --Lmac2 | grep -- "-p IPv4 -s {{VF_NIC_MAC}} --ip-proto udp --ip-sport 67:68 -j ACCEPT" > /dev/null') == 0:
bash.bash_r(EBTABLES_CMD + ' -D {{CHAIN_NAME}} -p IPv4 -s {{VF_NIC_MAC}} --ip-proto udp --ip-sport 67:68 -j ACCEPT')
@in_bash
def _add_bridge_fdb_entry_for_vnic():
if action == 'Attach':
# if nic.physicalInterface is a bond, find the first split PF name among its slaves
_phy_dev_name = nic.physicalInterface
_phy_dev_folder = os.path.join('/sys/class/net', _phy_dev_name)
for fname in os.listdir(_phy_dev_folder):
if fname.startswith('slave_'):
_slave_numvfs = os.path.join(_phy_dev_folder, fname, 'device/sriov_numvfs')
if os.path.isfile(_slave_numvfs):
with open(_slave_numvfs, 'r') as f:
if int(f.read().strip()) != 0:
_phy_dev_name = fname.replace('slave_', '').strip(' \t\n\r')
break
if not linux.bridge_fdb_has_self_rule(nic.mac, _phy_dev_name):
bash.bash_r("bridge fdb add %s dev %s" % (nic.mac, _phy_dev_name))
# to allow vf nic dhcp
if nic.pciDeviceAddress is not None:
_config_ebtable_rules_for_vfnics()
# to allow vnic/vf communication in same host
if nic.pciDeviceAddress is None and nic.physicalInterface is not None and brMode != 'mocbr':
_add_bridge_fdb_entry_for_vnic()
return interface
@staticmethod
def _ignore_colo_vm_nic_rom_file_on_interface(interface):
e(interface, 'driver', None, attrib={'name': 'qemu'})
e(interface, 'rom', None, attrib={'file': ''})
@staticmethod
def _add_qos_to_interface(interface, qos):
if not qos.outboundBandwidth and not qos.inboundBandwidth:
return
bandwidth = e(interface, 'bandwidth')
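# libvirt takes the average in KB/s while the QoS values appear to be bits/s, hence / 1024 / 8 (get_nic_qos reverses this with * 8 * 1024)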
if qos.outboundBandwidth:
e(bandwidth, 'outbound', None, {'average': str(qos.outboundBandwidth / 1024 / 8)})
if qos.inboundBandwidth:
e(bandwidth, 'inbound', None, {'average': str(qos.inboundBandwidth / 1024 / 8)})
def _stop_world():
http.AsyncUirHandler.STOP_WORLD = True
VmPlugin.queue_singleton.queue.put("exit")
@in_bash
def execute_qmp_command(domain_id, command):
return bash.bash_roe("virsh qemu-monitor-command %s '%s' --pretty" % (domain_id, command))
class VmPlugin(kvmagent.KvmAgent):
KVM_START_VM_PATH = "/vm/start"
KVM_STOP_VM_PATH = "/vm/stop"
KVM_PAUSE_VM_PATH = "/vm/pause"
KVM_RESUME_VM_PATH = "/vm/resume"
KVM_REBOOT_VM_PATH = "/vm/reboot"
KVM_DESTROY_VM_PATH = "/vm/destroy"
KVM_ONLINE_CHANGE_CPUMEM_PATH = "/vm/online/changecpumem"
KVM_ONLINE_INCREASE_CPU_PATH = "/vm/increase/cpu"
KVM_ONLINE_INCREASE_MEMORY_PATH = "/vm/increase/mem"
KVM_GET_CONSOLE_PORT_PATH = "/vm/getvncport"
KVM_VM_SYNC_PATH = "/vm/vmsync"
KVM_ATTACH_VOLUME = "/vm/attachdatavolume"
KVM_DETACH_VOLUME = "/vm/detachdatavolume"
KVM_MIGRATE_VM_PATH = "/vm/migrate"
KVM_BLOCK_LIVE_MIGRATION_PATH = "/vm/blklivemigration"
KVM_VM_CHECK_VOLUME_PATH = "/vm/volume/check"
KVM_TAKE_VOLUME_SNAPSHOT_PATH = "/vm/volume/takesnapshot"
KVM_TAKE_VOLUME_BACKUP_PATH = "/vm/volume/takebackup"
KVM_BLOCK_STREAM_VOLUME_PATH = "/vm/volume/blockstream"
KVM_TAKE_VOLUMES_SNAPSHOT_PATH = "/vm/volumes/takesnapshot"
KVM_TAKE_VOLUMES_BACKUP_PATH = "/vm/volumes/takebackup"
KVM_CANCEL_VOLUME_BACKUP_JOBS_PATH = "/vm/volume/cancel/backupjobs"
KVM_MERGE_SNAPSHOT_PATH = "/vm/volume/mergesnapshot"
KVM_LOGOUT_ISCSI_TARGET_PATH = "/iscsi/target/logout"
KVM_LOGIN_ISCSI_TARGET_PATH = "/iscsi/target/login"
KVM_ATTACH_NIC_PATH = "/vm/attachnic"
KVM_DETACH_NIC_PATH = "/vm/detachnic"
KVM_UPDATE_NIC_PATH = "/vm/updatenic"
KVM_CREATE_SECRET = "/vm/createcephsecret"
KVM_ATTACH_ISO_PATH = "/vm/iso/attach"
KVM_DETACH_ISO_PATH = "/vm/iso/detach"
KVM_VM_CHECK_STATE = "/vm/checkstate"
KVM_VM_CHANGE_PASSWORD_PATH = "/vm/changepasswd"
KVM_SET_VOLUME_BANDWIDTH = "/set/volume/bandwidth"
KVM_DELETE_VOLUME_BANDWIDTH = "/delete/volume/bandwidth"
KVM_GET_VOLUME_BANDWIDTH = "/get/volume/bandwidth"
KVM_SET_NIC_QOS = "/set/nic/qos"
KVM_GET_NIC_QOS = "/get/nic/qos"
KVM_HARDEN_CONSOLE_PATH = "/vm/console/harden"
KVM_DELETE_CONSOLE_FIREWALL_PATH = "/vm/console/deletefirewall"
HOT_PLUG_PCI_DEVICE = "/pcidevice/hotplug"
HOT_UNPLUG_PCI_DEVICE = "/pcidevice/hotunplug"
ATTACH_PCI_DEVICE_TO_HOST = "/pcidevice/attachtohost"
DETACH_PCI_DEVICE_FROM_HOST = "/pcidevice/detachfromhost"
KVM_ATTACH_USB_DEVICE_PATH = "/vm/usbdevice/attach"
KVM_DETACH_USB_DEVICE_PATH = "/vm/usbdevice/detach"
RELOAD_USB_REDIRECT_PATH = "/vm/usbdevice/reload"
CHECK_MOUNT_DOMAIN_PATH = "/check/mount/domain"
KVM_RESIZE_VOLUME_PATH = "/volume/resize"
VM_PRIORITY_PATH = "/vm/priority"
ATTACH_GUEST_TOOLS_ISO_TO_VM_PATH = "/vm/guesttools/attachiso"
DETACH_GUEST_TOOLS_ISO_FROM_VM_PATH = "/vm/guesttools/detachiso"
GET_VM_GUEST_TOOLS_INFO_PATH = "/vm/guesttools/getinfo"
KVM_GET_VM_FIRST_BOOT_DEVICE_PATH = "/vm/getfirstbootdevice"
KVM_CONFIG_PRIMARY_VM_PATH = "/primary/vm/config"
KVM_CONFIG_SECONDARY_VM_PATH = "/secondary/vm/config"
KVM_START_COLO_SYNC_PATH = "/start/colo/sync"
KVM_REGISTER_PRIMARY_VM_HEARTBEAT = "/register/primary/vm/heartbeat"
CHECK_COLO_VM_STATE_PATH = "/check/colo/vm/state"
WAIT_COLO_VM_READY_PATH = "/wait/colo/vm/ready"
ROLLBACK_QUORUM_CONFIG_PATH = "/rollback/quorum/config"
FAIL_COLO_PVM_PATH = "/fail/colo/pvm"
GET_VM_DEVICE_ADDRESS_PATH = "/vm/getdeviceaddress"
VM_OP_START = "start"
VM_OP_STOP = "stop"
VM_OP_REBOOT = "reboot"
VM_OP_MIGRATE = "migrate"
VM_OP_DESTROY = "destroy"
VM_OP_SUSPEND = "suspend"
VM_OP_RESUME = "resume"
timeout_object = linux.TimeoutObject()
queue_singleton = VmPluginQueueSingleton()
secret_keys = {}
vm_heartbeat = {}
if not os.path.exists(QMP_SOCKET_PATH):
os.mkdir(QMP_SOCKET_PATH)
def _record_operation(self, uuid, op):
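# remember the requested operation for 300 seconds; presumably consulted elsewhere to judge whether a later state change was expected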
j = VmOperationJudger(op)
self.timeout_object.put(uuid, j, 300)
def _remove_operation(self, uuid):
self.timeout_object.remove(uuid)
def _get_operation(self, uuid):
o = self.timeout_object.get(uuid)
if not o:
return None
return o[0]
def _prepare_ebtables_for_mocbr(self, cmd):
brMode = cmd.addons['brMode'] if cmd.addons else None
if brMode != 'mocbr':
return
l3mapping = cmd.addons['l3mapping'] if cmd.addons else None
if not l3mapping:
return
if not cmd.nics:
return
mappings = {} # mac -> l3uuid
for ele in l3mapping:
m = ele.split("-")
mappings[m[0]] = m[1]
EBTABLES_CMD = ebtables.get_ebtables_cmd()
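# for each nic, dnat unicast frames addressed to the nic MAC on the outer device to broadcast; presumably needed for delivery across the mocbr namespace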
for nic in cmd.nics:
ns = "{}_{}".format(nic.bridgeName, mappings[nic.mac])
outerdev = "outer%s" % ip.get_namespace_id(ns)
rule = " -t nat -A PREROUTING -i {} -d {} -j dnat --to-destination ff:ff:ff:ff:ff:ff".format(outerdev, nic.mac)
bash.bash_r(EBTABLES_CMD + rule)
bash.bash_r("ebtables-save | uniq | ebtables-restore")
def _start_vm(self, cmd):
try:
vm = get_vm_by_uuid_no_retry(cmd.vmInstanceUuid, False)
if vm:
if vm.state == Vm.VM_STATE_RUNNING:
# http://jira.zstack.io/browse/ZSTAC-26937
#raise kvmagent.KvmError(
# 'vm[uuid:%s, name:%s] is already running' % (cmd.vmInstanceUuid, vm.get_name()))
logger.debug('vm[uuid:%s, name:%s] is already running' % (cmd.vmInstanceUuid, vm.get_name()))
return
else:
vm.destroy()
vm = Vm.from_StartVmCmd(cmd)
if cmd.memorySnapshotPath:
vm.restore(cmd.memorySnapshotPath)
return
wait_console = not cmd.addons or cmd.addons['noConsole'] is not True
self._prepare_ebtables_for_mocbr(cmd)
vm.start(cmd.timeout, cmd.createPaused, wait_console)
except libvirt.libvirtError as e:
logger.warn(linux.get_exception_stacktrace())
# c.f. https://access.redhat.com/solutions/2735671
if "org.fedoraproject.FirewallD1 was not provided" in str(e.message):
_stop_world() # to trigger libvirtd restart
raise kvmagent.KvmError(
'unable to start vm[uuid:%s, name:%s], libvirt error: %s' % (
cmd.vmInstanceUuid, cmd.vmName, str(e)))
if "Device or resource busy" in str(e.message):
raise kvmagent.KvmError(
'unable to start vm[uuid:%s, name:%s], libvirt error: %s' % (
cmd.vmInstanceUuid, cmd.vmName, str(e)))
try:
vm = get_vm_by_uuid(cmd.vmInstanceUuid)
if vm and vm.state != Vm.VM_STATE_RUNNING:
raise kvmagent.KvmError(
'vm[uuid:%s, name:%s, state:%s] is not in running state, libvirt error: %s' % (
cmd.vmInstanceUuid, cmd.vmName, vm.state, str(e)))
except kvmagent.KvmError:
raise kvmagent.KvmError(
'unable to start vm[uuid:%s, name:%s], libvirt error: %s' % (cmd.vmInstanceUuid, cmd.vmName, str(e)))
def _cleanup_iptable_chains(self, chain, data):
if 'vnic' not in chain.name:
return False
vnic_name = chain.name.split('-')[0]
if vnic_name not in data:
logger.debug('clean up defunct vnic chain[%s]' % chain.name)
return True
return False
@kvmagent.replyerror
def attach_iso(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
vm = get_vm_by_uuid(cmd.vmUuid)
vm.attach_iso(cmd)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def detach_iso(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
vm = get_vm_by_uuid(cmd.vmUuid)
vm.detach_iso(cmd)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def attach_nic(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = AttchNicResponse()
vm = get_vm_by_uuid(cmd.vmUuid)
vm.attach_nic(cmd)
for iface in vm.domain_xmlobject.devices.get_child_node_as_list('interface'):
if iface.mac.address_ == cmd.nic.mac:
rsp.pciAddress.bus = iface.address.bus_
rsp.pciAddress.function = iface.address.function_
rsp.pciAddress.type = iface.address.type_
rsp.pciAddress.domain = iface.address.domain_
rsp.pciAddress.slot = iface.address.slot_
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def detach_nic(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
vm = get_vm_by_uuid(cmd.vmUuid, False)
if not vm:
return jsonobject.dumps(rsp)
vm.detach_nic(cmd)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def update_nic(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
vm = get_vm_by_uuid(cmd.vmInstanceUuid)
vm.update_nic(cmd)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def start_vm(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = StartVmResponse()
try:
self._record_operation(cmd.vmInstanceUuid, self.VM_OP_START)
self._start_vm(cmd)
logger.debug('successfully started vm[uuid:%s, name:%s]' % (cmd.vmInstanceUuid, cmd.vmName))
try:
vm_pid = linux.find_vm_pid_by_uuid(cmd.vmInstanceUuid)
linux.enable_process_coredump(vm_pid)
linux.set_vm_priority(vm_pid, cmd.priorityConfigStruct)
except Exception as e:
logger.warn("enable coredump for VM: %s: %s" % (cmd.vmInstanceUuid, str(e)))
except kvmagent.KvmError as e:
e_str = linux.get_exception_stacktrace()
logger.warn(e_str)
if "burst" in e_str and "Illegal" in e_str and "rate" in e_str:
rsp.error = "QoS exceed max limit, please check and reset it in zstack"
elif "cannot set up guest memory" in e_str:
logger.warn('unable to start vm[uuid:%s], %s' % (cmd.vmInstanceUuid, e_str))
rsp.error = "No enough physical memory for guest"
else:
rsp.error = e_str
err = self.handle_vfio_irq_conflict(cmd.vmInstanceUuid)
if err != "":
rsp.error = "%s, details: %s" % (err, rsp.error)
rsp.success = False
return jsonobject.dumps(rsp)
def get_vm_stat_with_ps(self, uuid):
"""In case libvirtd is stopped or misbehaved"""
if not linux.find_vm_pid_by_uuid(uuid):
return Vm.VM_STATE_SHUTDOWN
return Vm.VM_STATE_RUNNING
@kvmagent.replyerror
def check_vm_state(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
states = get_all_vm_states()
rsp = CheckVmStateRsp()
for uuid in cmd.vmUuids:
s = states.get(uuid)
if not s:
s = self.get_vm_stat_with_ps(uuid)
rsp.states[uuid] = s
return jsonobject.dumps(rsp)
def _escape(self, size):
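# despite the name, this converts a qemu-img size string such as '16G' or '512M' into MB;
# note it assumes an integer magnitude (a fractional size like '1.5G' would raise ValueError)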
unit = size.strip().lower()[-1]
num = size.strip()[:-1]
units = {
"g": lambda x: x * 1024,
"m": lambda x: x,
"k": lambda x: x / 1024,
}
return int(units[unit](int(num)))
def _get_image_mb_size(self, image):
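# total size = this image's disk usage plus, recursively, that of every backing file in the chain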
backing = shell.call('%s %s | grep "backing file:" | awk -F \'backing file:\' \'{print $2}\' ' %
(qemu_img.subcmd('info'), image)).strip()
size = shell.call('%s %s | grep "disk size:" | awk -F \'disk size:\' \'{print $2}\' ' %
(qemu_img.subcmd('info'), image)).strip()
if not backing:
return self._escape(size)
else:
return self._get_image_mb_size(backing) + self._escape(size)
def _get_volume_bandwidth_value(self, vm_uuid, device_id, mode):
cmd_base = "virsh blkdeviotune %s %s" % (vm_uuid, device_id)
if mode == "total":
return shell.call('%s | grep -w total_bytes_sec | awk \'{print $2}\'' % cmd_base).strip()
elif mode == "read":
return shell.call('%s | grep -w read_bytes_sec | awk \'{print $3}\'' % cmd_base).strip()
elif mode == "write":
return shell.call('%s | grep -w write_bytes_sec | awk \'{print $2}\'' % cmd_base).strip()
@kvmagent.replyerror
def set_volume_bandwidth(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
vm = get_vm_by_uuid(cmd.vmUuid)
_, device_id = vm._get_target_disk(cmd.volume)
## total and read/write of bytes_sec cannot be set at the same time
## http://confluence.zstack.io/pages/viewpage.action?pageId=42599772#comment-42600879
cmd_base = "virsh blkdeviotune %s %s" % (cmd.vmUuid, device_id)
if (cmd.mode == "total") or (cmd.mode is None): # to set total(read/write reset)
shell.call('%s --total_bytes_sec %s' % (cmd_base, cmd.totalBandwidth))
elif cmd.mode == "all":
shell.call('%s --read_bytes_sec %s --write_bytes_sec %s' % (cmd_base, cmd.readBandwidth, cmd.writeBandwidth))
elif cmd.mode == "read": # to set read(write reserved, total reset)
write_bytes_sec = self._get_volume_bandwidth_value(cmd.vmUuid, device_id, "write")
shell.call('%s --read_bytes_sec %s --write_bytes_sec %s' % (cmd_base, cmd.readBandwidth, write_bytes_sec))
elif cmd.mode == "write": # to set write(read reserved, total reset)
read_bytes_sec = self._get_volume_bandwidth_value(cmd.vmUuid, device_id, "read")
shell.call('%s --read_bytes_sec %s --write_bytes_sec %s' % (cmd_base, read_bytes_sec, cmd.writeBandwidth))
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def delete_volume_bandwidth(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
vm = get_vm_by_uuid(cmd.vmUuid)
_, device_id = vm._get_target_disk(cmd.volume)
## total and read/write of bytes_sec cannot be set at the same time
## http://confluence.zstack.io/pages/viewpage.action?pageId=42599772#comment-42600879
cmd_base = "virsh blkdeviotune %s %s" % (cmd.vmUuid, device_id)
is_total_mode = self._get_volume_bandwidth_value(cmd.vmUuid, device_id, "total") != "0"
if cmd.mode == "all": # to delete all(read/write reset)
shell.call('%s --total_bytes_sec 0' % (cmd_base))
elif (cmd.mode == "total") or (cmd.mode is None): # to delete total
if is_total_mode:
shell.call('%s --total_bytes_sec 0' % (cmd_base))
elif cmd.mode == "read": # to delete read(write reserved, total reset)
if not is_total_mode:
write_bytes_sec = self._get_volume_bandwidth_value(cmd.vmUuid, device_id, "write")
shell.call('%s --read_bytes_sec 0 --write_bytes_sec %s' % (cmd_base, write_bytes_sec))
elif cmd.mode == "write": # to delete write(read reserved, total reset)
if not is_total_mode:
read_bytes_sec = self._get_volume_bandwidth_value(cmd.vmUuid, device_id, "read")
shell.call('%s --read_bytes_sec %s --write_bytes_sec 0' % (cmd_base, read_bytes_sec))
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def get_volume_bandwidth(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
vm = get_vm_by_uuid(cmd.vmUuid)
_, device_id = vm._get_target_disk(cmd.volume)
cmd_base = "virsh blkdeviotune %s %s" % (cmd.vmUuid, device_id)
bandWidth = shell.call('%s | grep -w total_bytes_sec | awk \'{print $2}\'' % cmd_base).strip()
bandWidthRead = shell.call('%s | grep -w read_bytes_sec | awk \'{print $3}\'' % cmd_base).strip()
bandWidthWrite = shell.call('%s | grep -w write_bytes_sec | awk \'{print $2}\'' % cmd_base).strip()
rsp.bandWidth = bandWidth if long(bandWidth) > 0 else -1
rsp.bandWidthWrite = bandWidthWrite if long(bandWidthWrite) > 0 else -1
rsp.bandWidthRead = bandWidthRead if long(bandWidthRead) > 0 else -1
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def set_nic_qos(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
try:
if cmd.inboundBandwidth != -1:
shell.call('virsh domiftune %s %s --inbound %s' % (cmd.vmUuid, cmd.internalName, cmd.inboundBandwidth/1024/8))
if cmd.outboundBandwidth != -1:
shell.call('virsh domiftune %s %s --outbound %s' % (cmd.vmUuid, cmd.internalName, cmd.outboundBandwidth/1024/8))
except Exception as e:
e_str = linux.get_exception_stacktrace()
logger.warn(e_str)
if "burst" in e_str and "Illegal" in e_str and "rate" in e_str:
rsp.error = "QoS exceed the max limit, please check and reset it in zstack"
else:
rsp.error = e_str
rsp.success = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def get_nic_qos(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
inbound = shell.call('virsh domiftune %s %s | grep "inbound.average:"|awk \'{print $2}\'' % (cmd.vmUuid, cmd.internalName)).strip()
outbound = shell.call('virsh domiftune %s %s | grep "outbound.average:"|awk \'{print $2}\'' % (cmd.vmUuid, cmd.internalName)).strip()
rsp.inbound = long(inbound) * 8 * 1024 if long(inbound) > 0 else -1
rsp.outbound = long(outbound) * 8 * 1024 if long(outbound) > 0 else -1
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def check_mount_domain(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = CheckMountDomainRsp()
finish_time = time.time() + (cmd.timeout / 1000)
while time.time() < finish_time:
try:
logger.debug("check mount url: %s" % cmd.url)
linux.is_valid_nfs_url(cmd.url)
rsp.active = True
return jsonobject.dumps(rsp)
except Exception as err:
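# 'cannont' below matches the misspelled message raised upstream (presumably by linux.is_valid_nfs_url); keep it verbatim or the match breaks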
if 'cannont resolve to ip address' in err.message:
logger.warn(err.message)
logger.warn('wait 1 second')
else:
raise err
time.sleep(1)
rsp.active = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def change_vm_password(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = ChangeVmPasswordRsp()
vm = get_vm_by_uuid(cmd.accountPerference.vmUuid, False)
try:
if not vm:
raise kvmagent.KvmError('vm is not in running state.')
else:
vm.change_vm_password(cmd)
except kvmagent.KvmError as e:
rsp.error = str(e)
rsp.success = False
rsp.accountPerference = cmd.accountPerference
rsp.accountPerference.accountPassword = "******"
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def harden_console(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
vm = get_vm_by_uuid(cmd.vmUuid)
vm.harden_console(cmd.hostManagementIp)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def vm_sync(self, req):
rsp = VmSyncResponse()
rsp.states, rsp.vmInShutdowns = get_all_vm_sync_states()
# In case of a reboot inside the VM. Note that ZS only defines transient VMs.
retry_for_paused = []
for uuid in rsp.states:
if rsp.states[uuid] == Vm.VM_STATE_SHUTDOWN:
rsp.states[uuid] = Vm.VM_STATE_RUNNING
elif rsp.states[uuid] == Vm.VM_STATE_PAUSED:
retry_for_paused.append(uuid)
# Occasionally, virsh might not be able to list all VM instances with
# uri=qemu://system. To prevent this situation, we double-check the
# 'rsp.states' against the QEMU process list.
output = bash.bash_o("ps x | grep -P -o 'qemu-kvm.*?-name\s+(guest=)?\K.*?,' | sed 's/.$//'").splitlines()
for guest in output:
if guest in rsp.states \
or guest.lower() == "ZStack Management Node VM".lower()\
or guest.startswith("guestfs-"):
continue
logger.warn('guest [%s] not found in virsh list' % guest)
rsp.states[guest] = Vm.VM_STATE_RUNNING
time.sleep(0.5)
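# VMs seen as PAUSED are polled a second time, since the pause may be transient (e.g. during a reboot)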
if len(retry_for_paused) > 0:
states, in_shutdown = get_all_vm_sync_states()
for uuid in states:
if states[uuid] == Vm.VM_STATE_SHUTDOWN:
rsp.states[uuid] = Vm.VM_STATE_RUNNING
elif states[uuid] != Vm.VM_STATE_PAUSED:
rsp.states[uuid] = states[uuid]
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def online_increase_mem(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = IncreaseMemoryResponse()
try:
vm = get_vm_by_uuid(cmd.vmUuid)
memory_size = cmd.memorySize
vm.hotplug_mem(memory_size)
vm = get_vm_by_uuid(cmd.vmUuid)
rsp.memorySize = vm.get_memory()
logger.debug('successfully increased memory of vm[uuid:%s] to %s KiB' % (cmd.vmUuid, vm.get_memory()))
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def online_increase_cpu(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = IncreaseCpuResponse()
try:
vm = get_vm_by_uuid(cmd.vmUuid)
cpu_num = cmd.cpuNum
vm.hotplug_cpu(cpu_num)
vm = get_vm_by_uuid(cmd.vmUuid)
rsp.cpuNum = vm.get_cpu_num()
logger.debug('successfully increased cpu number of vm[uuid:%s] to %s' % (cmd.vmUuid, vm.get_cpu_num()))
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def online_change_cpumem(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = ChangeCpuMemResponse()
try:
vm = get_vm_by_uuid(cmd.vmUuid)
cpu_num = cmd.cpuNum
memory_size = cmd.memorySize
vm.hotplug_mem(memory_size)
vm.hotplug_cpu(cpu_num)
vm = get_vm_by_uuid(cmd.vmUuid)
rsp.cpuNum = vm.get_cpu_num()
rsp.memorySize = vm.get_memory()
logger.debug('successfully added cpu and memory on vm[uuid:%s]' % cmd.vmUuid)
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
return jsonobject.dumps(rsp)
def get_vm_console_info(self, vmUuid):
try:
vm = get_vm_by_uuid(vmUuid)
protocol, vncPort, spicePort, spiceTlsPort = vm.get_vdi_connect_port()
ret = check_vdi_port(vncPort, spicePort, spiceTlsPort)
if ret is True:
return protocol, vncPort, spicePort, spiceTlsPort
# Occasionally, 'virsh list' would list nothing but conn.lookupByName()
# can find the VM and dom.XMLDesc(0) will return VNC port '-1'.
err = 'libvirt failed to get console port for VM %s' % vmUuid
logger.warn(err)
raise kvmagent.KvmError(err)
except kvmagent.KvmError as e:
protocol, vncPort, spicePort, spiceTlsPort = get_console_without_libvirt(vmUuid)
ret = check_vdi_port(vncPort, spicePort, spiceTlsPort)
if ret is True:
return protocol, vncPort, spicePort, spiceTlsPort
raise e
@kvmagent.replyerror
def get_console_port(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = GetVncPortResponse()
try:
protocol, vncPort, spicePort, spiceTlsPort = self.get_vm_console_info(cmd.vmUuid)
rsp.protocol = protocol
rsp.vncPort = vncPort
rsp.spicePort = spicePort
rsp.spiceTlsPort = spiceTlsPort
if vncPort is not None:
rsp.port = vncPort
else:
rsp.port = spicePort
logger.debug('successfully get vncPort[%s], spicePort[%s], spiceTlsPort[%s] of vm[uuid:%s]' % (
vncPort, spicePort, spiceTlsPort, cmd.vmUuid))
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
return jsonobject.dumps(rsp)
def _stop_vm(self, cmd):
try:
vm = get_vm_by_uuid(cmd.uuid)
strategy = str(cmd.type)
if strategy == "cold" or strategy == "force":
vm.stop(strategy=strategy)
else:
vm.stop(timeout=cmd.timeout / 2)
except kvmagent.KvmError as e:
logger.debug(linux.get_exception_stacktrace())
finally:
# libvirt is not reliable, c.f. ZSTAC-15412
self.kill_vm(cmd.uuid)
def kill_vm(self, vm_uuid):
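# libvirt is not always reliable (see ZSTAC-15412 above): if a QEMU process named after this uuid still exists, kill it directly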
output = bash.bash_o("ps x | grep -P -o 'qemu-kvm.*?-name\s+(guest=)?\K%s,' | sed 's/.$//'" % vm_uuid)
if vm_uuid not in output:
return
logger.debug('killing vm %s' % vm_uuid)
vm_pid = linux.find_vm_pid_by_uuid(vm_uuid)
if vm_pid:
linux.kill_process(vm_pid)
@kvmagent.replyerror
def stop_vm(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = StopVmResponse()
try:
self._record_operation(cmd.uuid, self.VM_OP_STOP)
self._stop_vm(cmd)
logger.debug("successfully stopped vm[uuid:%s]" % cmd.uuid)
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def pause_vm(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
try:
self._record_operation(cmd.uuid, self.VM_OP_SUSPEND)
rsp = PauseVmResponse()
vm = get_vm_by_uuid(cmd.uuid)
vm.pause()
logger.debug('successfully paused vm[uuid:%s]' % cmd.uuid)
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def resume_vm(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
try:
self._record_operation(cmd.uuid, self.VM_OP_RESUME)
rsp = ResumeVmResponse()
vm = get_vm_by_uuid(cmd.uuid)
vm.resume()
logger.debug('successfully resumed vm[uuid:%s]' % cmd.uuid)
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def reboot_vm(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = RebootVmResponse()
try:
self._record_operation(cmd.uuid, self.VM_OP_REBOOT)
vm = get_vm_by_uuid(cmd.uuid)
vm.reboot(cmd)
logger.debug('successfully rebooted vm[uuid:%s]' % cmd.uuid)
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def destroy_vm(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = DestroyVmResponse()
try:
self._record_operation(cmd.uuid, self.VM_OP_DESTROY)
vm = get_vm_by_uuid(cmd.uuid, False)
if vm:
vm.destroy()
logger.debug('successfully destroyed vm[uuid:%s]' % cmd.uuid)
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def attach_data_volume(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = AttachDataVolumeResponse()
try:
volume = cmd.volume
vm = get_vm_by_uuid(cmd.vmInstanceUuid)
if vm.state != Vm.VM_STATE_RUNNING and vm.state != Vm.VM_STATE_PAUSED:
raise kvmagent.KvmError(
'unable to attach volume[%s] to vm[uuid:%s], vm must be running or paused' % (volume.installPath, vm.uuid))
vm.attach_data_volume(cmd.volume, cmd.addons)
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
touchQmpSocketWhenExists(cmd.vmInstanceUuid)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def detach_data_volume(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = DetachDataVolumeResponse()
try:
volume = cmd.volume
vm = get_vm_by_uuid(cmd.vmInstanceUuid)
if vm.state != Vm.VM_STATE_RUNNING and vm.state != Vm.VM_STATE_PAUSED:
raise kvmagent.KvmError(
'unable to detach volume[%s] from vm[uuid:%s], vm must be running or paused' % (volume.installPath, vm.uuid))
vm.detach_data_volume(volume)
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def migrate_vm(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = MigrateVmResponse()
try:
self._record_operation(cmd.vmUuid, self.VM_OP_MIGRATE)
if cmd.migrateFromDestination:
with contextlib.closing(get_connect(cmd.srcHostIp)) as conn:
vm = get_vm_by_uuid(cmd.vmUuid, False, conn)
if vm is None:
logger.warn('unable to find vm {0} on host {1}'.format(cmd.vmUuid, cmd.srcHostIp))
raise kvmagent.KvmError('unable to find vm %s on host %s' % (cmd.vmUuid, cmd.srcHostIp))
vm.migrate(cmd)
else:
vm = get_vm_by_uuid(cmd.vmUuid)
vm.migrate(cmd)
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def check_volume(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
vm = get_vm_by_uuid(cmd.uuid)
for volume in cmd.volumes:
vm._get_target_disk(volume)
return jsonobject.dumps(rsp)
def _get_new_disk(self, oldDisk, volume):
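# build the destination <disk> element for block migration, carrying over identity children
# (target/boot/alias/address/wwn/serial) from the old disk so the guest keeps seeing the same device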
def filebased_volume(_v):
disk = etree.Element('disk', {'type': 'file', 'device': 'disk', 'snapshot': 'external'})
e(disk, 'driver', None, {'name': 'qemu', 'type': 'qcow2', 'cache': _v.cacheMode})
e(disk, 'source', None, {'file': _v.installPath})
return disk
def ceph_volume(_v):
def ceph_virtio():
vc = VirtioCeph()
vc.volume = _v
return vc.to_xmlobject()
def ceph_blk():
ic = BlkCeph()
ic.volume = _v
return ic.to_xmlobject()
def ceph_virtio_scsi():
vsc = VirtioSCSICeph()
vsc.volume = _v
return vsc.to_xmlobject()
if _v.useVirtioSCSI:
disk = ceph_virtio_scsi()
if _v.shareable:
e(disk, 'shareable')
return disk
if _v.useVirtio:
return ceph_virtio()
else:
return ceph_blk()
def block_volume(_v):
disk = etree.Element('disk', {'type': 'block', 'device': 'disk', 'snapshot': 'external'})
e(disk, 'driver', None,
{'name': 'qemu', 'type': 'raw', 'cache': 'none', 'io': 'native'})
e(disk, 'source', None, {'dev': _v.installPath})
return disk
if volume.deviceType == 'file':
ele = filebased_volume(volume)
elif volume.deviceType == 'ceph':
ele = ceph_volume(volume)
elif volume.deviceType == 'block':
ele = block_volume(volume)
else:
raise Exception('unsupported volume deviceType[%s]' % volume.deviceType)
tags_to_keep = [ 'target', 'boot', 'alias', 'address', 'wwn', 'serial']
for c in oldDisk.getchildren():
if c.tag in tags_to_keep:
child = ele.find(c.tag)
if child is not None: ele.remove(child)
ele.append(c)
logger.info("updated disk XML: " + etree.tostring(ele))
return ele
def _build_domain_new_xml(self, vm, volumeDicts):
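# write a modified copy of the domain XML to a temp file, replacing each migrating disk's <disk> element with one pointing at its destination volume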
migrate_disks = {}
for oldpath, volume in volumeDicts.items():
_, disk_name = vm._get_target_disk_by_path(oldpath)
migrate_disks[disk_name] = volume
fd, fpath = tempfile.mkstemp()
with os.fdopen(fd, 'w') as tmpf:
tmpf.write(vm.domain_xml)
tree = etree.parse(fpath)
devices = tree.getroot().find('devices')
for disk in tree.iterfind('devices/disk'):
dev = disk.find('target').attrib['dev']
if dev in migrate_disks:
new_disk = self._get_new_disk(disk, migrate_disks[dev])
parent_index = list(devices).index(disk)
devices.remove(disk)
devices.insert(parent_index, new_disk)
tree.write(fpath)
return migrate_disks.keys(), fpath
def _do_block_migration(self, vmUuid, dstHostIp, volumeDicts):
vm = get_vm_by_uuid(vmUuid)
disks, fpath = self._build_domain_new_xml(vm, volumeDicts)
dst = 'qemu+tcp://{0}/system'.format(dstHostIp)
migurl = 'tcp://{0}'.format(dstHostIp)
diskstr = ','.join(disks)
flags = "--live --p2p --copy-storage-all"
if LIBVIRT_MAJOR_VERSION >= 4:
if any(s.startswith('/dev/') for s in vm.list_blk_sources()):
flags += " --unsafe"
cmd = "virsh migrate {} --migrate-disks {} --xml {} {} {} {}".format(flags, diskstr, fpath, vmUuid, dst, migurl)
try:
shell.check_run(cmd)
finally:
os.remove(fpath)
@kvmagent.replyerror
def block_migrate_vm(self, req):
rsp = kvmagent.AgentResponse()
cmd = jsonobject.loads(req[http.REQUEST_BODY])
self._record_operation(cmd.vmUuid, self.VM_OP_MIGRATE)
self._do_block_migration(cmd.vmUuid, cmd.destHostIp, cmd.disks.__dict__)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def merge_snapshot_to_volume(self, req):
rsp = MergeSnapshotRsp()
cmd = jsonobject.loads(req[http.REQUEST_BODY])
vm = get_vm_by_uuid(cmd.vmUuid, exception_if_not_existing=True)
if vm.state != vm.VM_STATE_RUNNING:
rsp.error = 'vm[uuid:%s] is not running, cannot do live snapshot chain merge' % vm.uuid
rsp.success = False
return jsonobject.dumps(rsp)
vm.merge_snapshot(cmd)
return jsonobject.dumps(rsp)
@staticmethod
def _get_snapshot_size(install_path):
size = linux.get_local_file_disk_usage(install_path)
if size is None or size == 0:
if install_path.startswith("/dev/"):
size = int(lvm.get_lv_size(install_path))
else:
size = linux.qcow2_virtualsize(install_path)
return size
@kvmagent.replyerror
def take_volumes_snapshots(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY]) # type: TakeSnapshotsCmd
rsp = TakeSnapshotsResponse() # type: TakeSnapshotsResponse
for snapshot_job in cmd.snapshotJobs:
if snapshot_job.vmInstanceUuid != cmd.snapshotJobs[0].vmInstanceUuid:
raise kvmagent.KvmError("can not take snapshot on multiple vms[%s and %s]" %
snapshot_job.vmInstanceUuid, cmd.snapshotJobs[0].vmInstanceUuid)
if snapshot_job.live != cmd.snapshotJobs[0].live:
raise kvmagent.KvmError("can not take snapshot on different live status")
Vm.ensure_no_internal_snapshot(snapshot_job.volume.installPath)
def makedir_if_need(new_path):
dirname = os.path.dirname(new_path)
if not os.path.exists(dirname):
os.makedirs(dirname, 0o755)
def get_size(install_path):
"""
:rtype: long
"""
return VmPlugin._get_snapshot_size(install_path)
def take_full_snapshot_by_qemu_img_convert(previous_install_path, install_path, new_volume_install_path):
"""
:rtype: (str, str, long)
"""
makedir_if_need(install_path)
linux.create_template(previous_install_path, install_path)
new_volume_path = new_volume_install_path if new_volume_install_path is not None else os.path.join(os.path.dirname(install_path), '{0}.qcow2'.format(uuidhelper.uuid()))
makedir_if_need(new_volume_path)
linux.qcow2_clone_with_cmd(install_path, new_volume_path, cmd)
return install_path, new_volume_path, get_size(install_path)
def take_delta_snapshot_by_qemu_img_convert(previous_install_path, install_path, new_volume_install_path):
"""
:rtype: (str, str, long)
"""
new_volume_path = new_volume_install_path if new_volume_install_path is not None else os.path.join(os.path.dirname(install_path), '{0}.qcow2'.format(uuidhelper.uuid()))
makedir_if_need(new_volume_path)
linux.qcow2_clone_with_cmd(previous_install_path, new_volume_path, cmd)
return previous_install_path, new_volume_path, get_size(install_path)
vm = get_vm_by_uuid(cmd.snapshotJobs[0].vmInstanceUuid, exception_if_not_existing=False)
try:
if vm and vm.state not in vm.ALLOW_SNAPSHOT_STATE:
raise kvmagent.KvmError(
'unable to take snapshot on vm[uuid:{0}] volume[id:{1}], '
'because vm is not in [{2}], current state is {3}'.format(
vm.uuid, cmd.snapshotJobs[0].deviceId, vm.ALLOW_SNAPSHOT_STATE, vm.state))
if vm and (vm.state == vm.VM_STATE_RUNNING or vm.state == vm.VM_STATE_PAUSED):
rsp.snapshots = vm.take_live_volumes_delta_snapshots(cmd.snapshotJobs)
else:
if vm and cmd.snapshotJobs[0].live is True:
raise kvmagent.KvmError("expected live snapshot but vm[%s] state is %s" %
vm.uuid, vm.state)
elif not vm and cmd.snapshotJobs[0].live is True:
raise kvmagent.KvmError("expected live snapshot but can not find vm[%s]" %
cmd.snapshotJobs[0].vmInstanceUuid)
for snapshot_job in cmd.snapshotJobs:
if snapshot_job.full:
rsp.snapshots.append(VolumeSnapshotResultStruct(
snapshot_job.volumeUuid, *take_full_snapshot_by_qemu_img_convert(
snapshot_job.previousInstallPath, snapshot_job.installPath, snapshot_job.newVolumeInstallPath)))
else:
rsp.snapshots.append(VolumeSnapshotResultStruct(
snapshot_job.volumeUuid, *take_delta_snapshot_by_qemu_img_convert(
snapshot_job.previousInstallPath, snapshot_job.installPath, snapshot_job.newVolumeInstallPath)))
except kvmagent.KvmError as error:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(error)
rsp.success = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def take_volume_snapshot(self, req):
""" Take snapshot for a volume
:param req: The request obj, example of req.body::
{
'vmUuid': '0dc62031-678d-4040-95e4-64fb217a2669',
'volumeUuid': '2e9fd964-ba33-4214-aaad-c6e16b9ae72b',
'volume': {
},
'installPath': '',
'volumeInstallPath': '',
'newVolumeUuid': '',
'newVolumeInstallPath': '',
'fullSnapshot': False,
'isBaremetal2InstanceOnlineSnapshot': False
}
"""
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = TakeSnapshotResponse()
def makedir_if_need(new_path):
dirname = os.path.dirname(new_path)
if not os.path.exists(dirname):
os.makedirs(dirname, 0o755)
def take_full_snapshot_by_qemu_img_convert(previous_install_path, install_path):
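# full snapshot: flatten the current chain into install_path, then create a new top overlay (presumably backed by the snapshot) as the active volume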
makedir_if_need(install_path)
linux.create_template(previous_install_path, install_path)
new_volume_path = cmd.newVolumeInstallPath if cmd.newVolumeInstallPath is not None else os.path.join(os.path.dirname(install_path), '{0}.qcow2'.format(uuidhelper.uuid()))
makedir_if_need(new_volume_path)
linux.qcow2_clone_with_cmd(install_path, new_volume_path, cmd)
return install_path, new_volume_path
def take_delta_snapshot_by_qemu_img_convert(previous_install_path, install_path):
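# delta snapshot: the current top file becomes the read-only snapshot; a new overlay (presumably backed by it) becomes the active volume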
new_volume_path = cmd.newVolumeInstallPath if cmd.newVolumeInstallPath is not None else os.path.join(os.path.dirname(install_path), '{0}.qcow2'.format(uuidhelper.uuid()))
makedir_if_need(new_volume_path)
linux.qcow2_clone_with_cmd(previous_install_path, new_volume_path, cmd)
return previous_install_path, new_volume_path
try:
Vm.ensure_no_internal_snapshot(cmd.volumeInstallPath)
if not cmd.vmUuid:
if cmd.fullSnapshot:
rsp.snapshotInstallPath, rsp.newVolumeInstallPath = take_full_snapshot_by_qemu_img_convert(
cmd.volumeInstallPath, cmd.installPath)
else:
rsp.snapshotInstallPath, rsp.newVolumeInstallPath = take_delta_snapshot_by_qemu_img_convert(
cmd.volumeInstallPath, cmd.installPath)
else:
# New param in cmd: a flag indicating that the instance is a
# baremetal2 instance and that it is online
if cmd.isBaremetal2InstanceOnlineSnapshot:
with bm_utils.NamedLock(name='baremetal_v2_volume_operator'):
src_vol_driver, dst_vol_driver = BmV2GwAgent.pre_take_volume_snapshot(cmd)
try:
rsp.snapshotInstallPath, rsp.newVolumeInstallPath = take_delta_snapshot_by_qemu_img_convert(
cmd.volumeInstallPath, cmd.installPath)
BmV2GwAgent.post_take_volume_snapshot(src_vol_driver, dst_vol_driver)
except Exception as e:
# Try to rollback the snapshot action
# BmV2GwAgent.resume_device(src_vol_driver)
BmV2GwAgent.rollback_volume_snapshot(
src_vol_driver, dst_vol_driver)
logger.error(traceback.format_exc())
raise e
else:
vm = get_vm_by_uuid(cmd.vmUuid, exception_if_not_existing=False)
if vm and vm.state != vm.VM_STATE_RUNNING and vm.state != vm.VM_STATE_SHUTDOWN and vm.state != vm.VM_STATE_PAUSED:
raise kvmagent.KvmError(
'unable to take snapshot on vm[uuid:{0}] volume[id:{1}], because vm is not Running, Stopped or Paused, current state is {2}'.format(
vm.uuid, cmd.volume.deviceId, vm.state))
if vm and (vm.state == vm.VM_STATE_RUNNING or vm.state == vm.VM_STATE_PAUSED):
rsp.snapshotInstallPath, rsp.newVolumeInstallPath = vm.take_volume_snapshot(cmd.volume,
cmd.installPath,
cmd.fullSnapshot)
else:
if cmd.fullSnapshot:
rsp.snapshotInstallPath, rsp.newVolumeInstallPath = take_full_snapshot_by_qemu_img_convert(
cmd.volumeInstallPath, cmd.installPath)
else:
rsp.snapshotInstallPath, rsp.newVolumeInstallPath = take_delta_snapshot_by_qemu_img_convert(
cmd.volumeInstallPath, cmd.installPath)
if cmd.fullSnapshot:
logger.debug(
'took full snapshot on vm[uuid:{0}] volume[id:{1}], snapshot path:{2}, new volume path:{3}'.format(
cmd.vmUuid, cmd.volume.deviceId, rsp.snapshotInstallPath, rsp.newVolumeInstallPath))
else:
logger.debug(
'took delta snapshot on vm[uuid:{0}] volume[id:{1}], snapshot path:{2}, new volume path:{3}'.format(
cmd.vmUuid, cmd.volume.deviceId, rsp.snapshotInstallPath, rsp.newVolumeInstallPath))
linux.sync_file(rsp.snapshotInstallPath)
rsp.size = VmPlugin._get_snapshot_size(rsp.snapshotInstallPath)
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
rsp.error = str(e)
rsp.success = False
if not cmd.isBaremetal2InstanceOnlineSnapshot:
touchQmpSocketWhenExists(cmd.vmUuid)
return jsonobject.dumps(rsp)
def push_backing_files(self, isc, hostname, drivertype, source):
if drivertype != 'qcow2':
return None
bf = linux.qcow2_get_backing_file(source.file_)
if bf:
imf = isc.upload_image(hostname, bf)
return imf
return None
def do_cancel_backup_jobs(self, cmd):
isc = ImageStoreClient()
isc.stop_backup_jobs(cmd.vmUuid)
# returns list[VolumeBackupInfo]
def do_take_volumes_backup(self, cmd, target_disks, bitmaps, dstdir):
isc = ImageStoreClient()
backupArgs = {}
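# each backupArgs entry is a (bitmap, mode, nodename, speed) tuple; see get_backup_args() below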
parents = {}
speed = 0
if cmd.volumeWriteBandwidth:
speed = cmd.volumeWriteBandwidth
device_ids = [volume.deviceId for volume in cmd.volumes]
for deviceId in device_ids:
target_disk = target_disks[deviceId]
drivertype = target_disk.driver.type_
nodename = self.get_backup_device_name(target_disk)
source = target_disk.source
bitmap = bitmaps[deviceId]
def get_backup_args():
if bitmap:
return bitmap, 'full' if cmd.mode == 'full' else 'auto', nodename, speed
bm = 'zsbitmap%d' % deviceId
if cmd.mode == 'full':
return bm, 'full', nodename, speed
imf = self.push_backing_files(isc, cmd.hostname, drivertype, source)
if not imf:
return bm, 'full', nodename, speed
parent = isc._build_install_path(imf.name, imf.id)
parents[deviceId] = parent
return bm, 'top', nodename, speed
backupArgs[deviceId] = get_backup_args()
logger.info('taking backup for vm: %s' % cmd.vmUuid)
res = isc.backup_volumes(cmd.vmUuid, backupArgs.values(), dstdir, Report.from_spec(cmd, "VmBackup"), get_task_stage(cmd))
logger.info('completed backup for vm: %s' % cmd.vmUuid)
backres = jsonobject.loads(res)
bkinfos = []
for deviceId in device_ids:
nodename = backupArgs[deviceId][2]
nodebak = backres[nodename]
installPath = None
if nodebak.mode == 'incremental':
installPath = self.getLastBackup(deviceId, cmd.backupInfos)
else:
installPath = parents.get(deviceId)
info = VolumeBackupInfo(deviceId,
backupArgs[deviceId][0],
nodebak.backupFile,
installPath)
if nodebak.mode == 'top' and info.parentInstallPath is None:
target_disk = target_disks[deviceId]
drivertype = target_disk.driver.type_
source = target_disk.source
imf = self.push_backing_files(isc, cmd.hostname, drivertype, source)
if imf:
parent = isc._build_install_path(imf.name, imf.id)
info.parentInstallPath = parent
bkinfos.append(info)
return bkinfos
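# Backup mode summary (as implemented above): an existing bitmap yields 'full' only
# when cmd.mode == 'full' and 'auto' otherwise; a fresh 'zsbitmap<N>' bitmap yields
# 'full' unless the volume has an uploadable backing file, in which case the mode is
# 'top' with the uploaded image recorded as the backup's parent install path.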
# returns tuple: (bitmap, parent)
def do_take_volume_backup(self, cmd, drivertype, nodename, source, dest):
isc = ImageStoreClient()
bitmap = None
parent = None
mode = None
topoverlay = None
speed = 0
if drivertype == 'qcow2':
topoverlay = source.file_
def get_parent_bitmap_mode():
if cmd.bitmap:
return None, cmd.bitmap, 'full' if cmd.mode == 'full' else 'auto'
bitmap = 'zsbitmap%d' % (cmd.volume.deviceId)
if drivertype != 'qcow2':
return None, bitmap, 'full'
if cmd.mode == 'full':
return None, bitmap, 'full'
bf = linux.qcow2_get_backing_file(topoverlay)
if not bf:
return None, bitmap, 'full'
imf = isc.upload_image(cmd.hostname, bf)
parent = isc._build_install_path(imf.name, imf.id)
return parent, bitmap, 'top'
parent, bitmap, mode = get_parent_bitmap_mode()
if cmd.volumeWriteBandwidth:
speed = cmd.volumeWriteBandwidth
mode = isc.backup_volume(cmd.vmUuid, nodename, bitmap, mode, dest, speed, Report.from_spec(cmd, "VolumeBackup"), get_task_stage(cmd))
logger.info('finished backup volume with mode: %s' % mode)
if mode == 'incremental':
return bitmap, cmd.lastBackup
if mode == 'top' and parent is None and topoverlay is not None:
bf = linux.qcow2_get_backing_file(topoverlay)
imf = isc.upload_image(cmd.hostname, bf)
parent = isc._build_install_path(imf.name, imf.id)
return bitmap, parent
@staticmethod
def get_backup_device_name(disk):
return ('' if disk.type_ == 'quorum' else 'drive-') + disk.alias.name_
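# Example (hypothetical alias): a regular disk with alias 'virtio-disk0' maps to the
# block node 'drive-virtio-disk0', while a quorum disk uses its alias name unchanged.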
def getLastBackup(self, deviceId, backupInfos):
for info in backupInfos:
if info.deviceId == deviceId:
return info.lastBackup
return None
def getBitmap(self, deviceId, backupInfos):
for info in backupInfos:
if info.deviceId == deviceId:
return info.bitmap
return None
@kvmagent.replyerror
def cancel_backup_jobs(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = TakeVolumesBackupsResponse()
try:
vm = get_vm_by_uuid(cmd.vmUuid, exception_if_not_existing=False)
if not vm:
raise kvmagent.KvmError("vm[uuid: %s] not found by libvirt" % cmd.vmUuid)
self.do_cancel_backup_jobs(cmd)
except kvmagent.KvmError as e:
logger.warn("cancel vm[uuid:%s] backup failed: %s" % (cmd.vmUuid, str(e)))
rsp.error = str(e)
rsp.success = False
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def take_volumes_backups(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = TakeVolumesBackupsResponse()
vm = get_vm_by_uuid(cmd.vmUuid, exception_if_not_existing=False)
if not vm:
raise kvmagent.KvmError("vm[uuid: %s] not found by libvirt" % cmd.vmUuid)
storage = RemoteStorageFactory.get_remote_storage(cmd)
try:
storage.mount()
target_disks = {}
for volume in cmd.volumes:
target_disk, _ = vm._get_target_disk(volume)
target_disks[volume.deviceId] = target_disk
bitmaps = {}
device_ids = [volume.deviceId for volume in cmd.volumes]
for deviceId in device_ids:
bitmap = self.getBitmap(deviceId, cmd.backupInfos)
bitmaps[deviceId] = bitmap
res = self.do_take_volumes_backup(cmd,
target_disks,
bitmaps,
storage.local_work_dir)
for r in res:
r.backupFile = os.path.join(cmd.uploadDir, r.backupFile)
rsp.backupInfos = res
except Exception as e:
content = traceback.format_exc()
logger.warn("take vm[uuid:%s] backup failed: %s\n%s" % (cmd.vmUuid, str(e), content))
rsp.error = str(e)
rsp.success = False
finally:
storage.umount()
storage.clean()
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def take_volume_backup(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = TakeVolumeBackupResponse()
vm = get_vm_by_uuid(cmd.vmUuid, exception_if_not_existing=False)
if not vm:
raise kvmagent.KvmError("vm[uuid: %s] not found by libvirt" % cmd.vmUuid)
storage = RemoteStorageFactory.get_remote_storage(cmd)
fname = uuidhelper.uuid()+".qcow2"
try:
storage.mount()
target_disk, _ = vm._get_target_disk(cmd.volume)
bitmap, parent = self.do_take_volume_backup(cmd,
target_disk.driver.type_, # 'qcow2' etc.
self.get_backup_device_name(target_disk), # 'virtio-disk0' etc.
target_disk.source,
os.path.join(storage.local_work_dir, fname))
logger.info('finished backup volume with parent: %s' % parent)
rsp.bitmap = bitmap
rsp.parentInstallPath = parent
rsp.backupFile = os.path.join(cmd.uploadDir, fname)
except Exception as e:
content = traceback.format_exc()
logger.warn("take volume backup failed: " + str(e) + '\n' + content)
rsp.error = str(e)
rsp.success = False
finally:
storage.umount()
storage.clean()
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def block_stream(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = BlockStreamResponse()
if not cmd.vmUuid:
rsp.success = True
return jsonobject.dumps(rsp)
vm = get_vm_by_uuid(cmd.vmUuid, exception_if_not_existing=False)
if not vm:
rsp.success = True
return jsonobject.dumps(rsp)
vm.block_stream_disk(cmd.volume)
rsp.success = True
return jsonobject.dumps(rsp)
@kvmagent.replyerror
@lock.lock('iscsiadm')
def logout_iscsi_target(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
shell.call(
'iscsiadm -m node --targetname "%s" --portal "%s:%s" --logout' % (cmd.target, cmd.hostname, cmd.port))
rsp = LogoutIscsiTargetRsp()
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def login_iscsi_target(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
login = IscsiLogin()
login.server_hostname = cmd.hostname
login.server_port = cmd.port
login.chap_password = cmd.chapPassword
login.chap_username = cmd.chapUsername
login.target = cmd.target
login.login()
return jsonobject.dumps(LoginIscsiTargetRsp())
@kvmagent.replyerror
def delete_console_firewall_rule(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
vir = VncPortIptableRule()
vir.vm_internal_id = cmd.vmInternalId
vir.host_ip = cmd.hostManagementIp
vir.delete()
return jsonobject.dumps(kvmagent.AgentResponse())
@kvmagent.replyerror
def create_ceph_secret_key(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
VmPlugin._create_ceph_secret_key(cmd.userKey, cmd.uuid)
return jsonobject.dumps(kvmagent.AgentResponse())
@staticmethod
def _reload_ceph_secret_keys():
for u, k in VmPlugin.secret_keys.items():
VmPlugin._create_ceph_secret_key(k, u)
@staticmethod
def _create_ceph_secret_key(userKey, uuid):
VmPlugin.secret_keys[uuid] = userKey
sh_cmd = shell.ShellCmd('virsh secret-get-value %s' % uuid)
sh_cmd(False)
if sh_cmd.stdout.strip() == userKey:
return
elif sh_cmd.return_code == 0:
shell.call('virsh secret-set-value %s %s' % (uuid, userKey))
return
# for some reason, ceph doesn't work with the secret created by libvirt
# we have to use the command line here
content = '''
<secret ephemeral='yes' private='no'>
<uuid>%s</uuid>
<usage type='ceph'>
<name>%s</name>
</usage>
</secret>
''' % (uuid, uuid)
spath = linux.write_to_temp_file(content)
try:
o = shell.call("virsh secret-define %s" % spath)
o = o.strip(' \n\t\r')
_, generateuuid, _ = o.split()
shell.call('virsh secret-set-value %s %s' % (generateuuid, userKey))
finally:
os.remove(spath)
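# Flow sketch for the method above: 'virsh secret-get-value' probes for an existing
# secret; a matching value is a no-op, a differing one is updated via
# 'virsh secret-set-value', and a missing secret is defined from the ephemeral XML
# above before its value is set.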
@staticmethod
def add_amdgpu_to_blacklist():
r_amd = bash.bash_r("grep -E 'modprobe.blacklist.*amdgpu' /etc/default/grub")
if r_amd != 0:
r_amd, o_amd, e_amd = bash.bash_roe("sed -i 's/radeon/amdgpu,radeon/g' /etc/default/grub")
if r_amd != 0:
return False, "%s %s" % (e_amd, o_amd)
r_amd, o_amd, e_amd = bash.bash_roe("grub2-mkconfig -o /boot/grub2/grub.cfg")
if r_amd != 0:
return False, "%s %s" % (e_amd, o_amd)
r_amd, o_amd, e_amd = bash.bash_roe("grub2-mkconfig -o /etc/grub2-efi.cfg")
if r_amd != 0:
return False, "%s %s" % (e_amd, o_amd)
return True, None
@kvmagent.replyerror
@in_bash
def hot_plug_pci_device(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = HotPlugPciDeviceRsp()
addr = cmd.pciDeviceAddress
domain, bus, slot, function = parse_pci_device_address(addr)
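# Assuming the usual "dddd:bb:ss.f" notation, a hypothetical address "0000:03:00.0"
# would parse to domain='0000', bus='03', slot='00', function='0'.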
content = '''
<hostdev mode='subsystem' type='pci'>
<driver name='vfio'/>
<source>
<address type='pci' domain='0x%s' bus='0x%s' slot='0x%s' function='0x%s'/>
</source>
</hostdev>''' % (domain, bus, slot, function)
spath = linux.write_to_temp_file(content)
# do not attach a pci device immediately after detaching one from the same vm
vm = get_vm_by_uuid(cmd.vmUuid)
vm._wait_vm_run_until_seconds(60)
self.timeout_object.wait_until_object_timeout('hot-unplug-pci-device-from-vm-%s' % cmd.vmUuid)
r, o, e = bash.bash_roe("virsh attach-device %s %s" % (cmd.vmUuid, spath))
self.timeout_object.put('hot-plug-pci-device-to-vm-%s' % cmd.vmUuid, timeout=30)
if r != 0:
rsp.success = False
err = self.handle_vfio_irq_conflict_with_addr(cmd.vmUuid, addr)
if err == "":
rsp.error = "failed to attach-device %s to %s: %s, %s" % (addr, cmd.vmUuid, o, e)
else:
rsp.error = "failed to handle_vfio_irq_conflict_with_addr: %s, details: %s %s" % (err, o, e)
logger.debug("attach-device %s to %s: %s, %s" % (spath, cmd.vmUuid, o, e))
return jsonobject.dumps(rsp)
@in_bash
def handle_vfio_irq_conflict_with_addr(self, vmUuid, addr):
logger.debug("check irq conflict with %s, %s" % (vmUuid, addr))
cmd = ("tail -n 5 /var/log/libvirt/qemu/%s.log | grep -E 'vfio: Error: Failed to setup INTx fd: Device or resource busy'" %
vmUuid)
r, o, e = bash.bash_roe(cmd)
if r != 0:
return ""
cmd = "lspci -vs %s | grep IRQ | awk '{print $5}' | grep -E -o '[[:digit:]]+'" % addr
r, o, e = bash.bash_roe(cmd)
if o == "":
return "can not get irq"
hostname = bash.bash_o("hostname -f")
cmd = "devices=`find /sys/devices/ -iname 'irq' | grep pci | xargs grep %s | grep -v '%s' | awk -F '/' '{ print \"/\"$2\"/\"$3\"/\"$4\"/\"$5 }' | sort | uniq`;" % (o.strip(), addr) + \
" for dev in $devices; do wc -l $dev/msi_bus; done | grep -E '^.*0 /sys' | awk -F '/' '{ print \"/\"$2\"/\"$3\"/\"$4\"/\"$5 }'"
r, o, e = bash.bash_roe(cmd)
if o == "":
return "there are irq conflict, but zstack can not get irq conflict device, you need fix it manually"
ret = ""
names = ""
for dev in o.splitlines():
if dev.strip() != "":
ret += "echo 1 > %s/remove; " % dev
cmd = "lspci -s %s" % dev.split('/')[-1]
r, o, e = bash.bash_roe(cmd)
names += o.strip()
return "WARN: found irq conflict for pci device addr %s, please execute '%s', and then try to passthrough again. Please noted, the above command will remove the conflicted devices(%s) from system, ONLY reboot can bring the device back to service." % \
(addr, ret, names)
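# Detection sketch (as coded above): read the device's IRQ from 'lspci -vs <addr>',
# scan /sys/devices for other pci devices sharing that IRQ whose msi_bus reports 0,
# and tell the operator to remove those devices before retrying passthrough.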
@in_bash
def handle_vfio_irq_conflict(self, vmUuid):
cmd = ("tail -n 5 /var/log/libvirt/qemu/%s.log | grep -E 'qemu.*vfio: Error: Failed to setup INTx fd: Device or resource busy' | awk -F'[=,]' '{ print $3 }'" %
vmUuid)
r, o, e = bash.bash_roe(cmd)
if r != 0:
return ""
return self.handle_vfio_irq_conflict_with_addr(vmUuid, o.strip())
@kvmagent.replyerror
@in_bash
def hot_unplug_pci_device(self, req):
@linux.retry(3, 3)
def find_pci_device(vm_uuid, pci_addr):
domain, bus, slot, function = parse_pci_device_address(pci_addr)
cmd = """virsh dumpxml %s | grep -A3 -E '<hostdev.*pci' | grep "<address domain='0x%s' bus='0x%s' slot='0x%s' function='0x%s'/>" """ % \
(vm_uuid, domain, bus, slot, function)
r, o, e = bash.bash_roe(cmd)
return o != ""
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = HotUnplugPciDeviceRsp()
addr = cmd.pciDeviceAddress
if not find_pci_device(cmd.vmUuid, addr):
logger.debug("pci device %s not found" % addr)
return jsonobject.dumps(rsp)
domain, bus, slot, function = parse_pci_device_address(addr)
content = '''
<hostdev mode='subsystem' type='pci'>
<driver name='vfio'/>
<source>
<address type='pci' domain='0x%s' bus='0x%s' slot='0x%s' function='0x%s'/>
</source>
</hostdev>''' % (domain, bus, slot, function)
spath = linux.write_to_temp_file(content)
# no need to detach pci device if vm is shutdown
vm = get_vm_by_uuid_no_retry(cmd.vmUuid, exception_if_not_existing=False)
if not vm or vm.state == Vm.VM_STATE_SHUTDOWN:
logger.debug("vm[uuid:%s] is shutdown, no need to detach pci device" % cmd.vmUuid)
return jsonobject.dumps(rsp)
# do not detach pci device immediately after starting vm instance
try:
vm._wait_vm_run_until_seconds(60)
except Exception:
logger.debug("cannot find pid of vm[uuid:%s, state:%s], no need to detach pci device" % (cmd.vmUuid, vm.state))
return jsonobject.dumps(rsp)
# do not detach pci device immediately after attach pci device to same vm
self.timeout_object.wait_until_object_timeout('hot-plug-pci-device-to-vm-%s' % cmd.vmUuid)
self.timeout_object.put('hot-unplug-pci-device-from-vm-%s' % cmd.vmUuid, timeout=10)
retry_num = 4
retry_interval = 5
logger.debug("try to virsh detach xml for %d times: %s" % (retry_num, content))
for i in range(1, retry_num + 1):
r, o, e = bash.bash_roe("virsh detach-device %s %s" % (cmd.vmUuid, spath))
succ = linux.wait_callback_success(lambda args: not find_pci_device(args[0], args[1]), [cmd.vmUuid, addr], timeout=retry_interval)
if succ:
break
if i < retry_num:
continue
if r != 0:
rsp.success = False
rsp.error = "failed to detach-device %s from %s: %s, %s" % (addr, cmd.vmUuid, o, e)
return jsonobject.dumps(rsp)
if not succ:
rsp.success = False
rsp.error = "pci device %s still exists on vm %s after %ds" % (addr, cmd.vmUuid, retry_num * retry_interval)
return jsonobject.dumps(rsp)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
@in_bash
def attach_pci_device_to_host(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = AttachPciDeviceToHostRsp()
addr = cmd.pciDeviceAddress
r, o, e = bash.bash_roe("virsh nodedev-reattach pci_%s" % addr.replace(':', '_').replace('.', '_'))
logger.debug("nodedev-reattach %s: %s, %s" % (addr, o, e))
if r != 0:
rsp.success = False
rsp.error = "failed to nodedev-reattach %s: %s, %s" % (addr, o, e)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
@in_bash
def detach_pci_device_from_host(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = DetachPciDeviceFromHostRsp()
addr = cmd.pciDeviceAddress
r, o, e = bash.bash_roe("virsh nodedev-detach pci_%s" % addr.replace(':', '_').replace('.', '_'))
logger.debug("nodedev-detach %s: %s, %s" % (addr, o, e))
if r != 0:
rsp.success = False
rsp.error = "failed to nodedev-detach %s: %s, %s" % (addr, o, e)
return jsonobject.dumps(rsp)
def _get_next_usb_port(self, dom, bus):
domain_xml = dom.XMLDesc(0)
domain_xmlobject = xmlobject.loads(domain_xml)
# on arm or mips with uhci (bus 0), ports 0, 1 and 2 are hard-coded as reserved
# on other architectures with uhci, ports 0 and 1 are reserved
if bus == 0 and HOST_ARCH in ['aarch64', 'mips64el']:
usb_ports = [0, 1, 2]
elif bus == 0:
usb_ports = [0, 1]
else:
usb_ports = [0]
for hostdev in domain_xmlobject.devices.get_child_node_as_list('hostdev'):
if hostdev.type_ == 'usb':
for address in hostdev.get_child_node_as_list('address'):
if address.type_ == 'usb' and address.bus_ == str(bus):
usb_ports.append(int(address.port_))
for redirdev in domain_xmlobject.devices.get_child_node_as_list('redirdev'):
if redirdev.type_ == 'tcp':
for address in redirdev.get_child_node_as_list('address'):
if address.type_ == 'usb' and address.bus_ == str(bus):
usb_ports.append(int(address.port_))
# get the first unused port number
for i in range(len(usb_ports) + 1):
if i not in usb_ports:
return i
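# Worked example (hypothetical): on x86 bus 0 the reserved ports are [0, 1]; if the
# domain XML already uses usb ports 2 and 4, the list becomes [0, 1, 2, 4] and the
# first unused index returned is 3.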
@kvmagent.replyerror
def kvm_attach_usb_device(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = KvmAttachUsbDeviceRsp()
bus = int(cmd.usbVersion[0]) - 1
r, ex = self._attach_usb_by_libvirt(cmd, bus)
if not r:
rsp.success = False
rsp.error = ex
return jsonobject.dumps(rsp)
@linux.retry(times=5, sleep_time=2)
def _detach_usb_by_libvirt(self, cmd):
vm = get_vm_by_uuid(cmd.vmUuid)
root = None
if cmd.attachType == "PassThrough":
root = etree.Element('hostdev', {'mode': 'subsystem', 'type': 'usb', 'managed': 'yes'})
d = e(root, 'source')
e(d, 'vendor', None, {'id': '0x%s' % cmd.idVendor})
e(d, 'product', None, {'id': '0x%s' % cmd.idProduct})
e(d, 'address', None, {'bus': str(cmd.busNum).lstrip('0'), 'device': str(cmd.devNum).lstrip('0')})
if cmd.attachType == "Redirect":
root = etree.Element('redirdev', {'bus': 'usb', 'type': 'tcp'})
e(root, 'source', None, {'mode': 'connect', 'host': cmd.ip, 'service': str(cmd.port)})
xml = etree.tostring(root)
logger.info(xml)
try:
vm.domain.detachDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
except libvirt.libvirtError as ex:
logger.warn('detach usb device from domain[%s] failed: %s' % (cmd.vmUuid, str(ex)))
if "redirdev was not found" in str(ex):
logger.debug(
"cannot find matching redirdev from vm %s domainxml, maybe usb has been detached" % cmd.vmUuid)
return True
raise RetryException("failed to detach usb device from %s: %s" % (cmd.vmUuid, str(ex)))
logger.debug("detached usb device from %s successfully" % cmd.vmUuid)
def _attach_usb_by_libvirt(self, cmd, bus):
vm = get_vm_by_uuid(cmd.vmUuid)
root = None
if cmd.attachType == "PassThrough":
root = etree.Element('hostdev', {'mode': 'subsystem', 'type': 'usb', 'managed': 'yes'})
d = e(root, 'source')
e(d, 'vendor', None, {'id': '0x%s' % cmd.idVendor})
e(d, 'product', None, {'id': '0x%s' % cmd.idProduct})
e(d, 'address', None, {'bus': str(cmd.busNum).lstrip('0'), 'device': str(cmd.devNum).lstrip('0')})
e(root, 'address', None, {'type': 'usb', 'bus': str(bus), 'port': str(self._get_next_usb_port(vm.domain, bus))})
if cmd.attachType == "Redirect":
root = etree.Element('redirdev', {'bus': 'usb', 'type': 'tcp'})
e(root, 'source', None, {'mode': 'connect', 'host': cmd.ip, 'service': str(cmd.port)})
e(root, 'address', None, {'type': 'usb', 'bus': str(bus), 'port': str(self._get_next_usb_port(vm.domain, bus))})
xml = etree.tostring(root)
logger.info(xml)
try:
vm.domain.attachDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
except libvirt.libvirtError as ex:
logger.warn('attach usb device to domain[%s] failed: %s' % (cmd.vmUuid, str(ex)))
return False, str(ex)
return True, None
# deprecated
def _attach_usb(self, cmd, bus):
vm = get_vm_by_uuid(cmd.vmUuid)
content = ''
if cmd.attachType == "PassThrough":
content = '''
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<vendor id='0x%s'/>
<product id='0x%s'/>
<address bus='%s' device='%s'/>
</source>
<address type='usb' bus='%s' port='%s' />
</hostdev>''' % (cmd.idVendor, cmd.idProduct, int(cmd.busNum), int(cmd.devNum), bus, self._get_next_usb_port(vm.domain, bus))
if cmd.attachType == "Redirect":
content = '''
<redirdev bus='usb' type='tcp'>
<source mode='connect' host='%s' service='%s'/>
<address type='usb' bus='%s' port='%s'/>
</redirdev>''' % (cmd.ip, int(cmd.port), bus, self._get_next_usb_port(vm.domain, bus))
spath = linux.write_to_temp_file(content)
r, o, e = bash.bash_roe("virsh attach-device %s %s" % (cmd.vmUuid, spath))
os.remove(spath)
logger.debug("attached %s to %s, %s, %s" % (
spath, cmd.vmUuid, o, e))
return r, o, e
@kvmagent.replyerror
def kvm_detach_usb_device(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = KvmDetachUsbDeviceRsp()
try:
self._detach_usb_by_libvirt(cmd)
except Exception as e:
rsp.success = False
rsp.error = str(e)
return jsonobject.dumps(rsp)
# deprecated
@linux.retry(times=5, sleep_time=2)
def _detach_usb(self, cmd):
content = ''
if cmd.attachType == "PassThrough":
content = '''
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<vendor id='0x%s'/>
<product id='0x%s'/>
<address bus='%s' device='%s'/>
</source>
</hostdev>''' % (cmd.idVendor, cmd.idProduct, int(cmd.busNum), int(cmd.devNum))
if cmd.attachType == "Redirect":
content = '''
<redirdev bus='usb' type='tcp'>
<source mode='connect' host='%s' service='%s'/>
</redirdev>''' % (cmd.ip, int(cmd.port))
spath = linux.write_to_temp_file(content)
r, o, e = bash.bash_roe("virsh detach-device %s %s" % (cmd.vmUuid, spath))
os.remove(spath)
if r:
if "redirdev was not found" in e:
logger.debug("cannot find matching redirdev from vm %s domainxml, maybe usb has been detached" % cmd.vmUuid)
return
raise RetryException("failed to detach usb device from %s: %s, %s" % (cmd.vmUuid, o, e))
else:
logger.debug("detached usb device %s from %s" % (spath, cmd.vmUuid))
@kvmagent.replyerror
def reload_redirect_usb(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = ReloadRedirectUsbRsp()
self._detach_usb_by_libvirt(cmd)
bus = int(cmd.usbVersion[0]) - 1
r, ex = self._attach_usb_by_libvirt(cmd, bus)
if not r:
rsp.success = False
rsp.error = ex
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def vm_priority(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = UpdateVmPriorityRsp()
for pcs in cmd.priorityConfigStructs:
pid = linux.find_vm_pid_by_uuid(pcs.vmUuid)
linux.set_vm_priority(pid, pcs)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def kvm_resize_volume(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = KvmResizeVolumeRsp()
vm = get_vm_by_uuid(cmd.vmUuid, exception_if_not_existing=False)
vm.resize_volume(cmd.volume, cmd.size)
touchQmpSocketWhenExists(cmd.vmUuid)
return jsonobject.dumps(rsp)
def _create_xml_for_guesttools_temp_disk(self, vm_uuid):
temp_disk = "/var/lib/zstack/guesttools/temp_disk_%s.qcow2" % vm_uuid
content = """
<disk type='file' device='disk'>
<driver type='qcow2' cache='writeback'/>
<source file='%s'/>
<target dev='vdz' bus='virtio'/>
</disk>
""" % temp_disk
return linux.write_to_temp_file(content)
@kvmagent.replyerror
@in_bash
def attach_guest_tools_iso_to_vm(self, req):
rsp = AttachGuestToolsIsoToVmRsp()
cmd = jsonobject.loads(req[http.REQUEST_BODY])
vm_uuid = cmd.vmInstanceUuid
if not os.path.exists(GUEST_TOOLS_ISO_PATH):
rsp.success = False
rsp.error = "%s not exists" % GUEST_TOOLS_ISO_PATH
return jsonobject.dumps(rsp)
r, _, _ = bash.bash_roe("virsh dumpxml %s | grep \"dev='vdz' bus='virtio'\"" % vm_uuid)
if cmd.needTempDisk and r != 0:
temp_disk = "/var/lib/zstack/guesttools/temp_disk_%s.qcow2" % vm_uuid
if not os.path.exists(temp_disk):
linux.qcow2_create(temp_disk, 1)
spath = self._create_xml_for_guesttools_temp_disk(vm_uuid)
r, o, e = bash.bash_roe("virsh attach-device %s %s" % (vm_uuid, spath))
# temp_disk will be truly deleted after it's closed by qemu-kvm
linux.rm_file_force(temp_disk)
if r != 0:
rsp.success = False
rsp.error = "%s, %s" % (o, e)
return jsonobject.dumps(rsp)
else:
logger.debug("attached temp disk %s to %s, %s, %s" % (spath, vm_uuid, o, e))
# attach guest tools iso to [hs]dc, whose device id is 0
vm = get_vm_by_uuid(vm_uuid, exception_if_not_existing=False)
iso = IsoTo()
iso.deviceId = 0
iso.path = GUEST_TOOLS_ISO_PATH
# in case same iso already attached
detach_cmd = DetachIsoCmd()
detach_cmd.vmUuid = vm_uuid
detach_cmd.deviceId = iso.deviceId
vm.detach_iso(detach_cmd)
attach_cmd = AttachIsoCmd()
attach_cmd.iso = iso
attach_cmd.vmUuid = vm_uuid
vm.attach_iso(attach_cmd)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
@in_bash
def detach_guest_tools_iso_from_vm(self, req):
rsp = DetachGuestToolsIsoFromVmRsp()
cmd = jsonobject.loads(req[http.REQUEST_BODY])
vm_uuid = cmd.vmInstanceUuid
# detach temp_disk from vm
spath = self._create_xml_for_guesttools_temp_disk(vm_uuid)
bash.bash_roe("virsh detach-device %s %s" % (vm_uuid, spath))
# detach guesttools iso from vm
r, _, _ = bash.bash_roe("virsh dumpxml %s | grep %s" % (vm_uuid, GUEST_TOOLS_ISO_PATH))
if r == 0:
vm = get_vm_by_uuid(vm_uuid, exception_if_not_existing=False)
detach_cmd = DetachIsoCmd()
detach_cmd.vmUuid = vm_uuid
detach_cmd.deviceId = 0
vm.detach_iso(detach_cmd)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
@in_bash
def get_vm_guest_tools_info(self, req):
rsp = GetVmGuestToolsInfoRsp()
cmd = jsonobject.loads(req[http.REQUEST_BODY])
# get guest tools info by reading VERSION file inside vm
vm_uuid = cmd.vmInstanceUuid
r, o, e = bash.bash_roe('virsh qemu-agent-command %s --cmd \'{"execute":"guest-file-open", \
"arguments":{"path":"C:\\\Program Files\\\Common Files\\\GuestTools\\\VERSION", "mode":"r"}}\'' % vm_uuid)
if r != 0:
_r, _o, _e = bash.bash_roe("virsh qemu-agent-command %s --cmd '{\"execute\":\"guest-tools-info\"}'" % vm_uuid)
if _r == 0:
info = simplejson.loads(_o)['return']
for k in info.keys():
setattr(rsp, k, info[k])
return jsonobject.dumps(rsp)
else:
rsp.success = False
rsp.error = "%s, %s" % (o, e)
return jsonobject.dumps(rsp)
fd = simplejson.loads(o)['return']
def _close_version_file():
bash.bash_roe('virsh qemu-agent-command %s --cmd \'{"execute":"guest-file-close", "arguments":{"handle":%s}}\'' % (vm_uuid, fd))
r, o, e = bash.bash_roe('virsh qemu-agent-command %s --cmd \'{"execute":"guest-file-read", "arguments":{"handle":%s}}\'' % (vm_uuid, fd))
if r != 0:
_close_version_file()
rsp.success = False
rsp.error = "%s, %s" % (o, e)
return jsonobject.dumps(rsp)
version = base64.b64decode(simplejson.loads(o)['return']['buf-b64']).strip()
rsp.version = version
rsp.status = 'Running'
_close_version_file()
return jsonobject.dumps(rsp)
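# Flow note: the happy path drives the qemu guest agent with guest-file-open on the
# GuestTools VERSION file, guest-file-read (base64 payload in 'buf-b64'), and
# guest-file-close; agents without that file fall back to 'guest-tools-info'.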
@kvmagent.replyerror
@in_bash
def fail_colo_pvm(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
r, _, e = linux.sshpass_run(cmd.targetHostIp, cmd.targetHostPassword, "pkill -f 'qemu-system-x86_64 -name guest=%s'" % cmd.vmInstanceUuid, "root", cmd.targetHostPort)
if r != 0:
rsp.success = False
rsp.error = 'failed to kill vm %s on host %s, cause: %s' % (cmd.vmInstanceUuid, cmd.targetHostIp, e)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
@in_bash
def rollback_quorum_config(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
vm = get_vm_by_uuid_no_retry(cmd.vmInstanceUuid, False)
if not vm:
raise Exception('vm[uuid:%s] does not exist' % cmd.vmInstanceUuid)
count = 0
for alias_name in vm._get_all_volume_alias_names(cmd.volumes):
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "x-blockdev-change",'
' "arguments": {"parent": "%s", "child": "children.1"}}' % alias_name)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "human-monitor-command",'
' "arguments":{"command-line": "drive_del replication%s"}}' % count)
count += 1
for i in xrange(0, cmd.nicNumber):
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"fm-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"primary-out-redirect-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"primary-in-redirect-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"comp-%s"}}' % i)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
@in_bash
def wait_secondary_vm_ready(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
rsp = kvmagent.AgentResponse()
def wait_for_colo_state_change(_):
vm = get_vm_by_uuid_no_retry(cmd.vmInstanceUuid, False)
if not vm:
raise Exception('vm[uuid:%s] does not exist' % cmd.vmInstanceUuid)
r, o, err = execute_qmp_command(cmd.vmInstanceUuid, '{"execute":"query-colo-status"}')
if err:
raise Exception('Failed to check vm[uuid:%s] colo status by query-colo-status' % cmd.vmInstanceUuid)
colo_status = json.loads(o)['return']
mode = colo_status['mode']
return mode == 'secondary'
if not linux.wait_callback_success(wait_for_colo_state_change, None, interval=3, timeout=cmd.coloCheckTimeout):
raise Exception('secondary vm[uuid:%s] still not ready after %s seconds'
% (cmd.vmInstanceUuid, cmd.coloCheckTimeout))
return jsonobject.dumps(rsp)
@kvmagent.replyerror
@in_bash
def check_colo_vm_state(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
states = get_all_vm_states()
rsp = CheckColoVmStateRsp()
state = states.get(cmd.vmInstanceUuid)
if state != Vm.VM_STATE_RUNNING and state != Vm.VM_STATE_PAUSED:
rsp.state = state
return jsonobject.dumps(rsp)
r, o, err = execute_qmp_command(cmd.vmInstanceUuid, '{"execute":"query-colo-status"}')
if err:
rsp.success = False
rsp.error = "Failed to check vm colo status"
return jsonobject.dumps(rsp)
colo_status = json.loads(o)['return']
rsp.mode = colo_status['mode']
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def register_primary_vm_heartbeat(self, req):
rsp = kvmagent.AgentResponse()
cmd = jsonobject.loads(req[http.REQUEST_BODY])
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(10)
try:
s.connect((cmd.targetHostIp, cmd.heartbeatPort))
logger.debug("Successfully test heartbeat to address[%s:%s]" % (cmd.targetHostIp, cmd.heartbeatPort))
except socket.error as ex:
logger.debug("Failed to detect heartbeat connection return error")
rsp.success = False
rsp.error = "Failed connect to heartbeat address[%s:%s], because %s" % (cmd.targetHostIp, cmd.heartbeatPort, ex)
finally:
s.close()
if not rsp.success:
return jsonobject.dumps(rsp)
if self.vm_heartbeat.get(cmd.vmInstanceUuid) is not None and self.vm_heartbeat.get(
cmd.vmInstanceUuid).is_alive():
logger.debug("vm heartbeat thread exists, skip it")
return jsonobject.dumps(rsp)
self.vm_heartbeat[cmd.vmInstanceUuid] = thread.ThreadFacade.run_in_thread(self.start_vm_heart_beat, (cmd,))
if self.vm_heartbeat.get(cmd.vmInstanceUuid).is_alive():
logger.debug("successfully start vm heartbeat")
else:
logger.debug("Failed to start vm heartbeat")
rsp.success = False
rsp.error = "Failed to start vm heartbeat address[%s:%s]" % (cmd.targetHostIp, cmd.heartbeatPort)
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def start_colo_sync(self, req):
rsp = kvmagent.AgentResponse()
cmd = jsonobject.loads(req[http.REQUEST_BODY])
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "qmp_capabilities"}')
vm = get_vm_by_uuid_no_retry(cmd.vmInstanceUuid, False)
if not vm:
raise Exception('vm[uuid:%s] does not exist' % cmd.vmInstanceUuid)
count = 0
replication_list = []
def colo_qemu_replication_cleanup():
for replication in replication_list:
if replication.alias_name:
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "x-blockdev-change",'
' "arguments": {"parent": "%s", "child": "children.1"}}' % replication.alias_name)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "human-monitor-command",'
' "arguments":{"command-line": "drive_del replication%s"}}' % replication.replication_id)
@linux.retry(times=3, sleep_time=0.5)
def add_nbd_client_to_quorum(alias_name, count):
r, stdout, err = execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "x-blockdev-change","arguments":'
'{"parent": "%s","node": "replication%s" } }' % (alias_name, count))
if err:
return False
elif 'does not support adding a child' in stdout:
raise RetryException("failed to add child to %s" % alias_name)
else:
return True
for alias_name in vm._get_all_volume_alias_names(cmd.volumes):
if cmd.fullSync:
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "drive-mirror", "arguments":{ "device": "%s",'
' "job-id": "zs-ft-resync", "target": "nbd://%s:%s/parent%s",'
' "mode": "existing", "format": "nbd", "sync": "full"} }'
% (alias_name, cmd.secondaryVmHostIp, cmd.nbdServerPort, count))
while True:
time.sleep(3)
r, o, err = execute_qmp_command(cmd.vmInstanceUuid, '{"execute":"query-block-jobs"}')
if err:
rsp.success = False
rsp.error = "Failed to get zs-ft-resync job, report error"
return jsonobject.dumps(rsp)
block_jobs = json.loads(o)['return']
job = next((job for job in block_jobs if job['device'] == 'zs-ft-resync'), None)
if not job:
logger.debug("job finished, start colo sync")
break
if job['status'] == 'ready':
break
logger.debug("current resync %s/%s, percentage %s" % (
job['len'], job['offset'], 100 * (float(job['offset'] / float(job['len'])))))
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "stop"}')
execute_qmp_command(cmd.vmInstanceUuid,
'{"execute": "block-job-cancel", "arguments":{ "device": "zs-ft-resync"}}')
while True:
time.sleep(1)
r, o, err = execute_qmp_command(cmd.vmInstanceUuid, '{"execute":"query-block-jobs"}')
if err:
rsp.success = False
rsp.error = "Failed to query block jobs, report error"
return jsonobject.dumps(rsp)
block_jobs = json.loads(o)['return']
job = next((job for job in block_jobs if job['device'] == 'zs-ft-resync'), None)
if job:
continue
break
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "human-monitor-command","arguments":'
' {"command-line":"drive_add -n buddy'
' driver=replication,mode=primary,file.driver=nbd,file.host=%s,'
'file.port=%s,file.export=parent%s,node-name=replication%s"}}'
% (cmd.secondaryVmHostIp, cmd.nbdServerPort, count, count))
succeeded = False
try:
succeeded = add_nbd_client_to_quorum(alias_name, count)
except Exception as e:
logger.debug("ignore exception raised by retry")
if not succeeded:
replication_list.append(ColoReplicationConfig(None, count))
colo_qemu_replication_cleanup()
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "cont"}')
rsp.success = False
rsp.error = "Failed to setup quorum replication node, report error"
return jsonobject.dumps(rsp)
replication_list.append(ColoReplicationConfig(alias_name, count))
count+=1
domain_xml = vm.domain.XMLDesc(0)
is_origin_secondary = 'filter-rewriter' in domain_xml
for count in xrange(0, cmd.nicNumber):
if not is_origin_secondary:
execute_qmp_command(cmd.vmInstanceUuid,
'{"execute": "object-add", "arguments":{ "qom-type": "filter-mirror", "id": "fm-%s",'
' "props": { "netdev": "hostnet%s", "queue": "tx", "outdev": "zs-mirror-%s" } } }'
% (count, count, count))
execute_qmp_command(cmd.vmInstanceUuid,
'{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector",'
' "id": "primary-out-redirect-%s", "props": { "netdev": "hostnet%s", "queue": "rx",'
' "indev": "primary-out-s-%s"}}}' % (count, count, count))
execute_qmp_command(cmd.vmInstanceUuid,
'{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id":'
' "primary-in-redirect-%s", "props": { "netdev": "hostnet%s", "queue": "rx",'
' "outdev": "primary-in-s-%s"}}}' % (count, count, count))
else:
execute_qmp_command(cmd.vmInstanceUuid,
'{"execute": "object-add", "arguments":{ "qom-type": "filter-mirror",'
' "id": "fm-%s", "props": { "insert": "before", "position": "id=rew-%s", '
' "netdev": "hostnet%s", "queue": "tx", "outdev": "zs-mirror-%s" } } }'
% (count, count, count, count))
execute_qmp_command(cmd.vmInstanceUuid,
'{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector",'
' "id": "primary-out-redirect-%s", "props":'
' { "insert": "before", "position": "id=rew-%s",'
' "netdev": "hostnet%s", "queue": "rx",'
' "indev": "primary-out-s-%s"}}}' % (count, count, count, count))
execute_qmp_command(cmd.vmInstanceUuid,
'{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id":'
' "primary-in-redirect-%s", "props": { "insert": "before", "position": "id=rew-%s",'
' "netdev": "hostnet%s", "queue": "rx",'
' "outdev": "primary-in-s-%s"}}}' % (count, count, count, count))
execute_qmp_command(cmd.vmInstanceUuid,
'{"execute": "object-add", "arguments":{ "qom-type": "colo-compare", "id": "comp-%s",'
' "props": { "primary_in": "primary-in-c-%s", "secondary_in": "secondary-in-s-%s",'
' "outdev":"primary-out-c-%s", "iothread": "iothread%s" } } }'
% (count, count, count, count, int(count) + 1))
count += 1
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "migrate-set-capabilities","arguments":'
'{"capabilities":[ {"capability": "x-colo", "state":true}]}}')
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "migrate-set-parameters", "arguments":'
'{ "max-bandwidth": 3355443200 }}')
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "migrate", "arguments": {"uri": "tcp:%s:%s"}}'
% (cmd.secondaryVmHostIp, cmd.blockReplicationPort))
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "migrate-set-parameters",'
' "arguments": {"x-checkpoint-delay": %s}}'
% cmd.checkpointDelay)
def colo_qemu_object_cleanup():
for i in xrange(cmd.nicNumber):
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"fm-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"primary-out-redirect-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"primary-in-redirect-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"comp-%s"}}' % i)
# wait primary vm migrate job finished
failure = 0
while True:
r, o, err = execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "query-migrate"}')
if err:
rsp.success = False
rsp.error = "Failed to query migrate info, because %s" % err
colo_qemu_object_cleanup()
break
migrate_info = json.loads(o)['return']
if migrate_info['status'] == 'colo':
logger.debug("migrate finished")
break
elif migrate_info['status'] == 'active':
ram_info = migrate_info['ram']
logger.debug("current migrate %s/%s, percentage %s"
% (ram_info['total'], ram_info['remaining'], 100 * (float(ram_info['remaining'] / float(ram_info['total'])))))
elif migrate_info['status'] == 'failed':
rsp.success = False
rsp.error = "could not finish colo migration."
try:
vm = get_vm_by_uuid_no_retry(cmd.vmInstanceUuid, False)
if vm:
vm.resume()
logger.debug('successfully resumed vm[uuid:%s]' % cmd.vmInstanceUuid)
except kvmagent.KvmError as e:
logger.warn(linux.get_exception_stacktrace())
break
else:
# these statuses are not handled, but the vm should not get stuck in:
# MIGRATION_STATUS_POSTCOPY_ACTIVE:
# MIGRATION_STATUS_POSTCOPY_PAUSED:
# MIGRATION_STATUS_POSTCOPY_RECOVER:
# MIGRATION_STATUS_SETUP:
# MIGRATION_STATUS_PRE_SWITCHOVER:
# MIGRATION_STATUS_DEVICE:
if failure < 2:
failure += 1
else:
rsp.success = False
rsp.error = "unknown migrate status: %s" % migrate_info['status']
# cancel migrate if vm stuck in unexpected status
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "migrate_cancel"}')
break
time.sleep(2)
if not rsp.success:
colo_qemu_object_cleanup()
colo_qemu_replication_cleanup()
return jsonobject.dumps(rsp)
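# COLO primary setup sequence (as implemented above): optionally full-resync disks via
# drive-mirror to the secondary's NBD exports, attach a replication driver as a quorum
# child per volume, add filter-mirror/filter-redirector/colo-compare objects per nic,
# enable the x-colo migrate capability, then migrate and poll query-migrate until the
# status becomes 'colo' (cleaning up objects and replication nodes on failure).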
@kvmagent.replyerror
def config_secondary_vm(self, req):
rsp = kvmagent.AgentResponse()
cmd = jsonobject.loads(req[http.REQUEST_BODY])
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "qmp_capabilities"}')
execute_qmp_command(cmd.vmInstanceUuid, '{"execute":"nbd-server-start", "arguments":{"addr":{"type":"inet",'
' "data":{"host":"%s", "port":"%s"}}}}'
% (cmd.primaryVmHostIp, cmd.nbdServerPort))
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "nbd-server-add",'
' "arguments": {"device": "parent0", "writable": true }}')
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def config_primary_vm(self, req):
rsp = GetVmFirstBootDeviceRsp()
cmd = jsonobject.loads(req[http.REQUEST_BODY])
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "qmp_capabilities"}')
r, o, err = execute_qmp_command(cmd.vmInstanceUuid, '{"execute":"query-chardev"}')
if err:
rsp.success = False
rsp.error = "Failed to check qemu config, report error"
return jsonobject.dumps(rsp)
vm = get_vm_by_uuid(cmd.vmInstanceUuid)
domain_xml = vm.domain.XMLDesc(0)
is_origin_secondary = 'filter-rewriter' in domain_xml
char_devices = json.loads(o)['return']
mirror_device_nums = [int(dev['label'][-1]) for dev in char_devices if dev['label'].startswith('zs-mirror')]
logger.debug("get mirror char device of vm[uuid:%s] devices: %s" % (cmd.vmInstanceUuid, mirror_device_nums))
if len(mirror_device_nums) == len(cmd.configs):
logger.debug("config and devices matched, just return success")
return jsonobject.dumps(rsp)
elif len(mirror_device_nums) > len(cmd.configs):
logger.debug("vm over config, please check what happened")
return jsonobject.dumps(rsp)
count = len(mirror_device_nums)
for config in cmd.configs[len(mirror_device_nums):]:
if not linux.is_port_available(config.mirrorPort):
raise Exception("failed to config primary vm, because mirrorPort port %d is occupied" % config.mirrorPort)
if not linux.is_port_available(config.primaryInPort):
raise Exception("failed to config primary vm, because primaryInPort port %d is occupied" % config.primaryInPort)
if not linux.is_port_available(config.secondaryInPort):
raise Exception("failed to config primary vm, because secondaryInPort port %d is occupied" % config.secondaryInPort)
if not linux.is_port_available(config.primaryOutPort):
raise Exception("failed to config primary vm, because mirrorPort port %d is occupied" % config.primaryOutPort)
execute_qmp_command(cmd.vmInstanceUuid,
'{"execute": "chardev-add", "arguments":{ "id": "zs-mirror-%s", "backend":'
' {"type": "socket", "data": {"addr": { "type": "inet", "data":'
' { "host": "%s", "port": "%s" } }, "server": true}}}}'
% (count, cmd.hostIp, config.mirrorPort))
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "chardev-add", "arguments":{ "id": "primary-in-s-%s",'
' "backend": {"type": "socket", "data": {"addr": { "type":'
' "inet", "data": { "host": "%s", "port": "%s" } },'
' "server": true } } } }' % (count, cmd.hostIp, config.primaryInPort))
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "chardev-add", "arguments":{'
' "id": "secondary-in-s-%s","backend": {"type":'
' "socket", "data": {"addr": {"type":'
' "inet", "data": { "host": "%s", "port": "%s" } },'
' "server": true } } } }' % (count, cmd.hostIp, config.secondaryInPort))
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "chardev-add", "arguments":{ "id": "primary-in-c-%s",'
' "backend": {"type": "socket", "data": {"addr": { "type":'
' "inet", "data": { "host": "%s", "port": "%s" } },'
' "server": false } } } }' % (count, cmd.hostIp, config.primaryInPort))
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "chardev-add", "arguments":{ "id": "primary-out-s-%s",'
' "backend": {"type": "socket", "data": {"addr": { "type":'
' "inet", "data": { "host": "%s", "port": "%s" } },'
' "server": true } } } }' % (count, cmd.hostIp, config.primaryOutPort))
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "chardev-add", "arguments":{ "id": "primary-out-c-%s",'
' "backend": {"type": "socket", "data": {"addr": { "type":'
' "inet", "data": { "host": "%s", "port": "%s" } },'
' "server": false } } } }' % (count, cmd.hostIp, config.primaryOutPort))
count += 1
return jsonobject.dumps(rsp)
@kvmagent.replyerror
@in_bash
def get_vm_first_boot_device(self, req):
rsp = GetVmFirstBootDeviceRsp()
cmd = jsonobject.loads(req[http.REQUEST_BODY])
vm_uuid = cmd.uuid
vm = get_vm_by_uuid_no_retry(vm_uuid, False)
boot_dev = find_domain_first_boot_device(vm.domain.XMLDesc(0))
rsp.firstBootDevice = boot_dev
return jsonobject.dumps(rsp)
@kvmagent.replyerror
def get_vm_device_address(self, req):
cmd = jsonobject.loads(req[http.REQUEST_BODY])
vm_uuid = cmd.uuid
rsp = GetVmDeviceAddressRsp()
vm = get_vm_by_uuid_no_retry(vm_uuid, False)
for resource_type in cmd.deviceTOs.__dict__.keys():
tos = getattr(cmd.deviceTOs, resource_type)
if resource_type == 'VolumeVO':
addresses = VmPlugin._find_volume_device_address(vm, tos)
else:
addresses = []
rsp.addresses[resource_type] = addresses
return jsonobject.dumps(rsp)
@staticmethod
def _find_volume_device_address(vm, volumes):
# type:(Vm, list[jsonobject.JsonObject]) -> list[VmDeviceAddress]
addresses = []
o = simplejson.loads(shell.call('virsh qemu-monitor-command %s --cmd \'{"execute":"query-pci"}\'' % vm.uuid))
# only the first PCI bus (bus 0) is inspected here
devices = o['return'][0]['devices']
for vol in volumes:
disk, _ = vm._get_target_disk(vol)
if hasattr(disk, 'wwn'):
addresses.append(VmDeviceAddress(vol.volumeUuid, 'disk', 'wwn', disk.wwn.text_))
continue
elif disk.address.type_ == 'pci':
device = VmPlugin._find_pci_device(devices, disk.alias.name_)
if device:
addresses.append(VmDeviceAddress(vol.volumeUuid, 'disk', 'pci', pci.fmt_pci_address(device)))
continue
addresses.append(VmDeviceAddress(vol.volumeUuid, 'disk', disk.target.bus_, 'unknown'))
return addresses
@staticmethod
def _find_pci_device(devices, qdev_id):
for device in devices:
if device['qdev_id'] == qdev_id:
return device
elif 'pci_bridge' in device:
target = VmPlugin._find_pci_device(device['pci_bridge']['devices'], qdev_id)
if target:
return target
return None
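# query-pci returns a tree: each bus carries a 'devices' list and a bridge nests its
# children under device['pci_bridge']['devices'], hence the recursion above. Matching
# is by the qemu-assigned qdev_id, e.g. a hypothetical 'virtio-disk0'.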
def start(self):
http_server = kvmagent.get_http_server()
http_server.register_async_uri(self.KVM_START_VM_PATH, self.start_vm, cmd=StartVmCmd())
http_server.register_async_uri(self.KVM_STOP_VM_PATH, self.stop_vm)
http_server.register_async_uri(self.KVM_PAUSE_VM_PATH, self.pause_vm)
http_server.register_async_uri(self.KVM_RESUME_VM_PATH, self.resume_vm)
http_server.register_async_uri(self.KVM_REBOOT_VM_PATH, self.reboot_vm)
http_server.register_async_uri(self.KVM_DESTROY_VM_PATH, self.destroy_vm)
http_server.register_async_uri(self.KVM_GET_CONSOLE_PORT_PATH, self.get_console_port)
http_server.register_async_uri(self.KVM_ONLINE_CHANGE_CPUMEM_PATH, self.online_change_cpumem)
http_server.register_async_uri(self.KVM_ONLINE_INCREASE_CPU_PATH, self.online_increase_cpu)
http_server.register_async_uri(self.KVM_ONLINE_INCREASE_MEMORY_PATH, self.online_increase_mem)
http_server.register_async_uri(self.KVM_VM_SYNC_PATH, self.vm_sync)
http_server.register_async_uri(self.KVM_ATTACH_VOLUME, self.attach_data_volume)
http_server.register_async_uri(self.KVM_DETACH_VOLUME, self.detach_data_volume)
http_server.register_async_uri(self.KVM_ATTACH_ISO_PATH, self.attach_iso)
http_server.register_async_uri(self.KVM_DETACH_ISO_PATH, self.detach_iso)
http_server.register_async_uri(self.KVM_MIGRATE_VM_PATH, self.migrate_vm)
http_server.register_async_uri(self.KVM_BLOCK_LIVE_MIGRATION_PATH, self.block_migrate_vm)
http_server.register_async_uri(self.KVM_VM_CHECK_VOLUME_PATH, self.check_volume)
http_server.register_async_uri(self.KVM_TAKE_VOLUME_SNAPSHOT_PATH, self.take_volume_snapshot)
http_server.register_async_uri(self.KVM_TAKE_VOLUME_BACKUP_PATH, self.take_volume_backup, cmd=TakeVolumeBackupCommand())
http_server.register_async_uri(self.KVM_TAKE_VOLUMES_SNAPSHOT_PATH, self.take_volumes_snapshots)
http_server.register_async_uri(self.KVM_TAKE_VOLUMES_BACKUP_PATH, self.take_volumes_backups, cmd=TakeVolumesBackupsCommand())
http_server.register_async_uri(self.KVM_CANCEL_VOLUME_BACKUP_JOBS_PATH, self.cancel_backup_jobs)
http_server.register_async_uri(self.KVM_BLOCK_STREAM_VOLUME_PATH, self.block_stream)
http_server.register_async_uri(self.KVM_MERGE_SNAPSHOT_PATH, self.merge_snapshot_to_volume)
http_server.register_async_uri(self.KVM_LOGOUT_ISCSI_TARGET_PATH, self.logout_iscsi_target, cmd=LoginIscsiTargetCmd())
http_server.register_async_uri(self.KVM_LOGIN_ISCSI_TARGET_PATH, self.login_iscsi_target)
http_server.register_async_uri(self.KVM_ATTACH_NIC_PATH, self.attach_nic)
http_server.register_async_uri(self.KVM_DETACH_NIC_PATH, self.detach_nic)
http_server.register_async_uri(self.KVM_UPDATE_NIC_PATH, self.update_nic)
http_server.register_async_uri(self.KVM_CREATE_SECRET, self.create_ceph_secret_key)
http_server.register_async_uri(self.KVM_VM_CHECK_STATE, self.check_vm_state)
http_server.register_async_uri(self.KVM_VM_CHANGE_PASSWORD_PATH, self.change_vm_password, cmd=ChangeVmPasswordCmd())
http_server.register_async_uri(self.KVM_SET_VOLUME_BANDWIDTH, self.set_volume_bandwidth)
http_server.register_async_uri(self.KVM_DELETE_VOLUME_BANDWIDTH, self.delete_volume_bandwidth)
http_server.register_async_uri(self.KVM_GET_VOLUME_BANDWIDTH, self.get_volume_bandwidth)
http_server.register_async_uri(self.KVM_SET_NIC_QOS, self.set_nic_qos)
http_server.register_async_uri(self.KVM_GET_NIC_QOS, self.get_nic_qos)
http_server.register_async_uri(self.KVM_HARDEN_CONSOLE_PATH, self.harden_console)
http_server.register_async_uri(self.KVM_DELETE_CONSOLE_FIREWALL_PATH, self.delete_console_firewall_rule)
http_server.register_async_uri(self.HOT_PLUG_PCI_DEVICE, self.hot_plug_pci_device)
http_server.register_async_uri(self.HOT_UNPLUG_PCI_DEVICE, self.hot_unplug_pci_device)
http_server.register_async_uri(self.ATTACH_PCI_DEVICE_TO_HOST, self.attach_pci_device_to_host)
http_server.register_async_uri(self.DETACH_PCI_DEVICE_FROM_HOST, self.detach_pci_device_from_host)
http_server.register_async_uri(self.KVM_ATTACH_USB_DEVICE_PATH, self.kvm_attach_usb_device)
http_server.register_async_uri(self.KVM_DETACH_USB_DEVICE_PATH, self.kvm_detach_usb_device)
http_server.register_async_uri(self.RELOAD_USB_REDIRECT_PATH, self.reload_redirect_usb)
http_server.register_async_uri(self.CHECK_MOUNT_DOMAIN_PATH, self.check_mount_domain)
http_server.register_async_uri(self.KVM_RESIZE_VOLUME_PATH, self.kvm_resize_volume)
http_server.register_async_uri(self.VM_PRIORITY_PATH, self.vm_priority)
http_server.register_async_uri(self.ATTACH_GUEST_TOOLS_ISO_TO_VM_PATH, self.attach_guest_tools_iso_to_vm)
http_server.register_async_uri(self.DETACH_GUEST_TOOLS_ISO_FROM_VM_PATH, self.detach_guest_tools_iso_from_vm)
http_server.register_async_uri(self.GET_VM_GUEST_TOOLS_INFO_PATH, self.get_vm_guest_tools_info)
http_server.register_async_uri(self.KVM_GET_VM_FIRST_BOOT_DEVICE_PATH, self.get_vm_first_boot_device)
http_server.register_async_uri(self.KVM_CONFIG_PRIMARY_VM_PATH, self.config_primary_vm)
http_server.register_async_uri(self.KVM_CONFIG_SECONDARY_VM_PATH, self.config_secondary_vm)
http_server.register_async_uri(self.KVM_START_COLO_SYNC_PATH, self.start_colo_sync)
http_server.register_async_uri(self.KVM_REGISTER_PRIMARY_VM_HEARTBEAT, self.register_primary_vm_heartbeat)
http_server.register_async_uri(self.CHECK_COLO_VM_STATE_PATH, self.check_colo_vm_state)
http_server.register_async_uri(self.WAIT_COLO_VM_READY_PATH, self.wait_secondary_vm_ready)
http_server.register_async_uri(self.ROLLBACK_QUORUM_CONFIG_PATH, self.rollback_quorum_config)
http_server.register_async_uri(self.FAIL_COLO_PVM_PATH, self.fail_colo_pvm, cmd=FailColoPrimaryVmCmd())
http_server.register_async_uri(self.GET_VM_DEVICE_ADDRESS_PATH, self.get_vm_device_address)
self.clean_old_sshfs_mount_points()
self.register_libvirt_event()
self.register_qemu_log_cleaner()
self.enable_auto_extend = True
self.auto_extend_size = 1073741824 * 2
# the virtio-channel directory used by VR.
# libvirt won't create this directory when migrating a VR,
# so we create it here, otherwise VR migration may fail
linux.mkdir('/var/lib/zstack/kvm/agentSocket/')
@thread.AsyncThread
def wait_end_signal():
while True:
try:
self.queue_singleton.queue.get(True)
while http.AsyncUirHandler.HANDLER_COUNTER.get() != 0:
time.sleep(0.1)
# libvirt has been stopped or restarted;
# to prevent fd leaks caused by the broken libvirt connection
# we have to ask the mgmt server to reboot the agent
url = self.config.get(kvmagent.SEND_COMMAND_URL)
if not url:
logger.warn('cannot find SEND_COMMAND_URL, unable to ask the mgmt server to reconnect us')
os._exit(1)
host_uuid = self.config.get(kvmagent.HOST_UUID)
if not host_uuid:
logger.warn('cannot find HOST_UUID, unable to ask the mgmt server to reconnect us')
os._exit(1)
logger.warn("libvirt has been rebooted or stopped, ask the mgmt server to reconnt us")
cmd = ReconnectMeCmd()
cmd.hostUuid = host_uuid
cmd.reason = "libvirt rebooted or stopped"
http.json_dump_post(url, cmd, {'commandpath': '/kvm/reconnectme'})
os._exit(1)
except:
content = traceback.format_exc()
logger.warn(content)
finally:
os._exit(1)
wait_end_signal()
@thread.AsyncThread
def monitor_libvirt():
while True:
pid = linux.get_libvirtd_pid()
if not pid or not linux.process_exists(pid):
logger.warn(
"cannot find the libvirt process, assume it's dead, ask the mgmt server to reconnect us")
_stop_world()
time.sleep(20)
monitor_libvirt()
@thread.AsyncThread
def clean_stale_vm_vnc_port_chain():
while True:
logger.debug("do clean up stale vnc port iptable chains")
cleanup_stale_vnc_iptable_chains()
time.sleep(600)
clean_stale_vm_vnc_port_chain()
def start_vm_heart_beat(self, cmd):
def send_failover(vm_instance_uuid, host_uuid, primary_failure):
url = self.config.get(kvmagent.SEND_COMMAND_URL)
if not url:
logger.warn('cannot find SEND_COMMAND_URL')
return
logger.warn("heartbeat of vm %s lost, failover" % vm_instance_uuid)
fcmd = FailOverCmd()
fcmd.vmInstanceUuid = vm_instance_uuid
fcmd.reason = "network failure"
fcmd.hostUuid = host_uuid
fcmd.primaryVmFailure = primary_failure
try:
http.json_dump_post(url, fcmd, {'commandpath': '/kvm/reportfailover'})
except Exception as e:
logger.debug('failed to report failover')
def test_heart_beat():
logger.debug("vm [uuid:%s] heartbeat finished", cmd.vmInstanceUuid)
with contextlib.closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
s.settimeout(0.5)
try:
s.connect((cmd.targetHostIp, cmd.heartbeatPort))
logger.debug("successfully connect to address[%s:%s]" % (cmd.targetHostIp, cmd.heartbeatPort))
except socket.error as ex:
logger.debug(
"lost heartbeat to %s:%s, because %s" % (cmd.targetHostIp, cmd.heartbeatPort, ex))
if cmd.coloPrimary:
vm = get_vm_by_uuid_no_retry(cmd.vmInstanceUuid, False)
if not vm:
raise Exception('vm[uuid:%s] does not exist' % cmd.vmInstanceUuid)
count = 0
for alias_name in vm._get_all_volume_alias_names(cmd.volumes):
execute_qmp_command(cmd.vmInstanceUuid,
'{"execute": "x-blockdev-change", "arguments": {"parent":'
' "%s", "child": "children.1"}}' % alias_name)
execute_qmp_command(cmd.vmInstanceUuid,
'{"execute": "human-monitor-command", "arguments":'
'{"command-line": "drive_del replication%s" } }' % count)
count += 1
for i in xrange(cmd.redirectNum):
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"fm-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"primary-out-redirect-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"primary-in-redirect-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"comp-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "x-colo-lost-heartbeat"}')
else:
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "nbd-server-stop"}')
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "x-colo-lost-heartbeat"}')
for i in xrange(cmd.redirectNum):
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{"id":"fr-secondary-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "object-del",'
'"arguments":{ "id": "fr-mirror-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "chardev-remove",'
'"arguments":{"id":"red-secondary-%s"}}' % i)
execute_qmp_command(cmd.vmInstanceUuid, '{"execute": "chardev-remove",'
'"arguments":{"id":"red-mirror-%s"}}' % i)
send_failover(cmd.vmInstanceUuid, cmd.hostUuid, not cmd.coloPrimary)
return True
logger.debug("vm [uuid:%s] heartbeat finished", cmd.vmInstanceUuid)
return False
t = threading.currentThread()
while getattr(t, "do_heart_beat", True):
need_break = test_heart_beat()
if need_break:
break
time.sleep(1)
try:
self.vm_heartbeat.pop(cmd.vmInstanceUuid)
except KeyError:
logger.debug("ignore error occurs when remove %s from heartbeat",
cmd.vmInstanceUuid)
def _vm_lifecycle_event(self, conn, dom, event, detail, opaque):
try:
evstr = LibvirtEventManager.event_to_string(event)
vm_uuid = dom.name()
if evstr not in (LibvirtEventManager.EVENT_STARTED, LibvirtEventManager.EVENT_STOPPED):
logger.debug("ignore event[%s] of the vm[uuid:%s]" % (evstr, vm_uuid))
return
if vm_uuid.startswith("guestfs-"):
logger.debug("[vm_lifecycle]ignore the temp vm[%s] while using guestfish" % vm_uuid)
return
vm_op_judger = self._get_operation(vm_uuid)
if vm_op_judger and evstr in vm_op_judger.ignore_libvirt_events():
# this is an operation originated from ZStack itself
logger.debug(
'ignore event[%s] for the vm[uuid:%s], this operation is from ZStack itself' % (evstr, vm_uuid))
if vm_op_judger.remove_expected_event(evstr) == 0:
self._remove_operation(vm_uuid)
logger.debug(
'events of the vm[uuid:%s] met the expectation, deleting the operation judger' % vm_uuid)
return
# this is an operation outside zstack, report it
url = self.config.get(kvmagent.SEND_COMMAND_URL)
if not url:
logger.warn('cannot find SEND_COMMAND_URL, unable to report abnormal operation[vm:%s, op:%s]' % (
vm_uuid, evstr))
return
host_uuid = self.config.get(kvmagent.HOST_UUID)
if not host_uuid:
logger.warn(
'cannot find HOST_UUID, unable to report abnormal operation[vm:%s, op:%s]' % (vm_uuid, evstr))
return
@thread.AsyncThread
def report_to_management_node():
cmd = ReportVmStateCmd()
cmd.vmUuid = vm_uuid
cmd.hostUuid = host_uuid
if evstr == LibvirtEventManager.EVENT_STARTED:
cmd.vmState = Vm.VM_STATE_RUNNING
elif evstr == LibvirtEventManager.EVENT_STOPPED:
cmd.vmState = Vm.VM_STATE_SHUTDOWN
logger.debug(
'detected an abnormal vm operation[uuid:%s, op:%s], report it to %s' % (vm_uuid, evstr, url))
http.json_dump_post(url, cmd, {'commandpath': '/kvm/reportvmstate'})
report_to_management_node()
except:
content = traceback.format_exc()
logger.warn(content)
# WARNING: this handler contains quite a few hacks to avoid xmlobject#loads()
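# reboot callback: reports the reboot to the management node once, and if the vm is
# set to boot from cdrom under the bootFromHardDisk policy, redefines the domain so
# it boots from the root volume instead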
def _vm_reboot_event(self, conn, dom, opaque):
try:
domain_xml = dom.XMLDesc(0)
vm_uuid = dom.name()
@thread.AsyncThread
def report_to_management_node():
cmd = ReportVmRebootEventCmd()
cmd.vmUuid = vm_uuid
syslog.syslog('report reboot event for vm ' + vm_uuid)
http.json_dump_post(url, cmd, {'commandpath': '/kvm/reportvmreboot'})
# make sure the reboot event is reported only once
op = self._get_operation(vm_uuid)
if op is None or op.op != VmPlugin.VM_OP_REBOOT:
url = self.config.get(kvmagent.SEND_COMMAND_URL)
if not url:
logger.warn(
'cannot find SEND_COMMAND_URL, unable to report reboot event of vm[uuid:%s]' % vm_uuid)
return
report_to_management_node()
self._record_operation(vm_uuid, VmPlugin.VM_OP_REBOOT)
is_cdrom = self._check_boot_from_cdrom(domain_xml)
if not is_cdrom:
logger.debug(
"the vm[uuid:%s]'s boot device is not cdrom, nothing to do, skip this reboot event" % (vm_uuid))
return
logger.debug(
'the vm[uuid:%s] is set to boot from the cdrom; due to the policy[bootFromHardDisk], it will boot from the hard disk instead' % vm_uuid)
try:
dom.destroy()
except:
pass
xml = self.update_root_volume_boot_order(domain_xml)
xml = re.sub(r"""\stray\s*=\s*'open'""", """ tray='closed'""", xml)
domain = conn.defineXML(xml)
domain.createWithFlags(0)
except:
content = traceback.format_exc()
logger.warn(content)
# update the boot order of the root volume to 1, relying on make_volumes() to put the root volume first in the disk list
def update_root_volume_boot_order(self, domain_xml):
xml = minidom.parseString(domain_xml)
disks = xml.getElementsByTagName('disk')
boots = xml.getElementsByTagName("boot")
for boot in boots:
boot.parentNode.removeChild(boot)
order = xml.createElement("boot")
order.setAttribute("order", "1")
disks[0].appendChild(order)
xml = xml.toxml()
return xml
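# a vm is considered booting from cdrom if a cdrom disk has boot order 1, or the <os> section declares a cdrom boot device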
def _check_boot_from_cdrom(self, domain_xml):
is_cdrom = False
xml = minidom.parseString(domain_xml)
disks = xml.getElementsByTagName('disk')
for disk in disks:
if disk.getAttribute("device") == "cdrom" and disk.getElementsByTagName("boot").length > 0 and \
disk.getElementsByTagName("boot")[0].getAttribute("order") == "1":
is_cdrom = True
break
if not is_cdrom:
os = xml.getElementsByTagName("os")[0]
if os.getElementsByTagName("boot").length > 0 and os.getElementsByTagName("boot")[0].getAttribute(
"device") == "cdrom":
is_cdrom = True
return is_cdrom
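# suspend callback for sharedblock auto-extend: when a vm is paused by an ENOSPC
# disk error on a thin-provisioned LV, grow the LV and resume the vm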
@bash.in_bash
@misc.ignoreerror
def _extend_sharedblock(self, conn, dom, event, detail, opaque):
from shared_block_plugin import MAX_ACTUAL_SIZE_FACTOR
logger.debug("got event from libvirt, %s %s %s %s" %
(dom.name(), LibvirtEventManager.event_to_string(event), detail, opaque))
if not self.enable_auto_extend:
return
def check_lv(file, vm, device):
logger.debug("sblk max actual size factor %s" % MAX_ACTUAL_SIZE_FACTOR)
virtual_size, image_offset, _ = vm.domain.blockInfo(device)
lv_size = int(lvm.get_lv_size(file))
# image_offset = int(bash.bash_o("qemu-img check %s | grep 'Image end offset' | awk -F ': ' '{print $2}'" % file).strip())
# virtual_size = int(linux.qcow2_virtualsize(file))
return int(lv_size) < int(virtual_size) * MAX_ACTUAL_SIZE_FACTOR, image_offset, lv_size, virtual_size
@bash.in_bash
def extend_lv(event_str, path, vm, device):
# type: (str, str, Vm, object) -> object
r, image_offset, lv_size, virtual_size = check_lv(path, vm, device)
logger.debug("lv %s image offset: %s, lv size: %s, virtual size: %s" %
(path, image_offset, lv_size, virtual_size))
if not r:
logger.debug("lv %s is larager than virtual size * %s, skip extend for event %s" % (path, MAX_ACTUAL_SIZE_FACTOR, event_str))
return
extend_size = lv_size + self.auto_extend_size
try:
lvm.resize_lv(path, extend_size)
except Exception as e:
logger.warn("extend lv[%s] to size[%s] failed" % (path, extend_size))
if "incompatible mode" not in e.message.lower():
return
try:
with lvm.OperateLv(path, shared=False, delete_when_exception=False):
lvm.resize_lv(path, extend_size)
except Exception as e:
logger.warn("extend lv[%s] to size[%s] with operate failed" % (path, extend_size))
else:
logger.debug("lv %s extend to %s sucess" % (path, extend_size))
def get_path_by_device(device_name, vm):
for dev in vm.domain_xmlobject.devices.disk:
if dev.get_child_node("target").dev_ == device_name:
return dev.get_child_node("source").file_
@thread.AsyncThread
@lock.lock("sharedblock-extend-vm-%s" % dom.name())
def handle_event(dom, event_str):
# type: (libvirt.virDomain, str) -> object
vm_uuid = dom.name()
syslog.syslog("got suspend event from libvirt, %s %s %s" %
(vm_uuid, event_str, LibvirtEventManager.suspend_event_to_string(detail)))
disk_errors = dom.diskErrors() # type: dict
vm = get_vm_by_uuid_no_retry(vm_uuid, False)
if len(disk_errors) == 0:
syslog.syslog("no error in vm %s. skip to check and extend volume" % vm_uuid)
return
fixed = False
try:
for device, error in disk_errors.viewitems():
if error == libvirt.VIR_DOMAIN_DISK_ERROR_NO_SPACE:
path = get_path_by_device(device, vm)
syslog.syslog("disk %s:%s of vm %s got ENOSPC" % (device, path, vm_uuid))
if not lvm.lv_exists(path):
continue
extend_lv(event_str, path, vm, device)
fixed = True
except Exception as e:
syslog.syslog(str(e))
if fixed:
syslog.syslog("resume vm %s" % vm_uuid)
vm.resume()
touchQmpSocketWhenExists(vm_uuid)
event_str = LibvirtEventManager.event_to_string(event)
if event_str not in (LibvirtEventManager.EVENT_SUSPENDED,):
return
handle_event(dom, event_str)
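# lifecycle callback: stop and join the COLO heartbeat thread when its vm shuts down or stops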
def _clean_colo_heartbeat(self, conn, dom, event, detail, opaque):
event_str = LibvirtEventManager.event_to_string(event)
if event_str not in (LibvirtEventManager.EVENT_SHUTDOWN, LibvirtEventManager.EVENT_STOPPED):
return
vm_uuid = dom.name()
heartbeat_thread = self.vm_heartbeat.pop(vm_uuid, None)
if heartbeat_thread and heartbeat_thread.is_alive():
logger.debug("clean vm[uuid:%s] heartbeat, due to evnet %s" % (dom.name(), LibvirtEventManager.event_to_string(event)))
heartbeat_thread.do_heart_beat = False
heartbeat_thread.join()
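# lifecycle callback: on an unexpected SHUTDOWN/STOPPED event, deactivate the sharedblock
# LVs and COLO cache volumes the vm was using so other hosts can activate them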
@bash.in_bash
def _release_sharedblocks(self, conn, dom, event, detail, opaque):
logger.debug("got event from libvirt, %s %s" % (dom.name(), LibvirtEventManager.event_to_string(event)))
@linux.retry(times=5, sleep_time=1)
def wait_volume_unused(volume):
used_process = linux.linux_lsof(volume)
if len(used_process) != 0:
raise RetryException("volume %s still used: %s" % (volume, used_process))
@thread.AsyncThread
@bash.in_bash
def deactivate_colo_cache_volume(event_str, path, vm_uuid):
try:
wait_volume_unused(path)
finally:
used_process = linux.linux_lsof(path)
if len(used_process) == 0:
mount_path = path.rsplit('/',1)[0].replace("'", '')
sblk_volume_path = linux.get_mount_url(mount_path)
linux.umount(mount_path)
linux.rm_dir_force(mount_path)
if not sblk_volume_path:
syslog.syslog("vm: %s: no mount url found for %s" % (vm_uuid, mount_path))
try:
lvm.deactive_lv(sblk_volume_path, False)
syslog.syslog(
"deactivated volume %s for event %s on vm %s" % (
sblk_volume_path, event_str, vm_uuid))
except Exception as e:
logger.debug("deactivate volume %s for event %s happend on vm %s failed, %s" % (
sblk_volume_path, event_str, vm_uuid, str(e)))
else:
syslog.syslog("vm: %s, volume %s still used: %s, skip to deactivate" % (vm_uuid, path, used_process))
@thread.AsyncThread
@bash.in_bash
def deactivate_volume(event_str, file, vm_uuid):
# type: (str, str, str) -> object
volume = file.strip().split("'")[1]
syslog.syslog("deactivating volume %s for vm %s" % (file, vm_uuid))
lock_type = bash.bash_o("lvs --noheading --nolocking %s -ovg_lock_type" % volume).strip()
if "sanlock" not in lock_type:
syslog.syslog("%s has no sanlock, skip to deactive" % file)
return
try:
wait_volume_unused(volume)
finally:
used_process = linux.linux_lsof(volume)
if len(used_process) == 0:
try:
lvm.deactive_lv(volume, False)
syslog.syslog(
"deactivated volume %s for event %s on vm %s successfully" % (volume, event_str, vm_uuid))
except Exception as e:
syslog.syslog("failed to deactivate volume %s for event %s on vm %s: %s" % (
volume, event_str, vm_uuid, str(e)))
else:
syslog.syslog("vm: %s, volume %s still used: %s, skip to deactivate" % (vm_uuid, volume, used_process))
try:
event_str = LibvirtEventManager.event_to_string(event)
if event_str not in (LibvirtEventManager.EVENT_SHUTDOWN, LibvirtEventManager.EVENT_STOPPED):
return
vm_uuid = dom.name()
vm_op_judger = self._get_operation(vm_uuid)
if vm_op_judger and event_str in vm_op_judger.ignore_libvirt_events():
logger.info("expected event for zstack op %s, ignore event %s on vm %s" % (vm_op_judger.op, event_str, vm_uuid))
return
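# grep the live domain xml for disks backed by /dev/... paths, i.e. sharedblock LVs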
out = bash.bash_o("virsh dumpxml %s | grep \"source file='/dev/\"" % vm_uuid).strip().splitlines()
if len(out) != 0:
for file in out:
deactivate_volume(event_str, file, vm_uuid)
out = bash.bash_o('virsh dumpxml %s | grep -E "(active|hidden) file="' % vm_uuid).strip().splitlines()
if len(out) != 0:
for cache_config in out:
path = cache_config.split('=')[1].rsplit('/', 1)[0]
deactivate_colo_cache_volume(event_str, path, vm_uuid)
else:
logger.debug("can not find sharedblock related volume for vm %s, skip to release" % vm_uuid)
except:
content = traceback.format_exc()
logger.warn("traceback: %s" % content)
def _vm_shutdown_event(self, conn, dom, event, detail, opaque):
try:
event = LibvirtEventManager.event_to_string(event)
if event not in (LibvirtEventManager.EVENT_SHUTDOWN,):
return
vm_uuid = dom.name()
# this is an operation outside zstack, report it
url = self.config.get(kvmagent.SEND_COMMAND_URL)
if not url:
logger.warn('cannot find SEND_COMMAND_URL, unable to report shutdown event of vm[uuid:%s]' % vm_uuid)
return
@thread.AsyncThread
def report_to_management_node():
cmd = ReportVmShutdownEventCmd()
cmd.vmUuid = vm_uuid
syslog.syslog('report shutdown event for vm ' + vm_uuid)
http.json_dump_post(url, cmd, {'commandpath': '/kvm/reportvmshutdown'})
report_to_management_node()
except:
content = traceback.format_exc()
logger.warn("traceback: %s" % content)
def _set_vnc_port_iptable_rule(self, conn, dom, event, detail, opaque):
try:
event = LibvirtEventManager.event_to_string(event)
if event not in (LibvirtEventManager.EVENT_STARTED, LibvirtEventManager.EVENT_STOPPED):
return
vm_uuid = dom.name()
if vm_uuid.startswith("guestfs-"):
logger.debug("[set_vnc_port_iptable]ignore the temp vm[%s] while using guestfish" % vm_uuid)
return
domain_xml = dom.XMLDesc(0)
domain_xmlobject = xmlobject.loads(domain_xml)
if is_namespace_used():
internal_id_node = find_zstack_metadata_node(etree.fromstring(domain_xml), 'internalId')
vm_id = internal_id_node.text if internal_id_node is not None else None
else:
vm_id = domain_xmlobject.metadata.internalId.text_ if xmlobject.has_element(domain_xmlobject, 'metadata.internalId') else None
if not vm_id:
logger.debug('vm[uuid:%s] is not managed by zstack, do not configure the vnc iptables rules' % vm_uuid)
return
vir = VncPortIptableRule()
if LibvirtEventManager.EVENT_STARTED == event:
if is_namespace_used():
host_ip_node = find_zstack_metadata_node(etree.fromstring(domain_xml), 'hostManagementIp')
vir.host_ip = host_ip_node.text
else:
vir.host_ip = domain_xmlobject.metadata.hostManagementIp.text_
if shell.run('ip addr | grep -w %s > /dev/null' % vir.host_ip) != 0:
logger.debug('the vm is migrated from another host, we do not need to set the console firewall, '
'as the management node will take care of it')
return
for g in domain_xmlobject.devices.get_child_node_as_list('graphics'):
if g.type_ == 'vnc' or g.type_ == 'spice':
vir.port = g.port_
break
vir.vm_internal_id = vm_id
vir.apply()
logger.debug('Enable [port:%s] in firewall rule for vm[uuid:%s] console' % (vir.port, vm_id))
elif LibvirtEventManager.EVENT_STOPPED == event:
vir.vm_internal_id = vm_id
vir.delete()
logger.debug('Delete firewall rule for vm[uuid:%s] console' % vm_id)
except:
# if the vm is live migrating, the dom may not be found, or the vm has already been undefined
vm = get_vm_by_uuid(dom.name(), False)
if not vm:
logger.debug("can not get domain xml of vm[uuid:%s], "
"the vm may be just migrated here or it has already been undefined" % dom.name())
return
content = traceback.format_exc()
logger.warn(content)
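# lifecycle callback: when a vm stops, delete its zwatch metrics from the local pushgateway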
def _delete_pushgateway_metric(self, conn, dom, event, detail, opaque):
try:
event = LibvirtEventManager.event_to_string(event)
if event != LibvirtEventManager.EVENT_STOPPED:
return
output = shell.call('ps aux | grep [p]ushgateway')
if '/var/lib/zstack/kvm/pushgateway' not in output:
return
port = None
lines = output.splitlines()
for line in lines:
if '/var/lib/zstack/kvm/pushgateway' in line:
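# the listen port follows the 20-character token "web.listen-address :" in the process arguments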
port = line[line.rindex('web.listen-address :') + 20:]
port = port.split()[0]
break
vm_uuid = dom.name()
url = "http://localhost:%s/metrics/job/zwatch_vm_agent/vmUuid/%s" % (port, vm_uuid)
shell.run('curl -X DELETE ' + url)
except Exception as e:
logger.warn("delete pushgateway metric when vm stoped failed: %s" % e.message)
def register_libvirt_event(self):
#LibvirtAutoReconnect.add_libvirt_callback(libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, self._vm_lifecycle_event)
LibvirtAutoReconnect.add_libvirt_callback(libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
self._set_vnc_port_iptable_rule)
LibvirtAutoReconnect.add_libvirt_callback(libvirt.VIR_DOMAIN_EVENT_ID_REBOOT, self._vm_reboot_event)
LibvirtAutoReconnect.add_libvirt_callback(libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, self._vm_shutdown_event)
LibvirtAutoReconnect.add_libvirt_callback(libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, self._release_sharedblocks)
LibvirtAutoReconnect.add_libvirt_callback(libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, self._clean_colo_heartbeat)
LibvirtAutoReconnect.add_libvirt_callback(libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, self._extend_sharedblock)
LibvirtAutoReconnect.add_libvirt_callback(libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, self._delete_pushgateway_metric)
LibvirtAutoReconnect.register_libvirt_callbacks()
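# daily task: delete qemu logs of vms that no longer exist once the logs are older than 180 days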
def register_qemu_log_cleaner(self):
def pick_uuid_from_filename(filename):
pattern = r'^([0-9a-f]{32})\.log'
matcher = re.match(pattern, filename)
if matcher:
return matcher.group(1) # return uuid
else:
return None
def qemu_log_cleaner():
logger.debug('clean libvirt log task started')
try:
log_paths = linux.listPath('/var/log/libvirt/qemu/')
all_active_vm_uuids = set(get_all_vm_states())
# log life : 180 days
clean_time = datetime.datetime.now() - datetime.timedelta(days=180)
for p in log_paths:
filename = os.path.basename(p)
uuid = pick_uuid_from_filename(filename)
if uuid and uuid in all_active_vm_uuids:
# vm exists
continue
try:
modify_time = datetime.datetime.fromtimestamp(os.stat(p).st_mtime)
if modify_time < clean_time:
linux.rm_file_force(p)
except Exception as ex_inner:
logger.warn('Failed to clean libvirt log file `%s` because: %s' % (p, str(ex_inner)))
except Exception as ex_outer:
logger.warn('Failed to clean libvirt log files because: %s' % str(ex_outer))
# run cleaner : once a day
thread.timer(24 * 3600, qemu_log_cleaner).start()
# first run, 60 seconds after startup
thread.timer(60, qemu_log_cleaner).start()
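# unmount leftover sshfs mount points under the temp dir, killing any process that still holds them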
def clean_old_sshfs_mount_points(self):
mpts = shell.call("mount -t fuse.sshfs | awk '{print $3}'").splitlines()
for mpt in mpts:
if mpt.startswith(tempfile.gettempdir()):
pids = linux.get_pids_by_process_fullname(mpt)
for pid in pids:
linux.kill_process(pid, is_exception=False)
linux.fumount(mpt, 2)
def stop(self):
self.clean_old_sshfs_mount_points()
def configure(self, config):
self.config = config
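# value object for an empty cdrom slot: target device name plus its bus/unit address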
class EmptyCdromConfig(object):
def __init__(self, targetDev, bus, unit):
self.targetDev = targetDev
self.bus = bus
self.unit = unit
class VolumeIDEConfig(object):
def __init__(self, bus, unit):
self.bus = bus
self.unit = unit
class ColoReplicationConfig(object):
def __init__(self, alias_name, replication_id):
self.alias_name = alias_name
self.replication_id = replication_id
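# describes a single volume snapshot task (paths plus live/full/memory flags) within a snapshot request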
class VolumeSnapshotJobStruct(object):
def __init__(self, volumeUuid, volume, installPath, vmInstanceUuid, previousInstallPath,
newVolumeInstallPath, live=True, full=False, memory=False):
self.volumeUuid = volumeUuid
self.volume = volume
self.installPath = installPath
self.vmInstanceUuid = vmInstanceUuid
self.previousInstallPath = previousInstallPath
self.newVolumeInstallPath = newVolumeInstallPath
self.memory = memory
self.live = live
self.full = full
class VolumeSnapshotResultStruct(object):
def __init__(self, volumeUuid, previousInstallPath, installPath, size=None):
"""
:type volumeUuid: str
:type size: long
:type installPath: str
:type previousInstallPath: str
"""
self.volumeUuid = volumeUuid
self.previousInstallPath = previousInstallPath
self.installPath = installPath
self.size = size
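# refresh the mtime of the vm's QMP socket file if it exists (e.g. so it is not treated as stale)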
@bash.in_bash
@misc.ignoreerror
def touchQmpSocketWhenExists(vmUuid):
if vmUuid is None:
return
path = "%s/%s.sock" % (QMP_SOCKET_PATH, vmUuid)
if os.path.exists(path):
bash.bash_roe("touch %s" % path)