hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9677bbcd0dfdb24413182900511a1e43b219d536 | 517 | py | Python | notecard_pseudo_sensor/notecard_pseudo_sensor.py | blues/notecard-pseudo-sensor-python | 19429e3420d168eda622562f092b1e4b1dcb77d5 | [
"MIT"
] | null | null | null | notecard_pseudo_sensor/notecard_pseudo_sensor.py | blues/notecard-pseudo-sensor-python | 19429e3420d168eda622562f092b1e4b1dcb77d5 | [
"MIT"
] | null | null | null | notecard_pseudo_sensor/notecard_pseudo_sensor.py | blues/notecard-pseudo-sensor-python | 19429e3420d168eda622562f092b1e4b1dcb77d5 | [
"MIT"
] | null | null | null | import random
class NotecardPseudoSensor:

    def __init__(self, card):
        self.card = card

    # Read the temperature from the Notecard’s temperature
    # sensor. The Notecard captures a new temperature sample every
    # five minutes.
    def temp(self):
        temp_req = {"req": "card.temp"}
        temp_rsp = self.card.Transaction(temp_req)
        return temp_rsp["value"]

    # Generate a random humidity that’s close to an average
    # indoor humidity reading.
    def humidity(self):
        return round(random.uniform(45, 50), 4)
| 27.210526 | 64 | 0.711799 | 72 | 517 | 5 | 0.583333 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012019 | 0.195358 | 517 | 18 | 65 | 28.722222 | 0.853365 | 0.398453 | 0 | 0 | 0 | 0 | 0.055738 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.1 | 0.1 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
96843be69995d5261982a7a95d54bc0c85b10d1c | 1,124 | py | Python | pyclick/click_models/task_centric/SearchTask.py | gaudel/ranking_bandits | 1fe4a38b17a3bb7ccab3ae0f4d0afb70fe54dbc9 | [
"MIT"
] | 3 | 2021-07-22T14:46:01.000Z | 2021-07-23T08:55:01.000Z | pyclick/click_models/task_centric/SearchTask.py | gaudel/ranking_bandits | 1fe4a38b17a3bb7ccab3ae0f4d0afb70fe54dbc9 | [
"MIT"
] | null | null | null | pyclick/click_models/task_centric/SearchTask.py | gaudel/ranking_bandits | 1fe4a38b17a3bb7ccab3ae0f4d0afb70fe54dbc9 | [
"MIT"
] | null | null | null | #
# Copyright (C) 2015 Ilya Markov
#
# Full copyright notice can be found in LICENSE.
#
from collections import OrderedDict
__author__ = 'Ilya Markov'
class SearchTask(object):
    """A search task consisting of multiple search sessions."""

    def __init__(self, task):
        self._task = task
        self.search_sessions = []

    def __str__(self):
        return "%s:%r" % (self._task, [search_session.query for search_session in self.search_sessions])

    def __repr__(self):
        return str(self)

    @staticmethod
    def get_search_tasks(search_sessions):
        """
        Groups search sessions by task and returns the list of all tasks.

        :param search_sessions: Task-centric search sessions.
        :returns: The list of tasks.
        """
        search_tasks = OrderedDict()
        for search_session in search_sessions:
            if search_session.task not in search_tasks:
                search_tasks[search_session.task] = SearchTask(search_session.task)
            search_tasks[search_session.task].search_sessions.append(search_session)
        return search_tasks.values()
| 27.414634 | 104 | 0.669929 | 136 | 1,124 | 5.25 | 0.397059 | 0.176471 | 0.095238 | 0.058824 | 0.078431 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004723 | 0.246441 | 1,124 | 40 | 105 | 28.1 | 0.838253 | 0.251779 | 0 | 0 | 0 | 0 | 0.020202 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.055556 | 0.111111 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
968b31d315d2794504114cd12a305550671ccee4 | 782 | py | Python | Exercises/Databases/playlist-app/forms.py | pedwards95/Springboard_Class | 9df8dbd8832223e89b89d12db3f7e0b178e2ed79 | [
"MIT"
] | null | null | null | Exercises/Databases/playlist-app/forms.py | pedwards95/Springboard_Class | 9df8dbd8832223e89b89d12db3f7e0b178e2ed79 | [
"MIT"
] | null | null | null | Exercises/Databases/playlist-app/forms.py | pedwards95/Springboard_Class | 9df8dbd8832223e89b89d12db3f7e0b178e2ed79 | [
"MIT"
] | null | null | null | """Forms for playlist app."""
from wtforms import SelectField, StringField, PasswordField, ValidationError
from wtforms.validators import InputRequired, Email
from flask_wtf import FlaskForm
class PlaylistForm(FlaskForm):
    """Form for adding playlists."""

    name = StringField("Name", validators=[InputRequired()])
    description = StringField("Description", validators=[InputRequired()])


class SongForm(FlaskForm):
    """Form for adding songs."""

    title = StringField("Title", validators=[InputRequired()])
    artist = StringField("Artist", validators=[InputRequired()])


# DO NOT MODIFY THIS FORM - EVERYTHING YOU NEED IS HERE
class NewSongForPlaylistForm(FlaskForm):
    """Form for adding a song to playlist."""

    song = SelectField('Song To Add', coerce=int)
| 28.962963 | 76 | 0.734015 | 83 | 782 | 6.903614 | 0.53012 | 0.160558 | 0.08377 | 0.115183 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144501 | 782 | 26 | 77 | 30.076923 | 0.856502 | 0.209719 | 0 | 0 | 0 | 0 | 0.061977 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.090909 | 0.272727 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
96b228ae9e7649879cdeda2fdd3f35cf03b7b213 | 57 | py | Python | day1/main.py | HallerPatrick/aoc-2021 | 4df56c940f5fcd0b9967e3f8d8f7a80ef251217e | [
"MIT"
] | null | null | null | day1/main.py | HallerPatrick/aoc-2021 | 4df56c940f5fcd0b9967e3f8d8f7a80ef251217e | [
"MIT"
] | null | null | null | day1/main.py | HallerPatrick/aoc-2021 | 4df56c940f5fcd0b9967e3f8d8f7a80ef251217e | [
"MIT"
] | null | null | null | x = 3
def foo():
    y = "String"
    return y

foo()
| 5.7 | 16 | 0.438596 | 9 | 57 | 2.777778 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0.403509 | 57 | 9 | 17 | 6.333333 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
96baf8d07ea198031649ec3fdb101d5cf09a30ac | 1,291 | py | Python | schedpy/config.py | safiya03/schedpy | 719fd08e0b5d2ed95bf70c4c0d3b1b4a8463d1ac | [
"MIT"
] | 2 | 2019-12-04T18:58:47.000Z | 2020-03-07T13:09:10.000Z | schedpy/config.py | safiya03/schedpy | 719fd08e0b5d2ed95bf70c4c0d3b1b4a8463d1ac | [
"MIT"
] | 4 | 2019-12-04T19:02:04.000Z | 2019-12-04T19:10:54.000Z | schedpy/config.py | safiya03/schedpy | 719fd08e0b5d2ed95bf70c4c0d3b1b4a8463d1ac | [
"MIT"
] | 3 | 2019-12-05T07:30:37.000Z | 2020-11-27T15:05:33.000Z | from datetime import date
class Config(object):
    def __init__(self):
        self.start_day = 0
        self.start_date = date.today()

    def _set_configs(self, start_day, start_date):
        """Method to set start day."""
        self.start_day = start_day
        self.start_date = start_date
        days_list = [
            "mon",
            "tue",
            "wed",
            "thu",
            "fri",
            "sat",
            "sun",
            "monday",
            "tuesday",
            "wednesday",
            "thursday",
            "friday",
            "saturday",
            "sunday",
        ]
        if isinstance(self.start_day, str):
            if self.start_day.lower() in days_list:
                self.start_day = (
                    days_list.index(self.start_day.lower())
                    if days_list.index(self.start_day.lower()) < 7
                    else days_list.index(self.start_day.lower()) - 7
                )
        # self.start_day = self.start_day
        print("Start day set to: " + str(self.start_day) + "/" + days_list[7 + self.start_day].capitalize())
        print("Start date set to: " + str(self.start_date))
        return self.start_day

    def get_configs(self):
        return self.start_day, self.start_date
| 30.023256 | 108 | 0.502711 | 146 | 1,291 | 4.205479 | 0.315068 | 0.278502 | 0.29316 | 0.110749 | 0.389251 | 0.149837 | 0.149837 | 0.100977 | 0 | 0 | 0 | 0.004994 | 0.379551 | 1,291 | 42 | 109 | 30.738095 | 0.761548 | 0.044152 | 0 | 0 | 0 | 0 | 0.088762 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.027778 | 0.027778 | 0.194444 | 0.055556 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
96bfc271402cd1c2d0a61f6e1be27c1f167119be | 898 | py | Python | WeatherStationSensorsReader/controllers/ground_temperature_controller.py | weather-station-project/weather-station-sensors-reader | cda7902ee382248b41d14b9a2c0543817decbb4a | [
"MIT"
] | null | null | null | WeatherStationSensorsReader/controllers/ground_temperature_controller.py | weather-station-project/weather-station-sensors-reader | cda7902ee382248b41d14b9a2c0543817decbb4a | [
"MIT"
] | null | null | null | WeatherStationSensorsReader/controllers/ground_temperature_controller.py | weather-station-project/weather-station-sensors-reader | cda7902ee382248b41d14b9a2c0543817decbb4a | [
"MIT"
] | null | null | null | from controllers.controller import Controller
from dao.ground_temperature_dao import GroundTemperatureDao
from sensors.ground_temperature_sensor import GroundTemperatureSensor
class GroundTemperatureController(Controller):
    """ Represents the controller with the ground temperature sensor and DAO """

    def __init__(self, server, database, user, password):
        super(GroundTemperatureController, self).__init__(sensor=GroundTemperatureSensor(), dao=GroundTemperatureDao(server=server,
                                                                                                                     database=database,
                                                                                                                     user=user,
                                                                                                                     password=password))
| 64.142857 | 136 | 0.502227 | 55 | 898 | 7.981818 | 0.454545 | 0.116173 | 0.104784 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.454343 | 898 | 13 | 137 | 69.076923 | 0.895918 | 0.075724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.222222 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
73707d826b46ea5a349c8e03b0cafc919ea900a5 | 1,982 | py | Python | src/sims4communitylib/events/build_buy/common_build_buy_event_dispatcher.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | 118 | 2019-08-31T04:33:18.000Z | 2022-03-28T21:12:14.000Z | src/sims4communitylib/events/build_buy/common_build_buy_event_dispatcher.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | 15 | 2019-12-05T01:29:46.000Z | 2022-02-18T17:13:46.000Z | src/sims4communitylib/events/build_buy/common_build_buy_event_dispatcher.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | 28 | 2019-09-07T04:11:05.000Z | 2022-02-07T18:31:40.000Z | """
The Sims 4 Community Library is licensed under the Creative Commons Attribution 4.0 International public license (CC BY 4.0).
https://creativecommons.org/licenses/by/4.0/
https://creativecommons.org/licenses/by/4.0/legalcode
Copyright (c) COLONOLNUTTY
"""
from typing import Any
from sims4communitylib.events.build_buy.events.build_buy_enter import S4CLBuildBuyEnterEvent
from sims4communitylib.events.build_buy.events.build_buy_exit import S4CLBuildBuyExitEvent
from sims4communitylib.events.event_handling.common_event_registry import CommonEventRegistry
from sims4communitylib.modinfo import ModInfo
from sims4communitylib.services.common_service import CommonService
from sims4communitylib.utils.common_injection_utils import CommonInjectionUtils
from zone import Zone
class CommonBuildBuyEventDispatcherService(CommonService):
    """A service that dispatches Build/Buy events.

    .. warning:: Do not use this service directly to listen for events!\
        Use the :class:`.CommonEventRegistry` to listen for dispatched events.

    """

    def _on_build_buy_enter(self, zone: Zone, *_, **__):
        return CommonEventRegistry.get().dispatch(S4CLBuildBuyEnterEvent(zone))

    def _on_build_buy_exit(self, zone: Zone, *_, **__):
        return CommonEventRegistry.get().dispatch(S4CLBuildBuyExitEvent(zone))


@CommonInjectionUtils.inject_safely_into(ModInfo.get_identity(), Zone, Zone.on_build_buy_enter.__name__)
def _common_build_buy_enter(original, self, *args, **kwargs) -> Any:
    result = original(self, *args, **kwargs)
    CommonBuildBuyEventDispatcherService.get()._on_build_buy_enter(self, *args, **kwargs)
    return result


@CommonInjectionUtils.inject_safely_into(ModInfo.get_identity(), Zone, Zone.on_build_buy_exit.__name__)
def _common_build_buy_exit(original, self, *args, **kwargs) -> Any:
    result = original(self, *args, **kwargs)
    CommonBuildBuyEventDispatcherService.get()._on_build_buy_exit(self, *args, **kwargs)
    return result
| 43.086957 | 125 | 0.792634 | 237 | 1,982 | 6.367089 | 0.337553 | 0.06892 | 0.039761 | 0.058317 | 0.495693 | 0.408217 | 0.408217 | 0.344599 | 0.279655 | 0.279655 | 0 | 0.010802 | 0.112513 | 1,982 | 45 | 126 | 44.044444 | 0.847072 | 0.223512 | 0 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0.347826 | 0.086957 | 0.73913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
7372d8e280bfee2d014c4e70726c289a411ac62b | 1,202 | py | Python | mail.py | Landers1037/simple-email-sms | d9172e389c24ceb4a9160d5108502a8ba9337d5e | [
"MIT"
] | 1 | 2019-05-27T08:56:52.000Z | 2019-05-27T08:56:52.000Z | mail.py | Landers1037/simple-email-sms | d9172e389c24ceb4a9160d5108502a8ba9337d5e | [
"MIT"
] | 1 | 2019-05-27T02:53:35.000Z | 2019-05-27T02:53:56.000Z | mail.py | Landers1037/simple-email-sms | d9172e389c24ceb4a9160d5108502a8ba9337d5e | [
"MIT"
] | null | null | null | import smtplib
import schedule
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
from email.header import Header
import time
import json
# Set the parameters required by smtplib.
# The sender and receiver below are used for the mail transmission.
class pymail:
    with open('service.json', 'r', encoding='utf8') as file:
        data = json.load(file)
    smtpserver = data["smtp"]
    username = data["username"]
    password = data["password"]
    sender = data["sender"]
    receiver = data["receiver"]
    text = data["text"]
    subject = data["subject"]
    sslflag = data["ssl"]

    msg = MIMEMultipart('mixed')
    msg['Subject'] = subject
    msg['From'] = data["source"]
    msg['To'] = ";".join(receiver)
    text_plain = MIMEText(text, 'plain', 'utf-8')
    msg.attach(text_plain)

    def email(self):
        if self.sslflag == "true":
            smtp = smtplib.SMTP_SSL(self.data["smtp"], self.data["port"])
        else:
            smtp = smtplib.SMTP(self.data["smtp"], self.data["port"])
        smtp.login(self.username, self.password)
        smtp.sendmail(self.sender, self.receiver, self.msg.as_string())
        smtp.quit()
| 27.318182 | 74 | 0.616473 | 143 | 1,202 | 5.153846 | 0.405594 | 0.048847 | 0.052917 | 0.043419 | 0.065129 | 0.065129 | 0 | 0 | 0 | 0 | 0 | 0.002191 | 0.240433 | 1,202 | 43 | 75 | 27.953488 | 0.805038 | 0.028286 | 0 | 0 | 0 | 0 | 0.106952 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0.060606 | 0.242424 | 0 | 0.606061 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
7379ef134306e3fa7e9f8f7f9df8e836ee26e3b0 | 5,793 | py | Python | stack/__init__.py | FireHead90544/stack.py | 783f59928e7f58519864e6e7a2306f18655c9f65 | [
"Apache-2.0"
] | 1 | 2021-07-30T16:52:03.000Z | 2021-07-30T16:52:03.000Z | stack/__init__.py | FireHead90544/stack.py | 783f59928e7f58519864e6e7a2306f18655c9f65 | [
"Apache-2.0"
] | null | null | null | stack/__init__.py | FireHead90544/stack.py | 783f59928e7f58519864e6e7a2306f18655c9f65 | [
"Apache-2.0"
] | null | null | null | """
Stack.py - LIFO Stack & FIFO Queue Implementation in Python
For Docs, Visit The GitHub Repo: https://github.com/FireHead90544/stack.py
Author: Rudransh Joshi (https://github.com/FireHead90544)
Issues: https://github.com/FireHead90544/stack.py/issues
"""
from __future__ import annotations
from typing import List, Any
import copy
__author__ = "Rudransh Joshi"
__version__ = 2.0
class Stack:
    """
    LIFO Implementation of Stack in Python\n
    Last element to get in will be the first to get out

    Creates a Stack object
    """

    def __init__(self, stackLength: int = 5) -> None:
        """
        LIFO Implementation of Stack in Python\n
        Last element to get in will be the first to get out.

        Initialises a stack object with given stack length.\n
        Stack length defaults to 5
        """
        self.stackLength = stackLength
        self._top = -1
        self._stackList = []

    @property
    def top(self) -> int:
        """
        Returns the top index of the stack.
        """
        self._top = len(self._stackList) - 1
        return self._top

    @property
    def list(self) -> List[Any]:
        """
        Returns the values that the stack holds as a list
        """
        return self._stackList

    @property
    def empty(self) -> bool:
        """
        Returns True if the stack is empty, else returns False
        """
        return False if not self.top == -1 else True

    def put(self, e: Any) -> None:
        """
        Pushes an element to the stack.\n
        If the stack is already full, then shows OverflowError.\n
        Returns None
        """
        if len(self._stackList) >= self.stackLength:
            raise OverflowError('The stack is already full. Unable to push any more elements.')
        self._stackList.append(e)

    def get(self) -> Any:
        """
        Pops/Gets the last element from the stack.\n
        If the stack is already empty, then shows UnderflowError.\n
        Returns the value popped
        """
        if self.top < 0:
            raise Exception('UnderflowError: The stack is empty. Unable to pop/get any elements.')
        else:
            return self._stackList.pop()

    def clear(self) -> None:
        """
        Clears the stack, removes every element from the stack.\n
        Returns None
        """
        self._stackList = []

    def copy(self) -> 'Stack':
        """
        Returns the copy of the stack as a Stack object
        """
        return copy.deepcopy(self)

    def __str__(self) -> str:
        """
        Overrides __str__ dunder method.
        Returns the stack as a stringified list object
        """
        return f"{self._stackList}"

    def __repr__(self) -> str:
        """
        Overrides __repr__ dunder method.\n
        Returns the class object representation of the stack with the values it holds
        """
        return f"Stack(length={self.stackLength}, top={self.top}, stack={self._stackList})"


class Queue:
    """
    FIFO Implementation of Queue in Python\n
    First element to get in will be the first to get out

    Creates a Queue object
    """

    def __init__(self, queueLength: int = 5):
        """
        FIFO Implementation of Queue in Python\n
        First element to get in will be the first to get out.

        Initialises a queue object with given queue length.\n
        Queue length defaults to 5
        """
        self.queueLength = queueLength
        self._front = 0
        self._rear = -1
        self._queueList = []

    @property
    def front(self) -> int:
        """
        Returns the front index of the queue.
        """
        return self._front

    @property
    def rear(self) -> int:
        """
        Returns the rear index of the queue.
        """
        self._rear = len(self._queueList) - 1
        return self._rear

    @property
    def list(self) -> List[Any]:
        """
        Returns the values that the queue holds as a list
        """
        return self._queueList

    @property
    def empty(self) -> bool:
        """
        Returns True if the queue is empty, else returns False
        """
        return False if not self.rear == -1 else True

    def enqueue(self, e: Any) -> None:
        """
        Enqueues an element to the queue.\n
        If the queue is already full, then shows OverflowError.\n
        Returns None
        """
        if len(self._queueList) >= self.queueLength:
            raise OverflowError('The queue is already full. Unable to enqueue any more elements.')
        self._queueList.append(e)

    def dequeue(self) -> Any:
        """
        Dequeues the first element from the queue.\n
        If the queue is already empty, then shows UnderflowError.\n
        Returns the value dequeued
        """
        if self.rear < 0:
            raise Exception('UnderflowError: The queue is empty. Unable to dequeue any element.')
        else:
            return self._queueList.pop(self.front)

    def clear(self) -> None:
        """
        Clears the queue, removes every element from the queue.\n
        Returns None
        """
        self._queueList = []

    def copy(self) -> 'Queue':
        """
        Returns the copy of the queue as a Queue object
        """
        return copy.deepcopy(self)

    def __str__(self) -> str:
        """
        Overrides __str__ dunder method.
        Returns the queue as a stringified list object
        """
        return f"{self._queueList}"

    def __repr__(self) -> str:
        """
        Overrides __repr__ dunder method.\n
        Returns the class object representation of the queue with the values it holds
        """
        return f"Queue(length={self.queueLength}, front={self.front}, rear={self.rear}, queue={self._queueList})"
| 28.678218 | 113 | 0.586915 | 721 | 5,793 | 4.60749 | 0.170596 | 0.036123 | 0.015051 | 0.016857 | 0.535822 | 0.449127 | 0.400361 | 0.384106 | 0.340157 | 0.316075 | 0 | 0.007628 | 0.321077 | 5,793 | 201 | 114 | 28.820896 | 0.83702 | 0.385983 | 0 | 0.342466 | 0 | 0.013699 | 0.169599 | 0.039409 | 0 | 0 | 0 | 0 | 0 | 1 | 0.287671 | false | 0 | 0.041096 | 0 | 0.561644 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
737c558dfd529e7f30a69f2924eca68b610b71de | 1,278 | py | Python | autumn/models/covid_19/stratifications/history.py | emmamcbryde/AuTuMN-1 | b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | autumn/models/covid_19/stratifications/history.py | emmamcbryde/AuTuMN-1 | b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | autumn/models/covid_19/stratifications/history.py | emmamcbryde/AuTuMN-1 | b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f | [
"BSD-2-Clause-FreeBSD"
] | 1 | 2019-10-22T04:47:34.000Z | 2019-10-22T04:47:34.000Z | from typing import Dict
from summer import Stratification
from autumn.models.covid_19.parameters import Parameters
from autumn.models.covid_19.constants import COMPARTMENTS, History, HISTORY_STRATA
from autumn.models.covid_19.stratifications.vaccination import apply_immunity_to_strat
def get_history_strat(params: Parameters, stratified_adjusters: Dict[str, Dict[str, float]]) -> Stratification:
    """
    Stratification to represent status regarding past infection/disease with Covid.
    Currently three strata, with everyone entering the experienced stratum after they have recovered from an episode.

    Args:
        params: All model parameters
        stratified_adjusters: VoC and severity stratification adjusters

    Returns:
        The history stratification summer object for application to the main model

    """

    history_strat = Stratification("history", HISTORY_STRATA, COMPARTMENTS)

    # Everyone starts out infection-naive
    pop_split = {stratum: 0. for stratum in HISTORY_STRATA}
    pop_split[History.NAIVE] = 1.
    history_strat.set_population_split(pop_split)

    # Immunity adjustments equivalent to vaccination approach
    apply_immunity_to_strat(history_strat, params, stratified_adjusters, History.NAIVE)

    return history_strat
| 36.514286 | 117 | 0.784038 | 156 | 1,278 | 6.25641 | 0.467949 | 0.061475 | 0.04918 | 0.064549 | 0.070697 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00747 | 0.161972 | 1,278 | 34 | 118 | 37.588235 | 0.903828 | 0.377934 | 0 | 0 | 0 | 0 | 0.009296 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.416667 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
737dfa7c6e4ef99be8c84a2f5c58379914d6c165 | 233 | py | Python | API/onepanman_api/admin/rule.py | CMS0503/CodeOnBoard | 2df8c9d934f6ffb05dbfbde329f84c66f2348618 | [
"MIT"
] | null | null | null | API/onepanman_api/admin/rule.py | CMS0503/CodeOnBoard | 2df8c9d934f6ffb05dbfbde329f84c66f2348618 | [
"MIT"
] | 12 | 2020-11-19T09:24:02.000Z | 2020-12-02T11:07:22.000Z | API/onepanman_api/admin/rule.py | CMS0503/CodeOnBoard | 2df8c9d934f6ffb05dbfbde329f84c66f2348618 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .. import models
@admin.register(models.Rule)
class RuleAdmin(admin.ModelAdmin):
    """
    Rule information
    """
    list_display = ['id', 'type', 'name']

    class Meta:
        model = models.Rule | 17.923077 | 41 | 0.626609 | 28 | 233 | 5.178571 | 0.714286 | 0.137931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.236052 | 233 | 13 | 42 | 17.923077 | 0.814607 | 0.021459 | 0 | 0 | 0 | 0 | 0.046948 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.714286 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
738ff9b464b875ec7ce2a25b19c5b5d1f421f123 | 946 | py | Python | mne/viz/__init__.py | jaeilepp/eggie | a7e812f27e33f9c43ac2e36c6b45a26a01530a06 | [
"BSD-2-Clause"
] | null | null | null | mne/viz/__init__.py | jaeilepp/eggie | a7e812f27e33f9c43ac2e36c6b45a26a01530a06 | [
"BSD-2-Clause"
] | null | null | null | mne/viz/__init__.py | jaeilepp/eggie | a7e812f27e33f9c43ac2e36c6b45a26a01530a06 | [
"BSD-2-Clause"
] | null | null | null | """Visualization routines
"""
from .topomap import plot_evoked_topomap, plot_projs_topomap
from .topomap import plot_ica_components, plot_ica_topomap
from .topomap import plot_tfr_topomap, plot_topomap
from .topo import (plot_topo, plot_topo_tfr, plot_topo_image_epochs,
                   iter_topography)
from .utils import tight_layout, mne_analyze_colormap, compare_fiff
from ._3d import plot_sparse_source_estimates, plot_source_estimates
from ._3d import plot_trans, plot_evoked_field
from .misc import plot_cov, plot_bem, plot_events
from .misc import plot_source_spectrogram
from .utils import _mutable_defaults
from .evoked import plot_evoked, plot_evoked_image
from .circle import plot_connectivity_circle, circular_layout
from .epochs import plot_image_epochs, plot_drop_log, plot_epochs
from .epochs import _drop_log_stats
from .raw import plot_raw, plot_raw_psds
from .ica import plot_ica_scores, plot_ica_sources, plot_ica_overlay
| 45.047619 | 68 | 0.842495 | 143 | 946 | 5.13986 | 0.335664 | 0.176871 | 0.069388 | 0.085714 | 0.07619 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002381 | 0.112051 | 946 | 20 | 69 | 47.3 | 0.872619 | 0.023256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.941176 | 0 | 0.941176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
73968a64038186a231ad939ca0949adb285e3fe1 | 1,191 | py | Python | nextactions/card.py | stevecshanks/trello-next-actions | d9a5086185bccf969d551a25f966e1b24ce4a299 | [
"MIT"
] | null | null | null | nextactions/card.py | stevecshanks/trello-next-actions | d9a5086185bccf969d551a25f966e1b24ce4a299 | [
"MIT"
] | 1 | 2016-12-28T16:23:19.000Z | 2016-12-28T16:25:18.000Z | nextactions/card.py | stevecshanks/trello-next-actions | d9a5086185bccf969d551a25f966e1b24ce4a299 | [
"MIT"
] | null | null | null | from urllib.parse import urlparse
class Card:

    AUTO_GENERATED_TEXT = 'Auto-created by TrelloNextActions'

    def __init__(self, trello, json):
        self._trello = trello
        self.id = json['id']
        self.name = json['name']
        self.board_id = json['idBoard']
        self.description = json['desc']
        self.url = json['url']

    def isAutoGenerated(self):
        return Card.AUTO_GENERATED_TEXT in self.description

    def getProjectBoard(self):
        board_id = self._getProjectBoardId()
        return self._trello.getBoardById(board_id)

    def _getProjectBoardId(self):
        url_components = urlparse(self.description)
        path_segments = url_components.path.split('/')
        if len(path_segments) >= 3:
            return path_segments[2]
        raise ValueError("Description could not be parsed as project URL")

    def __eq__(self, other):
        return self.id == other.id

    def __hash__(self):
        # Defining __eq__ disables the inherited __hash__ in Python 3,
        # so hash on the same key used for equality.
        return hash(self.id)

    def linksTo(self, other):
        return self.description.startswith(other.url)

    def archive(self):
        self._trello.put(
            'https://api.trello.com/1/cards/' + self.id + '/closed',
            {'value': "true"}
        )
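The slug extraction in `_getProjectBoardId` above hinges on how `urlparse` splits a board URL's path; a quick stdlib-only illustration (the URL is a made-up example, not real Trello data):

```python
from urllib.parse import urlparse

# For a board URL, the path splits into
# ['', 'b', '<board id>', '<board name>'], so index 2 is the board id.
description = 'https://trello.com/b/abc123/my-project'
segments = urlparse(description).path.split('/')
print(segments)     # ['', 'b', 'abc123', 'my-project']
print(segments[2])  # abc123
```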

# ---- File: Strings/1255_2.py (jeconiassantos/uriissues, MIT) ----
N = int(input())
while N > 0:
    texto = input().lower().replace(' ', '')
    alfabeto = 'abcdefghijklmnopqrstuvwxyz'
    contador = [texto.count(letra) for letra in alfabeto]
    maior = max(contador)
    # Letters tied for the highest count, in alphabetical order.
    result = ''.join(letra for letra, n in zip(alfabeto, contador) if n == maior)
    print(result)
    N -= 1
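The frequency count above can also be expressed with `collections.Counter`; a sketch on a fixed sample string (the sample is illustrative, the original reads lines from `input()`):

```python
from collections import Counter

# Count each letter, then keep the letters tied for the highest count,
# sorted alphabetically, as the loop above does.
texto = 'Computer Science'.lower().replace(' ', '')
contagem = Counter(texto)
maior = max(contagem.values())
result = ''.join(sorted(l for l, n in contagem.items() if n == maior))
print(result)  # ce
```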

# ---- File: activities/renderers/activity/pdf/colors.py (zemogle/astroEDU, MIT) ----
HEADER_COLOR = '#F78606'
TEXT_COLOR = '#676867'
FOOTER_LINE_COLOR = '#B0B0AE'

# ---- File: shop/migrations/0001_initial.py (aashish01/CRUD-Operation-Django-Rest-Framework, MIT) ----
# Generated by Django 3.0.7 on 2020-07-02 05:41
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='item',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('name', models.CharField(max_length=50)),
('price', models.FloatField()),
('brand', models.CharField(max_length=40)),
],
),
]

# ---- File: scuole/stats/schemas/tapr/pre2014schema.py (texastribune/scuole, MIT) ----
# Schema used for pre-2013-2014 TAPR data
SCHEMA = {
'staff-and-student-information': {
'all_students_count': 'PETALLC',
'african_american_count': 'PETBLAC',
'african_american_percent': 'PETBLAP',
'american_indian_count': 'PETINDC',
'american_indian_percent': 'PETINDP',
'asian_count': 'PETASIC',
'asian_percent': 'PETASIP',
'hispanic_count': 'PETHISC',
'hispanic_percent': 'PETHISP',
'pacific_islander_count': 'PETPCIC',
'pacific_islander_percent': 'PETPCIP',
'two_or_more_races_count': 'PETTWOC',
'two_or_more_races_percent': 'PETTWOP',
'white_count': 'PETWHIC',
'white_percent': 'PETWHIP',
'early_childhood_education_count': 'PETGEEC',
'early_childhood_education_percent': 'PETGEEP',
'prek_count': 'PETGPKC',
'prek_percent': 'PETGPKP',
'kindergarten_count': 'PETGKNC',
'kindergarten_percent': 'PETGKNP',
'first_count': 'PETG01C',
'first_percent': 'PETG01P',
'second_count': 'PETG02C',
'second_percent': 'PETG02P',
'third_count': 'PETG03C',
'third_percent': 'PETG03P',
'fourth_count': 'PETG04C',
'fourth_percent': 'PETG04P',
'fifth_count': 'PETG05C',
'fifth_percent': 'PETG05P',
'sixth_count': 'PETG06C',
'sixth_percent': 'PETG06P',
'seventh_count': 'PETG07C',
'seventh_percent': 'PETG07P',
'eighth_count': 'PETG08C',
'eighth_percent': 'PETG08P',
'ninth_count': 'PETG09C',
'ninth_percent': 'PETG09P',
'tenth_count': 'PETG10C',
'tenth_percent': 'PETG10P',
'eleventh_count': 'PETG11C',
'eleventh_percent': 'PETG11P',
'twelfth_count': 'PETG12C',
'twelfth_percent': 'PETG12P',
'at_risk_count': 'PETRSKC',
'at_risk_percent': 'PETRSKP',
'economically_disadvantaged_count': 'PETECOC',
'economically_disadvantaged_percent': 'PETECOP',
'limited_english_proficient_count': 'PETLEPC',
'limited_english_proficient_percent': 'PETLEPP',
'bilingual_esl_count': 'PETBILC',
'bilingual_esl_percent': 'PETBILP',
'career_technical_education_count': 'PETVOCC',
'career_technical_education_percent': 'PETVOCP',
'gifted_and_talented_count': 'PETGIFC',
'gifted_and_talented_percent': 'PETGIFP',
'special_education_count': 'PETSPEC',
'special_education_percent': 'PETSPEP',
'class_size_avg_kindergarten': 'PCTGKGA',
'class_size_avg_first': 'PCTG01A',
'class_size_avg_second': 'PCTG02A',
'class_size_avg_third': 'PCTG03A',
'class_size_avg_fourth': 'PCTG04A',
'class_size_avg_fifth': 'PCTG05A',
'class_size_avg_sixth': 'PCTG06A',
'class_size_avg_mixed_elementary': 'PCTGMEA',
'class_size_avg_secondary_english': 'PCTENGA',
'class_size_avg_secondary_foreign_language': 'PCTFLAA',
'class_size_avg_secondary_math': 'PCTMATA',
'class_size_avg_secondary_science': 'PCTSCIA',
'class_size_avg_secondary_social_studies': 'PCTSOCA',
'students_per_teacher': 'PSTKIDR',
# teacher_avg_tenure is Average Years Experience of Teachers with District
'teacher_avg_tenure': 'PSTTENA',
# teacher_avg_experience is Average Years Experience of Teachers
'teacher_avg_experience': 'PSTEXPA',
'teacher_avg_base_salary': 'PSTTOSA',
'teacher_avg_beginning_salary': 'PST00SA',
'teacher_avg_1_to_5_year_salary': 'PST01SA',
'teacher_avg_6_to_10_year_salary': 'PST06SA',
'teacher_avg_11_to_20_year_salary': 'PST11SA',
'teacher_avg_20_plus_year_salary': 'PST20SA',
'teacher_total_fte_count': 'PSTTOFC',
'teacher_african_american_fte_count': 'PSTBLFC',
'teacher_american_indian_fte_count': 'PSTINFC',
'teacher_asian_fte_count': 'PSTASFC',
'teacher_hispanic_fte_count': 'PSTHIFC',
'teacher_pacific_islander_fte_count': 'PSTPIFC',
'teacher_two_or_more_races_fte_count': 'PSTTWFC',
'teacher_white_fte_count': 'PSTWHFC',
'teacher_total_fte_percent': 'PSTTOFC',
'teacher_african_american_fte_percent': 'PSTBLFP',
'teacher_american_indian_fte_percent': 'PSTINFP',
'teacher_asian_fte_percent': 'PSTASFP',
'teacher_hispanic_fte_percent': 'PSTHIFP',
'teacher_pacific_islander_fte_percent': 'PSTPIFP',
'teacher_two_or_more_races_fte_percent': 'PSTTWFP',
'teacher_white_fte_percent': 'PSTWHFP',
# 'teacher_no_degree_count': 'PSTNOFC',
# 'teacher_bachelors_count': 'PSTBAFC',
# 'teacher_masters_count': 'PSTMSFC',
# 'teacher_doctorate_count': 'PSTPHFC',
# 'teacher_no_degree_percent': 'PSTNOFP',
# 'teacher_bachelors_percent': 'PSTBAFP',
# 'teacher_masters_percent': 'PSTMSFP',
# 'teacher_doctorate_percent': 'PSTPHFP',
},
'postsecondary-readiness-and-non-staar-performance-indicators': {
# 'college_ready_graduates_english_all_students_count': 'ACRR',
'college_ready_graduates_english_all_students_percent': 'ACRR',
# 'college_ready_graduates_english_african_american_count': 'BCRR',
'college_ready_graduates_english_african_american_percent': 'BCRR',
# 'college_ready_graduates_english_american_indian_count': 'ICRR',
'college_ready_graduates_english_american_indian_percent': 'ICRR',
# 'college_ready_graduates_english_asian_count': '3CRR',
'college_ready_graduates_english_asian_percent': '3CRR',
# 'college_ready_graduates_english_hispanic_count': 'HCRR',
'college_ready_graduates_english_hispanic_percent': 'HCRR',
# 'college_ready_graduates_english_pacific_islander_count': '4CRR',
'college_ready_graduates_english_pacific_islander_percent': '4CRR',
# 'college_ready_graduates_english_two_or_more_races_count': '2CRR',
'college_ready_graduates_english_two_or_more_races_percent': '2CRR',
# 'college_ready_graduates_english_white_count': 'WCRR',
'college_ready_graduates_english_white_percent': 'WCRR',
# 'college_ready_graduates_english_economically_disadvantaged_count': 'ECRR',
'college_ready_graduates_english_economically_disadvantaged_percent': 'ECRR',
# 'college_ready_graduates_english_limited_english_proficient_count': 'LCRR',
'college_ready_graduates_english_limited_english_proficient_percent': 'LCRR',
# 'college_ready_graduates_english_at_risk_count': 'RCRR',
'college_ready_graduates_english_at_risk_percent': 'RCRR',
# 'college_ready_graduates_math_all_students_count': 'ACRM',
'college_ready_graduates_math_all_students_percent': 'ACRM',
# 'college_ready_graduates_math_african_american_count': 'BCRM',
'college_ready_graduates_math_african_american_percent': 'BCRM',
# 'college_ready_graduates_math_american_indian_count': 'ICRM',
'college_ready_graduates_math_american_indian_percent': 'ICRM',
# 'college_ready_graduates_math_asian_count': '3CRM',
'college_ready_graduates_math_asian_percent': '3CRM',
# 'college_ready_graduates_math_hispanic_count': 'HCRM',
'college_ready_graduates_math_hispanic_percent': 'HCRM',
# 'college_ready_graduates_math_pacific_islander_count': '4CRM',
'college_ready_graduates_math_pacific_islander_percent': '4CRM',
# 'college_ready_graduates_math_two_or_more_races_count': '2CRM',
'college_ready_graduates_math_two_or_more_races_percent': '2CRM',
# 'college_ready_graduates_math_white_count': 'WCRM',
'college_ready_graduates_math_white_percent': 'WCRM',
# 'college_ready_graduates_math_economically_disadvantaged_count': 'ECRM',
'college_ready_graduates_math_economically_disadvantaged_percent': 'ECRM',
# 'college_ready_graduates_math_limited_english_proficient_count': 'LCRM',
'college_ready_graduates_math_limited_english_proficient_percent': 'LCRM',
# 'college_ready_graduates_math_at_risk_count': 'RCRM',
'college_ready_graduates_math_at_risk_percent': 'RCRM',
# 'college_ready_graduates_both_all_students_count': 'ACRB',
'college_ready_graduates_both_all_students_percent': 'ACRB',
# 'college_ready_graduates_both_african_american_count': 'BCRB',
'college_ready_graduates_both_african_american_percent': 'BCRB',
# 'college_ready_graduates_both_asian_count': '3CRB',
'college_ready_graduates_both_asian_percent': '3CRB',
# 'college_ready_graduates_both_hispanic_count': 'HCRB',
'college_ready_graduates_both_hispanic_percent': 'HCRB',
# 'college_ready_graduates_both_american_indian_count': 'ICRB',
'college_ready_graduates_both_american_indian_percent': 'ICRB',
# 'college_ready_graduates_both_pacific_islander_count': '4CRB',
'college_ready_graduates_both_pacific_islander_percent': '4CRB',
# 'college_ready_graduates_both_two_or_more_races_count': '2CRB',
'college_ready_graduates_both_two_or_more_races_percent': '2CRB',
# 'college_ready_graduates_both_white_count': 'WCRB',
'college_ready_graduates_both_white_percent': 'WCRB',
# 'college_ready_graduates_both_economically_disadvantaged_count': 'ECRB',
'college_ready_graduates_both_economically_disadvantaged_percent': 'ECRB',
# 'college_ready_graduates_both_limited_english_proficient_count': 'LCRB',
'college_ready_graduates_both_limited_english_proficient_percent': 'LCRB',
# 'college_ready_graduates_both_at_risk_count': 'RCRB',
'college_ready_graduates_both_at_risk_percent': 'RCRB',
'avg_sat_score_all_students': 'A0CSA',
'avg_sat_score_african_american': 'B0CSA',
'avg_sat_score_american_indian': 'I0CSA',
'avg_sat_score_asian': '30CSA',
'avg_sat_score_hispanic': 'H0CSA',
'avg_sat_score_pacific_islander': '40CSA',
'avg_sat_score_two_or_more_races': '20CSA',
'avg_sat_score_white': 'W0CSA',
'avg_sat_score_economically_disadvantaged': 'E0CSA',
'avg_act_score_all_students': 'A0CAA',
'avg_act_score_african_american': 'B0CAA',
'avg_act_score_american_indian': 'I0CAA',
'avg_act_score_asian': '30CAA',
'avg_act_score_hispanic': 'H0CAA',
'avg_act_score_pacific_islander': '40CAA',
'avg_act_score_two_or_more_races': '20CAA',
'avg_act_score_white': 'W0CAA',
'avg_act_score_economically_disadvantaged': 'E0CAA',
# 'ap_ib_all_students_count_above_criterion': 'A0BKA',
'ap_ib_all_students_percent_above_criterion': 'A0BKA',
# 'ap_ib_african_american_count_above_criterion': 'B0BKA',
'ap_ib_african_american_percent_above_criterion': 'B0BKA',
# 'ap_ib_asian_count_above_criterion': '30BKA',
'ap_ib_asian_percent_above_criterion': '30BKA',
# 'ap_ib_hispanic_count_above_criterion': 'H0BKA',
'ap_ib_hispanic_percent_above_criterion': 'H0BKA',
# 'ap_ib_american_indian_count_above_criterion': 'I0BKA',
'ap_ib_american_indian_percent_above_criterion': 'I0BKA',
# 'ap_ib_pacific_islander_count_above_criterion': '40BKA',
'ap_ib_pacific_islander_percent_above_criterion': '40BKA',
# 'ap_ib_two_or_more_races_count_above_criterion': '20BKA',
'ap_ib_two_or_more_races_percent_above_criterion': '20BKA',
# 'ap_ib_white_count_above_criterion': 'W0BKA',
'ap_ib_white_percent_above_criterion': 'W0BKA',
# 'ap_ib_economically_disadvantaged_count_above_criterion': 'E0BKA',
'ap_ib_economically_disadvantaged_percent_above_criterion': 'E0BKA',
'ap_ib_all_students_percent_taking': 'A0BTA',
'ap_ib_african_american_percent_taking': 'B0BTA',
'ap_ib_asian_percent_taking': '30BTA',
'ap_ib_hispanic_percent_taking': 'H0BTA',
'ap_ib_american_indian_percent_taking': 'I0BTA',
'ap_ib_pacific_islander_percent_taking': '40BTA',
'ap_ib_two_or_more_races_percent_taking': '20BTA',
'ap_ib_white_percent_taking': 'W0BTA',
'ap_ib_economically_disadvantaged_percent_taking': 'E0BTA',
# 'dropout_all_students_count': 'A0912DR',
'dropout_all_students_percent': 'A0912DR',
# 'dropout_african_american_count': 'B0912DR',
'dropout_african_american_percent': 'B0912DR',
# 'dropout_asian_count': '30912DR',
'dropout_asian_percent': '30912DR',
# 'dropout_hispanic_count': 'H0912DR',
'dropout_hispanic_percent': 'H0912DR',
# 'dropout_american_indian_count': 'I0912DR',
'dropout_american_indian_percent': 'I0912DR',
# 'dropout_pacific_islander_count': '40912DR',
'dropout_pacific_islander_percent': '40912DR',
# 'dropout_two_or_more_races_count': '20912DR',
'dropout_two_or_more_races_percent': '20912DR',
# 'dropout_white_count': 'W0912DR',
'dropout_white_percent': 'W0912DR',
# 'dropout_at_risk_count': 'R0912DR',
'dropout_at_risk_percent': 'R0912DR',
# 'dropout_economically_disadvantaged_count': 'E0912DR',
'dropout_economically_disadvantaged_percent': 'E0912DR',
        # 'dropout_limited_english_proficient_count': 'E0912DR',
        # NOTE: 'E0912DR' repeats the economically disadvantaged code above;
        # other LEP fields use an 'L' prefix (e.g. 'L3C4X'), so this looks
        # like a copy-over; kept unchanged.
        'dropout_limited_english_proficient_percent': 'E0912DR',
# 'four_year_graduate_all_students_count': 'AGC4X',
'four_year_graduate_all_students_percent': 'AGC4X',
# 'four_year_graduate_african_american_count': 'BGC4X',
'four_year_graduate_african_american_percent': 'BGC4X',
# 'four_year_graduate_american_indian_count': 'IGC4X',
'four_year_graduate_american_indian_percent': 'IGC4X',
# 'four_year_graduate_asian_count': '3GC4X',
'four_year_graduate_asian_percent': '3GC4X',
# 'four_year_graduate_hispanic_count': 'HGC4X',
'four_year_graduate_hispanic_percent': 'HGC4X',
# 'four_year_graduate_pacific_islander_count': '4GC4X',
'four_year_graduate_pacific_islander_percent': '4GC4X',
# 'four_year_graduate_two_or_more_races_count': '2GC4X',
'four_year_graduate_two_or_more_races_percent': '2GC4X',
# 'four_year_graduate_white_count': 'WGC4X',
'four_year_graduate_white_percent': 'WGC4X',
# 'four_year_graduate_at_risk_count': 'RGC4X',
'four_year_graduate_at_risk_percent': 'RGC4X',
# 'four_year_graduate_economically_disadvantaged_count': 'EGC4X',
'four_year_graduate_economically_disadvantaged_percent': 'EGC4X',
# 'four_year_graduate_limited_english_proficient_count': 'L3C4X',
'four_year_graduate_limited_english_proficient_percent': 'L3C4X',
'attendance_rate': 'A0AT',
},
'reference': {
'accountability_rating': '_RATING',
},
}
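A schema of this shape is typically used to translate raw TAPR column codes into friendly field names. A minimal sketch under that assumption (the sample row and the `translate` helper are illustrative, not from this repo):

```python
# Illustrative translation of one raw TAPR record using a schema of the
# shape above (trimmed to two fields); `raw_row` is made-up sample data.
SCHEMA = {
    'staff-and-student-information': {
        'all_students_count': 'PETALLC',
        'white_percent': 'PETWHIP',
    },
}

raw_row = {'PETALLC': '1044', 'PETWHIP': '61.3'}

def translate(schema, row):
    # Map each friendly field name to the value under its raw column code.
    return {
        section: {name: row.get(code) for name, code in fields.items()}
        for section, fields in schema.items()
    }

stats = translate(SCHEMA, raw_row)
print(stats['staff-and-student-information']['all_students_count'])  # 1044
```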

# ---- File: analyticsclient/exceptions.py (Jawayria/edx-analytics-data-api-client, Apache-2.0) ----
class ClientError(Exception):
"""Common base class for all client errors."""
class NotFoundError(ClientError):
"""URL was not found."""
class InvalidRequestError(ClientError):
"""The API request was invalid."""
class TimeoutError(ClientError): # pylint: disable=redefined-builtin
"""The API server did not respond before the timeout expired."""

# ---- File: app/converters.py (UniOulu-Ubicomp-Programming-Courses/pwp-inventory-service, MIT) ----
"""
This module defines custom converters for routing. These converters streamline
resource code by performing the process of getting the model instance from the
database before the view method is called. The converter converts a resource's
slug to the corresponding model instance, which is then placed into the view
method arguments. This eliminates the need to get all of the model instances
referenced in the URI and doing a 404 check for each one, avoiding a whole
lot of boilerplate code in the view methods.
The same happens in reverse: when costructing a URI for a resource, a model
instance is passed to *url_for* instead of the model slug, and the converter
will take care of placing a convertible value into the URI.
"""
from werkzeug.exceptions import NotFound
from werkzeug.routing import BaseConverter
from app.models import Map, Observer, Obstacle
class MapConverter(BaseConverter):
"""
A converter for the Map model. Uses map slug as the resource handle.
"""
def to_python(self, map_slug) -> object:
"""
Converts a map slug into the corresponding map model instance.
"""
map = Map.query.filter_by(slug=map_slug).first()
if map is None:
raise NotFound
return map
def to_url(self, map) -> str:
"""
Converts a map model instance to its corresponding slug for use in URI.
"""
return map.slug
class ObserverConverter(BaseConverter):
"""
A converter for the Map model. Uses map slug as the resource handle.
"""
    def to_python(self, obs_slug) -> object:
        """
        Converts an observer slug into the corresponding observer model instance.
        """
        observer = Observer.query.filter_by(slug=obs_slug).first()
        if observer is None:
            raise NotFound
        return observer

    def to_url(self, observer) -> str:
        """
        Converts an observer model instance to its corresponding slug for use in URI.
        """
        return observer.slug
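The module docstring above describes the to_python/to_url round trip. A dependency-free sketch of that pattern (the in-memory `MAPS` store and the mini dispatcher are illustrative stand-ins, not Werkzeug's actual routing):

```python
# Minimal stand-in for the converter round trip: the router resolves a
# slug to a record before the view runs, and turns a record back into a
# slug when building a URI.
class NotFound(Exception):
    pass

MAPS = {'floor-1': {'slug': 'floor-1', 'name': 'First floor'}}  # fake DB

class MapConverter:
    def to_python(self, map_slug):
        record = MAPS.get(map_slug)
        if record is None:
            raise NotFound(map_slug)
        return record

    def to_url(self, record):
        return record['slug']

converter = MapConverter()

def map_detail(map):
    # The view receives the resolved record, never the raw slug.
    return 'Map: ' + map['name']

def dispatch(path):
    slug = path.rsplit('/', 1)[-1]
    return map_detail(converter.to_python(slug))

def url_for_map(record):
    return '/maps/' + converter.to_url(record)

print(dispatch('/maps/floor-1'))     # Map: First floor
print(url_for_map(MAPS['floor-1']))  # /maps/floor-1
```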

# ---- File: AppDashboard/test/functional/test_dashboard.py (eabyshev/appscale, Apache-2.0) ----
#!/usr/bin/env python2
from flexmock import flexmock
import os
import re
import SOAPpy
import StringIO
import sys
import unittest
sys.path.append(os.path.join(os.path.dirname(__file__), '../../../AppDashboard'))
from dashboard import AppDeletePage
from dashboard import AppUploadPage
from dashboard import AuthorizePage
from dashboard import IndexPage
from dashboard import LoginPage
from dashboard import LoginVerify
from dashboard import LogoutPage
from dashboard import NewUserPage
from dashboard import StatusPage
from dashboard import StatusRefreshPage
sys.path.append(os.path.join(os.path.dirname(__file__), '../../../AppServer'))
from google.appengine.api.appcontroller_client import AppControllerClient
from google.appengine.ext import db
from google.appengine.api import taskqueue
from google.appengine.api import users
sys.path.append(os.path.join(os.path.dirname(__file__), '../../lib'))
import app_dashboard_data
from app_dashboard_data import AppDashboardData
from app_dashboard_helper import AppDashboardHelper
from secret_key import GLOBAL_SECRET_KEY
class FunctionalTestAppDashboard(unittest.TestCase):
def setUp(self):
acc = flexmock(AppControllerClient)
acc.should_receive('get_uaserver_host').and_return('public1')
acc.should_receive('get_stats').and_return([
{'ip' : '1.1.1.1',
'cpu' : '50',
'memory' : '50',
'disk' : '50',
'cloud' : 'cloud1',
'roles' : 'roles1',
'apps':{ 'app1':True, 'app2':False }
},
{'ip' : '2.2.2.2',
'cpu' : '50',
'memory' : '50',
'disk' : '50',
'cloud' : 'cloud1',
'roles' : 'roles1'}
])
acc.should_receive('get_role_info').and_return(
[{'jobs':['shadow', 'login'], 'public_ip':'1.1.1.1'} ]
)
acc.should_receive('get_database_information').and_return(
{'table':'fake_database', 'replication':1}
)
acc.should_receive('get_api_status').and_return(
{'api1':'running', 'api2':'failed', 'api3':'unknown'}
)
acc.should_receive('upload_tgz').and_return('true')
acc.should_receive('stop_app').and_return('true')
fake_soap = flexmock(name='fake_soap')
soap = flexmock(SOAPpy)
soap.should_receive('SOAPProxy').and_return(fake_soap)
fake_soap.should_receive('get_app_data').and_return(
"\n\n ports: 8080\n num_ports:1\n"
)
fake_soap.should_receive('get_capabilities')\
.with_args('a@a.com', GLOBAL_SECRET_KEY)\
.and_return('upload_app')
fake_soap.should_receive('get_capabilities')\
.with_args('b@a.com', GLOBAL_SECRET_KEY)\
.and_return('upload_app')
fake_soap.should_receive('get_capabilities')\
.with_args('c@a.com', GLOBAL_SECRET_KEY)\
.and_return('')
fake_soap.should_receive('get_user_data')\
.with_args('a@a.com', GLOBAL_SECRET_KEY)\
.and_return(
"is_cloud_admin:true\napplications:app1:app2\npassword:79951d98d43c1830c5e5e4de58244a621595dfaa\n"
)
fake_soap.should_receive('get_user_data')\
.with_args('b@a.com', GLOBAL_SECRET_KEY)\
.and_return(
"is_cloud_admin:false\napplications:app2\npassword:79951d98d43c1830c5e5e4de58244a621595dfaa\n"
)
fake_soap.should_receive('get_user_data')\
.with_args('c@a.com', GLOBAL_SECRET_KEY)\
.and_return(
"is_cloud_admin:false\napplications:app2\npassword:79951d98d43c1830c5e5e4de58244a621595dfaa\n"
)
fake_soap.should_receive('commit_new_user').and_return('true')
fake_soap.should_receive('commit_new_token').and_return()
fake_soap.should_receive('get_all_users').and_return("a@a.com:b@a.com")
fake_soap.should_receive('set_capabilities').and_return('true')
self.request = self.fakeRequest()
self.response = self.fakeResponse()
self.set_user()
fake_tq = flexmock(taskqueue)
fake_tq.should_receive('add').and_return()
self.setup_fake_db()
def setup_fake_db(self):
fake_root = flexmock()
fake_root.head_node_ip = '1.1.1.1'
fake_root.table = 'table'
fake_root.replication = 'replication'
fake_root.should_receive('put').and_return()
flexmock(app_dashboard_data).should_receive('DashboardDataRoot')\
.and_return(fake_root)
flexmock(AppDashboardData).should_receive('get_one')\
.with_args(app_dashboard_data.DashboardDataRoot,
AppDashboardData.ROOT_KEYNAME)\
.and_return(None)\
.and_return(fake_root)
fake_api1 = flexmock(name='APIstatus')
fake_api1.name = 'api1'
fake_api1.value = 'running'
fake_api1.should_receive('put').and_return()
fake_api2 = flexmock(name='APIstatus')
fake_api2.name = 'api2'
fake_api2.value = 'failed'
fake_api2.should_receive('put').and_return()
fake_api3 = flexmock(name='APIstatus')
fake_api3.name = 'api3'
fake_api3.value = 'unknown'
fake_api3.should_receive('put').and_return()
fake_api_q = flexmock()
fake_api_q.should_receive('ancestor').and_return()
fake_api_q.should_receive('run')\
.and_yield(fake_api1, fake_api2, fake_api3)
flexmock(AppDashboardData).should_receive('get_one')\
.with_args(app_dashboard_data.APIstatus, re.compile('api'))\
.and_return(fake_api1)\
.and_return(fake_api3)\
.and_return(fake_api3)
flexmock(AppDashboardData).should_receive('get_all')\
.with_args(app_dashboard_data.APIstatus)\
.and_return(fake_api_q)
fake_server1 = flexmock(name='ServerStatus')
fake_server1.ip = '1.1.1.1'
fake_server1.cpu = '25'
fake_server1.memory = '50'
fake_server1.disk = '100'
fake_server1.cloud = 'cloud1'
fake_server1.roles = 'roles2'
fake_server1.should_receive('put').and_return()
fake_server2 = flexmock(name='ServerStatus')
fake_server2.ip = '2.2.2.2'
fake_server2.cpu = '75'
fake_server2.memory = '55'
fake_server2.disk = '100'
fake_server2.cloud = 'cloud1'
fake_server2.roles = 'roles2'
fake_server2.should_receive('put').and_return()
flexmock(app_dashboard_data).should_receive('ServerStatus')\
.and_return(fake_server1)
fake_server_q = flexmock()
fake_server_q.should_receive('ancestor').and_return()
fake_server_q.should_receive('run')\
.and_yield(fake_server1, fake_server2)
fake_server_q.should_receive('get')\
.and_return(fake_server1)\
.and_return(fake_server2)
flexmock(AppDashboardData).should_receive('get_all')\
.with_args(app_dashboard_data.ServerStatus)\
.and_return(fake_server_q)
flexmock(AppDashboardData).should_receive('get_one')\
.with_args(app_dashboard_data.ServerStatus, re.compile('\d'))\
.and_return(fake_server1)\
.and_return(fake_server2)
fake_app1 = flexmock(name='AppStatus')
fake_app1.name = 'app1'
fake_app1.url = 'http://1.1.1.1:8080'
fake_app1.should_receive('put').and_return()
fake_app1.should_receive('delete').and_return()
fake_app2 = flexmock(name='AppStatus')
fake_app2.name = 'app2'
fake_app2.url = None
fake_app2.should_receive('put').and_return()
fake_app2.should_receive('delete').and_return()
flexmock(app_dashboard_data).should_receive('AppStatus')\
.and_return(fake_app1)
fake_app_q = flexmock()
fake_app_q.should_receive('ancestor').and_return()
fake_app_q.should_receive('run')\
.and_yield(fake_app1, fake_app2)
flexmock(AppDashboardData).should_receive('get_all')\
.with_args(app_dashboard_data.AppStatus)\
.and_return(fake_app_q)
flexmock(AppDashboardData).should_receive('get_all')\
.with_args(app_dashboard_data.AppStatus, keys_only=True)\
.and_return(fake_app_q)
flexmock(AppDashboardData).should_receive('get_one')\
.with_args(app_dashboard_data.AppStatus, re.compile('app'))\
.and_return(fake_app1)\
.and_return(fake_app2)
user_info1 = flexmock(name='UserInfo')
user_info1.email = 'a@a.com'
user_info1.is_user_cloud_admin = True
user_info1.can_upload_apps = True
user_info1.owned_apps = 'app1:app2'
user_info1.should_receive('put').and_return()
user_info2 = flexmock(name='UserInfo')
user_info2.email = 'b@a.com'
user_info2.is_user_cloud_admin = False
user_info2.can_upload_apps = True
user_info2.owned_apps = 'app2'
user_info2.should_receive('put').and_return()
user_info3 = flexmock(name='UserInfo')
user_info3.email = 'c@a.com'
user_info3.is_user_cloud_admin = False
user_info3.can_upload_apps = False
user_info3.owned_apps = 'app2'
user_info3.should_receive('put').and_return()
flexmock(app_dashboard_data).should_receive('UserInfo')\
.and_return(user_info1)
flexmock(AppDashboardData).should_receive('get_one')\
.with_args(app_dashboard_data.UserInfo, re.compile('a@a.com'))\
.and_return(user_info1)
flexmock(AppDashboardData).should_receive('get_one')\
.with_args(app_dashboard_data.UserInfo, re.compile('b@a.com'))\
.and_return(user_info2)
flexmock(AppDashboardData).should_receive('get_one')\
.with_args(app_dashboard_data.UserInfo, re.compile('c@a.com'))\
.and_return(user_info3)
flexmock(db).should_receive('delete').and_return()
flexmock(db).should_receive('run_in_transaction').and_return()
    def set_user(self, email=None):
        self.usrs = flexmock(users)
        if email is not None:
            user_obj = flexmock(name='users')
            user_obj.should_receive('email').and_return(email)
            self.usrs.should_receive('get_current_user').and_return(user_obj)
        else:
            self.usrs.should_receive('get_current_user').and_return(None)

    def set_post(self, post_dict):
        self.request.POST = post_dict
        for key in post_dict.keys():
            self.request.should_receive('get').with_args(key)\
                .and_return(post_dict[key])

    def set_fileupload(self, fieldname):
        self.request.POST = flexmock(name='POST')
        self.request.POST.multi = {}
        self.request.POST.multi[fieldname] = flexmock(name='file')
        self.request.POST.multi[fieldname].file = StringIO.StringIO("FILE CONTENTS")

    def set_get(self, post_dict):
        self.request.GET = post_dict
        for key in post_dict.keys():
            self.request.should_receive('get').with_args(key)\
                .and_return(post_dict[key])

    def fakeRequest(self):
        req = flexmock(name='request')
        req.should_receive('get').and_return('')
        req.url = '/'
        return req

    def fakeResponse(self):
        res = flexmock(name='response')
        res.headers = {}
        res.cookies = {}
        res.deleted_cookies = {}
        res.redirect_location = None
        res.out = StringIO.StringIO()

        def fake_set_cookie(key, value='', max_age=None, path='/', domain=None,
                            secure=None, httponly=False, comment=None,
                            expires=None, overwrite=False):
            res.cookies[key] = value

        def fake_delete_cookie(key, path='/', domain=None):
            res.deleted_cookies[key] = 1

        def fake_clear():
            pass

        def fake_redirect(path, response):
            res.redirect_location = path

        res.set_cookie = fake_set_cookie
        res.delete_cookie = fake_delete_cookie
        res.clear = fake_clear
        res.redirect = fake_redirect
        return res
    def test_landing_notloggedin(self):
        IndexPage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/landing/index.html -->', html))
        self.assertTrue(re.search('<a href="/users/login">Login to this cloud.</a>', html))
        self.assertFalse(re.search('<a href="/authorize">Manage users.</a>', html))

    def test_landing_loggedin_notAdmin(self):
        self.set_user('b@a.com')
        IndexPage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/landing/index.html -->', html))
        self.assertTrue(re.search('<a href="/users/logout">Logout now.</a>', html))
        self.assertFalse(re.search('<a href="/authorize">Manage users.</a>', html))

    def test_landing_loggedin_isAdmin(self):
        self.set_user('a@a.com')
        IndexPage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/landing/index.html -->', html))
        self.assertTrue(re.search('<a href="/users/logout">Logout now.</a>', html))
        self.assertTrue(re.search('<a href="/authorize">Manage users.</a>', html))

    def test_status_notloggedin_refresh(self):
        self.set_get({
            'forcerefresh': '1',
        })
        StatusPage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/status/cloud.html -->', html))
        self.assertTrue(re.search('<a href="/users/login">Login</a>', html))

    def test_status_notloggedin(self):
        StatusPage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/status/cloud.html -->', html))
        self.assertTrue(re.search('<a href="/users/login">Login</a>', html))

    def test_status_loggedin_notAdmin(self):
        self.set_user('b@a.com')
        StatusPage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/status/cloud.html -->', html))
        self.assertTrue(re.search('<a href="/users/logout">Logout</a>', html))
        self.assertFalse(re.search('<span>CPU / Memory Usage', html))

    def test_status_loggedin_isAdmin(self):
        self.set_user('a@a.com')
        StatusPage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/status/cloud.html -->', html))
        self.assertTrue(re.search('<a href="/users/logout">Logout</a>', html))
        self.assertTrue(re.search('<span>CPU / Memory Usage', html))
    def test_newuser_page(self):
        NewUserPage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/users/new.html -->', html))

    def test_newuser_bademail(self):
        self.set_post({
            'user_email': 'c@a',
            'user_password': 'aaaaaa',
            'user_password_confirmation': 'aaaaaa',
        })
        NewUserPage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/users/new.html -->', html))
        self.assertTrue(re.search('Format must be foo@boo.goo.', html))

    def test_newuser_shortpasswd(self):
        self.set_post({
            'user_email': 'c@a.com',
            'user_password': 'aaa',
            'user_password_confirmation': 'aaa',
        })
        NewUserPage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/users/new.html -->', html))
        self.assertTrue(re.search('Password must be at least 6 characters long.', html))

    def test_newuser_passwdnomatch(self):
        self.set_post({
            'user_email': 'c@a.com',
            'user_password': 'aaaaa',
            'user_password_confirmation': 'aaabbb',
        })
        NewUserPage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/users/new.html -->', html))
        self.assertTrue(re.search('Passwords do not match.', html))

    def test_newuser_success(self):
        self.set_post({
            'user_email': 'c@a.com',
            'user_password': 'aaaaaa',
            'user_password_confirmation': 'aaaaaa',
        })
        page = NewUserPage(self.request, self.response)
        page.redirect = self.response.redirect
        page.post()
        self.assertTrue(AppDashboardHelper.DEV_APPSERVER_LOGIN_COOKIE in self.response.cookies)
        self.assertEqual(self.response.redirect_location, '/')
    def test_loginverify_page(self):
        self.set_get({
            'continue': 'http%3A//192.168.33.168%3A8080/_ah/login%3Fcontinue%3Dhttp%3A//192.168.33.168%3A8080/'
        })
        LoginVerify(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/users/confirm.html -->', html))
        self.assertTrue(re.search('http://192.168.33.168:8080/', html))

    def test_loginverify_submitcontinue(self):
        self.set_post({
            'commit': 'Yes',
            'continue': 'http://192.168.33.168:8080/'
        })
        page = LoginVerify(self.request, self.response)
        page.redirect = self.response.redirect
        page.post()
        self.assertEqual(self.response.redirect_location, 'http://192.168.33.168:8080/')

    def test_loginverify_submitnocontinue(self):
        self.set_post({
            'commit': 'No',
            'continue': 'http://192.168.33.168:8080/'
        })
        page = LoginVerify(self.request, self.response)
        page.redirect = self.response.redirect
        page.post()
        self.assertEqual(self.response.redirect_location, '/')

    def test_logout_page(self):
        self.set_user('a@a.com')
        page = LogoutPage(self.request, self.response)
        page.redirect = self.response.redirect
        page.get()
        self.assertEqual(self.response.redirect_location, '/')
        self.assertTrue(AppDashboardHelper.DEV_APPSERVER_LOGIN_COOKIE in self.response.deleted_cookies)

    def test_login_page(self):
        continue_url = 'http%3A//192.168.33.168%3A8080/_ah/login%3Fcontinue%3Dhttp%3A//192.168.33.168%3A8080/'
        self.set_get({
            'continue': continue_url
        })
        LoginPage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/users/login.html -->', html))
        self.assertTrue(re.search(continue_url, html))
    def test_login_success(self):
        self.set_post({
            'user_email': 'a@a.com',
            'user_password': 'aaaaaa'
        })
        page = LoginPage(self.request, self.response)
        page.redirect = self.response.redirect
        page.post()
        self.assertEqual(self.response.redirect_location, '/')
        self.assertTrue(AppDashboardHelper.DEV_APPSERVER_LOGIN_COOKIE in self.response.cookies)

    def test_login_success_redir(self):
        continue_url = 'http%3A//192.168.33.168%3A8080/_ah/login%3Fcontinue%3Dhttp%3A//192.168.33.168%3A8080/'
        self.set_post({
            'continue': continue_url,
            'user_email': 'a@a.com',
            'user_password': 'aaaaaa'
        })
        page = LoginPage(self.request, self.response)
        page.redirect = self.response.redirect
        page.post()
        self.assertTrue(re.search(r'/users/confirm\?continue=', self.response.redirect_location))
        self.assertTrue(AppDashboardHelper.DEV_APPSERVER_LOGIN_COOKIE in self.response.cookies)

    def test_login_fail(self):
        self.set_post({
            'user_email': 'a@a.com',
            'user_password': 'bbbbbb'
        })
        page = LoginPage(self.request, self.response)
        page.redirect = self.response.redirect
        page.post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/users/login.html -->', html))
        self.assertTrue(re.search('Incorrect username / password combination. Please try again', html))
    def test_authorize_page_notloggedin(self):
        AuthorizePage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/authorize/cloud.html -->', html))
        self.assertTrue(re.search('Only the cloud administrator can change permissions.', html))

    def test_authorize_page_loggedin_notadmin(self):
        self.set_user('b@a.com')
        AuthorizePage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/authorize/cloud.html -->', html))
        self.assertTrue(re.search('Only the cloud administrator can change permissions.', html))

    def test_authorize_page_loggedin_admin(self):
        self.set_user('a@a.com')
        AuthorizePage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/authorize/cloud.html -->', html))
        self.assertTrue(re.search('a@a.com-upload_app', html))
        self.assertTrue(re.search('b@a.com-upload_app', html))

    def test_authorize_submit_notloggedin(self):
        self.set_post({
            'user_permission_1': 'a@a.com',
            'CURRENT-a@a.com-upload_app': 'True',
            'a@a.com-upload_app': 'a@a.com-upload_app',  # this box is checked
            'user_permission_2': 'b@a.com',
            'CURRENT-b@a.com-upload_app': 'True',  # this box is unchecked
        })
        AuthorizePage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/authorize/cloud.html -->', html))
        self.assertTrue(re.search('Only the cloud administrator can change permissions.', html))

    def test_authorize_submit_notadmin(self):
        self.set_user('b@a.com')
        self.set_post({
            'user_permission_1': 'a@a.com',
            'CURRENT-a@a.com-upload_app': 'True',
            'a@a.com-upload_app': 'a@a.com-upload_app',  # this box is checked
            'user_permission_2': 'b@a.com',
            'CURRENT-b@a.com-upload_app': 'True',  # this box is unchecked
        })
        AuthorizePage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/authorize/cloud.html -->', html))
        self.assertTrue(re.search('Only the cloud administrator can change permissions.', html))

    def test_authorize_submit_remove(self):
        self.set_user('a@a.com')
        self.set_post({
            'user_permission_1': 'a@a.com',
            'CURRENT-a@a.com-upload_app': 'True',
            'a@a.com-upload_app': 'a@a.com-upload_app',  # this box is checked
            'user_permission_2': 'b@a.com',
            'CURRENT-b@a.com-upload_app': 'True',  # this box is unchecked
        })
        AuthorizePage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/authorize/cloud.html -->', html))
        self.assertTrue(re.search('Disabling upload_app for b@a.com', html))

    def test_authorize_submit_add(self):
        self.set_user('a@a.com')
        self.set_post({
            'user_permission_1': 'a@a.com',
            'CURRENT-a@a.com-upload_app': 'True',
            'a@a.com-upload_app': 'a@a.com-upload_app',  # this box is checked
            'user_permission_2': 'c@a.com',
            'CURRENT-c@a.com-upload_app': 'False',  # this box is unchecked
            'c@a.com-upload_app': 'c@a.com-upload_app',  # this box is checked
        })
        AuthorizePage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/authorize/cloud.html -->', html))
        self.assertTrue(re.search('Enabling upload_app for c@a.com', html))
    def test_upload_page_notloggedin(self):
        AppUploadPage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/apps/new.html -->', html))
        self.assertTrue(re.search('You do not have permission to upload application. Please contact your cloud administrator', html))

    def test_upload_page_loggedin(self):
        self.set_user('a@a.com')
        AppUploadPage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/apps/new.html -->', html))
        self.assertTrue(re.search('<input accept="tar.gz, tgz" id="app_file_data" name="app_file_data" size="30" type="file" />', html))

    def test_upload_submit_notloggedin(self):
        self.set_fileupload('app_file_data')
        AppUploadPage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/apps/new.html -->', html))
        self.assertTrue(re.search('You do not have permission to upload application. Please contact your cloud administrator', html))

    def test_upload_submit_loggedin(self):
        self.set_user('a@a.com')
        self.set_fileupload('app_file_data')
        AppUploadPage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/apps/new.html -->', html))
        self.assertTrue(re.search('Application uploaded successfully. Please wait for the application to start running.', html))

    def test_appdelete_page_nologgedin(self):
        AppDeletePage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/apps/delete.html -->', html))
        self.assertFalse(re.search('<option ', html))

    def test_appdelete_page_loggedin_twoapps(self):
        self.set_user('a@a.com')
        AppDeletePage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/apps/delete.html -->', html))
        self.assertTrue(re.search('<option value="app1">app1</option>', html))
        self.assertTrue(re.search('<option value="app2">app2</option>', html))

    def test_appdelete_page_loggedin_oneapp(self):
        self.set_user('b@a.com')
        AppDeletePage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/apps/delete.html -->', html))
        self.assertFalse(re.search('<option value="app1">app1</option>', html))
        self.assertTrue(re.search('<option value="app2">app2</option>', html))

    def test_appdelete_submit_notloggedin(self):
        self.set_post({
            'appname': 'app1'
        })
        AppDeletePage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/apps/delete.html -->', html))
        self.assertTrue(re.search('There are no running applications that you have permission to delete.', html))

    def test_appdelete_submit_notappadmin(self):
        self.set_user('b@a.com')
        self.set_post({
            'appname': 'app1'
        })
        AppDeletePage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/apps/delete.html -->', html))
        self.assertTrue(re.search('You do not have permission to delete the application: app1', html))

    def test_appdelete_submit_success(self):
        self.set_user('a@a.com')
        self.set_post({
            'appname': 'app1'
        })
        AppDeletePage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('<!-- FILE:templates/layouts/main.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/shared/navigation.html -->', html))
        self.assertTrue(re.search('<!-- FILE:templates/apps/delete.html -->', html))
        self.assertTrue(re.search('Application removed successfully. Please wait for your app to shut', html))

    def test_refresh_data_get(self):
        StatusRefreshPage(self.request, self.response).get()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('datastore updated', html))

    def test_refresh_data_post(self):
        StatusRefreshPage(self.request, self.response).post()
        html = self.response.out.getvalue()
        self.assertTrue(re.search('datastore updated', html))
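
# The fakeRequest/fakeResponse helpers above stub out webapp2-style request
# and response objects with flexmock. The same stubbing pattern can be
# sketched with nothing but the standard library (`SimpleNamespace` stands in
# for flexmock here, and `fake_response` is an illustrative name, not part of
# this test suite):

```python
import io
from types import SimpleNamespace

def fake_response():
    """Stand-in response object mirroring the attributes the tests inspect."""
    res = SimpleNamespace(headers={}, cookies={}, deleted_cookies={},
                          redirect_location=None, out=io.StringIO())
    res.set_cookie = lambda key, value='', **kwargs: res.cookies.__setitem__(key, value)
    res.delete_cookie = lambda key, **kwargs: res.deleted_cookies.__setitem__(key, 1)
    res.redirect = lambda path, response=None: setattr(res, 'redirect_location', path)
    return res

res = fake_response()
res.set_cookie('session', 'abc')
res.redirect('/')
print(res.cookies, res.redirect_location)  # -> {'session': 'abc'} /
```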

# --- File: Metric.py (repo: noooway/exj, license: MIT) ---

from Exercise import *
class Metric(Exercise):
    """Metric is just another sort of exercise."""

    def __init__(self, description_dict):
        super(Metric, self).__init__(description_dict)

    @classmethod
    def init_from_json(cls, dict_from_json):
        metric = cls(dict_from_json)
        return metric

# --- File: myproject/poco/models.py (repo: rg3915/geodjango, licenses: Xnet, X11) ---

# This is an auto-generated Django model module created by ogrinspect.
from django.contrib.gis.db import models
class Poco(models.Model):
    proprietar = models.CharField(max_length=254)
    orgao = models.CharField(max_length=254)
    data_perfu = models.DateField()
    profundida = models.FloatField()
    q_m3h = models.FloatField()
    equipament = models.CharField(max_length=254)
    geom = models.PointField(srid=4326)

# --- File: Dataset/Leetcode/test/5/179.py (repo: kkcookies99/UAST, license: MIT) ---

class Solution(object):
    def XXX(self, s):
        """
        :type s: str
        :rtype: str
        """
        s_length = len(s)
        mark = [[0 for i in range(s_length)] for _ in range(s_length)]
        max_length = 0
        max_sub_str = ""
        for j in range(0, s_length):
            for i in range(0, j + 1):
                if j - i <= 1:
                    if s[i] == s[j]:
                        mark[i][j] = 1
                        if max_length < j - i + 1:
                            max_sub_str = s[i:j+1]
                            max_length = j - i + 1
                else:
                    if s[i] == s[j] and mark[i+1][j-1]:
                        mark[i][j] = 1
                        if max_length < j - i + 1:
                            max_sub_str = s[i:j+1]
                            max_length = j - i + 1
        return max_sub_str
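
# The method above (its name anonymized to `XXX` by the dataset) is the classic
# O(n^2) dynamic-programming longest-palindromic-substring solution:
# mark[i][j] records whether s[i:j+1] is a palindrome. A standalone sketch of
# the same idea, under the assumed name `longest_palindrome`:

```python
def longest_palindrome(s):
    """Longest palindromic substring via the same O(n^2) DP table."""
    n = len(s)
    mark = [[0] * n for _ in range(n)]
    best = ""
    for j in range(n):
        for i in range(j + 1):
            # s[i:j+1] is a palindrome when its end characters match and
            # the inner substring is a palindrome (or has length <= 2)
            if s[i] == s[j] and (j - i <= 1 or mark[i + 1][j - 1]):
                mark[i][j] = 1
                if j - i + 1 > len(best):
                    best = s[i:j + 1]
    return best

print(longest_palindrome("babad"))  # -> bab
```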

# --- File: tests/test_key.py (repo: fortesp/PyBitcoinAddress, license: MIT) ---

import unittest
from unittest import TestCase
from bitcoinaddress import Key, Seed
class TestKey(TestCase):

    def testFromRandomSeed(self):
        # given
        key = Key.of(Seed())

        # then
        self.assertEqual(len(key.hex), 64)
        self.assertEqual(len(key.mainnet.wif), 51)
        self.assertEqual(len(key.mainnet.wifc), 52)

    def testFromHex_K(self):
        # given
        key = Key.of('669182eb2c3169e01cfc305034dc0b1df8328c274865e70d632c711ba62ec3d3')

        # then
        self.assertEqual(key.hex, '669182eb2c3169e01cfc305034dc0b1df8328c274865e70d632c711ba62ec3d3')
        self.assertEqual(key.mainnet.wif, '5JbTZ4zCTn1rwCfdkPWLddFgqzieGaG9Qjp3iRhf7R8gNroj4KM')
        self.assertEqual(key.mainnet.wifc, 'Kzf6CYbTbBgoQEVXCWLVef1psFkoVjor7mxeyr2TDKWto7iHfXHh')

    def testFromHex_L(self):
        # given
        key = Key.of('c2814c56793485f803430ef28ea93ba34e1dc74a74cead43407378350a958792')

        # then
        self.assertEqual(key.hex, 'c2814c56793485f803430ef28ea93ba34e1dc74a74cead43407378350a958792')
        self.assertEqual(key.mainnet.wif, '5KHwxCT8Nrb3MSiQRS5h6fqmAJWrXzi9min15xSzY1EuR3EgLHT')
        self.assertEqual(key.mainnet.wifc, 'L3joYdYKZTsFPEVkNqhhz2SDv4JmdoidiPPdNsjiwr4NLr31PkqK')

    def testFromWIF(self):
        # given
        key = Key.of('5JbTZ4zCTn1rwCfdkPWLddFgqzieGaG9Qjp3iRhf7R8gNroj4KM')

        # then
        self.assertEqual(key.hex, '669182eb2c3169e01cfc305034dc0b1df8328c274865e70d632c711ba62ec3d3')
        self.assertEqual(key.mainnet.wif, '5JbTZ4zCTn1rwCfdkPWLddFgqzieGaG9Qjp3iRhf7R8gNroj4KM')
        self.assertEqual(key.mainnet.wifc, 'Kzf6CYbTbBgoQEVXCWLVef1psFkoVjor7mxeyr2TDKWto7iHfXHh')


if __name__ == "__main__":
    unittest.main()
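
# The wif/wifc fixtures above follow the standard Wallet Import Format:
# prepend the 0x80 mainnet version byte (plus a trailing 0x01 flag for the
# compressed form), append a double-SHA256 checksum, and Base58-encode.
# A stdlib-only sketch of that encoding — an illustrative reimplementation,
# not the `bitcoinaddress` API:

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload):
    """Base58 encoding of payload + 4-byte double-SHA256 checksum."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58_ALPHABET[r] + out
    # each leading zero byte is encoded as a literal '1'
    for b in data:
        if b:
            break
        out = "1" + out
    return out

def wif(hex_key, compressed=False):
    """Hex private key -> mainnet Wallet Import Format string."""
    payload = b"\x80" + bytes.fromhex(hex_key)
    if compressed:
        payload += b"\x01"  # flag meaning: derive the compressed public key
    return base58check(payload)

print(wif("669182eb2c3169e01cfc305034dc0b1df8328c274865e70d632c711ba62ec3d3"))
# a 51-character Base58 string starting with '5'
```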

# --- File: tpi1/tree_search.py (repo: vascoalramos/ia, license: MIT) ---
# Module: tree_search
#
# This module provides a set of classes for automated
# problem solving through tree search:
#    SearchDomain  - problem domains
#    SearchProblem - concrete problems to be solved
#    SearchNode    - search tree nodes
#    SearchTree    - search tree with the necessary methods for searching
#
# (c) Luis Seabra Lopes
# Introducao a Inteligencia Artificial, 2012-2019,
# Inteligência Artificial, 2014-2019
from abc import ABC, abstractmethod
# Search domains.
# These allow computing the possible actions in each state, etc.
class SearchDomain(ABC):

    # constructor
    @abstractmethod
    def __init__(self):
        pass

    # list of the possible actions in a state
    @abstractmethod
    def actions(self, state):
        pass

    # result of an action in a state, i.e. the next state
    @abstractmethod
    def result(self, state, action):
        pass

    # cost of an action in a state
    @abstractmethod
    def cost(self, state, action):
        pass

    # estimated cost of getting from one state to another
    @abstractmethod
    def heuristic(self, state, goal):
        pass

    # test if the given "goal" is satisfied in "state"
    @abstractmethod
    def satisfies(self, state, goal):
        pass
# Concrete problems to be solved
# within a given domain
class SearchProblem:

    def __init__(self, domain, initial, goal):
        self.domain = domain
        self.initial = initial
        self.goal = goal

    def goal_test(self, state):
        return self.domain.satisfies(state, self.goal)
# Nodes of a search tree
class SearchNode:

    def __init__(self, state, parent):
        self.state = state
        self.parent = parent

    def __str__(self):
        return "no(" + str(self.state) + "," + str(self.parent) + ")"

    def __repr__(self):
        return str(self)
# Arvores de pesquisa
class SearchTree:
# construtor
def __init__(self,problem, strategy='breadth'):
self.problem = problem
self.root = SearchNode(problem.initial, None)
self.open_nodes = [self.root]
self.strategy = strategy
# obter o caminho (sequencia de estados) da raiz ate um no
def get_path(self,node):
if node.parent == None:
return [node.state]
path = self.get_path(node.parent)
path += [node.state]
return(path)
# procurar a solucao
def search(self):
while self.open_nodes != []:
node = self.open_nodes.pop(0)
if self.problem.goal_test(node.state):
return self.get_path(node)
lnewnodes = []
for a in self.problem.domain.actions(node.state):
newstate = self.problem.domain.result(node.state,a)
if newstate not in self.get_path(node):
newnode = SearchNode(newstate,node)
lnewnodes.append(newnode)
self.add_to_open(lnewnodes)
return None
# juntar novos nos a lista de nos abertos de acordo com a estrategia
def add_to_open(self,lnewnodes):
if self.strategy == 'breadth':
self.open_nodes.extend(lnewnodes)
elif self.strategy == 'depth':
self.open_nodes[:0] = lnewnodes
elif self.strategy == 'astar':
self.astar_add_to_open(lnewnodes)
elif self.strategy == 'uniform':
pass
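A minimal usage sketch of these classes on a toy graph. The search classes are restated compactly here so the snippet runs standalone, and `GraphDomain` with its edge data is an illustrative assumption, not part of the original code.

```python
class SearchNode:
    def __init__(self, state, parent):
        self.state = state
        self.parent = parent


class SearchProblem:
    def __init__(self, domain, initial, goal):
        self.domain = domain
        self.initial = initial
        self.goal = goal

    def goal_test(self, state):
        return self.domain.satisfies(state, self.goal)


class SearchTree:
    def __init__(self, problem, strategy='breadth'):
        self.problem = problem
        self.open_nodes = [SearchNode(problem.initial, None)]

    def get_path(self, node):
        if node.parent is None:
            return [node.state]
        return self.get_path(node.parent) + [node.state]

    def search(self):
        # breadth-first: FIFO open list
        while self.open_nodes:
            node = self.open_nodes.pop(0)
            if self.problem.goal_test(node.state):
                return self.get_path(node)
            for a in self.problem.domain.actions(node.state):
                newstate = self.problem.domain.result(node.state, a)
                if newstate not in self.get_path(node):
                    self.open_nodes.append(SearchNode(newstate, node))
        return None


class GraphDomain:
    """Duck-typed domain over a directed graph; an action is the neighbour moved to."""
    def __init__(self, edges):
        self.edges = edges

    def actions(self, state):
        return self.edges.get(state, [])

    def result(self, state, action):
        return action

    def cost(self, state, action):
        return 1

    def heuristic(self, state, goal):
        return 0

    def satisfies(self, state, goal):
        return state == goal


edges = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
path = SearchTree(SearchProblem(GraphDomain(edges), 'A', 'D')).search()
print(path)  # → ['A', 'B', 'D']
```

Breadth-first search finds the shallowest goal node first, so the path goes through `B` (the first neighbour expanded) rather than `C`.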

# parser/parser/database.py
import sqlite3

class Database:
    def __init__(self):
        self.connection = sqlite3.connect('./database.db')
        self.cursor = self.connection.cursor()

    def insert_item(self, item):
        self.cursor.execute(
            "insert into posts (title, file, link) values (?, ?, ?)",
            (item['title'], item['file'], item['link']))
        self.connection.commit()

    def close(self):
        self.connection.close()
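A hedged usage sketch of the `Database` insert path. The `posts` schema below is an assumption inferred from the insert statement, and an in-memory connection stands in for `./database.db`:

```python
import sqlite3

# In-memory stand-in for ./database.db; the posts schema is assumed
# from the (title, file, link) columns used in insert_item above.
connection = sqlite3.connect(':memory:')
cursor = connection.cursor()
cursor.execute("create table posts (title text, file text, link text)")

item = {'title': 'Example', 'file': 'example.pdf', 'link': 'https://example.org'}
cursor.execute(
    "insert into posts (title, file, link) values (?, ?, ?)",
    (item['title'], item['file'], item['link']))
connection.commit()

rows = cursor.execute("select title, file, link from posts").fetchall()
print(rows)  # → [('Example', 'example.pdf', 'https://example.org')]
connection.close()
```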

# alma/entities/installment.py
from enum import Enum
from . import Base


class InstallmentState(Enum):
    PENDING = "pending"
    PAID = "paid"
    INCIDENT = "incident"
    CLAIMED = "claimed"
    COVERED = "covered"


class Installment(Base):
    def __init__(self, data):
        state = data.pop("state", None)
        if state:
            try:
                self.state = InstallmentState(state)
            except ValueError:
                # Pass on unrecognized state values;
                # they will be accessible as-is in the Installment data
                pass

        super(Installment, self).__init__(data)
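A self-contained sketch of the state-coercion behaviour above. The stand-in `Base` and the sample payload are assumptions; the real `Base` comes from the package's relative import.

```python
from enum import Enum


class Base:
    """Minimal stand-in for the project's Base entity (assumed to keep raw data)."""
    def __init__(self, data):
        self.data = data


class InstallmentState(Enum):
    PENDING = "pending"
    PAID = "paid"


class Installment(Base):
    def __init__(self, data):
        state = data.pop("state", None)
        if state:
            try:
                self.state = InstallmentState(state)
            except ValueError:
                pass  # unknown states are left in the raw data
        super(Installment, self).__init__(data)


inst = Installment({"state": "paid", "purchase_amount": 10000})
print(inst.state)  # → InstallmentState.PAID
print(inst.data)   # → {'purchase_amount': 10000}

# An unrecognized state never becomes an attribute:
odd = Installment({"state": "refunded"})
print(hasattr(odd, "state"))  # → False
```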

# createtransitionsnetworkweighted.py
from wsd.database import MySQLDatabase
from graph_tool.all import *
from conf import *

__author__ = 'dimitrovdr'

db = MySQLDatabase(DATABASE_HOST, DATABASE_USER, DATABASE_PASSWORD, DATABASE_NAME)
db_work_view = db.get_work_view()

wikipedia = Graph()

for link in db_work_view.retrieve_all_internal_transitions_counts():
    # add one edge per recorded transition, so the count becomes edge multiplicity
    for i in range(int(link['counts'])):
        wikipedia.add_edge(link['from'], link['to'])
        # print 'from %s, to %s', link['from'], link['to']

# wikipedia.save("output/transitionsnetwork.xml.gz")

# filter out all nodes that have no edges
transitions_network = GraphView(wikipedia, vfilt=lambda v: v.out_degree() + v.in_degree() > 0)
transitions_network.save("output/transitionsnetworkweighted.xml.gz")

print("Stats for transitions network:")
print("number of nodes: %d" % transitions_network.num_vertices())
print("number of edges: %d" % transitions_network.num_edges())
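The inner loop above encodes each transition count as edge multiplicity by calling `add_edge` once per count. The same idea, sketched with a stdlib `Counter` on made-up rows shaped like the database query result:

```python
from collections import Counter

# Assumed row shape: (from, to, counts), mirroring the query result above.
transitions = [('A', 'B', 2), ('B', 'C', 1)]

edge_multiset = Counter()
for src, dst, count in transitions:
    # same effect as adding `count` parallel edges between src and dst
    edge_multiset[(src, dst)] += count

print(edge_multiset[('A', 'B')])  # → 2
print(edge_multiset[('B', 'C')])  # → 1
```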

# src/art_of_geom/geom/euclid/_rD/sphere.py
__all__ = 'SphereInRD', 'SphereRD', 'Sphere'
from .._abc._entity import _EuclideanGeometryEntityABC


class SphereInRD(_EuclideanGeometryEntityABC):
    pass


# aliases
Sphere = SphereRD = SphereInRD

# dmri_handler.py
"""This file represents dmri handling"""
from abc import ABCMeta, abstractmethod


class DMRIHandler(metaclass=ABCMeta):
    """This class is an abstract class for dti manipulations"""
    # note: declared as a Python 3 metaclass; the Python 2 style
    # `__metaclass__ = ABCMeta` attribute is silently ignored by Python 3

    def __init__(self, dmri_file, fbvals, fbvecs):
        self.dmri_file = dmri_file
        self.fbvals = fbvals
        self.fbvecs = fbvecs

    @abstractmethod
    def get_shape(self):
        """Returns number of voxels for each dmri dimension"""

    @abstractmethod
    def handle(self):
        """Abstract method which includes all processing and calculations"""
        pass

    @abstractmethod
    def get_eigen_vectors(self):
        """Abstract method which returns eigen vectors for each voxel of the DMRI"""
        pass

    @abstractmethod
    def get_eigen_values(self):
        """Abstract method which returns eigen values for each voxel of the DMRI"""
        pass
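A quick sketch of what `ABCMeta` plus `@abstractmethod` buy here: a subclass implementing every abstract method instantiates normally, while an incomplete one is rejected at construction time. The handler names and return value below are made up for illustration. (Under Python 3 the metaclass must be declared as `class ...(metaclass=ABCMeta)`; a bare `__metaclass__` attribute has no effect.)

```python
from abc import ABCMeta, abstractmethod


class Handler(metaclass=ABCMeta):
    @abstractmethod
    def get_shape(self):
        """Returns number of voxels for each dimension"""


class FixedShapeHandler(Handler):
    def get_shape(self):
        return (96, 96, 60)  # made-up voxel grid


class IncompleteHandler(Handler):
    pass  # get_shape deliberately not implemented


print(FixedShapeHandler().get_shape())  # → (96, 96, 60)
try:
    IncompleteHandler()
except TypeError as e:
    print('refused:', e)  # abstract method not implemented
```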

# usermgmt.py
from passlib.hash import ldap_salted_sha1
from passlib.hash import ldap_pbkdf2_sha256
from sshpubkeys import SSHKey
import datetime


class Usermgmt(object):
    def attrs(self):
        raise NotImplementedError

    def get_dict(self):
        return dict((key, value) for key, value in self.__dict__.items()
                    if not callable(value) and not key.startswith('__'))

    def __values(self):
        return (getattr(self, attr) for attr in self.attrs())

    def __eq__(self, other):
        # print('self:\t' + str(self.get_dict()))
        # print('other:\t' + str(other.get_dict()))
        return self.get_dict() == other.get_dict()

    def __str__(self):
        return "<Usermgmt {0}>".format(self.get_dict())

    def refresh(self):
        raise NotImplementedError

    def save(self):
        raise NotImplementedError


class Role(Usermgmt):
    def __init__(self, rolename=None, groups=[]):
        self.rolename = str(rolename)
        if groups:
            self.groups = set(sorted(groups))
        else:
            self.groups = set()

    def __eq__(self, other):
        return self.rolename == other.rolename and \
            self.groups == other.groups

    def __str__(self):
        return "<Role {}>".format(self.rolename)

    def attrs(self):
        return ['rolename', 'groups']


class Group(Usermgmt):
    def __init__(self, groupname=None, gid=None):
        self.groupname = str(groupname)
        self.gid = str(gid)

    def __cmp__(self, other):
        return self.gid == other.gid and self.groupname == other.groupname

    def __eq__(self, other):
        return self.__cmp__(other)

    def attrs(self):
        return ['groupname', 'gid']


class User(Usermgmt):
    def __init__(self, username=None, hash_ldap=None, password_mod_date=None,
                 email=None, uidNumber=None, public_keys=[], sshkey_mod_date=None,
                 groups=[], auth_code=None, auth_code_date=None):
        self.username = str(username)
        self.hash_ldap = str(hash_ldap)
        self.password_mod_date = str(password_mod_date)
        self.email = str(email)
        self.uidNumber = str(uidNumber)
        if public_keys:
            self.public_keys = set(public_keys)
        else:
            self.public_keys = set()
        self.sshkey_mod_date = str(sshkey_mod_date)
        if groups:
            self.groups = set(sorted(groups))
        else:
            self.groups = set()
        self.auth_code = str(auth_code)
        self.auth_code_date = str(auth_code_date)

    def __eq__(self, other):
        return self.username == other.username and \
            self.hash_ldap == other.hash_ldap and \
            self.password_mod_date == other.password_mod_date and \
            self.email == other.email and \
            self.uidNumber == other.uidNumber and \
            self.public_keys == other.public_keys and \
            self.groups == other.groups and \
            self.auth_code == other.auth_code and \
            self.auth_code_date == other.auth_code_date

    def __cmp__(self, other):
        for a in self.attrs():
            self_a = getattr(self, a)
            other_a = getattr(other, a)
            if type(self_a) == list and type(other_a) == list or type(self_a) == set and type(other_a) == set:
                self_a = sorted(list(self_a))
                other_a = sorted(list(other_a))
            # the cmp() builtin was removed in Python 3; emulate it
            c = (self_a > other_a) - (self_a < other_a)
            if c:
                return c
        return 0

    def attrs(self):
        # note: 'password' is listed here but never set in __init__
        return ['username', 'password', 'email', 'uidNumber', 'public_keys',
                'groups', 'hash_ldap', 'password_mod_date', 'sshkey_mod_date',
                'auth_code', 'auth_code_date']

    def set(self, attribute, value):
        setattr(self, attribute, value)
        self.save()
        return True

    def check_password(self, password):
        return (ldap_pbkdf2_sha256.identify(self.hash_ldap) and
                ldap_pbkdf2_sha256.verify(password, self.hash_ldap)) \
            or (ldap_salted_sha1.identify(self.hash_ldap) and
                ldap_salted_sha1.verify(password, self.hash_ldap))

    def set_password(self, password):
        try:
            self.hash_ldap = ldap_pbkdf2_sha256.hash(password)
            self.password_mod_date = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
            self.auth_code = None
            self.auth_code_date = None
            self.save()
            return True
        except Exception as e:
            print("Exception: %s" % e)
            return False

    def validate_key(self, key):
        try:
            ssh = SSHKey(key)
            ssh.parse()
            return ssh
        except:
            return False

    def get_ssh_key_comment(self, key):
        ssh = SSHKey(key)
        ssh.parse()
        return ssh.comment

    def get_ssh_key_hash(self, key):
        ssh = SSHKey(key)
        ssh.parse()
        return ssh.hash_md5().split('MD5:').pop()

    def check_key_exist(self, key):
        for test_key in self.public_keys:
            if self.get_ssh_key_hash(key) == self.get_ssh_key_hash(test_key):
                return True
        return False

    def add_ssh_key(self, key):
        try:
            ssh = self.validate_key(key)
            if self.check_key_exist(key):
                return False
            self.public_keys.add(ssh.keydata)
            self.sshkey_mod_date = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
            self.save()
            return True
        except Exception as e:
            print(e)
            return False

    def remove_ssh_key(self, key):
        self.public_keys.discard(key)
        self.save()
        return True

    def remove_ssh_key_by_hash(self, hash_md5):
        key = self.find_key_by_hash(hash_md5)
        self.public_keys.discard(key)
        self.save()
        return True

    def find_key_by_hash(self, hash_md5):
        for key in self.public_keys:
            test_hash = self.get_ssh_key_hash(key)
            if hash_md5 == test_hash:
                return key
        return None

    def is_admin(self):
        return self.is_group_member('internal.admins') or self.is_group_member('unix.admins')

    def is_group_member(self, group):
        if group in self.groups:
            return True
        else:
            return False
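The dict-based equality in `Usermgmt` makes subclasses compare by their public, non-callable attributes. A compact, self-contained restatement (this `Role` drops its own `__eq__` override purely so the base-class behaviour is exercised):

```python
class Usermgmt(object):
    def get_dict(self):
        # public, non-callable attributes only
        return dict((key, value) for key, value in self.__dict__.items()
                    if not callable(value) and not key.startswith('__'))

    def __eq__(self, other):
        return self.get_dict() == other.get_dict()


class Role(Usermgmt):
    def __init__(self, rolename=None, groups=()):
        self.rolename = str(rolename)
        self.groups = set(sorted(groups))


a = Role('admin', ['g1', 'g2'])
b = Role('admin', ['g2', 'g1'])  # same groups, different input order
print(a == b)  # → True
print(a.get_dict())
```

Because `groups` is normalized to a set, listing order does not affect equality.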

# release/stubs.min/System/Net/__init___parts/IAuthenticationModule.py
class IAuthenticationModule:
    """ Provides the base authentication interface for Web client authentication modules. """

    def Authenticate(self, challenge, request, credentials):
        """
        Authenticate(self: IAuthenticationModule, challenge: str, request: WebRequest, credentials: ICredentials) -> Authorization

        Returns an instance of the System.Net.Authorization class in response to an authentication
        challenge from a server.

        challenge: The authentication challenge sent by the server.
        request: The System.Net.WebRequest instance associated with the challenge.
        credentials: The credentials associated with the challenge.
        Returns: A System.Net.Authorization instance containing the authorization message for the request, or
        null if the challenge cannot be handled.
        """
        pass

    def PreAuthenticate(self, request, credentials):
        """
        PreAuthenticate(self: IAuthenticationModule, request: WebRequest, credentials: ICredentials) -> Authorization

        Returns an instance of the System.Net.Authorization class for an authentication request to a
        server.

        request: The System.Net.WebRequest instance associated with the authentication request.
        credentials: The credentials associated with the authentication request.
        Returns: A System.Net.Authorization instance containing the authorization message for the request.
        """
        pass

    def __init__(self, *args):
        """ x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
        pass

    AuthenticationType = property(
        lambda self: object(), lambda self, v: None, lambda self: None
    )
    """Gets the authentication type provided by this authentication module.

    Get: AuthenticationType(self: IAuthenticationModule) -> str
    """

    CanPreAuthenticate = property(
        lambda self: object(), lambda self, v: None, lambda self: None
    )
    """Gets a value indicating whether the authentication module supports preauthentication.

    Get: CanPreAuthenticate(self: IAuthenticationModule) -> bool
    """

# src/models/order.py
import uuid
from src.data_layer.db_connector import Base
from sqlalchemy import Column, Boolean, DateTime, func
from sqlalchemy.dialects.postgresql import UUID


class OrderModel(Base):
    """
    Define Order database table ORM model
    """
    __tablename__ = "order"

    # Register columns
    id = Column(UUID(as_uuid=True), default=uuid.uuid4, unique=True, primary_key=True, index=True)
    user_id = Column(UUID(as_uuid=True), index=True)
    from_hospital_id = Column(UUID(as_uuid=True), index=True)
    to_hospital_id = Column(UUID(as_uuid=True), index=True)
    item_id = Column(UUID(as_uuid=True), index=True)
    emergency = Column(Boolean, default=False)
    created_at = Column(DateTime, default=func.now())
    approved = Column(Boolean, default=False)
    processed = Column(Boolean, default=False)

# pysnmp/ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB.py
#
# PySNMP MIB module ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 18:50:07 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
ObjectIdentifier, OctetString, Integer = mibBuilder.importSymbols("ASN1", "ObjectIdentifier", "OctetString", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsIntersection, ValueSizeConstraint, ConstraintsUnion, SingleValueConstraint, ValueRangeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsIntersection", "ValueSizeConstraint", "ConstraintsUnion", "SingleValueConstraint", "ValueRangeConstraint")
etsysModules, = mibBuilder.importSymbols("ENTERASYS-MIB-NAMES", "etsysModules")
InetAddress, InetAddressType = mibBuilder.importSymbols("INET-ADDRESS-MIB", "InetAddress", "InetAddressType")
ObjectGroup, ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ObjectGroup", "ModuleCompliance", "NotificationGroup")
IpAddress, Counter32, iso, MibScalar, MibTable, MibTableRow, MibTableColumn, Integer32, Bits, ModuleIdentity, TimeTicks, ObjectIdentity, MibIdentifier, Counter64, Unsigned32, NotificationType, Gauge32 = mibBuilder.importSymbols("SNMPv2-SMI", "IpAddress", "Counter32", "iso", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Integer32", "Bits", "ModuleIdentity", "TimeTicks", "ObjectIdentity", "MibIdentifier", "Counter64", "Unsigned32", "NotificationType", "Gauge32")
RowStatus, TextualConvention, DisplayString, TruthValue = mibBuilder.importSymbols("SNMPv2-TC", "RowStatus", "TextualConvention", "DisplayString", "TruthValue")
etsysRadiusAcctClientMIB = ModuleIdentity((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27))
etsysRadiusAcctClientMIB.setRevisions(('2009-08-07 15:48', '2004-11-12 15:23', '2004-09-09 14:37', '2004-08-30 15:55', '2004-08-25 15:03', '2002-09-13 19:30',))
if mibBuilder.loadTexts: etsysRadiusAcctClientMIB.setLastUpdated('200908071548Z')
if mibBuilder.loadTexts: etsysRadiusAcctClientMIB.setOrganization('Enterasys Networks')
etsysRadiusAcctClientMIBObjects = MibIdentifier((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1))
etsysRadiusAcctClientEnable = MibScalar((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('disable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: etsysRadiusAcctClientEnable.setStatus('current')
etsysRadiusAcctClientUpdateInterval = MibScalar((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 2), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2147483647)).clone(1800)).setUnits('seconds').setMaxAccess("readwrite")
if mibBuilder.loadTexts: etsysRadiusAcctClientUpdateInterval.setStatus('current')
etsysRadiusAcctClientIntervalMinimum = MibScalar((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 3), Integer32().subtype(subtypeSpec=ValueRangeConstraint(60, 2147483647)).clone(600)).setUnits('seconds').setMaxAccess("readwrite")
if mibBuilder.loadTexts: etsysRadiusAcctClientIntervalMinimum.setStatus('current')
etsysRadiusAcctClientServerTable = MibTable((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4), )
if mibBuilder.loadTexts: etsysRadiusAcctClientServerTable.setStatus('current')
etsysRadiusAcctClientServerEntry = MibTableRow((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1), ).setIndexNames((0, "ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerIndex"))
if mibBuilder.loadTexts: etsysRadiusAcctClientServerEntry.setStatus('current')
etsysRadiusAcctClientServerIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647)))
if mibBuilder.loadTexts: etsysRadiusAcctClientServerIndex.setStatus('current')
etsysRadiusAcctClientServerAddressType = MibTableColumn((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1, 2), InetAddressType().clone('ipv4')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: etsysRadiusAcctClientServerAddressType.setStatus('current')
etsysRadiusAcctClientServerAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1, 3), InetAddress().subtype(subtypeSpec=ValueSizeConstraint(1, 64))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: etsysRadiusAcctClientServerAddress.setStatus('current')
etsysRadiusAcctClientServerPortNumber = MibTableColumn((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535)).clone(1813)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: etsysRadiusAcctClientServerPortNumber.setStatus('current')
etsysRadiusAcctClientServerSecret = MibTableColumn((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1, 5), OctetString().subtype(subtypeSpec=ValueSizeConstraint(0, 255))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: etsysRadiusAcctClientServerSecret.setStatus('current')
etsysRadiusAcctClientServerSecretEntered = MibTableColumn((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1, 6), TruthValue()).setMaxAccess("readonly")
if mibBuilder.loadTexts: etsysRadiusAcctClientServerSecretEntered.setStatus('current')
etsysRadiusAcctClientServerRetryTimeout = MibTableColumn((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1, 7), Integer32().subtype(subtypeSpec=ValueRangeConstraint(2, 10)).clone(5)).setUnits('seconds').setMaxAccess("readcreate")
if mibBuilder.loadTexts: etsysRadiusAcctClientServerRetryTimeout.setStatus('current')
etsysRadiusAcctClientServerRetries = MibTableColumn((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1, 8), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 20)).clone(2)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: etsysRadiusAcctClientServerRetries.setStatus('current')
etsysRadiusAcctClientServerClearTime = MibTableColumn((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1, 9), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: etsysRadiusAcctClientServerClearTime.setStatus('deprecated')
etsysRadiusAcctClientServerStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1, 10), RowStatus()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: etsysRadiusAcctClientServerStatus.setStatus('current')
etsysRadiusAcctClientServerUpdateInterval = MibTableColumn((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1, 11), Integer32().subtype(subtypeSpec=ConstraintsUnion(ValueRangeConstraint(-1, -1), ValueRangeConstraint(0, 2147483647), )).clone(-1)).setUnits('seconds').setMaxAccess("readcreate")
if mibBuilder.loadTexts: etsysRadiusAcctClientServerUpdateInterval.setStatus('current')
etsysRadiusAcctClientServerIntervalMinimum = MibTableColumn((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 1, 4, 1, 12), Integer32().subtype(subtypeSpec=ConstraintsUnion(ValueRangeConstraint(-1, -1), ValueRangeConstraint(60, 2147483647), )).clone(-1)).setUnits('seconds').setMaxAccess("readcreate")
if mibBuilder.loadTexts: etsysRadiusAcctClientServerIntervalMinimum.setStatus('current')
etsysRadiusAcctClientMIBConformance = MibIdentifier((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 2))
etsysRadiusAcctClientMIBCompliances = MibIdentifier((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 2, 1))
etsysRadiusAcctClientMIBGroups = MibIdentifier((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 2, 2))
etsysRadiusAcctClientMIBGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 2, 2, 1)).setObjects(("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientEnable"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientUpdateInterval"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientIntervalMinimum"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerAddressType"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerAddress"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerPortNumber"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerSecret"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerSecretEntered"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerRetryTimeout"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerRetries"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerClearTime"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerStatus"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    etsysRadiusAcctClientMIBGroup = etsysRadiusAcctClientMIBGroup.setStatus('deprecated')
etsysRadiusAcctClientMIBGroupV2 = ObjectGroup((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 2, 2, 2)).setObjects(("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientEnable"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientUpdateInterval"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientIntervalMinimum"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerAddressType"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerAddress"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerPortNumber"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerSecret"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerSecretEntered"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerRetryTimeout"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerRetries"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerStatus"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    etsysRadiusAcctClientMIBGroupV2 = etsysRadiusAcctClientMIBGroupV2.setStatus('deprecated')
etsysRadiusAcctClientMIBGroupV3 = ObjectGroup((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 2, 2, 3)).setObjects(("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientEnable"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientUpdateInterval"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientIntervalMinimum"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerAddressType"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerAddress"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerPortNumber"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerSecret"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerSecretEntered"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerRetryTimeout"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerRetries"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerStatus"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerIntervalMinimum"), ("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientServerUpdateInterval"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    etsysRadiusAcctClientMIBGroupV3 = etsysRadiusAcctClientMIBGroupV3.setStatus('current')
etsysRadiusAcctClientMIBCompliance = ModuleCompliance((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 2, 1, 2)).setObjects(("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientMIBGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    etsysRadiusAcctClientMIBCompliance = etsysRadiusAcctClientMIBCompliance.setStatus('deprecated')
etsysRadiusAcctClientMIBComplianceV2 = ModuleCompliance((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 2, 1, 3)).setObjects(("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientMIBGroupV2"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    etsysRadiusAcctClientMIBComplianceV2 = etsysRadiusAcctClientMIBComplianceV2.setStatus('deprecated')
etsysRadiusAcctClientMIBComplianceV3 = ModuleCompliance((1, 3, 6, 1, 4, 1, 5624, 1, 2, 27, 2, 1, 4)).setObjects(("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", "etsysRadiusAcctClientMIBGroupV3"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    etsysRadiusAcctClientMIBComplianceV3 = etsysRadiusAcctClientMIBComplianceV3.setStatus('current')
mibBuilder.exportSymbols("ENTERASYS-RADIUS-ACCT-CLIENT-EXT-MIB", etsysRadiusAcctClientMIBComplianceV2=etsysRadiusAcctClientMIBComplianceV2, etsysRadiusAcctClientServerClearTime=etsysRadiusAcctClientServerClearTime, etsysRadiusAcctClientServerPortNumber=etsysRadiusAcctClientServerPortNumber, etsysRadiusAcctClientServerAddressType=etsysRadiusAcctClientServerAddressType, etsysRadiusAcctClientIntervalMinimum=etsysRadiusAcctClientIntervalMinimum, etsysRadiusAcctClientServerAddress=etsysRadiusAcctClientServerAddress, etsysRadiusAcctClientServerSecret=etsysRadiusAcctClientServerSecret, etsysRadiusAcctClientMIBCompliances=etsysRadiusAcctClientMIBCompliances, etsysRadiusAcctClientServerIndex=etsysRadiusAcctClientServerIndex, etsysRadiusAcctClientServerRetryTimeout=etsysRadiusAcctClientServerRetryTimeout, etsysRadiusAcctClientMIB=etsysRadiusAcctClientMIB, etsysRadiusAcctClientServerUpdateInterval=etsysRadiusAcctClientServerUpdateInterval, PYSNMP_MODULE_ID=etsysRadiusAcctClientMIB, etsysRadiusAcctClientMIBObjects=etsysRadiusAcctClientMIBObjects, etsysRadiusAcctClientMIBGroupV2=etsysRadiusAcctClientMIBGroupV2, etsysRadiusAcctClientMIBGroup=etsysRadiusAcctClientMIBGroup, etsysRadiusAcctClientServerSecretEntered=etsysRadiusAcctClientServerSecretEntered, etsysRadiusAcctClientServerStatus=etsysRadiusAcctClientServerStatus, etsysRadiusAcctClientServerTable=etsysRadiusAcctClientServerTable, etsysRadiusAcctClientMIBCompliance=etsysRadiusAcctClientMIBCompliance, etsysRadiusAcctClientMIBGroupV3=etsysRadiusAcctClientMIBGroupV3, etsysRadiusAcctClientMIBGroups=etsysRadiusAcctClientMIBGroups, etsysRadiusAcctClientEnable=etsysRadiusAcctClientEnable, etsysRadiusAcctClientServerRetries=etsysRadiusAcctClientServerRetries, etsysRadiusAcctClientUpdateInterval=etsysRadiusAcctClientUpdateInterval, etsysRadiusAcctClientServerEntry=etsysRadiusAcctClientServerEntry, etsysRadiusAcctClientServerIntervalMinimum=etsysRadiusAcctClientServerIntervalMinimum, 
etsysRadiusAcctClientMIBConformance=etsysRadiusAcctClientMIBConformance, etsysRadiusAcctClientMIBComplianceV3=etsysRadiusAcctClientMIBComplianceV3)
| 177.5625 | 2,097 | 0.803168 | 1,329 | 14,205 | 8.583145 | 0.13845 | 0.007715 | 0.071623 | 0.09424 | 0.454545 | 0.391952 | 0.365477 | 0.346805 | 0.324362 | 0.324362 | 0 | 0.061558 | 0.061105 | 14,205 | 79 | 2,098 | 179.810127 | 0.793732 | 0.025766 | 0 | 0.086957 | 0 | 0 | 0.286189 | 0.211135 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.115942 | 0 | 0.115942 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fbce834a735ad51e7b51a5da9375f7ec19153a43 | 682 | py | Python | analytics/BeMoBI_PyAnalytics/BeMoBI_PyAnalytics.py | xfleckx/BeMoBI_Tools | 90a6066648a3249e6ea22165609aaaa763e02f05 | [
"MIT"
] | null | null | null | analytics/BeMoBI_PyAnalytics/BeMoBI_PyAnalytics.py | xfleckx/BeMoBI_Tools | 90a6066648a3249e6ea22165609aaaa763e02f05 | [
"MIT"
] | null | null | null | analytics/BeMoBI_PyAnalytics/BeMoBI_PyAnalytics.py | xfleckx/BeMoBI_Tools | 90a6066648a3249e6ea22165609aaaa763e02f05 | [
"MIT"
] | null | null | null | import os
import pandas as pd
import seaborn as sns
dataDir = '..\\Test_Data\\'
pilotMarkerDataFile = 'Pilot.csv'
df = pd.read_csv(os.path.join(dataDir, pilotMarkerDataFile), sep='\t', engine='python')  # os.path.join avoids the doubled '\\' from manual concatenation
repr(df.head())
# TODO times per position
# plotting a heatmap http://stanford.edu/~mwaskom/software/seaborn/examples/many_pairwise_correlations.html
## Generate a custom diverging colormap
#cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
#sns.heatmap(timesAtPositions, mask=mask, cmap=cmap, vmax=.3,
# square=True, xticklabels=5, yticklabels=5,
# linewidths=.5, cbar_kws={"shrink": .5}, ax=ax) | 31 | 107 | 0.717009 | 94 | 682 | 5.12766 | 0.712766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017182 | 0.146628 | 682 | 22 | 108 | 31 | 0.810997 | 0.652493 | 0 | 0 | 0 | 0 | 0.149123 | 0 | 0 | 0 | 0 | 0.045455 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
fbde4302833ec88b5839af1a2bdd4789b1c8ae09 | 478 | py | Python | main.py | fossabot/superstructure | f4ab5cac269fb3dedfbd3a54c441af23edf3840b | [
"MIT"
] | null | null | null | main.py | fossabot/superstructure | f4ab5cac269fb3dedfbd3a54c441af23edf3840b | [
"MIT"
] | null | null | null | main.py | fossabot/superstructure | f4ab5cac269fb3dedfbd3a54c441af23edf3840b | [
"MIT"
] | null | null | null | from redisworks import Root
from superstructure.geist import Bewusstsein
# TODO find way to pickle objects
def main():
    root = Root()  # redisworks Root, backed by redis.Redis('localhost'); the class must be instantiated
try:
weltgeist = root.weltgeist
except BaseException:
print("Creating new weltgeist")
weltgeist = Bewusstsein(name="Weltgeist")
root.weltgeist = weltgeist
# print(weltgeist)
print(root.weltgeist)
root.weltgeist.spill()
if __name__ == "__main__":
main()
| 20.782609 | 49 | 0.669456 | 51 | 478 | 6.117647 | 0.529412 | 0.166667 | 0.211538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.236402 | 478 | 22 | 50 | 21.727273 | 0.854795 | 0.15272 | 0 | 0 | 0 | 0 | 0.097257 | 0 | 0 | 0 | 0 | 0.045455 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.214286 | 0.142857 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
837c19eef088865d2dc6aafc140effefa8fcbc78 | 688 | py | Python | watcher.py | Craeckie/Container-Bot | a3bc55fe9c560d2f4a92403a4d661ddde9f7132d | [
"MIT"
] | null | null | null | watcher.py | Craeckie/Container-Bot | a3bc55fe9c560d2f4a92403a4d661ddde9f7132d | [
"MIT"
] | null | null | null | watcher.py | Craeckie/Container-Bot | a3bc55fe9c560d2f4a92403a4d661ddde9f7132d | [
"MIT"
] | null | null | null | import docker
class Watcher:
def __init__(self, socket_path='/var/run/docker.sock'):
self.client = docker.DockerClient(base_url='unix://' + socket_path)
def container_list(self):
        return self.client.containers.list()  # containers live on the DockerClient
def listen_events(self, event_callback, *args, **kwargs):
for event in self.client.events(decode=True):
try:
if 'status' in event and not event['status'].startswith('exec_') and event['status'] != 'pull':
msg = event['Actor']['Attributes']['name'] + ": " + event['status']
event_callback(event, msg, *args, **kwargs)
except Exception as e:
pass | 40.470588 | 111 | 0.584302 | 79 | 688 | 4.936709 | 0.607595 | 0.084615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.273256 | 688 | 17 | 112 | 40.470588 | 0.78 | 0 | 0 | 0 | 0 | 0 | 0.117562 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0.071429 | 0.071429 | 0.071429 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
837e3d2d96c78455ca68897af18dd213d2e999f1 | 515 | py | Python | ois_api_client/v3_0/deserialization/deserialize_detailed_reason.py | peterkulik/ois_api_client | 51dabcc9f920f89982c4419bb058f5a88193cee0 | [
"MIT"
] | 7 | 2020-10-22T08:15:29.000Z | 2022-01-27T07:59:39.000Z | ois_api_client/v3_0/deserialization/deserialize_detailed_reason.py | peterkulik/ois_api_client | 51dabcc9f920f89982c4419bb058f5a88193cee0 | [
"MIT"
] | null | null | null | ois_api_client/v3_0/deserialization/deserialize_detailed_reason.py | peterkulik/ois_api_client | 51dabcc9f920f89982c4419bb058f5a88193cee0 | [
"MIT"
] | null | null | null | from typing import Optional
import xml.etree.ElementTree as ET
from ...xml.XmlReader import XmlReader as XR
from ..namespaces import COMMON
from ..namespaces import DATA
from ..dto.DetailedReason import DetailedReason
def deserialize_detailed_reason(element: ET.Element) -> Optional[DetailedReason]:
if element is None:
return None
result = DetailedReason(
case=XR.get_child_text(element, 'case', DATA),
reason=XR.get_child_text(element, 'reason', DATA),
)
return result
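The deserializer above leans on `XR.get_child_text`, which presumably resolves a namespaced child element and returns its text. A minimal stand-alone equivalent using only `xml.etree` (the namespace URI and sample XML below are made up for illustration, not the real OIS schema):

```python
import xml.etree.ElementTree as ET

DATA_NS = "http://example.com/data"  # placeholder namespace URI, not the real one

def get_child_text(element, tag, namespace):
    # ElementTree addresses namespaced children as "{uri}tag"
    child = element.find("{%s}%s" % (namespace, tag))
    return child.text if child is not None else None

xml = ('<detailedReason xmlns="%s">'
       '<case>WRONG_LINE</case><reason>bad total</reason>'
       '</detailedReason>' % DATA_NS)
root = ET.fromstring(xml)
print(get_child_text(root, "case", DATA_NS))   # WRONG_LINE
```

Returning `None` for a missing child mirrors how the function above tolerates absent optional fields.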
| 27.105263 | 81 | 0.733981 | 65 | 515 | 5.723077 | 0.461538 | 0.075269 | 0.107527 | 0.075269 | 0.112903 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180583 | 515 | 18 | 82 | 28.611111 | 0.881517 | 0 | 0 | 0 | 0 | 0 | 0.019417 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.428571 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
837e4bab46beccba993bb55df20966fe17f92d9a | 823 | py | Python | mldash/plugins/viewer/handler.py | K-A-R-T/JacMLDash | 5cb5d66da32ac55002319301f82d1db9091f0c56 | [
"MIT"
] | 5 | 2019-09-07T09:35:19.000Z | 2022-03-09T14:33:44.000Z | mldash/plugins/viewer/handler.py | K-A-R-T/JacMLDash | 5cb5d66da32ac55002319301f82d1db9091f0c56 | [
"MIT"
] | null | null | null | mldash/plugins/viewer/handler.py | K-A-R-T/JacMLDash | 5cb5d66da32ac55002319301f82d1db9091f0c56 | [
"MIT"
] | 1 | 2021-03-17T04:17:06.000Z | 2021-03-17T04:17:06.000Z | #! /usr/bin/env python3
# -*- coding: utf-8 -*-
# File : handler.py
# Author : Jiayuan Mao
# Email : maojiayuan@gmail.com
# Date : 09/08/2019
#
# This file is part of JacMLDash.
# Distributed under terms of the MIT license.
import os
import os.path as osp
import mimetypes
from tornado.web import StaticFileHandler
from jacweb.web import route, JacRequestHandler
@route(r'/viewer/(.*)')
class FileViewerHandler(StaticFileHandler):
def initialize(self, path=None, default_filename='index.html'):
if path is None:
path = os.getcwd()
super().initialize(path, default_filename)
def get_content_type(self) -> str:
assert self.absolute_path is not None
if self.absolute_path.endswith('.log'):
return 'text/plain'
return super().get_content_type()
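The override above forces `.log` files to render inline as plain text instead of whatever the base handler would guess. The same decision can be sketched with only the stdlib `mimetypes` module (`get_type` is an illustrative helper, not part of Tornado):

```python
import mimetypes

def get_type(path):
    if path.endswith(".log"):
        return "text/plain"          # force logs to display inline in the browser
    guessed, _ = mimetypes.guess_type(path)
    return guessed or "application/octet-stream"

print(get_type("run/output.log"))    # text/plain
print(get_type("index.html"))        # text/html
```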
| 26.548387 | 67 | 0.679222 | 108 | 823 | 5.101852 | 0.666667 | 0.029038 | 0.050817 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015337 | 0.207776 | 823 | 30 | 68 | 27.433333 | 0.829755 | 0.256379 | 0 | 0 | 0 | 0 | 0.059801 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 1 | 0.125 | false | 0 | 0.3125 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
83827304f7a0429aa38c4a9f8e6141696a9b691b | 94 | py | Python | src/mathematical/bitwise_operations.py | Venkat0273/Python-Notes | bb3901315bd688acf61a97dc9f45353376f8ff39 | [
"MIT"
] | null | null | null | src/mathematical/bitwise_operations.py | Venkat0273/Python-Notes | bb3901315bd688acf61a97dc9f45353376f8ff39 | [
"MIT"
] | null | null | null | src/mathematical/bitwise_operations.py | Venkat0273/Python-Notes | bb3901315bd688acf61a97dc9f45353376f8ff39 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
__author__ = "venkat"
__author_email__ = "venkatram0273@gmail.com"
| 13.428571 | 44 | 0.670213 | 10 | 94 | 5.4 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0.148936 | 94 | 6 | 45 | 15.666667 | 0.6125 | 0.223404 | 0 | 0 | 0 | 0 | 0.42029 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
838cf8818d2fa2350abd45b9478277fc6154989b | 228 | py | Python | notification/urls.py | opendream/asip | 20583aca6393102d425401d55ea32ac6b78be048 | [
"MIT"
] | null | null | null | notification/urls.py | opendream/asip | 20583aca6393102d425401d55ea32ac6b78be048 | [
"MIT"
] | 8 | 2020-03-24T17:11:49.000Z | 2022-01-13T01:18:11.000Z | notification/urls.py | opendream/asip | 20583aca6393102d425401d55ea32ac6b78be048 | [
"MIT"
] | null | null | null | from django.conf.urls import url, patterns
urlpatterns = patterns('notification.views',
url(r'^notification/$', 'notification_list', name='notification_list'),
url(r'^request/$', 'request_list', name='request_list'),
) | 32.571429 | 75 | 0.719298 | 27 | 228 | 5.925926 | 0.518519 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 228 | 7 | 76 | 32.571429 | 0.784314 | 0 | 0 | 0 | 0 | 0 | 0.441048 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
838ea63d5bff1bf8f011f7fa53191bb877f6eb41 | 1,706 | py | Python | aix360/algorithms/protodash/PDASH.py | PurplePean/AIX360 | 4037d6347c40405f342b07da5d341fcd21081cfa | [
"Apache-2.0"
] | 1 | 2019-10-21T20:07:44.000Z | 2019-10-21T20:07:44.000Z | aix360/algorithms/protodash/PDASH.py | PurplePean/AIX360 | 4037d6347c40405f342b07da5d341fcd21081cfa | [
"Apache-2.0"
] | 12 | 2020-01-28T23:06:13.000Z | 2022-02-10T00:23:14.000Z | aix360/algorithms/protodash/PDASH.py | PurplePean/AIX360 | 4037d6347c40405f342b07da5d341fcd21081cfa | [
"Apache-2.0"
] | 1 | 2020-04-20T08:15:36.000Z | 2020-04-20T08:15:36.000Z | from __future__ import print_function
from aix360.algorithms.die import DIExplainer
from .PDASH_utils import HeuristicSetSelection
class ProtodashExplainer(DIExplainer):
"""
ProtodashExplainer provides exemplar-based explanations for summarizing datasets as well
as explaining predictions made by an AI model. It employs a fast gradient based algorithm
to find prototypes along with their (non-negative) importance weights. The algorithm minimizes the maximum
mean discrepancy metric and has constant factor approximation guarantees for this weakly submodular function. [#]_.
References:
.. [#] `Karthik S. Gurumoorthy, Amit Dhurandhar, Guillermo Cecchi,
"ProtoDash: Fast Interpretable Prototype Selection"
<https://arxiv.org/abs/1707.01212>`_
"""
def __init__(self):
"""
Constructor method, initializes the explainer
"""
super(ProtodashExplainer, self).__init__()
def set_params(self, *argv, **kwargs):
"""
Set parameters for the explainer.
"""
pass
def explain(self, X, Y, m, kernelType='other', sigma=2):
"""
Return prototypes for data X, Y.
Args:
X (double 2d array): Dataset to select prototypical explanations from.
Y (double 2d array): Dataset you want to explain.
m (int): Number of prototypes
kernelType (str): Type of kernel (viz. 'Gaussian', / 'other')
sigma (double): width of kernel
Returns:
m selected prototypes from X and their (unnormalized) importance weights
"""
        return HeuristicSetSelection(X, Y, m, kernelType, sigma)
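The docstring above describes a fast greedy selection of prototypes. As a stdlib-only toy of that greedy flavour (my own 1-D illustration; this is neither the ProtoDash objective nor the aix360 API, and it computes no importance weights), a picker that repeatedly chooses the point minimising total distance-to-nearest-prototype looks like:

```python
def greedy_prototypes(points, m):
    """Greedily pick m prototypes minimising the sum of each
    point's distance to its nearest chosen prototype (O(n^2 * m))."""
    chosen = []
    for _ in range(m):
        best, best_cost = None, None
        for cand in points:
            if cand in chosen:
                continue
            cost = sum(min(abs(p - q) for q in chosen + [cand]) for p in points)
            if best_cost is None or cost < best_cost:
                best, best_cost = cand, cost
        chosen.append(best)
    return chosen

# two clusters plus an outlier: the picks land one per cluster
print(greedy_prototypes([0.0, 0.1, 0.2, 5.0, 5.1, 9.9], 2))
```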
| 35.541667 | 119 | 0.660023 | 188 | 1,706 | 5.898936 | 0.659574 | 0.00541 | 0.00541 | 0.023445 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011886 | 0.260258 | 1,706 | 47 | 120 | 36.297872 | 0.866878 | 0.621336 | 0 | 0 | 0 | 0 | 0.010661 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0.1 | 0.3 | 0 | 0.7 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
83922c5d46dd9cf938810b0116e304b37e96df4a | 984 | py | Python | test/rpcframework/ClientTest.py | crylearner/PythonRpcFramework | ccae8dfe82ec5b957296e288ae21b9e292cc52a0 | [
"Apache-2.0"
] | 1 | 2017-11-16T09:58:06.000Z | 2017-11-16T09:58:06.000Z | test/rpcframework/ClientTest.py | crylearner/PythonRpcFramework | ccae8dfe82ec5b957296e288ae21b9e292cc52a0 | [
"Apache-2.0"
] | null | null | null | test/rpcframework/ClientTest.py | crylearner/PythonRpcFramework | ccae8dfe82ec5b957296e288ae21b9e292cc52a0 | [
"Apache-2.0"
] | null | null | null | '''
Created on 2015-12-22
@author: sunshyran
'''
import time
import unittest
from framework.client.Client import AbstractClient
from framework.driver.Invoker import Invoker
from framework.driver.InvokerHandler import InvokerHandler
from test.rpcframework.FakeChannel import FakeChannel
class FakeClient(AbstractClient):
def __init__(self):
invokerhandler = InvokerHandler(FakeChannel())
super().__init__(invokerhandler)
def onResponse(self, rsp):
print(rsp)
class ClientTest(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def testName(self):
client = FakeClient()
client.start()
client.asyncrequest(Invoker(1, 'test message'))
time.sleep(1)
client.stop()
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main() | 19.68 | 59 | 0.614837 | 92 | 984 | 6.402174 | 0.478261 | 0.066214 | 0.064516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014368 | 0.292683 | 984 | 50 | 60 | 19.68 | 0.831897 | 0.087398 | 0 | 0.08 | 0 | 0 | 0.023753 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.08 | 0.24 | 0 | 0.52 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
83a501cd315e08dc04e8e6ecd6ba453c02b942ba | 2,925 | py | Python | mvsgst/mvsgst/spiders/vcmv.py | miemiekurisu/mvsgst | a8efe763c988cae6f3298074d5d03fccef8fdbce | [
"MIT"
] | null | null | null | mvsgst/mvsgst/spiders/vcmv.py | miemiekurisu/mvsgst | a8efe763c988cae6f3298074d5d03fccef8fdbce | [
"MIT"
] | null | null | null | mvsgst/mvsgst/spiders/vcmv.py | miemiekurisu/mvsgst | a8efe763c988cae6f3298074d5d03fccef8fdbce | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import scrapy
import sys
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from mvsgst.items import MvsgstItem
from scrapy.selector import HtmlXPathSelector
import time
import re
import random
class DbmvSpider(scrapy.Spider):
name = "vcmv"
tags = []
# for i in range(1900,2016,1):
# tags.append( "http://movie.douban.com/tag/"+str(i))
start_urls = ("http://www.verycd.com/base/movie/~all/",) #tuple(tags)
def load_item(self,ct):
staticurl = "http://www.verycd.com"
item = MvsgstItem()
item['url']=[]
for i in ct.xpath('@href').extract():
item['url'].append(staticurl+i)
return item
def regprocess(self,lst):
ret = []
if len(lst) ==0:
return ret
for i in lst:
i=i.replace('"','')
i=i.replace(',','')
i=i.replace(u'\u201c',u'')
i=i.replace('\\','')
ret.append(i)
return ret
def load_detail(self,itemurl):
        dre = re.compile(ur'\u4e0a\u6620\u65e5\u671f.*\>(.*)\<\/em\>')  # the escaped prefix is the "release date" label
details = itemurl.meta['item']
x = scrapy.Selector(itemurl)
details['mvname']= self.regprocess(x.xpath('//*[@class="titleDiv"]/h1/text()').extract())
enn = x.xpath('//*[@class="titleDiv"]/h2/text()').extract()
details['enname'] = self.regprocess(enn)
details['director']=self.regprocess(x.xpath('//*[@rel="v:directedBy"]/text()').extract())
details['actors']=self.regprocess(x.xpath('//*[@rel="v:starring"]/text()').extract())
details['types']=self.regprocess(x.xpath('//*[@rel="v:genre"]/text()').extract())
details['date'] = dre.findall(itemurl.body.decode('utf8'))
details['length'] = x.xpath('//*[@property="v:runtime"]/text()').extract()
summ= x.xpath('//*[@property="v:summary"]/p/text()').extract()
details['summary'] = self.regprocess(summ)
details['imdblink'] = x.xpath('//*[@id="imdb_rate_id"]/a/text()').extract()
#details['rank']= x.xpath('//*[@id="scoreDivDiv"]/text()').extract()
        # TODO: this part should be rebuilt, but it is good enough for this case
time.sleep(random.randint(1,3))
return details
def parse(self, response):
x = scrapy.Selector(response)
sites = x.xpath('//*[@class="clearfix entry_cover_list"]/li/a')
i=0
for ct in sites:
i+=1
item = self.load_item(ct)
yield scrapy.Request(item['url'][0],meta={'item':item},callback=self.load_detail)
time.sleep(random.randint(3,10))
nexturl=x.xpath('//*[@rel="next"]/@href').extract()
if len(nexturl)>0:
yield scrapy.Request('http://www.verycd.com'+nexturl[0],callback=self.parse)
#if i==10:
# sys.exit(0)
| 37.987013 | 97 | 0.567863 | 361 | 2,925 | 4.576177 | 0.393352 | 0.039952 | 0.065375 | 0.048426 | 0.059927 | 0.059927 | 0 | 0 | 0 | 0 | 0 | 0.018667 | 0.230769 | 2,925 | 76 | 98 | 38.486842 | 0.715556 | 0.093675 | 0 | 0.033333 | 0 | 0 | 0.201363 | 0.126798 | 0 | 0 | 0 | 0.013158 | 0 | 0 | null | null | 0 | 0.15 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
83b2455793ac0102ae6c3ddd7a4e3cd3efc16519 | 380 | py | Python | sets.py | Varanasi-Software-Junction/Python-repository-for-basics | 01128ccb91866cb1abb6d8abf035213f722f5750 | [
"MIT"
] | 2 | 2021-07-14T11:01:58.000Z | 2021-07-14T11:02:01.000Z | sets.py | Maurya232Abhishek/Python-repository-for-basics | 3dcec5c529a0847df07c9dcc1424675754ce6376 | [
"MIT"
] | 4 | 2021-04-09T10:14:06.000Z | 2021-04-13T10:25:58.000Z | sets.py | Maurya232Abhishek/Python-repository-for-basics | 3dcec5c529a0847df07c9dcc1424675754ce6376 | [
"MIT"
] | 2 | 2021-07-11T08:17:30.000Z | 2021-07-14T11:10:58.000Z | s1=set([1,3,7,94])
s2=set([2,3])
print(s1)
print(s2)
print(s1.intersection(s2))
print(s1.difference(s2))
print(s2.difference(s1))
print(s1.symmetric_difference(s2))
print(s1.union(s2))
s1.difference_update(s2)  # s1 becomes equal to the difference
print(s1)
s1=set([1,3])
s1.discard(1)
s1.remove(3)
print(s1)
s1.add(5)
print(s1)
t = [6, 7]  # a plain list; the parentheses here are redundant
s2.update(t)
print(s2)
x=s2.pop()
print(x) | 17.272727 | 61 | 0.702632 | 77 | 380 | 3.441558 | 0.337662 | 0.211321 | 0.101887 | 0.05283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116477 | 0.073684 | 380 | 22 | 62 | 17.272727 | 0.636364 | 0.092105 | 0 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.545455 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
83c212e47df5af280b2e08dd307c66c52fc7c620 | 746 | py | Python | Bidding/migrations/0007_auto_20201125_1049.py | Ishikashah2510/nirvaas_main | 5eaf92756d06261a7f555b10aad864a34c9e761b | [
"MIT"
] | null | null | null | Bidding/migrations/0007_auto_20201125_1049.py | Ishikashah2510/nirvaas_main | 5eaf92756d06261a7f555b10aad864a34c9e761b | [
"MIT"
] | null | null | null | Bidding/migrations/0007_auto_20201125_1049.py | Ishikashah2510/nirvaas_main | 5eaf92756d06261a7f555b10aad864a34c9e761b | [
"MIT"
] | 3 | 2020-12-30T11:35:22.000Z | 2021-01-07T13:10:26.000Z | # Generated by Django 3.1.3 on 2020-11-25 05:19
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('Bidding', '0006_auto_20201125_1007'),
]
operations = [
migrations.AddField(
model_name='old_items_on_bid',
name='buyer_email',
field=models.EmailField(default='', max_length=254),
),
migrations.AddField(
model_name='old_items_on_bid',
name='last_bid_value',
field=models.FloatField(default=0),
),
migrations.AddField(
model_name='old_items_on_bid',
name='threshold_value',
field=models.FloatField(default=0),
),
]
| 25.724138 | 64 | 0.58445 | 80 | 746 | 5.2 | 0.55 | 0.129808 | 0.165865 | 0.194712 | 0.480769 | 0.480769 | 0.317308 | 0.317308 | 0.317308 | 0 | 0 | 0.069231 | 0.302949 | 746 | 28 | 65 | 26.642857 | 0.730769 | 0.060322 | 0 | 0.5 | 1 | 0 | 0.168813 | 0.032904 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
83c657faaeb5f77a3afd9ea7d3c64106353332cb | 413 | py | Python | src/tests/part2/q071_test_simplify_path.py | hychrisli/PyAlgorithms | 71e537180f3b371d0d2cc47b11cb68ec13a8ac68 | [
"Apache-2.0"
] | null | null | null | src/tests/part2/q071_test_simplify_path.py | hychrisli/PyAlgorithms | 71e537180f3b371d0d2cc47b11cb68ec13a8ac68 | [
"Apache-2.0"
] | null | null | null | src/tests/part2/q071_test_simplify_path.py | hychrisli/PyAlgorithms | 71e537180f3b371d0d2cc47b11cb68ec13a8ac68 | [
"Apache-2.0"
] | null | null | null | from src.base.test_cases import TestCases
class SimplifyPathTestCases(TestCases):
    def __init__(self):
        super(SimplifyPathTestCases, self).__init__()
self.__add_test_case__('Test 1', '/home/', '/home')
self.__add_test_case__('Test 2', '/a/./b/../../c/', '/c')
self.__add_test_case__('Test 3', '/../', '/')
self.__add_test_case__('Test 4', '/home//foo/', '/home/foo') | 37.545455 | 68 | 0.619855 | 51 | 413 | 4.372549 | 0.470588 | 0.125561 | 0.197309 | 0.269058 | 0.340807 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01173 | 0.174334 | 413 | 11 | 68 | 37.545455 | 0.642229 | 0 | 0 | 0 | 0 | 0 | 0.18599 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
83db8a59b1d72d9359ecb0571d02bfaf26f6d0fe | 2,612 | py | Python | survol/sources_types/Linux/modules_dependencies.py | rchateauneu/survol | ba66d3ec453b2d9dd3a8dabc6d53f71aa9ba8c78 | [
"BSD-3-Clause"
] | 9 | 2017-10-05T23:36:23.000Z | 2021-08-09T15:40:03.000Z | survol/sources_types/Linux/modules_dependencies.py | rchateauneu/survol | ba66d3ec453b2d9dd3a8dabc6d53f71aa9ba8c78 | [
"BSD-3-Clause"
] | 21 | 2018-01-02T09:33:03.000Z | 2018-08-27T11:09:52.000Z | survol/sources_types/Linux/modules_dependencies.py | rchateauneu/survol | ba66d3ec453b2d9dd3a8dabc6d53f71aa9ba8c78 | [
"BSD-3-Clause"
] | 4 | 2018-06-23T09:05:45.000Z | 2021-01-22T15:36:50.000Z | #!/usr/bin/env python
"""
Linux modules dependencies
"""
import sys
import socket
import logging
import lib_common
import lib_util
import lib_modules
from lib_properties import pc
#
# The modules.dep as generated by module-init-tools depmod,
# lists the dependencies for every module in the directories
# under /lib/modules/version, where modules.dep is.
#
# cat /proc/version
# Linux version 2.6.24.7-desktop586-2mnb (qateam@titan.mandriva.com) (gcc version 4.2.3 (4.2.3-6mnb1)) #1 SMP Thu Oct 30 17:39:28 EDT 2008
# ls /lib/modules/$(cat /proc/version | cut -d " " -f3)/modules.dep
#
# /lib/modules/2.6.24.7-desktop586-2mnb/modules.dep
# /lib/modules/2.6.24.7-desktop586-2mnb/dkms-binary/drivers/char/hsfmc97via.ko.gz: /lib/modules/2.6.24.7-desktop586-2mnb/dkms-binary/drivers/char/hsfserial.ko.gz /lib/modules/2.6.24.7-desktop586-2mnb/dkms-binary/drivers/char/hsfengine.ko.gz /lib/modules/2.6.24.7-desktop586-2mnb/dkms-binary/drivers/char/hsfosspec.ko.gz /lib/modules/2.6.24.7-desktop586-2mnb/kernel/drivers/usb/core/usbcore.ko.gz /lib/modules/2.6.24.7-desktop586-2mnb/dkms-binary/drivers/char/hsfsoar.ko.gz
#
#
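# The comment block above shows the modules.dep layout: each line is a module
# path, a colon, then the space-separated paths of its dependencies. A
# stand-alone sketch of parsing one such line, independent of lib_modules
# (parse_modules_dep_line and the sample line are illustrative only):

```python
def parse_modules_dep_line(line):
    """Split one modules.dep line into (module, [dependency paths])."""
    module, _, deps = line.partition(":")
    return module.strip(), deps.split()

sample = ("/lib/modules/2.6.24.7/drivers/char/hsfmc97via.ko.gz: "
          "/lib/modules/2.6.24.7/drivers/char/hsfserial.ko.gz "
          "/lib/modules/2.6.24.7/drivers/usb/core/usbcore.ko.gz")
mod, deps = parse_modules_dep_line(sample)
print(mod, len(deps))
```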
def Main():
cgiEnv = lib_common.ScriptEnvironment()
grph = cgiEnv.GetGraph()
# TODO: The dependency network is huge, so we put a limit, for the moment.
max_cnt = 0
try:
modudeps = lib_modules.Dependencies()
except Exception as exc:
lib_common.ErrorMessageHtml("Caught:"+str(exc))
for module_name in modudeps:
		# Not too many nodes: beyond this the display is far too slow to be usable. HARDCODE_LIMIT
max_cnt += 1
if max_cnt > 2000:
logging.error("Too many modules to display. Break.")
break
file_parent = lib_modules.ModuleToNode(module_name)
file_child = None
for module_dep in modudeps[module_name]:
# print ( module_name + " => " + module_dep )
# This generates a directed acyclic graph,
# but not a tree in the general case.
file_child = lib_modules.ModuleToNode(module_dep)
grph.add((file_parent, pc.property_module_dep, file_child))
# TODO: Ugly trick, otherwise nodes without connections are not displayed.
# TODO: I think this is a BUG in the dot file generation. Or in RDF ?...
if file_child is None:
grph.add((file_parent, pc.property_information, lib_util.NodeLiteral("")))
# Splines are rather slow.
if max_cnt > 100:
layout_type = "LAYOUT_XXX"
else:
layout_type = "LAYOUT_SPLINE"
cgiEnv.OutCgiRdf(layout_type)
if __name__ == '__main__':
Main()
| 34.368421 | 472 | 0.684533 | 391 | 2,612 | 4.457801 | 0.42711 | 0.074584 | 0.018359 | 0.022949 | 0.241538 | 0.241538 | 0.199656 | 0.199656 | 0.199656 | 0.199656 | 0 | 0.050239 | 0.199847 | 2,612 | 75 | 473 | 34.826667 | 0.783732 | 0.534839 | 0 | 0 | 1 | 0 | 0.061603 | 0 | 0 | 0 | 0 | 0.013333 | 0 | 1 | 0.030303 | false | 0 | 0.181818 | 0 | 0.212121 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
83eb15797d997733d9f93ba8fd7b40602b6fddd5 | 444 | py | Python | kryta_server/app/utils/init_template/database_init.py | mattholy/Kryta-MC | 67126e3bb61ae97efd08576d49b3b0d7a01a8822 | [
"MIT"
] | 1 | 2021-10-05T10:35:02.000Z | 2021-10-05T10:35:02.000Z | kryta_server/app/utils/init_template/database_init.py | mattholy/Kryta-MC | 67126e3bb61ae97efd08576d49b3b0d7a01a8822 | [
"MIT"
] | null | null | null | kryta_server/app/utils/init_template/database_init.py | mattholy/Kryta-MC | 67126e3bb61ae97efd08576d49b3b0d7a01a8822 | [
"MIT"
] | null | null | null | # -*- encoding: utf-8 -*-
'''
database_init.py
----
Initialize the database
@Time : 2021/10/03 12:25:03
@Author : Mattholy
@Version : 1.0
@Contact : smile.used@hotmail.com
@License : MIT License
'''
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String
Base = declarative_base()
engine = create_engine('sqlite:///kryta-system.db?check_same_thread=False')
| 19.304348 | 75 | 0.718468 | 59 | 444 | 5.288136 | 0.745763 | 0.134615 | 0.128205 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045093 | 0.150901 | 444 | 22 | 76 | 20.181818 | 0.782493 | 0.423423 | 0 | 0 | 0 | 0 | 0.198381 | 0.198381 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
83ee3ca805a83db54b0d1425a4b1dec0918e1078 | 2,113 | py | Python | benchmark/wrappers/rllib/wrapper.py | JenishPatel99/SMARTS | feee8fd8a1f0c10ab2aaf6f12acc8c9cc0f861af | [
"MIT"
] | null | null | null | benchmark/wrappers/rllib/wrapper.py | JenishPatel99/SMARTS | feee8fd8a1f0c10ab2aaf6f12acc8c9cc0f861af | [
"MIT"
] | null | null | null | benchmark/wrappers/rllib/wrapper.py | JenishPatel99/SMARTS | feee8fd8a1f0c10ab2aaf6f12acc8c9cc0f861af | [
"MIT"
] | null | null | null | # MIT License
#
# Copyright (C) 2021. Huawei Technologies Co., Ltd. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import copy
from ray.rllib.env.multi_agent_env import MultiAgentEnv
class Wrapper(MultiAgentEnv):
def __init__(self, config):
base_env_cls = config["base_env_cls"]
self.env = base_env_cls(config)
self._agent_keys = list(config["agent_specs"].keys())
self._last_observations = {k: None for k in self._agent_keys}
def _get_observations(self, observations):
return observations
def _get_rewards(self, last_observations, observations, rewards):
return rewards
def _get_infos(self, observations, rewards, infos):
return infos
def _update_last_observation(self, observations):
for agent_id, obs in observations.items():
self._last_observations[agent_id] = copy.copy(obs)
def step(self, agent_actions):
return self.env.step(agent_actions)
def reset(self, **kwargs):
return self.env.reset(**kwargs)
def close(self):
self.env.close()
| 38.418182 | 79 | 0.733081 | 298 | 2,113 | 5.080537 | 0.456376 | 0.058124 | 0.019815 | 0.021136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002346 | 0.19309 | 2,113 | 54 | 80 | 39.12963 | 0.885631 | 0.522007 | 0 | 0 | 0 | 0 | 0.023279 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.347826 | false | 0 | 0.086957 | 0.217391 | 0.695652 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
83fb49c0fad272e8d8e34b432f2715b58dd4bc53 | 261 | py | Python | Darlington/phase1/python Basic 1/day 13 solution/qtn3.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 6 | 2020-05-23T19:53:25.000Z | 2021-05-08T20:21:30.000Z | Darlington/phase1/python Basic 1/day 13 solution/qtn3.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 8 | 2020-05-14T18:53:12.000Z | 2020-07-03T00:06:20.000Z | Darlington/phase1/python Basic 1/day 13 solution/qtn3.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 39 | 2020-05-10T20:55:02.000Z | 2020-09-12T17:40:59.000Z | #program to input a number, if it is not a number generate an error message.
while True:
try:
a = int(input("Input a number: "))
break
except ValueError:
print("\nThis is not a number. Try again...")
print()
break | 29 | 76 | 0.586207 | 37 | 261 | 4.135135 | 0.621622 | 0.183007 | 0.156863 | 0.156863 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.318008 | 261 | 9 | 77 | 29 | 0.859551 | 0.287356 | 0 | 0.25 | 1 | 0 | 0.27957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
83fee75301f392ac61e8ac89a8f2291ac2d94169 | 396 | py | Python | src/bxcommon/feed/filter_parsing.py | thabaptiser/bxcommon | ee8547c9fc68c71b8acf4ce0989a344681ea273c | [
"MIT"
] | null | null | null | src/bxcommon/feed/filter_parsing.py | thabaptiser/bxcommon | ee8547c9fc68c71b8acf4ce0989a344681ea273c | [
"MIT"
] | null | null | null | src/bxcommon/feed/filter_parsing.py | thabaptiser/bxcommon | ee8547c9fc68c71b8acf4ce0989a344681ea273c | [
"MIT"
] | null | null | null | from typing import Callable, Dict
import pycond as pc
from bxutils import logging
logger = logging.get_logger(__name__)
pc.ops_use_symbolic_and_txt(allow_single_eq=True)
def get_validator(filter_string: str) -> Callable[[Dict], bool]:
    logger.trace("Getting validator for filters {}", filter_string)
    res = pc.qualify(filter_string.lower(), brkts="()", add_cached=True)
    return res
| 26.4 | 72 | 0.757576 | 57 | 396 | 4.982456 | 0.684211 | 0.126761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133838 | 396 | 14 | 73 | 28.285714 | 0.827988 | 0 | 0 | 0 | 0 | 0 | 0.085859 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
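`get_validator` above defers all parsing to pycond; the pattern it implements (compile a filter string once, return a reusable predicate over dicts) can be sketched without the library. The toy `key = value` grammar below is a stand-in, not pycond's actual syntax.

```python
from typing import Callable, Dict


def make_validator(filter_string: str) -> Callable[[Dict], bool]:
    # Toy grammar: a single "key = value" comparison, compiled once
    # so the returned predicate does no string parsing per call.
    key, _, value = (part.strip() for part in filter_string.partition("="))

    def validator(payload: Dict) -> bool:
        return str(payload.get(key)) == value

    return validator


is_eth = make_validator("network = ethereum")
print(is_eth({"network": "ethereum"}))  # True
print(is_eth({"network": "bitcoin"}))   # False
```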
860362618af49e26f9510a14dc6d5c9d5c45caad | 2,297 | py | Python | tests/test_dib.py | noda/pyMeterBus | a1bb6b6ef9b3db4583dfb2b154e4f65365dee9d9 | [
"BSD-3-Clause"
] | 44 | 2016-12-11T14:43:14.000Z | 2022-03-17T18:31:14.000Z | tests/test_dib.py | noda/pyMeterBus | a1bb6b6ef9b3db4583dfb2b154e4f65365dee9d9 | [
"BSD-3-Clause"
] | 13 | 2017-11-29T14:36:34.000Z | 2020-12-20T18:33:35.000Z | tests/test_dib.py | noda/pyMeterBus | a1bb6b6ef9b3db4583dfb2b154e4f65365dee9d9 | [
"BSD-3-Clause"
] | 32 | 2015-09-15T12:23:19.000Z | 2022-03-22T08:32:22.000Z | # -*- coding: utf-8 -*-
import os
import sys
myPath = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, myPath + '/../')
import unittest
import meterbus
from meterbus.exceptions import *
class TestSequenceFunctions(unittest.TestCase):
    def setUp(self):
        self.dib_empty = meterbus.DataInformationBlock()
        self.dib0 = meterbus.DataInformationBlock([0x0C])
        self.dib7 = meterbus.DataInformationBlock([0x2F])
        self.dib8 = meterbus.DataInformationBlock([0x0F])
        self.dib9 = meterbus.DataInformationBlock([0x1F])

    def test_empty_dib_has_extension_bit(self):
        self.assertEqual(self.dib_empty.has_extension_bit, False)

    def test_empty_dib_has_lvar_bit(self):
        self.assertEqual(self.dib_empty.has_lvar_bit, False)

    def test_empty_dib_is_eoud(self):
        self.assertEqual(self.dib_empty.is_eoud, False)

    def test_empty_dib_more_records_follow(self):
        self.assertEqual(self.dib_empty.more_records_follow, False)

    def test_empty_dib_is_variable_length(self):
        self.assertEqual(self.dib_empty.is_variable_length, False)

    def test_dib0_has_extension_bit(self):
        self.assertEqual(self.dib0.has_extension_bit, False)

    def test_dib0_has_lvar_bit(self):
        self.assertEqual(self.dib0.has_lvar_bit, False)

    def test_dib0_is_eoud(self):
        self.assertEqual(self.dib0.is_eoud, False)

    def test_dib0_is_variable_length(self):
        self.assertEqual(self.dib0.is_variable_length, False)

    def test_dib0_function_type(self):
        self.assertEqual(self.dib0.function_type,
                         meterbus.FunctionType.INSTANTANEOUS_VALUE)

    def test_dib7_function_type(self):
        self.assertEqual(self.dib7.function_type,
                         meterbus.FunctionType.SPECIAL_FUNCTION_FILL_BYTE)

    def test_dib8_function_type(self):
        self.assertEqual(self.dib8.function_type,
                         meterbus.FunctionType.SPECIAL_FUNCTION)

    def test_dib9_more_records_follow(self):
        self.assertEqual(self.dib9.more_records_follow, True)

    def test_dib9_function_type(self):
        self.assertEqual(self.dib9.function_type,
                         meterbus.FunctionType.MORE_RECORDS_FOLLOW)


if __name__ == '__main__':
    unittest.main()
| 32.814286 | 74 | 0.715281 | 288 | 2,297 | 5.350694 | 0.208333 | 0.077872 | 0.172615 | 0.208955 | 0.602855 | 0.55159 | 0.286178 | 0.048021 | 0 | 0 | 0 | 0.017232 | 0.191554 | 2,297 | 69 | 75 | 33.289855 | 0.812601 | 0.009142 | 0 | 0 | 0 | 0 | 0.005277 | 0 | 0 | 0 | 0.007036 | 0 | 0.291667 | 1 | 0.3125 | false | 0 | 0.104167 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
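The flags exercised by the tests above come from the M-Bus DIF byte layout (EN 13757-3): bit 7 is the extension bit, and byte values like 0x1F and 0x2F are reserved special functions. A minimal sketch of deriving such properties from a single DIF byte follows; the `DIFByte` class is illustrative, not pyMeterBus's implementation.

```python
class DIFByte:
    """Illustrative decoding of one M-Bus DIF byte (not pyMeterBus's class)."""

    def __init__(self, byte: int):
        self.byte = byte

    @property
    def has_extension_bit(self) -> bool:
        return bool(self.byte & 0x80)   # bit 7: a DIFE byte follows

    @property
    def more_records_follow(self) -> bool:
        return self.byte == 0x1F        # reserved special-function value

    @property
    def is_fill_byte(self) -> bool:
        return self.byte == 0x2F        # idle filler between records


print(DIFByte(0x0C).has_extension_bit)    # False
print(DIFByte(0x1F).more_records_follow)  # True
```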
8605cb268bff0bc91478ddd74d6557acca5d57dc | 893 | py | Python | mercury/migrations/0012_gfconfig.py | ab7289/mercury-telemetry | db886fce9a2328bdcc85127130c6c6f42f9155eb | [
"MIT"
] | 5 | 2020-05-05T20:05:12.000Z | 2020-11-10T23:57:44.000Z | mercury/migrations/0012_gfconfig.py | ab7289/mercury-telemetry | db886fce9a2328bdcc85127130c6c6f42f9155eb | [
"MIT"
] | 38 | 2020-05-06T23:30:13.000Z | 2020-12-01T15:07:08.000Z | mercury/migrations/0012_gfconfig.py | ab7289/mercury-telemetry | db886fce9a2328bdcc85127130c6c6f42f9155eb | [
"MIT"
] | 10 | 2020-05-04T17:08:07.000Z | 2020-05-23T17:35:47.000Z | # Generated by Django 2.2.10 on 2020-03-20 16:15
from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ("mercury", "0011_merge_20200314_0111"),
    ]

    operations = [
        migrations.CreateModel(
            name="GFConfig",
            fields=[
                (
                    "id",
                    models.AutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                ("gf_name", models.CharField(max_length=64)),
                ("gf_host", models.CharField(max_length=128)),
                ("gf_token", models.CharField(max_length=256)),
                ("gf_current", models.BooleanField(default=False)),
            ],
        ),
    ]
| 27.90625 | 67 | 0.464726 | 75 | 893 | 5.36 | 0.68 | 0.11194 | 0.134328 | 0.179104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077973 | 0.425532 | 893 | 31 | 68 | 28.806452 | 0.705653 | 0.051512 | 0 | 0.12 | 1 | 0 | 0.088757 | 0.028402 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.16 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f7adbf1a3101c5e95093139f62a7b9fd7a905e61 | 1,440 | py | Python | tensorflow/mycode/src/tf_layer_utils.py | christinazavou/O-CNN | 88cda0aea9bf07e14686fff1fe476e8080296dcf | [
"MIT"
] | null | null | null | tensorflow/mycode/src/tf_layer_utils.py | christinazavou/O-CNN | 88cda0aea9bf07e14686fff1fe476e8080296dcf | [
"MIT"
] | null | null | null | tensorflow/mycode/src/tf_layer_utils.py | christinazavou/O-CNN | 88cda0aea9bf07e14686fff1fe476e8080296dcf | [
"MIT"
] | null | null | null | import tensorflow as tf
def make_weights(shape, name='weights'):
    return tf.Variable(tf.truncated_normal(shape=shape, stddev=0.05), name=name)


def make_biases(shape, name='biases'):
    return tf.Variable(tf.constant(0.05, shape=shape), name=name)


def convolution_layer(prev_layer, f_size, inp_c, out_c, stride_s):
    _weights = make_weights([f_size, f_size, inp_c, out_c])
    _bias = make_biases([out_c])
    return tf.add(tf.nn.conv2d(prev_layer, _weights, [1, stride_s, stride_s, 1], padding='SAME'), _bias)


def pool_layer(prev_layer, size, stride_s):
    kernel = [1, size, size, 1]
    stride = [1, stride_s, stride_s, 1]
    return tf.nn.max_pool(prev_layer, kernel, stride, padding='SAME')


def activation_layer(prev_layer, type):
    if type == 'relu':
        return tf.nn.relu(prev_layer)
    else:
        raise NotImplementedError('unsupported activation type')


def flat_layer(inp):
    input_size = inp.get_shape().as_list()
    if len(input_size) != 4:
        raise NotImplementedError('flat layer unsupported for input with dim != 4')
    output_size = input_size[-1] * input_size[-2] * input_size[-3]
    return tf.reshape(inp, [-1, output_size]), output_size


def fc_layer(prev_layer, h_in, h_out):
    _weights = make_weights([h_in, h_out])
    _bias = make_biases([h_out])
    return tf.add(tf.matmul(prev_layer, _weights), _bias)


def dropout_layer(prev_layer, prob):
    return tf.nn.dropout(prev_layer, prob)
| 30.638298 | 104 | 0.696528 | 228 | 1,440 | 4.131579 | 0.285088 | 0.095541 | 0.07431 | 0.038217 | 0.061571 | 0.061571 | 0 | 0 | 0 | 0 | 0 | 0.015847 | 0.167361 | 1,440 | 46 | 105 | 31.304348 | 0.769808 | 0 | 0 | 0 | 0 | 0 | 0.068056 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0 | 0.033333 | 0.1 | 0.566667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
f7b5a8ea18b93a0e2420ee64ad68108c327e73b0 | 5,322 | py | Python | sql_queries.py | bsulayman/Data_modeling_Postgres | 5442c7d4be0a789ef6b2ab0dc15ce956bc70581c | [
"MIT"
] | null | null | null | sql_queries.py | bsulayman/Data_modeling_Postgres | 5442c7d4be0a789ef6b2ab0dc15ce956bc70581c | [
"MIT"
] | null | null | null | sql_queries.py | bsulayman/Data_modeling_Postgres | 5442c7d4be0a789ef6b2ab0dc15ce956bc70581c | [
"MIT"
] | null | null | null | # DROP TABLES
songplay_table_drop = "DROP TABLE IF EXISTS songplays;"
user_table_drop = "DROP TABLE IF EXISTS users;"
song_table_drop = "DROP TABLE IF EXISTS songs;"
artist_table_drop = "DROP TABLE IF EXISTS artists;"
time_table_drop = "DROP TABLE IF EXISTS time;"
# CREATE TABLES
songplay_table_create = ("""CREATE TABLE IF NOT EXISTS songplays (
    songplay_id SERIAL PRIMARY KEY,
    start_time TIMESTAMP NOT NULL,
    user_id INT NOT NULL,
    level VARCHAR(4),
    song_id VARCHAR,
    artist_id VARCHAR,
    session_id INT NOT NULL,
    location TEXT,
    user_agent TEXT
)
""")

user_table_create = ("""CREATE TABLE IF NOT EXISTS users (
    user_id INT UNIQUE NOT NULL PRIMARY KEY,
    first_name TEXT,
    last_name TEXT,
    gender VARCHAR(1),
    level VARCHAR(4)
)
""")

song_table_create = ("""CREATE TABLE IF NOT EXISTS songs (
    song_id VARCHAR UNIQUE NOT NULL PRIMARY KEY,
    title TEXT,
    artist_id VARCHAR,
    year INT,
    duration NUMERIC
)
""")

artist_table_create = ("""CREATE TABLE IF NOT EXISTS artists (
    artist_id VARCHAR UNIQUE NOT NULL PRIMARY KEY,
    name TEXT,
    location TEXT,
    latitude NUMERIC,
    longitude NUMERIC
)
""")

time_table_create = ("""CREATE TABLE IF NOT EXISTS time (
    start_time TIME UNIQUE NOT NULL,
    hour INT,
    day INT,
    week INT,
    month VARCHAR(10),
    year INT,
    weekday VARCHAR(10)
)
""")

# INSERT RECORDS
songplay_table_insert = ("""INSERT INTO songplays (
    start_time,
    user_id,
    level,
    song_id,
    artist_id,
    session_id,
    location,
    user_agent
)
VALUES (to_timestamp(%s), %s, %s, %s, %s, %s, %s, %s)
""")

user_table_insert = ("""INSERT INTO users (
    user_id,
    first_name,
    last_name,
    gender,
    level
)
VALUES (%s, %s, %s, %s, %s)
ON CONFLICT (user_id)
DO UPDATE SET level = EXCLUDED.level
""")

song_table_insert = ("""INSERT INTO songs (
    song_id,
    title,
    artist_id,
    year,
    duration
)
VALUES (%s, %s, %s, %s, %s)
ON CONFLICT (song_id)
DO NOTHING
""")

artist_table_insert = ("""INSERT INTO artists (
    artist_id,
    name,
    location,
    latitude,
    longitude
)
VALUES (%s, %s, %s, %s, %s)
ON CONFLICT (artist_id)
DO NOTHING
""")

time_table_insert = ("""INSERT INTO time (
    start_time,
    hour,
    day,
    week,
    month,
    year,
    weekday
)
VALUES (%s, %s, %s, %s, %s, %s, %s)
ON CONFLICT (start_time)
DO NOTHING
""")

# FIND SONGS
song_select = ("""SELECT songs.song_id, songs.artist_id
FROM songs JOIN artists ON songs.artist_id = artists.artist_id
WHERE songs.title = (%s) AND artists.name = (%s) AND songs.duration = (%s);
""")
# QUERY LISTS
create_table_queries = [songplay_table_create, user_table_create, song_table_create, artist_table_create, time_table_create]
drop_table_queries = [songplay_table_drop, user_table_drop, song_table_drop, artist_table_drop, time_table_drop] | 38.846715 | 124 | 0.360955 | 408 | 5,322 | 4.485294 | 0.17402 | 0.027322 | 0.032787 | 0.032787 | 0.260109 | 0.247541 | 0.173224 | 0.034426 | 0 | 0 | 0 | 0.003126 | 0.579294 | 5,322 | 137 | 125 | 38.846715 | 0.814203 | 0.011838 | 0 | 0.313043 | 0 | 0.026087 | 0.866768 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
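A cheap sanity check on parameterized templates like the ones above is to count the `%s` placeholders against the values tuple you plan to pass to `cursor.execute`; this runs standalone with no database, re-using one of the insert statements.

```python
import re

# One of the insert templates from the module above, inlined so the
# check is self-contained.
user_table_insert = """INSERT INTO users (
    user_id, first_name, last_name, gender, level
)
VALUES (%s, %s, %s, %s, %s)
ON CONFLICT (user_id)
DO UPDATE SET level = EXCLUDED.level
"""


def placeholder_count(query: str) -> int:
    return len(re.findall(r"%s", query))


row = (42, "Ada", "Lovelace", "F", "free")
assert placeholder_count(user_table_insert) == len(row)
print(placeholder_count(user_table_insert))  # 5
```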
f7cddff083977ef868affd95a78df624756cab8e | 24,961 | py | Python | genshi/core.py | litebook/litebook | 3e0fc6daa62a4782a4a1103fe41b9bc56e677e00 | [
"Python-2.0"
] | 20 | 2015-01-27T04:50:16.000Z | 2019-12-09T02:23:15.000Z | genshi/core.py | litebook/litebook | 3e0fc6daa62a4782a4a1103fe41b9bc56e677e00 | [
"Python-2.0"
] | 1 | 2020-11-26T04:10:27.000Z | 2021-01-03T22:36:12.000Z | genshi/core.py | hujun-open/litebook | 3e0fc6daa62a4782a4a1103fe41b9bc56e677e00 | [
"Python-2.0"
] | 2 | 2017-05-09T06:56:00.000Z | 2020-11-20T15:23:16.000Z | # -*- coding: utf-8 -*-
#
# Copyright (C) 2006-2009 Edgewall Software
# All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at http://genshi.edgewall.org/wiki/License.
#
# This software consists of voluntary contributions made by many
# individuals. For the exact contribution history, see the revision
# history and logs, available at http://genshi.edgewall.org/log/.
"""Core classes for markup processing."""
try:
    reduce  # builtin in Python < 3
except NameError:
    from functools import reduce
from itertools import chain
import operator
from genshi.util import plaintext, stripentities, striptags, stringrepr
__all__ = ['Stream', 'Markup', 'escape', 'unescape', 'Attrs', 'Namespace',
           'QName']
__docformat__ = 'restructuredtext en'
class StreamEventKind(str):
    """A kind of event on a markup stream."""
    __slots__ = []
    _instances = {}

    def __new__(cls, val):
        return cls._instances.setdefault(val, str.__new__(cls, val))
class Stream(object):
    """Represents a stream of markup events.

    This class is basically an iterator over the events.

    Stream events are tuples of the form::

      (kind, data, position)

    where ``kind`` is the event kind (such as `START`, `END`, `TEXT`, etc),
    ``data`` depends on the kind of event, and ``position`` is a
    ``(filename, line, offset)`` tuple that contains the location of the
    original element or text in the input. If the original location is unknown,
    ``position`` is ``(None, -1, -1)``.

    Also provided are ways to serialize the stream to text. The `serialize()`
    method will return an iterator over generated strings, while `render()`
    returns the complete generated text at once. Both accept various parameters
    that impact the way the stream is serialized.
    """
    __slots__ = ['events', 'serializer']

    START = StreamEventKind('START')  #: a start tag
    END = StreamEventKind('END')  #: an end tag
    TEXT = StreamEventKind('TEXT')  #: literal text
    XML_DECL = StreamEventKind('XML_DECL')  #: XML declaration
    DOCTYPE = StreamEventKind('DOCTYPE')  #: doctype declaration
    START_NS = StreamEventKind('START_NS')  #: start namespace mapping
    END_NS = StreamEventKind('END_NS')  #: end namespace mapping
    START_CDATA = StreamEventKind('START_CDATA')  #: start CDATA section
    END_CDATA = StreamEventKind('END_CDATA')  #: end CDATA section
    PI = StreamEventKind('PI')  #: processing instruction
    COMMENT = StreamEventKind('COMMENT')  #: comment

    def __init__(self, events, serializer=None):
        """Initialize the stream with a sequence of markup events.

        :param events: a sequence or iterable providing the events
        :param serializer: the default serialization method to use for this
                           stream
        :note: Changed in 0.5: added the `serializer` argument
        """
        self.events = events  #: The underlying iterable producing the events
        self.serializer = serializer  #: The default serialization method

    def __iter__(self):
        return iter(self.events)

    def __or__(self, function):
        """Override the "bitwise or" operator to apply filters or serializers
        to the stream, providing a syntax similar to pipes on Unix shells.

        Assume the following stream produced by the `HTML` function:

        >>> from genshi.input import HTML
        >>> html = HTML('''<p onclick="alert('Whoa')">Hello, world!</p>''')
        >>> print(html)
        <p onclick="alert('Whoa')">Hello, world!</p>

        A filter such as the HTML sanitizer can be applied to that stream using
        the pipe notation as follows:

        >>> from genshi.filters import HTMLSanitizer
        >>> sanitizer = HTMLSanitizer()
        >>> print(html | sanitizer)
        <p>Hello, world!</p>

        Filters can be any function that accepts and produces a stream (where
        a stream is anything that iterates over events):

        >>> def uppercase(stream):
        ...     for kind, data, pos in stream:
        ...         if kind is TEXT:
        ...             data = data.upper()
        ...         yield kind, data, pos
        >>> print(html | sanitizer | uppercase)
        <p>HELLO, WORLD!</p>

        Serializers can also be used with this notation:

        >>> from genshi.output import TextSerializer
        >>> output = TextSerializer()
        >>> print(html | sanitizer | uppercase | output)
        HELLO, WORLD!

        Commonly, serializers should be used at the end of the "pipeline";
        using them somewhere in the middle may produce unexpected results.

        :param function: the callable object that should be applied as a filter
        :return: the filtered stream
        :rtype: `Stream`
        """
        return Stream(_ensure(function(self)), serializer=self.serializer)

    def filter(self, *filters):
        """Apply filters to the stream.

        This method returns a new stream with the given filters applied. The
        filters must be callables that accept the stream object as parameter,
        and return the filtered stream.

        The call::

            stream.filter(filter1, filter2)

        is equivalent to::

            stream | filter1 | filter2

        :param filters: one or more callable objects that should be applied as
                        filters
        :return: the filtered stream
        :rtype: `Stream`
        """
        return reduce(operator.or_, (self,) + filters)

    def render(self, method=None, encoding='utf-8', out=None, **kwargs):
        """Return a string representation of the stream.

        Any additional keyword arguments are passed to the serializer, and thus
        depend on the `method` parameter value.

        :param method: determines how the stream is serialized; can be either
                       "xml", "xhtml", "html", "text", or a custom serializer
                       class; if `None`, the default serialization method of
                       the stream is used
        :param encoding: how the output string should be encoded; if set to
                         `None`, this method returns a `unicode` object
        :param out: a file-like object that the output should be written to
                    instead of being returned as one big string; note that if
                    this is a file or socket (or similar), the `encoding` must
                    not be `None` (that is, the output must be encoded)
        :return: a `str` or `unicode` object (depending on the `encoding`
                 parameter), or `None` if the `out` parameter is provided
        :rtype: `basestring`
        :see: XMLSerializer, XHTMLSerializer, HTMLSerializer, TextSerializer
        :note: Changed in 0.5: added the `out` parameter
        """
        from genshi.output import encode
        if method is None:
            method = self.serializer or 'xml'
        generator = self.serialize(method=method, **kwargs)
        return encode(generator, method=method, encoding=encoding, out=out)

    def select(self, path, namespaces=None, variables=None):
        """Return a new stream that contains the events matching the given
        XPath expression.

        >>> from genshi import HTML
        >>> stream = HTML('<doc><elem>foo</elem><elem>bar</elem></doc>')
        >>> print(stream.select('elem'))
        <elem>foo</elem><elem>bar</elem>
        >>> print(stream.select('elem/text()'))
        foobar

        Note that the outermost element of the stream becomes the *context
        node* for the XPath test. That means that the expression "doc" would
        not match anything in the example above, because it only tests against
        child elements of the outermost element:

        >>> print(stream.select('doc'))
        <BLANKLINE>

        You can use the "." expression to match the context node itself
        (although that usually makes little sense):

        >>> print(stream.select('.'))
        <doc><elem>foo</elem><elem>bar</elem></doc>

        :param path: a string containing the XPath expression
        :param namespaces: mapping of namespace prefixes used in the path
        :param variables: mapping of variable names to values
        :return: the selected substream
        :rtype: `Stream`
        :raises PathSyntaxError: if the given path expression is invalid or not
                                 supported
        """
        from genshi.path import Path
        return Path(path).select(self, namespaces, variables)

    def serialize(self, method='xml', **kwargs):
        """Generate strings corresponding to a specific serialization of the
        stream.

        Unlike the `render()` method, this method is a generator that returns
        the serialized output incrementally, as opposed to returning a single
        string.

        Any additional keyword arguments are passed to the serializer, and thus
        depend on the `method` parameter value.

        :param method: determines how the stream is serialized; can be either
                       "xml", "xhtml", "html", "text", or a custom serializer
                       class; if `None`, the default serialization method of
                       the stream is used
        :return: an iterator over the serialization results (`Markup` or
                 `unicode` objects, depending on the serialization method)
        :rtype: ``iterator``
        :see: XMLSerializer, XHTMLSerializer, HTMLSerializer, TextSerializer
        """
        from genshi.output import get_serializer
        if method is None:
            method = self.serializer or 'xml'
        return get_serializer(method, **kwargs)(_ensure(self))

    def __str__(self):
        return self.render()

    def __unicode__(self):
        return self.render(encoding=None)

    def __html__(self):
        return self


START = Stream.START
END = Stream.END
TEXT = Stream.TEXT
XML_DECL = Stream.XML_DECL
DOCTYPE = Stream.DOCTYPE
START_NS = Stream.START_NS
END_NS = Stream.END_NS
START_CDATA = Stream.START_CDATA
END_CDATA = Stream.END_CDATA
PI = Stream.PI
COMMENT = Stream.COMMENT
def _ensure(stream):
    """Ensure that every item on the stream is actually a markup event."""
    stream = iter(stream)
    event = stream.next()

    # Check whether the iterable is a real markup event stream by examining the
    # first item it yields; if it's not we'll need to do some conversion
    if type(event) is not tuple or len(event) != 3:
        for event in chain([event], stream):
            if hasattr(event, 'totuple'):
                event = event.totuple()
            else:
                event = TEXT, unicode(event), (None, -1, -1)
            yield event
        return

    # This looks like a markup event stream, so we'll just pass it through
    # unchanged
    yield event
    for event in stream:
        yield event
class Attrs(tuple):
    """Immutable sequence type that stores the attributes of an element.

    Ordering of the attributes is preserved, while access by name is also
    supported.

    >>> attrs = Attrs([('href', '#'), ('title', 'Foo')])
    >>> attrs
    Attrs([('href', '#'), ('title', 'Foo')])

    >>> 'href' in attrs
    True
    >>> 'tabindex' in attrs
    False
    >>> attrs.get('title')
    'Foo'

    Instances may not be manipulated directly. Instead, the operators ``|`` and
    ``-`` can be used to produce new instances that have specific attributes
    added, replaced or removed.

    To remove an attribute, use the ``-`` operator. The right hand side can be
    either a string or a set/sequence of strings, identifying the name(s) of
    the attribute(s) to remove:

    >>> attrs - 'title'
    Attrs([('href', '#')])
    >>> attrs - ('title', 'href')
    Attrs()

    The original instance is not modified, but the operator can of course be
    used with an assignment:

    >>> attrs
    Attrs([('href', '#'), ('title', 'Foo')])
    >>> attrs -= 'title'
    >>> attrs
    Attrs([('href', '#')])

    To add a new attribute, use the ``|`` operator, where the right hand value
    is a sequence of ``(name, value)`` tuples (which includes `Attrs`
    instances):

    >>> attrs | [('title', 'Bar')]
    Attrs([('href', '#'), ('title', 'Bar')])

    If the attributes already contain an attribute with a given name, the value
    of that attribute is replaced:

    >>> attrs | [('href', 'http://example.org/')]
    Attrs([('href', 'http://example.org/')])
    """
    __slots__ = []

    def __contains__(self, name):
        """Return whether the list includes an attribute with the specified
        name.

        :return: `True` if the list includes the attribute
        :rtype: `bool`
        """
        for attr, _ in self:
            if attr == name:
                return True

    def __getitem__(self, i):
        """Return an item or slice of the attributes list.

        >>> attrs = Attrs([('href', '#'), ('title', 'Foo')])
        >>> attrs[1]
        ('title', 'Foo')
        >>> attrs[1:]
        Attrs([('title', 'Foo')])
        """
        items = tuple.__getitem__(self, i)
        if type(i) is slice:
            return Attrs(items)
        return items

    def __getslice__(self, i, j):
        """Return a slice of the attributes list.

        >>> attrs = Attrs([('href', '#'), ('title', 'Foo')])
        >>> attrs[1:]
        Attrs([('title', 'Foo')])
        """
        return Attrs(tuple.__getslice__(self, i, j))

    def __or__(self, attrs):
        """Return a new instance that contains the attributes in `attrs` in
        addition to any already existing attributes.

        :return: a new instance with the merged attributes
        :rtype: `Attrs`
        """
        repl = dict([(an, av) for an, av in attrs if an in self])
        return Attrs([(sn, repl.get(sn, sv)) for sn, sv in self] +
                     [(an, av) for an, av in attrs if an not in self])

    def __repr__(self):
        if not self:
            return 'Attrs()'
        return 'Attrs([%s])' % ', '.join([repr(item) for item in self])

    def __sub__(self, names):
        """Return a new instance with all attributes with a name in `names`
        removed.

        :param names: the names of the attributes to remove
        :return: a new instance with the attributes removed
        :rtype: `Attrs`
        """
        if isinstance(names, basestring):
            names = (names,)
        return Attrs([(name, val) for name, val in self if name not in names])

    def get(self, name, default=None):
        """Return the value of the attribute with the specified name, or the
        value of the `default` parameter if no such attribute is found.

        :param name: the name of the attribute
        :param default: the value to return when the attribute does not exist
        :return: the attribute value, or the `default` value if that attribute
                 does not exist
        :rtype: `object`
        """
        for attr, value in self:
            if attr == name:
                return value
        return default

    def totuple(self):
        """Return the attributes as a markup event.

        The returned event is a `TEXT` event, the data is the value of all
        attributes joined together.

        >>> Attrs([('href', '#'), ('title', 'Foo')]).totuple()
        ('TEXT', '#Foo', (None, -1, -1))

        :return: a `TEXT` event
        :rtype: `tuple`
        """
        return TEXT, ''.join([x[1] for x in self]), (None, -1, -1)
class Markup(unicode):
    """Marks a string as being safe for inclusion in HTML/XML output without
    needing to be escaped.
    """
    __slots__ = []

    def __add__(self, other):
        return Markup(unicode.__add__(self, escape(other)))

    def __radd__(self, other):
        return Markup(unicode.__add__(escape(other), self))

    def __mod__(self, args):
        if isinstance(args, dict):
            args = dict(zip(args.keys(), map(escape, args.values())))
        elif isinstance(args, (list, tuple)):
            args = tuple(map(escape, args))
        else:
            args = escape(args)
        return Markup(unicode.__mod__(self, args))

    def __mul__(self, num):
        return Markup(unicode.__mul__(self, num))
    __rmul__ = __mul__

    def __repr__(self):
        return "<%s %s>" % (type(self).__name__, unicode.__repr__(self))

    def join(self, seq, escape_quotes=True):
        """Return a `Markup` object which is the concatenation of the strings
        in the given sequence, where this `Markup` object is the separator
        between the joined elements.

        Any element in the sequence that is not a `Markup` instance is
        automatically escaped.

        :param seq: the sequence of strings to join
        :param escape_quotes: whether double quote characters in the elements
                              should be escaped
        :return: the joined `Markup` object
        :rtype: `Markup`
        :see: `escape`
        """
        return Markup(unicode.join(self, [escape(item, quotes=escape_quotes)
                                          for item in seq]))

    @classmethod
    def escape(cls, text, quotes=True):
        """Create a Markup instance from a string and escape special characters
        it may contain (<, >, & and \").

        >>> escape('"1 < 2"')
        <Markup u'&#34;1 &lt; 2&#34;'>

        If the `quotes` parameter is set to `False`, the \" character is left
        as is. Escaping quotes is generally only required for strings that are
        to be used in attribute values.

        >>> escape('"1 < 2"', quotes=False)
        <Markup u'"1 &lt; 2"'>

        :param text: the text to escape
        :param quotes: if ``True``, double quote characters are escaped in
                       addition to the other special characters
        :return: the escaped `Markup` string
        :rtype: `Markup`
        """
        if not text:
            return cls()
        if type(text) is cls:
            return text
        if hasattr(text, '__html__'):
            return Markup(text.__html__())
        text = text.replace('&', '&amp;') \
                   .replace('<', '&lt;') \
                   .replace('>', '&gt;')
        if quotes:
            text = text.replace('"', '&#34;')
        return cls(text)

    def unescape(self):
        """Reverse-escapes &, <, >, and \" and returns a `unicode` object.

        >>> Markup('1 &lt; 2').unescape()
        u'1 < 2'

        :return: the unescaped string
        :rtype: `unicode`
        :see: `genshi.core.unescape`
        """
        if not self:
            return ''
        return unicode(self).replace('&#34;', '"') \
                            .replace('&gt;', '>') \
                            .replace('&lt;', '<') \
                            .replace('&amp;', '&')

    def stripentities(self, keepxmlentities=False):
        """Return a copy of the text with any character or numeric entities
        replaced by the equivalent UTF-8 characters.

        If the `keepxmlentities` parameter is provided and evaluates to `True`,
        the core XML entities (``&amp;``, ``&apos;``, ``&gt;``, ``&lt;`` and
        ``&quot;``) are not stripped.

        :return: a `Markup` instance with entities removed
        :rtype: `Markup`
        :see: `genshi.util.stripentities`
        """
        return Markup(stripentities(self, keepxmlentities=keepxmlentities))

    def striptags(self):
        """Return a copy of the text with all XML/HTML tags removed.

        :return: a `Markup` instance with all tags removed
        :rtype: `Markup`
        :see: `genshi.util.striptags`
        """
        return Markup(striptags(self))


try:
    from genshi._speedups import Markup
except ImportError:
    pass  # just use the Python implementation


escape = Markup.escape
def unescape(text):
    """Reverse-escapes &, <, >, and \" and returns a `unicode` object.

    >>> unescape(Markup('1 &lt; 2'))
    u'1 < 2'

    If the provided `text` object is not a `Markup` instance, it is returned
    unchanged.

    >>> unescape('1 &lt; 2')
    '1 &lt; 2'

    :param text: the text to unescape
    :return: the unescaped string
    :rtype: `unicode`
    """
    if not isinstance(text, Markup):
        return text
    return text.unescape()
class Namespace(object):
    """Utility class creating and testing elements with a namespace.

    Internally, namespace URIs are encoded in the `QName` of any element or
    attribute, the namespace URI being enclosed in curly braces. This class
    helps create and test these strings.

    A `Namespace` object is instantiated with the namespace URI.

    >>> html = Namespace('http://www.w3.org/1999/xhtml')
    >>> html
    Namespace('http://www.w3.org/1999/xhtml')
    >>> html.uri
    u'http://www.w3.org/1999/xhtml'

    The `Namespace` object can then be used to generate `QName` objects with
    that namespace:

    >>> html.body
    QName('http://www.w3.org/1999/xhtml}body')
    >>> html.body.localname
    u'body'
    >>> html.body.namespace
    u'http://www.w3.org/1999/xhtml'

    The same works using item access notation, which is useful for element or
    attribute names that are not valid Python identifiers:

    >>> html['body']
    QName('http://www.w3.org/1999/xhtml}body')

    A `Namespace` object can also be used to test whether a specific `QName`
    belongs to that namespace using the ``in`` operator:

    >>> qname = html.body
    >>> qname in html
    True
    >>> qname in Namespace('http://www.w3.org/2002/06/xhtml2')
    False
    """

    def __new__(cls, uri):
        if type(uri) is cls:
            return uri
        return object.__new__(cls)

    def __getnewargs__(self):
        return (self.uri,)

    def __getstate__(self):
        return self.uri

    def __setstate__(self, uri):
        self.uri = uri

    def __init__(self, uri):
        self.uri = unicode(uri)

    def __contains__(self, qname):
        return qname.namespace == self.uri

    def __ne__(self, other):
        return not self == other

    def __eq__(self, other):
        if isinstance(other, Namespace):
            return self.uri == other.uri
        return self.uri == other

    def __getitem__(self, name):
        return QName(self.uri + '}' + name)
    __getattr__ = __getitem__

    def __hash__(self):
        return hash(self.uri)

    def __repr__(self):
        return '%s(%s)' % (type(self).__name__, stringrepr(self.uri))

    def __str__(self):
        return self.uri.encode('utf-8')

    def __unicode__(self):
        return self.uri


# The namespace used by attributes such as xml:lang and xml:space
XML_NAMESPACE = Namespace('http://www.w3.org/XML/1998/namespace')

class QName(unicode):
    """A qualified element or attribute name.

    The unicode value of instances of this class contains the qualified name of
    the element or attribute, in the form ``{namespace-uri}local-name``. The
    namespace URI can be obtained through the additional `namespace` attribute,
    while the local name can be accessed through the `localname` attribute.

    >>> qname = QName('foo')
    >>> qname
    QName('foo')
    >>> qname.localname
    u'foo'
    >>> qname.namespace
    >>> qname = QName('http://www.w3.org/1999/xhtml}body')
    >>> qname
    QName('http://www.w3.org/1999/xhtml}body')
    >>> qname.localname
    u'body'
    >>> qname.namespace
    u'http://www.w3.org/1999/xhtml'
    """
    __slots__ = ['namespace', 'localname']

    def __new__(cls, qname):
        """Create the `QName` instance.

        :param qname: the qualified name as a string of the form
                      ``{namespace-uri}local-name``, where the leading curly
                      brace is optional
        """
        if type(qname) is cls:
            return qname

        parts = qname.lstrip('{').split('}', 1)
        if len(parts) > 1:
            self = unicode.__new__(cls, '{%s' % qname)
            self.namespace, self.localname = map(unicode, parts)
        else:
            self = unicode.__new__(cls, qname)
            self.namespace, self.localname = None, unicode(qname)
        return self

    def __getnewargs__(self):
        return (self.lstrip('{'),)

    def __repr__(self):
        return '%s(%s)' % (type(self).__name__, stringrepr(self.lstrip('{')))
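The Clark-notation split performed in `QName.__new__` can be shown as a plain Python 3 function. `split_qname` is our own helper name, not part of the library; it mirrors the `lstrip`/`split` logic above, including the optional leading brace.

```python
# Sketch of the Clark-notation split in QName.__new__:
# '{uri}local' -> (uri, local); a plain name has no namespace part.
def split_qname(qname):
    parts = qname.lstrip('{').split('}', 1)
    if len(parts) > 1:
        return parts[0], parts[1]
    return None, qname


print(split_qname('{http://www.w3.org/1999/xhtml}body'))
print(split_qname('foo'))
```

The first call prints `('http://www.w3.org/1999/xhtml', 'body')`; the second prints `(None, 'foo')`.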
# spec/construct/test_position_in_seq.py (DarkShadow44/kaitai_struct_tests, MIT license)

# Autogenerated from KST: please remove this line if doing any edits by hand!
import unittest

from position_in_seq import _schema


class TestPositionInSeq(unittest.TestCase):
    def test_position_in_seq(self):
        r = _schema.parse_file('src/position_in_seq.bin')
        self.assertEqual(r.numbers, [(0 + 1), 2, 3])
# naiveSpider/spiders/quotes-spider.py (shusheng007/NaiveSpider, Apache-2.0 license)

import scrapy
import random


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'https://jinqiangua.911cha.com/',
    ]
    # Note: the original list was missing two commas, which silently
    # concatenated adjacent user-agent strings; commas added here.
    my_headers = [
        "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36",
        "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/537.75.14",
        "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Win64; x64; Trident/6.0)",
        'Mozilla/5.0 (X11; Linux x86_64; rv:48.0) Gecko/20100101 Firefox/48.0',
    ]

    def start_requests(self):
        # Bug fix: the original referenced an undefined local name ``headers``
        # inside random.choice(); choose from the class-level list instead.
        headers = {'User-Agent': random.choice(self.my_headers)}
        for url in self.start_urls:
            yield scrapy.Request(url, headers=headers)

    def parse(self, response):
        page = response.url.split("/")[-1]
        filename = 'gua-%s' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
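The user-agent rotation the spider attempts can be shown framework-free. The names below (`USER_AGENTS`, `pick_headers`) are ours, not Scrapy's: pick one agent string at random and build the request headers dict from it.

```python
# Minimal, framework-free sketch of per-request user-agent rotation.
import random

USER_AGENTS = [
    "Mozilla/5.0 (X11; Linux x86_64; rv:48.0) Gecko/20100101 Firefox/48.0",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0",
]


def pick_headers(agents=USER_AGENTS, rng=random):
    # One random agent per call; rng is injectable for testing.
    return {"User-Agent": rng.choice(agents)}


headers = pick_headers()
print(headers["User-Agent"] in USER_AGENTS)  # True
```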
# app/migrations/0019_auto_20210206_1309.py (LP-Dev-Web/LeBonRecoin, MIT license)

# Generated by Django 3.1.4 on 2021-02-06 12:09
from django.db import migrations, models

class Migration(migrations.Migration):

    dependencies = [
        ("app", "0018_auto_20210206_0402"),
    ]

    operations = [
        migrations.AlterField(
            model_name="product",
            name="created_at",
            field=models.DateTimeField(auto_now_add=True),
        ),
        migrations.AlterField(
            model_name="user",
            name="created_at",
            field=models.DateTimeField(auto_now_add=True),
        ),
    ]
# Back-End/Python/Basics/Part -1 - Functional/08 - Modules, Packages/04 - structuring_imports_package/common/models/users/user.py
# (ASHISHKUMAR2411/Programming-CookBook, MIT license)

# user.py
__all__ = ['User']


class User:
    pass


def user_helper_1():
    pass
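`__all__` is the export list consulted by `from module import *`. A self-contained way to see its effect is to build a throwaway module object in memory (the module name `demo_user` below is invented for the demo):

```python
# Simulate the user.py module above and show that star-import honors __all__.
import sys
import types

mod = types.ModuleType('demo_user')
exec(
    "__all__ = ['User']\n"
    "class User: pass\n"
    "def user_helper_1(): pass\n",
    mod.__dict__,
)
sys.modules['demo_user'] = mod

ns = {}
exec('from demo_user import *', ns)
print('User' in ns)            # True: listed in __all__
print('user_helper_1' in ns)   # False: not exported
```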
# pySVG/src/pysvg/text.py (Estevo-Aleixo/pysvg, Unlicense)

#!/usr/bin/python
# -*- coding: iso-8859-1 -*-
'''
(C) 2008, 2009 Kerim Mansour
For licensing information please refer to license.txt
'''
from attributes import *
from core import BaseElement, PointAttrib, DeltaPointAttrib, RotateAttrib


class AltGlyphDef(BaseElement, CoreAttrib):
    """
    Class representing the altGlyphDef element of an svg doc.
    """
    def __init__(self, **kwargs):
        # Element name fixed from the misspelled 'altGlypfDef'.
        BaseElement.__init__(self, 'altGlyphDef')
        self.setKWARGS(**kwargs)


class AltGlyphItem(BaseElement, CoreAttrib):
    """
    Class representing the altGlyphItem element of an svg doc.
    """
    def __init__(self, **kwargs):
        # Element name fixed from the misspelled 'altGlypfItem'.
        BaseElement.__init__(self, 'altGlyphItem')
        self.setKWARGS(**kwargs)


class GlyphRef(BaseElement, CoreAttrib, ExternalAttrib, StyleAttrib, FontAttrib, XLinkAttrib, PaintAttrib, PointAttrib, DeltaPointAttrib):
    """
    Class representing the glyphRef element of an svg doc.
    """
    def __init__(self, **kwargs):
        BaseElement.__init__(self, 'glyphRef')
        self.setKWARGS(**kwargs)

    def set_glyphRef(self, glyphRef):
        self._attributes['glyphRef'] = glyphRef

    def get_glyphRef(self):
        return self._attributes.get('glyphRef')

    def set_format(self, format):
        self._attributes['format'] = format

    def get_format(self):
        return self._attributes.get('format')

    def set_lengthAdjust(self, lengthAdjust):
        self._attributes['lengthAdjust'] = lengthAdjust

    def get_lengthAdjust(self):
        return self._attributes.get('lengthAdjust')


class AltGlyph(GlyphRef, ConditionalAttrib, GraphicalEventsAttrib, OpacityAttrib, GraphicsAttrib, CursorAttrib, FilterAttrib, MaskAttrib, ClipAttrib, TextContentAttrib, RotateAttrib):
    """
    Class representing the altGlyph element of an svg doc.
    """
    def __init__(self, **kwargs):
        BaseElement.__init__(self, 'altGlyph')
        self.setKWARGS(**kwargs)

    def set_textLength(self, textLength):
        self._attributes['textLength'] = textLength

    def get_textLength(self):
        return self._attributes.get('textLength')


class TextPath(BaseElement, CoreAttrib, ConditionalAttrib, ExternalAttrib, StyleAttrib, XLinkAttrib, FontAttrib, PaintAttrib, GraphicalEventsAttrib, OpacityAttrib, GraphicsAttrib, CursorAttrib, FilterAttrib, MaskAttrib, ClipAttrib, TextContentAttrib):
    """
    Class representing the textPath element of an svg doc.
    """
    def __init__(self, **kwargs):
        BaseElement.__init__(self, 'textPath')
        self.setKWARGS(**kwargs)

    def set_startOffset(self, startOffset):
        self._attributes['startOffset'] = startOffset

    def get_startOffset(self):
        return self._attributes.get('startOffset')

    def set_textLength(self, textLength):
        self._attributes['textLength'] = textLength

    def get_textLength(self):
        return self._attributes.get('textLength')

    def set_lengthAdjust(self, lengthAdjust):
        self._attributes['lengthAdjust'] = lengthAdjust

    def get_lengthAdjust(self):
        return self._attributes.get('lengthAdjust')

    def set_method(self, method):
        self._attributes['method'] = method

    def get_method(self):
        return self._attributes.get('method')

    def set_spacing(self, spacing):
        self._attributes['spacing'] = spacing

    def get_spacing(self):
        return self._attributes.get('spacing')


class Tref(BaseElement, CoreAttrib, ConditionalAttrib, ExternalAttrib, StyleAttrib, XLinkAttrib, PointAttrib, DeltaPointAttrib, RotateAttrib, GraphicalEventsAttrib, PaintAttrib, FontAttrib, OpacityAttrib, GraphicsAttrib, CursorAttrib, FilterAttrib, MaskAttrib, ClipAttrib, TextContentAttrib):
    """
    Class representing the tref element of an svg doc.
    """
    def __init__(self, **kwargs):
        BaseElement.__init__(self, 'tref')
        self.setKWARGS(**kwargs)

    def set_textLength(self, textLength):
        self._attributes['textLength'] = textLength

    def get_textLength(self):
        return self._attributes.get('textLength')

    def set_lengthAdjust(self, lengthAdjust):
        self._attributes['lengthAdjust'] = lengthAdjust

    def get_lengthAdjust(self):
        return self._attributes.get('lengthAdjust')


class Tspan(BaseElement, CoreAttrib, ConditionalAttrib, ExternalAttrib, StyleAttrib, PointAttrib, DeltaPointAttrib, RotateAttrib, GraphicalEventsAttrib, PaintAttrib, FontAttrib, OpacityAttrib, GraphicsAttrib, CursorAttrib, FilterAttrib, MaskAttrib, ClipAttrib, TextContentAttrib):
    """
    Class representing the tspan element of an svg doc.
    """
    def __init__(self, x=None, y=None, dx=None, dy=None, rotate=None, textLength=None, lengthAdjust=None, **kwargs):
        BaseElement.__init__(self, 'tspan')
        self.set_x(x)
        self.set_y(y)
        self.set_dx(dx)
        self.set_dy(dy)
        self.set_rotate(rotate)
        self.set_textLength(textLength)
        self.set_lengthAdjust(lengthAdjust)
        self.setKWARGS(**kwargs)

    def set_textLength(self, textLength):
        self._attributes['textLength'] = textLength

    def get_textLength(self):
        return self._attributes.get('textLength')

    def set_lengthAdjust(self, lengthAdjust):
        self._attributes['lengthAdjust'] = lengthAdjust

    def get_lengthAdjust(self):
        return self._attributes.get('lengthAdjust')


class Text(BaseElement, CoreAttrib, ConditionalAttrib, ExternalAttrib, StyleAttrib, PointAttrib, DeltaPointAttrib, RotateAttrib, GraphicalEventsAttrib, PaintAttrib, FontAttrib, OpacityAttrib, GraphicsAttrib, CursorAttrib, FilterAttrib, MaskAttrib, ClipAttrib, TextContentAttrib, TextAttrib):
    """
    Class representing the text element of an svg doc.
    """
    def __init__(self, content=None, x=None, y=None, dx=None, dy=None, rotate=None, textLength=None, lengthAdjust=None, **kwargs):
        BaseElement.__init__(self, 'text')
        if content is not None:  # was Python 2's deprecated ``<>`` operator
            self.appendTextContent(content)
        self.set_x(x)
        self.set_y(y)
        self.set_dx(dx)
        self.set_dy(dy)
        self.set_rotate(rotate)
        self.set_textLength(textLength)
        self.set_lengthAdjust(lengthAdjust)
        self.setKWARGS(**kwargs)

    def set_transform(self, transform):
        self._attributes['transform'] = transform

    def get_transform(self):
        return self._attributes.get('transform')

    def set_textLength(self, textLength):
        self._attributes['textLength'] = textLength

    def get_textLength(self):
        return self._attributes.get('textLength')

    def set_lengthAdjust(self, lengthAdjust):
        self._attributes['lengthAdjust'] = lengthAdjust

    def get_lengthAdjust(self):
        return self._attributes.get('lengthAdjust')
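All of the classes above share one pattern: every SVG attribute lives in a `_attributes` dict behind `set_*`/`get_*` accessors and is later serialized into a start tag. A self-contained sketch of that pattern (class and method names are ours, not pysvg's):

```python
# Minimal stand-in for the attribute-dict pattern used by the pysvg classes.
class MiniSvgElement:
    def __init__(self, name):
        self._name = name
        self._attributes = {}

    def set_attribute(self, key, value):
        self._attributes[key] = value

    def get_attribute(self, key):
        return self._attributes.get(key)

    def start_tag(self):
        # Sorted for deterministic output.
        attrs = ''.join(' %s="%s"' % (k, v)
                        for k, v in sorted(self._attributes.items()))
        return '<%s%s>' % (self._name, attrs)


t = MiniSvgElement('text')
t.set_attribute('textLength', '100')
t.set_attribute('lengthAdjust', 'spacing')
print(t.start_tag())  # <text lengthAdjust="spacing" textLength="100">
```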
# rafiki/model/model.py (dcslin/rafiki, Apache-2.0 license)

#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
import abc
import numpy as np
from typing import Union, Dict, Optional, Any, List
from .knob import BaseKnob
KnobConfig = Dict[str, BaseKnob]
Knobs = Dict[str, Any]
Params = Dict[str, Union[str, int, float, np.ndarray]]
class BaseModel(abc.ABC):
    '''
    Rafiki's base model class that Rafiki models must extend.

    Rafiki models must implement all abstract methods below, according to the specification of the associated task (see :ref:`tasks`).
    They configure how this model template will be trained, evaluated, tuned, serialized and served on Rafiki.

    In the model's ``__init__`` method, call ``super().__init__(**knobs)`` as the first line,
    followed by the model's initialization logic. The model should initialize itself with ``knobs``,
    a set of generated knob values for the created model instance.
    These knob values are chosen by Rafiki based on the model's knob configuration (defined in :meth:`rafiki.model.BaseModel.get_knob_config`).

    For example:

    ::

        def __init__(self, **knobs):
            super().__init__(**knobs)
            self.__dict__.update(knobs)
            ...
            self._build_model(self.knob1, self.knob2)

    :param knobs: Dictionary mapping knob names to knob values
    :type knobs: :obj:`rafiki.model.Knobs`
    '''
    def __init__(self, **knobs: Knobs):
        pass

    @abc.abstractstaticmethod
    def get_knob_config() -> KnobConfig:
        '''
        Return a dictionary that defines the search space for this model template's knobs
        (i.e. knobs' names, their types & their ranges).

        Over the course of training, your model will be initialized with different values of knobs within this search space
        to maximize this model's performance.

        Refer to :ref:`model-tuning` to understand more about how this works.

        :returns: Dictionary mapping knob names to knob specifications
        '''
        raise NotImplementedError()

    @abc.abstractmethod
    def train(self, dataset_path: str, shared_params: Optional[Params] = None, **train_args):
        '''
        Train this model instance with the given training dataset and initialized knob values.
        Additional keyword arguments could be passed depending on the task's specification.

        Additionally, trained parameters shared from previous trials could be passed,
        as part of the ``SHARE_PARAMS`` policy (see :ref:`model-policies`).

        Subsequently, the model is considered *trained*.

        :param dataset_path: File path of the train dataset file in the *local filesystem*, in a format specified by the task
        :param shared_params: Dictionary mapping parameter names to values, as produced by your model's :meth:`rafiki.model.BaseModel.dump_parameters`.
        '''
        raise NotImplementedError()

    @abc.abstractmethod
    def evaluate(self, dataset_path: str) -> float:
        '''
        Evaluate this model instance with the given validation dataset after training.

        This will be called only when model is *trained*.

        :param dataset_path: File path of the validation dataset file in the *local filesystem*, in a format specified by the task
        :returns: A score associated with the validation performance for the trained model instance, the higher the better e.g. classification accuracy.
        '''
        raise NotImplementedError()

    @abc.abstractmethod
    def predict(self, queries: List[Any]) -> List[Any]:
        '''
        Make predictions on a batch of queries after training.

        This will be called only when model is *trained*.

        :param queries: List of queries, where a query is in the format specified by the task
        :returns: List of predictions, in an order corresponding to the queries, where a prediction is in the format specified by the task
        '''
        raise NotImplementedError()

    @abc.abstractmethod
    def dump_parameters(self) -> Params:
        '''
        Returns a dictionary of model parameters that *fully define the trained state of the model*.
        This dictionary must conform to the format :obj:`rafiki.model.Params`.
        This will be used to save the trained model in Rafiki.

        Additionally, trained parameters produced by this method could be shared with future trials, as
        part of the ``SHARE_PARAMS`` policy (see :ref:`model-policies`).

        This will be called only when model is *trained*.

        :returns: Dictionary mapping parameter names to values
        '''
        raise NotImplementedError()

    @abc.abstractmethod
    def load_parameters(self, params: Params):
        '''
        Loads this model instance with previously trained model parameters produced by your model's :meth:`rafiki.model.BaseModel.dump_parameters`.
        *This model instance's initialized knob values will match those during training*.

        Subsequently, the model is considered *trained*.
        '''
        raise NotImplementedError()

    def destroy(self):
        '''
        Destroy this model instance, freeing any resources held by this model instance.
        No other instance methods will be called subsequently.
        '''
        pass

    @staticmethod
    def teardown():
        '''
        Runs class-wide teardown logic (e.g. close a training session shared across trials).
        '''
        pass


class PandaModel(BaseModel):
    def __init__(self, **knobs: Knobs):
        super().__init__(**knobs)

    @abc.abstractmethod
    def local_explain(self, queries, params: Params):
        raise NotImplementedError()
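The contract above can be exercised without Rafiki installed by using a stand-in base class. Everything below (`TinyBaseModel`, `ConstantModel`) is our own sketch, not the Rafiki API: a model receives knob values in `__init__`, and `dump_parameters()`/`load_parameters()` round-trip its trained state between instances.

```python
# Stand-alone sketch of the BaseModel lifecycle with a stand-in base class.
import abc


class TinyBaseModel(abc.ABC):          # stand-in, NOT rafiki.model.BaseModel
    def __init__(self, **knobs):
        pass

    @abc.abstractmethod
    def dump_parameters(self): ...

    @abc.abstractmethod
    def load_parameters(self, params): ...


class ConstantModel(TinyBaseModel):
    def __init__(self, **knobs):
        super().__init__(**knobs)
        self.__dict__.update(knobs)    # as the docstring example suggests
        self._bias = None

    def train(self, values):
        self._bias = sum(values) / len(values)

    def dump_parameters(self):
        return {'bias': self._bias}    # fully defines the trained state

    def load_parameters(self, params):
        self._bias = params['bias']


m = ConstantModel(scale=2)
m.train([1, 2, 3])
m2 = ConstantModel(scale=2)
m2.load_parameters(m.dump_parameters())
print(m2._bias)  # 2.0
```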
# Dev/Cpp/CreateHeader.py (Shockblast/Effekseer, Apache-2.0 / BSD-3-Clause license)

import os
import re
import codecs
def isValidLine(line):
    # Keep a line if it is not an #include of a local header, or if it
    # includes a platform-specific one (.PSVita / .PS4 / .Switch / .XBoxOne).
    if (re.search('include \"', line) is None
            or line.find('.PSVita') != -1
            or line.find('.PS4') != -1
            or line.find('.Switch') != -1
            or line.find('.XBoxOne') != -1):
        return True
    return False


class CreateHeader:
    def __init__(self):
        self.lines = []

    def addLine(self, line):
        self.lines.append(line)

    def readLines(self, path):
        f = codecs.open(path, 'r', 'utf-8_sig')
        line = f.readline()
        while line:
            if isValidLine(line):
                self.lines.append(line.strip(os.linesep))
            line = f.readline()
        f.close()

    def output(self, path):
        f = codecs.open(path, 'w', 'utf-8_sig')
        for line in self.lines:
            f.write(line + os.linesep)
        f.close()
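The amalgamation step performed by `CreateHeader` (concatenate many headers, dropping local `#include` lines so the merged header is self-contained) can be sketched on in-memory strings instead of files. `is_valid_line` and `merge` below are our own helper names:

```python
# Sketch of header amalgamation: keep non-include lines and
# platform-specific includes, drop other local includes.
import re


def is_valid_line(line):
    return re.search(r'include "', line) is None or '.PSVita' in line


def merge(sources):
    out = []
    for text in sources:
        out.extend(l for l in text.splitlines() if is_valid_line(l))
    return '\n'.join(out)


merged = merge(['#pragma once\n#include "a.h"', 'int x;'])
print(merged)  # '#pragma once' then 'int x;' on separate lines
```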
effekseerHeader = CreateHeader()
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.Base.Pre.h')
effekseerHeader.readLines('Effekseer/Effekseer/Utils/Effekseer.CustomAllocator.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.Vector2D.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.Vector3D.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.Color.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.RectF.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.Matrix43.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.Matrix44.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.File.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.DefaultFile.h')
effekseerHeader.readLines('Effekseer/Effekseer/Backend/GraphicsDevice.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.Resource.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.Effect.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.Manager.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.Setting.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.Server.h')
effekseerHeader.readLines('Effekseer/Effekseer/Effekseer.Client.h')
effekseerHeader.addLine('')
effekseerHeader.addLine('#include "Effekseer.Modules.h"')
effekseerHeader.addLine('')
effekseerHeader.output('Effekseer/Effekseer.h')
effekseerSimdHeader = CreateHeader()
effekseerSimdHeader.addLine('#pragma once')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Base.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Float4_Gen.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Float4_NEON.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Float4_SSE.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Int4_Gen.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Int4_NEON.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Int4_SSE.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Bridge_Gen.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Bridge_NEON.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Bridge_SSE.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Vec2f.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Vec3f.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Vec4f.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Mat43f.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Mat44f.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Quaternionf.h')
effekseerSimdHeader.readLines('Effekseer/Effekseer/SIMD/Utils.h')
effekseerSimdHeader.output('Effekseer/Effekseer.SIMD.h')
effekseerModulesHeader = CreateHeader()
effekseerModulesHeader.addLine('#pragma once')
effekseerModulesHeader.addLine('')
effekseerModulesHeader.addLine('#include "Effekseer.h"')
effekseerModulesHeader.addLine('#include "Effekseer.SIMD.h"')
effekseerModulesHeader.addLine('')
effekseerModulesHeader.addLine('// A header to access internal data of effekseer')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Parameter/Effekseer.Parameters.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Renderer/Effekseer.SpriteRenderer.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Renderer/Effekseer.RibbonRenderer.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Renderer/Effekseer.RingRenderer.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Renderer/Effekseer.ModelRenderer.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Renderer/Effekseer.TrackRenderer.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Effekseer.EffectLoader.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Effekseer.TextureLoader.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Model/Model.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Model/ModelLoader.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Effekseer.MaterialLoader.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Model/Model.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Effekseer.Curve.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Effekseer.CurveLoader.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Sound/Effekseer.SoundPlayer.h')
effekseerModulesHeader.readLines('Effekseer/Effekseer/Effekseer.SoundLoader.h')
effekseerModulesHeader.output('Effekseer/Effekseer.Modules.h')
effekseerRendererDX9Header = CreateHeader()
effekseerRendererDX9Header.readLines('EffekseerRendererDX9/EffekseerRenderer/EffekseerRendererDX9.Base.Pre.h')
effekseerRendererDX9Header.readLines('EffekseerRendererCommon/EffekseerRenderer.Renderer.h')
effekseerRendererDX9Header.readLines('EffekseerRendererDX9/EffekseerRenderer/EffekseerRendererDX9.Renderer.h')
effekseerRendererDX9Header.output('EffekseerRendererDX9/EffekseerRendererDX9.h')
effekseerRendererDX11Header = CreateHeader()
effekseerRendererDX11Header.readLines('EffekseerRendererDX11/EffekseerRenderer/EffekseerRendererDX11.Base.Pre.h')
effekseerRendererDX11Header.readLines('EffekseerRendererCommon/EffekseerRenderer.Renderer.h')
effekseerRendererDX11Header.readLines('EffekseerRendererDX11/EffekseerRenderer/EffekseerRendererDX11.Renderer.h')
effekseerRendererDX11Header.output('EffekseerRendererDX11/EffekseerRendererDX11.h')
effekseerRendererDX12Header = CreateHeader()
effekseerRendererDX12Header.readLines('EffekseerRendererDX12/EffekseerRenderer/EffekseerRendererDX12.Base.Pre.h')
effekseerRendererDX12Header.readLines('EffekseerRendererCommon/EffekseerRenderer.Renderer.h')
effekseerRendererDX12Header.readLines('EffekseerRendererDX12/EffekseerRenderer/EffekseerRendererDX12.Renderer.h')
effekseerRendererDX12Header.readLines('EffekseerRendererLLGI/Common.h')
effekseerRendererDX12Header.output('EffekseerRendererDX12/EffekseerRendererDX12.h')
effekseerRendererVulkanHeader = CreateHeader()
effekseerRendererVulkanHeader.readLines('EffekseerRendererVulkan/EffekseerRenderer/EffekseerRendererVulkan.Base.Pre.h')
effekseerRendererVulkanHeader.readLines('EffekseerRendererCommon/EffekseerRenderer.Renderer.h')
effekseerRendererVulkanHeader.readLines('EffekseerRendererVulkan/EffekseerRenderer/EffekseerRendererVulkan.Renderer.h')
effekseerRendererVulkanHeader.readLines('EffekseerRendererLLGI/Common.h')
effekseerRendererVulkanHeader.output('EffekseerRendererVulkan/EffekseerRendererVulkan.h')
effekseerRendererGLHeader = CreateHeader()
effekseerRendererGLHeader.readLines('EffekseerRendererGL/EffekseerRenderer/EffekseerRendererGL.Base.Pre.h')
effekseerRendererGLHeader.readLines('EffekseerRendererCommon/EffekseerRenderer.Renderer.h')
effekseerRendererGLHeader.readLines('EffekseerRendererGL/EffekseerRenderer/EffekseerRendererGL.Renderer.h')
effekseerRendererGLHeader.output('EffekseerRendererGL/EffekseerRendererGL.h')
effekseerRendererMetalHeader = CreateHeader()
effekseerRendererMetalHeader.readLines('EffekseerRendererMetal/EffekseerRenderer/EffekseerRendererMetal.Base.Pre.h')
effekseerRendererMetalHeader.readLines('EffekseerRendererCommon/EffekseerRenderer.Renderer.h')
effekseerRendererMetalHeader.readLines('EffekseerRendererMetal/EffekseerRenderer/EffekseerRendererMetal.Renderer.h')
effekseerRendererMetalHeader.readLines('EffekseerRendererLLGI/Common.h')
effekseerRendererMetalHeader.output('EffekseerRendererMetal/EffekseerRendererMetal.h')
# CircuitPython_101/basic_data_structures/song_book/code.py
# (billagee/Adafruit_Learning_System_Guides, MIT license)

import time
import board
import debouncer
import busio as io
import digitalio
import pulseio
import adafruit_ssd1306
i2c = io.I2C(board.SCL, board.SDA)
reset_pin = digitalio.DigitalInOut(board.D11)
oled = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c, reset=reset_pin)
button_select = debouncer.Debouncer(board.D7, mode=digitalio.Pull.UP)
button_play = debouncer.Debouncer(board.D9, mode=digitalio.Pull.UP)
C4 = 261
C_SH_4 = 277
D4 = 293
D_SH_4 = 311
E4 = 329
F4 = 349
F_SH_4 = 369
G4 = 392
G_SH_4 = 415
A4 = 440
A_SH_4 = 466
B4 = 493
# pylint: disable=line-too-long
songbook = {
    'Twinkle Twinkle': [
        (C4, 0.5), (C4, 0.5), (G4, 0.5), (G4, 0.5), (A4, 0.5), (A4, 0.5), (G4, 1.0), (0, 0.5),
        (F4, 0.5), (F4, 0.5), (E4, 0.5), (E4, 0.5), (D4, 0.5), (D4, 0.5), (C4, 0.5), (0, 0.5),
        (G4, 0.5), (G4, 0.5), (F4, 0.5), (F4, 0.5), (E4, 0.5), (E4, 0.5), (D4, 0.5), (0, 0.5),
        (G4, 0.5), (G4, 0.5), (F4, 0.5), (F4, 0.5), (E4, 0.5), (E4, 0.5), (D4, 0.5), (0, 0.5),
        (C4, 0.5), (C4, 0.5), (G4, 0.5), (G4, 0.5), (A4, 0.5), (A4, 0.5), (G4, 1.0), (0, 0.5),
        (F4, 0.5), (F4, 0.5), (E4, 0.5), (E4, 0.5), (D4, 0.5), (D4, 0.5), (C4, 0.5), (0, 0.5)],
    'ItsyBitsy Spider': [
        (G4, 0.5), (C4, 0.5), (C4, 0.5), (C4, 0.5), (D4, 0.5), (E4, 0.5), (E4, 0.5), (E4, 0.5), (D4, 0.5), (C4, 0.5), (D4, 0.5), (E4, 0.5), (C4, 0.5), (0, 0.5),
        (E4, 0.5), (E4, 0.5), (F4, 0.5), (G4, 0.5), (G4, 0.5), (F4, 0.5), (E4, 0.5), (F4, 0.5), (G4, 0.5), (E4, 0.5), (0, 0.5)],
    'Old MacDonald': [
        (G4, 0.5), (G4, 0.5), (G4, 0.5), (D4, 0.5), (E4, 0.5), (E4, 0.5), (D4, 0.5), (0, 0.5),
        (B4, 0.5), (B4, 0.5), (A4, 0.5), (A4, 0.5), (G4, 0.5), (0, 0.5),
        (D4, 0.5), (G4, 0.5), (G4, 0.5), (G4, 0.5), (D4, 0.5), (E4, 0.5), (E4, 0.5), (D4, 0.5), (0, 0.5),
        (B4, 0.5), (B4, 0.5), (A4, 0.5), (A4, 0.5), (G4, 0.5), (0, 0.5),
        (D4, 0.5), (D4, 0.5), (G4, 0.5), (G4, 0.5), (G4, 0.5), (D4, 0.5), (D4, 0.5), (G4, 0.5), (G4, 0.5), (G4, 0.5), (0, 0.5),
        (G4, 0.5), (G4, 0.5), (G4, 0.5), (G4, 0.5), (G4, 0.5), (G4, 0.5), (0, 0.5),
        (G4, 0.5), (G4, 0.5), (G4, 0.5), (G4, 0.5), (G4, 0.5), (G4, 0.5), (0, 0.5),
        (G4, 0.5), (G4, 0.5), (G4, 0.5), (D4, 0.5), (E4, 0.5), (E4, 0.5), (D4, 0.5), (0, 0.5),
        (B4, 0.5), (B4, 0.5), (A4, 0.5), (A4, 0.5), (G4, 0.5), (0, 0.5)],
}
# pylint: enable=line-too-long
def play_note(note):
if note[0] != 0:
pwm = pulseio.PWMOut(board.D12, duty_cycle = 0, frequency=note[0])
# Hex 7FFF (binary 0111111111111111) is half of the largest value for a 16-bit int,
# i.e. 50%
pwm.duty_cycle = 0x7FFF
time.sleep(note[1])
if note[0] != 0:
pwm.deinit()
def play_song(songname):
for note in songbook[songname]:
play_note(note)
def update(songnames, selected):
oled.fill(0)
line = 0
for songname in songnames:
if line == selected:
oled.text(">", 0, line * 8)
oled.text(songname, 10, line * 8)
line += 1
oled.show()
selected_song = 0
song_names = sorted(list(songbook.keys()))
while True:
button_select.update()
button_play.update()
update(song_names, selected_song)
if button_select.fell:
print("select")
selected_song = (selected_song + 1) % len(songbook)
elif button_play.fell:
print("play")
play_song(song_names[selected_song])
| 41.433333 | 185 | 0.448914 | 676 | 3,729 | 2.424556 | 0.177515 | 0.169616 | 0.102502 | 0.122026 | 0.345943 | 0.328249 | 0.328249 | 0.328249 | 0.317877 | 0.317877 | 0 | 0.208446 | 0.307857 | 3,729 | 89 | 186 | 41.898876 | 0.426579 | 0.039957 | 0 | 0.109589 | 0 | 0 | 0.015385 | 0 | 0 | 0 | 0.001678 | 0 | 0 | 1 | 0.041096 | false | 0 | 0.09589 | 0 | 0.136986 | 0.027397 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
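The songbook above stores each note as a plain `(frequency_hz, duration_s)` tuple, with frequency 0 standing in for a rest. A hardware-free sketch of working with that structure (helper names are illustrative, not part of the original code):

```python
# Each note is a (frequency_hz, duration_s) tuple; frequency 0 is a rest.
C4, G4 = 261, 392

twinkle_opening = [(C4, 0.5), (C4, 0.5), (G4, 0.5), (G4, 0.5), (0, 0.5)]

def song_length(notes):
    """Total playing time in seconds, rests included."""
    return sum(duration for _, duration in notes)

def count_rests(notes):
    """Number of silent gaps (frequency == 0) in the song."""
    return sum(1 for freq, _ in notes if freq == 0)

print(song_length(twinkle_opening))  # 2.5
print(count_rests(twinkle_opening))  # 1
```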
7903881baf28fb04948dceaf26f6f1e7b726da74 | 417 | py | Python | polyaxon/api/repos/serializers.py | elyase/polyaxon | 1c19f059a010a6889e2b7ea340715b2bcfa382a0 | [
"MIT"
] | null | null | null | polyaxon/api/repos/serializers.py | elyase/polyaxon | 1c19f059a010a6889e2b7ea340715b2bcfa382a0 | [
"MIT"
] | null | null | null | polyaxon/api/repos/serializers.py | elyase/polyaxon | 1c19f059a010a6889e2b7ea340715b2bcfa382a0 | [
"MIT"
] | null | null | null | from rest_framework import fields, serializers
from db.models.repos import Repo
class RepoSerializer(serializers.ModelSerializer):
project = fields.SerializerMethodField()
class Meta:
model = Repo
fields = ('project', 'created_at', 'updated_at', 'is_public', )
def get_user(self, obj):
return obj.user.username
def get_project(self, obj):
return obj.project.name
| 23.166667 | 71 | 0.688249 | 49 | 417 | 5.734694 | 0.612245 | 0.042705 | 0.092527 | 0.113879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215827 | 417 | 17 | 72 | 24.529412 | 0.859327 | 0 | 0 | 0 | 0 | 0 | 0.086331 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.181818 | 0.181818 | 0.818182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
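`RepoSerializer` relies on DRF's `SerializerMethodField` convention of dispatching to a `get_<field_name>` method; note that `get_user` above is never invoked, because `user` is not listed in `Meta.fields`. A simplified, framework-free sketch of that name-based dispatch (not DRF's actual implementation):

```python
class MethodFieldDispatcher:
    fields = ('project',)

    def get_project(self, obj):
        return obj['name']

    def serialize(self, obj):
        # Look up get_<field> by name, mirroring SerializerMethodField's
        # default method_name of 'get_' + field_name.
        return {f: getattr(self, f'get_{f}')(obj) for f in self.fields}

repo = {'name': 'polyaxon'}
print(MethodFieldDispatcher().serialize(repo))  # {'project': 'polyaxon'}
```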
790ab10c114ed809a1b80f3e101c2509b9257268 | 332 | py | Python | modules/timeblock.py | 5225225/bar | cc72eb45f21ac2b2e070c6d9f66b306ed51aef35 | [
"MIT"
] | 1 | 2015-09-05T17:07:59.000Z | 2015-09-05T17:07:59.000Z | modules/timeblock.py | 5225225/bar | cc72eb45f21ac2b2e070c6d9f66b306ed51aef35 | [
"MIT"
] | null | null | null | modules/timeblock.py | 5225225/bar | cc72eb45f21ac2b2e070c6d9f66b306ed51aef35 | [
"MIT"
] | 2 | 2015-09-05T17:08:02.000Z | 2019-02-22T21:14:08.000Z | import linelib
import datetime
import signal
def handler(x, y):
pass
signal.signal(signal.SIGUSR1, handler)
signal.signal(signal.SIGALRM, handler)
while True:
linelib.sendblock("date", {"full_text": datetime.datetime.now().strftime(
"%Y-%m-%e %H:%M:%S"
)})
linelib.sendPID("date")
linelib.waitsig(1)
| 18.444444 | 77 | 0.671687 | 44 | 332 | 5.045455 | 0.590909 | 0.216216 | 0.162162 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007246 | 0.168675 | 332 | 17 | 78 | 19.529412 | 0.797101 | 0 | 0 | 0 | 0 | 0 | 0.10241 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0.076923 | 0.230769 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
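The `%e` directive in the format string above (space-padded day of month) is a C-library extension that `strftime` does not support on every platform; `%d` is the portable zero-padded equivalent. A minimal illustration:

```python
from datetime import datetime

stamp = datetime(2021, 3, 7, 14, 5, 9)

# %d is the portable zero-padded day directive; %e (space-padded day,
# as used above) is a C-library extension unavailable on e.g. Windows.
print(stamp.strftime("%Y-%m-%d %H:%M:%S"))  # 2021-03-07 14:05:09
```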
790b99b7c8510d3b99bd51ef86e99adaa01fb768 | 183 | py | Python | modules/isrunning.py | ShaderLight/autochampselect | b7d346cc99011b5f84867f3a01dc2e8d815c05d7 | [
"MIT"
] | null | null | null | modules/isrunning.py | ShaderLight/autochampselect | b7d346cc99011b5f84867f3a01dc2e8d815c05d7 | [
"MIT"
] | null | null | null | modules/isrunning.py | ShaderLight/autochampselect | b7d346cc99011b5f84867f3a01dc2e8d815c05d7 | [
"MIT"
] | null | null | null | from subprocess import check_output
def isrunning(processName):
tasklist = check_output('tasklist', shell=False)
    # Decode the bytes returned by check_output; str(bytes) would keep
    # the b'...' repr wrapper and escape sequences in the text.
    tasklist = tasklist.decode(errors="ignore")
return(processName in tasklist) | 26.142857 | 52 | 0.759563 | 21 | 183 | 6.52381 | 0.666667 | 0.160584 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15847 | 183 | 7 | 53 | 26.142857 | 0.88961 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
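`isrunning` above does a plain substring test on the tasklist output, so a short query such as "note" also matches "notepad.exe". A sketch of the pitfall and a token-based alternative, operating on captured text so it runs anywhere (function names are illustrative):

```python
sample = "notepad.exe  1234 Console\npython.exe  5678 Console\n"

def isrunning_from_output(process_name, tasklist_text):
    # Plain substring matching, as in isrunning() above.
    return process_name in tasklist_text

def isrunning_exact(process_name, tasklist_text):
    # Compare against the first whitespace-separated token of each line.
    names = {line.split()[0] for line in tasklist_text.splitlines() if line.strip()}
    return process_name in names

print(isrunning_from_output("python.exe", sample))  # True
print(isrunning_from_output("note", sample))        # True (false positive)
print(isrunning_exact("note", sample))              # False
```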
7912dbc97deac732656616d2bb8fce94ba34891e | 1,829 | py | Python | tests/test_simple supported_beam.py | bteodoru/ebbef2p-python | 449e6414e4ce3ef3deccf9fd892410b2d15578ef | [
"MIT"
] | 2 | 2020-05-03T19:14:39.000Z | 2020-05-03T19:20:19.000Z | tests/test_simple supported_beam.py | bteodoru/ebbef2p-python | 449e6414e4ce3ef3deccf9fd892410b2d15578ef | [
"MIT"
] | null | null | null | tests/test_simple supported_beam.py | bteodoru/ebbef2p-python | 449e6414e4ce3ef3deccf9fd892410b2d15578ef | [
"MIT"
] | null | null | null | import pytest
import numpy as np
from ebbef2p.structure import Structure
L = 2
E = 1
I = 1
def test_center_load():
P = 100
M_max = P * L / 4 # maximum moment
S_max = P/2 # max shearing force
w_max = -P * L ** 3 / (48 * E * I) # max displacement
tolerance = 1e-6 #set a tolerance of 0.0001%
s = Structure('test')
s.add_beam(coord=[0, L], E=E, I=I)
s.add_nodal_load(P, L/2, 'fz')
s.add_nodal_support({'uz': 0, 'ur': "NaN"}, 0)
s.add_nodal_support({'uz': 0, 'ur': "NaN"}, L)
s.add_nodes(25)
s.add_elements(s.nodes)
s.solve(s.build_global_matrix(), s.build_load_vector(), s.get_boudary_conditions())
assert min(s.get_displacements()['vertical_displacements']) == pytest.approx(w_max, rel=tolerance)
assert max(s.get_bending_moments()['values']) == pytest.approx(M_max, rel=tolerance)
assert max(s.get_shear_forces()['values']) == pytest.approx(S_max, rel=tolerance)
def test_uniformly_distributed_load():
q = 10
M_max = q * L ** 2 / 8 # maximum moment
S_max = q * L/2 # max shearing force
w_max = -5 * q * L ** 4 / (384 * E * I) # max displacement
tolerance = 1e-4 #set a tolerance of 0.01%
s = Structure('test')
s.add_beam(coord=[0, L], E=E, I=I)
s.add_distributed_load((q, q), (0, L))
s.add_nodal_support({'uz': 0, 'ur': "NaN"}, 0)
s.add_nodal_support({'uz': 0, 'ur': "NaN"}, L)
s.add_nodes(200)
s.add_elements(s.nodes)
s.solve(s.build_global_matrix(), s.build_load_vector(), s.get_boudary_conditions())
assert min(s.get_displacements()['vertical_displacements']) == pytest.approx(w_max, rel=tolerance)
assert max(s.get_bending_moments()['values']) == pytest.approx(M_max, rel=tolerance)
assert max(s.get_shear_forces()['values']) == pytest.approx(S_max, rel=1e-2) | 34.509434 | 102 | 0.630946 | 302 | 1,829 | 3.625828 | 0.261589 | 0.043836 | 0.041096 | 0.058447 | 0.776256 | 0.747032 | 0.657534 | 0.657534 | 0.657534 | 0.657534 | 0 | 0.034812 | 0.199016 | 1,829 | 53 | 103 | 34.509434 | 0.712628 | 0.083652 | 0 | 0.4 | 0 | 0 | 0.063549 | 0.026379 | 0 | 0 | 0 | 0 | 0.15 | 1 | 0.05 | false | 0 | 0.075 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
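The assertions above check the FE solution against closed-form results for a simply supported beam; the same textbook formulas can be evaluated standalone:

```python
# Simply supported beam: closed-form maxima used by the tests above
# (standard textbook formulas).
L, E, I = 2, 1, 1

# Center point load P
P = 100
M_max_point = P * L / 4                  # bending moment at midspan
w_max_point = -P * L**3 / (48 * E * I)   # midspan deflection

# Uniformly distributed load q
q = 10
M_max_udl = q * L**2 / 8
w_max_udl = -5 * q * L**4 / (384 * E * I)

print(M_max_point, M_max_udl)  # 50.0 5.0
```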
79210aefb1eb42f15c46a5decf553a199213c5f4 | 155,776 | py | Python | src/ansys/mapdl/core/_commands/solution/analysis_options.py | Miiicah/pymapdl | ce85393ca82db7556a5d05883ca3fd9296444cba | [
"MIT"
] | 194 | 2016-10-21T08:46:41.000Z | 2021-01-06T20:39:23.000Z | ansys/mapdl/core/_commands/solution/analysis_options.py | NewsamNiu/pymapdl | 482c960142a612997eb33216731aaa88f1371168 | [
"MIT"
] | 463 | 2021-01-12T14:07:38.000Z | 2022-03-31T22:42:25.000Z | ansys/mapdl/core/_commands/solution/analysis_options.py | NewsamNiu/pymapdl | 482c960142a612997eb33216731aaa88f1371168 | [
"MIT"
] | 66 | 2016-11-21T04:26:08.000Z | 2020-12-28T09:27:27.000Z | from typing import Optional
from ansys.mapdl.core.mapdl_types import MapdlInt
class AnalysisOptions:
def abextract(self, mode1="", mode2="", **kwargs):
"""Extracts the alpha-beta damping multipliers for Rayleigh damping.
APDL Command: ABEXTRACT
Parameters
----------
mode1
First mode number.
mode2
Second mode number.
Notes
-----
ABEXTRACT calls the command macro DMPEXT to extract the damping ratio
of MODE1 and MODE2 and then computes the Alpha and Beta damping
multipliers for use in a subsequent structural harmonic or transient
analysis. See Damping in the Structural Analysis Guide for more
information on the alpha and beta damping multipliers. The damping
multipliers are stored in parameters ALPHADMP and BETADMP and can be
applied using the ALPHAD and BETAD commands. Before calling ABEXTRACT,
you must issue RMFLVEC to extract the modal displacements. In addition,
a node component FLUN must exist from all FLUID136 nodes. See
Introduction for more information on thin film analyses.
This command is also valid in PREP7.
Distributed ANSYS Restriction: This command is not supported in
Distributed ANSYS.
"""
command = f"ABEXTRACT,{mode1},{mode2}"
return self.run(command, **kwargs)
def accoption(self, activate="", **kwargs):
"""Specifies GPU accelerator capability options.
APDL Command: ACCOPTION
Parameters
----------
activate
Activates the GPU accelerator capability within the equation
solvers.
Do not use GPU accelerator. - Use GPU accelerator.
Notes
-----
The GPU accelerator capability requires specific hardware to be
installed on the machine. See the appropriate ANSYS, Inc. Installation
Guide (Windows or Linux) for a list of supported GPU hardware. Use of
this capability also requires HPC licensing. For more information, see
GPU Accelerator Capability in the Parallel Processing Guide.
The GPU accelerator capability is available for the sparse direct
solver and the PCG and JCG iterative solvers. Static, buckling, modal,
full harmonic, and full transient analyses are supported. For buckling
analyses, the Block Lanczos and Subspace eigensolvers are supported.
For modal analyses, only the Block Lanczos, PCG Lanczos, Subspace,
Unsymmetric, and Damped eigensolvers are supported. Activating this
capability when using other equation solvers or other analysis types
has no effect.
The GPU accelerator capability is supported only on the Windows 64-bit
and Linux 64-bit platforms.
"""
command = f"ACCOPTION,{activate}"
return self.run(command, **kwargs)
def adams(self, nmodes="", kstress="", kshell="", **kwargs):
"""Performs solutions and writes flexible body information to a modal
APDL Command: ADAMS
neutral file (Jobname.MNF) for use in an ADAMS analysis.
Parameters
----------
nmodes
Number of normal modes to be written to Jobname.MNF file (no
default).
kstress
Specifies whether to write stress or strain results:
0 - Do not write stress or strain results (default).
1 - Write stress results.
2 - Write strain results.
3 - Write both stress and strain results.
kshell
Shell element output location. This option is valid only for shell
elements.
0, 1 - Shell top surface (default).
2 - Shell middle surface.
3 - Shell bottom surface.
Notes
-----
ADAMS invokes a predefined ANSYS macro that solves a series of analyses
and then writes the modal neutral file, Jobname.MNF. This file can be
imported into the ADAMS program in order to perform a rigid body
dynamics simulation. For detailed information on how to use the ADAMS
command macro to create a modal neutral file, see Rigid Body Dynamics
and the ANSYS-ADAMS Interface in the Substructuring Analysis Guide.
Before running the ADAMS command macro, you must specify the units with
the /UNITS command. The interface points should be the only selected
nodes when the command macro is initiated. (Interface points are nodes
where constraints may be applied in ADAMS.) Only selected elements will
be considered in the calculations.
By default, stress and strain data is transferred to the ADAMS program
for all nodes, as specified by the KSTRESS value. If you want to
transfer stress/strain data for only a subset of nodes, select the
desired subset and create a node component named "STRESS" before
running the ADAMS command macro. For example, you may want to select
exterior nodes for the purpose of visualization in the ADAMS program.
The default filename for the modal neutral file is Jobname.MNF. In
interactive (GUI) mode, you can specify a filename other than
Jobname.MNF. In batch mode, there is no option to change the filename,
and the modal neutral file is always written to Jobname.MNF.
"""
command = f"ADAMS,{nmodes},{kstress},{kshell}"
return self.run(command, **kwargs)
def antype(self, antype="", status="", ldstep="", substep="", action="", **kwargs):
"""Specifies the analysis type and restart status.
APDL Command: ANTYPE
Parameters
----------
antype
Analysis type (defaults to the previously specified analysis type,
or to STATIC if none specified):
STATIC or 0 - Perform a static analysis. Valid for all degrees of freedom.
BUCKLE or 1 - Perform a buckling analysis. Implies that a previous static solution was
performed with prestress effects calculated
(PSTRES,ON). Valid for structural degrees of freedom
only.
MODAL or 2 - Perform a modal analysis. Valid for structural and fluid degrees of freedom.
HARMIC or 3 - Perform a harmonic analysis. Valid for structural, fluid, magnetic, and
electrical degrees of freedom.
TRANS or 4 - Perform a transient analysis. Valid for all degrees of freedom.
SUBSTR or 7 - Perform a substructure analysis. Valid for all degrees of freedom.
SPECTR or 8 - Perform a spectrum analysis. Implies that a previous modal analysis was
performed. Valid for structural degrees of freedom
only.
status
Specifies the status of the analysis (new or restart):
NEW - Specifies a new analysis (default). If NEW, the remaining fields on this
command are ignored.
RESTART - Specifies a restart of a previous analysis. Valid for static, modal, and
transient (full or mode-superposition method) analyses.
For more information about restarting static and
transient analyses, see Multiframe Restart in the Basic
Analysis Guide. For more information on restarting a
modal analysis, see Modal Analysis Restart in the Basic
Analysis Guide.
            Multiframe restart is also valid for harmonic analysis, but is
            limited to 2-D magnetic analysis only.

            A substructure analysis (backsubstitution method only) can be
            restarted for the purpose of generating additional load vectors.
            For more information, see the SEOPT command and Applying Loads
            and Creating the Superelement Matrices in the Substructuring
            Analysis Guide.
VTREST - Specifies the restart of a previous VT Accelerator analysis. Valid only with
Antype = STATIC, HARMIC, or TRANS. For more information,
see VT Accelerator Re-run in the Basic Analysis Guide.
ldstep
Specifies the load step at which a multiframe restart begins.
substep
Specifies the substep at which a multiframe restart begins.
action
Specifies the manner of a multiframe restart.
CONTINUE - The program continues the analysis based on the specified LDSTEP and SUBSTEP
(default). The current load step is continued. If the
end of the load step is encountered in the .Rnnn file, a
new load step is started. The program deletes all .Rnnn
files, or .Mnnn files for mode-superposition transient
analyses, beyond the point of restart and updates the
.LDHI file if a new load step is encountered.
ENDSTEP - At restart, force the specified load step (LDSTEP) to end at the specified
substep (SUBSTEP), even though the end of the current
load step has not been reached. At the end of the
specified substep, all loadings are scaled to the level
of the current ending and stored in the .LDHI file. A run
following this ENDSTEP starts a new load step. This
capability allows you to change the load level in the
middle of a load step. The program updates the .LDHI file
and deletes all .Rnnn files, or .Mnnn files for mode-
superposition transient analyses, beyond the point of
ENDSTEP. The .Rnnn or .Mnnn file at the point of ENDSTEP
are rewritten to record the rescaled load level.
RSTCREATE - At restart, retrieve information to be written to the results file for the
specified load step (LDSTEP) and substep (SUBSTEP). Be
sure to use OUTRES to write the results to the
results file. This action does not affect the .LDHI or
.Rnnn files. Previous items stored in the results file
at and beyond the point of RSTCREATE are deleted. This
option cannot be used to restart a mode-superposition
transient analysis.
PERTURB - At restart, a linear perturbation analysis (static, modal, buckling, or full
harmonic) is performed for the specified load step
(LDSTEP) and substep (SUBSTEP). This action does not
affect the .LDHI, .Rnnn, or .RST files.
Notes
-----
If using the ANTYPE command to change the analysis type in the same
SOLVE session, the program issues the following message: "Some analysis
options have been reset to their defaults. Please verify current
settings or respecify as required." Typically, the program resets
commands such as NLGEOM and EQSLV to their default values.
The analysis type (Antype) cannot be changed if a restart is specified.
Always save parameters before doing a restart. For more information on
the different types of restart, see Restarting an Analysis in the Basic
Analysis Guide.
This command is also valid in PREP7.
The ANSYS Professional - Nonlinear Structural (PRN) product supports
the Antype = TRANS option for mode-superposition (TRNOPT,MSUP) analyses
only.
"""
command = f"ANTYPE,{antype},{status},{ldstep},{substep},{action}"
return self.run(command, **kwargs)
def ascres(self, opt="", **kwargs):
"""Specifies the output type for an acoustic scattering analysis.
APDL Command: ASCRES
Parameters
----------
opt
Output option:
TOTAL - Output the total pressure field (default).
SCAT - Output the scattered pressure field.
Notes
-----
Use the ASCRES command to specify the output type for an acoustic
scattering analysis.
The scattered option (Opt = SCAT) provides a scattered pressure output,
psc, required for calculating target strength (TS).
The default behavior (Opt = TOTAL) provides a sum of the incident and
scattering fields, ptotal = pinc + psc.
Issue the AWAVE command to define the incident pressure pinc. If the
AWAVE command is defined with Opt2 = INT, only the total pressure field
is output regardless of the ASCRES,Opt command.
"""
command = f"ASCRES,{opt}"
return self.run(command, **kwargs)
def asol(self, lab="", opt="", **kwargs):
"""Specifies the output type of an acoustic scattering analysis.
APDL Command: ASOL
Parameters
----------
lab
Acoustic solver specification (no default):
SCAT - Set acoustic solver to the scattered field formulation.
opt
Option identifying an acoustic solver status:
OFF - Deactivate the specified acoustic solver (default).
ON - Activate the specified acoustic solver.
Notes
-----
Use the ASOL command to activate the specified acoustic solution
process.
The scattered option (Lab = SCAT) sets the acoustic solver to the
scattered-pressure field formulation.
Issue the AWAVE command to define the incident pressure pinc. If the
AWAVE command is defined with Opt2 = INT, the acoustic solver is set to
the scattered field formulation regardless of the ASOL command issued.
"""
command = f"ASOL,{lab},{opt}"
return self.run(command, **kwargs)
def bcsoption(self, memory_option="", memory_size="", solve_info="", **kwargs):
"""Sets memory option for the sparse solver.
APDL Command: BCSOPTION
Parameters
----------
memory_option
Memory allocation option:
DEFAULT - Use the default memory allocation strategy for
the sparse solver. The default strategy attempts
to run in the INCORE memory mode. If there is
not enough available physical memory when the
solver starts to run in the INCORE memory mode,
the solver will then attempt to run in the
OUTOFCORE memory mode.
INCORE - Use a memory allocation strategy in the sparse
solver that will attempt to obtain enough memory
to run with the entire factorized matrix in
memory. This option uses the most amount of
memory and should avoid doing any I/O. By
avoiding I/O, this option achieves optimal solver
performance. However, a significant amount of
memory is required to run in this mode, and it is
only recommended on machines with a large amount
of memory. If the allocation for in-core memory
fails, the solver will automatically revert to
out-of-core memory mode.
OUTOFCORE - Use a memory allocation strategy in the sparse
solver that will attempt to allocate only
enough work space to factor each individual
frontal matrix in memory, but will store the
entire factorized matrix on disk. Typically,
this memory mode results in poor performance
due to the potential bottleneck caused by the
I/O to the various files written by the
solver.
FORCE - This option, when used in conjunction with the
Memory_Size option, allows you to force the sparse
solver to run with a specific amount of
memory. This option is only recommended for the
advanced user who understands sparse solver memory
requirements for the problem being solved,
understands the physical memory on the system, and
wants to control the sparse solver memory usage.
memory_size
Initial memory size allocation for the sparse solver in
MB. This argument allows you to tune the sparse solver
memory and is not generally required. Although there is no
upper limit for Memory_Size, the Memory_Size setting
should always be well within the physical memory
available, but not so small as to cause the sparse solver
to run out of memory. Warnings and/or errors from the
sparse solver will appear if this value is set too low. If
the FORCE memory option is used, this value is the amount
of memory allocated for the entire duration of the sparse
solver solution.
solve_info
Solver output option:
OFF - Turns off additional output printing from the sparse
solver (default).
PERFORMANCE - Turns on additional output printing from the
sparse solver, including a performance
summary and a summary of file I/O for the
sparse solver. Information on memory usage
during assembly of the global matrix (that
is, creation of the Jobname.FULL file) is
also printed with this option.
Notes
-----
This command controls options related to the sparse solver in
all analysis types where the sparse solver can be used. It
also controls the Block Lanczos eigensolver in a modal or
buckling analysis.
The sparse solver runs from one large work space (that is, one
large memory allocation). The amount of memory required for
the sparse solver is unknown until the matrix structure is
preprocessed, including equation reordering. The amount of
memory allocated for the sparse solver is then dynamically
adjusted to supply the solver what it needs to compute the
solution.
If you have a very large memory system, you may want to try
selecting the INCORE memory mode for larger jobs to improve
performance. When running the sparse solver on a machine with
very slow I/O performance (for example, slow hard drive
speed), you may want to try using the INCORE memory mode to
achieve better performance. However, doing so may require much
more memory compared to running in the OUTOFCORE memory mode.
Running with the INCORE memory mode is best for jobs which
comfortably fit within the limits of the physical memory on a
given system. If the sparse solver work space exceeds physical
memory size, the system will be forced to use virtual memory
(or the system page/swap file). In this case, it is typically
more efficient to run with the OUTOFCORE memory mode. Assuming
the job fits comfortably within the limits of the machine,
running with the INCORE memory mode is often ideal for jobs
where repeated solves are performed for a single matrix
factorization. This occurs in a modal or buckling analysis or
when doing multiple load steps in a linear, static analysis.
For repeated runs with the sparse solver, you may set the
initial sparse solver memory allocation to the amount required
for factorization. This strategy reduces the frequency of
allocation and reallocation in the run to make the INCORE
option fully effective. If you have a very large memory
system, you may use the Memory_Size argument to increase the
maximum size attempted for in-core runs.
"""
command = f"BCSOPTION,,{memory_option},{memory_size},,,{solve_info}"
return self.run(command, **kwargs)
def cgrow(self, action="", par1="", par2="", **kwargs):
"""Defines crack-growth information
APDL Command: CGROW
Parameters
----------
action
Specifies the action for defining or manipulating crack-growth
data:
NEW - Initiate a new set of crack-growth simulation data (default).
CID - Specify the crack-calculation (CINT) ID for energy-release rates to be used in
the fracture criterion calculation.
FCOPTION - Specify the fracture criterion for crack-growth/delamination.
CPATH - Specify the element component for crack growth.
DTIME - Specify the initial time step for crack growth.
DTMIN - Specify the minimum time step for crack growth.
DTMAX - Specify the maximum time step for crack growth.
FCRAT - Fracture criterion ratio (fc).
STOP - Stops the analysis when the specified maximum crack extension is reached.
METHOD - Define the method of crack propagation.
Notes
-----
When Action = NEW, the CGROW command initializes a crack-growth
simulation set. Subsequent CGROW commands define the parameters
necessary for the simulation.
For multiple cracks, issue multiple CGROW,NEW commands (and any
subsequent CGROW commands necessary to define the parameters) for each
crack.
If the analysis is restarted (ANTYPE,,RESTART), the CGROW command must
be re-issued.
For additional details on this command, see
https://www.mm.bme.hu/~gyebro/files/ans_help_v182/ans_cmd/Hlp_C_CGROW.html
"""
command = f"CGROW,{action},{par1},{par2}"
return self.run(command, **kwargs)
def cmatrix(
self, symfac="", condname="", numcond="", grndkey="", capname="", **kwargs
):
"""Performs electrostatic field solutions and calculates the
self and mutual capacitances between multiple conductors.x
APDL Command: CMATRIX
Parameters
----------
symfac
Geometric symmetry factor. Capacitance values are scaled by this
factor which represents the fraction of the total device modeled.
Defaults to 1.
condname
Alphanumeric prefix identifier used in defining named conductor
components.
numcond
Total Number of Components. If a ground is modeled, it is to be
included as a component. If a ground is not modeled, but infinite
elements are used to model the far-field ground, a named component
for the far-field ground is not required.
grndkey
Ground key:
0 - Ground is one of the components, which is not at infinity.
1 - Ground is at infinity (modeled by infinite elements).
capname
Array name for computed capacitance matrix. Defaults to CMATRIX.
Notes
-----
To invoke the CMATRIX macro, the exterior nodes of each conductor must
be grouped into individual components using the CM command. Each set
of independent components is assigned a component name with a common
prefix followed by the conductor number. A conductor system with a
ground must also include the ground nodes as a component. The ground
component is numbered last in the component name sequence.
A ground capacitance matrix relates charge to a voltage vector. A
ground matrix cannot be applied to a circuit modeler. The lumped
capacitance matrix is a combination of lumped "arrangements" of
voltage differences between conductors. Use the lumped capacitance
terms in a circuit modeler to represent capacitances between
conductors.
Enclose all name-strings in single quotes in the CMATRIX command line.
See the Mechanical APDL Theory Reference and HMAGSOLV in the Low-
Frequency Electromagnetic Analysis Guide for details.
This command does not support multiframe restarts.
"""
command = f"CMATRIX,{symfac},'{condname}',{numcond},{grndkey},'{capname}'"
return self.run(command, **kwargs)
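The command strings built by these wrappers are plain comma-separated APDL lines; as the f-string in `cmatrix()` shows, name arguments are wrapped in single quotes while numeric arguments are inserted bare. A small sketch of just the string layout (no MAPDL session involved; the helper name is hypothetical):

```python
def build_cmatrix_command(symfac, condname, numcond, grndkey, capname):
    # Numeric arguments go in bare; name-strings are single-quoted,
    # matching the f-string used by cmatrix() above.
    return f"CMATRIX,{symfac},'{condname}',{numcond},{grndkey},'{capname}'"

print(build_cmatrix_command(1, "cond", 3, 1, "CMATRIX"))
# CMATRIX,1,'cond',3,1,'CMATRIX'
```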
def cmsopt(
self,
cmsmeth="",
nmode="",
freqb="",
freqe="",
fbddef="",
fbdval="",
iokey="",
**kwargs,
):
"""Specifies component mode synthesis (CMS) analysis options.
APDL Command: CMSOPT
Parameters
----------
cmsmeth
The component mode synthesis method to use. This value is required.
FIX - Fixed-interface method.
FREE - Free-interface method.
RFFB - Residual-flexible free-interface method.
nmode
The number of normal modes extracted and used in the superelement
generation. This value is required; the minimum is 1.
freqb
Beginning, or lower end, of frequency range of interest. This value
is optional.
freqe
Ending, or upper end, of frequency range of interest. This value is
optional.
fbddef
In a free-interface (CMSMETH = FREE) or residual-flexible free-
interface (CMSMETH = RFFB) CMS analysis, the method to use for
defining free body modes:
FNUM - The number (FDBVAL) of rigid body modes in the calculation.
FTOL - Employ a specified tolerance (FDBVAL) to determine rigid body modes in the
calculation.
FAUTO - Automatically determine rigid body modes in the calculation. This method is the
default.
RIGID - If no rigid body modes exist, define your own via the RIGID command.
fbdval
In a free-interface CMS analysis (CMSMETH = FREE), the number of
rigid body modes if FBDDEF = fnum (where the value is an integer
from 0 through 6), or the tolerance to employ if FBDDEF = ftol
(where the value is a positive real number representing rad/sec).
This value is required only when FBDDEF = fnum or FBDDEF = ftol;
otherwise, any specified value is ignored.
iokey
Output key to control writing of the transformation matrix to the
.TCMS file (FIX or FREE methods) or body properties to the .EXB
file (FIX method).
TCMS - Write the transformation matrix of the nodal component defined by the OUTPR
command to a .TCMS file. Refer to TCMS File Format in the
Programmer's Reference for more information on the this
file.
EXB - Write a body property input file (.EXB file) containing the condensed
substructure matrices and other body properties for use with
AVL EXCITE. Refer to ANSYS Interface to AVL EXCITE in the
Substructuring Analysis Guide for more information.
Notes
-----
CMS employs the Block Lanczos eigensolution method in the generation
pass.
CMS supports damping matrix reduction when a damping matrix exists. Set
the matrix generation key to 3 (SEOPT,Sename,SEMATR) to generate and
then reduce stiffness, mass, and damping matrices.
CMS does not support the SEOPT,,,,,RESOLVE command. Instead, ANSYS sets
the expansion method for the expansion pass (EXPMTH) to BACKSUB.
For more information about performing a CMS analysis, see Component
Mode Synthesis in the Substructuring Analysis Guide.
If IOKEY = TCMS is used to output the transformation matrix, then only
ITEM = NSOL is valid in the OUTPR command. In the interactive
sessions, the transformation matrix will not be output if the model has
more than 10 elements.
This command is also valid in /PREP7.
"""
command = f"CMSOPT,{cmsmeth},{nmode},{freqb},{freqe},{fbddef},{fbdval},{iokey}"
return self.run(command, **kwargs)
def cncheck(
self,
option="",
rid1="",
rid2="",
rinc="",
intertype="",
trlevel="",
cgap="",
cpen="",
ioff="",
**kwargs,
):
"""Provides and/or adjusts the initial status of contact pairs.
APDL Command: CNCHECK
Parameters
----------
option
Option to be performed:
* ``"DETAIL"`` : List all contact pair properties (default).
* ``"SUMMARY"`` : List only the open/closed status for each
contact pair.
* ``"POST"`` : Execute a partial solution to write the initial
contact configuration to the Jobname.RCN file.
* ``"ADJUST"`` : Physically move contact nodes to the target
in order to close a gap or reduce penetration. The initial
adjustment is converted to structural displacement values
(UX, UY, UZ) and stored in the Jobname.RCN file.
* ``"MORPH"`` : Physically move contact nodes to the target in
order to close a gap or reduce penetration, and also morph
the underlying solid mesh. The initial adjustment of contact
nodes and repositioning of solid element nodes due to mesh
morphing are converted to structural displacement values
(UX, UY, UZ) and stored in the Jobname.RCN file.
* ``"RESET"`` : Reset target element and contact element key
options and real constants to their default values. This
option is not valid for general contact.
* ``"AUTO"`` : Automatically sets certain real constants and
key options to recommended values or settings in order to
achieve better convergence based on overall contact pair
behaviors. This option is not valid for general contact.
* ``"TRIM"`` : Trim contact pair (remove certain contact and
target elements).
* ``"UNSE"`` : Unselect certain contact and target elements.
rid1, rid2, rinc
For pair-based contact, the range of real constant pair IDs
for which Option will be performed. If RID2 is not specified,
it defaults to RID1. If no value is specified, all contact
pairs in the selected set of elements are considered.
For general contact (InterType = GCN), RID1 and RID2 are
section IDs associated with general contact surfaces instead
of real constant IDs. If RINC = 0, the Option is performed
between the two sections, RID1 and RID2. If RINC > 0, the
Option is performed among all specified sections (RID1 to RID2
with increment of RINC).
intertype
The type of contact interface (pair-based versus general
contact) to be considered; or the type of contact pair to be
trimmed/unselected/auto-set.
The following labels specify the type of contact interface:
* ``""`` : (blank) Include all contact definitions (pair-based
and general contact).
* ``"GCN"`` : Include general contact definitions only (not valid when Option = RESET or AUTO).
The following labels specify the type of contact pairs to be
trimmed/unselected/auto-set (used only when Option = TRIM,
UNSE, or AUTO, and only for pair-based contact definitions):
* ``"ANY"`` : All types (default).
* ``"MPC"`` : MPC-based contact pairs (KEYOPT(2) = 2).
* ``"BOND"`` : Bonded contact pairs (KEYOPT(12) = 3, 5, 6).
* ``"NOSP"`` : No separation contact pairs (KEYOPT(12) = 2, 4).
* ``"INAC"`` : Inactive contact pairs (symmetric contact pairs for MPC contact or KEYOPT(8) = 2).
trlevel
Trimming level (used only when Option = TRIM, UNSE, or MORPH):
* ``""`` : (blank) Normal trimming (default): remove/unselect contact and target elements which are in far-field.
* ``"AGGRE"`` : Aggressive trimming: remove/unselect contact and target elements which are in far-field, and certain elements in near-field.
cgap
Valid only when Option = ADJUST or MORPH. Control
parameter for the opening gap. Close the opening gap if the
absolute value of the gap is smaller than the CGAP value. CGAP
defaults to ``0.25*PINB`` (where PINB is the pinball radius) for
bonded and no-separation contact; otherwise it defaults to the
value of real constant ICONT.
cpen
Valid only when Option = ADJUST or MORPH. Control
parameter for initial penetration. Close the initial
penetration if the absolute value of the penetration is
smaller than the CPEN value. CPEN defaults to ``0.25*PINB`` (where
PINB is the pinball radius) for any type of interface behavior
(either bonded or standard contact).
ioff
Valid only when Option = ADJUST or MORPH. Control
parameter for initial adjustment. Input a positive value to
adjust the contact nodes towards the target surface with a
constant interference distance equal to IOFF. Input a negative
value to adjust the contact node towards the target surface
with a uniform gap distance equal to the absolute value of
IOFF.
Notes
-----
The CNCHECK command provides information for surface-to-surface,
node-to-surface, and line-to-line contact pairs (element types
TARGE169, TARGE170, CONTA171, CONTA172, CONTA173, CONTA174,
CONTA175, CONTA176, CONTA177). All contact and target elements of
interest, along with the solid elements and nodes attached to
them, must be selected for the command to function properly. For
performance reasons, the program uses a subset of nodes and
elements based on the specified contact regions (RID1, RID2, RINC)
when executing the CNCHECK command.
For additional details, see the notes section at:
https://www.mm.bme.hu/~gyebro/files/ans_help_v182/ans_cmd/Hlp_C_CNCHECK.html
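Examples
--------
A sketch (pair IDs are hypothetical) of the command string built when
adjusting contact nodes for real constant pair IDs 1 through 3:

```python
# Mirror of the f-string this method uses; unused fields stay blank.
option, rid1, rid2, rinc = "ADJUST", 1, 3, ""
intertype = trlevel = cgap = cpen = ioff = ""
command = f"CNCHECK,{option},{rid1},{rid2},{rinc},{intertype},{trlevel},{cgap},{cpen},{ioff}"
print(command)  # CNCHECK,ADJUST,1,3,,,,,,
```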
"""
command = f"CNCHECK,{option},{rid1},{rid2},{rinc},{intertype},{trlevel},{cgap},{cpen},{ioff}"
return self.run(command, **kwargs)
def cnkmod(self, itype="", knum="", value="", **kwargs):
"""Modifies contact element key options.
APDL Command: CNKMOD
Parameters
----------
itype
Contact element type number as defined on the ET command.
knum
Number of the KEYOPT to be modified (KEYOPT(KNUM)).
value
Value to be assigned to the KEYOPT.
Notes
-----
The CNKMOD command has the same syntax as the KEYOPT command. However,
it is valid only in the SOLUTION processor. This command is intended
only for use in a linear perturbation analysis, and can only be used to
modify certain contact element KEYOPT values as described below.
Modifying KEYOPT(12)
One use for this command is to modify contact interface behavior
between load steps in a linear perturbation analysis; it allows the
user to control the contact status locally per contact pair. For this
application, this command is limited to changing the contact interface
behavior key option: KEYOPT(12) of CONTA171, CONTA172, CONTA173,
CONTA174, CONTA175, CONTA176, and CONTA177; and KEYOPT(10) of CONTA178.
When used for this purpose, the command adjusts the contact status from
the linear perturbation base analysis (at the point of restart) as
described in the table below. Note that CNKMOD allows you to take
points in the base analysis that are near contact (within the pinball
region) and modify them to be treated as "in contact" in the
perturbation analysis; see the "1 - near-field" row with KEYOPT(12)
values set to 4 or 5. CNKMOD also allows you to take points that are
sliding in the base analysis and treat them as sticking in the
perturbation analysis, irrespective of the MU value; see the "2 -
sliding" row with KEYOPT(12) values set to 1, 3, 5, or 6.
Table 128: Adjusted Contact Status when CNKMOD is Issued. (Each entry
of the table depends on whether the point lies inside or outside of
the adjusted pinball region; see the CNKMOD documentation for the
full table.)
If an open gap exists at the end of the previous load step and the
contact status is adjusted as sliding or sticking due to a "bonded" or
"no separation" contact behavior definition, then the program will
treat it as near-field contact when executing CNKMOD in the subsequent
load steps.
In the linear perturbation analysis procedure, contact status can also
be controlled or modified by the PERTURB command. The contact status
always follows local controls defined by the CNKMOD command first, and
is then adjusted by the global sticking or bonded setting (ContKey =
STICKING or BONDED) on the PERTURB command (see the PERTURB command for
details).
Modifying KEYOPT(3)
Another use for this command is to change the units of normal contact
stiffness (contact element real constant FKN) in a linear perturbation
modal analysis that is used to model brake squeal. For contact elements
CONTA171, CONTA172, CONTA173, and CONTA174, KEYOPT(3) controls the
units of normal contact stiffness. You can issue the command
CNKMOD,ITYPE,3,1 during the first phase of the linear perturbation
analysis in order to change the units of normal contact stiffness from
FORCE/LENGTH^3 (in the base analysis) to FORCE/LENGTH. Note that
KEYOPT(3) = 1 is valid only when a penalty-based algorithm is used
(KEYOPT(2) = 0 or 1) and the absolute normal contact stiffness value is
explicitly specified (that is, a negative value input for real constant
FKN).
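Examples
--------
A sketch (the element type number is hypothetical) of switching
contact element type 3 to "bonded (always)" behavior, KEYOPT(12) = 5,
during a linear perturbation analysis:

```python
# Mirror of the f-string this method uses to form the APDL command.
itype, knum, value = 3, 12, 5
command = f"CNKMOD,{itype},{knum},{value}"
print(command)  # CNKMOD,3,12,5
```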
"""
command = f"CNKMOD,{itype},{knum},{value}"
return self.run(command, **kwargs)
def cntr(self, option="", key="", **kwargs):
"""Redirects contact pair output quantities to a text file.
APDL Command: CNTR
Parameters
----------
option
Output option:
OUT - Contact output control.
key
Control key:
NO - Write contact information to the output file or to the screen (default).
YES - Write contact information to the Jobname.CNM file.
Notes
-----
Issue the command CNTR,OUT,YES to redirect contact pair output
quantities to the Jobname.CNM file.
To ensure that the contact information is written to Jobname.CNM,
reissue CNTR,OUT,YES each time you reenter the solution processor
(/SOLU).
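Examples
--------
A sketch of the command string for the usage described in the notes
above (redirecting contact output to Jobname.CNM):

```python
# Mirror of the f-string this method uses to form the APDL command.
option, key = "OUT", "YES"
command = f"CNTR,{option},{key}"
print(command)  # CNTR,OUT,YES
```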
"""
command = f"CNTR,{option},{key}"
return self.run(command, **kwargs)
def cutcontrol(self, lab="", value="", option="", **kwargs):
"""Controls time-step cutback during a nonlinear solution.
APDL Command: CUTCONTROL
Parameters
----------
lab
Specifies the criteria for causing a cutback. Valid labels are:
PLSLIMIT - Maximum equivalent plastic strain allowed within a time-step (substep). If the
calculated value exceeds the VALUE, the program
performs a cutback (bisection). VALUE defaults to 0.15
(15%).
CRPLIMIT - Set values for calculating the maximum equivalent creep ratio allowed within a
time step. If the calculated maximum creep ratio
exceeds the defined creep ratio limit, the program
performs a cutback.
DSPLIMIT - Maximum incremental displacement within the solution field in a time step
(substep). If the maximum calculated value exceeds
VALUE, the program performs a cutback (bisection).
VALUE defaults to 1.0 x 10^7.
NPOINT - Number of points in a cycle for a second order dynamic equation, used to
control automatic time stepping. If the number of
solution points per cycle is less than VALUE, the program
performs a cutback in time step size. VALUE defaults to
13 for linear analysis, 5 for nonlinear analysis. A
larger number of points yields a more accurate solution
but also increases the solution run time.
This option works well for linear problems. For nonlinear analyses, other factors such as contact status changes and solution convergence rate can override NPOINT. See Automatic Time Stepping in the Mechanical APDL Theory Reference for more information on automatic time stepping.
NOITERPREDICT - If VALUE is 0 (default), an internal auto time step scheme predicts the number of iterations required for nonlinear convergence and performs a cutback earlier than the number of iterations specified by the NEQIT command. This is the recommended option. If VALUE is 1, the solution iterates (if nonconvergent) to the NEQIT number of iterations before a cutback is invoked. This is sometimes useful for poorly convergent problems, but is rarely needed in general.
Bisection is also controlled by contact status change, plasticity or creep
strain limit, and other factors. If any of these
factors occur, bisection will still take place,
regardless of the NOITERPREDICT setting.
CUTBACKFACTOR - Changes the cutback value for bisection. Default is 0.5. VALUE must be greater
than 0.0 and less than 1.0. This option is active
only if AUTOTS,ON is set.
value
Numeric value for the specified cutback criterion. For Lab =
CRPLIMIT, VALUE is the creep criteria for the creep ratio limit.
option
Type of creep analysis. Valid for Lab = CRPLIMIT only.
IMPRATIO - Set the maximum creep ratio value for implicit creep. The default is 0.0 (i.e.,
no creep limit control) and any positive value is
valid. (See Implicit Creep Procedure in the Structural
Analysis Guide for information on how to define
implicit creep.)
EXPRATIO - Set the maximum creep ratio value for explicit creep. The default value is 0.1
and any positive value up to 0.25 is allowed. (See
Explicit Creep Procedure in the Structural Analysis
Guide for information on how to define explicit
creep.)
STSLIMIT - Stress threshold for calculating the creep ratio. For integration points with
effective stress below this threshold, the creep ratio
does not cause cutback. The default value is 0.0 and
any positive value is valid.
STNLIMIT - Elastic strain threshold for calculating the creep ratio. For integration
points with effective elastic strain below this
threshold, the creep ratio does not cause cutback. The
default value is 0.0 and any positive value is valid.
Notes
-----
A cutback is a method for automatically reducing the step size when
either the solution error is too large or the solution encounters
convergence difficulties during a nonlinear analysis.
Should a convergence failure occur, the program reduces the time step
interval to a fraction of its previous size and automatically continues
the solution from the last successfully converged time step. If the
reduced time step again fails to converge, the program again reduces
the time step size and proceeds with the solution. This process
continues until convergence is achieved or the minimum specified time
step value is reached.
For creep analysis, the cutback procedure is similar; the process
continues until the minimum specified time step size is reached.
However, if the creep ratio limit is exceeded, the program issues a
warning but continues the substep until the analysis is complete. In
this case, convergence is achieved but the creep ratio criteria is not
satisfied.
The CRPLIM command is functionally equivalent to Lab = CRPLIMIT with
options IMPRATIO and EXPRATIO.
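Examples
--------
A sketch (the limit value is hypothetical) of raising the plastic
strain cutback limit to 20%:

```python
# Mirror of the f-string this method uses; Option is left blank
# because it applies only to Lab = CRPLIMIT.
lab, value, option = "PLSLIMIT", 0.2, ""
command = f"CUTCONTROL,{lab},{value},{option}"
print(command)  # CUTCONTROL,PLSLIMIT,0.2,
```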
"""
command = f"CUTCONTROL,{lab},{value},{option}"
return self.run(command, **kwargs)
def ddoption(self, decomp="", **kwargs):
"""Sets domain decomposer option for Distributed ANSYS.
APDL Command: DDOPTION
Parameters
----------
decomp
Controls which domain decomposition algorithm to use.
AUTO - Use the default domain decomposition algorithm when splitting the model into
domains for Distributed ANSYS (default).
GREEDY - Use the "greedy" domain decomposition algorithm.
METIS - Use the METIS graph partitioning domain decomposition algorithm.
Notes
-----
This command controls options relating to the domain decomposition
algorithm used by Distributed ANSYS to split the model into pieces (or
domains), with each piece being solved on a different processor.
The greedy domain decomposition algorithm starts from a single element
at a corner of the model. The domain grows by taking the properly
connected neighboring elements and stops after reaching the optimal
size.
The METIS domain decomposition algorithm starts by creating a graph
from the finite element mesh. It then uses a multilevel graph
partitioning scheme which reduces the size of the original graph,
creates domains using the reduced graph, and then creates the final CPU
domains by expanding the smaller domains from the reduced graph back to
the original mesh.
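Examples
--------
A sketch of selecting the METIS graph-partitioning decomposition
described above:

```python
# Mirror of the f-string this method uses to form the APDL command.
decomp = "METIS"
command = f"DDOPTION,{decomp}"
print(command)  # DDOPTION,METIS
```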
"""
command = f"DDOPTION,{decomp}"
return self.run(command, **kwargs)
def dmpext(
self, smode="", tmode="", dmpname="", freqb="", freqe="", nsteps="", **kwargs
):
"""Extracts modal damping coefficients in a specified frequency range.
APDL Command: DMPEXT
Parameters
----------
smode
Source mode number. There is no default for this field; you must
enter an integer greater than zero.
tmode
Target mode. Defaults to SMODE.
dmpname
Array parameter name containing the damping results. Defaults to
d_damp.
freqb
Beginning frequency range (real number greater than zero) or 'EIG'
at eigenfrequency of source mode. 'EIG' is valid only if SMODE =
TMODE. Note that EIG must be enclosed in single quotes when this
command is used on the command line or in an input file. There is
no default for this field; you must enter a value.
freqe
End of frequency range. Must be blank for Freqb = EIG. Default is
Freqb.
nsteps
Number of substeps. Defaults to 1.
Notes
-----
DMPEXT invokes an ANSYS macro that uses modal projection techniques to
compute the damping force by the modal velocity of the source mode onto
the target mode. From the damping force, damping parameters are
extracted. DMPEXT creates an array parameter Dmpname, with the
following entries in each row:
* response frequency
* modal damping coefficient
* modal squeeze stiffness coefficient
* damping ratio
* squeeze-to-structural stiffness ratio
The macro requires the modal displacements from the file Jobname.EFL
obtained from the RMFLVEC command. In addition, a node component FLUN
must exist from all FLUID136 nodes. The computed damping ratio may be
used to specify constant or modal damping by means of the DMPRAT or
MDAMP commands. For Rayleigh damping, use the ABEXTRACT command to
compute ALPHAD and BETAD damping parameters. See Thin Film Analysis for
more information on thin film analyses.
The macro uses the LSSOLVE command to perform two load steps for each
frequency. The first load case contains the solution of the source
mode excitation and can be used for further postprocessing. Solid model
boundary conditions are deleted from the model. In addition,
prescribed nodal boundary conditions are applied to the model. You
should carefully check the boundary conditions of your model prior to
executing a subsequent analysis.
This command is also valid in PREP7.
Distributed ANSYS Restriction: This command is not supported in
Distributed ANSYS.
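Examples
--------
A sketch (mode number and array name are hypothetical) of extracting
damping for source mode 1 at its eigenfrequency; note that EIG must
carry enclosing single quotes:

```python
# Mirror of the f-string this method uses to form the APDL command.
smode, tmode, dmpname, freqb, freqe, nsteps = 1, 1, "mydamp", "'EIG'", "", ""
command = f"DMPEXT,{smode},{tmode},{dmpname},{freqb},{freqe},{nsteps}"
print(command)  # DMPEXT,1,1,mydamp,'EIG',,
```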
"""
command = f"DMPEXT,{smode},{tmode},{dmpname},{freqb},{freqe},{nsteps}"
return self.run(command, **kwargs)
def dmpoption(self, filetype="", combine="", **kwargs):
"""Specifies distributed memory parallel (Distributed ANSYS) file
combination options.
APDL Command: DMPOPTION
Parameters
----------
filetype
Type of solution file to combine after a distributed memory
parallel solution. There is no default; if (blank), the command is
ignored.
RST - Results files (.RST, .RTH, .RMG, .RSTP)
EMAT - Element matrix files (.EMAT).
ESAV - Element saved data files (.ESAVE)
MODE - Modal results files (.MODE)
MLV - Modal load vector file (.MLV)
IST - Initial state file (.IST)
FULL - Full matrix file (.FULL)
RFRQ - Reduced complex displacement file (.RFRQ)
RDSP - Reduced displacement file (.RDSP)
combine
Option to combine solution files.
Yes - Combine solution files (default).
No - Do not combine solution files.
Notes
-----
The DMPOPTION command controls how solution files are written during a
distributed memory parallel (Distributed ANSYS) solution. This command
is most useful for controlling how results files (.RST, .RTH, etc.) are
written.
In a distributed memory parallel solution, a local results file is
written by each process (JobnameN.ext, where N is the process number).
By default, the program automatically combines the local results files
(for example, JobnameN.RST) upon leaving the SOLUTION processor (for
example, upon the FINISH command) into a single global results file
(Jobname.RST) which can be used in ANSYS postprocessing. To reduce the
amount of communication and I/O performed by this operation, you can
issue the command DMPOPTION,RST,NO to bypass this step of combining the
local results files; the local files will remain on the local disks in
the current working directory. You can then use the RESCOMBINE command
macro in the POST1 general postprocessor (/POST1) to read all results
into the database for postprocessing.
The RESCOMBINE command macro is intended for use with POST1. If you
want to postprocess distributed parallel solution results using the
POST26 time-history postprocessor (/POST26), it is recommended that you
combine your local results files into one global results file
(DMPOPTION,RST,YES or COMBINE).
Local .EMAT, .ESAV, .MODE, .MLV, .IST, .RFRQ, .RDSP, and .FULL files
are also written (when applicable) by each process in a distributed
memory parallel solution. If these files are not needed for a
downstream solution or operation, you can issue the command
DMPOPTION,FileType,NO for each file type to bypass the file combination
step and thereby improve performance. You should not bypass the file
combination step if a downstream PSD analysis or modal expansion pass
will be performed.
If DMPOPTION,MODE,NO or DMPOPTION,RST,NO is specified in a modal
analysis, element results cannot be written to the combined mode file
(Jobname.MODE). In this case, if Distributed ANSYS is used in a
downstream harmonic or transient analysis that uses the mode-
superposition method, the MSUPkey on the MXPAND command can retain its
value. However, if shared memory parallel processing is used in the
downstream harmonic or transient analysis, the MSUPkey is effectively
set to NO.
The DMPOPTION command can be changed between load steps; however, doing
so will not affect which set of solution files are combined. Only the
last values of FileType and Combine upon leaving the solution processor
will be used to determine whether the solution files are combined. For
example, given a two load step solution and FileType = RST, setting
Combine = NO for the first load step and YES for the second load step
will cause all sets on the local results files to be combined. If the
opposite is true (Combine = YES for the first load step and NO for the
second load step), no results will be combined.
After using DMPOPTION to suppress file combination, you may find it
necessary to combine the local files for a specific FileType for use in
a subsequent analysis. In this case, use the COMBINE command to combine
local solution files into a single, global file.
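Examples
--------
A sketch of the command string for skipping results-file combination
after a distributed solve, as described in the notes above:

```python
# Mirror of the f-string this method uses to form the APDL command.
filetype, combine = "RST", "NO"
command = f"DMPOPTION,{filetype},{combine}"
print(command)  # DMPOPTION,RST,NO
```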
"""
command = f"DMPOPTION,{filetype},{combine}"
return self.run(command, **kwargs)
def dspoption(
self, reord_option="", memory_option="", memory_size="", solve_info="", **kwargs
):
"""Sets memory option for the distributed sparse solver.
APDL Command: DSPOPTION
Parameters
----------
reord_option
Reordering option:
DEFAULT - Use the default reordering scheme.
SEQORDER - Use a sequential equation reordering scheme
within the distributed sparse solver. Relative
to PARORDER, this option typically results in
longer equation ordering times and therefore
longer overall solver times. Occasionally,
however, this option will produce better
quality orderings which decrease the matrix
factorization times and improve overall solver
performance.
PARORDER - Use a parallel equation reordering scheme
within the distributed sparse solver. Relative
to SEQORDER, this option typically results in
shorter equation ordering times and therefore
shorter overall solver times. Occasionally,
however, this option will produce lower quality
orderings which increase the matrix
factorization times and degrade overall solver
performance.
memory_option
Memory allocation option:
DEFAULT - Use the default memory allocation strategy for
the distributed sparse solver. The default
strategy attempts to run in the INCORE memory
mode. If there is not enough physical memory
available when the solver starts to run in the
INCORE memory mode, the solver will then attempt
to run in the OUTOFCORE memory mode.
INCORE - Use a memory allocation strategy in the
distributed sparse solver that will attempt to
obtain enough memory to run with the entire
factorized matrix in memory. This option uses the
most amount of memory and should avoid doing any
I/O. By avoiding I/O, this option achieves
optimal solver performance. However, a
significant amount of memory is required to run
in this mode, and it is only recommended on
machines with a large amount of memory. If the
allocation for in-core memory fails, the solver
will automatically revert to out-of-core memory
mode.
OUTOFCORE - Use a memory allocation strategy in the
distributed sparse solver that will attempt to
allocate only enough work space to factor each
individual frontal matrix in memory, but will
share the entire factorized matrix on
disk. Typically, this memory mode results in
poor performance due to the potential
bottleneck caused by the I/O to the various
files written by the solver.
FORCE - This option, when used in conjunction with the
Memory_Size option, allows you to force the
distributed sparse solver to run with a specific
amount of memory. This option is only recommended
for the advanced user who understands distributed
sparse solver memory requirements for the problem
being solved, understands the physical memory on
the system, and wants to control the distributed
sparse solver memory usage.
memory_size
Initial memory size allocation for the sparse solver in
MB. The Memory_Size setting should always be well within
the physical memory available, but not so small as to
cause the distributed sparse solver to run out of
memory. Warnings and/or errors from the distributed sparse
solver will appear if this value is set too low. If the
FORCE memory option is used, this value is the amount of
memory allocated for the entire duration of the
distributed sparse solver solution.
solve_info
Solver output option:
OFF - Turns off additional output printing from the
distributed sparse solver (default).
PERFORMANCE - Turns on additional output printing from the
distributed sparse solver, including a
performance summary and a summary of file
I/O for the distributed sparse
solver. Information on memory usage during
assembly of the global matrix (that is,
creation of the Jobname.FULL file) is also
printed with this option.
Notes
-----
This command controls options related to the distributed sparse solver
in all analysis types where the distributed sparse solver can be used.
The amount of memory required for the distributed sparse solver is
unknown until the matrix structure is preprocessed, including equation
reordering. The amount of memory allocated for the distributed sparse
solver is then dynamically adjusted to supply the solver what it needs
to compute the solution.
If you have a large memory system, you may want to try selecting the
INCORE memory mode for larger jobs to improve performance. Also, when
running the distributed sparse solver with many processors on the same
machine or on a machine with very slow I/O performance (e.g., slow hard
drive speed), you may want to try using the INCORE memory mode to
achieve better performance. However, doing so may require much more
memory compared to running in the OUTOFCORE memory mode.
Running with the INCORE memory mode is best for jobs which comfortably
fit within the limits of the physical memory on a given system. If the
distributed sparse solver workspace exceeds physical memory size, the
system will be forced to use virtual memory (or the system page/swap
file). In this case, it is typically more efficient to run with the
OUTOFCORE memory mode.
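Examples
--------
A sketch of requesting the in-core memory mode with a performance
summary; note the two skipped (unused) fields before Solve_Info in
the assembled command:

```python
# Mirror of the f-string this method uses to form the APDL command.
reord_option, memory_option, memory_size, solve_info = "", "INCORE", "", "PERFORMANCE"
command = f"DSPOPTION,{reord_option},{memory_option},{memory_size},,,{solve_info}"
print(command)  # DSPOPTION,,INCORE,,,,PERFORMANCE
```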
"""
command = (
f"DSPOPTION,{reord_option},{memory_option},{memory_size},,,{solve_info}"
)
return self.run(command, **kwargs)
def exbopt(
self,
outinv2="",
outtcms="",
outsub="",
outcms="",
outcomp="",
outrm="",
noinv="",
outele="",
**kwargs,
):
"""Specifies .EXB file output options in a CMS generation pass.
APDL Command: EXBOPT
Parameters
----------
outinv2
Output control for 2nd order invariant:
* ``"0"`` : Do not output (default).
* ``"1"`` : Output the second order invariant.
outtcms
Output control for .TCMS file:
* ``"0"`` : Do not output (default).
* ``"1"`` : Output the .TCMS file.
outsub
Output control for .SUB file:
* ``"0"`` : Do not output (default).
* ``"1"`` : Output the .SUB file.
outcms
Output control for .CMS file:
* ``"0"`` : Do not output (default).
* ``"1"`` : Output the .CMS file.
outcomp
Output control for node and element component information:
* ``"0"`` : Do not output any component information.
* ``"1"`` : Output node component information only.
* ``"2"`` : Output element component information only.
* ``"3"`` : Output both node and element component information (default).
outrm
Output control for the recovery matrix:
* ``"0"`` : Do not output (default).
* ``"1"`` : Output the recovery matrix to file.EXB.
* ``"2"`` : Output the recovery matrix to a separate file, file_RECOVER.EXB.
noinv
Invariant calculation:
* ``"0"`` : Calculate all invariants (default).
* ``"1"`` : Suppress calculation of the 1st and 2nd order
invariants. NOINV = 1 suppresses OUTINV2 = 1.
outele
Output control for the element data:
* ``"0"`` : Do not output (default).
* ``"1"`` : Output the element data.
Notes
-----
When the body property file (file.EXB) is requested in a CMS
generation pass (CMSOPT,,,,,,,EXB command), the .TCMS, .SUB, and
.CMS files are not output by default. Use the EXBOPT command to
request these files, as needed.
EXBOPT can also be used to manage some content in the .EXB file
for improving performance and storage (see the OUTINV2, OUTCOMP,
OUTRM, NOINV, and OUTELE arguments described above).
If both recovery matrix output (OUTRM = 1 or 2) and the .TCMS file
(OUTTCMS = 1) are requested, the .TCMS file writing is turned off
due to potentially large in-core memory use.
For more information on how to generate file.EXB, see ANSYS
Interface to AVL EXCITE in the Mechanical APDL Substructuring
Analysis Guide.
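Examples
--------
A sketch (the flag choices are illustrative) of requesting the .SUB
file and the recovery matrix in file.EXB during a CMS generation pass:

```python
# Mirror of the f-string this method uses; unset flags stay blank.
outinv2, outtcms, outsub, outcms = "", "", 1, ""
outcomp, outrm, noinv, outele = "", 1, "", ""
command = f"EXBOPT,{outinv2},{outtcms},{outsub},{outcms},{outcomp},{outrm},{noinv},{outele}"
print(command)  # EXBOPT,,,1,,,1,,
```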
"""
command = f"EXBOPT,{outinv2},{outtcms},{outsub},{outcms},{outcomp},{outrm},{noinv},{outele}"
return self.run(command, **kwargs)
def ematwrite(self, key: str = "", **kwargs) -> Optional[str]:
"""Forces the writing of all the element matrices to File.EMAT.
APDL Command: EMATWRITE
Parameters
----------
key
Write key:
YES - Forces the writing of the element matrices to
File.EMAT even if not normally
done.
NO - Element matrices are written only if required. This
value is the default.
Notes
-----
The EMATWRITE command forces ANSYS to write the File.EMAT
file. The file is necessary if you intend to follow the
initial load step with a subsequent inertia relief
calculation (IRLF). If used in the solution
processor (/SOLU), this command is only valid within the
first load step.
This command is also valid in PREP7.
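Examples
--------
A sketch of forcing element matrices to be written to File.EMAT:

```python
# Mirror of the f-string this method uses to form the APDL command.
key = "YES"
command = f"EMATWRITE,{key}"
print(command)  # EMATWRITE,YES
```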
"""
command = f"EMATWRITE,{key}"
return self.run(command, **kwargs)
def eqslv(self, lab="", toler="", mult="", keepfile="", **kwargs):
"""Specifies the type of equation solver.
APDL Command: EQSLV
Parameters
----------
lab
Equation solver type:
SPARSE - Sparse direct equation solver. Applicable to
real-value or complex-value symmetric and
unsymmetric matrices. Available only for STATIC,
HARMIC (full method only), TRANS (full method
only), SUBSTR, and PSD spectrum analysis types
[ANTYPE]. Can be used for nonlinear and linear
analyses, especially nonlinear analysis where
indefinite matrices are frequently
encountered. Well suited for contact analysis
where contact status alters the mesh
topology. Other typical well-suited applications
are: (a) models consisting of shell/beam or
shell/beam and solid elements (b) models with a
multi-branch structure, such as an automobile
exhaust or a turbine fan. This is an alternative
to iterative solvers since it combines both speed
and robustness. Generally, it requires
considerably more memory (~10x) than the PCG
solver to obtain optimal performance (running
totally in-core). When memory is limited, the
solver works partly in-core and out-of-core,
which can noticeably slow down the performance of
the solver. See the BCSOPTION command for more
details on the various modes of operation for
this solver.
This solver can be run in shared memory parallel or
distributed memory parallel (Distributed ANSYS) mode. When
used in Distributed ANSYS, this solver preserves all of
the merits of the classic or shared memory sparse
solver. The total sum of memory (summed for all processes)
is usually higher than the shared memory sparse
solver. System configuration also affects the performance
of the distributed memory parallel solver. If enough
physical memory is available, running this solver in the
in-core memory mode achieves optimal performance. The
ideal configuration when using the out-of-core memory mode
is to use one processor per machine on multiple machines
(a cluster), spreading the I/O across the hard drives of
each machine, assuming that you are using a high-speed
network such as Infiniband to efficiently support all
communication across the multiple machines. - This solver
supports use of the GPU accelerator capability.
JCG - Jacobi Conjugate Gradient iterative equation
solver. Available only for STATIC, HARMIC (full
method only), and TRANS (full method only) analysis
types [ANTYPE]. Can be used for structural, thermal,
and multiphysics applications. Applicable for
symmetric, unsymmetric, complex, definite, and
indefinite matrices. Recommended for 3-D harmonic
analyses in structural and multiphysics
applications. Efficient for heat transfer,
electromagnetics, piezoelectrics, and acoustic field
problems.
This solver can be run in shared memory parallel or
distributed memory parallel (Distributed ANSYS) mode. When
used in Distributed ANSYS, in addition to the limitations
listed above, this solver only runs in a distributed
parallel fashion for STATIC and TRANS (full method)
analyses in which the stiffness is symmetric and only when
not using the fast thermal option (THOPT). Otherwise, this
solver runs in shared memory parallel mode inside
Distributed ANSYS. - This solver supports use of the GPU
accelerator capability. When using the GPU accelerator
capability, in addition to the limitations listed above,
this solver is available only for STATIC and TRANS (full
method) analyses where the stiffness is symmetric and does
not support the fast thermal option (THOPT).
ICCG - Incomplete Cholesky Conjugate Gradient iterative
equation solver. Available for STATIC, HARMIC (full
method only), and TRANS (full method only) analysis
types [ANTYPE]. Can be used for structural,
thermal, and multiphysics applications, and for
symmetric, unsymmetric, complex, definite, and
indefinite matrices. The ICCG solver requires more
memory than the JCG solver, but is more robust than
the JCG solver for ill-conditioned matrices.
This solver can only be run in shared memory parallel
mode. This is also true when the solver is used inside
Distributed ANSYS. - This solver does not support use of
the GPU accelerator capability.
QMR - Quasi-Minimal Residual iterative equation
solver. Available for the HARMIC (full method only)
analysis type [ANTYPE]. Can be used for
high-frequency electromagnetic applications, and for
symmetric, complex, definite, and indefinite
matrices. The QMR solver is more stable than the
ICCG solver.
This solver can only be run in shared memory parallel
mode. This is also true when the solver is used inside
Distributed ANSYS. - This solver does not support use of
the GPU accelerator capability.
PCG - Preconditioned Conjugate Gradient iterative equation
solver (licensed from Computational Applications and
Systems Integration, Inc.). Requires less disk file
space than SPARSE and is faster for large
models. Useful for plates, shells, 3-D models, large
2-D models, and other problems having symmetric,
sparse, definite or indefinite matrices for
nonlinear analysis. Requires twice as much memory
as JCG. Available only for analysis types [ANTYPE]
STATIC, TRANS (full method only), or MODAL (with PCG
Lanczos option only). Also available for the use
pass of substructure analyses (MATRIX50). The PCG
solver can robustly solve equations with constraint
equations (CE, CEINTF, CPINTF, and CERIG). With
this solver, you can use the MSAVE command to obtain
a considerable memory savings.
The PCG solver can handle ill-conditioned problems by
using a higher level of difficulty (see
PCGOPT). Ill-conditioning arises from elements with high
aspect ratios, contact, and plasticity. - This solver can
be run in shared memory parallel or distributed memory
parallel (Distributed ANSYS) mode. When used in
Distributed ANSYS, this solver preserves all of the merits
of the classic or shared memory PCG solver. The total sum
of memory (summed for all processes) is about 30% more
than the shared memory PCG solver.
toler
Iterative solver tolerance value. Used only with the
Jacobi Conjugate Gradient, Incomplete Cholesky Conjugate
Gradient, Preconditioned Conjugate Gradient, and
Quasi-Minimal Residual equation solvers. For the PCG
solver, the default is 1.0E-8. The value 1.0E-5 may be
acceptable in many situations. When using the PCG Lanczos
mode extraction method, the default solver tolerance value
is 1.0E-4. For the JCG and ICCG solvers with symmetric
matrices, the default is 1.0E-8. For the JCG and ICCG
solvers with unsymmetric matrices, and for the QMR solver,
the default is 1.0E-6. Iterations continue until the SRSS
norm of the residual is less than TOLER times the norm of
the applied load vector. For the PCG solver in the linear
static analysis case, 3 error norms are used. If one of
the error norms is smaller than TOLER, and the SRSS norm
of the residual is smaller than 1.0E-2, convergence is
assumed to have been reached. See Iterative Solver in the
Mechanical APDL Theory Reference for details.
mult
Multiplier (defaults to 2.5 for nonlinear analyses; 1.0
for linear analyses) used to control the maximum number of
iterations performed during convergence calculations. Used
only with the Preconditioned Conjugate Gradient equation
solver (PCG). The maximum number of iterations is equal to
the multiplier (MULT) times the number of degrees of
freedom (DOF). If MULT is input as a negative value, then
the maximum number of iterations is equal to abs(MULT).
Iterations continue until either the maximum number of
iterations or solution convergence has been reached. In
general, the default value for MULT is adequate for
reaching convergence. However, for ill-conditioned
matrices (that is, models containing elements with high
aspect ratios or material type discontinuities) the
multiplier may be used to increase the maximum number of
iterations used to achieve convergence. The recommended
range for the multiplier is 1.0 ≤ MULT ≤ 3.0. Normally, a
value greater than 3.0 adds no further benefit toward
convergence, and merely increases time requirements. If
the solution does not converge with 1.0 ≤ MULT ≤ 3.0, or in
less than 10,000 iterations, then convergence is highly
unlikely and further examination of the model is
recommended. Rather than increasing the default value of
MULT, consider increasing the level of difficulty
(Lev_Diff) on the PCGOPT command.
keepfile
Determines whether files from a SPARSE solver run should be deleted
or retained. Applies only to Lab = SPARSE for static and full
transient analyses.
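The TOLER convergence test and the MULT iteration cap described above can be sketched in plain Python. This is only an illustration of the documented rules, not the solver's actual implementation:

```python
import math

def srss(v):
    # Square root of the sum of squares (SRSS) norm.
    return math.sqrt(sum(x * x for x in v))

def pcg_converged(residual, load, toler=1.0e-8):
    # Iterations stop once the SRSS norm of the residual drops below
    # TOLER times the norm of the applied load vector.
    return srss(residual) < toler * srss(load)

def pcg_iteration_cap(mult, ndof):
    # A negative MULT is an absolute iteration count; otherwise the
    # cap is MULT times the number of degrees of freedom.
    return abs(mult) if mult < 0 else mult * ndof

print(pcg_iteration_cap(2.5, 10000))            # 25000.0
print(pcg_converged([1e-10, 0.0], [1.0, 1.0]))  # True
```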
"""
return self.run(f"EQSLV,{lab},{toler},{mult},,{keepfile}", **kwargs)
def eresx(self, key="", **kwargs):
"""Specifies extrapolation of integration point results.
APDL Command: ERESX
Parameters
----------
key
Extrapolation key:
DEFA - If element is fully elastic (no active plasticity, creep, or swelling
nonlinearities), extrapolate the integration point results
to the nodes. If any portion of the element is plastic (or
other active material nonlinearity), copy the integration
point results to the nodes (default).
YES - Extrapolate the linear portion of the integration point results to the nodes
and copy the nonlinear portion (for example, plastic
strains).
NO - Copy the integration point results to the nodes.
Notes
-----
Specifies whether the solution results at the element integration
points are extrapolated or copied to the nodes for element and nodal
postprocessing. The structural stresses, elastic and thermal strains,
field gradients, and fluxes are affected. Nonlinear data (plastic,
creep, and swelling strains) are always copied to the nodes, never
extrapolated. For shell elements, ERESX applies only to integration
point results in the in-plane directions.
This command is also valid in PREP7.
"""
command = f"ERESX,{key}"
return self.run(command, **kwargs)
def escheck(
self, sele: str = "", levl: str = "", defkey: MapdlInt = "", **kwargs
) -> Optional[str]:
"""Perform element shape checking for a selected element set.
APDL Command: ESCHECK
Parameters
----------
sele
Specifies whether to select elements for checking:
(blank) - List all warnings/errors from element shape
checking.
ESEL - Select the elements based on the Levl criteria
specified below.
levl
WARN - Select elements producing warning and error messages.
ERR - Select only elements producing error messages
(default).
defkey
Specifies whether the check should be performed on deformed
element shapes:
0 - Do not update node coordinates before performing
shape checks (default).
1 - Update node coordinates using the current set of
deformations in the database.
Notes
-----
Shape checking will occur according to the current SHPP
settings. Although ESCHECK is valid in all processors,
Defkey uses the current results in the database. If no
results are available a warning will be issued.
This command is also valid in PREP7, SOLUTION and POST1.
"""
command = f"ESCHECK,{sele},{levl},{defkey}"
return self.run(command, **kwargs)
def essolv(
self,
electit="",
strutit="",
dimn="",
morphopt="",
mcomp="",
xcomp="",
electol="",
strutol="",
mxloop="",
ruseky="",
restky="",
eiscomp="",
**kwargs,
):
"""Performs a coupled electrostatic-structural analysis.
APDL Command: ESSOLV
Parameters
----------
electit
Title of the electrostatics physics file as assigned by the PHYSICS
command.
strutit
Title of the structural physics file as assigned by the PHYSICS
command.
dimn
Model dimensionality (a default is not allowed):
2 - 2-D model.
3 - 3-D model.
morphopt
Morphing option:
<0 - Do not perform any mesh morphing or remeshing.
0 - Remesh the non-structural regions for each recursive loop only if mesh morphing
fails (default).
1 - Remesh the non-structural regions each recursive loop and bypass mesh morphing.
2 - Perform mesh morphing only, do not remesh any non-structural regions.
mcomp
Component name of the region to be morphed. For 2-D models, the
component may be elements or areas. For 3-D models, the component
may be elements or volumes. A component must be specified. You
must enclose name-strings in single quotes in the ESSOLV command
line.
xcomp
Component name of entities excluded from morphing. In the 2-D
case, it is the component name for the lines excluded from
morphing. In the 3-D case, it is component name for the areas
excluded from morphing. Defaults to exterior non-shared entities
(see the DAMORPH, DVMORPH, and DEMORPH commands). You must enclose
name-strings in single quotes in the ESSOLV command line.
electol
Electrostatic energy convergence tolerance. Defaults to .005 (.5%)
of the value computed from the previous iteration. If less than
zero, the convergence criterion based on electrostatics results is
turned off.
strutol
Structural maximum displacement convergence tolerance. Defaults to
.005 (.5%) of the value computed from the previous iteration. If
less than zero, the convergence criterion based on structural
results is turned off.
mxloop
Maximum number of allowable solution recursive loops. A single
pass through both an electrostatics and structural analysis
constitutes one loop. Defaults to 100.
ruseky
Reuse flag option:
1 - Assumes initial run of ESSOLV using base geometry for
the first electrostatics solution.
>1 - Assumes ESSOLV run is a continuation of a previous
ESSOLV run, whereby the morphed geometry is used for
the initial electrostatic simulation.
restky
Structural restart key.
0 - Use static solution option for structural solution.
1 - Use static restart solution option for structural solution.
eiscomp
Element component name for elements containing initial stress data
residing in file jobname.ist. The initial stress data must be
defined prior to issuing ESSOLV (see INISTATE command).
Notes
-----
ESSOLV invokes an ANSYS macro which automatically performs a coupled
electrostatic-structural analysis.
The macro displays periodic updates of the convergence.
If non-structural regions are remeshed during the analysis, boundary
conditions and loads applied to nodes and elements will be lost.
Accordingly, it is better to assign boundary conditions and loads to
the solid model.
Use RUSEKY > 1 for solving multiple ESSOLV simulations for different
excitation levels (i.e., for running a voltage sweep). Do not issue the
SAVE command to save the database between ESSOLV calls.
For nonlinear structural solutions, the structural restart option
(RESTKY = 1) may improve solution time by starting from the previous
converged structural solution.
For solid elements, ESSOLV automatically detects the air-structure
interface and applies a Maxwell surface flag on the electrostatic
elements. This flag is used to initiate the transfer of forces from
the electrostatic region to the structure. When using the ESSOLV
command with structural shell elements (for example, SHELL181), you
must manually apply the Maxwell surface flag on all air elements
surrounding the shells before writing the final electrostatic physics
file. Use the SFA command to apply the Maxwell surface flag to the
areas representing the shell elements; doing so ensures that the air
elements next to both sides of the shells receive the Maxwell surface
flag.
If lower-order structural solids or shells are used, set KEYOPT(7) = 1
for the electrostatic element types to ensure the correct transfer of
forces.
Information on creating the initial stress file is documented in the
Loading chapter in the Basic Analysis Guide.
Distributed ANSYS Restriction: This command is not supported in
Distributed ANSYS.
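The recursive solution loop controlled by ELECTOL, STRUTOL, and MXLOOP can be sketched as follows. This is an illustrative stand-in using dummy solver callables, not the ESSOLV macro itself:

```python
def essolv_loop(solve_elec, solve_struct, electol=0.005, strutol=0.005, mxloop=100):
    # One electrostatic pass plus one structural pass per loop; stop when
    # both relative changes fall below their tolerances. A negative
    # tolerance disables the corresponding check, as documented above.
    energy_prev = disp_prev = None
    for loop in range(1, mxloop + 1):
        energy = solve_elec()
        disp = solve_struct()
        if energy_prev is not None:
            elec_ok = electol < 0 or abs(energy - energy_prev) <= electol * abs(energy_prev)
            stru_ok = strutol < 0 or abs(disp - disp_prev) <= strutol * abs(disp_prev)
            if elec_ok and stru_ok:
                return loop
        energy_prev, disp_prev = energy, disp
    return mxloop

# Dummy solvers that converge on the second pass.
energies = iter([1.000, 1.001])
disps = iter([0.500, 0.501])
print(essolv_loop(lambda: next(energies), lambda: next(disps)))  # 2
```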
"""
command = f"ESSOLV,{electit},{strutit},{dimn},{morphopt},{mcomp},{xcomp},{electol},{strutol},{mxloop},,{ruseky},{restky},{eiscomp}"
return self.run(command, **kwargs)
def expass(self, key="", **kwargs):
"""Specifies an expansion pass of an analysis.
APDL Command: EXPASS
Parameters
----------
key
Expansion pass key:
OFF - No expansion pass will be performed (default).
ON - An expansion pass will be performed.
Notes
-----
Specifies that an expansion pass of a modal, substructure, buckling,
transient, or harmonic analysis is to be performed.
Note: This separate solution pass requires an explicit FINISH to the
preceding analysis and reentry into SOLUTION.
This command is also valid in PREP7.
"""
command = f"EXPASS,{key}"
return self.run(command, **kwargs)
def gauge(self, opt="", freq="", **kwargs):
"""Gauges the problem domain for a magnetic edge-element formulation.
APDL Command: GAUGE
Parameters
----------
opt
Type of gauging to be performed:
ON - Perform tree gauging of the edge values (default).
OFF - Gauging is off. (You must specify custom gauging via APDL specifications.)
STAT - Gauging status (returns the current Opt and FREQ values)
freq
The following options are valid when Opt = ON:
0 - Generate tree-gauging information once, at the first load step. Gauging data is
retained for subsequent load steps. (This behavior is the
default.)
1 - Repeat gauging for each load step. Rewrites the gauging information at each
load step to accommodate changing boundary conditions on the AZ
degree of freedom (for example, adding or deleting AZ
constraints via the D or CE commands).
Notes
-----
The GAUGE command controls the tree-gauging procedure required for
electromagnetic analyses using an edge-based magnetic formulation
(elements SOLID236 and SOLID237).
Gauging occurs at the solver level for each solution (SOLVE). It sets
additional zero constraints on the edge-flux degrees of freedom AZ to
produce a unique solution; the additional constraints are removed after
solution.
Use the FREQ option to specify how the command generates gauging
information for multiple load steps.
Access the gauging information via the _TGAUGE component of gauged
nodes. The program creates and uses this component internally to remove
and reapply the AZ constraints required by gauging. If FREQ = 0, the
_TGAUGE component is created at the first load step and is used to
reapply the tree gauge constraints at subsequent load steps. If FREQ =
1, the tree-gauging information and the _TGAUGE component are generated
at every load step.
If gauging is turned off (GAUGE,OFF), you must specify your own gauging
at the APDL level.
This command is also valid in PREP7.
"""
command = f"GAUGE,{opt},{freq}"
return self.run(command, **kwargs)
def gmatrix(self, symfac="", condname="", numcond="", matrixname="", **kwargs):
"""Performs electric field solutions and calculates the self and mutual
APDL Command: GMATRIX
conductance between multiple conductors.
Parameters
----------
symfac
Geometric symmetry factor. Conductance values are scaled by this
factor which represents the fraction of the total device modeled.
Defaults to 1.
condname
Alphanumeric prefix identifier used in defining named conductor
components.
numcond
Total number of components. If a ground is modeled, it is to be
included as a component.
matrixname
Array name for computed conductance matrix. Defaults to GMATRIX.
Notes
-----
To invoke the GMATRIX macro, the exterior nodes of each conductor must
be grouped into individual components using the CM command. Each set
of independent components is assigned a component name with a common
prefix followed by the conductor number. A conductor system with a
ground must also include the ground nodes as a component. The ground
component is numbered last in the component name sequence.
A ground conductance matrix relates current to a voltage vector. A
ground matrix cannot be applied to a circuit modeler. The lumped
conductance matrix is a combination of lumped "arrangements" of
voltage differences between conductors. Use the lumped conductance
terms in a circuit modeler to represent conductances between
conductors.
Enclose all name-strings in single quotes in the GMATRIX command line.
GMATRIX works with the following elements:
SOLID5 (KEYOPT(1) = 9)
SOLID98 (KEYOPT(1) = 9)
LINK68
PLANE230
SOLID231
SOLID232
This command is available from the menu path shown below only if
existing results are available.
This command does not support multiframe restarts.
Distributed ANSYS Restriction: This command is not supported in
Distributed ANSYS.
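The component-naming convention described above (a common prefix followed by the conductor number, with any ground numbered last) might be generated with a small helper. The helper and the ``cond`` prefix are hypothetical, used only for illustration:

```python
def conductor_components(prefix, numcond):
    # Components are named prefix + conductor number (1..NUMCOND);
    # if a ground is modeled, it is the last numbered component.
    return [f"{prefix}{i}" for i in range(1, numcond + 1)]

names = conductor_components("cond", 3)
print(names)  # ['cond1', 'cond2', 'cond3'] -- 'cond3' would be the ground
```

In a PyMAPDL session, each such name would typically be assigned with ``mapdl.cm(name, "NODE")`` after selecting that conductor's exterior nodes, before issuing the GMATRIX command with the quoted prefix.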
"""
command = f"GMATRIX,{symfac},{condname},{numcond},,{matrixname}"
return self.run(command, **kwargs)
def lanboption(self, strmck="", **kwargs):
"""Specifies Block Lanczos eigensolver options.
APDL Command: LANBOPTION
strmck
Controls whether the Block Lanczos eigensolver will perform a
Sturm sequence check:
* ``"OFF"`` : Do not perform the Sturm sequence check
(default).
* ``"ON"`` : Perform a Sturm sequence check. This requires
additional matrix factorization (which can be expensive),
but does help ensure that no modes are missed in the
specified range.
Notes
-----
LANBOPTION specifies options to be used with the Block Lanczos
eigensolver during an eigenvalue buckling analysis (BUCOPT,LANB)
or a modal analysis (MODOPT,LANB).
By default, the Sturm sequence check is off for the Block Lanczos
eigensolver when it is used in a modal analysis, and on when it is
used in a buckling analysis.
"""
return self.run(f"LANBOPTION,{strmck}", **kwargs)
def lumpm(self, key="", **kwargs):
"""Specifies a lumped mass matrix formulation.
APDL Command: LUMPM
Parameters
----------
key
Formulation key:
OFF - Use the element-dependent default mass matrix formulation (default).
ON - Use a lumped mass approximation.
Notes
-----
This command is also valid in PREP7. If used in SOLUTION, this command
is valid only within the first load step.
"""
command = f"LUMPM,{key}"
return self.run(command, **kwargs)
def moddir(self, key="", directory="", fname="", **kwargs):
"""Activates the remote read-only modal files usage.
APDL Command: MODDIR
Parameters
----------
key
Key to activate the remote modal files usage
* ``"1 (ON or YES)"`` : The program performs the analysis
using remote modal files. The files are read-only.
* ``"0 (OFF or NO)"`` : The program performs the analysis
using modal files located in the working directory
(default).
directory
Directory path (248 characters maximum). The directory
contains the modal analysis files. The directory path
defaults to the current working directory.
fname
File name (no extension or directory path) for the modal
analysis files. The file name defaults to the current
Jobname.
Notes
-----
Only applies to spectrum analyses (ANTYPE,SPECTR).
Using the default for both the directory path (Directory) and the
file name (Fname) is not valid. At least one of these values must
be specified.
The MODDIR command must be issued during the first solution and at
the beginning of the solution phase (before LVSCALE in
particular).
Remote modal files usage is not supported when mode file reuse is
activated (modeReuseKey = YES on SPOPT).
"""
return self.run(f"MODDIR,{key},{directory},{fname}", **kwargs)
def monitor(self, var="", node="", lab="", **kwargs):
"""Controls contents of three variable fields in nonlinear solution
APDL Command: MONITOR
monitor file.
Parameters
----------
var
One of three variable field numbers in the monitor file whose
contents can be specified by the Lab field. Valid arguments are
integers 1, 2, or 3. See Notes section for default values.
node
The node number for which information is monitored in the specified
VAR field. In the GUI, if Node = P, graphical picking is enabled.
If blank, the monitor file lists the maximum value of the specified
quantity (Lab field) for the entire structure.
lab
The solution quantity to be monitored in the specified VAR field.
Valid labels for solution quantities are UX, UY, and UZ
(displacements); ROTX, ROTY, and ROTZ (rotations); and TEMP
(temperature). Valid labels for reaction force are FX, FY, and FZ
(structural force) and MX, MY, and MZ (structural moment). Valid
label for heat flow rate is HEAT. For defaults see the Notes
section.
Notes
-----
The monitor file always has an extension of .mntr, and takes its file
name from the specified Jobname. If no Jobname is specified, the file
name defaults to file.
You must issue this command once for each solution quantity you want to
monitor at a specified node at each load step. You cannot monitor a
reaction force during a linear analysis. The variable field contents
can be redefined at each load step by reissuing the command. The
monitored quantities are appended to the file for each load step.
Reaction forces reported in the monitor file may be incorrect if the
degree of freedom of the specified node is involved in externally
defined coupling (CP command) or constraint equations (CE command), or
if the program has applied constraint equations internally to the node.
The following example shows the format of a monitor file. Note that
the file only records the solution substep history when a substep is
convergent.
The following details the contents of the various fields in the monitor
file:
The current load step number.
The current substep (time step) number.
The number of attempts made in solving the current substep. This
number is equal to the number of failed attempts (bisections) plus one
(the successful attempt).
The number of iterations used by the last successful attempt.
Total cumulative number of iterations (including each iteration used by
a bisection).
Time or load factor increments for the current substep.
Total time (or load factor) for the last successful attempt in the
current substep.
Variable field 1. In this example, the field is reporting the UZ
value. By default, this field lists the CPU time used up to (but not
including) the current substep.
Variable field 2. In this example, the field is reporting the MZ
value. By default, this field lists the maximum displacement in the
entire structure.
Variable field 3. By default (and in the example), this field reports
the maximum equivalent plastic strain increment in the entire
structure.
"""
command = f"MONITOR,{var},{node},{lab}"
return self.run(command, **kwargs)
def msave(self, key="", **kwargs):
"""Sets the solver memory saving option. This option only applies to the
APDL Command: MSAVE
PCG solver (including PCG Lanczos).
Parameters
----------
key
Activation key:
0 or OFF - Use global assembly for the stiffness matrix (and mass matrix, when using PCG
Lanczos) of the entire model.
1 or ON - Use an element-by-element approach when possible to save memory during the
solution. In this case, the global stiffness (and mass)
matrix is not assembled; element stiffness (and mass) is
regenerated during PCG or PCG Lanczos iterations.
Notes
-----
MSAVE,ON only applies to and is the default for parts of the model
using the following element types with linear material properties that
meet the conditions listed below.
SOLID186 (Structural Solid only)
SOLID187
The following conditions must also be true:
The PCG solver has been specified.
Small strains are assumed (NLGEOM,OFF).
No prestress effects (PSTRES) are included.
All nodes on the supported element types must be defined (i.e., the
midside nodes cannot be removed using the EMID command).
For elements with thermally dependent material properties, MSAVE,ON
applies only to elements with uniform temperatures prescribed.
The default element coordinate system must be used.
If you manually force MSAVE,ON by including it in the input file, the
model can include the following additional conditions:
The analysis can be a modal analysis using the PCG Lanczos method
(MODOPT,LANPCG).
Large deflection effects (NLGEOM,ON) are included.
SOLID185 (brick shapes and KEYOPT(2) = 3 only) elements can be
included.
All other element types or other parts of the model that don't meet the
above criteria will be solved using global assembly (MSAVE,OFF). This
command can result in memory savings of up to 70 percent over the
global assembly approach for the part of the model that meets the
criteria. Depending on the hardware (e.g., processor speed, memory
bandwidth, etc.), the solution time may increase or decrease when this
feature is used.
This memory-saving feature runs in parallel when multiple processors
are used with the /CONFIG command or with Distributed ANSYS. The gain
in performance with using multiple processors with this feature turned
on should be similar to the default case when this feature is turned
off. Performance also improves when using the uniform reduced
integration option for SOLID186 elements.
This command does not support the layered option of the SOLID185 and
SOLID186 elements.
When using MSAVE,ON with the PCGOPT command, note the following
restrictions:
For static and modal analyses, MSAVE,ON is not valid when using a
Lev_Diff value of 5 on the PCGOPT command; Lev_Diff will automatically
be reset to 2.
For modal analyses, MSAVE,ON is not valid with the StrmCk option of the
PCGOPT command; Strmck will be set to OFF.
For all analysis types, MSAVE,ON is not valid when the Lagrange
multiplier option (LM_Key) of the PCGOPT command is set to ON; the
MSAVE activation key will be set to OFF.
For linear perturbation static and modal analyses, MSAVE,ON is not
valid; the MSAVE activation key will be set to OFF.
When using MSAVE,ON for modal analyses, no .FULL file will be created.
The .FULL file may be necessary for subsequent analyses (e.g.,
harmonic, transient mode-superposition, or spectrum analyses). To
generate the .FULL file, rerun the modal analysis using the WRFULL
command.
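The default-eligibility conditions listed above can be summarized as a predicate. This is purely illustrative; the element names and flags come from the notes above, and the argument names are invented for the sketch:

```python
def msave_on_by_default(etype, solver, nlgeom, prestress, midside_nodes,
                        uniform_temp=True):
    # MSAVE,ON is the default only for SOLID186/SOLID187 parts solved with
    # the PCG solver, small strains (NLGEOM,OFF), no prestress effects,
    # all midside nodes present, and (for thermally dependent materials)
    # uniform prescribed temperatures.
    return (
        etype in ("SOLID186", "SOLID187")
        and solver == "PCG"
        and not nlgeom
        and not prestress
        and midside_nodes
        and uniform_temp
    )

print(msave_on_by_default("SOLID187", "PCG", nlgeom=False,
                          prestress=False, midside_nodes=True))  # True
```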
"""
command = f"MSAVE,{key}"
return self.run(command, **kwargs)
def msolve(self, numslv="", nrmtol="", nrmchkinc="", **kwargs):
"""Starts multiple solutions for random acoustics analysis with diffuse
APDL Command: MSOLVE
sound field.
Parameters
----------
numslv
Number of multiple solutions (load steps) corresponding to the
number of samplings. Default = 1.
nrmtol
Norm convergence tolerance for the averaged radiated sound powers
(see Notes).
nrmchkinc
Interval, in number of solutions, at which the norm convergence
check is performed (see Notes).
Notes
-----
The MSOLVE command starts multiple solutions (load steps) for random
acoustics analysis with multiple samplings.
The process is controlled by the norm convergence tolerance NRMTOL or
the number of multiple solutions NUMSLV (if the solution steps reach
the defined number).
The program checks the norm convergence by comparing two averaged sets
of radiated sound powers with the interval NRMCHKINC over the frequency
range. For example, if NRMCHKINC = 5, the averaged values from 5
solutions are compared with the averaged values from 10 solutions, then
the averaged values from 10 solutions are compared with the averaged
values from 15 solutions, and so on.
The incident diffuse sound field is defined via the DFSWAVE command.
The average result of multiple solutions with different samplings is
calculated via the PLST command.
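The norm-convergence check described above (comparing running averages at NRMCHKINC intervals) can be sketched as follows; this is a simplified scalar illustration, not the program's actual per-frequency check:

```python
def norm_check(powers, nrmchkinc, nrmtol):
    # Compare the average over the first (n - NRMCHKINC) solutions with
    # the average over all n solutions; converged when the relative
    # change is within NRMTOL.
    n = len(powers)
    if n < 2 * nrmchkinc or n % nrmchkinc:
        return False
    prev = sum(powers[: n - nrmchkinc]) / (n - nrmchkinc)
    curr = sum(powers) / n
    return abs(curr - prev) <= nrmtol * abs(prev)

print(norm_check([1.0] * 10, nrmchkinc=5, nrmtol=0.01))  # True
```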
"""
command = f"MSOLVE,{numslv},{nrmtol},{nrmchkinc}"
return self.run(command, **kwargs)
def opncontrol(self, lab="", value="", numstep="", **kwargs):
"""Sets decision parameter for automatically increasing the time step
APDL Command: OPNCONTROL
interval.
Parameters
----------
lab
DOF - Degree-of-freedom label used to base a decision for increasing the time step
(substep) interval in a nonlinear or transient analysis.
The only DOF label currently supported is TEMP.
OPENUPFACTOR - Factor for increasing the time step interval. Specify when AUTOTS,ON is issued
and specify a VALUE > 1.0 (up to 10.0). The default
VALUE = 1.5 (except for thermal analysis, where it
is 3.0). Generally, VALUE > 3.0 is not recommended.
value, numstep
Two values used in the algorithm for determining if the time step
interval can be increased. Valid only when Lab = DOF.
Notes
-----
This command is available only for nonlinear or full transient
analysis.
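The OPENUPFACTOR behavior amounts to scaling the next time step interval when the DOF criterion is satisfied. A minimal sketch follows; ``dt_max`` is a hypothetical user-side cap for illustration, not an OPNCONTROL argument:

```python
def open_up_time_step(dt, openupfactor=1.5, dt_max=None):
    # When AUTOTS,ON is active and the monitored DOF criterion allows it,
    # the substep interval is multiplied by OPENUPFACTOR
    # (VALUE > 1.0, up to 10.0).
    if not 1.0 < openupfactor <= 10.0:
        raise ValueError("OPENUPFACTOR must be > 1.0 and <= 10.0")
    dt_new = dt * openupfactor
    return dt_new if dt_max is None else min(dt_new, dt_max)

print(open_up_time_step(2.0))  # 3.0
```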
"""
command = f"OPNCONTROL,{lab},{value},{numstep}"
return self.run(command, **kwargs)
def outaero(self, sename="", timeb="", dtime="", **kwargs):
"""Outputs the superelement matrices and load vectors to formatted files
APDL Command: OUTAERO
for aeroelastic analysis.
Parameters
----------
sename
Name of the superelement that models the wind turbine supporting
structure. Defaults to the current Jobname.
timeb
First time at which the load vector is formed (defaults to be read
from SENAME.sub).
dtime
Time step size of the load vectors (defaults to be read from
SENAME.sub).
Notes
-----
Both TIMEB and DTIME must be blank if the time data is to be read from
the SENAME.sub file.
The matrix file (SENAME.SUB) must be available from the substructure
generation run before issuing this command. This superelement that
models the wind turbine supporting structure must contain only one
master node with six freedoms per node: UX, UY, UZ, ROTX, ROTY, ROTZ.
The master node represents the connection point between the turbine and
the supporting structure.
This command will generate four files that are exported to the
aeroelastic code for integrated wind turbine analysis. The four files
are Jobname.GNK for the generalized stiffness matrix, Jobname.GNC for
the generalized damping matrix, Jobname.GNM for the generalized mass
matrix and Jobname.GNF for the generalized load vectors.
For detailed information on how to perform a wind coupling analysis,
see Coupling to External Aeroelastic Analysis of Wind Turbines in the
Mechanical APDL Advanced Analysis Guide.
"""
command = f"OUTAERO,{sename},{timeb},{dtime}"
return self.run(command, **kwargs)
def ovcheck(self, method="", frequency="", set_="", **kwargs):
"""Checks for overconstraint among constraint equations and Lagrange
APDL Command: OVCHECK
multipliers.
Parameters
----------
method
Method used to determine which slave DOFs will be eliminated:
TOPO - Topological approach (default). This method only works with constraint
equations; it does not work with Lagrange multipliers.
ALGE - Algebraic approach.
NONE - Do not use overconstraint detection logic.
frequency
Frequency of overconstraint detection for static or full transient
analyses:
ITERATION - For all equilibrium iterations (default).
SUBSTEP - At the beginning of each substep.
LOADSTEP - At the beginning of each load step.
set\_
Set of equations:
All - Check for overconstraint between all constraint equations (default).
LAG - Check for overconstraint only on the set of equations that involves Lagrange
multipliers. This is faster than checking all sets,
especially when the model contains large MPC bonded contact
pairs.
Notes
-----
The OVCHECK command checks for overconstraint among the constraint
equations (CE/CP) and the Lagrange multipliers for the globally
assembled stiffness matrix. If overconstrained constraint equations or
Lagrange multipliers are detected, they are automatically removed from
the system of equations.
The constraint equations that are identified as redundant are removed
from the system and printed to the output file. It is very important
that you check the removed equations—they may lead to convergence
issues, especially for nonlinear analyses.
The Frequency and Set arguments are active only for the topological
method (Method = TOPO). If you do not issue the OVCHECK command,
overconstraint detection is performed topologically, and the slave DOFs
are also determined topologically.
Overconstraint detection slows down the run. We recommend using it to
validate that your model does not contain any overconstraints. Then,
you can switch back to the default method (no OVCHECK command is
needed).
As an example, consider the redundant set of constraint equations
defined below:
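(Hypothetical illustration; the two equations share the same
coefficients but differ in their constant terms.)
Equation 1:  UX(1) + UX(2) = 0
Equation 2:  UX(1) + UX(2) = 1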
Equation number 2 will be removed by the overconstraint detection
logic. However, this is an arbitrary decision since equation number 1
could be removed instead. This is an important choice as the constant
term is not the same in these two constraint equations. Therefore, you
must check the removed constraint equations carefully.
For detailed information on the topological and algebraic methods of
overconstraint detection, see Constraints: Automatic Selection of Slave
DOFs in the Mechanical APDL Theory Reference.
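The argument-to-command mapping can be sketched as follows (hypothetical
settings; no live MAPDL session is required):

```python
# Topological overconstraint check at the beginning of each substep,
# restricted to the equations involving Lagrange multipliers.
method, frequency, set_ = "TOPO", "SUBSTEP", "LAG"
command = f"OVCHECK,{method},{frequency},{set_}"
print(command)
```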
"""
command = f"OVCHECK,{method},{frequency},{set_}"
return self.run(command, **kwargs)
def pcgopt(
self,
lev_diff="",
reduceio="",
strmck="",
wrtfull="",
memory="",
lm_key="",
**kwargs,
):
"""Controls PCG solver options.
APDL Command: PCGOPT
Parameters
----------
lev_diff
Indicates the level of difficulty of the analysis. Valid
settings are AUTO or 0 (default), 1, 2, 3, 4, or 5. This
option applies to both the PCG solver when used in static
and full transient analyses and to the PCG Lanczos method
in modal analyses. Use AUTO to let ANSYS automatically
choose the proper level of difficulty for the model. Lower
values (1 or 2) generally provide the best performance for
well-conditioned problems. Values of 3 or 4 generally
provide the best performance for ill-conditioned problems;
however, higher values may increase the solution time for
well-conditioned problems. Higher level-of-difficulty
values typically require more memory. Using the highest
value of 5 essentially performs a factorization of the
global matrix (similar to the sparse solver) and may
require a very large amount of memory. If necessary, use
Memory to reduce the memory usage when using Lev_Diff = 5.
Lev_Diff = 5 is generally recommended for small- to
medium-sized problems when using the PCG Lanczos mode
extraction method.
reduceio
Controls whether the PCG solver will attempt to reduce I/O
performed during equation solution:
AUTO - Automatically chooses whether to reduce I/O or not
(default).
YES - Reduces I/O performed during equation solution in
order to reduce total solver time.
NO - Does NOT reduce I/O performed during equation solution.
strmck
Controls whether or not a Sturm sequence check is performed:
OFF - Does NOT perform the Sturm sequence check (default).
ON - Performs the Sturm sequence check.
wrtfull
Controls whether or not the .FULL file is written.
ON - Write the .FULL file (default).
OFF - Do not write the .FULL file.
memory
Controls whether to run using in-core or out-of-core mode
when using Lev_Diff = 5.
AUTO - Automatically chooses which mode to use (default).
INCORE - Run using in-core mode.
OOC - Run using out-of-core mode.
lm_key
Controls use of the PCG solver for MPC184 Lagrange
multiplier method elements. This option applies only to
the PCG solver when used in static and full transient
analyses.
OFF - Do not use the PCG solver for the MPC184 Lagrange
multiplier method (default).
ON - Allow use of the PCG solver for the MPC184 Lagrange
multiplier method.
Notes
-----
ReduceIO works independently of the MSAVE command in the PCG
solver. Setting ReduceIO to YES can significantly increase
the memory usage in the PCG solver.
To minimize the memory used by the PCG solver with respect to
the Lev_Diff option only, set Lev_Diff = 1 if you do not have
sufficient memory to run the PCG solver with Lev_Diff = AUTO.
The MSAVE,ON command is not valid when using Lev_Diff = 5. In
this case, the Lev_Diff value will automatically be reset to
2. The MSAVE,ON command is also not valid with the StrmCk
option. In this case, StrmCk will be set to OFF.
Distributed ANSYS Restriction: The Memory option and the
LM_Key option are not supported in Distributed ANSYS.
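The mapping from arguments to the emitted APDL command can be sketched
as follows (note the intentionally blank second field; the values are
hypothetical):

```python
# Level-of-difficulty 5 with out-of-core memory mode, as might be used
# for PCG Lanczos on a machine with limited memory.
lev_diff, reduceio, strmck, wrtfull, memory, lm_key = 5, "", "", "", "OOC", ""
command = f"PCGOPT,{lev_diff},,{reduceio},{strmck},{wrtfull},{memory},{lm_key}"
print(command)
```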
"""
command = f"PCGOPT,{lev_diff},,{reduceio},{strmck},{wrtfull},{memory},{lm_key}"
return self.run(command, **kwargs)
def perturb(self, type_="", matkey="", contkey="", loadcontrol="", **kwargs):
"""Sets linear perturbation analysis options.
APDL Command: PERTURB
Parameters
----------
type\_
Type of linear perturbation analysis to be performed:
STATIC - Perform a linear perturbation static analysis.
MODAL - Perform a linear perturbation modal analysis.
BUCKLE - Perform a linear perturbation eigenvalue buckling analysis.
HARMONIC - Perform a linear perturbation full harmonic analysis.
SUBSTR - Perform a linear perturbation substructure generation pass.
OFF - Do not perform a linear perturbation analysis (default).
matkey
Key for specifying how the linear perturbation analysis uses
material properties, valid for all structural elements except
contact elements. For more information, see Linear Perturbation
Analysis in the Mechanical APDL Theory Reference.
AUTO - The program selects the material properties for the linear
perturbation analysis automatically (default). The materials are
handled in the following way:
- For pure linear elastic materials used in the base analysis, the
  same properties are used in the linear perturbation analysis.
- For hyperelastic materials used in the base analysis, the material
  properties are assumed to be linear elastic in the linear
  perturbation analysis. The material property data (or material
  Jacobian) is obtained based on the tangent of the hyperelastic
  material's constitutive law at the point where restart occurs.
- For any nonlinear materials other than hyperelastic materials used
  in the base analysis, the material properties are assumed to be
  linear elastic in the linear perturbation analysis. The material
  data is the same as the linear portion of the nonlinear materials
  (that is, the parts defined by MP commands).
- For COMBIN39, the stiffness is that of the first segment of the
  force-deflection curve.
TANGENT - Use the tangent (material Jacobian) on the material
constitutive curve as the material property. The material property
remains linear in the linear perturbation analysis and is obtained at
the point of the base analysis where restart occurs. The materials are
handled in the following way:
- For pure linear elastic materials used in the base analysis, the
  same properties are used in the linear perturbation analysis.
  Because the material constitutive curve is linear, the tangent is
  the same as in the base analysis.
- For hyperelastic materials used in the base analysis, the program
  uses the same tangent as that used for MatKey = AUTO, and the
  results are therefore identical.
- For any nonlinear materials other than hyperelastic materials used
  in the base analysis, the material properties are obtained via the
  material tangent on the material constitutive curve at the restart
  point of the base analysis. The materials and properties typically
  differ from MatKey = AUTO, but the results could be identical or
  very similar if (a) the material is elasto-plastic rate-independent
  and is unloading (or has neutral loading) at the restart point, or
  (b) the material is rate-dependent, depending on the material
  properties and loading conditions.
- For COMBIN39, the stiffness is equal to the tangent of the current
  segment of the force-deflection curve.
- In a modal restart solution that follows a linear perturbation modal
  analysis, the TANGENT option is overridden by the AUTO option and
  linear material properties are used for stress calculations in the
  modal restart. See the discussion in the Notes for more information.
contkey
Key that controls contact status for the linear perturbation
analysis. This key controls all contact elements (TARGE169,
TARGE170, and CONTA171 through CONTA178) globally for all contact
pairs. Alternatively, contact status can be controlled locally per
contact pair by using the CNKMOD command. Note that the contact
status from the base analysis solution is always adjusted by the
local contact controls specified by CNKMOD first and then modified
by the global sticking or bonded control (ContKey = STICKING or
BONDED). The tables in the Notes section show how the contact
status is adjusted by CNKMOD and/or the ContKey setting.
CURRENT - Use the current contact status from the restart
snapshot (default). If the previous run is
nonlinear, then the nonlinear contact status at
the point of restart is frozen and used
throughout the linear perturbation analysis.
STICKING - For frictional contact pairs (MU > 0), use
sticking contact (e.g., ``MU*KN`` for tangential
contact stiffness) everywhere the contact state
is closed (i.e., status is sticking or
sliding). This option only applies to contact
pairs that are in contact and have a frictional
coefficient MU greater than zero. Contact pairs
without friction (MU = 0) and in a sliding
state remain free to slide in the linear
perturbation analysis.
BONDED - Any contact pairs that are in the closed
(sticking or sliding) state are moved to bonded
(for example, KN for both normal and tangential
contact stiffness). Contact pairs that have a
status of far-field or near-field remain open.
loadcontrol
Key that controls how the load vector of {Fperturbed} is
calculated. This control is provided for convenience of load
generation for linear perturbation analysis. In general, a new set
of loads is required for a linear perturbation analysis. This key
controls all mechanical loads; it does not affect non-mechanical
loads. Non-mechanical loads (including thermal loads) are always
kept (i.e., not deleted).
ALLKEEP - Keep all the boundary conditions (loads and
constraints) from the end of the load step of
the current restart point. This option is
convenient for further load application and is
useful for a linear perturbation analysis
restarted from a previous linear analysis. For
this option, {Fend} is the total load vector at
the end of the load step at the restart point.
INERKEEP - Delete all loads and constraints from the
restart step, except for displacement
constraints and inertia loads (default). All
displacement constraints and inertia loads are
kept for convenience when performing the linear
perturbation analysis. Note that nonzero and
tabular displacement constraints can be
considered as external loads; however, they are
not deleted when using this option.
PARKEEP - Delete all loads and constraints from the
restart step, except for displacement
constraints. All displacement constraints are
kept for convenience when performing the linear
perturbation analysis. Note that nonzero and
tabular displacement constraints can be
considered as external loads; however, they are
not deleted when using this option.
DZEROKEEP - Behaves the same as the PARKEEP option, except
that all nonzero displacement constraints are
set to zero upon the onset of linear
perturbation.
NOKEEP - Delete all the loads and constraints, including
all displacement constraints. For this option,
{Fend} is zero unless non-mechanical loads (e.g.,
thermal loads) are present.
Notes
-----
This command controls options relating to linear perturbation analyses.
It must be issued in the first phase of a linear perturbation analysis.
This command is also valid in PREP7.
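The first phase of a linear perturbation analysis typically sets the
perturbation options and then reforms the element matrices with
SOLVE,ELFORM. A minimal sketch of that sequence, using a hypothetical
recorder in place of a live MAPDL session:

```python
class FakeMapdl:
    # Hypothetical stand-in that records the APDL commands a real
    # session would send.
    def __init__(self):
        self.log = []

    def run(self, command, **kwargs):
        self.log.append(command)
        return command

    def perturb(self, type_="", matkey="", contkey="", loadcontrol="", **kwargs):
        return self.run(f"PERTURB,{type_},{matkey},{contkey},{loadcontrol}", **kwargs)

    def solve(self, action="", **kwargs):
        return self.run(f"SOLVE,{action}", **kwargs)

mapdl = FakeMapdl()
mapdl.perturb("MODAL", "AUTO", "CURRENT", "INERKEEP")  # perturbation options
mapdl.solve("ELFORM")  # reform element matrices in the first phase
print(mapdl.log)
```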
"""
command = f"PERTURB,{type_},{matkey},{contkey},{loadcontrol}"
return self.run(command, **kwargs)
def prscontrol(self, key="", **kwargs):
"""Specifies whether to include pressure load stiffness in the element
APDL Command: PRSCONTROL
stiffness formation.
Parameters
----------
key
Pressure load stiffness key. In general, use the default setting.
Use a non-default setting only if you encounter convergence
difficulties. Pressure load stiffness is automatically included
when using eigenvalue buckling analyses (ANTYPE,BUCKLE), equivalent
to Key = INCP. For all other types of analyses, valid arguments for
Key are:
NOPL - Pressure load stiffness not included for any elements.
(blank) (default) - Include pressure load stiffness for elements SURF153, SURF154, SURF156,
SURF159, SHELL181, PLANE182, PLANE183, SOLID185,
SOLID186, SOLID187, SOLSH190, BEAM188, BEAM189,
FOLLW201, SHELL208, SHELL209, SOLID272, SOLID273,
SHELL281, SOLID285, PIPE288, PIPE289, and
ELBOW290. Pressure load stiffness is not included
for element SOLID65.
INCP - Pressure load stiffness included for all of the default elements listed above
and SOLID65.
Notes
-----
This command is rarely needed. The default settings are recommended for
most analyses.
"""
command = f"PRSCONTROL,{key}"
return self.run(command, **kwargs)
def pscontrol(self, option="", key="", **kwargs):
"""Enables or disables shared-memory parallel operations.
APDL Command: PSCONTROL
Parameters
----------
option
Specify the operations for which you intend to enable/disable
parallel behavior:
ALL - Enable/disable parallel for all areas (default).
PREP - Enable/disable parallel during preprocessing (/PREP7).
SOLU - Enable/disable parallel during solution (/SOLU).
FORM - Enable/disable parallel during element matrix generation.
SOLV - Enable/disable parallel during equation solver.
RESU - Enable/disable parallel during element results calculation.
POST - Enable/disable parallel during postprocessing (/POST1 and /POST26).
STAT - List parallel operations that are enabled/disabled.
key
Option control key. Used for all Option values except STAT.
ON - Enable parallel operation.
OFF - Disable parallel operation.
Notes
-----
Use this command in shared-memory parallel operations.
This command is useful when you encounter minor discrepancies in a
nonlinear solution when using different numbers of processors. A
parallel operation applied to the element matrix generation can produce
a different nonlinear solution with a different number of processors.
Although the nonlinear solution converges to the same nonlinear
tolerance, the minor discrepancy created may not be desirable for
consistency.
Enabling/disabling parallel behavior for the solution (Option = SOLU)
supersedes the activation/deactivation of parallel behavior for element
matrix generation (FORM), equation solver (SOLV), and element results
calculation (RESU).
The SOLV option supports only the sparse direct and PCG solvers
(EQSLV,SPARSE or PCG). No other solvers are supported.
This command applies only to shared-memory architecture. It does not
apply to the Distributed ANSYS product.
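For example, to isolate element matrix generation as the source of
run-to-run differences, parallelism can be disabled for that step alone
(hypothetical usage; no live session is required):

```python
# Disable shared-memory parallelism only for element matrix generation.
option, key = "FORM", "OFF"
command = f"PSCONTROL,{option},{key}"
print(command)
```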
"""
command = f"PSCONTROL,{option},{key}"
return self.run(command, **kwargs)
def rate(self, option="", **kwargs):
"""Specifies whether the effect of creep strain rate will be used in the
APDL Command: RATE
solution of a load step.
Parameters
----------
option
Activates implicit creep analysis.
0 or OFF - No implicit creep analysis. This option is the default.
1 or ON - Perform implicit creep analysis.
Notes
-----
Set Option = 1 (or ON) to perform an implicit creep analysis (TB,CREEP
with TBOPT ≥ 1). For viscoplasticity/creep analysis, Option specifies
whether or not to include the creep calculation in the solution of a
load step. If Option = 1 (or ON), ANSYS performs the creep calculation.
Set an appropriate time for solving the load step via a TIME,TIME
command.
"""
command = f"RATE,{option}"
return self.run(command, **kwargs)
def resvec(self, key="", **kwargs):
"""Calculates or includes residual vectors.
APDL Command: RESVEC
Parameters
----------
key
Residual vector key:
OFF - Do not calculate or include residual vectors. This option is the default.
ON - Calculate or include residual vectors.
Notes
-----
In a modal analysis, the RESVEC command calculates residual vectors. In
a mode-superposition transient dynamic, mode-superposition harmonic,
PSD or spectrum analysis, the command includes residual vectors.
In a component mode synthesis (CMS) generation pass, the RESVEC command
calculates one residual vector which is included in the normal modes
basis used in the transformation matrix. It is supported for the three
available CMS methods. RESVEC,ON can only be specified in the first
load step of a generation pass and is ignored if issued at another load
step.
If rigid-body modes exist, pseudo-constraints are required for the
calculation. Issue the D,,,SUPPORT command to specify only the minimum
number of pseudo-constraints necessary to prevent rigid-body motion.
For more information about residual vector formulation, see Residual
Vector Method in the Mechanical APDL Theory Reference.
"""
command = f"RESVEC,{key}"
return self.run(command, **kwargs)
def rstoff(self, lab="", offset="", **kwargs):
"""Offsets node or element IDs in the FE geometry record.
APDL Command: RSTOFF
Parameters
----------
lab
The offset type:
NODE - Offset the node IDs.
ELEM - Offset the element IDs.
offset
A positive integer value specifying the offset value to apply. The
value must be greater than the number of nodes or elements in the
existing superelement results file.
Notes
-----
The RSTOFF command offsets node or element IDs in the FE geometry
record saved in the .rst results file. Use the command when expanding
superelements in a bottom-up substructuring analysis (where each
superelement is generated individually in a generation pass, and all
superelements are assembled together in the use pass).
With appropriate offsets, you can write results files with unique node
or element IDs and thus display the entire model even if the original
superelements have overlapping element or node ID sets. (Such results
files are incompatible with the .db database file saved at the
generation pass.)
The offset that you specify is based on the original superelement node
or element numbering, rather than on any offset specified via a SESYMM
or SETRAN command. When issuing an RSTOFF command, avoid specifying an
offset that creates conflicting node or element numbers for a
superelement generated via a SESYMM or SETRAN command.
If you issue the command to set non-zero offsets for node or element
IDs, you must bring the geometry into the database via the SET command
so that ANSYS can display the results. You must specify appropriate
offsets to avoid overlapping node or element IDs with other
superelement results files.
The command is valid only in the first load step of a superelement
expansion pass.
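The offset simply needs to exceed the largest ID in the existing
superelement results files. A sketch with hypothetical model sizes (no
live session is required):

```python
# Two superelements with overlapping node IDs; offsetting the second by
# more than the first's node count guarantees unique IDs after expansion.
num_nodes_se1 = 5000   # hypothetical size of the first superelement
offset_se2 = 10000     # must exceed num_nodes_se1
assert offset_se2 > num_nodes_se1
command = f"RSTOFF,NODE,{offset_se2}"
print(command)
```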
"""
command = f"RSTOFF,{lab},{offset}"
return self.run(command, **kwargs)
def scopt(self, tempdepkey="", **kwargs):
"""Specifies System Coupling options.
APDL Command: SCOPT
Parameters
----------
tempdepkey
Temperature-dependent behavior key based on the convection
coefficient:
* ``"YES"`` : A negative convection coefficient, -N, is
assumed to be a function of temperature and is determined
from the HF property table for material N (MP command). This
is the default.
* ``"NO"`` : A negative convection coefficient, -N, is used as
is in the convection calculation.
Notes
-----
By default in the Mechanical APDL program, a negative convection
coefficient value triggers temperature-dependent behavior. In
System Coupling, and in some one-way CFD to Mechanical APDL
thermal simulations, it is desirable to allow convection
coefficients to be used as negative values. To do so, issue the
command ``scopt("NO")``.
"""
return self.run(f"SCOPT,{tempdepkey}", **kwargs)
def seexp(self, sename="", usefil="", imagky="", expopt="", **kwargs):
"""Specifies options for the substructure expansion pass.
APDL Command: SEEXP
Parameters
----------
sename
The name (case-sensitive) of the superelement matrix file created
by the substructure generation pass (Sename.SUB). Defaults to the
initial jobname. If a number, it is the element number of the
superelement as used in the use pass.
usefil
The name of the file containing the superelement degree-of-freedom
(DOF) solution created by the substructure use pass (Usefil.DSUB).
imagky
Key to specify use of the imaginary component of the DOF solution.
Applicable only if the use pass is a harmonic (ANTYPE,HARMIC)
analysis:
OFF - Use real component of DOF solution (default).
ON - Use imaginary component of DOF solution.
expopt
Key to specify whether the superelement (ANTYPE,SUBSTR) expansion
pass (EXPASS,ON) should transform the geometry:
OFF - Do not transform node or element locations (default).
ON - Transform node or element locations in the FE geometry record of the .rst
results file.
Notes
-----
Specifies options for the expansion pass of the substructure analysis
(ANTYPE,SUBSTR). If used in SOLUTION, this command is valid only
within the first load step.
If you specify geometry transformation (Expopt = ON), you must retrieve
the transformation matrix (if it exists) from the specified .SUB file.
The command updates the nodal X, Y, and Z coordinates to represent the
transformed node locations. The Expopt option is useful when you want
to expand superelements created from other superelements (via SETRAN or
SESYMM commands). For more information, see Superelement Expansion in
Transformed Locations and Plotting or Printing Mode Shapes.
This command is also valid in /PREP7.
"""
command = f"SEEXP,{sename},{usefil},{imagky},{expopt}"
return self.run(command, **kwargs)
def seopt(
self, sename="", sematr="", sepr="", sesst="", expmth="", seoclvl="", **kwargs
):
"""Specifies substructure analysis options.
APDL Command: SEOPT
Parameters
----------
sename
The name (case-sensitive, thirty-two character maximum) assigned to
the superelement matrix file. The matrix file will be named
Sename.SUB. This field defaults to Fname on the /FILNAME command.
sematr
Matrix generation key:
1 - Generate stiffness (or conductivity) matrix (default).
2 - Generate stiffness and mass (or conductivity and specific heat) matrices.
3 - Generate stiffness, mass and damping matrices.
sepr
Print key:
0 - Do not print superelement matrices or load vectors.
1 - Print both load vectors and superelement matrices.
2 - Print load vectors but not matrices.
sesst
Stress stiffening key:
0 - Do not save space for stress stiffening in a later run.
1 - Save space for the stress stiffening matrix (calculated in a subsequent
generation run after the expansion pass).
expmth
Expansion method for expansion pass:
BACKSUB - Save the necessary factorized matrix files for
backsubstitution during subsequent expansion passes
(default). This normally results in large disk space usage.
RESOLVE - Do not save factorized matrix files. The global stiffness
matrix is reformed during the expansion pass. This option
provides an effective way to reduce disk space usage. It
cannot be used if the use pass includes large deflections
(NLGEOM,ON).
seoclvl
For the added-mass calculation, the ocean level to use when ocean
waves (OCTYPE,,WAVE) are present:
ATP - The ocean level at this point in time (default).
MSL - The mean ocean level.
Notes
-----
The SEOPT command specifies substructure analysis options
(ANTYPE,SUBSTR). If used during solution, the command is valid only
within the first load step.
When ocean waves (OCTYPE,,WAVE) are present, the SeOcLvL argument
specifies the ocean height or level to use for the added-mass
calculation, as the use-run analysis type is unknown during the
generation run.
The expansion pass method RESOLVE is not supported with component mode
synthesis analysis (CMSOPT). ExpMth is automatically set to BACKSUB for
CMS analysis. The RESOLVE method invalidates the use of the NUMEXP
command. The RESOLVE method does not allow the computation of results
based on nodal velocity and nodal acceleration (damping force, inertial
force, kinetic energy, etc.) in the substructure expansion pass.
This command is also valid in PREP7.
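A generation-pass setup might be sketched as follows (hypothetical
superelement name; no live session is required):

```python
# Generate stiffness and mass matrices (SEMATR = 2) for superelement
# "wing", using RESOLVE to avoid storing factorized matrix files.
sename, sematr, sepr, sesst, expmth, seoclvl = "wing", 2, 0, 0, "RESOLVE", ""
command = f"SEOPT,{sename},{sematr},{sepr},{sesst},{expmth},{seoclvl}"
print(command)
```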
"""
command = f"SEOPT,{sename},{sematr},{sepr},{sesst},{expmth},{seoclvl}"
return self.run(command, **kwargs)
def snoption(
self,
rangefact="",
blocksize="",
robustlev="",
compute="",
solve_info="",
**kwargs,
):
"""Specifies Supernode (SNODE) eigensolver options.
APDL Command: SNOPTION
Parameters
----------
rangefact
Factor used to control the range of eigenvalues computed for each
supernode. The value of RangeFact must be a number between 1.0 and
5.0. By default the RangeFact value is set to 2.0, which means that
all eigenvalues between 0 and ``2*FREQE`` are computed for each
supernode (where FREQE is the upper end of the frequency range of
interest as specified on the MODOPT command). As the RangeFact
value increases, the eigensolution for the SNODE solver becomes
more accurate and the computational time increases.
blocksize
BlockSize to be used when computing the final eigenvectors. The
value of Blocksize must be either MAX or a number between 1 and
NMODE, where NMODE is the number of modes to be computed as set on
the MODOPT command. Input a value of MAX to force the algorithm to
allocate enough memory to hold all of the final eigenvectors in
memory and, therefore, only read through the file containing the
supernode eigenvectors once. Note that this setting is ONLY
recommended when there is sufficient physical memory on the machine
to safely hold all of the final eigenvectors in memory.
robustlev
Parameter used to control the robustness of the SNODE eigensolver.
The value of RobustLev must be a number between 0 and 10. Lower
values of RobustLev allow the eigensolver to run in the most
efficient manner for optimal performance. Higher values of
RobustLev often slow down the performance of the eigensolver, but
can increase the robustness; this may be desirable if a problem is
detected with the eigensolver or its eigensolution.
compute
Key to control which computations are performed by the Supernode
eigensolver:
EVALUE - The eigensolver computes only the eigenvalues.
EVECTOR - The eigensolver computes only the eigenvectors
(must be preceded by a modal analysis where the
eigenvalues were computed using the Supernode
eigensolver).
BOTH - The eigensolver computes both the eigenvalues and
eigenvectors in the same pass (default).
solve_info
Solver output option:
OFF - Turns off additional output printing from the
Supernode eigensolver (default).
PERFORMANCE - Turns on additional output printing from the
Supernode eigensolver, including a
performance summary and a summary of file
I/O for the Supernode
eigensolver. Information on memory usage
during assembly of the global matrices (that
is, creation of the Jobname.FULL file) is
also printed with this option.
Notes
-----
This command specifies options for the Supernode (SNODE)
eigensolver.
Setting RangeFact to a value greater than 2.0 improves the
accuracy of the computed eigenvalues and eigenvectors but often
increases the computing time of the SNODE eigensolver.
Conversely, setting RangeFact to a value less than 2.0 degrades
the accuracy but often reduces the computing time. The default
value of 2.0 is a good blend of accuracy and performance.
The SNODE eigensolver reads the eigenvectors and related
information for each supernode from a file and uses that
information to compute the final eigenvectors. For each
eigenvalue/eigenvector requested by the user, the program must
do one pass through the entire file that contains the
supernode eigenvectors. By choosing a BlockSize value greater
than 1, the program can compute BlockSize number of final
eigenvectors for each pass through the file. Therefore,
smaller values of BlockSize result in more I/O, and larger
values of BlockSize result in less I/O. Larger values of
BlockSize also result in significant additional memory usage,
as BlockSize number of final eigenvectors must be stored in
memory. The default Blocksize of min(NMODE,40) is normally a
good choice to balance memory and I/O usage.
The RobustLev field should only be used when a problem is
detected with the accuracy of the final solution or if the
Supernode eigensolver fails while computing the
eigenvalues/eigenvectors. Setting RobustLev to a value greater
than 0 will cause the performance of the eigensolver to
deteriorate. If the performance deteriorates too much or if
the eigensolver continues to fail when setting the RobustLev
field to higher values, then switching to another eigensolver
such as Block Lanczos or PCG Lanczos is recommended.
Setting Compute = EVALUE causes the Supernode eigensolver to
compute only the requested eigenvalues. During this process a
Jobname.SNODE file is written; however, a Jobname.MODE file is
not written. Thus, errors will likely occur in any downstream
computations that require the Jobname.MODE file (for example,
participation factor computations, mode-superposition
transient/harmonic analysis, PSD analysis). Setting Compute =
EVECTOR causes the Supernode eigensolver to compute only the
corresponding eigenvectors. The Jobname.SNODE file and the
associated Jobname.FULL file are required when requesting
these eigenvectors. In other words, the eigenvalues must have
already been computed for this model before computing the
eigenvectors. This field can be useful in order to separate
the two steps (computing eigenvalues and computing
eigenvectors).
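The mapping from arguments to the emitted command (note the unused
fifth field) can be sketched with hypothetical values; no live session
is required:

```python
# Widen the per-supernode eigenvalue range for accuracy, hold all final
# eigenvectors in memory, and print a performance summary.
rangefact, blocksize, robustlev, compute, solve_info = 3.0, "MAX", "", "", "PERFORMANCE"
command = f"SNOPTION,{rangefact},{blocksize},{robustlev},{compute},,{solve_info}"
print(command)
```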
"""
command = (
f"SNOPTION,{rangefact},{blocksize},{robustlev},{compute},,{solve_info}"
)
return self.run(command, **kwargs)
def solve(self, action="", **kwargs):
"""Starts a solution.
APDL Command: SOLVE
Parameters
----------
action
Action to be performed on solve (used only for linear perturbation
analyses).
ELFORM - Reform all appropriate element matrices in the first phase of a linear
perturbation analysis.
Notes
-----
Starts the solution of one load step of a solution sequence based on
the current analysis type and option settings. Use Action = ELFORM only
in the first phase of a linear perturbation analysis.
"""
command = f"SOLVE,{action}"
return self.run(command, **kwargs)
def stabilize(
self, key="", method="", value="", substpopt="", forcelimit="", **kwargs
):
"""Activates stabilization for all elements that support nonlinear
APDL Command: STABILIZE
stabilization.
Parameters
----------
key
Key for controlling nonlinear stabilization:
OFF - Deactivate stabilization. This value is the default.
CONSTANT - Activate stabilization. The energy-dissipation ratio or damping factor remains
constant during the load step.
REDUCE - Activate stabilization. The energy-dissipation ratio or damping factor is
reduced linearly to zero at the end of the load step from
the specified or calculated value.
method
The stabilization-control method:
ENERGY - Use the energy-dissipation ratio as the control. This value is the default
when Key ≠ OFF.
DAMPING - Use the damping factor as the control.
value
The energy-dissipation ratio (Method = ENERGY) or damping factor
(Method = DAMPING). This value must be greater than 0 when Method =
ENERGY or Method = DAMPING. When Method = ENERGY, this value is
usually a number between 0 and 1.
substpopt
Option for the first substep of the load step:
NO - Stabilization is not activated for the first substep even when it does not
converge after the minimal allowed time increment is reached.
This value is the default when Key ≠ OFF.
MINTIME - Stabilization is activated for the first substep if it still does not converge
after the minimal allowed time increment is reached.
ANYTIME - Stabilization is activated for the first substep. Use this option if
stabilization was active for the previous load step via
Key = CONSTANT.
forcelimit
The stabilization force limit coefficient, such that 0 < FORCELIMIT
< 1. The default value is 0.2. To omit a stabilization force check,
set this value to 0.
Notes
-----
Once issued, a STABILIZE command remains in effect until you reissue
the command.
For the energy dissipation ratio, specify VALUE = 1.0e-4 if you have no
prior experience with the current model; if convergence problems are
still an issue, increase the value gradually. The damping factor is
mesh-, material-, and time-step-dependent, so it is necessary to obtain
an initial reference value from a previous run (such as a run with the
energy-dissipation ratio as input).
Exercise caution when specifying SubStpOpt = MINTIME or ANYTIME for the
first load step; ANSYS, Inc. recommends this option only for
experienced users. If stabilization was active for the previous load
step via Key = CONSTANT and convergence is an issue for the first
substep, specify SubStpOpt = ANYTIME.
When the L2-norm of the stabilization force (CSRSS value) exceeds the
L2-norm of the internal force multiplied by the stabilization force
coefficient, ANSYS issues a message displaying both the stabilization
force norm and the internal force norm. The FORCELIMIT argument allows
you to change the default stabilization force coefficient (normally 20
percent).
This command stabilizes the degrees of freedom for current-technology
elements only. Other elements can be included in the FE model, but
their degrees of freedom are not stabilized.
For more information about nonlinear stabilization, see Unstable
Structures in the Structural Analysis Guide. For additional tips that
can help you to achieve a stable final model, see Simplify Your Model
in the Structural Analysis Guide.
"""
command = f"STABILIZE,{key},{method},{value},{substpopt},{forcelimit}"
return self.run(command, **kwargs)
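As a sanity check on argument order, the command string this method assembles can be reproduced with a standalone helper (hypothetical, not part of the PyMAPDL API):

```python
def build_stabilize_command(key="", method="", value="", substpopt="", forcelimit=""):
    # Mirrors the f-string in stabilize(): omitted arguments become empty fields.
    return f"STABILIZE,{key},{method},{value},{substpopt},{forcelimit}"

# A first attempt per the Notes: constant stabilization with a small
# energy-dissipation ratio.
print(build_stabilize_command("CONSTANT", "ENERGY", 1e-4))  # STABILIZE,CONSTANT,ENERGY,0.0001,,
```

The trailing empty fields are accepted by APDL, which treats missing arguments as defaults.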
def thexpand(self, key="", **kwargs):
"""Enables or disables thermal loading
APDL Command: THEXPAND
Parameters
----------
key
Activation key:
ON - Thermal loading is included in the load vector (default).
OFF - Thermal loading is not included in the load vector.
Notes
-----
Temperatures applied in the analysis are used by default to evaluate
material properties and contribute to the load vector if the
temperature does not equal the reference temperature and a coefficient
of thermal expansion is specified.
Use THEXPAND,OFF to evaluate the material properties but not contribute
to the load vector. This capability is particularly useful when
performing a harmonic analysis where you do not want to include
harmonically varying thermal loads. It is also useful in a modal
analysis when computing a modal load vector but excluding the thermal
load.
This command is valid for all analysis types except linear perturbation
modal and linear perturbation harmonic analyses. For these two linear
perturbation analysis types, the program internally sets THEXPAND,OFF,
and it cannot be set to ON by using this command (THEXPAND,ON is
ignored).
"""
command = f"THEXPAND,{key}"
return self.run(command, **kwargs)
def thopt(
self,
refopt="",
reformtol="",
ntabpoints="",
tempmin="",
tempmax="",
algo="",
**kwargs,
):
"""Specifies nonlinear transient thermal solution options.
APDL Command: THOPT
Parameters
----------
refopt
Matrix reform option.
FULL - Use the full Newton-Raphson solution option (default). All subsequent input
values are ignored.
QUASI - Use a selective reform solution option based on REFORMTOL.
reformtol
Property change tolerance for Matrix Reformation (.05 default). The
thermal matrices are reformed if the maximum material property
change in an element (from the previous reform time) is greater
than the reform tolerance. Valid only when Refopt = QUASI.
ntabpoints
Number of points in Fast Material Table (64 default). Valid only
when Refopt = QUASI.
tempmin
Minimum temperature for Fast Material Table. Defaults to the
minimum temperature defined by the MPTEMP command for any material
property defined. Valid only when Refopt = QUASI.
tempmax
Maximum temperature for Fast Material Table. Defaults to the
maximum temperature defined by the MPTEMP command for any material
property defined. Valid only when Refopt = QUASI.
--
Reserved field.
algo
Specifies which solution algorithm to apply:
0 - Multipass (default).
1 - Iterative.
Notes
-----
The QUASI matrix reform option is supported by the ICCG, JCG, and
sparse solvers only (EQSLV).
For Refopt = QUASI:
Results from a restart may be different than results from a single run
because the stiffness matrices are always recreated in a restart run,
but may or may not be in a single run (depending on the behavior
resulting from the REFORMTOL setting). Additionally, results may differ
between two single runs as well, if the matrices are reformed as a
result of the REFORMTOL setting.
Midside node temperatures are not calculated if 20-node thermal solid
elements (SOLID90 or SOLID279) are used.
For more information, see Solution Algorithms Used in Transient Thermal
Analysis in the Thermal Analysis Guide.
"""
command = f"THOPT,{refopt},{reformtol},{ntabpoints},{tempmin},{tempmax},{algo}"
return self.run(command, **kwargs)
# --- File: shovel.py (repo: folklabs/waste-service-standards, license: MIT) ---
import csv
import json
import jinja2
import os
import ramlfications
import time
from livereload import Server, shell
from shovel import task
RAML_FILE = 'raml/waste_services.raml'
def parse_raml(template_path):
api = ramlfications.parse(RAML_FILE)
data = {}
data['collection_event_types'] = []
with open('taxonomies/collection_event_types.csv', 'rb') as csvfile:
spamreader = csv.DictReader(csvfile)
# header = spamreader.read()
for row in spamreader:
data['collection_event_types'].append(row)
env = jinja2.Environment(loader=jinja2.FileSystemLoader('./templates'))
env.filters['jsonify'] = json.dumps
template = env.get_template(template_path)
f = open(os.path.join('docs', template_path), 'w')
f.write(template.render(api=api, data=data))
f.close()
def scan_files():
print 'Scanning files...'
for root, subdirs, files in os.walk('templates'):
for f in files:
templates_sub_path = root.replace('templates/', '')
template = os.path.join(templates_sub_path, f)
if f == '.DS_Store':
continue
parse_raml(template)
@task
def raml():
'''Converts RAML to Markdown'''
scan_files()
@task
def raml_watch():
'''Converts RAML to Markdown'''
# TODO: move to var
# template_file = open('api-templates/api.md')
props = os.stat(RAML_FILE)
this = last = props.st_mtime
print 'Watching for changes...'
while 1:
if this > last:
print 'Updating output.'
last = this
            scan_files()  # parse_raml() requires a template path; rebuild all templates instead
props = os.stat(RAML_FILE)
this = props.st_mtime
time.sleep(0.2)
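The mtime-polling loop in `raml_watch` can be factored into a small, testable helper; this sketch uses Python 3, unlike the (Python 2) module above:

```python
import os
import time

def wait_for_change(path, last_mtime, timeout=1.0, poll=0.05):
    # Poll st_mtime the way raml_watch does; return the new mtime once it
    # differs from last_mtime, or None if nothing changed before timeout.
    deadline = time.time() + timeout
    while time.time() < deadline:
        mtime = os.stat(path).st_mtime
        if mtime != last_mtime:
            return mtime
        time.sleep(poll)
    return None
```

Polling by mtime avoids platform-specific file-watch APIs at the cost of a small fixed latency (the poll interval).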
@task
def watch():
server = Server()
server.watch('templates/**/*', scan_files)
server.watch('raml/*', scan_files)
server.watch('examples/*.json', scan_files)
server.serve()
@task
def hello(name):
'''Prints hello and the provided name'''
print 'Hello, %s' % name
# --- File: classNotes/object_orientation9/Polymorphism.py (repo: minefarmer/Think_like_a_programmer, license: Unlicense) ---
''' Polymorphism
Polymorphism means that different objects can behave in different ways for the same message (function call).
Consequently, the sender of a message does not need to know the exact class of the receiver.
Example - Drawing Application
Drawing Pane Sender Object
/|\
/ | \
/ | \
/ | \
/ | \
/ | \
/ | \
draw / draw draw
/ | \
Triangle circle Rectangle
|____________|___________|
Reciever Objects
Polymorphism (Wikipedia)
In programming languages, polymorphism is the provision of a single interface to entities of different types.
A polymorphic type is one whose operations can also be applied to values of some other type, or types.
'''
from abc import ABC, abstractmethod

class Animal(ABC):
    def __init__(self, name):
        self.__name = name
    @abstractmethod
    def makeNoise(self):
        pass

    @abstractmethod
    def eat(self):
        pass

    def move(self):
        print("I can move anywhere")

    def getName(self):
        return self.__name
class Lion(Animal):
    def __init__(self, name):
        super().__init__(name)

    def makeNoise(self):
        print("I can roar...")

    def eat(self):
        print("I can eat buffaloes, zebras, young elephants")
class Cat(Animal):
    def __init__(self, name):
        super().__init__(name)

    def makeNoise(self):
        print("Meow meow...")

    def eat(self):
        print("I can eat mice...")
animals = [Lion("Woofie")], Cat("Max")
for animal in animals:
print(animal.getName())
print(animal.makeNoise())
print(animal.eat())
# Traceback (most recent call last):
# File "/home/carl/Desktop/MatsHub/Think_like_a_programmer/classNotes/object_orientation9/Polymorphism.py", line 73, in <module>
# print(animal.getName())
# AttributeError: 'list' object has no attribute 'getName'
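The docstring's drawing-application diagram can also be reproduced as a minimal runnable sketch (hypothetical Shape classes, with return values instead of prints so the dispatch is easy to check):

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def draw(self):
        pass

class Triangle(Shape):
    def draw(self):
        return "triangle"

class Circle(Shape):
    def draw(self):
        return "circle"

# The drawing pane sends the same draw message to every receiver;
# each receiver responds in its own way.
shapes = [Triangle(), Circle()]
print([s.draw() for s in shapes])  # ['triangle', 'circle']
```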
# --- File: musicmore/cart/processors.py (repo: IPenuelas/musicmore_r01, license: MIT) ---
from .models import Cart, CartItem
from .views import _cart_id
def ctx_dict_cart(request):
cart = Cart.objects.filter(cart_id=_cart_id(request))
cart_items = CartItem.objects.all().filter(cart=cart[:1])
ctx_cart = {'CTX_CART_ITEMS':cart_items}
return ctx_cart
# --- File: simulator/kruxsim/mocks/flash.py (repo: odudex/krux, license: MIT) ---
import sys
from unittest import mock
flash = bytearray(8 * 1024 * 1024)
def read_data(addr, amount):
return flash[addr : addr + amount]
def write_data(addr, data):
flash[addr : addr + len(data)] = data
if "flash" not in sys.modules:
sys.modules["flash"] = mock.MagicMock(read=read_data, write=write_data)
# --- File: models/new/sencebgan.py (repo: yigitozgumus/Polimi_Thesis, license: MIT) ---
import tensorflow as tf
from base.base_model import BaseModel
from utils.alad_utils import get_getter
import utils.alad_utils as sn
class SENCEBGAN(BaseModel):
def __init__(self, config):
super(SENCEBGAN, self).__init__(config)
self.build_model()
self.init_saver()
def build_model(self):
############################################################################################
# INIT
############################################################################################
# Kernel initialization for the convolutions
if self.config.trainer.init_type == "normal":
self.init_kernel = tf.random_normal_initializer(mean=0.0, stddev=0.02)
elif self.config.trainer.init_type == "xavier":
self.init_kernel = tf.contrib.layers.xavier_initializer(
uniform=False, seed=None, dtype=tf.float32
)
# Placeholders
self.is_training_gen = tf.placeholder(tf.bool)
self.is_training_dis = tf.placeholder(tf.bool)
self.is_training_enc_g = tf.placeholder(tf.bool)
self.is_training_enc_r = tf.placeholder(tf.bool)
self.feature_match1 = tf.placeholder(tf.float32)
self.feature_match2 = tf.placeholder(tf.float32)
self.image_input = tf.placeholder(
tf.float32, shape=[None] + self.config.trainer.image_dims, name="x"
)
self.noise_tensor = tf.placeholder(
tf.float32, shape=[None, self.config.trainer.noise_dim], name="noise"
)
############################################################################################
# MODEL
############################################################################################
self.logger.info("Building training graph...")
with tf.variable_scope("SENCEBGAN"):
# First training part
# G(z) ==> x'
with tf.variable_scope("Generator_Model"):
self.image_gen = self.generator(self.noise_tensor)
# Discriminator outputs
with tf.variable_scope("Discriminator_Model"):
self.embedding_real, self.decoded_real = self.discriminator(
self.image_input, do_spectral_norm=self.config.trainer.do_spectral_norm
)
self.embedding_fake, self.decoded_fake = self.discriminator(
self.image_gen, do_spectral_norm=self.config.trainer.do_spectral_norm
)
# Second training part
# E(x) ==> z'
with tf.variable_scope("Encoder_G_Model"):
self.image_encoded = self.encoder_g(self.image_input)
# G(z') ==> G(E(x)) ==> x''
with tf.variable_scope("Generator_Model"):
self.image_gen_enc = self.generator(self.image_encoded)
# Discriminator outputs
with tf.variable_scope("Discriminator_Model"):
self.embedding_enc_fake, self.decoded_enc_fake = self.discriminator(
self.image_gen_enc, do_spectral_norm=self.config.trainer.do_spectral_norm
)
self.embedding_enc_real, self.decoded_enc_real = self.discriminator(
self.image_input, do_spectral_norm=self.config.trainer.do_spectral_norm
)
with tf.variable_scope("Discriminator_Model_XX"):
self.im_logit_real, self.im_f_real = self.discriminator_xx(
self.image_input,
self.image_input,
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
self.im_logit_fake, self.im_f_fake = self.discriminator_xx(
self.image_input,
self.image_gen_enc,
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
# Third training part
with tf.variable_scope("Encoder_G_Model"):
self.image_encoded_r = self.encoder_g(self.image_input)
with tf.variable_scope("Generator_Model"):
self.image_gen_enc_r = self.generator(self.image_encoded_r)
with tf.variable_scope("Encoder_R_Model"):
self.image_ege = self.encoder_r(self.image_gen_enc_r)
with tf.variable_scope("Discriminator_Model_ZZ"):
self.z_logit_real, self.z_f_real = self.discriminator_zz(
self.image_encoded_r,
self.image_encoded_r,
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
self.z_logit_fake, self.z_f_fake = self.discriminator_zz(
self.image_encoded_r,
self.image_ege,
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
############################################################################################
# LOSS FUNCTIONS
############################################################################################
with tf.name_scope("Loss_Functions"):
with tf.name_scope("Generator_Discriminator"):
# Discriminator Loss
if self.config.trainer.mse_mode == "norm":
self.disc_loss_real = tf.reduce_mean(
self.mse_loss(
self.decoded_real,
self.image_input,
mode="norm",
order=self.config.trainer.order,
)
)
self.disc_loss_fake = tf.reduce_mean(
self.mse_loss(
self.decoded_fake,
self.image_gen,
mode="norm",
order=self.config.trainer.order,
)
)
elif self.config.trainer.mse_mode == "mse":
self.disc_loss_real = self.mse_loss(
self.decoded_real,
self.image_input,
mode="mse",
order=self.config.trainer.order,
)
self.disc_loss_fake = self.mse_loss(
self.decoded_fake,
self.image_gen,
mode="mse",
order=self.config.trainer.order,
)
self.loss_discriminator = (
tf.math.maximum(self.config.trainer.disc_margin - self.disc_loss_fake, 0)
+ self.disc_loss_real
)
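The expression above is the EBGAN-style margin loss: the discriminator's reconstruction error on real samples is minimized directly, while fake samples only contribute as long as their error is below the margin. A minimal scalar sketch (hypothetical helper, plain Python):

```python
def ebgan_disc_loss(rec_err_real, rec_err_fake, margin):
    # Push real reconstruction error down; push fake reconstruction
    # error up, but only until it clears the margin (hinge term).
    return rec_err_real + max(margin - rec_err_fake, 0)

print(ebgan_disc_loss(0.25, 0.5, 1.0))  # 0.75
```

Once the fake error exceeds the margin, the hinge term vanishes and the fake samples no longer drive the discriminator.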
# Generator Loss
pt_loss = 0
if self.config.trainer.pullaway:
pt_loss = self.pullaway_loss(self.embedding_fake)
self.loss_generator = self.disc_loss_fake + self.config.trainer.pt_weight * pt_loss
# New addition to enforce visual similarity
delta_noise = self.embedding_real - self.embedding_fake
delta_flat = tf.layers.Flatten()(delta_noise)
loss_noise_gen = tf.reduce_mean(tf.norm(delta_flat, ord=2, axis=1, keepdims=False))
self.loss_generator += 0.1 * loss_noise_gen
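`pullaway_loss` is defined elsewhere in this repository; assuming it implements the usual EBGAN pulling-away term (mean squared cosine similarity over distinct embedding pairs), a plain-Python sketch looks like this — an assumption about the helper, not its actual code:

```python
import math

def pullaway_loss(embeddings):
    # embeddings: list of equal-length vectors (a batch of flattened codes).
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    n = len(embeddings)
    total = sum(cos(embeddings[i], embeddings[j]) ** 2
                for i in range(n) for j in range(n) if i != j)
    return total / (n * (n - 1))

# Orthogonal embeddings are maximally "pulled away": the term is 0.
print(pullaway_loss([[1.0, 0.0], [0.0, 1.0]]))  # 0.0
```

Minimizing this term discourages the generator from collapsing all samples onto similar embeddings.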
with tf.name_scope("Encoder_G"):
if self.config.trainer.mse_mode == "norm":
self.loss_enc_rec = tf.reduce_mean(
self.mse_loss(
self.image_gen_enc,
self.image_input,
mode="norm",
order=self.config.trainer.order,
)
)
self.loss_enc_f = tf.reduce_mean(
self.mse_loss(
self.decoded_enc_real,
self.decoded_enc_fake,
mode="norm",
order=self.config.trainer.order,
)
)
elif self.config.trainer.mse_mode == "mse":
self.loss_enc_rec = tf.reduce_mean(
self.mse_loss(
self.image_gen_enc,
self.image_input,
mode="mse",
order=self.config.trainer.order,
)
)
self.loss_enc_f = tf.reduce_mean(
self.mse_loss(
self.embedding_enc_real,
self.embedding_enc_fake,
mode="mse",
order=self.config.trainer.order,
)
)
self.loss_encoder_g = (
self.loss_enc_rec + self.config.trainer.encoder_f_factor * self.loss_enc_f
)
if self.config.trainer.enable_disc_xx:
self.enc_xx_real = tf.nn.sigmoid_cross_entropy_with_logits(
logits=self.im_logit_real, labels=tf.zeros_like(self.im_logit_real)
)
self.enc_xx_fake = tf.nn.sigmoid_cross_entropy_with_logits(
logits=self.im_logit_fake, labels=tf.ones_like(self.im_logit_fake)
)
self.enc_loss_xx = tf.reduce_mean(self.enc_xx_real + self.enc_xx_fake)
self.loss_encoder_g += self.enc_loss_xx
with tf.name_scope("Encoder_R"):
if self.config.trainer.mse_mode == "norm":
self.loss_encoder_r = tf.reduce_mean(
self.mse_loss(
self.image_ege,
self.image_encoded_r,
mode="norm",
order=self.config.trainer.order,
)
)
elif self.config.trainer.mse_mode == "mse":
self.loss_encoder_r = tf.reduce_mean(
self.mse_loss(
self.image_ege,
self.image_encoded_r,
mode="mse",
order=self.config.trainer.order,
)
)
if self.config.trainer.enable_disc_zz:
self.enc_zz_real = tf.nn.sigmoid_cross_entropy_with_logits(
logits=self.z_logit_real, labels=tf.zeros_like(self.z_logit_real)
)
self.enc_zz_fake = tf.nn.sigmoid_cross_entropy_with_logits(
logits=self.z_logit_fake, labels=tf.ones_like(self.z_logit_fake)
)
self.enc_loss_zz = tf.reduce_mean(self.enc_zz_real + self.enc_zz_fake)
self.loss_encoder_r += self.enc_loss_zz
if self.config.trainer.enable_disc_xx:
with tf.name_scope("Discriminator_XX"):
self.loss_xx_real = tf.nn.sigmoid_cross_entropy_with_logits(
logits=self.im_logit_real, labels=tf.ones_like(self.im_logit_real)
)
self.loss_xx_fake = tf.nn.sigmoid_cross_entropy_with_logits(
logits=self.im_logit_fake, labels=tf.zeros_like(self.im_logit_fake)
)
self.dis_loss_xx = tf.reduce_mean(self.loss_xx_real + self.loss_xx_fake)
if self.config.trainer.enable_disc_zz:
with tf.name_scope("Discriminator_ZZ"):
self.loss_zz_real = tf.nn.sigmoid_cross_entropy_with_logits(
logits=self.z_logit_real, labels=tf.ones_like(self.z_logit_real)
)
self.loss_zz_fake = tf.nn.sigmoid_cross_entropy_with_logits(
logits=self.z_logit_fake, labels=tf.zeros_like(self.z_logit_fake)
)
self.dis_loss_zz = tf.reduce_mean(self.loss_zz_real + self.loss_zz_fake)
############################################################################################
# OPTIMIZERS
############################################################################################
with tf.name_scope("Optimizers"):
self.generator_optimizer = tf.train.AdamOptimizer(
self.config.trainer.standard_lr_gen,
beta1=self.config.trainer.optimizer_adam_beta1,
beta2=self.config.trainer.optimizer_adam_beta2,
)
self.encoder_g_optimizer = tf.train.AdamOptimizer(
self.config.trainer.standard_lr_enc,
beta1=self.config.trainer.optimizer_adam_beta1,
beta2=self.config.trainer.optimizer_adam_beta2,
)
self.encoder_r_optimizer = tf.train.AdamOptimizer(
self.config.trainer.standard_lr_enc,
beta1=self.config.trainer.optimizer_adam_beta1,
beta2=self.config.trainer.optimizer_adam_beta2,
)
self.discriminator_optimizer = tf.train.AdamOptimizer(
self.config.trainer.standard_lr_dis,
beta1=self.config.trainer.optimizer_adam_beta1,
beta2=self.config.trainer.optimizer_adam_beta2,
)
# Collect all the variables
all_variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
# Generator Network Variables
self.generator_vars = [
v for v in all_variables if v.name.startswith("SENCEBGAN/Generator_Model")
]
# Discriminator Network Variables
self.discriminator_vars = [
v for v in all_variables if v.name.startswith("SENCEBGAN/Discriminator_Model")
]
# Discriminator Network Variables
self.encoder_g_vars = [
v for v in all_variables if v.name.startswith("SENCEBGAN/Encoder_G_Model")
]
self.encoder_r_vars = [
v for v in all_variables if v.name.startswith("SENCEBGAN/Encoder_R_Model")
]
self.dxxvars = [
v for v in all_variables if v.name.startswith("SENCEBGAN/Discriminator_Model_XX")
]
self.dzzvars = [
v for v in all_variables if v.name.startswith("SENCEBGAN/Discriminator_Model_ZZ")
]
# Generator Network Operations
self.gen_update_ops = tf.get_collection(
tf.GraphKeys.UPDATE_OPS, scope="SENCEBGAN/Generator_Model"
)
# Discriminator Network Operations
self.disc_update_ops = tf.get_collection(
tf.GraphKeys.UPDATE_OPS, scope="SENCEBGAN/Discriminator_Model"
)
self.encg_update_ops = tf.get_collection(
tf.GraphKeys.UPDATE_OPS, scope="SENCEBGAN/Encoder_G_Model"
)
self.encr_update_ops = tf.get_collection(
tf.GraphKeys.UPDATE_OPS, scope="SENCEBGAN/Encoder_R_Model"
)
self.update_ops_dis_xx = tf.get_collection(
tf.GraphKeys.UPDATE_OPS, scope="SENCEBGAN/Discriminator_Model_XX"
)
self.update_ops_dis_zz = tf.get_collection(
tf.GraphKeys.UPDATE_OPS, scope="SENCEBGAN/Discriminator_Model_ZZ"
)
with tf.control_dependencies(self.gen_update_ops):
self.gen_op = self.generator_optimizer.minimize(
self.loss_generator,
var_list=self.generator_vars,
global_step=self.global_step_tensor,
)
with tf.control_dependencies(self.disc_update_ops):
self.disc_op = self.discriminator_optimizer.minimize(
self.loss_discriminator, var_list=self.discriminator_vars
)
with tf.control_dependencies(self.encg_update_ops):
self.encg_op = self.encoder_g_optimizer.minimize(
self.loss_encoder_g,
var_list=self.encoder_g_vars,
global_step=self.global_step_tensor,
)
with tf.control_dependencies(self.encr_update_ops):
self.encr_op = self.encoder_r_optimizer.minimize(
self.loss_encoder_r,
var_list=self.encoder_r_vars,
global_step=self.global_step_tensor,
)
if self.config.trainer.enable_disc_xx:
with tf.control_dependencies(self.update_ops_dis_xx):
self.disc_op_xx = self.discriminator_optimizer.minimize(
self.dis_loss_xx, var_list=self.dxxvars
)
if self.config.trainer.enable_disc_zz:
with tf.control_dependencies(self.update_ops_dis_zz):
self.disc_op_zz = self.discriminator_optimizer.minimize(
self.dis_loss_zz, var_list=self.dzzvars
)
# Exponential Moving Average for Estimation
self.dis_ema = tf.train.ExponentialMovingAverage(decay=self.config.trainer.ema_decay)
maintain_averages_op_dis = self.dis_ema.apply(self.discriminator_vars)
self.gen_ema = tf.train.ExponentialMovingAverage(decay=self.config.trainer.ema_decay)
maintain_averages_op_gen = self.gen_ema.apply(self.generator_vars)
self.encg_ema = tf.train.ExponentialMovingAverage(decay=self.config.trainer.ema_decay)
maintain_averages_op_encg = self.encg_ema.apply(self.encoder_g_vars)
self.encr_ema = tf.train.ExponentialMovingAverage(decay=self.config.trainer.ema_decay)
maintain_averages_op_encr = self.encr_ema.apply(self.encoder_r_vars)
if self.config.trainer.enable_disc_xx:
self.dis_xx_ema = tf.train.ExponentialMovingAverage(
decay=self.config.trainer.ema_decay
)
maintain_averages_op_dis_xx = self.dis_xx_ema.apply(self.dxxvars)
if self.config.trainer.enable_disc_zz:
self.dis_zz_ema = tf.train.ExponentialMovingAverage(
decay=self.config.trainer.ema_decay
)
maintain_averages_op_dis_zz = self.dis_zz_ema.apply(self.dzzvars)
with tf.control_dependencies([self.disc_op]):
self.train_dis_op = tf.group(maintain_averages_op_dis)
with tf.control_dependencies([self.gen_op]):
self.train_gen_op = tf.group(maintain_averages_op_gen)
with tf.control_dependencies([self.encg_op]):
self.train_enc_g_op = tf.group(maintain_averages_op_encg)
with tf.control_dependencies([self.encr_op]):
self.train_enc_r_op = tf.group(maintain_averages_op_encr)
if self.config.trainer.enable_disc_xx:
with tf.control_dependencies([self.disc_op_xx]):
self.train_dis_op_xx = tf.group(maintain_averages_op_dis_xx)
if self.config.trainer.enable_disc_zz:
with tf.control_dependencies([self.disc_op_zz]):
self.train_dis_op_zz = tf.group(maintain_averages_op_dis_zz)
############################################################################################
# TESTING
############################################################################################
self.logger.info("Building Testing Graph...")
with tf.variable_scope("SENCEBGAN"):
with tf.variable_scope("Discriminator_Model"):
self.embedding_q_ema, self.decoded_q_ema = self.discriminator(
self.image_input,
getter=get_getter(self.dis_ema),
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
with tf.variable_scope("Generator_Model"):
self.image_gen_ema = self.generator(
self.embedding_q_ema, getter=get_getter(self.gen_ema)
)
with tf.variable_scope("Discriminator_Model"):
self.embedding_rec_ema, self.decoded_rec_ema = self.discriminator(
self.image_gen_ema,
getter=get_getter(self.dis_ema),
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
# Second Training Part
with tf.variable_scope("Encoder_G_Model"):
self.image_encoded_ema = self.encoder_g(
self.image_input, getter=get_getter(self.encg_ema)
)
with tf.variable_scope("Generator_Model"):
self.image_gen_enc_ema = self.generator(
self.image_encoded_ema, getter=get_getter(self.gen_ema)
)
with tf.variable_scope("Discriminator_Model"):
self.embedding_enc_fake_ema, self.decoded_enc_fake_ema = self.discriminator(
self.image_gen_enc_ema,
getter=get_getter(self.dis_ema),
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
self.embedding_enc_real_ema, self.decoded_enc_real_ema = self.discriminator(
self.image_input,
getter=get_getter(self.dis_ema),
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
if self.config.trainer.enable_disc_xx:
with tf.variable_scope("Discriminator_Model_XX"):
self.im_logit_real_ema, self.im_f_real_ema = self.discriminator_xx(
self.image_input,
self.image_input,
getter=get_getter(self.dis_xx_ema),
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
self.im_logit_fake_ema, self.im_f_fake_ema = self.discriminator_xx(
self.image_input,
self.image_gen_enc_ema,
getter=get_getter(self.dis_xx_ema),
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
# Third training part
with tf.variable_scope("Encoder_G_Model"):
self.image_encoded_r_ema = self.encoder_g(self.image_input)
with tf.variable_scope("Generator_Model"):
self.image_gen_enc_r_ema = self.generator(self.image_encoded_r_ema)
with tf.variable_scope("Encoder_R_Model"):
self.image_ege_ema = self.encoder_r(self.image_gen_enc_r_ema)
with tf.variable_scope("Discriminator_Model"):
self.embedding_encr_fake_ema, self.decoded_encr_fake_ema = self.discriminator(
self.image_gen_enc_r_ema,
getter=get_getter(self.dis_ema),
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
self.embedding_encr_real_ema, self.decoded_encr_real_ema = self.discriminator(
self.image_input,
getter=get_getter(self.dis_ema),
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
if self.config.trainer.enable_disc_zz:
with tf.variable_scope("Discriminator_Model_ZZ"):
self.z_logit_real_ema, self.z_f_real_ema = self.discriminator_zz(
self.image_encoded_r_ema,
self.image_encoded_r_ema,
getter=get_getter(self.dis_zz_ema),
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
self.z_logit_fake_ema, self.z_f_fake_ema = self.discriminator_zz(
self.image_encoded_r_ema,
self.image_ege_ema,
getter=get_getter(self.dis_zz_ema),
do_spectral_norm=self.config.trainer.do_spectral_norm,
)
with tf.name_scope("Testing"):
with tf.name_scope("Image_Based"):
delta = self.image_input - self.image_gen_enc_ema
self.rec_residual = -delta
delta_flat = tf.layers.Flatten()(delta)
img_score_l1 = tf.norm(
delta_flat, ord=2, axis=1, keepdims=False, name="img_loss__1"
)
self.img_score_l1 = tf.squeeze(img_score_l1)
delta = self.decoded_enc_fake_ema - self.decoded_enc_real_ema
delta_flat = tf.layers.Flatten()(delta)
img_score_l2 = tf.norm(
delta_flat, ord=2, axis=1, keepdims=False, name="img_loss__2"
)
self.img_score_l2 = tf.squeeze(img_score_l2)
with tf.name_scope("Noise_Based"):
delta = self.image_encoded_r_ema - self.image_ege_ema
delta_flat = tf.layers.Flatten()(delta)
final_score_1 = tf.norm(
delta_flat, ord=2, axis=1, keepdims=False, name="final_score_1"
)
self.final_score_1 = tf.squeeze(final_score_1)
self.score_comb_im = (
1 * self.img_score_l1
+ self.feature_match1 * self.final_score_1
)
delta = self.image_encoded_r_ema - self.embedding_enc_fake_ema
delta_flat = tf.layers.Flatten()(delta)
final_score_2 = tf.norm(
delta_flat, ord=2, axis=1, keepdims=False, name="final_score_2"
)
self.final_score_2 = tf.squeeze(final_score_2)
delta = self.embedding_encr_real_ema - self.embedding_encr_fake_ema
delta_flat = tf.layers.Flatten()(delta)
final_score_3 = tf.norm(
delta_flat, ord=2, axis=1, keepdims=False, name="final_score_3"
)
self.final_score_3 = tf.squeeze(final_score_3)
# Combo 1
self.score_comb_z = (
(1 - self.feature_match2) * self.final_score_2
+ self.feature_match2 * self.final_score_3
)
# Combo 2
if self.config.trainer.enable_disc_xx:
delta = self.im_f_real_ema - self.im_f_fake_ema
delta_flat = tf.layers.Flatten()(delta)
final_score_4 = tf.norm(
delta_flat, ord=1, axis=1, keepdims=False, name="final_score_4"
)
self.final_score_4 = tf.squeeze(final_score_4)
delta = self.z_f_real_ema - self.z_f_fake_ema
delta_flat = tf.layers.Flatten()(delta)
final_score_6 = tf.norm(
delta_flat, ord=1, axis=1, keepdims=False, name="final_score_6"
)
self.final_score_6 = tf.squeeze(final_score_6)
############################################################################################
# TENSORBOARD
############################################################################################
if self.config.log.enable_summary:
with tf.name_scope("train_summary"):
with tf.name_scope("dis_summary"):
tf.summary.scalar("loss_disc", self.loss_discriminator, ["dis"])
tf.summary.scalar("loss_disc_real", self.disc_loss_real, ["dis"])
tf.summary.scalar("loss_disc_fake", self.disc_loss_fake, ["dis"])
if self.config.trainer.enable_disc_xx:
tf.summary.scalar("loss_dis_xx", self.dis_loss_xx, ["enc_g"])
if self.config.trainer.enable_disc_zz:
tf.summary.scalar("loss_dis_zz", self.dis_loss_zz, ["enc_r"])
with tf.name_scope("gen_summary"):
tf.summary.scalar("loss_generator", self.loss_generator, ["gen"])
with tf.name_scope("enc_summary"):
tf.summary.scalar("loss_encoder_g", self.loss_encoder_g, ["enc_g"])
tf.summary.scalar("loss_encoder_r", self.loss_encoder_r, ["enc_r"])
with tf.name_scope("img_summary"):
tf.summary.image("input_image", self.image_input, 1, ["img_1"])
tf.summary.image("reconstructed", self.image_gen, 1, ["img_1"])
# From discriminator in part 1
tf.summary.image("decoded_real", self.decoded_real, 1, ["img_1"])
tf.summary.image("decoded_fake", self.decoded_fake, 1, ["img_1"])
# Second Stage of Training
tf.summary.image("input_enc", self.image_input, 1, ["img_2"])
tf.summary.image("reconstructed", self.image_gen_enc, 1, ["img_2"])
# From discriminator in part 2
tf.summary.image("decoded_enc_real", self.decoded_enc_real, 1, ["img_2"])
tf.summary.image("decoded_enc_fake", self.decoded_enc_fake, 1, ["img_2"])
# Testing
tf.summary.image("input_image", self.image_input, 1, ["test"])
tf.summary.image("reconstructed", self.image_gen_enc_r_ema, 1, ["test"])
tf.summary.image("residual", self.rec_residual, 1, ["test"])
self.sum_op_dis = tf.summary.merge_all("dis")
self.sum_op_gen = tf.summary.merge_all("gen")
self.sum_op_enc_g = tf.summary.merge_all("enc_g")
self.sum_op_enc_r = tf.summary.merge_all("enc_r")
self.sum_op_im_1 = tf.summary.merge_all("img_1")
self.sum_op_im_2 = tf.summary.merge_all("img_2")
self.sum_op_im_test = tf.summary.merge_all("test")
self.sum_op = tf.summary.merge([self.sum_op_dis, self.sum_op_gen])
###############################################################################################
# MODULES
###############################################################################################
def generator(self, noise_input, getter=None):
with tf.variable_scope("Generator", custom_getter=getter, reuse=tf.AUTO_REUSE):
net_name = "Layer_1"
with tf.variable_scope(net_name):
x_g = tf.layers.Dense(
units=2 * 2 * 256, kernel_initializer=self.init_kernel, name="fc"
)(noise_input)
x_g = tf.layers.batch_normalization(
x_g,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_gen,
name="batch_normalization",
)
x_g = tf.nn.leaky_relu(
features=x_g, alpha=self.config.trainer.leakyReLU_alpha, name="relu"
)
x_g = tf.reshape(x_g, [-1, 2, 2, 256])
net_name = "Layer_2"
with tf.variable_scope(net_name):
x_g = tf.layers.Conv2DTranspose(
filters=128,
kernel_size=5,
strides=2,
padding="same",
kernel_initializer=self.init_kernel,
name="conv2t",
)(x_g)
x_g = tf.layers.batch_normalization(
x_g,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_gen,
name="batch_normalization",
)
x_g = tf.nn.leaky_relu(
features=x_g, alpha=self.config.trainer.leakyReLU_alpha, name="relu"
)
net_name = "Layer_3"
with tf.variable_scope(net_name):
x_g = tf.layers.Conv2DTranspose(
filters=64,
kernel_size=5,
strides=2,
padding="same",
kernel_initializer=self.init_kernel,
name="conv2t",
)(x_g)
x_g = tf.layers.batch_normalization(
x_g,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_gen,
name="batch_normalization",
)
x_g = tf.nn.leaky_relu(
features=x_g, alpha=self.config.trainer.leakyReLU_alpha, name="relu"
)
net_name = "Layer_4"
with tf.variable_scope(net_name):
x_g = tf.layers.Conv2DTranspose(
filters=32,
kernel_size=5,
strides=2,
padding="same",
kernel_initializer=self.init_kernel,
name="conv2t",
)(x_g)
x_g = tf.layers.batch_normalization(
x_g,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_gen,
name="batch_normalization",
)
x_g = tf.nn.leaky_relu(
features=x_g, alpha=self.config.trainer.leakyReLU_alpha, name="relu"
)
net_name = "Layer_5"
with tf.variable_scope(net_name):
x_g = tf.layers.Conv2DTranspose(
filters=1,
kernel_size=5,
strides=2,
padding="same",
kernel_initializer=self.init_kernel,
name="conv2t",
)(x_g)
x_g = tf.tanh(x_g, name="tanh")
return x_g
def discriminator(self, image_input, getter=None, do_spectral_norm=False):
layers = sn if do_spectral_norm else tf.layers
with tf.variable_scope("Discriminator", custom_getter=getter, reuse=tf.AUTO_REUSE):
with tf.variable_scope("Encoder"):
x_e = tf.reshape(
image_input,
[-1, self.config.data_loader.image_size, self.config.data_loader.image_size, 1],
)
net_name = "Layer_1"
with tf.variable_scope(net_name):
x_e = layers.conv2d(
x_e,
filters=32,
kernel_size=5,
strides=2,
padding="same",
kernel_initializer=self.init_kernel,
name="conv",
)
x_e = tf.nn.leaky_relu(
features=x_e, alpha=self.config.trainer.leakyReLU_alpha, name="leaky_relu"
)
# 14 x 14 x 64
net_name = "Layer_2"
with tf.variable_scope(net_name):
x_e = layers.conv2d(
x_e,
filters=64,
kernel_size=5,
padding="same",
strides=2,
kernel_initializer=self.init_kernel,
name="conv",
)
x_e = tf.layers.batch_normalization(
x_e,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_dis,
)
x_e = tf.nn.leaky_relu(
features=x_e, alpha=self.config.trainer.leakyReLU_alpha, name="leaky_relu"
)
# 7 x 7 x 128
net_name = "Layer_3"
with tf.variable_scope(net_name):
x_e = layers.conv2d(
x_e,
filters=128,
kernel_size=5,
padding="same",
strides=2,
kernel_initializer=self.init_kernel,
name="conv",
)
x_e = tf.layers.batch_normalization(
x_e,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_dis,
)
x_e = tf.nn.leaky_relu(
features=x_e, alpha=self.config.trainer.leakyReLU_alpha, name="leaky_relu"
)
# 4 x 4 x 256
x_e = tf.layers.Flatten()(x_e)
net_name = "Layer_4"
with tf.variable_scope(net_name):
x_e = layers.dense(
x_e,
units=self.config.trainer.noise_dim,
kernel_initializer=self.init_kernel,
name="fc",
)
embedding = x_e
with tf.variable_scope("Decoder"):
net = tf.reshape(embedding, [-1, 1, 1, self.config.trainer.noise_dim])
net_name = "layer_1"
with tf.variable_scope(net_name):
net = tf.layers.Conv2DTranspose(
filters=256,
kernel_size=5,
strides=(2, 2),
padding="same",
kernel_initializer=self.init_kernel,
name="tconv1",
)(net)
net = tf.layers.batch_normalization(
inputs=net,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_dis,
name="tconv1/bn",
)
net = tf.nn.relu(features=net, name="tconv1/relu")
net_name = "layer_2"
with tf.variable_scope(net_name):
net = tf.layers.Conv2DTranspose(
filters=128,
kernel_size=5,
strides=(2, 2),
padding="same",
kernel_initializer=self.init_kernel,
name="tconv2",
)(net)
net = tf.layers.batch_normalization(
inputs=net,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_dis,
name="tconv2/bn",
)
net = tf.nn.relu(features=net, name="tconv2/relu")
net_name = "layer_3"
with tf.variable_scope(net_name):
net = tf.layers.Conv2DTranspose(
filters=64,
kernel_size=5,
strides=(2, 2),
padding="same",
kernel_initializer=self.init_kernel,
name="tconv3",
)(net)
net = tf.layers.batch_normalization(
inputs=net,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_dis,
name="tconv3/bn",
)
net = tf.nn.relu(features=net, name="tconv3/relu")
net_name = "layer_4"
with tf.variable_scope(net_name):
net = tf.layers.Conv2DTranspose(
filters=32,
kernel_size=5,
strides=(2, 2),
padding="same",
kernel_initializer=self.init_kernel,
name="tconv4",
)(net)
net = tf.layers.batch_normalization(
inputs=net,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_dis,
name="tconv4/bn",
)
net = tf.nn.relu(features=net, name="tconv4/relu")
net_name = "layer_5"
with tf.variable_scope(net_name):
net = tf.layers.Conv2DTranspose(
filters=1,
kernel_size=5,
strides=(2, 2),
padding="same",
kernel_initializer=self.init_kernel,
name="tconv5",
)(net)
decoded = tf.nn.tanh(net, name="tconv5/tanh")
return embedding, decoded
def encoder_g(self, image_input, getter=None):
with tf.variable_scope("Encoder_G", custom_getter=getter, reuse=tf.AUTO_REUSE):
x_e = tf.reshape(
image_input,
[-1, self.config.data_loader.image_size, self.config.data_loader.image_size, 1],
)
net_name = "Layer_1"
with tf.variable_scope(net_name):
x_e = tf.layers.Conv2D(
filters=64,
kernel_size=5,
strides=(2, 2),
padding="same",
kernel_initializer=self.init_kernel,
name="conv",
)(x_e)
x_e = tf.layers.batch_normalization(
x_e,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_enc_g,
)
x_e = tf.nn.leaky_relu(
features=x_e, alpha=self.config.trainer.leakyReLU_alpha, name="leaky_relu"
)
net_name = "Layer_2"
with tf.variable_scope(net_name):
x_e = tf.layers.Conv2D(
filters=128,
kernel_size=5,
padding="same",
strides=(2, 2),
kernel_initializer=self.init_kernel,
name="conv",
)(x_e)
x_e = tf.layers.batch_normalization(
x_e,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_enc_g,
)
x_e = tf.nn.leaky_relu(
features=x_e, alpha=self.config.trainer.leakyReLU_alpha, name="leaky_relu"
)
net_name = "Layer_3"
with tf.variable_scope(net_name):
x_e = tf.layers.Conv2D(
filters=256,
kernel_size=5,
padding="same",
strides=(2, 2),
kernel_initializer=self.init_kernel,
name="conv",
)(x_e)
x_e = tf.layers.batch_normalization(
x_e,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_enc_g,
)
x_e = tf.nn.leaky_relu(
features=x_e, alpha=self.config.trainer.leakyReLU_alpha, name="leaky_relu"
)
x_e = tf.layers.Flatten()(x_e)
net_name = "Layer_4"
with tf.variable_scope(net_name):
x_e = tf.layers.Dense(
units=self.config.trainer.noise_dim,
kernel_initializer=self.init_kernel,
name="fc",
)(x_e)
return x_e
def encoder_r(self, image_input, getter=None):
with tf.variable_scope("Encoder_R", custom_getter=getter, reuse=tf.AUTO_REUSE):
x_e = tf.reshape(
image_input,
[-1, self.config.data_loader.image_size, self.config.data_loader.image_size, 1],
)
net_name = "Layer_1"
with tf.variable_scope(net_name):
x_e = tf.layers.Conv2D(
filters=64,
kernel_size=5,
strides=(2, 2),
padding="same",
kernel_initializer=self.init_kernel,
name="conv",
)(x_e)
x_e = tf.layers.batch_normalization(
x_e,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_enc_r,
)
x_e = tf.nn.leaky_relu(
features=x_e, alpha=self.config.trainer.leakyReLU_alpha, name="leaky_relu"
)
net_name = "Layer_2"
with tf.variable_scope(net_name):
x_e = tf.layers.Conv2D(
filters=128,
kernel_size=5,
padding="same",
strides=(2, 2),
kernel_initializer=self.init_kernel,
name="conv",
)(x_e)
x_e = tf.layers.batch_normalization(
x_e,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_enc_r,
)
x_e = tf.nn.leaky_relu(
features=x_e, alpha=self.config.trainer.leakyReLU_alpha, name="leaky_relu"
)
net_name = "Layer_3"
with tf.variable_scope(net_name):
x_e = tf.layers.Conv2D(
filters=256,
kernel_size=5,
padding="same",
strides=(2, 2),
kernel_initializer=self.init_kernel,
name="conv",
)(x_e)
x_e = tf.layers.batch_normalization(
x_e,
momentum=self.config.trainer.batch_momentum,
training=self.is_training_enc_r,
)
x_e = tf.nn.leaky_relu(
features=x_e, alpha=self.config.trainer.leakyReLU_alpha, name="leaky_relu"
)
x_e = tf.layers.Flatten()(x_e)
net_name = "Layer_4"
with tf.variable_scope(net_name):
x_e = tf.layers.Dense(
units=self.config.trainer.noise_dim,
kernel_initializer=self.init_kernel,
name="fc",
)(x_e)
return x_e
# Regularizer discriminator for the Generator Encoder
def discriminator_xx(self, img_tensor, recreated_img, getter=None, do_spectral_norm=False):
""" Discriminator architecture in tensorflow
Discriminates between (x, x) and (x, rec_x)
Args:
            img_tensor: batch of input images
            recreated_img: batch of reconstructed images
            getter: custom getter used to swap in exponential-moving-average weights at inference
            do_spectral_norm: whether to apply spectral normalization to the layers
"""
layers = sn if do_spectral_norm else tf.layers
with tf.variable_scope("Discriminator_xx", reuse=tf.AUTO_REUSE, custom_getter=getter):
net = tf.concat([img_tensor, recreated_img], axis=1)
net_name = "layer_1"
with tf.variable_scope(net_name):
net = layers.conv2d(
net,
filters=64,
kernel_size=4,
strides=2,
padding="same",
kernel_initializer=self.init_kernel,
name="conv1",
)
net = tf.nn.leaky_relu(
                    features=net, alpha=self.config.trainer.leakyReLU_alpha, name="conv1/leaky_relu"
)
net = tf.layers.dropout(
net,
rate=self.config.trainer.dropout_rate,
training=self.is_training_enc_g,
name="dropout",
)
with tf.variable_scope(net_name, reuse=True):
weights = tf.get_variable("conv1/kernel")
net_name = "layer_2"
with tf.variable_scope(net_name):
net = layers.conv2d(
net,
filters=128,
kernel_size=4,
strides=2,
padding="same",
kernel_initializer=self.init_kernel,
name="conv2",
)
net = tf.nn.leaky_relu(
features=net, alpha=self.config.trainer.leakyReLU_alpha, name="conv2/leaky_relu"
)
net = tf.layers.dropout(
net,
rate=self.config.trainer.dropout_rate,
training=self.is_training_enc_g,
name="dropout",
)
net = tf.layers.Flatten()(net)
intermediate_layer = net
net_name = "layer_3"
with tf.variable_scope(net_name):
net = tf.layers.dense(net, units=1, kernel_initializer=self.init_kernel, name="fc")
logits = tf.squeeze(net)
return logits, intermediate_layer
# Regularizer discriminator for the Reconstruction Encoder
def discriminator_zz(self, noise_tensor, recreated_noise, getter=None, do_spectral_norm=False):
""" Discriminator architecture in tensorflow
Discriminates between (z, z) and (z, rec_z)
Args:
            noise_tensor: batch of latent vectors
            recreated_noise: batch of reconstructed latent vectors
            getter: custom getter used to swap in exponential-moving-average weights at inference
            do_spectral_norm: whether to apply spectral normalization to the dense layers
"""
layers = sn if do_spectral_norm else tf.layers
with tf.variable_scope("Discriminator_zz", reuse=tf.AUTO_REUSE, custom_getter=getter):
y = tf.concat([noise_tensor, recreated_noise], axis=-1)
net_name = "y_layer_1"
with tf.variable_scope(net_name):
y = layers.dense(y, units=64, kernel_initializer=self.init_kernel, name="fc")
y = tf.nn.leaky_relu(features=y, alpha=self.config.trainer.leakyReLU_alpha)
y = tf.layers.dropout(
y,
rate=self.config.trainer.dropout_rate,
training=self.is_training_enc_r,
name="dropout",
)
net_name = "y_layer_2"
with tf.variable_scope(net_name):
y = layers.dense(y, units=32, kernel_initializer=self.init_kernel, name="fc")
y = tf.nn.leaky_relu(features=y, alpha=self.config.trainer.leakyReLU_alpha)
y = tf.layers.dropout(
y,
rate=self.config.trainer.dropout_rate,
training=self.is_training_enc_r,
name="dropout",
)
intermediate_layer = y
net_name = "y_layer_3"
with tf.variable_scope(net_name):
y = layers.dense(y, units=1, kernel_initializer=self.init_kernel, name="fc")
logits = tf.squeeze(y)
return logits, intermediate_layer
###############################################################################################
# CUSTOM LOSSES
###############################################################################################
def mse_loss(self, pred, data, mode="norm", order=2):
if mode == "norm":
delta = pred - data
delta = tf.layers.Flatten()(delta)
loss_val = tf.norm(delta, ord=order, axis=1, keepdims=False)
        elif mode == "mse":
            loss_val = tf.reduce_mean(tf.squared_difference(pred, data))
        else:
            # Fail fast instead of hitting UnboundLocalError on an unknown mode.
            raise ValueError("Unsupported mse_loss mode: %r" % mode)
        return loss_val
def pullaway_loss(self, embeddings):
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keepdims=True))
normalized_embeddings = embeddings / norm
similarity = tf.matmul(normalized_embeddings, normalized_embeddings, transpose_b=True)
batch_size = tf.cast(tf.shape(embeddings)[0], tf.float32)
pt_loss = (tf.reduce_sum(similarity) - batch_size) / (batch_size * (batch_size - 1))
return pt_loss
def init_saver(self):
self.saver = tf.train.Saver(max_to_keep=self.config.log.max_to_keep)
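The `pullaway_loss` above penalizes embeddings in a batch for pointing in similar directions. A NumPy sketch of the same computation (illustrative only; `pullaway_sketch` is not part of this codebase) makes the algebra easy to check:

```python
import numpy as np

def pullaway_sketch(embeddings):
    # Normalize each embedding to unit length, then average the cosine
    # similarity over all ordered pairs of *distinct* rows.
    norm = np.sqrt((embeddings ** 2).sum(axis=1, keepdims=True))
    unit = embeddings / norm
    similarity = unit @ unit.T
    b = embeddings.shape[0]
    return (similarity.sum() - b) / (b * (b - 1))

orthogonal = np.array([[1.0, 0.0], [0.0, 1.0]])
collapsed = np.array([[1.0, 0.0], [2.0, 0.0]])
print(pullaway_sketch(orthogonal))  # 0.0 -- embeddings fully spread out
print(pullaway_sketch(collapsed))   # 1.0 -- embeddings collapsed to one direction
```

The loss is 0 for mutually orthogonal embeddings and approaches 1 as the batch collapses onto a single direction.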
# === procgen_adventure/utils/torch_utils.py (Laurans/procgen_adventure, MIT) ===
import numpy as np
import torch
import torch.distributed as dist
def tensor(x, device):
if isinstance(x, torch.Tensor):
return x.to(device)
    x = np.asarray(x, dtype=float)  # note: the np.float alias was removed in NumPy 1.24
x = torch.tensor(x, device=device, dtype=torch.float32)
return x
def input_preprocessing(x, device):
x = tensor(x, device)
x = x.float()
x /= 255.0
return x
def to_np(t):
return t.cpu().detach().numpy()
def random_seed(seed=None):
np.random.seed(seed)
torch.manual_seed(np.random.randint(int(1e6)))
def restore_model(model, save_path):
checkpoint = torch.load(save_path)
model.network.load_state_dict(checkpoint["model_state_dict"])
model.optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
update = checkpoint["update"]
return update
def sync_initial_weights(model):
for param in model.parameters():
dist.broadcast(param.data, src=0)
def sync_gradients(model):
for param in model.parameters():
dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
def cleanup():
dist.destroy_process_group()
def sync_values(tensor_sum_values, tensor_nb_values):
dist.reduce(tensor_sum_values, dst=0)
dist.reduce(tensor_nb_values, dst=0)
return tensor_sum_values / tensor_nb_values
def range_tensor(t, device):
return torch.arange(t).long().to(device)
def zeros(shape, dtype):
"""Attempt to return torch tensor of zeros, or if numpy dtype provided,
return numpy array or zeros."""
try:
return torch.zeros(shape, dtype=dtype)
except TypeError:
return np.zeros(shape, dtype=dtype)
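The `input_preprocessing` helper above simply casts to float and rescales pixel values into [0, 1]. A torch-free sketch of the same contract (illustrative; `preprocess_sketch` is not part of this module) shows the expected behaviour on a uint8 frame:

```python
import numpy as np

def preprocess_sketch(x):
    # Mirror input_preprocessing() without torch: cast to float, scale by 255.
    x = np.asarray(x, dtype=np.float32)
    return x / 255.0

frame = np.array([[0, 255], [51, 204]], dtype=np.uint8)
out = preprocess_sketch(frame)
print(out.min(), out.max())  # 0.0 1.0
```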
# === otri/utils/key_handler.py (OTRI-Unipd/OTRI, FSFAP) ===
from typing import Callable, List, Mapping, Union
import re
LOWER_ERROR = "Only dictionaries and lists can be modified by this method."
def apply_deep(data: Union[Mapping, List], fun: Callable) -> Union[dict, list]:
'''
Applies fun to all keys in data.
The method is recursive and applies as deep as possible in the dictionary nest.
Parameters:
data : Mapping or List
Data to modify, must be either a dictionary or a list of dictionaries.
fun : function | lambda
Function to apply to each key, must take the key as its single parameter.
Returns:
A copy of the dict or list with the modified keys, with all nested dicts and list
receiving the same treatment. It will return the original
object (not a copy) if no operation could be applied, for example when:
- data is not a list or dict
- data is a list of non dict items
- data is not a list that contains dicts at any nesting level
...
'''
if isinstance(data, Mapping):
return __apply_deep_dict(data, fun)
if isinstance(data, List):
return __apply_deep_list(data, fun)
return data
def __apply_deep_dict(data: Mapping, fun: Callable) -> dict:
'''
Applies fun to all keys in a dictionary and all nested items.
Parameters:
data : dict
Data to modify, must be a dictionary.
fun : function | lambda
Function to apply to each key, must take the key as its single parameter.
Returns:
A copy of the dict with the renamed keys, where all values have been replaced by copies of
their original if apply_deep(value, fun) was appliable.
'''
new_data = dict()
for key, value in data.items():
new_key = fun(key)
new_data[new_key] = apply_deep(value, fun)
return new_data
def __apply_deep_list(data: List, fun : Callable) -> list:
'''
Applies fun to all keys in each item of the list, if appliable.
Parameters:
data : List
Data to modify, should be a list, but can be a tuple.
fun : function | lambda
Function to apply to each key, must take the key as its single parameter.
Returns:
A copy of the list, where each item got its keys modified through apply_deep(item, fun) if appliable.
'''
return [apply_deep(item, fun) for item in data]
def lower_all_keys_deep(data : Union[Mapping, List]) -> Union[dict, list]:
'''
Renames all the keys in a dict object to be lower case.
The method is recursive and applies as deep as possible in the dict nest.
Parameters:
data : dict | list
Data to modify, must be either a dictionary or a list of dictionaries.
Should work with any dictionary. In any case, only string keys will be modified.
Returns:
A copy of the dict or list with the renamed keys, with all nested dicts and list
receiving the same treatment. It will return the original
object (not a copy) if no operation could be applied. See apply_deep(data, fun) for details.
...
'''
return apply_deep(data, lambda s: s.lower() if isinstance(s, str) else s)
def rename_deep(data : Union[Mapping, List], aliases: Mapping) -> Union[dict, list]:
'''
Renames the keys in the dict object based on the aliases in dict.
The method is recursive and applies as deep as possible in the dict nest.
    e.g. data = {"key": "value"}, aliases = {"key": "one"}
    data becomes {"one": "value"}
Parameters:
data : dict | list
Data to modify, must be either a dictionary or a list of dictionaries.
Should work with any dictionary.
aliases : dict
Dictionary containing the aliases for the keys. For each item the key must be
the original key and the value the new key. Keys of any type will be modified
as long as they are a key in aliases.
Returns:
A copy of the dict or list with the renamed keys, with all nested dicts and list
receiving the same treatment. It will return the original
object (not a copy) if no operation could be applied. See apply_deep(data, fun) for details.
'''
return apply_deep(data, lambda x: aliases[x] if x in aliases.keys() else x)
def replace_deep(data : Union[Mapping, List], regexes: Mapping) -> Union[dict, list]:
'''
Renames the keys in a dictionary replacing each given regex with the given alias.
The method is recursive and applies as deep as possible in the dict nest.
    e.g. data = {"key_ciao": "value"}, regexes = {"ciao": "hi"}
    data becomes {"key_hi": "value"}
Parameters:
data : dict | list
Data to modify, must be either a dictionary or a list of dictionaries.
Should work with any dictionary.
        regexes : dict
Dictionary containing the aliases for the keys. For each item the key must be
the regex to replace and the value what to replace it with.
Only string keys are modified.
Returns:
A copy of the dict or list with the renamed keys, with all nested dicts and lists
receiving the same treatment. It will return the original object (not a copy)
if no operation could be applied. See apply_deep(data, fun) for details.
'''
def replace_regex(string, regexes=regexes):
for r, s in regexes.items():
string = re.sub(r, s, string)
return string
return apply_deep(data, lambda x: replace_regex(x) if isinstance(x, str) else x)
# === model_test.py (noatgnu/colossi, MIT) ===
import unittest
from model import prediction_with_model
import pandas as pd
import numpy as np
class PredictionWithModel(unittest.TestCase):
def test_prediction(self):
        d = pd.read_csv(r"C:\Users\Toan\Documents\GitHub\colossi\static\temp\cc7deed8140745d89f2f42f716f6fd1b\out_imac_atlas_expression_v7.1.tsv", sep=" ")
result = np.array([d['Freq'].to_list() + [0, 1800]])
print(prediction_with_model(result))
if __name__ == '__main__':
unittest.main()
# === crawlerfeeder/sources.py (ddiazpinto/python-crawlerfeeder, MIT) ===
"""
Data sources
All the data sources must extend DataSource abstract class and define `crawl` and `feed` methods.
This methods are automatically called during the crawl and feed processes.
"""
import httplib2
from abc import ABCMeta, abstractmethod
from apiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials
import pymysql.cursors
from crawlerfeeder import logging
class DataSource(metaclass=ABCMeta):
    # Note: the metaclass must be declared in the class header; the old
    # Python 2 style ``__metaclass__`` class attribute is silently ignored by
    # Python 3, which would leave the @abstractmethod hooks unenforced.
_service = None
_data = {}
@abstractmethod
def crawl(self, **kwargs):
pass
@abstractmethod
def feed(self, **kwargs):
pass
class GoogleAnalyticsDataSource(DataSource):
"""
Google Analytics V4 data source
"""
_view_id = None
def __init__(self, service_account_email, key_file_location, scopes, discovery_uri, view_id, **kwargs):
credentials = ServiceAccountCredentials.from_p12_keyfile(
service_account_email, key_file_location, scopes=scopes)
http = credentials.authorize(httplib2.Http())
self._service = build('analytics', 'v4', http=http, discoveryServiceUrl=discovery_uri)
self._view_id = view_id
def crawl(self, **kwargs):
return self._service.reports().batchGet(body=kwargs['request']).execute()
def feed(self, **kwargs):
raise NotImplementedError("This method is not implemented yet.")
class MysqlDataSource(DataSource):
"""
MySQL data source
"""
def __init__(self, host, user, password, db, **kwargs):
self._service = pymysql.connect(host, user, password, db, cursorclass=pymysql.cursors.DictCursor)
def crawl(self, **kwargs):
with self._service.cursor() as cursor:
cursor.execute(**kwargs)
logging.info("Affected rows: %s" % cursor.rowcount)
return cursor.fetchall()
def feed(self, **kwargs):
with self._service.cursor() as cursor:
if 'args' in kwargs:
cursor.executemany(**kwargs)
else:
cursor.execute(**kwargs)
logging.info("Affected rows: %s" % cursor.rowcount)
return cursor
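The module above hinges on one pattern: an abstract base class whose subclasses each implement `crawl()` and `feed()`, which the pipeline then calls uniformly. A stdlib-only sketch of that contract (the names `SketchSource` and `ListSource` are illustrative, not part of this package):

```python
from abc import ABC, abstractmethod

class SketchSource(ABC):
    """Stand-in for DataSource: subclasses must implement both hooks."""

    @abstractmethod
    def crawl(self, **kwargs):
        ...

    @abstractmethod
    def feed(self, **kwargs):
        ...

class ListSource(SketchSource):
    """Toy concrete source backed by an in-memory list."""

    def __init__(self, rows):
        self._rows = list(rows)

    def crawl(self, **kwargs):
        return list(self._rows)

    def feed(self, **kwargs):
        self._rows.extend(kwargs.get("rows", []))
        return len(self._rows)

source = ListSource([1, 2])
print(source.crawl())         # [1, 2]
print(source.feed(rows=[3]))  # 3
```

Instantiating `SketchSource` directly raises `TypeError`, which is exactly the enforcement the ABC is there to provide.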
# === tests/test_doi.py (garethcmurphy/brightness, BSD-2-Clause) ===
from ..bright import doimaker
__author__ = "Gareth Murphy"
__credits__ = ["Gareth Murphy"]
__license__ = "GPL"
__version__ = "1.0.1"
__maintainer__ = "Gareth Murphy"
__email__ = "garethcmurphy@gmail.com"
__status__ = "Development"
def test_doi():
bright = doimaker.DOIMaker()
assert isinstance(bright.passw, str)
assert isinstance(bright.user, str)
# === python/fizzbuzz/fizzbuzz.py (frostickflakes/isat252s20_03, MIT) ===
"""A FizzBuzz program"""
# import necessary supporting libraries or packages
from numbers import Number
def fizz(x):
"""
Takes an input `x` and checks to see if x is a
number, and if so, also a multiple of 3.
If it is both, return 'Fizz'.
Otherwise, return the input.
"""
return 'Fizz' if isinstance(x, Number) and x % 3 == 0 else x
def buzz(x):
"""
Takes an input `x` and checks to see if x is a
number, and if so, also a multiple of 5.
If it is both, return 'Buzz'.
Otherwise, return the input.
"""
return 'Buzz' if isinstance(x, Number) and x % 5 == 0 else x
def fibu(x):
"""
Takes an input `x` and checks to see if x is a
number, and if so, also a multiple of 15.
If it is both, return 'FizzBuzz'.
Otherwise, return the input.
"""
return 'FizzBuzz' if isinstance(x, Number) and x % 15 == 0 else x
def play(start, end):
"""
Given a start number and an end number, produce
all of the output expected for a game of FizzBuzz
as an array.
"""
# initialize an empty list (array) to hold our output
output = []
# loop from the start number to the end number
for x in range(start, end + 1):
    # append the transformed input to the output array
output.append(buzz(fizz(fibu(x))))
return output
| 28.065217 | 67 | 0.646011 | 219 | 1,291 | 3.808219 | 0.292237 | 0.07554 | 0.028777 | 0.046763 | 0.467626 | 0.305755 | 0.223022 | 0.223022 | 0.223022 | 0.223022 | 0 | 0.012513 | 0.257165 | 1,291 | 45 | 68 | 28.688889 | 0.857143 | 0.597211 | 0 | 0 | 0 | 0 | 0.038554 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.083333 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
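As a quick sanity check, the FizzBuzz helpers above can be exercised standalone; this sketch condenses them (same logic, `play` as a list comprehension) and prints one full round:

```python
from numbers import Number

def fizz(x):
    # 'Fizz' for numeric multiples of 3, otherwise pass the input through.
    return 'Fizz' if isinstance(x, Number) and x % 3 == 0 else x

def buzz(x):
    # 'Buzz' for numeric multiples of 5.
    return 'Buzz' if isinstance(x, Number) and x % 5 == 0 else x

def fibu(x):
    # 'FizzBuzz' for numeric multiples of 15; must run before fizz/buzz.
    return 'FizzBuzz' if isinstance(x, Number) and x % 15 == 0 else x

def play(start, end):
    # Same pipeline as above: fibu first, so multiples of 15 are turned into
    # strings before fizz/buzz see them (strings fail the isinstance check).
    return [buzz(fizz(fibu(x))) for x in range(start, end + 1)]

print(play(1, 15))
# → [1, 2, 'Fizz', 4, 'Buzz', 'Fizz', 7, 8, 'Fizz', 'Buzz', 11, 'Fizz', 13, 14, 'FizzBuzz']
```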
e391f8b8a0bc01973d751df5fae9b530804460c4 | 705 | py | Python | pedlarweb_old/forms.py | ThomasWongMingHei/pedlar | ba79f9e4e13ae8009a1a7a9d1d04a7fcd2535fe7 | [
"Apache-2.0"
] | 61 | 2018-09-26T06:11:53.000Z | 2022-02-15T18:30:10.000Z | pedlarweb_old/forms.py | ThomasWongMingHei/pedlar | ba79f9e4e13ae8009a1a7a9d1d04a7fcd2535fe7 | [
"Apache-2.0"
] | 6 | 2019-01-26T22:48:46.000Z | 2019-12-24T00:08:15.000Z | pedlarweb_old/forms.py | ThomasWongMingHei/pedlar | ba79f9e4e13ae8009a1a7a9d1d04a7fcd2535fe7 | [
"Apache-2.0"
] | 36 | 2018-10-06T09:17:57.000Z | 2022-02-21T22:17:53.000Z | """Web forms for pedlarweb."""
from flask_wtf import FlaskForm
from wtforms import StringField, PasswordField
from wtforms.validators import DataRequired, Regexp, Length
class UserPasswordForm(FlaskForm):
"""Username password form used for login."""
# \w is [0-9a-zA-Z_]
username = StringField('User or Team name',
validators=[DataRequired(),
Regexp(r"^\w(\w| )*\w$",
message="At least 2 alphanumeric characters with only spaces in between.")
]
)
password = PasswordField('Password', validators=[DataRequired(), Length(min=4)])
| 41.470588 | 118 | 0.564539 | 68 | 705 | 5.823529 | 0.691176 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008493 | 0.331915 | 705 | 16 | 119 | 44.0625 | 0.832272 | 0.117731 | 0 | 0 | 0 | 0 | 0.165303 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.272727 | 0.272727 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
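The username regexp in the form above is worth unpacking; a standalone sketch of what it accepts and rejects:

```python
import re

# Same pattern as the StringField validator above: at least two \w characters,
# with spaces permitted only in the interior (never leading or trailing).
pattern = re.compile(r"^\w(\w| )*\w$")

print(bool(pattern.match("team name")))  # True
print(bool(pattern.match(" padded")))    # False: leading space
print(bool(pattern.match("x")))          # False: a single character is too short
```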
e39af21b3a6e28f491c0b53a0727560abc0f9f34 | 1,049 | py | Python | skeleton/models.py | therden/skeletal_flask | ad8af37376af888c999ab5de5b69ba0039b557c0 | [
"Unlicense"
] | 3 | 2020-01-02T07:58:52.000Z | 2020-11-25T20:31:37.000Z | skeleton/models.py | therden/skeletal_flask | ad8af37376af888c999ab5de5b69ba0039b557c0 | [
"Unlicense"
] | null | null | null | skeleton/models.py | therden/skeletal_flask | ad8af37376af888c999ab5de5b69ba0039b557c0 | [
"Unlicense"
] | 1 | 2020-06-19T03:00:28.000Z | 2020-06-19T03:00:28.000Z | from datetime import datetime
from skeleton.config_db import db
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(80), unique=True, nullable=False)
email = db.Column(db.String(120), unique=True, nullable=False)
def __repr__(self):
return '<User %r>' % self.username
class Post(db.Model):
id = db.Column(db.Integer, primary_key=True)
title = db.Column(db.String(80), nullable=False)
body = db.Column(db.Text, nullable=False)
pub_date = db.Column(db.DateTime, nullable=False,
default=datetime.utcnow)
category_id = db.Column(db.Integer, db.ForeignKey('category.id'),
nullable=False)
category = db.relationship('Category',
backref=db.backref('posts', lazy=True))
def __repr__(self):
return '<Post %r>' % self.title
class Category(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(50), nullable=False)
def __repr__(self):
return '<Category %r>' % self.name
| 28.351351 | 69 | 0.667302 | 146 | 1,049 | 4.671233 | 0.30137 | 0.117302 | 0.146628 | 0.070381 | 0.344575 | 0.26393 | 0.175953 | 0.175953 | 0.175953 | 0.175953 | 0 | 0.010588 | 0.189704 | 1,049 | 36 | 70 | 29.138889 | 0.791765 | 0 | 0 | 0.24 | 0 | 0 | 0.052431 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0 | 0.08 | 0.12 | 0.88 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
e3a2b8f71d6de1df7388cd09949ecbfb79724959 | 103 | py | Python | output/models/saxon_data/complex/complex012_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 1 | 2021-08-14T17:59:21.000Z | 2021-08-14T17:59:21.000Z | output/models/saxon_data/complex/complex012_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 4 | 2020-02-12T21:30:44.000Z | 2020-04-15T20:06:46.000Z | output/models/saxon_data/complex/complex012_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | null | null | null | from output.models.saxon_data.complex.complex012_xsd.complex012 import Root
__all__ = [
"Root",
]
| 17.166667 | 75 | 0.757282 | 13 | 103 | 5.538462 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067416 | 0.135922 | 103 | 5 | 76 | 20.6 | 0.741573 | 0 | 0 | 0 | 0 | 0 | 0.038835 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
e3aa7a6b7ca6686219f3d46850a8ea55f4a8bb3d | 4,667 | py | Python | Lib/site-packages/unidecode/x07b.py | hirorin-demon/hirorin-streamlit | 03fbb6f03ec94f909d451e708a3b30b177607695 | [
"0BSD"
] | 82 | 2020-03-28T02:24:38.000Z | 2022-03-30T04:18:42.000Z | Lib/site-packages/unidecode/x07b.py | hirorin-demon/hirorin-streamlit | 03fbb6f03ec94f909d451e708a3b30b177607695 | [
"0BSD"
] | 118 | 2020-03-14T17:34:11.000Z | 2022-03-30T07:07:45.000Z | Lib/site-packages/unidecode/x07b.py | hirorin-demon/hirorin-streamlit | 03fbb6f03ec94f909d451e708a3b30b177607695 | [
"0BSD"
] | 30 | 2020-06-20T15:31:53.000Z | 2022-03-06T06:23:55.000Z | data = (
'Mang ', # 0x00
'Zhu ', # 0x01
'Utsubo ', # 0x02
'Du ', # 0x03
'Ji ', # 0x04
'Xiao ', # 0x05
'Ba ', # 0x06
'Suan ', # 0x07
'Ji ', # 0x08
'Zhen ', # 0x09
'Zhao ', # 0x0a
'Sun ', # 0x0b
'Ya ', # 0x0c
'Zhui ', # 0x0d
'Yuan ', # 0x0e
'Hu ', # 0x0f
'Gang ', # 0x10
'Xiao ', # 0x11
'Cen ', # 0x12
'Pi ', # 0x13
'Bi ', # 0x14
'Jian ', # 0x15
'Yi ', # 0x16
'Dong ', # 0x17
'Shan ', # 0x18
'Sheng ', # 0x19
'Xia ', # 0x1a
'Di ', # 0x1b
'Zhu ', # 0x1c
'Na ', # 0x1d
'Chi ', # 0x1e
'Gu ', # 0x1f
'Li ', # 0x20
'Qie ', # 0x21
'Min ', # 0x22
'Bao ', # 0x23
'Tiao ', # 0x24
'Si ', # 0x25
'Fu ', # 0x26
'Ce ', # 0x27
'Ben ', # 0x28
'Pei ', # 0x29
'Da ', # 0x2a
'Zi ', # 0x2b
'Di ', # 0x2c
'Ling ', # 0x2d
'Ze ', # 0x2e
'Nu ', # 0x2f
'Fu ', # 0x30
'Gou ', # 0x31
'Fan ', # 0x32
'Jia ', # 0x33
'Ge ', # 0x34
'Fan ', # 0x35
'Shi ', # 0x36
'Mao ', # 0x37
'Po ', # 0x38
'Sey ', # 0x39
'Jian ', # 0x3a
'Qiong ', # 0x3b
'Long ', # 0x3c
'Souke ', # 0x3d
'Bian ', # 0x3e
'Luo ', # 0x3f
'Gui ', # 0x40
'Qu ', # 0x41
'Chi ', # 0x42
'Yin ', # 0x43
'Yao ', # 0x44
'Xian ', # 0x45
'Bi ', # 0x46
'Qiong ', # 0x47
'Gua ', # 0x48
'Deng ', # 0x49
'Jiao ', # 0x4a
'Jin ', # 0x4b
'Quan ', # 0x4c
'Sun ', # 0x4d
'Ru ', # 0x4e
'Fa ', # 0x4f
'Kuang ', # 0x50
'Zhu ', # 0x51
'Tong ', # 0x52
'Ji ', # 0x53
'Da ', # 0x54
'Xing ', # 0x55
'Ce ', # 0x56
'Zhong ', # 0x57
'Kou ', # 0x58
'Lai ', # 0x59
'Bi ', # 0x5a
'Shai ', # 0x5b
'Dang ', # 0x5c
'Zheng ', # 0x5d
'Ce ', # 0x5e
'Fu ', # 0x5f
'Yun ', # 0x60
'Tu ', # 0x61
'Pa ', # 0x62
'Li ', # 0x63
'Lang ', # 0x64
'Ju ', # 0x65
'Guan ', # 0x66
'Jian ', # 0x67
'Han ', # 0x68
'Tong ', # 0x69
'Xia ', # 0x6a
'Zhi ', # 0x6b
'Cheng ', # 0x6c
'Suan ', # 0x6d
'Shi ', # 0x6e
'Zhu ', # 0x6f
'Zuo ', # 0x70
'Xiao ', # 0x71
'Shao ', # 0x72
'Ting ', # 0x73
'Ce ', # 0x74
'Yan ', # 0x75
'Gao ', # 0x76
'Kuai ', # 0x77
'Gan ', # 0x78
'Chou ', # 0x79
'Kago ', # 0x7a
'Gang ', # 0x7b
'Yun ', # 0x7c
'O ', # 0x7d
'Qian ', # 0x7e
'Xiao ', # 0x7f
'Jian ', # 0x80
'Pu ', # 0x81
'Lai ', # 0x82
'Zou ', # 0x83
'Bi ', # 0x84
'Bi ', # 0x85
'Bi ', # 0x86
'Ge ', # 0x87
'Chi ', # 0x88
'Guai ', # 0x89
'Yu ', # 0x8a
'Jian ', # 0x8b
'Zhao ', # 0x8c
'Gu ', # 0x8d
'Chi ', # 0x8e
'Zheng ', # 0x8f
'Jing ', # 0x90
'Sha ', # 0x91
'Zhou ', # 0x92
'Lu ', # 0x93
'Bo ', # 0x94
'Ji ', # 0x95
'Lin ', # 0x96
'Suan ', # 0x97
'Jun ', # 0x98
'Fu ', # 0x99
'Zha ', # 0x9a
'Gu ', # 0x9b
'Kong ', # 0x9c
'Qian ', # 0x9d
'Quan ', # 0x9e
'Jun ', # 0x9f
'Chui ', # 0xa0
'Guan ', # 0xa1
'Yuan ', # 0xa2
'Ce ', # 0xa3
'Ju ', # 0xa4
'Bo ', # 0xa5
'Ze ', # 0xa6
'Qie ', # 0xa7
'Tuo ', # 0xa8
'Luo ', # 0xa9
'Dan ', # 0xaa
'Xiao ', # 0xab
'Ruo ', # 0xac
'Jian ', # 0xad
'Xuan ', # 0xae
'Bian ', # 0xaf
'Sun ', # 0xb0
'Xiang ', # 0xb1
'Xian ', # 0xb2
'Ping ', # 0xb3
'Zhen ', # 0xb4
'Sheng ', # 0xb5
'Hu ', # 0xb6
'Shi ', # 0xb7
'Zhu ', # 0xb8
'Yue ', # 0xb9
'Chun ', # 0xba
'Lu ', # 0xbb
'Wu ', # 0xbc
'Dong ', # 0xbd
'Xiao ', # 0xbe
'Ji ', # 0xbf
'Jie ', # 0xc0
'Huang ', # 0xc1
'Xing ', # 0xc2
'Mei ', # 0xc3
'Fan ', # 0xc4
'Chui ', # 0xc5
'Zhuan ', # 0xc6
'Pian ', # 0xc7
'Feng ', # 0xc8
'Zhu ', # 0xc9
'Hong ', # 0xca
'Qie ', # 0xcb
'Hou ', # 0xcc
'Qiu ', # 0xcd
'Miao ', # 0xce
'Qian ', # 0xcf
None, # 0xd0
'Kui ', # 0xd1
'Sik ', # 0xd2
'Lou ', # 0xd3
'Yun ', # 0xd4
'He ', # 0xd5
'Tang ', # 0xd6
'Yue ', # 0xd7
'Chou ', # 0xd8
'Gao ', # 0xd9
'Fei ', # 0xda
'Ruo ', # 0xdb
'Zheng ', # 0xdc
'Gou ', # 0xdd
'Nie ', # 0xde
'Qian ', # 0xdf
'Xiao ', # 0xe0
'Cuan ', # 0xe1
'Gong ', # 0xe2
'Pang ', # 0xe3
'Du ', # 0xe4
'Li ', # 0xe5
'Bi ', # 0xe6
'Zhuo ', # 0xe7
'Chu ', # 0xe8
'Shai ', # 0xe9
'Chi ', # 0xea
'Zhu ', # 0xeb
'Qiang ', # 0xec
'Long ', # 0xed
'Lan ', # 0xee
'Jian ', # 0xef
'Bu ', # 0xf0
'Li ', # 0xf1
'Hui ', # 0xf2
'Bi ', # 0xf3
'Di ', # 0xf4
'Cong ', # 0xf5
'Yan ', # 0xf6
'Peng ', # 0xf7
'Sen ', # 0xf8
'Zhuan ', # 0xf9
'Pai ', # 0xfa
'Piao ', # 0xfb
'Dou ', # 0xfc
'Yu ', # 0xfd
'Mie ', # 0xfe
'Zhuan ', # 0xff
)
| 18.019305 | 20 | 0.395757 | 513 | 4,667 | 3.60039 | 0.803119 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.200557 | 0.384615 | 4,667 | 258 | 21 | 18.089147 | 0.442549 | 0.274052 | 0 | 0.604651 | 0 | 0 | 0.341744 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
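For context, this is roughly how unidecode consumes per-block tables like the one above (which covers U+7B00–U+7BFF): the table for a codepoint's high byte is indexed by its low byte. A minimal sketch with a truncated table:

```python
# First three entries of the table above; the real tuple has 256 entries,
# one per codepoint in the U+7Bxx block (None marks unmapped slots).
data = ('Mang ', 'Zhu ', 'Utsubo ')

def transliterate(char, table):
    # unidecode-style lookup: index the per-block table by the low byte
    # of the character's codepoint.
    return table[ord(char) & 0xFF]

print(transliterate('\u7b01', data))  # → 'Zhu ' (low byte 0x01)
```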
e3ae477cbacf87c30e4f9e467679b841119fc2ec | 356 | py | Python | pyleecan/Methods/Machine/LamSquirrelCage/comp_length_ring.py | harshasunder-1/pyleecan | 32ae60f98b314848eb9b385e3652d7fc50a77420 | [
"Apache-2.0"
] | 2 | 2019-06-08T15:04:39.000Z | 2020-09-07T13:32:22.000Z | pyleecan/Methods/Machine/LamSquirrelCage/comp_length_ring.py | harshasunder-1/pyleecan | 32ae60f98b314848eb9b385e3652d7fc50a77420 | [
"Apache-2.0"
] | null | null | null | pyleecan/Methods/Machine/LamSquirrelCage/comp_length_ring.py | harshasunder-1/pyleecan | 32ae60f98b314848eb9b385e3652d7fc50a77420 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from numpy import pi
def comp_length_ring(self):
"""Computation of the ring length
Parameters
----------
self : LamSquirrelCage
A LamSquirrelCage object
Returns
-------
Lring: float
Length of the ring [m]
"""
Rmw = self.slot.comp_radius_mid_wind()
return 2 * pi * Rmw
| 14.833333 | 42 | 0.570225 | 42 | 356 | 4.714286 | 0.714286 | 0.050505 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008032 | 0.300562 | 356 | 23 | 43 | 15.478261 | 0.787149 | 0.519663 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
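Since `comp_length_ring` only touches `self.slot.comp_radius_mid_wind()`, its computation can be exercised without pyleecan; a sketch using a stand-in object (the 0.05 m radius is hypothetical):

```python
from math import pi
from unittest.mock import Mock

# Stand-in for a LamSquirrelCage: only slot.comp_radius_mid_wind() is needed.
lam = Mock()
lam.slot.comp_radius_mid_wind.return_value = 0.05  # hypothetical radius [m]

# Same computation as the method above: Lring = 2 * pi * Rmw
Rmw = lam.slot.comp_radius_mid_wind()
Lring = 2 * pi * Rmw
print(Lring)  # circumference of the mid-winding circle, ≈ 0.314 m
```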
e3b7e2ccc401a24ce7cdda41ee6b67326027248b | 1,257 | py | Python | module_info.py | Ikaguia/LWBR-WarForge | 0099fe20188b2dbfff237e8690ae54c33671656f | [
"Unlicense"
] | null | null | null | module_info.py | Ikaguia/LWBR-WarForge | 0099fe20188b2dbfff237e8690ae54c33671656f | [
"Unlicense"
] | null | null | null | module_info.py | Ikaguia/LWBR-WarForge | 0099fe20188b2dbfff237e8690ae54c33671656f | [
"Unlicense"
] | null | null | null | # Point export_dir to the folder you will be keeping your module
# Make sure you use forward slashes (/) and NOT backward slashes (\)
# Several possible variants for export_dir variable:
# Warband being installed to C:/Games
export_dir = "mod/"
###################################
# W.R.E.C.K. Compiler Options #
###################################
# Change this line to select where compiler will generate ID_* files. Use None instead of the string to completely suppress generation of ID_* files.
# ONLY DO THIS WHEN YOU HAVE COMPLETELY REMOVED ID_* FILE DEPENDENCIES IN MODULE SYSTEM!
# Default value: "ID_%s.py"
#write_id_files = "ID_%s.py" # default vanilla-compatible option
#write_id_files = "ID/ID_%s.py" # will put ID_* files in ID/ subfolder of module system's folder
write_id_files = None # will suppress generation of ID_*.py files
# Set to True to display compiler performance information at the end of compilation. Set to False to suppress.
# Default value: False
show_performance_data = False
##########################
# W.R.E.C.K. Plugins #
##########################
import plugin_ms_extension
import plugin_multiplayer_troops
import plugin_make_presentations
import plugin_lwbr_main
import plugin_end | 29.232558 | 149 | 0.682578 | 178 | 1,257 | 4.662921 | 0.516854 | 0.050602 | 0.018072 | 0.009639 | 0.012048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161496 | 1,257 | 43 | 150 | 29.232558 | 0.787476 | 0.68576 | 0 | 0 | 0 | 0 | 0.015873 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.625 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
e3c248c64912ecddfb2709461a774a633f90e748 | 1,045 | py | Python | myuw/views/api/current_schedule.py | uw-it-aca/myuw | 3fa1fabeb3c09d81a049f7c1a8c94092d612438a | [
"Apache-2.0"
] | 18 | 2015-02-04T01:09:11.000Z | 2021-11-25T03:10:39.000Z | myuw/views/api/current_schedule.py | uw-it-aca/myuw | 3fa1fabeb3c09d81a049f7c1a8c94092d612438a | [
"Apache-2.0"
] | 2,323 | 2015-01-15T19:45:10.000Z | 2022-03-21T19:57:06.000Z | myuw/views/api/current_schedule.py | uw-it-aca/myuw | 3fa1fabeb3c09d81a049f7c1a8c94092d612438a | [
"Apache-2.0"
] | 9 | 2015-01-15T19:29:26.000Z | 2022-02-11T04:51:23.000Z | # Copyright 2021 UW-IT, University of Washington
# SPDX-License-Identifier: Apache-2.0
import logging
import traceback
from myuw.dao.term import get_current_quarter
from myuw.logger.timer import Timer
from myuw.views.error import handle_exception
from myuw.views.api.base_schedule import StudClasSche
logger = logging.getLogger(__name__)
class StudClasScheCurQuar(StudClasSche):
"""
Performs actions on resource at /api/v1/schedule/current/.
"""
def get(self, request, *args, **kwargs):
"""
GET returns 200 with the current quarter course section schedule
@return class schedule data in json format
status 404: no schedule found (not registered)
status 543: data error
"""
timer = Timer()
try:
return self.make_http_resp(timer,
get_current_quarter(request),
request)
except Exception:
return handle_exception(logger, timer, traceback)
| 32.65625 | 72 | 0.641148 | 118 | 1,045 | 5.567797 | 0.601695 | 0.048706 | 0.05175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021534 | 0.288995 | 1,045 | 31 | 73 | 33.709677 | 0.862719 | 0.321531 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.375 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
e3c4a0f4c819f8cd9ffe203ccc389bb5ab23bc36 | 280 | py | Python | unit1/spiders/spider_2_quotes.py | nulearn3296/scrapy-training | 8981dbc33b68bd7246839eee34ca8266d5a0066f | [
"BSD-3-Clause"
] | 182 | 2017-04-05T23:39:22.000Z | 2022-02-22T19:49:52.000Z | unit1/spiders/spider_2_quotes.py | nulearn3296/scrapy-training | 8981dbc33b68bd7246839eee34ca8266d5a0066f | [
"BSD-3-Clause"
] | 3 | 2017-04-18T07:16:39.000Z | 2019-05-04T22:54:53.000Z | unit1/spiders/spider_2_quotes.py | nulearn3296/scrapy-training | 8981dbc33b68bd7246839eee34ca8266d5a0066f | [
"BSD-3-Clause"
] | 53 | 2017-04-07T03:25:54.000Z | 2022-02-21T21:51:01.000Z | import scrapy
class QuotesSpider(scrapy.Spider):
name = "quotes2"
start_urls = [
'http://quotes.toscrape.com/page/1/',
'http://quotes.toscrape.com/page/2/',
]
def parse(self, response):
self.log('I just visited {}'.format(response.url))
| 21.538462 | 58 | 0.607143 | 34 | 280 | 4.970588 | 0.764706 | 0.118343 | 0.213018 | 0.248521 | 0.295858 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013825 | 0.225 | 280 | 12 | 59 | 23.333333 | 0.764977 | 0 | 0 | 0 | 0 | 0 | 0.328571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
e3cf9ceee92cef8bf273106333c95cd00354d2fd | 376 | py | Python | 2017/01/p1.py | foxscotch/advent-of-code | 20688fad4eef09ef5670ad91f8051c044dfc7baf | [
"MIT"
] | null | null | null | 2017/01/p1.py | foxscotch/advent-of-code | 20688fad4eef09ef5670ad91f8051c044dfc7baf | [
"MIT"
] | null | null | null | 2017/01/p1.py | foxscotch/advent-of-code | 20688fad4eef09ef5670ad91f8051c044dfc7baf | [
"MIT"
] | null | null | null | # Python 3.6.1
with open("input.txt", "r") as f:
puzzle_input = [int(i) for i in f.read()[0:-1]]
total = 0
for cur_index in range(len(puzzle_input)):
    next_index = cur_index + 1 if cur_index != len(puzzle_input) - 1 else 0
puz_cur = puzzle_input[cur_index]
pnext = puzzle_input[next_index]
if puz_cur == pnext:
total += puz_cur
print(total)
| 23.5 | 79 | 0.648936 | 67 | 376 | 3.432836 | 0.447761 | 0.23913 | 0.121739 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030612 | 0.218085 | 376 | 15 | 80 | 25.066667 | 0.751701 | 0.031915 | 0 | 0 | 0 | 0 | 0.027624 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
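The loop above implements the circular-match captcha from Advent of Code 2017, day 1; a condensed, input-free version for illustration:

```python
def captcha(digits):
    # Sum every digit that equals the next digit, wrapping around the end.
    n = len(digits)
    return sum(d for i, d in enumerate(digits) if d == digits[(i + 1) % n])

print(captcha([1, 1, 2, 2]))              # → 3
print(captcha([9, 1, 2, 1, 2, 1, 2, 9]))  # → 9 (the last 9 wraps to the first)
```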
e3dbcbdb3ec7d20bf8430c926dca62a8ba70bef4 | 4,710 | py | Python | tests/all_tests.py | My-Novel-Management/storybuilderunite | c003d3451e237f574c54a87ea7d4fd8da8e833be | [
"MIT"
] | 1 | 2020-06-18T01:38:55.000Z | 2020-06-18T01:38:55.000Z | tests/all_tests.py | My-Novel-Management/storybuilder | 1f36e56a74dbb55a25d60fce3ce81f3c650f521a | [
"MIT"
] | 143 | 2019-11-13T00:21:11.000Z | 2020-08-15T05:47:41.000Z | tests/all_tests.py | My-Novel-Management/storybuilderunite | c003d3451e237f574c54a87ea7d4fd8da8e833be | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
'''
The test suite for all test cases
=================================
'''
import unittest
from tests import test_world
from tests.commands import test_command
from tests.commands import test_optioncmd
from tests.commands import test_scode
from tests.commands import test_storycmd
from tests.commands import test_tagcmd
from tests.containers import test_container
from tests.core import test_compiler
from tests.core import test_filter
from tests.core import test_formatter
from tests.core import test_headerupdater
from tests.core import test_outputter
from tests.core import test_reducer
from tests.core import test_runner
from tests.core import test_serializer
from tests.core import test_tagreplacer
from tests.core import test_validater
from tests.datatypes import test_codelist
from tests.datatypes import test_compilemode
from tests.datatypes import test_database
from tests.datatypes import test_formatmode
from tests.datatypes import test_formattag
from tests.datatypes import test_headerinfo
from tests.datatypes import test_outputmode
from tests.datatypes import test_rawdata
from tests.datatypes import test_resultdata
from tests.datatypes import test_storyconfig
from tests.objects import test_day
from tests.objects import test_item
from tests.objects import test_person
from tests.objects import test_rubi
from tests.objects import test_sobject
from tests.objects import test_stage
from tests.objects import test_time
from tests.objects import test_word
from tests.objects import test_writer
from tests.tools import test_checker
from tests.tools import test_converter
from tests.tools import test_counter
from tests.utils import test_assertion
from tests.utils import test_dict
from tests.utils import test_list
from tests.utils import test_logger
from tests.utils import test_math
from tests.utils import test_name
from tests.utils import test_str
def suite() -> unittest.TestSuite:
''' Packing all tests.
'''
suite = unittest.TestSuite()
suite.addTests((
# commands
unittest.makeSuite(test_command.SCmdEnumTest),
unittest.makeSuite(test_optioncmd.OptionParserTest),
unittest.makeSuite(test_scode.SCodeTest),
unittest.makeSuite(test_storycmd.StoryCmdTest),
unittest.makeSuite(test_tagcmd.TagCmdTest),
# containers
unittest.makeSuite(test_container.ContainerTest),
# datatypes
unittest.makeSuite(test_codelist.CodeListTest),
unittest.makeSuite(test_compilemode.CompileModeTest),
unittest.makeSuite(test_database.DatabaseTest),
unittest.makeSuite(test_formatmode.FormatModeTest),
unittest.makeSuite(test_formattag.FormatTagTest),
unittest.makeSuite(test_headerinfo.HeaderInfoTest),
unittest.makeSuite(test_outputmode.OutputModeTest),
unittest.makeSuite(test_rawdata.RawDataTest),
unittest.makeSuite(test_resultdata.ResultDataTest),
unittest.makeSuite(test_storyconfig.StoryConfigTest),
# objects
unittest.makeSuite(test_day.DayTest),
unittest.makeSuite(test_item.ItemTest),
unittest.makeSuite(test_person.PersonTest),
unittest.makeSuite(test_rubi.RubiTest),
unittest.makeSuite(test_sobject.SObjectTest),
unittest.makeSuite(test_stage.StageTest),
unittest.makeSuite(test_time.TimeTest),
unittest.makeSuite(test_word.WordTest),
unittest.makeSuite(test_writer.WriterTest),
# tools
unittest.makeSuite(test_checker.CheckerTest),
unittest.makeSuite(test_converter.ConverterTest),
unittest.makeSuite(test_counter.CounterTest),
# utility
unittest.makeSuite(test_assertion.MethodsTest),
unittest.makeSuite(test_dict.MethodsTest),
unittest.makeSuite(test_list.MethodsTest),
unittest.makeSuite(test_logger.MyLoggerTest),
unittest.makeSuite(test_math.MethodsTest),
unittest.makeSuite(test_name.MethodsTest),
unittest.makeSuite(test_str.MethodsTest),
# core
unittest.makeSuite(test_compiler.CompilerTest),
unittest.makeSuite(test_filter.FilterTest),
unittest.makeSuite(test_formatter.FormatterTest),
unittest.makeSuite(test_headerupdater.HeaderUpdaterTest),
unittest.makeSuite(test_outputter.OutputterTest),
unittest.makeSuite(test_reducer.ReducerTest),
unittest.makeSuite(test_runner.RunnerTest),
unittest.makeSuite(test_serializer.SerializerTest),
unittest.makeSuite(test_tagreplacer.TagReplacerTest),
unittest.makeSuite(test_validater.ValidaterTest),
# main
unittest.makeSuite(test_world.WorldTest),
))
return suite
| 39.579832 | 65 | 0.767091 | 539 | 4,710 | 6.532468 | 0.213358 | 0.11758 | 0.274354 | 0.053962 | 0.317807 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000252 | 0.157962 | 4,710 | 118 | 66 | 39.915254 | 0.887544 | 0.037367 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020408 | 1 | 0.010204 | false | 0 | 0.479592 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
e3e28b729577829ef3b226cb7c4e4eb15e894a82 | 2,137 | py | Python | pkgs/clean-pkg/src/genie/libs/clean/stages/iosxe/cat9k/tests/test_tftp_boot.py | patrickboertje/genielibs | 61c37aacf3dd0f499944555e4ff940f92f53dacb | [
"Apache-2.0"
] | 1 | 2022-01-16T10:00:24.000Z | 2022-01-16T10:00:24.000Z | pkgs/clean-pkg/src/genie/libs/clean/stages/iosxe/cat9k/tests/test_tftp_boot.py | patrickboertje/genielibs | 61c37aacf3dd0f499944555e4ff940f92f53dacb | [
"Apache-2.0"
] | null | null | null | pkgs/clean-pkg/src/genie/libs/clean/stages/iosxe/cat9k/tests/test_tftp_boot.py | patrickboertje/genielibs | 61c37aacf3dd0f499944555e4ff940f92f53dacb | [
"Apache-2.0"
] | null | null | null | import logging
import unittest
from unittest.mock import Mock, MagicMock
from genie.libs.clean.stages.iosxe.cat9k.stages import TftpBoot
from genie.libs.clean.stages.tests.utils import CommonStageTests, create_test_device
from pyats.aetest.steps import Steps
from pyats.results import Passed, Failed
from pyats.aetest.signals import TerminateStepSignal
from unicon.eal.dialogs import Statement, Dialog
# Disable logging. It may be useful to comment this out when developing tests.
logging.disable(logging.CRITICAL)
class Tftpboot(unittest.TestCase):
def setUp(self):
# Instantiate class object
self.cls = TftpBoot()
# Instantiate device object. This also sets up commonly needed
# attributes and Mock objects associated with the device.
self.device = create_test_device('PE1', os='iosxe', platform='cat9k')
def test_pass(self):
# Make sure we have a unique Steps() object for result verification
steps = Steps()
# And we want the execute_no_boot_variable api to be mocked.
# This simulates the pass case.
self.device.api.execute_no_boot_variable = Mock()
# Call the method to be tested (clean step inside class)
self.cls.delete_boot_variables(
steps=steps, device=self.device, timeout=0
)
# Check that the result is expected
self.assertEqual(Passed, steps.details[0].result)
def test_fail_tftp_boot(self):
# Make sure we have a unique Steps() object for result verification
steps = Steps()
# And we want the execute_no_boot_variable api to be mocked to raise an
# exception when called. This simulates the fail case.
self.device.api.execute_no_boot_variable = Mock(side_effect=Exception)
# We expect this step to fail so make sure it raises the signal
with self.assertRaises(TerminateStepSignal):
self.cls.delete_boot_variables(
steps=steps, device=self.device, timeout=0
)
# Check the overall result is as expected
self.assertEqual(Failed, steps.details[0].result)
| 34.467742 | 84 | 0.702387 | 287 | 2,137 | 5.142857 | 0.400697 | 0.033875 | 0.03523 | 0.056911 | 0.330623 | 0.298103 | 0.298103 | 0.298103 | 0.298103 | 0.241192 | 0 | 0.004253 | 0.229761 | 2,137 | 61 | 85 | 35.032787 | 0.892467 | 0.352831 | 0 | 0.206897 | 0 | 0 | 0.00951 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 1 | 0.103448 | false | 0.103448 | 0.310345 | 0 | 0.448276 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
e3e9ddb3f6a154281b95ad1d2aa584f2ce6e1a6c | 21,345 | py | Python | examples/axro/AXROalignment.py | bddonovan/PyXFocus | 2d6722f0db28c045df35075487f9d4fdfed8b284 | [
"MIT"
] | 1 | 2018-04-20T15:32:24.000Z | 2018-04-20T15:32:24.000Z | examples/axro/AXROalignment.py | bddonovan/PyXFocus | 2d6722f0db28c045df35075487f9d4fdfed8b284 | [
"MIT"
] | 6 | 2017-11-03T16:13:46.000Z | 2019-04-26T11:13:03.000Z | examples/axro/AXROalignment.py | bddonovan/PyXFocus | 2d6722f0db28c045df35075487f9d4fdfed8b284 | [
"MIT"
] | 4 | 2017-04-13T17:24:54.000Z | 2019-08-08T15:27:29.000Z | from numpy import *
from matplotlib.pyplot import *
import traces.conicsolve as conicsolve
import traces.PyTrace as PT
import pdb
from mpl_toolkits.mplot3d import Axes3D
import time
import scipy.optimize
#Load in flat mirror deformations
foldfig = genfromtxt("/home/rallured/Dropbox/AXRO/Alignment/Simulation/"
"NIST/141202FoldFigCoeffs.txt")
foldsag = genfromtxt("/home/rallured/Dropbox/AXRO/Alignment/Simulation/"
"141202FoldSagCoeffs.txt")*1000
foldcoeffs = foldfig + foldsag
retrofig = genfromtxt("/home/rallured/Dropbox/AXRO/Alignment/Simulation/"
"NIST/141202RetroFigCoeffs.txt")
retrosag = genfromtxt("/home/rallured/Dropbox/AXRO/Alignment/Simulation/"
"141202RetroSagCoeffs.txt")*1000
retrocoeffs = retrofig + retrosag
#Load in primary deformations
pcoeff,pax,paz = genfromtxt('/home/rallured/Dropbox/AXRO/'
'Alignment/CoarseAlignment/150615_OP1S09Coeffs.txt')
pcoeff = pcoeff/1000.
#Set up Hartmann mask
holewidth = arcsin(.3/220)
holetheta = linspace(-arcsin(45./220),arcsin(45./220),15)
numholes = size(holetheta)
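The mask geometry set up above can be checked standalone; a numpy-only sketch of the hole-center positions along the 220 mm circle (the arc-length conversion is an illustrative assumption):

```python
import numpy as np

# Same setup as above: 15 Hartmann holes spanning a +/-45 mm transverse
# extent at a 220 mm radius, each hole subtending 0.3 mm.
holewidth = np.arcsin(.3 / 220)
holetheta = np.linspace(-np.arcsin(45. / 220), np.arcsin(45. / 220), 15)

# Arc-length positions of the hole centers (mm); symmetric about zero and
# slightly wider than +/-45 mm, since arc length exceeds the chord.
positions = 220 * holetheta
print(len(positions), positions.min() < -45, positions.max() > 45)
```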
#Set up diverging beam with angular offset
#Give it divergence angle, pitch, and roll
def CDAbeam(num,div,pitch,roll,cda):
## PT.transform(*cda)
PT.pointsource(div,num)
PT.transform(0,0,0,pitch,0,0)
PT.transform(0,0,0,0,0,roll)
PT.itransform(*cda)
return
#Trace from primary focus to fold to primary
def primMaskTrace(fold,primary,woltVignette=True,foldrot=0.):
#Get Wolter parameters
alpha,p,d,e = conicsolve.woltparam(220.,8400.)
primfoc = conicsolve.primfocus(220.,8400.)
#Trace to fold mirror
#translate to center of fold mirror
PT.transform(0.,85.12,primfoc-651.57+85.12,0,0,0)
#rotate so surface normal points in correct direction
PT.transform(0,0,0,-3*pi/4,0,0)
PT.transform(0,0,0,0,0,pi)
#trace to fold flat
PT.flat()
#Introduce fold misalignment
PT.transform(*fold)
PT.zernsurfrot(foldsag,foldfig,406./2,-174.659*pi/180+foldrot)
PT.itransform(*fold)
PT.reflect()
PT.transform(0,0,0,0,0,-pi)
PT.transform(0,0,0,pi/4,0,0)
#Translate to optical axis mid-plane, then down to image of
#primary focus, place primary mirror and trace
PT.transform(0,85.12,651.57-85.12,0,0,0)
PT.flat()
## pdb.set_trace()
rt = conicsolve.primrad(8475.,220.,8400.)
PT.transform(0,-rt,75.,0,0,0)
PT.transform(*primary)
PT.transform(0,rt,-8475.,0,0,0)
## PT.wolterprimary(220.,8400.)
PT.primaryLL(220.,8400.,8525.,8425.,30.*np.pi/180.,pcoeff,pax,paz)
if woltVignette is True:
ind = logical_and(PT.z<8525.,PT.z>8425.)
PT.vignette(ind=ind)
PT.reflect()
PT.transform(0,-rt,8475.,0,0,0)
PT.itransform(*primary)
PT.transform(0,rt,-8475.,0,0,0)
#Move back up to mask plane and trace flat
PT.transform(0,0,8400.+134.18,0,0,0)
PT.flat()
## pdb.set_trace()
#Rays should now be at Hartmann mask plane
return
def traceFromMask(N,numholes,cda,fold,retro,primary,foldrot=0.,retrorot=0.):
#Vignette at proper hole
h = hartmannMask()
ind = h==N
PT.vignette(ind=ind)
#Continue trace up to retro and back to CDA
PT.transform(0,-123.41,1156.48-651.57-134.18,0,0,0)
PT.flat()
PT.transform(0,0,0,pi,0,0)
PT.transform(*retro)
PT.zernsurfrot(retrosag,retrofig,378./2,-8.993*pi/180+retrorot)
PT.itransform(*retro)
PT.reflect()
PT.transform(0,0,0,-pi,0,0)
PT.transform(0,123.41,-1156.48+651.57+134.18,0,0,0)
PT.flat()
h = hartmannMask()
ind = h==N
PT.vignette(ind=ind)
PT.transform(0,0,-134.18,0,0,0)
rt = conicsolve.primrad(8475.,220.,8400.)
PT.transform(0,-rt,75.,0,0,0)
PT.transform(*primary)
PT.transform(0,rt,-8475.,0,0,0)
PT.wolterprimary(220.,8400.)
ind = logical_and(PT.z<8525.,PT.z>8425.)
PT.vignette(ind=ind)
PT.reflect()
PT.transform(0,-rt,8475.,0,0,0)
PT.itransform(*primary)
PT.transform(0,rt,-8475.,0,0,0)
PT.transform(0,-85.12,8400.-651.57+85.12\
,0,0,0)
PT.transform(0,0,0,-pi/4,0,0)
PT.transform(0,0,0,0,0,pi)
PT.flat()
PT.transform(*fold)
PT.zernsurfrot(foldsag,foldfig,406./2,-174.659*pi/180+foldrot)
PT.itransform(*fold)
PT.reflect()
PT.transform(0,0,0,0,0,-pi)
PT.transform(0,0,0,3*pi/4,0,0)
PT.transform(0,-85.12,-85.12-(conicsolve.primfocus(220.,8400.)-651.57)\
,0,0,0)
PT.transform(*cda)
PT.flat()
return
#### DOUBLE MIRROR TRACES ####
#Trace from focus to fold to Hartmann mask
def fullMaskTrace(fold,prim,sec,woltVignette=True,foldrot=0.):
    #Get Wolter parameters
    alpha,p,d,e = conicsolve.woltparam(220.,8400.)
    foc = 8400.
    #Trace to fold mirror
    #translate to center of fold mirror
    PT.transform(0.,85.12,foc-651.57+85.12,0,0,0)
    #rotate so surface normal points in correct direction
    PT.transform(0,0,0,-3*pi/4,0,0)
    PT.transform(0,0,0,0,0,pi)
    #trace to fold flat
    PT.flat()
    #Introduce fold misalignment
    PT.transform(*fold)
    PT.zernsurfrot(foldsag,foldfig,406./2,-174.659*pi/180+foldrot)
    PT.itransform(*fold)
    PT.reflect()
    PT.transform(0,0,0,0,0,-pi)
    PT.transform(0,0,0,pi/4,0,0)
    #Translate to optical axis mid-plane, then down to image of
    #primary focus, place primary mirror and trace
    PT.transform(0,85.12,651.57-85.12,0,0,0)
    PT.flat()
    PT.transform(0,0,-8400.,0,0,0)
    #Place secondary
    #Go to tangent point, apply misalignment, place mirror, and reverse
    PT.transform(0,-conicsolve.secrad(8325.,220.,8400.),8325.,0,0,0)
    PT.transform(*sec)
    PT.itransform(0,-conicsolve.secrad(8325.,220.,8400.),8325.,0,0,0)
    PT.woltersecondary(220.,8400.)
    if woltVignette is True:
        ind = logical_and(PT.z<8375.,PT.z>8275.)
        PT.vignette(ind=ind)
    PT.reflect()
    PT.transform(0,-conicsolve.secrad(8325.,220.,8400.),8325.,0,0,0)
    PT.itransform(*sec) #Back at nominal secondary tangent point
    PT.itransform(0,-conicsolve.secrad(8325.,220.,8400.),8325.,0,0,0)
    #Place primary
    #Go to tangent point, apply misalignment, place mirror, and reverse
    PT.transform(0,-conicsolve.primrad(8425.,220.,8400.),8425.,0,0,0)
    PT.transform(*prim)
    PT.itransform(0,-conicsolve.primrad(8425.,220.,8400.),8425.,0,0,0)
    ## PT.transform(0,0,8475.,0,0,0)
    ## PT.flat()
    ## PT.itransform(0,0,8475.,0,0,0)
    ## PT.wolterprimary(220.,8400.)
    PT.primaryLL(220.,8400.,8525.,8425.,30.*np.pi/180.,pcoeff,pax,paz)
    ## pdb.set_trace()
    if woltVignette is True:
        ind = logical_and(PT.z<8525.,PT.z>8425.)
        PT.vignette(ind=ind)
    PT.reflect()
    PT.transform(0,-conicsolve.primrad(8425.,220.,8400.),8425.,0,0,0)
    PT.itransform(*prim)
    PT.itransform(0,-conicsolve.primrad(8425.,220.,8400.),8425.,0,0,0)
    #Move back up to mask plane and trace flat
    PT.transform(0,0,8400.+134.18,0,0,0)
    PT.flat()
    ## pdb.set_trace()
    #Rays should now be at Hartmann mask plane
    return
def fullFromMask(N,cda,fold,retro,prim,sec,foldrot=0.,retrorot=0.):
    ## pdb.set_trace()
    #Vignette at proper hole
    h = hartmannMask()
    ind = h==N
    PT.vignette(ind=ind)
    #Continue trace up to retro and back to CDA
    PT.transform(0,-123.41,1156.48-651.57-134.18,0,0,0)
    PT.flat()
    PT.transform(0,0,0,pi,0,0)
    PT.transform(*retro)
    PT.zernsurfrot(retrosag,retrofig,378./2,-8.993*pi/180+retrorot)
    PT.itransform(*retro)
    PT.reflect()
    PT.transform(0,0,0,-pi,0,0)
    #Back to mask
    PT.transform(0,123.41,-1156.48+651.57+134.18,0,0,0)
    PT.flat()
    h = hartmannMask()
    ind = h==N
    PT.vignette(ind=ind)
    #Place Wolter surfaces
    PT.transform(0,0,-134.18-8400.,0,0,0)
    PT.transform(0,-conicsolve.primrad(8425.,220.,8400.),8425.,0,0,0)
    PT.transform(*prim)
    PT.itransform(0,-conicsolve.primrad(8425.,220.,8400.),8425.,0,0,0)
    ## PT.wolterprimary(220.,8400.)
    PT.primaryLL(220.,8400.,8525.,8425.,30.*np.pi/180.,pcoeff,pax,paz)
    ## pdb.set_trace() #leftover debug breakpoint, disabled so traces run unattended
    ind = logical_and(PT.z<8525.,PT.z>8425.)
    PT.vignette(ind=ind)
    PT.transform(0,-conicsolve.primrad(8425.,220.,8400.),8425.,0,0,0)
    PT.itransform(*prim)
    PT.itransform(0,-conicsolve.primrad(8425.,220.,8400.),8425.,0,0,0)
    PT.reflect()
    #Wolter secondary
    PT.transform(0,-conicsolve.secrad(8325.,220.,8400.),8325.,0,0,0)
    PT.transform(*sec)
    PT.itransform(0,-conicsolve.secrad(8325.,220.,8400.),8325.,0,0,0)
    PT.woltersecondary(220.,8400.)
    ind = logical_and(PT.z<8375.,PT.z>8275.)
    PT.vignette(ind=ind)
    PT.reflect()
    PT.transform(0,-conicsolve.secrad(8325.,220.,8400.),8325.,0,0,0)
    PT.itransform(*sec)
    PT.itransform(0,-conicsolve.secrad(8325.,220.,8400.),8325.,0,0,0)
    ## PT.woltersecondary(220.,8400.)
    ## ind = logical_and(PT.z<8375.,PT.z>8275.)
    ## PT.vignette(ind=ind)
    ## PT.reflect()
    #Back to fold
    PT.transform(0,-85.12,8400.-651.57+85.12,0,0,0)
    PT.transform(0,0,0,-pi/4,0,0)
    PT.transform(0,0,0,0,0,pi)
    PT.flat()
    PT.transform(*fold)
    PT.zernsurfrot(foldsag,foldfig,406./2,-174.659*pi/180+foldrot)
    PT.itransform(*fold)
    PT.reflect()
    PT.transform(0,0,0,0,0,-pi)
    #Back to CDA
    PT.transform(0,0,0,3*pi/4,0,0)
    PT.transform(0,-85.12,-85.12-8400.+651.57,0,0,0)
    PT.transform(*cda)
    PT.flat()
    return
#Return nominal pitch and roll for Hartmann hole
#Used as starting point for optimization
def hartmannStartFull(N):
    global holetheta
    a,p,d,e = conicsolve.woltparam(220.,8400.) #Pitch is 4*a
    return 4*a,holetheta[N-1]
#Return mean x and y positions as a function of pitch and roll
def traceHoleFull(num,N,div,p,r,cda,fold,prim,sec,foldrot=0.):
    CDAbeam(num,div,p,r,cda)
    fullMaskTrace(fold,prim,sec,woltVignette=False,foldrot=foldrot)
    realx,realy = hartmannPosition(N)
    res = sqrt(mean((PT.x-realx)**2+(PT.y-realy)**2))
    print str(N) + ': ' + str(res)
    return res
#Use minimization routine to aim ray bundle at proper hole
def fullAim(num,N,div,cda,fold,prim,sec,foldrot=0.):
    #Create function
    fun = lambda p: traceHoleFull(num,N,div,p[0],p[1],\
                                  cda,fold,prim,sec,foldrot=foldrot)
    #Optimize function
    start = array(hartmannStartFull(N))
    if abs(start[1]) < .001:
        start[1] = .01
    print 'Begin ' + str(N)
    res = scipy.optimize.minimize(fun,start,method='nelder-mead',\
                                  options={'ftol':1.e-2,'disp':True})
    print 'End ' + str(N)
    ## traceHole2(num,N,numholes,div,res['x'][0],res['x'][1],cda,fold)
    return res['x']
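The aiming routine above is just `scipy.optimize.minimize` with Nelder-Mead driving a two-parameter (pitch, roll) merit function. A minimal, self-contained sketch of the same pattern on a toy merit function (the `target` numbers are illustrative stand-ins, not the real ray trace):

```python
import numpy as np
import scipy.optimize

# Toy stand-in for the traceHoleFull merit: distance of a (pitch, roll)
# guess from a hypothetical target spot; the real merit traces rays instead.
target = np.array([0.02, -0.15])

def merit(p):
    return np.sqrt(np.sum((np.asarray(p) - target)**2))

start = np.array([0.0, 0.01])  # nudge roll off zero, as fullAim does
res = scipy.optimize.minimize(merit, start, method='nelder-mead',
                              options={'xatol': 1e-6, 'disp': False})

print(np.allclose(res['x'], target, atol=1e-3))  # True
```

Note that recent SciPy spells the Nelder-Mead tolerance options `xatol`/`fatol`; the `'ftol'` key used in `fullAim` is not recognized by newer versions.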
##Return vector of pitch and roll for a Hartmann mask
def alignHartmannFull(cda,fold,prim,sec,foldrot=0.):
    global numholes
    pitch = zeros(numholes)
    roll = zeros(numholes)
    for i in range(numholes):
        #Aim with the requested fold rotation applied
        pitch[i],roll[i] = fullAim(10**2,i+1,.00001*pi/180,\
                                   cda,fold,prim,sec,foldrot=foldrot)
        #findHole(i+1,numholes,cda,fold) is the older aiming approach
    return pitch,roll
##Do full double mirror alignment trace, making use of ray aiming results
##for efficiency
def fullAlign(num,cda,fold,retro,prim,sec,p=None,r=None,\
              foldrot=0.,retrorot=0.):
    global numholes
    if p is None:
        #Grab pitch and roll vectors
        p,r = alignHartmannFull(cda,fold,prim,sec,foldrot=foldrot)
    #Trace out holes, one by one
    xm = []
    ym = []
    xstd = []
    ystd = []
    for i in range(numholes):
        #Trace up to Hartmann mask
        CDAbeam(num,.001*pi/180,p[i],r[i],cda)
        #Propagate the mirror clocking errors into the traces
        fullMaskTrace(fold,prim,sec,woltVignette=True,foldrot=foldrot)
        fullFromMask(i+1,cda,fold,retro,prim,sec,\
                     foldrot=foldrot,retrorot=retrorot)
        #Print out vignetting factor (float division so Py2 doesn't truncate)
        print float(size(PT.x))/num
        #Evaluate mean spot position
        xm.append(mean(PT.x))
        ym.append(mean(PT.y))
        xstd.append(std(PT.x))
        ystd.append(std(PT.y))
    return array(xm),array(ym),array(xstd),array(ystd)
#Evaluate sensitivity of misalignment degree of freedom
def dofSensitivityFull(num,obj,dof,step,criteria):
    #Initial misalignment vectors
    cda = zeros(6)
    fold = zeros(6)
    retro = zeros(6)
    prim = zeros(6)
    sec = zeros(6)
    misalign = [cda,fold,retro,prim,sec]
    #Get nominal spot position (fullAlign also returns spot widths)
    x0,y0 = fullAlign(num,*misalign)[:2]
    #Increase proper dof until spot shifts breach 10 micron requirement
    merit = 0.
    figure()
    while merit < criteria:
        misalign[obj][dof] = misalign[obj][dof] + step
        try:
            x1,y1 = fullAlign(num,*misalign)[:2]
        except:
            sys.stdout.write('Hartmann Throughput Cutoff at %7.4e' %\
                             misalign[obj][dof])
            break
        dx = x1-x0
        dx = dx - mean(dx)
        dy = y1-y0
        dy = dy - mean(dy)
        merit = max(sqrt(dx**2+dy**2))
        sys.stdout.write('DoF: %7.4e Merit : %0.4f\r' %\
                         (misalign[obj][dof],merit))
        plot(dx,dy,'.')
        draw()
        sys.stdout.flush()
    return
#### DOUBLE MIRROR TRACES ####
#Define Hartmann mask and vignette rays
#Keep track of which hole rays hit with "hole" vector
#Need origin to be at Hartmann mask plane on optical axis
def hartmannMask():
##    #Create hole array to handle hole positions
##    hole = zeros(size(PT.x))
##    holerad = (conicsolve.primrad(8500.,220.,8400.)-220.)/2. #Hole halfwidth
##    holecent = conicsolve.primrad(8475.,220.,8400.) #Radius of center of holes
##    halfang = arcsin(50./220.)-.009 #Half angle of Hartmann mask
##    holetheta = linspace(-halfang,halfang,numholes) #Vector of Hartmann angles
##    holewidth = arcsin(holerad/220.)
    #Set holewidth and holetheta to be global variables
    global holewidth, holetheta
    #Loop through hole numbers
    rayang = arctan2(PT.y,PT.x) #Center of mirror is -pi/2
    i = 1
    hole = zeros(size(PT.x))
    for theta in holetheta:
        ind = logical_and(rayang < -pi/2 + theta + holewidth,\
                          rayang > -pi/2 + theta - holewidth)
        hole[ind] = i
        i = i+1
    return hole
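The mask logic bins each ray's azimuth into an angular window centered on each hole, with the mirror center at -pi/2. A self-contained sanity check of that binning, using hypothetical stand-ins for the `holetheta`/`holewidth` globals:

```python
import numpy as np

# Hypothetical mask geometry (stand-ins for the module-level globals)
numholes = 5
halfang = np.arcsin(50./220.) - .009
holetheta = np.linspace(-halfang, halfang, numholes)
holewidth = 0.5*(holetheta[1] - holetheta[0])  # assumes non-overlapping holes

# Rays placed exactly at the hole centers should classify to holes 1..numholes
x = np.cos(holetheta - np.pi/2)
y = np.sin(holetheta - np.pi/2)
rayang = np.arctan2(y, x)

hole = np.zeros(x.size)
for i, theta in enumerate(holetheta, start=1):
    ind = np.logical_and(rayang < -np.pi/2 + theta + holewidth,
                         rayang > -np.pi/2 + theta - holewidth)
    hole[ind] = i

print(hole)  # [1. 2. 3. 4. 5.]
```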
#Set up CDA beam to trace to a given Hartmann hole
#Will use indicated beam divergence and apply
#appropriate roll to beam to find hole N
def traceHole(num,N,numholes,div,cda):
    a,p,d,e = conicsolve.woltparam(220.,8400.) #Pitch is 2*a
    halfang = arcsin(50./220.)-.009
    holetheta = linspace(-halfang,halfang,numholes)
    CDAbeam(num,div,2*a,holetheta[N-1],cda)
    return 2*a,holetheta[N-1]
#Return nominal pitch and roll for Hartmann hole
#Used as starting point for optimization
def hartmannStart(N,numholes):
    global holetheta
    a,p,d,e = conicsolve.woltparam(220.,8400.) #Pitch is 2*a
    return 2*a,holetheta[N-1]
#Return mean x and y positions as a function of pitch and roll
def traceHole2(num,N,numholes,div,p,r,cda,fold,primary,foldrot=0.):
    CDAbeam(num,div,p,r,cda)
    primMaskTrace(fold,primary,woltVignette=False,foldrot=foldrot)
    realx,realy = hartmannPosition(N)
    return mean(sqrt((PT.x-realx)**2+(PT.y-realy)**2))
#Use minimization routine to aim ray bundle at proper hole
def aimHole(num,N,numholes,div,cda,fold,primary,foldrot=0.):
    #Create function
    fun = lambda p: traceHole2(num,N,numholes,div,p[0],p[1],\
                               cda,fold,primary,foldrot=foldrot)
    #Optimize function
    start = array(hartmannStart(N,numholes))
    if abs(start[1]) < .001:
        start[1] = .01
    res = scipy.optimize.minimize(fun,start,method='nelder-mead',\
                                  options={'ftol':1.e-2,'disp':False})
    ## traceHole2(num,N,numholes,div,res['x'][0],res['x'][1],cda,fold)
    return res['x']
#Returns nominal position of Hartmann hole in Cartesian coordinates
def hartmannPosition(N):
    global holetheta
    holecent = conicsolve.primrad(8475.,220.,8400.) #Radius of center of holes
    thistheta = holetheta[N-1]-pi/2
    return holecent*cos(thistheta),holecent*sin(thistheta)
#Returns nominal position of Hartmann hole in polar coordinates
def hartmannAngles(N,numholes):
    holecent = conicsolve.primrad(8475.,220.,8400.) #Radius of center of holes
    halfang = arcsin(50./220.)-.009 #Half angle of Hartmann mask
    holetheta = linspace(-halfang,halfang,numholes) #Vector of Hartmann angles
    thistheta = holetheta[N-1]-pi/2
    return holecent,thistheta
#Determine chief ray to a given Hartmann hole
#Start with fairly wide divergence at nominal location
#Take mean of rays that hit the hole, then repeat with much
#smaller divergence
#Probably three iterations to converge
#Need to do this for each Hartmann hole and save ray directions
def findHole(N,numholes,cda,fold,primary):
    #Trace out first iteration
    pitch,roll = traceHole(10**3,N,numholes,.0001*pi/180,cda)
    pdb.set_trace()
    primMaskTrace(fold,woltVignette=False)
    pdb.set_trace()
    #Mean position of rays should be equal to nominal Hartmann position
    rad,roll = hartmannAngles(N,numholes)
    actrad = mean(sqrt(PT.x**2+PT.y**2))
    actroll = mean(arctan2(PT.y,PT.x))
    raddiff = rad - actrad
    rolldiff = roll - actroll
    newpitch = pitch - raddiff/(conicsolve.primfocus(220.,8400.)+134.18)
    newroll = roll - rolldiff + pi/2
    pdb.set_trace()
    while (abs(raddiff) > .01) or (abs(rolldiff) > .01/220.):
        #Fix pitch and roll
        CDAbeam(10**3,.0001*pi/180,newpitch,newroll,cda)
        pitch = newpitch
        roll = newroll-pi/2
        primMaskTrace(fold,woltVignette=False)
        raddiff = rad - mean(sqrt(PT.x**2+PT.y**2))
        rolldiff = roll - mean(arctan2(PT.y,PT.x))
        ## print 'Rolldiff: ' + str(rolldiff)
        ## print 'Raddiff: ' + str(raddiff)
        newpitch = pitch - raddiff/(conicsolve.primfocus(220.,8400.)+134.18)
        newroll = roll - rolldiff + pi/2
    return newpitch,newroll
##Return vector of pitch and roll for a Hartmann mask
def alignHartmann(numholes,cda,fold,primary,foldrot=0.):
    pitch = zeros(numholes)
    roll = zeros(numholes)
    for i in range(numholes):
        #Aim with the requested fold rotation applied
        pitch[i],roll[i] = aimHole(10**2,i+1,numholes,.00001*pi/180,\
                                   cda,fold,primary,foldrot=foldrot)
        #findHole(i+1,numholes,cda,fold) is the older aiming approach
    return pitch,roll
##Do full primary alignment trace, making use of ray aiming results
##for efficiency
def fullPrimary(num,numholes,cda,fold,retro,primary,p=None,r=None,\
                foldrot=0.,retrorot=0.):
    if p is None:
        #Grab pitch and roll vectors
        p,r = alignHartmann(numholes,cda,fold,primary,foldrot=foldrot)
    #Trace out holes, one by one
    xm = []
    ym = []
    for i in range(numholes):
        #Trace up to Hartmann mask
        CDAbeam(num,.001*pi/180,p[i],r[i],cda)
        #Propagate the mirror clocking errors into the traces
        primMaskTrace(fold,primary,woltVignette=True,foldrot=foldrot)
        traceFromMask(i+1,numholes,cda,fold,retro,primary,\
                      foldrot=foldrot,retrorot=retrorot)
        #Evaluate mean spot position
        xm.append(mean(PT.x))
        ym.append(mean(PT.y))
    return array(xm),array(ym)
#Evaluate sensitivity of misalignment degree of freedom
def dofSensitivity(num,numholes,obj,dof,step):
    #Initial misalignment vectors
    cda = zeros(6)
    fold = zeros(6)
    retro = zeros(6)
    primary = zeros(6)
    misalign = [cda,fold,retro,primary]
    #Get nominal spot position
    x0,y0 = fullPrimary(num,numholes,*misalign)
    #Increase proper dof until spot shifts breach 10 micron requirement
    merit = 0.
    figure()
    while merit < .01:
        misalign[obj][dof] = misalign[obj][dof] + step
        try:
            x1,y1 = fullPrimary(num,numholes,*misalign)
        except:
            sys.stdout.write('Hartmann Throughput Cutoff at %7.4f' %\
                             misalign[obj][dof])
            break
        dx = x1-x0
        dx = dx - mean(dx)
        dy = y1-y0
        dy = dy - mean(dy)
        merit = max(sqrt(dx**2+dy**2))
        sys.stdout.write('DoF: %7.4f Merit : %0.4f\r' %\
                         (misalign[obj][dof],merit))
        plot(dx,dy,'.')
        draw()
        sys.stdout.flush()
    return
#Evaluate sensitivity of fold/retro rotation (clocking) degree of freedom
def rotSensitivity(num,numholes,step,foldrot=False,retrorot=False):
    #Initial misalignment vectors
    cda = zeros(6)
    fold = zeros(6)
    retro = zeros(6)
    primary = zeros(6)
    misalign = [cda,fold,retro,primary]
    #Get nominal spot position
    x0,y0 = fullPrimary(num,numholes,*misalign)
    #Increase proper dof until spot shifts breach 10 micron requirement
    merit = 0.
    figure()
    #Choose which rotation to step once, before the loop; zeroing the
    #unstepped rotation inside the loop must not flip the branch
    stepFold = foldrot is not False
    if stepFold:
        retrorot = 0.
    else:
        foldrot = 0.
    while merit < .01:
        if stepFold:
            foldrot = foldrot + step
        else:
            retrorot = retrorot + step
        try:
            x1,y1 = fullPrimary(num,numholes,*misalign,\
                                foldrot=foldrot,retrorot=retrorot)
        except:
            sys.stdout.write('Hartmann Throughput Cutoff at %7.4f' %\
                             (max([foldrot,retrorot])*180/pi))
            break
        dx = x1-x0
        dx = dx - mean(dx)
        dy = y1-y0
        dy = dy - mean(dy)
        merit = max(sqrt(dx**2+dy**2))
        sys.stdout.write('DoF: %7.4f Merit : %0.4f\r' %\
                         (max([foldrot,retrorot])*180/pi,merit))
        plot(dx,dy,'.')
        draw()
        sys.stdout.flush()
    return

# ---- File: dependencies/src/4Suite-XML-1.0.2/test/Xml/Xslt/Borrowed/dc_20000110.py (repo: aleasims/Peach, MIT) ----
#Example from David Carlisle <davidc@nag.co.uk> to John Robert Gardner <jrgardn@emory.edu> on 10 Jan 2000
from Xml.Xslt import test_harness
sheet_1 = """<xsl:stylesheet
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
version="1.0"
>
<xsl:output method="xml" indent="yes"/>
<xsl:template match="sample">
<verse>
<xsl:apply-templates select="verse[@id='rv1.84.10']/mantra"/>
</verse>
</xsl:template>
<xsl:template match="verse[@id='rv1.84.10']/mantra">
<xsl:copy-of select="."/>
<xsl:variable name="x" select="position()"/>
<xsl:copy-of select="../../verse[@id='rv1.16.1']/mantra[position()=$x]"/>
</xsl:template>
</xsl:stylesheet>"""
sheet_2 = """<xsl:stylesheet
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
version="1.0"
>
<xsl:output method="xml" indent="yes"/>
<xsl:template match="sample">
<verse>
<xsl:for-each select="verse/mantra">
<xsl:sort select="substring-after(@id,../@id)"/>
<xsl:sort select="../@id" order="descending"/>
<xsl:copy-of select="."/>
</xsl:for-each>
</verse>
</xsl:template>
</xsl:stylesheet>"""
xml_source="""<sample>
<verse meter="gaayatrii" id="rv1.16.1">
<mantra id="rv1.16.1a">
aa tvaa vahantu harayo
</mantra>
<mantra id="rv1.16.1b">
vRSaNaM somapiitaye
</mantra>
<mantra id="rv1.16.1c">
indra tvaa suuracakSasaH
</mantra>
</verse>
<verse meter="gaayatrii" id="rv1.84.10">
<mantra id="rv1.84.10a">
svaador itthaa viSuuvato
</mantra>
<mantra id="rv1.84.10b">
madhvaH pibanti gauryaH
</mantra>
<mantra id="rv1.84.10c">
yaa indreNa sayaavariir
</mantra>
</verse>
</sample>"""
expected_1 = """<?xml version='1.0' encoding='UTF-8'?>
<verse>
<mantra id='rv1.84.10a'>
svaador itthaa viSuuvato
</mantra>
<mantra id='rv1.16.1a'>
aa tvaa vahantu harayo
</mantra>
<mantra id='rv1.84.10b'>
madhvaH pibanti gauryaH
</mantra>
<mantra id='rv1.16.1b'>
vRSaNaM somapiitaye
</mantra>
<mantra id='rv1.84.10c'>
yaa indreNa sayaavariir
</mantra>
<mantra id='rv1.16.1c'>
indra tvaa suuracakSasaH
</mantra>
</verse>"""
expected_2 = """<?xml version='1.0' encoding='UTF-8'?>
<verse>
<mantra id='rv1.84.10a'>
svaador itthaa viSuuvato
</mantra>
<mantra id='rv1.16.1a'>
aa tvaa vahantu harayo
</mantra>
<mantra id='rv1.84.10b'>
madhvaH pibanti gauryaH
</mantra>
<mantra id='rv1.16.1b'>
vRSaNaM somapiitaye
</mantra>
<mantra id='rv1.84.10c'>
yaa indreNa sayaavariir
</mantra>
<mantra id='rv1.16.1c'>
indra tvaa suuracakSasaH
</mantra>
</verse>"""
def Test(tester):
    source = test_harness.FileInfo(string=xml_source)
    sheet = test_harness.FileInfo(string=sheet_1)
    test_harness.XsltTest(tester, source, [sheet], expected_1,
                          title='Using position()')

    source = test_harness.FileInfo(string=xml_source)
    sheet = test_harness.FileInfo(string=sheet_2)
    test_harness.XsltTest(tester, source, [sheet], expected_2,
                          title='Using xsl:for-each and xsl:sort')
    return

# ---- File: moyu_engine/config/system/save_system.py (repo: MoYuStudio/MYSG01, Apache-2.0) ----
import pickle
import moyu_engine.config.data.constants as C

class SavaSystem:

    def save_tilemap():
        f = open('moyu_engine/config/data/game_save', 'wb')
        save_data = {'window': C.window}
        pickle.dump(save_data, f)
        f.close()

    def read_tilemap():
        f = open('moyu_engine/config/data/game_save', 'rb')
        read_data = pickle.load(f)
        C.window = read_data['window']
        f.close()
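The save/read pair above is a plain pickle round trip. A minimal, self-contained sketch of the same pattern, using a temporary file in place of the hard-coded `moyu_engine/config/data/game_save` path and a stand-in dict for `C.window` (both are assumptions for illustration):

```python
import os
import pickle
import tempfile

# Stand-ins for the engine's real path and window object
save_path = os.path.join(tempfile.mkdtemp(), "game_save")
window_state = {"width": 640, "height": 480}

# Save: serialize a dict wrapping the state, as save_tilemap() does
with open(save_path, "wb") as f:
    pickle.dump({"window": window_state}, f)

# Read: deserialize and pull the 'window' entry back out, as read_tilemap() does
with open(save_path, "rb") as f:
    restored = pickle.load(f)["window"]

print(restored == window_state)  # True
```

Using `with` closes the file even if pickling raises, which the explicit `f.close()` calls above do not guarantee.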

# ---- File: nvflare/apis/server_engine_spec.py (repo: ZiyueXu77/NVFlare, Apache-2.0) ----
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from abc import ABC, abstractmethod
from typing import Dict, List, Optional, Tuple
from nvflare.apis.shareable import Shareable
from nvflare.widgets.widget import Widget
from .client import Client
from .fl_context import FLContext
from .fl_snapshot import RunSnapshot
from .workspace import Workspace
class ServerEngineSpec(ABC):
    @abstractmethod
    def fire_event(self, event_type: str, fl_ctx: FLContext):
        pass

    @abstractmethod
    def get_clients(self) -> List[Client]:
        pass

    @abstractmethod
    def sync_clients_from_main_process(self):
        """To fetch the participating clients from the main parent process

        Returns: clients
        """
        pass

    @abstractmethod
    def validate_clients(self, client_names: List[str]) -> Tuple[List[Client], List[str]]:
        """Validate specified client names.

        Args:
            client_names: list of names to be validated

        Returns: a list of validated clients and a list of invalid client names
        """
        pass

    @abstractmethod
    def new_context(self) -> FLContext:
        # the engine must use FLContextManager to create a new context!
        pass

    @abstractmethod
    def get_workspace(self) -> Workspace:
        pass

    @abstractmethod
    def get_component(self, component_id: str) -> object:
        pass

    @abstractmethod
    def register_aux_message_handler(self, topic: str, message_handle_func):
        """Register aux message handling function with specified topics.

        Exception is raised when:
            a handler is already registered for the topic;
            bad topic - must be a non-empty string
            bad message_handle_func - must be callable

        Implementation Note:
            This method should simply call the ServerAuxRunner's register_aux_message_handler method.

        Args:
            topic: the topic to be handled by the func
            message_handle_func: the func to handle the message. Must follow aux_message_handle_func_signature.
        """
        pass

    @abstractmethod
    def send_aux_request(self, targets: [], topic: str, request: Shareable, timeout: float, fl_ctx: FLContext) -> dict:
        """Send a request to specified clients via the aux channel.

        Implementation: simply calls the ServerAuxRunner's send_aux_request method.

        Args:
            targets: target clients. None or empty list means all clients
            topic: topic of the request
            request: request to be sent
            timeout: number of secs to wait for replies. 0 means fire-and-forget.
            fl_ctx: FL context

        Returns: a dict of replies (client name => reply Shareable)
        """
        pass

    def fire_and_forget_aux_request(self, targets: [], topic: str, request: Shareable, fl_ctx: FLContext) -> dict:
        return self.send_aux_request(targets, topic, request, 0.0, fl_ctx)

    @abstractmethod
    def get_widget(self, widget_id: str) -> Widget:
        """Get the widget with the specified ID.

        Args:
            widget_id: ID of the widget

        Returns: the widget or None if not found
        """
        pass

    @abstractmethod
    def persist_components(self, fl_ctx: FLContext, completed: bool):
        """To persist the FL running components

        Args:
            fl_ctx: FLContext
            completed: flag to indicate whether the run is complete

        Returns:
        """
        pass

    @abstractmethod
    def restore_components(self, snapshot: RunSnapshot, fl_ctx: FLContext):
        """To restore the FL components from the saved snapshot

        Args:
            snapshot: RunSnapshot
            fl_ctx: FLContext

        Returns:
        """
        pass

    @abstractmethod
    def start_client_job(self, run_number, client_sites):
        """To send the start client run commands to the clients

        Args:
            client_sites: client sites
            run_number: run_number

        Returns:
        """
        pass

    @abstractmethod
    def check_client_resources(self, resource_reqs: Dict[str, dict]) -> Dict[str, Tuple[bool, Optional[str]]]:
        """Sends the check_client_resources requests to the clients.

        Args:
            resource_reqs: A dict of {client_name: resource requirements dict}

        Returns:
            A dict of {client_name: client_check_result} where client_check_result
            is a tuple of {client check OK, resource reserve token if any}
        """
        pass

    @abstractmethod
    def cancel_client_resources(
        self, resource_check_results: Dict[str, Tuple[bool, str]], resource_reqs: Dict[str, dict]
    ):
        """Cancels the request resources for the job.

        Args:
            resource_check_results: A dict of {client_name: client_check_result}
                where client_check_result is a tuple of {client check OK, resource reserve token if any}
            resource_reqs: A dict of {client_name: resource requirements dict}
        """
        pass

    @abstractmethod
    def get_client_name_from_token(self, token: str) -> str:
        """Gets client name from a client login token."""
        pass

# ---- File: facial recognition using mtcnn/image_count.py (repo: yoga-suhas-km/facial-recognition, MIT) ----
import os
def count_images(count_images_in_folder):
    number_of_images = []
    path, dirs, files = next(os.walk(count_images_in_folder))
    num_classes = len(dirs)

    for i in files:
        if i.endswith('.jpg'):
            number_of_images.append(1)

    for i in dirs:
        sub_path, sub_dirs, sub_files = next(os.walk(os.path.join(count_images_in_folder, i)))
        for j in sub_files:
            if j.endswith('.jpg'):
                number_of_images.append(1)

    file_count = len(number_of_images)
    return file_count
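A quick, self-contained check of the counting logic: the function is copied inline and exercised against a temporary directory tree, so names like `class_a` are illustrative only.

```python
import os
import tempfile

def count_images(count_images_in_folder):
    # Same logic as the module's count_images, copied so this sketch runs alone
    number_of_images = []
    path, dirs, files = next(os.walk(count_images_in_folder))
    for i in files:
        if i.endswith('.jpg'):
            number_of_images.append(1)
    for i in dirs:
        sub_path, sub_dirs, sub_files = next(os.walk(os.path.join(count_images_in_folder, i)))
        for j in sub_files:
            if j.endswith('.jpg'):
                number_of_images.append(1)
    return len(number_of_images)

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "class_a"))
open(os.path.join(root, "a.jpg"), "w").close()
open(os.path.join(root, "class_a", "b.jpg"), "w").close()
open(os.path.join(root, "class_a", "notes.txt"), "w").close()  # ignored: not .jpg

print(count_images(root))  # 2
```

Note the walk only descends one level; jpgs nested deeper than the first subdirectory level are not counted.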

# ---- File: pineboolib/kugar/mreportdetail.py (repo: Miguel-J/pineboo-buscar, MIT) ----
from pineboolib import decorators
from pineboolib.flcontrols import ProjectClass
from pineboolib.kugar.mreportsection import MReportSection


class MReportDetail(ProjectClass, MReportSection):

    @decorators.BetaImplementation
    def __init__(self, *args):
        super(MReportDetail, self).__init__(*args)

    @decorators.NotImplementedWarn
    # def operator=(self, mrd): #FIXME
    def operator(self, mrd):
        return self

# ---- File: setup.py (repo: anshulp2912/cheapBuy, MIT) ----
from setuptools import setup
setup(name='cheapBuy',
      version='2.0',
      description='cheapBuy Extension provides you ease to buy any product through your favourite website like Amazon, Walmart, Ebay, Bjs, Costco, etc, by providing prices of the same product from all different websites to extension.',
      author='Anshul, Bhavya, Darshan, Pragna, Rohan',
      author_email='anshulp2912@gmail.com',
      url='https://github.com/anshulp2912/cheapBuy.git',
      packages=['cheapBuy'],
      long_description="""\
Hands on for the standard github repo files.

.gitignore
.travis.yml
CITATION.md : fill on once you've got your ZENODO DOI going
CODE-OF-CONDUCT.md
CONTRIBUTING.md
LICENSE.md
README.md
setup.py
requirements.txt
data/
    README.md
test/
    README.md
code/
    __init__.py
""",
      classifiers=[
          "License :: MIT License",
          "Programming Language :: Python",
          "Development Status :: Initial",
          "Intended Audience :: Developers",
          "Topic :: Software Engineering",
      ],
      keywords='python requirements license gitignore',
      license='MIT',
      install_requires=[
          'BeautifulSoup',
          'pytest',
          'Flask',
          'selenium',
          'streamlit',
          'webdriver_manager',
          'pyshorteners',
          'link-button',
      ],
      )