hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
44d81f4a562fc325b6fdaf33b5effc0dd60d28d3 | 26,298 | py | Python | hgapp/powers/createPowerFormUtilities.py | shadytradesman/The-Contract-Website | d8b353064f91c53ebab951dec784a0a36caba260 | [
"Apache-2.0"
] | 6 | 2020-10-03T12:15:05.000Z | 2021-10-15T04:43:36.000Z | hgapp/powers/createPowerFormUtilities.py | shadytradesman/The-Contract-Website | d8b353064f91c53ebab951dec784a0a36caba260 | [
"Apache-2.0"
] | 99 | 2020-06-04T17:43:56.000Z | 2022-03-12T01:07:20.000Z | hgapp/powers/createPowerFormUtilities.py | shadytradesman/The-Contract-Website | d8b353064f91c53ebab951dec784a0a36caba260 | [
"Apache-2.0"
] | 9 | 2020-06-06T16:39:09.000Z | 2020-10-02T16:24:17.000Z | from django.forms import formset_factory
from django.utils import timezone
from django.shortcuts import get_object_or_404
import bleach
import json
from .forms import CreatePowerForm, make_enhancement_form, make_drawback_form, make_parameter_form, \
SystemFieldRollForm, SystemFieldTextForm, MIND_, BODY_, PARRY_
from .models import Enhancement_Instance, Drawback_Instance, Power, DICE_SYSTEM, Enhancement, Drawback, \
Power_Param, SystemFieldText, SystemFieldRoll, SystemFieldTextInstance, SystemFieldRollInstance, \
Parameter_Value, Base_Power_System, Power_Full, CREATION_REASON, PowerTutorial
from characters.models import Roll, Attribute, Ability, NO_PARRY_INFO, REACTION, THROWN
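# Helper utilities for building the form contexts used when creating and editing
# Powers, and for persisting the resulting models (enhancements, drawbacks,
# parameter values and system field instances).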
def get_create_power_context_from_base(base_power, character=None):
system = base_power.get_system()
primary_form = CreatePowerForm(base_power, initial={'system': system.system_text})
enhancement_forms = []
for enhancement in Enhancement.objects.filter(pk__in=base_power.enhancements.all()):
enhancement_forms.append(formset_factory(make_enhancement_form(enhancement), extra = 1)())
drawback_forms = []
for drawback in Drawback.objects.filter(pk__in=base_power.drawbacks.all()):
drawback_forms.append(formset_factory(make_drawback_form(drawback), extra = 1)())
parameter_forms = []
for parameter in Power_Param.objects.filter(relevant_base_power=base_power).all():
parameter_forms.append(formset_factory(make_parameter_form(parameter))())
roll_fields_formset = _get_system_roll_field_formset(system)
text_fields_formset = _get_system_text_field_formset(system)
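    # Re-fetch the Base_Power_System record for this base power and dice system;
    # this replaces the generic system object obtained from get_system() above.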
system = Base_Power_System.objects.filter(dice_system=DICE_SYSTEM[1][0]).get(base_power=base_power.slug)
requirements = _get_modifier_requirements(Enhancement.objects.filter(pk__in=base_power.enhancements.all()),
Drawback.objects.filter(pk__in=base_power.drawbacks.all()))
context = {
'base_power': base_power,
'power_system': system,
'form': primary_form,
'parameters': parameter_forms,
'enhancements': enhancement_forms,
'drawbacks': drawback_forms,
'requirements_json': json.dumps(requirements),
'character': character,
'roll_fields': roll_fields_formset,
'text_fields': text_fields_formset,
}
if character:
unspent_rewards = []
for reward in character.unspent_rewards().all():
unspent_rewards.append("{} from {}".format(reward.type_text(), reward.reason_text()))
context["unspent_rewards_json"] = json.dumps(unspent_rewards)
spent_rewards = []
context["spent_rewards_json"] = json.dumps(spent_rewards)
context = _add_tutorial_to_context(context)
return context
def get_create_power_context_from_power(power, new=True):
initial = {'system': power.get_system(),
'description': power.description,
'flavor': power.flavor_text,
'activation_style': power.activation_style,
'power_name': power.name}
if power.parent_power:
initial['tags'] = power.parent_power.tags.all()
initial['example_description'] = power.parent_power.example_description
system = Base_Power_System.objects.filter(dice_system=DICE_SYSTEM[1][0]).get(base_power=power.base.slug)
text_fields_formset = _get_text_field_formsets_for_edit(power, system)
roll_fields_formset = _get_roll_field_formsets_for_edit(power, system)
primary_form = CreatePowerForm(power.base,
initial=initial)
enhancement_forms = _get_enhancement_formsets_from_power(power)
drawback_forms = _get_drawback_formsets_from_power(power)
parameter_forms = []
for parameter_value in Parameter_Value.objects.filter(relevant_power=power).all():
init = [{'level_picker': parameter_value.value}]
parameter_forms.append(formset_factory(make_parameter_form(parameter_value.relevant_power_param), extra = 0)(initial = init))
requirements = _get_modifier_requirements(Enhancement.objects.filter(pk__in=power.base.enhancements.all()),
Drawback.objects.filter(pk__in=power.base.drawbacks.all()))
context = {
'base_power': power.base,
'power_system': system,
'form': primary_form,
'parameters': parameter_forms,
'enhancements': enhancement_forms,
'drawbacks': drawback_forms,
'requirements_json': json.dumps(requirements),
'roll_fields': roll_fields_formset,
'text_fields': text_fields_formset,
}
if power.parent_power is not None:
if power.parent_power.character is not None and new:
context["character"] = power.parent_power.character
unspent_rewards = []
for reward in power.parent_power.character.unspent_rewards().all():
unspent_rewards.append("{} from {}".format(reward.type_text(), reward.reason_text()))
context["unspent_rewards_json"] = json.dumps(unspent_rewards)
spent_rewards = []
for reward in power.parent_power.reward_list():
spent_rewards.append("{} from {}".format(reward.type_text(), reward.reason_text()))
context["spent_rewards_json"] = json.dumps(spent_rewards)
context = _add_tutorial_to_context(context)
return context
def _get_text_field_formsets_for_edit(power, system):
TextFieldsFormset = formset_factory(SystemFieldTextForm, extra=0)
text_system_fields = system.systemfieldtext_set.order_by("id").all()
instances = power.systemfieldtextinstance_set.all()
value_by_field_id = {n.relevant_field.id: n.value for n in instances}
return TextFieldsFormset(
initial=[{'system_field_id': x.id,
'system_field': x,
'field_text': value_by_field_id[x.id] if x.id in value_by_field_id else ""
} for x in text_system_fields],
prefix="system_text_fields")
def _get_roll_field_formsets_for_edit(power, system):
RollFieldsFormset = formset_factory(SystemFieldRollForm, extra=0)
roll_system_fields = system.systemfieldroll_set.order_by("id").all()
instances = power.systemfieldrollinstance_set.all()
value_by_field_id = {n.relevant_field.id: n.roll for n in instances}
return RollFieldsFormset(
initial=[{'system_field_id': x.id,
'system_field': x,
'ability_roll': _get_roll_initial_ability(value_by_field_id[x.id]) if x.id in value_by_field_id else None,
'attribute_roll': _get_roll_initial_attribute(value_by_field_id[x.id]) if x.id in value_by_field_id else None,
}
for x in roll_system_fields],
prefix="system_roll_fields")
def _get_roll_initial_ability(roll):
if roll.ability:
return roll.ability.id
else:
return None
def _get_roll_initial_attribute(roll):
if roll.attribute:
return roll.attribute.id
elif roll.is_mind:
return MIND_
elif roll.is_body:
return BODY_
elif roll.parry_type != NO_PARRY_INFO:
return PARRY_
else:
raise ValueError("Unknown roll attribute")
def get_edit_power_context_from_power(og_power):
context = get_create_power_context_from_power(og_power)
if og_power.parent_power is not None and og_power.parent_power.owner is not None:
context["owner"] = og_power.parent_power.owner
context["og_power"] = og_power
return context
def create_power_for_new_edit(base_power, request, power_full):
power_form = CreatePowerForm(base_power, request.POST)
if power_form.is_valid():
old_power = power_full.latest_revision()
if request.user.is_superuser:
power_full.tags.set(power_form.cleaned_data["tags"])
power_full.example_description = power_form.cleaned_data["example_description"]
power_full.save()
new_power = _create_power_from_post_and_base(base_power, request, power_full)
new_power.creation_reason = _get_power_creation_reason(new_power, old_power)
new_power.creation_reason_expanded_text = _get_power_creation_reason_expanded_text(new_power, old_power)
new_power.save()
if hasattr(power_full, "character") and power_full.character:
power_full.character.reset_attribute_bonuses()
return new_power
def create_new_power_and_parent(base_power, request, character=None):
form = CreatePowerForm(base_power, request.POST)
if form.is_valid():
power_full = _create_new_full_power(power_form=form, base=base_power)
if request.user.id:
power_full.owner = request.user
if character:
power_full.character = character
power_full.save()
if request.user.is_superuser:
power_full.tags.set(form.cleaned_data["tags"])
power_full.example_description = form.cleaned_data["example_description"]
power_full.save()
new_power = _create_power_from_post_and_base(base_power, request, power_full)
new_power.creation_reason = CREATION_REASON[0][0]
new_power.creation_reason_expanded_text = "Initial power creation"
new_power.save()
if character:
character.reset_attribute_bonuses()
return new_power
else:
print(form.errors)
return None
def refund_or_assign_rewards(new_power, old_power=None):
og_point_value = 0
if old_power:
og_point_value=old_power.get_point_value()
delta = new_power.get_point_value() - og_point_value
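    # delta > 0 means the power became more expensive, so unspent rewards are assigned;
    # delta < 0 means it became cheaper, so previously spent rewards are refunded.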
if delta == 0:
return
if delta > 0:
if new_power.parent_power.character is not None:
unspent_gifts = new_power.parent_power.character.unspent_rewards()
for a in range(delta):
if a == len(unspent_gifts):
break
unspent_gifts[a].assign_to_power(new_power)
if delta < 0:
if new_power.parent_power.character is not None and old_power:
spent_gifts = old_power.parent_power.reward_list()
for a in range(delta*-1):
if a == len(spent_gifts):
break
spent_gifts[a].refund_keeping_character_assignment()
def _get_enhancement_formsets_from_power(power):
enhancement_forms = []
enhancement_instances = Enhancement_Instance.objects.filter(relevant_power=power).all()
for base_enhancement in Enhancement.objects.filter(pk__in=power.base.enhancements.all()):
instances_of_this_enhancement = set(
x for x in enhancement_instances if (x.relevant_enhancement == base_enhancement))
init = []
num_extra = 0
for enhancement_instance in instances_of_this_enhancement:
init.append({
'is_selected': True,
'detail_text': enhancement_instance.detail,
})
if base_enhancement.multiplicity_allowed or not instances_of_this_enhancement:
num_extra = 1
new_form = formset_factory(make_enhancement_form(base_enhancement), extra=num_extra, max_num=4)(initial=init)
enhancement_forms.append(new_form)
return enhancement_forms
def _get_drawback_formsets_from_power(power):
drawback_forms = []
drawback_instances = Drawback_Instance.objects.filter(relevant_power=power).all()
for base_drawback in Drawback.objects.filter(pk__in=power.base.drawbacks.all()):
instances_of_this_drawback = set(
x for x in drawback_instances if (x.relevant_drawback == base_drawback))
init = []
num_extra = 0
for drawback_instance in instances_of_this_drawback:
init.append({
'is_selected': True,
'detail_text': drawback_instance.detail,
})
if base_drawback.multiplicity_allowed or not instances_of_this_drawback:
num_extra = 1
new_form = formset_factory(make_drawback_form(base_drawback), extra=num_extra, max_num=4)(initial=init)
drawback_forms.append(new_form)
return drawback_forms
def _add_tutorial_to_context(context):
tutorial = get_object_or_404(PowerTutorial)
context['modal_header'] = tutorial.modal_edit_header
context['modal_text'] = tutorial.modal_edit
context['modal_art'] = 'overrides/art/ocean-walking-copy.jpg'
return context
def _get_modifier_requirements(enhancements, drawbacks):
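    # Map each modifier's form name to the form names of the modifiers it requires;
    # the caller serializes this dict to JSON for use in the template.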
requirements = {}
for enhancement in enhancements:
if enhancement.required_Enhancements:
required = []
for req_enhancement in enhancement.required_Enhancements.all():
required.append( req_enhancement.form_name() )
requirements[enhancement.form_name()] = required
for drawback in drawbacks:
if drawback.required_drawbacks:
required = []
for req_drawback in drawback.required_drawbacks.all():
required.append(req_drawback.form_name())
requirements[drawback.form_name()] = required
return requirements
def _get_enhancement_instances(post_data, enhancements, new_power):
instances = []
for enhancement in enhancements:
if enhancement.slug + "-e-is_selected" in post_data:
detail_texts = []
if enhancement.slug + "-e-detail_text" in post_data:
detail_texts = post_data.getlist(enhancement.slug + "-e-detail_text")
for on in post_data.getlist(enhancement.slug + "-e-is_selected"):
if detail_texts:
new_detail_text = bleach.clean(detail_texts.pop(0))
else:
new_detail_text = ""
instances.append(Enhancement_Instance(relevant_enhancement=enhancement,
relevant_power=new_power,
detail=new_detail_text))
return instances
def _get_drawback_instances(post_data, drawbacks, new_power):
instances = []
for drawback in drawbacks:
if drawback.slug + "-d-is_selected" in post_data:
detail_texts = []
if drawback.slug + "-d-detail_text" in post_data:
detail_texts = post_data.getlist(drawback.slug + "-d-detail_text")
for on in post_data.getlist(drawback.slug + "-d-is_selected"):
if detail_texts:
new_detail_text = bleach.clean(detail_texts.pop(0))
else:
new_detail_text = ""
instances.append(Drawback_Instance(relevant_drawback=drawback,
relevant_power=new_power,
detail=new_detail_text))
return instances
def _create_new_full_power(power_form, base):
return Power_Full(name=power_form.cleaned_data['power_name'],
dice_system=DICE_SYSTEM[1][0],
base=base,
pub_date=timezone.now())
def _get_power_from_form(power_form, base):
return Power(name=power_form.cleaned_data['power_name'],
flavor_text=power_form.cleaned_data['flavor'],
description=power_form.cleaned_data['description'],
system=power_form.cleaned_data['system'],
activation_style=power_form.cleaned_data['activation_style'],
base=base,
dice_system=DICE_SYSTEM[1][0],
pub_date=timezone.now())
def _get_roll_from_form_and_system(form, system_field):
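    # Translate the submitted attribute/ability choices into a Roll instance;
    # BODY_, MIND_ and PARRY_ are sentinel choices that are handled specially.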
attr = form.cleaned_data["attribute_roll"]
difficulty = 6
if system_field.difficulty:
difficulty = system_field.difficulty
if attr == BODY_[0] or attr == MIND_[0] or attr == PARRY_[0]:
if attr == BODY_[0]:
return Roll.get_body_roll(difficulty=difficulty)
elif attr == MIND_[0]:
return Roll.get_mind_roll(difficulty=difficulty)
elif attr == PARRY_[0]:
return Roll.get_roll(difficulty=difficulty, parry_type=system_field.parry_type, speed=REACTION)
else:
raise ValueError("Unexpected attr")
else:
attribute = get_object_or_404(Attribute, id=attr)
ability = get_object_or_404(Ability, id=form.cleaned_data["ability_roll"])
return Roll.get_roll(attribute = attribute,
ability = ability,
difficulty = difficulty,
speed=system_field.speed)
def _create_power_from_post_and_base(base_power, request, power_full):
form = CreatePowerForm(base_power, request.POST)
if form.is_valid():
system = Base_Power_System.objects.filter(dice_system=DICE_SYSTEM[1][0]).get(base_power=base_power.slug)
power = _get_power_from_form(power_form=form, base=base_power)
if request.user.id:
power.created_by = request.user
power.parent_power = power_full
power.save()
enhancement_instances = _get_enhancement_instances(post_data=request.POST,
enhancements=Enhancement.objects.filter(
pk__in=base_power.enhancements.all()),
new_power=power)
for enhancement_instance in enhancement_instances:
enhancement_instance.save()
drawback_instances = _get_drawback_instances(post_data=request.POST,
drawbacks=Drawback.objects.filter(
pk__in=base_power.drawbacks.all()),
new_power=power)
for drawback_instance in drawback_instances:
drawback_instance.save()
for power_param in Power_Param.objects.filter(relevant_base_power=base_power):
param_val = Parameter_Value(relevant_power=power,
relevant_power_param=power_param,
value=request.POST[power_param.relevant_parameter.slug])
param_val.save()
text_field_formset = _get_system_text_field_formset(system, request.POST)
if text_field_formset.is_valid():
for form in text_field_formset:
system_field = get_object_or_404(SystemFieldText, id=form.cleaned_data["system_field_id"])
field_instance = SystemFieldTextInstance(relevant_power=power,
relevant_field=system_field,
value=form.cleaned_data["field_text"])
field_instance.save()
else:
raise ValueError("Invalid text field formset")
roll_field_formset = _get_system_roll_field_formset(system, request.POST)
if roll_field_formset.is_valid():
for form in roll_field_formset:
system_field = get_object_or_404(SystemFieldRoll, id=form.cleaned_data["system_field_id"])
roll = _get_roll_from_form_and_system(form, system_field)
field_instance = SystemFieldRollInstance(relevant_power=power,
relevant_field=system_field,
roll=roll)
field_instance.save()
else:
raise ValueError("Invalid roll field formset")
return power
else:
raise ValueError("Invalid Power Form")
def _get_system_text_field_formset(system, POST = None):
TextFieldsFormset = formset_factory(SystemFieldTextForm, extra=0)
text_system_fields = system.systemfieldtext_set.order_by("id").all()
text_fields_formset = TextFieldsFormset(
POST,
initial=[{'system_field_id': x.id, 'system_field': x} for x in text_system_fields],
prefix="system_text_fields")
return text_fields_formset
def _get_system_roll_field_formset(system, POST=None):
RollFieldsFormset = formset_factory(SystemFieldRollForm, extra=0)
roll_system_fields = system.systemfieldroll_set.order_by("id").all()
roll_fields_formset = RollFieldsFormset(
POST,
initial=[{'system_field_id': x.id, 'system_field': x} for x in roll_system_fields],
prefix="system_roll_fields")
return roll_fields_formset
def _get_power_creation_reason(new_power, old_power):
if old_power is None:
# new
return CREATION_REASON[0][0]
new_points = new_power.get_point_value()
old_points = old_power.get_point_value()
if new_points > old_points:
# improvement
return CREATION_REASON[1][0]
if new_points < old_points\
or _get_param_difference_text(new_power, old_power)\
or _get_added_enhancements(new_power, old_power)\
or _get_removed_enhancements(new_power, old_power)\
or _get_added_drawbacks(new_power, old_power)\
or _get_removed_drawbacks(new_power, old_power):
# revision
return CREATION_REASON[2][0]
# adjustment
return CREATION_REASON[3][0]
def _get_power_creation_reason_expanded_text(new_power, old_power):
edit_text = ""
if new_power.creation_reason == CREATION_REASON[3][0]:
edit_text = "Text field change"
if new_power.creation_reason == CREATION_REASON[1][0] or new_power.creation_reason == CREATION_REASON[2][0]:
# improvement or revision
added_enhancements = _get_added_enhancements(new_power, old_power)
if len(added_enhancements) > 0:
edit_text = edit_text + "Added Enhancement"
if len(added_enhancements) > 1:
edit_text = edit_text + "s"
edit_text = edit_text + ": "
for enhancement in added_enhancements:
edit_text = edit_text + enhancement.relevant_enhancement.name + ", "
removed_enhancements = _get_removed_enhancements(new_power, old_power)
if len(removed_enhancements) > 0:
edit_text = edit_text + "Removed Enhancement"
if len(removed_enhancements) > 1:
edit_text = edit_text + "s"
edit_text = edit_text + ": "
for enhancement in removed_enhancements:
edit_text = edit_text + enhancement.relevant_enhancement.name + ", "
added_drawbacks = _get_added_drawbacks(new_power, old_power)
if len(added_drawbacks) > 0:
edit_text = edit_text + "Added Drawback"
if len(added_drawbacks) > 1:
edit_text = edit_text + "s"
edit_text = edit_text + ": "
for drawback in added_drawbacks:
edit_text = edit_text + drawback.relevant_drawback.name + ", "
removed_drawbacks = _get_removed_drawbacks(new_power, old_power)
if len(removed_drawbacks) > 0:
edit_text = edit_text + "Removed Drawback"
if len(removed_drawbacks) > 1:
edit_text = edit_text + "s"
edit_text = edit_text + ": "
for drawback in removed_drawbacks:
edit_text = edit_text + drawback.relevant_drawback.name + ", "
edit_text = edit_text + _get_param_difference_text(new_power, old_power)
    # stopgap bugfix measure until we fix the _get_added_enhancements method by properly using form fields.
if len(edit_text) < 3:
edit_text = "Power Adjustment"
if edit_text[-2] == ',':
edit_text = edit_text[:-2]
return edit_text[:1500]
def _get_added_enhancements(new_power, old_power):
added_enhancements = []
for new_enhancement in new_power.enhancement_instance_set.all():
in_old = False
for old_enhancement in old_power.enhancement_instance_set.all():
if old_enhancement.relevant_enhancement.slug == new_enhancement.relevant_enhancement.slug:
in_old = True
if not in_old:
added_enhancements.append(new_enhancement)
return added_enhancements
def _get_removed_enhancements(new_power, old_power):
removed_enhancements = []
for old_enhancement in old_power.enhancement_instance_set.all():
in_new = False
for new_enhancement in new_power.enhancement_instance_set.all():
if old_enhancement.relevant_enhancement.slug == new_enhancement.relevant_enhancement.slug:
in_new = True
if not in_new:
removed_enhancements.append(old_enhancement)
return removed_enhancements
def _get_added_drawbacks(new_power, old_power):
added_drawbacks = []
for new_drawback in new_power.drawback_instance_set.all():
in_old = False
for old_drawback in old_power.drawback_instance_set.all():
if old_drawback.relevant_drawback.slug == new_drawback.relevant_drawback.slug:
in_old = True
if not in_old:
added_drawbacks.append(new_drawback)
return added_drawbacks
def _get_removed_drawbacks(new_power, old_power):
removed_drawbacks = []
for old_drawback in old_power.drawback_instance_set.all():
in_new = False
for new_drawback in new_power.drawback_instance_set.all():
if old_drawback.relevant_drawback.slug == new_drawback.relevant_drawback.slug:
in_new = True
if not in_new:
removed_drawbacks.append(old_drawback)
return removed_drawbacks
def _get_param_difference_text(new_power, old_power):
param_text = ""
param_counter = 0
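    # Parameter values are compared positionally after ordering both querysets by
    # relevant_power_param_id; an IndexError indicates the base parameters differ.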
for new_param_value in new_power.parameter_value_set.order_by('relevant_power_param_id').all():
try:
old_param_value = old_power.parameter_value_set.order_by('relevant_power_param_id').all()[param_counter]
if old_param_value.value != new_param_value.value:
            if old_param_value.value != new_param_value.value:
                param_text = param_text + "Parameter {} changed from {} to {}. ".format(
                    new_param_value.relevant_power_param.relevant_parameter.name, old_param_value.value, new_param_value.value)
        except IndexError:
return "Base Parameters Changed. "
param_counter = param_counter + 1
return param_text | 46.299296 | 154 | 0.664994 | 3,118 | 26,298 | 5.230597 | 0.069275 | 0.025017 | 0.013489 | 0.019621 | 0.626832 | 0.563309 | 0.494267 | 0.399718 | 0.343491 | 0.262064 | 0 | 0.00447 | 0.251426 | 26,298 | 568 | 155 | 46.299296 | 0.823987 | 0.006084 | 0 | 0.351297 | 0 | 0 | 0.053651 | 0.003138 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057884 | false | 0 | 0.015968 | 0.003992 | 0.155689 | 0.001996 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44d8afe91e73eaf3251897e6e3f5a49e57dc20e9 | 1,877 | py | Python | cargan/loss/pitch.py | mdc202002/cargan | 5bfb44a1d8c2de8126e8053bed6078ad2e20819c | [
"MIT"
] | 72 | 2021-10-20T01:17:54.000Z | 2022-02-22T07:40:35.000Z | cargan/loss/pitch.py | mdc202002/cargan | 5bfb44a1d8c2de8126e8053bed6078ad2e20819c | [
"MIT"
] | 7 | 2021-10-21T21:44:00.000Z | 2022-03-17T18:24:42.000Z | cargan/loss/pitch.py | mdc202002/cargan | 5bfb44a1d8c2de8126e8053bed6078ad2e20819c | [
"MIT"
] | 16 | 2021-10-20T02:07:46.000Z | 2022-03-16T08:18:37.000Z | import torch
import torchcrepe
###############################################################################
# CREPE perceptual loss
###############################################################################
class CREPEPerceptualLoss(torch.nn.Module):
def __init__(self):
super().__init__()
# Register model
self.add_module('model', torchcrepe.Crepe())
# Don't update model weights
self.requires_grad_(False)
def forward(self, x, y):
# Get feature maps
x_maps = self.activations(x)
y_maps = self.activations(y)
# Compute distance
loss = 0.
for x_map, y_map in zip(x_maps, y_maps):
loss += torch.nn.functional.l1_loss(x_map, y_map)
return loss
def activations(self, x):
activations = []
# shape=(batch, 1, 1024, 1)
x = x[:, None, :, None]
# Forward pass through model and save activations
x = self.model.layer(x, self.model.conv1, self.model.conv1_BN, (0, 0, 254, 254))
activations.append(x)
x = self.model.layer(x, self.model.conv2, self.model.conv2_BN)
activations.append(x)
x = self.model.layer(x, self.model.conv3, self.model.conv3_BN)
activations.append(x)
x = self.model.layer(x, self.model.conv4, self.model.conv4_BN)
activations.append(x)
x = self.model.layer(x, self.model.conv5, self.model.conv5_BN)
activations.append(x)
x = self.model.layer(x, self.model.conv6, self.model.conv6_BN)
activations.append(x)
# shape=(batch, self.in_features)
x = x.permute(0, 2, 1, 3).reshape(-1, self.model.in_features)
# Compute unnormalized probability distribution
x = self.model.classifier(x)
activations.append(x)
return activations
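# Illustrative usage (assumed, not part of the original module):
#   loss_fn = CREPEPerceptualLoss()
#   loss = loss_fn(x, y)  # x, y are batches of 1024-sample audio frames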
| 30.770492 | 88 | 0.550879 | 226 | 1,877 | 4.451327 | 0.30531 | 0.178926 | 0.129225 | 0.089463 | 0.246521 | 0.246521 | 0.246521 | 0.22167 | 0.22167 | 0.22167 | 0 | 0.02369 | 0.257858 | 1,877 | 60 | 89 | 31.283333 | 0.698492 | 0.132659 | 0 | 0.212121 | 0 | 0 | 0.003425 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.060606 | 0 | 0.242424 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44dbb055e9de0b9ac2e1f37ae940e8d05cde499b | 6,230 | py | Python | XML_to_Semark/xml_semark.py | prashankkadam/Maer_1 | e201866429a1231df7f439797ef100f9e4e6da37 | [
"MIT"
] | null | null | null | XML_to_Semark/xml_semark.py | prashankkadam/Maer_1 | e201866429a1231df7f439797ef100f9e4e6da37 | [
"MIT"
] | null | null | null | XML_to_Semark/xml_semark.py | prashankkadam/Maer_1 | e201866429a1231df7f439797ef100f9e4e6da37 | [
"MIT"
] | null | null | null | # -*- coding:/ utf-8 -*-
"""
Created on Tue Jul 23 12:07:20 2019
This piece of software is bound by The MIT License (MIT)
Copyright (c) 2019 Prashank Kadam
Code written by : Prashank Kadam
User name - ADM-PKA187
Email ID : prashank.kadam@maersktankers.com
Created on - Tue Jul 30 10:00:14 2019
version : 1.0
"""
# Importing the required libraries
import pandas as pd
import xml.etree.ElementTree as et
# Importing the standard semark light format for sea and port reports in order to get the column
# names of all the standard columns at runtime
df_init_sea = pd.read_excel('semark_light.xlsx', 'Sea')
df_init_port = pd.read_excel('semark_light.xlsx', 'Port')
# Importing the xml converted data into a dataframe
df_target = pd.read_excel('test_1.xlsx')
# Taking a subset of only the required columns from the dataframe
df_target = df_target[['ImoNumber', 'VesselName', 'ReportTime', 'Longitude', 'Port',
'Location', 'Latitude', 'VoyageNo', 'ObservedDistance', 'FWDDraft', 'LOG_DISTANCE',
'WindForce', 'SeaDir', 'SwellHeight', 'CurrentDirection', 'WindDirection', 'SeaHeight',
'SwellDir', 'SeaState', 'Swell', 'Current', 'FuelType', 'AuxEngineConsumption',
'BoilerEngineConsumption', 'Units', 'Received', 'Consumption', 'SeaTemp', 'VesselCondition']]
# Initializing the rows list in which we will append the mapped data
rows = []
# Looping over the dataframe to map each row to the corresponding columns in the other dataframe
for index, row in df_target.iterrows():
    # Kindly note that the time complexity of the code below is higher than optimal, as the same data
    # values are repeated for 4 rows in succession; but since time is not an issue in our particular
    # use case, we refrain from adding further validations that would complicate the code
s_imo = row['ImoNumber']
s_vesselname = row['VesselName']
s_time = row['ReportTime']
s_longitutde = row['Longitude']
s_port = row['Port']
s_latitude = row['Latitude']
s_voyage = row['VoyageNo']
s_obvdis = row['ObservedDistance']
s_draught = row['FWDDraft']
s_dist = row['LOG_DISTANCE']
s_wind = row['WindForce']
s_seadir = row['SeaDir']
    s_swellhgt = row['SwellHeight']
s_curdir = row['CurrentDirection']
s_windir = row['WindDirection']
s_seahgt = row['SeaHeight']
s_swelldir = row['SwellDir']
s_seastate = row['SeaState']
s_swell = row['Swell']
s_curr = row['Current']
s_units = row['Units']
s_seatemp = row['SeaTemp']
s_vesscon = row['VesselCondition']
# Filling in the corresponding fields for the repective fuel types:
if row['FuelType'] == 'IFO':
s_hshfo_ae = row['AuxEngineConsumption']
s_hshfo_blr = row['BoilerEngineConsumption']
s_hshfo_me = row['Consumption']
elif row['FuelType'] == 'LSF':
s_lshfo_ae = row['AuxEngineConsumption']
s_lshfo_blr = row['BoilerEngineConsumption']
s_lshfo_me = row['Consumption']
elif row['FuelType'] == 'LSG':
s_lsmdo_ae = row['AuxEngineConsumption']
s_lsmdo_blr = row['BoilerEngineConsumption']
s_lsmdo_me = row['Consumption']
elif row['FuelType'] == 'MGO':
s_hsmdo_ae = row['AuxEngineConsumption']
s_hsmdo_blr = row['BoilerEngineConsumption']
s_hsmdo_me = row['Consumption']
        # Since MGO is the last fuel type listed for a particular report, we append the
        # mapped values for the report to the rows list only once, on the MGO row
        rows.append({'Vessel_Name': s_vesselname, 'Report_Date': s_time, 'IMO_NO': s_imo,
                     'Main Engine Fuel Consumption (H.S.HFO)': s_hshfo_me,
                     'Main Engine Fuel Consumption (L.S.HFO)': s_lshfo_me,
                     'Main Engine Fuel Consumption (H.S.MDO)': s_hsmdo_me,
                     'Main Engine Fuel Consumption (L.S.MDO)': s_lsmdo_me,
                     'Boiler Consumption (H.S.HFO)': s_hshfo_blr,
                     'Boiler Consumption (L.S.HFO)': s_lshfo_blr,
                     'Boiler Consumption (H.S.MDO)': s_hsmdo_blr,
                     'Boiler Consumption (L.S.MDO)': s_lsmdo_blr,
                     'Auxiliary Engine (Diesel Generator ) (H.S.HFO)': s_hshfo_ae,
                     'Auxiliary Engine (Diesel Generator ) (L.S.HFO)': s_lshfo_ae,
                     'Auxiliary Engine (Diesel Generator ) (H.S.MDO)': s_hsmdo_ae,
                     'Auxiliary Engine (Diesel Generator ) (L.S.MDO)': s_lsmdo_ae,
                     'Vessel State( Loaded\Ballast)': s_vesscon, 'True Wind Direction ': s_windir,
                     'True Wave Direction': s_seadir, 'True Swell Direction': s_swelldir})
# Creating the final dataframe with the mapped data and using the column values mapped from the semark sheet
df_final = pd.DataFrame(rows, columns=df_init_sea.columns)
# Exporting the data to excel sheet
df_final.to_excel('final.xlsx', index=False)
######################################################################################################
# The below piece of code is for xml to pandas dataframe conversion.
# Kindly note that all the fields have not yet been added to the dictionary
# xtree = et.parse("vess_test.xml")
# xroot = xtree.getroot()
#
# df_cols = ["VesselName", "ReportTime", "Longitude", "Port", "IMO_NUMBER"]
# rows = []
#
# for node in xroot:
# # s_name = node.attrib.get("name")
# s_vess_name = node.find("VesselName").text if node is not None else None
# s_report_time = node.find("ReportTime").text if node is not None else None
# s_longitude = node.find("Longitude").text if node is not None else None
# s_port = node.find("Port").text if node is not None else None
# s_imo = node.find("IMO_NUMBER").text if node is not None else None
# # s_location = node.find("Location").text if node is not None else None
#
# rows.append({"VesselName": s_vess_name, "ReportTime": s_report_time, "Longitude":s_longitude,
# "Port": s_port, "IMO_NUMBER": s_imo})
#
# out_df = pd.DataFrame(rows, columns=df_cols)
#
# print(out_df.head(10))
| 46.148148 | 117 | 0.632103 | 817 | 6,230 | 4.665851 | 0.30355 | 0.009182 | 0.00787 | 0.018888 | 0.206191 | 0.178384 | 0.076863 | 0.043809 | 0.036726 | 0 | 0 | 0.008041 | 0.241413 | 6,230 | 134 | 118 | 46.492537 | 0.798561 | 0.376726 | 0 | 0 | 0 | 0 | 0.384829 | 0.031953 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.029412 | 0 | 0.029412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44dc0bec2aec12c7dc578a3fe8630954d0075baf | 3,506 | py | Python | bot.py | justletterh/cutiecafe | 148f4708ab0b852552f6b91a25e084ac0011b2f0 | [
"WTFPL"
] | null | null | null | bot.py | justletterh/cutiecafe | 148f4708ab0b852552f6b91a25e084ac0011b2f0 | [
"WTFPL"
] | null | null | null | bot.py | justletterh/cutiecafe | 148f4708ab0b852552f6b91a25e084ac0011b2f0 | [
"WTFPL"
] | null | null | null | import discord, jishaku
from discord.ext import commands
from time import sleep
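# Discord user IDs of the bot owners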
hid=666317117154525185
did=676454199742955530
lid=701254727534510129
owners=[hid,did,lid]
status="— ୨୧ 𝐬𝐧𝐮𝐠𝐠𝐥𝐢𝐧’ 𝐭𝐡𝐞 𝐜𝐮𝐭𝐢𝐞 𝐩𝐢𝐞𝐬! ₓ˚. ୭ ˚○◦"
join="""\U00002601 . . . ⇢ ˗ˏˋ <@&689140834200846374> ࿐ྂ
**welcome sweetheart!! please verify to gain access to the rest of the server!** <:b_powheart:727644834265038918> <:b_teddy:727644836819107860> <:b_powheart:727644834265038918>
<:b_wingies2:727644834806104124> **get some roles in** <a:b_arrow:727644833597882459> <#650563103699763240>
<:b_wingies2:727644834806104124> **make an intro in** <a:b_arrow:727644833597882459> <#650562789546655790>
<:b_wingies2:727644834806104124> **read and react to the triggers and rules list** <a:b_arrow:727644833597882459> <#662158949239226388> + <#668220102482722821>
<:b_wingies2:727644834806104124> **ping staff in** <a:b_arrow:727644833597882459> <#694558376029454386>
<a:b_butterflies:727644835023945778> — **and have loads of fun, $USER!**"""
leave= """<a:B4562AEA046F4DB6B1892479B9ADA72D:727644835023945778> — **oh no!! an angel named $USER left us :c god speed little angel. god speed.** <:5CD871E9E3E34685A9E579DA3BC0D982:727644834265038918>"""
welcomechan=650560380271067148
color=0xf8dfea
def isown(usr):
if usr.id in owners:
return True
else:
return False
bot = commands.Bot(command_prefix='~',owner_ids=owners)
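# remove the built-in help command so a custom one can be provided (e.g. by an extension)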
bot.remove_command('help')
@bot.event
async def on_ready():
await bot.change_presence(activity=discord.Game(name=status), status=discord.Status('online'))
@bot.event
async def on_member_join(member):
with open('./app/join.gif', 'rb') as fp:
await bot.get_channel(welcomechan).send(content=join.replace("$USER",member.mention),file=discord.File(fp,"join.gif"))
@bot.event
async def on_member_remove(member):
with open('./app/leave.gif', 'rb') as fp:
await bot.get_channel(welcomechan).send(content=leave.replace("$USER",f"@{member.name}#{member.discriminator}"),file=discord.File(fp,"leave.gif"))
@bot.event
async def on_message(message):
if ("h " in message.content.lower() or "hh" in message.content.lower() or message.content.lower()=="h") and message.author.id==hid:
await message.channel.send(content="h")
await bot.process_commands(message)
@bot.command(name='join')
@commands.is_owner()
async def _join(ctx):
with open('./app/join.gif', 'rb') as fp:
await bot.get_channel(welcomechan).send(content=join.replace("$USER",ctx.author.mention),file=discord.File(fp,"join.gif"))
await ctx.send(content="Done!")
@bot.command(name='leave')
@commands.is_owner()
async def _leave(ctx):
with open('./app/leave.gif', 'rb') as fp:
await bot.get_channel(welcomechan).send(content=leave.replace("$USER",f"@{ctx.author.name}#{ctx.author.discriminator}"),file=discord.File(fp,"leave.gif"))
await ctx.send(content="Done!")
@bot.command(name='say')
@commands.is_owner()
async def _say(ctx, *, arg):
await ctx.send(content=arg)
await ctx.message.delete()
@bot.command()
@commands.is_owner()
async def tst(ctx):
await ctx.send(content=join)
@bot.command(name='fetchmsg')
@commands.is_owner()
async def _msg(ctx, arg):
arg=int(arg)
m=await ctx.channel.fetch_message(arg)
await ctx.send(content=f"\U00000060\U00000060\U00000060{m.content}\U00000060\U00000060\U00000060")
bot.load_extension('jishaku')
bot.load_extension("utils")
bot.load_extension("misc")
bot.load_extension("voice")
bot.run('BOT_TOKEN_HERE') | 40.767442 | 204 | 0.733885 | 488 | 3,506 | 5.206967 | 0.342213 | 0.04329 | 0.029516 | 0.039355 | 0.355765 | 0.236128 | 0.210153 | 0.158205 | 0.158205 | 0.126722 | 0 | 0.170755 | 0.104678 | 3,506 | 86 | 205 | 40.767442 | 0.635234 | 0 | 0 | 0.205479 | 0 | 0.068493 | 0.392073 | 0.21842 | 0 | 0 | 0.002281 | 0 | 0 | 1 | 0.013699 | false | 0 | 0.041096 | 0 | 0.082192 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44dd2af0e04f2bcde0f9679219132a6850d1e347 | 1,611 | py | Python | src/05_concurrent/concurrent_queue.py | edgardeng/python-advance-interview | 59fd7bee8e871acdc7fdfecf2a110db840c47ebb | [
"Apache-2.0"
] | 1 | 2022-03-06T13:03:56.000Z | 2022-03-06T13:03:56.000Z | src/05_concurrent/concurrent_queue.py | edgardeng/python-advance-interview | 59fd7bee8e871acdc7fdfecf2a110db840c47ebb | [
"Apache-2.0"
] | null | null | null | src/05_concurrent/concurrent_queue.py | edgardeng/python-advance-interview | 59fd7bee8e871acdc7fdfecf2a110db840c47ebb | [
"Apache-2.0"
] | null | null | null | from multiprocessing import Queue, Process, Pool, Manager, Pipe
from time import sleep
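# Demonstrations of inter-process communication with multiprocessing:
# Queue with Process, Manager().Queue() with Pool, and Pipe.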
def basic_usage():
    q = Queue(3)  # specify the queue size; if omitted, the queue is unbounded by default
    q.put('message 1')
    q.put('message 2')
    q.put('message 3')
    # q.put('message 4')  # would block until space becomes available
    if not q.full():
        q.put('message 5', block=True, timeout=1)  # wait up to 1s; raise queue.Full if the put has not succeeded
    print('is the queue full: %s' % q.full())
    print(q.get())  # fetch and remove an item
    print(q.get())
    print(q.get())
    # print(q.get())  # would block until an item is available
    if not q.empty():
        print(q.get(block=True, timeout=1))  # wait up to 1s for an item; raise queue.Empty on timeout
    print('is the queue empty: %s' % q.empty())
    # print('queue size: %d' % q.qsize())  # qsize error in mac osx
'''
' Communication through queues
' If processes are created with Pool, use the Queue from Manager to communicate between processes
' If using Process, use multiprocessing.Queue
'''
def write(q:Queue):
a = ['a', 'b', 'c', 'd']
for i in a:
        print('is writing %s' % i)
q.put(i)
sleep(1)
def read(q:Queue):
for i in range(4):
        print('is reading %s' % q.get())
sleep(1)
def queue_usage():
    # inter-process communication using Process
q = Queue()
pw = Process(target=write, args=(q,))
pr = Process(target=read, args=(q,))
pw.start()
pr.start()
pw.join()
pr.join()
    # process pool (requires a Manager queue)
q = Manager().Queue()
pool = Pool(3)
pool.apply(write, (q,))
pool.apply(read, (q,))
pool.close()
'''
' Pipe usage
'''
def func_pipe(conn):
conn.send('send by child')
print('child recv:', conn.recv())
conn.close()
def pipe_usage():
    parent_conn, child_conn = Pipe()  # get the two ends of the Pipe connection
p = Process(target=func_pipe, args=(child_conn, ))
p.start()
print('parent recv:', parent_conn.recv())
parent_conn.send('send by parent')
p.join()
if __name__ == '__main__':
# basic_usage()
pipe_usage()
| 19.888889 | 63 | 0.605835 | 245 | 1,611 | 3.902041 | 0.383673 | 0.025105 | 0.047071 | 0.035565 | 0.028243 | 0.028243 | 0 | 0 | 0 | 0 | 0 | 0.010828 | 0.197393 | 1,611 | 80 | 64 | 20.1375 | 0.728538 | 0.121043 | 0 | 0.096154 | 0 | 0 | 0.096724 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.038462 | 0 | 0.153846 | 0.192308 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44e34cab338c76b526dd9c77da1242eee658adad | 458 | py | Python | cfgov/paying_for_college/migrations/0010_program_median_monthly_debt.py | flacoman91/consumerfinance.gov | 64e3d68d1c023ae944baf66a99e54236e5976097 | [
"CC0-1.0"
] | 37 | 2020-08-18T19:52:39.000Z | 2022-03-23T08:08:41.000Z | cfgov/paying_for_college/migrations/0010_program_median_monthly_debt.py | flacoman91/consumerfinance.gov | 64e3d68d1c023ae944baf66a99e54236e5976097 | [
"CC0-1.0"
] | 338 | 2020-08-14T20:46:36.000Z | 2022-03-31T20:49:32.000Z | cfgov/paying_for_college/migrations/0010_program_median_monthly_debt.py | raft-tech/cfgov-refresh | 7c63c31fd6bb95ed4f7d368f1e1252175f0c71ca | [
"CC0-1.0"
] | 14 | 2020-10-21T15:27:03.000Z | 2022-03-17T03:16:36.000Z | # -*- coding: utf-8 -*-
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('paying_for_college', '0009_expandable_group_help_text'),
]
operations = [
migrations.AddField(
model_name='program',
name='median_monthly_debt',
field=models.IntegerField(blank=True, help_text='MEDIAN MONTHLY PAYMENT FOR A 10-YEAR LOAN', null=True),
),
]
| 25.444444 | 116 | 0.631004 | 49 | 458 | 5.693878 | 0.77551 | 0.057348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020408 | 0.251092 | 458 | 17 | 117 | 26.941176 | 0.793003 | 0.045852 | 0 | 0 | 0 | 0 | 0.266667 | 0.071264 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44e680a8fac14428a219624044650be58b2e4334 | 374 | py | Python | Tools/AtlasMaker/Assets/listdir.py | fakhirsh/cyclicshift | d255a3cc82703decdbbd477df3fa14791cd528d5 | [
"MIT"
] | 1 | 2019-11-12T17:47:23.000Z | 2019-11-12T17:47:23.000Z | Tools/AtlasMaker/Assets/listdir.py | fakhirsh/cyclicshift | d255a3cc82703decdbbd477df3fa14791cd528d5 | [
"MIT"
] | 31 | 2019-10-25T11:28:21.000Z | 2019-12-10T16:57:30.000Z | Tools/AtlasMaker/Assets/listdir.py | fakhirsh/cyclicshift | d255a3cc82703decdbbd477df3fa14791cd528d5 | [
"MIT"
] | null | null | null | import os
import sys
if len(sys.argv) != 2:
    print("Error: usage --> python3 listdir.py [DIRNAME]")
    exit(1)
path = sys.argv[1]
files = []
# r=root, d=directories, f = files
for r, d, f in os.walk(path):
for file in f:
if '.png' in file:
files.append(os.path.join(r, file))
#files.append(file)
for f in files:
print(f)
| 17.809524 | 57 | 0.558824 | 61 | 374 | 3.42623 | 0.508197 | 0.066986 | 0.143541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01487 | 0.280749 | 374 | 20 | 58 | 18.7 | 0.762082 | 0.13369 | 0 | 0 | 0 | 0 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.153846 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44e6bc135f28e05f2002c2b95899b3825d3fcdcd | 1,022 | py | Python | preprocess-scripts/build_movies.py | rohanrb302/End-to-End-Movie-Recommendation--Service | eabbf843e599cbfa4ae17f9e7c7eb0e73fd852d4 | [
"Apache-2.0"
] | null | null | null | preprocess-scripts/build_movies.py | rohanrb302/End-to-End-Movie-Recommendation--Service | eabbf843e599cbfa4ae17f9e7c7eb0e73fd852d4 | [
"Apache-2.0"
] | null | null | null | preprocess-scripts/build_movies.py | rohanrb302/End-to-End-Movie-Recommendation--Service | eabbf843e599cbfa4ae17f9e7c7eb0e73fd852d4 | [
"Apache-2.0"
] | null | null | null | import pandas as pd
import requests
ratings = pd.read_csv("processed_ratings.csv")
# fetch movie details of all the unique movieids from the movie API
movie_api_url = "http://128.2.204.215:8080/movie/"
movies = [requests.get(movie_api_url + movie).json() for movie in ratings['movieid'].unique()]
# filter out records for which movie details does not exist
movies = list(filter(lambda x: x.get('message','None') == 'None', movies))
# convert JSON data to dataframe
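# (pd.io.json.json_normalize is deprecated in newer pandas releases; pd.json_normalize is the modern equivalent)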
movies = list(map(pd.io.json.json_normalize, movies))
movies_data = pd.concat(movies).reset_index(drop=True)
movies_data = movies_data.drop_duplicates(subset=['id'])
# keep only important columns
movies_data = movies_data[['id','imdb_id','title','adult','budget','genres','original_language','release_date','vote_count','vote_average','popularity','overview']]
# preprocess the genres column to make it readable
movies_data['genres'] = movies_data['genres'].apply(lambda x: ",".join([y['name'] for y in x]))
movies_data.to_csv("movies.csv", index=False) | 48.666667 | 164 | 0.746575 | 158 | 1,022 | 4.683544 | 0.550633 | 0.108108 | 0.02973 | 0.054054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015184 | 0.097847 | 1,022 | 21 | 165 | 48.666667 | 0.787419 | 0.226027 | 0 | 0 | 0 | 0 | 0.259542 | 0.026718 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44ea1e24de27283dea1dea7e3321f3745fc9d01f | 400 | py | Python | docs/en/conf.py | sdaityari/e-cidadania | 2fc7f312145e7cd674033f3d765ff9ff8d4fb23c | [
"Apache-2.0"
] | 40 | 2015-03-26T20:46:16.000Z | 2022-02-28T09:15:30.000Z | docs/en/conf.py | zixtor/e-cidadania | 2fc7f312145e7cd674033f3d765ff9ff8d4fb23c | [
"Apache-2.0"
] | 1 | 2017-07-29T09:44:12.000Z | 2017-08-08T16:27:22.000Z | docs/en/conf.py | zixtor/e-cidadania | 2fc7f312145e7cd674033f3d765ff9ff8d4fb23c | [
"Apache-2.0"
] | 19 | 2015-01-13T20:40:49.000Z | 2021-11-02T03:53:39.000Z | import sys
import os
cwd = os.path.dirname(os.path.realpath(__file__))
main_dir = os.path.normpath(cwd + '/../')
sys.path.append(main_dir)
#print sys.path
from config.all import *
language = 'en'
#html_logo = '../images/logos/logo-en.png'
latex_logo = '../images/logos/logo-en.png'
latex_documents = [
('index', 'e-cidadania.tex', u'Documentation',
u'Cidadania S. Coop. Galega', 'manual'),
] | 22.222222 | 49 | 0.6875 | 60 | 400 | 4.433333 | 0.583333 | 0.067669 | 0.112782 | 0.142857 | 0.218045 | 0.218045 | 0.218045 | 0 | 0 | 0 | 0 | 0 | 0.12 | 400 | 18 | 50 | 22.222222 | 0.755682 | 0.1375 | 0 | 0 | 0 | 0 | 0.281977 | 0.078488 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44f1e748bd22b2cafab30bc7205e2a0aa86a626c | 2,191 | py | Python | tests/data_source/test_ec.py | KarrLab/Kinetic-Datanator | 8aff047fd117033b98eca8ee3b21a8f07c430dec | [
"CC-BY-3.0",
"CC0-1.0",
"CC-BY-4.0",
"MIT"
] | 10 | 2018-11-20T17:04:09.000Z | 2021-08-24T18:29:06.000Z | tests/data_source/test_ec.py | KarrLab/Kinetic-Datanator | 8aff047fd117033b98eca8ee3b21a8f07c430dec | [
"CC-BY-3.0",
"CC0-1.0",
"CC-BY-4.0",
"MIT"
] | 59 | 2018-11-23T20:42:11.000Z | 2020-11-08T19:51:36.000Z | tests/data_source/test_ec.py | KarrLab/Kinetic-Datanator | 8aff047fd117033b98eca8ee3b21a8f07c430dec | [
"CC-BY-3.0",
"CC0-1.0",
"CC-BY-4.0",
"MIT"
] | 3 | 2018-12-15T00:53:54.000Z | 2021-08-24T18:29:08.000Z | import unittest
from datanator.data_source import ec
import datanator.config.core
import shutil
import tempfile
from pathlib import Path
class TestEC(unittest.TestCase):
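    """Tests for the EC data source; requires a reachable MongoDB test instance."""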
@classmethod
def setUpClass(cls):
cls.cache_dir = tempfile.mkdtemp()
db = 'test'
username = datanator.config.core.get_config()['datanator']['mongodb']['user']
password = datanator.config.core.get_config()['datanator']['mongodb']['password']
MongoDB = datanator.config.core.get_config()['datanator']['mongodb']['server']
cls.src = ec.EC(server=MongoDB, db=db, username=username, password=password, authSource='admin',
readPreference='nearest', max_entries=20, cache_dir=cls.cache_dir)
@classmethod
def tearDownClass(cls):
shutil.rmtree(cls.cache_dir)
cls.src.db.drop_collection(cls.src.collection_str)
cls.src.client.close()
@unittest.skip('IP')
def test_establish_ftp(self):
ftp = self.src.establish_ftp()
self.assertTrue('enzyme.dat' in ftp.nlst())
@unittest.skip('IP')
def test_retrieve_content(self):
p = Path(self.cache_dir+'/enzyme.dat')
self.src.retrieve_content()
self.assertTrue(p.exists())
@unittest.skip('circle directory error.')
def test_parse_content(self):
location = str(Path('~/karr_lab/datanator/docs/enzyme.dat').expanduser())
self.src.parse_content(location)
def test_make_doc(self):
lines = ["ID 1.1.1.1", "DE Alcohol dehydrogenase.", "AN Aldehyde reductase.",
"CA (1) A primary alcohol + NAD(+) = an aldehyde + NADH.", "CA (2) A secondary alcohol + NAD(+) = a ketone + NADH.",
"CF Zn(2+) or Fe cation."]
result = self.src.make_doc(lines)
self.assertEqual(result, {'ec_number': '1.1.1.1', 'ec_name': 'Alcohol dehydrogenase',
'ec_synonyms': ['Aldehyde reductase'],
'catalytic_activity': ['(1) A primary alcohol + NAD(+) = an aldehyde + NADH', '(2) A secondary alcohol + NAD(+) = a ketone + NADH'],
'cofactor': 'Zn(2+) or Fe cation'}) | 42.960784 | 167 | 0.614331 | 264 | 2,191 | 4.988636 | 0.382576 | 0.009112 | 0.057707 | 0.050114 | 0.250569 | 0.198937 | 0.198937 | 0.098709 | 0 | 0 | 0 | 0.009615 | 0.240529 | 2,191 | 51 | 168 | 42.960784 | 0.781851 | 0 | 0 | 0.093023 | 0 | 0 | 0.267336 | 0.016423 | 0 | 0 | 0 | 0 | 0.069767 | 1 | 0.139535 | false | 0.046512 | 0.139535 | 0 | 0.302326 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44f8c313abc197a6cbaf4a7a0e766b03137619a3 | 3,922 | py | Python | faps/make_offspring.py | ellisztamas/faps | fdf5ba990eaf85bfcf05b5eb757285ef40e8f918 | [
"MIT"
] | null | null | null | faps/make_offspring.py | ellisztamas/faps | fdf5ba990eaf85bfcf05b5eb757285ef40e8f918 | [
"MIT"
] | 9 | 2018-02-15T11:19:04.000Z | 2020-05-22T17:54:07.000Z | faps/make_offspring.py | ellisztamas/faps | fdf5ba990eaf85bfcf05b5eb757285ef40e8f918 | [
"MIT"
] | null | null | null | import numpy as np
from faps.genotypeArray import genotypeArray
from faps.calculate_geno_probs import calculate_geno_probs
def make_offspring(parents, noffs=None, dam_list=None, sire_list=None, mu=1e-12, family_name='offs'):
"""
Mate individuals in a base population to create simulated offspring. Lists of
specific sires and dams can be provided with the options dam_list and
sire_list. If only the number of offspring are specified parents are mated at
random from the base population.
Parameters
----------
parents: genotypeArray
Genotype information on the parents to be mated.
noffs: int
Number of offspring to be produced. If specific dams and sires are
specified, this is ignored.
dam_list, sire_list: lists
Integer lists of positions of sires and dams to be mated.
Pairs are mated in order (i.e. the first dam with the first sire, and so
forth). If used these two lists must be of the same length. If no
arguments are given for either list, parents are mated at random with
replacement, and the possibility of self-fertilisation.
mu: float or 1-d array between 0 and 1
Per locus genotype error rate; the probability that the called
genotype is incorrect. Alternatively, supply a vector of error rates
for each locus. Defaults to 1e-12.
family_name: str, optional
String denoting the name for this family.
Returns
-------
A genotypeArray object.
"""
if dam_list is None and sire_list is None and noffs is None:
raise ValueError("Either noffs needs to be a positive integer, or else lists of dams and sires should be given.")
# If parents haven't been specified, choose these at random.
if dam_list is None and sire_list is None:
if noffs < 1 or not isinstance(noffs, int):
raise ValueError("noffs should be a positive integer.")
nparents = parents.geno.shape[0]
dam_list = np.random.choice(range(nparents), noffs, replace=True).tolist()
sire_list = np.random.choice(range(nparents), noffs, replace=True).tolist()
# if parents have been specified, set noffs to the length of sires and dams.
if dam_list is not None or sire_list is not None:
noffs = len(dam_list)
if len(dam_list) != len(sire_list):
raise ValueError("List of sires must be the same length as the list of dams.")
nloci = parents.geno.shape[1] # pull out the number of loci
offs_genotypes= np.zeros([noffs, nloci, 2]) # empty array to store offspring genotypes.
# pull out arrays of genotype data for the dams and sires.
dam_genotypes = parents.subset(dam_list).geno
sire_genotypes = parents.subset(sire_list).geno
# draw an array of indices for whether the first or second allele should be drawn.
dam_alleles = np.random.binomial(1, 0.5, nloci*noffs).reshape([noffs, nloci])
sire_alleles = np.random.binomial(1, 0.5, nloci*noffs).reshape([noffs, nloci])
# loop over every mating pair and send the selected alleles to offs_genotypes.
for o in range(noffs):
offs_genotypes[o,:,0] = np.array([dam_genotypes [o,l][dam_alleles [o,l]] for l in range(nloci)])
offs_genotypes[o,:,1] = np.array([sire_genotypes[o,l][sire_alleles[o,l]] for l in range(nloci)])
offs_genotypes = offs_genotypes.astype(float)
# extra information on names.
offspring_names = np.array([family_name+'_'+str(a) for a in np.arange(noffs)])
maternal_names = parents.subset(dam_list).names
paternal_names = parents.subset(sire_list).names
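    # per-genotype probabilities incorporating the genotyping error rate mu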
geno_probs = calculate_geno_probs(offs_genotypes, mu)
return genotypeArray(
geno = offs_genotypes,
geno_probs = geno_probs,
names = offspring_names,
mothers = maternal_names,
fathers = paternal_names,
markers = np.arange(nloci)
)
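# Hedged usage sketch (assumes a parental genotypeArray, e.g. from faps.make_parents;
# sizes and allele frequencies below are illustrative only):
#
#   adults = make_parents(10, np.random.uniform(0.3, 0.5, 50))
#   brood  = make_offspring(adults, noffs=5)                            # random mating
#   cross  = make_offspring(adults, dam_list=[0, 1], sire_list=[2, 3])  # explicit pairs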
| 46.141176 | 121 | 0.694544 | 589 | 3,922 | 4.528014 | 0.302207 | 0.028871 | 0.014998 | 0.012373 | 0.149231 | 0.131984 | 0.131984 | 0.131984 | 0.131984 | 0.131984 | 0 | 0.006931 | 0.227435 | 3,922 | 84 | 122 | 46.690476 | 0.873267 | 0.413819 | 0 | 0 | 0 | 0 | 0.087776 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026316 | false | 0 | 0.078947 | 0 | 0.131579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44f956a849a6c4d35aee25d62b6bcf916bf88c50 | 7,576 | py | Python | iprir/tests.py | account-login/iprir | 6b268bfff3f5af68f1cbf812f01104d4db238e68 | [
"MIT"
] | 2 | 2017-03-01T09:27:18.000Z | 2019-10-03T06:36:18.000Z | iprir/tests.py | account-login/iprir | 6b268bfff3f5af68f1cbf812f01104d4db238e68 | [
"MIT"
] | null | null | null | iprir/tests.py | account-login/iprir | 6b268bfff3f5af68f1cbf812f01104d4db238e68 | [
"MIT"
] | null | null | null | from ipaddress import IPv4Address, IPv6Address, IPv6Network
from contextlib import contextmanager
import tempfile
import os
import random
import unittest
import requests
import iprir
from iprir.record import RIRRecord, ip_to_int
from iprir.parser import parse_file, parse_string
from iprir.database import DB
from iprir.ipset import IpSet
import iprir.updater
SAMPLE_TEXT_DB_CONTENT = '''
#
2|apnic|20170120|50186|19830613|20170119|+1000
apnic|*|asn|*|7517|summary
apnic|*|ipv4|*|36581|summary
apnic|*|ipv6|*|6088|summary
apnic|NZ|asn|681|1|20020801|allocated
apnic|AU|ipv4|1.0.0.0|256|20110811|assigned
apnic|CN|ipv4|1.0.1.0|256|20110414|allocated
apnic|CN|ipv6|2001:250::|35|20000426|allocated
apnic|CN|ipv6|2001:250:2000::|35|20020726|allocated
'''
REAL_RECORDS = None
# noinspection PyPep8Naming
def setUpModule():
global REAL_RECORDS
iprir.updater.initialize()
REAL_RECORDS = sum(map(parse_file, iprir.TEXT_DB_PATH.values()), [])
@contextmanager
def patch(obj, key, value):
origin = getattr(obj, key)
setattr(obj, key, value)
try:
yield
finally:
setattr(obj, key, origin)
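# Quick sketch of `patch` in use (mirrors test_update below): the attribute swap is
# undone even if the body raises.
#
#   with patch(requests, 'get', fake_get):
#       ...  # requests.get is fake_get inside the block
#   # requests.get is restored here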
@contextmanager
def patch_db_path():
fd, text_db_path = tempfile.mkstemp(prefix='iprir_test_', suffix='.txt')
os.close(fd)
fd, sql_db_path = tempfile.mkstemp(prefix='iprir_test_', suffix='.sqlite')
os.close(fd)
print('text_db_path', text_db_path)
print('sql_db_path', sql_db_path)
with patch(iprir, 'TEXT_DB_PATH', dict(test=text_db_path)):
with patch(iprir, 'TEXT_DB_URLS', dict(test='https://dummy/')):
with patch(iprir, 'SQL_DB_PATH', sql_db_path):
try:
yield text_db_path, sql_db_path
except Exception:
raise
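                    # re-raised on purpose: the temp files are kept for post-mortem inspection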
else:
os.remove(text_db_path)
os.remove(sql_db_path)
def write_string_to_file(filename: str, string: str):
with open(filename, 'wt') as fp:
fp.write(string)
def test_record_ipv4():
r = RIRRecord('CN', 'ipv4', '1.0.1.0', '256', 'assigned')
assert r.length == 256
assert r.ipv4.exploded == '1.0.1.0'
assert r.ipv4_network.network_address == r.ipv4
assert r.ipv4_network.prefixlen == 24
assert r.ipv4 == IPv4Address(r.as_int)
def test_record_ipv6():
r = RIRRecord('CN', 'ipv6', '2001:250::', '35', 'allocated')
assert r.length == 2 ** (128 - 35)
assert r.ipv6.compressed == '2001:250::'
assert r.ipv6_network.network_address == r.ipv6
assert r.ipv6_network.prefixlen == 35
assert r.ipv6 == IPv6Address(r.as_int)
def test_parse():
records = parse_string(SAMPLE_TEXT_DB_CONTENT)
assert len(records) == 5
r = records[-1]
assert (r.country, r.ipv6, r.ipv6_network, r.status) == (
'CN',
IPv6Address('2001:250:2000::'),
IPv6Network('2001:250:2000::/35'),
'allocated'
)
def test_ip_overlap():
def verify(lst):
lst.sort(key=lambda x: x[0])
for i in range(1, len(lst)):
prev_start, prev_len = lst[i - 1]
assert prev_start + prev_len <= lst[i][0]
lst4 = []
lst6 = []
for r in REAL_RECORDS:
if r.country == 'AP': # asia/pacific
# XXX: conflicts
# apnic|AP|ipv4|159.117.192.0|2048|19920409|allocated|A928972C
# ripencc|NL|ipv4|159.117.192.0|2048|19920409|assigned|
continue
if not DB.filter_record(r):
continue
if r.type == 'ipv4':
lst4.append((r.as_int, r.length))
elif r.type == 'ipv6':
lst6.append((r.as_int, r.length))
verify(lst4)
verify(lst6)
def test_db():
with patch_db_path() as pathes:
text_db_path, sql_db_path = pathes
write_string_to_file(text_db_path, SAMPLE_TEXT_DB_CONTENT)
records = parse_file(text_db_path)
db = DB()
try:
ret = db.reset_table()
assert ret
ret = db.add_records(records)
assert ret
cn4 = db.by_country('ipv4', 'CN')
assert len(cn4) == 1
assert cn4[0] == records[2]
cn6 = db.by_country('ipv6', 'CN')
assert len(cn6) == 2
assert cn6 == records[3:5]
r = db.by_ip(IPv4Address('1.0.1.0'))
assert r == records[2]
r = db.by_ip(IPv4Address('1.0.1.255'))
assert r == records[2]
r = db.by_ip(IPv4Address('1.0.2.0'))
assert r is None
r = db.by_ip(IPv6Address('2001:250::'))
assert r == records[3]
net = records[3].ipv6_network
r = db.by_ip(net.network_address + net.num_addresses)
assert r == records[4]
net = records[4].ipv6_network
r = db.by_ip(net.network_address + net.num_addresses)
assert r is None
finally:
db.close()
def test_update():
def fake_get(*args, **kwargs):
class Obj:
pass
o = Obj()
o.text = SAMPLE_TEXT_DB_CONTENT
return o
with patch(requests, 'get', fake_get):
with patch_db_path():
iprir.updater.update()
db = DB()
try:
records = parse_string(SAMPLE_TEXT_DB_CONTENT)
records = list(filter(lambda r: r.type in ('ipv4', 'ipv6'), records))
assert db.all() == records
finally:
db.close()
def test_ipset():
def to_int(ips):
return [ip_to_int(IPv4Address(ip)) for ip in ips]
text = '''
2|apnic|20170120|50186|19830613|20170119|+1000
apnic|*|ipv6|*|6088|summary
apnic|AU|ipv4|1.0.0.0|256|20110811|assigned
apnic|CN|ipv4|1.0.1.0|256|20110414|allocated
apnic|CN|ipv4|1.0.5.0|256|20110414|allocated
'''
records = parse_string(text)
random.shuffle(records)
ipset = IpSet(records)
assert ipset.lo == to_int(['1.0.0.0', '1.0.5.0'])
assert ipset.hi == to_int(['1.0.2.0', '1.0.6.0'])
assert IPv4Address('0.255.255.255') not in ipset
assert IPv4Address('1.0.0.0') in ipset
assert IPv4Address('1.0.1.0') in ipset
assert IPv4Address('1.0.1.255') in ipset
assert IPv4Address('1.0.2.0') not in ipset
assert IPv4Address('1.0.4.255') not in ipset
assert IPv4Address('1.0.5.0') in ipset
assert IPv4Address('1.0.5.255') in ipset
assert IPv4Address('1.0.6.0') not in ipset
# test IpSet.by_country()
with patch_db_path() as pathes:
text_db_path, sql_db_path = pathes
write_string_to_file(text_db_path, text)
iprir.updater.update_sql_db()
ipset = IpSet.by_country('ipv4', 'CN')
assert ipset.lo == to_int(['1.0.1.0', '1.0.5.0'])
assert ipset.hi == to_int(['1.0.2.0', '1.0.6.0'])
class TestIpSetOnRealData(unittest.TestCase):
by_country = staticmethod(IpSet.by_country)
def test_by_country(self):
# test on real data
cn4 = self.by_country('ipv4', 'CN')
assert IPv4Address('1.2.4.8') in cn4
assert IPv4Address('111.13.101.208') in cn4
assert IPv4Address('112.124.47.27') in cn4
assert IPv4Address('74.125.68.105') not in cn4
class TestRealDataWithApi(TestIpSetOnRealData):
by_country = staticmethod(iprir.by_country)
def test_by_ip(self):
assert iprir.by_ip(IPv4Address('8.8.8.8')) == RIRRecord(
country='US', type='ipv4', start='8.0.0.0', value='16777216', status='allocated',
)
# noinspection PyPep8Naming
def tearDownModule():
iprir.get_db().close()
| 29.59375 | 93 | 0.606389 | 1,084 | 7,576 | 4.089483 | 0.197417 | 0.014888 | 0.029326 | 0.007219 | 0.372208 | 0.312655 | 0.272727 | 0.194451 | 0.123618 | 0.123618 | 0 | 0.102573 | 0.256204 | 7,576 | 255 | 94 | 29.709804 | 0.684117 | 0.031151 | 0 | 0.21608 | 0 | 0 | 0.150866 | 0.075297 | 0 | 0 | 0 | 0 | 0.221106 | 1 | 0.085427 | false | 0.005025 | 0.065327 | 0.005025 | 0.18593 | 0.01005 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44fa7ab080f682a67685d77fc4c1632e1036494b | 820 | py | Python | Aron/Day10/answer.py | coolafabbe/AdventOfCode2021 | 97a2e4c7d887ef6f1ae477becb25cc1d97114781 | [
"MIT"
] | null | null | null | Aron/Day10/answer.py | coolafabbe/AdventOfCode2021 | 97a2e4c7d887ef6f1ae477becb25cc1d97114781 | [
"MIT"
] | null | null | null | Aron/Day10/answer.py | coolafabbe/AdventOfCode2021 | 97a2e4c7d887ef6f1ae477becb25cc1d97114781 | [
"MIT"
] | null | null | null | import sys
with open(sys.argv[1], "r") as file:
entries = file.read().splitlines()
open_chars = ['(', '[', '{', '<']
close_chars = [')', ']', '}', '>']
char_map = {o:c for o, c in zip(open_chars, close_chars)}
points = {')': 3, ']':57, '}':1197, '>': 25137}
syntax_score = 0
acp_scores = []
for line in entries:
levels = []
for c in line:
if c in open_chars:
levels.append(c)
elif c == char_map[levels[-1]]:
levels.pop()
else:
syntax_score += points[c]
break
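    # this `else` belongs to the `for` loop: it runs only when no corrupt character
    # triggered the `break`, i.e. the line is incomplete rather than corrupted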
else:
score = 0
for l in reversed(levels):
score = score * 5 + 1 + open_chars.index(l)
acp_scores.append(score)
acp_score = sorted(acp_scores)[len(acp_scores)//2]
print('Answer 1:', syntax_score)
print('Answer 2:', acp_score)
| 21.578947 | 57 | 0.536585 | 111 | 820 | 3.810811 | 0.432432 | 0.085106 | 0.066194 | 0.089835 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035775 | 0.284146 | 820 | 37 | 58 | 22.162162 | 0.684838 | 0 | 0 | 0.074074 | 0 | 0 | 0.037897 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037037 | 0 | 0.037037 | 0.074074 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44fc2f7950308d857850a4bf2b09c5e329cf743e | 1,207 | py | Python | bin/compute_stats.py | krayzpipes/ACE-1 | 138bf2aecad949f0b72b66519c32893df033de39 | [
"Apache-2.0"
] | 28 | 2018-08-08T11:57:31.000Z | 2022-01-12T23:06:18.000Z | bin/compute_stats.py | krayzpipes/ACE-1 | 138bf2aecad949f0b72b66519c32893df033de39 | [
"Apache-2.0"
] | 108 | 2018-08-08T12:35:06.000Z | 2019-07-19T22:57:19.000Z | bin/compute_stats.py | krayzpipes/ACE-1 | 138bf2aecad949f0b72b66519c32893df033de39 | [
"Apache-2.0"
] | 16 | 2018-08-03T18:48:00.000Z | 2021-11-09T00:35:35.000Z | #!/usr/bin/env python3
import sys
import argparse
import os
import os.path
import re
regex = re.compile(r'^(\d+):(\d\d):(\d\d)\.(\d+)$')
alt_regex = re.compile(r'^(\d+):(\d\d):(\d\d)$')
count = 0
total = 0.0
_max = 0.0
_min = 100000
minimum_considered = 1.0
excluded = 0
for line in sys.stdin:
if count == 1000000:
break
m = regex.match(line.strip())
if m:
hour, minute, second, frac = m.groups()
else:
m = alt_regex.match(line.strip())
if m:
hour, minute, second = m.groups()
frac = "000000"
else:
            sys.stderr.write("ERROR: line {} from stdin failed regex\n".format(line.strip()))
continue
total_seconds = float('0.{}'.format(frac)) + float(second) + (float(minute) * 60.0) + (float(hour) * 60.0 * 60.0)
if total_seconds < minimum_considered:
excluded += 1
continue
total += total_seconds
count += 1
if total_seconds > _max:
_max = total_seconds
if total_seconds < _min:
_min = total_seconds
if count:
print("total {} averge {:.2f} max {:.2f} min {:.2f} (excluded {})".format(count, total / float(count), _max, _min, excluded))
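# Hedged usage sketch (file name hypothetical): feed one "H:MM:SS[.ffffff]" duration
# per line on stdin:
#   ./compute_stats.py < durations.txt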
| 24.632653 | 129 | 0.574979 | 169 | 1,207 | 3.988166 | 0.337278 | 0.026706 | 0.031157 | 0.029674 | 0.173591 | 0.172107 | 0.172107 | 0.172107 | 0.172107 | 0 | 0 | 0.048098 | 0.259321 | 1,207 | 48 | 130 | 25.145833 | 0.705817 | 0.017399 | 0 | 0.153846 | 0 | 0 | 0.12827 | 0.04135 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.128205 | 0 | 0.128205 | 0.025641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44fcd428acfed83c3b1b8317f6fe3c4d1cb8bb2f | 1,752 | py | Python | endtoend_tests/forseti/notifier/inventory_summary_export_test.py | VGerris/forseti-security | 59dc7607b14709e7da4db2751889b4fc757816b6 | [
"Apache-2.0"
] | 921 | 2017-03-09T01:01:24.000Z | 2019-04-16T11:38:25.000Z | endtoend_tests/forseti/notifier/inventory_summary_export_test.py | VGerris/forseti-security | 59dc7607b14709e7da4db2751889b4fc757816b6 | [
"Apache-2.0"
] | 1,996 | 2017-03-03T22:07:50.000Z | 2019-04-17T00:02:28.000Z | endtoend_tests/forseti/notifier/inventory_summary_export_test.py | VGerris/forseti-security | 59dc7607b14709e7da4db2751889b4fc757816b6 | [
"Apache-2.0"
] | 241 | 2017-03-09T01:00:04.000Z | 2019-04-15T18:53:35.000Z | # Copyright 2020 The Forseti Security Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Notifier inventory summary export tests"""
import pytest
import re
import subprocess
class TestNotifierInventorySummaryExport:
"""Tests for the notifier inventory summary export feature."""
@pytest.mark.e2e
@pytest.mark.notifier
@pytest.mark.server
def test_inventory_summary_export_gcs(
self,
forseti_notifier_readonly: subprocess.CompletedProcess,
forseti_server_bucket_name: str):
"""Test that the inventory summary is exported to GCS.
Args:
forseti_notifier_readonly (subprocess.CompletedProcess): Notifier
run process result.
forseti_server_bucket_name (str): Forseti server bucket name.
"""
match = re.search(
fr'gs://{forseti_server_bucket_name}/inventory_summary/(.*).csv',
str(forseti_notifier_readonly.stdout))
assert match
gcs_path = match.group(0)
cmd = ['sudo', 'gsutil', 'ls', gcs_path]
result = subprocess.run(cmd, stderr=subprocess.PIPE,
stdout=subprocess.PIPE)
assert result.returncode == 0
| 35.04 | 77 | 0.687215 | 214 | 1,752 | 5.523364 | 0.528037 | 0.050761 | 0.064298 | 0.077834 | 0.126904 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008197 | 0.234018 | 1,752 | 49 | 78 | 35.755102 | 0.872578 | 0.515411 | 0 | 0 | 0 | 0 | 0.092308 | 0.076923 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.05 | false | 0 | 0.15 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44fcf97e01a85045813ffdd909c930123aef9a7d | 2,084 | py | Python | telemetry/roles/slurm_telemetry/files/monster/process.py | Lakshmi-Patneedi/omnia | 40a5dd9496af16ab6fd18f2d807a4d8dea11bbf3 | [
"Apache-2.0"
] | 1 | 2021-10-13T21:48:15.000Z | 2021-10-13T21:48:15.000Z | telemetry/roles/slurm_telemetry/files/monster/process.py | Lakshmi-Patneedi/omnia | 40a5dd9496af16ab6fd18f2d807a4d8dea11bbf3 | [
"Apache-2.0"
] | null | null | null | telemetry/roles/slurm_telemetry/files/monster/process.py | Lakshmi-Patneedi/omnia | 40a5dd9496af16ab6fd18f2d807a4d8dea11bbf3 | [
"Apache-2.0"
] | null | null | null | """
MIT License
Copyright (c) 2022 Texas Tech University
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
"""
This file is part of MonSter.
Author:
Jie Li, jie.li@ttu.edu
"""
import logger
import time
import multiprocessing
log = logger.get_logger(__name__)
def partition(arr: list, cores: int):
    """Partition a list
    Partition urls/nodes into several groups based on the number of cores
    Args:
        arr (list): A list to be partitioned
        cores (int): Number of cores of the compute node running MonSter
Returns:
list: partitioned list
"""
groups = []
try:
arr_len = len(arr)
arr_per_core = arr_len // cores
arr_surplus = arr_len % cores
increment = 1
for i in range(cores):
if(arr_surplus != 0 and i == (cores-1)):
groups.append(arr[i * arr_per_core:])
else:
groups.append(arr[i * arr_per_core : increment * arr_per_core])
increment += 1
except Exception as err:
log.error(f"Cannot Partition the list: {err}")
return groups
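# Worked example (values illustrative): the surplus goes to the last group.
#   partition(list(range(10)), 4) -> [[0, 1], [2, 3], [4, 5], [6, 7, 8, 9]]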
| 33.079365 | 79 | 0.702015 | 304 | 2,084 | 4.753289 | 0.5 | 0.0609 | 0.027682 | 0.022145 | 0.035986 | 0.035986 | 0.035986 | 0 | 0 | 0 | 0 | 0.005013 | 0.234165 | 2,084 | 62 | 80 | 33.612903 | 0.900376 | 0.629079 | 0 | 0 | 0 | 0 | 0.048558 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.15 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44fef0a3b086e649cd8bbdf5e4629c04e482ccb9 | 2,594 | py | Python | data/main.py | danielxiangzl/hotstuff | e701f5556102aae99dd1e3a654c15b2cda15579f | [
"Apache-2.0"
] | null | null | null | data/main.py | danielxiangzl/hotstuff | e701f5556102aae99dd1e3a654c15b2cda15579f | [
"Apache-2.0"
] | null | null | null | data/main.py | danielxiangzl/hotstuff | e701f5556102aae99dd1e3a654c15b2cda15579f | [
"Apache-2.0"
] | 1 | 2021-08-08T05:08:49.000Z | 2021-08-08T05:08:49.000Z | from glob import glob
from os.path import join
import os
from parse import LogAggregator
from plot import Ploter
if __name__ == '__main__':
max_latencies = [2_000, 5_000] # For TPS graphs.
# Parse the results.
for system in ['3-chain', '2-chain', 'ditto-async', 'ditto-sync', 'vaba']:
[os.remove(x) for x in glob(f'{system}.*.txt')]
files = glob(join(system, 'results', '*.txt'))
LogAggregator(system, files, max_latencies).print()
LogAggregator(system, files, max_latencies, end_to_end=False).print()
# Plot 'Happy path' graph.
ploter = Ploter(width=12.8)
for system in ['3-chain', '2-chain', 'ditto-sync', 'vaba']:
ploter.plot_latency(system, [10, 20, 50], [0], 512)
ploter.finalize('happy-path', legend_cols=4)
# Plot 'Happy path TPS' graph.
ploter = Ploter()
for system in ['3-chain', '2-chain', 'ditto-sync', 'vaba']:
ploter.plot_tps(system, [0], max_latencies, 512)
ploter.finalize('happy-path-tps', legend_cols=2)
# Plot 'Happy path commit latency' graph.
ploter = Ploter()
for system in ['3-chain', '2-chain']:
ploter.plot_commit_lantecy(
system, [0], [20000], 512, graph_type='commit_latency'
)
ploter.finalize('happy-path-commit', legend_cols=2, top_lim=1_500)
# Plot 'Leader under DoS' graph.
ploter = Ploter()
for i, system in enumerate(['3-chain', '2-chain']):
name = Ploter.legend_name(system)
ploter.plot_free(
[i*500],
[0],
[f'{name}, {x} nodes' for x in [10, 20, 50]]
)
for system in ['ditto-async', 'vaba']:
ploter.plot_latency(system, [10, 20, 50], [0], 512)
ploter.finalize('leader-under-dos', legend_cols=2)
# Plot 'Dead nodes' graph.
ploter = Ploter(width=12.8)
for system in ['3-chain', '2-chain', 'ditto-sync', 'vaba']:
ploter.plot_latency(system, [20], [0, 1, 3], 512)
ploter.finalize('dead-nodes', legend_cols=4)
# Plot 'Dead nodes and DoS' graph.
ploter = Ploter()
for i, system in enumerate(['3-chain', '2-chain']):
name = Ploter.legend_name(system)
ploter.plot_free(
[i*500],
[0],
[
f'{name}, 20 nodes',
f'{name}, 20 nodes (1 faulty)',
f'{name}, 20 nodes (3 faulty)'
]
)
for system in ['ditto-async', 'vaba']:
ploter.plot_latency(system, [20], [0, 1, 3], 512)
ploter.finalize('dead-nodes-and-dos', legend_cols=2)
| 33.688312 | 78 | 0.581727 | 353 | 2,594 | 4.169972 | 0.215297 | 0.048913 | 0.05231 | 0.057065 | 0.555707 | 0.483016 | 0.483016 | 0.483016 | 0.463995 | 0.463995 | 0 | 0.056535 | 0.256746 | 2,594 | 76 | 79 | 34.131579 | 0.70695 | 0.083655 | 0 | 0.438596 | 0 | 0 | 0.175253 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.105263 | 0.035088 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7803a8c8d00022860aa2c1e297b0776b9bac5e6f | 4,437 | py | Python | LGTV/__init__.py | Maccraft123/LGWebOSRemote | 52c481c83e78d06457b58cc68a87fefbfb80c7ef | [
"MIT"
] | null | null | null | LGTV/__init__.py | Maccraft123/LGWebOSRemote | 52c481c83e78d06457b58cc68a87fefbfb80c7ef | [
"MIT"
] | null | null | null | LGTV/__init__.py | Maccraft123/LGWebOSRemote | 52c481c83e78d06457b58cc68a87fefbfb80c7ef | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import print_function
from inspect import getfullargspec
import json
import os
import sys
from time import sleep
import logging
from .scan import LGTVScan
from .remote import LGTVRemote
from .auth import LGTVAuth
search_config = [
"/etc/lgtv/config.json",
"~/.lgtv/config.json",
"/opt/venvs/lgtv/config/config.json"
]
def usage(error=None):
if error:
print ("Error: " + error)
print ("LGTV Controller")
print ("Author: Karl Lattimer <karl@qdh.org.uk>")
print ("Usage: lgtv <command> [parameter]\n")
print ("Available Commands:")
print (" -i interactive mode")
print (" scan")
print (" auth <host> <tv_name>")
commands = LGTVRemote.getCommands()
for c in commands:
        args = getfullargspec(LGTVRemote.__dict__[c])
if len(args.args) > 1:
a = ' <' + '> <'.join(args.args[1:-1]) + '>'
print (' <tv_name> ' + c + a)
else:
print (' <tv_name> ' + c)
def parseargs(command, argv):
    args = getfullargspec(LGTVRemote.__dict__[command])
args = args.args[1:-1]
#if len(args) != len(argv):
# raise Exception("Argument lengths do not match")
output = {}
for (i, a) in enumerate(args):
        if argv[i].lower() == "true":
            argv[i] = True
        elif argv[i].lower() == "false":
            argv[i] = False
        else:
            try:
                f = int(argv[i])
                argv[i] = f
            except ValueError:
                try:
                    f = float(argv[i])
                    argv[i] = f
                except ValueError:
                    pass
output[a] = argv[i]
return output
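# Coercion sketch (command and signature hypothetical): for a remote method defined as
# def setVolume(self, level, callback), args.args[1:-1] == ['level'], so
#   parseargs('setVolume', ['11'])   -> {'level': 11}
#   parseargs('setVolume', ['true']) -> {'level': True}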
def find_config():
w = None
for f in search_config:
f = os.path.expanduser(f)
f = os.path.abspath(f)
d = os.path.dirname(f)
if os.path.exists(d):
if os.access(d, os.W_OK):
w = f
if os.path.exists(f):
if os.access(f, os.W_OK):
return f
elif os.access(os.path.dirname(d), os.W_OK):
            os.makedirs(d, exist_ok=True)
w = f
if w is None:
print ("Cannot find suitable config path to write, create one in %s" % ' or '.join(search_config))
raise Exception("No config file")
return w
def main():
if len(sys.argv) < 2:
usage("Too few arguments")
sys.exit(1)
logging.basicConfig(level=logging.DEBUG)
command = None
filename = None
config = {}
filename = find_config()
if filename is not None:
try:
with open(filename) as f:
config = json.loads(f.read())
except:
pass
if sys.argv[1] == "scan":
results = LGTVScan()
if len(results) > 0:
print (json.dumps({
"result": "ok",
"count": len(results),
"list": results
}))
sys.exit(0)
else:
print (json.dumps({
"result": "failed",
"count": len(results)
}))
sys.exit(1)
if sys.argv[1] == "-i":
pass
elif sys.argv[1] == "auth":
if len(sys.argv) < 3:
usage("Hostname or IP is required for auth")
sys.exit(1)
if len(sys.argv) < 4:
usage("TV name is required for auth")
sys.exit(1)
name = sys.argv[3]
host = sys.argv[2]
ws = LGTVAuth(name, host)
ws.connect()
ws.run_forever()
sleep(1)
config[name] = ws.serialise()
if filename is not None:
with open(filename, 'w') as f:
f.write(json.dumps(config))
print ("Wrote config file: " + filename)
sys.exit(0)
    elif len(sys.argv) >= 3 and sys.argv[2] == "on":
name = sys.argv[1]
ws = LGTVRemote(name, **config[name])
ws.on()
sleep(1)
sys.exit(0)
else:
try:
args = parseargs(sys.argv[2], sys.argv[3:])
name = sys.argv[1]
command = sys.argv[2]
except Exception as e:
usage(str(e))
sys.exit(1)
try:
ws = LGTVRemote(name, **config[name])
ws.connect()
if command is not None:
ws.execute(command, args)
ws.run_forever()
except KeyboardInterrupt:
ws.close()
if __name__ == '__main__':
main()
| 25.5 | 106 | 0.497634 | 545 | 4,437 | 3.988991 | 0.26789 | 0.048298 | 0.022079 | 0.016559 | 0.095676 | 0.064397 | 0.022999 | 0 | 0 | 0 | 0 | 0.011412 | 0.368041 | 4,437 | 173 | 107 | 25.647399 | 0.763909 | 0.022312 | 0 | 0.285714 | 0 | 0 | 0.121827 | 0.01269 | 0.006803 | 0 | 0 | 0 | 0 | 1 | 0.027211 | false | 0.020408 | 0.068027 | 0 | 0.115646 | 0.102041 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7804b1932916d494a0caef5d17c396d3ce3feb1f | 5,809 | py | Python | data/master_cycle_gan_dataset.py | RegentLee/master_research | ee8e45abc890c7103c1c9917954c5958b48782f6 | [
"BSD-3-Clause"
] | null | null | null | data/master_cycle_gan_dataset.py | RegentLee/master_research | ee8e45abc890c7103c1c9917954c5958b48782f6 | [
"BSD-3-Clause"
] | null | null | null | data/master_cycle_gan_dataset.py | RegentLee/master_research | ee8e45abc890c7103c1c9917954c5958b48782f6 | [
"BSD-3-Clause"
] | null | null | null | """Dataset class template
This module provides a template for users to implement custom datasets.
You can specify '--dataset_mode template' to use this dataset.
The class name should be consistent with both the filename and its dataset_mode option.
The filename should be <dataset_mode>_dataset.py
The class name should be <Dataset_mode>Dataset.py
You need to implement the following functions:
-- <modify_commandline_options>: Add dataset-specific options and rewrite default values for existing options.
-- <__init__>: Initialize this dataset class.
-- <__getitem__>: Return a data point and its metadata information.
-- <__len__>: Return the number of images.
"""
from data.base_dataset import BaseDataset, get_transform
# from data.image_folder import make_dataset
# from PIL import Image
import torch
import torchvision.transforms as transforms
import random
from data.MyFunction import my_data_creator
from data.MyFunction import my_transforms
from util import my_util
class MasterCycleGANDataset(BaseDataset):
"""A template dataset class for you to implement custom datasets."""
@staticmethod
def modify_commandline_options(parser, is_train):
"""Add new dataset-specific options, and rewrite default values for existing options.
Parameters:
parser -- original option parser
is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
Returns:
the modified parser.
"""
# parser.add_argument('--new_dataset_option', type=float, default=1.0, help='new dataset option')
# parser.set_defaults(max_dataset_size=10, new_dataset_option=2.0) # specify dataset-specific default values
parser.add_argument('--matrix', type=str, default='Cb', help='input matrix')
parser.add_argument('--LOOid', type=int, default=-1, help='Leave-one-out cross-validation id')
parser.add_argument('--diff', type=bool, default=False)
parser.set_defaults(input_nc=1, output_nc=1) # specify dataset-specific default values
return parser
def __init__(self, opt):
"""Initialize this dataset class.
Parameters:
opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
A few things can be done here.
        - save the options (already done in BaseDataset)
- get image paths and meta information of the dataset.
- define the image transformation.
"""
# save the option and dataset root
BaseDataset.__init__(self, opt)
# get the image paths of your dataset;
# self.image_paths = [] # You can call sorted(make_dataset(self.root, opt.max_dataset_size)) to get all the image paths under the directory self.root
# define the default transform function. You can use <base_dataset.get_transform>; You can also define your custom transform function
# self.transform = get_transform(opt)
data = my_data_creator.MyDataCreator(opt)
matrix_size = [len(i) for i in data.data_A]
input_n = max(matrix_size)
for i in range(4):
if input_n%4 == 0:
break
input_n += 1
transform = transforms.Compose([
my_transforms.preprocess(input_n),
transforms.ToTensor()
])
data_A = data.data_A
if opt.diff:
data_B = [data.data_B[i//3] - data_A[i] for i in range(len(data_A))]
else:
data_B = data.data_B
if opt.LOOid < 0:
val_A = [data_A[i] for i in range(3)]
if opt.diff:
val_B = [data_B[i] for i in range(3)]
else:
val_B = [data_B[0]]
else:
val_A = [data_A[i] for i in range(opt.LOOid*3, opt.LOOid*3 + 3)]
data_A = data_A[:opt.LOOid*3] + data_A[opt.LOOid*3 + 3:]
if opt.diff:
val_B = data_B[opt.LOOid*3:opt.LOOid*3 + 3]
data_B = data_B[:opt.LOOid*3] + data_B[opt.LOOid*3 + 3:]
else:
val_B = [data_B[opt.LOOid]]
data_B = data_B[:opt.LOOid] + data_B[opt.LOOid + 1:]
if not my_util.val:
self.data_A = [transform(i) for i in data_A]
self.data_B = [transform(i) for i in data_B]
else:
self.data_A = [transform(i) for i in val_A]
self.data_B = [transform(i) for i in val_B]
def __getitem__(self, index):
"""Return a data point and its metadata information.
Parameters:
index -- a random integer for data indexing
Returns:
a dictionary of data with their names. It usually contains the data itself and its metadata information.
Step 1: get a random image path: e.g., path = self.image_paths[index]
Step 2: load your data from the disk: e.g., image = Image.open(path).convert('RGB').
        Step 3: convert your data to a PyTorch tensor. You can use helper functions such as self.transform. e.g., data = self.transform(image)
Step 4: return a data point as a dictionary.
"""
path = 'temp' # needs to be a string
# data_A = torch.Tensor(self.data.data_A) # needs to be a tensor
# data_B = torch.Tensor(self.data.data_B) # needs to be a tensor
A = self.data_A[index % len(self.data_A)]
index_B = random.randint(0, len(self.data_B) - 1)
B = self.data_B[index_B]
return {'A': A, 'B': B, 'A_paths': path, 'B_paths': path}
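    # Note on pairing: __getitem__ deliberately pairs A[index % len(A)] with a randomly
    # drawn B, keeping the two domains unaligned as CycleGAN training expects.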
def __len__(self):
"""Return the total number of images."""
return max(len(self.data_A), len(self.data_B))
| 41.791367 | 158 | 0.632123 | 828 | 5,809 | 4.27657 | 0.229469 | 0.031065 | 0.016944 | 0.017792 | 0.260378 | 0.171985 | 0.138379 | 0.123129 | 0.062129 | 0.035583 | 0 | 0.008789 | 0.275263 | 5,809 | 138 | 159 | 42.094203 | 0.832304 | 0.484076 | 0 | 0.129032 | 0 | 0 | 0.031769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.112903 | 0 | 0.241935 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
780aa1d44fdf013fa8158b9597ec531da2bacc1c | 882 | py | Python | zero_to_one_hundred/processors/refresh_map_processor.py | fossabot/0to100 | 37faa1340b2ec8b87e5d4c268c8caf521ea164cb | [
"Apache-2.0"
] | null | null | null | zero_to_one_hundred/processors/refresh_map_processor.py | fossabot/0to100 | 37faa1340b2ec8b87e5d4c268c8caf521ea164cb | [
"Apache-2.0"
] | null | null | null | zero_to_one_hundred/processors/refresh_map_processor.py | fossabot/0to100 | 37faa1340b2ec8b87e5d4c268c8caf521ea164cb | [
"Apache-2.0"
] | null | null | null | """RefreshMapProcessor:
refresh sections in map
"""
# pylint: disable=C0116,R0903,E0401,W0703,W1201,redefined-outer-name,missing-function-docstring,E0401,C0114,W0511,W1203,C0200,C0103,W1203
from configs.config import ConfigMap
from models.map import Map
class RefreshMapProcessor:
"""RefreshMapProcessor"""
def __init__(self, config_map: ConfigMap, persist_fs):
"""init"""
self.config_map = config_map
self.persist_fs = persist_fs
def process(self):
"""Scan the repo and for each new_section add it to the map, save the map file."""
sections = Map.build_from_dirs(
self.config_map,
self.persist_fs,
self.persist_fs.list_dirs(self.config_map.get_repo_path),
)
map_: Map = Map(self.config_map, self.persist_fs, sections)
map_.write(self.config_map.get_repo_sorted)
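# Hedged usage sketch (the concrete ConfigMap and persist_fs instances are assumed to
# be built by the application's wiring code):
#
#   RefreshMapProcessor(config_map, persist_fs).process()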
| 32.666667 | 137 | 0.689342 | 117 | 882 | 4.965812 | 0.478632 | 0.108434 | 0.134251 | 0.10327 | 0.196213 | 0.089501 | 0 | 0 | 0 | 0 | 0 | 0.06867 | 0.207483 | 882 | 26 | 138 | 33.923077 | 0.762518 | 0.323129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
780b6383b9441cbbb52f13931df26519c2613bf3 | 14,047 | py | Python | sharkdata_core/dataset_utils.py | sharkdata/sharkdata | 67793fd1771c9c2e599e62d57fcef432be5a8340 | [
"MIT"
] | 2 | 2016-07-20T07:09:51.000Z | 2016-08-12T12:20:20.000Z | sharkdata_core/dataset_utils.py | sharkdata/sharkdata | 67793fd1771c9c2e599e62d57fcef432be5a8340 | [
"MIT"
] | 1 | 2016-01-21T12:18:17.000Z | 2016-01-21T12:20:50.000Z | sharkdata_core/dataset_utils.py | sharkdata/sharkdata | 67793fd1771c9c2e599e62d57fcef432be5a8340 | [
"MIT"
] | 2 | 2016-07-20T07:13:35.000Z | 2016-08-12T11:40:15.000Z | #!/usr/bin/env python
# -*- coding:utf-8 -*-
#
# Copyright (c) 2013-present SMHI, Swedish Meteorological and Hydrological Institute
# License: MIT License (see LICENSE.txt or http://opensource.org/licenses/mit).
import pathlib
from django.conf import settings
import app_datasets.models as datasets_models
import app_ctdprofiles.models as ctdprofiles_models
import sharkdata_core
@sharkdata_core.singleton
class DatasetUtils(object):
""" Singleton class. """
def __init__(self):
""" """
self._data_header = None
self._translations = None
self._data_in_datasets = settings.SHARKDATA_DATA_IN_DATASETS
self._data_datasets = pathlib.Path(settings.SHARKDATA_DATA, "datasets")
self._metadata_update_thread = None
self._generate_archives_thread = None
def translateDataHeaders(
self, data_header, resource_name="translate_headers", language="darwin_core"
):
# language = 'english'):
""" """
return sharkdata_core.ResourcesUtils().translateHeaders(
data_header, resource_name, language
)
def getDatasetListHeaders(self):
""" """
if not self._data_header:
self._data_header = [
"dataset_name",
"datatype",
"version",
"dataset_file_name",
]
#
return self._data_header
def translateDatasetListHeaders(self, data_header, language=None):
""" """
# if not language:
# return data_header
#
translated = []
#
if not self._translations:
self._translations = {
"dataset_name": "Dataset name",
"datatype": "Datatype",
"version": "Version",
"dataset_file_name": "File name",
}
#
for item in data_header:
if item in self._translations:
translated.append(self._translations[item])
else:
translated.append(item)
#
return translated
def getDataAsText(self, dataset_name):
""" Data is not stored in database, get from zip file."""
db_dataset = datasets_models.Datasets.objects.get(dataset_name=dataset_name)
#
# Extract data part.
data_content = ""
zipreader = sharkdata_core.SharkArchiveFileReader(
db_dataset.dataset_file_name, self._data_in_datasets
)
try:
zipreader.open()
data_content = zipreader.getDataAsText().decode(
"cp1252"
) # Default encoding in archive data.
finally:
zipreader.close()
# print(data_content)
#
return data_content
def getDataColumnsAsText(self, dataset_name):
""" Data is not stored in database, get from zip file."""
db_dataset = datasets_models.Datasets.objects.get(dataset_name=dataset_name)
#
# Extract data part.
data_content = ""
zipreader = sharkdata_core.SharkArchiveFileReader(
db_dataset.dataset_file_name, self._data_in_datasets
)
try:
zipreader.open()
data_content = zipreader.getDataColumnsAsText().decode(
"cp1252"
) # Default encoding in archive data.
finally:
zipreader.close()
# print(data_content)
#
return data_content
def getMetadataAsText(self, dataset_name):
""" """
db_dataset = datasets_models.Datasets.objects.get(dataset_name=dataset_name)
# Fix line breaks for windows. Remove rows with no key-value-pairs.
metadata_list = []
concat_metadata = (
db_dataset.content_metadata + "\n" + db_dataset.content_metadata_auto
)
for row in concat_metadata.split("\n"):
if ":" in row:
parts = row.split(":", 1) # Split on first occurence.
key = parts[0].strip()
value = parts[1].strip()
metadata_list.append(key + ": " + value)
#
return "\r\n".join(metadata_list)
def writeLatestDatasetsInfoToDb(self, logfile_name=None, user=""):
"""Updates the database from datasets stored in the FTP area.
        If multiple versions of a dataset are in the FTP area, only the latest
will be loaded.
"""
# Check dataset in 'data_in/datasets'. Create a list of dataset names.
dataset_names = []
for dataset_path in self._data_in_datasets.glob("SHARK_*.zip"):
print(dataset_path.name)
parts = dataset_path.name.split("_version")
if len(parts) >= 1:
dataset_names.append(parts[0])
# Remove all datasets from 'data/datasets' not included in 'dataset_names'.
for dataset_path in self._data_datasets.glob("SHARK_*.zip"):
print(dataset_path.name)
parts = dataset_path.name.split("_version")
if len(parts) >= 1:
if parts[0] not in dataset_names:
# Delete the file.
dataset_path.unlink() # Removes file.
# Remove from database.
datasets_models.Datasets.objects.get(
dataset_name=dataset_path.name
).delete()
error_counter = 0
# Remove all db rows.
datasets_models.Datasets.objects.all().delete()
# CTD profiles.
ctdprofiles_models.CtdProfiles.objects.all().delete()
# Get latest datasets from FTP archive.
archive = sharkdata_core.SharkArchive(self._data_in_datasets)
for file_name in sorted(archive.getLatestSharkArchiveFilenames()):
if logfile_name:
sharkdata_core.SharkdataAdminUtils().log_write(
logfile_name, log_row="Loading file: " + file_name + "..."
)
try:
error_string = self.writeFileInfoToDb(file_name, logfile_name, user)
if error_string:
error_counter += 1
sharkdata_core.SharkdataAdminUtils().log_write(
logfile_name,
log_row="ERROR: Failed to load: "
+ file_name
+ ". Error: "
+ error_string,
)
except Exception as e:
error_counter += 1
sharkdata_core.SharkdataAdminUtils().log_write(
logfile_name,
log_row="ERROR: Failed to load: "
+ file_name
+ ". Error: "
+ str(e),
)
#
return error_counter
def writeFileInfoToDb(self, file_name, logfile_name=None, user=""):
""" Extracts info from the dataset filename and from the zip file content and adds to database. """
try:
#
ftp_file_path = pathlib.Path(self._data_in_datasets, file_name)
# Extract info from file name.
dataset_name, datatype, version = self.splitFilename(file_name)
# Extract metadata parts.
metadata = ""
metadata_auto = ""
columndata_available = False
#
zipreader = sharkdata_core.SharkArchiveFileReader(
file_name, self._data_in_datasets
)
try:
zipreader.open()
#
try:
metadata = zipreader.getMetadataAsText()
encoding = "cp1252"
metadata = str(metadata, encoding, "strict")
except Exception as e:
sharkdata_core.SharkdataAdminUtils().log_write(
logfile_name, log_row="WARNING: " + str(e)
)
#
try:
metadata_auto = zipreader.getMetadataAutoAsText()
encoding = "cp1252"
metadata_auto = str(metadata_auto, encoding, "strict")
except Exception as e:
sharkdata_core.SharkdataAdminUtils().log_write(
logfile_name, log_row="WARNING: " + str(e)
)
#
columndata_available = zipreader.isDataColumnsAvailable()
# CTD profiles.
ctd_profiles_table = None
# if datatype == 'CTDprofile':
if datatype == "Profile":
ctd_profiles_table = zipreader.getDataAsText()
finally:
zipreader.close()
# Remove from database.
try:
db_dataset = datasets_models.Datasets.objects.get(
dataset_name=dataset_name
)
db_dataset.delete()
except datasets_models.Datasets.DoesNotExist:
pass # Not found.
# Save to db.
dataset = datasets_models.Datasets(
dataset_name=dataset_name,
datatype=datatype,
version=version,
dataset_file_name=file_name,
ftp_file_path=ftp_file_path,
content_data="NOT USED",
content_metadata=metadata,
content_metadata_auto=metadata_auto,
#
column_data_available=columndata_available,
)
dataset.save()
if ctd_profiles_table:
data_header = []
ctd_profiles_table = ctd_profiles_table.decode("cp1252")
for index, row in enumerate(ctd_profiles_table.split("\n")):
rowitems = row.strip().split("\t")
if index == 0:
data_header = rowitems
else:
if len(rowitems) > 1:
row_dict = dict(zip(data_header, rowitems))
water_depth_m = 0.0
try:
water_depth_m = float(
row_dict.get("water_depth_m", -99)
)
except:
pass
db_profiles = ctdprofiles_models.CtdProfiles(
visit_year=row_dict.get("visit_year", ""), # '2002',
platform_code=row_dict.get(
"platform_code", ""
), # 'Svea',
expedition_id=row_dict.get(
"expedition_id", ""
), # 'aa-bb-11',
visit_id=row_dict.get("visit_id", ""), # '123456',
station_name=row_dict.get(
"station_name", ""
), # 'Station1A',
latitude=float(
row_dict.get("sample_latitude_dd", -99)
), # 70.00,
longitude=float(
row_dict.get("sample_longitude_dd", -99)
), # 10.00,
water_depth_m=water_depth_m, # '80.0',
sampler_type_code=row_dict.get(
"sampler_type_code", ""
), # 'CTD',
sample_date=row_dict.get(
"visit_date", ""
), # '2000-01-01',
sample_project_code=row_dict.get(
"sample_project_code", ""
), # 'Proj',
# sample_project_code = row_dict.get('sample_project_name_sv', ''), # 'Proj',
sample_orderer_code=row_dict.get(
"sample_orderer_code", ""
), # 'Orderer',
# sample_orderer_code = row_dict.get('sample_orderer_name_sv', ''), # 'Orderer',
sampling_laboratory_code=row_dict.get(
"sampling_laboratory_code", ""
), # 'Slabo',
# sampling_laboratory_code = row_dict.get('sampling_laboratory_name_sv', ''), # 'Slabo',
revision_date=row_dict.get(
"revision_date", ""
), # '2010-10-10',
ctd_profile_name=row_dict.get(
"profile_file_name_db", ""
), # 'ctd.profile',
dataset_file_name=file_name,
ftp_file_path=ftp_file_path,
)
db_profiles.save()
#
return None # No error message.
#
except Exception as e:
return str(e)
def splitFilename(self, file_name):
""" """
filename = pathlib.Path(file_name).stem
parts = filename.split("version")
name = parts[0].strip("_").strip()
        version = parts[1].strip("_").strip() if len(parts) > 1 else ""
#
parts = filename.split("_")
datatype = parts[1].strip("_").strip()
#
return name, datatype, version
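    # Worked example (illustrative file name following the SHARK_*_version_* pattern):
    #   splitFilename('SHARK_Zoobenthos_2019_BAL_version_2020-01-01.zip')
    #   -> ('SHARK_Zoobenthos_2019_BAL', 'Zoobenthos', '2020-01-01')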
| 40.134286 | 152 | 0.484587 | 1,214 | 14,047 | 5.339374 | 0.205931 | 0.027152 | 0.027769 | 0.017279 | 0.337396 | 0.326134 | 0.326134 | 0.326134 | 0.267201 | 0.251928 | 0 | 0.01111 | 0.4297 | 14,047 | 349 | 153 | 40.249284 | 0.798028 | 0.130704 | 0 | 0.316406 | 0 | 0 | 0.050577 | 0.001993 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039063 | false | 0.007813 | 0.019531 | 0 | 0.101563 | 0.007813 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7810d779e5150a8da080692129dd8ba323975421 | 70,466 | py | Python | RayTracing/gui/gui.py | TEM-Gemini-Centre/RayTracing | fa4b6057c9a7307f75e52f0bd7a4a13751f832ec | [
"MIT"
] | null | null | null | RayTracing/gui/gui.py | TEM-Gemini-Centre/RayTracing | fa4b6057c9a7307f75e52f0bd7a4a13751f832ec | [
"MIT"
] | null | null | null | RayTracing/gui/gui.py | TEM-Gemini-Centre/RayTracing | fa4b6057c9a7307f75e52f0bd7a4a13751f832ec | [
"MIT"
] | null | null | null | from PyQt5 import QtWidgets, QtGui, QtCore, Qt, uic
from PyQt5.QtCore import pyqtSignal, pyqtSlot
from RayTracing.gui.mplwidget import *
from RayTracing.RayTracing import *
from tabulate import tabulate
from pathlib import Path
from matplotlib.lines import lineStyles
from matplotlib.colors import to_hex, to_rgb
import sys
import time
import argparse
class Error(Exception):
pass
class OperatorModelError(Error):
pass
class SourceModelError(Error):
pass
class ScreenModelError(Error):
pass
class OpticalOperatorModel(QtCore.QObject):
"""
Model for controlling an OpticalOperator
The model should ensure that proper signals are sent whenever the data of the OpticalOperator has been changed.
The model emits the following signals:
:param valueChanged: Signal ([], [float]) emitted whenever the value of the OpticalOperator has changed.
:param zChanged: Signal ([], [float]) emitted whenever the z-value of the OpticalOperator has changed.
:param offsetChanged: Signal ([], [float]) emitted whenever the offset-value of the OpticalOperator has changed.
    :param labelChanged: Signal([], [str]) emitted whenever the label of the OpticalOperator has changed.
    :param operatorChanged: Signal emitted whenever any change has been made to the OpticalOperator, including the above.
"""
valueChanged = pyqtSignal([], [float], name='valueChanged')
zChanged = pyqtSignal([], [float], name='zChanged')
offsetChanged = pyqtSignal([], [float], name='offsetChanged')
labelChanged = pyqtSignal([], [str], name='labelChanged')
operatorChanged = pyqtSignal(name='operatorChanged')
    styleChanged = pyqtSignal([], [dict], name='styleChanged')
@property
def z(self):
return self._operator.z
@z.setter
def z(self, value):
if isinstance(value, float):
self._operator.z = value
self.zChanged.emit()
self.zChanged[float].emit(value)
self.operatorChanged.emit()
else:
raise OperatorModelError(
f'Cannot set Z-value of {self.__class__.__name__} of {self._operator!r}.') from TypeError(
f'Value {value!r} must be `float`')
@property
def offset(self):
return self._operator.offset
@offset.setter
def offset(self, value):
if isinstance(value, float):
self._operator.offset = value
self.offsetChanged.emit()
self.offsetChanged[float].emit(value)
self.operatorChanged.emit()
else:
raise OperatorModelError(
f'Cannot set offset-value of {self.__class__.__name__} of {self._operator!r}.') from TypeError(
f'Value {value!r} must be `float`')
@property
def value(self):
return self._operator.value
@value.setter
def value(self, value):
if isinstance(value, float):
self._operator.value = value
self.valueChanged.emit()
self.valueChanged[float].emit(value)
self.operatorChanged.emit()
else:
raise OperatorModelError(
f'Cannot set operator-value of {self.__class__.__name__} of {self._operator!r}.') from TypeError(
f'Value {value!r} must be `float`')
@property
def label(self):
return self._operator.label
@label.setter
def label(self, value):
if isinstance(value, str):
self._operator.label = value
self.labelChanged.emit()
self.labelChanged[str].emit(value)
self.operatorChanged.emit()
else:
raise OperatorModelError(
f'Cannot set label-value of {self.__class__.__name__} of {self._operator!r}.') from TypeError(
f'Value {value!r} must be `str`')
@property
def silent(self):
return self._silent
@silent.setter
def silent(self, value):
self._silent = bool(value)
self.blockSignals(self._silent)
@property
def operator_type(self):
return type(self._operator)
@property
def operator_classname(self):
return self._operator.__class__.__name__
@property
def is_deflector(self):
return isinstance(self._operator, Deflector)
@property
def is_lens(self):
return isinstance(self._operator, Lens)
@property
def is_propagator(self):
return isinstance(self._operator, Propagator)
@property
def style(self):
return dict(self._style)
@property
def focal_style(self):
if self.is_lens:
return dict(self._focal_style)
else:
return dict() # raise AttributeError(f'Cannot get focal_style for {self}. Operator {self._operator!r} is not a lens')
def __init__(self, operator, *args, **kwargs):
"""
Create a model for an OpticalOperator
:param operator: The OpticalOperator to model
:param args: Optional positional arguments passed to QtCore.QObject constructor
:param kwargs: Optional keyword arguments passed to QtCore.QObject constructor
:type operator: OpticalOperator
"""
super(OpticalOperatorModel, self).__init__(*args, **kwargs)
if not isinstance(operator, OpticalOperator):
raise TypeError(
f'Cannot create {self.__class__.__name__} for {operator!r}. Invalid type {type(operator)}. Accepted types are OpticalOperator and subclasses.')
self._operator = operator
self._silent = False
self._style = dict([['ls', '-'], ['alpha', 1.], ['color', 'k'], ['lw', 1.]])
self._focal_style = dict([['ls', '--'], ['alpha', 0.5], ['color', 'k'], ['lw', 0.5]])
def __repr__(self):
return f'{self.__class__.__name__}({self._operator!r}, {self.parent()})'
def __str__(self):
return f'{self._operator}'
def show(self, *args, **kwargs):
"""
Shows the operator
:param args: Optional positional arguments passed to OpticalOperator.show()
:param kwargs: Optional keyword arguments passed to OpticalOperator.show()
:return:
"""
kwargs.update(self.style)
if self.is_lens:
return self._operator.show(*args, focal_plane_kwargs=self._focal_style, **kwargs)
else:
return self._operator.show(*args, **kwargs)
def set_style(self, key, value, focal=False):
f"""
Sets one of the style fields to the given value
:param focal: Whether to set the style for focal planes or not. Only applicable if the optical operator is a Lens.
:param key: The key to set. Should be one of {list(self._style.keys())}
:param value: The value to set the field to.
:type key: str
:type value: Union[float, int, str]
:return:
"""
if focal:
if key in self.focal_style:
self._focal_style[key] = value
else:
raise ValueError(f'Cannot set focal style {key} to {value} for {self}: Key {key!r} not recognized')
else:
if key in self.style:
self._style[key] = value
else:
raise ValueError(f'Cannot set style {key} to {value} for {self}: Key {key!r} not recognized')
self.styleChanged.emit()
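# Hedged wiring sketch (widget and operator construction elided; names hypothetical):
# a view subscribes to the model's typed signals, so assignments through the model
# fan out to the UI:
#
#   model = OpticalOperatorModel(some_lens)
#   model.valueChanged[float].connect(spinbox.setValue)
#   model.value = 2.0  # emits valueChanged, valueChanged[float] and operatorChanged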
class OpticalOperatorController(QtCore.QObject):
"""
Controller for controlling an OpticalOperatorModel
The controller has a series of preset values that can be used to store certain values in a dictionary with integer keys.
"""
presetsChanged = pyqtSignal([], name='presetsChanged')
@property
def value_presets(self):
return self._value_presets
@property
def model_name(self):
return str(self._model.label)
@property
def model(self):
return self._model
def __init__(self, model, *args, **kwargs):
"""
Create a controller for an OpticalOperatorModel
:param model: The model to control
:param args: Optional positional arguments passed to QtCore.QObject constructor
:param kwargs: Optional keyword arguments passed to QtCore.QObject constructor
:type model: OpticalOperatorModel
"""
super(OpticalOperatorController, self).__init__(*args, **kwargs)
if not isinstance(model, OpticalOperatorModel):
raise TypeError(
f'Cannot create {self.__class__.__name__} for {model!r}. Invalid type {type(model)}. Accepted types are `OpticalOperatorModel` and subclasses')
self._model = model
self._value_presets = dict()
@pyqtSlot(int, float, name='setValuePreset')
def setValuePreset(self, preset, value):
"""
Sets/adds a preset value
:param preset: Preset-key
:param value: Preset-value
:type preset: int
:type value: float
"""
self._value_presets[preset] = value
self.presetsChanged.emit()
@pyqtSlot(int, name='setSilent')
@pyqtSlot(bool, name='setSilent')
@pyqtSlot(float, name='setSilent')
def setSilent(self, value):
"""
Disable signals from the model
:param value: whether to disable or enable signals
:param value: Union[int, float, bool]
:return:
"""
self._model.silent = value
@pyqtSlot(float, name='setZ')
def setZ(self, value):
"""
Set the z-position of the model
:param value: z-value
:type value: float
"""
self._model.z = value
@pyqtSlot(float, name='setOffset')
def setOffset(self, value):
"""
Set the offset-value of the model
:param value: offset-value
:type value: float
:return:
"""
self._model.offset = value
@pyqtSlot(float, name='setValue')
def setFloatValue(self, value):
"""
Set the value of the model
:param value: the value
:type value: float
:return:
"""
self._model.value = value
@pyqtSlot(int, name='setValue')
def setIntValue(self, value):
"""
Set the value of the model based on preset values
:param value: The preset-key to use
:type value: int
:return:
"""
operator_value = self._value_presets.get(value, float(value))
self._model.value = operator_value
@pyqtSlot(str, float)
def setParameter(self, parameter, value):
"""
Sets a given parameter to a given value
        :param parameter: The parameter to set. Should be either "z", "offset", "value-float" or "value-int"
:param value: The value to set
:type parameter: str
:type value: float
:return:
"""
if parameter.lower() == 'z':
self.setZ(value)
elif parameter.lower() == 'offset':
self.setOffset(value)
elif parameter.lower() == 'value-float':
self.setFloatValue(value)
elif parameter.lower() == 'value-int':
self.setIntValue(int(value))
else:
raise ValueError(f'Could not set parameter {parameter} to {value} for {self!r}: Parameter not recognized.')
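    # Preset behaviour sketch: after controller.setValuePreset(1, 0.5), calling
    # controller.setIntValue(1) sets the operator value to 0.5; unknown preset keys
    # fall back to float(key) via the dict.get in setIntValue above.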
@pyqtSlot(str, float, bool)
@pyqtSlot(str, int, bool)
@pyqtSlot(str, str, bool)
def setStyle(self, field, value, focal):
if self._model.is_lens:
self._model.set_style(field, value, focal)
else:
self._model.set_style(field, value, False)
@pyqtSlot(dict, bool)
def setStyleDict(self, styles, focal):
blocked = self._model.signalsBlocked()
if not blocked:
self._model.blockSignals(True)
for key in styles:
self._model.set_style(key, styles[key], focal)
if not blocked:
self._model.blockSignals(False)
self._model.styleChanged[dict].emit(styles)
class StyleWidget(QtWidgets.QWidget):
styleChanged = pyqtSignal([dict])
@property
def styleDict(self):
return dict(self._styledict)
@property
def widgets(self):
return {'style': self._linestyleCombobox, 'width': self._linewidthSpinbox, 'alpha': self._aSpinbox,
'color': self._colorWidget}
def __init__(self, *args, **kwargs):
super(StyleWidget, self).__init__(*args, **kwargs)
self._styledict = dict()
self._linewidthSpinbox = QtWidgets.QDoubleSpinBox(self)
self._linestyleCombobox = QtWidgets.QComboBox(self)
self._colorWidget = QtWidgets.QWidget(self)
self._rSpinbox = QtWidgets.QDoubleSpinBox(self._colorWidget)
self._gSpinbox = QtWidgets.QDoubleSpinBox(self._colorWidget)
self._bSpinbox = QtWidgets.QDoubleSpinBox(self._colorWidget)
self._aSpinbox = QtWidgets.QDoubleSpinBox(self._colorWidget)
self._linewidthSpinbox.setMinimum(0)
self._linewidthSpinbox.setMaximum(10)
self._linewidthSpinbox.setDecimals(2)
self._linewidthSpinbox.setSingleStep(0.1)
self._linewidthSpinbox.blockSignals(True)
self._linewidthSpinbox.setValue(1)
self._linewidthSpinbox.blockSignals(False)
self._linestyleCombobox.addItems(lineStyles.keys())
self._linestyleCombobox.blockSignals(True)
self._linestyleCombobox.setCurrentText('-')
self._linestyleCombobox.blockSignals(False)
self._rSpinbox.setMinimum(0)
self._rSpinbox.setMaximum(1)
self._rSpinbox.setDecimals(2)
self._rSpinbox.setSingleStep(0.1)
self._rSpinbox.blockSignals(True)
self._rSpinbox.setValue(1)
self._rSpinbox.blockSignals(False)
self._gSpinbox.setMinimum(0)
self._gSpinbox.setMaximum(1)
self._gSpinbox.setDecimals(2)
self._gSpinbox.setSingleStep(0.1)
self._gSpinbox.blockSignals(True)
self._gSpinbox.setValue(1)
self._gSpinbox.blockSignals(False)
self._bSpinbox.setMinimum(0)
self._bSpinbox.setMaximum(1)
        self._bSpinbox.setDecimals(2)
self._bSpinbox.setSingleStep(0.1)
self._bSpinbox.blockSignals(True)
self._bSpinbox.setValue(1)
self._bSpinbox.blockSignals(False)
self._aSpinbox.setMinimum(0)
self._aSpinbox.setMaximum(1)
self._aSpinbox.setDecimals(2)
self._aSpinbox.setSingleStep(0.1)
self._aSpinbox.blockSignals(True)
self._aSpinbox.setValue(1.)
self._aSpinbox.blockSignals(False)
gridlayout = QtWidgets.QGridLayout()
gridlayout.addWidget(QtWidgets.QLabel('R'), 0, 0)
gridlayout.addWidget(QtWidgets.QLabel('G'), 0, 1)
gridlayout.addWidget(QtWidgets.QLabel('B'), 0, 2)
gridlayout.addWidget(QtWidgets.QLabel('A'), 0, 3)
gridlayout.addWidget(self._rSpinbox, 1, 0)
gridlayout.addWidget(self._gSpinbox, 1, 1)
gridlayout.addWidget(self._bSpinbox, 1, 2)
gridlayout.addWidget(self._aSpinbox, 1, 3)
self._colorWidget.setLayout(gridlayout)
self._styledict['lw'] = self._linewidthSpinbox.value()
self._styledict['ls'] = self._linestyleCombobox.currentText()
self._styledict['color'] = to_hex([self._rSpinbox.value(), self._gSpinbox.value(), self._bSpinbox.value()])
self._styledict['alpha'] = self._aSpinbox.value()
@pyqtSlot(float, float, float)
def setColorRGB(self, r, g, b):
blocked = self.signalsBlocked()
if not blocked:
self.blockSignals(True)
self.setRValue(r)
self.setGValue(g)
self.setBValue(b)
if not blocked:
self.blockSignals(False)
self.styleChanged[dict].emit(self.styleDict)
@pyqtSlot(str)
def setColorHex(self, hex):
try:
color = to_rgb(hex)
except ValueError as e:
raise ValueError(f'Cannot set color for {self!r} for hex-string {hex!r}') from e
else:
self.setColorRGB(*color)
@pyqtSlot(float)
def setRValue(self, value):
self._rSpinbox.blockSignals(True)
self._rSpinbox.setValue(value)
self._rSpinbox.blockSignals(False)
self._styledict['color'] = to_hex([self._rSpinbox.value(), self._gSpinbox.value(), self._bSpinbox.value()])
self.styleChanged[dict].emit(self.styleDict)
@pyqtSlot(float)
def setGValue(self, value):
self._gSpinbox.blockSignals(True)
self._gSpinbox.setValue(value)
self._gSpinbox.blockSignals(False)
self._styledict['color'] = to_hex([self._rSpinbox.value(), self._gSpinbox.value(), self._bSpinbox.value()])
self.styleChanged[dict].emit(self.styleDict)
@pyqtSlot(float)
def setBValue(self, value):
self._bSpinbox.blockSignals(True)
self._bSpinbox.setValue(value)
self._bSpinbox.blockSignals(False)
self._styledict['color'] = to_hex([self._rSpinbox.value(), self._gSpinbox.value(), self._bSpinbox.value()])
self.styleChanged[dict].emit(self.styleDict)
@pyqtSlot(float)
def setAValue(self, value):
self._aSpinbox.blockSignals(True)
self._aSpinbox.setValue(value)
self._aSpinbox.blockSignals(False)
self._styledict['alpha'] = self._aSpinbox.value()
self.styleChanged[dict].emit(self.styleDict)
@pyqtSlot(float)
def setLinewidth(self, value):
self._linewidthSpinbox.blockSignals(True)
self._linewidthSpinbox.setValue(value)
self._linewidthSpinbox.blockSignals(False)
self._styledict['lw'] = self._linewidthSpinbox.value()
self.styleChanged[dict].emit(self.styleDict)
@pyqtSlot(str)
def setLinestyle(self, value):
self._linestyleCombobox.blockSignals(True)
self._linestyleCombobox.setCurrentText(value)
self._linestyleCombobox.blockSignals(False)
self._styledict['ls'] = self._linestyleCombobox.currentText()
self.styleChanged[dict].emit(self.styleDict)
@pyqtSlot(dict)
def setStyles(self, styles):
blocked = self.signalsBlocked()
if not blocked:
self.blockSignals(True)
self.setLinestyle(styles['ls'])
self.setLinewidth(styles['lw'])
self.setAValue(styles['alpha'])
self.setColorHex(styles['color'])
if not blocked:
self.blockSignals(False)
self.styleChanged[dict].emit(self.styleDict)
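# --- Illustrative usage sketch (added; not part of the original module) ---
# StyleWidget keeps a matplotlib-style dict with the keys 'lw', 'ls', 'color'
# and 'alpha', and re-emits styleChanged[dict] whenever one of its inputs
# changes. Assuming a running QApplication and that the constructor accepts an
# optional parent like a plain QWidget, it could be driven like this:
#
#     widget = StyleWidget()
#     widget.styleChanged[dict].connect(print)
#     widget.setStyles({'ls': '--', 'lw': 2.0, 'alpha': 0.5, 'color': '#ff0000'})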
class OpticalOperatorView(QtWidgets.QWidget):
"""
Create a view for an OpticalOperator.
This object provides a series of widgets and setup tools for those widgets. The widgets are connected to a controller that controls the model, and changes in the model are reflected in the view - as long as the underlying data object (i.e. the OpticalOperator) is not changed directly, but only through the corresponding OpticalOperatorModel.
"""
value_min = -999
value_max = 999
value_step = 0.1
value_decimals = 2
z_min = -999
z_max = 999
z_step = 0.5
z_decimals = 2
offset_min = -999
offset_max = 999
offset_step = 0.05
offset_decimals = 2
plotUpdated = pyqtSignal(name='plotUpdated')
@property
def model(self):
return self._model
def __init__(self, controller, *args, plot_widget=None, **kwargs):
"""
Create a view for a controller.
The following widgets will be created:
-typeLabel: A QLabel to show the type of the operator
-nameLabel: A QLabel to show the name of the operator
-zSpinbox: A QDoubleSpinBox to control/show the z-position of the operator
-offsetSpinbox: A QDoubleSpinBox to control/show the offset of the operator
-valueSpinbox: A QDoubleSpinBox to control/show the value of the operator
-valueDial: A QDial to control/show the value of the operator through preset values
-valueIndicator: A QLabel to show the current value of the operator below the valueDial.
-zStepSpinbox: A QDoubleSpinBox to control/show the singleStep of the zSpinbox.
-offsetStepSpinbox: A QDoubleSpinBox to control/show the singleStep of the offsetSpinbox.
-valueStepSpinbox: A QDoubleSpinBox to control/show the singleStep of the valueSpinbox.
-plotWidget: A MplWidget to show the operator graphically in a plot area.
:param controller: The controller to connect to. The model will be extracted from this controller.
:param args: Optional positional arguments passed to QtWidgets.QWidget
:param plot_widget: The plot-widget to use to show the optical operator on
:param kwargs: Optional keyword arguments passed to QtWidgets.QWidget
:type controller: OpticalOperatorController
:type plot_widget: MplWidget
"""
super(OpticalOperatorView, self).__init__(*args, **kwargs)
if not isinstance(controller, OpticalOperatorController):
raise TypeError(f'Expected an OpticalOperatorController, got {type(controller)}')
self._controller = controller
self._model = self._controller.model
self.typeLabel = QtWidgets.QLabel(self._model.operator_classname, self)
self.nameLabel = QtWidgets.QLabel(self._model.label, self)
self.zSpinbox = QtWidgets.QDoubleSpinBox(self)
self.offsetSpinbox = QtWidgets.QDoubleSpinBox(self)
self.valueSpinbox = QtWidgets.QDoubleSpinBox(self)
self.valueDial = QtWidgets.QDial(self)
self.valueIndicator = QtWidgets.QLabel(self)
self.styleWidget = StyleWidget(self)
if self._model.is_lens:
self.focalStyleWidget = StyleWidget(self)
else:
self.focalStyleWidget = None
# self.zStepSpinbox = QtWidgets.QDoubleSpinBox(self)
# self.offsetStepSpinbox = QtWidgets.QDoubleSpinBox(self)
# self.valueStepSpinbox = QtWidgets.QDoubleSpinBox(self)
# self.zDecimalsSpinbox = QtWidgets.QSpinBox(self)
# self.offsetDecimalsSpinbox = QtWidgets.QSpinBox(self)
# self.valueDecimalsSpinbox = QtWidgets.QSpinBox(self)
# self.zMinimumLineEdit = QtWidgets.QLineEdit(self)
# self.offsetMinimumLineEdit = QtWidgets.QLineEdit(self)
# self.valueMinimumLineEdit = QtWidgets.QLineEdit(self)
# self.zMaximumLineEdit = QtWidgets.QLineEdit(self)
# self.offsetMaximumLineEdit = QtWidgets.QLineEdit(self)
# self.valueMaximumLineEdit = QtWidgets.QLineEdit(self)
if plot_widget is None:
self.plotWidget = MplWidget(self)
else:
if isinstance(plot_widget, MplWidget):
self.plotWidget = plot_widget
else:
raise TypeError(
f'Cannot create {self.__class__.__name__} for controller {self._controller!r} with model {self._model!r}. Provided plotWidget is not a MplWidget but a {type(plot_widget)}')
self._plot_data = None
self.setupZSpinbox()
self.setupValueDial()
self.setupValueSpinbox()
self.setupOffsetSpinbox()
self.setupValueIndicator()
self.styleWidget.setStyles(self._model.style) # Simple setup for the stylewidgets
# Listeners
self._model.valueChanged[float].connect(self.on_value_changed)
self._model.zChanged[float].connect(self.on_z_changed)
self._model.offsetChanged[float].connect(self.on_offset_changed)
self._model.labelChanged[str].connect(self.on_label_changed)
self._model.operatorChanged.connect(lambda: self.on_model_changed())
self._model.styleChanged[dict].connect(self.on_style_changed)
# Signals
self.zSpinbox.valueChanged[float].connect(self._controller.setZ)
self.offsetSpinbox.valueChanged[float].connect(self._controller.setOffset)
self.valueSpinbox.valueChanged[float].connect(self._controller.setFloatValue)
self.valueDial.valueChanged[int].connect(self._controller.setIntValue)
self.styleWidget.styleChanged[dict].connect(lambda x: self._controller.setStyleDict(x, False))
def setupValueSpinbox(self):
"""
Sets up the value spinbox
:return:
"""
self.valueSpinbox.setMinimum(self.value_min)
self.valueSpinbox.setMaximum(self.value_max)
self.valueSpinbox.setDecimals(self.value_decimals)
self.valueSpinbox.setSingleStep(self.value_step)
self.valueSpinbox.blockSignals(True)
self.valueSpinbox.setValue(self._model.value)
self.valueSpinbox.blockSignals(False)
def setupZSpinbox(self):
self.zSpinbox.setMinimum(self.z_min)
self.zSpinbox.setMaximum(self.z_max)
self.zSpinbox.setDecimals(self.z_decimals)
self.zSpinbox.setSingleStep(self.z_step)
self.zSpinbox.blockSignals(True)
self.zSpinbox.setValue(self._model.z)
self.zSpinbox.blockSignals(False)
def setupOffsetSpinbox(self):
if self._model.is_deflector or self._model.is_propagator:
self.offsetSpinbox.setEnabled(False)
else:
self.offsetSpinbox.setMinimum(self.offset_min)
self.offsetSpinbox.setMaximum(self.offset_max)
self.offsetSpinbox.setDecimals(self.offset_decimals)
self.offsetSpinbox.setSingleStep(self.offset_step)
self.offsetSpinbox.blockSignals(True)
self.offsetSpinbox.setValue(self._model.offset)
self.offsetSpinbox.blockSignals(False)
def setupValueDial(self):
if len(self._controller.value_presets) < 2:
self.valueDial.setEnabled(False)
dial_value = None
else:
self.valueDial.setMinimum(min(self._controller.value_presets.keys()))
self.valueDial.setMaximum(max(self._controller.value_presets.keys()))
preset_matches = [key for key in self._controller.value_presets if
self._controller.value_presets[key] == self._model.value]
if len(preset_matches) > 0:
dial_value = min(preset_matches)
else:
dial_value = None
self.valueDial.setTracking(True)
self.valueDial.setNotchesVisible(True)
if dial_value is None:
if self.valueDial.isEnabled():
self.valueDial.setStyleSheet('background-color : lightblue')
else:
pass
else:
self.valueDial.setStyleSheet('background-color : lightgreen')
self.valueDial.blockSignals(True)
self.valueDial.setValue(dial_value)
self.valueDial.blockSignals(False)
def setupValueIndicator(self):
self.valueIndicator.setText(f'{self._model.value}')
@pyqtSlot(float)
def on_z_changed(self, value):
if self.zSpinbox.minimum() > value:
self.zSpinbox.setMinimum(value)
if self.zSpinbox.maximum() < value:
self.zSpinbox.setMaximum(value)
self.zSpinbox.blockSignals(True)
self.zSpinbox.setValue(value)
self.zSpinbox.blockSignals(False)
@pyqtSlot(float)
def on_offset_changed(self, value):
if self.offsetSpinbox.minimum() > value:
self.offsetSpinbox.setMinimum(value)
if self.offsetSpinbox.maximum() < value:
self.offsetSpinbox.setMaximum(value)
self.offsetSpinbox.blockSignals(True)
self.offsetSpinbox.setValue(value)
self.offsetSpinbox.blockSignals(False)
@pyqtSlot(float)
def on_value_changed(self, value):
if self.valueSpinbox.minimum() > value:
self.valueSpinbox.setMinimum(value)
if self.valueSpinbox.maximum() < value:
self.valueSpinbox.setMaximum(value)
self.valueSpinbox.blockSignals(True)
self.valueSpinbox.setValue(value)
self.valueSpinbox.blockSignals(False)
preset_values = [key for key in self._controller.value_presets if self._controller.value_presets[key] == value]
if len(preset_values) == 0:
self.valueDial.setStyleSheet('background-color : lightblue')
else:
self.valueDial.setStyleSheet('background-color : lightgreen')
self.valueDial.blockSignals(True)
self.valueDial.setValue(preset_values[0])
self.valueDial.blockSignals(False)
self.valueIndicator.setText(f'{value}')
@pyqtSlot(str)
def on_label_changed(self, value):
self.nameLabel.setText(value)
def on_model_changed(self, *args, **kwargs):
kwargs.update({'ax': self.plotWidget.canvas.ax})
if self._plot_data is None:
_, _, self._plot_data = self._model.show(*args, **kwargs)
else:
if self._model.is_deflector:
self._plot_data[0].set_ydata([self._model.z, self._model.z])
elif self._model.is_lens:
[line.set_ydata([z, z]) for z, line in
zip([self._model.z, self._model.z + self._model.value, self._model.z - self._model.value],
self._plot_data)]
self.plotUpdated.emit()
@pyqtSlot(dict)
def on_style_changed(self, style):
self.styleWidget.blockSignals(True)
self.styleWidget.setStyles(style)
self.styleWidget.blockSignals(False)
self.on_model_changed()
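# --- Illustrative MVC wiring sketch (added; not part of the original module) ---
# The classes above follow a model/controller/view chain. Assuming a running
# QApplication and a Lens operator, the wiring would look roughly like:
#
#     model = OpticalOperatorModel(Lens(10, label='CL1', z=80))
#     controller = OpticalOperatorController(model)
#     view = OpticalOperatorView(controller)   # creates its own MplWidget
#     view.show()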
class SourceModel(QtCore.QObject):
"""
Model for controlling a Source
The model should ensure that proper signals are sent whenever the data of the Source has been changed.
The model emits the following signals:
:param zChanged: Signal ([], [float]) emitted whenever the z-value of the Source has changed.
:param offsetChanged: Signal ([], [float]) emitted whenever the offset-value of the Source has changed.
:param sizeChanged: Signal ([], [float]) emitted whenever the size-value of the Source has changed.
:param anglesChanged: Signal ([], [np.ndarray]) emitted whenever the angles of the Source have changed.
:param pointsChanged: Signal ([], [int]) emitted whenever the points-value of the Source has changed.
:param sourceChanged: Signal emitted whenever any change has been made to the Source, including the above.
"""
zChanged = pyqtSignal([], [float], name='zChanged')
offsetChanged = pyqtSignal([], [float], name='offsetChanged')
sizeChanged = pyqtSignal([], [float], name='sizeChanged')
anglesChanged = pyqtSignal([], [np.ndarray], name='anglesChanged')
pointsChanged = pyqtSignal([], [int], name='pointsChanged')
sourceChanged = pyqtSignal(name='sourceChanged')
@property
def z(self):
return self._source.z
@z.setter
def z(self, value):
if isinstance(value, float):
self._source.z = value
self.zChanged.emit()
self.zChanged[float].emit(value)
self.sourceChanged.emit()
else:
raise SourceModelError(
f'Cannot set Z-value of {self.__class__.__name__} of {self._source!r}.') from TypeError(
f'Value {value!r} must be `float`')
@property
def offset(self):
return self._source.offset
@offset.setter
def offset(self, value):
if isinstance(value, float):
self._source.offset = value
self.offsetChanged.emit()
self.offsetChanged[float].emit(value)
self.sourceChanged.emit()
else:
raise SourceModelError(
f'Cannot set offset-value of {self.__class__.__name__} of {self._source!r}.') from TypeError(
f'Value {value!r} must be `float`')
@property
def angles(self):
return self._source.angles
@angles.setter
def angles(self, value):
if isinstance(value, (list, tuple, np.ndarray)):
if len(np.shape(value)) == 1:
self._source.angles = np.array(value)
self.anglesChanged.emit()
self.anglesChanged[np.ndarray].emit(np.array(value))
self.sourceChanged.emit()
else:
raise SourceModelError(
f'Cannot set angles of {self.__class__.__name__} of {self._source!r}.') from ValueError(
f'Argument {value!r} has invalid shape {np.shape(value)} != (1,).')
else:
raise SourceModelError(
f'Cannot set angles of {self.__class__.__name__} of {self._source!r}.') from TypeError(
f'Value {value!r} must be `tuple`, `list`, or `np.ndarray`.')
@property
def size(self):
return self._source.size
@size.setter
def size(self, value):
if isinstance(value, float):
self._source.size = value
self.sizeChanged.emit()
self.sizeChanged[float].emit(value)
self.sourceChanged.emit()
else:
raise SourceModelError(
f'Cannot set size-value of {self.__class__.__name__} of {self._source!r}.') from TypeError(
f'Value {value!r} must be `float`')
@property
def points(self):
return self._source.points
@points.setter
def points(self, value):
if isinstance(value, int):
self._source.points = value
self.pointsChanged.emit()
self.pointsChanged[int].emit(value)
self.sourceChanged.emit()
else:
raise SourceModelError(
f'Cannot set points-value of {self.__class__.__name__} of {self._source!r}.') from TypeError(
f'Value {value!r} must be `int`')
@property
def silent(self):
return self._silent
@silent.setter
def silent(self, value):
self._silent = bool(value)
self.blockSignals(self._silent)
def __init__(self, source, *args, **kwargs):
"""
Create a model for a Source
:param source: The Source to model
:param args: Optional positional arguments passed to QtCore.QObject constructor
:param kwargs: Optional keyword arguments passed to QtCore.QObject constructor
:type source: Source
"""
super(SourceModel, self).__init__(*args, **kwargs)
if not isinstance(source, Source):
raise TypeError(
f'Cannot create {self.__class__.__name__} for {source!r}. Invalid type {type(source)}. Accepted types are OpticalOperator and subclasses.')
self._source = source
self._silent = False
def __repr__(self):
return f'{self.__class__.__name__}({self._source!r}, {self.parent()})'
def __str__(self):
return f'{self._source}'
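# --- Illustrative sketch (added; not part of the original module) ---
# SourceModel wraps a Source and re-emits Qt signals from every setter, e.g.:
#
#     source_model = SourceModel(Source(150, (-1, 0, 1), size=0, points=1))
#     source_model.zChanged[float].connect(lambda z: print('z is now', z))
#     source_model.z = 140.0   # emits zChanged, zChanged[float] and sourceChanged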
class SourceController(QtCore.QObject):
"""
Controller for controlling a SourceModel
"""
@property
def model(self):
return self._model
def __init__(self, model, *args, **kwargs):
"""
Create a controller for a SourceModel
:param model: The model to control
:param args: Optional positional arguments passed to QtCore.QObject constructor
:param kwargs: Optional keyword arguments passed to QtCore.QObject constructor
:type model: SourceModel
"""
super(SourceController, self).__init__(*args, **kwargs)
if not isinstance(model, SourceModel):
raise TypeError(
f'Cannot create {self.__class__.__name__} for {model!r}. Invalid type {type(model)}. Accepted type is `SourceModel`')
self._model = model
@pyqtSlot(int, name='setSilent')
@pyqtSlot(bool, name='setSilent')
@pyqtSlot(float, name='setSilent')
def setSilent(self, value):
"""
Disable signals from the model
:param value: whether to disable or enable signals
:param value: Union[int, float, bool]
:return:
"""
self._model.silent = value
@pyqtSlot(float, name='setZ')
def setZ(self, value):
"""
Set the z-position of the model
:param value: z-value
:type value: float
"""
self._model.z = value
@pyqtSlot(float, name='setOffset')
def setOffset(self, value):
"""
Set the offset-value of the model
:param value: offset-value
:type value: float
:return:
"""
self._model.offset = value
@pyqtSlot(float, name='setAngleMin')
def setAngleMin(self, value):
"""
Set the minimum angle of the source model
:param value: minimum angle
:type value: float
:return:
"""
self._model.angles = np.linspace(value, np.max(self._model.angles), num=len(self._model.angles))
@pyqtSlot(float, name='setAngleMax')
def setAngleMax(self, value):
"""
Set the maximum angle of the source model
:param value: maximum angle
:type value: float
:return:
"""
self._model.angles = np.linspace(np.min(self._model.angles), value, num=len(self._model.angles))
@pyqtSlot(int, name='setAngleNumber')
def setAngleNumber(self, value):
"""
Set the number of angles of the source model
:param value: the number of angles
:type value: int
:return:
"""
self._model.angles = np.linspace(np.min(self._model.angles), np.max(self._model.angles), num=value)
@pyqtSlot(list, name='setAngles')
@pyqtSlot(tuple, name='setAngles')
@pyqtSlot(np.ndarray, name='setAngles')
def setAngles(self, value):
"""
Set the angles of the model
:param value: the angles
:type value: Union[list, tuple, np.ndarray]
:return:
"""
self._model.angles = np.array(value)
@pyqtSlot(float, name='addAngle')
def addAngle(self, value):
"""
Add an angle to the source
:param value: The angle to add
:type value: float
:return:
"""
self._model.angles = np.array(list(self._model.angles) + [value])
@pyqtSlot(float, name='setSize')
def setSize(self, value):
"""
Set the size-value of the model
:param value: size-value
:type value: float
:return:
"""
self._model.size = value
@pyqtSlot(int, name='setPoints')
def setPoints(self, value):
"""
Set the number of points to emit rays from
:param value: The number of points
:type value: int
:return:
"""
self._model.points = value
@pyqtSlot(str, float)
def setParameter(self, parameter, value):
"""
Sets a given parameter to a given value
:param parameter: The parameter to set. Should be either "z", "offset", "size", or "angle"
:param value: The value to set
:type parameter: str
:type value: float
:return:
"""
if parameter.lower() == 'z':
self.setZ(value)
elif parameter.lower() == 'offset':
self.setOffset(value)
elif parameter.lower() == 'size':
self.setSize(value)
elif parameter.lower() == 'angle':
self.addAngle(value)
else:
raise ValueError(f'Could not set parameter {parameter} to {value} for {self!r}: Parameter not recognized.')
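# --- Illustrative sketch (added; not part of the original module) ---
# setParameter dispatches on the parameter name, so a controller can be driven
# generically, e.g. from a table of (parameter, value) pairs:
#
#     controller = SourceController(source_model)
#     controller.setParameter('z', 120.0)      # same as controller.setZ(120.0)
#     controller.setParameter('angle', 0.25)   # appends 0.25 to the angle array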
class SourceView(QtWidgets.QWidget):
"""
Create a view for a SourceModel.
This object provides a series of widgets and setup tools for those widgets. The widgets are connected to a controller that controls the model, and changes in the model are reflected in the view - as long as the underlying data object (i.e. the Source) is not changed directly, but only through the corresponding SourceModel.
"""
size_min = -999
size_max = 999
size_step = 0.01
size_decimals = 2
size_points_min = 1
size_points_max = 50
size_points_step = 1
z_min = -999
z_max = 999
z_step = 0.5
z_decimals = 2
offset_min = -999
offset_max = 999
offset_step = 0.05
offset_decimals = 2
angles_min = -90
angles_max = 90
angles_step = 0.01
angles_decimals = 2
angles_points_min = 1
angles_points_max = 50
angles_points_step = 1
@property
def model(self):
return self._model
def __init__(self, controller, *args, **kwargs):
"""
Create a view for a controller.
The following widgets will be created:
-zSpinbox: A QDoubleSpinBox to control/show the z-position of the source
-offsetSpinbox: A QDoubleSpinBox to control/show the offset of the source
-sizeSpinbox: A QDoubleSpinBox to control/show the size of the source
-pointsSpinbox: A QSpinBox to control/show the number of points on the source to emit rays from
-anglesMinSpinbox: A QDoubleSpinBox to control/show the minimum angle to emit
-anglesMaxSpinbox: A QDoubleSpinBox to control/show the maximum angle to emit
-anglesNumberSpinbox: A QSpinBox to control/show the number of angles to emit from each point.
:param controller: The controller to connect to. The model will be extracted from this controller.
:param args: Optional positional arguments passed to QtWidgets.QWidget
:param kwargs: Optional keyword arguments passed to QtWidgets.QWidget
:type controller: SourceController
"""
super(SourceView, self).__init__(*args, **kwargs)
if not isinstance(controller, SourceController):
raise TypeError(f'Expected a SourceController, got {type(controller)}')
self._controller = controller
self._model = self._controller.model
self.zSpinbox = QtWidgets.QDoubleSpinBox(self)
self.offsetSpinbox = QtWidgets.QDoubleSpinBox(self)
self.sizeSpinbox = QtWidgets.QDoubleSpinBox(self)
self.pointsSpinbox = QtWidgets.QSpinBox(self)
self.anglesMinSpinbox = QtWidgets.QDoubleSpinBox(self)
self.anglesMaxSpinbox = QtWidgets.QDoubleSpinBox(self)
self.anglesNumberSpinbox = QtWidgets.QSpinBox(self)
self.setupZSpinbox()
self.setupOffsetSpinbox()
self.setupSizeSpinbox()
self.setupAnglesSpinbox()
# Listeners
self._model.zChanged[float].connect(self.on_z_changed)
self._model.offsetChanged[float].connect(self.on_offset_changed)
self._model.sizeChanged[float].connect(self.on_size_changed)
self._model.anglesChanged[np.ndarray].connect(self.on_angles_changed)
self._model.pointsChanged[int].connect(self.on_points_changed)
# Signals
self.zSpinbox.valueChanged[float].connect(self._controller.setZ)
self.offsetSpinbox.valueChanged[float].connect(self._controller.setOffset)
self.sizeSpinbox.valueChanged[float].connect(self._controller.setSize)
self.pointsSpinbox.valueChanged[int].connect(self._controller.setPoints)
self.anglesMinSpinbox.valueChanged[float].connect(self._controller.setAngleMin)
self.anglesMaxSpinbox.valueChanged[float].connect(self._controller.setAngleMax)
self.anglesNumberSpinbox.valueChanged[int].connect(self._controller.setAngleNumber)
def setupZSpinbox(self):
self.zSpinbox.setMinimum(self.z_min)
self.zSpinbox.setMaximum(self.z_max)
self.zSpinbox.setDecimals(self.z_decimals)
self.zSpinbox.setSingleStep(self.z_step)
self.zSpinbox.blockSignals(True)
self.zSpinbox.setValue(self._model.z)
self.zSpinbox.blockSignals(False)
def setupOffsetSpinbox(self):
self.offsetSpinbox.setMinimum(self.offset_min)
self.offsetSpinbox.setMaximum(self.offset_max)
self.offsetSpinbox.setDecimals(self.offset_decimals)
self.offsetSpinbox.setSingleStep(self.offset_step)
self.offsetSpinbox.blockSignals(True)
self.offsetSpinbox.setValue(self._model.offset)
self.offsetSpinbox.blockSignals(False)
def setupSizeSpinbox(self):
self.sizeSpinbox.setMinimum(self.size_min)
self.sizeSpinbox.setMaximum(self.size_max)
self.sizeSpinbox.setDecimals(self.size_decimals)
self.sizeSpinbox.setSingleStep(self.size_step)
self.sizeSpinbox.blockSignals(True)
self.sizeSpinbox.setValue(self._model.size)
self.sizeSpinbox.blockSignals(False)
self.pointsSpinbox.setMinimum(self.size_points_min)
self.pointsSpinbox.setMaximum(self.size_points_max)
self.pointsSpinbox.setSingleStep(self.size_points_step)
self.pointsSpinbox.blockSignals(True)
self.pointsSpinbox.setValue(self._model.points)
self.pointsSpinbox.blockSignals(False)
def setupAnglesSpinbox(self):
self.anglesMinSpinbox.setMinimum(self.angles_min)
self.anglesMinSpinbox.setMaximum(self.angles_max)
self.anglesMinSpinbox.setDecimals(self.angles_decimals)
self.anglesMinSpinbox.setSingleStep(self.angles_step)
self.anglesMinSpinbox.blockSignals(True)
self.anglesMinSpinbox.setValue(np.min(self._model.angles))
self.anglesMinSpinbox.blockSignals(False)
self.anglesMaxSpinbox.setMinimum(self.angles_min)
self.anglesMaxSpinbox.setMaximum(self.angles_max)
self.anglesMaxSpinbox.setDecimals(self.angles_decimals)
self.anglesMaxSpinbox.setSingleStep(self.angles_step)
self.anglesMaxSpinbox.blockSignals(True)
self.anglesMaxSpinbox.setValue(np.max(self._model.angles))
self.anglesMaxSpinbox.blockSignals(False)
self.anglesNumberSpinbox.setMinimum(self.angles_points_min)
self.anglesNumberSpinbox.setMaximum(self.angles_points_max)
self.anglesNumberSpinbox.setSingleStep(self.angles_points_step)
self.anglesNumberSpinbox.blockSignals(True)
self.anglesNumberSpinbox.setValue(len(self._model.angles))
self.anglesNumberSpinbox.blockSignals(False)
@pyqtSlot(float)
def on_z_changed(self, value):
if self.zSpinbox.minimum() > value:
self.zSpinbox.setMinimum(value)
if self.zSpinbox.maximum() < value:
self.zSpinbox.setMaximum(value)
self.zSpinbox.blockSignals(True)
self.zSpinbox.setValue(value)
self.zSpinbox.blockSignals(False)
@pyqtSlot(float)
def on_offset_changed(self, value):
if self.offsetSpinbox.minimum() > value:
self.offsetSpinbox.setMinimum(value)
if self.offsetSpinbox.maximum() < value:
self.offsetSpinbox.setMaximum(value)
self.offsetSpinbox.blockSignals(True)
self.offsetSpinbox.setValue(value)
self.offsetSpinbox.blockSignals(False)
@pyqtSlot(float)
def on_size_changed(self, value):
if self.sizeSpinbox.minimum() > value:
self.sizeSpinbox.setMinimum(value)
if self.sizeSpinbox.maximum() < value:
self.sizeSpinbox.setMaximum(value)
self.sizeSpinbox.blockSignals(True)
self.sizeSpinbox.setValue(value)
self.sizeSpinbox.blockSignals(False)
@pyqtSlot(int)
def on_points_changed(self, value):
if self.pointsSpinbox.minimum() > value:
self.pointsSpinbox.setMinimum(value)
if self.pointsSpinbox.maximum() < value:
self.pointsSpinbox.setMaximum(value)
self.pointsSpinbox.blockSignals(True)
self.pointsSpinbox.setValue(value)
self.pointsSpinbox.blockSignals(False)
@pyqtSlot(np.ndarray)
def on_angles_changed(self, value):
minimum = np.min(value)
maximum = np.max(value)
n = len(value)
if self.anglesMinSpinbox.minimum() > minimum:
self.anglesMinSpinbox.setMinimum(minimum)
if self.anglesMinSpinbox.maximum() < minimum:
self.anglesMinSpinbox.setMaximum(minimum)
if self.anglesMaxSpinbox.minimum() > maximum:
self.anglesMaxSpinbox.setMinimum(maximum)
if self.anglesMaxSpinbox.maximum() < maximum:
self.anglesMaxSpinbox.setMaximum(maximum)
if self.anglesNumberSpinbox.minimum() > n:
self.anglesNumberSpinbox.setMinimum(n)
if self.anglesNumberSpinbox.maximum() < n:
self.anglesNumberSpinbox.setMaximum(n)
self.anglesMinSpinbox.blockSignals(True)
self.anglesMaxSpinbox.blockSignals(True)
self.anglesNumberSpinbox.blockSignals(True)
self.anglesMinSpinbox.setValue(minimum)
self.anglesMaxSpinbox.setValue(maximum)
self.anglesNumberSpinbox.setValue(n)
self.anglesMinSpinbox.blockSignals(False)
self.anglesMaxSpinbox.blockSignals(False)
self.anglesNumberSpinbox.blockSignals(False)
class ScreenModel(QtCore.QObject):
"""
Model for controlling a Screen
The model should ensure that proper signals are sent whenever the data of the Screen has been changed.
The model emits the following signals:
:param zChanged: Signal ([], [float]) emitted whenever the z-value of the Screen has changed.
:param screenChanged: Signal emitted whenever any change has been made to the Screen, including the above.
"""
zChanged = pyqtSignal([], [float], name='zChanged')
screenChanged = pyqtSignal(name='screenChanged')
@property
def z(self):
return self._screen.z
@z.setter
def z(self, value):
if isinstance(value, float):
self._screen.z = value
self.zChanged.emit()
self.zChanged[float].emit(value)
self.screenChanged.emit()
else:
raise ScreenModelError(
f'Cannot set Z-value of {self.__class__.__name__} of {self._screen!r}.') from TypeError(
f'Value {value!r} must be `float`')
@property
def silent(self):
return self._silent
@silent.setter
def silent(self, value):
self._silent = bool(value)
self.blockSignals(self._silent)
def __init__(self, screen, *args, **kwargs):
"""
Create a model for a Screen
:param screen: The Screen to model
:param args: Optional positional arguments passed to QtCore.QObject constructor
:param kwargs: Optional keyword arguments passed to QtCore.QObject constructor
:type screen: Screen
"""
super(ScreenModel, self).__init__(*args, **kwargs)
if not isinstance(screen, Screen):
raise TypeError(
f'Cannot create {self.__class__.__name__} for {screen!r}. Invalid type {type(screen)}. Accepted type is Screen.')
self._screen = screen
self._silent = False
def __repr__(self):
return f'{self.__class__.__name__}({self._screen!r}, {self.parent()})'
def __str__(self):
return f'{self._screen}'
class ScreenController(QtCore.QObject):
"""
Controller for controlling a ScreenModel
"""
@property
def model(self):
return self._model
def __init__(self, model, *args, **kwargs):
"""
Create a controller for a ScreenModel
:param model: The model to control
:param args: Optional positional arguments passed to QtCore.QObject constructor
:param kwargs: Optional keyword arguments passed to QtCore.QObject constructor
:type model: ScreenModel
"""
super(ScreenController, self).__init__(*args, **kwargs)
if not isinstance(model, ScreenModel):
raise TypeError(
f'Cannot create {self.__class__.__name__} for {model!r}. Invalid type {type(model)}. Accepted type is `ScreenModel`')
self._model = model
@pyqtSlot(int, name='setSilent')
@pyqtSlot(bool, name='setSilent')
@pyqtSlot(float, name='setSilent')
def setSilent(self, value):
"""
Disable signals from the model
:param value: whether to disable or enable signals
:param value: Union[int, float, bool]
:return:
"""
self._model.silent = value
@pyqtSlot(float, name='setZ')
def setZ(self, value):
"""
Set the z-position of the model
:param value: z-value
:type value: float
"""
self._model.z = value
@pyqtSlot(str, float)
def setParameter(self, parameter, value):
"""
Sets a given parameter to a given value
:param parameter: The parameter to set. Should be "z"
:param value: The value to set
:type parameter: str
:type value: float
:return:
"""
if parameter.lower() == 'z':
self.setZ(value)
else:
raise ValueError(f'Could not set parameter {parameter} to {value} for {self!r}: Parameter not recognized.')
#WIP: Make ScreenView
class MicroscopeModel(QtCore.QObject):
modelChanged = pyqtSignal([], name='modelChanged')
systemFilled = pyqtSignal([], name='systemFilled')
systemTraced = pyqtSignal([list], name='systemTraced')
@property
def operatorModels(self):
return [model for model in self._operatorModels]
@property
def sourceModel(self):
return self._sourceModel
@property
def screenModel(self):
return self._screenModel
def __init__(self, optical_system, *args, **kwargs):
super(MicroscopeModel, self).__init__(*args, **kwargs)
if not isinstance(optical_system, OpticalSystem):
raise TypeError(
f'Cannot create {self.__class__.__name__} for source: {optical_system!r}. Expected type OpticalSystem not {type(optical_system)}')
self._optical_system = optical_system
self._sourceModel = SourceModel(optical_system.source)
self._screenModel = ScreenModel(optical_system.screen)
self._operatorModels = [OpticalOperatorModel(operator, self.parent()) for operator in self._optical_system]
def __iter__(self):
for obj in [self.sourceModel] + self.operatorModels + [self.screenModel]:
yield obj
@pyqtSlot()
def fillSystem(self):
self._optical_system.fill()
self.systemFilled.emit()
@pyqtSlot(name='trace', result=list)
def trace(self):
self.blockSignals(True)
self.fillSystem()
self.blockSignals(False)
traces = self._optical_system.trace
self.systemTraced[list].emit(traces)
return traces
@pyqtSlot(name='printSystem')
def printSystem(self):
print(self._optical_system)
@pyqtSlot(name='printTraces')
def printTraces(self):
traces = self._optical_system.trace
for trace in traces:
print(f'Trace {trace.label}:')
t = tabulate([[i, ray.x, ray.angle_deg, ray.z] for i, ray in enumerate(trace)],
headers=['#', 'X', 'Angle [deg]', 'Z'])
print(t)
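# --- Illustrative sketch (added; not part of the original module) ---
# A MicroscopeModel wraps a complete OpticalSystem; trace() refills the system
# and returns the ray traces while emitting systemTraced[list]:
#
#     microscope_model = MicroscopeModel(optical_system)
#     microscope_model.systemTraced[list].connect(lambda t: print(len(t), 'traces'))
#     traces = microscope_model.trace()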
class MicroscopeController(QtCore.QObject):
@property
def model(self):
return self._model
@property
def sourceController(self):
return self._sourceController
@property
def screenController(self):
return self._screenController
@property
def operatorControllers(self):
return [controller for controller in self._operatorControllers]
def __init__(self, model, *args, **kwargs):
super(MicroscopeController, self).__init__(*args, **kwargs)
if not isinstance(model, MicroscopeModel):
raise TypeError(
f'Cannot create {self.__class__.__name__} for model: {model!r}. Expected type MicroscopeModel not {type(model)}')
self._model = model
self._sourceController = None
self._screenController = None
self._operatorControllers = [OpticalOperatorController(model) for model in self._model.operatorModels if
(model.is_lens or model.is_deflector)]
def __iter__(self):
for obj in [self.sourceController] + self.operatorControllers + [self.screenController]:
yield obj
@pyqtSlot(str, str, float)
def setOperatorParameterByName(self, name, parameter, value):
print(f'Setting {name} {parameter}={value}')
changes = len([controller.setParameter(parameter, value) for controller in self._operatorControllers if
controller.model_name == name])
if changes > 0:
self._model.modelChanged.emit()
@pyqtSlot(name='trace', result=list)
def trace(self):
return self._model.trace()
class MicroscopeView(QtWidgets.QMainWindow):
# colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
# colors = plt.get_cmap('inferno', 10)
colors = plt.get_cmap('tab20', 10)
@property
def screenView(self):
return self._screenView
@property
def sourceView(self):
return self._sourceView
@property
def operatorViews(self):
return [view for view in self._operatorViews]
def __init__(self, controller, *args, **kwargs):
super(MicroscopeView, self).__init__(*args, **kwargs)
if not isinstance(controller, MicroscopeController):
raise TypeError(f'Expected a MicroscopeController, got {type(controller)}')
self._controller = controller
self._model = self._controller.model
self.plot_widget = MplWidget(self)
self.lens_widgets = QtWidgets.QWidget(self)
self.lens_widgets.setLayout(QtWidgets.QGridLayout())
self.plot_button = QtWidgets.QPushButton('Plot')
self.print_system_button = QtWidgets.QPushButton('Print system')
self.print_traces_button = QtWidgets.QPushButton('Print rays')
self._screenView = None
self._sourceView = None
self._operatorViews = [OpticalOperatorView(controller, self, plot_widget=self.plot_widget) for controller in
self._controller.operatorControllers]
self._trace_lines = None
self.setCentralWidget(QtWidgets.QWidget(self))
self.centralWidget().setLayout(QtWidgets.QGridLayout())
self.centralWidget().layout().addWidget(self.plot_widget, 0, 0)
self.centralWidget().layout().addWidget(self.lens_widgets, 0, 1)
self.centralWidget().layout().addWidget(self.plot_button, 1, 0)
self.centralWidget().layout().addWidget(self.print_system_button, 2, 0)
self.centralWidget().layout().addWidget(self.print_traces_button, 3, 0)
self.lensStyleWindow = QtWidgets.QMainWindow()
self.lensStyleWindow.setCentralWidget(QtWidgets.QWidget())
self.lensStyleWindow.centralWidget().setLayout(QtWidgets.QGridLayout())
self.lensStyleWindow.centralWidget().layout().addWidget(QtWidgets.QLabel('Name'), 0, 0)
self.lensStyleWindow.centralWidget().layout().addWidget(QtWidgets.QLabel('Style'), 0, 1)
self.lensStyleWindow.centralWidget().layout().addWidget(QtWidgets.QLabel('Width'), 0, 2)
self.lensStyleWindow.centralWidget().layout().addWidget(QtWidgets.QLabel('Color'), 0, 4)
[self.lensStyleWindow.centralWidget().layout().addWidget(QtWidgets.QLabel(f'{view.nameLabel.text()}'), i + 1, 0)
for
i, view in enumerate(self.operatorViews) if view.model.is_lens]
[self.lensStyleWindow.centralWidget().layout().addWidget(view.styleWidget.widgets['style'], i + 1, 1) for
i, view in enumerate(self.operatorViews) if view.model.is_lens]
[self.lensStyleWindow.centralWidget().layout().addWidget(view.styleWidget.widgets['width'], i + 1, 2) for
i, view in enumerate(self.operatorViews) if view.model.is_lens]
[self.lensStyleWindow.centralWidget().layout().addWidget(view.styleWidget.widgets['color'], i + 1, 4) for
i, view in enumerate(self.operatorViews) if view.model.is_lens]
# [v for view in self.operatorViews]
menubar = self.menuBar()
self.controlMenu = menubar.addMenu('Controls')
self.operatorAction = QtWidgets.QAction('&Operators', self)
self.sourceAction = QtWidgets.QAction('&Source', self)
self.screenAction = QtWidgets.QAction('&Screen', self)
self.controlMenu.addAction(self.operatorAction)
self.controlMenu.addAction(self.sourceAction)
self.controlMenu.addAction(self.screenAction)
self.styleMenu = menubar.addMenu('Styles')
self.lensStyleAction = QtWidgets.QAction('&Lenses', self)
self.deflectorStyleAction = QtWidgets.QAction('&Deflectors', self)
self.rayStyleAction = QtWidgets.QAction('&Rays', self)
self.styleMenu.addAction(self.lensStyleAction)
self.styleMenu.addAction(self.deflectorStyleAction)
self.styleMenu.addAction(self.rayStyleAction)
self.lensStyleAction.triggered.connect(self.openLensStyle)
# Source control
self.sourceControlWindow = QtWidgets.QMainWindow()
self.sourceControlWindow.setCentralWidget(QtWidgets.QWidget())
self.sourceControlWindow.centralWidget().setLayout(QtWidgets.QGridLayout())
self.sourceAngleMinimumSpinBox = QtWidgets.QDoubleSpinBox()
self.sourceAngleMinimumSpinBox.setMinimum(-90)
self.sourceAngleMinimumSpinBox.setMaximum(0)
self.sourceAngleMinimumSpinBox.setDecimals(2)
self.sourceAngleMinimumSpinBox.setSingleStep(0.01)
self.sourceAngleMinimumSpinBox.setValue(-0.10)
self.sourceAngleMaximumSpinBox = QtWidgets.QDoubleSpinBox()
self.sourceAngleMaximumSpinBox.setMinimum(0)
self.sourceAngleMaximumSpinBox.setMaximum(90)
self.sourceAngleMaximumSpinBox.setDecimals(2)
self.sourceAngleMaximumSpinBox.setSingleStep(0.01)
self.sourceAngleMaximumSpinBox.setValue(0.10)
self.sourceAngles = QtWidgets.QSpinBox()
self.sourceAngles.setMinimum(1)
self.sourceAngles.setMaximum(500)
self.sourceAngles.setSingleStep(1)
self.sourceAngles.setValue(3)
self.sourceControlWindow.centralWidget().layout().addWidget(QtWidgets.QLabel('Angular range from'))
self.sourceControlWindow.centralWidget().layout().addWidget(self.sourceAngleMinimumSpinBox)
self.sourceControlWindow.centralWidget().layout().addWidget(QtWidgets.QLabel('to'))
self.sourceControlWindow.centralWidget().layout().addWidget(self.sourceAngleMaximumSpinBox)
self.sourceControlWindow.centralWidget().layout().addWidget(QtWidgets.QLabel('in'))
self.sourceControlWindow.centralWidget().layout().addWidget(self.sourceAngles)
self.sourceControlWindow.centralWidget().layout().addWidget(QtWidgets.QLabel('steps'))
self.sourceAction.triggered.connect(self.openSourceControl)
self.operatorAction.triggered.connect(self.openOperatorControl)
self.screenAction.triggered.connect(self.openScreenControl)
# Signals
self.plot_button.clicked.connect(self.on_model_changed)
self.print_system_button.clicked.connect(self._model.printSystem)
[view.plotUpdated.connect(self._model.modelChanged) for view in self._operatorViews]
self.print_traces_button.clicked.connect(self._model.printTraces)
# Listeners
self._model.modelChanged.connect(self.on_model_changed)
self._model.systemTraced[list].connect(self.on_retraced)
self.setup_lens_widgets()
# show lenses
[operator_view.on_model_changed(annotate=False) for operator_view in self._operatorViews]
# Run raytracing and update the plot for an initial inspection
self.on_model_changed()
def setup_lens_widgets(self):
self.lens_widgets.layout().addWidget(QtWidgets.QLabel('Type', self.lens_widgets), 0, 0)
self.lens_widgets.layout().addWidget(QtWidgets.QLabel('Name', self.lens_widgets), 0, 1)
self.lens_widgets.layout().addWidget(QtWidgets.QLabel('Z', self.lens_widgets), 0, 2)
self.lens_widgets.layout().addWidget(QtWidgets.QLabel('Offset', self.lens_widgets), 0, 3)
self.lens_widgets.layout().addWidget(QtWidgets.QLabel('Value', self.lens_widgets), 0, 4)
for i, view in enumerate(self.operatorViews):
self.lens_widgets.layout().addWidget(view.typeLabel, i + 1, 0)
self.lens_widgets.layout().addWidget(view.nameLabel, i + 1, 1)
self.lens_widgets.layout().addWidget(view.zSpinbox, i + 1, 2)
self.lens_widgets.layout().addWidget(view.offsetSpinbox, i + 1, 3)
self.lens_widgets.layout().addWidget(view.valueSpinbox, i + 1, 4)
@pyqtSlot(list, name='on_retraced')
def on_retraced(self, traces):
if len(traces) > self.colors.N:
self.colors = plt.get_cmap(self.colors.name, len(traces))
if self._trace_lines is not None:
[line[0].remove() for line in self._trace_lines]
self.plot_widget.canvas.ax.set_prop_cycle(None)
colors = {}
for trace in traces:
if trace[0].x in colors:
pass
else:
# colors[trace[0].x] = self.colors[len(colors)]
colors[trace[0].x] = self.colors(len(colors) / len(traces))
self._trace_lines = [trace.show(ax=self.plot_widget.canvas.ax, annotate=False, color=colors[trace[0].x])[2] for
i, trace in enumerate(traces)]
xs = [[ray.x for ray in raytrace] for raytrace in traces]
minimum_x = min([min(x) for x in xs])
maximum_x = max([max(x) for x in xs])
ys = [[ray.z for ray in raytrace] for raytrace in traces]
minimum_y = min([min(y) for y in ys])
maximum_y = max([max(y) for y in ys])
ticks = [(operator.z, operator.label) for operator in self._model.operatorModels if
(operator.is_deflector or operator.is_lens)]
additional_ticks = [(operator.z + operator.value, f'{operator.label} FFP') for operator in
self._model.operatorModels if operator.is_lens]
additional_ticks.extend(
[(operator.z - operator.value, f'{operator.label} BFP') for operator in self._model.operatorModels if
operator.is_lens])
ticks.extend(additional_ticks)
self.plot_widget.canvas.ax.set_yticks([tick[0] for tick in ticks])
self.plot_widget.canvas.ax.set_yticklabels([tick[1] for tick in ticks])
self.plot_widget.canvas.ax.set_xlim(minimum_x, maximum_x)
self.plot_widget.canvas.ax.set_ylim(minimum_y, maximum_y)
print('Plot updated')
self.plot_widget.canvas.draw()
@pyqtSlot()
def on_model_changed(self):
self._model.trace()
@pyqtSlot()
def openLensStyle(self):
self.lensStyleWindow.show()
@pyqtSlot()
def openSourceControl(self):
self.sourceControlWindow.show()
@pyqtSlot()
def openScreenControl(self):
if self.screenView is not None:  # WIP: no ScreenView exists yet, see note below
self.screenView.show()
@pyqtSlot()
def openOperatorControl(self):
pass
# self.operatorViews.show()
def full_column(angles=(-1, 0, 1), size=0, n_points=1):
mygui = QtWidgets.QApplication(sys.argv)
source = Source(150, angles, size=size, points=n_points)
screen = Screen(-100)
GUN1 = Deflector(0, label='GUN1', z=95)
GUN2 = Deflector(0, label='GUN2', z=85)
CL1 = Lens(10, label='CL1', z=80)
CL2 = Lens(10, label='CL2', z=70)
CL3 = Lens(10, label='CL3', z=60)
CLA1 = Deflector(0, label='CLA1', z=50)
CLA2 = Deflector(0, label='CLA2', z=40)
CM = Lens(10, label='CM', z=30)
OLPre = Lens(10, label='OLPre', z=5)
OLPost = Lens(10, label='OLPost', z=-5)
OM = Lens(10, label='OM', z=-15)
ILA1 = Deflector(0, label='ILA1', z=-25)
ILA2 = Deflector(0, label='ILA2', z=-30)
IL1 = Lens(10, label='IL1', z=-40)
IL2 = Lens(10, label='IL2', z=-50)
IL3 = Lens(10, label='IL3', z=-60)
PLA = Deflector(0, label='PLA', z=-70)
PL = Lens(10, label='PL', z=-80)
optical_system = OpticalSystem(source,
[GUN1, GUN2, CL1, CL2, CL3, CLA1, CLA2, CM, OLPre, OLPost, OM, ILA1, ILA2, IL1, IL2,
IL3, PLA, PL], screen)
microscope_model = MicroscopeModel(optical_system)
microscope_controller = MicroscopeController(microscope_model)
microscope_view = MicroscopeView(microscope_controller)
microscope_view.show()
sys.exit(mygui.exec_())
def condenser_system(angles=(-1, 0, 1), size=0, n_points=1):
mygui = QtWidgets.QApplication(sys.argv)
source = Source(100, angles, size=size, points=n_points)
screen = Screen(0)
CL1 = Lens(6.3, label='CL1', z=82)
CL3 = Lens(8, label='CL3', z=60)
CLA1 = Deflector(0, label='CLA1', z=49)
CLA2 = Deflector(0, label='CLA2', z=42.5)
CM = Lens(10, label='CM', z=27)
OLPre = Lens(8.5, label='OLPre', z=8.5)
optical_system = OpticalSystem(source,
[CL1, CL3, CLA1, CLA2, CM, OLPre], screen)
microscope_model = MicroscopeModel(optical_system)
microscope_controller = MicroscopeController(microscope_model)
microscope_view = MicroscopeView(microscope_controller)
microscope_view.show()
sys.exit(mygui.exec_())
if __name__ == '__main__':
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument('--system', type=str, default='full', choices=['full', 'condenser', 'imaging'],
help='The system to show, i.e. the condenser, imaging, or full system.')
parser.add_argument('--min_angle', dest='min_angle', type=float, default=-1,
help='The minimum angle to emit from the source')
parser.add_argument('--max_angle', dest='max_angle', type=float, default=1,
help='The maximum angle to emit from the source')
parser.add_argument('--n_angles', dest='n_angles', type=int, default=3,
help='The number of angles to emit from the source')
parser.add_argument('--source_size', dest='source_size', type=float, default=0.0, help='The size of the source')
parser.add_argument('--source_points', dest='source_points', type=int, default=1,
help='The number of points to emit beams from the source')
arguments = parser.parse_args()
angles = np.linspace(arguments.min_angle, arguments.max_angle, num=arguments.n_angles)
if arguments.system == 'full':
full_column(angles, size=arguments.source_size, n_points=arguments.source_points)
elif arguments.system == 'condenser':
condenser_system(angles, size=arguments.source_size, n_points=arguments.source_points)
elif arguments.system == 'imaging':
raise NotImplementedError(f'System {arguments.system} is not supported yet.')
else:
raise ValueError(f'System {arguments.system} not recognized')
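# --- Example invocations (added; assuming this module is saved as e.g. microscope_gui.py) ---
# python microscope_gui.py --system full
# python microscope_gui.py --system condenser --min_angle -0.5 --max_angle 0.5 --n_angles 5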
# ===== app.py (repo: lilbillybiscuit/tensorflow_chessbot, license: MIT) =====
import json
import base64
import os
def lambda_handler(event, context):
#os.system("./tensorflow_chessbot.py")
#return "Hi"
# Decode the base64-encoded image from the request body and save it to /tmp
text = base64.b64decode(event['body'])
image = open("/tmp/image.png", "wb")
image.write(text)
image.close()
# Run the chessboard recognizer; it writes the resulting FEN string to /tmp/fen.txt
os.system("./tensorflow_chessbot.py --filepath /tmp/image.png")
fen = open("/tmp/fen.txt", "r")
str1 = fen.readline()
fen.close()
print("Final FEN: " + str1)
return {
'statusCode': 200,
'body': str(str1),
"headers": {
"Access-Control-Allow-Origin" : "*",
}
}
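# --- Illustrative local invocation (added; not part of the original handler) ---
# The handler expects the request body to be a base64-encoded PNG, e.g.:
#
#     with open('board.png', 'rb') as f:   # hypothetical test image
#         event = {'body': base64.b64encode(f.read())}
#     print(lambda_handler(event, None))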
# ===== plugins/grafana/icon_grafana/actions/do_proxied_datasource_call/action.py (repo: lukaszlaszuk/insightconnect-plugins, license: MIT) =====
import komand
from .schema import DoProxiedDatasourceCallInput, DoProxiedDatasourceCallOutput
# Custom imports below
class DoProxiedDatasourceCall(komand.Action):
def __init__(self):
super(self.__class__, self).__init__(
name="do_proxied_datasource_call",
description="Proxies all calls to the actual datasource",
input=DoProxiedDatasourceCallInput(),
output=DoProxiedDatasourceCallOutput(),
)
def run(self, params={}):
urlparts = ["datasources", "proxy", params.get("datasource_id")] + params.get("path").strip("/").split("/")
response = self.connection.request("GET", urlparts, params=params.get("parameters"))
if response.ok:
return {"response": response.json()}
else:
self.logger.error("Grafana API: " + response.json().get("message", ""))
response.raise_for_status()
def test(self):
return self.connection.test()
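# --- Illustrative input sketch (added; not part of the original plugin) ---
# run() splits the 'path' parameter and prepends the datasource proxy route, so
# hypothetical input parameters such as
#     {"datasource_id": "1", "path": "/api/v1/query", "parameters": {"query": "up"}}
# would be forwarded as a GET to datasources/proxy/1/api/v1/query with
# query=up as the request parameters.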
# ===== 3/deck_handler.py (repo: diblaze/TDP002, license: MIT) =====
#! /usr/bin/env python3
import random
# acc. to assignment we only need two suits (half of deck)
#spades = 1..13
#hearts = 1..13 * 2
suits = {"spades": 1, "hearts": 2}
values = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5, "six": 6, "seven": 7,
"eight": 8, "nine": 9, "ten": 10, "elseven": 11, "twelve": 12, "thirteen": 13}
joker_a = ["joker_a", 27]
joker_b = ["joker_b", 27]
def create_deck():
"""Creates a deck of 26 cards (-2 jokers)"""
# list to hold the deck
_deck = []
for i in range(1, 3):
for j in range(1, 14):
_deck.append([i, j])
return _deck
def shuffle_deck(deck_to_shuffle):
"""Shuffles a deck.
The shuffle occurs IN PLACE; the same (now shuffled) deck is also returned so the intent is clear to callers."""
#random seed is set to 10 to ensure same passkey.
random.seed(10)
random.shuffle(deck_to_shuffle)
return deck_to_shuffle
def pick_card(deck_to_pick_from):
"""Returns a random card from the deck"""
return random.choice(deck_to_pick_from)
def insert_jokers(deck_to_insert_into):
"""Inserts joker_a and joker_b into deck"""
deck_to_insert_into.append(joker_a)
deck_to_insert_into.append(joker_b)
def insert_card_by_name(card_in_text, deck_to_insert_into):
"""Adds a new card to the last postion of the deck
Use by inputting card either by text or by [i,j].
"""
splitted_string = card_in_text.split()
value = values[splitted_string[0].lower()]
# Map the suit name to the numeric code used by create_deck and the `suits` dict (spades=1, hearts=2)
suit = suits[splitted_string[2].lower()]
card_to_add = [suit, value]
deck_to_insert_into.append(card_to_add)
def insert_card_by_dict(card, deck_to_insert_into):
"""Adds a new card to the last postion of the deck
Use by inputting card by [i,j].
"""
deck_to_insert_into.append(card)
# print(card)
def get_value_of_card(position_of_card, deck):
"""Returns the value of the card that has the specific position in the deck"""
# print(deck[position_of_card])
value_int = deck[position_of_card][1]
return value_int
def get_suit_of_card(position_of_card, deck):
"""Returns the suit of the card that has the specific position in the deck"""
suit_int = deck[position_of_card][0]
# Suit codes follow create_deck and the `suits` dict: spades=1, hearts=2
if suit_int == 1:
return "Spades"
elif suit_int == 2:
return "Hearts"
def display_card(position_of_card, deck):
"""Displays the card in the specific position in the deck."""
suit = get_suit_of_card(position_of_card, deck)
value = str(get_value_of_card(position_of_card, deck))
text_printed = value + " of " + suit
return text_printed
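# --- Illustrative usage sketch (added; not part of the original module) ---
#     deck = create_deck()
#     insert_jokers(deck)
#     deck = shuffle_deck(deck)      # deterministic because of the fixed seed
#     print(display_card(0, deck))   # e.g. "7 of Hearts"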
# ===== django_db_views/tests/tests.py (repo: Skylude/django-db-views, license: MIT) =====
import django
import os
from unittest.mock import patch
from django.apps import apps
from django.conf import settings
from django.db.migrations.loader import MigrationLoader
from django.db.migrations.state import ProjectState
from django.db.migrations.recorder import MigrationRecorder
from django.db import connections
from django.core.management import call_command
from django.test import TransactionTestCase, override_settings
os.environ['DJANGO_SETTINGS_MODULE'] = 'test_settings'
django.setup()
class MigrationTests(TransactionTestCase):
def tearDown(self):
for db in self.databases:
recorder = MigrationRecorder(connections[db])
recorder.migration_qs.filter(app='migrations').delete()
available_apps = ['migrations']
def assertTableNotExists(self, table, using='default'):
with connections[using].cursor() as cursor:
self.assertNotIn(table, connections[using].introspection.table_names(cursor))
def assertViewExists(self, view, using='default'):
with connections[using].cursor() as cursor:
tables = [
table.name for table in connections[using].introspection.get_table_list(cursor) if table.type == 'v'
]
self.assertIn(view, tables)
def assertViewNotExists(self, view, using='default'):
with connections[using].cursor() as cursor:
tables = [
table.name for table in connections[using].introspection.get_table_list(cursor) if table.type == 'v'
]
self.assertNotIn(view, tables)
@override_settings(MIGRATION_MODULES={'migrations': 'migrations.test_basic_view_creation'})
def test_migrate_successfully_creates_view(self):
call_command('migrate')
self.assertViewExists('question_stat')
@override_settings(MIGRATION_MODULES={'migrations': 'migrations.test_basic_view_creation'})
def test_roll_back_successfully_removes_view(self):
call_command('migrate')
call_command('migrate', 'migrations', 'zero')
self.assertViewNotExists('question_stat')
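# --- Illustrative note (added; not part of the original test module) ---
# The tests assume a 'test_settings' module on the import path and a
# 'migrations' app whose migration packages define the 'question_stat' view.
# With those in place they can be run with Django's test runner, e.g.:
#
#     python -m django test --settings=test_settings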
# ===== iotsim/controls.py (repo: mmamaev/iotsim, license: BSD-3-Clause) =====
from .core import Control, AssemblyContext, Trigger
from .utils import to_name
import numpy as np
from typing import List, Callable
class ContextRetriever:
def __call__(self, assembly_context: AssemblyContext):
return None
class CopyFromParameter(ContextRetriever):
def __init__(self, src_component, src_parameter, apply: Callable = None):
self._src_component = to_name(src_component)
self._src_parameter = src_parameter
if apply is not None:
assert callable(apply)
self._apply = apply
def __call__(self, assembly_context: AssemblyContext):
x = assembly_context.get_parameter(self._src_component, self._src_parameter)
if self._apply is not None:
x = self._apply(x)
return x
class CopyFromHistory(ContextRetriever):
def __init__(self, src_component, lag, apply: Callable = None):
self._src_component = to_name(src_component)
self._lag = lag
if apply is not None:
assert callable(apply)
self._apply = apply
def __call__(self, assembly_context: AssemblyContext):
x = assembly_context.query(self._src_component, self._lag)
if self._apply is not None:
x = self._apply(x)
return x
class CopyFromCounter(ContextRetriever):
def __init__(self, src_component, src_counter, apply: Callable = None):
self._src_component = to_name(src_component)
self._src_counter = src_counter
if apply is not None:
assert callable(apply)
self._apply = apply
def __call__(self, assembly_context: AssemblyContext):
x = assembly_context.read_counter(self._src_component, self._src_counter)
if self._apply is not None:
x = self._apply(x)
return x
class UpdateParametersControl(Control):
def __init__(self, name, behavior, when, trigger: Trigger,
update_choices: List, p: List = None,
priority=0
):
def choose_and_update(assembly_context: AssemblyContext,
update_choices: List, p=None):
choice_idx = np.random.choice(np.arange(len(update_choices)), p=p)
choice = update_choices[choice_idx]
if choice is not None:
for param_tuple in choice:
component, parameter, value = param_tuple
if isinstance(value, ContextRetriever):
value = value(assembly_context)
assembly_context.set_parameter(component, parameter, value)
super().__init__(name, behavior, when, trigger,
action=choose_and_update,
action_parameters=dict(update_choices=update_choices, p=p),
priority=priority)
class ResetCounterControl(Control):
def __init__(self, name, behavior, when, trigger: Trigger,
component, counter,
priority=0
):
def reset_counter(assembly_context: AssemblyContext,
component, counter):
assembly_context.reset_counter(component, counter)
super().__init__(name, behavior, when, trigger,
action=reset_counter,
action_parameters=dict(component=component,
counter=counter),
priority=priority)
class IncrementCounterControl(Control):
def __init__(self, name, behavior, when, trigger: Trigger,
component, counter, increment=1,
priority=0
):
def increment_counter(assembly_context: AssemblyContext,
component, counter, increment):
assembly_context.increment_counter(component, counter, increment)
super().__init__(name, behavior, when, trigger,
action=increment_counter,
action_parameters=dict(component=component,
counter=counter,
increment=increment),
priority=priority)
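
# --- Illustrative sketch, not part of the original module ---
# UpdateParametersControl draws one entry from `update_choices` (each entry is
# a list of (component, parameter, value) tuples, or None for "change nothing")
# with probabilities `p`, then writes the chosen values into the assembly
# context. The behavior/when/trigger arguments come from iotsim's core API and
# are taken as given here; the names and numbers are assumptions.
def _example_update_control(behavior, when, trigger):
    return UpdateParametersControl(
        "degrade_sensor", behavior, when, trigger,
        update_choices=[
            None,                               # 90%: leave parameters alone
            [("sensor1", "noise_level", 0.5)],  # 10%: raise the noise level
        ],
        p=[0.9, 0.1],
    )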
| 36.205128 | 84 | 0.598206 | 419 | 4,236 | 5.720764 | 0.174224 | 0.08761 | 0.060075 | 0.057572 | 0.573217 | 0.540676 | 0.463079 | 0.380476 | 0.331247 | 0.311222 | 0 | 0.00141 | 0.3305 | 4,236 | 116 | 85 | 36.517241 | 0.843794 | 0 | 0 | 0.47191 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033708 | 1 | 0.146067 | false | 0 | 0.044944 | 0.011236 | 0.314607 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7820dee9a037495f3b5046586b45f8279d2ed452 | 3,147 | py | Python | fpn/test.py | jjjump-tutu/depository | 2667e2217c4e0ee1dcdbcf2f94630487d3c14c70 | [
"MIT"
] | null | null | null | fpn/test.py | jjjump-tutu/depository | 2667e2217c4e0ee1dcdbcf2f94630487d3c14c70 | [
"MIT"
] | 1 | 2020-12-01T07:11:08.000Z | 2020-12-01T09:28:55.000Z | fpn/test.py | jjjump-tutu/depository | 2667e2217c4e0ee1dcdbcf2f94630487d3c14c70 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sun Mar 17 23:27:28 2019
@author: Winham
Network evaluation script
"""
import os
import numpy as np
from keras.models import load_model
from keras.utils import to_categorical
from data_preprocess import *
import mit_utils as utils
import time
import matplotlib.pyplot as plt
import tensorflow_addons as tfa
target_class = ['W', 'N1', 'N2', 'N3', 'REM']
target_sig_length = 3072
tic = time.time()
trainX, trainY, TestX, TestY = dataload('channel0.npz')
toc = time.time()
markov_matrix = [[66927., 3996., 179., 6., 86.],
[2252., 17891., 4269., 9., 753.],
[1271., 2262., 80861., 3546., 1043.],
[179., 113., 3247., 15892., 23.],
[565., 912., 427., 1., 32279.]]
markov_matrix = np.array(markov_matrix)
# markov_matrix_copy = markov_matrix.copy()
# for i in range(5):
# markov_matrix_copy[i] /= markov_matrix_copy[i].sum()
# print(markov_matrix_copy)
markov_matrix = np.log2(markov_matrix) ** 3
for i in range(5):
    row_max = np.max(markov_matrix[i])  # avoid shadowing the built-in max()
    markov_matrix[i] /= row_max
# print(markov_matrix)
# assert False
print('Time for data processing--- '+str(toc-tic)+' seconds---')
model_name = 'myNet.h5'
model = load_model(model_name)
# model.summary()
pred_vt = model.predict(TestX, batch_size=256, verbose=1)
pred_v = np.argmax(pred_vt, axis=1)
true_v = np.argmax(TestY, axis=1)
def weight_decay(order):
weights = []
for i in range(order):
weights.append(4 ** (-i))
return weights
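
# Quick worked example (added for clarity): the weights follow 4 ** (-i), so
assert weight_decay(3) == [1, 0.25, 0.0625]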
order = 6
weight = weight_decay(order)
for i in range(1,len(pred_vt)-order):
factor = 1
if pred_v[i-1] != pred_v[i]:
for j in range(1,order+1):
if pred_v[i+j] == pred_v[i-1]:
factor += weight[j-1]*2.1
elif pred_v[i+j] == pred_v[i]:
factor -= 0.55 * weight[j-1]
if factor < 0.1:
factor = 0.1
vector = markov_matrix[pred_v[i - 1]].copy()
vector[pred_v[i-1]] *= factor
re_pred = pred_vt[i] * vector
# print(re_pred)
pred_v[i] = np.argmax(re_pred)
# f1 = 3.1
# f2 = 0.45
# for i in range(1,len(pred_vt)-1):
# if pred_v[i-1] != pred_v[i]:
# if pred_v[i-1] == pred_v[i+1]:
# factor = f1
# elif pred_v[i] == pred_v[i+1]:
# factor = f2
# else:
# factor = 1
# # print(pred_vt[i])
# vector = markov_matrix[pred_v[i - 1]].copy()
# vector[pred_v[i-1]] *= factor
# re_pred = pred_vt[i] * vector
# # print(re_pred)
# pred_v[i] = np.argmax(re_pred)
utils.plot_confusion_matrix(true_v, pred_v, np.array(target_class))
utils.print_results(true_v, pred_v, target_class)
plt.savefig('cm.png')
# pred_v = pred_v[:10000]
# pred_v.resize((100,100))
# plt.subplot(121)
# plt.matshow(pred_v, cmap = plt.cm.Blues)
# plt.savefig('cm_pred.png')
#
# true_v = true_v[:10000]
# true_v.resize((100,100))
# plt.subplot(122)
# plt.matshow(true_v, cmap = plt.cm.Blues)
# plt.savefig('cm_true.png')
| 27.605263 | 68 | 0.575151 | 480 | 3,147 | 3.59375 | 0.314583 | 0.072464 | 0.062609 | 0.04058 | 0.302029 | 0.238261 | 0.211594 | 0.196522 | 0.132174 | 0.113623 | 0 | 0.079792 | 0.267239 | 3,147 | 113 | 69 | 27.849558 | 0.668257 | 0.327614 | 0 | 0 | 0 | 0 | 0.038304 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018519 | false | 0 | 0.166667 | 0 | 0.203704 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7822ce08c98bbae4ad9b61ae8abb7997cf1d7c6e | 902 | py | Python | solutions/codeforces/158B.py | forxhunter/ComputingIntro | 50fa2ac030748626c694ec5c884c5ac32f0b42a8 | [
"Apache-2.0"
] | 1 | 2021-01-02T04:31:34.000Z | 2021-01-02T04:31:34.000Z | solutions/codeforces/158B.py | forxhunter/ComputingIntro | 50fa2ac030748626c694ec5c884c5ac32f0b42a8 | [
"Apache-2.0"
] | null | null | null | solutions/codeforces/158B.py | forxhunter/ComputingIntro | 50fa2ac030748626c694ec5c884c5ac32f0b42a8 | [
"Apache-2.0"
] | null | null | null | '''
Codeforces 158B "Taxi": greedily pack groups of 1-4 children into 4-seat taxis.
'''
groups = [0, 0, 0, 0]
n = int(input())
data = input()
carsnum = 0
for i in range(4):
groups[i] = data.count(str(i+1))
# deal with 4 people group
carsnum += groups[3]
groups[3] = 0
# deal with 2 people group
carsnum += groups[1] // 2
groups[1] %= 2
# deal with 1 and 3 people group
if groups[0] <= groups[2]:
carsnum += groups[0]
groups[2] -= groups[0]
groups[0] = 0
# deal with the 3 people group left
carsnum += groups[2]
if groups[1] != 0:
carsnum += 1
else:
carsnum += groups[2]
groups[0] -= groups[2]
groups[2] = 0
# deal with the 1 people group left
carsnum += groups[0] // 4
groups[0] %= 4
if groups[1] == 0:
if groups[0] != 0:
carsnum += 1
else:
# 2 people group has 1 group
if groups[0] == 3:
carsnum += 2
else:
carsnum += 1
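
# Worked example (added for clarity): for n=5 with groups "1 2 4 3 3" the code
# gives one taxi to the 4-group, pairs the 1-group with a 3-group, gives the
# remaining 3-group its own taxi, and puts the lone 2-group in a fourth taxi,
# printing 4.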
print(carsnum) | 19.608696 | 39 | 0.531042 | 139 | 902 | 3.446043 | 0.208633 | 0.146138 | 0.10856 | 0.087683 | 0.229645 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082927 | 0.318182 | 902 | 46 | 40 | 19.608696 | 0.695935 | 0.201774 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78242ab3f8e265a2bb01183be9d47df6398aa178 | 4,113 | py | Python | mppsolar/devices/device.py | BarkinSpider/mpp-solar | 071ca0cd9feea458b1e36dc020aa704b2000e431 | [
"MIT"
] | 1 | 2021-03-02T22:44:04.000Z | 2021-03-02T22:44:04.000Z | mppsolar/devices/device.py | BarkinSpider/mpp-solar | 071ca0cd9feea458b1e36dc020aa704b2000e431 | [
"MIT"
] | null | null | null | mppsolar/devices/device.py | BarkinSpider/mpp-solar | 071ca0cd9feea458b1e36dc020aa704b2000e431 | [
"MIT"
] | null | null | null | import abc
import importlib
import logging
log = logging.getLogger("MPP-Solar")
SERIAL_TYPE_TEST = 1
SERIAL_TYPE_USB = 2
SERIAL_TYPE_ESP32 = 4
SERIAL_TYPE_SERIAL = 8
class AbstractDevice(metaclass=abc.ABCMeta):
"""
Abstract device class
"""
def __init__(self, *args, **kwargs):
self._protocol = None
self._protocol_class = None
self._port = None
def is_test_device(self, serial_device):
return "test" in serial_device.lower()
def is_directusb_device(self, serial_device):
"""
Determine if this instance is using direct USB connection
(instead of a serial connection)
"""
if not serial_device:
return False
if "hidraw" in serial_device:
log.debug("Device matches hidraw")
return True
if "mppsolar" in serial_device:
log.debug("Device matches mppsolar")
return True
return False
def is_ESP32_device(self, serial_device):
return "esp" in serial_device.lower()
def get_port_type(self, port):
if self.is_test_device(port):
return SERIAL_TYPE_TEST
elif self.is_directusb_device(port):
return SERIAL_TYPE_USB
elif self.is_ESP32_device(port):
return SERIAL_TYPE_ESP32
else:
return SERIAL_TYPE_SERIAL
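
    # Worked examples (added for clarity; these follow directly from the
    # substring checks above):
    #   get_port_type("test0")        -> SERIAL_TYPE_TEST   ("test" substring)
    #   get_port_type("/dev/hidraw0") -> SERIAL_TYPE_USB    ("hidraw" substring)
    #   get_port_type("esp32")        -> SERIAL_TYPE_ESP32  ("esp" substring)
    #   get_port_type("/dev/ttyUSB0") -> SERIAL_TYPE_SERIAL (fallback)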
def set_protocol(self, protocol=None):
"""
Set the protocol for this device
"""
log.debug(f"device.set_protocol with protocol {protocol}")
if protocol is None:
self._protocol = None
self._protocol_class = None
return
protocol_id = protocol.lower()
# Try to import the protocol module with the supplied name (may not exist)
try:
proto_module = importlib.import_module(
"mppsolar.protocols." + protocol_id, "."
)
except ModuleNotFoundError:
log.error(f"No module found for protocol {protocol_id}")
self._protocol = None
self._protocol_class = None
return
# Find the protocol class - classname must be the same as the protocol_id
try:
self._protocol_class = getattr(proto_module, protocol_id)
except AttributeError:
log.error(f"Module {proto_module} has no attribute {protocol_id}")
self._protocol = None
self._protocol_class = None
return
# Instantiate the class
# TODO: fix protocol instantiate
self._protocol = self._protocol_class(
"init_var", proto_keyword="value", second_keyword=123
)
def set_port(self, port=None):
port_type = self.get_port_type(port)
if port_type == SERIAL_TYPE_TEST:
log.info("Using testio for communications")
from mppsolar.io.testio import TestIO
self._port = TestIO()
elif port_type == SERIAL_TYPE_USB:
log.info("Using hidrawio for communications")
from mppsolar.io.hidrawio import HIDRawIO
self._port = HIDRawIO(device_path=port)
elif port_type == SERIAL_TYPE_ESP32:
log.info("Using esp32io for communications")
from mppsolar.io.esp32io import ESP32IO
self._port = ESP32IO(device_path=port)
elif port_type == SERIAL_TYPE_SERIAL:
log.info("Using serialio for communications")
from mppsolar.io.serialio import SerialIO
self._port = SerialIO(serial_port=port, serial_baud=2400)
else:
self._port = None
@abc.abstractmethod
def run_command(self, command=None, show_raw=False):
raise NotImplementedError
@abc.abstractmethod
def get_status(self, show_raw):
raise NotImplementedError
@abc.abstractmethod
def get_settings(self, show_raw):
raise NotImplementedError
def run_default_command(self, show_raw):
return self.run_command(
command=self._protocol.DEFAULT_COMMAND, show_raw=show_raw
)
| 31.883721 | 82 | 0.624605 | 479 | 4,113 | 5.125261 | 0.242171 | 0.063544 | 0.041548 | 0.032587 | 0.322607 | 0.171894 | 0.133605 | 0.09002 | 0.043177 | 0.043177 | 0 | 0.010105 | 0.302213 | 4,113 | 128 | 83 | 32.132813 | 0.845296 | 0.083637 | 0 | 0.284211 | 0 | 0 | 0.10119 | 0 | 0 | 0 | 0 | 0.007813 | 0 | 1 | 0.115789 | false | 0 | 0.084211 | 0.031579 | 0.357895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7824f64ce3bb3cc3961afdec7e3c6fa3721e5452 | 2,539 | py | Python | autism/views.py | jenith-hue/Lung_Cancer | 69171d26ab1a2eccf4ae7243e8bdd2d9f1ccbfb5 | [
"MIT"
] | null | null | null | autism/views.py | jenith-hue/Lung_Cancer | 69171d26ab1a2eccf4ae7243e8bdd2d9f1ccbfb5 | [
"MIT"
] | null | null | null | autism/views.py | jenith-hue/Lung_Cancer | 69171d26ab1a2eccf4ae7243e8bdd2d9f1ccbfb5 | [
"MIT"
] | null | null | null | from django.shortcuts import render, redirect, get_object_or_404
from django.http import HttpResponseRedirect
from .forms import Predict
from .ML_ALGORITHM import you
import numpy
def index(request):
return render(request, 'autism/home.html')
def predict(request):
return render(request, 'autism/predict.html')
def predicted(request):
if request.method == "POST":
form = Predict(request.POST)
type1 = int(request.POST['type1'])
type2 = int(request.POST['type2'])
type3 = int(request.POST['type3'])
type4 = int(request.POST['type4'])
type5 = int(request.POST['type5'])
type6 = float(request.POST['type6'])
type7 = float(request.POST['type7'])
type8 = int(request.POST['type8'])
        x = []
new_list = []
x.append(type1)
x.append(type2)
x.append(type3)
x.append(type4)
x.append(type5)
x.append(type6)
x.append(type7)
x.append(type8)
        prediction = you.getPrediction(x)  # renamed from `list` to avoid shadowing the built-in
        yes = prediction[0]
        no = 100 - prediction[0]
        new_list.append(yes)
        new_list.append(no)
        label = ['yes', 'no']
        zipped_list = zip(prediction)
        context = {
            'zipped_list': zipped_list,
            'list': new_list,
            'label': label,
        }
        print(prediction)
return render(request, 'autism/predicted.html',context)
else:
form = Predict()
return render(request,'autism/predicted.html',{'form':form})
def restapi(request):
type1 = request.GET.get('value1', -1)
type2 = request.GET.get('value2', -1)
type3 = request.GET.get('value3', -1)
type4 = request.GET.get('value4', -1)
type5 = request.GET.get('value5', -1)
type6 = request.GET.get('value6', -1)
type7 = request.GET.get('value7', -1)
type8 = request.GET.get('value8', -1)
    x = []
new_list = []
x.append(type1)
x.append(type2)
x.append(type3)
x.append(type4)
x.append(type5)
x.append(type6)
x.append(type7)
x.append(type8)
    prediction = you.getPrediction(x)  # renamed from `list` to avoid shadowing the built-in
    yes = prediction[0]
    no = 100 - prediction[0]
    new_list.append(yes)
    new_list.append(no)
    label = ['yes', 'no']
    zipped_list = zip(prediction)
    context = {
        'zipped_list': zipped_list,
        'list': new_list,
        'label': label,
    }
    print(prediction)
return render(request, 'autism/predicted.html',context) | 29.523256 | 64 | 0.554943 | 301 | 2,539 | 4.621262 | 0.215947 | 0.080518 | 0.074766 | 0.089863 | 0.508986 | 0.462976 | 0.435658 | 0.435658 | 0.435658 | 0.435658 | 0 | 0.038983 | 0.302875 | 2,539 | 86 | 65 | 29.523256 | 0.746893 | 0 | 0 | 0.575 | 0 | 0 | 0.096063 | 0.024803 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.0625 | 0.025 | 0.175 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78267247f1ad73db4ca30cdb83a52c4d8988159a | 1,708 | py | Python | Program.py | evanxia1018/CSE539_Project_LLE | 881ea2278c39c16716e5de83dd8abbd267806a35 | [
"MIT"
] | 2 | 2019-11-10T02:04:52.000Z | 2020-04-19T03:51:51.000Z | Program.py | SarahLynnePu/CSE539_Project_LLE | 881ea2278c39c16716e5de83dd8abbd267806a35 | [
"MIT"
] | null | null | null | Program.py | SarahLynnePu/CSE539_Project_LLE | 881ea2278c39c16716e5de83dd8abbd267806a35 | [
"MIT"
] | 3 | 2017-12-28T14:09:24.000Z | 2020-04-19T04:25:03.000Z | import time
import Generation_Stage
import Evaluate_lle
import Evaluate_pca
import Evaluation_Stage
print("**********************************************************************")
print("Hello. This is CSE569 Project Demo, produced by Haisi Yi and Zheng Xia")
print("**********************************************************************\n\n")
while True:
option = input("\nPlease specify the task to perform:\n"
"1: Generate five artificial dataset and read MNIST_images dataset\n"
"2: Perform PCA to all artificial dataset and MNIST_images dataset\n"
"3: Perform LLE 11 * 6 times, using parameter k = 5, 6, ..., 15, to all artificial dataset and MNIST_images dataset\n"
"4: Do task 1, task 2 and task 3. This task will take about 20 min\n"
"5: Evaluate the data produced by PCA. This task will take about 40 min.\n"
"6: Evaluate the data produced by LLE. This task will take about 40 min.\n"
"7: Run everything. This will take about 8 hours.\n"
"0: Exit this Demo\n")
option = int(option)
if option == 1:
Generation_Stage.generate_original_datasets()
elif option == 2:
Generation_Stage.perform_pca_to_original_datasets()
elif option == 3:
Generation_Stage.perform_lle_to_orginal_datasets()
elif option == 4:
Generation_Stage.run()
elif option == 5:
Evaluate_pca.run()
elif option == 6:
Evaluate_lle.run()
elif option == 7:
Evaluation_Stage.run()
break
elif option == 0:
break
else:
print("Invalid option, try again")
| 32.846154 | 137 | 0.566745 | 215 | 1,708 | 4.390698 | 0.367442 | 0.074153 | 0.055085 | 0.060381 | 0.225636 | 0.150424 | 0.150424 | 0.150424 | 0.09322 | 0 | 0 | 0.029103 | 0.275761 | 1,708 | 51 | 138 | 33.490196 | 0.734034 | 0 | 0 | 0.052632 | 0 | 0.052632 | 0.475631 | 0.084557 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.131579 | 0 | 0.131579 | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7828bc8cfbb067ddc1a2bb084a724e2b6671e88f | 3,868 | py | Python | medical_peek_api/controller/exception_handler_controller.py | WillCallahan/medical_peek | e27e547ea7c8bc1deea8668090ff582020d7d6b2 | [
"MIT"
] | null | null | null | medical_peek_api/controller/exception_handler_controller.py | WillCallahan/medical_peek | e27e547ea7c8bc1deea8668090ff582020d7d6b2 | [
"MIT"
] | 12 | 2021-04-06T18:25:47.000Z | 2022-03-12T00:52:42.000Z | medical_peek_api/controller/exception_handler_controller.py | WillCallahan/medical_peek | e27e547ea7c8bc1deea8668090ff582020d7d6b2 | [
"MIT"
] | null | null | null | import logging
from django.core.handlers.wsgi import WSGIRequest
from django.http import JsonResponse
from django.views.defaults import server_error, page_not_found
from rest_framework import status
from medical_peek_core.model.j_send import JSend, JSendSerializer
from medical_peek_core.utility.exception_utility import ExceptionUtility
logger = logging.getLogger(__name__)
def rest_exception_handler(exception, context):
"""
Exception handler utilized by the Django Rest Framework
The exception handler will override the default implementation of the Django Rest Framework Exception handler if
the "Accept" header of the request in the current context has "application/json" in its value. If this is true,
    a JsonResponse view will be returned to the user containing a JSend object that represents the exception.
:param exception: Exception that occurred
:type exception: object
:param context: Context of the exception (i.e. request)
:type context: dict
    :return: JsonResponse view with a JSend error if the Accept header of the request has a value of "application/json"
:rtype: JsonResponse
"""
if context.get('request', None) is not None \
and 'application/json' in context.get('request').META.get('HTTP_ACCEPT', ''):
logger.error("Unhandled exception!")
logger.exception(exception)
j_send = ExceptionUtility.get_jsend_from_exception(exception)
j_send_serializer = JSendSerializer(data = j_send.__dict__)
j_send_serializer.is_valid(True)
return JsonResponse(j_send_serializer.data, status = j_send.code)
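
# --- Illustrative note, not part of the original module ---
# DRF only calls rest_exception_handler if it is registered in settings; a
# minimal sketch (the dotted path is an assumption about this project's layout):
#
# REST_FRAMEWORK = {
#     "EXCEPTION_HANDLER": "medical_peek_api.controller"
#                          ".exception_handler_controller.rest_exception_handler",
# }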
def handler500(request, template_name = '500.html'):
"""
    Overrides the default Django implementation of a 500 error so that a JSON response will be provided if the accept
header of the request has a value of "application/json". Otherwise the default server error implementation is
called.
To enable this handler, the DEBUG setting in the Django settings must be set to False
:param request: Current Request
:type request: WSGIRequest
:param template_name: Template of the error page
:type template_name: str
:return: Response
:rtype: object
"""
if request is not None and 'application/json' in request.META.get('HTTP_ACCEPT', ''):
logger.error("Unhandled exception!")
j_send = JSend()
j_send.status = JSend.Status.error
j_send.code = status.HTTP_500_INTERNAL_SERVER_ERROR
j_send.message = 'Unexpected API Server Error'
j_send_serializer = JSendSerializer(data = j_send.__dict__)
j_send_serializer.is_valid(True)
return JsonResponse(j_send_serializer.data, status = status.HTTP_500_INTERNAL_SERVER_ERROR)
return server_error(request = request, template_name = template_name)
def handler404(request, template_name = '404.html'):
"""
    Overrides the default Django implementation of a 404 error so that a JSON response will be provided if the accept
header of the request has a value of "application/json". Otherwise the default server error implementation is
called.
To enable this handler, the DEBUG setting in the Django settings must be set to False
:param request: Current Request
:type request: WSGIRequest
:param template_name: Template of the error page
:type template_name: str
:return: Response
:rtype: object
"""
if 'application/json' in request.META.get('HTTP_ACCEPT', ''):
j_send = JSend()
j_send.status = JSend.Status.error
j_send.code = status.HTTP_404_NOT_FOUND
j_send.message = 'Not found'
j_send_serializer = JSendSerializer(data = j_send.__dict__)
j_send_serializer.is_valid(True)
return JsonResponse(j_send_serializer.data, status = j_send.code)
return page_not_found(request, template_name)
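
# --- Illustrative note, not part of the original module ---
# Django only uses these handlers when they are assigned in the ROOT_URLCONF
# module (and DEBUG is False); the module path below is an assumption:
#
# handler404 = "medical_peek_api.controller.exception_handler_controller.handler404"
# handler500 = "medical_peek_api.controller.exception_handler_controller.handler500"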
| 44.45977 | 117 | 0.735264 | 527 | 3,868 | 5.222011 | 0.208729 | 0.043605 | 0.049055 | 0.024709 | 0.579578 | 0.579578 | 0.559956 | 0.53452 | 0.480015 | 0.441497 | 0 | 0.008684 | 0.196225 | 3,868 | 86 | 118 | 44.976744 | 0.876488 | 0.422441 | 0 | 0.368421 | 0 | 0 | 0.089904 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.184211 | 0 | 0.394737 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
782abecf863cb582e8ab49f243f14f653cba734a | 1,188 | py | Python | 00 UNICEF/03 Data New/aedes-main/best_aedes_model.py | Cirrolytix/aedes_unicef_2022 | 23a26d57d5316ba44d573b4c1dcefcad4e10b157 | [
"MIT"
] | null | null | null | 00 UNICEF/03 Data New/aedes-main/best_aedes_model.py | Cirrolytix/aedes_unicef_2022 | 23a26d57d5316ba44d573b4c1dcefcad4e10b157 | [
"MIT"
] | null | null | null | 00 UNICEF/03 Data New/aedes-main/best_aedes_model.py | Cirrolytix/aedes_unicef_2022 | 23a26d57d5316ba44d573b4c1dcefcad4e10b157 | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from sklearn.impute import SimpleImputer
# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
train_test_split(features, tpot_data['target'], random_state=42)
imputer = SimpleImputer(strategy="median")
imputer.fit(training_features)
training_features = imputer.transform(training_features)
testing_features = imputer.transform(testing_features)
# Average CV score on the training set was: 0.889950753668092
exported_pipeline = XGBClassifier(learning_rate=1.0, max_depth=1, min_child_weight=10, n_estimators=100, n_jobs=1, subsample=0.6000000000000001, verbosity=0)
# Fix random state in exported estimator
if hasattr(exported_pipeline, 'random_state'):
setattr(exported_pipeline, 'random_state', 42)
exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)
| 45.692308 | 157 | 0.81229 | 165 | 1,188 | 5.630303 | 0.515152 | 0.086114 | 0.03014 | 0.066738 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046816 | 0.10101 | 1,188 | 25 | 158 | 47.52 | 0.823034 | 0.147306 | 0 | 0 | 0 | 0 | 0.074331 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.277778 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
782c3b76e7c1eaa2f8d3ae8a4ab9167a73ffb9ea | 3,379 | py | Python | espressodb/base/tests/views/urls.py | remram44/espressodb | 5aad7222ab81c0f1694b51171e5d197dbcc8a65f | [
"BSD-3-Clause"
] | 8 | 2019-12-10T04:30:01.000Z | 2020-10-30T09:40:22.000Z | espressodb/base/tests/views/urls.py | remram44/espressodb | 5aad7222ab81c0f1694b51171e5d197dbcc8a65f | [
"BSD-3-Clause"
] | 41 | 2019-10-23T00:26:25.000Z | 2021-10-21T07:55:57.000Z | espressodb/base/tests/views/urls.py | remram44/espressodb | 5aad7222ab81c0f1694b51171e5d197dbcc8a65f | [
"BSD-3-Clause"
] | 3 | 2020-01-09T21:29:09.000Z | 2021-03-14T22:20:52.000Z | """Unittest for all present urls
"""
from django.test import TestCase
from django.contrib.auth.models import User
from espressodb.base.utilities.apps import get_apps_slug_map
import espressodb.base.utilities.blackmagicsorcery as re
URLS = ["/", "/populate/", "/populate-result/"]
LOGGED_IN_URLS = [
"/notifications/",
"/notifications/debug/",
"/notifications/info/",
"/notifications/warning/",
"/notifications/error/",
"/admin/",
"/admin/auth/group/",
"/admin/auth/user/",
"/admin/notifications/notification/",
]
class URLViewTest(TestCase):
"""Tests if all urls are present
"""
exclude_urls = []
@classmethod
def url_excluded(cls, url: str) -> bool:
"""Checks if the url is in the exclude_urls pattern list
Arguments:
url: Regex pattern to match.
"""
return any([re.match(pattern, url) is not None for pattern in cls.exclude_urls])
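
    # Worked example (added for clarity; assumes blackmagicsorcery's `match`
    # behaves like re.match): with exclude_urls = [r"/admin/.*"],
    # url_excluded("/admin/auth/user/") returns True.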
def setUp(self):
"""Create a user for the test
"""
self.username = "test user"
self.password = "admin1234"
user = User.objects.create(username=self.username)
user.set_password(self.password)
user.save()
def test_open_urls(self):
"""Tests the HTTP status of the client.
"""
for url in URLS:
if self.url_excluded(url):
continue
with self.subTest(url=url):
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
def test_logged_in_urls_as_logged_out(self):
"""Tests wether login required URLS are present but require login.
"""
for url in LOGGED_IN_URLS:
if self.url_excluded(url):
continue
with self.subTest(url=url):
with self.subTest(follow=False):
response = self.client.get(url, follow=False)
self.assertEqual(response.status_code, 302)
with self.subTest(follow=True):
response = self.client.get(url, follow=True)
self.assertEqual(response.status_code, 200)
self.assertEqual(
response.redirect_chain[-1][0],
("/admin" if "admin" in url else "") + f"/login/?next={url}",
)
def test_logged_in_urls_as_logged_in(self):
"""Tests wether login required URLS are present and viewable by logged in user.
"""
login = self.client.login(username=self.username, password=self.password)
self.assertTrue(login)
for url in LOGGED_IN_URLS:
if self.url_excluded(url):
continue
with self.subTest(url=url):
response = self.client.get(url)
self.assertEqual(response.status_code, 302 if "admin" in url else 200)
def test_documentation_pages(self):
"""Tests wether documentation pages are present for each project app with models.
"""
for app_slug, app in get_apps_slug_map().items():
if not app.get_models():
continue
url = f"/documentation/{app_slug}/"
with self.subTest(app=app, url=url):
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
| 31.877358 | 89 | 0.586268 | 393 | 3,379 | 4.926209 | 0.282443 | 0.028926 | 0.046488 | 0.054236 | 0.356921 | 0.340393 | 0.270661 | 0.242769 | 0.19938 | 0.19938 | 0 | 0.010235 | 0.306008 | 3,379 | 105 | 90 | 32.180952 | 0.815352 | 0.145605 | 0 | 0.272727 | 0 | 0 | 0.1 | 0.044326 | 0 | 0 | 0 | 0 | 0.106061 | 1 | 0.090909 | false | 0.045455 | 0.060606 | 0 | 0.19697 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
782dc6f49c603eca2fa962e30fbfc31a6e32254b | 1,723 | py | Python | python/datagraph/graphviz/dmo/digraph_text_cleanser.py | jiportilla/ontology | 8a66bb7f76f805c64fc76cfc40ab7dfbc1146f40 | [
"MIT"
] | null | null | null | python/datagraph/graphviz/dmo/digraph_text_cleanser.py | jiportilla/ontology | 8a66bb7f76f805c64fc76cfc40ab7dfbc1146f40 | [
"MIT"
] | null | null | null | python/datagraph/graphviz/dmo/digraph_text_cleanser.py | jiportilla/ontology | 8a66bb7f76f805c64fc76cfc40ab7dfbc1146f40 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: UTF-8 -*-
from base import BaseObject
class DigraphTextCleanser(BaseObject):
"""
Purpose:
Edge Generation for a graphviz.Digraph object
Traceability:
https://github.ibm.com/GTS-CDO/unstructured-analytics/issues/1426#issuecomment-16165027
"""
def __init__(self,
graph_style: dict,
is_debug: bool = True):
"""
Created:
21-Nov-2019
craig.trim@ibm.com
* https://github.ibm.com/GTS-CDO/unstructured-analytics/issues/1426#issuecomment-16165027
:param graph_style:
a graph style defined in a graph stylesheet
e.g.:
- resources/config/graph/graphviz_nlp_graph.yml
- resources/config/graph/graphviz_big_graph.yml
:param is_debug:
True increase log output at DEBUG level
"""
BaseObject.__init__(self, __name__)
self._is_debug = is_debug
self._graph_style = graph_style
def process(self,
some_text: str) -> str:
"""
Purpose:
determine whether to split the text for readability
:param some_text:
input text
:return:
(optionally) processed text
"""
if "graph" not in self._graph_style:
return some_text
if "split_text" not in self._graph_style["graph"]:
return some_text
if not self._graph_style["graph"]["split_text"]:
return some_text
if " " not in some_text:
return some_text
tokens = some_text.split(" ")
return "{}\\n{}".format(tokens[0], " ".join(tokens[1:]))
| 29.20339 | 103 | 0.572838 | 194 | 1,723 | 4.871134 | 0.458763 | 0.084656 | 0.074074 | 0.060317 | 0.237037 | 0.156614 | 0.156614 | 0.156614 | 0.156614 | 0.156614 | 0 | 0.028497 | 0.327916 | 1,723 | 58 | 104 | 29.706897 | 0.787565 | 0.426582 | 0 | 0.2 | 0 | 0 | 0.056747 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.05 | 0 | 0.45 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
782ed811b859ad654cfa2961ac4c649ce8d9f83b | 7,550 | py | Python | sqlglot/optimizer/qualify_columns.py | RobinL/sqlglot | 7ec1022ac4c1fbaeb44e47d5f187a78e5c14735a | [
"MIT"
] | null | null | null | sqlglot/optimizer/qualify_columns.py | RobinL/sqlglot | 7ec1022ac4c1fbaeb44e47d5f187a78e5c14735a | [
"MIT"
] | null | null | null | sqlglot/optimizer/qualify_columns.py | RobinL/sqlglot | 7ec1022ac4c1fbaeb44e47d5f187a78e5c14735a | [
"MIT"
] | null | null | null | import itertools
import sqlglot.expressions as exp
from sqlglot.errors import OptimizeError
from sqlglot.optimizer.schema import ensure_schema
from sqlglot.optimizer.scope import traverse_scope
def qualify_columns(expression, schema):
"""
Rewrite sqlglot AST to have fully qualified columns.
Example:
>>> import sqlglot
>>> schema = {"tbl": {"col": "INT"}}
>>> expression = sqlglot.parse_one("SELECT col FROM tbl")
>>> qualify_columns(expression, schema).sql()
'SELECT tbl.col AS col FROM tbl'
Args:
expression (sqlglot.Expression): expression to qualify
schema (dict|sqlglot.optimizer.Schema): Database schema
Returns:
sqlglot.Expression: qualified expression
"""
schema = ensure_schema(schema)
# We'll use this when generating alias names
sequence = itertools.count()
for scope in traverse_scope(expression):
_check_union_outputs(scope)
_qualify_derived_tables(scope.ctes, scope, sequence)
_qualify_derived_tables(scope.derived_tables, scope, sequence)
_qualify_columns(scope, schema)
_expand_stars(scope, schema)
_qualify_outputs(scope)
_check_unknown_tables(scope)
return expression
def _check_union_outputs(scope):
"""Assert that the outputs of both sides of a UNION are the same"""
if not isinstance(scope.expression, exp.Union):
return
left, right = scope.union
if left.outputs != right.outputs:
raise OptimizeError(
f"UNION outputs not equal: {left.outputs} vs. {left.outputs}"
)
def _qualify_derived_tables(derived_tables, scope, sequence):
"""Ensure all derived tables have aliases"""
for derived_table in derived_tables:
table_alias = derived_table.args.get("alias")
if not table_alias:
table_alias = exp.TableAlias()
derived_table.set("alias", table_alias)
alias = table_alias.args.get("this")
if not alias:
alias = exp.to_identifier(f"_q_{next(sequence)}")
scope.rename_selectable(None, alias.name)
table_alias.set("this", alias)
# Remove any alias column list
# (e.g. SELECT ... FROM (SELECT ...) AS foo(col1, col2)
table_alias.args.pop("columns", None)
def _qualify_columns(scope, schema):
"""Disambiguate columns, ensuring each column reference specifies a selectable"""
unambiguous_columns = None # lazily loaded
for column in scope.references:
column_table = column.text("table")
column_name = column.text("this")
if (
column_table
and column_table in scope.selectables
and column_name
not in _get_selectable_columns(column_table, scope.selectables, schema)
):
raise OptimizeError(f"Unknown column: {column_name}")
if not column_table:
if unambiguous_columns is None:
selectable_columns = {
k: _get_selectable_columns(k, scope.selectables, schema)
for k in scope.referenced_selectables
}
unambiguous_columns = _get_unambiguous_columns(selectable_columns)
column_table = unambiguous_columns.get(column_name)
if not column_table and not scope.is_subquery:
raise OptimizeError(f"Ambiguous column: {column_name}")
column.set("table", exp.to_identifier(column_table))
def _expand_stars(scope, schema):
"""Expand stars to lists of column selections"""
all_new_columns = []
for expression in scope.selects:
if isinstance(expression, exp.Star):
tables = list(scope.referenced_selectables)
elif isinstance(expression, exp.Column) and isinstance(
expression.this, exp.Star
):
tables = [expression.text("table")]
else:
continue
new_columns = []
for table in tables:
if table not in scope.selectables:
raise OptimizeError(f"Unknown table: {table}")
columns = _get_selectable_columns(table, scope.selectables, schema)
for column in columns:
new_columns.append(
exp.Column(
this=exp.to_identifier(column), table=exp.to_identifier(table)
)
)
expression.replace(*new_columns)
all_new_columns.extend(new_columns)
scope.columns.extend(all_new_columns)
def _qualify_outputs(scope):
"""Ensure all output columns are aliased"""
for i, (selection, aliased_column) in enumerate(
itertools.zip_longest(scope.selects, scope.outer_column_list)
):
if isinstance(selection, exp.Column):
selection_name = selection.text("this")
new_selection = exp.alias_(selection.copy(), selection_name)
selection.replace(new_selection)
selection = new_selection
elif not isinstance(selection, exp.Alias):
selection_name = f"_col_{i}"
new_selection = exp.alias_(selection.copy(), selection_name)
selection.replace(new_selection)
selection = new_selection
if aliased_column:
selection.set("alias", exp.to_identifier(aliased_column))
def _check_unknown_tables(scope):
if scope.external_references and not scope.is_correlated_subquery:
raise OptimizeError(
f"Unknown table: {scope.external_references[0].text('table')}"
)
def _get_unambiguous_columns(selectable_columns):
"""
Find all the unambiguous columns in selectables.
Args:
selectable_columns (dict): Mapping of names to selectable columns
Returns:
dict: Mapping of column name to selectable name
"""
if not selectable_columns:
return {}
selectable_columns = list(selectable_columns.items())
first_table, first_columns = selectable_columns[0]
unambiguous_columns = {
col: first_table for col in _find_unique_columns(first_columns)
}
for table, columns in selectable_columns[1:]:
unique = _find_unique_columns(columns)
ambiguous = set(unambiguous_columns).intersection(unique)
for column in ambiguous:
unambiguous_columns.pop(column)
for column in unique.difference(ambiguous):
unambiguous_columns[column] = table
return unambiguous_columns
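
# Worked example (added for clarity): with
#   _get_unambiguous_columns({"t1": ["a", "b"], "t2": ["b", "c"]})
# "b" occurs in both tables and is dropped, leaving {"a": "t1", "c": "t2"}.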
def _find_unique_columns(columns):
"""
Find the unique columns in a list of columns.
Example:
>>> sorted(_find_unique_columns(["a", "b", "b", "c"]))
['a', 'c']
This is necessary because duplicate column names are ambiguous.
"""
counts = {}
for column in columns:
counts[column] = counts.get(column, 0) + 1
return {column for column, count in counts.items() if count == 1}
def _get_selectable_columns(name, selectables, schema):
"""Resolve the selectable columns for a given selectable `name`"""
if name not in selectables:
raise OptimizeError(f"Unknown table: {name}")
selectable = selectables[name]
# If referencing a table, return the columns from the schema
if isinstance(selectable, exp.Table):
try:
return schema.column_names(selectable)
except Exception as e:
raise OptimizeError(str(e)) from e
# Otherwise, if referencing another scope, return that scope's outputs
return selectable.outputs
| 33.114035 | 86 | 0.649669 | 859 | 7,550 | 5.519208 | 0.19092 | 0.053786 | 0.024046 | 0.021936 | 0.102299 | 0.068762 | 0.040076 | 0.040076 | 0.040076 | 0.040076 | 0 | 0.001439 | 0.263709 | 7,550 | 227 | 87 | 33.259912 | 0.851412 | 0.188742 | 0 | 0.094891 | 0 | 0 | 0.050268 | 0.007373 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072993 | false | 0 | 0.036496 | 0 | 0.160584 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
782f027acd854915903862b95967abf5208d8465 | 5,314 | py | Python | model_zoo/cifar10_subclass/cifar10_subclass.py | sorrycc/elasticdl | 01439e0bf7bba6ebfffe265916fd41370a59c29d | [
"MIT"
] | 2 | 2021-07-07T16:31:50.000Z | 2021-11-08T09:23:01.000Z | model_zoo/cifar10_subclass/cifar10_subclass.py | sorrycc/elasticdl | 01439e0bf7bba6ebfffe265916fd41370a59c29d | [
"MIT"
] | null | null | null | model_zoo/cifar10_subclass/cifar10_subclass.py | sorrycc/elasticdl | 01439e0bf7bba6ebfffe265916fd41370a59c29d | [
"MIT"
] | 1 | 2021-08-18T18:14:38.000Z | 2021-08-18T18:14:38.000Z | import tensorflow as tf
from elasticdl.python.common.constants import Mode
class CustomModel(tf.keras.Model):
def __init__(self, channel_last=True):
super(CustomModel, self).__init__(name="cifar10_model")
use_bias = True
self._conv_1 = tf.keras.layers.Conv2D(
32,
kernel_size=(3, 3),
padding="same",
use_bias=use_bias,
activation=None,
)
self._bn_1 = tf.keras.layers.BatchNormalization(
epsilon=1e-06, axis=-1, momentum=0.9
)
self._relu_1 = tf.keras.layers.Activation(tf.nn.relu)
self._conv_2 = tf.keras.layers.Conv2D(
32,
kernel_size=(3, 3),
padding="same",
use_bias=use_bias,
activation=None,
)
self._bn_2 = tf.keras.layers.BatchNormalization(
epsilon=1e-06, axis=-1, momentum=0.9
)
self._relu_2 = tf.keras.layers.Activation(tf.nn.relu)
self._max_pool_1 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))
self._dropout_1 = tf.keras.layers.Dropout(0.2)
self._conv_3 = tf.keras.layers.Conv2D(
64,
kernel_size=(3, 3),
padding="same",
use_bias=use_bias,
activation=None,
)
self._bn_3 = tf.keras.layers.BatchNormalization(
epsilon=1e-06, axis=-1, momentum=0.9
)
self._relu_3 = tf.keras.layers.Activation(tf.nn.relu)
self._conv_4 = tf.keras.layers.Conv2D(
64,
kernel_size=(3, 3),
padding="same",
use_bias=use_bias,
activation=None,
)
self._bn_4 = tf.keras.layers.BatchNormalization(
epsilon=1e-06, axis=-1, momentum=0.9
)
self._relu_4 = tf.keras.layers.Activation(tf.nn.relu)
self._max_pool_2 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))
self._dropout_2 = tf.keras.layers.Dropout(0.3)
self._conv_5 = tf.keras.layers.Conv2D(
128,
kernel_size=(3, 3),
padding="same",
use_bias=use_bias,
activation=None,
)
self._bn_5 = tf.keras.layers.BatchNormalization(
epsilon=1e-06, axis=-1, momentum=0.9
)
self._relu_5 = tf.keras.layers.Activation(tf.nn.relu)
self._conv_6 = tf.keras.layers.Conv2D(
128,
kernel_size=(3, 3),
padding="same",
use_bias=use_bias,
activation=None,
)
self._bn_6 = tf.keras.layers.BatchNormalization(
epsilon=1e-06, axis=-1, momentum=0.9
)
self._relu_6 = tf.keras.layers.Activation(tf.nn.relu)
self._max_pool_3 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))
self._dropout_3 = tf.keras.layers.Dropout(0.4)
self._flatten_1 = tf.keras.layers.Flatten()
self._dense_1 = tf.keras.layers.Dense(10, name="output")
def call(self, inputs, training=False):
x = self._conv_1(inputs["image"])
x = self._bn_1(x)
x = self._relu_1(x)
x = self._conv_2(x)
x = self._bn_2(x)
x = self._relu_2(x)
x = self._max_pool_1(x)
x = self._dropout_1(x)
x = self._conv_3(x)
x = self._bn_3(x)
x = self._relu_3(x)
x = self._conv_4(x)
x = self._bn_4(x)
x = self._relu_4(x)
x = self._max_pool_2(x)
x = self._dropout_2(x)
x = self._conv_5(x)
x = self._bn_5(x)
x = self._relu_5(x)
x = self._conv_6(x)
x = self._bn_6(x)
x = self._relu_6(x)
x = self._max_pool_3(x)
x = self._dropout_3(x)
x = self._flatten_1(x)
return self._dense_1(x)
def loss(output, labels):
labels = tf.reshape(labels, [-1])
return tf.reduce_mean(
input_tensor=tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=output, labels=labels
)
)
def optimizer(lr=0.1):
return tf.optimizers.SGD(lr)
def dataset_fn(dataset, mode):
def _parse_data(record):
if mode == Mode.PREDICTION:
feature_description = {
"image": tf.io.FixedLenFeature([32, 32, 3], tf.float32)
}
else:
feature_description = {
"image": tf.io.FixedLenFeature([32, 32, 3], tf.float32),
"label": tf.io.FixedLenFeature([1], tf.int64),
}
r = tf.io.parse_single_example(record, feature_description)
features = {
"image": tf.math.divide(tf.cast(r["image"], tf.float32), 255.0)
}
if mode == Mode.PREDICTION:
return features
else:
return features, tf.cast(r["label"], tf.int32)
dataset = dataset.map(_parse_data)
if mode != Mode.PREDICTION:
dataset = dataset.shuffle(buffer_size=1024)
return dataset
def eval_metrics_fn(predictions, labels):
labels = tf.reshape(labels, [-1])
return {
"accuracy": tf.reduce_mean(
input_tensor=tf.cast(
tf.equal(
tf.argmax(predictions, 1, output_type=tf.dtypes.int32),
labels,
),
tf.float32,
)
)
}
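
# --- Illustrative sketch, not part of the original module ---
# The model can be smoke-tested without ElasticDL; the 32x32x3 shape mirrors
# the features parsed in dataset_fn above (batch size is arbitrary):
def _smoke_test(batch_size=2):
    model = CustomModel()
    features = {"image": tf.zeros([batch_size, 32, 32, 3], tf.float32)}
    labels = tf.zeros([batch_size, 1], tf.int32)
    logits = model(features, training=False)
    return loss(logits, labels), eval_metrics_fn(logits, labels)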
| 30.022599 | 75 | 0.547234 | 690 | 5,314 | 3.975362 | 0.176812 | 0.068903 | 0.123223 | 0.035727 | 0.58148 | 0.536274 | 0.518046 | 0.493256 | 0.493256 | 0.450602 | 0 | 0.049538 | 0.327625 | 5,314 | 176 | 76 | 30.193182 | 0.718164 | 0 | 0 | 0.289474 | 0 | 0 | 0.016184 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046053 | false | 0 | 0.013158 | 0.006579 | 0.111842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78323ad51e27c4fa0767acc4613f077ad4236011 | 2,686 | py | Python | XNATSlicer/XnatSlicerLib/ui/custom-qt-widgets/HoverButton.py | QwaddleMan/XNATSlicer | 5aa06e4f2a578898d34cf0ea703963b9556f2da3 | [
"BSD-3-Clause"
] | 4 | 2016-03-03T08:56:52.000Z | 2021-12-10T21:14:58.000Z | XNATSlicer/XnatSlicerLib/ui/custom-qt-widgets/HoverButton.py | keithcallenberg/XNATSlicer | 4a8462b2e81984cc114d25fb2b1c981457a11878 | [
"BSD-3-Clause"
] | null | null | null | XNATSlicer/XnatSlicerLib/ui/custom-qt-widgets/HoverButton.py | keithcallenberg/XNATSlicer | 4a8462b2e81984cc114d25fb2b1c981457a11878 | [
"BSD-3-Clause"
] | 5 | 2015-04-22T01:53:40.000Z | 2021-03-29T12:14:32.000Z | __author__ = "Sunil Kumar (kumar.sunil.p@gmail.com)"
__copyright__ = "Copyright 2014, Washington University in St. Louis"
__credits__ = ["Sunil Kumar", "Steve Pieper", "Dan Marcus"]
__license__ = "XNAT Software License Agreement " + \
"(see: http://xnat.org/about/license.php)"
__version__ = "2.1.1"
__maintainer__ = "Rick Herrick"
__email__ = "herrickr@mir.wustl.edu"
__status__ = "Production"
from __main__ import qt
comment = """
HoverButton is a customized QWidget where the
user can set the style of the button upon hovering.
TODO:
"""
class HoverButton (qt.QPushButton):
""" Descriptor above.
"""
def __init__(self, parent = None):
""" Init function.
"""
#--------------------
# Call parent init.
#--------------------
if parent:
super(HoverButton, self).__init__(parent)
else:
super(HoverButton, self).__init__(self)
#--------------------
# Install the event filter to
# interpret the hovers.
#--------------------
self.installEventFilter(self)
#--------------------
# Track the stylesheets for
# the hover/not-hovered states.
#--------------------
self.defaultStyleSheet = None
self.hoverStyleSheet = None
def setDefaultStyleSheet(self, styleSheet):
""" Set stylesheet for when the mouse is
not hovering over the button.
"""
self.defaultStyleSheet = styleSheet
self.setStyleSheet(styleSheet)
def setHoverStyleSheet(self, styleSheet):
""" Set stylesheet for when the mouse is
hovering over the button.
"""
self.hoverStyleSheet = styleSheet
def eventFilter(self, widget, event):
""" Event filter function inherited from
QObject. Specifically targets the 'Enter'
and 'Leave' events for hovering purposes.
"""
if event.type() == qt.QEvent.Enter:
self.onHoverEnter()
elif event.type() == qt.QEvent.Leave:
self.onHoverLeave()
def onHoverEnter(self):
""" Callback when the mouse begins
hovering over the button: applies the
'hoverStyleSheet'.
"""
self.setStyleSheet(self.hoverStyleSheet)
def onHoverLeave(self):
""" Callback when the mouse leaves
hovering over the button: applies the
'defaultStyleSheet'.
"""
self.setStyleSheet(self.defaultStyleSheet)
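
# --- Illustrative sketch, not part of the original module ---
# Typical wiring: one stylesheet for the resting state and one for hover; the
# event filter installed in __init__ swaps between them automatically.
def _example_hover_button(parent):
    button = HoverButton(parent)
    button.setText("Browse XNAT")  # any regular QPushButton API still applies
    button.setDefaultStyleSheet("background-color: white;")
    button.setHoverStyleSheet("background-color: lightgray;")
    return button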
| 22.762712 | 68 | 0.546165 | 242 | 2,686 | 5.863636 | 0.487603 | 0.031712 | 0.033827 | 0.059197 | 0.174771 | 0.105708 | 0.062016 | 0.062016 | 0.062016 | 0 | 0 | 0.003853 | 0.323529 | 2,686 | 117 | 69 | 22.957265 | 0.777105 | 0.269546 | 0 | 0 | 0 | 0 | 0.196489 | 0.026614 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.026316 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7833d1ecd63dcbafc3405efa977d29556dcffc33 | 363 | py | Python | abrirMochila.py | KozlowskiJ2/avengers | 304e677ac45becbd182db71b0fd148be90fa7050 | [
"MIT"
] | null | null | null | abrirMochila.py | KozlowskiJ2/avengers | 304e677ac45becbd182db71b0fd148be90fa7050 | [
"MIT"
] | null | null | null | abrirMochila.py | KozlowskiJ2/avengers | 304e677ac45becbd182db71b0fd148be90fa7050 | [
"MIT"
] | null | null | null | def abreMochila(mochila):
    # Strings translated from Portuguese; identifiers are kept as in the source.
    if len(mochila) == 0:
        print('Empty bag!')
        return False
    if len(mochila) != 0:
        print("Items in the bag:\nSay the number matching the item to choose it")
        for item in mochila:
            print(mochila.index(item) + 1, "-", item)
        i = escutar()  # escutar() ("listen") is assumed to be defined elsewhere
        escolha = mochila[int(i) - 1]
return(escolha) | 33 | 81 | 0.589532 | 46 | 363 | 4.652174 | 0.630435 | 0.046729 | 0.11215 | 0.121495 | 0.168224 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015326 | 0.280992 | 363 | 11 | 82 | 33 | 0.804598 | 0 | 0 | 0 | 0 | 0 | 0.211538 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.181818 | 0.272727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7835dd4b0a54930acba1f3445b9513092882a896 | 24,101 | py | Python | fst2/processers.py | superjcd/fst2 | da4bd97bc9e028af55e1099940bd0b1e5bb34ded | [
"MIT"
] | 2 | 2020-03-15T07:44:46.000Z | 2021-05-17T04:32:46.000Z | fst2/processers.py | superjcd/fst2 | da4bd97bc9e028af55e1099940bd0b1e5bb34ded | [
"MIT"
] | null | null | null | fst2/processers.py | superjcd/fst2 | da4bd97bc9e028af55e1099940bd0b1e5bb34ded | [
"MIT"
] | 1 | 2020-05-13T08:56:25.000Z | 2020-05-13T08:56:25.000Z | import copy
import os
import csv
import json
import torch
import logging
from transformers.file_utils import is_tf_available, is_torch_available
from functools import wraps
from .utils import CACHE_PARAMS
logger = logging.getLogger(__name__)
class InputExample(object):
"""
A single training/test example for simple sequence classification.
Args:
guid: Unique id for the example.
text_a: string. The untokenized text of the first sequence. For single
sequence tasks, only this sequence must be specified.
text_b: (Optional) string. The untokenized text of the second sequence.
Only must be specified for sequence pair tasks.
label: (Optional) string. The label of the example. This should be
specified for train and dev examples, but not for test examples.
"""
def __init__(self, guid, text_a, text_b=None, label=None):
self.guid = guid
self.text_a = text_a
self.text_b = text_b
self.label = label
def __repr__(self):
return str(self.to_json_string())
def to_dict(self):
"""Serializes this instance to a Python dictionary."""
output = copy.deepcopy(self.__dict__)
return output
def to_json_string(self):
"""Serializes this instance to a JSON string."""
return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n"
class InputFeatures(object):
"""
A single set of features of data.
Args:
input_ids: Indices of input sequence tokens in the vocabulary.
attention_mask: Mask to avoid performing attention on padding token indices.
Mask values selected in ``[0, 1]``:
Usually ``1`` for tokens that are NOT MASKED, ``0`` for MASKED (padded) tokens.
token_type_ids: Segment token indices to indicate first and second portions of the inputs.
label: Label corresponding to the input
"""
def __init__(self, input_ids, attention_mask=None, token_type_ids=None, label=None):
self.input_ids = input_ids
self.attention_mask = attention_mask
self.token_type_ids = token_type_ids
self.label = label
def __repr__(self):
return str(self.to_json_string())
def to_dict(self):
"""Serializes this instance to a Python dictionary."""
output = copy.deepcopy(self.__dict__)
return output
def to_json_string(self):
"""Serializes this instance to a JSON string."""
return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n"
class DataProcessor(object):
"""Base class for data converters for sequence classification data sets."""
def get_example_from_tensor_dict(self, tensor_dict):
"""Gets an example from a dict with tensorflow tensors
Args:
tensor_dict: Keys and values should match the corresponding Glue
tensorflow_dataset examples.
"""
raise NotImplementedError()
def get_train_examples(self, data_dir):
"""Gets a collection of `InputExample`s for the train set."""
raise NotImplementedError()
def get_dev_examples(self, data_dir):
"""Gets a collection of `InputExample`s for the dev set."""
raise NotImplementedError()
def get_labels(self):
"""Gets the list of labels for this data set."""
raise NotImplementedError()
def tfds_map(self, example):
"""Some tensorflow_datasets datasets are not formatted the same way the GLUE datasets are.
This method converts examples to the correct format."""
if len(self.get_labels()) > 1:
example.label = self.get_labels()[int(example.label)]
return example
@classmethod
def _read_csv(cls, input_file, delimiter="\t", quotechar=None):
"""Reads a tab separated csv/tsv file."""
with open(input_file, "r", encoding="utf-8-sig") as f:
return list(csv.reader(f, delimiter=delimiter, quotechar=quotechar))
class SingleSentenceClassificationProcessor(DataProcessor):
""" Generic processor for a single sentence classification data set."""
def __init__(self, labels=None, examples=None, mode="classification", verbose=False):
self.labels = [] if labels is None else labels
self.examples = [] if examples is None else examples
self.mode = mode
self.verbose = verbose
def __len__(self):
return len(self.examples)
def __getitem__(self, idx):
if isinstance(idx, slice):
return SingleSentenceClassificationProcessor(labels=self.labels, examples=self.examples[idx])
return self.examples[idx]
@classmethod
def create_from_csv(
cls, file_name, delimiter, split_name="", column_label=0, column_text=1, column_id=None, skip_first_row=False, **kwargs
):
processor = cls(**kwargs)
processor.add_examples_from_csv(
file_name,
delimiter,
split_name=split_name,
column_label=column_label,
column_text=column_text,
column_id=column_id,
skip_first_row=skip_first_row,
overwrite_labels=True,
overwrite_examples=True,
)
return processor
@classmethod
def create_from_examples(cls, texts_or_text_and_labels, labels=None, **kwargs):
processor = cls(**kwargs)
processor.add_examples(texts_or_text_and_labels, labels=labels)
return processor
def add_examples_from_csv(
self,
file_name,
delimiter,
split_name="",
column_label=0,
column_text=1,
column_id=None,
skip_first_row=False,
overwrite_labels=False,
overwrite_examples=False,
):
lines = self._read_csv(file_name, delimiter=delimiter)
if skip_first_row:
lines = lines[1:]
texts = []
labels = []
ids = []
for (i, line) in enumerate(lines):
texts.append(line[column_text])
labels.append(line[column_label])
if column_id is not None:
ids.append(line[column_id])
else:
guid = "%s-%s" % (split_name, i) if split_name else "%s" % i
ids.append(guid)
return self.add_examples(
texts, labels, ids, overwrite_labels=overwrite_labels, overwrite_examples=overwrite_examples
)
def add_examples(
self, texts_or_text_and_labels, labels=None, ids=None, overwrite_labels=False, overwrite_examples=False
):
assert labels is None or len(texts_or_text_and_labels) == len(labels)
assert ids is None or len(texts_or_text_and_labels) == len(ids)
if ids is None:
ids = [None] * len(texts_or_text_and_labels)
if labels is None:
labels = [None] * len(texts_or_text_and_labels)
examples = []
added_labels = set()
for (text_or_text_and_label, label, guid) in zip(texts_or_text_and_labels, labels, ids):
if isinstance(text_or_text_and_label, (tuple, list)) and label is None:
text, label = text_or_text_and_label
else:
text = text_or_text_and_label
added_labels.add(label)
examples.append(InputExample(guid=guid, text_a=text, text_b=None, label=label))
# Update examples
if overwrite_examples:
self.examples = examples
else:
self.examples.extend(examples)
# Update labels
if overwrite_labels:
self.labels = list(added_labels)
else:
self.labels = list(set(self.labels).union(added_labels))
return self.examples
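
    # --- Illustrative sketch, not part of the original class ---
    # Building a processor straight from (text, label) pairs; the sentences and
    # labels below are made up, and any transformers tokenizer would do:
    #
    #   processor = SingleSentenceClassificationProcessor.create_from_examples(
    #       [("a great movie", "pos"), ("a dull movie", "neg")]
    #   )
    #   dataset = processor.get_features(tokenizer, return_tensors="pt")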
def get_features(
self,
tokenizer,
max_length=None,
pad_on_left=False,
pad_token=0,
mask_padding_with_zero=True,
return_tensors="pt",
):
"""
Convert examples in a list of ``InputFeatures``
Args:
tokenizer: Instance of a tokenizer that will tokenize the examples
max_length: Maximum example length
task: GLUE task
label_list: List of labels. Can be obtained from the processor using the ``processor.get_labels()`` method
output_mode: String indicating the output mode. Either ``regression`` or ``classification``
pad_on_left: If set to ``True``, the examples will be padded on the left rather than on the right (default)
pad_token: Padding token
mask_padding_with_zero: If set to ``True``, the attention mask will be filled by ``1`` for actual values
and by ``0`` for padded values. If set to ``False``, inverts it (``1`` for padded values, ``0`` for
actual values)
Returns:
If the ``examples`` input is a ``tf.data.Dataset``, will return a ``tf.data.Dataset``
containing the task-specific features. If the input is a list of ``InputExamples``, will return
a list of task-specific ``InputFeatures`` which can be fed to the model.
"""
if max_length is None:
max_length = tokenizer.max_len
label_map = {label: i for i, label in enumerate(self.labels)}
all_input_ids = []
for (ex_index, example) in enumerate(self.examples):
if ex_index % 10000 == 0:
logger.info("Tokenizing example %d", ex_index)
input_ids = tokenizer.encode(
example.text_a, add_special_tokens=True, max_length=min(max_length, tokenizer.max_len),
)
all_input_ids.append(input_ids)
batch_length = max(len(input_ids) for input_ids in all_input_ids)
features = []
for (ex_index, (input_ids, example)) in enumerate(zip(all_input_ids, self.examples)):
if ex_index % 10000 == 0:
logger.info("Writing example %d/%d" % (ex_index, len(self.examples)))
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
attention_mask = [1 if mask_padding_with_zero else 0] * len(input_ids)
# Zero-pad up to the sequence length.
padding_length = batch_length - len(input_ids)
if pad_on_left:
input_ids = ([pad_token] * padding_length) + input_ids
attention_mask = ([0 if mask_padding_with_zero else 1] * padding_length) + attention_mask
else:
input_ids = input_ids + ([pad_token] * padding_length)
attention_mask = attention_mask + ([0 if mask_padding_with_zero else 1] * padding_length)
assert len(input_ids) == batch_length, "Error with input length {} vs {}".format(
len(input_ids), batch_length
)
assert len(attention_mask) == batch_length, "Error with input length {} vs {}".format(
len(attention_mask), batch_length
)
if self.mode == "classification":
label = label_map[example.label]
elif self.mode == "regression":
label = float(example.label)
else:
raise ValueError(self.mode)
if ex_index < 5 and self.verbose:
logger.info("*** Example ***")
logger.info("guid: %s" % (example.guid))
logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
logger.info("attention_mask: %s" % " ".join([str(x) for x in attention_mask]))
logger.info("label: %s (id = %d)" % (example.label, label))
features.append(InputFeatures(input_ids=input_ids, attention_mask=attention_mask, label=label))
if return_tensors is None:
return features
elif return_tensors == "tf":
if not is_tf_available():
raise RuntimeError("return_tensors set to 'tf' but TensorFlow 2.0 can't be imported")
import tensorflow as tf
def gen():
for ex in features:
yield ({"input_ids": ex.input_ids, "attention_mask": ex.attention_mask}, ex.label)
dataset = tf.data.Dataset.from_generator(
gen,
({"input_ids": tf.int32, "attention_mask": tf.int32}, tf.int64),
({"input_ids": tf.TensorShape([None]), "attention_mask": tf.TensorShape([None])}, tf.TensorShape([])),
)
return dataset
elif return_tensors == "pt":
if not is_torch_available():
raise RuntimeError("return_tensors set to 'pt' but PyTorch can't be imported")
import torch
from torch.utils.data import TensorDataset
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
if self.mode == "classification":
all_labels = torch.tensor([f.label for f in features], dtype=torch.long)
elif self.mode == "regression":
all_labels = torch.tensor([f.label for f in features], dtype=torch.float)
dataset = TensorDataset(all_input_ids, all_attention_mask, all_labels)
return dataset
else:
raise ValueError("return_tensors should be one of 'tf' or 'pt'")
class SequenceTokenClassificationProcessor(DataProcessor):
def __init__(self, labels=None, examples=None, mode="classification", verbose=False):
self.labels = [] if labels is None else labels
self.examples = [] if examples is None else examples
self.mode = mode
self.verbose = verbose
@classmethod
def create_from_txt(cls, file_name, delimiter, **kwargs):
processor = cls(**kwargs)
processor.read_examples_from_txt(file_name, delimiter)
return processor
def read_examples_from_txt(self, file_name, delimiter):
examples = []
guid_index = 1
with open(file_name, encoding="utf-8") as f:
words = []
labels = []
for line in f:
if line.startswith("-DOCSTART-") or line == "" or line == "\n":
if words:
examples.append(InputExample(guid=guid_index,
text_a=words,
label=labels))
words = []
labels = []
                        guid_index += 1
                else:
                    splits = line.split(delimiter)
                    words.append(splits[0])
                    if len(splits) > 1:
                        labels.append(splits[-1].replace("\n", ""))
                    else:
                        # lines without a label column default to "O"
                        labels.append("O")
if words:
examples.append(InputExample(guid=guid_index, text_a=words,
label=labels))
self.examples = examples
def get_features(
self,
max_seq_length,
tokenizer,
return_tensors,
cls_token_at_end=False,
cls_token="[CLS]",
cls_token_segment_id=0,
sep_token="[SEP]",
sep_token_extra=False,
pad_on_left=False,
pad_token=0,
pad_token_segment_id=0,
pad_token_label_id=-100,
sequence_a_segment_id=0,
mask_padding_with_zero=True,
):
""" Loads a data file into a list of `InputBatch`s
`cls_token_at_end` define the location of the CLS token:
- False (Default, BERT/XLM pattern): [CLS] + A + [SEP] + B + [SEP]
- True (XLNet/GPT pattern): A + [SEP] + B + [SEP] + [CLS]
`cls_token_segment_id` define the segment id associated to the CLS token (0 for BERT, 2 for XLNet)
"""
label_map = {label: i for i, label in enumerate(self.labels)}
        features = []  # [[input_ids, input_mask, segment_ids, label_id]]
for (ex_index, example) in enumerate(self.examples):
if ex_index % 10000 == 0:
logger.info("Writing example %d of %d", ex_index, len(self.examples))
tokens = []
label_ids = []
for word, label in zip(example.text_a, example.label):
word_tokens = tokenizer.tokenize(word)
tokens.extend(word_tokens)
                # Use the real label id for the first token of the word, and padding ids for the
                # remaining tokens (in some languages, e.g. German, one word is tokenized into several subwords)
                label_ids.extend([label_map[label]] + [pad_token_label_id] * (len(word_tokens) - 1))
# Account for [CLS] and [SEP] with "- 2" and with "- 3" for RoBERTa.
special_tokens_count = 3 if sep_token_extra else 2
if len(tokens) > max_seq_length - special_tokens_count:
tokens = tokens[: (max_seq_length - special_tokens_count)]
label_ids = label_ids[: (max_seq_length - special_tokens_count)]
tokens += [sep_token]
label_ids += [pad_token_label_id]
if sep_token_extra:
# roberta uses an extra separator b/w pairs of sentences
tokens += [sep_token]
label_ids += [pad_token_label_id]
            segment_ids = [sequence_a_segment_id] * len(tokens)
if cls_token_at_end:
tokens += [cls_token]
label_ids += [pad_token_label_id]
segment_ids += [cls_token_segment_id]
else:
tokens = [cls_token] + tokens
label_ids = [pad_token_label_id] + label_ids
segment_ids = [cls_token_segment_id] + segment_ids
            input_ids = tokenizer.convert_tokens_to_ids(tokens)  # takes a list of tokens and returns a list of ids
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1 if mask_padding_with_zero else 0] * len(input_ids)
# Zero-pad up to the sequence length.
padding_length = max_seq_length - len(input_ids)
if pad_on_left:
input_ids = ([pad_token] * padding_length) + input_ids
input_mask = ([0 if mask_padding_with_zero else 1] * padding_length) + input_mask
segment_ids = ([pad_token_segment_id] * padding_length) + segment_ids
label_ids = ([pad_token_label_id] * padding_length) + label_ids
else:
input_ids += [pad_token] * padding_length
input_mask += [0 if mask_padding_with_zero else 1] * padding_length
segment_ids += [pad_token_segment_id] * padding_length
label_ids += [pad_token_label_id] * padding_length
            # ensure input_ids, input_mask, segment_ids, and label_ids all have the same length
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
assert len(label_ids) == max_seq_length
if ex_index < 5:
logger.info("*** Example ***")
logger.info("guid: %s", example.guid)
logger.info("tokens: %s", " ".join([str(x) for x in tokens]))
logger.info("input_ids: %s", " ".join([str(x) for x in input_ids]))
logger.info("input_mask: %s", " ".join([str(x) for x in input_mask]))
logger.info("segment_ids: %s", " ".join([str(x) for x in segment_ids]))
logger.info("label_ids: %s", " ".join([str(x) for x in label_ids]))
            features.append(
                InputFeatures(input_ids=input_ids, attention_mask=input_mask, token_type_ids=segment_ids, label=label_ids)
            )  # note: segment_ids may start with cls_token_segment_id, but token_type_ids are unused for now
        # convert the features to a dataset
if return_tensors is None:
return features
elif return_tensors == "tf":
if not is_tf_available():
raise RuntimeError("return_tensors set to 'tf' but TensorFlow 2.0 can't be imported")
import tensorflow as tf
def gen():
for ex in features:
yield ({"input_ids": ex.input_ids, "attention_mask": ex.attention_mask}, ex.label)
dataset = tf.data.Dataset.from_generator(
gen,
({"input_ids": tf.int32, "attention_mask": tf.int32}, tf.int64),
({"input_ids": tf.TensorShape([None]), "attention_mask": tf.TensorShape([None])}, tf.TensorShape([])),
)
return dataset
elif return_tensors == "pt":
if not is_torch_available():
raise RuntimeError("return_tensors set to 'pt' but PyTorch can't be imported")
import torch
from torch.utils.data import TensorDataset
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
if self.mode == "classification":
all_labels = torch.tensor([f.label for f in features], dtype=torch.long)
elif self.mode == "regression":
all_labels = torch.tensor([f.label for f in features], dtype=torch.float)
dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels)
return dataset
else:
raise ValueError("return_tensors should be one of 'tf' or 'pt'")
############## help functions ######
def load_and_cache_dataset(func):
@wraps(func)
def inner(*args, **kwargs):
logger = logging.getLogger("Load-Cache_Dataset")
cached_features_file = os.path.join(
CACHE_PARAMS["data_dir"],
"cached_{}_{}_{}".format(
CACHE_PARAMS["mode"], list(filter(None, CACHE_PARAMS["model_name_or_path"].split("/"))).pop(),
str(CACHE_PARAMS["max_seq_length"])
),)
if os.path.exists(cached_features_file):
dataset = torch.load(cached_features_file)
logger.info("Load dataset from {}".format(cached_features_file))
return dataset
else:
logger.info("Read data and preparing dataset")
dataset = func(*args, **kwargs)
torch.save(dataset, cached_features_file)
logger.info("Cached dataset at {}".format(cached_features_file))
return dataset
return inner
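# A minimal usage sketch for the caching decorator (hypothetical function name;
# assumes CACHE_PARAMS["data_dir"], ["mode"], ["model_name_or_path"] and
# ["max_seq_length"] are populated before the first call):
#
# @load_and_cache_dataset
# def build_dataset(processor, tokenizer):
#     return processor.get_features(tokenizer=tokenizer, max_seq_length=128, return_tensors="pt")
#
# The first call runs build_dataset and saves the result with torch.save; later
# calls with the same CACHE_PARAMS load the cached file instead of recomputing.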
# if __name__ == "__main__":
# from transformers import BertTokenizer
# processer = SingleSentenceClassificationProcessor.create_from_csv(file_name="test_data.tsv", delimiter=",")
# tokenizer = BertTokenizer.from_pretrained("/Users/jiangchaodi/Code/NLP/transformer_examples/models/chinese_L-12_H-768_A-12")
# dataset = processer.get_features(tokenizer=tokenizer, max_length=128)
# print(dataset)
# test torch.load(dataset)
# import torch
# torch.save(dataset, "test_cached")
# dataset2 = torch.load("test_cached")
# print(dataset2)
# if __name__ == "__main__":
# from transformers import BertTokenizer
# from pipelines import get_labels
# processer = SequnceTokenClassificationProcessor.create_from_txt(file_name="/Users/jiangchaodi/Code/NLP/fasttransformer/fst2/data/train.txt", delimiter="\t", labels=get_labels("/Users/jiangchaodi/Code/NLP/fasttransformer/fst2/data/labels.txt"))
# tokenizer = BertTokenizer.from_pretrained("/Users/jiangchaodi/Code/NLP/transformer_examples/models/chinese_L-12_H-768_A-12")
# dataset = processer.get_features(tokenizer=tokenizer, max_seq_length=128, return_tensors="pt")
# print(dataset)
| 42.506173 | 249 | 0.604788 | 2,960 | 24,101 | 4.692905 | 0.122635 | 0.028796 | 0.010294 | 0.01231 | 0.496653 | 0.453747 | 0.416169 | 0.374271 | 0.35721 | 0.334317 | 0 | 0.006806 | 0.298867 | 24,101 | 566 | 250 | 42.581272 | 0.815244 | 0.212647 | 0 | 0.430412 | 0 | 0 | 0.057988 | 0 | 0 | 0 | 0 | 0 | 0.020619 | 1 | 0.07732 | false | 0 | 0.048969 | 0.007732 | 0.203608 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
783cc0ae7ce9094203cd5255e4827fee654d5210 | 530 | py | Python | jupyter_cache/cli/utils.py | ExecutableBookProject/sandbox | e72c2121b460c8558f9e6257b3b53353b9e7f35c | [
"MIT"
] | 2 | 2020-03-11T23:14:00.000Z | 2020-04-07T14:58:51.000Z | jupyter_cache/cli/utils.py | ExecutableBookProject/sandbox | e72c2121b460c8558f9e6257b3b53353b9e7f35c | [
"MIT"
] | 41 | 2020-02-19T20:18:56.000Z | 2020-04-20T01:25:55.000Z | jupyter_cache/cli/utils.py | ExecutableBookProject/sandbox | e72c2121b460c8558f9e6257b3b53353b9e7f35c | [
"MIT"
] | 1 | 2020-03-15T05:45:15.000Z | 2020-03-15T05:45:15.000Z | import logging
import click
class ClickLogHandler(logging.Handler):
_use_stderr = True
def emit(self, record):
try:
msg = self.format(record)
click.echo(msg, err=self._use_stderr)
except Exception:
self.handleError(record)
def setup_logger(logger: logging.Logger) -> None:
"""Add handler to log to click."""
try:
import click_log
except ImportError:
logger.addHandler(ClickLogHandler())
else:
click_log.basic_config(logger)
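if __name__ == "__main__":
    # A minimal usage sketch (hypothetical logger name): route log records
    # through click's echo, falling back to ClickLogHandler when click_log
    # is not installed.
    demo_logger = logging.getLogger("jupyter_cache.demo")
    demo_logger.setLevel(logging.INFO)
    setup_logger(demo_logger)
    demo_logger.info("this message is echoed via click")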
| 21.2 | 49 | 0.632075 | 60 | 530 | 5.45 | 0.533333 | 0.067278 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275472 | 530 | 24 | 50 | 22.083333 | 0.851563 | 0.05283 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.235294 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
783e84924972e61c147a0b1b3dc18448354ffac3 | 4,401 | py | Python | Samples/codes/matopt_review/add_optimizer.py | wilsongis/3DP_Experiments | da9bd3b4ba1d82bac7dcfa27d86634add59db087 | [
"MIT",
"Unlicense"
] | null | null | null | Samples/codes/matopt_review/add_optimizer.py | wilsongis/3DP_Experiments | da9bd3b4ba1d82bac7dcfa27d86634add59db087 | [
"MIT",
"Unlicense"
] | null | null | null | Samples/codes/matopt_review/add_optimizer.py | wilsongis/3DP_Experiments | da9bd3b4ba1d82bac7dcfa27d86634add59db087 | [
"MIT",
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Copyright (c) 2021 Showa Denko Materials co., Ltd. All rights reserved.
This software is for non-profit use only.
THIS SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THIS SOFTWARE OR THE USE OR OTHER DEALINGS IN THIS SOFTWARE.
"""
import GPyOpt
from GPyOpt.optimization.optimizer import OptLbfgs, OptDirect, OptCma, apply_optimizer
from GPyOpt.optimization.anchor_points_generator import ObjectiveAnchorPointsGenerator, ThompsonSamplingAnchorPointsGenerator
max_objective_anchor_points_logic = "max_objective"
thompson_sampling_anchor_points_logic = "thompsom_sampling"  # spelling intentionally matches GPyOpt's internal constant
sobol_design_type = "sobol"
random_design_type = "random"
class InvalidArgumentError(Exception):
pass
class AcquisitionOptimizer(GPyOpt.optimization.AcquisitionOptimizer):
"""
    AcquisitionOptimizer of GPyOpt, modified to control some parameters, including max_AcOpt_iter and num_anchor_points.
    Note that the default parameters of GPyOpt were used in the study of the goal-oriented Bayesian optimization.
:param space: design space class from GPyOpt.
:param optimizer: optimizer to use. Can be selected among:
- 'lbfgs': L-BFGS.
- 'DIRECT': Dividing Rectangles.
- 'CMA': covariance matrix adaptation.
    :param max_AcOpt_iter: maximum number of optimization steps.
:param num_anchor_points: number of initial search points.
"""
def __init__(self, space, optimizer='lbfgs', max_AcOpt_iter=1000, num_anchor_points=1000, **kwargs):
super(AcquisitionOptimizer, self).__init__(space, optimizer, **kwargs)
self.max_AcOpt_iter = max_AcOpt_iter
self.num_anchor_points = num_anchor_points
def optimize(self, f=None, df=None, f_df=None, duplicate_manager=None):
"""
Optimizes the input function.
:param f: function to optimize.
:param df: gradient of the function to optimize.
:param f_df: returns both the function to optimize and its gradient.
"""
self.f = f
self.df = df
self.f_df = f_df
        ## --- Update the optimizer, in case context has been passed.
self.optimizer = self.choose_optimizermod(self.optimizer_name, self.context_manager.noncontext_bounds)
## --- Selecting the anchor points and removing duplicates
if self.type_anchor_points_logic == max_objective_anchor_points_logic:
anchor_points_generator = ObjectiveAnchorPointsGenerator(self.space, random_design_type, f, num_samples=self.num_anchor_points)
elif self.type_anchor_points_logic == thompson_sampling_anchor_points_logic:
anchor_points_generator = ThompsonSamplingAnchorPointsGenerator(self.space, sobol_design_type, self.model)
## -- Select the anchor points (with context)
anchor_points = anchor_points_generator.get(num_anchor=5, duplicate_manager=duplicate_manager, context_manager=self.context_manager)
## --- Applying local optimizers at the anchor points and update bounds of the optimizer (according to the context)
optimized_points = [apply_optimizer(self.optimizer, a, f=f, df=df, f_df=f_df, duplicate_manager=duplicate_manager, context_manager=self.context_manager, space = self.space) for a in anchor_points]
        x_min, fx_min = min(optimized_points, key=lambda t: t[1])
return x_min, fx_min
def choose_optimizermod(self, optimizer_name, bounds):
"""
Selects the type of local optimizer
"""
if optimizer_name == 'lbfgs':
optimizer = OptLbfgs(bounds, self.max_AcOpt_iter)
elif optimizer_name == 'DIRECT':
optimizer = OptDirect(bounds, self.max_AcOpt_iter)
elif optimizer_name == 'CMA':
optimizer = OptCma(bounds, self.max_AcOpt_iter)
else:
print(optimizer_name)
raise InvalidArgumentError('Invalid optimizer selected.')
return optimizer
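# A minimal usage sketch (hypothetical one-dimensional design space; assumes the
# standard GPyOpt Design_space API and an objective f taking a 2-D numpy array):
#
# from GPyOpt.core.task.space import Design_space
# space = Design_space([{"name": "x", "type": "continuous", "domain": (0.0, 1.0)}])
# acq_opt = AcquisitionOptimizer(space, optimizer="lbfgs",
#                                max_AcOpt_iter=500, num_anchor_points=200)
# x_min, fx_min = acq_opt.optimize(f=lambda x: ((x - 0.3) ** 2).sum(axis=1, keepdims=True))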
| 46.326316 | 205 | 0.709839 | 543 | 4,401 | 5.54512 | 0.359116 | 0.083693 | 0.031883 | 0.021255 | 0.174693 | 0.093657 | 0.068416 | 0.068416 | 0.042511 | 0 | 0 | 0.004364 | 0.219041 | 4,401 | 94 | 206 | 46.819149 | 0.87169 | 0.374915 | 0 | 0 | 0 | 0 | 0.034483 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0.026316 | 0.078947 | 0 | 0.263158 | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
783f73ab72792697849c2074b2a97ea34ea37f7c | 1,354 | py | Python | eos_db/test/test_credits.py | cedadev/eos-db | b97b1b7c469779e370aab8ad68cf7e8d2e6ff8e6 | [
"BSD-3-Clause"
] | null | null | null | eos_db/test/test_credits.py | cedadev/eos-db | b97b1b7c469779e370aab8ad68cf7e8d2e6ff8e6 | [
"BSD-3-Clause"
] | null | null | null | eos_db/test/test_credits.py | cedadev/eos-db | b97b1b7c469779e370aab8ad68cf7e8d2e6ff8e6 | [
"BSD-3-Clause"
] | null | null | null | """Tests for credit addition, subtraction and querying.
See also tests in test_user_api
"""
import unittest
import requests
from eos_db.server import choose_engine, create_user, touch_to_add_credit
from eos_db.server import check_credit, check_actor_id
class TestCreditFunctions(unittest.TestCase):
"""Tests credit functions in server module."""
def setUp(self):
choose_engine('SQLite')
def test_create_user(self):
"""
Add a user.
"""
user = create_user('user','testuser','testuser','testuser')
self.assertEqual(check_actor_id(user), user)
def test_add(self):
"""
        Behaviour: Calling the API to add credit should result in credit being
        added to the database.
"""
user = create_user('user','testuser2','testuser2','testuser2')
touch_to_add_credit(user,1000)
credit = check_credit(user)
self.assertEqual(credit, 1000)
def test_subtract(self):
"""
        Behaviour: Calling the API to add negative credit should result in
        credit being subtracted from the database.
"""
user = create_user('user', 'testuser3', 'testuser3', 'testuser3')
touch_to_add_credit(user,-500)
credit = check_credit(user)
self.assertEqual(credit, -500)
if __name__ == '__main__':
unittest.main()
| 28.808511 | 84 | 0.654357 | 164 | 1,354 | 5.170732 | 0.353659 | 0.058962 | 0.064858 | 0.056604 | 0.40566 | 0.308962 | 0.240566 | 0.141509 | 0.141509 | 0.141509 | 0 | 0.019474 | 0.241507 | 1,354 | 46 | 85 | 29.434783 | 0.806232 | 0.240768 | 0 | 0.090909 | 0 | 0 | 0.112069 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 1 | 0.181818 | false | 0 | 0.181818 | 0 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
783fc3228d1bb98e46b23e9c61a33784429ce728 | 2,733 | py | Python | visualization/data_plot.py | Jingyu6/forl_2021 | 8679b41ece66551d14cfb31fa42a467eb4c1fb0b | [
"MIT"
] | null | null | null | visualization/data_plot.py | Jingyu6/forl_2021 | 8679b41ece66551d14cfb31fa42a467eb4c1fb0b | [
"MIT"
] | null | null | null | visualization/data_plot.py | Jingyu6/forl_2021 | 8679b41ece66551d14cfb31fa42a467eb4c1fb0b | [
"MIT"
] | null | null | null | import gym # type: ignore
import matplotlib # type: ignore
import matplotlib.pyplot as plt # type: ignore
import numpy as np
from pathlib import Path # type: ignore
from typing import List, Union, Literal, Dict, Any
from visualization.data_parser import Records, D3rlpyCSVDataParser
def plot_records_list(
axes: matplotlib.axes.Axes,
records_list: List[Records],
env_name: str,
value_description: str = 'loss',
horizon_name: Union[Literal['epochs', 'steps']] = 'epochs',
**kwargs: Any # arguments to the plot function
) -> None:
"""
Plot the graph of different algorithms,
each algorithm contains multiple experiments,
all experiments are from the same environment
"""
assert len(records_list) > 0, "Can not pass in empty records."
# group them together
algo_to_records: Dict[str, List[Records]] = {}
for records in records_list:
algo_name = records.algo_name
if algo_name not in algo_to_records:
algo_to_records[algo_name] = []
algo_to_records[algo_name].append(records)
# make sure all algorithms have the same number of experiments
experiment_counts = set([len(data) for data in algo_to_records.values()])
assert len(experiment_counts) == 1, \
"All algorithms should have the same number of experiments"
# truncate horizon (assuming monotonic increasing)
min_horizon = min([len(records.get_data()[horizon_name]) for records in records_list])
for algo_name in sorted(algo_to_records.keys()):
print(algo_name)
algo_records_list = algo_to_records[algo_name]
horizon = algo_records_list[0].get_data(min_horizon)[horizon_name]
values = np.array([records.get_data(min_horizon)['values'] for records in algo_records_list])
value_mean = np.mean(values, axis=0)
value_std = np.std(values, axis=0)
axes.plot(horizon, value_mean, **kwargs)
axes.fill_between(horizon, value_mean - value_std, value_mean + value_std, alpha=0.2, interpolate=True)
axes.set_title('{}: {} plots of {} over {} trials'.format(
env_name, value_description, horizon_name, next(iter(experiment_counts))))
axes.set_ylabel(value_description)
axes.set_xlabel(horizon_name)
axes.legend(sorted(list(algo_to_records.keys())))
def plot_records_in_dir(
log_dir: str,
env_name: str,
value_description: str = 'loss',
horizon_name: Union[Literal['epochs', 'steps']] = 'epochs',
**kwargs: Any
) -> None:
log_dir_path = Path(log_dir)
assert log_dir_path.is_dir(), "Invalid log dir."
parser = D3rlpyCSVDataParser()
records_list: List[Records] = []
for sub_dir in log_dir_path.iterdir():
records_list.append(parser.parse(str(sub_dir), value_description=value_description))
plot_records_list(plt.gca(), records_list, env_name, value_description, horizon_name, **kwargs)
plt.show()
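# A minimal usage sketch (hypothetical directory layout: one sub-directory per
# experiment run under logs/cartpole, each parseable by D3rlpyCSVDataParser):
#
# plot_records_in_dir(
#     log_dir="logs/cartpole",
#     env_name="CartPole-v1",
#     value_description="loss",
#     horizon_name="epochs",
#     linewidth=1.5,  # extra kwargs are forwarded to axes.plot
# )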
| 35.038462 | 105 | 0.750823 | 399 | 2,733 | 4.907268 | 0.295739 | 0.067416 | 0.053115 | 0.034729 | 0.204801 | 0.149132 | 0.083759 | 0.083759 | 0.083759 | 0.083759 | 0 | 0.003814 | 0.13648 | 2,733 | 77 | 106 | 35.493506 | 0.825847 | 0.126235 | 0 | 0.181818 | 0 | 0 | 0.077801 | 0 | 0 | 0 | 0 | 0 | 0.054545 | 1 | 0.036364 | false | 0.018182 | 0.127273 | 0 | 0.163636 | 0.018182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7840eb695f4d59312dcc4726ecededb6274d25a1 | 2,947 | py | Python | game_matrix.py | GeorgiaLu/game-2048 | f256974ad65d869943630564d9eac697c4e8dc04 | [
"MIT"
] | 1 | 2019-01-08T04:10:22.000Z | 2019-01-08T04:10:22.000Z | game_matrix.py | GeorgiaLu/game-2048 | f256974ad65d869943630564d9eac697c4e8dc04 | [
"MIT"
] | null | null | null | game_matrix.py | GeorgiaLu/game-2048 | f256974ad65d869943630564d9eac697c4e8dc04 | [
"MIT"
] | null | null | null | import numpy
import random
from enum import Enum
class Direction(Enum):
kUp, kRight, kDown, kLeft = range(4)
class GameMatrix(object):
def __init__(self, dim):
self.dim = dim
self.matrix = numpy.zeros((dim, dim))
self.init_matrix()
self.tube = []
def init_matrix(self):
indices = [i for i in range(self.dim * self.dim)]
random.shuffle(indices)
pick_number = random.choices([2, 4], k=2)
self.matrix[indices[0] // self.dim][indices[0] % self.dim] = pick_number[0]
self.matrix[indices[1] // self.dim][indices[1] % self.dim] = pick_number[1]
def random_pick_empty(self):
empty_spots = []
for i in range(self.dim):
for j in range(self.dim):
if self.matrix[i][j] == 0:
empty_spots.append([i, j])
return random.choices(empty_spots)[0]
def random_add_one(self):
my_pick = self.random_pick_empty()
self.matrix[my_pick[0]][my_pick[1]] = random.choices([2, 4], weights=[.5, .5])[0]
def tube_append(self, elem):
if elem != 0:
if len(self.tube) != 0 and self.tube[-1] == elem:
self.tube[-1] *= 2
else:
self.tube.append(elem)
    def move(self, direction):
        """Slide all tiles toward the given direction, merging equal neighbours (2048 rules)."""
if direction == Direction.kDown:
for j in range(self.dim):
for i in range(self.dim):
elem = self.matrix[self.dim - 1 - i][j]
self.tube_append(elem)
self.matrix[self.dim - 1 - i][j] = 0
for i in range(len(self.tube)):
self.matrix[self.dim - 1 - i][j] = self.tube[i]
self.tube.clear()
elif direction == Direction.kUp:
for j in range(self.dim):
for i in range(self.dim):
elem = self.matrix[i][j]
self.tube_append(elem)
self.matrix[i][j] = 0
for i in range(len(self.tube)):
self.matrix[i][j] = self.tube[i]
self.tube.clear()
elif direction == Direction.kLeft:
for i in range(self.dim):
for j in range(self.dim):
elem = self.matrix[i][j]
self.tube_append(elem)
self.matrix[i][j] = 0
for j in range(len(self.tube)):
self.matrix[i][j] = self.tube[j]
self.tube.clear()
elif direction == Direction.kRight:
for i in range(self.dim):
for j in range(self.dim):
elem = self.matrix[i][self.dim - 1 - j]
self.tube_append(elem)
self.matrix[i][self.dim - 1 - j] = 0
for j in range(len(self.tube)):
self.matrix[i][self.dim - 1 - j] = self.tube[j]
self.tube.clear()
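if __name__ == "__main__":
    # A minimal sketch of one game step, assuming the 2048-style rules above.
    game = GameMatrix(4)          # 4x4 board seeded with two starting tiles
    game.move(Direction.kLeft)    # slide and merge all tiles to the left
    game.random_add_one()         # then drop a new 2 or 4 onto an empty spot
    print(game.matrix)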
| 33.488636 | 89 | 0.490329 | 389 | 2,947 | 3.647815 | 0.133676 | 0.118393 | 0.085271 | 0.108527 | 0.51938 | 0.510218 | 0.48203 | 0.462297 | 0.422128 | 0.379845 | 0 | 0.018519 | 0.376994 | 2,947 | 87 | 90 | 33.873563 | 0.754357 | 0 | 0 | 0.371429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0 | 0.042857 | 0 | 0.171429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78442b33bc9640a1fb48adf69170375475735e0f | 5,581 | py | Python | cognite/model_hosting/schedules/schedules.py | cognitedata/cognite-model-hosting | 89f58e25f0e3c3a37006e60f52246da0b00a0066 | [
"Apache-2.0"
] | 4 | 2019-05-27T12:51:45.000Z | 2020-02-26T08:16:30.000Z | cognite/model_hosting/schedules/schedules.py | cognitedata/cognite-model-hosting | 89f58e25f0e3c3a37006e60f52246da0b00a0066 | [
"Apache-2.0"
] | 26 | 2019-03-18T15:10:20.000Z | 2021-06-21T05:47:24.000Z | cognite/model_hosting/schedules/schedules.py | cognitedata/cognite-model-hosting | 89f58e25f0e3c3a37006e60f52246da0b00a0066 | [
"Apache-2.0"
] | null | null | null | import json
from collections import defaultdict
from typing import Dict, List, Union
import numpy as np
import pandas as pd
from marshmallow import EXCLUDE, Schema, ValidationError, fields, validate
from cognite.model_hosting._cognite_model_hosting_common.utils import timestamp_to_ms
from cognite.model_hosting.schedules.exceptions import DuplicateAliasInScheduledOutput, InvalidScheduleOutputFormat
def to_output(dataframe: Union[pd.DataFrame, List[pd.DataFrame]]) -> Dict:
"""Converts your data to a json serializable output format complying with the schedules feature.
Args:
dataframe (Union[List[pd.DataFrame, pd.DataFrame]]: A dataframe or list of dataframes.
Returns:
Dict: The data on a json serializable and schedules compliant output format.
Examples:
The correct output format looks like this::
{
"timeSeries":
{
"my-alias-1": [(t0, p0), (t1, p1), ...],
"my-alias-2": [(t0, p0), (t1, p1), ...],
}
}
"""
output = defaultdict(lambda: {})
if isinstance(dataframe, pd.DataFrame):
output["timeSeries"] = _convert_df_to_output_format(dataframe)
elif isinstance(dataframe, List):
for df in dataframe:
if set(df.columns) - set(output["timeSeries"].keys()) != set(df.columns):
raise DuplicateAliasInScheduledOutput("An alias has been provided multiple times")
output["timeSeries"].update(_convert_df_to_output_format(df))
else:
raise TypeError("dataframe should be a pandas DataFrame or list of pandas DataFrames")
return output
def _convert_df_to_output_format(df: pd.DataFrame):
return {name: list(zip([timestamp_to_ms(ts) for ts in df.index], df[name])) for name in df.columns}
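# A minimal usage sketch for ``to_output`` (hypothetical alias and dates; the
# exact millisecond values depend on ``timestamp_to_ms``):
#
# df = pd.DataFrame({"my-alias-1": [1.0, 2.0]},
#                   index=pd.to_datetime(["2020-01-01", "2020-01-02"]))
# to_output(df)
# # -> {"timeSeries": {"my-alias-1": [(t0_ms, 1.0), (t1_ms, 2.0)]}}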
class _ScheduleOutputSchema(Schema):
class Meta:
unknown = EXCLUDE
timeSeries = fields.Dict(
keys=fields.Str(), values=fields.List(fields.List(fields.Float(), validate=validate.Length(equal=2)))
)
_schedule_output_schema = _ScheduleOutputSchema(unknown=EXCLUDE)
class ScheduleOutput:
"""Helper class for parsing and converting output from scheduled predictions.
Args:
output(Dict): The output returned from the scheduled prediction.
"""
def __init__(self, output: Dict):
self._output = self._load(output)
def __str__(self):
return json.dumps(self._output, indent=4, sort_keys=True)
@staticmethod
def _load(output):
try:
return _schedule_output_schema.load(output)
except ValidationError as e:
raise InvalidScheduleOutputFormat(e.messages) from e
def _validate_alias(self, type: str, alias: str):
assert self._output.get(type, {}).get(alias) is not None, "{} is not a valid alias".format(alias)
def _validate_aligned(self, aliases: List[str]):
timestamps = set()
for alias in aliases:
self._validate_alias("timeSeries", alias)
timestamps.add(tuple(point[0] for point in self._output["timeSeries"][alias]))
assert 1 == len(timestamps), "Timestamps for aliases {} are not aligned".format(aliases)
def _get_dataframe_single_alias(self, alias) -> pd.DataFrame:
self._validate_alias("timeSeries", alias)
data = self._output["timeSeries"][alias]
timestamps = [int(point[0]) for point in data]
datapoints = [point[1] for point in data]
return pd.DataFrame({alias: datapoints}, index=np.array(timestamps, dtype="datetime64[ms]"))
def _get_dataframe_multiple_aliases(self, aliases: List[str]) -> pd.DataFrame:
self._validate_aligned(aliases)
data = {}
timestamps = [int(p[0]) for p in self._output["timeSeries"][aliases[0]]]
for a in aliases:
data[a] = [p[1] for p in self._output["timeSeries"][a]]
return pd.DataFrame(data, index=np.array(timestamps, dtype="datetime64[ms]"))
def get_dataframe(self, alias: Union[str, List[str]]) -> pd.DataFrame:
"""Returns a time-aligned dataframe of the specified alias(es).
Assumes that all aliases specify output time series with matching timestamps.
Args:
alias(Union[str, List[str]]): alias or list of aliases
Returns:
pd.DataFrame: The dataframe containing the time series for the specified alias(es).
"""
if isinstance(alias, str):
return self._get_dataframe_single_alias(alias)
elif isinstance(alias, List):
return self._get_dataframe_multiple_aliases(alias)
raise TypeError("alias must be a string or list of strings")
def get_datapoints(self, alias: Union[str, List[str]]) -> Union[pd.DataFrame, Dict[str, pd.DataFrame]]:
"""Returns the dataframes for the specified alias(es).
Args:
alias (Union[str, List[str]]): alias or list of aliases.
Returns:
Union[pd.DataFrame, Dict[str, pd.DataFrame]: A single dataframe if a single alias has been specified. Or a
dictionary mapping alias to dataframe if a list of aliases has been provided.
"""
if isinstance(alias, str):
return self._get_dataframe_single_alias(alias)
elif isinstance(alias, List):
dataframes = {}
for a in alias:
dataframes[a] = self._get_dataframe_single_alias(a)
return dataframes
raise TypeError("alias must be a string or list of strings")
| 39.302817 | 118 | 0.656334 | 686 | 5,581 | 5.21137 | 0.239067 | 0.049231 | 0.013427 | 0.025734 | 0.244476 | 0.191329 | 0.149371 | 0.13035 | 0.13035 | 0.13035 | 0 | 0.005437 | 0.242071 | 5,581 | 141 | 119 | 39.58156 | 0.839716 | 0.238129 | 0 | 0.131579 | 0 | 0 | 0.091558 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 1 | 0.144737 | false | 0 | 0.105263 | 0.026316 | 0.434211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78480e99cab78715400c1a2ba08f71032a31b6fe | 578 | py | Python | src/matplot.py | AutuanLiu/PyCon | ba0e2005d1e0301d77bb8111ff67b663dc234784 | [
"MIT"
] | 1 | 2018-03-18T11:07:15.000Z | 2018-03-18T11:07:15.000Z | src/matplot.py | AutuanLiu/PyCon | ba0e2005d1e0301d77bb8111ff67b663dc234784 | [
"MIT"
] | null | null | null | src/matplot.py | AutuanLiu/PyCon | ba0e2005d1e0301d77bb8111ff67b663dc234784 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sun Jun 4 14:56:00 2017
@author: AutuanLiu
"""
import numpy as np
import matplotlib.pyplot as plt
# simple line-plot test
plt.plot([3, 1, 4, -5, 6])
plt.ylabel("grade")
plt.savefig("test", dpi=600)
plt.show()
def f(x):
return np.exp(-x) * np.cos(2 * np.pi * x)
a = np.arange(0, 5, .02)
plt.subplot(2, 1, 1)
plt.plot(a, f(a))
plt.subplot(2, 1, 2)
plt.plot(a, np.cos(2 * np.pi * a), 'r--')
plt.show()
# plot multi image
plt.plot(a, np.sin(a), a, np.sinh(a), a, np.exp(a), a, a ** 3)
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.show()
| 17 | 62 | 0.586505 | 119 | 578 | 2.84874 | 0.462185 | 0.044248 | 0.070796 | 0.047198 | 0.058997 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069474 | 0.178201 | 578 | 33 | 63 | 17.515152 | 0.644211 | 0.183391 | 0 | 0.166667 | 0 | 0 | 0.052174 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.111111 | 0.055556 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7848a09465a19c587425eb6807accc3751bd0e0a | 1,439 | py | Python | lib.py | Zenahr/ALUB | cc9161d6e30a7f5278761954333fcdcee9598259 | [
"MIT"
] | null | null | null | lib.py | Zenahr/ALUB | cc9161d6e30a7f5278761954333fcdcee9598259 | [
"MIT"
] | null | null | null | lib.py | Zenahr/ALUB | cc9161d6e30a7f5278761954333fcdcee9598259 | [
"MIT"
] | null | null | null | import random
import autopy
import pyautogui
import time
import json
from threading import Timer
import cv2
import pytesseract
def click_readyup_button():
print('CHECKING...')
if should_click():
try:
autopy.mouse.move(*(230, 928))
time.sleep(.2)
autopy.mouse.click()
except TypeError:
            print('INTERNAL ERROR OCCURRED. CONTACT DEVELOPER.')
def get_position():
print(
pyautogui.position()
)
def should_click():
box = ((140, 950), (170, 42))
screenshot = autopy.bitmap.capture_screen(box)
screenshot.save('screenshot.png')
img = cv2.imread('screenshot.png')
threshold = 90
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    thresh = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)[1]  # only used by the commented debug display below
# cv2.imshow('thresh', thresh)
# cv2.waitKey(0)
scanned_text = pytesseract.image_to_string(img, lang='eng', config='--psm 6')
try:
print(scanned_text)
if 'READY' in scanned_text:
print('read READY in screenshot')
return True
elif 'CANCEL' in scanned_text:
print('read CANCEL in screenshot')
return True
else:
print('read nothing in screenshot')
return False
except AttributeError:
        raise Exception('OCR MODULE: COULD NOT RETRIEVE TEXT')
if __name__ == '__main__':
print(
should_click()
    )
| 27.150943 | 81 | 0.62057 | 168 | 1,439 | 5.172619 | 0.553571 | 0.050633 | 0.06214 | 0.041427 | 0.050633 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033397 | 0.271716 | 1,439 | 53 | 82 | 27.150943 | 0.795802 | 0.029882 | 0 | 0.12766 | 0 | 0 | 0.157819 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06383 | false | 0 | 0.170213 | 0 | 0.297872 | 0.170213 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
784be104b596367c04897f8c45fd427758c904d4 | 10,993 | py | Python | games/bocce/frame.py | OddballSports-tv/obies-eyes | 2dd4fc9686f852b9adf89edd3246ad642063ac8b | [
"Apache-2.0"
] | null | null | null | games/bocce/frame.py | OddballSports-tv/obies-eyes | 2dd4fc9686f852b9adf89edd3246ad642063ac8b | [
"Apache-2.0"
] | 1 | 2022-02-19T20:40:44.000Z | 2022-02-19T20:40:44.000Z | games/bocce/frame.py | OddballSports-tv/obies-eyes | 2dd4fc9686f852b9adf89edd3246ad642063ac8b | [
"Apache-2.0"
] | null | null | null | # imports
from .ball import Pallino
from .throw import Throw
from .cv.ballfinder import BallFinder
from scipy.spatial import distance as dist
# for now, these are "pixels" (not "inches" or "cm")
TOO_CLOSE_MARGIN = 5
class Frame:
def __init__(self, frameNumber, throwingEnd, pallinoThrowingTeam,
teamHome, teamAway, cam):
        self.frameNumber = frameNumber
self.throwingEnd = throwingEnd
self.frameWinner = None
self.pallinoThrowingTeam = pallinoThrowingTeam
self.teamHome = teamHome
self.teamAway = teamAway
# todo
self.cam = cam
self.pallinoInPlay = False
self.ballMotion = False
self.whoseIn = None
self.inPoints = 0
self.framePoints = 0
#### throws ####
self.throw = None
self.throws = []
self.first_bocce_thrown = False
self.second_bocce_thrown = False
self.numThrowsTeamHome = 0
self.numThrowsTeamAway = 0
self.throw_trigger = False
self.num_total_team_balls = None
def initialize_balls(self, playersPerTeam):
self.pallino = Pallino("yellow")
if playersPerTeam == 1:
self.num_total_team_balls = 2
self.teamHome.add_balls(self.num_total_team_balls)
self.teamAway.add_balls(self.num_total_team_balls)
elif playersPerTeam == 2 or playersPerTeam == 4:
self.num_total_team_balls = 4
self.teamHome.add_balls(self.num_total_team_balls)
self.teamAway.add_balls(self.num_total_team_balls)
else:
            self.num_total_team_balls = None
            raise ValueError("playersPerTeam must be 1, 2, or 4")
def start(self):
print("Frame {} is started".format(str(self.frameNumer)))
def throw_pallino(self, team):
# throw the pallino
# todo: determine throwing player; currently gets RANDOM player
self.pallino.set_thrower(team.get_random_player())
self.throw = Throw(self.pallino.thrownBy, self.pallino)
self.throw.throw()
valid = self.throw.valid
# debug
print("{} threw the pallino. Throw is {}.".format(
self.pallino.thrownBy, "valid" if valid else "invalid"))
if valid:
self.pallino.isThrown = True
self.pallinoInPlay = True
return valid
def increment_team_throw_count(self, team):
if team == self.teamHome:
self.numThrowsTeamHome += 1
elif team == self.teamAway:
self.numThrowsTeamAway += 1
def throw_bocce(self, team, followPallino=False):
thrower = None
# whichever team threw the pallino throws again
if followPallino:
print("Following the pallino")
team = self.pallinoThrowingTeam
# otherwise, it is the furthest team's throw
else:
# if the furthest team has no more balls, then switch teams
if self.get_num_remaining_team_balls(team) <= 0:
# switch team
team = self.get_other_team(team)
# grab a bocce ball from the team
ball = self.get_a_team_ball(team.balls)
# grab a player from the team
# todo: determine the throwing player; currently gets a random player with ball
thrower = team.get_random_player_with_balls()
# throw the bocce ball
ball.set_thrower(thrower)
self.throw = Throw(thrower, ball)
self.throw.throw()
self.increment_team_throw_count(team)
valid = self.throw.valid
# update who is in
if followPallino:
self.whoseIn = self.get_other_team(self.pallinoThrowingTeam)
        else:
            self.inPoints, self.whoseIn = self.determine_whose_in(self.cam.last_frame)
# debug
print("{}({}) threw a bocce. Throw is {}. {} is in with points={}. {} remaining balls={}".format(
str(thrower), team.teamBallColor,
"valid" if valid else "invalid", self.whoseIn,
self.inPoints, str(team), self.get_num_remaining_team_balls(team)))
return valid
def get_a_team_ball(self, balls):
for ball in balls:
# go to the next ball if this one is already thrown
if ball.isThrown:
continue
else:
# determined that this team has more balls to throw
return ball
# by default, the team doesn't have any more balls to throw
return None
def either_team_has_balls(self):
if self.get_num_remaining_team_balls(self.teamHome) > 0 \
or self.get_num_remaining_team_balls(self.teamAway) > 0:
return True
return False
    def get_num_remaining_team_balls(self, team):
        # start from the team's full allotment rather than a hard-coded 4
        numBalls = self.num_total_team_balls
for ball in team.balls:
# go to the next ball if this one is already thrown
if ball.isThrown:
numBalls -= 1
return numBalls
def handle_throw(self):
if not self.pallino.isThrown:
# throw the pallino
self.throw_pallino(self.pallinoThrowingTeam)
# check to see if the pallino is in play
if not self.pallinoInPlay:
# swap pallino throwing team
if self.pallinoThrowingTeam == self.teamHome:
self.pallinoThrowingTeam = self.teamAway
elif self.pallinoThrowingTeam == self.teamAway:
self.pallinoThrowingTeam = self.teamHome
# indicate that the pallino hasn't been thrown
self.pallino.isThrown = False
return
return
# the pallino thrower NEEDS to throw their first ball
if not self.first_bocce_thrown:
self.throw_bocce(self.pallinoThrowingTeam,
followPallino=True)
            self.first_bocce_thrown = True
            self.update_in_points(1)  # force to one point
return
# the other team ALWAYS throws their first ball next
if not self.second_bocce_thrown:
print("The other team ALWAYS throws their first ball next")
self.throw_bocce(self.get_other_team(self.pallinoThrowingTeam), followPallino=False)
self.second_bocce_thrown = True
            # todo: we need to determine who is in, but kmeans fails if all diff ball nums = 1
self.update_in_points(1)
return
else:
if self.either_team_has_balls():
                # throw all remaining balls
                self.inPoints, self.whoseIn = self.determine_whose_in(self.cam.last_frame)
# the other team (furthest team) throws
valid = self.throw_bocce(self.get_other_team(self.whoseIn),
followPallino=False)
else:
print("Please score the frame")
# if we reach this, then the frame is done, so cleanup
self.set_frame_points(self.whoseIn, self.inPoints)
def get_other_team(self, team):
if team == self.teamHome:
return self.teamAway
return self.teamHome
"""Finds closest ball with computer vision"""
def determine_whose_in(self, court):
bf = BallFinder()
bf.pipeline(court, self.numThrowsTeamHome, self.numThrowsTeamAway)
self.pallino = bf.pallino
self.teamHome.balls = bf.homeBalls
self.teamAway.balls = bf.awayBalls
points, frameLeader = self.get_frame_points_and_frame_leader(self.pallino, self.teamHome.balls,
self.teamAway.balls)
return points, frameLeader
def get_frame_points_and_frame_leader(self, pallino, homeBalls, awayBalls):
        def get_frame_points(ballDistancesA, ballDistancesB):
            # Count how many of team A's balls are closer to the pallino than
            # team B's closest ball (both lists are sorted in ascending order).
            framePoints = 0
            closest_b = ballDistancesB[0]
            for dA in ballDistancesA:
                if dA < closest_b:
                    framePoints += 1
                else:
                    break
            return framePoints
        if pallino is None:
            print("not annotating; couldn't find pallino")
            return None, None
# calculate Euclidean distance for each ball to the pallino
homeBallsDistances = []
awayBallsDistances = []
for ball in homeBalls:
D = dist.euclidean(pallino.coordinates, ball.coordinates)
homeBallsDistances.append(D)
for ball in awayBalls:
D = dist.euclidean(pallino.coordinates, ball.coordinates)
awayBallsDistances.append(D)
# sort balls and distances
homeBallsDistances, homeBalls = zip(*sorted(zip(homeBallsDistances, homeBalls)))
awayBallsDistances, awayBalls = zip(*sorted(zip(awayBallsDistances, awayBalls)))
# grab each min distance (the 0th element in the sorted list)
homeBallsMinDistance = homeBallsDistances[0]
awayBallsMinDistance = awayBallsDistances[0]
# who is closer?
homeIsCloser = homeBallsMinDistance < awayBallsMinDistance
awayIsCloser = awayBallsMinDistance < homeBallsMinDistance
equidistant = homeBallsMinDistance == awayBallsMinDistance
# check if it is "too close to call"
tooCloseToCall = abs(homeBallsMinDistance - awayBallsMinDistance) <= TOO_CLOSE_MARGIN
# determine framePoints and frameWinner
framePoints = None
frameLeader = None
if homeIsCloser:
framePoints = get_frame_points(homeBallsDistances, awayBallsDistances)
frameLeader = self.teamHome
elif awayIsCloser:
framePoints = get_frame_points(awayBallsDistances, homeBallsDistances)
frameLeader = self.teamAway
elif equidistant or tooCloseToCall:
# todo how do we handle when both teams' closest ball is equidistant
framePoints = None
return framePoints, frameLeader
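    # Worked scoring example (hypothetical pixel distances): with home's sorted
    # distances [10, 30, 80] and away's [50, 90, 120], home leads and scores 2
    # points, since both 10 and 30 beat away's closest ball at distance 50.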
"""Determine's who is in and accounts for their points"""
def update_in_points(self, points=None):
# determine who is in
if points is not None:
self.inPoints = points
return
# check for balls closest to pallino
ballsThrown = 1 + self.numThrowsTeamHome + self.numThrowsTeamAway
# if at least two bocce balls are thrown
if ballsThrown >= 2:
self.inPoints, self.whoseIn = self.determine_whose_in(self.cam.last_frame)
else:
self.inPoints = 0
def set_frame_points(self, inTeam, inPoints):
self.framePoints = inPoints
self.frameWinner = inTeam
def end(self):
print("[INFO] frame winner is {} with points={}".format(
self.frameWinner, self.framePoints))
self.teamAway.balls = []
self.teamHome.balls = []
        return self.frameWinner, self.framePoints
| 35.009554 | 105 | 0.615665 | 1,220 | 10,993 | 5.42377 | 0.189344 | 0.020402 | 0.014508 | 0.019344 | 0.211878 | 0.163216 | 0.134804 | 0.10065 | 0.078585 | 0.065891 | 0 | 0.00411 | 0.313927 | 10,993 | 314 | 106 | 35.009554 | 0.873243 | 0.140453 | 0 | 0.210784 | 0 | 0 | 0.040116 | 0 | 0 | 0 | 0 | 0.003185 | 0 | 1 | 0.083333 | false | 0 | 0.019608 | 0 | 0.191176 | 0.039216 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
784d88dfa8270065fc4ec2b2539f03dba716b534 | 2,738 | py | Python | timer.py | mrozowski/TaskTimer | 66b069c14a7117502bb657738869f3ae33870c04 | [
"MIT"
] | 1 | 2021-07-10T17:51:01.000Z | 2021-07-10T17:51:01.000Z | timer.py | mrozowski/TaskTimer | 66b069c14a7117502bb657738869f3ae33870c04 | [
"MIT"
] | null | null | null | timer.py | mrozowski/TaskTimer | 66b069c14a7117502bb657738869f3ae33870c04 | [
"MIT"
] | null | null | null | import threading, time, signal
from datetime import timedelta
from PyQt5 import QtWidgets
from playsound import playsound
import win10toast
from view import Dial
isActive = False # global variable that says if timer is set
class MyTimer(threading.Thread):
"""This class count down timer and move dial back to the default position"""
def __init__(self, hours: QtWidgets.QLabel, minutes: QtWidgets.QLabel, seconds: QtWidgets.QLabel, dial: QtWidgets.QDial, set_default):
threading.Thread.__init__(self)
self.counter = 0
self.dial_controller = True
self.fun = set_default
self.dial = dial
self.hours_label = hours
self.minutes_label = minutes
self.seconds_label = seconds
self.seconds_label.show()
self.hours = int(hours.text())
self.min = int(minutes.text())
self.sec = 59
self.daemon = True # True: if the main thread is killed this thread will be killed too
self.stopped = threading.Event()
self.interval = timedelta(seconds=1)
self.execute = self.count_down
def count_down(self):
self.sec -= 1
self.seconds_label.setText(str(self.sec))
if self.sec == 0:
self.sec = 59
if self.counter == 60:
self.min -= 1
if self.min == -1:
if self.hours > 0:
self.hours -= 1
self.min = 59
self.hours_label.setText(str(self.hours))
                else:
                    # time is up: notify the user and stop the timer
                    self.times_up()
                    self.stop()
self.dial.setValue(self.hours * 60 + self.min)
self.minutes_label.setText(str(self.min))
self.counter = 0
self.counter += 1
def times_up(self):
self.back_to_default()
self.fun()
playsound("Sounds/alarm2.mp3", False)
toaster = win10toast.ToastNotifier()
toaster.show_toast("Timer", "Times's up!", icon_path="Graphic/timer_icon.ico", duration=5)
def back_to_default(self):
global isActive
isActive = False
self.seconds_label.setText("59")
self.dial.setDisabled(False)
self.seconds_label.hide()
def stop(self):
self.back_to_default()
self.stopped.set()
self.join()
def run(self):
while not self.stopped.wait(self.interval.total_seconds()):
try:
self.execute()
            except RuntimeError:
                # this exception is raised when the program is closed
                self.stopped.set()
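# A minimal usage sketch (hypothetical Qt widgets; the labels and the dial must
# belong to a running PyQt5 application):
#
# timer = MyTimer(hours_label, minutes_label, seconds_label, dial, set_default_fn)
# timer.start()   # ticks once per second on a daemon thread
# ...
# timer.stop()    # resets the UI to its default state and stops the thread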
| 31.471264 | 139 | 0.565741 | 318 | 2,738 | 4.764151 | 0.333333 | 0.047525 | 0.052805 | 0.037624 | 0.048845 | 0.033003 | 0 | 0 | 0 | 0 | 0 | 0.016584 | 0.339299 | 2,738 | 86 | 140 | 31.837209 | 0.820896 | 0.065376 | 0 | 0.151515 | 0 | 0 | 0.023909 | 0.009228 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.19697 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7853fc867b9268e27df1fa9c7f6378f0521175e4 | 7,310 | py | Python | untitled.py | czyczyyzc/MyMaskRCNN | e5a451fd05c593ae05d6e596813fc63aad7af2de | [
"MIT"
] | 1 | 2020-10-16T08:10:12.000Z | 2020-10-16T08:10:12.000Z | untitled.py | czyczyyzc/MyMaskRCNN | e5a451fd05c593ae05d6e596813fc63aad7af2de | [
"MIT"
] | null | null | null | untitled.py | czyczyyzc/MyMaskRCNN | e5a451fd05c593ae05d6e596813fc63aad7af2de | [
"MIT"
] | null | null | null | BACKBONE = "resnet101"
BACKBONE_STRIDES = [4, 8, 16, 32, 64]
POST_NMS_ROIS_TRAINING = 2000
POST_NMS_ROIS_INFERENCE = 1000
RPN_NMS_THRESHOLD = 0.7
POOL_SIZE = 7
MASK_POOL_SIZE = 14
TRAIN_BN = False # Defaulting to False since batch size is often small
FPN_CLASSIF_FC_LAYERS_SIZE = 1024
TRAIN_ROIS_PER_IMAGE = 200
ROI_POSITIVE_RATIO = 0.33
MASK_SHAPE = [28, 28]
RPN_BBOX_STD_DEV = np.array([0.1, 0.1, 0.2, 0.2])
BBOX_STD_DEV = np.array([0.1, 0.1, 0.2, 0.2])
DETECTION_MAX_INSTANCES = 100
DETECTION_MIN_CONFIDENCE = 0.7
DETECTION_NMS_THRESHOLD = 0.3
RPN_ANCHOR_SCALES = (32, 64, 128, 256, 512)
RPN_ANCHOR_RATIOS = [0.5, 1, 2]
RPN_ANCHOR_STRIDE = 1
TOP_DOWN_PYRAMID_SIZE = 256
import numpy as np
import tensorflow as tf
from .bbox import *
class BBoxesLayer(object):
def __init__(self, img_shp=None, img_num=None):
self.img_shp = img_shp
self.img_num = img_num
self.box_siz_min = 5
self.box_prb_min = 0.5
self.box_nms_pre = None
self.box_nms_pst = 100 #200
self.box_nms_max = 0.3 #0.2
self.box_msk_min = 0.5
self.box_msk_siz = [28, 28]
def generate_boxs(self, rois=None, roi_prbs_pst=None, roi_prds_pst=None, roi_imxs=None):
        # take the prediction of the best-scoring class
box_clss = tf.argmax(roi_prbs_pst, axis=1)
box_clss = tf.cast(box_clss, tf.int32)
box_prbs = tf.reduce_max(roi_prbs_pst, axis=1)
        # keep a box index to avoid large gather ops (prds, msks), saving memory and time
box_idxs = tf.range(tf.shape(rois)[0])
        # drop background boxes
idxs = tf.where(box_clss>0)
box_clss = tf.gather_nd(box_clss, idxs)
box_prbs = tf.gather_nd(box_prbs, idxs)
box_idxs = tf.gather_nd(box_idxs, idxs)
        # drop low-scoring boxes
if self.box_prb_min is not None:
idxs = tf.where(box_prbs>=self.box_prb_min)
box_clss = tf.gather_nd(box_clss, idxs)
box_prbs = tf.gather_nd(box_prbs, idxs)
box_idxs = tf.gather_nd(box_idxs, idxs)
        # perform the remaining gather operations via box_idxs
rois = tf.gather(rois, box_idxs)
box_imxs = tf.gather(roi_imxs, box_idxs)
        box_idxs = tf.stack([box_idxs, box_clss], axis=-1)  # needed when the box predictions are class-specific
roi_prds_pst = tf.gather(roi_prds_pst, box_idxs)
        # decode the boxes for the subsequent filtering
boxs = bbox_transform_inv(rois, roi_prds_pst)
boxs = bbox_clip(boxs, [0.0, 0.0, self.img_shp[0]-1.0, self.img_shp[1]-1.0])
        # drop boxes that are too small
idxs = bbox_filter(boxs, self.box_siz_min)
boxs = tf.gather_nd(boxs, idxs)
box_clss = tf.gather_nd(box_clss, idxs)
box_prbs = tf.gather_nd(box_prbs, idxs)
box_imxs = tf.gather_nd(box_imxs, idxs)
        # run NMS per image and per class
        # keep a box index to avoid large concat ops (boxs, clss, prbs, imxs), saving memory and time
box_idxs = tf.zeros(shape=[0], dtype=tf.int32)
def cond0(i, boxs, box_clss, box_prbs, box_imxs, box_idxs):
c = tf.less(i, self.img_num)
return c
def body0(i, boxs, box_clss, box_prbs, box_imxs, box_idxs):
box_idxs_img = tf.where(tf.equal(box_imxs, i))
            boxs_img = tf.gather_nd(boxs, box_idxs_img)  # aligned with box_idxs_img
box_clss_img = tf.gather_nd(box_clss, box_idxs_img)
box_prbs_img = tf.gather_nd(box_prbs, box_idxs_img)
            # further prune excessive rois
if self.box_nms_pre is not None:
box_nms_pre = tf.minimum(self.box_nms_pre, tf.shape(boxs_img)[0])
box_prbs_img, idxs = tf.nn.top_k(box_prbs_img, k=box_nms_pre, sorted=True)
boxs_img = tf.gather(boxs_img, idxs)
box_clss_img = tf.gather(box_clss_img, idxs)
box_idxs_img = tf.gather(box_idxs_img, idxs)
#####################################
box_idxs_kep = tf.zeros(shape=[0], dtype=tf.int32)
box_clss_unq, idxs = tf.unique(box_clss_img)
def cond1(j, boxs_img, box_clss_img, box_prbs_img, box_clss_unq, box_idxs_kep):
box_cls_num = tf.shape(box_clss_unq)[0]
c = tf.less(j, box_cls_num)
return c
def body1(j, boxs_img, box_clss_img, box_prbs_img, box_clss_unq, box_idxs_kep):
                # select the rois of this class
box_cls = box_clss_unq[j]
box_idxs_cls = tf.where(tf.equal(box_clss_img, box_cls))
boxs_cls = tf.gather_nd(boxs_img, box_idxs_cls)
box_prbs_cls = tf.gather_nd(box_prbs_img, box_idxs_cls)
                # apply non-maximum suppression
idxs = tf.image.non_max_suppression(boxs_cls, box_prbs_cls, self.box_nms_pst, self.box_nms_max)
box_idxs_cls = tf.gather(box_idxs_cls, idxs)
                # save the result
box_idxs_kep = tf.concat([box_idxs_kep, box_idxs_cls], axis=0)
return [j+1, boxs_img, box_clss_img, box_prbs_img, box_clss_unq, box_idxs_kep]
j = tf.constant(0)
[j, boxs_img, box_clss_img, box_prbs_img, box_clss_unq, box_idxs_kep] = \
tf.while_loop(cond1, body1, loop_vars=[j, boxs_img, box_clss_img, box_prbs_img, box_clss_unq, box_idxs_kep], \
shape_invariants=[j.get_shape(), boxs_img.get_shape(), box_clss_img.get_shape(), \
box_prbs_img.get_shape(), box_clss_unq.get_shape(), tf.TensorShape([None])], \
parallel_iterations=10, back_prop=False, swap_memory=True)
box_prbs_img = tf.gather(box_prbs_img, box_idxs_kep)
box_idxs_img = tf.gather(box_idxs_img, box_idxs_kep)
box_num_img = tf.minimum(self.box_nms_pst, tf.shape(box_idxs_img)[0])
box_prbs_img, idxs = tf.nn.top_k(box_prbs_img, k=box_num_img, sorted=True)
box_idxs_img = tf.gather(box_idxs_img, idxs)
            # save the result
box_idxs = tf.concat([box_idxs, box_idxs_img], axis=0)
return [i+1, boxs, box_clss, box_prbs, box_imxs, box_idxs]
i = tf.constant(0)
[i, boxs, box_clss, box_prbs, box_imxs, box_idxs] = \
            tf.while_loop(cond0, body0, loop_vars=[i, boxs, box_clss, box_prbs, box_imxs, box_idxs], \
shape_invariants=[i.get_shape(), boxs.get_shape(), box_clss.get_shape(), \
box_prbs.get_shape(), box_imxs.get_shape(), tf.TensorShape([None])], \
parallel_iterations=10, back_prop=False, swap_memory=True)
boxs = tf.gather_nd(boxs, box_idxs)
box_clss = tf.gather_nd(box_clss, box_idxs)
box_prbs = tf.gather_nd(box_prbs, box_idxs)
box_imxs = tf.gather_nd(box_imxs, box_idxs)
return boxs, box_clss, box_prbs, box_imxs
def generate_msks(self, boxs=None, box_clss=None, box_msks_pst=None):
        # TODO: mask post-processing is not implemented yet
        return
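# A minimal usage sketch (hypothetical tensors; shapes follow the per-class
# box-regression convention assumed by generate_boxs above):
#
# layer = BBoxesLayer(img_shp=[800, 800], img_num=2)
# boxs, clss, prbs, imxs = layer.generate_boxs(
#     rois=rois,                # [N, 4] proposal boxes
#     roi_prbs_pst=cls_probs,   # [N, num_classes] class scores
#     roi_prds_pst=box_deltas,  # [N, num_classes, 4] per-class regressions
#     roi_imxs=image_idxs,      # [N] image index for each roi
# )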
| 41.067416 | 126 | 0.583174 | 1,073 | 7,310 | 3.603914 | 0.16589 | 0.090509 | 0.049134 | 0.050427 | 0.471683 | 0.359969 | 0.331006 | 0.285751 | 0.250323 | 0.225498 | 0 | 0.031344 | 0.314774 | 7,310 | 178 | 127 | 41.067416 | 0.740667 | 0.042818 | 0 | 0.168067 | 0 | 0 | 0.001297 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.02521 | 0.008403 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7854206db388f69d88e15202d41b4806273c060f | 6,198 | py | Python | skm_tea/engine/trainer.py | StanfordMIMI/skm-tea | 5678bfcebad4fdc30de62b319d96ec1775e1671c | [
"MIT"
] | 26 | 2021-08-28T06:57:50.000Z | 2022-02-17T06:33:41.000Z | skm_tea/engine/trainer.py | StanfordMIMI/skm-tea | 5678bfcebad4fdc30de62b319d96ec1775e1671c | [
"MIT"
] | 6 | 2021-10-20T16:04:12.000Z | 2022-03-15T20:16:52.000Z | skm_tea/engine/trainer.py | StanfordMIMI/skm-tea | 5678bfcebad4fdc30de62b319d96ec1775e1671c | [
"MIT"
] | 4 | 2021-11-15T08:32:41.000Z | 2022-02-23T18:54:30.000Z | import logging
import os
import pytorch_lightning as pl
from meddlr.config.config import CfgNode
from meddlr.engine.trainer import convert_cfg_time_to_iter as _convert_cfg_time_to_iter
from meddlr.engine.trainer import format_as_iter
from meddlr.utils import env
from meddlr.utils.env import supports_wandb
from pytorch_lightning.callbacks import EarlyStopping
from pytorch_lightning.loggers import CSVLogger
from pytorch_lightning.profiler import SimpleProfiler
from pytorch_lightning.utilities.distributed import rank_zero_only
from skm_tea.callbacks import PLPeriodicCheckpointer
from skm_tea.utils.pl_utils import LoggerCollection, TensorBoardLogger, WandbLogger
__all__ = ["PLDefaultTrainer"]
def convert_cfg_time_to_iter(cfg: CfgNode, iters_per_epoch: int):
"""Convert all config time-related parameters to iterations.
Note:
When adding to this list, be careful not to convert config parameters
multiple times.
"""
time_scale = cfg.TIME_SCALE
cfg = _convert_cfg_time_to_iter(cfg.clone(), iters_per_epoch, ignore_missing=True).defrost()
cfg.SOLVER.EARLY_STOPPING.PATIENCE = format_as_iter(
cfg.SOLVER.EARLY_STOPPING.PATIENCE, iters_per_epoch, time_scale
)
cfg.TIME_SCALE = "iter"
cfg.freeze()
return cfg
class PLDefaultTrainer(pl.Trainer):
def __init__(
self,
cfg,
iters_per_epoch: int,
log_gpu_memory=None,
replace_sampler_ddp=False,
num_gpus=0,
resume=False,
eval_only=False,
**kwargs,
):
logger = logging.getLogger("skm_tea")
self.eval_only = eval_only
if "limit_train_batches" in kwargs:
iters_per_epoch = kwargs["limit_train_batches"]
cfg = convert_cfg_time_to_iter(cfg, iters_per_epoch)
self.cfg = cfg
callbacks = self.build_callbacks() # includes user-specified callbacks
kwargs["callbacks"] = callbacks
if resume:
assert not kwargs.get(
"resume_from_checkpoint", None
), "Cannot specify resume=True and resume_from_checkpoint"
resume_from_checkpoint = self.configure_resume(callbacks)
logger.info(f"Resuming from checkpoint {resume_from_checkpoint}")
kwargs["resume_from_checkpoint"] = resume_from_checkpoint
early_stopping_callback = self.build_early_stopping(iters_per_epoch)
if early_stopping_callback:
callbacks.append(early_stopping_callback)
# Hacky way to get around the definition of "step" as optimizer.step in pt-lightning.
# Without this the training time would be scaled by a factor of SOLVER.GRAD_ACCUM_ITERS.
max_steps = cfg.SOLVER.MAX_ITER // cfg.SOLVER.GRAD_ACCUM_ITERS
# Default arguments based on Trainer. Any keyword args provided will overwrite these.
args = dict(
logger=self.build_logger() if not self.eval_only else False,
default_root_dir=cfg.OUTPUT_DIR,
max_steps=max_steps,
# TODO Issue #4406: https://github.com/PyTorchLightning/pytorch-lightning/issues/4406
val_check_interval=min(
cfg.TEST.EVAL_PERIOD, kwargs.get("limit_train_batches", float("inf"))
),
accumulate_grad_batches=cfg.SOLVER.GRAD_ACCUM_ITERS,
log_gpu_memory=log_gpu_memory,
checkpoint_callback=False,
sync_batchnorm=False,
profiler=SimpleProfiler(dirpath=cfg.OUTPUT_DIR, filename="profile.txt"),
log_every_n_steps=5,
replace_sampler_ddp=replace_sampler_ddp,
deterministic=env.is_repro(),
)
if num_gpus > 0:
args.update({"gpus": num_gpus, "auto_select_gpus": True})
args.update(kwargs)
super().__init__(**args)
def build_early_stopping(self, iters_per_epoch):
monitor = self.cfg.SOLVER.EARLY_STOPPING.MONITOR
patience = self.cfg.SOLVER.EARLY_STOPPING.PATIENCE
min_delta = self.cfg.SOLVER.EARLY_STOPPING.MIN_DELTA
if patience == 0:
return False
        # Patience was converted to iterations in convert_cfg_time_to_iter;
        # EarlyStopping counts validation checks, so convert back to epochs
        # and pass an int (the assert guarantees integrality).
        patience = patience / iters_per_epoch
        assert (
            self.cfg.TIME_SCALE == "iter" and patience > 0 and int(patience) == patience
        ), f"Got time scale '{self.cfg.TIME_SCALE}' and patience '{patience}'"
        return EarlyStopping(
            monitor=monitor, min_delta=min_delta, patience=int(patience), verbose=True
        )
@rank_zero_only
def build_logger(self):
cfg = self.cfg
version = ""
loggers = [
CSVLogger(cfg.OUTPUT_DIR, name="", version=version),
TensorBoardLogger(cfg.OUTPUT_DIR, name="", version=version, log_graph=False),
]
if supports_wandb():
import wandb
loggers.append(WandbLogger(experiment=wandb.run))
return LoggerCollection(loggers)
def build_callbacks(self, **kwargs):
"""Append default callbacks to list of user-defined callbacks."""
cfg = self.cfg
callbacks = list(kwargs.get("callbacks", []))
if "checkpoint_callback" not in kwargs and not any(
isinstance(x, PLPeriodicCheckpointer) for x in callbacks
):
callbacks.append(
PLPeriodicCheckpointer(
frequency=cfg.SOLVER.CHECKPOINT_PERIOD,
filepath=os.path.join(cfg.OUTPUT_DIR, "{global_step:07d}-{epoch:03d}"),
save_after_val=True,
)
)
return callbacks
def configure_resume(self, callbacks):
"""Configure setup for resume.
Currently finds the latest epoch and resumes from there.
"""
# cfg = self.cfg
checkpointer = [x for x in callbacks if isinstance(x, PLPeriodicCheckpointer)]
if len(checkpointer) == 0:
raise ValueError("Resuming training only works with PLPeriodicCheckpointer")
elif len(checkpointer) > 1 and any(
cp.dirpath != checkpointer[0].dirpath for cp in checkpointer
):
raise ValueError("Found more than one checkpointer with different save directories")
return checkpointer[0].get_latest()
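# Usage sketch (illustrative; `cfg`, `model` and `datamodule` are assumed to be
# built elsewhere by the skm-tea training script):
#
#   trainer = PLDefaultTrainer(cfg, iters_per_epoch=100, num_gpus=1)
#   trainer.fit(model, datamodule=datamodule)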
| 38.02454 | 99 | 0.666989 | 737 | 6,198 | 5.369064 | 0.303935 | 0.019459 | 0.029568 | 0.020217 | 0.146828 | 0.054334 | 0.013141 | 0 | 0 | 0 | 0 | 0.004532 | 0.252339 | 6,198 | 162 | 100 | 38.259259 | 0.849374 | 0.111649 | 0 | 0.041322 | 0 | 0 | 0.095046 | 0.030092 | 0 | 0 | 0 | 0.006173 | 0.016529 | 1 | 0.049587 | false | 0 | 0.123967 | 0 | 0.231405 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78542c8e023300ee416e5d670aa78aae94c4eae7 | 1,189 | py | Python | uw_canvas/tests/test_quizzes.py | uw-it-aca/uw-restclients-canvas | 2c54d7676649ec18129817992890878ace1ec6c6 | [
"Apache-2.0"
] | 1 | 2019-11-26T21:38:50.000Z | 2019-11-26T21:38:50.000Z | uw_canvas/tests/test_quizzes.py | uw-it-aca/uw-restclients-canvas | 2c54d7676649ec18129817992890878ace1ec6c6 | [
"Apache-2.0"
] | 135 | 2017-04-04T23:11:26.000Z | 2021-05-28T17:00:20.000Z | uw_canvas/tests/test_quizzes.py | uw-it-aca/uw-restclients-canvas | 2c54d7676649ec18129817992890878ace1ec6c6 | [
"Apache-2.0"
] | 2 | 2020-05-20T20:36:55.000Z | 2022-03-05T00:23:44.000Z | # Copyright 2021 UW-IT, University of Washington
# SPDX-License-Identifier: Apache-2.0
from unittest import TestCase
from uw_canvas.utilities import fdao_canvas_override
from uw_canvas.quizzes import Quizzes
from uw_canvas.models import Quiz
@fdao_canvas_override
class CanvasTestQuizzes(TestCase):
def test_quizzes_by_course_id(self):
canvas = Quizzes()
submissions = canvas.get_quizzes("862539")
sub = submissions[0]
        self.assertEqual(sub.quiz_id, 762037, "Has correct quiz id")
        self.assertEqual(sub.published, True, "Is published")
        self.assertEqual(sub.due_at.day, 1, "due at datetime")
def test_quizzes_by_sis_id(self):
canvas = Quizzes()
submissions = canvas.get_quizzes_by_sis_id("2013-autumn-PHYS-248-A")
        self.assertEqual(len(submissions), 1, "Submission Count")
def test_quiz_without_due_date(self):
quiz = Quiz(data={
"id": "1",
"title": "title",
"html_url": "http://...",
"published": False,
"points_possible": 0,
})
        self.assertEqual(quiz.title, "title")
        self.assertEqual(quiz.due_at, None)
| 32.135135 | 76 | 0.660219 | 148 | 1,189 | 5.108108 | 0.459459 | 0.126984 | 0.047619 | 0.042328 | 0.121693 | 0.121693 | 0.121693 | 0.121693 | 0 | 0 | 0 | 0.032644 | 0.227082 | 1,189 | 36 | 77 | 33.027778 | 0.789989 | 0.068966 | 0 | 0.074074 | 0 | 0 | 0.13587 | 0.019928 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.111111 | false | 0 | 0.148148 | 0 | 0.296296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7859d6c0896c5f80478ef39ac42a3fbaba78584d | 4,554 | py | Python | diofant/solvers/utils.py | rajkk1/diofant | 6b361334569e4ec2e8c7d30dc324387a4ad417c2 | [
"BSD-3-Clause"
] | null | null | null | diofant/solvers/utils.py | rajkk1/diofant | 6b361334569e4ec2e8c7d30dc324387a4ad417c2 | [
"BSD-3-Clause"
] | null | null | null | diofant/solvers/utils.py | rajkk1/diofant | 6b361334569e4ec2e8c7d30dc324387a4ad417c2 | [
"BSD-3-Clause"
] | null | null | null | """General utility functions for solvers."""
import warnings
from ..core import (expand_mul, expand_multinomial, nan, oo,
preorder_traversal, zoo)
from ..core.sympify import sympify
from ..simplify.simplify import posify, simplify
__all__ = 'checksol',
def checksol(f, sol, **flags):
r"""Checks whether sol is a solution of equations f.
Examples
========
>>> checksol(x**4 - 1, {x: 1})
True
>>> checksol(x**4 - 1, {x: 0})
False
>>> checksol(x**2 + y**2 - 5**2, {x: 3, y: 4})
True
Returns
=======
bool or None
        Return True if the solution satisfies all equations
        in ``f``.  Return False if the solution fails to
        satisfy at least one equation.  Else (i.e. one or
        more checks are inconclusive), return None.
Parameters
==========
f : Expr or iterable of Expr's
Equations to substitute solutions in.
sol : dict of Expr's
Mapping of symbols to values.
\*\*flags : dict
A dictionary of following parameters:
minimal : bool, optional
Do a very fast, minimal testing. Default is False.
warn : bool, optional
Show a warning if it could not conclude. Default is False.
simplify : bool, optional
Simplify solution before substituting into function and
simplify the function before trying specific simplifications.
Default is True.
force : bool, optional
Make positive all symbols without assumptions regarding
sign. Default is False.
"""
minimal = flags.get('minimal', False)
if not isinstance(sol, dict):
raise ValueError(f'Expecting dictionary but got {sol}')
if sol and not f.has(*list(sol)):
# if f(y) == 0, x=3 does not set f(y) to zero...nor does it not
if f.is_Number:
return f.is_zero
else:
return
illegal = {nan, zoo, oo, -oo}
if any(sympify(v).atoms() & illegal for k, v in sol.items()):
return False
was = f
attempt = -1
    while True:
attempt += 1
if attempt == 0:
val = f.subs(sol)
if val.atoms() & illegal:
return False
elif attempt == 1:
assert val.free_symbols
if not val.is_constant(*list(sol), simplify=not minimal):
return False
# there are free symbols -- simple expansion might work
_, val = val.as_content_primitive()
val = expand_mul(expand_multinomial(val))
elif attempt == 2:
if minimal:
return
if flags.get('simplify', True):
for k in sol:
sol[k] = simplify(sol[k])
# start over without the failed expanded form, possibly
# with a simplified solution
val = simplify(f.subs(sol))
if flags.get('force', True):
val, reps = posify(val)
# expansion may work now, so try again and check
exval = expand_mul(expand_multinomial(val))
if exval.is_number or not exval.free_symbols:
# we can decide now
val = exval
else:
# if there are no radicals and no functions then this can't be
# zero anymore -- can it?
pot = preorder_traversal(expand_mul(val))
seen = set()
saw_pow_func = False
for p in pot:
if p in seen:
continue
seen.add(p)
if p.is_Pow and not p.exp.is_Integer:
saw_pow_func = True
elif p.is_Function:
saw_pow_func = True
if saw_pow_func:
break
if saw_pow_func is False:
return False
if flags.get('force', True):
# don't do a zero check with the positive assumptions in place
val = val.subs(reps)
val # XXX "peephole" optimization, http://bugs.python.org/issue2506
break
if val == was:
continue
elif val.is_Rational:
return val == 0
elif val.is_nonzero:
return False
if not val.free_symbols:
return bool(abs(val.evalf(18, strict=False).evalf(12, chop=True)) < 1e-9)
was = val
if flags.get('warn', False):
warnings.warn(f'\n\tWarning: could not verify solution {sol}.')
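# Flag usage sketch (symbols as in the doctests above):
#   checksol(x**2 - 4, {x: 2}, minimal=True)  # True, using only the fast checks
#   checksol(x**2 - 4, {x: 3})                # False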
| 32.070423 | 85 | 0.538428 | 566 | 4,554 | 4.265018 | 0.35689 | 0.027341 | 0.020713 | 0.032312 | 0.04971 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010839 | 0.371981 | 4,554 | 141 | 86 | 32.297872 | 0.833217 | 0.359025 | 0 | 0.223684 | 0 | 0 | 0.042075 | 0 | 0 | 0 | 0 | 0 | 0.013158 | 1 | 0.013158 | false | 0 | 0.052632 | 0 | 0.197368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78607ee463c1efbeda346568d754d388788b0a46 | 16,798 | py | Python | client/admin.py | AhmedElmawary/erp | c998787c62194e26e10e3cbc61e35935e901e56d | [
"MIT"
] | null | null | null | client/admin.py | AhmedElmawary/erp | c998787c62194e26e10e3cbc61e35935e901e56d | [
"MIT"
] | null | null | null | client/admin.py | AhmedElmawary/erp | c998787c62194e26e10e3cbc61e35935e901e56d | [
"MIT"
] | null | null | null | import os
from django.template.response import TemplateResponse
from _helpers.common import make_list_of_lists
from app_user.models import ClosingPeriod
from datetime import datetime
from django.utils import timezone
from django.core.paginator import Paginator
from django.db.models.query import QuerySet
from django.template.loader import render_to_string
from num2words import num2words
from xhtml2pdf import pisa
from _helpers.models import areas_ar_en, find_in, get_currency, get_payment_type, modified_num2words
from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union
from django.http.request import HttpHeaders, HttpRequest
from django.http.response import HttpResponse, HttpResponseRedirect
from django.urls.base import reverse
from django.urls.conf import path
from django.urls.resolvers import URLPattern
from django.utils.html import format_html
from payment.models import ClientPaymentTransaction, PaymentTransactionType
from django.contrib import admin
from .models import Client
from django.utils.translation import gettext, ugettext as _, ugettext_lazy
import xlsxwriter
from _helpers.admin import Amount, ClientHelper, CommonMethods, ConsumerTransaction,ConsumerTransactionDownloder, admin_client_download_transaction_pdf, admin_supplier_download_transaction_pdf, make_xls_data, make_xls_headers, str_to_date
from zipfile import ZipFile
class ClientAdmin(admin.ModelAdmin):
class Media:
js = (
'client_transactions.js',
'client_account_statment.js'
)
css = {
"all":('client_admin.css',)
}
model = Client
extra = 1
list_display_links = [
'name'
]
list_display = [
'id',
'name',
'phone',
'is_active',
'taxes',
# 'make_transaction',
'parsed_get_debit',
'parsed_get_credit',
'parsed_get_net',
'account_statment_from',
'account_statment_to',
'account_statment_btn',
]
add_fieldsets = (
(ugettext_lazy('Main info'), {
'classes': ("wide",),
"fields": (
_('id'),
_('name'),
_('email'),
_('phone'),
_('gender'),
_('img'),
),
}),
(ugettext_lazy("Location"), {
'classes': ('collapse', 'wide'),
'fields': (
_('country'),
_('area'),
_('city'),
_('address'),
)
})
,(ugettext_lazy('Cash'), {
'classes': ('collapse', 'wide'),
"fields" : (
_('debit'),
_('credit'),
_('get_cash'),
_('cash')
)
}),(ugettext_lazy('Taxes'), {
"classes": ('collapse', 'wide'),
'fields' :
(
_('taxes'),
_('taxes_rate'),
)
}),
(ugettext_lazy('Status'), {'classes': ('collapse',),"fields": ('is_active',)})
)
fieldsets = (
(ugettext_lazy('Main info'), {
'classes': ("wide",),
"fields": (
_('id'),
_('name'),
_('email'),
_('phone'),
_('gender'),
_('img'),
),
}),
(ugettext_lazy("Location"), {
'classes': ('collapse', 'wide'),
'fields': (
_('country'),
_('area'),
_('city'),
_('address'),
)
})
,(ugettext_lazy('Cash'), {
'classes': ('collapse', 'wide'),
"fields" : (
_('debit'),
_('credit'),
_('get_cash'),
_('period_close'),
)
}),(ugettext_lazy('Taxes'), {
"classes": ('collapse', 'wide'),
'fields' :
(
_('taxes'),
_('taxes_rate'),
)
}),
(ugettext_lazy('Status'), {'classes': ('collapse',),"fields": ('is_active',)})
)
search_fields = ['name', 'phone', 'email','area__name', 'city']
list_filter = ('is_active', 'taxes')
list_per_page = 20
actions = [
_('activate'),
_('deactivate'),
_('export_as_xls'),
_('export_invoices_for'),
]
change_form_template = 'admin/client/client/custom_change_form.html'
change_list_template = 'admin/client/client/custom_change_list.html'
    def activate(self, request, queryset):
        client_no = queryset.update(is_active=True)
        client_string = 'client has' if client_no == 1 else 'clients have'
        self.message_user(request, f'{client_no} {client_string} been activated successfully')
    activate.short_description = ugettext_lazy('Activate selected clients')
    def deactivate(self, request, queryset):
        client_no = queryset.update(is_active=False)
        client_string = 'client has' if client_no == 1 else 'clients have'
        self.message_user(request, f'{client_no} {client_string} been deactivated successfully')
    deactivate.short_description = ugettext_lazy('Deactivate selected clients')
def export_invoices_for(self, request, queryset):
response = HttpResponse(content_type='application/vnd.ms-excel')
xls_sheet = xlsxwriter.Workbook(response)
headers_format = xls_sheet.add_format()
headers_format.set_font_shadow()
headers_format.set_bg_color('gray')
headers_format.set_border(1)
headers_format.set_font_color('white')
headers_format.set_bold()
headers_format.set_align('center')
headers_format.set_locked(True)
headers_format.set_size(15)
data_format = xls_sheet.add_format()
data_format.set_align('center')
data_format.set_bg_color('#8d8894')
data_format.set_font_color('#e7e7e7')
data_format.set_font_size(14)
data_format.set_bold()
data_format.set_border()
data_format.set_border_color('black')
clients = queryset.all()
        for client in clients:
            qs_values = client.transactions.order_by('-id').all()
work_sheet = xls_sheet.add_worksheet()
make_xls_headers(work_sheet, [
_('issued_at'),
_('type_tranasction'),
_('amount'),
_('description'),
_('id'),
_('client'),
], headers_format)
data_fields_names = [
'client',
'id',
'description',
'amount',
'type_tranasction',
'issued_at',
]
make_xls_data(work_sheet, qs_values, data_fields_names, data_format)
xls_sheet.close()
return response
export_invoices_for.short_description = ugettext_lazy('Export invoices for')
def get_search_results(self, request: HttpRequest, queryset: QuerySet, search_term: str) -> Tuple[QuerySet, bool]:
area = find_in(areas_ar_en(), search_term)
if area:
search_term = area[search_term]
return super().get_search_results(request, queryset, search_term)
def get_readonly_fields(self, request: HttpRequest, obj: Optional["Client"]) -> Union[List[str], Tuple]:
readonly_fields = [
'id',
'make_transaction',
'debit',
'credit',
'period_close',
'get_cash',
'get_net',
]
if obj:
return readonly_fields+ ['cash']
return readonly_fields
def add_view(self, *args, **kwargs):
self.fieldsets = self.add_fieldsets
return super().add_view(*args, **kwargs)
def changeform_view(self, request: HttpRequest, object_id: Optional[str], form_url: str, extra_context: Optional[Dict[str, bool]]) -> Any:
transact_to_value = request.COOKIES.get('to_value')
transact_from_value = request.COOKIES.get('from_value')
today_full = datetime.today().date()
extra_context = {
"filter_label": _('Filter Transactions'),
'from' :format_html('<label>{}</label>: <input type=date value={} id=transactions_from>', _('from'), today_full),
'to' : format_html(' <label>{}</label>: <input type=date value={} id=transactions_to>', _('to'), today_full),
'trs': [
_('id of transaction'),
_('ISSUED AT'),
_('TRANSACTION TYPE'),
_('Description'),
_('debit'),
_('credit'),
_('balance'),
_('VIEW'),
_('CSV'),
_('PDF'),
],
"page": _('Page'),
'of': _('of'),
'next': _('next'),
'previous':_('previous'),
'last': _('last page'),
'first': _('first')
}
client = self.get_object(request=request,object_id=object_id)
if not client:
return super().changeform_view(request, object_id=object_id, form_url=form_url, extra_context=extra_context)
transactions = client.transactions.all()
if transact_to_value:
to_date=str_to_date(transact_to_value)
from_date=str_to_date(transact_from_value)
transactions = transactions.filter(issued_at__gte=from_date).filter(issued_at__lte=to_date).all()
extra_context.update(ConsumerTransaction.prepare_tarnsactions_table(request, transactions, 'admin:payment_clientpaymenttransaction_change', consumer= 'client'))
return super().changeform_view(request, object_id=object_id, form_url=form_url, extra_context=extra_context)
def get_urls(self) -> List[URLPattern]:
urls = super().get_urls()
urls += [
path('<int:client_id>/make_a_transaction', self.process_make_transaction, ),
path('<int:client_id>/change/<int:id>/download/csv', self.download_transaction_csv ,),
path('<int:client_id>/change/<int:id>/download/pdf', self.download_transaction_pdf , name='client_transaction_download_pdf'),
path('<int:client_id>/change/period_close', self.period_close_controller , name='client_period_close'),
path('account-statment/<str:date_from>/<str:date_to>/<str:client_id>', self.account_statment_handler , name='client_account_statment'),
]
return urls
def account_statment_handler(self, request, date_from, date_to, client_id):
client = self.get_object(request, client_id)
return CommonMethods.account_statment_pdf(
date_from=date_from,
date_to=date_to,
consumer_obj=client,
request=request
)
def period_close_controller(self, request, client_id):
client = self.get_object(request, client_id)
transactions = client.transactions
return CommonMethods.make_peroid_close(transactions, client)
def changelist_view(self, request: HttpRequest, extra_context: Optional[Dict[str, str]]=None) -> TemplateResponse:
extra_context = {
'to': _('to'),
'from': _('from'),
'export': _('export'),
"account_stament_label": _('account statment')
}
response = super().changelist_view(request, extra_context=extra_context)
if request.COOKIES.get('client_id'):
response.delete_cookie('client_id')
return response
def process_make_transaction(self, request, **kwargs):
url = reverse("admin:payment_clientpaymenttransaction_add")
response = HttpResponseRedirect(url)
response.set_cookie('client_id', kwargs.get('client_id'))
return response
def get_queryset(self, request: HttpRequest) -> QuerySet:
client_id = request.COOKIES.get('client_id')
if not client_id:
return super().get_queryset(request)
client_list_path = reverse('admin:client_client_changelist')
queryset = super().get_queryset(request)
if not request.path == client_list_path:
queryset = queryset.filter(id=client_id)
return queryset
def download_transaction_pdf(self, request, **kwargs):
transaction = ClientPaymentTransaction.objects.get(id=kwargs['id'])
return admin_client_download_transaction_pdf(transaction, request)
def download_transaction_csv(self, *args_, **kwargs):
url = reverse('admin:client_transaction_download_csv', args=[kwargs['id']])
return HttpResponseRedirect(url)
def has_delete_permission(self, request, obj=None):
return False
    def activate_clients(self, request, queryset):
        clients_no = queryset.update(is_active=True)
        client_string = 'client has' if clients_no == 1 else 'clients have'
        self.message_user(request, f'{clients_no} {client_string} been activated successfully')
    def deactivate_clients(self, request, queryset):
        clients_no = queryset.update(is_active=False)
        client_string = 'client has' if clients_no == 1 else 'clients have'
        self.message_user(request, f'{clients_no} {client_string} been deactivated successfully')
def get_action(self, action: Union[Callable, str]) -> Tuple[Callable, str, str]:
return super().get_action(action)
def save_model(self, request: Any, client: "Client", form: Any, change: Any) -> None:
super().save_model(request, client, form, change)
if (not client.cash == 0) and (change == False):
if not ClientPaymentTransaction.objects.filter(client_id=client.pk).exists():
if client.cash < 0:
try:
type_tranasction = PaymentTransactionType.objects.get(name=_('Opening account'))
except PaymentTransactionType.DoesNotExist:
type_tranasction = PaymentTransactionType.objects.create(
name=_('Opening account'),
transaction_for=2,
)
ClientPaymentTransaction.objects.create(
amount=abs(client.cash),
client=client,
type_tranasction=type_tranasction,
payment_type=2,
issued_by = request.user
)
return
try:
type_tranasction = PaymentTransactionType.objects.get(name=_('Opening account'))
except PaymentTransactionType.DoesNotExist:
type_tranasction = PaymentTransactionType.objects.create(
name=_('Opening account'),
transaction_for=1,
)
ClientPaymentTransaction.objects.create(
amount=client.cash,
client=client,
type_tranasction=type_tranasction,
payment_type=1,
issued_by = request.user
)
def export_as_xls(self, request, queryset):
response = HttpResponse(content_type='application/vnd.ms-excel')
xls = xlsxwriter.Workbook(response)
work_sheet = xls.add_worksheet()
headers_format = xls.add_format()
headers_format.set_font_shadow()
headers_format.set_bg_color('gray')
headers_format.set_border(1)
headers_format.set_font_color('white')
headers_format.set_bold()
headers_format.set_align('center')
headers_format.set_locked(True)
headers_format.set_size(15)
data_format = xls.add_format()
data_format.set_align('center')
data_format.set_bg_color('#8d8894')
data_format.set_font_color('#e7e7e7')
data_format.set_font_size(14)
data_format.set_bold()
data_format.set_border()
data_format.set_border_color('black')
sheet_headers = [
_('city'),
_('address'),
_('phone'),
_('email'),
_('id'),
_('name'),
]
qs_values = queryset.order_by('-id').all()
make_xls_headers(work_sheet, sheet_headers, headers_format)
data_fields_names = [
'name',
'id',
'email',
'phone',
'address',
'city',
]
make_xls_data(work_sheet, qs_values, data_fields_names, data_format)
xls.close()
return response
export_as_xls.short_description = ugettext_lazy("Export Selected as xls")
admin.site.register(Client, ClientAdmin)
| 35.816631 | 238 | 0.589177 | 1,692 | 16,798 | 5.529551 | 0.160165 | 0.028858 | 0.027362 | 0.016032 | 0.416097 | 0.354853 | 0.345019 | 0.345019 | 0.337751 | 0.318298 | 0 | 0.003626 | 0.294023 | 16,798 | 468 | 239 | 35.893162 | 0.785311 | 0.001131 | 0 | 0.379397 | 0 | 0 | 0.154318 | 0.040055 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052764 | false | 0 | 0.065327 | 0.005025 | 0.20603 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
786303dad3d68e212f04a894e451f0bed72ebf96 | 949 | py | Python | mountainproject/util/util.py | calebwang/mountainproject | 6a986d33a1e44710308e66eea77b66167a0ef2a7 | [
"MIT"
] | null | null | null | mountainproject/util/util.py | calebwang/mountainproject | 6a986d33a1e44710308e66eea77b66167a0ef2a7 | [
"MIT"
] | null | null | null | mountainproject/util/util.py | calebwang/mountainproject | 6a986d33a1e44710308e66eea77b66167a0ef2a7 | [
"MIT"
] | null | null | null | import itertools
def chunk(iterable, n):
it = iter(iterable)
cls = list
chunk = cls(itertools.islice(it, n))
while chunk:
yield chunk
chunk = cls(itertools.islice(it, n))
def map_chunk(iterable, n, f_chunk):
"""
Map over iterable in chunks of size n, applying f_chunk to
each chunk, and then flattening the result back into the original
shape of iterable
"""
cls = iterable.__class__
it_result = itertools.chain.from_iterable(
f_chunk(c) for c in chunk(iterable, n)
)
return cls(it_result)
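# e.g. map_chunk([1, 2, 3, 4, 5], 2, lambda c: [x * 10 for x in c])
#      -> [10, 20, 30, 40, 50]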
def paginator(page_getter, page_limit, n):
start_pos = 0
num_results = 0
while True:
page = page_getter(start_pos)
yield page
page_size = len(page)
num_results += page_size
start_pos += page_size
if num_results >= n or page_size < page_limit:
return
def paginate(page_getter, page_limit, n):
return list(itertools.chain.from_iterable(paginator(page_getter, page_limit, n)))
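# Illustrative: paging an in-memory list in pages of 2, stopping at n=5:
#   data = list(range(10))
#   paginate(lambda start: data[start:start + 2], page_limit=2, n=5)
#   -> [0, 1, 2, 3, 4, 5]  (the page that crosses n is still returned whole)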
| 24.333333 | 83 | 0.700738 | 145 | 949 | 4.37931 | 0.358621 | 0.062992 | 0.066142 | 0.089764 | 0.204724 | 0.173228 | 0 | 0 | 0 | 0 | 0 | 0.00266 | 0.207587 | 949 | 38 | 84 | 24.973684 | 0.841755 | 0.150685 | 0 | 0.074074 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0 | 0.037037 | 0.037037 | 0.296296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78652027358f752a4fa3b5d361b4256d3dc48763 | 5,372 | py | Python | src/psypose/MEVA/scripts/eval_vae.py | scraplab/psypose | 81753e29b78023b8a7c48356ec54c67b7182c183 | [
"MIT"
] | null | null | null | src/psypose/MEVA/scripts/eval_vae.py | scraplab/psypose | 81753e29b78023b8a7c48356ec54c67b7182c183 | [
"MIT"
] | 1 | 2021-10-13T16:27:34.000Z | 2021-10-13T16:27:34.000Z | src/psypose/MEVA/scripts/eval_vae.py | scraplab/psypose | 81753e29b78023b8a7c48356ec54c67b7182c183 | [
"MIT"
] | null | null | null | import glob
import os
import sys
import pdb
import os.path as osp
sys.path.append(os.getcwd())
import math
import pickle as pk
import argparse
import time
from torch import optim
from torch.utils.tensorboard import SummaryWriter
from tqdm import tqdm
import joblib
from khrylib.utils import *
from meva.utils.config import Config
from meva.lib.model import *
from meva.utils.transform_utils import *
from meva.utils.image_utils import *
from meva.lib.smpl import SMPL, SMPL_MODEL_DIR, H36M_TO_J14, SMPL_MEAN_PARAMS
from meva.utils.video_config import MEVA_DATA_DIR
from meva.utils.eval_utils import (
compute_accel,
compute_error_accel,
compute_error_verts,
batch_compute_similarity_transform_torch,
smpl_to_joints,
compute_metric_on_seqs
)
from copycat.smpllib.smpl_mujoco import SMPL_M_Renderer
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--gpu_index", type=int, default=0)
parser.add_argument("--cfg", default=None)
parser.add_argument("--image_size", action="store_true", default=400)
parser.add_argument("--render", action="store_true", default=False)
parser.add_argument("--iter", type=int, default=-2)
args = parser.parse_args()
dtype = torch.float32
torch.set_default_dtype(dtype)
cfg_name = args.cfg
cfg = Config(args.cfg)
gpu_index = args.gpu_index
device = torch.device('cuda', index=gpu_index)
image_size = args.image_size
has_smpl_root = cfg.data_specs['has_smpl_root']
model, _, run_batch = get_models(cfg, iter = args.iter)
model.to(device)
model.eval()
smpl = SMPL(
SMPL_MODEL_DIR,
batch_size=50,
create_transl=False,
dtype = dtype
).to(device)
J_regressor = torch.from_numpy(np.load(osp.join(MEVA_DATA_DIR, 'J_regressor_h36m.npy'))).float()
output_base = "/hdd/zen/data/ActmixGenenerator/output/3dpw"
output_path = osp.join(output_base, cfg_name)
if not osp.isdir(output_path): os.makedirs(output_path)
dataset_3dpw = joblib.load("/hdd/zen/data/ActBound/AMASS/3dpw_train_res.pkl")
# dataset_3dpw = joblib.load("/hdd/zen/data/ActBound/AMASS/3dpw_val_res.pkl")
# dataset_3dpw = joblib.load("/hdd/zen/data/ActBound/AMASS/3dpw_test_res.pkl")
image_size = 400
total = cfg.data_specs['t_total']
if args.render:
# renderer = SMPL_Renderer(device = device, image_size = 400, camera_mode="look_at")
renderer = SMPL_M_Renderer(render_size = (image_size, image_size))
eval_recs =[]
# eval_vibe =[]
idx = 0
for k, v in tqdm(dataset_3dpw.items()):
curr_name = v
mocap_thetas = v['target_traj']
vibe_thetas = v['traj']
vis_feats = v['feat']
mocap_betas = v['target_beta']
vibe_betas = v['traj_beta']
with torch.no_grad():
vibe_pose = torch.tensor(vibe_thetas).squeeze().to(device)
mocap_pose = torch.tensor(mocap_thetas).squeeze().to(device)
vis_feats = torch.tensor(vis_feats).squeeze().to(device)
vibe_betas = torch.tensor(vibe_betas).squeeze().to(device)
mocap_betas = torch.tensor(mocap_betas).squeeze().to(device)
mocap_pose_6d = convert_aa_to_orth6d(mocap_pose).reshape(-1, 144)
mocap_pose_6d = mocap_pose_6d[None, :].permute(1, 0, 2)
vibe_pose_6d = convert_aa_to_orth6d(vibe_pose).reshape(-1, 144)
vibe_pose_6d = vibe_pose_6d[None, :].permute(1, 0, 2)
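            # Assumption: 144 = 24 SMPL joints x 6-D continuous rotation
            # representation, matching the reshape(-1, 144) calls above.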
vis_feats = vis_feats[None, :].permute(1, 0, 2)
mocap_pose_6d_chunks = torch.split(mocap_pose_6d, total, dim=0)
vibe_pose_6d_chunks = torch.split(vibe_pose_6d, total, dim=0)
vis_feats_chunks = torch.split(vis_feats, total, dim=0)
X_r_acc = []
for i in range(len(mocap_pose_6d_chunks)):
mocap_pose_chunk = mocap_pose_6d_chunks[i]
vibe_pose_chunk = vibe_pose_6d_chunks[i]
vis_feats_chunk = vis_feats_chunks[i]
label_rl = torch.tensor([[1,0]]).to(device).float()
X_r, mu, logvar = model(mocap_pose_chunk)
X_r_acc.append(X_r[:mocap_pose_chunk.shape[0]])
X_r = torch.cat(X_r_acc)
X_r = X_r.permute(1,0,2)
ref_pose_curr_rl = convert_orth_6d_to_aa(X_r.squeeze())
######## Rendering...... ########
if args.render:
mocap_pose = vertizalize_smpl_root(mocap_pose).cpu().numpy()
ref_pose_curr_rl = vertizalize_smpl_root(ref_pose_curr_rl).cpu().numpy()
tgt_images = renderer.render_smpl(mocap_pose)
ref_images = renderer.render_smpl(ref_pose_curr_rl)
grid_size = [1,2]
videos = [tgt_images, ref_images]
descriptions = ["Mocap", "VAE"]
output_name = "{}/output_vae{:02d}.mp4".format(output_path, idx)
assemble_videos(videos, grid_size, descriptions, output_name)
print(output_name)
idx += 1
else:
eval_acc = compute_metric_on_seqs(ref_pose_curr_rl, mocap_betas, mocap_pose, mocap_betas, smpl, J_regressor=J_regressor)
eval_recs.append(eval_acc)
print(np.mean(eval_recs, axis = 0))
| 36.794521 | 136 | 0.648734 | 748 | 5,372 | 4.332888 | 0.256684 | 0.044431 | 0.023758 | 0.020056 | 0.139463 | 0.074668 | 0.060475 | 0.048133 | 0.048133 | 0.048133 | 0 | 0.019311 | 0.238459 | 5,372 | 145 | 137 | 37.048276 | 0.772916 | 0.04933 | 0 | 0.017544 | 0 | 0 | 0.053884 | 0.022222 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.192982 | 0 | 0.192982 | 0.017544 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
786642d4839cecf971d110875afa5237e48ae23f | 475 | py | Python | exercise1_6.py | ccie8030/pynet | 84be459c6cb50a025a801e3d4b9bd237c698776a | [
"Apache-2.0"
] | 1 | 2016-01-30T03:36:15.000Z | 2016-01-30T03:36:15.000Z | exercise1_6.py | ccie8030/pynet | 84be459c6cb50a025a801e3d4b9bd237c698776a | [
"Apache-2.0"
] | null | null | null | exercise1_6.py | ccie8030/pynet | 84be459c6cb50a025a801e3d4b9bd237c698776a | [
"Apache-2.0"
] | null | null | null | import yaml
import json
def main():
yaml_file = 'my_yaml.yml'
json_file = 'my_json.json'
    # The original dict repeated the 'model' key, so 'wlc' was silently
    # discarded; 'device_type' is an assumed rename for the first entry.
    net_dict = {'ip_addr': '192.168.1.214', 'device_type': 'wlc', 'manufacturer': 'Cisco', 'model': '2504'}
net_list = ['test_strings','1','2','3', net_dict, 'python', 'neteng']
with open(yaml_file, "w") as f:
f.write(yaml.dump(net_list, default_flow_style=False))
with open(json_file, "w") as f:
json.dump(net_list, f)
if __name__ == '__main__':
    main()
| 18.269231 | 104 | 0.581053 | 71 | 475 | 3.676056 | 0.549296 | 0.08046 | 0.05364 | 0.061303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046832 | 0.235789 | 475 | 25 | 105 | 19 | 0.672176 | 0 | 0 | 0 | 0 | 0 | 0.225053 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.166667 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
786662f7fd60f019ba19ea1741f61dafe91fb64a | 1,978 | py | Python | Structral Similarity/stsim_2.py | Michelle0903/Performance-Comparison-of-Structural-Similarity-Metrics | c2c409eefe335e4946ca895ad1d22b4930263819 | [
"MIT"
] | null | null | null | Structral Similarity/stsim_2.py | Michelle0903/Performance-Comparison-of-Structural-Similarity-Metrics | c2c409eefe335e4946ca895ad1d22b4930263819 | [
"MIT"
] | null | null | null | Structral Similarity/stsim_2.py | Michelle0903/Performance-Comparison-of-Structural-Similarity-Metrics | c2c409eefe335e4946ca895ad1d22b4930263819 | [
"MIT"
] | null | null | null | from perceptual.metric_copy import Metric
import cv2
import os
import glob
import heapq
import torch
import time
from torch.utils.data import Dataset, DataLoader
data_dir = "/Users/yuxiao/Desktop/data/Corbis128BigExperiment_gray/"
data = glob.glob(data_dir + "*.tiff")
class ImgData(Dataset):
def __init__(self, k, data):
self.data = data
self.img1 = cv2.imread(data[k], cv2.IMREAD_GRAYSCALE)
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
img2_path = self.data[idx]
img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
score = m.STSIM2(self.img1, img2)
sample = score
return sample
def del_path(s):
(_, temp) = os.path.split(s)
return temp
def takesecond(elem):
return elem[1]
m = Metric()
res = []
knum = 10
for k in range(len(data)):
tmp = []
score = []
img1name = del_path(data[k])
tmp.append(img1name)
dataset = ImgData(k, data)
#print(len(dataset))
dataloader = DataLoader(dataset,
batch_size = 16,
shuffle = False,
num_workers = 16,
pin_memory = True)
score_list = []
for idx, batch_data in enumerate(dataloader):
score_list.extend(batch_data.numpy().tolist())
max_num_index_list = list(map(score_list.index, heapq.nlargest(knum, score_list)))
for ind in max_num_index_list:
tmp.append(del_path(data[ind]))
score.append(score_list[ind])
tmp.extend(score)
res.append(tmp)
if k%256 == 0:
print("%d images done"%(k+1))
#-----------------------------------------
outputfile = "./stsim_2_result.txt"
with open(outputfile, 'a') as f:
for i in range(len(res)):
line = ''
for name in res[i]:
line = line + str(name) + ','
line = line[:-1] + '\n'
f.write(line)
    # no explicit close needed; the 'with' block closes the file on exit
| 22.224719 | 86 | 0.570779 | 252 | 1,978 | 4.313492 | 0.400794 | 0.041398 | 0.033119 | 0.027599 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022048 | 0.289181 | 1,978 | 88 | 87 | 22.477273 | 0.751067 | 0.030334 | 0 | 0 | 0 | 0 | 0.051724 | 0.028736 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081967 | false | 0 | 0.131148 | 0.032787 | 0.295082 | 0.016393 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7867596a29dbeb64fbd242bc6c8c7720e3a87739 | 509 | py | Python | src/codebase/controllers/default.py | ooclab/ga.service | 894b4703628b2ce93790db31939009783e8e7b09 | [
"MIT"
] | 1 | 2019-09-20T04:32:52.000Z | 2019-09-20T04:32:52.000Z | src/codebase/controllers/default.py | ooclab/ga.service | 894b4703628b2ce93790db31939009783e8e7b09 | [
"MIT"
] | 1 | 2019-02-01T04:57:27.000Z | 2019-02-01T04:57:27.000Z | src/codebase/controllers/default.py | ooclab/ga.service | 894b4703628b2ce93790db31939009783e8e7b09 | [
"MIT"
] | 1 | 2019-01-14T06:51:17.000Z | 2019-01-14T06:51:17.000Z | # pylint: disable=W0221,W0223
import os
from codebase.web import APIRequestHandler
class HealthHandler(APIRequestHandler):
def get(self):
self.write("ok")
class SpecHandler(APIRequestHandler):
"""
提供 SwaggerUI YAML 文档
"""
def get(self):
path = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir))
abspath = os.path.join(path, "schema.yml")
self.set_header("Content-Type", "text/plain")
self.write(open(abspath, "rb").read())
| 20.36 | 82 | 0.650295 | 63 | 509 | 5.174603 | 0.619048 | 0.07362 | 0.06135 | 0.104294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019802 | 0.206287 | 509 | 24 | 83 | 21.208333 | 0.787129 | 0.096267 | 0 | 0.181818 | 0 | 0 | 0.081081 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.181818 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
787089ab58a2c573c193c13a1c54f7ae8051fe13 | 1,692 | py | Python | graphql/backend/tests/test_compileddocument.py | ThanksBoomerang/graphql-core-legacy | 6e2fbccdec655ce9122b84d3808c14242c4e6b96 | [
"MIT"
] | 8 | 2020-03-23T21:34:02.000Z | 2021-11-12T11:27:45.000Z | graphql/backend/tests/test_compileddocument.py | ThanksBoomerang/graphql-core-legacy | 6e2fbccdec655ce9122b84d3808c14242c4e6b96 | [
"MIT"
] | 17 | 2020-03-14T22:22:29.000Z | 2022-03-16T19:26:37.000Z | graphql/backend/tests/test_compileddocument.py | ThanksBoomerang/graphql-core-legacy | 6e2fbccdec655ce9122b84d3808c14242c4e6b96 | [
"MIT"
] | 17 | 2020-03-23T12:06:23.000Z | 2022-02-13T05:33:32.000Z | from ...language.base import parse
from ...utils.ast_to_code import ast_to_code
from ..compiled import GraphQLCompiledDocument
from .schema import schema
def test_compileddocument_from_module_dict():
# type: () -> None
document_string = "{ hello }"
document_ast = parse(document_string)
document = GraphQLCompiledDocument.from_module_dict(
schema,
{
"document_string": document_string,
"document_ast": document_ast,
"execute": lambda *_: True,
},
)
assert document.operations_map == {None: "query"}
assert document.document_string == document_string
assert document.document_ast == document_ast
assert document.schema == schema
assert document.execute()
def test_compileddocument_from_code():
# type: () -> None
document_string = "{ hello }"
document_ast = parse(document_string)
code = '''
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from graphql.language import ast
from graphql.language.parser import Loc
from graphql.language.source import Source
schema = None
document_string = """{document_string}"""
source = Source(document_string)
def loc(start, end):
return Loc(start, end, source)
document_ast = {document_ast}
def execute(*_):
return True
'''.format(
document_string=document_string, document_ast=ast_to_code(document_ast)
)
document = GraphQLCompiledDocument.from_code(schema, code)
assert document.operations_map == {None: "query"}
assert document.document_string == document_string
assert document.document_ast == document_ast
assert document.schema == schema
assert document.execute()
| 28.2 | 79 | 0.708038 | 191 | 1,692 | 6 | 0.225131 | 0.183246 | 0.153578 | 0.122164 | 0.448517 | 0.448517 | 0.380454 | 0.380454 | 0.380454 | 0.380454 | 0 | 0.000732 | 0.19208 | 1,692 | 59 | 80 | 28.677966 | 0.837601 | 0.019504 | 0 | 0.304348 | 0 | 0 | 0.27657 | 0.055556 | 0 | 0 | 0 | 0 | 0.217391 | 1 | 0.043478 | false | 0 | 0.173913 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78715707268ed203ef8de172b23e49fc52b02ddc | 9,568 | py | Python | pid/eco/views.py | PlanetaryResources/pid | ecb146cc26c6ade2863bcdc6d271ead3cbcbbe40 | [
"Apache-2.0"
] | 3 | 2019-06-14T18:05:22.000Z | 2020-01-22T17:38:17.000Z | pid/eco/views.py | PlanetaryResources/pid | ecb146cc26c6ade2863bcdc6d271ead3cbcbbe40 | [
"Apache-2.0"
] | null | null | null | pid/eco/views.py | PlanetaryResources/pid | ecb146cc26c6ade2863bcdc6d271ead3cbcbbe40 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""Design views."""
from flask import Blueprint, request, jsonify, render_template, make_response
from flask_login import login_required, current_user
from .forms import CreateECOForm
from .models import ECO
from pid.common.models import Project, Approver
from pid.mail import send_email
from pid.user.models import User
from pid.design.models import Design
blueprint = Blueprint('eco', __name__, url_prefix='/eco', static_folder='../static')
@blueprint.route('/create', methods=['POST'])
@login_required
def create_eco():
"""Create new ECO."""
form = CreateECOForm(request.form)
validated = form.validate_on_submit()
design_ids = form.designs.data.split(',')
designs = []
for design_id in design_ids:
design = Design.get_by_id(design_id)
        if design is not None:
designs.append(design)
if validated:
variables = {
'name': form.name.data,
'owner': form.owner.data,
'project': designs[0].project
}
eco = ECO.create(**variables)
for design in designs:
eco.designs.append(design)
eco.save()
jsonData = {
'success': True,
'ecoId': eco.id,
'url': eco.get_url()
}
return jsonify(jsonData), 200, {'ContentType': 'application/json'}
else:
return make_response(render_template('eco/create_eco.html', form=form, designs=designs), 500)
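# Illustrative form payload for the view above (field names come from
# CreateECOForm; the values are made up):
#   name=Fix bracket tolerance&owner=<user id>&designs=12,15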
@blueprint.route('/update', methods=['POST'])
@login_required
def update_eco():
id = request.form['pk']
    # The field UID looks like [fieldname]-[classname]-[id]-editable; the field name is always the first section
field = request.form['name'].split('-')[0]
field_value = request.form['value']
eco = ECO.get_by_id(id)
original_value = None
if field == 'name':
original_value = eco.name
eco.update(name=field_value)
    elif field == 'summary':
original_value = eco.summary
eco.update(summary=field_value)
    elif field == 'analysis':
original_value = eco.analysis
eco.update(analysis=field_value)
    elif field == 'corrective_action':
original_value = eco.corrective_action
eco.update(corrective_action=field_value)
elif field == 'project':
if eco.project:
original_value = eco.project.name
project = Project.get_by_id(field_value)
eco.update(project=project)
field_value = project.name if project else None
elif field == 'owner':
if eco.owner:
original_value = eco.owner.get_name()
if eco.owner.padawan:
for approver in eco.approvers:
if approver.approver == eco.owner.supervisor and approver.capacity == 'Supervisor':
eco.approvers.remove(approver)
approver.delete()
owner = User.get_by_id(field_value)
if owner.padawan:
approver = Approver.create(approver_id=owner.supervisor_id, capacity='Supervisor')
eco.approvers.append(approver)
eco.update(owner=owner)
field_value = owner.get_name() if owner else None
    elif field == 'thumbnail_id':
thumbnail_id = None if field_value == 'default' else field_value
eco.update(thumbnail_id=thumbnail_id)
return render_template('shared/thumbnail_return.html', record=eco)
eco.add_change_log_entry(action='Edit', field=field.title().replace('_', ' '),
original_value=original_value, new_value=field_value)
return jsonify({'success': True}), 200, {'ContentType': 'application/json'}
@blueprint.route('/update_state', methods=['POST'])
@login_required
def update_eco_state():
# TODO: verify that current_user is owner of record and can edit it
design_id = request.values['parent_id']
state = request.form['state']
transition = request.form['transition']
comment = request.values['comment']
eco = ECO.get_by_id(design_id)
eco.update(state=state)
eco.add_workflow_log_entry(capacity='Owner', action=transition, comment=comment)
if state == eco.workflow.get_approval_state():
for approver in eco.approvers:
if not approver.approved_at:
variables = {
'record': eco,
'approver': approver,
'comment': comment
}
send_email(subject='Approval Required for {0}: {1}'.format(eco.descriptor, eco.get_name()),
recipients=[approver.approver.email],
text_body=render_template('mail/approvals/new_approver.txt', **variables),
html_body=render_template('mail/approvals/new_approver.html', **variables))
elif state == eco.workflow.released_state:
# Only self-approval will trigger this
eco.add_workflow_log_entry(capacity='PLAIDmin', action='Approved')
return jsonify({'success': True}), 200, {'ContentType': 'application/json'}
@blueprint.route('/<string:key>', methods=['GET'])
@login_required
def view_eco(key):
"""View existing eco."""
eco = ECO.get_by_key(key)
users = User.query.all()
projects = Project.query.all()
variables = {
'eco': eco,
'users': users,
'projects': projects
}
return render_template('eco/view_eco.html', **variables)
@blueprint.route('/typeahead_search', methods=['GET'])
@login_required
def typeahead_search():
query = request.args.get('query')
ecos = ECO.typeahead_search(query)
results = []
for eco in ecos:
eco_dict = {}
eco_dict['class'] = eco.get_class_name()
eco_dict['icon'] = '<i class="pri-typeahead-icon pri-icons-record-eco" aria-hidden="true"></i>'
eco_dict['id'] = eco.id
eco_dict['name'] = eco.name
eco_dict['number'] = eco.key
eco_dict['object_type'] = 'ECO'
eco_dict['state'] = eco.state
eco_dict['thumb_url'] = eco.get_thumbnail_url()
eco_dict['url'] = eco.get_url()
results.append(eco_dict)
return jsonify({'success': True, 'data': results}), 200, {'ContentType': 'application/json'}
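# Illustrative response shape for the view above (all values are made up):
#   {"success": true, "data": [{"id": 7, "name": "...", "number": "ECO-7",
#    "object_type": "ECO", "state": "Draft", "url": "/eco/ECO-7", ...}]}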
@blueprint.route('/get_create_modal', methods=['POST'])
@login_required
def get_eco_modal():
form = CreateECOForm(request.form)
variables = {
'form': form
}
design_id = request.form.get('design_id', None)
if design_id:
variables['designs'] = [Design.get_by_id(design_id)]
return render_template('eco/create_eco.html', **variables)
@blueprint.route('/advanced_search', methods=['GET'])
@login_required
def advanced_search_ecos():
params = request.args.to_dict()
ecos = ECO.advanced_search(params)
results = []
for eco in ecos:
eco_dict = {
'eco_number': eco.key,
'name': eco.name,
'state': eco.state,
'project': eco.project.name,
'summary': eco.summary,
'owner': eco.owner.get_name(),
'created_by': eco.created_by.get_name(),
'created_at': eco.created_at,
'url': eco.get_url()
}
results.append(eco_dict)
return jsonify({'success': True, 'data': results}), 200, {'ContentType': 'application/json'}
@blueprint.route('/get_add_design_typeahead_modal', methods=['POST'])
@login_required
def get_add_design_typeahead_modal():
eco_id = request.values['eco_id']
eco = ECO.get_by_id(eco_id)
designs = []
for design in eco.designs:
designs.extend([rev_design.id for rev_design in design.find_all_revisions()])
variables = {
'eco': eco,
'designs': designs
}
return render_template('eco/add_design_typeahead_modal.html', **variables)
@blueprint.route('/update_design', methods=['POST'])
@login_required
def update_design():
eco_id = request.values['eco_id']
old_design_id = request.values['old_design_id']
new_design_id = request.values['new_design_id']
eco = ECO.get_by_id(eco_id)
old_design = Design.get_by_id(old_design_id)
new_design = Design.get_by_id(new_design_id)
eco.designs.remove(old_design)
eco.designs.append(new_design)
eco.add_change_log_entry(action='Edit', field='Design', original_value=old_design.get_descriptive_url(),
new_value=new_design.get_descriptive_url())
eco.save()
variables = {
'eco': eco,
'design': new_design
}
return render_template('eco/eco_design_row.html', **variables)
@blueprint.route('/add_design', methods=['POST'])
@login_required
def add_design():
eco_id = request.values['eco_id']
design_id = request.values['design_id']
eco = ECO.get_by_id(eco_id)
design = Design.get_by_id(design_id)
eco.designs.append(design)
eco.add_change_log_entry(action='Add', field='Design', new_value=design.get_descriptive_url())
eco.save()
variables = {
'eco': eco,
'design': design
}
return render_template('eco/eco_design_row.html', **variables)
@blueprint.route('/remove_design', methods=['POST'])
@login_required
def remove_design():
eco_id = request.values['eco_id']
eco = ECO.get_by_id(eco_id)
design_id = request.values['design_id']
design = Design.get_by_id(design_id)
eco.designs.remove(design)
eco.add_change_log_entry(action='Remove', field='Design', original_value=design.get_descriptive_url())
eco.save()
return jsonify({'success': True}), 200, {'ContentType': 'application/json'}
| 35.969925 | 110 | 0.639946 | 1,179 | 9,568 | 4.969466 | 0.15352 | 0.028674 | 0.016726 | 0.03277 | 0.371906 | 0.315071 | 0.249872 | 0.177846 | 0.138932 | 0.129374 | 0 | 0.003513 | 0.226484 | 9,568 | 265 | 111 | 36.10566 | 0.788137 | 0.02916 | 0 | 0.269565 | 0 | 0.004348 | 0.134196 | 0.029342 | 0 | 0 | 0 | 0.003774 | 0 | 1 | 0.047826 | false | 0 | 0.034783 | 0 | 0.13913 | 0.056522 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7872f59905b7d2b01c5c5e396418732a9653183b | 3,485 | py | Python | psono/restapi/serializers/create_membership.py | dirigeant/psono-server | a18c5b3c4d8bbbe4ecf1615b210d99fb77752205 | [
"Apache-2.0",
"CC0-1.0"
] | 48 | 2018-04-19T15:50:58.000Z | 2022-01-23T15:58:11.000Z | psono/restapi/serializers/create_membership.py | dirigeant/psono-server | a18c5b3c4d8bbbe4ecf1615b210d99fb77752205 | [
"Apache-2.0",
"CC0-1.0"
] | 9 | 2018-09-13T14:56:18.000Z | 2020-01-17T16:44:33.000Z | psono/restapi/serializers/create_membership.py | dirigeant/psono-server | a18c5b3c4d8bbbe4ecf1615b210d99fb77752205 | [
"Apache-2.0",
"CC0-1.0"
] | 11 | 2019-09-20T11:53:47.000Z | 2021-07-18T22:41:31.000Z | from django.utils.translation import ugettext_lazy as _
from rest_framework import serializers, exceptions
from ..fields import UUIDField, BooleanField
from ..models import User, User_Group_Membership
import re
class CreateMembershipSerializer(serializers.Serializer):
user_id = UUIDField(required=True)
group_id = UUIDField(required=True)
secret_key = serializers.CharField(required=True)
secret_key_nonce = serializers.CharField(max_length=64, required=True)
secret_key_type = serializers.CharField(default='asymmetric')
private_key = serializers.CharField(required=True)
private_key_nonce = serializers.CharField(max_length=64, required=True)
private_key_type = serializers.CharField(default='asymmetric')
group_admin = BooleanField(default=False)
share_admin = BooleanField(default=True)
def validate_secret_key(self, value):
value = value.strip()
if not re.match('^[0-9a-f]*$', value, re.IGNORECASE):
msg = _('secret_key must be in hex representation')
raise exceptions.ValidationError(msg)
return value
def validate_secret_key_nonce(self, value):
value = value.strip()
if not re.match('^[0-9a-f]*$', value, re.IGNORECASE):
msg = _('secret_key_nonce must be in hex representation')
raise exceptions.ValidationError(msg)
return value
def validate_secret_key_type(self, value):
value = value.strip()
if value not in ('symmetric', 'asymmetric'):
msg = _('Unknown secret key type')
raise exceptions.ValidationError(msg)
return value
def validate_private_key(self, value):
value = value.strip()
if not re.match('^[0-9a-f]*$', value, re.IGNORECASE):
msg = _('private_key must be in hex representation')
raise exceptions.ValidationError(msg)
return value
def validate_private_key_nonce(self, value):
value = value.strip()
if not re.match('^[0-9a-f]*$', value, re.IGNORECASE):
msg = _('private_key_nonce must be in hex representation')
raise exceptions.ValidationError(msg)
return value
def validate_private_key_type(self, value):
value = value.strip()
if value not in ('symmetric', 'asymmetric'):
msg = _('Unknown private key type')
raise exceptions.ValidationError(msg)
return value
def validate_user_id(self, value):
try:
User.objects.get(pk=value)
except User.DoesNotExist:
msg = _('Target user does not exist.')
raise exceptions.ValidationError(msg)
return value
def validate_group_id(self, value):
# This line also ensures that the desired group exists and that the user firing the request has admin rights
if not User_Group_Membership.objects.filter(group_id=value, user=self.context['request'].user, group_admin=True, accepted=True).exists():
msg = "NO_PERMISSION_OR_NOT_EXIST"
raise exceptions.ValidationError(msg)
return value
def validate(self, attrs: dict) -> dict:
user_id = attrs.get('user_id')
group_id = attrs.get('group_id')
if User_Group_Membership.objects.filter(group_id=group_id, user_id=user_id).count() > 0:
msg = _("User is already part of the group.")
raise exceptions.ValidationError(msg)
return attrs
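# Illustrative payload shape accepted by this serializer (all values made up):
#   {"user_id": "<uuid>", "group_id": "<uuid>", "secret_key": "0afc3e...",
#    "secret_key_nonce": "1b9d...", "private_key": "77aa...",
#    "private_key_nonce": "2c4e...", "group_admin": false, "share_admin": true}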
| 33.190476 | 145 | 0.663415 | 421 | 3,485 | 5.31829 | 0.232779 | 0.053595 | 0.12059 | 0.132649 | 0.640464 | 0.591782 | 0.552479 | 0.517642 | 0.517642 | 0.472086 | 0 | 0.004909 | 0.240172 | 3,485 | 104 | 146 | 33.509615 | 0.840634 | 0.030416 | 0 | 0.408451 | 0 | 0 | 0.127924 | 0.007699 | 0 | 0 | 0 | 0 | 0 | 1 | 0.126761 | false | 0 | 0.070423 | 0 | 0.478873 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78736c505d7315485f6b5015659e5139ab914041 | 7,158 | py | Python | stanza_wrapper/stanza_wrapper.py | Filter-Bubble/stanza_wrapper | 04388869cbbe419132628422663e4c7c987cf1d0 | [
"Apache-2.0"
] | null | null | null | stanza_wrapper/stanza_wrapper.py | Filter-Bubble/stanza_wrapper | 04388869cbbe419132628422663e4c7c987cf1d0 | [
"Apache-2.0"
] | null | null | null | stanza_wrapper/stanza_wrapper.py | Filter-Bubble/stanza_wrapper | 04388869cbbe419132628422663e4c7c987cf1d0 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from . import __version__
import logging
import stanza
from KafNafParserPy import *
from lxml.etree import XMLSyntaxError
from io import BytesIO
import sys
from itertools import groupby
from operator import itemgetter
from xml.sax.saxutils import escape
logger = logging.getLogger(__name__)
this_name = 'Morphosyntactic parser based on stanza'
default_treebank = 'alpino'
def get_naf(input_file):
input = input_file.read()
try:
naf = KafNafParser(BytesIO(input))
except XMLSyntaxError:
input = input.decode("utf-8")
if "<NAF" in input and "</NAF>" in input:
# I'm guessing this should be a NAF file but something is wrong
logging.exception("Error parsing NAF file")
raise
naf = KafNafParser(type="NAF")
naf.set_version("3.0")
naf.set_language("nl")
naf.lang = "nl"
naf.raw = input
naf.set_raw(naf.raw)
return naf
def create_text_layer(st_doc, knaf_obj):
id_to_tokenid = {}
wcount = 1
offsets = {}
txt = knaf_obj.get_raw()
for sid, sentence in enumerate(st_doc.sentences):
id_to_tokenid[sid+1] = {}
for token in sentence.tokens:
token_obj = Cwf(type=knaf_obj.get_type())
token_id = 'w{}'.format(wcount)
token_length = len(token.text)
offsets[wcount] = txt.find(token.text, offsets.get(wcount-1, 0))
token_obj.set_id(token_id)
token_obj.set_length(str(token_length))
# token_obj.set_offset(str(offset)) # Is this correct????
token_obj.set_para('1')
token_obj.set_sent(str(sid+1))
token_obj.set_text(token.text)
token_obj.set_offset(str(offsets[wcount]))
wcount += 1
id_to_tokenid[sid+1][token.id[0]] = token_id
knaf_obj.add_wf(token_obj)
return id_to_tokenid
def get_term_type(pos):
if pos in ['det', 'pron', 'prep', 'vg', 'conj']:
return 'close'
else:
return 'open'
def create_term_layer(st_doc, knaf_obj, id_to_tokenid):
tcount = 0
term_id_mapping = {} # Mapping from stanford word index -> NAF term id
for sid, sentence in enumerate(st_doc.sentences):
for term in sentence.words:
new_term_id = 't_'+str(tcount)
term_id_mapping[(sid, term.id)] = new_term_id
term_obj = Cterm(type=knaf_obj.get_type())
term_obj.set_id(new_term_id)
new_span = Cspan()
new_span.create_from_ids([id_to_tokenid[sid+1]
[term.parent.id[0]]])
term_obj.set_span(new_span)
# lemma: copy from stanza
term_obj.set_lemma(term.lemma)
# pos: take the UD UPOS value
term_obj.set_pos(term.upos.lower())
# external reference: the UD FEATS value
if term.feats:
ext_ref = CexternalReference()
ext_ref.set_resource('Stanza')
ext_ref.set_reftype('FEATS')
ext_ref.set_reference(term.feats)
term_obj.add_external_reference(ext_ref)
# morphofeat: reformatted UD XPOS value
if term.xpos:
feats = term.xpos.split('|')
feat = feats[0] + '(' + ','.join(feats[1:]) + ')'
term_obj.set_morphofeat(feat)
termtype = get_term_type(term.upos.lower())
term_obj.set_type(termtype)
knaf_obj.add_term(term_obj)
tcount += 1
return term_id_mapping
def create_dependency_layer(st_doc, knaf_obj, term_id_mapping):
for s_id, sent in enumerate(st_doc.sentences):
for source, rel, target in sent.dependencies:
# Do not include root
if rel != 'root':
# Creating comment
str_comment = ' '+rel+'('+str(target.lemma)+','+str(source.lemma)+') '
str_comment = escape(str_comment, {"--": "&ndash"})
my_dep = Cdependency()
my_dep.set_from(term_id_mapping.get((s_id, source.id)))
my_dep.set_to(term_id_mapping.get((s_id, target.id)))
my_dep.set_function(rel)
my_dep.set_comment(str_comment)
knaf_obj.add_dependency(my_dep)
def add_linguistic_processors(in_obj, added_text_layer, treebank):
name = this_name + ' using {} treebank'.format(treebank)
if added_text_layer:
my_lp = Clp()
my_lp.set_name(name)
my_lp.set_version(__version__)
my_lp.set_timestamp()
in_obj.add_linguistic_processor('text', my_lp)
my_lp = Clp()
my_lp.set_name(name)
my_lp.set_version(__version__)
my_lp.set_timestamp()
in_obj.add_linguistic_processor('terms', my_lp)
my_lp = Clp()
my_lp.set_name(name)
my_lp.set_version(__version__)
my_lp.set_timestamp()
in_obj.add_linguistic_processor('deps', my_lp)
return in_obj
def parse(input_file, treebank=None):
treebank = treebank if treebank is not None else default_treebank
if isinstance(input_file, KafNafParser):
in_obj = input_file
else:
in_obj = get_naf(input_file)
lang = in_obj.get_language()
    if lang != 'nl':
        logging.error('Language is {} but must be nl (Dutch)'.format(lang))
        sys.exit(-1)
if in_obj.text_layer is None:
added_text_layer = True
nlp = stanza.Pipeline(lang='nl',
processors='tokenize,pos,lemma,depparse',
package=treebank)
text = in_obj.get_raw()
in_obj.remove_text_layer()
doc = nlp(text)
id_to_tokenid = create_text_layer(doc, in_obj)
else:
# Use existing tokenization
added_text_layer = False
nlp = stanza.Pipeline(lang='nl',
tokenize_pretokenized=True,
processors='tokenize,pos,lemma,depparse',
package=treebank)
sent_tokens_ixa = [(token.get_sent(), token.get_text())
for token in in_obj.get_tokens()]
text = [[t for s2, t in toks]
for s, toks in groupby(sent_tokens_ixa, itemgetter(0))]
# TODO: is this correct??? can we make it more elegant?
id_to_tokenid = {int(k):
{i+1: t.get_id() for i, t in enumerate(g)}
for k, g in
groupby(in_obj.get_tokens(), lambda t: t.get_sent())}
doc = nlp(text)
    # Check that we don't get multi-word tokens
if any([len(sent.tokens) != len(sent.words)
for sent in doc.sentences]):
        raise Exception('stanza returns MultiWordTokens. '
'This is not allowed for Dutch.')
term_id_mapping = create_term_layer(doc, in_obj, id_to_tokenid)
create_dependency_layer(doc, in_obj, term_id_mapping)
in_obj = add_linguistic_processors(in_obj, added_text_layer, treebank)
return in_obj
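# Illustrative usage sketch, not part of the original module: parse a NAF
# file and dump the enriched document. The file name and the 'alpino'
# treebank label are placeholder assumptions; dump() is the standard
# KafNafParserPy serialisation call (stdout by default).
if __name__ == '__main__':
    naf_obj = parse('input.naf', treebank='alpino')
    naf_obj.dump()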
| 34.248804 | 86 | 0.58955 | 937 | 7,158 | 4.225187 | 0.232657 | 0.023996 | 0.025006 | 0.018186 | 0.214953 | 0.168477 | 0.151048 | 0.126295 | 0.092448 | 0.067189 | 0 | 0.00464 | 0.307488 | 7,158 | 208 | 87 | 34.413462 | 0.794029 | 0.066778 | 0 | 0.16875 | 0 | 0 | 0.052813 | 0.008102 | 0 | 0 | 0 | 0.004808 | 0 | 1 | 0.04375 | false | 0 | 0.0625 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78747f79066b20c85ec7068912df47ccb366ae61 | 2,749 | py | Python | micro_center_price_monitor/price_checker.py | Nintendude64/micro-center-price-monitor | 5aee275ef2e6a65d1fd69aa07956225bad7f30ac | [
"MIT"
] | null | null | null | micro_center_price_monitor/price_checker.py | Nintendude64/micro-center-price-monitor | 5aee275ef2e6a65d1fd69aa07956225bad7f30ac | [
"MIT"
] | null | null | null | micro_center_price_monitor/price_checker.py | Nintendude64/micro-center-price-monitor | 5aee275ef2e6a65d1fd69aa07956225bad7f30ac | [
"MIT"
] | null | null | null | from micro_center_price_monitor.scraper import MicroCenterScraper
from micro_center_price_monitor.mail import Email
import datetime, time
class PriceChecker:
"""
PriceChecker:
Manages execution flow for:
-> Retrieving search results list
-> Selected wanted product
-> Monitoring price
-> Sending product email
"""
def search(self):
try:
# Prompt to enter a product name
search_for = input('Enter a product:\n')
# Init scraper obj, passing user input for search term
scraper = MicroCenterScraper(search_term=search_for)
# GET request to retrieve first page results
scraper.search_for_products()
# Print search results
scraper.get_products()
# Prompt to search for one of list items
product_selection = int(input('Select a product:\n'))
# Selects product from list
scraper.select_product(product_selection)
# Prompt to enter expected price at discount
expected_price = float(input('Enter your expected price\n'))
while True:
# update pricing info
scraper.check_product_price()
# Get float value of price attribute
price = float(str(scraper.key_product.price).replace(',',''))
                # currency symbol for output (e.g., "$" for USD)
currency_symbol = scraper.key_product.currency
# Print current time and price
print('Price at: %s -> %s%s' % (datetime.datetime.now(),
currency_symbol,
str(price)))
# Email if the price is beneath expected threshold. Otherwise, continue to loop.
if price <= expected_price:
print('Price at or below %s%.2f at %s%.2f' % (currency_symbol, expected_price, currency_symbol, price))
print('Sending email now...')
email = Email(scraper.key_product.name,
currency_symbol + scraper.key_product.price,
scraper.data.product_url)
email.send_email()
break
# sleep for n seconds
time.sleep(scraper.data.REFRESH_SECS)
except ValueError:
print('Invalid product selection value provided. Please try again later.')
except IndexError:
print('Unable to find any search results. Please try again.')
except Exception as e:
            print('An unexpected error has occurred. %s' % e)
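# Minimal usage sketch (not in the original file): the checker is driven
# entirely by interactive prompts, so it needs no arguments.
if __name__ == '__main__':
    PriceChecker().search()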
| 39.84058 | 123 | 0.559112 | 285 | 2,749 | 5.280702 | 0.414035 | 0.055814 | 0.045183 | 0.026578 | 0.077076 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001152 | 0.368498 | 2,749 | 68 | 124 | 40.426471 | 0.865783 | 0.237177 | 0 | 0 | 0 | 0 | 0.141176 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.085714 | 0 | 0.142857 | 0.171429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78756479e1ce298be3ad0fcb9bbe4e75724a401d | 3,613 | py | Python | yakbak/diff.py | shiroyuki/2019-cfp | 90c20ad01c19ddf17b0bfd1f96b264c715456c01 | [
"BSD-3-Clause"
] | null | null | null | yakbak/diff.py | shiroyuki/2019-cfp | 90c20ad01c19ddf17b0bfd1f96b264c715456c01 | [
"BSD-3-Clause"
] | 6 | 2019-04-27T16:48:33.000Z | 2019-08-06T20:28:23.000Z | yakbak/diff.py | shiroyuki/2019-cfp | 90c20ad01c19ddf17b0bfd1f96b264c715456c01 | [
"BSD-3-Clause"
] | 2 | 2019-08-06T15:23:57.000Z | 2019-08-21T23:16:01.000Z | # Per Google's recommendation [1], this is copied from [2], with
# the line ending match adjusted to find spans of whitespace.
#
# The original [2] is used under the Apache License, Version 2.0:
#
# Diff Match and Patch
# Copyright 2018 The diff-match-patch Authors.
# https://github.com/google/diff-match-patch
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# [1] https://github.com/google/diff-match-patch/wiki/Line-or-Word-Diffs#word-mode
# [2] https://github.com/google/diff-match-patch/blob/858b3812cc02e7d48da4beebb21d4d80dc1d3062/python3/diff_match_patch.py
from typing import Dict, Tuple
import re
def diff_wordsToChars(text1: str, text2: str) -> Tuple[str, str, object]:
"""Split two texts into an array of strings. Reduce the texts to a string
of hashes where each Unicode character represents one line.
Args:
text1: First string.
text2: Second string.
Returns:
Three element tuple, containing the encoded text1, the encoded text2 and
the array of unique strings. The zeroth element of the array of unique
strings is intentionally blank.
"""
lineArray = [] # e.g. lineArray[4] == "Hello\n"
lineHash: Dict[str, int] = {} # e.g. lineHash["Hello\n"] == 4
# "\x00" is a valid character, but various debuggers don't like it.
# So we'll insert a junk entry to avoid generating a null character.
lineArray.append('')
def next_word_end(text: str, start: int) -> int:
"""Find the next word end (any whitespace) after `start`.
"""
pattern = re.compile(r"([^ \t\n]+)[ \t\n]")
match = pattern.search(text, start)
if not match:
return -1
return start + len(match.group(1))
def diff_linesToCharsMunge(text: str) -> str:
"""Split a text into an array of strings. Reduce the texts to a string
of hashes where each Unicode character represents one line.
Modifies linearray and linehash through being a closure.
Args:
text: String to encode.
Returns:
Encoded string.
"""
chars = []
# Walk the text, pulling out a substring for each line.
        # text.split('\n') would temporarily double our memory footprint.
# Modifying text would create many large strings to garbage collect.
lineStart = 0
lineEnd = -1
while lineEnd < len(text) - 1:
lineEnd = next_word_end(text, lineStart)
if lineEnd == -1:
lineEnd = len(text) - 1
line = text[lineStart:lineEnd + 1]
if line in lineHash:
chars.append(chr(lineHash[line]))
else:
if len(lineArray) == maxLines:
# Bail out at 1114111 because chr(1114112) throws.
line = text[lineStart:]
lineEnd = len(text)
lineArray.append(line)
lineHash[line] = len(lineArray) - 1
chars.append(chr(len(lineArray) - 1))
lineStart = lineEnd + 1
return "".join(chars)
# Allocate 2/3rds of the space for text1, the rest for text2.
maxLines = 666666
chars1 = diff_linesToCharsMunge(text1)
maxLines = 1114111
chars2 = diff_linesToCharsMunge(text2)
return (chars1, chars2, lineArray)
# flake8: noqa
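# Illustrative word-mode diff helper following the recipe in [1]; a sketch,
# not part of the original module. Assumes the diff-match-patch package,
# whose diff_main/diff_charsToLines calls pair with diff_wordsToChars above.
def diff_words(text1: str, text2: str):
    from diff_match_patch import diff_match_patch
    dmp = diff_match_patch()
    chars1, chars2, word_array = diff_wordsToChars(text1, text2)
    diffs = dmp.diff_main(chars1, chars2, False)  # char-level diff on hashes
    dmp.diff_charsToLines(diffs, word_array)      # re-expand hashes to words
    return diffs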
| 35.772277 | 122 | 0.681705 | 511 | 3,613 | 4.800391 | 0.41683 | 0.022014 | 0.028536 | 0.02446 | 0.16062 | 0.141867 | 0.141867 | 0.075826 | 0.075826 | 0.075826 | 0 | 0.034752 | 0.219485 | 3,613 | 100 | 123 | 36.13 | 0.835106 | 0.618323 | 0 | 0 | 0 | 0 | 0.014041 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081081 | false | 0 | 0.054054 | 0 | 0.243243 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
787af07ddabb27f12cabf735e06b4adc1ae9725b | 4,498 | py | Python | tukey/api/nova.py | Li-Ko/tukey_portal | 8dc395ef1a1ebaa806d23c88ce51460e6c202921 | [
"Apache-2.0"
] | null | null | null | tukey/api/nova.py | Li-Ko/tukey_portal | 8dc395ef1a1ebaa806d23c88ce51460e6c202921 | [
"Apache-2.0"
] | null | null | null | tukey/api/nova.py | Li-Ko/tukey_portal | 8dc395ef1a1ebaa806d23c88ce51460e6c202921 | [
"Apache-2.0"
] | null | null | null | from collections import Sequence
from django.conf import settings
from openstack_dashboard.api import nova
from openstack_dashboard.api.base import Quota
from openstack_dashboard.api.nova import flavor_list
from openstack_dashboard.api.nova import novaclient
from openstack_dashboard.api.nova import server_list
from openstack_dashboard.api.nova import tenant_floating_ip_list
from openstack_dashboard.api.nova import tenant_quota_get
from horizon.utils.memoized import memoized
from tukey.cloud_attribute import get_cloud
from collections import OrderedDict
from functools import reduce  # required on Python 3; harmless on Python 2.6+
class NovaUsage(nova.NovaUsage):
_attrs = ['start', 'server_usages', 'stop', 'tenant_id',
'total_local_gb_usage', 'total_memory_mb_usage',
'total_vcpus_usage', 'total_hours',
'cloud_cores', 'cloud_du', 'cloud_ram', 'hadoop_jobs',
'hadoop_hdfsdu'] + settings.USAGE_ATTRIBUTES.values()
def get_summary(self):
        # return an OrderedDict so the usage fields keep a stable display order
return OrderedDict([('instances', self.total_active_instances),
('memory_mb', self.memory_mb),
('vcpus', getattr(self, "total_vcpus_usage", 0)),
('vcpu_hours', self.vcpu_hours),
('local_gb', self.local_gb),
('disk_gb_hours', self.disk_gb_hours),
('cloud_cores', getattr(self, "cloud_cores", -1)),
('cloud_du', getattr(self, "cloud_du", -1)),
('hadoop_hdfsdu', getattr(self, "hadoop_hdfsdu", -1)),
('hadoop_jobs', getattr(self, "hadoop_jobs", -1)),
('Cloud Core Hours', getattr(self, "cloud_cores", -1)),
('Cloud Disk Usage (GB)', getattr(self, "cloud_du", -1)),
('Cloud RAM Hours (GB Hours)', getattr(self, "cloud_ram", -1)),
('Hadoop Disk Usage (GB)', getattr(self, "hadoop_hdfsdu", -1)),
('Hadoop Job Hours', getattr(self, "hadoop_jobs", -1))]
+ [(key, getattr(self, value, -1)) for key, value in
settings.USAGE_ATTRIBUTES.items()])
class QuotaSet2(Sequence):
"""
Wrapper for client QuotaSet objects which turns the individual quotas
into Quota objects for easier handling/iteration.
`QuotaSet` objects support a mix of `list` and `dict` methods; you can use
the bracket notiation (`qs["my_quota"] = 0`) to add new quota values, and
use the `get` method to retrieve a specific quota, but otherwise it
behaves much like a list or tuple, particularly in supporting iteration.
"""
def __init__(self, apiresource=None):
self.items = []
if apiresource:
for k, v in apiresource.items():
#for k, v in apiresource._info.items():
if k == 'id':
continue
self[k] = v
def __setitem__(self, k, v):
v = int(v) if v is not None else v
q = Quota(k, v)
self.items.append(q)
def __getitem__(self, index):
return self.items[index]
def __len__(self):
return len(self.items)
def __repr__(self):
return repr(self.items)
def get(self, key, default=None):
match = [quota for quota in self.items if quota.name == key]
return match.pop() if len(match) else Quota(key, default)
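# Illustrative sketch (not from the original module) of the mixed
# list/dict behaviour described in the QuotaSet2 docstring; the quota
# names and the plain-dict input are demonstration assumptions.
def _quotaset2_example():
    qs = QuotaSet2({'instances': 10, 'cores': 20, 'id': 'ignored'})
    qs['ram'] = 512                        # bracket assignment adds a Quota
    cores = qs.get('cores')                # dict-style lookup by name
    names = [quota.name for quota in qs]   # list-style iteration
    return cores, names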
def default_quota_get(request, tenant_id):
return cloud_quota(request, novaclient(request).quotas.defaults(tenant_id))
def tenant_quota_get(request, tenant_id):
return cloud_quota(request, novaclient(request).quotas.get(tenant_id))
def cloud_quota(request, quotas):
cloud = None
if 'cloud' in request.GET:
cloud = request.GET['cloud']
elif 'cloud' in request.POST:
cloud = request.POST['cloud']
if cloud is not None:
quotas = quotas._info[cloud]
del(quotas['cloud'])
else:
# "sum" the quotas!
# The attributes not to sum
ignore = ['cloud', 'id']
if hasattr(quotas, '_info'):
clouds = quotas._info.keys()
if 'cloud' in quotas._info[clouds[0]]:
keys = []
for cloud in clouds:
keys += quotas._info[cloud].keys()
                # sum each quota across clouds; the parentheses around the
                # conditional matter so a missing key contributes 0 instead
                # of resetting the running total
                quotas = {key:
                          reduce(
                              lambda s, c: s + (quotas._info[c][key]
                                                if key in quotas._info[c]
                                                else 0),
                              [0] + clouds)
                          for key in keys if key not in ignore}
return QuotaSet2(quotas)
| 38.118644 | 83 | 0.611827 | 571 | 4,498 | 4.639229 | 0.281961 | 0.045678 | 0.058135 | 0.066063 | 0.223103 | 0.170253 | 0.100793 | 0.08607 | 0.052095 | 0.052095 | 0 | 0.005207 | 0.274122 | 4,498 | 117 | 84 | 38.444444 | 0.806126 | 0.126723 | 0 | 0 | 0 | 0 | 0.130144 | 0.005401 | 0 | 0 | 0 | 0.008547 | 0 | 1 | 0.120482 | false | 0 | 0.144578 | 0.072289 | 0.39759 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
787c3c6f3e40d9b934038220ef7e2d375b00740c | 1,172 | py | Python | uptimer/uptimer.py | sourcepirate/uptimer | 07ec5586cc1f57676073a8b3098f705ca9c843ec | [
"MIT"
] | 1 | 2021-10-10T16:17:00.000Z | 2021-10-10T16:17:00.000Z | uptimer/uptimer.py | sourcepirate/uptimer | 07ec5586cc1f57676073a8b3098f705ca9c843ec | [
"MIT"
] | null | null | null | uptimer/uptimer.py | sourcepirate/uptimer | 07ec5586cc1f57676073a8b3098f705ca9c843ec | [
"MIT"
] | null | null | null | """Main module."""
import logging
from datetime import datetime
from urllib.parse import urlparse
from typing import Any, Dict, List
import requests
def check_domain(domain_url: str) -> Dict[str, Any]:
    # take the timestamp first so the except branch can always reference it
    current_time = datetime.now()
    try:
session = requests.Session()
response = session.get(domain_url)
return {
"healthy": response.ok,
"latency": response.elapsed.microseconds // 1000,
"content_type": response.headers.get("Content-Type"),
"current_time": int(current_time.timestamp()),
"domain_url": domain_url,
"domain": urlparse(domain_url).hostname,
}
except Exception:
return {
"healthy": False,
"latency": 0,
"current_time": int(current_time.timestamp()),
"domain_url": domain_url,
"domain": urlparse(domain_url).hostname,
}
def access_domains(domains: List[str]) -> List[Dict[str, Any]]:
responses = []
for domain_url in domains:
try:
responses.append(check_domain(domain_url))
except Exception:
pass
return responses
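# Minimal usage sketch, not part of the original module; the URL is a
# placeholder.
if __name__ == "__main__":
    for report in access_domains(["https://example.com"]):
        print(report["domain"], "healthy" if report["healthy"] else "down")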
| 27.255814 | 65 | 0.595563 | 123 | 1,172 | 5.520325 | 0.406504 | 0.132548 | 0.088365 | 0.05891 | 0.244477 | 0.244477 | 0.244477 | 0.244477 | 0.244477 | 0.244477 | 0 | 0.006031 | 0.292662 | 1,172 | 42 | 66 | 27.904762 | 0.813028 | 0.010239 | 0 | 0.352941 | 0 | 0 | 0.093588 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0.029412 | 0.147059 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
787c830352c3b6240940e7f3f6a544c89fc5bdf9 | 1,071 | py | Python | tests/python_frontend/arithmetic_conversions_test.py | jnice-81/dace | 5211794a2d17b7189037ac485ab0b292fb02aa0d | [
"BSD-3-Clause"
] | 227 | 2019-03-15T23:39:06.000Z | 2022-03-30T07:49:08.000Z | tests/python_frontend/arithmetic_conversions_test.py | jnice-81/dace | 5211794a2d17b7189037ac485ab0b292fb02aa0d | [
"BSD-3-Clause"
] | 834 | 2019-07-31T22:49:31.000Z | 2022-03-28T14:01:32.000Z | tests/python_frontend/arithmetic_conversions_test.py | jnice-81/dace | 5211794a2d17b7189037ac485ab0b292fb02aa0d | [
"BSD-3-Clause"
] | 64 | 2019-03-19T05:40:37.000Z | 2022-03-11T15:02:42.000Z | # Copyright 2019-2021 ETH Zurich and the DaCe authors. All rights reserved.
import dace
import numpy as np
@dace.program
def add(A: dace.complex64[5, 5], B: dace.float64[5, 5]):
return A + B
def test_add():
A = np.random.randint(0, high=10, size=(5, 5), dtype=np.uint64).astype(np.complex64)
B = np.random.randint(-10, high=0, size=(5, 5), dtype=np.int32).astype(np.float64)
C = add(A, B)
assert(np.linalg.norm(C - A - B) / np.linalg.norm(A + B) < 1e-12)
@dace.program
def complex_conversion(a: dace.complex128[1], b: dace.int32):
return a[0] + b
def test_complex_conversion():
a = np.zeros((1,), dtype=np.complex128)
a[0] = 5 + 6j
b = 7
c = complex_conversion(a=a, b=b)
assert(c[0] == 12 + 6j)
@dace.program
def float_conversion(a: dace.float32, b: dace.int64):
return a + b
def test_float_conversion():
a = np.float32(5.2)
b = np.int64(7)
c = float_conversion(a=a, b=b)
assert(c[0] == a + b)
if __name__ == "__main__":
test_add()
test_complex_conversion()
test_float_conversion()
| 23.282609 | 88 | 0.633053 | 183 | 1,071 | 3.584699 | 0.306011 | 0.02439 | 0.064024 | 0.033537 | 0.152439 | 0.067073 | 0.067073 | 0.067073 | 0 | 0 | 0 | 0.079625 | 0.202614 | 1,071 | 45 | 89 | 23.8 | 0.688525 | 0.068161 | 0 | 0.096774 | 0 | 0 | 0.008032 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 1 | 0.193548 | false | 0 | 0.064516 | 0.096774 | 0.354839 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7880e8c114adeeed696cf6d28a33365f19d2d6f6 | 7,612 | py | Python | Seeder/settings/base.py | WebarchivCZ/Seeder | 1958c5d3f6bdcbbdb2c81dcb6abc7f689125b6a8 | [
"MIT"
] | 8 | 2017-08-16T19:18:57.000Z | 2022-01-24T10:08:19.000Z | Seeder/settings/base.py | WebarchivCZ/Seeder | 1958c5d3f6bdcbbdb2c81dcb6abc7f689125b6a8 | [
"MIT"
] | 242 | 2017-02-03T19:15:52.000Z | 2022-03-25T08:02:52.000Z | Seeder/settings/base.py | WebarchivCZ/Seeder | 1958c5d3f6bdcbbdb2c81dcb6abc7f689125b6a8 | [
"MIT"
] | 2 | 2019-03-06T12:36:29.000Z | 2019-07-08T12:52:20.000Z | """
Django settings for Seeder project.
For more information on this file, see
https://docs.djangoproject.com/en/1.7/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.7/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
import re
from django.utils.translation import ugettext_lazy as _
# Import version to be displayed further
from .version import VERSION, VERSION_DATETIME
# that double dirname is necessary since setting is in folder...
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'postgres',
'USER': 'postgres',
'PASSWORD': 'postgres',
'HOST': 'postgres',
},
'legacy_seeder': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'legacy_seeder',
'USER': 'root',
'PASSWORD': 'legacy'
}
}
ADMINS = (
('Visgean Skeloru', 'visgean@gmail.com'),
('Petr Manas', 'peter@petermanas.com'),
)
IGNORABLE_404_URLS = (
re.compile(r'\.(php|cgi)$'),
re.compile(r'^/phpmyadmin/'),
)
# Application definition
INSTALLED_APPS = (
'raven.contrib.django.raven_compat',
'dal',
'dal_select2',
'modeltranslation',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.humanize',
'django.contrib.postgres',
# 'djangobower', # everything is on cdn
'django_extensions',
'django_tables2',
'django_filters',
'bootstrap3',
'mptt',
'formtools',
'reversion',
'ckeditor',
'ckeditor_uploader',
'debug_toolbar',
'django_crontab',
'sorl.thumbnail',
'rest_framework',
'rest_framework.authtoken',
'captcha',
'ordered_model',
# 'haystack',
# 'elasticstack',
'core',
'publishers',
'source',
'voting',
'comments',
'contracts',
'legacy_db',
'harvests',
'blacklists',
'qa',
'www',
'search_blob',
)
MIDDLEWARE = (
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'debug_toolbar.middleware.DebugToolbarMiddleware',
'reversion.middleware.RevisionMiddleware',
'django.middleware.locale.LocaleMiddleware',
)
SESSION_COOKIE_NAME = 'seeder_sessionid'
# In seconds, 14400 = 4 * 60 * 60 (4 hours)
try:
SESSION_COOKIE_AGE = int(os.environ.get("SESSION_COOKIE_AGE", "14400"))
except (TypeError, ValueError):
SESSION_COOKIE_AGE = 14400
ROOT_URLCONF = 'urls'
WSGI_APPLICATION = 'wsgi.application'
STATICFILES_FINDERS = (
"django.contrib.staticfiles.finders.FileSystemFinder",
"django.contrib.staticfiles.finders.AppDirectoriesFinder",
# 'djangobower.finders.BowerFinder',
)
STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'static'),
)
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': (
"django.contrib.auth.context_processors.auth",
"django.template.context_processors.debug",
"django.template.context_processors.i18n",
"django.template.context_processors.media",
"django.template.context_processors.static",
"django.template.context_processors.tz",
"django.contrib.messages.context_processors.messages",
'django.template.context_processors.request',
'core.context_processors.core_processor',
)
},
},
]
# APP_DIRS = True
#
# TEMPLATES = {
# 'BACKEND': 'django.template.backends.django.DjangoTemplates',
# 'DIRS': TEMPLATE_DIRS,
# 'APP_DIRS': True,
# 'OPTIONS': {
# 'context_processors': TEMPLATE_CONTEXT_PROCESSORS
# }
# }
LANGUAGES = (
('cs', _('Czech')),
('en', _('English')),
)
CALENDAR_LANGUAGES = {
'cs': 'cs-CZ',
'en': 'en-US'
}
MODELTRANSLATION_DEFAULT_LANGUAGE = 'cs'
LOCALE_PATHS = (
os.path.join(BASE_DIR, 'locale'),
)
BOWER_COMPONENTS_ROOT = BASE_DIR
BOWER_INSTALLED_APPS = () # everything is on CDN now
LOGIN_URL = '/seeder/auth/login/'
LOGOUT_URL = '/seeder/auth/logout/'
LOGIN_REDIRECT_URL = '/'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static_root')
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
MESSAGE_STORAGE = 'django.contrib.messages.storage.session.SessionStorage'
CKEDITOR_UPLOAD_PATH = "uploads/"
CKEDITOR_IMAGE_BACKEND = 'pillow'
CKEDITOR_CONFIGS = {
'default': {
'toolbar': 'Custom',
'toolbar_Custom': [
['Bold', 'Italic', 'Underline'],
['NumberedList', 'BulletedList', 'Link'],
],
},
'mini': {
'toolbar': 'Custom',
'toolbar_Custom': [
['Bold', 'Italic', 'Underline'],
['NumberedList', 'BulletedList', 'Link'],
],
'width': 800,
'height': 100,
},
}
DEBUG_TOOLBAR_CONFIG = {
'SHOW_TOOLBAR_CALLBACK': 'core.utils.show_toolbar',
}
CRONJOBS = [
('1 * * * *', 'source.screenshots.take_screenshots'),
('10 * * * *', 'voting.cron.revive_postponed_rounds'),
('20 * * * *', 'contracts.cron.expire_contracts'),
('30 * * * *', 'contracts.cron.send_emails'),
]
# * * * * * command to be executed
# - - - - -
# | | | | |
# | | | | +----- day of week (0 - 6) (Sunday=0)
# | | | +------- month (1 - 12)
# | | +--------- day of month (1 - 31)
# | +----------- hour (0 - 23)
# +------------- min (0 - 59)
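# Example reading: '30 * * * *' above means "at minute 30 of every hour,
# every day", i.e. contracts.cron.send_emails runs hourly at :30.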
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': [
# 'rest_framework.authentication.BasicAuthentication',
'rest_framework.authentication.SessionAuthentication',
'rest_framework.authentication.TokenAuthentication',
],
'DEFAULT_PERMISSION_CLASSES': [
'rest_framework.permissions.IsAuthenticated',
]
}
if DEBUG:
REST_FRAMEWORK['DEFAULT_PERMISSION_CLASSES'] = [
'rest_framework.permissions.AllowAny'
]
WAKAT_URL = 'http://forpsi.kitakitsune.org:8080/?url_id={id}'
WAYBACK_URL = "http://wayback.webarchiv.cz/wayback/query?type=urlquery&url={url}"
SEEDS_EXPORT_DIR = 'seeds'
MANET_URL = '127.0.0.1:8891'
QA_EVERY_N_MONTHS = 24
LEGACY_URL = 'http://intranet.webarchiv.cz/wadmin/tables/resources/view/{pk}'
LEGACY_SCREENSHOT_URL = 'http://www.webarchiv.cz/images/resource/thumb/small_{id}_{date}.jpg'
LEGACY_SCREENSHOT_URL_PNG = 'http://www.webarchiv.cz/images/resource/thumb/small_{id}_{date}.png'
WEBARCHIV_EMAIL = 'webarchiv@nkp.cz'
# RECAPTCHA_PUBLIC_KEY = ''
# RECAPTCHA_PRIVATE_KEY = ''
NOCAPTCHA = True
| 25.373333 | 97 | 0.638203 | 786 | 7,612 | 5.996183 | 0.428753 | 0.044133 | 0.037131 | 0.017823 | 0.168682 | 0.161468 | 0.106726 | 0.106726 | 0.051347 | 0.051347 | 0 | 0.014234 | 0.206253 | 7,612 | 299 | 98 | 25.458194 | 0.765806 | 0.201787 | 0 | 0.066327 | 0 | 0.010204 | 0.510113 | 0.288296 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.010204 | 0.020408 | 0 | 0.020408 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78848f7ee34a7d0fca0d77ac1f78fbe0189d64c4 | 2,437 | py | Python | tools/internal/tizenrt_testresult_collector.py | JoshWorld/RT-OCF | fd41fc4ccd0b3a56e6a2a1bee3e164a559a0fd45 | [
"Apache-2.0"
] | 15 | 2018-03-07T12:53:30.000Z | 2021-07-26T07:08:13.000Z | tools/internal/tizenrt_testresult_collector.py | JoshWorld/RT-OCF | fd41fc4ccd0b3a56e6a2a1bee3e164a559a0fd45 | [
"Apache-2.0"
] | 2 | 2018-01-19T06:38:20.000Z | 2018-04-09T06:34:28.000Z | tools/internal/tizenrt_testresult_collector.py | JoshWorld/RT-OCF | fd41fc4ccd0b3a56e6a2a1bee3e164a559a0fd45 | [
"Apache-2.0"
] | 4 | 2018-01-18T09:53:00.000Z | 2020-08-30T13:09:14.000Z | #!/usr/bin/env python
import glob
import serial
import sys
from internal.common import Result
import time
WIFI_SSID = 'ZEROROOT'
WIFI_PASSWORD = 'zeroroot'
class TestResultCollector:
def __init__(self, usb_device=None):
if usb_device is None:
usb_device = self.get_usb_tty_number()
self.serial = self.create_serial(usb_device)
def get_usb_tty_number(self):
ttyUSBs = glob.glob('/sys/class/tty/ttyUSB*')
if len(ttyUSBs) == 0:
print('TizenRT is not connected')
exit(1)
return '/dev/{}'.format(ttyUSBs[0].split('/')[-1])
def create_serial(self, usb_device):
return serial.Serial(usb_device, 115200, timeout=70)
def collect(self, options=''):
time.sleep(2)
self.write_connecting_wifi_command()
command = 'iot_rt_unittest ' + options + '\n'
self.serial.write(command)
return self.read_serial_output()
def write_connecting_wifi_command(self):
self.serial.write('wifi startsta\n')
time.sleep(2)
self.serial.write('wifi join {} {} wpa2_aes\n'.format(WIFI_SSID, WIFI_PASSWORD))
time.sleep(2)
self.serial.write('ifconfig wl1 dhcp\n')
time.sleep(2)
def read_serial_output(self):
while True:
line = self.serial.readline()
if line == '':
print('Timeout')
return Result(exitcode=1,
message='timeout: Core Dump may occur')
sys.stdout.write(line)
if self.is_test_result(line):
return Result(
exitcode=self.get_test_exitcode(line),
message=line)
if self.is_core_dump(line):
return Result(exitcode=1, message=line)
def get_test_exitcode(self, line):
arr = line.split(' ')
if arr[2] == '0':
return 0
return 1
def is_test_result(self, line):
return 'Tests' in line and 'Failure' in line and 'Ignored' in line
def is_core_dump(self, line):
return '(core dumped)' in line
def test_get_usb_tty_number():
assert '/dev/ttyUSB1' == TestResultCollector().get_usb_tty_number()
def test_create_serial():
assert None != TestResultCollector().create_serial('/dev/ttyUSB1')
def test_is_core_dump():
assert True == TestResultCollector().is_core_dump('Aborted (core dumped)')
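# Illustrative manual run (a sketch, not part of the original tool);
# collect() forwards its options string verbatim to the on-device
# iot_rt_unittest command, so it is left empty here. Result's
# exitcode/message fields mirror how they are constructed above.
if __name__ == '__main__':
    result = TestResultCollector().collect()
    print('exit code: %d, message: %s' % (result.exitcode, result.message))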
| 30.08642 | 88 | 0.608535 | 305 | 2,437 | 4.659016 | 0.295082 | 0.038001 | 0.025334 | 0.042224 | 0.101337 | 0.035186 | 0 | 0 | 0 | 0 | 0 | 0.014748 | 0.27657 | 2,437 | 80 | 89 | 30.4625 | 0.791265 | 0.008207 | 0 | 0.064516 | 0 | 0 | 0.108444 | 0.009106 | 0 | 0 | 0 | 0 | 0.048387 | 1 | 0.193548 | false | 0.032258 | 0.080645 | 0.048387 | 0.451613 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7888949bd7f932e7b291514995f398653ef1a039 | 9,561 | py | Python | tethys_apps/cli/services_commands.py | quyendong/tethys | 99bcb524d5b2021b88d5fa15b7ed6b8acb460997 | [
"BSD-2-Clause"
] | 1 | 2020-10-08T20:38:33.000Z | 2020-10-08T20:38:33.000Z | tethys_apps/cli/services_commands.py | quyendong/tethys | 99bcb524d5b2021b88d5fa15b7ed6b8acb460997 | [
"BSD-2-Clause"
] | 1 | 2018-04-14T19:40:54.000Z | 2018-04-14T19:40:54.000Z | tethys_apps/cli/services_commands.py | quyendong/tethys | 99bcb524d5b2021b88d5fa15b7ed6b8acb460997 | [
"BSD-2-Clause"
] | 1 | 2021-09-07T14:47:11.000Z | 2021-09-07T14:47:11.000Z | from __future__ import print_function
from django.core.exceptions import ObjectDoesNotExist
from django.db.utils import IntegrityError
from django.forms.models import model_to_dict
from .cli_colors import BOLD, pretty_output, FG_RED, FG_GREEN
from .cli_helpers import add_geoserver_rest_to_endpoint
from builtins import input
SERVICES_CREATE = 'create'
SERVICES_CREATE_PERSISTENT = 'persistent'
SERVICES_CREATE_SPATIAL = 'spatial'
SERVICES_LINK = 'link'
SERVICES_LIST = 'list'
class FormatError(Exception):
def __init__(self):
Exception.__init__(self)
def services_create_persistent_command(args):
"""
Interact with Tethys Services (Spatial/Persistent Stores) to create them and/or link them to existing apps
"""
from tethys_services.models import PersistentStoreService
name = None
try:
name = args.name
connection = args.connection
parts = connection.split('@')
cred_parts = parts[0].split(':')
store_username = cred_parts[0]
store_password = cred_parts[1]
url_parts = parts[1].split(':')
host = url_parts[0]
port = url_parts[1]
new_persistent_service = PersistentStoreService(name=name, host=host, port=port,
username=store_username, password=store_password)
new_persistent_service.save()
with pretty_output(FG_GREEN) as p:
p.write('Successfully created new Persistent Store Service!')
except IndexError:
with pretty_output(FG_RED) as p:
p.write('The connection argument (-c) must be of the form "<username>:<password>@<host>:<port>".')
except IntegrityError:
with pretty_output(FG_RED) as p:
p.write('Persistent Store Service with name "{0}" already exists. Command aborted.'.format(name))
def services_remove_persistent_command(args):
from tethys_services.models import PersistentStoreService
persistent_service_id = None
try:
persistent_service_id = args.service_uid
force = args.force
try:
persistent_service_id = int(persistent_service_id)
service = PersistentStoreService.objects.get(pk=persistent_service_id)
except ValueError:
service = PersistentStoreService.objects.get(name=persistent_service_id)
if force:
service.delete()
with pretty_output(FG_GREEN) as p:
p.write('Successfully removed Persistent Store Service {0}!'.format(persistent_service_id))
exit(0)
else:
proceed = input('Are you sure you want to delete this Persistent Store Service? [y/n]: ')
while proceed not in ['y', 'n', 'Y', 'N']:
proceed = input('Please enter either "y" or "n": ')
if proceed in ['y', 'Y']:
service.delete()
with pretty_output(FG_GREEN) as p:
p.write('Successfully removed Persistent Store Service {0}!'.format(persistent_service_id))
exit(0)
else:
with pretty_output(FG_RED) as p:
p.write('Aborted. Persistent Store Service not removed.')
exit(0)
except ObjectDoesNotExist:
with pretty_output(FG_RED) as p:
p.write('A Persistent Store Service with ID/Name "{0}" does not exist.'.format(persistent_service_id))
exit(0)
def services_create_spatial_command(args):
"""
Interact with Tethys Services (Spatial/Persistent Stores) to create them and/or link them to existing apps
"""
from tethys_services.models import SpatialDatasetService
name = None
try:
name = args.name
connection = args.connection
parts = connection.split('@')
cred_parts = parts[0].split(':')
service_username = cred_parts[0]
service_password = cred_parts[1]
endpoint = parts[1]
public_endpoint = args.public_endpoint or ''
apikey = args.apikey or ''
if 'http' not in endpoint or '://' not in endpoint:
raise IndexError()
        if public_endpoint and ('http' not in public_endpoint
                                or '://' not in public_endpoint):
raise FormatError()
endpoint = add_geoserver_rest_to_endpoint(endpoint)
if public_endpoint:
public_endpoint = add_geoserver_rest_to_endpoint(public_endpoint)
new_persistent_service = SpatialDatasetService(name=name, endpoint=endpoint, public_endpoint=public_endpoint,
apikey=apikey, username=service_username,
password=service_password)
new_persistent_service.save()
with pretty_output(FG_GREEN) as p:
p.write('Successfully created new Spatial Dataset Service!')
except IndexError:
with pretty_output(FG_RED) as p:
            p.write('The connection argument (-c) must be of the form '
                    '"<username>:<password>@<protocol>://<host>:<port>".')
except FormatError:
with pretty_output(FG_RED) as p:
            p.write('The public_endpoint argument (-p) must be of the form '
                    '"<protocol>://<host>:<port>".')
except IntegrityError:
with pretty_output(FG_RED) as p:
p.write('Spatial Dataset Service with name "{0}" already exists. Command aborted.'.format(name))
def services_remove_spatial_command(args):
from tethys_services.models import SpatialDatasetService
spatial_service_id = None
try:
spatial_service_id = args.service_uid
force = args.force
try:
spatial_service_id = int(spatial_service_id)
service = SpatialDatasetService.objects.get(pk=spatial_service_id)
except ValueError:
service = SpatialDatasetService.objects.get(name=spatial_service_id)
if force:
service.delete()
with pretty_output(FG_GREEN) as p:
p.write('Successfully removed Spatial Dataset Service {0}!'.format(spatial_service_id))
exit(0)
else:
            proceed = input('Are you sure you want to delete this Spatial Dataset Service? [y/n]: ')
while proceed not in ['y', 'n', 'Y', 'N']:
proceed = input('Please enter either "y" or "n": ')
if proceed in ['y', 'Y']:
service.delete()
with pretty_output(FG_GREEN) as p:
p.write('Successfully removed Spatial Dataset Service {0}!'.format(spatial_service_id))
exit(0)
else:
with pretty_output(FG_RED) as p:
p.write('Aborted. Spatial Dataset Service not removed.')
exit(0)
except ObjectDoesNotExist:
with pretty_output(FG_RED) as p:
p.write('A Spatial Dataset Service with ID/Name "{0}" does not exist.'.format(spatial_service_id))
exit(0)
def services_list_command(args):
"""
Interact with Tethys Services (Spatial/Persistent Stores) to create them and/or link them to existing apps
"""
from tethys_services.models import SpatialDatasetService, PersistentStoreService
list_persistent = False
list_spatial = False
if not args.spatial and not args.persistent:
list_persistent = True
list_spatial = True
elif args.spatial:
list_spatial = True
elif args.persistent:
list_persistent = True
if list_persistent:
persistent_entries = PersistentStoreService.objects.order_by('id').all()
if len(persistent_entries) > 0:
with pretty_output(BOLD) as p:
p.write('\nPersistent Store Services:')
is_first_entry = True
for entry in persistent_entries:
model_dict = model_to_dict(entry)
if is_first_entry:
with pretty_output(BOLD) as p:
p.write('{0: <3}{1: <50}{2: <25}{3: <6}'.format('ID', 'Name', 'Host', 'Port'))
is_first_entry = False
print('{0: <3}{1: <50}{2: <25}{3: <6}'.format(model_dict['id'], model_dict['name'],
model_dict['host'], model_dict['port']))
if list_spatial:
spatial_entries = SpatialDatasetService.objects.order_by('id').all()
if len(spatial_entries) > 0:
with pretty_output(BOLD) as p:
p.write('\nSpatial Dataset Services:')
is_first_entry = True
for entry in spatial_entries:
model_dict = model_to_dict(entry)
if is_first_entry:
with pretty_output(BOLD) as p:
p.write('{0: <3}{1: <50}{2: <50}{3: <50}{4: <30}'.format('ID', 'Name', 'Endpoint',
'Public Endpoint', 'API Key'))
is_first_entry = False
print('{0: <3}{1: <50}{2: <50}{3: <50}{4: <30}'.format(model_dict['id'], model_dict['name'],
model_dict['endpoint'],
model_dict['public_endpoint'],
model_dict['apikey'] if model_dict['apikey']
else "None"))
| 41.751092 | 117 | 0.590315 | 1,082 | 9,561 | 5.021257 | 0.141405 | 0.044174 | 0.055954 | 0.031474 | 0.63372 | 0.594883 | 0.558071 | 0.534143 | 0.521627 | 0.477821 | 0 | 0.011614 | 0.315553 | 9,561 | 228 | 118 | 41.934211 | 0.818612 | 0.033469 | 0 | 0.502703 | 0 | 0.021622 | 0.16248 | 0.012616 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032432 | false | 0.032432 | 0.064865 | 0 | 0.102703 | 0.016216 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
788a11df4d7eb86501d6c98b081b63c1da73fda6 | 6,078 | py | Python | graphAttack/gaUtilities/neuralNetwork.py | jgolebiowski/graphAttack | ec8488444b44d0bd54498bf917ee42d821643ee8 | [
"MIT"
] | 51 | 2017-08-16T13:04:43.000Z | 2022-03-30T09:10:30.000Z | graphAttack/gaUtilities/neuralNetwork.py | jgolebiowski/graphAttack | ec8488444b44d0bd54498bf917ee42d821643ee8 | [
"MIT"
] | null | null | null | graphAttack/gaUtilities/neuralNetwork.py | jgolebiowski/graphAttack | ec8488444b44d0bd54498bf917ee42d821643ee8 | [
"MIT"
] | 12 | 2017-09-27T01:10:02.000Z | 2021-05-05T09:44:56.000Z | """Neural networks utilities"""
import numpy as np
from ..coreDataContainers import Variable
from ..operations.activationOperations import *
from ..operations.costOperations import *
from ..operations.twoInputOperations import *
from ..operations.singleInputOperations import *
from ..operations.convolutionOperation import *
from ..operations.transformationOperations import *
from ..operations.multipleInputOperations import *
from .misc import generateRandomVariable, generateZeroVariable
def addDenseLayer(mainGraph, nOutputNodes,
inputOperation=None,
activation=ReLUActivation,
dropoutRate=0,
batchNormalisation=False):
"""Append a dense layer to the graph
Parameters
----------
mainGraph : ga.Graph
        computation graph to which the dense layer is appended
nOutputNodes : int
Number of output nodes
inputOperation : ga.Operation
operation feeding the data to the layer
activation : ga.SingleInputOperation
        activation operation of choice
dropoutRate : float
dropout rate at the end of this layer
batchNormalisation: bool
Whether to use Batch normalisation
    w : np.array (internal, not an argument)
        weights in shape (nOutputNodes, nFeatures),
        always randomly initialized by this function
    b : np.array (internal, not an argument)
        biases in shape (nOutputNodes, ),
        always randomly initialized by this function
Returns
-------
ga.Operation
Last operation of the dense layer
"""
    if (inputOperation is None):
        inputOperation = mainGraph.operations[-1]
    N, D = inputOperation.shape
w = generateRandomVariable(shape=(nOutputNodes, D),
transpose=True, nInputs=D)
b = generateRandomVariable(shape=nOutputNodes,
transpose=False, nInputs=1)
wo = mainGraph.addOperation(w, doGradient=True)
bo = mainGraph.addOperation(b, doGradient=True)
mmo = mainGraph.addOperation(MatMatmulOperation(inputOperation, wo),
doGradient=False,
finalOperation=False)
addo = mainGraph.addOperation(AddOperation(mmo, bo),
doGradient=False,
finalOperation=False)
if (dropoutRate > 0):
dpo = mainGraph.addOperation(DropoutOperation(addo, dropoutRate),
doGradient=False,
finalOperation=False)
else:
dpo = addo
if (batchNormalisation):
beta = mainGraph.addOperation(generateRandomVariable((1, nOutputNodes)), doGradient=True)
gamma = mainGraph.addOperation(generateRandomVariable((1, nOutputNodes)), doGradient=True)
bnorm = mainGraph.addOperation(BatchNormalisationOperation(dpo, beta, gamma))
else:
bnorm = dpo
acto = mainGraph.addOperation(activation(bnorm),
doGradient=False,
finalOperation=False)
return acto
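# Illustrative sketch (not from the original repository) of stacking two
# dense layers; mainGraph and feedOperation are assumed to be an existing
# ga.Graph and its (N, D) data-feeding operation.
def _example_dense_stack(mainGraph, feedOperation):
    hidden = addDenseLayer(mainGraph, 128, inputOperation=feedOperation,
                           activation=ReLUActivation, dropoutRate=0.1)
    return addDenseLayer(mainGraph, 10, inputOperation=hidden,
                         activation=ReLUActivation)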
def addConv2dLayer(mainGraph,
inputOperation=None,
nFilters=1,
filterHeigth=2,
filterWidth=2,
padding="SAME",
convStride=1,
activation=ReLUActivation,
batchNormalisation=False,
pooling=MaxPoolOperation,
poolHeight=2,
poolWidth=2,
poolStride=2):
"""Append a convolution2D layer with pooling
Parameters
----------
mainGraph : ga.Graph
        computation graph to which the convolution layer is appended
inputOperation : ga.Operation
operation feeding the data to the layer
nFilters : int
        number of filters to be applied in the convolution
    filterHeigth : int
        convolution filter height
filterWidth : int
convolution filter width
padding: "SAME" or "VALID"
padding method for the convolution
convStride : int
stride for the convolution filter
activation : ga.SingleInputOperation
        activation operation of choice
batchNormalisation: bool
Whether to use Batch normalisation
pooling : ga.SingleInputOperation
pooling operation of choice
poolHeight : int
        height of the pooling filter
poolWidth : int
width of the pooling filter
poolStride : int
stride of the pooling operation
Returns
-------
ga.Operation
        Last operation of the convolution layer
"""
N, C, H, W = inputOperation.shape
w = generateRandomVariable(shape=(nFilters, C, filterHeigth, filterWidth),
transpose=False, nInputs=(filterHeigth * filterWidth * C))
b = generateRandomVariable(shape=(1, nFilters, 1, 1), transpose=False, nInputs=1)
filterWop = mainGraph.addOperation(w, doGradient=True, feederOperation=False)
opConv2d = mainGraph.addOperation(Conv2dOperation(
inputOperation, filterWop, stride=convStride, paddingMethod=padding))
filterBop = mainGraph.addOperation(b, doGradient=True, feederOperation=False)
addConv2d = mainGraph.addOperation(AddOperation(opConv2d, filterBop))
if (batchNormalisation):
beta = mainGraph.addOperation(generateRandomVariable((1, *addConv2d.shape[1:])), doGradient=True)
gamma = mainGraph.addOperation(generateRandomVariable((1, *addConv2d.shape[1:])), doGradient=True)
bnorm = mainGraph.addOperation(BatchNormalisationOperation(addConv2d, beta, gamma))
else:
bnorm = addConv2d
actop = mainGraph.addOperation(activation(bnorm),
doGradient=False,
finalOperation=False)
poolOP = mainGraph.addOperation(pooling(inputA=actop,
poolHeight=poolHeight,
poolWidth=poolWidth,
stride=poolStride))
return poolOP
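# Illustrative sketch (not from the original repository) of one conv+pool
# block; feedOperation is assumed to be a 4D (N, C, H, W) operation on an
# existing ga.Graph.
def _example_conv_block(mainGraph, feedOperation):
    return addConv2dLayer(mainGraph, inputOperation=feedOperation,
                          nFilters=8, filterHeigth=3, filterWidth=3,
                          poolHeight=2, poolWidth=2, poolStride=2)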
| 36.614458 | 106 | 0.618954 | 523 | 6,078 | 7.193117 | 0.269598 | 0.100478 | 0.031898 | 0.045189 | 0.345561 | 0.307283 | 0.279107 | 0.167996 | 0.130782 | 0.091972 | 0 | 0.007392 | 0.30997 | 6,078 | 165 | 107 | 36.836364 | 0.889604 | 0.270155 | 0 | 0.231707 | 0 | 0 | 0.000956 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02439 | false | 0 | 0.121951 | 0 | 0.170732 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
788c4b05fc916695f80ef1ea25828672d59aa6d7 | 28,835 | py | Python | unity/MMutils.py | kreimanlab/WhenPigsFlyContext | 4d03bb29f3be3e96c2b9d1945dc08c381abae513 | [
"MIT"
] | 13 | 2021-04-07T15:39:24.000Z | 2022-03-08T19:01:20.000Z | unity/MMutils.py | kreimanlab/WhenPigsFlyContext | 4d03bb29f3be3e96c2b9d1945dc08c381abae513 | [
"MIT"
] | 1 | 2021-11-13T17:18:03.000Z | 2021-12-03T02:05:33.000Z | unity/MMutils.py | kreimanlab/WhenPigsFlyContext | 4d03bb29f3be3e96c2b9d1945dc08c381abae513 | [
"MIT"
] | 1 | 2021-04-18T18:14:51.000Z | 2021-04-18T18:14:51.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sun Nov 1 17:14:58 2020
@author: mengmi
"""
import IPython.display
# cd into virtualhome repo
import sys
sys.path.append('../simulation/')
from unity_simulator.comm_unity import UnityCommunication
import PIL
import numpy as np
from collections import defaultdict
import cv2
import os
import math
import pickle
import random
def display_grid_img(images_old, nrows=1):
images = [x for x in images_old]
h, w, _ = images[0].shape
ncols = int((len(images)+nrows-1)/nrows)
    missing = (ncols - (len(images) % ncols)) % ncols  # no padding when the grid is already full
for m in range(missing):
images.append(np.zeros((h, w, 3)).astype(np.uint8))
img_final = []
for it_r in range(nrows):
init_ind = it_r * ncols
end_ind = init_ind + ncols
images_take = [images[it] for it in range(init_ind, end_ind)]
img_final.append(np.concatenate(images_take, 1))
img_final = np.concatenate(img_final, 0)
img_final = PIL.Image.fromarray(img_final[:,:,::-1])
return img_final
def display_scene_modalities(img_height, img_width,
comm, ids, modalities=['normal', 'seg_class', 'seg_inst', 'depth'], nrows=1):
# Check the number of cameras
_, ncameras = comm.camera_count()
#print(ncameras)
cameras_select = list(range(ncameras))
cameras_select = [cameras_select[x] for x in ids]
imgs_modality = []
for mode_name in modalities:
(ok_img, imgs) = comm.camera_image(cameras_select, mode=mode_name, image_width=img_height, image_height=img_width)
#print(imgs)
if mode_name == 'depth':
#imgs = [((x/np.max(x))*255.).astype(np.uint8) for x in imgs]
imgs = [(x*255.).astype(np.uint8) for x in imgs]
imgs_modality += imgs
img_final = display_grid_img(imgs_modality, nrows=nrows)
return img_final
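# Illustrative usage sketch, not part of the original script; assumes a
# VirtualHome Unity instance is already listening on the default port and
# that comm.reset(0) loads the first apartment.
def _example_capture():
    comm = UnityCommunication()
    comm.reset(0)
    img = display_scene_modalities(640, 512, comm, ids=[0],
                                   modalities=['normal', 'seg_inst'],
                                   nrows=2)
    img.save('camera0_preview.png')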
def find_nodes(graph, **kwargs):
if len(kwargs) == 0:
return None
else:
k, v = next(iter(kwargs.items()))
return [n for n in graph['nodes'] if n[k] == v]
def find_nodes_byclassname(graph, classname):
return [n for n in graph['nodes'] if n['class_name'] == classname]
def find_nodes_byid(graph, idnum):
return [n for n in graph['nodes'] if n['id'] == idnum]
def find_edges(graph, **kwargs):
if len(kwargs) == 0:
return None
else:
k, v = next(iter(kwargs.items()))
return [n for n in graph['edges'] if n[k] == v]
def find_allRooms(graph):
return [n for n in graph['nodes'] if n['category'] == 'Rooms']
def find_rooms(graph, fromnode):
roomnodes = find_allRooms(graph)
if fromnode['category'] != 'Rooms':
for node in roomnodes:
bboxroom = node['bounding_box']
bboxobj = fromnode['bounding_box']
status = checkTwo3DBboxOverlap(bboxobj, bboxroom)
if status:
return node['class_name']
return fromnode['class_name']
def find_rooms_graphedges(graph, fromnode):
while fromnode['category'] != 'Rooms':
objedge = find_edges(graph, from_id = fromnode['id'])[0]
fromnode_id = objedge['to_id']
fromnode = find_nodes_byid(graph, fromnode_id)[0]
return fromnode
def displayAllBbox(img_height, img_width, JasonData, img):
#convert to cv2 image and ready to draw
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
for infor in JasonData.items():
left = infor[1]['bbox'][2]
top = infor[1]['bbox'][0]
right = infor[1]['bbox'][3]
bottom = infor[1]['bbox'][1]
color = (0, 0, 255)
thick = 3
label = infor[1]['class_name'] +', ' + infor[1]['roomtype']
cv2.rectangle(img,(left, top), (right, bottom), color, thick)
cv2.putText(img, label, (left, top - 12), 0, 1e-3 * img_width, color, thick//3)
status = True
return status, img
def displayTargetBbox(img_height, img_width, JasonData, img, targetid, textflag, boxflag):
#convert to cv2 image and ready to draw
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
status = False
for infor in JasonData.items():
if infor[1]['prefab_id'] == targetid:
left = infor[1]['bbox'][2]
top = infor[1]['bbox'][0]
right = infor[1]['bbox'][3]
bottom = infor[1]['bbox'][1]
targetbbox = [left, top, right, bottom]
color = (0, 0, 255)
thick = 3
label = infor[1]['class_name'] +', ' + infor[1]['roomtype']
if boxflag:
cv2.rectangle(img,(left, top), (right, bottom), color, thick)
if textflag:
cv2.putText(img, label, (left, top - 12), 0, 1e-3 * img_width, color, thick//3)
status = True
            targetarea = infor[1]['area']  # fraction of the image covered by the instance
return status, targetarea, targetbbox, img
return status, 0, 0, img
def extractColorInstanceTable(graph, message_color):
ColorInstLookUpTab = {}
for prefab_id in message_color:
prefab_id = int(prefab_id)
#print(type(prefab_id))
objcolor_sm = message_color.get(str(prefab_id)) #color range from [0,1]
#print(objcolor_sm)
objcolor = np.round(np.array(objcolor_sm['Item1'], dtype=np.float32)*255.0).astype(np.uint8) #color range from [0,255]
objcolor = tuple(objcolor)
objnode = find_nodes_byid(graph, prefab_id)[0]
infor = {}
infor['prefab_id'] = prefab_id
infor['prefab_name'] = objnode['prefab_name']
infor['class_name'] = objnode['class_name']
infor['category'] = objnode['category']
roomname = find_rooms(graph, objnode)
infor['roomtype'] = roomname
ColorInstLookUpTab[objcolor] = infor
return ColorInstLookUpTab
def extractJasonInstanceTable(img_inst_pil, img_inst_np, ColorInstLookUpTab):
img_inst_color_tab = defaultdict(int)
for pixel in img_inst_pil.getdata():
img_inst_color_tab[pixel] +=1
[imgw, imgh, imgc] = img_inst_np.shape
#consolidate all objects infor on image and output jasondata for this image
JasonData = {}
for pixel in img_inst_color_tab:
if pixel in ColorInstLookUpTab.keys():
X,Y = np.where(np.all(img_inst_np==np.asarray(pixel),axis=2))
bbox = [min(X), max(X), min(Y), max(Y)]
instinfor = ColorInstLookUpTab.get(pixel)
infor = {}
infor['prefab_id'] = instinfor['prefab_id']
infor['prefab_name'] = instinfor['prefab_name']
infor['class_name'] = instinfor['class_name']
infor['roomtype'] = instinfor['roomtype']
infor['category'] = instinfor['category']
infor['bbox'] = bbox
infor['color'] = pixel
            infor['area'] = img_inst_color_tab.get(pixel)*1.0/(imgw*imgh)  # ratio of instance area over the entire image
JasonData[pixel] = infor
return JasonData
def convertPILImageToNumpyImage(img_all_pil, img_height, img_width):
#img contains modalities=['normal', 'seg_class', 'seg_inst'], nrows=3
#split into three images (normal, seg_class, seg_instance)
img_ori_pil = img_all_pil.crop((0, img_width*0, img_height, img_width*1))
img_class_pil = img_all_pil.crop((0, img_width*1, img_height, img_width*2))
img_inst_pil = img_all_pil.crop((0, img_width*2, img_height, img_width*3))
#convert to numpy array
img_ori_np = np.array(img_ori_pil)
img_class_np = np.array(img_class_pil)
img_inst_np = np.array(img_inst_pil)
return img_ori_pil, img_class_pil, img_inst_pil, img_ori_np, img_class_np, img_inst_np
def IsHighContrast(img_height, img_width, ThresContrast, RatioCroppedContrast, JasonData, img, targetid):
#convert to cv2 image and ready to draw
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
imgY = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)[:,:,0]
status = False
for infor in JasonData.items():
if infor[1]['prefab_id'] == targetid:
left = infor[1]['bbox'][2]
top = infor[1]['bbox'][0]
right = infor[1]['bbox'][3]
bottom = infor[1]['bbox'][1]
#print(infor[1]['bbox'])
width = bottom - top
height = right - left
if int(left-RatioCroppedContrast*height) <0:
left = 0
else:
left = int(left-RatioCroppedContrast*height)
if int(right+RatioCroppedContrast*height) > (img_height-1):
right = img_height - 1
else:
right = int(right+RatioCroppedContrast*height)
if int(top-RatioCroppedContrast*width) <0:
top = 0
else:
top = int(top-RatioCroppedContrast*width)
if int(bottom+RatioCroppedContrast*width) > (img_width-1):
bottom = img_width-1
else:
bottom = int(bottom+RatioCroppedContrast*width)
cropped_imgY = imgY[top:bottom, left:right]
# compute min and max of Y
#print(cropped_imgY.shape)
if cropped_imgY.shape[0] == 0 or cropped_imgY.shape[1] == 0:
return False
Ymin = np.min(cropped_imgY)
Ymax = np.max(cropped_imgY)
#print(Ymin)
#print(Ymax)
# compute contrast
contrast = (Ymax-Ymin)/(Ymax+Ymin)
#print(contrast)
if contrast > ThresContrast:
status = True
return status
def checkCameraImageFitness(JasonData, targetprefabid, ThresRoomArea):
    # two criteria for a good pic:
    # 1. the target object is visible in the pic
    # 2. the camera mostly looks at one room (ThresRoomArea);
    #    criterion 2 is currently disabled (see the commented block below)
statusTarget = False #cond1 flag
statusRoom = False #cond2 flag
#keep track of total areas for each room type
roomarea = defaultdict(float)
for infor in JasonData.items():
roomarea[infor[1]['roomtype']] += infor[1]['area']
if infor[1]['prefab_id'] == targetprefabid:
statusTarget = True
targetroom = infor[1]['roomtype']
if not statusTarget:
#print('Target not in pic')
return False
else:
return True
# otherarea = 0.0
# for roomtype in roomarea:
# if roomtype != targetroom:
# otherarea += roomarea.get(roomtype)
# if otherarea <= ThresRoomArea:
# statusRoom = True
#
# if not statusRoom:
# print('contain too many rooms!')
# return statusTarget & statusRoom
def checkCameraImageBlackSky(img_ori_np, ThresBlackSkyArea):
[imgw, imgh, imgc] = img_ori_np.shape
X,Y = np.where(np.all(img_ori_np==np.asarray([0,0,0]),axis=2))
area = len(X)*1.0/(imgw*imgh)
if area >= ThresBlackSkyArea:
return False
else:
return True
def IsTargetCollision(JasonData, graph, target_id):
targetnode = find_nodes_byid(graph, target_id)[0]
targetbbox = targetnode['bounding_box']
for infor in JasonData.items():
if infor[1]['prefab_id'] == target_id:
continue
elif infor[1]['category'] == 'Rooms':
continue
else:
objbbox = find_nodes_byid(graph, infor[1]['prefab_id'])[0]['bounding_box']
status = checkTwo3DBboxOverlap(targetbbox, objbbox) or checkTwo3DBboxOverlap(objbbox, targetbbox)
if status:
print("collided with: " + infor[1]['prefab_name'] + "; from: " + infor[1]['category'])
return True #collision is happening
return False
def checkTwo3DBboxOverlap(bbox1, bbox2):
    # get the 8 vertices of bbox1
vertexlist = []
for i in [-1,1]:
for j in [-1,1]:
for k in [-1,1]:
point = np.array([bbox1['center'][0]+i*bbox1['size'][0]/2, bbox1['center'][1]+j*bbox1['size'][1]/2, bbox1['center'][2]+k*bbox1['size'][2]/2])
vertexlist.append(point)
    # check whether each vertex of bbox1 lies inside bbox2
for i in range(8):
status = isPointInsideBox(vertexlist[i], bbox2)
if status:
return True
return False
def checkCamCollision(cam_pos, graph):
status = False
for node in graph['nodes']:
if node['category'] == 'Rooms' or node['category'] == 'Walls':
continue
else:
bbox = node['bounding_box']
statusInside = isPointInsideBox(cam_pos,bbox)
if statusInside:
status = True
print(node['prefab_name'])
return status
return status
def isPointInsideBox(point, bbox):
    # get the bounding-box boundaries along each axis
minX = bbox['center'][0] - bbox['size'][0]/2
maxX = bbox['center'][0] + bbox['size'][0]/2
minY = bbox['center'][1] - bbox['size'][1]/2
maxY = bbox['center'][1] + bbox['size'][1]/2
minZ = bbox['center'][2] - bbox['size'][2]/2
maxZ = bbox['center'][2] + bbox['size'][2]/2
return (point[0] >= minX and point[0] <= maxX) and (point[1] >= minY and point[1] <= maxY) and (point[2] >= minZ and point[2] <= maxZ)
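# Tiny worked example (not in the original file): a unit cube centred at
# the origin contains the origin itself but not a point at x = 1.
def _example_point_in_box():
    unit_box = {'center': [0.0, 0.0, 0.0], 'size': [1.0, 1.0, 1.0]}
    assert isPointInsideBox(np.array([0.0, 0.0, 0.0]), unit_box)
    assert not isPointInsideBox(np.array([1.0, 0.0, 0.0]), unit_box)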
def FindOptimalCamTargetConfig_original(targetSz, targetYpos, NumRes):
if targetSz <0.5:
Radius = np.sqrt(2)
elif targetSz <1:
Radius = 1.5*np.sqrt(2)
elif targetSz <2:
Radius = 2.5*np.sqrt(2)
else:
Radius = 4*np.sqrt(2)
if targetYpos > 1.4:
camYStepSz = 0
targetYStepSz = -0.25
elif targetYpos>0.7:
targetYStepSz = 0.25
camYStepSz = 0.5
else:
targetYStepSz = 0.25
camYStepSz = 1
circ = CircleTrajectory(Radius, NumRes)
return circ, camYStepSz, targetYStepSz
def FindOptimalCamTargetConfig_size(targetSz, sizeMult, targetYpos, NumRes):
if targetSz <0.5:
Radius = 2*np.sqrt(2)
elif targetSz <1:
Radius = 1.5*2*np.sqrt(2)
elif targetSz <2:
Radius = 2.5*1.5*np.sqrt(2)
else:
Radius = 4*np.sqrt(2)
if targetYpos > 1.4:
camYStepSz = 0
targetYStepSz = -0.25-0.2
elif targetYpos>0.7:
targetYStepSz = 0.25
camYStepSz = 0.5+0.3
else:
targetYStepSz = 0.25
camYStepSz = 1+0.5
circ = CircleTrajectory(Radius, NumRes)
return circ, camYStepSz, targetYStepSz
#objects in their original place
def FindOptimalCamTargetConfig_gravity(targetSz, targetYpos, NumRes):
if targetSz <0.5:
Radius = np.sqrt(2)
elif targetSz <1:
Radius = 1.5*np.sqrt(2)
elif targetSz <2:
Radius = 2.5*np.sqrt(2)
else:
Radius = 4*np.sqrt(2)
if targetYpos > 1.4:
camYStepSz = 0
targetYStepSz = -0.25
elif targetYpos>0.7:
targetYStepSz = 0.25
camYStepSz = 0.5
else:
targetYStepSz = 0.25
camYStepSz = 1
circ = CircleTrajectory(Radius, NumRes)
return circ, camYStepSz, targetYStepSz
def FindOptimalCamTargetConfig_trained(targetSz, targetYpos, NumRes):
if targetSz <0.5:
Radius = 1*np.sqrt(2)
elif targetSz <1:
Radius = 1.5*np.sqrt(2)
elif targetSz <2:
Radius = 2*np.sqrt(2)
else:
Radius = 2.5*np.sqrt(2)
if targetYpos > 1.4:
pitch = [np.pi/2 + np.pi/9, 7*np.pi/18] #pitch angle in radians [-20, 20]
elif targetYpos>0.7:
pitch = [7*np.pi/18, np.pi/6] #pitch angle in radians [20, 60]
else:
pitch = [np.pi/3, np.pi/9] #pitch angle in radians [30, 70]
circ = SphereTrajectory(Radius, pitch, NumRes)
return circ
def SphereTrajectory(radius, pitch, Res):
    # takes in a radius, pitch angles, and the number of samples per circle;
    # generates a list of (x, y, z) points equally spaced on each circle
circ = list()
for p in pitch:
for j in range(Res):
circ.append( ( radius* np.sin(p) * np.cos(j* 2 * np.pi / Res), radius*np.cos(p), radius* np.sin(p) * np.sin(j* 2 * np.pi / Res) ))
return circ
def FindOptimalCamTargetConfig_trained2(targetSz, targetYpos, NumRes):
    Resolution = 2  # 2-degree angle resolution; randrange needs an integer step
radius = []
pitch = []
yaw = []
for i in range(NumRes):
        RandSzTimes = random.randrange(2,7) # random int from [2,6] inclusive
radius.append(1.0*RandSzTimes*targetSz)
yaw.append( random.randrange(0, int(360/Resolution), Resolution)*Resolution/360 * math.pi*2)
if targetYpos > 1.4:
pitch.append( random.randrange(-int(35/Resolution), int(55/Resolution), Resolution)*Resolution/90 * math.pi/2)
else:
pitch.append( random.randrange(int(10/Resolution), int(90/Resolution), Resolution)*Resolution/90 * math.pi/2)
# print(radius)
# print(pitch)
# print(yaw)
circ = SphereTrajectory2(radius, pitch, yaw)
return circ, radius, pitch, yaw
def FindOptimalCamTargetConfig_trained3(targetSz, targetYpos, NumRes):
    Resolution = 2.0  # 2 deg angle resolution
    radius = []
    pitch = []
    yaw = []
    for i in range(NumRes):
        RandSzTimes = random.randrange(1, 10)  # random int from [1, 9] inclusive
        radius.append(1.0*RandSzTimes*0.5)
        # randrange requires an integer step, so cast Resolution
        yaw.append(random.randrange(0, int(360/Resolution), int(Resolution))*Resolution/360 * math.pi*2)
        if targetYpos > 1.4:
            pitch.append(random.randrange(-int(35/Resolution), int(55/Resolution), int(Resolution))*Resolution/90 * math.pi/2)
        else:
            pitch.append(random.randrange(int(10/Resolution), int(90/Resolution), int(Resolution))*Resolution/90 * math.pi/2)
    # print(radius)
    # print(pitch)
    # print(yaw)
    circ = SphereTrajectory2(radius, pitch, yaw)
    return circ, radius, pitch, yaw
def SphereTrajectory2(radius, pitch, yaw):
    # takes per-sample lists of radius, pitch and yaw (all the same length)
    # generates a list of (x, y, z) points, one per (radius, pitch, yaw) triple
    circ = list()
    for i, R in enumerate(radius):
        p = pitch[i]
        y = yaw[i]
        circ.append((R*np.sin(p)*np.cos(y), R*np.cos(p), R*np.sin(p)*np.sin(y)))
    return circ
def CircleTrajectory(radius, Res):
    # takes in radius and how many uniformly sampled points on the circle
    # generates a list of (x, y) tuples on the circle, equally spaced
    circ = list()
    for j in range(Res):
        circ.append((radius*np.cos(j*2*np.pi/Res), radius*np.sin(j*2*np.pi/Res)))
    return circ
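# Example usage (illustrative sketch):
# CircleTrajectory(1.0, 4)   # -> [(1, 0), (0, 1), (-1, 0), (0, -1)] up to float rounding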
def saveImgList(writedir, writedirjason, imageprefix, imgformat, sort_index, CamMImg, CamMID, TargetInfor, propFirstN, saveJasonflag):
    N = int(propFirstN * len(sort_index))
    for index in sort_index[:N]:
        count_camview = CamMID[index]
        img_inst_target_cv2 = CamMImg[index]
        print(writedir + imageprefix + str(count_camview) + imgformat)
        cv2.imwrite(writedir + imageprefix + str(count_camview) + imgformat, img_inst_target_cv2)
        if saveJasonflag:
            # despite the flag name, the metadata is saved as a pickle, not JSON
            storeinfor = TargetInfor[index]
            # storeinfor_json = json.dumps(storeinfor)
            f = open(writedirjason + imageprefix + str(count_camview) + ".pkl", "wb")
            pickle.dump(storeinfor, f)
            f.close()
def saveImgList_train(writedir, writedirjason, imageprefix, imgformat, sort_index, CamMImg, CamMID, TargetInfor, propFirstN, saveJasonflag):
    N = int(propFirstN * len(sort_index))
    for index in sort_index[:N]:
        count_camview = CamMID[index]
        img_inst_target_cv2 = CamMImg[index]
        print(writedir + imageprefix + str(count_camview) + imgformat)
        img_inst_target_cv2 = cv2.resize(img_inst_target_cv2, (640, 512))
        cv2.imwrite(writedir + imageprefix + str(count_camview) + imgformat, img_inst_target_cv2)
        if saveJasonflag:
            storeinfor = TargetInfor[index]
            # storeinfor_json = json.dumps(storeinfor)
            f = open(writedirjason + imageprefix + str(count_camview) + ".pkl", "wb")
            pickle.dump(storeinfor, f)
            f.close()
def findAllPossibleDestNodes(targetclass, wantedClass, ItemToRoom, SurfaceToRoom, RoomList, SurfaceList, graph):
    destnodesIDs = []
    destPrefabs = []
    destTargetRooms = []
    destSurfaceList = []
    destRooms = []
    for i in np.where(ItemToRoom[wantedClass.index(targetclass)] == 1)[0]:
        destRooms.append(RoomList[i])
    destSurface = []
    for dstR in destRooms:
        for i in np.where(SurfaceToRoom[:, RoomList.index(dstR)] == 1)[0]:
            destSurface.append(SurfaceList[i])
    destSurface = list(set(destSurface))
    for node in graph['nodes']:
        if node['class_name'] not in destSurface:
            continue
        roomIn = find_rooms(graph, node)
        if roomIn not in destRooms:
            # print("warning! " + roomIn + " doesn't belong to any destination room!")
            continue
        destnodesIDs.append(node['id'])
        destPrefabs.append(node['prefab_name'])
        destTargetRooms.append(roomIn)
        destSurfaceList.append(node['class_name'])
    return destnodesIDs, destPrefabs, destTargetRooms, destSurfaceList
def findAllPossibleDestNodes_anomaly(targetclass, wantedClass, ItemToRoom, RoomList, SurfaceList, graph):
    destnodesIDs = []
    destPrefabs = []
    destTargetRooms = []
    destSurfaceList = []
    destRoomNode = []
    destWallNode = []
    destSurface = []
    for i in np.where(ItemToRoom[wantedClass.index(targetclass)] == 1)[0]:
        surfacename = SurfaceList[i]
        if 'floor_' in surfacename:
            # strip the 'floor_' prefix so the bare room name is matched
            surfacename = surfacename[6:]
        destSurface.append(surfacename)
    destSurface = list(set(destSurface))
    # find all wall surfaces and their corresponding room
    # wallnodes = []
    # wallroom = []
    # for node in graph['nodes']:
    #     if node['class_name'] == 'wall':
    #         sz = node['bounding_box']['size']
    #         if all(x > 2 for x in sz):
    #             continue
    #         else:
    #             roomIn = find_rooms_graphedges(graph, node)
    #             wallroom.append(roomIn)
    #             wallnodes.append(node)
    for node in graph['nodes']:
        if node['class_name'] != 'wall':
            if node['class_name'] not in destSurface:
                continue
            if node['class_name'] in RoomList:
                roomIn = node['class_name']
            else:
                roomIn = find_rooms(graph, node)
            destnodesIDs.append(node['id'])
            destPrefabs.append(node['prefab_name'])
            destTargetRooms.append(roomIn)
            destSurfaceList.append(node['class_name'])
            destRoomNode.append(float("nan"))
            destWallNode.append(float("nan"))
        else:
            sz = node['bounding_box']['size']
            if all(x > 2 for x in sz):
                continue
            else:
                roomNode = find_rooms_graphedges(graph, node)
                roomIn = roomNode['class_name']
                destsurf = 'wall_' + roomIn
                if destsurf in destSurface:
                    destnodesIDs.append(node['id'])
                    destPrefabs.append(node['prefab_name'])
                    destTargetRooms.append(roomIn)
                    destSurfaceList.append(node['class_name'])
                    destRoomNode.append(roomNode)
                    destWallNode.append(node)
    return destnodesIDs, destPrefabs, destTargetRooms, destSurfaceList, destRoomNode, destWallNode
def add_node(graph, n):
    graph['nodes'].append(n)
def add_edge(graph, fr_id, rel, to_id):
    graph['edges'].append({'from_id': fr_id, 'relation_type': rel, 'to_id': to_id})
def deleteGraphByClassname(graph, target_classname):
    # print(graph)
    ToDeleteList = find_nodes_byclassname(graph, target_classname)
    # print(ToDeleteList)
    ToDeleteIDList = []
    for i, mc in enumerate(ToDeleteList):
        ToDeleteIDList.append(mc['id'])
        # del mc['obj_transform']
        # del mc['bounding_box']
    flagAll = True
    while flagAll:
        # restart the scan after every deletion so indices stay valid;
        # the for-else sets flagAll = False once no matching node remains
        for i, node in enumerate(graph['nodes']):
            if node['class_name'] == target_classname:
                del graph['nodes'][i]
                flagAll = True
                break
        else:
            flagAll = False
    # print(ToDeleteIDList)
    # for idDelete in ToDeleteIDList:
    graph['edges'] = [edge for edge in graph['edges'] if (edge['from_id'] not in ToDeleteIDList) and (edge['to_id'] not in ToDeleteIDList)]
    return graph
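# Example usage (illustrative sketch; removes every 'microwave' node and any
# edges that reference one):
# graph = deleteGraphByClassname(graph, 'microwave')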
def computeMoveNodeOffset_anomaly(destwallnode, destroomnode, targetnode):
    wallcenter = destwallnode['bounding_box']['center']
    roomcenter = destroomnode['bounding_box']['center']
    if destwallnode['bounding_box']['size'][0] < 2:
        alongaxis = 0
    else:
        alongaxis = 2
    if wallcenter[alongaxis] - roomcenter[alongaxis] > 0:
        axisorient = -1
    else:
        axisorient = 1
    desiredpos = wallcenter.copy()
    desiredpos[alongaxis] = wallcenter[alongaxis] + axisorient*targetnode['bounding_box']['size'][alongaxis]/2
    movenode_offset = desiredpos.copy()
    for dim in range(3):
        movenode_offset[dim] = desiredpos[dim] - targetnode['bounding_box']['center'][dim]
    return movenode_offset
def find_destsurfnode_byclassname(graph, targetnode, destsurf):
    targetid = targetnode['id']
    destsurflist = find_nodes_byclassname(graph, destsurf)
    destsurfidlist = [node['id'] for node in destsurflist]
    targetsurfidlist = [edge['to_id'] for edge in graph['edges'] if edge['from_id'] == targetid and edge['relation_type'] == 'ON']
    surfnode = []
    if len(destsurfidlist) > 0 and len(targetsurfidlist) > 0:
        counter = 0
        for did in destsurfidlist:
            if did in targetsurfidlist:
                surfnode.append(destsurflist[counter])
                break
            counter = counter + 1
    return surfnode
def computePossibleLocationsOnSurf(targetnode, surfnode, scaleStepSz):
    targetSzX = targetnode['bounding_box']['size'][0]
    targetSzZ = targetnode['bounding_box']['size'][2]
    leftBoundSurfX = surfnode['bounding_box']['center'][0] - surfnode['bounding_box']['size'][0]/2 + targetSzX/2
    rightBoundSurfX = surfnode['bounding_box']['center'][0] + surfnode['bounding_box']['size'][0]/2 - targetSzX/2
    leftBoundSurfZ = surfnode['bounding_box']['center'][2] - surfnode['bounding_box']['size'][2]/2 + targetSzZ/2
    rightBoundSurfZ = surfnode['bounding_box']['center'][2] + surfnode['bounding_box']['size'][2]/2 - targetSzZ/2
    x = np.arange(leftBoundSurfX, rightBoundSurfX, scaleStepSz*targetSzX)
    z = np.arange(leftBoundSurfZ, rightBoundSurfZ, scaleStepSz*targetSzZ)
    # x = np.arange(leftBoundSurfX, rightBoundSurfX, 0.1)
    # z = np.arange(leftBoundSurfZ, rightBoundSurfZ, 0.1)
    xpos, zpos = np.meshgrid(x, z)
    xpos = xpos.flatten()
    zpos = zpos.flatten()
    xoffset = xpos - targetnode['bounding_box']['center'][0]
    zoffset = zpos - targetnode['bounding_box']['center'][2]
    return xoffset, zoffset
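# Example usage (illustrative sketch; nodes follow the bounding_box layout above,
# and scaleStepSz=1.0 spaces candidate positions one target-size apart):
# xoff, zoff = computePossibleLocationsOnSurf(targetnode, surfnode, 1.0)
# cx, cz = targetnode['bounding_box']['center'][0], targetnode['bounding_box']['center'][2]
# candidate_centers = [(cx + dx, cz + dz) for dx, dz in zip(xoff, zoff)]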
def segmentTargetBbox(img_height, img_width, JasonData, img, targetid):
    # convert to cv2 image and ready to draw
    # img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    # NOTE: the mask is allocated as (img_width, img_height); callers are assumed
    # to pass dimensions consistent with img's (row, column) shape
    seg = np.zeros((img_width, img_height)).astype('uint8')
    status = False
    for infor in JasonData.items():
        if infor[1]['prefab_id'] == targetid:
            pixel = infor[1]['color']
            X, Y = np.where(np.all(img == np.asarray(pixel), axis=2))
            left = infor[1]['bbox'][2]
            top = infor[1]['bbox'][0]
            right = infor[1]['bbox'][3]
            bottom = infor[1]['bbox'][1]
            targetbbox = [left, top, right, bottom]
            seg[X, Y] = 255
            status = True
            targetarea = infor[1]['area']  # (bottom - top)*(right - left)
            seg = cv2.cvtColor(seg, cv2.COLOR_GRAY2BGR)
            return status, targetarea, targetbbox, seg
    return status, 0, 0, img
| 37.69281 | 157 | 0.594902 | 3,435 | 28,835 | 4.893741 | 0.1377 | 0.012849 | 0.010113 | 0.009102 | 0.466865 | 0.41749 | 0.392683 | 0.365021 | 0.350446 | 0.325164 | 0 | 0.026204 | 0.283995 | 28,835 | 764 | 158 | 37.742147 | 0.787998 | 0.11167 | 0 | 0.455185 | 0 | 0 | 0.053735 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072056 | false | 0 | 0.019332 | 0.005272 | 0.175747 | 0.00703 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
788eb0842dabcf5bdcfbc33b4a3c93db411e4720 | 316 | py | Python | test/testConnectDB.py | dantegg/pythonWeather | ceda06e0fb2fe68695b56f8bf0d206099d8779d9 | [
"MIT"
] | null | null | null | test/testConnectDB.py | dantegg/pythonWeather | ceda06e0fb2fe68695b56f8bf0d206099d8779d9 | [
"MIT"
] | null | null | null | test/testConnectDB.py | dantegg/pythonWeather | ceda06e0fb2fe68695b56f8bf0d206099d8779d9 | [
"MIT"
] | null | null | null | #coding:utf-8
import sys
sys.path.append("..")
from connectDB import connectDB
testDB = connectDB
testWeatherRecord = {
    "collectTime": '2016-10-16',
    "ctemp": '22'
}
testconnection = testDB.connectMongo()
testDB.saveWeather(testWeatherRecord,testconnection)
testDB.printWeather(testconnection) | 18.588235 | 52 | 0.727848 | 31 | 316 | 7.419355 | 0.677419 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040892 | 0.148734 | 316 | 17 | 53 | 18.588235 | 0.814126 | 0.037975 | 0 | 0 | 0 | 0 | 0.098684 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7890861cf35effbd978b1f446fe1133e0f69c410 | 12,683 | py | Python | analysis/analysis/seg_stats.py | asaran/sawyer-demos_human-audio | b9f1d1df152234569a95b525441e2afba43f54bf | [
"MIT"
] | null | null | null | analysis/analysis/seg_stats.py | asaran/sawyer-demos_human-audio | b9f1d1df152234569a95b525441e2afba43f54bf | [
"MIT"
] | null | null | null | analysis/analysis/seg_stats.py | asaran/sawyer-demos_human-audio | b9f1d1df152234569a95b525441e2afba43f54bf | [
"MIT"
] | null | null | null |
# Analyze the ground truth hand-annotated audio features with error presence/segment presence. (Level II)
# [What kinds of mistakes are there? What types of audio are labeled there?]
# TODO: bar plots indicating how frequent the error/segment types are
# TODO: What is the distribution of human audio types for the different error/segment types
# frequency count of different speech types during precision, non-precision, no-seg chunks
# frequency count of different speech types during seg, no-seg chunks
import librosa
import os
import pickle as pkl
import argparse
import csv
class SegmentAnalysis():
    def __init__(self, args):
        self.seg_ann = args.segmentation_annotator
        self.utt_ann = args.utterances_annotator
        self.demo_dir = '../../'
        self.audio_dir = '../../data/demo_audio'
        # self.seg_dir = os.path.join(self.demo_dir, 'annotations/A4')
        self.tasks = ['box', 'cutting']
        self.demo_types = ['video', 'kt']
        self.users = ['user2', 'user3', 'user4', 'user5', 'user6', 'user7', 'user8', 'user9', 'user10',
                      'user11', 'user12', 'user14', 'user15', 'user16', 'user17', 'user18', 'user19', 'user20']
        with open('../../data/seg_' + self.seg_ann + '.pkl', 'rb') as fp:
            self.seg_annot = pkl.load(fp)
        with open('../../data/audio_' + self.utt_ann + '.pkl', 'rb') as fp:
            self.utt_annot = pkl.load(fp)
        self.box_time, self.cutting_time = 0, 0
        self.box_pr_time, self.box_non_pr_time = 0, 0
        self.cutting_pr_time, self.cutting_non_pr_time = 0, 0
        self.box_utt_time, self.cutting_utt_time = 0, 0
        self.box_pr_utt_time, self.box_non_pr_utt_time, self.box_non_seg_utt_time = 0, 0, 0
        self.cutting_pr_utt_time, self.cutting_non_pr_utt_time, self.cutting_non_seg_utt_time = 0, 0, 0
    def get_precision_labels(self):
        # segment lists for precise / non-precise subtasks
        with open('../../data/box_precise.pkl', 'rb') as fp:
            self.box_precise = pkl.load(fp)
        with open('../../data/box_not-precise.pkl', 'rb') as fp:
            self.box_not_precise = pkl.load(fp)
        with open('../../data/cutting_precise.pkl', 'rb') as fp:
            self.cutting_precise = pkl.load(fp)
        with open('../../data/cutting_not-precise.pkl', 'rb') as fp:
            self.cutting_not_precise = pkl.load(fp)
    def get_demo_time(self, demo_id):
        user_id, task, demo_type = demo_id.split('_')
        # total demo time (not just speech time)
        audio_path = os.path.join(self.audio_dir, user_id, task, demo_type, 'env.wav')
        audio, sr = librosa.load(audio_path)
        demo_len = audio.shape[0] / sr
        return demo_len
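    # Helper added for the overlap bookkeeping below: the original per-branch
    # interval logic reduces to a closed-form intersection length (a sketch,
    # assuming utterance and segment times are seconds on the same clock).
    @staticmethod
    def utt_overlap_time(seg_start, seg_stop, utt_start, utt_stop, utt_duration):
        # sum, over all utterances, of the time each one overlaps [seg_start, seg_stop]
        total = 0.0
        for k, l, d in zip(utt_start, utt_stop, utt_duration):
            if d > 0:
                total += max(0.0, min(l, seg_stop) - max(k, seg_start))
        return total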
    def get_stats(self):
        self.get_precision_labels()
        for demo_id in self.seg_annot:
            # What % of a demonstration is not a segment or an error?
            # get total demo time from the wav file, get err/seg duration from the annotated json
            demo_time = self.get_demo_time(demo_id)
            user_id, task, demo_type = demo_id.split('_')
            # utt start and stop times for each annotation
            utt_start = self.utt_annot[demo_id]['start']
            utt_stop = self.utt_annot[demo_id]['stop']
            utt_duration = self.utt_annot[demo_id]['duration']
            if task == 'box':
                self.box_time += demo_time
            if task == 'cutting':
                self.cutting_time += demo_time
            for k, l, d in zip(utt_start, utt_stop, utt_duration):
                assert d == 0 or d == l - k
                if task == 'box':
                    self.box_utt_time += d
                if task == 'cutting':
                    self.cutting_utt_time += d
            # start and stop times for each annotated segment
            seg_start = self.seg_annot[demo_id]['start']
            seg_stop = self.seg_annot[demo_id]['stop']
            segments = self.seg_annot[demo_id]['seg_label']
            for i, j, s in zip(seg_start, seg_stop, segments):
                dur = j - i
                if dur <= 0:
                    continue
                # During such parts of a demonstration, what % of time are people talking?
                # Requires the overlap of the seg annot and the utt annot: how much
                # people talk in precision, non-precision, and no-seg chunks.
                utt_in_seg = self.utt_overlap_time(i, j, utt_start, utt_stop, utt_duration)
                if task == 'box':
                    if s in self.box_precise:
                        self.box_pr_time += dur
                        self.box_pr_utt_time += utt_in_seg
                    elif s in self.box_not_precise:
                        self.box_non_pr_time += dur
                        self.box_non_pr_utt_time += utt_in_seg
                elif task == 'cutting':
                    if s in self.cutting_precise:
                        self.cutting_pr_time += dur
                        self.cutting_pr_utt_time += utt_in_seg
                    elif s in self.cutting_not_precise:
                        self.cutting_non_pr_time += dur
                        self.cutting_non_pr_utt_time += utt_in_seg
        # utt during non-seg parts = total utt time in demo - utt time during segments
        self.box_non_seg_utt_time = self.box_utt_time - self.box_pr_utt_time - self.box_non_pr_utt_time
        self.cutting_non_seg_utt_time = self.cutting_utt_time - self.cutting_pr_utt_time - self.cutting_non_pr_utt_time
    def write_csv(self):
        self.get_stats()
        exp_file = open('seg_stats.csv', mode='w')
        writer = csv.writer(exp_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
        column_labels = ['Box Precision Seg Time', 'Box Non-Precision Seg Time', 'Box Total Demo Time', '',
                         'Cutting Precision Seg Time', 'Cutting Non-Precision Seg Time', 'Cutting Total Demo Time']
        writer.writerow(column_labels)
        column_values = [self.box_pr_time, self.box_non_pr_time, self.box_time, '',
                         self.cutting_pr_time, self.cutting_non_pr_time, self.cutting_time]
        writer.writerow(column_values)
        writer.writerow([])
        column_labels = ['Box Total Utterance Time', 'Box Precision Utterance Time', 'Box Non-Precision Utt Time',
                         'Box Total Seg Utt Time', 'Box Total Non-Seg Utt Time', '', 'Cutting Total Utterance Time',
                         'Cutting Precision Utt Time', 'Cutting Non-Precision Utt Time', 'Cutting Total Seg Utt Time',
                         'Cutting Total Non-Seg Utt Time']
        writer.writerow(column_labels)
        column_values = [self.box_utt_time, self.box_pr_utt_time, self.box_non_pr_utt_time,
                         self.box_pr_utt_time + self.box_non_pr_utt_time, self.box_non_seg_utt_time, '',
                         self.cutting_utt_time, self.cutting_pr_utt_time, self.cutting_non_pr_utt_time,
                         self.cutting_pr_utt_time + self.cutting_non_pr_utt_time, self.cutting_non_seg_utt_time]
        writer.writerow(column_values)
        exp_file.close()
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-s', '--segmentation-annotator', type=str, default='A4')
    parser.add_argument('-u', '--utterances-annotator', type=str, default='A2')
    args = parser.parse_args()
    analysis = SegmentAnalysis(args)
    analysis.write_csv()
if __name__ == '__main__':
main() | 46.457875 | 119 | 0.498305 | 1,588 | 12,683 | 3.775189 | 0.120907 | 0.066555 | 0.04804 | 0.032027 | 0.640867 | 0.567473 | 0.542619 | 0.484404 | 0.468057 | 0.432027 | 0 | 0.008047 | 0.412126 | 12,683 | 273 | 120 | 46.457875 | 0.796003 | 0.184578 | 0 | 0.283871 | 0 | 0 | 0.085487 | 0.018166 | 0 | 0 | 0 | 0.003663 | 0.058065 | 1 | 0.03871 | false | 0 | 0.032258 | 0 | 0.083871 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7890d4f24ed4a643bfc95b0ced319e559f2a1e26 | 4,786 | py | Python | AVSD_Baseline/Feature_Extraction/extract_i3d_rgb_features.py | hudaAlamri/DSTC7-Audio-Visual-Scene-Aware-Dialog-AVSD-Challenge | 6a5ee8542132ad6634ee02896d7c935b8c447d78 | [
"MIT"
] | 51 | 2018-06-04T11:34:58.000Z | 2022-03-09T09:18:08.000Z | AVSD_Baseline/Feature_Extraction/extract_i3d_rgb_features.py | TwentyBN/DSTC7-Audio-Visual-Scene-Aware-Dialog-AVSD-Challenge | 61ea13cd680fc4743ad20e010c6d3047e03b993c | [
"MIT"
] | 4 | 2018-08-17T12:40:34.000Z | 2020-01-09T19:00:56.000Z | AVSD_Baseline/Feature_Extraction/extract_i3d_rgb_features.py | hudaAlamri/DSTC7-Audio-Visual-Scene-Aware-Dialog-AVSD-Challenge | 6a5ee8542132ad6634ee02896d7c935b8c447d78 | [
"MIT"
] | 13 | 2018-06-01T19:50:44.000Z | 2020-12-04T03:37:48.000Z | """I3D feature extraction using a tensorflow model.
Copyright 2018 Mitsubishi Electric Research Labs
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import h5py
import numpy as np
import tensorflow as tf
import time
import os
import scipy.io as sio
import skimage.io
from skimage.transform import rescale, resize, downscale_local_mean
from random import randint
import cv2
import i3d
from i3d import Unit3D
import sonnet as snt
import skvideo.io
import pickle
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--input', default='data/Charades_v1_rgb', type=str,
                    help='Directory that includes image files')
parser.add_argument('--net_output', default='Mixed_5c',
                    type=str, help="layer used as output features")
parser.add_argument('--feature_dim', '-f', default=2048, type=int,
                    help='output feature dimension')
parser.add_argument('--model_path', default='data/i3d_model/data/checkpoints/rgb_imagenet', type=str, help='model path')
parser.add_argument('--stride', default=4, type=int, help='stride of frame features')
parser.add_argument('--output', default='data/Charades/i3d_rgb', type=str,
                    help='output pickle file of feature vectors')
parser.add_argument('--seq_length', default=16, type=int, help='window size of frame features')
args = parser.parse_args()
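# Example invocation (illustrative sketch; paths are the defaults above and assume
# the Charades RGB frames and the I3D RGB checkpoint have been downloaded):
# python extract_i3d_rgb_features.py --input data/Charades_v1_rgb \
#     --model_path data/i3d_model/data/checkpoints/rgb_imagenet --output data/Charades/i3d_rgb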
_IMAGE_SIZE = 224
_NUM_CLASSES = 400
def train():
    print(args.model_path)
    model_path = args.model_path
    pose_net_path = os.path.join(model_path, 'model.ckpt')
    tf.reset_default_graph()
    with tf.variable_scope('RGB'):
        rgb_input = tf.placeholder(tf.float32, [None, args.seq_length, _IMAGE_SIZE, _IMAGE_SIZE, 3])
        rgb_y = tf.placeholder(tf.float32, [None, _NUM_CLASSES])
        lr = tf.placeholder("float")
        drop_out_prob = tf.placeholder("float")
        i3d_model = i3d.InceptionI3d(num_classes=_NUM_CLASSES, final_endpoint='Mixed_5c')
        net, end_points = i3d_model(rgb_input, is_training=False, dropout_keep_prob=drop_out_prob)
    rgb_variable_map = {}
    for variable in tf.global_variables():
        if variable.name.split('/')[0] == 'RGB':
            rgb_variable_map[variable.name.replace(':0', '')] = variable
    tf_config = tf.ConfigProto()
    restorer = tf.train.Saver(var_list=rgb_variable_map, reshape=True)
    with tf.Session(config=tf_config) as sess:
        restorer.restore(sess, pose_net_path)
        lr_s = 0.0001
        drop_out = 1
        save_folder = args.output
        root_folder = args.input
        num_seq = len(os.listdir(root_folder))
        for f1 in os.listdir(root_folder):
            seq = os.listdir(os.path.join(root_folder, f1))
            f_exit = os.listdir(save_folder)
            if f1 not in f_exit:
                os.mkdir(os.path.join(save_folder, f1))
            else:
                if os.listdir(os.path.join(save_folder, f1)) != []:
                    continue
            num_frame = len(seq)
            if num_frame < args.seq_length:
                print("There should be at least", args.seq_length, "frames")
            num_sample = num_frame // args.stride
            features = np.zeros(shape=[num_sample, args.feature_dim])
            for i in range(0, num_sample):
                Start_f = i * args.stride + 1
                input = np.zeros(shape=[1, args.seq_length, _IMAGE_SIZE, _IMAGE_SIZE, 3])
                gth_label = np.zeros(shape=[1, _NUM_CLASSES])
                for j in range(0, args.seq_length):
                    pick_f = Start_f + j
                    if pick_f > num_frame:
                        pick_f = Start_f
                    im = cv2.imread(os.path.join(root_folder, f1, (f1 + '-' + ("%06d" % pick_f) + '.jpg')))
                    im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
                    im = cv2.resize(im, (_IMAGE_SIZE, _IMAGE_SIZE))
                    im = (im - 128) / 128
                    input[:, j, :, :, :] = im
                gth_label[0] = 1
                feed_dict = {
                    rgb_input: input,
                    rgb_y: gth_label,
                    lr: lr_s,
                    drop_out_prob: drop_out
                }
                logits, net_feature = sess.run([net, end_points], feed_dict)
                Mix5c = net_feature[args.net_output]
                feature = Mix5c.mean(axis=(2, 3))
                # use the configured feature dimension (2048 by default) to match the features array
                feature = feature.reshape((1, args.feature_dim))
                features[i, :] = feature
            pickle.dump(features, open(os.path.join(save_folder, f1) + '/feature.pkl', 'wb'), 2)
def main(argv=None):
    train()
if __name__ == '__main__':
    tf.app.run()
| 39.553719 | 120 | 0.602382 | 621 | 4,786 | 4.397746 | 0.320451 | 0.023068 | 0.043574 | 0.019773 | 0.089345 | 0.063713 | 0.023435 | 0.023435 | 0 | 0 | 0 | 0.024569 | 0.285625 | 4,786 | 120 | 121 | 39.883333 | 0.774203 | 0.020059 | 0 | 0 | 0 | 0 | 0.097009 | 0.013889 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019608 | false | 0 | 0.186275 | 0 | 0.205882 | 0.029412 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7891d89406deccb9af158b35a76ea6e08c700edb | 1,037 | py | Python | code/add_country_lat_lon.py | ParhamP/Global-Trade-Network | 106d3e55fba04e72feda2844d092745ce170e55d | [
"BSD-3-Clause"
] | 2 | 2021-08-22T10:02:08.000Z | 2021-11-09T11:30:31.000Z | code/add_country_lat_lon.py | ParhamP/Global-Trade-Network | 106d3e55fba04e72feda2844d092745ce170e55d | [
"BSD-3-Clause"
] | null | null | null | code/add_country_lat_lon.py | ParhamP/Global-Trade-Network | 106d3e55fba04e72feda2844d092745ce170e55d | [
"BSD-3-Clause"
] | null | null | null | import csv
import collections # iterator and counter libraries
with open("../MIT_WT_datafiles/country_names.csv", 'r') as cntry, open("../MIT_WT_datafiles/country_lat_lon_from_google.csv", 'r') as ll, open("../MIT_WT_datafiles/cntry_lat_lon_combined.csv", 'w') as output:
    reader = csv.reader(cntry)  # ,delimiter='\t') ... was a tsv file
    llread = csv.reader(ll)
    writer = csv.writer(output)
    next(reader)
    next(llread)
    writer.writerow(["id", "id_3char", "name", "latitude", "longitude"])
    count = 0
    latlon = dict()
    for row in llread:
        print(row)
        # make a dictionary with country name as key - row[3].
        # casefold makes all letters lowercase.
        latlon[row[3].casefold()] = (row[1], row[2])
    for row in reader:
        if row[2].casefold() in latlon:  # membership test on the dict itself
            # country_count[row[4]] += 1
            writer.writerow([row[0].casefold(), row[1].casefold(), row[2].casefold(), latlon[row[2].casefold()][0], latlon[row[2].casefold()][1]])
        else:
            writer.writerow([row[0].casefold(), row[1].casefold(), row[2].casefold()])
| 35.758621 | 208 | 0.66731 | 157 | 1,037 | 4.305732 | 0.414013 | 0.035503 | 0.088757 | 0.079882 | 0.221893 | 0.147929 | 0.147929 | 0.147929 | 0.147929 | 0.147929 | 0 | 0.021324 | 0.140791 | 1,037 | 28 | 209 | 37.035714 | 0.737374 | 0.175506 | 0 | 0 | 0 | 0 | 0.198113 | 0.158019 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.105263 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
789ccecbc53bbdded880ac33de58a7bbeecb50e7 | 1,062 | py | Python | pinnwand/cli.py | aether-space/pinnwand | 427c8fe68486f2afa0832abbe584595e51848c03 | [
"BSD-3-Clause"
] | null | null | null | pinnwand/cli.py | aether-space/pinnwand | 427c8fe68486f2afa0832abbe584595e51848c03 | [
"BSD-3-Clause"
] | null | null | null | pinnwand/cli.py | aether-space/pinnwand | 427c8fe68486f2afa0832abbe584595e51848c03 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
import sys
from datetime import datetime, timedelta
from pinnwand.models import Base, engine, session, Paste
def main():
    args = sys.argv[1:]
    if args:
        if args[0] == "init_db":
            Base.metadata.create_all(engine)
        if args[0] == "add":
            paste = Paste("<html>hi</html>", lexer="html", expiry=timedelta(seconds=5))
            session.add(paste)
            session.commit()
        if args[0] == "remove":
            paste = session.query(Paste).filter(Paste.id == int(args[1])).first()
            session.delete(paste)
            session.commit()
        if args[0] == "list":
            for paste in session.query(Paste).all():
                print(paste)
        if args[0] == "reap":
            pastes = session.query(Paste).filter(Paste.exp_date < datetime.utcnow()).all()
            for paste in pastes:
                session.delete(paste)
            session.commit()
            print("Reaped {} expired pastes".format(len(pastes)))
if __name__ == "__main__":
main()
| 25.285714 | 90 | 0.548964 | 125 | 1,062 | 4.576 | 0.448 | 0.062937 | 0.061189 | 0.06993 | 0.262238 | 0.087413 | 0 | 0 | 0 | 0 | 0 | 0.01084 | 0.305085 | 1,062 | 41 | 91 | 25.902439 | 0.764228 | 0.018832 | 0 | 0.185185 | 0 | 0 | 0.072046 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.111111 | 0 | 0.148148 | 0.074074 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78a3b061984d25f892de9e2d172a18884e735177 | 1,781 | py | Python | fastquotes/fund/history.py | YangzhenZhao/fastquotes | 1faba9f7fc7801a11359001e08cefa9cfbc41d64 | [
"MIT"
] | 4 | 2020-11-18T11:25:00.000Z | 2021-04-08T01:02:49.000Z | fastquotes/fund/history.py | YangzhenZhao/fastquotes | 1faba9f7fc7801a11359001e08cefa9cfbc41d64 | [
"MIT"
] | null | null | null | fastquotes/fund/history.py | YangzhenZhao/fastquotes | 1faba9f7fc7801a11359001e08cefa9cfbc41d64 | [
"MIT"
] | 1 | 2020-11-18T11:25:01.000Z | 2020-11-18T11:25:01.000Z | import json
from datetime import datetime
from typing import Optional
import requests
from ..const import CUSTOM_HEADER
def get_dividend(msg: str) -> Optional[float]:
    if not msg:
        return None
    left, right = 0, len(msg) - 1
    while not msg[left].isdigit() or not msg[right].isdigit():
        if not msg[left].isdigit():
            left += 1
        if not msg[right].isdigit():
            right -= 1
    return float(msg[left : right + 1])
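# Example (illustrative; assumes the typical eastmoney dividend string format):
# get_dividend("每份派现金0.2150元")  # -> 0.215, the first-digit-to-last-digit span parsed as a float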
def fund_history_data(fund_code: str) -> list:
    url = f"http://fund.eastmoney.com/pingzhongdata/{fund_code}.js"
    text = requests.get(url, headers=CUSTOM_HEADER).text
    # cut out the JSON array assigned to Data_netWorthTrend in the JS payload
    text = text[
        text.find("Data_netWorthTrend") + 21 : text.find("Data_ACWorthTrend") - 15
    ]
    res_list = []
    dividend_sum = 0.0
    growth_rate_factor = 1.0
    for item in json.loads(text):
        dividend = get_dividend(item["unitMoney"])
        unit_nv = item["y"]
        if dividend is not None:
            dividend_sum += dividend
            growth_rate_factor *= (unit_nv + dividend) / unit_nv
        res_list.append(
            {
                "日期": datetime.fromtimestamp(item["x"] // 1000).strftime("%Y%m%d"),  # date
                "单位净值": unit_nv,  # unit net value
                "累计净值": unit_nv + dividend_sum,  # cumulative net value
                "复权净值": unit_nv * growth_rate_factor,  # dividend-adjusted net value
                "日涨幅": item["equityReturn"],  # daily return
                "分红送配": dividend,  # dividend / distribution
            }
        )
    return res_list
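# Example usage (a sketch; "000001" is a placeholder fund code, and the call
# performs a live HTTP request against eastmoney):
# rows = fund_history_data("000001")
# rows[0]["复权净值"]  # dividend-adjusted NAV on the first recorded day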
def fund_history_profit_dict(fund_code: str) -> dict:
    fund_history_list = fund_history_data(fund_code)
    res_dic = {}
    for i in range(1, len(fund_history_list)):
        item = fund_history_list[i]
        last_item = fund_history_list[i - 1]
        res_dic[item["日期"]] = item["复权净值"] / last_item["复权净值"] - 1
    return res_dic
| 30.706897 | 83 | 0.590679 | 233 | 1,781 | 4.309013 | 0.347639 | 0.076693 | 0.059761 | 0.033865 | 0.085657 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015748 | 0.286917 | 1,781 | 57 | 84 | 31.245614 | 0.774803 | 0 | 0 | 0 | 0 | 0 | 0.083661 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061224 | false | 0 | 0.102041 | 0 | 0.244898 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78a897b7f40cac7c3cf59971f4394a4765aa060b | 599 | py | Python | dacy/tests/test_download.py | MalteHB/DaCy | 1c3d348b14368c772d13344d35dc076b01d5bf07 | [
"Apache-2.0"
] | 1 | 2021-07-24T19:14:34.000Z | 2021-07-24T19:14:34.000Z | dacy/tests/test_download.py | MalteHB/DaCy | 1c3d348b14368c772d13344d35dc076b01d5bf07 | [
"Apache-2.0"
] | null | null | null | dacy/tests/test_download.py | MalteHB/DaCy | 1c3d348b14368c772d13344d35dc076b01d5bf07 | [
"Apache-2.0"
] | null | null | null | import urllib
import os
from dacy.download import models_url
from dacy.load import load
def test_urls():
    for m, url in models_url.items():
        print(m)
        req = urllib.request.Request(url, method="HEAD")
        f = urllib.request.urlopen(req)
        assert f.status == 200
        print("\t Status:", f.status)
        size = int(f.headers["Content-Length"]) / 1e6
        assert size > 20
        print("\t File Size:", round(size), "mb")
def test_load():
    models = ["da_dacy_medium_tft-0.0.0"]
    for m in models:
        nlp = load(m)
        nlp("Dette er en test tekst")
| 24.958333 | 56 | 0.599332 | 89 | 599 | 3.955056 | 0.516854 | 0.045455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022831 | 0.268781 | 599 | 23 | 57 | 26.043478 | 0.780822 | 0 | 0 | 0 | 0 | 0 | 0.148581 | 0.040067 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.105263 | false | 0 | 0.210526 | 0 | 0.315789 | 0.157895 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78ab84fbfc0799cfbe7822b26a69af440075f9ad | 19,503 | py | Python | flowsa/USDA_CoA_Cropland.py | ericmbell1/flowsa | d251301864289a4de42dda118c9c6da41bcf4cf0 | [
"CC0-1.0"
] | null | null | null | flowsa/USDA_CoA_Cropland.py | ericmbell1/flowsa | d251301864289a4de42dda118c9c6da41bcf4cf0 | [
"CC0-1.0"
] | null | null | null | flowsa/USDA_CoA_Cropland.py | ericmbell1/flowsa | d251301864289a4de42dda118c9c6da41bcf4cf0 | [
"CC0-1.0"
] | null | null | null | # USDA_CoA_Cropland.py (flowsa)
# !/usr/bin/env python3
# coding=utf-8
import json
import numpy as np
import pandas as pd
from flowsa.common import *
from flowsa.flowbyfunctions import assign_fips_location_system, sector_disaggregation
def CoA_Cropland_URL_helper(build_url, config, args):
    """This helper function uses the "build_url" input from flowbyactivity.py, which is a base url for coa cropland data
    that requires parts of the url text string to be replaced with info specific to the usda nass quickstats API.
    This function does not parse the data, only modifies the urls from which data is obtained. """
    # initiate url list for coa cropland data
    urls = []
    # call on state acronyms from common.py (and remove entry for DC)
    state_abbrevs = abbrev_us_state
    state_abbrevs = {k: v for (k, v) in state_abbrevs.items() if k != "DC"}
    # replace "__aggLevel__" in build_url to create three urls
    for x in config['agg_levels']:
        for y in config['sector_levels']:
            # at national level, remove the text string calling for state acronyms
            if x == 'NATIONAL':
                url = build_url
                url = url.replace("__aggLevel__", x)
                url = url.replace("__secLevel__", y)
                url = url.replace("&state_alpha=__stateAlpha__", "")
                if y == "ECONOMICS":
                    url = url.replace(
                        "AREA HARVESTED&statisticcat_desc=AREA IN PRODUCTION&statisticcat_desc=TOTAL&statisticcat_desc=AREA BEARING %26 NON-BEARING",
                        "AREA&statisticcat_desc=AREA OPERATED")
                else:
                    url = url.replace("&commodity_desc=AG LAND&commodity_desc=FARM OPERATIONS", "")
                url = url.replace(" ", "%20")
                urls.append(url)
            else:
                # substitute in state acronyms for state and county url calls
                for z in state_abbrevs:
                    url = build_url
                    url = url.replace("__aggLevel__", x)
                    url = url.replace("__secLevel__", y)
                    url = url.replace("__stateAlpha__", z)
                    if y == "ECONOMICS":
                        url = url.replace(
                            "AREA HARVESTED&statisticcat_desc=AREA IN PRODUCTION&statisticcat_desc=TOTAL&statisticcat_desc=AREA BEARING %26 NON-BEARING",
                            "AREA&statisticcat_desc=AREA OPERATED")
                    else:
                        url = url.replace("&commodity_desc=AG LAND&commodity_desc=FARM OPERATIONS", "")
                    url = url.replace(" ", "%20")
                    urls.append(url)
    return urls
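# Illustrative sketch of the placeholder substitution performed above
# (the template fragment is hypothetical, not the full quickstats query string):
# template = "...&agg_level_desc=__aggLevel__&state_alpha=__stateAlpha__"
# template.replace("__aggLevel__", "COUNTY").replace("__stateAlpha__", "IA")
# -> "...&agg_level_desc=COUNTY&state_alpha=IA"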
def coa_cropland_call(url, coa_response, args):
    cropland_json = json.loads(coa_response.text)
    df_cropland = pd.DataFrame(data=cropland_json["data"])
    return df_cropland
def coa_cropland_parse(dataframe_list, args):
    """Modify the imported data so it meets the flowbyactivity criteria and only includes data on harvested acreage
    (irrigated and total). """
    df = pd.concat(dataframe_list, sort=False)
    # specify desired data based on domain_desc
    df = df[~df['domain_desc'].isin(['ECONOMIC CLASS', 'FARM SALES', 'IRRIGATION STATUS', 'CONCENTRATION',
                                     'ORGANIC STATUS', 'NAICS CLASSIFICATION', 'PRODUCERS'])]
    df = df[df['statisticcat_desc'].isin(['AREA HARVESTED', 'AREA IN PRODUCTION', 'AREA BEARING & NON-BEARING',
                                          'AREA', 'AREA OPERATED'])]
    # drop rows that subset data into farm sizes (ex. 'area harvested: (1,000 to 1,999 acres)')
    df = df[~df['domaincat_desc'].str.contains(' ACRES')].reset_index(drop=True)
    # drop Descriptions that contain certain phrases, as these data are included in other categories
    df = df[~df['short_desc'].str.contains('FRESH MARKET|PROCESSING|ENTIRE CROP|NONE OF CROP|PART OF CROP')]
    # drop Descriptions that contain certain phrases - only occur in AG LAND data
    df = df[~df['short_desc'].str.contains('INSURANCE|OWNED|RENTED|FAILED|FALLOW|IDLE')].reset_index(drop=True)
    # Many crops are listed as their own commodities as well as grouped within a broader category (for example, orange
    # trees are also part of orchards). As these data are not needed, take up space, and can lead to double counting if
    # included, drop the unused rows
    # subset dataframe into the 5 crop types and land in farms and drop rows
    # crop totals: drop all data
    # field crops: don't want certain commodities and don't want detailed types of wheat, cotton, or sunflower
    df_fc = df[df['group_desc'] == 'FIELD CROPS']
    df_fc = df_fc[~df_fc['commodity_desc'].isin(['GRASSES', 'GRASSES & LEGUMES, OTHER', 'LEGUMES', 'HAY', 'HAYLAGE'])]
    df_fc = df_fc[~df_fc['class_desc'].str.contains('SPRING|WINTER|TRADITIONAL|OIL|PIMA|UPLAND', regex=True)]
    # fruit and tree nuts: only want a few commodities
    df_ftn = df[df['group_desc'] == 'FRUIT & TREE NUTS']
    df_ftn = df_ftn[df_ftn['commodity_desc'].isin(['BERRY TOTALS', 'ORCHARDS'])]
    df_ftn = df_ftn[df_ftn['class_desc'].isin(['ALL CLASSES'])]
    # horticulture: only want a few commodities
    df_h = df[df['group_desc'] == 'HORTICULTURE']
    df_h = df_h[df_h['commodity_desc'].isin(['CUT CHRISTMAS TREES', 'SHORT TERM WOODY CROPS'])]
    # vegetables: only want a few commodities
    df_v = df[df['group_desc'] == 'VEGETABLES']
    df_v = df_v[df_v['commodity_desc'].isin(['VEGETABLE TOTALS'])]
    # only want ag land and farm operations in farms & land & assets
    df_fla = df[df['group_desc'] == 'FARMS & LAND & ASSETS']
    df_fla = df_fla[df_fla['short_desc'].str.contains("AG LAND|FARM OPERATIONS")]
    # drop the irrigated acreage in farms (want the irrigated harvested acres)
    df_fla = df_fla[((df_fla['domaincat_desc'] == 'AREA CROPLAND, HARVESTED:(ANY)') &
                     (df_fla['domain_desc'] == 'AREA CROPLAND, HARVESTED ') &
                     (df_fla['short_desc'] == 'AG LAND, IRRIGATED - ACRES'))]
    # concat data frames
    df = pd.concat([df_fc, df_ftn, df_h, df_v, df_fla], sort=False).reset_index(drop=True)
    # drop unused columns
    df = df.drop(columns=['agg_level_desc', 'location_desc', 'state_alpha', 'sector_desc',
                          'country_code', 'begin_code', 'watershed_code', 'reference_period_desc',
                          'asd_desc', 'county_name', 'source_desc', 'congr_district_code', 'asd_code',
                          'week_ending', 'freq_desc', 'load_time', 'zip_5', 'watershed_desc', 'region_desc',
                          'state_ansi', 'state_name', 'country_name', 'county_ansi', 'end_code', 'group_desc'])
    # create FIPS column by combining existing columns
    df.loc[df['county_code'] == '', 'county_code'] = '000'  # add county fips when missing
    df['Location'] = df['state_fips_code'] + df['county_code']
    df.loc[df['Location'] == '99000', 'Location'] = US_FIPS  # modify national level fips
    # address non-NAICS classification data
    # use info from other columns to determine flow name
    df.loc[:, 'FlowName'] = df['statisticcat_desc'] + ', ' + df['prodn_practice_desc']
    df.loc[:, 'FlowName'] = df['FlowName'].str.replace(", ALL PRODUCTION PRACTICES", "", regex=True)
    df.loc[:, 'FlowName'] = df['FlowName'].str.replace(", IN THE OPEN", "", regex=True)
    # combine column information to create activity information, and create two new columns for activities
    df['Activity'] = df['commodity_desc'] + ', ' + df['class_desc'] + ', ' + df['util_practice_desc']  # drop this column later
    df['Activity'] = df['Activity'].str.replace(", ALL CLASSES", "", regex=True)  # not interested in all data from class_desc
    df['Activity'] = df['Activity'].str.replace(", ALL UTILIZATION PRACTICES", "", regex=True)  # not interested in all data from util_practice_desc
    df['ActivityProducedBy'] = np.where(df["unit_desc"] == 'OPERATIONS', df["Activity"], None)
    df['ActivityConsumedBy'] = np.where(df["unit_desc"] == 'ACRES', df["Activity"], None)
    # rename columns to match flowbyactivity format
    df = df.rename(columns={"Value": "FlowAmount", "unit_desc": "Unit",
                            "year": "Year", "CV (%)": "Spread",
                            "short_desc": "Description"})
    # drop remaining unused columns
    df = df.drop(columns=['Activity', 'class_desc', 'commodity_desc', 'domain_desc', 'state_fips_code', 'county_code',
                          'statisticcat_desc', 'prodn_practice_desc', 'domaincat_desc', 'util_practice_desc'])
    # modify contents of units column
    df.loc[df['Unit'] == 'OPERATIONS', 'Unit'] = 'p'
    # modify contents of flowamount column: "(D)" is suppressed data, "(Z)" means less than half the unit is shown
    df['FlowAmount'] = df['FlowAmount'].str.strip()  # trim whitespace
    df.loc[df['FlowAmount'] == "(D)", 'FlowAmount'] = withdrawn_keyword
    df.loc[df['FlowAmount'] == "(Z)", 'FlowAmount'] = withdrawn_keyword
    df['FlowAmount'] = df['FlowAmount'].str.replace(",", "", regex=True)
    # USDA CoA 2017 states that (H) means CV >= 99.95, therefore replacing with 99.95 so the column can be converted to int
    # (L) is a CV of <= 0.05
    df['Spread'] = df['Spread'].str.strip()  # trim whitespace
    df.loc[df['Spread'] == "(H)", 'Spread'] = 99.95
    df.loc[df['Spread'] == "(L)", 'Spread'] = 0.05
    df.loc[df['Spread'] == "", 'Spread'] = None  # for instances where data is missing
    df.loc[df['Spread'] == "(D)", 'Spread'] = withdrawn_keyword
    # add location system based on year of data
    df = assign_fips_location_system(df, args['year'])
    # Add hardcoded data
    df['Class'] = np.where(df["Unit"] == 'ACRES', "Land", "Other")
    df['SourceName'] = "USDA_CoA_Cropland"
    df['MeasureofSpread'] = "RSD"
    df['DataReliability'] = None
    df['DataCollection'] = 2
    return df
def coa_irrigated_cropland_fba_cleanup(fba):
    """
    When using irrigated cropland, aggregate sectors to cropland and total ag land. Doing this because published values
    for irrigated harvested cropland do not include the water use for vegetables, woody crops, berries.
    :param fba:
    :return:
    """
    fba = fba[~fba['ActivityConsumedBy'].isin(['AG LAND', 'AG LAND, CROPLAND, HARVESTED'])]
    return fba
def disaggregate_coa_cropland_to_6_digit_naics(fba_w_sector, attr):
    """
    Disaggregate usda coa cropland to naics 6
    :param fba_w_sector:
    :param attr:
    :return:
    """
    # use ratios of usda 'land in farms' to determine animal use of pasturelands at 6 digit naics
    fba_w_sector = disaggregate_pastureland(fba_w_sector, attr)
    # use ratios of usda 'harvested cropland' to determine missing 6 digit naics
    fba_w_sector = disaggregate_cropland(fba_w_sector, attr)
    return fba_w_sector
def disaggregate_pastureland(fba_w_sector, attr):
    """
    The USDA CoA Cropland irrigated pastureland data only links to the 3 digit NAICS '112'. This function uses state
    level CoA 'Land in Farms' to allocate the county level acreage data to 6 digit NAICS.
    :param fba_w_sector: The CoA Cropland dataframe after linked to sectors
    :return: The CoA cropland dataframe with disaggregated pastureland data
    """
    import flowsa
    from flowsa.flowbyfunctions import allocate_by_sector, clean_df, flow_by_activity_fields, \
        fba_fill_na_dict
    # subset the coa data so only pastureland
    p = fba_w_sector.loc[fba_w_sector['Sector'] == '112']
    # add temp loc column for state fips
    p.loc[:, 'Location_tmp'] = p['Location'].apply(lambda x: str(x[0:2]))
    # load usda coa cropland naics
    df_f = flowsa.getFlowByActivity(flowclass=['Land'],
                                    years=[attr['allocation_source_year']],
                                    datasource='USDA_CoA_Cropland_NAICS')
    df_f = clean_df(df_f, flow_by_activity_fields, fba_fill_na_dict)
    # subset to land in farms data
    df_f = df_f[df_f['FlowName'] == 'FARM OPERATIONS']
    # subset to rows related to pastureland
    df_f = df_f.loc[df_f['ActivityConsumedBy'].apply(lambda x: str(x[0:3])) == '112']
    # drop rows with "&"
    df_f = df_f[~df_f['ActivityConsumedBy'].str.contains('&')]
    # create sector column
    df_f.loc[:, 'Sector'] = df_f['ActivityConsumedBy']
    # create proportional ratios
    df_f = allocate_by_sector(df_f, 'proportional')
    # drop naics = '11'
    df_f = df_f[df_f['Sector'] != '11']
    # drop 000 in location
    df_f.loc[:, 'Location'] = df_f['Location'].apply(lambda x: str(x[0:2]))
    # merge the coa pastureland data with land in farm data
    df = p.merge(df_f[['Sector', 'Location', 'FlowAmountRatio']], how='left',
                 left_on="Location_tmp", right_on="Location")
    # multiply the flowamount by the flowratio
    df.loc[:, 'FlowAmount'] = df['FlowAmount'] * df['FlowAmountRatio']
    # drop columns and rename
    df = df.drop(columns=['Location_tmp', 'Sector_x', 'Location_y', 'FlowAmountRatio'])
    df = df.rename(columns={"Sector_y": "Sector",
                            "Location_x": 'Location'})
    # drop rows where sector = 112 and then concat with original fba_w_sector
    fba_w_sector = fba_w_sector[fba_w_sector['Sector'].apply(lambda x: str(x[0:3])) != '112'].reset_index(drop=True)
    fba_w_sector = pd.concat([fba_w_sector, df], sort=False).reset_index(drop=True)
    return fba_w_sector
def disaggregate_cropland(fba_w_sector, attr):
    """
    In the event there are 4 (or 5) digit naics for cropland at the county level, use state level harvested cropland to
    create ratios
    :param fba_w_sector:
    :param attr:
    :return:
    """
    import flowsa
    from flowsa.flowbyfunctions import generalize_activity_field_names, sector_aggregation, \
        fbs_default_grouping_fields, clean_df, fba_fill_na_dict, add_missing_flow_by_fields
    from flowsa.mapping import add_sectors_to_flowbyactivity
    # drop pastureland data
    crop = fba_w_sector.loc[fba_w_sector['Sector'].apply(lambda x: str(x[0:3])) != '112'].reset_index(drop=True)
    # drop sectors < 4 digits
    crop = crop[crop['Sector'].apply(lambda x: len(x) > 3)].reset_index(drop=True)
    # create tmp location
    crop.loc[:, 'Location_tmp'] = crop['Location'].apply(lambda x: str(x[0:2]))
    # load the relevant state level harvested cropland by naics
    naics_load = flowsa.getFlowByActivity(flowclass=['Land'],
                                          years=[attr['allocation_source_year']],
                                          datasource="USDA_CoA_Cropland_NAICS").reset_index(drop=True)
    # clean df
    naics = clean_df(naics_load, flow_by_activity_fields, fba_fill_na_dict)
    # subset the harvested cropland by naics
    naics = naics[naics['FlowName'] == 'AG LAND, CROPLAND, HARVESTED'].reset_index(drop=True)
    # add sectors
    naics = add_sectors_to_flowbyactivity(naics, sectorsourcename='NAICS_2012_Code', levelofSectoragg='agg')
    # add missing fbs fields
    naics = add_missing_flow_by_fields(naics, flow_by_sector_fields)
    # aggregate sectors to create any missing naics levels
    naics = sector_aggregation(naics, fbs_default_grouping_fields)
    # add missing naics5/6 when only one naics5/6 associated with a naics4
    naics = sector_disaggregation(naics)
    # drop rows where sector consumed by is none and FlowAmount 0
    naics = naics[naics['SectorConsumedBy'].notnull()]
    naics = naics.loc[naics['FlowAmount'] != 0]
    # create ratios
    naics = sector_ratios(naics)
    # drop sectors < 4 digits
    # naics = naics[naics['SectorConsumedBy'].apply(lambda x: len(x) > 3)].reset_index(drop=True)
    # create temporary sector column to match the two dfs on
    naics.loc[:, 'Location_tmp'] = naics['Location'].apply(lambda x: str(x[0:2]))
    # for loop through naics lengths to determine naics 4 and 5 digits to disaggregate
    for i in range(4, 6):
        # subset df to sectors with length = i and length = i + 1
        crop_subset = crop.loc[crop['Sector'].apply(lambda x: i + 1 >= len(x) >= i)]
        crop_subset.loc[:, 'Sector_tmp'] = crop_subset['Sector'].apply(lambda x: x[0:i])
        # if duplicates drop all rows
        df = crop_subset.drop_duplicates(subset=['Location', 'Sector_tmp'], keep=False).reset_index(drop=True)
        # drop sector temp column
        df = df.drop(columns=["Sector_tmp"])
        # subset df to keep the sectors of length i
        df_subset = df.loc[df['Sector'].apply(lambda x: len(x) == i)]
        # subset the naics df where naics length is i + 1
        naics_subset = naics.loc[naics['SectorConsumedBy'].apply(lambda x: len(x) == i + 1)].reset_index(drop=True)
        naics_subset.loc[:, 'Sector_tmp'] = naics_subset['SectorConsumedBy'].apply(lambda x: x[0:i])
        # merge the two df based on locations
        df_subset = pd.merge(df_subset, naics_subset[['SectorConsumedBy', 'FlowAmountRatio', 'Sector_tmp', 'Location_tmp']],
                             how='left', left_on=['Sector', 'Location_tmp'], right_on=['Sector_tmp', 'Location_tmp'])
        # create flow amounts for the new NAICS based on the flow ratio
        df_subset.loc[:, 'FlowAmount'] = df_subset['FlowAmount'] * df_subset['FlowAmountRatio']
        # drop rows of 0 and na
        df_subset = df_subset[df_subset['FlowAmount'] != 0]
        df_subset = df_subset[~df_subset['FlowAmount'].isna()].reset_index(drop=True)
        # drop columns
        df_subset = df_subset.drop(columns=['Sector', 'FlowAmountRatio', 'Sector_tmp'])
        # rename columns
        df_subset = df_subset.rename(columns={"SectorConsumedBy": "Sector"})
        # add new rows of data to crop df
        crop = pd.concat([crop, df_subset], sort=True).reset_index(drop=True)
    # clean up df
    crop = crop.drop(columns=['Location_tmp'])
    # pasture data
    pasture = fba_w_sector.loc[fba_w_sector['Sector'].apply(lambda x: str(x[0:3])) == '112'].reset_index(drop=True)
    # concat crop and pasture
    fba_w_sector = pd.concat([pasture, crop], sort=True).reset_index(drop=True)
    return fba_w_sector
def sector_ratios(df):
    # find the longest length sector
    length = max(df['SectorConsumedBy'].apply(lambda x: len(x)).unique())
    # loop in reverse order from the longest naics length down to 4,
    # appending a ratio df for each naics level
    ratio_dfs = []
    for i in range(length, 3, -1):
        # subset df to sectors with length = i
        df_subset = df.loc[df['SectorConsumedBy'].apply(lambda x: len(x) == i)]
        # create column for sector grouping
        df_subset.loc[:, 'Sector_group'] = df_subset['SectorConsumedBy'].apply(lambda x: x[0:i - 1])
        # subset df to create denominator
        df_denom = df_subset[['FlowAmount', 'Location', 'Sector_group']]
        df_denom = df_denom.groupby(['Location', 'Sector_group'], as_index=False)[["FlowAmount"]].agg("sum")
        df_denom = df_denom.rename(columns={"FlowAmount": "Denominator"})
        # merge the denominator column with fba_w_sector df
        ratio_df = df_subset.merge(df_denom, how='left')
        # calculate ratio
        ratio_df.loc[:, 'FlowAmountRatio'] = ratio_df['FlowAmount'] / ratio_df['Denominator']
        ratio_df = ratio_df.drop(columns=['Denominator', 'Sector_group']).reset_index()
        ratio_dfs.append(ratio_df)
    # concat list of dataframes
    df_w_ratios = pd.concat(ratio_dfs, sort=True).reset_index(drop=True)
    return df_w_ratios
| 53.57967 | 153 | 0.654463 | 2,651 | 19,503 | 4.64089 | 0.179555 | 0.008778 | 0.021946 | 0.024872 | 0.284727 | 0.252621 | 0.190198 | 0.141266 | 0.126961 | 0.110461 | 0 | 0.009008 | 0.220171 | 19,503 | 363 | 154 | 53.727273 | 0.799921 | 0.290878 | 0 | 0.153846 | 0 | 0.010256 | 0.276331 | 0.038541 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041026 | false | 0 | 0.051282 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78b114541d2883d76b5e6615c0019a1dbcd48b43 | 1,129 | py | Python | src/experiments/models/index_emb_classifier.py | clemens33/thesis | c94e066c2fe22881a7465eb9c3859bd02138748e | [
"MIT"
] | null | null | null | src/experiments/models/index_emb_classifier.py | clemens33/thesis | c94e066c2fe22881a7465eb9c3859bd02138748e | [
"MIT"
] | null | null | null | src/experiments/models/index_emb_classifier.py | clemens33/thesis | c94e066c2fe22881a7465eb9c3859bd02138748e | [
"MIT"
] | null | null | null | import torch
from torch import nn
from tabnet_lightning import TabNetClassifier
class IndexEmbTabNetClassifier(TabNetClassifier):
    """test model implementation using index based embeddings"""

    def __init__(self, **kwargs):
        super(IndexEmbTabNetClassifier, self).__init__(**kwargs)
        self.index_embeddings = nn.Embedding(num_embeddings=kwargs["input_size"], embedding_dim=1)

    def embeddings(self, inputs: torch.Tensor) -> torch.Tensor:
        indices = torch.nonzero(inputs, as_tuple=True)  # gets the indices which are active
        values = self.index_embeddings(indices[-1]).squeeze()
        output = torch.index_put_(inputs, indices, values)
        return output
#
# # test
# if __name__ == "__main__":
# inputs = torch.Tensor([
# [0, 0, 1, 0, 0, 0, 0, 1],
# [1, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 0, 0, 0, 0, 0, 1],
# ])
# e = nn.Embedding(num_embeddings=8, embedding_dim=1)
#
# indices = torch.nonzero(inputs, as_tuple=True)
#
# emb = e(indices[-1]).squeeze()
#
# # indices[..., -1] = emb
#
# inputs = torch.index_put_(inputs, indices, emb)
| 28.225 | 98 | 0.635075 | 142 | 1,129 | 4.838028 | 0.359155 | 0.049491 | 0.061135 | 0.069869 | 0.212518 | 0.125182 | 0.125182 | 0.020378 | 0.020378 | 0.020378 | 0 | 0.034091 | 0.220549 | 1,129 | 39 | 99 | 28.948718 | 0.746591 | 0.426041 | 0 | 0 | 0 | 0 | 0.016051 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.25 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78b1ddaf10061b10db5d28ce24732f514b36c95b | 616 | py | Python | MOD/game_stats.py | divineflatus/MOD | 988299e0e75d8f8fa7893c22ab0db707f02a8f1d | [
"MIT"
] | null | null | null | MOD/game_stats.py | divineflatus/MOD | 988299e0e75d8f8fa7893c22ab0db707f02a8f1d | [
"MIT"
] | null | null | null | MOD/game_stats.py | divineflatus/MOD | 988299e0e75d8f8fa7893c22ab0db707f02a8f1d | [
"MIT"
] | null | null | null | import pygame
# Class to store game statistics
class GameStats():
    def __init__(self, mod_settings):
        # Initialize MOD settings
        self.mod_settings = mod_settings
        # Number of lives available
        self.ninjas_left = self.mod_settings.ninja_limit
        # Starts inactive until 'Play' is clicked
        self.game_active = False
        # Reset statistics
        self.reset_stats()
        self.high_score = 0

    # Resets statistics to appropriate values
    def reset_stats(self):
        self.ninjas_left = self.mod_settings.ninja_limit
        self.score = 0
| 24.64 | 57 | 0.641234 | 73 | 616 | 5.178082 | 0.520548 | 0.174603 | 0.15873 | 0.095238 | 0.206349 | 0.206349 | 0.206349 | 0.206349 | 0 | 0 | 0 | 0.00463 | 0.298701 | 616 | 24 | 58 | 25.666667 | 0.87037 | 0.280844 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78b6dc301c1b043d7640e21cef75902cf0f201a3 | 1,373 | py | Python | tests/grammar/test_sql_file.py | Daniihh/sqlpyparser | aad1d613c02d4f8fa6b833c060a683cf7e194b1c | [
"MIT"
] | 28 | 2016-02-13T10:20:21.000Z | 2022-03-10T02:41:58.000Z | tests/grammar/test_sql_file.py | Daniihh/sqlpyparser | aad1d613c02d4f8fa6b833c060a683cf7e194b1c | [
"MIT"
] | 22 | 2016-02-15T15:55:09.000Z | 2017-09-12T13:49:17.000Z | tests/grammar/test_sql_file.py | Daniihh/sqlpyparser | aad1d613c02d4f8fa6b833c060a683cf7e194b1c | [
"MIT"
] | 16 | 2016-02-15T16:41:23.000Z | 2021-05-18T04:51:52.000Z | # -*- encoding:utf-8 -*-
from __future__ import absolute_import, division, print_function, unicode_literals
import unittest
from mysqlparse.grammar.sql_file import sql_file_syntax
class SqlFileSyntaxTest(unittest.TestCase):

    def test_multiple_statements(self):
        sql_file = sql_file_syntax.parseString("""
            CREATE TABLE test_table1 (
                test_column1 INT(11) PRIMARY KEY AUTO_INCREMENT NOT NULL,
                test_column2 INT(11) NOT NULL
            );
            ALTER TABLE test_table2 ADD col_no0 BIT(8) NOT NULL DEFAULT 0 FIRST,
                ADD col_no1 LONGTEXT NOT NULL,
                ADD col_no2 VARCHAR(200) NULL,
                ADD col_no3 BIT(8) AFTER col0;
            CREATE TABLE test_table3 (
                test_column INT(11) PRIMARY KEY AUTO_INCREMENT NOT NULL
            );
            ALTER TABLE test_table4 ADD col_no0 BIT(8) NOT NULL DEFAULT 0 FIRST,
                ADD col_no1 LONGTEXT NOT NULL,
                ADD col_no2 VARCHAR(200) NULL,
                ADD col_no3 BIT(8) AFTER col0;
        """)

        self.assertEqual(len(sql_file.statements), 4)
        self.assertEqual(sql_file.statements[0].table_name, 'test_table1')
        self.assertEqual(sql_file.statements[1].table_name, 'test_table2')
        self.assertEqual(sql_file.statements[2].table_name, 'test_table3')
        self.assertEqual(sql_file.statements[3].table_name, 'test_table4')
| 35.205128 | 82 | 0.674436 | 186 | 1,373 | 4.736559 | 0.360215 | 0.07151 | 0.096481 | 0.099886 | 0.496027 | 0.31101 | 0.31101 | 0.31101 | 0.231555 | 0.231555 | 0 | 0.04243 | 0.24472 | 1,373 | 38 | 83 | 36.131579 | 0.807136 | 0.016023 | 0 | 0.296296 | 0 | 0 | 0.547072 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 1 | 0.037037 | false | 0 | 0.111111 | 0 | 0.185185 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78b76294cbc37248ab04281f09991a89a57a24b6 | 2,061 | py | Python | barchart2.py | ahealy19/F-IDE-2016 | 82fd4664fc105174cbe2f1a57e2a099fbf3c81d8 | [
"Apache-2.0"
] | 2 | 2017-10-13T09:16:01.000Z | 2018-01-23T04:03:19.000Z | barchart2.py | ahealy19/F-IDE-2016 | 82fd4664fc105174cbe2f1a57e2a099fbf3c81d8 | [
"Apache-2.0"
] | null | null | null | barchart2.py | ahealy19/F-IDE-2016 | 82fd4664fc105174cbe2f1a57e2a099fbf3c81d8 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os

"""
plots the results for each solver and strategy on
the test set as a stacked barchart

Andrew Healy, Aug. 2016
"""

fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 1, 1)

# DataFrame.from_csv was removed from pandas; read_csv with index_col=0 is the equivalent
df = pd.read_csv('data_for_second_barchart.csv', index_col=0)

provers = ['Alt-Ergo-0.95.2', 'Alt-Ergo-1.01', 'CVC3', 'CVC4',
           'veriT', 'Yices', 'Z3-4.3.2', 'Z3-4.4.1',
           'Best', 'Random', 'Worst', 'Where4']
df = df.reindex(columns=provers)
N = len(provers)

# .ix was removed from pandas; .loc performs the same label-based row lookup
valids = list(df.loc['Valid'])
unknown = list(df.loc['Unknown'])
timeout = list(df.loc['Timeout'])
failure = list(df.loc['Failure'])

ind = np.arange(N)  # the x locations for the groups
offset = lambda x: 1 if x > 7 else 0
for i, _ in enumerate(ind):
    ind[i] += offset(i)  # x offset for strategies and Where4
width = 0.35  # the width of the bars

p1 = ax.bar(ind, valids, width, color='1.0')
p2 = ax.bar(ind, unknown, width, color='0.55',
            bottom=valids)
bottom = [unknown[i] + valids[i] for i in range(N)]  # xrange is Python 2 only
p3 = ax.bar(ind, timeout, width, bottom=bottom, color='0.8')
bottom = [bottom[i] + timeout[i] for i in range(N)]
p4 = ax.bar(ind, failure, width, bottom=bottom, color='0.3')

ax.set_ylabel('Number of proof obligations')
ax.set_xticks(ind)
ax.set_xticklabels(provers, rotation=30)
ax.set_yticks(np.arange(0, 263, 50))
ax.legend((p1[0], p2[0], p3[0], p4[0]),
          ('Valid', 'Unknown', 'Timeout', 'Failure'),
          loc='upper center', ncol=4,
          bbox_to_anchor=(0.5, 1.05))

ind = np.arange(N)
for i, v in enumerate(valids):
    plt.annotate(str(v), xy=(ind[i] + width + 0.05 + offset(i), v / 2. - 0.5))
for i, u in enumerate(unknown):
    plt.annotate(str(u), xy=(ind[i] + width + 0.05 + offset(i), valids[i] + u / 2. - 0.5))
for i, t in enumerate(timeout):
    plt.annotate(str(t), xy=(ind[i] + width + 0.05 + offset(i), valids[i] + unknown[i] + t / 2. - 0.5))
for i, f in enumerate(failure):
    plt.annotate(str(f), xy=(ind[i] + width + 0.05 + offset(i), valids[i] + unknown[i] + timeout[i] + f / 2. - 0.5))
plt.savefig(os.path.join('paper','barcharts2.pdf'), bbox_inches='tight') | 28.232877 | 96 | 0.663755 | 381 | 2,061 | 3.55643 | 0.351706 | 0.020664 | 0.023616 | 0.032472 | 0.15941 | 0.109963 | 0.089299 | 0.089299 | 0.073801 | 0.073801 | 0 | 0.053132 | 0.13246 | 2,061 | 73 | 97 | 28.232877 | 0.704698 | 0.042213 | 0 | 0.043478 | 0 | 0 | 0.12898 | 0.015111 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.086957 | 0 | 0.086957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78b8980c397285a3ee6cf5a9943a92060add2e64 | 727 | py | Python | Procedural Paradigm/exercises/week-7-basic/HitungJarak.py | morenzoe/IF1210_Dasar_Pemrograman | 6bfd5300c18bfb9c6ba80f6108e2206aa9cbf015 | [
"BSD-3-Clause"
] | null | null | null | Procedural Paradigm/exercises/week-7-basic/HitungJarak.py | morenzoe/IF1210_Dasar_Pemrograman | 6bfd5300c18bfb9c6ba80f6108e2206aa9cbf015 | [
"BSD-3-Clause"
] | null | null | null | Procedural Paradigm/exercises/week-7-basic/HitungJarak.py | morenzoe/IF1210_Dasar_Pemrograman | 6bfd5300c18bfb9c6ba80f6108e2206aa9cbf015 | [
"BSD-3-Clause"
] | 1 | 2022-02-21T05:03:26.000Z | 2022-02-21T05:03:26.000Z | # Program HitungJarak
# Computes the distance (s) from the speed (v) and the travel time (t), i.e.: s = v * t

# DICTIONARY
# s : float
# v : float
# t : float

# ALGORITHM
v = float(input())  # read the speed in m/s
t = float(input())  # read the travel time in s
s = v * t           # compute the distance in m
print(s)            # display the result

# ALGORITHMIC NOTATION
'''
Program HitungJarak
{Computes the distance (s) from the speed (v) and the travel time (t), i.e.: s = v * t}

Dictionary
    s : real
    v : real
    t : real

ALGORITHM
    input(v)    {read the speed in m/s}
    input(t)    {read the travel time in s}
    s <- v * t  {compute the distance in m}
    output(s)   {display the result}
''' | 21.382353 | 88 | 0.671252 | 106 | 727 | 4.603774 | 0.264151 | 0.122951 | 0.02459 | 0.135246 | 0.659836 | 0.659836 | 0.540984 | 0.540984 | 0.540984 | 0.540984 | 0 | 0 | 0.220083 | 727 | 34 | 89 | 21.382353 | 0.86067 | 0.397524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78b92735f679218e246415e9c15eb48e474ed578 | 4,414 | py | Python | os_xml_automation/text_manipulation/_text_manipulation_mapper.py | osfunapps/os-xml-automation-py | 2e339642fcfa11a9b71c231c652e6e3aa3849354 | [
"MIT"
] | 1 | 2020-10-25T10:30:40.000Z | 2020-10-25T10:30:40.000Z | os_xml_automation/text_manipulation/_text_manipulation_mapper.py | osfunapps/os-xml-automation-py | 2e339642fcfa11a9b71c231c652e6e3aa3849354 | [
"MIT"
] | null | null | null | os_xml_automation/text_manipulation/_text_manipulation_mapper.py | osfunapps/os-xml-automation-py | 2e339642fcfa11a9b71c231c652e6e3aa3849354 | [
"MIT"
] | null | null | null | import os_xml_handler.xml_handler as xh
from os_xml_automation import shared_res as shared_res
from os_xml_automation import shared_tools as shared_tools
from os_xml_automation.text_manipulation import _res as res
# manipulate the files by the text mapper
def manipulate(xml_path, xml, place_holder_map):
    file_nodes = xh.get_all_direct_child_nodes(xh.get_root_node(xml))

    # run on all of the root's direct children
    for file_node in file_nodes:
        # get the <file_src> and <file_dst> nodes paths
        src_file_path = shared_tools.get_file_node_path(xml_path, place_holder_map, file_node, shared_res.NODE_FILE_SRC)
        dst_file_path = shared_tools.get_file_node_path(xml_path, place_holder_map, file_node, shared_res.NODE_FILE_DST, src_file_path)
        texts_node = xh.get_child_nodes(file_node, res.NODE_TEXTS)[0]
        text_nodes = xh.get_child_nodes(texts_node, res.NODE_TEXT)
        for text_node in text_nodes:
            init_text_node_cycle(text_node, place_holder_map, src_file_path, dst_file_path)


# will process a specific text node
def init_text_node_cycle(text_node, place_holder_map, src_file_path, dst_file_path):
    # get the current action and text
    action = str(xh.get_node_att(text_node, shared_res.ACTION))
    original_text = xh.get_text_from_child_node(text_node, shared_res.NODE_ORIGINAL_TEXT)
    cancel_if_already_present = False
    new_text = ''

    # delete range and set in range are special. They will need a special way to be dealt with
    if action == res.NODE_TEXT_ATT_ACTION_VAL_DELETE_RANGE or action == res.NODE_TEXT_ATT_ACTION_VAL_REPLACE_IN_RANGE:
        handle_delete_range(text_node, place_holder_map, src_file_path, dst_file_path)
        if action == res.NODE_TEXT_ATT_ACTION_VAL_DELETE_RANGE:
            return
        else:
            # set in range will change the action to "above line" and set the required text above the bottom boundary
            action = res.NODE_TEXT_ATT_ACTION_VAL_ABOVE
            original_text = xh.get_text_from_child_node(text_node, res.NODE_TO_TEXT)
            original_text = shared_tools.fill_place_holders(original_text, place_holder_map)

    if action != res.NODE_TEXT_ATT_ACTION_VAL_DELETE_LINE:
        new_text_node = xh.get_child_nodes(text_node, shared_res.NODE_NEW_TEXT)[0]
        new_text = xh.get_text_from_node(new_text_node)
        cancel_if_already_present = xh.get_node_att(new_text_node, res.NODE_TEXT_ATT_IF_ALREADY_PRESENT) == res.NODE_TEXT_ATT_IF_ALREADY_PRESENT_VAL_CANCEL

    # replace place holders
    for key, value in place_holder_map.items():
        if key in original_text:
            original_text = original_text.replace(key, value)
        if new_text and key in new_text:
            new_text = new_text.replace(key, value)

    from os_file_stream_handler import file_stream_handler as fsh
    if action == res.NODE_TEXT_ATT_ACTION_VAL_DELETE_LINE:
        fsh.delete_line_in_file(src_file_path, dst_file_path, original_text)
    elif action == res.NODE_TEXT_ATT_ACTION_VAL_REPLACE or action == res.NODE_TEXT_ATT_ACTION_VAL_REPLACE_LINE:
        fsh.replace_text_in_file(src_file_path, dst_file_path, original_text, new_text if new_text else '', action == res.NODE_TEXT_ATT_ACTION_VAL_REPLACE_LINE, cancel_if_already_present)
    elif action == res.NODE_TEXT_ATT_ACTION_VAL_ABOVE:
        fsh.append_text_above_line_in_file(src_file_path, dst_file_path, original_text, new_text, cancel_if_already_present)
    elif action == res.NODE_TEXT_ATT_ACTION_VAL_BELOW:
        fsh.append_text_below_line_in_file(src_file_path, dst_file_path, original_text, new_text, cancel_if_already_present)


# will delete a text range
def handle_delete_range(text_node, place_holder_map, src_file_path, dst_file_path):
    from_text = xh.get_text_from_child_node(text_node, res.NODE_FROM_TEXT)
    to_text = xh.get_text_from_child_node(text_node, res.NODE_TO_TEXT)
    from_text = shared_tools.fill_place_holders(from_text, place_holder_map)
    to_text = shared_tools.fill_place_holders(to_text, place_holder_map)
    include_boundaries = xh.get_node_att(text_node, res.NODE_TEXT_ATT_INCLUDE_BOUNDARIES)
    include_boundaries = not include_boundaries or include_boundaries == 'false'
    from os_file_stream_handler import file_stream_handler as fsh
    # the 'include_bundaries' keyword spelling follows the library's parameter name
    fsh.delete_text_range_in_file(src_file_path, dst_file_path, from_text, to_text, include_bundaries=include_boundaries)
| 55.873418 | 187 | 0.783643 | 730 | 4,414 | 4.250685 | 0.136986 | 0.051885 | 0.053174 | 0.063165 | 0.584273 | 0.547212 | 0.47728 | 0.457944 | 0.419594 | 0.380922 | 0 | 0.000538 | 0.157227 | 4,414 | 78 | 188 | 56.589744 | 0.833602 | 0.097644 | 0 | 0.037736 | 0 | 0 | 0.001258 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056604 | false | 0 | 0.113208 | 0 | 0.188679 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78b964ea1f80a7b69e03614379bba228f287598a | 2,304 | py | Python | src/lib/mine/task/task_manager.py | rdw20170120/workstation | ed19aa930a83885c2a8cb58eb0bb5afe58f95df3 | [
"MIT"
] | null | null | null | src/lib/mine/task/task_manager.py | rdw20170120/workstation | ed19aa930a83885c2a8cb58eb0bb5afe58f95df3 | [
"MIT"
] | 2 | 2021-04-06T18:07:32.000Z | 2021-06-02T01:50:40.000Z | src/lib/mine/task/task_manager.py | rdw20170120/workstation | ed19aa930a83885c2a8cb58eb0bb5afe58f95df3 | [
"MIT"
] | null | null | null | #!/usr/bin/env false
"""Manage tasks."""
# Internal packages (absolute references, distributed with Python)
from logging import getLogger
# External packages (absolute references, NOT distributed with Python)
# Library modules (absolute references, NOT packaged, in project)
from task.exception import Abort
from task.exception import Skip
from task.queue import TaskQueue
from utility.my_logging import log_exception
# Project modules (relative references, NOT packaged, in project)
class TaskManager(object):
    def __init__(self, config, mapping):
        self._log = getLogger(self.__class__.__name__)
        self._config = config
        self._mapping = mapping
        self._q = TaskQueue()
        super().__init__()

    def _add(self, task):
        self._q.put(task)

    def _execute_task(self, the_task):
        try:
            the_task.execute()
        except Abort as e:
            self._log.debug("From %s _execute_task() except Abort", __name__)
            self._log.info(repr(e))
        except KeyboardInterrupt as e:
            self._log.debug(
                "From %s _execute_task() except KeyboardInterrupt", __name__
            )
            self._log.fatal(repr(e))
            raise
        except NotImplementedError as e:
            self._log.debug(
                "From %s _execute_task() except NotImplementedError", __name__
            )
            self._log.debug(repr(e))
        except Skip as e:
            self._log.debug("From %s _execute_task() except Skip", __name__)
            self._log.info(repr(e))
        except BaseException as e:
            self._log.debug(
                "From %s _execute_task() except BaseException", __name__
            )
            if self._config.should_abort_upon_task_failure:
                log_exception(self._log, e)
                raise
            else:
                log_exception(self._log, e, with_traceback=True)

    @property
    def config(self):
        return self._config

    @property
    def mapping(self):
        return self._mapping

    def run(self):
        self._log.info("Running task manager...")
        while not self._q.empty():
            self._execute_task(self._q.get())
            self._log.debug("Queue contains %d tasks", self._q.length)
"""DisabledContent
"""
| 31.135135 | 78 | 0.613715 | 265 | 2,304 | 5.011321 | 0.30566 | 0.073795 | 0.063253 | 0.037651 | 0.253765 | 0.178464 | 0.178464 | 0.139307 | 0.139307 | 0.139307 | 0 | 0 | 0.290799 | 2,304 | 73 | 79 | 31.561644 | 0.81273 | 0.129774 | 0 | 0.166667 | 0 | 0 | 0.131272 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.092593 | 0.037037 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78bc6d363f695604891dee0584bdb38942f09a51 | 2,519 | py | Python | robosuite/scripts/Final_Copy/utils.py | spatric5/robosuite | 9e6b9691eb949fbf33a23fbe8a8c6faea61c50b6 | [
"MIT"
] | null | null | null | robosuite/scripts/Final_Copy/utils.py | spatric5/robosuite | 9e6b9691eb949fbf33a23fbe8a8c6faea61c50b6 | [
"MIT"
] | null | null | null | robosuite/scripts/Final_Copy/utils.py | spatric5/robosuite | 9e6b9691eb949fbf33a23fbe8a8c6faea61c50b6 | [
"MIT"
] | null | null | null | from mpi4py import MPI
import numpy as np
import torch
# sync_networks across the different cores
def sync_networks(network):
    """
    network is the network you want to sync
    """
    comm = MPI.COMM_WORLD
    flat_params, params_shape = _get_flat_params(network)
    comm.Bcast(flat_params, root=0)
    # set the flat params back to the network
    _set_flat_params(network, params_shape, flat_params)


# get the flat params from the network
def _get_flat_params(network):
    param_shape = {}
    flat_params = None
    for key_name, value in network.named_parameters():
        param_shape[key_name] = value.detach().numpy().shape
        if flat_params is None:
            flat_params = value.detach().numpy().flatten()
        else:
            flat_params = np.append(flat_params, value.detach().numpy().flatten())
    return flat_params, param_shape


# set the params from the network
def _set_flat_params(network, params_shape, params):
    pointer = 0
    for key_name, values in network.named_parameters():
        # get the length of the parameters
        len_param = np.prod(params_shape[key_name])
        copy_params = params[pointer:pointer + len_param].reshape(params_shape[key_name])
        copy_params = torch.tensor(copy_params)
        # copy the params
        values.data.copy_(copy_params.data)
        # update the pointer
        pointer += len_param


# sync the gradients across the networks
def sync_grads(network):
    flat_grads, grads_shape = _get_flat_grads(network)
    comm = MPI.COMM_WORLD
    global_grads = np.zeros_like(flat_grads)
    comm.Allreduce(flat_grads, global_grads, op=MPI.SUM)
    _set_flat_grads(network, grads_shape, global_grads)


def _set_flat_grads(network, grads_shape, flat_grads):
    pointer = 0
    for key_name, value in network.named_parameters():
        len_grads = np.prod(grads_shape[key_name])
        copy_grads = flat_grads[pointer:pointer + len_grads].reshape(grads_shape[key_name])
        copy_grads = torch.tensor(copy_grads)
        # copy the grads
        value.grad.data.copy_(copy_grads.data)
        pointer += len_grads


def _get_flat_grads(network):
    grads_shape = {}
    flat_grads = None
    for key_name, value in network.named_parameters():
        grads_shape[key_name] = value.grad.data.cpu().numpy().shape
        if flat_grads is None:
            flat_grads = value.grad.data.cpu().numpy().flatten()
        else:
            flat_grads = np.append(flat_grads, value.grad.data.cpu().numpy().flatten())
    return flat_grads, grads_shape
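

# A minimal usage sketch (added; not part of the original module), showing the
# intended call order under e.g. `mpiexec -n 4 python train.py`. `nn.Linear`
# here just stands in for any torch module:
#
#   import torch
#   import torch.nn as nn
#   net = nn.Linear(4, 2)
#   sync_networks(net)                  # every rank starts from rank 0's weights
#   loss = net(torch.ones(1, 4)).sum()
#   loss.backward()
#   sync_grads(net)                     # gradients are summed across ranks via Allreduce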
| 35.985714 | 91 | 0.692735 | 354 | 2,519 | 4.649718 | 0.189266 | 0.09113 | 0.043742 | 0.058323 | 0.382746 | 0.326245 | 0.163426 | 0.120899 | 0.052248 | 0 | 0 | 0.002017 | 0.212783 | 2,519 | 69 | 92 | 36.507246 | 0.828038 | 0.115522 | 0 | 0.18 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0 | 0.06 | 0 | 0.22 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78bd42c19113c497e1993e92b673e02423c0f0b9 | 9,143 | py | Python | src/apd/aggregation/cli.py | MatthewWilkes/apd.aggregation | 427fa908f45332d623295f92e1ccfdaf545d6997 | [
"BSD-3-Clause"
] | null | null | null | src/apd/aggregation/cli.py | MatthewWilkes/apd.aggregation | 427fa908f45332d623295f92e1ccfdaf545d6997 | [
"BSD-3-Clause"
] | 11 | 2020-11-23T21:36:48.000Z | 2022-03-12T00:48:58.000Z | src/apd/aggregation/cli.py | MatthewWilkes/apd.aggregation | 427fa908f45332d623295f92e1ccfdaf545d6997 | [
"BSD-3-Clause"
] | 1 | 2020-08-09T01:47:59.000Z | 2020-08-09T01:47:59.000Z | import asyncio
import functools
import importlib.util
import logging
import signal
import sys
import typing as t
import uuid
import aiohttp
import click
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from . import collect
from .actions.runner import DataProcessor
from .actions.source import get_data_ongoing, refeed_queue_var
from .database import Deployment, deployment_table
from .query import with_database
logger = logging.getLogger(__name__)
@click.command()
@click.argument("server", nargs=-1)
@click.option(
    "--db",
    metavar="<CONNECTION_STRING>",
    default="postgresql+psycopg2://localhost/apd",
    help="The connection string to a PostgreSQL database",
    envvar="APD_DB_URI",
)
@click.option("--api-key", metavar="<KEY>", envvar="APD_API_KEY")
@click.option("-v", "--verbose", is_flag=True, help="Enables verbose mode")
def collect_sensor_data(
    db: str, server: t.Tuple[str], api_key: str, verbose: bool
) -> None:
    """This loads data from one or more sensors into the specified database.

    Only PostgreSQL databases are supported, as the column definitions use
    multiple pg specific features. The database must already exist and be
    populated with the required tables.

    The --api-key option is used to specify the access token for the sensors
    being queried.

    You may specify any number of servers, the variable should be the full URL
    to the sensor's HTTP interface, not including the /v/2.0 portion. Multiple
    URLs should be separated with a space.
    """
    success = True
    try:
        collect.standalone(db, server, api_key, echo=verbose)
    except ValueError as e:
        click.secho(str(e), err=True, fg="red")
        success = False
    if not success:
        sys.exit(1)


def load_handler_config(path: str) -> t.List[DataProcessor]:
    # Create a module called user_config backed by the file specified, and load it
    # This uses Python's import internals to fake a module in a known location
    # Based on an SO answer by Sebastian Rittau and sample code from Brett Cannon
    module_spec = importlib.util.spec_from_file_location("user_config", path)
    module = importlib.util.module_from_spec(module_spec)
    loader = module_spec.loader
    if isinstance(loader, importlib.abc.Loader):
        loader.exec_module(module)
        try:
            return module.handlers  # type: ignore
        except AttributeError as err:
            raise ValueError(f"Could not load config file from {path}") from err
    else:
        # No valid loader could be found
        raise ValueError(f"Could not load config file from {path}")
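

# An illustrative config file for load_handler_config (added sketch; the
# DataProcessor arguments are assumptions, not documented defaults). The file
# only has to expose a module-level `handlers` list:
#
#   # user_config.py
#   from apd.aggregation.actions.runner import DataProcessor
#
#   handlers = [
#       DataProcessor(...),  # one entry per long-running action processor
#   ]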
def actually_exit(sig, frame):
    click.secho("Exiting...", bold=True)
    sys.exit(1)


def stats_signal_handler(sig, frame, handlers=None):
    for handler in handlers:
        click.echo(
            click.style(handler.name, bold=True, fg="red") + " " + handler.stats()
        )
    if sig == signal.SIGINT:
        click.secho("Press Ctrl+C again to end the process", bold=True)
        handler = signal.getsignal(signal.SIGINT)
        signal.signal(signal.SIGINT, actually_exit)
        asyncio.get_running_loop().call_later(5, install_ctrl_c_signal_handler, handler)
    return


def install_ctrl_c_signal_handler(signal_handler):
    click.secho("Press Ctrl+C to view statistics", bold=True)
    signal.signal(signal.SIGINT, signal_handler)


@click.command()
@click.argument("config", nargs=1)
@click.option(
    "--db",
    metavar="<CONNECTION_STRING>",
    default="postgresql+psycopg2://localhost/apd",
    help="The connection string to a PostgreSQL database",
    envvar="APD_DB_URI",
)
@click.option(
    "--historical",
    is_flag=True,
    help="Also trigger actions for data points that were already present in the database",
)
@click.option("-v", "--verbose", is_flag=True, help="Enables verbose mode")
def run_actions(config: str, db: str, verbose: bool, historical: bool):
    """This runs the long-running action processors defined in a config file.

    The configuration file specified should be a Python file that defines a
    list of DataProcessor objects called handlers.
    """
    logging.basicConfig(
        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        level=logging.DEBUG if verbose else logging.WARN,
    )

    async def main_loop():
        with with_database(db):
            logger.info("Loading configuration")
            handlers = load_handler_config(config)

            # Set up the refeed queue before starting the handlers
            # or source, so they all have access to it
            refeed_queue_var.set(asyncio.Queue())

            logger.info(f"Configured {len(handlers)} handlers")
            starters = [handler.start() for handler in handlers]
            await asyncio.gather(*starters)

            logger.info("Ingesting data")
            data = get_data_ongoing(historical=historical)

            signal_handler = functools.partial(
                stats_signal_handler,
                handlers=handlers,
            )
            for signal_name in "SIGINFO", "SIGUSR1", "SIGINT":
                try:
                    signal.signal(signal.Signals[signal_name], signal_handler)
                except KeyError:
                    pass

            async for datapoint in data:
                for handler in handlers:
                    await handler.push(datapoint)

    asyncio.run(main_loop())


@click.group()
def deployments():
    pass


@deployments.command()
@click.argument("uri")
@click.argument("name")
@click.option(
    "--db",
    metavar="<CONNECTION_STRING>",
    default="postgresql+psycopg2://localhost/apd",
    help="The connection string to a PostgreSQL database",
    envvar="APD_DB_URI",
)
@click.option("--api-key", metavar="<KEY>", envvar="APD_API_KEY")
@click.option("--colour")
def add(
    db: str,
    uri: str,
    name: str,
    api_key: t.Optional[str],
    colour: t.Optional[str],
) -> None:
    """This creates a record of a new deployment in the database."""
    deployment = Deployment(id=None, uri=uri, name=name, api_key=api_key, colour=colour)

    async def http_get_deployment_id():
        async with aiohttp.ClientSession() as http:
            collect.http_session_var.set(http)
            return await collect.get_deployment_id(uri)

    deployment.id = asyncio.run(http_get_deployment_id())

    insert = deployment_table.insert().values(**deployment._asdict())
    engine = create_engine(db)
    sm = sessionmaker(engine)
    Session = sm()
    Session.execute(insert)
    Session.commit()


@deployments.command()
@click.option(
    "--db",
    metavar="<CONNECTION_STRING>",
    default="postgresql+psycopg2://localhost/apd",
    help="The connection string to a PostgreSQL database",
    envvar="APD_DB_URI",
)
def list(db: str) -> None:
    """This lists the deployments recorded in the database."""
    engine = create_engine(db)
    sm = sessionmaker(engine)
    Session = sm()
    deployments = Session.query(deployment_table).all()
    for deployment in deployments:
        click.secho(deployment.name, bold=True)
        click.echo(click.style("ID      ", bold=True) + deployment.id.hex)
        click.echo(click.style("URI     ", bold=True) + deployment.uri)
        click.echo(click.style("API key ", bold=True) + deployment.api_key)
        click.echo(click.style("Colour  ", bold=True) + str(deployment.colour))
        click.echo()
    Session.rollback()


@deployments.command()
@click.argument("id")
@click.option("--uri")
@click.option("--name")
@click.option(
    "--db",
    metavar="<CONNECTION_STRING>",
    default="postgresql+psycopg2://localhost/apd",
    help="The connection string to a PostgreSQL database",
    envvar="APD_DB_URI",
)
@click.option("--api-key", metavar="<KEY>", envvar="APD_API_KEY")
@click.option("--colour")
def edit(
    db: str,
    id,
    uri: t.Optional[str],
    name: t.Optional[str],
    api_key: t.Optional[str],
    colour: t.Optional[str],
) -> None:
    """This updates an existing deployment record in the database."""
    update = {}
    if uri is not None:
        update["uri"] = uri
    if name is not None:
        update["name"] = name
    if api_key is not None:
        update["api_key"] = api_key
    if colour is not None:
        update["colour"] = colour

    deployment_id = uuid.UUID(id)
    update_stmt = (
        deployment_table.update()
        .where(deployment_table.c.id == deployment_id)
        .values(**update)
    )
    engine = create_engine(db)
    sm = sessionmaker(engine)
    Session = sm()
    Session.execute(update_stmt)
    deployments = Session.query(deployment_table).filter(
        deployment_table.c.id == deployment_id
    )
    Session.commit()
    for deployment in deployments:
        click.secho(deployment.name, bold=True)
        click.echo(click.style("ID      ", bold=True) + deployment.id.hex)
        click.echo(click.style("URI     ", bold=True) + deployment.uri)
        click.echo(click.style("API key ", bold=True) + deployment.api_key)
        click.echo(click.style("Colour  ", bold=True) + str(deployment.colour))
        click.echo()
| 32.30742 | 90 | 0.665099 | 1,185 | 9,143 | 5.031224 | 0.243882 | 0.020127 | 0.021134 | 0.028682 | 0.376887 | 0.340658 | 0.330594 | 0.330594 | 0.330594 | 0.322375 | 0 | 0.001818 | 0.217872 | 9,143 | 282 | 91 | 32.421986 | 0.831912 | 0.138029 | 0 | 0.394495 | 0 | 0 | 0.156815 | 0.022439 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045872 | false | 0.009174 | 0.091743 | 0 | 0.151376 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78c41a1434c06c1642f44349fdd1eea2106f0e22 | 17,447 | py | Python | backend/integration/tests.py | Tim6FTN/UKS | 3cf19f014cdc7845bf0b808b97c4e05dc49b062e | [
"MIT"
] | 1 | 2021-01-10T12:34:59.000Z | 2021-01-10T12:34:59.000Z | backend/integration/tests.py | Tim6FTN/UKS | 3cf19f014cdc7845bf0b808b97c4e05dc49b062e | [
"MIT"
] | 37 | 2021-01-07T22:31:25.000Z | 2021-02-20T10:59:46.000Z | backend/integration/tests.py | Tim6FTN/UKS | 3cf19f014cdc7845bf0b808b97c4e05dc49b062e | [
"MIT"
] | null | null | null | from unittest.mock import MagicMock, Mock
import six
from django.contrib.auth.models import User
from django.core.exceptions import SuspiciousOperation
from django.test import SimpleTestCase, Client, TransactionTestCase
from django.urls import reverse, resolve
from branch.models import Branch
from integration.views import receive_webhook_request
from integration.webhook_handler import WebhookHandler, _format_event
from project.models import Project
from repository.models import Repository
class TestWebhookHandler(SimpleTestCase):
    def test_if_secret_not_initialized(self):
        webhook_handler = WebhookHandler()
        self.assertIsNone(webhook_handler.secret)

    def test_if_secret_properly_initialized(self):
        webhook_handler = WebhookHandler(secret="test-secret")
        self.assertIsNotNone(webhook_handler.secret)
        self.assertIsInstance(webhook_handler.secret, bytes)
        self.assertEqual(webhook_handler.secret, "test-secret".encode("utf-8"))

    def test_format_event_if_key_is_present(self):
        data = {'pusher': {'name': 'test_name'}, 'ref': 'test_ref',
                'repository': {'full_name': 'test_repository_full_name'}}
        push_event_description = _format_event("push", data)
        self.assertEqual(push_event_description, "test_name pushed test_ref in test_repository_full_name")

    def test_format_event_if_key_is_not_present(self):
        push_event_description = _format_event("non-existing-key", {})
        self.assertEqual(push_event_description, "non-existing-key")

    def test__get_header_if_key_is_present(self):
        request = Mock()
        request.headers = {WebhookHandler.X_GITHUB_DELIVERY: 'some-guid'}
        header_value = WebhookHandler._get_header(WebhookHandler.X_GITHUB_DELIVERY, request)
        self.assertEqual(header_value, 'some-guid')

    def test__get_header_if_key_is_not_present(self):
        with self.assertRaisesMessage(SuspiciousOperation, f'Missing header: {WebhookHandler.X_GITHUB_DELIVERY}'):
            request = Mock()
            request.headers = {}
            WebhookHandler._get_header(WebhookHandler.X_GITHUB_DELIVERY, request)

    def test__get_digest_if_secret_is_present(self):
        request = Mock()
        request.body = '{"key": "value"}'.encode('utf-8')
        webhook_handler = WebhookHandler(secret="test-secret")
        digest = webhook_handler._get_digest(request)
        self.assertIsNotNone(digest)
        self.assertIsInstance(digest, six.text_type)

    def test__get_digest_if_secret_is_not_present(self):
        request = Mock()
        request.body = {}
        webhook_handler = WebhookHandler()
        digest = webhook_handler._get_digest(request)
        self.assertIsNone(digest)

    def test_handle_if_no_signature(self):
        request = Mock()
        request.headers = {WebhookHandler.X_HUB_SIGNATURE_256: 'incorrect-digest'}
        webhook_handler = WebhookHandler()
        webhook_handler._get_digest = MagicMock(return_value="sha256-digest")
        with self.assertRaisesMessage(SuspiciousOperation, "Signature required."):
            webhook_handler.handle(request)

    def test_handle_if_signature_invalid(self):
        request = Mock()
        request.headers = {WebhookHandler.X_HUB_SIGNATURE_256: 'sha256=incorrect-digest'}
        webhook_handler = WebhookHandler()
        webhook_handler._get_digest = MagicMock(return_value="sha256-digest")
        with self.assertRaisesMessage(SuspiciousOperation, "Invalid signature."):
            webhook_handler.handle(request)

    def test_handle_if_event_type_missing(self):
        request = Mock()
        request.headers = {}
        webhook_handler = WebhookHandler()
        webhook_handler._get_digest = MagicMock(return_value=None)
        with self.assertRaisesMessage(SuspiciousOperation, f'Missing header: {WebhookHandler.X_GITHUB_EVENT}'):
            webhook_handler.handle(request)

    def test_handle_when_content_type_form(self):
        webhook_handler = WebhookHandler()
        webhook_handler._get_digest = MagicMock(return_value=None)
        request = Mock()
        request.headers = {'content-type': 'application/x-www-form-urlencoded', WebhookHandler.X_GITHUB_EVENT: 'push'}
        with self.assertRaisesMessage(SuspiciousOperation, "Unsupported operation."):
            webhook_handler.handle(request)

    def test_handle_when_content_type_json_and_data_invalid(self):
        webhook_handler = WebhookHandler()
        webhook_handler._get_digest = MagicMock(return_value=None)
        request = Mock()
        request.headers = {
            'content-type': 'application/json',
            'X-Github-Delivery': 'some-guid',
            WebhookHandler.X_GITHUB_EVENT: 'push'
        }
        request.body = ''.encode('utf-8')
        with self.assertRaisesMessage(SuspiciousOperation, "Request body must contain valid JSON data."):
            webhook_handler.handle(request)

    def test_handle_when_content_type_json_and_data_valid(self):
        webhook_handler = WebhookHandler()
        webhook_handler._get_digest = MagicMock(return_value=None)
        request = Mock()
        request.headers = {
            'content-type': 'application/json',
            'X-Github-Delivery': 'some-guid',
            WebhookHandler.X_GITHUB_EVENT: 'push'
        }
        request.body = '{"key": "value"}'.encode('utf-8')
        webhook_handler.handle(request)

    def test_if_webhook_handler_handle_called(self):
        webhook_handler = WebhookHandler()
        webhook_handler.handle = MagicMock(return_value=None)
        webhook_handler.handle(request=Mock())
        webhook_handler.handle.assert_called_once()

    def test_if_webhook_handler_called_all_registered_hook_handlers(self):
        webhook_handler = WebhookHandler()
        webhook_handler._get_digest = MagicMock(return_value=None)
        request = Mock()
        request.headers = {
            'content-type': 'application/json',
            'X-Github-Delivery': 'some-guid',
            WebhookHandler.X_GITHUB_EVENT: 'push'
        }
        request.body = '{"key": "value"}'.encode('utf-8')

        @webhook_handler.hook(event_type="push")
        @MagicMock
        def first_decorated_func(): pass

        @webhook_handler.hook(event_type="push")
        @MagicMock
        def second_decorated_func(): pass

        @webhook_handler.hook(event_type="ping")
        @MagicMock
        def third_decorated_func(): pass

        webhook_handler.handle(request)

        first_decorated_func.assert_called_once()
        second_decorated_func.assert_called_once()
        third_decorated_func.assert_not_called()


class TestIntegrationURLs(SimpleTestCase):
    def test_notify_url(self):
        notify_url = reverse('notify')
        self.assertEqual(resolve(notify_url).func, receive_webhook_request)


class TestIntegrationViews(TransactionTestCase):
    def setUp(self):
        self.client = Client()
        self.notify_url = reverse('notify')
        self.user = User.objects.create_user('test_username', 'test@email.com', 'test_password')
        self.repository = Repository.objects.create(
            url="https://github.com/fivkovic/uks-demo",
            name="uks-demo",
            description="uks-demo repository description",
            is_public=True)
        self.project = Project.objects.create(
            name="UKS DEMO PROJECT",
            description="UKS demo project description",
            is_public=True,
            wiki_content="Wiki",
            repository=self.repository,
            owner=self.user)
        self.branch = Branch.objects.create(name="main", repository=self.repository)
        self.task = None

    def test_receive_webhook_request_view(self):
        headers = {
            'HTTP_' + WebhookHandler.X_GITHUB_EVENT: 'push',
            'HTTP_' + WebhookHandler.X_GITHUB_DELIVERY: 'some-guid'
        }
        response = self.client.post(
            self.notify_url,
            INTEGRATION_TEST_REQUEST_BODY,
            content_type='application/json',
            **headers)
        self.assertEqual(response.status_code, 204)
INTEGRATION_TEST_REQUEST_BODY = {
    "ref": "refs/heads/main",
    "before": "2f781a5371291ce8ba3f3a8acdf8bd673889dcaf",
    "after": "9549a348a9c4e175cf8a27e45bab93407d178767",
    "repository": {
        "id": 339193534,
        "node_id": "MDEwOlJlcG9zaXRvcnkzMzkxOTM1MzQ=",
        "name": "uks-demo",
        "full_name": "fivkovic/uks-demo",
        "private": False,
        "owner": {
            "name": "fivkovic",
            "email": "f.ivkovic16@gmail.com",
            "login": "fivkovic",
            "id": 17569172,
            "node_id": "MDQ6VXNlcjE3NTY5MTcy",
            "avatar_url": "https://avatars.githubusercontent.com/u/17569172?v=4",
            "gravatar_id": "",
            "url": "https://api.github.com/users/fivkovic",
            "html_url": "https://github.com/fivkovic",
            "followers_url": "https://api.github.com/users/fivkovic/followers",
            "following_url": "https://api.github.com/users/fivkovic/following{/other_user}",
            "gists_url": "https://api.github.com/users/fivkovic/gists{/gist_id}",
            "starred_url": "https://api.github.com/users/fivkovic/starred{/owner}{/repo}",
            "subscriptions_url": "https://api.github.com/users/fivkovic/subscriptions",
            "organizations_url": "https://api.github.com/users/fivkovic/orgs",
            "repos_url": "https://api.github.com/users/fivkovic/repos",
            "events_url": "https://api.github.com/users/fivkovic/events{/privacy}",
            "received_events_url": "https://api.github.com/users/fivkovic/received_events",
            "type": "User",
            "site_admin": False
        },
        "html_url": "https://github.com/fivkovic/uks-demo",
        "description": "Demo repository for testing UKS project",
        "fork": False,
        "url": "https://github.com/fivkovic/uks-demo",
        "forks_url": "https://api.github.com/repos/fivkovic/uks-demo/forks",
        "keys_url": "https://api.github.com/repos/fivkovic/uks-demo/keys{/key_id}",
        "collaborators_url": "https://api.github.com/repos/fivkovic/uks-demo/collaborators{/collaborator}",
        "teams_url": "https://api.github.com/repos/fivkovic/uks-demo/teams",
        "hooks_url": "https://api.github.com/repos/fivkovic/uks-demo/hooks",
        "issue_events_url": "https://api.github.com/repos/fivkovic/uks-demo/issues/events{/number}",
        "events_url": "https://api.github.com/repos/fivkovic/uks-demo/events",
        "assignees_url": "https://api.github.com/repos/fivkovic/uks-demo/assignees{/user}",
        "branches_url": "https://api.github.com/repos/fivkovic/uks-demo/branches{/branch}",
        "tags_url": "https://api.github.com/repos/fivkovic/uks-demo/tags",
        "blobs_url": "https://api.github.com/repos/fivkovic/uks-demo/git/blobs{/sha}",
        "git_tags_url": "https://api.github.com/repos/fivkovic/uks-demo/git/tags{/sha}",
        "git_refs_url": "https://api.github.com/repos/fivkovic/uks-demo/git/refs{/sha}",
        "trees_url": "https://api.github.com/repos/fivkovic/uks-demo/git/trees{/sha}",
        "statuses_url": "https://api.github.com/repos/fivkovic/uks-demo/statuses/{sha}",
        "languages_url": "https://api.github.com/repos/fivkovic/uks-demo/languages",
        "stargazers_url": "https://api.github.com/repos/fivkovic/uks-demo/stargazers",
        "contributors_url": "https://api.github.com/repos/fivkovic/uks-demo/contributors",
        "subscribers_url": "https://api.github.com/repos/fivkovic/uks-demo/subscribers",
        "subscription_url": "https://api.github.com/repos/fivkovic/uks-demo/subscription",
        "commits_url": "https://api.github.com/repos/fivkovic/uks-demo/commits{/sha}",
        "git_commits_url": "https://api.github.com/repos/fivkovic/uks-demo/git/commits{/sha}",
        "comments_url": "https://api.github.com/repos/fivkovic/uks-demo/comments{/number}",
        "issue_comment_url": "https://api.github.com/repos/fivkovic/uks-demo/issues/comments{/number}",
        "contents_url": "https://api.github.com/repos/fivkovic/uks-demo/contents/{+path}",
        "compare_url": "https://api.github.com/repos/fivkovic/uks-demo/compare/{base}...{head}",
        "merges_url": "https://api.github.com/repos/fivkovic/uks-demo/merges",
        "archive_url": "https://api.github.com/repos/fivkovic/uks-demo/{archive_format}{/ref}",
        "downloads_url": "https://api.github.com/repos/fivkovic/uks-demo/downloads",
        "issues_url": "https://api.github.com/repos/fivkovic/uks-demo/issues{/number}",
        "pulls_url": "https://api.github.com/repos/fivkovic/uks-demo/pulls{/number}",
        "milestones_url": "https://api.github.com/repos/fivkovic/uks-demo/milestones{/number}",
        "notifications_url": "https://api.github.com/repos/fivkovic/uks-demo/notifications{?since,all,participating}",
        "labels_url": "https://api.github.com/repos/fivkovic/uks-demo/labels{/name}",
        "releases_url": "https://api.github.com/repos/fivkovic/uks-demo/releases{/id}",
        "deployments_url": "https://api.github.com/repos/fivkovic/uks-demo/deployments",
        "created_at": 1613419653,
        "updated_at": "2021-02-15T20:07:41Z",
        "pushed_at": 1613420915,
        "git_url": "git://github.com/fivkovic/uks-demo.git",
        "ssh_url": "git@github.com:fivkovic/uks-demo.git",
        "clone_url": "https://github.com/fivkovic/uks-demo.git",
        "svn_url": "https://github.com/fivkovic/uks-demo",
        "homepage": None,
        "size": 0,
        "stargazers_count": 0,
        "watchers_count": 0,
        "language": None,
        "has_issues": True,
        "has_projects": True,
        "has_downloads": True,
        "has_wiki": True,
        "has_pages": False,
        "forks_count": 0,
        "mirror_url": None,
        "archived": False,
        "disabled": False,
        "open_issues_count": 0,
        "license": {
            "key": "mit",
            "name": "MIT License",
            "spdx_id": "MIT",
            "url": "https://api.github.com/licenses/mit",
            "node_id": "MDc6TGljZW5zZTEz"
        },
        "forks": 0,
        "open_issues": 0,
        "watchers": 0,
        "default_branch": "main",
        "stargazers": 0,
        "master_branch": "main"
    },
    "pusher": {
        "name": "fivkovic",
        "email": "f.ivkovic16@gmail.com"
    },
    "sender": {
        "login": "fivkovic",
        "id": 17569172,
        "node_id": "MDQ6VXNlcjE3NTY5MTcy",
        "avatar_url": "https://avatars.githubusercontent.com/u/17569172?v=4",
        "gravatar_id": "",
        "url": "https://api.github.com/users/fivkovic",
        "html_url": "https://github.com/fivkovic",
        "followers_url": "https://api.github.com/users/fivkovic/followers",
        "following_url": "https://api.github.com/users/fivkovic/following{/other_user}",
        "gists_url": "https://api.github.com/users/fivkovic/gists{/gist_id}",
        "starred_url": "https://api.github.com/users/fivkovic/starred{/owner}{/repo}",
        "subscriptions_url": "https://api.github.com/users/fivkovic/subscriptions",
        "organizations_url": "https://api.github.com/users/fivkovic/orgs",
        "repos_url": "https://api.github.com/users/fivkovic/repos",
        "events_url": "https://api.github.com/users/fivkovic/events{/privacy}",
        "received_events_url": "https://api.github.com/users/fivkovic/received_events",
        "type": "User",
        "site_admin": False
    },
    "created": False,
    "deleted": False,
    "forced": False,
    "base_ref": None,
    "compare": "https://github.com/fivkovic/uks-demo/compare/2f781a537129...9549a348a9c4",
    "commits": [
        {
            "id": "9549a348a9c4e175cf8a27e45bab93407d178767",
            "tree_id": "20f7ae1a25f3c039e7d6442440672bd012c3a78d",
            "distinct": True,
            "message": "First test commit closes #1 #2",
            "timestamp": "2021-02-15T21:12:35+01:00",
            "url": "https://github.com/fivkovic/uks-demo/commit/9549a348a9c4e175cf8a27e45bab93407d178767",
            "author": {
                "name": "Filip Ivkovic",
                "email": "fivkovic@uns.ac.rs",
                "username": "fivkovic"
            },
            "committer": {
                "name": "Filip Ivkovic",
                "email": "fivkovic@uns.ac.rs",
                "username": "fivkovic"
            },
            "added": [
                "F1.txt",
                "F2.txt"
            ],
            "removed": [],
            "modified": []
        }
    ],
    "head_commit": {
        "id": "9549a348a9c4e175cf8a27e45bab93407d178767",
        "tree_id": "20f7ae1a25f3c039e7d6442440672bd012c3a78d",
        "distinct": True,
        "message": "First test commit closes #1 #2",
        "timestamp": "2021-02-15T21:12:35+01:00",
        "url": "https://github.com/fivkovic/uks-demo/commit/9549a348a9c4e175cf8a27e45bab93407d178767",
        "author": {
            "name": "Filip Ivkovic",
            "email": "fivkovic@uns.ac.rs",
            "username": "fivkovic"
        },
        "committer": {
            "name": "Filip Ivkovic",
            "email": "fivkovic@uns.ac.rs",
            "username": "fivkovic"
        },
        "added": [
            "F1.txt",
            "F2.txt"
        ],
        "removed": [],
        "modified": []
    }
} | 43.400498 | 118 | 0.638161 | 1,907 | 17,447 | 5.636602 | 0.157315 | 0.057773 | 0.058331 | 0.090148 | 0.654293 | 0.611871 | 0.574658 | 0.527956 | 0.49065 | 0.393618 | 0 | 0.029433 | 0.217172 | 17,447 | 402 | 119 | 43.400498 | 0.757578 | 0 | 0 | 0.397727 | 0 | 0.025568 | 0.410993 | 0.034904 | 0 | 0 | 0 | 0 | 0.0625 | 1 | 0.0625 | false | 0.011364 | 0.03125 | 0 | 0.102273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78c876141f5329c52ad1e038a8ee03869db886d7 | 6,705 | py | Python | ketu/characterization/prepare.py | dfm/turnstile | 13a9a3b489b458396a6ad1e8a2d1e89a0dd6312d | [
"MIT"
] | 10 | 2015-02-19T09:13:24.000Z | 2020-04-25T10:50:38.000Z | ketu/characterization/prepare.py | dfm/turnstile | 13a9a3b489b458396a6ad1e8a2d1e89a0dd6312d | [
"MIT"
] | 1 | 2015-07-10T19:50:31.000Z | 2015-07-11T03:51:15.000Z | ketu/characterization/prepare.py | dfm/turnstile | 13a9a3b489b458396a6ad1e8a2d1e89a0dd6312d | [
"MIT"
] | 7 | 2015-04-20T06:42:28.000Z | 2019-02-25T03:04:45.000Z | # -*- coding: utf-8 -*-
from __future__ import division, print_function
__all__ = ["prepare_characterization"]
import kplr
import transit
import numpy as np
from scipy.stats import beta
import matplotlib.pyplot as pl
import george
from george import kernels
from ..prepare import Prepare
from ..download import Download
from ..discontinuity import Discontinuity
def prepare_characterization(kicid, periods, time0s, rors, impacts,
                             es=None,
                             data_window_hw=3.0, min_data_window_hw=0.5):
    # Download and process the light curves.
    pipe = Download()
    pipe = Prepare(pipe)
    pipe = Discontinuity(pipe)
    r = pipe.query(kicid=kicid)

    # Find the data chunks that hit a transit.
    lcs = []
    for lc in r.light_curves:
        # Build the mask of times that hit transits.
        m = np.zeros_like(lc.time, dtype=bool)
        mmin = np.zeros_like(lc.time, dtype=bool)
        for p, t0 in zip(periods, time0s):
            hp = 0.5 * p
            t0 = t0 % p
            dt = np.abs((lc.time - t0 + hp) % p - hp)
            m += dt < data_window_hw
            mmin += dt < min_data_window_hw

        # Trim the dataset and set up the Gaussian Process model.
        if np.any(mmin) and np.sum(m) > 10:
            # Re-normalize the trimmed light curve.
            mu = np.median(lc.flux[m])
            lc.time = np.ascontiguousarray(lc.time[m])
            lc.flux = np.ascontiguousarray(lc.flux[m] / mu)
            lc.ferr = np.ascontiguousarray(lc.ferr[m] / mu)

            # Make sure that the light curve knows its integration time.
            lc.texp = kplr.EXPOSURE_TIMES[1] / 86400.0

            # Heuristically guess the Gaussian Process parameters.
            lc.factor = 1000.0
            amp = np.median((lc.factor * (lc.flux - 1.0)) ** 2)
            kernel = amp * kernels.Matern32Kernel(4.0)
            lc.gp = george.GP(kernel)

            # Run an initial computation of the GP.
            lc.gp.compute(lc.time, lc.ferr * lc.factor)

            # Save this light curve.
            lcs.append(lc)

    # Set up the initial system model.
    spars = r.star.huber
    star = transit.Central(mass=spars.M, radius=spars.R)
    s = transit.System(star)
    for i in range(len(periods)):
        planet = transit.Body(r=rors[i] * star.radius,
                              period=periods[i],
                              t0=time0s[i] % periods[i],
                              b=impacts[i],
                              e=0.0 if es is None else es[i])
        s.add_body(planet)

    # Approximate the stellar mass and radius measurements as log-normal.
    q = np.array(spars[["R", "E_R", "e_R"]], dtype=float)
    lnsr = (np.log(q[0]),
            1.0 / np.mean([np.log(q[0] + q[1]) - np.log(q[0]),
                           np.log(q[0]) - np.log(q[0] - q[2])]) ** 2)
    q = np.array(spars[["M", "E_M", "e_M"]], dtype=float)
    lnsm = (np.log(q[0]),
            1.0 / np.mean([np.log(q[0] + q[1]) - np.log(q[0]),
                           np.log(q[0]) - np.log(q[0] - q[2])]) ** 2)

    return ProbabilisticModel(lcs, s, lnsr, lnsm)


class ProbabilisticModel(object):

    def __init__(self, lcs, system, lnsr, lnsm):
        self.lcs = lcs
        self.system = system
        self.lnsr = lnsr
        self.lnsm = lnsm
        self.fit_star = False

    def pack(self):
        star = self.system.central
        planets = self.system.bodies
        vec = list(self.lcs[0].gp.kernel.vector)
        if self.fit_star:
            vec += [np.log(star.radius), np.log(star.mass)]
        vec += [
            star.q1,
            star.q2,
        ]
        vec += [v for p in planets for v in (
            np.log(p.r), np.log(p.period), p.t0, p.b,
            np.sqrt(p.e) * np.sin(p.pomega),
            np.sqrt(p.e) * np.cos(p.pomega)
        )]
        return np.array(vec)

    def unpack(self, pars):
        # Update the kernel.
        i = len(self.lcs[0].gp.kernel)
        for lc in self.lcs:
            lc.gp.kernel[:] = pars[:i]

        # Update the star.
        star = self.system.central
        if self.fit_star:
            star.radius, star.mass = np.exp(pars[i:i+2])
            i += 2
        star.q1, star.q2 = pars[i:i+2]
        i += 2

        # Update the planets.
        for p in self.system.bodies:
            p.r, p.period = np.exp(pars[i:i+2])
            i += 2
            p.t0, p.b = pars[i:i+2]
            i += 2
            sqesn, sqecs = pars[i:i+2]
            p.e = sqesn**2 + sqecs**2
            p.pomega = np.arctan2(sqesn, sqecs)
            i += 2

    def lnprior(self):
        lnp = 0.0

        # Apply the stellar parameter constraints.
        star = self.system.central
        if not (0 < star.q1 < 1 and 0 < star.q2 < 1):
            return -np.inf
        lnsr = np.log(star.radius)
        lnp -= 0.5 * self.lnsr[1] * (self.lnsr[0] - lnsr) ** 2
        lnsm = np.log(star.mass)
        lnp -= 0.5 * self.lnsm[1] * (self.lnsm[0] - lnsm) ** 2

        # And the planet parameters.
        for p in self.system.bodies:
            if p.b < 0.0 or not (-2 * np.pi < p.pomega < 2 * np.pi):
                return -np.inf
            # Eccentricity prior from Kipping (2013)
            lnp += beta(1.12, 3.09).logpdf(p.e)

        return lnp

    def lnlike(self):
        ll = 0.0
        for lc in self.lcs:
            try:
                mu = self.system.light_curve(lc.time, texp=lc.texp)
            except RuntimeError:
                return -np.inf
            r = (lc.flux - mu) * lc.factor
            ll += lc.gp.lnlikelihood(r, quiet=True)
            if not np.isfinite(ll):
                return -np.inf
        return ll

    def lnprob(self, p):
        try:
            self.unpack(p)
        except ValueError:
            return -np.inf
        lp = self.lnprior()
        if not np.isfinite(lp):
            return -np.inf
        ll = self.lnlike()
        if not np.isfinite(ll):
            return -np.inf
        return lp + ll

    def plot(self, dy=1e-2):
        fig = pl.figure()
        ax = fig.add_subplot(111)
        period = self.system.bodies[0].period
        t0 = self.system.bodies[0].t0
        for i, lc in enumerate(self.lcs):
            t = (lc.time - t0 + 0.5 * period) % period - 0.5 * period
            ax.plot(t, lc.flux + i * dy, ".k", alpha=0.5)
            mu = self.system.light_curve(lc.time, texp=lc.texp)
            r = lc.factor * (lc.flux - mu)
            pred = lc.gp.predict(r, lc.time, mean_only=True) / lc.factor
            ax.plot(t, pred + 1.0 + i * dy, "r", alpha=0.5)
            ax.plot(t, pred + mu + i * dy, "b", alpha=0.5)
        ax.axvline(0.0, color="k", alpha=0.3, lw=3)
        return fig
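

# A minimal sampling sketch (added; emcee is an assumption, not imported by
# this module). ProbabilisticModel.lnprob is shaped for ensemble samplers:
#
#   import emcee
#   model = prepare_characterization(kicid, periods, time0s, rors, impacts)
#   p0 = model.pack()
#   walkers = p0 + 1e-5 * np.random.randn(64, len(p0))
#   sampler = emcee.EnsembleSampler(64, len(p0), model.lnprob)
#   sampler.run_mcmc(walkers, 1000)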
| 31.928571 | 73 | 0.513497 | 968 | 6,705 | 3.514463 | 0.233471 | 0.023516 | 0.017637 | 0.020576 | 0.149324 | 0.112287 | 0.094062 | 0.078777 | 0.070547 | 0.050559 | 0 | 0.032228 | 0.352125 | 6,705 | 209 | 74 | 32.08134 | 0.750921 | 0.097092 | 0 | 0.2 | 0 | 0 | 0.007125 | 0.003977 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051613 | false | 0 | 0.070968 | 0 | 0.212903 | 0.006452 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78c9b51f8e253459950c6aba31616db59d9ecbca | 1,242 | py | Python | docs/python/f_1st_partial_ex4.py | Voldemort373/Notes-and-Reference | 796885e315e9c349ff1cb37760abc56327547140 | [
"CC-BY-4.0",
"CC0-1.0"
] | 30 | 2018-11-12T09:03:45.000Z | 2021-12-09T02:20:08.000Z | docs/python/f_1st_partial_ex4.py | Voldemort373/Notes-and-Reference | 796885e315e9c349ff1cb37760abc56327547140 | [
"CC-BY-4.0",
"CC0-1.0"
] | 36 | 2018-11-11T21:32:31.000Z | 2019-02-02T16:18:11.000Z | docs/python/f_1st_partial_ex4.py | Voldemort373/Notes-and-Reference | 796885e315e9c349ff1cb37760abc56327547140 | [
"CC-BY-4.0",
"CC0-1.0"
] | 8 | 2018-11-14T17:09:21.000Z | 2020-05-28T16:18:12.000Z | # -*- coding: utf-8 -*-
# Copyright (c) 2018, Silvio Peroni <essepuntato@gmail.com>
#
# Permission to use, copy, modify, and/or distribute this software for any purpose
# with or without fee is hereby granted, provided that the above copyright notice
# and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
# FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT,
# OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE,
# DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS
# SOFTWARE.
from re import findall
def f(cur_digit):
    l = list()
    l.append("a")
    l.append("b")
    l.extend(l)
    l.extend(l)
    l.append("c")
    for i in range(int(cur_digit)):
        if l[i] != "a" and "a" in l:
            l.remove("a")
        else:
            l.insert(i, "c")
    return l
rightmost_digit = "".join(findall(r"\d", input("Please provide your matriculation number: ")))[-1]
print("Result:", f(rightmost_digit))
| 33.567568 | 97 | 0.691626 | 190 | 1,242 | 4.5 | 0.573684 | 0.042105 | 0.018713 | 0.021053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006085 | 0.206119 | 1,242 | 36 | 98 | 34.5 | 0.861055 | 0.623994 | 0 | 0.125 | 0 | 0 | 0.128319 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.1875 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78cb4d79d62529d58ce3023b98597dd72ee5c35a | 1,916 | py | Python | DataBase/Mongo/MongoTest.py | InverseLina/python-practice | 496d2020916d8096a32131cdedd25a4da7b7735e | [
"Apache-2.0"
] | null | null | null | DataBase/Mongo/MongoTest.py | InverseLina/python-practice | 496d2020916d8096a32131cdedd25a4da7b7735e | [
"Apache-2.0"
] | null | null | null | DataBase/Mongo/MongoTest.py | InverseLina/python-practice | 496d2020916d8096a32131cdedd25a4da7b7735e | [
"Apache-2.0"
] | null | null | null | import pymongo
from bson.son import SON
from pymongo import MongoClient
# encoding=utf-8
__author__ = 'Hinsteny'
print(pymongo.get_version_string())
class SingleClient(object):
    '''
    Single Client holds the client object
    '''
    client = MongoClient('127.0.0.1', 27017)
    client.the_database.authenticate('hinsteny', 'welcome', source='admin', mechanism='SCRAM-SHA-1')

    def __new__(cls, *args, **kw):
        if not hasattr(cls, '_instance'):
            orig = super(SingleClient, cls)
            # object.__new__ takes no extra arguments in Python 3
            cls._instance = orig.__new__(cls)
        return cls._instance
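

# Usage note (added): constructing SingleClient repeatedly always yields the
# same object, so the underlying MongoClient is shared, e.g.:
#   c1 = SingleClient()
#   c2 = SingleClient()
#   assert c1 is c2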
def getClient():
    client = MongoClient('127.0.0.1', 27017)
    client.the_database.authenticate('hinsteny', 'welcome', source='admin', mechanism='SCRAM-SHA-1')
    return client


def test_connection():
    client = getClient()
    db = client.cube_test
    query = {}
    cursor = db.user.find(query)
    print(cursor.count())
    print(cursor[0])


def test_addUser():
    client = getClient()
    db = client.admin
    query = {}
    cursor = db.system.users.find(query)
    if cursor.count() == 0:
        # the original runCommand call was not valid pymongo; db.command is the equivalent
        db.command("createUser", "admin", pwd="welcome", roles=["root"])
    else:
        print(cursor[0])


def create_test_data(db):
    db.things.drop()
    result = db.things.insert_many([
        {"x": 1, "tags": ["dog", "cat"]},
        {"x": 2, "tags": ["cat"]},
        {"x": 2, "tags": ["mouse", "cat", "dog"]},
        {"x": 3, "tags": ["eat", "pear"]}
    ])
    print(result.inserted_ids)


def doAggregation(collection, pipeline):
    print(list(collection.aggregate(pipeline)))


# Do test
if __name__ == "__main__":
    test_connection()
    # test_addUser()
    db = getClient().aggregation_example
    create_test_data(db)
    pipeline = [
        {"$unwind": "$tags"},
        {"$group": {"_id": "", "count": {"$sum": "$x"}}},
        {"$sort": SON([("count", -1), ("_id", -1)])}
    ]
    doAggregation(db.things, pipeline) | 28.597015 | 171 | 0.606472 | 230 | 1,916 | 4.873913 | 0.426087 | 0.029438 | 0.035682 | 0.037467 | 0.180196 | 0.180196 | 0.180196 | 0.180196 | 0.180196 | 0.180196 | 0 | 0.022208 | 0.200939 | 1,916 | 67 | 172 | 28.597015 | 0.709993 | 0.039144 | 0 | 0.204082 | 0 | 0 | 0.123355 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.122449 | false | 0 | 0.061224 | 0 | 0.265306 | 0.122449 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78cd90912da668d8ba7cfecb75291ed8ec65c67c | 933 | py | Python | python/std_scripts/numerical_operations.py | IamPhytan/Cookbook | a903f9098b0d2ddccdf343f740858731242bde97 | [
"MIT"
] | null | null | null | python/std_scripts/numerical_operations.py | IamPhytan/Cookbook | a903f9098b0d2ddccdf343f740858731242bde97 | [
"MIT"
] | null | null | null | python/std_scripts/numerical_operations.py | IamPhytan/Cookbook | a903f9098b0d2ddccdf343f740858731242bde97 | [
"MIT"
] | null | null | null | #
# Get divisor and modulo
# Often forgotten, often useful
#
a = 5
b = 3
n, m = divmod(a, b)
print(n) # 1
print(m) # 2
#
# Next multiple of a number n
# Used a lot in CodinGame Clash of Code
#
n = 3
idx = [*range(10)]
res = [a + (n - (a % n)) % n for a in idx]
print(idx) # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(res) # [0, 3, 3, 3, 6, 6, 6, 9, 9, 9]
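
# The same "next multiple" can also be computed with ceiling division;
# both forms agree for non-negative a and positive n (added check):
res2 = [-(-a // n) * n for a in idx]
print(res == res2)  # True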
#
# Show a multiplication
# Used in CodinGame Clash of Code
#
# Numbers to multiply
a = 500
b = 1300
# Second number => String
b_s = str(b)
# Small multiplications
mults = list(reversed([a * int(b_s[i]) * 10 ** (len(b_s) - i - 1) for i in range(len(b_s))]))
mults = [m for m in mults if m != 0]
# Strings to list
s = [str(a), b_s, "-", *map(str, mults), "-"]
s.append(str(sum(list(mults))))
# Add mult sign
s[1] = "x " + b_s
# Adjust right align
n = len(max(s, key=len))
s = [w.rjust(n, " ") for w in s]
# Horizontal bars
s[2] = s[-2] = n * "-"
print("\n".join(s))
| 16.368421 | 93 | 0.561629 | 184 | 933 | 2.815217 | 0.413043 | 0.023166 | 0.061776 | 0.069498 | 0.084942 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057828 | 0.240086 | 933 | 56 | 94 | 16.660714 | 0.672779 | 0.395498 | 0 | 0 | 0 | 0 | 0.014815 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.227273 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78d0276dde967a30b789dae789ec250967368d4b | 8,842 | py | Python | src/kgmk/nlp/tpn/src/classification.py | kagemeka/python | 486ce39d97360b61029527bacf00a87fdbcf552c | [
"MIT"
] | null | null | null | src/kgmk/nlp/tpn/src/classification.py | kagemeka/python | 486ce39d97360b61029527bacf00a87fdbcf552c | [
"MIT"
] | null | null | null | src/kgmk/nlp/tpn/src/classification.py | kagemeka/python | 486ce39d97360b61029527bacf00a87fdbcf552c | [
"MIT"
] | null | null | null | from _base import *
config = config['classification']
tp = TP()
tokenizer_path = f'{project_root}/{config["tokenizer_path"]}'
tokenizer_gen_path = f'{project_root}/{config["tokenizer_gen_path"]}'
def load_category_master():
    try:
        dba = clsDbAccessor()
        category_master = dba.execQuery("SELECT category_id, category_name FROM mst_categories;")
        dba.close()
        return category_master
    except Exception as e:
        print('failed: SELECT mst_categories')
        raise e


def load_keyword_master():
    try:
        dba = clsDbAccessor()
        keyword_master = dba.execQuery("SELECT category_id, keyword FROM mst_keywords;")
        dba.close()
        return keyword_master
    except Exception as e:
        print('failed: SELECT mst_keywords')
        raise e


def load_gen_category_master():
    try:
        dba = clsDbAccessor()
        gen_category_master = dba.execQuery("SELECT category_id, category_name FROM mst_gen_categories;")
        dba.close()
        return gen_category_master
    except Exception as e:
        print('failed: SELECT mst_gen_categories')
        raise e
    # return pd.read_csv(f'{project_root}/data/mst_gen_categories.csv')


def load_gen_keyword_master():  # used when training locally
    return pd.read_csv(f'{project_root}/data/mst_gen_keywords.csv')


category_master = load_category_master()
categories = list(category_master['category_name'].values)
gen_category_master = load_gen_category_master()
gen_categories = list(gen_category_master['category_name'].values)

import tensorflow as tf
from tensorflow.keras import Sequential, layers, losses, optimizers, callbacks


def create_model(emb_dim=10):
    model = Sequential([
        layers.Embedding(input_dim=10**6, output_dim=emb_dim),
        layers.Conv1D(256, 3, activation='relu'),
        layers.GlobalMaxPooling1D(),
        layers.Dense(128, activation='relu'),
        layers.Dense(64, activation='relu'),
        layers.Dense(1, activation='sigmoid')
    ])
    model.compile(
        loss=losses.BinaryCrossentropy(),
        optimizer=optimizers.Adam(),
        metrics=['accuracy']
    )
    return model


def regex_and(s):
    return ''.join([f'(?=.*{w})' for w in s.split()])
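

# Example (added note): regex_and("foo bar") returns "(?=.*foo)(?=.*bar)",
# a lookahead pattern that matches only strings containing every word, in any
# order; joining several such patterns with '|' below therefore means
# "at least one keyword phrase fully matches".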
def create_dataset(texts, keywords):
# bl = texts.str.contains('|'.join(category_keywords[category]), regex=True)
bl = texts.str.contains(r'{}'.format('|'.join(map(regex_and, keywords))), regex=True)
true_datas = texts[bl]
n = len(true_datas)
false_datas = texts.drop(true_datas.index).sample(n=n, random_state=10)
x = pd.concat([true_datas, false_datas]).map(tp.norm_wakati)
y = np.array([1]*n + [0]*n)
return x, y
from tensorflow.keras.preprocessing.text import Tokenizer, tokenizer_from_json
from tensorflow.keras.preprocessing.sequence import pad_sequences
def pad(x, maxlen):
return pad_sequences(x, maxlen=maxlen, padding='post', truncating='post')
def train(df):
# texts = df['text'].drop_duplicates()
# if not os.path.exists(tokenizer_path):
# tokenizer = Tokenizer()
# tokenizer.fit_on_texts(texts.map(tp.norm_wakati))
# with open(tokenizer_path, 'w', encoding='utf-8') as f:
# f.write(json.dumps(tokenizer.to_json(), ensure_ascii=False))
# with open(tokenizer_path) as f:
# tokenizer = tokenizer_from_json(json.load(f))
# keyword_master = load_keyword_master()
# id2category = dict(zip(category_master['category_id'], category_master['category_name']))
# category_keywords = dict()
# for category_id, df in keyword_master.groupby(['category_id']):
# keywords = df['keyword'].values
# category_keywords[id2category[category_id]] = list(keywords)
# categories = list(category_keywords.keys())
# for category in categories:
# x, y = create_dataset(texts, category_keywords[category])
# x = pad(tokenizer.texts_to_sequences(x), maxlen=100)
# model = create_model(emb_dim=2)
# weights_save_path = f'{project_root}/model/classification_model/{category}.ckpt'
# model.fit(
# x, y,
# epochs=config['epochs'], batch_size=config['batch_size'],
# callbacks=[
# callbacks.EarlyStopping(patience=config['patience']),
# callbacks.ModelCheckpoint(weights_save_path, save_best_only=True, save_weights_only=True)
# ],
# validation_split=0.1
# )
# print(f'{category} end')
texts = df['text'].drop_duplicates()
if not os.path.exists(tokenizer_gen_path):
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts.map(tp.norm_wakati))
with open(tokenizer_gen_path, 'w', encoding='utf-8') as f:
f.write(json.dumps(tokenizer.to_json(), ensure_ascii=False))
with open(tokenizer_gen_path) as f:
tokenizer = tokenizer_from_json(json.load(f))
keyword_master = load_gen_keyword_master()
id2category = dict(zip(gen_category_master['category_id'], gen_category_master['category_name']))
category_keywords = dict()
    # iterate per category without shadowing the `df` argument (a scalar key
    # for groupby also avoids tuple keys under newer pandas)
    for category_id, kw_df in keyword_master.groupby('category_id'):
        keywords = kw_df['keyword'].values
        category_keywords[id2category[category_id]] = list(keywords)
gen_categories = list(category_keywords.keys())
for category in gen_categories:
x, y = create_dataset(texts, category_keywords[category])
x = pad(tokenizer.texts_to_sequences(x), maxlen=40)
model = create_model(emb_dim=10)
weights_save_path = f'{project_root}/model/gen_classification_model/{category}.ckpt'
model.fit(
x, y,
epochs=config['epochs'], batch_size=config['batch_size'],
callbacks=[
callbacks.EarlyStopping(patience=config['patience']),
callbacks.ModelCheckpoint(weights_save_path, save_best_only=True, save_weights_only=True)
],
validation_split=0.1
)
print(f'{category} end')
def infer(df, categories=categories, gen_categories=gen_categories):
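    # 'toppan' appears to flag tweets whose tw_id sits in the very high ID
    # range (>= 9e18); the exact meaning is an assumption from context, and the
    # flag is later stored under the fixed category id '8'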
toppan = (df['tw_id'] >= 9*10**18).map(int).values
texts = df['wakati_text']
with open(tokenizer_path) as f:
tokenizer = tokenizer_from_json(json.load(f))
x = pad(tokenizer.texts_to_sequences(texts), maxlen=100)
print('data prepared!')
category2id = dict(zip(category_master['category_name'], category_master['category_id']))
res = []
n_categories = []
for category in categories:
model = create_model(emb_dim=2)
try:
model.load_weights(f'{project_root}/model/classification_model/{category}.ckpt')
n_categories.append(category)
        except Exception:
            # no trained weights for this category; skip it
            continue
        predicted = np.around(model.predict(x).ravel(), 4)  # predicted probability
res.append(predicted)
print(f'{category} end')
del model
categories = n_categories
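    # serialize one JSON object per row mapping category_id -> {"val": score};
    # the fixed key '8' carries the toppan flag computed above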
    category_score = [
        json.dumps(
            dict(
                [(str(category2id[categories[j]]), {"val": str(res[j][i])})
                 for j in range(len(categories))]
                + [('8', {'val': str(toppan[i])})]
            ),
            ensure_ascii=False
        )
        for i in range(len(res[0]))
    ]
df['category'] = category_score
    '''
    article ("gen") categories
    '''
with open(tokenizer_gen_path) as f:
tokenizer = tokenizer_from_json(json.load(f))
    x = pad(tokenizer.texts_to_sequences(texts), maxlen=40)  # must match the maxlen used in train()
print('data prepared!')
category2id = dict(zip(gen_category_master['category_name'], gen_category_master['category_id']))
res = []
n_categories = []
for category in gen_categories:
model = create_model(emb_dim=10)
try:
model.load_weights(f'{project_root}/model/gen_classification_model/{category}.ckpt')
n_categories.append(category)
        except Exception:
            # no trained weights for this gen category; skip it
            continue
        predicted = np.around(model.predict(x).ravel(), 4)  # predicted probability
res.append(predicted)
print(f'{category} end')
del model
gen_categories = n_categories
    category_score = [
        json.dumps(
            dict(
                [(str(category2id[gen_categories[j]]), {"val": str(res[j][i])})
                 for j in range(len(gen_categories))]
            ),
            ensure_ascii=False
        )
        for i in range(len(res[0]))
    ]
df['gen_category'] = category_score
# from main import update_local_df
def update_past_all():
dba = clsDbAccessor()
df = dba.execQuery("SELECT `tw_id` FROM `tbl_twitters` WHERE proc_flag=1 AND deleted_at IS NULL;")
dba.close()
print(df)
local_df = load_local_df().set_index('tw_id').reset_index()
print(local_df.head())
df = df.merge(local_df, how='left', on='tw_id').set_index('tw_id', drop=False).dropna()
print(df)
infer(df)
print(df.head())
    update_tbl_twitter(df, ['category', 'gen_category'])
if __name__ == '__main__':
# df = pd.read_csv(f'{project_root}/data/tbl_twitter.csv', names=['text'])
# train(df)
update_past_all()
pass | 38.443478 | 216 | 0.662859 | 1,126 | 8,842 | 4.968028 | 0.186501 | 0.050054 | 0.03039 | 0.027887 | 0.659278 | 0.609939 | 0.56221 | 0.552378 | 0.522703 | 0.507687 | 0 | 0.008954 | 0.204252 | 8,842 | 230 | 217 | 38.443478 | 0.7861 | 0.186836 | 0 | 0.354037 | 0 | 0 | 0.139972 | 0.042777 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068323 | false | 0.012422 | 0.031056 | 0.018634 | 0.149068 | 0.074534 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78d1bbfc4644ff6a02d35970a20eee982aa45909 | 7,281 | py | Python | projects/advanced_lane_lines/lane_detector/utils/camera_utils.py | stoicio/RoboCar | 65591e8c217e61d0571df39fe9d9993e5984d8fe | [
"MIT"
] | null | null | null | projects/advanced_lane_lines/lane_detector/utils/camera_utils.py | stoicio/RoboCar | 65591e8c217e61d0571df39fe9d9993e5984d8fe | [
"MIT"
] | null | null | null | projects/advanced_lane_lines/lane_detector/utils/camera_utils.py | stoicio/RoboCar | 65591e8c217e61d0571df39fe9d9993e5984d8fe | [
"MIT"
] | null | null | null | import json
import logging
import os
import cv2
import numpy as np
from tqdm import tqdm
logger = logging.getLogger('CameraUtils')
class CameraCalibration(object):
@staticmethod
def get_image_paths(chessboard_img_dir):
allowed_extensions = ['.jpg', '.png', '.jpeg']
full_image_paths = []
if not os.path.exists(chessboard_img_dir) or os.path.isfile(chessboard_img_dir):
raise ValueError("Chessboard images directory not found")
files_in_dir = os.listdir(chessboard_img_dir)
for _file in files_in_dir:
if os.path.splitext(_file)[-1] in allowed_extensions:
full_image_paths.append(os.path.join(chessboard_img_dir, _file))
else:
logger.info("Skipping {name} - Not an image file".format(name=_file))
if not full_image_paths:
raise RuntimeError("No chessboard images found")
return full_image_paths
    def __init__(self, n_cols=None, n_rows=None, chessboard_img_dir=None,
                 params_load_path=None, store_output_images=False):
        '''
        Args:
            n_cols (int) : Number of inner corners along the horizontal axis
            n_rows (int) : Number of inner corners along the vertical axis
            chessboard_img_dir (str) : directory where the chessboard images are stored
            params_load_path (str) : optional JSON file with previously saved
                camera params; if given, calibration is skipped
            store_output_images (bool) : if True, write images with detected
                corners drawn to an "output" subdirectory
        '''
if params_load_path:
if os.path.exists(params_load_path):
self.load_params_from_file(params_load_path)
logger.info('Camera params loaded and ready to use')
else:
logger.error('Cannot load params from file. Please recalibrate')
raise ValueError('Cannot load params from file. Please recalibrate')
return
if not all([n_cols, n_rows, chessboard_img_dir]):
raise ValueError('Pass in chess board params and location to images')
self.images_dir = chessboard_img_dir
self.image_paths = self.get_image_paths(chessboard_img_dir)
self.pattern_size = (n_cols, n_rows)
self.mtx = None
self.dist = None
self.output_images_path = []
self.failed_images = []
self.__is_calibrated = False
self.__store_output_images = store_output_images
self.__calibrate_camera()
def load_params_from_file(self, json_file_path):
expected_keys = ['mtx', 'dist']
with open(json_file_path, 'r') as fp:
data = json.load(fp)
if not all([k in data.keys() for k in expected_keys]):
raise ValueError('Cannot load camera params. Use a different file or recalibrate')
self.mtx = np.array(data['mtx'])
self.dist = np.array(data['dist'])
self.__is_calibrated = True
def save_params_to_file(self, file_path):
data = {
'mtx': self.mtx.tolist(),
'dist': self.dist.tolist()
}
with open(file_path, 'w') as fp:
json.dump(data, fp)
def __calibrate_camera(self):
# Termination criteria to choose accurate corners. terminate sub-pixel detection
# after 30 iterations or if improvement is less than 0.001
termination_criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# Arrays to store collection of 3d and 2d chessboard corners
chessboard_corners_3d = []
image_points_2d = []
corner_points_3d = np.zeros((self.pattern_size[0] * self.pattern_size[1], 3), np.float32)
# Fill with 3D Coordinates representing the corners in chess board
        # a file-wide "# flake8: noqa" would disable linting for the whole module;
        # suppress only the long-line warning on this line instead
        corner_points_3d[:, :2] = np.mgrid[0:self.pattern_size[0], 0:self.pattern_size[1]].T.reshape(-1, 2)  # noqa: E501
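        # e.g. with a (3, 2) pattern the x,y columns become (0,0), (1,0), (2,0),
        # (0,1), (1,1), (2,1): corner positions on a unit grid, z left at 0
        # because the board is assumed planar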
# if we have to store output images of detected chess boards, create a target folder
output_imgs_dir = os.path.join(self.images_dir, 'output')
if self.__store_output_images and not os.path.exists(output_imgs_dir):
os.makedirs(output_imgs_dir)
for image in tqdm(self.image_paths, desc='Finding chessboard corners'):
img = cv2.imread(image)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            # Find corners - record the image as failed if no chessboard is detected
found_corners, corners = cv2.findChessboardCorners(gray, self.pattern_size,
None,
cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK)
if found_corners:
chessboard_corners_3d.append(corner_points_3d)
accurate_corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
termination_criteria)
image_points_2d.append(accurate_corners)
if self.__store_output_images:
new_img_path = os.path.join(output_imgs_dir, os.path.basename(image))
cv2.drawChessboardCorners(img, self.pattern_size, accurate_corners,
found_corners)
cv2.imwrite(new_img_path, img)
self.output_images_path.append(new_img_path)
else:
logger.debug("Failed to find chessboard in {name}".format(name=image))
self.failed_images.append(image)
        # cv2.calibrateCamera returns the RMS re-projection error first (a float,
        # not a boolean success flag), then the camera matrix and distortion
        # coefficients; per-view rotation/translation vectors are discarded
        (rms_error, self.mtx, self.dist, _, _) = cv2.calibrateCamera(chessboard_corners_3d,
            image_points_2d, gray.shape[::-1], None, None)
        if self.mtx is None:
            raise RuntimeError("Calibration failed! Retry with better chessboard images")
        logger.info(('Successfully calculated camera matrix (RMS error: {rms:.3f}). '
            'Skipped processing {count} images').format(rms=rms_error,
                                                        count=len(self.failed_images)))
        self.__is_calibrated = True
def get_camera_params(self, redo_calibration=False):
if not self.__is_calibrated or redo_calibration:
self.__calibrate_camera()
return (self.mtx, self.dist)
def get_processed_images(self):
'''Returns a list of chessboard images with corners drawn and a list of images
in which corner detection failed
Returns data (dict):
data['output_images'] : list of paths with corners drawn
data['failed_images'] : list of path in which corner detection failed
'''
        if not self.__store_output_images:
            logger.warning(('Output images are not stored. To write output images, '
                            'set "store_output_images=True" during init'))
return {
'output_images': self.output_images_path,
'failed_images': self.failed_images
}
def undistort_image(self, image):
        '''Takes a numpy array representing an image, or a string pointing to an
        image path, and undistorts it using the calibrated camera matrix and
        distortion coefficients'''
if not self.__is_calibrated:
self.__calibrate_camera()
img_data = cv2.imread(image) if isinstance(image, str) else image
return cv2.undistort(img_data, self.mtx, self.dist, None, self.mtx)
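# Minimal usage sketch (directory, file names and pattern size below are
# illustrative assumptions, not part of this module):
#
#     calib = CameraCalibration(n_cols=9, n_rows=6,
#                               chessboard_img_dir='camera_cal',
#                               store_output_images=True)
#     calib.save_params_to_file('camera_params.json')
#     undistorted = calib.undistort_image('test_images/test1.jpg')
#
# Later runs can reuse the saved parameters without recalibrating:
#
#     calib = CameraCalibration(params_load_path='camera_params.json')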
| 42.086705 | 124 | 0.618734 | 893 | 7,281 | 4.801792 | 0.255319 | 0.041978 | 0.037313 | 0.016791 | 0.145056 | 0.0625 | 0.019123 | 0 | 0 | 0 | 0 | 0.011954 | 0.299135 | 7,281 | 172 | 125 | 42.331395 | 0.828336 | 0.146958 | 0 | 0.070175 | 0 | 0 | 0.116615 | 0.0041 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070175 | false | 0.008772 | 0.052632 | 0 | 0.175439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
78d4eb146a133aea0467f8cf0a76b0671ef0d2b6 | 5,375 | py | Python | spirou/sandbox/ccf_tools/analyse_TOI736.py | njcuk9999/apero-utils | f77de4c9123874e5bb6ed6bd03a7de3b27057402 | [
"MIT"
] | 2 | 2020-10-08T17:03:45.000Z | 2021-03-09T17:49:44.000Z | spirou/sandbox/ccf_tools/analyse_TOI736.py | njcuk9999/apero-utils | f77de4c9123874e5bb6ed6bd03a7de3b27057402 | [
"MIT"
] | 17 | 2020-09-24T17:35:38.000Z | 2020-12-11T16:10:13.000Z | spirou/sandbox/ccf_tools/analyse_TOI736.py | njcuk9999/apero-utils | f77de4c9123874e5bb6ed6bd03a7de3b27057402 | [
"MIT"
] | 5 | 2020-04-10T06:41:00.000Z | 2020-12-16T21:09:14.000Z | import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table
from bisector import *
from astropy.time import Time
from ccf2rv import *
from per_epoch_table import per_epoch_table
def sinusoidal(phase, dphase, amp, zp):
    return np.sin(phase + dphase) * amp + zp
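# `phase` is already 2*pi*t/period when this is called below, so the model is
# RV(t) = amp * sin(2*pi*t/period + dphase) + zp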
# do not *formally* exclude an order, but this is done later with the bandpass keyword
exclude_orders = [28,47,48]
object = 'TOI-736'
mask = 'gl699_neg'
method = 'all'
sanitize = True
tbl, dico = get_object_rv(object, mask=mask,
                          method=method, force=True,
                          exclude_orders=exclude_orders,
                          snr_min=20.0, velocity_window=20, sanitize=sanitize,
                          dvmax_per_order=500.0, bandpass='H',
                          doplot=True, do_blacklist=True,
                          detailed_output=True,
                          sed_match=False)
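# Shift each epoch's CCF to the rest frame (the velocity grid step is assumed
# to be 0.1 km/s, hence the rv*10 sample conversion), subtract the mean
# profile, then shift back: ccf2 ends up holding per-epoch CCF residuals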
rv = np.array(tbl['RV'])
rv -= np.mean(rv)
ccf = np.array(dico['MEAN_CCF'])
ccf2 = np.array(ccf)
n_spec = ccf2.shape[1]  # number of spectra (was hard-coded as 34)
for i in range(n_spec):
    ccf2[:, i] = np.roll(ccf2[:, i], int(-rv[i] * 10))
moy = np.mean(ccf2, axis=1)
for i in range(n_spec):
    ccf2[:, i] -= moy
for i in range(n_spec):
    ccf2[:, i] = np.roll(ccf2[:, i], int(rv[i] * 10))
damps = np.arange(10,55,0.1)
all_ccs = np.zeros([ccf2.shape[0],len(damps)])
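# Scan a range of amplification factors applied to each epoch's RV shift and
# stack the shifted residual CCFs; how the stack sharpens with the factor is
# used here as a line-shape diagnostic (interpretation assumed from context)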
for ite in range(len(damps)):
    print(ite)
    ccf3 = np.zeros_like(ccf2)
    for i in range(n_spec):
        ccf3[:, i] = np.roll(ccf2[:, i], int(damps[ite] * rv[i] * 10))
    all_ccs[:, ite] = np.nanmean(ccf3, axis=1)
plt.plot(dico['ccf_RV'],moy)
plt.show()
plt.plot(damps,all_ccs[np.argmin(moy),:])
plt.show()
plt.imshow(all_ccs/np.std(all_ccs),aspect = 'auto',extent = [np.min(damps),np.max(damps),np.min(dico['ccf_RV']),np.max(dico['ccf_RV'])])
plt.show()
# period for the sinusoidal curve
period = 14.4
# create the per-epoch (binned) table; points more than nMAD_cut median
# absolute deviations from their epoch are flagged as discrepant
tbl_bin = per_epoch_table(tbl, nMAD_cut=5)
# get time stamps friendly for plotting
t2 = Time(tbl_bin['MJDATE_MEAN'], format = 'mjd')
t3 = Time(tbl['MJDATE'], format = 'mjd')
# get phase for sine fitting
phase_bin = 2*np.pi*tbl_bin['MJDATE_MEAN']/period
phase = 2*np.pi*tbl['MJDATE']/period
# fit sinusoid
fit, pcov = curve_fit(sinusoidal, phase_bin, tbl_bin['RV'])
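# curve_fit returns the best-fit parameters in the order of sinusoidal()'s
# signature after phase: fit = (dphase, amp, zp); pcov is their covariance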
# some plotting fiddling
dt = np.max(tbl_bin['MJDATE_MEAN']) - np.min(tbl_bin['MJDATE_MEAN'])
time_plot = np.arange(np.min(tbl_bin['MJDATE_MEAN'])-dt/10,np.max(tbl_bin['MJDATE_MEAN'])+dt/10,dt/1000)
phase_plot = 2*np.pi*time_plot/period
model_bin = sinusoidal(phase_bin,*fit)
model= sinusoidal(phase,*fit)
model_plot = sinusoidal(phase_plot,*fit)
print('Amplitude of the sinusoidal at {0} days: {1:.2f} m/s'.format(period, 1000*fit[1]))
print('Mean velocity: {0:.2f} m/s'.format(1000*fit[2]))
print('Mean/Median per-epoch ERROR_RV {0}/{1} km/s'.format(np.mean(tbl_bin["ERROR_RV"]),
                                                           np.median(tbl_bin["ERROR_RV"])))
fig, ax = plt.subplots(nrows = 2, ncols = 1,sharex = True, figsize = (14,8))
ax[0].plot_date(t2.plot_date, tbl_bin['RV'], 'g.')
for i in range(len(t2)):
    # draw the per-epoch error bars by hand
    ax[0].plot_date([t2[i].plot_date, t2[i].plot_date],
                    [tbl_bin['RV'][i] - tbl_bin['ERROR_RV'][i],
                     tbl_bin['RV'][i] + tbl_bin['ERROR_RV'][i]], 'g')
ax[0].plot_date(t3.plot_date,tbl['RV'],'r.',alpha = 0.5)
ax[1].errorbar(t3.plot_date,tbl['RV'] - model,yerr=tbl['ERROR_RV'], linestyle="None",
fmt='o',color = 'green', alpha = 0.2, label = 'Individual measurements')
ax[0].plot(Time(time_plot, format = 'mjd').plot_date,model_plot,'r:')
ax[0].set(ylabel = 'Velocity [km/s]',title = object)
ax[1].errorbar(t2.plot_date, tbl_bin['RV'] - model_bin, yerr=tbl_bin['ERROR_RV'],
linestyle="None", fmt='o',
alpha = 0.5, capsize = 2, color = 'black',label = 'Epoch mean')
ax[1].legend()
ax[1].plot(Time(time_plot, format = 'mjd').plot_date,np.zeros(len(time_plot)),'r:')
ax[1].set(xlabel = 'Date', ylabel = 'Residuals [km/s]',ylim = [-.15,0.15],
xlim = [np.min(Time(time_plot, format = 'mjd').plot_date),
np.max(Time(time_plot, format = 'mjd').plot_date)]
)
for label in ax[1].get_xticklabels():
label.set_rotation(25)
label.set_ha('right')
plt.tight_layout()
plt.savefig(object+'.pdf')
plt.show()
sigma = np.std((tbl_bin['RV'] - model_bin))
mean_error = np.mean(tbl_bin['ERROR_RV'])
median_error = np.nanmedian(tbl_bin['ERROR_RV'])
# std of the error-normalised residuals, quoted below as a reduced-chi^2 proxy
reduced_chi2 = np.std((tbl_bin['RV'] - model_bin)/tbl_bin['ERROR_RV'])
print('\n--- values for the per-night weighted-mean points ---\n')
print(' mean ERROR_RV {0:.2f} m/s, median ERROR_RV {1:.2f} m/s, '
'reduced chi2 {2:.2f} '.format(mean_error*1e3, median_error*1e3, reduced_chi2))
mean_error = np.mean(tbl['ERROR_RV'])
median_error = np.nanmedian(tbl['ERROR_RV'])
print('\n--- values for the individual points ---\n')
print(' mean ERROR_RV {0:.2f} m/s, median ERROR_RV {1:.2f} m/s'.format( mean_error*1e3,median_error*1e3))
# create an observation log in tex format
# Nice when you want to write a paper in the end, hey, that's the point of all these observations!
# (the output name was hard-coded to 'TOI1278_obslog.tex', presumably left over
# from another target; derive it from the object name instead)
f = open('{0}_obslog.tex'.format(object.replace('-', '')), 'w')
for i in range(len(tbl)):
f.write('{0:.4f} & ${1:.3f} \pm {2:.3f}$ & {3:.3f} \\\\ \n'.format(tbl['MJDATE'][i],tbl['RV'][i], tbl['ERROR_RV'][i],tbl['D2_RESIDUAL_CCF'][i]))
f.close() | 35.361842 | 148 | 0.639814 | 903 | 5,375 | 3.672204 | 0.264673 | 0.039807 | 0.026538 | 0.031363 | 0.304885 | 0.264777 | 0.188782 | 0.108263 | 0.063932 | 0.05006 | 0 | 0.036679 | 0.173209 | 5,375 | 152 | 149 | 35.361842 | 0.709496 | 0.088372 | 0 | 0.076923 | 0 | 0.028846 | 0.166053 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009615 | false | 0.009615 | 0.067308 | 0.009615 | 0.086538 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |