hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ffb567f0688a716f16f90345a882a5dc8d44737d | 4,025 | py | Python | language/xsp/data_preprocessing/language_utils.py | Xtuden-com/language | 70c0328968d5ffa1201c6fdecde45bbc4fec19fc | [
"Apache-2.0"
] | 1,199 | 2018-10-16T01:30:18.000Z | 2022-03-31T21:05:24.000Z | language/xsp/data_preprocessing/language_utils.py | Xtuden-com/language | 70c0328968d5ffa1201c6fdecde45bbc4fec19fc | [
"Apache-2.0"
] | 116 | 2018-10-18T03:31:46.000Z | 2022-03-24T13:40:50.000Z | language/xsp/data_preprocessing/language_utils.py | Xtuden-com/language | 70c0328968d5ffa1201c6fdecde45bbc4fec19fc | [
"Apache-2.0"
] | 303 | 2018-10-22T12:35:12.000Z | 2022-03-27T17:38:17.000Z | # coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utilities for processing and storing the natural langauge input."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
class Wordpiece(object):
"""Contains wordpiece information as a substring of an utterance."""
def __init__(self,
wordpiece=None,
tokenized_index=None,
span_start_index=None,
span_end_index=None,
matches_to_schema=None):
self.wordpiece = wordpiece
self.tokenized_index = tokenized_index
self.span_start_index = span_start_index
self.span_end_index = span_end_index
self.matches_to_schema = matches_to_schema
def to_json(self):
return {
'wordpiece': self.wordpiece,
'tokenized_index': self.tokenized_index,
'span_start_index': self.span_start_index,
'span_end_index': self.span_end_index,
'matches_to_schema': self.matches_to_schema
}
def from_json(self, dictionary):
self.wordpiece = dictionary['wordpiece']
self.tokenized_index = dictionary['tokenized_index']
self.span_start_index = dictionary['span_start_index']
self.span_end_index = dictionary['span_end_index']
self.matches_to_schema = dictionary['matches_to_schema']
return self
def get_wordpieces(sequence, tokenizer, schema_entities=None):
"""Sets the wordpieces for a NLToSQLExample."""
# First, it finds exact-string alignment between schema entities and the
# utterance.
aligned_entities = set()
aligned_chars = [False for _ in range(len(sequence))]
if schema_entities:
for schema_entity in sorted(schema_entities, key=len, reverse=True):
if schema_entity in sequence.lower():
aligned_entities.add(schema_entity)
start_idx = sequence.lower().index(schema_entity)
for i in range(start_idx, start_idx + len(schema_entity)):
aligned_chars[i] = True
# Get the spans for the wordpieces
wordpieces = tokenizer.tokenize(sequence)
original_seq_index = 0
wordpieces_with_spans = list()
for i, wordpiece in enumerate(wordpieces):
search_wordpiece = wordpiece
if wordpiece.startswith('#'):
search_wordpiece = wordpiece[2:]
# It will be a substring of the lowered original sequence.
found = True
while (sequence.lower()[original_seq_index:original_seq_index +
len(search_wordpiece)] != search_wordpiece):
original_seq_index += 1
if original_seq_index + len(search_wordpiece) > len(sequence):
found = False
break
span_start = original_seq_index
span_end = original_seq_index + len(search_wordpiece) # Not inclusive!
if not found:
raise ValueError('Span not found! \nWordpiece: ' + wordpiece +
'\nSequence: ' + sequence)
if sequence.lower()[span_start:span_end] != search_wordpiece:
raise ValueError('Found span did not match!\nWordpiece: ' + wordpiece +
'\nSpan: ' + sequence.lower()[span_start:span_end])
# See if the span start/end align at all with the aligned chars
aligned = False
for j in range(span_start, span_end):
if aligned_chars[j]:
aligned = True
break
wordpiece = Wordpiece(wordpiece, i, span_start, span_end, aligned)
wordpieces_with_spans.append(wordpiece)
return wordpieces_with_spans, aligned_entities
| 35.619469 | 77 | 0.704596 | 518 | 4,025 | 5.233591 | 0.30695 | 0.043158 | 0.036149 | 0.023608 | 0.151605 | 0.142752 | 0.045002 | 0 | 0 | 0 | 0 | 0.003791 | 0.213665 | 4,025 | 112 | 78 | 35.9375 | 0.852765 | 0.249938 | 0 | 0.028571 | 0 | 0 | 0.077078 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.042857 | 0.014286 | 0.157143 | 0.014286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ffb64699aee68caa91b512adb859b23ae28d1500 | 13,270 | py | Python | back/api/tests/test_pricing.py | maltaesousa/geoshop2 | 624c6d79b5a29b39a898e0d1332fb8de23bd96e4 | [
"BSD-3-Clause"
] | null | null | null | back/api/tests/test_pricing.py | maltaesousa/geoshop2 | 624c6d79b5a29b39a898e0d1332fb8de23bd96e4 | [
"BSD-3-Clause"
] | 155 | 2020-01-06T09:32:32.000Z | 2022-03-31T09:21:39.000Z | back/api/tests/test_pricing.py | maltaesousa/geoshop2 | 624c6d79b5a29b39a898e0d1332fb8de23bd96e4 | [
"BSD-3-Clause"
] | 3 | 2020-01-29T15:48:02.000Z | 2020-06-04T12:50:24.000Z | from itertools import islice
from django.core import mail
from django.contrib.auth import get_user_model
from django.contrib.gis.geos import Polygon, Point
from djmoney.money import Money
from rest_framework.test import APITestCase
from api.models import Contact, Pricing, Product, PricingGeometry, Order, OrderItem, OrderType
UserModel = get_user_model()
class PricingTests(APITestCase):
"""
Test Pricings
"""
def setUp(self):
self.user_private = UserModel.objects.create_user(
username='rincevent', password='rincevent')
self.user_private.identity.email = 'admin@admin.com'
self.user_private.identity.save()
self.base_fee = Money(50, 'CHF')
self.unit_price = Money(150, 'CHF')
self.pricings = Pricing.objects.bulk_create([
Pricing(
name="Gratuit", # 0
pricing_type="FREE"),
Pricing(
name="Forfait", # 1
pricing_type="SINGLE",
unit_price=self.unit_price,
base_fee=Money(20, 'CHF')),
Pricing(
name="Par nombre d'objets", # 2
pricing_type="BY_NUMBER_OBJECTS",
unit_price=Money(1, 'CHF'),
max_price=Money(250, 'CHF')),
Pricing(
name="Par surface", # 3
pricing_type="BY_AREA",
unit_price=self.unit_price,
base_fee=Money(50, 'CHF')),
Pricing(
name="Par couche géométrique", # 4
pricing_type="FROM_PRICING_LAYER"),
Pricing(
name="Devis manuel", # 5
pricing_type="MANUAL"),
Pricing(
name="Style de prix non connu de l'application", #6
pricing_type="YET_UNKNOWN_PRICING"),
Pricing(
name="Prix selon ce qu'il y a dans le groupe", #7
pricing_type="FROM_CHILDREN_OF_GROUP")
])
self.products = Product.objects.bulk_create([
Product(
label="Produit gratuit",
pricing=self.pricings[0]),
Product(
label="Produit forfaitaire",
pricing=self.pricings[1]),
Product(
label="Bâtiments 3D",
pricing=self.pricings[2]),
Product(
label="Produit vendu au m²",
pricing=self.pricings[3]),
Product(
label="MO",
pricing=self.pricings[4]),
Product(
label="Maquette 3D",
pricing=self.pricings[5]),
Product(
label="Produit facturé au Mb (non implémenté)",
pricing=self.pricings[6]),
])
self.order_geom = Polygon((
(2528577, 1193422),
(2542482, 1193422),
(2542482, 1199018),
(2528577, 1199018),
(2528577, 1193422)
))
self.pricing_area1 = PricingGeometry.objects.create(
unit_price=Money(2, 'CHF'),
pricing=self.pricings[4],
geom=Polygon((
(2537498, 1210000),
(2533183, 1180000),
(2520000, 1180000),
(2520000, 1210000),
(2537498, 1210000)
))
)
self.pricing_area2 = PricingGeometry.objects.create(
unit_price=Money(4, 'CHF'),
pricing=self.pricings[4],
geom=Polygon((
(2533183, 1180000),
(2537498, 1210000),
(2550000, 1210000),
(2550000, 1180000),
(2533183, 1180000)
))
)
# 4 points are located in order geom
self.number_of_objects = 4
self.building_pricing_geometry = PricingGeometry.objects.bulk_create([
PricingGeometry(
geom=Point(2559661.132097245, 1205773.4376192095)
),
PricingGeometry(
geom=Point(2554387.694597245, 1205539.0626192095)
),
PricingGeometry(
geom=Point(2557786.132097245, 1203781.2501192095)
),
PricingGeometry(
geom=Point(2533265.624642372, 1196165.5274033546)
),
PricingGeometry(
geom=Point(2534378.905892372, 1195403.8086533546)
),
PricingGeometry(
geom=Point(2535052.734017372, 1195081.5430283546)
),
PricingGeometry(
geom=Point(2536312.499642372, 1196341.3086533546)
)
])
self.orderTypePrivate = OrderType.objects.create(
name="Privé",
)
self.orderTypePublic = OrderType.objects.create(
name="Communal",
)
self.order = Order.objects.create(
client=self.user_private,
order_type=self.orderTypePrivate,
title="Test pricing order",
geom=self.order_geom
)
for geom in self.building_pricing_geometry:
geom.pricing = Pricing.objects.filter(
name="Par nombre d'objets").first()
geom.save()
def test_free_price(self):
free_price = self.products[0].pricing.get_price(self.order_geom)
self.assertEqual(free_price[0], 0)
def test_single_price(self):
single_price = self.products[1].pricing.get_price(self.order_geom)
self.assertEqual(single_price[0], self.unit_price)
def test_by_object_price(self):
by_object_price = self.products[2].pricing.get_price(self.order_geom)
self.assertGreater(by_object_price[0], Money(0, 'CHF'))
self.assertEqual(by_object_price[0],
self.number_of_objects * Money(1, 'CHF'))
def test_by_area_price(self):
by_area_price = self.products[3].pricing.get_price(self.order_geom)
expected_price = (self.order_geom.area / 10000) * self.unit_price
self.assertEqual(by_area_price[0], expected_price)
def test_from_pricing_layer_price(self):
from_pricing_layer_price = self.products[4].pricing.get_price(self.order_geom)
pricing_part1 = self.pricing_area1.geom.intersection(self.order_geom)
pricing_part2 = self.pricing_area2.geom.intersection(self.order_geom)
expected_price_part1 = (
pricing_part1.area / 10000) * self.pricing_area1.unit_price
expected_price_part2 = (
pricing_part2.area / 10000) * self.pricing_area2.unit_price
self.assertEqual(
from_pricing_layer_price[0].round(2),
(expected_price_part1 + expected_price_part2).round(2))
def test_manual_price(self):
manual_price = self.products[5].pricing.get_price(self.order_geom)
self.assertIsNone(manual_price[0], 'Manual price has None price when pricing is called')
order_item = OrderItem.objects.create(
order=self.order,
product=self.products[5]
)
self.order.order_type = self.orderTypePrivate
self.order.save()
self.assertEqual(self.order.status, Order.OrderStatus.DRAFT)
        self.assertEqual(order_item.price_status, OrderItem.PricingStatus.PENDING, 'pricing status stays pending')
        # Client asks for a quote because order item pricing status is PENDING
self.order.confirm()
# An email is sent to admins, asking them to set a manual price
self.assertEqual(len(mail.outbox), 1, 'An email has been sent to admins')
self.assertEqual(self.order.status, Order.OrderStatus.PENDING, 'Order status is now pending')
# An admin sets price manually, this is normally done in admin interface
order_item.set_price(
price=self.unit_price,
base_fee=self.base_fee,
)
order_item.save()
# The admin confirms he's done with the quote
self.order.quote_done()
self.assertEqual(len(mail.outbox), 2, 'An email has been sent to the client')
self.assertEqual(self.order.status, Order.OrderStatus.QUOTE_DONE, 'Order status has quote done')
self.assertEqual(order_item.price_status, OrderItem.PricingStatus.CALCULATED, 'Price is calculated')
self.order.confirm()
self.assertEqual(self.order.status, Order.OrderStatus.READY, 'Order is ready for Extract')
def test_undefined_price(self):
undefined_price = self.products[6].pricing.get_price(self.order_geom)
self.assertIsNone(undefined_price[0])
self.assertLogs('pricing', level='ERROR')
self.assertEqual(len(mail.outbox), 1, 'An email has been sent to admins')
def test_base_fee(self):
orderitem1 = OrderItem.objects.create(
order=self.order,
product=self.products[1]
)
orderitem2 = OrderItem.objects.create(
order=self.order,
product=self.products[3]
)
self.order.order_type = self.orderTypePrivate
self.order.save()
orderitem1.set_price()
orderitem1.save()
orderitem2.set_price()
orderitem2.save()
self.order.set_price()
self.assertEqual(self.order.processing_fee, Money(50, 'CHF'), 'Base fee is correct')
def test_user_subscribed_to_product(self):
self.user_private.identity.subscribed = True
self.user_private.identity.save()
self.products[3].free_when_subscribed = True
self.products[3].save()
orderitem2 = OrderItem.objects.create(
order=self.order,
product=self.products[3]
)
self.order.order_type = self.orderTypePrivate
self.order.save()
orderitem2.set_price()
orderitem2.save()
self.order.set_price()
self.assertEqual(self.order.processing_fee, Money(0, 'CHF'), 'Processing fee is free')
self.assertEqual(self.order.total_with_vat, Money(0, 'CHF'), 'Order is free')
def test_invoice_contact_subscribed_to_product(self):
self.assertFalse(self.user_private.identity.subscribed)
self.products[3].free_when_subscribed = True
self.products[3].save()
orderitem2 = OrderItem.objects.create(
order=self.order,
product=self.products[3]
)
contact = Contact.objects.create(
first_name='Jean',
last_name='Doe',
email='test3@admin.com',
postcode=2000,
city='Lausanne',
country='Suisse',
belongs_to=self.user_private,
subscribed=True
)
self.order.invoice_contact = contact
self.order.order_type = self.orderTypePrivate
self.order.save()
orderitem2.set_price()
orderitem2.save()
self.order.set_price()
self.assertEqual(self.order.processing_fee, Money(0, 'CHF'), 'Processing fee is free')
self.assertEqual(self.order.total_with_vat, Money(0, 'CHF'), 'Order is free')
def test_public_order_is_free(self):
orderitem2 = OrderItem.objects.create(
order=self.order,
product=self.products[3]
)
self.order.order_type = self.orderTypePublic
self.order.save()
orderitem2.set_price()
orderitem2.save()
self.order.set_price()
self.assertEqual(self.order.processing_fee, Money(0, 'CHF'), 'Processing fee is free')
self.assertEqual(self.order.total_with_vat, Money(0, 'CHF'), 'Order is free')
def test_max_price_needs_manual_quote(self):
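        # Bulk-create enough priced points inside the order geometry that the
        # per-object price exceeds max_price (250 CHF), forcing a manual quote.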
number_of_objects = 251
bbox = self.order_geom.extent
objs = (
PricingGeometry(geom=Point(x, y), pricing=self.pricings[2]) for x, y in zip(
range(int(bbox[0]), int(bbox[2])), range(int(bbox[1]), int(bbox[3]))
)
)
while True:
batch = list(islice(objs, number_of_objects))
if not batch:
break
PricingGeometry.objects.bulk_create(batch, number_of_objects)
by_object_price = self.products[2].pricing.get_price(self.order_geom)
self.assertIsNone(by_object_price[0], 'Price is None because max_price reached')
def test_from_children_of_group_price(self):
group_product = Product.objects.create(
label="Réseau d'eau",
pricing=self.pricings[7],
)
self.products[0].group = group_product
self.products[1].group = group_product
self.products[2].group = group_product
self.products[0].save()
self.products[1].save()
self.products[2].save()
orderitem1 = OrderItem.objects.create(
order=self.order,
product=group_product
)
self.order.order_type = self.orderTypePrivate
self.order.save()
orderitem1.set_price()
orderitem1.save()
self.order.set_price()
self.assertEqual(self.order.processing_fee, Money(20, 'CHF'), 'Base fee is correct')
self.assertEqual(self.order.total_without_vat,
self.number_of_objects * Money(1, 'CHF') + Money(150, 'CHF') + Money(20, 'CHF'))
| 39.029412 | 115 | 0.595554 | 1,470 | 13,270 | 5.213605 | 0.187755 | 0.065762 | 0.023747 | 0.04071 | 0.419494 | 0.355037 | 0.32333 | 0.283142 | 0.226905 | 0.205898 | 0 | 0.064155 | 0.299925 | 13,270 | 339 | 116 | 39.144543 | 0.760818 | 0.023361 | 0 | 0.35082 | 0 | 0 | 0.080189 | 0.001701 | 0 | 0 | 0 | 0 | 0.095082 | 1 | 0.045902 | false | 0.003279 | 0.022951 | 0 | 0.072131 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ffb9d19110d417666cdbb685244d2f511866f9ea | 2,442 | py | Python | app/searchtweet/searchtweet.py | winsb/SearchTweet | fe5e2760fda8fd563210d71293aef16d2cc90d7e | [
"MIT"
] | null | null | null | app/searchtweet/searchtweet.py | winsb/SearchTweet | fe5e2760fda8fd563210d71293aef16d2cc90d7e | [
"MIT"
] | null | null | null | app/searchtweet/searchtweet.py | winsb/SearchTweet | fe5e2760fda8fd563210d71293aef16d2cc90d7e | [
"MIT"
] | null | null | null | # searchtweet.py
import json
from requests_oauthlib import OAuth1Session
from . import tweetdata
# const
TWITTER_API_URL_PREFIX = "https://api.twitter.com/1.1/"
# SearchTweet class
class SearchTweet:
def __init__(self, consumer_key, consumer_secret, access_token, access_token_secret):
self._twitter_oauth = OAuth1Session(consumer_key, consumer_secret, access_token, access_token_secret)
def _request(self, url, params, log=False):
request = self._twitter_oauth.get(url, params=params)
if log:
print(request.url)
if request.status_code != 200:
print("Error : Http Status Code " + request.status_code)
return None
return json.loads(request.text)
def get_home_timeline(self, count=50, exclude_replies=False):
# url
url = TWITTER_API_URL_PREFIX + "statuses/home_timeline.json"
# params
params = dict()
params.setdefault("count", count)
params.setdefault("exclude_replies", exclude_replies)
# request
timeline = self._request(url, params, log=True)
if timeline is None:
print("Error : Not get home timeline")
return None
# convert
return [tweetdata.TweetData(user_name=tweet["user"]["name"],
user_account=tweet["user"]["screen_name"],
date=tweet["created_at"],
text=tweet["text"]) for tweet in timeline]
def get_user_timeline(self, screen_name, count=50, exclude_replies=False, include_rts=True):
# url
url = TWITTER_API_URL_PREFIX + "statuses/user_timeline.json"
# params
params = dict()
params.setdefault("screen_name", screen_name)
params.setdefault("count", count)
params.setdefault("exclude_replies", exclude_replies)
params.setdefault("include_rts", include_rts)
# request
timeline = self._request(url, params, log=True)
if timeline is None:
print("Error : Not get user timeline")
return None
# convert
return [tweetdata.TweetData(user_name=tweet["user"]["name"],
user_account=tweet["user"]["screen_name"],
date=tweet["created_at"],
text=tweet["text"]) for tweet in timeline]
| 34.394366 | 109 | 0.601556 | 267 | 2,442 | 5.280899 | 0.254682 | 0.059574 | 0.02766 | 0.040426 | 0.625532 | 0.588652 | 0.588652 | 0.49078 | 0.49078 | 0.415603 | 0 | 0.006433 | 0.299754 | 2,442 | 70 | 110 | 34.885714 | 0.818129 | 0.037674 | 0 | 0.488372 | 0 | 0 | 0.128743 | 0.023097 | 0 | 0 | 0 | 0 | 0 | 1 | 0.093023 | false | 0 | 0.069767 | 0 | 0.325581 | 0.093023 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4401734f9b9049dfaf10b947e8ca78dc704dd051 | 1,329 | py | Python | core/realm_cmd.py | pg83/mix | 1aa964214a239bb80b3a2fa408551929b6b77acc | [
"MIT"
] | 12 | 2021-12-04T09:38:50.000Z | 2022-03-22T16:27:30.000Z | core/realm_cmd.py | apatrushev/mix | 754fb2f7f308ad8285953aab9c4eba218968c0d4 | [
"MIT"
] | 1 | 2022-02-15T23:16:32.000Z | 2022-02-15T23:16:32.000Z | core/realm_cmd.py | apatrushev/mix | 754fb2f7f308ad8285953aab9c4eba218968c0d4 | [
"MIT"
] | 1 | 2022-02-08T18:57:50.000Z | 2022-02-08T18:57:50.000Z | import core.utils as cu
import core.manager as cm
import core.cmd_line as cc
def parse_args(ctx):
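    # The first positional argument names the realm; the remaining args are
    # passed on to cc.parse_pkgs to resolve the config and package list.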
class Args:
def __init__(self):
args = ctx['args']
self.realm = args[0]
ctx['args'] = args[1:]
self.config, self.pkgs = cc.parse_pkgs(ctx)
return Args()
def cli_realm_add(ctx):
args = parse_args(ctx)
cm.Manager(args.config).ensure_realm(args.realm).add(args.pkgs).install()
def cli_realm_remove(ctx):
args = parse_args(ctx)
cm.Manager(args.config).ensure_realm(args.realm).remove(args.pkgs).install()
def cli_realm_upgrade(ctx):
cu.step('start upgrade')
mngr = cm.Manager(cc.config_from(ctx))
def iter_realms():
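        # Upgrade only the realms named on the command line, or every known realm when none are given.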
if ctx['args']:
yield from ctx['args']
else:
yield from mngr.list_realms()
for r in iter_realms():
cu.step('realm start')
mngr.load_realm(r).upgrade().install()
cu.step('realm end')
def cli_realm_list(ctx):
mngr = cm.Manager(cc.config_from(ctx))
if ctx['args']:
for a in ctx['args']:
for x in mngr.load_realm(a).pkgs:
print(x)
else:
for r in mngr.list_realms():
print(r)
def cli_realm_purge(ctx):
cm.Manager(cc.config_from(ctx)).load_realm(ctx['args'][0]).uninstall()
| 22.15 | 80 | 0.598194 | 195 | 1,329 | 3.917949 | 0.251282 | 0.082461 | 0.07199 | 0.066754 | 0.324607 | 0.324607 | 0.225131 | 0.151832 | 0.151832 | 0.151832 | 0 | 0.003058 | 0.261851 | 1,329 | 59 | 81 | 22.525424 | 0.775739 | 0 | 0 | 0.2 | 0 | 0 | 0.045899 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.075 | 0 | 0.325 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4401e3beac1ac62391add8303cda7c0a7de04f42 | 476 | py | Python | hash_table/0500_keyboard_row/0500_keyboard_row.py | zdyxry/LeetCode | 33371285d0f3302158230f46e8b1b63b9f4639c4 | [
"Xnet",
"X11"
] | 6 | 2019-09-16T01:50:44.000Z | 2020-09-17T08:52:25.000Z | hash_table/0500_keyboard_row/0500_keyboard_row.py | zdyxry/LeetCode | 33371285d0f3302158230f46e8b1b63b9f4639c4 | [
"Xnet",
"X11"
] | null | null | null | hash_table/0500_keyboard_row/0500_keyboard_row.py | zdyxry/LeetCode | 33371285d0f3302158230f46e8b1b63b9f4639c4 | [
"Xnet",
"X11"
] | 4 | 2020-02-07T12:43:16.000Z | 2021-04-11T06:38:55.000Z | from typing import List
class Solution:
def findWords(self, words: List[str]) -> List[str]:
set1 = set('qwertyuiop')
set2 = set('asdfghjkl')
set3 = set('zxcvbnm')
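        # A word can be typed on a single keyboard row iff its letters form a subset of that row's character set.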
res = []
for i in words:
x = i.lower()
setx = set(x)
if setx<=set1 or setx<=set2 or setx<=set3:
res.append(i)
return res
words = ["Hello","Alaska","Dad","Peace"]
res = Solution().findWords(words)
print(res) | 25.052632 | 55 | 0.52521 | 59 | 476 | 4.237288 | 0.576271 | 0.056 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01875 | 0.327731 | 476 | 19 | 56 | 25.052632 | 0.7625 | 0 | 0 | 0 | 0 | 0 | 0.09434 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.25 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
440897e459f198c3b14d3d46c61083b908043a45 | 547 | py | Python | common/data_refinery_common/constants.py | erflynn/refinebio | 4ead4082a6b98f7fc8cffdc62c4394338a577f3d | [
"BSD-3-Clause"
] | 106 | 2018-03-05T16:24:47.000Z | 2022-03-19T19:12:25.000Z | common/data_refinery_common/constants.py | erflynn/refinebio | 4ead4082a6b98f7fc8cffdc62c4394338a577f3d | [
"BSD-3-Clause"
] | 1,494 | 2018-02-27T17:02:21.000Z | 2022-03-24T15:10:30.000Z | common/data_refinery_common/constants.py | erflynn/refinebio | 4ead4082a6b98f7fc8cffdc62c4394338a577f3d | [
"BSD-3-Clause"
] | 15 | 2019-02-03T01:34:59.000Z | 2022-03-29T01:59:13.000Z | from data_refinery_common.utils import get_env_variable
LOCAL_ROOT_DIR = get_env_variable("LOCAL_ROOT_DIR", "/home/user/data_store")
# We store what salmon outputs as its version, therefore for
# comparisons or defaults we shouldn't just store the version string,
# we need something with the pattern: 'salmon X.X.X'
CURRENT_SALMON_VERSION = "salmon " + get_env_variable("SALMON_VERSION", "0.13.1")
CHUNK_SIZE = 1024 * 256 # chunk_size is in bytes
# Let this fail if SYSTEM_VERSION is unset.
SYSTEM_VERSION = get_env_variable("SYSTEM_VERSION")
| 49.727273 | 81 | 0.789762 | 89 | 547 | 4.595506 | 0.606742 | 0.05868 | 0.136919 | 0.09291 | 0.127139 | 0.127139 | 0 | 0 | 0 | 0 | 0 | 0.023013 | 0.126143 | 547 | 10 | 82 | 54.7 | 0.832636 | 0.444241 | 0 | 0 | 0 | 0 | 0.255034 | 0.07047 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4409126fcb48835e2b94769b277223312da2de0b | 4,680 | py | Python | db.py | seasidefm/botsuro-twitch | de87be41c0ea3b57816eda89cc3fea1169778343 | [
"Apache-2.0"
] | null | null | null | db.py | seasidefm/botsuro-twitch | de87be41c0ea3b57816eda89cc3fea1169778343 | [
"Apache-2.0"
] | null | null | null | db.py | seasidefm/botsuro-twitch | de87be41c0ea3b57816eda89cc3fea1169778343 | [
"Apache-2.0"
] | null | null | null | import datetime
import os
import string
import time
from bson.json_util import dumps
from json import loads
from pymongo import MongoClient
from utils import SongRequest, UserPayload
TEMP_MOVIE_DETAILS = """
Title: Maison Ikkoku |
Synopsis: Maison Ikkoku is a bitter-sweet romantic comedy involving a group of
madcap people who live in a boarding house in 1980s Tokyo. The story focuses
primarily on the gradually developing relationships between Yusaku Godai, a poor
student down on his luck, and Kyoko Otonashi, a young, recently widowed boarding
house manager.
"""
class DB:
"""DB Class
A convenience wrapper for interacting with the database
"""
def __init__(self):
connection_string = os.environ['MONGO_CONNECTION']
if not connection_string:
raise EnvironmentError("MONGO_CONNECTION missing in env!")
self.mongo = MongoClient(connection_string)
self.db = self.mongo.get_default_database()
self.collection = self.db.get_collection('requests')
print("> DB ready for commands")
@staticmethod
async def get_video_title():
return TEMP_MOVIE_DETAILS
async def save_request(self, req: SongRequest):
result = self.collection.insert_one(req)
print(f"Saved request from {req['user']} -> '{req['artist']} - {req['song_title']}'")
return result
async def get_requests(self):
# Get any not marked complete
requests = self.collection.find({"complete": {"$ne": True}})
return loads(dumps(list(requests)))
async def __super_saver(self, user: UserPayload, song_to_superfave):
super_faves = self.db.get_collection('super_faves')
already_superfaved = super_faves.find_one({"user": user['user'], "song": song_to_superfave})
if already_superfaved is None:
super_faves.insert_one({
"user": user["user"],
"twitch_id": user["user_id"],
"song": song_to_superfave,
"date": int(time.time())
})
return "added"
else:
return "already_exists"
async def __song_saver(self, user: UserPayload, song_to_add: dict):
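        # Append the song to the user's saved-song list, creating the list
        # document on first save and skipping duplicates.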
fave_songs = self.db.get_collection('saved_songs')
user_list = fave_songs.find({"user": user['user']})
results = list(user_list)
if len(results) == 0:
print(f"User list not found for {user}, creating...")
res = fave_songs.insert_one({
"user": user['user'],
"twitch_id": user['user_id'],
"songs": [{
"song": song_to_add,
"date": int(time.time())
}]
})
print(res)
return "created"
else:
fave_data = results[0]
song_titles = []
for song in fave_data['songs']:
song_titles.append(song["song"])
if song_to_add not in song_titles:
new_song_list = fave_data['songs'] + [{
"song": song_to_add,
"date": int(time.time())
}]
fave_songs.update_one({
"user": {
"$eq": user['user']
}
}, {
"$set": {
"twitch_id": user['user_id'],
"songs": new_song_list
}
})
return "added"
else:
return "already_exists"
async def super_fave_song(self, user: UserPayload):
current_song = (await self.current_song())['song_string']
if current_song == "":
return "no_song"
return await self.__super_saver(user, current_song)
async def save_current_song(self, user: UserPayload):
current_song = (await self.current_song())['song_string']
if current_song == "":
return "no_song"
return await self.__song_saver(user, current_song)
async def save_last_song(self, user: UserPayload):
last_song = (await self.last_song())['song_string']
if last_song == "":
return "no_song"
return await self.__song_saver(user, last_song)
async def current_song(self):
collection = self.db.get_collection('current_song')
requests = collection.find({"type": "current_song"})
return list(requests)[0]
async def last_song(self):
collection = self.db.get_collection('current_song')
requests = collection.find({"type": "last_song"})
return list(requests)[0]
| 34.666667 | 100 | 0.577564 | 531 | 4,680 | 4.862524 | 0.284369 | 0.055383 | 0.017428 | 0.036793 | 0.332301 | 0.314485 | 0.269558 | 0.248257 | 0.215724 | 0.190163 | 0 | 0.002495 | 0.314957 | 4,680 | 134 | 101 | 34.925373 | 0.80287 | 0.019872 | 0 | 0.272727 | 0 | 0 | 0.193788 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009091 | false | 0 | 0.072727 | 0 | 0.236364 | 0.036364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
440a2c6ef802e9a82756a1ade4c76fb4a056e3ee | 20,200 | py | Python | hw2/train_pg_v2.py | HuanjunWang/rl_homework | 4c387fa7016e980fe1f72824e6ed5b980bf8e717 | [
"MIT"
] | null | null | null | hw2/train_pg_v2.py | HuanjunWang/rl_homework | 4c387fa7016e980fe1f72824e6ed5b980bf8e717 | [
"MIT"
] | null | null | null | hw2/train_pg_v2.py | HuanjunWang/rl_homework | 4c387fa7016e980fe1f72824e6ed5b980bf8e717 | [
"MIT"
] | null | null | null | import numpy as np
import tensorflow as tf
import gym
import logz
import scipy.signal
import os
import time
from multiprocessing import Process
import shutil
class MyArgument(object):
def __init__(self,
exp_name='vpg',
env_name='CartPole-v1',
n_iter=100,
gamma=.99,
min_batch_size=1000,
max_path_length=None,
learning_rate=1e-3,
reward_to_go=True,
render=False,
normalize_advantage=True,
nn_baseline=True,
seed=1,
n_layers=1,
size=32,
debug=False,
max_loss=100):
self.exp_name = exp_name
self.env_name = env_name
self.n_iter = n_iter
self.gamma = gamma
self.min_batch_size = min_batch_size
self.max_path_length = max_path_length
self.learning_rate = learning_rate
self.reward_to_go = reward_to_go
self.render = render
self.debug = debug
self.max_loss = max_loss
self.normalize_advantage = normalize_advantage
self.nn_baseline = nn_baseline
self.seed = seed
self.n_layers = n_layers
self.size = size
base_dir = '/tmp/pg/%s' % self.env_name
self.log_dir = 'bl_' if self.nn_baseline else ''
self.log_dir += 'rtg_' if self.reward_to_go else ''
self.log_dir += 'norm_' if self.normalize_advantage else ''
self.log_dir += 'nn_%d_%d_' % (self.n_layers, self.size)
self.log_dir += 'lr_%6f_' % self.learning_rate
self.log_dir += 'batch_%d_' % self.min_batch_size
self.log_dir += 't_%d' % self.max_loss
self.log_dir = os.path.join(base_dir, self.log_dir)
self.log_dir = os.path.join(self.log_dir, 'seed%d' % self.seed)
class PgModel(object):
def __init__(self, env, n_layers, size, learning_rate, nn_baseline, debug):
self.observation_dim = env.observation_space.shape[0]
self.ph_observation = tf.placeholder(shape=[None, self.observation_dim], name="Observation", dtype=tf.float32)
self.ph_advance = tf.placeholder(shape=[None], name='advance', dtype=tf.float32)
self.nn_baseline = nn_baseline
self.ph_q_value = tf.placeholder(shape=[None], name='QValue', dtype=tf.float32)
self.debug = debug
if self.debug:
self.ph_mean_reward = tf.placeholder(name="reward", dtype=tf.float32)
tf.summary.scalar("MeanReward", self.ph_mean_reward)
self.predict_action = None
self.critic_opt = None
        self.action_opt = None
self.ph_action = None
self.predict_baseline = None
self.merged = None
self.critic_loss = None
@staticmethod
def build_mlp(input_placeholder, output_size, scope,
n_layers=2, size=64, activation=tf.nn.relu, output_activation=None):
with tf.variable_scope(scope):
x = tf.keras.Input(tensor=input_placeholder)
for i in range(n_layers):
x = tf.keras.layers.Dense(units=size, activation=activation)(x)
x = tf.keras.layers.Dense(units=output_size, activation=output_activation)(x)
return x
def get_predict_action(self, sess, observation):
action = sess.run(self.predict_action, feed_dict={self.ph_observation: observation[None]})
return action[0]
def update(self, sess, observations, actions, q, normalize_advance, mean_reward, max_loss):
if self.nn_baseline:
            # Update Critic Network
baseline = sess.run(self.predict_baseline, feed_dict={self.ph_observation: observations})
updated_baseline = baseline * .9 + q * .1
sess.run([self.critic_opt], feed_dict={self.ph_observation: observations,
self.ph_q_value: updated_baseline})
advance = q - baseline
else:
advance = q.copy()
if normalize_advance:
advance = (advance - np.mean(advance)) / (np.std(advance) + 1e-8)
# Update the Actor network
if self.debug:
_, summary = sess.run([self.action_opt, self.merged], feed_dict={self.ph_observation: observations,
self.ph_action: actions,
self.ph_advance: advance,
self.ph_q_value: q,
self.ph_mean_reward: mean_reward})
else:
sess.run(self.action_opt, feed_dict={self.ph_observation: observations,
self.ph_action: actions,
self.ph_advance: advance})
summary = None
return summary
class PgModelContinuous(PgModel):
def __init__(self, env, n_layers, size, learning_rate, nn_baseline, debug):
super().__init__(env, n_layers, size, learning_rate, nn_baseline, debug)
self.action_dim = env.action_space.shape[0]
self.ph_action = tf.placeholder(shape=[None, self.action_dim], name="Action", dtype=tf.float32)
# Define the Actor Model
with tf.variable_scope("Actor"):
# N x action dim
# Output activation ?
self.action_mean = self.build_mlp(input_placeholder=self.ph_observation,
output_size=self.action_dim,
scope="Mean_%d_%d" % (n_layers, size),
size=size,
n_layers=n_layers,
activation=tf.nn.relu,
output_activation=None)
# action dim
# self.action_sigma = tf.get_variable(name='Sigma', shape=[self.action_dim],
# dtype=tf.float32, trainable=True, initializer=tf.ones_initializer())
self.action_sigma = self.build_mlp(input_placeholder=self.ph_observation,
output_size=self.action_dim,
scope="Sigma_%d_%d" % (n_layers, size),
size=32,
n_layers=1,
activation=tf.nn.relu,
output_activation=tf.nn.sigmoid)
tf.summary.histogram('Mean', self.action_mean)
tf.summary.histogram('Std', self.action_sigma)
# Broadcast expected here
# Get N x action dim distributions
self.normal_dist = tf.distributions.Normal(self.action_mean, (self.action_sigma + 1e-8),
name="PredictDistribution")
# Expected N* action dis distributions
self.predict_action = self.normal_dist.sample(name="PredictAction")
# self.predict_action = tf.clip_by_value(normal_dist.sample(), env.action_space.low, env.action_space.high,
# name="PredictAction")
with tf.name_scope("Loss"):
self.action_prob = self.normal_dist.log_prob(self.ph_action, name="Prob")
self.actor_loss = - tf.reduce_mean(self.action_prob * tf.expand_dims(self.ph_advance, -1))
tf.summary.scalar('Actor Loss', self.actor_loss)
with tf.name_scope("Opt/"):
self.action_opt = tf.train.AdamOptimizer(learning_rate).minimize(self.actor_loss)
# Define the Critic Model
if nn_baseline:
with tf.name_scope("Critic"):
self.predict_baseline_2d = self.build_mlp(input_placeholder=self.ph_observation,
output_size=1,
scope="NN_%d_%d" % (n_layers, size),
n_layers=n_layers,
size=size,
activation=tf.nn.relu)
self.predict_baseline = tf.squeeze(self.predict_baseline_2d, axis=1, name="PredictBaseline")
with tf.name_scope("Loss"):
self.critic_loss = tf.losses.mean_squared_error(self.ph_q_value, self.predict_baseline)
tf.summary.scalar('Critic Loss', self.critic_loss)
with tf.name_scope("Opt/"):
self.critic_opt = tf.train.AdamOptimizer(learning_rate).minimize(self.critic_loss)
weights = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
for w in weights:
tf.summary.histogram(w.name, w)
self.merged = tf.summary.merge_all()
class PgModelDiscrete(PgModel):
def __init__(self, env, n_layers, size, learning_rate, nn_baseline, debug):
super().__init__(env, n_layers, size, learning_rate, nn_baseline, debug)
self.action_dim = env.action_space.n
self.ph_action = tf.placeholder(shape=[None], name="Action", dtype=tf.int32)
# Define the Actor Model
with tf.name_scope("Actor"):
self.action_logist = self.build_mlp(input_placeholder=self.ph_observation,
output_size=self.action_dim, scope="NN_%d_%d" % (n_layers, size),
size=size, n_layers=n_layers)
self.predict_action_2d = tf.multinomial(self.action_logist, 1)
self.predict_action = tf.squeeze(self.predict_action_2d, axis=1, name="PredictAction")
self.batch_size = tf.shape(self.ph_observation)[0]
with tf.name_scope('Loss'):
indices = tf.stack([tf.range(self.batch_size), self.ph_action], axis=1)
action_prob = tf.gather_nd(tf.nn.softmax(self.action_logist), indices)
self.actor_loss = tf.reduce_mean(-tf.log(action_prob) * self.ph_advance)
tf.summary.scalar('Actor loss', self.actor_loss)
with tf.name_scope("Opt/"):
self.action_opt = tf.train.AdamOptimizer(learning_rate).minimize(self.actor_loss)
# Define the Critic Model
if nn_baseline:
with tf.name_scope("Critic"):
self.predict_baseline_2d = self.build_mlp(input_placeholder=self.ph_observation,
output_size=1,
scope="NN_%d_%d" % (n_layers, size),
n_layers=n_layers,
size=size,
activation=tf.nn.relu)
self.predict_baseline = tf.squeeze(self.predict_baseline_2d, axis=1, name="PredictBaseline")
with tf.name_scope("Loss"):
self.critic_loss = tf.losses.mean_squared_error(self.ph_q_value, self.predict_baseline)
tf.summary.scalar('Critic Loss', self.critic_loss)
with tf.name_scope("Opt/"):
self.critic_opt = tf.train.AdamOptimizer(learning_rate).minimize(self.critic_loss)
weights = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
for w in weights:
tf.summary.histogram(w.name, w)
self.merged = tf.summary.merge_all()
def discount_reward(paths, gamma, reward_to_go):
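    # With reward_to_go, each timestep gets the discounted sum of rewards from
    # that step onward; otherwise every timestep in a trajectory shares the
    # full discounted return of the whole trajectory.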
if reward_to_go:
discounted_reward = []
for path in paths:
path_len = len(path['reward'])
discount_factor = [1 * (gamma ** i) for i in range(path_len)]
for i in range(path_len):
discounted_reward.append(
np.sum(np.array(path['reward'][i:]) * np.array(discount_factor[:path_len - i])))
else:
discounted_reward = []
for path in paths:
ret_tau = 0
discount_factor = 1
for reward in path['reward']:
ret_tau += reward * discount_factor
discount_factor *= gamma
discounted_reward.extend([ret_tau for i in range(len(path['reward']))])
q_n = np.array(discounted_reward, dtype=np.float32)
return q_n
def verify_model(sess, model, env):
action_dim = env.action_space.shape[0]
observation_dim = env.observation_space.shape[0]
observations = np.random.randn(observation_dim)
print("Observation shape:", observations.shape)
print(observations)
actions = model.get_predict_action(sess, observations)
print("action shape:", actions.shape)
print(actions)
assert actions.shape == (action_dim,)
N = 10
observations = np.random.randn(N, observation_dim)
print("Observation shape:", observations.shape)
print(observations)
predict_action = sess.run(model.predict_action, feed_dict={model.ph_observation: observations})
print("Action shape:", predict_action.shape)
print(predict_action)
assert predict_action.shape == (N, action_dim)
action_prob = sess.run(model.action_prob, feed_dict={model.ph_observation: observations,
model.ph_action: predict_action})
print("Prob:", action_prob)
assert action_prob.shape == (N, action_dim)
loss = sess.run(model.actor_loss, feed_dict={model.ph_observation: observations,
model.ph_action: predict_action,
model.ph_advance: np.ones(N)})
print("Loss:", loss)
for i in range(20):
sess.run(model.action_opt, feed_dict={model.ph_observation: observations,
model.ph_action: predict_action,
model.ph_advance: np.ones(N)})
action_prob2 = sess.run(model.action_prob, feed_dict={model.ph_observation: observations,
model.ph_action: predict_action})
print("Prob2:", action_prob2)
assert action_prob2.shape == (N, action_dim)
loss2 = sess.run(model.actor_loss, feed_dict={model.ph_observation: observations,
model.ph_action: predict_action,
model.ph_advance: np.ones(N)})
print("Loss2:", loss2)
assert loss2 < loss
print(np.sum(action_prob2 > action_prob))
assert np.sum(action_prob2 > action_prob) == N * action_dim
def train_pg(args):
if os.path.exists(args.log_dir):
shutil.rmtree(args.log_dir)
logz.configure_output_dir(args.log_dir)
logz.save_params(args.__dict__)
# Set random seeds
tf.set_random_seed(args.seed)
np.random.seed(args.seed)
# Make the gym environment
env = gym.make(args.env_name)
# Is this env continuous, or discrete?
discrete = isinstance(env.action_space, gym.spaces.Discrete)
if discrete:
model = PgModelDiscrete(env, args.n_layers, args.size, args.learning_rate, args.nn_baseline, debug=args.debug)
else:
model = PgModelContinuous(env, args.n_layers, args.size, args.learning_rate, args.nn_baseline, debug=args.debug)
tf_config = tf.ConfigProto(inter_op_parallelism_threads=1, intra_op_parallelism_threads=1)
sess = tf.Session(config=tf_config)
sess.__enter__() # equivalent to `with sess:`
tf.global_variables_initializer().run() # pylint: disable=E1101
writer = tf.summary.FileWriter(args.log_dir, sess.graph)
# if not discrete:
# verify_model(sess, model, env)
max_path_length = args.max_path_length or env.spec.max_episode_steps
start = time.time()
for itr in range(args.n_iter):
end = time.time()
cost = end - start
start = end
if itr > 0:
if itr == 1:
mean_cost = cost
else:
mean_cost = .9 * mean_cost + .1 * cost
print("Time: %1f " % mean_cost, "Togo:%1f min" % ((args.n_iter - itr) * mean_cost / 60))
print("********** Iteration %i ************" % itr)
time_steps_this_batch = 0
paths = []
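        # Collect rollouts until at least min_batch_size environment steps have been gathered.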
while True:
observation = env.reset()
obs, acs, rewards = [], [], []
render_this_episode = (len(paths) == 0 and (itr % 10 == 0) and args.render)
path_steps = 0
while True:
if render_this_episode:
env.render()
time.sleep(0.0001)
obs.append(observation)
action = model.get_predict_action(sess, observation)
acs.append(action)
observation, rew, done, _ = env.step(action)
rewards.append(rew)
path_steps += 1
if done or path_steps > max_path_length:
break
path = {"observation": np.array(obs),
"reward": np.array(rewards),
"action": np.array(acs)}
paths.append(path)
time_steps_this_batch += len(obs)
if time_steps_this_batch > args.min_batch_size:
break
path_reward = [sum(path['reward']) for path in paths]
mean_reward = sum(path_reward) / len(path_reward)
print("Average Reward:", mean_reward)
ob_batch = np.concatenate([path["observation"] for path in paths])
ac_batch = np.concatenate([path["action"] for path in paths])
q_batch = discount_reward(paths, args.gamma, args.reward_to_go)
summary = model.update(sess, observations=ob_batch, actions=ac_batch,
q=q_batch, normalize_advance=args.normalize_advantage,
mean_reward=mean_reward, max_loss=args.max_loss)
if args.debug:
writer.add_summary(summary, itr)
logz.pickle_tf_vars()
def main():
for baseline in [True]:
for normalize in [True]:
for reward_to_go in [True]:
for min_batch in [10000]:
for lr in [1e-2]:
for max_loss in [1e6]:
for seed in [23]:
# env_name = 'CartPole-v0'
# env_name = 'MountainCar-v0'
# env_name = 'MountainCarContinuous-v0'
env_name = 'InvertedPendulum-v1'
# env_name = "Ant-v1"
env_name = 'HalfCheetah-v1'
args = MyArgument(env_name=env_name,
seed=seed,
debug=True,
n_layers=3,
size=64,
reward_to_go=reward_to_go,
normalize_advantage=normalize,
nn_baseline=baseline,
n_iter=1200,
learning_rate=lr,
render=False,
max_path_length=500,
min_batch_size=min_batch,
max_loss=max_loss)
p = Process(target=train_pg, args=(args,))
p.start()
p.join()
if __name__ == "__main__":
main()
| 44.789357 | 120 | 0.536188 | 2,227 | 20,200 | 4.61383 | 0.131118 | 0.018686 | 0.012847 | 0.016058 | 0.407786 | 0.373625 | 0.31708 | 0.291873 | 0.291873 | 0.273674 | 0 | 0.010835 | 0.369455 | 20,200 | 450 | 121 | 44.888889 | 0.79587 | 0.045347 | 0 | 0.259259 | 0 | 0 | 0.032094 | 0 | 0 | 0 | 0 | 0 | 0.017094 | 1 | 0.031339 | false | 0 | 0.025641 | 0 | 0.079772 | 0.045584 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
440afa545bf83412add26954940014f4b8250c80 | 664 | py | Python | lab1_python_intro/ex5_SOLUTION_conditional_number.py | ggruszczynski/CFDPython | 1662ede061fb899d6ed3f89c17877e65f65e521c | [
"CC-BY-3.0"
] | null | null | null | lab1_python_intro/ex5_SOLUTION_conditional_number.py | ggruszczynski/CFDPython | 1662ede061fb899d6ed3f89c17877e65f65e521c | [
"CC-BY-3.0"
] | null | null | null | lab1_python_intro/ex5_SOLUTION_conditional_number.py | ggruszczynski/CFDPython | 1662ede061fb899d6ed3f89c17877e65f65e521c | [
"CC-BY-3.0"
] | 1 | 2021-02-05T08:00:02.000Z | 2021-02-05T08:00:02.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Nov 4 19:05:22 2020
@author: ggruszczynski
"""
import numpy as np
from numpy import linalg as LA
def dot_product(u,v):
return u @ v
def calc_condition_number_naive(f, u, v, delta):
cond_number = LA.norm(f(u+delta, v) - f(u,v)) / LA.norm(f(u,v))
cond_number /= LA.norm(u+delta)/LA.norm(u)
return cond_number
x0 = np.array([1, 2, 3])
x1 = np.array([2, 6, -5])
dx = 0.02 * x0
# dx = np.array([0.02, 0.04, -0.04])
c = calc_condition_number_naive(dot_product, x0, x1, dx)
print(c)
# calculate the difference explicitly
print(
abs(1 - ((x0 + dx) @ x1 / (x0 @ x1) ))
) | 20.75 | 67 | 0.614458 | 122 | 664 | 3.254098 | 0.47541 | 0.025189 | 0.02267 | 0.120907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077505 | 0.203313 | 664 | 32 | 68 | 20.75 | 0.672968 | 0.262048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0.0625 | 0.375 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4410ba355da46d3266d0b93f07bb597a2d073c21 | 1,000 | py | Python | paramak/parametric_shapes/rotate_spline_shape.py | billingsley-john/paramak | 127d064f7bc0fd26305b4d83776d66b0e12aeeb0 | [
"MIT"
] | null | null | null | paramak/parametric_shapes/rotate_spline_shape.py | billingsley-john/paramak | 127d064f7bc0fd26305b4d83776d66b0e12aeeb0 | [
"MIT"
] | null | null | null | paramak/parametric_shapes/rotate_spline_shape.py | billingsley-john/paramak | 127d064f7bc0fd26305b4d83776d66b0e12aeeb0 | [
"MIT"
] | null | null | null |
from typing import Optional, Tuple
from paramak import RotateMixedShape
class RotateSplineShape(RotateMixedShape):
"""Rotates a 3d CadQuery solid from points connected with splines.
Args:
rotation_angle: The rotation_angle to use when revolving the solid.
Defaults to 360.0.
stp_filename: Defaults to "RotateSplineShape.stp".
stl_filename: Defaults to "RotateSplineShape.stl".
"""
def __init__(
self,
rotation_angle: Optional[float] = 360,
stp_filename: Optional[str] = "RotateSplineShape.stp",
stl_filename: Optional[str] = "RotateSplineShape.stl",
color: Optional[Tuple[float, float, float, Optional[float]]] = (0.415, 0.239, 0.603),
**kwargs
):
super().__init__(
rotation_angle=rotation_angle,
stp_filename=stp_filename,
stl_filename=stl_filename,
color=color,
connection_type="spline",
**kwargs
)
| 29.411765 | 93 | 0.633 | 105 | 1,000 | 5.819048 | 0.438095 | 0.106383 | 0.05892 | 0.114566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027624 | 0.276 | 1,000 | 33 | 94 | 30.30303 | 0.816298 | 0.279 | 0 | 0.105263 | 0 | 0 | 0.069666 | 0.060958 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.105263 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4414b94db096b9792beb0261cd50cbcc79200424 | 3,352 | py | Python | applications/fatigue_w_proxy/container4/app/predict.py | Dumpkin1996/clipper | 1a08bbdde846c3cfe76236c68548a848f71605e0 | [
"Apache-2.0"
] | 2 | 2019-04-24T13:46:28.000Z | 2019-05-28T06:59:26.000Z | applications/fatigue_w_proxy/container4/app/predict.py | SimonZsx/clipper | 457088be2ebe68c68b94d90389d1308e35b4c844 | [
"Apache-2.0"
] | null | null | null | applications/fatigue_w_proxy/container4/app/predict.py | SimonZsx/clipper | 457088be2ebe68c68b94d90389d1308e35b4c844 | [
"Apache-2.0"
] | 4 | 2019-04-03T11:03:57.000Z | 2019-06-26T08:22:38.000Z | import cv2
import numpy as np
import os
import json
import time
from datetime import datetime
def image_string(image):
image_encode=cv2.imencode('.jpg',image)[1]
imagelist=image_encode.tolist()
image_string=json.dumps(imagelist)
return image_string
def string_image(imagestring):
image_list=json.loads(imagestring)
arr=np.array(image_list)
arr=np.uint8(arr)
image=cv2.imdecode(arr,cv2.IMREAD_COLOR)
return image
protoFile = "/container/pose_deploy_linevec.prototxt"
weightsFile = "/container/pose_iter_440000.caffemodel"
net = cv2.dnn.readNetFromCaffe(protoFile, weightsFile)
nPoints = 18
POSE_PAIRS = [ [1,0],[1,2],[1,5],[2,3],[3,4],[5,6],[6,7],[1,8],[8,9],[9,10],[1,11],[11,12],[12,13],[0,14],[0,15],[14,16],[15,17]]
# input image dimensions for the network
inWidth = 368
inHeight = 368
COUNT=0
def predict(imagestring):
    global COUNT
    t1 = datetime.utcnow()
print("\n[INFO]\t", "[c4]\t", str(t1))
if imagestring is None:
t2 = datetime.utcnow()
print("[INFO]\t", "[c4]\t", str(t2))
print("[INFO]\t", "[c4]\tTime elapsed: ", (t2-t1).total_seconds(), " seconds." )
print("\n[INFO] Pose Detection FINISHED!")
return False
frame=string_image(imagestring)
frameCopy = np.copy(frame)
frameWidth = frame.shape[1]
frameHeight = frame.shape[0]
threshold = 0.1
inpBlob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight),
(0, 0, 0), swapRB=False, crop=False)
net.setInput(inpBlob)
output = net.forward()
H = output.shape[2]
W = output.shape[3]
# Empty list to store the detected keypoints
points = []
add=0
square=0
count=0
for i in range(nPoints):
# confidence map of corresponding body's part.
probMap = output[0, i, :, :]
# Find global maxima of the probMap.
minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)
# Scale the point to fit on the original image
x = (frameWidth * point[0]) / W
y = (frameHeight * point[1]) / H
if prob > threshold :
cv2.circle(frameCopy, (int(x), int(y)), 8, (0, 255, 255), thickness=-1, lineType=cv2.FILLED)
cv2.putText(frameCopy, "{}".format(i), (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, lineType=cv2.LINE_AA)
# Add the point to the list if the probability is greater than the threshold
points.append((int(x), int(y)))
add+=int(x)
square+=int(x)*int(x)
count+=1
else :
points.append(None)
if count==0:
t2 = datetime.utcnow()
print("[INFO]\t", "[c4]\t", str(t2))
print("[INFO]\t", "[c4]\tTime elapsed: ", (t2-t1).total_seconds(), " seconds." )
return None
    add = add / count
    variance = (square - count * add * add) / (count - 1)
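    # Variance of the detected keypoints' x-coordinates serves as a crude spread signal;
    # sustained high variance increments the fatigue counter below.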
print("\n[INFO] Pose Detection FINISHED!")
t2 = datetime.utcnow()
print("[INFO]\t", "[c4]\t", str(t2))
print("[INFO]\t", "[c4]\tTime elapsed: ", (t2-t1).total_seconds(), " seconds." )
    if variance > 20000:
        COUNT = COUNT + 1
        print("\n[INFO] WARNING! MAY BE TIRED!")
    else:
        COUNT = COUNT - 1
    if COUNT > 6:
        return "True"
    else:
        return "False"
| 28.649573 | 135 | 0.579356 | 455 | 3,352 | 4.221978 | 0.364835 | 0.01822 | 0.025508 | 0.03748 | 0.17595 | 0.158771 | 0.126497 | 0.126497 | 0.126497 | 0.126497 | 0 | 0.057848 | 0.26253 | 3,352 | 116 | 136 | 28.896552 | 0.719256 | 0.083831 | 0 | 0.168675 | 0 | 0 | 0.116841 | 0.025131 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036145 | false | 0 | 0.072289 | 0 | 0.180723 | 0.120482 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
442337ece147504cb1df39981a0b29a4670fe631 | 530 | py | Python | PDF/pymupdf_get_bookmarks.py | MartinThoma/algorithms | 4347a9b7bf54ef378d16d26ef9e357ddc710664b | [
"MIT"
] | 209 | 2015-01-02T03:47:12.000Z | 2022-03-06T16:54:47.000Z | PDF/pymupdf_get_bookmarks.py | Kerwin-Xie/algorithms | 4347a9b7bf54ef378d16d26ef9e357ddc710664b | [
"MIT"
] | 3 | 2015-12-06T14:40:34.000Z | 2021-03-22T17:40:24.000Z | PDF/pymupdf_get_bookmarks.py | Kerwin-Xie/algorithms | 4347a9b7bf54ef378d16d26ef9e357ddc710664b | [
"MIT"
] | 114 | 2015-01-31T08:37:10.000Z | 2022-02-23T04:42:28.000Z | # Type annotations are pretty awesome:
# https://medium.com/analytics-vidhya/type-annotations-in-python-3-8-3b401384403d
from typing import Dict
import fitz # pip install pymupdf
def get_bookmarks(filepath: str) -> Dict[int, str]:
    # WARNING! One page can have multiple bookmarks!
    bookmarks = {}
    with fitz.open(filepath) as doc:
        toc = doc.getToC()  # [[lvl, title, page, …], …]
        for level, title, page in toc:
            bookmarks[page] = title
    return bookmarks
print(get_bookmarks("my.pdf"))
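# Illustrative variant (hypothetical helper, not used above): since one page can
# carry several bookmarks, a dictionary of lists keeps all of them instead of
# only the last title per page.
from typing import List

def get_bookmarks_all(filepath: str) -> Dict[int, List[str]]:
    bookmarks: Dict[int, List[str]] = {}
    with fitz.open(filepath) as doc:
        for level, title, page in doc.getToC():
            bookmarks.setdefault(page, []).append(title)
    return bookmarks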
| 27.894737 | 81 | 0.669811 | 72 | 530 | 4.986111 | 0.694444 | 0.083565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02864 | 0.209434 | 530 | 18 | 82 | 29.444444 | 0.813842 | 0.396226 | 0 | 0 | 0 | 0 | 0.019108 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.4 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44252fea2299758e673934a1ecda9362f2441cf3 | 2,058 | py | Python | muddery/commands/player.py | MarsZone/DreamLand | 87455f421c1ba09cb6efd5fc0882fbc2a29ea1a5 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | muddery/commands/player.py | MarsZone/DreamLand | 87455f421c1ba09cb6efd5fc0882fbc2a29ea1a5 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | muddery/commands/player.py | MarsZone/DreamLand | 87455f421c1ba09cb6efd5fc0882fbc2a29ea1a5 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | """
This is adapt from evennia/evennia/commands/default/player.py.
The licence of Evennia can be found in evennia/LICENSE.txt.
"""
import time
from django.conf import settings
from evennia.server.sessionhandler import SESSIONS
from evennia.commands.command import Command
from evennia.utils import utils, create, search, prettytable
MAX_NR_CHARACTERS = settings.MAX_NR_CHARACTERS
MULTISESSION_MODE = settings.MULTISESSION_MODE
# limit symbol import for API
__all__ = ("CmdQuit",)  # tuple, not a bare string
# force max nr chars to 1 if mode is 0 or 1
MAX_NR_CHARACTERS = MULTISESSION_MODE < 2 and 1 or MAX_NR_CHARACTERS
BASE_PLAYER_TYPECLASS = settings.BASE_PLAYER_TYPECLASS
PERMISSION_HIERARCHY = settings.PERMISSION_HIERARCHY
PERMISSION_HIERARCHY_LOWER = [perm.lower() for perm in PERMISSION_HIERARCHY]
# Obs - these are all intended to be stored on the Player, and as such,
# use self.player instead of self.caller, just to be sure. Also self.msg()
# is used to make sure returns go to the right session
class CmdQuit(Command):
"""
quit the game
Usage:
{"cmd":"quit",
"args":""
}
Gracefully disconnect your current session from the
game. Use the /all switch to disconnect from all sessions.
"""
key = "quit"
locks = "cmd:all()"
def func(self):
"hook function"
player = self.player
nsess = len(player.sessions.all())
if nsess == 2:
player.msg({"msg":"{RQuitting{n. One session is still connected.",
"logout":""},
session=self.session)
elif nsess > 2:
player.msg({"msg":"{RQuitting{n. %i session are still connected." % (nsess-1),
"logout":""},
session=self.session)
else:
# we are quitting the last available session
player.msg({"msg":"{RQuitting{n. Hope to see you again, soon.",
"logout":""},
session=self.session)
player.disconnect_session_from_player(self.session)
| 32.15625 | 90 | 0.64723 | 263 | 2,058 | 4.961977 | 0.441065 | 0.019157 | 0.045977 | 0.048276 | 0.10728 | 0.042912 | 0.042912 | 0 | 0 | 0 | 0 | 0.005239 | 0.258017 | 2,058 | 63 | 91 | 32.666667 | 0.849378 | 0.301263 | 0 | 0.1875 | 0 | 0 | 0.137241 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.15625 | 0 | 0.28125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
442618fc9c28b2ce42e414759738c3f72d214758 | 20,002 | py | Python | python_utilities/parallel.py | sdaxen/python_utilities | 7b9d6cc21bfc31be83629d2ac02b27e886ebc2bb | [
"MIT"
] | 2 | 2020-04-13T20:17:36.000Z | 2020-05-12T01:13:12.000Z | python_utilities/parallel.py | sethaxen/python_utilities | 7b9d6cc21bfc31be83629d2ac02b27e886ebc2bb | [
"MIT"
] | 5 | 2015-10-20T22:57:51.000Z | 2017-09-07T01:10:23.000Z | python_utilities/parallel.py | sethaxen/python_utilities | 7b9d6cc21bfc31be83629d2ac02b27e886ebc2bb | [
"MIT"
] | 3 | 2015-08-17T17:55:41.000Z | 2018-09-19T13:56:42.000Z | """Tools to aid in parallelizing a function call.
Default method is MPI, if available. Fallback is concurrent.futures. If all
else fails, final fallback is serial.
Author: Seth Axen
Email: seth.axen@gmail.com
"""
import os
import sys
import logging
from copy import copy
import multiprocessing
try:
from itertools import izip as zip
except ImportError: # python 3
pass
# upon import, figure out if MPI is available, and decide parallel_mode
MPI_MODE = "mpi"
FUTURES_THREADS_MODE = "threads"
FUTURES_PROCESSES_MODE = "processes"
SERIAL_MODE = "serial"
ALL_PARALLEL_MODES = (MPI_MODE,
FUTURES_PROCESSES_MODE, FUTURES_THREADS_MODE,
SERIAL_MODE)
available_parallel_modes = []
try:
from mpi4py import MPI
available_parallel_modes.append(MPI_MODE)
except ImportError:
pass
try:
import concurrent.futures
available_parallel_modes.append(FUTURES_PROCESSES_MODE)
available_parallel_modes.append(FUTURES_THREADS_MODE)
except ImportError:
pass
available_parallel_modes.append(SERIAL_MODE)
def make_data_iterator(data_entries, *iterables):
"""Make an iterator from an iterable of data entries and constant values.
All iterables should have the same number of entries. Any passed values
that are not iterators, lists, or tuples will have that same value
repeated for the entire length of `data_entries`.
Parameters
----------
data_entries : iterable
Iterable of data entries.
*iterables
One or more iterables or constant values to serve as additional
data entries. These are zipped into an iterator with `data_entries`.
Returns
-------
iterator : Iterator of tuples, each with an item in `data_entries`
followed by corresponding items in `iterables`.
"""
from itertools import repeat, cycle
try:
    from collections.abc import Iterator  # Python 3.3+
except ImportError:  # Python 2 fallback, matching the izip handling above
    from collections import Iterator
new_iterables = [iter(data_entries), ]
for iterable in iterables:
if (isinstance(iterable, Iterator) or
isinstance(iterable, list) or
isinstance(iterable, tuple)):
new_iterables.append(cycle(iterable))
else:
new_iterables.append(repeat(iterable))
return zip(*new_iterables)
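# Illustrative usage (hypothetical helper, never called): lists, tuples and
# iterators passed after `data_entries` are cycled, anything else is repeated
# for every entry.
def _example_make_data_iterator():
    pairs = make_data_iterator(["a", "b", "c"], 42, [1, 2, 3])
    return list(pairs)  # [("a", 42, 1), ("b", 42, 2), ("c", 42, 3)]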
def read_bash_var(var, default=None):
"""Rad a bash variable for number of available processes/threads."""
if var is None:
return default
try:
val = int(os.environ[var])
logging.debug("Variable %s indicates %d processors" % (var, val))
return val
except KeyError:
logging.debug("Variable %s not set" % (var))
return default
except ValueError:
logging.debug("Variable %s set to non-integer %s" % (var, str(val)))
return default
def enum(*sequential, **named):
"""Fake an enumerated type.
Reference:
----------
- http://stackoverflow.com/questions/36932/how-can-i-represent-an-enum-in-python
Parameters
----------
*sequential
List of items.
**named
Dictionary of items.
"""
enums = dict(zip(sequential, range(len(sequential))), **named)
return type('Enum', (), enums)
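# Illustrative usage (hypothetical helper, never called): this is how the MPI
# code below builds its message tags.
def _example_enum():
    tags = enum('READY', 'DONE', 'EXIT', 'START')
    return tags.READY, tags.DONE, tags.EXIT, tags.START  # (0, 1, 2, 3)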
class Parallelizer(object):
"""A class to aid in parallelizing a function call.
Ideal use case is when function calls are expected to have different
runtimes and each call is completely independent of all others."""
def __init__(self, parallel_mode=None, num_proc=None,
num_proc_bash_var=None, fail_value=False):
"""Choose mode and/or number of processors or use defaults.
Parameters
----------
parallel_mode : str, optional (default None)
Mode to use for parallelization. Available modes are
('mpi', 'processes', 'threads', 'serial').
num_proc : int, optional (default None)
Maximum number of processors to use. Ignored in MPI mode.
num_proc_bash_var : str, optional (default None)
Number of available processors will be read from bash variable.
Ignored if `num_proc` is specified.
fail_value : any, optional (default False)
Result to be yielded if specific function evaluation failed.
"""
preferred_parallel_modes = copy(available_parallel_modes)
logging.debug("Parallel modes %s are available." %
repr(preferred_parallel_modes))
self.fail_value = fail_value
self.rank = 0
self.num_proc = None
if parallel_mode is not None:
if parallel_mode not in ALL_PARALLEL_MODES:
raise KeyError("Parallel mode must be in %s." %
(repr(ALL_PARALLEL_MODES)))
if parallel_mode not in preferred_parallel_modes:
if self.is_master():
logging.warning(
"Parallel mode %s not available. Will auto-select a replacement." % (repr(parallel_mode)))
else:
preferred_parallel_modes.pop(
preferred_parallel_modes.index(parallel_mode))
preferred_parallel_modes = ([parallel_mode, ] +
preferred_parallel_modes)
logging.debug("Available parallel modes reorganized to %s." % (
repr(preferred_parallel_modes)))
if num_proc is None:
num_proc = read_bash_var(num_proc_bash_var)
for parallel_mode in preferred_parallel_modes:
logging.debug("Checking if mode %s is valid." %
(repr(parallel_mode)))
mode_num_proc = num_proc
self.rank = 0
if parallel_mode == MPI_MODE:
comm = MPI.COMM_WORLD
self.rank = comm.Get_rank()
mpi_num_proc = comm.Get_size()
mode_num_proc = mpi_num_proc
if (mode_num_proc is not None and mode_num_proc < 2
and parallel_mode != SERIAL_MODE):
if self.is_master():
logging.warning("Only %d processes available. %s mode not available." % (
mode_num_proc, repr(parallel_mode)))
continue
elif (mode_num_proc is None
and parallel_mode in (FUTURES_PROCESSES_MODE,
FUTURES_THREADS_MODE)):
mode_num_proc = multiprocessing.cpu_count()
logging.info("num_proc is not specified. %s mode will use all %d processes" % (
repr(parallel_mode), mode_num_proc))
elif parallel_mode == SERIAL_MODE:
mode_num_proc = 1
self.parallel_mode = parallel_mode
self.num_proc = mode_num_proc
break
if self.is_master():
logging.info(
"Parallelizer initialized with mode %s and %d processors." % (
repr(self.parallel_mode), self.num_proc))
def is_mpi(self):
return self.parallel_mode == MPI_MODE
def is_concurrent(self):
return self.is_threads() or self.is_processes()
def is_threads(self):
return self.parallel_mode == FUTURES_THREADS_MODE
def is_processes(self):
return self.parallel_mode == FUTURES_PROCESSES_MODE
def is_serial(self):
return self.parallel_mode == SERIAL_MODE
def is_master(self):
return self.rank == 0
def run(self, *args, **kwargs):
r"""Execute a function in parallel. Return list of results.
Parameters
----------
func : function
Function to execute. Argument is single entry of `data_iterator`
as well as named arguments in `kwargs`.
data_iterator : iterator
Iterator where each entry is an argument to `func`. These data
are communicated between processors so should be as small as
possible.
kwargs : dict, optional (default {})
Named arguments for `func`.
out_file : str, optional (default None)
File to write results of function to. If None, results are yielded
instead of being written to file.
out_str : str, optional (default "%s\n")
Format string to be written to output file for each result.
out_format : function, optional (default str)
Function to apply to output of `func` to format results to match
`out_str`.
logging_str : str, optional (default None)
Format string to be logged using `logging` for each successful
result. If None, only errors are logged.
logging_format : function, optional (default str)
Function to apply to `data` entries of `data_iterator` to format
results to match `logging_str`.
num_proc : int (default None)
Number of processors to use. If None, maximum number available
is used. If `is_mpi`, this term is ignored.
Returns
-------
list : List of results of `func`.
"""
results = [x for x in self.run_gen(*args, **kwargs)]
return results
def run_gen(self, func, data_iterator, kwargs={}, out_file=None,
out_str="%s\n", out_format=str, logging_str=None,
logging_format=str):
r"""Execute a function in parallel. Return result iterator.
Parameters
----------
func : function
Function to execute. Argument is single entry of `data_iterator`
as well as named arguments in `kwargs`.
data_iterator : iterator
Iterator where each entry is an argument to `func`. These data
are communicated between processors so should be as small as
possible.
kwargs : dict, optional (default {})
Named arguments for `func`.
out_file : str, optional (default None)
File to write results of function to. If None, results are yielded
instead of being written to file.
out_str : str, optional (default "%s\n")
Format string to be written to output file for each result.
out_format : function, optional (default str)
Function to apply to output of `func` to format results to match
`out_str`.
logging_str : str, optional (default None)
Format string to be logged using `logging` for each successful
result. If None, only errors are logged.
logging_format : function, optional (default str)
Function to apply to `data` entries of `data_iterator` to format
results to match `logging_str`.
num_proc : int (default None)
Number of processors to use. If None, maximum number available
is used. If `is_mpi`, this term is ignored.
Returns
-------
iterator : Iterator through results of `func`.
"""
result_iterator = iter([])
if self.is_mpi():
result_iterator = (x for x in self.mpi_run(
func, data_iterator, kwargs=kwargs, out_file=out_file,
out_str=out_str, out_format=out_format,
logging_str=logging_str, logging_format=logging_format))
elif self.is_concurrent():
result_iterator = (x for x in self.concurrent_run(
func, data_iterator, kwargs=kwargs, out_file=out_file,
out_str=out_str, out_format=out_format,
logging_str=logging_str, logging_format=logging_format))
else:
result_iterator = (x for x in self.serial_run(
func, data_iterator, kwargs=kwargs, out_file=out_file,
out_str=out_str, out_format=out_format,
logging_str=logging_str, logging_format=logging_format))
return result_iterator
def serial_run(self, func, data_iterator, kwargs={}, out_file=None,
out_str="%s\n", out_format=str, logging_str=None,
logging_format=str):
"""Run in serial on a single processor."""
if out_file is not None:
fh = open(out_file, "w")
for data in data_iterator:
try:
result = func(*data, **kwargs)
if out_file is None:
yield (result, data)
else:
fh.write(out_str % out_format(result))
yield (True, data)
if result != self.fail_value and logging_str is not None:
logging.info(logging_str % logging_format(data))
except:
logging.error("Error running: %s" % str(data),
exc_info=True)
yield(self.fail_value, data)
def concurrent_run(self, func, data_iterator, kwargs={}, out_file=None,
out_str="%s\n", out_format=str, logging_str=None,
logging_format=str):
"""Run in parallel with concurrent.futures."""
if self.is_threads():
executor = concurrent.futures.ThreadPoolExecutor(
max_workers=self.num_proc)
else:
executor = concurrent.futures.ProcessPoolExecutor(
max_workers=self.num_proc)
jobs = []
jobs_dict = {}
for data in data_iterator:
job = executor.submit(func, *data, **kwargs)
jobs.append(job)
jobs_dict[job] = data
jobs_iterator = concurrent.futures.as_completed(jobs)
if out_file is not None:
fh = open(out_file, "w")
for job in jobs_iterator:
data = jobs_dict[job]
try:
result = job.result()
if out_file is None:
yield (result, data)
else:
fh.write(out_str % out_format(result))
yield (True, data)
if result != self.fail_value and logging_str is not None:
logging.info(logging_str % logging_format(data))
except KeyboardInterrupt:
logging.error("Error running: %s" % str(data),
exc_info=True)
executor.shutdown()
yield(self.fail_value, data)
except:
logging.error("Error running: %s" % str(data),
exc_info=True)
yield(self.fail_value, data)
if out_file is not None:
fh.close()
executor.shutdown()
def mpi_run(self, func, data_iterator, kwargs={}, out_file=None,
out_str="%s\n", out_format=str, logging_str=None,
logging_format=str):
"""Run in parallel with MPI.
Reference:
----------
- https://github.com/jbornschein/mpi4py-examples/blob/master/09-task-pull.py
"""
comm = MPI.COMM_WORLD
status = MPI.Status() # get MPI status object
tags = enum('READY', 'DONE', 'EXIT', 'START')
msg = "Proc:%d|" % self.rank
comm.Barrier()
mode = MPI.MODE_WRONLY | MPI.MODE_CREATE
if out_file is not None:
fh = MPI.File.Open(comm, out_file, mode)
if self.is_master():
task_index = 0
num_workers = comm.Get_size() - 1
closed_workers = 0
logging.debug("%sMaster starting with %d workers" % (msg,
num_workers))
try:
i = 0
while closed_workers < num_workers:
received = comm.recv(source=MPI.ANY_SOURCE,
tag=MPI.ANY_TAG,
status=status)
source = status.Get_source()
tag = status.Get_tag()
if tag == tags.READY:
try:
data = next(data_iterator)
except StopIteration:
logging.debug(
"%sEnd of data iterator. Closing proc %d" % (
msg, source))
comm.send(
None, dest=source, tag=tags.EXIT)
except:
logging.debug("%sCould not get data" % msg)
logging.debug(
"%sSending task %d to proc %d" % (msg,
task_index,
source))
comm.send(data, dest=source, tag=tags.START)
task_index += 1
elif tag == tags.DONE:
if received is not None:
result, data = received
logging.debug(
"%sReceived result %d from proc %d" % (
msg, task_index, source))
if (result != self.fail_value and
logging_str is not None):
logging.info(
logging_str % logging_format(data))
if out_file is None or result == self.fail_value:
yield (result, data)
else:
yield (True, data)
i += 1
elif tag == tags.EXIT:
logging.debug("%sExiting proc %d" % (msg, source))
closed_workers += 1
except (KeyboardInterrupt, SystemExit):
logging.exception("%sError while processing" % msg,
exc_info=True)
sys.exit()
else:
# Worker processes execute code below
while True:
comm.send(None, dest=0, tag=tags.READY)
data = comm.recv(
source=0, tag=MPI.ANY_TAG, status=status)
tag = status.Get_tag()
if tag == tags.START:
try:
result = func(*data, **kwargs)
if out_file is None:
comm.send(
(result, data), dest=0, tag=tags.DONE)
else:
fh.Write_shared(
(out_str % out_format(result)).encode("utf-8"))
comm.send(
(True, data), dest=0, tag=tags.DONE)
except:
logging.error(
"%sError running: %s" % (msg, str(data)),
exc_info=True)
comm.send(
(self.fail_value, data), dest=0, tag=tags.DONE)
elif tag == tags.EXIT:
break
comm.send(None, dest=0, tag=tags.EXIT)
if out_file is not None:
fh.Sync()
fh.Close()
comm.Barrier()
if __name__ == "__main__":
def test_func(num, *args):
return num * 100
logging.basicConfig(level=logging.INFO, format="%(message)s")
data_list = range(100)
data_iterator = make_data_iterator(data_list, "test")
parallelizer = Parallelizer(parallel_mode=FUTURES_PROCESSES_MODE)
run_kwargs = {"out_file": "test_out.txt", "out_str": "%d\n",
"out_format": lambda x: x,
"logging_str": "Logged %s %d",
"logging_format": lambda x: (x[1], x[0])}
for result in parallelizer.run(test_func, data_iterator, **run_kwargs):
print(result)
| 38.026616 | 114 | 0.545945 | 2,241 | 20,002 | 4.706827 | 0.149487 | 0.019245 | 0.011471 | 0.009386 | 0.437998 | 0.381589 | 0.343098 | 0.308684 | 0.308684 | 0.308684 | 0 | 0.002944 | 0.371563 | 20,002 | 525 | 115 | 38.099048 | 0.836197 | 0.246425 | 0 | 0.358491 | 0 | 0 | 0.062592 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050314 | false | 0.009434 | 0.040881 | 0.022013 | 0.141509 | 0.003145 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44274d06c13806c20cb7ccbcad1c6195b3fdd749 | 405 | py | Python | Algorithm/coding_interviews/Python/sword-for-offer/43_num_of_one.py | ck76/awesome-cs | 48cba4081dc5290f07e305850b9a3a7e8a590b64 | [
"Apache-2.0"
] | 1 | 2021-11-16T13:37:41.000Z | 2021-11-16T13:37:41.000Z | Algorithm/coding_interviews/Python/sword-for-offer/43_num_of_one.py | ck76/awesome-cs | 48cba4081dc5290f07e305850b9a3a7e8a590b64 | [
"Apache-2.0"
] | null | null | null | Algorithm/coding_interviews/Python/sword-for-offer/43_num_of_one.py | ck76/awesome-cs | 48cba4081dc5290f07e305850b9a3a7e8a590b64 | [
"Apache-2.0"
] | null | null | null | #! /usr/bin/python3
# -*- coding: utf-8 -*-
# @Time : 2019/3/10 5:21 PM
# @Author : xiaoliji
# @Email : yutian9527@gmail.com
"""
Count how many times the digit 1 appears in the integers from 1 to n.
>>> countDigitOne(12)
5
"""
def countDigitOne(n: int) -> int:
    counter, i = 0, 1
    while i <= n:
        divider = i * 10
        counter += (n // divider) * i + min(max(n % divider - i + 1, 0), i)
        i *= 10
    return counter
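# Worked example: for each decimal place i (1, 10, 100, ...) the loop counts how
# many numbers in 1..n carry a 1 at that place, as (n // (10*i)) * i full cycles
# plus a partial cycle of min(max(n % (10*i) - i + 1, 0), i). For n = 12:
#   i = 1 : 1*1 + min(max(2 - 1 + 1, 0), 1)   = 2   (ones digit of 1 and 11)
#   i = 10: 0*10 + min(max(12 - 10 + 1, 0), 10) = 3  (tens digit of 10, 11, 12)
# which gives the 5 of the doctest above. Two extra spot checks:
assert countDigitOne(13) == 6
assert countDigitOne(99) == 20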
| 19.285714 | 75 | 0.501235 | 55 | 405 | 3.690909 | 0.618182 | 0.118227 | 0.133005 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104693 | 0.316049 | 405 | 20 | 76 | 20.25 | 0.628159 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4428f0e65f1454c8ca62ff07f9d8c0d8f778e7d4 | 22,957 | py | Python | CoMPILE_github/test_ranking.py | TmacMai/CoMPILE_Inductive_Knowledge_Graph | 072885012893a50b47cdee17f2e47f671e33bc00 | [
"MIT"
] | 14 | 2020-12-07T16:36:30.000Z | 2022-03-05T12:31:30.000Z | CoMPILE_github/test_ranking.py | TmacMai/CoMPILE_Inductive_Knowledge_Graph | 072885012893a50b47cdee17f2e47f671e33bc00 | [
"MIT"
] | 3 | 2021-04-06T01:22:32.000Z | 2022-03-12T01:39:12.000Z | CoMPILE_github/test_ranking.py | TmacMai/CoMPILE_Inductive_Knowledge_Graph | 072885012893a50b47cdee17f2e47f671e33bc00 | [
"MIT"
] | 4 | 2021-03-10T05:10:05.000Z | 2022-03-05T12:32:45.000Z | import os
import random
import argparse
import logging
import json
import time
import multiprocessing as mp
import scipy.sparse as ssp
from tqdm import tqdm
import networkx as nx
import torch
import numpy as np
import dgl
#os.environ["CUDA_VISIBLE_DEVICES"]="1"
def process_files(files, saved_relation2id, add_traspose_rels):
'''
files: Dictionary map of file paths to read the triplets from.
saved_relation2id: Saved relation2id (mostly passed from a trained model) which can be used to map relations to pre-defined indices and filter out the unknown ones.
'''
entity2id = {}
relation2id = saved_relation2id
triplets = {}
ent = 0
rel = 0
for file_type, file_path in files.items():
data = []
with open(file_path) as f:
file_data = [line.split() for line in f.read().split('\n')[:-1]]
for triplet in file_data:
if triplet[0] not in entity2id:
entity2id[triplet[0]] = ent
ent += 1
if triplet[2] not in entity2id:
entity2id[triplet[2]] = ent
ent += 1
# Save the triplets corresponding to only the known relations
if triplet[1] in saved_relation2id:
data.append([entity2id[triplet[0]], entity2id[triplet[2]], saved_relation2id[triplet[1]]])
triplets[file_type] = np.array(data)
id2entity = {v: k for k, v in entity2id.items()}
id2relation = {v: k for k, v in relation2id.items()}
# Construct the list of adjacency matrices, one per relation. Note that this is constructed only from the train data.
adj_list = []
for i in range(len(saved_relation2id)):
idx = np.argwhere(triplets['graph'][:, 2] == i)
adj_list.append(ssp.csc_matrix((np.ones(len(idx), dtype=np.uint8), (triplets['graph'][:, 0][idx].squeeze(1), triplets['graph'][:, 1][idx].squeeze(1))), shape=(len(entity2id), len(entity2id))))
# Add transpose matrices to handle both directions of relations.
adj_list_aug = adj_list
if add_traspose_rels:
adj_list_t = [adj.T for adj in adj_list]
adj_list_aug = adj_list + adj_list_t
dgl_adj_list = ssp_multigraph_to_dgl(adj_list_aug)
return adj_list, dgl_adj_list, triplets, entity2id, relation2id, id2entity, id2relation
def intialize_worker(model, adj_list, dgl_adj_list, id2entity, params, node_features, kge_entity2id):
global model_, adj_list_, dgl_adj_list_, id2entity_, params_, node_features_, kge_entity2id_
model_, adj_list_, dgl_adj_list_, id2entity_, params_, node_features_, kge_entity2id_ = model, adj_list, dgl_adj_list, id2entity, params, node_features, kge_entity2id
def get_neg_samples_replacing_head_tail(test_links, adj_list, num_samples=50):
n, r = adj_list[0].shape[0], len(adj_list)
heads, tails, rels = test_links[:, 0], test_links[:, 1], test_links[:, 2]
neg_triplets = []
for i, (head, tail, rel) in enumerate(zip(heads, tails, rels)):
neg_triplet = {'head': [[], 0], 'tail': [[], 0]}
neg_triplet['head'][0].append([head, tail, rel])
while len(neg_triplet['head'][0]) < num_samples:
neg_head = head
neg_tail = np.random.choice(n)
if neg_head != neg_tail and adj_list[rel][neg_head, neg_tail] == 0:
neg_triplet['head'][0].append([neg_head, neg_tail, rel])
neg_triplet['tail'][0].append([head, tail, rel])
while len(neg_triplet['tail'][0]) < num_samples:
neg_head = np.random.choice(n)
neg_tail = tail
# neg_head, neg_tail, rel = np.random.choice(n), np.random.choice(n), np.random.choice(r)
if neg_head != neg_tail and adj_list[rel][neg_head, neg_tail] == 0:
neg_triplet['tail'][0].append([neg_head, neg_tail, rel])
neg_triplet['head'][0] = np.array(neg_triplet['head'][0])
neg_triplet['tail'][0] = np.array(neg_triplet['tail'][0])
neg_triplets.append(neg_triplet)
return neg_triplets
def get_neg_samples_replacing_head_tail_all(test_links, adj_list):
n, r = adj_list[0].shape[0], len(adj_list)
heads, tails, rels = test_links[:, 0], test_links[:, 1], test_links[:, 2]
neg_triplets = []
print('sampling negative triplets...')
for i, (head, tail, rel) in tqdm(enumerate(zip(heads, tails, rels)), total=len(heads)):
neg_triplet = {'head': [[], 0], 'tail': [[], 0]}
neg_triplet['head'][0].append([head, tail, rel])
for neg_tail in range(n):
neg_head = head
if neg_head != neg_tail and adj_list[rel][neg_head, neg_tail] == 0:
neg_triplet['head'][0].append([neg_head, neg_tail, rel])
neg_triplet['tail'][0].append([head, tail, rel])
for neg_head in range(n):
neg_tail = tail
if neg_head != neg_tail and adj_list[rel][neg_head, neg_tail] == 0:
neg_triplet['tail'][0].append([neg_head, neg_tail, rel])
neg_triplet['head'][0] = np.array(neg_triplet['head'][0])
neg_triplet['tail'][0] = np.array(neg_triplet['tail'][0])
neg_triplets.append(neg_triplet)
return neg_triplets
def get_neg_samples_replacing_head_tail_from_ruleN(ruleN_pred_path, entity2id, saved_relation2id):
with open(ruleN_pred_path) as f:
pred_data = [line.split() for line in f.read().split('\n')[:-1]]
neg_triplets = []
for i in range(len(pred_data) // 3):
neg_triplet = {'head': [[], 10000], 'tail': [[], 10000]}
if pred_data[3 * i][1] in saved_relation2id:
head, rel, tail = entity2id[pred_data[3 * i][0]], saved_relation2id[pred_data[3 * i][1]], entity2id[pred_data[3 * i][2]]
for j, new_head in enumerate(pred_data[3 * i + 1][1::2]):
neg_triplet['head'][0].append([entity2id[new_head], tail, rel])
if entity2id[new_head] == head:
neg_triplet['head'][1] = j
for j, new_tail in enumerate(pred_data[3 * i + 2][1::2]):
neg_triplet['tail'][0].append([head, entity2id[new_tail], rel])
if entity2id[new_tail] == tail:
neg_triplet['tail'][1] = j
neg_triplet['head'][0] = np.array(neg_triplet['head'][0])
neg_triplet['tail'][0] = np.array(neg_triplet['tail'][0])
neg_triplets.append(neg_triplet)
return neg_triplets
def incidence_matrix(adj_list):
'''
adj_list: List of sparse adjacency matrices
'''
rows, cols, dats = [], [], []
dim = adj_list[0].shape
for adj in adj_list:
adjcoo = adj.tocoo()
rows += adjcoo.row.tolist()
cols += adjcoo.col.tolist()
dats += adjcoo.data.tolist()
row = np.array(rows)
col = np.array(cols)
data = np.array(dats)
return ssp.csc_matrix((data, (row, col)), shape=dim)
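# Tiny self-contained check (hypothetical helper, never called): the incidence
# matrix is simply the sum of the per-relation adjacency matrices.
def _example_incidence_matrix():
    a0 = ssp.csc_matrix(np.array([[0, 1], [0, 0]]))  # relation 0: edge 0 -> 1
    a1 = ssp.csc_matrix(np.array([[0, 0], [1, 0]]))  # relation 1: edge 1 -> 0
    merged = incidence_matrix([a0, a1])
    assert (merged.toarray() == np.array([[0, 1], [1, 0]])).all()
    return merged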
def _bfs_relational(adj, roots, max_nodes_per_hop=None):
"""
BFS for graphs with multiple edge types. Returns list of level sets.
Each entry in list corresponds to relation specified by adj_list.
    Modified from dgl.contrib.data.knowledge_graph to accommodate node sampling
"""
visited = set()
current_lvl = set(roots)
next_lvl = set()
while current_lvl:
for v in current_lvl:
visited.add(v)
next_lvl = _get_neighbors(adj, current_lvl)
next_lvl -= visited # set difference
if max_nodes_per_hop and max_nodes_per_hop < len(next_lvl):
next_lvl = set(random.sample(next_lvl, max_nodes_per_hop))
yield next_lvl
current_lvl = set.union(next_lvl)
def _get_neighbors(adj, nodes):
"""Takes a set of nodes and a graph adjacency matrix and returns a set of neighbors.
Directly copied from dgl.contrib.data.knowledge_graph"""
sp_nodes = _sp_row_vec_from_idx_list(list(nodes), adj.shape[1])
sp_neighbors = sp_nodes.dot(adj)
neighbors = set(ssp.find(sp_neighbors)[1]) # convert to set of indices
return neighbors
def _sp_row_vec_from_idx_list(idx_list, dim):
"""Create sparse vector of dimensionality dim from a list of indices."""
shape = (1, dim)
data = np.ones(len(idx_list))
row_ind = np.zeros(len(idx_list))
col_ind = list(idx_list)
return ssp.csr_matrix((data, (row_ind, col_ind)), shape=shape)
def get_neighbor_nodes(roots, adj, h=1, max_nodes_per_hop=None):
bfs_generator = _bfs_relational(adj, roots, max_nodes_per_hop)
lvls = list()
for _ in range(h):
try:
lvls.append(next(bfs_generator))
except StopIteration:
pass
return set().union(*lvls)
def subgraph_extraction_labeling(ind, rel, A_list, h=1, enclosing_sub_graph=False, max_nodes_per_hop=None, node_information=None, max_node_label_value=None):
# extract the h-hop enclosing subgraphs around link 'ind'
A_incidence = incidence_matrix(A_list)
A_incidence += A_incidence.T
# could pack these two into a function
root1_nei = get_neighbor_nodes(set([ind[0]]), A_incidence, h, max_nodes_per_hop)
root2_nei = get_neighbor_nodes(set([ind[1]]), A_incidence, h, max_nodes_per_hop)
subgraph_nei_nodes_int = root1_nei.intersection(root2_nei)
subgraph_nei_nodes_un = root1_nei.union(root2_nei)
# Extract subgraph | Roots being in the front is essential for labelling and the model to work properly.
if enclosing_sub_graph:
subgraph_nodes = list(ind) + list(subgraph_nei_nodes_int)
else:
subgraph_nodes = list(ind) + list(subgraph_nei_nodes_un)
subgraph = [adj[subgraph_nodes, :][:, subgraph_nodes] for adj in A_list]
labels, enclosing_subgraph_nodes = node_label_new(incidence_matrix(subgraph), max_distance=h)
pruned_subgraph_nodes = np.array(subgraph_nodes)[enclosing_subgraph_nodes].tolist()
pruned_labels = labels[enclosing_subgraph_nodes]
if max_node_label_value is not None:
pruned_labels = np.array([np.minimum(label, max_node_label_value).tolist() for label in pruned_labels])
return pruned_subgraph_nodes, pruned_labels
def remove_nodes(A_incidence, nodes):
idxs_wo_nodes = list(set(range(A_incidence.shape[1])) - set(nodes))
return A_incidence[idxs_wo_nodes, :][:, idxs_wo_nodes]
def node_label_new(subgraph, max_distance=1):
# an implementation of the proposed double-radius node labeling (DRNL)
roots = [0, 1]
sgs_single_root = [remove_nodes(subgraph, [root]) for root in roots]
dist_to_roots = [np.clip(ssp.csgraph.dijkstra(sg, indices=[0], directed=False, unweighted=True, limit=1e6)[:, 1:], 0, 1e7) for r, sg in enumerate(sgs_single_root)]
dist_to_roots = np.array(list(zip(dist_to_roots[0][0], dist_to_roots[1][0])), dtype=int)
# dist_to_roots[np.abs(dist_to_roots) > 1e6] = 0
# dist_to_roots = dist_to_roots + 1
target_node_labels = np.array([[0, 1], [1, 0]])
labels = np.concatenate((target_node_labels, dist_to_roots)) if dist_to_roots.size else target_node_labels
enclosing_subgraph_nodes = np.where(np.max(labels, axis=1) <= max_distance)[0]
# print(len(enclosing_subgraph_nodes))
return labels, enclosing_subgraph_nodes
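# Toy example of the double-radius labelling (hypothetical helper, never called):
# nodes 0 and 1 are the target head/tail, node 2 touches both and node 3 touches
# only the head. Node 2 is labelled (1, 1) and kept, while node 3 cannot reach
# the tail once the head is removed, so it falls outside the 1-hop enclosing
# subgraph.
def _example_node_label_new():
    rows = np.array([0, 1, 0])
    cols = np.array([2, 2, 3])
    data = np.ones(3, dtype=np.uint8)
    toy_subgraph = ssp.csc_matrix((data, (rows, cols)), shape=(4, 4))
    labels, kept = node_label_new(toy_subgraph, max_distance=1)
    return labels, kept  # kept should be array([0, 1, 2])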
def ssp_multigraph_to_dgl(graph, n_feats=None):
"""
Converting ssp multigraph (i.e. list of adjs) to dgl multigraph.
"""
g_nx = nx.MultiDiGraph()
g_nx.add_nodes_from(list(range(graph[0].shape[0])))
# Add edges
for rel, adj in enumerate(graph):
# Convert adjacency matrix to edge tuples for the networkx graph
nx_triplets = []
for src, dst in list(zip(adj.tocoo().row, adj.tocoo().col)):
nx_triplets.append((src, dst, {'type': rel}))
g_nx.add_edges_from(nx_triplets)
# make dgl graph
g_dgl = dgl.DGLGraph(multigraph=True)
g_dgl.from_networkx(g_nx, edge_attrs=['type'])
# add node features
if n_feats is not None:
g_dgl.ndata['feat'] = torch.tensor(n_feats)
return g_dgl
def prepare_features(subgraph, n_labels, max_n_label, n_feats=None):
# One-hot encode the node label feature and concatenate it to n_feats
n_nodes = subgraph.number_of_nodes()
label_feats = np.zeros((n_nodes, max_n_label[0] + 1 + max_n_label[1] + 1))
label_feats[np.arange(n_nodes), n_labels[:, 0]] = 1
label_feats[np.arange(n_nodes), max_n_label[0] + 1 + n_labels[:, 1]] = 1
n_feats = np.concatenate((label_feats, n_feats), axis=1) if n_feats is not None else label_feats
subgraph.ndata['feat'] = torch.FloatTensor(n_feats)
head_id = np.argwhere([label[0] == 0 and label[1] == 1 for label in n_labels])
tail_id = np.argwhere([label[0] == 1 and label[1] == 0 for label in n_labels])
n_ids = np.zeros(n_nodes)
n_ids[head_id] = 1 # head
n_ids[tail_id] = 2 # tail
subgraph.ndata['id'] = torch.FloatTensor(n_ids)
return subgraph
def get_subgraphs(all_links, adj_list, dgl_adj_list, max_node_label_value, id2entity, node_features=None, kge_entity2id=None):
# dgl_adj_list = ssp_multigraph_to_dgl(adj_list)
subgraphs = []
r_labels = []
for link in all_links:
head, tail, rel = link[0], link[1], link[2]
nodes, node_labels = subgraph_extraction_labeling((head, tail), rel, adj_list, h=params_.hop, enclosing_sub_graph=params.enclosing_sub_graph, max_node_label_value=max_node_label_value)
subgraph = dgl.DGLGraph(dgl_adj_list.subgraph(nodes))
subgraph.edata['type'] = dgl_adj_list.edata['type'][dgl_adj_list.subgraph(nodes).parent_eid]
subgraph.edata['label'] = torch.tensor(rel * np.ones(subgraph.edata['type'].shape), dtype=torch.long)
edges_btw_roots = subgraph.edge_id(0, 1)
rel_link = np.nonzero(subgraph.edata['type'][edges_btw_roots] == rel)
if rel_link.squeeze().nelement() == 0:
# subgraph.add_edge(0, 1, {'type': torch.tensor([rel]), 'label': torch.tensor([rel])})
subgraph.add_edge(0, 1)
subgraph.edata['type'][-1] = torch.tensor(rel).type(torch.LongTensor)
subgraph.edata['label'][-1] = torch.tensor(rel).type(torch.LongTensor)
kge_nodes = [kge_entity2id[id2entity[n]] for n in nodes] if kge_entity2id else None
n_feats = node_features[kge_nodes] if node_features is not None else None
subgraph = prepare_features(subgraph, node_labels, max_node_label_value, n_feats)
subgraphs.append(subgraph)
r_labels.append(rel)
# batched_graph = dgl.batch(subgraphs)
r_labels = torch.LongTensor(r_labels)
return (subgraphs, r_labels)
def get_rank(neg_links):
head_neg_links = neg_links['head'][0]
head_target_id = neg_links['head'][1]
if head_target_id != 10000:
data = get_subgraphs(head_neg_links, adj_list_, dgl_adj_list_, model_.max_label_value, id2entity_, node_features_, kge_entity2id_)
head_scores = model_(data[0]).squeeze(1).detach().numpy()
head_rank = np.argwhere(np.argsort(head_scores)[::-1] == head_target_id) + 1
else:
head_scores = np.array([])
head_rank = 10000
tail_neg_links = neg_links['tail'][0]
tail_target_id = neg_links['tail'][1]
if tail_target_id != 10000:
data = get_subgraphs(tail_neg_links, adj_list_, dgl_adj_list_, model_.max_label_value, id2entity_, node_features_, kge_entity2id_)
tail_scores = model_(data[0]).squeeze(1).detach().numpy()
tail_rank = np.argwhere(np.argsort(tail_scores)[::-1] == tail_target_id) + 1
else:
tail_scores = np.array([])
tail_rank = 10000
return head_scores, head_rank, tail_scores, tail_rank
def save_to_file(neg_triplets, id2entity, id2relation):
with open(os.path.join('./data', params.dataset, 'ranking_head.txt'), "w") as f:
for neg_triplet in neg_triplets:
for s, o, r in neg_triplet['head'][0]:
f.write('\t'.join([id2entity[s], id2relation[r], id2entity[o]]) + '\n')
with open(os.path.join('./data', params.dataset, 'ranking_tail.txt'), "w") as f:
for neg_triplet in neg_triplets:
for s, o, r in neg_triplet['tail'][0]:
f.write('\t'.join([id2entity[s], id2relation[r], id2entity[o]]) + '\n')
def save_score_to_file(neg_triplets, all_head_scores, all_tail_scores, id2entity, id2relation):
with open(os.path.join('./data', params.dataset, 'grail_ranking_head_predictions.txt'), "w") as f:
for i, neg_triplet in enumerate(neg_triplets):
for [s, o, r], head_score in zip(neg_triplet['head'][0], all_head_scores[50 * i:50 * (i + 1)]):
f.write('\t'.join([id2entity[s], id2relation[r], id2entity[o], str(head_score)]) + '\n')
with open(os.path.join('./data', params.dataset, 'grail_ranking_tail_predictions.txt'), "w") as f:
for i, neg_triplet in enumerate(neg_triplets):
for [s, o, r], tail_score in zip(neg_triplet['tail'][0], all_tail_scores[50 * i:50 * (i + 1)]):
f.write('\t'.join([id2entity[s], id2relation[r], id2entity[o], str(tail_score)]) + '\n')
def save_score_to_file_from_ruleN(neg_triplets, all_head_scores, all_tail_scores, id2entity, id2relation):
with open(os.path.join('./data', params.dataset, 'grail_ruleN_ranking_head_predictions.txt'), "w") as f:
for i, neg_triplet in enumerate(neg_triplets):
for [s, o, r], head_score in zip(neg_triplet['head'][0], all_head_scores[50 * i:50 * (i + 1)]):
f.write('\t'.join([id2entity[s], id2relation[r], id2entity[o], str(head_score)]) + '\n')
with open(os.path.join('./data', params.dataset, 'grail_ruleN_ranking_tail_predictions.txt'), "w") as f:
for i, neg_triplet in enumerate(neg_triplets):
for [s, o, r], tail_score in zip(neg_triplet['tail'][0], all_tail_scores[50 * i:50 * (i + 1)]):
f.write('\t'.join([id2entity[s], id2relation[r], id2entity[o], str(tail_score)]) + '\n')
def get_kge_embeddings(dataset, kge_model):
path = './experiments/kge_baselines/{}_{}'.format(kge_model, dataset)
node_features = np.load(os.path.join(path, 'entity_embedding.npy'))
with open(os.path.join(path, 'id2entity.json')) as json_file:
kge_id2entity = json.load(json_file)
kge_entity2id = {v: int(k) for k, v in kge_id2entity.items()}
return node_features, kge_entity2id
def main(params):
model = torch.load(params.model_path, map_location='cpu')
adj_list, dgl_adj_list, triplets, entity2id, relation2id, id2entity, id2relation = process_files(params.file_paths, model.relation2id, params.add_traspose_rels)
node_features, kge_entity2id = get_kge_embeddings(params.dataset, params.kge_model) if params.use_kge_embeddings else (None, None)
if params.mode == 'sample':
neg_triplets = get_neg_samples_replacing_head_tail(triplets['links'], adj_list)
save_to_file(neg_triplets, id2entity, id2relation)
elif params.mode == 'all':
neg_triplets = get_neg_samples_replacing_head_tail_all(triplets['links'], adj_list)
elif params.mode == 'ruleN':
neg_triplets = get_neg_samples_replacing_head_tail_from_ruleN(params.ruleN_pred_path, entity2id, relation2id)
ranks = []
all_head_scores = []
all_tail_scores = []
with mp.Pool(processes=None, initializer=intialize_worker, initargs=(model, adj_list, dgl_adj_list, id2entity, params, node_features, kge_entity2id)) as p:
for head_scores, head_rank, tail_scores, tail_rank in tqdm(p.imap(get_rank, neg_triplets), total=len(neg_triplets)):
ranks.append(head_rank)
ranks.append(tail_rank)
all_head_scores += head_scores.tolist()
all_tail_scores += tail_scores.tolist()
# intialize_worker(model, adj_list, dgl_adj_list, id2entity, params, node_features, kge_entity2id)
# for link in tqdm(neg_triplets, total=len(neg_triplets)):
# head_scores, head_rank, tail_scores, tail_rank = get_rank(link)
# ranks.append(head_rank)
# ranks.append(tail_rank)
# all_head_scores += head_scores.tolist()
# all_tail_scores += tail_scores.tolist()
if params.mode == 'ruleN':
save_score_to_file_from_ruleN(neg_triplets, all_head_scores, all_tail_scores, id2entity, id2relation)
else:
save_score_to_file(neg_triplets, all_head_scores, all_tail_scores, id2entity, id2relation)
isHit1List = [x for x in ranks if x <= 1]
isHit5List = [x for x in ranks if x <= 5]
isHit10List = [x for x in ranks if x <= 10]
hits_1 = len(isHit1List) / len(ranks)
hits_5 = len(isHit5List) / len(ranks)
hits_10 = len(isHit10List) / len(ranks)
mrr = np.mean(1 / np.array(ranks))
logger.info(f'MRR | Hits@1 | Hits@5 | Hits@10 : {mrr} | {hits_1} | {hits_5} | {hits_10}')
if __name__ == '__main__':
logging.basicConfig(level=logging.INFO)
parser = argparse.ArgumentParser(description='Testing script for hits@10')
# Experiment setup params
parser.add_argument("--experiment_name", "-e", type=str, default="fb_v2_margin_loss",
help="Experiment name. Log file with this name will be created")
parser.add_argument("--dataset", "-d", type=str, default="FB237_v2",
help="Path to dataset")
parser.add_argument("--mode", "-m", type=str, default="sample", choices=["sample", "all", "ruleN"],
help="Negative sampling mode")
parser.add_argument("--use_kge_embeddings", "-kge", type=bool, default=False,
help='whether to use pretrained KGE embeddings')
parser.add_argument("--kge_model", type=str, default="TransE",
help="Which KGE model to load entity embeddings from")
parser.add_argument('--enclosing_sub_graph', '-en', type=bool, default=True,
help='whether to only consider enclosing subgraph')
parser.add_argument("--hop", type=int, default=3,
help="How many hops to go while eextracting subgraphs?")
parser.add_argument('--add_traspose_rels', '-tr', type=bool, default=False,
help='Whether to append adj matrix list with symmetric relations?')
params = parser.parse_args()
params.file_paths = {
'graph': os.path.join('./data', params.dataset, 'train.txt'),
'links': os.path.join('./data', params.dataset, 'test.txt')
}
params.ruleN_pred_path = os.path.join('./data', params.dataset, 'pos_predictions.txt')
params.model_path = os.path.join('experiments', params.experiment_name, 'best_graph_classifier.pth')
file_handler = logging.FileHandler(os.path.join('experiments', params.experiment_name, f'log_rank_test_{time.time()}.txt'))
logger = logging.getLogger()
logger.addHandler(file_handler)
logger.info('============ Initialized logger ============')
logger.info('\n'.join('%s: %s' % (k, str(v)) for k, v
in sorted(dict(vars(params)).items())))
logger.info('============================================')
main(params)
| 42.200368 | 200 | 0.660191 | 3,332 | 22,957 | 4.289316 | 0.128451 | 0.027428 | 0.018612 | 0.017842 | 0.417017 | 0.367408 | 0.328995 | 0.285195 | 0.252379 | 0.241534 | 0 | 0.019681 | 0.205428 | 22,957 | 543 | 201 | 42.278085 | 0.763829 | 0.099229 | 0 | 0.166197 | 0 | 0.002817 | 0.070911 | 0.014688 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061972 | false | 0.002817 | 0.03662 | 0 | 0.143662 | 0.002817 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4428fa67a48dd401185f9736e5599509b0b9fc95 | 652 | py | Python | scheduler/scheduler.py | ericlearning/General-I2I | ba7c5d6a582bdf2e7b53c0e20c31e9097b1883a9 | [
"MIT"
] | 1 | 2019-12-20T15:08:18.000Z | 2019-12-20T15:08:18.000Z | scheduler/scheduler.py | ericlearning/General-I2I | ba7c5d6a582bdf2e7b53c0e20c31e9097b1883a9 | [
"MIT"
] | null | null | null | scheduler/scheduler.py | ericlearning/General-I2I | ba7c5d6a582bdf2e7b53c0e20c31e9097b1883a9 | [
"MIT"
] | null | null | null | import math
class LinearDecay():
    def __init__(self, opt, optimizer, iter_num):
        self.optimizer = optimizer
        self.init = opt.lr
        self.tot = iter_num * opt.epoch
        self.st = iter_num * opt.decay_start_epoch
        if(self.st < 0): self.st = self.tot
        self.cnt = 0
        # note: this attribute shadows the state_dict() method defined below
        self.state_dict = self.optimizer.state_dict()

    def step(self):
        for p in self.optimizer.param_groups:
            if(self.cnt < self.st):
                p['lr'] = self.init
            else:
                p['lr'] = self.init * (1.0 - (self.cnt - self.st) / (self.tot - 1 - self.st))
        self.cnt += 1
        self.optimizer.step()

    def zero_grad(self):
        self.optimizer.zero_grad()

    def state_dict(self):
return self.state_dict | 25.076923 | 81 | 0.662577 | 107 | 652 | 3.88785 | 0.308411 | 0.086538 | 0.072115 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011321 | 0.187117 | 652 | 26 | 82 | 25.076923 | 0.773585 | 0 | 0 | 0 | 0 | 0 | 0.006126 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.045455 | 0.045455 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4429767250a51905f861cf7049625276399f4004 | 23,698 | py | Python | pm.py | oscarmonllor99/dark_matter_study | b301cc2a4aa33d8b044b99da6310483814df55f1 | [
"MIT"
] | null | null | null | pm.py | oscarmonllor99/dark_matter_study | b301cc2a4aa33d8b044b99da6310483814df55f1 | [
"MIT"
] | null | null | null | pm.py | oscarmonllor99/dark_matter_study | b301cc2a4aa33d8b044b99da6310483814df55f1 | [
"MIT"
] | null | null | null | import numpy as np
from numba import jit
import random
import time
import argparse
##############################################
######### PHYSICAL PARAMETERS ###############
##############################################
Q = 1.3
NUM_PARTICLES = 100000 #Number of particles
NUM_PARTICLES_BULGE = int(0.14 * NUM_PARTICLES) #14% of the ordinary matter belongs to the bulge
M_TOTAL = 3245*2.325*1e7 #total mass of the interacting particles
M_PARTICLE = M_TOTAL / NUM_PARTICLES #mass of each particle in solar masses
G = 4.518 * 1e-12 #gravitational constant in kpc, solar masses and Myr
T_SOL = 225 #orbital period of the Sun around the galaxy in Myr
##############################################
##############################################
##############################################
### FUNCTIONS TO BE USED ###
##############################################
"""
@jit(nopython=True, fastmath = True, parallel = False)
def W(d, h):
if d <= h:
return 1 - d/h
else:
return 0
"""
@jit(nopython=True, fastmath = True, parallel = False)
def W(d, h):
if d <= h/2:
return 3/4 - (d/h)**2
elif h/2 <= d <= 3*h/2:
return 0.5*(3/2 - d/h)**2
else:
return 0
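# Optional sanity check (hypothetical helper, never called): W is the quadratic
# spline assignment kernel used by densidad() and fuerza(); for any particle
# position the weights of the three nearest cell centres along one axis sum to
# 1, so the deposited mass is conserved.
def _check_kernel_normalization(x=3.7, h=1.0):
    cell = int(x / h)
    total = sum(W(abs((cell + off + 0.5) * h - x), h) for off in (-1, 0, 1))
    assert abs(total - 1.0) < 1e-12
    return total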
@jit(nopython=True, fastmath = True, parallel = False)
def factor_r(r):
ah = 7.7
return (1/r**3)*np.log(1+r/ah) - 1/(ah*r**2 + r**3)
@jit(nopython=True, fastmath = True, parallel = False)
def f_dark(r_vec, lim):
r_dark = np.zeros(3)
r_dark[0] = r_vec[0] - lim/2
r_dark[1] = r_vec[1] - lim/2
r_dark[2] = r_vec[2] - lim/2
#MODEL 3: Navarro-Frenk-White
Mh = 12474 * 2.325*1e7
r = np.sqrt(np.dot(r_dark, r_dark))
g = np.zeros(3)
if r > 0.0:
factor = factor_r(r)
g = -G*Mh*factor*r_dark
return g
@jit(nopython=True, fastmath = True, parallel = False)
def pot_dark(r_vec, lim):
r_dark = np.zeros(3)
r_dark[0] = r_vec[0] - lim/2
r_dark[1] = r_vec[1] - lim/2
r_dark[2] = r_vec[2] - lim/2
#MODEL 3: Navarro-Frenk-White
Mh = 12474 * 2.325*1e7
ah = 7.7
r = np.sqrt(np.dot(r_dark, r_dark))
if r > 0.0:
return (-G*Mh/r)*np.log(1 + r/ah)
else:
return 0.0
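# Optional consistency check (hypothetical helper, never called): f_dark and
# pot_dark describe the same Navarro-Frenk-White halo, so the radial
# acceleration should match the numerical derivative of the potential,
# |g(r)| = d(phi)/dr for this attractive field.
def _check_nfw_consistency(r=10.0, lim=100.0, dr=1e-4):
    r_vec = np.array([lim/2 + r, lim/2, lim/2])
    g_mag = np.linalg.norm(f_dark(r_vec, lim))
    phi_plus = pot_dark(np.array([lim/2 + r + dr, lim/2, lim/2]), lim)
    phi_minus = pot_dark(np.array([lim/2 + r - dr, lim/2, lim/2]), lim)
    dphi_dr = (phi_plus - phi_minus) / (2*dr)
    assert abs(g_mag - dphi_dr) / abs(dphi_dr) < 1e-5
    return g_mag, dphi_dr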
@jit(nopython=True, fastmath = True, parallel = False)
def pot_bulge(r_vec, lim):
r_dark = np.zeros(3)
r_dark[0] = r_vec[0] - lim/2
r_dark[1] = r_vec[1] - lim/2
r_dark[2] = r_vec[2] - lim/2
#MODEL 3: Navarro-Frenk-White
Mb = 1.05*1e10
bb = 0.267
r = np.sqrt(np.dot(r_dark, r_dark))
return - G*Mb / (r**2 + bb**2)**(1/2)
@jit(nopython=True, fastmath = True, parallel = False)
def pot_disk(r_vec, lim):
r_dark = np.zeros(3)
r_dark[0] = r_vec[0] - lim/2
r_dark[1] = r_vec[1] - lim/2
r_dark[2] = r_vec[2] - lim/2
#MODEL 3: Navarro-Frenk-White
Md = 6.5*1e10
bd = 0.308
ad = 4.4
r = np.sqrt(np.dot(r_dark, r_dark))
return -G*Md/(r**2 + (ad+bd)**2)**(1/2)
@jit(nopython=True, fastmath = True, parallel = False)
def CM(r_list):
return np.sum(r_list * M_PARTICLE, axis=0) / M_TOTAL
@jit(nopython=True, fastmath = True)
def fuerza(r_list, g, i, dark, Np, h, lim):
r_list_i = r_list[i, :]
#identify which cell the particle is in
x_pos = int(r_list_i[0]/h)
y_pos = int(r_list_i[1]/h)
z_pos = int(r_list_i[2]/h)
if x_pos <= 1 or x_pos >= Np-1 or y_pos <= 1 or y_pos >= Np-1 or z_pos <= 1 or z_pos >= Np-1:
if dark:
r_centro = np.array([lim/2, lim/2, lim/2])
return ( -G*M_TOTAL*(r_list_i - r_centro)/np.linalg.norm(r_list_i - r_centro)**3
+ f_dark(r_list_i, lim) )
else:
r_centro = np.array([lim/2, lim/2, lim/2])
return -G*M_TOTAL*(r_list[i] - r_centro)/np.linalg.norm(r_list[i] - r_centro)**3
else:
fuerza_i = np.array([0., 0., 0.])
for x in range(-1, 2):
for y in range(-1, 2):
for z in range(-1, 2):
dx = abs((x_pos + x + 0.5)*h - r_list[i,0])
dy = abs((y_pos + y + 0.5)*h - r_list[i,1])
dz = abs((z_pos + z + 0.5)*h - r_list[i,2])
fuerza_i += ( g[:, x_pos + x, y_pos + y, z_pos + z]
* W(dx, h) * W(dy, h) * W(dz, h) )
#r_vec = r_list_i - lim/2
if dark:
return fuerza_i + f_dark(r_list_i, lim)
else:
return fuerza_i
@jit(nopython=True, fastmath = True, parallel = False)
def densidad(r_list, Np, h):
rho = np.zeros((Np, Np, Np))
for i in range(NUM_PARTICLES):
x_pos = int(r_list[i,0] // h)
y_pos = int(r_list[i,1] // h)
z_pos = int(r_list[i,2] // h)
if (x_pos <= 1 or x_pos >= Np-1 or y_pos <= 1
or y_pos >= Np-1 or z_pos <= 1 or z_pos >= Np-1):
pass
else:
for x in range(-1, 2):
for y in range(-1, 2):
for z in range(-1, 2):
dx = abs((x_pos + x + 0.5)*h - r_list[i,0])
dy = abs((y_pos + y + 0.5)*h - r_list[i,1])
dz = abs((z_pos + z + 0.5)*h - r_list[i,2])
rho[x_pos + x, y_pos + y, z_pos + z] += M_PARTICLE * W(dx, h) * W(dy, h) * W(dz, h)/h**3
return rho
@jit(nopython=True, fastmath = True)
def poisson(rho, phi, Np, h):
w = 0.95
tol = 1e-4
acabar = False
while not acabar:
max_diff = 0
for i in range(1, Np-1):
for j in range(1, Np-1):
for k in range(1, Np-1):
phi_0 = phi[i, j, k]
phi[i, j, k] = (1.+w)*(1/6)*(phi[i+1, j, k] + phi[i-1, j, k]
+ phi[i, j+1, k] + phi[i, j-1, k]
+ phi[i, j, k+1] + phi[i, j, k-1]
- h**2 * 4.*np.pi*G*rho[i, j, k]) - w*phi[i, j, k]
diff = abs(phi_0 - phi[i,j,k])
if diff > max_diff:
max_diff = diff
if max_diff < tol:
acabar = True
return phi
@jit(nopython=True, fastmath = True)
def gradiente(phi, Np, h):
g = np.zeros((3, Np, Np, Np))
for i in range(1, Np-1):
for j in range(1, Np-1):
for k in range(1, Np-1):
g[0,i,j,k] = - (phi[i+1, j, k] - phi[i-1, j, k])/(2*h)
g[1,i,j,k] = - (phi[i, j+1, k] - phi[i, j-1, k])/(2*h)
g[2,i,j,k] = - (phi[i, j, k+1] - phi[i, j, k-1])/(2*h)
return g
@jit(nopython = True, fastmath = True, parallel = False)
def ener_pot_calculator(r_list, phi, dark, lim, Np, h):
ener_list = np.zeros(NUM_PARTICLES)
for i in range(NUM_PARTICLES):
x_pos = int(r_list[i,0] // h)
y_pos = int(r_list[i,1] // h)
z_pos = int(r_list[i,2] // h)
if (x_pos <= 0 or x_pos >= Np-1 or y_pos <= 0 or y_pos >= Np-1 or
z_pos <= 0 or z_pos >= Np-1):
if dark:
r_centro = np.array([lim/2, lim/2, lim/2])
ener_list[i] = (-G*M_PARTICLE*M_TOTAL/np.linalg.norm(r_list[i]-r_centro)
+ M_PARTICLE*pot_dark(r_list[i], lim))
else:
r_centro = np.array([lim/2, lim/2, lim/2])
ener_list[i] = -G*M_PARTICLE*M_TOTAL/np.linalg.norm(r_list[i]-r_centro)
else:
pot_i = 0.
for x in range(-1, 2):
for y in range(-1, 2):
for z in range(-1, 2):
dx = abs((x_pos + x + 0.5)*h - r_list[i,0])
dy = abs((y_pos + y + 0.5)*h - r_list[i,1])
dz = abs((z_pos + z + 0.5)*h - r_list[i,2])
pot_i += (phi[x_pos + x, y_pos + y, z_pos + z]
* W(dx, h) * W(dy, h) * W(dz, h) )
if dark:
ener_list[i] = (M_PARTICLE*pot_i + M_PARTICLE*pot_dark(r_list[i], lim))
else:
ener_list[i] = M_PARTICLE*pot_i
return ener_list
###########################################################################
###########################################################################
def cond_inicial(lim, k_vel, eps, dark, Np, h, phi0):
#Monte Carlo sampling to obtain the initial density distributions
###################################################
###################################################
@jit(fastmath = True, nopython = True)
def bulge(r):
b = 0.267
return 1/(r**2 + b**2)**(5/2)
max_bulge = bulge(0)
@jit(fastmath = True, nopython = True)
def MN(R, z):
a = 4.4
b = 0.308
return ( (a*R**2 + (a + 3*(z**2 + b**2)**(1/2))*(a + (z**2+b**2)**(1/2))**2 )
/ (( R**2 + (a + (z**2 + b**2)**(1/2))**2)**(5/2) * (z**2 + b**2)**(3/2)) )
max_disk = MN(0, 0)
@jit(fastmath = True, nopython = True)
def get_random_bulge(max_bulge):
R = random.uniform(0,3)
y = random.uniform(0, max_bulge)
while y > bulge(R):
R = random.uniform(0,3)
y = random.uniform(0, max_bulge)
return R
@jit(fastmath = True, nopython = True)
def get_random_disk(max_disk):
R = random.uniform(0, 50)
z = random.uniform(-2, 2)
y = random.uniform(0, max_disk)
while y > MN(R,z):
R = random.uniform(0, 50)
z = random.uniform(-2, 2)
y = random.uniform(0, max_disk)
return R,z
###################################################
###################################################
r_list_0 = np.zeros((NUM_PARTICLES, 3), dtype = float)
r_esf_tot = np.zeros((NUM_PARTICLES, 3), dtype = float)
for i in range(NUM_PARTICLES):
if i < NUM_PARTICLES_BULGE:
R = get_random_bulge(max_bulge)
while R>lim/2-1:
R = get_random_bulge(max_bulge)
theta = random.uniform(0, np.pi)
phi = random.uniform(0, 2*np.pi)
r_esf = [R, phi, theta]
r_esf_tot[i] = r_esf
r_list_0[i, 0] = lim/2 + R*np.cos(phi)*np.sin(theta)
r_list_0[i, 1] = lim/2 + R*np.sin(phi)*np.sin(theta)
r_list_0[i, 2] = lim/2 + R*np.cos(theta)
else:
R, z = get_random_disk(max_disk)
while R>lim/2-1 or z>lim/2-1:
R, z = get_random_disk(max_disk)
phi = random.uniform(0, 2*np.pi)
r_esf = [R, phi, z]
r_esf_tot[i] = r_esf
r_list_0[i, 0] = lim/2 + R*np.cos(phi)
r_list_0[i, 1] = lim/2 + R*np.sin(phi)
r_list_0[i, 2] = lim/2 + z
rho = densidad(r_list_0, Np, h)
phi = poisson(rho, phi0, Np, h)
g = gradiente(phi, Np, h)
Rs = np.linspace(0, lim/2, 10000)
NR = len(Rs)
HR = lim / NR #side of a cell
phi_data = np.zeros(len(Rs))
for i in range(len(phi_data)):
r_vec = np.array([HR*(i+0.5) + lim/2, lim/2, lim/2])
if dark:
phi_data[i] += pot_dark(r_vec, lim) + pot_bulge(r_vec, lim) + pot_disk(r_vec, lim)
else:
phi_data[i] += pot_bulge(r_vec, lim) + pot_disk(r_vec, lim)
phi1_data = np.gradient(phi_data, HR)
phi2_data = np.gradient(phi1_data, HR)
v_list_0 = np.zeros((NUM_PARTICLES, 3), dtype = float)
def k_ep(R):
R_pos = int(R/HR)
phi1_k = phi1_data[R_pos]
phi2_k = phi2_data[R_pos]
return np.sqrt(((3/R) * phi1_k) + (phi2_k))
def v_circular(R):
R_pos = int(R/HR)
phi1_k = phi1_data[R_pos]
return np.sqrt(R*phi1_k)
def omega(R):
R_pos = int(R/HR)
phi1_k = phi1_data[R_pos]
return np.sqrt(abs((1/R)*phi1_k))
def sigma(R):
Md = 2798 * 2.325*1e7
hs = 2.43
return Md/(2*np.pi*hs**2) * np.exp(-R/hs)
def sigma2_R(R):
return (Q*3.36*G*sigma(R) / k_ep(R))**2
def sigma2_phi(R):
return sigma2_R(R) * (k_ep(R)/(2*omega(R)))**2
def sigma2_z(R):
z0 = 0.26
return np.pi*G*sigma(R)*z0
def v2_phi(R):
hs = 2.43
return v_circular(R)**2 + sigma2_R(R) * (1 - (k_ep(R)/(2*omega(R)))**2 - 2*R/hs)
E_rot = 0
E_pot = 0
for i in range(NUM_PARTICLES):
x_pos = int(r_list_0[i,0] // h)
y_pos = int(r_list_0[i,1] // h)
z_pos = int(r_list_0[i,2] // h)
if dark:
phii = ( pot_dark(r_list_0[i,:], lim) + pot_bulge(r_list_0[i,:], lim)
+ pot_disk(r_list_0[i,:], lim) )
E_pot += M_PARTICLE*phii
else:
phii = pot_bulge(r_list_0[i,:], lim) + pot_disk(r_list_0[i,:], lim)
E_pot += M_PARTICLE*phii
E_pot += M_PARTICLE*phii
v_esc = np.sqrt(-2 * phii)
R_norm = r_esf_tot[i,0]
v_circ = k_vel*v_circular(R_norm)
phi_g = r_esf_tot[i, 1]
if i < NUM_PARTICLES_BULGE:
theta_g = r_esf_tot[i, 2]
v_r = random.uniform(-0.3*v_esc, 0.3*v_esc)
v_z = v_circ*np.cos(theta_g)
v_tan = v_circ*np.sin(theta_g)
E_rot += 0.5*M_PARTICLE*v_tan**2
v_list_0[i, 0] = k_vel * (-v_tan * np.sin(phi_g) + v_r * np.cos(phi_g))
v_list_0[i, 1] = k_vel * (v_tan * np.cos(phi_g) + v_r * np.sin(phi_g))
v_list_0[i, 2] = k_vel * v_z
else:
v_phi_med = np.sqrt(v2_phi(R_norm))
sigma_R = np.sqrt(sigma2_R(R_norm))
sigma_phi = np.sqrt(sigma2_phi(R_norm))
sigma_z = np.sqrt(sigma2_z(R_norm))
v_tan = random.gauss(v_phi_med, sigma_phi)
v_R = random.gauss(0, sigma_R)
v_z = random.gauss(0, sigma_z)
E_rot += 0.5*M_PARTICLE*v_tan**2
v_list_0[i, 0] = (-v_tan * np.sin(phi_g) + v_R * np.cos(phi_g))
v_list_0[i, 1] = (v_tan * np.cos(phi_g) + v_R * np.sin(phi_g))
v_list_0[i, 2] = v_z
f_list_0 = np.zeros((NUM_PARTICLES, 3))
for i in range(NUM_PARTICLES):
force = fuerza(r_list_0, g, i, dark, Np, h, lim)
f_list_0[i, 0] = force[0]
f_list_0[i, 1] = force[1]
f_list_0[i, 2] = force[2]
print('Ostriker-Peebles criterion (t ~< 0.14): ', E_rot/abs(0.5*E_pot))
return r_list_0, v_list_0, f_list_0
#################################################################
#################################################################
#################################################################
#The leapfrog algorithm is applied to solve the first time steps
#################################################################
@jit(nopython=True, fastmath = True)
def force_update(r_list_new, g, dark, Np, h, lim):
f_list_new = np.zeros((NUM_PARTICLES, 3))
for i in range(NUM_PARTICLES):
force = fuerza(r_list_new, g, i, dark, Np, h, lim)
f_list_new[i, 0] = force[0]
f_list_new[i, 1] = force[1]
f_list_new[i, 2] = force[2]
return f_list_new
def paso(r_list, v_list, f_list, dt, eps, dark, lim,
Np, h, phi0):
r_list_new = r_list + dt*v_list + 0.5 * dt**2 * f_list
rho = densidad(r_list_new, Np, h)
phi = poisson(rho, phi0, Np, h)
g = gradiente(phi, Np, h)
f_list_new = force_update(r_list_new, g, dark, Np, h, lim)
v_list_new = v_list + 0.5*dt*(f_list_new + f_list)
return r_list_new, v_list_new, f_list_new, phi
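# Small illustration (hypothetical helper, never called): paso() uses the
# velocity-Verlet (leapfrog) update r_{n+1} = r_n + v_n*dt + a_n*dt**2/2,
# v_{n+1} = v_n + (a_n + a_{n+1})*dt/2. The same rules applied to a 1D
# harmonic oscillator stay close to the exact solution (cos(t), -sin(t)).
def _leapfrog_demo(steps=1000, dt=0.01):
    x, v = 1.0, 0.0
    a = -x
    for _ in range(steps):
        x = x + dt*v + 0.5*dt**2*a
        a_new = -x
        v = v + 0.5*dt*(a + a_new)
        a = a_new
    return x, v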
#################################################################
#################################################################
#####################################################################################################
#####################################################################################################
#####################################################################################################
#####################################################################################################
# FUNCTION THAT OBTAINS THE TRAJECTORIES AND THE CHARACTERISTICS FOR EACH TIME STEP
# BY CALLING THE STEP-SOLVER METHODS
#####################################################################################################
#####################################################################################################
def tiempo(r_list, v_list, f_list, n, n_r,
n_v, div_r, div_v, dt, eps, dark, lim,
Np, h, phi0):
#List of trajectories of all the particles
tray = np.empty((n_r, NUM_PARTICLES, 3))
tray_CM = np.empty((n_r, 3))
vels = np.empty((n_v, NUM_PARTICLES, 3))
eners = np.empty((n_v, NUM_PARTICLES))
for k in range(n):
#These are the time-step indices used to save r and v
R_CM = CM(r_list)
if k == 1:
t0 = time.time()
r_list, v_list, f_list, phi = paso(r_list, v_list, f_list, dt, eps, dark, lim,
Np, h, phi0)
tf = time.time()
print("El programa va a tardar:", int(n*(tf-t0)/60),"minutos")
else:
r_list, v_list, f_list, phi = paso(r_list, v_list, f_list, dt, eps, dark, lim,
Np, h, phi0)
if k%div_r == 0.0:
k_r = int(k // div_r)
tray[k_r, :, :] = r_list[:, :]
tray_CM[k_r, :] = R_CM[:]
if k%div_v == 0.0:
k_v = int(k // div_v)
print((k/n)*100, "%")
vels[k_v, :, :] = v_list[:, :]
ener_list = ener_pot_calculator(r_list, phi, dark, lim, Np, h)
eners[k_v, :] = ener_list[:]
return tray, tray_CM, vels, eners
####################################################################################
####################################################################################
####################################################################################
#############################
# MAIN
###########################
def main(args):
##############################################
### SIMULATION PARAMETERS
##############################################
LIM = args.lim #in kpc
TIME_STEPS = args.time_step #total number of time steps
DIV_R = args.div_r
DIV_V = args.div_v
N_R = int(TIME_STEPS // DIV_R) #number of saved time steps for r
N_V = int(TIME_STEPS // DIV_V) #number of saved time steps for v
DT = T_SOL / 100 #time interval between steps
D_MIN = args.dmin #minimum distance at which the force is computed, in kpc
EPS = np.sqrt(2*D_MIN / 3**(3/2))
K_VEL= args.k_vel #initial angular momentum control parameter (0 --> zero initial angular velocity,
# 1 --> maximum initial angular velocity)
DARK = args.dark #whether dark matter is included or not
NP = args.Nc #cells along each axis
H = LIM / NP #spacing between grid points along the X axis
simulation_parameters = np.array([NUM_PARTICLES, LIM, TIME_STEPS, DIV_R, DIV_V, DT, NP, EPS, K_VEL, DARK])
np.savetxt('parameters.dat', simulation_parameters, fmt = '%.5e')
##############################################
##############################################
R0 = np.zeros((NP, NP, NP)) #for the boundary conditions of the potential
@jit(nopython=True, fastmath = True)
def bordes(r):
for i in range(NP):
for j in range(NP):
for k in range(NP):
r[i, j, k] = np.sqrt(((i-NP/2)*H)**2 + ((j-NP/2)*H)**2 + ((k-NP/2)*H)**2)
return r
R = bordes(R0)
PHI0 = np.zeros((NP, NP, NP))
#BOUNDARY-CONDITION PLANES
PHI0[:, :, 0] = -G*M_TOTAL/(R[:, :, 0])
PHI0[:, :, NP-1] = -G*M_TOTAL/(R[:, :, NP-1])
PHI0[:, 0, :] = -G*M_TOTAL/(R[:, 0, :])
PHI0[:, NP-1, :] = -G*M_TOTAL/(R[:, NP-1, :])
PHI0[0, :, :] = -G*M_TOTAL/(R[0, :, :])
PHI0[NP-1, :, :] = -G*M_TOTAL/(R[NP-1, :, :])
r_list_0, v_list_0, f_list_0 = cond_inicial(LIM, K_VEL, EPS, DARK, NP, H, PHI0)
t0 = time.time()
trayectorias, trayectoria_CM, velocidades, energia_pot = tiempo(r_list_0, v_list_0, f_list_0, TIME_STEPS,
N_R, N_V, DIV_R, DIV_V, DT, EPS, DARK, LIM,
NP, H, PHI0)
tf = time.time()
print('The program took: ', ((tf-t0)/60), 'minutes to complete the trajectories.')
trayectorias3D = trayectorias.reshape(trayectorias.shape[0], -1)
velocidades3D = velocidades.reshape(velocidades.shape[0], -1)
np.savetxt('trayectorias.dat', trayectorias3D, fmt = '%.3e') #fmt sets the number of decimal places
np.savetxt('trayectoria_CM.dat', trayectoria_CM, fmt = '%.3e')
np.savetxt('velocidades.dat', velocidades3D, fmt = '%.3e')
np.savetxt('energia_pot.dat', energia_pot, fmt = '%.3e')
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="FB")
parser.add_argument(
"--lim",
type=float,
default=100.0,
help="en kpc.",
)
parser.add_argument(
"--time_step",
type=int,
default=1000,
help="timesteps.",
)
parser.add_argument(
"--div_r",
type=int,
default=50,
help="divr.",
)
parser.add_argument(
"--div_v",
type=int,
default=50,
help="divv.",
)
parser.add_argument(
"--dmin",
type=float,
default=0.05,
help="dmin.",
)
parser.add_argument(
"--k_vel",
type=float,
default=1,
help="kvel.",
)
parser.add_argument(
"--dark",
type=bool,
default=True,
help="dark_matter.",
)
parser.add_argument(
"--Nc",
type=int,
default=100,
help="x_cells.",
)
args = parser.parse_args()
main(args)
| 34.85 | 113 | 0.433243 | 3,369 | 23,698 | 2.864055 | 0.099436 | 0.03731 | 0.019277 | 0.035755 | 0.542129 | 0.510623 | 0.465126 | 0.433413 | 0.402632 | 0.370608 | 0 | 0.041435 | 0.347202 | 23,698 | 679 | 114 | 34.901325 | 0.582288 | 0.06279 | 0 | 0.364407 | 0 | 0 | 0.018333 | 0 | 0 | 0 | 0 | 0.001473 | 0 | 1 | 0.063559 | false | 0.002119 | 0.010593 | 0.006356 | 0.148305 | 0.008475 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4429e980a041daa0f6ae5470ca29efeec75f048a | 1,526 | py | Python | 08_apples_and_bananas/apples.py | trev-f/tiny_python_projects | 20b05f1def834bc9deda58ebdb5cb1d7fe647e45 | [
"MIT"
] | null | null | null | 08_apples_and_bananas/apples.py | trev-f/tiny_python_projects | 20b05f1def834bc9deda58ebdb5cb1d7fe647e45 | [
"MIT"
] | null | null | null | 08_apples_and_bananas/apples.py | trev-f/tiny_python_projects | 20b05f1def834bc9deda58ebdb5cb1d7fe647e45 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
"""
Author : treevooor <treevooor@localhost>
Date : 2021-11-09
Purpose: Apples and bananas
"""
import argparse
import os
# --------------------------------------------------
def get_args():
"""Get command-line arguments"""
parser = argparse.ArgumentParser(
description='Apples and bananas',
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('text',
metavar='text',
help='Input text or file')
parser.add_argument('-v',
'--vowel',
help='The vowel to substitute',
metavar='vowel',
type=str,
choices=['a', 'e', 'i', 'o', 'u'],
default='a')
args = parser.parse_args()
if os.path.isfile(args.text):
args.text = open(args.text).read().rstrip()
return args
# --------------------------------------------------
def main():
"""Make a jazz noise here"""
args = get_args()
vowels = 'aeiou'
out_text = ''
for letter in args.text:
if letter.lower() in vowels:
if letter.islower():
out_text += letter.replace(letter, args.vowel)
else:
out_text += letter.replace(letter, args.vowel.upper())
else:
out_text += letter
print(out_text)
# --------------------------------------------------
if __name__ == '__main__':
main()
| 24.222222 | 70 | 0.467235 | 144 | 1,526 | 4.819444 | 0.541667 | 0.050432 | 0.056196 | 0.057637 | 0.100865 | 0.100865 | 0.100865 | 0 | 0 | 0 | 0 | 0.008654 | 0.31848 | 1,526 | 62 | 71 | 24.612903 | 0.658654 | 0.205111 | 0 | 0.057143 | 0 | 0 | 0.083893 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.057143 | 0 | 0.142857 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
442b111ca91187ce9c255240e5dfebe65f40b89f | 348 | py | Python | configs/siam_hrnet/siam_hr18_512x512_40k_s2looking_backsplit.py | slchenchn/rsaicp_CD | 08723b6da125b4ebe7f4777be8ef14a1b5746523 | [
"Apache-2.0"
] | null | null | null | configs/siam_hrnet/siam_hr18_512x512_40k_s2looking_backsplit.py | slchenchn/rsaicp_CD | 08723b6da125b4ebe7f4777be8ef14a1b5746523 | [
"Apache-2.0"
] | null | null | null | configs/siam_hrnet/siam_hr18_512x512_40k_s2looking_backsplit.py | slchenchn/rsaicp_CD | 08723b6da125b4ebe7f4777be8ef14a1b5746523 | [
"Apache-2.0"
] | 1 | 2022-03-21T07:37:24.000Z | 2022-03-21T07:37:24.000Z | '''
Author: Shuailin Chen
Created Date: 2021-07-06
Last Modified: 2021-08-18
content: siamese HR18 with background splitting
'''
_base_ = [
'./siam_hr18_512x512_40k_s2looking.py'
]
model = dict(
decode_head=dict(
post_process=dict(
type='SetConstValue',
position=0,
value=0.1,
)
)
) | 18.315789 | 48 | 0.603448 | 41 | 348 | 4.926829 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128 | 0.281609 | 348 | 19 | 49 | 18.315789 | 0.68 | 0.367816 | 0 | 0 | 0 | 0 | 0.222727 | 0.163636 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
442c17486ddf7abefd979fe9ad65f0fda830f7ae | 1,774 | py | Python | examples/streak/plot.py | bjornaa/ladim2 | f6c1be9028ca54370ce33dde25b005d5b0bb4677 | [
"MIT"
] | null | null | null | examples/streak/plot.py | bjornaa/ladim2 | f6c1be9028ca54370ce33dde25b005d5b0bb4677 | [
"MIT"
] | null | null | null | examples/streak/plot.py | bjornaa/ladim2 | f6c1be9028ca54370ce33dde25b005d5b0bb4677 | [
"MIT"
] | null | null | null | """Plot a snapshot of the particle distribution"""
# --------------------------------
# Bjørn Ådlandsvik <bjorn@ho.no>
# Institute of Marine Research
# November 2020
# --------------------------------
import numpy as np
import matplotlib.pyplot as plt
from netCDF4 import Dataset
from postladim import ParticleFile
# ---------------
# User settings
# ---------------
# Files
particle_file = "out.nc"
grid_file = "../data/ocean_avg_0014.nc"
# Subgrid definition
i0, i1 = 100, 136
j0, j1 = 90, 121
# timestamp
t = 176
# ----------------
# ROMS grid, plot domain
with Dataset(grid_file) as nc:
H = nc.variables["h"][j0:j1, i0:i1]
M = nc.variables["mask_rho"][j0:j1, i0:i1]
lon = nc.variables["lon_rho"][j0:j1, i0:i1]
lat = nc.variables["lat_rho"][j0:j1, i0:i1]
M[M > 0] = np.nan # Mask out sea cells
# particle_file
pf = ParticleFile(particle_file)
fig = plt.figure(figsize=(12, 10))
ax = fig.add_subplot(1, 1, 1)
# Center and boundary grid points
Xc = np.arange(i0, i1)
Yc = np.arange(j0, j1)
Xb = np.arange(i0 - 0.5, i1)
Yb = np.arange(j0 - 0.5, j1)
# --- Background map ---
# Bottom topography
cmap = plt.get_cmap("Blues")
h = ax.contourf(Xc, Yc, H, cmap=cmap, alpha=0.3)
# Land mask
cmap = plt.matplotlib.colors.ListedColormap(["darkkhaki"])
ax.pcolormesh(Xb, Yb, M, cmap=cmap)
# Lon/lat lines
ax.contour(
Xc, Yc, lon, levels=[2, 4, 6], colors="black", linestyles=":", linewidths=0.5
)
ax.contour(
Xc, Yc, lat, levels=[59, 60, 61], colors="black", linestyles=":", linewidths=0.5
)
# --- Particle distribution ---
X, Y = pf.position(time=t)
timestring = pf.time(t)
ax.plot(X, Y, ".", color="red", markeredgewidth=0, lw=0.5)
ax.set_title(timestring)
# Show the results
plt.axis("image")
plt.axis((i0, i1 - 1, j0, j1 - 1))
plt.show()
| 22.74359 | 84 | 0.620068 | 275 | 1,774 | 3.952727 | 0.469091 | 0.025759 | 0.022079 | 0.029439 | 0.100276 | 0.060718 | 0 | 0 | 0 | 0 | 0 | 0.058116 | 0.156144 | 1,774 | 77 | 85 | 23.038961 | 0.668003 | 0.272266 | 0 | 0.051282 | 0 | 0 | 0.070411 | 0.019778 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.102564 | 0 | 0.102564 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
442dae1169b8f000e9fec9855121b98524da0cb8 | 1,945 | py | Python | tests/test-data/mock.py | mr-c/CTDConverter | 84c58674405d24cc21e765367fa089fa31a5df0f | [
"MIT"
] | null | null | null | tests/test-data/mock.py | mr-c/CTDConverter | 84c58674405d24cc21e765367fa089fa31a5df0f | [
"MIT"
] | null | null | null | tests/test-data/mock.py | mr-c/CTDConverter | 84c58674405d24cc21e765367fa089fa31a5df0f | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# little mock app to handle output file parameters
# i.e. to create them
import os
import shutil
import sys
from CTDopts.CTDopts import CTDModel, _InFile, _OutFile
# from argparse import ArgumentParser
# parser = ArgumentParser(prog="mock.py", description="MOCK", add_help=True)
# parser.add_argument("-w", "-write_ctd", dest="write_ctd", metavar='PATH', default=None, required=False,
# help="Write CTD to given path")
# parser.add_argument("-i", "-ini", dest="ini", metavar='CTDFILE', default=None, required=False,
# help="Process CTDFILE")
# parser.add_argument('moreargs', metavar='ARGS', type=str, nargs='*', help='more arguments')
# args = parser.parse_args()
wd = os.path.dirname(__file__)
bn = os.path.splitext(os.path.basename(__file__))[0]
if sys.argv[1] == "-write_ctd":
shutil.copyfile(os.path.join(wd, bn + ".ctd"), os.path.join(sys.argv[2], bn + ".ctd"))
elif sys.argv[1] == "-ini":
fparam = {"input": set(), "output": set()}
model = CTDModel(from_file=sys.argv[2])
for p in model.get_parameters():
cli = ":".join(p.get_lineage(name_only=True))
if p.type is _InFile:
fparam["input"].add(cli)
elif p.type is _OutFile:
fparam["output"].add(cli)
i = 3
while i < len(sys.argv):
if sys.argv[i].startswith("-"):
param = sys.argv[i][1:]
if param in fparam["input"] or param in fparam["output"]:
if param in fparam["input"]:
mode = "r"
else:
mode = "w"
while i + 1 < len(sys.argv):
if sys.argv[i + 1].startswith("-"):
break
of = open(sys.argv[i + 1], mode)
of.close()
i += 1
i += 1
else:
sys.stderr.write("Either -write_ctd or -ini must be given")
sys.exit(1)
| 35.363636 | 105 | 0.560925 | 260 | 1,945 | 4.1 | 0.392308 | 0.065666 | 0.030019 | 0.025328 | 0.12758 | 0.037523 | 0.037523 | 0 | 0 | 0 | 0 | 0.009279 | 0.279692 | 1,945 | 54 | 106 | 36.018519 | 0.751606 | 0.315167 | 0 | 0.111111 | 0 | 0 | 0.078728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
442e571ab100fe4f8465f2178b6129b91ab30a27 | 1,323 | py | Python | cubes_lite/sql/request.py | notexistence/cubes_lite | 2cbc54509e6dc8a529c9f33fd39d0f659d6a5647 | [
"MIT"
] | null | null | null | cubes_lite/sql/request.py | notexistence/cubes_lite | 2cbc54509e6dc8a529c9f33fd39d0f659d6a5647 | [
"MIT"
] | null | null | null | cubes_lite/sql/request.py | notexistence/cubes_lite | 2cbc54509e6dc8a529c9f33fd39d0f659d6a5647 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import collections
from cubes_lite.query import Request, Response
__all__ = (
'ListSQLRequest',
'ListSQLResponse',
'OneRowSQLRequest',
'OneRowSQLResponse',
)
class ListSQLResponse(Response):
def __init__(self, *args, **kwargs):
self.labels = None
self._batch = None
super(ListSQLResponse, self).__init__(*args, **kwargs)
def __iter__(self):
while True:
if not self._batch:
many = self._data.fetchmany()
if not many:
break
self._batch = collections.deque(many)
row = self._batch.popleft()
if self.labels:
yield dict(zip(self.labels, row))
else:
yield row
class ListSQLRequest(Request):
response_cls = ListSQLResponse
class OneRowSQLResponse(ListSQLResponse):
@property
def data(self):
return list(self)[0]
class OneRowSQLRequest(ListSQLRequest):
response_cls = OneRowSQLResponse
def __init__(
self, model, type_, conditions=None, aggregates=None,
**options
):
super(OneRowSQLRequest, self).__init__(
model, type_, conditions, aggregates,
**options
)
| 21.33871 | 62 | 0.59486 | 122 | 1,323 | 6.131148 | 0.45082 | 0.048128 | 0.029412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002193 | 0.310658 | 1,323 | 61 | 63 | 21.688525 | 0.817982 | 0.015873 | 0 | 0.047619 | 0 | 0 | 0.047692 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.071429 | 0.02381 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
443047ead0ffbfc30ef88ed167c3c08722403b2c | 1,166 | py | Python | render.py | 22preich/BlenderNetworkRender | 53b4f036c3adea31d947d64aa1ed0983a29c8e07 | [
"MIT"
] | 1 | 2021-09-12T06:48:50.000Z | 2021-09-12T06:48:50.000Z | render.py | 22preich/BlenderNetworkRender | 53b4f036c3adea31d947d64aa1ed0983a29c8e07 | [
"MIT"
] | null | null | null | render.py | 22preich/BlenderNetworkRender | 53b4f036c3adea31d947d64aa1ed0983a29c8e07 | [
"MIT"
] | null | null | null | import bpy
import sys
dir = "C:/Users/foggy/Appdata/roaming/python37/site-packages"
sys.path.append(dir)
from flask import Flask, request, render_template
from werkzeug.utils import secure_filename
import eventlet
from eventlet import wsgi
app = Flask(__name__)
@app.route('/')
def hello_world():
return render_template("index.html")
@app.route('/submit', methods=['POST'])
def submit():
f = request.files['file']
filepath = 'tmp/' + secure_filename(f.filename)
f.save(filepath)
render(filepath)
return "<img src='/static/" + name_from_file(filepath) + ".png'>"
def render(filepath="tmp/alley.blend"):
bpy.ops.wm.open_mainfile(filepath=filepath)
#print("yay")
bpy.context.scene.cycles.device = 'GPU'
scene = bpy.context.scene
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "C:/Users/foggy/Documents/Github/BlenderNetworkRender/static/" + name_from_file(filepath) + ".png"
print(bpy.ops.render.render(write_still=1))
def name_from_file(filename):
return filename.split("/").pop().split(".")[0]
# render()
#app.run("0.0.0.0", 5000)
wsgi.server(eventlet.listen(('', 8000)), app)
| 27.761905 | 126 | 0.704117 | 160 | 1,166 | 5.0125 | 0.48125 | 0.044888 | 0.044888 | 0.044888 | 0.072319 | 0.072319 | 0 | 0 | 0 | 0 | 0 | 0.015748 | 0.128645 | 1,166 | 41 | 127 | 28.439024 | 0.773622 | 0.038593 | 0 | 0 | 0 | 0 | 0.173524 | 0.101073 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137931 | false | 0 | 0.206897 | 0.068966 | 0.448276 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4433556a7cfa388b2c20801fb5e687c8efe75005 | 3,889 | py | Python | lab6/main_romain_claret_and_sylvain_robert-nicoud_lab6.py | RomainClaret/msc.ml.labs | 4e6b8e1c1ab841ab8ebbaee13f6ae43e9a1c44a5 | [
"MIT"
] | null | null | null | lab6/main_romain_claret_and_sylvain_robert-nicoud_lab6.py | RomainClaret/msc.ml.labs | 4e6b8e1c1ab841ab8ebbaee13f6ae43e9a1c44a5 | [
"MIT"
] | null | null | null | lab6/main_romain_claret_and_sylvain_robert-nicoud_lab6.py | RomainClaret/msc.ml.labs | 4e6b8e1c1ab841ab8ebbaee13f6ae43e9a1c44a5 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# 26.04.21
# Assignment lab 06
# Master Class: Machine Learning (5MI2018)
# Faculty of Economic Science
# University of Neuchatel (Switzerland)
# Lab 6, see ML21_Exercise_6.pdf for more information
# https://github.com/RomainClaret/msc.ml.labs
# Authors:
# - Romain Claret @RomainClaret
# - Sylvain Robert-Nicoud @Nic0uds
# 2. Interpret the results for one year using a decision tree. (with the original fields)
# 3. Compare the results to the clusters from exercise 5.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import FunctionTransformer, Normalizer
from sklearn.tree import DecisionTreeRegressor
from sklearn import tree
# Import the data and filter the two datasets for the chosen two years.
df = pd.read_stata('pwt100.dta')
df_1990 = df[(df["year"]==1990)]
df_1970 = df[(df["year"]==1970)]
features = [
"country",
"hc",
"ctfp",
"cwtfp",
"delta",
"pl_con",
"pl_da",
"pl_gdpo",
"csh_g",
"pl_c",
"pl_i",
"pl_g",
#"pl_k",
]
features_logarithmic = [
"rgdpe",
"pop",
"ccon",
"rgdpna",
"rconna",
"xr",
]
# Apply the logarithmic function to badly proportioned features
log_transformer = FunctionTransformer(np.log1p)
df_1990_log_features = df_1990[features_logarithmic]
df_1990_log = log_transformer.transform(df_1990_log_features)
# Concatenate the log-transformed features with the untransformed features
df_1990_concat = pd.concat([df_1990[features], df_1990_log], axis=1, join="inner")
# Drop rows with na values
df_1990_cleaned = df_1990_concat.dropna()
# 1. Pay special attention to the need for normalization.
df_1990_normalized = Normalizer().fit_transform(df_1990_cleaned[features[1:]+features_logarithmic])
# 1 bis. Use PCA (Principal Component Analysis) to reduce the number of features.
# We choose the kmeans clustering technique and obtain the clusters for the dataset.
# https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html
from sklearn.decomposition import PCA
pca = PCA(2)
df_1990_pca = pca.fit_transform(df_1990_cleaned[features[1:]+features_logarithmic])
kmeans_1990 = KMeans(n_clusters=4, random_state=42).fit(df_1990_pca)
# Visualization of the clustering results.
y_kmeans_1990 = kmeans_1990.predict(df_1990_pca)
plt.scatter(df_1990_pca[:,0], df_1990_pca[:,1], c=y_kmeans_1990, s=50, cmap='viridis')
plt.scatter(kmeans_1990.cluster_centers_[:,0], kmeans_1990.cluster_centers_[:,1], c='blue', s=200, alpha=0.9)
plt.show()
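# (Added sketch, not part of the original lab script.) A quick sanity check on
# the 2-component PCA used above is to look at how much variance it retains:
#   print(pca.explained_variance_ratio_, pca.explained_variance_ratio_.sum())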
# Listing the countries in the clusters
def country_listing(df, clusters):
tmp = []
for i_k in range(0, max(clusters)+1):
tmp_k = []
for i_c, c in enumerate(clusters):
if c == i_k: tmp_k.append(df.iloc[i_c]["country"])
tmp.append(tmp_k)
return tmp
df_test = pd.DataFrame.from_records(country_listing(df_1990_cleaned, y_kmeans_1990))
df_test.to_csv("out.csv", index=False)
# https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html
clf_dtree = DecisionTreeRegressor(random_state=42)
clf_dtree.fit(df_1990_cleaned[features[1:]+features_logarithmic], y_kmeans_1990)
fig = plt.figure(figsize=(10,10))
_ = tree.plot_tree(clf_dtree,
feature_names = df_1990_cleaned[features[1:]+features_logarithmic].columns,
class_names=y_kmeans_1990,
filled=True)
fig.savefig("decistion_tree.png")
for e in country_listing(df_1990_cleaned, y_kmeans_1990):
print(len(e))
#ccon, Real consumption of households and government, at current PPPs (in mil. 2017US$)
#rgdpe, Expenditure-side real GDP at chained PPPs (in mil. 2017US$)
#rconna, Real consumption at constant 2017 national prices (in mil. 2017US$)
#xr, Exchange rate, national currency/USD (market+estimated) | 31.877049 | 109 | 0.735665 | 572 | 3,889 | 4.809441 | 0.428322 | 0.047983 | 0.033079 | 0.030534 | 0.154853 | 0.130862 | 0.130862 | 0.101054 | 0.038531 | 0 | 0 | 0.064281 | 0.151967 | 3,889 | 122 | 110 | 31.877049 | 0.769861 | 0.371047 | 0 | 0.029851 | 0 | 0 | 0.062035 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014925 | false | 0 | 0.134328 | 0 | 0.164179 | 0.014925 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44348a202532b675204c5d619a14b8c76d684034 | 3,061 | py | Python | 13_week/pr12_1 .py | WoojaeJang/AppliedOptimization-Gurobi | 067e4e5a0391de74f673f935b0ba765b037a4149 | [
"AFL-1.1"
] | null | null | null | 13_week/pr12_1 .py | WoojaeJang/AppliedOptimization-Gurobi | 067e4e5a0391de74f673f935b0ba765b037a4149 | [
"AFL-1.1"
] | null | null | null | 13_week/pr12_1 .py | WoojaeJang/AppliedOptimization-Gurobi | 067e4e5a0391de74f673f935b0ba765b037a4149 | [
"AFL-1.1"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed May 26 10:28:54 2021
@author: woojae-macbook13
"""
from pandas import*
import pandas as pd
import numpy as np
# path = ".\\"
filename = "pr1.xlsx"
# dataset = pd.read_excel(path+filename)
dataset = pd.read_excel(filename)
order = "고객"
itemcode = "itemcode"
qty = "quantity"
orderSet = Series(dataset[order].drop_duplicates().sort_values().values)
itemSet = Series(dataset[itemcode].drop_duplicates().sort_values().values)
nOrders = len(orderSet)
nItem = len(itemSet)
orderitemSetVector = {}
orderSetQtyVector = {}
#Build a dictionary of order (key) : [item vector] (value) (loop)
for i in orderSet :
# Create a zero vector with one entry per item for each order: used as the one-zero vector
tempOZvector = np.zeros(len(itemSet))
# Create a zero vector with one entry per item for each order: used as the quantity vector
tempQtyVector = np.zeros(len(itemSet))
# List of items contained in the order of customer i
tempItemList = dataset[dataset[order] == i][itemcode]
# Quantities of the items contained in the order of customer i
tempQtyList = dataset[dataset[order] == i][qty]
#Loop over the items contained in each customer's order
for j in range(len(tempItemList)) :
# Take the items from customer i's order one at a time.
tempItem = tempItemList.iloc[j]
# Get the position (row index) of the currently selected item
tempItemIndex = itemSet[itemSet == tempItem].index[0]
# Build the ONE-ZERO vector for each order: item quantity > 0 ==> 1, otherwise 0
# Assign 1 because the order contains this item
tempOZvector[tempItemIndex] = 1
# Assign the quantity instead of 1/0 at the same position as in the item vector
# Build the quantity vector for each order
tempQtyVector[tempItemIndex] = tempQtyList.iloc[j]
orderitemSetVector[i] = tempOZvector
orderSetQtyVector[i] = tempQtyVector
# Check the results
dfOrderItemVector = DataFrame(orderitemSetVector).T
dfOrderQtyVector = DataFrame(orderSetQtyVector).T
# Build the similarity (cosine) calculation function
def calSimilarity(v1, v2) :
inner_sum = 0
size1 = 0.0
size2 = 0.0
for i in range(nItem) :
size1 += v1[i]*v1[i]
size2 += v2[i]*v2[i]
inner_sum += v1[i]*v2[i]
return inner_sum/(size1**(1/2)*size2**(1/2))
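# (Added note, not part of the original lab code.) calSimilarity above is the
# cosine similarity; an equivalent vectorized form, assuming v1 and v2 are 1-D
# numpy arrays of length nItem, would be:
#   cos_sim = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))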
# Task 1: compute the similarity measure between all pairs of users
setSimilarity = {}
for i in orderSet :
tempSim = np.zeros(nOrders)
k = 0
for j in orderSet :
if i != j :
tempSim[k] = calSimilarity(dfOrderItemVector.T[i], dfOrderItemVector.T[j])
k += 1
setSimilarity[i] = tempSim
dfSimilarity = DataFrame(setSimilarity)
# Task 2: select the most similar user for each user
similarone = np.zeros(nOrders)
k = 0
for i in orderSet :
idx = 0
value = 0.0
for j in range(nOrders) :
if dfSimilarity[i][j] > value :
idx = j
value = dfSimilarity[i][j]
similarone[k] = idx
k += 1
print("user %d -> idx : %d" %(i, orderSet[idx]))
print()
# Task 3: items bought by the similar user but not by the reference user
for i in range(nOrders) :
idx = similarone[i]
for j in range(nItem) :
if dfOrderItemVector.T[orderSet[i]][j] == 0 and dfOrderItemVector.T[orderSet[idx]][j] != 0 :
print("We recommend this item %d for user %d" %(j, i+1))
| 20.965753 | 100 | 0.601764 | 435 | 3,061 | 4.213793 | 0.383908 | 0.010911 | 0.016367 | 0.022913 | 0.101473 | 0.06874 | 0.040371 | 0.040371 | 0 | 0 | 0 | 0.028468 | 0.277034 | 3,061 | 145 | 101 | 21.110345 | 0.799819 | 0.217576 | 0 | 0.107692 | 0 | 0 | 0.034864 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015385 | false | 0 | 0.046154 | 0 | 0.076923 | 0.046154 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
443586b918e80fa19a7a9248966f3748b23543dc | 468 | py | Python | loso/util.py | fangpenlin/loso | 8677ed754c793887dde10feb9a13dce25ea09f58 | [
"BSD-3-Clause"
] | 28 | 2017-03-21T09:04:41.000Z | 2021-06-13T06:19:51.000Z | loso/util.py | JoyCTsai/loso | 8677ed754c793887dde10feb9a13dce25ea09f58 | [
"BSD-3-Clause"
] | null | null | null | loso/util.py | JoyCTsai/loso | 8677ed754c793887dde10feb9a13dce25ea09f58 | [
"BSD-3-Clause"
] | 8 | 2017-07-23T06:04:49.000Z | 2021-12-25T02:27:45.000Z | def ngram(n, terms):
"""An iterator for iterating n-gram terms from a text, for example:
>>> list(ngram(2, ['Today', 'is', 'my', 'day']))
[['Today', 'is'], ['is', 'my'], ['my', 'day']]
>>> list(ngram(3, ['Today', 'is', 'my', 'day']))
[['Today', 'is', 'my'], ['is', 'my', 'day']]
"""
for i in xrange(len(terms) - n + 1):
yield terms[i:i+n]
if __name__ == "__main__":
import doctest
doctest.testmod() | 29.25 | 71 | 0.472222 | 62 | 468 | 3.435484 | 0.516129 | 0.093897 | 0.126761 | 0.112676 | 0.178404 | 0.178404 | 0 | 0 | 0 | 0 | 0 | 0.008746 | 0.267094 | 468 | 16 | 72 | 29.25 | 0.612245 | 0.549145 | 0 | 0 | 0 | 0 | 0.046784 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
443924889fb2bc02902f4276bdfdab0574a5167f | 493 | py | Python | lbry/lbry/wallet/__init__.py | JupyterJones/lbry-sdk | be89436fa869e1b4b9f05c3faa5c126ebcfe6e57 | [
"MIT"
] | null | null | null | lbry/lbry/wallet/__init__.py | JupyterJones/lbry-sdk | be89436fa869e1b4b9f05c3faa5c126ebcfe6e57 | [
"MIT"
] | null | null | null | lbry/lbry/wallet/__init__.py | JupyterJones/lbry-sdk | be89436fa869e1b4b9f05c3faa5c126ebcfe6e57 | [
"MIT"
] | null | null | null | __node_daemon__ = 'lbrycrdd'
__node_cli__ = 'lbrycrd-cli'
__node_bin__ = ''
__node_url__ = (
'https://github.com/lbryio/lbrycrd/releases/download/v0.17.2.1/lbrycrd-linux.zip'
# 'https://github.com/lbryio/lbrycrd/releases/download/v0.17.3.1/lbrycrd-linux-1731.zip'
)
__spvserver__ = 'lbry.wallet.server.coin.LBCRegTest'
from lbry.wallet.manager import LbryWalletManager
from lbry.wallet.network import Network
from lbry.wallet.ledger import MainNetLedger, RegTestLedger, TestNetLedger
| 37.923077 | 92 | 0.78499 | 66 | 493 | 5.5 | 0.545455 | 0.110193 | 0.115702 | 0.110193 | 0.258953 | 0.258953 | 0.258953 | 0.258953 | 0.258953 | 0 | 0 | 0.031042 | 0.085193 | 493 | 12 | 93 | 41.083333 | 0.773836 | 0.174442 | 0 | 0 | 0 | 0.1 | 0.325926 | 0.083951 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
443a0fbbf12f2c065e3a541ea080bc7fd07fd5a3 | 6,797 | py | Python | logdweb/views.py | hiidef/logdweb | c80d47f4c5759cadeb3088b9f7fa093c30e11696 | [
"MIT"
] | 1 | 2015-08-30T02:36:13.000Z | 2015-08-30T02:36:13.000Z | logdweb/views.py | hiidef/logdweb | c80d47f4c5759cadeb3088b9f7fa093c30e11696 | [
"MIT"
] | null | null | null | logdweb/views.py | hiidef/logdweb | c80d47f4c5759cadeb3088b9f7fa093c30e11696 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""logdweb admin views."""
try:
import simplejson as json
except ImportError:
import json
import time
from django.core.urlresolvers import reverse
from django.http import HttpResponse, Http404, HttpResponseRedirect
from django_jinja2 import render_to_response, render_to_string
from logdweb import models, settings, forms
from logdweb.settings import timer
from functools import wraps
from django.contrib.auth.decorators import login_required
def superuser_required(view):
if settings.LOGD_REQUIRE_SUPERUSER:
@wraps(view)
@login_required
def wrapped(request, *args, **kwargs):
if not request.user.is_superuser:
try: login_url = reverse('django.contrib.auth.views.login')
except: login_url = u'/'
return HttpResponseRedirect(login_url)
return view(request, *args, **kwargs)
return wrapped
return login_required(view)
def make_context(**kwargs):
logd = models.Logd()
global_context = {
'logd_base_url' : reverse('logd-index'),
'timers': timer.timers,
'stats': models.Graphite().get_stats(),
'info': models.Logd().server_info(),
}
path = kwargs.pop('path', None)
if path:
global_context['path'] = path
global_context['current'] = path
global_context['loggers'] = logd.get_loggers(path)
if 'logger' not in kwargs:
global_context['logger'] = None
else:
global_context['current'] = None
global_context['logger'] = None
global_context.update(kwargs)
return global_context
@superuser_required
def dashboard_index(request):
return render_to_response('logdweb/dashboard/index.jinja', make_context(), request)
@superuser_required
def config_index(request):
color_map_form = forms.ColorNameForm()
color_stats_form = forms.ColorStatsForm()
color_name_map = models.ColorNameMap()
color_stats_map = models.ColorStatsMap()
return render_to_response('logdweb/config.jinja', make_context(**locals()), request)
@superuser_required
def index(request):
timer.start('page-generation')
context = make_context()
timer.end('page-generation')
return render_to_response('logdweb/index.jinja', context, request)
@superuser_required
def path_index(request, path=""):
timer.start('page-generation')
logd = models.Logd()
lines = logd.get_lines(path)
context = make_context(path=path, lines=lines)
timer.end('page-generation')
return render_to_response('logdweb/index.jinja', context, request)
@superuser_required
def path_search(request, path):
term = request.GET.get('q', '')
if not term:
return HttpResponseRedirect(reverse('logd-path-index', kwargs={'path': path}))
limit = int(request.GET.get('limit', 50))
logd = models.Logd()
lines = logd.search(path, term, limit)
context = make_context(term=term, limit=limit, disable_update=True,
lines=lines, path=path)
return render_to_response('logdweb/index.jinja', context, request)
@superuser_required
def path_info(request, path=""):
timer.start('page-generation')
logd = models.Logd()
path_info = logd.path_info(path)
timer.end('page-generation')
context = make_context(path_info=path_info, path=path)
return render_to_response('logdweb/info.jinja', context, request)
@superuser_required
def path_line(request, path, line):
logd = models.Logd()
line = logd.get_line(path, line)
context = make_context(path=path, lines=[line],
details=True, disable_update=True)
return render_to_response('logdweb/details.jinja', context, request)
@superuser_required
def path_level(request, path="", level=""):
logd = models.Logd()
lines = logd.get_level_lines(path, level)
context = make_context(path=path, level=level, lines=lines)
return render_to_response('logdweb/index.jinja', context, request)
@superuser_required
def path_delete(request, path=""):
from pylogd import delete_log
delete_log(path, settings.LOGD_LOGD['host'], settings.LOGD_LOGD['port'])
# to increase the likelihood that the UDP message will "get there" by the
# time that the next page is loaded and the log will be deleted, we sleep
time.sleep(0.25)
return HttpResponseRedirect(reverse('logd-index'))
@superuser_required
def path_logger(request, path="", logger=""):
logd = models.Logd()
lines = logd.get_logger_lines(path, logger)
context = make_context(path=path, logger=logger, lines=lines)
return render_to_response('logdweb/index.jinja', context, request)
@superuser_required
def path_new(request, path="", level="", logger=""):
"""Fetch the new from a path."""
try:
from_id = str(request.GET['id'])
except (ValueError, KeyError):
raise Http404
logd = models.Logd()
new = logd.get_new_lines(path, from_id, level=level, logger=logger)
for line in new:
line['rendered'] = render_to_string('logdweb/single_line.jinja',
{'path':path, 'line':line})
response = json.dumps(new)
return HttpResponse(response, mimetype='application/javascript')
@superuser_required
def stats_index(request, stat):
graphite = models.Graphite()
stats = models.Graphite().get_stats()
for key in stats.keys():
for bucket in stats[key].keys():
stats[key][bucket] = models.stats_tree(stats[key][bucket])
context = make_context(stats=stats, stat=stat)
return render_to_response('logdweb/stats.jinja', context, request)
@superuser_required
def stats_chart(request, stat, bucket):
time = request.GET.get("time", "-1hours")
template = request.GET.get("template", "plain")
graphite = models.Graphite()
pref, bucket = bucket.split(".", 1)
stats = models.Graphite().get_stats()
chart = dict([(k,v) for k,v in stats[stat][pref].items() if k.startswith(bucket)])
chart = models.Chart(chart, stat, pref, time=time, template=template)
context = make_context(stat=stat, bucket=bucket, chart=chart)
return render_to_response("logdweb/charts.jinja", context, request)
@superuser_required
def chart_detail(request, path):
time = request.GET.get("time", "-1hours")
template = request.GET.get("template", "plain")
graphite = models.Graphite()
name, bucket = path.split(":", 1)
type, statpath = bucket.split(".", 1)
stats = models.Graphite().get_stats()
chart = dict([(k,v) for k,v in stats[name][type].items() if k.startswith(statpath)])
chart = models.Chart(chart, name, type, time=time, template=template)
context = make_context(stat=name, bucket=bucket, chart=chart)
return render_to_response("logdweb/chart-detail.jinja", context, request)
| 35.586387 | 88 | 0.692364 | 866 | 6,797 | 5.286374 | 0.191686 | 0.038445 | 0.061162 | 0.057667 | 0.373307 | 0.298384 | 0.253823 | 0.225208 | 0.205111 | 0.161424 | 0 | 0.003222 | 0.178167 | 6,797 | 190 | 89 | 35.773684 | 0.816327 | 0.034427 | 0 | 0.299363 | 0 | 0 | 0.093769 | 0.023519 | 0 | 0 | 0 | 0 | 0 | 1 | 0.10828 | false | 0 | 0.076433 | 0.006369 | 0.312102 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
443d9413d2bd1afe7dab3a1392c07a5c51c65936 | 4,999 | py | Python | moves/auth.py | iwharris/moves-transponder | df72fd0f6a0dd77997376c2979c5c00edb7c6bab | [
"MIT"
] | null | null | null | moves/auth.py | iwharris/moves-transponder | df72fd0f6a0dd77997376c2979c5c00edb7c6bab | [
"MIT"
] | null | null | null | moves/auth.py | iwharris/moves-transponder | df72fd0f6a0dd77997376c2979c5c00edb7c6bab | [
"MIT"
] | null | null | null | from __future__ import print_function
import requests
import time
import sys
from lxml import html
import urlparse
from util import *
__author__ = 'iwharris'
def request_auth_desktop(base_url, client_id, client_secret, scope, redirect_uri, state=''):
"""Makes an auth request to the Moves API.
scope may be 'activity', 'location', or both (space-delimited)
redirect_uri must match a Redirect URI set in your app's Development config (https://dev.moves-app.com)
"""
parameters = {
'response_type': 'code',
'client_id': client_id,
'scope': scope,
'redirect_uri': redirect_uri,
'state': state,
}
# Initiate a session
session = requests.session()
auth_page = session.get(base_url + '/authorize', params=parameters)
print(auth_page.url)
if auth_page.status_code != requests.codes.ok:
auth_page.raise_for_status()
# print(auth_page.text)
# Scrape auth page for some useful info
tree = html.fromstring(auth_page.text)
# Get auth token (contained in a form element)
auth_token_elements = tree.xpath('//input[@name="auth_token"]')
auth_token = auth_token_elements[0].value
# Get pin code (contained in two span elements)
pin_pair = tree.xpath('//span[@class="digitgroup"]/text()')
pin_code = ''.join(pin_pair)
# Request code is just the pin with last digit omitted
request_code = pin_code[:-1]
# Display waiting message to user
print('Please enter this pin code into the Moves app: %s' % (' '.join(pin_pair)))
# Loop while making checkAuthorized requests
auth_result = request_check_authorized(base_url + '/checkAuthorized', client_id, session, request_code, auth_token)
if auth_result != 'authorize':
return False
authorization_code = request_authorize_redirect(base_url + '/authorizeAndRedirect', client_id, session, request_code, auth_token, scope, redirect_uri, state)
# return authorization_code
authorization_obj = request_access_token(session, base_url + '/access_token', authorization_code, client_id, client_secret, redirect_uri)
return authorization_obj
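# Example usage (illustrative sketch only; the base URL, client credentials and
# redirect URI below are placeholders, not values taken from this module):
#   auth = request_auth_desktop('https://<moves-oauth-base>', 'CLIENT_ID',
#                               'CLIENT_SECRET', 'activity location',
#                               'http://localhost/callback')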
def request_check_authorized(url, client_id, session, request_code, auth_token, timeout=90, interval=2):
assert client_id, "client_id must be set in arguments or in the MT_CLIENT_ID env variable."
payload = {
'request_code': request_code,
'client_id': client_id,
'auth_token': auth_token,
}
timeout_time = time.time() + timeout
is_finished = False
finished_result = ''
print('Waiting for user to authorize the app.', end="")
while (not is_finished) and (time.time() < timeout_time):
response = session.post(url, data=payload)
if response.status_code == requests.codes.ok:
status = response.json()['status']
if status == 'pending':
print('.', end="")
elif status == 'authorize':
print('authorized!')
is_finished = True
finished_result = status
elif status == 'cancel':
print('cancelled.')
is_finished = True
finished_result = status
else:
is_finished = True
finished_result = 'error'
print('encountered an error.')
print("%s %s" % (response.status_code, response.text), file=sys.stderr)
if not is_finished:
time.sleep(interval)
# End while
if time.time() > timeout_time:
print('timed out after %d seconds.' % timeout)
return finished_result
def request_authorize_redirect(url, client_id, session, request_code, auth_token, scope, redirect_uri, state=''):
"""Using the auth session, obtains an auth code.
The auth code can then be exchanged for an access token.
"""
assert client_id, "client_id must be set in arguments or in the MT_CLIENT_ID env variable."
payload = {
'auth_token': auth_token,
'request_code': request_code,
'response_type': 'code',
'client_id': client_id,
'redirect_uri': redirect_uri,
'scope': scope,
'state': state,
'error_uri': redirect_uri,
}
response = session.post(url, data=payload)
# We don't care about response code - we just want the 'code' URL parameter
p = urlparse.urlparse(response.url)
return urlparse.parse_qs(p.query)['code'][0]
def request_access_token(session, url, auth_code, client_id, client_secret, redirect_uri):
parameters = {
'grant_type': 'authorization_code',
'code': auth_code,
'client_id': client_id,
'client_secret': client_secret,
'redirect_uri': redirect_uri,
}
url = build_url(url, parameters)
response = session.post(url)
print(response.url)
if (response.status_code == requests.codes.ok):
auth_object = response.json()
return auth_object
else:
response.raise_for_status() | 37.586466 | 161 | 0.656931 | 629 | 4,999 | 4.992051 | 0.27663 | 0.053503 | 0.044586 | 0.034395 | 0.251274 | 0.22707 | 0.173885 | 0.097771 | 0.084713 | 0.084713 | 0 | 0.001577 | 0.239048 | 4,999 | 133 | 162 | 37.586466 | 0.82387 | 0.143629 | 0 | 0.32 | 0 | 0 | 0.166077 | 0.019344 | 0 | 0 | 0 | 0 | 0.02 | 1 | 0.04 | false | 0 | 0.07 | 0 | 0.16 | 0.11 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
443ec98bcd2dc6957fa92fd540a750b1e3acc89e | 5,907 | py | Python | Code_for_Signal_Processing_test/open_data_FFR_mac_v2.0.1.py | puyaraimondii/biometric-classification-of-frequency-following-responses | f5b5dca516592be451a3133acb8fa178519bc991 | [
"MIT"
] | 1 | 2021-04-20T14:47:40.000Z | 2021-04-20T14:47:40.000Z | Code_for_Signal_Processing_test/open_data_FFR_mac_v2.0.1.py | puyaraimondii/biometric-classification-of-frequency-following-responses | f5b5dca516592be451a3133acb8fa178519bc991 | [
"MIT"
] | null | null | null | Code_for_Signal_Processing_test/open_data_FFR_mac_v2.0.1.py | puyaraimondii/biometric-classification-of-frequency-following-responses | f5b5dca516592be451a3133acb8fa178519bc991 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sun Jul 22 21:22:58 2018
@author: bruce
"""
import pandas as pd
import os
import numpy as np
from scipy import fftpack
from scipy import signal
import matplotlib.pyplot as plt
pkl_file=pd.read_pickle('/Users/bruce/Documents/uOttawa/Project/audio_brainstem_response/Data_BruceSunMaster_Studies/study2/study2DataFrame.pkl')
# open files in subfolders
def must_open(dirpath, filename):
if filename.endswith('.txt'):
return True
def opened_files(*args):
"""generate a sequence of pairs (path to file, opened file)
in a given directory. Same arguments as os.walk."""
for dirpath, dirnames, filenames in os.walk(*args):
for filename in filenames:
if must_open(dirpath, filename):
filepath = os.path.join(dirpath, filename)
yield (filepath, open(filepath, "rU"))
# parameters
sampling_rate = 9606
n = 1024
k = np.arange(n)
T = n/sampling_rate
frq = k/T
freq = frq[range(int(n/2))]
# main loop:
n=1
df_FFR = pd.DataFrame()
mydir = "/Users/bruce/Documents/uOttawa/Project/audio_brainstem_response/Data_BruceSunMaster_Studies/study2/rawdata"
for filepath, file in opened_files(mydir):
# do something
_, _, _, _, _, _, _, _, _, _, gender, subject, condition, vowel, sound_level, filename = filepath.split("/")
temp_FFR = []
f = open(filepath, 'r')
data = f.readlines()
for line in data:
#Condensation, Rarefaction, EFR, FFR = line.split(",")
_, _, _, FFR = line.split(",")
temp_FFR.append(FFR)
del temp_FFR[0]
if n>2:
n=1
temp_FFR = np.reshape(temp_FFR, (1,1024))
label = np.array([subject[1:], gender, condition, vowel, sound_level, n, "FFR"]).reshape(1,7)
temp_FFR = pd.DataFrame(np.hstack((temp_FFR, label)))
df_FFR =df_FFR.append(temp_FFR, ignore_index=True)
#print("filepath:", filepath)
#print(label)
n = n + 1
# not working!!!!
#df_EFR.rename(columns={'1024':"Subjects", '1025':"Sex", "1026":"Condition", "1027":"Vowel", "1028":"Sound Level", "1029":"Num", "1030":"EFR/FFR"})
# set column name
df_FFR.columns = np.append(np.arange(1024), ["Subject", "Sex", "Condition", "Vowel", "Sound Level", "Num", "EFR/FFR"])
# change the type of data
df_FFR.iloc[:, 0:1024] = df_FFR.iloc[:, 0:1024].astype(float)
df_FFR.iloc[:, 1024:1025] = df_FFR.iloc[:, 1024:1025].astype(int)
df_FFR.sort_values(by=['Subject', 'Sound Level', 'Num'])
# sort index based on subject condition and sound level
df_FFR_sorted = df_FFR.sort_values(by=['Subject', 'Condition','Sound Level'])
# reset the index
df_FFR_sorted_newindex = df_FFR_sorted.reset_index(drop=True)
# save the final version of .pkl
df_FFR_sorted_newindex.to_pickle('df_FFR.pkl')
'''
# First Graph
plt.figure()
plt.subplot(221)
plt.plot(m_t_a_85_1_cond)
plt.title("Condensation")
plt.ylim(-0.4, 0.4)
plt.subplot(222)
plt.plot(m_t_a_85_1_rare)
plt.title("Rarefaction")
plt.ylim(-0.4, 0.4)
plt.subplot(223)
plt.plot(m_t_a_85_1_EFR)
plt.title("EFR_Signal")
plt.ylim(-0.4, 0.4)
plt.subplot(224)
plt.plot(m_t_a_85_1_FFR)
plt.title("FFR_Signal")
plt.ylim(-0.4, 0.4)
#plt.plot(m_t_a_85_2_EFR)
#plt.legend(["1", "2"])
plt.show()
# comparason of myresult and brian result
plt.figure()
plt.subplot(221)
plt.plot(m_t_a_85_1_EFR)
plt.title("EFR_Signal_Brian")
#plt.ylim(-0.4, 0.4)
plt.subplot(222)
plt.plot(m_t_a_85_1_FFR)
plt.title("FFR_Signal_Brian")
#plt.ylim(-0.4, 0.4)
plt.subplot(223)
plt.plot(m_t_a_85_1_EFRtest)
plt.title("EFR_Signal_Bruce")
#plt.ylim(-0.4, 0.4)
plt.subplot(224)
plt.plot(m_t_a_85_1_FFRtest)
plt.title("FFR_Signal_Brruce")
#plt.ylim(-0.4, 0.4)
plt.show()
'''
'''
# Second Graph
plt.figure()
plt.subplot(221)
plt.plot(freq, s1_m_t_a_85_1_EFR_amplitude_spectrum)
plt.xlim(0,1000)
plt.title("s1_m_t_a_85_1_EFR_amplitude_spectrum(0-1000)")
#plt.xlabel("Frequency")
#plt.ylabel("Amplitude")
#plt.legend(["1", "2"])
plt.grid(True)
plt.subplot(222)
plt.plot(freq, s1_m_re_a_85_1_EFR_amplitude_spectrum)
plt.xlim(0,1000)
plt.grid(True)
plt.title("s1_m_re_a_85_1_EFR_amplitude_spectrum")
plt.subplot(223)
plt.plot(freq, s1_m_t_a_85_2_EFR_amplitude_spectrum)
plt.xlim(0,1000)
plt.grid(True)
plt.title("s1_m_t_a_85_2_EFR_amplitude_spectrum")
plt.subplot(224)
plt.plot(freq, s1_m_re_a_85_2_EFR_amplitude_spectrum)
plt.xlim(0,1000)
plt.grid(True)
plt.title("s1_m_re_a_85_2_EFR_amplitude_spectrum")
plt.show()
# Third Graph
plt.figure()
plt.subplot(221)
plt.plot(freq, s2_f_t_a_85_1_EFR_amplitude_spectrum)
plt.xlim(0,1000)
plt.title("s1_m_t_a_85_1_EFR_amplitude_spectrum(0-1000)")
#plt.xlabel("Frequency")
#plt.ylabel("Amplitude")
#plt.legend(["1", "2"])
plt.grid(True)
plt.subplot(222)
plt.plot(freq, s1_m_re_a_85_1_EFR_amplitude_spectrum)
plt.xlim(0,1000)
plt.grid(True)
plt.title("s1_m_re_a_85_1_EFR_amplitude_spectrum")
plt.subplot(223)
plt.plot(freq, s2_f_t_a_85_2_EFR_amplitude_spectrum)
plt.xlim(0,1000)
plt.grid(True)
plt.title("s1_m_t_a_85_2_EFR_amplitude_spectrum")
plt.subplot(224)
plt.plot(freq, s1_m_re_a_85_2_EFR_amplitude_spectrum)
plt.xlim(0,1000)
plt.grid(True)
plt.title("s1_m_re_a_85_2_EFR_amplitude_spectrum")
plt.show()
# Third Graph
plt.figure()
plt.subplot(221)
plt.plot(corr_s1t1_s1re1)
plt.title("correlation of s1t1 and s1re1")
plt.grid(True)
plt.subplot(222)
plt.plot(corr_s2t1_s1re1)
plt.grid(True)
plt.title("correlation of s2t1 and s1re1")
plt.subplot(223)
plt.plot(corr_s3t1_s1re1)
plt.grid(True)
plt.title("correlation of s3t1 and s1re1")
plt.subplot(224)
plt.plot(corr_s4t1_s1re1)
plt.grid(True)
plt.title("correlation of s4t1 and s1re1")
#plt.show()
#### comparason ####
plt.figure()
temp = pkl_file.iloc[6:7,0:1024]
temp2 = temp.values.T.tolist()
temp2_f = fftpack.fft(temp2)
temp2_f = temp2_f[0:int(len(temp2_f)/2)]
plt.plot(temp2_f)
plt.plot(m_t_a_85_1_EFR_f)
plt.title("Signal_undetrended")
#plt.legend(["1", "2"])
plt.show()
################
'''
| 23.164706 | 147 | 0.716946 | 1,034 | 5,907 | 3.834623 | 0.208897 | 0.019672 | 0.018159 | 0.020177 | 0.513241 | 0.486255 | 0.461791 | 0.45826 | 0.412106 | 0.400757 | 0 | 0.074693 | 0.118334 | 5,907 | 254 | 148 | 23.255906 | 0.686636 | 0.111393 | 0 | 0.041667 | 0 | 0 | 0.162746 | 0.107537 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.125 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
443fa32c189f00023a8d9b89e8fc6e723ffab926 | 4,511 | py | Python | mnist_model.py | shudong-zhang/Train-generator | 65688c36ffcaba688e96bf3932db07ab658ea99d | [
"MIT"
] | null | null | null | mnist_model.py | shudong-zhang/Train-generator | 65688c36ffcaba688e96bf3932db07ab658ea99d | [
"MIT"
] | null | null | null | mnist_model.py | shudong-zhang/Train-generator | 65688c36ffcaba688e96bf3932db07ab658ea99d | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
# Note: target1 and target2 are the two target models; mnist_s1 and mnist_s2 below are the models used to train the VAE.
# When running attacks, use only target1 and target2; do not use the last two models.
# The weights for target1 are the mnist_gpu checkpoint, and the weights for target2 are mnist_models/checkpoints/mnist_target_2_best.pth.tar
class MNIST_target_1(nn.Module):
def __init__(self):
super(MNIST_target_1, self).__init__()
self.features = self._make_layers()
self.fc1 = nn.Linear(1024, 200)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(200, 200)
self.dropout = nn.Dropout(p=0.5)
self.fc3 = nn.Linear(200, 10)
def forward(self, x):
out = self.features(x)
out = out.view(out.size(0), -1)
out = self.fc1(out)
out = self.relu(out)
out = self.dropout(out)
out = self.fc2(out)
out = self.relu(out)
out = self.dropout(out)
out = self.fc3(out)
return out
def _make_layers(self):
layers = []
in_channels = 1
layers += [nn.Conv2d(in_channels, 32, kernel_size=3),
nn.BatchNorm2d(32),
nn.ReLU()]
layers += [nn.Conv2d(32, 32, kernel_size=3),
nn.BatchNorm2d(32),
nn.ReLU()]
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
layers += [nn.Conv2d(32, 64, kernel_size=3),
nn.BatchNorm2d(64),
nn.ReLU()]
layers += [nn.Conv2d(64, 64, kernel_size=3),
nn.BatchNorm2d(64),
nn.ReLU()]
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
return nn.Sequential(*layers)
class MNIST_target_2(nn.Module):
def __init__(self):
super(MNIST_target_2, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(1, 32, 3, 1, 1),
nn.ReLU(),
nn.MaxPool2d(2))
self.conv2 = nn.Sequential(
nn.Conv2d(32, 64, 3, 1, 1),
nn.ReLU(),
nn.MaxPool2d(2)
)
self.conv3 = nn.Sequential(
nn.Conv2d(64, 64, 3, 1, 1),
nn.ReLU(),
nn.MaxPool2d(2)
)
self.dense = nn.Sequential(
nn.Linear(64 * 3 * 3, 128),
nn.ReLU(),
nn.Linear(128, 10)
)
def forward(self, x):
conv1_out = self.conv1(x)
conv2_out = self.conv2(conv1_out)
conv3_out = self.conv3(conv2_out)
res = conv3_out.view(conv3_out.size(0), -1)
out = self.dense(res)
return out
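# (Added sketch, not part of the original file.) A typical way to restore one of
# the target models above, using the checkpoint path from the note at the top of
# this file; the 'state_dict' key and map_location are assumptions:
#   model = MNIST_target_2()
#   ckpt = torch.load('mnist_models/checkpoints/mnist_target_2_best.pth.tar',
#                     map_location='cpu')
#   model.load_state_dict(ckpt['state_dict'])
#   model.eval()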
class MNIST_s1(nn.Module):
def __init__(self):
super(MNIST_s1, self).__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x)
x = F.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
output = F.log_softmax(x, dim=1)
return output
class MNIST_s2(nn.Module):
def __init__(self):
super(MNIST_s2, self).__init__()
self.conv1 = nn.Sequential( # input shape (1, 28, 28)
nn.Conv2d(
in_channels=1, # input height
out_channels=16, # n_filters
kernel_size=5, # filter size
stride=1, # filter movement/step
padding=2,
# if want same width and length of this image after Conv2d, padding=(kernel_size-1)/2 if stride=1
), # output shape (16, 28, 28)
nn.ReLU(), # activation
nn.MaxPool2d(kernel_size=2), # choose max value in 2x2 area, output shape (16, 14, 14)
)
self.conv2 = nn.Sequential( # input shape (16, 14, 14)
nn.Conv2d(16, 32, 5, 1, 2), # output shape (32, 14, 14)
nn.ReLU(), # activation
nn.MaxPool2d(2), # output shape (32, 7, 7)
)
self.out = nn.Linear(32 * 7 * 7, 10) # fully connected layer, output 10 classes
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = x.view(x.size(0), -1) # flatten the output of conv2 to (batch_size, 32 * 7 * 7)
output = self.out(x)
return output | 33.917293 | 113 | 0.530924 | 606 | 4,511 | 3.825083 | 0.19637 | 0.011217 | 0.025884 | 0.025884 | 0.389991 | 0.342105 | 0.271786 | 0.236411 | 0.188525 | 0.154443 | 0 | 0.089357 | 0.337619 | 4,511 | 133 | 114 | 33.917293 | 0.686412 | 0.14254 | 0 | 0.378151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07563 | false | 0 | 0.02521 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
444109ce8cf31f72f2c3c1602e9ba493a22264fc | 339 | py | Python | function/news_text.py | zoohee/SqueezeNews | 4f51b307c05259fb567cbe2027fed09a354b4773 | [
"Apache-2.0"
] | null | null | null | function/news_text.py | zoohee/SqueezeNews | 4f51b307c05259fb567cbe2027fed09a354b4773 | [
"Apache-2.0"
] | 7 | 2021-11-01T08:41:33.000Z | 2021-11-06T20:42:41.000Z | function/news_text.py | zoohee/SqueezeNews | 4f51b307c05259fb567cbe2027fed09a354b4773 | [
"Apache-2.0"
] | 3 | 2021-11-01T14:58:55.000Z | 2022-03-21T07:37:05.000Z | import newspaper
from newspaper import Article
# url = 'http://fox13now.com/2013/12/30/new-year-new-laws-obamacare-pot-guns-and-drones/'
def text_extraction(url):
article = Article(url)
article.download()
article.parse()
text = article.text
text = text.replace("\n\n"," ") # '\n' -> ''
return text
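# Example usage (sketch), reusing the sample URL commented out above:
#   print(text_extraction('http://fox13now.com/2013/12/30/new-year-new-laws-obamacare-pot-guns-and-drones/'))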
| 21.1875 | 89 | 0.631268 | 44 | 339 | 4.840909 | 0.613636 | 0.093897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037313 | 0.20944 | 339 | 15 | 90 | 22.6 | 0.757463 | 0.289086 | 0 | 0 | 0 | 0 | 0.021008 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.222222 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4447cebaca237fce398177fea1226e3ccac3370d | 4,724 | py | Python | handroute-power-stripes/check_sram.py | pohantw/caravel_user_project | 2589b79bf97fd43186bf854ca3aaa60665f573ba | [
"Apache-2.0"
] | 1 | 2021-11-24T12:42:26.000Z | 2021-11-24T12:42:26.000Z | handroute-power-stripes/check_sram.py | pohantw/caravel_user_project | 2589b79bf97fd43186bf854ca3aaa60665f573ba | [
"Apache-2.0"
] | null | null | null | handroute-power-stripes/check_sram.py | pohantw/caravel_user_project | 2589b79bf97fd43186bf854ca3aaa60665f573ba | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
#
# Script to read a GDS file, locate a single cell structure by name, and verify
# its contents against a reference checksum. The checksum is just the sum of
# the lengths of all GDS records in the cell. A reference checksum can be
# determined by running this routine once without supplying one; the checksum
# will be calculated and printed.
#
# There are no checks that the cell contents are valid or complete; this script
# only compares record-length checksums, so any further validation must be done
# independently.
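#
# GDS stream records, as parsed below, are laid out as (layout inferred from
# the parsing code in this script):
#   bytes 0-1  record length, big-endian (includes the 4 header bytes)
#   byte  2    record type (5 = BGNSTR, 6 = STRNAME, 7 = ENDSTR)
#   byte  3    data type   (6 = ASCII string; odd-length strings are null-padded)
#   bytes 4+   record data
# The cell checksum reported below is simply the sum of the record lengths of
# all records from BGNSTR through ENDSTR for the named cell.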
import os
import sys
def usage():
print(
'check_sram.py <sram_cell_name> <path_to_input_gds> [-ref_checksum=<checksum>]')
if __name__ == '__main__':
debug = False
if len(sys.argv) == 1:
print("No options given to check_sram.py.")
usage()
sys.exit(0)
optionlist = []
arguments = []
for option in sys.argv[1:]:
if option.find('-', 0) == 0:
optionlist.append(option)
else:
arguments.append(option)
if len(arguments) < 2 or len(arguments) > 3:
print("Wrong number of arguments given to change_gds_cell.py.")
usage()
sys.exit(0)
checksum = '0'
for option in optionlist:
if option == '-debug':
debug = True
elif option.split('=')[0] == '-ref_checksum':
checksum = option.split('=')[1]
try:
checksum = int(checksum)
except:
print('Checksum must evaluate to an integer.')
sys.exit(1)
cellname = arguments[0]
input_gds = arguments[1]
input_gdsdir = os.path.split(input_gds)[0]
gdsinfile = os.path.split(input_gds)[1]
if debug:
print('Reading GDS file for ' + input_gds)
with open(input_gds, 'rb') as ifile:
gdsdata = ifile.read()
datalen = len(gdsdata)
dataptr = 0
incell = False
cellchecksum = 0
datastart = dataend = -1
while dataptr < datalen:
# Read stream records up to any structure, then check for structure name
bheader = gdsdata[dataptr:dataptr + 2]
reclen = int.from_bytes(bheader, 'big')
if reclen == 0:
print('Error: found zero-length record at position ' + str(dataptr))
break
rectype = gdsdata[dataptr + 2]
datatype = gdsdata[dataptr + 3]
brectype = rectype.to_bytes(1, byteorder='big')
bdatatype = datatype.to_bytes(1, byteorder='big')
if rectype == 5: # beginstr
saveptr = dataptr
elif rectype == 6: # strname
if datatype != 6:
print('Error: Structure name record is not a string!')
sys.exit(1)
bstring = gdsdata[dataptr + 4: dataptr + reclen]
# Odd length strings end in null byte which needs to be removed
if bstring[-1] == 0:
bstring = bstring[:-1]
strname = bstring.decode('ascii')
if strname == cellname:
if debug:
print('Cell ' + cellname + ' found at position ' + str(saveptr))
datastart = saveptr
incell = True
elif debug:
print('Cell ' + strname + ' position ' + str(dataptr) + ' (copied)')
elif rectype == 7: # endstr
if incell:
incell = False
cellchecksum = cellchecksum + reclen
dataend = dataptr + reclen
if debug:
print('Cell ' + cellname + ' ends at position ' + str(dataend))
print('Cell ' + cellname + ' checksum is ' + str(cellchecksum))
# Find checksum (sum of length of all records in the cell of interest)
if incell:
cellchecksum = cellchecksum + reclen
# Advance the pointer past the data
dataptr += reclen
if datastart == -1 or dataend == -1:
print('Failed to find the cell data for ' + cellname)
sys.exit(1)
if checksum != 0:
if cellchecksum == checksum:
print(cellname + ' matches checksum ' + str(checksum))
else:
# print(cellname + ' at ' + str(datastart) + ' to ' + str(dataend) + ' has checksum ' + str(cellchecksum) +
print(cellname + ' has checksum ' + str(cellchecksum) +
' != ' + str(checksum) + ' (checksum failure)')
sys.exit(1)
else:
print(cellname + ' ' + str(cellchecksum))
exit(0) | 41.438596 | 120 | 0.563506 | 553 | 4,724 | 4.76311 | 0.325497 | 0.018223 | 0.012149 | 0.012149 | 0.059226 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013116 | 0.338273 | 4,724 | 114 | 121 | 41.438596 | 0.829495 | 0.218882 | 0 | 0.206186 | 0 | 0 | 0.150927 | 0.007307 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010309 | false | 0 | 0.020619 | 0 | 0.030928 | 0.154639 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4449bfa6d89ff2ac50729d1ef0563c812702054a | 527 | py | Python | python/draw_map.py | dickensn/irlome-dev | 40228014e7d82c3f629f5d86d489cf6b888cd089 | [
"MIT"
] | null | null | null | python/draw_map.py | dickensn/irlome-dev | 40228014e7d82c3f629f5d86d489cf6b888cd089 | [
"MIT"
] | 3 | 2018-11-03T16:17:17.000Z | 2018-11-04T04:55:31.000Z | python/draw_map.py | dickensn/irlome-dev | 40228014e7d82c3f629f5d86d489cf6b888cd089 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
''' IRLOME maps
'''
__author__ = "Nick Dickens"
__copyright__ = "Copyright 2018, Nicholas J. Dickens"
__email__ = "dickensn@fau.edu"
__license__ = "MIT"
import matplotlib.pyplot as plt
import mplleaflet
points = []
with open("../web/data/user_data.csv") as in_fh:
for line in in_fh:
line = line.rstrip()
lon, lat, color = line.split(",")
# cast coordinates to float so matplotlib treats them as numeric lon/lat, not categories
points.append((float(lon), float(lat), color))
for point in points:
plt.plot(point[0],point[1],'s',color=point[2])
mplleaflet.show(path="../web/map.html")
| 20.269231 | 53 | 0.673624 | 77 | 527 | 4.363636 | 0.675325 | 0.02381 | 0.065476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017978 | 0.155598 | 527 | 25 | 54 | 21.08 | 0.737079 | 0.062619 | 0 | 0 | 0 | 0 | 0.222222 | 0.05144 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.133333 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
444e14447cb4f0e4bc8141a034007957513313e9 | 6,787 | py | Python | pickle_mom_seeding.py | garudlab/mother_infant | 98a27c83bf5ece9497d5a030c6c9396a8c514781 | [
"BSD-2-Clause"
] | null | null | null | pickle_mom_seeding.py | garudlab/mother_infant | 98a27c83bf5ece9497d5a030c6c9396a8c514781 | [
"BSD-2-Clause"
] | null | null | null | pickle_mom_seeding.py | garudlab/mother_infant | 98a27c83bf5ece9497d5a030c6c9396a8c514781 | [
"BSD-2-Clause"
] | null | null | null | # Question: are sweeping alleles in infants present in mom?
# Idea: store data on allele frequency in mom for each sweeping allele in infant
from utils import sample_utils as su, parse_midas_data, substitution_rates_utils, config, temporal_changes_utils, snps_utils, core_gene_utils, gene_diversity_utils
import numpy as np
from numpy.random import choice, random as np_random, randint
import random
from collections import defaultdict
import pickle
import bz2
import numpy
import os, sys
# ===========================================================================
# Loads allele counts for specific samples at specific sites
# where sites are provided as (contig, location, gene_id) tuples
# TODO: move to parse_midas_data later
# ===========================================================================
def parse_snps_specify_sites(species_name, desired_samples=[], desired_sites=[], prev_cohort='all'):
# Load population freqs (for polarization purposes)
population_freqs = snps_utils.parse_population_freqs(prev_cohort, species_name, polarize_by_consensus=False)
# Open post-processed MIDAS output
snps_dir = "%s/snps/%s/" % (config.data_directory, species_name)
snp_file = bz2.BZ2File("%s/annotated_snps.txt.bz2" % snps_dir, 'r')
# Get lists of desired sample idxs
items = snp_file.readline().strip().split()[1:]
samples = list(su.parse_merged_sample_names(items))
desired_sample_idxs = []
for sample in desired_samples:
if sample in samples:
desired_sample_idxs.append(samples.index(sample))
desired_sample_idxs = numpy.array(sorted(desired_sample_idxs))
# Map: sample -> site (contig, location, gene_id) -> allele count
allele_counts_map = defaultdict(dict)
num_sites_processed = 0
# Loop over sites in annotated_snps.txt file
for line in snp_file:
if num_sites_processed>0 and num_sites_processed%10000==0:
sys.stderr.write("%d0k sites processed...\n" % (num_sites_processed/10000))
num_sites_processed += 1
items = line.split()
# Load information about site
info_items = items[0].split("|")
contig = info_items[0]
location = long(info_items[1])
gene_name = info_items[2]
polarization = 'R' # note R was assigned indiscriminately
pvalue = float(info_items[5])
# Only look at sites of interest
if (contig, location, gene_name) not in desired_sites:
continue
# Load alt and depth counts at this site for all desired samples
alts, depths = [], []
for idx in desired_sample_idxs:
alt, depth = [float(num) for num in items[1+idx].split(",")]
alts.append(alt); depths.append(depth)
alts = numpy.array(alts)
depths = numpy.array(depths)
# Obtain population frequency of alt allele
# Recall: this is average proportion of majority-alt samples across subjects
if (contig, location) in population_freqs:
population_freq = population_freqs[(contig, location)]
else: # alt population prevalence is (probably? TODO) 0
population_freq = 0
# Polarize SFS according to population freq
if population_freq > 0.5: # This means alt allele is the major allele
alts = depths - alts
polarization = 'A'
# For sites from temporal changes, note that we should assume
# site is "passed" i.e. alt aleles make up >5% of all alleles
# in at least one sample and pvalue < 0.05
# Store allele counts only if the site is interesting
for alt, depth, sample in zip(alts, depths, desired_samples):
allele_counts_map[sample][(contig, location, gene_name)] = (alt, depth)
snp_file.close()
return allele_counts_map
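# Example call (species, sample and site values below are hypothetical, for illustration only):
# acm = parse_snps_specify_sites('Bacteroides_vulgatus_57955',
#                                desired_samples=['M0368-M', 'M0368-I'],
#                                desired_sites=[('NZ_JH724260', 10543, 'peg.1001')])
# acm['M0368-M'][('NZ_JH724260', 10543, 'peg.1001')]  # -> (alt, depth)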
# Parameters
species = sys.argv[1]
sweep_type = 'full' # assume full for now
pp_prev_cohort = 'all'
min_coverage = 0
thresholds = {'full': (0.2, 0.8), 'partial': (0.35, 0.65)}
lower_threshold, upper_threshold = thresholds[sweep_type]
# Sample-subject-order maps
sys.stderr.write("Loading sample metadata...\n")
subject_sample_map = su.parse_subject_sample_map()
sample_order_map = su.parse_sample_order_map()
sample_subject_map = su.parse_sample_subject_map()
sample_cohort_map = su.parse_sample_cohort_map()
same_mi_pair_dict = su.get_same_mi_pair_dict(subject_sample_map)
sys.stderr.write("Done!\n")
# Use output from pickle_everything.py to get list of sweeping alleles in infants
ddir = config.data_directory
pdir = "%s/pickles/cov%i_prev_%s/nonconsecutive/" % (ddir, min_coverage, pp_prev_cohort)
snp_changes = pickle.load(open('%s/big_snp_changes_%s.pkl' % (pdir, sweep_type), 'rb'))
'''
# Loop over all modification SNP changes
for species in snp_changes:
# Form list of sites and samples of interest
desired_sites = set()
desired_samples = set()
for s1, s2 in snp_changes[species]:
val = snp_changes[species][(s1, s2)]
# Interested in all samples within the mother-infant pair
# Note that mothers and infants differ by 2-letter suffix of subject
subject = sample_subject_map[s1]
for sample in subject_sample_map[subject]:
desired_samples.add(sample)
if subject in same_mi_pair_dict:
other_subject = same_mi_pair_dict[subject]
for sample in subject_sample_map[other_subject]:
desired_samples.add(sample)
if type(val) == type([]): # Modification event
for snp_change in val:
gene_name, contig, position, variant_type, A1, D1, A2, D2 = snp_change
desired_sites.add((contig, position, gene_name))
# Load allele_counts_map
allele_counts_map = parse_snps_specify_sites(species, desired_samples, desired_sites=list(desired_sites))
'''
# Form list of sites and samples of interest
desired_sites = set()
desired_samples = set()
for s1, s2 in snp_changes[species]:
val = snp_changes[species][(s1, s2)]
# Only look at mothers and infants excluding olm
cohort = sample_cohort_map[s1]
if cohort not in ['backhed', 'ferretti', 'yassour', 'shao']:
continue
# Interested in all samples within the mother-infant pair
# Note that mothers and infants differ by 2-letter suffix of subject
subject = sample_subject_map[s1]
for sample in subject_sample_map[subject]:
desired_samples.add(sample)
if subject in same_mi_pair_dict:
other_subject = same_mi_pair_dict[subject]
for sample in subject_sample_map[other_subject]:
desired_samples.add(sample)
if type(val) == type([]): # Modification event
for snp_change in val:
gene_name, contig, position, variant_type, A1, D1, A2, D2 = snp_change
desired_sites.add((contig, position, gene_name))
# Load allele_counts_map
allele_counts_map = parse_snps_specify_sites(species, desired_samples, desired_sites=list(desired_sites))
# Pickle time
sys.stderr.write("Pickling...\n")
ddir = config.data_directory
pdir = "%s/pickles/seeding/" % (ddir)
os.system('mkdir -p %s' % pdir)
pickle.dump(allele_counts_map, open('%s/allele_counts_map_%s.pkl' % (pdir, species), 'wb'))
sys.stderr.write("Done!\n")
| 34.984536 | 163 | 0.731104 | 1,000 | 6,787 | 4.746 | 0.255 | 0.035398 | 0.028445 | 0.017699 | 0.310788 | 0.292457 | 0.292457 | 0.277708 | 0.277708 | 0.277708 | 0 | 0.01162 | 0.150435 | 6,787 | 193 | 164 | 35.165803 | 0.811481 | 0.249742 | 0 | 0.086022 | 0 | 0 | 0.072682 | 0.028925 | 0 | 0 | 0 | 0.005181 | 0 | 1 | 0.010753 | false | 0 | 0.096774 | 0 | 0.11828 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44501000b3734b479464d60d1f33e27a81e07057 | 821 | py | Python | recipes/Python/531821_Error_logging_context_manager/recipe-531821.py | tdiprima/code | 61a74f5f93da087d27c70b2efe779ac6bd2a3b4f | [
"MIT"
] | 2,023 | 2017-07-29T09:34:46.000Z | 2022-03-24T08:00:45.000Z | recipes/Python/531821_Error_logging_context_manager/recipe-531821.py | unhacker/code | 73b09edc1b9850c557a79296655f140ce5e853db | [
"MIT"
] | 32 | 2017-09-02T17:20:08.000Z | 2022-02-11T17:49:37.000Z | recipes/Python/531821_Error_logging_context_manager/recipe-531821.py | unhacker/code | 73b09edc1b9850c557a79296655f140ce5e853db | [
"MIT"
] | 780 | 2017-07-28T19:23:28.000Z | 2022-03-25T20:39:41.000Z | from __future__ import with_statement
from contextlib import contextmanager
from functools import wraps
import logging
@contextmanager
def error_trapping(ident=None):
''' A context manager that traps and logs exception in its block.
Usage:
with error_trapping('optional description'):
might_raise_exception()
this_will_always_be_called()
'''
try:
yield None
except Exception:
if ident:
logging.error('Error in ' + ident, exc_info=True)
else:
logging.error('Error', exc_info=True)
def trap_errors(f):
''' A decorator to trap and log exceptions '''
@wraps(f)
def wrapper(*args, **kwds):
with error_trapping(f.__name__):
return f(*args, **kwds)
return wrapper
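# Example usage of the decorator (illustrative only):
# @trap_errors
# def might_fail():
#     raise ValueError('boom')  # logged under 'might_fail', not propagated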
| 26.483871 | 69 | 0.624848 | 96 | 821 | 5.125 | 0.583333 | 0.079268 | 0.069106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.28989 | 821 | 30 | 70 | 27.366667 | 0.843911 | 0.254568 | 0 | 0 | 0 | 0 | 0.024955 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.210526 | 0 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44505ee5565c15ff2332d974385cc5824f0522fa | 1,004 | py | Python | escolaridade/tests.py | Bleno/sisgestor-django | c35f76eafc3e51afb99c84245e01881cef43aa5b | [
"MIT"
] | 1 | 2017-04-27T19:26:49.000Z | 2017-04-27T19:26:49.000Z | escolaridade/tests.py | Bleno/sisgestor-django | c35f76eafc3e51afb99c84245e01881cef43aa5b | [
"MIT"
] | null | null | null | escolaridade/tests.py | Bleno/sisgestor-django | c35f76eafc3e51afb99c84245e01881cef43aa5b | [
"MIT"
] | null | null | null | from django.test import TestCase
from django.test import Client
from .models import Escolaridade
class EscolaridadeTestCase(TestCase):
def setUp(self):
Escolaridade.objects.create(escolaridade="Médio")
Escolaridade.objects.create(escolaridade="Fundamental")
def test_escolaridade_exists(self):
beleza = Escolaridade.objects.get(escolaridade="Médio")
tecnologia = Escolaridade.objects.get(escolaridade="Fundamental")
self.assertEqual(beleza.escolaridade, "Médio")
self.assertEqual(tecnologia.escolaridade, "Fundamental")
def test_view_home(self):
client = Client()
response = client.get('/escolaridade/')
self.assertEqual(response.status_code, 200)
def test_view_detail(self):
client = Client()
response = client.get('/escolaridade/1/')
self.assertEqual(response.status_code, 200)
# def text_get_all(self):
# eixos = Eixo.objects.all()
# self.assertIsInstance(a, b)
| 32.387097 | 73 | 0.692231 | 105 | 1,004 | 6.52381 | 0.352381 | 0.110949 | 0.040876 | 0.058394 | 0.245255 | 0.245255 | 0.245255 | 0 | 0 | 0 | 0 | 0.008696 | 0.198207 | 1,004 | 30 | 74 | 33.466667 | 0.842236 | 0.081673 | 0 | 0.2 | 0 | 0 | 0.084967 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.15 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4450c03b84b4b673f18bc1860ea01b76a4f2ec4d | 4,799 | py | Python | python_ws/src/sim/src/aruco_front.py | boris-gu/drone-api | fd90f226bf83c79eb7c31b69b9141474017160a3 | [
"BSD-3-Clause"
] | null | null | null | python_ws/src/sim/src/aruco_front.py | boris-gu/drone-api | fd90f226bf83c79eb7c31b69b9141474017160a3 | [
"BSD-3-Clause"
] | null | null | null | python_ws/src/sim/src/aruco_front.py | boris-gu/drone-api | fd90f226bf83c79eb7c31b69b9141474017160a3 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
# =====================
# Hover at the marker
# =====================
import numpy as np
import rospy
import cv2
import cv2.aruco as aruco
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
from aruco_calibration import Calibration as clb
from drone_api import *
from math import pi, sqrt, sin, cos, atan2
FONT = cv2.FONT_HERSHEY_PLAIN
def toFixed(numObj, digits=0):
return f'{numObj:.{digits}f}'
def callback(data):
# Bridge to convert the image from ROS format to OpenCV format
bridge = CvBridge()
try:
frame = bridge.imgmsg_to_cv2(data, 'bgr8')
except Exception as e:
rospy.loginfo(e)
return
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, rejectedImgPoints = aruco.detectMarkers(gray, aruco_dict,
parameters=parameters,
cameraMatrix=camera_matrix,
distCoeff=dist_coef)
global marker_pose
if np.all(ids is not None):
rvec, tvec, markerPoints = aruco.estimatePoseSingleMarkers(corners, 0.2, camera_matrix,
dist_coef)
aruco.drawDetectedMarkers(frame, corners)
aruco.drawAxis(frame, camera_matrix,
dist_coef, rvec[0], tvec[0], 0.2)
cv2.putText(frame, ' id' + str(ids[0])[1:-1], (20, 30), FONT,
1, (255, 255, 255), 3, cv2.LINE_AA)
cv2.putText(frame, ' id' + str(ids[0])[1:-1], (20, 30), FONT,
1, (0, 0, 0), 1, cv2.LINE_AA)
drone_pose = drone.get_local_pose()
x, y, z, roll, pitch, yaw = Camera_api.marker_local_pose(rvec[0][0], tvec[0][0],
drone_pose)
marker_pose = [x, y, z, roll, pitch, yaw]
# White outline with black text
cv2.putText(frame, str(toFixed(x, 3)+' ' +
toFixed(y, 3) + ' ' +
toFixed(z, 3) + ' '), (20, 90),
FONT, 1, (255, 255, 255), 3, cv2.LINE_AA)
cv2.putText(frame, str(toFixed(x, 3) + ' ' +
toFixed(y, 3) + ' ' +
toFixed(z, 3) + ' '), (20, 90),
FONT, 1, (0, 0, 0), 1, cv2.LINE_AA)
cv2.putText(frame, str(toFixed(roll, 3)+' ' +
toFixed(pitch, 3) + ' ' +
toFixed(yaw, 3)), (20, 120),
FONT, 1, (255, 255, 255), 3, cv2.LINE_AA)
cv2.putText(frame, str(toFixed(roll, 3) + ' ' +
toFixed(pitch, 3) + ' ' +
toFixed(yaw, 3)), (20, 120),
FONT, 1, (0, 0, 0), 1, cv2.LINE_AA)
else:
# Reset the marker roll so the drone does not keep spinning if the marker is lost
marker_pose[3] = 0
cv2.putText(frame, 'NOT FOUND', (20, 30), FONT,
1, (255, 255, 255), 3, cv2.LINE_AA)
cv2.putText(frame, 'NOT FOUND', (20, 30), FONT,
1, (0, 0, 0), 1, cv2.LINE_AA)
try:
image_pub.publish(bridge.cv2_to_imgmsg(frame, 'bgr8'))
except Exception as e:
rospy.loginfo(e)
# calibration_save.yaml - calibration has already been performed
camera_matrix, dist_coef = clb.loadCoefficients('calibration_save.yaml')
aruco_dict = aruco.Dictionary_get(aruco.DICT_4X4_50)
parameters = aruco.DetectorParameters_create()
marker_pose = [0, 0, 0, 0, 0, 0]
drone = Drone_api()
drone.start()
rospy.loginfo('Drone armed')
image_sub = rospy.Subscriber('/iris_front_fpv/usb_cam/image_raw',
Image, callback, queue_size=1)
rospy.loginfo('Start Subscriber')
image_pub = rospy.Publisher('/iris_front_fpv/usb_cam/location_img',
Image, queue_size=1)
rospy.loginfo('Start Publisher')
drone.sleep(5)
drone.set_local_pose(0, 0, 2, 0)
while not drone.point_is_reached() and not drone.is_shutdown():
drone.sleep(0.5)
# Since the last frame is kept after the image topic is closed,
# the marker values must be "reset" before the main part of the script starts
marker_pose = [2, 0, 2, 0, 0, 0]
while not drone.is_shutdown():
drone_pose = drone.get_local_pose()
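# Target pose: hold 2 m away from the marker along the corrected heading;
# the heading adds the marker's measured roll to the drone's current yaw,
# and the x/y offsets project that 2 m distance onto the heading.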
correct_drone_yaw = marker_pose[3] + drone_pose[5]
correct_drone_x = marker_pose[0] + (-2 * cos(correct_drone_yaw))
correct_drone_y = marker_pose[1] + (-2 * sin(correct_drone_yaw))
drone.set_local_pose(correct_drone_x, correct_drone_y, marker_pose[2],
correct_drone_yaw)
drone.sleep(0.5)
rospy.loginfo('Drone disarmed')
| 40.669492 | 95 | 0.549489 | 601 | 4,799 | 4.232945 | 0.30782 | 0.014937 | 0.010613 | 0.023585 | 0.329009 | 0.278302 | 0.238208 | 0.22327 | 0.195755 | 0.176887 | 0 | 0.058878 | 0.324026 | 4,799 | 117 | 96 | 41.017094 | 0.725339 | 0.09627 | 0 | 0.369565 | 0 | 0 | 0.054772 | 0.0208 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021739 | false | 0 | 0.097826 | 0.01087 | 0.141304 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4452b02d3ec66c4927bb44a51aa1b641b11fd632 | 3,933 | py | Python | Pipelines/Torch/Data/mnist.py | AkibMashrur/Research | a981e3410917216e03e09431c837607543905d83 | [
"Apache-2.0"
] | null | null | null | Pipelines/Torch/Data/mnist.py | AkibMashrur/Research | a981e3410917216e03e09431c837607543905d83 | [
"Apache-2.0"
] | null | null | null | Pipelines/Torch/Data/mnist.py | AkibMashrur/Research | a981e3410917216e03e09431c837607543905d83 | [
"Apache-2.0"
] | null | null | null | # Reference: https://stackoverflow.com/questions/40427435/extract-images-from-idx3-ubyte-file-or-gzip-via-python
# Based on answer by UdaraWanasinghe
import os
import gzip
import numpy as np
import torch
import torch.nn.functional as F
home_dir = os.path.expanduser('~')
parent_dir = "Datasets/Images/MNIST/"
def training_images():
with gzip.open(f'{home_dir}/{parent_dir}/train-images-idx3-ubyte.gz', 'r') as f:
# first 4 bytes is a magic number
magic_number = int.from_bytes(f.read(4), 'big')
# second 4 bytes is the number of images
image_count = int.from_bytes(f.read(4), 'big')
# third 4 bytes is the row count
row_count = int.from_bytes(f.read(4), 'big')
# fourth 4 bytes is the column count
column_count = int.from_bytes(f.read(4), 'big')
# rest is the image pixel data, each pixel is stored as an unsigned byte
# pixel values are 0 to 255
image_data = f.read()
images = np.frombuffer(image_data, dtype=np.uint8)\
.reshape((image_count, row_count, column_count))
return images
def training_labels():
with gzip.open(f'{home_dir}/{parent_dir}/train-labels-idx1-ubyte.gz', 'r') as f:
# first 4 bytes is a magic number
magic_number = int.from_bytes(f.read(4), 'big')
# second 4 bytes is the number of labels
label_count = int.from_bytes(f.read(4), 'big')
# rest is the label data, each label is stored as unsigned byte
# label values are 0 to 9
label_data = f.read()
labels = np.frombuffer(label_data, dtype=np.uint8)
return labels
def testing_images():
with gzip.open(f'{home_dir}/{parent_dir}/t10k-images-idx3-ubyte.gz', 'r') as f:
# first 4 bytes is a magic number
magic_number = int.from_bytes(f.read(4), 'big')
# second 4 bytes is the number of images
image_count = int.from_bytes(f.read(4), 'big')
# third 4 bytes is the row count
row_count = int.from_bytes(f.read(4), 'big')
# fourth 4 bytes is the column count
column_count = int.from_bytes(f.read(4), 'big')
# rest is the image pixel data, each pixel is stored as an unsigned byte
# pixel values are 0 to 255
image_data = f.read()
images = np.frombuffer(image_data, dtype=np.uint8)\
.reshape((image_count, row_count, column_count))
return images
def testing_labels():
with gzip.open(f'{home_dir}/{parent_dir}/t10k-labels-idx1-ubyte.gz', 'r') as f:
# first 4 bytes is a magic number
magic_number = int.from_bytes(f.read(4), 'big')
# second 4 bytes is the number of labels
label_count = int.from_bytes(f.read(4), 'big')
# rest is the label data, each label is stored as unsigned byte
# label values are 0 to 9
label_data = f.read()
labels = np.frombuffer(label_data, dtype=np.uint8)
return labels
class MNISTdataset(torch.utils.data.Dataset):
""" MNIST Dataset Subclass"""
def __init__(self, train=True, feature_transform=None, target_transform=None):
self.train = train
if self.train:
self.images = training_images()
self.labels = training_labels()
else:
self.images = testing_images()
self.labels = testing_labels()
self.feature_transform = feature_transform
self.target_transform = target_transform
def __getitem__(self, index):
image = self.images[index]
label = self.labels[index]
if self.feature_transform:
image = self.feature_transform(image)
if self.target_transform:
label = self.target_transform(label)
return image, label
def __len__(self):
return len(self.labels)
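# Example usage (illustrative; the transform below is an assumption, not part of this module):
# train_set = MNISTdataset(train=True,
#                          feature_transform=lambda img: torch.from_numpy(img.copy()).float() / 255.0)
# loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)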
if __name__ == "__main__":
mnist_test = MNISTdataset(train=False)
print(len(mnist_test)) | 37.817308 | 112 | 0.637935 | 570 | 3,933 | 4.247368 | 0.191228 | 0.033044 | 0.039653 | 0.064436 | 0.647666 | 0.647666 | 0.647666 | 0.647666 | 0.647666 | 0.582404 | 0 | 0.019481 | 0.256039 | 3,933 | 104 | 113 | 37.817308 | 0.807929 | 0.2418 | 0 | 0.4 | 0 | 0 | 0.091032 | 0.07445 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107692 | false | 0 | 0.076923 | 0.015385 | 0.292308 | 0.015385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4452eed1643b0ab8c55a85a0007be4f6da57ef15 | 7,968 | py | Python | resources/rest-service/cloudify/migrations/versions/b92770a7b6ca_5_3_to_6_0.py | ilan-WS/cloudify-manager | 510d8a277c848db351f38fc5b264806b2cb36d0b | [
"Apache-2.0"
] | 124 | 2015-01-22T22:28:37.000Z | 2022-02-26T23:12:06.000Z | resources/rest-service/cloudify/migrations/versions/b92770a7b6ca_5_3_to_6_0.py | cloudify-cosmo/cloudify-manager | 4a3f44ceb49d449bc5ebc8766b1c7b9c174ff972 | [
"Apache-2.0"
] | 345 | 2015-01-08T15:49:40.000Z | 2022-03-29T08:33:00.000Z | resources/rest-service/cloudify/migrations/versions/b92770a7b6ca_5_3_to_6_0.py | ilan-WS/cloudify-manager | 510d8a277c848db351f38fc5b264806b2cb36d0b | [
"Apache-2.0"
] | 77 | 2015-01-07T14:04:35.000Z | 2022-03-07T22:46:00.000Z | """5_3 to 6_0
Revision ID: b92770a7b6ca
Revises: 396303c07e35
Create Date: 2021-04-12 09:33:44.399254
"""
from alembic import op
import sqlalchemy as sa
from manager_rest.storage import models
# revision identifiers, used by Alembic.
revision = 'b92770a7b6ca'
down_revision = '396303c07e35'
branch_labels = None
depends_on = None
def upgrade():
_add_execution_group_fk()
_add_new_execution_columns()
_drop_events_id()
_drop_logs_id()
_add_deployments_display_name_column()
_add_depgroups_creation_counter()
_add_execgroups_dep_fks()
_create_depgroup_dep_constraint()
def downgrade():
_drop_depgroup_dep_constraint()
_drop_execgroups_dep_fks()
_drop_depgroups_creation_counter()
_drop_deployments_display_name_column()
_create_logs_id()
_create_events_id()
_drop_execution_group_fk()
_drop_new_execution_columns()
def _drop_events_id():
op.drop_index('events_id_idx', table_name='events')
op.drop_column('events', 'id')
def _drop_logs_id():
op.drop_index('logs_id_idx', table_name='logs')
op.drop_column('logs', 'id')
def _create_logs_id():
op.add_column('logs', sa.Column('id', sa.Text(),
autoincrement=False, nullable=True))
op.create_index('logs_id_idx', 'logs', ['id'],
unique=False)
def _create_events_id():
op.add_column('events', sa.Column('id', sa.Text(),
autoincrement=False, nullable=True))
op.create_index('events_id_idx', 'events', ['id'],
unique=False)
def _add_new_execution_columns():
op.add_column(
'executions',
sa.Column('allow_custom_parameters', sa.Boolean(),
server_default='false', nullable=False)
)
def _drop_new_execution_columns():
op.drop_column('executions', 'allow_custom_parameters')
def _add_execution_group_fk():
op.add_column(
'events',
sa.Column('_execution_group_fk', sa.Integer(), nullable=True)
)
op.alter_column(
'events',
'_execution_fk',
existing_type=sa.Integer(),
nullable=True
)
op.create_index(
op.f('events__execution_group_fk_idx'),
'events',
['_execution_group_fk'],
unique=False
)
op.create_foreign_key(
op.f('events__execution_group_fk_fkey'),
'events',
'execution_groups',
['_execution_group_fk'],
['_storage_id'],
ondelete='CASCADE',
)
op.create_check_constraint(
'events__one_fk_not_null',
'events',
'(_execution_fk IS NOT NULL) != (_execution_group_fk IS NOT NULL)'
)
op.add_column(
'logs',
sa.Column('_execution_group_fk', sa.Integer(), nullable=True)
)
op.alter_column(
'logs',
'_execution_fk',
existing_type=sa.Integer(),
nullable=True
)
op.create_index(
op.f('logs__execution_group_fk_idx'),
'logs',
['_execution_group_fk'],
unique=False
)
op.create_foreign_key(
op.f('logs__execution_group_fk_fkey'),
'logs',
'execution_groups',
['_execution_group_fk'],
['_storage_id'],
ondelete='CASCADE'
)
op.create_check_constraint(
'logs__one_fk_not_null',
'logs',
'(_execution_fk IS NOT NULL) != (_execution_group_fk IS NOT NULL)'
)
def _drop_execution_group_fk():
op.drop_constraint(
op.f('logs__one_fk_not_null'),
'logs',
type_='check',
)
op.drop_constraint(
op.f('logs__execution_group_fk_fkey'),
'logs',
type_='foreignkey'
)
op.drop_index(
op.f('logs__execution_group_fk_idx'),
table_name='logs'
)
op.execute(
models.Log.__table__
.delete()
.where(models.Log.__table__.c._execution_fk.is_(None))
)
op.alter_column(
'logs',
'_execution_fk',
existing_type=sa.Integer(),
nullable=False
)
op.drop_column(
'logs',
'_execution_group_fk'
)
op.drop_constraint(
op.f('events__one_fk_not_null'),
'events',
type_='check',
)
op.drop_constraint(
op.f('events__execution_group_fk_fkey'),
'events',
type_='foreignkey'
)
op.drop_index(
op.f('events__execution_group_fk_idx'),
table_name='events'
)
op.execute(
models.Event.__table__
.delete()
.where(models.Event.__table__.c._execution_fk.is_(None))
)
op.alter_column(
'events',
'_execution_fk',
existing_type=sa.Integer(),
nullable=False
)
op.drop_column(
'events',
'_execution_group_fk'
)
def _add_deployments_display_name_column():
op.add_column('deployments',
sa.Column('display_name', sa.Text(), nullable=True))
op.execute(models.Deployment.__table__.update().values(
display_name=models.Deployment.__table__.c.id))
op.alter_column(
'deployments',
'display_name',
existing_type=sa.Text(),
nullable=False
)
op.create_index(op.f('deployments_display_name_idx'),
'deployments', ['display_name'], unique=False)
def _drop_deployments_display_name_column():
op.drop_index(op.f('deployments_display_name_idx'),
table_name='deployments')
op.drop_column('deployments', 'display_name')
def _add_depgroups_creation_counter():
op.add_column(
'deployment_groups',
sa.Column('creation_counter', sa.Integer(), nullable=False,
server_default='0')
)
def _drop_depgroups_creation_counter():
op.drop_column('deployment_groups', 'creation_counter')
def _add_execgroups_dep_fks():
op.add_column(
'execution_groups',
sa.Column('_success_group_fk', sa.Integer(), nullable=True)
)
op.add_column(
'execution_groups',
sa.Column('_failed_group_fk', sa.Integer(), nullable=True)
)
op.create_index(
op.f('execution_groups__failed_group_fk_idx'),
'execution_groups',
['_failed_group_fk'],
unique=False
)
op.create_index(
op.f('execution_groups__success_group_fk_idx'),
'execution_groups',
['_success_group_fk'],
unique=False
)
op.create_foreign_key(
op.f('execution_groups__success_group_fk_fkey'),
'execution_groups',
'deployment_groups',
['_success_group_fk'],
['_storage_id'],
ondelete='SET NULL'
)
op.create_foreign_key(
op.f('execution_groups__failed_group_fk_fkey'),
'execution_groups',
'deployment_groups',
['_failed_group_fk'],
['_storage_id'],
ondelete='SET NULL'
)
def _drop_execgroups_dep_fks():
op.drop_constraint(
op.f('execution_groups__failed_group_fk_fkey'),
'execution_groups',
type_='foreignkey'
)
op.drop_constraint(
op.f('execution_groups__success_group_fk_fkey'),
'execution_groups',
type_='foreignkey'
)
op.drop_index(
op.f('execution_groups__success_group_fk_idx'),
table_name='execution_groups'
)
op.drop_index(
op.f('execution_groups__failed_group_fk_idx'),
table_name='execution_groups'
)
op.drop_column('execution_groups', '_failed_group_fk')
op.drop_column('execution_groups', '_success_group_fk')
def _create_depgroup_dep_constraint():
op.create_unique_constraint(
op.f('deployment_groups_deployments_deployment_group_id_key'),
'deployment_groups_deployments',
['deployment_group_id', 'deployment_id']
)
def _drop_depgroup_dep_constraint():
op.drop_constraint(
op.f('deployment_groups_deployments_deployment_group_id_key'),
'deployment_groups_deployments',
type_='unique'
)
| 25.538462 | 74 | 0.633032 | 921 | 7,968 | 4.952226 | 0.117264 | 0.058321 | 0.077176 | 0.031572 | 0.623109 | 0.555141 | 0.507345 | 0.422495 | 0.35102 | 0.278009 | 0 | 0.009848 | 0.248117 | 7,968 | 311 | 75 | 25.620579 | 0.751461 | 0.017445 | 0 | 0.526718 | 0 | 0 | 0.28091 | 0.114308 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068702 | false | 0 | 0.01145 | 0 | 0.080153 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44567aa723c7083a702d9f866e897914bb948003 | 47,745 | py | Python | script/FilesFunctions/FilesFunctions.py | totordudu/UTBM_TZ20 | 27e4d3f247a3eba26c1c7c7a21e184917056ac52 | [
"MIT"
] | null | null | null | script/FilesFunctions/FilesFunctions.py | totordudu/UTBM_TZ20 | 27e4d3f247a3eba26c1c7c7a21e184917056ac52 | [
"MIT"
] | null | null | null | script/FilesFunctions/FilesFunctions.py | totordudu/UTBM_TZ20 | 27e4d3f247a3eba26c1c7c7a21e184917056ac52 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import os
from datetime import datetime, timedelta
import errno
import time
import USBKey
class Files():
import sys
import csv
from datetime import datetime, date, time, timedelta
import os
from os import path
import shutil
# urllib.request is needed under Python 3 (the Python 2 urllib.urlopen no longer exists)
import urllib.request
import USBKey
import json
import structureConfig as structConfig
import re
import ast
"""
The constructor initialises this class.
IMPORTANT: if the file does not exist yet it will be created. This means we are creating a FILE THAT STORES UIDs AND NOTHING ELSE.
Conversely, to compare files you must create a Files object and pass as argument the location of the file within your structure.
Argument: path to the file (default value is None)
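Example (hypothetical path and UID, for illustration only):
    scan_file = Files('/home/pi/files/UID_inputs/01-01-2021-10-00-00.csv')
    scan_file.addStudentToFile('04A1B2C3D4', datetime.now())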
"""
def __init__(self, initialFilePath=None):
self.presents = 0
self.absents = 0
self.wrong_presents = 0
self.errorsRequestAPI = 0
self.initFilePath = initialFilePath
self.folderPathName = ""
self.pathDSIFile = ""
self.read = 'r'
self.append = 'a'
self.write = 'w'
"""
This method adds a student to the file given as initialFilePath
"""
def addStudentToFile(self, UID, DTnow):
# Check whether the uid is already present in the file
if(self.exist(self.initFilePath)):
# The following file read checks whether the card scanned by the user is already present in the file
with open(self.initFilePath, self.read) as UIDFile:
UIDFilereader = self.csv.reader(UIDFile)
for row in UIDFilereader:
if(row[2] == UID):
print("Carte deja scannee")
return False
with open(self.initFilePath, self.append) as UIDFile:
UIDFileWriter = self.csv.writer(
UIDFile, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
UIDFileWriter.writerow(
[DTnow.strftime("%d-%m-%Y"), DTnow.strftime("%H-%M-%S"), UID])
return True
else:
with open(self.initFilePath, self.write) as UIDFile:
UIDFileWriter = self.csv.writer(
UIDFile, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
UIDFileWriter.writerow(
[DTnow.strftime("%d-%m-%Y"), DTnow.strftime("%H-%M-%S"), UID])
return True
"""
This method finds the file whose date falls within a given interval of the reference date passed to the class.
In our case it is used to search the USB key for a folder that lies within the interval.
Output: name of the file within the interval, otherwise None
"""
def foundSameEventFile(self, pathUSB, DTMain, interval):
'''
More conditions still need to be added to this function, especially since it must detect whether files with the same date already exist on the key.
'''
folderList = listDirectory(
pathUSB + "/Fichiers_SCAN", True, False, False)
print("DIR LIST : ", folderList)
for folderName in folderList:
try:
folderDT = self.datetime.strptime(
folderName, "%d-%m-%Y-%H-%M-%S")
Delta = abs(DTMain - folderDT)
if(Delta <= interval):
InnerFiles = listDirectory(
pathUSB+"/Fichiers_SCAN/"+folderName, False, True, True)
StandardFiles = ["report", "total"]
isExtractionComplete = True
for stfile in StandardFiles:
if not stfile+".csv" in InnerFiles:
isExtractionComplete = False
if isExtractionComplete:
print("Complete extraction found on USB key: ", folderName)
return folderName
else:
print("Uncomplete extraction found on USB key: ", folderName)
except ValueError:
print("Warning : Fichiers_SCAN folder contains undesired folders")
return None
"""
This method exports the final folder generated by the method compareDsiFilesToFileCreation().
It is therefore important to call that method before exporting to the USB key, otherwise the value of self.folderPathName
will not match the location of the files on the USB key
"""
def exportFileToUSB(self, USB_Key, DSIPath):
if(USB_Key):
# this condition checks if the extraction has been done
if(self.folderPathName != ""):
if(self.exist(USB_Key + '/' + "Fichiers_SCAN")):
USB_Key += '/' + "Fichiers_SCAN" + '/' + \
self.initFilePath.rsplit(
'/')[-1].replace(".csv", "") + '/'
self.copyFilesFromDirectoryToDirectory(
self.folderPathName + '/', USB_Key)
if(self.exist(USB_Key)):
return True
else:
return False
else:
self.os.mkdir(USB_Key + '/' + "Fichiers_SCAN")
USB_Key += '/' + "Fichiers_SCAN" + '/' + \
self.initFilePath.rsplit(
'/')[-1].replace(".csv", "") + '/'
self.copyFilesFromDirectoryToDirectory(
self.folderPathName + '/', USB_Key)
if(self.exist(USB_Key)):
return True
else:
return False
# This condition checks whether the final file extraction still needs to be done and whether the uid file check allows it;
# if so it generates the final files and then retries the export
elif(self.folderPathName == "" and not self.isEmpty(self.initFilePath)):
self.compareDsiFilesToFileCreation(
DSIPath, self.initFilePath.rsplit('/')[-1].replace(".csv", ""))
self.exportFileToUSB(USB_Key, DSIPath)
else:
print("ERROR : usb key missing or final files not in the folder")
return False
"""
This method merges files on the USB key with the files present in the final_extractions folder.
To do so, the files from the final_extractions folder are appended to the files on the USB key
"""
# to call in case of multiple extraction
def addToUSBKEY(self, pathToUSB, DTnow, interval):
if(pathToUSB):
if(self.folderPathName != ""):  # checks that the extraction has been done
if(self.exist(pathToUSB + '/' + "Fichiers_SCAN/")):
directoryName = self.foundSameEventFile(
pathToUSB, DTnow, interval)
if directoryName:
pathToUSB += '/Fichiers_SCAN/' + directoryName
if(self.exist(self.folderPathName + '/presents.csv')):
with open(self.folderPathName + '/presents.csv', self.read) as presentFile:
presentFileReader = self.csv.reader(
presentFile)
if(self.exist(pathToUSB + '/presents.csv')):
next(presentFileReader)
with open(pathToUSB + '/presents.csv', self.append) as presentFileUSBKey:
fileUSBwriter = self.csv.writer(
presentFileUSBKey)
# This read of the file checks that the person is not already present in the file on the key;
# that would mean the person came in through two different entrances and scanned both times
with open(pathToUSB + '/presents.csv', self.read) as presentFileUSBReader:
checkerUSBPresent = self.csv.reader(
presentFileUSBReader)
for student in presentFileReader:
presentFileUSBReader.seek(0)
next(checkerUSBPresent)
indicePresent = False
for scannedPresent in checkerUSBPresent:
if(student[5] == scannedPresent[5]):
indicePresent = True
if not indicePresent:
fileUSBwriter.writerow(
student[:])
self.presents = self.__row_count(
pathToUSB + '/presents.csv') - 1
else:
with open(pathToUSB + '/presents.csv', self.write) as presentFileUSBKey:
fileUSBwriter = self.csv.writer(
presentFileUSBKey)
for student in presentFileReader:
fileUSBwriter.writerow(
student[:])
print("Adding present content to USB done")
if(self.exist(self.folderPathName + '/absents.csv')):
with open(self.folderPathName + '/absents.csv', self.read) as absentFile:
absentFileReader = self.csv.reader(absentFile)
if(self.exist(pathToUSB + '/absents.csv') and self.exist(pathToUSB + '/presents.csv')):
self.absents = 0
# Delete the absents file on the USB key so the absents can be regenerated from the presents file
self.deleteFile(pathToUSB + '/absents.csv')
# This then recreates an absents file on the USB key by reading the presents file stored on the key
with open(pathToUSB + '/presents.csv', self.read) as Present_File, open(self.pathDSIFile+".csv", self.read) as DSIfile:
Present_FileReader = self.csv.reader(
Present_File)
DSI_FileReader = self.csv.reader(
DSIfile)
# Skip the header rows in the files
next(DSI_FileReader)
for DSI_Row in DSI_FileReader:
indice_present = False
Present_File.seek(0)
next(Present_FileReader)
for Present_Row in Present_FileReader:
if(DSI_Row[3] == Present_Row[5]):
indice_present = True
break
if not indice_present:
if(self.exist(pathToUSB + '/absents.csv')):
with open(pathToUSB + '/absents.csv', self.append) as absentsFile:
absentFileWriter = self.csv.writer(
absentsFile, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
absentFileWriter.writerow(
DSI_Row[:])
else:
with open(pathToUSB + '/absents.csv', self.write) as absentsFile:
absentFileWriter = self.csv.writer(
absentsFile, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
absentFileWriter.writerow(
['NOM', 'PRENOM', 'ETUD_NUMERO', 'NO_INDIVIDU', 'MAIL', 'LOGIN', 'FORMATION', 'NIVEAU'])
absentFileWriter.writerow(
DSI_Row[:])
self.absents += 1
# If there is no presents file on the key but there is an absents file, then append the absents we generated to that file
elif(self.exist(pathToUSB + '/absents.csv')):
with open(pathToUSB + '/absents.csv', self.append) as absentFileUSBKey:
fileUSBwriter = self.csv.writer(
absentFileUSBKey)
next(absentFileReader)
for student in absentFileReader:
fileUSBwriter.writerow(
student[:])
self.absents = self.__row_count(
pathToUSB + '/absents.csv') - 1
# If there is no absents file at all on the key, copy the one we generated
elif not self.exist(pathToUSB + '/absents.csv'):
with open(pathToUSB + '/absents.csv', self.write) as absentFileUSBKey:
fileUSBwriter = self.csv.writer(
absentFileUSBKey)
for student in absentFileReader:
fileUSBwriter.writerow(
student[:])
print("Adding absents content to USB done")
# Since the faux-presents are people who are there by mistake, we can append them directly to the end of the file, or create it if it does not exist
if(self.exist(self.folderPathName + '/faux-presents.csv')):
with open(self.folderPathName + '/faux-presents.csv', self.read) as wrong_present_File:
wPresentFileReader = self.csv.reader(
wrong_present_File)
if(self.exist(pathToUSB + '/faux-presents.csv')):
next(wPresentFileReader)
with open(pathToUSB + '/faux-presents.csv', self.append) as wPresentFileUSBKey:
fileUSBwriter = self.csv.writer(
wPresentFileUSBKey)
for student in wPresentFileReader:
fileUSBwriter.writerow(
student[:])
else:
with open(pathToUSB + '/faux-presents.csv', self.write) as wPresentFileUSBKey:
fileUSBwriter = self.csv.writer(
wPresentFileUSBKey)
for student in wPresentFileReader:
fileUSBwriter.writerow(
student[:])
self.wrong_presents = self.__row_count(
pathToUSB + '/faux-presents.csv') - 1
print("Adding faux-present content to USB done")
with open(pathToUSB + '/total.csv', self.write) as totalFileUSBKey:
totalFileWriter = self.csv.writer(
totalFileUSBKey, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
totalFileWriter.writerow(
['DATE', 'HEURE', 'NOM', 'PRENOM', 'PRESENCE', 'ETUD_NUMERO', 'NO_INDIVIDU', 'MAIL', 'FORMATION', 'NIVEAU'])
if(self.exist(pathToUSB + '/presents.csv')):
with open(pathToUSB + '/presents.csv', self.read) as presentFile:
presentFileReader = self.csv.reader(
presentFile)
next(presentFileReader)
for present in presentFileReader:
totalFileWriter.writerow(
list(present[0:4])+["OUI"]+list(present[4:10]))
if(self.exist(pathToUSB + '/absents.csv')):
with open(pathToUSB + '/absents.csv', self.read) as absentsFile:
absentFileReader = self.csv.reader(
absentsFile)
next(absentFileReader)
for absent in absentFileReader:
totalFileWriter.writerow(
["/", "/"]+list(absent[0:2])+["NON"]+list(absent[2:8]))
if(self.exist(pathToUSB + '/faux-presents.csv')):
with open(pathToUSB + '/faux-presents.csv', self.read) as fauxPresentFile:
fauxPresentFileReader = self.csv.reader(
fauxPresentFile)
next(fauxPresentFileReader)
for faux_present in fauxPresentFileReader:
totalFileWriter.writerow(
list(faux_present[0:2]) + ["/", "/", "OUI MAIS INATTENDUE", "/", "/", "/"]+[faux_present[2]]+["/", "/"])
print("Adding total content to USB done")
if(self.exist(self.folderPathName + '/report.csv')):
linesReportUSB = []
with open(self.folderPathName + '/report.csv', self.read) as reportFile:
reportFileReader = self.csv.reader(reportFile)
if(self.exist(pathToUSB + '/report.csv') and self.__checkReportCorrectness(pathToUSB + '/report.csv')):
next(reportFileReader)
with open(pathToUSB + '/report.csv', self.read) as reportFileUSB:
reportFileReaderUSB = self.csv.reader(
reportFileUSB)
headerReport = next(
reportFileReaderUSB)
linesReportUSB = next(
reportFileReaderUSB)
numScan = self.__row_count(
self.initFilePath) + int(linesReportUSB[0])
numSupposedToAttend = self.__row_count(
self.pathDSIFile+".csv")-1
percentAbsents = (
float(self.absents) / numSupposedToAttend) * 100
percentPresents = (
float(self.presents) / numSupposedToAttend) * 100
percentWrongPresent = (
float(self.wrong_presents) / numScan) * 100
dateFirstScan, dateLastScan = 0, 0
with open(self.structConfig.structure['UID_inputs'] + DTnow.strftime("%d-%m-%Y-%H-%M-%S") + ".csv", self.read) as inputFile:
inputFileReader = self.csv.reader(
inputFile)
fileLines = list(inputFileReader)
dateFirstScan = self.datetime.strptime(
fileLines[0][1], "%H-%M-%S")
dateLastScan = self.datetime.strptime(
fileLines[-1][1], "%H-%M-%S")
scanningTime = dateLastScan - dateFirstScan
scanningTime += self.datetime.strptime(
linesReportUSB[5], "%H:%M:%S")
with open(pathToUSB + "/report.csv", self.write) as reportFile:
reportWriter = self.csv.writer(
reportFile, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
reportWriter.writerow(["Nombre scans", "Nombre etudiants attendus", "Nombre absents (%Attendus) ", "Nombre presents (%Attendus)",
"Nombre faux-presents (%Scans)", "DUREE SCAN", "Nombre erreur de requetes API"])
reportWriter.writerow([str(numScan), str(numSupposedToAttend), str(self.absents) + ' (' + '%.2f' % percentAbsents + '%)', str(
self.presents) + ' (' + '%.2f' % percentPresents + '%)', str(self.wrong_presents) + ' (' + '%.2f' % percentWrongPresent + '%)', scanningTime.strftime("%H:%M:%S"), self.errorsRequestAPI])
elif not self.__checkReportCorrectness(pathToUSB + '/report.csv'):
print(
"There are problems in the file report.csv at ", pathToUSB + '/report.csv')
return False
print("Adding report content to USB done")
return True
else:
print("No files found in the time interval")
return False
else:
print("The folder to contain all files on USB doesn't exist")
self.os.mkdir(pathToUSB + '/' + "Fichiers_SCAN/")
if(self.addToUSBKEY(pathToUSB, DTnow, interval)):
return True
return False
else:
print(
"Please do the comparison with the DSI file before trying to export files")
return False
else:
print("No usb Key connected")
return False
"""
This method compares two files.
It compares the file containing the card numbers with the file provided by the DSI
argument : string pathToFile -> location of the file to compare with the one given initially
output :
For the comparison to work, the DSI file must have the following format:
NOM, PRENOM, ETUD_NUMERO, NO_INDIVIDU, EMAIL, LOGIN, FORMATION, NIVEAU
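Example row in that format (fictional data, for illustration only):
DUPONT,MARIE,21901234,1234567,marie.dupont@utbm.fr,mdupont,GI,3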
"""
def compareDsiFilesToFileCreation(self, pathToDsiFile, DTnow):
self.pathDSIFile = pathToDsiFile
pathToDsiFile += ".csv"
self.folderPathName = self.structConfig.structure["final_extractions"] + \
self.initFilePath.split('/')[-1].replace(".csv", "")
try: # here we try to create a Directory to store files
self.os.mkdir(self.folderPathName)
except OSError as e:
if(e.errno != errno.EEXIST):  # use the errno module directly; os.errno was removed in Python 3.7
print("Creation of the directory %s failed" %
self.folderPathName)
print(e)
return
# This line sends the requests to the API in order to retrieve the logins
self.__getUserInfoViaAPI(DTnow)
api_output_path = self.structConfig.structure["API_OUTPUTS"] + \
DTnow.strftime("%d-%m-%Y-%H-%M-%S") + ".csv"
with open(api_output_path, self.read) as API_File, open(pathToDsiFile, self.read) as DSIfile:
API_FileReader = self.csv.reader(API_File)
# to specify delimiter : self.csv.reader(DSIfile,delimiter=";")
DSI_FileReader = self.csv.reader(DSIfile, delimiter=',')
# This line skips the header row
next(DSI_FileReader)
for API_Row in API_FileReader:
print("Student found in API_output : ", API_Row[1])
indice_present = 0
for DSI_Row in DSI_FileReader:
# print(DSI_Row[5])
if(API_Row[1] == DSI_Row[5]):
print('Present recognized')
rowUidFile = self.__getRowByKey(
API_Row[0], self.initFilePath, 2)
if(self.exist(self.folderPathName + '/presents.csv')):
with open(self.folderPathName + '/presents.csv', self.append) as presentFile:
presentFileWriter = self.csv.writer(
presentFile, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
presentFileWriter.writerow(
[rowUidFile[0], rowUidFile[1], DSI_Row[0], DSI_Row[1], DSI_Row[2], DSI_Row[3], DSI_Row[4], DSI_Row[5], DSI_Row[6], DSI_Row[7]])
else:  # we first write the header at the top of the file
with open(self.folderPathName + '/presents.csv', self.write) as presentFile:
presentFileWriter = self.csv.writer(
presentFile, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
# write the column headers of the table
presentFileWriter.writerow(
['DATE', 'HEURE', 'NOM', 'PRENOM', 'ETUD_NUMERO', 'NO_INDIVIDU', 'MAIL', 'LOGIN', 'FORMATION', 'NIVEAU'])
presentFileWriter.writerow(
[rowUidFile[0], rowUidFile[1], DSI_Row[0], DSI_Row[1], DSI_Row[2], DSI_Row[3], DSI_Row[4], DSI_Row[5], DSI_Row[6], DSI_Row[7]])
self.presents += 1
indice_present = 1
break
DSIfile.seek(0)
next(DSI_FileReader)
if not indice_present and API_Row[1] != "Erreur_API":
rowUidFile = self.__getRowByKey(
API_Row[0], self.initFilePath, 2)
if(self.exist(self.folderPathName + '/faux-presents.csv')):
with open(self.folderPathName + '/faux-presents.csv', self.append) as fauxPresentFile:
fauxPresentFileWriter = self.csv.writer(
fauxPresentFile, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
fauxPresentFileWriter.writerow(
[rowUidFile[0], rowUidFile[1], API_Row[1]])
else:
with open(self.folderPathName + '/faux-presents.csv', self.write) as fauxPresentFile:
fauxPresentFileWriter = self.csv.writer(
fauxPresentFile, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
fauxPresentFileWriter.writerow(
['DATE', 'HEURE', 'LOGIN'])
fauxPresentFileWriter.writerow(
[rowUidFile[0], rowUidFile[1], API_Row[1]])
self.wrong_presents += 1
if(self.exist(self.folderPathName + '/presents.csv')):
with open(self.folderPathName + '/presents.csv', self.read) as Present_File, open(pathToDsiFile, self.read) as DSIfile:
DSIfile.seek(0)
Present_FileReader = self.csv.reader(Present_File)
DSI_FileReader = self.csv.reader(DSIfile)
# These two lines skip the headers in the files
next(Present_FileReader)
next(DSI_FileReader)
for DSI_Row in DSI_FileReader:
indice_present = 0
Present_File.seek(0)
next(Present_FileReader)
for Present_Row in Present_FileReader:
if(DSI_Row[3] == Present_Row[5]):
indice_present = 1
print("Present found : ", Present_Row[2])
break
if not indice_present:
if(self.exist(self.folderPathName + '/absents.csv')):
with open(self.folderPathName + '/absents.csv', self.append) as absentsFile:
absentFileWriter = self.csv.writer(
absentsFile, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
absentFileWriter.writerow(
DSI_Row[:])
else:
with open(self.folderPathName + '/absents.csv', self.write) as absentsFile:
absentFileWriter = self.csv.writer(
absentsFile, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
# write the column headers of the table
absentFileWriter.writerow(
['NOM', 'PRENOM', 'ETUD_NUMERO', 'NO_INDIVIDU', 'MAIL', 'LOGIN', 'FORMATION', 'NIVEAU'])
absentFileWriter.writerow(
DSI_Row[:])
self.absents += 1
else:
print("PATH DSI FILE : ", self.pathDSIFile)
# self.os.system("cp "+self.pathDSIFile + " " +
# self.folderPathName + '/absents.csv')
self.absents = 0 if not self.exist(
self.folderPathName + '/absents.csv') else self.__row_count(self.folderPathName+'/absents.csv')
with open(self.folderPathName + '/total.csv', self.append) as total_File:
totalFileWriter = self.csv.writer(
total_File, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
totalFileWriter.writerow(
['DATE', 'HEURE', 'NOM', 'PRENOM', 'PRESENCE', 'ETUD_NUMERO', 'NO_INDIVIDU', 'MAIL', 'LOGIN', 'FORMATION', 'NIVEAU'])
if(self.exist(self.folderPathName + '/presents.csv')):
with open(self.folderPathName + '/presents.csv', self.read) as presentFile:
presentFileReader = self.csv.reader(presentFile)
next(presentFileReader)
for present in presentFileReader:
totalFileWriter.writerow(
list(present[0:4])+["OUI"]+list(present[4:10]))
if(self.exist(self.folderPathName + '/absents.csv')):
with open(self.folderPathName + '/absents.csv', self.read) as absentsFile:
absentFileReader = self.csv.reader(absentsFile)
next(absentFileReader)
for absent in absentFileReader:
totalFileWriter.writerow(
["/", "/"]+list(absent[0:2])+["NON"]+list(absent[2:8]))
if(self.exist(self.folderPathName + '/faux-presents.csv')):
with open(self.folderPathName + '/faux-presents.csv', self.read) as fauxPresentFile:
fauxPresentFileReader = self.csv.reader(fauxPresentFile)
next(fauxPresentFileReader)
for faux_present in fauxPresentFileReader:
totalFileWriter.writerow(
list(faux_present[0:2]) + ["/", "/", "OUI MAIS INATTENDUE", "/", "/", "/"]+[faux_present[2]]+["/", "/"])
self.__generateReport(self.folderPathName, DTnow)
"""
This method checks whether a file already exists at the pathToFile given as argument
Output : boolean
"""
def exist(self, pathToFile=None):
if pathToFile == None:
if(self.path.exists(self.initFilePath)):
return True
else:
return False
else:
if(self.path.exists(pathToFile)):
return True
else:
return False
"""
This method checks whether a csv file contains more than one row of content
"""
def isEmpty(self, pathToFile):
if self.exist(pathToFile):
if(self.__row_count(pathToFile) > 1):
return True
else:
return False
else:
return False
"""
This method deletes a file at a given location
"""
def deleteFile(self, pathToFile):
try:
self.os.system("rm -rf "+pathToFile)
print("deleted file ", pathToFile)
return True
except OSError:
print("cant delete file ", pathToFile)
return False
"""
This method copies a directory to another one
"""
def copyFilesFromDirectoryToDirectory(self, pathSrc, pathDest):
try:
self.os.system("cp -r " + pathSrc + ' ' + pathDest)
print("File copy to :", pathDest)
return True
except OSError:
print("ERROR : cant copy file to dest")
return False
"""
This method returns the path of the file
"""
def getPath(self):
return self.initFilePath
"""
This private method sends the requests to the UTBM API
"""
"""
This method finds the row where a user is located in the file at path, looking in the specified column
"""
def __getRowByKey(self, searchKey, path, columnToSearchIn):
with open(path, self.read) as File:
FileReader = self.csv.reader(File)
for row in FileReader:
if(row[columnToSearchIn] == searchKey):
return row
return None
"""
This method retrieves every login from the UID of each card.
It reads the UID_inputs file and then generates the API_OUTPUT file. If a request fails, the card UID is still added to the
API_OUTPUT file but with the login -> Erreur_API
Since nobody owns the login Erreur_API, that user will end up in the faux-presents.csv file
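Example API_OUTPUT rows (fictional UIDs and login, for illustration only):
04A1B2C3D4,mdupont
04FFEE1122,Erreur_API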
"""
def __getUserInfoViaAPI(self, DTnow):
print("Getting students logins")
url = self.structConfig.structure["API_url"]
outFile = self.structConfig.structure["API_OUTPUTS"] + \
DTnow.strftime("%d-%m-%Y-%H-%M-%S") + ".csv"
with open(self.initFilePath, self.read) as f, open(outFile, self.write) as g:
# outFileWriter = self.csv.writer(g, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
outFileWriter = self.csv.writer(
g, delimiter=',', quotechar='|', escapechar='-', quoting=self.csv.QUOTE_NONE)
for i in f:
i = i[:-1].split(',')
try:
print("Sending Request to API")
response = self.urllib.request.urlopen(
url + str(i[2]).rstrip()).read().decode("UTF-8")
data = self.ast.literal_eval(response)
user_login = data['porteur']['login']
outFileWriter.writerow([i[2][:-1], user_login])
print("API request success")
except Exception as e:
self.errorsRequestAPI += 1
print("Error with UID : " + i[2])
print(e)
outFileWriter.writerow([i[2][:-1], "Erreur_API"])
"""
This method creates the report file, which gives a summary of the event
"""
def __generateReport(self, pathToFinalExtraction, DTnow):
# subtract 1 because one row corresponds to the header
numScan = self.__row_count(self.initFilePath)
numSupposedToAttend = self.__row_count(self.pathDSIFile+".csv")-1
percentAbsents = (float(self.absents) / numSupposedToAttend) * 100
percentPresents = (float(self.presents) / numSupposedToAttend) * 100
percentWrongPresent = (float(self.wrong_presents) / numScan) * 100
dateFirstScan, dateLastScan = 0, 0
with open(self.structConfig.structure['UID_inputs'] + DTnow.strftime("%d-%m-%Y-%H-%M-%S") + ".csv", self.read) as inputFile:
inputFileReader = self.csv.reader(inputFile)
fileLines = list(inputFileReader)
dateFirstScan = self.datetime.strptime(
fileLines[0][1], "%H-%M-%S")
dateLastScan = self.datetime.strptime(
fileLines[-1][1], "%H-%M-%S")
scanningTime = dateLastScan - dateFirstScan
with open(pathToFinalExtraction + "/report.csv", self.write) as reportFile:
reportWriter = self.csv.writer(
reportFile, delimiter=',', quotechar='|', quoting=self.csv.QUOTE_MINIMAL)
reportWriter.writerow(["Nombre scans", "Nombre etudiants attendus", "Nombre absents (%Attendus) ", "Nombre presents (%Attendus)",
"Nombre faux-presents (%Scans)", "DUREE SCAN", "Nombre erreur de requetes API"])
reportWriter.writerow([str(numScan), str(numSupposedToAttend), str(self.absents) + ' (' + '%.2f' % percentAbsents + '%)', str(
self.presents) + ' (' + '%.2f' % percentPresents + '%)', str(self.wrong_presents) + ' (' + '%.2f' % percentWrongPresent + '%)', str(scanningTime), self.errorsRequestAPI])
"""
Cette methode permet de verifir la validite d'un fichier report
"""
def __checkReportCorrectness(self, pathToReport):
with open(pathToReport, self.read) as reportFile:
reportReader = self.csv.reader(reportFile)
header = next(reportReader)
# a valid report header contains exactly 7 columns
return len(header) == 7
"""
Cette methode permet de compter le nombre de lines dans le fichier csv
arguement : chemin vers le fichier dont on souhaite connaitre le nombre de lignes
"""
def __row_count(self, path):
# use a context manager so the file handle is always closed
with open(path) as file:
return len(list(self.csv.reader(file)))
"""
Cette methode permet de faire log des erreurs dans un fichier afin de faire du debug plus facilement
"""
def create_DSI_errlog(self, errors_list, initial_path):
Logfilename = '/'.join(initial_path.split('/')[:-1])+"/ERREURS_"+initial_path.split(
'/')[-1].replace('.csv', '').replace(' ', '_')+".log"
try:
with open(Logfilename, 'w') as logfile:
for error in errors_list:
logfile.write("--> "+error+"\n")
return True
except IOError:
print("Can't write log file on USB Key (no write permission)")
return False
"""
Cette methode permet de verifier que le fichier de la DSI fournit par l'admin respecte la structure precisee dans le files/readme.md
NOM,PRENOM,ETUD_NUMERO,NO_INDIVIDU,EMAIL,LOGIN,FORMATION,NIVEAU
"""
def checkDSIFileStructure(self):
with open(self.initFilePath, self.read) as DSI_File:
DSI_Reader = self.csv.reader(DSI_File)
errors = []
try:
header = next(DSI_Reader)
num_col = len(header)
except StopIteration:
print("File is empty")
errors.append("Le fichier est vide")
self.create_DSI_errlog(errors, self.initFilePath)
return False
StandardHeader = ["NOM", "PRENOM", "ETUD_NUMERO",
"NO_INDIVIDU", "EMAIL", "LOGIN", "FORMATION", "NIVEAU"]
# Check header
if(num_col != 8):
print("Error in header length")
errors.append(
"Le nombre de colonnes de l'entete (actuellement de "+str(num_col)+") doit etre egal a 8")
else:
for i in range(8):
if header[i] != StandardHeader[i]:
errors.append("La cellule de l'entete en place "+str(
i+1)+" devrait etre nommee "+StandardHeader[i]+" et non "+str(header[i]))
print("Error in header titles")
line_index = 1
total_logins = [] # used to check login uniqueness in file
# Check first line
try:
line = next(DSI_Reader)
line_index += 1
except StopIteration:
print("Header is there, but nothing after")
errors.append("Aucune ligne apres l'entete")
self.create_DSI_errlog(errors, self.initFilePath)
return False
# Check other lines
while True:
num_col = len(line)
err_prefix = "Ligne "+str(line_index)+" : "
if(num_col == 8):
for i in [0, 1]:
if type(line[i]) != str or len(line[i]) == 0:
print(err_prefix + "cell "+str(i) +
" is not string or is empty")
errors.append(
err_prefix+"La cellule "+StandardHeader[i]+" doit etre une chaine de caracteres non vide")
if type(line[5]) != str or len(line[5]) == 0:
print(err_prefix + "cell "+str(5) +
" is not string or is empty")
errors.append(
err_prefix+"La cellule "+StandardHeader[5]+" doit etre une chaine de caracteres non vide")
else:
if line[5] in total_logins:
print(err_prefix + "login duplicate")
errors.append(
err_prefix+"Duplication du login "+line[5])
total_logins.append(line[5])
for i in [2, 3]:
if not line[i].isdigit():
print(err_prefix + "cell " +
StandardHeader[i]+" is not a number")
errors.append(
err_prefix+"La cellule "+StandardHeader[i]+" doit etre un nombre entier")
if not self.re.match(r"^[a-z-]+\.[a-z-]+[0-9-]*@utbm\.fr$", line[4]):
print(err_prefix +
"E-mail doesnt match utbm mail address regex")
errors.append(
err_prefix+"L'adresse e-mail ne correspond pas au format des adresses UTBM")
else:
print(err_prefix + "number of columns (" +
str(num_col)+") is different from 8")
errors.append(
err_prefix+"Le nombre de cellules (actuellement de "+str(num_col)+") doit etre egal a 8")
try:
line = next(DSI_Reader)
line_index += 1
except StopIteration: # End Of File
# TODO: check logins are unique
if errors:
self.create_DSI_errlog(errors, self.initFilePath)
return False
return True
def ImportDSIFile(self, formattedDatetime):
source = self.initFilePath
destination = self.structConfig.structure["DSI_lists"] + \
formattedDatetime+".csv"
try:
self.shutil.copyfile(source, destination)
except Exception as e:
print("Error happened during importation of DSI file:", e)
return False
return True
def ParseReport(self): # returns the report's data rows (header excluded) as a list of lists
with open(self.initFilePath, self.read) as Report_File:
Report_Reader = self.csv.reader(Report_File)
next(Report_Reader)
try:
res = []
for item in Report_Reader:
res.append(item)
return res
except Exception as e:
print("Error while reading report.csv : ")
print(e)
return False
# prefer absolute path over relative ones for parentDir
def listDirectory(parentDir, list_folders, list_files, only_csv):
if only_csv and not list_files:
print("Incoherent parameters")
return
res = []
for r, d, f in os.walk(parentDir):
if list_folders:
for directory in d:
res.append(directory)
if list_files:
for file in f:
if not only_csv or file.endswith(".csv"):
res.append(file)
break # stop after the top level (no recursion into subdirectories)
return res
if __name__ == '__main__':
try:
pass
except Exception as e:
print(e)
| 48.969231 | 230 | 0.489517 | 4,330 | 47,745 | 5.330023 | 0.147575 | 0.019715 | 0.015209 | 0.009576 | 0.503098 | 0.467308 | 0.422635 | 0.380259 | 0.361454 | 0.348282 | 0 | 0.006458 | 0.422683 | 47,745 | 974 | 231 | 49.019507 | 0.83083 | 0.056278 | 0 | 0.475771 | 0 | 0 | 0.098536 | 0.000827 | 0.002937 | 0 | 0 | 0.001027 | 0 | 1 | 0.030837 | false | 0.001468 | 0.0279 | 0.001468 | 0.13069 | 0.069016 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44585e2e58dd7c6c6dcab8b55ab1edd7e5fa7f24 | 5,835 | py | Python | soil/settings.py | major-hub/soil_app | ddd250161ad496afd4c8484f79500ff2657b51df | [
"MIT"
] | null | null | null | soil/settings.py | major-hub/soil_app | ddd250161ad496afd4c8484f79500ff2657b51df | [
"MIT"
] | null | null | null | soil/settings.py | major-hub/soil_app | ddd250161ad496afd4c8484f79500ff2657b51df | [
"MIT"
] | null | null | null | import os
import sys
from pathlib import Path
from datetime import timedelta
from django.conf import global_settings
from django.utils.translation import gettext_lazy as _
import django.conf.locale
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
sys.path.append(os.path.join(BASE_DIR, 'apps'))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'django-insecure-lhvdiilc*igrrcl-x=4kd=-3qd2$z=z5xzkse^m1xlph#0#3m$'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = os.environ.get('DEBUG', 'True') == 'True'  # env vars are strings, so compare explicitly
ALLOWED_HOSTS = ['*']
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# my apps
'main',
'user',
# third party apps
'rest_framework',
'rest_framework_simplejwt',
'modeltranslation',
'easy_thumbnails',
'django_filters',
'corsheaders',
]
MIDDLEWARE = [
'corsheaders.middleware.CorsMiddleware',
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.locale.LocaleMiddleware',
]
ROOT_URLCONF = 'soil.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'soil.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.2/ref/settings/#databases
# DATABASES = {
# 'default': {
# 'ENGINE': 'django.db.backends.postgresql_psycopg2',
# 'NAME': os.environ.get('DATABASE_NAME', 'soil'),
# 'USER': os.environ.get('DATABASE_USERNAME', 'soil_user'),
# 'PASSWORD': os.environ.get('DATABASE_PASSWORD', 'soil_password'),
# 'PORT': 5432
# }
# }
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': 'db.sqlite3',
}
}
# Password validation
# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.2/topics/i18n/
LANGUAGE_CODE = 'en'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
LANGUAGES = (
('en', _('English')),
('ru', _('Russian')),
('uz', _('Uzbek')),
)
PARLER_LANGUAGES = {
None: (
{'code': 'en', },
{'code': 'ru', },
{'code': 'uz', },
),
'default': {
'fallback': 'en', # defaults to PARLER_DEFAULT_LANGUAGE_CODE
'hide_untranslated': False,
# the default; let .active_translations() return fallbacks too.
}
}
EXTRA_LANG_INFO = {
'uz': {
'bidi': False, # right-to-left
'code': 'uz',
'name': 'Uzbek', # name in English
'name_local': 'Uzbek', # locale name
},
}
django.conf.locale.LANG_INFO.update(EXTRA_LANG_INFO)
global_settings.LANGUAGES = global_settings.LANGUAGES + [('uz', 'Uzbek'), ]
LOCALE_PATHS = (
os.path.join(BASE_DIR, 'locale'),
)
AUTH_USER_MODEL = 'user.User'
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.2/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
# Default primary key field type
# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field
CORS_ALLOW_ALL_ORIGINS = True
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework_simplejwt.authentication.JWTAuthentication',
),
'DEFAULT_FILTER_BACKENDS': ['django_filters.rest_framework.DjangoFilterBackend',
'rest_framework.filters.OrderingFilter',
'rest_framework.filters.SearchFilter', ],
'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.LimitOffsetPagination',
'PAGE_SIZE': 100,
}
SIMPLE_JWT = {
'ACCESS_TOKEN_LIFETIME': timedelta(minutes=10),
'REFRESH_TOKEN_LIFETIME': timedelta(days=1),
'ALGORITHM': 'HS512',
'AUTH_HEADER_TYPES': ('Bearer',),
'AUTH_HEADER_NAME': 'HTTP_AUTHORIZATION',
'USER_ID_FIELD': 'id',
'USER_ID_CLAIM': 'user_id',
'USER_AUTHENTICATION_RULE': 'rest_framework_simplejwt.authentication.default_user_authentication_rule',
'AUTH_TOKEN_CLASSES': ('rest_framework_simplejwt.tokens.AccessToken',),
'TOKEN_TYPE_CLAIM': 'token_type',
}
# Heroku settings
# import django_heroku
# django_heroku.settings(locals())
| 27.91866 | 107 | 0.671637 | 621 | 5,835 | 6.115942 | 0.397746 | 0.051343 | 0.031332 | 0.039494 | 0.138231 | 0.129279 | 0.057135 | 0.0495 | 0.031596 | 0 | 0 | 0.008632 | 0.185947 | 5,835 | 208 | 108 | 28.052885 | 0.790947 | 0.229306 | 0 | 0.028571 | 0 | 0.007143 | 0.493271 | 0.370345 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.035714 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
445ab1c1e4d78b8048df97b2a01122c5b251eb06 | 4,345 | py | Python | Practica4/QuickSort/grafica.py | JosueHernandezR/An-lisis-de-Algoritmos | 9953f2d3fee6b4cfe842fdbbea83b46b62fa123f | [
"MIT"
] | 1 | 2021-09-30T20:05:41.000Z | 2021-09-30T20:05:41.000Z | Practica4/QuickSort/grafica.py | JosueHernandezR/An-lisis-de-Algoritmos | 9953f2d3fee6b4cfe842fdbbea83b46b62fa123f | [
"MIT"
] | null | null | null | Practica4/QuickSort/grafica.py | JosueHernandezR/An-lisis-de-Algoritmos | 9953f2d3fee6b4cfe842fdbbea83b46b62fa123f | [
"MIT"
] | null | null | null | # Algorithm Analysis 3CV2
# Alan Romero Lucero
# Josué David Hernández Ramírez
# Practical 4: Divide and Conquer
import matplotlib.pyplot as plt
import numpy as np
import gb
"""
Variables globales:
proposed2: Función propuesta para el algoritmo Quicktime. Dependiendo
si el usuario elige ordenar el peor de los casos, esta
variable tomará el valor de "g (n) = 3/2 n ^ 2".
En otro caso "g (n) = n log (n)".
proposed1: Función propuesta para el algoritmo de Partición. Dependiendo
si el usuario elige ordenar el peor de los casos, esta
variable tomará el valor de "g (n) = 3/2 n".
En otro caso "g (n) = n".
function2: esta función para Quicksort se mostrará solo si el usuario
elige ordenar el peor de los casos y tomará el valor
de: "T (n) = n ^ 2".
function1: esta función para Partición se mostrará solo si el usuario
elige ordenar el peor de los casos y tomará el valor
de: "T (n) = n".
g2: esta lista almacenará los valores de la función propuesta para Quicksort.
g1: Esta lista almacenará los valores de la función propuesta para Partición.
"""
proposed2 = ""
proposed1 = ""
function2 = ""
function1 = ""
g2 = [ ]
g1 = [ ]
"""
Función nlogn referencias a parámetros de registro (n ^ n) o puntos de función
para trazar. Esta es la función propuesta en el gráfico, donde T (n) es el
tiempo computacional de nuestro algoritmo y g (n) la función propuesta
tal que T (n) en ϴ (g (n)).
"""
def nlogn ( ):
global g2, aux
f = open ( "n log ( n ).txt", "r" )
aux = f.readlines ( )
g2 = list ( map ( lambda x: float ( x ) * 5/4, aux [ : len ( gb.n ) + 1 ] ) )
f.close ( )
"""
Las etiquetas de función están controladas implícitamente por menu.py, dependiendo del valor de
gb.flag (verdadero o falso) asignará valor a las variables globales de cadena y
ayudarán a distinguir el tiempo propuesto y el algoritmo computacional
en el gráfico
"""
def labels ( ):
global proposed2, proposed1, function2, function1, g1
g1 = list ( map ( lambda x: 3/2 * x [ 1 ], gb._parameters ) )
nlogn ( )
if ( gb.flag ):
# Worst case labels assignation.
proposed2 = "g( n ) = ( 3/2 ) n^2"
proposed1 = "g( n ) = ( 3/2 ) n"
function2 = "T( n ) = n^2"
function1 = "T( n ) = n"
else:
g1 = list ( np.arange ( 6, max ( g1 ) + 6, max ( g1 ) / len ( gb.n ) ) )
# Any other case labels assignation.
proposed2 = "g( n ) = n log ( n )"
proposed1 = "g( n ) = n"
function2 = None
function1 = None
def graph ( ):
labels ( )
# Window title.
plt.figure ( "Algoritmo Quicksort", figsize = ( 14, 7 ) )
# Right graph: Temporal complexity of Partition.
plt.subplot ( 1, 2, 2 )
# Figure title.
plt.title ( "Partition ( " + str ( gb._parameters [ -1 ] [ 0 ] + 1 ) + ", " + str ( gb._parameters [ -1 ] [ 1 ] ) + " )" )
# Parameter Time ( t ).
_t = list ( map ( lambda x: x [ 1 ], gb._parameters ) )
# Parameter Size ( n ).
_s = list ( map ( lambda x: x [ 0 ] + 1, gb._parameters ) )
# Axes names.
plt.xlabel ( "Tamaño ( n )", color = ( 0.3, 0.4, 0.6 ), size = "large" )
plt.ylabel ( "Tiempo Partition ( t )", color = ( 0.3, 0.4, 0.6 ), size = "large" )
# Plot.
plt.plot ( _s, _t, "#778899", linewidth = 3, label = function1 )
plt.plot ( _s, g1, "#800000", linestyle = "--", label = proposed1 )
plt.legend ( loc = "upper left" )
# Left graph: Temporal complexity of Quicksort.
plt.subplot ( 1, 2, 1 )
# Figure title.
plt.title ( "Quicksort ( " + str ( gb.parameters [ -1 ] [ 0 ] ) + ", " + str ( gb.parameters [ -1 ] [ 1 ] ) + " )" )
# Parameter Time ( t ).
t = list ( map ( lambda x: x [ 1 ], gb.parameters ) )
# Parameter Size ( n ).
s = list ( map ( lambda x: x [ 0 ], gb.parameters ) )
# Axes names.
plt.xlabel ( "Size ( n )", color = ( 0.3, 0.4, 0.6 ), size = "large" )
plt.ylabel ( "Quicksort Time ( t )", color = ( 0.3, 0.4, 0.6 ), size = "large" )
# Plot.
plt.plot ( s, t, "#778899", linewidth = 3, label = function2 )
plt.plot ( s, g2, "#800000", linestyle = "--", label = proposed2 )
plt.legend ( loc = "upper left" )
plt.show ( ) | 38.451327 | 126 | 0.571231 | 607 | 4,345 | 4.072488 | 0.278418 | 0.008091 | 0.031553 | 0.033981 | 0.450647 | 0.383495 | 0.309871 | 0.309871 | 0.309871 | 0.309871 | 0 | 0.043549 | 0.297123 | 4,345 | 113 | 127 | 38.451327 | 0.765881 | 0.098964 | 0 | 0.038462 | 0 | 0 | 0.126675 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057692 | false | 0 | 0.057692 | 0 | 0.115385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
445b95e70338446bf09b6f119beadcc108bb73c6 | 4,260 | py | Python | simArch/run_nets.py | mfkiwl/uSystolic-Sim | ed03101108d299557ff215239caa1b51783882f6 | [
"MIT"
] | 18 | 2021-04-08T10:35:31.000Z | 2022-03-03T14:29:06.000Z | simArch/run_nets.py | mfkiwl/uSystolic-Sim | ed03101108d299557ff215239caa1b51783882f6 | [
"MIT"
] | 1 | 2021-06-29T10:55:35.000Z | 2021-10-08T21:04:54.000Z | simArch/run_nets.py | mfkiwl/uSystolic-Sim | ed03101108d299557ff215239caa1b51783882f6 | [
"MIT"
] | 4 | 2021-04-08T10:35:32.000Z | 2021-12-11T13:45:24.000Z | import simArch.gemm_trace_wrapper as gemm_trace
def run_net(
ifmap_sram_size=1, # in K-Word
filter_sram_size=1, # in K-Word
ofmap_sram_size=1, # in K-Word
array_h=32,
array_w=32,
data_flow='ws',
word_size_bytes=1,
wgt_bw_opt=False,
topology_file=None,
net_name=None,
offset_list = [0, 10000000, 20000000] # in word
):
ifmap_sram_size *= 1024 # in word
filter_sram_size *= 1024 # in word
ofmap_sram_size *= 1024 # in word
param_file = open(topology_file, 'r')
fname = net_name + "_mac_util.csv"
cycl = open(fname, 'w')
cycl.write("Layer,\tType,\tCycles,\t% Utilization,\n")
first = True
for row in param_file:
# per layer trace gen
if first:
# skip the header row
first = False
continue
elems = row.strip().split(',')
elems = prune(elems)
# skip row if unrecognized
if len(elems) != 11:
continue
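# A hypothetical topology row with the 11 expected fields (name, type, then the 9 integers parsed below):
#   Conv1, GEMM, 224, 224, 3, 3, 3, 64, 1, 1, 1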
name = elems[0].strip()
layer_type = elems[1].strip()
if layer_type == "GEMM":
ifmap_h = int(elems[2].strip())
ifmap_w = int(elems[3].strip())
filt_h = int(elems[4].strip())
filt_w = int(elems[5].strip())
num_channels = int(elems[6].strip())
num_filters = int(elems[7].strip())
stride_h = int(elems[8].strip())
stride_w = int(elems[9].strip())
mac_cycles = int(elems[10].strip())
ifmap_base = offset_list[0] # in word
filter_base = offset_list[1] # in word
ofmap_base = offset_list[2] # in word
print("")
print("Commencing trace generation for " + name + " with a MAC cycle count of " + str(mac_cycles))
# all traces are generated at word granularity; the word size only influences the bandwidth
util, clk = gemm_trace.gen_all_traces(array_h = array_h,
array_w = array_w,
ifmap_h = ifmap_h,
ifmap_w = ifmap_w,
filt_h = filt_h,
filt_w = filt_w,
num_channels = num_channels,
num_filt = num_filters,
stride_h = stride_h,
stride_w = stride_w,
data_flow = data_flow,
word_size_bytes = word_size_bytes,
filter_sram_size = filter_sram_size, # in word
ifmap_sram_size = ifmap_sram_size, # in word
ofmap_sram_size = ofmap_sram_size, # in word
filt_base = filter_base, # in word granularity
ifmap_base = ifmap_base, # in word granularity
ofmap_base = ofmap_base, # in word granularity
mac_cycles = mac_cycles,
wgt_bw_opt = wgt_bw_opt,
sram_read_trace_file= net_name + "_" + name + "_sram_read.csv",
sram_write_trace_file= net_name + "_" + name + "_sram_write.csv",
dram_filter_trace_file=net_name + "_" + name + "_dram_filter_read.csv",
dram_ifmap_trace_file= net_name + "_" + name + "_dram_ifmap_read.csv",
dram_ofmap_trace_file= net_name + "_" + name + "_dram_ofmap_write.csv"
)
print("All done for " + name)
util_str = str(util)
line = name + ",\t" + layer_type + ",\t" + clk +",\t" + util_str + ",\n"
cycl.write(line)
cycl.close()
param_file.close()
def prune(input_list):
l = []
for e in input_list:
e = e.strip() # remove the leading and trailing characters, here space
if e != '' and e != ' ':
l.append(e)
return l | 38.378378 | 110 | 0.480047 | 483 | 4,260 | 3.929607 | 0.26294 | 0.041096 | 0.031612 | 0.04215 | 0.145416 | 0.088514 | 0 | 0 | 0 | 0 | 0 | 0.022204 | 0.429108 | 4,260 | 111 | 111 | 38.378378 | 0.758224 | 0.09061 | 0 | 0.023256 | 0 | 0 | 0.063068 | 0.017389 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023256 | false | 0 | 0.011628 | 0 | 0.046512 | 0.034884 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
445d5c9f90f1e235cdc97e3d99dbadddc792e214 | 550 | py | Python | homepage/views.py | c17r/tsace | 59c6e0388429943dc3de879745119f9c94cd9ccc | [
"MIT"
] | null | null | null | homepage/views.py | c17r/tsace | 59c6e0388429943dc3de879745119f9c94cd9ccc | [
"MIT"
] | null | null | null | homepage/views.py | c17r/tsace | 59c6e0388429943dc3de879745119f9c94cd9ccc | [
"MIT"
] | null | null | null | from datetime import datetime, timedelta
from django.shortcuts import render
from django.template import RequestContext
import api
def index(request):
uid = request.COOKIES.get("uid")
data = None
if not uid:
uid, _ = api.create_new_user()
else:
data = api.get_saved_cities(uid)
response = render(
request,
"homepage/index.html",
{"saved_cities": data}
)
expires = datetime.utcnow() + timedelta(days=180)
response.set_cookie("uid", uid, expires=expires)
return response
| 22 | 53 | 0.658182 | 66 | 550 | 5.378788 | 0.545455 | 0.056338 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007194 | 0.241818 | 550 | 24 | 54 | 22.916667 | 0.844125 | 0 | 0 | 0 | 0 | 0 | 0.067273 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.210526 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
445ef4455f305009ac0e1b4aeb8c80e0b3115162 | 20,825 | py | Python | wolfgang_robot/wolfgang_pybullet_sim/src/wolfgang_pybullet_sim/simulation.py | MosHumanoid/bitbots_thmos_meta | f45ccc362dc689b69027be5b0d000d2a08580de4 | [
"MIT"
] | null | null | null | wolfgang_robot/wolfgang_pybullet_sim/src/wolfgang_pybullet_sim/simulation.py | MosHumanoid/bitbots_thmos_meta | f45ccc362dc689b69027be5b0d000d2a08580de4 | [
"MIT"
] | null | null | null | wolfgang_robot/wolfgang_pybullet_sim/src/wolfgang_pybullet_sim/simulation.py | MosHumanoid/bitbots_thmos_meta | f45ccc362dc689b69027be5b0d000d2a08580de4 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import math
import sys
import os
import time
import pybullet as p
from time import sleep
import time
import rospy
import tf
from scipy import signal
import pybullet_data
import rospkg
from transforms3d.quaternions import quat2mat
from wolfgang_pybullet_sim.terrain import Terrain
import numpy as np
class Simulation:
def __init__(self, gui, urdf_path=None, foot_link_names=[], terrain=False, field=True, joints_ft=False,
robot="wolfgang"):
self.gui = gui
self.paused = False
self.gravity = True
self.terrain_on = terrain
self.field_on = field
self.urdf_path = urdf_path
self.foot_link_names = foot_link_names
self.joints_ft = joints_ft
self.robot = robot
# config values
self.start_position = [0, 0, 0.43]
self.start_orientation = p.getQuaternionFromEuler((0, 0.25, 0))
if self.robot == "wolfgang":
self.initial_joints_positions = {"LAnklePitch": -30, "LAnkleRoll": 0, "LHipPitch": 30, "LHipRoll": 0,
"LHipYaw": 0, "LKnee": 60, "RAnklePitch": 30, "RAnkleRoll": 0,
"RHipPitch": -30, "RHipRoll": 0, "RHipYaw": 0, "RKnee": -60,
"LShoulderPitch": 0, "LShoulderRoll": 0, "LElbow": 45, "RShoulderPitch": 0,
"RShoulderRoll": 0, "RElbow": -45, "HeadPan": 0, "HeadTilt": 0}
elif self.robot in ["op2", "robotis_op2"]:
self.initial_joints_positions = {"l_ankle_pitch": 0, "l_ankle_roll": 0, "l_hip_pitch": 0, "l_hip_roll": 0,
"l_hip_yaw": 0, "l_knee": 0, "r_ankle_pitch": 0, "r_ankle_roll": 0,
"r_hip_pitch": 0, "r_hip_roll": 0, "r_hip_yaw": 0, "r_knee": 0,
"l_sho_pitch": 0, "l_sho_roll": 0, "LElbow": 0, "r_sho_pitch": 0,
"r_sho_roll": 0, "RElbow": 0, "head_pan": 0, "head_tilt": 0}
self.foot_link_names = ["l_foot", "r_foot"]
elif self.robot == "sigmaban":
self.start_orientation = p.getQuaternionFromEuler((0, 0.0, 0))
self.initial_joints_positions = {"left_ankle_pitch": 0, "left_ankle_roll": 0, "left_hip_pitch": 0,
"left_hip_roll": 0, "left_hip_yaw": 0, "left_knee": 0,
"right_ankle_pitch": 0, "right_ankle_roll": 0, "right_hip_pitch": 0,
"right_hip_roll": 0, "right_hip_yaw": 0, "right_knee": 0,
"left_shoulder_pitch": 0, "left_shoulder_roll": 0, "LElbow": 0,
"right_shoulder_pitch": 0, "right_shoulder_roll": 0, "RElbow": 0,
"head_yaw": 0, "head_pitch": 0}
else:
print(f"robot {self.robot} not known")
quit(0)
# Instantiating Bullet
if self.gui:
self.client_id = p.connect(p.GUI)
else:
self.client_id = p.connect(p.DIRECT)
if self.gravity:
p.setGravity(0, 0, -9.81)
self.time = 0
# disable debug interface, only show robot
p.configureDebugVisualizer(p.COV_ENABLE_GUI, False)
# Engine parameters
# time step should be at 240Hz (due to pyBullet documentation)
self.timestep = 1 / 240
# standard parameters seem to be best. leave them like they are
# p.setPhysicsEngineParameter(fixedTimeStep=self.timestep, numSubSteps=1)
# no real time, as we will publish own clock, but we have option to switch to real_time by waiting
self.real_time = False
p.setRealTimeSimulation(0)
self.last_wall_time = time.time()
self.load_models()
def load_models(self):
print("load models")
# Load floor
self.terrain_index = None
self.plane_index = None
if self.terrain_on:
self.max_terrain_height = 0.01
self.terrain = Terrain(self.max_terrain_height)
self.terrain_index = self.terrain.id
else:
p.setAdditionalSearchPath(pybullet_data.getDataPath()) # needed for plane.urdf
self.plane_index = p.loadURDF('plane.urdf')
self.field_index = None
if self.field_on:
# Load field
rospack = rospkg.RosPack()
path = os.path.join(rospack.get_path('wolfgang_pybullet_sim'), 'models')
p.setAdditionalSearchPath(path) # needed to find field model
self.field_index = p.loadURDF('field/field.urdf')
# Load robot, deactivate all self collisions
if self.robot in ["wolfgang", "sigmaban"]:
# we have a better inertia estimation from onshape in our model
flags = p.URDF_USE_SELF_COLLISION + p.URDF_USE_INERTIA_FROM_FILE
else:
# most other URDFs from the internet have issues with their inertia values
flags = p.URDF_USE_SELF_COLLISION
if self.urdf_path is None:
# use wolfgang as standard
rospack = rospkg.RosPack()
if self.robot == "op2":
self.urdf_path = rospack.get_path("robotis_op2_description") + "/urdf/robot.urdf"
elif self.robot == "sigmaban":
self.urdf_path = rospack.get_path("sigmaban_description") + "/urdf/robot.urdf"
else:
self.urdf_path = rospack.get_path("wolfgang_description") + "/urdf/robot.urdf"
self.robot_index = p.loadURDF(self.urdf_path, self.start_position, self.start_orientation, flags=flags)
# Retrieving joints and foot pressure sensors
self.joints = {}
self.pressure_sensors = {}
self.links = {}
# Collecting the available joints
link_index = 0
for i in range(p.getNumJoints(self.robot_index)):
joint_info = p.getJointInfo(self.robot_index, i)
name = joint_info[1].decode('utf-8')
type = joint_info[2]
# we can get the links by seeing where the joint is attached
self.links[joint_info[12].decode('utf-8')] = link_index
link_index += 1
# see if it is a joint
if type == 0:
if self.joints_ft:
p.enableJointForceTorqueSensor(self.robot_index, i)
# remember joint if its revolute (0) and not fixed (4)
self.joints[name] = Joint(i, self.robot_index, ft=self.joints_ft)
elif name in ["LLB", "LLF", "LRF", "LRB", "RLB", "RLF", "RRF", "RRB"]:
p.enableJointForceTorqueSensor(self.robot_index, i)
self.pressure_sensors[name] = PressureSensor(name, i, self.robot_index, 10, 5)
for linkA in self.links.keys():
for linkB in self.links.keys():
p.setCollisionFilterPair(self.robot_index, self.robot_index, self.links[linkA],
self.links[linkB], 0)
hip_group = []
foot_group = []
if self.robot in ("wolfgang", "op2", "robotis_op2"):
# set collisions for hip and ankle towards each other, otherwise stand up is not realistic
hip_group = ["torso", "r_hip_1", "r_hip_2", "l_hip_1", "l_hip_2"]
foot_group = ["r_ankle", "r_foot", "l_ankle", "l_foot"]
elif self.robot in ("sigmaban",):
hip_group = ["mx106_block_mir_1", "u_block_1", "right_knee_1", "left_knee_1", "mx106_block_2", "u_block_2"]
foot_group = ["tibia_1", "mx106_block_1", "right_foot_cleat_back_left", "mx106_block_mir_2",
"left_foot_cleat_front_right", "tibia_2"]
for hip_link_index in hip_group:
for foot_link_index in foot_group:
p.setCollisionFilterPair(self.robot_index, self.robot_index, self.links[hip_link_index],
self.links[foot_link_index], 1)
# reset robot to initial position
self.reset()
def apply_force(self, link_id, force, position):
"""
Applies an external force to a position on a link.
:param link_id: link index or -1 for base link (int)
:param force: direction and amount of applied force (vec3)
:param position: where on the link the force should be applied (vec3)
"""
p.applyExternalForce(self.robot_index, link_id, force, position, flags=p.WORLD_FRAME)
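# Hypothetical usage sketch (the `sim` name is invented): push the base link sideways with 10 N at the link origin:
#   sim.apply_force(-1, [0, 10, 0], [0, 0, 0])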
def set_foot_dynamics(self, contact_damping, contact_stiffness, joint_damping, lateral_friction=1,
spinning_friction=1, rolling_friction=1, restitution=0):
# set dynamic values for all links and ground
for link_name in self.links.keys():
if link_name in ["llb", "llf", "lrf", "lrb", "rlb", "rlf", "rrf",
"rrb"] or link_name in self.foot_link_names:
# print(p.getLinkState(self.robot_index, self.links[link_name]))
p.changeDynamics(self.robot_index, self.links[link_name],
lateralFriction=lateral_friction,
spinningFriction=spinning_friction,
rollingFriction=rolling_friction,
contactDamping=contact_damping,
contactStiffness=contact_stiffness,
jointDamping=joint_damping,
restitution=restitution)
if self.plane_index:
p.changeDynamics(self.plane_index, -1, lateralFriction=lateral_friction, spinningFriction=spinning_friction,
rollingFriction=rolling_friction, restitution=restitution, contactDamping=contact_damping,
contactStiffness=contact_stiffness, jointDamping=joint_damping)
if self.field_index:
p.changeDynamics(self.field_index, -1, lateralFriction=lateral_friction, spinningFriction=spinning_friction,
rollingFriction=rolling_friction, restitution=restitution, contactDamping=contact_damping,
contactStiffness=contact_stiffness, jointDamping=joint_damping)
if self.terrain_index:
p.changeDynamics(self.terrain_index, -1, lateralFriction=lateral_friction,
spinningFriction=spinning_friction,
rollingFriction=rolling_friction, restitution=restitution, contactDamping=contact_damping,
contactStiffness=contact_stiffness, jointDamping=joint_damping)
def set_filter_params(self, cutoff, order):
for i in range(p.getNumJoints(self.robot_index)):
joint_info = p.getJointInfo(self.robot_index, i)
name = joint_info[1].decode('utf-8')
if name in ["LLB", "LLF", "LRF", "LRB", "RLB", "RLF", "RRF", "RRB"]:
self.pressure_sensors[name] = PressureSensor(name, i, self.robot_index, cutoff, order)
def reset(self):
# set joints to initial position
for name in self.joints:
joint = self.joints[name]
try:
pos_in_rad = math.radians(self.initial_joints_positions[name])
except KeyError:
pos_in_rad = 0
joint.reset_position(pos_in_rad, 0)
joint.set_position(pos_in_rad)
# reset body pose and velocity
p.resetBasePositionAndOrientation(self.robot_index, self.start_position, self.start_orientation)
p.resetBaseVelocity(self.robot_index, [0, 0, 0], [0, 0, 0])
def step(self):
# get keyboard events if gui is active
if self.gui:
# reset if R-key was pressed
rKey = ord('r')
# gravity
nKey = ord('n')
# single step
xKey = ord('x')
# real time
tKey = ord('t')
# randomize terrain
fKey = ord('f')
# pause
spaceKey = p.B3G_SPACE
# rotate robot for testing
jKey = ord('j')
kKey = ord('k')
lKey = ord('l')
# move robot for testing
qKey = ord('q')
aKey = ord('a')
sKey = ord('s')
dKey = ord('d')
keys = p.getKeyboardEvents()
if rKey in keys and keys[rKey] & p.KEY_WAS_TRIGGERED:
self.reset()
if spaceKey in keys and keys[spaceKey] & p.KEY_WAS_TRIGGERED:
self.paused = not self.paused
if nKey in keys and keys[nKey] & p.KEY_WAS_TRIGGERED:
self.set_gravity(not self.gravity)
if tKey in keys and keys[tKey] & p.KEY_WAS_TRIGGERED:
self.real_time = not self.real_time
if fKey in keys and keys[fKey] & p.KEY_WAS_TRIGGERED:
# generate new terrain
self.terrain.randomize(0.01)
if jKey in keys and keys[jKey] & p.KEY_WAS_TRIGGERED:
pos, rpy = self.get_robot_pose_rpy()
self.reset_robot_pose_rpy(pos, (rpy[0] + math.radians(30), rpy[1], rpy[2]))
if kKey in keys and keys[kKey] & p.KEY_WAS_TRIGGERED:
pos, rpy = self.get_robot_pose_rpy()
self.reset_robot_pose_rpy(pos, (rpy[0], rpy[1] + math.radians(30), rpy[2]))
if lKey in keys and keys[lKey] & p.KEY_WAS_TRIGGERED:
pos, rpy = self.get_robot_pose_rpy()
self.reset_robot_pose_rpy(pos, (rpy[0], rpy[1], rpy[2] + math.radians(30)))
if qKey in keys and keys[qKey] & p.KEY_WAS_TRIGGERED:
pos, quat = self.get_robot_pose()
self.reset_robot_pose((pos[0] + 0.1, pos[1], pos[2]), quat)
if aKey in keys and keys[aKey] & p.KEY_WAS_TRIGGERED:
pos, quat = self.get_robot_pose()
self.reset_robot_pose((pos[0], pos[1] + 0.1, pos[2]), quat)
if sKey in keys and keys[sKey] & p.KEY_WAS_TRIGGERED:
pos, quat = self.get_robot_pose()
self.reset_robot_pose((pos[0] - 0.1, pos[1], pos[2]), quat)
if dKey in keys and keys[dKey] & p.KEY_WAS_TRIGGERED:
pos, quat = self.get_robot_pose()
self.reset_robot_pose((pos[0], pos[1] - 0.1, pos[2]), quat)
# block until pause is over
while self.paused:
# sleep a bit to not block a whole CPU core while waiting
sleep(0.1)
keys = p.getKeyboardEvents()
if spaceKey in keys and keys[spaceKey] & p.KEY_WAS_TRIGGERED:
self.paused = not self.paused
if xKey in keys and keys[xKey] & p.KEY_IS_DOWN:
break
if self.real_time:
# wait till one timestep has actually passed in real time
time.sleep(max(0, self.timestep - (time.time() - self.last_wall_time)))
self.last_wall_time = time.time()
self.time += self.timestep
p.stepSimulation()
for name, ps in self.pressure_sensors.items():
ps.filter_step()
def set_gravity(self, active):
if active:
p.setGravity(0, 0, -9.81)
else:
p.setGravity(0, 0, 0)
self.gravity = active
def set_robot_pose(self, position, orientation):
p.resetBasePositionAndOrientation(self.robot_index, position, orientation)
def reset_simulation(self):
p.resetSimulation()
self.load_models()
def reset_robot_pose(self, position, orientation):
# reset body pose and velocity
p.resetBasePositionAndOrientation(self.robot_index, position, orientation)
p.resetBaseVelocity(self.robot_index, [0, 0, 0], [0, 0, 0])
# we need to reset all joints to, otherwise they still have velocity
for name in self.joints:
joint = self.joints[name]
p.resetJointState(joint.body_index, joint.joint_index, 0, 0)
def reset_robot_pose_rpy(self, position, rpy):
quat = tf.transformations.quaternion_from_euler(*rpy)
self.reset_robot_pose(position, quat)
def get_robot_pose(self):
(x, y, z), (qx, qy, qz, qw) = p.getBasePositionAndOrientation(self.robot_index)
return (x, y, z), (qx, qy, qz, qw)
def get_robot_pose_rpy(self):
(x, y, z), (qx, qy, qz, qw) = p.getBasePositionAndOrientation(self.robot_index)
(roll, pitch, yaw) = p.getEulerFromQuaternion((qx, qy, qz, qw))
return (x, y, z), (roll, pitch, yaw)
def get_link_pose(self, link_name):
return p.getLinkState(self.robot_index, self.links[link_name])[0]
def get_robot_velocity(self):
# these are in world coordinate frame
(vx, vy, vz), (vr, vp, vyaw) = p.getBaseVelocity(self.robot_index)
# rotate to robot frame
_, (x, y, z, w) = self.get_robot_pose()
# rotation matrix
M = quat2mat((w, x, y, z))
# angular velocities as vector (renamed to vyaw so the linear vy is not shadowed)
v = np.array([vr, vp, vyaw]).T
angular_vel_robot_frame = np.matmul(M.T, v)
return (vx, vy, vz), angular_vel_robot_frame
def get_joint_names(self):
names = []
for name in self.joints:
joint = self.joints[name]
names.append(joint.name)
return names
def get_joint_position(self, name):
return self.joints[name].get_position()
class Joint:
def __init__(self, joint_index, body_index, ft=False):
self.joint_index = joint_index
self.body_index = body_index
joint_info = p.getJointInfo(self.body_index, self.joint_index)
self.name = joint_info[1].decode('utf-8')
self.type = joint_info[2]
self.max_force = joint_info[10]
self.max_velocity = joint_info[11]
self.lowerLimit = joint_info[8]
self.upperLimit = joint_info[9]
self.damping = joint_info[6]
self.friction = joint_info[7]
self.ft = ft
def reset_position(self, position, velocity):
p.resetJointState(self.body_index, self.joint_index, targetValue=position, targetVelocity=velocity)
# self.disable_motor()
def disable_motor(self):
p.setJointMotorControl2(self.body_index, self.joint_index,
controlMode=p.POSITION_CONTROL, targetPosition=0, targetVelocity=0,
positionGain=0.1, velocityGain=0.1, force=0)
def set_position(self, position):
p.setJointMotorControl2(self.body_index, self.joint_index,
p.POSITION_CONTROL,
targetPosition=position, force=self.max_force - self.friction,
maxVelocity=self.max_velocity)
def get_state(self):
position, velocity, forces, applied_torque = p.getJointState(self.body_index, self.joint_index)
return position, velocity, forces, applied_torque
def get_position(self):
position, velocity, forces, applied_torque = self.get_state()
return position
def get_velocity(self):
position, velocity, forces, applied_torque = self.get_state()
return velocity
def get_applied_torque(self):
position, velocity, forces, applied_torque = self.get_state()
return applied_torque
def get_torque(self):
if self.ft:
position, velocity, forces, applied_torque = self.get_state()
# todo this guesses z is the correct axis, but we dont know
return forces[5]
else:
print("Force Torque sensor not activated!")
return None
class PressureSensor:
def __init__(self, name, joint_index, body_index, cutoff, order):
self.joint_index = joint_index
self.name = name
self.body_index = body_index
nyq = 240 * 0.5 # nyquist frequency from simulation frequency
normalized_cutoff = cutoff / nyq # cutoff freq in hz
self.filter_b, self.filter_a = signal.butter(order, normalized_cutoff, btype='low')
self.filter_state = signal.lfilter_zi(self.filter_b, 1)
self.unfiltered = 0
self.filtered = [0]
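# Descriptive note: with the 240 Hz simulation rate, nyq = 120 Hz, so the cutoff=10 used when these
# sensors are created elsewhere in this file corresponds to a normalized cutoff of 10/120 ≈ 0.083.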
def filter_step(self):
self.unfiltered = p.getJointState(self.body_index, self.joint_index)[2][2] * -1
self.filtered, self.filter_state = signal.lfilter(self.filter_b, self.filter_a, [self.unfiltered],
zi=self.filter_state)
def get_force(self):
return max(self.unfiltered, 0), max(self.filtered[0], 0) | 47.115385 | 120 | 0.585066 | 2,563 | 20,825 | 4.555989 | 0.180648 | 0.027747 | 0.031172 | 0.015586 | 0.364049 | 0.32534 | 0.276098 | 0.251691 | 0.232251 | 0.19988 | 0 | 0.018701 | 0.31443 | 20,825 | 442 | 121 | 47.115385 | 0.799188 | 0.099256 | 0 | 0.215743 | 0 | 0 | 0.070843 | 0.005198 | 0 | 0 | 0 | 0.002262 | 0 | 1 | 0.087464 | false | 0 | 0.043732 | 0.008746 | 0.177843 | 0.008746 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4460bbb142aa4c99178afe32a15838a44e96e1f2 | 11,641 | py | Python | dataloader_cifar.py | pizard/DivideMix | 9610201b9eb5dc871a77bf12017c137be291c71c | [
"MIT"
] | null | null | null | dataloader_cifar.py | pizard/DivideMix | 9610201b9eb5dc871a77bf12017c137be291c71c | [
"MIT"
] | null | null | null | dataloader_cifar.py | pizard/DivideMix | 9610201b9eb5dc871a77bf12017c137be291c71c | [
"MIT"
] | null | null | null | from torch.utils.data import Dataset, DataLoader
import torchvision.transforms as transforms
import random
import numpy as np
from PIL import Image
import json
import os
from torchnet.meter import AUCMeter
def uniform_mix_C(mixing_ratio, num_classes):
'''
returns a linear interpolation of a uniform matrix and an identity matrix
'''
return mixing_ratio * np.full((num_classes, num_classes), 1 / num_classes) + \
(1 - mixing_ratio) * np.eye(num_classes)
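# Quick sanity check (illustrative, not part of the original code): uniform_mix_C(0.5, 3) gives
# 0.5*(1/3) + 0.5 ≈ 0.667 on the diagonal and 0.5*(1/3) ≈ 0.167 elsewhere, so each row still sums to 1.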
def flip_labels_C(corruption_prob, num_classes):
'''
returns a matrix with (1 - corruption_prob) on the diagonals, and corruption_prob
concentrated in only one other entry for each row
'''
C = np.eye(num_classes) * (1 - corruption_prob)
row_indices = np.arange(num_classes)
for i in range(num_classes):
C[i][np.random.choice(row_indices[row_indices != i])] = corruption_prob
# for i in range(len(train_labels)):
# self.train_labels[i] = np.random.choice(num_classes, p=C[self.train_labels[i]])
# self.corruption_matrix = C
return C
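# Similarly (illustrative): flip_labels_C(0.4, 10) keeps 0.6 on each diagonal entry and puts the
# remaining 0.4 into one randomly chosen off-diagonal entry per row.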
def flip_labels_C_two(corruption_prob, num_classes):
'''
returns a matrix with (1 - corruption_prob) on the diagonals, and corruption_prob
concentrated in only one other entry for each row
'''
C = np.eye(num_classes) * (1 - corruption_prob)
row_indices = np.arange(num_classes)
for i in range(num_classes):
C[i][np.random.choice(row_indices[row_indices != i], 2, replace=False)] = corruption_prob / 2
return C
def unpickle(file):
import _pickle as cPickle
with open(file, 'rb') as fo:
dict = cPickle.load(fo, encoding='latin1')
return dict
class cifar_dataset(Dataset):
def __init__(self, dataset, r, noise_mode, root_dir, transform, mode, noise_file='', pred=[], probability=[], log=''):
# mode: one of 'test', 'all', 'labeled', 'unlabeled'
self.r = r # noise ratio
self.transform = transform
self.mode = mode
self.transition = {0:0,2:0,4:4,7:7,1:1,9:1,3:5,5:3,6:6,8:8} # class transition for asymmetric noise
if self.mode=='test':
if dataset=='cifar10':
test_dic = unpickle('%s/test_batch'%root_dir)
self.test_data = test_dic['data']
self.test_data = self.test_data.reshape((10000, 3, 32, 32))
self.test_data = self.test_data.transpose((0, 2, 3, 1))
self.test_label = test_dic['labels']
elif dataset=='cifar100':
test_dic = unpickle('%s/test'%root_dir)
self.test_data = test_dic['data']
self.test_data = self.test_data.reshape((10000, 3, 32, 32))
self.test_data = self.test_data.transpose((0, 2, 3, 1))
self.test_label = test_dic['fine_labels']
else:
train_data=[]
train_label=[]
if dataset=='cifar10':
for n in range(1,6):
dpath = '%s/data_batch_%d'%(root_dir,n)
data_dic = unpickle(dpath)
train_data.append(data_dic['data'])
train_label = train_label+data_dic['labels']
train_data = np.concatenate(train_data)
elif dataset=='cifar100':
train_dic = unpickle('%s/train'%root_dir)
train_data = train_dic['data']
train_label = train_dic['fine_labels']
train_data = train_data.reshape((50000, 3, 32, 32))
train_data = train_data.transpose((0, 2, 3, 1))
# generate noisy labels
if os.path.exists(noise_file):
noise_label = json.load(open(noise_file,"r"))
else: #inject noise
noise_label = []
idx = list(range(50000))
random.shuffle(idx)
# num_noise = int(self.r*50000) -> the original code was misleading here
# noise_idx = idx[:num_noise]
noise_idx = idx[:]
num_classes = 10 if dataset == 'cifar10' else 100
if noise_mode == 'sym':
C = uniform_mix_C(self.r, num_classes)
# if dataset=='cifar10':
# noiselabel = random.randint(0,9)
# elif dataset=='cifar100':
# noiselabel = random.randint(0,99)
# noise_label.append(noiselabel)
elif noise_mode == 'asym':
C = flip_labels_C(self.r, num_classes)
for i in range(50000):
if i in noise_idx:
noiselabel = np.random.choice(num_classes, p=C[train_label[i]])
noise_label.append(noiselabel)
else:
noise_label.append(train_label[i])
print("save noisy labels to %s ..."%noise_file)
json.dump(noise_label,open(noise_file,"w"))
# 'all' split (whole training set)
if self.mode == 'all':
self.train_data = train_data
self.noise_label = noise_label
else:
if self.mode == "labeled":
pred_idx = pred.nonzero()[0] # 4770
self.probability = [probability[i] for i in pred_idx] # 4770
clean = (np.array(noise_label)==np.array(train_label)) # 39981
auc_meter = AUCMeter()
auc_meter.reset()
auc_meter.add(probability,clean)
auc,_,_ = auc_meter.value()
log.write('Number of labeled samples:%d AUC:%.3f\n'%(pred.sum(),auc))
log.flush()
elif self.mode == "unlabeled":
pred_idx = (1-pred).nonzero()[0] # 45230
self.train_data = train_data[pred_idx]
self.noise_label = [noise_label[i] for i in pred_idx]
print("%s data has a size of %d"%(self.mode,len(self.noise_label)))
def __getitem__(self, index):
if self.mode=='labeled':
img, target, prob = self.train_data[index], self.noise_label[index], self.probability[index]
img = Image.fromarray(img)
img1 = self.transform(img)
img2 = self.transform(img)
return img1, img2, target, prob
elif self.mode=='unlabeled':
img = self.train_data[index]
img = Image.fromarray(img)
img1 = self.transform(img)
img2 = self.transform(img)
return img1, img2
elif self.mode=='all':
img, target = self.train_data[index], self.noise_label[index]
img = Image.fromarray(img)
img = self.transform(img)
return img, target, index
elif self.mode=='test':
img, target = self.test_data[index], self.test_label[index]
img = Image.fromarray(img)
img = self.transform(img)
return img, target
def __len__(self):
if self.mode!='test':
return len(self.train_data)
else:
return len(self.test_data)
class cifar_dataloader():
def __init__(self, dataset, r, noise_mode, batch_size, num_workers, root_dir, log, noise_file=''):
self.dataset = dataset
self.r = r
self.noise_mode = noise_mode
self.batch_size = batch_size
self.num_workers = num_workers
self.root_dir = root_dir
self.log = log
self.noise_file = noise_file
if self.dataset=='cifar10':
self.transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465),(0.2023, 0.1994, 0.2010)),
])
self.transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465),(0.2023, 0.1994, 0.2010)),
])
elif self.dataset=='cifar100':
self.transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.507, 0.487, 0.441), (0.267, 0.256, 0.276)),
])
self.transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.507, 0.487, 0.441), (0.267, 0.256, 0.276)),
])
def run(self,mode,pred=[],prob=[]):
# mode mapping: 'warmup' -> all data, 'train' -> labeled + unlabeled loaders,
# 'test' -> test set, 'eval_train' -> all data with test transforms
if mode=='warmup':
all_dataset = cifar_dataset(dataset=self.dataset, noise_mode=self.noise_mode, r=self.r, root_dir=self.root_dir, transform=self.transform_train, mode="all",noise_file=self.noise_file)
trainloader = DataLoader(
dataset=all_dataset,
batch_size=self.batch_size*2,
shuffle=True,
num_workers=self.num_workers)
return trainloader
elif mode=='train':
labeled_dataset = cifar_dataset(dataset=self.dataset, noise_mode=self.noise_mode, r=self.r, root_dir=self.root_dir, transform=self.transform_train, mode="labeled", noise_file=self.noise_file, pred=pred, probability=prob,log=self.log)
labeled_trainloader = DataLoader(
dataset=labeled_dataset,
batch_size=self.batch_size,
shuffle=True,
num_workers=self.num_workers)
unlabeled_dataset = cifar_dataset(dataset=self.dataset, noise_mode=self.noise_mode, r=self.r, root_dir=self.root_dir, transform=self.transform_train, mode="unlabeled", noise_file=self.noise_file, pred=pred)
unlabeled_trainloader = DataLoader(
dataset=unlabeled_dataset,
batch_size=self.batch_size,
shuffle=True,
num_workers=self.num_workers)
return labeled_trainloader, unlabeled_trainloader
elif mode=='test':
test_dataset = cifar_dataset(dataset=self.dataset, noise_mode=self.noise_mode, r=self.r, root_dir=self.root_dir, transform=self.transform_test, mode='test')
test_loader = DataLoader(
dataset=test_dataset,
batch_size=self.batch_size,
shuffle=False,
num_workers=self.num_workers)
return test_loader
elif mode=='eval_train':
eval_dataset = cifar_dataset(dataset=self.dataset, noise_mode=self.noise_mode, r=self.r, root_dir=self.root_dir, transform=self.transform_test, mode='all', noise_file=self.noise_file)
eval_loader = DataLoader(
dataset=eval_dataset,
batch_size=self.batch_size,
shuffle=False,
num_workers=self.num_workers)
return eval_loader | 44.601533 | 259 | 0.545486 | 1,342 | 11,641 | 4.532042 | 0.151267 | 0.029596 | 0.023676 | 0.015784 | 0.494574 | 0.462019 | 0.446399 | 0.410556 | 0.392305 | 0.392305 | 0 | 0.034988 | 0.349369 | 11,641 | 261 | 260 | 44.601533 | 0.768022 | 0.084185 | 0 | 0.343284 | 0 | 0 | 0.033551 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044776 | false | 0 | 0.044776 | 0 | 0.169154 | 0.00995 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4460f4628cd89ed83a3cb70a7461035c26830bb4 | 2,971 | py | Python | rnnparser/RecursiveNN/tests/test_vectorized_variables.py | uphere-co/nlp-prototype | c4623927e5c5c5f9c3e702eb36497ea1d9fd1ff3 | [
"BSD-3-Clause"
] | null | null | null | rnnparser/RecursiveNN/tests/test_vectorized_variables.py | uphere-co/nlp-prototype | c4623927e5c5c5f9c3e702eb36497ea1d9fd1ff3 | [
"BSD-3-Clause"
] | null | null | null | rnnparser/RecursiveNN/tests/test_vectorized_variables.py | uphere-co/nlp-prototype | c4623927e5c5c5f9c3e702eb36497ea1d9fd1ff3 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
import numpy as np
import pytest
from vecGraphComp.base import MatrixValues, ValueHolder, Block, NodeType, ExpressionWriter
def test_numpy_structure_of_arrays_with_expand_dims():
m,n = 200,100
x1=np.random.random((n,1))
x2=np.random.random((n,1))
A=np.random.random((m,n))
xs = np.concatenate([np.expand_dims(x1,0),np.expand_dims(x2,0)],axis=0)
Av=np.expand_dims(A,0)
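# Batched mat-vec via broadcasting: (1,m,n) * (2,1,n) -> (2,m,n); summing over axis=2 gives (2,m),
# and expand_dims restores the trailing singleton dimension -> (2,m,1).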
Axs = np.expand_dims(np.sum(Av*xs.transpose(0,2,1), axis=2),2)
assert Axs.shape == (2, m,1)
np.testing.assert_allclose(Axs[0],A.dot(x1), rtol=1e-7, atol=0.0)
np.testing.assert_allclose(Axs[1],A.dot(x2), rtol=1e-7, atol=0.0)
def test_MatrixValues():
vals=MatrixValues((3,5,5))
val=np.ones((5,5))
with pytest.raises(IndexError):
vals[0]=val
i=vals.save(val)
assert np.all(vals[i]==val)
val[1,:]=3
#The `val` is not shared, but copied:
assert not np.all(vals[i]==val)
#Assignment uses numpy's broadcasting
i=vals.save(2)
assert np.all(vals[i]==2)
#After the save, the value can be modified.
vals[i]=3
assert np.all(vals[i]==3)
#np.nan should not be tested by `==np.nan' comparison. Use np.isnan
i=vals.save(np.nan)
assert np.all(np.isnan(vals[i]))
def test_ValueHolder():
vs=ValueHolder(1000000)
vals=vs[(5,1)]
vals=vs[(5,5)]
vidx=vals.save(np.array([2]*25).reshape((5,5)))
vals2=vs[(5,5)]
assert np.all(vals2[vidx]==2)
sidx=vs.shape_index((5,5))
vals3=vs[sidx]
assert np.all(vals3[vidx]==2)
#Indexing unknown shape raises ValueError exception.
with pytest.raises(ValueError):
vs.shape_index([500,500])
def test_NodeType():
#Python Enum supports round-tripping between a member's integer value and the member itself
i=NodeType.Var.value
assert isinstance(i, int)
assert NodeType(i) == NodeType.Var
def test_Block():
a=Block(1000)
b=ExpressionWriter(a)
name='abcdefghijklmn'
b.Var(name, np.zeros((10,1)))
a.uid(name)
name=name[:2]+name[5:]
#`name` is copied, not shared.
with pytest.raises(ValueError):
a.uid(name)
b.Var(u'가', np.zeros((10,1)))
#Declaring same name multiple times does nothing
b.Var(u'가', np.zeros((10,1)))
uid=b.Var(u'나', np.zeros((10,1)))
assert a.uid(u'나')==uid
assert a.name(uid)==u'나'
with pytest.raises(ValueError):
a.uid(u'다')
assert a.name(a.uid(u'abcdefghijklmn'))==u'abcdefghijklmn'
uid1=b.Var(u'foo', np.array(range(9)).reshape(3,3))
uid2=b.Var(u'bar', np.array(range(9,18)).reshape(3,3))
uid3=b.Var(u'x', np.array(range(6)).reshape(3,2))
uid4=b.Var(u'y', np.array(range(6,12)).reshape(3,2))
assert np.all(a.get_value(uid1)==np.array(range(9)).reshape(3,3))
assert np.all(a.get_value(u'bar')==np.array(range(9,18)).reshape(3,3))
assert np.all(a.get_value(uid3)==np.array(range(6)).reshape(3,2))
assert np.all(a.get_value(u'y')==np.array(range(6,12)).reshape(3,2))
word=u'\xa3'
b.Var(word, np.zeros((10,1)))
| 30.947917 | 90 | 0.63211 | 527 | 2,971 | 3.519924 | 0.259962 | 0.02965 | 0.059299 | 0.026954 | 0.292183 | 0.212399 | 0.166038 | 0.128302 | 0.111051 | 0.058221 | 0 | 0.056864 | 0.171323 | 2,971 | 95 | 91 | 31.273684 | 0.696588 | 0.117132 | 0 | 0.097222 | 0 | 0 | 0.024493 | 0 | 0 | 0 | 0 | 0 | 0.263889 | 1 | 0.069444 | false | 0 | 0.041667 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4462a86f114257e11040144f6da38d51df8b1bf2 | 6,421 | py | Python | code/pyorg/scripts/ribo_mt/particles_curator.py | anmartinezs/pyseg_system | 5bb07c7901062452a34b73f376057cabc15a13c3 | [
"Apache-2.0"
] | 12 | 2020-01-08T01:33:02.000Z | 2022-03-16T00:25:34.000Z | code/pyorg/scripts/ribo_mt/particles_curator.py | anmartinezs/pyseg_system | 5bb07c7901062452a34b73f376057cabc15a13c3 | [
"Apache-2.0"
] | 8 | 2019-12-19T19:34:56.000Z | 2022-03-10T10:11:28.000Z | code/pyorg/scripts/ribo_mt/particles_curator.py | anmartinezs/pyseg_system | 5bb07c7901062452a34b73f376057cabc15a13c3 | [
"Apache-2.0"
] | 2 | 2022-03-30T13:12:22.000Z | 2022-03-30T18:12:10.000Z | """
Curates an output STAR file from Relion to work as input for pyseg.pyorg scripts for microtubules
Input: - STAR file with the particles to curate
- STAR file to pair tomograms used for reconstruction with the one segmented used to pick the particles
Output: - A curated output STAR file
"""
################# Package import
import os
import vtk
import numpy as np
import scipy as sp
import sys
import time
import shutil
from pyorg import pexceptions, sub, disperse_io, surf
from pyorg import globals as gl
###### Global variables
__author__ = 'Antonio Martinez-Sanchez'
########################################################################################
# PARAMETERS
########################################################################################
ROOT_PATH = '/fs/pool/pool-plitzko/Saikat/antonio/sub_test/org'
# Input STAR file
in_star = ROOT_PATH + '/in/0_v1_k3_run1_c1_data.star'
# Input STAR for with the sub-volumes segmentations
in_seg = ROOT_PATH + '/in/mts_clines_mts_seg_v1.star'
# Output directory
out_star = ROOT_PATH + '/in/0_v1_k3_run1_c1_data_curated.star'
p_bin = 8
p_max_dst = 10 # nm
p_res = 1.792
p_swapxy = True
p_cp_ptomo = True # Copy column '_mtParticlesTomo' values to '_rlnMicrographName'
########################################################################################
# MAIN ROUTINE
########################################################################################
########## Print initial message
print('Curating particle STAR files for Microtubules.')
print('\tAuthor: ' + __author__)
print('\tDate: ' + time.strftime("%c") + '\n')
print('Options:')
print('\tInput STAR file of particles: ' + str(in_star))
print('\tInput STAR file for segmentations: ' + str(in_seg))
print('\tOutput STAR file: ' + str(out_star))
print('\tPre-processing settings: ')
print('\t\t-Coordinates binning factor: ' + str(p_bin))
print('\t\t-Maximum distance to a centerline: ' + str(p_max_dst) + ' nm')
print('\t\t-Picking data resolution: ' + str(p_res) + ' nm/vx')
print('\t\t-Swap X and Y.')
if p_cp_ptomo:
print('\t\t-Copying _mtParticlesTomo -> _rlnMicrographName.')
print('')
######### Process
print('Main Routine: ')
print('\tGenerating Micrograph-segmentations dictionary...')
star_seg = sub.Star()
try:
star_seg.load(in_seg)
except pexceptions.PySegInputError as e:
print('ERROR: input STAR file could not be loaded because of "' + e.get_message() + '"')
print('Terminated. (' + time.strftime("%c") + ')')
sys.exit(-1)
ct_dic = dict()
for seg_row in range(star_seg.get_nrows()):
ct_str = star_seg.get_element('_mtCenterLine', seg_row)
ct_dic[ct_str] = seg_row
print('\tPre-processing the input particles STAR file: ')
surfs = list()
print('\tLoading input STAR file(s)...')
star, star_out = sub.Star(), sub.Star()
p_max_dst_v = float(p_max_dst) / float(p_res)
try:
star.load(in_star)
star.add_column('_psSegImage')
except pexceptions.PySegInputError as e:
print('ERROR: input STAR file could not be loaded because of "' + e.get_message() + '"')
print('Terminated. (' + time.strftime("%c") + ')')
sys.exit(-1)
part_dsts= dict().fromkeys(list(range(star.get_nrows())))
for key in part_dsts.keys():
part_dsts[key] = np.finfo(np.float).max
if p_bin > 0:
print('\t\t-Binning the input coordinates: ')
p_bin_f = float(p_bin)
for row in range(star.get_nrows()):
hold_x = star.get_element('_rlnCoordinateX', row)
hold_x /= p_bin
star.set_element(key='_rlnCoordinateX', val=hold_x, row=row)
hold_y = star.get_element('_rlnCoordinateY', row)
hold_y /= p_bin
star.set_element(key='_rlnCoordinateY', val=hold_y, row=row)
hold_z = star.get_element('_rlnCoordinateZ', row)
hold_z /= p_bin
star.set_element(key='_rlnCoordinateZ', val=hold_z, row=row)
if p_cp_ptomo:
print('\t\t-Copying column values mtParticlesTomo to rlnMicrographName...')
for row in range(star_seg.get_nrows()):
star_seg.set_element(key='_rlnMicrographName', val=star_seg.get_element('_mtParticlesTomo', row), row=row)
print('\tLoop for MT: ')
seg_dic = dict()
for row_ct in range(star_seg.get_nrows()):
ct_str = star_seg.get_element('_mtCenterLine', row_ct)
print('\t\t-MT to process: ' + ct_str)
ct_vtp = disperse_io.load_poly(ct_str)
ct_points = np.zeros(shape=(ct_vtp.GetNumberOfPoints(), 3), dtype=np.float32)
for i in range(ct_points.shape[0]):
ct_points[i, :] = ct_vtp.GetPoint(i)
seg_dic[star_seg.get_element('_psSegImage', row_ct)] = star_seg.get_element('_rlnMicrographName', row_ct)
print('\tLoop for particles: ')
for row in range(star.get_nrows()):
# Loading the input coordinate
x = star.get_element('_rlnCoordinateX', row)
y = star.get_element('_rlnCoordinateY', row)
z = star.get_element('_rlnCoordinateZ', row)
part_center = np.asarray((x, y, z), dtype=np.float32)
# Transform to MT space
# Un-cropping
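# (subtracting the segmentation crop offsets moves the particle coordinate
# into the frame of the cropped sub-volume that holds the centerline)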
seg_offy, seg_offx, seg_offz = star_seg.get_element('_psSegOffX', row_ct), \
star_seg.get_element('_psSegOffY', row_ct), \
star_seg.get_element('_psSegOffZ', row_ct)
if p_swapxy:
part_center -= np.asarray((seg_offy, seg_offx, seg_offz), dtype=np.float32)
else:
part_center -= np.asarray((seg_offx, seg_offy, seg_offz), dtype=np.float32)
# Finding the minimum distance
hold = ct_points - part_center
hold_min = np.sqrt((hold * hold).sum(axis=1)).min()
if hold_min < part_dsts[row]:
hold_seg = star_seg.get_element('_psSegImage', row_ct)
star.set_element(key='_psSegImage', val=hold_seg, row=row)
part_dsts[row] = hold_min
print('\tDeleting unidentified particles...')
del_ids = list()
for row in range(star.get_nrows()):
if part_dsts[row] > p_max_dst_v:
del_ids.append(row)
else:
hold_mic = star.get_element('_rlnMicrographName', row)
hold_seg = star.get_element('_psSegImage', row)
if seg_dic[hold_seg] != hold_mic:
del_ids.append(row)
star.del_rows(del_ids)
print('\t\t-Final number of particles: ' + str(star.get_nrows()))
print('\tWriting output STAR file: ' + out_star)
star.store(out_star)
print('Terminated. (' + time.strftime("%c") + ')') | 35.672222 | 115 | 0.63339 | 889 | 6,421 | 4.328459 | 0.253093 | 0.044179 | 0.031185 | 0.039761 | 0.289501 | 0.261694 | 0.169699 | 0.142931 | 0.111746 | 0.111746 | 0 | 0.00608 | 0.180346 | 6,421 | 180 | 116 | 35.672222 | 0.725062 | 0.097181 | 0 | 0.162602 | 0 | 0 | 0.263972 | 0.031483 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.073171 | 0 | 0.073171 | 0.252033 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4462c41b0b42cc76b2753d10c545dd05d91c8946 | 1,954 | py | Python | tests/test_proprietor_validation.py | LandRegistry/datatypes-alpha | b7ec20c9aec84697aa49aaf6bff52fac3770a942 | [
"MIT"
] | null | null | null | tests/test_proprietor_validation.py | LandRegistry/datatypes-alpha | b7ec20c9aec84697aa49aaf6bff52fac3770a942 | [
"MIT"
] | null | null | null | tests/test_proprietor_validation.py | LandRegistry/datatypes-alpha | b7ec20c9aec84697aa49aaf6bff52fac3770a942 | [
"MIT"
] | 1 | 2021-04-11T06:07:21.000Z | 2021-04-11T06:07:21.000Z | import unittest
from copy import deepcopy
from datatypes.exceptions import DataDoesNotMatchSchemaException
from datatypes import proprietor_validator
from datatypes.core import unicoded
proprietor = unicoded({
"title" : "Mrs",
"full_name": "Bootata Smick",
"decoration": "tidy"
})
proprietor_with_additional_fields = unicoded({
"title" : "",
"full_name": "Spielataville Heights Council",
"decoration": "",
"an_unknown": "thing"
})
class TestProprietorValidation(unittest.TestCase):
def test_can_validate_valid_proprietor(self):
try:
proprietor_validator.validate(proprietor)
except DataDoesNotMatchSchemaException as e:
self.fail("Could not validate proprietor: " + repr(e))
def test_can_validate_valid_proprietor_with_extra_keys(self):
try:
proprietor_validator.validate(proprietor_with_additional_fields)
except DataDoesNotMatchSchemaException as e:
self.fail("Could not validate proprietor: " + repr(e))
def test_does_not_validate_proprietor_without_title(self):
invalid_proprietor = deepcopy(proprietor)
del invalid_proprietor["title"]
self.assertRaisesRegexp(DataDoesNotMatchSchemaException, "title is a required field", proprietor_validator.validate, invalid_proprietor)
def test_does_not_validate_proprietor_without_full_name(self):
invalid_proprietor = deepcopy(proprietor)
del invalid_proprietor["full_name"]
self.assertRaisesRegexp(DataDoesNotMatchSchemaException, "full_name is a required field", proprietor_validator.validate, invalid_proprietor)
def test_does_not_validate_proprietor_without_decoration(self):
invalid_proprietor = deepcopy(proprietor)
del invalid_proprietor["decoration"]
self.assertRaisesRegexp(DataDoesNotMatchSchemaException, "decoration is a required field", proprietor_validator.validate, invalid_proprietor)
| 39.08 | 149 | 0.754862 | 197 | 1,954 | 7.203046 | 0.28934 | 0.107822 | 0.095137 | 0.029598 | 0.553911 | 0.553911 | 0.450317 | 0.427766 | 0.30303 | 0.260747 | 0 | 0 | 0.169908 | 1,954 | 49 | 150 | 39.877551 | 0.874846 | 0 | 0 | 0.282051 | 0 | 0 | 0.144319 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.128205 | false | 0 | 0.128205 | 0 | 0.282051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
446691647b7b65b8fcd033c966f9edcd95dfe8b8 | 907 | py | Python | src/plots/experiments_1.py | ekreutz/bayes-cv-pruning | a82888fcc8771fdf90d51bd6c37c5bc7a449c81a | [
"MIT"
] | null | null | null | src/plots/experiments_1.py | ekreutz/bayes-cv-pruning | a82888fcc8771fdf90d51bd6c37c5bc7a449c81a | [
"MIT"
] | null | null | null | src/plots/experiments_1.py | ekreutz/bayes-cv-pruning | a82888fcc8771fdf90d51bd6c37c5bc7a449c81a | [
"MIT"
] | null | null | null | import os
import pickle
import numpy as np
import matplotlib.font_manager as font_manager
import matplotlib.pyplot as plt
import seaborn as sns
# Plot styling (seaborn theme)
sns.set(context="paper", style="whitegrid", font="STIXGeneral", font_scale=1.25)
def plot():
CURRENT_PATH = os.path.dirname(os.path.realpath(__file__))
# Final model with exponential hyperprior
with open(os.path.join(CURRENT_PATH, "..", "..", "stats.pkl"), "rb") as f:
data = pickle.load(f)
params = list(map(lambda d: d["total_params"], data))
x = np.arange(1, len(params) + 1)
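# Baseline (descriptive note): plain k-fold CV spends k = 5 model fits per
# hyperparameter set, so after x total fits only x / 5 sets have been tested.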
y = x / 5
plt.plot(x, params, "-r", label="Bayesian pruning (b=10, tau=0.99)")
plt.plot(x, y, "-b", label="Regular k-fold CV (k=5)")
plt.legend()
plt.xlabel("# Total ML models fit")
plt.ylabel("# Hyperparameter sets tested")
plt.subplots_adjust(left=0.065, bottom=0.095, top=0.975, right=0.975)
plt.show()
| 29.258065 | 80 | 0.654906 | 143 | 907 | 4.076923 | 0.594406 | 0.030875 | 0.027444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037736 | 0.181918 | 907 | 30 | 81 | 30.233333 | 0.747978 | 0.048512 | 0 | 0 | 0 | 0 | 0.187209 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.285714 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44677c1268d64c9902a9bcde21123859ab265c49 | 709 | py | Python | quickstart.py | adamstimb/nimbusinator | a7bb7e282b8322c1a97bffc3c40ab0541f746615 | [
"MIT"
] | null | null | null | quickstart.py | adamstimb/nimbusinator | a7bb7e282b8322c1a97bffc3c40ab0541f746615 | [
"MIT"
] | 16 | 2019-11-23T19:08:45.000Z | 2020-03-13T17:13:23.000Z | quickstart.py | adamstimb/nimbusinator | a7bb7e282b8322c1a97bffc3c40ab0541f746615 | [
"MIT"
] | null | null | null | from nimbusinator.nimbus import Nimbus
from nimbusinator.command import Command
if __name__ == '__main__':
# Create and bind nimbusinator objects:
nim = Nimbus(full_screen=True)
cmd = Command(nim)
nim.boot() # Boot the Nimbus
cmd.set_mode(40) # Low resolution mode
cmd.set_border(1) # Dark blue border
cmd.set_paper(9) # Light blue paper
cmd.cls() # Clear screen
cmd.plonk_logo((8, 110)) # Show Nimbus logo
# Display a message in cyan with shadowing
cmd.plot('Greetings!!!', (65, 155), size=2, brush=0)
cmd.plot('Greetings!!!', (66, 156), size=2, brush=13)
# Wait 5 seconds then shutdown
nim.sleep(5)
nim.shutdown()
| 33.761905 | 57 | 0.638928 | 99 | 709 | 4.444444 | 0.626263 | 0.040909 | 0.072727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046642 | 0.244006 | 709 | 20 | 58 | 35.45 | 0.774254 | 0.291961 | 0 | 0 | 0 | 0 | 0.065173 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.133333 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4467af496402d2c4c04283d114962eb0a43111b9 | 264 | py | Python | 03-threads/threads_example.py | LeandroMelloo/programacao_concorrente_assincrona_com_python | 49790de6004d588ccc2a07d1be0c420d6a1e4b7a | [
"Apache-2.0"
] | null | null | null | 03-threads/threads_example.py | LeandroMelloo/programacao_concorrente_assincrona_com_python | 49790de6004d588ccc2a07d1be0c420d6a1e4b7a | [
"Apache-2.0"
] | null | null | null | 03-threads/threads_example.py | LeandroMelloo/programacao_concorrente_assincrona_com_python | 49790de6004d588ccc2a07d1be0c420d6a1e4b7a | [
"Apache-2.0"
] | null | null | null | import threading
def start_threading(param):
print('Running something....')
print(f'Using the received parameter: {param}')
return print(f'Final result: {param * param}')
th = threading.Thread(target=start_threading, args=(5,))
th.start()
th.join()
| 18.857143 | 56 | 0.689394 | 35 | 264 | 5.142857 | 0.628571 | 0.155556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004464 | 0.151515 | 264 | 13 | 57 | 20.307692 | 0.799107 | 0 | 0 | 0 | 0 | 0 | 0.32197 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.375 | 0.375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
446a9b413000d786cd15da4f54dc59cd09eba6bf | 9,450 | py | Python | catan-simulator/src/Api/Catan.py | williamaredal/Catan-Simulator | 27b5fa2bb77554fc5dfc67286899a70d3c41aeb4 | [
"MIT"
] | null | null | null | catan-simulator/src/Api/Catan.py | williamaredal/Catan-Simulator | 27b5fa2bb77554fc5dfc67286899a70d3c41aeb4 | [
"MIT"
] | null | null | null | catan-simulator/src/Api/Catan.py | williamaredal/Catan-Simulator | 27b5fa2bb77554fc5dfc67286899a70d3c41aeb4 | [
"MIT"
] | null | null | null | from collections import Counter
from random import choice
import numpy as np
# Figuring out the roads to victory requiring the minimum number of cards to achieve victory in catan
class Ledger:
def __init__(
self,
victoryPointCondition=10,
villageSettlement=1,
citySettlement=2,
startVillages=2,
startCities=0,
startRoads=2,
startDevCards=[],
maxVillages=5,
maxCities=4,
maxRoads=15,
villageSpacingRequirement=2,
longestRoad={'requirement': 5, 'points' : 2}, # minimum requirement is 5 connected roads
largestArmy={'requirement': 3, 'points' : 2}, # minimum requirement is 3 knights
constructionCosts={
'village' : {'wood' : 1, 'sheep' : 1, 'wheat' : 1, 'brick' : 1},
'city' : {'wheat' : 2, 'ore' : 3},
'road' : {'wood' : 1, 'brick' : 1},
'development' : {'sheep' : 1, 'wheat' : 1, 'ore' : 1}
},
initialResourceUsage={'wood' : 0, 'sheep' : 0, 'wheat' : 0, 'brick' : 0, 'ore' : 0},
developmentCards=['k' for n in range(14)] + ['v' for n in range(5)] + ['i' for n in range(6)]
):
self.victoryPointCondition = victoryPointCondition
self.villageSettlement = villageSettlement
self.citySettlement = citySettlement
self.startVillages = startVillages
self.startCities = startCities
self.startRoads = startRoads
self.startDevCards = startDevCards
self.maxVillages = maxVillages
self.maxCities = maxCities
self.maxRoads = maxRoads
self.villageSpacingRequirement = villageSpacingRequirement
self.longestRoad = longestRoad
self.largestArmy = largestArmy
self.constructionCosts = constructionCosts
self.initialResourceUsage = initialResourceUsage
self.developmentCards = developmentCards
class Player:
def __init__(
self,
resourceCards,
villages,
availableVillages,
cities,
availableCities,
roads,
availableRoads,
devCards,
availableDevCards,
longestRoad=False,
largestArmy=False
):
self.resourceCards = resourceCards
self.villages = villages
self.availableVillages = availableVillages
self.cities = cities
self.availableCities = availableCities
self.roads = roads
self.availableRoads = availableRoads
self.devCards = devCards
self.availableDevCards = availableDevCards
self.longestRoad = longestRoad
self.largestArmy = largestArmy
def updateResources(self, resources):
self.resourceCards = resources
def updateVillages(self, villages, availableVillages):
self.villages = villages
self.availableVillages = availableVillages
def updateCities(self, cities, availableCities):
self.cities = cities
self.availableCities = availableCities
def updateRoads(self, roads, availableRoads):
self.roads = roads
self.availableRoads = availableRoads
def updateDevCards(self, devCards, availableDevCards):
self.devCards = devCards
self.availableDevCards = availableDevCards
def updateSpecialCard(self, longestRoad, largestArmy):
self.longestRoad = longestRoad
self.largestArmy = largestArmy
def VictoryCondition(player, ledger):
requiredScore = ledger.victoryPointCondition
playerScore = int(
(player.villages * ledger.villageSettlement) +
(player.cities * ledger.citySettlement) +
(player.longestRoad * ledger.longestRoad['points']) +
(player.largestArmy * ledger.largestArmy['points'])
)
try:
playerScore += player.devCards.count('v')
except TypeError:
return
if playerScore < requiredScore:
return False
elif playerScore >= requiredScore:
return True
else:
print('VictoryCondition: unexpected score comparison state')
def PlayerScore(player, ledger):
playerScore = int(
(player.villages * ledger.villageSettlement) +
(player.cities * ledger.citySettlement) +
(player.longestRoad * ledger.longestRoad['points']) +
(player.largestArmy * ledger.largestArmy['points']) +
(player.devCards.count('v'))
)
return playerScore
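# Worked example with the default Ledger: 3 villages (3 pts) + 2 cities (4 pts)
# + longest road (2 pts) + one 'v' development card (1 pt) = 10 points,
# which meets the default victoryPointCondition of 10.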
def BuildRoad(player, ledger):
player.updateResources(
Counter(player.resourceCards) +
Counter(ledger.constructionCosts['road'])
)
player.updateRoads(
player.roads + 1,
player.availableRoads - 1
)
def BuildVillage(player, ledger):
player.updateResources(
Counter(player.resourceCards) +
Counter(ledger.constructionCosts['village'])
)
player.updateVillages(
player.villages + 1,
player.availableVillages - 1
)
def BuildCity(player, ledger):
player.updateResources(
Counter(player.resourceCards) +
Counter(ledger.constructionCosts['city'])
)
player.updateCities(
player.cities + 1,
player.availableCities - 1
)
player.updateVillages(
player.villages - 1,
player.availableVillages + 1
)
def BuyDevCard(player, ledger):
devCardDraw = list(np.random.choice(player.availableDevCards, 1, ))
remainingDevCards = player.availableDevCards
remainingDevCards.remove(devCardDraw[0])
player.updateResources(
Counter(player.resourceCards) +
Counter(ledger.constructionCosts['development'])
)
player.updateDevCards(
(player.devCards + devCardDraw),
remainingDevCards
)
def SetSpecialCard(player, ledger):
player.updateSpecialCard(
((player.roads - (ledger.startRoads)) >= (ledger.longestRoad['requirement'])),
(len(player.devCards) != 0 and player.devCards.count('k') >= (ledger.largestArmy['requirement']))
)
def SimulateCatanGames(numberOfSimulations, simulatedGames={}, GameLedger=Ledger()):
for s in range(1, numberOfSimulations + 1):
p1DecisionTree = []
p1 = Player(
GameLedger.initialResourceUsage,
GameLedger.startVillages,
(GameLedger.maxVillages - GameLedger.startVillages),
GameLedger.startCities,
(GameLedger.maxCities - GameLedger.startCities),
GameLedger.startRoads,
(GameLedger.maxRoads - GameLedger.startRoads),
GameLedger.startDevCards,
['k' for n in range(14)] + ['v' for n in range(5)] + ['i' for n in range(6)]
)
while VictoryCondition(p1, GameLedger) == False:
possibleChoices = [c for c in range(1,4) if
(c == 1) and (p1.availableVillages > 0) and (p1.availableRoads >= GameLedger.villageSpacingRequirement)
or (c == 2) and (p1.availableCities > 0) and( p1.villages > 0)
or (c == 3) and (len(p1.availableDevCards) > 0)
]
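# Decision encoding used below: 1 = build road(s) plus a village,
# 2 = upgrade a village to a city, 3 = buy a development card.
# Only actions whose pieces or cards are still available are offered.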
randomDecision = choice(possibleChoices)
SetSpecialCard(p1, GameLedger)
p1DecisionTree.append(randomDecision)
# VILLAGE
if randomDecision == 1:
# VILLAGE AFTER BUILDING 1 ROAD
if (p1.roads >= GameLedger.villageSpacingRequirement) and (p1.roads < (GameLedger.villageSpacingRequirement * GameLedger.startVillages)):
BuildRoad(p1, GameLedger)
BuildVillage(p1, GameLedger)
# VILLAGE AFTER BUILDING 2 ROADS
else:
for r in range(0, GameLedger.villageSpacingRequirement):
BuildRoad(p1, GameLedger)
BuildVillage(p1, GameLedger)
# CITY
elif randomDecision == 2:
BuildCity(p1, GameLedger)
# DEVCARD
elif randomDecision == 3:
BuyDevCard(p1, GameLedger)
# COULD NOT BUILD MORE VICTORY POINTS
else:
print('Something went wrong: no build action could be completed')
if p1.availableRoads < GameLedger.villageSpacingRequirement:
print('Not enough roads available')
elif p1.availableVillages < 1 and p1.availableCities < 1:
print('No available villages or cities')
elif p1.villages < 1:
print('No available villages')
elif p1.availableCities < 1:
print('No available cities')
elif len(p1.availableDevCards) < 1:
print('No available devCards')
else:
print('Unexpected error...')
break
SetSpecialCard(p1, GameLedger)
simulatedGames[s] = {
'decisionTree' : p1DecisionTree,
'victoryPoints' : PlayerScore(p1, GameLedger),
'cardsToVictory' : sum(p1.resourceCards.values()),
'spentResources' : p1.resourceCards,
'villages' : p1.villages,
'availableVillages' : p1.availableVillages,
'cities' : p1.cities,
'availableCities' : p1.availableCities,
'roads' : p1.roads,
'availableRoads' : p1.availableRoads,
'devCards' : p1.devCards,
'availableDevCards' : p1.availableDevCards,
'longestRoad' : p1.longestRoad,
'largestArmy' : p1.largestArmy
}
return simulatedGames
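# --- Hypothetical usage sketch (not part of the original module) ---
# games = SimulateCatanGames(100)
# fastest = min(games.values(), key=lambda g: g['cardsToVictory'])
# print(fastest['decisionTree'], fastest['cardsToVictory'])
# Note: simulatedGames has a mutable default argument, so pass a fresh dict
# (e.g. SimulateCatanGames(100, simulatedGames={})) to avoid sharing state
# between repeated calls.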
| 31.605351 | 153 | 0.610899 | 792 | 9,450 | 7.27904 | 0.19697 | 0.020815 | 0.006245 | 0.011448 | 0.29575 | 0.280659 | 0.170685 | 0.151605 | 0.138248 | 0.116739 | 0 | 0.017117 | 0.295238 | 9,450 | 298 | 154 | 31.711409 | 0.848499 | 0.030794 | 0 | 0.243478 | 0 | 0 | 0.06252 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069565 | false | 0 | 0.013043 | 0 | 0.113043 | 0.034783 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
446aff1f81c23a0f38cd568dd2d43f6846714b6e | 29,745 | py | Python | client_server_test INHERIT/LocalModel.py | hades208002/mdp-project | c242a8d00412cc3772d298986977f6acc47002ee | [
"MIT"
] | null | null | null | client_server_test INHERIT/LocalModel.py | hades208002/mdp-project | c242a8d00412cc3772d298986977f6acc47002ee | [
"MIT"
] | null | null | null | client_server_test INHERIT/LocalModel.py | hades208002/mdp-project | c242a8d00412cc3772d298986977f6acc47002ee | [
"MIT"
] | null | null | null | # Import all the useful libraries
import numpy as np
import pandas as pd
import fancyimpute
from sklearn import model_selection
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import AdaBoostClassifier # PROBABILITY
from sklearn.tree import DecisionTreeClassifier # PROBABILITY
from sklearn.neighbors import RadiusNeighborsClassifier
from sklearn.linear_model import RidgeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier # PROBABILITY
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier # PROBABILITY
from sklearn.linear_model import LogisticRegression # PROBABILITY
from sklearn.naive_bayes import GaussianNB # PROBABILITY
from sklearn.ensemble import ExtraTreesClassifier # PROBABILITY
from sklearn.neighbors import KNeighborsClassifier # PROBABILITY
from sklearn.ensemble import BaggingClassifier # PROBABILITY
from imblearn.over_sampling import SMOTE
from imblearn.over_sampling import ADASYN
from imblearn.under_sampling import TomekLinks
from sklearn.externals import joblib
# MISSING PARTs
# 1) send the distribution (mean and std) of the data if requested (for example, how the two classes are distributed over the age of the population (or any other feature))
# 2) send other useful data ? ((if available) feature importance, decision_path)
# ...
# training data -> expected to contain all the listed features (IN ORDER, as in the data we have); missing values are allowed
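# --- Hypothetical sketch for missing part (1); not part of the original model. ---
# A minimal way to summarise how the two classes are distributed over a single
# feature (e.g. age): per-class mean and standard deviation. The names
# `dataframe`, `feature` and `target_name` are assumptions for illustration.
def class_distribution_sketch(dataframe, feature, target_name="AFclass"):
    """Return the mean and std of `feature` for each value of `target_name`."""
    summary = {}
    for class_value, group in dataframe.groupby(target_name):
        summary[class_value] = {
            "mean": float(group[feature].mean()),
            "std": float(group[feature].std()),
        }
    return summary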
class LocalModel:
# local model functions
# train
# predict
# initialize the local model with the training data
def __init__(self, data = "none", target_name = "AFclass" , model_name = "ada4",random_state = 12345678, imputation_strategy = 'mice',balance_strategy = 'SMOTE'):
# we train the model with all the available data
self.target_name = target_name ## it is the name of the target column
self.target = None ## it is the target vector
self.data_lm = data ## it is the complete dataset -> will be modified
self.original_data = data ## store a copy of the original data -> never modified
self.X = None ## it is the data except the target
self.features_lm = None ## available features
self.imputation_strategy = imputation_strategy
self.balance_strategy = balance_strategy
# for cross-validation
self.model_accuracy = ""
self.cv_x = None # data -> in principle equal to self.X
self.cv_y = None # target -> in principle equal to self.target
self.random_state = random_state # random state -> fixed for testing
self.selected_model_name = model_name # name of the model -> default fixed
self.selected_model = AdaBoostClassifier(DecisionTreeClassifier(max_depth=10, random_state = self.random_state),algorithm="SAMME", n_estimators=300)#DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=15, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=self.random_state, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False) ## default model
self.models = [] ## list of all the available models
self.important_features = pd.DataFrame([],columns = {"important features"})
#if not isinstance(self, LocalModel):
# self.chosen_model(model_name) # select the chosen model -> otherwise use the default one
#self.check1, self.check2, self.check3 = self.fixDataset(imputation_strategy = imputation_strategy, balance_strategy = balance_strategy) ## fix data set before training -> clean data (remove unused columns, convert categotical attributes into numerical), recover missing values (use a strategy to impute the missing values), balance the data set
#if isinstance(self, LocalModel):
# self.chooseModel_with_crossValidation()
self.localModelType = "app" ## gui or app -> gui can only respond to predictions , app can only send prediction requests or send data to central model
if not str(self.data_lm) == "none":
self.localModelType = "gui"
self.perfromLocalOperations()
def perfromLocalOperations(self):
self.fixDataset(imputation_strategy = self.imputation_strategy, balance_strategy = self.balance_strategy) ## fix data set before training -> clean data (remove unused columns, convert categotical attributes into numerical), recover missing values (use a strategy to impute the missing values), balance the data set
#self.train()
# initiate the models_definition
def chooseModel_with_crossValidation_and_train(self):
r = []
if not str(self.data_lm) == "none":
try :
print ("TRY load model, " + self.selected_model_name)
self.selected_model = joblib.load(self.selected_model_name + '.pkl')
print ("model loaded")
r = self.crossValidation(all_models = 0) ## just to get the accuracy and the std deviation
print ("skip trainign -Z model loaded")
except :
self.models_definition(self.random_state)
r = self.crossValidation(all_models = 1, k_fold = 10)
found = 0
for (n,i) in self.models: # n = name , i = model
if n == r.iloc[0][0] and found == 0:
found = 1
self.selected_model = i
self.selected_model_name = n
if found == 0:
self.selected_model = DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=15, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=self.random_state, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False)
self.selected_model_name = "dt4"
self.train()
joblib.dump(self.selected_model, self.selected_model_name + '.pkl')
else:
print ("no data")
self.getImportantFetures()
print ("DONE Cross validation -> choosen model")
print (self.important_features.shape)
print (self.important_features)
print ("shape")
print (self.selected_model_name, self.selected_model)
return r
def getImportantFetures(self):
a = []
print (self.selected_model.feature_importances_)
try:
indices = np.argsort(self.selected_model.feature_importances_)[-10:]
for i in indices :
a.append(self.data_lm.columns[i])
print ("important features here")
print (self.important_features)
print (" finish printing important features ")
except :
print("no features importance")
a = pd.DataFrame(a,columns = {"important features"})
self.important_features = a
def updateFeatures(self,features):
f = features["important features"].tolist()
pos = 0
res = self.important_features["important features"].tolist()
try:
for i in f:
if i not in res:
res.insert(pos , i)
pos += 1
else :
oldPos = res.index(i)
res.remove(i)
res.insert(int((pos + oldPos) / 2) , i)
pos += 1
res = pd.DataFrame(res,columns = {"important features"})
self.important_features = res
except:
print("error in update feature")
return res
def addData(self, data, target_name = "AFclass", localModelType = "gui"):
# add data only if there is not other yet (in future -> possibility to concat to self.original_data)
if str(self.data_lm) == "none":
self.localModelType = "gui"
self.data_lm = data
self.target_name = target_name
self.perfromLocalOperations()
print ("data Added and fixed")
return True
print ("abort -> there is already data")
return False
def models_definition(self,random_state):
## here we can tune the paramenters of the models
#self.models.append(("ada1",AdaBoostClassifier(DecisionTreeClassifier(max_depth=1, random_state = self.random_state),algorithm="SAMME", n_estimators=200)))
#self.models.append(("ada2",AdaBoostClassifier(DecisionTreeClassifier(max_depth=3, random_state = self.random_state),algorithm="SAMME", n_estimators=200)))
#self.models.append(("ada3",AdaBoostClassifier(DecisionTreeClassifier(max_depth=5, random_state = self.random_state),algorithm="SAMME", n_estimators=100)))
self.models.append(("ada4",AdaBoostClassifier(DecisionTreeClassifier(max_depth=10, random_state = self.random_state),algorithm="SAMME", n_estimators=300)))
#self.models.append(("ada5",AdaBoostClassifier(DecisionTreeClassifier(max_depth=20, random_state = self.random_state),algorithm="SAMME", n_estimators=100)))
#self.models.append(("ada6",AdaBoostClassifier(RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',max_depth=2, max_features='auto', max_leaf_nodes=None,min_impurity_decrease=0.0, min_impurity_split=None,min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,oob_score=False, random_state=self.random_state, verbose=0, warm_start=False))))
#self.models.append(("ada7",AdaBoostClassifier(RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',max_depth=5, max_features='auto', max_leaf_nodes=None,min_impurity_decrease=0.0, min_impurity_split=None,min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,oob_score=False, random_state=self.random_state, verbose=0, warm_start=False))))
#self.models.append(("ada8",AdaBoostClassifier(RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',max_depth=10, max_features='auto', max_leaf_nodes=None,min_impurity_decrease=0.0, min_impurity_split=None,min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,oob_score=False, random_state=self.random_state, verbose=0, warm_start=False))))
"""
#self.model.append(RadiusNeighborsClassifier(radius=10.0, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski'))
#self.models.append(("ridge1", RidgeClassifier(alpha=1.0, fit_intercept=True, normalize=False, copy_X=True, max_iter=None, tol=0.001, class_weight=None, solver='auto', random_state=self.random_state)))
#paramsGB1 = {'n_estimators': 120, 'max_depth': 3, 'subsample': 0.5,'learning_rate': 0.01, 'min_samples_leaf': 1, 'random_state': self.random_state}
#paramsGB2 = {'n_estimators': 120, 'max_depth': 6, 'subsample': 0.5,'learning_rate': 0.05, 'min_samples_leaf': 1, 'random_state': self.random_state}
#paramsGB3 = {'n_estimators': 60, 'max_depth': 15, 'subsample': 0.5,'learning_rate': 0.01, 'min_samples_leaf': 1, 'random_state': self.random_state}
paramsGB4 = {'n_estimators': 320, 'max_depth': 10, 'subsample': 0.5,'learning_rate': 0.005, 'min_samples_leaf': 1, 'random_state': self.random_state}
#self.models.append(("gb1",GradientBoostingClassifier(**paramsGB1)))
#self.models.append(("gb2",GradientBoostingClassifier(**paramsGB2)))
#self.models.append(("gb3",GradientBoostingClassifier(**paramsGB3)))
self.models.append(("gb4",GradientBoostingClassifier(**paramsGB4)))
"""
#self.models.append(("dt1",DecisionTreeClassifier(random_state=self.random_state)))
#self.models.append(("dt2",DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=3, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=self.random_state, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False)))
#self.models.append(("dt3",DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=7, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=self.random_state, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False)))
self.models.append(("dt4",DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=15, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=self.random_state, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False)))
#self.models.append(("dt5",DecisionTreeClassifier(criterion='entropy', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=self.random_state, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False)))
"""
#self.models.append(("rf1",RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',max_depth=2, max_features='auto', max_leaf_nodes=None,min_impurity_decrease=0.0, min_impurity_split=None,min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,oob_score=False, random_state=self.random_state, verbose=0, warm_start=False)))
self.models.append(("rf2",RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',max_depth=5, max_features='auto', max_leaf_nodes=None,min_impurity_decrease=0.0, min_impurity_split=None,min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=20, n_jobs=1,oob_score=False, random_state=self.random_state, verbose=0, warm_start=False)))
#self.models.append(("rf3",RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',max_depth=10, max_features='auto', max_leaf_nodes=None,min_impurity_decrease=0.0, min_impurity_split=None,min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_jobs=1,oob_score=False, random_state=self.random_state, verbose=0, warm_start=False)))
#self.models.append(("ld1",LinearDiscriminantAnalysis(n_components=None, priors=None, shrinkage=None,solver='svd', store_covariance=False, tol=0.0001)))
#self.models.append(("lr1",LogisticRegression(penalty='l2', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=self.random_state, solver='liblinear', max_iter=100, multi_class='ovr', verbose=0, warm_start=False, n_jobs=1)))
#self.models.append(("knn1",KNeighborsClassifier(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1)))
self.models.append(("knn2",KNeighborsClassifier(n_neighbors=10, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1)))
#self.models.append(("knn3",KNeighborsClassifier(n_neighbors=15, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1)))
#self.models.append(("knn4",KNeighborsClassifier(n_neighbors=20, weights='distance', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1)))
#self.models.append(("knn5",KNeighborsClassifier(n_neighbors=50, weights='distance', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1)))
#self.models.append(("nb1",GaussianNB()))
#self.models.append(("et1",ExtraTreesClassifier(n_estimators=50, random_state=self.random_state)))
#self.models.append(("et2",ExtraTreesClassifier(n_estimators=100, random_state=self.random_state)))
self.models.append(("et3",ExtraTreesClassifier(n_estimators=200, random_state=self.random_state)))
#self.models.append(("bag1",BaggingClassifier(base_estimator=None, n_estimators=5, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=self.random_state, verbose=0)))
#self.models.append(("bag2",BaggingClassifier(base_estimator=None, n_estimators=10, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=self.random_state, verbose=0)))
#self.models.append(("bag3",BaggingClassifier(base_estimator=None, n_estimators=20, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=self.random_state, verbose=0)))
#self.models.append(("bag4",BaggingClassifier(base_estimator=None, n_estimators=50, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=self.random_state, verbose=0)))
#self.models.append(("bag5",BaggingClassifier(base_estimator=None, n_estimators=100, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=self.random_state, verbose=0)))
#self.models.append(("bag6",BaggingClassifier(base_estimator=None, n_estimators=150, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=self.random_state, verbose=0)))
#self.models.append(("bag7",BaggingClassifier(base_estimator=None, n_estimators=200, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=self.random_state, verbose=0)))
self.models.append(("bag8",BaggingClassifier(base_estimator=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',max_depth=2, max_features='auto', max_leaf_nodes=None,min_impurity_decrease=0.0, min_impurity_split=None,min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,oob_score=False, random_state=self.random_state, verbose=0, warm_start=False), n_estimators=200, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=self.random_state, verbose=0)))
#self.models.append(("bag9",BaggingClassifier(base_estimator=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',max_depth=5, max_features='auto', max_leaf_nodes=None,min_impurity_decrease=0.0, min_impurity_split=None,min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,oob_score=False, random_state=self.random_state, verbose=0, warm_start=False), n_estimators=200, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=self.random_state, verbose=0)))
#self.models.append(("bag10",BaggingClassifier(base_estimator=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',max_depth=10, max_features='auto', max_leaf_nodes=None,min_impurity_decrease=0.0, min_impurity_split=None,min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,oob_score=False, random_state=self.random_state, verbose=0, warm_start=False), n_estimators=200, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=self.random_state, verbose=0)))
#self.models.append(("bag11",BaggingClassifier(base_estimator=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',max_depth=20, max_features='auto', max_leaf_nodes=None,min_impurity_decrease=0.0, min_impurity_split=None,min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,oob_score=False, random_state=self.random_state, verbose=0, warm_start=False), n_estimators=200, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=self.random_state, verbose=0)))
"""
## add other models ...
def chosen_model(self, name):
# initialize the available models
self.models_definition(self.random_state)
found = 0
for (n,i) in self.models: # n = name , i = model
if n == name and found == 0:
found = 1
self.selected_model = i
self.selected_model_name = name
if found == 0 :
# feel free to modify the model.. if another is better
self.selected_model = DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=15, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=self.random_state, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False)
self.selected_model_name = "dt4"
return
## to choose the best model using cross validation
## normally crossvalidate just the chosen model, if all_models = 1 -> crossvalidate all the models
def crossValidation(self, all_models = 0, k_fold = 10, random_state = 12345678):
# cross validation
if all_models == 1:
print ("begin cross validation for all models")
evaluation = []
counter = 1
numberOfModels = len(self.models)
#best = ("BEST", 0, 0)
for (name,i) in self.models:
print (round(counter / numberOfModels,3), " is complete \t" )
e = model_selection.cross_val_score(i, self.cv_x, self.cv_y, cv=StratifiedKFold(n_splits=k_fold,random_state=random_state,shuffle=True))
avg = round(np.average(e),4) * 100
std = round(np.std(e),4) * 100
evaluation.append ((name, avg , std))
counter = counter + 1
evaluation.sort(key = lambda tup: tup[1], reverse = True)
df_cv = pd.DataFrame (evaluation, columns=['Model','Accuracy','std'])
self.model_accuracy = df_cv.iloc[0][1]
print ("end cross validation")
return df_cv
else:
e = model_selection.cross_val_score(self.selected_model, self.cv_x, self.cv_y, cv=StratifiedKFold(n_splits=k_fold,random_state=random_state,shuffle=True))
t = pd.DataFrame([(self.selected_model_name, round(np.average(e),4) * 100 , round(np.std(e),4) * 100 )], columns=['Model','Accuracy','std'])
return t
return
def showData(self, lines = 5, original_data = 0):
if original_data == 1:
print (self.original_data.head(lines))
else :
print(self.data_lm.head(lines))
# remove unused features, convert categorical attributes to numerical ones
def cleanData(self):
print ("START CLEANING")
# re-start from the orginial data
#self.data_lm = self.original_data
if 'Soggetti' in self.data_lm.columns:
self.data_lm = self.data_lm.drop('Soggetti', axis = 1)
if 'PCneg' in self.data_lm.columns:
self.data_lm = self.data_lm.drop('PCneg', axis = 1)
if 'IPG' in self.data_lm.columns:
self.data_lm = self.data_lm.drop('IPG', axis = 1)
if 'sbjBeatConsidered' in self.data_lm.columns:
self.data_lm = self.data_lm.drop('sbjBeatConsidered', axis=1)
if 'numRRaveraged' in self.data_lm.columns:
self.data_lm = self.data_lm.drop('numRRaveraged', axis=1)
# convert categorical variables into numerical
if 'patsex' in self.data_lm.columns and ("männlich" in self.data_lm["patsex"].values or "weiblich" in self.data_lm["patsex"].values):
self.data_lm['patsex'] = self.data_lm['patsex'].map({'männlich' : 1, 'weiblich' : 0})
if 'AFclass' in self.data_lm.columns and ("persistierend (>7 Tage, EKV)" in self.data_lm["AFclass"].values or "paroxysmal" in self.data_lm["AFclass"].values):
self.data_lm["AFclass"] = self.data_lm["AFclass"].map({'persistierend (>7 Tage, EKV)' : 1, 'paroxysmal' : 0})
# extract features
self.features_lm = self.data_lm.columns[self.data_lm.columns != self.target_name]
self.X = self.data_lm[self.features_lm]
self.target = self.data_lm[self.target_name]
print ("END CLEANING")
# clean the test data -> first drop unused data -> make it "compliant" to the features of the dataset
def cleanDataTest(self, test_x, features = "self_features"):
print ("START TEST CLEANING")
print("TEST_X : ", test_x.shape)
print (test_x)
# convert categorical variables into numerical
if 'patsex' in test_x.columns and ("männlich" in test_x["patsex"].values or "weiblich" in test_x["patsex"].values):
test_x['patsex'] = test_x['patsex'].map({'männlich' : 1, 'weiblich' : 0})
if str(features) == "self_features":
list_of_features = self.features_lm
else :
list_of_features = features
# drop all the columns that are not present in the training dataset
for i in test_x.columns:
if i not in list_of_features:
test_x = test_x.drop(i, axis = 1)
# add columns that are not present in the test set
for i in list_of_features:
if i not in test_x.columns:
test_x[i] = np.nan
## REORDER the features
test_x = test_x[list_of_features]
print ("END TEST CLEANING")
print("DATA shape : ", self.data_lm.shape)
return test_x
## data -> it is the dataset we want to 'recover'
def imputeData(self, dataframe,imputation_strategy = 'knn', features = "self_features" ):
try:
if imputation_strategy == 'knn':
x_complete_a = fancyimpute.KNN(15).complete(dataframe)
## feel free to add other imputation methods
# ...
else : ## default case -> MICE impute method
mice = fancyimpute.MICE(n_imputations=100, impute_type='col', n_nearest_columns=5, init_fill_method = "mean")
x_complete_a = mice.complete(dataframe)
except:
x_complete_a = dataframe
print ("x_incomplete shape : ",x_complete_a.shape )
if str(features) == "self_features":
f = self.features_lm
else :
f = features
print ("FEATURESS : ",f.size, f )
return pd.DataFrame(x_complete_a, columns = f)
def recoverMissing(self, data = 'trainData', imputation_strategy = 'mice'):
print ("START RecoverMissing VALUES")
if str(data) == 'trainData':
x_incomplete = self.data_lm[self.features_lm]
else:
x_incomplete = data[self.features_lm]
#print (x_incomplete)
# create a united dataset -> this is possible if we clean the data first
if str(data) != 'trainData':
united_df = pd.concat([x_incomplete, self.X])
united_complete = self.imputeData(united_df, features = x_incomplete.columns)
x_complete = united_complete.iloc[:x_incomplete.shape[0], :x_incomplete.shape[1]]
#print ("united_complete shape : ",united_complete.shape )
else :
x_complete = self.imputeData(x_incomplete)
'''
try:
if imputation_strategy == 'knn':
x_complete_a = fancyimpute.KNN(15).complete(x_incomplete)
## feel free to add other imputation methods
# ...
else : ## default case -> MICE impute method
mice = fancyimpute.MICE(n_imputations=100, impute_type='col', n_nearest_columns=5)
x_complete_a = mice.complete(x_incomplete)
except:
x_complete_a = x_incomplete
x_complete = pd.DataFrame(x_complete_a, columns = self.features_lm)
'''
if str(data) == 'trainData':
self.X = x_complete
return x_complete
def balanceDataSet(self, data = "trainData",target_name = "AFclass", balance_strategy = 'SMOTE'):
if str(data) == "trainData":
X = self.X
y = self.data_lm[self.target_name].values
target_name = self.target_name
else :
X = data[data.columns[data.columns != target_name]]
y = data[target_name].values
y_new = pd.DataFrame(y)
y_new = y_new.rename(columns = {y_new.columns[0] : target_name})
Data_complete = pd.concat([X,y_new], axis = 1)
if balance_strategy == 'ADASYN':
try:
print ("Try ADASYN")
X_resampled, y_resampled = ADASYN().fit_sample(X, y_new)
except:
print ("ADASYN FAILED -> used SMOTE")
X_resampled, y_resampled = SMOTE().fit_sample(X, y_new)
## feel free to add other balancing strategies
# ...
else : # default SMOTE
X_resampled, y_resampled = SMOTE().fit_sample(X, y_new)
X_final = pd.DataFrame(X_resampled, columns = self.features_lm)
Y_final = pd.DataFrame(y_resampled)
Y_final = Y_final.rename(columns = {Y_final.columns[0] : self.target_name})
Data_final = pd.concat([X_final,Y_final], axis = 1)
if str(data) == "trainData" :
self.X = X_final
self.target = Y_final
self.cv_x = X_final
self.cv_y = Y_final
self.data_lm = Data_final
return Data_final
# clean the data, recover missing values, balance the dataset
def fixDataset(self, imputation_strategy = 'mice', balance_strategy = 'SMOTE'):
print ("begin fixing dataset")
self.cleanData()
check1 = self.data_lm.copy()
self.recoverMissing(imputation_strategy = imputation_strategy)
check2 = self.X.copy()
self.balanceDataSet(balance_strategy = balance_strategy)
check3 = self.data_lm.copy()
print ("end fixing dataset")
return (check1, check2, check3)
# train the selected model
def train(self):
## use all the available data -> we assume we already know the best model -> otherwise use the cross-validation function to choose one
print ("begin training")
self.selected_model.fit(self.X, self.target)
print ("end training")
# predict using the trained model. x_test is a vector
# return the prediction for all values in the vector x_test, and all the other useful data (according to the selected_model used to predict)
def predict(self, test):
original_x = test
train = test.copy()
x_test = self.cleanDataTest(test_x = train)
x_test = self.recoverMissing(data = x_test)
result = x_test.copy()
prediction = self.selected_model.predict(x_test)
result['prediction'] = prediction
#decision_path = None
#features_importance = None
if callable(hasattr(self.selected_model, "predict_proba" )):
predict_proba_df = pd.DataFrame(self.selected_model.predict_proba(x_test), columns=self.selected_model.classes_)
result['predict_proba_zero'] = predict_proba_df[predict_proba_df.columns[0]]
result['predict_proba_uno'] = predict_proba_df[predict_proba_df.columns[1]]
'''
if callable(hasattr(self.selected_model, "predict_log_proba" )):
predict_log_proba_df = pd.DataFrame(self.selected_model.predict_log_proba(x_test), columns=self.selected_model.classes_)
result['predict_log_proba_zero'] = predict_log_proba_df[predict_log_proba_df.columns[0]]
result['predict_log_proba_uno'] = predict_log_proba_df[predict_log_proba_df.columns[1]]
'''
return pd.DataFrame(result)
def splitDataframe (self, data, step = 20):
splits = []
i = 0
n = data.shape[0]
if n > step:
while i < n:
l = i + step
temp = data.iloc[i: l, :]
splits.append(temp)
i += step
else:
splits.append(data)
return splits
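# --- Hypothetical usage sketch (not part of the original class) ---
# Assuming a pandas DataFrame `df` that contains an 'AFclass' target column:
#   lm = LocalModel(data=df, target_name='AFclass')
#   cv_table = lm.chooseModel_with_crossValidation_and_train()
#   predictions = lm.predict(new_patients_df)  # new_patients_df is an assumed name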
| 59.49 | 605 | 0.755455 | 4,357 | 29,745 | 4.938949 | 0.112233 | 0.051118 | 0.034156 | 0.042939 | 0.573214 | 0.529904 | 0.474976 | 0.449138 | 0.425717 | 0.414192 | 0 | 0.021863 | 0.120424 | 29,745 | 499 | 606 | 59.609218 | 0.800634 | 0.236779 | 0 | 0.20915 | 0 | 0 | 0.100894 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.062092 | false | 0 | 0.127451 | 0 | 0.24183 | 0.124183 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
446b4021141854c01eb8346f6d3902a45b8ff171 | 4,455 | py | Python | get_weibo.py | SUIBE-Blockchain/Data-Crawler-Practice | 46f4b98f05923ab534e28e51456c87efc33dbb8d | [
"Apache-2.0"
] | 1 | 2021-10-05T05:52:39.000Z | 2021-10-05T05:52:39.000Z | get_weibo.py | SUIBE-Blockchain/Data-Crawler-Practice | 46f4b98f05923ab534e28e51456c87efc33dbb8d | [
"Apache-2.0"
] | null | null | null | get_weibo.py | SUIBE-Blockchain/Data-Crawler-Practice | 46f4b98f05923ab534e28e51456c87efc33dbb8d | [
"Apache-2.0"
] | 7 | 2020-08-09T09:52:15.000Z | 2020-08-16T08:04:02.000Z | import sys
from bs4 import BeautifulSoup
import re
import urllib.request, urllib.error
import xlwt # for Excel operations
import time
def main():
Baseurl = 'https://weibo.cn/thepapernewsapp?page='
# 1. Crawl the pages
datalist = getdata(Baseurl)
#IDlence = (len(datalist) - 1)
savepath = "澎湃新闻.xls"
# 3. Save the data
saveData(datalist, savepath)
# Create regular expression objects that describe the rules (string patterns)
ID = re.compile(r'M_(.*?)')
# ID of each Weibo post
findID = re.compile(r'M_(.*?)">')
# Weibo title
findTitle = re.compile(r'">#(.*?)#</a>|【(.*?)】')
# Title link
findlink = re.compile(r'<a href=(.*?)>#')
# Number of comments
findComment = re.compile(r'>评论(.*?)<')
# Number of likes
findlike = re.compile(r'>赞(.*?)<')
# Number of reposts
#findForward = re.compile(r'>转发[(\d*)]<') # unresolved: this pattern could not match the repost count
findForward = re.compile(r'>转发(.*?)<')
# Replace links
#replacelink = re.compile(r'[a-zA-z]+://[^\s]*')
# Check whether the given string contains only Chinese characters
def check_contain_chinese(check_str):
flag = True
for ch in check_str :
if u'\u4e00' >= ch or ch >= u'\u9fff':
flag = False
return flag
# Crawl the pages
def getdata(Baseurl):
datalist = []
#ID = re.compile(r'M_(.*?)')
#findID = re.compile(r'M_(.*?)">')
for i in range(1000, 1350): # fetch the page content for each page in this range
url = Baseurl + str(i)
time.sleep(2)
html = askURL(url) # store the fetched page source
# print(html)
# Parse the data item by item
soup = BeautifulSoup(html, "html.parser")
for item in soup.find_all('div', class_="c", id=ID): # find matching strings and build a list; extra matches used to appear here (now fixed)
#print(item)
data = [] # store this record
item = str(item)
IDnumber = re.findall(findID, item)
if len(IDnumber) != 0:
IDnumber = IDnumber[0]
data.append(IDnumber)
else:
data.append("") # 留空
title = re.findall(findTitle, item)
if len(title) != 0:
if check_contain_chinese(str(title)) == True:
title = title[0]
data.append(title)
else:
title = str(title[0])
title = re.sub('<a href="https://weibo.cn/(.*?)>#',' ',title)
data.append(title)
else:
data.append("") # 留空
link = re.findall(findlink, item)
if len(link) != 0:
link = link[0]
data.append(link)
else:
data.append("") # 留空
comment = re.findall(findComment, item)
if len(comment) != 0:
comment = comment[0]
#comment = re.sub('"["|"]"','',comment)
data.append(comment)
else:
data.append("") # 留空
like = re.findall(findlike, item)
if len(like) != 0:
like = like[0]
data.append(like)
else:
data.append("") # 留空
forword = re.findall(findForward, item)
if len(forword) != 0:
forword = forword[0]
data.append(forword)
else:
data.append("") # 留空
datalist.append(data) # append the processed record to datalist
X = len(datalist)
datalist.append(X)
print(datalist)
print(datalist[-1])
return datalist
# Fetch the page content of a given URL
def askURL(url):
head = {
"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36",
"cookie": "_T_WM=35741482901; SCF=AmvmEtdgbVNBLmY81b6nBUc87xPm36q8vSZjcyCgj68x-aG008yAR0L9y3r7Rr1F_50rLdr2o2BSdFmsblIMMIU.; SUB=_2A25yVyFBDeRhGeNM71oS8ifKzz6IHXVRuE8JrDV6PUJbktANLVmskW1NTgP5MCDkG4SdvMaBAF7itwAoudOB9Xw9; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9WWT3P_xWuc.x6bc01FVbD_G5JpX5KzhUgL.Fo-EShn0eo.cShz2dJLoIEBLxKMLBKzLBKMLxKML12-L1h.LxKnL12qLBozLxKML1hBLBoqt; SUHB=08J7XCqOJpiTJ1; ALF=1601887761"
}
request = urllib.request.Request(url, headers=head)
html = ""
try:
response = urllib.request.urlopen(request)
html = response.read().decode("utf-8")
# print(html)
except urllib.error.URLError as e:
if hasattr(e, "code"):
print(e.code)
if hasattr(e, "reason"):
print(e.reason)
return html
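# --- Hypothetical sketch: saveData() is called in main() but never defined in
# this file; a minimal xlwt-based implementation might look like this. The
# sheet name and column headers are assumptions. ---
def saveData(datalist, savepath):
    book = xlwt.Workbook(encoding="utf-8")
    sheet = book.add_sheet("weibo", cell_overwrite_ok=True)
    headers = ("ID", "title", "link", "comments", "likes", "reposts")
    for col, header in enumerate(headers):
        sheet.write(0, col, header)
    # getdata() appends the record count as the final element, so skip it here
    for r, record in enumerate(datalist[:-1]):
        for c, value in enumerate(record):
            sheet.write(r + 1, c, str(value))
    book.save(savepath)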
if __name__ == '__main__': # entry point when the program is run directly
main()
print("爬取完毕!")
| 28.928571 | 412 | 0.525701 | 459 | 4,455 | 5.045752 | 0.40305 | 0.056131 | 0.047496 | 0.041451 | 0.045769 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050151 | 0.333109 | 4,455 | 153 | 413 | 29.117647 | 0.729384 | 0.111336 | 0 | 0.15 | 0 | 0.02 | 0.193471 | 0.094214 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.06 | 0 | 0.13 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
446d377b81da3c67728272ccd4cb25650010c8a7 | 549 | py | Python | unittests/check_external_packages.py | davidharvey1986/rrg | 26b4658f14279af21af1a61d57e9936daf315a71 | [
"MIT"
] | 2 | 2019-11-18T12:51:09.000Z | 2019-12-11T03:13:51.000Z | unittests/check_external_packages.py | davidharvey1986/rrg | 26b4658f14279af21af1a61d57e9936daf315a71 | [
"MIT"
] | 5 | 2017-06-09T10:06:27.000Z | 2019-07-19T11:28:18.000Z | unittests/check_external_packages.py | davidharvey1986/rrg | 26b4658f14279af21af1a61d57e9936daf315a71 | [
"MIT"
] | 2 | 2017-07-19T15:48:33.000Z | 2017-08-09T16:07:20.000Z | import subprocess
def check_external_packages():
try:
stilts_path = subprocess.check_output(['which','stilts.sh'])
except:
raise ValueError('Cannot find STILTS please install and ensure it is in the shell path')
try:
stilts_path = subprocess.check_output(['which','sex'])
except:
raise ImportError('Cannot find SExtractir please install and ensure it can be called with "sex"')
try:
import pickle as pkl
except:
raise ImportError('Cannot find pickle, plesae install')
| 27.45 | 105 | 0.668488 | 68 | 549 | 5.308824 | 0.544118 | 0.091413 | 0.072022 | 0.127424 | 0.526316 | 0.216066 | 0.216066 | 0 | 0 | 0 | 0 | 0 | 0.245902 | 549 | 19 | 106 | 28.894737 | 0.871981 | 0 | 0 | 0.428571 | 0 | 0 | 0.364299 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.285714 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
446d5bb17dd8a928ca7451126fb10c9f182c9616 | 6,084 | py | Python | 2020/day16/day16ab.py | jeremy-frank/advent-of-code | 6a9dd284f0e67fea694548f1545402c579ef3f08 | [
"MIT"
] | null | null | null | 2020/day16/day16ab.py | jeremy-frank/advent-of-code | 6a9dd284f0e67fea694548f1545402c579ef3f08 | [
"MIT"
] | null | null | null | 2020/day16/day16ab.py | jeremy-frank/advent-of-code | 6a9dd284f0e67fea694548f1545402c579ef3f08 | [
"MIT"
] | null | null | null | """
day16ab - https://adventofcode.com/2020/day/16
--- Day 16: Ticket Translation ---
* Part 1
Three input files:
the rules for ticket fields
the numbers on your ticket
the numbers on other nearby tickets
The rules for ticket fields specify a list of fields that exist somewhere on the ticket
and the valid ranges of values for each field
Start by determining which tickets are completely invalid;
these are tickets that contain values which aren't valid for any field
Example: 4 + 55 + 12 = 71
Adding together all of the invalid values produces your ticket scanning error rate: 4 + 55 + 12 = 71
Consider the validity of the nearby tickets you scanned.
What is your ticket scanning error rate?
24980
* Part 2
Using the valid ranges for each field, determine what order the fields appear on the tickets.
The order is consistent between all tickets: if seat is the third field, it is the third field
on every ticket, including your ticket.
Once you work out which field is which, look for the six fields on your ticket that start with
the word departure. What do you get if you multiply those six values together?
0: arrival track
1: duration
2: departure time
3: departure station
4: class
5: type
6: departure date
7: wagon
8: arrival platform
9: price
10: arrival location
11: row
12: departure platform
13: zone
14: arrival station
15: departure location
16: train
17: route
18: departure track
19: seat
809376774329
"""
def load_ticket():
    ticket = []
    #datafile = 'input-day16-ticket-example'
    datafile = 'input-day16-ticket'
    with open(datafile, 'r') as input:
        for line in input:
            bits = line.strip().split(",")
            for x in bits:
                ticket.append(int(x))
    return ticket

def load_rules():
    # part1 - single list of all possible values
    rule_full_range = []
    # part2 - dictionary holding the individual rules
    rules = {}
    #datafile = 'input-day16-rules-example'
    datafile = 'input-day16-rules'
    with open(datafile, 'r') as input:
        for line in input:
            line = line.strip().replace(":", ",").replace(" or", ",")
            items = line.split(",")
            rule_range = []
            for numrun in [items[1], items[2]]:
                nums = numrun.split("-")
                for x in range(int(nums[0]), int(nums[1]) + 1):
                    rule_range.append(x)
                    rule_full_range.append(x)
            rules[items[0]] = rule_range
    rule_full_range.sort()
    return rules, set(rule_full_range)

def load_nearby_tickets():
    nearby_tickets = []
    #datafile = 'input-day16-nearby-tickets-example'
    datafile = 'input-day16-nearby-tickets'
    with open(datafile, 'r') as input:
        for line in input:
            bits = line.strip().split(",")
            nearby_tickets.append([int(x) for x in bits])
    return nearby_tickets

def part1(rule_range, nearby_tickets):
    invalid_values = []
    for ticket in nearby_tickets:
        for val in ticket:
            if val not in rule_range:
                invalid_values.append(val)
    return sum(invalid_values)

def validate_tickets(rule_range, nearby_tickets):
    valid_tickets = []
    for ticket in nearby_tickets:
        valid = True
        for val in ticket:
            if val not in rule_range:
                valid = False
        if valid:
            valid_tickets.append(ticket)
        #else:
        #    print(f"Invalid ticket: {ticket}")
    return valid_tickets

def process_tickets(rules, tickets, my_ticket):
    # for each position, find all rules that could match it
    pos_matches = {}
    for pos in range(len(tickets[0])):
        pos_matches[pos] = []
        for rule in rules:
            rule_range = rules[rule]
            rule_match = True
            for ticket in tickets:
                if ticket[pos] not in rule_range:
                    rule_match = False
                    break
            if rule_match:
                print(f"{pos} {rule}")
                pos_matches[pos].append(rule)
    print(f"\n\npos_matches: {pos_matches}\n\n")

    # narrow it down - figure out which position maps to what rule
    pos_solution = {}
    solved_rule = []
    while len(pos_solution) < len(rules):
        new_pos_matches = {}
        for pos in pos_matches:
            if len(pos_matches[pos]) == 1:
                # found a solution! (add to pos_solution and not to new_pos_matches)
                pos_solution[pos] = pos_matches[pos][0]
                solved_rule.append(pos_matches[pos][0])
                print(f"updated pos_solution: {pos_solution}")
            elif len(pos_matches[pos]) == 0:
                # shouldn't ever happen
                print("ERROR")
            else:
                # no solution yet, so add anything that isn't yet solved to new_pos_matches
                new_pos_matches[pos] = []
                for item in pos_matches[pos]:
                    if item not in solved_rule:
                        new_pos_matches[pos].append(item)
        pos_matches = new_pos_matches

    # print out the full position:rule mapping
    print("\n")
    for x in range(len(pos_solution)):
        print(f"{x}: {pos_solution[x]}")

    # calculate the solution
    print("\n")
    answer = 1
    for pos in pos_solution:
        if "departure" in pos_solution[pos]:
            print(f"{pos} - {pos_solution[pos]}, ticket value {my_ticket[pos]}")
            answer *= my_ticket[pos]
    return answer

if __name__ == '__main__':
    my_ticket = load_ticket()
    print(f"my_ticket: {my_ticket} \n")

    rules, rule_range = load_rules()
    print(f"rules: {rules} \n")
    print(f"rule_range: {rule_range} \n")

    nearby_tickets = load_nearby_tickets()
    print(f"nearby_tickets: {nearby_tickets} \n")

    results1 = part1(rule_range, nearby_tickets)
    valid_tickets = validate_tickets(rule_range, nearby_tickets)
    results2 = process_tickets(rules, valid_tickets, my_ticket)

    print(f"\nPart 1 - {results1}")
    print(f"Part 2 - {results2}\n")
| 28.971429 | 100 | 0.619001 | 825 | 6,084 | 4.431515 | 0.254545 | 0.064004 | 0.035558 | 0.02407 | 0.180525 | 0.091904 | 0.059081 | 0.059081 | 0.059081 | 0.059081 | 0 | 0.024666 | 0.286982 | 6,084 | 209 | 101 | 29.110048 | 0.818119 | 0.335141 | 0 | 0.150943 | 0 | 0 | 0.101417 | 0.006463 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056604 | false | 0 | 0 | 0 | 0.113208 | 0.132075 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
446f6522d2f68fb06ab1adc22f7cb83ad19c1bfc | 712 | py | Python | docs/blog/2017/0610.py | CylonOven/blog | ddc560edb0445f950b39d441b569ef2258abc2d6 | [
"BSD-2-Clause"
] | null | null | null | docs/blog/2017/0610.py | CylonOven/blog | ddc560edb0445f950b39d441b569ef2258abc2d6 | [
"BSD-2-Clause"
] | null | null | null | docs/blog/2017/0610.py | CylonOven/blog | ddc560edb0445f950b39d441b569ef2258abc2d6 | [
"BSD-2-Clause"
] | null | null | null | import random
random.seed("lel")
class table(object):
    # map cell values (1/0) to their display characters
    changes = {
        1: "@",
        0: " "
    }

    def __init__(self, lists):
        self.data = lists

    @classmethod
    def gen_matix(cls, x, y):
        return cls(
            [
                [random.choice((1,0)) for y in range(y)] for x in range(x)
            ])

    def __str__(self):
        output = []
        width = len(self.data[0])
        output.append("+" + "-"* width + "+\n")
        for row in self.data:
            output.append("|")
            for e in row:
                output.append(self.changes[e])
            output.append("|\n")
        output.append("+" + "-" * width + "+\n")
        # join the rows into the final string so print() can display the table
        return "".join(output)

t = table.gen_matix(10,10)
print(t) | 20.342857 | 66 | 0.463483 | 85 | 712 | 3.764706 | 0.435294 | 0.1875 | 0.10625 | 0.1125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019824 | 0.36236 | 712 | 35 | 67 | 20.342857 | 0.685022 | 0 | 0 | 0.074074 | 0 | 0 | 0.029453 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.037037 | 0.037037 | 0.259259 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
447144dab05f58bb3666adc6c76ca187a3b6abdf | 2,116 | py | Python | Firefly.py | iglennrogers/swarm_pyside | 8930f668473c5f9b399c29661295be47f31b9164 | [
"BSD-2-Clause"
] | null | null | null | Firefly.py | iglennrogers/swarm_pyside | 8930f668473c5f9b399c29661295be47f31b9164 | [
"BSD-2-Clause"
] | null | null | null | Firefly.py | iglennrogers/swarm_pyside | 8930f668473c5f9b399c29661295be47f31b9164 | [
"BSD-2-Clause"
] | null | null | null | import PySide.QtGui as QtGui
import PySide.QtCore as QtCore
import GravitionalObject as go
class Firefly(go.GravitationalObject):
    def __init__(self):
        super(Firefly, self).__init__()
        self._acc = QtCore.QPointF()
        self._vel = QtCore.QPointF()
        self._pos = QtCore.QPointF()

    #
    def init(self, pos):
        super(Firefly, self).init(pos)
        self._pos = self.pos()

    #
    def set_velocity(self, vel):
        self._vel = vel

    #
    def adjust_acceleration(self, acc):
        self._acc += acc

    #
    def update_pos(self, rc):
        oldpos = QtCore.QPointF(self._pos)
        self._vel += self._acc
        if (self._pos.x() < rc.left()):
            self._vel.setX(abs(self._vel.x()))
        elif (self._pos.x() > rc.right()):
            self._vel.setX(-abs(self._vel.x()))
        if (self._pos.y() < rc.top()):
            self._vel.setY(abs(self._vel.y()))
        elif (self._pos.y() > rc.bottom()):
            self._vel.setY(-abs(self._vel.y()))
        self._pos += self._vel
        self.setPos(self._pos)
        self._acc.setX(0)
        self._acc.setY(0)
        #self.scene().addLine(oldpos.x(), oldpos.y(), self._pos.x(), self._pos.y(), QtGui.QPen(QtCore.Qt.gray))

    #
    def setup_paint(self, status):
        self.setBrush(QtCore.Qt.yellow)
        self.setPen(QtGui.QPen(QtCore.Qt.green))

    #
    def acceleration_between(self, other):
        self.change_status(other)
        gap = other.scenePos() - self.scenePos()
        dist2 = gap.x()*gap.x() + gap.y()*gap.y()
        if self.status == go.Status.ACTIVE:
            acc = -go.GravitationalObject.s_grav_constant/dist2
            return QtCore.QPointF(acc*gap)
        elif (self.status == go.Status.ACTIVE2):
            acc = 2*go.GravitationalObject.s_grav_constant/dist2
            return QtCore.QPointF(acc*gap)
        else:
            return QtCore.QPointF()

    #
    def ball_radius(self):
        return 10

    def outer_radius(self):
        return 10

    def inner_radius(self):
        return 2
| 32.553846 | 112 | 0.557183 | 265 | 2,116 | 4.264151 | 0.260377 | 0.080531 | 0.038938 | 0.035398 | 0.260177 | 0.19115 | 0.19115 | 0.113274 | 0.113274 | 0.113274 | 0 | 0.008075 | 0.297732 | 2,116 | 64 | 113 | 33.0625 | 0.752355 | 0.048204 | 0 | 0.038462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.192308 | false | 0 | 0.057692 | 0.057692 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4475eccd120177058a082a3f92db22d0104dd277 | 4,226 | py | Python | sasoptpy/abstract/parameter.py | jld23/sasoptpy | f96911f04d6c0c01fce902f1f995935583df69a8 | [
"Apache-2.0"
] | 20 | 2017-12-22T18:29:55.000Z | 2021-09-12T15:04:39.000Z | sasoptpy/abstract/parameter.py | jld23/sasoptpy | f96911f04d6c0c01fce902f1f995935583df69a8 | [
"Apache-2.0"
] | 9 | 2019-01-24T14:52:33.000Z | 2022-03-16T14:14:35.000Z | sasoptpy/abstract/parameter.py | jld23/sasoptpy | f96911f04d6c0c01fce902f1f995935583df69a8 | [
"Apache-2.0"
] | 12 | 2017-12-22T19:37:16.000Z | 2021-07-30T21:04:03.000Z | #!/usr/bin/env python
# encoding: utf-8
#
# Copyright SAS Institute
#
# Licensed under the Apache License, Version 2.0 (the License);
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sasoptpy
from sasoptpy.core import Expression
from sasoptpy.util.package_utils import _to_sas_string
class Parameter(Expression):
    """
    Represents a problem input parameter

    Parameters
    ----------
    name : string
        Name of the parameter
    ptype : string, optional
        Type of the parameter. Possible values are `sasoptpy.STR` and
        `sasoptpy.NUM`
    value : float, optional
        Value of the parameter
    init : float, optional
        Initial value of the parameter

    Examples
    --------
    >>> with so.Workspace('w') as w:
    ...     p = so.Parameter(name='p', init=3)
    ...     p.set_value(5)
    ...
    <sasoptpy.abstract.statement.assignment.Assignment object at 0x7f7952e9bb38>
    >>> print(so.to_optmodel(w))
    proc optmodel;
    num p init 3;
    p = 5;
    quit;
    """

    @sasoptpy.class_containable
    def __init__(self, name, ptype=None, value=None, init=None, **kwargs):
        super().__init__(name=name)
        if name is None:
            name = sasoptpy.util.get_next_name()
        if ptype is None:
            if value is not None and isinstance(value, str):
                ptype = sasoptpy.STR
            elif init is not None and isinstance(init, str):
                ptype = sasoptpy.STR
            else:
                ptype = sasoptpy.NUM
        self._type = ptype
        self._fix_value = value
        self._init = init
        self._parent = None
        self._initialize_self_coef()
        self._abstract = True

    def set_parent(self, parent, key):
        self._parent = parent
        self._key = key

    def _initialize_self_coef(self):
        self.set_member(key=self._name, ref=self, val=1)

    def set_init(self, value):
        self._init = value

    @sasoptpy.containable
    def set_value(self, value):
        self._fix_value = value

    def get_value(self):
        return self._fix_value

    def _expr(self):
        if self._parent:
            return self._parent.get_element_name(self._key)
        return self.get_name()

    def _defn(self):
        if self._parent:
            return None
        else:
            s = '{} {}'.format(self._type, self.get_name())
            if self._init:
                #s += ' init {}'.format(_to_python_string(self._init))
                s += ' init {}'.format(_to_sas_string(self._init))
            elif self._fix_value is not None:
                #s += ' = {}'.format(_to_python_string(self._fix_value))
                s += ' = {}'.format(_to_sas_string(self._fix_value))
            s += ';'
            return s

    def __str__(self):
        return self._name


class ParameterValue(Expression):
    """
    Represents a single value of a parameter

    Parameters
    ----------
    param : Parameter
        Parameter that the value belongs to
    key : tuple, optional
        Key of the parameter value in the multi-index parameter
    prefix : string
        Prefix of the parameter
    suffix : string
        Suffix of the parameter, such as ``.lb`` and ``.ub``

    Notes
    -----
    - Parameter values are mainly used in abstract expressions
    """

    def __init__(self, param, key=None):
        super().__init__()
        self._param = param
        tkey = sasoptpy.util.pack_to_tuple(key)
        self._key = tkey
        self._abstract = True
        self.set_member(key=str(self), ref=self, val=1)

    def __str__(self):
        return \
            sasoptpy.util.package_utils._insert_brackets(
                self._param.get_name(), self._key)

    def _expr(self):
        return str(self)
| 27.986755 | 80 | 0.604827 | 530 | 4,226 | 4.626415 | 0.307547 | 0.016313 | 0.039967 | 0.013051 | 0.100326 | 0.017129 | 0 | 0 | 0 | 0 | 0 | 0.006707 | 0.294368 | 4,226 | 150 | 81 | 28.173333 | 0.81556 | 0.399195 | 0 | 0.215385 | 0 | 0 | 0.008102 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.184615 | false | 0 | 0.046154 | 0.061538 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
447be6e4fc567f716395f5061f29158ceb4b650e | 1,034 | py | Python | day-03/automatic-pizza-order-program.py | swokyisalreadytaken/100-Days-of-Code-in-Python | 269c99a481b94248454114c051fafa8f52697332 | [
"Unlicense"
] | 1 | 2021-08-13T23:46:20.000Z | 2021-08-13T23:46:20.000Z | day-03/automatic-pizza-order-program.py | swokyisalreadytaken/100-Days-of-Code-in-Python | 269c99a481b94248454114c051fafa8f52697332 | [
"Unlicense"
] | null | null | null | day-03/automatic-pizza-order-program.py | swokyisalreadytaken/100-Days-of-Code-in-Python | 269c99a481b94248454114c051fafa8f52697332 | [
"Unlicense"
] | null | null | null | # build an automatic pizza order program.
# ask the user the size and extra ingredients inputs
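# Worked example (illustration only, not part of the original exercise):
# choosing size "M" starts the bill at 20, pepperoni "Y" adds 3 and extra
# cheese "Y" adds 1, so the final bill printed below would be 24€.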
print("Welcome to Python Pizza Deliveries!")
size = input("What size pizza do you want? S, M, or L \n")
add_pepperoni = input("Do you want pepperoni? Y or N \n")
extra_cheese = input("Do you want extra cheese? Y or N \n")
# yes or no for pepperoni and assign value - small pizza
if size == "S":
bill = 15
if add_pepperoni == "N":
bill = 15
elif add_pepperoni == "Y":
bill += 2
# yes or no for pepperoni and assign value - medium pizza
elif size == "M":
bill = 20
if add_pepperoni == "N":
bill = 20
elif add_pepperoni == "Y":
bill += 3
# yes or no for pepperoni and assign value - large pizza
elif size == "L":
bill = 25
if add_pepperoni == "N":
bill = 25
elif add_pepperoni == "Y":
bill += 3
# yes or no for cheese, assign value and print final bill
if extra_cheese == "N":
print(f"Your final bill is: {bill}€.")
elif extra_cheese == "Y":
bill += 1
print(f"Your final bill is: {bill}€.") | 27.945946 | 59 | 0.647002 | 175 | 1,034 | 3.777143 | 0.314286 | 0.12708 | 0.04236 | 0.060514 | 0.428139 | 0.310136 | 0.310136 | 0.310136 | 0.096823 | 0.096823 | 0 | 0.020126 | 0.231141 | 1,034 | 37 | 60 | 27.945946 | 0.808805 | 0.301741 | 0 | 0.592593 | 0 | 0 | 0.294693 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44809a2040f353f40337b79e36aebadd2135b1f0 | 2,091 | py | Python | WindowOperators.py | zatricion/Streams | d2f688e230b4cb325d5f76886a7499d132591bd4 | [
"MIT"
] | null | null | null | WindowOperators.py | zatricion/Streams | d2f688e230b4cb325d5f76886a7499d132591bd4 | [
"MIT"
] | null | null | null | WindowOperators.py | zatricion/Streams | d2f688e230b4cb325d5f76886a7499d132591bd4 | [
"MIT"
] | null | null | null | from Agent import *
from Stream import *
from MergeSplitOpStructures import *
def window_many_to_many(f, in_streams, num_out_streams, window_size, step_size, state=None):
    def transition(in_lists, state=None):
        range_out = range((num_out_streams))
        range_in = range(len(in_streams))
        output_lists = [ [] for _ in range_out ]
        window_starts = [in_list.start for in_list in in_lists]
        smallest_list_length = min(v.stop - v.start for v in in_lists)
        if window_size > smallest_list_length:
            #in_lists_start_values = [in_list.start for in_list in in_lists]
            return (output_lists, state, [in_list.start for in_list in in_lists])
        # floor division so range() receives an integer number of steps
        num_steps = 1 + (smallest_list_length - window_size)//step_size
        for i in range(num_steps):
            windows = [in_lists[j].list[window_starts[j]:window_starts[j]+window_size] \
                       for j in range_in]
            increments = f(windows) if state is None else f(windows, state)
            for k in range_out: output_lists[k].append(increments[k])
            # advance every window start by one step (kept as a list so it stays subscriptable)
            window_starts = [v + step_size for v in window_starts]
        in_lists_start_values = [in_list.start + num_steps*step_size for in_list in in_lists]
        return (output_lists, state, in_lists_start_values)

    # Create agent
    out_streams = [Stream() for v in range(num_out_streams)]
    Agent(in_streams, out_streams, transition, state)
    return out_streams

def window_merge(f, in_streams, window_size, step_size, state=None):
    return merge_structure(window_many_to_many, f, in_streams,
                           window_size, step_size, state=None)

def window_split(f, in_stream, num_out_streams,
                 window_size, step_size, state=None):
    return split_structure(window_many_to_many, f, in_stream, num_out_streams,
                           window_size, step_size, state=None)

def window_op(f, in_stream, window_size, step_size, state=None):
    return op_structure(window_many_to_many, f, in_stream,
                        window_size, step_size, state=None)
| 44.489362 | 93 | 0.679101 | 309 | 2,091 | 4.245955 | 0.168285 | 0.07622 | 0.085366 | 0.109756 | 0.474085 | 0.474085 | 0.474085 | 0.394817 | 0.347561 | 0.17378 | 0 | 0.000628 | 0.238164 | 2,091 | 46 | 94 | 45.456522 | 0.822976 | 0.036346 | 0 | 0.088235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.147059 | false | 0 | 0.088235 | 0.088235 | 0.411765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4482d252ebee04a4fe3a2d16129164bc0d1de4ec | 623 | py | Python | frontend_command.py | privacyrespected/Alpha | 60ffdb73b334e37a87be119ce881d084aef8d7a1 | [
"Apache-2.0"
] | null | null | null | frontend_command.py | privacyrespected/Alpha | 60ffdb73b334e37a87be119ce881d084aef8d7a1 | [
"Apache-2.0"
] | null | null | null | frontend_command.py | privacyrespected/Alpha | 60ffdb73b334e37a87be119ce881d084aef8d7a1 | [
"Apache-2.0"
] | null | null | null | # this file processes commands entered through the front end
from modules.sense import *
from modules.mainsystem import *
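# Usage sketch (illustrative only; assumes speak(), flushdns() and kill() come
# from the star-imported modules above):
#   Pcommand("speak hello")  -> strips the "speak" keyword and calls speak("hello")
#   Pcommand("dns flush")    -> strips the "dns" keyword and calls flushdns()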
def Pcommand(command):
    command = command.lower()
    if command.startswith("speak"):  # speak function to debug
        command = command.replace("speak", "", 1).strip()  # keep only the text to be spoken
        speak(command)
    elif command.startswith("shutdown"):
        speak("shutdown initiated")
    elif command.startswith("dns"):
        command = command.replace("dns", "", 1).strip()  # keep only the sub-command
        if command.startswith("flush"):
            flushdns()
        else:
            speak("No param specified")
    elif command.startswith("kill"):
kill() | 34.611111 | 60 | 0.64366 | 67 | 623 | 5.985075 | 0.552239 | 0.139651 | 0.094763 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.23435 | 623 | 18 | 61 | 34.611111 | 0.840671 | 0.128411 | 0 | 0 | 0 | 0 | 0.127306 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.117647 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4483b0857e8241cf5117ec7285f536dc85720d7e | 2,177 | py | Python | swamp/mr/tests/test_mrjob.py | rigdenlab/SWAMP | 3e93ab27f4acf0124f7cb2d78a151cc3352b9c6e | [
"BSD-3-Clause"
] | 2 | 2020-02-15T11:06:34.000Z | 2020-04-10T08:48:49.000Z | swamp/mr/tests/test_mrjob.py | rigdenlab/SWAMP | 3e93ab27f4acf0124f7cb2d78a151cc3352b9c6e | [
"BSD-3-Clause"
] | 15 | 2020-02-04T10:56:07.000Z | 2021-02-12T09:11:03.000Z | swamp/mr/tests/test_mrjob.py | rigdenlab/SWAMP | 3e93ab27f4acf0124f7cb2d78a151cc3352b9c6e | [
"BSD-3-Clause"
] | 4 | 2020-02-04T13:25:09.000Z | 2022-03-23T13:44:17.000Z | import os
import dill
import unittest
import collections
from pyjob import Script
from swamp.utils import remove
from swamp.mr.mrjob import MrJob
RESULTS = collections.namedtuple('results', ['results'])
WORKDIR = os.path.join(os.environ['CCP4_SCR'], 'test_workdir')
class MrJobTestCase(unittest.TestCase):

    def test_1(self):
        python_script = """cd %s
/empty/path/python << EOF
from swamp.mr import MrRun
mr_run = MrRun(id='test', workdir='%s', target_fa='/empty/path/target.fasta', target_mtz='/empty/path/target.mtz', extend_solution=False)
mr_run.phased_mtz = "/empty/path/phases.mtz"
mr_run.add_searchmodel(arg1="arg1", arg2="arg2", arg3="3")
if not mr_run.error:
    mr_run.run()
    mr_run.create_result_table_outfile()
    mr_run.store_pickle()
EOF
""" % (WORKDIR, WORKDIR)

        job = MrJob(id='test', workdir=WORKDIR, python_interpreter='/empty/path/python')
        self.assertTrue(os.path.isdir(WORKDIR))
        self.addCleanup(remove, WORKDIR)
        job.add_searchmodel(arg1='arg1', arg2='arg2', arg3=3)
        job.target_fa = '/empty/path/target.fasta'
        job.target_mtz = '/empty/path/target.mtz'
        job.phased_mtz = '/empty/path/phases.mtz'
        self.assertEqual(python_script, job.python_script)
        self.assertIsInstance(job.script, Script)
        self.assertEqual(python_script, job.script[0])

    def test_2(self):
        pickle_fname = os.path.join(WORKDIR, "results.pckl")
        job = MrJob(id='test', workdir=WORKDIR, python_interpreter='/empty/path/python')
        results = RESULTS(
            results=['SEARCH', 'RUN', 'LLG', 'TFZ', 'local_CC', 'overall_CC', 'rfree', 'rfactor', 'local_CC',
                     'overall_CC', 'cc', 'acl', 'is_extended', 'solution'])
        with open(pickle_fname, 'wb') as fhandle:
            dill.dump(results, fhandle)
        self.addCleanup(remove, pickle_fname)
        self.assertListEqual(results.results, job.results)
        self.addCleanup(remove, pickle_fname)

    def test_3(self):
        job = MrJob(id='test', workdir=WORKDIR, python_interpreter='/empty/path/python')
        with self.assertRaises(TypeError):
            job.parent_array = 'dummy_array'
| 38.875 | 137 | 0.673404 | 286 | 2,177 | 4.975524 | 0.328671 | 0.063247 | 0.042164 | 0.029515 | 0.376669 | 0.290935 | 0.175685 | 0.175685 | 0.126493 | 0.126493 | 0 | 0.009534 | 0.180983 | 2,177 | 55 | 138 | 39.581818 | 0.788559 | 0 | 0 | 0.104167 | 0 | 0.020833 | 0.320625 | 0.128158 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.0625 | false | 0 | 0.166667 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44844a84b744658a244c29556a18ff7de739bdd6 | 6,306 | py | Python | distil/active_learning_strategies/partition_strategy.py | ansunsujoe/distil | cf6cae2b88ef129d09c159aae0569978190e9f98 | [
"MIT"
] | 83 | 2021-01-06T06:50:30.000Z | 2022-03-31T05:16:32.000Z | distil/active_learning_strategies/partition_strategy.py | ansunsujoe/distil | cf6cae2b88ef129d09c159aae0569978190e9f98 | [
"MIT"
] | 30 | 2021-02-27T06:09:47.000Z | 2021-12-23T11:03:36.000Z | distil/active_learning_strategies/partition_strategy.py | ansunsujoe/distil | cf6cae2b88ef129d09c159aae0569978190e9f98 | [
"MIT"
] | 13 | 2021-03-05T18:26:58.000Z | 2022-03-12T01:53:17.000Z | import math
import numpy as np
from torch.utils.data import Subset
from .strategy import Strategy
class PartitionStrategy(Strategy):
    """
    Provides a wrapper around most of the strategies implemented in DISTIL that allows one to select portions of the budget from
    specific partitions of the unlabeled dataset. This allows the use of some strategies that would otherwise fail due to time or memory
    constraints. For example, if one specifies a number of partitions to be 5 and wants to select 50 new points, 10 points would
    be selected from the first fifth of the dataset, 10 points would be selected from the second fifth of the dataset, and so on.

    Parameters
    ----------
    labeled_dataset: torch.utils.data.Dataset
        The labeled training dataset
    unlabeled_dataset: torch.utils.data.Dataset
        The unlabeled pool dataset
    net: torch.nn.Module
        The deep model to use
    nclasses: int
        Number of unique values for the target
    args: dict
        Specify additional parameters

        - **batch_size**: The batch size used internally for torch.utils.data.DataLoader objects. (int, optional)
        - **device**: The device to be used for computation. PyTorch constructs are transferred to this device. Usually is one of 'cuda' or 'cpu'. (string, optional)
        - **loss**: The loss function to be used in computations. (typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor], optional)
        - **num_partitions**: Number of partitions to use (int, optional)
        - **wrapped_strategy_class**: The class of the strategy to use (class, optional)
    query_dataset: torch.utils.data.Dataset
        The query dataset to use if the wrapped_strategy_class argument points to SMI or SCMI.
    private_dataset: torch.utils.data.Dataset
        The private dataset to use if the wrapped_strategy_class argument points to SCG or SCMI.
    """
    def __init__(self, labeled_dataset, unlabeled_dataset, net, nclasses, args={}, query_dataset=None, private_dataset=None):

        super(PartitionStrategy, self).__init__(labeled_dataset, unlabeled_dataset, net, nclasses, args)

        if "num_partitions" not in args:
            self.num_partitions = 1
        else:
            self.num_partitions = args["num_partitions"]

        if "wrapped_strategy_class" not in args:
            raise ValueError("args dictionary requires 'wrapped_strategy_class' key")

        self.wrapped_strategy_class = args["wrapped_strategy_class"]
        self.query_dataset = query_dataset
        self.private_dataset = private_dataset

    def select(self, budget):
        """
        Selects next set of points

        Parameters
        ----------
        budget: int
            Number of data points to select for labeling

        Returns
        ----------
        idxs: list
            List of selected data point indices with respect to unlabeled_dataset
        """

        # The number of partitions should be less than or equal to the budget.
        # This is because the budget is evenly divided among the partitions (roughly),
        # so having a smaller budget than the number of partitions results in one or
        # more partitions having a 0 budget, which should not happen.
        if self.num_partitions > budget:
            raise ValueError("Budget cannot be less than the number of partitions!")

        # Furthermore, the number of partitions cannot be more than the size of the unlabeled set
        if self.num_partitions > len(self.unlabeled_dataset):
            raise ValueError("There cannot be more partitions than the size of the dataset!")

        # Calculate partition splits and budgets for each partition
        full_unlabeled_size = len(self.unlabeled_dataset)
        split_indices = [math.ceil(full_unlabeled_size * ((1+x) / self.num_partitions)) for x in range(self.num_partitions)]
        partition_budget_splits = [math.ceil(budget * (split_index / full_unlabeled_size)) for split_index in split_indices]

        beginning_split = 0
        selected_idx = []

        for i in range(self.num_partitions):
            end_split = split_indices[i]

            # Create a subset of the original unlabeled dataset as a partition.
            partition_index_list = list(range(beginning_split, end_split))
            current_partition = Subset(self.unlabeled_dataset, partition_index_list)

            # Calculate the budget for this partition
            if i == 0:
                partition_budget = partition_budget_splits[i]
            else:
                partition_budget = partition_budget_splits[i] - partition_budget_splits[i - 1]

            # With the new subset, create an instance of the wrapped strategy and call its select function.
            if(self.query_dataset != None and self.private_dataset != None):
                wrapped_strategy = self.wrapped_strategy_class(self.labeled_dataset, current_partition, self.query_dataset, self.private_dataset, self.model, self.target_classes, self.args)
            elif(self.query_dataset != None):
                wrapped_strategy = self.wrapped_strategy_class(self.labeled_dataset, current_partition, self.query_dataset, self.model, self.target_classes, self.args)
            elif(self.private_dataset != None):
                wrapped_strategy = self.wrapped_strategy_class(self.labeled_dataset, current_partition, self.private_dataset, self.model, self.target_classes, self.args)
            else:
                wrapped_strategy = self.wrapped_strategy_class(self.labeled_dataset, current_partition, self.model, self.target_classes, self.args)
            selected_partition_idxs = wrapped_strategy.select(partition_budget)

            # Use the partition_index_list to map the selected indices w/ respect to the current partition to the indices w/ respect to the dataset
            to_add_idxs = np.array(partition_index_list)[selected_partition_idxs]
            selected_idx.extend(to_add_idxs)
            beginning_split = end_split

        # Return the selected idx
return selected_idx | 52.115702 | 189 | 0.669997 | 793 | 6,306 | 5.161412 | 0.239596 | 0.062301 | 0.05375 | 0.029318 | 0.297337 | 0.248473 | 0.200098 | 0.15612 | 0.15612 | 0.15612 | 0 | 0.002805 | 0.265144 | 6,306 | 121 | 190 | 52.115702 | 0.880449 | 0.423565 | 0 | 0.06383 | 0 | 0 | 0.069959 | 0.019988 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042553 | false | 0 | 0.085106 | 0 | 0.170213 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4485c237e7028b7f11bca171722bc70d54c20ee5 | 2,120 | py | Python | lib/lib_constant.py | NingAnMe/GFSSI | 066ac3dcffe04927aa497ee8b2257bee3ec3789a | [
"MIT"
] | 1 | 2020-08-18T08:05:35.000Z | 2020-08-18T08:05:35.000Z | lib/lib_constant.py | NingAnMe/GFSSI | 066ac3dcffe04927aa497ee8b2257bee3ec3789a | [
"MIT"
] | null | null | null | lib/lib_constant.py | NingAnMe/GFSSI | 066ac3dcffe04927aa497ee8b2257bee3ec3789a | [
"MIT"
] | 1 | 2020-08-26T06:50:59.000Z | 2020-08-26T06:50:59.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time : 2019/8/2
@Author : AnNing
"""
import os
from lib.lib_path import get_aid_path, GFSSI_DIR
aid_path = get_aid_path()
# Fill value for invalid data
FULL_VALUE = -999

# Auxiliary files
BASEMAP_FY4_4KM = os.path.join(aid_path, 'ditu_fy4a_4km.png')
LON_LAT_LUT_FY4_4KM = os.path.join(aid_path, 'lonlat_lut_fy4_4km.hdf')  # lon/lat lookup table for raw FY4 data
LON_LAT_LUT_FY4_1KM = os.path.join(aid_path, 'lonlat_lut_fy4_1km.hdf')  # lon/lat lookup table for raw FY4 data
LON_LAT_LUT_FY3_1KM = os.path.join(aid_path, 'lonlat_lut_fy3_1km.hdf')  # lon/lat lookup table for raw FY3 data
PROJ_LUT_FY4_4KM = os.path.join(aid_path, 'lonlat_projlut_fy4_4km.hdf')  # lon/lat lookup table for the FY4 projection
PROJ_LUT_FY4_1KM = os.path.join(aid_path, 'lonlat_projlut_fy4_1km.hdf')  # lon/lat lookup table for the FY4 projection
KDTREE_LUT_FY4_4KM = os.path.join(aid_path, 'kdtree_lut_fy4_4km.hdf')  # KDTree lookup table for raw FY4 lon/lat
KDTREE_LUT_FY4_1KM = os.path.join(aid_path, 'kdtree_lut_fy4_1km.hdf')  # KDTree lookup table for raw FY4 lon/lat
KDTREE_LUT_FY3_1KM = os.path.join(aid_path, 'kdtree_lut_fy3_1km.hdf')  # KDTree lookup table for raw FY3 lon/lat
D_DEM_1KM = os.path.join(aid_path, 'D_DEM.txt')  # 1 km correction file
EP_TXT = os.path.join(aid_path, 'ep.txt')
ER_TXT = os.path.join(aid_path, 'er.txt')
CHINA_RANGE_MASK_1KM = os.path.join(aid_path, 'china_region_mask_1km.h5')
STATION_LIST = os.path.join(aid_path, 'StationList.txt')

# Colorbar (legend) ranges
COLORBAR_RANGE_ORBIT_FY4A = (0, 1000, '$w/m^2$')
COLORBAR_RANGE_DAILY_FY4A = (0, 10, '$kw/m^2$')
COLORBAR_RANGE_MONTHLY_FY4A = (0, 400, '$kw/m^2$')
COLORBAR_RANGE_YEARLY_FY4A = (0, 4000, '$kw/m^2$')
COLORBAR_RANGE_DAILY_FY3D = (0, 1, '$kw/m^2$')
COLORBAR_RANGE_MONTHLY_FY3D = (0, 40, '$kw/m^2$')
COLORBAR_RANGE_YEARLY_FY3D = (0, 400, '$kw/m^2$')

# Lat/lon range for the FY4A 1KM correction program; must match the 1KM grid
FY4A_1KM_CORRECT_LAT_LON_RANGE = [9.995, 54.995, 69.995, 139.995, 0.01]

# Binary programs for the 1KM correction
VARIFY_EXE = os.path.join(GFSSI_DIR, 'bin', 'varify.exe')
STATISTICS_EXE = os.path.join(GFSSI_DIR, 'bin', 'LRR_dyn_statistic.exe')

# Binary programs for forecasting
INTERP_EXE = os.path.join(GFSSI_DIR, 'bin', 'interp.exe')
FORECAST_EXE = os.path.join(GFSSI_DIR, 'bin', 'forecast.exe')

# Web server port
RESTFUL_POST = 5000

# Web server configuration file
CONFIG_FILE = os.path.join(GFSSI_DIR, 'cfg', 'config.json')
| 37.192982 | 92 | 0.745283 | 366 | 2,120 | 3.967213 | 0.289617 | 0.078512 | 0.130854 | 0.125344 | 0.50551 | 0.412534 | 0.2927 | 0.210744 | 0.088154 | 0 | 0 | 0.069219 | 0.100472 | 2,120 | 56 | 93 | 37.857143 | 0.692187 | 0.146698 | 0 | 0 | 0 | 0 | 0.221537 | 0.128435 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44860ad18421633ee3079c64cf2ba7d53eaef76a | 1,647 | py | Python | fondo_api/tests/runner.py | Fonmon/Fondo-API | 0c78eaab259df18219c01fceb67bd1b6ff8ec941 | [
"MIT"
] | null | null | null | fondo_api/tests/runner.py | Fonmon/Fondo-API | 0c78eaab259df18219c01fceb67bd1b6ff8ec941 | [
"MIT"
] | 48 | 2018-01-13T14:52:52.000Z | 2022-03-13T17:41:42.000Z | fondo_api/tests/runner.py | Fonmon/Fondo-API | 0c78eaab259df18219c01fceb67bd1b6ff8ec941 | [
"MIT"
] | null | null | null | import logging
from django.test.runner import DiscoverRunner
class TestRunner(DiscoverRunner):
    """ When migrations are disabled for the test runner, the `pre_migrate` signal
    does not emit. So we need another hook for installing the extension. Prior to
    Django 1.9, the `pre_syncdb` signal worked for that.
    """

    def run_tests(self, test_labels, extra_tests=None, **kwargs):
        logging.disable(logging.CRITICAL)
        return super(TestRunner, self).run_tests(test_labels, extra_tests, **kwargs)

    def setup_databases(self, **kwargs):
        """
        Always create PostgreSQL HSTORE extension if it doesn't already exist
        on the database before syncing the database. Requires PostgreSQL >= 9.1
        """
        def wrap_create_test_db(function):
            def decorated_create_test_db(self, verbosity, autoclobber, keepdb):
                test_database_name = function(self, verbosity, autoclobber, keepdb)
                self.connection.close()
                self.connection.settings_dict["NAME"] = test_database_name
                cursor = self.connection.cursor()
                cursor.execute('CREATE EXTENSION IF NOT EXISTS hstore')
                return test_database_name
            return decorated_create_test_db

        # Overriding class method from outside is ugly, but it's just for unit
        # testing anyway.
        from django.db.backends.base import creation
        creation.BaseDatabaseCreation._create_test_db = wrap_create_test_db(
            creation.BaseDatabaseCreation._create_test_db
        )
return super(TestRunner, self).setup_databases(**kwargs) | 44.513514 | 84 | 0.680631 | 197 | 1,647 | 5.51269 | 0.477157 | 0.055249 | 0.066298 | 0.036832 | 0.073665 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003221 | 0.245902 | 1,647 | 37 | 85 | 44.513514 | 0.871176 | 0.264117 | 0 | 0 | 0 | 0 | 0.035314 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.190476 | false | 0 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
448774b167c44c150d8ea3fd24f5266556ab412f | 460 | py | Python | Diffusion of Infection and Campus Evacuation/parameters.py | dem123456789/Computer-Simulation | ec61bdd7ca767f631ef190c1248690bd65327302 | [
"MIT"
] | 2 | 2016-07-03T05:19:20.000Z | 2021-03-07T04:39:07.000Z | Diffusion of Infection and Campus Evacuation/parameters.py | dem123456789/Computer-Simulation | ec61bdd7ca767f631ef190c1248690bd65327302 | [
"MIT"
] | null | null | null | Diffusion of Infection and Campus Evacuation/parameters.py | dem123456789/Computer-Simulation | ec61bdd7ca767f631ef190c1248690bd65327302 | [
"MIT"
] | null | null | null | import math
DEBUG = 1
PARKING_NODE_COLOR = '#FF0099'
EXIT_NODE_COLOR = 'r'
STREET_NODE_COLOR = 'g'
COP_NODE_COLOR = '#3c3ccc'
VISUAL = 1
COP_MODE = 0
COP_INTERSECTION_THRESHOLD = 0
COP_CONGESTION_THRESHOLD = 0.5
COP_EVACUATION_THRESHOLD = 0
DEPTH_OF_AWARENESS = 1
EAST_TENDENCY = 0
SPACE_TIME_TRADEOFF = 1
DEAD_END = []
IF_MUTATE = 0
#Simulation parameter
UNIT_LENGTH = math.ceil(5000/738) #ft
AVERAGE_CAR_SPACE_LENGTH = 20 #ft
AVERAGE_CAR_SPEED = 37 #ft/s
| 20.909091 | 39 | 0.773913 | 75 | 460 | 4.373333 | 0.64 | 0.109756 | 0.073171 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070707 | 0.13913 | 460 | 21 | 40 | 21.904762 | 0.757576 | 0.06087 | 0 | 0 | 0 | 0 | 0.037471 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
448bc1e703300278885ac596fbdec39b730d307d | 3,611 | py | Python | constrained_language_typology/plot_languages_main.py | deepneuralmachine/google-research | d2ce2cf0f5c004f8d78bfeddf6e88e88f4840231 | [
"Apache-2.0"
] | 23,901 | 2018-10-04T19:48:53.000Z | 2022-03-31T21:27:42.000Z | constrained_language_typology/plot_languages_main.py | deepneuralmachine/google-research | d2ce2cf0f5c004f8d78bfeddf6e88e88f4840231 | [
"Apache-2.0"
] | 891 | 2018-11-10T06:16:13.000Z | 2022-03-31T10:42:34.000Z | constrained_language_typology/plot_languages_main.py | deepneuralmachine/google-research | d2ce2cf0f5c004f8d78bfeddf6e88e88f4840231 | [
"Apache-2.0"
] | 6,047 | 2018-10-12T06:31:02.000Z | 2022-03-31T13:59:28.000Z | # coding=utf-8
# Copyright 2021 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""Simple utility for plotting the given languages on world's map.
Example:
--------
> python3 plot_languages_main.py \
--training_data_dir /tmp \
--output_map_file /tmp/world.pdf
Extra Dependencies:
-------------------
To install BaseMap from sources:
> apt-get install libgeos-dev
> pip3 install geos
> pip3 install https://github.com/matplotlib/basemap/archive/master.zip
> pip3 install seaborn
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from absl import app
from absl import flags
from absl import logging
import basic_models
import constants as const
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
flags.DEFINE_string(
    "training_data_dir", "",
    "Directory where the training data resides. This data has to be in the "
    "format generated from the original SIGTYP data by the "
    "\"sigtyp_reader_main\" tool.")

flags.DEFINE_string(
    "output_map_file", "",
    "Output PDF containing the world map with languages shown.")

FLAGS = flags.FLAGS

_TOOL_NAME = "plot"


def _process_set(filename, world_map, color, size):
  data = basic_models.load_training_data(
      _TOOL_NAME, FLAGS.training_data_dir, filename)

  # Plot the data.
  mxy = world_map(data["longitude"].tolist(), data["latitude"].tolist())
  world_map.scatter(mxy[0], mxy[1], s=size, c=color, lw=0, alpha=1, zorder=5)


def main(unused_argv):
  if not FLAGS.training_data_dir:
    raise ValueError("Specify --training_data_dir!")
  if not FLAGS.output_map_file:
    raise ValueError("Specify --output_map_file!")

  # ------------------------------------------------
  # Set up the map itself using Mercator projection:
  # ------------------------------------------------
  my_dpi = 96
  plt.figure(figsize=(2600/my_dpi, 1800/my_dpi), dpi=my_dpi)
  world_map = Basemap(projection="merc",
                      llcrnrlat=-60,
                      urcrnrlat=65,
                      llcrnrlon=-180,
                      urcrnrlon=180,
                      lat_ts=0,
                      resolution="c")
  # Dark grey land, black lakes.
  world_map.fillcontinents(color="#191919", lake_color="#17202A")
  world_map.drawmapboundary(fill_color="#17202A")  # Dark background.
  # Thin white line for country borders.
  world_map.drawcountries(linewidth=0.05, color="#5F6A6A")

  # ------------------------------------
  # Load the languages, display and save.
  # ------------------------------------
  set_names = [
      (const.TRAIN_FILENAME, "#1292db", 10),      # Blue.
      (const.DEV_FILENAME, "#84DE02", 20),        # Green.
      (const.TEST_BLIND_FILENAME, "#F4D03F", 20)  # Yellow.
  ]
  for set_name, color, point_size in set_names:
    _process_set(set_name, world_map, color, point_size)

  logging.info("Saving plot to \"%s\" ...", FLAGS.output_map_file)
  plt.savefig(FLAGS.output_map_file)


if __name__ == "__main__":
  app.run(main)
| 32.241071 | 77 | 0.663805 | 474 | 3,611 | 4.85865 | 0.485232 | 0.031264 | 0.033869 | 0.023448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026603 | 0.188037 | 3,611 | 111 | 78 | 32.531532 | 0.758868 | 0.381058 | 0 | 0.036364 | 0 | 0 | 0.171208 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036364 | false | 0 | 0.2 | 0 | 0.236364 | 0.018182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
448ce569784ffe842d7c48946cdd94821413b192 | 1,515 | py | Python | experiments/cartpole_dqn.py | jkulhanek/deep-rl-pytorch | 6fa7ceee8524f002d4a8d93295b231f6b9b7c29c | [
"MIT"
] | 7 | 2019-03-24T19:51:11.000Z | 2022-01-27T17:20:29.000Z | experiments/cartpole_dqn.py | jkulhanek/deep-rl-pytorch | 6fa7ceee8524f002d4a8d93295b231f6b9b7c29c | [
"MIT"
] | null | null | null | experiments/cartpole_dqn.py | jkulhanek/deep-rl-pytorch | 6fa7ceee8524f002d4a8d93295b231f6b9b7c29c | [
"MIT"
] | 4 | 2020-04-11T01:06:24.000Z | 2021-07-18T01:22:36.000Z | import gym
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import deep_rl.deepq as deepq
from deep_rl import register_trainer
class Model(nn.Module):
    def __init__(self):
        super().__init__()

        def init_weights(m):
            if type(m) == nn.Linear:
                nn.init.xavier_uniform_(m.weight)
                m.bias.data.fill_(0)

        self.layer = nn.Linear(4, 256)
        self.adventage = nn.Linear(256, 2)
        self.value = nn.Linear(256, 1)
        self.apply(init_weights)

    def forward(self, inputs):
        features = self.layer(inputs)
        features = F.relu(features)
        value = self.value(features)
        adventage = self.adventage(features)
        # dueling combination: Q(s, a) = V(s) + (A(s, a) - mean of A over actions)
        features = adventage + value - adventage.mean()
        return features


@register_trainer(max_time_steps=100000, episode_log_interval=10)
class Trainer(deepq.DeepQTrainer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.annealing_steps = 10000
        self.preprocess_steps = 1000
        self.replay_size = 50000
        self.allow_gpu = False

    def create_model(self):
        return Model()

    def create_env(self, env):
        env = super().create_env(env)

        class W(gym.ObservationWrapper):
            def observation(self, o):
                return o.astype(np.float32)

        return W(env)


def default_args():
    return dict(
        env_kwargs=dict(id='CartPole-v0'),
        model_kwargs=dict()
    )
| 25.25 | 65 | 0.617162 | 192 | 1,515 | 4.666667 | 0.427083 | 0.035714 | 0.029018 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03464 | 0.275908 | 1,515 | 59 | 66 | 25.677966 | 0.782133 | 0 | 0 | 0 | 0 | 0 | 0.007261 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.177778 | false | 0 | 0.133333 | 0.066667 | 0.488889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
448f5af76ad0cc20a0a28294b6c5340ad6385f98 | 1,063 | py | Python | setup.py | pymgrit/pymgrit | 40eca08cedf486de22604279b4add87086b7d3cc | [
"MIT"
] | 6 | 2020-04-24T13:14:17.000Z | 2022-03-09T14:16:51.000Z | setup.py | pymgrit/pymgrit | 40eca08cedf486de22604279b4add87086b7d3cc | [
"MIT"
] | 6 | 2020-03-24T09:03:05.000Z | 2021-08-02T13:31:39.000Z | setup.py | pymgrit/pymgrit | 40eca08cedf486de22604279b4add87086b7d3cc | [
"MIT"
] | 4 | 2020-06-09T21:11:19.000Z | 2021-06-27T11:34:58.000Z | from setuptools import setup, find_packages
install_requires = [
    'numpy>=1.17.0',
    'scipy>=1.4.1',
    'mpi4py>=3.0',
    'matplotlib>=3.1.3'
]

extras_requires = {
    'docs': [
        'sphinx'
    ],
    'tests': [
        'tox',
    ]
}


def long_description():
    with open('README.rst') as f:
        return f.read()


setup(name='pymgrit',
      version='1.0.6',
      description='Python implementation of the MGRIT algorithm',
      long_description=long_description(),
      long_description_content_type="text/x-rst",
      url='https://github.com/pymgrit/pymgrit',
      author='Jens Hahne <jens.hahne@math.uni-wuppertal.de>, Stephanie Friedhoff <friedhoff@math.uni-wuppertal.de>',
      author_email='jens.hahne@math.uni-wuppertal.de',
      license='MIT',
      packages=find_packages(where='src', exclude=['doc']),
      install_requires=install_requires,
      extras_require=extras_requires,
      python_requires=">=3.6",
      include_package_data=True,
      package_dir={'': 'src'},
      test_suite='pytest',
      zip_safe=False)
| 25.309524 | 116 | 0.631232 | 130 | 1,063 | 5 | 0.607692 | 0.092308 | 0.073846 | 0.083077 | 0.083077 | 0.083077 | 0 | 0 | 0 | 0 | 0 | 0.021454 | 0.210724 | 1,063 | 41 | 117 | 25.926829 | 0.753278 | 0 | 0 | 0 | 0 | 0.028571 | 0.316087 | 0.094073 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.028571 | 0 | 0.085714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4490ad67efce400919429713bee906671961ca09 | 2,590 | py | Python | mod6_lab1.py | KMSkelton/pyPrac | 11cec31d4cc1cbb890f89324a10f4daf66376de0 | [
"MIT"
] | 1 | 2017-08-08T20:38:27.000Z | 2017-08-08T20:38:27.000Z | mod6_lab1.py | KMSkelton/pyPrac | 11cec31d4cc1cbb890f89324a10f4daf66376de0 | [
"MIT"
] | 1 | 2018-08-15T22:26:29.000Z | 2018-08-15T22:26:29.000Z | mod6_lab1.py | KMSkelton/pyPrac | 11cec31d4cc1cbb890f89324a10f4daf66376de0 | [
"MIT"
] | null | null | null | # Update the code to have a function that reads in the file and returns contents as a list
def open_sample_func(sample_text):
    with open(sample_text) as file:
        lines = file.readlines()
    return lines

# Update the code to have a function that converts the list of book lines into a list of the words
def word_list_func(lines):
    word_list = []
    for line in lines:
        word_list.extend(line.strip().split(" "))
    return word_list

def word_count_func(word_list):
    word_count = {}
    for word in word_list:
        if word in word_count:
            word_count[word] = word_count[word] + 1
        else:
            word_count[word] = 1
    return word_count
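
# Illustrative example (not part of the original lab):
#   word_count_func(["the", "cat", "sat", "the"]) -> {"the": 2, "cat": 1, "sat": 1}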
# Update the code to have a function that calculates the minimum word frequency
# and gets the list of words from the word frequency dictionary
def min_word_func(word_count):
    min_word_count = min(word_count.values())
    min_words = []
    for word, fr in word_count.items():
        if fr == min_word_count:
            min_words.append(word)
    print("The lowest word count is {} and there are {} words in "
          "the book with that word_count".format(min_word_count, len(min_words)))
    return min_words

# I made each of the given operations into its own function that returns the value(s) calculated
# This is the only function I wrote, which only calls each of the above functions
# Functions created with parameters require arguments. We can create those arguments inside the
# function that invokes all the others

# Create a main section at the bottom of the code
if __name__ == '__main__':
    # Update the main section to call the function that reads the file
    # and the function that converts the book lines into a list.
    # Remember to pass the appropriate variable into the function
    # and return the value you want and store it in a variable
    sample_text = "./land_time_forgot.txt"
    lines = open_sample_func(sample_text)
    word_list = word_list_func(lines)
    word_count = word_count_func(word_list)

    # Update the main section to call the function that calculates the word count
    # and find the word with the maximum count (hint: use max)
    max_word = max(word_count, key=word_count.get)
    print(f"The word with the most appearances is: {max_word}. It appears {word_count[max_word]} times.")
    min_word_func(word_count)

    # Update the main section to create a word set and print the sentence about unique words
    word_set = set(word_list)
    print("There are {} words in the book and {} of them are unique".format(
        len(word_list), len(word_set)))
| 41.774194 | 105 | 0.71583 | 419 | 2,590 | 4.260143 | 0.291169 | 0.110924 | 0.036415 | 0.02521 | 0.266667 | 0.12437 | 0.09972 | 0.09972 | 0.045938 | 0 | 0 | 0.000989 | 0.218919 | 2,590 | 61 | 106 | 42.459016 | 0.881364 | 0.439768 | 0 | 0 | 0 | 0 | 0.182008 | 0.030683 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108108 | false | 0 | 0 | 0 | 0.216216 | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44911cbbd93bdb5beaecefc59e565353a88e30d2 | 1,439 | py | Python | volume/pipeline.py | Napam/INF399-Master-Evaluation | d6e66397ae6951cffae65ddd41979bab367abdb8 | [
"MIT"
] | 1 | 2021-05-31T13:32:09.000Z | 2021-05-31T13:32:09.000Z | volume/pipeline.py | Napam/INF399-Master-Evaluation | d6e66397ae6951cffae65ddd41979bab367abdb8 | [
"MIT"
] | null | null | null | volume/pipeline.py | Napam/INF399-Master-Evaluation | d6e66397ae6951cffae65ddd41979bab367abdb8 | [
"MIT"
] | null | null | null | import glob, os
from cococonvert import convert_csv_labels, convert_csv_outputs
from cocoeval import COCO, COCOeval
import sys
class StdoutRedirection:
    """Standard output redirection context manager"""

    def __init__(self, path):
        self._path = path

    def __enter__(self):
        sys.stdout = open(self._path, mode="w")
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        sys.stdout.close()
        sys.stdout = sys.__stdout__
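
# Illustrative usage (not part of the original script): everything printed inside
# the block goes to the named file, and stdout is restored when the block exits.
#
#   with StdoutRedirection("example_summary.txt"):
#       print("this line is written to example_summary.txt")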
if __name__ == '__main__':
    mapdir = '/mnt/deepvol/mapdir'

    print("\x1b[33mConverting from CSV to JSON format\x1b[0m")
    convert_csv_labels(os.path.join(mapdir, "nogit_val_labels.csv"), "jsons/nogit_val_labels.json")
    for dir_ in glob.glob(os.path.join(mapdir, "nogit_output_*.csv")):
        print(dir_)
        convert_csv_outputs(dir_, "jsons/"+os.path.basename(dir_).replace('.csv','.json'))

    print("\x1b[33mLoading ground truth labels\x1b[0m")
    cocogt = COCO("jsons/nogit_val_labels.json")

    print("\x1b[33mGetting summaries\x1b[0m")
    for dir_ in glob.glob("jsons/nogit_output_*.json"):
        print(f"\x1b[32m{dir_}\x1b[0m")
        cocodt = cocogt.loadRes(dir_)
        cocoEval = COCOeval(cocogt, cocodt, 'bbox3d')
        cocoEval.evaluate()
        cocoEval.accumulate()
        with StdoutRedirection("summaries/"+os.path.basename(dir_).replace('.json', '.txt')):
            cocoEval.summarize()
print() | 32.704545 | 99 | 0.660181 | 183 | 1,439 | 4.89071 | 0.393443 | 0.044693 | 0.046927 | 0.035754 | 0.18771 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018325 | 0.203614 | 1,439 | 44 | 100 | 32.704545 | 0.762653 | 0.029882 | 0 | 0 | 0 | 0 | 0.23652 | 0.071891 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0 | 0.125 | 0 | 0.28125 | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4492af236eff4aa2e983034e070713b3b4e8e677 | 3,076 | py | Python | play.py | brandoFranco/tictactoeOpenCV | 60eaa9c028cdfe13e68124de4037b39fcfa7432c | [
"MIT"
] | null | null | null | play.py | brandoFranco/tictactoeOpenCV | 60eaa9c028cdfe13e68124de4037b39fcfa7432c | [
"MIT"
] | null | null | null | play.py | brandoFranco/tictactoeOpenCV | 60eaa9c028cdfe13e68124de4037b39fcfa7432c | [
"MIT"
] | null | null | null | #
# Tic-tac-toe (Jogo da Velha) using computer vision and augmented reality.
# Definitions of auxiliary functions
#
import numpy as np

# Function that checks whether the game has ended.
def won(tabuleiro):
    if (tabuleiro[0] == tabuleiro[1] == tabuleiro[2] == 0):
        return 1
    elif (tabuleiro[0] == tabuleiro[3] == tabuleiro[6] == 0):
        return 1
    elif (tabuleiro[6] == tabuleiro[7] == tabuleiro[8] == 0):
        return 1
    elif (tabuleiro[2] == tabuleiro[5] == tabuleiro[8] == 0):
        return 1
    elif (tabuleiro[3] == tabuleiro[4] == tabuleiro[5] == 0):
        return 1
    elif (tabuleiro[1] == tabuleiro[4] == tabuleiro[7] == 0):
        return 1
    elif (tabuleiro[0] == tabuleiro[4] == tabuleiro[8] == 0):
        return 1
    elif (tabuleiro[2] == tabuleiro[4] == tabuleiro[6] == 0):
        return 1
    elif (tabuleiro[0] == tabuleiro[1] == tabuleiro[2] == 1):
        return 2
    elif (tabuleiro[0] == tabuleiro[3] == tabuleiro[6] == 1):
        return 2
    elif (tabuleiro[6] == tabuleiro[7] == tabuleiro[8] == 1):
        return 2
    elif (tabuleiro[2] == tabuleiro[5] == tabuleiro[8] == 1):
        return 2
    elif (tabuleiro[3] == tabuleiro[4] == tabuleiro[5] == 1):
        return 2
    elif (tabuleiro[1] == tabuleiro[4] == tabuleiro[7] == 1):
        return 2
    elif (tabuleiro[0] == tabuleiro[4] == tabuleiro[8] == 1):
        return 2
    elif (tabuleiro[2] == tabuleiro[4] == tabuleiro[6] == 1):
        return 2
    else:
        if -1 not in tabuleiro:
            return 3
        else:
            return 0

# Makes a move in a free position on the board.
def move(tabuleiro, posicao, peca):
    if tabuleiro[posicao] == -1:
        tabuleiro[posicao] = peca

# Function to locate the corners of the board.
def canto(y1, x1, y2, x2, y3, x3, y4, x4, h, w, k):
    if k == "TL":
        yc, xc = 0, 0
    if k == "TR":
        yc, xc = 0, w
    if k == "BL":
        yc, xc = h, 0
    if k == "BR":
        yc, xc = h, w
    d1 = np.sqrt(np.power(y1-yc,2)+ np.power(x1-xc,2))
    d2 = np.sqrt(np.power(y2-yc,2)+ np.power(x2-xc,2))
    d3 = np.sqrt(np.power(y3-yc,2)+ np.power(x3-xc,2))
    d4 = np.sqrt(np.power(y4-yc,2)+ np.power(x4-xc,2))
    d = [d1,d2,d3,d4]
    if min(d) == d1:
        return 0
    if min(d) == d2:
        return 1
    if min(d) == d3:
        return 2
    if min(d) == d4:
        return 3

# Function used to order the points of the board.
# Used to assist in the perspective transformation.
def ordena(aprox, shape):
    ord = np.copy(aprox)
    h,w = shape
    y1,x1 = aprox[0][0][0],aprox[0][0][1]
    y2,x2 = aprox[1][0][0],aprox[1][0][1]
    y3,x3 = aprox[2][0][0],aprox[2][0][1]
    y4,x4 = aprox[3][0][0],aprox[3][0][1]
    TL = canto(y1, x1, y2, x2, y3, x3, y4, x4, h, w, "TL")
    TR = canto(y1, x1, y2, x2, y3, x3, y4, x4, h, w, "TR")
    BL = canto(y1, x1, y2, x2, y3, x3, y4, x4, h, w, "BL")
    BR = canto(y1, x1, y2, x2, y3, x3, y4, x4, h, w, "BR")
    ord[1] = aprox[TL]
    ord[3] = aprox[TR]
    ord[0] = aprox[BL]
    ord[2] = aprox[BR]
    return ord
| 28.220183 | 69 | 0.535761 | 486 | 3,076 | 3.390947 | 0.193416 | 0.118325 | 0.038835 | 0.058252 | 0.475121 | 0.455704 | 0.430825 | 0.169296 | 0.169296 | 0.069782 | 0 | 0.088889 | 0.28316 | 3,076 | 108 | 70 | 28.481481 | 0.658503 | 0.114434 | 0 | 0.3 | 0 | 0 | 0.005895 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.0125 | 0 | 0.35 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4492b1a697142e5525ad3f5dfe95027989301553 | 1,407 | py | Python | src/main.py | TomoTom0/DiscordBot_Heroku_Stat.ink | 458ad08fb53a83b8fa96dad25fd603c9d14722b2 | [
"MIT"
] | 1 | 2020-11-12T04:26:30.000Z | 2020-11-12T04:26:30.000Z | src/main.py | TomoTom0/DiscordBot_Heroku_Stat.ink | 458ad08fb53a83b8fa96dad25fd603c9d14722b2 | [
"MIT"
] | null | null | null | src/main.py | TomoTom0/DiscordBot_Heroku_Stat.ink | 458ad08fb53a83b8fa96dad25fd603c9d14722b2 | [
"MIT"
] | 4 | 2020-11-14T14:53:30.000Z | 2021-07-05T11:36:59.000Z | #! /usr/bin/env python3
import discord
from discord.ext import commands
import subprocess
import re
import os, sys
import json
import basic
import datetime
import traceback
import iksm_discord
TOKEN = basic.DISCORD_TOKENS["0"]
startup_extensions = ["splat"]  # cogs to load
description = f"stat.inkへ戦績自動アップロードを行うbotです。\nまずはstat.inkのAPI KEYを用意してください。"+\
    '\nHerokuのAPI KEYとapp-nameを環境変数として入力しておいてください。' if os.getenv('DYNO', False) else '' +\
    "\n詳しい使い方はこちら -> https://github.com/TomoTom0/DiscordBot_Heroku_Stat.ink"
bot = commands.Bot(command_prefix="?", description=description)


# Runs on startup
@bot.event
async def on_ready():
    print(f"Logged in as\n{bot.user.name}\n{bot.user.id}\n------")
    await iksm_discord.autoUploadCycle(next_time = 900)


# Runs whenever a message is received
@bot.event
async def on_message(message):
    # Ignore messages sent by bots
    if message.author.bot is True:
        return
    # Hand the message over to the bot commands
    try:
        await bot.process_commands(message)
    except Exception as e:
        error_message = f"エラーが発生しました。\n{traceback.format_exc()}"
        print(error_message)


if __name__ == "__main__":  # load the cogs
    for extension in startup_extensions:
        try:
            bot.load_extension(extension)
        except Exception as e:
            exc = f'{e}: {e.args}'
            print(f'Failed to load extension {extension}\n{exc}')

    # Start the bot and connect to the Discord server
    bot.run(TOKEN)
| 25.125 | 86 | 0.702914 | 173 | 1,407 | 5.578035 | 0.560694 | 0.022798 | 0.026943 | 0.033161 | 0.037306 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005222 | 0.183369 | 1,407 | 55 | 87 | 25.581818 | 0.834639 | 0.093817 | 0 | 0.162162 | 0 | 0.027027 | 0.266772 | 0.123125 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.27027 | 0 | 0.297297 | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
44932b7c61cbe0cddbdb7086477b87d6e2cfcb5d | 1,019 | py | Python | tls/Alerts.py | anuraagbaishya/tls1.3 | e86b575431694fe4ea987cdb89ef3438f7f28360 | [
"MIT"
] | null | null | null | tls/Alerts.py | anuraagbaishya/tls1.3 | e86b575431694fe4ea987cdb89ef3438f7f28360 | [
"MIT"
] | null | null | null | tls/Alerts.py | anuraagbaishya/tls1.3 | e86b575431694fe4ea987cdb89ef3438f7f28360 | [
"MIT"
] | null | null | null | from tls.CustomEnums import UInt8Enum
class Alert(Exception):
    def __init__(self, level, description):
        self.level = level
        self.description = description


class AlertLevel(UInt8Enum):
    warning = 1
    fatal = 2


class AlertDescription(UInt8Enum):
    close_notify = 0
    unexpected_message = 10
    bad_record_mac = 20
    record_overflow = 22
    handshake_failure = 40
    bad_certificate = 42
    unsupported_certificate = 43
    certificate_revoked = 44
    certificate_expired = 45
    certificate_unknown = 46
    illegal_parameter = 47
    unknown_ca = 48
    access_denied = 49
    decode_error = 50
    decrypt_error = 51
    protocol_version = 70
    insufficient_security = 71
    internal_error = 80
    inappropriate_fallback = 86
    user_canceled = 90
    missing_extension = 109
    unsupported_extension = 110
    unrecognized_name = 112
    bad_certificate_status_response = 113
    unknown_psk_identity = 115
    certificate_required = 116
    no_application_protocol = 120
| 23.697674 | 43 | 0.708538 | 115 | 1,019 | 5.965217 | 0.773913 | 0.026239 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084416 | 0.244357 | 1,019 | 42 | 44 | 24.261905 | 0.806494 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.027778 | 0 | 0.944444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4494889d1e0dfebc5119191d7266469bf0009a3a | 21,922 | py | Python | ProjectSurveillance/views.py | psymen145/OVS-django-fe | 1823e8b42c17276d6b50a63dddd9b04a21c2038c | [
"MIT"
] | null | null | null | ProjectSurveillance/views.py | psymen145/OVS-django-fe | 1823e8b42c17276d6b50a63dddd9b04a21c2038c | [
"MIT"
] | null | null | null | ProjectSurveillance/views.py | psymen145/OVS-django-fe | 1823e8b42c17276d6b50a63dddd9b04a21c2038c | [
"MIT"
] | null | null | null | from django.shortcuts import render, get_object_or_404, redirect
from django.http import HttpResponse, JsonResponse, Http404
from django.contrib.auth.decorators import login_required
from django.core.exceptions import ObjectDoesNotExist
from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
from django.db.models import Q, F, Value, Case, When
from django.contrib.auth.models import User, Group
from django.contrib.auth import authenticate, login  # needed by signup()
from django.template.loader import render_to_string
from django.utils import timezone
from datetime import timedelta, datetime
from django.core import serializers
import json
from .models import *
from .forms import ProjectForm, UserForm, OrgForm, SignUpForm
def error_404(request):
data = {}
return render(request, 'ProjectSurveillance/404.html', data)
def error_500(request):
data = {}
return render(request, 'ProjectSurveillance/500.html', data)
@login_required(login_url='/login/')
def dashboard(request):
#Serves the main dashboard to the user. Login is required.
#gives the projects under each phase
#params: get request only. Some calls may be an ajax (such as all active projects)
#return: a rendered html page
ip = request.META['REMOTE_ADDR']
print(ip)
if request.user.is_authenticated():
days_from_today = timezone.now() - timedelta(days = 90)
#recent_projects = Project.objects.filter(projectdetail__detailtypeid = 7, projectdetail__dateofevent__gte = days_from_today)[:8]
projects = Project.objects.filter(archive__isnull = True)
preproj = projects.filter(phaseid = 1)
newsub = projects.filter(phaseid = 2)
newlogging = projects.filter(phaseid = 3)
appreview = projects.filter(phaseid = 4)
revision = projects.filter(phaseid = 5)
duadraftrev = projects.filter(phaseid = 6)
reqduarev = projects.filter(phaseid = 7)
appclose = projects.filter(phaseid = 8)
datareqproc = projects.filter(phaseid = 9)
datasec = projects.filter(phaseid = 10)
dataclose = projects.filter(phaseid = 11)
data = {#'recent_projects': recent_projects,
'total_projects': Project.objects.count(),
'total_users': VSUser.objects.count(),
'total_open_projects': Project.objects.exclude(projectdetail__detailtypeid = 73).count(),
'total_orgs': Organization.objects.count(),
'preproj': preproj, 'newsub': newsub, 'newlogging': newlogging, 'appreview': appreview, 'revision': revision, 'duadraftrev': duadraftrev, 'reqduarev': reqduarev,
'appclose': appclose, 'datareqproc': datareqproc, 'datasec': datasec, 'dataclose': dataclose}
else:
data = {'statement' : "Please login to see your dashboard"}
return render(request, 'ProjectSurveillance/dashboard.html', data)
@login_required(login_url='/login/')
def update_phase(request):
if request.method == 'POST':
        # be wary of changes to the project id in the front end; proj_id should correspond to the primary key of the project
proj_id = request.POST.get("proj_id", None)
new_phase = request.POST.get("new_phase", None)
if proj_id and new_phase:
try:
p = Project.objects.get(projectid = proj_id)
p.phaseid = WorkPhase.objects.get(phaseid = new_phase)
p.save()
success = True
except Exception as e:
print(e.args)
success = False
data = {"success": success}
return JsonResponse(data)
def all_users(request):
#Serves a table of all of the users, with a pagination bar, and a search option
#params: a get request only
#return: an html page
if request.method == 'GET':
search_query = request.GET.get('searchbox',None)
if search_query is not None:
#if the query seems to be one term
if len(search_query.split()) == 1:
user_list = VSUser.objects.filter(
Q(fname__icontains=search_query) | Q(lname__icontains=search_query) | Q(useremail__icontains=search_query)
).order_by("fname")
#if the query seems to be first name and last name
elif len(search_query.split()) > 1:
part_name = search_query.split()
user_list = VSUser.objects.filter(fname__icontains=part_name[0], lname__icontains=part_name[1])
#if user enters spaces or other characters
else:
user_list = VSUser.objects.exclude(userorganization__archive = True).order_by("fname")
else:
#if user does not do a search
user_list = VSUser.objects.exclude(userorganization__archive = True).order_by("fname")
page = 1
paginator = Paginator(user_list, 50)
try:
users = paginator.page(page)
except PageNotAnInteger:
users = paginator.page(1)
except EmptyPage:
users = paginator.page(paginator.num_pages)
index = users.number - 1
max_index = len(paginator.page_range)
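    # Build a sliding window of at most five page links around the current page for the pagination bar.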
if index >= 3:
if index >= max_index - 2:
start_index = max_index - 5
end_index = max_index
else:
start_index = index - 2
end_index = index + 3
else:
start_index = 0
end_index = 5
page_range = paginator.page_range[start_index:end_index]
return render(request, 'ProjectSurveillance/userview.html', {'users': users, 'page_range': page_range})
def user_details(request, pk):
#returns info of specific user
#params: a get request only and a primary key indicating which project we are getting specific details for
#return: rendered html page of specific user's info
user = get_object_or_404(VSUser, pk=pk)
#get user's projects
projects_with_user = Project.objects.filter(userproject__userid = pk).values_list('projectid', 'projname', 'archive', 'userproject__archive')
#get user's organization
orgs_with_user = Organization.objects.filter(userorganization__userid = pk).exclude(userorganization__archive = True)
#if more than one org returned for the user, then send a note to the template
if(orgs_with_user.count() > 1):
many_orgs_flag = True
else:
many_orgs_flag = False
org_with_user = orgs_with_user.first()
#get user's bvs confidentiality
dates = ProjectDetail.objects.filter(Q(detailtypeid = 72) | Q(detailtypeid = 71), userid = pk)
#date requested
req_obj = dates.filter(detailtypeid = 72)
#date signed
signed_obj = dates.filter(detailtypeid = 71)
#get user drive access
drive_access = DetailType.objects.filter(projectdetail__userid = pk, drive = True).order_by('detaildesc').values_list('detaildesc','projectdetail__note')
return render(request, 'ProjectSurveillance/indivuserview.html', {'userdetail': user,
'projects' : projects_with_user,
'org': org_with_user,
'signed': signed_obj,
'requested': req_obj,
'drive_access': drive_access,
'many_orgs_flag': many_orgs_flag})
def all_users_ajax(request):
#updates the table if user made a pagination request/sort request/search request
#params: must be an ajax get request
#returns: html code to replace a certain section of the webpage
if request.method == 'GET' and request.is_ajax():
search_query = request.GET.get('search_query',None)
sorttype = request.GET.get('sorttype', None)
cattype = request.GET.get('cattype', None)
#check if user made an ajax call to sort the current page of users
sortstring = "-" if sorttype == "dsc" else ""
if cattype == "id-sort-icon":
cattype = "userid"
elif cattype == "email-sort-icon":
cattype = "useremail"
elif cattype == "lname-sort-icon":
cattype = "lname"
else:
cattype = "fname"
if search_query is not None:
#if the query seems to be one term
if len(search_query.split()) == 1:
user_list = VSUser.objects.filter(
Q(fname__icontains=search_query) | Q(lname__icontains=search_query) | Q(useremail__icontains=search_query)
).order_by(sortstring + cattype)
#if the query seems to be first name and last name
elif len(search_query.split()) > 1:
part_name = search_query.split()
user_list = VSUser.objects.filter(fname__icontains=part_name[0], lname__icontains=part_name[1]).order_by(sortstring + cattype)
#if user enters spaces or other characters
else:
user_list = VSUser.objects.exclude(userorganization__archive = True).order_by(sortstring + cattype)
else:
#if user does not do a search
user_list = VSUser.objects.exclude(userorganization__archive = True).order_by(sortstring + cattype)
page = request.GET.get('desired_page',1)
paginator = Paginator(user_list, 50)
try:
users = paginator.page(page)
except PageNotAnInteger:
users = paginator.page(1)
except EmptyPage:
users = paginator.page(paginator.num_pages)
index = users.number - 1
max_index = len(paginator.page_range)
if index >= 3:
if index >= max_index - 2:
if max_index < 5:
start_index = 0
else:
start_index = max_index - 5
end_index = max_index
else:
start_index = index - 2
end_index = index + 3
else:
start_index = 0
end_index = 5
page_range = paginator.page_range[start_index:end_index]
data = {}
#we will just assign users to the projects key, so we can just reuse the paginationajax template
context1 = {'projects' : users, 'page_range': page_range}
context2 = {'users': users}
data['html_page'] = render_to_string('ProjectSurveillance/paginationajax.html', context1, request=request)
data['html_table'] = render_to_string('ProjectSurveillance/userviewajax.html', context2, request=request)
return JsonResponse(data)
@login_required(login_url="/login/")
def user_details_update_modal(request):
#this function is used by the function user_details_update
#params: get requests will get the projects currently associated with user
# post requests will decide whether or not to archive each user project relationship
#return: get requests will return an html page populating the modal html
if request.method == 'GET' and request.is_ajax():
user_id = request.GET.get("user_id", None)
#get all user's projects
projects = Project.objects.filter(userproject__userid = user_id).values_list('projectid','projname','userproject__archive')
context = {'projects': projects}
data = {}
data['html_form'] = render_to_string('ProjectSurveillance/userprojarchivemodal.html', context, request=request)
return JsonResponse(data)
if request.method == 'POST' and request.is_ajax():
user_id = request.POST.get("user_id", None)
        # jQuery sends this dict of dicts as a string, so it has to be parsed back into a JSON dict
proj_archive_dict = json.loads(request.POST.get("project_archive_pair", None))
p = Project.objects.filter(userproject__userid = user_id, userproject__archive = True).values_list('projectid', flat=True)
list_catid = []
for i in proj_archive_dict.keys():
print(i, proj_archive_dict[i])
if proj_archive_dict[i]:
if int(i) not in p:
list_catid.append(int(i))
else:
if int(i) in p:
list_catid.append(int(i))
print(list_catid)
data = {}
if list_catid:
try:
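                # Toggle the archive flag on every selected user-project row (NULL is treated as "not archived").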
UserProject.objects.filter(projectid__in = list_catid).update(archive = Case(
When(archive = True, then = Value(False)),
When(archive = False, then = Value(True)),
When(archive__isnull = True, then = Value(True))
)
)
data['success'] = True
except Exception as e:
print(e.args)
data['success'] = False
else:
data['success'] = True
return JsonResponse(data)
@login_required(login_url="/login/")
def user_details_update(request):
#this view processes all of the requests to edit the individual user view
    # params: will always be an ajax post request because the user is editing existing database data
#return: confirmation if the data has been saved in the database. This will send back a json response
if request.method == "POST" and request.is_ajax():
user_id = request.POST.get("user_id", None)
new_email = request.POST.get("new_email", None)
new_phone = request.POST.get("new_phone", None)
new_org = request.POST.get("new_org", None)
new_req_dates = request.POST.get("list_of_req", None)
assoc_proj_id = request.POST.get("assoc_proj_id", None)
        # this will tell us whether the passed-in json data has been successfully saved into the database
email_success = True
phone_success = True
org_success = True
if new_email is not None:
try:
u = VSUser.objects.get(userid = user_id)
u.useremail = new_email
u.save()
except Exception as e:
email_success = False
print(e.args)
if new_phone is not None:
try:
u = VSUser.objects.get(userid = user_id)
                # if it's a blank string, set it to None so that SQL Server stores it as NULL
if new_phone == "":
new_phone = None
u.phone = new_phone
u.save()
except Exception as e:
phone_success = False
print(e.args)
if new_org is not None:
try:
#check if the user already has a record in this table that corresponds to the organization
q2 = UserOrganization.objects.filter(orgid__in = Organization.objects.filter(orgname = new_org), userid = user_id)
if q2.count() == 1:
#if there is one matching user organization relationship that already exists, then you change that one back to active
#archive the old userorg
UserOrganization.objects.filter(userid = user_id).update(archive = True)
q2.update(archive = False)
elif q2.count() > 1:
                    # tell the end user there is something wrong with the data storage; there should never be more than
                    # one user-organization relationship where the userid and orgid are the same
raise Exception
else:
#if there is no existing relationship for the userid and orgid, then we create a new record
UserOrganization.objects.filter(userid = user_id).update(archive = True)
try:
org_obj = Organization.objects.get(orgname = new_org)
user_obj = VSUser.objects.get(userid = user_id)
uo = UserOrganization(userid = user_obj, orgid = org_obj)
uo.save()
except Exception as e:
print(e.args)
except Exception as e:
print(e.args)
org_success = False
if assoc_proj_id is not None:
try:
#ideally we should use get but just in case there are multiple instances
user_proj_obj = UserProject.objects.filter(projectid = assoc_proj_id, userid = user_id).update(archive = Case(
When(archive = True, then = Value(False)),
When(archive = False, then = Value(True)),
When(archive__isnull = True, then = Value(True))))
except Exception as e:
print(e.args)
data = {"email_success": email_success,
"phone_success": phone_success,
"org_success": org_success,
}
return JsonResponse(data)
else:
raise Http404
@login_required(login_url='/login/')
def new_user_form(request):
    # the new user form loads through an ajax call to fill in the modal
#params: an ajax post request to submit data/ an ajax get request to get a new form (when modal is clicked)
#return: confirmation if the data was saved properly in the database
if request.is_ajax():
errors = None
if request.method == 'POST':
form = UserForm(request.POST)
if form.is_valid():
form = UserForm()
else:
errors = form.errors
else:
form = UserForm()
context = {
'form': form,
}
data = {
'html_page': render_to_string('ProjectSurveillance/newmodal.html', context, request=request)
}
return JsonResponse(data)
else:
raise Http404
@login_required(login_url='/login/')
def new_proj_form(request):
#the new project page loads through an ajax call to fill in the modal
#params: an ajax post request for submitting new info/ an ajax get request to get a new form (when modal is clicked)
#return: confirmation if the data was successfully saved into the database
if request.is_ajax():
errors = None
if request.method == 'POST':
form = ProjectForm(request.POST)
if form.is_valid():
#commit = false if you want to save something else before putting in database
#project = form.save(commit=True)
#p = Project(projname=form.cleaned_data["project_name"], intext=form.clean_data["intext"])
#p.save()
#entered_proj = Project.objects.latest('projectid')
form = ProjectForm()
else:
errors = form.errors
else:
form = ProjectForm()
context = {
'form': form,
}
data = {
'html_page': render_to_string('ProjectSurveillance/newmodal.html', context, request=request)
}
return JsonResponse(data)
else:
raise Http404
@login_required(login_url='/login/')
def new_org_form(request):
    # the new organization form loads through an ajax call to fill in the modal
#params: must be an ajax post request to submit new data/ an ajax get request to get a new form (when modal is clicked)
#return: confirmation of whether or not the data was successfully entered
if request.is_ajax():
errors = None
if request.method == 'POST':
form = OrgForm(request.POST)
if form.is_valid():
form = OrgForm()
else:
errors = form.errors
else:
form = OrgForm()
context = {
'form': form,
}
data = {
'html_page': render_to_string('ProjectSurveillance/newmodal.html', context, request=request)
}
return JsonResponse(data)
else:
raise Http404
def reports(request):
#WIP
if request.method == "GET":
context = {}
return render(request, "ProjectSurveillance/report.html", context)
def signup(request):
#This function is used to create another user
#param: a standard post request
#return: either redirects back to the dashboard if user was successfully created or it renders the same form they sent as a request
if request.method == 'POST':
form = SignUpForm(request.POST)
if form.is_valid():
form.save()
            username = form.cleaned_data.get('username')
raw_password = form.cleaned_data.get('password1')
user = authenticate(username=username, password=raw_password)
login(request,user)
return redirect('dashboard')
else:
form = SignUpForm()
return render(request, 'ProjectSurveillance/signup.html', {'form': form})
def validate_username(request):
    # This function is called from the signup html and checks whether the username already exists.
#param: GET request, AJAX?
#return: json response
username = request.GET.get('username', None)
data = {
'is_taken': User.objects.filter(username__iexact=username).exists()
}
return JsonResponse(data)
| 42.238921 | 178 | 0.59251 | 2,526 | 21,922 | 4.998812 | 0.163104 | 0.019561 | 0.018294 | 0.013305 | 0.413954 | 0.370476 | 0.345688 | 0.311713 | 0.300467 | 0.282252 | 0 | 0.007151 | 0.32383 | 21,922 | 518 | 179 | 42.320463 | 0.844701 | 0.199845 | 0 | 0.493188 | 0 | 0 | 0.082006 | 0.026136 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040872 | false | 0.00545 | 0.038147 | 0 | 0.125341 | 0.027248 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4497d26024dfdc2730aaaf21bcb0e80805684f50 | 7,806 | py | Python | optur/storages/backends/mysql.py | ytsmiling/optur | cbc56c60b322ea764592f01758798f745199b455 | [
"MIT"
] | 1 | 2022-01-19T09:18:15.000Z | 2022-01-19T09:18:15.000Z | optur/storages/backends/mysql.py | ytsmiling/optur | cbc56c60b322ea764592f01758798f745199b455 | [
"MIT"
] | null | null | null | optur/storages/backends/mysql.py | ytsmiling/optur | cbc56c60b322ea764592f01758798f745199b455 | [
"MIT"
] | null | null | null | import time
from typing import Any, Callable, List, Optional
from google.protobuf.timestamp_pb2 import Timestamp
from optur.errors import NotFoundError
from optur.proto.study_pb2 import StudyInfo
from optur.proto.study_pb2 import Trial as TrialProto
from optur.storages.backends.backend import StorageBackend
def _retry(func: Callable[..., Any]) -> Any:
def wrapped_func(self: "MySQLBackend", *args: Any, **kwargs: Any) -> Any:
import pymysql
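        # Retry transient MySQL errors with exponential backoff (0.1 s, 0.2 s, ...), reconnecting first if the connection has dropped.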
s = 0.1
retry_limit = self.retry_limit
while retry_limit > 0:
try:
if not self._connection.open:
self._connection.connect()
return func(self, *args, **kwargs)
except (
pymysql.err.DatabaseError,
pymysql.err.IntegrityError,
pymysql.err.OperationalError,
):
retry_limit -= 1
if retry_limit <= 0:
raise
time.sleep(s)
s *= 2
continue
return func(self, *args, **kwargs)
return wrapped_func
class MySQLBackend(StorageBackend):
def __init__(
self, *, host: str, user: str, port: int = 3306, password: str, database: str
) -> None:
super().__init__()
try:
import pymysql
import pymysql.cursors
except ImportError:
# TODO(tsuzuku)
raise
self._retry_limit = 1
self._connection = pymysql.connect(
user=user,
host=host,
port=port,
password=password,
database=database,
cursorclass=pymysql.cursors.DictCursor,
)
@property
def retry_limit(self) -> int:
return self._retry_limit
@_retry
def drop_all(self) -> None:
with self._connection.cursor() as cursor:
cursor.execute(query="DELETE FROM trial_data;")
cursor.execute(query="DELETE FROM trial;")
cursor.execute(query="DELETE FROM study_info;")
cursor.execute(query="DELETE FROM study;")
@_retry
def init(self) -> None:
with self._connection.cursor() as cursor:
query = """CREATE TABLE IF NOT EXISTS study (
study_id varchar(32) PRIMARY KEY,
timestamp TIMESTAMP(6) NOT NULL,
INDEX study_timestamp (timestamp, study_id)
);"""
cursor.execute(query=query)
query = """CREATE TABLE IF NOT EXISTS study_info
(
study_id varchar(32) NOT NULL PRIMARY KEY,
info BLOB
);
"""
cursor.execute(query=query)
query = """CREATE TABLE IF NOT EXISTS trial (
trial_id varchar(32) PRIMARY KEY,
study_id varchar(32),
timestamp TIMESTAMP(6) NOT NULL,
FOREIGN KEY(study_id) REFERENCES study(study_id),
INDEX trial_study_timestamp (study_id, timestamp, trial_id),
INDEX trial_timestamp (timestamp, trial_id)
);
"""
cursor.execute(query=query)
query = """CREATE TABLE IF NOT EXISTS trial_data (
trial_id varchar(32) PRIMARY KEY,
data BLOB
);
"""
cursor.execute(query=query)
self._connection.commit()
@_retry
def get_current_timestamp(self) -> Optional[Timestamp]:
with self._connection.cursor() as cursor:
query = """SELECT CURRENT_TIMESTAMP(6);"""
cursor.execute(query=query)
data = cursor.fetchall()
timestamp = Timestamp()
timestamp.FromDatetime(next(iter(data[0].values())))
return timestamp
def get_studies(self, timestamp: Optional[Timestamp] = None) -> List[StudyInfo]:
with self._connection.cursor() as cursor:
if timestamp is None:
query = """SELECT info FROM study_info;"""
else:
ms = timestamp.ToMilliseconds()
query = f"""SELECT info FROM study_info INNER JOIN (
SELECT study_id FROM study
WHERE timestamp >= FROM_UNIXTIME({ms}/1000)
) as ts ON study_info.study_id = ts.study_id;
"""
cursor.execute(query=query)
data = cursor.fetchall()
return [StudyInfo.FromString(row["info"]) for row in data]
@_retry
def get_trials(
self, study_id: Optional[str] = None, timestamp: Optional[Timestamp] = None
) -> List[TrialProto]:
if study_id is None:
if timestamp is None:
query = """SELECT data from trial_data;"""
else:
ms = timestamp.ToMilliseconds()
query = f"""SELECT data from trial_data INNER JOIN (
SELECT trial_id FROM trial WHERE timestamp >= FROM_UNIXTIME({ms}/1000)
) as tt ON tt.trial_id = trial_data.trial_id;"""
else:
if timestamp is None:
query = f"""SELECT data from trial_data INNER JOIN (
SELECT trial_id FROM trial WHERE study_id = '{study_id}'
) as tt ON tt.trial_id = trial_data.trial_id;"""
else:
ms = timestamp.ToMilliseconds()
query = f"""SELECT data from trial_data INNER JOIN (
SELECT trial_id FROM trial
WHERE study_id = '{study_id}' AND timestamp >= FROM_UNIXTIME({ms}/1000)
) as tt ON tt.trial_id = trial_data.trial_id;"""
with self._connection.cursor() as cursor:
cursor.execute(query=query)
data = cursor.fetchall()
return [TrialProto.FromString(row["data"]) for row in data]
@_retry
def get_trial(self, trial_id: str, study_id: Optional[str] = None) -> TrialProto:
query = f"""SELECT data FROM trial_data WHERE trial_id = '{trial_id}';"""
with self._connection.cursor() as cursor:
cursor.execute(query=query)
data = cursor.fetchall()
if not data:
raise NotFoundError("")
return TrialProto.FromString(data[0]["data"])
@_retry
def write_study(self, study: StudyInfo) -> None:
with self._connection.cursor() as cursor:
self._connection.begin()
query = f"""
INSERT INTO study VALUES('{study.study_id}', CURRENT_TIMESTAMP(6))
ON DUPLICATE KEY UPDATE timestamp = CURRENT_TIMESTAMP(6);
"""
cursor.execute(query=query)
hex_data = study.SerializeToString().hex()
query = f"""
INSERT INTO study_info VALUES('{study.study_id}', x'{hex_data}')
ON DUPLICATE KEY UPDATE info = VALUES(info);
"""
cursor.execute(query=query)
self._connection.commit()
@_retry
def write_trial(self, trial: TrialProto) -> None:
import pymysql
with self._connection.cursor() as cursor:
self._connection.begin()
query = f"""
INSERT INTO trial VALUES('{trial.trial_id}', '{trial.study_id}', CURRENT_TIMESTAMP(6))
ON DUPLICATE KEY UPDATE study_id = VALUES(study_id), timestamp = CURRENT_TIMESTAMP(6);
"""
try:
cursor.execute(query=query)
except pymysql.err.IntegrityError:
raise NotFoundError("") # TODO(tsuzuku)
query = f"""
INSERT INTO trial_data VALUES('{trial.trial_id}', x'{trial.SerializeToString().hex()}')
ON DUPLICATE KEY UPDATE data = x'{trial.SerializeToString().hex()}';
"""
cursor.execute(query=query)
| 37.893204 | 99 | 0.55816 | 837 | 7,806 | 5.056153 | 0.16129 | 0.036389 | 0.068053 | 0.065217 | 0.527647 | 0.442344 | 0.366257 | 0.289225 | 0.265359 | 0.208648 | 0 | 0.008772 | 0.342813 | 7,806 | 205 | 100 | 38.078049 | 0.816179 | 0.003459 | 0 | 0.443243 | 0 | 0 | 0.350051 | 0.048354 | 0 | 0 | 0 | 0.004878 | 0 | 1 | 0.064865 | false | 0.010811 | 0.064865 | 0.005405 | 0.178378 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
449998dcb11c693a0012e9960aaba5d1b4901d48 | 1,394 | py | Python | pymclevel/test/templevel.py | bennettdc/MCEdit-Unified | 90abfb170c65b877ac67193e717fa3a3ded635dd | [
"0BSD"
] | 237 | 2018-02-04T19:13:31.000Z | 2022-03-26T03:06:07.000Z | pymclevel/test/templevel.py | bennettdc/MCEdit-Unified | 90abfb170c65b877ac67193e717fa3a3ded635dd | [
"0BSD"
] | 551 | 2015-01-01T02:36:53.000Z | 2018-02-01T00:03:12.000Z | pymclevel/test/templevel.py | bennettdc/MCEdit-Unified | 90abfb170c65b877ac67193e717fa3a3ded635dd | [
"0BSD"
] | 97 | 2015-01-02T01:31:12.000Z | 2018-01-22T05:37:47.000Z | import atexit
import os
from os.path import join
import shutil
import tempfile
from pymclevel import mclevel
__author__ = 'Rio'
tempdir = os.path.join(tempfile.gettempdir(), "pymclevel_test")
if not os.path.exists(tempdir):
os.mkdir(tempdir)
def mktemp(suffix):
td = tempfile.mkdtemp(suffix, dir=tempdir)
os.rmdir(td)
return td
class TempLevel(object):
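    # Copies (or creates) the given level under a temporary path, opens it with mclevel, and removes the copy at interpreter exit.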
def __init__(self, filename, createFunc=None):
if not os.path.exists(filename):
filename = join("testfiles", filename)
tmpname = mktemp(os.path.basename(filename))
if os.path.exists(filename):
if os.path.isdir(filename):
shutil.copytree(filename, tmpname)
else:
shutil.copy(filename, tmpname)
elif createFunc:
createFunc(tmpname)
else:
raise IOError("File %s not found." % filename)
self.tmpname = tmpname
self.level = mclevel.fromFile(tmpname)
atexit.register(self.removeTemp)
def __del__(self):
if hasattr(self, 'level'):
self.level.close()
del self.level
self.removeTemp()
def removeTemp(self):
if hasattr(self, 'tmpname'):
filename = self.tmpname
if os.path.isdir(filename):
shutil.rmtree(filename)
else:
os.unlink(filename)
| 24.892857 | 63 | 0.601148 | 155 | 1,394 | 5.322581 | 0.367742 | 0.058182 | 0.043636 | 0.026667 | 0.106667 | 0.065455 | 0 | 0 | 0 | 0 | 0 | 0 | 0.29627 | 1,394 | 55 | 64 | 25.345455 | 0.840979 | 0 | 0 | 0.116279 | 0 | 0 | 0.040172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.093023 | false | 0 | 0.139535 | 0 | 0.27907 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
449a09fa6cac651fdb37c7759dc5ded62edc72b2 | 5,157 | py | Python | dags/oss_know/libs/github/init_profiles.py | HexaemeronFsk/airflow-jobs | 674f4c15f6889653bf5578117b085ef794c7b3f4 | [
"Apache-2.0"
] | null | null | null | dags/oss_know/libs/github/init_profiles.py | HexaemeronFsk/airflow-jobs | 674f4c15f6889653bf5578117b085ef794c7b3f4 | [
"Apache-2.0"
] | null | null | null | dags/oss_know/libs/github/init_profiles.py | HexaemeronFsk/airflow-jobs | 674f4c15f6889653bf5578117b085ef794c7b3f4 | [
"Apache-2.0"
] | null | null | null | import itertools
from loguru import logger
from opensearchpy.helpers import scan as os_scan
from oss_know.libs.base_dict.opensearch_index import OPENSEARCH_INDEX_GITHUB_COMMITS, \
OPENSEARCH_INDEX_GITHUB_ISSUES_TIMELINE
from oss_know.libs.util.opensearch_api import OpensearchAPI
from oss_know.libs.util.base import get_opensearch_client
def load_github_ids_by_repo(opensearch_conn_infos, owner, repo):
"""Get GitHub users' ids from GitHub assigned owner and repo."""
opensearch_client = get_opensearch_client(opensearch_conn_infos)
init_profile_ids = load_ids_from_issues_timeline(opensearch_client, owner,
repo)
init_profile_ids += load_ids_from_commits(opensearch_client, owner, repo)
return init_profile_ids
def load_ids_from_commits(opensearch_client, owner, repo):
"""Get GitHub users' ids from GitHub commits."""
logger.debug(f'calling load_ids_by_github_commits for {owner}/{repo}')
res = get_profiles_from_os(opensearch_client, owner, repo,
index=OPENSEARCH_INDEX_GITHUB_COMMITS)
if not res:
logger.info(f"There's no github commits in {repo}")
return []
    # use a set to de-duplicate GitHub author and committer ids
all_commits_ids = set()
for commit in res:
raw_data = commit["_source"]["raw_data"]
if ("author" in raw_data) and raw_data["author"] and ("id" in raw_data["author"]):
all_commits_ids.add(raw_data["author"]["id"])
if ("committer" in raw_data) and raw_data["committer"] and ("id" in raw_data["committer"]):
all_commits_ids.add(raw_data["committer"]["id"])
return list(all_commits_ids)
def load_ids_from_issues_timeline(opensearch_client, owner, repo):
"""Get GitHub users' ids from GitHub issues timeline ."""
logger.debug(f'calling load_ids_by_github_issues_timeline for {owner}/{repo}')
res = get_profiles_from_os(opensearch_client, owner, repo,
index=OPENSEARCH_INDEX_GITHUB_ISSUES_TIMELINE)
if not res:
logger.info(f"There's no github issues' timeline in {repo}")
return []
all_issues_timeline_users = set()
for issue_timeline in res:
issue_timeline_raw_data = issue_timeline["_source"]["raw_data"]
if issue_timeline_raw_data["event"] == "cross-referenced":
try:
all_issues_timeline_users.add(issue_timeline_raw_data["actor"]["id"])
all_issues_timeline_users.add(issue_timeline_raw_data["source"]["issue"]["user"]["id"])
except KeyError as e:
logger.info(f"The key not exists in issue_timeline_raw_data :{e}")
except TypeError as e:
logger.info(f"The value is null in issue_timeline_raw_data :{e}")
elif issue_timeline_raw_data["event"] != "committed":
for key in ["user", "actor", "assignee"]:
if key in issue_timeline_raw_data:
try:
all_issues_timeline_users.add(issue_timeline_raw_data[key]["id"])
except KeyError as e:
logger.info(f"The key not exists in {issue_timeline_raw_data[key]}:{e}")
except TypeError as e:
logger.info(f"The value is null in {issue_timeline_raw_data[key]}:{e}")
return list(all_issues_timeline_users)
def get_profiles_from_os(opensearch_client, owner, repo, index):
"""Get GitHub users by repo and owner and index from opensearch."""
    # Query all GitHub issue records for this owner+repo to extract the GitHub issue users
res = os_scan(client=opensearch_client, index=index,
query={
"track_total_hits": True,
"query": {
"bool": {"must": [
{"term": {
"search_key.owner.keyword": {
"value": owner
}
}},
{"term": {
"search_key.repo.keyword": {
"value": repo
}
}}
]}
}
}, doc_type='_doc', timeout='10m')
logger.info(f'Get GitHub users by {repo} and {owner} and {index} from opensearch.')
return res
def load_github_profiles(github_tokens, opensearch_conn_infos, github_users_ids):
"""Get GitHub profiles by ids."""
logger.debug(f'calling load_github_profiles by {github_users_ids}')
# get ids set;
github_users_ids = list(set(github_users_ids))
# put GitHub user profile into opensearch if it is not in opensearch
github_tokens_iter = itertools.cycle(github_tokens)
opensearch_api = OpensearchAPI()
opensearch_api.put_profile_into_opensearch(github_ids=github_users_ids, github_tokens_iter=github_tokens_iter,
opensearch_client=get_opensearch_client(opensearch_conn_infos))
| 46.459459 | 114 | 0.61528 | 620 | 5,157 | 4.806452 | 0.179032 | 0.051678 | 0.05906 | 0.073826 | 0.531208 | 0.455705 | 0.417785 | 0.415436 | 0.323826 | 0.267785 | 0 | 0.000549 | 0.293969 | 5,157 | 110 | 115 | 46.881818 | 0.817907 | 0.084545 | 0 | 0.166667 | 0 | 0 | 0.169864 | 0.047101 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059524 | false | 0 | 0.071429 | 0 | 0.202381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
922098fba0782c8cc0b4f538f40dc425f77aa77a | 1,457 | py | Python | image_extractor.py | satrajit-chatterjee/DeXpression-PyTorch | 00100385145485cf00b0a1cc53154db623c8a14f | [
"MIT"
] | 6 | 2019-01-18T02:34:21.000Z | 2022-03-22T13:37:11.000Z | image_extractor.py | satrajit-chatterjee/DeXpression-PyTorch | 00100385145485cf00b0a1cc53154db623c8a14f | [
"MIT"
] | 3 | 2019-01-18T02:44:13.000Z | 2019-12-16T14:32:55.000Z | image_extractor.py | satrajit-chatterjee/DeXpression-PyTorch | 00100385145485cf00b0a1cc53154db623c8a14f | [
"MIT"
] | 1 | 2022-03-14T03:29:47.000Z | 2022-03-14T03:29:47.000Z | import glob
from shutil import copyfile
emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"] # Define emotion order
participants = glob.glob("source_emotion\\*") # Returns a list of all folders with participant numbers
for x in participants:
part = "%s" % x[-4:] # store current participant number
for sessions in glob.glob("%s\\*" % x): # Store list of sessions for current participant
for files in glob.glob("%s\\*" % sessions):
current_session = files[20:-30]
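            # The slices above assume the "source_emotion\\Sxxx\\session\\" folder layout (presumably the CK+ dataset export).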
file = open(files, 'r')
emotion = int(float(file.readline())) # emotions are encoded as a float, readline as float, then convert to
# integer.
sourcefile_emotion = glob.glob("source_images\\%s\\%s\\*" % (part, current_session))[-1] # get path for
# last image in sequence, which contains the emotion
sourcefile_neutral = glob.glob("source_images\\%s\\%s\\*" %(part, current_session))[0] # do same for
# neutral image
dest_neut = "sorted_set\\neutral\\%s" % sourcefile_neutral[25:] # Generate path to put neutral image
dest_emot = "sorted_set\\%s\\%s" % (emotions[emotion], sourcefile_emotion[25:]) # Do same for emotion
# containing image
copyfile(sourcefile_neutral, dest_neut) # Copy file
copyfile(sourcefile_emotion, dest_emot) # Copy file
| 60.708333 | 121 | 0.621139 | 178 | 1,457 | 4.983146 | 0.44382 | 0.045096 | 0.047351 | 0.024803 | 0.090192 | 0.090192 | 0.090192 | 0.090192 | 0.090192 | 0 | 0 | 0.010101 | 0.252574 | 1,457 | 23 | 122 | 63.347826 | 0.804408 | 0.284832 | 0 | 0 | 0 | 0 | 0.169492 | 0.070788 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
92232435b0e0b41e1bc75ad98de9764440370bab | 13,549 | py | Python | resultr_format/__init__.py | haykkh/resultr-format | 8188a9b9a011899a58be54c8036edc9207e63948 | [
"MIT"
] | null | null | null | resultr_format/__init__.py | haykkh/resultr-format | 8188a9b9a011899a58be54c8036edc9207e63948 | [
"MIT"
] | null | null | null | resultr_format/__init__.py | haykkh/resultr-format | 8188a9b9a011899a58be54c8036edc9207e63948 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
"""
Makes UCL PHAS results better
"""
__author__ = "Hayk Khachatryan"
__version__ = "0.1.4.4"
__license__ = "MIT"
import argparse
import csv
import sys
import itertools
import pathlib as pathlib
import inquirer
#########################
# #
# #
# functions #
# #
# #
#########################
def goodFormater(badFormat, outputPath, year, length):
    '''Reformats the input results into a dictionary with module names as keys and their respective results as values;
outputs to csv if outputPath is specified
Arguments:
badFormat {dict} -- candNumber : [results for candidate]
outputPath {str} -- the path to output to
year {int} -- the year candidateNumber is in
length {int} -- length of each row in badFormat divided by 2
Returns:
dictionary -- module : [results for module]
saves to file if output path is specified
'''
if year < 3:
devcom = 'PHAS' + badFormat['CAND'][0]
goodFormat = {devcom: []}
else:
goodFormat = {}
# ignore first row cause it's just 'Mark' & 'ModuleN'
for row in list(badFormat.values())[1:]:
if year < 3: # take first column = devcom into consideration
goodFormat[devcom].append(int(row[0])) # add first val to devcom
for i in range(length-1):
# if a key for that module doesn't exist, initialize with empt array
# .upper to convert all module names to uppercase
goodFormat.setdefault(row[(2 * i) + 1].upper(), [])
# add value of module to module
goodFormat[row[(2*i)+1].upper()].append(int(row[2*(i + 1)]))
else: # no more devcom
for i in range(length-1):
# if a key for that module doesn't exist, initialize with empt array
# .upper to convert all module names to uppercase
goodFormat.setdefault(row[(2 * i)].upper(), [])
# add value of module to module
goodFormat[row[(2*i)].upper()].append(int(row[(2*i) + 1]))
# pop the zeros
goodFormat.pop('0', None)
goodFormat['Averages'] = everyonesAverage(year, badFormat, length)
if outputPath is not None: # if requested to reformat and save to file
results = csv.writer(outputPath.open(mode='w'), delimiter=',')
# write the keys (module names) as first row
results.writerow(goodFormat.keys())
# zip module results together, fill modules with less people using empty values
# add row by row
results.writerows(itertools.zip_longest(
*goodFormat.values(), fillvalue=''))
return goodFormat
def myGrades(year, candidateNumber, badFormat, length):
'''returns final result of candidateNumber in year
Arguments:
year {int} -- the year candidateNumber is in
candidateNumber {str} -- the candidateNumber of candidateNumber
badFormat {dict} -- candNumber : [results for candidate]
length {int} -- length of each row in badFormat divided by 2
Returns:
int -- a weighted average for a specific candidate number and year
'''
weights1 = [1, 1, 1, 1, 0.5, 0.5, 0.5, 0.5]
weights2 = [1, 1, 1, 1, 1, 1, 0.5, 0.5]
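    # Assumed module credit weightings: the 0.5 entries are half-unit modules, and the divisor is the year's total credit count.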
if year == 1:
myFinalResult = sum([int(badFormat[candidateNumber][2*(i + 1)])
* weights1[i] for i in range(length-1)]) / 6
elif year == 2:
myFinalResult = sum([int(badFormat[candidateNumber][2*(i + 1)])
* weights2[i] for i in range(length-1)]) / 7
elif year == 3:
myFinalResult = sum([int(badFormat[candidateNumber][(2*i)+1])
* weights2[i] for i in range(length-1)]) / 7
elif year == 4:
myFinalResult = sum([int(badFormat[candidateNumber][(2*i)+1])
for i in range(length-1)]) / 8
return myFinalResult
def myRank(grade, badFormat, year, length):
'''rank of candidateNumber in year
Arguments:
grade {int} -- a weighted average for a specific candidate number and year
badFormat {dict} -- candNumber : [results for candidate]
year {int} -- year you are in
length {int} -- length of each row in badFormat divided by 2
Returns:
int -- rank of candidateNumber in year
'''
return int(sorted(everyonesAverage(year, badFormat, length), reverse=True).index(grade) + 1)
def everyonesAverage(year, badFormat, length):
''' creates list of weighted average results for everyone in year
Arguments:
year {int}
badFormat {dict} -- candNumber : [results for candidate]
length {int} -- length of each row in badFormat divided by 2
returns:
list -- weighted average results of everyone in year
'''
return [myGrades(year, cand, badFormat, length) for cand in list(badFormat.keys())[1:]]
def askInitial():
'''Asks the user for what it wants the script to do
Returns:
[dictionary] -- answers to the questions
'''
return inquirer.prompt([
inquirer.Text(
'inputPath', message="What's the path of your input file (eg input.csv)"),
inquirer.List(
'year',
message="What year are you in",
choices=[1, 2, 3, 4]
),
inquirer.Checkbox(
'whatToDo',
message="What can I do for you (select with your spacebar)",
choices=[
"Get your weighted average",
"Get your rank in the year",
"Reformat results by module and output to csv"
]),
])
def askYear():
'''Asks the user for what year they're in
Returns:
[int] -- year
'''
return inquirer.prompt([
inquirer.List(
'year',
message="What year are you in",
choices=[1, 2, 3, 4]
)
])['year']
def askInput():
'''Asks the user for input file path
Returns:
[str] -- input path
'''
return inquirer.prompt([
inquirer.Text(
'inputPath', message="What's the path of your input file (eg input.csv)")
])['inputPath']
def askFormat():
'''Asks user for where to save formatted csv
Returns:
[str] -- output path
'''
return inquirer.prompt([
inquirer.Text(
'formatPath', message="Where shall I save the reformatted csv (eg output.csv)")
])['formatPath']
def askCandidateNumber():
'''Asks the user for their candidate number
Returns:
[str] -- candidate number
'''
return inquirer.prompt([
inquirer.Text('candidateNumber',
message="What is your candidate number")
])['candidateNumber']
def badFormater(input):
    '''Converts candidate number (row[0]) to caps,
sets as a key in dict
loop thru list of candidate's results (row[1:])
if val in results is not 'DA' add val to dictionary
else add 0 to dictionary
Arguments:
input {pathlib.Path} -- path to input file
Returns:
[dict] -- {candidate number : [list of module,result for candidate]}
'''
return {row[0].upper(): [val if val != 'DA' else 0 for val in row[1:]] for row in csv.reader(input.open(
mode='r', newline=''), delimiter=',')}
def main(args):
'''main entry point of app
Arguments:
args {namespace} -- arguments provided in cli
'''
#########################
# #
# #
# prompt #
# #
# #
#########################
if not len(sys.argv) > 1:
initialAnswers = askInitial()
inputPath = pathlib.Path(initialAnswers['inputPath'])
year = int(initialAnswers['year'])
# create a list from every row
badFormat = badFormater(inputPath) # create a list from every row
howManyCandidates = len(badFormat) - 1
        length = int(len(badFormat['CAND'])/2)
finalReturn = []
if "Get your rank in the year" in initialAnswers['whatToDo']:
candidateNumber = askCandidateNumber()
weightedAverage = myGrades(year, candidateNumber, badFormat, length)
rank = myRank(weightedAverage, badFormat, year, length)
if "Get your weighted average" in initialAnswers['whatToDo']:
finalReturn.append('Your weighted average for the year is: {:.2f}%'.format(
weightedAverage))
finalReturn.append('Your rank is {}th of {} ({:.2f} percentile)'.format(
rank, howManyCandidates, (rank * 100) / howManyCandidates))
elif "Get your weighted average" in initialAnswers['whatToDo']:
candidateNumber = askCandidateNumber()
weightedAverage = myGrades(year, candidateNumber, badFormat, length)
finalReturn.append('Your weighted average for the year is: {:.2f}%'.format(
weightedAverage))
if "Reformat results by module and output to csv" in initialAnswers['whatToDo']:
formatOutputPath = pathlib.Path(askFormat())
goodFormat = goodFormater(badFormat, formatOutputPath, year, length)
[print('\n', x) for x in finalReturn]
#########################
# #
# end #
# prompt #
# #
# #
#########################
#########################
# #
# #
# run with #
# cli args #
# #
#########################
if len(sys.argv) > 1:
if not args.input:
inputPath = pathlib.Path(askInput())
else:
inputPath = pathlib.Path(args.input)
if not args.year:
year = int(askYear())
else:
year = int(args.year)
# create a list from every row
badFormat = badFormater(inputPath) # create a list from every row
howManyCandidates = len(badFormat) - 1
        length = int(len(badFormat['CAND'])/2)
finalReturn = []
if args.rank:
if not args.candidate:
candidateNumber = askCandidateNumber()
else:
candidateNumber = args.candidate
weightedAverage = myGrades(year, candidateNumber, badFormat, length)
rank = myRank(weightedAverage, badFormat, year, length)
if args.my:
finalReturn.append('Your weighted average for the year is: {:.2f}%'.format(
weightedAverage))
finalReturn.append('Your rank is {}th of {} ({:.2f} percentile)'.format(
rank, howManyCandidates, (rank * 100) / howManyCandidates))
elif args.my:
if not args.candidate:
candidateNumber = askCandidateNumber()
else:
candidateNumber = args.candidate
weightedAverage = myGrades(year, candidateNumber, badFormat, length)
finalReturn.append('Your weighted average for the year is: {:.2f}%'.format(
weightedAverage))
if args.format is not None:
formatOutputPath = pathlib.Path(args.format)
goodFormat = goodFormater(badFormat, formatOutputPath, year, length)
[print('\n', x) for x in finalReturn]
#########################
# #
# end #
# run with #
# cli args #
# #
#########################
print('')
#########################
# #
# end #
# functions #
# #
# #
#########################
#########################
# #
# #
# good stuff #
# #
# #
#########################
if __name__ == '__main__':
#########################
# #
# #
# argparse #
# #
# #
#########################
parser = argparse.ArgumentParser(
description='Makes UCL PHAS results better')
parser.add_argument('--input', '-i',
type=str, help="csv file to import")
parser.add_argument('--format', '-f', type=str,
help="reformats results by module and exports it to file specified")
parser.add_argument(
'--my', '-m', action="store_true", help="returns your weighted average for the year")
parser.add_argument('--year', '-y', help="specify your year")
parser.add_argument('--rank', '-r', action='store_true',
help="returns your rank in the year")
parser.add_argument('--candidate', '-c',
help="specify your candidate number")
args = parser.parse_args()
#########################
# #
# end #
# argparse #
# #
# #
#########################
main(args)
| 32.259524 | 114 | 0.518857 | 1,372 | 13,549 | 5.102041 | 0.186589 | 0.025714 | 0.003429 | 0.009429 | 0.532714 | 0.485857 | 0.447143 | 0.419857 | 0.393857 | 0.393857 | 0 | 0.012199 | 0.346594 | 13,549 | 419 | 115 | 32.336516 | 0.778493 | 0.303712 | 0 | 0.393258 | 0 | 0 | 0.148821 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061798 | false | 0 | 0.039326 | 0 | 0.157303 | 0.016854 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
922361dea1d8463d9ea2c90bf6568fee4603b6fb | 14,689 | py | Python | ml_boilerplate/preprocess/generic_pipeline.py | mcraig2/ml_boilerplate | 5373f593b76fb115fb7a80dc9b87c633a3fcde48 | [
"MIT"
] | null | null | null | ml_boilerplate/preprocess/generic_pipeline.py | mcraig2/ml_boilerplate | 5373f593b76fb115fb7a80dc9b87c633a3fcde48 | [
"MIT"
] | null | null | null | ml_boilerplate/preprocess/generic_pipeline.py | mcraig2/ml_boilerplate | 5373f593b76fb115fb7a80dc9b87c633a3fcde48 | [
"MIT"
] | null | null | null | """ The Pipeline object in sci-kit learn is very useful for constructing
simple and complex modeling pipelines. However, out of the box it is
cumbersome to build pipelines that involve heterogenous data. Most
transformers assume that the entirety of the input datasets are of the
same dtype. So, how do you scale your numeric columns, make dummy
variables out of your categorical variables, run TF-IDF on your text
columns, and fit all of this into one pipeline? A possible solution
is:
http://scikit-learn.org/stable/auto_examples/hetero_feature_union.html
But building these FeatureUnion objects in a hardcoded fashion can be
cumbersome in and of itself. The DtypePipeline object here is a solution
to this. You can pass in Pipeline objects for each data type, and it
takes care of the rest.
"""
import bisect
import copy
import numpy as np
import pandas as pd  # used by CategoricalImputer.transform for the 'distribution' strategy
from sklearn.base import BaseEstimator
from sklearn.base import TransformerMixin
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
class DtypeSelector(BaseEstimator, TransformerMixin):
def __init__(self, dtype=None):
""" Initialize with a specific dtype to select. If None,
will select all columns.
:param dtype: a string representing the dtype to select.
Valid values are:
'numeric' - for numbers
'categorical' - for categorical data
'ordinal' - for ordered categorical data
'text' - for free text columns
'datetime' - for datetime columns """
self.dtype = dtype
def __select_cat(self, X, ordered=False):
""" Selects categorical columns, either categorical or ordinal.
:param X: the dataset to select from
:param ordered: if True, will return the ordinal columns,
otherwise will return the categorical columns
:return: a list of columns representing the categorical or
ordinal variables. """
c_cols = X.select_dtypes(include='category').columns
return X[[c for c in c_cols if X[c].dtype.ordered == ordered]]
def fit(self, X, y=None):
""" Fits the dtype selector.
:param X: the input X matrix
:param y: optional target labels
:return: returns self. """
return self
def transform(self, X):
""" Selects the given input by selecting the specified
column types.
:param X: the input X matrix
:return: a subset of X, with the specified dtypes selected. """
if self.dtype == 'numeric':
return X.select_dtypes(include=[np.number])
elif self.dtype == 'categorical':
return self.__select_cat(X, ordered=False)
elif self.dtype == 'ordinal':
return self.__select_cat(X, ordered=True)
elif self.dtype == 'text':
return X.select_dtypes(include='object')
elif self.dtype == 'datetime':
return X.select_dtypes(include='datetime')
else:
return X
class ColSelector(BaseEstimator, TransformerMixin):
def __init__(self, column=None):
""" Initialize with a specific column to select. If None,
will select all columns.
        :param column: a string representing the column to select. """
self.column = column
def fit(self, X, y=None):
""" Fits the column selector.
:param X: the input X matrix
:param y: optional target labels
:return: returns self. """
return self
def transform(self, X):
""" Selects the given input by selecting the specified column.
:param X: the input X matrix
:return: a subset of X, with the specified column selected. """
if not self.column:
return X
return X[self.column]
class CategoricalEncoder(BaseEstimator, TransformerMixin):
def __init__(self):
""" Initializes the categorical encoder transform. """
self._lbl = LabelEncoder()
self._onehot = OneHotEncoder(handle_unknown='ignore')
def fit(self, X, y=None):
""" Fits the categorical encoder to the given dataset.
:param X: the input dataset
:param y: optional target labels.
:return: returns self. """
self._lbl.fit(X)
self._onehot.fit(self._lbl.transform(X).reshape(-1, 1))
return self
def transform(self, X):
""" Transforms the input categorical variables into a
one/zero categorical encoding.
:param X: the input dataset
:return: the transformed input dataset. """
# The bisection is to ensure that LabelEncoder can handle unseen
# categories in the test set
if isinstance(self._lbl.classes_[0], str):
catch_val = '<unknown>'
else:
catch_val = -1
X = X.map(lambda x: catch_val if x not in self._lbl.classes_ else x)
lbl_classes = self._lbl.classes_.tolist()
bisect.insort_left(lbl_classes, catch_val)
self._lbl.classes_ = np.array(lbl_classes)
return self._onehot.transform(self._lbl.transform(X).reshape(-1, 1))
class CategoricalImputer(BaseEstimator, TransformerMixin):
def __init__(self, strategy='most_frequent'):
""" Initializes the categorical imputer.
:param strategy: either 'most_frequent' or 'distribution'.
If 'most_frequent', will impute the missing values
with the most frequent value. If 'distribution', will
create a discrete distribution based on the frequencies
and will sample from this distribution. """
self.strategy = strategy
def fit(self, X, y=None):
""" Fits the categorical imputer.
:param X: the input X matrix
:param y: optionally the target labels """
counts = X.value_counts()
self.counts = counts / counts.sum()
return self
def transform(self, X):
""" Fills the missing values with the most common value (if
'most_frequent' is the strategy) or sampled from the
distribution.
:param X: the input dataset
:return: the transformed dataset with the missing values
filled in. """
if self.strategy == 'most_frequent':
X = X.fillna(self.counts.index[0])
elif self.strategy == 'distribution':
missing = np.random.choice(self.counts.index,
X.shape[0], p=self.counts)
missing = pd.DataFrame(missing, index=X.index, columns=X.columns)
X = X.fillna(missing)
return X
class DtypePipeline(BaseEstimator, TransformerMixin):
def __init__(self, numeric=None, categorical=None,
ordinal=None, text=None, datetime=None):
""" Builds the model tree with the given parameters.
:param numeric: a Pipeline object to fit and transform all
numeric columns
:param categorical: a Pipeline object to fit and transform
to all categorical columns
:param ordinal: a Pipeline object to fit and transform to
all categorical columns
:param text: a Pipeline object to fit and transform to all
text columns
:param datetime: a Pipeline object to fit and transform to
all datetime columns """
self.model = None
# Prepend each pipeline with a step to select the dtype
self._steps = {'numeric': self.__type_sel(numeric, 'numeric'),
'categorical': self.__type_sel(categorical, 'categorical'),
'ordinal': self.__type_sel(ordinal, 'ordinal'),
'text': self.__type_sel(text, 'text'),
'datetime': self.__type_sel(datetime, 'datetime')}
def __type_sel(self, pipeline, dtype):
""" Prepends a step in a pipeline to select the given dtype first.
:param pipeline: the pipeline to prepend to
:param dtype: the dtype to add to
:return: returns a Pipeline that selects the dtype and then
performs the input pipeline. """
if not pipeline:
return pipeline
select_step = [(str(dtype), DtypeSelector(dtype=dtype))]
return Pipeline(select_step + copy.deepcopy(pipeline.steps))
def __col_sel(self, pipeline, col):
""" Prepends a column selector transform to a pipeline.
:param pipeline: the pipeline to prepend to
:param col: the column to select first
:return: the modified pipeline object. """
if not pipeline:
return pipeline
# We have to rename the steps in the pipeline so that there aren't
# any steps with the same name
select_step = [('{}_select'.format(col), ColSelector(column=col))]
pipeline_steps = [('{}_{}'.format(col, name), step)
for name, step in copy.deepcopy(pipeline.steps)]
return Pipeline(select_step + pipeline_steps)
def __per_col(self, X, dtype):
""" Expands the pipeline to apply it to each column for a given dtype.
:param X: the input dataset
:param dtype: the dtype to loop through
:return: a FeatureUnion object where each item is a pipeline
that operates on a given column. """
pipeline = self._steps[dtype]
if not pipeline:
return None
return FeatureUnion(transformer_list=[
(col, self.__col_sel(Pipeline(pipeline.steps[1:]), col))
for col in DtypeSelector(dtype=dtype).fit_transform(X).columns
])
def __str__(self):
""" Pretty prints the model tree. """
def str_helper(model, lvl=0):
is_feat = isinstance(model, FeatureUnion)
steps = model.transformer_list if is_feat else model.steps
model_str = model.__class__.__name__ + '('
for step, func in steps:
tabs = ''.join(['\t'] * (lvl + 1))
model_str += "\n{}('{}', ".format(tabs, step)
if not isinstance(func, (FeatureUnion, Pipeline)):
model_str += '{})'.format(func.__class__.__name__)
else:
rest = str_helper(func, lvl=lvl + 1)
model_str += '{})'.format(rest)
return model_str + ')'
return str_helper(self.model, lvl=0)
def build_model_tree(self, X, y=None):
""" Builds the model tree, given the input data. The model tree
cannot be created until this point because we don't know what
the columns are until this point. The model tree consists of
FeatureUnions that transform each group of columns correctly.
:param X: the input dataset
:param y: optional labels, may be required for some transformations
in your pipeline
:return: the model tree object, as either a FeatureUnion or
Pipeline object. """
transform_list = [('numeric', self._steps['numeric']),
('category', self.__per_col(X, 'categorical')),
('ordinal', self.__per_col(X, 'ordinal')),
('text', self.__per_col(X, 'text')),
('datetime', self.__per_col(X, 'datetime'))]
return FeatureUnion([step for step in transform_list if step[1]])
def fit(self, X, y=None):
""" Fits the pipeline to the given data.
:param X: the input X matrix
:param y: optional target labels, may be required for some
transformations in your pipeline
:return: the fitted DtypePipeline object. """
self.model = self.build_model_tree(X, y=y)
self.model.fit(X, y)
return self
def transform(self, X):
""" Transforms the pipeline to the given data.
:param X: the input X matrix
:return: the transformed X matrix. """
return self.model.transform(X)
if __name__ == '__main__':
import os
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold
from sklearn.preprocessing import Imputer
from sklearn.preprocessing import StandardScaler
# Read the Titanic dataset in to illustrate this use
curr_dir = os.path.dirname(os.path.abspath(__file__))
train = pd.read_csv('{}/../datasets/titanic_train.csv'.format(curr_dir))
test = pd.read_csv('{}/../datasets/titanic_test.csv'.format(curr_dir))
# Make sure each column is the correct dtype
for col in ['Pclass', 'SibSp', 'Parch', 'Embarked', 'Sex']:
train[col] = pd.Categorical(train[col], ordered=False)
test[col] = pd.Categorical(test[col], ordered=False)
train['Name'] = train['Name'].astype('object')
test['Name'] = test['Name'].astype('object')
dropcols = ['PassengerId', 'Ticket', 'Cabin']
train = train.drop(dropcols, axis=1)
test = test.drop(dropcols, axis=1)
X, y = train.drop('Survived', axis=1), train['Survived']
###########################################################
# Make the pipelines and combine them using DtypePipeline #
###########################################################
num_pipeline = Pipeline([('impute', Imputer()),
('scaler', StandardScaler())])
cat_pipeline = Pipeline([('impute', CategoricalImputer()),
('dummy', CategoricalEncoder())])
txt_pipeline = Pipeline([('tfidf', TfidfVectorizer())])
preprocessor = DtypePipeline(numeric=num_pipeline,
categorical=cat_pipeline,
text=txt_pipeline)
model = Pipeline([('preprocess', preprocessor),
('logistic', LogisticRegression())])
# Fit the model and get CV results
cv = KFold(n_splits=3)
    for train_idx, test_idx in cv.split(X):
        model.fit(X.iloc[train_idx, :], y.iloc[train_idx])
        yhat = model.predict(X.iloc[test_idx, :])
        # mean_squared_error returns the MSE, so take the square root to report RMSE
        print('RMSE: {:.3f}'.format(mean_squared_error(y.iloc[test_idx], yhat) ** 0.5))
| 38.054404 | 82 | 0.601675 | 1,738 | 14,689 | 4.975834 | 0.20023 | 0.014801 | 0.013529 | 0.019426 | 0.251156 | 0.192414 | 0.173797 | 0.16339 | 0.113552 | 0.088113 | 0 | 0.001848 | 0.300225 | 14,689 | 385 | 83 | 38.153247 | 0.839479 | 0.370754 | 0 | 0.156627 | 0 | 0 | 0.064557 | 0.007913 | 0 | 0 | 0 | 0 | 0 | 1 | 0.13253 | false | 0.006024 | 0.10241 | 0 | 0.421687 | 0.006024 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
92248f27e069d6c0f5243445c0b5840242d3af37 | 455 | py | Python | provider/urls.py | Creationeers/Django_Barebone_Rest | f7e414575a5e6ff6324cef7e4ceccac0775eba78 | [
"Apache-2.0"
] | null | null | null | provider/urls.py | Creationeers/Django_Barebone_Rest | f7e414575a5e6ff6324cef7e4ceccac0775eba78 | [
"Apache-2.0"
] | 7 | 2020-05-14T23:03:48.000Z | 2022-02-10T08:49:02.000Z | provider/urls.py | Creationeers/Django_Barebone_Rest | f7e414575a5e6ff6324cef7e4ceccac0775eba78 | [
"Apache-2.0"
] | null | null | null | from django.urls import path, include
from rest_framework_simplejwt import views as jwt_views
from .views import (RegisterUserView)
AUTH_PATTERNS = [
path('token/', jwt_views.TokenObtainPairView.as_view(), name='token-obtain'),
path('token/refresh/', jwt_views.TokenRefreshView.as_view(), name='token-refresh')
]
urlpatterns = [
path('auth/', include(AUTH_PATTERNS)),
path('register/', RegisterUserView.as_view(), name='register-user')
] | 35 | 86 | 0.742857 | 56 | 455 | 5.857143 | 0.446429 | 0.073171 | 0.091463 | 0.091463 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112088 | 455 | 13 | 87 | 35 | 0.811881 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.272727 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
92278d4c99a794fc39f30624185168097ec4b202 | 1,400 | py | Python | url_fetcher.py | netarachelhershko/crawler | 22a5b41a768fae7415ad30cc6aec97063f2d07ce | [
"MIT"
] | null | null | null | url_fetcher.py | netarachelhershko/crawler | 22a5b41a768fae7415ad30cc6aec97063f2d07ce | [
"MIT"
] | null | null | null | url_fetcher.py | netarachelhershko/crawler | 22a5b41a768fae7415ad30cc6aec97063f2d07ce | [
"MIT"
] | null | null | null | from html_url_extractor import HtmlUrlExtractor
from request_getter import RequestGetter
from sitemap_fetcher import SitemapFetcher
try:
    from urlparse import urljoin  # Python 2
except ImportError:
    from urllib.parse import urljoin  # Python 3
class UrlFetcher(object):
""" Handles url fetching from both html source and sitemaps """
def __init__(self, request_limit=10):
self.html_url_extractor = HtmlUrlExtractor()
self.sitemap_fetcher = SitemapFetcher(request_limit)
self.requests_getter = RequestGetter(request_limit)
def set_request_limit(self, request_rate_limit):
self.sitemap_fetcher.set_request_limit(request_rate_limit)
self.requests_getter.set_request_limit(request_rate_limit)
def fetch_from(self, urls):
"""
:param urls: A list of urls to fetch all child urls from
:return: A list of child urls found within html source and sitemaps
"""
children = self._get_html_urls(urls)
urls_from_sitemap = self.sitemap_fetcher.fetch_from(urls)
children.extend(urls_from_sitemap)
return children
def _get_html_urls(self, urls):
html_contents = self.requests_getter.get_content_from(urls)
results = []
for url, content in zip(urls, html_contents):
results.extend([urljoin(url, child) for child in
self.html_url_extractor.extract_from(content, include_nofollow=False)])
return results
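# Hedged usage sketch (added illustration, not part of the original module):
# it assumes the collaborator classes imported above are available and that the
# seed URL is reachable; it only demonstrates the fetch_from() contract.
if __name__ == '__main__':
    fetcher = UrlFetcher(request_limit=5)
    children = fetcher.fetch_from(['https://example.com/'])
    print('{} child urls found'.format(len(children)))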
| 40 | 98 | 0.712143 | 176 | 1,400 | 5.375 | 0.318182 | 0.07611 | 0.05074 | 0.044397 | 0.065539 | 0.065539 | 0 | 0 | 0 | 0 | 0 | 0.001832 | 0.22 | 1,400 | 34 | 99 | 41.176471 | 0.864469 | 0.129286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.458333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9227e595e3c40b1cac183a271ef1634007be821b | 7,207 | py | Python | lib/db.py | IoTtalk/os-IoTtalk | 3c8609d10fcad040e3de727216507d0369be4cd0 | [
"MIT"
] | 2 | 2021-06-28T13:25:54.000Z | 2021-07-27T08:43:38.000Z | lib/db.py | IoTtalk/IoTtalk | 3c8609d10fcad040e3de727216507d0369be4cd0 | [
"MIT"
] | 1 | 2021-11-24T09:15:40.000Z | 2021-11-24T13:51:23.000Z | lib/db.py | IoTtalk/IoTtalk | 3c8609d10fcad040e3de727216507d0369be4cd0 | [
"MIT"
] | null | null | null |
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session
from ec_config import SQLITE_PATH, MYSQL_HOST, MYSQL_USER, MYSQL_PASS
Base = declarative_base()
engine = None
def connect(db_name):
global engine
if db_name.startswith('sqlite:'):
url = 'sqlite+pysqlite:///' + SQLITE_PATH + '/' + db_name[7:]
else:
url = 'mysql+mysqlconnector://' + MYSQL_USER + ':'\
+ MYSQL_PASS + '@' + MYSQL_HOST + '/' + db_name
engine = create_engine(url, pool_recycle=3600)
def create():
if not engine:
raise Exception('You should invoke connect(db_name) first.')
Base.metadata.create_all(engine)
def get_session():
if not engine:
raise Exception('You should invoke connect(db_name) first.')
return Session(engine)
##############################################################################
##### schema #####
##############################################################################
from sqlalchemy import Column, ForeignKey
from sqlalchemy import Integer, Text, Float, Boolean, String, Enum, Date
from sqlalchemy.ext.declarative import declarative_base
class User(Base): # not describe in document.
__tablename__ = 'User'
__table_args__ = {'sqlite_autoincrement': True}
u_id = Column(Integer, primary_key=True, nullable=False)
u_name = Column(String(255), nullable=False)
passwd = Column(String(255), nullable=False)
class DeviceFeature(Base):
__tablename__ = 'DeviceFeature'
__table_args__ = {'sqlite_autoincrement': True}
df_id = Column(Integer, primary_key=True, nullable=False)
df_name = Column(String(255), nullable=False)
df_type = Column(Enum('input', 'output'), nullable=False)
df_category = Column(Enum('Sight', 'Hearing', 'Feeling', 'Motion', 'Other'), nullable=False)
param_no = Column(Integer, nullable=False)
comment = Column(Text, nullable=False) # not describe in document.
class DeviceModel(Base):
__tablename__ = 'DeviceModel'
__table_args__ = {'sqlite_autoincrement': True}
dm_id = Column(Integer, primary_key=True, nullable=False)
dm_name = Column(String(255), nullable=False)
dm_type = Column(Enum('smartphone', 'wearable', 'other'), nullable=False) # not use now.
class DM_DF(Base):
__tablename__ = 'DM_DF'
__table_args__ = {'sqlite_autoincrement': True}
mf_id = Column(Integer, primary_key=True, nullable=False)
dm_id = Column(Integer, ForeignKey('DeviceModel.dm_id'), nullable=False)
df_id = Column(Integer, ForeignKey('DeviceFeature.df_id'), nullable=False)
class Unit(Base):
__tablename__ = 'Unit'
__table_args__ = {'sqlite_autoincrement': True}
unit_id = Column(Integer, primary_key=True, nullable=False)
unit_name = Column(String(255), nullable=False)
class DF_Parameter(Base):
__tablename__ = 'DF_Parameter'
__table_args__ = {'sqlite_autoincrement': True}
dfp_id = Column(Integer, primary_key=True, nullable=False)
df_id = Column(Integer, ForeignKey('DeviceFeature.df_id'))
mf_id = Column(Integer, ForeignKey('DM_DF.mf_id'))
param_i = Column(Integer, nullable=False) # start from 0
param_type = Column(Enum('int', 'float', 'boolean', 'void', 'string', 'json'), nullable=False)
u_id = Column(Integer, ForeignKey('User.u_id'))
idf_type = Column(Enum('variant', 'sample'))
min = Column(Float, default=0)
max = Column(Float, default=0)
normalization = Column(Boolean, default=0, nullable=False)
unit_id = Column(Integer, ForeignKey('Unit.unit_id'), nullable=False, default=1) # 1 is None, need to change
class Device(Base):
__tablename__ = 'Device'
__table_args__ = {'sqlite_autoincrement': True}
d_id = Column(Integer, primary_key=True, nullable=False)
mac_addr = Column(String(255), nullable=False)
monitor = Column(String(255), nullable=False) # not describe in document.
d_name = Column(String(255), nullable=False)
status = Column(Enum('online', 'offline'), nullable=False)
u_id = Column(Integer, ForeignKey('User.u_id'))
dm_id = Column(Integer, ForeignKey('DeviceModel.dm_id'), nullable=False)
class Project(Base):
__tablename__ = 'Project'
__table_args__ = {'sqlite_autoincrement': True}
p_id = Column(Integer, primary_key=True, nullable=False)
p_name = Column(String(255), nullable=False) # not describe in document.
u_id = Column(Integer, ForeignKey('User.u_id'), nullable=False) # not describe in document.
status = Column(Enum('on', 'off'), nullable=False)
restart = Column(Boolean, nullable=False) # shortcut
exception = Column(Text, nullable=False) # for debug
sim = Column(Enum('on', 'off'), nullable=False)
pwd = Column(String(32), nullable=False) # password
class NetworkApplication(Base):
__tablename__ = 'NetworkApplication'
__table_args__ = {'sqlite_autoincrement': True}
na_id = Column(Integer, primary_key=True, nullable=False)
na_name = Column(String(255), nullable=False)
na_idx = Column(Integer, nullable=False) # not describe in document.
p_id = Column(Integer, ForeignKey('Project.p_id'), nullable=False)
class DeviceObject(Base):
__tablename__ = 'DeviceObject'
__table_args__ = {'sqlite_autoincrement': True}
do_id = Column(Integer, primary_key=True, nullable=False)
dm_id = Column(Integer, ForeignKey('DeviceModel.dm_id'), nullable=False)
p_id = Column(Integer, ForeignKey('Project.p_id'), nullable=False)
do_idx = Column(Integer, nullable=False)
d_id = Column(Integer, ForeignKey('Device.d_id'))
class DFObject(Base):
__tablename__ = 'DFObject'
__table_args__ = {'sqlite_autoincrement': True}
dfo_id = Column(Integer, primary_key=True, nullable=False)
do_id = Column(Integer, ForeignKey('DeviceObject.do_id'), nullable=False)
df_id = Column(Integer, ForeignKey('DeviceFeature.df_id'), nullable=False)
alias_name = Column(String, nullable=False)
class DF_Module(Base):
__tablename__ = 'DF_Module'
__table_args__ = {'sqlite_autoincrement': True}
na_id = Column(Integer, ForeignKey('NetworkApplication.na_id'), primary_key=True, autoincrement=False, nullable=False)
dfo_id = Column(Integer, ForeignKey('DFObject.dfo_id'), primary_key=True, autoincrement=False, nullable=False)
param_i = Column(Integer, primary_key=True, autoincrement=False, nullable=False)
idf_type = Column(Enum('variant', 'sample'))
min = Column(Float, default=0)
max = Column(Float, default=0)
normalization = Column(Boolean, default=0, nullable=False)
color = Column(Enum('red', 'black'), nullable=False) # not describe in document.
class MultipleJoin_Module(Base):
__tablename__ = 'MultipleJoin_Module'
__table_args__ = {'sqlite_autoincrement': True}
na_id = Column(Integer, ForeignKey('NetworkApplication.na_id'), primary_key=True, autoincrement=False, nullable=False)
param_i = Column(Integer, primary_key=True, autoincrement=False, nullable=False)
dfo_id = Column(Integer, ForeignKey('DFObject.dfo_id'), nullable=False)
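# Hedged usage sketch (added illustration, not part of the original module):
# shows the intended call order of the helpers above, using a throwaway SQLite
# database name that connect() resolves against SQLITE_PATH from ec_config.
if __name__ == '__main__':
    connect('sqlite:example.db')
    create()
    session = get_session()
    session.add(Unit(unit_name='None'))  # unit_id 1 is expected to be 'None' (see DF_Parameter)
    session.commit()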
| 42.146199 | 122 | 0.686971 | 866 | 7,207 | 5.430716 | 0.169746 | 0.154795 | 0.095684 | 0.100999 | 0.630023 | 0.54646 | 0.470338 | 0.432915 | 0.375505 | 0.319371 | 0 | 0.007669 | 0.167754 | 7,207 | 170 | 123 | 42.394118 | 0.776425 | 0.04121 | 0 | 0.305344 | 0 | 0 | 0.143668 | 0.010603 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022901 | false | 0.022901 | 0.053435 | 0 | 0.89313 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
922960e3fb0974ea30b7500861328def7470c6f3 | 783 | py | Python | PraticandoKivy/aula1.py | AlexandrePeBrito/Python | 79a09b1fb8e705dc7b6859d977c8916a2d0dd4d0 | [
"MIT"
] | null | null | null | PraticandoKivy/aula1.py | AlexandrePeBrito/Python | 79a09b1fb8e705dc7b6859d977c8916a2d0dd4d0 | [
"MIT"
] | null | null | null | PraticandoKivy/aula1.py | AlexandrePeBrito/Python | 79a09b1fb8e705dc7b6859d977c8916a2d0dd4d0 | [
"MIT"
] | null | null | null | from kivy.app import App
from kivy.uix.behaviors import button
from kivy.uix.button import Button
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.label import Label
class teste(App):
def build(self): #Interface
box = BoxLayout( orientation='vertical')
button=Button(text='Botao 1')
label= Label(text='texto 1')
box.add_widget(button)
box.add_widget(label)
box2 = BoxLayout( orientation='horizontal')
button2=Button(text='Botao 2')
label2= Label(text='texto 2')
box2.add_widget(button2)
box2.add_widget(label2)
box.add_widget(box2)
return box
        # return Button(text='Ola Mundo')  # this return value is what appears on the screen
teste().run()
| 27.964286 | 86 | 0.630907 | 100 | 783 | 4.89 | 0.4 | 0.0818 | 0.08998 | 0.0818 | 0.09407 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020942 | 0.268199 | 783 | 27 | 87 | 29 | 0.832461 | 0.108557 | 0 | 0 | 0 | 0 | 0.066187 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.25 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
922ad101e021f8b01709397177dc8e329b180e68 | 1,798 | py | Python | karbor-1.3.0/karbor/tests/unit/fake_operation_log.py | scottwedge/OpenStack-Stein | 7077d1f602031dace92916f14e36b124f474de15 | [
"Apache-2.0"
] | 1 | 2021-05-23T01:48:25.000Z | 2021-05-23T01:48:25.000Z | karbor-1.3.0/karbor/tests/unit/fake_operation_log.py | scottwedge/OpenStack-Stein | 7077d1f602031dace92916f14e36b124f474de15 | [
"Apache-2.0"
] | 5 | 2019-08-14T06:46:03.000Z | 2021-12-13T20:01:25.000Z | karbor-1.3.0/karbor/tests/unit/fake_operation_log.py | scottwedge/OpenStack-Stein | 7077d1f602031dace92916f14e36b124f474de15 | [
"Apache-2.0"
] | 2 | 2020-03-15T01:24:15.000Z | 2020-07-22T20:34:26.000Z | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_versionedobjects import fields
from karbor import objects
def fake_db_operation_log(**updates):
db_operation_log = {
"id": "36ea41b2-c358-48a7-9117-70cb7617410a",
"project_id": "586cc6ce-e286-40bd-b2b5-dd32694d9944",
"operation_type": "protect",
"checkpoint_id": "dcb20606-ad71-40a3-80e4-ef0fafdad0c3",
"plan_id": "cf56bd3e-97a7-4078-b6d5-f36246333fd9",
"provider_id": "23902b02-5666-4ee6-8dfe-962ac09c3994",
"scheduled_operation_id": "2220f8b1-975d-4621-a872-fa9afb43cb6c",
"status": "failed",
"error_info": "Could not access bank",
"extra_info": "[entries:{'timestamp': '2015-08-27T09:50:51-05:00',"
"'message': 'Doing things'}]"
}
for name, field in objects.OperationLog.fields.items():
if name in db_operation_log:
continue
if field.nullable:
db_operation_log[name] = None
elif field.default != fields.UnspecifiedDefault:
db_operation_log[name] = field.default
else:
raise Exception('db_operation_log needs help with %s.' % name)
if updates:
db_operation_log.update(updates)
return db_operation_log
| 39.086957 | 78 | 0.669077 | 227 | 1,798 | 5.180617 | 0.656388 | 0.07483 | 0.095238 | 0.027211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 0.228587 | 1,798 | 45 | 79 | 39.955556 | 0.74261 | 0.303671 | 0 | 0 | 0 | 0 | 0.379143 | 0.232821 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.071429 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
922bba1b4a76b2a553064b7df320dfa6750e6b3c | 2,776 | py | Python | stage_check/stage_check/OutputRedundancyDatabaseConn.py | 128technology/stage_check | 2c9cdc491bafbcc6ed1a308093fe606dfd37da67 | [
"MIT"
] | 2 | 2020-05-26T15:13:47.000Z | 2021-04-29T18:14:21.000Z | stage_check/stage_check/OutputRedundancyDatabaseConn.py | 128technology/stage_check | 2c9cdc491bafbcc6ed1a308093fe606dfd37da67 | [
"MIT"
] | null | null | null | stage_check/stage_check/OutputRedundancyDatabaseConn.py | 128technology/stage_check | 2c9cdc491bafbcc6ed1a308093fe606dfd37da67 | [
"MIT"
] | null | null | null | """
"""
try:
from stage_check import Output
except ImportError:
import Output
class Base(Output.Base):
"""
"""
def __init__(self):
super().__init__()
self.__full_name = "OutputRedundancyDatabaseConn.Base"
self.status = Output.Status.OK
"""
no_node_data
"""
def proc_no_node_data(self, local_info):
"""
"""
self.status = Output.Status.WARN
self.amend_no_node_data(local_info)
return self.status
def amend_no_node_data(self, local_info):
"""
"""
return True
"""
metric
"""
def proc_metric(self, entry, entry_value):
"""
"""
self.amend_metric(entry, entry_value)
return self.status
def amend_metric(self, entry, entry_value):
"""
"""
return True
"""
missing_data
"""
def proc_missing_data(self):
"""
"""
self.status = Output.Status.FAIL
self.amend_missing_data()
return self.status
def amend_missing_data(self):
"""
"""
return True
"""
"""
def proc_test_result(self,
status,
entry_count,
test_value,
fail_count,
expected_entries):
"""
@entry_count
@test_value
@fail_count
@expected_entries
"""
if status is not None:
self.status = status
self.amend_test_result(
entry_count,
test_value,
fail_count,
expected_entries
)
return self.status
def amend_test_result(self,
entry_count,
test_value,
fail_count,
expected_entries):
"""
@entry_count
@test_value
@fail_count
@expected_entries
"""
return True
"""
"""
def proc_test_result_bad_values(self,
entry_count,
test_value,
fail_count,
expected_entries):
"""
@entry_count
@test_value
@fail_count
@expected_entries
"""
self.amend_test_result_bad_values(
entry_count,
test_value,
fail_count,
expected_entries
)
return self.status
    def amend_test_result_bad_values(self,
entry_count,
test_value,
fail_count,
expected_entries):
"""
@entry_count
@test_value
@fail_count
@expected_entries
"""
return True
| 19.828571 | 60 | 0.482349 | 250 | 2,776 | 4.968 | 0.184 | 0.080515 | 0.112721 | 0.152979 | 0.599839 | 0.497585 | 0.433172 | 0.433172 | 0.433172 | 0.433172 | 0 | 0 | 0.434078 | 2,776 | 139 | 61 | 19.971223 | 0.790579 | 0.096902 | 0 | 0.523077 | 0 | 0 | 0.014865 | 0.014865 | 0 | 0 | 0 | 0 | 0 | 1 | 0.169231 | false | 0 | 0.046154 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9231baa109d9435fe051070cbdf5afd97866fc14 | 5,134 | py | Python | graphlearn/python/nn/tf/app/link_predictor.py | gasdaf/graph-learn | 4a77b39be37bb7507f0e9fb5d4ed40ca623b2ceb | [
"Apache-2.0"
] | 1 | 2021-08-30T03:13:23.000Z | 2021-08-30T03:13:23.000Z | graphlearn/python/nn/tf/app/link_predictor.py | gasdaf/graph-learn | 4a77b39be37bb7507f0e9fb5d4ed40ca623b2ceb | [
"Apache-2.0"
] | null | null | null | graphlearn/python/nn/tf/app/link_predictor.py | gasdaf/graph-learn | 4a77b39be37bb7507f0e9fb5d4ed40ca623b2ceb | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 Alibaba Group Holding Limited. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from graphlearn.python.nn.tf.config import conf
from graphlearn.python.nn.tf.module import Module
from graphlearn.python.nn.tf.layers.linear_layer import LinearLayer
class LinkPredictor(Module):
""" link predictor.
Args:
`dims` is an integer list, in which two adjacent elements stand for the
input and output dimensions of the corresponding layer. We will add the
output layer with dimension 1 automatically.
e.g.
`dims = [256, 64, 16]`, means 3 layers with shape (256, 64), (64, 16)
and (16, 1) exist. The classifier will take inputs whose dimension must
be 256.
active_fn: Activation function for hidden layers' output.
dropout: Dropout rate for hidden layers' output. Default is None, which
means dropout will not be performed. The optional value is a float.
"""
def __init__(self,
name,
dims,
active_fn=tf.nn.relu,
dropout=None):
self.active_func = active_fn
dims.append(1)
self.layers = []
with tf.variable_scope(name, reuse=tf.AUTO_REUSE):
for i in range(len(dims) - 1):
layer = LinearLayer("link_predictor_" + str(i),
input_dim=dims[i],
output_dim=dims[i + 1],
use_bias=True)
self.layers.append(layer)
if dropout is not None:
self.dropout_func = lambda x: tf.nn.dropout(x, keep_prob=1-dropout)
else:
self.dropout_func = None
def predict(self, x):
for i in range(len(self.layers) - 1):
x = self.layers[i].forward(x)
if self.active_func:
x = self.active_func(x)
if self.dropout_func and conf.training:
x = self.dropout_func(x)
# the output logits
logits = tf.squeeze(self.layers[-1].forward(x))
return logits
class SupervisedLinkPredictor(LinkPredictor):
def __init__(self,
name,
dims,
active_fn=tf.nn.relu,
dropout=None):
super(SupervisedLinkPredictor, self).__init__(
name, dims, active_fn, dropout)
def forward(self, src, dst, labels):
""" Return the similarity with shape [batch_size] between `src` and `dst`,
as well as the final loss compared with the ground truth `labels`.
src: The first input tensor with shape [batch_size, dim], where dim must
match the first layer.
dst: The second input tensor, whose shape must be the same with `src`.
labels: A tensor with shape [batch_size], each value must be 1 or 0,
indicating whether the link between `src` and `dst` exists or not.
"""
# The more similar of src and dst, the larger of x
x = 0 - tf.pow(src - dst, 2)
logits = self.predict(x)
labels = tf.cast(labels, logits.dtype)
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
return logits, tf.reduce_mean(loss)
class UnsupervisedLinkPredictor(LinkPredictor):
def __init__(self,
name,
dims,
active_fn=tf.nn.relu,
dropout=None):
super(UnsupervisedLinkPredictor, self).__init__(
name, dims, active_fn, dropout)
def forward(self, src, dst, neg):
""" Return the final loss based on negative sampling.
src: An input tensor with shape [batch_size, dim], where dim must match the
first layer.
dst: A tensor with the same shape of `src`. It has real links to `src`.
neg: A tensor with shape [batch_size, neg_num, dim], which is generated by
negative sampling and has no links to `src`.
"""
x1 = src * dst
true_logits = self.predict(x1)
true_logits = tf.squeeze(true_logits)
dim = src.shape[1]
neg_expand = neg.shape[1]
src = tf.tile(tf.expand_dims(src, axis=1), [1, neg_expand, 1])
src = tf.reshape(src, [-1, dim])
neg = tf.reshape(neg, [-1, dim])
x2 = src * neg
neg_logits = self.predict(x2)
neg_logits = tf.squeeze(neg_logits)
true_loss = tf.nn.sigmoid_cross_entropy_with_logits(
labels=tf.ones_like(true_logits),
logits=true_logits)
neg_loss = tf.nn.sigmoid_cross_entropy_with_logits(
labels=tf.zeros_like(neg_logits),
logits=neg_logits)
loss = tf.reduce_mean(true_loss) + tf.reduce_mean(neg_loss)
return loss
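# Hedged usage sketch (added illustration, not part of the original file); it
# assumes the TF1-style graph mode used throughout this module and only wires
# placeholders through the predictor to show the expected tensor shapes.
if __name__ == '__main__':
    slp = SupervisedLinkPredictor('toy_link_predictor', dims=[128, 32])
    src = tf.placeholder(tf.float32, shape=[None, 128])
    dst = tf.placeholder(tf.float32, shape=[None, 128])
    labels = tf.placeholder(tf.float32, shape=[None])
    logits, loss = slp.forward(src, dst, labels)  # logits: [batch_size], loss: scalar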
| 36.15493 | 80 | 0.653292 | 727 | 5,134 | 4.480055 | 0.308116 | 0.017194 | 0.021492 | 0.024562 | 0.211544 | 0.180841 | 0.16549 | 0.16549 | 0.16549 | 0.152287 | 0 | 0.013628 | 0.242501 | 5,134 | 141 | 81 | 36.411348 | 0.823862 | 0.410012 | 0 | 0.217949 | 0 | 0 | 0.005144 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.089744 | 0 | 0.24359 | 0.012821 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9231c9644edcc00c1f418f1e4bb4808e72a0d692 | 1,099 | py | Python | torchtext/functional.py | parmeet/text | 1fb2aedb48b5ecc5e81741e7c8504486b91655c6 | [
"BSD-3-Clause"
] | null | null | null | torchtext/functional.py | parmeet/text | 1fb2aedb48b5ecc5e81741e7c8504486b91655c6 | [
"BSD-3-Clause"
] | null | null | null | torchtext/functional.py | parmeet/text | 1fb2aedb48b5ecc5e81741e7c8504486b91655c6 | [
"BSD-3-Clause"
] | null | null | null | import torch
from torch import Tensor
from torch.nn.utils.rnn import pad_sequence
from typing import List, Optional
__all__ = [
'to_tensor',
'truncate',
'add_token',
]
def to_tensor(input: List[List[int]], padding_value: Optional[int] = None) -> Tensor:
if padding_value is None:
output = torch.tensor(input, dtype=torch.long)
return output
else:
output = pad_sequence(
[torch.tensor(ids, dtype=torch.long) for ids in input],
batch_first=True,
padding_value=float(padding_value)
)
return output
def truncate(input: List[List[int]], max_seq_len: int) -> List[List[int]]:
output: List[List[int]] = []
for ids in input:
output.append(ids[:max_seq_len])
return output
def add_token(input: List[List[int]], token_id: int, begin: bool = True) -> List[List[int]]:
output: List[List[int]] = []
if begin:
for ids in input:
output.append([token_id] + ids)
else:
for ids in input:
output.append(ids + [token_id])
return output
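# Hedged usage sketch (added illustration, not part of the original module):
# a toy batch of token ids run through the helpers above; the padding id 0 and
# the BOS id 1 are arbitrary choices for the example.
if __name__ == "__main__":
    batch = [[5, 6, 7, 8, 9], [5, 6]]
    batch = truncate(batch, max_seq_len=4)      # [[5, 6, 7, 8], [5, 6]]
    batch = add_token(batch, token_id=1)        # [[1, 5, 6, 7, 8], [1, 5, 6]]
    print(to_tensor(batch, padding_value=0))    # LongTensor of shape (2, 5)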
| 23.891304 | 92 | 0.616015 | 148 | 1,099 | 4.425676 | 0.297297 | 0.085496 | 0.117557 | 0.079389 | 0.20916 | 0.20916 | 0.170992 | 0 | 0 | 0 | 0 | 0 | 0.264786 | 1,099 | 45 | 93 | 24.422222 | 0.810644 | 0 | 0 | 0.323529 | 0 | 0 | 0.023658 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0 | 0.117647 | 0 | 0.323529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
92334af517a1402e63b55a3fe85da09a5408dc14 | 6,901 | py | Python | python/GafferScene/ScriptProcedural.py | PaulDoessel/gaffer-play | 8b72dabb388e12424c230acfb0bd209049b01bd6 | [
"BSD-3-Clause"
] | 1 | 2016-07-31T09:55:09.000Z | 2016-07-31T09:55:09.000Z | python/GafferScene/ScriptProcedural.py | Kthulhu/gaffer | 8995d579d07231988abc92c3ac2788c15c8bc75c | [
"BSD-3-Clause"
] | null | null | null | python/GafferScene/ScriptProcedural.py | Kthulhu/gaffer | 8995d579d07231988abc92c3ac2788c15c8bc75c | [
"BSD-3-Clause"
] | 1 | 2020-02-15T16:15:54.000Z | 2020-02-15T16:15:54.000Z | ##########################################################################
#
# Copyright (c) 2012, John Haddon. All rights reserved.
# Copyright (c) 2013, Image Engine Design Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above
# copyright notice, this list of conditions and the following
# disclaimer.
#
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided with
# the distribution.
#
# * Neither the name of John Haddon nor the names of
# any other contributors to this software may be used to endorse or
# promote products derived from this software without specific prior
# written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
##########################################################################
from __future__ import with_statement
import sys
import IECore
import Gaffer
import GafferScene
class ScriptProcedural( IECore.ParameterisedProcedural ) :
def __init__( self ) :
IECore.ParameterisedProcedural.__init__( self, "Generates geometry from a node within a .gfr script." )
self.parameters().addParameters(
[
IECore.FileNameParameter(
name = "fileName",
description = "The gaffer script which contains a scene to generate geometry from.",
allowEmptyString = False,
check = IECore.FileNameParameter.CheckType.MustExist,
extensions = "gfr",
),
IECore.StringParameter(
name = "node",
description = "The node to generate geometry from.",
defaultValue = "",
),
IECore.FloatParameter(
name = "frame",
description = "The frame to generate geometry at.",
defaultValue = 1,
),
IECore.BoolParameter(
name = "computeBound",
description =
"Determines if the procedural will compute an accurate bound "
"or just not specify a bound. Not specifying a bound can give "
"improved performance in cases where the procedurals will all "
"be expanded immediately anyway.",
defaultValue = True,
),
IECore.StringVectorParameter(
name = "context",
description = "Additional context entries to be used during rendering.",
defaultValue = IECore.StringVectorData( [] ),
userData = {
"parser" : {
"acceptFlags" : IECore.BoolData( True ),
},
},
),
]
)
self.__currentFileName = None
self.__ensureAllRenderedConnection()
def doBound( self, args ) :
plug, context = self.__plugAndContext( args )
if plug is None :
return IECore.Box3f()
sceneProcedural = GafferScene.SceneProcedural( plug, context, "/", args["computeBound"].value )
return sceneProcedural.bound()
def doRender( self, renderer, args ) :
plug, context = self.__plugAndContext( args )
if plug is None :
return
sceneProcedural = GafferScene.SceneProcedural( plug, context, "/", args["computeBound"].value )
renderer.procedural( sceneProcedural )
def __plugAndContext( self, args ) :
if args["fileName"].value != self.__currentFileName :
if args["fileName"].value == "" :
self.__scriptNode = None
else :
self.__scriptNode = Gaffer.ScriptNode()
self.__scriptNode["fileName"].setValue( args["fileName"].value )
self.__scriptNode.load( continueOnError = True )
self.__currentFileName = args["fileName"].value
if self.__scriptNode is None :
return None, None
if not args["node"].value :
return None, None
node = self.__scriptNode.descendant( args["node"].value )
context = Gaffer.Context( self.__scriptNode.context() )
context.setFrame( args["frame"].value )
for i in range( 0, len(args["context"]), 2 ) :
entry = args["context"][i].lstrip( "-" )
context[entry] = eval( args["context"][i+1] )
self.__ensureErrorConnection( node )
with context :
globals = node["out"]["globals"].getValue()
if "option:render:performanceMonitor" in globals and globals["option:render:performanceMonitor"].value :
self.__ensurePerformanceMonitor()
return node["out"], context
__allRenderedConnection = None
@classmethod
def __ensureAllRenderedConnection( cls ) :
if cls.__allRenderedConnection is not None :
return
cls.__allRenderedConnection = GafferScene.SceneProcedural.allRenderedSignal().connect( cls.__allRendered )
@classmethod
def __allRendered( cls ):
if cls.__performanceMonitor is not None :
cls.__printPerformance()
# All the procedural expansion's done, so let's clear various Cortex/Gaffer
# caches to free up some memory.
IECore.ObjectPool.defaultObjectPool().clear()
memoryLimit = Gaffer.ValuePlug.getCacheMemoryLimit()
Gaffer.ValuePlug.setCacheMemoryLimit( 0 )
Gaffer.ValuePlug.setCacheMemoryLimit( memoryLimit )
__errorConnections = {}
@classmethod
def __ensureErrorConnection( cls, node ) :
if node in cls.__errorConnections :
return
cls.__errorConnections[node] = node.errorSignal().connect( cls.__error )
@staticmethod
def __error( plug, source, error ) :
errorContext = "Plug \"%s\"" % source.relativeName( source.ancestor( Gaffer.ScriptNode ) )
if "scene:path" in Gaffer.Context.current() :
path = GafferScene.ScenePlug.pathToString( Gaffer.Context.current()["scene:path"] )
errorContext += ", Location \"%s\"" % path
IECore.msg(
IECore.Msg.Level.Error,
errorContext,
error
)
__performanceMonitor = None
@classmethod
def __ensurePerformanceMonitor( cls ) :
if cls.__performanceMonitor is not None :
return
cls.__performanceMonitor = Gaffer.PerformanceMonitor()
cls.__performanceMonitor.setActive( True )
@classmethod
def __printPerformance( cls ) :
sys.stderr.write( "\nPerformance Monitor\n===================\n\n" )
sys.stderr.write( Gaffer.formatStatistics( cls.__performanceMonitor ) )
IECore.registerRunTimeTyped( ScriptProcedural, typeName = "GafferScene::ScriptProcedural" )
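# Hedged usage sketch (added illustration, not part of the original file): fills
# in the parameters declared above using Cortex's generic Parameter.setValue API;
# the .gfr path and node name are placeholders, and handing the procedural to a
# renderer is left to the host application.
if __name__ == "__main__" :
	procedural = ScriptProcedural()
	procedural.parameters()["fileName"].setValue( IECore.StringData( "/path/to/scene.gfr" ) )
	procedural.parameters()["node"].setValue( IECore.StringData( "SceneWriter" ) )
	procedural.parameters()["frame"].setValue( IECore.FloatData( 10 ) )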
| 30.946188 | 108 | 0.697 | 741 | 6,901 | 6.373819 | 0.384615 | 0.02075 | 0.014398 | 0.013339 | 0.132119 | 0.097819 | 0.097819 | 0.082998 | 0.052086 | 0.052086 | 0 | 0.002495 | 0.186929 | 6,901 | 222 | 109 | 31.085586 | 0.839244 | 0.252717 | 0 | 0.184615 | 0 | 0 | 0.161537 | 0.025146 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.038462 | 0 | 0.215385 | 0.015385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
923501e60d59a0a4529bbd2a7bc6300d6760071d | 5,881 | py | Python | tests/testapps/tests/test_models.py | salexkidd/django-stackstore-model | fb0bb6431dd772a80b8c9d6d2b625eae69562fa9 | [
"MIT"
] | 5 | 2020-05-28T07:04:25.000Z | 2020-09-26T05:29:46.000Z | tests/testapps/tests/test_models.py | salexkidd/django-stackstore-model | fb0bb6431dd772a80b8c9d6d2b625eae69562fa9 | [
"MIT"
] | 1 | 2020-09-26T05:34:19.000Z | 2020-09-26T05:34:19.000Z | tests/testapps/tests/test_models.py | salexkidd/django-stackstore-model | fb0bb6431dd772a80b8c9d6d2b625eae69562fa9 | [
"MIT"
] | null | null | null | from django.conf import settings
from django.test import TestCase
from django.test.utils import override_settings
from copy import deepcopy
from .. import models as testapps_models
from .. import factories as testapps_factories
class StackStoreManagerTest(TestCase):
def test_delete(self):
with self.assertRaises(NotImplementedError):
testapps_models.Sample.objects.all().delete()
def test_force_delete(self):
instance = testapps_factories.Sample()
testapps_models.Sample.objects.all().force_delete()
self.assertNotIn(instance, testapps_models.Sample.objects.all())
def test_latest_from_stack_group(self):
instance_one = testapps_factories.Sample()
instance_two = testapps_factories.Sample()
for i in range(3):
instance_one.save()
instance_two.save()
queryset = testapps_models.Sample.objects.latest_from_stack_group()
self.assertEqual(queryset.count(), 2)
self.assertIn(
testapps_models.Sample.objects.filter(stack_group_uuid=instance_one.stack_group_uuid).latest(),
queryset.filter(stack_group_uuid=instance_one.stack_group_uuid)
)
def test_get_latest_from_stack_group(self, *args, **kwargs):
instance_one = testapps_factories.Sample()
instance_two = testapps_factories.Sample()
for i in range(3):
instance_one.save()
instance_two.save()
instance_one_latest = testapps_models.Sample.objects.get_latest_from_stack_group(
stack_group_uuid=instance_one.stack_group_uuid
)
self.assertEqual(
instance_one_latest,
testapps_models.Sample.objects.filter(stack_group_uuid=instance_one.stack_group_uuid).latest(),
)
class StackStoreModelTest(TestCase):
def test_save(self):
instance = testapps_factories.Sample()
original_pk = instance.pk
instance.test_field = "That is test field"
instance.save()
instance.refresh_from_db()
self.assertNotEqual(original_pk, instance.pk)
def test_save_with_create_new_version_is_false(self):
instance = testapps_factories.Sample()
original_pk = instance.pk
instance.test_field = "That is test field"
instance.save(__create_new_version=False)
self.assertEqual(original_pk, instance.pk)
def test_delete(self):
instance = testapps_factories.Sample()
with self.assertRaises(NotImplementedError):
instance.delete()
def test_force_delete(self):
instance = testapps_factories.Sample()
pk = instance.pk
instance.force_delete()
with self.assertRaises(testapps_models.Sample.DoesNotExist):
testapps_models.Sample.objects.get(pk=pk)
def test_previous_instance(self):
instance_one = testapps_factories.Sample()
instance_one.save()
_ = testapps_factories.Sample()
self.assertEqual(
testapps_models.Sample.objects.filter(stack_group_uuid=instance_one.stack_group_uuid).order_by("pk")[0],
instance_one.previous_instance()
)
def test_previous_instance_if_not_exist(self):
instance_one = testapps_factories.Sample()
with self.assertRaises(testapps_models.Sample.DoesNotExist):
instance_one.previous_instance()
def test_next_instance(self):
instance_one = testapps_factories.Sample()
instance_one_gen_one = deepcopy(instance_one)
instance_one.save()
_ = testapps_factories.Sample()
self.assertEqual(instance_one_gen_one.stack_group_uuid, instance_one.stack_group_uuid)
self.assertEqual(
testapps_models.Sample.objects.filter(stack_group_uuid=instance_one_gen_one.stack_group_uuid).order_by("-pk")[0],
instance_one_gen_one.next_instance()
)
def test_next_instance_if_not_exist(self):
instance_one = testapps_factories.Sample()
with self.assertRaises(testapps_models.Sample.DoesNotExist):
instance_one.next_instance()
def test_latest_instance(self):
instance_one = testapps_factories.Sample()
instance_one_gen_one = deepcopy(instance_one)
_ = testapps_factories.Sample()
for i in range(3):
instance_one.save()
self.assertEqual(
instance_one_gen_one.latest_instance(),
instance_one
)
def test_earliest_instance(self):
instance_one = testapps_factories.Sample()
instance_one_gen_one = deepcopy(instance_one)
_ = testapps_factories.Sample()
for i in range(3):
instance_one.save()
self.assertEqual(
instance_one.earliest_instance(),
instance_one_gen_one
)
def test_same_group_items(self):
instance_list = list()
instance_one = testapps_factories.Sample()
instance_list.append(deepcopy(instance_one))
for i in range(9):
instance_one.save()
instance_list.append(deepcopy(instance_one))
self.assertEqual(instance_list, list(instance_one.same_group_items()))
def test_has_pk_and_stack_group_uuid_if_pk_is_none(self):
test_instance = testapps_models.Sample()
with self.assertRaises(testapps_models.Sample.DoesNotExist):
self.assertIsNone(test_instance.previous_instance())
with self.assertRaises(testapps_models.Sample.DoesNotExist):
self.assertIsNone(test_instance.next_instance())
with self.assertRaises(testapps_models.Sample.DoesNotExist):
self.assertIsNone(test_instance.latest_instance())
with self.assertRaises(testapps_models.Sample.DoesNotExist):
self.assertIsNone(test_instance.earliest_instance())
| 33.605714 | 125 | 0.689679 | 669 | 5,881 | 5.70852 | 0.124066 | 0.123854 | 0.12045 | 0.080649 | 0.756743 | 0.676617 | 0.604609 | 0.568735 | 0.546216 | 0.519508 | 0 | 0.001759 | 0.226492 | 5,881 | 174 | 126 | 33.798851 | 0.837767 | 0 | 0 | 0.5 | 0 | 0 | 0.006972 | 0 | 0 | 0 | 0 | 0 | 0.195313 | 1 | 0.125 | false | 0 | 0.046875 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9235169950838a2e17fec391196466cf8a8f8312 | 6,915 | py | Python | docs/_downloads/d6b1e39143e3255799ec607967cb9223/sample.py | harishbalakrishnan3/Visual-Categorization | 28c5fc4695fca931bb2a49697cf1776dae1e8259 | [
"MIT"
] | null | null | null | docs/_downloads/d6b1e39143e3255799ec607967cb9223/sample.py | harishbalakrishnan3/Visual-Categorization | 28c5fc4695fca931bb2a49697cf1776dae1e8259 | [
"MIT"
] | null | null | null | docs/_downloads/d6b1e39143e3255799ec607967cb9223/sample.py | harishbalakrishnan3/Visual-Categorization | 28c5fc4695fca931bb2a49697cf1776dae1e8259 | [
"MIT"
] | 1 | 2021-03-15T14:00:27.000Z | 2021-03-15T14:00:27.000Z | """
A sample Python script that illustrates how to use the gcm module.
As a first step, we need to find the model's parameters c, w and b (we will assume r = 2).
Here this is done by minimizing the sum of squared errors between the observed and GCM-predicted
probabilities. Once the parameters are found, we use them to compute the corresponding
probabilities with the functions from the gcm module.
The following script illustrates the procedure for Subject 1. GCM-predicted probabilities for all four
categorization types are derived and a predicted-versus-observed graph is plotted at the end.
"""
import pandas as pd
import numpy as np
from scipy.optimize import minimize
from gcm import *
import matplotlib.pyplot as plt
def objective(params):
def SSE(a, b):
n = len(a)
sse = 0
for i in range(n):
sse += (a[i] - b[i]) ** 2
return sse
w, c, b = params[1:numDimensions+1], params[0], params[numDimensions+1:numDimensions+1+totalCategories]
probabilities = []
for i in range(16):
p = probability_of_category_J(0, stimulus_representation, w, c, r, i, categories_idx, b)
probabilities.append(p)
sse = SSE(probabilities,observed)
return sse
def weightConstraint(params):
weights = params[1:numDimensions+1]
sum = 1
for i in range(numDimensions):
sum-=weights[i]
return sum
def biasConstraint(params):
biases = params[numDimensions+1:numDimensions+1+totalCategories]
sum =1
for i in range(totalCategories):
sum-=biases[i]
return sum
def calculateLL(probabilities):
return -np.sum(np.log(probabilities))
# Read the data set
data = pd.read_csv('../../datasets/nosofsky1986/subject1/stimuli.csv', sep=",")
observed_df = pd.read_csv('../../datasets/nosofsky1986/subject1/observed_probabilities.csv', sep=",")
# Calculate constants from the datasets
stimulus_representation = data.values
numDimensions = np.shape(stimulus_representation)[1]
#######################################################################################################################
# Case1: Dimensional Stimuli
#######################################################################################################################
observed = list(observed_df.values[:, 0])
observed_dimensional = observed
categories_idx = [[0, 1, 2, 3, 4, 5, 6, 7], [8, 9, 10, 11, 12, 13, 14, 15]]
totalCategories = len(categories_idx)
# Guess Parameters
c= 1
w = [0.5, 0.5]
b = [0.5, 0.5]
r = 2
# Define Bounds for c, w, b
bounds = []
cBound = (0, np.inf)
bounds.append(cBound)
numWeights = 2
numBiases = 2
for i in range(numWeights):
bounds.append((0, 1))
for i in range(numBiases):
bounds.append((0, 1))
bnds = tuple(bounds)
# Define the parameters for the objective function - c, w, b
params = []
params.append(c)
params += w+b
# Constraints
con1 = {'type': 'eq', 'fun': weightConstraint}
con2 = {'type': 'eq', 'fun': biasConstraint}
cons = [con1, con2]
results = minimize(objective, params, method = 'SLSQP', bounds = bnds, constraints = cons, options={'disp': True})
w, c, b = results.x[1:numDimensions+1], results.x[0], results.x[numDimensions+1:numDimensions+1+totalCategories]
predicted_dimensional = calculate_probabilities(0, stimulus_representation, w, c, r, categories_idx, b)
#######################################################################################################################
# Case 2: Criss-Cross Stimuli
#######################################################################################################################
observed = list(observed_df.values[:, 1])
observed_crisscross = observed
categories_idx = [[2, 3, 6, 7, 8, 9, 12, 13], [0, 1, 4, 5, 10, 11, 14, 15]]
# Guess Parameters
c= 1
w = [0.5, 0.5]
b = [0.5, 0.5]
r = 2
# Define the parameters for the objective function - c, w, b
params = []
params.append(c)
params += w+b
results = minimize(objective, params, method = 'SLSQP', bounds = bnds, constraints = cons, options={'disp': True})
w, c, b = results.x[1:numDimensions+1], results.x[0], results.x[numDimensions+1:numDimensions+1+totalCategories]
predicted_crisscross = calculate_probabilities(0, stimulus_representation, w, c, r, categories_idx, b)
#######################################################################################################################
# Case 3: Interior-Exterior
#######################################################################################################################
observed = list(observed_df.values[:, 2])
observed_interiorexterior= observed
categories_idx = [[5, 6, 9, 10], [0, 1, 2, 3, 4, 7, 8, 11, 12, 13, 14, 15]]
# Guess Parameters
c= 1
w = [0.5, 0.5]
b = [0.5, 0.5]
r = 2
# Define the parameters for the objective function - c, w, b
params = []
params.append(c)
params += w+b
results = minimize(objective, params, method = 'SLSQP', bounds = bnds, constraints = cons, options={'disp': True})
w, c, b = results.x[1:numDimensions+1], results.x[0], results.x[numDimensions+1:numDimensions+1+totalCategories]
predicted_interiorexterior = calculate_probabilities(0, stimulus_representation, w, c, r, categories_idx, b)
#######################################################################################################################
# Case 4: Diagnol
#######################################################################################################################
observed = list(observed_df.values[:, 3])
observed_diagnol = observed
categories_idx = [[0, 1, 2, 4, 5, 8, 12], [3, 6, 7, 9, 10, 11, 13, 14, 15]]
# Guess Parameters
c= 1
w = [0.5, 0.5]
b = [0.5, 0.5]
r = 2
# Define the parameters for the objective function - c, w, b
params = []
params.append(c)
params += w+b
results = minimize(objective, params, method = 'SLSQP', bounds = bnds, constraints = cons, options={'disp': True})
w, c, b = results.x[1:numDimensions+1], results.x[0], results.x[numDimensions+1:numDimensions+1+totalCategories]
predicted_diagnol = calculate_probabilities(0, stimulus_representation, w, c, r, categories_idx, b)
#######################################################################################################################
# Plotting Graphs
#######################################################################################################################
yx = [0, 1]
plt.style.use('ggplot')
plt.scatter(predicted_dimensional, observed_dimensional, c="r", alpha=0.75, label="Dimensional")
plt.scatter(predicted_crisscross, observed_crisscross, c="b", alpha=0.75, label="Criss-Cross")
plt.scatter(predicted_interiorexterior, observed_interiorexterior, c="g", alpha=0.75, label="Interior-Exterior")
plt.scatter(predicted_diagnol, observed_diagnol, c="c", alpha=0.75, label="Diagnol")
plt.plot(yx, yx, 'k-', alpha=0.75, zorder=0)
plt.legend(loc='upper left', frameon=True)
plt.xlabel('Augmented GCM Predicted Probabilities')
plt.ylabel('Observed Categorization Probabilities')
plt.title('Subject1')
plt.savefig('../../datasets/nosofsky1986/subject1/AugmentedGCM.png')
plt.show()
| 36.015625 | 119 | 0.580188 | 856 | 6,915 | 4.630841 | 0.216122 | 0.063572 | 0.045409 | 0.008073 | 0.460394 | 0.437941 | 0.350656 | 0.350656 | 0.350656 | 0.350656 | 0 | 0.037197 | 0.12914 | 6,915 | 191 | 120 | 36.204188 | 0.621056 | 0.142299 | 0 | 0.382609 | 0 | 0 | 0.07844 | 0.034768 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.034783 | 0.008696 | 0.121739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
923986f9bd14fbab28d2e07beb811c4e8f7e82e8 | 2,656 | py | Python | 5-Detection/SSD/utils/config.py | MaybeS/mnist | d0aeafce97d7308dc84adbb6ad8e547776db0cd5 | [
"MIT"
] | 8 | 2020-07-17T00:30:20.000Z | 2021-06-15T07:14:55.000Z | 5-Detection/SSD/utils/config.py | MaybeS/mnist | d0aeafce97d7308dc84adbb6ad8e547776db0cd5 | [
"MIT"
] | null | null | null | 5-Detection/SSD/utils/config.py | MaybeS/mnist | d0aeafce97d7308dc84adbb6ad8e547776db0cd5 | [
"MIT"
] | 2 | 2019-07-02T04:20:21.000Z | 2019-07-16T06:51:13.000Z | import json
from typing import Tuple, List
class Config:
"""Config stack layers
- Default config
- Model default config
- Load from config file
- User argument config
"""
size = (300, 300)
ssd_attributes = ['feature_map', 'steps', 'sizes', 'aspect_ratios']
ssd = {
"aspect_ratios": ((2,), (2, 3), (2, 3), (2, 3), (2,), (2,)),
"num_priors": 6,
"variance": (.1, .2),
"feature_map": (38, 19, 10, 5, 3, 1),
'sizes': ((30, 60), (60, 111), (111, 162), (162, 213), (213, 264), (264, 315)),
"steps": (8, 16, 32, 64, 100, 300),
"clip": True,
"warping": False,
"warping_mode": "sum",
}
efficientdet_attributes = ['FPN_D', 'FPN_W', 'CLASS_D', 'OUT']
efficientdet = {
"size": (512, 512),
"FPN_D": 0,
"FPN_W": 0,
"CLASS_D": 0,
"OUT": 0,
}
thresh = .3
conf_thresh = .01
nms = True
nms_thresh = .45
nms_top_k = 200
variance = .1, .2
optimizer = {
"lr": .0001,
"momentum": .9,
"weight_decay": 5e-4
}
scheduler = {
"factor": .1,
"patience": 3,
}
def __init__(self, path: str, network: str = None, model: object = None):
# Update default configs
for key, value in getattr(self, network.lower(), {}).items():
self.update(key, value)
# Update model default configs
for attribute in getattr(self, f'{network.lower()}_attributes', []):
self.update(attribute, getattr(model, attribute))
# Load config files
if path is not None:
try:
with open(path) as f:
for key, value in json.load(f).items():
self.update(key, value)
except (FileNotFoundError, RuntimeError) as e:
                print(f'Config file {path} does not exist or could not be opened: {e}')
def update(self, key, value):
if isinstance(getattr(self, key, None), dict):
getattr(self, key).update(value)
else:
setattr(self, key, value)
def sync(self, arguments: dict):
for key, value in arguments.items():
if hasattr(arguments, key):
setattr(arguments, key, value)
if key in self.dump.keys():
self.update(key, value)
@property
def dump(self):
return {
attr: getattr(self, attr)
for attr in filter(lambda attr: not attr.startswith('__') and attr != 'dump' and
not callable(getattr(self, attr)), dir(self))
}
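# Hedged usage sketch (added illustration, not part of the original file): the
# stand-in model below only carries the SSD attributes the constructor copies;
# a real call would pass the actual network module and a JSON config path.
if __name__ == '__main__':
    class _FakeSSD:
        feature_map = (38, 19, 10, 5, 3, 1)
        steps = (8, 16, 32, 64, 100, 300)
        sizes = ((30, 60), (60, 111), (111, 162), (162, 213), (213, 264), (264, 315))
        aspect_ratios = ((2,), (2, 3), (2, 3), (2, 3), (2,), (2,))

    config = Config(path=None, network='SSD', model=_FakeSSD())
    config.sync({'nms_thresh': .5})
    print(config.dump)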
| 27.381443 | 92 | 0.50753 | 311 | 2,656 | 4.250804 | 0.437299 | 0.054463 | 0.006808 | 0.029501 | 0.040091 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05896 | 0.348645 | 2,656 | 96 | 93 | 27.666667 | 0.705202 | 0.067018 | 0 | 0.043478 | 0 | 0 | 0.111474 | 0.011433 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057971 | false | 0 | 0.028986 | 0.014493 | 0.304348 | 0.014493 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
924075a62dfbc2884a5bde01f2ed7a11d0476d1c | 3,019 | py | Python | institutions/geo/models.py | sephcoster/mapusaurus | 5515f0d89ff7b7cbc796af25b3d45950c8ed882f | [
"CC0-1.0"
] | null | null | null | institutions/geo/models.py | sephcoster/mapusaurus | 5515f0d89ff7b7cbc796af25b3d45950c8ed882f | [
"CC0-1.0"
] | null | null | null | institutions/geo/models.py | sephcoster/mapusaurus | 5515f0d89ff7b7cbc796af25b3d45950c8ed882f | [
"CC0-1.0"
] | null | null | null | import json
from django.contrib.gis.db import models
class Geo(models.Model):
STATE_TYPE, COUNTY_TYPE, TRACT_TYPE, METRO_TYPE, MICRO_TYPE = range(1, 6)
METDIV_TYPE, = range(6, 7)
TYPES = [(STATE_TYPE, 'State'), (COUNTY_TYPE, 'County'),
(TRACT_TYPE, 'Census Tract'), (METRO_TYPE, 'Metropolitan'),
(MICRO_TYPE, 'Micropolitan'),
(METDIV_TYPE, 'Metropolitan Division')]
geoid = models.CharField(max_length=20, primary_key=True)
geo_type = models.PositiveIntegerField(choices=TYPES, db_index=True)
name = models.CharField(max_length=50)
state = models.CharField(max_length=2, null=True)
county = models.CharField(max_length=3, null=True)
tract = models.CharField(max_length=6, null=True)
csa = models.CharField(max_length=3, null=True,
help_text='Combined Statistical Area')
cbsa = models.CharField(max_length=5, null=True,
help_text='Core Based Statistical Area')
metdiv = models.CharField(max_length=5, null=True,
help_text='Metro Division')
geom = models.MultiPolygonField(srid=4269)
minlat = models.FloatField()
maxlat = models.FloatField()
minlon = models.FloatField()
maxlon = models.FloatField()
centlat = models.FloatField()
centlon = models.FloatField()
objects = models.GeoManager()
class Meta:
index_together = [("geo_type", "minlat", "minlon"),
("geo_type", "minlat", "maxlon"),
("geo_type", "maxlat", "minlon"),
("geo_type", "maxlat", "maxlon"),
("geo_type", "centlat", "centlon"),
("geo_type", "cbsa")]
def tract_centroids_as_geojson(self):
"""Convert this model into a geojson string"""
geojson = {'type': 'Feature',
'properties': {
'geoid': self.geoid,
'geoType': self.geo_type,
'state': self.state,
'county': self.county,
'cbsa': self.cbsa,
'centlat': self.centlat,
'centlon': self.centlon}}
geojson = json.dumps(geojson)
return geojson
def tract_shape_as_geojson(self):
"""Convert this model into a geojson string"""
geojson = {'type': 'Feature',
'geometry': '$_$', # placeholder
'properties': {
'geoid': self.geoid,
'geoType': self.geo_type,
'state': self.state,
'county': self.county,
'cbsa': self.cbsa,
'centlat': self.centlat,
'centlon': self.centlon}}
geojson = json.dumps(geojson)
return geojson.replace(
'"$_$"',
self.geom.simplify(preserve_topology=True).geojson)
| 39.207792 | 77 | 0.532958 | 292 | 3,019 | 5.359589 | 0.308219 | 0.040256 | 0.092013 | 0.122684 | 0.376997 | 0.376997 | 0.376997 | 0.334824 | 0.334824 | 0.282428 | 0 | 0.009063 | 0.342166 | 3,019 | 76 | 78 | 39.723684 | 0.778953 | 0.031136 | 0 | 0.31746 | 0 | 0 | 0.13315 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031746 | false | 0 | 0.031746 | 0 | 0.412698 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
924185f1b41185b073f99907c5347d88ec6b3ce8 | 2,798 | py | Python | phone_dic_accent_archive.py | arcman7/cmu_sphinx | 343ee1c08061d607cab6ba9ab738a1ac5a8b59d9 | [
"MIT"
] | null | null | null | phone_dic_accent_archive.py | arcman7/cmu_sphinx | 343ee1c08061d607cab6ba9ab738a1ac5a8b59d9 | [
"MIT"
] | null | null | null | phone_dic_accent_archive.py | arcman7/cmu_sphinx | 343ee1c08061d607cab6ba9ab738a1ac5a8b59d9 | [
"MIT"
] | null | null | null | import collections
import os
counter = collections.Counter()
dir_name = 'accent_archive'
text_file_name = 'reading-passage.txt'
print('creating dictionary from "{}"'.format(os.path.join(dir_name, 'AA_unpacked', text_file_name)))
# The Accent Archive uses the same text transcript for every audio recording
text = None
# clean up text and add to counts
with open(os.path.join(dir_name, 'AA_unpacked', text_file_name), 'r') as f:
counter += collections.Counter(f.read().replace('\n', '').replace('.', '').replace(',', '').replace(':', '').replace(' ', ' ').upper().split())
phone_counter = collections.Counter()
sphinx_dic = {}
sphinx_dic_list = None
with open('sphinx.dic', 'r') as f:
# each line in sphinx.dic looks like:
# aalborg AO1 L B AO0 R G # place, danish
sphinx_dic_list = f.read().split('\n')
for index in range(len(sphinx_dic_list)):
row = sphinx_dic_list[index].split(' ')
sphinx_dic[row[0].upper()] = index
out_dir_name = os.path.join(dir_name, 'etc')
# create .vocab file
print('writing "{}.vocab" to {}'.format(dir_name, out_dir_name))
with open(os.path.join(out_dir_name, dir_name + '.vocab'), 'w') as f:
for item in counter:
f.write(item + '\n')
print('writing "{}.dic" to {}'.format(dir_name, out_dir_name))
# create .dic file
with open(os.path.join(out_dir_name, dir_name + '.dic'), 'w') as f:
for item in counter:
if item in sphinx_dic:
index = sphinx_dic[item]
row = sphinx_dic_list[index]
f.write(row.upper() + '\n')
# remove comments if any, then split phones by spaces
phone_counter += collections.Counter(row.split('#')[0].split(' ')[1:])
print('writing "{}.phone" to {}'.format(dir_name, out_dir_name))
# create .phone file
with open(os.path.join(out_dir_name, dir_name + '.phone'), 'w') as f:
for item in phone_counter:
f.write(item.upper() + '\n')
f.write('SIL\n')
############### used for Forced Alignment ###############
# create phonemes.dict file
print('writing "phonemes.dict"')
with open(os.path.join(dir_name, 'phonemes.dict'), 'w') as f:
    for item in phone_counter:
        if item.lower() == 'ss':
            f.write('ss\tS\n')
        else:
            f.write(item.lower() + '\t' + item.upper() + '\n')
    f.write('sil\tSIL\n')
# create words.dict file
print('writing "words.dict"')
with open(os.path.join(dir_name, 'words.dict'), 'w') as f:
    for item in counter:
        if item in sphinx_dic:
            index = sphinx_dic[item]
            row = sphinx_dic_list[index]
            # split on the (lowercase) word itself to keep only its phone string
            row = row.split(item.lower())
            f.write(item.lower() + '\t' + row[1].strip().upper() + '\n')
    f.write('sil\tSIL\n')
print('finished.') | 38.328767 | 149 | 0.602931 | 405 | 2,798 | 4.024691 | 0.251852 | 0.081595 | 0.04908 | 0.051534 | 0.433742 | 0.390798 | 0.376687 | 0.314724 | 0.210429 | 0.210429 | 0 | 0.002745 | 0.218728 | 2,798 | 73 | 150 | 38.328767 | 0.742909 | 0.129736 | 0 | 0.245283 | 0 | 0 | 0.139535 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.018868 | 0.037736 | 0 | 0.037736 | 0.132075 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
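As a quick illustration of the sphinx.dic row format the script above depends on, here is a small self-contained sketch (the helper name is ours; the sample line is the one quoted in the script's own comment). It splits a dictionary row into the word and its phone sequence, dropping any trailing "#" comment, which is the same parsing the .dic and .phone writing loops assume.

def parse_sphinx_dic_line(line):
    # drop an optional trailing comment, then split on whitespace
    fields = line.split('#')[0].split()
    word, phones = fields[0].upper(), fields[1:]
    return word, phones

print(parse_sphinx_dic_line('aalborg AO1 L B AO0 R G  # place, danish'))
# -> ('AALBORG', ['AO1', 'L', 'B', 'AO0', 'R', 'G'])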
9242da30eb8cc1d9931d3a3bfc2731f59cc10a24 | 3,367 | py | Python | Advanced Computer Vision & Deep Learning/Project-2_Image_Captioning/model.py | sudoberlin/Computer_Vision_ND | 6211d0a610a26f6ed54116127588adb6ff4b7ba9 | [
"Apache-2.0"
] | 1 | 2020-08-09T19:49:38.000Z | 2020-08-09T19:49:38.000Z | Advanced Computer Vision & Deep Learning/Project-2_Image_Captioning/model.py | sudoberlin/Computer_Vision_ND | 6211d0a610a26f6ed54116127588adb6ff4b7ba9 | [
"Apache-2.0"
] | null | null | null | Advanced Computer Vision & Deep Learning/Project-2_Image_Captioning/model.py | sudoberlin/Computer_Vision_ND | 6211d0a610a26f6ed54116127588adb6ff4b7ba9 | [
"Apache-2.0"
] | null | null | null | import torch
import torch.nn as nn
import torchvision.models as models
import torch.nn.functional as F
import math
class EncoderCNN(nn.Module):
    def __init__(self, embed_size):
        super(EncoderCNN, self).__init__()
        resnet = models.resnet50(pretrained=True)
        for param in resnet.parameters():
            param.requires_grad_(False)
        modules = list(resnet.children())[:-1]
        self.resnet = nn.Sequential(*modules)
        self.embed = nn.Linear(resnet.fc.in_features, embed_size)

    def forward(self, images):
        features = self.resnet(images)
        features = features.view(features.size(0), -1)
        features = self.embed(features)
        return features
class DecoderRNN(nn.Module):
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1):
        super(DecoderRNN, self).__init__()
        self.embed_size = embed_size
        self.hidden_size = hidden_size
        self.vocab_size = vocab_size
        self.num_layers = num_layers
        # embedding layer: turns word indices into vectors of size embed_size
        self.word_embeddings = nn.Embedding(vocab_size, embed_size)
        # the LSTM takes the embedded word vectors as inputs
        # and outputs hidden states of size hidden_size
        self.lstm = nn.LSTM(input_size=embed_size,
                            hidden_size=hidden_size,
                            num_layers=num_layers,
                            batch_first=True)
        # linear layer maps the hidden state output to the vocabulary size
        self.hidden2vocab = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # embed the caption tokens, dropping the final token so the sequence
        # length matches the inputs once the image features are prepended
        # embeds shape: (batch_size, caption_length - 1, embed_size)
        embeds = self.word_embeddings(captions[:, :-1])
        # prepend the image features as the first "word" of each sequence
        inputs = torch.cat((features.unsqueeze(dim=1), embeds), dim=1)
        # lstm_out shape: (batch_size, caption_length, hidden_size)
        lstm_out, _ = self.lstm(inputs)
        # scores for the most likely words
        # outputs shape: (batch_size, caption_length, vocab_size)
        outputs = self.hidden2vocab(lstm_out)
        return outputs
    def sample(self, inputs, states=None, max_len=20):
        """Accept a pre-processed image tensor (inputs) and return a predicted
        sentence (list of tensor ids of length max_len)."""
        caption = []
        # initialize the hidden and cell states
        hidden = (torch.randn(self.num_layers, 1, self.hidden_size).to(inputs.device),
                  torch.randn(self.num_layers, 1, self.hidden_size).to(inputs.device))
        # feed the LSTM output and hidden state back into the network to build the caption
        for i in range(max_len):
            # batch_size = 1, sequence_length = 1 -> inputs shape (1, 1, embed_size)
            lstm_out, hidden = self.lstm(inputs, hidden)
            outputs = self.hidden2vocab(lstm_out)  # shape (1, 1, vocab_size)
            outputs = outputs.squeeze(1)
            word_id = outputs.argmax(dim=1)
            caption.append(word_id.item())
            # input for the next iteration
            inputs = self.word_embeddings(word_id.unsqueeze(0))
        return caption
| 38.701149 | 125 | 0.612712 | 409 | 3,367 | 4.858191 | 0.310513 | 0.045294 | 0.028183 | 0.025667 | 0.137896 | 0.080523 | 0.080523 | 0.05234 | 0.05234 | 0.05234 | 0 | 0.010638 | 0.302049 | 3,367 | 87 | 126 | 38.701149 | 0.834894 | 0.239085 | 0 | 0.039216 | 0 | 0.019608 | 0.043184 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.098039 | false | 0 | 0.098039 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
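A small smoke-test sketch for the decoder defined above, assuming the classes are saved in a file named model.py and that PyTorch is installed; the layer sizes and dummy tensors are made up for illustration. It exercises forward() on a fake batch and sample() on a single fake image feature, without downloading the ResNet weights the encoder would need.

import torch
from model import DecoderRNN  # assumes the classes above live in model.py

embed_size, hidden_size, vocab_size = 256, 512, 1000
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)

# forward pass: a batch of 4 fake image features and 4 fake captions of length 12
features = torch.randn(4, embed_size)
captions = torch.randint(0, vocab_size, (4, 12))
outputs = decoder(features, captions)
print(outputs.shape)  # torch.Size([4, 12, 1000])

# greedy sampling from a single fake image feature, shaped (1, 1, embed_size)
inputs = torch.randn(1, 1, embed_size)
print(decoder.sample(inputs, max_len=20))  # list of 20 predicted token ids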