hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8955951f8af9485bbbfe9c8b8031c4f7a7835b68 | 674 | py | Python | tests/test_utils.py | biglocalnews/covid-world-scraper | 385f792b32d58dbf67a524c36e60d21f76e463ef | [
"0BSD"
] | null | null | null | tests/test_utils.py | biglocalnews/covid-world-scraper | 385f792b32d58dbf67a524c36e60d21f76e463ef | [
"0BSD"
] | 11 | 2020-07-14T02:16:32.000Z | 2022-01-31T18:06:49.000Z | tests/test_utils.py | biglocalnews/covid-world-scraper | 385f792b32d58dbf67a524c36e60d21f76e463ef | [
"0BSD"
] | null | null | null | import datetime
from unittest.mock import patch, MagicMock

import pytest

from covid_world_scraper.utils import relative_year

DEC_31 = datetime.datetime(2020, 12, 31, 12, 59, 1)
JAN_1 = datetime.datetime(2020, 1, 1, 1, 1, 1)


@pytest.mark.parametrize(
    'month,day,current_day,expected',
    [
        [12, 31, DEC_31, 2020],
        [12, 31, JAN_1, 2020],
        [1, 1, JAN_1, 2020]
    ]
)
def test_relative_year(month, day, current_day, expected):
    mock_target = 'covid_world_scraper.utils.today'
    with patch(mock_target) as mock_func:
        mock_func.return_value = current_day
        actual = relative_year(month, day)
        assert actual == expected
| 24.962963 | 58 | 0.679525 | 98 | 674 | 4.459184 | 0.387755 | 0.022883 | 0.020595 | 0.100687 | 0.118993 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096045 | 0.212166 | 674 | 26 | 59 | 25.923077 | 0.72693 | 0 | 0 | 0 | 0 | 0 | 0.090504 | 0.090504 | 0 | 0 | 0 | 0 | 0.05 | 1 | 0.05 | false | 0 | 0.2 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
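The helper under test, `relative_year`, is not included in this record. Below is a minimal sketch consistent with the three parametrized cases above (each of which expects the year of the mocked `today()`); the names `today` and `relative_year` mirror the imports in the test, everything else is an assumption:

```python
import datetime


def today():
    # Thin wrapper so tests can patch 'covid_world_scraper.utils.today'.
    return datetime.datetime.now()


def relative_year(month, day):
    # All three cases in the test above are satisfied by returning the year
    # of the patched "today"; the real helper presumably layers extra
    # year-boundary logic on top of this.
    return today().year
```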
8956065fd228d29e022eb365dffc1354b92e5e48 | 3,227 | py | Python | interface/__init__.py | KauaVicto/igbot | f490540e60643f735cc716f1424cbf087ad98c32 | [
"MIT"
] | null | null | null | interface/__init__.py | KauaVicto/igbot | f490540e60643f735cc716f1424cbf087ad98c32 | [
"MIT"
] | null | null | null | interface/__init__.py | KauaVicto/igbot | f490540e60643f735cc716f1424cbf087ad98c32 | [
"MIT"
] | null | null | null | from PySimpleGUI import PySimpleGUI as sg
from sys import exit

sg.theme('DarkGray14')
# sg.theme_previewer()


def layout():
    layout = [
        [sg.Text('Recomeçar:'),
         sg.Radio('Sim', 'recomecar', key='rSim', default=True, enable_events=True),
         sg.Radio('Não', 'recomecar', key='rNao', enable_events=True)],
        [sg.Text('Usuário:', size=(8, 1), key='usuarioTxt', visible=True)], [sg.Input(key='usuario', size=(20, 1), visible=True)],
        [sg.Text('Senha:', size=(8, 1), key='senhaTxt')], [sg.Input(key='senha', password_char='*', size=(20, 1))],
        [sg.Text('Frase:', size=(8, 1), key='fraseTxt')], [sg.Input(key='frase', size=(20, 1))],
        [sg.Text('Link do Post:', key='linkTxt', visible=True)],
        [sg.Input(key='link', size=(40, 1), visible=True)],
        [sg.Text('Número de seguidores:', size=(33, 1), key='qtSeguiTxt', visible=True)],
        [sg.Input(key='qtSegui', size=(15, 1), visible=True)],
        [sg.Text('Buscar:', visible=True, key='buscaTxt')],
        [sg.Radio('Seguidores', 'busca', key='bSeguidor', visible=True, default=True, enable_events=True)],
        [sg.Radio('Seguindo', 'busca', key='bSeguindo', visible=True, enable_events=True)],
        [sg.Text('Navegador:'),
         sg.Radio('Opera', 'navegador', key='opera', default=True),
         sg.Radio('Google Chrome', 'navegador', key='chrome')],
        [sg.Text('Marcações:')],
        [sg.Slider(range=(1, 5), default_value=3, size=(20, 15), orientation='h', key='marcar')],
        [sg.Text('Quantidade de comentarios:')],
        [sg.Slider(range=(1, 300), default_value=20, size=(40, 15), orientation='h', key='comQuant')],
        [sg.Button('Iniciar')]
        #[sg.Output(size=(40, 20), key='output')]
    ]
    return layout


def janela():
    window = sg.Window('Bot de comentários', layout())
    while True:
        eventos, valores = window.read()
        #window['output'].update(value=f'{"Informações":-^60}')
        if eventos == sg.WINDOW_CLOSED:
            exit()
            break
        if eventos == 'rSim':
            window['link'].update(disabled=False)
            window['qtSegui'].update(disabled=False)
            window['usuario'].update(disabled=False)
            window['senha'].update(disabled=False)
            window['frase'].update(disabled=False)
            window['bSeguidor'].update(disabled=False)
            window['bSeguindo'].update(disabled=False)
        elif eventos == 'rNao':
            window['link'].update(disabled=True)
            window['qtSegui'].update(disabled=True)
            window['usuario'].update(disabled=True)
            window['senha'].update(disabled=True)
            window['frase'].update(disabled=True)
            window['bSeguidor'].update(disabled=True)
            window['bSeguindo'].update(disabled=True)
        if eventos == 'Iniciar':
            try:
                valores['marcar'] = int(valores['marcar'])
                valores['comQuant'] = int(valores['comQuant'])
                if valores['rSim']:
                    valores['qtSegui'] = int(valores['qtSegui'])
                return valores
            except ValueError:
                # "Error! Enter valid integer values!"
                print('Erro! Digite os valores inteiros válidos!')
janela() | 44.205479 | 130 | 0.570189 | 369 | 3,227 | 4.96206 | 0.287263 | 0.107045 | 0.072638 | 0.081922 | 0.141453 | 0.037138 | 0.037138 | 0 | 0 | 0 | 0 | 0.01987 | 0.235823 | 3,227 | 73 | 131 | 44.205479 | 0.722628 | 0.035327 | 0 | 0 | 0 | 0 | 0.182315 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032787 | false | 0.016393 | 0.032787 | 0 | 0.098361 | 0.016393 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
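As written, `janela()` runs as a side effect of importing the module. A hedged sketch (not from the original repo) of the usual entry-point guard, with a stub standing in for the real window loop:

```python
def janela():
    # Stand-in for the PySimpleGUI event loop defined above.
    return {'marcar': 3, 'comQuant': 20}


if __name__ == '__main__':
    # Guarding the call keeps `import interface` free of GUI side effects.
    valores = janela()
    print(valores)
```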
8957faed268950fdd3010d67b51f473dca55db15 | 13,706 | py | Python | src/tests/api/test_episodes.py | DmitryBurnaev/podcast-service | 53349a3f9aed22a8024d0c83380f9a02464962a3 | [
"MIT"
] | 5 | 2021-07-01T16:31:29.000Z | 2022-01-29T14:32:13.000Z | src/tests/api/test_episodes.py | DmitryBurnaev/podcast-service | 53349a3f9aed22a8024d0c83380f9a02464962a3 | [
"MIT"
] | 45 | 2020-10-25T19:41:26.000Z | 2022-03-25T06:31:58.000Z | src/tests/api/test_episodes.py | DmitryBurnaev/podcast-service | 53349a3f9aed22a8024d0c83380f9a02464962a3 | [
"MIT"
] | 1 | 2022-01-27T11:30:07.000Z | 2022-01-27T11:30:07.000Z | import pytest
from common.statuses import ResponseStatus
from modules.providers.exceptions import SourceFetchError
from modules.podcast import tasks
from modules.podcast.models import Episode, Podcast
from modules.podcast.tasks import DownloadEpisodeTask
from tests.api.test_base import BaseTestAPIView
from tests.helpers import get_video_id, create_user, get_podcast_data, create_episode, await_

INVALID_UPDATE_DATA = [
    [{"title": "title" * 100}, {"title": "Longer than maximum length 256."}],
    [{"author": "author" * 100}, {"author": "Longer than maximum length 256."}],
]

INVALID_CREATE_DATA = [
    [{"source_url": "fake-url"}, {"source_url": "Not a valid URL."}],
    [{}, {"source_url": "Missing data for required field."}],
]


def _episode_in_list(episode: Episode):
    return {
        "id": episode.id,
        "title": episode.title,
        "status": str(episode.status),
        "image_url": episode.image_url,
        "created_at": episode.created_at.isoformat(),
    }


def _episode_details(episode: Episode):
    return {
        "id": episode.id,
        "title": episode.title,
        "author": episode.author,
        "status": str(episode.status),
        "length": episode.length,
        "watch_url": episode.watch_url,
        "remote_url": episode.remote_url,
        "image_url": episode.image_url,
        "file_size": episode.file_size,
        "description": episode.description,
        "created_at": episode.created_at.isoformat(),
        "published_at": episode.published_at.isoformat() if episode.published_at else None,
    }


class TestEpisodeListCreateAPIView(BaseTestAPIView):
    url = "/api/podcasts/{id}/episodes/"

    def test_get_list__ok(self, client, episode, user):
        client.login(user)
        url = self.url.format(id=episode.podcast_id)
        response = client.get(url)
        response_data = self.assert_ok_response(response)
        assert response_data["items"] == [_episode_in_list(episode)]

    def test_create__ok(
        self,
        client,
        podcast,
        episode,
        episode_data,
        user,
        mocked_episode_creator,
        mocked_rq_queue,
        dbs,
    ):
        mocked_episode_creator.create.return_value = mocked_episode_creator.async_return(episode)
        client.login(user)
        episode_data = {"source_url": episode_data["watch_url"]}
        url = self.url.format(id=podcast.id)
        response = client.post(url, json=episode_data)
        response_data = self.assert_ok_response(response, status_code=201)
        assert response_data == _episode_in_list(episode), response.json()
        self.assert_called_with(
            mocked_episode_creator.target_class.__init__,
            podcast_id=podcast.id,
            source_url=episode_data["source_url"],
            user_id=user.id,
        )
        mocked_episode_creator.create.assert_called_once()
        mocked_rq_queue.enqueue.assert_called_with(
            tasks.DownloadEpisodeImageTask(), episode_id=episode.id
        )

    def test_create__start_downloading__ok(
        self, client, podcast, episode, episode_data, user, mocked_episode_creator, mocked_rq_queue
    ):
        mocked_episode_creator.create.return_value = mocked_episode_creator.async_return(episode)
        client.login(user)
        url = self.url.format(id=podcast.id)
        response = client.post(url, json={"source_url": episode_data["watch_url"]})
        self.assert_ok_response(response, status_code=201)
        expected_calls = [
            {"args": (tasks.DownloadEpisodeTask(),), "kwargs": {"episode_id": episode.id}},
            {"args": (tasks.DownloadEpisodeImageTask(),), "kwargs": {"episode_id": episode.id}},
        ]
        actual_calls = [
            {"args": call.args, "kwargs": call.kwargs}
            for call in mocked_rq_queue.enqueue.call_args_list
        ]
        assert actual_calls == expected_calls

    def test_create__youtube_error__fail(
        self, client, podcast, episode_data, user, mocked_episode_creator
    ):
        mocked_episode_creator.create.side_effect = SourceFetchError("Oops")
        client.login(user)
        url = self.url.format(id=podcast.id)
        response = client.post(url, json={"source_url": episode_data["watch_url"]})
        response_data = self.assert_fail_response(response, status_code=500)
        assert response_data == {
            "error": "We couldn't extract info about requested episode.",
            "details": "Oops",
        }

    @pytest.mark.parametrize("invalid_data, error_details", INVALID_CREATE_DATA)
    def test_create__invalid_request__fail(
        self, client, podcast, user, invalid_data: dict, error_details: dict
    ):
        client.login(user)
        url = self.url.format(id=podcast.id)
        self.assert_bad_request(client.post(url, json=invalid_data), error_details)

    def test_create__podcast_from_another_user__fail(self, client, podcast, dbs):
        client.login(create_user(dbs))
        url = self.url.format(id=podcast.id)
        data = {"source_url": "http://link.to.resource/"}
        self.assert_not_found(client.post(url, json=data), podcast)


class TestEpisodeRUDAPIView(BaseTestAPIView):
    url = "/api/episodes/{id}/"

    def test_get_details__ok(self, client, episode, user):
        client.login(user)
        url = self.url.format(id=episode.id)
        response = client.get(url)
        response_data = self.assert_ok_response(response)
        assert response_data == _episode_details(episode)

    def test_get_details__episode_from_another_user__fail(self, client, episode, user, dbs):
        client.login(create_user(dbs))
        url = self.url.format(id=episode.id)
        self.assert_not_found(client.get(url), episode)

    def test_update__ok(self, client, episode, user, dbs):
        client.login(user)
        url = self.url.format(id=episode.id)
        patch_data = {
            "title": "New title",
            "author": "New author",
            "description": "New description",
        }
        response = client.patch(url, json=patch_data)
        await_(dbs.refresh(episode))
        response_data = self.assert_ok_response(response)
        assert response_data == _episode_details(episode)
        assert episode.title == "New title"
        assert episode.author == "New author"
        assert episode.description == "New description"

    @pytest.mark.parametrize("invalid_data, error_details", INVALID_UPDATE_DATA)
    def test_update__invalid_request__fail(
        self, client, episode, user, invalid_data: dict, error_details: dict
    ):
        client.login(user)
        url = self.url.format(id=episode.id)
        self.assert_bad_request(client.patch(url, json=invalid_data), error_details)

    def test_update__episode_from_another_user__fail(self, client, episode, dbs):
        client.login(create_user(dbs))
        url = self.url.format(id=episode.id)
        self.assert_not_found(client.patch(url, json={}), episode)

    def test_delete__ok(self, client, episode, user, mocked_s3, dbs):
        client.login(user)
        url = self.url.format(id=episode.id)
        response = client.delete(url)
        assert response.status_code == 204
        assert await_(Episode.async_get(dbs, id=episode.id)) is None
        mocked_s3.delete_files_async.assert_called_with([episode.file_name])

    def test_delete__episode_from_another_user__fail(self, client, episode, user, dbs):
        client.login(create_user(dbs))
        url = self.url.format(id=episode.id)
        self.assert_not_found(client.delete(url), episode)

    @pytest.mark.parametrize(
        "same_episode_status, delete_called",
        [
            (Episode.Status.NEW, True),
            (Episode.Status.PUBLISHED, False),
            (Episode.Status.DOWNLOADING, False),
        ],
    )
    def test_delete__same_episode_exists__ok(
        self,
        client,
        podcast,
        episode_data,
        mocked_s3,
        same_episode_status,
        delete_called,
        dbs,
    ):
        source_id = get_video_id()
        user_1 = create_user(dbs)
        user_2 = create_user(dbs)
        podcast_1 = await_(
            Podcast.async_create(dbs, db_commit=True, **get_podcast_data(created_by_id=user_1.id))
        )
        podcast_2 = await_(
            Podcast.async_create(dbs, db_commit=True, **get_podcast_data(created_by_id=user_2.id))
        )
        episode_data["created_by_id"] = user_1.id
        _ = create_episode(
            dbs, episode_data, podcast_1, status=same_episode_status, source_id=source_id
        )
        episode_data["created_by_id"] = user_2.id
        episode_2 = create_episode(
            dbs, episode_data, podcast_2, status=Episode.Status.NEW, source_id=source_id
        )
        url = self.url.format(id=episode_2.id)
        client.login(user_2)
        response = client.delete(url)
        assert response.status_code == 204, f"Delete API is not available: {response.text}"
        assert await_(Episode.async_get(dbs, id=episode_2.id)) is None
        if delete_called:
            mocked_s3.delete_files_async.assert_called_with([episode_2.file_name])
        else:
            assert not mocked_s3.delete_files_async.called


class TestEpisodeDownloadAPIView(BaseTestAPIView):
    url = "/api/episodes/{id}/download/"

    def test_download__ok(self, client, episode, user, mocked_rq_queue, dbs):
        client.login(user)
        url = self.url.format(id=episode.id)
        response = client.put(url)
        await_(dbs.refresh(episode))
        response_data = self.assert_ok_response(response)
        assert response_data == _episode_details(episode)
        mocked_rq_queue.enqueue.assert_called_with(DownloadEpisodeTask(), episode_id=episode.id)

    def test_download__episode_from_another_user__fail(self, client, episode, user, dbs):
        client.login(create_user(dbs))
        url = self.url.format(id=episode.id)
        self.assert_not_found(client.put(url), episode)


class TestEpisodeFlatListAPIView(BaseTestAPIView):
    url = "/api/episodes/"

    def setup_episodes(self, dbs, user, episode_data):
        self.user_2 = create_user(dbs)
        podcast_1 = await_(Podcast.async_create(dbs, **get_podcast_data(created_by_id=user.id)))
        podcast_2 = await_(Podcast.async_create(dbs, **get_podcast_data(created_by_id=user.id)))
        podcast_3_from_user_2 = await_(
            Podcast.async_create(dbs, **get_podcast_data(created_by_id=self.user_2.id))
        )
        episode_data = episode_data | {"created_by_id": user.id}
        self.episode_1 = create_episode(dbs, episode_data, podcast_1)
        self.episode_2 = create_episode(dbs, episode_data, podcast_2)
        episode_data["created_by_id"] = self.user_2.id
        self.episode_3 = create_episode(dbs, episode_data, podcast_3_from_user_2)
        await_(dbs.commit())

    @staticmethod
    def assert_episodes(response_data: dict, expected_episode_ids: list[int]):
        actual_episode_ids = [episode["id"] for episode in response_data["items"]]
        assert actual_episode_ids == expected_episode_ids

    def test_get_list__ok(self, client, episode_data, user, dbs):
        self.setup_episodes(dbs, user, episode_data)
        client.login(user)
        response = client.get(self.url)
        response_data = self.assert_ok_response(response)
        expected_episode_ids = [self.episode_2.id, self.episode_1.id]
        self.assert_episodes(response_data, expected_episode_ids)

    def test_get_list__limited__ok(self, client, episode_data, user, dbs):
        self.setup_episodes(dbs, user, episode_data)
        client.login(user)
        response = client.get(self.url, params={"limit": 1})
        response_data = self.assert_ok_response(response)
        self.assert_episodes(response_data, expected_episode_ids=[self.episode_2.id])
        assert response_data["has_next"] is True, response_data

    def test_get_list__offset__ok(self, client, episode_data, user, dbs):
        self.setup_episodes(dbs, user, episode_data)
        client.login(user)
        response = client.get(self.url, params={"offset": 1})
        response_data = self.assert_ok_response(response)
        self.assert_episodes(response_data, expected_episode_ids=[self.episode_1.id])
        assert response_data["has_next"] is False, response_data

    @pytest.mark.parametrize(
        "search,title1,title2,expected_titles",
        [
            ("new", "New episode", "Old episode", ["New episode"]),
            ("epi", "New episode", "Old episode", ["New episode", "Old episode"]),
        ],
    )
    def test_get_list__filter_by_title__ok(
        self, client, episode_data, user, dbs, search, title1, title2, expected_titles
    ):
        self.setup_episodes(dbs, user, episode_data)
        await_(self.episode_1.update(dbs, **{"title": title1}))
        await_(self.episode_2.update(dbs, **{"title": title2}))
        await_(dbs.commit())
        await_(dbs.refresh(self.episode_1))
        await_(dbs.refresh(self.episode_2))
        episodes = [self.episode_2, self.episode_1]
        expected_episodes = [episode.id for episode in episodes if episode.title in expected_titles]
        client.login(user)
        response = client.get(self.url, params={"q": search})
        response_data = self.assert_ok_response(response)
        self.assert_episodes(response_data, expected_episodes)

    def test_create_without_podcast__fail(self, client, episode_data, user, dbs):
        client.login(user)
        response = client.post(self.url, data=get_podcast_data())
        self.assert_fail_response(
            response, status_code=405, response_status=ResponseStatus.NOT_ALLOWED
        )
| 40.311765 | 100 | 0.670217 | 1,708 | 13,706 | 5.062646 | 0.104801 | 0.038164 | 0.020354 | 0.029606 | 0.634093 | 0.560194 | 0.505378 | 0.453915 | 0.377588 | 0.329363 | 0 | 0.007657 | 0.218663 | 13,706 | 339 | 101 | 40.430678 | 0.799795 | 0 | 0 | 0.323024 | 0 | 0 | 0.077338 | 0.006712 | 0 | 0 | 0 | 0 | 0.164948 | 1 | 0.085911 | false | 0 | 0.027491 | 0.006873 | 0.147766 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89588acaf0215f496c8f0209b15ce97ce1c50516 | 4,806 | py | Python | ecosante/inscription/blueprint.py | betagouv/ecosante | cc7dd76bb65405ba44f432197de851dc7e22ed38 | [
"MIT"
] | 3 | 2021-09-24T14:07:51.000Z | 2021-12-14T13:48:34.000Z | ecosante/inscription/blueprint.py | betagouv/recosante-api | 4560b2cf2ff4dc19597792fe15a3805f6259201d | [
"MIT"
] | 187 | 2021-03-25T16:43:49.000Z | 2022-03-23T14:40:31.000Z | ecosante/inscription/blueprint.py | betagouv/recosante-api | 4560b2cf2ff4dc19597792fe15a3805f6259201d | [
"MIT"
] | 2 | 2020-04-08T11:56:17.000Z | 2020-04-09T14:04:15.000Z | from flask import (
    abort,
    render_template,
    request,
    jsonify,
    stream_with_context,
)

from .models import Inscription, db
from .forms import FormPremiereEtape, FormDeuxiemeEtape
from ecosante.utils.decorators import (
    admin_capability_url,
    webhook_capability_url
)
from ecosante.utils import Blueprint
from ecosante.extensions import celery
from flask.wrappers import Response
from flask_cors import cross_origin
from datetime import datetime
from email_validator import validate_email

bp = Blueprint("inscription", __name__)


@bp.route('/premiere-etape', methods=['POST'], strict_slashes=False)
@cross_origin(origins='*')
def premiere_etape():
    form = FormPremiereEtape(data=request.json)
    if form.validate_on_submit():
        valid = validate_email(form.mail.data)
        mail = valid.email.lower()
        inscription = Inscription.query.filter_by(mail=mail).first() or Inscription()
        inscription.mail = mail
        db.session.add(inscription)
        db.session.commit()
        return jsonify({"uid": inscription.uid}), 201
    return jsonify(form.errors), 400


@bp.route('/<uid>/', methods=['POST', 'GET'], strict_slashes=False)
@cross_origin(origins='*')
def deuxieme_etape(uid):
    inscription = db.session.query(Inscription).filter_by(uid=uid).first()
    form = FormDeuxiemeEtape(data=request.json)
    if request.method == 'POST':
        if not inscription:
            abort(404)
        if form.validate_on_submit():
            for fieldname in form._fields.keys():
                if (request.form and fieldname in request.form.keys()) or (request.json and fieldname in request.json.keys()):
                    setattr(inscription, fieldname, getattr(form, fieldname).data)
            db.session.add(inscription)
            db.session.commit()
            inscription = db.session.query(Inscription).filter_by(uid=uid).first()
        else:
            return jsonify(form.errors), 400
    return {
        **{
            k: getattr(inscription, k)
            for k in form._fields.keys()
        },
        **{
            "ville_nom": inscription.ville_nom,
            "ville_codes_postaux": inscription.ville_codes_postaux
        }
    }


@bp.route('/<uid>/_confirm', methods=['GET'], strict_slashes=False)
@cross_origin(origins='*')
def confirm(uid):
    inscription = Inscription.query.filter_by(uid=uid).first()
    if not inscription:
        return jsonify({"errors": ["Unable to find inscription"]}), 404
    inscription.indicateurs = ["indice_atmo", "raep"] if inscription.allergie_pollens else ["indice_atmo"]
    inscription.indicateurs_frequence = ["quotidien"]
    inscription.indicateurs_media = ["mail"]
    inscription.recommandations_actives = ["oui"]
    inscription.recommandations_frequence = ["quotidien"]
    inscription.recommandations_media = ["mail"]
    celery.send_task(
        "ecosante.inscription.tasks.send_success_email.send_success_email",
        (inscription.id,),
        queue='send_email',
        routing_key='send_email.subscribe'
    )
    return jsonify({"result": "ok"})


@bp.route('<secret_slug>/user_unsubscription', methods=['POST'])
@webhook_capability_url
def user_unsubscription(secret_slug):
    mail = request.json['email']
    user = Inscription.query.filter_by(mail=mail).first()
    if not user:
        celery.send_task("ecosante.inscription.tasks.send_unsubscribe.send_unsubscribe_error", (mail,))
    else:
        user.unsubscribe()
    return jsonify(request.json)


@bp.route('<secret_slug>/export')
@bp.route('/export')
@admin_capability_url
def export():
    return Response(
        stream_with_context(Inscription.generate_csv()),
        mimetype="text/csv",
        headers={
            "Content-Disposition": f"attachment; filename=export-{datetime.now().strftime('%Y-%m-%d_%H%M')}.csv"
        }
    )


@bp.route('<secret_slug>/liste')
@bp.route('/liste')
@admin_capability_url
def liste():
    inscriptions = Inscription.active_query().all()
    return render_template(
        'liste.html',
        inscriptions=inscriptions
    )


@bp.route('/geojson')
def geojson():
    return jsonify(Inscription.export_geojson())


@bp.route('/changement')
def changement():
    return render_template('changement.html', uid=request.args.get('uid'))


@bp.route('/confirmer-changement', methods=['POST', 'GET'])
def confirmer_changement():
    uid = request.args.get('uid')
    if not uid:
        abort(400)
    inscription = db.session.query(Inscription).filter_by(uid=uid).first()
    if not inscription:
        abort(404)
    inscription.deactivation_date = None
    inscription.diffusion = 'mail'
    inscription.frequence = 'quotidien'
    db.session.add(inscription)
    db.session.commit()
return render_template('confirmer_changement.html') | 33.608392 | 126 | 0.6804 | 547 | 4,806 | 5.815356 | 0.279708 | 0.024206 | 0.037724 | 0.017605 | 0.26061 | 0.19585 | 0.19585 | 0.121974 | 0.05187 | 0.05187 | 0 | 0.005385 | 0.188514 | 4,806 | 143 | 127 | 33.608392 | 0.810256 | 0 | 0 | 0.209302 | 0 | 0.007752 | 0.143125 | 0.060953 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0 | 0.077519 | 0.023256 | 0.24031 | 0.015504 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
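A hedged sketch of the subscription flow these routes implement, driven with `requests`. The base URL and blueprint mount point are assumptions, and `frequence` is a hypothetical second-stage field name; the `mail` field and the `uid` response do follow from the code above:

```python
import requests

BASE = "http://localhost:5000/inscription"  # assumed mount point for this blueprint

# Step 1: the first stage validates a 'mail' field and returns the subscription uid.
r = requests.post(f"{BASE}/premiere-etape", json={"mail": "user@example.com"})
uid = r.json()["uid"]

# Step 2: the second stage patches any FormDeuxiemeEtape fields present in the body
# ('frequence' is an illustrative field name; the form is not shown in this record).
requests.post(f"{BASE}/{uid}/", json={"frequence": "quotidien"})

# Step 3: confirmation sets the default indicators and queues the success e-mail.
requests.get(f"{BASE}/{uid}/_confirm")
```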
8959bfc6f03789f676832a95e881b497a1ad60ab | 2,455 | py | Python | broker.py | batuengin/becalm-station | 1fa377c4553e92d6ffde7e4d999ed7d4940ecd77 | [
"Apache-2.0"
] | 2 | 2020-10-18T08:13:17.000Z | 2021-03-12T12:19:45.000Z | broker.py | batuengin/becalm-station | 1fa377c4553e92d6ffde7e4d999ed7d4940ecd77 | [
"Apache-2.0"
] | 5 | 2020-10-26T15:39:02.000Z | 2022-02-27T05:47:30.000Z | broker.py | batuengin/becalm-station | 1fa377c4553e92d6ffde7e4d999ed7d4940ecd77 | [
"Apache-2.0"
] | 3 | 2020-10-31T08:56:50.000Z | 2021-01-25T21:28:37.000Z | #!/usr/bin/python3
# This file is part of becalm-station
# https://github.com/idatis-org/becalm-station
# Copyright: Copyright (C) 2020 Enrique Melero <enrique.melero@gmail.com>
# License: Apache License Version 2.0, January 2004
# The full text of the Apache License is available here
# http://www.apache.org/licenses/
from datetime import datetime
from flask import Flask
from flask_restful import Resource, Api
from flask_cors import CORS
import requests
import json
from flask_apscheduler import APScheduler
import pytz
# Change this to fit your timezone
timezone="Europe/Madrid"
# The Server hostname and port where we can contact the becalm server service
serverAddr="becalm.valora.io"
serverPort="4000"
# The becalm Station hostname and port where the sensor drivers are running
sensorAddr="localhost"
sensorPort="8887"
# URL or the becalm Server to post the results
# There is normally no need to change this
serverurl="http://" + serverAddr + ":" + serverPort + "/v100/data-sensor/2?id_device=1"
sensorurl="http://" + sensorAddr + ":" + sensorPort + "/"
scheduler = APScheduler()
tz = pytz.timezone(timezone)
@scheduler.task('interval', id='do_job_1', seconds=5, misfire_grace_time=10)
def job1():
    with scheduler.app.app_context():
        # Gather data from sensor microservice
        r = requests.get(sensorurl)
        if r.status_code != 200:
            print("Error reading sensor " + sensorurl)
            return
        payload_dict = r.json()
        timestamp = datetime.now(tz).__str__()
        payload = []
        for key in payload_dict.keys():
            measure = {
                'measure_type': key,
                'measure_value': payload_dict[key],
                'date_generation': timestamp
            }
            payload.append(measure)
        # Post results to central server
        headers = {'Content-type': 'application/json'}
        r = requests.post(serverurl, headers=headers, json=payload)
        if r.status_code == 201:
            print(datetime.now().__str__() + " Posted to server" + "\n" + json.dumps(payload))
        else:
            print("Error posting to server: " + str(r.status_code) + "\n" + json.dumps(payload))


app = Flask(__name__)


@app.route('/rest/api/v1.0/debug', methods=['GET'])
def home2():
    r = requests.get(sensorurl + '/debug')
    return r.json()


if __name__ == '__main__':
    scheduler.api_enabled = True
    scheduler.init_app(app)
    scheduler.start()
    app.run(debug=True, host='0.0.0.0', port=8081)
| 29.578313 | 95 | 0.682281 | 327 | 2,455 | 5 | 0.492355 | 0.022018 | 0.020183 | 0.024465 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023303 | 0.195927 | 2,455 | 82 | 96 | 29.939024 | 0.804965 | 0.27169 | 0 | 0 | 0 | 0 | 0.162909 | 0.017475 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.16 | 0 | 0.24 | 0.06 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
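`job1()` expects the becalm-station sensor microservice to answer `GET /` with a flat JSON object mapping measure types to values; each key becomes a `measure_type` in the payload posted upstream. A minimal fake sensor for local testing, a sketch only, with illustrative measure names:

```python
from flask import Flask, jsonify

fake_sensor = Flask(__name__)


@fake_sensor.route('/')
def read_sensor():
    # Flat mapping of measure type -> value, the shape job1() consumes.
    return jsonify({'temperature': 36.6, 'heart_rate': 72})


if __name__ == '__main__':
    fake_sensor.run(port=8887)  # the port broker.py polls by default
```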
895b5c7a50aa9cfe925ce6b639fb8f5b69feda61 | 2,468 | py | Python | model/multiscale_HSD.py | hit-nclab/HSD | e0fe95b4a17eb3a261804a194802a95ccd729db0 | [
"MIT"
] | null | null | null | model/multiscale_HSD.py | hit-nclab/HSD | e0fe95b4a17eb3a261804a194802a95ccd729db0 | [
"MIT"
] | null | null | null | model/multiscale_HSD.py | hit-nclab/HSD | e0fe95b4a17eb3a261804a194802a95ccd729db0 | [
"MIT"
] | null | null | null | # -*- encoding: utf-8 -*-
# Multi-scale HSD implementation
import numpy as np
import networkx as nx
import pygsp
import multiprocessing
from collections import defaultdict
from tqdm import tqdm

from model import HSD
from tools import hierarchy


class MultiHSD(HSD):
    def __init__(self, graph: nx.Graph, graphName: str, hop: int, n_scales: int, metric="euclidean"):
        super(MultiHSD, self).__init__(graph, graphName, 0, hop, metric)
        self.n_scales = n_scales
        self.scales = None
        self.embeddings = {}

    def init(self):
        G = pygsp.graphs.Graph(self.adjacent)
        G.estimate_lmax()
        # How should the scales be chosen?
        self.scales = np.exp(np.linspace(np.log(0.01), np.log(G._lmax), self.n_scales))
        self.hierarchy = hierarchy.read_hierarchical_representation(self.graphName, self.hop)

    # embed nodes into vectors using multi-scale wavelets
    def embed(self) -> dict:
        embeddings = defaultdict(list)
        for scale in tqdm(self.scales):
            wavelets = self.calculate_wavelets(scale, approx=True)
            for node in self.nodes:
                embeddings[node].extend(self.get_layer_sum(wavelets, node))
        return embeddings

    def get_layer_sum(self, wavelets: np.ndarray, node: str) -> list:
        layers_sum = [0] * (self.hop + 1)
        neighborhoods = self.hierarchy[node]
        node_idx = self.node2idx[node]
        for hop, level in enumerate(neighborhoods):
            for neighbor in level:
                if neighbor == '':
                    continue
                layers_sum[hop] += wavelets[node_idx, self.node2idx[neighbor]]
        return layers_sum

    def parallel_embed(self, n_workers) -> dict:
        pool = multiprocessing.Pool(n_workers)
        states = {}
        for idx, scale in enumerate(self.scales):
            res = pool.apply_async(self.calculate_wavelets, args=(scale, True))
            states[idx] = res
        pool.close()
        pool.join()

        results = []
        for idx in range(self.n_scales):
            results.append(states[idx].get())

        embeddings = defaultdict(list)
        for idx, _ in enumerate(self.scales):
            wavelets = results[idx]
            for node in self.nodes:
                embeddings[node].extend(self.get_layer_sum(wavelets, node))
        self.embeddings = embeddings
        return embeddings


# plot wavelet changes in multi scales
def multiscale_plot_wavelets():
    pass
| 30.097561 | 100 | 0.628849 | 299 | 2,468 | 5.06689 | 0.337793 | 0.023102 | 0.021782 | 0.036964 | 0.085809 | 0.085809 | 0.085809 | 0.085809 | 0.085809 | 0.085809 | 0 | 0.005006 | 0.271475 | 2,468 | 81 | 101 | 30.469136 | 0.837597 | 0.063209 | 0 | 0.142857 | 0 | 0 | 0.003905 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0.017857 | 0.142857 | 0 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
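A hedged usage sketch for `MultiHSD`, assuming the hierarchy files read by `hierarchy.read_hierarchical_representation()` already exist for the named graph and that the `HSD` base class derives `adjacent`, `nodes`, and `node2idx` from the input graph:

```python
import networkx as nx
from model.multiscale_HSD import MultiHSD

g = nx.karate_club_graph()
hsd = MultiHSD(g, 'karate', hop=2, n_scales=5)
hsd.init()
# Each node maps to a vector of (hop + 1) * n_scales wavelet layer sums.
embeddings = hsd.embed()
```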
895b7c90cb75da66ff20cbbd6d8d03e20ae0fa6d | 10,433 | py | Python | src/tournaments/__init__.py | happz/settlers | 961a6d2121ab6e89106f17017f026c60c77f16f9 | [
"MIT"
] | 1 | 2018-11-16T09:41:31.000Z | 2018-11-16T09:41:31.000Z | src/tournaments/__init__.py | happz/settlers | 961a6d2121ab6e89106f17017f026c60c77f16f9 | [
"MIT"
] | 15 | 2015-01-07T14:17:36.000Z | 2019-04-29T13:26:43.000Z | src/tournaments/__init__.py | happz/settlers | 961a6d2121ab6e89106f17017f026c60c77f16f9 | [
"MIT"
] | null | null | null | __author__ = 'Milos Prchlik'
__copyright__ = 'Copyright 2010 - 2014, Milos Prchlik'
__contact__ = 'happz@happz.cz'
__license__ = 'http://www.php-suit.com/dpl'
import collections
import threading
from collections import OrderedDict
import hlib.api
import hlib.events
import hlib.input
import hlib.error
import games
import lib.datalayer
import lib.chat
import lib.play
# pylint: disable-msg=F0401
import hruntime # @UnresolvedImport
ValidateTID = hlib.input.validator_factory(hlib.input.NotEmpty(), hlib.input.Int())
class TournamentLists(lib.play.PlayableLists):
    def get_objects(self, l):
        return [hruntime.dbroot.tournaments[tid] for tid in l]

    def get_active(self, user):
        return [t.id for t in hruntime.dbroot.tournaments.values() if t.is_active and (t.has_player(user) or t.stage == Tournament.STAGE_FREE)]

    def get_inactive(self, user):
        return [t.id for t in hruntime.dbroot.tournaments.values() if not t.is_active and t.has_player(user)]

    def get_archived(self, user):
        return [t.id for t in hruntime.dbroot.tournaments_archived.values() if user.name in t.players]

    # Shortcuts
    def created(self, t):
        with self._lock:
            super(TournamentLists, self).created(t)
            hruntime.dbroot.tournaments.push(t)
            return True


_tournament_lists = TournamentLists()

f_active = _tournament_lists.f_active
f_inactive = _tournament_lists.f_inactive
f_archived = _tournament_lists.f_archived

from hlib.stats import stats as STATS
STATS.set('Tournaments lists', OrderedDict([
    ('Active', lambda s: dict([ (k.name, dict(tournaments = ', '.join([str(i) for i in v]))) for k, v in _tournament_lists.snapshot('active').items() ])),
    ('Inactive', lambda s: dict([ (k.name, dict(tournaments = ', '.join([str(i) for i in v]))) for k, v in _tournament_lists.snapshot('inactive').items() ])),
    ('Archived', lambda s: dict([ (k.name, dict(tournaments = ', '.join([str(i) for i in v]))) for k, v in _tournament_lists.snapshot('archived').items() ]))
]))


class TournamentCreationFlags(games.GameCreationFlags):
    FLAGS = ['name', 'desc', 'kind', 'owner', 'engine', 'password', 'num_players', 'limit_rounds']

MAX_OPPONENTS = 48


class Player(lib.play.Player):
    def __init__(self, tournament, user):
        lib.play.Player.__init__(self, user)

        self.tournament = tournament
        self.active = True
        self.points = 0
        self.wins = 0

    def __getattr__(self, name):
        if name == 'chat':
            return lib.chat.ChatPagerTournament(self.tournament)

        return lib.play.Player.__getattr__(self, name)

    def __str__(self):
        return 'Player(name = "%s", active = %s, points = %i, wins = %i)' % (self.user.name, self.active, self.points, self.wins)

    def to_state(self):
        d = lib.play.Player.to_state(self)

        d['points'] = self.points
        d['wins'] = self.wins

        return d


class Group(hlib.database.DBObject):
    def __init__(self, gid, tournament, round, players):
        hlib.database.DBObject.__init__(self)

        self.id = gid
        self.tournament = tournament
        self.round = round
        self.players = players
        self.games = hlib.database.SimpleList()

    def __getattr__(self, name):
        if name == 'finished_games':
            return [g for g in self.games if g.type == games.Game.TYPE_FINISHED]

        if name == 'completed_games':
            return [g for g in self.games if g.type in [games.Game.TYPE_FINISHED, games.Game.TYPE_CANCELED]]

        return hlib.database.DBObject.__getattr__(self, name)

    def __str__(self):
        attrs = {
            'tid': self.tournament.id,
            'gid': self.id,
            'players': [str(p) for p in self.players],
            'games': self.games,
            'completed_games': self.completed_games
        }
        attrs = ', '.join(['%s = "%s"' % (key, value) for key, value in attrs.items()])
        return 'Group(%s)' % attrs

    def to_state(self):
        def __game_to_state(g):
            if not self.tournament.is_active or self.round != self.tournament.round:
                __player_to_state = lambda x: {'user': hlib.api.User(x.user), 'points': x.points}
            else:
                __player_to_state = lambda x: {'user': hlib.api.User(x.user)}

            return {
                'id': g.id,
                'round': g.round,
                'type': g.type,
                'players': [__player_to_state(p) for p in g.players.values()]
            }

        return {
            'id': self.id,
            'players': [{'user': hlib.api.User(p.user)} for p in self.players],
            'games': [__game_to_state(g) for g in self.games]
        }


class Tournament(lib.play.Playable):
    STAGE_FREE = 0
    STAGE_RUNNING = 1
    STAGE_FINISHED = 2
    STAGE_CANCELED = 3

    MISSING_USER = lib.datalayer.User('"MISSING" player', 'foobar', 'osadnici@happz.cz')
    BYE_USER = lib.datalayer.User('"BYE" player', 'foobar', 'osadnici@happz.cz')

    def __init__(self, tournament_flags, game_flags):
        lib.play.Playable.__init__(self, tournament_flags)

        #if tournament_flags.limit % game_flags.limit != 0:
        #  raise WrongNumberOfPlayers()

        self.game_flags = game_flags
        self.chat_class = lib.chat.ChatPagerTournament
        self.stage = Tournament.STAGE_FREE
        self.players = hlib.database.SimpleMapping()
        self.winner_player = None
        self._v_engine = None
        self.engine_class = tournaments.engines.engines[self.flags.engine]
        self.engine_data = None
        self.rounds = hlib.database.SimpleMapping()

    def __getattr__(self, name):
        if name == 'is_active':
            return self.stage in (Tournament.STAGE_FREE, Tournament.STAGE_RUNNING)

        if name == 'is_finished':
            return self.stage == Tournament.STAGE_FINISHED

        if name == 'engine':
            if not hasattr(self, '_v_engine') or not self._v_engine:
                self._v_engine = self.engine_class(self)
            return self._v_engine

        if name == 'chat':
            return lib.chat.ChatPagerTournament(self)

        if name == 'current_round':
            return self.rounds[self.round]

        if name == 'completed_current_round':
            return [group for group in self.current_round if len(group.completed_games) == len(group.games)]

        return lib.play.Playable.__getattr__(self, name)

    def get_type(self):
        return 'tournament'

    def to_api(self):
        d = lib.play.Playable.to_api(self)

        d['is_game'] = False
        d['limit'] = self.limit
        d['limit_per_game'] = self.game_flags.limit
        d['limit_rounds'] = self.flags.limit_rounds
        d['winner'] = self.winner_player.to_state()

        return d

    def to_state(self):
        d = lib.play.Playable.to_state(self)

        d['tid'] = self.id
        d['stage'] = self.stage
        d['limit'] = self.limit
        d['limit_rounds'] = self.flags.limit_rounds
        d['winner'] = self.winner_player.to_state()
        d['rounds'] = [[g.to_state() for g in self.rounds[round]] for round in sorted(self.rounds.keys())]

        return d

    def create_games(self):
        # Create new round - list of player groups
        self.rounds[self.round] = ROUND = hlib.database.SimpleList()

        # Ask engine to group players
        player_groups = self.engine.create_groups()

        for group_id in range(0, len(player_groups)):
            GROUP = player_groups[group_id]
            ROUND.append(GROUP)

            real_players = [p for p in GROUP.players if p.user.name != '"MISSING" player']

            kwargs = {
                'limit': len(real_players),
                'turn_limit': self.game_flags.turn_limit,
                'dont_shuffle': True,
                'owner': real_players[0].user,
                'label': 'Turnajovka \'%s\' - %i-%i' % (self.name, self.round, group_id + 1)
            }

            for player_id in range(1, len(real_players)):
                kwargs['opponent' + str(player_id)] = real_players[player_id].user.name

            # pylint: disable-msg=W0142
            g = games.create_system_game(self.flags.kind, **kwargs)
            g.tournament = self
            g.tournament_group = GROUP

            GROUP.games.append(g)

    def next_round(self):
        self.engine.round_finished()

        if self.round == self.flags.limit_rounds:
            self.finish()
            return

        self.round += 1
        self.create_games()

    def begin(self):
        self.stage = Tournament.STAGE_RUNNING
        self.round = 1
        self.create_games()

        hlib.events.trigger('tournament.Started', self, tournament = self)

    def finish(self):
        self.stage = Tournament.STAGE_FINISHED
        hlib.events.trigger('tournament.Finished', self, tournament = self)

    def cancel(self):
        hlib.events.trigger('tournament.Canceled', self, tournament = self)

    def join_player(self, user, password):
        if self.stage != Tournament.STAGE_FREE:
            raise lib.play.AlreadyStartedError()

        if user in self.user_to_player:
            raise lib.play.AlreadyJoinedError()

        if self.is_password_protected and (password is None or len(password) <= 0 or lib.pwcrypt(password) != self.password):
            raise lib.play.WrongPasswordError()

        player = self.engine_class.player_class(self, user)
        self.players[user.name] = player

        hlib.events.trigger('tournament.PlayerJoined', self, tournament = self, user = user)

        if len(self.players) == self.flags.limit:
            self.begin()

        return player

    @staticmethod
    def create_tournament(tournament_flags, game_flags):
        t = Tournament(tournament_flags, game_flags)

        hlib.events.trigger('tournament.Created', t, tournament = t)

        if tournament_flags.owner != hruntime.dbroot.users['SYSTEM']:
            t.join_player(tournament_flags.owner, tournament_flags.password)

        return t


class TournamentError(lib.play.PlayableError):
    pass

WrongNumberOfPlayers = lambda: TournamentError(msg = 'Number of players of the tournament must be divisible by number of players per game', reply_status = 402)

hlib.events.Hook('tournament.Created', lambda e: _tournament_lists.created(e.tournament))
hlib.events.Hook('torunament.Started', lambda e: _tournament_lists.started(e.tournament))
hlib.events.Hook('tournament.Finished', lambda e: _tournament_lists.finished(e.tournament))
hlib.events.Hook('tournament.Archived', lambda e: _tournament_lists.archived(e.tournament))
hlib.events.Hook('tournament.Canceled', lambda e: _tournament_lists.canceled(e.tournament))
hlib.events.Hook('tournament.PlayerJoined', lambda e: _tournament_lists.inval_players(e.tournament))
hlib.events.Hook('tournament.PlayerInvited', lambda e: _tournament_lists.inval_players(e.tournament))
hlib.events.Hook('tournament.ChatPost', lambda e: hruntime.cache.remove_for_users([p.user for p in e.tournament.players.values()], 'recent_events'))
import events.tournament
import tournaments.engines
import tournaments.engines.swiss
import tournaments.engines.randomized
| 31.33033 | 159 | 0.690501 | 1,421 | 10,433 | 4.885292 | 0.152006 | 0.015125 | 0.016134 | 0.024201 | 0.250648 | 0.205272 | 0.143475 | 0.137136 | 0.116393 | 0.116393 | 0 | 0.004081 | 0.177993 | 10,433 | 332 | 160 | 31.424699 | 0.805387 | 0.021854 | 0 | 0.109649 | 0 | 0.004386 | 0.107101 | 0.009121 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114035 | false | 0.026316 | 0.074561 | 0.026316 | 0.372807 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
895bd2e21910859bb3cd7f5f07cedff8bc008454 | 6,474 | py | Python | max2/elo.py | thexa4/Spades | 1b6c5003d5bec13421418e1e563db435fac18286 | [
"MIT"
] | 1 | 2018-01-27T16:45:51.000Z | 2018-01-27T16:45:51.000Z | max2/elo.py | thexa4/Spades | 1b6c5003d5bec13421418e1e563db435fac18286 | [
"MIT"
] | null | null | null | max2/elo.py | thexa4/Spades | 1b6c5003d5bec13421418e1e563db435fac18286 | [
"MIT"
] | 1 | 2018-01-27T16:45:56.000Z | 2018-01-27T16:45:56.000Z | from game_manager import GameManager
from braindead_player import BraindeadPlayer
from max.random_player import RandomPlayer
from max2.training_player import TrainingPlayer
import max2.model
import sys
import trueskill
import random
import itertools
import math
import concurrent.futures
import threading
from os.path import exists
class EloRoundResult:
    def __init__(self, team, score, wins):
        self.team = team
        self.score = score
        self.wins = wins

    def __str__(self):
        percentage = 'n/a%'
        if self.wins[0] + self.wins[1] > 0:
            percentage = str(int(self.wins[0] / (self.wins[0] + self.wins[1]) * 100)) + '%'
        return f'{self.team}: {self.score[0]} [{self.wins[0]}] vs {self.score[1]} [{self.wins[1]}], {percentage}'


class EloTeam:
    def __init__(self, team1, team2):
        self.teams = [team1, team2]

        players = set([*team1, *team2])
        if len(players) != len(team1) + len(team2):
            raise Exception("Cannot have duplicate players in a game")

    def __str__(self):
        if len(self.teams[0]) == 1:
            return f'[{self.teams[0][0].label} vs {self.teams[1][0].label}]'
        return f'[{self.teams[0][0].label}, {self.teams[0][1].label} vs {self.teams[1][0].label}, {self.teams[1][1].label}]'

    # https://github.com/sublee/trueskill/issues/1#issuecomment-149762508
    def win_probability(self):
        delta_mu = sum(r.score.mu for r in self.teams[0]) - sum(r.score.mu for r in self.teams[1])
        sum_sigma = sum(r.score.sigma ** 2 for r in itertools.chain(self.teams[0], self.teams[1]))
        size = len(self.teams[0]) + len(self.teams[1])
        denom = math.sqrt(size * (trueskill.BETA * trueskill.BETA) + sum_sigma)
        ts = trueskill.global_env()
        return ts.cdf(delta_mu / denom)

    def record_score(self, team1_score, team2_score):
        rank = [team2_score, team1_score]
        t1_rank, t2_rank = trueskill.rate([[p.score for p in self.teams[0]], [p.score for p in self.teams[1]]], ranks=rank)

        players = [*self.teams[0], *self.teams[1]]
        ranks = [*t1_rank, *t2_rank]
        for player, rank in zip(players, ranks):
            player.update_rank(rank)

    def play(self, rounds):
        scores = [0, 0]
        wins = [0, 0]

        def play_round(_):
            players = []
            if len(self.teams[0]) == 1:
                players = [
                    self.teams[0][0].playerfunc(),
                    self.teams[1][0].playerfunc(),
                    self.teams[0][0].playerfunc(),
                    self.teams[1][0].playerfunc(),
                ]
            else:
                players = [
                    self.teams[0][0].playerfunc(),
                    self.teams[1][0].playerfunc(),
                    self.teams[0][1].playerfunc(),
                    self.teams[1][1].playerfunc(),
                ]
            manager = GameManager(players)
            return manager.play_game()

        with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
            for score in pool.map(play_round, range(rounds)):
                scores[0] = scores[0] + score[0]
                scores[1] = scores[1] + score[1]
                if score[0] > score[1]:
                    wins[0] = wins[0] + 1
                if score[1] > score[0]:
                    wins[1] = wins[1] + 1

        return EloRoundResult(self, scores, wins)


class EloPlayer:
    def __init__(self, playerfunc, path, strategy, label, remote_path):
        self.modelpath = path
        self.elodatapath = path + '.' + strategy + '.elo'
        self.score = trueskill.Rating()
        self.label = label
        self.playerfunc = playerfunc
        self.remote_path = remote_path

        if exists(self.elodatapath):
            mu = 25
            sigma = 8.333
            with open(self.elodatapath, 'r') as f:
                mu = float(f.readline())
                sigma = float(f.readline())
            self.score = trueskill.Rating(mu=mu, sigma=sigma)

    def update_rank(self, rank):
        self.score = rank
        with open(self.elodatapath, 'w') as f:
            f.write(f"{rank.mu}\n")
            f.write(f"{rank.sigma}\n")

    def __lt__(self, other):
        return self.score.mu < other.score.mu


class EloManager:
    def __init__(self, strategy):
        self.lock = threading.Lock()
        self.strategy = strategy
        if strategy != 'single' and strategy != 'double':
            raise Exception("Strategy should be either single or double.")

        self.pool = [
            EloPlayer(lambda: BraindeadPlayer(), 'max2/models/server/braindead', self.strategy, 'Braindead', 'braindead'),
            EloPlayer(lambda: RandomPlayer(), 'max2/models/server/random', self.strategy, 'Random', 'random')
        ]
        self.lookup = {
            'braindead': self.pool[0],
            'random': self.pool[1],
        }

    def add_player(self, playerfunc, path, label, remote_path):
        newplayer = EloPlayer(playerfunc, path, self.strategy, label, remote_path)
        with self.lock:
            for p in self.pool:
                if p.elodatapath == newplayer.elodatapath:
                    raise Exception("Player already in pool")
            self.pool.append(newplayer)
            self.lookup[remote_path] = newplayer

    def generate_team(self):
        if self.strategy == 'double' and len(self.pool) < 2:
            raise Exception("Unable to run game with less than 4 players")
        if self.strategy == 'single' and len(self.pool) < 4:
            raise Exception("Unable to run game with less than 4 players")

        teamsize = 1
        if self.strategy == 'single':
            teamsize = 2

        with self.lock:
            players = random.sample(self.pool, 2 * teamsize)
        team1 = players[:teamsize]
        team2 = players[teamsize:]
        return EloTeam(team1, team2)

    def generate_high_entropy_team(self):
        while True:
            team = self.generate_team()
            equality = abs(team.win_probability() - 0.5) * 2
            if random.random() > equality:
                return team

    def play_game(self, team=None):
        if team is None:
            team = self.generate_high_entropy_team()
        result = team.play(10)
        with self.lock:
            team.record_score(result.wins[0], result.wins[1])
        return result
| 36.167598 | 124 | 0.56194 | 793 | 6,474 | 4.496847 | 0.201765 | 0.068144 | 0.03926 | 0.015423 | 0.153113 | 0.150589 | 0.120303 | 0.084128 | 0.084128 | 0.069546 | 0 | 0.029734 | 0.309082 | 6,474 | 178 | 125 | 36.370787 | 0.767494 | 0.010195 | 0 | 0.115646 | 0 | 0.020408 | 0.095223 | 0.031689 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108844 | false | 0 | 0.088435 | 0.006803 | 0.292517 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
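A hedged usage sketch for `EloManager`; the extra model path and label are illustrative, but the constructor, `add_player`, and `play_game` calls match the class above. Ratings persist next to each model path as `<path>.<strategy>.elo`, so repeated runs keep refining the TrueSkill scores:

```python
manager = EloManager('double')  # 'double' plays 1v1 teams; 'single' plays 2v2
manager.add_player(lambda: RandomPlayer(), 'max2/models/server/random2',
                   'Random 2', 'random2')

for _ in range(5):
    # Picks closely matched opponents, plays 10 games, updates ratings on disk.
    print(manager.play_game())
```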
895d5c7ec5c22176fbc7bef3c30c34da72d63571 | 2,275 | py | Python | examples/density.py | dwbullok/python-colormath | 4b218effd53a52da891bbbb60661426ef194d085 | [
"BSD-3-Clause"
] | 1 | 2019-06-10T20:06:31.000Z | 2019-06-10T20:06:31.000Z | examples/density.py | dwbullok/python-colormath | 4b218effd53a52da891bbbb60661426ef194d085 | [
"BSD-3-Clause"
] | null | null | null | examples/density.py | dwbullok/python-colormath | 4b218effd53a52da891bbbb60661426ef194d085 | [
"BSD-3-Clause"
] | null | null | null | """
This module shows you how to perform various kinds of density calculations.
"""
# Does some sys.path manipulation so we can run examples in-place.
# noinspection PyUnresolvedReferences
import example_config
from colormath.color_objects import SpectralColor
from colormath.density_standards import ANSI_STATUS_T_RED, ISO_VISUAL
EXAMPLE_COLOR = SpectralColor(
    observer=2, illuminant='d50',
    spec_380nm=0.0600, spec_390nm=0.0600, spec_400nm=0.0641,
    spec_410nm=0.0654, spec_420nm=0.0645, spec_430nm=0.0605,
    spec_440nm=0.0562, spec_450nm=0.0543, spec_460nm=0.0537,
    spec_470nm=0.0541, spec_480nm=0.0559, spec_490nm=0.0603,
    spec_500nm=0.0651, spec_510nm=0.0680, spec_520nm=0.0705,
    spec_530nm=0.0736, spec_540nm=0.0772, spec_550nm=0.0809,
    spec_560nm=0.0870, spec_570nm=0.0990, spec_580nm=0.1128,
    spec_590nm=0.1251, spec_600nm=0.1360, spec_610nm=0.1439,
    spec_620nm=0.1511, spec_630nm=0.1590, spec_640nm=0.1688,
    spec_650nm=0.1828, spec_660nm=0.1996, spec_670nm=0.2187,
    spec_680nm=0.2397, spec_690nm=0.2618, spec_700nm=0.2852,
    spec_710nm=0.2500, spec_720nm=0.2400, spec_730nm=0.2300)


def example_auto_status_t_density():
    print("=== Example: Automatic Status T Density ===")
    # If no arguments are provided to calc_density(), ANSI Status T density is
    # assumed. The correct RGB "filter" is automatically selected for you.
    print("Density: %f" % EXAMPLE_COLOR.calc_density())
    print("=== End Example ===\n")


def example_manual_status_t_density():
    print("=== Example: Manual Status T Density ===")
    # Here we are specifically requesting the value of the red band via the
    # ANSI Status T spec.
    print("Density: %f (Red)" % EXAMPLE_COLOR.calc_density(
        density_standard=ANSI_STATUS_T_RED))
    print("=== End Example ===\n")


def example_visual_density():
    print("=== Example: Visual Density ===")
    # Here we pass the ISO Visual spectral standard.
    print("Density: %f" % EXAMPLE_COLOR.calc_density(
        density_standard=ISO_VISUAL))
    print("=== End Example ===\n")
# Feel free to comment/un-comment examples as you please.
example_auto_status_t_density()
example_manual_status_t_density()
example_visual_density()
| 40.625 | 79 | 0.712967 | 344 | 2,275 | 4.491279 | 0.453488 | 0.045307 | 0.06343 | 0.04466 | 0.205825 | 0.114563 | 0.046602 | 0 | 0 | 0 | 0 | 0.154295 | 0.170989 | 2,275 | 55 | 80 | 41.363636 | 0.664899 | 0.225055 | 0 | 0.088235 | 0 | 0 | 0.129356 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0 | 0.088235 | 0 | 0.176471 | 0.264706 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
895ea8c663f4b071e8a2f10939d1693b255ef7ea | 1,858 | py | Python | connector/python/setup.py | nikolaypavlov/spark-riak-connector | 84859aa4d82dd7234fb5c3c21789108d3a6c1094 | [
"Apache-2.0"
] | 63 | 2015-09-12T04:10:58.000Z | 2022-03-20T16:35:27.000Z | connector/python/setup.py | nikolaypavlov/spark-riak-connector | 84859aa4d82dd7234fb5c3c21789108d3a6c1094 | [
"Apache-2.0"
] | 83 | 2015-09-11T13:30:50.000Z | 2018-11-24T11:13:06.000Z | connector/python/setup.py | nikolaypavlov/spark-riak-connector | 84859aa4d82dd7234fb5c3c21789108d3a6c1094 | [
"Apache-2.0"
] | 34 | 2015-09-10T15:52:54.000Z | 2018-07-03T10:33:43.000Z | """
Copyright 2016 Basho Technologies, Inc.
This file is provided to you under the Apache License,
Version 2.0 (the "License"); you may not use this file
except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
"""
import os
from setuptools import setup, find_packages
from codecs import open
from os import path
basedir = os.path.dirname(os.path.abspath(__file__))
os.chdir(basedir)
with open(path.join(basedir, 'README.rst'), encoding='utf-8') as f:
    long_description = f.read()

setup(
    name='pyspark_riak',
    version="1.6.3",
    description='Utilities to assist in working with Riak KV and PySpark.',
    long_description=long_description,
    license='Apache License 2.0',
    author='Basho Technologies',
    author_email='dataplatform@basho.com',
    url='https://github.com/basho/spark-riak-connector/',
    options={'easy_install': {'allow_hosts': 'pypi.python.org'}},
    platforms='Platform Independent',
    keywords='riak spark pyspark',
    classifiers=[
        'License :: OSI Approved :: Apache Software License',
        'Intended Audience :: Developers',
        'Operating System :: OS Independent',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: 2.7',
        'Topic :: Database',
        'Topic :: Software Development :: Libraries',
        'Topic :: Scientific/Engineering :: Information Analysis',
        'Topic :: Utilities',
    ],
    packages=find_packages(),
    include_package_data=True,
    setup_requires='pytest-runner',
    tests_require='pytest'
)
| 32.034483 | 73 | 0.728741 | 248 | 1,858 | 5.391129 | 0.584677 | 0.044877 | 0.019447 | 0.023934 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01087 | 0.158235 | 1,858 | 57 | 74 | 32.596491 | 0.84399 | 0.312702 | 0 | 0 | 0 | 0 | 0.478329 | 0.034673 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89630d0bd0aad54ccd57d39bdeed4934b8f4b4ea | 4,812 | py | Python | scripts/pre_processing/pre_processing_LOCAL.py | self-improving-agent/SomaticVariantCallingWithDeepLearning | 50912dd3c2e88cd05daf5870ab6437d43a16cca8 | [
"MIT"
] | null | null | null | scripts/pre_processing/pre_processing_LOCAL.py | self-improving-agent/SomaticVariantCallingWithDeepLearning | 50912dd3c2e88cd05daf5870ab6437d43a16cca8 | [
"MIT"
] | null | null | null | scripts/pre_processing/pre_processing_LOCAL.py | self-improving-agent/SomaticVariantCallingWithDeepLearning | 50912dd3c2e88cd05daf5870ab6437d43a16cca8 | [
"MIT"
] | null | null | null | import pysam
import vcf
from vcf.parser import _Info as VcfInfo, field_counts as vcf_field_counts
import math
CHR = 22
chr_to_num = lambda x: ''.join([c for c in x if c.isdigit()])
purity = 0.6
# Open files
normalSample = pysam.AlignmentFile("../../data/external/HG002.hs37d5.300x.bam", "rb", ignore_truncation=True)
tumorSample = pysam.AlignmentFile("../../data/external/HG001.hs37d5.300x.bam", "rb", ignore_truncation=True)
normalvcf = vcf.Reader(open("../../data/external/HG002_GRCh37_1_22_v4.1_draft_benchmark.vcf", 'r'))
normalvcf.infos['datasetsmissingcall'] = VcfInfo('datasetsmissingcall', None, 'String',
'Names of datasets that are missing a call or have an incorrect call at this location, and the high-confidence call is a variant',
None, None)
tumorvcf = vcf.Reader(open("../../data/external/HG001_GRCh37_GIAB_highconf_CG-IllFB-IllGATKHC-Ion-10X-SOLID_CHROM1-X_v.3.3.2_highconf_PGandRTGphasetransfer.vcf", 'r'))
tumorvcf.infos['datasetsmissingcall'] = VcfInfo('datasetsmissingcall', None, 'String',
'Names of datasets that are missing a call or have an incorrect call at this location, and the high-confidence call is a variant',
None, None)
# In the snakemake pipeline these outputs come from snakemake.output; for this
# LOCAL variant we open plain files instead (the paths below are illustrative).
normalOutput = open("normal_output.tsv", "w")
normalOutput.write("CHR\tPOS\tREF\tALT\tLABEL\n")
tumorOutput = open("tumor_output.tsv", "w")
tumorOutput.write("CHR\tPOS\tREF\tALT\tLABEL\n")
# Retrieve genomic region locations from BED
regions = []
bed = open("../../data/external/chr22_exons.bed", "r")
next(bed)
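# Each BED line is "chrom<TAB>start<TAB>end[...]"; columns 1:3 give the interval
# bounds (the first line is presumably a header and was skipped above).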
for line in bed:
region_start, region_end = line.split()[1:3]
regions.append((int(region_start), int(region_end)))
# Retrieve mutations from VCFs
normalMutations = {}
# for record in normalvcf:
# current_chr = chr_to_num(record.CHROM)
# if current_chr == '':
# break
# elif int(current_chr) < CHR:
# continue
# elif int(current_chr) > CHR:
# break
# if any(region_start <= record.POS <= region_end for (region_start, region_end) in regions):
# normalMutations[record.POS] = record.REF[0]
tumorMutations = {}
for record in tumorvcf:
current_chr = chr_to_num(record.CHROM)
if current_chr == '':
break
elif int(current_chr) < CHR:
continue
elif int(current_chr) > CHR:
break
if any(region_start <= record.POS <= region_end for (region_start, region_end) in regions):
# Extra condition to get mutations unique to tumor sample
        if record.POS not in normalMutations:
tumorMutations[record.POS] = record.REF[0]
print(record.POS)
# 17309881
break
# Process BAM file
for region in regions:
for pileup_column in normalSample.pileup("{}".format(CHR), 17309880, 17309881):
pos = pileup_column.pos + 1
# TEST CONTROL
        if pos not in tumorMutations:
continue
bases = {"A": 0, "T": 0, "C": 0, "G": 0}
ref = 0.0
alt = 0.0
# Count up pileup column reads
for pileup_read in pileup_column.pileups:
if not pileup_read.is_del and not pileup_read.is_refskip:
read = pileup_read.alignment.query_sequence[pileup_read.query_position]
if read in ["A","T","C","G"]:
bases[read] += 1
values = list(bases.values())
somaticVar = tumorMutations.get(pos, None)
if somaticVar:
tumor_bases = {"A": 0, "T": 0, "C": 0, "G": 0}
for tumor_pileup_column in tumorSample.pileup("{}".format(CHR), pos-1, pos):
if tumor_pileup_column.pos == pos-1:
for pileup_read in tumor_pileup_column.pileups:
if not pileup_read.is_del and not pileup_read.is_refskip:
read = pileup_read.alignment.query_sequence[pileup_read.query_position]
if read in ["A","T","C","G"]:
tumor_bases[read] += 1
tumor_values = list(tumor_bases.values())
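            # In-silico mixing (our reading of the formula below, not stated in
            # the original): simulate a tumor sample of the given purity by
            # combining (1 - purity) of the normal counts with purity of the
            # tumor counts per base; floor/ceil keep the counts integral.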
combined_values = [math.floor((1-purity)*x) + math.ceil(purity*y) for (x,y) in zip(values, tumor_values)]
combined_bases = {"A": combined_values[0], "T": combined_values[1], "C": combined_values[2], "G": combined_values[3]}
tumor_ref = combined_bases[somaticVar]
tumor_label = "SomaticVariant"
combined_values.remove(tumor_ref)
tumor_alt = float(max(combined_values))
            # Normalise the somatic counts by the combined tumor depth
            # (not by ref + alt, which are still 0.0 at this point).
            total = tumor_ref + tumor_alt
            if total == 0:
                continue
            tumor_ref = round(tumor_ref / total, 3)
            tumor_alt = round(tumor_alt / total, 3)
tumorOutput.write("{}\t{}\t{}\t{}\t{}\n".format(CHR,pos,tumor_ref,tumor_alt,tumor_label))
germlineVar = normalMutations.get(pos, None)
if germlineVar:
ref = bases[germlineVar]
label = "GermlineVariant"
else:
ref = float(max(values))
label = "Normal"
values.remove(ref)
alt = float(max(values))
total = ref + alt
if total == 0:
continue
ref = round(ref / total, 3)
alt = round(alt / total, 3)
normalOutput.write("{}\t{}\t{}\t{}\t{}\n".format(CHR,pos,ref,alt,label))
tumorOutput.write("{}\t{}\t{}\t{}\t{}\n".format(CHR,pos,ref,alt,label)) | 30.846154 | 167 | 0.687656 | 701 | 4,812 | 4.579173 | 0.256776 | 0.031153 | 0.024299 | 0.021184 | 0.42243 | 0.395016 | 0.395016 | 0.356386 | 0.335826 | 0.327726 | 0 | 0.025699 | 0.167082 | 4,812 | 156 | 168 | 30.846154 | 0.7752 | 0.152535 | 0 | 0.202247 | 0 | 0.033708 | 0.191909 | 0.076468 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.044944 | 0 | 0.044944 | 0.011236 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
896353bb52dc91b51515cede2f8f6435b1c743a3 | 1,229 | py | Python | getitfixed/lingua_extractor.py | camptocamp/getitfixed | f339ee7ac603ebf6c5938d90b79d709e1c9e3f09 | [
"BSD-2-Clause-FreeBSD"
] | 4 | 2021-02-11T15:09:15.000Z | 2021-02-23T07:56:49.000Z | getitfixed/lingua_extractor.py | camptocamp/getitfixed | f339ee7ac603ebf6c5938d90b79d709e1c9e3f09 | [
"BSD-2-Clause-FreeBSD"
] | 4 | 2021-02-08T12:52:16.000Z | 2021-11-25T16:25:05.000Z | getitfixed/lingua_extractor.py | camptocamp/getitfixed | f339ee7ac603ebf6c5938d90b79d709e1c9e3f09 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | from lingua.extractors import Extractor, Message
from c2c.template.config import config as configuration
class GetItFixedExtractor(Extractor): # pragma: no cover
"""
    GetItFixed extractor (settings: email subjects and bodies)
"""
extensions = [".yaml"]
def __call__(self, filename, options):
configuration.init(filename)
settings = configuration.get_config()
for path in (
("getitfixed", "admin_new_issue_email", "email_subject"),
("getitfixed", "admin_new_issue_email", "email_body"),
("getitfixed", "new_issue_email", "email_subject"),
("getitfixed", "new_issue_email", "email_body"),
("getitfixed", "update_issue_email", "email_subject"),
("getitfixed", "update_issue_email", "email_body"),
("getitfixed", "resolved_issue_email", "email_subject"),
("getitfixed", "resolved_issue_email", "email_body"),
):
value = settings
for key in path:
value = value[key]
# yield Message(msgctxt msgid msgid_plural flags comment tcomment location)
yield Message(None, value, None, [], u"", u"", (filename, "/".join(path)))
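# Note (assumption about the surrounding project, not from this file): lingua
# discovers custom extractors via the "lingua.extractors" setuptools entry
# point, which would map this class to *.yaml files via its `extensions` list.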
| 39.645161 | 87 | 0.617575 | 123 | 1,229 | 5.910569 | 0.447154 | 0.110041 | 0.165062 | 0.099037 | 0.3989 | 0.211829 | 0 | 0 | 0 | 0 | 0 | 0.001091 | 0.253865 | 1,229 | 30 | 88 | 40.966667 | 0.791712 | 0.12205 | 0 | 0 | 0 | 0 | 0.306968 | 0.039548 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.095238 | 0 | 0.238095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89636af3bcc2dac82bb553831e85121b506753d9 | 1,577 | py | Python | magnify/script_utils.py | jiwoncpark/magnify | 04421d43b9f5340989e8614d961ac9f5988bde0c | [
"MIT"
] | null | null | null | magnify/script_utils.py | jiwoncpark/magnify | 04421d43b9f5340989e8614d961ac9f5988bde0c | [
"MIT"
] | null | null | null | magnify/script_utils.py | jiwoncpark/magnify | 04421d43b9f5340989e8614d961ac9f5988bde0c | [
"MIT"
] | 2 | 2021-09-14T19:14:12.000Z | 2021-11-07T10:29:01.000Z | import os
import torch
def save_state(model, optim, lr_scheduler, kl_scheduler, epoch,
train_dir, param_w_scheduler, epoch_i):
"""Save the state dict of the current training to disk
Parameters
----------
train_loss : float
current training loss
val_loss : float
current validation loss
"""
state = dict(
model=model.state_dict(),
optimizer=optim.state_dict(),
lr_scheduler=lr_scheduler.state_dict(),
kl_scheduler=kl_scheduler.__dict__,
param_w_scheduler=param_w_scheduler.__dict__,
epoch=epoch,
)
model_path = os.path.join(train_dir, f'model_{epoch_i}.mdl')
torch.save(state, model_path)
def load_state(model, train_dir, device,
optim=None, lr_scheduler=None, kl_scheduler=None,
param_w_scheduler=None,
epoch_i=0,
):
"""Load the state dict to resume training or infer
"""
model_path = os.path.join(train_dir, f'model_{epoch_i}.mdl')
state = torch.load(model_path)
model.load_state_dict(state['model'])
model.to(device)
if optim is not None:
optim.load_state_dict(state['optimizer'])
if lr_scheduler is not None:
lr_scheduler.load_state_dict(state['lr_scheduler'])
if kl_scheduler is not None:
kl_scheduler.__dict__ = state['kl_scheduler']
if param_w_scheduler is not None:
param_w_scheduler.__dict__ = state['param_w_scheduler']
print(f"Loaded model at epoch {state['epoch']}")
| 32.854167 | 64 | 0.63792 | 208 | 1,577 | 4.495192 | 0.225962 | 0.086631 | 0.112299 | 0.057754 | 0.08984 | 0.08984 | 0.08984 | 0.08984 | 0.08984 | 0.08984 | 0 | 0.000861 | 0.263158 | 1,577 | 47 | 65 | 33.553191 | 0.803787 | 0.137603 | 0 | 0.0625 | 0 | 0 | 0.099242 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.125 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89639c5827d0f48f2821d66cd1c6a72c013d43ba | 489 | py | Python | test/test_account.py | dheller1/personal-finance | 914de3e538515249b510383fd93538693a96d98f | [
"MIT"
] | null | null | null | test/test_account.py | dheller1/personal-finance | 914de3e538515249b510383fd93538693a96d98f | [
"MIT"
] | null | null | null | test/test_account.py | dheller1/personal-finance | 914de3e538515249b510383fd93538693a96d98f | [
"MIT"
] | null | null | null | from pfin.account import Account
from moneyed import Money, EUR, USD
import pytest
def test_emptyaccount():
a = Account('Giro', 'EUR')
assert a.name == 'Giro'
assert a.currency == EUR
assert a.balance == Money(0, EUR)
def test_nonemptyaccount():
u = Account('my Depot', USD, 15.22)
assert u.currency == USD
assert u.balance == Money(15.22, USD)
def test_mismatch():
with pytest.raises(TypeError):
Account('Cant Decide', 'CNY', Money(13, EUR))
| 22.227273 | 53 | 0.652352 | 69 | 489 | 4.57971 | 0.478261 | 0.066456 | 0.063291 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028497 | 0.210634 | 489 | 21 | 54 | 23.285714 | 0.790155 | 0 | 0 | 0 | 0 | 0 | 0.067485 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8963b9e21a078addbf999be72b2d1e008edfb9e8 | 2,306 | py | Python | preprocess/feature.py | NTHU-CVLab/ActivityProps | 68392fb38d87afdc92f6f054e83e9166121401a5 | [
"Apache-2.0"
] | 1 | 2017-10-31T15:36:55.000Z | 2017-10-31T15:36:55.000Z | preprocess/feature.py | NTHU-CVLab/ActivityProps | 68392fb38d87afdc92f6f054e83e9166121401a5 | [
"Apache-2.0"
] | null | null | null | preprocess/feature.py | NTHU-CVLab/ActivityProps | 68392fb38d87afdc92f6f054e83e9166121401a5 | [
"Apache-2.0"
] | null | null | null | import os
import re
import h5py
import numpy as np
class FeatureFile:
def __init__(self, feature_file, write=False):
self.feature_file = feature_file
self.h5 = self.open(feature_file, write)
self.features_keys = None
self.labels_keys = None
self.perm = None
def open(self, filepath, write):
mode = 'r+' if os.path.exists(filepath) and not write else 'w'
return h5py.File(filepath, mode)
def _load(self, features_keys, labels_keys):
f = self.h5
features = np.vstack([f.get(k) for k in features_keys])
labels = np.concatenate([f.get(k) for k in labels_keys])
return features, labels
def load(self, random=False, split=0.0, **kwargs):
f = self.h5
features_keys = natural_sort([k for k in f.keys() if k.startswith('features')])
labels_keys = natural_sort([k for k in f.keys() if k.startswith('labels')])
assert len(features_keys) == len(labels_keys)
self.features_keys = features_keys
self.labels_keys = labels_keys
if random and kwargs.get('video_wise'):
_features_keys = np.array(features_keys)
_labels_keys = np.array(labels_keys)
n = len(features_keys)
q = int(n * split)
self.perm = np.random.permutation(n)
self.excluded = _features_keys[self.perm[:q]]
l_keys_a = _labels_keys[self.perm[q:]]
l_keys_b = _labels_keys[self.perm[:q]]
f_keys_a = _features_keys[self.perm[q:]]
f_keys_b = _features_keys[self.perm[:q]]
return {
'train': self._load(f_keys_a, l_keys_a),
'test': self._load(f_keys_b, l_keys_b),
}
return self._load(features_keys, labels_keys)
def save(self, features, labels, suffix):
features_key = 'features_%s' % suffix
labels_key = 'labels_%s' % suffix
self.h5.create_dataset(features_key, data=features, dtype='float32')
self.h5.create_dataset(labels_key, data=labels, dtype='int8')
def natural_sort(l):
convert = lambda text: int(text) if text.isdigit() else text.lower()
alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]
return sorted(l, key = alphanum_key)
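# Illustrative usage (the HDF5 path and split settings are hypothetical):
#   ff = FeatureFile("features.h5")
#   splits = ff.load(random=True, split=0.2, video_wise=True)
#   x_train, y_train = splits["train"]
#   x_test, y_test = splits["test"]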
| 34.939394 | 87 | 0.617953 | 325 | 2,306 | 4.153846 | 0.255385 | 0.124444 | 0.044444 | 0.048148 | 0.165185 | 0.128889 | 0.059259 | 0.059259 | 0.059259 | 0.059259 | 0 | 0.008255 | 0.264527 | 2,306 | 65 | 88 | 35.476923 | 0.787736 | 0 | 0 | 0.039216 | 0 | 0 | 0.032524 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 1 | 0.117647 | false | 0 | 0.078431 | 0 | 0.313725 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8963da14cc6badd83963c9d6f04c2a8c84d7a51c | 8,162 | py | Python | sdks/python/apache_beam/io/fileio_test.py | goldfishy/beam | a956ff77a8448e5f2c12f6695fec608348b5ab60 | [
"Apache-2.0",
"BSD-3-Clause"
] | 2 | 2019-04-25T22:16:34.000Z | 2019-07-11T10:14:15.000Z | sdks/python/apache_beam/io/fileio_test.py | goldfishy/beam | a956ff77a8448e5f2c12f6695fec608348b5ab60 | [
"Apache-2.0",
"BSD-3-Clause"
] | 6 | 2020-11-13T18:59:17.000Z | 2021-08-25T16:11:11.000Z | sdks/python/apache_beam/io/fileio_test.py | goldfishy/beam | a956ff77a8448e5f2c12f6695fec608348b5ab60 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Tests for transforms defined in apache_beam.io.fileio."""
from __future__ import absolute_import
import csv
import io
import logging
import sys
import unittest
from nose.plugins.attrib import attr
import apache_beam as beam
from apache_beam.io import fileio
from apache_beam.io.filebasedsink_test import _TestCaseWithTempDirCleanUp
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.test_utils import compute_hash
from apache_beam.testing.util import assert_that
from apache_beam.testing.util import equal_to
class MatchTest(_TestCaseWithTempDirCleanUp):
def test_basic_two_files(self):
files = []
tempdir = '%s/' % self._new_tempdir()
# Create a couple files to be matched
files.append(self._create_temp_file(dir=tempdir))
files.append(self._create_temp_file(dir=tempdir))
with TestPipeline() as p:
files_pc = p | fileio.MatchFiles(tempdir) | beam.Map(lambda x: x.path)
assert_that(files_pc, equal_to(files))
def test_match_all_two_directories(self):
files = []
directories = []
for _ in range(2):
# TODO: What about this having to append the ending slash?
d = '%s/' % self._new_tempdir()
directories.append(d)
files.append(self._create_temp_file(dir=d))
files.append(self._create_temp_file(dir=d))
with TestPipeline() as p:
files_pc = (p
| beam.Create(directories)
| fileio.MatchAll()
| beam.Map(lambda x: x.path))
assert_that(files_pc, equal_to(files))
def test_match_files_one_directory_failure(self):
directories = [
'%s/' % self._new_tempdir(),
'%s/' % self._new_tempdir()]
files = list()
files.append(self._create_temp_file(dir=directories[0]))
files.append(self._create_temp_file(dir=directories[0]))
with self.assertRaises(beam.io.filesystem.BeamIOError):
with TestPipeline() as p:
files_pc = (
p
| beam.Create(directories)
| fileio.MatchAll(fileio.EmptyMatchTreatment.DISALLOW)
| beam.Map(lambda x: x.path))
assert_that(files_pc, equal_to(files))
  def test_match_files_one_directory_allow_if_wildcard(self):
directories = [
'%s/' % self._new_tempdir(),
'%s/' % self._new_tempdir()]
files = list()
files.append(self._create_temp_file(dir=directories[0]))
files.append(self._create_temp_file(dir=directories[0]))
with TestPipeline() as p:
files_pc = (
p
| beam.Create(['%s*' % d for d in directories])
| fileio.MatchAll(fileio.EmptyMatchTreatment.ALLOW_IF_WILDCARD)
| beam.Map(lambda x: x.path))
assert_that(files_pc, equal_to(files))
class ReadTest(_TestCaseWithTempDirCleanUp):
def test_basic_file_name_provided(self):
content = 'TestingMyContent\nIn multiple lines\nhaha!'
dir = '%s/' % self._new_tempdir()
self._create_temp_file(dir=dir, content=content)
with TestPipeline() as p:
content_pc = (p
| beam.Create([dir])
| fileio.MatchAll()
| fileio.ReadMatches()
| beam.Map(lambda f: f.read().decode('utf-8')))
assert_that(content_pc, equal_to([content]))
def test_csv_file_source(self):
content = 'name,year,place\ngoogle,1999,CA\nspotify,2006,sweden'
rows = [r.split(',') for r in content.split('\n')]
dir = '%s/' % self._new_tempdir()
self._create_temp_file(dir=dir, content=content)
def get_csv_reader(readable_file):
if sys.version_info >= (3, 0):
return csv.reader(io.TextIOWrapper(readable_file.open()))
else:
return csv.reader(readable_file.open())
with TestPipeline() as p:
content_pc = (p
| beam.Create([dir])
| fileio.MatchAll()
| fileio.ReadMatches()
| beam.FlatMap(get_csv_reader))
assert_that(content_pc, equal_to(rows))
def test_string_filenames_and_skip_directory(self):
content = 'thecontent\n'
files = []
tempdir = '%s/' % self._new_tempdir()
# Create a couple files to be matched
files.append(self._create_temp_file(dir=tempdir, content=content))
files.append(self._create_temp_file(dir=tempdir, content=content))
with TestPipeline() as p:
contents_pc = (p
| beam.Create(files + [tempdir])
| fileio.ReadMatches()
| beam.Map(lambda x: x.read().decode('utf-8')))
assert_that(contents_pc, equal_to([content]*2))
def test_fail_on_directories(self):
content = 'thecontent\n'
files = []
tempdir = '%s/' % self._new_tempdir()
# Create a couple files to be matched
files.append(self._create_temp_file(dir=tempdir, content=content))
files.append(self._create_temp_file(dir=tempdir, content=content))
with self.assertRaises(beam.io.filesystem.BeamIOError):
with TestPipeline() as p:
_ = (p
| beam.Create(files + [tempdir])
| fileio.ReadMatches(skip_directories=False)
| beam.Map(lambda x: x.read_utf8()))
class MatchIntegrationTest(unittest.TestCase):
INPUT_FILE = 'gs://dataflow-samples/shakespeare/kinglear.txt'
KINGLEAR_CHECKSUM = 'f418b25f1507f5a901257026b035ac2857a7ab87'
INPUT_FILE_LARGE = (
'gs://dataflow-samples/wikipedia_edits/wiki_data-00000000000*.json')
WIKI_FILES = [
'gs://dataflow-samples/wikipedia_edits/wiki_data-000000000000.json',
'gs://dataflow-samples/wikipedia_edits/wiki_data-000000000001.json',
'gs://dataflow-samples/wikipedia_edits/wiki_data-000000000002.json',
'gs://dataflow-samples/wikipedia_edits/wiki_data-000000000003.json',
'gs://dataflow-samples/wikipedia_edits/wiki_data-000000000004.json',
'gs://dataflow-samples/wikipedia_edits/wiki_data-000000000005.json',
'gs://dataflow-samples/wikipedia_edits/wiki_data-000000000006.json',
'gs://dataflow-samples/wikipedia_edits/wiki_data-000000000007.json',
'gs://dataflow-samples/wikipedia_edits/wiki_data-000000000008.json',
'gs://dataflow-samples/wikipedia_edits/wiki_data-000000000009.json',
]
def setUp(self):
self.test_pipeline = TestPipeline(is_integration_test=True)
@attr('IT')
def test_transform_on_gcs(self):
args = self.test_pipeline.get_full_options_as_args()
with beam.Pipeline(argv=args) as p:
matches_pc = (p
| beam.Create([self.INPUT_FILE, self.INPUT_FILE_LARGE])
| fileio.MatchAll()
| 'GetPath' >> beam.Map(lambda metadata: metadata.path))
assert_that(matches_pc,
equal_to([self.INPUT_FILE] + self.WIKI_FILES),
label='Matched Files')
checksum_pc = (p
| 'SingleFile' >> beam.Create([self.INPUT_FILE])
| 'MatchOneAll' >> fileio.MatchAll()
| fileio.ReadMatches()
| 'ReadIn' >> beam.Map(lambda x: x.read_utf8().split('\n'))
| 'Checksums' >> beam.Map(compute_hash))
assert_that(checksum_pc,
equal_to([self.KINGLEAR_CHECKSUM]),
label='Assert Checksums')
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
unittest.main()
| 34.584746 | 80 | 0.661725 | 1,014 | 8,162 | 5.108481 | 0.241617 | 0.027027 | 0.037838 | 0.048649 | 0.512162 | 0.477413 | 0.442278 | 0.397683 | 0.315444 | 0.296139 | 0 | 0.028999 | 0.222617 | 8,162 | 235 | 81 | 34.731915 | 0.787392 | 0.119088 | 0 | 0.434783 | 0 | 0 | 0.146406 | 0.119051 | 0 | 0 | 0 | 0.004255 | 0.080745 | 1 | 0.068323 | false | 0 | 0.086957 | 0 | 0.21118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
896461c46e6db21b14443c315d819b33f6df70bd | 856 | py | Python | pyuaparser/test.py | havocesp/pyuaparser | 486d6f19b05f0c8ae0e160b3185f06798b49fce6 | [
"Unlicense"
] | 2 | 2019-11-20T02:16:14.000Z | 2021-12-17T01:12:41.000Z | pyuaparser/test.py | havocesp/pyuaparser | 486d6f19b05f0c8ae0e160b3185f06798b49fce6 | [
"Unlicense"
] | null | null | null | pyuaparser/test.py | havocesp/pyuaparser | 486d6f19b05f0c8ae0e160b3185f06798b49fce6 | [
"Unlicense"
] | null | null | null | # -*- coding:utf-8 -*-
from core import UserAgent
testing_data = {
'user_agent': {
'family': 'Chrome',
'major': '60',
'minor': '0',
'patch': '3112'
},
'os': {
'family': 'Windows',
'major': '10',
'minor': None,
'patch': None,
'patch_minor': None
},
'device': {
'family': 'Other',
'brand': None,
'model': None
},
'string': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'
}
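# `testing_data` above mirrors the expected parse of the first user-agent string
# below; this script only prints the parses and does not assert against it.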
ua = UserAgent(
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36')
ua2 = UserAgent(
'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36')
print(ua)
print(ua2)
| 25.939394 | 131 | 0.55257 | 114 | 856 | 4.114035 | 0.438596 | 0.063966 | 0.057569 | 0.134328 | 0.443497 | 0.443497 | 0.443497 | 0.366738 | 0.366738 | 0.366738 | 0 | 0.151659 | 0.260514 | 856 | 32 | 132 | 26.75 | 0.589258 | 0.023364 | 0 | 0 | 0 | 0.107143 | 0.543165 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035714 | 0 | 0.035714 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8964797fa4b8e72c8fbd72fbf7a0a140ab73619e | 347 | py | Python | LeetCode/0219. Contains Duplicate II/solution.py | InnoFang/oh-my-algorithms | f559dba371ce725a926725ad28d5e1c2facd0ab2 | [
"Apache-2.0"
] | 19 | 2018-08-26T03:10:58.000Z | 2022-03-07T18:12:52.000Z | LeetCode/0219. Contains Duplicate II/solution.py | InnoFang/Algorithm-Library | 1896b9d8b1fa4cd73879aaecf97bc32d13ae0169 | [
"Apache-2.0"
] | null | null | null | LeetCode/0219. Contains Duplicate II/solution.py | InnoFang/Algorithm-Library | 1896b9d8b1fa4cd73879aaecf97bc32d13ae0169 | [
"Apache-2.0"
] | 6 | 2020-03-16T23:00:06.000Z | 2022-01-13T07:02:08.000Z | """
23 / 23 test cases passed.
Runtime: 48 ms
Memory Usage: 22.3 MB
"""
from typing import List
class Solution:
def containsNearbyDuplicate(self, nums: List[int], k: int) -> bool:
store = {}
for i, num in enumerate(nums):
if num in store and i - store[num] <= k:
return True
store[num] = i
return False
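if __name__ == "__main__":
    # Quick sanity checks using the examples from the problem statement.
    s = Solution()
    assert s.containsNearbyDuplicate([1, 2, 3, 1], 3) is True
    assert s.containsNearbyDuplicate([1, 0, 1, 1], 1) is True
    assert s.containsNearbyDuplicate([1, 2, 3, 1, 2, 3], 2) is False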
| 24.785714 | 71 | 0.54755 | 47 | 347 | 4.042553 | 0.702128 | 0.052632 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039301 | 0.340058 | 347 | 13 | 72 | 26.692308 | 0.790393 | 0.181556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8964f69c1d51b707bae8188916bfb506969245ff | 5,714 | py | Python | src/datalab/notebook/notebook-command/command/api/datahub.py | Chromico/bk-base | be822d9bbee544a958bed4831348185a75604791 | [
"MIT"
] | 84 | 2021-06-30T06:20:23.000Z | 2022-03-22T03:05:49.000Z | src/datalab/notebook/notebook-command/command/api/datahub.py | Chromico/bk-base | be822d9bbee544a958bed4831348185a75604791 | [
"MIT"
] | 7 | 2021-06-30T06:21:16.000Z | 2022-03-29T07:36:13.000Z | src/datalab/notebook/notebook-command/command/api/datahub.py | Chromico/bk-base | be822d9bbee544a958bed4831348185a75604791 | [
"MIT"
] | 40 | 2021-06-30T06:21:26.000Z | 2022-03-29T12:42:26.000Z | # -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making BK-BASE 蓝鲸基础平台 available.
Copyright (C) 2021 THL A29 Limited, a Tencent company. All rights reserved.
BK-BASE 蓝鲸基础平台 is licensed under the MIT License.
License for BK-BASE 蓝鲸基础平台:
--------------------------------------------------------------------
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
import json
import time
import requests
from command.constants import (
CLUSTER_OBJECT_NOT_FOUND,
CODE,
DATA,
HEADERS,
HTTP_STATUS_OK,
MESSAGE,
RESULT,
SINK_RT_ID,
SINK_TYPE,
SOURCE_RT_ID,
SOURCE_TYPE,
STATUS,
STOPPED,
SUCCESS,
TRANSPORT_WAIT_INTERVAL,
)
from command.exceptions import ApiRequestException, ExecuteException
from command.settings import DATAHUB_API_ROOT
from command.utils import api_retry, extract_error_message, parse_response
def transport_data(source_rt_id, source_type, sink_rt_id, sink_type):
"""
    Transport (migrate) data between storages
    :param source_rt_id: source result table
    :param source_type: storage type of the source table
    :param sink_rt_id: target result table
    :param sink_type: storage type of the target table
    :return: transport task id
"""
url = "%s/databus/tasks/transport/" % DATAHUB_API_ROOT
data = json.dumps(
{SOURCE_RT_ID: source_rt_id, SOURCE_TYPE: source_type, SINK_RT_ID: sink_rt_id, SINK_TYPE: sink_type}
)
res = requests.post(url=url, headers=HEADERS, data=data)
return parse_response(res, "迁移数据失败")
def get_transport_status(transport_id):
"""
    Get the status of a data transport task
    :param transport_id: transport task id
    :return: transport status
"""
url = "{}/databus/tasks/transport/{}/".format(DATAHUB_API_ROOT, transport_id)
res = requests.get(url=url)
if res.status_code == HTTP_STATUS_OK and res.json().get(RESULT):
return True, res.json()
else:
return False, "获取数据迁移任务状态失败,迁移id:{};失败原因:{}".format(transport_id, extract_error_message(res))
def get_transport_result(transport_id):
"""
    Get the final result of a data transport task
    :param transport_id: transport task id
    :return: transport execution result
"""
while True:
time.sleep(TRANSPORT_WAIT_INTERVAL)
status_flag, status_result = get_transport_status(transport_id)
if not status_flag:
retry_flag, status_result = api_retry(get_transport_status, transport_id)
if not retry_flag:
return status_result
status = status_result[DATA][STATUS]
if status == SUCCESS:
return "创建成功"
elif status == STOPPED:
return "迁移任务已被人工停止,任务id:%s" % transport_id
else:
continue
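# Illustrative flow (the table ids, storage types and the task-id key in the
# response payload are assumptions for this sketch, not taken from the module):
#   task = transport_data("591_demo_rt", "hdfs", "591_demo_rt", "ignite")
#   print(get_transport_result(task["task_id"]))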
def retrieve_cluster_info(cluster_type, cluster_name):
"""
    Retrieve information about the specified cluster
    :param cluster_type: cluster type
    :param cluster_name: cluster name
    :return: cluster info
"""
url = "{}/storekit/clusters/{}/{}/".format(DATAHUB_API_ROOT, cluster_type, cluster_name)
res = requests.get(url=url)
if res.status_code == HTTP_STATUS_OK:
res_json = res.json()
if res_json[RESULT]:
return res_json[DATA]
else:
message = (
"save操作失败:集群不存在,请检查集群名是否正确"
if res_json[CODE] == CLUSTER_OBJECT_NOT_FOUND
else "获取集群信息失败:%s" % res_json[MESSAGE]
)
raise ExecuteException(message)
else:
raise ApiRequestException("storekit接口异常: %s" % extract_error_message(res))
def get_hdfs_conf(result_table_id):
"""
    Get the hdfs storage configuration
    :param result_table_id: result table id
    :return: storage type and table name
"""
url = "{}/storekit/hdfs/{}/hdfs_conf/".format(DATAHUB_API_ROOT, result_table_id)
response = requests.get(url=url)
return parse_response(response, "获取结果表%s关联的hdfs存储配置失败,失败原因" % result_table_id)
def get_file_config(raw_data_id, file_name):
"""
    Get the file configuration
    :param raw_data_id: raw data id
    :param file_name: file name
    :return: file configuration
"""
url = "{}/access/collector/upload/{}/get_hdfs_info/?file_name={}".format(DATAHUB_API_ROOT, raw_data_id, file_name)
response = requests.get(url=url)
return parse_response(response, "获取文件配置失败,数据源id:{},文件名:{},失败原因".format(raw_data_id, file_name))
def destroy_storage(storage, result_table_id):
"""
    Delete the storage for a result table
    :param storage: storage type
    :param result_table_id: result table id
"""
url = "{}/storekit/{}/{}/".format(DATAHUB_API_ROOT, storage, result_table_id)
response = requests.delete(url=url, headers=HEADERS)
return parse_response(response, "删除存储失败")
def destroy_rt_storage_relation(result_table_id, storage):
"""
    Delete the association between a result table and a storage
    :param result_table_id: result table id
    :param storage: storage type
"""
url = "{}/storekit/result_tables/{}/{}/".format(DATAHUB_API_ROOT, result_table_id, storage)
response = requests.delete(url=url, headers=HEADERS)
return parse_response(response, "结果表存储关系删除失败")
| 31.744444 | 118 | 0.687434 | 742 | 5,714 | 5.075472 | 0.319407 | 0.010621 | 0.034519 | 0.031864 | 0.231014 | 0.151354 | 0.124801 | 0.089219 | 0.089219 | 0.061604 | 0 | 0.001546 | 0.207735 | 5,714 | 179 | 119 | 31.921788 | 0.830351 | 0.336367 | 0 | 0.119048 | 0 | 0 | 0.111732 | 0.086592 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.083333 | 0 | 0.309524 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8968f1c206c8bb5c3456179cb46db720666878ac | 2,247 | py | Python | examples/train_filters.py | shakenes/unsupervised-drl | bb44aa87411e5dde3aa0d049fd721d6fb9da0b7e | [
"MIT"
] | 1 | 2021-04-23T08:36:31.000Z | 2021-04-23T08:36:31.000Z | examples/train_filters.py | shakenes/unsupervised-drl | bb44aa87411e5dde3aa0d049fd721d6fb9da0b7e | [
"MIT"
] | null | null | null | examples/train_filters.py | shakenes/unsupervised-drl | bb44aa87411e5dde3aa0d049fd721d6fb9da0b7e | [
"MIT"
] | null | null | null | from __future__ import print_function
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model
from keras.layers import Dense, Flatten, Conv2D, Permute, Input
import keras.backend as K
from datetime import datetime
import os
# directory for saving the model
save_dir = os.path.join(os.getcwd(), 'saved_models')
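# Note: strftime() without format codes returns the literal string, so this
# model name is effectively constant (no timestamp is embedded).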
model_name = datetime.now().strftime("ILSVRC-CNN3.h5")
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
batch_size = 128
img_size = (100, 120)
train_generator = train_datagen.flow_from_directory(
'/imagenet_mini/train',
target_size=img_size,
batch_size=batch_size,
color_mode='grayscale',
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
'/imagenet_mini/val',
target_size=img_size,
batch_size=batch_size,
color_mode='grayscale',
class_mode='categorical')
input_shape = img_size + (1,)
input_tensor = Input(shape=input_shape)
input_permuted = Permute((1, 2, 3))(input_tensor)
t = Conv2D(32, (7, 7), strides=(4, 4), activation='relu', name="conv2D_1")(input_permuted)
t = Conv2D(32, (5, 5), strides=(2, 2), activation='relu', name="conv2D_2")(t)
t = Flatten()(t)
out = Dense(6, activation='softmax')(t) # number of classes
model = Model(inputs=input_tensor, outputs=out)
print(model.summary())
# initiate optimizer
opt = keras.optimizers.Adam()
# Let's train the model
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
model.fit_generator(
train_generator,
steps_per_epoch=73439/batch_size,
epochs=50,
validation_data=validation_generator,
validation_steps=18374/batch_size)
# Save model and weights
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)
| 27.740741 | 99 | 0.721851 | 307 | 2,247 | 5.068404 | 0.429967 | 0.040488 | 0.033419 | 0.042416 | 0.186375 | 0.140103 | 0.09383 | 0.09383 | 0.09383 | 0.09383 | 0 | 0.031898 | 0.162884 | 2,247 | 80 | 100 | 28.0875 | 0.795322 | 0.049844 | 0 | 0.137931 | 0 | 0 | 0.090781 | 0.011289 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.155172 | 0 | 0.155172 | 0.051724 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8969033413c810fd4ac389308247704b3968aeba | 2,346 | py | Python | core/infer.py | lvboodvl/smart_sync | be295c0db4cd0b7f3e9ff6f33889c24fa0d1eecd | [
"MIT"
] | 3 | 2020-07-17T13:08:23.000Z | 2021-12-23T08:41:30.000Z | core/infer.py | lvboodvl/smart_sync | be295c0db4cd0b7f3e9ff6f33889c24fa0d1eecd | [
"MIT"
] | null | null | null | core/infer.py | lvboodvl/smart_sync | be295c0db4cd0b7f3e9ff6f33889c24fa0d1eecd | [
"MIT"
] | null | null | null | #coding=utf-8
'''
infer module
'''
import sys
caffe_path = '../caffe/python/'
#caffe_path = '/root/caffe/python/'
sys.path.insert(0, caffe_path)
import caffe
caffe.set_device(0)
caffe.set_mode_gpu()
from caffe.proto import caffe_pb2
from google.protobuf import text_format
import numpy as np
#import cv2
'''
prepare caffemodel proto labelmap etc.
'''
root_googlenet = '../model/'
deploy_googlenet = root_googlenet + 'deploy-googlenet.prototxt'
#labels_filename = root_googlenet + 'labels.txt'
caffe_model_googlenet = root_googlenet + 'googlenet.caffemodel'
googlenet = caffe.Net(deploy_googlenet, caffe_model_googlenet, caffe.TEST)
# labels = np.loadtxt(labels_filename, str, delimiter='\t')
root_alexnet = root_googlenet
#deploy_alexnet = root_alexnet + 'deploy-alex.prototxt'
labels_filename = root_alexnet + 'labels.txt'
#caffe_model_alexnet = root_alexnet + 'snapshot_iter_992.caffemodel'
#alexnet = caffe.Net(deploy_alexnet, caffe_model_alexnet, caffe.TEST)
'''
define the inference function (only GoogLeNet is active; the AlexNet variant is commented out)
the outputs are the class label, the top score and the full probability vector
'''
def infer_img(googlenet, url):
transformer = caffe.io.Transformer({'data': googlenet.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_raw_scale('data', 255)
transformer.set_channel_swap('data', (2,1,0))
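    # caffe.io.load_image() returns H x W x C RGB floats in [0, 1]; the transformer
    # reorders axes to C x H x W, rescales to [0, 255] and swaps RGB -> BGR to
    # match the preprocessing the pretrained Caffe model expects.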
labels = np.loadtxt(labels_filename, str, delimiter='\t')
image = caffe.io.load_image(url)
googlenet.blobs['data'].data[...] = transformer.preprocess('data', image)
googlenet.forward()
prob_googlenet = googlenet.blobs['softmax'].data[0].flatten()
order_googlenet = prob_googlenet.argsort()[-1]
score_googlenet = np.max(prob_googlenet)
labels_googlenet = labels[order_googlenet]
return labels_googlenet, score_googlenet, prob_googlenet
if __name__ == '__main__':
url = root_googlenet + 'a.jpg'
labels_googlenet, score_googlenet, prob_googlenet = infer_img(googlenet, url)
print(url, labels_googlenet, score_googlenet, prob_googlenet)
| 35.014925 | 84 | 0.746377 | 302 | 2,346 | 5.546358 | 0.317881 | 0.069851 | 0.065672 | 0.039403 | 0.377313 | 0.377313 | 0.30209 | 0.24597 | 0.195821 | 0.195821 | 0 | 0.010194 | 0.12191 | 2,346 | 66 | 85 | 35.545455 | 0.802913 | 0.283887 | 0 | 0 | 0 | 0 | 0.087015 | 0.016734 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0.151515 | 0 | 0.212121 | 0.030303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
896a54ac83d7220e88e21b226080aa932a25b015 | 44,852 | py | Python | pycam/pycam/Plugins/OpenGLWindow.py | pschou/py-sdf | 0a269ed155d026e29429d76666fb63c95d2b4b2c | [
"MIT"
] | null | null | null | pycam/pycam/Plugins/OpenGLWindow.py | pschou/py-sdf | 0a269ed155d026e29429d76666fb63c95d2b4b2c | [
"MIT"
] | null | null | null | pycam/pycam/Plugins/OpenGLWindow.py | pschou/py-sdf | 0a269ed155d026e29429d76666fb63c95d2b4b2c | [
"MIT"
] | null | null | null | """
Copyright 2011 Lars Kruse <devel@sumpfralle.de>
This file is part of PyCAM.
PyCAM is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
PyCAM is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with PyCAM. If not, see <http://www.gnu.org/licenses/>.
"""
import math
from pycam.Geometry import number, sqrt
from pycam.Geometry.PointUtils import pcross, pmul, pnormalized
import pycam.Geometry.Matrix as Matrix
import pycam.Plugins
# The length of the distance vector does not matter - it will be normalized and
# multiplied later anyway.
VIEWS = {
"reset": {"distance": (-1.0, -1.0, 1.0), "center": (0.0, 0.0, 0.0),
"up": (0.0, 0.0, 1.0), "znear": 0.01, "zfar": 10000.0, "fovy": 30.0},
"top": {"distance": (0.0, 0.0, 1.0), "center": (0.0, 0.0, 0.0),
"up": (0.0, 1.0, 0.0), "znear": 0.01, "zfar": 10000.0, "fovy": 30.0},
"bottom": {"distance": (0.0, 0.0, -1.0), "center": (0.0, 0.0, 0.0),
"up": (0.0, 1.0, 0.0), "znear": 0.01, "zfar": 10000.0, "fovy": 30.0},
"left": {"distance": (-1.0, 0.0, 0.0), "center": (0.0, 0.0, 0.0),
"up": (0.0, 0.0, 1.0), "znear": 0.01, "zfar": 10000.0, "fovy": 30.0},
"right": {"distance": (1.0, 0.0, 0.0), "center": (0.0, 0.0, 0.0),
"up": (0.0, 0.0, 1.0), "znear": 0.01, "zfar": 10000.0, "fovy": 30.0},
"front": {"distance": (0.0, -1.0, 0.0), "center": (0.0, 0.0, 0.0),
"up": (0.0, 0.0, 1.0), "znear": 0.01, "zfar": 10000.0, "fovy": 30.0},
"back": {"distance": (0.0, 1.0, 0.0), "center": (0.0, 0.0, 0.0),
"up": (0.0, 0.0, 1.0), "znear": 0.01, "zfar": 10000.0, "fovy": 30.0},
}
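# A view is applied with a single call such as self.rotate_view(view=VIEWS["top"]);
# see the view buttons wired up in setup() and the key bindings in key_handler().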
class OpenGLWindow(pycam.Plugins.PluginBase):
UI_FILE = "opengl.ui"
CATEGORIES = ["Visualization", "OpenGL"]
def setup(self):
if not self._GL:
self.log.error("Failed to initialize the interactive 3D model view.\nPlease verify "
"that all requirements (especially the Python package for 'OpenGL' - "
"e.g. 'python3-opengl') are installed.")
return False
# test support for GLArea (since GTK v3.16)
try:
self._gtk.GLArea
except AttributeError:
self.log.error("Failed to initialize the interactive 3D model view probably due to an "
"outdated version of GTK (required: v3.16).")
return False
if self.gui:
# buttons for rotating, moving and zooming the model view window
self.BUTTON_ROTATE = self._gdk.ModifierType.BUTTON1_MASK
self.BUTTON_MOVE = self._gdk.ModifierType.BUTTON2_MASK
self.BUTTON_ZOOM = self._gdk.ModifierType.BUTTON3_MASK
self.BUTTON_RIGHT = 3
self.context_menu = self._gtk.Menu()
self.window = self.gui.get_object("OpenGLWindow")
self.window.insert_action_group(self.core.get("gtk_action_group_prefix"),
self.core.get("gtk_action_group"))
drag_n_drop_func = self.core.get("configure-drag-drop-func")
if drag_n_drop_func:
drag_n_drop_func(self.window)
self.initialized = False
self.is_visible = False
self._last_view = VIEWS["reset"]
self._position = [200, 200]
box = self.gui.get_object("OpenGLPrefTab")
self.core.register_ui("preferences", "OpenGL", box, 40)
self._gtk_handlers = []
# options
# TODO: move the default value somewhere else
for name, objname, default in (("view_light", "OpenGLLight", True),
("view_shadow", "OpenGLShadow", True),
("view_polygon", "OpenGLPolygon", True),
("view_perspective", "OpenGLPerspective", True),
("opengl_cache_enable", "OpenGLCache", True)):
obj = self.gui.get_object(objname)
self.core.add_item(name, obj.get_active, obj.set_active)
obj.set_active(default)
self._gtk_handlers.append((obj, "toggled", self.glsetup))
self._gtk_handlers.append((obj, "toggled", "visual-item-updated"))
# frames per second
skip_obj = self.gui.get_object("DrillProgressFrameSkipControl")
self.core.add_item("tool_progress_max_fps", skip_obj.get_value, skip_obj.set_value)
# info bar above the model view
detail_box = self.gui.get_object("InfoBox")
def clear_window():
for child in detail_box.get_children():
detail_box.remove(child)
def add_widget_to_window(item, name):
if len(detail_box.get_children()) > 0:
sep = self._gtk.HSeparator()
detail_box.pack_start(sep, fill=True, expand=True, padding=0)
sep.show()
detail_box.pack_start(item, fill=True, expand=True, padding=0)
item.show()
self.core.register_ui_section("opengl_window", add_widget_to_window, clear_window)
self.core.register_ui("opengl_window", "Views", self.gui.get_object("ViewControls"),
weight=0)
# color box
color_frame = self.gui.get_object("ColorPrefTab")
color_frame.unparent()
self._color_settings = {}
self.core.register_ui("preferences", "Colors", color_frame, 30)
self.core.set("register_color", self.register_color_setting)
self.core.set("unregister_color", self.unregister_color_setting)
# TODO: move "material" to simulation viewer
for name, label, weight in (("color_background", "Background", 10),
("color_material", "Material", 80)):
self.core.get("register_color")(name, label, weight)
# display items
items_frame = self.gui.get_object("DisplayItemsPrefTab")
items_frame.unparent()
self._display_items = {}
self.core.register_ui("preferences", "Display Items", items_frame, 20)
self.core.set("register_display_item", self.register_display_item)
self.core.set("unregister_display_item", self.unregister_display_item)
# visual and general settings
# TODO: should directions be here?
self.core.get("register_display_item")("show_directions", "Show Directions", 80)
# toggle window state
toggle_3d = self.gui.get_object("Toggle3DView")
self._gtk_handlers.append((toggle_3d, "toggled", self.toggle_3d_view))
self.register_gtk_accelerator("opengl", toggle_3d, "<Control><Shift>v",
"ToggleOpenGLView")
self.core.register_ui("view_menu", "ViewOpenGL", toggle_3d, -20)
self.mouse = {"start_pos": None, "button": None, "event_timestamp": 0,
"last_timestamp": 0, "pressed_pos": None, "pressed_timestamp": 0,
"pressed_button": None}
self.window.connect("delete-event", self.destroy)
self.window.set_default_size(560, 400)
for obj_name, view in (("ResetView", "reset"),
("LeftView", "left"),
("RightView", "right"),
("FrontView", "front"),
("BackView", "back"),
("TopView", "top"),
("BottomView", "bottom")):
self._gtk_handlers.append((self.gui.get_object(obj_name), "clicked",
self.rotate_view, VIEWS[view]))
# key binding
self._gtk_handlers.append((self.window, "key-press-event", self.key_handler))
# OpenGL stuff
self.area = self._gtk.GLArea(auto_render=False, has_alpha=True, has_depth_buffer=True)
self.area.show()
# first run; might also be important when doing other fancy
# called when a part of the screen is uncovered
self._gtk_handlers.append((self.area, 'render', self.paint))
# resize window
self._gtk_handlers.append((self.area, "resize", self._resize_window))
# catch mouse events
self.area.set_events((self._gdk.InputSource.MOUSE
| self._gdk.EventMask.POINTER_MOTION_MASK
| self._gdk.EventMask.BUTTON_PRESS_MASK
| self._gdk.EventMask.BUTTON_RELEASE_MASK
| self._gdk.EventMask.SCROLL_MASK))
self._gtk_handlers.extend((
(self.area, "button-press-event", self.mouse_press_handler),
(self.area, "motion-notify-event", self.mouse_handler),
(self.area, "button-release-event", self.context_menu_handler),
(self.area, "scroll-event", self.scroll_handler)))
self.gui.get_object("OpenGLBox").pack_end(self.area, fill=True, expand=True, padding=0)
def get_area_allocation(self=self):
allocation = self.area.get_allocation()
return allocation.width, allocation.height
self.camera = Camera(self.core, get_area_allocation, self._GL, self._GLU)
self._event_handlers = (("visual-item-updated", self.update_view),
("visualization-state-changed", self._update_widgets),
("model-list-changed", self._restore_latest_view))
# handlers
self.register_gtk_handlers(self._gtk_handlers)
self.register_event_handlers(self._event_handlers)
# show the window - the handlers _must_ be registered before "show"
self.area.show()
toggle_3d.set_active(True)
# refresh display
self.core.emit_event("visual-item-updated")
def get_get_set_functions(name):
get_func = lambda: self.core.get(name)
set_func = lambda value: self.core.set(name, value)
return get_func, set_func
for name in ("view_light", "view_shadow", "view_polygon", "view_perspective",
"opengl_cache_enable", "tool_progress_max_fps"):
self.register_state_item("settings/view/opengl/%s" % name,
*get_get_set_functions(name))
return True
def teardown(self):
if self.gui:
self.core.unregister_ui("preferences", self.gui.get_object("OpenGLPrefTab"))
toggle_3d = self.gui.get_object("Toggle3DView")
# hide the window
toggle_3d.set_active(False)
self.core.unregister_ui("view_menu", toggle_3d)
self.unregister_gtk_accelerator("opengl", toggle_3d)
for name in ("color_background", "color_tool", "color_material"):
self.core.get("unregister_color")(name)
for name in ("show_tool", "show_directions"):
self.core.get("unregister_display_item")(name)
self.unregister_gtk_handlers(self._gtk_handlers)
self.unregister_event_handlers(self._event_handlers)
# the area will be created during setup again
self.gui.get_object("OpenGLBox").remove(self.area)
self.area = None
self.core.unregister_ui("preferences", self.gui.get_object("DisplayItemsPrefTab"))
self.core.unregister_ui("preferences", self.gui.get_object("OpenGLPrefTab"))
self.core.unregister_ui("opengl_window", self.gui.get_object("ViewControls"))
self.core.unregister_ui("preferences", self.gui.get_object("ColorPrefTab"))
self.core.unregister_ui_section("opengl_window")
self.clear_state_items()
def update_view(self, widget=None, data=None):
if self.is_visible:
self.trigger_rendering()
def _update_widgets(self):
self.unregister_gtk_handlers(self._gtk_handlers)
self.gui.get_object("Toggle3DView").set_active(self.is_visible)
self.register_gtk_handlers(self._gtk_handlers)
def register_display_item(self, name, label, weight=100):
if name in self._display_items:
self.log.debug("Tried to register display item '%s' twice", name)
return
# create an action and three derived items:
# - a checkbox for the preferences window
# - a tool item for the drop-down list in the 3D window
# - a menu item for the context menu in the 3D window
# the string value will be interpreted by the callback as the most recently updated widget
action_name = ".".join((self.core.get("gtk_action_group_prefix"), name))
action = self._gio.SimpleAction.new_stateful(name, self._glib.VariantType.new("s"),
self._glib.Variant.new_string("0"))
widgets = []
for index, item in enumerate((self._gtk.CheckButton(),
self._gtk.ToggleToolButton(),
self._gtk.CheckMenuItem())):
item.insert_action_group(self.core.get("gtk_action_group_prefix"),
self.core.get("gtk_action_group"))
item.set_label(label)
item.set_action_target_value(self._glib.Variant.new_string(str(index)))
item.set_action_name(action_name)
# The "target value" (the stringified widget index) is used by GTK for guessing the
# sensitivity of a control. This approach differs from ours - we ignore it.
item.set_sensitive(True)
widgets.append(item)
self._display_items[name] = {"name": name, "label": label, "weight": weight,
"widgets": widgets, "action": action}
def synchronize_widgets(action, widget_index_variant, widgets=widgets, is_blocked=[],
name=name):
""" copy the state of the most recently changed ("activated") control to the others
widget_index_variant: GLib.Variant containing the stringified index of the changed
widget (0, 1 or 2) - based on the widgets list
widgets: the three associated widgets
is_blocked: we need to avoid pseudo-recursive calls of this function after every
programmatic change of a control
"""
widget_index = int(widget_index_variant.get_string())
if not is_blocked:
is_blocked.append(True)
current_widget = widgets[widget_index]
current_value = current_widget.get_active()
for index, widget in enumerate(widgets):
if widget_index != index:
if hasattr(widget, "set_active"):
widget.set_active(current_value)
else:
widget.set_state(current_value)
widget.set_sensitive(True)
self.core.set(name, current_value)
self.core.emit_event("visual-item-updated")
is_blocked.clear()
action.connect("activate", synchronize_widgets)
self.core.get("gtk_action_group").add_action(action)
self.core.add_item(name, set_func=widgets[0].set_active)
# add this item to the state handler
self.register_state_item("settings/view/items/%s" % name,
widgets[0].get_active, widgets[0].set_active)
# synchronize the widgets
synchronize_widgets(None, self._glib.Variant.new_string("0"))
self._rebuild_display_items()
def unregister_display_item(self, name):
if name not in self._display_items:
self.log.info("Failed to unregister unknown display item: %s", name)
return
first_widget = self._display_items[name]["widgets"][0]
self.unregister_state_item("settings/view/items/%s" % name,
first_widget.get_active, first_widget.set_active)
action_name = ".".join((self.core.get("gtk_action_group_prefix"), name))
self.core.get("gtk_action_group").remove(action_name)
del self._display_items[name]
self._rebuild_display_items()
def _rebuild_display_items(self):
pref_box = self.gui.get_object("PreferencesVisibleItemsBox")
toolbar = self.gui.get_object("ViewItems")
for parent in pref_box, self.context_menu, toolbar:
for child in parent.get_children():
parent.remove(child)
items = list(self._display_items.values())
items.sort(key=lambda item: item["weight"])
for item in items:
pref_box.pack_start(item["widgets"][0], expand=True, fill=True, padding=0)
toolbar.add(item["widgets"][1])
self.context_menu.add(item["widgets"][2])
for parent in (pref_box, toolbar, self.context_menu):
parent.show_all()
parent.insert_action_group(self.core.get("gtk_action_group_prefix"),
self.core.get("gtk_action_group"))
def register_color_setting(self, name, label, weight=100):
if name in self._color_settings:
self.log.debug("Tried to register color '%s' twice", name)
return
def get_color_wrapper(obj):
def gtk_color_to_dict():
color_components = obj.get_rgba()
return {"red": color_components.red,
"green": color_components.green,
"blue": color_components.blue,
"alpha": color_components.alpha}
return gtk_color_to_dict
def set_color_wrapper(obj):
def set_gtk_color_by_dict(color):
obj.set_rgba(
self._gdk.RGBA(color["red"], color["green"], color["blue"], color["alpha"]))
return set_gtk_color_by_dict
widget = self._gtk.ColorButton()
widget.set_use_alpha(True)
wrappers = (get_color_wrapper(widget), set_color_wrapper(widget))
self._color_settings[name] = {"name": name, "label": label, "weight": weight,
"widget": widget, "wrappers": wrappers}
widget.connect("color-set", lambda widget: self.core.emit_event("visual-item-updated"))
self.core.add_item(name, *wrappers)
self.register_state_item("settings/view/colors/%s" % name, *wrappers)
self._rebuild_color_settings()
def unregister_color_setting(self, name):
if name not in self._color_settings:
self.log.debug("Failed to unregister unknown color item: %s", name)
return
wrappers = self._color_settings[name]["wrappers"]
self.unregister_state_item("settings/view/colors/%s" % name, *wrappers)
del self._color_settings[name]
self._rebuild_color_settings()
def _rebuild_color_settings(self):
color_table = self.gui.get_object("ColorTable")
for child in color_table.get_children():
color_table.remove(child)
items = list(self._color_settings.values())
items.sort(key=lambda item: item["weight"])
for index, item in enumerate(items):
label = self._gtk.Label("%s:" % item["label"])
label.set_alignment(0.0, 0.5)
color_table.attach(label, 0, index, 1, 1)
color_table.attach(item["widget"], 1, index, 1, 1)
color_table.show_all()
def toggle_3d_view(self, widget=None, value=None):
current_state = self.is_visible
if value is None:
new_state = not current_state
else:
new_state = value
if new_state == current_state:
return
elif new_state:
if self.is_visible:
self.reset_view()
else:
# the window is just hidden
self.show()
else:
self.hide()
def show(self):
self.is_visible = True
self.window.move(*self._position)
self.window.show()
def hide(self):
self.is_visible = False
self._position = self.window.get_position()
self.window.hide()
def key_handler(self, widget=None, event=None):
if event is None:
return
try:
keyval = getattr(event, "keyval")
get_state = getattr(event, "get_state")
key_string = getattr(event, "string")
except AttributeError:
return
# define arrow keys and "vi"-like navigation keys
move_keys_dict = {
self._gdk.KEY_Left: (1, 0),
self._gdk.KEY_Down: (0, -1),
self._gdk.KEY_Up: (0, 1),
self._gdk.KEY_Right: (-1, 0),
ord("h"): (1, 0),
ord("j"): (0, -1),
ord("k"): (0, 1),
ord("l"): (-1, 0),
ord("H"): (1, 0),
ord("J"): (0, -1),
ord("K"): (0, 1),
ord("L"): (-1, 0),
}
if key_string and (key_string in '1234567'):
self._last_view = None
names = ["reset", "front", "back", "left", "right", "top", "bottom"]
index = '1234567'.index(key_string)
self.rotate_view(view=VIEWS[names[index]])
self.trigger_rendering()
elif key_string in ('i', 'm', 's', 'p'):
if key_string == 'i':
key = "view_light"
elif key_string == 'm':
key = "view_polygon"
elif key_string == 's':
key = "view_shadow"
elif key_string == 'p':
key = "view_perspective"
else:
key = None
# toggle setting
self.core.set(key, not self.core.get(key))
# re-init gl settings
self.glsetup()
self.trigger_rendering()
elif key_string in ("+", "-"):
self._last_view = None
if key_string == "+":
self.camera.zoom_in()
else:
self.camera.zoom_out()
self.trigger_rendering()
elif keyval in move_keys_dict.keys():
self._last_view = None
move_x, move_y = move_keys_dict[keyval]
if get_state() & self._gdk.ModifierType.SHIFT_MASK:
# shift key pressed -> rotation
base = 0
factor = 10
self.camera.rotate_camera_by_screen(base, base, base - factor * move_x,
base - factor * move_y)
else:
# no shift key -> moving
self.camera.shift_view(x_dist=move_x, y_dist=move_y)
self.trigger_rendering()
else:
self.log.debug("Unhandled key pressed: %s (%s)", keyval, get_state())
def glsetup(self, widget=None):
GL = self._GL
GLUT = self._GLUT
if not GLUT.glutInit:
self.log.error("Failed to execute 'GLUT.glutInit': probably you need to install the"
"C library providing GLUT functions (e.g. 'freeglut3-dev' or "
"'freeglut-devel'). OpenGL visualization is disabled.")
return
GLUT.glutInit()
GLUT.glutInitDisplayMode(GLUT.GLUT_RGBA | GLUT.GLUT_DOUBLE | GLUT.GLUT_DEPTH
| GLUT.GLUT_MULTISAMPLE | GLUT.GLUT_ALPHA | GLUT.GLUT_ACCUM)
if self.core.get("view_shadow"):
# TODO: implement shadowing (or remove the setting)
pass
# use vertex normals for smooth rendering
GL.glShadeModel(GL.GL_SMOOTH)
bg_col = self.core.get("color_background")
GL.glClearColor(bg_col["red"], bg_col["green"], bg_col["blue"], 1.0)
GL.glHint(GL.GL_PERSPECTIVE_CORRECTION_HINT, GL.GL_NICEST)
GL.glMatrixMode(GL.GL_MODELVIEW)
# enable blending/transparency (alpha) for colors
GL.glEnable(GL.GL_BLEND)
# see http://wiki.delphigl.com/index.php/glBlendFunc
GL.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA)
GL.glEnable(GL.GL_DEPTH_TEST)
# "less" is OpenGL's default
GL.glDepthFunc(GL.GL_LESS)
# slightly improved performance: ignore all faces inside the objects
GL.glCullFace(GL.GL_BACK)
GL.glEnable(GL.GL_CULL_FACE)
# enable antialiasing
GL.glEnable(GL.GL_LINE_SMOOTH)
# GL.glEnable(GL.GL_POLYGON_SMOOTH)
GL.glHint(GL.GL_LINE_SMOOTH_HINT, GL.GL_NICEST)
GL.glHint(GL.GL_POLYGON_SMOOTH_HINT, GL.GL_NICEST)
# TODO: move to toolpath drawing
GL.glLineWidth(0.8)
# GL.glEnable(GL.GL_MULTISAMPLE_ARB)
GL.glEnable(GL.GL_POLYGON_OFFSET_FILL)
GL.glPolygonOffset(1.0, 1.0)
# ambient and diffuse material lighting is defined in OpenGLViewModel
GL.glMaterial(GL.GL_FRONT_AND_BACK, GL.GL_SPECULAR, (1.0, 1.0, 1.0, 1.0))
GL.glMaterial(GL.GL_FRONT_AND_BACK, GL.GL_SHININESS, 100.0)
if self.core.get("view_polygon"):
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL)
else:
GL.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE)
GL.glMatrixMode(GL.GL_MODELVIEW)
GL.glLoadIdentity()
GL.glMatrixMode(GL.GL_PROJECTION)
GL.glLoadIdentity()
GL.glViewport(0, 0, self.area.get_allocation().width, self.area.get_allocation().height)
# lighting
GL.glLightModeli(GL.GL_LIGHT_MODEL_LOCAL_VIEWER, GL.GL_TRUE)
# Light #1
# setup the ambient light
GL.glLightfv(GL.GL_LIGHT0, GL.GL_AMBIENT, (0.3, 0.3, 0.3, 1.0))
# setup the diffuse light
GL.glLightfv(GL.GL_LIGHT0, GL.GL_DIFFUSE, (0.8, 0.8, 0.8, 1.0))
# setup the specular light
GL.glLightfv(GL.GL_LIGHT0, GL.GL_SPECULAR, (0.1, 0.1, 0.1, 1.0))
# enable Light #1
GL.glEnable(GL.GL_LIGHT0)
# Light #2
# spotlight with small light cone (like a desk lamp)
# GL.glLightfv(GL.GL_LIGHT1, GL.GL_SPOT_CUTOFF, 10.0)
# ... directed at the object
v = self.camera.view
GL.glLightfv(GL.GL_LIGHT1, GL.GL_SPOT_DIRECTION,
(v["center"][0], v["center"][1], v["center"][2]))
GL.glLightfv(GL.GL_LIGHT1, GL.GL_AMBIENT, (0.3, 0.3, 0.3, 1.0))
# and dark outside of the light cone
# GL.glLightfv(GL.GL_LIGHT1, GL.GL_SPOT_EXPONENT, 100.0)
# GL.glLightf(GL.GL_LIGHT1, GL.GL_QUADRATIC_ATTENUATION, 0.5)
# setup the diffuse light
GL.glLightfv(GL.GL_LIGHT1, GL.GL_DIFFUSE, (0.9, 0.9, 0.9, 1.0))
# setup the specular light
GL.glLightfv(GL.GL_LIGHT1, GL.GL_SPECULAR, (1.0, 1.0, 1.0, 1.0))
# enable Light #2
GL.glEnable(GL.GL_LIGHT1)
if self.core.get("view_light"):
GL.glEnable(GL.GL_LIGHTING)
else:
GL.glDisable(GL.GL_LIGHTING)
GL.glEnable(GL.GL_NORMALIZE)
GL.glColorMaterial(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT_AND_DIFFUSE)
GL.glColorMaterial(GL.GL_FRONT_AND_BACK, GL.GL_SPECULAR)
# GL.glColorMaterial(GL.GL_FRONT_AND_BACK, GL.GL_EMISSION)
GL.glEnable(GL.GL_COLOR_MATERIAL)
def destroy(self, widget=None, data=None):
self.hide()
self.core.emit_event("visualization-state-changed")
# don't close the window
return True
def _restore_latest_view(self):
""" this function is called whenever the model list changes
The function will restore the latest selected view - including
automatic distance adjustment. The latest view is always reset to
None, if any manual change (e.g. panning via mouse or keyboard)
occurred.
"""
if self._last_view:
self.rotate_view(view=self._last_view)
def context_menu_handler(self, widget, event):
if ((event.button == self.mouse["pressed_button"] == self.BUTTON_RIGHT)
and self.context_menu
and (event.get_time() - self.mouse["pressed_timestamp"] < 300)
and (abs(event.x - self.mouse["pressed_pos"][0]) < 3)
and (abs(event.y - self.mouse["pressed_pos"][1]) < 3)):
# A quick press/release cycle with the right mouse button
# -> open the context menu.
self.context_menu.popup(None, None, None, None, event.button, int(event.get_time()))
def scroll_handler(self, widget, event):
""" handle events of the scroll wheel
shift key: horizontal pan instead of vertical
control key: zoom
"""
remember_last_view = self._last_view
self._last_view = None
try:
modifier_state = event.get_state()
except AttributeError:
# this should probably never happen
return
control_pressed = modifier_state & self._gdk.ModifierType.CONTROL_MASK
shift_pressed = modifier_state & self._gdk.ModifierType.SHIFT_MASK
if ((event.direction == self._gdk.ScrollDirection.RIGHT)
or ((event.direction == self._gdk.ScrollDirection.UP) and shift_pressed)):
# horizontal move right
self.camera.shift_view(x_dist=-1)
elif ((event.direction == self._gdk.ScrollDirection.LEFT)
or ((event.direction == self._gdk.ScrollDirection.DOWN) and shift_pressed)):
# horizontal move left
self.camera.shift_view(x_dist=1)
elif (event.direction == self._gdk.ScrollDirection.UP) and control_pressed:
# zoom in
self.camera.zoom_in()
elif event.direction == self._gdk.ScrollDirection.UP:
# vertical move up
self.camera.shift_view(y_dist=1)
elif (event.direction == self._gdk.ScrollDirection.DOWN) and control_pressed:
# zoom out
self.camera.zoom_out()
elif event.direction == self._gdk.ScrollDirection.DOWN:
# vertical move down
self.camera.shift_view(y_dist=-1)
else:
# no interesting event -> no re-painting
self._last_view = remember_last_view
return
self.trigger_rendering()
def mouse_press_handler(self, widget, event):
self.mouse["pressed_timestamp"] = event.get_time()
self.mouse["pressed_button"] = event.button
self.mouse["pressed_pos"] = event.x, event.y
self.mouse_handler(widget, event)
def mouse_handler(self, widget, event):
x, y, state = event.x, event.y, event.state
if self.mouse["button"] is None:
if ((state & self.BUTTON_ZOOM)
or (state & self.BUTTON_ROTATE)
or (state & self.BUTTON_MOVE)):
self.mouse["button"] = state
self.mouse["start_pos"] = [x, y]
else:
# Don't try to create more than 25 frames per second (enough for
# a decent visualization).
if event.get_time() - self.mouse["event_timestamp"] < 40:
return
elif state & self.mouse["button"] & self.BUTTON_ZOOM:
self._last_view = None
# the start button is still active: update the view
start_x, start_y = self.mouse["start_pos"]
self.mouse["start_pos"] = [x, y]
# Moving the mouse from the lower left towards the top right corner
# scales up.
scale = 1 - 0.01 * ((x - start_x) + (start_y - y))
# do some sanity checks, scale no more than
# 1:100 on any given click+drag
if scale < 0.01:
scale = 0.01
elif scale > 100:
scale = 100
self.camera.scale_distance(scale)
self.trigger_rendering()
elif ((state & self.mouse["button"] & self.BUTTON_MOVE)
or (state & self.mouse["button"] & self.BUTTON_ROTATE)):
self._last_view = None
start_x, start_y = self.mouse["start_pos"]
self.mouse["start_pos"] = [x, y]
if (state & self.BUTTON_MOVE):
# Determine the biggest dimension (x/y/z); the screen's center is
# moved relative to this value.
low, high = [None, None, None], [None, None, None]
self.core.call_chain("get_draw_dimension", low, high)
# use zero as fallback for undefined axes (None)
max_dim = max((v_high or 0) - (v_low or 0) for v_high, v_low in zip(high, low))
if max_dim == 0:
# some arbitrary value if there are no visible objects
max_dim = 10
self.camera.move_camera_by_screen(x - start_x, y - start_y, max_dim)
else:
# BUTTON_ROTATE
# update the camera position according to the mouse movement
self.camera.rotate_camera_by_screen(start_x, start_y, x, y)
self.trigger_rendering()
else:
# button was released
self.mouse["button"] = None
self.trigger_rendering()
self.mouse["event_timestamp"] = event.get_time()
def rotate_view(self, widget=None, view=None):
if view:
self._last_view = view.copy()
self.camera.set_view(view)
self.trigger_rendering()
def reset_view(self):
self.rotate_view(view=None)
self.trigger_rendering()
def _resize_window(self, widget, width, height, data=None):
self.trigger_rendering()
def paint(self, widget=None, data=None):
if not self.initialized:
self.glsetup()
self.initialized = True
# draw the items
GL = self._GL
prev_mode = GL.glGetIntegerv(GL.GL_MATRIX_MODE)
GL.glMatrixMode(GL.GL_MODELVIEW)
# clear the background with the configured color
bg_col = self.core.get("color_background")
GL.glClearColor(bg_col["red"], bg_col["green"], bg_col["blue"], 1.0)
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
self.camera.position_camera()
# adjust Light #2
v = self.camera.view
lightpos = (v["center"][0] + v["distance"][0],
v["center"][1] + v["distance"][1],
v["center"][2] + v["distance"][2])
GL.glLightfv(GL.GL_LIGHT1, GL.GL_POSITION, lightpos)
# trigger the visualization of all items
self.core.emit_event("visualize-items")
GL.glMatrixMode(prev_mode)
GL.glFlush()
# Return "True" in order to propagate the "render" signal.
return True
def trigger_rendering(self):
self.area.queue_render()
class Camera:
def __init__(self, core, get_dim_func, import_gl, import_glu):
self._GL = import_gl
self._GLU = import_glu
self.view = None
self.core = core
self._get_dim_func = get_dim_func
self.set_view(self.view)
def set_view(self, view=None):
if view is None:
self.view = VIEWS["reset"].copy()
else:
self.view = view.copy()
self.center_view()
self.auto_adjust_distance()
def _get_low_high_dims(self):
low, high = [None, None, None], [None, None, None]
self.core.call_chain("get_draw_dimension", low, high)
return low, high
def center_view(self):
center = []
low, high = self._get_low_high_dims()
if None in low or None in high:
center = [0, 0, 0]
else:
for index in range(3):
center.append((low[index] + high[index]) / 2)
self.view["center"] = center
def auto_adjust_distance(self):
v = self.view
# adjust the distance to get a view of the whole object
low_high = list(zip(*self._get_low_high_dims()))
if (None, None) in low_high:
return
max_dim = max([high - low for low, high in low_high])
distv = pnormalized((v["distance"][0], v["distance"][1], v["distance"][2]))
# The multiplier "1.25" is based on experiments. 1.414 (sqrt(2)) should
# be roughly sufficient for showing the diagonal of any model.
distv = pmul(distv, (max_dim * 1.25) / number(math.sin(v["fovy"] / 2)))
self.view["distance"] = distv
# Adjust the "far" distance for the camera to make sure, that huge
# models (e.g. x=1000) are still visible.
self.view["zfar"] = 100 * max_dim
def scale_distance(self, scale):
if scale != 0:
scale = number(scale)
dist = self.view["distance"]
self.view["distance"] = (scale * dist[0], scale * dist[1], scale * dist[2])
def get(self, key, default=None):
if (self.view is not None) and key in self.view:
return self.view[key]
else:
return default
def set(self, key, value):
self.view[key] = value
def move_camera_by_screen(self, x_move, y_move, max_model_shift):
""" move the camera according to a mouse movement
@type x_move: int
@value x_move: movement of the mouse along the x axis
@type y_move: int
@value y_move: movement of the mouse along the y axis
@type max_model_shift: float
@value max_model_shift: maximum shifting of the model view (e.g. for
x_move == screen width)
"""
factors_x, factors_y = self._get_axes_vectors()
width, height = self._get_screen_dimensions()
# relation of x/y movement to the respective screen dimension
win_x_rel = (-2 * x_move) / float(width) / math.sin(self.view["fovy"])
win_y_rel = (-2 * y_move) / float(height) / math.sin(self.view["fovy"])
# This scaling is purely empirical: it was tuned by trial and error
# to achieve a reasonable movement speed across all camera distances.
# Anyone with a better approach should just fix this.
distance_vector = self.get("distance")
distance = float(sqrt(sum([dim ** 2 for dim in distance_vector])))
win_x_rel *= math.cos(win_x_rel / distance) ** 20
win_y_rel *= math.cos(win_y_rel / distance) ** 20
# update the model position that should be centered on the screen
old_center = self.view["center"]
new_center = []
for i in range(3):
new_center.append(old_center[i]
+ max_model_shift * (number(win_x_rel) * factors_x[i]
+ number(win_y_rel) * factors_y[i]))
self.view["center"] = tuple(new_center)
def rotate_camera_by_screen(self, start_x, start_y, end_x, end_y):
factors_x, factors_y = self._get_axes_vectors()
width, height = self._get_screen_dimensions()
# calculate rotation factors - based on the distance to the center
# (between -1 and 1)
rot_x_factor = (2.0 * start_x) / width - 1
rot_y_factor = (2.0 * start_y) / height - 1
# calculate rotation angles (between -90 and +90 degrees)
xdiff = end_x - start_x
ydiff = end_y - start_y
# compensate inverse rotation left/right side (around x axis) and
# top/bottom (around y axis)
if rot_x_factor < 0:
ydiff = -ydiff
if rot_y_factor > 0:
xdiff = -xdiff
rot_x_angle = rot_x_factor * math.pi * ydiff / height
rot_y_angle = rot_y_factor * math.pi * xdiff / width
# rotate around the "up" vector with the y-axis rotation
original_distance = self.view["distance"]
original_up = self.view["up"]
y_rot_matrix = Matrix.get_rotation_matrix_axis_angle(factors_y, rot_y_angle)
new_distance = Matrix.multiply_vector_matrix(original_distance, y_rot_matrix)
new_up = Matrix.multiply_vector_matrix(original_up, y_rot_matrix)
# rotate around the cross vector with the x-axis rotation
x_rot_matrix = Matrix.get_rotation_matrix_axis_angle(factors_x, rot_x_angle)
new_distance = Matrix.multiply_vector_matrix(new_distance, x_rot_matrix)
new_up = Matrix.multiply_vector_matrix(new_up, x_rot_matrix)
self.view["distance"] = new_distance
self.view["up"] = new_up
def position_camera(self):
GL = self._GL
GLU = self._GLU
width, height = self._get_screen_dimensions()
prev_mode = GL.glGetIntegerv(GL.GL_MATRIX_MODE)
GL.glMatrixMode(GL.GL_PROJECTION)
GL.glLoadIdentity()
v = self.view
# position the light according to the current bounding box
light_pos = [0, 0, 0]
low, high = self._get_low_high_dims()
if None not in low and None not in high:
for index in range(3):
light_pos[index] = 2 * (high[index] - low[index])
GL.glLightfv(GL.GL_LIGHT0, GL.GL_POSITION, (light_pos[0], light_pos[1], light_pos[2], 0.0))
# position the camera
camera_position = (v["center"][0] + v["distance"][0],
v["center"][1] + v["distance"][1],
v["center"][2] + v["distance"][2])
# position a second light at camera position
GL.glLightfv(GL.GL_LIGHT1, GL.GL_POSITION, (camera_position[0], camera_position[1],
camera_position[2], 0.0))
if self.core.get("view_perspective"):
# perspective view
GLU.gluPerspective(v["fovy"], (0.0 + width) / height, v["znear"], v["zfar"])
else:
# parallel projection
# This distance calculation is completely based on trial-and-error.
distance = math.sqrt(sum([d ** 2 for d in v["distance"]]))
distance *= math.log(math.sqrt(width * height)) / math.log(10)
sin_factor = math.sin(v["fovy"] / 360.0 * math.pi) * distance
left = v["center"][0] - sin_factor
right = v["center"][0] + sin_factor
top = v["center"][1] + sin_factor
bottom = v["center"][1] - sin_factor
near = v["center"][2] - 2 * sin_factor
far = v["center"][2] + 2 * sin_factor
GL.glOrtho(left, right, bottom, top, near, far)
GLU.gluLookAt(camera_position[0], camera_position[1], camera_position[2],
v["center"][0], v["center"][1], v["center"][2],
v["up"][0], v["up"][1], v["up"][2])
GL.glMatrixMode(prev_mode)
def shift_view(self, x_dist=0, y_dist=0):
obj_dim = []
low, high = self._get_low_high_dims()
if None in low or None in high:
return
for index in range(3):
obj_dim.append(high[index] - low[index])
max_dim = max(obj_dim)
factor = 50
self.move_camera_by_screen(x_dist * factor, y_dist * factor, max_dim)
def zoom_in(self):
self.scale_distance(sqrt(0.5))
def zoom_out(self):
self.scale_distance(sqrt(2))
def _get_screen_dimensions(self):
return self._get_dim_func()
def _get_axes_vectors(self):
"""calculate the model vectors along the screen's x and y axes"""
# The "up" vector defines, in what proportion each axis of the model is
# in line with the screen's y axis.
v_up = self.view["up"]
factors_y = (number(v_up[0]), number(v_up[1]), number(v_up[2]))
# Calculate the proportion of each model axis according to the x axis of
# the screen.
distv = self.view["distance"]
distv = pnormalized((distv[0], distv[1], distv[2]))
factors_x = pnormalized(pcross(distv, (v_up[0], v_up[1], v_up[2])))
return (factors_x, factors_y)
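# Usage sketch (illustrative only - the plugin wires this up itself; the
# fixed 800x600 dimension callback and the GL/GLU module handles are
# assumptions for the example):
#   camera = Camera(core, lambda: (800, 600), GL, GLU)
#   camera.set_view(VIEWS["top"])
#   camera.zoom_in()
#   camera.position_camera()  # sets up the projection incl. the look-at transform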
| 47.262381 | 99 | 0.580175 | 5,664 | 44,852 | 4.396186 | 0.134004 | 0.006908 | 0.006627 | 0.005141 | 0.310321 | 0.226707 | 0.186867 | 0.140723 | 0.116345 | 0.085261 | 0 | 0.019829 | 0.307389 | 44,852 | 948 | 100 | 47.312236 | 0.781716 | 0.146482 | 0 | 0.214888 | 0 | 0 | 0.099362 | 0.012951 | 0 | 0 | 0 | 0.00211 | 0 | 1 | 0.071629 | false | 0.001404 | 0.011236 | 0.001404 | 0.127809 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
896a9f43a1ddcc85cd7e1204d61e68064fd6890e | 20,864 | py | Python | scripts/external_libs/scapy-2.4.5/scapy/contrib/scada/iec104/__init__.py | dariusgrassi/trex-core | 3b19ddcf67e33934f268b09d3364cd87275d48db | [
"Apache-2.0"
] | 250 | 2016-12-29T02:43:04.000Z | 2022-03-31T05:51:23.000Z | scripts/external_libs/scapy-2.4.5/scapy/contrib/scada/iec104/__init__.py | dariusgrassi/trex-core | 3b19ddcf67e33934f268b09d3364cd87275d48db | [
"Apache-2.0"
] | 2 | 2017-08-08T06:22:10.000Z | 2021-05-22T01:59:43.000Z | scripts/external_libs/scapy-2.4.5/scapy/contrib/scada/iec104/__init__.py | dariusgrassi/trex-core | 3b19ddcf67e33934f268b09d3364cd87275d48db | [
"Apache-2.0"
] | 86 | 2016-12-29T06:39:34.000Z | 2021-12-12T20:07:39.000Z | # This file is part of Scapy
# See http://www.secdev.org/projects/scapy for more information
# Copyright (C) Thomas Tannhaeuser <hecke@naberius.de>
# This program is published under a GPLv2 license
#
# scapy.contrib.description = IEC-60870-5-104 APCI / APDU layer definitions
# scapy.contrib.status = loads
"""
IEC 60870-5-104
~~~~~~~~~~~~~~~
:description:
This module provides the IEC 60870-5-104 (common short name: iec104)
layer, the information objects and related information element
definitions.
normative references:
- IEC 60870-5-4:1994 (atomic base types / data format)
- IEC 60870-5-101:2003 (information elements (sec. 7.2.6) and
ASDU definition (sec. 7.3))
- IEC 60870-5-104:2006 (information element TSC (sec. 8.8, p. 44))
:TODO:
- add allowed direction to IO attributes
(but this could be derived from the name easily <--> )
- information elements / objects need more testing
(e.g. on live traffic w comparison against tshark)
:NOTES:
- bit and octet numbering is used as in the related standards
(they usually start with index one instead of zero)
- some of the information objects are only valid for IEC 60870-5-101 -
so usually they should never appear on the network, as iec101 uses
serial connections. I added them in case decoding of those messages
is needed, e.g. when one implements an iec101<-->iec104 gateway or
hits a gateway that does not behave in a standard-conformant way
(e.g. by forwarding 101 messages to a 104 network)
"""
from scapy.contrib.scada.iec104.iec104_fields import * # noqa F403,F401
from scapy.contrib.scada.iec104.iec104_information_elements import * # noqa F403,F401
from scapy.contrib.scada.iec104.iec104_information_objects import * # noqa F403,F401
from scapy.compat import orb
from scapy.config import conf
from scapy.error import warning, Scapy_Exception
from scapy.fields import ByteField, BitField, ByteEnumField, PacketListField, \
BitEnumField, XByteField, FieldLenField, LEShortField, BitFieldLenField
from scapy.layers.inet import TCP
from scapy.packet import Raw, Packet, bind_layers
IEC_104_IANA_PORT = 2404
# direction - from the central station to the substation
IEC104_CONTROL_DIRECTION = 0
IEC104_CENTRAL_2_SUB_DIR = IEC104_CONTROL_DIRECTION
# direction - from the substation to the central station
IEC104_MONITOR_DIRECTION = 1
IEC104_SUB_2_CENTRAL_DIR = IEC104_MONITOR_DIRECTION
IEC104_DIRECTIONS = {
IEC104_MONITOR_DIRECTION: 'monitor direction (sub -> central)',
IEC104_CONTROL_DIRECTION: 'control direction (central -> sub)',
}
# COT - cause of transmission
IEC104_COT_UNDEFINED = 0
IEC104_COT_CYC = 1
IEC104_COT_BACK = 2
IEC104_COT_SPONT = 3
IEC104_COT_INIT = 4
IEC104_COT_REQ = 5
IEC104_COT_ACT = 6
IEC104_COT_ACTCON = 7
IEC104_COT_DEACT = 8
IEC104_COT_DEACTCON = 9
IEC104_COT_ACTTERM = 10
IEC104_COT_RETREM = 11
IEC104_COT_RETLOC = 12
IEC104_COT_FILE = 13
IEC104_COT_RESERVED_14 = 14
IEC104_COT_RESERVED_15 = 15
IEC104_COT_RESERVED_16 = 16
IEC104_COT_RESERVED_17 = 17
IEC104_COT_RESERVED_18 = 18
IEC104_COT_RESERVED_19 = 19
IEC104_COT_INROGEN = 20
IEC104_COT_INRO1 = 21
IEC104_COT_INRO2 = 22
IEC104_COT_INRO3 = 23
IEC104_COT_INRO4 = 24
IEC104_COT_INRO5 = 25
IEC104_COT_INRO6 = 26
IEC104_COT_INRO7 = 27
IEC104_COT_INRO8 = 28
IEC104_COT_INRO9 = 29
IEC104_COT_INRO10 = 30
IEC104_COT_INRO11 = 31
IEC104_COT_INRO12 = 32
IEC104_COT_INRO13 = 33
IEC104_COT_INRO14 = 34
IEC104_COT_INRO15 = 35
IEC104_COT_INRO16 = 36
IEC104_COT_REQCOGEN = 37
IEC104_COT_REQCO1 = 38
IEC104_COT_REQCO2 = 39
IEC104_COT_REQCO3 = 40
IEC104_COT_REQCO4 = 41
IEC104_COT_RESERVED_42 = 42
IEC104_COT_RESERVED_43 = 43
IEC104_COT_UNKNOWN_TYPE_CODE = 44
IEC104_COT_UNKNOWN_TRANSMIT_REASON = 45
IEC104_COT_UNKNOWN_COMMON_ADDRESS_OF_ASDU = 46
IEC104_COT_UNKNOWN_ADDRESS_OF_INFORMATION_OBJECT = 47
IEC104_COT_PRIVATE_48 = 48
IEC104_COT_PRIVATE_49 = 49
IEC104_COT_PRIVATE_50 = 50
IEC104_COT_PRIVATE_51 = 51
IEC104_COT_PRIVATE_52 = 52
IEC104_COT_PRIVATE_53 = 53
IEC104_COT_PRIVATE_54 = 54
IEC104_COT_PRIVATE_55 = 55
IEC104_COT_PRIVATE_56 = 56
IEC104_COT_PRIVATE_57 = 57
IEC104_COT_PRIVATE_58 = 58
IEC104_COT_PRIVATE_59 = 59
IEC104_COT_PRIVATE_60 = 60
IEC104_COT_PRIVATE_61 = 61
IEC104_COT_PRIVATE_62 = 62
IEC104_COT_PRIVATE_63 = 63
CAUSE_OF_TRANSMISSIONS = {
IEC104_COT_UNDEFINED: 'undefined',
IEC104_COT_CYC: 'cyclic (per/cyc)',
IEC104_COT_BACK: 'background (back)',
IEC104_COT_SPONT: 'spontaneous (spont)',
IEC104_COT_INIT: 'initialized (init)',
IEC104_COT_REQ: 'request (req)',
IEC104_COT_ACT: 'activation (act)',
IEC104_COT_ACTCON: 'activation confirmed (actcon)',
IEC104_COT_DEACT: 'activation canceled (deact)',
IEC104_COT_DEACTCON: 'activation cancellation confirmed (deactcon)',
IEC104_COT_ACTTERM: 'activation finished (actterm)',
IEC104_COT_RETREM: 'feedback caused by remote command (retrem)',
IEC104_COT_RETLOC: 'feedback caused by local command (retloc)',
IEC104_COT_FILE: 'file transfer (file)',
IEC104_COT_RESERVED_14: 'reserved_14',
IEC104_COT_RESERVED_15: 'reserved_15',
IEC104_COT_RESERVED_16: 'reserved_16',
IEC104_COT_RESERVED_17: 'reserved_17',
IEC104_COT_RESERVED_18: 'reserved_18',
IEC104_COT_RESERVED_19: 'reserved_19',
IEC104_COT_INROGEN: 'queried by station (inrogen)',
IEC104_COT_INRO1: 'queried by query to group 1 (inro1)',
IEC104_COT_INRO2: 'queried by query to group 2 (inro2)',
IEC104_COT_INRO3: 'queried by query to group 3 (inro3)',
IEC104_COT_INRO4: 'queried by query to group 4 (inro4)',
IEC104_COT_INRO5: 'queried by query to group 5 (inro5)',
IEC104_COT_INRO6: 'queried by query to group 6 (inro6)',
IEC104_COT_INRO7: 'queried by query to group 7 (inro7)',
IEC104_COT_INRO8: 'queried by query to group 8 (inro8)',
IEC104_COT_INRO9: 'queried by query to group 9 (inro9)',
IEC104_COT_INRO10: 'queried by query to group 10 (inro10)',
IEC104_COT_INRO11: 'queried by query to group 11 (inro11)',
IEC104_COT_INRO12: 'queried by query to group 12 (inro12)',
IEC104_COT_INRO13: 'queried by query to group 13 (inro13)',
IEC104_COT_INRO14: 'queried by query to group 14 (inro14)',
IEC104_COT_INRO15: 'queried by query to group 15 (inro15)',
IEC104_COT_INRO16: 'queried by query to group 16 (inro16)',
IEC104_COT_REQCOGEN: 'queried by counter general interrogation (reqcogen)',
IEC104_COT_REQCO1: 'queried by query to counter group 1 (reqco1)',
IEC104_COT_REQCO2: 'queried by query to counter group 2 (reqco2)',
IEC104_COT_REQCO3: 'queried by query to counter group 3 (reqco3)',
IEC104_COT_REQCO4: 'queried by query to counter group 4 (reqco4)',
IEC104_COT_RESERVED_42: 'reserved_42',
IEC104_COT_RESERVED_43: 'reserved_43',
IEC104_COT_UNKNOWN_TYPE_CODE: 'unknown type code',
IEC104_COT_UNKNOWN_TRANSMIT_REASON: 'unknown transmit reason',
IEC104_COT_UNKNOWN_COMMON_ADDRESS_OF_ASDU:
'unknown common address of ASDU',
IEC104_COT_UNKNOWN_ADDRESS_OF_INFORMATION_OBJECT:
'unknown address of information object',
IEC104_COT_PRIVATE_48: 'private_48',
IEC104_COT_PRIVATE_49: 'private_49',
IEC104_COT_PRIVATE_50: 'private_50',
IEC104_COT_PRIVATE_51: 'private_51',
IEC104_COT_PRIVATE_52: 'private_52',
IEC104_COT_PRIVATE_53: 'private_53',
IEC104_COT_PRIVATE_54: 'private_54',
IEC104_COT_PRIVATE_55: 'private_55',
IEC104_COT_PRIVATE_56: 'private_56',
IEC104_COT_PRIVATE_57: 'private_57',
IEC104_COT_PRIVATE_58: 'private_58',
IEC104_COT_PRIVATE_59: 'private_59',
IEC104_COT_PRIVATE_60: 'private_60',
IEC104_COT_PRIVATE_61: 'private_61',
IEC104_COT_PRIVATE_62: 'private_62',
IEC104_COT_PRIVATE_63: 'private_63'
}
IEC104_APDU_TYPE_UNKNOWN = 0x00
IEC104_APDU_TYPE_I_SEQ_IOA = 0x01
IEC104_APDU_TYPE_I_SINGLE_IOA = 0x02
IEC104_APDU_TYPE_U = 0x03
IEC104_APDU_TYPE_S = 0x04
def _iec104_apci_type_from_packet(data):
"""
the type of the message is encoded in octet 1..4
              oct 1, bit 1   oct 1, bit 2   oct 3, bit 1
I Message          0             1|0             0
S Message          1              0              0
U Message          1              1              0
see EN 60870-5-104:2006, sec. 5 (p. 13, fig. 6,7,8)
"""
oct_1 = orb(data[2])
oct_3 = orb(data[4])
oct_1_bit_1 = bool(oct_1 & 1)
oct_1_bit_2 = bool(oct_1 & 2)
oct_3_bit_1 = bool(oct_3 & 1)
if oct_1_bit_1 is False and oct_3_bit_1 is False:
if len(data) < 8:
return IEC104_APDU_TYPE_UNKNOWN
is_seq_ioa = ((orb(data[7]) & 0x80) == 0x80)
if is_seq_ioa:
return IEC104_APDU_TYPE_I_SEQ_IOA
else:
return IEC104_APDU_TYPE_I_SINGLE_IOA
if oct_1_bit_1 and oct_1_bit_2 is False and oct_3_bit_1 is False:
return IEC104_APDU_TYPE_S
if oct_1_bit_1 and oct_1_bit_2 and oct_3_bit_1 is False:
return IEC104_APDU_TYPE_U
return IEC104_APDU_TYPE_UNKNOWN
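# Example: a STARTDT act U-frame consists of the six bytes 68 04 07 00 00 00,
# so _iec104_apci_type_from_packet(b'\x68\x04\x07\x00\x00\x00') returns
# IEC104_APDU_TYPE_U (oct 1 = 0x07 -> bits 1 and 2 set, oct 3 bit 1 = 0).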
class IEC104_APDU(Packet):
"""
basic Application Protocol Data Unit definition used by S/U/I messages
"""
def guess_payload_class(self, payload):
payload_len = len(payload)
if payload_len < 6:
return self.default_payload_class(payload)
if orb(payload[0]) != 0x68:
return self.default_payload_class(payload)
# the length field contains the number of bytes starting from the
# first control octet
apdu_length = 2 + orb(payload[1])
if payload_len < apdu_length:
warning(
'invalid len of APDU. given len: {} available len: {}'.format(
apdu_length, payload_len))
return self.default_payload_class(payload)
apdu_type = _iec104_apci_type_from_packet(payload)
return IEC104_APDU_CLASSES.get(apdu_type,
self.default_payload_class(payload))
@classmethod
def dispatch_hook(cls, _pkt=None, *args, **kargs):
"""
detect type of the message by checking packet data
:param _pkt: raw bytes of the packet layer data to be checked
:param args: unused
:param kargs: unused
:return: class of the detected message type
"""
if _iec104_is_i_apdu_seq_ioa(_pkt):
return IEC104_I_Message_SeqIOA
if _iec104_is_i_apdu_single_ioa(_pkt):
return IEC104_I_Message_SingleIOA
if _iec104_is_u_apdu(_pkt):
return IEC104_U_Message
if _iec104_is_s_apdu(_pkt):
return IEC104_S_Message
return Raw
class IEC104_S_Message(IEC104_APDU):
"""
message used for ack of received I-messages
"""
name = 'IEC-104 S APDU'
fields_desc = [
XByteField('start', 0x68),
ByteField("apdu_length", 4),
ByteField('octet_1', 0x01),
ByteField('octet_2', 0),
IEC104SequenceNumber('rx_seq_num', 0),
]
class IEC104_U_Message(IEC104_APDU):
"""
message used for connection tx control (start/stop) and monitoring (test)
"""
name = 'IEC-104 U APDU'
fields_desc = [
XByteField('start', 0x68),
ByteField("apdu_length", 4),
BitField('testfr_con', 0, 1),
BitField('testfr_act', 0, 1),
BitField('stopdt_con', 0, 1),
BitField('stopdt_act', 0, 1),
BitField('startdt_con', 0, 1),
BitField('startdt_act', 0, 1),
BitField('octet_1_1_2', 3, 2),
ByteField('octet_2', 0),
ByteField('octet_3', 0),
ByteField('octet_4', 0)
]
def _i_msg_io_dispatcher_sequence(pkt, next_layer_data):
"""
get the type id and return the matching ASDU instance
"""
next_layer_class_type = IEC104_IO_CLASSES.get(pkt.type_id, conf.raw_layer)
return next_layer_class_type(next_layer_data)
def _i_msg_io_dispatcher_single(pkt, next_layer_data):
"""
get the type id and return the matching ASDU instance
(information object address + regular ASDU information object fields)
"""
next_layer_class_type = IEC104_IO_WITH_IOA_CLASSES.get(pkt.type_id,
conf.raw_layer)
return next_layer_class_type(next_layer_data)
class IEC104ASDUPacketListField(PacketListField):
"""
used to add a list of information objects to an I-message
"""
def m2i(self, pkt, m):
"""
add calling layer instance to the cls()-signature
:param pkt: calling layer instance
:param m: raw data forming the next layer
:return: instance of the class representing the next layer
"""
return self.cls(pkt, m)
class IEC104_I_Message_StructureException(Scapy_Exception):
"""
Exception raised if payload is not of type Information Object
"""
pass
class IEC104_I_Message(IEC104_APDU):
"""
message used for transmitting data (APDU - Application Protocol Data Unit)
APDU: MAGIC + APCI + ASDU
MAGIC: 0x68
APCI : Control Information (rx/tx seq/ack numbers)
ASDU : Application Service Data Unit - information object related data
see EN 60870-5-104:2006, sec. 5 (p. 12)
"""
name = 'IEC-104 I APDU'
IEC_104_MAGIC = 0x68 # dec -> 104
SQ_FLAG_SINGLE = 0
SQ_FLAG_SEQUENCE = 1
SQ_FLAGS = {
SQ_FLAG_SINGLE: 'single',
SQ_FLAG_SEQUENCE: 'sequence'
}
TEST_DISABLED = 0
TEST_ENABLED = 1
TEST_FLAGS = {
TEST_DISABLED: 'disabled',
TEST_ENABLED: 'enabled'
}
ACK_POSITIVE = 0
ACK_NEGATIVE = 1
ACK_FLAGS = {
ACK_POSITIVE: 'positive',
ACK_NEGATIVE: 'negative'
}
fields_desc = []
def __init__(self, _pkt=b"", post_transform=None, _internal=0,
_underlayer=None, **fields):
super(IEC104_I_Message, self).__init__(_pkt=_pkt,
post_transform=post_transform,
_internal=_internal,
_underlayer=_underlayer,
**fields)
if 'io' in fields and fields['io']:
self._information_object_update(fields['io'])
def _information_object_update(self, io_instances):
"""
set the type_id in the ASDU header based on the given information
object (io) and check for valid structure
:param io_instances: information object
"""
if not isinstance(io_instances, list):
io_instances = [io_instances]
first_io = io_instances[0]
first_io_class = first_io.__class__
if not issubclass(first_io_class, IEC104_IO_Packet):
raise IEC104_I_Message_StructureException(
'information object payload must be a subclass of '
'IEC104_IO_Packet')
self.type_id = first_io.iec104_io_type_id()
# ensure all io elements within the ASDU share the same class type
for io_inst in io_instances[1:]:
if io_inst.__class__ != first_io_class:
raise IEC104_I_Message_StructureException(
'each information object within the ASDU must be of '
'the same class type (first io: {}, '
'current io: {})'.format(first_io_class._name,
io_inst._name))
class IEC104_I_Message_SeqIOA(IEC104_I_Message):
"""
all information objects share a base information object address field
sq = 1, see EN 60870-5-101:2003, sec. 7.2.2.1 (p. 33)
"""
name = 'IEC-104 I APDU (Seq IOA)'
fields_desc = [
# APCI
XByteField('start', IEC104_I_Message.IEC_104_MAGIC),
FieldLenField("apdu_length", None, fmt="!B", length_of='io',
adjust=lambda pkt, x: x + 13),
IEC104SequenceNumber('tx_seq_num', 0),
IEC104SequenceNumber('rx_seq_num', 0),
# ASDU
ByteEnumField('type_id', 0, IEC104_IO_NAMES),
BitEnumField('sq', IEC104_I_Message.SQ_FLAG_SEQUENCE, 1,
IEC104_I_Message.SQ_FLAGS),
BitFieldLenField('num_io', None, 7, count_of='io'),
BitEnumField('test', 0, 1, IEC104_I_Message.TEST_FLAGS),
BitEnumField('ack', 0, 1, IEC104_I_Message.ACK_FLAGS),
BitEnumField('cot', 0, 6, CAUSE_OF_TRANSMISSIONS),
ByteField('origin_address', 0),
LEShortField('common_asdu_address', 0),
LEThreeBytesField('information_object_address', 0),
IEC104ASDUPacketListField('io',
conf.raw_layer(),
_i_msg_io_dispatcher_sequence,
length_from=lambda pkt: pkt.apdu_length - 13)
]
def post_dissect(self, s):
if self.type_id == IEC104_IO_ID_C_RD_NA_1:
# IEC104_IO_ID_C_RD_NA_1 has no payload. we will add the layer
# manually to the stack right now. we do this num_io times
# as - even if it makes no sense - someone could decide
# to add more than one read command in a sequence...
setattr(self, 'io', [IEC104_IO_C_RD_NA_1()] * self.num_io)
return s
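# Construction sketch (illustrative; IEC104_IO_M_SP_NA_1 is assumed to be one
# of the information object classes star-imported above and to accept default
# field values):
#   msg = IEC104_I_Message_SeqIOA(io=[IEC104_IO_M_SP_NA_1()],
#                                 cot=IEC104_COT_SPONT)
# __init__ forwards the io list to _information_object_update(), which derives
# type_id from the first element and enforces a single io class per ASDU.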
class IEC104_I_Message_SingleIOA(IEC104_I_Message):
"""
every information object contains an individual information object
address field
sq = 0, see EN 60870-5-101:2003, sec. 7.2.2.1 (p. 33)
"""
name = 'IEC-104 I APDU (single IOA)'
fields_desc = [
# APCI
XByteField('start', IEC104_I_Message.IEC_104_MAGIC),
FieldLenField("apdu_length", None, fmt="!B", length_of='io',
adjust=lambda pkt, x: x + 10),
IEC104SequenceNumber('tx_seq_num', 0),
IEC104SequenceNumber('rx_seq_num', 0),
# ASDU
ByteEnumField('type_id', 0, IEC104_IO_NAMES),
BitEnumField('sq', IEC104_I_Message.SQ_FLAG_SINGLE, 1,
IEC104_I_Message.SQ_FLAGS),
BitFieldLenField('num_io', None, 7, count_of='io'),
BitEnumField('test', 0, 1, IEC104_I_Message.TEST_FLAGS),
BitEnumField('ack', 0, 1, IEC104_I_Message.ACK_FLAGS),
BitEnumField('cot', 0, 6, CAUSE_OF_TRANSMISSIONS),
ByteField('origin_address', 0),
LEShortField('common_asdu_address', 0),
IEC104ASDUPacketListField('io',
conf.raw_layer(),
_i_msg_io_dispatcher_single,
length_from=lambda pkt: pkt.apdu_length - 10)
]
IEC104_APDU_CLASSES = {
IEC104_APDU_TYPE_UNKNOWN: conf.raw_layer,
IEC104_APDU_TYPE_I_SEQ_IOA: IEC104_I_Message_SeqIOA,
IEC104_APDU_TYPE_I_SINGLE_IOA: IEC104_I_Message_SingleIOA,
IEC104_APDU_TYPE_U: IEC104_U_Message,
IEC104_APDU_TYPE_S: IEC104_S_Message
}
def _iec104_is_i_apdu_seq_ioa(payload):
len_payload = len(payload)
if len_payload < 6:
return False
if orb(payload[0]) != 0x68 or (
orb(payload[1]) + 2) > len_payload or len_payload < 8:
return False
return IEC104_APDU_TYPE_I_SEQ_IOA == _iec104_apci_type_from_packet(payload)
def _iec104_is_i_apdu_single_ioa(payload):
len_payload = len(payload)
if len_payload < 6:
return False
if orb(payload[0]) != 0x68 or (
orb(payload[1]) + 2) > len_payload or len_payload < 8:
return False
return IEC104_APDU_TYPE_I_SINGLE_IOA == _iec104_apci_type_from_packet(
payload)
def _iec104_is_u_apdu(payload):
if len(payload) < 6:
return False
if orb(payload[0]) != 0x68 or orb(payload[1]) != 4:
return False
return IEC104_APDU_TYPE_U == _iec104_apci_type_from_packet(payload)
def _iec104_is_s_apdu(payload):
if len(payload) < 6:
return False
if orb(payload[0]) != 0x68 or orb(payload[1]) != 4:
return False
return IEC104_APDU_TYPE_S == _iec104_apci_type_from_packet(payload)
def iec104_decode(payload):
"""
can be used to dissect payload of a TCP connection
:param payload: the application layer data (IEC104-APDU(s))
:return: iec104 (I/U/S) message instance, conf.raw_layer() if unknown
"""
if _iec104_is_i_apdu_seq_ioa(payload):
return IEC104_I_Message_SeqIOA(payload)
elif _iec104_is_i_apdu_single_ioa(payload):
return IEC104_I_Message_SingleIOA(payload)
elif _iec104_is_s_apdu(payload):
return IEC104_S_Message(payload)
elif _iec104_is_u_apdu(payload):
return IEC104_U_Message(payload)
else:
return conf.raw_layer(payload)
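# Usage sketch: feed raw TCP application data straight into the dissector, e.g.
#   iec104_decode(b'\x68\x04\x07\x00\x00\x00')  # -> IEC104_U_Message (STARTDT act)
#   iec104_decode(b'\x00')                      # -> conf.raw_layer() fallback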
bind_layers(TCP, IEC104_APDU, sport=IEC_104_IANA_PORT)
bind_layers(TCP, IEC104_APDU, dport=IEC_104_IANA_PORT)
| 32.651017 | 86 | 0.66852 | 2,885 | 20,864 | 4.501213 | 0.168458 | 0.088711 | 0.039427 | 0.024642 | 0.431619 | 0.279609 | 0.227322 | 0.190513 | 0.187433 | 0.178192 | 0 | 0.098162 | 0.249041 | 20,864 | 638 | 87 | 32.702194 | 0.730661 | 0.199387 | 0 | 0.179221 | 0 | 0 | 0.144313 | 0.001606 | 0 | 0 | 0.003954 | 0.001567 | 0 | 1 | 0.036364 | false | 0.002597 | 0.023377 | 0 | 0.223377 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
896c42df3aee1153cc88340c843b78ef48002567 | 367 | py | Python | DMGroup/ChangeDMGroupName.py | tungdo0602/Some-Discord-Collection | b14b9bf20261873c5ebf875607f305f5767bd874 | [
"MIT"
] | 4 | 2021-12-13T17:32:30.000Z | 2022-03-27T21:29:35.000Z | DMGroup/ChangeDMGroupName.py | tungdo0602/Some-Discord-Collection | b14b9bf20261873c5ebf875607f305f5767bd874 | [
"MIT"
] | 1 | 2021-11-28T07:03:00.000Z | 2021-11-28T07:03:00.000Z | DMGroup/ChangeDMGroupName.py | tungdo0602/Some-Discord-Collection | b14b9bf20261873c5ebf875607f305f5767bd874 | [
"MIT"
] | 1 | 2021-11-16T15:45:40.000Z | 2021-11-16T15:45:40.000Z | import requests, os
token = ""
DMGroup_id = ""
DMGroup_name = ""
group = requests.patch(f'https://discord.com/api/v9/channels/{DMGroup_id}', headers={"authorization": token}, json={"name": DMGroup_name})
if group.status_code == 200:
print("Successfully changed the group name!")
else:
print(f"Failed to change the group name! ERROR {group.status_code}") | 33.363636 | 139 | 0.705722 | 51 | 367 | 4.960784 | 0.627451 | 0.071146 | 0.118577 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012618 | 0.13624 | 367 | 11 | 140 | 33.363636 | 0.785489 | 0 | 0 | 0 | 0 | 0 | 0.444134 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
896ccdee096faa75f037665b2b0f87f93f06154a | 1,815 | py | Python | autogoal/contrib/streamlit/__init__.py | gmijenes/autogoal | 916b0eb4d1aa1a222d0ff1b0f6f202bf56458ef5 | [
"MIT"
] | null | null | null | autogoal/contrib/streamlit/__init__.py | gmijenes/autogoal | 916b0eb4d1aa1a222d0ff1b0f6f202bf56458ef5 | [
"MIT"
] | null | null | null | autogoal/contrib/streamlit/__init__.py | gmijenes/autogoal | 916b0eb4d1aa1a222d0ff1b0f6f202bf56458ef5 | [
"MIT"
] | null | null | null | try:
import streamlit as st
except ImportError:
print(
"(!) The code inside `autogoal.contrib.streamlit` requires `streamlit>=0.55`."
)
print("(!) Fix it by running `pip install autogoal[streamlit]`.")
raise
from autogoal.search import Logger
class StreamlitLogger(Logger):
def __init__(self):
self.evaluations = 0
self.current = 0
self.status = st.info("Waiting for evaluation start.")
self.progress = st.progress(0)
self.error_log = st.empty()
self.best_fn = 0
self.chart = st.line_chart([dict(current=0.0, best=0.0)])
self.current_pipeline = st.code("")
self.best_pipeline = None
def begin(self, evaluations, pop_size):
self.status.info(f"Starting evaluation for {evaluations} iterations.")
self.progress.progress(0)
self.evaluations = evaluations
def update_best(self, new_best, new_fn, previous_best, previous_fn):
self.best_fn = new_fn
self.best_pipeline = repr(new_best)
def sample_solution(self, solution):
self.current += 1
self.status.info(
f"""
[Best={self.best_fn:0.3}] 🕐 Iteration {self.current}/{self.evaluations}.
"""
)
self.progress.progress(self.current / self.evaluations)
self.current_pipeline.code(repr(solution))
def eval_solution(self, solution, fitness):
self.chart.add_rows([dict(current=fitness, best=self.best_fn)])
def end(self, best, best_fn):
self.status.success(
f"""
**Evaluation completed:** 👍 Best solution={best_fn:0.3}
"""
)
self.progress.progress(1.0)
self.current_pipeline.code(self.best_pipeline)
| 32.410714 | 87 | 0.6 | 218 | 1,815 | 4.87156 | 0.33945 | 0.060264 | 0.037665 | 0.020716 | 0.056497 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014537 | 0.27989 | 1,815 | 55 | 88 | 33 | 0.79648 | 0 | 0 | 0.086957 | 0 | 0.021739 | 0.221023 | 0.073864 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.065217 | 0 | 0.217391 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
896d186eab4c273d1fcc4a39b1e7daa5cc6508b0 | 3,179 | py | Python | lib/Shade/storage/Completer.py | maarons/Shade | 34a223f2121664df3fc0834b32e13a84797e1084 | [
"MIT"
] | null | null | null | lib/Shade/storage/Completer.py | maarons/Shade | 34a223f2121664df3fc0834b32e13a84797e1084 | [
"MIT"
] | null | null | null | lib/Shade/storage/Completer.py | maarons/Shade | 34a223f2121664df3fc0834b32e13a84797e1084 | [
"MIT"
] | null | null | null | # Copyright (c) 2011, 2012, 2013 Marek Sapota
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation
# files (the "Software"), to deal in the Software without
# restriction, including without limitation the rights to use,
# copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following
# conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE
import readline
# Readline interface is an undocumented mess - if this class does something
# really strange and apparently useless it is actually probably required. Edit
# with care.
class Completer():
def __init__(self, storage):
self.__storage = storage
self.__possible = []
def complete(self, text, state):
# Text passed to this function is useless, it only contains the last
# word in the line. For "abc def" it will only contain "def".
buf = readline.get_line_buffer()
# Generate the completion list on first request.
if state == 0:
self.__regenerate(text, buf)
if state >= len(self.__possible):
return None
return self.__possible[state]
def __regenerate(self, text, buf):
def add(cmd):
possible = map(
lambda n: '{0} {1}'.format(cmd, n),
device.names()
)
self.__possible.extend(possible)
self.__possible = ['list', 'update', 'exit']
for device in self.__storage.devices():
try:
if device.is_drive():
add('detach')
if device.is_partition():
if device.is_mounted():
add('umount')
add('unmount')
else:
add('mount')
if device.is_luks():
if device.is_open():
add('lock')
else:
add('unlock')
except Exception as e:
# Device disappeared, ignore it.
pass
self.__possible = filter(
lambda x: x.startswith(buf),
self.__possible,
)
# Compensate for ignoring the text given to us by readline.
to_discard = len(buf) - len(text)
self.__possible = list(map(
lambda x: x[to_discard:].strip(),
self.__possible,
))
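# Usage sketch (assumes a storage object exposing devices() as used above):
#   completer = Completer(storage)
#   readline.set_completer(completer.complete)
#   readline.parse_and_bind('tab: complete')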
| 38.768293 | 79 | 0.595156 | 382 | 3,179 | 4.850785 | 0.5 | 0.058284 | 0.026983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007016 | 0.327461 | 3,179 | 81 | 80 | 39.246914 | 0.859682 | 0.468701 | 0 | 0.086957 | 0 | 0 | 0.033173 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0.021739 | 0.021739 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
897282b1225402f34c02a3d173efb231094cfc04 | 1,062 | py | Python | ealgebra.py | LiDReSaR/algebra | ecf6a27439f3f7dcdb088bcb27c46122125fcd9d | [
"MIT"
] | 2 | 2020-01-13T19:57:42.000Z | 2020-01-14T18:42:11.000Z | ealgebra.py | LiDReSaR/algebra | ecf6a27439f3f7dcdb088bcb27c46122125fcd9d | [
"MIT"
] | null | null | null | ealgebra.py | LiDReSaR/algebra | ecf6a27439f3f7dcdb088bcb27c46122125fcd9d | [
"MIT"
] | null | null | null | __version__ = '0.0.4'
def phi(n: int) -> int:
"""Euler function
Parameters:
n (int): Number
Returns:
int: Result
"""
res, i = n, 2
while i * i <= n:
if n % i == 0:
while n % i == 0:
n //= i
res -= res // i
i += 1
if n > 1:
res -= res // n
return res
def binexp(x: int, n: int) -> int:
"""Binary exponentiation
Parameters:
x (int): Base
n (int): Exponent (power)
Returns:
int: Result
"""
res = 1
while n > 0:
if n & 1 > 0:
res *= x
x *= x
n >>= 1
return res
def gcd(x: int, y: int) -> int:
"""Greatest Common Divisor
Parameters:
x (int): Number
y (int): Number
Returns:
int: Result
"""
while y > 0:
x, y = y, x % y
return x
def lcm(x: int, y: int) -> int:
"""Least Common Multiplier
Parameters:
x (int): Number
y (int): Number
Returns:
int: Result
"""
return x // gcd(x, y) * y
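# Illustrative sanity checks (not part of the module API):
#   phi(10) == 4          # 1, 3, 7 and 9 are coprime to 10
#   binexp(2, 10) == 1024
#   gcd(12, 18) == 6
#   lcm(4, 6) == 12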
| 12.795181 | 34 | 0.431262 | 142 | 1,062 | 3.197183 | 0.246479 | 0.052863 | 0.140969 | 0.125551 | 0.306167 | 0.202643 | 0.202643 | 0.202643 | 0.202643 | 0.202643 | 0 | 0.023064 | 0.428437 | 1,062 | 82 | 35 | 12.95122 | 0.724876 | 0.323917 | 0 | 0.076923 | 0 | 0 | 0.008319 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89734318911678f29e1f240cbaf0439095d72eff | 1,016 | py | Python | old/bisenetv2/meters.py | khsily/BiSeNet | 7373cbb76f893c698dab8865306264fc2a3ca0a4 | [
"MIT"
] | 966 | 2018-12-13T12:11:18.000Z | 2022-03-31T14:13:55.000Z | old/bisenetv2/meters.py | khsily/BiSeNet | 7373cbb76f893c698dab8865306264fc2a3ca0a4 | [
"MIT"
] | 214 | 2019-01-25T10:06:24.000Z | 2022-03-22T01:55:28.000Z | old/bisenetv2/meters.py | khsily/BiSeNet | 7373cbb76f893c698dab8865306264fc2a3ca0a4 | [
"MIT"
] | 247 | 2019-03-04T11:39:06.000Z | 2022-03-30T05:45:56.000Z |
import time
import datetime
class TimeMeter(object):
def __init__(self, max_iter):
self.iter = 0
self.max_iter = max_iter
self.st = time.time()
self.global_st = self.st
self.curr = self.st
def update(self):
self.iter += 1
def get(self):
self.curr = time.time()
interv = self.curr - self.st
global_interv = self.curr - self.global_st
eta = int((self.max_iter-self.iter) * (global_interv / (self.iter+1)))
eta = str(datetime.timedelta(seconds=eta))
self.st = self.curr
return interv, eta
class AvgMeter(object):
def __init__(self, name):
self.name = name
self.seq = []
self.global_seq = []
def update(self, val):
self.seq.append(val)
self.global_seq.append(val)
def get(self):
avg = sum(self.seq) / len(self.seq)
global_avg = sum(self.global_seq) / len(self.global_seq)
self.seq = []
return avg, global_avg
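# Usage sketch: update once per step, read periodically; get() returns the
# interval average (since the last get()) plus a running global average:
#   meter = AvgMeter('loss')
#   meter.update(0.5); meter.update(0.3)
#   avg, global_avg = meter.get()  # both 0.4 here; "seq" is reset afterwards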
| 23.090909 | 78 | 0.574803 | 137 | 1,016 | 4.10219 | 0.240876 | 0.106762 | 0.092527 | 0.060498 | 0.067616 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004237 | 0.30315 | 1,016 | 43 | 79 | 23.627907 | 0.789548 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.0625 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89737894de02186c1f7503654873acb36fbc2267 | 6,556 | py | Python | QFT_ram_reader_writer_metafied.py | woodrush/QFT-devkit | 8a2789c89e526a593fb56edab508ed56a6321c20 | [
"MIT"
] | null | null | null | QFT_ram_reader_writer_metafied.py | woodrush/QFT-devkit | 8a2789c89e526a593fb56edab508ed56a6321c20 | [
"MIT"
] | null | null | null | QFT_ram_reader_writer_metafied.py | woodrush/QFT-devkit | 8a2789c89e526a593fb56edab508ed56a6321c20 | [
"MIT"
] | null | null | null | from glife import *
import golly as g
s1 = g.getstring("Enter stack size:", "233")
s2 = g.getstring("Enter stdin buffer starting address:", "290")
s3 = g.getstring("Enter stdout buffer starting address:", "790")
# calc.c
RAM_NEGATIVE_BUFFER_SIZE = int(s1)
QFTASM_RAMSTDIN_BUF_STARTPOSITION = int(s2) + RAM_NEGATIVE_BUFFER_SIZE
QFTASM_RAMSTDOUT_BUF_STARTPOSITION = int(s3) + RAM_NEGATIVE_BUFFER_SIZE
# p_init = (337, 239)
# p_init = (-65648469, -16320387)
delta_x = 16*2048
delta_y = 16*2048
write_locations = [
(0,0), (0,1), (1,0), (1,1),
(1751, 1751), (1752, 1751), (1751, 1752), (1752, 1752),
]
write_locations_inv = [
(399, 1846), (400, 1846),
(398, 1847), (400, 1847),
(399, 1848),
]
def getcell_by_index(i_x, i_y):
boat_displacement = (400, 1847)
return 1 - g.getcell(
p_init[0] + i_x * delta_x + boat_displacement[0],
p_init[1] + i_y * delta_y + boat_displacement[1])
def get_rambyte_by_addr_str(addr):
bytestring = "".join([str(getcell_by_index(i_x, addr)) for i_x in range(16)])
return bytestring
def get_rambyte_by_addr_int(addr):
return int(get_rambyte_by_addr_str(addr), 2)
def show_raw_ram_region(i_x0=0, i_y0=0, i_x1=15, i_y1=32, reverse=False):
def cell2chr(c):
d = {0:"_", 1:"*"}
if c in d.keys():
return d[c]
else:
return "?"
cells = ["".join([cell2chr(getcell_by_index(i_x, i_y)) for i_x in range(i_x0, i_x1+1)]) for i_y in range(i_y0, i_y1+1)]
cells = "\n".join(reversed(cells) if reverse else cells)
ret = g.note(cells)
def show_registers():
regnames = ["pc", "stdin", "stdout", "a", "b", "c", "d", "bp", "sp", "temp", "temp2"]
string = ""
for i_addr, k in enumerate(regnames):
string += "[{}] {}: {}\n".format(i_addr, regnames[i_addr], get_rambyte_by_addr_int(i_addr))
g.note(string)
def encode_stdin_string(python_stdin):
ret = []
python_stdin_int = [ord(c) for c in python_stdin]
if len(python_stdin_int) % 2 == 1:
python_stdin_int = python_stdin_int + [0]
for i_str, i in enumerate(python_stdin_int):
# ram[QFTASM_RAMSTDIN_BUF_STARTPOSITION - i_str][0] = ord(c)
# ram[QFTASM_RAMSTDIN_BUF_STARTPOSITION - i_str][1] += 1
if i_str % 2 == 0:
stdin_int = i
else:
stdin_int += i << 8
if i_str % 2 == 1 or i_str == len(python_stdin_int) - 1:
ret.append(stdin_int)
# ram[QFTASM_RAMSTDIN_BUF_STARTPOSITION + i_str//2][0] = stdin_int
# ram[QFTASM_RAMSTDIN_BUF_STARTPOSITION + i_str//2][1] += 1
return ret
def decode_stdin_buffer(stdin_buf):
ret = []
for b in stdin_buf:
n = b & 0b0000000011111111
if n == 0:
break
ret.append(n)
n = b >> 8
if n == 0:
break
ret.append(n)
return "".join([chr(i) for i in ret])
d_bit2state = {
0: 0,
1: 1,
}
def write_byte_at(addr, write_byte):
if addr < 11:
addr = addr
# elif addr > 32768:
# addr = 32768 - addr + 11
elif addr >= 1024 - RAM_NEGATIVE_BUFFER_SIZE:
addr = 1024 - addr + 10
elif addr >= 11:
addr = addr + RAM_NEGATIVE_BUFFER_SIZE
b_binary = "{:016b}".format(write_byte)
for i_bit, bit in enumerate(b_binary):
for x_offset in range(2):
x_offset *= 2048
for x_p, y_p in write_locations:
write_x = p_init[0] + i_bit * delta_x + x_offset + x_p
write_y = p_init[1] + addr * delta_y + y_p
write_value = int(bit)
g.setcell(write_x, write_y, write_value)
for x_p, y_p in write_locations_inv:
write_x = p_init[0] + i_bit * delta_x + x_offset + x_p
write_y = p_init[1] + addr * delta_y + y_p
write_value = 1 - int(bit)
g.setcell(write_x, write_y, write_value)
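# Address remap summary for write_byte_at(): registers 0..10 stay in place,
# addresses at the top of the 1024-word space fold into the negative buffer
# region (1024 - addr + 10), and everything else shifts up by
# RAM_NEGATIVE_BUFFER_SIZE.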
def write_ram(stdin_string):
stdin_bytes = encode_stdin_string(stdin_string)
# g.note("Raw stdin bytes:" + str(stdin_bytes))
for i_byte, b in enumerate(stdin_bytes):
write_byte_at(i_byte + QFTASM_RAMSTDIN_BUF_STARTPOSITION - RAM_NEGATIVE_BUFFER_SIZE, b)
def show_stdio():
d_state2bit = {
0: 0,
1: 1,
}
stdin_bitstr = []
for i_y in range(QFTASM_RAMSTDIN_BUF_STARTPOSITION, QFTASM_RAMSTDOUT_BUF_STARTPOSITION):
stdin_bitstr.append("".join([str(d_state2bit[getcell_by_index(i_x, i_y)]) for i_x in range(16)]))
stdin_bytes = [int(s,2) for s in stdin_bitstr]
stdin_str = decode_stdin_buffer(stdin_bytes)
g.show("stdin_str")
g.show(str(len(stdin_str)))
g.show(stdin_str)
# read back the stdout buffer (it grows downward, towards the stdin buffer)
stdout_bitstr = ["".join([str(d_state2bit[getcell_by_index(i_x, i_y)]) for i_x in range(16)])
for i_y in range(QFTASM_RAMSTDOUT_BUF_STARTPOSITION, QFTASM_RAMSTDIN_BUF_STARTPOSITION, -1)
]
stdout_bytes = [int(s,2) for s in stdout_bitstr]
stdout_bytes_2 = []
for c in stdout_bytes:
if c == 0:
break
stdout_bytes_2.append(chr(c))
stdout_str = "".join(stdout_bytes_2)
g.note("Stdin:\n" + stdin_str + "\n\nStdout:\n" + stdout_str)
s4 = g.getstring("""Enter the coordinates of the top pixel of the hive (the following pattern) at the top-left in the most top-left RAM cell:
(Note: These values change when a pattern with a different ROM size (i.e. a pattern with a different height) is metafied)
_*_
*_*
*_*
_*_""", "-65648599,-13895568")
t4 = tuple(map(int, s4.split(",")))
p_init = (t4[0] + 130, t4[1] + 13)
write_bytes_filepath = g.opendialog("Open CSV for the Initial RAM Values", "CSV files (*.csv)|*.csv")
if write_bytes_filepath:
with open(write_bytes_filepath, "rt") as f:
write_bytes = [map(int, line.split(",")) for line in f.readlines()]
g.show("Writing initial RAM bytes...")
for t in write_bytes:
write_byte_at(*t)
g.show("Done.")
g.note("Wrote {} initial RAM bytes.".format(len(write_bytes)))
else:
g.note("Skipped writing initial RAM bytes.")
show_raw_ram_region()
show_registers()
stdin_string_filepath = g.opendialog("Open the text file to write to the stdin buffer")
if stdin_string_filepath:
with open(stdin_string_filepath, "rt") as f:
stdin_string = f.read()
write_ram(stdin_string)
g.note("Wrote the following content from {} into the stdin buffer.\n----\n{}".format(stdin_string_filepath, stdin_string))
else:
g.note("Skipped writing the stdin buffer.")
show_stdio()
| 31.825243 | 141 | 0.623703 | 1,017 | 6,556 | 3.747296 | 0.187807 | 0.012595 | 0.035686 | 0.062976 | 0.25059 | 0.196536 | 0.16741 | 0.13461 | 0.12254 | 0.110732 | 0 | 0.057956 | 0.244661 | 6,556 | 205 | 142 | 31.980488 | 0.711632 | 0.061318 | 0 | 0.163399 | 0 | 0.013072 | 0.125081 | 0 | 0 | 0 | 0 | 0.004878 | 0 | 1 | 0.071895 | false | 0 | 0.013072 | 0.006536 | 0.130719 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8973de1b669b61e0aa3358d6ae11021869cf1108 | 844 | py | Python | app/meda_sync_search/models/equipment.py | DEV3L/meda-sync-search | c67feb2f2b54ba153dc50e9aba5058d4e7948c92 | [
"Beerware"
] | null | null | null | app/meda_sync_search/models/equipment.py | DEV3L/meda-sync-search | c67feb2f2b54ba153dc50e9aba5058d4e7948c92 | [
"Beerware"
] | null | null | null | app/meda_sync_search/models/equipment.py | DEV3L/meda-sync-search | c67feb2f2b54ba153dc50e9aba5058d4e7948c92 | [
"Beerware"
] | null | null | null | from app.meda_sync_search.models.model import Model
class Equipment(Model):
def __init__(self, *,
description='',
hcpcs='',
average_cost=0,
category='',
modifier=''):
super().__init__(description=description)
self.hcpcs = hcpcs
self.average_cost = average_cost
self.category = category
self.modifier = modifier
def __eq__(self, other):
is_equal = self.description == other.description \
and self.hcpcs == other.hcpcs \
and self.average_cost == other.average_cost \
and self.category == other.category \
and self.modifier == other.modifier
return is_equal
def __hash__(self):
return super().__hash__()
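# Caveat: __hash__ delegates to Model, so it only stays consistent with the
# field-wise __eq__ above if Model's hash covers the same fields (assumed
# here to at least include the shared description attribute).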
| 29.103448 | 64 | 0.545024 | 81 | 844 | 5.320988 | 0.345679 | 0.12761 | 0.069606 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001852 | 0.36019 | 844 | 28 | 65 | 30.142857 | 0.796296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0 | 0.045455 | 0.045455 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8974ddb734130878e03e5d77f48d49024373a773 | 1,649 | py | Python | src/tf/cluster/train.py | juanprietob/gan-brain | 5783514427e0f08bb06116bc3b09e38d13216483 | [
"Apache-2.0"
] | 1 | 2018-01-10T23:59:20.000Z | 2018-01-10T23:59:20.000Z | src/tf/cluster/train.py | juanprietob/gan-brain | 5783514427e0f08bb06116bc3b09e38d13216483 | [
"Apache-2.0"
] | null | null | null | src/tf/cluster/train.py | juanprietob/gan-brain | 5783514427e0f08bb06116bc3b09e38d13216483 | [
"Apache-2.0"
] | null | null | null | import tensorflow as tf
cluster = tf.train.ClusterSpec({
"worker": [
"localhost:2223",
],
"ps": [
"152.19.32.251:2222"
]})
server = tf.train.Server(cluster, job_name='worker', task_index=0)
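# Note: the "ps" host from the ClusterSpec must run its own process with
#   server = tf.train.Server(cluster, job_name='ps', task_index=0)
#   server.join()
# before this worker can place the variables below on /job:ps/task:0.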
with tf.device("/job:ps/task:0"):
image = tf.get_variable("images", shape=[5,5], dtype=tf.float32, initializer=tf.truncated_normal_initializer(mean=0,stddev=0.1), trainable=False)
labels = tf.get_variable("labels", shape=[5,5], dtype=tf.float32, initializer=tf.truncated_normal_initializer(mean=0,stddev=0.1), trainable=False)
w_matmul = tf.get_variable("w1", shape=[5,5], dtype=tf.float32, initializer=tf.truncated_normal_initializer(mean=0,stddev=0.1))
bias = tf.get_variable("b1", shape=[5], dtype=tf.float32, initializer=tf.truncated_normal_initializer(mean=0,stddev=0.1))
with tf.device("/job:worker/task:0"):
layer_1 = tf.nn.relu(tf.nn.bias_add(tf.matmul(image, w_matmul), bias))
logits = tf.nn.relu(layer_1)
# the cross-entropy op yields one value per row; average it into a scalar loss
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels, name='cross_entropy'))
global_step = tf.contrib.framework.get_or_create_global_step()
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3,
beta1=0.9)
train_op = optimizer.minimize(loss, global_step=global_step)
hooks=[tf.train.StopAtStepHook(last_step=1000000)]
with tf.train.MonitoredTrainingSession(master=server.target,
is_chief=True,
checkpoint_dir="~/work/data/IBIS/checkpoints/",
hooks=hooks) as sess:
for _ in range(10000):
sess.run(train_op) | 38.348837 | 147 | 0.671922 | 231 | 1,649 | 4.632035 | 0.398268 | 0.03271 | 0.048598 | 0.056075 | 0.293458 | 0.293458 | 0.293458 | 0.293458 | 0.293458 | 0.293458 | 0 | 0.051111 | 0.181322 | 1,649 | 43 | 148 | 38.348837 | 0.741481 | 0 | 0 | 0 | 0 | 0 | 0.082424 | 0.017576 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.034483 | 0 | 0.034483 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
897553789aacbc17dcfc8919505f6b267a51d7e1 | 5,780 | py | Python | aleph_message/tests/test_models.py | leirbag95/aleph-message | 3b942e761a253126fb0240a3bce342db0a7333d2 | [
"MIT"
] | null | null | null | aleph_message/tests/test_models.py | leirbag95/aleph-message | 3b942e761a253126fb0240a3bce342db0a7333d2 | [
"MIT"
] | null | null | null | aleph_message/tests/test_models.py | leirbag95/aleph-message | 3b942e761a253126fb0240a3bce342db0a7333d2 | [
"MIT"
] | null | null | null | import json
import os.path
from hashlib import sha256
from os import listdir
from os.path import join, isdir
from pprint import pprint
import pytest
import requests
from pydantic import ValidationError
from aleph_message.models import MessagesResponse, Message, ProgramMessage, ForgetMessage, \
PostContent
from aleph_message.tests.download_messages import MESSAGES_STORAGE_PATH
ALEPH_API_SERVER = "https://api2.aleph.im"
HASHES_TO_IGNORE = (
"2fe5470ebcc5b6168b778ca3baadfd1618dc3acdb0690478760d21ff24b03164",
"1c0ce828b272fd9929e1dd6f665a4f845110b72a6aba74daa84a17e89da3718c",
)
def test_message_response_aggregate():
path = "/api/v0/messages.json?hashes=9b21eb870d01bf64d23e1d4475e342c8f958fcd544adc37db07d8281da070b00&addresses=0xa1B3bb7d2332383D96b7796B908fB7f7F3c2Be10&msgType=AGGREGATE"
data_dict = requests.get(f"{ALEPH_API_SERVER}{path}").json()
response = MessagesResponse(**data_dict)
assert response
def test_message_response_post():
path = "/api/v0/messages.json?hashes=6e5d0c7dce83bfd4c5d113ef67fbc0411f66c9c0c75421d61ace3730b0d1dd0b&addresses=0xa1B3bb7d2332383D96b7796B908fB7f7F3c2Be10&msgType=POST"
data_dict = requests.get(f"{ALEPH_API_SERVER}{path}").json()
response = MessagesResponse(**data_dict)
assert response
def test_message_response_store():
path = "/api/v0/messages.json?hashes=53c9317457d2d3caa205748917bc116921f4e8313e830c1c05c6eb6e2d9d9305&addresses=0x231a2342b7918129De0b910411378E22379F69b8&msgType=STORE"
data_dict = requests.get(f"{ALEPH_API_SERVER}{path}").json()
response = MessagesResponse(**data_dict)
assert response
def test_messages_last_page():
path = "/api/v0/messages.json"
page = 1
response = requests.get(f"{ALEPH_API_SERVER}{path}?page={page}")
response.raise_for_status()
data_dict = response.json()
for message_dict in data_dict["messages"]:
if message_dict["item_hash"] in HASHES_TO_IGNORE:
continue
        message = Message(**message_dict)
        assert message
def test_post_content():
"""Test that a mistake in the validation of the POST content 'type' field is fixed.
Issue reported on 2021-10-21 on Telegram.
"""
custom_type = "arbitrary_type"
p1 = PostContent(
type=custom_type,
address="0x1",
content={"blah": "bar"},
time=1.,
)
assert p1.type == custom_type
with pytest.raises(ValueError):
PostContent(
type="amend",
address="0x1",
content={"blah": "bar"},
time=1.,
# 'ref' field is missing from an amend
)
# 'ref' field is present on an amend
PostContent(
type="amend",
address="0x1",
content={"blah": "bar"},
time=1.,
ref='0x123',
)
def test_message_machine():
path = os.path.abspath(os.path.join(__file__, "../messages/machine.json"))
with open(path) as fd:
message_raw = json.load(fd)
message_raw['item_hash'] = sha256(json.dumps(message_raw['content']).encode()).hexdigest()
message_raw['item_content'] = json.dumps(message_raw['content'])
message = ProgramMessage(**message_raw)
assert message
message2 = Message(**message_raw)
assert message == message2
assert hash(message.content)
def test_message_machine_named():
path = os.path.abspath(os.path.join(__file__, "../messages/machine_named.json"))
with open(path) as fd:
message_raw = json.load(fd)
message_raw['item_hash'] = sha256(json.dumps(message_raw['content']).encode()).hexdigest()
message_raw['item_content'] = json.dumps(message_raw['content'])
message = ProgramMessage(**message_raw)
assert message.content.metadata['version'] == '10.2'
def test_message_forget():
path = os.path.abspath(os.path.join(__file__, "../messages/forget.json"))
with open(path) as fd:
message_raw = json.load(fd)
message_raw['item_hash'] = sha256(json.dumps(message_raw['content']).encode()).hexdigest()
message_raw['item_content'] = json.dumps(message_raw['content'])
message = ForgetMessage(**message_raw)
assert message
message2 = Message(**message_raw)
assert message == message2
assert hash(message.content)
# A FORGET message may not be forgotten:
message_raw["forgotten_by"] = ['abcde']
with pytest.raises(ValueError) as e:
ForgetMessage(**message_raw)
assert e.value.args[0][0].exc.args == ("This type of message may not be forgotten", )
def test_message_forgotten_by():
path = os.path.abspath(os.path.join(__file__, "../messages/machine.json"))
with open(path) as fd:
message_raw = json.load(fd)
message_raw['item_hash'] = sha256(json.dumps(message_raw['content']).encode()).hexdigest()
message_raw['item_content'] = json.dumps(message_raw['content'])
# Test different values for field 'forgotten_by'
_ = ProgramMessage(**message_raw)
_ = ProgramMessage(**message_raw, forgotten_by=None)
_ = ProgramMessage(**message_raw, forgotten_by=['abcde'])
_ = ProgramMessage(**message_raw, forgotten_by=['abcde', 'fghij'])
@pytest.mark.skipif(not isdir(MESSAGES_STORAGE_PATH), reason="No file on disk to test")
def test_messages_from_disk():
for messages_page in listdir(MESSAGES_STORAGE_PATH):
with open(join(MESSAGES_STORAGE_PATH, messages_page)) as page_fd:
data_dict = json.load(page_fd)
for message_dict in data_dict["messages"]:
try:
message = Message(**message_dict)
assert message
except ValidationError as e:
pprint(message_dict)
print(e.json())
raise
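# Usage sketch (an assumption, not part of the original file): run the suite with
#   python -m pytest aleph_message/tests/test_models.py -q
# The first four tests hit https://api2.aleph.im and therefore need network access.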
| 33.410405 | 177 | 0.690311 | 671 | 5,780 | 5.731744 | 0.211624 | 0.080603 | 0.024961 | 0.039522 | 0.51222 | 0.478419 | 0.436557 | 0.404576 | 0.380135 | 0.369995 | 0 | 0.071719 | 0.191869 | 5,780 | 172 | 178 | 33.604651 | 0.751659 | 0.048616 | 0 | 0.47619 | 0 | 0 | 0.21774 | 0.153495 | 0 | 0 | 0.025552 | 0 | 0.111111 | 1 | 0.079365 | false | 0 | 0.087302 | 0 | 0.166667 | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8976eae594b4584e64e8e97c683b3a1f51871c22 | 3,767 | py | Python | 19-python/BeaconScanner.py | kmolski/aoc-2021 | 59288c5deca1d65208573123c972fea37fbb3f9e | [
"MIT"
] | 1 | 2022-01-06T22:22:28.000Z | 2022-01-06T22:22:28.000Z | 19-python/BeaconScanner.py | kmolski/aoc-2021 | 59288c5deca1d65208573123c972fea37fbb3f9e | [
"MIT"
] | null | null | null | 19-python/BeaconScanner.py | kmolski/aoc-2021 | 59288c5deca1d65208573123c972fea37fbb3f9e | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from io import StringIO
from itertools import groupby, permutations, product
from math import cos, sin, radians
from sys import argv
import numpy as np
INTERSECTION_SIZE = 12
MATCHING_DIST_COUNT = (INTERSECTION_SIZE * (INTERSECTION_SIZE - 1)) / 2
SIN_90 = sin(radians(90))
COS_90 = cos(radians(90))
X_ROT = np.matrix([[1, 0, 0], [0, COS_90, -SIN_90], [0, SIN_90, COS_90]])
Y_ROT = np.matrix([[COS_90, 0, SIN_90], [0, 1, 0], [-SIN_90, 0, COS_90]])
Z_ROT = np.matrix([[COS_90, -SIN_90, 0], [SIN_90, COS_90, 0], [0, 0, 1]])
def point_diff(a, b):
return tuple(coord_a - coord_b for (coord_a, coord_b) in zip(a, b))
def sq_dist(a, b):
return sum(coord**2 for coord in point_diff(a, b))
def manhattan_dist(a, b):
return sum(abs(coord) for coord in point_diff(a, b))
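# Quick sanity check (illustrative, not in the original file):
#   point_diff((4, 6, 3), (1, 2, 3)) == (3, 4, 0)
#   sq_dist((4, 6, 3), (1, 2, 3)) == 25 and manhattan_dist((4, 6, 3), (1, 2, 3)) == 7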
def group_len(grouper):
return sum(1 for _ in grouper)
def get_dist_freqs(points):
sq_distances = [sq_dist(a, b) for a, b in permutations(points, 2)]
return {d: group_len(g) for d, g in groupby(sorted(sq_distances))}
def get_diffs(ref_points, points):
pos_diffs = [point_diff(a, b) for a, b in product(ref_points, points)]
return [(d, group_len(g)) for d, g in groupby(sorted(pos_diffs))]
def rotate_point_cloud(points, rot):
rotated = np.rint(points.copy() @ rot.T)
return np.array(rotated, dtype=np.int32)
def align_point_cloud(ref_points, points):
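    # Brute-force the orientation: the nested 4x4x4 loop visits 64 axis rotations
    # (covering the 24 unique cube orientations, with repeats) and accepts the first
    # offset shared by at least INTERSECTION_SIZE beacons.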
for _ in range(4):
for _ in range(4):
for _ in range(4):
diff = max(get_diffs(ref_points, points), key=lambda it: it[1])
if diff[1] >= INTERSECTION_SIZE:
return diff[0], points
points = rotate_point_cloud(points, Y_ROT)
points = rotate_point_cloud(points, X_ROT)
points = rotate_point_cloud(points, Z_ROT)
raise Exception("Alignment not found")
class PointCloud:
def __init__(self, points):
self.points = points
self.dist_freqs = get_dist_freqs(points)
self.scanner_pos = (0, 0, 0)
def does_merge_with(self, other_cloud):
self_freq, other_freq = self.dist_freqs, other_cloud.dist_freqs
common_dists = self_freq.keys() & other_freq.keys()
if not common_dists:
return False
common_dist_count = sum(min(self_freq[k], other_freq[k]) for k in common_dists)
return common_dist_count >= MATCHING_DIST_COUNT
def merge_into(self, scanner):
diff, points = align_point_cloud(scanner.points, self.points)
self.scanner_pos = diff
translated = np.add(points.copy(), np.array(diff))
all_points = np.unique(np.append(scanner.points, translated, axis=0), axis=0)
scanner.points = all_points
scanner.dist_freqs = get_dist_freqs(all_points)
def parse_scanner_data(section):
section_buffer = StringIO(section)
points = np.loadtxt(section_buffer, skiprows=1, delimiter=",", dtype=np.int32)
return PointCloud(points)
def merge_point_clouds(scanners):
target, *rest = scanners
done = []
while rest:
cloud = rest.pop(0)
if cloud.does_merge_with(target):
cloud.merge_into(target)
done.append(cloud)
else:
rest.append(cloud)
return target, done
def find_max_manhattan_dist(scanners):
return max(
manhattan_dist(a.scanner_pos, b.scanner_pos)
for a, b in permutations(scanners, 2)
)
with open(argv[1]) as input_file:
sections = input_file.read().split("\n\n")
point_clouds = [parse_scanner_data(section) for section in sections]
scanner_0, _ = merge_point_clouds(point_clouds)
print(f"Part 1: {scanner_0.points.shape[0]}")
max_distance = find_max_manhattan_dist(point_clouds)
print(f"Part 2: {max_distance}")
| 29.429688 | 87 | 0.664454 | 576 | 3,767 | 4.107639 | 0.237847 | 0.009298 | 0.010144 | 0.018597 | 0.220203 | 0.11623 | 0.082418 | 0.082418 | 0.048183 | 0.030431 | 0 | 0.026771 | 0.216618 | 3,767 | 127 | 88 | 29.661417 | 0.774992 | 0.005575 | 0 | 0.034884 | 0 | 0 | 0.021629 | 0.00721 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162791 | false | 0 | 0.05814 | 0.05814 | 0.383721 | 0.023256 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
897cee4247148463acee605a8d5251ae9c74fc0e | 1,518 | py | Python | logaspect/groundtruth/split_xml_format.py | studiawan/logaspect | adeb4a5f802ef22329cacb50ea00b090f12f6ebb | [
"MIT"
] | null | null | null | logaspect/groundtruth/split_xml_format.py | studiawan/logaspect | adeb4a5f802ef22329cacb50ea00b090f12f6ebb | [
"MIT"
] | null | null | null | logaspect/groundtruth/split_xml_format.py | studiawan/logaspect | adeb4a5f802ef22329cacb50ea00b090f12f6ebb | [
"MIT"
] | 1 | 2021-11-11T11:41:18.000Z | 2021-11-11T11:41:18.000Z | from xml.dom import minidom
import xml.etree.ElementTree as Et
class SplitXMLFormat(object):
def __init__(self, data, output_file):
self.data = data
self.output_file = output_file
@staticmethod
def __prettify(elements):
"""Return a pretty-printed XML string for the Element.
"""
rough_string = Et.tostring(elements, 'utf-8')
reparsed = minidom.parseString(rough_string)
return reparsed.toprettyxml(indent='\t')
def convert(self):
# create the file structure
sentences = Et.Element('sentences')
index = 1
for element in self.data:
sentence = Et.SubElement(sentences, 'sentence')
sentence_text = Et.SubElement(sentence, 'text')
sentence.set('id', str(index))
sentence_text.text = element['sentence']
if element['term'] is not None:
aspect_terms = Et.SubElement(sentence, 'aspectTerms')
                for term in element['term']:
                    # one aspectTerm element per term; a single element created
                    # outside the loop would be overwritten and keep only the last term
                    aspect_term = Et.SubElement(aspect_terms, 'aspectTerm')
                    aspect_term.set('term', term[0])
                    aspect_term.set('polarity', element['sentiment'])
                    aspect_term.set('from', str(term[1]))
                    aspect_term.set('to', str(term[2]))
index += 1
# create a new XML file with the results
xmlstr = self.__prettify(sentences)
with open(self.output_file, 'w') as f:
f.write(xmlstr)
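# Illustrative usage (the input shape is inferred from convert(), not documented upstream):
#   data = [{'sentence': 'disk fails fast', 'term': [('disk', 0, 4)], 'sentiment': 'negative'}]
#   SplitXMLFormat(data, 'out.xml').convert()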
| 33.733333 | 71 | 0.583663 | 172 | 1,518 | 5.017442 | 0.424419 | 0.057937 | 0.060255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005703 | 0.306983 | 1,518 | 44 | 72 | 34.5 | 0.814639 | 0.083004 | 0 | 0 | 0 | 0 | 0.068592 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096774 | false | 0 | 0.064516 | 0 | 0.225806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
897d8a1a4cebed6675e8d257d5a7f5e1a6d7eaf2 | 1,510 | py | Python | app/Search/schema.py | psyphore/flask-phone-book | cceec3caabdeb03f260d37f3b55d5aa7a52c30c2 | [
"MIT"
] | null | null | null | app/Search/schema.py | psyphore/flask-phone-book | cceec3caabdeb03f260d37f3b55d5aa7a52c30c2 | [
"MIT"
] | 2 | 2021-03-19T03:39:56.000Z | 2021-06-08T20:28:03.000Z | app/Search/schema.py | psyphore/flask-phone-book | cceec3caabdeb03f260d37f3b55d5aa7a52c30c2 | [
"MIT"
] | null | null | null | import graphene
from graphql import GraphQLError
from app.People.models import Person
from .service import SearchService
from app.People.graphql_types import PersonType
from .graphql_types import SearchType, SearchResultType
service = SearchService()
class SearchQuery(graphene.ObjectType):
'''Search Query,
fetch person entries matching to provided criteria
'''
search = graphene.Field(SearchResultType, query=graphene.NonNull(graphene.String), limit=graphene.Int(10))
search_advanced = graphene.Field(SearchResultType, criteria=graphene.NonNull(SearchType))
def resolve_search(self, info, **args):
        query, limit = args.get("query"), args.get("limit")
        result = service.filter(query=query, limit=limit)
        if result is None:
            raise GraphQLError(f'"{query}" has not been found in our people list.')
sr = SearchResultType()
sr.count = len(result)
sr.data = [PersonType(**Person.wrap(r).as_dict()) for r in result]
return sr
def resolve_search_advanced(self, info, criteria):
result = service.filter(query=criteria.query,limit=criteria.first,skip=criteria.offset)
if result is None:
raise GraphQLError(f'"{criteria.query}" has not been found in our people list.')
sr = SearchResultType()
sr.count = len(result)
sr.data = [PersonType(**Person.wrap(r).as_dict()) for r in result]
return sr
schema = graphene.Schema(query=SearchQuery, auto_camelcase=True)
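# Example query (illustrative; assumes Person records exist in the backing store):
#   result = schema.execute('{ search(query: "alice", limit: 5) { count } }')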
| 33.555556 | 110 | 0.686755 | 187 | 1,510 | 5.497326 | 0.385027 | 0.021401 | 0.025292 | 0.046693 | 0.289883 | 0.289883 | 0.289883 | 0.227626 | 0.227626 | 0.227626 | 0 | 0.001672 | 0.207947 | 1,510 | 44 | 111 | 34.318182 | 0.85786 | 0.043046 | 0 | 0.357143 | 0 | 0 | 0.07784 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.214286 | 0 | 0.464286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
897fa0ec09d0c1f6b4067a68206a8ac3cfab5cb6 | 1,180 | py | Python | initialization/bustype.py | rwl/rapid | c7c6592327e045f1828b351c498f6cb93b218a21 | [
"BSD-3-Clause"
] | 4 | 2021-10-30T02:18:21.000Z | 2021-11-02T12:39:05.000Z | initialization/bustype.py | rwl/rapid | c7c6592327e045f1828b351c498f6cb93b218a21 | [
"BSD-3-Clause"
] | null | null | null | initialization/bustype.py | rwl/rapid | c7c6592327e045f1828b351c498f6cb93b218a21 | [
"BSD-3-Clause"
] | 1 | 2021-10-30T02:18:24.000Z | 2021-10-30T02:18:24.000Z | '''
MATPOWER
Copyright (c) 1996-2016 by Power System Engineering Research Center (PSERC) by Ray Zimmerman, PSERC Cornell
This code follows part of MATPOWER.
See http://www.pserc.cornell.edu/matpower/ for more info.
Modified by Oak Ridge National Laboratory (Byungkwon Park) to be used in the parareal algorithm.
'''
import numpy as np
from scipy.sparse import csc_matrix
def bustype(bus, gen):
nb = len(bus.toarray())
ng = len(gen.toarray())
row = gen[:, 0].toarray().reshape(-1) - 1
col = np.arange(ng)
data = (gen[:, 0] > 0).toarray().reshape(-1)
Cg = csc_matrix((data, (row, col)), shape=(nb, ng))
bus_gen_status = Cg*np.ones(ng)
## form index lists for slack, PV, and PQ buses
busidx = (bus.tocsc()[:, 1])
busidx = busidx.todense().reshape(-1)
ref = np.logical_and(busidx == 3, bus_gen_status > 0 )[0]
ref = np.where(np.transpose(ref) == True)[0]
pv = np.logical_and(busidx == 2, bus_gen_status > 0)[0]
pv = np.where(np.transpose(pv) == True)[0]
pq = np.logical_or(busidx == 1, bus_gen_status == 0)[0]
pq = np.where(np.transpose(pq) == True)[0]
return ref, pv, pq
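# Usage sketch (assumes MATPOWER-style sparse bus/gen matrices, e.g. loaded from a case file):
#   ref, pv, pq = bustype(bus, gen)  # index arrays of the slack, PV, and PQ buses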
| 31.052632 | 107 | 0.650847 | 188 | 1,180 | 4.005319 | 0.489362 | 0.039841 | 0.063745 | 0.051793 | 0.055777 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0.19322 | 1,180 | 37 | 108 | 31.891892 | 0.761555 | 0.299153 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.105263 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89815f9fce2325c06f32e8ded32f06fbf3f4c405 | 2,599 | py | Python | mllib/utility/logger.py | a4rcvv/mllib | 7817abf4cebe6ab859be5927aef09fdaefc82a4d | [
"Unlicense"
] | null | null | null | mllib/utility/logger.py | a4rcvv/mllib | 7817abf4cebe6ab859be5927aef09fdaefc82a4d | [
"Unlicense"
] | null | null | null | mllib/utility/logger.py | a4rcvv/mllib | 7817abf4cebe6ab859be5927aef09fdaefc82a4d | [
"Unlicense"
] | null | null | null | import logging
import os
from slack_log_handler import SlackLogHandler
STOP_LOG = 100
def make_root_logger(console_loglevel: int = logging.DEBUG, file_loglevel: int = STOP_LOG,
slack_loglevel: int = STOP_LOG, log_file_path: str = None) -> logging.Logger:
"""Generate the root logger, including Slack log handler.
Args:
console_loglevel: the logging level of console handler.
file_loglevel: the logging level of file log handler.
slack_loglevel: the logging level of slack log handler.
log_file_path: the path of log file.
Returns:
the root logger
"""
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
formatter = logging.Formatter(
fmt="[%(levelname)s] %(asctime)s %(module)s::%(funcName)s, line %(lineno)d >> %(message)s"
)
if console_loglevel < STOP_LOG:
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
console_handler.setLevel(console_loglevel)
root_logger.addHandler(console_handler)
if log_file_path is not None and file_loglevel < STOP_LOG:
file_handler = logging.FileHandler(log_file_path)
file_handler.setFormatter(formatter)
file_handler.setLevel(file_loglevel)
root_logger.addHandler(file_handler)
if slack_loglevel < STOP_LOG:
env_val_name = "WEBHOOK_URL"
try:
webhook_url = os.environ[env_val_name]
except KeyError as e:
            root_logger.error(
                "Environment variable \"{0}\" is not defined, so the root logger cannot log to Slack. "
                "Do \"export {0}=(WebHook URL)\" in the terminal, or "
                "edit the environment variables under Edit Configurations in PyCharm.".format(env_val_name))
else:
slack_handler = SlackLogHandler(webhook_url)
slack_handler.setFormatter(formatter)
slack_handler.setLevel(slack_loglevel)
root_logger.addHandler(slack_handler)
return root_logger
def make_child_logger(logger_name: str) -> logging.Logger:
"""Generate a logger, which propagates its logs to the root logger.
Args:
logger_name: the name of this logger. "__name__" is recommended.
Returns:
logging.Logger
"""
logger = logging.getLogger(logger_name)
logger.addHandler(logging.NullHandler())
logger.setLevel(logging.DEBUG)
logger.propagate = True
return logger
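# Illustrative wiring (the chosen levels and path are assumptions, not defaults):
#   root = make_root_logger(file_loglevel=logging.INFO, log_file_path='app.log')
#   log = make_child_logger(__name__)
#   log.info('hello')  # reaches the console and app.log; Slack stays off at STOP_LOG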
| 32.4875 | 106 | 0.660639 | 309 | 2,599 | 5.326861 | 0.297735 | 0.072904 | 0.030377 | 0.04192 | 0.045565 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002597 | 0.259331 | 2,599 | 79 | 107 | 32.898734 | 0.852468 | 0.184686 | 0 | 0.044444 | 0 | 0.022222 | 0.121832 | 0.012183 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0.066667 | 0 | 0.155556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
898338757509f355846c2cf187edc5ea7c636b57 | 9,820 | py | Python | test_one_image.py | zhouxiaoxu/pytorch-retinanet | 72013ee0f46d2da08dd141f143a7a094477aa5e4 | [
"Apache-2.0"
] | null | null | null | test_one_image.py | zhouxiaoxu/pytorch-retinanet | 72013ee0f46d2da08dd141f143a7a094477aa5e4 | [
"Apache-2.0"
] | null | null | null | test_one_image.py | zhouxiaoxu/pytorch-retinanet | 72013ee0f46d2da08dd141f143a7a094477aa5e4 | [
"Apache-2.0"
] | null | null | null |
# coding: utf-8
import numpy as np
import torchvision
import time
import os
import copy
import pdb
import argparse
from PIL import Image
import sys
import cv2
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import datasets, models, transforms
from retinanet.dataloader import CocoDataset, CSVDataset, collater, Resizer, AspectRatioBasedSampler, Augmenter, \
UnNormalizer, Normalizer
import skimage.io
import skimage.transform
import skimage.color
import skimage
def init_dataset(image_csv_file, class_csv_file):
    '''
    Create the dataset object.
    Args:
        image_csv_file: CSV listing the images to test; annotation columns are
            optional, e.g.:
            ./dataset/chongyin/15050400/2_d68e121b909ee9c2.jpg,,,,,,
            ./dataset/biguashikongtiao/176249224_0070163647_1.jpg,18,98,377,233,biguashikongtiao
        class_csv_file: label definitions, e.g.:
            baozhuang,0
            biguashikongtiao,1
            kongtiaoshan,2
            liguishikongtiao,3
            yaokongqi,4
    '''
    # initialize the dataset object
the_dataset = CSVDataset(train_file=image_csv_file, class_list=class_csv_file, transform=transforms.Compose([Normalizer(), Resizer()]))
return the_dataset
def init_dataloader(the_dataset, batch_size=1, num_worker=1):
    '''
    Create the data loader object.
    Args:
        the_dataset: the dataset object built by init_dataset
        batch_size: batch size
        num_worker: number of worker threads used to load the data
    '''
    # initialize the sampler object
    the_sampler = AspectRatioBasedSampler(the_dataset, batch_size=batch_size, drop_last=False)
    # create the dataloader object
the_dataloader = DataLoader(the_dataset, num_workers=num_worker, collate_fn=collater, batch_sampler=the_sampler)
return the_dataloader
def init_model(model_file):
    '''
    Create the model object.
    Args:
        model_file: path to the saved model file
    '''
use_gpu = True
retinanet = torch.load(model_file)
if use_gpu:
if torch.cuda.is_available():
retinanet = retinanet.cuda()
if torch.cuda.is_available():
        retinanet = torch.nn.DataParallel(retinanet).cuda()  # multi-GPU parallel mode
else:
retinanet = torch.nn.DataParallel(retinanet)
return retinanet
def detector_images(retinanet, the_dataset, the_data, thresh_score = 0.5):
    '''
    Run object detection on the input data.
    Args:
        the_dataset: dataset object, used to look up the class names
        the_data: dict holding the batch: {'img': image tensor; 'image_path': image paths; 'scale': resize scales}
        thresh_score: score threshold used to filter boxes
    Returns:
        result_dict: key = image path, value = [[x1, y1, x2, y2, classname, score], ...]
    '''
result_dict = dict()
with torch.no_grad():
st = time.time()
if torch.cuda.is_available():
the_result = retinanet(the_data['img'].cuda().float())
else:
the_result = retinanet(the_data['img'].float())
print('Elapsed time: {}'.format(time.time()-st))
for image_index, (scores, classification, transformed_anchors) in enumerate(the_result):
idxs = np.where(scores.cpu()>thresh_score)
image_path = the_data['image_path'][image_index]
scale = the_data['scale'][image_index]
if idxs[0].shape[0]==0:
result_dict[image_path] = [[0,0,0,0, "None", .0]]
else:
for j in range(idxs[0].shape[0]):
bbox = transformed_anchors[idxs[0][j], :]
x1 = int(bbox[0]/scale)
y1 = int(bbox[1]/scale)
x2 = int(bbox[2]/scale)
y2 = int(bbox[3]/scale)
label_name = the_dataset.labels[int(classification[idxs[0][j]])]
if image_path in result_dict:
result_dict[image_path].append([x1,y1,x2,y2, label_name, scores[j]])
else:
result_dict[image_path]= [[x1,y1,x2,y2, label_name, scores[j]]]
return result_dict
def draw_caption(image, box, caption):
b = np.array(box).astype(int)
    # b[1]-20 keeps the label from running past the top edge of the image
cv2.putText(image, caption, (b[0], b[1]-10 if b[1]-20>0 else 30), cv2.FONT_HERSHEY_PLAIN, 1, (0, 0, 0), 2)
cv2.putText(image, caption, (b[0], b[1]-10 if b[1]-20>0 else 30), cv2.FONT_HERSHEY_PLAIN, 1, (255, 255, 255), 1)
def open_for_csv(path):
"""
Open a file with flags suitable for csv.reader.
This is different for python2 it means with mode 'rb',
for python3 this means 'r' with "universal newlines".
"""
if sys.version_info[0] < 3:
return open(path, 'rb')
else:
return open(path, 'r', newline='')
def read_class_file(class_file):
# parse the provided class file
class_dict, label_dict = {}, {}
try:
with open_for_csv(class_file) as csv_reader:
for line, row in enumerate(csv_reader):
line += 1
try:
class_name, class_id = row.strip().split(',')
                except ValueError:
                    raise ValueError('line {}: format should be \'class_name,class_id\''.format(line)) from None
class_id = int(class_id)
if class_name in class_dict:
raise ValueError('line {}: duplicate class name: \'{}\''.format(line, class_name))
class_dict[class_name] = class_id
label_dict[class_id] = class_name
    except ValueError as e:
        raise ValueError('invalid CSV class file: {}: {}'.format(class_file, e)) from e
return class_dict, label_dict
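# Worked example (illustrative): a classes.csv containing the two lines
#   baozhuang,0
#   biguashikongtiao,1
# yields ({'baozhuang': 0, 'biguashikongtiao': 1}, {0: 'baozhuang', 1: 'biguashikongtiao'})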
if __name__ == "__main__":
    '''
    This program verifies PyTorch's data loading pipeline.
    It shows that after images are read with scikit-image, the transform objects
    convert the data, collater turns it into batch form, and the batch is then fed
    to the network in one pass.
    It also shows that the PyTorch network dynamically adjusts the GPU memory used
    for inference based on the number of input images.
    Note that the transforms used here are the custom ones defined in dataloader,
    which may differ from the official PyTorch implementations.
    '''
    # read the class file
    class_dict, label_dict = read_class_file("datasetv3/classes.csv")
    # build the image transform pipeline
    transform = transforms.Compose([Normalizer(), Resizer()])
    # build the network
    model_file = "csv_retinanet_65.pt"
use_gpu = True
retinanet = torch.load(model_file)
if use_gpu:
if torch.cuda.is_available():
retinanet = retinanet.cuda()
if torch.cuda.is_available():
        retinanet = torch.nn.DataParallel(retinanet).cuda()  # multi-GPU parallel mode
else:
retinanet = torch.nn.DataParallel(retinanet)
    # read the image
image_path = "datasetv3/add/badcase/404859041667234040256125_x.jpg"
img = skimage.io.imread(image_path)
if len(img.shape) == 2:
img = skimage.color.gray2rgb(img)
img = img.astype(np.float32)/255.0
    # build the sample dict
im_dict = {'img':img, 'annot': np.array([[0,0,0,0,-1]], dtype='float64'), 'image_path':image_path}
im_tensor = transform(im_dict)
im_tensors = [im_tensor for i in range(10)]
im_tensors = collater(im_tensors)
    # forward pass
result_dict = dict()
with torch.no_grad():
st = time.time()
if torch.cuda.is_available():
the_result = retinanet(im_tensors['img'].cuda().float())
else:
the_result = retinanet(im_tensors['img'].float())
print('Elapsed time: {}'.format(time.time()-st))
for image_index, (scores, classification, transformed_anchors) in enumerate(the_result):
idxs = np.where(scores.cpu()>0.5)
image_path = im_tensors['image_path'][image_index]
scale = im_tensors['scale'][image_index]
if idxs[0].shape[0]==0:
                result_dict[image_path] = [[0, 0, 0, 0, "None", .0]]
                # print("no bounding box in {}".format(result_dict[image_path]))
else:
for j in range(idxs[0].shape[0]):
bbox = transformed_anchors[idxs[0][j], :]
x1 = int(bbox[0]/scale)
y1 = int(bbox[1]/scale)
x2 = int(bbox[2]/scale)
y2 = int(bbox[3]/scale)
label_name = label_dict[int(classification[idxs[0][j]])]
if image_path in result_dict:
result_dict[image_path].append([x1,y1,x2,y2, label_name, scores[j]])
else:
result_dict[image_path]= [[x1,y1,x2,y2, label_name, scores[j]]]
for index, image_path in enumerate(result_dict):
img = cv2.imread(image_path)
bboxes = result_dict[image_path]
for box in bboxes:
if box[4] != "None":
x1 = box[0]
y1 = box[1]
x2 = box[2]
y2 = box[3]
class_name = box[4]
score = box[5]
                # print detection box info
                print("image path: {}, box: x1={}, y1={}, x2={}, y2={}, class label={}, score={}".format(image_path, x1, y1, x2, y2, class_name, score))
                # draw the detection box on the image
txt_draw = "%s %.2f" % (class_name, score)
draw_caption(img, (x1, y1, x2, y2), txt_draw)
cv2.rectangle(img, (x1, y1), (x2, y2), color=(0, 0, 255), thickness=2)
result_path = os.path.join("result111",image_path)
result_dir = os.path.dirname(result_path)
if not os.path.exists(result_dir):
os.makedirs(result_dir)
cv2.imwrite(result_path, img)
#cv2.imshow('img', img)
#cv2.waitKey(0) | 36.236162 | 160 | 0.582383 | 1,173 | 9,820 | 4.692242 | 0.236147 | 0.039244 | 0.008721 | 0.011628 | 0.42351 | 0.395167 | 0.386265 | 0.37391 | 0.367369 | 0.356831 | 0 | 0.045349 | 0.299389 | 9,820 | 271 | 161 | 36.236162 | 0.754651 | 0.156314 | 0 | 0.378882 | 0 | 0.006211 | 0.052392 | 0.009514 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.118012 | 0 | 0.204969 | 0.018634 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89837c95dfd6f41782d792b3614567e32b3938f5 | 13,424 | py | Python | prop/algorithms/dqn.py | abstractpaper/prop | f2ca127119ffbfb3f7d2855eff7e7473e0bb3a80 | [
"MIT"
] | null | null | null | prop/algorithms/dqn.py | abstractpaper/prop | f2ca127119ffbfb3f7d2855eff7e7473e0bb3a80 | [
"MIT"
] | null | null | null | prop/algorithms/dqn.py | abstractpaper/prop | f2ca127119ffbfb3f7d2855eff7e7473e0bb3a80 | [
"MIT"
] | null | null | null | import torch
import torch.optim as optim
import torch.nn.functional as F
import numpy as np
import random
import math
import copy
import time
from collections import namedtuple
from itertools import count, compress
from tensorboardX import SummaryWriter
from prop.buffers.priority_replay_buffer import PrioritizedReplayBuffer
Transition = namedtuple('Transition',
('state', 'action', 'next_state', 'reward', 'mask'))
class Agent:
def __init__(self,
env,
net,
name="",
double=True,
learning_rate=3e-4,
batch_size=128,
optimizer=optim.Adam,
loss_cutoff=0.1,
max_std_dev=-1,
epsilon_start=1,
epsilon_end=0.1,
epsilon_decay=1000,
discount=0.99,
target_net_update=5000,
eval_episodes_count=1000,
eval_every=1000,
replay_buffer=PrioritizedReplayBuffer,
replay_buffer_capacity=1000000,
extra_metrics=None,
logdir=None,
dev=None):
global device
device = dev
self.name = name
self.double = double # double q learning
self.loss_cutoff = loss_cutoff # training stops at loss_cutoff
self.max_std_dev = max_std_dev # max std deviation allowed to stop training; >= 0 to activate
self.learning_rate = learning_rate # alpha
self.batch_size = batch_size
self.optimizer = optimizer
self.epsilon_start = epsilon_start # start with 100% exploration
self.epsilon_end = epsilon_end # end with 10% exploration
self.epsilon_decay = epsilon_decay # higher value = slower decay
self.discount = discount # gamma
self.target_net_update = target_net_update # number of steps to update target network
self.eval_episodes_count = eval_episodes_count # number of episodes to evaluate
self.eval_every = eval_every # number of steps to run evaluations at
self.replay_buffer = replay_buffer(replay_buffer_capacity)
self.env = env
self.policy_net = net(self.env.observation_space_n, self.env.action_space_n).to(device) # what drives current actions; uses epsilon.
self.target_net = net(self.env.observation_space_n, self.env.action_space_n).to(device) # copied from policy net periodically; greedy.
self.logdir = logdir
# init target_net
self.target_net.load_state_dict(self.policy_net.state_dict())
self.target_net.eval()
def train(self):
writer = SummaryWriter(logdir=self.logdir, comment=f"-{self.name}" if self.name else "")
steps = 1
recent_loss = []
recent_eval = []
avg_rewards = 0
while True:
# fill replay buffer with one episode from the current policy (epsilon is used)
self.load_replay_buffer(policy=self.policy_net, steps=steps)
# sample transitions
transitions, idxs, is_weights = self.replay_buffer.sample(self.batch_size)
if len(transitions) < self.batch_size:
continue
# optimize policy_net
loss = self.optimize(transitions, idxs, is_weights)
# keep track of recent losses and truncate list to latest `eval_every` losses
recent_loss.append(loss)
recent_loss = recent_loss[-self.eval_every:]
# tensorboard metrics
epsilon = Agent.eps(self.epsilon_start, self.epsilon_end, self.epsilon_decay, steps)
writer.add_scalar("env/epsilon", epsilon, steps)
writer.add_scalar("env/replay_buffer", len(self.replay_buffer), steps)
writer.add_scalar("train/loss", loss, steps)
# update the target network, copying all weights and biases in policy_net to target_net
if steps % self.target_net_update == 0:
self.target_net.load_state_dict(self.policy_net.state_dict())
# run evaluation
if steps % self.eval_every == 0:
avg_rewards, stddev = self.evaluate_policy(self.policy_net)
writer.add_scalar("train/avg_rewards", avg_rewards, steps)
writer.add_scalar("train/ep_rewards_std", stddev, steps)
recent_eval.append(avg_rewards)
recent_eval = recent_eval[-10:]
loss_achieved = sum(recent_loss)/len(recent_loss) <= self.loss_cutoff
avg_rewards_achieved = sum(recent_eval)/len(recent_eval) >= self.env.spec.reward_threshold
std_dev_achieved = (self.max_std_dev < 0) or (self.max_std_dev >= 0 and stddev <= self.max_std_dev)
if loss_achieved and avg_rewards_achieved and std_dev_achieved:
break
steps = steps + 1
# save model
policy_name = self.name if self.name else "dqn"
torch.save(self.policy_net.state_dict(), f"policies/{policy_name}")
writer.close()
@staticmethod
def eps(start, end, decay, steps):
# compute epsilon threshold
return end + (start - end) * math.exp(-1. * steps / decay)
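    # e.g. eps(1, 0.1, 1000, 0) == 1.0 and eps(1, 0.1, 1000, 1000) is ~0.43;
    # the threshold decays toward `end` as steps grows (illustrative values)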
@staticmethod
def legal_actions_to_mask(legal_actions, action_space_n):
mask = [0]*action_space_n
for n in legal_actions:
mask[n] = 1
return mask
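    # e.g. legal_actions_to_mask([0, 2], 4) -> [1, 0, 1, 0]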
def load_replay_buffer(self, policy=None, episodes_count=1, steps=0):
""" load replay buffer with episodes_count """
for eps_idx in range(episodes_count):
state = self.env.reset()
while True:
legal_actions = self.env.legal_actions
action = self.select_action(
policy=policy,
state=state,
epsilon=True,
steps=steps,
legal_actions=legal_actions).item()
# perform action
next_state, reward, done, _ = self.env.step(action)
# insert into replay buffer
mask = Agent.legal_actions_to_mask(legal_actions, self.env.action_space_n)
transition = Transition(state, action, next_state if not done else None, reward, mask)
# set error of new transitions to a very high number so they get sampled
self.replay_buffer.push(self.replay_buffer.tree.total, transition)
if done:
break
else:
# transition
state = next_state
def evaluate_policy(self, policy):
ep_rewards = []
for _ in range(self.eval_episodes_count):
            self.env.seed(int(time.time()))  # seed() typically expects an integer
state = self.env.reset()
ep_reward = 0
while True:
legal_actions = self.env.legal_actions
action = self.select_action(
policy=policy,
state=state,
epsilon=False,
legal_actions=legal_actions).item()
next_state, reward, done, _ = self.env.step(action)
ep_reward += reward
if done:
ep_rewards.append(ep_reward)
break
else:
state = next_state
return np.mean(ep_rewards), np.std(ep_rewards)
def select_action(self, policy, state, epsilon=False, steps=None, legal_actions=[]):
"""
selects an action with a chance of being random if epsilon is True,
otherwise selects the action produced by policy.
"""
if epsilon:
if steps == None:
raise ValueError(f"steps must be an integer. Got = {steps}")
# pick a random number
sample = random.random()
# see what the dice rolls
threshold = Agent.eps(self.epsilon_start, self.epsilon_end, self.epsilon_decay, steps)
if sample <= threshold:
# explore
                action = random.choice(legal_actions)  # uniform over the legal actions
return torch.tensor([[action]], device=device, dtype=torch.long)
# greedy action
with torch.no_grad():
# index of highest value item returned from policy -> action
state = torch.Tensor(state).to(device)
mask = torch.zeros(self.env.action_space_n).index_fill(0, torch.LongTensor(legal_actions), 1)
return policy(state, mask).argmax().view(1, 1)
def optimize(self, transitions, idxs, is_weights):
# n transitions -> 1 transition with each attribute containing all the
# data point values along its axis.
# e.g. batch.action = list of all actions from each row
batch = Transition(*zip(*transitions))
# Compute state action values; the value of each action in batch according
# to policy_net (feeding it a state and emitting an probability distribution).
# These are the values that our current network think are right and we want to correct.
state_action_values = self.state_action_values(batch)
# compute expected state action values (reward + value of next state according to target_net)
expected_state_action_values = self.expected_state_action_values(batch)
# calculate difference between actual and expected action values
batch_loss = F.smooth_l1_loss(state_action_values, expected_state_action_values, reduction='none')
loss = (sum(batch_loss * torch.FloatTensor(is_weights).unsqueeze(1))/self.batch_size).squeeze()
# update priority
for i in range(self.batch_size):
self.replay_buffer.update(idxs[i], batch_loss[i].item())
# optimizer
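        # (note: constructing the optimizer anew on each call resets Adam's moment
        # estimates every step; hoisting this into __init__ would preserve them)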
optimizer = self.optimizer(params=self.policy_net.parameters(), lr=self.learning_rate)
optimizer.zero_grad()
# calculate gradients
loss.backward()
for param in self.policy_net.parameters():
# clip gradients
param.grad.data.clamp_(-1, 1)
# optimize policy_net
optimizer.step()
return loss
def state_action_values(self, batch):
"""
Compute Q(s_t, a) - the model computes Q(s_t), then we select the
columns of actions taken. These are the actions which would've been taken
for each batch state according to policy_net.
"""
# list -> tensor
state_batch = torch.Tensor(batch.state).to(device)
mask_batch = torch.Tensor(batch.mask).to(device)
action_batch = torch.Tensor(batch.action).to(device)
# get action values for each state in batch
state_action_values = self.policy_net(state_batch, mask_batch)
# select action from state_action_values according to action_batch value
return state_action_values.gather(1, action_batch.unsqueeze(1).long())
def expected_state_action_values(self, batch):
"""
Compute V(s_{t+1}) for all next states.
Expected values of actions for non_final_next_states are computed based
on the "older" target_net; selecting their best reward with max(1)[0].
This is merged based on the mask, such that we'll have either the expected
state value or 0 in case the state was final.
"""
# a bool list indicating if next_state is final (s is not None)
non_final_mask = torch.tensor(tuple(map(lambda s: s is not None, batch.next_state)), device=device, dtype=torch.bool)
non_final_next_states = torch.Tensor([s for s in batch.next_state if s is not None]).to(device)
# get legal actions for non final states; (i, v) -> (list of legal actions, non_final_state)
next_mask = torch.Tensor([i for (i, v) in zip(list(batch.mask), non_final_mask.tolist()) if v]).to(device)
# initialize next_state_values to zeros
next_state_values = torch.zeros(self.batch_size).to(device)
if len(non_final_next_states) > 0:
if self.double:
# double q learning: get actions from policy_net and get their values according to target_net; decoupling
# action selection from evaluation reduces the bias imposed by max in single dqn.
# next_state_actions: action selection according to policy_net; Q(st+1, a)
next_state_actions = self.policy_net(non_final_next_states, next_mask).max(1)[1].unsqueeze(-1)
# next_state_values: action evaluation according to target_net; max Q`(st+1, max Q(st+1, a) )
next_state_values[non_final_mask] = self.target_net(non_final_next_states, next_mask).gather(1, next_state_actions).squeeze(-1)
else:
# max Q`(st+1, a)
next_state_values[non_final_mask] = self.target_net(non_final_next_states, next_mask).max(1)[0].detach()
# Compute the expected Q values
# reward + max Q`(st+1, a) * discount
reward_batch = torch.Tensor([[r] for r in batch.reward]).to(device)
state_action_values = reward_batch + (next_state_values.unsqueeze(1) * self.discount)
return state_action_values | 45.659864 | 143 | 0.617253 | 1,691 | 13,424 | 4.702543 | 0.195151 | 0.021504 | 0.032067 | 0.013581 | 0.171026 | 0.119593 | 0.096454 | 0.096454 | 0.087399 | 0.082746 | 0 | 0.009393 | 0.302071 | 13,424 | 294 | 144 | 45.659864 | 0.839364 | 0.228024 | 0 | 0.161458 | 0 | 0 | 0.019267 | 0.002163 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052083 | false | 0 | 0.0625 | 0.005208 | 0.161458 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8983a9be22a3861d8eb7b176eda31942d58d222f | 3,631 | py | Python | plugins/lookup/kube.py | dlwhitehurst/rustic-beast | 23fffb40b05f48fec5c6308f1ec36de48e387d40 | [
"Apache-2.0"
] | 48 | 2021-02-05T01:24:04.000Z | 2022-02-03T02:40:32.000Z | plugins/lookup/kube.py | dlwhitehurst/rustic-beast | 23fffb40b05f48fec5c6308f1ec36de48e387d40 | [
"Apache-2.0"
] | 48 | 2021-02-04T21:59:27.000Z | 2022-01-18T15:54:57.000Z | plugins/lookup/kube.py | dlwhitehurst/rustic-beast | 23fffb40b05f48fec5c6308f1ec36de48e387d40 | [
"Apache-2.0"
] | 18 | 2021-02-16T15:19:27.000Z | 2022-01-18T17:26:17.000Z | # https://docs.ansible.com/ansible/latest/dev_guide/developing_plugins.html#lookup-plugins
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
lookup: kube
author: Jose Montoya <jmontoya@ms3-inc.com>
version_added: "0.6.0"
short_description: Lookup kubernetes resources
description:
- Lookup kubernetes resources
options:
api_version:
description:
- Use to specify the API version. If I(resource definition) is provided, the I(apiVersion) from the
I(resource_definition) will override this option.
default: v1
kind:
description:
- Use to specify an object model. If I(resource definition) is provided, the I(kind) from a
I(resource_definition) will override this option.
required: true
resource_name:
description:
- Fetch a specific object by name. If I(resource definition) is provided, the I(metadata.name) value
from the I(resource_definition) will override this option.
namespace:
description:
- Limit the objects returned to a specific namespace. If I(resource definition) is provided, the
I(metadata.namespace) value from the I(resource_definition) will override this option.
label_selector:
description:
- Additional labels to include in the query. Ignored when I(resource_name) is provided.
field_selector:
description:
- Specific fields on which to query. Ignored when I(resource_name) is provided.
host:
description:
- Provide a URL for accessing the API. Can also be specified via K8S_AUTH_HOST environment variable.
kubeconfig:
description:
- Path to an existing Kubernetes config file. If not provided, and no other connection
options are provided, the openshift client will attempt to load the default
configuration file from I(~/.kube/config.json). Can also be specified via K8S_AUTH_KUBECONFIG environment
variable.
"""
from ansible.errors import AnsibleError
from ansible.plugins.lookup import LookupBase
from ansible.utils.display import Display
from ansible_collections.ms3_inc.tavros.plugins.module_utils.kube_common import KubeBase
import yaml
display = Display()
class KubeLookup(KubeBase):
def _fail(self, msg=None, **kwargs):
raise AnsibleError(msg)
def run(self, terms, variables=None, **kwargs):
self._set_base_params(kwargs)
kind = kwargs.get('kind')
name = kwargs.get('resource_name')
namespace = kwargs.get('namespace')
api_version = kwargs.get('api_version', 'v1')
label_selector = kwargs.get('label_selector')
field_selector = kwargs.get('field_selector')
if not kind:
raise AnsibleError(
"Error: no Kind specified. Use the 'kind' parameter, or provide an object YAML configuration "
"using the 'resource_definition' parameter."
)
k8s_obj = self._get_resource(kind,
name=name,
api_version=api_version,
namespace=namespace,
label_selector=label_selector,
field_selector=field_selector)
if k8s_obj is None:
return None
if name:
return [k8s_obj]
return k8s_obj.get('items')
class LookupModule(LookupBase):
def run(self, terms, variables=None, **kwargs):
return KubeLookup().run(terms, variables=variables, **kwargs)
| 38.221053 | 115 | 0.662352 | 431 | 3,631 | 5.450116 | 0.350348 | 0.038314 | 0.064708 | 0.03576 | 0.235419 | 0.235419 | 0.235419 | 0.165177 | 0.102171 | 0.045126 | 0 | 0.004871 | 0.264941 | 3,631 | 94 | 116 | 38.62766 | 0.875234 | 0.02396 | 0 | 0.1625 | 0 | 0.0375 | 0.590229 | 0.049421 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0375 | false | 0 | 0.075 | 0.0125 | 0.1875 | 0.0125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8983e6cd68b9d8c7ccabefd09bef779c29996094 | 6,289 | py | Python | cabot/cabotapp/tests/tests_jenkins.py | boringusername99/cabot | 56cfed43c006e145931f46cb68e316fbaccf75cd | [
"MIT"
] | 3,865 | 2015-01-01T11:37:14.000Z | 2022-03-30T01:02:50.000Z | cabot/cabotapp/tests/tests_jenkins.py | boringusername99/cabot | 56cfed43c006e145931f46cb68e316fbaccf75cd | [
"MIT"
] | 550 | 2015-01-02T18:06:08.000Z | 2021-11-04T23:39:47.000Z | cabot/cabotapp/tests/tests_jenkins.py | boringusername99/cabot | 56cfed43c006e145931f46cb68e316fbaccf75cd | [
"MIT"
] | 598 | 2015-01-22T12:17:53.000Z | 2022-03-25T17:32:21.000Z | # -*- coding: utf-8 -*-
import unittest
from datetime import timedelta
import jenkins
from cabot.cabotapp import jenkins as cabot_jenkins
from cabot.cabotapp.models import JenkinsConfig
from cabot.cabotapp.models.jenkins_check_plugin import JenkinsStatusCheck
from django.utils import timezone
from freezegun import freeze_time
from mock import create_autospec, patch
class TestGetStatus(unittest.TestCase):
def setUp(self):
self.job = {
u'inQueue': False,
u'queueItem': None,
u'lastSuccessfulBuild': {
u'number': 12,
},
u'lastCompletedBuild': {
u'number': 12,
},
u'lastBuild': {
u'number': 12,
},
u'color': 'blue'
}
self.build = {
u'number': 12,
u'result': u'SUCCESS'
}
self.mock_client = create_autospec(jenkins.Jenkins)
self.mock_client.get_job_info.return_value = self.job
self.mock_client.get_build_info.return_value = self.build
self.mock_config = create_autospec(JenkinsConfig)
@patch("cabot.cabotapp.jenkins._get_jenkins_client")
def test_job_passing(self, mock_jenkins):
mock_jenkins.return_value = self.mock_client
status = cabot_jenkins.get_job_status(self.mock_config, 'foo')
expected = {
'active': True,
'succeeded': True,
'job_number': 12,
'blocked_build_time': None,
'consecutive_failures': 0,
'status_code': 200
}
self.assertEqual(status, expected)
@patch("cabot.cabotapp.jenkins._get_jenkins_client")
def test_job_failing(self, mock_jenkins):
mock_jenkins.return_value = self.mock_client
self.build[u'result'] = u'FAILURE'
self.job[u'lastSuccessfulBuild'] = {
u'number': 11,
u'result': u'SUCCESS'
}
jenkins_check = JenkinsStatusCheck(
name="foo",
jenkins_config=JenkinsConfig(
name="name",
jenkins_api="a",
jenkins_user="u",
jenkins_pass="p"
)
)
result = JenkinsStatusCheck._run(jenkins_check)
self.assertEqual(result.consecutive_failures, 1)
self.assertFalse(result.succeeded)
@freeze_time('2017-03-02 10:30')
@patch("cabot.cabotapp.jenkins._get_jenkins_client")
def test_job_queued_last_succeeded(self, mock_jenkins):
mock_jenkins.return_value = self.mock_client
self.job[u'lastBuild'] = {u'number': 13}
self.job[u'inQueue'] = True
self.job['queueItem'] = {
'inQueueSince': float(timezone.now().strftime('%s')) * 1000
}
with freeze_time(timezone.now() + timedelta(minutes=10)):
status = cabot_jenkins.get_job_status(self.mock_config, 'foo')
expected = {
'active': True,
'succeeded': True,
'job_number': 12,
'queued_job_number': 13,
'blocked_build_time': 600,
'consecutive_failures': 0,
'status_code': 200
}
self.assertEqual(status, expected)
@freeze_time('2017-03-02 10:30')
@patch("cabot.cabotapp.jenkins._get_jenkins_client")
def test_job_queued_last_failed(self, mock_jenkins):
mock_jenkins.return_value = self.mock_client
self.job[u'lastBuild'] = {u'number': 13}
self.job[u'inQueue'] = True
self.job['queueItem'] = {
'inQueueSince': float(timezone.now().strftime('%s')) * 1000
}
self.build[u'result'] = u'FAILURE'
with freeze_time(timezone.now() + timedelta(minutes=10)):
status = cabot_jenkins.get_job_status(self.mock_config, 'foo')
expected = {
'active': True,
'succeeded': False,
'job_number': 12,
'queued_job_number': 13,
'blocked_build_time': 600,
'consecutive_failures': 0,
'status_code': 200
}
self.assertEqual(status, expected)
@patch("cabot.cabotapp.jenkins._get_jenkins_client")
def test_job_unknown(self, mock_jenkins):
self.mock_client.get_job_info.side_effect = jenkins.NotFoundException()
mock_jenkins.return_value = self.mock_client
status = cabot_jenkins.get_job_status(self.mock_config, 'unknown-job')
expected = {
'active': None,
'succeeded': None,
'job_number': None,
'blocked_build_time': None,
'status_code': 404
}
self.assertEqual(status, expected)
@patch("cabot.cabotapp.jenkins._get_jenkins_client")
def test_job_no_build(self, mock_jenkins):
unbuilt_job = {
u'inQueue': False,
u'queueItem': None,
u'lastSuccessfulBuild': None,
u'lastCompletedBuild': None,
u'lastBuild': None,
u'color': u'notbuilt'
}
self.mock_client.get_job_info.return_value = unbuilt_job
mock_jenkins.return_value = self.mock_client
with self.assertRaises(Exception):
cabot_jenkins.get_job_status(self.mock_config, 'job-unbuilt')
@patch("cabot.cabotapp.jenkins._get_jenkins_client")
def test_job_no_good_build(self, mock_jenkins):
self.mock_client.get_job_info.return_value = {
u'inQueue': False,
u'queueItem': None,
u'lastSuccessfulBuild': None,
u'lastCompletedBuild': {
u'number': 1,
},
u'lastBuild': {
u'number': 1,
},
u'color': u'red'
}
self.mock_client.get_build_info.return_value = {
u'number': 1,
u'result': u'FAILURE'
}
mock_jenkins.return_value = self.mock_client
status = cabot_jenkins.get_job_status(self.mock_config, 'job-no-good-build')
expected = {
'active': True,
'succeeded': False,
'job_number': 1,
'blocked_build_time': None,
'consecutive_failures': 1,
'status_code': 200
}
self.assertEqual(status, expected)
| 32.251282 | 84 | 0.582605 | 680 | 6,289 | 5.139706 | 0.164706 | 0.064092 | 0.05608 | 0.050072 | 0.66867 | 0.66867 | 0.639199 | 0.60372 | 0.572532 | 0.507296 | 0 | 0.020843 | 0.305772 | 6,289 | 194 | 85 | 32.417526 | 0.779661 | 0.003339 | 0 | 0.527273 | 0 | 0 | 0.192467 | 0.04692 | 0 | 0 | 0 | 0 | 0.048485 | 1 | 0.048485 | false | 0.012121 | 0.054545 | 0 | 0.109091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8983fdeeef769de4d9c254fb60b07b53f7a140f9 | 2,101 | py | Python | Machine Learning A-Z/Part 2 - Regression/Section 4 - Simple Linear Regression/simple_linear_regression.py | AnubhavMadhav/Learn-Machine-Learning | 233a59c4c5ad9c0467ed732de881b61cbec72360 | [
"MIT"
] | 1 | 2020-11-29T08:33:57.000Z | 2020-11-29T08:33:57.000Z | Machine Learning A-Z/Part 2 - Regression/Section 4 - Simple Linear Regression/simple_linear_regression.py | AnubhavMadhav/Learn-Machine-Learning | 233a59c4c5ad9c0467ed732de881b61cbec72360 | [
"MIT"
] | null | null | null | Machine Learning A-Z/Part 2 - Regression/Section 4 - Simple Linear Regression/simple_linear_regression.py | AnubhavMadhav/Learn-Machine-Learning | 233a59c4c5ad9c0467ed732de881b61cbec72360 | [
"MIT"
] | null | null | null | # Simple Linear Regression Model
# Data Preprocessing Template
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Salary_Data.csv')
X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, 1].values
# Splitting the Dataset into Training Set and Test Set
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=1/3, random_state = 0)
# Feature Scaling
# We do not need feature scaling in the simple linear regression model because the library we use will take care of it for us.
"""from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
sc_y = StandardScaler()
y_train = sc_y.fit_transform(y_train)"""
# Fitting Simple Linear Regression to the Training Set
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, Y_train)
# Predicting the Test Set Result
Y_pred = regressor.predict(X_test)
# Visualising the Training Set Results
plt.figure(1) # So that we can see both the Training Set and Test Set graphs at the same time
plt.scatter(X_train, Y_train, color = 'red')
plt.plot(X_train, regressor.predict(X_train), color = 'green' )
plt.title('Experience vs Salary (Training Set)')
plt.xlabel('Years of Experience')
plt.ylabel('Salary')
# Visualising the Test Set Results
plt.figure(2) # So that we can see both the Training Set and Test Set graphs at the same time
plt.scatter(X_test, Y_test, color = 'red')
plt.plot(X_train, regressor.predict(X_train), color = 'green' ) # The regression line stays the same, so we can compare the model trained on the training set against the new test values
plt.title('Experience vs Salary (Test Set)')
plt.xlabel('Years of Experience')
plt.ylabel('Salary')
plt.show() # If we want to show both plots at the same time so that we can compare them, we have to call show() only once. | 36.859649 | 203 | 0.743931 | 337 | 2,101 | 4.522255 | 0.335312 | 0.035433 | 0.020997 | 0.028871 | 0.285433 | 0.213911 | 0.213911 | 0.213911 | 0.213911 | 0.156168 | 0 | 0.004023 | 0.171823 | 2,101 | 57 | 204 | 36.859649 | 0.871839 | 0.405045 | 0 | 0.24 | 0 | 0 | 0.143415 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8984723b36e886451beb99246f287ee4f2ba6b7f | 248 | py | Python | your_health/urls.py | JakubWolak/blood_pressure_monitor | 6ff4d6eeac29543c245b7f18568ead092063b778 | [
"CC0-1.0"
] | null | null | null | your_health/urls.py | JakubWolak/blood_pressure_monitor | 6ff4d6eeac29543c245b7f18568ead092063b778 | [
"CC0-1.0"
] | 8 | 2021-03-30T13:48:07.000Z | 2022-03-12T00:41:54.000Z | your_health/urls.py | JakubWolak/blood_pressure_monitor | 6ff4d6eeac29543c245b7f18568ead092063b778 | [
"CC0-1.0"
] | null | null | null | from django.urls import path
from . import views
app_name = "your_health"
urlpatterns = [
path("add_data", views.UserDataCreateView.as_view(), name="add_data"),
path("edit_data", views.UserDataUpdateView.as_view(), name="edit_data"),
]
| 20.666667 | 76 | 0.721774 | 33 | 248 | 5.181818 | 0.545455 | 0.081871 | 0.116959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133065 | 248 | 11 | 77 | 22.545455 | 0.795349 | 0 | 0 | 0 | 0 | 0 | 0.181452 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
898849fe81b6ae5ad41948570e4b00db9cbfbae5 | 10,265 | py | Python | src/bot.py | andlehma/Margery | 1d3cb44684f663e42753f60f0a82321c3601b046 | [
"MIT"
] | null | null | null | src/bot.py | andlehma/Margery | 1d3cb44684f663e42753f60f0a82321c3601b046 | [
"MIT"
] | null | null | null | src/bot.py | andlehma/Margery | 1d3cb44684f663e42753f60f0a82321c3601b046 | [
"MIT"
] | null | null | null | import math
import time
from rlbot.agents.base_agent import BaseAgent, SimpleControllerState
from rlbot.utils.structures.game_data_struct import GameTickPacket
from utils.vec3 import vec3
SIN45 = math.sin(0.785398)
def normalize_location(location: vec3):
"""
take any location and normalize it to be within the arena
min/max values can and should be tweaked
walls are at +- 4196 (x) and +- 5120 (y)
"""
arena_max_x = 4196 - 93 # wall x - ball radius
arena_min_x = -arena_max_x
arena_max_y = 5120 - 93 # wall y - ball radius
arena_min_y = -arena_max_y
output_location = vec3(location)
if location.x < arena_min_x:
output_location.x = arena_min_x
elif location.x > arena_max_x:
output_location.x = arena_max_x
if location.y < arena_min_y:
output_location.y = arena_min_y
elif location.y > arena_max_y:
output_location.y = arena_max_y
return output_location
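# e.g. normalize_location(vec3(5000, -6000, 0)) clamps to vec3(4103, -5027, 0)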
class Margery(BaseAgent):
def __init__(self, name, team, index):
super().__init__(name, team, index)
self.controller_state = SimpleControllerState()
self.ball_pos = vec3(0, 0, 0)
self.defensive_goal = vec3(0, -5120, 0)
self.offensive_goal = vec3(0, 5120, 0)
if team == 1:
self.defensive_goal = vec3(0, 5120, 0)
self.offensive_goal = vec3(0, -5120, 0)
self.action = self.kickoff
self.action_display = "none"
self.pos = None
self.yaw = None
self.pitch = None
self.next_dodge_time = 0
self.on_second_jump = False
self.field_info = None
# CONSTANTS
self.POWERSLIDE_ANGLE = 3 # radians
self.TURN_THRESHOLD = 5 # degrees
self.DODGE_THRESHOLD = 300 # unreal units
self.DODGE_TIME = 0.2 # seconds
self.BALL_FAR_AWAY_DISTANCE = 1500
# Helper Functions
def aim(self, target: vec3):
"""point left analog stick towards target"""
angle_between_bot_and_target = math.atan2(
target.y - self.pos.y, target.x - self.pos.x)
angle_front_to_target = angle_between_bot_and_target - self.yaw
# correct values
if angle_front_to_target < -math.pi:
angle_front_to_target += 2 * math.pi
if angle_front_to_target > math.pi:
angle_front_to_target -= 2 * math.pi
self.controller_state.handbrake = abs(
angle_front_to_target) > self.POWERSLIDE_ANGLE
# steer
if angle_front_to_target < math.radians(-self.TURN_THRESHOLD):
self.controller_state.steer = -1
elif angle_front_to_target > math.radians(self.TURN_THRESHOLD):
self.controller_state.steer = 1
else:
self.controller_state.steer = 0
self.controller_state.pitch = 0
def go_to_location(self, location: vec3, threshold: float, boost: bool):
"""drive car to within threshold of location"""
distance = self.pos.dist(location)
if distance > threshold:
# aim at location
self.aim(location)
# drive
self.controller_state.throttle = 1
self.controller_state.boost = boost
else:
self.controller_state.throttle = 0
self.controller_state.boost = False
def check_for_boost_detour(self, location: vec3):
"""
if any boost pad is within threshold of path to location,
        return that boost pad's location, otherwise return original location
"""
dist_thresh = 100
distance = self.pos.dist(location)
for boost_pad in self.field_info.boost_pads:
dist_to_boost_pad = self.pos.dist(boost_pad.location)
dist_from_boost_pad_to_location = vec3(
boost_pad.location).dist(location)
total_dist = dist_to_boost_pad + dist_from_boost_pad_to_location
dist_diff = total_dist - distance
if dist_diff < dist_thresh:
return boost_pad.location
return location
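    # Worked example (illustrative): direct distance to the target is 1000 uu and
    # routing via a pad totals 1080 uu; 1080 - 1000 = 80 < 100 (dist_thresh), so
    # the pad's location is returned as a worthwhile detour.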
# Actions
def kickoff(self):
"""
performed when the ball is at (0, 0)
TODO: implement faster kickoffs
"""
self.action_display = "kickoff"
dist_to_ball = self.pos.dist(self.ball_pos)
if dist_to_ball > 500:
if self.team == 0:
self.go_to_location(
self.check_for_boost_detour(vec3(0, -300, 0)), 0, True)
else:
self.go_to_location(
self.check_for_boost_detour(vec3(0, 300, 0)), 0, True)
else:
self.ballchase()
def dodge(self, direction: vec3):
"""dodge towards direction by jumping twice and aiming left stick"""
if time.time() > self.next_dodge_time:
# get pitch and yaw values from angle to direction
angle_between_bot_and_target = math.atan2(
direction.y - self.pos.y, direction.x - self.pos.x)
angle_front_to_target = angle_between_bot_and_target - self.yaw
self.controller_state.pitch = -math.cos(angle_front_to_target)
self.controller_state.yaw = math.sin(angle_front_to_target)
# correct pitch values
if self.controller_state.pitch < -SIN45:
self.controller_state.pitch = -1
elif self.controller_state.pitch > SIN45:
self.controller_state.pitch = 1
else:
self.controller_state.pitch = 0
self.controller_state.jump = True
if self.on_second_jump:
self.on_second_jump = False
else:
self.on_second_jump = True
self.next_dodge_time = time.time() + self.DODGE_TIME
def ballchase(self):
"""get goalside of ball, aim at ball, and then dodge into ball"""
# check if we are goalside
goalside = False
if self.team == 0:
if self.pos.y < self.ball_pos.y:
goalside = True
else:
if self.pos.y > self.ball_pos.y:
goalside = True
# choose next action based on how far away from the ball we are
dist_to_ball = self.pos.dist(self.ball_pos)
if dist_to_ball < self.DODGE_THRESHOLD and goalside:
# dodge into ball
self.action_display = "shooting"
self.dodge(self.ball_pos)
elif dist_to_ball <= self.DODGE_THRESHOLD * 2 and goalside:
# face ball before dodging
self.action_display = "setting up to shoot"
self.aim(self.ball_pos)
self.controller_state.throttle = 0.5
else:
# we are either too far away from the ball or not goalside
boost = False
if dist_to_ball > self.BALL_FAR_AWAY_DISTANCE:
boost = True
ball_angle_to_goal = math.atan2(
self.offensive_goal.y - self.ball_pos.y,
self.offensive_goal.x - self.ball_pos.x)
ball_distance_to_goal = self.ball_pos.dist(self.offensive_goal)
dist_plus = ball_distance_to_goal + (self.DODGE_THRESHOLD * 2)
x = self.offensive_goal.x - \
(math.cos(ball_angle_to_goal) * dist_plus)
y = self.offensive_goal.y - \
(math.sin(ball_angle_to_goal) * dist_plus)
goalside_position = vec3(x, y, 0)
location = self.check_for_boost_detour(goalside_position)
if location == goalside_position:
self.action_display = "ballchasing"
else:
self.action_display = "boost > ball"
location = normalize_location(location)
self.go_to_location(location, 0, boost)
def go_to_goal(self):
"""go to the goal and wait"""
location = self.check_for_boost_detour(self.defensive_goal)
threshold = 800
if location == self.defensive_goal:
self.action_display = "going to goal"
else:
self.action_display = "boost > goal"
threshold = 50
location = normalize_location(location)
self.go_to_location(location, threshold, False)
def get_output(self, packet: GameTickPacket) -> SimpleControllerState:
"""main gameplay loop"""
# update information about Margery
margery = packet.game_cars[self.index]
self.pos = vec3(margery.physics.location)
self.yaw = margery.physics.rotation.yaw
self.pitch = margery.physics.rotation.pitch
# update information about the ball
self.ball_pos = vec3(packet.game_ball.physics.location)
ball_is_in_offensive_half = True
if self.team == 0: # blue
if self.ball_pos.y > -10:
ball_is_in_offensive_half = True
else:
ball_is_in_offensive_half = False
else: # orange
if self.ball_pos.y < 10:
ball_is_in_offensive_half = True
else:
ball_is_in_offensive_half = False
# update information about the field
self.field_info = self.get_field_info()
# decision making
if self.ball_pos.y == 0 and self.ball_pos.x == 0:
self.action = self.kickoff
else:
# go for ball if ball is in offensive half
# otherwise go to goal
if ball_is_in_offensive_half:
self.action = self.ballchase
else:
# self.action = self.go_to_goal
self.action = self.ballchase
# reset dodge
self.controller_state.jump = False
# perform the selected action
self.action()
# draw debugging information
draw_debug(self.renderer, margery, self.action_display)
# output the controller state
return self.controller_state
def draw_debug(renderer, car, action_display):
"""draw debugging information on screen"""
renderer.begin_rendering()
# print the action that the bot is taking
renderer.draw_string_3d(car.physics.location, 2, 2,
action_display, renderer.white())
renderer.end_rendering()
| 35.642361 | 76 | 0.606624 | 1,298 | 10,265 | 4.55624 | 0.168721 | 0.0558 | 0.067467 | 0.03348 | 0.37352 | 0.255157 | 0.198174 | 0.187014 | 0.187014 | 0.167738 | 0 | 0.021195 | 0.315149 | 10,265 | 287 | 77 | 35.766551 | 0.820057 | 0.135314 | 0 | 0.231959 | 0 | 0 | 0.00986 | 0 | 0 | 0 | 0 | 0.003484 | 0 | 1 | 0.056701 | false | 0 | 0.025773 | 0 | 0.108247 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89886a336e20fe2ed1bc7535f8a97dba9bde0006 | 3,969 | py | Python | cross_val_splitter.py | SebastianQuispeNaola/PruebaDeConcepto | 52da536c834955750711ab1f132801f47cd3e7be | [
"MIT"
] | null | null | null | cross_val_splitter.py | SebastianQuispeNaola/PruebaDeConcepto | 52da536c834955750711ab1f132801f47cd3e7be | [
"MIT"
] | null | null | null | cross_val_splitter.py | SebastianQuispeNaola/PruebaDeConcepto | 52da536c834955750711ab1f132801f47cd3e7be | [
"MIT"
] | null | null | null | import os
import argparse
import numpy as np
import shutil
class CrossValSplitter:
def __init__(self, num_folds, data_dir, output_dir):
self.num_folds = num_folds
self.data_dir = data_dir
self.output_dir = output_dir
def split_dataset(self):
        # Create the directories
for split_ind in range(self.num_folds):
            # Create a directory for each split
split_path = os.path.join(self.output_dir, 'split' + str(split_ind)) #'./image_cross_val/splitX'
if not os.path.exists(split_path):
os.makedirs(split_path)
        # Generate the splits
for classe in os.listdir(self.data_dir):
if classe[0] == '.':
continue
            # Create a directory for each class within a split
for split_ind in range(self.num_folds):
mod_path = os.path.join(self.output_dir, 'split' + str(split_ind), classe)
if not os.path.exists(mod_path):
os.makedirs(mod_path)
uni_videos = []
uni_images = []
for in_file in os.listdir(os.path.join(self.data_dir, classe)):
if in_file[0] == '.':
continue
if len(in_file.split('.')) == 3:
                    # It is a video, e.g.: Cov-Atlas+(44).gif_frame0.jpg
uni_videos.append(in_file.split('.')[0])
else:
                    # It is an image, e.g.: Cov_whitelungs_thoraric_paperfig5.png
uni_images.append(in_file.split('.')[0])
            # Build a dictionary that will assign the images to each split
inner_dict = {}
            # Images and videos are considered separately
for k, uni in enumerate([uni_videos, uni_images]):
                # Create a sorted list without repeated images
unique_files = np.unique(uni)
                # s is the number of images in a split
s = len(unique_files) // self.num_folds
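                # e.g. (illustrative): 13 unique files with num_folds = 5 gives
                # s = 2, so files 0-9 go two per split and the 3 leftovers are
                # assigned at random below.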
for i in range(self.num_folds):
for f in unique_files[i * s:(i + 1) * s]:
inner_dict[f] = i
                # If images are left over, they are distributed randomly
                # (note: np.arange(5) below hard-codes five folds)
for f in unique_files[self.num_folds * s:]:
inner_dict[f] = np.random.choice(np.arange(5))
for in_file in os.listdir(os.path.join(self.data_dir, classe)):
fold_to_put = inner_dict[in_file.split('.')[0]]
split_path = os.path.join(
self.output_dir, 'split' + str(fold_to_put), classe
)
shutil.copy(os.path.join(self.data_dir, classe, in_file), split_path)
#self.check_crossval(self.output_dir)
def check_crossval(self, output_dir):
"""
        Test method to check a cross-validation split (prints number of unique files)
"""
check = self.output_dir
file_list = []
for folder in os.listdir(check):
if folder[0] == '.':
continue
for classe in os.listdir(os.path.join(check, folder)):
if classe[0] == '.' or classe[0] == 'u':
continue
uni = []
is_image = 0
for file in os.listdir(os.path.join(check, folder, classe)):
if file[0] == 'u':
continue
if len(file.split('.')) == 2:
is_image += 1
file_list.append(file)
uni.append(file.split('.')[0])
#print(folder, classe, len(np.unique(uni)), len(uni), is_image)
print(folder, classe, len(uni))
assert len(file_list) == len(np.unique(file_list))
        print('The dataset contains a total of', len(file_list), 'images') | 44.1 | 108 | 0.525573 | 486 | 3,969 | 4.121399 | 0.27572 | 0.029955 | 0.03994 | 0.041937 | 0.335996 | 0.189715 | 0.189715 | 0.174239 | 0.112332 | 0.112332 | 0 | 0.008029 | 0.372386 | 3,969 | 90 | 109 | 44.1 | 0.796066 | 0.176619 | 0 | 0.136364 | 0 | 0 | 0.019517 | 0 | 0 | 0 | 0 | 0 | 0.015152 | 1 | 0.045455 | false | 0 | 0.060606 | 0 | 0.121212 | 0.030303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8988f618f6884ba9859e27e0406209aa497f8589 | 7,697 | py | Python | ldt/utils/usaf/bcsd_preproc/lib_bcsd_metrics/bias_correction_nmme_modulefast.py | andrewsoong/LISF | 20e3b00a72b6b348c567d0703550f290881679b4 | [
"Apache-2.0"
] | 67 | 2018-11-13T21:40:54.000Z | 2022-02-23T08:11:56.000Z | ldt/utils/usaf/bcsd_preproc/lib_bcsd_metrics/bias_correction_nmme_modulefast.py | andrewsoong/LISF | 20e3b00a72b6b348c567d0703550f290881679b4 | [
"Apache-2.0"
] | 679 | 2018-11-13T20:10:29.000Z | 2022-03-30T19:55:25.000Z | ldt/utils/usaf/bcsd_preproc/lib_bcsd_metrics/bias_correction_nmme_modulefast.py | andrewsoong/LISF | 20e3b00a72b6b348c567d0703550f290881679b4 | [
"Apache-2.0"
] | 119 | 2018-11-08T15:53:35.000Z | 2022-03-28T10:16:01.000Z | #!/usr/bin/env python
"""
# Author: Shrad Shukla
# coding: utf-8
# Usage: This is a module for the BCSD code.
# This module bias-corrects forecasts following the
# probability-mapping approach described in Wood et al. 2002
# Date: August 06, 2015
"""
from __future__ import division
#import pandas as pd
import os
import sys
import calendar
#import os.path as op
from datetime import datetime
import numpy as np
from dateutil.relativedelta import relativedelta
#from math import *
#import time
import xarray as xr
import BCSD_function
from BCSD_stats_functions import write_4d_netcdf
from Shrad_modules import read_nc_files
def get_index(ref_array, my_value):
"""
Function for extracting the index of a Numpy array (ref_array)
which value is closest to a given number.
Input parameters:
- ref_array: reference Numpy array
- my_value: floating point number
Returned value:
- An integer corresponding to the index
"""
return np.abs(ref_array - my_value).argmin()
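# Worked example (illustrative, not part of the original module):
#   get_index(np.array([0.0, 0.5, 1.0]), 0.6) -> 1, since 0.5 is closest to 0.6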
def slice_latlon(lat, lon, lat_range: list, lon_range: list):
"""
Function for extracting a subset of Lat/Lon indices.
    Given lat and lon, arrays of latitudes and longitudes,
    we want to determine the arrays index_lat and
    index_lon of indices where the latitudes and longitudes
    fall in the provided ranges ([minLat,maxLat] and [minLon,maxLon])
lat[:]>=minLat and lat[:]<=maxLat
lon[:]>=minLon and lon[:]<=maxLon
"""
indexlat = np.nonzero((lat[:] >= lat_range[0]) & (lat[:] <= lat_range[-1]))[0]
indexlon = np.nonzero((lon[:] >= lon_range[0]) & (lon[:] <= lon_range[-1]))[0]
return indexlat, indexlon
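# Worked example (illustrative): with lat = lon = np.arange(-90, 91, 10),
#   slice_latlon(lat, lon, [-20, 20], [0, 30])[0] -> array([ 7,  8,  9, 10, 11])
# i.e. the indices where -20 <= lat <= 20.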
# Small number
EPS = 1.0e-5
## Usage: <Name of variable in observed climatology>
## <Name of variable in reforecast climatology
## (same as the name in the target forecast)> <forecast model number>
print("In Python Script")
CMDARGS = str(sys.argv)
OBS_VAR = str(sys.argv[1])
FCST_VAR = str(sys.argv[2])
BC_VAR = str(sys.argv[3])
## This is used to figure out if the variable is a precipitation
## variable or not
UNIT = str(sys.argv[4])
LAT1, LAT2, LON1, LON2 = int(sys.argv[5]), int(sys.argv[6]), int(sys.argv[7]), int(sys.argv[8])
INIT_FCST_MON = int(sys.argv[9])
# Forecast model and ensemble input arguments:
MODEL_NAME = str(sys.argv[10])
LEAD_FINAL = int(sys.argv[11])
ENS_NUMC = int(sys.argv[12])
ENS_NUMF = int(sys.argv[13])
print(LEAD_FINAL)
print(ENS_NUMC)
print(ENS_NUMF)
FCST_SYR = int(sys.argv[14])
TARGET_FCST_SYR = int(sys.argv[14])
TARGET_FCST_EYR = int(sys.argv[15])
CLIM_SYR = int(sys.argv[16])
CLIM_EYR = int(sys.argv[17])
# Directory and file addresses
FCST_CLIM_INDIR = str(sys.argv[18])
OBS_CLIM_INDIR = str(sys.argv[19])
FCST_INDIR = str(sys.argv[20])
# Observation climatology filename templates:
OBS_CLIM_FILE_TEMPLATE = '{}/{}_obs_clim.nc'
FCST_CLIM_FILE_TEMPLATE = '{}/{}/{}_fcst_clim.nc'
MONTH_NAME_TEMPLATE = '{}01'
# GEOS5 filename TEMPLATE:
FCST_INFILE_TEMPLATE = '{}/{:04d}/ens{:01d}/{}.nmme.monthly.{:04d}{:02d}.nc'
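# e.g. (illustrative expansion): FCST_INFILE_TEMPLATE.format('/in', 2015, 3, 'jan01', 2015, 1)
#      -> '/in/2015/ens3/jan01.nmme.monthly.201501.nc'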
# Input mask
MASK_FILE = str(sys.argv[21])
MASK = read_nc_files(MASK_FILE, 'mask')[0, ]
LATS = read_nc_files(MASK_FILE, 'lat')
LONS = read_nc_files(MASK_FILE, 'lon')
### Output directory
OUTFILE_TEMPLATE = '{}/{}.{}.{}_{:04d}_{:04d}.nc'
OUTDIR = str(sys.argv[22])
if not os.path.exists(OUTDIR):
os.makedirs(OUTDIR)
ENSS = int(sys.argv[23])
ENSF = int(sys.argv[24])
print(f"Ensemble number is {ENS_NUMF}")
NUM_YRS = (CLIM_EYR-CLIM_SYR)+1
TINY = ((1/(NUM_YRS))/ENS_NUMF)/2
# Adjust quantile, if it is out of bounds
# This value represents 1/NYRS/NENS/2, so about
# half the prob. interval beyond the lowest value
# (arbitrary choice) */
## This is probably used for real-time forecasts when a
## forecasted value happened to be an outlier of the
## reforecast climatology
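# Worked example (illustrative): with NUM_YRS = 30 and ENS_NUMF = 10,
# TINY = ((1/30)/10)/2 = 1/600 ~= 0.00167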
##### Starting bias-correction from here
# First read observed climatology for the given variable
OBS_CLIM_FILE = OBS_CLIM_FILE_TEMPLATE.format(OBS_CLIM_INDIR, OBS_VAR)
print(f"Reading observed climatology {OBS_CLIM_FILE}")
OBS_CLIM_ARRAY = xr.open_dataset(OBS_CLIM_FILE)
# Then for forecast files:
for MON in [INIT_FCST_MON]:
MONTH_NAME = MONTH_NAME_TEMPLATE.format((calendar.month_abbr[MON]).lower())
    ## This provides the abbreviated version of the name of a month:
    ## (e.g. for January (i.e. Month number = 1) it will return "Jan").
    ## The abbreviated name is used in the forecast file name
print(f"Forecast Initialization month is {MONTH_NAME}")
#First read forecast climatology for the given variable and forecast
    #initialization month
FCST_CLIM_INFILE = FCST_CLIM_FILE_TEMPLATE.format(FCST_CLIM_INDIR, \
MODEL_NAME, FCST_VAR)
print(f"Reading forecast climatology {FCST_CLIM_INFILE}")
FCST_CLIM_ARRAY = xr.open_dataset(FCST_CLIM_INFILE)
#First read raw forecasts
FCST_COARSE = np.empty(((TARGET_FCST_EYR-TARGET_FCST_SYR)+1, \
LEAD_FINAL, ENS_NUMF, len(LATS), len(LONS)))
for LEAD_NUM in range(0, LEAD_FINAL): ## Loop from lead =0 to Final Lead
for ens in range(ENS_NUMF):
ens1 = ens+ENSS
for INIT_FCST_YEAR in range(TARGET_FCST_SYR, TARGET_FCST_EYR+1):
## Reading forecast file
FCST_DATE = datetime(INIT_FCST_YEAR, INIT_FCST_MON, 1) + \
relativedelta(months=LEAD_NUM)
FCST_YEAR, FCST_MONTH = FCST_DATE.year, FCST_DATE.month
INFILE = FCST_INFILE_TEMPLATE.format(FCST_INDIR, \
INIT_FCST_YEAR, ens1, MONTH_NAME, FCST_YEAR, FCST_MONTH)
print(INFILE)
FCST_COARSE[INIT_FCST_YEAR-TARGET_FCST_SYR, LEAD_NUM, ens, ] = \
read_nc_files(INFILE, FCST_VAR)
LAT_RANGE = [LAT1, LAT2]
LON_RANGE = [LON1, LON2]
indexLat, indexLon = slice_latlon(LATS, LONS, LAT_RANGE, LON_RANGE)
ilat_min, ilat_max = indexLat[0], indexLat[-1]
ilon_min, ilon_max = indexLon[0], indexLon[-1]
nlats = len(LATS)
nlons = len(LONS)
#print("indexLat=",indexLat)
#xprint("indexLon=",indexLon)
print("LAT_RANGE=", LAT_RANGE, "LON_RANGE=", LON_RANGE)
print("latmin=", ilat_min, "latmax=", ilat_max)
print("lonmin=", ilon_min, "lonmax=", ilon_max)
#exit()
# Get the values (Numpy array) for the lat/lon ranges
np_OBS_CLIM_ARRAY = OBS_CLIM_ARRAY.clim.sel(longitude=slice(LON1, LON2), \
latitude=slice(LAT1, LAT2)).values
np_FCST_CLIM_ARRAY = FCST_CLIM_ARRAY.clim.sel(longitude=slice(LON1, LON2), \
latitude=slice(LAT1, LAT2)).values
print("Latitude: ", nlats, ilat_min, ilat_max)
print("Longitude: ", nlons, ilon_min, ilon_max)
print("np_OBS_CLIM_ARRAY:", np_OBS_CLIM_ARRAY.shape, \
type(np_OBS_CLIM_ARRAY))
print("np_FCST_CLIM_ARRAY:", np_FCST_CLIM_ARRAY.shape, \
type(np_FCST_CLIM_ARRAY))
CORRECT_FCST_COARSE = BCSD_function.latlon_calculations(ilat_min, \
ilat_max, ilon_min, ilon_max, nlats, nlons, np_OBS_CLIM_ARRAY, \
np_FCST_CLIM_ARRAY, LEAD_FINAL, TARGET_FCST_EYR, TARGET_FCST_SYR, \
FCST_SYR, ENS_NUMF, MON, MONTH_NAME, BC_VAR, TINY, FCST_COARSE)
CORRECT_FCST_COARSE = np.ma.masked_array(CORRECT_FCST_COARSE, \
mask=CORRECT_FCST_COARSE == -999)
OUTFILE = OUTFILE_TEMPLATE.format(OUTDIR, FCST_VAR, MODEL_NAME, \
MONTH_NAME, TARGET_FCST_SYR, TARGET_FCST_EYR)
print(f"Now writing {OUTFILE}")
SDATE = datetime(TARGET_FCST_SYR, MON, 1)
dates = [SDATE+relativedelta(years=n) for n in range(CORRECT_FCST_COARSE.shape[0])]
write_4d_netcdf(OUTFILE, CORRECT_FCST_COARSE, FCST_VAR, MODEL_NAME, \
'Bias corrected', UNIT, 5, LONS, LATS, ENS_NUMF, LEAD_FINAL, SDATE, dates)
| 36.306604 | 95 | 0.701832 | 1,171 | 7,697 | 4.385141 | 0.260461 | 0.035443 | 0.029211 | 0.013632 | 0.128335 | 0.065433 | 0.035833 | 0.035833 | 0.025316 | 0.025316 | 0 | 0.019002 | 0.17955 | 7,697 | 211 | 96 | 36.478673 | 0.794141 | 0.291152 | 0 | 0.017391 | 0 | 0 | 0.085904 | 0.018921 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017391 | false | 0 | 0.095652 | 0 | 0.130435 | 0.147826 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8989902473136427aab6e0c005da613605c34629 | 10,472 | py | Python | tests/test_siginfo.py | anergictcell/siginfo | d3c2690574c12ba62cc8cd8867c4ec79004dc4bf | [
"MIT"
] | null | null | null | tests/test_siginfo.py | anergictcell/siginfo | d3c2690574c12ba62cc8cd8867c4ec79004dc4bf | [
"MIT"
] | null | null | null | tests/test_siginfo.py | anergictcell/siginfo | d3c2690574c12ba62cc8cd8867c4ec79004dc4bf | [
"MIT"
] | null | null | null | import unittest
from siginfo import siginfoclass as si
import sys
OLD_OUT = sys.stdout
class MockOutput(object):
def __init__(self):
self.lines = []
def write(self, line):
self.lines.append(line)
def flush(self):
pass
class MockSignal(object):
def __init__(self, info=True, usr1=True, usr2=True):
if info:
self.SIGINFO = 1
if usr1:
self.SIGUSR1 = 2
if usr2:
self.SIGUSR2 = 3
self.signals = []
def signal(self, sigtype, func):
self.signals.append((sigtype, func))
class MockClass(object):
def __init__(self, attrs):
for key in attrs:
self.__setattr__(key, attrs[key])
self._keys = attrs.keys()
def __str__(self):
return 'MockClass: {}'.format(', '.join(
['{}={}'.format(key, self.__getattribute__(key)) for key in self._keys]
))
class MockFrame(object):
def __init__(self, local_vars={}, line_number=0, back=None):
self.f_locals = local_vars
self.f_code = MockClass(
{'co_name': 'my_test_function_line_{}'.format(line_number)}
)
self.f_lineno = line_number
self.f_back = back
class MockFunction(object):
def __init__(self):
self.called = 0
self.called_with = []
def __call__(self, *args, **kwargs):
self.called += 1
self.called_with.append((args, kwargs))
class SigInfoInitTests(unittest.TestCase):
def setUp(self):
si.sys.stdout = MockOutput()
si.signal = MockSignal()
def tearDown(self):
si.sys.stdout = OLD_OUT
def test_init(self):
res = si.SiginfoBasic()
assert isinstance(res, si.SiginfoBasic)
assert isinstance(res.pid, int)
assert isinstance(res.MAX_LEVELS, int)
assert isinstance(res.COLUMNS, int)
assert isinstance(res.OUTPUT, MockOutput)
def test_output(self):
si.sys.stdout.lines = []
si.SiginfoBasic(info=True, usr1=True, usr2=True)
assert len(si.sys.stdout.lines) == 6
class SigInfoInputTests(unittest.TestCase):
def setUp(self):
si.sys.stdout = MockOutput()
def tearDown(self):
si.sys.stdout = OLD_OUT
def test_inputs(self):
si.signal = MockSignal()
res = si.SiginfoBasic(info=True, usr1=True, usr2=True)
assert res.signals == ['INFO', 'USR1', 'USR2']
res = si.SiginfoBasic(info=True, usr1=True, usr2=False)
assert res.signals == ['INFO', 'USR1']
res = si.SiginfoBasic(info=True, usr1=False, usr2=False)
assert res.signals == ['INFO']
res = si.SiginfoBasic(info=True, usr1=False, usr2=True)
assert res.signals == ['INFO', 'USR2']
res = si.SiginfoBasic(info=False, usr1=True, usr2=True)
assert res.signals == ['USR1', 'USR2']
si.sys.stdout.lines = []
res = si.SiginfoBasic(info=False, usr1=False, usr2=False)
assert res.signals == []
assert si.sys.stdout.lines[0] == 'No signal specified\n'
def test_inexistent_inputs(self):
si.sys.stdout.lines = []
si.signal = MockSignal(info=False)
res = si.SiginfoBasic(info=True, usr1=True, usr2=True)
assert res.signals == ['USR1', 'USR2']
        assert 'No SIGINFO availale\n' == si.sys.stdout.lines[0]  # [sic] spelling matches the module's output string
def test_all_missing_inputs(self):
si.sys.stdout.lines = []
si.signal = MockSignal(info=False, usr1=False, usr2=False)
res = si.SiginfoBasic(info=True, usr1=True, usr2=True)
assert res.signals == []
assert 'No SIGINFO availale\n' == si.sys.stdout.lines[0]
assert 'No SIGUSR1 availale\n' == si.sys.stdout.lines[1]
assert 'No SIGUSR2 availale\n' == si.sys.stdout.lines[2]
class SigInfoSigFormattingTests(unittest.TestCase):
def test_column_specification(self):
si.subprocess.check_output = lambda x: '5 140'
res = si.SiginfoBasic(
info=False,
usr1=False,
usr2=False,
output=MockOutput())
assert res.COLUMNS == 120
        # Defaulting to minimum of 80 columns
si.subprocess.check_output = lambda x: '5 79'
res = si.SiginfoBasic(
info=False,
usr1=False,
usr2=False,
output=MockOutput())
assert res.COLUMNS == 80
        # Defaulting to minimum of 80 columns
si.subprocess.check_output = lambda x: '5 81'
res = si.SiginfoBasic(
info=False,
usr1=False,
usr2=False,
output=MockOutput())
assert res.COLUMNS == 80
        # Here the derived width (81) is above the 80-column minimum
si.subprocess.check_output = lambda x: '5 101'
res = si.SiginfoBasic(
info=False,
usr1=False,
usr2=False,
output=MockOutput())
assert res.COLUMNS == 81
# this will cause the check_output to fail
si.subprocess.check_output = lambda x: 1
res = si.SiginfoBasic(
info=False,
usr1=False,
usr2=False,
output=MockOutput())
assert res.COLUMNS == 80
@unittest.skip('Not yet testing script generation')
class SigInfoSigScriptTests(unittest.TestCase):
def test_create_info_script(self):
pass
def test_make_scripts_excecutable(self):
pass
def test_script_pids(self):
pass
class SiginfoFramePrinting(unittest.TestCase):
def test_print_without_parent(self):
si.subprocess.check_output = lambda x: '5 80'
mock_out = MockOutput()
mock_frame = MockFrame()
res = si.SiginfoBasic(
info=False,
usr1=False,
usr2=False,
output=mock_out)
res._print_frame(mock_frame)
assert len(mock_out.lines) == 16, mock_out.lines
assert mock_out.lines[1] == 'METHOD\t\tmy_test_function_line_0\n'
assert mock_out.lines[2] == 'LINE NUMBER:\t0\n'
assert mock_out.lines[3] == '-'*80
assert mock_out.lines[5] == 'LOCALS\n'
assert mock_out.lines[10] == 'SCOPE\t'
assert mock_out.lines[11] == 'MockClass: co_name=my_test_function_line_0'
assert mock_out.lines[13] == 'CALLER\t'
assert mock_out.lines[14] == 'NONE'
def test_print_with_parent(self):
si.subprocess.check_output = lambda x: '5 80'
mock_out = MockOutput()
mock_frame = MockFrame(back=MockFrame(line_number=2))
res = si.SiginfoBasic(
info=False,
usr1=False,
usr2=False,
output=mock_out)
res._print_frame(mock_frame)
assert len(mock_out.lines) == 16, mock_out.lines
assert mock_out.lines[1] == 'METHOD\t\tmy_test_function_line_0\n'
assert mock_out.lines[2] == 'LINE NUMBER:\t0\n'
assert mock_out.lines[3] == '-'*80
assert mock_out.lines[5] == 'LOCALS\n'
assert mock_out.lines[10] == 'SCOPE\t'
assert mock_out.lines[11] == 'MockClass: co_name=my_test_function_line_0'
assert mock_out.lines[13] == 'CALLER\t'
assert mock_out.lines[14] == 'MockClass: co_name=my_test_function_line_2'
class SiginfoCalling(unittest.TestCase):
def test_signal_calling(self):
si.subprocess.check_output = lambda x: '5 80'
mock_out = MockOutput()
mock_frame = MockFrame()
res = si.SiginfoBasic(
info=False,
usr1=False,
usr2=False,
output=mock_out)
res._print_frame = MockFunction()
res(1, mock_frame)
assert len(mock_out.lines) == 10
assert res._print_frame.called == 1
assert res._print_frame.called_with[0][0][0] == mock_frame
def test_signal_calling_multiple_level(self):
si.subprocess.check_output = lambda x: '5 80'
mock_out = MockOutput()
mock_frame_back = MockFrame(line_number=2)
mock_frame = MockFrame(back=mock_frame_back)
res = si.SiginfoBasic(
info=False,
usr1=False,
usr2=False,
output=mock_out)
res._print_frame = MockFunction()
res(1, mock_frame)
assert len(mock_out.lines) == 16
assert res._print_frame.called == 2
assert res._print_frame.called_with[0][0][0] == mock_frame
assert res._print_frame.called_with[1][0][0] == mock_frame_back
def test_signal_calling_limit_levels(self):
"""
2 levels of stack frames are present
but MAX_LEVELS is set to 1
"""
si.subprocess.check_output = lambda x: '5 80'
mock_out = MockOutput()
mock_frame_back = MockFrame(line_number=2)
mock_frame = MockFrame(back=mock_frame_back)
res = si.SiginfoBasic(
info=False,
usr1=False,
usr2=False,
output=mock_out)
res.MAX_LEVELS = 1
res._print_frame = MockFunction()
res(1, mock_frame)
assert len(mock_out.lines) == 10
assert res._print_frame.called == 1
assert res._print_frame.called_with[0][0][0] == mock_frame
class SiginfoSingleCalling(unittest.TestCase):
def test_setting_variables(self):
mock_out = MockOutput()
res = si.SigInfoSingle(
info=False,
usr1=False,
usr2=False,
output=mock_out)
res.set_var('foo')
assert res._varname == 'foo'
assert res._default is None
def test_getting_existing_variables(self):
mock_out = MockOutput()
mock_frame = MockFrame({'foo': '12', 'bar': 'abc'})
res = si.SigInfoSingle(
info=False,
usr1=False,
usr2=False,
output=mock_out)
res.set_var('foo')
mock_out.lines = []
res(1, mock_frame)
assert len(mock_out.lines) == 1
assert mock_out.lines[0] == '12\n'
def test_getting_nonexisting_variables(self):
mock_out = MockOutput()
mock_frame = MockFrame({'foo': '12', 'bar': 'abc'})
res = si.SigInfoSingle(
info=False,
usr1=False,
usr2=False,
output=mock_out)
res.set_var('xyz', 'foobar')
mock_out.lines = []
res(1, mock_frame)
assert len(mock_out.lines) == 1
assert mock_out.lines[0] == 'foobar\n'
if __name__ == '__main__':
unittest.main()
| 30.005731 | 83 | 0.592628 | 1,292 | 10,472 | 4.608359 | 0.139319 | 0.052906 | 0.058448 | 0.063487 | 0.698522 | 0.672657 | 0.619248 | 0.601948 | 0.576755 | 0.537454 | 0 | 0.027231 | 0.291635 | 10,472 | 348 | 84 | 30.091954 | 0.775411 | 0.02034 | 0 | 0.585821 | 0 | 0 | 0.060348 | 0.01829 | 0 | 0 | 0 | 0 | 0.216418 | 1 | 0.115672 | false | 0.014925 | 0.011194 | 0.003731 | 0.175373 | 0.052239 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89899204c69d8a7489b0f58b06bb9c244ae7764a | 3,775 | py | Python | Code/classification_acc.py | prasys/textanalyzer | fd14454d073c8571ddaa40f6ac668842e8aef726 | [
"MIT"
] | null | null | null | Code/classification_acc.py | prasys/textanalyzer | fd14454d073c8571ddaa40f6ac668842e8aef726 | [
"MIT"
] | null | null | null | Code/classification_acc.py | prasys/textanalyzer | fd14454d073c8571ddaa40f6ac668842e8aef726 | [
"MIT"
] | null | null | null | import sys
from ast import literal_eval
import nltk
# The function below requires a run flag (runf, 1-4 for the four folds); tf is a flag for the feature combination (1-17)
def getr(runf,tf):
f1=open('classification_tweets/run'+str(runf)+'/test_sents.txt','r')
f2=open('classification_tweets/run'+str(runf)+'/test_targs.txt','r')
for l1 in f1:
test_sents=literal_eval(l1)
for l2 in f2:
test_targs=literal_eval(l2)
#print(test_sents)
#print(test_targs)
f3=open('classification_tweets/run'+str(runf)+'/use_t'+str(tf)+'.txt','r')
pred=[] #list of predicted label
words=[] #list of word corresponding to above label
for line in f3:
#line.strip('\n')
l1=line.split('\t')
pred.append(l1[0])
s=l1[1]
l=s.split(' ')
words.append(l[-1][1:-1])
pred=list(map(float,pred))
#print(pred)
#print(words)
	x,y=0,0 #markers for beginning and end of that sentence, to break the collection of words sentence-wise
	part,full=0,0 #count of partial and exact matches
	partio=0.0 #% of words in sentence also in predicted target, for partial match
	hm=0.0 #running sum for the dice score
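	# Dice per instance = 2*|pred & actual| / (|pred| + |actual|); e.g. (illustrative)
	# pred={'he','she'}, actual={'she','it'} -> 2*1 / (2+2) = 0.5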
for i in range(len(test_sents)):
sen=test_sents[i].replace('#','')
w=nltk.word_tokenize(sen)
x=y
y=x+len(w)
p=pred[x:y] #list of predictions for the particular text instance
p1=[]
tar=test_targs[i].split(',') #actual targets
ww=words[x:y]
#print(ww)
p1=[k for k in range(len(p)) if p[k]>0]
pred_tar=[ww[z] for z in p1] #words in predicted targets
ws=len(w)
wt=len(set(pred_tar))
partio+=(float(wt)/float(ws))
#print(pred_tar)
ac_tar=[] #words in actual targets
for s in tar:
ac_tar.extend(nltk.word_tokenize(s))
#print(ac_tar)
pt=pred_tar
at=ac_tar
for xx in pt:
if xx=='.':
pt.remove(xx)
sp=set(pt)
sa=set(at)
#outside case handled first only for dice score, then for partial and exact match
if 'OUTSIDE/LISTENER' in sa:
if (len(sp)==0) or ('OUTSIDE/LISTENER' in sp):
hm+=1.0
else:
hm+=(float((2*len(sp.intersection(sa))))/float((len(sp)+len(sa))))
print(hm)
if 'OUTSIDE/LISTENER' in tar:
if len(pred_tar)==0:
part+=1
full+=1
else:
if len(ac_tar)>1:
for h in pred_tar:
if (h in ac_tar) and (h!='a') and (h!='.'):
part+=1
break
else:
if set(ac_tar)==set(pred_tar):
full+=1
part+=1
else:
#print(1)
flag=0
for v in pred_tar:
#print(v)
if (v in ac_tar) and (v!='a') and (v!='.'):
flag=1
break
if(flag):
part+=1
n=len(test_sents)
partacc=float(part)/float(n) #partial match
	percs=float(partio)/float(n) #% of partial score, not required in final draft
fullacc=float(full)/float(n) #exact match
hmean=float(hm)/float(n) #dice score
f4=open('classification_tweets/run'+str(runf)+'/harmonic_res_t'+str(tf)+'.txt','w') #only res for partial and exact match
# f4.write(str(part))
# f4.write('\n')
# f4.write(str(partacc))
# f4.write('\n')
# f4.write(str(full))
# f4.write('\n')
# f4.write(str(fullacc))
# f4.write('\n')
# f4.write(str(percs))
f4.write(str(hmean))
#getr(1,1)
#getr(int(sys.argv[1]),int(sys.argv[2])) # note: argv[0] is the script name, so indices 1 and 2 hold runf and tf
| 29.960317 | 125 | 0.527682 | 554 | 3,775 | 3.527076 | 0.263538 | 0.035824 | 0.030706 | 0.055271 | 0.134084 | 0.110542 | 0.038895 | 0 | 0 | 0 | 0 | 0.025671 | 0.329272 | 3,775 | 125 | 126 | 30.2 | 0.746051 | 0.273642 | 0 | 0.144578 | 0 | 0 | 0.081731 | 0.036982 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012048 | false | 0 | 0.036145 | 0 | 0.048193 | 0.012048 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8989e651dce56fb52bcfb6da9e177d2736fe5dbb | 1,108 | py | Python | binary_counter_exercise/binary_counter.py | madness007/algorithms-python-intro-exercise | ca0181d160b21422d177569ea00f7b6c482def23 | [
"Apache-2.0"
] | null | null | null | binary_counter_exercise/binary_counter.py | madness007/algorithms-python-intro-exercise | ca0181d160b21422d177569ea00f7b6c482def23 | [
"Apache-2.0"
] | null | null | null | binary_counter_exercise/binary_counter.py | madness007/algorithms-python-intro-exercise | ca0181d160b21422d177569ea00f7b6c482def23 | [
"Apache-2.0"
] | 1 | 2021-10-14T07:32:46.000Z | 2021-10-14T07:32:46.000Z | class BinaryCounter:
def __init__(self, led4, led3, led2, led1):
self.__led1 = led1
self.__led2 = led2
self.__led3 = led3
self.__led4 = led4
def asBinary(self):
return str(self.__led4) + " " + str(self.__led3) + " " + str(self.__led2) + " " + str(self.__led1)
def asHex(self):
result = ""
decimal = self.asDecimal()
if decimal < 10:
result = str(self.asDecimal())
elif decimal == 10:
result = 'A'
elif decimal == 11:
result = 'B'
elif decimal == 12:
result = 'C'
elif decimal == 13:
result = 'D'
elif decimal == 14:
result = 'E'
elif decimal == 15:
result = 'F'
return result
def asDecimal(self):
decimal = 0
if self.__led4 == 1:
decimal += 8
if self.__led3 == 1:
decimal += 4
if self.__led2 == 1:
decimal += 2
if self.__led1 == 1:
decimal += 1
return decimal
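# Illustrative usage (editor's sketch, not part of the original file):
#   counter = BinaryCounter(1, 0, 1, 0)   # arguments are led4..led1
#   counter.asBinary()   -> '1 0 1 0'
#   counter.asDecimal()  -> 10  (8 from led4 + 2 from led2)
#   counter.asHex()      -> 'A'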
| 22.16 | 106 | 0.453069 | 114 | 1,108 | 4.157895 | 0.307018 | 0.139241 | 0.063291 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068581 | 0.434116 | 1,108 | 49 | 107 | 22.612245 | 0.6874 | 0 | 0 | 0 | 0 | 0 | 0.00813 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108108 | false | 0 | 0 | 0.027027 | 0.216216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
898a588ad7df97ad04629bf1e3d9464b0c069554 | 4,678 | py | Python | recorded_failures/aoc2020/day_11_seating_system/invalid_state_change_caught_by_contracts.py | mristin/python-by-contract-corpus | c96ed00389c3811d7d63560ac665d410a7ee8493 | [
"MIT"
] | 8 | 2021-05-07T17:37:37.000Z | 2022-02-26T15:08:42.000Z | recorded_failures/aoc2020/day_11_seating_system/invalid_state_change_caught_by_contracts.py | mristin/python-by-contract-corpus | c96ed00389c3811d7d63560ac665d410a7ee8493 | [
"MIT"
] | 22 | 2021-04-28T21:55:48.000Z | 2022-03-04T07:41:37.000Z | recorded_failures/aoc2020/day_11_seating_system/invalid_state_change_caught_by_contracts.py | mristin/aocdbc | c96ed00389c3811d7d63560ac665d410a7ee8493 | [
"MIT"
] | 3 | 2021-03-26T22:29:12.000Z | 2021-04-11T20:45:45.000Z | import dataclasses
import enum
import re
from typing import Tuple, Mapping, List, Optional, Set, Iterable
from icontract import require, ensure
# crosshair: on
@require(lambda i, height: 0 <= i <= height)
@require(lambda j, width: 0 <= j <= width)
@ensure(
lambda height, width, result: all(
0 <= i <= height and 0 <= j <= width for i, j in result
)
)
@ensure(lambda result: len(result) <= 8)
@ensure(lambda i, j, result: (i, j) not in result)
def list_neighbourhood(
i: int, j: int, height: int, width: int
) -> List[Tuple[int, int]]:
# (mristin, 2021-04-03): This would be a nice use case for ensure_each.
start_i = max(0, i - 1)
end_i = min(height, i + 2)
start_j = max(0, j - 1)
end_j = min(width, j + 2)
result = [] # type: List[Tuple[int, int]]
for neighbour_i in range(start_i, end_i):
for neighbour_j in range(start_j, end_j):
if neighbour_i == i and neighbour_j == j:
continue
result.append((neighbour_i, neighbour_j))
return result
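# Worked example (illustrative): a corner cell of a 3x3 grid has 3 neighbours:
#   list_neighbourhood(0, 0, height=3, width=3) -> [(0, 1), (1, 0), (1, 1)]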
@require(
lambda layout: not (len(layout) > 0 and len(layout[0]) > 0)
or all(len(row) == len(layout[0]) for row in layout)
)
@require(
lambda layout: all(re.match("^[L#.]\Z", cell) for row in layout for cell in row)
)
@ensure(lambda result: all(re.match("^[L#.]+\Z", row) for row in result[0]))
@ensure(
lambda layout, result: len(layout) == len(result[0])
and all(len(result_row) == len(row) for row, result_row in zip(layout, result[0]))
)
# ERROR: there was an invalid change since the result switched from floor ('.') to '#'.
@ensure(
lambda layout, result: all(
(cell == "." and result_cell == ".")
or (cell != "." and result_cell in ["L", "#"])
for row, result_row in zip(layout, result[0])
for cell, result_cell in zip(row, result_row)
),
"Valid change",
)
def apply(layout: List[List[str]]) -> Tuple[List[List[str]], int]:
"""Return (new layout, number of changes)."""
height = len(layout)
width = len(layout[0])
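    # Editor's annotation: the next line repeats ONE shared row object `height`
    # times, so `result[i][j] = ...` writes through every row at once; this row
    # aliasing is most likely what trips the "Valid change" contract above and
    # is preserved here as the recorded failure.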
result = [[""] * width] * height
change_count = 0
for i in range(height):
for j in range(width):
state = layout[i][j]
if state == ".":
new_state = "."
else:
occupied = 0
neighbourhood = list_neighbourhood(i=i, j=j, height=height, width=width)
for neighbour_i, neighbour_j in neighbourhood:
if layout[neighbour_i][neighbour_j] == "#":
occupied += 1
if state == "L" and occupied == 0:
new_state = "#"
change_count += 1
elif state == "#" and occupied >= 4:
new_state = "L"
change_count += 1
else:
new_state = state
result[i][j] = new_state
return result, change_count
@require(
lambda layout: not (len(layout) > 0 and len(layout[0]) > 0)
or all(len(row) == len(layout[0]) for row in layout)
)
@require(
lambda layout: all(re.match("^[L#.]\Z", cell) for row in layout for cell in row)
)
@ensure(lambda result: all(re.match("^[L#.]+\Z", row) for row in result))
@ensure(
lambda layout, result: len(layout) == len(result)
and all(len(result_row) == len(row) for row, result_row in zip(layout, result))
)
@ensure(
lambda layout, result: all(
cell == result_cell
for row, result_row in zip(layout, result)
for cell, result_cell in zip(row, result_row)
if cell == "."
),
"Floor remains floor",
)
def apply_until_stable(layout: List[List[str]]) -> List[List[str]]:
change_count = None # type: Optional[int]
result = [row[:] for row in layout]
while change_count is None or change_count > 0:
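        # Editor's annotation: apply() is invoked on the original `layout` each
        # iteration rather than on `result`, so later passes never see the
        # updated board; preserved as part of this recorded failing version.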
result, change_count = apply(layout=layout)
return result
@require(lambda lines: all(re.match(r"^[.L#]+\Z", line) for line in lines))
@require(
lambda lines: not (len(lines) > 0)
or all(len(line) == len(lines[0]) for line in lines),
"Lines are a table",
)
@ensure(lambda lines, result: len(lines) == len(result))
@ensure(
lambda lines, result: not len(lines) == 0
or all(len(line) == len(row) for line, row in zip(lines, result))
)
def parse_layout(lines: List[str]) -> List[List[str]]:
result = [] # type: List[List[str]]
for line in lines:
row = [] # type: List[str]
for symbol in line:
row.append(symbol)
result.append(row)
return result
def repr_layout(layout: List[List[str]]) -> str:
return "\n".join("".join(row) for row in layout)
| 30.180645 | 88 | 0.579521 | 666 | 4,678 | 3.993994 | 0.160661 | 0.024436 | 0.02406 | 0.031579 | 0.33609 | 0.309774 | 0.286466 | 0.286466 | 0.242857 | 0.184211 | 0 | 0.012673 | 0.27469 | 4,678 | 154 | 89 | 30.376623 | 0.771294 | 0.063061 | 0 | 0.260163 | 0 | 0 | 0.024256 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04065 | false | 0 | 0.04065 | 0.00813 | 0.121951 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
898c2fa200064dde9d5f9f7743769291f662c8c6 | 30,086 | py | Python | plugin/CustomerSupportArchive/Lane_Diagnostics/tools/debug_reader.py | iontorrent/TS | 7591590843c967435ee093a3ffe9a2c6dea45ed8 | [
"Apache-2.0"
] | 125 | 2015-01-22T05:43:23.000Z | 2022-03-22T17:15:59.000Z | plugin/CustomerSupportArchive/NucStepSpatialV2/tools/debug_reader.py | iontorrent/TS | 7591590843c967435ee093a3ffe9a2c6dea45ed8 | [
"Apache-2.0"
] | 59 | 2015-02-10T09:13:06.000Z | 2021-11-11T02:32:38.000Z | plugin/CustomerSupportArchive/autoCal/tools/debug_reader.py | iontorrent/TS | 7591590843c967435ee093a3ffe9a2c6dea45ed8 | [
"Apache-2.0"
] | 98 | 2015-01-17T01:25:10.000Z | 2022-03-18T17:29:42.000Z | import datetime, re, subprocess
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
TODAY = datetime.datetime.today()
WF_REGEX = re.compile( r""".*working directory: (?P<wd>[\w/.]+), workflowVersion: (?P<version>[\w/.=\s\-]+)""" )
GIT_RE = re.compile( r"""git branch\s+=\s+(?P<branch>[\w\-./_]+)\s+git commit =\s+(?P<commit>[\w]+)""" )
DEBUG_REGEX = re.compile( r"""(?P<file>[\w/.]+):(?P<timestamp>[\w:\s]{15})\s(?P<inst>[\w\-_]+)\s(?P<source>[\w.]+):\s(?P<message>[\w\W\s]+)""" )
"""
Run timing is as follows (debug file used unless otherwise noted):
Start of run: explog_final.txt: Start Time
<dead time 1>
Review Run Plan: planStatus Review
<dead time 2>
Library start: planStatus Library Preparation Started
Library end: planStatus Library Preparation Completed
<dead time 3>
Templ. start: planStatus Templating Started
Templ. end: planStatus Templating Completed
<dead time 4>
Seq. start: planStatus Sequencing Started
Seq. end: planStatus Sequencing Completed -- this is identical to explog_final.txt: End Time
After this level of interest, we can dive into how long submodules take.
Note that as the grepping gets more serious, we can save time by only grepping once and
using multiple -e arguments.
"""
class DebugLog( object ):
""" Class for interaction with /var/log/debug. """
def __init__( self, debug_path, start_timestamp=None, end_timestamp=None ):
self.path = debug_path
# Allows setting of the start point right up front to ignore previous runs' information.
self.set_start_timestamp( start_timestamp )
# This manages if runs are sequencing only and missing much of the normal architecture.
self.set_end_timestamp( end_timestamp )
def search( self, grep_phrase, case_sensitive=False , after=None, before=None, context=None):
"""
Searches the debug file for the input phrase by using grep.
Returns lines as they are for further parsing.
"""
cmd = 'grep '
if not case_sensitive:
cmd += '-i '
if context:
if isinstance( context, int ):
cmd += '--context {} '.format( context )
else:
print( 'Error, context input must be an integer.' )
else:
if before:
if isinstance( before, int ):
cmd += '-B {} '.format( before )
else:
print( 'Error, before input must be an integer.' )
if after:
if isinstance( after, int ):
cmd += '-A {} '.format( after )
else:
print( 'Error, after input must be an integer.' )
cmd += '"{}" {}'.format( grep_phrase, self.path )
print( cmd )
p = subprocess.Popen( cmd, stdout=subprocess.PIPE, shell=True, universal_newlines=True )
ans, err = p.communicate()
print( ans )
try:
lines = ans.splitlines()
print(lines)
if self.start_timestamp:
return self.filter_lines( lines )
else:
return lines
except AttributeError:
print( 'wtf' ) # Leaving for posterity :)
return []
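    # Illustrative: search('planStatus', after=2) builds and runs
    #   grep -i -A 2 "planStatus" /var/log/debug
    # (assuming the instance was constructed with debug_path='/var/log/debug').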
def search_many( self, *strings, **kwargs ):
""" Searches the file for many grep strings at once. """
case_sensitive = kwargs.get( 'case_sensitive', False )
context = kwargs.get( 'context', False )
before = kwargs.get( 'before', False )
after = kwargs.get( 'after', False )
cmd = 'grep '
if not case_sensitive:
cmd += '-i '
if context:
if isinstance( context, int ):
cmd += '--context {} '.format( context )
else:
print( 'Error, context input must be an integer.' )
else:
if before:
if isinstance( before, int ):
cmd += '-B {} '.format( before )
else:
print( 'Error, before input must be an integer.' )
if after:
if isinstance( after, int ):
cmd += '-A {} '.format( after )
else:
print( 'Error, after input must be an integer.' )
for string in strings:
cmd += '-e "{}" '.format( string )
cmd += self.path
p = subprocess.Popen( cmd, stdout=subprocess.PIPE, shell=True, universal_newlines=True )
ans, err = p.communicate()
try:
lines = ans.splitlines()
if self.start_timestamp:
return self.filter_lines( lines )
else:
return lines
except AttributeError:
return []
def search_blocks( self, block_start, block_stop, *strings, **kwargs ):
''' Finds a section that might be repeated and return selected lines
blocks are lists of lines bracketed by endpoints block_start and block_stop
Inputs --> regex for block_start, block_stop, and *strings
NOTE: block_start and block_stop need to be rigorous regex strings with wildcards if necessary
strings does not have to be populated but will check for lines within the endpoints
--> strings can be simple phrases used by grep
endpoints is a bool to include or exclude the start/stop lines in output
default is False == exclude
NOTE: endpoints has to live in **kwargs due to Py2 behavior
kwargs are for the debug.search_many function
Output --> list of blocks, where each block is a list of lines
'''
#NOTE: need to remove endpoints from kwargs if it exists
# Required by Py2
try: endpoints = kwargs.pop( 'endpoints' )
except KeyError: endpoints = False
lines = self.search_many( block_start, block_stop, *strings, **kwargs )
# Get blocks of lines for further parsing
blocks = []
add_line = False
#print( 'initial -- add_line', add_line )
regex_start = re.compile( block_start )
regex_stop = re.compile( block_stop )
for l in lines:
if not add_line:
match = regex_start.match( l )
if match:
add_line = True
temp = []
if endpoints: temp.append( l )
#print( 'start_phrase {} found, add_line'.format(start_phrase), add_line )
else:
continue
else:
match = regex_stop.match( l )
if match:
if endpoints: temp.append( l )
blocks.append(temp)
add_line = False
#print( 'stop_phrase {} found, add_line'.format(stop_phrase), add_line )
#print( 'updated blocks', blocks )
else:
#print( 'adding line to temp', l )
temp.append( l )
return blocks
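    # Illustrative usage (hypothetical patterns): gather each Started/Completed
    # block from the log, keeping the bracketing lines themselves:
    #   blocks = log.search_blocks( r'.*Started.*', r'.*Completed.*',
    #                               endpoints=True )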
def set_start_timestamp( self, start_timestamp ):
"""
Sets the initial timestamp for the debug file that will prevent messages prior to that
moment from being returned and worked with. Useful for avoiding constant reuse of timestamp
filtering.
"""
if isinstance( start_timestamp, datetime.datetime ):
self.start_timestamp = start_timestamp
else:
self.start_timestamp = None
print( "Starting point NOT set. Please input a datetime.datetime object." )
def set_end_timestamp( self, end_timestamp ):
"""
Sets the end timestamp for the debug file that will prevent messages after that
moment from being returned and worked with. Useful for avoiding constant reuse of timestamp
filtering.
"""
if isinstance( end_timestamp, datetime.datetime ):
self.end_timestamp = end_timestamp
else:
self.end_timestamp = None
print( "Ending point NOT set. Please input a datetime.datetime object." )
def read_line( self, line ):
""" Matches the line pattern and extracts the relevant components. Replaces date with datetime obj."""
m = DEBUG_REGEX.match( line )
if m:
data = m.groupdict()
data['timestamp'] = self.get_timestamp( data['timestamp'],
start_timestamp=self.start_timestamp,
end_timestamp=self.end_timestamp,
)
return data
else:
return {}
def filter_lines( self, lines , start_timestamp=None ):
"""
Filter out lines before the given start timestamp, which are probably from a previous run
because the debug log is written to serially and not necessarily restarted each time a new
experiment is started.
Also works without the input timestamp if we set it with set_start_timestamp.
"""
        # If no start_timestamp is passed in, try using the attribute self.start_timestamp
if start_timestamp is None: start_timestamp = self.start_timestamp
# Add routine to ignore '--' lines in the grep output.
lines = [ line for line in lines if line != '--' ]
if isinstance( start_timestamp, datetime.datetime ):
# filter all_lines by lines that have timestamp after the official experiment start.
try:
filtered = [ l for l in lines if self.read_line( l )['timestamp'] > start_timestamp ]
return filtered
except KeyError:
return []
else:
print( "Not doing any filtering. Please input a datetime.datetime object." )
return lines
@staticmethod
def get_hms( hours , include_seconds=False ):
""" Returns a string of Hours:Minutes:Seconds (seconds if desired) from hours decimal. """
h = int( np.floor( hours ) )
minutes = (hours - h) * 60
m = int( np.floor( minutes ) )
seconds = (minutes - m) * 60
s = int( np.rint( seconds ) ) # take it to the nearest integer.
if include_seconds:
return '{}:{:02d}:{:02d}'.format( h, m, s )
else:
# Check if we need to round minutes up.
if seconds >= 30.0:
m += 1
return '{}:{:02d}'.format( h, m )
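    # Worked example (illustrative): get_hms(1.51) -> '1:31' (30.6 min rounds up);
    # get_hms(1.51, include_seconds=True) -> '1:30:36'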
@staticmethod
def get_timestamp( date_string, start_timestamp=None, end_timestamp=None ):
""" Extract the timestamp from debug log line, convert to datetime object, and add in a year """
# NOTE: No Year in the debuglog timestamps
try:
stamp = datetime.datetime.strptime( date_string.strip() , "%b %d %H:%M:%S" )
except ValueError:
# necessary for leap year - Feb 29th. Thinks day is out of range, so we need to tag on the year
year = datetime.date.today().year
date_string = '{} {}'.format(year, date_string)
stamp = datetime.datetime.strptime( date_string.strip() , "%Y %b %d %H:%M:%S" )
# Below is where we add in a year
if stamp.month == 12 and stamp.day == 31:
# try matching the start timestamp month if it exists
if start_timestamp and stamp.month == start_timestamp.month:
return stamp.replace( start_timestamp.year )
elif stamp.month == 1 and stamp.day==1:
if start_timestamp and stamp.month == start_timestamp.month:
return stamp.replace( start_timestamp.year )
elif end_timestamp and stamp.month == end_timestamp.month:
return stamp.replace( end_timestamp.year )
# Fallback method --> not guaranteed to work on or about New Year's Eve
return stamp.replace( TODAY.year )
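    # Illustrative: debug lines carry no year, so for a line stamped
    # 'Jan  1 00:02:11', get_timestamp borrows the year from the start/end
    # timestamps when the months match; e.g. end_timestamp=datetime.datetime(2021, 1, 2)
    # yields a 2021 stamp.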
class ValkyrieDebug( DebugLog ):
""" Specific class for parsing the Valkyire Debug log for workflow timing and status messages. """
def parallel_grep( self , start_timestamp=None ):
""" Merges all phrases for grepping into a single operation, stores to self.all_lines """
# try using attribute self.start_timestamp is start_timestamp is None
if start_timestamp is None: start_timestamp = self.start_timestamp
greps = [ 'do_', # for modules
'RESEQUENCE', # to detect if resequencing happened.
'planStatus', # for high level timing
': peStatus', # for a typo.
'start magnetic isp', # for mag to seq timing, possibly
'Type:Experiment Complete', # for more accurate sequencing end time- was not in use for a while, but am bringing it back
'Acquisition Complete', # one for each sequencing and resequencing flow- help determine accurate sequencing end time
#'CopyLocalFile 783', # was used for a short period of time to determine a more accurate sequencing end time
'er52', # use to determine if error 52 occurred in either pipette
'W3 failed', # use to determine if conical clog check was skipped due to very low (below 50 uL/s) W3 flow
'ValueError: ERROR:', # use to find various pipette errors
]
self.all_lines = self.search_many( *greps )
if start_timestamp:
# filter all_lines by lines that have timestamp after the official experiment start.
self.all_lines = self.filter_lines( self.all_lines, start_timestamp )
def detect_modules( self ):
""" Reads log for workflow components, allowing detection of e2e runs. """
        # The primary search criterion is 'do_'
# Let's assume that if we end up with a run report, we are actually doing sequencing...?
modules = { 'libprep' : False,
'harpoon' : False,
'magloading' : False,
'coca' : False,
'sequencing' : True ,
'resequencing': False } # cannot detect here since there is no do_resequencing in debug
#if hasattr( self, 'all_lines' ):
# lines = [ line for line in self.all_lines if 'do_' in line ]
#else:
# lines = self.search( 'do_' )
lines = self.search( 'do_' )
reseq_lines = self.search('RESEQUENCE')
parsed = [ self.read_line( line ) for line in lines ]
reseq_parsed = [self.read_line( line ) for line in reseq_lines]
conditions = [ ('libprep' , ['libprep'] ),
('harpoon' , ['harpoon'] ),
('magloading', ['magneticLoading'] ),
('coca' , ['coca'] ),]
#('sequencing', ['sequencing'] ) ]
for key, words in conditions:
print( 'Searching for {} . . .'.format( key ) )
for line in parsed:
m = re.match( '''.*do_(?P<module>[\w]+)\s(?P<active>[\w]+)''', line['message'] )
if m:
module = m.groupdict()['module']
active = m.groupdict()['active']
if module in words:
modules[ key ] = active.lower() == 'true'
print( '. . . {}'.format( active ) )
# Previous method
#message_words = line['message'].split()
#if set(words).issubset( set(message_words) ):
# modules[ key ] = 'true' in [ w.lower() for w in message_words ]
# Determine if resequencing happened
reseq_count = 0
for line in reseq_parsed:
if 'RESEQUENCE' in line['message']:
reseq_count += 1
if reseq_count > 4:
modules['resequencing'] = True
print('Resequencing Detected in debug')
self.modules = modules
print( 'summary' )
for k in ['libprep','harpoon','magloading','coca','sequencing','resequencing']:
print( '{}:\t{}'.format( k , modules[k] ) )
def get_overall_timing( self, reseq=False ):
if hasattr( self, 'all_lines' ):
            lines = [ line for line in self.all_lines
                      if 'planstatus' in line.lower()
                      or 'pestatus' in line.lower()
                      or 'type:experiment complete' in line.lower()
                      or ('copylocalfile' in line.lower() and 'acq_' in line.lower()) ]
            # note: the original condition, ('copylocalfile' and 'acq_') in line.lower(),
            # reduced to just 'acq_' because of Python's `and` semantics.
else:
# Bugfix. Looks like someone accidentally overwrote planStatus with peStatus
lines = self.search_many( 'planStatus', ': peStatus', 'copylocalfile' )
parsed = [ self.read_line( line ) for line in lines ]
timing = {}
conditions = [ ('review',['Review'],False),
('library_start',['Library','Started'],False),
('library_end',['Library','Completed'],False),
('templating_start',['Templating','Started'],False),
('templating_end',['Templating','Completed'],False),
('sequencing_start',['Sequencing','Started'],False),
('sequencing_end',['Sequencing','Completed'],False) ] # won't find this one in reseq runs
if reseq:
conditions += [
('resequencing-templating_start',['Resequencing','Started'],True), # actually called resequencing prep in debug
                ('resequencing-templating_end',['Resequencing','Started'],True), # note: matches the last 'Started' line rather than 'Completed'
('resequencing-sequencing_start',['Sequencing','Started'],True) , # for reseq
('resequencing-sequencing_end',['Sequencing','Completed'],True) ]
for key,words,uselast in conditions:
timing[ key ] = None
for line in parsed:
message_words = [ w.replace(',','').replace(')','') for w in line['message'].split() ]
if set(words).issubset( set( message_words ) ):
timing[ key ] = line['timestamp']
print('FOUND TIME {} FOR KEY {}'.format(line['timestamp'],key))
# The above method for determining sequencing end time is not accurate when postLib Deck clean takes longer than sequencing.
#if key=='sequencing_end':
# try:
# timing[ key ] = seq_complete # actual seq end, before 'Sequencing Complete' line
# print('updating sequencing complete time to file transfer of final acquisition....')
# except:
# print('tried but failed to update sequencing complete time')
if not uselast:
break # take first instance of finding it
#if 'Type:Experiment' in message_words:
# noticed this phrase does not appear in 6.35.1
#seq_complete = line['timestamp'] # Save time of copy files, since the one just before Sequencing Completed is the real sequencing end time- copying last acquisition file
#print('Type:Experiment line: {}'.format(line))
# Next, determine if we are missing sequencing_end time.
if reseq:
if timing[ 'sequencing_end' ] == timing[ 'resequencing-sequencing_end']:
print('Did not find a sequencing_end time. This is expected for RESEQUENCING runs')
# now we attempt to find it. Look for the first Type:Experiment Complete afte seq start time
get_next = False
for line in parsed:
if get_next:
message_words = [ w.replace(',','').replace(')','') for w in line['message'].split() ]
if 'Type:Experiment' in message_words:
print('Type:Experiment line for seq complete: {}'.format(line))
timing[ 'sequencing_end' ] = line['timestamp']
break
if line['timestamp'] == timing['sequencing_start']:
get_next = True
# Check for if they are all blank.
if not any( timing.values() ):
timing['sequencing_start'] = self.start_timestamp # Faster than getattr and now initialized to None
timing['sequencing_end'] = self.end_timestamp
print('debug_reader timing dict : {}'.format(timing))
self.timing = timing
# PW: At one point, I thought this was a good idea. Instead, I want runs that are not easily detected
# as having run modules to be categorized as "unknown" runs rather than muddying "Sequencing only"
# Update modules in case this was a tricky run that was manually loaded with emPCR, for instance:
#if self.modules['sequencing'] == False:
# if self.timing['sequencing_start'] and self.timing['sequencing_end']:
# print( 'Detected sequencing start/end times and updating modules to include sequencing!' )
# self.modules['sequencing'] == True
def plot_workflow( self, savepath='' ):
# Set colors for the workflow
COLORS = {'lib' : 'blue',
'temp': 'green',
'seq' : 'darkcyan' }
if self.modules['libprep']:
try:
lib = (self.timing['library_end'] - self.timing['library_start']).total_seconds() / 3600.
except:
print('Missing either library start or end... perhaps run crashed')
lib = 0
try:
dead3 = (self.timing['templating_start'] - self.timing['library_end']).total_seconds() / 3600.
except:
print('Missing templating_start time')
dead3 = 0
else:
lib = 0
dead3 = 0
if self.modules['harpoon'] or self.modules['magloading'] or self.modules['coca']:
try:
temp = (self.timing['templating_end'] - self.timing['templating_start']).total_seconds() / 3600.
dead4 = (self.timing['sequencing_start'] - self.timing['templating_end']).total_seconds() / 3600.
        except (KeyError, TypeError):
print('Missing either templating start or end time... possibly due to COCA only run')
temp = 0
dead4 = 0
else:
temp = 0
dead4 = 0
try:
seq = (self.timing['sequencing_end'] - self.timing['sequencing_start']).total_seconds() / 3600.
except TypeError:
print( 'Error reading timing details from overall timing. Unable to plot workflow timing.' )
timing_metrics = { 'total' : 0,
'libprep' : lib ,
'templating': temp,
'sequencing': 0 ,
'dead_time' : dead3 + dead4 ,
}
return timing_metrics
fig = plt.figure( figsize=(8,2) )
ax = fig.add_subplot( 111 )
last = 0
# Library preparation time
if lib > 0:
ax.barh( 0.5 , lib , 0.5, left=0, color=COLORS['lib'], alpha=0.4, align='center' )
ax.text( lib/2., 0.5, 'Library Prep\n{}'.format( self.get_hms(lib) ), color=COLORS['lib'],
ha='center', va='center', fontsize=10 )
last += lib
# Need dead time
ax.barh( 0.5 , dead3 , 0.5, left=last, color='grey', alpha=0.4, align='center' )
last += dead3
# Templating time
if temp > 0:
ax.barh( 0.5 , temp, 0.5, left=last, color=COLORS['temp'], alpha=0.4, align='center' )
ax.text( last + temp/2., 0.5, 'Templating\n{}'.format( self.get_hms( temp ) ),
color=COLORS['temp'], ha='center', va='center', fontsize=10 )
last += temp
# Dead time again
ax.barh( 0.5 , dead4 , 0.5, left=last, color='grey', alpha=0.4, align='center' )
last += dead4
# Sequencing time
ax.barh( 0.5 , seq , 0.5, left=last, color=COLORS['seq'], alpha=0.4, align='center' )
ax.text( last + seq/2. , 0.5, 'Seq.\n{}'.format( self.get_hms( seq ) ), color=COLORS['seq'],
ha='center', va='center', fontsize=10 )
last += seq
ax.text( last + 0.1 , 0.5, self.get_hms( last ),
va='center', fontsize=12 )
ax.set_xlim( 0, ax.get_xlim( )[1] + 1 )
ax.set_ylim( 0,1 )
ax.yaxis.set_visible( False )
ax.set_xlabel( 'Run Time (hours)' )
fig.tight_layout( )
if savepath:
fig.savefig( savepath )
else:
fig.show( )
timing_metrics = { 'total' : last,
'libprep' : lib ,
'templating': temp,
'sequencing': seq ,
'dead_time' : dead3 + dead4 ,
}
return timing_metrics
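    # Hypothetical usage (editor's sketch; `reader` stands for an instance of
    # this class):
    #   metrics = reader.plot_workflow(savepath='workflow.png')
    #   if metrics['total']:
    #       print('dead time: {:.1%}'.format(metrics['dead_time'] / metrics['total']))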
def detect_workflow_version( self ):
"""
Reads through debug log to identify the version of the workflow scripts.
"""
# Initialize values
workflow_version = { 'working_directory': None,
'version': None,
'branch': None,
'commit': None
}
version_lines = self.search( 'workflowVersion:' )
if version_lines:
line = self.read_line( version_lines[0] )
m = WF_REGEX.match( line['message'] )
if m:
info = m.groupdict()
workflow_version['working_directory'] = info['wd']
workflow_version['version'] = info['version']
if 'git' in info['version']:
# This is a branch and we need to record
gm = GIT_RE.match( info['version'] )
if gm:
git_info = gm.groupdict()
workflow_version['commit'] = git_info['commit']
workflow_version['branch'] = git_info['branch']
return workflow_version
def detect_init( self ):
""" Detects if initialization happened during the run, and if so, returns timing details. """
timing = {}
lines = [line for line in self.search_many( 'script_init.py &&' , 'script_init_cancel.py' ) if 'running command' in line]
for line in lines:
parsed = self.read_line( line )
msg_lower = parsed['message'].lower()
if 'script_init.py' in msg_lower:
timing['start'] = parsed['timestamp']
elif 'script_init_cancel.py' in msg_lower:
timing['end'] = parsed['timestamp']
if set(['start','end']).issubset( set(timing.keys()) ):
# We have the right entries to do the calculations we need
            timing['duration'] = (timing['end'] - timing['start']).total_seconds() / 3600.  # total_seconds(), not .seconds, so runs over 24 h are not truncated
return timing
def detect_postchip_clean( self ):
""" Detects if the postchipclean routine was run. This occurs once all lanes on a chip are spent. """
timing = {}
lines = self.search_many( '''.*Starting thread.*script_postchipclean.py.*''',
'''script_postchipclean_cancel.py''' )
for line in lines:
parsed = self.read_line( line )
msg_lower = parsed['message'].lower()
if 'clean.py' in msg_lower:
timing['start'] = parsed['timestamp']
elif '_cancel.py' in msg_lower:
timing['end'] = parsed['timestamp']
if set(['start','end']).issubset( set(timing.keys()) ):
# We have the right entries to do the calculations we need
            timing['duration'] = (timing['end'] - timing['start']).total_seconds() / 3600.  # total_seconds(), not .seconds, so runs over 24 h are not truncated
return timing
def detect_postrun_clean( self ):
"""
Detects if the postrun clean routine was run. This clean is a subset of the postchipclean routine.
Only cleans the lanes used in this run. Time required scales with number of lanes used.
"""
timing = {}
lines = self.search_many( '''Script_PostRunClean.txt''',
'''PostRunClean''', context=5 )
for line in lines:
parsed = self.read_line( line )
msg_lower = parsed['message'].lower()
if 'openscriptdirfile' in msg_lower:
timing['start'] = parsed['timestamp']
elif 'experiment complete' in msg_lower:
timing['end'] = parsed['timestamp']
if set(['start','end']).issubset( set(timing.keys()) ):
# We have the right entries to do the calculations we need
            timing['duration'] = (timing['end'] - timing['start']).total_seconds() / 3600.  # total_seconds(), not .seconds, so runs over 24 h are not truncated
return timing
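    # Editor's note (sketch): the durations above use total_seconds(); a bare
    # timedelta.seconds would silently drop whole days, e.g.:
    #   from datetime import timedelta
    #   d = timedelta(days=1, hours=2)
    #   d.seconds / 3600.          # -> 2.0
    #   d.total_seconds() / 3600.  # -> 26.0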

challtools/cli.py | mateuszdrwal/challtools | MIT

import sys
import time
import argparse
import os
import uuid
import hashlib
from pathlib import Path
import requests
import yaml
from google.cloud import storage
from .validator import ConfigValidator
from .utils import (
process_messages,
load_ctf_config,
load_config_or_exit,
get_ctf_config_path,
get_valid_config_or_exit,
discover_challenges,
build_chall,
start_chall,
start_solution,
validate_solution_output,
format_user_service,
)
from .constants import *
def main():
parser = argparse.ArgumentParser(
prog="challtools",
description="A tool for managing CTF challenges and challenge repositories using the OpenChallSpec",
)
subparsers = parser.add_subparsers()
# TODO add help strings
allchalls_parser = subparsers.add_parser(
"allchalls",
description="Runs a different command on every challenge in this ctf",
)
allchalls_parser.add_argument("command", nargs=argparse.REMAINDER)
allchalls_parser.add_argument("-e", "--exit-on-failure", action="store_true")
allchalls_parser.set_defaults(func=allchalls, subparsers=subparsers, parser=parser)
validate_parser = subparsers.add_parser(
"validate",
description="Validates a challenge to make sure it's defined properly",
)
validate_parser.add_argument("-v", "--verbose", action="store_true")
validate_parser.set_defaults(func=validate)
build_parser = subparsers.add_parser(
"build",
description="Builds a challenge by running its build script and building docker images",
)
build_parser.set_defaults(func=build)
start_parser = subparsers.add_parser(
"start",
description="Starts a challenge by running its docker images",
)
start_parser.add_argument("-b", "--build", action="store_true")
start_parser.set_defaults(func=start)
solve_parser = subparsers.add_parser(
"solve",
description="Starts a challenge by running its docker images, and procedes to solve it using the solution container",
)
solve_parser.set_defaults(func=solve)
compose_parser = subparsers.add_parser(
"compose",
description="Writes a docker-compose.yml file to the challenge directory which can be used to run all challenge services",
)
compose_parser.set_defaults(func=compose)
ensureid_parser = subparsers.add_parser(
"ensureid",
description="Checks if a challenge has a challenge ID, and if not, generates and adds one",
)
ensureid_parser.set_defaults(func=ensureid)
push_parser = subparsers.add_parser(
"push",
description="Push a challenge to the ctf platform",
)
push_parser.set_defaults(func=push)
args = parser.parse_args()
if not getattr(args, "func", None):
parser.print_usage()
else:
exit(args.func(args))
def allchalls(args):
parser = args.subparsers.choices.get(args.command[0])
if not parser:
print(
f"{CRITICAL}Allchalls could not find the specified command to run on all challenges. Run {args.parser.prog} -h to view all commands.{CLEAR}"
)
return 1
if get_ctf_config_path() == None:
print(
f"{CRITICAL}No CTF configuration file (ctf.yaml) detected in the current directory or any parent directory, and therefore cannot discover challenges.{CLEAR}"
)
return 1
parser_args = parser.parse_args(args.command[1:])
failed = False
for path in discover_challenges():
print(f"{BOLD}Running {args.command[0]} on {path}{CLEAR}")
os.chdir(path.parent)
try:
exit_code = parser_args.func(parser_args)
except SystemExit as e:
exit_code = e.code or 0
if exit_code:
failed = True
if args.exit_on_failure:
return 1
return int(failed)
def validate(args):
config = load_config_or_exit()
validator = ConfigValidator(
config, ctf_config=load_ctf_config(), challdir=Path(".")
)
messages = validator.validate()[1]
processed = process_messages(messages, verbose=args.verbose)
if processed["highest_level"]:
print("\n".join(processed["message_strings"]))
print(processed["count_string"])
if processed["highest_level"] and not args.verbose:
print("Run with -v for detailed descriptions")
level_messages = [
f"{SUCCESS}Validation succeeded. No issues detected!",
f"{SUCCESS}Validation succeeded.",
f"{SUCCESS}Validation succeeded.",
f"{HIGH}Validation succeeded. You may want to investigate some of the issues.",
f"{HIGH}Validation succeeded, however you should fix errors of high severity.",
f"{CRITICAL}Validation failed, please fix the critical errors.",
]
print(level_messages[processed["highest_level"]] + CLEAR)
if processed["highest_level"] == 5:
return 1
return 0
def build(args):
config = get_valid_config_or_exit()
if build_chall(config):
print(f"{SUCCESS}Challenge built successfully!{CLEAR}")
else:
print(f"{BOLD}Nothing to do{CLEAR}")
return 0
def start(args):
config = get_valid_config_or_exit()
if args.build and build_chall(config):
print(f"{SUCCESS}Challenge built successfully!{CLEAR}")
containers, service_strings = start_chall(config)
if not containers:
print(f"{BOLD}No services defined, nothing to do{CLEAR}")
return 0
if service_strings:
print(f"{BOLD}Services:\n" + "\n".join(service_strings) + f"{CLEAR}")
try:
for log in containers[0].logs(
stream=True
): # TODO print logs from all containers, probably stream=False and a for loop iterating over all containers in a while true loop
sys.stdout.write(log.decode())
except KeyboardInterrupt:
print(f"{BOLD}Stopping...{CLEAR}")
for container in containers:
container.kill()
return 0
def solve(args): # TODO add support for solve script
config = get_valid_config_or_exit()
# if not config["solution_image"]:
# print(f"{BOLD}No solution defined, cannot solve challenge{CLEAR}")
# return 1
containers, service_strings = start_chall(config)
if not containers:
print(f"{BOLD}No services defined, there is nothing to solve{CLEAR}")
return 1
# sleep to let challenge spin up
time.sleep(3)
# TODO if the services have a docker healthcheck, wait for it to pass instead
# TODO configureable sleep with a cmd arg
solution_container = start_solution(config)
print(f"{BOLD}Solving...{CLEAR}")
try:
for log in solution_container.logs(stream=True, stderr=True):
sys.stdout.write(log.decode())
except KeyboardInterrupt:
print(f"{BOLD}Aborting...{CLEAR}")
for container in containers:
container.kill()
solution_container.kill()
solution_container.remove()
return 1
solution_container.wait()
for container in containers:
container.kill()
output = solution_container.logs()
solution_container.remove()
if validate_solution_output(config, output.decode()):
print(f"{SUCCESS}Challenge solved successfully!{CLEAR}")
else:
print(f"{CRITICAL}Challenge could not be solved{CLEAR}")
return 1
return 0
def compose(args):
config = get_valid_config_or_exit()
if not config["deployment"] or not config["deployment"].get("containers"):
print(f"{BOLD}No services defined, nothing to do{CLEAR}")
return 0
if config["deployment"]["type"] != "docker":
print(
f'{CRITICAL}Only deployments of type "docker" can be used to create a docker-compose file{CLEAR}'
)
return 1
compose = {
"version": "3",
"services": {},
}
if config["deployment"]["volumes"]:
compose["volumes"] = {volume: {} for volume in config["deployment"]["volumes"]}
if config["deployment"]["networks"]:
compose["networks"] = {
network: {} for network in config["deployment"]["networks"]
}
next_port = 50000
used_ports = set()
    # TODO handle services with set external ports first so the auto-assigned ports don't conflict with them
for name, container in config["deployment"]["containers"].items():
compose_service = {"ports": []}
volumes = []
networks = []
if Path(container["image"]).exists():
compose_service["build"] = container["image"]
else:
compose_service["image"] = container["image"]
for service in container["services"]:
external_port = service.get("external_port")
if not external_port:
while next_port in used_ports:
next_port += 1
external_port = next_port
assert external_port not in used_ports
used_ports.add(external_port)
compose_service["ports"].append(
f"{external_port}:{service['internal_port']}"
)
for service in container["extra_exposed_ports"]:
assert service["external_port"] not in used_ports
used_ports.add(service["external_port"])
compose_service["ports"].append(
f"{service['external_port']}:{service['internal_port']}"
)
for volume_name, containers in config["deployment"]["volumes"].items():
for mapping in containers:
if name in mapping:
volumes.append(f"{volume_name}:{mapping[name]}")
for network_name, containers in config["deployment"]["networks"].items():
if name in containers:
networks.append(network_name)
if volumes:
compose_service["volumes"] = volumes
if networks:
compose_service["networks"] = networks
compose["services"][name] = compose_service
Path("docker-compose.yml").write_text(yaml.dump(compose))
print(f"{SUCCESS}docker-compose.yml written!{CLEAR}")
return 0
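# Editor's sketch (assumed shape, not taken from the project): for a single
# container "web" exposing internal port 80 with no external_port set, the
# file written above would look roughly like:
#   version: '3'
#   services:
#     web:
#       image: example/web
#       ports:
#       - 50000:80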
def ensureid(args):
path = Path(".")
if (path / "challenge.yml").exists():
path = path / "challenge.yml"
elif (path / "challenge.yaml").exists():
path = path / "challenge.yaml"
else:
print(
f"{CRITICAL}Could not find a challenge.yml file in this directory.{CLEAR}"
)
return 1
with path.open() as f:
raw_config = f.read()
config = yaml.safe_load(raw_config)
validator = ConfigValidator(
config, ctf_config=load_ctf_config(), challdir=Path(".")
)
messages = validator.validate()[1]
highest_level = process_messages(messages)["highest_level"]
if highest_level == 5:
print(
"\n".join(
process_messages([m for m in messages if m["level"] == 5])[
"message_strings"
]
)
)
print(
f"\n{CRITICAL}There are critical config validation errors. Please fix them before continuing."
)
return 1
config = validator.normalized_config
if config["challenge_id"]:
print(f"{SUCCESS}Challenge ID present!{CLEAR}")
return 0
if raw_config.endswith("\n\n"):
pass
elif raw_config.endswith("\n"):
raw_config += "\n"
else:
raw_config += "\n\n"
raw_config += f"challenge_id: {uuid.uuid4()}\n"
try:
edited_config = yaml.safe_load(raw_config)
del edited_config["challenge_id"]
validator = ConfigValidator(
edited_config, ctf_config=load_ctf_config(), challdir=Path(".")
)
messages = validator.validate()[1]
assert process_messages(messages)["highest_level"] != 5
assert validator.normalized_config == config
except (yaml.reader.ReaderError, KeyError, AssertionError):
print(
f"{CRITICAL}Could not automatically add the ID to the config. Here is a random ID for you to add manually: {uuid.uuid4()}{CLEAR}"
)
return 1
path.write_text(raw_config)
print(f"{SUCCESS}Challenge ID written to config!{CLEAR}")
return 0
def push(args):
config = get_valid_config_or_exit()
ctf_config = load_ctf_config()
if not config["challenge_id"]:
print(f"{CRITICAL}ID not configured in the challenge configuration file{CLEAR}")
return 1
if not ctf_config.get("custom", {}).get("platform_url"):
print(
f"{CRITICAL}Platform URL not configured in the CTF configuration file{CLEAR}"
)
return 1
if not ctf_config.get("custom", {}).get("platform_api_key"):
print(
f"{CRITICAL}Platform API key not configured in the CTF configuration file{CLEAR}"
)
return 1
file_urls = []
if not config["downloadable_files"]:
print(f"{BOLD}No files defined, nothing to upload{CLEAR}")
else:
if not ctf_config.get("custom", {}).get("bucket"):
print(
f"{CRITICAL}Bucket not configured in the CTF configuration file{CLEAR}"
)
return 1
if not ctf_config.get("custom", {}).get("secret"):
print(
f"{CRITICAL}Secret not configured in the CTF configuration file{CLEAR}"
)
return 1
storage_client = storage.Client()
bucket = storage_client.bucket(ctf_config["custom"]["bucket"])
folder = hashlib.sha256(
f"{ctf_config['custom']['secret']}-{config['challenge_id']}".encode()
).hexdigest()
for blob in bucket.list_blobs(prefix=folder):
print(f"{BOLD}Deleting old {blob.name.split('/')[-1]}...{CLEAR}")
blob.delete()
filepaths = []
for file in config["downloadable_files"]:
path = Path(file)
if path.is_dir():
filepaths += list(path.iterdir())
else:
filepaths.append(path)
for path in filepaths:
            if not path.exists():
                print(f"{CRITICAL}file {path} does not exist!{CLEAR}")
                return 1  # abort the push rather than try to upload a missing file
print(f"{BOLD}Uploading {path.name}...{CLEAR}")
blob = bucket.blob(folder + "/" + path.name)
blob.upload_from_file(path.open("rb"))
file_urls.append(blob.public_url)
service_types = {
s["type"]: s
for s in [
{"type": "website", "user_display": "{url}", "hyperlink": True},
{"type": "tcp", "user_display": "nc {host} {port}", "hyperlink": False},
]
+ config["custom_service_types"]
}
payload = {
"title": config["title"],
"description": config["description"],
"authors": config["authors"],
"categories": config["categories"],
"score": config["score"],
"challenge_id": config["challenge_id"],
"flag_format_prefix": config["flag_format_prefix"],
"flag_format_suffix": config["flag_format_suffix"],
"file_urls": file_urls,
"flags": config["flags"],
"order": config["custom"].get("order"),
"services": [
{
"hyperlink": service_types[c["type"]]["hyperlink"],
"user_display": format_user_service(config, c["type"], **c),
}
for c in config["predefined_services"]
],
}
r = requests.post(
ctf_config["custom"]["platform_url"] + "/api/admin/push_challenge",
json=payload,
headers={"X-API-Key": ctf_config["custom"]["platform_api_key"]},
)
if r.status_code != 200:
print(f"{CRITICAL}Request failed with status {r.status_code}{CLEAR}")
return 1
print(f"{SUCCESS}Challenge pushed!{CLEAR}")
return 0
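# Hypothetical invocations (editor's sketch, assuming the entry point is
# installed as `challtools`):
#   challtools validate -v
#   challtools build
#   challtools start --build
#   challtools allchalls validate -e
#   challtools push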

src/main.py | TaylorCoons/fiscus | MIT

#!/usr/bin/env python3
'''
Script to check for price movements of desired stocks and send a price notification
'''
import argparse
def main():
'''Main entrypoint for the script'''
parser = argparse.ArgumentParser(
description='Check stocks for price movements',
epilog='example usage: ./main.py -e johndoe@gmail.com -t 10 FUJHY AAPL TXN'
)
parser.add_argument(
'-e', '--email',
type=str,
help='Email to send notification to'
)
parser.add_argument(
'-t', '--threshold',
type=int,
help='Percentage threshold of when to trigger a notification'
)
parser.add_argument(
'tickers',
metavar='TICKERS',
type=str,
nargs='+',
help='List of stock tickers to check'
)
    args = parser.parse_args()  # parsed arguments; the price-check logic itself is not implemented in this script
if __name__ == '__main__':
main()

assignment2.py | tommasodeangeli97/assignment1 | MIT

from __future__ import print_function
import time
from sr.robot import *
a_th = 2.0
""" float: Threshold for the control of the orientation"""
d_th = 0.4
""" float: Threshold for the control of the linear distance"""
d_min = 1.0
""" float: Threshold for the minimum distance from the golden token"""
angl2 = 0.0
"""Float: angolaxion to compare"""
angl3=0.0
"""Float: angolaxion to compare"""
R = Robot()
""" instance of the class Robot"""
def drive(speed, seconds):
"""
Function for setting a linear velocity
Args: speed (int): the speed of the wheels
seconds (int): the time interval
"""
R.motors[0].m0.power = speed
R.motors[0].m1.power = speed
time.sleep(seconds)
R.motors[0].m0.power = 0
R.motors[0].m1.power = 0
def turn(speed, seconds):
"""
Function for setting an angular velocity
Args: speed (int): the speed of the wheels
seconds (int): the time interval
"""
R.motors[0].m0.power = speed
R.motors[0].m1.power = -speed
time.sleep(seconds)
R.motors[0].m0.power = 0
R.motors[0].m1.power = 0
def grab_release():
"""
    function that identifies the nearest silver token and goes to grab it
"""
    dist = 10
    angl = 0.0  # default so angl is bound even if no silver token is seen
for token2 in R.see():
if token2.info.marker_type is MARKER_TOKEN_SILVER and token2.dist<dist:
dist = token2.dist
angl = token2.rot_y
if -a_th < angl < a_th:
print ("in vista")
if dist< d_th:
R.grab()
print ("preso")
turn(13,4)
R.release()
turn(-13,4)
print("pronto")
drive(20,0.5)
else:
print ("mo arrivo")
drive(20, 0.5)
grab_release()
elif angl > a_th:
print ("mi giro a destra")
turn (5,0.3)
grab_release()
elif angl < -a_th:
print ("mi giro a sinistra")
turn (-5,0.3)
grab_release()
def scelta():
"""
    function to turn the right way when the robot is at a corner
"""
dist2 =0.0
dist3 =0.0
for token3 in R.see():
if 88 < token3.rot_y < 92:
dist2=token3.dist
print("preso primo dato")
else:
print("nope1")
for token2 in R.see():
if -92 < token2.rot_y < -88:
dist3=token2.dist
print ("preso dato 2")
else:
print("nope 2")
if dist2 > dist3:
print ("da questa parte")
turn (12,1.5)
else:
print ("invece da questa parte")
turn(-12,1.5)
def distance(token):
"""
    function that takes as input the token identified in the main loop and turns so as not to hit it
"""
dist = token.dist
angl = token.rot_y
if -10 < angl < 10:
if dist<= d_min:
if angl >= a_th+2.5 :
print("a sinistra")
turn(-15,1.3)
elif angl <= -a_th-2.5 :
print("a destra")
turn(15,1.3)
elif -a_th-2.5 < angl < a_th+2.5:
print("sono indeciso")
scelta()
elif dist> d_min:
print("ancora lontano")
elif -135 < angl < -45 or 45 < angl < 135:
if dist <= 0.5:
if angl < 0:
turn (5, 0.1)
elif angl > 0:
turn (-5, 0.1)
else:
print("per ora no")
#the main
drive (17,5)
while 1:
drive(17,0.5)
for token in R.see():
if token.info.marker_type is MARKER_TOKEN_GOLD:
if token.dist<=d_min:
distance(token)
elif token.info.marker_type is MARKER_TOKEN_SILVER and token.dist<1.2 and -45 < token.rot_y < 45:
print("vediamo")
grab_release()

bin/pcnaDeep/evaluate.py | Jeff-Gui/PCNAdeep | Apache-2.0

# -*- coding: utf-8 -*-
import os
import subprocess
from pcnaDeep.data.annotate import relabel_trackID, label_by_track, get_lineage_txt, break_track, save_seq
class pcna_ctcEvaluator:
def __init__(self, root, dt_id, digit_num=3, t_base=0, path_ctc_software=None, init_dir=True):
"""Evaluation of tracking output
"""
self.dt_id = dt_id
self.digit_num = digit_num
self.t_base = t_base
self.root = root
self.path_ctc_software = path_ctc_software
if init_dir:
self.init_ctc_dir()
self.trk_path = None
def set_evSoft(self, path_ctc_software):
"""Set evaluation software path
Args:
path_ctc_software (str): path to CTC evaluation software
"""
self.path_ctc_software = path_ctc_software
def generate_raw(self, stack):
"""Save raw images by slice
Args:
stack (numpy.ndarray): raw image
"""
fm = ("%0" + str(self.digit_num) + "d") % self.dt_id
save_seq(stack, os.path.join(self.root, fm), 't', dig_num=self.digit_num, base=self.t_base)
return
def generate_ctc(self, mask, track, mode='RES'):
"""Generate standard format for Cell Tracking Challenge Evaluation, for RES or GT.
Args:
mask (numpy.ndarray): mask output, no need to have cell cycle labeled
track (pandas.DataFrame): tracked object table, can have gaped tracks
mode (str): either "RES" or "GT".
"""
track_new = relabel_trackID(track.copy())
track_new = break_track(track_new.copy())
tracked_mask = label_by_track(mask.copy(), track_new.copy())
txt = get_lineage_txt(track_new)
fm = ("%0" + str(self.digit_num) + "d") % self.dt_id
tracked_mask = tracked_mask.astype('uint16')
if mode == 'RES':
# write out processed files for RES folder
save_seq(tracked_mask, os.path.join(self.root, fm + '_RES'), 'mask', dig_num=self.digit_num, base=self.t_base, sep='')
txt.to_csv(os.path.join(self.root, fm + '_RES', 'res_track.txt'), sep=' ', index=0, header=False)
elif mode == 'GT':
fm = os.path.join(self.root, fm + '_GT')
self.__saveGT(fm, txt, tracked_mask)
else:
raise ValueError('Can only generate CTC format files as RES or GT, not: ' + mode)
return
def __saveGT(self, fm, txt, mask):
"""Save ground truth in Cell Tracking Challenge format.
"""
txt.to_csv(os.path.join(fm, 'TRA', 'man_track.txt'), index=0, sep=' ', header=False)
save_seq(mask, os.path.join(fm, 'SEG'), 'man_seg', dig_num=self.digit_num, base=self.t_base, sep='')
save_seq(mask, os.path.join(fm, 'TRA'), 'man_track', dig_num=self.digit_num, base=self.t_base, sep='')
return
def init_ctc_dir(self):
"""Initialize Cell Tracking Challenge directory
Directory example
>-----0001----------
>-----0001_RES---
>-----0001_GT----
>----SEG------
>----TRA------
"""
root = self.root
fm = ("%0" + str(self.digit_num) + "d") % self.dt_id
if not os.path.isdir(os.path.join(root, fm)) and not os.path.isdir(os.path.join(root, fm + '_RES')) and \
not os.path.isdir(os.path.join(root, fm + '_GT')):
os.mkdir(os.path.join(root, fm))
os.mkdir(os.path.join(root, fm + '_RES'))
os.mkdir(os.path.join(root, fm + '_GT'))
os.mkdir(os.path.join(root, fm + '_GT', 'SEG'))
os.mkdir(os.path.join(root, fm + '_GT', 'TRA'))
else:
            raise IOError('Directory already exists.')
return
def evaluate(self):
"""Call CTC evaluation software to run ((Unix) Linux/Mac only)
"""
fm = ("%0" + str(self.digit_num) + "d") % self.dt_id
if self.path_ctc_software is None:
raise ValueError('CTC evaluation software path not set yet. Call through pcna_ctcEvaluator.set_evSoft()')
wrap_root = "\"" + self.root + "\""
wrap_tra = "\"" + os.path.join(self.path_ctc_software, 'TRAMeasure') + "\""
wrap_seg = "\"" + os.path.join(self.path_ctc_software, 'SEGMeasure') + "\""
subprocess.run(wrap_tra + ' ' + wrap_root + ' ' + fm + ' ' + str(
self.digit_num), shell=True)
subprocess.run(wrap_seg + ' ' + wrap_root + ' ' + fm + ' ' + str(
self.digit_num), shell=True)
return
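# Hypothetical usage (editor's sketch; the paths are assumptions):
#   ev = pcna_ctcEvaluator('/data/ctc', dt_id=1,
#                          path_ctc_software='/opt/ctc-measures')
#   ev.generate_raw(raw_stack)                       # one image per frame
#   ev.generate_ctc(mask, tracked_table, mode='RES')
#   ev.evaluate()                                    # runs TRAMeasure/SEGMeasure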

wampnado/transports/tcp/__init__.py | rexlunae/tornwamp | Apache-2.0

"""
Methods common to all TCP transports.
"""
from enum import Enum
from warnings import warn
from datetime import datetime
from wampnado.serializer import JSON_PROTOCOL, BINARY_PROTOCOL, NONE_PROTOCOL
from wampnado.messages import Message
class HandshakeError(Enum):
NoError=0
SerializerUnsupported=1
MessageSizeRejected=2
UnknownOption=3
ConnectionCountLimit=4
class MessageType(Enum):
Regular=0
Ping=1
Pong=2
class EncodedMessage(bytearray):
    def __init__(self, type, payload=b''):
        length = len(payload)
        if length > 0xffffff:
            raise ValueError('Message length must be at most 0xffffff')
        # Frame header: one type byte followed by a 24-bit big-endian length.
        self.insert(0, type.value)
        self.insert(1, (length & 0xff0000) >> 16)
        self.insert(2, (length & 0xff00) >> 8)
        self.insert(3, length & 0xff)
        self.extend(payload)
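# Editor's sketch (derived from the header layout above): a frame is
# [type byte][24-bit big-endian length][payload], e.g.
#   bytes(EncodedMessage(MessageType.Regular, b'[1,"realm"]'))
#   # -> b'\x00\x00\x00\x0b[1,"realm"]'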
class TCPSocketPeer:
"""
Contains the side-agnostic bits of the socket communication.
"""
supported_protocols = {
JSON_PROTOCOL: True,
BINARY_PROTOCOL: True,
}
def __init__(self, stream):
self.protocol = JSON_PROTOCOL
self.stream = stream
self.max_length = 0 # Until negotiated otherwise
def pong(self):
"""
Respond to a ping.
"""
self.stream.write(b'\x02\0\0\0')
def ping(self):
"""
Send a ping.
"""
self.stream.write(b'\x01\0\0\0')
def write_message(self, msg, **kwargs):
"""
Takes a WAMP message, puts the correct header around it, and sends it to the client iff it is within the negotiated max_length using the negotiated serializer.
"""
        if self.protocol == JSON_PROTOCOL:
            serialized_msg = msg.json.encode()
        elif self.protocol == BINARY_PROTOCOL:
            serialized_msg = msg.msgpack
        else:
            raise ValueError('unsupported serialization protocol: {}'.format(self.protocol))
if len(serialized_msg) > self.max_length:
warn('Message of length {} exceeded negotiated max length {}.'.format(len(serialized_msg), self.max_length))
return False
full_msg = EncodedMessage(MessageType.Regular, serialized_msg)
self.stream.write(full_msg)
async def read_message(self):
msg_type = MessageType((await self.stream.read_bytes(1))[0])
if msg_type == MessageType.Regular:
length_bytes = await self.stream.read_bytes(3)
length = (length_bytes[0] << 16) + (length_bytes[1] << 8) + length_bytes[2]
data = await self.stream.read_bytes(length)
if self.protocol == JSON_PROTOCOL:
msg = Message.from_text(data)
elif self.protocol == BINARY_PROTOCOL:
msg = Message.from_bin(data)
            else:
                warn('unknown protocol {}'.format(self.protocol))
                return None  # nothing could be deserialized
            return msg
elif msg_type == MessageType.Ping:
self.pong()
elif msg_type == MessageType.Pong:
warn('{} got ping response'.format(datetime.now()))
else:
            warn('got unknown message type {}'.format(msg_type.value))

kvcd/config.py | lrivallain/kvcd | MIT

"""This submodule contains the definition of the expected configuration
to setup to use kvcd.
The main configuration is handled by `environ-config` module.
"""
import environ
import logging
from kvcd import _available_modules
logger = logging.getLogger(__name__)
@environ.config(prefix="KVCD")
class KvcdConfig:
"""kvcd configuration
"""
@environ.config
class VcloudConfig:
"""vcloud configuration
"""
host = environ.var(
help="Hostname of the vCloud instance")
port = environ.var(
default=443,
help="Port of the vCloud instance",
converter=int)
org = environ.var(
default="System",
help="Organization of the vCloud instance")
username = environ.var(
default="Administrator",
help="Username of the vCloud instance")
password = environ.var(
help="Password of the vCloud instance user")
verify_ssl = environ.bool_var(
default=True,
help="Verify SSL certificate of the vCloud instance")
refresh_session_interval = environ.var(
default=3600,
help="Interval (in secs) between to refresh of the authentication session",
converter=int)
vcd = environ.group(
VcloudConfig,
optional=False)
refresh_interval = environ.var(
default=60,
help="Refresh interval of the vCloud instance data for each object",
converter=int)
refresh_initial_delay = environ.var(
default=60,
help="Warming up duration",
converter=int)
refresh_idle_delay = environ.var(
default=60,
help="Reduce the number of timer checks when the ressource is changed",
converter=int)
enabled_modules = environ.var(
default=",".join(_available_modules),
help="Enable a sublist of modules: all by default",
converter=lambda x: [m.strip() for m in x.split(',')]
)
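# Editor's sketch: loading this configuration from the environment with
# environ-config's standard loader (variable names assume the usual
# PREFIX_GROUP_FIELD composition, e.g. KVCD_VCD_HOST):
#   import os
#   os.environ['KVCD_VCD_HOST'] = 'vcd.example.org'
#   os.environ['KVCD_VCD_PASSWORD'] = 'secret'
#   cfg = environ.to_config(KvcdConfig)
#   print(cfg.vcd.host, cfg.refresh_interval)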

fixture/orm.py | DmitriyYa/python_training | Apache-2.0

# -*- coding: utf-8 -*-
from pony.orm import *
from datetime import datetime
from model.group import Group
from model.myuser import MyUser
from pymysql.converters import decoders
class ORMFixture:
db = Database()
class ORMGroup(db.Entity):
_table_ = 'group_list'
id = PrimaryKey(int, column='group_id')
name = Optional(str, column='group_name')
header = Optional(str, column='group_header')
footer = Optional(str, column='group_footer')
users = Set(lambda: ORMFixture.ORMUser, table='address_in_groups', column='id', reverse='groups', lazy=True)
class ORMUser(db.Entity):
_table_ = 'addressbook'
id = PrimaryKey(int, column='id')
firstname = Optional(str, column='firstname')
lastname = Optional(str, column='lastname')
address = Optional(str, column='address')
home_phone = Optional(str, column='home')
mobile_phone = Optional(str, column='mobile')
work_phone = Optional(str, column='work')
phone2 = Optional(str, column='phone2')
email = Optional(str, column='email')
email2 = Optional(str, column='email2')
email3 = Optional(str, column='email3')
deprecated = Optional(str, column='deprecated')
groups = Set(lambda: ORMFixture.ORMGroup, table='address_in_groups', column='group_id', reverse='users',
lazy=True)
def __init__(self, host, name, user, password):
        self.db.bind('mysql', host=host, database=name, user=user, password=password)  # bind to the database
        self.db.generate_mapping()  # map database tables to entity classes
def convert_groups_to_model(self, groups):
def convert(group):
return Group(id=str(group.id), name=group.name, header=group.header, footer=group.footer)
return list(map(convert, groups))
@db_session
def get_group_list(self):
return self.convert_groups_to_model(select(g for g in ORMFixture.ORMGroup))
def convert_users_to_model(self, users):
def convert(user):
return MyUser(id=str(user.id), first_name=user.firstname, last_name=user.lastname, address=user.address, home_phone=user.home_phone, mobile_phone=user.mobile_phone, work_phone=user.work_phone,
phone2=user.phone2, email=user.email, email2=user.email2, email3=user.email3)
return list(map(convert, users))
@db_session
def get_user_list(self):
return self.convert_users_to_model(select(u for u in ORMFixture.ORMUser if u.deprecated is None))
@db_session
def get_users_in_group(self, group):
orm_group = list(select(g for g in ORMFixture.ORMGroup if g.id == group.id))[0]
return self.convert_users_to_model(orm_group.users)
@db_session
def get_users_not_in_group(self, group):
orm_group = list(select(g for g in ORMFixture.ORMGroup if g.id == group.id))[0]
return self.convert_users_to_model(
            select(u for u in ORMFixture.ORMUser if u.deprecated is None and orm_group not in u.groups))
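    # Hypothetical usage (editor's sketch; the credentials are placeholders):
    #   db = ORMFixture(host='127.0.0.1', name='addressbook', user='root', password='')
    #   groups = db.get_group_list()
    #   users_in_first = db.get_users_in_group(groups[0])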

python/fastKnapSackSearchTree.py | SaadAhmad123/myCodeRepo | Apache-2.0

'''
This function implements the memoization
approach in the searchTree algorithm in
knapSackSearchTree.py
To use the implementation below, the list should be
a list of objects with the following methods:
- getValue() -----> this function will be maximized for all the elements in the list
- getCost() -----> this function will contribute toward the constraint.
'''
def fastSearchBestCombination(list, avail, memo=None):
    if memo is None:
        # A fresh cache per top-level call; a mutable default argument would
        # leak cached results between calls made with different item lists.
        memo = {}
    result = None
    if (len(list), avail) in memo:
        return memo[(len(list), avail)]
    elif list == [] or avail == 0:
        result = (0, [])
    elif list[0].getCost() > avail:
        result = fastSearchBestCombination(list[1:], avail, memo)
    else:
        fItm = list[0]
        others = list[1:]
        # with fItm
        (bestValue1, bestCombination1) = fastSearchBestCombination(others, avail - fItm.getCost(), memo)
        bestValue1 = bestValue1 + fItm.getValue()
        # without fItm
        (bestValue0, bestCombination0) = fastSearchBestCombination(others, avail, memo)
        if bestValue1 > bestValue0:
            result = (bestValue1, bestCombination1 + [fItm])
        else:
            result = (bestValue0, bestCombination0)
    memo[(len(list), avail)] = result
    return result
#end
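# Hypothetical usage (editor's sketch): items expose getValue()/getCost().
#   class Item:
#       def __init__(self, v, c): self.v, self.c = v, c
#       def getValue(self): return self.v
#       def getCost(self): return self.c
#   best_value, best_items = fastSearchBestCombination(
#       [Item(6, 3), Item(7, 3), Item(8, 5)], avail=6)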

src/haarcascade/static.py | rafaelscariot/face-detection | MIT

import cv2
image_path = '../../images/person.jpg'
cascade_path = '../../resources/haarcascade_frontalface_default.xml'
def main():
clf = cv2.CascadeClassifier(cascade_path)
img = cv2.imread(image_path)
faces = clf.detectMultiScale(img, 1.3, 10)
for (x, y, h, w) in faces:
cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
cv2.putText(img, 'Detected face.', (x, y-8), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 1)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
if __name__ == '__main__':
    main()
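# Note (editor's sketch): the relative paths above assume the script is run
# from its own directory, e.g.
#   cd src/haarcascade && python3 static.py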

src/benchmark/plot_benchmark.py | robertu94/ndzip | MIT

#!/usr/bin/env python3
# pipe input from benchmark binary into this script to plot throughput vs. compression ratio
import csv
import sys
from collections import defaultdict
from operator import itemgetter
from argparse import ArgumentParser
from math import floor, ceil, log10
import numpy as np
import scipy.stats as st
from matplotlib import patches, ticker, pyplot as plt
from tabulate import tabulate
DATA_TYPES = ['float', 'double']
OPERATIONS = ['compression', 'decompression']
PALETTE = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf',
'#aec7e8', '#ffbb78', '#98df8a', '#ff9896', '#c5b0d5', '#c49c94', '#f7b6d2', '#c7c7c7', '#dbdb8d', '#9edae5']
def arithmetic_mean(x):
return sum(x) / len(x)
def input_files(file_list):
if file_list:
for n in file_list:
if n == '-':
yield sys.stdin
else:
with open(n, 'r') as f:
yield f
else:
yield sys.stdin
class ThroughputStats:
def __init__(self, dataset_points: list, op: str):
sample_means = [int(p['uncompressed bytes']) / np.mean(np.array(
[float(t) for t in p[f'{op} times (microseconds)'].split(',')])) * 1e6
for p in dataset_points]
# TODO stats except mean are probably imprecise
samples = [np.array(
[int(p['uncompressed bytes']) / float(t) * 1e6 for t in p[f'{op} times (microseconds)'].split(',')])
for p in dataset_points]
self.mean = np.mean(sample_means)
self.stddev = np.sqrt(np.mean([np.var(ds) for ds in samples]))
self.min = np.mean([np.min(ds) for ds in samples])
self.max = np.mean([np.max(ds) for ds in samples])
# TODO is averaging error bar sizes correct?
self.h95 = np.mean([st.t.ppf(1.95 / 2, len(ds) - 1) * st.sem(ds) for ds in samples])
def log_ticks(start: float, stop: float, step: int):
ticks = []
base = 10 ** floor(log10(start))
mul = ceil(start / base)
while mul * base <= stop:
ticks.append(mul * base)
mul += step
if mul >= 10:
base *= 10
mul = 1
return ticks
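# Editor's note (sketch): log_ticks walks decades with the given step, e.g.
#   log_ticks(2, 50, 2)  # -> [2, 4, 6, 8, 10, 30, 50]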
def plot_throughput_vs_ratio(algorithms, by_data_type_and_algorithm, output_pgf):
data_type_means = []
for row, data_type in enumerate(DATA_TYPES):
means = []
algo_means_dict = defaultdict(list)
for algo, results_by_tunable in by_data_type_and_algorithm[data_type].items():
for tunable, results_by_num_threads in results_by_tunable.items():
max_threads_results = results_by_num_threads[max(results_by_num_threads.keys())]
mean_compression_ratio = np.mean([float(a['compressed bytes']) / float(a['uncompressed bytes'])
for a in max_threads_results])
throughput_stats = {op: ThroughputStats(max_threads_results, op) for op in OPERATIONS}
means.append((algo, tunable, mean_compression_ratio, throughput_stats))
algo_means_dict[algo].append((tunable, mean_compression_ratio, throughput_stats))
means.sort(key=itemgetter(0, 1))
for v in algo_means_dict.values():
v.sort(key=itemgetter(0))
algo_means = sorted(algo_means_dict.items(), key=itemgetter(0))
print(f'({data_type})')
print(tabulate([[f'{a} {u}', '{:.3f}'.format(r).lstrip('0'),
*('{:,.0f} ± {:>3,.0f} MB/s'.format(t[o].mean * 1e-6, t[o].h95 * 1e-6) for o in OPERATIONS)]
for a, u, r, t in means], headers=['algorithm', 'ratio', *OPERATIONS], stralign='right',
disable_numparse=True))
print()
data_type_means.append((data_type, algo_means))
fig, axes = plt.subplots(len(DATA_TYPES), len(OPERATIONS), figsize=(10, 6))
fig.subplots_adjust(top=0.92, bottom=0.1, left=0.08, right=0.88, wspace=0.2, hspace=0.35)
algorithm_colors = dict(zip(sorted(algorithms), PALETTE))
for row, (data_type, algo_means) in enumerate(data_type_means):
for col, operation in enumerate(OPERATIONS):
ax = axes[row, col]
throughput_values = []
for algo, results in algo_means:
points = [(t[operation].mean, t[operation].h95, r) for _, r, t in results]
points.sort(key=itemgetter(0))
throughputs, h95s, ratios = zip(*points)
throughput_values += throughputs
if len(points) > 1:
marker = None
elif algo.startswith('ndzip'):
marker = 'D'
else:
marker = 'o'
ax.errorbar(throughputs, ratios, label=algo, xerr=h95s, color=algorithm_colors[algo], marker=marker,
linewidth=2)
ax.set_title(f'{data_type} {operation}')
ax.set_xscale('log')
if throughput_values:
ax.set_xlim(min(throughput_values) / 2, max(throughput_values) * 2)
ax.set_xlabel('arithmetic mean uncompressed throughput [B/s]')
ax.set_ylabel('arithmetic mean compression ratio')
fig.legend(
handles=[patches.Patch(color=c, label=a) for a, c in sorted(algorithm_colors.items(), key=itemgetter(0))],
loc='center right')
if output_pgf:
plt.savefig('benchmark.pgf')
else:
plt.show()
def plot_scaling(algorithms, by_data_type_and_algorithm, output_pgf):
data_type_means = []
for row, data_type in enumerate(DATA_TYPES):
means = []
algo_means_dict = defaultdict(list)
for algo, results_by_tunable in by_data_type_and_algorithm[data_type].items():
max_tunable_results = results_by_tunable[max(results_by_tunable.keys())]
if len(max_tunable_results) > 1:
for num_threads, results in max_tunable_results.items():
throughput_stats = {op: ThroughputStats(results, op) for op in OPERATIONS}
means.append((algo, num_threads, throughput_stats))
algo_means_dict[algo].append((num_threads, throughput_stats))
means.sort(key=itemgetter(0, 1))
for v in algo_means_dict.values():
v.sort(key=itemgetter(0))
algo_means = sorted(algo_means_dict.items(), key=itemgetter(0))
print(f'({data_type})')
print(tabulate([[f'{a} {u}',
*('{:,.0f} ± {:>3,.0f} MB/s'.format(t[o].mean * 1e-6, t[o].h95 * 1e-6) for o in OPERATIONS)]
for a, u, t in means], headers=['algorithm', *OPERATIONS], stralign='right',
disable_numparse=True))
print()
data_type_means.append((data_type, algo_means))
fig, axes = plt.subplots(len(DATA_TYPES), len(OPERATIONS), figsize=(10, 6))
fig.subplots_adjust(top=0.92, bottom=0.1, left=0.08, right=0.88, wspace=0.2, hspace=0.35)
algorithm_colors = dict(zip(sorted(algorithms), PALETTE))
for row, (data_type, algo_means) in enumerate(data_type_means):
for col, operation in enumerate(OPERATIONS):
ax = axes[row, col]
throughput_values = []
for algo, results in algo_means:
points = [(threads, t[operation].mean, t[operation].h95) for threads, t in results]
threads, throughputs, h95s = zip(*points)
throughput_values += throughputs
                ax.errorbar(threads, throughputs, label=f'{algo} {data_type} {operation}', yerr=h95s,
                            color=algorithm_colors[algo], marker='o')  # match the legend's color mapping
ax.set_title(f'{data_type} {operation}')
ax.set_xscale('log')
ax.xaxis.set_major_formatter(ticker.ScalarFormatter())
ax.xaxis.set_minor_formatter(ticker.ScalarFormatter())
ax.set_yscale('log')
if throughput_values:
start, stop = min(throughput_values) / 2, max(throughput_values) * 2
ax.set_ylim(start, stop)
ax.yaxis.set_minor_formatter(ticker.LogFormatterSciNotation(minor_thresholds=(2, 0.5)))
ax.set_xlabel('number of threads')
ax.set_ylabel('arithmetic mean uncompressed throughput [B/s]')
fig.legend(
handles=[patches.Patch(color=c, label=a) for a, c in sorted(algorithm_colors.items(), key=itemgetter(0))],
loc='center right')
if output_pgf:
plt.savefig('scaling.pgf')
else:
plt.show()
def main():
parser = ArgumentParser(description='Visualize benchmark results')
parser.add_argument('csv_files', metavar='CSVS', nargs='*', help='benchmark csv files')
parser.add_argument('--scaling', action='store_true', help='plot scaling (default: throughput vs ratio)')
parser.add_argument('--pgf', action='store_true', help='output pgfplots')
args = parser.parse_args()
by_data_type_and_algorithm = defaultdict(lambda: defaultdict(lambda: defaultdict(lambda: defaultdict(list))))
algorithms = set()
for f in input_files(args.csv_files):
rows = list(csv.reader(f, delimiter=';'))
column_names = rows[0]
for r in rows[1:]:
a = dict(zip(column_names, r))
num_threads = int(a.get('number of threads', 1))
by_data_type_and_algorithm[a['data type']][a['algorithm']][int(a['tunable'])][num_threads].append(a)
algorithms.add(a['algorithm'])
if not args.scaling:
plot_throughput_vs_ratio(algorithms, by_data_type_and_algorithm, args.pgf)
else:
plot_scaling(algorithms, by_data_type_and_algorithm, args.pgf)
if __name__ == '__main__':
main()
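# Hypothetical invocations (editor's sketch):
#   ./benchmark ... | ./plot_benchmark.py -           # throughput vs. ratio
#   ./plot_benchmark.py results.csv --scaling --pgf   # scaling plot as .pgf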

27_remove_elements/remove_elements.py | daniel-hocking/leetcode | MIT

'''
Description: Given an array nums and a value val, remove all instances of
that value in-place and return the new length.
Do not allocate extra space for another array, you must do this by
modifying the input array in-place with O(1) extra memory.
eg.
Input: [3,2,2,3], val = 3
Output: 2
and array = [2, 2]
Written by: Daniel Hocking
Date created: 26/05/2018
https://leetcode.com/problems/remove-element/description/
'''
class Solution:
def removeElement(self, nums, val):
"""
:type nums: List[int]
:rtype: int
"""
original_len = len(nums)
if not original_len:
return original_len
pointer = 0
for i in range(1, original_len):
if nums[pointer] == val:
nums[pointer], nums[i] = nums[i], nums[pointer]
if nums[pointer] != val:
pointer += 1
return pointer + (1 if nums[pointer] != val else 0)
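        # Editor's note (sketch): `pointer` trails the scan index `i`; whenever
        # nums[pointer] == val it is swapped forward, so everything left of
        # `pointer` stays val-free -- O(n) time, O(1) extra space.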
def test_remove_element(nums, val):
'''
>>> test_remove_element([], 0)
(0, [])
>>> test_remove_element([1, 2], 3)
(2, [1, 2])
>>> test_remove_element([1, 2, 2, 4], 4)
(3, [1, 2, 2])
>>> test_remove_element([1, 1, 1, 1, 2, 2, 4], 1)
(3, [2, 2, 4])
>>> test_remove_element([0,0,1,1,1,2,2,3,3,4], 3)
(8, [0, 0, 1, 1, 1, 2, 2, 4])
>>> test_remove_element([3, 2, 2, 3], 3)
(2, [2, 2])
'''
sol = Solution()
num = sol.removeElement(nums, val)
    return num, nums[:num]
if __name__ == '__main__':
import doctest
doctest.testmod()

src/rostran/core/parameters.py | aliyun/alibabacloud-ros-tool-transformer | Apache-2.0

import re
from openpyxl.cell.cell import Cell
from .exceptions import InvalidTemplateParameter
from .utils import get_and_validate_cell
class Parameter:
TYPES = (STRING, NUMBER, LIST, MAP, BOOLEAN) = (
"String",
"Number",
"CommaDelimitedList",
"Json",
"Boolean",
)
def __init__(self, name, type, default=None, association_property=None, description=None,
constraint_description=None, allowed_values=None, min_length=None, max_length=None,
allowed_pattern=None, no_echo=None, min_value=None, max_value=None, label=None):
self.name = name
self.type = type
self.default = default
self.association_property = association_property
self.description = description
self.constraint_description = constraint_description
self.allowed_values = allowed_values
self.allowed_pattern = allowed_pattern
self.min_length = min_length
self.max_length = max_length
self.no_echo = no_echo
self.min_value = min_value
self.max_value = max_value
self.label = label
@classmethod
def initialize_from_excel(cls, header_cell: Cell, data_cell: Cell):
param_name = get_and_validate_cell(header_cell, InvalidTemplateParameter)
result = re.findall(r"(\S+)\((\S+)\)", param_name)
if result:
param_name, param_type = result[0]
if param_type not in cls.TYPES:
raise InvalidTemplateParameter(
name=param_name,
reason=f"Type {param_type} of {header_cell} is not supported. Allowed types: {cls.TYPES}",
)
else:
param_name, param_type = param_name, cls.STRING
return cls(name=param_name, type=param_type, default=data_cell.value)
def validate(self):
if self.type not in self.TYPES:
raise InvalidTemplateParameter(
name=self.name,
reason=f"Type {self.type} is not supported. Allowed types: {self.TYPES}",
)
class Parameters(dict):
def add(self, param: Parameter):
if param.name is None:
raise InvalidTemplateParameter(
name=param.name, reason="Parameter name should not be None"
)
self[param.name] = param
def as_dict(self) -> dict:
data = {}
for key, param in self.items():
value = {"Type": param.type}
if param.default is not None:
value.update({"Default": param.default})
if param.association_property is not None:
value.update({"AssociationProperty": param.association_property})
if param.description is not None:
value.update({"Description": param.description})
if param.constraint_description is not None:
value.update({"ConstraintDescription": param.constraint_description})
if param.allowed_values is not None:
value.update({"AllowedValues": param.allowed_values})
if param.min_length is not None:
value.update({"MinLength": param.min_length})
if param.max_length is not None:
value.update({"MaxLength": param.max_length})
if param.allowed_pattern is not None:
value.update({"AllowedPattern": param.allowed_pattern})
if param.no_echo is not None:
value.update({"NoEcho": param.no_echo})
if param.min_value is not None:
value.update({"MinValue": param.min_value})
if param.max_value is not None:
value.update({"MaxValue": param.max_value})
if param.label is not None:
value.update({"Label": param.label})
data[key] = value
return data
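# Minimal usage sketch (hypothetical parameter values; uses only the classes above):
if __name__ == "__main__":
    params = Parameters()
    params.add(Parameter(name="InstanceCount", type=Parameter.NUMBER, default=2,
                         min_value=1, max_value=10))
    params.add(Parameter(name="VpcName", type=Parameter.STRING,
                         description="Name of the VPC"))
    # Renders the template parameter section, e.g.
    # {'InstanceCount': {'Type': 'Number', 'Default': 2, 'MinValue': 1, 'MaxValue': 10}, ...}
    print(params.as_dict())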
| 38.465347 | 110 | 0.607465 | 442 | 3,885 | 5.169683 | 0.190045 | 0.042888 | 0.047265 | 0.073523 | 0.189059 | 0.113786 | 0 | 0 | 0 | 0 | 0 | 0.000368 | 0.301158 | 3,885 | 100 | 111 | 38.85 | 0.841252 | 0 | 0 | 0.034884 | 0 | 0 | 0.093436 | 0.005405 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05814 | false | 0 | 0.046512 | 0 | 0.162791 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
899b6653b64c0d95ab249abc168b2a37a76ba7d3 | 3,730 | py | Python | src/notification/notification.py | FrancoisChastel/DD2480_Software-Engineering_CI | 26424af8a349cc0abdc9a256bf91b161d989c702 | [
"BSD-2-Clause"
] | 1 | 2018-02-04T22:02:01.000Z | 2018-02-04T22:02:01.000Z | src/notification/notification.py | FrancoisChastel/DD2480_Software-Engineering_CI | 26424af8a349cc0abdc9a256bf91b161d989c702 | [
"BSD-2-Clause"
] | 4 | 2018-02-04T13:43:40.000Z | 2018-02-07T00:25:33.000Z | src/notification/notification.py | FrancoisChastel/DD2480_Software-Engineering_CI | 26424af8a349cc0abdc9a256bf91b161d989c702 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import email.message as e
import smtplib
import configs
import communication
def send_notifications(result):
"""
Send an e-mail notification describing the state of the testing and compiling process
:param result: communication object (see communication.py) that holds all the information about the process
:return: True once the mail is sent
"""
#Retrieving the response from system tests to send
message = get_message(result)
#can be changed: Email to send from and email to send to
fromaddr = 'DD2480.CI@gmail.com'
toaddrs = 'DD2480.CI@gmail.com'
# setting up message fields
m = e.Message()
m['From'] = "DD2480.CI@gmail.com"
m['To'] = "DD2480.CI@gmail.com"
m['Subject'] = "Compilation and Test results"
m.set_payload(message)
#logging into smtp server and sending mail
username = 'DD2480.CI@gmail.com'
password = 'DD2480CI'
server = smtplib.SMTP('smtp.gmail.com:587') #log into smtp
server.ehlo() #setting up server communication
server.starttls() #start tls for secure connection
server.login(username, password) #log into email account
server.sendmail(fromaddr, toaddrs, m.as_string()) #sending email
server.quit()
return True
def get_message(result):
"""
Build the notification string that gathers all the useful information
:param result: communication object (see communication.py) that holds all the information about the process
:return: a string holding the message that needs to be sent
"""
message = ""
state = result.state
if not isinstance(state, communication.State):
if state in [0, 1, 2, 3, 4, 5]:
state = communication.State(state)
else:
raise ValueError("The state {0} is not recognized".format(state))
#check state of input to retrieve correct messages and information
if state == communication.State.COMPILING_FAILED: # failed compilation
message = configs.ER_CPL_MESSAGE % (result.state,
result.author,
result.commit,
result.url_repo,
result.compiling_messages)
elif state == communication.State.TEST_FAILED: # failed test(s)
message = configs.ER_TST_MESSAGE % (result.state,
result.author,
result.commit,
result.url_repo,
result.test_messages)
elif state == communication.State.TEST_SUCCEED: # passed all tests
message = configs.SCC_MESSAGE % (result.state,
result.author,
result.commit,
result.url_repo,
result.test_messages)
elif state == communication.State.TEST_WARNED: # test(s) warning
message = configs.WRN_CPL_MESSAGE % (result.state,
result.author,
result.commit,
result.url_repo,
result.compiling_messages, result.test_messages)
else:
raise ValueError("The state {0} is not managed by the notification system".format(state))
return message
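# Sketch of building a message without sending mail (hypothetical result object;
# assumes communication.State and the configs message templates used above):
if __name__ == "__main__":
    import types
    fake_result = types.SimpleNamespace(
        state=communication.State.TEST_SUCCEED,
        author="jane", commit="abc123",
        url_repo="https://example.org/repo.git",
        test_messages="all tests passed",
        compiling_messages="")
    print(get_message(fake_result))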
| 42.873563 | 114 | 0.555764 | 398 | 3,730 | 5.145729 | 0.346734 | 0.038086 | 0.067383 | 0.039063 | 0.339355 | 0.322754 | 0.307617 | 0.307617 | 0.275391 | 0.275391 | 0 | 0.015319 | 0.369973 | 3,730 | 86 | 115 | 43.372093 | 0.85617 | 0.260858 | 0 | 0.280702 | 0 | 0 | 0.092022 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035088 | false | 0.035088 | 0.070175 | 0 | 0.140351 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
899c6a5e5c9be0abc578e848dc68228baf0cd293 | 482 | py | Python | rescvae/__init__.py | myinxd/rescvae | 3f8025e2924b8bb643d6c2b4925b8d78f5d6dd07 | [
"MIT"
] | 1 | 2018-11-27T13:15:02.000Z | 2018-11-27T13:15:02.000Z | rescvae/__init__.py | myinxd/rescvae | 3f8025e2924b8bb643d6c2b4925b8d78f5d6dd07 | [
"MIT"
] | null | null | null | rescvae/__init__.py | myinxd/rescvae | 3f8025e2924b8bb643d6c2b4925b8d78f5d6dd07 | [
"MIT"
] | 1 | 2021-03-24T02:50:26.000Z | 2021-03-24T02:50:26.000Z | # Copyright (C) 2018 Zhixian MA <zx@mazhixian.me>
# MIT license
# Argument for setup()
__pkgname__ = "rescvae"
__version__ = "0.1.0"
__author__ = "Zhixian MA"
__author_email__ = "zx@mazhixian.me"
__license__ = "MIT"
__keywords__ = "ResCVAE: residual conditional variational autoencoder"
__copyright__ = "Copyright (C) 2018 Zhixian MA"
__url__ = "https://github.com/myinxd/rescvae"
__description__ = ("A toolbox for constructing the residual conditional variational autoencoder.")
| 34.428571 | 98 | 0.76556 | 57 | 482 | 5.824561 | 0.631579 | 0.081325 | 0.084337 | 0.126506 | 0.138554 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026005 | 0.122407 | 482 | 13 | 99 | 37.076923 | 0.758865 | 0.16805 | 0 | 0 | 0 | 0 | 0.581864 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89a1bfc4acc2d83b6cc53fb4e7cfbb3bdd5c6936 | 7,407 | py | Python | Benchmark_2/Speaker.py | MarkFzp/ToM-Pragmatics | 3de1956c36ea40f29a41e4c153c4b8cdc73afc15 | [
"MIT"
] | null | null | null | Benchmark_2/Speaker.py | MarkFzp/ToM-Pragmatics | 3de1956c36ea40f29a41e4c153c4b8cdc73afc15 | [
"MIT"
] | null | null | null | Benchmark_2/Speaker.py | MarkFzp/ToM-Pragmatics | 3de1956c36ea40f29a41e4c153c4b8cdc73afc15 | [
"MIT"
] | null | null | null | import tensorflow as tf
from VisualEncoder import ConvAsFcEncoder
try:
    from vgg16 import vgg16  # only needed for the 'vgg' encode path below
except ImportError:  # keep the module importable when vgg16 is unavailable
    vgg16 = None
import numpy as np
class Agnostic_Speaker:
@property
def message(self):
return self.message_
@property
def log_prob(self):
return self.log_prob_
def __init__(self, encode_type, input_len, dense_len, num_distract, vocabulary_size, temperature, img_height=None, img_width = None, sess = None, **kwargs):
if encode_type == 'fc':
self.target_ = tf.placeholder(dtype = tf.float32, shape = (None, input_len), name='target')
self.ori_distract_ = tf.placeholder(dtype = tf.float32, shape = (None, num_distract, input_len), name = 'distract')
self.distract_ = tf.transpose(self.ori_distract_, perm = [0, 2, 1])
elif encode_type == 'vgg':
#first go through VGG
assert(sess is not None and img_height is not None and img_width is not None)
self.target_imgs_ = tf.placeholder(dtype = tf.float32, shape =(None, img_height, img_width, 3))
self.distract_imgs_ = tf.placeholder(dtype = tf.float32, shape =(None, img_height, img_width, 3))
vgg_target_ = vgg16(self.target_imgs_, 'vgg16_weights.npz', sess)
vgg_distract_ = vgg16(self.distract_imgs_, 'vgg16_weights.npz', sess)
self.target_ = (np.argsort(vgg_target_.probs)[::-1])[0:input_len]
self.distract_ = (np.argsort(vgg_distract_.probs)[::-1])[0:input_len]
self.data_ = tf.expand_dims(tf.concat([tf.expand_dims(self.target_, axis=-1), self.distract_], axis=-1), axis = -1)
#after expand_dim self.data_ should have shape batch_size * input_len * (num_distract+1) * 1
self.data_encoder_ = ConvAsFcEncoder(self.data_, dense_len, (input_len, 1), dense_len, strides=(1,1), activation_fun = tf.sigmoid, name = "speaker_data_encoder")
#after the fully connected layer the dense output should have size batch_size * 1 * (num_distract + 1) * dense_len
#the dense should have shape batch_size * dense_len * (num_distract + 1) * 1
self.symbols_ = ConvAsFcEncoder(tf.transpose(self.data_encoder_.dense, perm=[0,3,2,1]), dense_len, (dense_len, num_distract+1), vocabulary_size, strides=(1,1), name = "speaker_symbols").dense
#after fully connected, the shape would be batch_size * 1 * 1 * vocabulary size
numer = tf.exp(tf.negative(tf.squeeze(self.symbols_)) / temperature)
denom = tf.reshape(tf.reduce_sum(numer, axis=1), (-1,1))
self.probabilities_ = numer / denom
self.distribution_ = tf.distributions.Categorical(probs = self.probabilities_)
sampled_idx = self.distribution_.sample()
self.message_ = tf.one_hot(sampled_idx, vocabulary_size, dtype=tf.float32)
self.log_prob_ = tf.log(tf.gather_nd(self.probabilities_, tf.stack([tf.range(tf.shape(self.probabilities_)[0]), sampled_idx], axis=1)))
print("Speaker tensor: {}".format(self.log_prob_))
class Informed_Speaker:
@property
def message(self):
return self.message_
@property
def log_prob(self):
return self.log_prob_
@property
def logits(self):
return self.logits_
# @property
# def reg_loss(self):
# return self.regularization_
def __init__(self, encode_type, input_len, dense_len, num_distract, num_filter, vocabulary_size, temperature, img_height = None, img_width = None, sess=None,**kwargs):
if encode_type == 'fc':
self.target_ = tf.placeholder(dtype = tf.float32, shape = (None, input_len), name = 'speaker_target')
self.ori_distract_ = tf.placeholder(dtype = tf.float32, shape = (None, num_distract, input_len), name = 'speaker_distract')
self.distract_ = tf.transpose(self.ori_distract_, perm = [0, 2, 1])
elif encode_type == 'vgg':
#first go through VGG
assert(sess is not None and img_height is not None and img_width is not None)
self.target_imgs_ = tf.placeholder(dtype = tf.float32, shape =(None, img_height, img_width, 3))
self.distract_imgs_ = tf.placeholder(dtype = tf.float32, shape =(None, img_height, img_width, 3))
vgg_target_ = vgg16(self.target_imgs_, 'vgg16_weights.npz', sess)
vgg_distract_ = vgg16(self.distract_imgs_, 'vgg16_weights.npz', sess)
self.target_ = (np.argsort(vgg_target_.probs)[::-1])[0:input_len]
self.distract_ = (np.argsort(vgg_distract_.probs)[::-1])[0:input_len]
#save the inputs for testing
tf.get_default_graph().add_to_collection("Speaker_input", self.target_)
tf.get_default_graph().add_to_collection("Speaker_input", self.ori_distract_)
self.data_ = tf.expand_dims(tf.concat([tf.expand_dims(self.target_, axis=-1) , self.distract_], axis=-1), axis=-1)
with tf.variable_scope('Teacher_Update'):
#no sigmoid nonlinearity here
self.data_encoder_ = ConvAsFcEncoder(self.data_, dense_len, (input_len, 1), dense_len, strides=(1,1), name = "speaker_data_encoder")
#after the above fully connected operation, the output size is batch_size * 1*(num_distract+1)*dense_len
#use transpose to change shape to batch_size * (num_distract + 1) * dense_len * 1
self.feature_maps_ = tf.layers.conv2d(tf.transpose(self.data_encoder_.dense, perm=[0, 2, 3, 1]), filters = num_filter, kernel_size = (num_distract+1,1),strides=(1,1), activation = tf.sigmoid, name = 'feature_map')
#after convolution, the shape would be batch_size * 1 * dense_len * num_filters
#need to transpose to batch_size * num_filter * dense_len * 1
self.combined_feature_map_ = tf.layers.conv2d(tf.transpose(self.feature_maps_, perm = [0, 3, 2, 1]), filters = 1, kernel_size = (num_filter, 1), strides = (1,1), name = 'combined_feature_map')
#after combination, the shape would be batch_size * 1 * dense_len * 1
#the next step is not mentioned in the paper, need to find further confirmation
self.logits_ = ConvAsFcEncoder(self.combined_feature_map_, dense_len, (1, dense_len), vocabulary_size, strides = (1,1), name = "speaker_symbols").dense
self.logits_ = tf.squeeze(self.logits_)
self.probabilities_ = tf.nn.softmax(self.logits_/temperature)
# self.numer = tf.exp(self.logits_ / temperature)
# self.denom = tf.reshape(tf.reduce_sum(self.numer, axis=1),(-1,1))
# self.probabilities_ = self.numer / self.denom
self.distribution_ = tf.distributions.Categorical(probs = self.probabilities_)
sampled_idx = self.distribution_.sample()
self.message_ = tf.one_hot(sampled_idx, vocabulary_size, dtype=tf.float32, name = 'speaker_message')
tf.get_default_graph().add_to_collection("Speaker_input", self.message_)
self.log_prob_ = tf.log(tf.gather_nd(self.probabilities_, tf.stack([tf.range(tf.shape(self.probabilities_)[0]), sampled_idx], axis=1)))
self.reg_varlist_ = [v for v in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES) if v.name.startswith('Teacher')]
# self.regularization_ = 0 * tf.add_n([ tf.nn.l2_loss(v) for v in self.reg_varlist_ if 'bias' not in v.name ])
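# Smoke-test sketch for the 'fc' path (hypothetical sizes; assumes TF 1.x and
# the ConvAsFcEncoder module imported above):
if __name__ == "__main__":
    speaker = Informed_Speaker('fc', input_len=10, dense_len=16, num_distract=3,
                               num_filter=4, vocabulary_size=20, temperature=1.0)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        msg = sess.run(speaker.message, feed_dict={
            speaker.target_: np.random.rand(2, 10),
            speaker.ori_distract_: np.random.rand(2, 3, 10)})
        print(msg.shape)  # expected (2, 20): one one-hot symbol per batch item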
| 64.408696 | 225 | 0.663697 | 1,005 | 7,407 | 4.619901 | 0.170149 | 0.029291 | 0.030153 | 0.03446 | 0.701917 | 0.667241 | 0.639457 | 0.621581 | 0.593151 | 0.559121 | 0 | 0.021747 | 0.217767 | 7,407 | 114 | 226 | 64.973684 | 0.7796 | 0.171189 | 0 | 0.561644 | 0 | 0 | 0.051668 | 0 | 0 | 0 | 0 | 0 | 0.027397 | 1 | 0.09589 | false | 0 | 0.041096 | 0.068493 | 0.232877 | 0.013699 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89a363fe1159744b8b9978656efa4f210d11eee9 | 1,704 | py | Python | CaptchaBreaker_cmd/CaptchaBreaker.py | alstjgg/captcha_image_preprocess | 5ecbf8eab3ce65e0a92c5e0ff10c51fd0de26cb5 | [
"MIT"
] | 2 | 2019-12-06T14:19:09.000Z | 2021-12-10T07:47:27.000Z | CaptchaBreaker_cmd/CaptchaBreaker.py | alstjgg/captcha_image_preprocess | 5ecbf8eab3ce65e0a92c5e0ff10c51fd0de26cb5 | [
"MIT"
] | 5 | 2021-03-18T21:59:16.000Z | 2022-03-11T23:38:47.000Z | CaptchaBreaker_cmd/CaptchaBreaker.py | alstjgg/captcha_image_preprocess | 5ecbf8eab3ce65e0a92c5e0ff10c51fd0de26cb5 | [
"MIT"
] | 1 | 2020-11-24T16:05:37.000Z | 2020-11-24T16:05:37.000Z | from load import get_image
from Preprocessing import bw
import process_manage
import testProcess
import argparse
def menu(args):
if args.option == 1:
res = process_manage.process(get_image(args.path), args.order)
res.show()
elif args.option == 2:
process_manage.show_rate(args.path, args.order)
elif args.option == 3:
testProcess.test_binarisation(get_image(args.path))
elif args.option == 4:
image = bw(get_image(args.path))
testProcess.test_morphology(image)
elif args.option == 5:
image = bw(get_image(args.path))
testProcess.test_blur(image)
# help descriptions
option_help = 'Choose operation' \
'\n1. Preprocess image' \
'\n2. Show success rate for dataset' \
'\n3. Test binarisation' \
'\n4. Test morphology' \
'\n5. Test blurring'
path_help = 'Path to data or link to image'
order_help = 'Choose order of processing' \
'\n1. Binarisation' \
'\n2. Cropping' \
'\n3. Closing' \
'\n4. Blurring'
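# Example invocations (assuming the sibling modules above are importable):
#   python CaptchaBreaker.py 1 --path captcha.png --order 1234
#   python CaptchaBreaker.py 3 --path http://www.gov.kr/captcha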
# parser
parser = argparse.ArgumentParser(description='Preprocess Captcha images',
formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('option', type=int,
choices=range(1, 6), help=option_help)
parser.add_argument('--path', dest='path',
default='http://www.gov.kr/captcha',
help=path_help + '\n(default: %(default)s)')
parser.add_argument('--order', dest='order', default='1234',
help=order_help + '\n(default: %(default)s)')
args = parser.parse_args()
menu(args) | 33.411765 | 79 | 0.603286 | 198 | 1,704 | 5.075758 | 0.393939 | 0.039801 | 0.047761 | 0.063682 | 0.115423 | 0.075622 | 0.075622 | 0.075622 | 0 | 0 | 0 | 0.016142 | 0.272887 | 1,704 | 51 | 80 | 33.411765 | 0.794996 | 0.014085 | 0 | 0.047619 | 0 | 0 | 0.221097 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02381 | false | 0 | 0.119048 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89a50781df307585317def9fc6141f6123c76bac | 872 | py | Python | generate_index.py | AnonyKagamine/MessengerDataIndex | 2059ac7a82e2b2549a1c6ccf11c06f35851a9853 | [
"MIT"
] | null | null | null | generate_index.py | AnonyKagamine/MessengerDataIndex | 2059ac7a82e2b2549a1c6ccf11c06f35851a9853 | [
"MIT"
] | null | null | null | generate_index.py | AnonyKagamine/MessengerDataIndex | 2059ac7a82e2b2549a1c6ccf11c06f35851a9853 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import sys
import json
from urllib.request import urljoin
GH_PAGES_PREFIX = "https://anonykagamine.github.io"
OUTPUT_FILENAME = "index.json"
def main():
index_list = []
for filename in sys.stdin.readlines():
filename = filename.strip("\n")
with open(filename, "r") as f:
jsonobj = json.load(f)
index_column = {}
index_column["title"] = jsonobj["_title"]
index_column["description"] = jsonobj["_description"]
index_column["provider"] = jsonobj["_provider"]
# index_column["url"] = urljoin(GH_PAGES_PREFIX, filename)
index_column["url"] = "../" + filename
index_list.append(index_column)
with open(OUTPUT_FILENAME, "w") as f:
json.dump(index_list, f, indent=2, ensure_ascii=False)
if __name__ == '__main__':
main()
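# Example (hypothetical file list piped on stdin; writes index.json):
#   ls data/*.json | python3 generate_index.py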
| 31.142857 | 70 | 0.619266 | 102 | 872 | 5.019608 | 0.5 | 0.150391 | 0.054688 | 0.078125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00303 | 0.243119 | 872 | 27 | 71 | 32.296296 | 0.772727 | 0.084862 | 0 | 0 | 0 | 0 | 0.138191 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.142857 | 0 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89a624d75795630a0a33b81f16f5fb33f23b6c5a | 1,480 | py | Python | wav_reader.py | Acemyzoe/voiceclassifier-vggvox | 035f6553f85d51f1b60983acc425e6a926bf6ca9 | [
"MIT"
] | 3 | 2020-06-17T12:57:24.000Z | 2021-07-20T14:19:11.000Z | wav_reader.py | Acemyzoe/voiceclassifier-vggvox | 035f6553f85d51f1b60983acc425e6a926bf6ca9 | [
"MIT"
] | 1 | 2021-05-14T11:40:09.000Z | 2021-05-14T11:40:09.000Z | wav_reader.py | Acemyzoe/voiceclassifier-vggvox | 035f6553f85d51f1b60983acc425e6a926bf6ca9 | [
"MIT"
] | null | null | null | import librosa
import numpy as np
from scipy.signal import lfilter, butter
import sigproc
import constants as c
def load_wav(filename, sample_rate):
audio, sr = librosa.load(filename, sr=sample_rate, mono=True)
audio = audio.flatten()
return audio
def normalize_frames(m,epsilon=1e-12):
return np.array([(v - np.mean(v)) / max(np.std(v),epsilon) for v in m])
# https://github.com/christianvazquez7/ivector/blob/master/MSRIT/rm_dc_n_dither.m
def remove_dc_and_dither(sin, sample_rate):
if sample_rate == 16e3:
alpha = 0.99
elif sample_rate == 8e3:
alpha = 0.999
else:
print("Sample rate must be 16kHz or 8kHz only")
exit(1)
sin = lfilter([1,-1], [1,-alpha], sin)
dither = np.random.random_sample(len(sin)) + np.random.random_sample(len(sin)) - 1
spow = np.std(dither)
sout = sin + 1e-6 * spow * dither
return sout
def get_fft_spectrum(filename, buckets):
signal = load_wav(filename,c.SAMPLE_RATE)
signal *= 2**15
# get FFT spectrum
signal = remove_dc_and_dither(signal, c.SAMPLE_RATE)
signal = sigproc.preemphasis(signal, coeff=c.PREEMPHASIS_ALPHA)
frames = sigproc.framesig(signal, frame_len=c.FRAME_LEN*c.SAMPLE_RATE, frame_step=c.FRAME_STEP*c.SAMPLE_RATE, winfunc=np.hamming)
fft = abs(np.fft.fft(frames,n=c.NUM_FFT))
fft_norm = normalize_frames(fft.T)
# truncate to max bucket sizes
rsize = max(k for k in buckets if k <= fft_norm.shape[1])
rstart = int((fft_norm.shape[1]-rsize)/2)
out = fft_norm[:,rstart:rstart+rsize]
return out
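# Example call (hypothetical wav path and bucket widths; the recording must
# yield at least min(buckets) frames):
#   spec = get_fft_spectrum('speaker01.wav', buckets=[100, 200, 300])
#   # spec has shape (c.NUM_FFT, rsize), truncated to the largest fitting bucket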
| 28.461538 | 130 | 0.731757 | 251 | 1,480 | 4.171315 | 0.410359 | 0.095511 | 0.042025 | 0.032474 | 0.049666 | 0.049666 | 0 | 0 | 0 | 0 | 0 | 0.02502 | 0.135811 | 1,480 | 51 | 131 | 29.019608 | 0.793589 | 0.084459 | 0 | 0 | 0 | 0 | 0.028127 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.138889 | 0.027778 | 0.361111 | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89aa8fb88d06975a11ab90766b62f3f73699496f | 4,393 | py | Python | scripts/preprocess_snli.py | jabalazs/gating | 713f954656bea127ea331ab85aa83f6aaad21954 | [
"MIT"
] | 10 | 2019-04-08T02:09:37.000Z | 2021-05-04T10:30:44.000Z | scripts/preprocess_snli.py | lizezhonglaile/gating | 713f954656bea127ea331ab85aa83f6aaad21954 | [
"MIT"
] | null | null | null | scripts/preprocess_snli.py | lizezhonglaile/gating | 713f954656bea127ea331ab85aa83f6aaad21954 | [
"MIT"
] | 4 | 2019-09-24T14:24:25.000Z | 2021-09-02T14:41:38.000Z | #!/usr/bin/env python
import argparse
import random
import os
import sys
import pandas as pd
import colored_traceback
# This script is supposed to be executed from the project's top-level directory
sys.path.append(os.path.abspath(os.curdir))
from src.utils.io import load_or_create
from src.corpus.lang import Lang
import src.config as config
random.seed(1234)
colored_traceback.add_hook(always=True)
arg_parser = argparse.ArgumentParser(description="Preprocess SNLI dataset")
arg_parser.add_argument(
"--force_reload",
action="store_true",
help="Whether to reload pickles or not (makes the "
"process slower, but ensures data coherence)",
)
arg_parser.add_argument(
"--reload_lang",
action="store_true",
help="Whether to reload pickles or not within Lang (makes the "
"process slower, but ensures data coherence)",
)
arg_parser.add_argument(
"--min_freq_threshold",
type=int,
default=2,
help="Only words that appear at least this number "
"of times will be considered",
)
SNLI_FIELDS = [
"prem_token_ids",
"hypo_token_ids",
"prem_char_ids",
"hypo_char_ids",
"label_id",
"pairID",
]
def main():
args = arg_parser.parse_args()
basename = os.path.basename(config.SNLI_TRAIN_PATH)
filename_no_ext = os.path.splitext(basename)[0]
train_pickle_path = os.path.join(config.CACHE_PATH, filename_no_ext + ".pkl")
train = load_or_create(
train_pickle_path,
pd.read_json,
config.SNLI_TRAIN_PATH,
lines=True,
force_reload=args.force_reload,
)
prems = train["sentence1"].tolist()
hyps = train["sentence2"].tolist()
all_train_sents = prems + hyps
lang = Lang(
all_train_sents,
mode="snli",
min_freq_threshold=args.min_freq_threshold,
force_reload=args.reload_lang,
)
print("Preprocessing training set")
# New columns must be named the same as SNLI_FIELDS
train["prem_token_ids"] = train["sentence1"].apply(lang.sent2ids)
train["hypo_token_ids"] = train["sentence2"].apply(lang.sent2ids)
train["prem_char_ids"] = train["sentence1"].apply(lang.sent2char_ids)
train["hypo_char_ids"] = train["sentence2"].apply(lang.sent2char_ids)
# label_encoder = LabelEncoder()
# label_encoder.fit(train['gold_label'])
def label_map(label_str):
return config.LABEL2ID[label_str]
train = train[train["gold_label"] != "-"]
train["label_id"] = train["gold_label"].apply(label_map)
# We just need a subset of the columns
train = train[SNLI_FIELDS]
if not os.path.exists(config.PREPROCESSED_DATA_PATH):
os.makedirs(config.PREPROCESSED_DATA_PATH)
print(f"Created {config.PREPROCESSED_DATA_PATH}")
train.to_json(
config.SNLI_TRAIN_PREPROCESSED_PATH, orient="records", lines=True
)
del train
print(f"{config.SNLI_TRAIN_PREPROCESSED_PATH} created")
print("Preprocessing dev set")
dev = pd.read_json(config.SNLI_DEV_PATH, lines=True)
dev["prem_token_ids"] = dev["sentence1"].apply(lang.sent2ids)
dev["hypo_token_ids"] = dev["sentence2"].apply(lang.sent2ids)
dev["prem_char_ids"] = dev["sentence1"].apply(lang.sent2char_ids)
dev["hypo_char_ids"] = dev["sentence2"].apply(lang.sent2char_ids)
# Remove extraneous labels
dev = dev[dev["gold_label"] != "-"]
# dev['label_id'] = label_encoder.transform(dev['gold_label'])
dev["label_id"] = dev["gold_label"].apply(label_map)
dev = dev[SNLI_FIELDS]
dev.to_json(config.SNLI_DEV_PREPROCESSED_PATH, orient="records", lines=True)
del dev
print(f"{config.SNLI_DEV_PREPROCESSED_PATH} created")
test = pd.read_json(config.SNLI_TEST_PATH, lines=True)
test["prem_token_ids"] = test["sentence1"].apply(lang.sent2ids)
test["hypo_token_ids"] = test["sentence2"].apply(lang.sent2ids)
test["prem_char_ids"] = test["sentence1"].apply(lang.sent2char_ids)
test["hypo_char_ids"] = test["sentence2"].apply(lang.sent2char_ids)
test = test[test["gold_label"] != "-"]
# test['label_id'] = label_encoder.transform(test['gold_label'])
test["label_id"] = test["gold_label"].apply(label_map)
test = test[SNLI_FIELDS]
test.to_json(config.SNLI_TEST_PREPROCESSED_PATH, orient="records", lines=True)
print(f"{config.SNLI_TEST_PREPROCESSED_PATH} created")
if __name__ == "__main__":
main()
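# Run from the project root (paths come from src/config.py):
#   python scripts/preprocess_snli.py --min_freq_threshold 2 --force_reload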
| 30.089041 | 82 | 0.698156 | 602 | 4,393 | 4.833887 | 0.274086 | 0.037113 | 0.028866 | 0.043299 | 0.382131 | 0.148454 | 0.10378 | 0.075601 | 0.075601 | 0.075601 | 0 | 0.009063 | 0.171181 | 4,393 | 145 | 83 | 30.296552 | 0.790168 | 0.091964 | 0 | 0.067961 | 0 | 0 | 0.262563 | 0.034925 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019417 | false | 0 | 0.087379 | 0.009709 | 0.116505 | 0.058252 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89adb232f0752b63ce120de811bab5e524b68d21 | 2,136 | py | Python | recommender_template.py | jairNeto/ibm-recommender-system | e57493ef28623d187f8431b6c756569fdb3fd0e3 | [
"MIT"
] | null | null | null | recommender_template.py | jairNeto/ibm-recommender-system | e57493ef28623d187f8431b6c756569fdb3fd0e3 | [
"MIT"
] | null | null | null | recommender_template.py | jairNeto/ibm-recommender-system | e57493ef28623d187f8431b6c756569fdb3fd0e3 | [
"MIT"
] | null | null | null | import pandas as pd
from recommender_functions import format_df, create_user_item_matrix, \
get_top_articles, user_user_recs_part2, make_content_recs, tokenize
class Recommender():
'''
This class implements a recommender system that suggests the best IBM
articles for each specific user.
It lets you choose among the most common recommendation techniques:
rank-based, collaborative-based and content-based.
'''
def __init__(self, df_path, df_content_path):
'''
INPUT:
df_path - (string) Path to a csv containing the columns
user_id, article_id and title
df_content_path - (string) Path to a csv containing the columns
doc_body, doc_description, doc_full_name, doc_status and article_id
Description:
Init of the Recommender system
'''
self.df = pd.read_csv(df_path)
self.df_content = pd.read_csv(df_content_path)
self.df_content.drop_duplicates(subset='article_id', inplace=True)
self.df = format_df(self.df)
def fit(self):
'''
Description:
Create the user item matrix
'''
self.user_item = create_user_item_matrix(self.df)
def make_recs(self, n_top=5, rec_type='rank', user_id=None):
'''
INPUT:
n_top - (int) The number of recommendations to make
rec_type - (string) The type of the recommendation, could be:
"rank", "collaborative" or "content".
user_id - (int) The user_id you want to make the recommendations for.
OUTPUT:
recs_names - (list) a list with all recommended articles
Description:
Make the top-n recommendations using the chosen technique
'''
if rec_type == 'rank':
return get_top_articles(n_top, self.df)
elif rec_type == 'collaborative':
_, recs_names = user_user_recs_part2(
user_id, self.df, self.user_item, n_top)
return recs_names
else:
_, recs_names = make_content_recs(
self.df, self.df_content, n_top, tokenize, user_id)
return recs_names
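# Example usage (hypothetical CSV paths with the expected columns):
#   rec = Recommender('data/user-item-interactions.csv', 'data/articles_community.csv')
#   rec.fit()
#   top5 = rec.make_recs(n_top=5, rec_type='collaborative', user_id=1)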
| 34.451613 | 76 | 0.639513 | 286 | 2,136 | 4.517483 | 0.332168 | 0.051084 | 0.032508 | 0.03096 | 0.111455 | 0.111455 | 0.05418 | 0.05418 | 0 | 0 | 0 | 0.001971 | 0.287453 | 2,136 | 61 | 77 | 35.016393 | 0.846912 | 0.409176 | 0 | 0.090909 | 0 | 0 | 0.029779 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0 | 0.090909 | 0 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89ae3151e34cd933a8d475aab9fdeb8b89601173 | 5,910 | py | Python | dip.py | chenhsiu48/PytorchWCT | c3346ebaec95358ad1d4d5a519d5d0e7de73bc75 | [
"MIT"
] | null | null | null | dip.py | chenhsiu48/PytorchWCT | c3346ebaec95358ad1d4d5a519d5d0e7de73bc75 | [
"MIT"
] | null | null | null | dip.py | chenhsiu48/PytorchWCT | c3346ebaec95358ad1d4d5a519d5d0e7de73bc75 | [
"MIT"
] | 1 | 2020-12-30T03:28:31.000Z | 2020-12-30T03:28:31.000Z | import os
from PIL import Image
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter
def join_path(*dirs):
if len(dirs) == 0:
return ''
path = dirs[0]
for d in dirs[1:]:
path = os.path.join(path, d)
return path
def make_filepath(fpath, dir_name=None, ext_name=None, tag=None):
if dir_name is None:
dir_name = os.path.dirname(fpath)
if dir_name == '':
dir_name = '.'
fname = os.path.basename(fpath)
base, ext = os.path.splitext(fname)
if ext_name is None:
ext_name = ext
elif ext_name != '' and ext_name[0] != '.':
ext_name = '.' + ext_name
name = base
if tag == '':
name = name.split('-')[0]
elif tag is not None:
name = '%s-%s' % (name, tag)
if ext_name != '':
name = '%s%s' % (name, ext_name)
return join_path(dir_name, name)
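# e.g. make_filepath('imgs/cat.jpg', tag='pre_oil', ext_name='png')
#      -> 'imgs/cat-pre_oil.png'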
def ensure_dir(path):
if not os.path.exists(path):
os.makedirs(path)
def rm_files(files):
for f in files:
if os.path.exists(f):
os.remove(f)
def get_saliency_map(image, sigma=24, drop_pct=0.1):
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
(success, sal_map) = saliency.computeSaliency(image)
s = sorted(list(sal_map.reshape(-1)))
th = s[int(len(s) * drop_pct)]
sal_map[sal_map <= th] = 0
sal_map = gaussian_filter(sal_map, sigma=sigma)
sal_map /= np.max(sal_map)
return sal_map
def adjust_gamma(image, gamma=1.0):
# build a lookup table mapping the pixel values [0, 255] to
# their adjusted gamma values
invGamma = 1.0 / gamma
table = np.array([((i / 255.0) ** invGamma) * 255
for i in np.arange(0, 256)]).astype("uint8")
# apply gamma correction using the lookup table
return cv2.LUT(image, table)
def match_color(pre_name, ref_img, target_img):
from skimage.io import imread, imsave
from skimage.exposure import match_histograms
reference = imread(ref_img)
image = imread(target_img)
matched = match_histograms(image, reference, multichannel=True)
print(f'match color to {pre_name}')
imsave(pre_name, matched)
def oil_handler(args):
pre_name = make_filepath(args.content, tag='pre_oil', ext_name='png')
args.cleanup.append(pre_name)
print(f'preprocess oil {pre_name}')
im_org = Image.open(args.content)
im_style = Image.open(args.style).resize(im_org.size)
if args.no_saliency:
im_sal_map = np.full((im_org.height, im_org.width), 0)
else:
im_sal_map = get_saliency_map(np.array(im_org), sigma=10, drop_pct=0)
image = np.array(im_org)
hsv = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2HSV)
h = hsv[:,:,0]
s = hsv[:,:,1]
v = hsv[:,:,2]
s = adjust_gamma(s, 1.5)
v = adjust_gamma(v, 0.9)
hsv = np.stack((h, s, v), axis=2)
image = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
image = cv2.bilateralFilter(image, 9, 41, 41)
im = Image.fromarray(image)
im.save(pre_name)
im_edit = im.copy()
args.content = pre_name
pre_name = make_filepath(args.style, tag='edit', ext_name='png')
args.cleanup.append(pre_name)
match_color(pre_name, args.content, args.style)
args.style = pre_name
im_style_edit = Image.open(args.style).resize(im_org.size)
return (im_org, im_sal_map, im_edit, im_style, im_style_edit)
def water_handler(args):
pre_name = make_filepath(args.content, tag='pre_water', ext_name='png')
args.cleanup.append(pre_name)
print(f'preprocess water {pre_name}')
im_org = Image.open(args.content)
im_style = Image.open(args.style).resize(im_org.size)
if args.no_saliency:
im_sal_map = np.full((im_org.height, im_org.width), 0)
else:
im_sal_map = get_saliency_map(np.array(im_org), sigma=10, drop_pct=0)
image = np.array(im_org)
hsv = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2HSV)
h = hsv[:,:,0]
s = hsv[:,:,1]
v = hsv[:,:,2]
s = adjust_gamma(s, 0.75)
v = adjust_gamma(v, 1.1)
hsv = np.stack((h, s, v), axis=2)
image = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
im = Image.fromarray(image)
im.save(pre_name)
im_edit = im.copy()
args.content = pre_name
pre_name = make_filepath(args.style, tag='edit', ext_name='png')
args.cleanup.append(pre_name)
match_color(pre_name, args.content, args.style)
args.style = pre_name
im_style_edit = Image.open(args.style).resize(im_org.size)
return (im_org, im_sal_map, im_edit, im_style, im_style_edit)
def pencil_handler(args):
pre_name = make_filepath(args.content, tag='pre_pencil', ext_name='png')
args.cleanup.append(pre_name)
print(f'preprocess pencil {pre_name}')
im_org = Image.open(args.content)
im_style = Image.open(args.style)
if args.no_saliency:
sal_map = np.full((im_org.height, im_org.width), 0)
else:
sal_map = get_saliency_map(np.array(im_org), sigma=20, drop_pct=0.1)
im = im_org.convert('L').convert('RGB')
im.save(pre_name)
args.content = pre_name
im_edit = Image.open(args.content)
pre_name = make_filepath(args.style, tag='edit', ext_name='png')
args.cleanup.append(pre_name)
match_color(pre_name, args.content, args.style)
args.style = pre_name
im_style_edit = Image.open(args.style).resize(im_org.size)
return (im_org, sal_map, im_edit, im_style, im_style_edit)
def ink_handler(args):
im_org = Image.open(args.content)
im_style = Image.open(args.style)
if args.no_saliency:
sal_map = np.full((im_org.height, im_org.width), 0)
else:
sal_map = get_saliency_map(np.array(im_org), sigma=30, drop_pct=0.2)
im_edit = im_org
im_style_edit = im_style.copy()
return (im_org, sal_map, im_edit, im_style, im_style_edit)
handler = { 'oil': oil_handler, 'water': water_handler, 'ink': ink_handler, 'pencil': pencil_handler}
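# Dispatch sketch (args is assumed to be an argparse.Namespace carrying
# .content, .style, .no_saliency and a .cleanup list, as the handlers expect):
#   args.cleanup = []
#   im_org, sal_map, im_edit, im_style, im_style_edit = handler['oil'](args)
#   rm_files(args.cleanup)  # drop the intermediate pre_* / edit files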
| 29.257426 | 101 | 0.654315 | 941 | 5,910 | 3.892667 | 0.166844 | 0.05733 | 0.042588 | 0.034398 | 0.54955 | 0.54955 | 0.54955 | 0.54955 | 0.54955 | 0.54955 | 0 | 0.018606 | 0.208799 | 5,910 | 201 | 102 | 29.402985 | 0.764756 | 0.022166 | 0 | 0.453333 | 0 | 0 | 0.034632 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073333 | false | 0 | 0.046667 | 0 | 0.18 | 0.026667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89af0db8460aec85b4aa8863bfaf4d79688eaa92 | 20,638 | py | Python | mda/app/database.py | 5GZORRO/mda | 2f3bbb058b3017cf7cd720b9003c4c20155e3163 | [
"Apache-2.0"
] | 2 | 2021-03-11T11:08:35.000Z | 2022-03-15T14:23:35.000Z | mda/app/database.py | 5GZORRO/mda | 2f3bbb058b3017cf7cd720b9003c4c20155e3163 | [
"Apache-2.0"
] | 15 | 2021-03-05T16:16:26.000Z | 2021-10-11T16:42:22.000Z | mda/app/database.py | 5GZORRO/mda | 2f3bbb058b3017cf7cd720b9003c4c20155e3163 | [
"Apache-2.0"
] | 3 | 2021-03-22T05:44:49.000Z | 2022-01-13T14:50:47.000Z | from .main import *
engine = create_engine('postgresql+psycopg2://' + POSTGRES_USER + ':' + POSTGRES_PASSWORD + '@' + POSTGRES_HOST + ':' + POSTGRES_PORT + '/' + POSTGRES_DB, pool_size=num_fetch_threads+num_fetch_threads_agg, convert_unicode=True)
# Create database if it does not exist.
if not database_exists(engine.url):
create_database(engine.url)
db_session = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=engine))
Base = declarative_base()
Base.query = db_session.query_property()
class Config(Base):
__tablename__ = 'config'
_id = Column(postgresql.UUID(as_uuid=True), primary_key=True, default=uuid.uuid4, unique=True)
created_at = Column(DateTime, default=datetime.datetime.now)
updated_at = Column(DateTime, nullable=True)
transaction_id = Column(String(256), nullable=False)
instance_id = Column(String(256), nullable=True)
product_id = Column(String(256), nullable=True)
kafka_topic = Column(String(256), nullable=False)
monitoring_endpoint = Column(String(256), nullable=False)
network_slice_id = Column(String(256), nullable=True)
tenant_id = Column(String(256), nullable=False)
resource_id = Column(String(256), nullable=False)
parent_id = Column(String(256), nullable=True)
timestamp_start = Column(DateTime, nullable=False)
timestamp_end = Column(DateTime, nullable=True)
status = Column(Integer, default=1)
metrics = relationship("Metric")
def __init__(self, transaction_id, kafka_topic, network_slice_id, timestamp_start, timestamp_end, tenant_id, resource_id, parent_id, monitoring_endpoint, instance_id, product_id):
self.transaction_id = transaction_id
self.instance_id = instance_id
self.product_id = product_id
self.kafka_topic = kafka_topic
self.network_slice_id = network_slice_id
self.timestamp_start = timestamp_start
self.timestamp_end = timestamp_end
self.tenant_id = tenant_id
self.resource_id = resource_id
self.parent_id = parent_id
self.monitoring_endpoint = monitoring_endpoint
def toString(self):
return ({'id': self._id,
'created_at': self.created_at,
'updated_at': self.updated_at,
'transaction_id': self.transaction_id,
'instance_id': self.instance_id,
'product_id': self.product_id,
'topic': self.kafka_topic,
'monitoring_endpoint': self.monitoring_endpoint,
'timestamp_start': self.timestamp_start,
'timestamp_end': self.timestamp_end,
'metrics': [],
'status': self.status,
'tenant_id' : self.tenant_id,
'context_ids': [
{
'resource_id': self.resource_id,
'network_slice_id': self.network_slice_id,
'parent_id' : self.parent_id
}
]})
class Metric(Base):
__tablename__ = 'metric'
_id = Column(postgresql.UUID(as_uuid=True), primary_key=True, default=uuid.uuid4, unique=True)
config_id = Column(postgresql.UUID(as_uuid=True), ForeignKey('config._id'))
metric_name = Column(String(256), nullable=False)
metric_type = Column(String(256), nullable=False)
aggregation_method = Column(String(256), nullable=True)
step = Column(String(256), nullable=False)
step_aggregation = Column(String(256), nullable=True)
next_run_at = Column(DateTime, nullable=False)
next_aggregation = Column(DateTime, nullable=True)
status = Column(Integer, default=1)
values = relationship("Value", cascade="all, delete")
def __init__(self, metric_name, metric_type, aggregation_method, step, step_aggregation, config_id, next_run_at, next_aggregation):
self.metric_name = metric_name
self.metric_type = metric_type
self.aggregation_method = aggregation_method
self.step = step
self.step_aggregation = step_aggregation
self.config_id = config_id
self.next_run_at = next_run_at
self.next_aggregation = next_aggregation
def toString(self):
return ({'metric_name': self.metric_name,
'metric_type': self.metric_type,
'aggregation_method': self.aggregation_method,
'step': self.step,
'step_aggregation': self.step_aggregation,
'next_run_at': self.next_run_at,
'next_aggregation': self.next_aggregation})
class Value(Base):
__tablename__ = 'value'
timestamp = Column(DateTime, nullable=False, primary_key=True)
metric_id = Column(postgresql.UUID(as_uuid=True), ForeignKey('metric._id'), primary_key=True)
metric_value = Column(Float, nullable=False)
def __init__(self, timestamp, metric_id, metric_value):
self.timestamp = timestamp
self.metric_id = metric_id
self.metric_value = metric_value
# ----------------------------------------------------------------#
seconds_per_unit = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}
def convert_to_seconds(s):
return int(s[:-1]) * seconds_per_unit[s[-1]]
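# e.g. convert_to_seconds('30s') == 30, convert_to_seconds('5m') == 300,
#      convert_to_seconds('2h') == 7200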
def add_config(config: Config_Model, orchestrator, aggregator):
try:
row = Config(config.transaction_id, config.topic, config.context_ids[0].network_slice_id, config.timestamp_start, config.timestamp_end, config.tenant_id, config.context_ids[0].resource_id, config.context_ids[0].parent_id, config.monitoring_endpoint, config.instance_id, config.product_id)
db_session.add(row)
db_session.commit()
response = row.toString()
for metric in config.metrics:
aggregation = None
if metric.step_aggregation != None:
sec_to_add = convert_to_seconds(metric.step_aggregation)
aggregation = row.timestamp_start + relativedelta(seconds=sec_to_add)
row_m = Metric(metric.metric_name, metric.metric_type, metric.aggregation_method, metric.step, metric.step_aggregation, row._id, row.timestamp_start, aggregation)
db_session.add(row_m)
db_session.commit()
# Add to queue
orchestrator.wait_queue.put((row_m.next_run_at, row.timestamp_start, row_m.step, row.timestamp_end, row_m._id, row_m.metric_name, row_m.metric_type, row_m.aggregation_method, row.transaction_id, row.kafka_topic, row.network_slice_id, row.tenant_id, row.resource_id, row_m.step_aggregation, row_m.next_aggregation, row.monitoring_endpoint, config.instance_id, config.product_id))
if row_m.aggregation_method != None:
aggregator.wait_queue_agg.put((row_m.next_aggregation, row.timestamp_start, row_m.step, row.timestamp_end, row_m._id, row_m.metric_name, row_m.metric_type, row_m.aggregation_method, row.transaction_id, row.kafka_topic, row.network_slice_id, row.tenant_id, row.resource_id, row_m.step_aggregation, row_m.next_aggregation, config.instance_id, config.product_id))
response['metrics'].append(row_m.toString())
return response
except Exception as e:
print(e)
return -1
def get_config(config_id):
try:
config = Config.query.filter_by(_id=config_id).first()
if config == None:
return 0
response = config.toString()
metrics = Metric.query.filter_by(config_id=config_id).all()
[response['metrics'].append(metric.toString()) for metric in metrics]
return response
except Exception as e:
print(e)
return -1
def get_configs():
try:
configs = Config.query.all()
response = []
for config in configs:
add_metrics = config.toString()
metrics = Metric.query.filter_by(config_id=config._id).all()
[add_metrics['metrics'].append(metric.toString()) for metric in metrics]
response.append(add_metrics)
return response
except Exception as e:
print(e)
return -1
def delete_metric_queue(metric_id, orchestrator, aggregator):
index = True
while(index):
index = False
for i in range(len(orchestrator.wait_queue.queue)):
if orchestrator.wait_queue.queue[i][4] == metric_id:
del orchestrator.wait_queue.queue[i]
index = True
break
for i in range(len(aggregator.wait_queue_agg.queue)):
if aggregator.wait_queue_agg.queue[i][4] == metric_id:
del aggregator.wait_queue_agg.queue[i]
index = True
break
for i in range(len(orchestrator.metrics_queue.queue)):
if orchestrator.metrics_queue.queue[i][4] == metric_id:
del orchestrator.metrics_queue.queue[i]
index = True
break
for i in range(len(aggregator.aggregation_queue.queue)):
if aggregator.aggregation_queue.queue[i][4] == metric_id:
del aggregator.aggregation_queue.queue[i]
index = True
break
return
def update_config(config_id, config, orchestrator, aggregator):
try:
row = Config.query.filter_by(_id=config_id).first()
if row == None:
return 0
if config.timestamp_end == None and config.metrics == None:
return 1
if config.timestamp_end != None and row.timestamp_end != None and config.timestamp_end <= row.timestamp_end:
return 2
now = datetime.datetime.now()
row.updated_at = now
# Update config
if config.timestamp_end != None:
row.timestamp_end = config.timestamp_end
db_session.commit()
response = row.toString()
# Update metrics
# Delete old metrics
metrics = Metric.query.filter_by(config_id=config_id).all()
for metric in metrics:
delete_metric_queue(metric._id, orchestrator, aggregator)
db_session.delete(metric)
if config.metrics != None:
#Create new metrics
for metric in config.metrics:
aggregation = None
if metric.step_aggregation != None:
sec_to_add = convert_to_seconds(metric.step_aggregation)
aggregation = now + relativedelta(seconds=sec_to_add)
row_m = Metric(metric.metric_name, metric.metric_type, metric.aggregation_method, metric.step, metric.step_aggregation, row._id, now, aggregation)
db_session.add(row_m)
db_session.commit()
# Add to queue
orchestrator.wait_queue.put((row_m.next_run_at, row.timestamp_start, row_m.step, row.timestamp_end, row_m._id, row_m.metric_name, row_m.metric_type, row_m.aggregation_method, row.transaction_id, row.kafka_topic, row.network_slice_id, row.tenant_id, row.resource_id, row_m.step_aggregation, row_m.next_aggregation, row.monitoring_endpoint, config.instance_id, config.product_id))
if row_m.aggregation_method != None:
aggregator.wait_queue_agg.put((row_m.next_aggregation, row.timestamp_start, row_m.step, row.timestamp_end, row_m._id, row_m.metric_name, row_m.metric_type, row_m.aggregation_method, row.transaction_id, row.kafka_topic, row.network_slice_id, row.tenant_id, row.resource_id, row_m.step_aggregation, row_m.next_aggregation, config.instance_id, config.product_id))
response['metrics'].append(row_m.toString())
return response
return get_config(config_id)
except Exception as e:
print(e)
return -1
def update_next_run(metric_id, next_run_at):
try:
metric = Metric.query.filter_by(_id=metric_id).first()
config = Config.query.filter_by(_id=metric.config_id).first()
sec_to_add = convert_to_seconds(metric.step)
next = next_run_at + relativedelta(seconds=sec_to_add)
if config.timestamp_end != None and next > config.timestamp_end:
metric.status = 0
db_session.commit()
else:
metric.next_run_at = next
db_session.commit()
return 1
except Exception as e:
print(e)
return -1
def update_aggregation(metric_id, next_aggregation):
try:
metric = Metric.query.filter_by(_id=metric_id).first()
config = Config.query.filter_by(_id=metric.config_id).first()
sec_to_add = convert_to_seconds(metric.step_aggregation)
next = next_aggregation + relativedelta(seconds=sec_to_add)
if config.timestamp_end != None and next > config.timestamp_end:
metric.status = 0
db_session.commit()
else:
metric.next_aggregation = next
db_session.commit()
return 1
except Exception as e:
print(e)
return -1
def enable_config(config_id, orchestrator, aggregator):
try:
config = Config.query.filter_by(_id=config_id).first()
if config == None or (config.timestamp_end != None and config.timestamp_end < datetime.datetime.now()):
return 0
if config.status == 1:
return 1
config.status = 1
now = datetime.datetime.now()
config.updated_at = now
add_metrics = config.toString()
metrics = Metric.query.filter_by(config_id=config._id).all()
for metric in metrics:
metric.status = 1
metric.next_run_at = now
orchestrator.wait_queue.put((metric.next_run_at, config.timestamp_start, metric.step, config.timestamp_end, metric._id, metric.metric_name, metric.metric_type, metric.aggregation_method, config.transaction_id, config.kafka_topic, config.network_slice_id, config.tenant_id, config.resource_id, metric.step_aggregation, metric.next_aggregation, config.monitoring_endpoint, config.instance_id, config.product_id))
if metric.aggregation_method != None:
sec_to_add = convert_to_seconds(metric.step_aggregation)
metric.next_aggregation = now + relativedelta(seconds=sec_to_add)
aggregator.wait_queue_agg.put((metric.next_aggregation, config.timestamp_start, metric.step, config.timestamp_end, metric._id, metric.metric_name, metric.metric_type, metric.aggregation_method, config.transaction_id, config.kafka_topic, config.network_slice_id, config.tenant_id, config.resource_id, metric.step_aggregation, metric.next_aggregation, config.instance_id, config.product_id))
add_metrics['metrics'].append(metric.toString())
db_session.commit()
return add_metrics
except Exception as e:
print(e)
return -1
def disable_config(config_id, orchestrator, aggregator):
try:
config = Config.query.filter_by(_id=config_id).first()
if config == None:
return 0
if config.status == 0:
return 1
config.status = 0
config.updated_at = datetime.datetime.now()
add_metrics = config.toString()
metrics = Metric.query.filter_by(config_id=config._id).all()
for metric in metrics:
metric.status = 0
add_metrics['metrics'].append(metric.toString())
delete_metric_queue(metric._id, orchestrator, aggregator)
db_session.commit()
return add_metrics
except Exception as e:
print(e)
return -1
def delete_config(config_id, orchestrator, aggregator):
try:
config = Config.query.filter_by(_id=config_id).first()
if config == None:
return 0
metrics = Metric.query.filter_by(config_id=config._id).all()
for metric in metrics:
delete_metric_queue(metric._id, orchestrator, aggregator)
db_session.delete(metric)
db_session.delete(config)
db_session.commit()
return 1
except Exception as e:
print(e)
return -1
def load_database_metrics(orchestrator, aggregator):
try:
# Update old metrics and next executions
now = datetime.datetime.now()
db_session.execute("UPDATE config " \
"SET status = 0 " \
"WHERE status = 1 AND timestamp_end < '"+str(now)+"'; " \
"UPDATE metric " \
"SET next_run_at = '"+str(now)+"', " \
"next_aggregation = CASE WHEN aggregation_method is not null " \
"THEN '"+str(now)+"'::timestamp + step_aggregation::interval END " \
"FROM config c " \
"WHERE c.status = 1 AND next_run_at < '"+str(now)+"';");
db_session.commit()
# Get metrics
result = db_session.execute("SELECT next_run_at, metric_name, metric_type, aggregation_method, step, transaction_id, instance_id, product_id, kafka_topic, network_slice_id, " \
"tenant_id, resource_id, timestamp_start, timestamp_end, metric._id, step_aggregation, " \
"next_aggregation, monitoring_endpoint " \
"FROM metric join config on metric.config_id = config._id " \
"WHERE metric.status = 1;")
for row in result:
orchestrator.wait_queue.put((row['next_run_at'], row['timestamp_start'], row['step'], row['timestamp_end'], row['_id'], row['metric_name'], row['metric_type'], row['aggregation_method'], row['transaction_id'], row['kafka_topic'], row['network_slice_id'], row['tenant_id'], row['resource_id'], row['step_aggregation'], row['next_aggregation'], row['monitoring_endpoint'], row['instance_id'], row['product_id']))
if row['aggregation_method'] != None:
aggregator.wait_queue_agg.put((row['next_aggregation'], row['timestamp_start'], row['step'], row['timestamp_end'], row['_id'], row['metric_name'], row['metric_type'], row['aggregation_method'], row['transaction_id'], row['kafka_topic'], row['network_slice_id'], row['tenant_id'], row['resource_id'], row['step_aggregation'], row['next_aggregation'], row['instance_id'], row['product_id']))
return 1
except Exception as e:
print(e)
return -1
def insert_metric_value(metric_id, metric_value, timestamp):
try:
row = Value(timestamp, metric_id, metric_value)
db_session.add(row)
db_session.commit()
return 1
except Exception as e:
print(e)
return -1
''' Not used now
def create_aggregate_view(metric_id, aggregation_method, step_aggregation):
global db_session
db_session.execute("CREATE VIEW \"agg_"+str(metric_id)+"_"+aggregation_method+"\" " \
"WITH (timescaledb.continuous) AS " \
"SELECT time_bucket(\'"+step_aggregation+"\', timestamp) AS bucket, "+aggregation_method+"(metric_value) AS aggregation " \
"FROM value " \
"WHERE metric_id = '"+str(metric_id)+"' " \
"GROUP BY bucket;")
db_session.commit()
return
def drop_aggregate_view(metric_id, aggregation_method):
db_session.execute("DROP VIEW IF EXISTS \"agg_"+str(metric_id)+"_"+aggregation_method+"\" CASCADE;")
db_session.commit()
return
'''
def get_last_aggregation(metric_id, aggregation_method, bucket, step_aggregation):
#result = db_session.execute("REFRESH VIEW \"agg_"+str(metric_id)+"_"+aggregation_method+"\";" \
# "SELECT * FROM \""+str(metric_id)+"_"+aggregation_method+"\" LIMIT 1;").fetchone()
result = db_session.execute("SELECT "+aggregation_method+"(metric_value) " \
"FROM value " \
"WHERE metric_id = '"+str(metric_id)+"' and timestamp < '"+str(bucket)+"'::timestamp " \
"and timestamp >= ('"+str(bucket)+"'::timestamp - interval '"+str(step_aggregation)+"');").fetchone()
return result[0]
def create_index():
#db_session.execute("CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;" \
# "CREATE INDEX value_index ON value (timestamp ASC, metric_id);" \
# "SELECT create_hypertable('value', 'timestamp', if_not_exists => TRUE);")
db_session.execute("CREATE INDEX value_index ON value (timestamp ASC, metric_id);")
db_session.commit()
return
'''
def drop_all_views():
global db_session
result = db_session.execute("SELECT 'DROP VIEW \"' || table_name || '\" CASCADE;' " \
"FROM information_schema.views " \
"WHERE table_schema NOT IN ('pg_catalog', 'information_schema') AND " \
"table_name !~ '^pg_' AND table_name LIKE 'agg_%';")
for row in result:
try:
db_session.execute(row[0])
except Exception:
pass
db_session.commit()
return
'''
def close_connection():
db_session.remove()
return
def reload_connection():
    global db_session
    db_session.remove()
    db_session = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=engine))
    return
# ----------------------------------------------------------------#
# Reset db if env flag is True
if RESET_DB.lower() == 'true':
try:
try:
db_session.commit()
Base.metadata.drop_all(bind=engine)
except Exception as e:
print(e)
Base.metadata.create_all(bind=engine)
db_session.commit()
create_index()
except Exception as e:
print(e)
sys.exit(0)
# Create db if not exists
try:
resp1 = Config.query.first()
resp2 = Metric.query.first()
resp3 = Value.query.first()
except Exception as e:
try:
Base.metadata.create_all(bind=engine)
db_session.commit()
create_index()
except Exception as e:
print(e)
sys.exit(0)
| 42.728778 | 416 | 0.682334 | 2,670 | 20,638 | 4.997753 | 0.082022 | 0.029676 | 0.022482 | 0.020234 | 0.647857 | 0.567071 | 0.501649 | 0.469649 | 0.44327 | 0.416667 | 0 | 0.007193 | 0.198372 | 20,638 | 482 | 417 | 42.817427 | 0.799383 | 0.039636 | 0 | 0.488127 | 0 | 0.002639 | 0.089484 | 0.002594 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058047 | false | 0.002639 | 0.002639 | 0.007916 | 0.261214 | 0.036939 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89b501b387e90046414f18562a546e79e3957067 | 2,620 | py | Python | mqtt_panel/web/widget/light.py | joseph-tobin/mqtt-panel | df203af5bd5b7e0dd32be1cc5b9deea8400d102a | [
"MIT"
] | null | null | null | mqtt_panel/web/widget/light.py | joseph-tobin/mqtt-panel | df203af5bd5b7e0dd32be1cc5b9deea8400d102a | [
"MIT"
] | null | null | null | mqtt_panel/web/widget/light.py | joseph-tobin/mqtt-panel | df203af5bd5b7e0dd32be1cc5b9deea8400d102a | [
"MIT"
] | null | null | null | import logging
from mqtt_panel.web.widget.widget import Widget
class Light(Widget):
widget_type = 'light'
def __init__(self, *args, **kwargs):
super(Light, self).__init__(*args, **kwargs)
# self._value = self._c['values'][0].get('payload')
self._payload_map = {}
for blob in self._c['values']:
self._payload_map[blob['payload']] = blob
def open(self):
self._mqtt.subscribe(self._c['subscribe'], self._on_mqtt)
def _on_mqtt(self, payload, timestamp):
logging.debug("Light [%s] on_mqtt: %s", self.id, payload)
try:
value = self._payload_map[payload]['payload']
except KeyError:
logging.warning('Unexpected MQTT value: %s', payload)
value = None
self.set_value(value)
def _blob(self):
return {
'value': self.value
}
def _html(self, fh):
self._write_render(fh, '''\
<div class="value">
''', indent=4)
for blob in self._c['values']:
value = blob.get('payload')
display = ''
if self.value != value:
display = ' d-none'
text = blob.get('text', 'text')
icon = blob.get('icon', Default.icon(text))
color = blob.get('color', Default.color(text))
self._write_render(fh, '''\
<div class="value-item value-{value}{display}">
<span class="material-icons" style="color:{color};">{icon}</span>
<span style="color:{color};">{text}</span>
</div>
''', locals(), indent=4)
display = ''
if self.value is not None:
display = ' d-none'
self._write_render(fh, '''\
<div class="value-item value-null{display}">
<span class="material-icons">do_not_disturb</span>
<span>unknown</span>
</div>
</div>
''', locals(), indent=4)
class Default(object):
_map = {
('on', 'true'): ('emoji_objects', 'yellow'),
('off', 'false'): ('emoji_objects','black'),
None: ('help_center', None)
}
@classmethod
def _lookup(cls, key):
key = key.lower()
for keys in cls._map.keys():
if keys and key in keys:
return cls._map[keys]
return cls._map[None]
@classmethod
def icon(cls, key):
return cls._lookup(key)[0]
@classmethod
def color(cls, key):
return cls._lookup(key)[1]
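# e.g. Default.icon('ON') == 'emoji_objects'; Default.color('off') == 'black';
# unknown keys fall back to ('help_center', None)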
Widget.register(Light)
| 29.772727 | 85 | 0.516031 | 295 | 2,620 | 4.420339 | 0.298305 | 0.027607 | 0.025307 | 0.03911 | 0.194785 | 0.150307 | 0.082822 | 0.059816 | 0.059816 | 0 | 0 | 0.003423 | 0.330916 | 2,620 | 87 | 86 | 30.114943 | 0.740445 | 0.018702 | 0 | 0.239437 | 0 | 0 | 0.271312 | 0.071234 | 0 | 0 | 0 | 0 | 0 | 1 | 0.112676 | false | 0 | 0.028169 | 0.042254 | 0.267606 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89b6dd98d28c57ba3f0ec9763862fdf9de99608d | 8,464 | py | Python | mqtty/config.py | masayukig/mqtty | 7b2439959bb1d308e0cb4f0e98316e8ee8df6aa2 | [
"Apache-2.0"
] | null | null | null | mqtty/config.py | masayukig/mqtty | 7b2439959bb1d308e0cb4f0e98316e8ee8df6aa2 | [
"Apache-2.0"
] | 9 | 2017-08-23T08:34:55.000Z | 2017-12-16T13:39:50.000Z | mqtty/config.py | masayukig/mqtty | 7b2439959bb1d308e0cb4f0e98316e8ee8df6aa2 | [
"Apache-2.0"
] | 1 | 2019-06-04T17:48:15.000Z | 2019-06-04T17:48:15.000Z | # Copyright 2014 OpenStack Foundation
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import os
import re
try:
import ordereddict
except ImportError:
pass
import yaml
import voluptuous as v
import mqtty.keymap
import mqtty.palette
try:
OrderedDict = collections.OrderedDict
except AttributeError:
OrderedDict = ordereddict.OrderedDict
DEFAULT_CONFIG_PATH = '~/.mqtty.yaml'
class ConfigSchema(object):
server = {v.Required('name'): str,
v.Required('host'): str,
}
servers = [server]
topic = {'name': str,
'topic': str,
}
subscribed_topics = [topic]
_sort_by = v.Any('number', 'updated', 'last-seen', 'project')
sort_by = v.Any(_sort_by, [_sort_by])
text_replacement = {'text': v.Any(str,
{'color': str,
v.Required('text'): str})}
link_replacement = {'link': {v.Required('url'): str,
v.Required('text'): str}}
search_replacement = {'search': {v.Required('query'): str,
v.Required('text'): str}}
replacement = v.Any(text_replacement, link_replacement, search_replacement)
palette = {v.Required('name'): str,
v.Match('(?!name)'): [str]}
palettes = [palette]
dashboard = {v.Required('name'): str,
v.Required('query'): str,
v.Optional('sort-by'): sort_by,
v.Optional('reverse'): bool,
v.Required('key'): str}
dashboards = [dashboard]
reviewkey_approval = {v.Required('category'): str,
v.Required('value'): int}
reviewkey = {v.Required('approvals'): [reviewkey_approval],
'submit': bool,
v.Required('key'): str}
reviewkeys = [reviewkey]
hide_comment = {v.Required('author'): str}
hide_comments = [hide_comment]
change_list_options = {'sort-by': sort_by,
'reverse': bool}
keymap = {v.Required('name'): str,
v.Match('(?!name)'): v.Any([[str], str], [str], str)}
keymaps = [keymap]
thresholds = [int, int, int, int, int, int, int, int]
size_column = {v.Required('type'): v.Any('graph', 'splitGraph', 'number',
'disabled', None),
v.Optional('thresholds'): thresholds}
def getSchema(self, data):
schema = v.Schema({v.Required('servers'): self.servers,
'subscribed-topics': self.subscribed_topics,
'palettes': self.palettes,
'palette': str,
'keymaps': self.keymaps,
'keymap': str,
'dashboards': self.dashboards,
'reviewkeys': self.reviewkeys,
'change-list-query': str,
'diff-view': str,
'hide-comments': self.hide_comments,
'thread-changes': bool,
'display-times-in-utc': bool,
'handle-mouse': bool,
'breadcrumbs': bool,
'change-list-options': self.change_list_options,
'expire-age': str,
'size-column': self.size_column,
})
return schema
class Config(object):
def __init__(self, server=None, palette='default', keymap='default',
path=DEFAULT_CONFIG_PATH):
self.path = os.path.expanduser(path)
if not os.path.exists(self.path):
self.printSample()
exit(1)
self.config = yaml.safe_load(open(self.path))  # safe_load avoids constructing arbitrary objects from YAML
schema = ConfigSchema().getSchema(self.config)
schema(self.config)
server = self.getServer(server)
self.server = server
self.subscribed_topic = self.get_topic('default')
self.dburi = server.get(
'dburi', 'sqlite:///' + os.path.expanduser('~/.mqtty.db'))
socket_path = server.get('socket', '~/.mqtty.sock')
self.socket_path = os.path.expanduser(socket_path)
log_file = server.get('log-file', '~/.mqtty.log')
self.log_file = os.path.expanduser(log_file)
lock_file = server.get(
'lock-file', '~/.mqtty.%s.lock' % server['name'])
self.lock_file = os.path.expanduser(lock_file)
self.palettes = {
'default': mqtty.palette.Palette({}),
'light': mqtty.palette.Palette(mqtty.palette.LIGHT_PALETTE), }
for p in self.config.get('palettes', []):
if p['name'] not in self.palettes:
self.palettes[p['name']] = mqtty.palette.Palette(p)
else:
self.palettes[p['name']].update(p)
self.palette = self.palettes[self.config.get('palette', palette)]
self.keymaps = {'default': mqtty.keymap.KeyMap({}),
'vi': mqtty.keymap.KeyMap(mqtty.keymap.VI_KEYMAP)}
for p in self.config.get('keymaps', []):
if p['name'] not in self.keymaps:
self.keymaps[p['name']] = mqtty.keymap.KeyMap(p)
else:
self.keymaps[p['name']].update(p)
self.keymap = self.keymaps[self.config.get('keymap', keymap)]
self.project_change_list_query = self.config.get(
'change-list-query', 'status:open')
self.diff_view = self.config.get('diff-view', 'side-by-side')
self.dashboards = OrderedDict()
for d in self.config.get('dashboards', []):
self.dashboards[d['key']] = d
self.reviewkeys = OrderedDict()
for k in self.config.get('reviewkeys', []):
self.reviewkeys[k['key']] = k
self.hide_comments = []
for h in self.config.get('hide-comments', []):
self.hide_comments.append(re.compile(h['author']))
self.thread_changes = self.config.get('thread-changes', True)
self.utc = self.config.get('display-times-in-utc', False)
self.breadcrumbs = self.config.get('breadcrumbs', True)
self.handle_mouse = self.config.get('handle-mouse', True)
change_list_options = self.config.get('change-list-options', {})
self.change_list_options = {
'sort-by': change_list_options.get('sort-by', 'number'),
'reverse': change_list_options.get('reverse', False)}
self.expire_age = self.config.get('expire-age', '2 months')
self.size_column = self.config.get('size-column', {})
self.size_column['type'] = self.size_column.get('type', 'graph')
if self.size_column['type'] == 'graph':
self.size_column['thresholds'] = self.size_column.get(
'thresholds', [1, 10, 100, 1000])
else:
self.size_column['thresholds'] = self.size_column.get(
'thresholds', [1, 10, 100, 200, 400, 600, 800, 1000])
def getServer(self, name=None):
for server in self.config['servers']:
if name is None or name == server['name']:
return server
return None
def get_topic(self, name=None):
for topic in self.config['subscribed-topics']:
if name is None or name == topic['name']:
return topic
return None
def printSample(self):
filename = 'share/mqtty/examples'
print("""Mqtty requires a configuration file at ~/.mqtty.yaml
If the file contains a password then permissions must be set to 0600.
Several sample configuration files were installed with Mqtty and are
available in %s in the root of the installation.
For more information, please see the README.
""" % (filename,))
| 36.32618 | 79 | 0.560964 | 950 | 8,464 | 4.917895 | 0.238947 | 0.044949 | 0.044521 | 0.012842 | 0.158604 | 0.090325 | 0.056935 | 0.024401 | 0.024401 | 0.024401 | 0 | 0.008477 | 0.303166 | 8,464 | 232 | 80 | 36.482759 | 0.783655 | 0.072661 | 0 | 0.076923 | 0 | 0 | 0.16288 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029586 | false | 0.011834 | 0.053254 | 0 | 0.266272 | 0.017751 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89b77ef30272c6810340d56b530f449b5d7fbb5a | 1,458 | py | Python | project/com/dao/MedicineDAO.py | soham2512/Agripedia | bc9fd31cb7a080ceccbf4fb6d189b27d398f9e33 | [
"MIT"
] | null | null | null | project/com/dao/MedicineDAO.py | soham2512/Agripedia | bc9fd31cb7a080ceccbf4fb6d189b27d398f9e33 | [
"MIT"
] | null | null | null | project/com/dao/MedicineDAO.py | soham2512/Agripedia | bc9fd31cb7a080ceccbf4fb6d189b27d398f9e33 | [
"MIT"
] | null | null | null | from project import db
from project.com.vo.CropTypeVO import CropTypeVO
from project.com.vo.CropVO import CropVO
from project.com.vo.ImageVO import ImageVO
from project.com.vo.MedicineVO import MedicineVO
class MedicineDAO:
def insertMedicine(self, medicineVO):
db.session.add(medicineVO)
db.session.commit()
def viewMedicine(self):
medicineList = db.session.query(MedicineVO, CropVO, CropTypeVO). \
join(CropVO, MedicineVO.medicine_CropId == CropVO.cropId).\
join(CropTypeVO, MedicineVO.medicine_CropTypeId == CropTypeVO.cropTypeId).all()
return medicineList
def userViewMedicine(self, imageVO):
userMedicineList = db.session.query(MedicineVO, CropVO, CropTypeVO). \
join(CropVO, MedicineVO.medicine_CropId == CropVO.cropId). \
join(CropTypeVO, MedicineVO.medicine_CropTypeId == CropTypeVO.cropTypeId).\
filter(MedicineVO.diseaseName == imageVO.cropDisease).all()
return userMedicineList
def deleteMedicine(self, medicineVO):
medicineList = MedicineVO.query.get(medicineVO.medicineId)
db.session.delete(medicineList)
db.session.commit()
def editMedicine(self, medicineVO):
medicineList = MedicineVO.query.filter_by(medicineId=medicineVO.medicineId).all()
return medicineList
def updateMedicine(self, medicineVO):
db.session.merge(medicineVO)
db.session.commit()
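# --- Hedged usage sketch (not part of the original module) ---
# Assuming an active Flask-SQLAlchemy application context and a populated
# MedicineVO instance named medicine_vo (a hypothetical variable):
#
#   dao = MedicineDAO()
#   dao.insertMedicine(medicine_vo)
#   rows = dao.viewMedicine()  # list of (MedicineVO, CropVO, CropTypeVO) tuples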
| 38.368421 | 91 | 0.709191 | 148 | 1,458 | 6.952703 | 0.27027 | 0.069971 | 0.054422 | 0.062196 | 0.367347 | 0.287658 | 0.287658 | 0.287658 | 0.287658 | 0.287658 | 0 | 0 | 0.194787 | 1,458 | 37 | 92 | 39.405405 | 0.876491 | 0 | 0 | 0.233333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.166667 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89b7b256a8fbc4ac824033e18daa6a97bbd3501a | 18,182 | py | Python | djangoProject1/venv/Lib/site-packages/owlready2/editor.py | meddhafer97/Risk-management-khnowledge-based-system | aba86734801a9e0313071e2c9931295e0da08ed0 | [
"MIT"
] | null | null | null | djangoProject1/venv/Lib/site-packages/owlready2/editor.py | meddhafer97/Risk-management-khnowledge-based-system | aba86734801a9e0313071e2c9931295e0da08ed0 | [
"MIT"
] | null | null | null | djangoProject1/venv/Lib/site-packages/owlready2/editor.py | meddhafer97/Risk-management-khnowledge-based-system | aba86734801a9e0313071e2c9931295e0da08ed0 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Owlready2
# Copyright (C) 2013-2019 Jean-Baptiste LAMY
# LIMICS (Laboratoire d'informatique médicale et d'ingénierie des connaissances en santé), UMR_S 1142
# University Paris 13, Sorbonne paris-Cité, Bobigny, France
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from collections import defaultdict
from functools import reduce
import editobj3, editobj3.introsp as introsp, editobj3.field as field, editobj3.editor as editor
from owlready2 import *
from owlready2.base import _universal_datatype_2_abbrev
from owlready2.prop import _CLASS_PROPS, _TYPE_PROPS
IGNORE_DOMAINLESS_PROPERTY = False
introsp.def_attr("topObjectProperty", field.HiddenField)
def _keep_most_generic(s):
r = set()
for i in s:
for parent in i.is_a:
if parent in s: break
else: r.add(i)
return r
#def _available_ontologies(o):
# return sorted(o.ontology.indirectly_imported_ontologies(), key = lambda x: x.name)
def _available_classes():
#r = set()
#for ontology in o.ontology.indirectly_imported_ontologies():
# r.update(ontology.classes)
r = default_world.search(subclass_of = Thing)
return sorted(_keep_most_generic(r), key = lambda x: str(x))
#def _available_properties(o):
# r = set()
# for ontology in o.ontology.indirectly_imported_ontologies():
# r.update(ontology.properties)
# return sorted(_keep_most_generic(r), key = lambda x: str(x))
#def _available_properties_and_types(o):
# return [FunctionalProperty, InverseFunctionalProperty, TransitiveProperty, SymmetricProperty, AsymmetricProperty, ReflexiveProperty, IrreflexiveProperty] + _available_properties(o)
#def _available_classes_and_datatypes(o):
# r = set()
# for ontology in o.ontology.indirectly_imported_ontologies():
# r.update(ontology.classes)
# r = _keep_most_generic(r)
# r.update(owlready._PYTHON_2_DATATYPES.keys())
# return sorted(r, key = lambda x: str(x))
def _get_label(o): return str(o).replace("_", " ")
#descr = introsp.description(EntityClass)
#descr.def_attr("ontology" , field.HiddenField)
#descr = introsp.description_for_type(Thing)
##descr.def_attr("ontology" , field.ObjectSelectorField, addable_values = _available_ontologies)
#descr.def_attr("namespace" , field.HiddenField)
#descr.def_attr("name" , field.StringField)
#descr.def_attr("python_name" , field.StringField)
#descr.def_attr("is_a" , field.HierarchyAndObjectListField, addable_values = _available_classes)
#descr.def_attr("equivalent_to", field.HierarchyAndObjectListField, addable_values = _available_classes)
#descr.set_label(_get_label)
#descr.set_icon_filename(os.path.join(os.path.dirname(__file__), "icons", "owl_class.svg"))
#descr = introsp.description_for_type(Property)
##descr.def_attr("ontology" , field.ObjectSelectorField, addable_values = _available_ontologies)
#descr.def_attr("namespace" , field.HiddenField)
#descr.def_attr("name" , field.StringField)
#descr.def_attr("python_name" , field.StringField)
#descr.def_attr("is_a" , field.HierarchyAndObjectListField, addable_values = _available_properties_and_types)
#descr.def_attr("domain" , field.HierarchyAndObjectListField, addable_values = _available_classes , reorder_method = None)
#descr.def_attr("range" , field.HierarchyAndObjectListField, addable_values = _available_classes_and_datatypes, reorder_method = None)
#descr.def_attr("inverse_property", field.ObjectSelectorField , addable_values = lambda o: [None] + _available_properties(o))
#descr.def_attr("equivalent_to" , field.HierarchyAndObjectListField, addable_values = _available_properties_and_types)
#descr.set_label(_get_label)
#descr.set_icon_filename(os.path.join(os.path.dirname(__file__), "icons", "owl_property.svg"))
descr = introsp.description(Thing)
descr.def_attr("iri" , field.StringField)
descr.def_attr("namespace" , field.HiddenField)
descr.def_attr("is_a" , field.HiddenField)
descr.def_attr("is_instance_of" , field.HiddenField)
descr.def_attr("name" , field.HiddenField)
descr.def_attr("storid" , field.HiddenField)
descr.def_attr("equivalent_to" , field.HiddenField)
descr.def_attr("properties" , field.HiddenField)
descr.def_attr("inverse_properties", field.HiddenField)
descr.set_label(_get_label)
descr.set_icon_filename(os.path.join(os.path.dirname(__file__), "icons", "owl_instance.svg"))
descr.set_constructor(introsp.Constructor(lambda Class, parent: Class(namespace = parent.namespace)))
introsp.MAX_NUMBER_OF_ATTRIBUTE_FOR_EMBEDDING = 0
def _get_priority(Prop):
return Prop.editobj_priority.first()
def _intersect_reduce(s):
if not s: return set()
if len(s) == 1: return s[0]
return reduce(set.intersection, s)
def _flattened_or(Classes):
if Classes: yield from _flattened_or_iteration(Classes)
else: yield Thing
def _flattened_or_iteration(Classes):
for Class in Classes:
if isinstance(Class, ThingClass): yield Class
elif isinstance(Class, Or): yield from _flattened_or_iteration(Class.Classes)
def _get_class_one_of(Class):
if isinstance(Class, OneOf): return Class.instances
if isinstance(Class, ThingClass):
s = []
for ancestor in Class.ancestors():
for superclass in ancestor.is_a + ancestor.equivalent_to:
if isinstance(superclass, OneOf): s.append(superclass.instances)
return _intersect_reduce(s)
def _prop_use_children_group(Prop, domain):
for superprop in Prop.mro():
if (superprop in _CLASS_PROPS) or (superprop in _TYPE_PROPS): continue
if isinstance(superprop, PropertyClass) and not superprop.is_functional_for(domain): return True
for range in _flattened_or(Prop.range):
if isinstance(range, ThingClass) and _has_object_property(range): return True
return False
def _has_object_property(Class):
for Prop in Class._get_class_possible_relations():
if not isinstance(Prop, DataPropertyClass): return True
return False
def _is_abstract_class(Class):
for superclass in Class.is_a + list(Class.equivalent_to.indirect()):
if isinstance(superclass, Or):
for or_class in superclass.Classes:
if not isinstance(or_class, ThingClass): break
else: return True
def configure_editobj_from_ontology(onto):
introsp._init_for_owlready2()
for Prop in onto.properties():
if len(Prop.range) != 1: continue
if isinstance(Prop, DataPropertyClass): ranges = [Prop.range[0]]
else: ranges = list(_flattened_or(Prop.range))
if not ranges: continue
priority = _get_priority(Prop)
for domain in _flattened_or(Prop.domain):
if isinstance(domain, ThingClass):
if len(ranges) == 1: one_of = _get_class_one_of(ranges[0])
else: one_of = None
if one_of: RangeInstanceOnly(Prop, domain, one_of)
else: RangeClassOnly (Prop, domain, ranges)
for Class in onto.classes():
for superclass in Class.is_a: _configure_class_restriction(Class, superclass)
for superclass in Class.equivalent_to.indirect(): _configure_class_restriction(Class, superclass)
for prop_children_group in PROP_CHILDREN_GROUPS.values():
if prop_children_group.changed: prop_children_group.define_children_groups()
def _configure_class_restriction(Class, restriction):
if isinstance(restriction, And):
for sub_restriction in restriction.Classes:
_configure_class_restriction(Class, sub_restriction)
elif isinstance(restriction, Restriction):
if restriction.type == "VALUE":
introsp.description(Class).def_attr(restriction.Prop.python_name, field.LabelField, priority = _get_priority(restriction.Prop))
elif restriction.type == "ONLY":
if isinstance(restriction.Prop, ObjectPropertyClass):
if isinstance(restriction.Class, ThingClass):
ranges = [restriction.Class]
elif isinstance(restriction.Class, LogicalClassConstruct):
ranges = list(_flattened_or(restriction.Class.Classes))
else: return
if len(ranges) == 1: one_of = _get_class_one_of(ranges[0])
else: one_of = None
if one_of: RangeInstanceOnly(restriction.Prop, Class, one_of)
else: RangeClassOnly (restriction.Prop, Class, ranges)
elif (restriction.type == "EXACTLY") or (restriction.type == "MAX"):
# These restrictions can make the Property functional for the given Class
# => Force the redefinition of the field type by creating an empty range restriction list
if restriction.cardinality == 1:
for subprop in restriction.Prop.descendants(include_self = False):
prop_children_group = get_prop_children_group(subprop)
prop_children_group.range_restrictions[Class] # Create the list if not already existent
prop_children_group.changed = True
elif isinstance(restriction, Not):
for sub_restriction in _flattened_or([restriction.Class]):
if isinstance(sub_restriction, Restriction):
if sub_restriction.type == SOME and isinstance(sub_restriction.Prop, ObjectPropertyClass):
ranges = list(_flattened_or([sub_restriction.Class]))
if len(ranges) == 1: one_of = _get_class_one_of(ranges[0])
else: one_of = None
if one_of: RangeInstanceExclusion(sub_restriction.Prop, Class, one_of)
else: RangeClassExclusion (sub_restriction.Prop, Class, ranges)
PROP_CHILDREN_GROUPS = {}
def get_prop_children_group(Prop): return PROP_CHILDREN_GROUPS.get(Prop) or PropChildrenGroup(Prop)
class PropChildrenGroup(object):
def __init__(self, Prop):
self.Prop = Prop
self.range_restrictions = defaultdict(list)
self.changed = False
PROP_CHILDREN_GROUPS[Prop] = self
def define_children_groups(self):
self.changed = False
priority = _get_priority(self.Prop)
for domain in set(self.range_restrictions):
descr = introsp.description(domain)
functional = self.Prop.is_functional_for(domain)
range_restrictions = set()
for superclass in domain.mro():
s = self.range_restrictions.get(superclass)
if s: range_restrictions.update(s)
range_instance_onlys = { range_restriction for range_restriction in range_restrictions if isinstance(range_restriction, RangeInstanceOnly) }
if range_instance_onlys:
instances = _intersect_reduce([i.ranges for i in range_instance_onlys])
d = { instance.name : instance for instance in instances }
if functional:
d["None"] = None
descr.def_attr(self.Prop.python_name, field.EnumField(d), priority = priority, optional = False)
else:
descr.def_attr(self.Prop.python_name, field.EnumListField(d), priority = priority, optional = False)
else:
if isinstance(self.Prop, DataPropertyClass):
datatype = None
for range_restriction in range_restrictions:
if isinstance(range_restriction, RangeClassOnly):
for range in range_restriction.ranges:
if range in _universal_datatype_2_abbrev:
datatype = range
break
if datatype:
if datatype is int:
if functional: descr.def_attr(self.Prop.python_name, field.IntField , allow_none = True, optional = False, priority = priority)
else: descr.def_attr(self.Prop.python_name, field.IntListField , optional = False, priority = priority)
elif datatype is float:
if functional: descr.def_attr(self.Prop.python_name, field.FloatField , allow_none = True, optional = False, priority = priority)
else: descr.def_attr(self.Prop.python_name, field.FloatListField , optional = False, priority = priority)
elif datatype is normstr:
if functional: descr.def_attr(self.Prop.python_name, field.StringField , allow_none = True, optional = False, priority = priority)
else: descr.def_attr(self.Prop.python_name, field.StringListField, optional = False, priority = priority)
elif datatype is str:
if functional: descr.def_attr(self.Prop.python_name, field.TextField , allow_none = True, optional = False, priority = priority)
else: descr.def_attr(self.Prop.python_name, field.StringListField, optional = False, priority = priority)
elif datatype is bool:
if functional: descr.def_attr(self.Prop.python_name, field.BoolField , optional = False, priority = priority)
else:
if functional: descr.def_attr(self.Prop.python_name, field.EntryField , allow_none = True, optional = False, priority = priority)
else: descr.def_attr(self.Prop.python_name, field.EntryListField , optional = False, priority = priority)
else:
values_lister = ValuesLister(self.Prop, domain, range_restrictions)
if _prop_use_children_group(self.Prop, domain) or values_lister.values_have_children():
if self.Prop.inverse: inverse_attr = self.Prop.inverse.python_name
else: inverse_attr = ""
if functional: field_class = field.HierarchyOrObjectSelectorField
else: field_class = field.HierarchyOrObjectListField
descr.def_attr(self.Prop.python_name,
field_class,
addable_values = values_lister.available_values,
inverse_attr = inverse_attr,
priority = priority)
else:
descr.def_attr(self.Prop.python_name,
field.ObjectSelectorField,
addable_values = values_lister.available_values,
priority = priority)
class RangeRestriction(object):
def __init__(self, Prop, domain, ranges):
self.domain = domain
self.ranges = ranges
for subprop in Prop.descendants(include_self = True):
prop_children_group = get_prop_children_group(subprop)
prop_children_group.range_restrictions[domain].append(self)
prop_children_group.changed = True
def __repr__(self): return "<%s %s %s>" % (self.__class__.__name__, self.domain, self.ranges)
def get_classes(self):
available_classes = set()
for range in self.ranges:
for subrange in range.descendants(): available_classes.add(subrange)
return available_classes
class RangeClassOnly (RangeRestriction): pass
class RangeClassExclusion (RangeRestriction): pass
class RangeInstanceOnly (RangeRestriction): pass
class RangeInstanceExclusion(RangeRestriction): pass
VALUES_LISTERS = {}
class ValuesLister(object):
def __init__(self, Prop, domain, range_restrictions):
self.Prop = Prop
self.domain = domain
self.range_restrictions = range_restrictions
VALUES_LISTERS[Prop, domain] = self
def values_have_children(self):
for range_restriction in self.range_restrictions:
if isinstance(range_restriction, RangeClassOnly):
for range in range_restriction.ranges:
for subrange in range.descendants():
for attribute in introsp.description(subrange).attributes.values():
try: return issubclass(attribute.field_class, FieldInHierarchyPane)
except Exception: return False # attribute.field_class is a func and not a class
def available_values(self, subject):
available_classes = []
excluded_classes = set()
for range_restriction in self.range_restrictions:
if isinstance(range_restriction, RangeClassOnly):
available_classes.append(range_restriction.get_classes())
elif isinstance(range_restriction, RangeClassExclusion):
excluded_classes.update(range_restriction.get_classes())
available_classes = _intersect_reduce(available_classes)
available_classes.difference_update(excluded_classes)
available_classes = sorted(available_classes, key = lambda Class: Class.name)
new_instances_of = [introsp.NewInstanceOf(Class) for Class in available_classes if (not _get_class_one_of(Class)) and (not _is_abstract_class(Class))]
existent_values = set()
for Class in available_classes:
existent_values.update(default_world.search(type = Class))
if excluded_classes:
excluded_classes = tuple(excluded_classes)
existent_values = [o for o in existent_values if not isinstance(o, excluded_classes)]
# For InverseFunctional props, remove values already used.
if issubclass(self.Prop, InverseFunctionalProperty) and self.Prop.inverse_property:
existent_values = { value for value in existent_values
if not getattr(value, self.Prop.inverse_property.python_name) }
existent_values = sorted(existent_values, key = lambda obj: obj.name)
return new_instances_of + existent_values
def range_match_classes(self, classes):
classes = tuple(classes)
for range_restriction in self.range_restrictions:
if isinstance(range_restriction, RangeClassOnly):
for range in range_restriction.ranges:
if issubclass(range, classes): return True
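# --- Hedged usage sketch (not part of the original module) ---
# Wiring this module up, assuming a hypothetical ontology file "example.owl":
#
#   from owlready2 import get_ontology
#   onto = get_ontology("file://example.owl").load()
#   configure_editobj_from_ontology(onto)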
| 47.103627 | 183 | 0.702948 | 2,164 | 18,182 | 5.665896 | 0.151109 | 0.023978 | 0.039149 | 0.024794 | 0.391893 | 0.334883 | 0.275671 | 0.257483 | 0.251774 | 0.241253 | 0 | 0.002786 | 0.210483 | 18,182 | 385 | 184 | 47.225974 | 0.851341 | 0.210923 | 0 | 0.160156 | 0 | 0 | 0.010783 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.089844 | false | 0.015625 | 0.023438 | 0.015625 | 0.175781 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89bcbdc18d626d05c6643c94a638ed633b84861b | 7,965 | py | Python | bitfeeds/exchange.py | bopo/bitfeeds | bc525386418061aa4cac11852b1cf28d3b29dea3 | [
"Apache-2.0"
] | 1 | 2018-02-25T04:27:07.000Z | 2018-02-25T04:27:07.000Z | bitfeeds/exchange.py | bopo/bitfeeds | bc525386418061aa4cac11852b1cf28d3b29dea3 | [
"Apache-2.0"
] | null | null | null | bitfeeds/exchange.py | bopo/bitfeeds | bc525386418061aa4cac11852b1cf28d3b29dea3 | [
"Apache-2.0"
] | null | null | null | #!/bin/python
from bitfeeds.storage.zeromq import ZmqStorage
from bitfeeds.storage.file import FileStorage
from bitfeeds.market import L2Depth, Trade, Snapshot
from datetime import datetime
from threading import Lock
class ExchangeGateway:
############################################################################
# Static variable
# Applied on all gateways whether to record the timestamp in local machine,
# rather than exchange timestamp given by the API
is_local_timestamp = True
############################################################################
"""
Exchange gateway
"""
def __init__(self,
api_socket,
db_storages=None):
"""
Constructor
:param api_socket: Exchange API socket
:param db_storages: List of database storages
"""
# Use None as the default to avoid sharing one mutable list across instances
self.db_storages = db_storages if db_storages is not None else []
self.api_socket = api_socket
self.lock = Lock()
self.exch_snapshot_id = 0
@classmethod
def get_exchange_name(cls):
"""
Get exchange name
:return: Exchange name string
"""
return ''
@classmethod
def get_instmt_snapshot_table_name(cls, exchange, instmt_name):
"""
Get instmt snapshot
:param exchange: Exchange name
:param instmt_name: Instrument name
"""
return 'exch_' + exchange.lower() + '_' + instmt_name.lower() + \
'_snapshot_' + datetime.utcnow().strftime("%Y%m%d")
@classmethod
def get_snapshot_table_name(cls):
return 'exchanges_snapshot'
@classmethod
def is_allowed_snapshot(cls, db_storage):
return not isinstance(db_storage, FileStorage)
@classmethod
def is_allowed_instmt_record(cls, db_storage):
return not isinstance(db_storage, ZmqStorage)
@classmethod
def init_snapshot_table(cls, db_storages):
for db_storage in db_storages:
db_storage.create(cls.get_snapshot_table_name(),
Snapshot.columns(),
Snapshot.types(),
[0,1])
def init_instmt_snapshot_table(self, instmt):
table_name = self.get_instmt_snapshot_table_name(instmt.get_exchange_name(),
instmt.get_instmt_name())
for db_storage in self.db_storages:
db_storage.create(table_name,
['id'] + Snapshot.columns(False),
['int'] + Snapshot.types(False),
[0])
def start(self, instmt):
"""
Start the exchange gateway
:param instmt: Instrument
:return: List of threads
"""
return []
def get_instmt_snapshot_id(self, instmt):
with self.lock:
self.exch_snapshot_id += 1
return self.exch_snapshot_id
def insert_order_book(self, instmt):
"""
Insert order book row into the database storage
:param instmt: Instrument
"""
# If local timestamp indicator is on, assign the local timestamp again
if self.is_local_timestamp:
instmt.get_l2_depth().date_time = datetime.utcnow().strftime("%Y%m%d %H:%M:%S.%f")
# Update the snapshot
if instmt.get_l2_depth() is not None:
id = self.get_instmt_snapshot_id(instmt)
for db_storage in self.db_storages:
if self.is_allowed_snapshot(db_storage):
db_storage.insert(table=self.get_snapshot_table_name(),
columns=Snapshot.columns(),
types=Snapshot.types(),
values=Snapshot.values(instmt.get_exchange_name(),
instmt.get_instmt_name(),
instmt.get_l2_depth(),
Trade() if instmt.get_last_trade() is None else instmt.get_last_trade(),
Snapshot.UpdateType.ORDER_BOOK),
primary_key_index=[0,1],
is_orreplace=True,
is_commit=True)
if self.is_allowed_instmt_record(db_storage):
db_storage.insert(table=instmt.get_instmt_snapshot_table_name(),
columns=['id'] + Snapshot.columns(False),
types=['int'] + Snapshot.types(False),
values=[id] +
Snapshot.values('',
'',
instmt.get_l2_depth(),
Trade() if instmt.get_last_trade() is None else instmt.get_last_trade(),
Snapshot.UpdateType.ORDER_BOOK),
is_commit=True)
def insert_trade(self, instmt, trade):
"""
Insert trade row into the database storage
:param instmt: Instrument
"""
# If the instrument is not recovered, skip inserting into the table
if not instmt.get_recovered():
return
# If local timestamp indicator is on, assign the local timestamp again
if self.is_local_timestamp:
trade.date_time = datetime.utcnow().strftime("%Y%m%d %H:%M:%S.%f")
# Set the last trade to the current one
instmt.set_last_trade(trade)
# Update the snapshot
if instmt.get_l2_depth() is not None and \
instmt.get_last_trade() is not None:
id = self.get_instmt_snapshot_id(instmt)
for db_storage in self.db_storages:
is_allowed_snapshot = self.is_allowed_snapshot(db_storage)
is_allowed_instmt_record = self.is_allowed_instmt_record(db_storage)
if is_allowed_snapshot:
db_storage.insert(table=self.get_snapshot_table_name(),
columns=Snapshot.columns(),
values=Snapshot.values(instmt.get_exchange_name(),
instmt.get_instmt_name(),
instmt.get_l2_depth(),
instmt.get_last_trade(),
Snapshot.UpdateType.TRADES),
types=Snapshot.types(),
primary_key_index=[0,1],
is_orreplace=True,
is_commit=not is_allowed_instmt_record)
if is_allowed_instmt_record:
db_storage.insert(table=instmt.get_instmt_snapshot_table_name(),
columns=['id'] + Snapshot.columns(False),
types=['int'] + Snapshot.types(False),
values=[id] +
Snapshot.values('',
'',
instmt.get_l2_depth(),
instmt.get_last_trade(),
Snapshot.UpdateType.TRADES),
is_commit=True)
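# --- Hedged usage sketch (not part of the original module) ---
# Concrete gateways are expected to subclass ExchangeGateway; a hypothetical
# skeleton (names are illustrative):
#
#   class MyExchangeGateway(ExchangeGateway):
#       @classmethod
#       def get_exchange_name(cls):
#           return 'MyExchange'
#
#       def start(self, instmt):
#           return []   # spawn and return the worker threads here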
| 44.005525 | 137 | 0.473572 | 720 | 7,965 | 4.975 | 0.166667 | 0.057789 | 0.037968 | 0.031267 | 0.539363 | 0.491625 | 0.462032 | 0.435232 | 0.401731 | 0.375489 | 0 | 0.003785 | 0.436033 | 7,965 | 180 | 138 | 44.25 | 0.793633 | 0.113748 | 0 | 0.469565 | 0 | 0 | 0.01369 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104348 | false | 0 | 0.043478 | 0.026087 | 0.234783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
981fabdb0ce384e116d81de9570573bc722147f3 | 6,468 | py | Python | test/func_test.py | estheruary/dynomite-deb | 601367cfb6b298a1d460ba5891e8254edf974686 | [
"Apache-2.0"
] | null | null | null | test/func_test.py | estheruary/dynomite-deb | 601367cfb6b298a1d460ba5891e8254edf974686 | [
"Apache-2.0"
] | null | null | null | test/func_test.py | estheruary/dynomite-deb | 601367cfb6b298a1d460ba5891e8254edf974686 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
import redis
import argparse
import random
import string
import sys
import time
from utils import string_generator, number_generator
from dyno_node import DynoNode
from redis_node import RedisNode
from dyno_cluster import DynoCluster
from dual_run import dual_run, ResultMismatchError
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--debug', action='store_true')
return parser.parse_args()
def create_key(test_name, key_id):
return test_name + "_" + str(key_id)
def run_key_value_tests(c, max_keys=1000, max_payload=1024):
#Set some
test_name="KEY_VALUE"
print("Running %s tests" % test_name)
for x in range(0, max_keys):
key = create_key(test_name, x)
c.run_verify("set", key, string_generator(size=random.randint(1, max_payload)))
# get them and see
for x in range(0, max_keys):
key = create_key(test_name, x)
c.run_verify("get", key)
# append a key
key = create_key(test_name, random.randint(0, max_keys-1))
value = string_generator()
c.run_verify("append", key, value)
c.run_verify("get", key)
# expire a few
key = create_key(test_name, random.randint(0, max_keys-1))
c.run_verify("expire", key, 5)
time.sleep(7)
c.run_verify("exists", key)
def run_multikey_test(c, max_keys=1000, max_payload=10):
#Set some
test_name="MULTIKEY"
print("Running %s tests" % test_name)
for n in range(0, 100):
kv_pairs = {}
count = random.randint(1, 50)  # avoid shadowing the built-in len()
for x in range(0, count):
key_id = random.randint(0, max_keys-1)
key = create_key(test_name, key_id)
value = string_generator(size=random.randint(1, max_payload))
kv_pairs[key] = value
c.run_verify("mset", kv_pairs)
keys = []
count = random.randint(1, 50)  # avoid shadowing the built-in len()
for x in range(0, count):
key_id = random.randint(0, max_keys-1)
key = create_key(test_name, key_id)
keys.append(key)
c.run_verify("mget", keys)
def run_script_tests(c):
TEST_NAME="SCRIPTS"
print("Running %s tests" % TEST_NAME)
# This script basically executes 'GET <key>'.
SCRIPT_BODY = "return redis.call('get', KEYS[1])"
EXPECTED_VALUE = "value1"
# Load a simple script.
script_hash = c.run_verify("script_load", SCRIPT_BODY)
# Make sure that the script exists.
assert c.run_verify("script_exists", script_hash)[0] == True
# Create a key to test with.
key = create_key(TEST_NAME, "key1")
c.run_verify("set", key, EXPECTED_VALUE)
# Verify that the result of the script is the same in both Dynomite and Redis using
# EVALSHA.
evalsha_result = c.run_verify("evalsha", script_hash, 1, key)
# Decode from UTF-8 before comparing the result.
assert str(evalsha_result, 'utf-8') == EXPECTED_VALUE
# Flush the Redis script cache through Dynomite.
c.run_dynomite_only("script_flush")
# Verify that the script no longer exists.
assert c.run_dynomite_only("evalsha", script_hash, 1, key) == None
def run_hash_tests(c, max_keys=10, max_fields=1000):
def create_key_field(keyid=None, fieldid=None):
if keyid is None:
keyid = random.randint(0, max_keys - 1)
if fieldid is None:
fieldid = random.randint(0, max_fields- 1)
key = create_key(test_name, keyid)
field = create_key("_field", fieldid)
return (key, key + field)
test_name="HASH_MAP"
print("Running %s tests" % test_name)
#hset
for key_iter in range(0, max_keys):
for field_iter in range(0, max_fields):
key, field = create_key_field(key_iter, field_iter)
value = number_generator()
c.run_verify("hset", key, field, value)
# hmset
keyid = random.randint(0, max_keys-1)
key, _ = create_key_field(keyid)
kv_pairs = {}
for x in range(0, 50):
_, field = create_key_field(keyid)
value = number_generator()
kv_pairs[field] = value
c.run_verify("hmset", key, kv_pairs)
# hmget
keyid = random.randint(0, max_keys-1)
key, _ = create_key_field(keyid)
list_args = [key]
for x in range(0, 5):
_, field = create_key_field(keyid)
list_args.append(field)
args = tuple(list_args)
c.run_verify("hmget", *args)
# hincrby, hdel, hexists
key, field = create_key_field()
c.run_verify("hincrby", key, field, 50)
c.run_verify("hdel", key, field)
c.run_verify("hexists", key, field)
key, _ = create_key_field()
c.run_verify("hlen", key)
# These have issues because redis instances can return different values.
# hgetall, hkeys, hvals
#key, _ = create_key_field()
#c.run_verify("hgetall", key)
#key, _ = create_key_field()
#c.run_verify("hkeys", key)
#key, _ = create_key_field()
#c.run_verify("hvals", key)
# finally do a hscan
#key, _ = create_key_field()
#next_index = 0;
#while True:
#result = c.run_verify("hscan", key, next_index)
#next_index = result[0]
#print next_index
#if next_index == 0:
#break
def comparison_test(redis, dynomite, debug):
r_c = redis.get_connection()
d_c = dynomite.get_connection()
c = dual_run(r_c, d_c, debug)
run_key_value_tests(c)
# XLarge payloads
run_key_value_tests(c, max_keys=10, max_payload=5*1024*1024)
run_multikey_test(c)
run_hash_tests(c, max_keys=10, max_fields=100)
run_script_tests(c)
print("All test ran fine")
def main(args):
# This test assumes for now that the nodes are running at the given ports.
# This is done by travis.sh. Please check that file and the corresponding
# yml files for each dynomite instance there to get an idea of the topology.
r = RedisNode(ip="127.0.1.1", port=1212)
d1 = DynoNode(ip="127.0.1.2", data_store_port=22121)
d2 = DynoNode(ip="127.0.1.3", data_store_port=22122)
d3 = DynoNode(ip="127.0.1.4", data_store_port=22123)
d4 = DynoNode(ip="127.0.1.5", data_store_port=22124)
d5 = DynoNode(ip="127.0.1.6", data_store_port=22125)
dyno_nodes = [d1,d2,d3,d4,d5]
cluster = DynoCluster(dyno_nodes)
try:
comparison_test(r, cluster, args.debug)
except ResultMismatchError as r:
print(r)
return 1
return 0
if __name__ == "__main__":
args = parse_args()
sys.exit(main(args))
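# --- Hedged usage note (not part of the original script) ---
# Run against a locally provisioned cluster (see travis.sh for the node
# topology assumed in main()); the --debug flag is forwarded to dual_run:
#
#   python3 func_test.py --debug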
| 32.34 | 87 | 0.649505 | 971 | 6,468 | 4.098867 | 0.223481 | 0.026131 | 0.057789 | 0.038442 | 0.351759 | 0.261558 | 0.212563 | 0.166332 | 0.129648 | 0.11407 | 0 | 0.033878 | 0.233302 | 6,468 | 199 | 88 | 32.502513 | 0.768703 | 0.176716 | 0 | 0.227273 | 0 | 0 | 0.070577 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 1 | 0.068182 | false | 0 | 0.083333 | 0.007576 | 0.189394 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9820ddd02fc57ce99f82b5e80b5558efc6e3e333 | 7,212 | py | Python | frontend/pages/admin_portal/download_data.py | zagaran/instant-census | 62dd5bbc62939f43776a10708ef663722ead98af | [
"MIT"
] | 1 | 2021-06-01T17:03:47.000Z | 2021-06-01T17:03:47.000Z | frontend/pages/admin_portal/download_data.py | zagaran/instant-census | 62dd5bbc62939f43776a10708ef663722ead98af | [
"MIT"
] | null | null | null | frontend/pages/admin_portal/download_data.py | zagaran/instant-census | 62dd5bbc62939f43776a10708ef663722ead98af | [
"MIT"
] | null | null | null | import time
import zipfile
from io import BytesIO
from flask import Blueprint, send_file, request
from mongolia import ID_KEY
from backend.admin_portal.common_helpers import validate_cohort, validate_user, raise_404_error
from backend.admin_portal.download_data_helpers import (generate_messages_csv,
generate_users_csv, generate_question_answer_csv,
generate_question_answer_summary_by_question_csv,
generate_question_answer_summary_by_recipient_csv, get_user_messages_history)
from conf.settings import SHOW_DELETED_USERS
from constants.download_data import TS_FORMAT
from constants.users import Status
from frontend import auth
from utils.time import now
download_data = Blueprint('download_data', __name__)
@download_data.route('/download/users/<cohort_id>', methods=["GET"])
@auth.admin
def download_user_data(cohort_id):
timestamp = now().replace(microsecond=0).strftime(TS_FORMAT)
# validate cohort
cohort = validate_cohort(cohort_id)
# get users
csv_data = generate_users_csv(cohort)
return send_file(csv_data, as_attachment=True, attachment_filename="cohort_(%s)_users_%s.csv" % (cohort["cohort_name"], timestamp))
@download_data.route("/download/history/<user_id>", methods=["GET"])
@auth.admin
def download_message_history(user_id):
# validate user
user = validate_user(user_id)
# do not download if deleted and option isn't set
if not SHOW_DELETED_USERS and user["status"] == Status.deleted:
raise_404_error("User not found.")
user_messages = user.all_messages()
timestamp = now().replace(microsecond=0).strftime(TS_FORMAT)
csv_data = generate_messages_csv(user_messages, user["phonenum"], user["timezone"])
return send_file(csv_data, as_attachment=True, attachment_filename="user_(%s)_messages_%s.csv" % (user["phonenum"], timestamp))
# TODO: This could be DRYed out
@download_data.route("/download/history-incoming/<user_id>", methods=["GET"])
@auth.admin
def download_incoming_message_history(user_id):
timestamp = now().replace(microsecond=0).strftime(TS_FORMAT)
# validate user
user = validate_user(user_id)
# do not download if deleted and option isn't set
if not SHOW_DELETED_USERS and user["status"] == Status.deleted:
raise_404_error("User not found.")
user_messages = user.all_messages()
# remove outgoing messages
user_messages = [message for message in user_messages if message["incoming"] == True]
csv_data = generate_messages_csv(user_messages, user["phonenum"], user["timezone"])
return send_file(csv_data, as_attachment=True, attachment_filename="user_(%s)_messages_%s.csv" % (user["phonenum"], timestamp))
# these are the commands to trigger the different things.
# http://127.0.0.1:5000/download/custom/582645d1d9e34415c4f662d0?download_cohort_users=true&statuses=active,pending,invalid,waitlist,paused,disabled,inactive,&
# http://127.0.0.1:5000/download/custom/582645d1d9e34415c4f662d0?download_user_message_histories=true&type=all&
# http://127.0.0.1:5000/download/custom/582645d1d9e34415c4f662d0?download_question_answer_data=true&
@download_data.route("/download/custom/<cohort_id>", methods=["GET"])
@auth.admin
def download_data_custom(cohort_id):
# collect GET data TODO: do we need to sanitize these inputs?
download_cohort_users = request.args.get("download_cohort_users")
download_user_message_histories = request.args.get("download_user_message_histories")
download_question_answer_data = request.args.get("download_question_answer_data")
# validate cohort
cohort = validate_cohort(cohort_id)
cohort_id = cohort[ID_KEY]
# get all users
# date_time (for zip file) should be a tuple containing six fields which describe the time of the file last modification
date_time = time.localtime(time.time())[:6]
# timestamp for file name
timestamp = now().replace(microsecond=0).strftime(TS_FORMAT)
# store all files to be zipped in list
files = []
if download_cohort_users == "true":
# collect and clean GET data
statuses_to_download_string = request.args.get("statuses")[:-1] # since there is a trailing comma
statuses_to_download = [element for element in statuses_to_download_string.split(",")]
# remove deleted status if option not set
if not SHOW_DELETED_USERS:
statuses_to_download = [element for element in statuses_to_download if element != "deleted"]
statuses_to_download_string = ",".join(statuses_to_download)
# create csv and add to return
csv_data = generate_users_csv(cohort, statuses_to_download)
files.append({
"file_name": "cohort_users/cohort_(%s)_users_(%s)_%s.csv" %
(cohort["cohort_name"], statuses_to_download_string, timestamp),
"date_time": date_time,
"file_data": csv_data.getvalue()
})
if download_user_message_histories == "true":
# This function uses a lot of memory, so it has been stuck into its own function to
# enable some memory cleanup. get_user_messages_history is a mutator function; it modifies
# the files variable (it's a list).
get_user_messages_history(cohort_id, files, timestamp, date_time)
if download_question_answer_data == "true":
csv_data = generate_question_answer_csv(cohort)
files.append({
"file_name": "questions_and_answers/cohort_(%s)_all_questions_and_answers_%s.csv" % (cohort["cohort_name"], timestamp),
"date_time": date_time,
"file_data": csv_data.getvalue()
})
summary_csv_data_by_question = generate_question_answer_summary_by_question_csv(cohort)
files.append({
"file_name": "questions_and_answers/cohort_(%s)_questions_and_answers_summary_by_question_%s.csv" % (cohort["cohort_name"], timestamp),
"date_time": date_time,
"file_data": summary_csv_data_by_question.getvalue()
})
summary_csv_data_by_recipient = generate_question_answer_summary_by_recipient_csv(cohort)
files.append({
"file_name": "questions_and_answers/cohort_(%s)_questions_and_answers_summary_by_recipient_%s.csv" % (cohort["cohort_name"], timestamp),
"date_time": date_time,
"file_data": summary_csv_data_by_recipient.getvalue()
})
# create zip file in memory
memory_file = BytesIO()
with zipfile.ZipFile(memory_file, 'w') as zf:
for file in files:
data = zipfile.ZipInfo(file["file_name"])
data.date_time = file["date_time"]
data.compress_type = zipfile.ZIP_DEFLATED
zf.writestr(data, file["file_data"])
# move pointer to beginning of BytesIO for send_file to read the data
memory_file.seek(0)
# return file to download
return send_file(
memory_file,
attachment_filename="cohort_(%s)_custom_download_%s.zip" % (cohort["cohort_name"], timestamp),
as_attachment=True
)
| 49.737931 | 160 | 0.707571 | 937 | 7,212 | 5.125934 | 0.197439 | 0.020404 | 0.033729 | 0.016656 | 0.507391 | 0.452009 | 0.428274 | 0.378305 | 0.32792 | 0.32792 | 0 | 0.017227 | 0.195092 | 7,212 | 145 | 161 | 49.737931 | 0.810164 | 0.186772 | 0 | 0.352941 | 0 | 0 | 0.164061 | 0.10188 | 0 | 0 | 0 | 0.006897 | 0 | 1 | 0.039216 | false | 0 | 0.117647 | 0 | 0.196078 | 0.019608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
98211d0bec5a62e16e0f6aca7fb7de15284eb727 | 4,819 | py | Python | security_monkey/watchers/vpc/subnet.py | bungoume/security_monkey | 90c02638a315c78535869ab71a8859d17e011a6a | [
"Apache-2.0"
] | null | null | null | security_monkey/watchers/vpc/subnet.py | bungoume/security_monkey | 90c02638a315c78535869ab71a8859d17e011a6a | [
"Apache-2.0"
] | 1 | 2021-03-26T00:43:03.000Z | 2021-03-26T00:43:03.000Z | security_monkey/watchers/vpc/subnet.py | cxmcc/security_monkey | ae4c4b5b278505a97f0513f5ae44db3eb23c175c | [
"Apache-2.0"
] | null | null | null | # Copyright 2014 Netflix, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
.. module: security_monkey.watchers.subnet
:platform: Unix
.. version:: $$VERSION$$
.. moduleauthor:: Patrick Kelley <pkelley@netflix.com> @monkeysecurity
"""
from security_monkey.decorators import record_exception, iter_account_region
from security_monkey.watcher import Watcher
from security_monkey.watcher import ChangeItem
from security_monkey import app
class Subnet(Watcher):
index = 'subnet'
i_am_singular = 'Subnet'
i_am_plural = 'Subnets'
def __init__(self, accounts=None, debug=False):
super(Subnet, self).__init__(accounts=accounts, debug=debug)
@record_exception()
def get_all_subnets(self, **kwargs):
from security_monkey.common.sts_connect import connect
conn = connect(kwargs['account_name'], 'boto3.ec2.client', region=kwargs['region'],
assumed_role=kwargs['assumed_role'])
all_subnets = self.wrap_aws_rate_limited_call(conn.describe_subnets)
return all_subnets.get('Subnets')
def slurp(self):
"""
:returns: item_list - list of subnets.
:returns: exception_map - A dict where the keys are a tuple containing the
location of the exception and the value is the actual exception
"""
self.prep_for_slurp()
@iter_account_region(index=self.index, accounts=self.accounts, service_name='ec2')
def slurp_items(**kwargs):
item_list = []
exception_map = {}
kwargs['exception_map'] = exception_map
app.logger.debug("Checking {}/{}/{}".format(self.index, kwargs['account_name'], kwargs['region']))
all_subnets = self.get_all_subnets(**kwargs)
if all_subnets:
app.logger.debug("Found {} {}".format(len(all_subnets), self.i_am_plural))
for subnet in all_subnets:
subnet_name = None
for tag in subnet.get('Tags', []):
if tag.get('Key') == 'Name':
subnet_name = tag.get('Value')
subnet_id = subnet.get('SubnetId')
if subnet_name:
subnet_name = "{0} ({1})".format(subnet_name, subnet_id)
else:
subnet_name = subnet_id
if self.check_ignore_list(subnet_name):
continue
arn = 'arn:aws:ec2:{region}:{account_number}:subnet/{subnet_id}'.format(
region=kwargs["region"],
account_number=kwargs["account_number"],
subnet_id=subnet_id)
config = {
"name": subnet_name,
"arn": arn,
"id": subnet_id,
"cidr_block": subnet.get('CidrBlock'),
"availability_zone": subnet.get('AvailabilityZone'),
# TODO:
# available_ip_address_count is likely to change often
# and should be in the upcoming ephemeral section.
# "available_ip_address_count": subnet.available_ip_address_count,
"defaultForAz": subnet.get('DefaultForAz'),
"mapPublicIpOnLaunch": subnet.get('MapPublicIpOnLaunch'),
"state": subnet.get('State'),
"tags": subnet.get('Tags'),
"vpc_id": subnet.get('VpcId')
}
item = SubnetItem(region=kwargs['region'],
account=kwargs['account_name'],
name=subnet_name, arn=arn, config=config)
item_list.append(item)
return item_list, exception_map
return slurp_items()
class SubnetItem(ChangeItem):
def __init__(self, region=None, account=None, name=None, arn=None, config=None):
super(SubnetItem, self).__init__(
index=Subnet.index,
region=region,
account=account,
name=name,
arn=arn,
new_config=config if config is not None else {})
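# --- Hedged usage sketch (not part of the original module) ---
# How the watcher is typically driven, assuming a configured account named
# 'my-account' (an illustrative value):
#
#   watcher = Subnet(accounts=['my-account'])
#   item_list, exception_map = watcher.slurp()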
| 39.826446 | 110 | 0.566923 | 510 | 4,819 | 5.154902 | 0.362745 | 0.034234 | 0.034234 | 0.026246 | 0.038798 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00436 | 0.333679 | 4,819 | 120 | 111 | 40.158333 | 0.814388 | 0.229508 | 0 | 0 | 0 | 0 | 0.113213 | 0.015351 | 0 | 0 | 0 | 0.008333 | 0 | 1 | 0.070423 | false | 0 | 0.070423 | 0 | 0.253521 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
98239f7c5579754d650e0fc77e6f345a387dabfe | 715 | py | Python | review-sentiment/sentiment-backend/tests/test_api.py | aldeeb/xai-demonstrator | 45b600bd326923a21dc2c6e2659b58ab3c7b9bd4 | [
"Apache-2.0"
] | 8 | 2021-05-03T13:05:49.000Z | 2022-01-11T02:57:33.000Z | review-sentiment/sentiment-backend/tests/test_api.py | aldeeb/xai-demonstrator | 45b600bd326923a21dc2c6e2659b58ab3c7b9bd4 | [
"Apache-2.0"
] | 467 | 2021-01-22T16:58:56.000Z | 2022-03-28T11:15:09.000Z | review-sentiment/sentiment-backend/tests/test_api.py | aldeeb/xai-demonstrator | 45b600bd326923a21dc2c6e2659b58ab3c7b9bd4 | [
"Apache-2.0"
] | 8 | 2021-05-25T16:10:18.000Z | 2022-02-28T13:21:31.000Z | import pytest
from sentiment import api
def test_that_the_explainer_availability_check_works(mocker):
mocker.patch.object(api, 'EXPLAINERS', ["existing"])
good_exp_req = api.ExplanationRequest(text="some text",
target=3,
method="existing")
with pytest.raises(ValueError):
bad_exp_req = api.ExplanationRequest(text="some text",
target=3,
method="unavailable")
def test_that_loading_is_triggered(mocker):
bert_loader = mocker.patch.object(api.bert, "load")
api.load()
assert bert_loader.call_count == 1
| 29.791667 | 66 | 0.562238 | 72 | 715 | 5.347222 | 0.583333 | 0.036364 | 0.057143 | 0.103896 | 0.27013 | 0.27013 | 0.27013 | 0.27013 | 0.27013 | 0.27013 | 0 | 0.006452 | 0.34965 | 715 | 23 | 67 | 31.086957 | 0.821505 | 0 | 0 | 0.133333 | 0 | 0 | 0.082517 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.133333 | false | 0 | 0.133333 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
982ad0f9ccac820b137e7234becb236db7c159ad | 961 | py | Python | git_analyzers/gituser.py | Telefonica/packagedna | 3b3a18c95d651d85095438de5e9084cc21567865 | [
"MIT"
] | 52 | 2021-08-06T15:17:33.000Z | 2022-02-03T13:45:44.000Z | git_analyzers/gituser.py | bruno-rodrigues-bitsight/packagedna | 3b3a18c95d651d85095438de5e9084cc21567865 | [
"MIT"
] | 1 | 2021-08-23T09:15:33.000Z | 2021-11-08T11:31:55.000Z | git_analyzers/gituser.py | bruno-rodrigues-bitsight/packagedna | 3b3a18c95d651d85095438de5e9084cc21567865 | [
"MIT"
] | 11 | 2021-08-08T04:16:12.000Z | 2021-11-09T05:36:15.000Z | #!/usr/bin/env python3
# Detect repos of user en GIT
# %%%%%%%%%%% Libraries %%%%%%%%%%%#
import json
import urllib.request
from auxiliar_functions.globals import url_git_user
# %%%%%%%%%%% Functions %%%%%%%%%%%#
def git_user(username):
try:
user_info = json.loads(urllib.request.urlopen(url_git_user + username)
.read().decode('utf-8'))
if 'login' in user_info.keys():
user_git = {'username': username, 'name': user_info['name'],
'yours_repositories': {}}
repos = json.loads(urllib.request.urlopen(
url_git_user + username + '/repos').read().decode('utf-8'))
for repo in repos:
user_git['yours_repositories'][repo['name']] = {
'language': repo['language'], 'url': repo['html_url']}
else:
user_git = {}
except Exception:  # e.g. urllib.error.HTTPError from urlopen for unknown users
user_git = {}
return json.dumps(user_git)
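# --- Hedged usage example (not part of the original module) ---
# 'octocat' is an arbitrary username used purely for illustration:
#
#   print(git_user('octocat'))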
| 25.289474 | 78 | 0.532778 | 104 | 961 | 4.740385 | 0.432692 | 0.070994 | 0.060852 | 0.089249 | 0.190669 | 0.190669 | 0.190669 | 0.190669 | 0.190669 | 0 | 0 | 0.004425 | 0.294485 | 961 | 37 | 79 | 25.972973 | 0.722714 | 0.121748 | 0 | 0.1 | 0 | 0 | 0.124402 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.15 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
982b798bc343755a3272085df113ecba5567376b | 687 | py | Python | tests/test_utils.py | capybala/find-ebook-edition-backend | 0321ae8c883fdc241553ad16abf0db9d47eb278d | [
"MIT"
] | null | null | null | tests/test_utils.py | capybala/find-ebook-edition-backend | 0321ae8c883fdc241553ad16abf0db9d47eb278d | [
"MIT"
] | null | null | null | tests/test_utils.py | capybala/find-ebook-edition-backend | 0321ae8c883fdc241553ad16abf0db9d47eb278d | [
"MIT"
] | null | null | null | from unittest import TestCase
from utils import to_bytes, to_str
class TestUtils(TestCase):
def test_bytes(self):
self.assertEqual(to_bytes(b'Should be bytes \xe3\x81\xa0\xe3\x82\x88'),
b'Should be bytes \xe3\x81\xa0\xe3\x82\x88')
self.assertEqual(to_bytes('Should be bytes だよ'),
b'Should be bytes \xe3\x81\xa0\xe3\x82\x88')
def test_str(self):
self.assertEqual(to_str(b'Should be (Python 3) str \xe3\x81\xa0\xe3\x82\x88'),
'Should be (Python 3) str だよ')
self.assertEqual(to_str('Should be (Python 3) str だよ'),
'Should be (Python 3) str だよ')
| 36.157895 | 86 | 0.58952 | 102 | 687 | 3.892157 | 0.245098 | 0.161209 | 0.171285 | 0.120907 | 0.483627 | 0.438287 | 0.241814 | 0.241814 | 0.241814 | 0.241814 | 0 | 0.082136 | 0.291121 | 687 | 18 | 87 | 38.166667 | 0.73306 | 0 | 0 | 0.307692 | 0 | 0 | 0.390102 | 0.139738 | 0 | 0 | 0 | 0 | 0.307692 | 1 | 0.153846 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
982c08be97b8ab5fb5bcf91b48cfeba930557c31 | 1,723 | py | Python | alipay/aop/api/response/AlipayTradeBatchTransferQueryResponse.py | antopen/alipay-sdk-python-all | 8e51c54409b9452f8d46c7bb10eea7c8f7e8d30c | [
"Apache-2.0"
] | 213 | 2018-08-27T16:49:32.000Z | 2021-12-29T04:34:12.000Z | alipay/aop/api/response/AlipayTradeBatchTransferQueryResponse.py | antopen/alipay-sdk-python-all | 8e51c54409b9452f8d46c7bb10eea7c8f7e8d30c | [
"Apache-2.0"
] | 29 | 2018-09-29T06:43:00.000Z | 2021-09-02T03:27:32.000Z | alipay/aop/api/response/AlipayTradeBatchTransferQueryResponse.py | antopen/alipay-sdk-python-all | 8e51c54409b9452f8d46c7bb10eea7c8f7e8d30c | [
"Apache-2.0"
] | 59 | 2018-08-27T16:59:26.000Z | 2022-03-25T10:08:15.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import json
from alipay.aop.api.response.AlipayResponse import AlipayResponse
from alipay.aop.api.domain.BatchRoyaltyDetail import BatchRoyaltyDetail
class AlipayTradeBatchTransferQueryResponse(AlipayResponse):
def __init__(self):
super(AlipayTradeBatchTransferQueryResponse, self).__init__()
self._out_request_no = None
self._royalty_detail = None
self._settle_no = None
@property
def out_request_no(self):
return self._out_request_no
@out_request_no.setter
def out_request_no(self, value):
self._out_request_no = value
@property
def royalty_detail(self):
return self._royalty_detail
@royalty_detail.setter
def royalty_detail(self, value):
if isinstance(value, list):
self._royalty_detail = list()
for i in value:
if isinstance(i, BatchRoyaltyDetail):
self._royalty_detail.append(i)
else:
self._royalty_detail.append(BatchRoyaltyDetail.from_alipay_dict(i))
@property
def settle_no(self):
return self._settle_no
@settle_no.setter
def settle_no(self, value):
self._settle_no = value
def parse_response_content(self, response_content):
response = super(AlipayTradeBatchTransferQueryResponse, self).parse_response_content(response_content)
if 'out_request_no' in response:
self.out_request_no = response['out_request_no']
if 'royalty_detail' in response:
self.royalty_detail = response['royalty_detail']
if 'settle_no' in response:
self.settle_no = response['settle_no']
| 32.509434 | 110 | 0.676146 | 195 | 1,723 | 5.641026 | 0.230769 | 0.13 | 0.098182 | 0.058182 | 0.034545 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000766 | 0.2426 | 1,723 | 52 | 111 | 33.134615 | 0.842146 | 0.024376 | 0 | 0.073171 | 0 | 0 | 0.044074 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.195122 | false | 0 | 0.073171 | 0.073171 | 0.365854 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
982c4d2462efaa8078e7d8cc19486e7b382ca3e9 | 9,418 | py | Python | barcodes/views.py | bihealth/digestiflow-server | 298c53f95dbf56e7be0d0b8bcceacabc21257d5f | [
"MIT"
] | 13 | 2019-11-27T19:12:15.000Z | 2021-12-01T21:32:18.000Z | barcodes/views.py | bihealth/digestiflow-server | 298c53f95dbf56e7be0d0b8bcceacabc21257d5f | [
"MIT"
] | 60 | 2019-03-27T14:43:19.000Z | 2022-03-22T09:12:53.000Z | barcodes/views.py | bihealth/digestiflow-server | 298c53f95dbf56e7be0d0b8bcceacabc21257d5f | [
"MIT"
] | 3 | 2020-11-09T07:08:42.000Z | 2022-02-09T11:37:54.000Z | """The views for the barcodes app."""
import json
from django.contrib.auth.mixins import LoginRequiredMixin
from django.contrib import messages
from django.contrib.messages.views import SuccessMessageMixin
from django.db import transaction
from django.db.models import ProtectedError
from django.shortcuts import reverse
from django.views.generic import CreateView, DeleteView, DetailView, ListView, UpdateView
from projectroles.plugins import get_backend_api
from projectroles.views import LoggedInPermissionMixin, ProjectContextMixin
from django.core.validators import ValidationError
from digestiflow.utils import model_to_dict, ProjectPermissionMixin
from .forms import BarcodeSetForm
from .models import BarcodeSet, BarcodeSetEntry
class BarcodeSetListView(
LoginRequiredMixin,
LoggedInPermissionMixin,
ProjectPermissionMixin,
ProjectContextMixin,
ListView,
):
"""Display list of all BarcodeSet records"""
template_name = "barcodes/barcodeset_list.html"
permission_required = "barcodes.view_barcodeset"
model = BarcodeSet
paginate_by = 10
def get_queryset(self):
return (
super()
.get_queryset()
.filter(project__sodar_uuid=self.kwargs["project"])
.prefetch_related("project")
)
class BarcodeSetDetailView(
LoginRequiredMixin,
LoggedInPermissionMixin,
ProjectPermissionMixin,
ProjectContextMixin,
DetailView,
):
"""Display detail of BarcodeSet records"""
template_name = "barcodes/barcodeset_detail.html"
permission_required = "barcodes.view_barcodeset"
model = BarcodeSet
slug_url_kwarg = "barcodeset"
slug_field = "sodar_uuid"
class BarcodeSetCreateView(
SuccessMessageMixin,
LoginRequiredMixin,
LoggedInPermissionMixin,
ProjectPermissionMixin,
ProjectContextMixin,
CreateView,
):
"""Display list of all BarcodeSet records"""
success_message = "Barcode set successfully created."
template_name = "barcodes/barcodeset_create.html"
permission_required = "barcodes.add_barcodeset"
model = BarcodeSet
form_class = BarcodeSetForm
@transaction.atomic
def form_valid(self, form):
# Properly set the reference to the current project.
form.instance.project = self.get_project(self.request, self.kwargs)
# Save form, get ``self.object``, ready for creating barcode set entries.
self.object = form.save()
for entry in json.loads(form.cleaned_data["entries_json"]):
BarcodeSetEntry.objects.create(
barcode_set=self.object,
name=entry["name"],
aliases=[x.strip() for x in entry["name"].split(",")],
sequence=entry["sequence"],
)
# Call into super class.
result = super().form_valid(form)
# Register event with timeline.
timeline = get_backend_api("timeline_backend")
if timeline:
tl_event = timeline.add_event(
project=self.get_project(self.request, self.kwargs),
app_name="barcodes",
user=self.request.user,
event_name="barcodeset_create",
description="create barcodeset {barcodeset}: {extra-barcodeset_dict}",
status_type="OK",
extra_data={
"barcodeset_dict": {
**model_to_dict(self.object),
"entries": [model_to_dict(entry) for entry in self.object.entries.all()],
}
},
)
tl_event.add_object(obj=self.object, label="barcodeset", name=self.object.name)
return result
class BarcodeSetUpdateView(
SuccessMessageMixin,
LoginRequiredMixin,
LoggedInPermissionMixin,
ProjectPermissionMixin,
ProjectContextMixin,
UpdateView,
):
"""Updating of BarcodeSet records"""
success_message = "Barcode set successfully updated."
template_name = "barcodes/barcodeset_update.html"
permission_required = "barcodes.change_barcodeset"
model = BarcodeSet
form_class = BarcodeSetForm
slug_url_kwarg = "barcodeset"
slug_field = "sodar_uuid"
@transaction.atomic
def form_valid(self, form):
# Save form, get ``self.object``, ready for updating barcode set entries.
self.object = form.save()
try:
self._update_entries(self.object, form)
except ValidationError as e:
messages.error(
self.request, "Problem updating barcode set: %s" % ", ".join(map(str, e))
)
return self.form_invalid(form)
except ProtectedError as e: # pragma: no cover
messages.error(self.request, "Could not update barcode set entries: %s" % e)
return self.form_invalid(form)
# Call into super class.
result = super().form_valid(form)
# Register event with timeline.
timeline = get_backend_api("timeline_backend")
if timeline:
tl_event = timeline.add_event(
project=self.get_project(self.request, self.kwargs),
app_name="barcodes",
user=self.request.user,
event_name="barcodeset_update",
description="update barcodeset {barcodeset}: {extra-barcodeset_dict}",
status_type="OK",
extra_data={
"barcodeset_dict": {
**model_to_dict(self.object),
"entries": [model_to_dict(entry) for entry in self.object.entries.all()],
}
},
)
tl_event.add_object(obj=self.object, label="barcodeset", name=self.object.name)
return result
def _update_entries(self, barcode_set, form):
"""Update barcode set entries of ``barcode_set`` record from JSON field.
This method must be called within a transaction, of course.
The algorithm for matching them is also mirrored in the JavaScript and both need to be kept in sync.
"""
# Existing entries and to-be-updated values by UUID.
existing = {str(entry.sodar_uuid): entry for entry in barcode_set.entries.all()}
updated = json.loads(form.cleaned_data["entries_json"])
for rank, entry in enumerate(updated):
entry["rank"] = rank
updated_by_uuid = {entry.get("uuid"): entry for entry in updated if entry.get("uuid")}
# Delete and update existing.
for entry in existing.values():
if str(entry.sodar_uuid) not in updated_by_uuid:
# Delete records from existing set that we don't find in updated records.
entry.delete()
else:
# Update existing record.
the_updated = updated_by_uuid[str(entry.sodar_uuid)]
entry.rank = the_updated["rank"]
entry.name = the_updated["name"]
entry.aliases = [x.strip() for x in the_updated.get("aliases", "").split(",")]
entry.sequence = the_updated["sequence"]
entry.save()
# Add new records.
for entry in updated:
if not entry.get("uuid") or entry.get("uuid") not in existing:
BarcodeSetEntry.objects.create(
rank=entry["rank"],
name=entry["name"],
aliases=[x.strip() for x in entry.get("aliases", "").split(",")],
sequence=entry["sequence"],
barcode_set=barcode_set,
)
class BarcodeSetDeleteView(
SuccessMessageMixin,
LoginRequiredMixin,
LoggedInPermissionMixin,
ProjectPermissionMixin,
ProjectContextMixin,
DeleteView,
):
"""Deletion of BarcodeSet records"""
success_message = "Barcode set successfully deleted."
template_name = "barcodes/barcodeset_confirm_delete.html"
permission_required = "barcodes.delete_barcodeset"
model = BarcodeSet
slug_url_kwarg = "barcodeset"
slug_field = "sodar_uuid"
@transaction.atomic
def delete(self, *args, **kwargs):
# Delete barcode set record.
for entry in self.get_object().entries.all():
entry.delete()
result = super().delete(*args, **kwargs)
# Register event with timeline.
timeline = get_backend_api("timeline_backend")
if timeline:
tl_event = timeline.add_event(
project=self.get_project(self.request, self.kwargs),
app_name="barcodes",
user=self.request.user,
event_name="barcodeset_delete",
description="delete barcodeset {barcodeset}: {extra-barcodeset_dict}",
status_type="OK",
extra_data={
"barcodeset_dict": {
**model_to_dict(self.object),
"entries": [model_to_dict(entry) for entry in self.object.entries.all()],
}
},
)
tl_event.add_object(obj=self.object, label="barcodeset", name=self.object.name)
return result
def get_success_url(self):
return reverse(
"barcodes:barcodeset-list",
kwargs={"project": self.get_project(self.request, self.kwargs).sodar_uuid},
)
| 36.223077 | 108 | 0.621151 | 967 | 9,418 | 5.898656 | 0.197518 | 0.031557 | 0.015778 | 0.071879 | 0.507714 | 0.436886 | 0.385519 | 0.332048 | 0.281732 | 0.281732 | 0 | 0.000297 | 0.285305 | 9,418 | 259 | 109 | 36.362934 | 0.847125 | 0.107135 | 0 | 0.502488 | 0 | 0 | 0.12731 | 0.045236 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029851 | false | 0 | 0.069652 | 0.00995 | 0.293532 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
982ee8da8c5f2de58352f5bf4f00e71e8a1a90b0 | 857 | py | Python | src/chapter 2/exercise 7.py | group5BCS1/BCS-2021 | 696b53bdfc46799b4a527604fbd6cd6cfb3982eb | [
"MIT"
] | null | null | null | src/chapter 2/exercise 7.py | group5BCS1/BCS-2021 | 696b53bdfc46799b4a527604fbd6cd6cfb3982eb | [
"MIT"
] | null | null | null | src/chapter 2/exercise 7.py | group5BCS1/BCS-2021 | 696b53bdfc46799b4a527604fbd6cd6cfb3982eb | [
"MIT"
] | null | null | null | dollars = float(input("Enter amount to change in dollars :"))
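# Note: repeated float // and % operations accumulate rounding error; working in integer cents would be more robust.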
# 20 dollar notes
num1 = dollars//20
# remainder after 20
num2 = dollars%20
# notes by 10
num3 = num2//10
# remainder after 10
num4 = num2%10
# notes by 5 dollar
num5 = num4//5
# remainder after 5 dollars
num6 = num4%5
# notes by 1 dollar
num7 = num6//1
# remainder after 1 dollar
num8 = num6%1
# cents
# cents by 0.25 dollars
num9 =num8//0.25
# remainder after quarter
num10 = num8%0.25
# the dimes
num11 = num10//0.1
# remainder after the dimes
num12 = num10%0.1
# the nickels
num13 = num12//0.05
# the remainder after the nickels
num14 = num12%0.05
# the pennies
num15 = num14//0.01
print(int(num1),"twenties")
print(int(num3),"tens")
print(int(num5),"fives")
print(int(num7),"ones")
print(int(num9),"quarters")
print(int(num11),"dimes")
print(int(num13),"nickels")
print(int(num15),"pennies")
| 20.902439 | 61 | 0.705951 | 145 | 857 | 4.172414 | 0.344828 | 0.105785 | 0.023141 | 0.036364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131327 | 0.147025 | 857 | 40 | 62 | 21.425 | 0.696306 | 0.343057 | 0 | 0 | 0 | 0 | 0.152015 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9831311007b8bf4c390cc44301f62f55d2e750be | 12,902 | py | Python | wagtailorderable/modeladmin/mixins.py | kausaltech/wagtail-orderable | 72ca5835b9aa83e02cb7eb564a4517fac2273c28 | [
"MIT"
] | null | null | null | wagtailorderable/modeladmin/mixins.py | kausaltech/wagtail-orderable | 72ca5835b9aa83e02cb7eb564a4517fac2273c28 | [
"MIT"
] | null | null | null | wagtailorderable/modeladmin/mixins.py | kausaltech/wagtail-orderable | 72ca5835b9aa83e02cb7eb564a4517fac2273c28 | [
"MIT"
] | null | null | null | from django.conf.urls import url
from django.core.exceptions import FieldDoesNotExist, ImproperlyConfigured, PermissionDenied
from django.db import connections, transaction
from django.db.models import F, Count
from django.db.models.expressions import Case, Value, When
from django.db.models.functions import Cast
from django.http.response import HttpResponse, HttpResponseBadRequest
from django.shortcuts import get_object_or_404
from django.utils.safestring import mark_safe
from django.utils.translation import ugettext_lazy as _
from ..signal import pre_reorder, post_reorder
class OrderableMixinMetaClass(type):
"""
index_order method needs to be completed with an `admin_order_field` but as sort_order_field
is not yet known in the class, we need this meta class to get it from other final class args
"""
def __new__(cls, name, bases, attrs):
model = attrs.get('model', None)
sort_order_field = attrs.get('sort_order_field', None)
if model and not sort_order_field:
sort_order_field = getattr(model, 'sort_order_field', None)
if sort_order_field:
            # unfortunately, wagtail IndexView._get_default_ordering is currently using
# `model_admin.ordering` instead of `model_admin.get_ordering()`
# So we need to automagically set it here
if 'ordering' not in attrs:
attrs['ordering'] = (sort_order_field, )
elif sort_order_field not in attrs['ordering']:
attrs['ordering'] = (sort_order_field, ) + tuple(attrs['ordering'])
# set the "sorting" column
if 'index_order' not in attrs:
def index_order(self, obj):
"""Content for the `index_order` column"""
return mark_safe((
'<div class="handle icon icon-grip text-replace ui-sortable-handle">'
'%s</div>'
) % _('Drag'))
index_order.admin_order_field = sort_order_field
index_order.short_description = _('Order')
attrs['index_order'] = index_order
return type.__new__(cls, name, bases, attrs)
class OrderableMixin(object, metaclass=OrderableMixinMetaClass):
sort_order_field = None
"""
Mixin class to add drag-and-drop ordering support to the ModelAdmin listing
view when the model extends the `wagtailorderable.models.Orderable`
abstract model class.
"""
def __init__(self, parent=None):
super(OrderableMixin, self).__init__(parent)
# Don't allow initialisation unless self.model subclasses
# `wagtail.wagtailcore.models.Orderable` or sort_order_field is set
if not self.sort_order_field and hasattr(self.model, 'sort_order_field'):
self.sort_order_field = getattr(self.model, 'sort_order_field', None)
if not self.sort_order_field:
raise ImproperlyConfigured(
u"You are using OrderableMixin for your '%(cls)s' class, but the "
"django model specified is not a sub-class of "
"'wagtail.wagtailcore.models.Orderable and you did not set "
"'%(cls)s.sort_order_field'." % {'cls': self.__class__.__name__}
)
try:
self.model._meta.get_field(self.sort_order_field)
except FieldDoesNotExist:
raise ImproperlyConfigured(
u"You are using OrderableMixin for your '%s' class, but the "
"'sort_order_field' is set to '%s' which does not exists "
"into your model." %
(self.__class__.__name__, self.sort_order_field))
def get_ordering(self, request):
"""
Returns a sequence defining the default ordering for results in the
list view.
"""
if not self.ordering:
return (self.sort_order_field, )
elif self.sort_order_field not in self.ordering:
return (self.sort_order_field, ) + tuple(self.ordering)
return self.ordering
def get_list_display(self, request):
"""Add `index_order` as the first column to results"""
list_display = list(super().get_list_display(request))
if self.sort_order_field in list_display:
# Used JS need one and only one order field displayed in the list
list_display.remove(self.sort_order_field)
return ('index_order', *list_display)
def get_list_display_add_buttons(self, request):
"""
If `list_display_add_buttons` isn't set, ensure the buttons are not
added to the `index_order` column.
"""
col_field_name = super(
OrderableMixin, self).get_list_display_add_buttons(request)
if col_field_name == 'index_order':
list_display = self.get_list_display(request)
return list_display[1]
return col_field_name
def get_extra_attrs_for_field_col(self, obj, field_name):
"""
Add data attributes to the `index_order` column that can be picked
up via JS. The width attribute helps the column remain at a fixed size
while dragging and the title is used for generating a success message
        on reorder completion.
"""
attrs = super(OrderableMixin, self).get_extra_attrs_for_field_col(
obj, field_name)
if field_name == 'index_order':
attrs.update({
'data-title': obj.__str__(),
'width': 20,
})
return attrs
def get_extra_class_names_for_field_col(self, obj, field_name):
"""
Add the `visible-on-drag` class to certain columns
"""
classnames = super(OrderableMixin, self).get_extra_class_names_for_field_col(
obj, field_name
)
if field_name in ('index_order', self.list_display[0], 'admin_thumb',
self.list_display_add_buttons or ''):
classnames.append('visible-on-drag')
return classnames
def _get_position(self, pk):
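        """Return (sort_order, obj) for the given pk, or (None, None) if it does not exist."""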
try:
obj = self.model.objects.get(pk=pk)
return getattr(obj, self.sort_order_field), obj
except self.model.DoesNotExist:
return None, None
@transaction.atomic
def reorder_view(self, request, instance_pk):
"""
Very simple view functionality for updating the `sort_order` values
for objects after a row has been dragged to a new position.
"""
self.fix_duplicate_positions(request)
obj_to_move = get_object_or_404(self.model, pk=instance_pk)
if not self.permission_helper.user_can_edit_obj(request.user, obj_to_move):
raise PermissionDenied
# determine the start position
old_position = getattr(obj_to_move, self.sort_order_field) or 0
# determine the destination position
after_position, after = self._get_position(request.GET.get('after'))
before_position, before = self._get_position(request.GET.get('before'))
if after:
position = after_position or 0
response = _('"%s" moved after "%s"') % (obj_to_move, after)
elif before:
position = before_position or 0
response = _('"%s" moved before "%s"') % (obj_to_move, before)
else:
return HttpResponseBadRequest(_('"%s" not moved') % obj_to_move)
qs = self.get_filtered_queryset(request)
signal_kwargs = {'sender': self.__class__, 'queryset': qs}
# move the object from old_position to new_position
if position < old_position:
if position == after_position:
position += 1
qs = qs.filter(**{'%s__lt' % self.sort_order_field: old_position,
'%s__gte' % self.sort_order_field: position})
update_value = F(self.sort_order_field) + 1
signal_kwargs.update({'from_order': position, 'to_position': old_position + 1})
elif position > old_position:
if position == before_position:
position -= 1
qs = qs.filter(**{'%s__gt' % self.sort_order_field: old_position,
'%s__lte' % self.sort_order_field: position})
update_value = F(self.sort_order_field) - 1
signal_kwargs.update({'from_order': old_position - 1, 'to_position': position})
# let's signal we will reorder some instances.
pre_reorder.send(**signal_kwargs)
# reorder all previous|next
qs.update(**{self.sort_order_field: update_value})
# reorder current one
self.model.objects.filter(pk=obj_to_move.pk)\
.update(**{self.sort_order_field: position})
# let's signal we just reorder some instances.
post_reorder.send(**signal_kwargs)
return HttpResponse(response)
def get_filtered_queryset(self, request):
parent_field = getattr(self, 'parent_field', None)
if not parent_field or parent_field not in request.GET:
return self.get_queryset(request)
return self.get_queryset(request).filter(**{parent_field: request.GET.get(parent_field)})
@transaction.atomic
def fix_duplicate_positions(self, request):
"""
Low level function which updates each element to have sequential sort_order values
if the database contains any duplicate values (gaps are ok).
"""
qs = self.get_filtered_queryset(request)
        first_duplicate = qs.values(self.sort_order_field)\
            .annotate(index_order_count=Count(self.sort_order_field))\
            .filter(index_order_count__gt=1)\
            .order_by(self.sort_order_field).first()
if not first_duplicate:
return
        # let's retrieve all records after the first duplicate found
lookups = {'%s__gte' % self.sort_order_field: first_duplicate[self.sort_order_field]}
to_reorder = qs.filter(**lookups).order_by(self.sort_order_field)\
.values_list('pk', self.sort_order_field)[1:]
        # the first one already has the correct order value, so we skip it
        # let's prepare our custom bulk_update to reorder the wrongly ordered ones
        # (we don't use django's native bulk_update, which requires real model instances and is
        # overkill in our case). When django's bulk_update can accept iterables of dicts
        # we won't need this custom bulk_update anymore.
field = self.model._meta.get_field(self.sort_order_field)
when_statements = []
pks = []
bulk_update_qs = self.get_filtered_queryset(request)
new_order = first_duplicate['index_order_count']
for pk, current_order in to_reorder:
new_order += 1
if current_order > new_order:
# we are ok with gaps, this one does not need to be updated
new_order = current_order + 1
continue
if current_order == new_order:
# neither this one
continue
pks.append(pk)
when_statements.append(When(pk=pk, then=Value(new_order, output_field=field)))
case_statement = Case(*when_statements, output_field=field)
if connections[bulk_update_qs.db].features.requires_casted_case_in_updates:
case_statement = Cast(case_statement, output_field=field)
# let's signal we will reorder some instances.
pre_reorder.send(
sender=self.__class__,
from_order=first_duplicate['index_order_count'] + 1,
to_order=new_order,
queryset=bulk_update_qs,
)
bulk_update_qs.filter(pk__in=pks).update(**{self.sort_order_field: case_statement})
# let's signal we just reorder some instances.
post_reorder.send(
sender=self.__class__,
from_order=first_duplicate['index_order_count'] + 1,
to_order=new_order,
queryset=bulk_update_qs,
)
def get_index_view_extra_css(self):
css = super(OrderableMixin, self).get_index_view_extra_css()
css.append('wagtailorderable/modeladmin/css/orderablemixin.css')
return css
def get_index_view_extra_js(self):
js = super(OrderableMixin, self).get_index_view_extra_js()
js.append('wagtailorderable/modeladmin/js/orderablemixin.js')
return js
def get_admin_urls_for_registration(self):
urls = super(OrderableMixin, self).get_admin_urls_for_registration()
urls += (
url(
self.url_helper.get_action_url_pattern('reorder'),
view=self.reorder_view,
name=self.url_helper.get_action_url_name('reorder')
),
)
return urls
| 45.111888 | 99 | 0.634553 | 1,603 | 12,902 | 4.834061 | 0.196507 | 0.060653 | 0.079494 | 0.062718 | 0.308814 | 0.207124 | 0.151503 | 0.112273 | 0.10453 | 0.070719 | 0 | 0.002693 | 0.280344 | 12,902 | 285 | 100 | 45.270175 | 0.831879 | 0.172066 | 0 | 0.090909 | 0 | 0 | 0.094671 | 0.015811 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075758 | false | 0 | 0.055556 | 0 | 0.247475 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9833ab5177863126b1d352268f98e263bfeed2b3 | 4,875 | py | Python | train.py | ahrnbom/guts | 9134e7f6568a24b435841e5934a640bdbe329a68 | [
"MIT"
] | null | null | null | train.py | ahrnbom/guts | 9134e7f6568a24b435841e5934a640bdbe329a68 | [
"MIT"
] | null | null | null | train.py | ahrnbom/guts | 9134e7f6568a24b435841e5934a640bdbe329a68 | [
"MIT"
] | null | null | null | """
Copyright (C) 2022 Martin Ahrnbom
Released under MIT License. See the file LICENSE for details.
General script for training a CNN in PyTorch
"""
from typing import List
import torch
from torch import optim
import numpy as np
from datetime import datetime
from plot import multi_plot
from util import batches
def train(net, data, folder, criterion, write=print,
batch_size=16, learning_rate=1e-5, epochs=64,
plot_title="Network Training"):
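    """Generic training loop: iterate over the data['training'] and data['validation']
    generators for `epochs` epochs, logging via `write`, plotting the loss curves,
    and saving network weights after every epoch."""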
write(f"Starting training at {datetime.now()}")
write(f"Learning rate: {learning_rate}")
device = 'cuda' if torch.cuda.is_available() else 'cpu'
write(f"Using PyTorch device {device}")
net.to(device)
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
loss_history = list()
val_history = list()
max_n_train_batches = 0
for epoch in range(epochs):
epoch_start = datetime.now()
# Train!
net.train()
loss_sum = 0.0
n_train_batches = 0
for batch_num, train_batch in enumerate(batches(data['training'](),
batch_size)):
batch_length = len(train_batch) # not always batch size!
xs = [t[0] for t in train_batch]
ys = [t[1] for t in train_batch]
# xs and ys are now lists of tuples like (x1, x2...)
# Build batches in numpy
nx = len(xs[0]) # 2 would mean we have x1, x2 as inputs to network
xx = [np.stack([x[i] for x in xs]) for i in range(nx)]
ny = len(ys[0]) # 2 would mean we have y1, y2 as ground truth(s)
yy = [np.stack([y[i] for y in ys]) for i in range(ny)]
# Now xx is a list of the inputs, of shape (bs, ...)
# Same for y, except it's ground truth
# Convert to PyTorch
xx = [torch.from_numpy(x).to(device) for x in xx]
yy = [torch.from_numpy(y).to(device) for y in yy]
optimizer.zero_grad()
outputs = net(*xx)
loss = criterion(outputs, *yy)
loss.backward()
optimizer.step()
curr_loss = float(loss.detach().cpu()) / batch_length
loss_sum += curr_loss
mean_loss = loss_sum/(batch_num+1)
n_train_batches += 1
max_n_train_batches = max(max_n_train_batches, n_train_batches)
if (batch_num%200 == 0):
write(f"Epoch {epoch+1}, " \
f"Batch {batch_num+1} / {max_n_train_batches}, "\
f"Loss {mean_loss:_}")
# max_n_train_batches will be wrong during first batch...
# Validate!
val_loss = 0.0
net.eval()
n_val_batches = 0
for batch_num, val_batch in enumerate(batches(data['validation'](),
batch_size)):
batch_length = len(val_batch)
xs = [t[0] for t in val_batch]
ys = [t[1] for t in val_batch]
nx = len(xs[0])
xx = [np.stack([x[i] for x in xs]) for i in range(nx)]
ny = len(ys[0])
yy = [np.stack([y[i] for y in ys]) for i in range(ny)]
xx = [torch.from_numpy(x).to(device) for x in xx]
yy = [torch.from_numpy(y).to(device) for y in yy]
outputs = net(*xx)
loss = criterion(outputs, *yy)
curr_loss = float(loss.detach().cpu()) / batch_length
val_loss += curr_loss
n_val_batches += 1
val_loss /= n_val_batches
train_loss = loss_sum / n_train_batches
write(f"Epoch {epoch+1}/{epochs} done. Train loss: {train_loss:_}, " \
f"val loss: {val_loss}")
# Store and visualize
loss_history.append(train_loss)
val_history.append(val_loss)
n_history = len(loss_history)
if n_history > 2:
multi_plot([range(1, n_history+1), range(1, n_history+1)],
[loss_history, val_history], folder / "train_plot.png",
xlabel='Epochs', ylabel='Loss',
legend=['Training loss', 'Validation loss'],
title=plot_title, use_grid=True,
ylim=[0.0, max(max(val_history), max(loss_history))*1.1])
write("Plot drawn!")
w_path = folder / f"epoch{epoch+1}_vloss{val_loss:_}.pth"
torch.save(net.state_dict(), w_path)
write(f"It is written... {w_path}")
write(f"Best val loss so far: {np.min(val_history)} at epoch " \
f"{np.argmin(val_history)+1}")
now = datetime.now()
epoch_time = now - epoch_start
write(f"Time for this epoch: {epoch_time}") | 35.583942 | 80 | 0.537846 | 662 | 4,875 | 3.794562 | 0.256798 | 0.021497 | 0.046576 | 0.031847 | 0.312898 | 0.199443 | 0.178344 | 0.139331 | 0.109873 | 0.109873 | 0 | 0.017056 | 0.350564 | 4,875 | 137 | 81 | 35.583942 | 0.776374 | 0.110359 | 0 | 0.21978 | 0 | 0 | 0.12375 | 0.024424 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010989 | false | 0 | 0.076923 | 0 | 0.087912 | 0.010989 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9834c4b6e694746ce6bd6fb10e5fcbdc3c27ff60 | 1,179 | py | Python | Communication/Mqtt/MqttPublish.py | landbroken/python-learning | 48351f8e1990ca6823fdcb1ac71574542167fb11 | [
"MIT"
] | 1 | 2018-07-13T07:46:59.000Z | 2018-07-13T07:46:59.000Z | Communication/Mqtt/MqttPublish.py | landbroken/python-learning | 48351f8e1990ca6823fdcb1ac71574542167fb11 | [
"MIT"
] | null | null | null | Communication/Mqtt/MqttPublish.py | landbroken/python-learning | 48351f8e1990ca6823fdcb1ac71574542167fb11 | [
"MIT"
] | 1 | 2018-07-13T03:21:02.000Z | 2018-07-13T03:21:02.000Z | # import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish
import time
HOST = "127.0.0.1"
PORT = 8222
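# Callbacks for the (commented-out) subscriber client below.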
def on_connect(client, userdata, flags, rc):
print("Connected with result code " + str(rc))
client.subscribe("test")
def on_message(client, userdata, msg):
print(msg.topic + " " + msg.payload.decode("utf-8"))
if __name__ == '__main__':
client_id = time.strftime('%Y%m%d%H%M%S', time.localtime(time.time()))
    # client = mqtt.Client(client_id)  # client IDs must be unique, so use the current timestamp
    # client.username_pw_set("admin", "123456")  # must be set, otherwise "Connected with result code 4" is returned
    # client.on_connect = on_connect
    # client.on_message = on_message
    # client.connect(HOST, PORT, 60)
    # client.publish("test", "Hello MQTT", qos=0, retain=False)  # publish the message
client_id_c_sharp = "client001"
publish.single(topic="test",
payload="你好, MqttSever. I'm MqttPublish.py",
qos=1,
hostname=HOST,
port=PORT,
client_id=client_id_c_sharp,
auth={'username': "username001", 'password': "psw001"},
retain=False
)
| 32.75 | 91 | 0.597964 | 149 | 1,179 | 4.563758 | 0.496644 | 0.058824 | 0.041176 | 0.067647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035797 | 0.265479 | 1,179 | 35 | 92 | 33.685714 | 0.749423 | 0.27905 | 0 | 0 | 0 | 0 | 0.172825 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0.047619 | 0.095238 | 0 | 0.190476 | 0.095238 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
98350a26a814c2d0a64a7520f06048d7bfb761fc | 476 | py | Python | leetcode/020/20.py | shankar-shiv/CS1010E_Kattis_practice | 9a8597b7ab61d5afa108a8b943ca2bb3603180c6 | [
"MIT"
] | null | null | null | leetcode/020/20.py | shankar-shiv/CS1010E_Kattis_practice | 9a8597b7ab61d5afa108a8b943ca2bb3603180c6 | [
"MIT"
] | null | null | null | leetcode/020/20.py | shankar-shiv/CS1010E_Kattis_practice | 9a8597b7ab61d5afa108a8b943ca2bb3603180c6 | [
"MIT"
] | null | null | null | class Solution:
def isValid(self, s: str) -> bool:
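        # Push opening brackets; a closing bracket must match the opener on top of the stack.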
stack = []
d = {"]": "[", "}": "{", ")": "("}
for char in s:
# Opening
if char in d.values():
stack.append(char)
elif char in d.keys():
if stack == [] or d[char] != stack.pop():
return False
else:
return False
return stack == []
s = Solution()
print(s.isValid("()[]{}"))
| 25.052632 | 57 | 0.386555 | 47 | 476 | 3.914894 | 0.531915 | 0.097826 | 0.076087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.439076 | 476 | 18 | 58 | 26.444444 | 0.689139 | 0.014706 | 0 | 0.133333 | 0 | 0 | 0.025696 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0 | 0 | 0.333333 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
98351c2147ed07a632f10bfd99940c5816becbe4 | 339 | py | Python | alyBlog/apps/course/urls.py | Hx-someone/aly-blog | e0205777d2ff1642fde5741a5b5c1b06ad675001 | [
"WTFPL"
] | 1 | 2020-04-17T02:15:45.000Z | 2020-04-17T02:15:45.000Z | alyBlog/apps/course/urls.py | Hx-someone/aly-blog | e0205777d2ff1642fde5741a5b5c1b06ad675001 | [
"WTFPL"
] | null | null | null | alyBlog/apps/course/urls.py | Hx-someone/aly-blog | e0205777d2ff1642fde5741a5b5c1b06ad675001 | [
"WTFPL"
] | null | null | null | # -*- coding: utf-8 -*-
"""
@Time : 2020/3/4 13:58
@Author : 半纸梁
@File : urls.py
"""
from django.urls import path
from course import views
app_name = "course"
urlpatterns = [
path("index/", views.CourseIndexView.as_view(), name="index"),
path("<int:course_id>/", views.CourseDetailView.as_view(), name="course_detail"),
] | 21.1875 | 85 | 0.646018 | 46 | 339 | 4.652174 | 0.673913 | 0.093458 | 0.093458 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038732 | 0.162242 | 339 | 16 | 86 | 21.1875 | 0.714789 | 0.241888 | 0 | 0 | 0 | 0 | 0.184 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
98359bcafb43958d81e407a2ccbc55ca959dfeb4 | 10,932 | py | Python | XlsxTools/xls2json/Tools/xls2json.py | maplelearC/Unity3DTraining | 3824d5f92c5fce5cbd8806feb1852e9a99e4a711 | [
"MIT"
] | 3,914 | 2017-01-20T04:55:53.000Z | 2022-03-31T18:06:12.000Z | XlsxTools/xls2json/Tools/xls2json.py | maplelearC/Unity3DTraining | 3824d5f92c5fce5cbd8806feb1852e9a99e4a711 | [
"MIT"
] | 5 | 2019-12-17T05:27:58.000Z | 2022-01-20T11:55:33.000Z | XlsxTools/xls2json/Tools/xls2json.py | maplelearC/Unity3DTraining | 3824d5f92c5fce5cbd8806feb1852e9a99e4a711 | [
"MIT"
] | 1,263 | 2017-01-15T09:54:44.000Z | 2022-03-31T14:59:11.000Z | # -*- coding: utf-8 -*-
import os,sys,importlib
import xml.etree.ElementTree as ET
import xdrlib,xlrd
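# Converts Excel workbooks to JSON files plus C# mapping classes, driven by per-table XML schemas.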
# Guard against garbled Chinese output (a Python 2 leftover; reload(sys) has no encoding effect in Python 3)
importlib.reload(sys)
# config file name
CONFIG_NAME = "config.ini"
# output file extension
SAVE_FILE_TYPE = ".json"
# mapping class file extension
SAVE_MAPPING_TYPE = ".cs"
# field separator
SPLIT_CAHR = ":"
# spreadsheet directory
XLS_PATH = ""
# XML schema directory
XML_PATH = ""
# output directory
OUT_PATH = ""
# mapping class directory
MAP_PATH = ""
# per-table body of the aggregated mapping class
MAPPING_CONTENT = ""
# Read the configuration
def read_config():
    print("Reading configuration file")
    config_file = open(CONFIG_NAME, "r", encoding = "utf-8")
    # spreadsheet directory
    cur_line = config_file.readline().rstrip("\r\n").split(SPLIT_CAHR)
    global XLS_PATH
    XLS_PATH = os.path.abspath(cur_line[1])
    print("Spreadsheet directory:", XLS_PATH)
    # XML schema directory
    cur_line = config_file.readline().rstrip("\r\n").split(SPLIT_CAHR)
    global XML_PATH
    XML_PATH = os.path.abspath(cur_line[1])
    print("XML schema directory:", XML_PATH)
    # output directory
    cur_line = config_file.readline().rstrip("\r\n").split(SPLIT_CAHR)
    global OUT_PATH
    OUT_PATH = os.path.abspath(cur_line[1])
    print("Output directory:", OUT_PATH)
    # mapping directory
    cur_line = config_file.readline().rstrip("\r\n").split(SPLIT_CAHR)
    global MAP_PATH
    MAP_PATH = os.path.abspath(cur_line[1])
    print("Mapping directory:", MAP_PATH)
    config_file.close()
# Delete old files from the output directory
def delect_old_file():
    print("Deleting old files from the output directory")
    file_list = os.listdir(OUT_PATH)
    for file in file_list:
        # only delete JSON files
        if file.endswith(SAVE_FILE_TYPE):
            os.remove(OUT_PATH + "\\" + file)
    print("Deleting old files from the mapping directory")
    file_list = os.listdir(MAP_PATH)
    for file in file_list:
        # only delete C# files
        if file.endswith(SAVE_MAPPING_TYPE):
            os.remove(MAP_PATH + "\\" + file)
# Convert files
def change_file():
    print("Starting file conversion")
    file_list = os.listdir(XML_PATH)
    for file in file_list:
        if file.endswith(".xml"):
            # build the XML file path
            xml_file_path = XML_PATH + "\\" + file
            isSucc = parse_file_by_xml(xml_file_path)
            if (False == isSucc):
                print("Something went wrong!")
                return
def parse_file_by_xml(xml_file_path):
    # parse the XML
    try:
        tree = ET.parse(xml_file_path)
        # get the root node
        root = tree.getroot()
    except Exception as e:
        print("Failed to parse {0}!".format(xml_file_path))
        sys.exit()
        return False
    # parse the content
    if root.tag == "config":
        xls_file_list = []
        save_file_name = ""
        element_list = []
        for child in root:
            if child.tag == "input":
                # spreadsheets to convert
                for input_child in child:
                    xls_file_list.append(input_child.get("file"))
            elif child.tag == "output":
                # output file name
                save_file_name = child.get("name")
            elif child.tag == "elements":
                # column mapping definitions
                element_list = child
        # convert the data
        return change_file_by_xml_data(xls_file_list, element_list, save_file_name)
    else:
        print("No config node found in {0}".format(xml_file_path))
        return False
# Convert spreadsheet data according to the XML schema
def change_file_by_xml_data(xls_file_list, element_list, save_file_name):
    # primary key check
primary_key = None
primary_type = None
for element in element_list:
if "true" == element.get("primary"):
if None == primary_key:
primary_key = element.get("name")
primary_type = element.get("type")
else:
print("存在多个主键")
return False
if None == primary_key:
print("没有主键")
return False
all_value_list = {}
for xls_file in xls_file_list:
xls_file_path = XLS_PATH + "\\" + xls_file
print("转换文件{0}".format(xls_file_path))
#打开表格
xls_data = None
try:
xls_data = xlrd.open_workbook(xls_file_path)
except Exception as e:
print(str(e))
return False
        # read data from the first sheet
        xls_table = xls_data.sheets()[0]
        nrows = xls_table.nrows  # row count
        ncols = xls_table.ncols  # column count
        # convert according to the XML schema definitions
key_list = xls_table.row_values(0)
for row_index in range(1, nrows):
row_values = xls_table.row_values(row_index)
            # store the row as a dict
value_dic = {}
for col_index in range(0, ncols):
for element in element_list:
if key_list[col_index] == element.get("key"):
if "int" == element.get("type"):
value_dic[element.get("name")] = int(row_values[col_index])
elif "string" == element.get("type"):
value_dic[element.get("name")] = str(row_values[col_index])
else:
value_dic[element.get("name")] = str(row_values[col_index])
break
            # set the primary key
primary_value = str(value_dic[primary_key])
if primary_value in all_value_list:
print("存在重复的主键")
return False
all_value_list[primary_value] = value_dic
        # release workbook resources
xls_data.release_resources()
    # build the JSON string (note: str(...).replace is fragile; json.dumps would be safer)
    JSON_STR = str(all_value_list).replace("\'", "\"")
    # build the class name
file_name = "Table" + save_file_name[0].upper() + save_file_name[1:]
    # save as a JSON file
save_to_json(JSON_STR, file_name)
    # generate the C# mapping class
save_to_mapping(file_name, element_list, primary_type)
return True
# Save as a JSON file
def save_to_json(json_str, file_name):
    save_file_path = OUT_PATH + "\\" + file_name + SAVE_FILE_TYPE
    print("Writing file: " + save_file_path)
    file_object = open(save_file_path, 'w', encoding = "utf-8")
    file_object.write(json_str)
    file_object.close()
# Generate the C# mapping class
def save_to_mapping(file_name, element_list, primary_type):
table_content_frame = "public class " + file_name + " {{\n{0}{1}\n}}"
table_content_field = ""
constructor_content = ""
constructor_params = None
constructor_assign = None
mapping_single_content = create_single_table_mapping_content(file_name)
mapping_json_value = None
    # mapping class fields
for element in element_list:
field_name = element.get("name")
type_str = element.get("type")
        field_str = "\n\t// Column[{0}] Type[{1}]\n\tpublic {2} " + field_name + " = {3};\n"
define_value_str = None
if "int" == type_str:
define_value_str = 0
elif "string" == type_str:
define_value_str = "\"\""
if None != type_str:
            # fill in the field
key_name_str = element.get("key")
table_content_field = table_content_field + field_str.format(key_name_str, type_str, type_str, define_value_str)
if None != constructor_params:
constructor_params = constructor_params + ", " + type_str + " " + field_name
constructor_assign = constructor_assign + "\n\t\tthis.{0} = {1};".format(field_name, field_name)
mapping_json_value = mapping_json_value + (", ({0})json.Value[\"{1}\"]").format(type_str, field_name)
else:
constructor_params = type_str + " " + field_name
constructor_assign = "\t\tthis.{0} = {1};".format(field_name, field_name)
mapping_json_value = "({0})json.Value[\"{1}\"]".format(type_str, field_name)
    # a constructor can be generated
    if None != constructor_params:
        # constructor
        constructor_content = ("\n\t// Constructor\n\tpublic " + file_name + "({0})\n\t{{\n{1}\n\t}}").format(constructor_params, constructor_assign)
    # aggregated mapping data
global MAPPING_CONTENT
prime_key_trans = "null"
if "int" == primary_type:
prime_key_trans = "int.Parse(json.Key)"
elif "string" == primary_type:
prime_key_trans = "json.Key"
MAPPING_CONTENT = MAPPING_CONTENT + mapping_single_content.format(prim_key_type = primary_type, prime_key_trans = prime_key_trans, json_value = mapping_json_value)
save_file_path = MAP_PATH + "\\" + file_name + SAVE_MAPPING_TYPE
print("输出映射类:" + save_file_path)
file_object = open(save_file_path, 'w', encoding = "utf-8")
file_object.write(table_content_frame.format(table_content_field, constructor_content))
file_object.close()
# Generate the aggregated-mapping snippet for a single table
def create_single_table_mapping_content(file_name):
content = ""
content = content + "\n\n\t//{xml_name}"
content = content + "\n\tprivate Dictionary<{{prim_key_type}}, {file_name}> {lower_file_name}Dic = new Dictionary<{{prim_key_type}}, {file_name}>();"
content = content + "\n\t//初始化{xml_name}字典"
content = content + "\n\tprivate void Init{file_name}()"
content = content + "\n\t{{{{"
content = content + "\n\t\tJObject jsonData = JsonManager.GetTableJson(\"{file_name}\");"
content = content + "\n\t\tforeach (var json in jsonData)"
content = content + "\n\t\t{{{{"
content = content + "\n\t\t\t{{prim_key_type}} key = {{prime_key_trans}};"
content = content + "\n\t\t\tvar jsonValue = json.Value;"
content = content + "\n\t\t\t{file_name} value = new {file_name}({{json_value}});"
content = content + "\n\t\t\t{lower_file_name}Dic.Add(key, value);"
content = content + "\n\t\t}}}}"
content = content + "\n\t}}}}"
content = content + "\n\t//通过主键值获取{xml_name}数据"
content = content + "\n\tpublic {file_name} Get{file_name}ByPrimKey({{prim_key_type}} primKey)"
content = content + "\n\t{{{{"
content = content + "\n\t\tif (0 == {lower_file_name}Dic.Count) Init{file_name}();"
content = content + "\n\t\t//获取数据"
content = content + "\n\t\t{file_name} {lower_file_name}Data = null;"
content = content + "\n\t\t{lower_file_name}Dic.TryGetValue(primKey, out {lower_file_name}Data);"
content = content + "\n\t\treturn {lower_file_name}Data;"
content = content + "\n\t}}}}"
return content.format(xml_name = file_name[5:], file_name = file_name, lower_file_name = file_name[0].lower() + file_name[1:])
# Create the aggregated mapping data file
def craete_table_mapping_cs():
mapping_frame = ""
mapping_frame = mapping_frame + "using System.Collections.Generic;"
mapping_frame = mapping_frame + "\nusing Newtonsoft.Json.Linq;"
mapping_frame = mapping_frame + "\n\npublic class TableMapping"
mapping_frame = mapping_frame + "\n{{\n{0}{1}\n}}"
mapping_ins = ""
mapping_ins = mapping_ins + "//单例"
mapping_ins = mapping_ins + "\n\tprivate TableMapping() { }"
mapping_ins = mapping_ins + "\n\tprivate static TableMapping _ins;"
mapping_ins = mapping_ins + "\n\tpublic static TableMapping Ins { get { if (null == _ins) { _ins = new TableMapping(); } return _ins; } }"
    # save the file
    save_file_path = MAP_PATH + "\\TableMapping" + SAVE_MAPPING_TYPE
file_object = open(save_file_path, 'w', encoding = "utf-8")
file_object.write(mapping_frame.format(mapping_ins, MAPPING_CONTENT))
file_object.close()
def main():
read_config()
delect_old_file()
change_file()
craete_table_mapping_cs()
if __name__ == "__main__":
main() | 36.198675 | 171 | 0.608214 | 1,414 | 10,932 | 4.399576 | 0.164074 | 0.051439 | 0.055457 | 0.048867 | 0.395917 | 0.308632 | 0.258801 | 0.223115 | 0.147565 | 0.123131 | 0 | 0.005171 | 0.257044 | 10,932 | 302 | 172 | 36.198676 | 0.760773 | 0.028632 | 0 | 0.176991 | 0 | 0.004425 | 0.164332 | 0.046573 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044248 | false | 0 | 0.017699 | 0 | 0.106195 | 0.079646 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
983645ba8bb0cbefb83e179f06ceca1eabb1c5e8 | 393 | py | Python | Class 12/12th - Project/using yield giving primes.py | edwardmasih/Python-School-Level | 545e8fcd87f540be2bbf01d3493bd84dd5504739 | [
"MIT"
] | null | null | null | Class 12/12th - Project/using yield giving primes.py | edwardmasih/Python-School-Level | 545e8fcd87f540be2bbf01d3493bd84dd5504739 | [
"MIT"
] | null | null | null | Class 12/12th - Project/using yield giving primes.py | edwardmasih/Python-School-Level | 545e8fcd87f540be2bbf01d3493bd84dd5504739 | [
"MIT"
] | null | null | null | n = int(input("Enter the number up to which you want the prime numbers => "))
print()
print("The List of Prime Numbers :-")
def prime(n):
    # yield every prime below n (start at 2; 1 is not prime)
    for i in range(2, n):
        is_prime = 1
        for j in range(2, i):
            if i % j == 0:
                is_prime = 0
                break
        if is_prime == 1:
            yield i
for p in prime(n):
    print(p)
| 20.684211 | 75 | 0.445293 | 62 | 393 | 2.822581 | 0.483871 | 0.12 | 0.102857 | 0.114286 | 0.194286 | 0.194286 | 0 | 0 | 0 | 0 | 0 | 0.026786 | 0.430025 | 393 | 18 | 76 | 21.833333 | 0.754464 | 0 | 0 | 0 | 0 | 0 | 0.230563 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.0625 | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9837bcd5e661ac2fc2610832a9e2f5b9c2137ebd | 3,117 | py | Python | tests/test_scenario/test_follow_my_commit.py | magnusbaeck/eiffel-graphql-api | c0cd0dc3fdad7787988599974ace2a4cebf70844 | [
"Apache-2.0"
] | null | null | null | tests/test_scenario/test_follow_my_commit.py | magnusbaeck/eiffel-graphql-api | c0cd0dc3fdad7787988599974ace2a4cebf70844 | [
"Apache-2.0"
] | null | null | null | tests/test_scenario/test_follow_my_commit.py | magnusbaeck/eiffel-graphql-api | c0cd0dc3fdad7787988599974ace2a4cebf70844 | [
"Apache-2.0"
] | null | null | null | # Copyright 2019 Axis Communications AB.
#
# For a full list of individual contributors, please see the commit history.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -*- coding: utf-8 -*-
import pytest
import logging
from unittest import TestCase
from .event import *
from .queries import *
from tests.lib.query_handler import GraphQLQueryHandler
logging.basicConfig(
level=logging.DEBUG
)
class TestFollowMyCommit(TestCase):
@classmethod
def setUpClass(cls):
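        # Seed one event of each Eiffel type plus several confidence levels for the query to traverse.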
cls.query_handler = GraphQLQueryHandler("http://127.0.0.1:12345/graphql")
cls.events = [
eiffel_source_change_created_event(),
eiffel_source_change_submitted_event(),
eiffel_composition_defined_event(),
eiffel_artifact_created_event(),
eiffel_artifact_published_event(),
eiffel_confidence_level_modified_event("readyForIntegration"),
eiffel_confidence_level_modified_event("IntegrationTests"),
eiffel_confidence_level_modified_event("Daily"),
eiffel_confidence_level_modified_event("Stability"),
eiffel_confidence_level_modified_event("Weekly"),
eiffel_confidence_level_modified_event("FredrikIsNojd")
]
cls.logger = logging.getLogger("TestFollowMyCommit")
def setUp(self):
self.logger.info("\n")
for event in self.events:
insert(event)
def tearDown(self):
for event in self.events:
remove(event)
def test_follow_my_commit(self):
"""Test that you can follow a commit with the graphql API.
Approval criteria:
- GraphQL API shall provide a way to determine the confidence levels of a commit.
Test steps:
1. Query a commit ID from GraphQL API.
2. Verify that it is possible to fetch confidence levels from this commit ID.
"""
self.logger.info("STEP: Query a commit ID from GraphQL API.")
response = self.query_handler.execute(FOLLOW_MY_COMMIT)
nodes = self.query_handler.search_for_node_typename(response, "ConfidenceLevelModified")
self.logger.info("STEP: Cerify that it is possible to fetch confidence levels from this commit ID.")
node_names = []
for node_name, node in nodes:
self.assertEqual(node_name, "ConfidenceLevelModified")
node_names.append(node["data"]["name"])
for node_name in ("readyForIntegration", "IntegrationTests", "Daily",
"Stability", "Weekly", "FredrikIsNojd"):
self.assertIn(node_name, node_names)
| 37.554217 | 108 | 0.686878 | 377 | 3,117 | 5.519894 | 0.427056 | 0.028832 | 0.060548 | 0.083614 | 0.197021 | 0.079769 | 0.079769 | 0.052859 | 0.052859 | 0.052859 | 0 | 0.009193 | 0.232275 | 3,117 | 82 | 109 | 38.012195 | 0.860426 | 0.307347 | 0 | 0.043478 | 0 | 0 | 0.17281 | 0.02202 | 0 | 0 | 0 | 0 | 0.043478 | 1 | 0.086957 | false | 0 | 0.130435 | 0 | 0.23913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
983d36fb52bfd8dc267daed6bd1722f87ca8c9d7 | 2,852 | py | Python | gameProj/gameApp/views.py | cs-fullstack-2019-spring/django-mini-project4-chelsea-porche | 54b73e89c67c5cf2ada57e529e982ffd291fc314 | [
"Apache-2.0"
] | null | null | null | gameProj/gameApp/views.py | cs-fullstack-2019-spring/django-mini-project4-chelsea-porche | 54b73e89c67c5cf2ada57e529e982ffd291fc314 | [
"Apache-2.0"
] | null | null | null | gameProj/gameApp/views.py | cs-fullstack-2019-spring/django-mini-project4-chelsea-porche | 54b73e89c67c5cf2ada57e529e982ffd291fc314 | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render, redirect, get_object_or_404
from django.contrib.auth.decorators import login_required
from django.http import HttpResponse
from .models import Game, Person
from .forms import NewUserForm, NewGameForm
from django.contrib.auth.models import User
# Create your views here.
# TO REQUIRE LOGIN TO VIEW PAGE
@login_required
def index(request):
# TO LOGIN SPECIFIC USER
userLogin = Person.objects.filter(username=request.user)
# TO GRAB GAME OBJECTS- grabs all instead of specific users :(
gamer = Game.objects.all()
# TO SYNC VARIABLES WITH HTML PAGE FORMAT
context = {
'userLogin': userLogin,
'gamer': gamer,
}
# TO ROUTE FUNCTION TO PAGE
return render(request, 'gameApp/index.html', context)
def createuser(request):
# TO USE INFORMATION ENTERED IN FORM/PULL INFO FROM FORM TO DISPLAY
new_user = NewUserForm(request.POST or None)
# TO SAVE IF INFO VALIDATES
if new_user.is_valid():
# SAVES DATA AS PERSON
new_user.save()
# TO CREATE A USER/PERSON THAT CAN LOGIN
user = User.objects.create_user(username=request.POST['username'], password=request.POST['password1'])
user.save()
# TO RETURN TO INDEX AFTER SUBMIT
return redirect('index')
# FOR DISPLAYING FORM INFO ON HTML FROM FORM/MODEL
context = {
'userform': new_user
}
# TO ROUTE TO/FROM CREATE USER HTML
return render(request, 'gameApp/createuser.html', context)
def creategame(request):
# TO GRAB OBJECTS FROM GAME FORM/MODEL
gameform = NewGameForm(request.POST or None)
# TO SAVE IF INFO VALIDATES
if gameform.is_valid():
gameform.save()
# TO RETURN TO INDEX AFTER SUBMIT
return redirect('index')
# FOR DISPLAYING FORM INFO ON HTML FROM FORM/MODEL
context = {
'gameform': gameform
}
# TO ROUTE TO/FROM CREATEGAME HTML
return render(request, 'gameApp/creategame.html', context)
def editgame(request,id):
# TO GRAB SPECIFIC GAMER/USER
gamer = get_object_or_404(Game, pk=id)
# TO GRAB SELECTED GAME AND SEND TO FORM
game_account = NewGameForm(request.POST or None, instance=gamer)
# TO SAVE CHANGES
if game_account.is_valid():
game_account.save()
# SEND BACK TO INDEX
return redirect('index')
# ROUTE TO HTML PAGE
return render(request, 'gameApp/creategame.html', {'gameform': game_account})
def deleteaccount(request, id):
# TO GRAB SPECIFIC ACCOUNT
games = get_object_or_404(Game, pk=id)
    # TO DELETE IF SUBMITTED/SAVE DELETE
if request.method == 'POST':
games.delete()
# RETURN TO INDEX
return redirect('index')
# ROUTE TO/FROM DELETE CONFIRMATION PAGE
return render(request, 'gameApp/delete.html', {'selectedgame': games}) | 32.044944 | 110 | 0.680926 | 376 | 2,852 | 5.103723 | 0.287234 | 0.015633 | 0.049505 | 0.067744 | 0.338197 | 0.242835 | 0.201146 | 0.166754 | 0.14174 | 0.14174 | 0 | 0.004564 | 0.231767 | 2,852 | 89 | 111 | 32.044944 | 0.871292 | 0.309257 | 0 | 0.145833 | 0 | 0 | 0.10139 | 0.035512 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104167 | false | 0.020833 | 0.125 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
983feb73273a2249b6440b588403dab50f5f78f8 | 676 | py | Python | migrations/versions/2c64078d1aff_.py | Alhuin/playlist-handler | 99b008ec8d9f4f2163266af19ad2ece478b0b172 | [
"MIT"
] | null | null | null | migrations/versions/2c64078d1aff_.py | Alhuin/playlist-handler | 99b008ec8d9f4f2163266af19ad2ece478b0b172 | [
"MIT"
] | null | null | null | migrations/versions/2c64078d1aff_.py | Alhuin/playlist-handler | 99b008ec8d9f4f2163266af19ad2ece478b0b172 | [
"MIT"
] | null | null | null | """empty message
Revision ID: 2c64078d1aff
Revises: 635e91180d41
Create Date: 2021-03-04 20:03:25.441608
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '2c64078d1aff'
down_revision = '635e91180d41'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('users', sa.Column('soundcloud_tkn', sa.String(length=1000), nullable=True))
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_column('users', 'soundcloud_tkn')
# ### end Alembic commands ###
| 23.310345 | 94 | 0.699704 | 84 | 676 | 5.547619 | 0.607143 | 0.05794 | 0.090129 | 0.098712 | 0.188841 | 0.188841 | 0.188841 | 0.188841 | 0 | 0 | 0 | 0.103203 | 0.168639 | 676 | 28 | 95 | 24.142857 | 0.725979 | 0.436391 | 0 | 0 | 0 | 0 | 0.180233 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9841245f738118e4913e7d61e4e18f45d42f104e | 2,115 | py | Python | runs/kubernetes/start_haproxy_cfg.py | Ruilkyu/kubernetes_start | 9e88a7f1c64899454af8f9be1dd9653ba435e21f | [
"Apache-2.0"
] | 2 | 2020-07-24T14:19:57.000Z | 2020-08-10T18:30:08.000Z | runs/kubernetes/start_haproxy_cfg.py | Ruilkyu/kubernetes_start | 9e88a7f1c64899454af8f9be1dd9653ba435e21f | [
"Apache-2.0"
] | null | null | null | runs/kubernetes/start_haproxy_cfg.py | Ruilkyu/kubernetes_start | 9e88a7f1c64899454af8f9be1dd9653ba435e21f | [
"Apache-2.0"
] | 1 | 2021-07-09T10:29:11.000Z | 2021-07-09T10:29:11.000Z | """
Date: 2020/6/13
Author: lurui
Purpose: generate HAProxy's haproxy.cfg configuration file from the provided template
Date: 2020/6/16
Author: lurui
Change: read config.ini instead of master.txt
Date: 2020/6/17
Author: lurui
Change: base path changed from basedir = os.path.dirname(os.path.dirname(os.getcwd())) to the caller's path basedir = os.path.abspath('.')
"""
import os
import configparser
def start_haproxy_cfg():
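    """Render deploy/haproxy/cfg/haproxy.cfg from templates/haproxy/haproxy.yaml,
    using the [VIP] and [MASTER] sections of cfg/config.ini."""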
basedir = os.path.abspath('.')
config = configparser.ConfigParser()
# config.read(basedir + '/cfg/vip.ini')
config.read(basedir + '/cfg/config.ini')
vip = config['VIP']['vip']
port = config['VIP']['port']
# master_list = basedir + '/cfg/master.txt'
# try:
# master_list_fh = open(master_list, mode="r", encoding='utf-8')
# except FileNotFoundError:
# os.mknod(master_list)
# master_list_fh = open(master_list, mode="r", encoding='utf-8')
haproxy_templates = basedir + '/templates/haproxy/haproxy.yaml'
try:
haproxy_templates_fh = open(haproxy_templates, mode="r", encoding='utf-8')
except FileNotFoundError:
os.mknod(haproxy_templates)
haproxy_templates_fh = open(haproxy_templates, mode="r", encoding='utf-8')
if os.path.exists(basedir + '/deploy/haproxy/cfg/haproxy.cfg'):
os.remove(basedir + '/deploy/haproxy/cfg/haproxy.cfg')
if not os.path.exists(basedir + '/deploy/haproxy/cfg'):
os.makedirs(basedir + '/deploy/haproxy/cfg')
haproxy_data = ''
try:
for k in haproxy_templates_fh.readlines():
haproxy_data += k
except Exception as e:
print(e)
try:
location = basedir + '/deploy/haproxy/cfg/haproxy.cfg'
file = open(location, 'a')
# masterlist = []
#
# for l in master_list_fh.readlines():
# masterlist.append(l.strip("\n"))
master1 = config['MASTER']['master1']
master2 = config['MASTER']['master2']
master3 = config['MASTER']['master3']
resultdate = ""
resultdate = haproxy_data.format(port, master1, master2, master3)
file.write(resultdate)
file.close()
except Exception as e:
print(e)
# start_haproxy_cfg()
| 27.467532 | 101 | 0.628369 | 260 | 2,115 | 5.007692 | 0.307692 | 0.076805 | 0.076805 | 0.088326 | 0.368664 | 0.345622 | 0.250384 | 0.196621 | 0.196621 | 0.150538 | 0 | 0.020631 | 0.220804 | 2,115 | 76 | 102 | 27.828947 | 0.769417 | 0.293144 | 0 | 0.25 | 0 | 0 | 0.164634 | 0.084011 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.055556 | 0 | 0.083333 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
984150f5c2ce37c60a6c7646551282b4404a706e | 3,025 | py | Python | app/server/database.py | situkangsayur/async-fastapi-mongo | 2ee9dcb87687bcf5e3eba5cdb4ec49573d6c1f16 | [
"MIT"
] | null | null | null | app/server/database.py | situkangsayur/async-fastapi-mongo | 2ee9dcb87687bcf5e3eba5cdb4ec49573d6c1f16 | [
"MIT"
] | null | null | null | app/server/database.py | situkangsayur/async-fastapi-mongo | 2ee9dcb87687bcf5e3eba5cdb4ec49573d6c1f16 | [
"MIT"
] | 1 | 2021-12-05T17:26:08.000Z | 2021-12-05T17:26:08.000Z | import motor.motor_asyncio
from bson.objectid import ObjectId
from decouple import config
from outfit import Outfit, Logger, ConsulCon, VaultCon, merge_dict
__config_info__ = 'configs/config.yaml'
# load config via python-outfit
Outfit.setup(__config_info__)
vault = VaultCon().get_secret_kv()
consul = ConsulCon().get_kv()
# merge dict from vault and consul
config_set = merge_dict(consul, vault)
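# Illustrative sketch only: config_set is assumed to look roughly like the
# following after the merge; the real keys live in Consul/Vault, not here.
# config_set = {'mongodb': {'username': 'app', 'password': '***',
#                           'host': 'localhost', 'port': 27017,
#                           'database': 'school'}}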
# MONGO_DETAILS = config('MONGO_DETAILS')  # legacy: read the URI from an environment variable via python-decouple
uri = "mongodb://%s:%s@%s:%d/%s" % (
config_set['mongodb']['username'],
config_set['mongodb']['password'],
config_set['mongodb']['host'], config_set['mongodb']['port'],
config_set['mongodb']['database'])
# Note: the URI embeds credentials; avoid logging or printing it in production.
Logger.info(uri)
print(uri)
client = motor.motor_asyncio.AsyncIOMotorClient(uri)
'''
client = motor.motor_asyncio.AsyncIOMotorClient(
config_set['mongodb']['host'],
config_set['mongodb']['port'],
username = config_set['mongodb']['username'],
password = config_set['mongodb']['password'],
authSource = config_set['mongodb']['database'],
)
'''
database = client[config_set['mongodb']['database']]
print(config_set['mongodb']['database'])
student_collection = database.get_collection("students_collection")
# helpers
def student_helper(student) -> dict:
return {
"id": str(student["_id"]),
"fullname": student["fullname"],
"email": student["email"],
"course_of_study": student["course_of_study"],
"year": student["year"],
"GPA": student["gpa"],
}
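# Illustrative example of the serialized shape produced by student_helper
# (values assumed, not taken from the repo):
# {"id": "61f0c...", "fullname": "Ada Lovelace", "email": "ada@example.com",
#  "course_of_study": "Mathematics", "year": 2, "GPA": 4.0}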
# crud operations
# Retrieve all students present in the database
async def retrieve_students():
students = []
async for student in student_collection.find():
students.append(student_helper(student))
return students
# Add a new student to the database
async def add_student(student_data: dict) -> dict:
student = await student_collection.insert_one(student_data)
new_student = await student_collection.find_one({"_id": student.inserted_id})
return student_helper(new_student)
# Retrieve a student with a matching ID
async def retrieve_student(id: str) -> dict:
student = await student_collection.find_one({"_id": ObjectId(id)})
if student:
return student_helper(student)
# Update a student with a matching ID
async def update_student(id: str, data: dict):
# Return false if an empty request body is sent.
if len(data) < 1:
return False
student = await student_collection.find_one({"_id": ObjectId(id)})
if student:
updated_student = await student_collection.update_one(
{"_id": ObjectId(id)}, {"$set": data}
)
if updated_student:
return True
return False
# Delete a student from the database
async def delete_student(id: str):
    student = await student_collection.find_one({"_id": ObjectId(id)})
    if student:
        await student_collection.delete_one({"_id": ObjectId(id)})
        return True
    # Return False explicitly when no matching document exists.
    return False
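# Minimal usage sketch (an assumption, not part of the repo): Motor coroutines
# need a running event loop, so a quick manual test could drive them with
# asyncio:
#
# import asyncio
#
# async def _demo():
#     created = await add_student({"fullname": "Ada Lovelace",
#                                  "email": "ada@example.com",
#                                  "course_of_study": "Mathematics",
#                                  "year": 2, "gpa": 4.0})
#     print(created)
#     print(await retrieve_students())
#
# asyncio.run(_demo())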
| 28.537736 | 81 | 0.677355 | 369 | 3,025 | 5.341463 | 0.271003 | 0.059361 | 0.097412 | 0.102993 | 0.241502 | 0.22273 | 0.178082 | 0.158803 | 0.086758 | 0.086758 | 0 | 0.00041 | 0.19405 | 3,025 | 105 | 82 | 28.809524 | 0.808039 | 0.130579 | 0 | 0.172414 | 0 | 0 | 0.112084 | 0.010508 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017241 | false | 0.017241 | 0.068966 | 0.017241 | 0.224138 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
98428c858cd2ff5cf0e1c7d684b9bde11e18d613 | 340 | py | Python | UDPClient.py | Eadral/socket_test | 97c98f9d1e922867b0260994407d39d5aca42751 | [
"MIT"
] | null | null | null | UDPClient.py | Eadral/socket_test | 97c98f9d1e922867b0260994407d39d5aca42751 | [
"MIT"
] | null | null | null | UDPClient.py | Eadral/socket_test | 97c98f9d1e922867b0260994407d39d5aca42751 | [
"MIT"
] | null | null | null | from socket import *
serverName = "localhost"
serverPort = 12000
# AF_INET + SOCK_DGRAM gives an IPv4 UDP socket.
clientSocket = socket(AF_INET, SOCK_DGRAM)
message = bytes(input("Input lowercase sentence: "), encoding="UTF-8")
clientSocket.sendto(message, (serverName, serverPort))
# Block until the server replies (receive buffer of 2048 bytes).
modifiedMessage, serverAddress = clientSocket.recvfrom(2048)
print(modifiedMessage)
clientSocket.close()
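# Sketch of a matching UDP echo server (an assumption based on the classic
# textbook client/server pairing; the real server lives in a separate file):
#
# from socket import *
# serverSocket = socket(AF_INET, SOCK_DGRAM)
# serverSocket.bind(("", 12000))
# while True:
#     message, clientAddress = serverSocket.recvfrom(2048)
#     serverSocket.sendto(message.upper(), clientAddress)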
| 30.909091 | 70 | 0.794118 | 36 | 340 | 7.444444 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032362 | 0.091176 | 340 | 10 | 71 | 34 | 0.834951 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |