# examples/hist.py — Python, 251 bytes
# Repo: RyanAugust/geoplotlib @ 97ae83fc05d19237db79be66eb577906c35e8db5 (MIT)
# Stars: 1,021 | Issues: 51 | Forks: 196
"""
Example of 2D histogram
"""
import geoplotlib
from geoplotlib.utils import read_csv, BoundingBox
data = read_csv('data/opencellid_dk.csv')
geoplotlib.hist(data, colorscale='sqrt', binsize=8)
geoplotlib.set_bbox(BoundingBox.DK)
geoplotlib.show()
# southwestalerts/southwest.py — Python, 10,083 bytes
# Repo: hoopsbwc34/southwest-alerts @ 39a9e13cb045cf3601b02518fc4e13753cce9ca6 (MIT)
import json
import time

import requests

BASE_URL = 'https://mobile.southwest.com'


class Southwest(object):
    def __init__(self, username, password, headers, cookies, account):
        self._session = _SouthwestSession(username, password, headers, cookies, account)

    def get_upcoming_trips(self):
        # return self._session.get(
        #     '/api/mobile-air-booking/v1/mobile-air-booking/page/view-reservation/{record_locator}?{first_name}&last-name={last_name}'.format(
        #         record_locator=record_locator,
        #         first_name=first_name,
        #         last_name=last_name
        return self._session.get(
            '/api/mobile-misc/v1/mobile-misc/page/upcoming-trips'
        )

    def start_change_flight(self, record_locator, first_name, last_name):
        """Start the flight change process.

        This returns the flight including itinerary."""
        resp = self._session.get(
            '/api/extensions/v1/mobile/reservations/record-locator/{record_locator}?first-name={first_name}&last-name={last_name}&action=CHANGE'.format(
                record_locator=record_locator,
                first_name=first_name,
                last_name=last_name
            ))
        return resp

    def get_available_change_flights(self, record_locator, first_name, last_name, departure_date, origin_airport,
                                     destination_airport):
        """Select a specific flight and continue the checkout process."""
        url = '/api/extensions/v1/mobile/reservations/record-locator/{record_locator}/products?first-name={first_name}&last-name={last_name}&is-senior-passenger=false&trip%5B%5D%5Borigination%5D={origin_airport}&trip%5B%5D%5Bdestination%5D={destination_airport}&trip%5B%5D%5Bdeparture-date%5D={departure_date}'.format(
            record_locator=record_locator,
            first_name=first_name,
            last_name=last_name,
            origin_airport=origin_airport,
            destination_airport=destination_airport,
            departure_date=departure_date
        )
        return self._session.get(url)

    def get_price_change_flight(self, record_locator, first_name, last_name, product_id):
        url = '/api/reservations-api/v1/air-reservations/reservations/record-locator/{record_locator}/prices?first-name={first_name}&last-name={last_name}&product-id%5B%5D={product_id}'.format(
            record_locator=record_locator,
            first_name=first_name,
            last_name=last_name,
            product_id=product_id
        )
        return self._session.get(url)

    def get_cancellation_details(self, record_locator, first_name, last_name):
        # url = '/api/reservations-api/v1/air-reservations/reservations/record-locator/{record_locator}?first-name={first_name}&last-name={last_name}&action=CANCEL'.format(
        url = '/api/mobile-air-booking/v1/mobile-air-booking/page/view-reservation/{record_locator}?first-name={first_name}&last-name={last_name}'.format(
            record_locator=record_locator,
            first_name=first_name,
            last_name=last_name
        )
        temp = self._session.get(url)
        if temp['viewReservationViewPage']['greyBoxMessage'] is not None:
            return None
        url = '/api/mobile-air-booking/v1/mobile-air-booking/page/flights/cancel-bound/{record_locator}?passenger-search-token={token}'.format(
            record_locator=record_locator,
            token=temp['viewReservationViewPage']['_links']['cancelBound']['query']['passenger-search-token']
        )
        temp = self._session.get(url)
        url = '/api/mobile-air-booking/v1/mobile-air-booking/page/flights/cancel/refund-quote/{record_locator}'.format(
            record_locator=record_locator
        )
        payload = temp['viewForCancelBoundPage']['_links']['refundQuote']['body']
        return self._session.post(url, payload)

    def get_available_flights(self, departure_date, origin_airport, destination_airport, currency='Points'):
        url = '/api/mobile-air-shopping/v1/mobile-air-shopping/page/flights/products?origination-airport={origin_airport}&destination-airport={destination_airport}&departure-date={departure_date}&number-adult-passengers=1&currency=PTS'.format(
            origin_airport=origin_airport,
            destination_airport=destination_airport,
            departure_date=departure_date
        )
        #uurl = '{}{}'.format(BASE_URL, url)
        #resp = requests.get(uurl, headers=self._get_headers_all(self.headers))
        #return resp.json()
        return self._session.get(url)

    def get_available_flights_dollars(self, departure_date, origin_airport, destination_airport):
        url = '/api/mobile-air-shopping/v1/mobile-air-shopping/page/flights/products?origination-airport={origin_airport}&destination-airport={destination_airport}&departure-date={departure_date}&number-adult-passengers=1&currency=USD'.format(
            origin_airport=origin_airport,
            destination_airport=destination_airport,
            departure_date=departure_date
        )
        return self._session.get(url)
class _SouthwestSession():
    def __init__(self, username, password, headers, cookies, account):
        self._session = requests.Session()
        self._login(username, password, headers, cookies, account)

    def _login(self, username, password, headers, cookies, account):
        # headers['content-type'] = 'application/vnd.swacorp.com.accounts.login-v1.0+json'
        # headers['user-agent'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36'
        # data = requests.post(BASE_URL + '/api/mobile-misc/v1/mobile-misc/feature/my-account', json={
        #     'accountNumberOrUserName': username, 'password': password},
        #     headers=headers
        # )
        # data = requests.get(BASE_URL + '/api/mobile-misc/v1/mobile-misc/feature/my-account', headers=headers)
        # data = data.json()
        # self.account_number = data['accessTokenDetails']['accountNumber']
        self.account_number = account['customers.userInformation.accountNumber']
        self.access_token = account['access_token']
        self.headers = headers
        self.cookies = cookies

    def get(self, path, success_codes=[200]):
        # Retry up to 7 times, 5 seconds apart; returns None if every
        # attempt comes back with a non-200 status code.
        f = 1
        while f < 8:
            print('.', end='', flush=True)
            time.sleep(5)
            #resp = requests.get(self._get_url(path), headers=self._get_headers_all(self.headers))
            resp = self._session.get(self._get_url(path), headers=self._get_headers_all(self.headers))
            if resp.status_code == 200:
                return self._parsed_response(resp, success_codes=success_codes)
            f = f + 1

    def getb(self, path, success_codes=[200]):
        time.sleep(5)
        resp = self._session.get(self._get_url(path), headers=self._get_headers_brief(self.headers))
        return self._parsed_response(resp, success_codes=success_codes)

    def post(self, path, payload, success_codes=[200]):
        #print(json.dumps(payload))
        tempheaders = self._get_headers_all(self.headers)
        tempheaders['content-type'] = 'application/json'
        resp = self._session.post(self._get_url(path), data=json.dumps(payload),
                                  headers=tempheaders)
        return self._parsed_response(resp, success_codes=success_codes)

    @staticmethod
    def _get_url(path):
        return '{}{}'.format(BASE_URL, path)

    def _get_cookies(self, cookies):
        for x in cookies:
            self._session.cookies.set(x['name'], x['value'], domain=x['domain'], path=x['path'])
        default = self._session.cookies
        return default

    def _get_headers_brief(self, headers):
        default = {
            'token': (self.access_token if hasattr(self, 'access_token') else None),
            'x-api-key': headers['x-api-key'],
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36',
            'origin': None,
            'content-type': None,
            'accept': None,
            'x-requested-with': None,
            'referer': None
        }
        tempheaders = {**headers, **default}
        return tempheaders

    def _get_headers_all(self, headers):
        default = {
            'user-agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36",
        }
        tempheaders = {**headers, **default}
        # tempheaders['authority'] = 'mobile.southwest.com'
        # tempheaders['sec-ch-ua'] = '" Not;A Brand";v="99", "Google Chrome";v="91", "Chromium";v="91"'
        # tempheaders['sec-ch-ua-mobile'] = '?0'
        # tempheaders.pop('origin')
        # tempheaders.pop('x-user-experience-id')
        # #tempheaders.pop('user-agent')
        # tempheaders.pop('content-type')
        # tempheaders.pop('accept')
        # tempheaders.pop('x-requested-with')
        # tempheaders.pop('cookie')
        # tempheaders.pop('referer')
        #return default
        return tempheaders

    @staticmethod
    def _parsed_response(response, success_codes=[200]):
        if response.status_code == 429:
            print(response.text)
            print(
                'Invalid status code received. Expected {}. Received {}. '
                'This error usually indicates that rate limiting has kicked in from Southwest. '
                'Wait and try again later.'.format(
                    success_codes, response.status_code))
        elif response.status_code not in success_codes:
            print(response.text)
            raise Exception(
                'Invalid status code received. Expected {}. Received {}.'.format(success_codes, response.status_code))
        #print(response.json())
        return response.json()
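The `_SouthwestSession.get` method above polls up to seven times with a fixed five-second sleep and silently returns `None` when every attempt fails. The same retry pattern can be factored into a standalone helper that is testable without the network — a sketch, where `get_with_retry` and `flaky` are illustrative names, not part of the original module:

```python
import time


def get_with_retry(fetch, attempts=7, delay=0.0):
    """Call `fetch` up to `attempts` times, sleeping `delay` seconds
    before each try; return the first non-None result, else None."""
    for _ in range(attempts):
        time.sleep(delay)
        result = fetch()
        if result is not None:
            return result
    return None


# A fake endpoint that fails twice, then succeeds on the third call.
calls = {'n': 0}

def flaky():
    calls['n'] += 1
    return 'ok' if calls['n'] >= 3 else None

print(get_with_retry(flaky, attempts=5))  # → ok
```

Separating the retry policy from the HTTP call also makes the silent-`None` behaviour explicit, instead of being an accident of falling off the end of the `while` loop.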
# views.py — Python, 1,897 bytes
# Repo: wbellman/Python-Fate-Example @ a764d3d386b60d4ecfbb59837321e6c30f1c4249 (MIT)
import time
import settings

from printLibs import printl, printc
from inputLibs import get_number


def print_character(character):
    print()
    printc(character["realname"], "-", 40)
    print()
    print(character["name"] + " (" + character["role"] + ") -- " + character["pole"].title() + ":" + str(character["number"]))
    print()
    print("Goal: " + character["goal"])
    print()
    if len(character["notes"]) > 0:
        print("Notes:")
        n = 1
        for note in character["notes"]:
            print("  " + str(n) + ". " + note)
            n = n + 1
        print()
    printc("", "-", 40)
    print()
    print()


def print_characters(characters, short=True):
    if len(characters) < 1:
        print("No characters defined.")
    i = 1
    for character in characters:
        if short:
            print(str(i) + ". " + character["name"].ljust(20) + " (" + character["realname"] + ")")
            i = i + 1
        else:
            print_character(character)


def do_character_list(characters):
    printc("Characters", "-")
    print_characters(characters)
    print()


def do_view_characters(characters):
    printc("Characters", "-")
    print_characters(characters, False)
    input("Enter to continue: ")


def do_select_character(characters):
    print()
    printl("0. Abort")
    do_character_list(characters)
    number = get_number("Character #")
    if number == 0:
        return None
    number = number - 1
    if number >= len(characters):
        print("Invalid character!")
        return do_select_character(characters)
    else:
        return characters[number]


def do_set_multiplier():
    print()
    return get_number("Multiplier")


def do_set_pole():
    print()
    print("0. Abort")
    print("1. " + settings.high_pole)
    print("2. " + settings.low_pole)
    number = get_number("Pole")
    if number == 0:
        return None
    elif number == 1:
        return settings.high_pole
    elif number == 2:
        return settings.low_pole
    else:
        print("Invalid pole.")
        return do_set_pole()
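The bounds checking in `do_select_character` (0 aborts, choices are 1-based, out-of-range re-prompts) can be factored into a pure helper that is easy to unit test — a sketch, where `select_index` is an illustrative name and an out-of-range choice simply returns `None` instead of re-prompting:

```python
def select_index(choice, count):
    """Map a 1-based menu choice onto a 0-based list index.

    Returns None for 0 (the abort option) and for out-of-range
    choices, mirroring the checks in do_select_character.
    """
    if choice == 0:
        return None
    index = choice - 1
    if 0 <= index < count:
        return index
    return None


characters = ['Alice', 'Bob', 'Carol']
print(select_index(2, len(characters)))  # → 1
print(select_index(0, len(characters)))  # → None
```

Keeping the arithmetic in a pure function leaves `do_select_character` responsible only for I/O and the re-prompt loop.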
# core/models.py — Python, 10,068 bytes
# Repo: ditttu/gymkhana-Nominations @ 2a0e993c1b8362c456a9369b0b549d1c809a21df (MIT)
# Stars: 3 | Issues: 6 | Forks: 1
from django.db import models
from django.contrib.auth.models import User
from .choices import *
from datetime import datetime, date
from django.dispatch import receiver
from django.db.models.signals import post_save
from django.utils import timezone


def default_end_date():
    now = datetime.now()
    end = now.replace(day=31, month=3, year=now.year)
    if end > now:
        return end
    else:
        next_year = now.year + 1
        return end.replace(year=next_year)


def session_end_date(session):
    now = date.today()
    next_year = session + 1
    return now.replace(day=31, month=3, year=next_year)


class Session(models.Model):
    start_year = models.IntegerField(unique=True)

    def __str__(self):
        return str(self.start_year)


class Club(models.Model):
    club_name = models.CharField(max_length=100, null=True)
    club_parent = models.ForeignKey('self', null=True, blank=True)

    def __str__(self):
        return self.club_name


class ClubCreate(models.Model):
    club_name = models.CharField(max_length=100, null=True)
    club_parent = models.ForeignKey(Club, null=True, blank=True)
    take_approval = models.ForeignKey('Post', related_name="give_club_approval", on_delete=models.SET_NULL, null=True, blank=True)
    requested_by = models.ForeignKey('Post', related_name="club_request", on_delete=models.SET_NULL, null=True, blank=True)

    def __str__(self):
        return self.club_name


class Post(models.Model):
    post_name = models.CharField(max_length=500, null=True)
    club = models.ForeignKey(Club, on_delete=models.CASCADE, null=True, blank=True)
    tags = models.ManyToManyField(Club, related_name='club_posts', symmetrical=False, blank=True)
    parent = models.ForeignKey('self', on_delete=models.CASCADE, null=True, blank=True)
    elder_brother = models.ForeignKey('self', related_name="little_bro", on_delete=models.CASCADE, null=True, blank=True)
    post_holders = models.ManyToManyField(User, related_name='posts', blank=True)
    post_approvals = models.ManyToManyField('self', related_name='approvals', symmetrical=False, blank=True)
    take_approval = models.ForeignKey('self', related_name="give_approval", on_delete=models.SET_NULL, null=True, blank=True)
    status = models.CharField(max_length=50, choices=POST_STATUS, default='Post created')
    perms = models.CharField(max_length=200, choices=POST_PERMS, default='normal')

    def __str__(self):
        return self.post_name

    def remove_holders(self):
        # PostHistory.end is a DateField, so compare against today's date
        # (comparing a datetime to a date raises TypeError in Python 3).
        for holder in self.post_holders.all():
            history = PostHistory.objects.get(post=self, user=holder)
            if date.today() > history.end:
                self.post_holders.remove(holder)
        return self.post_holders
class PostHistory(models.Model):
    post = models.ForeignKey(Post, on_delete=models.CASCADE, null=True)
    user = models.ForeignKey(User, on_delete=models.CASCADE, null=True)
    start = models.DateField(auto_now_add=True)
    end = models.DateField(null=True, blank=True, editable=True)
    post_session = models.ForeignKey(Session, on_delete=models.CASCADE, null=True)


class Nomination(models.Model):
    name = models.CharField(max_length=200)
    description = models.TextField(max_length=20000, null=True, blank=True)
    nomi_post = models.ForeignKey(Post, null=True)
    nomi_form = models.OneToOneField('forms.Questionnaire', null=True)
    nomi_session = models.IntegerField(null=True)
    status = models.CharField(max_length=50, choices=STATUS, default='Nomination created')
    result_approvals = models.ManyToManyField(Post, related_name='result_approvals', symmetrical=False, blank=True)
    nomi_approvals = models.ManyToManyField(Post, related_name='nomi_approvals', symmetrical=False, blank=True)
    group_status = models.CharField(max_length=50, choices=GROUP_STATUS, default='normal')
    tags = models.ManyToManyField(Club, related_name='club_nomi', symmetrical=False, blank=True)
    opening_date = models.DateField(null=True, blank=True)
    re_opening_date = models.DateField(null=True, blank=True, editable=True)
    deadline = models.DateField(null=True, blank=True, editable=True)
    interview_panel = models.ManyToManyField(User, related_name='panel', symmetrical=False, blank=True)

    def __str__(self):
        return self.name

    def append(self):
        selected = NominationInstance.objects.filter(submission_status=True).filter(nomination=self, status='Accepted')
        st_year = self.nomi_session
        session = Session.objects.filter(start_year=st_year).first()
        if session is None:
            session = Session.objects.create(start_year=st_year)
        self.status = 'Work done'
        self.save()
        for each in selected:
            PostHistory.objects.create(post=self.nomi_post, user=each.user, end=session_end_date(session.start_year),
                                       post_session=session)
            self.nomi_post.post_holders.add(each.user)
        return self.nomi_post.post_holders

    def replace(self):
        for holder in self.nomi_post.post_holders.all():
            history = PostHistory.objects.get(post=self.nomi_post, user=holder)
            history.end = default_end_date()
            history.save()
        self.nomi_post.post_holders.clear()
        self.append()
        return self.nomi_post.post_holders

    def open_to_users(self):
        self.status = 'Nomination out'
        self.opening_date = datetime.now()
        self.save()
        return self.status


class ReopenNomination(models.Model):
    nomi = models.OneToOneField(Nomination, on_delete=models.CASCADE)
    approvals = models.ManyToManyField(Post, symmetrical=False)
    reopening_date = models.DateField(null=True, blank=True)

    def re_open_to_users(self):
        self.nomi.status = 'Interview period and Nomination reopened'
        self.nomi.re_opening_date = datetime.now()
        self.nomi.save()
        return self.nomi


class GroupNomination(models.Model):
    name = models.CharField(max_length=2000, null=True)
    description = models.TextField(max_length=5000, null=True, blank=True)
    nominations = models.ManyToManyField(Nomination, symmetrical=False, blank=True)
    status = models.CharField(max_length=50, choices=G_STATUS, default='created')
    opening_date = models.DateField(null=True, blank=True, default=timezone.now)
    deadline = models.DateField(null=True, blank=True)
    approvals = models.ManyToManyField(Post, related_name='group_approvals', symmetrical=False, blank=True)
    tags = models.ManyToManyField(Club, related_name='club_group', symmetrical=False, blank=True)

    def __str__(self):
        return str(self.name)
class NominationInstance(models.Model):
    nomination = models.ForeignKey('Nomination', on_delete=models.CASCADE, null=True)
    user = models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True)
    status = models.CharField(max_length=20, choices=NOMI_STATUS, null=True, blank=True, default=None)
    interview_status = models.CharField(max_length=20, choices=INTERVIEW_STATUS, null=True, blank=True,
                                        default='Interview Not Done')
    filled_form = models.OneToOneField('forms.FilledForm', null=True, blank=True)
    submission_status = models.BooleanField(default=False)
    timestamp = models.DateField(default=timezone.now)
    edit_time = models.DateField(null=True, default=timezone.now)

    def __str__(self):
        return str(self.user) + ' ' + str(self.id)


class Deratification(models.Model):
    name = models.ForeignKey(User, max_length=30, null=True)
    post = models.ForeignKey(Post, on_delete=models.CASCADE, null=True)
    status = models.CharField(max_length=10, choices=DERATIFICATION, default='safe')
    deratify_approval = models.ForeignKey(Post, related_name='to_deratify', on_delete=models.CASCADE, null=True)


class Commment(models.Model):
    comments = models.TextField(max_length=1000, null=True, blank=True)
    nomi_instance = models.ForeignKey(NominationInstance, on_delete=models.CASCADE, null=True)
    user = models.ForeignKey(User, on_delete=models.CASCADE, null=True)


def user_directory_path(instance, filename):
    # file will be uploaded to MEDIA_ROOT/user_<id>/<filename>
    return 'user_{0}/{1}'.format(instance.user.id, filename)


class UserProfile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    user_img = models.ImageField(upload_to=user_directory_path, null=True, blank=True)
    name = models.CharField(max_length=100, blank=True)
    roll_no = models.IntegerField(null=True)
    programme = models.CharField(max_length=100, choices=PROGRAMME, default='B.Tech')
    department = models.CharField(max_length=200, default='AE')
    hall = models.CharField(max_length=10, default=1)
    room_no = models.CharField(max_length=10, null=True, blank=True)
    contact = models.CharField(max_length=10, null=True, blank=True)

    def __str__(self):
        return str(self.name)

    def image_url(self):
        if self.roll_no:
            return 'http://oa.cc.iitk.ac.in/Oa/Jsp/Photo/' + str(self.roll_no) + '_0.jpg'
        else:
            return '/static/nomi/img/banner.png'
@receiver(post_save, sender=Nomination)
def ensure_parent_in_approvals(sender, **kwargs):
    nomi = kwargs.get('instance')
    post = nomi.nomi_post
    if post:
        parent = post.parent
        nomi.nomi_approvals.add(parent)
        nomi.result_approvals.add(parent)
        nomi.tags.add(post.club)
        nomi.tags.add(parent.club)


@receiver(post_save, sender=Post)
def ensure_parent_in_post_approvals(sender, **kwargs):
    post = kwargs.get('instance')
    if post:
        try:
            parent = post.parent
            post.post_approvals.add(parent)
            post.tags.add(parent.club)
        except:
            print('error parent')
        try:
            big_bro = post.elder_brother
            post.tags.add(big_bro.club)
        except:
            print('error')
        post.tags.add(post.club)
# 2018/day02.py — Python, 819 bytes
# Repo: iKevinY/advent @ d160fb711a0a4d671f53cbd61088117e7ff0276a (MIT)
# Stars: 11 | Forks: 1
import fileinput
from collections import Counter

BOXES = [line.strip() for line in fileinput.input()]

DOUBLES = 0
TRIPLES = 0
COMMON = None

for box_1 in BOXES:
    doubles = 0
    triples = 0

    for char, count in Counter(box_1).items():
        if count == 2:
            doubles += 1
        elif count == 3:
            triples += 1

    if doubles > 0:
        DOUBLES += 1

    if triples > 0:
        TRIPLES += 1

    for box_2 in BOXES:
        if box_1 == box_2:
            continue

        diffs = 0
        for i in range(len(box_1)):
            if box_1[i] != box_2[i]:
                diffs += 1

        if diffs == 1:
            COMMON = ''.join(a for a, b in zip(box_1, box_2) if a == b)

print "Checksum for list of box IDs:", DOUBLES * TRIPLES
print "Common letters for right IDs:", COMMON
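The day 2 solution above reads box IDs from stdin and uses Python 2 print statements. The same logic can be restated as pure Python 3 functions and checked against the example IDs from the Advent of Code 2018 puzzle statement (the function names here are illustrative, not from the original file):

```python
from collections import Counter


def checksum(box_ids):
    """Part 1: multiply the count of IDs containing any letter exactly
    twice by the count containing any letter exactly three times."""
    doubles = sum(1 for b in box_ids if 2 in Counter(b).values())
    triples = sum(1 for b in box_ids if 3 in Counter(b).values())
    return doubles * triples


def common_letters(box_ids):
    """Part 2: for the pair of IDs differing in exactly one position,
    return the letters they have in common."""
    for i, a in enumerate(box_ids):
        for b in box_ids[i + 1:]:
            same = [x for x, y in zip(a, b) if x == y]
            if len(same) == len(a) - 1:
                return ''.join(same)
    return None


print(checksum(['abcdef', 'bababc', 'abbcde', 'abcccd',
                'aabcdd', 'abcdee', 'ababab']))  # → 12
print(common_letters(['abcde', 'fghij', 'klmno', 'pqrst',
                      'fguij', 'axcye', 'wvxyz']))  # → fgij
```

An ID counts at most once per category even if two different letters each appear twice, which is why the functions test membership in `Counter(b).values()` rather than summing matches.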
# SimCalorimetry/EcalSelectiveReadoutProducers/python/ecalDigis_craft_cfi.py — Python, 3,877 bytes
# Repo: ckamtsikis/cmssw @ ea19fe642bb7537cbf58451dcf73aa5fd1b66250 (Apache-2.0)
# Stars: 852 | Issues: 30,371 | Forks: 3,240
import FWCore.ParameterSet.Config as cms
simEcalDigis = cms.EDProducer("EcalSelectiveReadoutProducer",
# Label of input EB and EE digi collections
digiProducer = cms.string('simEcalUnsuppressedDigis'),
# Instance name of input EB digi collections
EBdigiCollection = cms.string(''),
# Instance name of input EB digi collections
EEdigiCollection = cms.string(''),
# Instance name of output EB SR flags collection
EBSrFlagCollection = cms.string('ebSrFlags'),
# Instance name of output EE SR flags collection
EESrFlagCollection = cms.string('eeSrFlags'),
# Instance name of output EB digis collection
EBSRPdigiCollection = cms.string('ebDigis'),
# Instance name of output EE digis collection
EESRPdigiCollection = cms.string('eeDigis'),
# Label name of input ECAL trigger primitive collection
trigPrimProducer = cms.string('simEcalTriggerPrimitiveDigis'),
# Instance name of ECAL trigger primitive collection
trigPrimCollection = cms.string(''),
# Neighbour eta range, neighborhood: (2*deltaEta+1)*(2*deltaPhi+1)
deltaEta = cms.int32(1),
# Neighbouring eta range, neighborhood: (2*deltaEta+1)*(2*deltaPhi+1)
deltaPhi = cms.int32(1),
# Index of time sample (staring from 1) the first DCC weights is implied
ecalDccZs1stSample = cms.int32(3),
# ADC to GeV conversion factor used in ZS filter for EB
ebDccAdcToGeV = cms.double(0.035),
# ADC to GeV conversion factor used in ZS filter for EE
eeDccAdcToGeV = cms.double(0.06),
# DCC ZS FIR weights.
# Default: the value set used by the DCC firmware in CRUZET and CRAFT
dccNormalizedWeights = cms.vdouble(-1.1865, 0.0195, 0.2900, 0.3477, 0.3008,
0.2266),
# Switch to use a symmetric zero suppression (cut on absolute value). For
# studies only; for the time being it is not supported by the hardware.
symetricZS = cms.bool(False),
# ZS energy threshold in GeV to apply to low interest channels of barrel
srpBarrelLowInterestChannelZS = cms.double(3*.035),
# ZS energy threshold in GeV to apply to low interest channels of endcap
srpEndcapLowInterestChannelZS = cms.double(3*0.06),
# ZS energy threshold in GeV to apply to high interest channels of barrel
srpBarrelHighInterestChannelZS = cms.double(-1.e9),
# ZS energy threshold in GeV to apply to high interest channels of endcap
srpEndcapHighInterestChannelZS = cms.double(-1.e9),
#switch to run without trigger primitives. For debug use only
trigPrimBypass = cms.bool(False),
#for debug mode only:
trigPrimBypassLTH = cms.double(1.0),
#for debug mode only:
trigPrimBypassHTH = cms.double(1.0),
#for debug mode only
trigPrimBypassWithPeakFinder = cms.bool(True),
# Mode selection for the "trigger primitive bypass" mode
# 0: TT thresholds applied to the sum of crystal Et's
# 1: TT thresholds applied to the compressed Et from the trigger primitive
# @see trigPrimBypass switch
trigPrimBypassMode = cms.int32(0),
#number of events whose TT and SR flags must be dumped (for debug purposes):
dumpFlags = cms.untracked.int32(0),
#logical flag to write out SrFlags
writeSrFlags = cms.untracked.bool(True),
#switch to apply selective readout decision on the digis and produce
#the "suppressed" digis
produceDigis = cms.untracked.bool(True),
#Trigger Tower Flag to use when a flag is not found from the input
#Trigger Primitive collection. Must be one of the following values:
# 0: low interest, 1: mid interest, 3: high interest
# 4: forced low interest, 5: forced mid interest, 7: forced high interest
defaultTtf_ = cms.int32(4),
# SR->action flag map
actions = cms.vint32(1, 3, 3, 3, 5, 7, 7, 7)
)
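The zero-suppression filter configured above can be illustrated with a short standalone sketch (not CMSSW code; the function `zs_keep` and the sample pulses are hypothetical): the DCC applies the FIR weights to consecutive time samples starting at `ecalDccZs1stSample`, converts the weighted sum to GeV, and keeps the channel if the energy estimate exceeds the threshold (or if its absolute value does, when the symmetric-ZS option is enabled).

```python
# Standalone illustration of the DCC zero-suppression FIR filter configured
# above. Not CMSSW code: zs_keep and the sample pulses are hypothetical.

WEIGHTS = [-1.1865, 0.0195, 0.2900, 0.3477, 0.3008, 0.2266]  # dccNormalizedWeights
FIRST_SAMPLE = 3          # ecalDccZs1stSample (1-based index of first weighted sample)
EB_ADC_TO_GEV = 0.035     # ebDccAdcToGeV
EB_THRESHOLD = 3 * 0.035  # srpBarrelLowInterestChannelZS

def zs_keep(adc_samples, threshold, symmetric=False):
    """Return True if the channel survives zero suppression."""
    start = FIRST_SAMPLE - 1
    # Weighted sum over the time samples covered by the FIR weights
    amplitude = sum(w * s for w, s in zip(WEIGHTS, adc_samples[start:]))
    energy = amplitude * EB_ADC_TO_GEV
    return (abs(energy) if symmetric else energy) > threshold

# The weights nearly sum to zero, so a flat pedestal almost cancels:
flat = [200] * 10
# ...while a pulse on top of the pedestal yields a large amplitude:
pulse = [200, 200, 200, 250, 400, 350, 300, 250, 200, 200]
```

Note that the default weight set sums to nearly zero, so a constant pedestal contributes almost nothing to the amplitude estimate and the filter needs no explicit pedestal subtraction.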
| 36.233645 | 79 | 0.685324 | 506 | 3,877 | 5.247036 | 0.379447 | 0.030508 | 0.036911 | 0.030132 | 0.227495 | 0.187571 | 0.187571 | 0.160452 | 0.140113 | 0.109981 | 0 | 0.033378 | 0.234976 | 3,877 | 106 | 80 | 36.575472 | 0.861767 | 0.513026 | 0 | 0 | 0 | 0 | 0.060705 | 0.04336 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.147059 | 0.029412 | 0 | 0.029412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ad24ff76711efc4ad761466ca54751008e0bd6ce | 1,176 | py | Python | datasource/interface.py | YAmikep/datasource | 6c8d72bd299aa0a9e2880228f0f39d2b8721b146 | [
"MIT"
] | 1 | 2018-06-16T11:33:56.000Z | 2018-06-16T11:33:56.000Z | datasource/interface.py | YAmikep/datasource | 6c8d72bd299aa0a9e2880228f0f39d2b8721b146 | [
"MIT"
] | 1 | 2020-03-24T17:32:45.000Z | 2020-03-24T17:32:45.000Z | datasource/interface.py | YAmikep/datasource | 6c8d72bd299aa0a9e2880228f0f39d2b8721b146 | [
"MIT"
] | 2 | 2018-06-16T11:37:34.000Z | 2020-07-30T17:56:54.000Z | MAX_MEMORY = 5 * 1024 * 2 ** 10 # 5 MB
BUFFER_SIZE = 1 * 512 * 2 ** 10 # 512 KB
class DataSourceInterface(object):
"""Provides a uniform API regardless of how the data should be fetched."""
def __init__(self, target, preload=False, **kwargs):
raise NotImplementedError()
@property
def is_loaded(self):
raise NotImplementedError()
def load(self):
"""
Loads the data if not already loaded.
"""
raise NotImplementedError()
def size(self, force_load=False):
"""
Returns the size of the data.
If the datasource has not loaded the data yet (see the preload argument of the constructor), the reported size defaults to 0.
:param boolean force_load: if set to True, triggers data loading if not done yet.
"""
raise NotImplementedError()
def get_reader(self):
"""
Returns an independent reader (with the read and seek methods).
The data will be automatically loaded if not done yet.
"""
raise NotImplementedError()
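A minimal in-memory implementation of this interface might look like the following sketch (hypothetical: `BytesDataSource` is not part of the package, and concrete subclasses shipped with it may differ in detail):

```python
import io

class BytesDataSource:  # in practice: class BytesDataSource(DataSourceInterface)
    """Hypothetical in-memory implementation of the DataSourceInterface API."""

    def __init__(self, target, preload=False, **kwargs):
        self._target = target
        self._data = None
        if preload:
            self.load()

    @property
    def is_loaded(self):
        return self._data is not None

    def load(self):
        if self._data is None:
            self._data = bytes(self._target)

    def size(self, force_load=False):
        if not self.is_loaded:
            if not force_load:
                return 0  # size defaults to 0 until the data is loaded
            self.load()
        return len(self._data)

    def get_reader(self):
        self.load()  # data is loaded automatically if not done yet
        return io.BytesIO(self._data)  # each reader keeps its own position
```

Returning a fresh `io.BytesIO` per call satisfies the independence requirement of `get_reader`: seeking one reader does not move another.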
| 30.153846 | 127 | 0.62415 | 153 | 1,176 | 4.72549 | 0.509804 | 0.048409 | 0.112033 | 0.049793 | 0.146611 | 0.146611 | 0 | 0 | 0 | 0 | 0 | 0.024331 | 0.30102 | 1,176 | 38 | 128 | 30.947368 | 0.855231 | 0.465986 | 0 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.357143 | false | 0 | 0 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ad276315f893a288238d66dae8d8e08290828c3c | 85,143 | py | Python | pyjsg/parser/jsgParser.py | hsolbrig/pyjsg | 5ef46d9af6a94a0cd0e91ebf8b22f61c17e78429 | [
"CC0-1.0"
] | 3 | 2017-07-23T11:11:23.000Z | 2020-11-30T15:36:51.000Z | pyjsg/parser/jsgParser.py | hsolbrig/pyjsg | 5ef46d9af6a94a0cd0e91ebf8b22f61c17e78429 | [
"CC0-1.0"
] | 15 | 2018-01-05T17:18:34.000Z | 2021-12-13T17:40:25.000Z | try2/lib/python3.9/site-packages/pyjsg/parser/jsgParser.py | diatomsRcool/eco-kg | 4251f42ca2ab353838a39b640cb97593db76d4f4 | [
"BSD-3-Clause"
] | 1 | 2021-01-18T10:32:56.000Z | 2021-01-18T10:32:56.000Z | # Generated from jsgParser.g4 by ANTLR 4.9
# encoding: utf-8
from antlr4 import *
from io import StringIO
import sys
if sys.version_info[1] > 5:
from typing import TextIO
else:
from typing.io import TextIO
def serializedATN():
with StringIO() as buf:
buf.write("\3\u608b\ua72a\u8133\ub9ed\u417c\u3be7\u7786\u5964\3\'")
buf.write("\u0143\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6\t\6\4\7\t\7")
buf.write("\4\b\t\b\4\t\t\t\4\n\t\n\4\13\t\13\4\f\t\f\4\r\t\r\4\16")
buf.write("\t\16\4\17\t\17\4\20\t\20\4\21\t\21\4\22\t\22\4\23\t\23")
buf.write("\4\24\t\24\4\25\t\25\4\26\t\26\4\27\t\27\4\30\t\30\4\31")
buf.write("\t\31\4\32\t\32\4\33\t\33\4\34\t\34\4\35\t\35\4\36\t\36")
buf.write("\4\37\t\37\4 \t \4!\t!\4\"\t\"\3\2\5\2F\n\2\3\2\7\2I\n")
buf.write("\2\f\2\16\2L\13\2\3\2\7\2O\n\2\f\2\16\2R\13\2\3\2\5\2")
buf.write("U\n\2\3\2\3\2\3\3\3\3\3\3\5\3\\\n\3\3\3\3\3\3\4\3\4\6")
buf.write("\4b\n\4\r\4\16\4c\3\5\3\5\7\5h\n\5\f\5\16\5k\13\5\3\5")
buf.write("\3\5\3\6\3\6\3\6\3\6\5\6s\n\6\3\7\3\7\3\7\3\b\3\b\5\b")
buf.write("z\n\b\3\b\3\b\3\b\5\b\177\n\b\3\b\3\b\3\b\5\b\u0084\n")
buf.write("\b\3\b\3\b\5\b\u0088\n\b\3\t\3\t\6\t\u008c\n\t\r\t\16")
buf.write("\t\u008d\3\t\3\t\7\t\u0092\n\t\f\t\16\t\u0095\13\t\3\t")
buf.write("\3\t\5\t\u0099\n\t\5\t\u009b\n\t\3\n\7\n\u009e\n\n\f\n")
buf.write("\16\n\u00a1\13\n\3\13\3\13\5\13\u00a5\n\13\3\f\3\f\3\r")
buf.write("\3\r\3\r\3\r\5\r\u00ad\n\r\3\r\3\r\5\r\u00b1\n\r\3\r\3")
buf.write("\r\6\r\u00b5\n\r\r\r\16\r\u00b6\3\r\3\r\3\r\3\r\5\r\u00bd")
buf.write("\n\r\5\r\u00bf\n\r\3\16\3\16\3\17\3\17\3\17\3\20\3\20")
buf.write("\3\20\3\20\7\20\u00ca\n\20\f\20\16\20\u00cd\13\20\3\20")
buf.write("\5\20\u00d0\n\20\3\20\3\20\3\21\3\21\3\21\3\21\3\21\3")
buf.write("\22\3\22\3\22\3\22\3\22\7\22\u00de\n\22\f\22\16\22\u00e1")
buf.write("\13\22\3\22\3\22\3\23\3\23\3\24\3\24\5\24\u00e9\n\24\3")
buf.write("\25\3\25\3\25\3\25\3\25\3\25\3\25\3\25\3\25\5\25\u00f4")
buf.write("\n\25\3\26\3\26\3\26\6\26\u00f9\n\26\r\26\16\26\u00fa")
buf.write("\3\27\3\27\3\30\3\30\3\30\3\30\3\30\3\30\3\30\5\30\u0106")
buf.write("\n\30\5\30\u0108\n\30\3\30\5\30\u010b\n\30\3\31\3\31\7")
buf.write("\31\u010f\n\31\f\31\16\31\u0112\13\31\3\32\3\32\3\32\3")
buf.write("\32\3\32\3\33\3\33\5\33\u011b\n\33\3\34\3\34\3\34\7\34")
buf.write("\u0120\n\34\f\34\16\34\u0123\13\34\3\35\3\35\5\35\u0127")
buf.write("\n\35\3\36\6\36\u012a\n\36\r\36\16\36\u012b\3\37\3\37")
buf.write("\5\37\u0130\n\37\3\37\3\37\5\37\u0134\n\37\5\37\u0136")
buf.write("\n\37\3 \3 \3 \3 \3!\3!\3!\5!\u013f\n!\3\"\3\"\3\"\2\2")
buf.write("#\2\4\6\b\n\f\16\20\22\24\26\30\32\34\36 \"$&(*,.\60\62")
buf.write("\64\668:<>@B\2\7\4\2\3\3\7\7\3\2\4\5\4\2\7\7\f\22\4\2")
buf.write("\6\6\32\32\4\2\5\5$$\2\u0154\2E\3\2\2\2\4X\3\2\2\2\6_")
buf.write("\3\2\2\2\be\3\2\2\2\nr\3\2\2\2\ft\3\2\2\2\16\u0087\3\2")
buf.write("\2\2\20\u009a\3\2\2\2\22\u009f\3\2\2\2\24\u00a2\3\2\2")
buf.write("\2\26\u00a6\3\2\2\2\30\u00be\3\2\2\2\32\u00c0\3\2\2\2")
buf.write("\34\u00c2\3\2\2\2\36\u00c5\3\2\2\2 \u00d3\3\2\2\2\"\u00d8")
buf.write("\3\2\2\2$\u00e4\3\2\2\2&\u00e8\3\2\2\2(\u00f3\3\2\2\2")
buf.write("*\u00f5\3\2\2\2,\u00fc\3\2\2\2.\u010a\3\2\2\2\60\u010c")
buf.write("\3\2\2\2\62\u0113\3\2\2\2\64\u0118\3\2\2\2\66\u011c\3")
buf.write("\2\2\28\u0126\3\2\2\2:\u0129\3\2\2\2<\u0135\3\2\2\2>\u0137")
buf.write("\3\2\2\2@\u013e\3\2\2\2B\u0140\3\2\2\2DF\5\4\3\2ED\3\2")
buf.write("\2\2EF\3\2\2\2FJ\3\2\2\2GI\5\b\5\2HG\3\2\2\2IL\3\2\2\2")
buf.write("JH\3\2\2\2JK\3\2\2\2KP\3\2\2\2LJ\3\2\2\2MO\5\n\6\2NM\3")
buf.write("\2\2\2OR\3\2\2\2PN\3\2\2\2PQ\3\2\2\2QT\3\2\2\2RP\3\2\2")
buf.write("\2SU\5\60\31\2TS\3\2\2\2TU\3\2\2\2UV\3\2\2\2VW\7\2\2\3")
buf.write("W\3\3\2\2\2XY\7\t\2\2Y[\5\32\16\2Z\\\5\6\4\2[Z\3\2\2\2")
buf.write("[\\\3\2\2\2\\]\3\2\2\2]^\7\25\2\2^\5\3\2\2\2_a\7\26\2")
buf.write("\2`b\5,\27\2a`\3\2\2\2bc\3\2\2\2ca\3\2\2\2cd\3\2\2\2d")
buf.write("\7\3\2\2\2ei\7\n\2\2fh\5\32\16\2gf\3\2\2\2hk\3\2\2\2i")
buf.write("g\3\2\2\2ij\3\2\2\2jl\3\2\2\2ki\3\2\2\2lm\7\25\2\2m\t")
buf.write("\3\2\2\2ns\5\f\7\2os\5\34\17\2ps\5 \21\2qs\5\"\22\2rn")
buf.write("\3\2\2\2ro\3\2\2\2rp\3\2\2\2rq\3\2\2\2s\13\3\2\2\2tu\7")
buf.write("\4\2\2uv\5\16\b\2v\r\3\2\2\2wy\7\27\2\2xz\5\20\t\2yx\3")
buf.write("\2\2\2yz\3\2\2\2z{\3\2\2\2{\u0088\7\30\2\2|~\7\27\2\2")
buf.write("}\177\t\2\2\2~}\3\2\2\2~\177\3\2\2\2\177\u0080\3\2\2\2")
buf.write("\u0080\u0081\7\13\2\2\u0081\u0083\5&\24\2\u0082\u0084")
buf.write("\5.\30\2\u0083\u0082\3\2\2\2\u0083\u0084\3\2\2\2\u0084")
buf.write("\u0085\3\2\2\2\u0085\u0086\7\30\2\2\u0086\u0088\3\2\2")
buf.write("\2\u0087w\3\2\2\2\u0087|\3\2\2\2\u0088\17\3\2\2\2\u0089")
buf.write("\u009b\7\31\2\2\u008a\u008c\5\24\13\2\u008b\u008a\3\2")
buf.write("\2\2\u008c\u008d\3\2\2\2\u008d\u008b\3\2\2\2\u008d\u008e")
buf.write("\3\2\2\2\u008e\u0093\3\2\2\2\u008f\u0090\7\37\2\2\u0090")
buf.write("\u0092\5\22\n\2\u0091\u008f\3\2\2\2\u0092\u0095\3\2\2")
buf.write("\2\u0093\u0091\3\2\2\2\u0093\u0094\3\2\2\2\u0094\u0098")
buf.write("\3\2\2\2\u0095\u0093\3\2\2\2\u0096\u0097\7\37\2\2\u0097")
buf.write("\u0099\5\26\f\2\u0098\u0096\3\2\2\2\u0098\u0099\3\2\2")
buf.write("\2\u0099\u009b\3\2\2\2\u009a\u0089\3\2\2\2\u009a\u008b")
buf.write("\3\2\2\2\u009b\21\3\2\2\2\u009c\u009e\5\24\13\2\u009d")
buf.write("\u009c\3\2\2\2\u009e\u00a1\3\2\2\2\u009f\u009d\3\2\2\2")
buf.write("\u009f\u00a0\3\2\2\2\u00a0\23\3\2\2\2\u00a1\u009f\3\2")
buf.write("\2\2\u00a2\u00a4\5\30\r\2\u00a3\u00a5\7\31\2\2\u00a4\u00a3")
buf.write("\3\2\2\2\u00a4\u00a5\3\2\2\2\u00a5\25\3\2\2\2\u00a6\u00a7")
buf.write("\7\31\2\2\u00a7\27\3\2\2\2\u00a8\u00a9\5\32\16\2\u00a9")
buf.write("\u00aa\7 \2\2\u00aa\u00ac\5&\24\2\u00ab\u00ad\5.\30\2")
buf.write("\u00ac\u00ab\3\2\2\2\u00ac\u00ad\3\2\2\2\u00ad\u00bf\3")
buf.write("\2\2\2\u00ae\u00b0\5,\27\2\u00af\u00b1\5.\30\2\u00b0\u00af")
buf.write("\3\2\2\2\u00b0\u00b1\3\2\2\2\u00b1\u00bf\3\2\2\2\u00b2")
buf.write("\u00b4\7\35\2\2\u00b3\u00b5\5\32\16\2\u00b4\u00b3\3\2")
buf.write("\2\2\u00b5\u00b6\3\2\2\2\u00b6\u00b4\3\2\2\2\u00b6\u00b7")
buf.write("\3\2\2\2\u00b7\u00b8\3\2\2\2\u00b8\u00b9\7\36\2\2\u00b9")
buf.write("\u00ba\7 \2\2\u00ba\u00bc\5&\24\2\u00bb\u00bd\5.\30\2")
buf.write("\u00bc\u00bb\3\2\2\2\u00bc\u00bd\3\2\2\2\u00bd\u00bf\3")
buf.write("\2\2\2\u00be\u00a8\3\2\2\2\u00be\u00ae\3\2\2\2\u00be\u00b2")
buf.write("\3\2\2\2\u00bf\31\3\2\2\2\u00c0\u00c1\t\3\2\2\u00c1\33")
buf.write("\3\2\2\2\u00c2\u00c3\7\4\2\2\u00c3\u00c4\5\36\20\2\u00c4")
buf.write("\35\3\2\2\2\u00c5\u00c6\7\23\2\2\u00c6\u00cb\5&\24\2\u00c7")
buf.write("\u00c8\7\37\2\2\u00c8\u00ca\5&\24\2\u00c9\u00c7\3\2\2")
buf.write("\2\u00ca\u00cd\3\2\2\2\u00cb\u00c9\3\2\2\2\u00cb\u00cc")
buf.write("\3\2\2\2\u00cc\u00cf\3\2\2\2\u00cd\u00cb\3\2\2\2\u00ce")
buf.write("\u00d0\5.\30\2\u00cf\u00ce\3\2\2\2\u00cf\u00d0\3\2\2\2")
buf.write("\u00d0\u00d1\3\2\2\2\u00d1\u00d2\7\24\2\2\u00d2\37\3\2")
buf.write("\2\2\u00d3\u00d4\7\4\2\2\u00d4\u00d5\7!\2\2\u00d5\u00d6")
buf.write("\5\20\t\2\u00d6\u00d7\7\25\2\2\u00d7!\3\2\2\2\u00d8\u00d9")
buf.write("\7\4\2\2\u00d9\u00da\7!\2\2\u00da\u00df\5(\25\2\u00db")
buf.write("\u00dc\7\37\2\2\u00dc\u00de\5(\25\2\u00dd\u00db\3\2\2")
buf.write("\2\u00de\u00e1\3\2\2\2\u00df\u00dd\3\2\2\2\u00df\u00e0")
buf.write("\3\2\2\2\u00e0\u00e2\3\2\2\2\u00e1\u00df\3\2\2\2\u00e2")
buf.write("\u00e3\7\25\2\2\u00e3#\3\2\2\2\u00e4\u00e5\t\4\2\2\u00e5")
buf.write("%\3\2\2\2\u00e6\u00e9\5,\27\2\u00e7\u00e9\5(\25\2\u00e8")
buf.write("\u00e6\3\2\2\2\u00e8\u00e7\3\2\2\2\u00e9\'\3\2\2\2\u00ea")
buf.write("\u00f4\7\3\2\2\u00eb\u00f4\7\5\2\2\u00ec\u00f4\5$\23\2")
buf.write("\u00ed\u00f4\5\16\b\2\u00ee\u00f4\5\36\20\2\u00ef\u00f0")
buf.write("\7\35\2\2\u00f0\u00f1\5*\26\2\u00f1\u00f2\7\36\2\2\u00f2")
buf.write("\u00f4\3\2\2\2\u00f3\u00ea\3\2\2\2\u00f3\u00eb\3\2\2\2")
buf.write("\u00f3\u00ec\3\2\2\2\u00f3\u00ed\3\2\2\2\u00f3\u00ee\3")
buf.write("\2\2\2\u00f3\u00ef\3\2\2\2\u00f4)\3\2\2\2\u00f5\u00f8")
buf.write("\5&\24\2\u00f6\u00f7\7\37\2\2\u00f7\u00f9\5&\24\2\u00f8")
buf.write("\u00f6\3\2\2\2\u00f9\u00fa\3\2\2\2\u00fa\u00f8\3\2\2\2")
buf.write("\u00fa\u00fb\3\2\2\2\u00fb+\3\2\2\2\u00fc\u00fd\7\4\2")
buf.write("\2\u00fd-\3\2\2\2\u00fe\u010b\7\33\2\2\u00ff\u010b\7\32")
buf.write("\2\2\u0100\u010b\7\34\2\2\u0101\u0102\7\27\2\2\u0102\u0107")
buf.write("\7\6\2\2\u0103\u0105\7\31\2\2\u0104\u0106\t\5\2\2\u0105")
buf.write("\u0104\3\2\2\2\u0105\u0106\3\2\2\2\u0106\u0108\3\2\2\2")
buf.write("\u0107\u0103\3\2\2\2\u0107\u0108\3\2\2\2\u0108\u0109\3")
buf.write("\2\2\2\u0109\u010b\7\30\2\2\u010a\u00fe\3\2\2\2\u010a")
buf.write("\u00ff\3\2\2\2\u010a\u0100\3\2\2\2\u010a\u0101\3\2\2\2")
buf.write("\u010b/\3\2\2\2\u010c\u0110\7\b\2\2\u010d\u010f\5\62\32")
buf.write("\2\u010e\u010d\3\2\2\2\u010f\u0112\3\2\2\2\u0110\u010e")
buf.write("\3\2\2\2\u0110\u0111\3\2\2\2\u0111\61\3\2\2\2\u0112\u0110")
buf.write("\3\2\2\2\u0113\u0114\7$\2\2\u0114\u0115\7 \2\2\u0115\u0116")
buf.write("\5\64\33\2\u0116\u0117\7\25\2\2\u0117\63\3\2\2\2\u0118")
buf.write("\u011a\5\66\34\2\u0119\u011b\5$\23\2\u011a\u0119\3\2\2")
buf.write("\2\u011a\u011b\3\2\2\2\u011b\65\3\2\2\2\u011c\u0121\5")
buf.write("8\35\2\u011d\u011e\7\37\2\2\u011e\u0120\58\35\2\u011f")
buf.write("\u011d\3\2\2\2\u0120\u0123\3\2\2\2\u0121\u011f\3\2\2\2")
buf.write("\u0121\u0122\3\2\2\2\u0122\67\3\2\2\2\u0123\u0121\3\2")
buf.write("\2\2\u0124\u0127\5:\36\2\u0125\u0127\3\2\2\2\u0126\u0124")
buf.write("\3\2\2\2\u0126\u0125\3\2\2\2\u01279\3\2\2\2\u0128\u012a")
buf.write("\5<\37\2\u0129\u0128\3\2\2\2\u012a\u012b\3\2\2\2\u012b")
buf.write("\u0129\3\2\2\2\u012b\u012c\3\2\2\2\u012c;\3\2\2\2\u012d")
buf.write("\u012f\5@!\2\u012e\u0130\5.\30\2\u012f\u012e\3\2\2\2\u012f")
buf.write("\u0130\3\2\2\2\u0130\u0136\3\2\2\2\u0131\u0133\5> \2\u0132")
buf.write("\u0134\5.\30\2\u0133\u0132\3\2\2\2\u0133\u0134\3\2\2\2")
buf.write("\u0134\u0136\3\2\2\2\u0135\u012d\3\2\2\2\u0135\u0131\3")
buf.write("\2\2\2\u0136=\3\2\2\2\u0137\u0138\7\35\2\2\u0138\u0139")
buf.write("\5\66\34\2\u0139\u013a\7\36\2\2\u013a?\3\2\2\2\u013b\u013f")
buf.write("\5B\"\2\u013c\u013f\7%\2\2\u013d\u013f\7\7\2\2\u013e\u013b")
buf.write("\3\2\2\2\u013e\u013c\3\2\2\2\u013e\u013d\3\2\2\2\u013f")
buf.write("A\3\2\2\2\u0140\u0141\t\6\2\2\u0141C\3\2\2\2+EJPT[cir")
buf.write("y~\u0083\u0087\u008d\u0093\u0098\u009a\u009f\u00a4\u00ac")
buf.write("\u00b0\u00b6\u00bc\u00be\u00cb\u00cf\u00df\u00e8\u00f3")
buf.write("\u00fa\u0105\u0107\u010a\u0110\u011a\u0121\u0126\u012b")
buf.write("\u012f\u0133\u0135\u013e")
return buf.getvalue()
class jsgParser ( Parser ):
grammarFileName = "jsgParser.g4"
atn = ATNDeserializer().deserialize(serializedATN())
decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
sharedContextCache = PredictionContextCache()
literalNames = [ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
"<INVALID>", "<INVALID>", "'@terminals'", "'.TYPE'",
"'.IGNORE'", "'->'", "<INVALID>", "<INVALID>", "<INVALID>",
"<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
"'['", "']'", "<INVALID>", "<INVALID>", "<INVALID>",
"<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
"<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
"<INVALID>", "'='" ]
symbolicNames = [ "<INVALID>", "LEXER_ID_REF", "ID", "STRING", "INT",
"ANY", "TERMINALS", "TYPE", "IGNORE", "MAPSTO", "JSON_STRING",
"JSON_NUMBER", "JSON_INT", "JSON_BOOL", "JSON_NULL",
"JSON_ARRAY", "JSON_OBJECT", "OBRACKET", "CBRACKET",
"SEMI", "DASH", "OBRACE", "CBRACE", "COMMA", "STAR",
"QMARK", "PLUS", "OPREN", "CPREN", "BAR", "COLON",
"EQUALS", "PASS", "COMMENT", "LEXER_ID", "LEXER_CHAR_SET",
"LEXER_PASS", "LEXER_COMMENT" ]
RULE_doc = 0
RULE_typeDirective = 1
RULE_typeExceptions = 2
RULE_ignoreDirective = 3
RULE_grammarElt = 4
RULE_objectDef = 5
RULE_objectExpr = 6
RULE_membersDef = 7
RULE_altMemberDef = 8
RULE_member = 9
RULE_lastComma = 10
RULE_pairDef = 11
RULE_name = 12
RULE_arrayDef = 13
RULE_arrayExpr = 14
RULE_objectMacro = 15
RULE_valueTypeMacro = 16
RULE_builtinValueType = 17
RULE_valueType = 18
RULE_nonRefValueType = 19
RULE_typeAlternatives = 20
RULE_idref = 21
RULE_ebnfSuffix = 22
RULE_lexerRules = 23
RULE_lexerRuleSpec = 24
RULE_lexerRuleBlock = 25
RULE_lexerAltList = 26
RULE_lexerAlt = 27
RULE_lexerElements = 28
RULE_lexerElement = 29
RULE_lexerBlock = 30
RULE_lexerAtom = 31
RULE_lexerTerminal = 32
ruleNames = [ "doc", "typeDirective", "typeExceptions", "ignoreDirective",
"grammarElt", "objectDef", "objectExpr", "membersDef",
"altMemberDef", "member", "lastComma", "pairDef", "name",
"arrayDef", "arrayExpr", "objectMacro", "valueTypeMacro",
"builtinValueType", "valueType", "nonRefValueType", "typeAlternatives",
"idref", "ebnfSuffix", "lexerRules", "lexerRuleSpec",
"lexerRuleBlock", "lexerAltList", "lexerAlt", "lexerElements",
"lexerElement", "lexerBlock", "lexerAtom", "lexerTerminal" ]
EOF = Token.EOF
LEXER_ID_REF=1
ID=2
STRING=3
INT=4
ANY=5
TERMINALS=6
TYPE=7
IGNORE=8
MAPSTO=9
JSON_STRING=10
JSON_NUMBER=11
JSON_INT=12
JSON_BOOL=13
JSON_NULL=14
JSON_ARRAY=15
JSON_OBJECT=16
OBRACKET=17
CBRACKET=18
SEMI=19
DASH=20
OBRACE=21
CBRACE=22
COMMA=23
STAR=24
QMARK=25
PLUS=26
OPREN=27
CPREN=28
BAR=29
COLON=30
EQUALS=31
PASS=32
COMMENT=33
LEXER_ID=34
LEXER_CHAR_SET=35
LEXER_PASS=36
LEXER_COMMENT=37
def __init__(self, input:TokenStream, output:TextIO = sys.stdout):
super().__init__(input, output)
self.checkVersion("4.9")
self._interp = ParserATNSimulator(self, self.atn, self.decisionsToDFA, self.sharedContextCache)
self._predicates = None
class DocContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def EOF(self):
return self.getToken(jsgParser.EOF, 0)
def typeDirective(self):
return self.getTypedRuleContext(jsgParser.TypeDirectiveContext,0)
def ignoreDirective(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.IgnoreDirectiveContext)
else:
return self.getTypedRuleContext(jsgParser.IgnoreDirectiveContext,i)
def grammarElt(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.GrammarEltContext)
else:
return self.getTypedRuleContext(jsgParser.GrammarEltContext,i)
def lexerRules(self):
return self.getTypedRuleContext(jsgParser.LexerRulesContext,0)
def getRuleIndex(self):
return jsgParser.RULE_doc
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitDoc" ):
return visitor.visitDoc(self)
else:
return visitor.visitChildren(self)
def doc(self):
localctx = jsgParser.DocContext(self, self._ctx, self.state)
self.enterRule(localctx, 0, self.RULE_doc)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 67
self._errHandler.sync(self)
_la = self._input.LA(1)
if _la==jsgParser.TYPE:
self.state = 66
self.typeDirective()
self.state = 72
self._errHandler.sync(self)
_la = self._input.LA(1)
while _la==jsgParser.IGNORE:
self.state = 69
self.ignoreDirective()
self.state = 74
self._errHandler.sync(self)
_la = self._input.LA(1)
self.state = 78
self._errHandler.sync(self)
_la = self._input.LA(1)
while _la==jsgParser.ID:
self.state = 75
self.grammarElt()
self.state = 80
self._errHandler.sync(self)
_la = self._input.LA(1)
self.state = 82
self._errHandler.sync(self)
_la = self._input.LA(1)
if _la==jsgParser.TERMINALS:
self.state = 81
self.lexerRules()
self.state = 84
self.match(jsgParser.EOF)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class TypeDirectiveContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def TYPE(self):
return self.getToken(jsgParser.TYPE, 0)
def name(self):
return self.getTypedRuleContext(jsgParser.NameContext,0)
def SEMI(self):
return self.getToken(jsgParser.SEMI, 0)
def typeExceptions(self):
return self.getTypedRuleContext(jsgParser.TypeExceptionsContext,0)
def getRuleIndex(self):
return jsgParser.RULE_typeDirective
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitTypeDirective" ):
return visitor.visitTypeDirective(self)
else:
return visitor.visitChildren(self)
def typeDirective(self):
localctx = jsgParser.TypeDirectiveContext(self, self._ctx, self.state)
self.enterRule(localctx, 2, self.RULE_typeDirective)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 86
self.match(jsgParser.TYPE)
self.state = 87
self.name()
self.state = 89
self._errHandler.sync(self)
_la = self._input.LA(1)
if _la==jsgParser.DASH:
self.state = 88
self.typeExceptions()
self.state = 91
self.match(jsgParser.SEMI)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class TypeExceptionsContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def DASH(self):
return self.getToken(jsgParser.DASH, 0)
def idref(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.IdrefContext)
else:
return self.getTypedRuleContext(jsgParser.IdrefContext,i)
def getRuleIndex(self):
return jsgParser.RULE_typeExceptions
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitTypeExceptions" ):
return visitor.visitTypeExceptions(self)
else:
return visitor.visitChildren(self)
def typeExceptions(self):
localctx = jsgParser.TypeExceptionsContext(self, self._ctx, self.state)
self.enterRule(localctx, 4, self.RULE_typeExceptions)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 93
self.match(jsgParser.DASH)
self.state = 95
self._errHandler.sync(self)
_la = self._input.LA(1)
while True:
self.state = 94
self.idref()
self.state = 97
self._errHandler.sync(self)
_la = self._input.LA(1)
if not (_la==jsgParser.ID):
break
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class IgnoreDirectiveContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def IGNORE(self):
return self.getToken(jsgParser.IGNORE, 0)
def SEMI(self):
return self.getToken(jsgParser.SEMI, 0)
def name(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.NameContext)
else:
return self.getTypedRuleContext(jsgParser.NameContext,i)
def getRuleIndex(self):
return jsgParser.RULE_ignoreDirective
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitIgnoreDirective" ):
return visitor.visitIgnoreDirective(self)
else:
return visitor.visitChildren(self)
def ignoreDirective(self):
localctx = jsgParser.IgnoreDirectiveContext(self, self._ctx, self.state)
self.enterRule(localctx, 6, self.RULE_ignoreDirective)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 99
self.match(jsgParser.IGNORE)
self.state = 103
self._errHandler.sync(self)
_la = self._input.LA(1)
while _la==jsgParser.ID or _la==jsgParser.STRING:
self.state = 100
self.name()
self.state = 105
self._errHandler.sync(self)
_la = self._input.LA(1)
self.state = 106
self.match(jsgParser.SEMI)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class GrammarEltContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def objectDef(self):
return self.getTypedRuleContext(jsgParser.ObjectDefContext,0)
def arrayDef(self):
return self.getTypedRuleContext(jsgParser.ArrayDefContext,0)
def objectMacro(self):
return self.getTypedRuleContext(jsgParser.ObjectMacroContext,0)
def valueTypeMacro(self):
return self.getTypedRuleContext(jsgParser.ValueTypeMacroContext,0)
def getRuleIndex(self):
return jsgParser.RULE_grammarElt
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitGrammarElt" ):
return visitor.visitGrammarElt(self)
else:
return visitor.visitChildren(self)
def grammarElt(self):
localctx = jsgParser.GrammarEltContext(self, self._ctx, self.state)
self.enterRule(localctx, 8, self.RULE_grammarElt)
try:
self.state = 112
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,7,self._ctx)
if la_ == 1:
self.enterOuterAlt(localctx, 1)
self.state = 108
self.objectDef()
pass
elif la_ == 2:
self.enterOuterAlt(localctx, 2)
self.state = 109
self.arrayDef()
pass
elif la_ == 3:
self.enterOuterAlt(localctx, 3)
self.state = 110
self.objectMacro()
pass
elif la_ == 4:
self.enterOuterAlt(localctx, 4)
self.state = 111
self.valueTypeMacro()
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ObjectDefContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def ID(self):
return self.getToken(jsgParser.ID, 0)
def objectExpr(self):
return self.getTypedRuleContext(jsgParser.ObjectExprContext,0)
def getRuleIndex(self):
return jsgParser.RULE_objectDef
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitObjectDef" ):
return visitor.visitObjectDef(self)
else:
return visitor.visitChildren(self)
def objectDef(self):
localctx = jsgParser.ObjectDefContext(self, self._ctx, self.state)
self.enterRule(localctx, 10, self.RULE_objectDef)
try:
self.enterOuterAlt(localctx, 1)
self.state = 114
self.match(jsgParser.ID)
self.state = 115
self.objectExpr()
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ObjectExprContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def OBRACE(self):
return self.getToken(jsgParser.OBRACE, 0)
def CBRACE(self):
return self.getToken(jsgParser.CBRACE, 0)
def membersDef(self):
return self.getTypedRuleContext(jsgParser.MembersDefContext,0)
def MAPSTO(self):
return self.getToken(jsgParser.MAPSTO, 0)
def valueType(self):
return self.getTypedRuleContext(jsgParser.ValueTypeContext,0)
def ebnfSuffix(self):
return self.getTypedRuleContext(jsgParser.EbnfSuffixContext,0)
def LEXER_ID_REF(self):
return self.getToken(jsgParser.LEXER_ID_REF, 0)
def ANY(self):
return self.getToken(jsgParser.ANY, 0)
def getRuleIndex(self):
return jsgParser.RULE_objectExpr
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitObjectExpr" ):
return visitor.visitObjectExpr(self)
else:
return visitor.visitChildren(self)
def objectExpr(self):
localctx = jsgParser.ObjectExprContext(self, self._ctx, self.state)
self.enterRule(localctx, 12, self.RULE_objectExpr)
self._la = 0 # Token type
try:
self.state = 133
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,11,self._ctx)
if la_ == 1:
self.enterOuterAlt(localctx, 1)
self.state = 117
self.match(jsgParser.OBRACE)
self.state = 119
self._errHandler.sync(self)
_la = self._input.LA(1)
if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << jsgParser.ID) | (1 << jsgParser.STRING) | (1 << jsgParser.COMMA) | (1 << jsgParser.OPREN))) != 0):
self.state = 118
self.membersDef()
self.state = 121
self.match(jsgParser.CBRACE)
pass
elif la_ == 2:
self.enterOuterAlt(localctx, 2)
self.state = 122
self.match(jsgParser.OBRACE)
self.state = 124
self._errHandler.sync(self)
_la = self._input.LA(1)
if _la==jsgParser.LEXER_ID_REF or _la==jsgParser.ANY:
self.state = 123
_la = self._input.LA(1)
if not(_la==jsgParser.LEXER_ID_REF or _la==jsgParser.ANY):
self._errHandler.recoverInline(self)
else:
self._errHandler.reportMatch(self)
self.consume()
self.state = 126
self.match(jsgParser.MAPSTO)
self.state = 127
self.valueType()
self.state = 129
self._errHandler.sync(self)
_la = self._input.LA(1)
if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << jsgParser.OBRACE) | (1 << jsgParser.STAR) | (1 << jsgParser.QMARK) | (1 << jsgParser.PLUS))) != 0):
self.state = 128
self.ebnfSuffix()
self.state = 131
self.match(jsgParser.CBRACE)
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class MembersDefContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def COMMA(self):
return self.getToken(jsgParser.COMMA, 0)
def member(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.MemberContext)
else:
return self.getTypedRuleContext(jsgParser.MemberContext,i)
def BAR(self, i:int=None):
if i is None:
return self.getTokens(jsgParser.BAR)
else:
return self.getToken(jsgParser.BAR, i)
def altMemberDef(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.AltMemberDefContext)
else:
return self.getTypedRuleContext(jsgParser.AltMemberDefContext,i)
def lastComma(self):
return self.getTypedRuleContext(jsgParser.LastCommaContext,0)
def getRuleIndex(self):
return jsgParser.RULE_membersDef
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitMembersDef" ):
return visitor.visitMembersDef(self)
else:
return visitor.visitChildren(self)
def membersDef(self):
localctx = jsgParser.MembersDefContext(self, self._ctx, self.state)
self.enterRule(localctx, 14, self.RULE_membersDef)
self._la = 0 # Token type
try:
self.state = 152
self._errHandler.sync(self)
token = self._input.LA(1)
if token in [jsgParser.COMMA]:
self.enterOuterAlt(localctx, 1)
self.state = 135
self.match(jsgParser.COMMA)
pass
elif token in [jsgParser.ID, jsgParser.STRING, jsgParser.OPREN]:
self.enterOuterAlt(localctx, 2)
self.state = 137
self._errHandler.sync(self)
_la = self._input.LA(1)
while True:
self.state = 136
self.member()
self.state = 139
self._errHandler.sync(self)
_la = self._input.LA(1)
if not ((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << jsgParser.ID) | (1 << jsgParser.STRING) | (1 << jsgParser.OPREN))) != 0)):
break
self.state = 145
self._errHandler.sync(self)
_alt = self._interp.adaptivePredict(self._input,13,self._ctx)
while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
if _alt==1:
self.state = 141
self.match(jsgParser.BAR)
self.state = 142
self.altMemberDef()
self.state = 147
self._errHandler.sync(self)
_alt = self._interp.adaptivePredict(self._input,13,self._ctx)
self.state = 150
self._errHandler.sync(self)
_la = self._input.LA(1)
if _la==jsgParser.BAR:
self.state = 148
self.match(jsgParser.BAR)
self.state = 149
self.lastComma()
pass
else:
raise NoViableAltException(self)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class AltMemberDefContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def member(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.MemberContext)
else:
return self.getTypedRuleContext(jsgParser.MemberContext,i)
def getRuleIndex(self):
return jsgParser.RULE_altMemberDef
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitAltMemberDef" ):
return visitor.visitAltMemberDef(self)
else:
return visitor.visitChildren(self)
def altMemberDef(self):
localctx = jsgParser.AltMemberDefContext(self, self._ctx, self.state)
self.enterRule(localctx, 16, self.RULE_altMemberDef)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 157
self._errHandler.sync(self)
_la = self._input.LA(1)
while (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << jsgParser.ID) | (1 << jsgParser.STRING) | (1 << jsgParser.OPREN))) != 0):
self.state = 154
self.member()
self.state = 159
self._errHandler.sync(self)
_la = self._input.LA(1)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class MemberContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def pairDef(self):
return self.getTypedRuleContext(jsgParser.PairDefContext,0)
def COMMA(self):
return self.getToken(jsgParser.COMMA, 0)
def getRuleIndex(self):
return jsgParser.RULE_member
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitMember" ):
return visitor.visitMember(self)
else:
return visitor.visitChildren(self)
def member(self):
localctx = jsgParser.MemberContext(self, self._ctx, self.state)
self.enterRule(localctx, 18, self.RULE_member)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 160
self.pairDef()
self.state = 162
self._errHandler.sync(self)
_la = self._input.LA(1)
if _la==jsgParser.COMMA:
self.state = 161
self.match(jsgParser.COMMA)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class LastCommaContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def COMMA(self):
return self.getToken(jsgParser.COMMA, 0)
def getRuleIndex(self):
return jsgParser.RULE_lastComma
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitLastComma" ):
return visitor.visitLastComma(self)
else:
return visitor.visitChildren(self)
def lastComma(self):
localctx = jsgParser.LastCommaContext(self, self._ctx, self.state)
self.enterRule(localctx, 20, self.RULE_lastComma)
try:
self.enterOuterAlt(localctx, 1)
self.state = 164
self.match(jsgParser.COMMA)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class PairDefContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def name(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.NameContext)
else:
return self.getTypedRuleContext(jsgParser.NameContext,i)
def COLON(self):
return self.getToken(jsgParser.COLON, 0)
def valueType(self):
return self.getTypedRuleContext(jsgParser.ValueTypeContext,0)
def ebnfSuffix(self):
return self.getTypedRuleContext(jsgParser.EbnfSuffixContext,0)
def idref(self):
return self.getTypedRuleContext(jsgParser.IdrefContext,0)
def OPREN(self):
return self.getToken(jsgParser.OPREN, 0)
def CPREN(self):
return self.getToken(jsgParser.CPREN, 0)
def getRuleIndex(self):
return jsgParser.RULE_pairDef
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitPairDef" ):
return visitor.visitPairDef(self)
else:
return visitor.visitChildren(self)
def pairDef(self):
localctx = jsgParser.PairDefContext(self, self._ctx, self.state)
self.enterRule(localctx, 22, self.RULE_pairDef)
self._la = 0 # Token type
try:
self.state = 188
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,22,self._ctx)
if la_ == 1:
self.enterOuterAlt(localctx, 1)
self.state = 166
self.name()
self.state = 167
self.match(jsgParser.COLON)
self.state = 168
self.valueType()
self.state = 170
self._errHandler.sync(self)
_la = self._input.LA(1)
if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << jsgParser.OBRACE) | (1 << jsgParser.STAR) | (1 << jsgParser.QMARK) | (1 << jsgParser.PLUS))) != 0):
self.state = 169
self.ebnfSuffix()
pass
elif la_ == 2:
self.enterOuterAlt(localctx, 2)
self.state = 172
self.idref()
self.state = 174
self._errHandler.sync(self)
_la = self._input.LA(1)
if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << jsgParser.OBRACE) | (1 << jsgParser.STAR) | (1 << jsgParser.QMARK) | (1 << jsgParser.PLUS))) != 0):
self.state = 173
self.ebnfSuffix()
pass
elif la_ == 3:
self.enterOuterAlt(localctx, 3)
self.state = 176
self.match(jsgParser.OPREN)
self.state = 178
self._errHandler.sync(self)
_la = self._input.LA(1)
while True:
self.state = 177
self.name()
self.state = 180
self._errHandler.sync(self)
_la = self._input.LA(1)
if not (_la==jsgParser.ID or _la==jsgParser.STRING):
break
self.state = 182
self.match(jsgParser.CPREN)
self.state = 183
self.match(jsgParser.COLON)
self.state = 184
self.valueType()
self.state = 186
self._errHandler.sync(self)
_la = self._input.LA(1)
if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << jsgParser.OBRACE) | (1 << jsgParser.STAR) | (1 << jsgParser.QMARK) | (1 << jsgParser.PLUS))) != 0):
self.state = 185
self.ebnfSuffix()
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class NameContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def ID(self):
return self.getToken(jsgParser.ID, 0)
def STRING(self):
return self.getToken(jsgParser.STRING, 0)
def getRuleIndex(self):
return jsgParser.RULE_name
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitName" ):
return visitor.visitName(self)
else:
return visitor.visitChildren(self)
def name(self):
localctx = jsgParser.NameContext(self, self._ctx, self.state)
self.enterRule(localctx, 24, self.RULE_name)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 190
_la = self._input.LA(1)
if not(_la==jsgParser.ID or _la==jsgParser.STRING):
self._errHandler.recoverInline(self)
else:
self._errHandler.reportMatch(self)
self.consume()
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ArrayDefContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def ID(self):
return self.getToken(jsgParser.ID, 0)
def arrayExpr(self):
return self.getTypedRuleContext(jsgParser.ArrayExprContext,0)
def getRuleIndex(self):
return jsgParser.RULE_arrayDef
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitArrayDef" ):
return visitor.visitArrayDef(self)
else:
return visitor.visitChildren(self)
def arrayDef(self):
localctx = jsgParser.ArrayDefContext(self, self._ctx, self.state)
self.enterRule(localctx, 26, self.RULE_arrayDef)
try:
self.enterOuterAlt(localctx, 1)
self.state = 192
self.match(jsgParser.ID)
self.state = 193
self.arrayExpr()
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ArrayExprContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def OBRACKET(self):
return self.getToken(jsgParser.OBRACKET, 0)
def valueType(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.ValueTypeContext)
else:
return self.getTypedRuleContext(jsgParser.ValueTypeContext,i)
def CBRACKET(self):
return self.getToken(jsgParser.CBRACKET, 0)
def BAR(self, i:int=None):
if i is None:
return self.getTokens(jsgParser.BAR)
else:
return self.getToken(jsgParser.BAR, i)
def ebnfSuffix(self):
return self.getTypedRuleContext(jsgParser.EbnfSuffixContext,0)
def getRuleIndex(self):
return jsgParser.RULE_arrayExpr
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitArrayExpr" ):
return visitor.visitArrayExpr(self)
else:
return visitor.visitChildren(self)
def arrayExpr(self):
localctx = jsgParser.ArrayExprContext(self, self._ctx, self.state)
self.enterRule(localctx, 28, self.RULE_arrayExpr)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 195
self.match(jsgParser.OBRACKET)
self.state = 196
self.valueType()
self.state = 201
self._errHandler.sync(self)
_la = self._input.LA(1)
while _la==jsgParser.BAR:
self.state = 197
self.match(jsgParser.BAR)
self.state = 198
self.valueType()
self.state = 203
self._errHandler.sync(self)
_la = self._input.LA(1)
self.state = 205
self._errHandler.sync(self)
_la = self._input.LA(1)
if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << jsgParser.OBRACE) | (1 << jsgParser.STAR) | (1 << jsgParser.QMARK) | (1 << jsgParser.PLUS))) != 0):
self.state = 204
self.ebnfSuffix()
self.state = 207
self.match(jsgParser.CBRACKET)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ObjectMacroContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def ID(self):
return self.getToken(jsgParser.ID, 0)
def EQUALS(self):
return self.getToken(jsgParser.EQUALS, 0)
def membersDef(self):
return self.getTypedRuleContext(jsgParser.MembersDefContext,0)
def SEMI(self):
return self.getToken(jsgParser.SEMI, 0)
def getRuleIndex(self):
return jsgParser.RULE_objectMacro
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitObjectMacro" ):
return visitor.visitObjectMacro(self)
else:
return visitor.visitChildren(self)
def objectMacro(self):
localctx = jsgParser.ObjectMacroContext(self, self._ctx, self.state)
self.enterRule(localctx, 30, self.RULE_objectMacro)
try:
self.enterOuterAlt(localctx, 1)
self.state = 209
self.match(jsgParser.ID)
self.state = 210
self.match(jsgParser.EQUALS)
self.state = 211
self.membersDef()
self.state = 212
self.match(jsgParser.SEMI)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ValueTypeMacroContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def ID(self):
return self.getToken(jsgParser.ID, 0)
def EQUALS(self):
return self.getToken(jsgParser.EQUALS, 0)
def nonRefValueType(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.NonRefValueTypeContext)
else:
return self.getTypedRuleContext(jsgParser.NonRefValueTypeContext,i)
def SEMI(self):
return self.getToken(jsgParser.SEMI, 0)
def BAR(self, i:int=None):
if i is None:
return self.getTokens(jsgParser.BAR)
else:
return self.getToken(jsgParser.BAR, i)
def getRuleIndex(self):
return jsgParser.RULE_valueTypeMacro
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitValueTypeMacro" ):
return visitor.visitValueTypeMacro(self)
else:
return visitor.visitChildren(self)
def valueTypeMacro(self):
localctx = jsgParser.ValueTypeMacroContext(self, self._ctx, self.state)
self.enterRule(localctx, 32, self.RULE_valueTypeMacro)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 214
self.match(jsgParser.ID)
self.state = 215
self.match(jsgParser.EQUALS)
self.state = 216
self.nonRefValueType()
self.state = 221
self._errHandler.sync(self)
_la = self._input.LA(1)
while _la==jsgParser.BAR:
self.state = 217
self.match(jsgParser.BAR)
self.state = 218
self.nonRefValueType()
self.state = 223
self._errHandler.sync(self)
_la = self._input.LA(1)
self.state = 224
self.match(jsgParser.SEMI)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class BuiltinValueTypeContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def JSON_STRING(self):
return self.getToken(jsgParser.JSON_STRING, 0)
def JSON_NUMBER(self):
return self.getToken(jsgParser.JSON_NUMBER, 0)
def JSON_INT(self):
return self.getToken(jsgParser.JSON_INT, 0)
def JSON_BOOL(self):
return self.getToken(jsgParser.JSON_BOOL, 0)
def JSON_NULL(self):
return self.getToken(jsgParser.JSON_NULL, 0)
def JSON_ARRAY(self):
return self.getToken(jsgParser.JSON_ARRAY, 0)
def JSON_OBJECT(self):
return self.getToken(jsgParser.JSON_OBJECT, 0)
def ANY(self):
return self.getToken(jsgParser.ANY, 0)
def getRuleIndex(self):
return jsgParser.RULE_builtinValueType
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitBuiltinValueType" ):
return visitor.visitBuiltinValueType(self)
else:
return visitor.visitChildren(self)
def builtinValueType(self):
localctx = jsgParser.BuiltinValueTypeContext(self, self._ctx, self.state)
self.enterRule(localctx, 34, self.RULE_builtinValueType)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 226
_la = self._input.LA(1)
if not((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << jsgParser.ANY) | (1 << jsgParser.JSON_STRING) | (1 << jsgParser.JSON_NUMBER) | (1 << jsgParser.JSON_INT) | (1 << jsgParser.JSON_BOOL) | (1 << jsgParser.JSON_NULL) | (1 << jsgParser.JSON_ARRAY) | (1 << jsgParser.JSON_OBJECT))) != 0)):
self._errHandler.recoverInline(self)
else:
self._errHandler.reportMatch(self)
self.consume()
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class ValueTypeContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def idref(self):
return self.getTypedRuleContext(jsgParser.IdrefContext,0)
def nonRefValueType(self):
return self.getTypedRuleContext(jsgParser.NonRefValueTypeContext,0)
def getRuleIndex(self):
return jsgParser.RULE_valueType
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitValueType" ):
return visitor.visitValueType(self)
else:
return visitor.visitChildren(self)
def valueType(self):
localctx = jsgParser.ValueTypeContext(self, self._ctx, self.state)
self.enterRule(localctx, 36, self.RULE_valueType)
try:
self.state = 230
self._errHandler.sync(self)
token = self._input.LA(1)
if token in [jsgParser.ID]:
self.enterOuterAlt(localctx, 1)
self.state = 228
self.idref()
pass
elif token in [jsgParser.LEXER_ID_REF, jsgParser.STRING, jsgParser.ANY, jsgParser.JSON_STRING, jsgParser.JSON_NUMBER, jsgParser.JSON_INT, jsgParser.JSON_BOOL, jsgParser.JSON_NULL, jsgParser.JSON_ARRAY, jsgParser.JSON_OBJECT, jsgParser.OBRACKET, jsgParser.OBRACE, jsgParser.OPREN]:
self.enterOuterAlt(localctx, 2)
self.state = 229
self.nonRefValueType()
pass
else:
raise NoViableAltException(self)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class NonRefValueTypeContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def LEXER_ID_REF(self):
return self.getToken(jsgParser.LEXER_ID_REF, 0)
def STRING(self):
return self.getToken(jsgParser.STRING, 0)
def builtinValueType(self):
return self.getTypedRuleContext(jsgParser.BuiltinValueTypeContext,0)
def objectExpr(self):
return self.getTypedRuleContext(jsgParser.ObjectExprContext,0)
def arrayExpr(self):
return self.getTypedRuleContext(jsgParser.ArrayExprContext,0)
def OPREN(self):
return self.getToken(jsgParser.OPREN, 0)
def typeAlternatives(self):
return self.getTypedRuleContext(jsgParser.TypeAlternativesContext,0)
def CPREN(self):
return self.getToken(jsgParser.CPREN, 0)
def getRuleIndex(self):
return jsgParser.RULE_nonRefValueType
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitNonRefValueType" ):
return visitor.visitNonRefValueType(self)
else:
return visitor.visitChildren(self)
def nonRefValueType(self):
localctx = jsgParser.NonRefValueTypeContext(self, self._ctx, self.state)
self.enterRule(localctx, 38, self.RULE_nonRefValueType)
try:
self.state = 241
self._errHandler.sync(self)
token = self._input.LA(1)
if token in [jsgParser.LEXER_ID_REF]:
self.enterOuterAlt(localctx, 1)
self.state = 232
self.match(jsgParser.LEXER_ID_REF)
pass
elif token in [jsgParser.STRING]:
self.enterOuterAlt(localctx, 2)
self.state = 233
self.match(jsgParser.STRING)
pass
elif token in [jsgParser.ANY, jsgParser.JSON_STRING, jsgParser.JSON_NUMBER, jsgParser.JSON_INT, jsgParser.JSON_BOOL, jsgParser.JSON_NULL, jsgParser.JSON_ARRAY, jsgParser.JSON_OBJECT]:
self.enterOuterAlt(localctx, 3)
self.state = 234
self.builtinValueType()
pass
elif token in [jsgParser.OBRACE]:
self.enterOuterAlt(localctx, 4)
self.state = 235
self.objectExpr()
pass
elif token in [jsgParser.OBRACKET]:
self.enterOuterAlt(localctx, 5)
self.state = 236
self.arrayExpr()
pass
elif token in [jsgParser.OPREN]:
self.enterOuterAlt(localctx, 6)
self.state = 237
self.match(jsgParser.OPREN)
self.state = 238
self.typeAlternatives()
self.state = 239
self.match(jsgParser.CPREN)
pass
else:
raise NoViableAltException(self)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class TypeAlternativesContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def valueType(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.ValueTypeContext)
else:
return self.getTypedRuleContext(jsgParser.ValueTypeContext,i)
def BAR(self, i:int=None):
if i is None:
return self.getTokens(jsgParser.BAR)
else:
return self.getToken(jsgParser.BAR, i)
def getRuleIndex(self):
return jsgParser.RULE_typeAlternatives
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitTypeAlternatives" ):
return visitor.visitTypeAlternatives(self)
else:
return visitor.visitChildren(self)
def typeAlternatives(self):
localctx = jsgParser.TypeAlternativesContext(self, self._ctx, self.state)
self.enterRule(localctx, 40, self.RULE_typeAlternatives)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 243
self.valueType()
self.state = 246
self._errHandler.sync(self)
_la = self._input.LA(1)
while True:
self.state = 244
self.match(jsgParser.BAR)
self.state = 245
self.valueType()
self.state = 248
self._errHandler.sync(self)
_la = self._input.LA(1)
if not (_la==jsgParser.BAR):
break
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class IdrefContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def ID(self):
return self.getToken(jsgParser.ID, 0)
def getRuleIndex(self):
return jsgParser.RULE_idref
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitIdref" ):
return visitor.visitIdref(self)
else:
return visitor.visitChildren(self)
def idref(self):
localctx = jsgParser.IdrefContext(self, self._ctx, self.state)
self.enterRule(localctx, 42, self.RULE_idref)
try:
self.enterOuterAlt(localctx, 1)
self.state = 250
self.match(jsgParser.ID)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class EbnfSuffixContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def QMARK(self):
return self.getToken(jsgParser.QMARK, 0)
def STAR(self):
return self.getToken(jsgParser.STAR, 0)
def PLUS(self):
return self.getToken(jsgParser.PLUS, 0)
def OBRACE(self):
return self.getToken(jsgParser.OBRACE, 0)
def INT(self, i:int=None):
if i is None:
return self.getTokens(jsgParser.INT)
else:
return self.getToken(jsgParser.INT, i)
def CBRACE(self):
return self.getToken(jsgParser.CBRACE, 0)
def COMMA(self):
return self.getToken(jsgParser.COMMA, 0)
def getRuleIndex(self):
return jsgParser.RULE_ebnfSuffix
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitEbnfSuffix" ):
return visitor.visitEbnfSuffix(self)
else:
return visitor.visitChildren(self)
def ebnfSuffix(self):
localctx = jsgParser.EbnfSuffixContext(self, self._ctx, self.state)
self.enterRule(localctx, 44, self.RULE_ebnfSuffix)
self._la = 0 # Token type
try:
self.state = 264
self._errHandler.sync(self)
token = self._input.LA(1)
if token in [jsgParser.QMARK]:
self.enterOuterAlt(localctx, 1)
self.state = 252
self.match(jsgParser.QMARK)
pass
elif token in [jsgParser.STAR]:
self.enterOuterAlt(localctx, 2)
self.state = 253
self.match(jsgParser.STAR)
pass
elif token in [jsgParser.PLUS]:
self.enterOuterAlt(localctx, 3)
self.state = 254
self.match(jsgParser.PLUS)
pass
elif token in [jsgParser.OBRACE]:
self.enterOuterAlt(localctx, 4)
self.state = 255
self.match(jsgParser.OBRACE)
self.state = 256
self.match(jsgParser.INT)
self.state = 261
self._errHandler.sync(self)
_la = self._input.LA(1)
if _la==jsgParser.COMMA:
self.state = 257
self.match(jsgParser.COMMA)
self.state = 259
self._errHandler.sync(self)
_la = self._input.LA(1)
if _la==jsgParser.INT or _la==jsgParser.STAR:
self.state = 258
_la = self._input.LA(1)
if not(_la==jsgParser.INT or _la==jsgParser.STAR):
self._errHandler.recoverInline(self)
else:
self._errHandler.reportMatch(self)
self.consume()
self.state = 263
self.match(jsgParser.CBRACE)
pass
else:
raise NoViableAltException(self)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class LexerRulesContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def TERMINALS(self):
return self.getToken(jsgParser.TERMINALS, 0)
def lexerRuleSpec(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.LexerRuleSpecContext)
else:
return self.getTypedRuleContext(jsgParser.LexerRuleSpecContext,i)
def getRuleIndex(self):
return jsgParser.RULE_lexerRules
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitLexerRules" ):
return visitor.visitLexerRules(self)
else:
return visitor.visitChildren(self)
def lexerRules(self):
localctx = jsgParser.LexerRulesContext(self, self._ctx, self.state)
self.enterRule(localctx, 46, self.RULE_lexerRules)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 266
self.match(jsgParser.TERMINALS)
self.state = 270
self._errHandler.sync(self)
_la = self._input.LA(1)
while _la==jsgParser.LEXER_ID:
self.state = 267
self.lexerRuleSpec()
self.state = 272
self._errHandler.sync(self)
_la = self._input.LA(1)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class LexerRuleSpecContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def LEXER_ID(self):
return self.getToken(jsgParser.LEXER_ID, 0)
def COLON(self):
return self.getToken(jsgParser.COLON, 0)
def lexerRuleBlock(self):
return self.getTypedRuleContext(jsgParser.LexerRuleBlockContext,0)
def SEMI(self):
return self.getToken(jsgParser.SEMI, 0)
def getRuleIndex(self):
return jsgParser.RULE_lexerRuleSpec
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitLexerRuleSpec" ):
return visitor.visitLexerRuleSpec(self)
else:
return visitor.visitChildren(self)
def lexerRuleSpec(self):
localctx = jsgParser.LexerRuleSpecContext(self, self._ctx, self.state)
self.enterRule(localctx, 48, self.RULE_lexerRuleSpec)
try:
self.enterOuterAlt(localctx, 1)
self.state = 273
self.match(jsgParser.LEXER_ID)
self.state = 274
self.match(jsgParser.COLON)
self.state = 275
self.lexerRuleBlock()
self.state = 276
self.match(jsgParser.SEMI)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class LexerRuleBlockContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def lexerAltList(self):
return self.getTypedRuleContext(jsgParser.LexerAltListContext,0)
def builtinValueType(self):
return self.getTypedRuleContext(jsgParser.BuiltinValueTypeContext,0)
def getRuleIndex(self):
return jsgParser.RULE_lexerRuleBlock
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitLexerRuleBlock" ):
return visitor.visitLexerRuleBlock(self)
else:
return visitor.visitChildren(self)
def lexerRuleBlock(self):
localctx = jsgParser.LexerRuleBlockContext(self, self._ctx, self.state)
self.enterRule(localctx, 50, self.RULE_lexerRuleBlock)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 278
self.lexerAltList()
self.state = 280
self._errHandler.sync(self)
_la = self._input.LA(1)
if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << jsgParser.ANY) | (1 << jsgParser.JSON_STRING) | (1 << jsgParser.JSON_NUMBER) | (1 << jsgParser.JSON_INT) | (1 << jsgParser.JSON_BOOL) | (1 << jsgParser.JSON_NULL) | (1 << jsgParser.JSON_ARRAY) | (1 << jsgParser.JSON_OBJECT))) != 0):
self.state = 279
self.builtinValueType()
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class LexerAltListContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def lexerAlt(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.LexerAltContext)
else:
return self.getTypedRuleContext(jsgParser.LexerAltContext,i)
def BAR(self, i:int=None):
if i is None:
return self.getTokens(jsgParser.BAR)
else:
return self.getToken(jsgParser.BAR, i)
def getRuleIndex(self):
return jsgParser.RULE_lexerAltList
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitLexerAltList" ):
return visitor.visitLexerAltList(self)
else:
return visitor.visitChildren(self)
def lexerAltList(self):
localctx = jsgParser.LexerAltListContext(self, self._ctx, self.state)
self.enterRule(localctx, 52, self.RULE_lexerAltList)
self._la = 0 # Token type
try:
self.enterOuterAlt(localctx, 1)
self.state = 282
self.lexerAlt()
self.state = 287
self._errHandler.sync(self)
_la = self._input.LA(1)
while _la==jsgParser.BAR:
self.state = 283
self.match(jsgParser.BAR)
self.state = 284
self.lexerAlt()
self.state = 289
self._errHandler.sync(self)
_la = self._input.LA(1)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class LexerAltContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def lexerElements(self):
return self.getTypedRuleContext(jsgParser.LexerElementsContext,0)
def getRuleIndex(self):
return jsgParser.RULE_lexerAlt
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitLexerAlt" ):
return visitor.visitLexerAlt(self)
else:
return visitor.visitChildren(self)
def lexerAlt(self):
localctx = jsgParser.LexerAltContext(self, self._ctx, self.state)
self.enterRule(localctx, 54, self.RULE_lexerAlt)
try:
self.state = 292
self._errHandler.sync(self)
la_ = self._interp.adaptivePredict(self._input,35,self._ctx)
if la_ == 1:
self.enterOuterAlt(localctx, 1)
self.state = 290
self.lexerElements()
pass
elif la_ == 2:
self.enterOuterAlt(localctx, 2)
pass
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
class LexerElementsContext(ParserRuleContext):
def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
super().__init__(parent, invokingState)
self.parser = parser
def lexerElement(self, i:int=None):
if i is None:
return self.getTypedRuleContexts(jsgParser.LexerElementContext)
else:
return self.getTypedRuleContext(jsgParser.LexerElementContext,i)
def getRuleIndex(self):
return jsgParser.RULE_lexerElements
def accept(self, visitor:ParseTreeVisitor):
if hasattr( visitor, "visitLexerElements" ):
return visitor.visitLexerElements(self)
else:
return visitor.visitChildren(self)
def lexerElements(self):
localctx = jsgParser.LexerElementsContext(self, self._ctx, self.state)
self.enterRule(localctx, 56, self.RULE_lexerElements)
try:
self.enterOuterAlt(localctx, 1)
self.state = 295
self._errHandler.sync(self)
_alt = 1
while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
if _alt == 1:
self.state = 294
self.lexerElement()
else:
raise NoViableAltException(self)
self.state = 297
self._errHandler.sync(self)
_alt = self._interp.adaptivePredict(self._input,36,self._ctx)
except RecognitionException as re:
localctx.exception = re
self._errHandler.reportError(self, re)
self._errHandler.recover(self, re)
finally:
self.exitRule()
return localctx
    class LexerElementContext(ParserRuleContext):

        def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
            super().__init__(parent, invokingState)
            self.parser = parser

        def lexerAtom(self):
            return self.getTypedRuleContext(jsgParser.LexerAtomContext,0)

        def ebnfSuffix(self):
            return self.getTypedRuleContext(jsgParser.EbnfSuffixContext,0)

        def lexerBlock(self):
            return self.getTypedRuleContext(jsgParser.LexerBlockContext,0)

        def getRuleIndex(self):
            return jsgParser.RULE_lexerElement

        def accept(self, visitor:ParseTreeVisitor):
            if hasattr( visitor, "visitLexerElement" ):
                return visitor.visitLexerElement(self)
            else:
                return visitor.visitChildren(self)

    def lexerElement(self):
        localctx = jsgParser.LexerElementContext(self, self._ctx, self.state)
        self.enterRule(localctx, 58, self.RULE_lexerElement)
        self._la = 0 # Token type
        try:
            self.state = 307
            self._errHandler.sync(self)
            token = self._input.LA(1)
            if token in [jsgParser.STRING, jsgParser.ANY, jsgParser.LEXER_ID, jsgParser.LEXER_CHAR_SET]:
                self.enterOuterAlt(localctx, 1)
                self.state = 299
                self.lexerAtom()
                self.state = 301
                self._errHandler.sync(self)
                _la = self._input.LA(1)
                if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << jsgParser.OBRACE) | (1 << jsgParser.STAR) | (1 << jsgParser.QMARK) | (1 << jsgParser.PLUS))) != 0):
                    self.state = 300
                    self.ebnfSuffix()
                pass
            elif token in [jsgParser.OPREN]:
                self.enterOuterAlt(localctx, 2)
                self.state = 303
                self.lexerBlock()
                self.state = 305
                self._errHandler.sync(self)
                _la = self._input.LA(1)
                if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << jsgParser.OBRACE) | (1 << jsgParser.STAR) | (1 << jsgParser.QMARK) | (1 << jsgParser.PLUS))) != 0):
                    self.state = 304
                    self.ebnfSuffix()
                pass
            else:
                raise NoViableAltException(self)
        except RecognitionException as re:
            localctx.exception = re
            self._errHandler.reportError(self, re)
            self._errHandler.recover(self, re)
        finally:
            self.exitRule()
        return localctx
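The `(((_la) & ~0x3f) == 0 and ((1 << _la) & mask) != 0` test used twice in `lexerElement` above is ANTLR's standard bit-set membership check for follow sets. A small standalone illustration of the idea, using hypothetical token-type constants (the names and values below are illustration-only stand-ins for `jsgParser.OBRACE` etc.):

```python
# Hypothetical token type constants (stand-ins for the generated parser's).
OBRACE, STAR, QMARK, PLUS, COMMA = 11, 14, 16, 15, 20

# Build the set mask once; membership then costs two integer operations.
mask = (1 << OBRACE) | (1 << STAR) | (1 << QMARK) | (1 << PLUS)

def in_set(token_type):
    # Only token types 0..63 fit into the 64-bit mask word.
    return (token_type & ~0x3f) == 0 and ((1 << token_type) & mask) != 0

print(in_set(STAR))   # -> True
print(in_set(COMMA))  # -> False
```

The `& ~0x3f` guard is what lets the generated code skip the shift entirely for token types outside the 0..63 range of the current mask word.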
    class LexerBlockContext(ParserRuleContext):

        def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
            super().__init__(parent, invokingState)
            self.parser = parser

        def OPREN(self):
            return self.getToken(jsgParser.OPREN, 0)

        def lexerAltList(self):
            return self.getTypedRuleContext(jsgParser.LexerAltListContext,0)

        def CPREN(self):
            return self.getToken(jsgParser.CPREN, 0)

        def getRuleIndex(self):
            return jsgParser.RULE_lexerBlock

        def accept(self, visitor:ParseTreeVisitor):
            if hasattr( visitor, "visitLexerBlock" ):
                return visitor.visitLexerBlock(self)
            else:
                return visitor.visitChildren(self)

    def lexerBlock(self):
        localctx = jsgParser.LexerBlockContext(self, self._ctx, self.state)
        self.enterRule(localctx, 60, self.RULE_lexerBlock)
        try:
            self.enterOuterAlt(localctx, 1)
            self.state = 309
            self.match(jsgParser.OPREN)
            self.state = 310
            self.lexerAltList()
            self.state = 311
            self.match(jsgParser.CPREN)
        except RecognitionException as re:
            localctx.exception = re
            self._errHandler.reportError(self, re)
            self._errHandler.recover(self, re)
        finally:
            self.exitRule()
        return localctx
    class LexerAtomContext(ParserRuleContext):

        def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
            super().__init__(parent, invokingState)
            self.parser = parser

        def lexerTerminal(self):
            return self.getTypedRuleContext(jsgParser.LexerTerminalContext,0)

        def LEXER_CHAR_SET(self):
            return self.getToken(jsgParser.LEXER_CHAR_SET, 0)

        def ANY(self):
            return self.getToken(jsgParser.ANY, 0)

        def getRuleIndex(self):
            return jsgParser.RULE_lexerAtom

        def accept(self, visitor:ParseTreeVisitor):
            if hasattr( visitor, "visitLexerAtom" ):
                return visitor.visitLexerAtom(self)
            else:
                return visitor.visitChildren(self)

    def lexerAtom(self):
        localctx = jsgParser.LexerAtomContext(self, self._ctx, self.state)
        self.enterRule(localctx, 62, self.RULE_lexerAtom)
        try:
            self.state = 316
            self._errHandler.sync(self)
            token = self._input.LA(1)
            if token in [jsgParser.STRING, jsgParser.LEXER_ID]:
                self.enterOuterAlt(localctx, 1)
                self.state = 313
                self.lexerTerminal()
                pass
            elif token in [jsgParser.LEXER_CHAR_SET]:
                self.enterOuterAlt(localctx, 2)
                self.state = 314
                self.match(jsgParser.LEXER_CHAR_SET)
                pass
            elif token in [jsgParser.ANY]:
                self.enterOuterAlt(localctx, 3)
                self.state = 315
                self.match(jsgParser.ANY)
                pass
            else:
                raise NoViableAltException(self)
        except RecognitionException as re:
            localctx.exception = re
            self._errHandler.reportError(self, re)
            self._errHandler.recover(self, re)
        finally:
            self.exitRule()
        return localctx
    class LexerTerminalContext(ParserRuleContext):

        def __init__(self, parser, parent:ParserRuleContext=None, invokingState:int=-1):
            super().__init__(parent, invokingState)
            self.parser = parser

        def LEXER_ID(self):
            return self.getToken(jsgParser.LEXER_ID, 0)

        def STRING(self):
            return self.getToken(jsgParser.STRING, 0)

        def getRuleIndex(self):
            return jsgParser.RULE_lexerTerminal

        def accept(self, visitor:ParseTreeVisitor):
            if hasattr( visitor, "visitLexerTerminal" ):
                return visitor.visitLexerTerminal(self)
            else:
                return visitor.visitChildren(self)

    def lexerTerminal(self):
        localctx = jsgParser.LexerTerminalContext(self, self._ctx, self.state)
        self.enterRule(localctx, 64, self.RULE_lexerTerminal)
        self._la = 0 # Token type
        try:
            self.enterOuterAlt(localctx, 1)
            self.state = 318
            _la = self._input.LA(1)
            if not(_la==jsgParser.STRING or _la==jsgParser.LEXER_ID):
                self._errHandler.recoverInline(self)
            else:
                self._errHandler.reportMatch(self)
                self.consume()
        except RecognitionException as re:
            localctx.exception = re
            self._errHandler.reportError(self, re)
            self._errHandler.recover(self, re)
        finally:
            self.exitRule()
        return localctx
| 34.429034 | 299 | 0.566318 | 9,901 | 85,143 | 4.782244 | 0.076861 | 0.018628 | 0.013242 | 0.013686 | 0.674228 | 0.6209 | 0.577626 | 0.522714 | 0.450126 | 0.445838 | 0 | 0.085111 | 0.314025 | 85,143 | 2,472 | 300 | 34.442961 | 0.725576 | 0.003242 | 0 | 0.576352 | 1 | 0.073701 | 0.109301 | 0.091774 | 0 | 0 | 0.000566 | 0 | 0 | 1 | 0.130965 | false | 0.018028 | 0.002651 | 0.066808 | 0.33404 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ad2d2edbe5cd065fd478d317524918fed089d282 | 1,111 | py | Python | example/familytree.py | realistschuckle/pyvisitor | f08bd50f5ca5ff4288f00d9045ca406e278ed306 | [
"MIT"
] | 15 | 2015-01-30T21:08:28.000Z | 2022-02-03T18:00:56.000Z | example/familytree.py | realistschuckle/pyvisitor | f08bd50f5ca5ff4288f00d9045ca406e278ed306 | [
"MIT"
] | 2 | 2016-10-03T21:33:29.000Z | 2019-02-05T13:06:05.000Z | example/familytree.py | realistschuckle/pyvisitor | f08bd50f5ca5ff4288f00d9045ca406e278ed306 | [
"MIT"
] | 7 | 2016-09-16T07:34:50.000Z | 2022-02-03T18:03:01.000Z | from __future__ import print_function
import sys
import os

# Put the path to the visitor module on the search path
path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'src'))
if path not in sys.path:
    sys.path.insert(1, path)

import visitor


class Person(object):
    def __init__(self, name):
        self.name = name
        self.deps = []

    def add_dependent(self, dep):
        self.deps.append(dep)

    def accept(self, visitor):
        visitor.visit(self)


class Pet(object):
    def __init__(self, name, breed):
        self.name = name
        self.breed = breed

    def accept(self, visitor):
        visitor.visit(self)


class DescendantsVisitor(object):
    def __init__(self):
        self.level = 0

    @visitor.on('member')
    def visit(self, member):
        pass

    @visitor.when(Person)
    def visit(self, member):
        self.write_padding()
        print('-', member.name)
        self.level += 1
        for dep in member.deps:
            dep.accept(self)
        self.level -= 1

    @visitor.when(Pet)
    def visit(self, member):
        self.write_padding()
        print('-', member.name, 'a', member.breed)

    def write_padding(self):
        for i in range(self.level):
            sys.stdout.write(' ')
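The `visitor.on`/`visitor.when` pair above implements type-based double dispatch. The standard library offers a comparable mechanism via `functools.singledispatch`; the following is a parallel sketch of the same idea, not the project's own `visitor` module (the `describe` function and its return strings are illustration-only):

```python
from functools import singledispatch

class Person:
    def __init__(self, name):
        self.name = name

class Pet:
    def __init__(self, name, breed):
        self.name, self.breed = name, breed

@singledispatch
def describe(member):
    # Fallback for unregistered types, analogous to @visitor.on.
    raise TypeError("unsupported member")

@describe.register
def _(member: Person):
    return "- {}".format(member.name)

@describe.register
def _(member: Pet):
    return "- {} a {}".format(member.name, member.breed)

print(describe(Person("Mark")))        # -> - Mark
print(describe(Pet("Rex", "terrier"))) # -> - Rex a terrier
```

Unlike the decorator pair in the example, `singledispatch` selects the implementation from the first argument's type, so it needs no `accept` method on the visited classes.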
ad2d3fa37ba1868e4e2a135fb9535b7aef051f1f | 234 | py | Python | bgui/server/server/config.py | monash-emu/Legacy-AuTuMN | 513bc14b4ea8c29c5983cc90fb94284e6a003515 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | bgui/server/server/config.py | monash-emu/Legacy-AuTuMN | 513bc14b4ea8c29c5983cc90fb94284e6a003515 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | bgui/server/server/config.py | monash-emu/Legacy-AuTuMN | 513bc14b4ea8c29c5983cc90fb94284e6a003515 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | SQLALCHEMY_DATABASE_URI = 'sqlite:///database.sqlite'
SECRET_KEY = 'F12Zr47j\3yX R~X@H!jmM]Lwf/,?KT'
SAVE_FOLDER = '../../../projects'
SQLALCHEMY_TRACK_MODIFICATIONS = False
PORT = '3000'
STATIC_FOLDER = '../../client/dist/static'
| 29.25 | 53 | 0.709402 | 30 | 234 | 5.3 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042056 | 0.08547 | 234 | 7 | 54 | 33.428571 | 0.700935 | 0 | 0 | 0 | 0 | 0 | 0.454936 | 0.2103 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ad31e247da8cf855b63a1a735072233c2abea496 | 3,608 | py | Python | babyname_parser.py | jongtaeklho/swpp-hw1-jongtaeklho | 1f0d2e4d4af985f83cbf3a9ee7548fecee68d346 | [
"Apache-2.0"
] | null | null | null | babyname_parser.py | jongtaeklho/swpp-hw1-jongtaeklho | 1f0d2e4d4af985f83cbf3a9ee7548fecee68d346 | [
"Apache-2.0"
] | null | null | null | babyname_parser.py | jongtaeklho/swpp-hw1-jongtaeklho | 1f0d2e4d4af985f83cbf3a9ee7548fecee68d346 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# Copyright 2010 Google Inc.
# Licensed under the Apache License, Version 2.0
# http://www.apache.org/licenses/LICENSE-2.0
# Google's Python Class
# http://code.google.com/edu/languages/google-python-class/
# Modified by Alchan Kim at SNU Software Platform Lab for
# SWPP fall 2020 lecture.
import sys
import re
import os
from functools import wraps
"""Baby Names exercise
Implement the babyname parser class that parses the popular names and their ranks from a html file.
1) At first, you need to implement a decorator that checks whether the html file exists or not.
2) Also, the parser should extract tuples of (rank, male-name, female-name) from the file by using regex.
For writing regex, it's nice to include a copy of the target text for inspiration.
3) Finally, you need to implement `parse` method in `BabynameParser` class that parses the extracted tuples
with the given lambda and return a list of processed results.
"""
class BabynameFileNotFoundException(Exception):
    """
    A custom exception for the cases that the babyname file does not exist.
    """
    pass
def check_filename_existence(func):
    """
    (1 point)
    A decorator that catches the non-existing filename argument and raises a custom
    `BabynameFileNotFoundException`.

    Args:
        func: The function to decorate.
    Raises:
        BabynameFileNotFoundException: if there is no such file while func tries to open a file.
        We assume func receives directory path and year to generate a filename to open.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except FileNotFoundError as pathname:
            raise BabynameFileNotFoundException("No such file: {}".format(pathname.filename))
    return wrapper
class BabynameParser:
    @check_filename_existence
    def __init__(self, dirname, year):
        """
        (3 points)
        Given directory path and year, extracts the name of a file to open the corresponding file
        and a list of the (rank, male-name, female-name) tuples from the file read by using regex.
        [('1', 'Michael', 'Jessica'), ('2', 'Christopher', 'Ashley'), ....]

        Args:
            dirname: The name of the directory where baby name html files are stored
            year: The year number. int.
        """
        pathname = os.path.join(dirname, "{}.html".format(year))
        with open(pathname, 'r') as f:
            text = f.read()
        self.year = year
        regex = re.compile(r"<td>\w{1,60}</td>")
        res = regex.findall(text)
        mylist = [(res[0][4:-5], res[1][4:-5], res[2][4:-5])]
        i = 3
        while i <= (len(res) - 3):
            firs = res[i][4:-5]
            secon = res[i + 1][4:-5]
            thir = res[i + 2][4:-5]
            mylist.append((firs, secon, thir))
            i += 3
        self.rank_to_names_tuples = mylist
    def parse(self, parsing_lambda):
        answer = []
        for i in self.rank_to_names_tuples:
            answer.append(parsing_lambda(i))
        return answer
"""
(2 points)
Collects a list of babynames parsed from the (rank, male-name, female-name) tuples.
The list must contains all results processed with the given lambda.
Args:
parsing_lambda: The parsing lambda.
It must process an single (string, string, string) tuple and return something.
Returns:
A list of lambda function's output
"""
# TODO: Implement this method
| 33.100917 | 119 | 0.636918 | 491 | 3,608 | 4.645621 | 0.413442 | 0.005261 | 0.012275 | 0.023674 | 0.055239 | 0.027181 | 0.027181 | 0 | 0 | 0 | 0 | 0.0163 | 0.268847 | 3,608 | 108 | 120 | 33.407407 | 0.84837 | 0.221729 | 0 | 0 | 0 | 0 | 0.032334 | 0 | 0 | 0 | 0 | 0.018519 | 0 | 1 | 0.108108 | false | 0.027027 | 0.108108 | 0 | 0.351351 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ad375040d6b6884d5223dc6bfe23eb549e651e7b | 322 | py | Python | sql/mysql_demo.py | garyhu1/first-python | 01731d419a64aec9683b450d0e8e233f4b5cc9a7 | [
"Apache-2.0"
] | 1 | 2019-09-03T11:42:38.000Z | 2019-09-03T11:42:38.000Z | sql/mysql_demo.py | garyhu1/first-python | 01731d419a64aec9683b450d0e8e233f4b5cc9a7 | [
"Apache-2.0"
] | null | null | null | sql/mysql_demo.py | garyhu1/first-python | 01731d419a64aec9683b450d0e8e233f4b5cc9a7 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
'Connect to a MySQL database'
__author__ = 'garyhu'

import mysql.connector

# Connect to the database
conn = mysql.connector.connect(user='root', password='****', database='websites')
s = conn.cursor()
s.execute('select * from users where id = %s', (7,))
value = s.fetchall()
print(value)
s.close()
conn.close()
| 15.333333 | 80 | 0.642857 | 43 | 322 | 4.72093 | 0.790698 | 0.137931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010753 | 0.13354 | 322 | 20 | 81 | 16.1 | 0.716846 | 0.189441 | 0 | 0 | 0 | 0 | 0.245353 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.111111 | 0.111111 | 0 | 0.111111 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ad3f016195bbebe749ea93e6dd6bbfeb76dd0048 | 1,376 | py | Python | client/examples/cycle-cards.py | spoore1/smart-card-removinator | dfc42e0ab5cea45c2ba299c10e7bc3b5857ddba2 | [
"Apache-2.0"
] | 26 | 2016-10-14T02:33:42.000Z | 2022-02-22T01:44:28.000Z | client/examples/cycle-cards.py | spoore1/smart-card-removinator | dfc42e0ab5cea45c2ba299c10e7bc3b5857ddba2 | [
"Apache-2.0"
] | 2 | 2018-03-18T03:06:52.000Z | 2021-03-21T10:14:17.000Z | client/examples/cycle-cards.py | spoore1/smart-card-removinator | dfc42e0ab5cea45c2ba299c10e7bc3b5857ddba2 | [
"Apache-2.0"
] | 8 | 2017-04-26T01:54:07.000Z | 2021-09-21T14:14:49.000Z | #!/usr/bin/env python
from removinator import removinator
import subprocess
# This example cycles through each card slot in the Removinator. Any
# slots that have a card present will then have the certificates on the
# card printed out using the pkcs15-tool utility, which is provided by
# the OpenSC project.
#
# Examples of parsing the Removinator status output and enabling debug
# output from the firmware are also provided.
print('--- Connecting to Removinator ---')
ctl = removinator.Removinator()
print('--- Cycling through cards ---')
for card in range(1, 9):
try:
ctl.insert_card(card)
print('Inserted card {0}'.format(card))
print('{0}'.format(subprocess.check_output(['pkcs15-tool',
'--list-certificates'])
.rstrip()))
except removinator.SlotError:
print('Card {0} is not inserted'.format(card))
print('--- Checking Removinator status ---')
status = ctl.get_status()
print('Current card: {0}'.format(status['current']))
for card in status['present']:
print('Card {0} is present'.format(card))
print('--- Debug output for re-insertion of current card ---')
ctl.set_debug(True)
ctl.insert_card(status['current'])
print('{0}'.format(ctl.last_response.rstrip()))
ctl.set_debug(False)
print('--- Remove current card ---')
ctl.remove_card()
| 32.761905 | 74 | 0.667151 | 179 | 1,376 | 5.083799 | 0.458101 | 0.03956 | 0.049451 | 0.026374 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01086 | 0.196948 | 1,376 | 41 | 75 | 33.560976 | 0.81267 | 0.261628 | 0 | 0 | 0 | 0 | 0.308532 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.423077 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
ad3f3ba0c8f945d938221059aed3b9fa065e26c7 | 716 | py | Python | PDFParser/Client.py | NekuHarp/TPScrum1 | bf5d4cd4353066517077bd4116b523b3ce1f99ea | [
"Apache-2.0"
] | 2 | 2018-12-14T10:57:02.000Z | 2019-11-23T14:20:55.000Z | PDFParser/Client.py | NekuHarp/TPScrum1 | bf5d4cd4353066517077bd4116b523b3ce1f99ea | [
"Apache-2.0"
] | null | null | null | PDFParser/Client.py | NekuHarp/TPScrum1 | bf5d4cd4353066517077bd4116b523b3ce1f99ea | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from . import Defs as _D
from .Parser import Parser as _P


class Client:
    def __init__(self):
        self.id = 4
        self.p = _P()

    def doXML(self, folder):
        _EOUT = 'xml'
        print(_EOUT, folder)

    def doTXT(self, folder):
        _EOUT = 'txt'
        print(_EOUT, folder)

    def setOut(self, wd):
        self.p.setWD(wd)

    def ls(self, wd):
        gl = self.p.listDir(wd)
        gl = [g.replace('\\', '/') for g in gl]
        return gl

    def parser(self, Fname, xml):
        return self.p.parser(Fname, xml)

    def run(self):
        print("Client {} : {} ".format(self.p.parse("-{}".format(self.id)), _D.VAR))
        self.cli.main()
| 21.058824 | 84 | 0.52095 | 98 | 716 | 3.683673 | 0.438776 | 0.069252 | 0.077562 | 0.099723 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00409 | 0.317039 | 716 | 33 | 85 | 21.69697 | 0.734151 | 0.02933 | 0 | 0.083333 | 0 | 0 | 0.038961 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.291667 | false | 0.041667 | 0.083333 | 0.041667 | 0.5 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ad43350b2da704f794c52e0d66b5d6a868f93d05 | 388 | py | Python | Proctor_Brad/Assignments/bubble sort.py | webguru001/Python-Django-Web | 6264bc4c90ef1432ba0902c76b567cf3caaae221 | [
"MIT"
] | 5 | 2019-05-17T01:30:02.000Z | 2021-06-17T21:02:58.000Z | Proctor_Brad/Assignments/bubble sort.py | curest0x1021/Python-Django-Web | 6264bc4c90ef1432ba0902c76b567cf3caaae221 | [
"MIT"
] | null | null | null | Proctor_Brad/Assignments/bubble sort.py | curest0x1021/Python-Django-Web | 6264bc4c90ef1432ba0902c76b567cf3caaae221 | [
"MIT"
] | null | null | null | import random
import time

b = []
for x in range(0, 100):
    b.append(int(random.random() * 10000))

maximum = len(b) - 1
start_time = time.time()
for i in range(0, maximum):
    for j in range(0, maximum):
        if b[j] > b[j + 1]:
            temp = b[j]
            b[j] = b[j + 1]
            b[j + 1] = temp
    # The largest remaining value has bubbled to the end; shrink the scan range.
    maximum -= 1
print(b)
print("--- %s seconds ---" % (time.time() - start_time))
| 20.421053 | 56 | 0.523196 | 66 | 388 | 3.045455 | 0.363636 | 0.059701 | 0.119403 | 0.059701 | 0.049751 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057762 | 0.286082 | 388 | 18 | 57 | 21.555556 | 0.66787 | 0 | 0 | 0 | 0 | 0 | 0.046392 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.125 | null | null | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ad437b9b77502ea14c72a607a4c29fcf984906ad | 1,863 | py | Python | app/morocco/authentication.py | troydai/Morocco | b975d4f6813734a4c7b8e6c61669976389a27560 | [
"MIT"
] | null | null | null | app/morocco/authentication.py | troydai/Morocco | b975d4f6813734a4c7b8e6c61669976389a27560 | [
"MIT"
] | 9 | 2017-07-11T08:17:27.000Z | 2017-08-01T21:48:00.000Z | app/morocco/authentication.py | troydai/Morocco | b975d4f6813734a4c7b8e6c61669976389a27560 | [
"MIT"
] | null | null | null | import flask_login
from .application import app
from .models import DbUser
login_manager = flask_login.LoginManager() # pylint: disable=invalid-name
login_manager.init_app(app)
login_manager.user_loader(lambda user_id: DbUser.query.filter_by(id=user_id).first())
login_required = flask_login.login_required
@login_manager.unauthorized_handler
def unauthorized_handler():
from flask import redirect, request, url_for
return redirect(url_for('login', request_uri=request.path))
@app.before_request
def redirect_https():
from flask import redirect, request
if 'X-Arr-Ssl' not in request.headers and not app.config['is_local_server']:
redirect_url = request.url.replace('http', 'https')
return redirect(redirect_url)
@app.route('/', methods=['GET'])
def index():
from flask import render_template
byline = 'Morocco - An automation service runs on Azure Batch.\n'
return render_template('index.html', byline=byline, title='Azure CLI')
@app.route('/login', methods=['GET'])
def login():
"""Redirect user agent to Azure AD sign-in page"""
import morocco.auth
return morocco.auth.openid_login()
@app.route('/signin-callback', methods=['POST'])
def signin_callback():
"""Redirect from AAD sign in page"""
def get_or_add_user(user_id: str):
from .application import db
from .models import DbUser
user = DbUser.query.filter_by(id=user_id).first()
if not user:
user = DbUser(user_id)
db.session.add(user)
db.session.commit()
return user
import morocco.auth
return morocco.auth.openid_callback(get_or_add_user)
@app.route('/logout', methods=['POST'])
def logout():
"""Logout from both this application as well as Azure OpenID sign in."""
import morocco.auth
return morocco.auth.openid_logout()
| 27.397059 | 85 | 0.703167 | 256 | 1,863 | 4.957031 | 0.355469 | 0.052009 | 0.035461 | 0.054374 | 0.192277 | 0.144996 | 0.144996 | 0.050433 | 0 | 0 | 0 | 0 | 0.180891 | 1,863 | 67 | 86 | 27.80597 | 0.831586 | 0.092324 | 0 | 0.116279 | 0 | 0 | 0.092593 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162791 | false | 0 | 0.255814 | 0 | 0.581395 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ad4606ad266b7b3db3e78f36d7d519b541e707cd | 1,242 | py | Python | log_utils.py | zheng-yanan/hierarchical-deep-generative-models | 3a92d2ce69a51f4da55a18b09ca4c246f6f6ed43 | [
"MIT"
] | 1 | 2019-06-06T02:55:45.000Z | 2019-06-06T02:55:45.000Z | log_utils.py | zheng-yanan/hierarchical-deep-generative-model | 3a92d2ce69a51f4da55a18b09ca4c246f6f6ed43 | [
"MIT"
] | null | null | null | log_utils.py | zheng-yanan/hierarchical-deep-generative-model | 3a92d2ce69a51f4da55a18b09ca4c246f6f6ed43 | [
"MIT"
] | null | null | null | # -*- coding:utf-8 -*-
from __future__ import division
from __future__ import absolute_import
from __future__ import print_function

import os
import sys
import logging


def logger_fn(name, filepath, level=logging.DEBUG):
    """Function for creating a log manager.

    Args:
        name: name for the log manager
        filepath: file path for the log file
        level: log level (CRITICAL > ERROR > WARNING > INFO > DEBUG)
    Return:
        log manager
    """
    logger = logging.getLogger(name)
    logger.setLevel(level)
    sh = logging.StreamHandler(sys.stdout)
    fh = logging.FileHandler(filepath, mode='w')
    # formatter = logging.Formatter('[%(asctime)s][%(levelname)s][%(filename)s][line:%(lineno)d] %(message)s')
    # formatter = logging.Formatter('[%(asctime)s][%(filename)s][line:%(lineno)d] %(message)s')
    formatter = logging.Formatter('[%(asctime)s] %(message)s')
    """
    %(levelno)s: numeric logging level
    %(levelname)s: text name of the logging level
    %(pathname)s: full path of the program being executed, i.e. sys.argv[0]
    %(filename)s: filename of the program being executed
    %(funcName)s: name of the function issuing the logging call
    %(lineno)d: line number of the logging call
    %(asctime)s: time the log record was created
    %(thread)d: thread ID
    %(threadName)s: thread name
    %(process)d: process ID
    %(message)s: the logged message
    """
    sh.setFormatter(formatter)
    fh.setFormatter(formatter)
    logger.addHandler(sh)
    logger.addHandler(fh)
    return logger
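The handler/formatter wiring in `logger_fn` can be exercised without touching the filesystem by pointing a `StreamHandler` at an in-memory buffer. A minimal sketch of the same setup (`demo_logger` and the buffer are illustration-only names, not part of the module):

```python
import io
import logging

buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter('[%(levelname)s] %(message)s'))

demo_logger = logging.getLogger('demo_logger')
demo_logger.setLevel(logging.DEBUG)
demo_logger.addHandler(handler)

demo_logger.info('hello')
handler.flush()
print(buffer.getvalue())  # -> [INFO] hello
```

Swapping the buffer for `sys.stdout` and adding a `FileHandler` gives exactly the dual-output arrangement the function builds.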
ad47d50e27f1ff53557e090e02e00ff688fd1b95 | 1,818 | py | Python | phr/insteducativa/api/serializers.py | richardqa/django-ex | e5b8585f28a97477150ac5daf5e55c74b70d87da | [
"CC0-1.0"
] | null | null | null | phr/insteducativa/api/serializers.py | richardqa/django-ex | e5b8585f28a97477150ac5daf5e55c74b70d87da | [
"CC0-1.0"
] | null | null | null | phr/insteducativa/api/serializers.py | richardqa/django-ex | e5b8585f28a97477150ac5daf5e55c74b70d87da | [
"CC0-1.0"
] | null | null | null | from drf_extra_fields.geo_fields import PointField
from rest_framework import serializers

from phr.insteducativa.models import InstitucionEducativa
from phr.ubigeo.models import UbigeoDepartamento, UbigeoDistrito, UbigeoProvincia


class InstEducativaSerializer(serializers.ModelSerializer):
    ubicacion = PointField(required=False)
    departamento_nombre = serializers.SerializerMethodField()
    provincia_nombre = serializers.SerializerMethodField()
    distrito_nombre = serializers.SerializerMethodField()

    class Meta:
        model = InstitucionEducativa
        fields = ('codigo_colegio', 'codigo_modular', 'nombre', 'ubigeo', 'direccion', 'nivel', 'nivel_descripcion',
                  'tipo', 'tipo_descripcion', 'nombre_ugel', 'establecimiento_renaes', 'establecimiento_nombre',
                  'ubicacion', 'departamento_nombre', 'provincia_nombre', 'distrito_nombre',)

    def get_departamento_nombre(self, obj):
        if obj.ubigeo:
            try:
                departamento = UbigeoDepartamento.objects.get(cod_ubigeo_inei_departamento=obj.ubigeo[:2])
                return departamento.ubigeo_departamento
            except UbigeoDepartamento.DoesNotExist:
                return ''

    def get_provincia_nombre(self, obj):
        if obj.ubigeo:
            try:
                provincia = UbigeoProvincia.objects.get(cod_ubigeo_inei_provincia=obj.ubigeo[:4])
                return provincia.ubigeo_provincia
            except UbigeoProvincia.DoesNotExist:
                return ''

    def get_distrito_nombre(self, obj):
        if obj.ubigeo:
            try:
                distrito = UbigeoDistrito.objects.get(cod_ubigeo_inei_distrito=obj.ubigeo)
                return distrito.ubigeo_distrito
            except UbigeoDistrito.DoesNotExist:
                return ''
| 42.27907 | 116 | 0.684268 | 164 | 1,818 | 7.371951 | 0.335366 | 0.044665 | 0.094293 | 0.037221 | 0.124069 | 0.066998 | 0.066998 | 0 | 0 | 0 | 0 | 0.001442 | 0.237074 | 1,818 | 42 | 117 | 43.285714 | 0.870224 | 0 | 0 | 0.257143 | 0 | 0 | 0.112761 | 0.024202 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0 | 0.114286 | 0 | 0.542857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ad4d7bab059d0ea9bad8d2d28f8ce727a3796264 | 14,523 | py | Python | src/dmglib.py | rickmark/dmglib | abcb16b4eeaec8e34f13248874c0e5b39dcfd96d | [
"MIT"
] | null | null | null | src/dmglib.py | rickmark/dmglib | abcb16b4eeaec8e34f13248874c0e5b39dcfd96d | [
"MIT"
] | null | null | null | src/dmglib.py | rickmark/dmglib | abcb16b4eeaec8e34f13248874c0e5b39dcfd96d | [
"MIT"
] | null | null | null | """
dmglib is a basic ``hdiutil`` wrapper that simplifies working with dmg images from Python.
The module can be used to attach and detach disk images, to check a disk image's
validity and to query whether disk images are password protected or have a license
agreement included.
"""
import plistlib
import subprocess
import os
import enum
import sys
import typing
from contextlib import contextmanager
NAME = 'dmglib'
HDIUTIL_PATH = '/usr/bin/hdiutil'
class InvalidDiskImage(Exception):
    """The disk image is deemed invalid and therefore cannot be attached."""
    pass


class InvalidOperation(Exception):
    """An invalid operation was performed by the user.

    Examples include trying to detach a dmg that was never attached or
    trying to attach a disk image twice.
    """
    pass


class AttachingFailed(Exception):
    """Attaching failed for unknown reasons."""
    pass


class AlreadyAttached(AttachingFailed):
    """The disk image has already been attached previously."""
    pass


class PasswordRequired(AttachingFailed):
    """No password was supplied even though one was required."""
    pass


class PasswordIncorrect(AttachingFailed):
    """An incorrect password was supplied for the disk image."""
    pass


class LicenseAgreementNeedsAccepting(AttachingFailed):
    """Error indicating that a license agreement needs accepting."""
    pass


class DetachingFailed(Exception):
    """Error to indicate a volume could not be detached successfully."""
    pass


class ConversionFailed(Exception):
    """Error to indicate that conversion failed."""
    pass
def _raw_hdituil(args, input: bytes = None) -> (int, bytes):
    """Invokes hdiutil with the supplied arguments and returns return code and stdout contents."""
    if not os.path.exists(HDIUTIL_PATH):
        raise FileNotFoundError('Unable to find hdiutil.')

    completed = subprocess.run([HDIUTIL_PATH] + args,
                               input=input, capture_output=True)
    return (completed.returncode, completed.stdout)
def _hdiutil(args, plist=True, keyphrase=None) -> (bool, dict):
    """Calls the command line 'hdiutil' binary with the supplied parameters.

    Args:
        args: Arguments for the hdiutil command.
        plist: Whether to ask hdiutil to return plist (dictionary) output.
        keyphrase: Optional parameter for encrypted disk images.

    Returns:
        Tuple containing result status as first element and a dictionary
        containing the decoded plist response or `None` if the operation failed.
    """
    # Certain operations do not support plist output...
    if plist and '-plist' not in args:
        args.append('-plist')

    if keyphrase is not None:
        args.append('-stdinpass')

    returncode, output = _raw_hdituil(args, input=keyphrase.encode('utf8') if keyphrase else None)
    if returncode != 0:
        return False, dict()

    if plist:
        return True, plistlib.loads(output)
    else:
        return True, dict()
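The plist branch above relies on `plistlib` decoding hdiutil's XML output into a dictionary. A minimal, self-contained sketch of that decoding step, using a hand-written sample in the plist shape (illustration-only; real `hdiutil ... -plist` output carries many more keys):

```python
import plistlib

# Hypothetical sample mimicking the XML plist shape hdiutil emits.
sample = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>encrypted</key>
    <false/>
    <key>image-path</key>
    <string>/tmp/example.dmg</string>
</dict>
</plist>
"""

decoded = plistlib.loads(sample)
print(decoded['image-path'])            # -> /tmp/example.dmg
print(decoded.get('encrypted', False))  # -> False
```

This is why the helpers below can treat hdiutil's answer as an ordinary dict and use `.get(...)` lookups with defaults.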
def _hdiutil_isencrypted(path) -> bool:
    """Checks whether a disk image is encrypted."""
    success, result = _hdiutil(['isencrypted', path])
    return success and result.get('encrypted', False)


def _hdiutil_imageinfo(path, keyphrase=None) -> (bool, dict):
    """Obtains image infos for a disk image.

    Args:
        path: The disk image for which to obtain information.
        keyphrase: Optional parameter for encrypted images.

    Returns:
        Tuple containing result status as first element and a dictionary
        containing the disk image infos obtained from hdiutil.
    """
    return _hdiutil(['imageinfo', path], keyphrase=keyphrase)
def _hdiutil_convert(input_path: str, output_path: str, disk_format: str) -> (bool, typing.Sequence[str]):
    """Converts a disk image to a different format.

    Args:
        input_path: The source disk image
        output_path: The converted disk image
        disk_format: One of the hdiutil supported disk image formats

    Returns:
        Tuple containing the resulting file
    """
    return _hdiutil([
        'convert',
        '-format',
        disk_format,
        '-o',
        output_path,
        input_path
    ])
def _hdiutil_attach(path, keyphrase=None) -> (bool, dict):
    """Attaches a disk image.

    The image is mounted using the `-nobrowse` flag so that it is not visible in
    Finder.app.

    Args:
        path: The disk image to attach.
        keyphrase: Optional parameter for encrypted images.

    Returns:
        Tuple containing status code and information on mounted volume,
        if successful.
    """
    return _hdiutil([
        'attach',
        path,
        '-nobrowse'  # Do not make the mounted volumes visible in Finder.app
    ], keyphrase=keyphrase)


def _hdiutil_detach(dev_node, force=False) -> bool:
    """Detaches a disk image.

    Args:
        dev_node: Filesystem path to attached volume, e.g. `/dev/disk1s1`.
        force: Whether to ignore open files on the attached volume.

    Returns:
        Status code indicating success.
    """
    success, _ = _hdiutil(['detach', dev_node] + (['-force'] if force else []), plist=False)
    return success


def _hdiutil_info() -> (bool, dict):
    """Obtains state information about volumes attached on the system."""
    return _hdiutil(['info'])
def attached_images() -> list:
    """Obtain a list of paths to disk images that are currently attached."""
    success, infos = _hdiutil_info()
    return [image['image-path']
            for image in infos.get('images', [])
            if 'image-path' in image]


def dmg_already_attached(path: str) -> bool:
    """Checks whether the disk image at the supplied path has already been attached.

    Querying the system for further information about already attached images fails
    with a resource exhaustion error message.
    """
    return os.path.realpath(path) in attached_images()
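`dmg_already_attached` compares canonical paths, so a symlinked or dot-laden spelling of an image path still matches the entry hdiutil reports. A small illustration of the `os.path.realpath` normalization the membership test relies on (the paths below are hypothetical, standing in for what `attached_images()` would return):

```python
import os

# Hypothetical list, mimicking attached_images() output.
attached = ['/Users/demo/images/example.dmg']

# A messy-but-equivalent spelling: realpath collapses the redundant
# '.' and 'subdir/..' components (and resolves symlinks) before the test.
candidate = '/Users/demo/images/./subdir/../example.dmg'

print(os.path.realpath(candidate) in attached)  # -> True
```

Without the `realpath` call, the plain string comparison would miss this match even though both spellings name the same file.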
def dmg_is_encrypted(path: str) -> bool:
"""Checks whether DMG at the supplied path is password protected."""
return _hdiutil_isencrypted(path)
def dmg_check_keyphrase(path: str, keyphrase: str) -> bool:
"""Checks the keyphrase for the disk image at the supplied path.
Note:
This function assumes the DiskImage is encrypted and raises
an exception if it is not.
Args:
path: path to disk image for which to check the keyphrase
keyphrase: keyphrase to check
Raises:
InvalidOperation: the disk image was not encrypted.
"""
if not dmg_is_encrypted(path):
raise InvalidOperation('DiskImage is not encrypted')
success, _ = _hdiutil_imageinfo(path, keyphrase=keyphrase)
return success
def dmg_is_valid(path: str) -> bool:
"""Checks the validity of the supplied disk image.
A disk image is considered valid if it is either unencrypted and valid
according to hdiutil, or reported as encrypted by hdiutil.
"""
if dmg_is_encrypted(path):
return True
success, _ = _hdiutil_imageinfo(path)
return success
class MountedVolume:
def __init__(self, mount_point, volume_kind):
self.mount_point = mount_point
self.volume_kind = volume_kind
class DMGState(enum.Enum):
DETACHED = 1
ATTACHED = 2
class DiskFormat(enum.Enum):
READ_ONLY = 'UDRO'
COMPRESSED_ADC = 'UDCO'
COMPRESSED = 'UDZO'
COMPRESSED_BZIP2 = 'UDBZ'
COMPRESSED_LZFSE = 'UDFO'
COMPRESSED_LZMA = 'ULMO'
ENTIRE_DEVICE = 'UFBI'
IPOD_IMAGE = 'IPOD'
UDIF_STUB = 'UDxx'
SPARSE_BUNDLE = 'UDSB'
SPARSE = 'UDSP'
READ_WRITE = 'UDRW'
OPTICAL_MASTER = 'UDTO'
DISK_COPY = 'DC42'
NDIF_READ_WRITE = 'RdWr'
NDIF_READ_ONLY = 'Rdxx'
NDIF_COMPRESSED = 'ROCo'
NDIF_KEN_CODE = 'Rken'
class DMGStatus:
def __init__(self):
self.status = DMGState.DETACHED
self.mount_points = []
self.root_dev_entry = None
def is_attached(self) -> bool:
return self.status == DMGState.ATTACHED
def record_attached(self, paths, root_dev_entry):
self.status = DMGState.ATTACHED
self.mount_points = paths
self.root_dev_entry = root_dev_entry
    def record_detached(self):
        self.status = DMGState.DETACHED
        self.mount_points = []
        self.root_dev_entry = None
class DiskImage:
    """Class representing macOS disk image (.dmg) files."""
def __init__(self, path, keyphrase=None):
"""Initialize a disk image object. Note: Simply constructing the object
does not attach the DMG. Use the :py:meth:`DiskImage.attach` method for that.
Args:
path: The path to the disk image
keyphrase: Optional argument for password protected images
Raises:
AlreadyAttached: The disk image is already attached on the system.
InvalidDiskImage: The disk image is not a valid disk image.
PasswordRequired: A password is required but none was provided.
PasswordIncorrect: An incorrect password was supplied.
"""
# hdiutil fails when the target path has already been mounted / attached.
if dmg_already_attached(path):
raise AlreadyAttached()
if not dmg_is_valid(path):
raise InvalidDiskImage()
if dmg_is_encrypted(path) and keyphrase is None:
raise PasswordRequired()
if dmg_is_encrypted(path) and not dmg_check_keyphrase(path, keyphrase):
raise PasswordIncorrect()
self.path = path
self.keyphrase = keyphrase
_, self.imginfo = _hdiutil_imageinfo(path, keyphrase=keyphrase)
self.status = DMGStatus()
def _lookup_property(self, property_name, default_value):
return self.imginfo \
.get('Properties', dict()) \
.get(property_name, default_value)
def has_license_agreement(self) -> bool:
"""Checks whether the disk image has an attached license agreement.
DMGs with license agreements cannot be attached using this package.
"""
return self._lookup_property('Software License Agreement', False)
def attach(self):
"""Attaches a disk image.
Returns:
List of mount points.
Raises:
InvalidOperation: This disk image has already been attached.
LicenseAgreementNeedsAccepting: The image cannot be automatically
mounted due to a license agreement.
AttachingFailed: Could not attach the disk image or no volumes on
mounted disk.
"""
if self.status.is_attached():
raise InvalidOperation()
if self.has_license_agreement():
raise LicenseAgreementNeedsAccepting()
success, result = _hdiutil_attach(self.path, keyphrase=self.keyphrase)
if not success:
raise AttachingFailed('Attaching failed for unknown reasons.')
mounted_volumes = [MountedVolume(mount_point=entity['mount-point'],
volume_kind=entity['volume-kind'])
for entity in result.get('system-entities', [])
if 'mount-point' in entity and 'volume-kind' in entity]
if len(mounted_volumes) == 0:
raise AttachingFailed('Attaching the disk image mounted no volumes.')
# The root dev entry is the smallest '/dev/disk...' entry when sorted
# lexicographically. (/dev/disk2 < /dev/disk3 < /dev/disk3s1)
# In the case of disk images containing APFS volumes, we need to detach this disk _after_
# detaching the main volumes. This is a bug in Apple's code -- for all other types of volumes,
# detaching the volume automatically detaches the entire disk image.
root_dev_entry = sorted(entity['dev-entry']
for entity in result.get('system-entities', [])
if 'dev-entry' in entity)[0]
self.status.record_attached(mounted_volumes, root_dev_entry)
return [volume.mount_point for volume in self.status.mount_points]
def detach(self, force=True):
"""Detaches a disk image.
Args:
force: ignore open files on mounted volumes. See `man 1 hdiutil`.
Raises:
InvalidOperation: The disk image was not attached on the system.
DetachingFailed: Detaching failed for unknown reasons.
"""
if not self.status.is_attached():
raise InvalidOperation()
# Detaching any mount point of an attached image automatically unmounts
# all associated volumes.
# ... unless one of these volumes is an APFS volume. In that case,
# it needs to be detached separately. Additionally, the root dev entry
# also needs to be detached explicitly.
# First detach all APFS volumes; otherwise detaching the other volumes appears
# to succeed but actually fails with an error code (!)
for volume in self.status.mount_points:
if volume.volume_kind == 'apfs':
success = _hdiutil_detach(volume.mount_point, force=force)
if not success:
raise DetachingFailed()
# Finally, detach the root dev entry.
success = _hdiutil_detach(self.status.root_dev_entry, force=force)
if not success:
raise DetachingFailed()
self.status.record_detached()
    def convert(self, path: str, disk_format: DiskFormat) -> str:
        """Converts the disk image to another format.
        Args:
            path: Destination path for the converted image.
            disk_format: Target format; one of :class:`DiskFormat`.
        Returns:
            Path of the converted image file.
        Raises:
            ConversionFailed: hdiutil could not convert the image.
        """
        success, result = _hdiutil_convert(self.path, path, disk_format.value)
        if success:
            return result[0]
        raise ConversionFailed()
@contextmanager
def attachedDiskImage(path: str, keyphrase=None):
"""Context manager to work with a disk image.
The context manager returns the list of mount points of the attached volumes.
There is always at least one mount point available, otherwise attaching fails.
The caller needs to catch exceptions (see documentation for the :class:`DiskImage`
class), or call the appropriate methods beforehand (:meth:`dmg_is_encrypted`, ...).
Example::
with dmg.attachedDiskImage('path/to/disk_image.dmg',
keyphrase='sample') as mount_points:
print(mount_points)
"""
dmg = DiskImage(path, keyphrase=keyphrase)
try:
yield dmg.attach()
finally:
if dmg.status.is_attached():
dmg.detach()
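A standalone sketch (not part of the module) of how `DiskImage.attach` derives its mounted volumes and its root dev entry from the plist that `hdiutil attach` returns. The sample dictionary below is hypothetical and only mimics the `system-entities` shape the code expects:

```python
# Hypothetical hdiutil attach -plist result: one whole-disk entry plus one
# mounted volume, mirroring the structure DiskImage.attach consumes.
sample_result = {
    'system-entities': [
        {'dev-entry': '/dev/disk3'},          # whole-disk entry, no mount point
        {'dev-entry': '/dev/disk3s1',
         'mount-point': '/Volumes/Sample',
         'volume-kind': 'hfs'},
    ]
}

# Volumes are the entities that carry both a mount point and a volume kind.
mounted = [(e['mount-point'], e['volume-kind'])
           for e in sample_result.get('system-entities', [])
           if 'mount-point' in e and 'volume-kind' in e]

# The root dev entry is the lexicographically smallest /dev/disk... entry;
# it must be detached last when APFS volumes are involved.
root_dev_entry = sorted(e['dev-entry']
                        for e in sample_result.get('system-entities', [])
                        if 'dev-entry' in e)[0]

print(mounted)          # [('/Volumes/Sample', 'hfs')]
print(root_dev_entry)   # /dev/disk3
```

This also illustrates why `attach` raises `AttachingFailed` when no entity has a mount point: the `mounted` list would simply come back empty.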
| 31.988987 | 106 | 0.658955 | 1,755 | 14,523 | 5.349288 | 0.218234 | 0.035471 | 0.020452 | 0.009587 | 0.152322 | 0.101193 | 0.076055 | 0.04559 | 0.028334 | 0.017256 | 0 | 0.001585 | 0.261379 | 14,523 | 453 | 107 | 32.059603 | 0.87359 | 0.426634 | 0 | 0.139896 | 0 | 0 | 0.063136 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129534 | false | 0.072539 | 0.036269 | 0.010363 | 0.450777 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ad5db874c90f5842c8a6e82a7b558b48f5b79bd1 | 4,187 | py | Python | web/models.py | rkhozinov/dicease-area | 9ca2159705c778a73f45ca83e83f881d47c355c4 | [
"MIT"
] | null | null | null | web/models.py | rkhozinov/dicease-area | 9ca2159705c778a73f45ca83e83f881d47c355c4 | [
"MIT"
] | null | null | null | web/models.py | rkhozinov/dicease-area | 9ca2159705c778a73f45ca83e83f881d47c355c4 | [
"MIT"
] | null | null | null | # models.py
from sys import path
from os.path import dirname as dir
path.append(dir(path[0]))  # make the parent directory importable so `app` resolves
from app import db
class District(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(120), unique=True)
coordinates = db.Column(db.String(120), nullable=True)
def __init__(self, name, coordinates=None):
self.name = name
self.coordinates = coordinates
def __repr__(self):
return self.name
class Hospital(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(120), unique=True)
address = db.Column(db.String(120), nullable=True)
phone = db.Column(db.String(120), nullable=True)
coordinates = db.Column(db.String(120), nullable=True)
district_id = db.Column(db.Integer, db.ForeignKey('district.id'))
district = db.relationship('District',
backref=db.backref('hospitals', lazy='dynamic', uselist=True))
def __init__(self, name, district, address=None, phone=None, coordinates=None):
self.name = name
self.district = district
self.address = address
self.phone = phone
self.coordinates = coordinates
def __repr__(self):
return self.name
class Disease(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Text, unique=True)
    # Without a mapped column, the `description` passed to __init__ was
    # silently discarded on persist.
    description = db.Column(db.Text, nullable=True)
    def __init__(self, name, description=None):
        self.name = name
        self.description = description
class DiseasePopulation(db.Model):
id = db.Column(db.Integer, primary_key=True)
year = db.Column(db.Integer, nullable=True)
children = db.Column(db.Integer, nullable=True)
children_observed = db.Column(db.Integer, nullable=True)
adults = db.Column(db.Integer, nullable=True)
adults_observed = db.Column(db.Integer, nullable=True)
hospital_id = db.Column(db.Integer, db.ForeignKey('hospital.id'))
hospital = db.relationship('Hospital',
backref=db.backref('population', lazy='dynamic',
uselist=True))
disease_id = db.Column(db.Integer, db.ForeignKey('disease.id'))
disease = db.relationship('Disease',
backref=db.backref('population', lazy='dynamic', uselist=True))
def __init__(self, disease, hospital, year, adults=0, adults_observed=0,
children=0, children_observed=0):
self.disease = disease
self.hospital = hospital
self.year = int(year) if year else 0
self.children = int(children)
self.children_observed = int(children_observed)
self.adults = int(adults)
self.adults_observed = int(adults_observed)
self.all = self.children + self.adults
self.all_observed = self.children_observed + self.adults_observed
    def __repr__(self):
        # This model has no `name` attribute; identify it by disease and year.
        return '{0}:{1}'.format(self.disease, self.year)
class Population(db.Model):
id = db.Column(db.Integer, primary_key=True)
year = db.Column(db.Integer)
all = db.Column(db.Integer)
men = db.Column(db.Integer)
women = db.Column(db.Integer)
children = db.Column(db.Integer)
adults = db.Column(db.Integer)
employable = db.Column(db.Integer)
employable_men = db.Column(db.Integer)
employable_women = db.Column(db.Integer)
district_id = db.Column(db.Integer, db.ForeignKey('district.id'))
district = db.relationship('District',
backref=db.backref('population', lazy='dynamic', uselist=True))
def __init__(self, year, district,
men=0, women=0, children=0, employable_men=0, employable_women=0, district_id=0):
self.district = district
self.year = int(year)
self.men = int(men)
self.women = int(women)
self.children = int(children)
self.employable_men = int(employable_men)
self.employable_women = int(employable_women)
self.all = self.men + self.women
self.adults = self.all - self.children
self.employable = self.employable_men + self.employable_women
def __repr__(self):
return '{}:{}'.format(self.year, self.all)
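The derived fields in `Population.__init__` are plain arithmetic over the constructor arguments, so they can be checked without a database. A standalone sketch with made-up numbers:

```python
# Illustrative inputs only; the real values come from user input / fixtures.
men, women, children = 4200, 4600, 1900
employable_men, employable_women = 2500, 2700

all_people = men + women                        # Population.all
adults = all_people - children                  # Population.adults
employable = employable_men + employable_women  # Population.employable

print(all_people, adults, employable)           # 8800 6900 5200
```

Note that `Population.all`, `adults`, and `employable` are computed in `__init__` rather than by the database, so they are only set when rows are created through the constructor.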
| 34.04065 | 98 | 0.642942 | 527 | 4,187 | 4.981025 | 0.119545 | 0.091429 | 0.114286 | 0.148952 | 0.627048 | 0.474286 | 0.45219 | 0.328381 | 0.310095 | 0.276571 | 0 | 0.009975 | 0.233819 | 4,187 | 122 | 99 | 34.319672 | 0.808292 | 0.00215 | 0 | 0.32967 | 0 | 0 | 0.036398 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.098901 | false | 0 | 0.032967 | 0.043956 | 0.604396 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ad611aec8e39567a4f46f53f19066964bc6e7636 | 425 | py | Python | cloudflare_exporter/handlers.py | cpaillet/cloudflare-exporter | 194a0ce0f316aadc2802fbf180d06f5aab7849be | [
"Apache-2.0"
] | null | null | null | cloudflare_exporter/handlers.py | cpaillet/cloudflare-exporter | 194a0ce0f316aadc2802fbf180d06f5aab7849be | [
"Apache-2.0"
] | 7 | 2019-11-28T11:43:56.000Z | 2020-06-09T08:21:19.000Z | cloudflare_exporter/handlers.py | cpaillet/cloudflare-exporter | 194a0ce0f316aadc2802fbf180d06f5aab7849be | [
"Apache-2.0"
] | 3 | 2019-11-28T08:36:23.000Z | 2022-02-21T11:34:41.000Z | from aiohttp import web
from prometheus_client import generate_latest
from prometheus_client.core import REGISTRY
def metric_to_text():
return generate_latest(REGISTRY).decode('utf-8')
async def handle_metrics(_request):
return web.Response(text=metric_to_text())
async def handle_health(_request):
health_text = 'ok'
health_status = 200
return web.Response(status=health_status, text=health_text)
| 22.368421 | 63 | 0.778824 | 59 | 425 | 5.338983 | 0.457627 | 0.088889 | 0.126984 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010959 | 0.141176 | 425 | 18 | 64 | 23.611111 | 0.852055 | 0 | 0 | 0 | 1 | 0 | 0.016471 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0.090909 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ad62b39dc3d459b038919fec68d5b17bad7d8e64 | 6,454 | py | Python | util/list_store.py | natduca/ndbg | f8da43be62dac18072e9b0e6e5ecd0d1818aea4d | [
"Apache-2.0"
] | 5 | 2016-05-12T08:48:41.000Z | 2018-07-17T00:48:32.000Z | util/list_store.py | natduca/ndbg | f8da43be62dac18072e9b0e6e5ecd0d1818aea4d | [
"Apache-2.0"
] | 1 | 2022-01-16T12:18:50.000Z | 2022-01-16T12:18:50.000Z | util/list_store.py | natduca/ndbg | f8da43be62dac18072e9b0e6e5ecd0d1818aea4d | [
"Apache-2.0"
] | null | null | null | # Copyright 2011 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
  import gtk
except ImportError:  # tolerate environments without pygtk (e.g. headless tests)
  gtk = None
import re
if gtk:
def liststore_get_children(ls):
res = []
for i in range(0,ls.iter_n_children(None)):
iter = ls.iter_nth_child(None,i)
res.append(iter)
return res
class _PListIter(object):
def __init__(self,pls,iter):
self._pls = pls
self._iter = iter
self._initialized = True
def __getattr__(self,k):
if self.__dict__.has_key(k):
return self.__dict__[k]
else:
i = self._pls._name_to_index[k]
return self._pls.get_value(self._iter,i)
def __setattr__(self,k,v):
if self.__dict__.has_key("_initialized"):
i = self._pls._name_to_index[k]
self._pls.set(self._iter,i,v)
return v
else:
return object.__setattr__(self,k,v)
class PListStore(gtk.ListStore):
def __init__(self, **kwargs):
keys = list(range(len(kwargs)))
types = list(range(len(kwargs)))
has_pos = False
has_nonpos = False
i = 0
for k in kwargs.keys():
m = re.match("(.+)_(\d+)",k)
if m:
if has_nonpos:
raise Exception("Can't mix _n arguments with implicitly positioned arguments")
pos = int(m.group(2))
types[pos] = kwargs[k]
keys[pos] = m.group(1)
has_pos = True
else:
if has_pos:
raise Exception("Can't mix _n arguments with implicitly positioned arguments")
pos = i
i += 1
keys[pos] = k
types[pos] = kwargs[k]
has_nonpos = True
self._is_nonpos = has_nonpos
gtk.ListStore.__init__(self, *types)
self._types = types
self._num_columns = len(keys)
self._column_names = keys
self._name_to_index = {}
self._is_nonpos = has_nonpos
for i in range(0,self._num_columns):
self._name_to_index[self._column_names[i]] = i
self._initialized = True
def append(self,*args,**kwargs):
if len(args) == 0 and len(kwargs) == 0:
iter = gtk.ListStore.append(self)
return _PListIter(self,iter)
elif len(args) == self._num_columns:
if self._is_nonpos:
raise Exception("Must use kwargs for append.")
iter = gtk.ListStore.append(self)
for i in range(0,self._num_columns):
self.set(iter,i,args[i])
return _PListIter(self,iter)
elif len(kwargs) == self._num_columns:
  iter = gtk.ListStore.append(self)
  for k in kwargs:
    i = self._name_to_index[k]
    self.set(iter,i,kwargs[k])
  return _PListIter(self,iter)
else:
  # Previously fell through and returned None silently on a count mismatch.
  raise Exception("append expects no values or exactly %d values" % self._num_columns)
def __len__(self):
return self.iter_n_children(None)
def __getitem__(self,idx):
if type(idx) == int:
return _PListIter(self,self.iter_nth_child(None,idx))
elif type(idx) == gtk.TreeIter:
return _PListIter(self,idx)
else:
raise Exception("Must be int or iter, got %s" % type(idx))
def __iter__(self):
for i in range(len(self)):
yield self[i]
def __getattr__(self,k):
if self.__dict__.has_key(k):
return self.__dict__[k]
else:
i = self._name_to_index[k]
return i
def find(self,pred):
for i in range(len(self)):
d = self[i]
if pred(d):
return d
def remove(self,iter):
if type(iter) != _PListIter:
raise Exception("Expected plistiter")
gtk.ListStore.remove(self,iter._iter)
class PListView(gtk.TreeView):
def __init__(self, pls, **kwargs):
self._pls = pls
gtk.TreeView.__init__(self, pls)
poslogic = False
neglogic = False
for k in kwargs:
if kwargs[k] == True:
poslogic = True
elif kwargs[k] == False:
neglogic = True
else:
raise Exception("Values must be true or false")
if poslogic and neglogic:
raise Exception("Make the args true or false but don't mix them")
if not poslogic and not neglogic:
raise Exception("Must pass one column to enable or disable")
cols = []
if poslogic:
cols = list(kwargs.keys())
else: # neglogic
cols=list(pls._name_to_index.keys())
for k in kwargs.keys():
cols.remove(k)
# create views
txtCell = gtk.CellRendererText()
pixCell = gtk.CellRendererPixbuf()
for c in cols:
i = pls._name_to_index[c]
t = pls._types[i]
if t == str:
col = gtk.TreeViewColumn(c, txtCell, text=i)
elif t == gtk.gdk.Pixbuf:
col = gtk.TreeViewColumn(c, pixCell, pixbuf=i)
else:
raise Exception("Don't understand what to do with %s" % t)
self.append_column(col)
def get_selected(self):
sel = self.get_selection()
m,iter = sel.get_selected()
if iter:
return _PListIter(self._pls,iter)
else:
return None
def set_selected(self,iter):
if iter == None:
  self.get_selection().unselect_all()
  return
if type(iter) != _PListIter:
raise Exception("Expected plistiter")
sel = self.get_selection()
sel.select_iter(iter._iter)
if __name__ == "__main__":
w = gtk.Window()
ls = PListStore(Name_0 = str, Description_1 = str, Key_2 = object)
print "expect 0 got %s" % ls.Name
print "expect 1 got %s" % ls.Description
print "expect 2 got %s" % ls.Key
ls.append("1", "2", "3")
ls.append("4", "5", "6")
r = ls.append()
r.Name = "7"
r.Description = "8"
r.Key = "9"
print "expect 3 got %s" % len(ls)
print "expect 1 got %s" % ls[0].Name
print "expect 2 got %s" % ls[0].Description
print "expect 3 got %s" % ls[0].Key
print "expect 5 got %s" % ls[1].Description
ls[0].Key = "**3**"
print "expect **3** got %s" % ls[0].Key
tv = PListView(ls, Key = False)
w.add(tv)
w.show_all()
gtk.main()
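The heart of `PListStore.__init__` is the keyword parsing: a trailing `_<n>` suffix pins a column to position n, otherwise columns are taken in declaration order. A simplified standalone sketch of just that parsing (it omits the mixed-style error checking the class performs, and does not need gtk):

```python
import re

def parse_columns(**kwargs):
    # Pre-size the slots, then fill them either by explicit _<n> position
    # or by encounter order, as PListStore.__init__ does.
    keys = [None] * len(kwargs)
    types = [None] * len(kwargs)
    i = 0
    for k in kwargs:
        m = re.match(r"(.+)_(\d+)", k)
        if m:
            pos = int(m.group(2))
            keys[pos] = m.group(1)
            types[pos] = kwargs[k]
        else:
            keys[i] = k
            types[i] = kwargs[k]
            i += 1
    return keys, types

keys, types = parse_columns(Name_0=str, Key_2=object, Description_1=str)
print(keys)   # ['Name', 'Description', 'Key']
```

This is why the `__main__` block above can declare `Name_0`, `Description_1`, `Key_2` in any order and still get columns 0, 1, 2.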
| 28.431718 | 89 | 0.597614 | 905 | 6,454 | 4.058564 | 0.21768 | 0.023959 | 0.023959 | 0.014974 | 0.237136 | 0.183229 | 0.11462 | 0.086033 | 0.046828 | 0.030493 | 0 | 0.009776 | 0.286799 | 6,454 | 226 | 90 | 28.557522 | 0.788182 | 0.088472 | 0 | 0.258242 | 0 | 0 | 0.089484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.005495 | 0.010989 | null | null | 0.049451 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ad6dc8f5ec7abbc123e658c8a53c68b664ad7c1a | 271 | py | Python | modules/04/examples/dollar.py | edsu/inst126 | a14f2c6901759f87b1f199f79ed1b8a5c03c688d | [
"CC-BY-4.0"
] | 2 | 2019-08-07T07:49:09.000Z | 2019-08-24T02:07:39.000Z | modules/04/examples/dollar.py | edsu/inst126 | a14f2c6901759f87b1f199f79ed1b8a5c03c688d | [
"CC-BY-4.0"
] | 2 | 2020-07-18T02:43:50.000Z | 2022-02-10T19:04:57.000Z | modules/04/examples/dollar.py | edsu/inst126 | a14f2c6901759f87b1f199f79ed1b8a5c03c688d | [
"CC-BY-4.0"
] | null | null | null | # jaylin
hours = float(input("Enter hours worked: "))
rate = float(input("Enter hourly rate: "))
if (rate >= 15):
pay=(hours* rate)
print("Pay: $", pay)
else:
print("I'm sorry " + str(rate) + " is lower than the minimum wage!")
| 20.846154 | 76 | 0.535055 | 35 | 271 | 4.142857 | 0.657143 | 0.137931 | 0.206897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010471 | 0.295203 | 271 | 12 | 77 | 22.583333 | 0.748691 | 0.02214 | 0 | 0 | 0 | 0 | 0.346614 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ad7bd9466582c413f0454448c47639038f5336ef | 469 | py | Python | Python/Difference of times/main.py | drtierney/hyperskill-problems | b74da993f0ac7bcff1cbd5d89a3a1b06b05f33e0 | [
"MIT"
] | 5 | 2020-08-29T15:15:31.000Z | 2022-03-01T18:22:34.000Z | Python/Difference of times/main.py | drtierney/hyperskill-problems | b74da993f0ac7bcff1cbd5d89a3a1b06b05f33e0 | [
"MIT"
] | null | null | null | Python/Difference of times/main.py | drtierney/hyperskill-problems | b74da993f0ac7bcff1cbd5d89a3a1b06b05f33e0 | [
"MIT"
] | 1 | 2020-12-02T11:13:14.000Z | 2020-12-02T11:13:14.000Z | # put your python code here
def event_time(hours, minutes, seconds):
return (hours * 3600) + (minutes * 60) + seconds
def time_difference(a, b):
return abs(a - b)
hours_1 = int(input())
minutes_1 = int(input())
seconds_1 = int(input())
hours_2 = int(input())
minutes_2 = int(input())
seconds_2 = int(input())
event_1 = event_time(hours_1, minutes_1, seconds_1)
event_2 = event_time(hours_2, minutes_2, seconds_2)
print(time_difference(event_1, event_2))
| 21.318182 | 52 | 0.705757 | 77 | 469 | 4.025974 | 0.298701 | 0.154839 | 0.135484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055416 | 0.153518 | 469 | 21 | 53 | 22.333333 | 0.725441 | 0.053305 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0 | 0.153846 | 0.307692 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
ad82be8113236ba6ef1a019056b5a21f96562145 | 589 | py | Python | setup.py | Shadofer/dogey | 1d9f1b82aa7ecfe6d9776feb03364ef9eb00bd63 | [
"MIT"
] | 3 | 2021-05-18T09:46:30.000Z | 2022-03-26T14:23:24.000Z | setup.py | Shadofer/dogey | 1d9f1b82aa7ecfe6d9776feb03364ef9eb00bd63 | [
"MIT"
] | null | null | null | setup.py | Shadofer/dogey | 1d9f1b82aa7ecfe6d9776feb03364ef9eb00bd63 | [
"MIT"
] | null | null | null | from setuptools import setup
with open('README.md', 'r') as f:
long_description = f.read()
setup(
name = 'dogey',
version = '0.1',
description = 'A pythonic dogehouse API.',
long_description = long_description,
long_description_content_type = 'text/markdown',
author = 'Shadofer#7312',
author_email = 'shadowrlrs@gmail.com',
python_requires = '>=3.8.0',
url = 'https://github.com/Shadofer/dogey',
packages = ['dogey'],
install_requires = ['websockets'],
extras_require = {
'sound': ['pymediasoup']
},
license = 'MIT'
)
| 25.608696 | 52 | 0.626486 | 66 | 589 | 5.439394 | 0.757576 | 0.167131 | 0.10585 | 0.167131 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019565 | 0.219015 | 589 | 22 | 53 | 26.772727 | 0.76087 | 0 | 0 | 0 | 0 | 0 | 0.27674 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ad8464a5e90866608323322de1a9bc098cc0a1d3 | 476 | py | Python | settings/testing.py | skylifewww/artdelo | 55d235a59d8a3abdf0f904336c1c75a2be903699 | [
"MIT"
] | null | null | null | settings/testing.py | skylifewww/artdelo | 55d235a59d8a3abdf0f904336c1c75a2be903699 | [
"MIT"
] | null | null | null | settings/testing.py | skylifewww/artdelo | 55d235a59d8a3abdf0f904336c1c75a2be903699 | [
"MIT"
] | null | null | null | ALLOWED_HOSTS = ['testserver']
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': ':memory:'
}
}
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'iosDevCourse'
},
'local': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'iosDevCourse'
}
}
| 22.666667 | 67 | 0.596639 | 41 | 476 | 6.878049 | 0.609756 | 0.138298 | 0.180851 | 0.156028 | 0.475177 | 0.475177 | 0.475177 | 0.475177 | 0.475177 | 0 | 0 | 0.002725 | 0.228992 | 476 | 20 | 68 | 23.8 | 0.765668 | 0 | 0 | 0.333333 | 0 | 0 | 0.552521 | 0.340336 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ad8619a24bcb752efa61539552ec1f87e1e97167 | 8,140 | py | Python | h1/models/billing.py | hyperonecom/h1-client-python | 4ce355852ba3120ec1b8f509ab5894a5c08da730 | [
"MIT"
] | null | null | null | h1/models/billing.py | hyperonecom/h1-client-python | 4ce355852ba3120ec1b8f509ab5894a5c08da730 | [
"MIT"
] | null | null | null | h1/models/billing.py | hyperonecom/h1-client-python | 4ce355852ba3120ec1b8f509ab5894a5c08da730 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
HyperOne
HyperOne API # noqa: E501
The version of the OpenAPI document: 0.1.0
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
from h1.configuration import Configuration
class Billing(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'id': 'str',
'period': 'str',
'price': 'float',
'quantity': 'float',
'project': 'str',
'one_time': 'bool',
'service': 'BillingService',
'resource': 'BillingResource',
'charges': 'list[BillingCharges]'
}
attribute_map = {
'id': 'id',
'period': 'period',
'price': 'price',
'quantity': 'quantity',
'project': 'project',
'one_time': 'oneTime',
'service': 'service',
'resource': 'resource',
'charges': 'charges'
}
def __init__(self, id=None, period=None, price=None, quantity=None, project=None, one_time=None, service=None, resource=None, charges=None, local_vars_configuration=None): # noqa: E501
"""Billing - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._id = None
self._period = None
self._price = None
self._quantity = None
self._project = None
self._one_time = None
self._service = None
self._resource = None
self._charges = None
self.discriminator = None
if id is not None:
self.id = id
if period is not None:
self.period = period
if price is not None:
self.price = price
if quantity is not None:
self.quantity = quantity
if project is not None:
self.project = project
if one_time is not None:
self.one_time = one_time
if service is not None:
self.service = service
if resource is not None:
self.resource = resource
if charges is not None:
self.charges = charges
@property
def id(self):
"""Gets the id of this Billing. # noqa: E501
:return: The id of this Billing. # noqa: E501
:rtype: str
"""
return self._id
@id.setter
def id(self, id):
"""Sets the id of this Billing.
:param id: The id of this Billing. # noqa: E501
:type: str
"""
self._id = id
@property
def period(self):
"""Gets the period of this Billing. # noqa: E501
:return: The period of this Billing. # noqa: E501
:rtype: str
"""
return self._period
@period.setter
def period(self, period):
"""Sets the period of this Billing.
:param period: The period of this Billing. # noqa: E501
:type: str
"""
self._period = period
@property
def price(self):
"""Gets the price of this Billing. # noqa: E501
:return: The price of this Billing. # noqa: E501
:rtype: float
"""
return self._price
@price.setter
def price(self, price):
"""Sets the price of this Billing.
:param price: The price of this Billing. # noqa: E501
:type: float
"""
self._price = price
@property
def quantity(self):
"""Gets the quantity of this Billing. # noqa: E501
:return: The quantity of this Billing. # noqa: E501
:rtype: float
"""
return self._quantity
@quantity.setter
def quantity(self, quantity):
"""Sets the quantity of this Billing.
:param quantity: The quantity of this Billing. # noqa: E501
:type: float
"""
self._quantity = quantity
@property
def project(self):
"""Gets the project of this Billing. # noqa: E501
:return: The project of this Billing. # noqa: E501
:rtype: str
"""
return self._project
@project.setter
def project(self, project):
"""Sets the project of this Billing.
:param project: The project of this Billing. # noqa: E501
:type: str
"""
self._project = project
@property
def one_time(self):
"""Gets the one_time of this Billing. # noqa: E501
:return: The one_time of this Billing. # noqa: E501
:rtype: bool
"""
return self._one_time
@one_time.setter
def one_time(self, one_time):
"""Sets the one_time of this Billing.
:param one_time: The one_time of this Billing. # noqa: E501
:type: bool
"""
self._one_time = one_time
@property
def service(self):
"""Gets the service of this Billing. # noqa: E501
:return: The service of this Billing. # noqa: E501
:rtype: BillingService
"""
return self._service
@service.setter
def service(self, service):
"""Sets the service of this Billing.
:param service: The service of this Billing. # noqa: E501
:type: BillingService
"""
self._service = service
@property
def resource(self):
"""Gets the resource of this Billing. # noqa: E501
:return: The resource of this Billing. # noqa: E501
:rtype: BillingResource
"""
return self._resource
@resource.setter
def resource(self, resource):
"""Sets the resource of this Billing.
:param resource: The resource of this Billing. # noqa: E501
:type: BillingResource
"""
self._resource = resource
@property
def charges(self):
"""Gets the charges of this Billing. # noqa: E501
:return: The charges of this Billing. # noqa: E501
:rtype: list[BillingCharges]
"""
return self._charges
@charges.setter
def charges(self, charges):
"""Sets the charges of this Billing.
:param charges: The charges of this Billing. # noqa: E501
:type: list[BillingCharges]
"""
self._charges = charges
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, Billing):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, Billing):
return True
return self.to_dict() != other.to_dict()
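The generated `to_dict` walks nested models, lists, and dicts recursively, calling `to_dict` on anything that exposes it. A simplified standalone sketch of that recursion (no gtk, `six`, or generated base classes; the `Leaf`/`Holder` names are made up for illustration):

```python
class Leaf:
    """Stand-in for a nested generated model such as BillingCharges."""
    def __init__(self, name):
        self.name = name
    def to_dict(self):
        return {'name': self.name}

def to_dict(obj, fields):
    # Mirrors Billing.to_dict: lists are mapped element-wise, nested
    # models are serialized via their own to_dict, plain values pass through.
    result = {}
    for attr in fields:
        value = getattr(obj, attr)
        if isinstance(value, list):
            result[attr] = [x.to_dict() if hasattr(x, 'to_dict') else x
                            for x in value]
        elif hasattr(value, 'to_dict'):
            result[attr] = value.to_dict()
        else:
            result[attr] = value
    return result

class Holder:
    pass

h = Holder()
h.price = 9.5
h.charges = [Leaf('cpu'), Leaf('ram')]
print(to_dict(h, ['price', 'charges']))
# {'price': 9.5, 'charges': [{'name': 'cpu'}, {'name': 'ram'}]}
```

Because `__eq__` compares `to_dict()` output, two `Billing` instances with the same field values compare equal even though they are distinct objects.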
# === file: consumers/venv/lib/python3.7/site-packages/faust/cli/faust.py | repo: spencerpomme/Public-Transit-Status-with-Apache-Kafka | license: MIT ===
"""Program ``faust`` (umbrella command)."""
# Note: The command options above are defined in .cli.base.builtin_options
from .agents import agents
from .base import call_command, cli
from .clean_versions import clean_versions
from .completion import completion
from .livecheck import livecheck
from .model import model
from .models import models
from .reset import reset
from .send import send
from .tables import tables
from .worker import worker
__all__ = [
'agents',
'call_command',
'clean_versions',
'cli',
'completion',
'livecheck',
'model',
'models',
'reset',
'send',
'tables',
'worker',
]
# === file: combine-json.py | repo: efficient/catbench | license: Apache-2.0 | stars: 10 ===
#!/usr/bin/python
import argparse;
import os;
import sys;
import json;
def setup_optparse():
parser = argparse.ArgumentParser();
parser.add_argument('--input', '-i', dest='file1',
help='json to append to');
parser.add_argument('--append', '-a', nargs='+', dest='files2',
help='json(s) to be appended to --input.');
parser.add_argument('--suffix', '-s', dest='suffix', default="",
help='Suffix to attach to series from the second file');
parser.add_argument('--outfile', '-o', dest='outfile',
help='Output json. Note that if -i and -o are the same, -i will be overwritten.');
parser.add_argument('--norm', '-n', dest='norm', default="",
help='Norm to normalize all (other) series against');
parser.add_argument('--norm-suffix', dest='norm_suffix', default="",
help='Suffix to add to normalized series');
parser.add_argument('--norm-x', dest='norm_x', default="",
help='Do not normalize these values');
parser.add_argument('--series', '-d', nargs='+', dest='series', default=[],
help='Only copy specified data series (still applies suffix). Note that if suffix is empty, a replacement will be done.')
parser.add_argument('--baseline-contention', '-b', dest='baselinecontention', action='store_true', default=False,
help='Only copy baseline and contention (leave suffix blank for best results). Overrides -d switch!');
parser.add_argument('--median', '-m', dest='median', default=None,
help='Select each point in the specified --series from a group of --append files based on the median of the specified field. ' +
'Using this with suffix is untested, and probably not a good idea, and your data files should probably all have the same domain...' +
'Normalization is right out.');
args = parser.parse_args();
if args.median:
if not isinstance(args.files2, list):
sys.stderr.write('ERROR: Use of --median requires more than one file to --append');
sys.exit(1);
else:
if not isinstance(args.files2, list):
args.files2 = [args.files2];
elif len(args.files2) != 1:
sys.stderr.write('ERROR: I don\'t know what to do with more than one --append file');
sys.exit(1);
if args.baselinecontention:
args.series = ["baseline", "contention"];
return args.file1, args.files2, args.suffix, args.outfile, args.norm, args.norm_suffix, args.norm_x, set(args.series), args.median;
constant_keys=("cache_ways", "mite_tput_limit", "zipf_alpha");
def verify(file1, file2):
fd1 = open(file1, 'r');
fd2 = open(file2, 'r');
json1 = json.load(fd1);
json2 = json.load(fd2);
data1 = json1.get("data");
data2 = json2.get("data");
found_violation = {};
for key in data1.keys():
for entry in data1[key]["samples"]:
for const_key in constant_keys:
if(const_key not in entry):
continue;
for entry2 in data2["baseline"]["samples"]:
if(const_key not in entry2):
continue;
print(entry2[const_key] + " = " + entry[const_key]);
if(entry2[const_key] != entry[const_key]):
found_violation[const_key] = True;
for key in found_violation.keys():
if(found_violation[key]):
print("Warning, variable " + key + " mismatch between baseline file and experiment file");
def combine(file1, files2, suffix, outfile, norm, norm_suffix, norm_x, series, median):
fd1 = open(file1, 'r');
fds2 = [open(each, 'r') for each in files2];
json1 = json.load(fd1);
jsons2 = [json.load(each) for each in fds2];
data1 = json1.get("data");
datas2 = [each.get("data") for each in jsons2];
if median:
alldat = [data1] + datas2;
if not len(series):
series = data1.keys();
for group in series:
samps = [each[group]['samples'] for each in alldat];
res = samps[0];
if len(samps) != len(alldat):
                sys.stderr.write('ERROR: Couldn\'t find series \'' + group + '\' in all files\n');
                sys.exit(1);
nsamps = len(res);
if filter(lambda elm: len(elm) != nsamps, samps):
                sys.stderr.write('ERROR: Not all input files have the same number of elements in \'' + group + '\'\n');
                sys.exit(1);
for idx in range(nsamps):
order = sorted([each[idx] for each in samps], key=lambda elm: elm[median]);
res[idx] = order[len(order) / 2];
print('Chose ' + group + '.samples[' + str(idx) + '].' + median + ' as \'' + str(res[idx][median]) + '\' out of: ' + str([each[median] for each in order]));
else:
data2 = datas2[0];
for key in data2.keys():
if(len(series) and key not in series):
continue;
new_key = key + suffix;
if(new_key in data1):
print("Warning, overwriting " + new_key + " in " + file1);
data1[new_key] = data2[key];
data1[new_key]["description"] = data1[new_key]["description"] + suffix;
if(norm != ""):
for key in data2.keys():
if(key == norm):
continue;
new_key = key + suffix + norm_suffix
index = 0;
while(index < len(data2[key]["samples"])):
sample = data2[key]["samples"][index];
base_sample = data2[norm]["samples"][index];
for ylabel in sample:
if(base_sample[ylabel] != 0 and ylabel != norm_x):
data2[key]["samples"][index][ylabel] = sample[ylabel] / base_sample[ylabel];
index += 1
data1[new_key] = data2[key];
data1[new_key]["description"] = data1[new_key]["description"] + suffix + " normalized to " + norm;
fd1.close();
for each in fds2:
each.close();
outfd = open(outfile, 'w');
json.dump(json1, outfd, indent=4, sort_keys=True);
def main():
file1, files2, suffix, outfile, norm, norm_suffix, norm_x, series, median = setup_optparse();
#if(baselinecontention == True):
# verify(file1, file2);
if((norm == "" and norm_suffix == "" and norm_x == "") or (norm != "" and norm_suffix != "" and norm_x != "")):
combine(file1, files2, suffix, outfile, norm, norm_suffix, norm_x, series, median);
else:
print("Missing one of: --norm, --norm-suffix, --norm-x\n");
main();
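The `--median` branch above picks, for each sample index, the run whose value of the chosen field is the median across all `--append` files. The script itself is Python 2; this is a standalone Python 3 sketch of that per-index selection (names are illustrative):

```python
def median_samples(runs, field):
    """For each index, pick the sample whose `field` value is the median
    across all runs. All runs must share the same length (same domain)."""
    nsamps = len(runs[0])
    if any(len(r) != nsamps for r in runs):
        raise ValueError("all runs must have the same number of samples")
    result = []
    for idx in range(nsamps):
        ordered = sorted((r[idx] for r in runs), key=lambda s: s[field])
        # middle element; upper median when the run count is even
        result.append(ordered[len(ordered) // 2])
    return result

runs = [
    [{"x": 0, "tput": 10}, {"x": 1, "tput": 30}],
    [{"x": 0, "tput": 20}, {"x": 1, "tput": 10}],
    [{"x": 0, "tput": 15}, {"x": 1, "tput": 20}],
]
print(median_samples(runs, "tput"))  # [{'x': 0, 'tput': 15}, {'x': 1, 'tput': 20}]
```

Selecting a whole sample dict (rather than averaging the field) keeps each output point internally consistent: all of its metrics come from the same run.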
# === file: following.py | repo: yoshualukash/insta-crawler | license: MIT ===
# Get instance
import instaloader
import json
L = instaloader.Instaloader(max_connection_attempts=0)
# Login or load session
username = ''
password = ''
L.login(username, password) # (login)
# Obtain profile metadata
instagram_target = ''
profile = instaloader.Profile.from_username(L.context, instagram_target)
following_list = []
count=1
for followee in profile.get_followees():
username = followee.username
following_list.append(username)
print(str(count) + ". " + username)
count = count + 1
following_list_json = json.dumps(following_list)
with open("list_following_" + instagram_target + ".json", "w") as f:
    f.write(following_list_json)
print("done")
print("check the json file: list_following_" + instagram_target + ".json")
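Dumping `following_list` once at the end loses all progress if the crawl is interrupted part-way. An append-as-you-go JSON Lines variant (plain Python; the helper names and file path are hypothetical) keeps partial results:

```python
import json

def append_jsonl(path, record):
    """Append one record per line; a crash only loses the in-flight record."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def read_jsonl(path):
    """Read back every record written so far."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

# usage sketch: call append_jsonl("list_following.jsonl", {"username": username})
# inside the followee loop instead of building following_list in memory and
# dumping it once at the end
```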
# === file: PiCN/Layers/PacketEncodingLayer/BasicPacketEncodingLayer.py | repo: NikolaiRutz/PiCN | license: BSD-3-Clause ===
""" De- and Encoding Layer, using a predefined Encoder """
import multiprocessing
from PiCN.Layers.PacketEncodingLayer.Encoder import BasicEncoder
from PiCN.Processes import LayerProcess
class BasicPacketEncodingLayer(LayerProcess):
""" De- and Encoding Layer, using a predefined Encoder """
def __init__(self, encoder: BasicEncoder=None, log_level=255):
LayerProcess.__init__(self, logger_name="PktEncLayer", log_level=log_level)
self._encoder: BasicEncoder = encoder
@property
def encoder(self):
return self._encoder
@encoder.setter
def encoder(self, encoder):
self._encoder = encoder
def data_from_higher(self, to_lower: multiprocessing.Queue, to_higher: multiprocessing.Queue, data):
face_id, packet = self.check_data(data)
        if face_id is None or packet is None:
return
self.logger.info("Packet from higher, Faceid: " + str(face_id) + ", Name: " + str(packet.name))
encoded_packet = self.encode(packet)
if encoded_packet is None:
self.logger.info("Dropping Packet since None")
return
to_lower.put([face_id, encoded_packet])
def data_from_lower(self, to_lower: multiprocessing.Queue, to_higher: multiprocessing.Queue, data):
face_id, packet = self.check_data(data)
        if face_id is None or packet is None:
return
decoded_packet = self.decode(packet)
if decoded_packet is None:
self.logger.info("Dropping Packet since None")
return
self.logger.info("Packet from lower, Faceid: " + str(face_id) + ", Name: " + str(decoded_packet.name))
to_higher.put([face_id, decoded_packet])
def encode(self, data):
self.logger.info("Encode packet")
return self._encoder.encode(data)
def decode(self, data):
self.logger.info("Decode packet")
return self._encoder.decode(data)
def check_data(self, data):
"""check if data from queue match the requirements"""
if len(data) != 2:
self.logger.warning("PacketEncoding Layer expects queue elements to have size 2")
return (None, None)
if type(data[0]) != int:
self.logger.warning("PacketEncoding Layer expects first element to be a faceid (int)")
return (None, None)
#TODO test if data[1] has type packet or bin data? howto?
return data[0], data[1]
| 38.888889 | 110 | 0.655102 | 308 | 2,450 | 5.064935 | 0.25 | 0.057692 | 0.053846 | 0.023077 | 0.415385 | 0.387179 | 0.303846 | 0.266667 | 0.214103 | 0.214103 | 0 | 0.004857 | 0.243673 | 2,450 | 62 | 111 | 39.516129 | 0.837021 | 0.084898 | 0 | 0.212766 | 0 | 0 | 0.126349 | 0 | 0 | 0 | 0 | 0.016129 | 0 | 1 | 0.170213 | false | 0 | 0.06383 | 0.021277 | 0.468085 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# === file: tools/launcher.py | repo: agentx-cgn/Hannibal | license: JasPer-2.0 | stars: 189 ===
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
https://docs.python.org/2/library/subprocess.html#popen-objects
http://stackoverflow.com/questions/1606795/catching-stdout-in-realtime-from-subprocess
http://askubuntu.com/questions/458041/find-x-window-name
http://stackoverflow.com/questions/9681959/how-can-i-use-xdotool-from-within-a-python-module-script
http://manpages.ubuntu.com/manpages/trusty/en/man1/avconv.1.html
http://stackoverflow.com/questions/287871/print-in-terminal-with-colors-using-python
xwininfo gives window info: xwininfo: Window id: 0x2800010 "0 A.D."
xdotool:
sudo apt-get install libx11-dev libxtst-dev libXinerama-dev
make
make install
https://github.com/nullkey/glc/wiki/Capture
glc-capture --start --fps=30 --resize=1.0 --disable-audio --out=pyro.glc ./launcher.py
glc-play pyro.glc -o - -y 1 | avconv -i - -an -y pyro.mp4
avconv -i pyro.mp4 -codec copy -ss 15 -y pyro01.mp4
qt-faststart pyro01.mp4 pyro02.mp4
mplayer pyro02.mp4
'''
VERSION = "0.2.0"
import os, sys, subprocess, time, json
from time import sleep
sys.dont_write_bytecode = True
## maps etc.
from data import data
bcolors = {
"Bold": "\033[1m",
"Header" : "\033[95m",
"LBlue" : "\033[94m", ## light blue
"DBlue" : "\033[34m", ## dark blue
"OKGreen" : "\033[32m", ## dark Green
"Green" : "\033[92m", ## light green
"Warn" : "\033[33m", ## orange
"Fail" : "\033[91m",
"End" : "\033[0m",
# orange='\033[33m'
}
def printc(color, text) :
print (bcolors[color] + text + bcolors["End"])
def stdc(color, text) :
sys.stdout.write (bcolors[color] + text + bcolors["End"])
folders = {
"pro" : "/home/noiv/Desktop/0ad", ## project
"rel" : "/usr/games/0ad", ## release
"trunk" : "/Daten/Projects/Osiris/ps/trunk", ## svn
"share" : "/home/noiv/.local/share", ## user mod
}
## the game binary
locations = {
"rel" : folders["rel"], ## release
"svn" : folders["trunk"] + "/binaries/system/pyrogenesis", ## svn
"hbl" : folders["share"] + "/0ad/mods/hannibal/simulation/ai/hannibal/", ## bot folder
"deb" : folders["share"] + "/0ad/mods/hannibal/simulation/ai/hannibal/_debug.js", ## bot folder
"log" : folders["pro"] + "/last.log", ## log file
"ana" : folders["pro"] + "/analysis/", ## analysis csv file
}
## Hannibal log/debug options + data, readable by JS and Python
DEBUG = {
## default map
"map": "scenarios/Arcadia 02",
## counter
"counter": [],
## num: 0=no numerus
## xdo: move window, sim speed
## fil can use files
## log: 0=silent, 1+=errors, 2+=warnings, 3+=info, 4=all
## col: log colors
## sup: suppress, bot does not intialize (saves startup time)
## tst: activate tester
"bots": {
"0" : {"num": 0, "xdo": 0, "fil": 0, "log": 4, "sup": 1, "tst": 0, "col": "" },
"1" : {"num": 1, "xdo": 1, "fil": 1, "log": 4, "sup": 0, "tst": 1, "col": "" },
"2" : {"num": 0, "xdo": 0, "fil": 0, "log": 3, "sup": 0, "tst": 1, "col": "" },
"3" : {"num": 0, "xdo": 0, "fil": 0, "log": 2, "sup": 1, "tst": 0, "col": "" },
"4" : {"num": 0, "xdo": 0, "fil": 0, "log": 2, "sup": 1, "tst": 0, "col": "" },
"5" : {"num": 0, "xdo": 0, "fil": 0, "log": 2, "sup": 1, "tst": 0, "col": "" },
"6" : {"num": 0, "xdo": 0, "fil": 0, "log": 2, "sup": 1, "tst": 0, "col": "" },
"7" : {"num": 0, "xdo": 0, "fil": 0, "log": 2, "sup": 1, "tst": 0, "col": "" },
"8" : {"num": 0, "xdo": 0, "fil": 0, "log": 2, "sup": 1, "tst": 0, "col": "" },
}
}
## keep track of open file handles
files = {}
## civs to choose from at start
civs = [
"athen",
"brit",
"cart",
"celt",
"gaul",
"hele",
"iber",
"mace",
"maur",
"pers",
"ptol",
"rome",
"sele",
"spart",
]
def buildCmd(typ="rel", map="Arcadia 02", bots=2) :
## see /ps/trunk/binaries/system/readme.txt
cmd = [
locations[typ],
"-quickstart", ## load faster (disables audio and some system info logging)
"-autostart=" + map, ## enables autostart and sets MAPNAME; TYPEDIR is skirmishes, scenarios, or random
"-mod=public", ## start the game using NAME mod
"-mod=charts",
"-mod=hannibal",
"-autostart-seed=0", ## sets random map SEED value (default 0, use -1 for random)
"-autostart-size=192", ## sets random map size in TILES (default 192)
# "-autostart-players=2", ## sets NUMBER of players on random map (default 2)
# "-autostart-ai=1:hannibal",
# "-autostart-civ=1:athen", ## sets PLAYER's civilisation to CIV (skirmish and random maps only)
# "-autostart-ai=2:hannibal", ## sets the AI for PLAYER (e.g. 2:petra)
# "-autostart-civ=2:cart", ## sets PLAYER's civilisation to CIV (skirmish and random maps only)
]
## svn does not autoload /user
if typ == "svn" : cmd.append("-mod=user")
## set # of players
cmd.append("-autostart-players=" + str(bots))
## add bots with civ
for bot in range(1, bots +1) :
cmd.append("-autostart-ai=" + str(bot) + ":hannibal")
cmd.append("-autostart-civ=" + str(bot) + ":" + civs[bot -1])
return cmd
def findWindow(title) :
process = subprocess.Popen("xdotool search --name '%s'" % (title), stdout=subprocess.PIPE, shell="FALSE")
windowid = process.stdout.readlines()[0].strip()
process.stdout.close()
return windowid
def xdotool(command) :
subprocess.call(("xdotool %s" % command).split(" "))
def cleanup() :
for k, v in files.iteritems() : v.close()
def writeDEBUG():
fTest = open(locations["deb"], 'w')
fTest.truncate()
fTest.write("var HANNIBAL_DEBUG = " + json.dumps(DEBUG, indent=2) + ";")
fTest.close()
def killDEBUG():
fTest = open(locations["deb"], 'w')
fTest.truncate()
fTest.close()
def processMaps():
proc0AD = None
DEBUG["OnUpdate"] = "print('#! terminate');"
for mp in data["testMaps"] :
DEBUG["map"] = mp
writeDEBUG()
        cmd0AD = [locations["rel"], "-quickstart", "-autostart=" + mp, "-mod=public", "-mod=hannibal", "-autostart-ai=1:hannibal"]
proc0AD = subprocess.Popen(cmd0AD, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
print " > " + " ".join(cmd0AD)
try:
for line in iter(proc0AD.stdout.readline, b'') :
sline = line.strip()
if sline.startswith("#! terminate") :
proc0AD.terminate()
sleep(2)
if proc0AD : proc0AD.wait()
if proc0AD : proc0AD.kill()
break
else :
pass
# sys.stdout.write(line)
except KeyboardInterrupt, e :
if proc0AD : proc0AD.terminate()
break
print "done."
def launch(typ="rel", map="Arcadia 02", bots=2):
winX = 1520; winY = 20
doWrite = False
curFileNum = None
idWindow = None
proc0AD = None
def terminate() :
if proc0AD : proc0AD.terminate()
files["log"] = open(locations["log"], 'w')
files["log"].truncate()
DEBUG['map'] = map
writeDEBUG()
cmd0AD = buildCmd(typ, map, bots)
print (" cmd: %s" % " ".join(cmd0AD));
proc0AD = subprocess.Popen(cmd0AD, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
try:
for line in iter(proc0AD.stdout.readline, b'') :
## line has everything
## sline is stripped
## bline is active bot line after colon
sline = line.strip() ## removes nl and wp
bline = ""
id = 0
bot = DEBUG["bots"]["0"]
## detect bot id
if len(sline) >= 2 and sline[1:3] == "::" :
id = sline[0]
bot = DEBUG["bots"][id]
bline = "" if bot["log"] == 0 else sline[3:]
files["log"].write(line)
## terminate everything
if sline.startswith("#! terminate") :
if bot["xdo"] :
print(sline)
terminate()
return
## clear console
elif bline.startswith("#! clear") :
print(sline)
sys.stderr.write("\x1b[2J\x1b[H") ## why not ??
## xdo init
elif bot["xdo"] and bline.startswith("#! xdotool init") :
idWindow = findWindow("0 A.D")
printc("DBlue", " xdo: window id: %s" % idWindow)
xdotool("windowmove %s %s %s" % (idWindow, winX, winY))
## xdo command with echo
elif bot["xdo"] and bline.startswith("#! xdotool ") :
params = " ".join(bline.split(" ")[2:])
printc("DBlue", " X11: " + params)
xdotool(params)
## xdo command without echo
elif bot["xdo"] and bline.startswith("## xdotool ") : ## same, no echo
params = " ".join(bline.split(" ")[2:])
xdotool(params)
## xdo command suppress
elif not bot["xdo"] and bline.startswith("## xdotool ") :
pass
## file open
elif bot["fil"] and bline.startswith("#! open ") :
filenum = bline.split(" ")[2]
filename = bline.split(" ")[3]
files[filenum] = open(filename, 'w')
files[filenum].truncate()
## file append
elif bot["fil"] and bline.startswith("#! append ") :
filenum = bline.split(" ")[2]
dataLine = ":".join(bline.split(":")[1:])
files[filenum].write(dataLine + "\n")
## file write
elif bot["fil"] and bline.startswith("#! write ") :
print(bline)
filenum = bline.split(" ")[2]
filename = bline.split(" ")[3]
files[filenum] = open(filename, 'w')
files[filenum].truncate()
curFileNum = filenum
## file close
elif bot["fil"] and bline.startswith("#! close ") :
filenum = bline.split(" ")[2]
files[filenum].close()
print("#! closed %s at %s" % (filenum, os.stat(filename).st_size))
## bot output
elif bot["log"] > 0 and bline :
if bline.startswith("ERROR :") : stdc("Fail", id + "::" + bline + "\n")
elif bline.startswith("WARN :") : stdc("Warn", id + "::" + bline + "\n")
elif bline.startswith("INFO :") : stdc("OKGreen", id + "::" + bline + "\n")
else : sys.stdout.write("" + bline + "\n")
## suppressed bots - no output
elif bot["log"] == 0:
pass
## hannibal or map or 0AD output
elif line :
if line.startswith("ERROR :") : stdc("Fail", line + "\n")
elif line.startswith("WARN :") : stdc("Warn", line + "\n")
elif line.startswith("INFO :") : stdc("OKGreen", line + "\n")
elif line.startswith("TIMER| ") : pass ## suppress 0AD debugs
elif line.startswith("sys_cursor_create:") : pass
elif line.startswith("AL lib:") : pass
elif line.startswith("Sound:") : pass
else :
sys.stdout.write("" + line)
except KeyboardInterrupt, e :
terminate()
if __name__ == '__main__':
args = sys.argv[1:]
if args[0] == "maps" :
print (" processing maps...")
processMaps(args)
else:
typ = args[0] if len(args) > 0 else "rel"
map = args[1] if len(args) > 1 else "Arcadia 02"
bots = args[2] if len(args) > 2 else "2"
launch(typ, map, int(bots))
cleanup()
print ("\nBye\n")
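`writeDEBUG` above shares one config between this Python launcher and the in-game JS bot by serializing it as a `var HANNIBAL_DEBUG = <json>;` assignment, readable from both languages. A standalone round-trip sketch of that bridge (helper names are illustrative):

```python
import json

def to_js_var(name, obj):
    """Serialize a Python dict as a JS global assignment, as writeDEBUG does."""
    return "var " + name + " = " + json.dumps(obj, indent=2) + ";"

def from_js_var(text):
    """Recover the dict from such an assignment on the Python side."""
    payload = text.split("=", 1)[1].rstrip().rstrip(";")
    return json.loads(payload)

src = to_js_var("HANNIBAL_DEBUG", {"map": "scenarios/Arcadia 02", "bots": {"1": {"log": 4}}})
print(src.splitlines()[0])        # var HANNIBAL_DEBUG = {
print(from_js_var(src)["map"])    # scenarios/Arcadia 02
```

Because JSON is a syntactic subset of a JS object literal, the same file loads as plain JavaScript in the bot and parses with `json.loads` here after stripping the `var ... =` prefix and trailing semicolon.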
# === file: sdscli/adapters/hysds/configure.py | repo: sdskit/sdscli | license: Apache-2.0 ===
"""
Configuration for HySDS cluster.
"""
from __future__ import print_function
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from builtins import open
from builtins import str
from future import standard_library
standard_library.install_aliases()
import os
import yaml
import pwd
import shutil
import hashlib
import traceback
from pkg_resources import resource_filename
from glob import glob
from prompt_toolkit.shortcuts import prompt, print_tokens
from prompt_toolkit.styles import style_from_dict
from prompt_toolkit.validation import Validator, ValidationError
from pygments.token import Token
from sdscli.log_utils import logger
from sdscli.conf_utils import get_user_config_path, get_user_files_path, SettingsConf
from sdscli.os_utils import validate_dir
from sdscli.prompt_utils import YesNoValidator
prompt_style = style_from_dict({
Token.Alert: 'bg:#D8060C',
Token.Username: '#D8060C',
Token.Param: '#3CFF33',
})
CFG_TMPL = """# HySDS config
TYPE: hysds
# mozart
MOZART_PVT_IP: {MOZART_PVT_IP}
MOZART_PUB_IP: {MOZART_PUB_IP}
MOZART_FQDN: {MOZART_FQDN}
# mozart rabbitmq
MOZART_RABBIT_PVT_IP: {MOZART_RABBIT_PVT_IP}
MOZART_RABBIT_PUB_IP: {MOZART_RABBIT_PUB_IP}
MOZART_RABBIT_FQDN: {MOZART_RABBIT_FQDN}
MOZART_RABBIT_USER: {MOZART_RABBIT_USER}
MOZART_RABBIT_PASSWORD: {MOZART_RABBIT_PASSWORD}
# mozart redis
MOZART_REDIS_PVT_IP: {MOZART_REDIS_PVT_IP}
MOZART_REDIS_PUB_IP: {MOZART_REDIS_PUB_IP}
MOZART_REDIS_FQDN: {MOZART_REDIS_FQDN}
MOZART_REDIS_PASSWORD: {MOZART_REDIS_PASSWORD}
# mozart ES
MOZART_ES_PVT_IP: {MOZART_ES_PVT_IP}
MOZART_ES_PUB_IP: {MOZART_ES_PUB_IP}
MOZART_ES_FQDN: {MOZART_ES_FQDN}
OPS_USER: {OPS_USER}
OPS_HOME: {OPS_HOME}
OPS_PASSWORD_HASH: {OPS_PASSWORD_HASH}
LDAP_GROUPS: {LDAP_GROUPS}
KEY_FILENAME: {KEY_FILENAME}
JENKINS_USER: {JENKINS_USER}
JENKINS_DIR: {JENKINS_DIR}
# metrics
METRICS_PVT_IP: {METRICS_PVT_IP}
METRICS_PUB_IP: {METRICS_PUB_IP}
METRICS_FQDN: {METRICS_FQDN}
# metrics redis
METRICS_REDIS_PVT_IP: {METRICS_REDIS_PVT_IP}
METRICS_REDIS_PUB_IP: {METRICS_REDIS_PUB_IP}
METRICS_REDIS_FQDN: {METRICS_REDIS_FQDN}
METRICS_REDIS_PASSWORD: {METRICS_REDIS_PASSWORD}
# metrics ES
METRICS_ES_PVT_IP: {METRICS_ES_PVT_IP}
METRICS_ES_PUB_IP: {METRICS_ES_PUB_IP}
METRICS_ES_FQDN: {METRICS_ES_FQDN}
# grq
GRQ_PVT_IP: {GRQ_PVT_IP}
GRQ_PUB_IP: {GRQ_PUB_IP}
GRQ_FQDN: {GRQ_FQDN}
GRQ_PORT: {GRQ_PORT}
# grq ES
GRQ_ES_PVT_IP: {GRQ_ES_PVT_IP}
GRQ_ES_PUB_IP: {GRQ_ES_PUB_IP}
GRQ_ES_FQDN: {GRQ_ES_FQDN}
# factotum
FACTOTUM_PVT_IP: {FACTOTUM_PVT_IP}
FACTOTUM_PUB_IP: {FACTOTUM_PUB_IP}
FACTOTUM_FQDN: {FACTOTUM_FQDN}
# continuous integration server
CI_PVT_IP: {CI_PVT_IP}
CI_PUB_IP: {CI_PUB_IP}
CI_FQDN: {CI_FQDN}
JENKINS_API_USER: {JENKINS_API_USER}
JENKINS_API_KEY: {JENKINS_API_KEY}
# verdi build
VERDI_PVT_IP: {VERDI_PVT_IP}
VERDI_PUB_IP: {VERDI_PUB_IP}
VERDI_FQDN: {VERDI_FQDN}
# other non-autoscale verdi hosts (optional)
OTHER_VERDI_HOSTS:
- VERDI_PVT_IP:
VERDI_PUB_IP:
VERDI_FQDN:
# WebDAV product server
DAV_SERVER: {DAV_SERVER}
DAV_USER: {DAV_USER}
DAV_PASSWORD: {DAV_PASSWORD}
# AWS settings for product bucket
DATASET_AWS_ACCESS_KEY: {DATASET_AWS_ACCESS_KEY}
DATASET_AWS_SECRET_KEY: {DATASET_AWS_SECRET_KEY}
DATASET_AWS_REGION: {DATASET_AWS_REGION}
DATASET_S3_ENDPOINT: {DATASET_S3_ENDPOINT}
DATASET_S3_WEBSITE_ENDPOINT: {DATASET_S3_WEBSITE_ENDPOINT}
DATASET_BUCKET: {DATASET_BUCKET}
# AWS settings for autoscale workers
AWS_ACCESS_KEY: {AWS_ACCESS_KEY}
AWS_SECRET_KEY: {AWS_SECRET_KEY}
AWS_REGION: {AWS_REGION}
S3_ENDPOINT: {S3_ENDPOINT}
CODE_BUCKET: {CODE_BUCKET}
VERDI_PRIMER_IMAGE: {VERDI_PRIMER_IMAGE}
VERDI_TAG: {VERDI_TAG}
VERDI_UID: {VERDI_UID}
VERDI_GID: {VERDI_GID}
VENUE: {VENUE}
QUEUES:
- QUEUE_NAME: dumby-job_worker-small
INSTANCE_TYPES:
- t2.medium
- t3a.medium
- t3.medium
- QUEUE_NAME: dumby-job_worker-large
INSTANCE_TYPES:
- t2.medium
- t3a.medium
- t3.medium
# git oauth token
GIT_OAUTH_TOKEN: {GIT_OAUTH_TOKEN}
# container registry
CONTAINER_REGISTRY: {CONTAINER_REGISTRY}
CONTAINER_REGISTRY_BUCKET: {CONTAINER_REGISTRY_BUCKET}
# DO NOT EDIT ANYTHING BELOW THIS
# user_rules_dataset
PROVES_URL: https://prov-es.jpl.nasa.gov/beta
PROVES_IMPORT_URL: https://prov-es.jpl.nasa.gov/beta/api/v0.1/prov_es/import/json
DATASETS_CFG: {DATASETS_CFG}
# system jobs queue
SYSTEM_JOBS_QUEUE: system-jobs-queue
MOZART_ES_CLUSTER: resource_cluster
METRICS_ES_CLUSTER: metrics_cluster
DATASET_QUERY_INDEX: grq
USER_RULES_DATASET_INDEX: user_rules
"""
CFG_DEFAULTS = {
"mozart": [
["MOZART_PVT_IP", ""],
["MOZART_PUB_IP", ""],
["MOZART_FQDN", ""],
],
"mozart-rabbit": [
["MOZART_RABBIT_PVT_IP", ""],
["MOZART_RABBIT_PUB_IP", ""],
["MOZART_RABBIT_FQDN", ""],
["MOZART_RABBIT_USER", "guest"],
["MOZART_RABBIT_PASSWORD", "guest"],
],
"mozart-redis": [
["MOZART_REDIS_PVT_IP", ""],
["MOZART_REDIS_PUB_IP", ""],
["MOZART_REDIS_FQDN", ""],
["MOZART_REDIS_PASSWORD", ""],
],
"mozart-es": [
["MOZART_ES_PVT_IP", ""],
["MOZART_ES_PUB_IP", ""],
["MOZART_ES_FQDN", ""],
],
"ops": [
["OPS_USER", pwd.getpwuid(os.getuid())[0]],
["OPS_HOME", os.path.expanduser('~')],
["OPS_PASSWORD_HASH", ""],
["LDAP_GROUPS", ""],
["KEY_FILENAME", ""],
["DATASETS_CFG", os.path.join(os.path.expanduser(
'~'), 'verdi', 'etc', 'datasets.json')],
],
"metrics": [
["METRICS_PVT_IP", ""],
["METRICS_PUB_IP", ""],
["METRICS_FQDN", ""],
],
"metrics-redis": [
["METRICS_REDIS_PVT_IP", ""],
["METRICS_REDIS_PUB_IP", ""],
["METRICS_REDIS_FQDN", ""],
["METRICS_REDIS_PASSWORD", ""],
],
"metrics-es": [
["METRICS_ES_PVT_IP", ""],
["METRICS_ES_PUB_IP", ""],
["METRICS_ES_FQDN", ""],
],
"grq": [
["GRQ_PVT_IP", ""],
["GRQ_PUB_IP", ""],
["GRQ_FQDN", ""],
["GRQ_PORT", 8878],
],
"grq-es": [
["GRQ_ES_PVT_IP", ""],
["GRQ_ES_PUB_IP", ""],
["GRQ_ES_FQDN", ""],
],
"factotum": [
["FACTOTUM_PVT_IP", ""],
["FACTOTUM_PUB_IP", ""],
["FACTOTUM_FQDN", ""],
],
"ci": [
["CI_PVT_IP", ""],
["CI_PUB_IP", ""],
["CI_FQDN", ""],
["JENKINS_USER", "jenkins"],
["JENKINS_DIR", os.path.join(os.path.expanduser('~'), 'jenkins')],
["JENKINS_API_USER", ""],
["JENKINS_API_KEY", ""],
["GIT_OAUTH_TOKEN", ""],
],
"verdi": [
["VERDI_PVT_IP", ""],
["VERDI_PUB_IP", ""],
["VERDI_FQDN", ""],
["CONTAINER_REGISTRY", ""],
["CONTAINER_REGISTRY_BUCKET", ""],
],
"webdav": [
["DAV_SERVER", ""],
["DAV_USER", ""],
["DAV_PASSWORD", ""],
],
"aws-dataset": [
["DATASET_AWS_ACCESS_KEY", ""],
["DATASET_AWS_SECRET_KEY", ""],
["DATASET_AWS_REGION", "us-west-2"],
["DATASET_S3_ENDPOINT", "s3-us-west-2.amazonaws.com"],
["DATASET_S3_WEBSITE_ENDPOINT", "s3-website-us-west-2.amazonaws.com"],
["DATASET_BUCKET", ""],
],
"aws-asg": [
["AWS_ACCESS_KEY", ""],
["AWS_SECRET_KEY", ""],
["AWS_REGION", "us-west-2"],
["S3_ENDPOINT", "s3-us-west-2.amazonaws.com"],
["CODE_BUCKET", ""],
["VERDI_PRIMER_IMAGE", ""],
["VERDI_TAG", ""],
["VERDI_UID", os.getuid()],
["VERDI_GID", os.getgid()],
["VENUE", "ops"],
]
}
def copy_files():
"""Copy templates and files to user config files."""
files_path = get_user_files_path()
logger.debug('files_path: %s' % files_path)
validate_dir(files_path, mode=0o700)
sds_files_path = resource_filename(
'sdscli', os.path.join('adapters', 'hysds', 'files'))
sds_files = glob(os.path.join(sds_files_path, '*'))
for sds_file in sds_files:
if os.path.basename(sds_file) == 'cluster.py':
user_file = os.path.join(os.path.dirname(
get_user_config_path()), os.path.basename(sds_file))
if not os.path.exists(user_file):
shutil.copy(sds_file, user_file)
else:
user_file = os.path.join(files_path, os.path.basename(sds_file))
if os.path.isdir(sds_file) and not os.path.exists(user_file):
shutil.copytree(sds_file, user_file)
logger.debug("Copying dir %s to %s" % (sds_file, user_file))
elif os.path.isfile(sds_file) and not os.path.exists(user_file):
shutil.copy(sds_file, user_file)
logger.debug("Copying file %s to %s" % (sds_file, user_file))
def configure():
"""Configure SDS config file for HySDS."""
# copy templates/files
copy_files()
# config file
cfg_file = get_user_config_path()
if os.path.exists(cfg_file):
cont = prompt(get_prompt_tokens=lambda x: [(Token, cfg_file),
(Token, " already exists. "),
(Token.Alert,
"Customizations will be lost or overwritten!"),
(Token, " Continue [y/n]: ")],
validator=YesNoValidator(), style=prompt_style) == 'y'
# validator=YesNoValidator(), default='n', style=prompt_style) == 'y'
if not cont:
return 0
with open(cfg_file) as f:
cfg = yaml.load(f, Loader=yaml.FullLoader)
else:
cfg = {}
# mozart
for k, d in CFG_DEFAULTS['mozart']:
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# mozart components
comps = [('mozart-rabbit', 'rabbitMQ'), ('mozart-redis', 'redis'),
('mozart-es', 'elasticsearch')]
for grp, comp in comps:
reuse = prompt("Is mozart %s on a different IP [y/n]: " % comp,
validator=YesNoValidator(), default='n') == 'n'
for k, d in CFG_DEFAULTS[grp]:
if reuse:
if k.endswith('_PVT_IP'):
cfg[k] = cfg['MOZART_PVT_IP']
continue
elif k.endswith('_PUB_IP'):
cfg[k] = cfg['MOZART_PUB_IP']
continue
elif k.endswith('_FQDN'):
cfg[k] = cfg['MOZART_FQDN']
continue
if k == 'MOZART_RABBIT_PASSWORD':
while True:
p1 = prompt(get_prompt_tokens=lambda x: [(Token, "Enter RabbitMQ password for user "),
(Token.Username, "%s" %
cfg['MOZART_RABBIT_USER']),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
p2 = prompt(get_prompt_tokens=lambda x: [(Token, "Re-enter RabbitMQ password for user "),
(Token.Username, "%s" %
cfg['MOZART_RABBIT_USER']),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
if p1 == p2:
if p1 == "":
print("Password can't be empty.")
continue
v = p1
break
print("Passwords don't match.")
elif k == 'MOZART_REDIS_PASSWORD':
while True:
p1 = prompt(get_prompt_tokens=lambda x: [(Token, "Enter Redis password: ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
p2 = prompt(get_prompt_tokens=lambda x: [(Token, "Re-enter Redis password: ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
if p1 == p2:
v = p1
break
print("Passwords don't match.")
else:
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# ops
for k, d in CFG_DEFAULTS['ops']:
if k == 'OPS_PASSWORD_HASH':
while True:
p1 = prompt(get_prompt_tokens=lambda x: [(Token, "Enter web interface password for ops user "),
(Token.Username, "%s" %
cfg['OPS_USER']),
(Token, ": ")],
default="",
style=prompt_style,
is_password=True)
p2 = prompt(get_prompt_tokens=lambda x: [(Token, "Re-enter web interface password for ops user "),
(Token.Username, "%s" %
cfg['OPS_USER']),
(Token, ": ")],
default="",
style=prompt_style,
is_password=True)
if p1 == p2:
if p1 == "":
print("Password can't be empty.")
continue
v = hashlib.sha224(p1.encode()).hexdigest()
break
print("Passwords don't match.")
else:
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# metrics
for k, d in CFG_DEFAULTS['metrics']:
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# metrics components
comps = [('metrics-redis', 'redis'), ('metrics-es', 'elasticsearch')]
for grp, comp in comps:
reuse = prompt("Is metrics %s on a different IP [y/n]: " % comp,
validator=YesNoValidator(), default='n') == 'n'
for k, d in CFG_DEFAULTS[grp]:
if reuse:
if k.endswith('_PVT_IP'):
cfg[k] = cfg['METRICS_PVT_IP']
continue
elif k.endswith('_PUB_IP'):
cfg[k] = cfg['METRICS_PUB_IP']
continue
elif k.endswith('_FQDN'):
cfg[k] = cfg['METRICS_FQDN']
continue
if k == 'METRICS_REDIS_PASSWORD':
while True:
p1 = prompt(get_prompt_tokens=lambda x: [(Token, "Enter Redis password: ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
p2 = prompt(get_prompt_tokens=lambda x: [(Token, "Re-enter Redis password: ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
if p1 == p2:
v = p1
break
print("Passwords don't match.")
else:
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# grq
for k, d in CFG_DEFAULTS['grq']:
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# grq components
comps = [('grq-es', 'elasticsearch')]
for grp, comp in comps:
reuse = prompt("Is grq %s on a different IP [y/n]: " % comp,
validator=YesNoValidator(), default='n') == 'n'
for k, d in CFG_DEFAULTS[grp]:
if reuse:
if k.endswith('_PVT_IP'):
cfg[k] = cfg['GRQ_PVT_IP']
continue
elif k.endswith('_PUB_IP'):
cfg[k] = cfg['GRQ_PUB_IP']
continue
elif k.endswith('_FQDN'):
cfg[k] = cfg['GRQ_FQDN']
continue
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# factotum
for k, d in CFG_DEFAULTS['factotum']:
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# ci
for k, d in CFG_DEFAULTS['ci']:
if k in ('JENKINS_API_KEY', 'GIT_OAUTH_TOKEN'):
while True:
p1 = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
p2 = prompt(get_prompt_tokens=lambda x: [(Token, "Re-enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
if p1 == p2:
v = p1
break
print("Values don't match.")
else:
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# verdi
for k, d in CFG_DEFAULTS['verdi']:
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# webdav
for k, d in CFG_DEFAULTS['webdav']:
if k == 'DAV_PASSWORD':
while True:
p1 = prompt(get_prompt_tokens=lambda x: [(Token, "Enter webdav password for user "),
(Token.Username, "%s" %
cfg['DAV_USER']),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
p2 = prompt(get_prompt_tokens=lambda x: [(Token, "Re-enter webdav password for user "),
(Token.Username, "%s" %
cfg['DAV_USER']),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
if p1 == p2:
v = p1
break
print("Passwords don't match.")
else:
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# aws-dataset
for k, d in CFG_DEFAULTS['aws-dataset']:
if k == 'DATASET_AWS_SECRET_KEY':
if cfg['DATASET_AWS_ACCESS_KEY'] == "":
cfg['DATASET_AWS_SECRET_KEY'] = ""
continue
while True:
p1 = prompt(get_prompt_tokens=lambda x: [(Token, "Enter AWS secret key for "),
(Token.Username, "%s" %
cfg['DATASET_AWS_ACCESS_KEY']),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
p2 = prompt(get_prompt_tokens=lambda x: [(Token, "Re-enter AWS secret key for "),
(Token.Username, "%s" %
cfg['DATASET_AWS_ACCESS_KEY']),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
if p1 == p2:
v = p1
break
print("Keys don't match.")
elif k == 'DATASET_AWS_ACCESS_KEY':
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ". If using instance roles, just press enter"),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
else:
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# aws-asg
for k, d in CFG_DEFAULTS['aws-asg']:
if k == 'AWS_SECRET_KEY':
if cfg['AWS_ACCESS_KEY'] == "":
cfg['AWS_SECRET_KEY'] = ""
continue
while True:
p1 = prompt(get_prompt_tokens=lambda x: [(Token, "Enter AWS secret key for "),
(Token.Username, "%s" %
cfg['AWS_ACCESS_KEY']),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
p2 = prompt(get_prompt_tokens=lambda x: [(Token, "Re-enter AWS secret key for "),
(Token.Username, "%s" %
cfg['AWS_ACCESS_KEY']),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style,
is_password=True)
if p1 == p2:
v = p1
break
print("Keys don't match.")
elif k == 'AWS_ACCESS_KEY':
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ". If using instance roles, just press enter"),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
else:
v = prompt(get_prompt_tokens=lambda x: [(Token, "Enter value for "),
(Token.Param, "%s" % k),
(Token, ": ")],
default=str(cfg.get(k, d)),
style=prompt_style)
cfg[k] = v
# ensure directory exists
validate_dir(os.path.dirname(cfg_file), mode=0o700)
yml = CFG_TMPL.format(**cfg)
with open(cfg_file, 'w') as f:
f.write(yml)
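configure() never stores the ops web password in the clear: the OPS_PASSWORD_HASH branch keeps only a SHA-224 hex digest of it. A standalone sketch of that hashing step (`ops_password_hash` is a name introduced here for illustration):

```python
import hashlib

def ops_password_hash(password):
    # Same scheme as the OPS_PASSWORD_HASH prompt above: only the SHA-224
    # hex digest of the UTF-8 password bytes reaches the config file.
    return hashlib.sha224(password.encode()).hexdigest()
```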
# ***** internetarchive/spew-shelf.py (wumpus/visigoth-data, Apache-2.0) *****
#!/usr/bin/env python3
import shelve
import sys
for f in sys.argv[1:]:
with shelve.open(f, flag='r') as d:
for k in d:
print(k,d[k])
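spew-shelf.py assumes a shelf database written earlier by some other tool and opens it read-only. A round-trip sketch of that contract (write with shelve, reopen with flag='r', dump all pairs), using a temp path in place of the real data file:

```python
import os
import shelve
import tempfile

def spew(path):
    # Same loop as spew-shelf.py: open the shelf read-only and collect
    # every (key, value) pair instead of printing it.
    with shelve.open(path, flag='r') as d:
        return [(k, d[k]) for k in d]
```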
# ***** pi88reader/pi88_to_excel.py (natter1/pi88reader, MIT) *****
"""
todo: check pandas
"""
from openpyxl import Workbook
from openpyxl.styles import Font
from pi88reader.pi88_importer import PI88Measurement, SegmentType
def main():
# filename = '..\\resources\\quasi_static_12000uN.tdm'  # alternate sample, unused
filename = '..\\resources\\AuSn_Creep\\1000uN 01 LC.tdm'
measurement = PI88Measurement(filename)
to_excel = PI88ToExcel(measurement)
to_excel.write("delme.xlsx")
class PI88ToExcel:
def __init__(self, pi88_measurement):
self.measurement = pi88_measurement
self.workbook = Workbook()
self.workbook.remove(self.workbook.active)
def write(self, filename):
self.add_sheet_quasi_static_data() # self.workbook.active)
self.add_sheet_segment_data()
self.workbook.save(filename=filename)
def add_sheet_quasi_static_data(self):
wb = self.workbook
#mws_title = self.measurement.filename.split('.')[2].split('\\')[2]
ws_title = self.measurement.filename.split('\\')[-1].split('.')[0]
ws = wb.create_sheet(title=ws_title)
data = self.measurement.get_quasi_static_curve()
self.write_data(ws, data)
def add_sheet_segment_data(self):
wb = self.workbook
ws_title = "segments"
ws = wb.create_sheet(title=ws_title)
ws.cell(row=1, column=1).value = "LOAD:"
data = self.measurement.get_segment_curve(SegmentType.LOAD)
self.write_data(ws, data, row=1, col=2)
ws.cell(row=1, column=5).value = "HOLD:"
data = self.measurement.get_segment_curve(SegmentType.HOLD)
self.write_data(ws, data, row=1, col=6)
ws.cell(row=1, column=9).value = "UNLOAD:"
data = self.measurement.get_segment_curve(SegmentType.UNLOAD)
self.write_data(ws, data, row=1, col=10)
@staticmethod
def write_row(ws, data, row, col):
font = Font(bold=True)
for i, value in enumerate(data):
ws.cell(row=row, column=col+i).value = value
ws.cell(row=row, column=col + i).font = font
@staticmethod
def write_cols(ws, data, row, col):
for i, value in enumerate(data[0]):
for j, column in enumerate(data):
ws.cell(row=row+i, column=col+j).value = column[i]
def write_data(self, ws, data, row=1, col=1):
header = data[0]
if header:
self.write_row(ws, header, row, col)
row += 1
self.write_cols(ws, data[1:], row, col)
# for i, value in enumerate(data[1]):
# for j, column in enumerate(data[1:]):
# ws.cell(row=row+i, column=col+j).value = column[i]
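write_data above takes a header row followed by a list of columns; write_cols then transposes those columns into worksheet cells. A pure-Python sketch of that transpose (`columns_to_rows` is introduced here for illustration, no openpyxl required):

```python
def columns_to_rows(data):
    # Mirrors PI88ToExcel.write_cols: 'data' is a list of equal-length
    # columns; the worksheet ends up holding the row-major grid.
    n_rows = len(data[0])
    return [[col[i] for col in data] for i in range(n_rows)]
```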
if __name__ == "__main__":
main()
# ***** caffe_files/caffe_traininglayers.py (excalib/interactive-deep-colorization, MIT) *****
# **************************************
# ***** Richard Zhang / 2016.08.06 *****
# **************************************
import numpy as np
import warnings
import os
import sklearn.neighbors as nn
import caffe
from skimage import color
import matplotlib.pyplot as plt
import math
import platform
import cv2
import rz_fcns_nohdf5 as rz
# ***************************************
# ***** LAYERS FOR GLOBAL HISTOGRAM *****
# ***************************************
class SpatialRepLayer(caffe.Layer):
'''
INPUTS
bottom[0].data NxCx1x1
bottom[1].data NxCxXxY
OUTPUTS
top[0].data NxCxXxY repeat 0th input spatially '''
def setup(self,bottom,top):
if(len(bottom)!=2):
raise Exception("Layer needs 2 inputs")
self.param_str_split = self.param_str.split(' ')
# self.keep_ratio = float(self.param_str_split[0]) # frequency keep whole input
self.N = bottom[0].data.shape[0]
self.C = bottom[0].data.shape[1]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
if(self.X!=1 or self.Y!=1):
raise Exception("bottom[0] should have spatial dimensions 1x1")
# self.Nref = bottom[1].data.shape[0]
# self.Cref = bottom[1].data.shape[1]
self.Xref = bottom[1].data.shape[2]
self.Yref = bottom[1].data.shape[3]
def reshape(self,bottom,top):
top[0].reshape(self.N,self.C,self.Xref,self.Yref) # output shape
def forward(self,bottom,top):
top[0].data[...] = bottom[0].data[:,:,:,:] # will do singleton expansion
def backward(self,top,propagate_down,bottom):
bottom[0].diff[:,:,0,0] = np.sum(np.sum(top[0].diff,axis=2),axis=2)
bottom[1].diff[...] = 0
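SpatialRepLayer's forward is pure NumPy singleton expansion (an NxCx1x1 array broadcast against an NxCxXxY target), and its backward is the adjoint of that broadcast: a sum over the two replicated spatial axes. A standalone sketch of both directions (function names are illustrative):

```python
import numpy as np

def spatial_rep_forward(bottom0, X, Y):
    # (N,C,1,1) -> (N,C,X,Y); the assignment in forward() relies on exactly
    # this singleton expansion.
    return np.broadcast_to(bottom0, bottom0.shape[:2] + (X, Y)).copy()

def spatial_rep_backward(top_diff):
    # Adjoint of broadcasting: sum the incoming gradient over the axes
    # that were replicated.
    return top_diff.sum(axis=(2, 3), keepdims=True)
```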
class ColorGlobalDropoutLayer(caffe.Layer):
'''
Inputs
bottom[0].data NxCx1x1
Outputs
top[0].data Nx(C+1)x1x1 last channel is whether or not to keep input
first C channels are copied from bottom (if kept)
'''
def setup(self,bottom,top):
if(len(bottom)==0):
raise Exception("Layer needs inputs")
self.param_str_split = self.param_str.split(' ')
self.keep_ratio = float(self.param_str_split[0]) # frequency keep whole input
self.cnt = 0
self.N = bottom[0].data.shape[0]
self.C = bottom[0].data.shape[1]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
def reshape(self,bottom,top):
top[0].reshape(self.N,self.C+1,self.X,self.Y) # output mask
def forward(self,bottom,top):
top[0].data[...] = 0
# top[0].data[:,:self.C,:,:] = bottom[0].data[...]
# determine which ones are kept
keeps = np.random.binomial(1,self.keep_ratio,size=self.N)
top[0].data[:,-1,:,:] = keeps[:,np.newaxis,np.newaxis]
top[0].data[:,:-1,:,:] = bottom[0].data[...]*keeps[:,np.newaxis,np.newaxis,np.newaxis]
def backward(self,top,propagate_down,bottom):
pass # backward not implemented
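ColorGlobalDropoutLayer appends a per-sample keep indicator as an extra channel and zeros the copied channels for dropped samples. A NumPy sketch of the forward pass (`global_dropout` is a name introduced here):

```python
import numpy as np

def global_dropout(x, keep_ratio, rng=np.random):
    # x: (N,C,1,1) -> (N,C+1,1,1). Channels 0..C-1 copy x (zeroed when the
    # sample is dropped); channel C is the per-sample keep flag.
    N, C = x.shape[:2]
    keeps = rng.binomial(1, keep_ratio, size=N).astype(x.dtype)
    out = np.zeros((N, C + 1, 1, 1), dtype=x.dtype)
    out[:, :C] = x * keeps[:, None, None, None]
    out[:, C] = keeps[:, None, None]
    return out
```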
class ChooseOneDropoutLayer(caffe.Layer):
'''
Inputs
bottom[0].data NxCx1x1
Outputs
top[0].data Nx2Cx1x1 evens are the bottom data (0)
odds indicate which one is kept
'''
def setup(self,bottom,top):
if(len(bottom)==0):
raise Exception("Layer needs inputs")
self.param_str_split = self.param_str.split(' ')
self.drop_all_ratio = float(self.param_str_split[0]) # frequency keep whole input
self.cnt = 0
self.N = bottom[0].data.shape[0]
self.C = bottom[0].data.shape[1]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
def reshape(self,bottom,top):
top[0].reshape(self.N,2*self.C,self.X,self.Y) # output mask
def forward(self,bottom,top):
top[0].data[...] = 0 # clear everything
# determine which ones are kept
drop_alls = np.random.binomial(1,self.drop_all_ratio,size=self.N)
# determine which to keep
keep_inds = np.random.randint(self.C,size=self.N)
for nn in range(self.N):
if(drop_alls[nn]==0):
keep_ind = keep_inds[nn]
top[0].data[nn,2*keep_ind,0,0] = bottom[0].data[nn,keep_ind,0,0]
top[0].data[nn,2*keep_ind+1,0,0] = 1
# top[0].data[:,-1,:,:] = keeps[:,np.newaxis,np.newaxis]
def backward(self,top,propagate_down,bottom):
pass # backward not implemented
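ChooseOneDropoutLayer either reveals exactly one randomly chosen channel per sample (value at index 2k, flag at 2k+1) or drops everything. A NumPy sketch of that forward pass (`choose_one` is illustrative):

```python
import numpy as np

def choose_one(x, drop_all_ratio, rng=np.random):
    # x: (N,C,1,1) -> (N,2C,1,1); even channels carry the kept value,
    # odd channels flag which (if any) channel survived.
    N, C = x.shape[:2]
    out = np.zeros((N, 2 * C, 1, 1), dtype=x.dtype)
    drop_all = rng.binomial(1, drop_all_ratio, size=N)
    keep_ind = rng.randint(C, size=N)
    for n in range(N):
        if drop_all[n] == 0:
            k = keep_ind[n]
            out[n, 2 * k, 0, 0] = x[n, k, 0, 0]
            out[n, 2 * k + 1, 0, 0] = 1
    return out
```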
# **************************************
# ***** RANDOM REVEALING OF COLORS *****
# **************************************
class ColorRandPointLayer(caffe.Layer):
''' Layer which reveals random square chunks of the input color '''
def setup(self,bottom,top):
if(len(bottom)==0):
raise Exception("Layer needs inputs")
self.param_str_split = self.param_str.split(' ')
self.cnt = 0
self.mask_mult = 110.
self.N = bottom[0].data.shape[0]
self.C = bottom[0].data.shape[1]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
self.p_numpatch = 0.125 # probability for number of patches to use drawn from geometric distribution
self.p_min_size = 0 # half-patch min size
self.p_max_size = 4 # half-patch max size
self.p_std = .25 # percentage of image for std where patch is located
self.p_whole = .01 # probability of revealing whole image
def reshape(self,bottom,top):
top[0].reshape(self.N,self.C+1,self.X,self.Y) # output mask
def forward(self,bottom,top):
top[0].data[...] = 0
# top[0].data[:,:self.C,:,:] = bottom[0].data[...]
# determine number of points
Ns = np.random.geometric(p=self.p_numpatch,size=self.N)
# determine half-patch sizes
Ps = np.random.random_integers(self.p_min_size,high=self.p_max_size,size=np.sum(Ns))
#determine location
Xs = np.clip(np.random.normal(loc=self.X/2.,scale=self.X*self.p_std,size=np.sum(Ns)),0,self.X-1).astype('int')
Ys = np.clip(np.random.normal(loc=self.Y/2.,scale=self.Y*self.p_std,size=np.sum(Ns)),0,self.Y-1).astype('int')
use_wholes = np.random.binomial(1,self.p_whole,size=self.N)
cnt = 0
for nn in range(self.N):
if(use_wholes[nn]==1): # throw in whole image
# print('Using whole image')
top[0].data[nn,:self.C,:,:] = bottom[0].data[nn,:,:,:]
top[0].data[nn,-1,:,:] = self.mask_mult
cnt = cnt+Ns[nn]
else: # sample points
for nnn in range(Ns[nn]):
p = Ps[cnt]
x = Xs[cnt]
y = Ys[cnt]
# print '(%i,%i,%i)'%(x,y,p)
top[0].data[nn,:self.C,x-p:x+p+1,y-p:y+p+1] \
= np.mean(np.mean(bottom[0].data[nn,:,x-p:x+p+1,y-p:y+p+1],axis=1),axis=1)[:,np.newaxis,np.newaxis]
top[0].data[nn,-1,x-p:x+p+1,y-p:y+p+1] = self.mask_mult
cnt = cnt+1
def backward(self,top,propagate_down,bottom):
pass # backward not implemented
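ColorRandPointLayer draws the number of revealed patches per image from a geometric distribution with p_numpatch = 0.125, so each image reveals 1/p = 8 patches on average. A sketch of that sampling step (`sample_patch_counts` is illustrative):

```python
import numpy as np

def sample_patch_counts(n_images, p=0.125, rng=np.random):
    # Geometric support is {1, 2, ...} with mean 1/p, i.e. 8 patches
    # on average at p = 0.125.
    return rng.geometric(p=p, size=n_images)
```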
# Randomly reveal strokes
def gen_random_stroke(X,Nmin=0,Nmax=8,Lmin=4,Lmax=20):
''' Generate a random stroke
(1) Randomly pick a direction and location to begin with
(2) Randomly choose number of points, loop through points
(a) randomly generate delta_theta, length
(b) append to list of points
(c) if the point crosses the edge, exist loop
(3) Clip on boundaries and return
INPUTS
X size of image
Nmin min number of points
Nmax max number of points
Lmin min length of a segment
Lmax max length of a segment
'''
cur_theta = np.random.uniform(-math.pi,math.pi,size=1)
pts = []
cur_pt = np.random.uniform(.1*X,.9*X,size=2)
pts.append(cur_pt.copy())
N = np.random.randint(Nmin,Nmax) # number of points
dtheta_bnd = np.random.uniform(-.4,.4) # amount that curve will deviate
for nn in range(N):
delta_theta = np.random.uniform(-dtheta_bnd*math.pi,dtheta_bnd*math.pi,size=1) # deviation
cur_length = np.random.uniform(Lmin,Lmax,size=1) # pixels
cur_theta = cur_theta+delta_theta
cur_pt = cur_pt + np.array((cur_length*math.cos(cur_theta),cur_length*math.sin(cur_theta))).flatten()
pts.append(cur_pt.copy())
if(np.sum(cur_pt<0) or np.sum(cur_pt>X-1)): # went out of bounds
break
return np.array(np.clip(pts,0,X-1))
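The polar update in gen_random_stroke (perturb the heading, step forward, clip into the canvas) can be exercised on its own. A simplified walk demo with fixed step length and deviation bound (`random_walk_points` and its defaults are introduced here, not part of the original API):

```python
import math
import numpy as np

def random_walk_points(X, n_steps, step_len=5.0, dtheta=0.3, rng=np.random):
    # Same polar update as gen_random_stroke: perturb the heading, advance
    # step_len pixels, then clip the whole polyline into the [0, X-1] canvas.
    theta = rng.uniform(-math.pi, math.pi)
    pt = rng.uniform(0.1 * X, 0.9 * X, size=2)
    pts = [pt.copy()]
    for _ in range(n_steps):
        theta += rng.uniform(-dtheta, dtheta)
        pt = pt + step_len * np.array([math.cos(theta), math.sin(theta)])
        pts.append(pt.copy())
    return np.clip(np.array(pts), 0, X - 1)
```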
def stroke2mask(pts,W,X,returnFlat=False):
''' Given stroke endpoints and line thickness, return a mask '''
pts = pts.astype('int')
cur_img = np.zeros((X,X,1),dtype='uint8')
for pp in range(pts.shape[0]-1):
cur_img = cv2.line(cur_img,(pts[pp,0],pts[pp,1]),(pts[pp+1,0],pts[pp+1,1]),(255,255,255),thickness=W)
cur_img_mask = cur_img[:,:,0]>0
cur_img_mask_flt = cur_img_mask.flatten()
if(returnFlat):
return cur_img_mask_flt
else:
return cur_img_mask
class RandStrokePointLayer(caffe.Layer):
''' Layer reveals random strokes and points '''
def setup(self,bottom,top):
if(len(bottom)==0):
raise Exception("Layer needs inputs")
self.param_str_split = self.param_str.split(' ')
self.cnt = 0
self.mask_mult = 110.
self.N = bottom[0].data.shape[0]
self.C = bottom[0].data.shape[1]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
self.p_numpatch = 0.125 # probability for number of points/strokes to use, drawn from geometric distribution
self.p_stroke = 0.25 # probability of using a stroke (rather than a point)
# patch settings
self.p_min_size = 0 # half-patch min size
self.p_max_size = 4 # half-patch max size
self.p_std = .25 # percentage of image for std where patch is located
self.p_whole = .01 # probability of revealing whole image
# stroke settings
self.l_min_thick=1; self.l_max_thick=8; # thickness
self.l_min_seg=0; self.l_max_seg=10; # number of points per line
self.l_min_len=0; self.l_max_len=10; # length of each line segment
def reshape(self,bottom,top):
top[0].reshape(self.N,self.C+1,self.X,self.Y) # output mask
def forward(self,bottom,top):
top[0].data[...] = 0
# top[0].data[:,:self.C,:,:] = bottom[0].data[...]
# determine number of points/patches
Ns = np.random.geometric(p=self.p_numpatch,size=self.N)
use_wholes = np.random.binomial(1,self.p_whole,size=self.N)
# Patch settings
# determine half-patch sizes
Ps = np.random.random_integers(self.p_min_size,high=self.p_max_size,size=np.sum(Ns))
#determine location
Xs = np.clip(np.random.normal(loc=self.X/2.,scale=self.X*self.p_std,size=np.sum(Ns)),0,self.X-1).astype('int')
Ys = np.clip(np.random.normal(loc=self.Y/2.,scale=self.Y*self.p_std,size=np.sum(Ns)),0,self.Y-1).astype('int')
# stroke or patch
is_strokes = np.random.binomial(1,self.p_stroke,size=np.sum(Ns))
Ws = np.random.randint(self.l_min_thick,self.l_max_thick,np.sum(Ns))
cnt = 0
for nn in range(self.N):
if(use_wholes[nn]==1): # throw in whole image
# print('Using whole image')
top[0].data[nn,:self.C,:,:] = bottom[0].data[nn,:,:,:]
top[0].data[nn,-1,:,:] = self.mask_mult
cnt = cnt+Ns[nn]
else: # sample points
for nnn in range(Ns[nn]):
if(not is_strokes[cnt]): # point mode
p = Ps[cnt]
x = Xs[cnt]
y = Ys[cnt]
# print '(%i,%i,%i)'%(x,y,p)
top[0].data[nn,:self.C,x-p:x+p+1,y-p:y+p+1] \
= np.mean(np.mean(bottom[0].data[nn,:,x-p:x+p+1,y-p:y+p+1],axis=1),axis=1)[:,np.newaxis,np.newaxis]
top[0].data[nn,-1,x-p:x+p+1,y-p:y+p+1] = self.mask_mult
else: # stroke mode
stroke_pts = gen_random_stroke(self.X,Nmin=self.l_min_seg,Nmax=self.l_max_seg,\
Lmin=self.l_min_len,Lmax=self.l_max_len).astype('int')
cur_mask = stroke2mask(stroke_pts,Ws[cnt],self.X)
cur_mask_inds = rz.find_nd(cur_mask)
top[0].data[nn,:self.C,cur_mask_inds[:,0],cur_mask_inds[:,1]] \
= bottom[0].data[nn,:,cur_mask_inds[:,0],cur_mask_inds[:,1]]
top[0].data[nn,-1,cur_mask_inds[:,0],cur_mask_inds[:,1]] = self.mask_mult
cnt = cnt+1
def backward(self,top,propagate_down,bottom):
pass # backward not implemented
# **********************************
# ***** PREVIOUSLY MADE LAYERS *****
# **********************************
class DataDropoutLayer(caffe.Layer):
''' Layer which drops out chunks of the input '''
def setup(self,bottom,top):
if(len(bottom)==0):
raise Exception("Layer needs inputs")
self.param_str_split = self.param_str.split(' ')
self.dropout_ratio = float(self.param_str_split[0]) # dropout frequency
self.dropout_size = int(self.param_str_split[1]) # block size for dropout
self.refresh_period = int(self.param_str_split[2]) # regenerate every few iterations
self.channel_sync = bool(int(self.param_str_split[3])) # sync dropout through channels
self.retain_ratio = 1 - self.dropout_ratio
self.cnt = 0
self.N = bottom[0].data.shape[0]
self.C = bottom[0].data.shape[1]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
self.Xblock = self.X/self.dropout_size
self.Yblock = self.Y/self.dropout_size
def reshape(self,bottom,top):
top[0].reshape(self.N,self.C,self.X,self.Y) # output mask
top[1].reshape(self.N,self.C,self.X,self.Y) # masked input
def forward(self,bottom,top):
if(np.mod(self.cnt,self.refresh_period)==0):
if(self.channel_sync):
retain_block = np.random.binomial(1,self.retain_ratio,size=(self.N,1,self.Xblock,self.Yblock))
else:
retain_block = np.random.binomial(1,self.retain_ratio,size=(self.N,self.C,self.Xblock,self.Yblock))
top[0].data[...] = retain_block.repeat(self.dropout_size,axis=2).repeat(self.dropout_size,axis=3)
self.cnt = self.cnt+1
top[1].data[...] = bottom[0].data[...]*top[0].data[...] # mask image
def backward(self,top,propagate_down,bottom):
pass # backward not implemented
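DataDropoutLayer implements block dropout by sampling keep/drop decisions at coarse resolution and upsampling the decision grid with np.repeat. A standalone sketch of the mask construction (`block_mask` is illustrative; assumes X is divisible by the block size):

```python
import numpy as np

def block_mask(N, C, X, retain_ratio, block, rng=np.random):
    # Sample one keep/drop decision per block, then tile it back up to
    # X x X with np.repeat, exactly as DataDropoutLayer.forward does.
    xb = X // block
    coarse = rng.binomial(1, retain_ratio, size=(N, C, xb, xb))
    return coarse.repeat(block, axis=2).repeat(block, axis=3)
```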
class LossMeterLayer(caffe.Layer):
''' Layer acts as a "meter" to track loss values '''
def setup(self,bottom,top):
if(len(bottom)==0):
raise Exception("Layer needs inputs")
self.param_str_split = self.param_str.split(' ')
self.LOSS_DIR = self.param_str_split[0]
self.P = int(self.param_str_split[1])
self.H = int(self.param_str_split[2])
if(len(self.param_str_split)==4):
self.prefix = self.param_str_split[3]
else:
self.prefix = ''
# self.P = 1000 # interval to print losses
# self.H = 1000 # history size
# self.LOSS_DIR = './loss_save'
self.cnt = 0 # loss track counter
# self.P = 1 # interval to print losses
self.h = 0 # index into history
self.L = len(bottom)
self.losses = np.zeros((self.L,self.H))
self.ITER_PATH = os.path.join(self.LOSS_DIR,'iter.npy')
self.LOG_PATH = os.path.join(self.LOSS_DIR,'loss_log')
rz.mkdir(self.LOSS_DIR)
if(os.path.exists(self.ITER_PATH)):
self.iter = np.load(self.ITER_PATH)
else:
self.iter = 0 # iteration counter
print 'Initial iteration: %i'%(self.iter+1)
def reshape(self,bottom,top):
pass
# top[0].reshape(1)
# print 'No'
def forward(self,bottom,top):
for ll in range(self.L):
self.losses[ll,self.h] = bottom[ll].data[...]
if(np.mod(self.cnt,self.P)==self.P-1): # print
if(self.cnt >= self.H-1):
tmp_str = 'NumAvg %i, Loss '%(self.H)
for ll in range(self.L):
tmp_str += '%.3e, '%np.mean(self.losses[ll,:])
else:
tmp_str = 'NumAvg %i, Loss '%(self.h)
for ll in range(self.L):
tmp_str += '%.3e, '%np.mean(self.losses[ll,:self.cnt+1])
print_str = '%s: Iter %i, %s'%(self.prefix,self.iter+1,tmp_str)
print print_str
self.f = open(self.LOG_PATH,'a')
self.f.write(print_str)
self.f.write('\n')
self.f.close()
np.save(self.ITER_PATH,self.iter)
self.h = np.mod(self.h+1,self.H) # roll through history
self.cnt = self.cnt+1
self.iter = self.iter+1
def backward(self,top,propagate_down,bottom):
pass # loss meter propagates no gradient
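LossMeterLayer keeps a circular history of the last H loss values and prints their mean every P iterations. A minimal pure-NumPy sketch of that rolling mean (class name and interface are illustrative):

```python
import numpy as np

class LossMeter(object):
    """Circular-buffer running mean, like LossMeterLayer's loss history."""
    def __init__(self, H):
        self.H = H
        self.buf = np.zeros(H)
        self.cnt = 0

    def update(self, loss):
        # Roll through the history exactly as self.h = mod(self.h+1, self.H).
        self.buf[self.cnt % self.H] = loss
        self.cnt += 1

    def mean(self):
        n = min(self.cnt, self.H)
        return float(self.buf[:n].mean()) if n else 0.0
```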
# ***********************************
# ***** PARSE LOSS LOG WRAPPERS *****
# ***********************************
def group_iter_losses(base_dirs,sets,LOSS_ROOTDIR,base_names=-1,set_names=-1,\
return_min_max=False,min_maxes=1,mask_max=True):
'''
INPUTS
base_dirs base subdirectory to search for loss logs in
sets subsubdirectory to find loss log
LOSS_ROOTDIR rootdir to attach to all base_dirs
base_names [base_dirs] base names to populate dictionary with
set_names [set_names] set names to populate dictionary with
return_min_maxs boolean whether or not to return min/max of dataset
min_maxs array of 0/1, 0 for min, 1 for max
OUTPUTS
(iters,losses)
'''
base_dirs = np.array(base_dirs)
sets = np.array(sets)
B = base_dirs.size
if(rz.check_value(base_names,-1)):
base_names = base_dirs
if(rz.check_value(set_names,-1)):
set_names = sets
min_maxes = rz.scalar_to_array(B,min_maxes)
iters = {}; losses = {}
if(return_min_max):
ret_min_maxes = {}
# if(return_max_iter):
# max_iter = {}
for (bb,base) in enumerate(base_dirs):
base_name = base_names[bb]
loss_paths = []; names = []
for (ss,set) in enumerate(sets):
set_name = set_names[ss]
loss_paths.append('%s/%s/loss_log'%(base,set))
if(return_min_max):
(iters[base_name],losses[base_name],ret_min_maxes[base_name])\
= parse_loss_logs(set_names,loss_paths,LOSS_ROOTDIR,return_min_max=True,min_maxes=min_maxes,mask_max=mask_max)
else:
(iters[base_name],losses[base_name])\
= parse_loss_logs(set_names,loss_paths,LOSS_ROOTDIR,return_min_max=False,mask_max=mask_max)
# rets = []
# if(return_min_max):
# rets.append(ret_min_maxes)
# if(return_max_iter):
# rets.append(max_iter)
# if(return_min_max or return_max_iter):
# return (iters,losses,rets)
if(return_min_max):
return (iters,losses,ret_min_maxes)
else:
return (iters,losses)
def parse_loss_logs(names,LOSS_LOG_PATHS,rootdir='',iter_norm_factor=1000,\
return_min_max=False,min_maxes=1,mask_max=True):
''' grab multiple loss_logs'''
LOSS_LOG_PATHS = np.array(LOSS_LOG_PATHS)
names = np.array(names)
N = names.size
min_maxes = rz.scalar_to_array(N,min_maxes)
iters = {}
losses = {}
if(return_min_max):
ret_min_maxes = {}
for (nn,name) in enumerate(names):
LOSS_LOG_PATH = os.path.join(rootdir,LOSS_LOG_PATHS[nn])
if(return_min_max):
(iters[name],losses[name],ret_min_maxes[name]) = parse_loss_log(LOSS_LOG_PATH,iter_norm_factor=iter_norm_factor,\
return_min_max=True,min_max=min_maxes[nn],mask_max=mask_max)
else:
(iters[name],losses[name]) = parse_loss_log(LOSS_LOG_PATH,iter_norm_factor=iter_norm_factor,\
return_min_max=False,mask_max=mask_max)
if(return_min_max):
return (iters,losses,ret_min_maxes)
else:
return (iters,losses)
def parse_loss_log(LOSS_LOG_PATH,iter_norm_factor=1000,\
return_min_max=False,min_max=1,mask_max=True):
if(os.path.exists(LOSS_LOG_PATH)):
f = open(LOSS_LOG_PATH,'r')
cnt = 0
cur_line = f.readline()
recs = []
while(cur_line!=''):
# print cur_line
cur_line_split = cur_line.split(',')
L = len(cur_line_split)-1
NL = L-2
cur_rec = np.zeros((L,))
for (cc,part) in enumerate(cur_line_split[:-1]):
cur_rec[cc] = float(part.split(' ')[-1])
recs.append(cur_rec)
cur_line = f.readline()
recs = np.array(recs)
f.close()
if(mask_max):
mask = recs[:,1]==np.max(recs[:,1])
else:
mask = np.zeros(recs[:,1].size,dtype=bool)+True
recs = recs[mask]
recs = np.array(recs)
iters = recs[:,0]
Navg = recs[:,1]
losses = recs[:,2:]
if(return_min_max):
if(min_max==0):
ret_min_max = np.min(losses,axis=0)
elif(min_max==1):
ret_min_max = np.max(losses,axis=0)
# print ret_min_max
return (iters/iter_norm_factor,losses,ret_min_max)
else:
return (iters/iter_norm_factor,losses)
else:
if(return_min_max):
return (np.zeros((1,1)),np.zeros((1,1)),np.zeros((1,1)))
else:
return (np.zeros((1,1)),np.zeros((1,1)))
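# A standalone sketch of the tokenization step inside parse_loss_log: split a
# record on commas, drop the trailing empty field, and keep the last
# space-separated token of each field as a float. The log line below is a
# made-up example in the format written by the loss-meter forward() above.

```python
line = 'train: Iter 100, NumAvg 10, Loss 1.234e-01, 5.678e-02, '
fields = line.split(',')[:-1]                       # drop trailing empty field
rec = [float(part.split(' ')[-1]) for part in fields]
print(rec)  # [100.0, 10.0, 0.1234, 0.05678]
```

The first entry is the iteration, the second the number of averaged records (used by mask_max), and the rest are the per-layer losses.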
def cmap_to_color(cmap,bb,B):
return cmap(1.*bb/(B))
def plot_losses(ax,iters,losses,base_names,set_names,\
cmap=plt.cm.hsv_r,set_lines='-',inds=0,mults=1,toNorm=False):
base_names = np.array(base_names).flatten()
set_names = np.array(set_names).flatten()
B = base_names.size
set_names = np.array(set_names).flatten()
inds = rz.scalar_to_array(B,inds)
mults = rz.scalar_to_array(B,mults)
for (bb,base_name) in enumerate(base_names):
for (ss,set_name) in enumerate(set_names):
if(toNorm): # normalize by the final value, matching plot_losses_single
plot_vals = mults[bb]*losses[base_name][set_name][:,inds[bb]]/(losses[base_name][set_name][-1,inds[bb]])
else:
plot_vals = mults[bb]*losses[base_name][set_name][:,inds[bb]]
ax.plot(iters[base_name][set_name],plot_vals,\
set_lines[ss],color=cmap_to_color(cmap,bb,B),\
linewidth=2,label='%s-%s'%(base_name,set_name)) # toNorm is not a valid ax.plot kwarg
def plot_losses_single(ax,iters,losses,set_names,\
cmap=plt.cm.hsv_r,set_lines='-',chars='',toNorm=False):
for (ss,set_name) in enumerate(set_names):
I = losses[set_name].shape[1]
chars_use = rz.scalar_to_array(I,chars)
for ii in range(I):
if(toNorm):
plot_vals = losses[set_name][:,ii]/(losses[set_name][-1,ii])
else:
plot_vals = losses[set_name][:,ii]
ax.plot(iters[set_name],plot_vals,\
set_lines[ss],color=cmap_to_color(cmap,ii,I),\
linewidth=2,label='[%i]-%s-%s'%(ii,set_name,chars_use[ii]))
class GradientMagnitudeMeterLayer(caffe.Layer):
''' Layer which acts as a "meter" to measure gradient magnitude '''
def setup(self,bottom,top):
if(len(bottom)==0):
raise Exception("Layer needs inputs")
self.cnt = 0 # iteration counter
self.I = 10 # interval of iterations to keep track
self.pp = 0
# self.P = 1 # interval to print gradient magnitudes
self.P = 10 # interval to print gradient magnitudes
# self.P = 100 # interval to print gradient magnitudes
self.h = 0 # index into history
# self.H = 100 # history size
self.H = 10 # history size
self.H_reached = False
self.L = len(bottom)
self.Ns = np.zeros((self.L,),dtype=int)
self.Cs = np.zeros((self.L,),dtype=int)
self.Xs = np.zeros((self.L,),dtype=int)
self.Ys = np.zeros((self.L,),dtype=int)
self.mags = np.zeros((self.L,self.H))
self.LOG_PATH = './grad_log'
def reshape(self,bottom,top):
# print self.L
for ll in range(self.L):
self.Ns[ll] = bottom[ll].data.shape[0]
self.Cs[ll] = bottom[ll].data.shape[1]
self.Xs[ll] = bottom[ll].data.shape[2]
self.Ys[ll] = bottom[ll].data.shape[3]
top[ll].reshape(self.Ns[ll],self.Cs[ll],self.Xs[ll],self.Ys[ll])
# for ll in range(self.L):
def forward(self,bottom,top):
for ll in range(self.L):
top[ll].data[...] = bottom[ll].data[...] # copy data through
def backward(self,top,propagate_down,bottom):
for ll in range(self.L):
if not propagate_down[ll]:
continue
bottom[ll].diff[...] = top[ll].diff[...] # copy diff through
if(np.mod(self.cnt,self.I)==0): # every Ith iteration, record
self.mags[ll,self.h] = np.linalg.norm(bottom[ll].diff[...])/self.Ns[ll]
# if(self.pp==0):
# if(self.H_reached==True): # average whole history
# print('GradMag %i/%i (%i): %.3f'%(ll,self.L,self.H,np.mean(self.mags[ll,:])))
# else: # haven't built whole history yet
# print('GradMag %i/%i (%i): %.3f'%(ll,self.L,self.h,np.mean(self.mags[ll,:self.h])))
# self.pp = np.mod(self.pp+1,self.P)
if(np.mod(self.cnt,self.I)==0): # every Ith iteration, record
if(self.pp==0):
if(self.H_reached==True): # average whole history
tmp_str = '(%i)'%self.H
for ll in range(self.L):
tmp_str += ' / %.3f'%(np.mean(self.mags[ll,:]))
else: # haven't built whole history yet
tmp_str = '(%i)'%self.h
for ll in range(self.L):
tmp_str += ' / %.3f'%(np.mean(self.mags[ll,:self.h+1]))
print_str = 'GradMag: %s'%tmp_str
print print_str
self.f = open(self.LOG_PATH,'a')
self.f.write(print_str)
self.f.write('\n')
self.f.close()
self.pp = np.mod(self.pp+1,self.P)
if((self.H_reached==False) and (self.h==self.H-1)):
self.H_reached = True
self.h = np.mod(self.h+1,self.H)
self.cnt = self.cnt+1
class ManhattanLossLayer(caffe.Layer):
''' Layer which computes L1 loss '''
def setup(self,bottom,top):
if(len(bottom)!=2):
raise Exception("Layer inputs != 2 (len(bottom)!=2)")
self.N = bottom[0].data.shape[0]
self.P = np.prod(np.array(bottom[0].data.shape[1:]))
# self.C = bottom[0].data.shape[1]
# self.X = bottom[0].data.shape[2]
# self.Y = bottom[0].data.shape[3]
# self.P = self.N*self.X*self.Y
def reshape(self, bottom, top):
top[0].reshape(1) # single loss value
def forward(self, bottom, top):
top[0].data[...] = np.sum(np.abs(bottom[1].data[...]-bottom[0].data[...]))/(self.N*self.P)
def backward(self, top, propagate_down, bottom):
sign_diff = np.sign(bottom[1].data[...]-bottom[0].data[...])
# no back-prop
for i in range(len(bottom)):
if not propagate_down[i]:
continue
if(i==0):
bottom[i].diff[...] = -1.*sign_diff/(self.N*self.P)
else:
bottom[i].diff[...] = 1.*sign_diff/(self.N*self.P)
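# A numpy-only sketch of the ManhattanLossLayer math, outside of Caffe: the
# loss is the mean absolute difference, and the gradient w.r.t. the prediction
# is the (normalized) sign of the difference. The arrays below are
# illustrative stand-ins for bottom[0] and bottom[1].

```python
import numpy as np

pred = np.array([[1.0, 2.0], [3.0, 4.0]])    # plays the role of bottom[0].data
target = np.array([[1.5, 2.0], [2.0, 4.5]])  # plays the role of bottom[1].data
N, P = pred.shape[0], pred.shape[1]

loss = np.sum(np.abs(target - pred)) / (N * P)       # forward
grad_pred = -np.sign(target - pred) / (N * P)        # backward, bottom[0].diff
print(loss)  # 0.5
```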
class NNEnc2Layer(caffe.Layer):
''' Layer which encodes ab map into Q colors
INPUTS
bottom[0] Nx2xXxY
OUTPUTS
top[0].data NxQ
'''
def setup(self,bottom,top):
warnings.filterwarnings("ignore")
if len(bottom) == 0:
raise Exception("Layer should have inputs")
self.NN = 9 # this is hard-coded into the forward
# self.NN = 1 # this is hard-coded into the forward
self.sigma = 5.
self.ENC_DIR = './data/color_bins'
self.pts_in_hull = np.load(os.path.join(self.ENC_DIR,'pts_in_hull.npy'))
self.pts_grid = np.load(os.path.join(self.ENC_DIR,'pts_grid.npy'))
self.prior_probs = np.load(os.path.join(self.ENC_DIR,'prior_probs.npy'))
self.prior_probs_full = np.load(os.path.join(self.ENC_DIR,'prior_probs_full.npy'))
self.in_hull = np.load(os.path.join(self.ENC_DIR,'in_hull.npy'))
self.full_to_hull = np.cumsum(self.in_hull)-1
self.min_pt = np.min(self.pts_grid)
self.spacing = np.sort(np.unique(self.pts_grid))
self.spacing = self.spacing[1] - self.spacing[0]
self.S = np.sqrt(self.pts_grid.shape[0])
self.Q = self.pts_in_hull.shape[0]
self.N = bottom[0].data.shape[0]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
self.P = self.N*self.X*self.Y
self.dists_sq = np.zeros((self.P,self.NN))
self.inds = np.zeros((self.P,self.NN),dtype='int')
self.ab_enc_flt = np.zeros((self.P,self.Q))
self.inds_P = np.arange(0,self.P,dtype='int')[:,rz.na()]
self.ab_enc_flt_hard = np.zeros((self.P,self.Q))
if(len(top)==1):
self.HARD_ENC = False
else:
self.HARD_ENC = True
def reshape(self, bottom, top):
top[0].reshape(self.N,self.Q,self.X,self.Y) # soft encoding
if(self.HARD_ENC):
top[1].reshape(self.N,self.Q,self.X,self.Y) # hard encoding
def forward(self, bottom, top):
# print 'hello'
self.ab_enc_flt[...] = 0
# soft encoding
ab_val = bottom[0].data[...]
ab_val_flt = rz.flatten_nd_array(ab_val,axis=1)
ab_enc_sub = np.round((ab_val-self.min_pt)/self.spacing)
ab_enc_sub = np.clip(ab_enc_sub,1,self.S-1) # force points into margin
# ab_enc_sub_flt = rz.flatten_nd_array(ab_enc_sub,axis=1)
# inds_map = self.full_to_hull[rz.sub2ind2(ab_enc_sub_flt,np.array((self.S,self.S)))]
t = rz.Timer()
cnt = 0
for aa in np.array((0,-1,1)): # hard-coded to find 9-NN
for bb in np.array((0,-1,1)):
# for aa in np.array((0,)): # hard-coded to find 1-NN
# for bb in np.array((0,)):
tmp = ab_enc_sub.copy()
tmp[:,0,:,:] = tmp[:,0,:,:]+aa
tmp[:,1,:,:] = tmp[:,1,:,:]+bb
ab_enc_sub_flt = rz.flatten_nd_array(tmp,axis=1)
inds_hull = self.full_to_hull[rz.sub2ind2(ab_enc_sub_flt,np.array((self.S,self.S)))]
self.dists_sq[:,cnt] = np.sum((ab_val_flt-self.pts_in_hull[inds_hull,:])**2,axis=1)
self.inds[:,cnt] = inds_hull
cnt = cnt+1
# print t.tocStr()
wts = np.exp(-self.dists_sq/(2*self.sigma**2))
# print t.tocStr()
wts = wts/np.sum(wts,axis=1)[:,rz.na()]
# print t.tocStr()
self.ab_enc_flt[self.inds_P,self.inds] = wts
# print t.tocStr()
top[0].data[...] = rz.unflatten_2d_array(self.ab_enc_flt,ab_val,axis=1)
# print t.tocStr()
# hard encoding
if(self.HARD_ENC):
self.ab_enc_flt_hard[self.inds_P,self.inds[:,[0]]] = 1
# print t.tocStr()
top[1].data[...] = rz.unflatten_2d_array(self.ab_enc_flt_hard,ab_val,axis=1)
# print t.tocStr()
self.ab_enc_flt_hard[self.inds_P,self.inds[:,[0]]] = 0
# print t.tocStr()
def backward(self, top, propagate_down, bottom):
# no back-prop
for i in range(len(bottom)):
if not propagate_down[i]:
continue
bottom[i].diff[...] = np.zeros_like(bottom[i].data)
class NNEnc1HotLayer(caffe.Layer):
''' Layer which encodes ab map into Q colors
INPUTS
bottom[0] Nx2xXxY
OUTPUTS
top[0].data NxQ
'''
def setup(self,bottom,top):
warnings.filterwarnings("ignore")
if len(bottom) == 0:
raise Exception("Layer should have inputs")
self.NN = 1 # this is hard-coded into the forward
self.sigma = 5.
self.ENC_DIR = './data/color_bins'
self.pts_in_hull = np.load(os.path.join(self.ENC_DIR,'pts_in_hull.npy'))
self.pts_grid = np.load(os.path.join(self.ENC_DIR,'pts_grid.npy'))
self.prior_probs = np.load(os.path.join(self.ENC_DIR,'prior_probs.npy'))
self.prior_probs_full = np.load(os.path.join(self.ENC_DIR,'prior_probs_full.npy'))
self.in_hull = np.load(os.path.join(self.ENC_DIR,'in_hull.npy'))
self.full_to_hull = np.cumsum(self.in_hull)-1
self.min_pt = np.min(self.pts_grid)
self.spacing = np.sort(np.unique(self.pts_grid))
self.spacing = self.spacing[1] - self.spacing[0]
self.S = np.sqrt(self.pts_grid.shape[0])
self.Q = self.pts_in_hull.shape[0]
def reshape(self, bottom, top):
self.N = bottom[0].data.shape[0]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
self.P = self.N*self.X*self.Y
self.dists_sq = np.zeros((self.P,self.NN))
self.inds = np.zeros((self.P,self.NN),dtype='int')
self.ab_enc_flt = np.zeros((self.P,self.Q))
self.inds_P = np.arange(0,self.P,dtype='int')[:,rz.na()]
self.ab_enc_flt_hard = np.zeros((self.P,self.Q))
top[0].reshape(self.N,self.Q,self.X,self.Y) # hard encoding
def forward(self, bottom, top):
self.ab_enc_flt[...] = 0
# soft encoding
ab_val = bottom[0].data[...]
ab_val_flt = rz.flatten_nd_array(ab_val,axis=1)
ab_enc_sub = np.round((ab_val-self.min_pt)/self.spacing)
ab_enc_sub = np.clip(ab_enc_sub,1,self.S-1) # force points into margin
t = rz.Timer()
cnt = 0
for aa in np.array((0,)): # hard-coded to find 1-NN
for bb in np.array((0,)):
tmp = ab_enc_sub.copy()
tmp[:,0,:,:] = tmp[:,0,:,:]+aa
tmp[:,1,:,:] = tmp[:,1,:,:]+bb
ab_enc_sub_flt = rz.flatten_nd_array(tmp,axis=1)
inds_hull = self.full_to_hull[rz.sub2ind2(ab_enc_sub_flt,np.array((self.S,self.S)))]
self.dists_sq[:,cnt] = np.sum((ab_val_flt-self.pts_in_hull[inds_hull,:])**2,axis=1)
self.inds[:,cnt] = inds_hull
cnt = cnt+1
# print t.tocStr()
wts = np.exp(-self.dists_sq/(2*self.sigma**2))
# print t.tocStr()
wts = wts/np.sum(wts,axis=1)[:,rz.na()]
# print t.tocStr()
# print np.sum(wts)
self.ab_enc_flt[self.inds_P,self.inds] = wts
# print t.tocStr()
# top[0].data[...] = rz.unflatten_2d_array(self.ab_enc_flt,ab_val,axis=1)
# print t.tocStr()
# hard encoding
# if(self.HARD_ENC):
self.ab_enc_flt_hard[self.inds_P,self.inds[:,[0]]] = 1
# print t.tocStr()
top[0].data[...] = rz.unflatten_2d_array(self.ab_enc_flt_hard, ab_val, axis=1)
# print t.tocStr()
# self.ab_enc_flt_hard[self.inds_P,self.inds[:,[0]]] = 0
# print t.tocStr()
def backward(self, top, propagate_down, bottom):
# no back-prop
for i in range(len(bottom)):
if not propagate_down[i]:
continue
bottom[i].diff[...] = np.zeros_like(bottom[i].data)
# ************************
# ***** CAFFE LAYERS *****
# ************************
class BGR2HSVLayer(caffe.Layer):
''' Layer converts BGR to HSV
INPUTS
bottom[0] Nx3xXxY
OUTPUTS
top[0].data Nx3xXxY
'''
def setup(self,bottom, top):
warnings.filterwarnings("ignore")
if(len(bottom)!=1):
raise Exception("Layer should have a single input")
if(bottom[0].data.shape[1]!=3):
raise Exception("Input should be 3-channel BGR image")
self.N = bottom[0].data.shape[0]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
def reshape(self, bottom, top):
top[0].reshape(self.N,3,self.X,self.Y)
def forward(self, bottom, top):
for nn in range(self.N):
top[0].data[nn,:,:,:] = color.rgb2hsv(bottom[0].data[nn,::-1,:,:].astype('uint8').transpose((1,2,0))).transpose((2,0,1))
def backward(self, top, propagate_down, bottom):
# no back-prop
for i in range(len(bottom)):
if not propagate_down[i]:
continue
bottom[i].diff[...] = np.zeros_like(bottom[i].data)
class BGR2LabLayer(caffe.Layer):
''' Layer converts BGR to Lab
INPUTS
bottom[0] Nx3xXxY
OUTPUTS
top[0].data Nx3xXxY
'''
def setup(self,bottom, top):
warnings.filterwarnings("ignore")
if(len(bottom)!=1):
raise Exception("Layer should have a single input")
if(bottom[0].data.shape[1]!=3):
raise Exception("Input should be 3-channel BGR image")
self.N = bottom[0].data.shape[0]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
def reshape(self, bottom, top):
top[0].reshape(self.N,3,self.X,self.Y)
def forward(self, bottom, top):
top[0].data[...] = color.rgb2lab(bottom[0].data[:,::-1,:,:].astype('uint8').transpose((2,3,0,1))).transpose((2,3,0,1))
def backward(self, top, propagate_down, bottom):
# no back-prop
for i in range(len(bottom)):
if not propagate_down[i]:
continue
bottom[i].diff[...] = np.zeros_like(bottom[i].data)
class EncLayer(caffe.Layer):
''' Layer which does hard quantization into bins
INPUTS
bottom[0] Nx1xXxY
OUTPUTS
top[0].data NxQ
'''
def setup(self,bottom, top):
warnings.filterwarnings("ignore")
if len(bottom) == 0:
raise Exception("Layer should have inputs")
self.param_str_split = self.param_str.split(' ')
self.min = float(self.param_str_split[0])
self.max = float(self.param_str_split[1])
self.inc = float(self.param_str_split[2])
self.N = bottom[0].data.shape[0]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
def reshape(self, bottom, top):
top[0].reshape(self.N,1,self.X,self.Y)
def forward(self, bottom, top):
top[0].data[...] = (bottom[0].data[...]-self.min)/self.inc
def backward(self, top, propagate_down, bottom):
# no back-prop
for i in range(len(bottom)):
if not propagate_down[i]:
continue
bottom[i].diff[...] = np.zeros_like(bottom[i].data)
class NNEncLayer(caffe.Layer):
''' Layer which encodes ab map into Q colors
INPUTS
bottom[0] Nx2xXxY
OUTPUTS
top[0].data NxQ
'''
def setup(self,bottom,top):
warnings.filterwarnings("ignore")
if len(bottom) == 0:
raise Exception("Layer should have inputs")
self.NN = 10.
self.sigma = 5.
self.ENC_DIR = './data/color_bins'
self.nnenc = NNEncode(self.NN,self.sigma,km_filepath=os.path.join(self.ENC_DIR,'pts_in_hull.npy'))
self.HARD_FLAG = False
if(len(top)==2):
self.nnenc2 = NNEncode(1,self.sigma,km_filepath=os.path.join(self.ENC_DIR,'pts_in_hull.npy'))
self.HARD_FLAG = True
self.N = bottom[0].data.shape[0]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
self.Q = self.nnenc.K
def reshape(self, bottom, top):
self.N = bottom[0].data.shape[0]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
self.Q = self.nnenc.K
top[0].reshape(self.N,self.Q,self.X,self.Y)
if(self.HARD_FLAG):
top[1].reshape(self.N,self.Q,self.X,self.Y)
def forward(self, bottom, top):
top[0].data[...] = self.nnenc.encode_points_mtx_nd(bottom[0].data[...],axis=1)
if(self.HARD_FLAG):
top[1].data[...] = self.nnenc2.encode_points_mtx_nd(bottom[0].data[...],axis=1)
def backward(self, top, propagate_down, bottom):
# no back-prop
for i in range(len(bottom)):
if not propagate_down[i]:
continue
bottom[i].diff[...] = np.zeros_like(bottom[i].data)
class PriorBoostLayer(caffe.Layer):
''' Layer boosts ab values based on their rarity
INPUTS
bottom[0] NxQxXxY
OUTPUTS
top[0].data Nx1xXxY
'''
def setup(self,bottom, top):
if len(bottom) == 0:
raise Exception("Layer should have inputs")
self.ENC_DIR = './data/color_bins'
self.gamma = .5
self.alpha = 1.
self.pc = PriorFactor(self.alpha,gamma=self.gamma,priorFile=os.path.join(self.ENC_DIR,'prior_probs.npy'))
self.N = bottom[0].data.shape[0]
self.Q = bottom[0].data.shape[1]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
def reshape(self, bottom, top):
top[0].reshape(self.N,1,self.X,self.Y)
def forward(self, bottom, top):
top[0].data[...] = self.pc.forward(bottom[0].data[...],axis=1)
def backward(self, top, propagate_down, bottom):
# no back-prop
for i in range(len(bottom)):
if not propagate_down[i]:
continue
bottom[i].diff[...] = np.zeros_like(bottom[i].data)
class NonGrayMaskLayer(caffe.Layer):
''' Layer outputs a mask based on if the image is grayscale or not
INPUTS
bottom[0] Nx2xXxY ab values
OUTPUTS
top[0].data Nx1xXxY 1 if image is NOT grayscale
0 if image is grayscale
'''
def setup(self,bottom, top):
if len(bottom) == 0:
raise Exception("Layer should have inputs")
self.thresh = 5 # threshold on ab value
self.N = bottom[0].data.shape[0]
self.X = bottom[0].data.shape[2]
self.Y = bottom[0].data.shape[3]
def reshape(self, bottom, top):
top[0].reshape(self.N,1,self.X,self.Y)
def forward(self, bottom, top):
# if an image has any (a,b) value which exceeds threshold, output 1
top[0].data[...] = (np.sum(np.sum(np.sum(np.abs(bottom[0].data) > self.thresh,axis=1),axis=1),axis=1) > 0)[:,na(),na(),na()]
def backward(self, top, propagate_down, bottom):
# no back-prop
for i in range(len(bottom)):
if not propagate_down[i]:
continue
bottom[i].diff[...] = np.zeros_like(bottom[i].data)
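# A numpy-only sketch of the NonGrayMaskLayer test: an image is flagged as
# color (1) if any of its ab values exceeds the threshold in absolute value.
# The tensor below is a hypothetical mini-batch standing in for bottom[0].

```python
import numpy as np

thresh = 5
ab = np.zeros((2, 2, 3, 3))   # N=2 images, 2 ab channels, 3x3 spatial
ab[1, 0, 1, 1] = 12.0         # second image has one strongly colored pixel
mask = (np.abs(ab) > thresh).reshape(2, -1).any(axis=1)
print(mask)  # [False  True]
```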
class ClassRebalanceMultLayer(caffe.Layer):
''' Layer which boosts gradients by class-rebalancing coefficients
INPUTS
bottom[0] NxMxXxY feature map
bottom[1] Nx1xXxY boost coefficients
OUTPUTS
top[0] NxMxXxY on forward, gets copied from bottom[0]
FUNCTIONALITY
On forward pass, top[0] passes bottom[0]
On backward pass, bottom[0] gets boosted by bottom[1]
through pointwise multiplication (with singleton expansion) '''
def setup(self, bottom, top):
# check input pair
if len(bottom)==0:
raise Exception("Specify inputs")
def reshape(self, bottom, top):
i = 0
if(bottom[i].data.ndim==1):
top[i].reshape(bottom[i].data.shape[0])
elif(bottom[i].data.ndim==2):
top[i].reshape(bottom[i].data.shape[0], bottom[i].data.shape[1])
elif(bottom[i].data.ndim==4):
top[i].reshape(bottom[i].data.shape[0], bottom[i].data.shape[1], bottom[i].data.shape[2], bottom[i].data.shape[3])
def forward(self, bottom, top):
# forward pass: copy bottom[0] through unchanged
top[0].data[...] = bottom[0].data[...]
# top[0].data[...] = bottom[0].data[...]*bottom[1].data[...] # this was bad, would mess up the gradients going up
def backward(self, top, propagate_down, bottom):
for i in range(len(bottom)):
if not propagate_down[i]:
continue
bottom[0].diff[...] = top[0].diff[...]*bottom[1].data[...]
# print 'Back-propagating class rebalance, %i'%i
# ***************************
# ***** SUPPORT CLASSES *****
# ***************************
class PriorFactor():
''' Class handles prior factor '''
# def __init__(self,alpha,gamma=0,verbose=True,priorFile='/home/eecs/rich.zhang/src/projects/cross_domain/save/ab_grid_10/prior_probs.npy',genc=-1):
def __init__(self,alpha,gamma=0,verbose=True,priorFile=''):
# INPUTS
# alpha integer prior correction factor, 0 to ignore prior, 1 to divide by prior, alpha to divide by prior^alpha power
# gamma integer percentage to mix in prior probability
# priorFile file file which contains prior probabilities across classes
# settings
self.alpha = alpha
self.gamma = gamma
self.verbose = verbose
# empirical prior probability
self.prior_probs = np.load(priorFile)
# define uniform probability
self.uni_probs = np.zeros_like(self.prior_probs)
self.uni_probs[self.prior_probs!=0] = 1.
self.uni_probs = self.uni_probs/np.sum(self.uni_probs)
# convex combination of empirical prior and uniform distribution
self.prior_mix = (1-self.gamma)*self.prior_probs + self.gamma*self.uni_probs
# set prior factor
self.prior_factor = self.prior_mix**-self.alpha
self.prior_factor = self.prior_factor/np.sum(self.prior_probs*self.prior_factor) # re-normalize
# implied empirical prior
self.implied_prior = self.prior_probs*self.prior_factor
self.implied_prior = self.implied_prior/np.sum(self.implied_prior) # re-normalize
# add this to the softmax score
# self.softmax_correction = np.log(self.prior_probs/self.implied_prior * (1-self.implied_prior)/(1-self.prior_probs))
if(self.verbose):
self.print_correction_stats()
# if(not check_value(genc,-1)):
# self.expand_grid(genc)
# def expand_grid(self,genc):
# self.prior_probs_full_grid = genc.enc_full_grid_mtx_nd(self.prior_probs,axis=0,returnGrid=True)
# self.uni_probs_full_grid = genc.enc_full_grid_mtx_nd(self.uni_probs,axis=0,returnGrid=True)
# self.prior_mix_full_grid = genc.enc_full_grid_mtx_nd(self.prior_mix,axis=0,returnGrid=True)
# self.prior_factor_full_grid = genc.enc_full_grid_mtx_nd(self.prior_factor,axis=0,returnGrid=True)
# self.implied_prior_full_grid = genc.enc_full_grid_mtx_nd(self.implied_prior,axis=0,returnGrid=True)
# self.softmax_correction_full_grid = genc.enc_full_grid_mtx_nd(self.softmax_correction,axis=0,returnGrid=True)
def print_correction_stats(self):
print 'Prior factor correction:'
print ' (alpha,gamma) = (%.2f, %.2f)'%(self.alpha,self.gamma)
print ' (min,max,mean,med,exp) = (%.2f, %.2f, %.2f, %.2f, %.2f)'%(np.min(self.prior_factor),np.max(self.prior_factor),np.mean(self.prior_factor),np.median(self.prior_factor),np.sum(self.prior_factor*self.prior_probs))
def forward(self,data_ab_quant,axis=1):
# data_ab_quant = net.blobs['data_ab_quant_map_233'].data[...]
data_ab_maxind = np.argmax(data_ab_quant,axis=axis)
corr_factor = self.prior_factor[data_ab_maxind]
if(axis==0):
return corr_factor[na(),:]
elif(axis==1):
return corr_factor[:,na(),:]
elif(axis==2):
return corr_factor[:,:,na(),:]
elif(axis==3):
return corr_factor[:,:,:,na()]
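# The PriorFactor weighting can be reproduced in a few lines of numpy: mix the
# empirical prior with a uniform distribution (gamma), raise to the -alpha
# power, and renormalize so the expected factor under the prior is 1. The
# prior values below are made up for illustration; the real ones come from
# prior_probs.npy.

```python
import numpy as np

prior = np.array([0.7, 0.2, 0.1])   # hypothetical empirical class prior
alpha, gamma = 1.0, 0.5
uni = np.ones_like(prior) / prior.size
mix = (1 - gamma) * prior + gamma * uni
factor = mix ** -alpha
factor = factor / np.sum(prior * factor)   # re-normalize: E_prior[factor] == 1
print(np.sum(prior * factor))  # close to 1 up to float rounding
```

Rare classes (small prior) get the largest boost, which is the whole point of the correction.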
class NNEncode():
''' Encode points using NN search and Gaussian kernel '''
def __init__(self,NN,sigma,km_filepath='',cc=-1):
if(check_value(cc,-1)):
self.cc = np.load(km_filepath)
else:
self.cc = cc
self.K = self.cc.shape[0]
# self.NN = NN
self.NN = int(NN)
self.sigma = sigma
self.nbrs = nn.NearestNeighbors(n_neighbors=NN, algorithm='ball_tree').fit(self.cc)
self.alreadyUsed = False
def encode_points_mtx_nd(self,pts_nd,axis=1,returnSparse=False,sameBlock=True):
t = rz.Timer()
pts_flt = flatten_nd_array(pts_nd,axis=axis)
P = pts_flt.shape[0]
if(sameBlock and self.alreadyUsed):
self.pts_enc_flt[...] = 0 # already pre-allocated
else:
self.alreadyUsed = True
self.pts_enc_flt = np.zeros((P,self.K))
self.p_inds = np.arange(0,P,dtype='int')[:,na()]
(dists,inds) = self.nbrs.kneighbors(pts_flt)
wts = np.exp(-dists**2/(2*self.sigma**2))
wts = wts/np.sum(wts,axis=1)[:,na()]
self.pts_enc_flt[self.p_inds,inds] = wts
pts_enc_nd = unflatten_2d_array(self.pts_enc_flt,pts_nd,axis=axis)
return pts_enc_nd
def decode_points_mtx_nd(self,pts_enc_nd,axis=1):
pts_enc_flt = flatten_nd_array(pts_enc_nd,axis=axis)
pts_dec_flt = np.dot(pts_enc_flt,self.cc)
pts_dec_nd = unflatten_2d_array(pts_dec_flt,pts_enc_nd,axis=axis)
return pts_dec_nd
def decode_1hot_mtx_nd(self,pts_enc_nd,axis=1,returnEncode=False):
pts_1hot_nd = nd_argmax_1hot(pts_enc_nd,axis=axis)
pts_dec_nd = self.decode_points_mtx_nd(pts_1hot_nd,axis=axis)
if(returnEncode):
return (pts_dec_nd,pts_1hot_nd)
else:
return pts_dec_nd
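# A numpy-only sketch of the soft assignment inside NNEncode: each point is
# encoded over its NN nearest cluster centers with Gaussian weights that sum
# to 1. The real class uses sklearn's NearestNeighbors; brute-force distances
# are enough here, and the centers/points below are hypothetical.

```python
import numpy as np

cc = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # 3 cluster centers
pts = np.array([[1.0, 1.0]])                           # 1 point to encode
NN, sigma = 2, 5.0

d2 = np.sum((pts[:, None, :] - cc[None, :, :]) ** 2, axis=2)  # (P, K) sq dists
inds = np.argsort(d2, axis=1)[:, :NN]                         # NN nearest
wts = np.exp(-np.take_along_axis(d2, inds, axis=1) / (2 * sigma ** 2))
wts = wts / np.sum(wts, axis=1, keepdims=True)                # normalize

enc = np.zeros((pts.shape[0], cc.shape[0]))
enc[np.arange(pts.shape[0])[:, None], inds] = wts
print(np.sum(enc, axis=1))  # each row sums to 1
```

Decoding is just the matrix product `enc @ cc`, which is what decode_points_mtx_nd does.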
# *****************************
# ***** Utility functions *****
# *****************************
def check_value(inds, val):
''' Check to see if an array is a single element equaling a particular value
for pre-processing inputs in a function '''
if(np.array(inds).size==1):
if(inds==val):
return True
return False
def na(): # shorthand for new axis
return np.newaxis
def flatten_nd_array(pts_nd,axis=1):
''' Flatten an nd array into a 2d array with a certain axis
INPUTS
pts_nd N0xN1x...xNd array
axis integer
OUTPUTS
pts_flt prod(N \ N_axis) x N_axis array '''
NDIM = pts_nd.ndim
SHP = np.array(pts_nd.shape)
nax = np.setdiff1d(np.arange(0,NDIM),np.array((axis))) # non axis indices
NPTS = np.prod(SHP[nax])
axorder = np.concatenate((nax,np.array(axis).flatten()),axis=0)
pts_flt = pts_nd.transpose((axorder))
pts_flt = pts_flt.reshape(NPTS,SHP[axis])
return pts_flt
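# A quick shape-level sketch of what flatten_nd_array does: the chosen axis is
# moved last and all other axes are collapsed into rows. The array below is an
# arbitrary example.

```python
import numpy as np

a = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
axis = 1
nax = [d for d in range(a.ndim) if d != axis]          # non-axis dimensions
flt = a.transpose(nax + [axis]).reshape(-1, a.shape[axis])
print(flt.shape)  # (40, 3)
```

Row 0 of `flt` is `a[0, :, 0, 0]`, i.e. the axis-1 fiber at the first position of every other dimension.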
def unflatten_2d_array(pts_flt,pts_nd,axis=1,squeeze=False):
''' Unflatten a 2d array with a certain axis
INPUTS
pts_flt prod(N \ N_axis) x M array
pts_nd N0xN1x...xNd array
axis integer
squeeze bool if true, M=1, squeeze it out
OUTPUTS
pts_out N0xN1x...xNd array '''
NDIM = pts_nd.ndim
SHP = np.array(pts_nd.shape)
nax = np.setdiff1d(np.arange(0,NDIM),np.array((axis))) # non axis indices
NPTS = np.prod(SHP[nax])
if(squeeze):
axorder = nax
axorder_rev = np.argsort(axorder)
M = pts_flt.shape[1]
NEW_SHP = SHP[nax].tolist()
# print NEW_SHP
# print pts_flt.shape
pts_out = pts_flt.reshape(NEW_SHP)
pts_out = pts_out.transpose(axorder_rev)
else:
axorder = np.concatenate((nax,np.array(axis).flatten()),axis=0)
axorder_rev = np.argsort(axorder)
M = pts_flt.shape[1]
NEW_SHP = SHP[nax].tolist()
NEW_SHP.append(M)
pts_out = pts_flt.reshape(NEW_SHP)
pts_out = pts_out.transpose(axorder_rev)
return pts_out
# seq_util/pull_longest_seq_from_img_fa.py (fandemonium/code)
import sys
from Bio import SeqIO
import operator
# 1. get genome img_oid from the genecart text file
# 2. create gene sequence dictionary
# 3. add genome img_oid to the gene sequence dictionary
# 4. for genes from the same organism, pull the longest sequence out
gene_cart = open(sys.argv[1], 'rU')
firstline = gene_cart.readline()
oid_dict = {}
for lines in gene_cart:
lexeme = lines.strip().split("\t")
gene_id = lexeme[0]
img_oid = lexeme[3]
if img_oid not in oid_dict:
oid_dict[img_oid] = [gene_id]
else:
oid_dict[img_oid].append(gene_id)
seq_dict = SeqIO.to_dict(SeqIO.parse(open(sys.argv[2]), 'fasta'))
MIN_LENGTH = int(sys.argv[3])
no_genes_out = open(sys.argv[4], 'w')
l = []
for oid in oid_dict:
gene_dict = {}
for gene in oid_dict[oid]:
gene_dict[gene] = seq_dict[gene]
if len(gene_dict) > 0:
longest_seq_key = max(gene_dict.iteritems(), key=operator.itemgetter(1))[0]
if len(gene_dict[longest_seq_key]) >= MIN_LENGTH:
print ">"+ oid + "::" + longest_seq_key + "::" + gene_dict[longest_seq_key].description + "\n" +gene_dict[longest_seq_key].seq
else:
l.append(oid)
else:
l.append(oid)
no_genes_out.write("\n".join(l))
# ZZZ/DES/match_bliss.py (ivmfnal/striped)
from striped.common import Tracer
T = Tracer()
with T["run"]:
with T["imports"]:
from striped.job import SinglePointStripedSession as Session
import numpy as np
from numpy.lib.recfunctions import append_fields
import fitsio, healpy as hp
import sys, time
#job_server_address = ("dbwebdev.fnal.gov", 8765) #development
job_server_address = ("ifdb01.fnal.gov", 8765) #production
session = Session(job_server_address)
input_file = sys.argv[1]
input_filename = input_file.rsplit("/",1)[-1].rsplit(".",1)[-1]
with T["fits/read"]:
input_data = fitsio.read(input_file, ext=2, columns=["ALPHAWIN_J2000","DELTAWIN_J2000"])
with T["hpix"]:
hpix = hp.ang2pix(nside=16384,theta=input_data['ALPHAWIN_J2000'],phi=input_data['DELTAWIN_J2000'],
lonlat=True, nest=True)
hpix = np.asarray(hpix, np.float64)
input_data = append_fields(input_data, "HPIX", hpix)
np.sort(input_data, order="HPIX")
input_data = np.array(zip(input_data['ALPHAWIN_J2000'], input_data['DELTAWIN_J2000'], input_data['HPIX']))
matches = []
class Callback:
def on_streams_update(self, nevents, data):
if "matches" in data:
for m in data["matches"]:
matches.append(m)
for obs_i, cat_id, obs_ra, obs_dec, cat_ra, cat_dec in m:
print "Match: index: %10d RA: %9.4f Dec: %9.4f" % (int(obs_i), obs_ra, obs_dec)
print " COADD oject id: %10d %9.4f %9.4f" % (int(cat_id), cat_ra, cat_dec)
if "message" in data:
for msg in data["message"]:
print msg
def on_exception(self, wid, info):
print "Worker exception:\n--------------------"
print info
print "--------------------"
job = session.createJob("Y3A2",
user_callback = Callback(),
worker_class_file="bliss_match_worker.py",
user_params = {"observations":input_data})
with T["job"]:
job.run()
runtime = job.TFinish - job.TStart
catalog_objects = job.EventsProcessed
print "Compared %d observations against %d catalog objects, elapsed time=%f" % (len(input_data), catalog_objects, runtime)
if matches:
matches = np.concatenate(matches, axis=0)
matches = np.array(matches, dtype=[("INDEX", int),("COADD_OBJECT_ID", int)])
save_fn = input_filename + "_match.fits"
with T["fits/write"]:
fitsio.write(save_fn, matches, clobber=True)
print "Saved %d matches in %s" % (len(matches), save_fn)
else:
print "No matches"
T.printStats()
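The `input_filename` line above strips the directory and the extension from the input path with two `rsplit` calls; a minimal standalone sketch of that idiom (the `base_name` helper name is made up for illustration, it is not part of the script):

```python
def base_name(path):
    # "some/dir/exposure.fits" -> "exposure": the first rsplit drops the
    # directory prefix, the second drops the trailing extension.
    return path.rsplit("/", 1)[-1].rsplit(".", 1)[0]

print(base_name("ZZZ/DES/exposure_42.fits"))  # -> exposure_42
```

Note that `rsplit(".", 1)[-1]` would instead return the extension, which is why the index must be `[0]`.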
| 39.790123 | 131 | 0.501396 | 346 | 3,223 | 4.49711 | 0.378613 | 0.069409 | 0.030848 | 0.028278 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031563 | 0.380701 | 3,223 | 80 | 132 | 40.2875 | 0.747996 | 0.021719 | 0 | 0 | 0 | 0 | 0.160356 | 0.016863 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.122807 | null | null | 0.175439 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a8ec95c8cb86838f72fe414c4922ade690fedb2c | 4,509 | py | Python | materials-downloader.py | goDoCer/imperial-computing-materials-downloader | 7ab906d3b2720d1f7739c50908a367d4a6d3155e | [
"MIT"
] | 10 | 2020-11-02T12:27:16.000Z | 2020-12-23T05:31:03.000Z | materials-downloader.py | prnvbn/imperial-computing-materials-downloader | 7ab906d3b2720d1f7739c50908a367d4a6d3155e | [
"MIT"
] | null | null | null | materials-downloader.py | prnvbn/imperial-computing-materials-downloader | 7ab906d3b2720d1f7739c50908a367d4a6d3155e | [
"MIT"
] | null | null | null | import sys
import os
import json
import subprocess
import datetime as dt
sys.path.insert(1, './lib')
from config import *
from webhelpers import *
from argsparser import *
from getpass import getpass
from distutils.dir_util import remove_tree, copy_tree
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import WebDriverException
if __name__ == "__main__":
args = get_args()
exit = False
############################# NON SELENIUM FLAGS #############################
    auth_path = "lib/auth.json"
    try:
        open(auth_path).close()
    except FileNotFoundError:
        with open(auth_path, "w+") as authfile:
            authfile.write('{"shortcode": "XXXXX", "password": "XXXXX", "directory": "XXXXX"}')
        print(f"Created a template {auth_path}; fill in your credentials and rerun.")
        quit()
    with open(auth_path) as authfile:
        auth = json.load(authfile)
if args.credentials:
[print(f"Your {key} is set as {auth[key]}")
for key in ["shortcode", "directory"]]
exit = True
if args.update_chromedriver:
subprocess.call(["sh", "./get_chromedriver.sh"])
exit = True
if s := args.shortcode:
auth["shortcode"] = s
print(f"Shortcode set to {s}")
exit = True
if args.password:
pswd = getpass('Password:')
        if pswd == "":
            print("Password can not be empty")
        else:
            auth["password"] = pswd
            print("Password has been set")
        exit = True
if d := args.dir:
if os.path.isdir(d):
print(f"Directory set to {d}")
else:
print(f"{d} is not a valid directory!!!")
response = input(
f"Do you want to create directory {d}? (Y/n) ").lower()
            if response in ("y", ""):
print(f"Made directory {d}")
os.mkdir(d)
else:
print(f"Please pass in a valid directory")
auth["directory"] = d
exit = True
with open("lib/auth.json", "wt") as authfile:
json.dump(auth, authfile)
if exit:
quit()
headless = not args.real
verbose = args.verbose
############################# CHROME WEBDRIVER OPTIONS #############################
chrome_options = Options()
if headless:
chrome_options.add_argument("--headless")
chrome_options.add_argument("--window-size=1920x1080")
chrome_options.add_argument("--disable-notifications")
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--verbose')
chrome_options.add_experimental_option("prefs", {
"download.default_directory": "./",
"download.prompt_for_download": False,
"download.directory_upgrade": True,
"safebrowsing_for_trusted_sources_enabled": False,
"safebrowsing.enabled": False
})
try:
driver = webdriver.Chrome(
options=chrome_options, executable_path=CHROMEDRIVER_PATH)
    except (WebDriverException, FileNotFoundError):
print("There is something wrong with your chromedriver installation")
print(
f"Run 'sh get_chromedriver.sh' in {os.getcwd()} to get the latest version")
print("You can also run this command with the -u (--update-chromedriver) flag.")
quit()
driver.get(MATERIALS_URL)
print("authenticating...")
authenticate(driver)
############################# DOWNLOADING #############################
base_dir = "./downloads"
try:
os.makedirs(base_dir)
except Exception:
pass
if args.quick:
download_course(driver, args.quick, base_dir=base_dir, verbose=verbose)
else:
download_courses(driver, base_dir=base_dir, verbose=verbose)
driver.quit()
print("Finishing...")
############################# CLEAN UP #############################
for parent, dirnames, filenames in os.walk(base_dir):
for fn in filenames:
if fn.lower().endswith('.crdownload'):
os.remove(os.path.join(parent, fn))
    # Moving the downloads to the specified directory
save_dir = DIRECTORY
if args.location is not None:
save_dir = args.location
copy_tree(base_dir, save_dir)
remove_tree(base_dir)
print("DONE!!!")
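The `auth.json` bootstrap at the top of the script can be factored into a single helper that writes a placeholder on first run and loads the file otherwise; a sketch, assuming the credentials file is plain JSON (the `load_or_init_auth` name is hypothetical, not part of this repo):

```python
import json
import os

TEMPLATE = {"shortcode": "XXXXX", "password": "XXXXX", "directory": "XXXXX"}

def load_or_init_auth(path):
    # First run: write a template and return None so the caller can stop
    # and ask the user to fill in real credentials before doing anything else.
    if not os.path.exists(path):
        with open(path, "w") as f:
            json.dump(TEMPLATE, f)
        return None
    with open(path) as f:
        return json.load(f)
```

Returning `None` instead of calling `quit()` inside the helper keeps the exit decision at the call site, which is easier to test.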
| 30.466216 | 88 | 0.569306 | 493 | 4,509 | 5.089249 | 0.342799 | 0.051813 | 0.038262 | 0.047828 | 0.062973 | 0.02232 | 0 | 0 | 0 | 0 | 0 | 0.002747 | 0.273453 | 4,509 | 147 | 89 | 30.673469 | 0.763126 | 0.030384 | 0 | 0.141593 | 0 | 0 | 0.226976 | 0.050761 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.079646 | 0.115044 | 0 | 0.115044 | 0.123894 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
a8ed6afe97f49d0c92eeaeda111ff0c3ba602a10 | 16,551 | py | Python | PoliCmm/src/parser.py | jutge-org/cpp2many | 2d2fb1784f2515b3c1a1056e163640e556331766 | [
"MIT"
] | 4 | 2018-04-06T00:18:20.000Z | 2021-10-11T20:25:38.000Z | PoliCmm/src/parser.py | jutge-org/cpp2many | 2d2fb1784f2515b3c1a1056e163640e556331766 | [
"MIT"
] | 1 | 2019-02-27T17:04:43.000Z | 2019-02-28T08:25:12.000Z | PoliCmm/src/parser.py | jutge-org/cpp2many | 2d2fb1784f2515b3c1a1056e163640e556331766 | [
"MIT"
] | 4 | 2019-02-27T17:05:17.000Z | 2021-03-12T10:36:04.000Z | import ply.lex as lex
import ply.yacc as yacc
import lexer
import sys
import ast
tokens = lexer.tokens
precedence = (
('right', 'ELSE'),
)
def p_start (t):
'''start : program'''
t[0] = t[1]
def p_program_01 (t):
'''program : program_part'''
t[0] = ast.Program(t[1])
def p_program_02 (t):
'''program : program program_part'''
t[1].add(t[2])
t[0] = t[1]
def p_program_part (t):
'''program_part : include_directive
| typedef
| structdef
| using_directive
| function_definition
| declaration_statement
| comment
'''
t[0] = t[1]
def p_typedef_01 (t):
'''typedef : typedef_body SEMI'''
t[0] = t[1]
def p_typedef_body (t):
'''typedef_body : TYPEDEF type IDENTIFIER'''
lexer.typedefs[t[3]] = 'TYPEID'
t[0] = ast.TypeDef(t[2], t[3])
def p_structdef (t):
'''structdef : struct_name LBRA struct_elem_list RBRA SEMI'''
t[3].id = t[1]
t[0] = t[3]
def p_struct_name (t):
'''struct_name : STRUCT IDENTIFIER'''
print "Added typeid " + t[2]
lexer.typedefs[t[2]] = 'TYPEID'
t[0] = t[2]
def p_struct_elem_list_01 (t):
'''struct_elem_list : declaration_statement'''
t[0] = ast.StructDef(t[1])
def p_struct_elem_list_02 (t):
'''struct_elem_list : struct_elem_list declaration_statement'''
t[1].add(t[2])
t[0] = t[1]
def p_struct_elem (t):
'''struct_elem : type identifier_list SEMI'''
for c in t[2].children:
c.type = t[1]
t[0] = t[2]
def p_identifier_list_01 (t):
'''identifier_list : IDENTIFIER'''
t[0] = ast.VariableDeclarationStatement(ast.VariableDeclaration(t[1]))
def p_identifier_list_02 (t):
'''identifier_list : identifier_list COMMA IDENTIFIER'''
t[1].add(ast.VariableDeclaration(t[3]))
t[0] = t[1]
def p_comment_01 (t):
'''comment : LINECOM'''
t[0] = ast.LineComment(t[1])
def p_comment_02 (t):
'''comment : BLOCKCOM'''
t[0] = ast.BlockComment(t[1])
def p_include_directive_01 (t):
'''include_directive : INCLUDE LT IDENTIFIER GT
| INCLUDE LT STRING GT
| INCLUDE LT VECTOR GT'''
t[0] = ast.Include(t[3])
def p_include_directive_02 (t):
'''include_directive : INCLUDE STRING_LIT'''
t[0] = ast.Include(t[2])
def p_using_directive (t):
'''using_directive : USING NAMESPACE IDENTIFIER SEMI'''
t[0] = ast.UsingNamespace(t[3])
def p_function_definition_01 (t):
'''function_definition : type IDENTIFIER LPAR RPAR block'''
t[0] = ast.Function(t[2], t[1], ast.FormalParametersList(), t[5])
def p_function_definition_02 (t):
'''function_definition : type IDENTIFIER LPAR formal_parameters_list RPAR block'''
t[0] = ast.Function(t[2], t[1], t[4], t[6])
def p_empty (t):
'''empty :'''
pass
def p_formal_parameters_list_01 (t):
'''formal_parameters_list : formal_parameter'''
t[0] = ast.FormalParametersList(t[1])
def p_formal_parameters_list_02 (t):
'''formal_parameters_list : formal_parameters_list COMMA formal_parameter'''
t[1].add(t[3])
t[0] = t[1]
def p_formal_parameter_01 (t):
'''formal_parameter : type IDENTIFIER'''
t[0] = ast.FormalParameter(t[2], t[1])
t[0].is_ref = False
def p_formal_parameter_02 (t):
'''formal_parameter : type AND IDENTIFIER'''
t[0] = ast.FormalParameter(t[3], t[1])
t[0].is_ref = True
t[0].type.is_reference = True
def p_statement_list_01 (t):
'''statement_list : statement'''
t[1].isStatement = True
t[0] = ast.CompoundStatement(t[1])
def p_statement_list_02 (t):
'''statement_list : statement_list statement'''
t[2].isStatement = True
t[1].add(t[2])
t[0] = t[1]
def p_statement (t):
'''statement : declaration_statement
| cout_statement
| cin_statement
| while_statement
| for_statement
| if_statement
| assignment_statement
| return_statement
| block
| comment
| empty_statement
'''
# | while_statement_cin
t[0] = t[1]
def p_empty_statement (t):
'''empty_statement : '''
t[0] = ast.NullNode()
def p_block (t):
'''block : LBRA statement_list RBRA'''
t[0] = t[2]
def p_cout_statement_01 (t):
'''cout_statement : COUT cout_elements_list SEMI'''
t[0] = t[2]
def p_cout_statement_02 (t):
'''cout_statement : CERR cout_elements_list SEMI'''
t[0] = t[2]
def p_cout_statement_03 (t):
'''cout_statement : COUT DOT IDENTIFIER LPAR actual_parameters_list RPAR SEMI'''
t[0] = ast.CoutModifier(t[3], t[5])
def p_cout_statement_04 (t):
'''cout_statement : CERR DOT IDENTIFIER LPAR actual_parameters_list RPAR SEMI'''
t[0] = ast.CoutModifier(t[3], t[5])
def p_cout_elements_list_01 (t):
'''cout_elements_list : LPUT cout_element'''
t[0] = ast.CoutStatement(t[2])
def p_cout_elements_list_02 (t):
'''cout_elements_list : cout_elements_list LPUT cout_element'''
t[1].add(t[3])
t[0] = t[1]
def p_cout_element_01 (t):
'''cout_element : ENDL'''
t[0] = ast.CoutBreakLine();
def p_cout_element_02 (t):
'''cout_element : lor_expression'''
t[0] = ast.CoutElement(t[1])
def p_cin_bloc (t):
'''cin_bloc : CIN cin_elements_list'''
t[0] = t[2]
t[0].is_expression = True
def p_cin_statement (t):
'''cin_statement : CIN cin_elements_list SEMI'''
t[0] = t[2]
t[0].is_expression = False
def p_cin_elements_list_01 (t):
'''cin_elements_list : RPUT reference_expression'''
t[0] = ast.CinStatement(t[2])
def p_cin_elements_list_02 (t):
'''cin_elements_list : cin_elements_list RPUT reference_expression'''
t[1].add(t[3])
t[0] = t[1]
def p_literal_01 (t):
'''literal : INTEGER_LIT'''
t[0]=ast.IntLiteral(t[1])
def p_literal_02 (t):
'''literal : REAL_LIT'''
t[0]=ast.FloatLiteral(t[1])
def p_literal_03 (t):
'''literal : TRUE
| FALSE'''
t[0]=ast.BoolLiteral(t[1])
def p_literal_04 (t):
'''literal : STRING_LIT'''
t[0]=ast.StringLiteral(t[1])
def p_literal_05 (t):
'''literal : CHAR_LIT'''
t[0]=ast.CharLiteral(t[1])
def p_factor_01 (t):
'''factor : literal'''
t[0] = t[1]
def p_factor_02 (t):
'''factor : reference_expression'''
t[0] = t[1]
def p_factor_03(t):
'''factor : LPAR assignment_expression RPAR'''
t[0] = ast.Parenthesis(t[2])
def p_factor_04 (t):
'''factor : IDENTIFIER LPAR actual_parameters_list RPAR'''
t[0] = ast.FunctionCall(t[1], t[3])
def p_factor_05 (t):
'''factor : IDENTIFIER COLONCOLON assignment_expression'''
t[0] = t[3]
def p_factor_06 (t):
'''factor : reference_expression DOT IDENTIFIER LPAR actual_parameters_list RPAR'''
t[0] = ast.FunctionCall(t[3], t[5], t[1])
def p_factor_07 (t):
'''factor : type LPAR actual_parameters_list RPAR'''
t[0] = ast.Constructor(t[1], t[3])
def p_factor_08 (t):
'''factor : LPAR type RPAR assignment_expression'''
t[0] = ast.CastExpression(t[2], t[4])
def p_reference_expression_01 (t):
'''reference_expression : IDENTIFIER'''
t[0] = ast.Identifier(t[1])
def p_reference_expression_02 (t):
'''reference_expression : reference_expression LCOR relational_expression RCOR'''
t[0] = ast.Reference(t[1], t[3])
def p_reference_expression_03 (t):
'''reference_expression : reference_expression DOT IDENTIFIER'''
t[0] = ast.StructReference(t[1], t[3])
def p_unary_expression_01(t):
'''unary_expression : unary_operator factor
| PLUSPLUS unary_expression
| MINUSMINUS unary_expression
'''
t[0]=ast.UnaryOp(t[1],t[2])
t[0].pre = True
def p_unary_expression_02(t):
'''unary_expression : unary_expression PLUSPLUS
| unary_expression MINUSMINUS
'''
t[0]=ast.UnaryOp(t[2],t[1])
t[0].pre = False
def p_unary_expression_03(t):
'''unary_expression : factor
'''
t[0]=t[1]
# still need to handle the ++ case
def p_cast_expression_01(t):
'''
cast_expression : unary_expression
'''
t[0]=t[1]
def p_cast_expression_02(t):
'''
cast_expression : type LPAR lor_expression RPAR
'''
t[0]=ast.CastExpression(t[1],t[3])
def p_multiplicative_expression_01(t):
'''
multiplicative_expression : unary_expression
'''
t[0]=t[1]
def p_multiplicative_expression_02(t):
'''
multiplicative_expression : multiplicative_expression multiplicative_operator unary_expression
'''
t[0]=ast.BinaryOp(t[1],t[2],t[3]);
def p_additive_expression_01(t):
'''
additive_expression : multiplicative_expression
'''
t[0]=t[1]
def p_additive_expression_02(t):
'''
additive_expression : additive_expression additive_operator multiplicative_expression
'''
t[0]=ast.BinaryOp(t[1],t[2],t[3])
#def p_shift_expression_01(t):
#'''
#shift_expression : additive_expression
#'''
#t[0]=t[1]
#def p_shift_expression_02(t):
#'''
#shift_expression : shift_expression shift_operator additive_expression
#'''
#t[0]=ast.BinaryOp(t[1],t[2],t[3])
def p_relational_expression_01(t):
'''
relational_expression : additive_expression
'''
t[0]=t[1]
def p_relational_expression_02(t):
'''
relational_expression : relational_expression relational_operator additive_expression
'''
t[0]=ast.BinaryOp(t[1],t[2],t[3])
def p_equality_expression_01(t):
'''
equality_expression : relational_expression
'''
t[0]=t[1]
def p_equality_expression_02(t):
'''
equality_expression : equality_expression equality_operator relational_expression
'''
t[0]=ast.BinaryOp(t[1],t[2],t[3])
def p_and_expression_01(t):
'''
and_expression : equality_expression
'''
t[0]=t[1]
def p_and_expression_02(t):
'''
and_expression : and_expression AND equality_expression
'''
t[0]=ast.BinaryOp(t[1],t[2],t[3])
def p_xor_expression_01(t):
'''
xor_expression : and_expression
'''
t[0]=t[1]
def p_xor_expression_02(t):
'''
xor_expression : xor_expression XOR and_expression
'''
t[0]=ast.BinaryOp(t[1],t[2],t[3])
def p_or_expression_01(t):
'''
or_expression : xor_expression
| cin_bloc
'''
t[0]=t[1]
def p_or_expression_02(t):
'''
or_expression : or_expression OR xor_expression
'''
t[0]=ast.BinaryOp(t[1],t[2],t[3])
def p_land_expression_01(t):
'''
land_expression : or_expression
'''
t[0]=t[1]
def p_land_expression_02(t):
'''
land_expression : land_expression LAND or_expression
'''
t[0]=ast.BinaryOp(t[1],t[2],t[3])
def p_lor_expression_01(t):
'''
lor_expression : land_expression
'''
t[0]=t[1]
def p_lor_expression_02(t):
'''
lor_expression : lor_expression LOR land_expression
'''
t[0]=ast.BinaryOp(t[1],t[2],t[3])
def p_assignment_expression_01(t):
'''
assignment_expression : lor_expression
'''
t[0]=t[1]
def p_assignment_expression_02(t): # a=b=3
'''
assignment_expression : reference_expression assignment_operator assignment_expression
'''
    t[0]=ast.AssignmentStatement(t[1],t[2],t[3]) # careful: chained assignment (a = b = 3) can get messy here; CHECK this
def p_declaration_statement_01(t):
'''
declaration_statement : type declaration_list SEMI
'''
    # create a declaration node for each element of the declarator list
for c in t[2].children:
c.type=t[1]
t[0]=t[2]
#def p_declaration_statement_02(t):
#'''
#declaration_statement : declaration_statement_init
#'''
## create a declaration node for each element of the declarator list
#t[0]=t[1]
#def p_declaration_statement_init(t):
#'''
#declaration_statement_init : type declaration_list EQUALS initializer SEMI
#'''
## create a declaration node for each element of the declarator list
#for c in t[2].children:
#c.type=t[1]
#c.init=t[4]
#t[0]=t[2]
#def p_declaration_statement_03(t):
# '''
# declaration_statement : struct ID LBRA RBRA
# '''
def p_declaration_list_01(t):
'''
declaration_list : declaration_list COMMA declaration
'''
t[1].add(t[3])
t[0]=t[1]
def p_declaration_list_02(t):
'''
declaration_list : declaration
'''
t[0]=ast.VariableDeclarationStatement(t[1])
def p_declaration_01(t):
'''
declaration : IDENTIFIER
'''
t[0]=ast.VariableDeclaration(t[1])
def p_declaration_02(t):
'''
declaration : IDENTIFIER EQUALS initializer
'''
t[0]=ast.VariableDeclaration(t[1])
t[0].init = t[3]
def p_declaration_03(t):
'''
declaration : IDENTIFIER LPAR actual_parameters_list RPAR
'''
t[0]=ast.VariableDeclaration(t[1])
t[0].params = t[3]
def p_declaration_04(t):
'''
declaration : IDENTIFIER LPAR RPAR
'''
t[0]=ast.VariableDeclaration(t[1])
t[0].cons = ast.ActualParametersList()
def p_initializer(t): # could be extended with vectors
'''
initializer : lor_expression
'''
t[0]=t[1]
def p_assignment_statement(t):
'''
assignment_statement : assignment_expression SEMI
'''
t[0]=t[1]
def p_type_01 (t):
'''type : TYPEID'''
t[0] = ast.CustomType(t[1])
def p_type_02 (t):
'''type : VOID
| INT
| FLOAT
| DOUBLE
| CHAR
| BOOL
| STRING'''
t[0] = ast.Type(t[1])
def p_type_03 (t): # PRODUCES AMBIGUITY
'''type : CONST type'''
t[0] = t[2]
t[0].constant = True
def p_type_04 (t):
'''type : VECTOR LT type GT'''
t[0] = ast.VectorType(t[1], t[3])
def p_unary_operator(t):
'''
unary_operator : MINUS
| LNOT
'''
t[0]=t[1]
def p_multiplicative_operator(t):
'''
multiplicative_operator : MULT
| DIV
| MOD
'''
t[0]=t[1]
def p_additive_operator(t):
'''
additive_operator : PLUS
| MINUS
'''
t[0]=t[1]
def p_shift_operator(t):
'''
shift_operator : RPUT
| LPUT
'''
t[0]=t[1]
def p_relational_operator(t):
'''
relational_operator : GT
| LT
| LE
| GE
'''
t[0]=t[1]
def p_equality_operator(t):
'''
equality_operator : EQ
| NE
'''
t[0]=t[1]
def p_assignment_operator(t):
'''
assignment_operator : EQUALS
| MULTEQUAL
| DIVEQUAL
| MODEQUAL
| PLUSEQUAL
| MINUSEQUAL
| ANDEQUAL
| OREQUAL
| XOREQUAL
| RIGHTSHIFTEQUAL
| LEFTSHIFTEQUAL
'''
t[0]=t[1]
def p_while_statement_01 (t):
'''while_statement : WHILE LPAR lor_expression RPAR statement'''
t[0] = ast.WhileStatement(t[3], t[5])
t[5].isStatement = True
def p_while_statement_02 (t):
'''while_statement : WHILE LPAR lor_expression RPAR SEMI'''
t[0] = ast.WhileStatement(t[3], ast.NullNode())
#def p_while_statement_cin (t):
#'''while_statement_cin : WHILE LPAR cin_bloc RPAR statement'''
#t[0] = ast.WhileStatementCin(t[3], t[5])
def p_for_statement (t):
'''for_statement : FOR LPAR assignment_statement assignment_statement assignment_expression RPAR statement'''
t[0] = ast.ForStatement(t[3], t[4], t[5], t[7])
t[7].isStatement = True
def p_for_statement_init (t):
'''for_statement : FOR LPAR declaration_statement assignment_statement assignment_expression RPAR statement'''
t[0] = ast.ForStatementInit(t[3], t[4], t[5], t[7])
t[7].isStatement = True
def p_if_statement_01 (t):
'''if_statement : IF LPAR assignment_expression RPAR statement'''
t[0] = ast.IfStatement(t[3], t[5])
t[5].isStatement = True
def p_if_statement_02(t):
'''if_statement : IF LPAR assignment_expression RPAR statement ELSE statement'''
t[0] = ast.IfStatement(t[3], t[5], t[7])
t[5].isStatement = True
t[7].isStatement = True
def p_return_statement_01 (t):
'''return_statement : RETURN assignment_statement'''
t[0] = ast.ReturnStatement(t[2])
def p_return_statement_02 (t):
'''return_statement : RETURN SEMI'''
t[0] = ast.ReturnStatement(None)
def p_actual_parameters_list_01 (t):
'''actual_parameters_list : empty'''
t[0] = ast.ActualParametersList()
def p_actual_parameters_list_02 (t):
'''actual_parameters_list : actual_parameter'''
t[0] = ast.ActualParametersList(t[1])
def p_actual_parameters_list_03 (t):
'''actual_parameters_list : actual_parameters_list COMMA actual_parameter'''
t[1].add(t[3])
t[0] = t[1]
def p_actual_parameter (t):
'''actual_parameter : assignment_expression'''
t[0] = t[1]
def p_error (t):
print 'Syntax error around line %d in token %s.' % (t.lineno, t.type)
yacc.errok()
#raise Exception('Syntax error around line %d in token %s.' % (t.lineno, t.type))
# Build the parser
parser = yacc.yacc()
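Left-recursive rules such as `p_additive_expression_02` make the binary operators left-associative: each reduction wraps the tree built so far as the left child of a new `BinaryOp`. A ply-free sketch of that folding (the `BinaryOp` and `fold_left` names here are illustrative stand-ins, not the `ast` module used above):

```python
class BinaryOp(object):
    def __init__(self, left, op, right):
        self.left, self.op, self.right = left, op, right

    def __repr__(self):
        return "(%r %s %r)" % (self.left, self.op, self.right)

def fold_left(first, rest):
    # Mirrors repeated reductions of `expr : expr op term`:
    # the tree grows down the left spine, one operator at a time.
    node = first
    for op, operand in rest:
        node = BinaryOp(node, op, operand)
    return node
```

`fold_left(1, [("+", 2), ("-", 3)])` prints as `((1 + 2) - 3)`, matching how `1 + 2 - 3` groups under a left-recursive grammar.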
| 21.921854 | 114 | 0.638995 | 2,422 | 16,551 | 4.129645 | 0.098266 | 0.025995 | 0.033993 | 0.034793 | 0.426115 | 0.330234 | 0.280444 | 0.218256 | 0.192462 | 0.153369 | 0 | 0.040299 | 0.208386 | 16,551 | 754 | 115 | 21.950928 | 0.723096 | 0.069241 | 0 | 0.286738 | 0 | 0 | 0.008567 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.003584 | 0.017921 | null | null | 0.007168 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
a8f6ec64b56c58fb68898e4ec50ed4fb8d84702a | 431 | py | Python | week08/states_utils.py | thashmadech/is445_spring2022 | 034f71ca545bf06fb2491d818ceb3f8dd6bba8b7 | [
"BSD-3-Clause"
] | 1 | 2019-08-11T04:03:24.000Z | 2019-08-11T04:03:24.000Z | week08/states_utils.py | thashmadech/is445_spring2022 | 034f71ca545bf06fb2491d818ceb3f8dd6bba8b7 | [
"BSD-3-Clause"
] | 1 | 2020-03-02T00:11:33.000Z | 2020-03-02T00:11:33.000Z | week08/states_utils.py | thashmadech/is445_spring2022 | 034f71ca545bf06fb2491d818ceb3f8dd6bba8b7 | [
"BSD-3-Clause"
] | 5 | 2022-01-30T19:45:48.000Z | 2022-03-07T04:15:37.000Z | import numpy as np
def get_ids_and_names(states_map):
ids = []
state_names = []
state_data_vec = states_map.map_data['objects']['subunits']['geometries']
for i in range(len(state_data_vec)):
if state_data_vec[i]['properties'] is not None:
state_names.append(state_data_vec[i]['properties']['name'])
ids.append(state_data_vec[i]['id'])
return np.array(ids), np.array(state_names)
| 39.181818 | 77 | 0.663573 | 65 | 431 | 4.107692 | 0.492308 | 0.168539 | 0.224719 | 0.146067 | 0.265918 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.187935 | 431 | 10 | 78 | 43.1 | 0.762857 | 0 | 0 | 0 | 0 | 0 | 0.118329 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d100ae3537b01cb4189c0cbb205089a37a69ed98 | 14,136 | py | Python | flybrainlab/utilities/neurometry.py | FlyBrainLab/FBLClient | c85de23d428a38fe13491b2f5eb30b690610108e | [
"BSD-3-Clause"
] | 3 | 2020-07-23T05:51:22.000Z | 2021-12-24T11:40:30.000Z | flybrainlab/utilities/neurometry.py | FlyBrainLab/FBLClient | c85de23d428a38fe13491b2f5eb30b690610108e | [
"BSD-3-Clause"
] | 3 | 2020-07-31T05:08:35.000Z | 2021-01-08T17:55:16.000Z | flybrainlab/utilities/neurometry.py | FlyBrainLab/FBLClient | c85de23d428a38fe13491b2f5eb30b690610108e | [
"BSD-3-Clause"
] | 1 | 2019-02-03T02:03:00.000Z | 2019-02-03T02:03:00.000Z | import pandas as pd
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.metrics import pairwise_distances
import networkx as nx
def generate_neuron_stats(_input, scale = 'mum', scale_coefficient = 1., log=False):
"""Generates statistics for a given neuron.
# Arguments:
_input (str or np.array): Name of the file to use, or the numpy array to get as input.
scale (str): Optional. Name of the measurement scale. Default to 'mum'.
scale_coefficient (float): Optional. A number to multiply the input with if needed. Defaults to 1.
# Returns:
dict: A result dictionary with all results.
"""
if isinstance(_input, str):
a = pd.read_csv(_input, sep=' ', header=None, comment='#')
X = a.values
else:
X = _input
if X.shape[1]>7:
X = X[:, X.shape[1]-7:]
G = nx.DiGraph()
distance = 0
surface_area = 0
volume = 0
X[:,2:5] = X[:,2:5] * scale_coefficient
for i in range(X.shape[0]):
if X[i,6] != -1:
G.add_node(i)
parent = np.where(X[:,0] == X[i,6])[0][0]
x_parent = X[parent,2:5]
x = X[i,2:5]
h = np.sqrt(np.sum(np.square(x_parent-x)))
G.add_edge(parent,i,weight=h)
distance += h
r_parent = X[parent,5]
r = X[i,5]
surface_area += np.pi * (r + r_parent) * np.sqrt(np.square(r-r_parent)+np.square(h))
volume += np.pi/3.*(r*r+r*r_parent+r_parent*r_parent)*h
XX = X[:,2:5]
w = np.abs(np.max(XX[:,0])-np.min(XX[:,0]))
h = np.abs(np.max(XX[:,1])-np.min(XX[:,1]))
d = np.abs(np.max(XX[:,2])-np.min(XX[:,2]))
bifurcations = len(X[:,6])-len(np.unique(X[:,6]))
max_euclidean_dist = np.max(pdist(XX))
max_path_dist = nx.dag_longest_path_length(G)
if log == True:
print('Total Length: ', distance, scale)
print('Total Surface Area: ', surface_area, scale+'^2')
print('Total Volume: ', volume, scale+'^3')
print('Maximum Euclidean Distance: ', max_euclidean_dist, scale)
print('Width (Orientation Variant): ', w, scale)
print('Height (Orientation Variant): ', h, scale)
print('Depth (Orientation Variant): ', d, scale)
print('Average Diameter: ', 2*np.mean(X[:,5]), scale)
print('Number of Bifurcations:', bifurcations)
print('Max Path Distance: ', max_path_dist, scale)
results = {}
results['Total Length'] = distance
results['Total Surface Area'] = surface_area
results['Total Volume'] = volume
results['Maximum Euclidean Distance'] = max_euclidean_dist
results['Width (Orientation Variant)'] = w
results['Height (Orientation Variant)'] = h
results['Depth (Orientation Variant)'] = d
results['Average Diameter'] = 2*np.mean(X[:,5])
results['Number of Bifurcations'] = bifurcations
results['Max Path Distance'] = max_path_dist
return results
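Per segment, `generate_neuron_stats` models the neurite between a node and its parent as a conical frustum: lateral surface `pi*(r1+r2)*sqrt((r1-r2)^2+h^2)` and volume `pi/3*(r1^2+r1*r2+r2^2)*h`. A standalone sketch of just those two formulas (helper names are hypothetical):

```python
import math

def frustum_lateral_area(r1, r2, h):
    # Slant height of the cone section between the two radii.
    slant = math.sqrt((r1 - r2) ** 2 + h ** 2)
    return math.pi * (r1 + r2) * slant

def frustum_volume(r1, r2, h):
    return math.pi / 3.0 * (r1 ** 2 + r1 * r2 + r2 ** 2) * h
```

With `r1 == r2` both expressions reduce to the cylinder formulas, which makes a quick sanity check.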
def generate_naquery_neuron_stats(res, node):
"""Generates statistics for a given NAqueryResult.
# Arguments:
res (NAqueryResult): Name of the NAqueryResult structure to use.
node (str): id of the node to use.
# Returns:
dict: A result dictionary with all results.
"""
x = res.graph.nodes[node]
X = np.vstack((np.array(x['sample']),
np.array(x['identifier']),
np.array(x['x']),
np.array(x['y']),
np.array(x['z']),
np.array(x['r']),
np.array(x['parent']))).T
return generate_neuron_stats(X)
def morphometrics(res):
    """Computes the morphometric measurements of neurons in an NAqueryResult.
    # Arguments:
        res (flybrainlab.graph.NAqueryResult): query result from a NeuroArch query.
    # Returns:
        pandas.DataFrame: a data frame with morphometric measurements in each row and neuron unames in each column
"""
metrics = {}
for rid, attributes in res.neurons.items():
morphology_data = [res.graph.nodes[n] for n in res.getData(rid) \
if res.graph.nodes[n]['class'] == 'MorphologyData' \
and res.graph.nodes[n]['morph_type'] == 'swc']
if len(morphology_data):
x = morphology_data[0]
X = np.vstack((np.array(x['sample']),
np.array(x['identifier']),
np.array(x['x']),
np.array(x['y']),
np.array(x['z']),
np.array(x['r']),
np.array(x['parent']))).T
uname = attributes['uname']
metrics[uname] = generate_neuron_stats(X)
return pd.DataFrame.from_dict(metrics)
def generate_neuron_shape(_input, scale = 'mum', scale_coefficient = 1., log=False):
"""Generates shape structures for the specified neuron.
# Arguments:
_input (str or np.array): Name of the file to use, or the numpy array to get as input.
scale (str): Optional. Name of the measurement scale. Default to 'mum'.
scale_coefficient (float): Optional. A number to multiply the input with if needed. Defaults to 1.
# Returns:
X: A result matrix with the contents of the input.
G: A directed networkx graph with the contents of the input.
distances: List of all distances in the .swc file.
"""
if isinstance(_input, str):
a = pd.read_csv(_input, sep=' ', header=None, comment='#')
X = a.values
else:
X = _input
if X.shape[1]>7:
X = X[:, X.shape[1]-7:]
G = nx.DiGraph()
X[:,2:5] = X[:,2:5] * scale_coefficient
distances = []
for i in range(X.shape[0]):
if X[i,6] != -1:
parent = np.where(X[:,0] == X[i,6])[0][0]
x_parent = X[parent,2:5]
G.add_node(i, position_data = X[i,2:5], parent_position_data = X[parent,2:5], r = X[i,5])
x = X[i,2:5]
h = np.sqrt(np.sum(np.square(x_parent-x)))
G.add_edge(parent,i,weight=h)
distances.append(h)
else:
G.add_node(i, position_data = X[i,2:5], parent_position_data = X[i,2:5], r = X[i,5])
return X, G, distances
def fix_swc(swc_file, new_swc_file,
percentile_cutoff = 50,
similarity_cutoff = 0.40,
distance_multiplier = 5):
"""Tries to fix connectivity errors in a given swc file.
# Arguments:
swc_file (str or np.array): Name of the file to use, or the numpy array to get as input.
new_swc_file (str): Name of the new swc file to use as output.
percentile_cutoff (int): Optional. Percentile to use for inter-node cutoff distance during reconstruction for connecting two nodes. Defaults to 50.
        similarity_cutoff (float): Optional. Cosine similarity cutoff value between two endpoints' branches during reconstruction. Defaults to 0.40.
        distance_multiplier (float): Optional. A multiplier to multiply percentile_cutoff with. Defaults to 5.
# Returns:
G: A directed networkx graph with the contents of the input.
G_d: A directed networkx graph with the contents of the input after the fixes.
"""
X, G, distances = generate_neuron_shape(swc_file)
endpoints = []
endpoint_vectors = []
endpoint_dirs = []
for i in G.nodes():
if len(list(G.successors(i)))==0:
endpoints.append(i)
endpoint_vectors.append(G.nodes()[i]['position_data'])
direction = G.nodes()[i]['position_data'] - G.nodes()[i]['parent_position_data']
if np.sqrt(np.sum(np.square(direction)))>0.:
direction = direction / np.sqrt(np.sum(np.square(direction)))
endpoint_dirs.append(direction)
endpoint_vectors = np.array(endpoint_vectors)
endpoint_dirs = np.array(endpoint_dirs)
distance_cutoff = np.percentile(distances,percentile_cutoff)
X_additions = []
X_a_idx = int(np.max(X[:,0]))+1
G_d = G.copy()
for idx_a_i in range(len(endpoints)):
for idx_b_j in range(idx_a_i+1,len(endpoints)):
idx_a = endpoints[idx_a_i]
idx_b = endpoints[idx_b_j]
if np.abs(np.sum(np.multiply(endpoint_dirs[idx_a_i],endpoint_dirs[idx_b_j])))>similarity_cutoff:
x = X[idx_b,2:5]
x_parent = X[idx_a,2:5]
if np.sqrt(np.sum(np.square(x_parent-x)))<distance_multiplier * distance_cutoff:
X_additions.append([X_a_idx,0,X[idx_b,2],X[idx_b,3],X[idx_b,4],X[idx_b,5],X[idx_a,0]])
X_a_idx += 1
G_d.add_edge(idx_a, idx_b)
X_additions = np.array(X_additions)
X_all = np.vstack((X, X_additions))
X_pd = pd.DataFrame(X_all)
X_pd[0] = X_pd[0].astype(int)
X_pd[1] = X_pd[1].astype(int)
X_pd[6] = X_pd[6].astype(int)
X_pd.to_csv(new_swc_file, sep=' ', header=None, index=None)
return G, G_d
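`fix_swc` pairs endpoints whose branch directions are nearly parallel or antiparallel: both directions are normalized to unit vectors, so `abs(dot) > similarity_cutoff` is an absolute cosine-similarity test. A dependency-free sketch of that test (function names are made up for illustration):

```python
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n > 0 else list(v)

def abs_cosine(u, v):
    # |u . v| for unit vectors: 1.0 means parallel or antiparallel,
    # 0.0 means perpendicular. Taking the absolute value lets two
    # branch tips that point toward each other still count as a match.
    return abs(sum(a * b for a, b in zip(unit(u), unit(v))))
```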
def fix_swc_components(swc_file, new_swc_file,
percentile_cutoff = 50,
similarity_cutoff = 0.40,
distance_multiplier = 5):
"""Tries to fix connectivity errors in a given swc file and connect disconnected components.
# Arguments:
swc_file (str or np.array): Name of the file to use, or the numpy array to get as input.
new_swc_file (str): Name of the new swc file to use as output.
percentile_cutoff (int): Optional. Percentile to use for inter-node cutoff distance during reconstruction for connecting two nodes. Defaults to 50.
        similarity_cutoff (float): Optional. Cosine similarity cutoff value between two endpoints' branches during reconstruction. Defaults to 0.40.
        distance_multiplier (float): Optional. A multiplier to multiply percentile_cutoff with. Defaults to 5.
# Returns:
G: A directed networkx graph with the contents of the input.
G_d: A directed networkx graph with the contents of the input after the fixes.
G_d_uncon: An undirected networkx graph with the contents of the input after the fixes with no disconnected components.
"""
X, G, distances = generate_neuron_shape(swc_file)
endpoints = []
endpoint_vectors = []
endpoint_dirs = []
for i in G.nodes():
if len(list(G.successors(i)))==0:
endpoints.append(i)
endpoint_vectors.append(G.nodes()[i]['position_data'])
direction = G.nodes()[i]['position_data'] - G.nodes()[i]['parent_position_data']
if np.sqrt(np.sum(np.square(direction)))>0.:
direction = direction / np.sqrt(np.sum(np.square(direction)))
endpoint_dirs.append(direction)
endpoint_vectors = np.array(endpoint_vectors)
endpoint_dirs = np.array(endpoint_dirs)
distance_cutoff = np.percentile(distances,percentile_cutoff)
X_additions = []
X_a_idx = int(np.max(X[:,0]))+1
G_d = G.copy()
for idx_a_i in range(len(endpoints)):
for idx_b_j in range(idx_a_i+1,len(endpoints)):
idx_a = endpoints[idx_a_i]
idx_b = endpoints[idx_b_j]
if np.abs(np.sum(np.multiply(endpoint_dirs[idx_a_i],endpoint_dirs[idx_b_j])))>similarity_cutoff:
x = X[idx_b,2:5]
x_parent = X[idx_a,2:5]
if np.sqrt(np.sum(np.square(x_parent-x)))<distance_multiplier * distance_cutoff:
X_additions.append([X_a_idx,0,X[idx_b,2],X[idx_b,3],X[idx_b,4],X[idx_b,5],X[idx_a,0]])
X_a_idx += 1
G_d.add_edge(idx_a, idx_b)
G_d_uncon = nx.Graph(G_d)
processing = True
X_disconnected_additions = []
while processing == True:
components = []
for component in nx.connected_components(G_d_uncon):
components.append(list(component))
if len(components)<2:
processing = False
else:
print(len(components))
components_endpoints = []
component_matrices = []
for component in components:
component_endpoints = []
component_matrix = []
for i in component:
if i in endpoints:
component_endpoints.append(i)
component_matrix.append(G_d_uncon.nodes()[i]['position_data'])
component_matrix = np.array(component_matrix)
components_endpoints.append(component_endpoints)
component_matrices.append(component_matrix)
max_dist = 10000.
min_a = 0
min_b = 0
min_vals = None
for component_idx in range(len(components)):
for component_idx_b in range(component_idx+1, len(components)):
DD = pairwise_distances(component_matrices[component_idx], component_matrices[component_idx_b])
if np.min(DD)<max_dist:
max_dist = np.min(DD)
min_a = component_idx
min_b = component_idx_b
min_vals = np.unravel_index(DD.argmin(), DD.shape)
G_d_uncon.add_edge(components_endpoints[min_a][min_vals[0]], components_endpoints[min_b][min_vals[1]])
print(min_a, min_b)
X_disconnected_additions.append([X_a_idx,0,X[components_endpoints[min_a][min_vals[0]],2],X[components_endpoints[min_a][min_vals[0]],3],X[components_endpoints[min_a][min_vals[0]],4],X[components_endpoints[min_a][min_vals[0]],5],X[components_endpoints[min_b][min_vals[1]],0]])
X_a_idx += 1
X_additions = np.array(X_additions)
X_disconnected_additions = np.array(X_disconnected_additions)
X_all = np.vstack((X, X_additions, X_disconnected_additions))
X_pd = pd.DataFrame(X_all)
X_pd[0] = X_pd[0].astype(int)
X_pd[1] = X_pd[1].astype(int)
X_pd[6] = X_pd[6].astype(int)
X_pd.to_csv(new_swc_file, sep=' ', header=None, index=None)
return G, G_d, G_d_uncon | 44.037383 | 286 | 0.599392 | 2,008 | 14,136 | 4.046315 | 0.114542 | 0.0224 | 0.016738 | 0.010831 | 0.662646 | 0.640246 | 0.622523 | 0.593846 | 0.5632 | 0.544738 | 0 | 0.016045 | 0.276952 | 14,136 | 321 | 287 | 44.037383 | 0.778887 | 0.225311 | 0 | 0.534188 | 1 | 0 | 0.059831 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025641 | false | 0 | 0.021368 | 0 | 0.07265 | 0.051282 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
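# The endpoint-pairing test in fix_swc_components keeps a candidate pair only
# when the absolute cosine similarity of the two unit branch directions exceeds
# similarity_cutoff. A pure-Python sketch of just that criterion (the vectors
# below are illustrative, not taken from any SWC file):

```python
def directions_compatible(dir_a, dir_b, similarity_cutoff=0.40):
    """Absolute dot product of two unit vectors, i.e. |cos(angle)| > cutoff.

    Mirrors the np.abs(np.sum(np.multiply(...))) test above for unit vectors.
    """
    dot = sum(a * b for a, b in zip(dir_a, dir_b))
    return abs(dot) > similarity_cutoff


# Anti-parallel directions have |cos| = 1, so they still count as compatible;
# orthogonal directions have |cos| = 0 and are rejected.
print(directions_compatible((1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)))  # True
print(directions_compatible((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))   # False
```
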
# ============================================================================
# File: extensions/customer_action.py
# Repo: time-track-tool/time-track-tool  (license: MIT)
# ============================================================================

#! /usr/bin/python
# Copyright (C) 2006 Dr. Ralf Schlatterbeck Open Source Consulting.
# Reichergasse 131, A-3411 Weidling.
# Web: http://www.runtux.com Email: office@runtux.com
# All rights reserved
# ****************************************************************************
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
# ****************************************************************************
#
#++
# Name
# customer_action
#
# Purpose
#
# Various actions for editing of customer data
#--
from roundup.cgi.actions import Action, EditItemAction, SearchAction
from roundup.cgi.exceptions import Redirect
from roundup.exceptions import Reject
from roundup.cgi import templating
from roundup.date import Date, Interval, Range
from roundup.cgi.TranslationService import get_translation
from roundup.hyperdb import Multilink
class Create_New_Address (Action) :
    """ Create a new address to be added to the current item (used for
        customer addresses, supply_address, invoice_address,
        contact_person).
        Optional @frm specifies the address to copy.
    """
    copy_attributes = \
        [ 'adr_type', 'birthdate', 'city', 'country'
        , 'firstname', 'function', 'initial', 'lastname', 'lettertitle'
        , 'postalcode', 'salutation', 'street', 'title'
        , 'valid'
        ]

    def handle (self) :
        self.request = templating.HTMLRequest (self.client)
        assert (self.client.nodeid)
        klass = self.db.classes [self.request.classname]
        id = self.client.nodeid
        attr = self.form ['@attr'].value.strip ()
        if '@frm' in self.form :
            frm = self.form ['@frm'].value.strip ()
            node = self.db.address.getnode (self.db.cust_supp.get (id, frm))
            attributes = dict \
                ((k, node [k]) for k in self.copy_attributes
                 if node [k] is not None
                )
        else :
            attributes = dict \
                ( function = klass.get (id, 'name')
                , country = ' '
                )
        newvalue = newid = self.db.address.create (** attributes)
        if isinstance (klass.properties [attr], Multilink) :
            newvalue = klass.get (id, attr) [:]
            newvalue.append (newid)
            newvalue = dict.fromkeys (newvalue).keys ()
        klass.set (id, ** {attr : newvalue})
        self.db.commit ()
        raise Redirect, "%s%s" % (self.request.classname, id)
    # end def handle
# end class Create_New_Address

def del_link (classname, id) :
    return \
        ( "document.forms.itemSynopsis ['@remove@%s'].value = '%s';"
          "alert(document.forms.itemSynopsis ['@remove@%s'].value);"
        % (classname, id, classname)
        )
# end def del_link

def adress_button (db, adr_property_frm, adr_property_to) :
    """Compute address copy button inscription"""
    adr_frm = db._ (adr_property_frm)
    adr_to = db._ (adr_property_to)
    return db._ (''"new %(adr_to)s from %(adr_frm)s") % locals ()
# end def adress_button

def init (instance) :
    actn = instance.registerAction
    actn ('create_new_address', Create_New_Address)
    util = instance.registerUtil
    util ("del_link", del_link)
    util ("adress_button", adress_button)
# end def init
# ============================================================================
# File: layouts/window_profile.py
# Repo: TkfleBR/PyManager  (license: MIT)
# ============================================================================

# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'window_profile.ui'
#
# Created by: PyQt5 UI code generator 5.9.2
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtWidgets import QMainWindow
class Profile(QMainWindow):
def __init__(self):
super().__init__()
self.setObjectName("MainWindow")
self.resize(397, 374)
self.centralwidget = QtWidgets.QWidget(self)
self.centralwidget.setObjectName("centralwidget")
self.verticalLayoutWidget = QtWidgets.QWidget(self.centralwidget)
self.verticalLayoutWidget.setGeometry(QtCore.QRect(10, 10, 381, 361))
self.verticalLayoutWidget.setObjectName("verticalLayoutWidget")
self.verticalLayout = QtWidgets.QVBoxLayout(self.verticalLayoutWidget)
self.verticalLayout.setContentsMargins(0, 0, 0, 0)
self.verticalLayout.setObjectName("verticalLayout")
self.gridLayout = QtWidgets.QGridLayout()
self.gridLayout.setObjectName("gridLayout")
self.lineEdit = QtWidgets.QLineEdit(self.verticalLayoutWidget)
self.lineEdit.setText("")
self.lineEdit.setAlignment(QtCore.Qt.AlignCenter)
self.lineEdit.setObjectName("lineEdit")
self.gridLayout.addWidget(self.lineEdit, 0, 1, 1, 1)
self.label_6 = QtWidgets.QLabel(self.verticalLayoutWidget)
self.label_6.setAlignment(QtCore.Qt.AlignCenter)
self.label_6.setObjectName("label_6")
self.gridLayout.addWidget(self.label_6, 6, 0, 1, 1)
self.lineEdit_6 = QtWidgets.QLineEdit(self.verticalLayoutWidget)
self.lineEdit_6.setAlignment(QtCore.Qt.AlignCenter)
self.lineEdit_6.setObjectName("lineEdit_6")
self.gridLayout.addWidget(self.lineEdit_6, 7, 1, 1, 1)
self.label = QtWidgets.QLabel(self.verticalLayoutWidget)
self.label.setAlignment(QtCore.Qt.AlignCenter)
self.label.setObjectName("label")
self.gridLayout.addWidget(self.label, 0, 0, 1, 1)
self.dateEdit = QtWidgets.QDateEdit(self.verticalLayoutWidget)
self.dateEdit.setAlignment(QtCore.Qt.AlignCenter)
self.dateEdit.setObjectName("dateEdit")
self.gridLayout.addWidget(self.dateEdit, 6, 1, 1, 1)
self.lineEdit_8 = QtWidgets.QLineEdit(self.verticalLayoutWidget)
self.gridLayout.addWidget(self.lineEdit_8, 8, 1, 1, 1)
self.lineEdit_8.setObjectName("lineEdit_8")
self.lineEdit_8.setAlignment(QtCore.Qt.AlignCenter)
self.label_8 = QtWidgets.QLabel(self.verticalLayoutWidget)
self.label_8.setAlignment(QtCore.Qt.AlignCenter)
self.label_8.setObjectName("label_8")
self.gridLayout.addWidget(self.label_8, 7, 0, 1, 1)
self.label_7 = QtWidgets.QLabel(self.verticalLayoutWidget)
self.label_7.setAlignment(QtCore.Qt.AlignCenter)
self.label_7.setObjectName("label_7")
self.gridLayout.addWidget(self.label_7, 8, 0, 1, 1)
self.lineEdit_4 = QtWidgets.QLineEdit(self.verticalLayoutWidget)
self.lineEdit_4.setAlignment(QtCore.Qt.AlignCenter)
self.lineEdit_4.setObjectName("lineEdit_4")
self.gridLayout.addWidget(self.lineEdit_4, 4, 1, 1, 1)
self.label_3 = QtWidgets.QLabel(self.verticalLayoutWidget)
self.label_3.setAlignment(QtCore.Qt.AlignCenter)
self.label_3.setObjectName("label_3")
self.gridLayout.addWidget(self.label_3, 3, 0, 1, 1)
self.label_2 = QtWidgets.QLabel(self.verticalLayoutWidget)
self.label_2.setAlignment(QtCore.Qt.AlignCenter)
self.label_2.setObjectName("label_2")
self.gridLayout.addWidget(self.label_2, 2, 0, 1, 1)
self.lineEdit_5 = QtWidgets.QLineEdit(self.verticalLayoutWidget)
self.lineEdit_5.setAlignment(QtCore.Qt.AlignCenter)
self.lineEdit_5.setObjectName("lineEdit_5")
self.gridLayout.addWidget(self.lineEdit_5, 5, 1, 1, 1)
self.lineEdit_3 = QtWidgets.QLineEdit(self.verticalLayoutWidget)
self.lineEdit_3.setAlignment(QtCore.Qt.AlignCenter)
self.lineEdit_3.setObjectName("lineEdit_3")
self.gridLayout.addWidget(self.lineEdit_3, 3, 1, 1, 1)
self.label_5 = QtWidgets.QLabel(self.verticalLayoutWidget)
self.label_5.setAlignment(QtCore.Qt.AlignCenter)
self.label_5.setObjectName("label_5")
self.gridLayout.addWidget(self.label_5, 5, 0, 1, 1)
self.label_4 = QtWidgets.QLabel(self.verticalLayoutWidget)
self.label_4.setAlignment(QtCore.Qt.AlignCenter)
self.label_4.setObjectName("label_4")
self.gridLayout.addWidget(self.label_4, 4, 0, 1, 1)
self.lineEdit_2 = QtWidgets.QLineEdit(self.verticalLayoutWidget)
self.lineEdit_2.setAlignment(QtCore.Qt.AlignCenter)
self.lineEdit_2.setObjectName("lineEdit_2")
self.gridLayout.addWidget(self.lineEdit_2, 2, 1, 1, 1)
self.label_9 = QtWidgets.QLabel(self.verticalLayoutWidget)
self.label_9.setObjectName("label_9")
self.gridLayout.addWidget(
self.label_9, 1, 0, 1, 1, QtCore.Qt.AlignHCenter)
self.lineEdit_7 = QtWidgets.QLineEdit(self.verticalLayoutWidget)
self.lineEdit_7.setAlignment(QtCore.Qt.AlignCenter)
self.lineEdit_7.setObjectName("lineEdit_7")
self.gridLayout.addWidget(self.lineEdit_7, 1, 1, 1, 1)
self.verticalLayout.addLayout(self.gridLayout)
self.horizontalLayout = QtWidgets.QHBoxLayout()
self.horizontalLayout.setObjectName("horizontalLayout")
self.pushButton_2 = QtWidgets.QPushButton(self.verticalLayoutWidget)
self.pushButton_2.setObjectName("pushButton_2")
self.horizontalLayout.addWidget(self.pushButton_2)
self.pushButton = QtWidgets.QPushButton(self.verticalLayoutWidget)
self.pushButton.setObjectName("pushButton")
self.horizontalLayout.addWidget(self.pushButton)
self.verticalLayout.addLayout(self.horizontalLayout)
self.setCentralWidget(self.centralwidget)
self.retranslateUi()
QtCore.QMetaObject.connectSlotsByName(self)
self.show()
def retranslateUi(self):
_translate = QtCore.QCoreApplication.translate
self.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.label_6.setText(_translate("MainWindow", "Birthday:"))
self.label.setText(_translate("MainWindow", "Name:"))
self.label_8.setText(_translate("MainWindow", "Company:"))
self.label_7.setText(_translate("MainWindow", "Salary:"))
self.label_3.setText(_translate("MainWindow", "Password:"))
self.label_2.setText(_translate("MainWindow", "Username:"))
self.label_5.setText(_translate("MainWindow", "Status:"))
self.label_4.setText(_translate("MainWindow", "Repeat Password:"))
self.label_9.setText(_translate("MainWindow", "CPF/CNPJ"))
self.pushButton_2.setText(_translate("MainWindow", "Cancel"))
self.pushButton.setText(_translate("MainWindow", "Save"))
# ============================================================================
# File: tests/create_test_db.py
# Repo: TargetProcess/duro  (license: MIT)
# ============================================================================

import sqlite3
ddl = """
create table commits
(
hash text,
processed integer
);
create table tables
(
table_name text,
query text,
interval integer,
config text,
last_created integer,
mean real,
times_run integer,
force integer,
started integer,
deleted integer,
waiting integer
);
create table timestamps
(
"table" text,
start int,
connect int,
"select" int,
create_temp int,
process int,
csv int,
s3 int,
"insert" int,
clean_csv int,
tests int,
replace_old int,
drop_old int,
make_snapshot int,
finish int
);
create table version
(
major INTEGER,
minor INTEGER
);
"""
inserts = """
INSERT INTO tables (table_name, query, interval, config, last_created, mean, times_run, force, started, deleted, waiting)
VALUES ('first.cities', 'select city, country
from first.cities_raw', 1440,
'{"grant_select": "jane, john"}',
null, 0, 0, null, null, null, null);
INSERT INTO tables
VALUES ('first.countries', 'select country, continent
from first.countries_raw;', 60,
'{"grant_select": "joan, john"}',
null, 0, 0, null, null, null, null);
INSERT INTO tables
VALUES ('second.child', 'select city, country from first.cities', null,
'{"diststyle": "all", "distkey": "city", "snapshots_interval": "24d", "snapshots_stored_for": "90d"}',
null, 0, 0, null, null, null, null);
INSERT INTO tables
VALUES ('second.parent', 'select * from second.child limit 10', 24,
'{"diststyle": "even"}', null, 0, 0, null, null, null, null);
INSERT INTO timestamps ("table", start, connect, "select", create_temp,
process, csv, s3, "insert", clean_csv, tests, replace_old, drop_old, make_snapshot,
finish)
VALUES ('first.cities', 1522151698, 1522151699, 1522151773, 1522151783, null,
1522151793, null, null, null, 1522151799, 1522151825, 1522151825, null, 1522151825);
INSERT INTO timestamps ("table", start, connect, "select", create_temp,
process, csv, s3, "insert", clean_csv, tests, replace_old, drop_old, make_snapshot,
finish)
VALUES ('first.cities', 1522151835, 1522151849, 1522152053, 1522152063, null,
1522152073, null, null, null, 1522152155, 1522152202, 1522152202, null, 1522152202);
INSERT INTO timestamps ("table", start, connect, "select", create_temp,
process, csv, s3, "insert", clean_csv, tests, replace_old, drop_old, make_snapshot,
finish)
VALUES ('first.cities', 1523544406, null, null, null, null, null, null, null,
null, null, null, null, null, null);
INSERT INTO version (major, minor) VALUES (1, 0);
"""
# ============================================================================
# File: alphatwirl/parallel/parallel.py
# Repo: shane-breeze/AlphaTwirl  (license: BSD-3-Clause)
# ============================================================================

# Tai Sakuma <tai.sakuma@gmail.com>
##__________________________________________________________________||
class Parallel(object):
    def __init__(self, progressMonitor, communicationChannel, workingarea=None):
        self.progressMonitor = progressMonitor
        self.communicationChannel = communicationChannel
        self.workingarea = workingarea

    def __repr__(self):
        name_value_pairs = (
            ('progressMonitor', self.progressMonitor),
            ('communicationChannel', self.communicationChannel),
            ('workingarea', self.workingarea)
        )
        return '{}({})'.format(
            self.__class__.__name__,
            ', '.join(['{}={!r}'.format(n, v) for n, v in name_value_pairs]),
        )

    def begin(self):
        self.progressMonitor.begin()
        self.communicationChannel.begin()

    def terminate(self):
        self.communicationChannel.terminate()

    def end(self):
        self.progressMonitor.end()
        self.communicationChannel.end()

##__________________________________________________________________||
# ============================================================================
# File: CNN.py
# Repo: psmishra7/CryptocurrencyPrediction  (license: MIT)
# ============================================================================

import pandas as pd
import numpy as numpy
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv1D, MaxPooling1D, LeakyReLU, PReLU
from keras.utils import np_utils
from keras.callbacks import CSVLogger, ModelCheckpoint
import h5py
import os
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
# Use CNN to capture local temporal dependency of data in risk prediction or other related tasks.
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
set_session(tf.Session(config=config))
with h5py.File(''.join(['bitcoin2012_2017_50_30_prediction.h5']), 'r') as hf:
    datas = hf['inputs'].value
    labels = hf['outputs'].value
output_file_name='bitcoin2015to2017_close_CNN_2_relu'
step_size = datas.shape[1]
batch_size= 8
nb_features = datas.shape[2]
epochs = 100
#split training validation
training_size = int(0.8* datas.shape[0])
training_datas = datas[:training_size,:]
training_labels = labels[:training_size,:]
validation_datas = datas[training_size:,:]
validation_labels = labels[training_size:,:]
#build model
# 2 layers
model = Sequential()
model.add(Conv1D(activation='relu', input_shape=(step_size, nb_features), strides=3, filters=8, kernel_size=20))
#model.add(PReLU())
model.add(Dropout(0.5))
model.add(Conv1D( strides=4, filters=nb_features, kernel_size=16))
'''
# 3 Layers
model.add(Conv1D(activation='relu', input_shape=(step_size, nb_features), strides=3, filters=8, kernel_size=8))
#model.add(LeakyReLU())
model.add(Dropout(0.5))
model.add(Conv1D(activation='relu', strides=2, filters=8, kernel_size=8))
#model.add(LeakyReLU())
model.add(Dropout(0.5))
model.add(Conv1D( strides=2, filters=nb_features, kernel_size=8))
# 4 layers
model.add(Conv1D(activation='relu', input_shape=(step_size, nb_features), strides=2, filters=8, kernel_size=2))
#model.add(LeakyReLU())
model.add(Dropout(0.5))
model.add(Conv1D(activation='relu', strides=2, filters=8, kernel_size=2))
#model.add(LeakyReLU())
model.add(Dropout(0.5))
model.add(Conv1D(activation='relu', strides=2, filters=8, kernel_size=2))
#model.add(LeakyReLU())
model.add(Dropout(0.5))
model.add(Conv1D( strides=2, filters=nb_features, kernel_size=2))
'''
model.compile(loss='mse', optimizer='adam')
model.fit(training_datas, training_labels,verbose=1, batch_size=batch_size,validation_data=(validation_datas,validation_labels), epochs = epochs, callbacks=[CSVLogger(output_file_name+'.csv', append=True),ModelCheckpoint('weights/'+output_file_name+'-{epoch:02d}-{val_loss:.5f}.hdf5', monitor='val_loss', verbose=1,mode='min')])
# model.fit(datas,labels)
#model.save(output_file_name+'.h5')
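# With Keras' default padding='valid', each Conv1D above shrinks the time axis
# to (n - kernel_size) // strides + 1. A quick by-hand check of the two-layer
# variant (the 256-step input length is an illustrative assumption; the real
# step_size comes from the h5 file):

```python
def conv1d_out_len(n, kernel_size, strides):
    """Output length of a Keras Conv1D layer with padding='valid'."""
    return (n - kernel_size) // strides + 1


n = 256                                           # hypothetical input window
n = conv1d_out_len(n, kernel_size=20, strides=3)  # after the first Conv1D: 79
n = conv1d_out_len(n, kernel_size=16, strides=4)  # after the second Conv1D
print(n)  # 16
```
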
# ============================================================================
# File: openqemist/tests/problem_decomposition/dmet/test_dmet_orbitals.py
# Repo: 1QB-Information-Technologies/openqemist  (license: Apache-2.0)
# ============================================================================

# Copyright 2019 1QBit
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Test the construction of localized orbitals for DMET calculation
"""
import unittest
from openqemist.problem_decomposition.dmet._helpers.dmet_orbitals import dmet_orbitals
from openqemist.problem_decomposition.electron_localization import iao_localization
from pyscf import gto, scf
import numpy as np
def get_file_path_stub():
    """ Gets the path of the test files from anywhere in the test tree.
    The directory structure should be $SOMETHING/openqemist/openqemist/tests/$SOMETHINGELSE
    so we trim after "tests", then add the path to the results files so we can
    run the tests from anywhere in the tree."""
    import os
    cwd = os.getcwd()
    tests_root = cwd[0:cwd.find("tests") + 5]
    return tests_root + "/problem_decomposition/dmet/"

class TestDMETorbitals(unittest.TestCase):
    """ Generate the localized orbitals employing IAOs """

    def test_orbital_construction(self):
        # Initialize Molecule object with PySCF and input
        mol = gto.Mole()
        mol.atom = """
            C 0.94764 -0.02227 0.05901
            H 0.58322 0.35937 -0.89984
            H 0.54862 0.61702 0.85300
            H 0.54780 -1.03196 0.19694
            C 2.46782 -0.03097 0.07887
            H 2.83564 0.98716 -0.09384
            H 2.83464 -0.65291 -0.74596
            C 3.00694 -0.55965 1.40773
            H 2.63295 -1.57673 1.57731
            H 2.63329 0.06314 2.22967
            C 4.53625 -0.56666 1.42449
            H 4.91031 0.45032 1.25453
            H 4.90978 -1.19011 0.60302
            C 5.07544 -1.09527 2.75473
            H 4.70164 -2.11240 2.92450
            H 4.70170 -0.47206 3.57629
            C 6.60476 -1.10212 2.77147
            H 6.97868 -0.08532 2.60009
            H 6.97839 -1.72629 1.95057
            C 7.14410 -1.62861 4.10112
            H 6.77776 -2.64712 4.27473
            H 6.77598 -1.00636 4.92513
            C 8.66428 -1.63508 4.12154
            H 9.06449 -2.27473 3.32841
            H 9.02896 -2.01514 5.08095
            H 9.06273 -0.62500 3.98256"""
        mol.basis = "3-21g"
        mol.charge = 0
        mol.spin = 0
        mol.build(verbose=0)

        mf = scf.RHF(mol)
        mf.scf()

        dmet_orbs = dmet_orbitals(mol, mf, range(mol.nao_nr()), iao_localization)
        dmet_orbitals_ref = np.loadtxt(get_file_path_stub() + 'test_dmet_orbitals.txt')

        # Test the construction of IAOs
        for index, value_ref in np.ndenumerate(dmet_orbitals_ref):
            self.assertAlmostEqual(value_ref, dmet_orbs.localized_mo[index], msg='DMET orbs error at index ' + str(index), delta=1e-6)

if __name__ == "__main__":
    unittest.main()
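# The test's final loop compares every element of a reference matrix against a
# computed matrix within a delta. The same pattern, sketched without numpy on
# plain nested lists (names and data here are illustrative):

```python
def first_mismatch(ref, test, delta=1e-6):
    """Return the first (row, col) index where |ref - test| > delta, else None.

    Mirrors the np.ndenumerate + assertAlmostEqual loop above.
    """
    for i, row in enumerate(ref):
        for j, value_ref in enumerate(row):
            if abs(value_ref - test[i][j]) > delta:
                return (i, j)
    return None


print(first_mismatch([[1.0, 2.0]], [[1.0, 2.0000001]]))  # None (within delta)
print(first_mismatch([[1.0, 2.0]], [[1.0, 2.5]]))        # (0, 1)
```
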
# ============================================================================
# File: books/techno/python/programming_python_4_ed_m_lutz/code/chapter_8/13_binding_events/bind.py
# Repo: ordinary-developer/lin_education  (license: MIT)
# ============================================================================

from tkinter import *
def showPosEvent(event):
    print('Widget ={} X={} Y={}'.format(event.widget, event.x, event.y))

def showAllEvent(event):
    print(event)
    for attr in dir(event):
        if not attr.startswith('__'):
            print(attr, '=>', getattr(event, attr))

def onKeyPress(event):
    print('Got key press: ', event.char)

def onArrowKey(event):
    print('Got up arrow key press')

def onReturnKey(event):
    print('Got return key press')

def onLeftClick(event):
    print('Got left mouse button click: ', end = ' ')
    showPosEvent(event)

def onRightClick(event):
    print('Got right mouse button click: ', end = ' ')
    showPosEvent(event)

def onMiddleClick(event):
    print('Got middle mouse button click:', end = ' ')
    showPosEvent(event)
    showAllEvent(event)

def onLeftDrag(event):
    print('Got left mouse button drag: ', end = ' ')
    showPosEvent(event)

def onDoubleLeftClick(event):
    print('Got double left mouse click', end = ' ')
    showPosEvent(event)
    tkroot.quit()
tkroot = Tk()
labelfont = ('courier', 20, 'bold')
widget = Label(tkroot, text = 'Hello bind world')
widget.config(bg = 'red', font = labelfont)
widget.config(height = 5, width = 20)
widget.pack(expand = YES, fill = BOTH)
widget.bind('<Button-1>', onLeftClick)
widget.bind('<Button-3>', onRightClick)
widget.bind('<Button-2>', onMiddleClick)
widget.bind('<Double-1>', onDoubleLeftClick)
widget.bind('<B1-Motion>', onLeftDrag)
widget.bind('<KeyPress>', onKeyPress)
widget.bind('<Up>', onArrowKey)
widget.bind('<Return>', onReturnKey)
widget.focus()
tkroot.title('Click me')
tkroot.mainloop()
# ============================================================================
# File: week9/api/migrations/0002_auto_20200324_1713.py
# Repo: yestemir/web  (license: Unlicense)
# ============================================================================

# Generated by Django 3.0.4 on 2020-03-24 11:13
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('api', '0001_initial'),
    ]

    operations = [
        migrations.AddField(
            model_name='products',
            name='category',
            field=models.CharField(default='category 0', max_length=300),
        ),
        migrations.AlterField(
            model_name='products',
            name='description',
            field=models.TextField(default=''),
        ),
    ]
| 23.041667 | 73 | 0.56962 | 54 | 553 | 5.759259 | 0.703704 | 0.057878 | 0.109325 | 0.135048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060052 | 0.307414 | 553 | 23 | 74 | 24.043478 | 0.751958 | 0.081374 | 0 | 0.235294 | 1 | 0 | 0.118577 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
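At the database level, the `AddField` operation above boils down to an `ALTER TABLE` with a column default, so existing rows read back `'category 0'`. A `sqlite3` sketch of that effect (the table layout is a simplified assumption, not the project's real schema):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE api_products (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO api_products (name) VALUES ('widget')")

# Roughly what migrations.AddField(default='category 0', max_length=300) emits:
con.execute(
    "ALTER TABLE api_products ADD COLUMN category VARCHAR(300) "
    "DEFAULT 'category 0'"
)

row = con.execute("SELECT name, category FROM api_products").fetchone()
print(row)  # ('widget', 'category 0')
```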
d11ca10c2506a5eec12ca314bb1040463209b759 | 2,825 | py | Python | tutos/institutions/views.py | UVG-Teams/Tutos-System | 230dd9434f745c2e6e69e10f9908e9818c559d03 | [
"MIT"
] | null | null | null | tutos/institutions/views.py | UVG-Teams/Tutos-System | 230dd9434f745c2e6e69e10f9908e9818c559d03 | [
"MIT"
] | null | null | null | tutos/institutions/views.py | UVG-Teams/Tutos-System | 230dd9434f745c2e6e69e10f9908e9818c559d03 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from rest_framework import viewsets
from institutions.models import Institution, Career, Course
from institutions.serializers import InstitutionSerializer, CareerSerializer, CourseSerializer
from permissions.services import APIPermissionClassFactory
class InstitutionViewSet(viewsets.ModelViewSet):
queryset = Institution.objects.all()
serializer_class = InstitutionSerializer
permission_classes = (
APIPermissionClassFactory(
name='InstitutionPermission',
permission_configuration={
'base': {
'create': lambda user, req: user.is_authenticated,
'list': lambda user, req: user.is_authenticated,
},
'instance': {
'retrieve': lambda user, obj, req: user.is_authenticated,
'update': lambda user, obj, req: user.is_authenticated,
'partial_update': lambda user, obj, req: user.is_authenticated,
'destroy': lambda user, obj, req: user.is_authenticated,
}
}
),
)
class CareerViewSet(viewsets.ModelViewSet):
queryset = Career.objects.all()
serializer_class = CareerSerializer
permission_classes = (
APIPermissionClassFactory(
name='CareerPermission',
permission_configuration={
'base': {
'create': lambda user, req: user.is_authenticated,
'list': lambda user, req: user.is_authenticated,
},
'instance': {
'retrieve': lambda user, obj, req: user.is_authenticated,
'update': lambda user, obj, req: user.is_authenticated,
'partial_update': lambda user, obj, req: user.is_authenticated,
'destroy': lambda user, obj, req: user.is_authenticated,
}
}
),
)
class CourseViewSet(viewsets.ModelViewSet):
queryset = Course.objects.all()
serializer_class = CourseSerializer
permission_classes = (
APIPermissionClassFactory(
name='CoursePermission',
permission_configuration={
'base': {
'create': lambda user, req: user.is_authenticated,
'list': lambda user, req: user.is_authenticated,
},
'instance': {
'retrieve': lambda user, obj, req: user.is_authenticated,
'update': lambda user, obj, req: user.is_authenticated,
'partial_update': lambda user, obj, req: user.is_authenticated,
'destroy': lambda user, obj, req: user.is_authenticated,
}
}
),
)
| 38.175676 | 94 | 0.572743 | 230 | 2,825 | 6.9 | 0.217391 | 0.113422 | 0.102079 | 0.249527 | 0.541273 | 0.541273 | 0.541273 | 0.541273 | 0.541273 | 0.541273 | 0 | 0 | 0.339469 | 2,825 | 73 | 95 | 38.69863 | 0.850482 | 0 | 0 | 0.553846 | 0 | 0 | 0.07932 | 0.007436 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.261538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
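All three viewsets above repeat the same `permission_configuration` verbatim. One way to avoid the triplication is to build the dictionary with a small helper (a sketch, not code from the project):

```python
def authenticated_config():
    # Every action is allowed for any authenticated user, matching the
    # lambdas repeated verbatim in the three viewsets above.
    def base(user, req):
        return user.is_authenticated

    def instance(user, obj, req):
        return user.is_authenticated

    return {
        'base': {'create': base, 'list': base},
        'instance': {action: instance
                     for action in ('retrieve', 'update',
                                    'partial_update', 'destroy')},
    }

# Each viewset would then pass:
#   APIPermissionClassFactory(name='CareerPermission',
#                             permission_configuration=authenticated_config())
class FakeUser:
    is_authenticated = True

config = authenticated_config()
print(sorted(config['instance']))
print(config['base']['create'](FakeUser(), None))  # True
```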
d11e05aec66f3501ed77fd7b7c80b254c9a474b3 | 4,923 | py | Python | vendor/guardian/tests/decorators_test.py | AhmadManzoor/jazzpos | 7b771095b8df52d036657f33f36a97efb575d36c | [
"MIT"
] | 5 | 2015-12-05T15:39:51.000Z | 2020-09-16T20:14:29.000Z | vendor/guardian/tests/decorators_test.py | AhmadManzoor/jazzpos | 7b771095b8df52d036657f33f36a97efb575d36c | [
"MIT"
] | null | null | null | vendor/guardian/tests/decorators_test.py | AhmadManzoor/jazzpos | 7b771095b8df52d036657f33f36a97efb575d36c | [
"MIT"
] | 2 | 2019-11-23T17:47:46.000Z | 2022-01-14T11:05:21.000Z | from django.test import TestCase
from django.contrib.auth.models import User, Group, AnonymousUser
from django.http import HttpRequest
from django.http import HttpResponse
from django.http import HttpResponseForbidden
from django.http import HttpResponseRedirect
from django.shortcuts import get_object_or_404
from guardian.decorators import permission_required, permission_required_or_403
from guardian.exceptions import GuardianError
from guardian.shortcuts import assign
class PermissionRequiredTest(TestCase):
fixtures = ['tests.json']
def setUp(self):
self.anon = AnonymousUser()
self.user = User.objects.get(username='jack')
self.group = Group.objects.get(name='jackGroup')
def _get_request(self, user=None):
if user is None:
user = AnonymousUser()
request = HttpRequest()
request.user = user
return request
def test_no_args(self):
try:
@permission_required
def dummy_view(request):
return HttpResponse('dummy_view')
except GuardianError:
pass
else:
self.fail("Trying to decorate using permission_required without "
"permission as first argument should raise exception")
def test_anonymous_user_wrong_app(self):
request = self._get_request(self.anon)
@permission_required_or_403('not_installed_app.change_user')
def dummy_view(request):
return HttpResponse('dummy_view')
self.assertTrue(isinstance(dummy_view(request), HttpResponseForbidden))
def test_anonymous_user_wrong_codename(self):
request = self._get_request()
@permission_required_or_403('auth.wrong_codename')
def dummy_view(request):
return HttpResponse('dummy_view')
self.assertTrue(isinstance(dummy_view(request), HttpResponseForbidden))
def test_anonymous_user(self):
request = self._get_request()
@permission_required_or_403('auth.change_user')
def dummy_view(request):
return HttpResponse('dummy_view')
self.assertTrue(isinstance(dummy_view(request), HttpResponseForbidden))
def test_wrong_lookup_variables_number(self):
request = self._get_request()
try:
@permission_required_or_403('auth.change_user', (User, 'username'))
def dummy_view(request, username):
pass
dummy_view(request, username='jack')
except GuardianError:
pass
else:
self.fail("If lookup variables are passed they must be tuple of: "
"(ModelClass/app_label.ModelClass/queryset, "
"<pair of lookup_string and view_arg>)\n"
"Otherwise GuardianError should be raised")
def test_wrong_lookup_variables(self):
request = self._get_request()
args = (
(2010, 'username', 'username'),
('User', 'username', 'username'),
(User, 'username', 'no_arg'),
)
for tup in args:
try:
@permission_required_or_403('auth.change_user', tup)
def show_user(request, username):
user = get_object_or_404(User, username=username)
return HttpResponse("It's %s here!" % user.username)
show_user(request, 'jack')
except GuardianError:
pass
else:
self.fail("Wrong arguments given but GuardianError not raised")
def test_model_lookup(self):
request = self._get_request(self.user)
perm = 'auth.change_user'
joe, created = User.objects.get_or_create(username='joe')
assign(perm, self.user, obj=joe)
models = (
'auth.User',
User,
User.objects.filter(is_active=True),
)
for model in models:
@permission_required_or_403(perm, (model, 'username', 'username'))
def dummy_view(request, username):
get_object_or_404(User, username=username)
return HttpResponse('hello')
response = dummy_view(request, username=joe.username)
self.assertEqual(response.content, 'hello')
def test_redirection(self):
request = self._get_request(self.user)
foo = User.objects.create(username='foo')
foobar = Group.objects.create(name='foobar')
foo.groups.add(foobar)
@permission_required('auth.change_group',
(User, 'groups__name', 'group_name'),
login_url='/foobar/')
def dummy_view(request, group_name):
pass
response = dummy_view(request, group_name='foobar')
self.assertTrue(isinstance(response, HttpResponseRedirect))
self.assertTrue(response._headers['location'][1].startswith(
'/foobar/'))
| 33.719178 | 79 | 0.630713 | 527 | 4,923 | 5.675522 | 0.250474 | 0.051153 | 0.069542 | 0.053828 | 0.393514 | 0.334336 | 0.289535 | 0.241391 | 0.199264 | 0.164493 | 0 | 0.009812 | 0.275442 | 4,923 | 145 | 80 | 33.951724 | 0.828708 | 0 | 0 | 0.292035 | 0 | 0 | 0.142219 | 0.014425 | 0 | 0 | 0 | 0 | 0.053097 | 1 | 0.159292 | false | 0.053097 | 0.088496 | 0.035398 | 0.327434 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
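`test_model_lookup` and `test_redirection` pass tuples such as `(User, 'groups__name', 'group_name')` to the decorator; guardian turns the lookup string and the view argument into a queryset filter. A simplified sketch of that resolution step (`build_lookup` is a made-up name, not guardian's API):

```python
def build_lookup(lookup_tuple, view_kwargs):
    # Turn (model, lookup_string, view_arg) into filter kwargs, the way
    # the decorator resolves which object to check permissions on.
    model, lookup_string, view_arg = lookup_tuple
    return model, {lookup_string: view_kwargs[view_arg]}

model, kwargs = build_lookup(('User', 'groups__name', 'group_name'),
                             {'group_name': 'foobar'})
print(model, kwargs)  # User {'groups__name': 'foobar'}
```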
d11f4b63fbf5bd854138ff819170bf1b5bcc07d8 | 1,532 | py | Python | poky/meta/lib/oeqa/sdk/cases/gcc.py | buildlinux/unityos | dcbe232d0589013d77a62c33959d6a69f9bfbc5e | [
"Apache-2.0"
] | 1 | 2020-01-13T13:16:52.000Z | 2020-01-13T13:16:52.000Z | poky/meta/lib/oeqa/sdk/cases/gcc.py | buildlinux/unityos | dcbe232d0589013d77a62c33959d6a69f9bfbc5e | [
"Apache-2.0"
] | 3 | 2019-11-20T02:53:01.000Z | 2019-12-26T03:00:15.000Z | sources/poky/meta/lib/oeqa/sdk/cases/gcc.py | zwg0106/imx-yocto | e378ca25352a59d1ef84ee95f3386b7314f4565b | [
"MIT"
] | null | null | null | import os
import shutil
import unittest
from oeqa.core.utils.path import remove_safe
from oeqa.sdk.case import OESDKTestCase
class GccCompileTest(OESDKTestCase):
td_vars = ['MACHINE']
@classmethod
def setUpClass(self):
files = {'test.c' : self.tc.files_dir, 'test.cpp' : self.tc.files_dir,
'testsdkmakefile' : self.tc.sdk_files_dir}
for f in files:
shutil.copyfile(os.path.join(files[f], f),
os.path.join(self.tc.sdk_dir, f))
def setUp(self):
machine = self.td.get("MACHINE")
if not (self.tc.hasTargetPackage("packagegroup-cross-canadian-%s" % machine) or
self.tc.hasTargetPackage("gcc")):
raise unittest.SkipTest("GccCompileTest class: SDK doesn't contain a cross-canadian toolchain")
def test_gcc_compile(self):
self._run('$CC %s/test.c -o %s/test -lm' % (self.tc.sdk_dir, self.tc.sdk_dir))
def test_gpp_compile(self):
self._run('$CXX %s/test.c -o %s/test -lm' % (self.tc.sdk_dir, self.tc.sdk_dir))
def test_gpp2_compile(self):
self._run('$CXX %s/test.cpp -o %s/test -lm' % (self.tc.sdk_dir, self.tc.sdk_dir))
def test_make(self):
self._run('cd %s; make -f testsdkmakefile' % self.tc.sdk_dir)
@classmethod
def tearDownClass(self):
files = [os.path.join(self.tc.sdk_dir, f) \
for f in ['test.c', 'test.cpp', 'test.o', 'test',
'testsdkmakefile']]
for f in files:
remove_safe(f)
| 34.818182 | 107 | 0.611619 | 220 | 1,532 | 4.136364 | 0.290909 | 0.092308 | 0.098901 | 0.118681 | 0.243956 | 0.243956 | 0.243956 | 0.192308 | 0.141758 | 0.141758 | 0 | 0.000865 | 0.245431 | 1,532 | 43 | 108 | 35.627907 | 0.786332 | 0 | 0 | 0.117647 | 0 | 0 | 0.196475 | 0.019582 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0.147059 | 0 | 0.411765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
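Each compile test above differs only in the compiler variable and source file interpolated into the shell command. The templating can be factored out and inspected on its own (`compile_cmd` is a hypothetical helper, not part of oeqa):

```python
def compile_cmd(compiler, source, sdk_dir):
    # Same template as the _run() calls above: compile `source` from the
    # SDK dir, link against libm, and write the binary next to it.
    return '%s %s/%s -o %s/test -lm' % (compiler, sdk_dir, source, sdk_dir)

cmd = compile_cmd('$CC', 'test.c', '/opt/sdk')
print(cmd)  # $CC /opt/sdk/test.c -o /opt/sdk/test -lm
```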
d1214c4589f598426173369d768239200b8619fe | 2,079 | py | Python | module/server/view/login/routes.py | antkrit/project | 89172be482a640fe656c45a1c35ea1242cd98347 | [
"MIT"
] | null | null | null | module/server/view/login/routes.py | antkrit/project | 89172be482a640fe656c45a1c35ea1242cd98347 | [
"MIT"
] | 1 | 2021-05-20T18:15:46.000Z | 2021-05-21T14:27:47.000Z | module/server/view/login/routes.py | antkrit/project | 89172be482a640fe656c45a1c35ea1242cd98347 | [
"MIT"
] | null | null | null | """Define the route of the login form"""
from flask import (
render_template,
redirect,
url_for,
request,
current_app,
session,
flash,
)
from flask_login import current_user, login_user, logout_user
from module.server import messages
from module.server.models.user import User
from module.server.view.login import bp, forms as f
@bp.route("/", methods=["GET", "POST"])
def login_view():
"""
View for the login page.
Once the user tries to get to his account, he will be redirected to the login page.
Methods: GET, POST
"""
if current_user.is_authenticated: # if the user is already logged in - redirect to his cabinet
if current_user.username == "admin": # If current user is admin
return redirect(url_for("admin.admin_view"))
return redirect(url_for("cabinet.cabinet_view"))
login_form = f.LoginForm()
if request.method == "POST":
data = request.form
if login_form.validate_on_submit() or (data and current_app.testing):
# If user clicked the "Sign In" button or there is data in the request while testing app
username = data.get("username")
password = data.get("password")
user = User.query.filter_by(username=username).first()
if user and user.check_password(password): # If such login exists, login and password match - login user
session.clear()
login_user(user)
flash(messages["success_login"], "info")
if user.username == "admin": # If user is admin - redirect him to the admin interface
return redirect(url_for("admin.admin_view"))
return redirect(url_for("cabinet.cabinet_view"))
return redirect(url_for("login.login_view"))
return render_template("auth/login.html", title="Login", form=login_form)
@bp.route("/logout", methods=["GET"])
def logout():
"""
Log out current user.
Methods: GET
"""
logout_user()
return redirect(url_for("login.login_view"))
| 34.65 | 117 | 0.64406 | 276 | 2,079 | 4.724638 | 0.315217 | 0.059049 | 0.075153 | 0.092025 | 0.162577 | 0.162577 | 0.162577 | 0.110429 | 0.110429 | 0.110429 | 0 | 0 | 0.251082 | 2,079 | 59 | 118 | 35.237288 | 0.837508 | 0.232804 | 0 | 0.153846 | 0 | 0 | 0.122489 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0.051282 | 0.128205 | 0 | 0.358974 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
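The redirect rule in `login_view` is applied twice, once for already-authenticated visitors and once after a successful login. Extracted as a pure function it is easy to exercise (`post_login_target` is an illustrative name, not part of the app):

```python
def post_login_target(username):
    # The 'admin' account goes to the admin interface, everyone else to
    # the personal cabinet, mirroring the branches in login_view.
    if username == 'admin':
        return 'admin.admin_view'
    return 'cabinet.cabinet_view'

print(post_login_target('admin'))   # admin.admin_view
print(post_login_target('alice'))   # cabinet.cabinet_view
```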
d123bf2051577f1b1d94b482d255562a77a61f9a | 1,262 | py | Python | config/qtile/Managers/LayoutManager.py | dat-adi/Dotfiles | 7a541aba2bbdd88736bebc9e82f6921ab4a3e03b | [
"Apache-2.0"
] | 2 | 2021-05-06T15:58:29.000Z | 2021-10-02T14:12:08.000Z | config/qtile/Managers/LayoutManager.py | dat-adi/dotfiles | 7a541aba2bbdd88736bebc9e82f6921ab4a3e03b | [
"Apache-2.0"
] | null | null | null | config/qtile/Managers/LayoutManager.py | dat-adi/dotfiles | 7a541aba2bbdd88736bebc9e82f6921ab4a3e03b | [
"Apache-2.0"
] | null | null | null | # -*- coding:utf-8 -*-
from libqtile import layout
def get_layouts():
layout_theme = {
"border_width": 2,
"margin": 8,
"border_focus": "#F0F0F0",
"border_normal": "#1D233F",
}
layouts = [
# layout.Bsp(),
# layout.MonadWide(),
# layout.Tile(**layout_theme),
# layout.VerticalTile(),
# layout.Zoomy(),
# layout.Max(**layout_theme),
layout.Columns(**layout_theme),
layout.Stack(num_stacks=2, **layout_theme),
layout.Matrix(**layout_theme),
layout.RatioTile(**layout_theme),
layout.MonadTall(**layout_theme),
layout.TreeTab(
font="Source Code Pro",
fontsize=10,
sections=["FIRST", "SECOND", "THIRD", "FOURTH"],
section_fontsize=10,
border_width=2,
bg_color="1c1f24",
active_bg="2E7588",
active_fg="000000",
inactive_bg="a9a1e1",
inactive_fg="1c1f24",
padding_left=0,
padding_x=0,
padding_y=5,
section_top=10,
section_bottom=20,
level_shift=8,
vspace=3,
panel_width=200,
),
]
return layouts
| 26.291667 | 60 | 0.510301 | 122 | 1,262 | 5.057377 | 0.57377 | 0.142626 | 0.192869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06135 | 0.3542 | 1,262 | 47 | 61 | 26.851064 | 0.695706 | 0.118859 | 0 | 0 | 0 | 0 | 0.112319 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.027778 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d12a7191b4c49b6eb3dffbf58c4bda9e9deb59fa | 682 | py | Python | src/python/WMCore/WMBS/MySQL/Locations/ListSites.py | hufnagel/WMCore | b150cc725b68fc1cf8e6e0fa07c826226a4421fa | [
"Apache-2.0"
] | 1 | 2015-02-05T13:43:46.000Z | 2015-02-05T13:43:46.000Z | src/python/WMCore/WMBS/MySQL/Locations/ListSites.py | hufnagel/WMCore | b150cc725b68fc1cf8e6e0fa07c826226a4421fa | [
"Apache-2.0"
] | 1 | 2016-10-13T14:57:35.000Z | 2016-10-13T14:57:35.000Z | src/python/WMCore/WMBS/MySQL/Locations/ListSites.py | hufnagel/WMCore | b150cc725b68fc1cf8e6e0fa07c826226a4421fa | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
"""
_ListSites_
MySQL implementation of Locations.ListSites
"""
__all__ = []
from WMCore.Database.DBFormatter import DBFormatter
import logging
class ListSites(DBFormatter):
sql = "SELECT site_name FROM wmbs_location"
def format(self, results):
if len(results) == 0:
return False
else:
            formatted = []
            for i in results[0].fetchall():
                formatted.append(list(i.values())[0])
            return formatted
def execute(self, conn = None, transaction = False):
results = self.dbi.processData(self.sql, {}, conn = conn, transaction = transaction)
return self.format(results)
| 19.485714 | 92 | 0.618768 | 75 | 682 | 5.52 | 0.6 | 0.082126 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006061 | 0.274194 | 682 | 34 | 93 | 20.058824 | 0.830303 | 0.112903 | 0 | 0 | 0 | 0 | 0.058626 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
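`ListSites.format` collapses the DB result into a flat list of site names. The same logic can be checked standalone with a fake cursor (`FakeCursor` is a stand-in for the result object WMCore's DB layer returns):

```python
class FakeCursor:
    # Minimal stand-in for the object returned by dbi.processData().
    def __init__(self, rows):
        self._rows = rows

    def fetchall(self):
        return self._rows

def format_results(results):
    # Same shape as ListSites.format, with list(...) added so the code
    # also works on Python 3 dict views.
    if len(results) == 0:
        return False
    names = []
    for row in results[0].fetchall():
        names.append(list(row.values())[0])
    return names

print(format_results([]))  # False
print(format_results([FakeCursor([{'site_name': 'T2_IT_Rome'}])]))  # ['T2_IT_Rome']
```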
d12f9186081828c736de8dcb09176bf8a7fdf2c8 | 584 | py | Python | src/python/test.py | jdrprod/miniBlock | c233dae5380a851e85d78d297e560833b81cf6b8 | [
"MIT"
] | null | null | null | src/python/test.py | jdrprod/miniBlock | c233dae5380a851e85d78d297e560833b81cf6b8 | [
"MIT"
] | null | null | null | src/python/test.py | jdrprod/miniBlock | c233dae5380a851e85d78d297e560833b81cf6b8 | [
"MIT"
] | null | null | null | from cheater import *
from main import *
# new Chain instance with
# mining difficulty = 4
c = Chain(4)
c.createGenesis()
# simulate transactions
c.addBlock(Block("3$ to Arthur"))
c.addBlock(Block("5$ to Bob"))
c.addBlock(Block("12$ to Jean"))
c.addBlock(Block("7$ to Jake"))
c.addBlock(Block("2$ to Camille"))
c.addBlock(Block("13$ to Marth"))
c.addBlock(Block("9$ to Felix"))
# check chain validity
c.isChainValid()
# fake transaction
cheat(c, 1, "6 to jean")
# check chain validity
c.isChainValid()
# print all blocks
c.printChain()
print("len", len(c.blocks[0].hash) + 15)
| 18.83871 | 40 | 0.693493 | 93 | 584 | 4.354839 | 0.526882 | 0.155556 | 0.241975 | 0.128395 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031746 | 0.136986 | 584 | 30 | 41 | 19.466667 | 0.771825 | 0.244863 | 0 | 0.125 | 0 | 0 | 0.207852 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.125 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
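The `cheat()` call rewrites block 1 in place, which is exactly what `isChainValid()` must detect. A self-contained sketch of that mechanism with `hashlib` (the Block/chain layout here is an assumption, not the project's real classes in `main.py`):

```python
import hashlib

class Block:
    # Minimal block: its hash covers the data plus the previous hash.
    def __init__(self, data, prev_hash=''):
        self.data = data
        self.prev_hash = prev_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        return hashlib.sha256((self.data + self.prev_hash).encode()).hexdigest()

def is_chain_valid(blocks):
    # A chain is valid when every block still hashes to its stored hash
    # and correctly points at its predecessor.
    for prev, cur in zip(blocks, blocks[1:]):
        if cur.prev_hash != prev.hash or cur.hash != cur.compute_hash():
            return False
    return True

chain = [Block('genesis')]
chain.append(Block('3$ to Arthur', chain[-1].hash))
chain.append(Block('5$ to Bob', chain[-1].hash))
print(is_chain_valid(chain))  # True

chain[1].data = '6 to jean'   # tamper, like cheat() above
print(is_chain_valid(chain))  # False
```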
d131ead143f7ae14b44aa50e156995d7274d1c57 | 3,991 | py | Python | mindwavemobile/MindwaveMobileRawReader.py | martinezmizael/Escribir-con-la-mente | f93456bc2ff817cf0ae808a0f711168f82e142ff | [
"MIT"
] | null | null | null | mindwavemobile/MindwaveMobileRawReader.py | martinezmizael/Escribir-con-la-mente | f93456bc2ff817cf0ae808a0f711168f82e142ff | [
"MIT"
] | null | null | null | mindwavemobile/MindwaveMobileRawReader.py | martinezmizael/Escribir-con-la-mente | f93456bc2ff817cf0ae808a0f711168f82e142ff | [
"MIT"
] | null | null | null | import bluetooth
import time
import textwrap
class MindwaveMobileRawReader:
START_OF_PACKET_BYTE = 0xaa;
def __init__(self, address=None):
self._buffer = [];
self._bufferPosition = 0;
self._isConnected = False;
self._mindwaveMobileAddress = address
def connectToMindWaveMobile(self):
# First discover mindwave mobile address, then connect.
# Headset address of my headset was'9C:B7:0D:72:CD:02';
# not sure if it really can be different?
# now discovering address because of https://github.com/robintibor/python-mindwave-mobile/issues/4
if (self._mindwaveMobileAddress is None):
self._mindwaveMobileAddress = self._findMindwaveMobileAddress()
if (self._mindwaveMobileAddress is not None):
print ("Discovered Mindwave Mobile...")
self._connectToAddress(self._mindwaveMobileAddress)
else:
self._printErrorDiscoveryMessage()
def _findMindwaveMobileAddress(self):
nearby_devices = bluetooth.discover_devices(lookup_names = True)
for address, name in nearby_devices:
if (name == "MindWave Mobile"):
return address
return None
def _connectToAddress(self, mindwaveMobileAddress):
self.mindwaveMobileSocket = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
while (not self._isConnected):
try:
self.mindwaveMobileSocket.connect(
(mindwaveMobileAddress, 1))
self._isConnected = True
except bluetooth.btcommon.BluetoothError as error:
print "Could not connect: ", error, "; Retrying in 5s..."
time.sleep(5)
def isConnected(self):
return self._isConnected
def _printErrorDiscoveryMessage(self):
print(textwrap.dedent("""\
Could not discover Mindwave Mobile. Please make sure the
Mindwave Mobile device is in pairing mode and your computer
has bluetooth enabled.""").replace("\n", " "))
def _readMoreBytesIntoBuffer(self, amountOfBytes):
newBytes = self._readBytesFromMindwaveMobile(amountOfBytes)
self._buffer += newBytes
def _readBytesFromMindwaveMobile(self, amountOfBytes):
missingBytes = amountOfBytes
receivedBytes = ""
# Sometimes the socket will not send all the requested bytes
# on the first request, therefore a loop is necessary...
while(missingBytes > 0):
receivedBytes += self.mindwaveMobileSocket.recv(missingBytes)
missingBytes = amountOfBytes - len(receivedBytes)
return receivedBytes;
def peekByte(self):
self._ensureMoreBytesCanBeRead();
return ord(self._buffer[self._bufferPosition])
def getByte(self):
self._ensureMoreBytesCanBeRead(100);
return self._getNextByte();
def _ensureMoreBytesCanBeRead(self, amountOfBytes):
if (self._bufferSize() <= self._bufferPosition + amountOfBytes):
self._readMoreBytesIntoBuffer(amountOfBytes)
def _getNextByte(self):
nextByte = ord(self._buffer[self._bufferPosition]);
self._bufferPosition += 1;
return nextByte;
def getBytes(self, amountOfBytes):
self._ensureMoreBytesCanBeRead(amountOfBytes);
return self._getNextBytes(amountOfBytes);
def _getNextBytes(self, amountOfBytes):
nextBytes = map(ord, self._buffer[self._bufferPosition: self._bufferPosition + amountOfBytes])
self._bufferPosition += amountOfBytes
return nextBytes
def clearAlreadyReadBuffer(self):
self._buffer = self._buffer[self._bufferPosition : ]
self._bufferPosition = 0;
def _bufferSize(self):
return len(self._buffer);
#------------------------------------------------------------------------------
| 38.747573 | 106 | 0.636181 | 347 | 3,991 | 7.149856 | 0.391931 | 0.072551 | 0.033857 | 0.056429 | 0.070536 | 0.058041 | 0.0395 | 0 | 0 | 0 | 0 | 0.006485 | 0.265848 | 3,991 | 102 | 107 | 39.127451 | 0.840273 | 0.109246 | 0 | 0.026316 | 0 | 0 | 0.080654 | 0 | 0 | 0 | 0.001128 | 0 | 0 | 0 | null | null | 0 | 0.039474 | null | null | 0.065789 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
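The retry loop in `_readBytesFromMindwaveMobile` exists because a socket's `recv()` may deliver fewer bytes than requested. The loop can be exercised against a fake socket (`FakeSocket` is a stand-in; `bytes` are used here where the Python 2 original concatenated `str`):

```python
class FakeSocket:
    # Stand-in for the Bluetooth socket: like a real socket, recv() may
    # return fewer bytes than requested (here at most 3 per call).
    def __init__(self, payload, chunk=3):
        self._payload = payload
        self._chunk = chunk

    def recv(self, n):
        n = min(n, self._chunk)
        out, self._payload = self._payload[:n], self._payload[n:]
        return out

def read_exact(sock, amount):
    # Same retry loop as _readBytesFromMindwaveMobile: keep calling
    # recv() until every requested byte has arrived.
    received = b''
    while len(received) < amount:
        received += sock.recv(amount - len(received))
    return received

packet = b'\xaa\xaa\x04\x80\x02'
print(read_exact(FakeSocket(packet), 5) == packet)  # True
```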
d1367794248b2e3c030b18062325fb8aedea6ff8 | 4,020 | py | Python | src/primaires/perso/commandes/prompt/__init__.py | stormi/tsunami | bdc853229834b52b2ee8ed54a3161a1a3133d926 | [
"BSD-3-Clause"
] | null | null | null | src/primaires/perso/commandes/prompt/__init__.py | stormi/tsunami | bdc853229834b52b2ee8ed54a3161a1a3133d926 | [
"BSD-3-Clause"
] | null | null | null | src/primaires/perso/commandes/prompt/__init__.py | stormi/tsunami | bdc853229834b52b2ee8ed54a3161a1a3133d926 | [
"BSD-3-Clause"
] | null | null | null | # -*-coding:Utf-8 -*
# Copyright (c) 2010 LE GOFF Vincent
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
# OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""Package contenant la commande 'prompt'.
Dans ce fichier ne se trouve que la commande.
Les sous-commandes peuvent être trouvées dans le package.
"""
from primaires.interpreteur.commande.commande import Commande
from .defaut import PrmDefaut
# Constants
AIDE = """
This command lets you configure your various prompts. The prompt
is a message that is usually displayed after you enter a command or
perform some action in the world. It gives general information
about your character (by default, its vitality, mana and
endurance).
There are several prompts. For example, the one you will see on
your first connection is the default prompt, which is displayed in
most circumstances. There is also a combat prompt, shown while
your character is fighting, which can give additional
information.
Here you can configure your prompt, that is, change this
message. Using one of the subcommands below, you can view, hide,
modify or reset your prompt.
Whatever you enter through this command becomes your prompt. You
can also use symbols (for example, you can enter
%prompt% %prompt:défaut%|cmd| Vit(|pc|v) Man(|pc|m) End(|pc|e)|ff| to
get a prompt of the form |ent|Vit(50) Man(50) End(50)|ff|.
Symbols are letter combinations preceded by the percent
sign (|pc|). Here are the symbols you can use in every
prompt:
|pc|v Current vitality
|pc|m Current mana
|pc|e Current endurance
|pc|vx Maximum vitality
|pc|mx Maximum mana
|pc|ex Maximum endurance
|pc|sl Line break (for a prompt on two lines)
|pc|f Strength
|pc|a Agility
|pc|r Robustness
|pc|i Intelligence
|pc|c Charisma
|pc|s Sensitivity
""".strip()
class CmdPrompt(Commande):
"""Commande 'prompt'.
"""
def __init__(self):
"""Constructeur de la commande"""
Commande.__init__(self, "prompt", "prompt")
self.schema = ""
self.aide_courte = "affiche ou configure votre prompt"
self.aide_longue = AIDE
def ajouter_parametres(self):
"""Ajout dynamique des paramètres."""
for prompt in importeur.perso.prompts.values():
self.ajouter_parametre(PrmDefaut(prompt))
| 42.765957 | 79 | 0.732587 | 568 | 4,020 | 5.163732 | 0.512324 | 0.017047 | 0.011592 | 0.015684 | 0.062734 | 0.046369 | 0.046369 | 0.046369 | 0.046369 | 0.046369 | 0 | 0.003433 | 0.202985 | 4,020 | 93 | 80 | 43.225806 | 0.911985 | 0.43408 | 0 | 0 | 0 | 0.043478 | 0.776629 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.065217 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d13da41b3a4f220e015d9e442ca5b4d723221a8a | 299 | py | Python | xml_to_text.py | EvanHahn/xml-to-text | 4c064e8df978a9f857045e44b6665ce6a5f6f1af | [
"Unlicense"
] | 1 | 2015-01-23T19:28:56.000Z | 2015-01-23T19:28:56.000Z | xml_to_text.py | EvanHahn/xml-to-text | 4c064e8df978a9f857045e44b6665ce6a5f6f1af | [
"Unlicense"
] | null | null | null | xml_to_text.py | EvanHahn/xml-to-text | 4c064e8df978a9f857045e44b6665ce6a5f6f1af | [
"Unlicense"
] | 1 | 2021-05-26T12:34:59.000Z | 2021-05-26T12:34:59.000Z | #!/usr/bin/env python
from sys import argv
from bs4 import BeautifulSoup
def xml_to_text(file):
return BeautifulSoup(file).get_text()
if __name__ == "__main__":
if len(argv) < 2:
print "What file should I get plain text from?"
exit(1)
print xml_to_text(open(argv[1]))
| 21.357143 | 55 | 0.672241 | 47 | 299 | 4 | 0.638298 | 0.053191 | 0.095745 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017094 | 0.217391 | 299 | 13 | 56 | 23 | 0.786325 | 0.06689 | 0 | 0 | 0 | 0 | 0.169065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.222222 | null | null | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
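The extraction above needs the third-party `bs4` package; the same kind of tag-stripping can be sketched with only the standard library's `html.parser` (an approximation of `get_text()`, not a drop-in replacement):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    # Collect character data and ignore tags: a stdlib-only
    # approximation of BeautifulSoup's get_text().
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def xml_to_text(markup):
    parser = TextExtractor()
    parser.feed(markup)
    return ''.join(parser.parts)

print(xml_to_text('<a><b>hello</b> world</a>'))  # hello world
```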
d1440a5d8d1674d5c1865a9e8914bba72236e29c | 2,146 | py | Python | tools/build.py | kxf3199/gltf_tool | b060135209dff2127095575b8fc87849b5bfbdf4 | [
"MIT"
] | 1 | 2022-03-04T10:53:42.000Z | 2022-03-04T10:53:42.000Z | tools/build.py | spindro/disney_brdf_for_yocto-gl | aa79c60ec9603240f35a6c70ed20586d3fe5df45 | [
"MIT"
] | null | null | null | tools/build.py | spindro/disney_brdf_for_yocto-gl | aa79c60ec9603240f35a6c70ed20586d3fe5df45 | [
"MIT"
] | null | null | null | #! /usr/bin/env python3 -B
# build utility for easy development
# complete and unreliable hack used for making it easier to develop
import click, os, platform, markdown, glob, textwrap
def build(target, dirname, buildtype, cmakeopts=''):
os.system('mkdir -p build/{dirname}; cd build/{dirname}; cmake ../../ -GNinja -DCMAKE_BUILD_TYPE={buildtype} -DYOCTO_EXPERIMENTAL=ON {cmakeopts}; cmake --build . {target}'.format(target=target, dirname=dirname, buildtype=buildtype, cmakeopts=cmakeopts))
os.system('ln -Ffs {dirname} build/latest'.format(dirname=dirname))
@click.group()
def run():
pass
@run.command()
@click.argument('target', required=False, default='')
def latest(target=''):
os.system('cd build/latest; cmake --build . {target}'.format(target=target))
@run.command()
@click.argument('target', required=False, default='')
def release(target=''):
build(target, 'release', 'Release')
@run.command()
@click.argument('target', required=False, default='')
def nogl(target=''):
build(target, 'nogl', 'Release', '-DYOCTO_OPENGL=OFF')
@run.command()
@click.argument('target', required=False, default='')
def debug(target=''):
build(target, 'debug', 'Debug')
@run.command()
@click.argument('target', required=False, default='')
def gcc(target=''):
build(target, 'gcc', 'Release', '-DCMAKE_C_COMPILER=gcc-7 -DCMAKE_CXX_COMPILER=g++-7')
@run.command()
def xcode():
os.system('mkdir -p build/xcode; cd build/xcode; cmake -G Xcode -DYOCTO_EXPERIMENTAL=ON ../../; open yocto-gl.xcodeproj')
@run.command()
def clean():
os.system('rm -rf bin; rm -rf build')
@run.command()
def format():
for glob in ['yocto/yocto_*.h', 'yocto/yocto_*.cpp', 'apps/y*.cpp']:
os.system('clang-format -i -style=file ' + glob)
@run.command()
def docs():
os.system('./tools/cpp2doc.py')
@run.command()
def doxygen():
os.system('doxygen ./tools/Doxyfile')
@run.command()
@click.argument('msg', required=True, default='')
def commit(msg=''):
os.system('./tools/build.py format')
os.system('./tools/build.py docs')
os.system('git commit -a -m ' + msg)
if __name__ == '__main__':
run()
| 30.225352 | 257 | 0.671482 | 286 | 2,146 | 4.972028 | 0.342657 | 0.061885 | 0.063291 | 0.097046 | 0.285513 | 0.230661 | 0.182841 | 0.182841 | 0.182841 | 0 | 0 | 0.00214 | 0.129077 | 2,146 | 70 | 258 | 30.657143 | 0.758694 | 0.058714 | 0 | 0.313725 | 0 | 0.039216 | 0.342588 | 0.062469 | 0 | 0 | 0 | 0 | 0 | 1 | 0.254902 | false | 0.019608 | 0.019608 | 0 | 0.27451 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
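`build()` assembles one long shell pipeline via `str.format`. The templating can be reproduced as a pure function so it can be inspected without invoking cmake (`build_cmds` is an illustrative helper, not part of the tool):

```python
def build_cmds(target, dirname, buildtype, cmakeopts=''):
    # Return the two shell commands build() would run: configure+build,
    # then refresh the build/latest symlink.
    configure = ('mkdir -p build/{dirname}; cd build/{dirname}; '
                 'cmake ../../ -GNinja -DCMAKE_BUILD_TYPE={buildtype} '
                 '-DYOCTO_EXPERIMENTAL=ON {cmakeopts}; '
                 'cmake --build . {target}').format(
                     target=target, dirname=dirname,
                     buildtype=buildtype, cmakeopts=cmakeopts)
    link = 'ln -Ffs {dirname} build/latest'.format(dirname=dirname)
    return [configure, link]

for cmd in build_cmds('', 'debug', 'Debug'):
    print(cmd)
```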
d146f4973440816e4a9c135565077e8cd8ed1f36 | 2,108 | py | Python | server/wrangle/config.py | kdinkla/ProtoMPDA | 08ec7de3a24ea07da19062b009ca81a0f5a9c924 | [
"MIT"
] | 3 | 2017-12-07T19:11:24.000Z | 2020-07-03T07:51:09.000Z | server/wrangle/config.py | kdinkla/Screenit | 08ec7de3a24ea07da19062b009ca81a0f5a9c924 | [
"MIT"
] | null | null | null | server/wrangle/config.py | kdinkla/Screenit | 08ec7de3a24ea07da19062b009ca81a0f5a9c924 | [
"MIT"
] | null | null | null | import sqlite3 as lite
import csv

# Constants.
inputPath = "/Users/kdinkla/Desktop/Novartis/HCS/CellMorph/www.ebi.ac.uk/huber-srv/cellmorph/data/"
outputPath = "/Users/kdinkla/MPDA/git/wrangle/db/"
sqlDotReplacement = '_'

# Screening parameters.
plates = ["HT" + str(i).zfill(2) for i in range(1, 69)]
plateDirectories = [inputPath + d + "/" for d in plates]
columns = [c for c in 'ABCDEFGHIJKLMNOP']
rows = [str(r).zfill(3) for r in range(4, 25)]
imageSpots = range(1, 5)
assignedClasses = {
    "AF": "Actin fiber",
    "BC": "Big cells",
    "C": "Condensed",
    "D": "Debris",
    "LA": "Lamellipodia",
    "M": "Metaphase",
    "MB": "Membrane blebbing",
    "N": "Normal",
    "P": "Protruded",
    "Z": "Telophase"
}

# Derived.
dbPath = outputPath + "core.db"

# Connect to SQLite database.
def connect():
    return lite.connect(dbPath)

# Format object feature field for SQL.
def formatField(field):
    return field.replace(".", sqlDotReplacement)

# Convert plate index (starting at 1) to plate tag.
def plateTag(index):
    return plates[index]

def columnTag(index):
    return columns[index]

def rowTag(index):
    return rows[index]

# Determine object feature fields.
def objectFeatures():
    firstFilePath = inputPath + "HT01/HT01A004_ftrs.tab"
    with open(firstFilePath, 'rb') as csvfile:
        reader = csv.reader(csvfile, delimiter='\t')
        header = reader.next()
        return [formatField(f) for f in header if f != 'spot' and f != 'class']

# Directory of feature file of given plate, column, and row.
def featureDirectory(plate, column, row):
    return inputPath + plate + "/" + plate + column + row + "_ftrs.tab"

# Resolves directory for given database column, row, and plate number. Image types: seg and rgb
def wellURL(column, row, plate, type):
    plateTag = plates[plate]
    wellTag = plateTag + columns[column] + rows[row]
    return "http://www.ebi.ac.uk/huber-srv/cellmorph/view/" + plateTag + "/" + wellTag + "/" + wellTag + "_" + type + ".jpeg"
    #return "dataset/images/" + plateTag + "/" + wellTag + "/" + wellTag + "_seg.jpeg"
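`wellURL` composes the EBI image URL from plate, column, and row tags. The same composition can be checked standalone with the module's own tag scheme (HT-prefixed plates, letter columns, zero-padded rows) — a sketch free of the module globals:

```python
plates = ["HT" + str(i).zfill(2) for i in range(1, 69)]
columns = [c for c in 'ABCDEFGHIJKLMNOP']
rows = [str(r).zfill(3) for r in range(4, 25)]

def well_url(column, row, plate, type):
    # Same logic as wellURL() above, reproduced for illustration.
    plate_tag = plates[plate]
    well_tag = plate_tag + columns[column] + rows[row]
    return ("http://www.ebi.ac.uk/huber-srv/cellmorph/view/"
            + plate_tag + "/" + well_tag + "/" + well_tag + "_" + type + ".jpeg")

print(well_url(0, 0, 0, "seg"))
# http://www.ebi.ac.uk/huber-srv/cellmorph/view/HT01/HT01A004/HT01A004_seg.jpeg
```

Note the first well tag, `HT01A004`, matches the `HT01A004_ftrs.tab` file that `objectFeatures` reads.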

# ==== engfrosh_site/frosh/migrations/0004_alter_team_group.py (engfrosh/engfrosh, MIT) ====
# Generated by Django 3.2.3 on 2021-06-19 00:08
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ('auth', '0012_alter_user_first_name_max_length'),
        ('frosh', '0003_alter_team_coin_amount'),
    ]

    operations = [
        migrations.AlterField(
            model_name='team',
            name='group',
            field=models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, primary_key=True, related_name='frosh_team', serialize=False, to='auth.group'),
        ),
    ]

# ==== CMSIS/DSP/Testing/PatternGeneration/Convolutions.py (milos-lazic/CMSIS_5, Apache-2.0) ====
import os.path
import numpy as np
import itertools
import Tools

# Those patterns are used for tests and benchmarks.
# For tests, there is the need to add tests for saturation
def writeTests(config):
    NBSAMPLES = 128

    inputsA = np.random.randn(NBSAMPLES)
    inputsB = np.random.randn(NBSAMPLES)
    inputsA = inputsA / max(inputsA)
    inputsB = inputsB / max(inputsB)
    config.writeInput(1, inputsA, "InputsA")
    config.writeInput(1, inputsB, "InputsB")

PATTERNDIR = os.path.join("Patterns", "DSP", "Filtering", "MISC", "MISC")
PARAMDIR = os.path.join("Parameters", "DSP", "Filtering", "MISC", "MISC")

configf32 = Tools.Config(PATTERNDIR, PARAMDIR, "f32")
configq31 = Tools.Config(PATTERNDIR, PARAMDIR, "q31")
configq15 = Tools.Config(PATTERNDIR, PARAMDIR, "q15")
configq7 = Tools.Config(PATTERNDIR, PARAMDIR, "q7")

writeTests(configf32)
writeTests(configq31)
writeTests(configq15)
writeTests(configq7)
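`writeTests` normalizes each random vector by its maximum before writing it, so the fixed-point configs (q31/q15/q7) receive samples that fit the [-1, 1) range those formats require. Note that dividing by `max()` (the signed maximum) only bounds the positive side; a large negative sample can still fall outside the range. A deterministic toy illustration of the same scheme, using plain lists instead of NumPy:

```python
# Toy stand-in for the normalization in writeTests(), with exact binary values:
samples = [0.5, -3.0, 2.0, -0.25]
peak = max(samples)            # 2.0 -- the signed maximum, as in inputsA / max(inputsA)
normalized = [s / peak for s in samples]

print(max(normalized))         # 1.0 by construction
print(min(normalized))         # -1.5: a large negative sample still escapes [-1, 1]
```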

# ==== todo/migrations/0004_alter_post_title.py (Saup21/Todo-list, MIT) ====
# Generated by Django 3.2.1 on 2021-05-12 20:04
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('todo', '0003_auto_20210511_0127'),
    ]

    operations = [
        migrations.AlterField(
            model_name='post',
            name='title',
            field=models.CharField(blank=True, max_length=25),
        ),
    ]

# ==== vn.trader/strategyMain.py (freeitaly/TT, MIT) ====
# encoding: UTF-8
import sys
import ctypes
import platform

from vtEngine import MainEngine
from ctaAlgo.uiStrategyWindow import *


#----------------------------------------------------------------------
def main():
    """Main program entry point."""
    # Set the bottom taskbar icon (comment this out on systems older than Win7).
    try:
        ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID('vn.py demo')
    except:
        pass

    # Reload the sys module and set the default string encoding to utf8.
    reload(sys)
    sys.setdefaultencoding('utf8')

    # # Set the Windows bottom taskbar icon
    # if 'Windows' in platform.uname() :
    #     ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID('vn.trader')

    # Initialize the Qt application object.
    app = QtGui.QApplication(sys.argv)
    app.setWindowIcon(QtGui.QIcon('vnpy.ico'))
    app.setFont(BASIC_FONT)

    # Set the Qt skin.
    try:
        f = file("VT_setting.json")
        setting = json.load(f)
        if setting['darkStyle']:
            import qdarkstyle
            app.setStyleSheet(qdarkstyle.load_stylesheet(pyside=False))
    except:
        pass

    # Initialize the main engine and main window objects.
    mainEngine = MainEngine()
    mainWindow = MainWindow(mainEngine, mainEngine.eventEngine)
    mainWindow.showMaximized()

    # Start the Qt event loop in the main thread.
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()

# ==== autopatch/target_finder.py (Hydrogen-OS-P/tools, Apache-2.0) ====
#!/usr/bin/python2
# Filename: target_finder.py

"""
Fast search the target out.

Usage: target_finder.py TARGET
       - TARGET    target path relative to current directory.
"""

__author__ = 'duanqz@gmail.com'


import sys
import re
import os
import fnmatch
import commands


class TargetFinder:

    # The framework partitions
    PARTITIONS = []

    def __init__(self):
        self.__initPartitions()

        # Path Regex to match out useful parts
        # Using named group match with "(?P<group_name>)", using minimum match end with "?"
        self.pathRegex = re.compile("(?P<part1>.*?)/(?P<part2>smali/.*?)(?P<part3>.*)")

    def __initPartitions(self):
        """ Parse out the framework partitions.
        """
        makefile = None
        for filename in os.listdir(os.curdir):
            if fnmatch.fnmatch(filename.lower(), "makefile"):
                makefile = filename

        if makefile == None:
            return

        fileHandle = open(makefile, "r")
        content = fileHandle.read()
        modifyJars = re.compile("\n\s*vendor_modify_jars\s*:=\s*(?P<jars>.*)\n")
        match = modifyJars.search(content)
        if match != None:
            TargetFinder.PARTITIONS = match.group("jars").split(" ")
        fileHandle.close()

    def __findInDexPartitions(self, target):
        """ Find the target in dex partition.
            On Android 5.0, files might be split to different dex-partitions.
        """
        if os.path.exists(target):
            return target

        (outClass, innerClass) = TargetFinder.__extractInnerClass(target)

        match = self.pathRegex.search(outClass)
        if match != None:
            # Part 1: top directory of framework
            # Part 2: smali or smali_classes2 ...
            # Part 3: the remains of the path
            part1 = match.group("part1")
            part2 = match.group("part2")
            part3 = match.group("part3")

            if not os.path.exists(part1):
                return target

            for subDir in os.listdir(part1):
                if subDir.startswith("smali") and subDir != part2:
                    newTarget = os.path.join(part1, subDir, part3)
                    if os.path.exists(newTarget):
                        return TargetFinder.__concatInnerClass(newTarget, innerClass)

        # Not found
        return target

    def __findInFrwPartitions(self, target):
        """ Find the target in the partitions.
            Files might be split to different framework-partitions.
        """
        (outClass, innerClass) = TargetFinder.__extractInnerClass(target)

        match = self.pathRegex.search(outClass)
        if match != None:
            # Part 1: top directory of framework
            # Part 2: smali or smali_classes2 ...
            # Part 3: the remains of the path
            part1 = match.group("part1")
            part2 = match.group("part2")
            part3 = match.group("part3")

            for partition in TargetFinder.PARTITIONS:
                if not partition.endswith(".jar.out"):
                    partition += ".jar.out"
                newTarget = os.path.join(partition, part2, part3)
                if os.path.exists(newTarget):
                    # Return the path that was actually found. (The original
                    # returned outClass here, discarding the matched partition.)
                    return TargetFinder.__concatInnerClass(newTarget, innerClass)

        # Not found
        return target

    @staticmethod
    def __extractInnerClass(target):
        """ Extract the inner class file from target
        """
        pos = target.find("$")
        if pos >= 0:
            # Inner class, set outer class as new target to find
            outClass = target[:pos] + ".smali"
            innerClass = target[pos:]
            return (outClass, innerClass)
        else:
            return (target, None)
    @staticmethod
    def __concatInnerClass(outClass, innerClass):
        if innerClass != None:
            return outClass.replace(".smali", innerClass)
        else:
            return outClass

    def __findInAll(self, target):
        """ Find the target in all project roots.
        """
        basename = os.path.basename(target)

        searchPath = []
        for partition in TargetFinder.PARTITIONS:
            if not partition.endswith(".jar.out"):
                partition += ".jar.out"
            searchPath.append(partition)

        cmd = "find %s -name %s" % (" ".join(searchPath), commands.mkarg(basename))
        (sts, text) = commands.getstatusoutput(cmd)
        try:
            if sts == 0:
                text = text.split("\n")[0]
                if len(text) > 0:
                    return text
        except:
            pass

        return target

    def find(self, target, loosely=False):
        """ Find the target out in the current directory.
            Set loosely to True to search for the file base name in all directories.
        """
        # Firstly, check whether target exists in dex partitions
        target = self.__findInDexPartitions(target)
        if os.path.exists(target):
            return target

        # Secondly, check whether target exists in framework partitions.
        # It is more efficient than searching all files.
        target = self.__findInFrwPartitions(target)
        if os.path.exists(target):
            return target

        # Thirdly, if the target is still not found, search all sub directories
        if loosely:
            return self.__findInAll(target)
        else:
            return target

# End of class TargetFinder


if __name__ == "__main__":
    argc = len(sys.argv)
    if argc != 2 :
        print __doc__
        sys.exit()

    target = sys.argv[1]
    print TargetFinder().find(target)
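The named-group regex in `TargetFinder.__init__` splits a smali path at its first `smali/` component: `part1` is the project root, `part2` the dex partition directory, `part3` the remainder. A quick standalone check of how the non-greedy groups carve up a typical path (the example path itself is made up):

```python
import re

# Same pattern as TargetFinder.pathRegex above.
pathRegex = re.compile("(?P<part1>.*?)/(?P<part2>smali/.*?)(?P<part3>.*)")

m = pathRegex.search("framework.jar.out/smali/android/os/Build.smali")
print(m.group("part1"))  # framework.jar.out
print(m.group("part2"))  # smali/  (the lazy .*? matches as little as possible)
print(m.group("part3"))  # android/os/Build.smali
```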

# ==== geotrek/flatpages/serializers.py (pierreloicq/Geotrek-admin, BSD-2-Clause) ====
from rest_framework import serializers as rest_serializers
from geotrek.flatpages import models as flatpages_models
from geotrek.common.serializers import (
    TranslatedModelSerializer, BasePublishableSerializerMixin,
    RecordSourceSerializer, TargetPortalSerializer
)


class FlatPageSerializer(BasePublishableSerializerMixin, TranslatedModelSerializer):
    last_modified = rest_serializers.ReadOnlyField(source='date_update')
    media = rest_serializers.ReadOnlyField(source='parse_media')
    source = RecordSourceSerializer(many=True)
    portal = TargetPortalSerializer(many=True)

    class Meta:
        model = flatpages_models.FlatPage
        fields = ('id', 'title', 'external_url', 'content', 'target',
                  'last_modified', 'slug', 'media', 'source', 'portal') + \
            BasePublishableSerializerMixin.Meta.fields

# ==== apostello/migrations/0007_auto_20160315_1213.py (LaudateCorpus1/apostello, MIT) ====
# -*- coding: utf-8 -*-
# Generated by Django 1.9.4 on 2016-03-15 12:13
from __future__ import unicode_literals

import django.core.validators
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [("apostello", "0006_userprofile_show_tour")]

    operations = [
        migrations.AddField(model_name="userprofile", name="can_archive", field=models.BooleanField(default=True)),
        migrations.AlterField(
            model_name="recipient",
            name="first_name",
            field=models.CharField(
                db_index=True,
                max_length=16,
                validators=[
                    django.core.validators.RegexValidator(
                        "^[\\s\\w@?£!1$\"¥#è?¤é%ù&ì\\ò(Ç)*:Ø+;ÄäøÆ,<LÖlöæ\\-=ÑñÅß.>ÜüåÉ/§à¡¿']+$",
                        message="You can only use GSM characters.",
                    )
                ],
                verbose_name="First Name",
            ),
        ),
        migrations.AlterField(
            model_name="recipient",
            name="last_name",
            field=models.CharField(
                db_index=True,
                max_length=40,
                validators=[
                    django.core.validators.RegexValidator(
                        "^[\\s\\w@?£!1$\"¥#è?¤é%ù&ì\\ò(Ç)*:Ø+;ÄäøÆ,<LÖlöæ\\-=ÑñÅß.>ÜüåÉ/§à¡¿']+$",
                        message="You can only use GSM characters.",
                    )
                ],
                verbose_name="Last Name",
            ),
        ),
    ]

# ==== bridle/const_expr.py (iguessthislldo/bridle, MIT) ====
import enum
import operator as pyop
from abc import ABC, abstractmethod
from collections.abc import Callable
from typing import Any, Optional
import inspect
import string
from dataclasses import dataclass
from typing import TYPE_CHECKING

from .errors import ConstExprError, InternalError

if TYPE_CHECKING:
    from .tree import PrimitiveKind


class ConstAbc(ABC):
    def uncasted_kind(self):
        return None

    @abstractmethod
    def can_eval(self):
        pass

    @abstractmethod
    def eval(self, to: 'PrimitiveKind'):
        pass

    @abstractmethod
    def __str__(self):
        pass

    def __repr__(self):
        return '<{}: {}>'.format(self.__class__.__name__, str(self))


class ConstValue(ConstAbc):
    def __init__(self, value: Any, kind: 'PrimitiveKind'):
        if kind is not None:
            kind.check_value(value)
        self.value = value
        self.kind = kind

    def uncasted_kind(self):
        return self.kind

    def can_eval(self):
        return self.value is not None

    def eval(self, to: 'PrimitiveKind') -> Any:
        if to != self.kind:
            to.check_value(self.value)
        return self.value

    def __str__(self):
        return str(self.value)


@dataclass(frozen=True)
class OpTraits:
    fmt: str
    # TODO: Fix for Python 3.8
    # impl: Optional[Callable[..., Any]]
    # type_impl: Optional[Callable[PrimitiveKind, ..., Any]] = None
    impl: Optional[Any]
    type_impl: Optional[Any] = None
    accepts_floats: bool = True

    @property
    def impl_details(self):
        if self.impl is None:
            return (True, self.type_impl)
        return (False, self.impl)

    @property
    def operand_count(self):
        subtract_one, impl = self.impl_details
        impl_count = len(inspect.getfullargspec(impl).args)
        if subtract_one:
            impl_count -= 1
        # Count only actual replacement fields; Formatter.parse() also yields a
        # tuple for a trailing literal (field_name None), e.g. for '({})'.
        fmt_count = len(
            [f for _, f, _, _ in string.Formatter().parse(self.fmt) if f is not None])
        if impl_count != fmt_count:
            # The original constructed this exception without raising it,
            # which silently discarded the consistency check.
            raise InternalError('impl_count ({}) and fmt_count ({}) are different for {}',
                                impl_count, fmt_count, repr(self.fmt))
        return impl_count


def divide_impl(to: 'PrimitiveKind', a, b) -> Any:
    return (pyop.truediv if to.value.is_float else pyop.floordiv)(a, b)


def invert_impl(to: 'PrimitiveKind', value) -> Any:
    if to.value.is_signed_int:
        return -(value + 1)
    return to.value.max_number_like - value
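`invert_impl` emulates bitwise NOT for a fixed-width kind: for signed integers `~v == -(v + 1)` (two's complement, width-independent), and for unsigned kinds it subtracts from the kind's maximum value. A standalone sketch with the kind object flattened into plain arguments (`is_signed_int` and `max_number_like` here stand in for the real `PrimitiveKind` attributes):

```python
def invert(value, is_signed_int, max_number_like=None):
    # Same arithmetic as invert_impl() above, without the kind object.
    if is_signed_int:
        return -(value + 1)          # two's-complement NOT, any width
    return max_number_like - value   # unsigned NOT within the kind's width

print(invert(5, True))               # -6, matches Python's ~5
print(invert(5, False, 0xFF))        # 250, i.e. ~5 in an unsigned 8-bit kind
print(invert(0, False, 0xFFFF))      # 65535
```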
class Op(enum.Enum):
    OR = OpTraits(fmt='{} | {}', impl=pyop.or_, accepts_floats=False)
    XOR = OpTraits(fmt='{} ^ {}', impl=pyop.xor, accepts_floats=False)
    AND = OpTraits(fmt='{} & {}', impl=pyop.and_, accepts_floats=False)
    RSHIFT = OpTraits(fmt='{} >> {}', impl=pyop.rshift, accepts_floats=False)
    LSHIFT = OpTraits(fmt='{} << {}', impl=pyop.lshift, accepts_floats=False)
    ADD = OpTraits(fmt='{} + {}', impl=pyop.add)
    SUBTRACT = OpTraits(fmt='{} - {}', impl=pyop.sub)
    MULTIPLY = OpTraits(fmt='{} * {}', impl=pyop.mul)
    DIVIDE = OpTraits(fmt='{} / {}', impl=None, type_impl=divide_impl)
    MODULO = OpTraits(fmt='{} % {}', impl=pyop.mod, accepts_floats=False)
    POSITIVE = OpTraits(fmt='+{}', impl=pyop.pos)
    NEGATIVE = OpTraits(fmt='-{}', impl=pyop.neg)
    INVERT = OpTraits(fmt='~{}', impl=None, type_impl=invert_impl, accepts_floats=False)
    PRIORITIZE = OpTraits(fmt='({})', impl=lambda a: a)

    def impl(self, to: 'PrimitiveKind', operands) -> Callable:
        add_to, impl = self.value.impl_details
        if add_to:
            return impl(to, *operands)
        else:
            return impl(*operands)

    @property
    def operand_count(self):
        return self.value.operand_count

    def check_operand(self, operand: ConstAbc):
        kind = operand.uncasted_kind()
        if kind is not None:
            if kind.value.is_float and not self.value.accepts_floats:
                raise ConstExprError(
                    '{} operation doesn\'t accept floating point values', self.name)
            if not kind.value.can_op:
                raise ConstExprError('Not possible to do operations on {}', kind.name)

    def fmt_operands(self, operands):
        return self.value.fmt.format(*[str(i) for i in operands])


class ConstExpr(ConstAbc):
    def __init__(self, op: Op, *operands):
        expected_count = op.operand_count
        if len(operands) != expected_count:
            raise InternalError('{} expects {} operands, got {}',
                                op.name, expected_count, len(operands))
        self.op = op
        self.operands = operands

    def can_eval(self):
        for operand in self.operands:
            if not operand.can_eval():
                return False
        return True

    def eval(self, to: 'PrimitiveKind'):
        if not to.value.can_op:
            raise ConstExprError('Not possible to do operations to get to {}', to)
        operand_values = []
        for operand in self.operands:
            self.op.check_operand(operand)
            operand_values.append(operand.eval(to))
        try:
            value = self.op.impl(to, operand_values)
            to.check_value(value)
        except Exception as e:
            raise ConstExprError('Eval failed: ' + str(e)) from e
        return value

    def __str__(self):
        return self.op.fmt_operands(self.operands)

# ==== tools/rest_integration_test.py (osrf/cloudsim-legacy, Apache-2.0) ====
#!/usr/bin/env python
from __future__ import print_function

import os
import sys
import unittest
import time
import datetime
import logging

from cloudsim_rest_api import CloudSimRestApi
import traceback

# add cloudsim directory to system path
basepath = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
sys.path.insert(0, basepath)
print (sys.path)

import cloudsimd.launchers.cloudsim as cloudsim
from cloudsimd.launchers.launch_utils.launch_db import ConstellationState
from cloudsimd.launchers.launch_utils.launch_db import get_unique_short_name
from cloudsimd.launchers.launch_utils.testing import get_test_runner
from cloudsimd.launchers.launch_utils.testing import get_boto_path
from cloudsimd.launchers.launch_utils.testing import get_test_path

CLOUDSIM_CONFIG = "CloudSim-stable (m1.small)"
SIM_CONFIG = "Simulator-stable (g2.2xlarge)"  # Simulator-stable (cg1.4xlarge)
CLOUD_CREDS = "aws"
CLOUD_REGION = "us-east-1"

try:
    logging.basicConfig(filename='/tmp/rest_integration_test.log',
                format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                level=logging.DEBUG)
except Exception, e:
    print("Can't enable logging: %s" % e)


def create_task_dict(title, launch_file='vrc_task_1.launch'):
    """
    Generates a simple task for testing purposes
    """
    def _get_now_str(days_offset=0):
        """
        Returns a utc string date time format of now, with an optional
        offset in days (negative values are in the past).
        """
        dt = datetime.timedelta(days=days_offset)
        now = datetime.datetime.utcnow()
        # Add the offset (the original subtracted it, which made the
        # "yesterday" / "tomorrow" call sites below read backwards).
        t = now + dt
        s = t.isoformat()
        return s

    task = {}
    task['task_title'] = title
    task['ros_package'] = 'drcsim_gazebo'
    task['ros_launch'] = launch_file
    task['launch_args'] = ''
    task['timeout'] = '3600'
    task['latency'] = '0'
    task['uplink_data_cap'] = '0'
    task['downlink_data_cap'] = '0'
    task['local_start'] = _get_now_str(-1)  # yesterday
    task['local_stop'] = _get_now_str(1)  # tomorrow
    task['bash_src'] = "/home/ubuntu/cloudsim/sim_setup.bash"
    task['vrc_id'] = 1
    task['vrc_num'] = 1
    return task


class RestException(Exception):
    pass


def _diff_list(a, b):
    """
    Compares 2 lists and returns the elements present in list a only
    """
    b = set(b)
    return [aa for aa in a if aa not in b]
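`_diff_list` is an order-preserving set difference: it keeps the elements of `a` that are absent from `b`, in their original order. This is how the test discovers which constellation or task id is new after a create call. A standalone check:

```python
def diff_list(a, b):
    # Order-preserving "a minus b"; same logic as _diff_list() above.
    b = set(b)
    return [aa for aa in a if aa not in b]

before = ['const_1', 'const_2']
after = ['const_1', 'const_2', 'const_3']
print(diff_list(after, before))  # ['const_3'] -> the newly created constellation
```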
def launch_constellation_and_wait(api, config, max_count=100):
    """
    Launch a new constellation, waits for it to appear, and
    returns the new constellation name
    """
    # we're about to create a new constellation... this may not
    # be the first
    previous_constellations = [x['constellation_name'] \
                               for x in api.get_constellations()]
    api.launch_constellation(CLOUD_CREDS, CLOUD_REGION, config)
    print("waiting 10 secs")
    time.sleep(10)
    found = False
    count = 0
    constellation_name = None
    while not found:
        count += 1
        if count > max_count:
            raise RestException("Timeout in Launch %s" % config)
        constellation_list = api.get_constellations()
        current_names = [x['constellation_name'] \
                         for x in constellation_list]
        new_constellations = _diff_list(current_names, previous_constellations)
        print("%s/%s) new constellations: %s" % (count,
                                                 max_count,
                                                 new_constellations))
        if len(new_constellations) > 0:
            found = True
            constellation_name = new_constellations[0]
    return constellation_name


def terminate_constellation(api,
                            constellation_name,
                            sleep_secs=2,
                            max_count=100):
    """
    Terminates a constellation and waits until the process is done.
    """
    def exists(api, constellation_name):
        constellation_list = api.get_constellations()
        current_names = [x['constellation_name'] \
                         for x in constellation_list]
        return constellation_name in current_names

    constellation_exists = exists(api, constellation_name)
    if not constellation_exists:
        raise RestException("terminate_constellation: "
                            "Constellation '%s' not found" % constellation_name)
    # send the termination signal
    api.terminate_constellation(constellation_name)
    count = 0
    while constellation_exists:
        time.sleep(sleep_secs)
        count += 1
        if count > max_count:
            raise RestException("Timeout in terminate_constellation %s" % (
                constellation_name))
        constellation_exists = exists(api, constellation_name)
        print("%s/%s %s exists: %s" % (count,
                                       max_count,
                                       constellation_name,
                                       constellation_exists))


def wait_for_constellation_state(api,
                                 constellation_name,
                                 key="constellation_state",
                                 value="running",
                                 max_count=100,
                                 sleep_secs=5):
    """
    Polls constellation state key until its value matches value. This is used
    to wait until a constellation is ready to run simulations
    """
    count = 0
    while True:
        time.sleep(sleep_secs)
        count += 1
        if count > max_count:
            raise RestException("Timeout in wait for %s = %s "
                                " for %s" % (key, value, constellation_name))
        const_data = api.get_constellation_data(constellation_name)
        state = const_data[key]
        print("%s/%s) %s [%s] = %s" % (count,
                                       max_count,
                                       constellation_name,
                                       key,
                                       state))
        if state == value:
            return const_data
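Several of the helpers above (`launch_constellation_and_wait`, `terminate_constellation`, `wait_for_constellation_state`, and the notebook/gzweb starters below) repeat the same poll-with-timeout loop. The pattern could be factored into one generic helper; a sketch of that refactoring — the `poll_until` name and callback signature are my own, not part of the CloudSim API:

```python
import time

def poll_until(predicate, max_count=100, sleep_secs=1, timeout_msg="Timeout"):
    """Call predicate() up to max_count times, sleeping between attempts.
    Returns the first truthy result; raises on timeout."""
    for count in range(1, max_count + 1):
        result = predicate()
        print("%s/%s result: %s" % (count, max_count, result))
        if result:
            return result
        time.sleep(sleep_secs)
    raise Exception(timeout_msg)

# Toy predicate standing in for an api.get_constellation_data(...) check:
states = iter(['launching', 'launching', 'running'])
print(poll_until(lambda: next(states) == 'running', sleep_secs=0))
```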
def create_task(cloudsim_api, constellation_name, task_dict):
"""
Creates a new task and retrieves the id of the new task. This
requires comparing task names before and after creation
"""
def task_names():
const_data = cloudsim_api.get_constellation_data(constellation_name)
task_names = [x['task_id'] for x in const_data['tasks']]
return task_names
previous_tasks = task_names()
cloudsim_api.create_task(constellation_name, task_dict)
new_tasks = task_names()
delta_tasks = _diff_list(new_tasks, previous_tasks)
new_task_id = delta_tasks[0]
return new_task_id
def wait_for_task_state(cloudsim_api,
constellation_name,
task_id,
target_state,
max_count=100,
sleep_secs=1):
"""
Wait until the task is in a target state (ex "running", or "stopped")
"""
count = 0
while True:
time.sleep(sleep_secs)
count += 1
if count > max_count:
raise RestException("Timeout in start_task"
"%s for %s" % (task_id, constellation_name))
task_dict = cloudsim_api.read_task(constellation_name, task_id)
current_state = task_dict['task_state']
print("%s/%s Task %s: %s" % (count, max_count,
task_id,
current_state))
if current_state == target_state:
return
def run_task(cloudsim_api, constellation_name, task_id,
max_count=100,
sleep_secs=1):
"""
Starts a task and waits for its status to be "running"
"""
# check task
task_dict = cloudsim_api.read_task(constellation_name, task_id)
state = task_dict['task_state']
if state != "ready":
raise RestException("Can't start task in state '%s'" % state)
# run task
cloudsim_api.start_task(constellation_name, task_id)
wait_for_task_state(cloudsim_api,
constellation_name,
task_id,
'running',
max_count,
sleep_secs)
def run_notebook(cloudsim_api, constellation_name):
"""
Starts the notebook service and waits for its status to be "running"
"""
cloudsim_api.start_notebook(constellation_name)
count=100
while count > 0:
time.sleep(5)
count -= 1
r = cloudsim_api.ping_notebook(constellation_name)
print("%s/100 notebook state: %s" % (count, r))
if r == "running":
return
raise RestException("Can't start notebook on %s" % constellation_name)


def stop_notebook(cloudsim_api, constellation_name):
    """
    Stops the notebook service and waits for its status to be "stopped"
    """
    cloudsim_api.stop_notebook(constellation_name)
    count = 100
    while count > 0:
        time.sleep(5)
        count -= 1
        r = cloudsim_api.ping_notebook(constellation_name)
        print("%s/100 notebook state: %s" % (count, r))
        if r == "":
            return
    raise RestException("Can't stop notebook on %s" % constellation_name)


def run_gzweb(cloudsim_api, constellation_name):
    """
    Starts the gzweb service and waits for its status to be "running"
    """
    cloudsim_api.start_gzweb(constellation_name)
    count = 100
    while count > 0:
        time.sleep(5)
        count -= 1
        r = cloudsim_api.ping_gzweb(constellation_name)
        print("%s/100 gzweb state: %s" % (count, r))
        if r == "running":
            return
    raise RestException("Can't start gzweb on %s" % constellation_name)


def stop_gzweb(cloudsim_api, constellation_name):
    """
    Stops the gzweb service and waits for its status to be "stopped"
    """
    cloudsim_api.stop_gzweb(constellation_name)
    count = 100
    while count > 0:
        time.sleep(5)
        count -= 1
        r = cloudsim_api.ping_gzweb(constellation_name)
        print("%s/100 gzweb state: %s" % (count, r))
        if r == "":
            return
    raise RestException("Can't stop gzweb on %s" % constellation_name)


def stop_task(cloudsim_api, constellation_name, task_id, max_count=100,
              sleep_secs=1):
    """
    Stops a task and waits for its status to go from "running" to "stopped"
    """
    # check task
    task_dict = cloudsim_api.read_task(constellation_name, task_id)
    state = task_dict['task_state']
    if state != "running":
        raise RestException("Can't stop task in state '%s'" % state)
    # stop task
    cloudsim_api.stop_task(constellation_name)
    wait_for_task_state(cloudsim_api,
                        constellation_name,
                        task_id,
                        'stopped',
                        max_count,
                        sleep_secs)


def flush():
    """
    Fake method to avoid crashes, because flush is not present on the
    Delegate_io class used by XMLTestRunner.
    """
    pass


class RestTest(unittest.TestCase):
    """
    Test that creates a CloudSim on AWS. A simulator is then launched
    from that CloudSim and a simulation task is run.
    This test is run by Jenkins when CloudSim code is modified.
    """
    def title(self, text):
        print("")
        print("#######################################")
        print("#")
        print("# %s" % text)
        print("#")
        print("#######################################")

    def setUp(self):
        self.title("setUp")
        try:
            # provide a no-op flush to avoid crashes when sys.stdout and
            # sys.stderr are overridden to write xml files (when running
            # with Jenkins)
            sys.stdout.flush = flush
            sys.stderr.flush = flush
        except Exception:
            print("Using normal sys.stdout and sys.stderr")
        self.cloudsim_api = None
        self.simulator_name = None
        self.papa_cloudsim_name = None
        self.baby_cloudsim_name = None
        self.user = 'admin'
        self.password = 'test123'
        self.papa_cloudsim_name = get_unique_short_name('rst')
        self.data_dir = get_test_path("rest_test")
        self.creds_fname = get_boto_path()
        self.ip = None
        print("data dir: %s" % self.data_dir)
        print("cloudsim constellation: %s" % self.papa_cloudsim_name)
        print("user: %s, password: %s" % (self.user, self.password))

    def test(self):
        self.title("create_cloudsim")
        self.ip = cloudsim.create_cloudsim(
            username=self.user,
            credentials_fname=self.creds_fname,
            region=CLOUD_REGION,
            configuration=CLOUDSIM_CONFIG,
            authentication_type="Basic",
            password=self.password,
            data_dir=self.data_dir,
            constellation_name=self.papa_cloudsim_name)
        self.assertTrue(True, "cloudsim not created")
        print("papa cloudsim %s created in %s" % (self.ip, self.data_dir))
        print("\n\n")
        print('api = CloudSimRestApi("%s", "%s", "%s")' % (self.ip,
                                                           self.user,
                                                           self.password))
        self.cloudsim_api = CloudSimRestApi(self.ip, self.user, self.password)
        cfgs = self.cloudsim_api.get_machine_configs()
        try:
            print(cfgs.keys())
            print(cfgs)
            cfgs_creds = cfgs[CLOUD_CREDS]['regions']
            cfgs_region = cfgs_creds[CLOUD_REGION]['configurations']
            cfgs_names = [x['name'] for x in cfgs_region]
            print("configs: %s" % cfgs_names)
        except Exception, e:
            import traceback
            tb = traceback.format_exc()
            print("traceback: %s" % tb)

        self.title("launch baby cloudsim")
        self.baby_cloudsim_name = launch_constellation_and_wait(
            self.cloudsim_api,
            config=CLOUDSIM_CONFIG)
        print("# baby cloudsim %s launched" % (self.baby_cloudsim_name))
        self.assertTrue(True, "baby cloudsim not created")

        self.title("launch simulator")
        self.simulator_name = launch_constellation_and_wait(self.cloudsim_api,
                                                            config=SIM_CONFIG)
        print("# Simulator %s launched" % (self.simulator_name))
        self.assertTrue(True, "simulator not created")

        self.title("Wait for baby cloudsim readiness")
        print("api.get_constellation_data('%s')" % self.baby_cloudsim_name)
        wait_for_constellation_state(self.cloudsim_api,
                                     self.baby_cloudsim_name,
                                     key="constellation_state",
                                     value="running",
                                     max_count=100)
        self.assertTrue(True, "baby cloudsim not ready")
        print("# baby cloudsim machine ready")

        self.title("Update baby cloudsim")
        self.cloudsim_api.update_constellation(self.baby_cloudsim_name)
        wait_for_constellation_state(self.cloudsim_api,
                                     self.baby_cloudsim_name,
                                     key="constellation_state",
                                     value="running",
                                     max_count=100)
        print("# baby cloudsim machine updated")

        self.title("Wait for simulator readiness")
        print("api.get_constellation_data('%s')" % self.simulator_name)
        wait_for_constellation_state(self.cloudsim_api,
                                     self.simulator_name,
                                     key="launch_stage",
                                     value="running",
                                     max_count=100)
        self.assertTrue(True, "simulator not ready")
        print("# Simulator machine ready")

        self.title("Test notebook")
        run_notebook(self.cloudsim_api, self.simulator_name)
        stop_notebook(self.cloudsim_api, self.simulator_name)

        # the simulator is ready!
        self.title("# create task")
        print('tid = create_task(api, "%s", '
              'create_task_dict("test 0"))' % self.simulator_name)
        print("\n\n")
        task_dict = create_task_dict("test task 1")
        print("%s" % task_dict)
        self.task_id = create_task(self.cloudsim_api,
                                   self.simulator_name,
                                   task_dict)
        self.assertTrue(True, "task not created")
        run_task(self.cloudsim_api, self.simulator_name, self.task_id)

        self.title("Test gzweb")
        run_gzweb(self.cloudsim_api, self.simulator_name)
        stop_gzweb(self.cloudsim_api, self.simulator_name)
        self.assertTrue(True, "task not run")

        self.title("# stop task")
        stop_task(self.cloudsim_api, self.simulator_name, self.task_id)
        self.assertTrue(True, "task not stopped")

    def tearDown(self):
        self.title("tearDown")
        self.title("terminate baby cloudsim")
        try:
            if self.cloudsim_api and self.baby_cloudsim_name:
                terminate_constellation(self.cloudsim_api,
                                        self.baby_cloudsim_name)
            else:
                print("No baby cloudsim created")
        except Exception, e:
            print("Error terminating baby cloudsim constellation %s: %s" % (
                self.baby_cloudsim_name,
                e))

        self.title("terminate simulator")
        try:
            if self.cloudsim_api and self.simulator_name:
                terminate_constellation(self.cloudsim_api, self.simulator_name)
            else:
                print("No simulator created")
        except Exception, e:
            print("Error terminating simulator constellation %s: %s" % (
                self.simulator_name,
                e))
            tb = traceback.format_exc()
            print("traceback: %s" % tb)

        self.title("terminate papa cloudsim")
        try:
            if self.papa_cloudsim_name and self.ip:
                print("terminate cloudsim '%s' %s" % (self.papa_cloudsim_name,
                                                      self.ip))
                cloudsim.terminate(self.papa_cloudsim_name)
                # remove from Redis
                constellation = ConstellationState(self.papa_cloudsim_name)
                constellation.expire(1)
        except Exception, e:
            print("Error terminating papa cloudsim '%s' : %s" % (
                self.papa_cloudsim_name,
                e))
            tb = traceback.format_exc()
            print("traceback: %s" % tb)


if __name__ == "__main__":
    xmlTestRunner = get_test_runner()
    unittest.main(testRunner=xmlTestRunner)
| 36.155268 | 80 | 0.561145 | 2,122 | 19,560 | 4.956645 | 0.146089 | 0.08243 | 0.028523 | 0.021677 | 0.450181 | 0.417855 | 0.352634 | 0.304526 | 0.269253 | 0.230082 | 0 | 0.009306 | 0.346268 | 19,560 | 540 | 81 | 36.222222 | 0.813248 | 0.021217 | 0 | 0.371795 | 0 | 0 | 0.138212 | 0.016998 | 0 | 0 | 0 | 0 | 0.020513 | 0 | null | null | 0.017949 | 0.041026 | null | null | 0.125641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d15966aff54460eabbd17a44b8dbeb7ba2af747c | 305 | py | Python | snmp/simple_snmp.py | gahlberg/pynet_class_work | 2389e7e5717d4b479ee002ada3b45694b7566756 | [
"Apache-2.0"
] | null | null | null | snmp/simple_snmp.py | gahlberg/pynet_class_work | 2389e7e5717d4b479ee002ada3b45694b7566756 | [
"Apache-2.0"
] | null | null | null | snmp/simple_snmp.py | gahlberg/pynet_class_work | 2389e7e5717d4b479ee002ada3b45694b7566756 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
from snmp_helper import snmp_get_oid, snmp_extract
COMMUNITY_STRING = 'galileo'
SNMP_PORT = 7961
IP = '50.76.53.27'
a_device = (IP, COMMUNITY_STRING, SNMP_PORT)
OID = '1.3.6.1.2.1.1.1.0'
snmp_data = snmp_get_oid(a_device, oid=OID)
output = snmp_extract(snmp_data)
print output
| 16.944444 | 49 | 0.740984 | 57 | 305 | 3.701754 | 0.54386 | 0.066351 | 0.094787 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 0.127869 | 305 | 17 | 50 | 17.941176 | 0.714286 | 0.065574 | 0 | 0 | 0 | 0 | 0.123239 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.111111 | null | null | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d15b50d016c594fe47cebba986b4e8896fb93412 | 8,037 | py | Python | titanic2.py | kyzoon/kaggle_titanic | 9aad72932343d3387b744688cb1cd7edbfd4ef41 | [
"MIT"
] | null | null | null | titanic2.py | kyzoon/kaggle_titanic | 9aad72932343d3387b744688cb1cd7edbfd4ef41 | [
"MIT"
] | null | null | null | titanic2.py | kyzoon/kaggle_titanic | 9aad72932343d3387b744688cb1cd7edbfd4ef41 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
__author__ = 'preston.zhu'
import numpy as np
import pandas as pd
import re
import operator
from sklearn.ensemble import RandomForestClassifier, ExtraTreesRegressor
import pdb
def get_title(name):
    # regex search for a title such as ' Mr.' embedded in the name
    title_search = re.search(' ([A-Za-z]+)\.', name)
    # non-empty when a title was found
    if title_search:
        # group(1) is the captured title, without the leading space or the dot
        return title_search.group(1)
    return ""
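For reference, the regex above can be checked standalone on Titanic-style names (the names below are illustrative, not taken from the data set):

```python
import re

def get_title(name):
    # search for a title such as ' Mr.' embedded in the name
    title_search = re.search(r' ([A-Za-z]+)\.', name)
    if title_search:
        # group(1) is the captured title, without the space or the dot
        return title_search.group(1)
    return ""

print(get_title("Braund, Mr. Owen Harris"))  # -> Mr
print(get_title("Heikkinen, Miss. Laina"))   # -> Miss
```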


family_id_mapping = {}


def get_family_id(row):
    last_name = row['Name'].split(',')[0]
    family_id = "{0}{1}".format(last_name, row['FamilySize'])
    if family_id not in family_id_mapping:
        if len(family_id_mapping) == 0:
            current_id = 1
        else:
            # operator.itemgetter(1) extracts the value of each (key, value)
            # pair; max() over dict.items() therefore finds the largest id
            # assigned so far, and the next id follows it
            current_id = (max(family_id_mapping.items(),
                              key=operator.itemgetter(1))[1] + 1)
        # map the surname-based family key to a numeric family id
        family_id_mapping[family_id] = current_id
    return family_id_mapping[family_id]


# Split passengers into three categories based on age and sex
def get_person(passenger):
    age, sex = passenger
    if age < 14:  # children are defined as younger than 14
        return 'child'
    elif sex == 'female':
        return 'female_adult'
    else:
        return 'male_adult'


# Extract the family (last) name
def process_surname(nm):
    return nm.split(',')[0].lower()


perishing_female_surnames = []


def perishing_mother_wife(passenger):
    surname, Pclass, person = passenger
    # 1.0 for adult females who perished and travelled with family, else 0.0
    return 1.0 if (surname in perishing_female_surnames) else 0.0


surviving_male_surnames = []


def surviving_father_husband(passenger):
    surname, Pclass, person = passenger
    return 1.0 if (surname in surviving_male_surnames) else 0.0
"""
特征工程:
采用将训练集与测试集的数据拼接在一起,然后进行回归,再补充'Age'的缺失值,这是一个很好的方法;
移除掉'Ticket'特征
'Embarked'特征采用众数'S'补充缺失值
'Fare'采用中间值补充缺件值
增加'TitleCat'特征:从名称中抽取表示个人身份地位的称为来表示
增加'CabinCat'特征:先将缺件值补充字符'0',然后提取第一个字符做为其分类。缺失值太多,另作为一个分类
增加'EmbarkedCat'特征:由'Cabin'特征转换成数值分类表示
增加'Sex_male'和'Sex_female'两个特征:由'Sex'特征仿拟(dummy)
增加'FamilySize'特征:由'SibSp'和'Parch'两个特征之各,表示同行家人数量
增加'NameLength'特征:由名称字符数量表示。
增加'FamilyId'特征:先提取'Name'特征中的姓氏,并按字母排序编号得出。另外,同行家人少于3人的,'FamilyId'统一
归于-1类
增加'person'特征:由'Age'和'Sex'特征,小于14岁定义为儿童'child',大于14岁的女性定义为成年女性
'female_adult',大于14岁的男性定义为成年男性'male_adult'
增加'persion_child', 'person_female_adult', 'person_male_adult'三个特征:由'person'特征仿拟(dummy)
增加'surname'特征:由'Name'特征提取出姓氏部分
增加'perishing_mother_wife'特征:过逝的母亲或妻子,对家人的存活影响会比较大
增加‘surviving_father_husband'特征:存活的父亲或丈夫,对家人的存活影响也会比较大
最后选择进行训练的特征为:
'Age', 'Fare', 'Parch', 'Pclass', 'SibSp','male_adult', 'female_adult', 'child',
'perishing_mother_wife', 'surviving_father_husband', 'TitleCat', 'CabinCat',
'Sex_female', 'Sex_male', 'EmbarkedCat', 'FamilySize', 'NameLength', 'FamilyId'
由于经过拼接,所以需要对训练集与测试集进行拆分,前891个实例为训练集,后418个实例为测试集
"""


def features():
    train_data = pd.read_csv("input/train.csv", dtype={"Age": np.float64})
    test_data = pd.read_csv("input/test.csv", dtype={"Age": np.float64})
    # concatenate the two DataFrames row-wise, test_data after train_data
    combined2 = pd.concat([train_data, test_data], axis=0)
    # drop the 'Ticket' feature; axis=1 selects columns, inplace=True acts
    # on the frame itself
    combined2.drop(['Ticket'], axis=1, inplace=True)
    # fill missing 'Embarked' values with the mode
    combined2.Embarked.fillna('S', inplace=True)
    # fill missing 'Fare' values with the median
    combined2.Fare.fillna(combined2.Fare[combined2.Fare.notnull()].median(),
                          inplace=True)
    # new 'Title' feature, extracted from the title embedded in 'Name'
    # Series.apply(func) calls func once for every element of the Series
    combined2['Title'] = combined2["Name"].apply(get_title)
    title_mapping = {
        "Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6,
        "Major": 7, "Col": 7, "Mlle": 8, "Mme": 8, "Don": 7, "Dona": 10,
        "Lady": 10, "Countess": 10, "Jonkheer": 10, "Sir": 7, "Capt": 7,
        "Ms": 2
    }
    # new 'TitleCat' feature: 'Title' mapped to a number
    # map() applies the dict to every element of 'Title'
    combined2["TitleCat"] = combined2.loc[:, 'Title'].map(title_mapping)
    # new 'CabinCat' feature: fill missing values with '0', then take the
    # first letter; pd.Categorical sorts the distinct values and encodes
    # each category as a number
    combined2["CabinCat"] = pd.Categorical(
        combined2.Cabin.fillna('0').apply(lambda x: x[0])).codes
    # fill missing 'Cabin' values with '0'
    combined2.Cabin.fillna('0', inplace=True)
    # encode the 'Embarked' categories as numbers
    combined2['EmbarkedCat'] = pd.Categorical(combined2.Embarked).codes
    # concatenate column-wise into a new DataFrame: add the two sex columns
    # and move 'Survived' to the last column
    # pandas.get_dummies() expands 'Sex' into 'Sex_male' and 'Sex_female';
    # in 'Sex_male', male is 1 and female is 0, and vice versa
    full_data = pd.concat([
        combined2.drop(['Survived'], axis=1),
        pd.get_dummies(combined2.Sex, prefix='Sex'),
        combined2.Survived
    ], axis=1)
    # new 'FamilySize' feature: sum of the 'SibSp' and 'Parch' family counts
    full_data['FamilySize'] = full_data['SibSp'] + full_data['Parch']
    # new 'NameLength' feature: the length of the name
    full_data['NameLength'] = full_data.Name.apply(lambda x: len(x))
    family_ids = full_data.apply(get_family_id, axis=1)
    # put all families with fewer than 3 members into the -1 class
    family_ids[full_data['FamilySize'] < 3] = -1
    # new 'FamilyId' feature
    full_data['FamilyId'] = family_ids
    # append the 'person' column: age and sex split passengers into child,
    # female_adult and male_adult
    full_data = pd.concat([
        full_data,
        pd.DataFrame(full_data[['Age', 'Sex']].apply(get_person, axis=1),
                     columns=['person'])
    ], axis=1)
    # dummies for person
    dummies = pd.get_dummies(full_data['person'])
    # append the 'child', 'female_adult' and 'male_adult' columns
    full_data = pd.concat([full_data, dummies], axis=1)
    # new surname feature
    full_data['surname'] = full_data['Name'].apply(process_surname)
    # adult females who perished and travelled with family, deduplicated
    perishing_female_surnames = list(set(full_data[
        (full_data.female_adult == 1.0)
        & (full_data.Survived == 0.0)
        & ((full_data.Parch > 0) | (full_data.SibSp > 0))]['surname'].values))
    # new 'perishing_mother_wife' feature: 1 for a perished adult female
    # with accompanying family, otherwise 0
    full_data['perishing_mother_wife'] \
        = full_data[['surname', 'Pclass', 'person']].apply(
            perishing_mother_wife, axis=1)
    # adult males who survived and travelled with family, deduplicated
    surviving_male_surnames = list(set(full_data[
        (full_data.male_adult == 1.0)
        & (full_data.Survived == 1.0)
        & ((full_data.Parch > 0) | (full_data.SibSp > 0))]['surname']))
    full_data['surviving_father_husband'] \
        = full_data[['surname', 'Pclass', 'person']].apply(
            surviving_father_husband, axis=1)
    # feature selector for the age regression
    classers = [
        'Fare', 'Parch', 'Pclass', 'SibSp', 'TitleCat', 'CabinCat',
        'Sex_female', 'Sex_male', 'EmbarkedCat', 'FamilySize', 'NameLength',
        'FamilyId'
    ]
    # ExtraTreesRegressor model: regress the missing 'Age' values from the
    # rows where 'Age' is present
    age_et = ExtraTreesRegressor(n_estimators=200)
    # rows with a non-null 'Age' form the training set
    X_train = full_data.loc[full_data.Age.notnull(), classers]
    # their 'Age' column is the training label
    Y_train = full_data.loc[full_data.Age.notnull(), ['Age']]
    # rows with a null 'Age' form the test set
    X_test = full_data.loc[full_data.Age.isnull(), classers]
    # np.ravel() flattens to a 1-d array
    age_et.fit(X_train, np.ravel(Y_train))
    age_preds = age_et.predict(X_test)
    # write the regression predictions back into the data set
    full_data.loc[full_data.Age.isnull(), ['Age']] = age_preds
    # feature selector for the final model
    model_dummys = [
        'Age', 'Fare', 'Parch', 'Pclass', 'SibSp', 'male_adult',
        'female_adult', 'child', 'perishing_mother_wife',
        'surviving_father_husband', 'TitleCat', 'CabinCat', 'Sex_female',
        'Sex_male', 'EmbarkedCat', 'FamilySize', 'NameLength', 'FamilyId'
    ]
    # split out the training and test sets again
    X_data = full_data.iloc[:891, :]
    X_train = X_data.loc[:, model_dummys]
    Y_data = full_data.iloc[:891, :]
    y_train = Y_data.loc[:, ['Survived']]
    X_t_data = full_data.iloc[891:, :]
    X_test = X_t_data.loc[:, model_dummys]
    test_PassengerId = X_t_data.PassengerId.as_matrix()
    return X_train, y_train, X_test, test_PassengerId


def titanic():
    print('Preparing Data...')
    X_train, y_train, X_test, test_PassengerId = features()
    print('Train RandomForestClassifier Model...')
    # random forest model
    model_rf = RandomForestClassifier(n_estimators=300,
                                      min_samples_leaf=4,
                                      class_weight={0: 0.745, 1: 0.255})
    # train
    model_rf.fit(X_train, np.ravel(y_train))
    print('Predicting...')
    model_results = model_rf.predict(X_test)
    print('Generate Submission File...')
    submission = pd.DataFrame({
        'PassengerId': test_PassengerId,
        'Survived': model_results.astype(np.int32)
    })
    submission.to_csv('prediction7.csv', index=False)
    print('Done.')


if __name__ == '__main__':
    titanic()
d161ec660784d01b878001017831664382622e75 | 382 | py | Python | cyan/util/_enum.py | huajitech/cyan | 6809f7b738b2b4c458d08346f533167c7e7c0a83 | [
"MIT"
] | 5 | 2022-01-23T11:57:55.000Z | 2022-01-25T07:03:09.000Z | cyan/util/_enum.py | huajitech/cyan | 6809f7b738b2b4c458d08346f533167c7e7c0a83 | [
"MIT"
] | null | null | null | cyan/util/_enum.py | huajitech/cyan | 6809f7b738b2b4c458d08346f533167c7e7c0a83 | [
"MIT"
] | 2 | 2022-01-25T03:04:43.000Z | 2022-01-25T07:03:17.000Z | from enum import EnumMeta
from typing import Any
def get_enum_key(enum: EnumMeta, value: Any, default: Any = ...) -> Any:
"""
获取 `Enum` 值对应的键。
参数:
- enum: Enum 类型
- value: 将要查询对应键的值
- default: 当对应键不存在时返回的默认值(默认返回传入的 `value` 参数)
"""
return enum._value2member_map_.get(
value,
value if default == ... else default
)
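A minimal usage sketch of the helper, restated here so it runs standalone (the `Color` enum is a made-up example):

```python
from enum import Enum

def get_enum_key(enum_cls, value, default=...):
    # same logic as above: look the value up in _value2member_map_ and fall
    # back to the default (or the value itself) when it is missing
    return enum_cls._value2member_map_.get(
        value,
        value if default == ... else default)

class Color(Enum):
    RED = 1
    GREEN = 2

print(get_enum_key(Color, 1))         # Color.RED
print(get_enum_key(Color, 99))        # 99 -- no matching key, value echoed back
print(get_enum_key(Color, 99, None))  # None -- explicit default
```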
| 20.105263 | 72 | 0.581152 | 44 | 382 | 4.931818 | 0.522727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003759 | 0.303665 | 382 | 18 | 73 | 21.222222 | 0.81203 | 0.298429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d168739d8cc490f771d23c7a1b691bf5116d7173 | 430 | py | Python | bugtests/test286c.py | doom38/jython_v2.2.1 | 0803a0c953c294e6d14f9fc7d08edf6a3e630a15 | [
"CNRI-Jython"
] | null | null | null | bugtests/test286c.py | doom38/jython_v2.2.1 | 0803a0c953c294e6d14f9fc7d08edf6a3e630a15 | [
"CNRI-Jython"
] | null | null | null | bugtests/test286c.py | doom38/jython_v2.2.1 | 0803a0c953c294e6d14f9fc7d08edf6a3e630a15 | [
"CNRI-Jython"
] | null | null | null | """
Test multilevel overriding of java methods in jythonc.
"""
from java.util import Date
class SubDate(Date):
def toString(self):
s = Date.toString(self)
return 'SubDate -> Date'
class SubSubDate(SubDate):
def toString(self):
return 'SubSubDate -> ' + SubDate.toString(self)
assert SubDate().toString() == 'SubDate -> Date'
assert SubSubDate().toString() == 'SubSubDate -> SubDate -> Date'
| 23.888889 | 65 | 0.655814 | 48 | 430 | 5.875 | 0.4375 | 0.156028 | 0.106383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204651 | 430 | 17 | 66 | 25.294118 | 0.824561 | 0.125581 | 0 | 0.2 | 0 | 0 | 0.19837 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.1 | 0.1 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d16e384cf387a664a33b991f18c0766cbc5a4c0d | 4,294 | py | Python | dev_tools/scan_inclusions.py | frannuca/quantlib | 63e66f5f767397e5b7c79fa78eaed4e3e0a6b7c6 | [
"BSD-3-Clause"
] | null | null | null | dev_tools/scan_inclusions.py | frannuca/quantlib | 63e66f5f767397e5b7c79fa78eaed4e3e0a6b7c6 | [
"BSD-3-Clause"
] | null | null | null | dev_tools/scan_inclusions.py | frannuca/quantlib | 63e66f5f767397e5b7c79fa78eaed4e3e0a6b7c6 | [
"BSD-3-Clause"
] | 1 | 2022-02-24T04:54:18.000Z | 2022-02-24T04:54:18.000Z | import os, sys, re, string
import xml.dom.minidom
import xml.dom.ext
QL_ROOT = "C:/Projects/QuantLibSVN/trunk/"
VC8 = "C:/Program Files/Microsoft Visual Studio 8/"
BOOST = "C:/Boost/boost_1_33_1/"
QL = QL_ROOT +"QuantLib/"
QL_ADDIN = QL_ROOT + "QuantLibAddin/"
OBJECT_HANDLER = QL_ROOT + "ObjectHandler/"
QL_XL = QL_ROOT + "QuantLibXL/"
STD = VC8 + "VC/include/"
SDK = VC8 + "VC/PlatformSDK/Include"
INCLUDE_PATH = [QL, QL_ADDIN, OBJECT_HANDLER, QL_XL, BOOST, STD, SDK]
PREFIX_PATH = ["ql", "qlo", "oh", "boost", "qlxl", "ohxl", "xlsdk"]
class MyError(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
def searchAndParseHeaderFile(fileName):
for includePath in INCLUDE_PATH:
filePath = includePath + fileName[0].lower() + fileName[1:]
if os.path.isfile(filePath):
return parseHeaderFile(filePath)
filePath = includePath + fileName[0].upper() + fileName[1:]
if os.path.isfile(filePath):
return parseHeaderFile(filePath)
raise MyError("searchAndParseHeaderFile: " + fileName + " not found")
def getFilePrefix(include):
for prefix in PREFIX_PATH:
if re.match(prefix + '/.*',include):
return prefix
return "std"
def parseHeaderFile(filePath):
includes = []
nbLines = 0
f=open(filePath)
for line in f:
nbLines +=1
if not re.match("//", line):
includesLines = re.findall('^#include.*<.*>', line)
if includesLines:
includeName = re.findall('<.*>', includesLines[0])[0][1:-1]
includes.append(includeName)
f.close()
return includes, nbLines
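As a quick sanity check of the two regexes in parseHeaderFile, here is the same filtering logic applied to an in-memory snippet (the source text is invented):

```python
import re

source = """\
// #include <skipped/because/commented.hpp>
#include <ql/quantlib.hpp>
#include <boost/shared_ptr.hpp>
int main() { return 0; }
"""

includes = []
for line in source.splitlines():
    if not re.match("//", line):                      # skip // comment lines
        includesLines = re.findall('^#include.*<.*>', line)
        if includesLines:
            # strip the surrounding angle brackets
            includes.append(re.findall('<.*>', includesLines[0])[0][1:-1])

print(includes)  # ['ql/quantlib.hpp', 'boost/shared_ptr.hpp']
```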


def walkThroughIncludesFiles(fileName, files, filesCounters, node, document):
    new = document.createElement('header')
    node.appendChild(new)
    parsingResults = searchAndParseHeaderFile(fileName)
    includes = parsingResults[0]
    attribute = "%i" % parsingResults[1]
    new.setAttribute('nbLines', attribute)
    nbLines = parsingResults[1]
    for include in includes:
        # if the child has not been recorded yet, explore it
        include = "%s" % include
        if not files.count(include) > 0:
            files.append(include)
            try:
                prefix = getFilePrefix(include)
                filesCounters[prefix][0] += 1
                result = walkThroughIncludesFiles(include, files,
                                                  filesCounters, new,
                                                  document)
                nbLines += result[0]
                filesCounters[prefix][1] += result[1]
            except MyError, e:
                print e.value, " in : " + fileName
    attribute = "%i" % nbLines
    new.setAttribute('total', attribute)
    new.setAttribute('name', fileName)
    return int(nbLines), parsingResults[1]


def trackDependencies(fileName):
    document = xml.dom.minidom.Document()
    filesCounters = {}
    filesCounters["boost"] = [0, 0]
    filesCounters["ql"] = [0, 0]
    filesCounters["qlo"] = [0, 0]
    filesCounters["qlxl"] = [0, 0]
    filesCounters["oh"] = [0, 0]
    filesCounters["ohxl"] = [0, 0]
    filesCounters["xlsdk"] = [0, 0]
    filesCounters["std"] = [0, 0]
    files = []
    files.append(fileName)
    nbLines = walkThroughIncludesFiles(fileName, files, filesCounters,
                                       document, document)
    return filesCounters, document, nbLines, files


if __name__ == '__main__':
    if len(sys.argv) != 2:
        print 'Give the relative path of the file you want to scan (wrt to the included folders)'
        sys.exit()
    args = sys.argv[1:]
    fileName = args[0]
    result = trackDependencies(fileName)
    nbLinesParsed = result[2][0]
    print "number of files parsed ", len(result[3])
    print "number of lines parsed ", nbLinesParsed
    namespaces = result[0]
    for namespace in namespaces:
        print namespace, ":\tnb Files ", namespaces[namespace][0]
        print "\tnb lines ", namespaces[namespace][1]
        print "\t%(nbLines)02d" % {'nbLines': float(namespaces[namespace][1]) / nbLinesParsed * 100}, "%"
    outputName = fileName.replace("/", "-") + ".xml"
    output = "./" + outputName
    f = open(output, 'w')
    xml.dom.ext.PrettyPrint(result[1], f)
    f.close()
    print "result saved in ", outputName
| 33.811024 | 98 | 0.617839 | 475 | 4,294 | 5.513684 | 0.311579 | 0.006873 | 0.040092 | 0.021382 | 0.045819 | 0.045819 | 0.045819 | 0.045819 | 0.045819 | 0.045819 | 0 | 0.018599 | 0.248719 | 4,294 | 127 | 99 | 33.811024 | 0.793242 | 0.010247 | 0 | 0.056604 | 0 | 0 | 0.128689 | 0.023948 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.028302 | null | null | 0.075472 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d16e9d0e26c5e30db9fe137457ef9304c8e4a910 | 5,779 | py | Python | ports/esp32/boards/METERBOARD32/modules/mbus/device.py | henriknelson/micropython | eb6c2bd0f4ac133bcb8edb81fb29aa21ade5211b | [
"MIT"
] | 1 | 2020-01-21T01:49:20.000Z | 2020-01-21T01:49:20.000Z | ports/esp32/boards/METERBOARD32/modules/mbus/device.py | henriknelson/micropython | eb6c2bd0f4ac133bcb8edb81fb29aa21ade5211b | [
"MIT"
] | null | null | null | ports/esp32/boards/METERBOARD32/modules/mbus/device.py | henriknelson/micropython | eb6c2bd0f4ac133bcb8edb81fb29aa21ade5211b | [
"MIT"
] | null | null | null | from mbus.record import ValueRecord
from machine import RTC
import ubinascii
import random
import time
import re
class MBusDevice:
"""Class that encapulates/emulates a single MBus device"""
def __init__(self, primary_address, secondary_address, manufacturer, meter_type):
self._primary_address = primary_address
self._secondary_address = secondary_address
self._manufacturer = manufacturer
self._type = meter_type
self._access_number = random.randint(0,255)
self._records = []
self._rsp_ud2 = []
self._selected = False
self.rtc = RTC()
def get_time(self):
"""Returns the current time, as known by this MBus device"""
return "%02u:%02u:%02u (%d)" % self.rtc.datetime()[4:8]

    def select(self):
        """Puts this MBus device in the 'selected' state"""
        if not self._selected:
            self._selected = True
            self.log("device {} is now selected".format(
                self._secondary_address))

    def deselect(self):
        """Puts this MBus device in an 'unselected' state"""
        if self._selected:
            self._selected = False
            self.log("device {} is now deselected".format(
                self._secondary_address))

    def is_selected(self):
        """Returns the current selection state for this MBus device"""
        return self._selected

    def log(self, message):
        print("[{}][debug ] {}".format(self.get_time(), message))

    def update(self):
        for record in self._records:
            record.update()
        self.log("Device with ID {} has updated its data".format(
            self._secondary_address))
        self.seal()

    def add_record(self, record):
        self._records.append(record)

    def seal(self):
        self._rsp_ud2 = self.get_rsp_ud2()

    def get_primary_address(self):
        """Returns the primary address for this MBus device"""
        return self._primary_address

    def get_secondary_address(self):
        """Returns the secondary address for this MBus device"""
        return self._secondary_address

    def matches_secondary_address(self, search_string):
        """Returns true if the secondary address of this MBus device
        matches the provided search string"""
        pattern = re.compile(search_string.replace('f', '[0-9]'))
        if pattern.match(self._secondary_address):
            return True
        return False

    def get_manufacturer_id(self):
        """Returns the manufacturer id for this MBus device"""
        return self._manufacturer

    def get_type(self):
        """Returns the MBus attribute 'type' for this MBus device"""
        return self._type

    def get_address_bytes(self):
        """Returns the secondary address for this MBus device, as a byte
        array (pairs of hex digits in little-endian order)"""
        resp_bytes = []
        resp_bytes.append(self._secondary_address[6])
        resp_bytes.append(self._secondary_address[7])
        resp_bytes.append(self._secondary_address[4])
        resp_bytes.append(self._secondary_address[5])
        resp_bytes.append(self._secondary_address[2])
        resp_bytes.append(self._secondary_address[3])
        resp_bytes.append(self._secondary_address[0])
        resp_bytes.append(self._secondary_address[1])
        resp_str = []
        resp_str.append(resp_bytes[0] + resp_bytes[1])
        resp_str.append(resp_bytes[2] + resp_bytes[3])
        resp_str.append(resp_bytes[4] + resp_bytes[5])
        resp_str.append(resp_bytes[6] + resp_bytes[7])
        ret = [x for x in resp_str]
        return ret

    def get_manufacturer_bytes(self):
        """Returns the manufacturer id for this MBus device, as a byte
        array"""
        manufacturer = self._manufacturer.upper()
        id = ((ord(manufacturer[0]) - 64) * 32 * 32 +
              (ord(manufacturer[1]) - 64) * 32 +
              (ord(manufacturer[2]) - 64))
        if 0x0421 <= id <= 0x6b5a:
            return self.manufacturer_encode(id, 2)
        return False

    def manufacturer_encode(self, value, size):
        """Converts a manufacturer id to its byte equivalent"""
        if value is None or value == False:
            return None
        data = []
        for i in range(0, size):
            data.append((value >> (i * 8)) & 0xFF)
        return data
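The two methods above implement the usual M-Bus manufacturer encoding: three uppercase letters packed 5 bits each into a 15-bit id, emitted as two little-endian bytes. A standalone sketch with a made-up "ABC" id:

```python
manufacturer = "ABC"
# pack the three letters into a 15-bit id, 5 bits per letter (A=1 .. Z=26)
mid = ((ord(manufacturer[0]) - 64) * 32 * 32 +
       (ord(manufacturer[1]) - 64) * 32 +
       (ord(manufacturer[2]) - 64))
# emit the id little-endian over two bytes, as manufacturer_encode() does
data = [(mid >> (i * 8)) & 0xFF for i in range(2)]
print(hex(mid), data)  # 0x443 [67, 4]
```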

    def calculate_checksum(self, message):
        """Calculates the checksum of the provided data"""
        return sum([int(x, 16) if type(x) == str else x
                    for x in message]) & 0xFF
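calculate_checksum tolerates the mixed representation used by get_rsp_ud2 — plain ints and two-character hex strings — and reduces the byte sum modulo 256. A quick check on a made-up frame:

```python
def calculate_checksum(message):
    # bytes may arrive as ints or as hex strings; normalise, sum, mask to 8 bits
    return sum([int(x, 16) if type(x) == str else x for x in message]) & 0xFF

frame = [0x08, 0x01, 0x72, 'ff', '10']  # mixed ints and hex strings
print(calculate_checksum(frame))  # 138, i.e. (8 + 1 + 114 + 255 + 16) % 256
```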

    def get_latest_values(self):
        return self._rsp_ud2

    def get_rsp_ud2(self):
        """Generates a RSP_UD2 response message"""
        resp_bytes = []
        resp_bytes.append(0x68)  # start
        resp_bytes.append(0xFF)  # length
        resp_bytes.append(0xFF)  # length
        resp_bytes.append(0x68)  # start
        resp_bytes.append(0x08)  # C
        resp_bytes.append(self._primary_address)  # A
        resp_bytes.append(0x72)  # CI
        resp_bytes.extend(self.get_address_bytes())
        resp_bytes.extend(self.get_manufacturer_bytes())
        resp_bytes.append(0x01)  # version
        resp_bytes.append(self._type)  # medium (heat)
        resp_bytes.append(self._access_number)  # access no
        resp_bytes.append(0x00)  # status
        resp_bytes.append(0x00)  # configuration 1
        resp_bytes.append(0x00)  # configuration 2
        for record in self._records:
            resp_bytes.extend(record.get_bytes())
        resp_bytes.append(self.calculate_checksum(resp_bytes[4:]))
        resp_bytes.append(0x16)  # stop
        length = len(resp_bytes) - 9 + 3
        resp_bytes[1] = length
        resp_bytes[2] = length
        ret = ["{:>2}".format(hex(x)[2:]).replace(' ', '0')
               if type(x) == int else x for x in resp_bytes]
        if self._access_number < 255:
            self._access_number = self._access_number + 1
        else:
            self._access_number = 1
        return ''.join(ret).upper()
| 37.283871 | 106 | 0.643191 | 755 | 5,779 | 4.711258 | 0.206623 | 0.103739 | 0.096992 | 0.064099 | 0.337082 | 0.21282 | 0.102615 | 0.090245 | 0.051729 | 0 | 0 | 0.025264 | 0.246582 | 5,779 | 154 | 107 | 37.525974 | 0.791686 | 0.018688 | 0 | 0.12605 | 0 | 0 | 0.029202 | 0 | 0 | 0 | 0.013445 | 0 | 0 | 0 | null | null | 0 | 0.05042 | null | null | 0.008403 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
66f5b0cbb8ef944f7945f54d1777a667ec6dbe6b | 3,024 | py | Python | Echoes/Filezilla.py | xeddmc/BrainDamage | 855f696883d495e2f1b1b55ced31a54f3426c50e | [
"Apache-2.0"
] | 1,520 | 2020-10-23T06:22:06.000Z | 2022-03-26T09:17:47.000Z | Echoes/Filezilla.py | 1612480331/BrainDamage | ac412e32583436cab3e836713008c207229c9cf2 | [
"Apache-2.0"
] | 12 | 2017-03-25T16:31:20.000Z | 2021-12-28T05:04:52.000Z | Echoes/Filezilla.py | 1612480331/BrainDamage | ac412e32583436cab3e836713008c207229c9cf2 | [
"Apache-2.0"
] | 661 | 2020-10-23T06:23:53.000Z | 2021-09-06T23:05:30.000Z | # Based on the work of https://github.com/AlessandroZ/LaZagne/blob/master/Windows/lazagne/
import xml.etree.cElementTree as ET
import os, base64
class Filezilla():
def __init__(self):
options = {'command': '-f', 'action': 'store_true', 'dest': 'filezilla', 'help': 'filezilla'}
def run(self):
if 'APPDATA' in os.environ:
            directory = os.path.join(os.environ['APPDATA'], 'FileZilla')
else:
return
interesting_xml_file = []
info_xml_file = []
if os.path.exists(os.path.join(directory, 'sitemanager.xml')):
interesting_xml_file.append('sitemanager.xml')
info_xml_file.append('Stores all saved sites server info including password in plaintext')
if os.path.exists(os.path.join(directory, 'recentservers.xml')):
interesting_xml_file.append('recentservers.xml')
info_xml_file.append('Stores all recent server info including password in plaintext')
if os.path.exists(os.path.join(directory, 'filezilla.xml')):
interesting_xml_file.append('filezilla.xml')
info_xml_file.append('Stores most recent server info including password in plaintext')
if interesting_xml_file != []:
pwdFound = []
for i in range(len(interesting_xml_file)):
xml_file = os.path.expanduser(directory + os.sep + interesting_xml_file[i])
tree = ET.ElementTree(file=xml_file)
root = tree.getroot()
servers = root.getchildren()
for ss in servers:
server = ss.getchildren()
jump_line = 0
for s in server:
s1 = s.getchildren()
values = {}
for s11 in s1:
if s11.tag == 'Host':
values[s11.tag] = s11.text
if s11.tag == 'Port':
values[s11.tag] = s11.text
if s11.tag == 'User':
values['Login'] = s11.text
if s11.tag == 'Pass':
try:
# if base64 encoding
if 'encoding' in s11.attrib:
if s11.attrib['encoding'] == 'base64':
values['Password'] = base64.b64decode(s11.text)
else:
values['Password'] = s11.text
                                except Exception:
                                    values['Password'] = s11.text
# password found
if len(values) != 0:
pwdFound.append(values)
# print the results
return pwdFound
else:
pass
#tem = Filezilla()
#a = tem.run()
#print a
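The `run` method above walks FileZilla's XML files and base64-decodes `Pass` elements. A minimal, self-contained sketch of the same extraction over an in-memory snippet (element names match what the code above looks for; the host, user, and password data are invented):

```python
import base64
import xml.etree.ElementTree as ET

# invented sample in the sitemanager.xml layout the code above expects
xml_data = """<FileZilla3>
  <Servers>
    <Server>
      <Host>ftp.example.com</Host>
      <Port>21</Port>
      <User>alice</User>
      <Pass encoding="base64">c2VjcmV0</Pass>
    </Server>
  </Servers>
</FileZilla3>"""

root = ET.fromstring(xml_data)
creds = []
for server in root.iter('Server'):
    entry = {}
    for child in server:
        if child.tag in ('Host', 'Port'):
            entry[child.tag] = child.text
        elif child.tag == 'User':
            entry['Login'] = child.text
        elif child.tag == 'Pass':
            # decode only if the file marks the value as base64
            if child.attrib.get('encoding') == 'base64':
                entry['Password'] = base64.b64decode(child.text).decode()
            else:
                entry['Password'] = child.text
    creds.append(entry)
print(creds)
```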

# === flights/models.py (repo: solnsubuga/flightapp, license: Apache-2.0) ===
from django.db import models
from django.contrib.auth.models import User
class Flight(models.Model):
STATUSES = (
('SCHEDULED', 'SCHEDULED'),
('DELAYED', 'DELAYED'),
('ON_TIME', 'ON TIME'),
('ARRIVED', 'ARRIVED'),
('LATE', 'LATE')
)
number = models.CharField(max_length=10)
departure_time = models.DateTimeField()
arrival_time = models.DateTimeField()
origin = models.CharField(max_length=150)
destination = models.CharField(max_length=150)
status = models.CharField(choices=STATUSES, max_length=100)
@property
def duration(self):
timespan = self.arrival_time - self.departure_time
days, seconds = timespan.days, timespan.seconds
return days * 24 + seconds // 3600 # return hours
@property
def available_seats(self):
return self.seats.all()
def __str__(self):
return self.number
class Seat(models.Model):
flight = models.ForeignKey(
Flight, on_delete=models.CASCADE, related_name='seats')
number = models.CharField(max_length=50)
is_available = models.BooleanField(default=True)
def __str__(self):
return self.number
class Reservation(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
flight = models.ForeignKey(Flight, on_delete=models.CASCADE)
seat = models.ForeignKey(Seat, on_delete=models.CASCADE)
is_notified = models.BooleanField(default=False)
created = models.DateTimeField(auto_now_add=True)
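`Flight.duration` above turns arrival minus departure into whole hours via `days * 24 + seconds // 3600`. The same arithmetic on plain `datetime` objects, outside Django, with invented times:

```python
from datetime import datetime

departure = datetime(2020, 1, 1, 8, 0)
arrival = datetime(2020, 1, 2, 10, 30)   # 26.5 hours later

timespan = arrival - departure
days, seconds = timespan.days, timespan.seconds
hours = days * 24 + seconds // 3600      # floor division drops the leftover 30 minutes
print(hours)                             # → 26
```

Note the integer division truncates partial hours, so a 26.5-hour flight reports 26.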

# === sentence_generator.py (repo: gabrielilharco/sentence-generator, license: MIT) ===
import argparse
import random
import operator
import os
def parse_grammar(file_path):
"""
Generate a grammar from a file describing the production rules.
Note that the symbols are inferred from the production rules.
For more information on the format of the file, please reffer to
the README.md or the the sample grammars provided in this repository.
:param file_path: Path to the file containing the description of the grammar.
:returns: the grammar object and the starting symbol.
"""
with open(file_path) as f:
content = f.read().splitlines()
if len(content) <= 1:
raise Exception('Grammar should have at least one production rule and a starting symbol')
# First line should be the starting symbol
start_symbol = content[0]
grammar = {}
for line in content[1:]:
# Each line should be in the format:
# X -> A B ... C
symbols = line.split()
if len(symbols) <= 2 or symbols[1] != '->':
raise Exception('Each production line should be in the format: X -> A B ... C')
if symbols[0] not in grammar:
grammar[symbols[0]] = []
grammar[symbols[0]].append(symbols[2:])
if start_symbol not in grammar:
        raise Exception('Grammar should have at least one production rule with the start_symbol.')
return grammar, start_symbol
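`parse_grammar` expects the starting symbol on the first line, then one `X -> A B ... C` rule per line. A condensed re-run of the same parsing loop on a hypothetical grammar (the rule set is invented for illustration), including the terminal computation from `find_terminals`:

```python
# hypothetical grammar file content in the format described above
grammar_text = """S
S -> NP VP
NP -> the dog
VP -> barks
VP -> sleeps"""

lines = grammar_text.splitlines()
start_symbol = lines[0]
grammar = {}
for line in lines[1:]:
    symbols = line.split()                           # e.g. ['S', '->', 'NP', 'VP']
    grammar.setdefault(symbols[0], []).append(symbols[2:])

# terminals are the words that never appear on a left-hand side
terminals = {w for rules in grammar.values() for rule in rules
             for w in rule if w not in grammar}
print(start_symbol, grammar, terminals)
```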
def find_terminals(grammar):
"""
For a given grammar, return a set of the terminal symbols.
:param grammar: The grammar (set of productions rules).
:return: set of terminal symbols.
"""
terminals = set()
for key, val in grammar.items():
for word_list in val:
for word in word_list:
if word not in grammar:
terminals.add(word)
return terminals
def analyze_stats(sentences):
"""
For a given set of sentences, print how many times each symbol appears,
    printing statistics sorted by occurrence.
:param sentences: List of sentences.
"""
counts = {}
for sentence in sentences:
for element in sentence.split():
if element not in counts:
counts[element] = 1
else:
counts[element] += 1
# print stats
sorted_counts = sorted(counts.items(), key = operator.itemgetter(1))
for key, val in sorted_counts:
print("%5d %s" % (val, key))
def generate_random_sentence(grammar, start_symbol, print_sentence = True):
"""
For a given grammar (set of production rules) and a starting symbol,
randomly generate a sentence using the production rules.
:param sentences: The grammar (set of productions rules).
:param start_symbol: The starting symbol.
    :param print_sentence: Whether to print the generated sentence. Defaults to true.
:returns: A randomly generated sentence.
"""
# Starting symbol must be a part of the grammar
assert start_symbol in grammar
sentence = [start_symbol]
idx = 0
while idx < len(sentence):
        if sentence[idx] not in grammar:  # terminal symbol: nothing to expand
idx += 1
else:
choices = grammar[sentence[idx]]
choice = random.choice(choices)
sentence = sentence[:idx] + choice + sentence[idx+1:]
sentence = " ".join([word.upper() for word in sentence])
if print_sentence:
print(sentence)
return sentence
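The expansion loop above repeatedly replaces the leftmost non-terminal with one of its productions until only terminals remain. A deterministic sketch of that loop (a fixed first-production choice stands in for `random.choice`, and the tiny grammar is invented):

```python
grammar = {'S': [['NP', 'VP']], 'NP': [['the', 'dog']], 'VP': [['barks']]}

sentence = ['S']
idx = 0
while idx < len(sentence):
    if sentence[idx] not in grammar:          # terminal: keep it and move on
        idx += 1
    else:                                     # non-terminal: splice in a production
        choice = grammar[sentence[idx]][0]    # deterministic stand-in for random.choice
        sentence = sentence[:idx] + choice + sentence[idx + 1:]

print(' '.join(w.upper() for w in sentence))  # → THE DOG BARKS
```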
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Grammar utils')
parser.add_argument('--grammar', type=str, default='simple_grammar.txt',
help='Path to grammar file.')
parser.add_argument('--print_terminal_symbols', type=bool, default=False,
help='Print the terminal symbols of the grammar.')
parser.add_argument('--num_sentences', type=int, default=0,
help='The number of random sentences to generate.')
args = parser.parse_args()
grammar, start_symbol = parse_grammar(args.grammar)
terminals = find_terminals(grammar)
if args.print_terminal_symbols:
for terminal in sorted(terminals):
print(terminal)
print('-----------------')
print('There are', len(terminals), 'terminals')
sentences = []
for i in range(args.num_sentences):
sentences.append(generate_random_sentence(grammar, start_symbol, False))
for i in range(len(sentences)):
print("%d. %s" % (i, sentences[i])) | 29.834586 | 92 | 0.717742 | 568 | 3,968 | 4.929577 | 0.264085 | 0.039286 | 0.025714 | 0.019286 | 0.093571 | 0.093571 | 0.019286 | 0.019286 | 0.019286 | 0.019286 | 0 | 0.005194 | 0.175151 | 3,968 | 133 | 93 | 29.834586 | 0.85029 | 0.301159 | 0 | 0.026316 | 1 | 0 | 0.163295 | 0.008827 | 0 | 0 | 0 | 0 | 0.013158 | 1 | 0.052632 | false | 0 | 0.052632 | 0 | 0.144737 | 0.131579 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0f0047c195a44a1e7096ffa7a8721ac9af656c82 | 225 | py | Python | WebFrameDocs/src/demo/fileStorage/script/genScaleMap.py | Bean-jun/LearnGuide | 30a8567b222d18b15d3e9027a435b5bfe640a046 | [
"MIT"
] | 1 | 2022-02-23T13:42:01.000Z | 2022-02-23T13:42:01.000Z | WebFrameDocs/src/demo/fileStorage/script/genScaleMap.py | Bean-jun/LearnGuide | 30a8567b222d18b15d3e9027a435b5bfe640a046 | [
"MIT"
] | null | null | null | WebFrameDocs/src/demo/fileStorage/script/genScaleMap.py | Bean-jun/LearnGuide | 30a8567b222d18b15d3e9027a435b5bfe640a046 | [
"MIT"
] | null | null | null | """
A-Z: 65-90
a-z: 97-122
"""
dic = {}
n = 0
for i in range(10):
dic[n] = str(i)
n += 1
for i in range(65, 91):
dic[n] = chr(i)
n += 1
for i in range(97, 123):
dic[n] = chr(i)
n += 1
print(dic)
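The script above builds a 62-entry map: indices 0-9 to digits, 10-35 to 'A'-'Z', 36-61 to 'a'-'z'. One hypothetical use of such a map is base-62 encoding of an integer (e.g. to shorten a numeric id); the sketch below rebuilds the same map from `string` constants:

```python
import string

# same mapping as above, built from the standard string constants
alphabet = string.digits + string.ascii_uppercase + string.ascii_lowercase
dic = {i: ch for i, ch in enumerate(alphabet)}

def to_base62(n):
    # repeatedly take n mod 62, emitting most-significant digit first
    if n == 0:
        return dic[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(dic[r])
    return ''.join(reversed(out))

print(to_base62(12345))  # → 3D7
```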

# === spyre/spyre/spyrelets/peaktrack_spyrelet.py (repo: zhong-lab/code, license: BSD-2-Clause) ===
import numpy as np
import pyqtgraph as pg
import matplotlib.pyplot as plt
import csv
import sys
from PyQt5.Qsci import QsciScintilla, QsciLexerPython
from PyQt5.QtWidgets import QPushButton, QTextEdit, QVBoxLayout
import time
import random
import os
from spyre import Spyrelet, Task, Element
from spyre.widgets.task import TaskWidget
from spyre.plotting import HeatmapPlotWidget,LinePlotWidget
from spyre.widgets.rangespace import Rangespace
from spyre.widgets.param_widget import ParamWidget
from spyre.widgets.repository_widget import RepositoryWidget
from lantz.drivers.keysight import Arbseq_Class
from lantz.drivers.keysight.seqbuild import SeqBuild
from lantz import Q_
from lantz.drivers.ando.aq6317b import AQ6317B
from lantz.drivers.artisan.ldt5910b import LDT5910B
class filter(Spyrelet):
# delete if not using power meter
requires = {
'osa':AQ6317B,
'tc':LDT5910B
}
@Task()
def track(self):
# unpack the parameters
params=self.parameters.widget.get()
filename=params['Filename']
tracktime=params['Track time'].magnitude #s
sleep=params['Sleep Interval'].magnitude #s
# read peak position for a while
start=time.time()
t=start
with open(filename+'.csv','w',newline='') as csvfile:
writer=csv.writer(
csvfile,
delimiter=',',
quotechar='|',
quoting=csv.QUOTE_MINIMAL)
while (t-start)<tracktime:
pk,pwr=self.osa.read_marker
temp=self.tc.display('T')
t=time.time()
print('t: '+str(t-start)+', '+str(pk)+', '+str(temp))
writer.writerow([t-start,pk,temp])
time.sleep(sleep)
return
@Element(name='Params')
def parameters(self):
params = [
('Sleep Interval',{'type':int,'default':10,'units':'s'}),
('Track time', {'type': int, 'default': 1200,'units':'s'}),
('Filename', {'type': str, 'default':'Q:\\06.02.21_ff\\track1'})
]
w = ParamWidget(params)
return w
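The `track` task above polls the OSA marker and temperature controller and appends `[t, peak, temp]` rows to a CSV until the tracking time elapses. The same logging loop with the instruments stubbed out (every instrument value here is an invented stand-in for `self.osa.read_marker` and `self.tc.display('T')`):

```python
import csv
import io
import itertools

# stand-ins for the OSA peak reading and the temperature controller
fake_peaks = itertools.cycle([1550.1, 1550.2])
fake_temps = itertools.cycle([25.0])

buf = io.StringIO()
writer = csv.writer(buf, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
for t in range(3):                  # three fixed "samples" instead of a timed loop
    pk = next(fake_peaks)
    temp = next(fake_temps)
    writer.writerow([t, pk, temp])  # same row layout as the task above

rows = buf.getvalue().splitlines()
print(rows)
```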

# === smq/smq/plot.py (repo: x75/smq, license: MIT) ===
import numpy as np
import matplotlib.pyplot as pl
import pandas as pd
import seaborn as sns
import smq.logging as log
# check pandas, seaborne
# FIXME: fix hardcoded tablenames
from smq.utils import set_attr_from_dict
def get_data_from_item_log(items):
tbl_key = items[0].name
# print "%s.run: tbl_key = %s" % (self.__class__.__name__, tbl_key)
print "plot.get_data_from_item_log: tbl_key = %s" % (tbl_key)
df = log.log_lognodes[tbl_key]
data = df.values.T
columns = df.columns
return tbl_key, df, data, columns
class Plot(object):
def __init__(self, conf):
self.conf = conf
set_attr_from_dict(self, conf)
def run(self, items):
self.make_plot(items)
def make_plot(self, items):
print "%s.make_plot: implement me" % (self.__class__.__name__)
class PlotTimeseries(Plot):
def __init__(self, conf):
Plot.__init__(self, conf)
def run(self, items):
# how many axes / plotitems
# configure subplotgrid
tbl_key = items[0].name
# tbl_key = items[0].conf["name"]
print "tbl_key", tbl_key
df = log.log_lognodes[tbl_key]
# data = log.h5file.root.item_pm_data.read()
# data = log.log_lognodes["pm"].values.T
# columns = log.log_lognodes["pm"].columns
data = df.values.T
columns = df.columns
# print "data.shape", data.shape
pl.ioff()
# create figure
fig = pl.figure()
fig.suptitle("Experiment %s" % (log.h5file.title))
# fig.suptitle("Experiment %s" % (self.title))
for i in range(data.shape[0]): # loop over data items
ax1 = pl.subplot2grid((data.shape[0], 2), (i, 0))
ax1 = self.make_plot_timeseries(ax1, data[i], columns[i])
ax2 = pl.subplot2grid((data.shape[0], 2), (i, 1)) # second plotgrid column
ax2 = self.make_plot_histogram(ax2, data[i], columns[i])
# global for plot, use last axis
ax1.set_xlabel("t [steps]")
ax2.set_xlabel("counts")
# fig.show() # this doesn't work
pl.show()
def make_plot_timeseries(self, ax, data, columns):
ax.plot(data, "k-", alpha=0.5)
# print "columns[i]", type(columns[i])
ax.legend(["%s" % (columns)])
return ax
def make_plot_histogram(self, ax, data, columns):
ax.hist(data, bins=20, orientation="horizontal")
ax.legend(["%s" % (columns)])
# pl.hist(data.T, bins=20, orientation="horizontal")
return ax
# def make_plot(self, items):
# # print "log.h5file", log.h5file
# # print "dir(log.h5file)", dir(log.h5file)
# # print "blub", type(log.h5file.root.item_pm_data)
# # for item in log.h5file.root.item_pm_data:
# # print type(item)
# # print "log.h5file.root.item_pm_data", log.h5file.root.item_pm_data.read()
# # df = log.log_lognodes["pm"]
# # g = sns.FacetGrid(df, col=list(df.columns))
# # g.map(pl.plot, )
# # print "data.shape", data.shape
# for i,datum in enumerate(data):
# pl.subplot(data.shape[0], 2, (i*2)+1)
# # pl.title(columns[i])
# # sns.timeseries.tsplot(datum)
# pl.plot(datum, "k-", alpha=0.5)
# # print "columns[i]", type(columns[i])
# pl.legend(["%s" % (columns[i])])
# pl.xlabel("t [steps]")
# # pl.legend(["acc_p", "vel_e", "vel_", "pos_", "vel_goal", "dist_goal", "acc_pred", "m"])
# # pl.subplot(122)
# for i,datum in enumerate(data):
# pl.subplot(data.shape[0], 2, (i*2)+2)
# # print "dataum", datum
# pl.hist(datum, bins=20, orientation="horizontal")
# pl.legend(["%s" % (columns[i])])
# # pl.hist(data.T, bins=20, orientation="horizontal")
# pl.xlabel("counts")
# pl.show()
class PlotTimeseries2D(Plot):
def __init__(self, conf):
Plot.__init__(self, conf)
def run(self, items):
# FIXME: assuming len(items) == 1, which might be appropriate depending on the experiment
if items[0].dim_s_motor > 2:
print "more than two dimensions in data, plot is going to be incomplete"
return
tbl_key = items[0].name
# tbl_key = items[0].conf["name"]
print "%s.run: tbl_key = %s" % (self.__class__.__name__, tbl_key)
df = log.log_lognodes[tbl_key]
data = df.values.T
columns = df.columns
# print "columns", columns
# transform df to new df
if hasattr(self, "cols"):
cols = self.cols
else:
cols = ["vel%d" % (i) for i in range(items[0].dim_s_motor)]
cols += ["acc_pred%d" % (i) for i in range(items[0].dim_s_motor)]
df2 = df[cols]
# print df
# goal columns
if not hasattr(self, "cols_goal_base"):
setattr(self, "cols_goal_base", "vel_goal")
print "PlotTimeseries2D", self.cols, self.cols_goal_base
pl.ioff() #
goal_col_1 = "%s%d" % (self.cols_goal_base, 0)
goal_col_2 = "%s%d" % (self.cols_goal_base, 1)
if self.type == "pyplot":
# pl.plot(df["vel0"], df["vel1"], "ko")
# print df["vel0"].values.dtype
pl.subplot(131)
pl.title("state distribution and goal")
# print df["vel_goal0"].values, df["vel_goal1"].values
# pl.hist2d(df["vel0"].values, df["vel1"].values, bins=20)
pl.plot(df["%s%d" % (self.cols_goal_base, 0)].values[0],
df["%s%d" % (self.cols_goal_base, 1)].values[0], "ro", markersize=16, alpha=0.5)
pl.hexbin(df[self.cols[0]].values, df[self.cols[1]].values, gridsize = 30, marginals=True)
pl.plot(df[self.cols[0]].values, df[self.cols[1]].values, "k-", alpha=0.25, linewidth=1)
# pl.xlim((-1.2, 1.2))
# pl.ylim((-1.2, 1.2))
pl.grid()
pl.colorbar()
pl.subplot(132)
pl.title("prediction distribution")
pl.hexbin(df["acc_pred0"].values, df["acc_pred1"].values, gridsize = 30, marginals=True)
pl.xlim((-1.2, 1.2))
pl.ylim((-1.2, 1.2))
pl.colorbar()
pl.subplot(133)
pl.title("goal distance distribution")
pl.hist(df["dist_goal0"].values)
pl.show()
elif self.type == "seaborn":
print "goal", df[goal_col_1][0], df[goal_col_2][0]
ax = sns.jointplot(x=self.cols[0], y=self.cols[1], data=df)
print "ax", dir(ax)
# plot goal
print "df[goal_col_1][0], df[goal_col_2][0]", self.cols_goal_base, goal_col_1, goal_col_2, df[goal_col_1][0], df[goal_col_2][0]
ax.ax_joint.plot(df[goal_col_1][0], df[goal_col_2][0], "ro", alpha=0.5)
# pl.plot(df["vel_goal0"], df["vel_goal1"], "ro")
pl.show()
class PlotTimeseriesND(Plot):
"""Plot a hexbin scattermatrix for N-dim data"""
def __init__(self, conf):
Plot.__init__(self, conf)
def run(self, items):
pl.ioff()
tbl_key, df, data, columns = get_data_from_item_log(items)
# transform df to new df
if hasattr(self, "cols"):
cols = self.cols
else:
cols = ["vel%d" % (i) for i in range(items[0].dim_s_motor)]
cols += ["acc_pred%d" % (i) for i in range(items[0].dim_s_motor)]
df2 = df[cols]
print df2
# goal columns
if not hasattr(self, "cols_goal_base"):
setattr(self, "cols_goal_base", "vel_goal")
# pp = sns.pairplot(df2)
# for i in range(3):
# for j in range(3): # 1, 2; 0, 2; 0, 1
# if i == j:
# continue
# pp.axes[i,j].plot(df["vel_goal%d" % i][0], df["vel_goal%d" % j][0], "ro", alpha=0.5)
# # print pp.axes
# # for axset in pp.axes:
# # print "a", axset
# # for
# # print "dir(pp)", dir(pp)
# pl.show()
g = sns.PairGrid(df2)
g.map_diag(pl.hist)
g.map_offdiag(pl.hexbin, cmap="gray", gridsize=30, bins="log");
# print "dir(g)", dir(g)
# print g.diag_axes
# print g.axes
for i in range(items[0].dim_s_motor):
for j in range(items[0].dim_s_motor): # 1, 2; 0, 2; 0, 1
if i == j:
continue
# column gives x axis, row gives y axis, thus need to reverse the selection for plotting goal
g.axes[i,j].plot(df["%s%d" % (self.cols_goal_base, j)], df["%s%d" % (self.cols_goal_base, i)], "ro", alpha=0.5)
pl.show()
pl.hist(df["dist_goal0"].values, bins=20)
pl.show()
class PlotExplautoSimplearm(Plot):
def __init__(self, conf):
Plot.__init__(self, conf)
def make_plot(self, items):
print "items", items
pl.ioff()
tbl_key, df, data, columns = get_data_from_item_log(items)
motors = df[["j_ang%d" % i for i in range(items[0].dim_s_motor)]]
goals = df[["j_ang_goal%d" % i for i in range(items[0].dim_s_motor)]]
# print "df", motors, columns #, df
fig = pl.figure()
for i,item in enumerate(items):
# fig.suptitle("Experiment %s" % (log.h5file.title))
ax = fig.add_subplot(len(items), 1, i+1)
for m in motors.values:
# print "m", m
item.env.env.plot_arm(ax = ax, m = m)
print "plot goal", goals.values[0]
item.env.env.plot_arm(ax = ax, m = goals.values[0], c="r")
pl.show()
################################################################################
class PlotTimeseries2(Plot):
def __init__(self, conf):
Plot.__init__(self, conf)
def run(self, items):
# how many axes / plotitems
# configure subplotgrid
tbl_key = items[0].name
# tbl_key = items[0].conf["name"]
print "tbl_key", tbl_key
df = log.log_lognodes[tbl_key]
# data = log.h5file.root.item_pm_data.read()
# data = log.log_lognodes["pm"].values.T
# columns = log.log_lognodes["pm"].columns
data = df.values.T
columns = df.columns
# print "data.shape", data.shape
pl.ioff()
# create figure
fig = pl.figure()
fig.suptitle("Experiment %s, module %s" % (self.title, tbl_key))
for i in range(data.shape[0]): # loop over data items
ax1 = pl.subplot2grid((data.shape[0], 2), (i, 0))
ax1 = self.make_plot_timeseries(ax1, data[i], columns[i])
ax2 = pl.subplot2grid((data.shape[0], 2), (i, 1)) # second plotgrid column
ax2 = self.make_plot_histogram(ax2, data[i], columns[i])
# global for plot, use last axis
ax1.set_xlabel("t [steps]")
ax2.set_xlabel("counts")
# fig.show() # this doesn't work
pl.show()
def make_plot_timeseries(self, ax, data, columns):
ax.plot(data, "k-", alpha=0.5)
# print "columns[i]", type(columns[i])
ax.legend(["%s" % (columns)])
return ax
def make_plot_histogram(self, ax, data, columns):
ax.hist(data, bins=20, orientation="horizontal")
ax.legend(["%s" % (columns)])
# pl.hist(data.T, bins=20, orientation="horizontal")
return ax
class PlotTimeseriesNDrealtimeseries(Plot):
"""Plot a hexbin scattermatrix for N-dim data"""
def __init__(self, conf):
Plot.__init__(self, conf)
def run(self, items):
pl.ioff()
tbl_key, df, data, columns = get_data_from_item_log(items)
# transform df to new df
if hasattr(self, "cols"):
cols = self.cols
else:
cols = ["vel%d" % (i) for i in range(items[0].dim_s_motor)]
cols += ["acc_pred%d" % (i) for i in range(items[0].dim_s_motor)]
# FIXME: make generic
numplots = 1
cols_ext = []
for i in range(items[0].dim_s_extero):
colname = "pos_goal%d" % i
if colname in columns:
cols_ext += [colname]
numplots = 2
colname = "ee_pos%d" % i
if colname in columns:
cols_ext += [colname]
cols_error_prop = []
colnames_error_prop = ["avgerror_prop", "davgerror_prop", "avgderror_prop"]
for ec in colnames_error_prop:
if ec in columns:
# print "lalala", err_colname
cols_error_prop.append(ec)
cols_error_ext = []
colnames_error_ext = ["avgerror_ext", "davgerror_ext", "avgderror_ext"]
for ec in colnames_error_ext:
if ec in columns:
# print "lalala", err_colname
cols_error_ext.append(ec)
df2 = df[cols]
print df2
# goal columns
if not hasattr(self, "cols_goal_base"):
setattr(self, "cols_goal_base", "vel_goal")
pl.ioff()
# create figure
fig = pl.figure()
fig.suptitle("Experiment %s, module %s" % (self.title, tbl_key))
if numplots == 1:
pl.subplot(211)
else:
pl.subplot(411)
pl.title("Proprioceptive space")
x1 = df[cols].values
x2 = df[self.cols_goals].values
# print "x1.shape", x1.shape
x1plot = x1 + np.arange(x1.shape[1])
x2plot = x2 + np.arange(x2.shape[1])
print "x1plot.shape", x1plot.shape
pl.plot(x1plot)
pl.plot(x2plot)
if numplots == 1:
pl.subplot(212)
else: # numplots == 2:
pl.subplot(412)
pl.plot(df[cols_error_prop])
if numplots == 2:
pl.subplot(413)
pl.title("Exteroceptive space")
pl.plot(df[cols_ext])
print "cols_error_ext", cols_error_ext
pl.subplot(414)
pl.plot(df[cols_error_ext])
pl.show()
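`PlotTimeseriesNDrealtimeseries` above offsets each column by its index (`x1 + np.arange(x1.shape[1])`) so stacked channels don't overlap when plotted on one axis. The broadcast in isolation, on a small invented array:

```python
import numpy as np

x1 = np.zeros((4, 3))                 # 4 time steps, 3 channels, all zero
x1plot = x1 + np.arange(x1.shape[1])  # broadcasting adds 0, 1, 2 to the columns

print(x1plot[0])                      # → [0. 1. 2.]
```

Each channel is shifted vertically by its index, which is what separates the traces in the figure.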

# === test/model/test_pddl_action_representation.py (repo: DLR-RM/rafcon-task-planner-plugin, license: BSD-3-Clause) ===
from rafcontpp.model.pddl_action_representation import PddlActionRepresentation
from rafcontpp.model.pddl_action_representation import action_to_upper
def test_action_to_upper():
#arrange
action = PddlActionRepresentation('myAction','(action:)',['(at ?a - Object)'],['Object'],[':strips'],['param1','param2'])
#act
action = action_to_upper(action)
#assert
assert 'MYACTION' == action.name
assert '(ACTION:)' == action.action
assert ['(AT ?A - OBJECT)'] == action.predicates
assert ['OBJECT'] == action.types
assert [':STRIPS'] == action.requirements
    assert ['param1', 'param2'] == action.parameters

# === meadow/meadow/migrations/0007_book_is_approved.py (repo: digital-gachilib/meadow, license: MIT) ===
# Generated by Django 3.0.5 on 2020-04-28 15:40
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("meadow", "0006_mmake_isbn_charfield"),
]
operations = [
migrations.AddField(model_name="book", name="is_approved", field=models.BooleanField(default=False),),
]

# === noise_layers/rotate.py (repo: pierrefdz/HiDDeN, license: MIT) ===
import torch.nn as nn
import torch.nn.functional as F
from torchvision.transforms import functional
import numpy as np
class Rotate(nn.Module):
"""
Rotate the image by random angle between -degrees and degrees.
"""
def __init__(self, degrees, interpolation_method='nearest'):
super(Rotate, self).__init__()
self.degrees = degrees
self.interpolation_method = interpolation_method
def forward(self, noised_and_cover):
rotation_angle = np.random.uniform(-self.degrees, self.degrees)
noised_image = noised_and_cover[0]
noised_and_cover[0] = functional.rotate(noised_image, rotation_angle)
return noised_and_cover

# === src/main/python/grammer/Function.py (repo: photowey/python-study, license: Apache-2.0) ===
# -*- coding:utf-8 -*-
# ---------------------------------------------
# @file Function.py
# @description Function
# @author WcJun
# @date 2020/06/20
# ---------------------------------------------
# Sum of the integers from n to m (inclusive)
def add(n, m):
s = 0
while n <= m:
s += n
n += 1
return s
# Compute the sum (bind the result to a new name so the function isn't shadowed)
result = add(1, 100)
print(result)
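The loop above computes 1 + 2 + ... + 100 by iteration; the closed form n(n+1)/2 (generalized to an arbitrary start) gives the same value without looping:

```python
def sum_range(n, m):
    # closed-form sum of the integers from n to m, inclusive:
    # (number of terms) * (first + last) / 2
    return (m - n + 1) * (n + m) // 2

print(sum_range(1, 100))  # → 5050
```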

# === nfl/migrations/0007_player_position.py (repo: rwflick/djangoXNFLDemo, license: MIT) ===
# Generated by Django 3.0.5 on 2020-09-13 19:49
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('nfl', '0006_player'),
]
operations = [
migrations.AddField(
model_name='player',
name='position',
field=models.CharField(choices=[('QB', 'Quarterback'), ('RB', 'Running Back'), ('FB', 'Fullback'), ('WR', 'Wide Receiver'), ('TE', 'Tight End'), ('C', 'Center'), ('OT', 'Offensive Tackle'), ('OG', 'Offensive Guard'), ('DE', 'Defensive End'), ('DT', 'Defensive Tackle'), ('LB', 'Line Backer'), ('DB', 'Defensive Back'), ('CB', 'Cornerback'), ('S', 'Safety'), ('K', 'Kicker'), ('P', 'Punter'), ('LS', 'Long Snapper'), ('KR', 'Kick Returner'), ('PR', 'Punt Returner')], max_length=25, null=True),
),
]
| 43.421053 | 505 | 0.56 | 93 | 825 | 4.935484 | 0.83871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031674 | 0.196364 | 825 | 18 | 506 | 45.833333 | 0.660633 | 0.054545 | 0 | 0 | 1 | 0 | 0.349614 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
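The `choices` list in the migration pairs a stored code with a human-readable label; Django's generated `get_position_display()` performs essentially this lookup. A plain-dict sketch with a subset of the positions (the `"?"` fallback is our addition):

```python
POSITION_CHOICES = [("QB", "Quarterback"), ("RB", "Running Back"), ("WR", "Wide Receiver")]
POSITION_LABELS = dict(POSITION_CHOICES)  # code -> display label

print(POSITION_LABELS["QB"])           # Quarterback
print(POSITION_LABELS.get("XX", "?"))  # ?
```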
0f21ea6cca6377a0bd8bcf855a84161050071410 | 5,954 | py | Python | src/commercetools/services/shopping_lists.py | jeroenubbink/commercetools-python-sdk | ee27768d6fdde3e12618059891d1d4f75dd61390 | [
"MIT"
] | null | null | null | src/commercetools/services/shopping_lists.py | jeroenubbink/commercetools-python-sdk | ee27768d6fdde3e12618059891d1d4f75dd61390 | [
"MIT"
] | null | null | null | src/commercetools/services/shopping_lists.py | jeroenubbink/commercetools-python-sdk | ee27768d6fdde3e12618059891d1d4f75dd61390 | [
"MIT"
] | null | null | null | # DO NOT EDIT! This file is automatically generated
import typing
from commercetools._schemas._shopping_list import (
ShoppingListDraftSchema,
ShoppingListPagedQueryResponseSchema,
ShoppingListSchema,
ShoppingListUpdateSchema,
)
from commercetools.helpers import RemoveEmptyValuesMixin
from commercetools.types._shopping_list import (
ShoppingList,
ShoppingListDraft,
ShoppingListPagedQueryResponse,
ShoppingListUpdate,
ShoppingListUpdateAction,
)
from commercetools.typing import OptionalListStr
from . import abstract, traits
class _ShoppingListQuerySchema(
traits.ExpandableSchema,
traits.SortableSchema,
traits.PagingSchema,
traits.QuerySchema,
):
pass
class _ShoppingListUpdateSchema(traits.ExpandableSchema, traits.VersionedSchema):
pass
class _ShoppingListDeleteSchema(
traits.VersionedSchema, traits.ExpandableSchema, traits.DataErasureSchema
):
pass
class ShoppingListService(abstract.AbstractService):
    """shopping-lists, e.g. for wishlist support"""
def get_by_id(self, id: str, *, expand: OptionalListStr = None) -> ShoppingList:
"""Gets a shopping list by ID."""
params = self._serialize_params({"expand": expand}, traits.ExpandableSchema)
return self._client._get(
endpoint=f"shopping-lists/{id}",
params=params,
schema_cls=ShoppingListSchema,
)
def get_by_key(self, key: str, *, expand: OptionalListStr = None) -> ShoppingList:
"""Gets a shopping list by Key."""
params = self._serialize_params({"expand": expand}, traits.ExpandableSchema)
return self._client._get(
endpoint=f"shopping-lists/key={key}",
params=params,
schema_cls=ShoppingListSchema,
)
def query(
self,
*,
expand: OptionalListStr = None,
sort: OptionalListStr = None,
limit: int = None,
offset: int = None,
with_total: bool = None,
where: OptionalListStr = None,
predicate_var: typing.Dict[str, str] = None,
) -> ShoppingListPagedQueryResponse:
"""shopping-lists e.g. for wishlist support
"""
params = self._serialize_params(
{
"expand": expand,
"sort": sort,
"limit": limit,
"offset": offset,
"withTotal": with_total,
"where": where,
"predicate_var": predicate_var,
},
_ShoppingListQuerySchema,
)
return self._client._get(
endpoint="shopping-lists",
params=params,
schema_cls=ShoppingListPagedQueryResponseSchema,
)
def create(
self, draft: ShoppingListDraft, *, expand: OptionalListStr = None
) -> ShoppingList:
"""shopping-lists e.g. for wishlist support
"""
params = self._serialize_params({"expand": expand}, traits.ExpandableSchema)
return self._client._post(
endpoint="shopping-lists",
params=params,
data_object=draft,
request_schema_cls=ShoppingListDraftSchema,
response_schema_cls=ShoppingListSchema,
)
def update_by_id(
self,
id: str,
version: int,
actions: typing.List[ShoppingListUpdateAction],
*,
expand: OptionalListStr = None,
force_update: bool = False,
) -> ShoppingList:
params = self._serialize_params({"expand": expand}, _ShoppingListUpdateSchema)
update_action = ShoppingListUpdate(version=version, actions=actions)
return self._client._post(
endpoint=f"shopping-lists/{id}",
params=params,
data_object=update_action,
request_schema_cls=ShoppingListUpdateSchema,
response_schema_cls=ShoppingListSchema,
force_update=force_update,
)
def update_by_key(
self,
key: str,
version: int,
actions: typing.List[ShoppingListUpdateAction],
*,
expand: OptionalListStr = None,
force_update: bool = False,
) -> ShoppingList:
"""Update a shopping list found by its Key."""
params = self._serialize_params({"expand": expand}, _ShoppingListUpdateSchema)
update_action = ShoppingListUpdate(version=version, actions=actions)
return self._client._post(
endpoint=f"shopping-lists/key={key}",
params=params,
data_object=update_action,
request_schema_cls=ShoppingListUpdateSchema,
response_schema_cls=ShoppingListSchema,
force_update=force_update,
)
def delete_by_id(
self,
id: str,
version: int,
*,
expand: OptionalListStr = None,
data_erasure: bool = None,
force_delete: bool = False,
) -> ShoppingList:
params = self._serialize_params(
{"version": version, "expand": expand, "dataErasure": data_erasure},
_ShoppingListDeleteSchema,
)
return self._client._delete(
endpoint=f"shopping-lists/{id}",
params=params,
response_schema_cls=ShoppingListSchema,
force_delete=force_delete,
)
def delete_by_key(
self,
key: str,
version: int,
*,
expand: OptionalListStr = None,
data_erasure: bool = None,
force_delete: bool = False,
) -> ShoppingList:
params = self._serialize_params(
{"version": version, "expand": expand, "dataErasure": data_erasure},
_ShoppingListDeleteSchema,
)
return self._client._delete(
endpoint=f"shopping-lists/key={key}",
params=params,
response_schema_cls=ShoppingListSchema,
force_delete=force_delete,
)
| 31.172775 | 86 | 0.615721 | 515 | 5,954 | 6.916505 | 0.190291 | 0.040146 | 0.056148 | 0.056148 | 0.619876 | 0.588714 | 0.57187 | 0.543515 | 0.535093 | 0.535093 | 0 | 0 | 0.293416 | 5,954 | 190 | 87 | 31.336842 | 0.846684 | 0.048875 | 0 | 0.5875 | 1 | 0 | 0.050329 | 0.012805 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0.01875 | 0.0375 | 0 | 0.1625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
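Each service method above formats an endpoint path (`shopping-lists/{id}` or `shopping-lists/key={key}`) and attaches serialized query parameters before delegating to the client. A dependency-free sketch of that pattern (`build_request` is our stand-in for the schema-based `_serialize_params` plus `_get`/`_post`):

```python
from urllib.parse import urlencode

def build_request(endpoint_template, params=None, **path_vars):
    # format the path, then append the query string, mirroring the service methods
    endpoint = endpoint_template.format(**path_vars)
    query = urlencode(params or {}, doseq=True)
    return f"{endpoint}?{query}" if query else endpoint

print(build_request("shopping-lists/{id}", id="abc123"))
# shopping-lists/abc123
print(build_request("shopping-lists", {"limit": 20, "offset": 0}))
# shopping-lists?limit=20&offset=0
```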
0f2caf81081a11f464cc6de29247f66e01a2f10e | 2,324 | py | Python | Plumet/scoring.py | mehmeterenballi/Plumet | 2f81cd8cb7f50432ff0b4d46c43aa8ebc5e91a2c | [
"MIT"
] | null | null | null | Plumet/scoring.py | mehmeterenballi/Plumet | 2f81cd8cb7f50432ff0b4d46c43aa8ebc5e91a2c | [
"MIT"
] | null | null | null | Plumet/scoring.py | mehmeterenballi/Plumet | 2f81cd8cb7f50432ff0b4d46c43aa8ebc5e91a2c | [
"MIT"
] | null | null | null | import pygame as pg
def score_blitting(win, score):
    screen_width, screen_height = 288, 512
    # NOTE: reloading all ten digit images on every call is wasteful; consider
    # loading the list once at module level and reusing it.
    score_image = [pg.image.load('%d.png' % decimal) for decimal in range(0, 10)]
if 10 > score >= 0:
win.blit(score_image[score], (screen_width / 2, 0))
elif 100 > score >= 10:
units_digit = score % 10
tens_digit = int((score - units_digit) / 10)
win.blit(score_image[tens_digit], (screen_width / 2 - 10, 0))
win.blit(score_image[units_digit], (screen_width / 2 + 10, 0))
elif 1000 > score >= 100:
units_digit = score % 10
tens_digit = int(((score - units_digit) % 100) / 10)
hundreds_digit = int((score - tens_digit * 10 - units_digit) / 100)
win.blit(score_image[hundreds_digit], (screen_width / 2 - 36, 0))
win.blit(score_image[tens_digit], (screen_width / 2 - 12, 0))
win.blit(score_image[units_digit], (screen_width / 2 + 12, 0))
elif 10000 > score >= 1000:
units_digit = score % 10
tens_digit = int(((score % 100) - units_digit) / 10)
hundreds_digit = int(((score - (score % 100)) % 1000) / 100)
thousands_digit = int((score - (score % 1000)) / 1000)
win.blit(score_image[thousands_digit], (screen_width / 2 - 48, 0))
win.blit(score_image[hundreds_digit], (screen_width / 2 - 24, 0))
win.blit(score_image[tens_digit], (screen_width / 2, 0))
win.blit(score_image[units_digit], (screen_width / 2 + 24, 0))
elif 100000 > score >= 10000:
units_digit = score % 10
tens_digit = int(((score % 100) - units_digit) / 10)
hundreds_digit = int(((score - tens_digit * 10 - units_digit) % 1000) / 100)
thousands_digit = int(((score - hundreds_digit * 100 - tens_digit * 10 - units_digit) % 10000) / 1000)
ten_thousands_digit = int((score - (score % 10000)) / 10000)
win.blit(score_image[hundreds_digit], (screen_width / 2 - 12, 0))
win.blit(score_image[tens_digit], (screen_width / 2 + 12, 0))
win.blit(score_image[units_digit], (screen_width / 2 + 36, 0))
win.blit(score_image[thousands_digit], (screen_width / 2 - 36, 0))
win.blit(score_image[ten_thousands_digit], (screen_width / 2 - 60, 0))
else:
print("Game Completed")
| 52.818182 | 111 | 0.603701 | 321 | 2,324 | 4.137072 | 0.146417 | 0.13253 | 0.135542 | 0.192018 | 0.768072 | 0.694277 | 0.640813 | 0.640813 | 0.640813 | 0.503765 | 0 | 0.104867 | 0.257315 | 2,324 | 43 | 112 | 54.046512 | 0.664542 | 0 | 0 | 0.15 | 0 | 0 | 0.008768 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025 | false | 0 | 0.025 | 0 | 0.05 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
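The branch-per-magnitude logic in `score_blitting` can be collapsed by converting the score to its digits once and centring the row of digits; a sketch (the function name and the 24-pixel spacing are our choices, not from the source):

```python
def digit_positions(score, center_x=144, spacing=24):
    # one (digit, x) pair per character of the score, centred on center_x
    digits = [int(ch) for ch in str(score)]
    left = center_x - spacing * len(digits) / 2
    return [(d, left + i * spacing) for i, d in enumerate(digits)]

# a caller would then do:
#   for d, x in digit_positions(score):
#       win.blit(score_image[d], (x, 0))
print(digit_positions(42))  # [(4, 120.0), (2, 144.0)]
```

This handles any number of digits with one code path instead of one `elif` per magnitude.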
0f3599212f84331de9e9f8ea42c3d1bb4ebb50e9 | 1,881 | py | Python | mpesaviz/apps/transactions/models.py | savioabuga/mpesaviz | 2567ab5646f8f684c32b0644f5b06d3cc58c62bc | [
"BSD-3-Clause"
] | 2 | 2015-07-07T09:30:27.000Z | 2017-01-31T20:26:45.000Z | mpesaviz/apps/transactions/models.py | savioabuga/mpesaviz | 2567ab5646f8f684c32b0644f5b06d3cc58c62bc | [
"BSD-3-Clause"
] | null | null | null | mpesaviz/apps/transactions/models.py | savioabuga/mpesaviz | 2567ab5646f8f684c32b0644f5b06d3cc58c62bc | [
"BSD-3-Clause"
] | null | null | null | from django.db import models
from model_utils import Choices
from model_utils.models import TimeStampedModel
from phonenumber_field.modelfields import PhoneNumberField
from django_pandas.io import read_frame
class Transaction(TimeStampedModel):
TYPES = Choices(('sent', 'Sent Transactions'), ('received', 'Received Transactions'), ('paybill', 'Pay bill Transactions'),
('buy_good', 'Buy Good Transactions'), ('airtime', 'Airtime'), ('deposits', 'Deposits'), ('withdrawals', 'Withdrawals'),)
code = models.CharField(max_length=30)
date = models.DateTimeField()
type = models.CharField(choices=TYPES, max_length=20, default=TYPES.sent)
amount = models.DecimalField(max_digits=10, decimal_places=4)
recipient = models.CharField(max_length=30, blank=True)
phonenumber = PhoneNumberField()
sent_by = models.CharField(max_length=30, blank=True)
account_number = models.CharField(max_length=30)
airtime_for = models.CharField(max_length=30)
def monthly_transactions(self):
dataframe = read_frame(Transaction.objects.all())
dataframe['month'] = [date.strftime('%B') for date in dataframe['date']]
dataframe['year'] = [date.strftime('%Y') for date in dataframe['date']]
groups = dataframe.groupby(['year', 'month', 'type'])['amount'].sum().reset_index(name='amount')
return groups
def top_recipients(self):
dataframe = read_frame(Transaction.objects.all())
recipient_dataframe = dataframe[dataframe.type == 'Sent Transactions']
        # DataFrame.sort() was removed in pandas 0.20; sort_values() is the modern equivalent
        return recipient_dataframe.groupby(['recipient', 'type'])['amount'].sum().reset_index(name='amount').sort_values('amount', ascending=False)
class UploadFile(TimeStampedModel):
type = models.CharField(choices=Transaction.TYPES, max_length=20, default=Transaction.TYPES.sent)
file = models.FileField(upload_to='uploads/')
| 47.025 | 141 | 0.714514 | 216 | 1,881 | 6.097222 | 0.393519 | 0.079727 | 0.068337 | 0.091116 | 0.296128 | 0.168565 | 0.168565 | 0 | 0 | 0 | 0 | 0.010592 | 0.14673 | 1,881 | 39 | 142 | 48.230769 | 0.809969 | 0 | 0 | 0.066667 | 0 | 0 | 0.138756 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.166667 | 0 | 0.766667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
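The `monthly_transactions` aggregation groups rows on (year, month, type) and sums the amounts; the same logic in pure Python, without Django or pandas (the sample data below is ours):

```python
from collections import defaultdict
from datetime import date

transactions = [
    (date(2020, 1, 5), "sent", 100.0),
    (date(2020, 1, 20), "sent", 50.0),
    (date(2020, 2, 3), "received", 75.0),
]
monthly = defaultdict(float)
for d, kind, amount in transactions:
    # same key as the pandas groupby: strftime('%Y'), strftime('%B'), type
    monthly[(d.strftime("%Y"), d.strftime("%B"), kind)] += amount

print(monthly[("2020", "January", "sent")])  # 150.0
```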
0f39a392c46b7861f4edc27f2f6de39934c00cb4 | 1,236 | py | Python | workbench/awt/migrations/0012_auto_20201012_1433.py | yoshson/workbench | 701558cac3357cd82e4dc99f0fefed12ee81ddc5 | [
"MIT"
] | 15 | 2020-09-02T22:17:34.000Z | 2022-02-01T20:09:10.000Z | workbench/awt/migrations/0012_auto_20201012_1433.py | yoshson/workbench | 701558cac3357cd82e4dc99f0fefed12ee81ddc5 | [
"MIT"
] | 18 | 2020-01-08T15:28:26.000Z | 2022-02-28T02:46:41.000Z | workbench/awt/migrations/0012_auto_20201012_1433.py | yoshson/workbench | 701558cac3357cd82e4dc99f0fefed12ee81ddc5 | [
"MIT"
] | 8 | 2020-09-29T08:00:24.000Z | 2022-01-16T11:58:19.000Z | # Generated by Django 3.1.2 on 2020-10-12 12:33
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("awt", "0011_absence_ends_on"),
]
operations = [
migrations.AddField(
model_name="absence",
name="is_working_time",
field=models.BooleanField(default=True, verbose_name="is working time"),
),
migrations.AlterField(
model_name="absence",
name="reason",
field=models.CharField(
choices=[
("vacation", "vacation"),
("sickness", "sickness"),
("paid", "paid leave (e.g. civilian service, maternity etc.)"),
("other", "other reasons (no working time)"),
("correction", "Working time correction"),
],
max_length=10,
verbose_name="reason",
),
),
migrations.RunSQL(
"""
UPDATE awt_absence
SET reason='paid'
WHERE starts_on<'2020-01-01' AND reason='other';
UPDATE awt_absence
SET is_working_time=FALSE
WHERE reason='other';
""",
"",
),
]
| 26.869565 | 84 | 0.514563 | 116 | 1,236 | 5.353448 | 0.543103 | 0.088567 | 0.062802 | 0.064412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036709 | 0.360841 | 1,236 | 45 | 85 | 27.466667 | 0.749367 | 0.036408 | 0 | 0.193548 | 1 | 0 | 0.230315 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.032258 | 0 | 0.129032 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
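The `RunSQL` step encodes a two-stage data fix: absences before 2020-01-01 with reason `'other'` become `'paid'`, and any remaining `'other'` absences stop counting as working time. A pure-Python restatement of the same rule, applied in the same order as the two UPDATE statements (the function is our sketch, not part of the migration):

```python
from datetime import date

def migrate_absence(starts_on, reason, is_working_time=True):
    # first UPDATE: pre-2020 'other' absences become 'paid'
    if reason == "other" and starts_on < date(2020, 1, 1):
        reason = "paid"
    # second UPDATE: whatever is still 'other' is no longer working time
    if reason == "other":
        is_working_time = False
    return reason, is_working_time

print(migrate_absence(date(2019, 6, 1), "other"))  # ('paid', True)
print(migrate_absence(date(2020, 6, 1), "other"))  # ('other', False)
```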
0f42eaeb31887e9ae1caee055573b220ada36e35 | 1,347 | py | Python | shop/migrations/0002_add_example_data.py | Chaiok/-django_ne_copipast_shop_master2 | 67abde1191f15bbf366b8666f0a1c17f7c4e0c9f | [
"MIT"
] | 1 | 2022-02-05T17:28:54.000Z | 2022-02-05T17:28:54.000Z | shop/migrations/0002_add_example_data.py | Chaiok/-django_ne_copipast_shop_master2 | 67abde1191f15bbf366b8666f0a1c17f7c4e0c9f | [
"MIT"
] | null | null | null | shop/migrations/0002_add_example_data.py | Chaiok/-django_ne_copipast_shop_master2 | 67abde1191f15bbf366b8666f0a1c17f7c4e0c9f | [
"MIT"
] | null | null | null | # Generated by Django 4.0 on 2021-12-13 17:54
from django.db import migrations
_CAR_GOODS = 'Автотовары'
_APPLIANCES = 'Бытовая техника'
def _create_categories(apps, schema_editor) -> None:
    """Creates the two categories"""
# noinspection PyPep8Naming
Category = apps.get_model('shop', 'Category')
Category.objects.get_or_create(name=_CAR_GOODS)
Category.objects.get_or_create(name=_APPLIANCES)
def _create_products(apps, schema_editor) -> None:
    """Creates the two products"""
# noinspection PyPep8Naming
Product = apps.get_model('shop', 'Product')
# noinspection PyPep8Naming
Category = apps.get_model('shop', 'Category')
Product.objects.get_or_create(
name='Зимняя резина',
category=Category.objects.get(name=_CAR_GOODS),
price=4990.00,
)
Product.objects.get_or_create(
name='Холодильник',
category=Category.objects.get(name=_APPLIANCES),
price=49990.00,
)
class Migration(migrations.Migration):
dependencies = [
('shop', '0001_initial'),
]
operations = [
migrations.RunPython(
code=_create_categories,
reverse_code=migrations.RunPython.noop,
),
migrations.RunPython(
code=_create_products,
reverse_code=migrations.RunPython.noop,
),
]
| 24.944444 | 56 | 0.6585 | 144 | 1,347 | 5.930556 | 0.423611 | 0.070258 | 0.084309 | 0.084309 | 0.482436 | 0.269321 | 0.131148 | 0.131148 | 0 | 0 | 0 | 0.032787 | 0.230141 | 1,347 | 53 | 57 | 25.415094 | 0.790743 | 0.12101 | 0 | 0.294118 | 1 | 0 | 0.08547 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.029412 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
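The migration leans on `get_or_create` so that re-running it never duplicates rows. A minimal in-memory sketch of that idempotency contract (the dict-backed store is hypothetical, standing in for the ORM):

```python
store = {}

def get_or_create(name, **defaults):
    # mirror Django's (object, created) return contract
    created = name not in store
    if created:
        store[name] = {"name": name, **defaults}
    return store[name], created

obj, created = get_or_create("Автотовары")
again, created_again = get_or_create("Автотовары")
print(created, created_again)  # True False
```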
0f47f785479aecc6801edc61672d6949600791f3 | 5,271 | py | Python | peer/lifecycle/db_pb2.py | jeffgarratt/fabric-prototype | 46cfc67a1d74d1f38f498d3409327692fb733fd0 | [
"CC-BY-4.0"
] | 6 | 2017-10-16T13:46:46.000Z | 2020-02-28T07:48:51.000Z | peer/lifecycle/db_pb2.py | jeffgarratt/fabric-prototype | 46cfc67a1d74d1f38f498d3409327692fb733fd0 | [
"CC-BY-4.0"
] | 18 | 2017-10-02T16:31:51.000Z | 2020-02-24T21:39:20.000Z | peer/lifecycle/db_pb2.py | jeffgarratt/fabric-prototype | 46cfc67a1d74d1f38f498d3409327692fb733fd0 | [
"CC-BY-4.0"
] | 4 | 2019-02-01T14:46:21.000Z | 2021-06-01T05:49:11.000Z | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: peer/lifecycle/db.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='peer/lifecycle/db.proto',
package='lifecycle',
syntax='proto3',
serialized_pb=_b('\n\x17peer/lifecycle/db.proto\x12\tlifecycle\"1\n\rStateMetadata\x12\x10\n\x08\x64\x61tatype\x18\x01 \x01(\t\x12\x0e\n\x06\x66ields\x18\x02 \x03(\t\"G\n\tStateData\x12\x0f\n\x05Int64\x18\x01 \x01(\x03H\x00\x12\x0f\n\x05\x42ytes\x18\x02 \x01(\x0cH\x00\x12\x10\n\x06String\x18\x03 \x01(\tH\x00\x42\x06\n\x04TypeBf\n,org.hyperledger.fabric.protos.peer.lifecycleZ6github.com/hyperledger/fabric-protos-go/peer/lifecycleb\x06proto3')
)
_STATEMETADATA = _descriptor.Descriptor(
name='StateMetadata',
full_name='lifecycle.StateMetadata',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='datatype', full_name='lifecycle.StateMetadata.datatype', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='fields', full_name='lifecycle.StateMetadata.fields', index=1,
number=2, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=38,
serialized_end=87,
)
_STATEDATA = _descriptor.Descriptor(
name='StateData',
full_name='lifecycle.StateData',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='Int64', full_name='lifecycle.StateData.Int64', index=0,
number=1, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='Bytes', full_name='lifecycle.StateData.Bytes', index=1,
number=2, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='String', full_name='lifecycle.StateData.String', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='Type', full_name='lifecycle.StateData.Type',
index=0, containing_type=None, fields=[]),
],
serialized_start=89,
serialized_end=160,
)
_STATEDATA.oneofs_by_name['Type'].fields.append(
_STATEDATA.fields_by_name['Int64'])
_STATEDATA.fields_by_name['Int64'].containing_oneof = _STATEDATA.oneofs_by_name['Type']
_STATEDATA.oneofs_by_name['Type'].fields.append(
_STATEDATA.fields_by_name['Bytes'])
_STATEDATA.fields_by_name['Bytes'].containing_oneof = _STATEDATA.oneofs_by_name['Type']
_STATEDATA.oneofs_by_name['Type'].fields.append(
_STATEDATA.fields_by_name['String'])
_STATEDATA.fields_by_name['String'].containing_oneof = _STATEDATA.oneofs_by_name['Type']
DESCRIPTOR.message_types_by_name['StateMetadata'] = _STATEMETADATA
DESCRIPTOR.message_types_by_name['StateData'] = _STATEDATA
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
StateMetadata = _reflection.GeneratedProtocolMessageType('StateMetadata', (_message.Message,), dict(
DESCRIPTOR = _STATEMETADATA,
__module__ = 'peer.lifecycle.db_pb2'
# @@protoc_insertion_point(class_scope:lifecycle.StateMetadata)
))
_sym_db.RegisterMessage(StateMetadata)
StateData = _reflection.GeneratedProtocolMessageType('StateData', (_message.Message,), dict(
DESCRIPTOR = _STATEDATA,
__module__ = 'peer.lifecycle.db_pb2'
# @@protoc_insertion_point(class_scope:lifecycle.StateData)
))
_sym_db.RegisterMessage(StateData)
DESCRIPTOR.has_options = True
DESCRIPTOR._options = _descriptor._ParseOptions(descriptor_pb2.FileOptions(), _b('\n,org.hyperledger.fabric.protos.peer.lifecycleZ6github.com/hyperledger/fabric-protos-go/peer/lifecycle'))
# @@protoc_insertion_point(module_scope)
| 36.604167 | 447 | 0.751281 | 668 | 5,271 | 5.654192 | 0.218563 | 0.038126 | 0.036007 | 0.03336 | 0.54673 | 0.472862 | 0.459624 | 0.449034 | 0.438973 | 0.438973 | 0 | 0.031981 | 0.116107 | 5,271 | 143 | 448 | 36.86014 | 0.778708 | 0.053311 | 0 | 0.516667 | 1 | 0.016667 | 0.19988 | 0.155127 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
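The generated `StateData` message declares a `Type` oneof over `Int64`, `Bytes`, and `String`. In protobuf, setting one oneof member implicitly clears the others; a plain-Python stand-in illustrating that rule (the class below is our sketch, not the generated API):

```python
class StateDataSketch:
    _ONEOF = ("Int64", "Bytes", "String")

    def __init__(self):
        self._which = None
        self._value = None

    def set(self, field, value):
        assert field in self._ONEOF
        self._which = field  # the previously set member is implicitly cleared
        self._value = value

    def which_oneof(self):
        # analogous to protobuf's WhichOneof("Type")
        return self._which

sd = StateDataSketch()
sd.set("Int64", 42)
sd.set("String", "hello")
print(sd.which_oneof())  # String
```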
0f4a41bba9d795071136091d1dc496c27479a9f9 | 1,163 | py | Python | netta/a.py | zhangdafu12/web | 64ce7db4697167215bf9ee25cd5bdc0bd15b5831 | [
"MIT"
] | null | null | null | netta/a.py | zhangdafu12/web | 64ce7db4697167215bf9ee25cd5bdc0bd15b5831 | [
"MIT"
] | 1 | 2020-03-30T09:26:59.000Z | 2020-03-30T09:26:59.000Z | netta/a.py | zhangdafu12/web | 64ce7db4697167215bf9ee25cd5bdc0bd15b5831 | [
"MIT"
] | null | null | null | # -*- encoding:utf8 -*-
# author: Shulei
# e-mail: 1191543592@qq.com
# time: 2019/4/2 10:00
import time
# A descriptor is a class that implements any of the three core attribute-access
# operations (get, set, delete) via __get__(), __set__() and __delete__().
# These methods receive an instance as input and operate on that instance's
# underlying dictionary; to use a descriptor, place an instance of it as a
# class attribute in another class's definition.
# Descriptors are class attributes (like properties or methods) with any of the following special methods:
# __get__ (non-data descriptor method, for example on a method/function)
# __set__ (data descriptor method, for example on a property instance)
# __delete__ (data descriptor method)
class Celsius:
def __init__(self, value=0.0):
self.value = float(value)
def __get__(self, instance, owner):
print(instance)
print(owner)
        print("__get__ was called")
return self.value
def __set__(self, instance, value):
self.value = float(value)
def __delete__(self, instance):
        print("deleting ...")
print(self)
print(instance)
print(instance.__dict__)
class Temperature:
celsius = Celsius()
t = Temperature()
print(t.celsius)
del t.celsius
print(t.celsius)
print(__name__)
if __name__ == '__main__':
    print("running as __main__") | 24.229167 | 106 | 0.687876 | 140 | 1,163 | 5.314286 | 0.507143 | 0.048387 | 0.080645 | 0.061828 | 0.147849 | 0.08871 | 0.08871 | 0 | 0 | 0 | 0 | 0.024652 | 0.197764 | 1,163 | 48 | 107 | 24.229167 | 0.772776 | 0.44368 | 0 | 0.24 | 0 | 0 | 0.040816 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16 | false | 0 | 0.04 | 0 | 0.36 | 0.44 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
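The `Celsius` descriptor in a.py stores `value` on the descriptor object itself, so every `Temperature` instance shares one value. A per-instance variant stores the data in each instance's `__dict__` instead; a sketch (class names are ours):

```python
class PerInstanceCelsius:
    def __set_name__(self, owner, name):
        # remember the attribute name this descriptor is bound to
        self._name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__.get(self._name, 0.0)

    def __set__(self, instance, value):
        # store per instance, not on the descriptor
        instance.__dict__[self._name] = float(value)

class Temperature2:
    celsius = PerInstanceCelsius()

a, b = Temperature2(), Temperature2()
a.celsius = 36.6
print(a.celsius, b.celsius)  # 36.6 0.0
```

Because `__set__` is defined, this is a data descriptor and still takes precedence over the instance `__dict__` on attribute lookup; `__get__` reads the dict explicitly.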
0f4aea28cd9c70e23bcd7e3a57f7b80215d70f0c | 476 | py | Python | data_structures/binary_tree/__init__.py | Mhassanbughio/Python-1 | 704c0f93425c53fcadcdbb3dbdf337b0598079fd | [
"MIT"
] | 2 | 2022-01-13T04:56:29.000Z | 2022-01-26T05:09:34.000Z | data_structures/binary_tree/__init__.py | Mhassanbughio/Python-1 | 704c0f93425c53fcadcdbb3dbdf337b0598079fd | [
"MIT"
] | null | null | null | data_structures/binary_tree/__init__.py | Mhassanbughio/Python-1 | 704c0f93425c53fcadcdbb3dbdf337b0598079fd | [
"MIT"
] | null | null | null | class Rectangle:
def __init__(self, length, breadth, unit_cost=0):
self.length = length
self.breadth = breadth
self.unit_cost = unit_cost
def get_area(self):
return self.length * self.breadth
def calculate_cost(self):
area = self.get_area()
return area * self.unit_cost
# breadth = 120 units, length = 160 units, 1 sq unit cost = Rs 2000
r = Rectangle(160, 120, 2000)
print("Area of Rectangle: %s sq units" % (r.get_area()))
| 34 | 67 | 0.657563 | 70 | 476 | 4.3 | 0.357143 | 0.13289 | 0.112957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060274 | 0.233193 | 476 | 13 | 68 | 36.615385 | 0.764384 | 0.136555 | 0 | 0 | 0 | 0 | 0.07335 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.083333 | 0.5 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
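The `Rectangle` class is mostly data plus two derived values, which is a natural fit for a dataclass; an equivalent sketch (the `Rect` name and method names are ours):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    length: float
    breadth: float
    unit_cost: float = 0.0

    def area(self) -> float:
        return self.length * self.breadth

    def cost(self) -> float:
        return self.area() * self.unit_cost

rect = Rect(160, 120, 2000)
print(rect.area(), rect.cost())  # 19200 38400000
```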
0f69d3eab2dbb5cfe8989db4cb4da5f04dcd2474 | 1,653 | py | Python | src/moodlews/service.py | pystardust/Welearn-bot | 87a8c388201629c7ce94e6f9730a4ac7442b36f6 | [
"MIT"
] | null | null | null | src/moodlews/service.py | pystardust/Welearn-bot | 87a8c388201629c7ce94e6f9730a4ac7442b36f6 | [
"MIT"
] | null | null | null | src/moodlews/service.py | pystardust/Welearn-bot | 87a8c388201629c7ce94e6f9730a4ac7442b36f6 | [
"MIT"
] | null | null | null | from requests import Session
import urllib.parse
import json
class ServerFunctions:
SITE_INFO = "core_webservice_get_site_info"
ALL_COURSES = "core_course_get_courses_by_field"
USER_COURSES = "core_enrol_get_users_courses"
COURSE_CONTENTS = "core_course_get_contents"
ASSIGNMENTS = "mod_assign_get_assignments"
ASSIGNMENT_STATUS = "mod_assign_get_submission_status"
URLS = "mod_url_get_urls_by_courses"
RESOURCES = "mod_resource_get_resources_by_courses"
class MoodleClient:
def __init__(self, baseurl):
self.baseurl = baseurl
self.login_url = urllib.parse.urljoin(baseurl, "login/token.php")
self.server_url = urllib.parse.urljoin(baseurl, "webservice/rest/server.php")
self.session = Session()
self.token = ""
def response(self, url, **data):
return self.session.post(url, data)
def response_json(self, url, **data):
response = self.response(url, **data)
return json.loads(response.content)
def authenticate(self, username, password):
login = self.response_json(
self.login_url,
username=username,
password=password,
service="moodle_mobile_app",
)
try:
self.token = login["token"]
return self.token
except KeyError:
return False
def server(self, function, **data):
return self.response_json(
self.server_url,
wstoken=self.token,
moodlewsrestformat="json",
wsfunction=function,
**data
)
def close(self):
self.session.close()
| 30.611111 | 85 | 0.643678 | 185 | 1,653 | 5.475676 | 0.356757 | 0.035538 | 0.047384 | 0.041461 | 0.055281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.264368 | 1,653 | 53 | 86 | 31.188679 | 0.833059 | 0 | 0 | 0 | 0 | 0 | 0.182698 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0.043478 | 0.065217 | 0.043478 | 0.521739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
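`MoodleClient` derives its endpoints with `urljoin`, which carries the usual caveat: a base URL without a trailing slash has its last path segment replaced during resolution. A quick demonstration (the hostnames are ours):

```python
import urllib.parse

base_with_slash = "https://moodle.example.edu/"
base_no_slash = "https://moodle.example.edu/site"  # note: no trailing slash

print(urllib.parse.urljoin(base_with_slash, "login/token.php"))
# https://moodle.example.edu/login/token.php
print(urllib.parse.urljoin(base_no_slash, "login/token.php"))
# https://moodle.example.edu/login/token.php  <- the 'site' segment was replaced
```

So callers constructing a `MoodleClient` should pass a base URL ending in `/` if the Moodle instance lives under a sub-path.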
0f7394ec885b966aaf92685817a31c344da144b3 | 577 | py | Python | unittests/validators.py | lspestrip/stdb2 | 80703385f6e681962140d82a4991878a995c90fd | [
"MIT"
] | 1 | 2018-03-07T10:13:12.000Z | 2018-03-07T10:13:12.000Z | unittests/validators.py | lspestrip/stdb2 | 80703385f6e681962140d82a4991878a995c90fd | [
"MIT"
] | 31 | 2017-10-28T07:17:38.000Z | 2018-05-11T11:28:02.000Z | unittests/validators.py | lspestrip/stdb2 | 80703385f6e681962140d82a4991878a995c90fd | [
"MIT"
] | 1 | 2017-11-28T21:50:27.000Z | 2017-11-28T21:50:27.000Z | # -*- encoding: utf-8 -*-
VALID_REPORT_EXTENSIONS = [
'.pdf',
'.doc',
'.docx',
'.html',
'.htm',
'.xsl',
'.xslx',
'.md',
'.rst',
'.zip',
]
def validate_report_file_ext(value):
import os
from django.core.exceptions import ValidationError
ext = os.path.splitext(value.name)[1]
    if ext.lower() not in VALID_REPORT_EXTENSIONS:
raise ValidationError('unsupported file extension "{0}", valid extensions are {1}'
.format(ext, ', '.join(['"' + x + '"' for x in VALID_REPORT_EXTENSIONS])))
| 23.08 | 104 | 0.561525 | 65 | 577 | 4.846154 | 0.692308 | 0.104762 | 0.2 | 0.146032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009434 | 0.265165 | 577 | 24 | 105 | 24.041667 | 0.733491 | 0.039861 | 0 | 0 | 0 | 0 | 0.188406 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.105263 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
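The validator keys off `os.path.splitext`, which returns the extension with its leading dot (matching the entries in `VALID_REPORT_EXTENSIONS`) and only splits off the last suffix:

```python
import os

print(os.path.splitext("report.PDF")[1].lower())  # .pdf
print(os.path.splitext("archive.tar.gz")[1])      # .gz  (only the last suffix)
```

This is why `.tar.gz` uploads would be judged by `.gz` alone under this scheme.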
0f75773a133923f483bf0f32ed5fc3578f9eaa52 | 1,566 | py | Python | src/ui/pgUI.py | yuj09161/MoneyManager | 32d57ca05d0c182ace68ea960a7bf7dfaad0fc79 | [
"MIT"
] | null | null | null | src/ui/pgUI.py | yuj09161/MoneyManager | 32d57ca05d0c182ace68ea960a7bf7dfaad0fc79 | [
"MIT"
] | null | null | null | src/ui/pgUI.py | yuj09161/MoneyManager | 32d57ca05d0c182ace68ea960a7bf7dfaad0fc79 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
################################################################################
## Form generated from reading UI file 'pg.ui'
##
## Created by: Qt User Interface Compiler version 6.0.0
##
## WARNING! All changes made in this file will be lost when recompiling UI file!
################################################################################
from PySide6.QtCore import *
from PySide6.QtGui import *
from PySide6.QtWidgets import *
class Ui_Pg(object):
def setupUi(self, Pg):
if not Pg.objectName():
Pg.setObjectName(u"Pg")
Pg.setFixedSize(200, 70)
self.centralwidget = QWidget(Pg)
self.centralwidget.setObjectName(u"centralwidget")
self.vlCent = QVBoxLayout(self.centralwidget)
self.vlCent.setObjectName(u"vlCent")
self.lbStatus = QLabel(self.centralwidget)
self.lbStatus.setObjectName(u"lbStatus")
self.lbStatus.setAlignment(Qt.AlignCenter)
self.vlCent.addWidget(self.lbStatus)
self.pgPg = QProgressBar(self.centralwidget)
self.pgPg.setObjectName(u"pgPg")
self.pgPg.setValue(24)
self.vlCent.addWidget(self.pgPg)
Pg.setCentralWidget(self.centralwidget)
self.retranslateUi(Pg)
QMetaObject.connectSlotsByName(Pg)
# setupUi
def retranslateUi(self, Pg):
Pg.setWindowTitle(QCoreApplication.translate("Pg", u"\uc9c4\ud589 \uc911", None))
self.lbStatus.setText(QCoreApplication.translate("Pg", u"TextLabel", None))
# retranslateUi
| 31.959184 | 89 | 0.598978 | 160 | 1,566 | 5.85625 | 0.4625 | 0.108858 | 0.089648 | 0.049093 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017502 | 0.197318 | 1,566 | 48 | 90 | 32.625 | 0.727924 | 0.139208 | 0 | 0 | 1 | 0 | 0.055413 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.115385 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0f7e825a057f5ebc72d9fb6d24195fedaea8c685 | 216 | py | Python | app/ref/test-ttyP.py | lucid281/pyEfi | c8a5b69820a3c1e7c4c652f7e4194cd8ce8a6e18 | [
"Apache-2.0"
] | 7 | 2017-07-30T20:23:58.000Z | 2021-12-10T19:38:04.000Z | app/ref/test-ttyP.py | lucid281/pyEfi | c8a5b69820a3c1e7c4c652f7e4194cd8ce8a6e18 | [
"Apache-2.0"
] | 2 | 2017-07-31T23:03:39.000Z | 2021-03-26T21:06:02.000Z | app/ref/test-ttyP.py | lucid281/pyEfi | c8a5b69820a3c1e7c4c652f7e4194cd8ce8a6e18 | [
"Apache-2.0"
] | null | null | null | from ..app.pyefi.ttyp import ttyP
ttyP(0, "0 - ttyP test")
ttyP(1, "1 - header")
ttyP(2, "2 - bold")
ttyP(3, "3 - okblue")
ttyP(4, "4 - okgreen")
ttyP(5, "5 - underline")
ttyP(6, "6 - warning")
ttyP(7, "7 - fail")
| 19.636364 | 34 | 0.578704 | 39 | 216 | 3.205128 | 0.538462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090395 | 0.180556 | 216 | 10 | 35 | 21.6 | 0.615819 | 0 | 0 | 0 | 0 | 0 | 0.388889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |