from header_filter.matchers import Header


def test_or_matcher_supports_bitwise_not(rf):
    h_name_1, h_value_1 = 'HTTP_X_A', 'val_x'
    h_name_2, h_value_2 = 'HTTP_X_B', 'val_y'
    matcher = ~(Header(h_name_1, h_value_1) | Header(h_name_2, h_value_2))
    request = rf.get('/', **{h_name_1: h_value_1, h_name_2: h_value_2})
    assert matcher.match(request) is False


def test_or_matcher_supports_bitwise_and(rf):
    h_name_1, h_value_1 = 'HTTP_X_A', 'val_x'
    h_name_2, h_value_2 = 'HTTP_X_B', 'val_y'
    h_name_3, h_value_3 = 'HTTP_X_C', 'val_z'
    matcher = (
        Header(h_name_1, h_value_1) | Header(h_name_2, h_value_2)
    ) & Header(h_name_3, h_value_3)
    request = rf.get(
        '/',
        **{h_name_1: h_value_1, h_name_2: h_value_2, h_name_3: h_value_3})
    assert matcher.match(request) is True


def test_or_matcher_supports_bitwise_or(rf):
    h_name_1, h_value_1 = 'HTTP_X_A', 'val_x'
    h_name_2, h_value_2 = 'HTTP_X_B', 'val_y'
    h_name_3, h_value_3 = 'HTTP_X_C', 'val_z'
    matcher = (
        Header(h_name_1, h_value_1) | Header(h_name_2, h_value_2)
    ) | Header(h_name_3, h_value_3)
    request = rf.get(
        '/',
        **{h_name_1: h_value_1, h_name_2: h_value_2, h_name_3: h_value_3})
    assert matcher.match(request) is True


def test_or_matcher_supports_bitwise_xor(rf):
    h_name_1, h_value_1 = 'HTTP_X_A', 'val_x'
    h_name_2, h_value_2 = 'HTTP_X_B', 'val_y'
    h_name_3, h_value_3 = 'HTTP_X_C', 'val_z'
    matcher = (
        Header(h_name_1, h_value_1) | Header(h_name_2, h_value_2)
    ) ^ Header(h_name_3, h_value_3)
    request = rf.get(
        '/',
        **{h_name_1: h_value_1, h_name_2: h_value_2, h_name_3: h_value_3})
    assert matcher.match(request) is False


def test_repr():
    assert (
        repr(Header('HTTP_X_A', 'val_x') | Header('HTTP_X_B', 'val_y'))
        == "(Header('HTTP_X_A', 'val_x') | Header('HTTP_X_B', 'val_y'))"
    )
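The tests above exercise matchers that compose with Python's bitwise operators (`|`, `&`, `^`, `~`) and print a readable `repr`. A minimal sketch of that composable-matcher pattern, using illustrative stand-in classes (not header_filter's actual internals) and a plain dict in place of the request object:

```python
# Sketch of the composable-matcher pattern the tests exercise.
# These classes are hypothetical stand-ins, not the header_filter package.

class Matcher:
    def match(self, request):
        raise NotImplementedError

    def __or__(self, other):
        return BoolMatcher(self, other, lambda a, b: a or b, '|')

    def __and__(self, other):
        return BoolMatcher(self, other, lambda a, b: a and b, '&')

    def __xor__(self, other):
        return BoolMatcher(self, other, lambda a, b: a != b, '^')

    def __invert__(self):
        return NotMatcher(self)


class Header(Matcher):
    """Matches when a request carries the given header value."""

    def __init__(self, name, value):
        self.name, self.value = name, value

    def match(self, request):
        return request.get(self.name) == self.value

    def __repr__(self):
        return "Header(%r, %r)" % (self.name, self.value)


class BoolMatcher(Matcher):
    """Combines two matchers with a boolean operator."""

    def __init__(self, left, right, op, symbol):
        self.left, self.right, self.op, self.symbol = left, right, op, symbol

    def match(self, request):
        return self.op(self.left.match(request), self.right.match(request))

    def __repr__(self):
        return "(%r %s %r)" % (self.left, self.symbol, self.right)


class NotMatcher(Matcher):
    """Negates an inner matcher."""

    def __init__(self, inner):
        self.inner = inner

    def match(self, request):
        return not self.inner.match(request)


# A plain dict stands in for the WSGI-style request environment.
request = {'HTTP_X_A': 'val_x', 'HTTP_X_B': 'val_y'}
m = ~(Header('HTTP_X_A', 'val_x') | Header('HTTP_X_B', 'val_y'))
```

Because every operator returns another `Matcher`, arbitrarily nested expressions like `(a | b) & c` fall out of the four dunder methods for free.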
Does Freedom Still Ring After 9/11?

Nine years and ten days before the day of this publication, I was sifting through the rubble of the Pentagon. As an emergency manager for FEMA, I would spend the next five weeks or so between the Pentagon, Ground Zero and the National Emergency Operations Center in downtown DC. The experience, as it was for many Americans, was life changing.

I saw the best of America. People banded together as friends, neighbors and countrymen to help one another. Petty disputes and prejudices were put away, and we all prayed, each according to his or her own understanding of God, for those lost in the attack and their suffering families. I was proud (and not for the first time) to be an American. And I was proud of the political system that supports and breeds such a resilient and independent people.

Since that time, every year on September 11th, I reflect on the state of our nation. Where are we now? Does freedom still ring, or did the terrorists win? For this issue I would like to share some of those random thoughts. Agree or disagree, but hopefully these ideas will encourage you to think deeply on the state of our nation and what it means to us, our children and the generations of Americans yet to come.

Sorry, but burning books is simply un-American, especially religious books that many consider sacred in a country founded on religious freedom. Pastor Jones's planned stunt, and the copycat stunts that followed, were, as Sarah Palin put it, an "unnecessary provocation." I hope Jones is back in Florida now from his trip to New York City. The Gainesville residents must have missed their village idiot. Besides, NYC already has Mayor Bloomberg. With that said, Muslims who would attack Americans and kill innocents over this event are also idiots... just more dangerous. It seems they are more than willing to prove that the things Jones was saying about their religion might be true.
Besides, Americans are getting tired of walking on eggshells so as not to offend Muslims, while Muslims burn Bibles, stomp on American flags and burn our leaders in effigy with nary a word from the so-called peace-loving moderate Muslim community. They don't seem too worried about our feelings. That Muslims can walk the streets of America generally without fear of reprisal due to their religion is a tribute to our culture, and a lot more than the Muslim countries can say about Christians.

While I'm on the subject, opposing the mosque at Ground Zero is not bigotry. Just like the burning of Qurans, this stunt is specifically designed as a slap in the face of America. Being offended by this brazen act is not bigotry but a normal, sane reaction to another unnecessary provocation.

Glenn Beck's "Restoring Honor" rally was a unique and unapologetically patriotic, faith-based event that was less a protest than a celebration of the traditional values and principles that have always produced the best this nation has to offer. Add those hundreds of thousands who attended the event to the Tea Party protests across the country, and you have a grassroots message that only a political fool would ignore. As Obama might put it, "let me be perfectly clear": these movements are about conservative principles of government, not political parties. Republicans will make a big mistake if they believe that they can keep the support of this huge groundswell based simply on party affiliation. The only reason they are beating the Democrats right now is because historically they lean more toward the conservative right than the Democrats do. Let that demographic change, and watch what happens. (Democrats, you may want to be listening to this.) In the same light, Democrats are making a big mistake by underestimating and trying to demonize these grassroots groups. For every one of these good folks who attends a rally, there are hundreds more who support their point of view.
And they are only getting bolder as they see the traditional silent majority speaking up, and they realize they are not alone in believing that freedom, self-reliance and limited government power the engine of innovation and human progress. I am truly enjoying the way so many Democrats are now running away from the far left's socialist agenda. If they want the support of mainstream America, this is the right move. The success of these patriotic rallies, and the kind letters and emails I get in support of this column, have convinced me that Americans aren't quite ready to quietly slip into a European-style socialism. We remain, instead, the last and best hope for freedom for much of the world today.

My heart soared as the children's choir sang "The Star-Spangled Banner" during the memorial at Ground Zero this past Saturday. These words, so beautifully echoed in the voices of our children, and from the same spot where just a few years ago rescuers choked on the dust and rubble of American dreams, ask again the century-old question, one so vital to a world vastly different from the one that existed on 9/10: "Oh say does that star-spangled banner yet wave, o'er the land of the free and the home of the brave." I believe it does, and will continue to do so as long as good people continue to gather, and remember.
# Copyright (c) 2010-2011 OpenStack, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import cPickle as pickle
from gzip import GzipFile
from os.path import getmtime
from struct import unpack_from
from time import time

from swift.common.utils import hash_path, validate_configuration


class RingData(object):
    """Partitioned consistent hashing ring data (used for serialization)."""

    def __init__(self, replica2part2dev_id, devs, part_shift):
        self.devs = devs
        self._replica2part2dev_id = replica2part2dev_id
        self._part_shift = part_shift

    def to_dict(self):
        return {'devs': self.devs,
                'replica2part2dev_id': self._replica2part2dev_id,
                'part_shift': self._part_shift}


class Ring(object):
    """
    Partitioned consistent hashing ring.

    :param pickle_gz_path: path to ring file
    :param reload_time: time interval in seconds to check for a ring change
    """

    def __init__(self, pickle_gz_path, reload_time=15):
        # can't use the ring unless HASH_PATH_SUFFIX is set
        validate_configuration()
        self.pickle_gz_path = pickle_gz_path
        self.reload_time = reload_time
        self._reload(force=True)

    def _reload(self, force=False):
        self._rtime = time() + self.reload_time
        if force or self.has_changed():
            ring_data = pickle.load(GzipFile(self.pickle_gz_path, 'rb'))
            if not hasattr(ring_data, 'devs'):
                ring_data = RingData(ring_data['replica2part2dev_id'],
                                     ring_data['devs'],
                                     ring_data['part_shift'])
            self._mtime = getmtime(self.pickle_gz_path)
            self.devs = ring_data.devs
            self.zone2devs = {}
            for dev in self.devs:
                if not dev:
                    continue
                if dev['zone'] in self.zone2devs:
                    self.zone2devs[dev['zone']].append(dev)
                else:
                    self.zone2devs[dev['zone']] = [dev]
            self._replica2part2dev_id = ring_data._replica2part2dev_id
            self._part_shift = ring_data._part_shift

    @property
    def replica_count(self):
        """Number of replicas used in the ring."""
        return len(self._replica2part2dev_id)

    @property
    def partition_count(self):
        """Number of partitions in the ring."""
        return len(self._replica2part2dev_id[0])

    def has_changed(self):
        """
        Check to see if the ring on disk is different than the current one in
        memory.

        :returns: True if the ring on disk has changed, False otherwise
        """
        return getmtime(self.pickle_gz_path) != self._mtime

    def get_part_nodes(self, part):
        """
        Get the nodes that are responsible for the partition.

        :param part: partition to get nodes for
        :returns: list of node dicts

        See :func:`get_nodes` for a description of the node dicts.
        """
        if time() > self._rtime:
            self._reload()
        return [self.devs[r[part]] for r in self._replica2part2dev_id]

    def get_nodes(self, account, container=None, obj=None):
        """
        Get the partition and nodes for an account/container/object.

        :param account: account name
        :param container: container name
        :param obj: object name
        :returns: a tuple of (partition, list of node dicts)

        Each node dict will have at least the following keys:

        ======  ===============================================================
        id      unique integer identifier amongst devices
        weight  a float of the relative weight of this device as compared to
                others; this indicates how many partitions the builder will try
                to assign to this device
        zone    integer indicating which zone the device is in; a given
                partition will not be assigned to multiple devices within the
                same zone
        ip      the ip address of the device
        port    the tcp port of the device
        device  the device's name on disk (sdb1, for example)
        meta    general use 'extra' field; for example: the online date, the
                hardware description
        ======  ===============================================================
        """
        key = hash_path(account, container, obj, raw_digest=True)
        if time() > self._rtime:
            self._reload()
        part = unpack_from('>I', key)[0] >> self._part_shift
        return part, [self.devs[r[part]] for r in self._replica2part2dev_id]

    def get_more_nodes(self, part):
        """
        Generator to get extra nodes for a partition for hinted handoff.

        :param part: partition to get handoff nodes for
        :returns: generator of node dicts

        See :func:`get_nodes` for a description of the node dicts.
        """
        if time() > self._rtime:
            self._reload()
        zones = sorted(self.zone2devs.keys())
        for part2dev_id in self._replica2part2dev_id:
            zones.remove(self.devs[part2dev_id[part]]['zone'])
        while zones:
            zone = zones.pop(part % len(zones))
            weighted_node = None
            for i in xrange(len(self.zone2devs[zone])):
                node = self.zone2devs[zone][
                    (part + i) % len(self.zone2devs[zone])]
                if node.get('weight'):
                    weighted_node = node
                    break
            if weighted_node:
                yield weighted_node
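The partition lookup in `Ring.get_nodes` boils down to: MD5-hash the account/container/object path, take the first four bytes as a big-endian integer, and shift off the low bits. A standalone sketch of that calculation (simplified: real Swift's `hash_path` also mixes in a configured `HASH_PATH_SUFFIX`, omitted here; `part_shift=27` is an arbitrary example value):

```python
import hashlib
from struct import unpack_from


def get_partition(account, container=None, obj=None, part_shift=27):
    # Build the path the way the ring hashes it (real Swift also appends
    # a configured HASH_PATH_SUFFIX before hashing; omitted for brevity).
    path = '/' + '/'.join(p for p in (account, container, obj) if p)
    key = hashlib.md5(path.encode('utf-8')).digest()
    # The top (32 - part_shift) bits of the first 4 hash bytes select the
    # partition, exactly as in Ring.get_nodes:
    #     unpack_from('>I', key)[0] >> self._part_shift
    return unpack_from('>I', key)[0] >> part_shift


part = get_partition('AUTH_test', 'photos', 'cat.jpg')
```

With `part_shift=27` the ring has 2^5 = 32 partitions; the same path always maps to the same partition, which is what lets every proxy locate an object without coordination.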
Swiss in search of neutral global aid symbol
Oct 29, 2005 - 19:03

Foreign Minister Micheline Calmy-Rey has been holding talks in the Middle East on a new emblem for the Red Cross and Red Crescent Movement. On Saturday she met both her Egyptian counterpart and the president of the country's first aid service, before travelling to Israel, the Palestinian territories and Lebanon.

The Swiss foreign minister's whirlwind three-day tour is part of efforts by Switzerland - the depositary state of the Geneva Conventions - to find a new humanitarian emblem. Switzerland aims to host a diplomatic conference before the end of the year, which would allow Israel's first-aid service, Magen David Adom, to be globally recognised. The Israeli society insists on its own emblem – a red Star of David. It refuses to operate under either of the two emblems currently used and recognised by the international movement: the cross and the crescent.

Calmy-Rey held talks with the Egyptian foreign minister, Ahmed Abul Gheit, and Suzanne Mubarak, the president's wife and head of Egypt's Red Crescent Society. The meeting in Cairo passed off in a constructive atmosphere, said a Swiss foreign ministry spokesman. On Sunday Calmy-Rey is due in Israel and the Palestinian territories, before travelling to Lebanon on Monday for talks on the issue.

Condemnation of Iran

In a separate development, Switzerland has condemned comments by Iran's president, Mahmoud Ahmadinejad, calling for Israel's annihilation. Bern said it was unacceptable for a member state of the United Nations to call for the destruction of another member state. The foreign ministry summoned Iran's ambassador to Switzerland for an explanation. Many other European countries took similar steps and the UN secretary-general, Kofi Annan, rebuked Teheran for the comments.

swissinfo with agencies

In brief
Switzerland is the depositary state of the Geneva Conventions – a series of rules to guarantee human rights during times of conflict. The conventions and the additional protocols are aimed at protecting civilians, the sick and prisoners of war. More than 190 countries have ratified the treaties.

Key facts
The Federation of the International Red Cross and Red Crescent Societies comprises 181 national societies. With the International Committee of the Red Cross (ICRC), it forms the International Red Cross and Red Crescent Movement. The movement does not recognise the Israeli first-aid society, Magen David Adom.

Switzerland is hosting an international meeting in Geneva to try to agree on a new emblem for the International Red Cross and Red Crescent Movement.
# -*- coding: utf-8 -*-
"""Windows Registry plugin to parse the Background Activity Moderator keys."""

import os

from dfdatetime import filetime as dfdatetime_filetime

from plaso.containers import events
from plaso.containers import time_events
from plaso.lib import definitions
from plaso.lib import dtfabric_helper
from plaso.lib import errors
from plaso.parsers import winreg_parser
from plaso.parsers.winreg_plugins import interface


class BackgroundActivityModeratorEventData(events.EventData):
  """Background Activity Moderator event data.

  Attributes:
    binary_path (str): binary executed.
    user_sid (str): user SID associated with entry.
  """

  DATA_TYPE = 'windows:registry:bam'

  def __init__(self):
    """Initializes event data."""
    super(
        BackgroundActivityModeratorEventData,
        self).__init__(data_type=self.DATA_TYPE)
    self.binary_path = None
    self.user_sid = None


class BackgroundActivityModeratorWindowsRegistryPlugin(
    interface.WindowsRegistryPlugin, dtfabric_helper.DtFabricHelper):
  """Background Activity Moderator data Windows Registry plugin."""

  NAME = 'bam'
  DATA_FORMAT = 'Background Activity Moderator (BAM) Registry data'

  FILTERS = frozenset([
      interface.WindowsRegistryKeyPathFilter(
          'HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Services\\bam'
          '\\UserSettings'),
      interface.WindowsRegistryKeyPathFilter(
          'HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Services\\bam'
          '\\State\\UserSettings')])

  _DEFINITION_FILE = os.path.join(
      os.path.dirname(__file__), 'filetime.yaml')

  def _ParseValue(self, registry_value):
    """Parses the registry value.

    Args:
      registry_value (bytes): value data.

    Returns:
      int: timestamp.

    Raises:
      ParseError: if the value data could not be parsed.
    """
    try:
      timestamp = self._ReadStructureFromByteStream(
          registry_value, 0, self._GetDataTypeMap('filetime'))
    except (ValueError, errors.ParseError) as exception:
      raise errors.ParseError(
          'Unable to parse timestamp with error: {0!s}'.format(exception))

    return timestamp

  def ExtractEvents(self, parser_mediator, registry_key, **kwargs):
    """Extracts events from a Windows Registry key.

    Args:
      parser_mediator (ParserMediator): mediates interactions between parsers
          and other components, such as storage and dfvfs.
      registry_key (dfwinreg.WinRegistryKey): Windows Registry key.

    Raises:
      ParseError: if the value data could not be parsed.
    """
    sid_keys = registry_key.GetSubkeys()
    if not sid_keys:
      return

    for sid_key in sid_keys:
      for value in sid_key.GetValues():
        if not value.name == 'Version' and not value.name == 'SequenceNumber':
          timestamp = self._ParseValue(value.data)
          if timestamp:
            event_data = BackgroundActivityModeratorEventData()
            event_data.binary_path = value.name
            event_data.user_sid = sid_key.name

            date_time = dfdatetime_filetime.Filetime(timestamp=timestamp)
            event = time_events.DateTimeValuesEvent(
                date_time, definitions.TIME_DESCRIPTION_LAST_RUN)
            parser_mediator.ProduceEventWithEventData(event, event_data)


winreg_parser.WinRegistryParser.RegisterPlugin(
    BackgroundActivityModeratorWindowsRegistryPlugin)
The Community Mentorship Project (CMP) is a partnership between the University of Hawaii at Manoa's Office of Multicultural Student Services (OMSS), Ewa Elementary School, and James Campbell High School. The goal of CMP is to give community members the opportunity to mentor Ewa Elementary children.

A mentor is a role model. CMP mentors are students from James Campbell High School who are involved in Campbell High School's Community Service Project. Mentors share their talents and experiences with Ewa Elementary children, and provide guidance and support to the children they are mentoring.

What talents can a mentor provide Ewa Elementary children? Mentors will give guidance with homework, support reading skills, and provide motivation for learning. Mentors decide the activities they want to provide for the children they are mentoring. Activities that have been conducted in the past include sports, performing arts, creative arts, and martial arts.

Where and when will the sessions be held? Sessions will be held in a classroom on the campus of Ewa Elementary School, on Tuesdays and Thursdays from 2:45 p.m. to 4:00 p.m.

What kind of guidance will the OMSS coordinator provide? The coordinator will provide training sessions for three days before the start of the actual mentoring. Topics include activity planning, classroom management, mentoring styles, policies, and procedures. The OMSS coordinator will also be present during mentoring sessions to provide support.

What will mentors get for all their hard work? Other than the satisfaction of positively influencing the lives of children, Campbell High School mentors will receive a 1/2 DOE credit.

Applications are available from Ms. Nelwynn Young. Interested applicants can also call the Office of Multicultural Student Services at 956-7348 and request CMP applications.
# coding: utf-8

"""
    MailMojo API

    v1 of the MailMojo API  # noqa: E501

    OpenAPI spec version: 1.1.0
    Contact: hjelp@mailmojo.no
    Generated by: https://github.com/swagger-api/swagger-codegen.git
"""

from __future__ import absolute_import

import re  # noqa: F401

# python 2 and python 3 compatibility library
import six

from mailmojo_sdk.api_client import ApiClient


class PageApi(object):
    """NOTE: This class is auto generated by the swagger code generator
    program.

    Do not edit the class manually.
    Ref: https://github.com/swagger-api/swagger-codegen
    """

    def __init__(self, api_client=None):
        if api_client is None:
            api_client = ApiClient()
        self.api_client = api_client

    def get_page_by_id(self, id, **kwargs):  # noqa: E501
        """Retrieve a landing page.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_page_by_id(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int id: ID of the landing page to retrieve. (required)
        :return: Page
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.get_page_by_id_with_http_info(id, **kwargs)  # noqa: E501
        else:
            (data) = self.get_page_by_id_with_http_info(id, **kwargs)  # noqa: E501
            return data

    def get_page_by_id_with_http_info(self, id, **kwargs):  # noqa: E501
        """Retrieve a landing page.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_page_by_id_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int id: ID of the landing page to retrieve. (required)
        :return: Page
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_page_by_id" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params or params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `get_page_by_id`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['mailmojo_auth']  # noqa: E501

        return self.api_client.call_api(
            '/v1/pages/{id}/', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='Page',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def get_pages(self, **kwargs):  # noqa: E501
        """Retrieve all landing pages.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_pages(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :return: list[Page]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.get_pages_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.get_pages_with_http_info(**kwargs)  # noqa: E501
            return data

    def get_pages_with_http_info(self, **kwargs):  # noqa: E501
        """Retrieve all landing pages.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_pages_with_http_info(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :return: list[Page]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = []  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_pages" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['mailmojo_auth']  # noqa: E501

        return self.api_client.call_api(
            '/v1/pages/', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='list[Page]',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def track_page_view(self, id, view, **kwargs):  # noqa: E501
        """Track a view of a landing page.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.track_page_view(id, view, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int id: ID of the page to track view of. (required)
        :param TrackPageView view: (required)
        :return: TrackPageView
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.track_page_view_with_http_info(id, view, **kwargs)  # noqa: E501
        else:
            (data) = self.track_page_view_with_http_info(id, view, **kwargs)  # noqa: E501
            return data

    def track_page_view_with_http_info(self, id, view, **kwargs):  # noqa: E501
        """Track a view of a landing page.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.track_page_view_with_http_info(id, view, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int id: ID of the page to track view of. (required)
        :param TrackPageView view: (required)
        :return: TrackPageView
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['id', 'view']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method track_page_view" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params or params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `track_page_view`")  # noqa: E501
        # verify the required parameter 'view' is set
        if ('view' not in params or params['view'] is None):
            raise ValueError("Missing the required parameter `view` when calling `track_page_view`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'view' in params:
            body_params = params['view']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['mailmojo_auth']  # noqa: E501

        return self.api_client.call_api(
            '/v1/pages/{id}/track/view/', 'PATCH',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='TrackPageView',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def update_page(self, id, **kwargs):  # noqa: E501
        """Update a landing page partially.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.update_page(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int id: ID of the landing page to update. (required)
        :param Page page:
        :return: Page
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.update_page_with_http_info(id, **kwargs)  # noqa: E501
        else:
            (data) = self.update_page_with_http_info(id, **kwargs)  # noqa: E501
            return data

    def update_page_with_http_info(self, id, **kwargs):  # noqa: E501
        """Update a landing page partially.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.update_page_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int id: ID of the landing page to update. (required)
        :param Page page:
        :return: Page
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['id', 'page']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method update_page" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params or params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `update_page`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'page' in params:
            body_params = params['page']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['mailmojo_auth']  # noqa: E501

        return self.api_client.call_api(
            '/v1/pages/{id}/', 'PATCH',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='Page',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
Diary of a year with blind spots (mon. 30.05.2011 - wed. 30.05.2012). Fyrir Tìnum. Brab is also available at Printed Matter. What they wrote: "Jan Matthé spent a year considering blind spots in his daily routine, wondering what things he fails to see or sees through strong distortion, specifically in a technological age. This diary is a record of that year, composed of every CAPTCHA password Matthé entered while using different websites. CAPTCHA is a computer challenge-response test, prompted when a user enters a website, aimed at determining whether the user is human. CAPTCHA passwords generally consist of visually distorted words which would be illegible to a computer, but which a human can easily decipher and type. Sometimes CAPTCHA words are real words, but often they are merely familiar-looking but nonsensical sequences of letters. The compilation of these words, paired with the time and date of each CAPTCHA test, creates a quasi-narrative. The book is wrapped in commercial packaging for an automotive blind spot mirror."
# coding: utf-8
"""
Provide a report and downloadable CSV according to the German DATEV format.

- Query report showing only the columns that contain data, formatted nicely
  for display to the user.
- CSV download functionality `download_datev_csv` that provides a CSV file
  with all required columns. Used to import the data into the DATEV Software.
"""
from __future__ import unicode_literals
import datetime
import json
import zlib
import zipfile
import six
from csv import QUOTE_NONNUMERIC
from six import BytesIO
from six import string_types
import frappe
from frappe import _
import pandas as pd

from .datev_constants import DataCategory
from .datev_constants import Transactions
from .datev_constants import DebtorsCreditors
from .datev_constants import AccountNames
from .datev_constants import QUERY_REPORT_COLUMNS


def execute(filters=None):
    """Entry point for frappe."""
    validate(filters)
    result = get_transactions(filters, as_dict=0)
    columns = QUERY_REPORT_COLUMNS
    return columns, result


def validate(filters):
    """Make sure all mandatory filters and settings are present."""
    if not filters.get('company'):
        frappe.throw(_('<b>Company</b> is a mandatory filter.'))

    if not filters.get('from_date'):
        frappe.throw(_('<b>From Date</b> is a mandatory filter.'))

    if not filters.get('to_date'):
        frappe.throw(_('<b>To Date</b> is a mandatory filter.'))

    try:
        frappe.get_doc('DATEV Settings', filters.get('company'))
    except frappe.DoesNotExistError:
        frappe.throw(_('Please create <b>DATEV Settings</b> for Company <b>{}</b>.').format(filters.get('company')))


def get_transactions(filters, as_dict=1):
    """
    Get a list of accounting entries.

    Select GL Entries joined with Account and Party Account in order to get
    the account numbers.

    Returns a list of accounting entries.
Arguments: filters -- dict of filters to be passed to the sql query as_dict -- return as list of dicts [0,1] """ filter_by_voucher = 'AND gl.voucher_type = %(voucher_type)s' if filters.get('voucher_type') else '' gl_entries = frappe.db.sql(""" SELECT /* either debit or credit amount; always positive */ case gl.debit when 0 then gl.credit else gl.debit end as 'Umsatz (ohne Soll/Haben-Kz)', /* 'H' when credit, 'S' when debit */ case gl.debit when 0 then 'H' else 'S' end as 'Soll/Haben-Kennzeichen', /* account number or, if empty, party account number */ coalesce(acc.account_number, acc_pa.account_number) as 'Konto', /* against number or, if empty, party against number */ coalesce(acc_against.account_number, acc_against_pa.account_number) as 'Gegenkonto (ohne BU-Schlüssel)', gl.posting_date as 'Belegdatum', gl.voucher_no as 'Belegfeld 1', LEFT(gl.remarks, 60) as 'Buchungstext', gl.voucher_type as 'Beleginfo - Art 1', gl.voucher_no as 'Beleginfo - Inhalt 1', gl.against_voucher_type as 'Beleginfo - Art 2', gl.against_voucher as 'Beleginfo - Inhalt 2' FROM `tabGL Entry` gl /* Statistisches Konto (Debitoren/Kreditoren) */ left join `tabParty Account` pa on gl.against = pa.parent and gl.company = pa.company /* Kontonummer */ left join `tabAccount` acc on gl.account = acc.name /* Gegenkonto-Nummer */ left join `tabAccount` acc_against on gl.against = acc_against.name /* Statistische Kontonummer */ left join `tabAccount` acc_pa on pa.account = acc_pa.name /* Statistische Gegenkonto-Nummer */ left join `tabAccount` acc_against_pa on pa.account = acc_against_pa.name WHERE gl.company = %(company)s AND DATE(gl.posting_date) >= %(from_date)s AND DATE(gl.posting_date) <= %(to_date)s {} ORDER BY 'Belegdatum', gl.voucher_no""".format(filter_by_voucher), filters, as_dict=as_dict) return gl_entries def get_customers(filters): """ Get a list of Customers. 
Arguments: filters -- dict of filters to be passed to the sql query """ return frappe.db.sql(""" SELECT acc.account_number as 'Konto', cus.customer_name as 'Name (Adressatentyp Unternehmen)', case cus.customer_type when 'Individual' then 1 when 'Company' then 2 else 0 end as 'Adressatentyp', adr.address_line1 as 'Straße', adr.pincode as 'Postleitzahl', adr.city as 'Ort', UPPER(country.code) as 'Land', adr.address_line2 as 'Adresszusatz', con.email_id as 'E-Mail', coalesce(con.mobile_no, con.phone) as 'Telefon', cus.website as 'Internet', cus.tax_id as 'Steuernummer', ccl.credit_limit as 'Kreditlimit (Debitor)' FROM `tabParty Account` par left join `tabAccount` acc on acc.name = par.account left join `tabCustomer` cus on cus.name = par.parent left join `tabAddress` adr on adr.name = cus.customer_primary_address left join `tabCountry` country on country.name = adr.country left join `tabContact` con on con.name = cus.customer_primary_contact left join `tabCustomer Credit Limit` ccl on ccl.parent = cus.name and ccl.company = par.company WHERE par.company = %(company)s AND par.parenttype = 'Customer'""", filters, as_dict=1) def get_suppliers(filters): """ Get a list of Suppliers. 
    Arguments:
    filters -- dict of filters to be passed to the sql query
    """
    return frappe.db.sql("""
        SELECT

            acc.account_number as 'Konto',
            sup.supplier_name as 'Name (Adressatentyp Unternehmen)',
            case sup.supplier_type
                when 'Individual' then '1'
                when 'Company' then '2'
                else '0'
            end as 'Adressatentyp',
            adr.address_line1 as 'Straße',
            adr.pincode as 'Postleitzahl',
            adr.city as 'Ort',
            UPPER(country.code) as 'Land',
            adr.address_line2 as 'Adresszusatz',
            con.email_id as 'E-Mail',
            coalesce(con.mobile_no, con.phone) as 'Telefon',
            sup.website as 'Internet',
            sup.tax_id as 'Steuernummer',
            case sup.on_hold when 1 then sup.release_date else null end as 'Zahlungssperre bis'

        FROM `tabParty Account` par

        left join `tabAccount` acc
        on acc.name = par.account

        left join `tabSupplier` sup
        on sup.name = par.parent

        left join `tabDynamic Link` dyn_adr
        on dyn_adr.link_name = sup.name
        and dyn_adr.link_doctype = 'Supplier'
        and dyn_adr.parenttype = 'Address'

        left join `tabAddress` adr
        on adr.name = dyn_adr.parent
        and adr.is_primary_address = '1'

        left join `tabCountry` country
        on country.name = adr.country

        left join `tabDynamic Link` dyn_con
        on dyn_con.link_name = sup.name
        and dyn_con.link_doctype = 'Supplier'
        and dyn_con.parenttype = 'Contact'

        left join `tabContact` con
        on con.name = dyn_con.parent
        and con.is_primary_contact = '1'

        WHERE par.company = %(company)s
        AND par.parenttype = 'Supplier'""", filters, as_dict=1)


def get_account_names(filters):
    return frappe.get_list("Account",
        fields=["account_number as Konto", "name as Kontenbeschriftung"],
        filters={"company": filters.get("company"), "is_group": "0"})


def get_datev_csv(data, filters, csv_class):
    """
    Fill in missing columns and return a CSV in DATEV Format.

    For automatic processing, DATEV requires the first line of the CSV file
    to hold meta data such as the length of account numbers or the category
    of the data.
    Arguments:
    data -- array of dictionaries
    filters -- dict
    csv_class -- defines DATA_CATEGORY, FORMAT_NAME and COLUMNS
    """
    empty_df = pd.DataFrame(columns=csv_class.COLUMNS)
    data_df = pd.DataFrame.from_records(data)

    result = empty_df.append(data_df, sort=True)

    if csv_class.DATA_CATEGORY == DataCategory.TRANSACTIONS:
        result['Belegdatum'] = pd.to_datetime(result['Belegdatum'])

    if csv_class.DATA_CATEGORY == DataCategory.ACCOUNT_NAMES:
        result['Sprach-ID'] = 'de-DE'

    data = result.to_csv(
        # Reason for str(';'): https://github.com/pandas-dev/pandas/issues/6035
        sep=str(';'),
        # European decimal separator
        decimal=',',
        # Windows "ANSI" encoding
        encoding='latin_1',
        # format date as DDMM
        date_format='%d%m',
        # Windows line terminator
        line_terminator='\r\n',
        # Do not number rows
        index=False,
        # Use all columns defined above
        columns=csv_class.COLUMNS,
        # Quote most fields, even currency values with "," separator
        quoting=QUOTE_NONNUMERIC
    )

    if not six.PY2:
        data = data.encode('latin_1')

    header = get_header(filters, csv_class)
    header = ';'.join(header).encode('latin_1')

    # 1st Row: Header with meta data
    # 2nd Row: Data heading (Überschrift der Nutzdaten), included in `data` here.
    # 3rd - nth Row: Data (Nutzdaten)
    return header + b'\r\n' + data


def get_header(filters, csv_class):
    coa = frappe.get_value("Company", filters.get("company"), "chart_of_accounts")
    description = filters.get("voucher_type", csv_class.FORMAT_NAME)
    coa_used = "04" if "SKR04" in coa else ("03" if "SKR03" in coa else "")

    header = [
        # DATEV format
        #   "DTVF" = created by DATEV software,
        #   "EXTF" = created by other software
        '"EXTF"',
        # version of the DATEV format
        #   141 = 1.41,
        #   510 = 5.10,
        #   720 = 7.20
        '700',
        csv_class.DATA_CATEGORY,
        '"%s"' % csv_class.FORMAT_NAME,
        # Format version (regarding format name)
        csv_class.FORMAT_VERSION,
        # Generated on
        datetime.datetime.now().strftime("%Y%m%d%H%M%S") + '000',
        # Imported on -- stays empty
        '',
        # Origin. Any two symbols, will be replaced by "SV" on import.
'"EN"', # I = Exported by '"%s"' % frappe.session.user, # J = Imported by -- stays empty '', # K = Tax consultant number (Beraternummer) frappe.get_value("DATEV Settings", filters.get("company"), "consultant_number"), # L = Tax client number (Mandantennummer) frappe.get_value("DATEV Settings", filters.get("company"), "client_number"), # M = Start of the fiscal year (Wirtschaftsjahresbeginn) frappe.utils.formatdate(frappe.defaults.get_user_default("year_start_date"), "yyyyMMdd"), # N = Length of account numbers (Sachkontenlänge) '4', # O = Transaction batch start date (YYYYMMDD) frappe.utils.formatdate(filters.get('from_date'), "yyyyMMdd"), # P = Transaction batch end date (YYYYMMDD) frappe.utils.formatdate(filters.get('to_date'), "yyyyMMdd"), # Q = Description (for example, "Sales Invoice") Max. 30 chars '"{}"'.format(_(description)), # R = Diktatkürzel '', # S = Buchungstyp # 1 = Transaction batch (Finanzbuchführung), # 2 = Annual financial statement (Jahresabschluss) '1' if csv_class.DATA_CATEGORY == DataCategory.TRANSACTIONS else '', # T = Rechnungslegungszweck # 0 oder leer = vom Rechnungslegungszweck unabhängig # 50 = Handelsrecht # 30 = Steuerrecht # 64 = IFRS # 40 = Kalkulatorik # 11 = Reserviert # 12 = Reserviert '0', # U = Festschreibung # TODO: Filter by Accounting Period. In export for closed Accounting Period, this will be "1" '0', # V = Default currency, for example, "EUR" '"%s"' % frappe.get_value("Company", filters.get("company"), "default_currency"), # reserviert '', # Derivatskennzeichen '', # reserviert '', # reserviert '', # SKR '"%s"' % coa_used, # Branchen-Lösungs-ID '', # reserviert '', # reserviert '', # Anwendungsinformation (Verarbeitungskennzeichen der abgebenden Anwendung) '' ] return header @frappe.whitelist() def download_datev_csv(filters=None): """ Provide accounting entries for download in DATEV format. Validate the filters, get the data, produce the CSV file and provide it for download. 
Can be called like this: GET /api/method/erpnext.regional.report.datev.datev.download_datev_csv Arguments / Params: filters -- dict of filters to be passed to the sql query """ if isinstance(filters, string_types): filters = json.loads(filters) validate(filters) # This is where my zip will be written zip_buffer = BytesIO() # This is my zip file datev_zip = zipfile.ZipFile(zip_buffer, mode='w', compression=zipfile.ZIP_DEFLATED) transactions = get_transactions(filters) transactions_csv = get_datev_csv(transactions, filters, csv_class=Transactions) datev_zip.writestr('EXTF_Buchungsstapel.csv', transactions_csv) account_names = get_account_names(filters) account_names_csv = get_datev_csv(account_names, filters, csv_class=AccountNames) datev_zip.writestr('EXTF_Kontenbeschriftungen.csv', account_names_csv) customers = get_customers(filters) customers_csv = get_datev_csv(customers, filters, csv_class=DebtorsCreditors) datev_zip.writestr('EXTF_Kunden.csv', customers_csv) suppliers = get_suppliers(filters) suppliers_csv = get_datev_csv(suppliers, filters, csv_class=DebtorsCreditors) datev_zip.writestr('EXTF_Lieferanten.csv', suppliers_csv) # You must call close() before exiting your program or essential records will not be written. datev_zip.close() frappe.response['filecontent'] = zip_buffer.getvalue() frappe.response['filename'] = 'DATEV.zip' frappe.response['type'] = 'binary'
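The CSV conventions the report above relies on (semicolon separator, decimal comma, latin-1 bytes, CRLF line endings, a meta-data line prepended before the data rows) can be sketched without frappe or pandas. The column names and the shortened two-field meta line below are illustrative only; the real DATEV header has dozens of fields.

```python
import csv
import io

# Two invented transaction rows with German column names; amounts are kept
# as pre-formatted strings with a decimal comma, as DATEV expects.
rows = [
    ['Umsatz (ohne Soll/Haben-Kz)', 'Soll/Haben-Kennzeichen'],
    ['119,00', 'S'],
    ['23,50', 'H'],
]

buf = io.StringIO()
writer = csv.writer(buf, delimiter=';', quoting=csv.QUOTE_NONNUMERIC,
                    lineterminator='\r\n')
writer.writerows(rows)

# A (heavily shortened, illustrative) meta-data line goes in front of
# the data rows, exactly as get_header() + get_datev_csv() do above.
header = ';'.join(['"EXTF"', '700'])
csv_bytes = (header + '\r\n' + buf.getvalue()).encode('latin_1')
```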
Our beautiful Bell Tents provide the ultimate glamping experience at your wedding for you and your guests. We can create a glamping village within the grounds of your reception venue, or anywhere else of your choosing, providing a stress-free vibe for all on your perfect day! Our beautiful cotton canvas bell tents create the perfect getaway for you and your guests on your wedding day. With a range of hire options covering 2- to 6-person tents, we have a bell tent for all your guests. You can hire our bell tents in the furnishing style of your choice, from a basic tent where you bring your own kit through to the full luxury bell tent experience, complete with mattresses, bed linen and, of course, bunting. 2 Person Luxury: single airbeds or a real double mattress complete with full linen. 4 Person Luxury: single airbeds and/or real double mattresses with full linen as required. Wedding guests can book and pay for their package directly with us if preferred.
import sys
import random
from noise import snoise3
from PIL import Image
from biome import Biome

"""
Class that generates tile data for the world map with a given seed.

The generation algorithm uses Perlin noise to generate maps for both
altitude and moisture. (This requires the 'noise' library.)
Based on the generated noise, biomes are determined.

The algorithm is (and must be) deterministic for any discrete seed, as the
whole game world will not be generated in a single call of the GenerateMap()
function. Rather, chunks may be requested from the generator using the map's
seed, and the location / size of the requested chunks.
"""
class MapGenerator:

    chunkSizeX = 32
    chunkSizeY = 32

    @staticmethod
    def GenerateMap(seed, startx, starty, sizex, sizey):
        # Constants needed for the perlin noise algorithm
        octaves = 8
        freq = 64.0 * octaves
        """
        NOTE: Changing the value of freq essentially changes the scale /
        level of detail produced in the noise maps.
        """

        # Generate 2d lists for height and moisture data
        heightMap = [[None]*sizex for g in range(sizey)]
        moistureMap = [[None]*sizex for h in range(sizey)]

        for outputy, y in enumerate(range(starty, sizey + starty)):
            for outputx, x in enumerate(range(startx, sizex + startx)):
                # Generate Perlin noise for the given x,y using the map seed as the z value
                # Map the noise to between 0 and 255
                heightMap[outputx][outputy] = int(snoise3(x / freq, y / freq, seed, octaves) * 127.0 + 128.0)
                # Change the z value so that moisture is determined by a different (but predictable) seed
                moistureMap[outputx][outputy] = int(snoise3(x / freq, y / freq, seed*10, octaves) * 127.0 + 128.0)

        biomeMap = MapGenerator.AssignBiomes(heightMap, moistureMap, sizex, sizey)
        return biomeMap

    @staticmethod
    def AssignBiomes(altitude, moisture, sizex, sizey):
        biomeMap = [[None]*sizex for g in range(sizey)]
        for y in range(0, sizey):
            for x in range(0, sizex):
                # ocean
                if(altitude[x][y] <= Biome.ocean_height):
                    biomeMap[y][x] = Biome.ocean
                # shore
                elif(altitude[x][y] <= Biome.shore_height):
                    biomeMap[y][x] = Biome.shore
                # Mountain Peak
                elif(altitude[x][y] >= Biome.peak_height):
                    biomeMap[y][x] = Biome.peak
                # Mountain
                elif(altitude[x][y] >= Biome.mountain_height):
                    biomeMap[y][x] = Biome.mountain
                # tundra
                elif(moisture[x][y] >= Biome.tundra_moisture):
                    biomeMap[y][x] = Biome.tundra
                # tropical
                elif(moisture[x][y] >= Biome.tropical_moisture):
                    biomeMap[y][x] = Biome.tropical
                # Forest
                elif(moisture[x][y] >= Biome.forest_moisture):
                    biomeMap[y][x] = Biome.forest
                # Grassland
                elif(moisture[x][y] >= Biome.grassland_moisture):
                    biomeMap[y][x] = Biome.grassland
                # desert
                elif(moisture[x][y] >= Biome.desert_moisture):
                    biomeMap[y][x] = Biome.desert
        return biomeMap

    @staticmethod
    def SmoothMoistureMap(moisture):
        """ TODO """
        pass

    @staticmethod
    def GenerateChunk(seed, chunkx, chunky):
        worldx = chunkx * MapGenerator.chunkSizeX
        worldy = chunky * MapGenerator.chunkSizeY
        return MapGenerator.GenerateMap(seed, worldx, worldy, MapGenerator.chunkSizeX, MapGenerator.chunkSizeY)

    @staticmethod
    def DrawMap(biomeMap):
        # initialize a new image
        img = Image.new("RGB", (len(biomeMap), len(biomeMap[0])), "blue")
        pixels = img.load()

        # Iterate through all pixels
        for y in range(len(biomeMap)):
            for x in range(len(biomeMap[0])):
                # Mountain Peak
                if(biomeMap[x][y] == Biome.peak):
                    pixels[x,y] = Biome.peak_color
                # Mountain
                elif(biomeMap[x][y] == Biome.mountain):
                    pixels[x,y] = Biome.mountain_color
                # Forest
                elif(biomeMap[x][y] == Biome.forest):
                    pixels[x,y] = Biome.forest_color
                # Grassland
                elif(biomeMap[x][y] == Biome.grassland):
                    pixels[x,y] = Biome.grassland_color
                # desert
                elif(biomeMap[x][y] == Biome.desert):
                    pixels[x,y] = Biome.desert_color
                # ocean
                elif(biomeMap[x][y] == Biome.ocean):
                    pixels[x,y] = Biome.ocean_color
                # shore
                elif(biomeMap[x][y] == Biome.shore):
                    pixels[x,y] = Biome.shore_color
                # tropical
                elif(biomeMap[x][y] == Biome.tropical):
                    pixels[x,y] = Biome.tropical_color
                # tundra
                elif(biomeMap[x][y] == Biome.tundra):
                    pixels[x,y] = Biome.tundra_color
                else:
                    pixels[x,y] = 0x000000  # Biome not assigned

                if x % 32 == 0 or y % 32 == 0:
                    pixels[x,y] = 0xeeeeee
                if x == 0 and y == 0:
                    pixels[x,y] = 0xff0000

        img.show()

#MapGenerator.DrawMap(MapGenerator.GenerateMap(random.random(),0,0,512,512))
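The determinism property the module docstring insists on — chunks requested at different times must tile into the same world — can be demonstrated with a hash-based stand-in for `snoise3` (the `det_noise` helper below is an assumption for illustration, not the real noise function):

```python
import hashlib

def det_noise(seed, x, y):
    # Deterministic stand-in for a noise function: hash the (seed, x, y)
    # triple into a 0..255 value. Same inputs always give the same output.
    h = hashlib.sha256(('%s:%d:%d' % (seed, x, y)).encode()).digest()
    return h[0]

def generate(seed, startx, starty, sizex, sizey):
    # World coordinates (not chunk-local ones) feed the noise function,
    # so any requested window is just a view of one fixed infinite map.
    return [[det_noise(seed, x, y)
             for x in range(startx, startx + sizex)]
            for y in range(starty, starty + sizey)]

CHUNK = 4

def generate_chunk(seed, cx, cy):
    return generate(seed, cx * CHUNK, cy * CHUNK, CHUNK, CHUNK)

# The same chunk is identical no matter when it is requested...
assert generate_chunk('s1', 2, 3) == generate_chunk('s1', 2, 3)

# ...and matches the corresponding window of a larger one-shot map.
big = generate('s1', 0, 0, 16, 16)
window = [row[8:12] for row in big[12:16]]
assert generate_chunk('s1', 2, 3) == window
```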
A safety, DDA-compliant, return-to-door round bar lever on a concealed-fix sprung round rose. Part of the Steelworx range in Grade 316 stainless steel. Suits commercial and modern residential projects, providing the quality and durability to cope in these environments, which makes it ideal for public/office doors where a more ergonomic operation is a necessary requirement. BS EN 1906 rated, it comes with a 25-year mechanical guarantee and is fire door rated.
# -*- coding: utf-8 -*- from __future__ import unicode_literals from django.db import models, migrations import datetime from django.utils.timezone import utc class Migration(migrations.Migration): dependencies = [ ('blog', '0002_category'), ] operations = [ migrations.AlterModelOptions( name='category', options={'verbose_name_plural': 'Categories', 'ordering': ['title'], 'verbose_name': 'Category'}, ), migrations.AddField( model_name='category', name='slug', field=models.SlugField(verbose_name='Slug', default=datetime.datetime(2015, 2, 12, 7, 46, 25, 982321, tzinfo=utc), help_text='Uri identifier.', max_length=255, unique=True), preserve_default=False, ), migrations.AddField( model_name='post', name='categories', field=models.ManyToManyField(null=True, verbose_name='Categories', help_text=' ', to='blog.Category', blank=True), preserve_default=True, ), migrations.AlterField( model_name='post', name='date_publish', field=models.DateTimeField(auto_now=True, help_text=' ', verbose_name='Publish Date'), preserve_default=True, ), ]
Manufacturer of home furnishing accessories. Sabi (Sabi Space Business) manufactures bathroom and kitchen accessories.
#!/usr/bin/env python """The setup and build script for the python-telegram-bot library.""" import os import subprocess import sys from setuptools import setup, find_packages UPSTREAM_URLLIB3_FLAG = '--with-upstream-urllib3' def get_requirements(raw=False): """Build the requirements list for this project""" requirements_list = [] with open('requirements.txt') as reqs: for install in reqs: if install.startswith('# only telegram.ext:'): if raw: break continue requirements_list.append(install.strip()) return requirements_list def get_packages_requirements(raw=False): """Build the package & requirements list for this project""" reqs = get_requirements(raw=raw) exclude = ['tests*'] if raw: exclude.append('telegram.ext*') packs = find_packages(exclude=exclude) # Allow for a package install to not use the vendored urllib3 if UPSTREAM_URLLIB3_FLAG in sys.argv: sys.argv.remove(UPSTREAM_URLLIB3_FLAG) reqs.append('urllib3 >= 1.19.1') packs = [x for x in packs if not x.startswith('telegram.vendor.ptb_urllib3')] return packs, reqs def get_setup_kwargs(raw=False): """Builds a dictionary of kwargs for the setup function""" packages, requirements = get_packages_requirements(raw=raw) raw_ext = "-raw" if raw else "" readme = f'README{"_RAW" if raw else ""}.rst' fn = os.path.join('telegram', 'version.py') with open(fn) as fh: for line in fh.readlines(): if line.startswith('__version__'): exec(line) with open(readme, 'r', encoding='utf-8') as fd: kwargs = dict( script_name=f'setup{raw_ext}.py', name=f'python-telegram-bot{raw_ext}', version=locals()['__version__'], author='Leandro Toledo', author_email='devs@python-telegram-bot.org', license='LGPLv3', url='https://python-telegram-bot.org/', # Keywords supported by PyPI can be found at https://git.io/JtLIZ project_urls={ "Documentation": "https://python-telegram-bot.readthedocs.io", "Bug Tracker": "https://github.com/python-telegram-bot/python-telegram-bot/issues", "Source Code": 
"https://github.com/python-telegram-bot/python-telegram-bot", "News": "https://t.me/pythontelegrambotchannel", "Changelog": "https://python-telegram-bot.readthedocs.io/en/stable/changelog.html", }, download_url=f'https://pypi.org/project/python-telegram-bot{raw_ext}/', keywords='python telegram bot api wrapper', description="We have made you a wrapper you can't refuse", long_description=fd.read(), long_description_content_type='text/x-rst', packages=packages, install_requires=requirements, extras_require={ 'json': 'ujson', 'socks': 'PySocks', # 3.4-3.4.3 contained some cyclical import bugs 'passport': 'cryptography!=3.4,!=3.4.1,!=3.4.2,!=3.4.3', }, include_package_data=True, classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)', 'Operating System :: OS Independent', 'Topic :: Software Development :: Libraries :: Python Modules', 'Topic :: Communications :: Chat', 'Topic :: Internet', 'Programming Language :: Python', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', ], python_requires='>=3.6' ) return kwargs def main(): # If we're building, build ptb-raw as well if set(sys.argv[1:]) in [{'bdist_wheel'}, {'sdist'}, {'sdist', 'bdist_wheel'}]: args = ['python', 'setup-raw.py'] args.extend(sys.argv[1:]) subprocess.run(args, check=True, capture_output=True) setup(**get_setup_kwargs(raw=False)) if __name__ == '__main__': main()
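The `UPSTREAM_URLLIB3_FLAG` handling above follows a common pattern: consume a custom switch from `sys.argv` before setuptools gets to parse it, and adjust packages/requirements accordingly. A minimal sketch of that pattern, with a hypothetical requirements list:

```python
CUSTOM_FLAG = '--with-upstream-urllib3'

def consume_flag(argv, reqs):
    # Remove the custom switch before setuptools sees it (it would
    # otherwise fail on an unknown option) and record its effect.
    if CUSTOM_FLAG in argv:
        argv.remove(CUSTOM_FLAG)
        reqs = reqs + ['urllib3 >= 1.19.1']
    return argv, reqs

# Illustrative invocation; in setup.py the first argument is sys.argv.
argv, reqs = consume_flag(['setup.py', 'install', CUSTOM_FLAG], ['certifi'])
assert argv == ['setup.py', 'install']
assert reqs == ['certifi', 'urllib3 >= 1.19.1']
```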
Protects carpeted skirting from dirt, dust, paint and spills during renovations, DIYs and construction work. Our self-adhesive transparent carpet film adheres well to carpets, eliminating the spillage and leakage problems that come with drop sheets. This product is "super sticky", with the very high adhesive coating needed for carpet adhesion, so it is recommended for use on carpets only! Carpet film may be applied for up to 2 months (guide only). Please read the instructions. Our MADE IN AUSTRALIA QUICK CHANGE Carpet Film Dispenser is now adaptable to fit both 1m x 100m rolls and our new 700mm x 100m rolls.
import re
import time
import traceback

EVENT_TEMPERATURE  = 'temperature'
EVENT_STARTAPP     = 'start_app'
EVENT_STOPAPP      = 'stop_app'
EVENT_CONNECTED    = 'connected'
EVENT_DISCONNECTED = 'disconnected'
EVENT_ALL = [
    EVENT_TEMPERATURE,
    EVENT_STARTAPP,
    EVENT_STOPAPP,
    EVENT_CONNECTED,
    EVENT_DISCONNECTED,
]

class Printer(object):

    _instance = None
    _init     = False

    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            cls._instance = super(Printer, cls).__new__(cls, *args, **kwargs)
        return cls._instance

    def __init__(self):
        # don't re-initialize an instance (needed because singleton)
        if self._init:
            return
        self._init = True
        self.f = open('statistics.txt', 'w')

    def printline(self, line):
        print line
        self.f.write(line)

    def __del__(self):
        self.f.close()

#============================ statistics ======================================

class Stats(object):

    def feedstat(self, stat):
        raise NotImplementedError()

    def formatheader(self):
        output  = []
        output += ['']
        output += ['']
        output += ['='*79]
        output += [self.description]
        output += ['='*79]
        output += ['']
        output  = '\n'.join(output)
        return output

class NumDataPoints(Stats):

    description = 'Number of Data Points Received'

    def __init__(self):
        self.numDataPoints = {}

    def feedstat(self, statline):
        if statline['eventType'] != EVENT_TEMPERATURE:
            return
        # count number of packets
        if statline['mac'] not in self.numDataPoints:
            self.numDataPoints[statline['mac']] = 0
        self.numDataPoints[statline['mac']] += 1

    def formatstat(self):
        # identify missing
        maxNumDataPoints = max([v for (k, v) in self.numDataPoints.items()])

        # format output
        output  = []
        output += [self.formatheader()]
        output += ['({0} motes)'.format(len(self.numDataPoints))]
        for (mac, num) in self.numDataPoints.items():
            if num < maxNumDataPoints:
                remark = ' ({0} missing)'.format(maxNumDataPoints - num)
            else:
                remark = ''
            output += ['   - {0} : {1}{2}'.format(mac, num, remark)]
        output = '\n'.join(output)
        return output

class MaxTimeGap(Stats):

    description = 'Maximum Time Gap Between Consecutive Data Points'

    def __init__(self):
        self.maxgap = {}

    def feedstat(self, statline):
        if statline['eventType'] != EVENT_TEMPERATURE:
            return
        mac       = statline['mac']
        timestamp = statline['timestamp']
        # count number of packets
        if mac not in self.maxgap:
            self.maxgap[mac] = {
                'lasttimestamp': None,
                'maxgap':        None,
            }
        if self.maxgap[mac]['lasttimestamp'] != None:
            thisgap = timestamp - self.maxgap[mac]['lasttimestamp']
            if self.maxgap[mac]['maxgap'] == None or thisgap > self.maxgap[mac]['maxgap']:
                self.maxgap[mac]['maxgap'] = thisgap
        self.maxgap[mac]['lasttimestamp'] = timestamp

    def formatstat(self):
        output  = []
        output += [self.formatheader()]
        output += ['({0} motes)'.format(len(self.maxgap))]
        for (mac, v) in self.maxgap.items():
            if v['maxgap']:
                output += ['   - {0} : {1}s'.format(mac, int(v['maxgap']))]
            else:
                output += ['   - {0} : {1}'.format(mac, v['maxgap'])]
        output = '\n'.join(output)
        return output

class BurstSpread(Stats):

    description = 'Maximum time spread among measurements (for each burst)'

    MAX_BURST_SPREAD = 5.0

    def __init__(self):
        self.bursts = {}

    def calculateBurstId(self, timestamp):
        return int(self.MAX_BURST_SPREAD * round(float(timestamp)/self.MAX_BURST_SPREAD))

    def feedstat(self, statline):
        if statline['eventType'] != EVENT_TEMPERATURE:
            return
        timestamp = statline['timestamp']
        burstId   = self.calculateBurstId(timestamp)
        # count number of packets
        if burstId not in self.bursts:
            self.bursts[burstId] = {
                'numpoints':    0,
                'mintimestamp': None,
                'maxtimestamp': None,
            }
        self.bursts[burstId]['numpoints'] += 1
        if self.bursts[burstId]['mintimestamp'] == None or timestamp < self.bursts[burstId]['mintimestamp']:
            self.bursts[burstId]['mintimestamp'] = timestamp
        if self.bursts[burstId]['maxtimestamp'] == None or timestamp > self.bursts[burstId]['maxtimestamp']:
            self.bursts[burstId]['maxtimestamp'] = timestamp

    def formatstat(self):
        # calculate spread
        for (k, v) in self.bursts.items():
            if v['mintimestamp'] != None and v['maxtimestamp'] != None:
                v['spread'] = v['maxtimestamp'] - v['mintimestamp']
            else:
                v['spread'] = None

        output = []
        output += [self.formatheader()]
        output += [
            '({0} bursts separated by {1:.03f}s or more)'.format(
                len(self.bursts),
                self.MAX_BURST_SPREAD
            )
        ]
        allts = sorted(self.bursts.keys())
        for ts in allts:
            b        = self.bursts[ts]
            tsString = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(ts))
            if b['spread'] != None:
                spreadString = '{0:>5}ms'.format(int(1000*b['spread']))
            else:
                spreadString = 'not enough data'
            output += ['   - around {0} : {1} ({2} points)'.format(tsString, spreadString, b['numpoints'])]
        output = '\n'.join(output)
        return output

#============================ main ============================================

def main():
    try:
        print 'logAnalysis - Dust Networks (c) 2014'

        # initialize stats
        stats = [s() for s in Stats.__subclasses__()]

        # parse file and fill in statistics
        with open('temperaturelog.csv', 'r') as f:
            for line in f:

                m = re.search('([0-9\- :]*)\.([0-9]{3}),([a-zA-Z_ ]*),([0-9a-f\-]*),([0-9.]*)', line)
                if not m:
                    print 'WARNING: following line not parsed: {0}'.format(line)
                    assert(0)
                    continue

                # rawline
                rawline = {}
                rawline['timestamp']   = m.group(1)
                rawline['timestampMs'] = m.group(2)
                rawline['eventType']   = m.group(3)
                rawline['mac']         = m.group(4)
                rawline['temperature'] = m.group(5)

                # print
                output  = []
                output += ['']
                output += ['====================']
                output += ['']
                output += ['{0:>20} : "{1}"'.format("line", line.strip())]
                output += ['']
                for (k, v) in rawline.items():
                    output += ['{0:>20} : {1}'.format(k, v)]
                output = '\n'.join(output)
                #Printer().printline(output)

                # statline
                statline = {}
                statline['timestamp']  = time.mktime(time.strptime(rawline['timestamp'], "%Y-%m-%d %H:%M:%S"))
                statline['timestamp'] += int(rawline['timestampMs'])/1000.0
                statline['eventType']  = rawline['eventType']
                assert rawline['eventType'] in EVENT_ALL
                if rawline['mac']:
                    statline['mac'] = rawline['mac']
                else:
                    statline['mac'] = None
                if rawline['temperature']:
                    statline['temperature'] = float(rawline['temperature'])
                else:
                    statline['temperature'] = None

                # print
                output  = []
                output += ['']
                output += ['{0:>20} : {1:.03f}'.format("timestamp", statline['timestamp'])]
                output += ['{0:>20} : {1}'.format("eventType", statline['eventType'])]
                output += ['{0:>20} : {1}'.format("mac", rawline['mac'])]
                if statline['temperature']:
                    output += ['{0:>20} : {1:.02f}'.format("temperature", statline['temperature'])]
                else:
                    output += ['{0:>20} : {1}'.format("temperature", statline['temperature'])]
                output = '\n'.join(output)
                #Printer().printline(output)

                # feed stat
                for stat in stats:
                    stat.feedstat(statline)

        # print statistics
        for stat in stats:
            Printer().printline(stat.formatstat())

    except Exception as err:
        print "FATAL: ({0}) {1}".format(type(err), err)
        traceback.print_exc()
    else:
        print "\n\nScript ended normally"

    raw_input("\nPress enter to close.")

if __name__ == "__main__":
    main()
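The parsing done in `main()` above can be exercised in isolation. The sample log lines below are invented, and the pattern is a slightly tightened version of the script's regex (escaped dot, conventional `[a-zA-Z_ ]` character class); the snippet computes the per-mote maximum gap the same way `MaxTimeGap` does.

```python
import re
import time

# Invented log lines in the shape the script parses:
# "YYYY-MM-DD HH:MM:SS.mmm,eventType,mac,temperature"
lines = [
    '2014-01-02 10:00:00.000,temperature,00-17-0d-00,21.5',
    '2014-01-02 10:00:10.500,temperature,00-17-0d-00,21.6',
    '2014-01-02 10:00:40.500,temperature,00-17-0d-00,21.4',
]

pattern = re.compile(r'([0-9\- :]*)\.([0-9]{3}),([a-zA-Z_ ]*),([0-9a-f\-]*),([0-9.]*)')

maxgap = {}   # mac -> largest gap (seconds) between consecutive readings
last = {}     # mac -> timestamp of previous reading
for line in lines:
    m = pattern.search(line)
    ts = time.mktime(time.strptime(m.group(1), '%Y-%m-%d %H:%M:%S'))
    ts += int(m.group(2)) / 1000.0   # add the millisecond part
    mac = m.group(4)
    if mac in last:
        maxgap[mac] = max(maxgap.get(mac, 0.0), ts - last[mac])
    last[mac] = ts

# gaps are 10.5s and 30.0s, so the maximum is 30.0s
assert round(maxgap['00-17-0d-00'], 1) == 30.0
```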
Your Funchal phone numbers don't require any equipment to buy, setup fees to pay, or penalties for cancellation. You can also incorporate smart features such as time-of-day forwarding, sequential or simultaneous ringing, and real-time call records for an even more effective telecommunications solution. TollFreeForwarding.com can set up your Funchal phone numbers with no setup fees or contracts. Our website allows you to pick your own number, have it activated within three minutes, and manage your call forwarding and other smart features online. Navigate our Online Control Center to see how easy it is to manage your account and call forwarding settings. TollFreeForwarding.com offers the best service and prices available on Funchal phone numbers. Simply pick your own number in over 70 countries and you'll be up and running in just three minutes. Get your own dedicated phone number in any country and answer those calls anywhere. Simply choose your Funchal phone numbers and tell us where you would like them forwarded. Affordable plans on your Funchal phone numbers.
import os from setuptools import setup # Manage version in __init__.py def get_version(version_tuple): """version from tuple accounting for possible a,b,rc tags.""" # in case an a, b, or rc tag is added if not isinstance(version_tuple[-1], int): return '.'.join( map(str, version_tuple[:-1]) ) + version_tuple[-1] return '.'.join(map(str, version_tuple)) # path to __init__ for package INIT = os.path.join( os.path.dirname(__file__), 'domopy', '__init__.py' ) VERSION_LINE = list( filter(lambda line: line.startswith('VERSION'), open(INIT)) )[0] # lotta effort but package might not be importable before # install is finished so can't just import VERSION VERSION = get_version(eval(VERSION_LINE.split('=')[-1])) setup( name='domopy', version=VERSION, author='Ryan Wilson', license='MIT', url='https://github.com/RyanWilsonDev/DomoPy', description="methods for interacting with Domo APIs", long_description=""" Set of classes and methods for interacting with the Domo Data APIs and Domo User APIs. Handles Authentication, pulling data from domo, creating new domo datasets, replace/appending existing datasets, etc. """, packages=[ 'domopy' ], package_data={'': ['LICENSE'], 'LICENSES': ['NOTICE', 'PANDAS_LICENSE', 'REQUESTS_LICENSE']}, include_package_data=True, install_requires=[ 'pandas', 'requests', 'requests_oauthlib' ], classifiers=( 'Intended Audience :: Developers', 'Natural Language :: English', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: Implementation :: CPython' ), tests_require=[] )
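The `get_version` helper above turns the `VERSION` tuple into a version string, appending a trailing `a`/`b`/`rc` tag without a separating dot. A quick sketch of the two cases:

```python
def get_version(version_tuple):
    """Version string from a tuple, accounting for possible a/b/rc tags."""
    # A trailing non-int element is a pre-release tag such as 'rc1'
    if not isinstance(version_tuple[-1], int):
        return '.'.join(map(str, version_tuple[:-1])) + version_tuple[-1]
    return '.'.join(map(str, version_tuple))

print(get_version((1, 2, 3)))      # -> 1.2.3
print(get_version((1, 2, 'rc1')))  # -> 1.2rc1
```

As the comment in setup.py notes, this roundabout parse of `__init__.py` exists because the package may not be importable before installation finishes.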
The extended system tray (xSysTray) is a plugin for XCenter/eCenter that provides support for the system tray area in the XCenter/eCenter bar. This area, also called the notification or indicator area, is commonly used by long-running applications to display their status using graphical icons and to provide a way to control these applications by clicking on their icons. There are a number of other tools that implement the described functionality. The main reason to create another implementation is that the existing ones do not provide the level of functionality required by the Qt Toolkit, for which this extended implementation was created. Since the SysTray/WPS API is already used by a number of applications to implement system tray icons, the extended System Tray widget implements this API as well to provide backward compatibility.
#!/usr/bin/env python3 # -*- coding: utf-8 -*- """A module that implements the Nesterov momentum optimizer. """ import numpy from theanolm.backend import Parameters from theanolm.training.basicoptimizer import BasicOptimizer class NesterovOptimizer(BasicOptimizer): """Nesterov Momentum Optimization Method Normally Nesterov momentum is implemented by first taking a step towards the previous update direction, calculating gradient at that position, using the gradient to obtain the new update direction, and finally updating the parameters. We use an alternative formulation that requires the gradient to be computed only at the current parameter values, described here: https://github.com/lisa-lab/pylearn2/pull/136#issuecomment-10381617 v_{t} = mu * v_{t-1} - lr * gradient(params_{t-1}) params_{t} = params_{t-1} + mu * v_{t} - lr * gradient(params_{t-1}) """ def __init__(self, optimization_options, network, *args, **kwargs): """Creates a Nesterov momentum optimizer. Nesterov momentum optimizer does not use additional parameters. :type optimization_options: dict :param optimization_options: a dictionary of optimization options :type network: Network :param network: the neural network object """ self._params = Parameters() for path, param in network.get_variables().items(): self._params.add(path + '_velocity', numpy.zeros_like(param.get_value())) # momentum if 'momentum' not in optimization_options: raise ValueError("Momentum is not given in optimization options.") self._momentum = optimization_options['momentum'] super().__init__(optimization_options, network, *args, **kwargs) def _get_param_updates(self, alpha): """Returns Theano expressions for updating the model parameters and any additional parameters required by the optimizer. 
:type alpha: Variable :param alpha: a scale to be applied to the model parameter updates :rtype: iterable over pairs (shared variable, new expression) :returns: expressions for updating the model and optimizer parameters """ result = [] deltas = dict() for path, gradient in zip(self.network.get_variables(), self._gradients): deltas[path] = -gradient self._normalize(deltas) for path, param_old in self.network.get_variables().items(): delta = deltas[path] velocity_old = self._params[path + '_velocity'] velocity = self._momentum * velocity_old + alpha * delta param = param_old + self._momentum * velocity + alpha * delta result.append((velocity_old, velocity)) result.append((param_old, param)) return result
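The alternative Nesterov formulation from the docstring can be exercised outside Theano. A minimal NumPy sketch on the 1-D quadratic f(x) = x²/2, whose gradient is simply x (the momentum and learning-rate values are arbitrary illustrative choices, not taken from the source):

```python
import numpy as np

mu, lr = 0.9, 0.1    # momentum and learning rate (illustrative values)
x = np.array(5.0)    # parameter, started away from the minimum of f(x) = x**2 / 2
v = np.array(0.0)    # velocity

for _ in range(100):
    g = x                    # gradient of x**2 / 2 at the current parameters
    v = mu * v - lr * g      # v_t = mu * v_{t-1} - lr * gradient(params_{t-1})
    x = x + mu * v - lr * g  # params_t = params_{t-1} + mu * v_t - lr * gradient(params_{t-1})

print(float(x))  # close to 0, the minimizer
```

In the optimizer above, `alpha * delta` plays the role of `-lr * g` (deltas are negated gradients, scaled by `alpha`), so the two update lines correspond exactly to the loop body here.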
Molly Purser, PhD, is an Associate Director of Health Economics in Regenerative Medicine and Advanced Therapies at RTI-HS. Dr. Purser is experienced in developing economic models to demonstrate the value of drugs and other health technologies and, in particular, models of cell therapies, gene therapies, and tissue-engineered products. She has substantial experience developing early-phase decision-making tools as well as later-phase launch models for use by health technology appraisal organizations. Prior to joining RTI-HS, she worked as a postdoctoral scholar at North Carolina State University in collaboration with the Wake Forest Institute for Regenerative Medicine. Her research experience includes oncology, rare diseases, medical devices, dermatology, pain, and genetic disorders. She is a member of the International Society for Pharmacoeconomics and Outcomes Research and has presented her research at a number of professional conferences. Dr. Purser’s work has been published in peer-reviewed journals including Advances in Therapy, Cellular Reprogramming, Pharmacoeconomics, Annals of Biomedical Engineering, and Annals of Thoracic Surgery.
__author__ = 'Bohdan Mushkevych' from odm.document import BaseDocument from odm.fields import StringField, ObjectIdField, IntegerField, DictField, DateTimeField TYPE_MANAGED = 'type_managed' # identifies UOW created by an Abstract State Machine child for a Managed Process TYPE_FREERUN = 'type_freerun' # identifies UOW created by the FreerunStateMachine for ad-hoc processing # UOW was successfully processed by the worker STATE_PROCESSED = 'state_processed' # UOW was received by the worker, which started processing it STATE_IN_PROGRESS = 'state_in_progress' # UOW was instantiated and sent to the worker STATE_REQUESTED = 'state_requested' # Job has been manually marked as SKIPPED via MX # and so the associated UOW was cancelled # or the life-support threshold has been crossed for a failing UOW STATE_CANCELED = 'state_canceled' # UOW can get into STATE_INVALID if: # a. the related Job was marked for reprocessing via MX # b. the UOW failed with an exception at the worker level # NOTICE: GarbageCollector changes STATE_INVALID -> STATE_REQUESTED during re-posting STATE_INVALID = 'state_invalid' # UOW was received by a worker, # but no data was found to process STATE_NOOP = 'state_noop' class UnitOfWork(BaseDocument): """ This module represents the persistent model for an atomic unit of work performed by the system. 
UnitOfWork Instances are stored in the <unit_of_work> collection """ db_id = ObjectIdField(name='_id', null=True) process_name = StringField() timeperiod = StringField(null=True) start_timeperiod = StringField(null=True) # [synergy date] lower boundary of the period that needs to be processed end_timeperiod = StringField(null=True) # [synergy date] upper boundary of the period that needs to be processed start_id = ObjectIdField(name='start_obj_id') # [DB _id] lower boundary of the period that needs to be processed end_id = ObjectIdField(name='end_obj_id') # [DB _id] upper boundary of the period that needs to be processed source = StringField(null=True) # defines source of data for the computation sink = StringField(null=True) # defines sink where the aggregated data will be saved arguments = DictField() # task-level arguments that could supplement or override process-level ones state = StringField(choices=[STATE_INVALID, STATE_REQUESTED, STATE_IN_PROGRESS, STATE_PROCESSED, STATE_CANCELED, STATE_NOOP]) created_at = DateTimeField() submitted_at = DateTimeField() started_at = DateTimeField() finished_at = DateTimeField() number_of_aggregated_documents = IntegerField() number_of_processed_documents = IntegerField() number_of_retries = IntegerField(default=0) unit_of_work_type = StringField(choices=[TYPE_MANAGED, TYPE_FREERUN]) @classmethod def key_fields(cls): return (cls.process_name.name, cls.timeperiod.name, cls.start_id.name, cls.end_id.name) @property def is_active(self): return self.state in [STATE_REQUESTED, STATE_IN_PROGRESS, STATE_INVALID] @property def is_finished(self): return self.state in [STATE_PROCESSED, STATE_CANCELED, STATE_NOOP] @property def is_processed(self): return self.state == STATE_PROCESSED @property def is_noop(self): return self.state == STATE_NOOP @property def is_canceled(self): return self.state == STATE_CANCELED @property def is_invalid(self): return self.state == STATE_INVALID @property def is_requested(self): return self.state == 
STATE_REQUESTED @property def is_in_progress(self): return self.state == STATE_IN_PROGRESS PROCESS_NAME = UnitOfWork.process_name.name TIMEPERIOD = UnitOfWork.timeperiod.name START_TIMEPERIOD = UnitOfWork.start_timeperiod.name END_TIMEPERIOD = UnitOfWork.end_timeperiod.name START_ID = UnitOfWork.start_id.name END_ID = UnitOfWork.end_id.name STATE = UnitOfWork.state.name UNIT_OF_WORK_TYPE = UnitOfWork.unit_of_work_type.name
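The `is_active` and `is_finished` properties above split the six UOW states into two disjoint sets. A standalone sketch of that partition, using plain constants mirroring the source (no ODM dependency):

```python
# State constants copied from the model above
STATE_PROCESSED = 'state_processed'
STATE_IN_PROGRESS = 'state_in_progress'
STATE_REQUESTED = 'state_requested'
STATE_CANCELED = 'state_canceled'
STATE_INVALID = 'state_invalid'
STATE_NOOP = 'state_noop'

# Mirrors UnitOfWork.is_active / is_finished: every state falls in exactly one set
ACTIVE_STATES = {STATE_REQUESTED, STATE_IN_PROGRESS, STATE_INVALID}
FINISHED_STATES = {STATE_PROCESSED, STATE_CANCELED, STATE_NOOP}

def is_active(state):
    return state in ACTIVE_STATES

def is_finished(state):
    return state in FINISHED_STATES

print(is_active(STATE_INVALID))      # True: GarbageCollector will re-post it as REQUESTED
print(is_finished(STATE_PROCESSED))  # True: a terminal state
```

Note that STATE_INVALID counts as active, which matches the GarbageCollector comment above: an invalid UOW is not terminal, it gets re-posted.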
’70 Painting Years’ is a joint exhibition by Trevor Chamberlain (ROI, RSMA) and Bert Wright (RSMA, FRSA). Open daily 10am to 5pm; closes at 1pm on Sat 22 October. Admission is free.
#!/usr/bin/env python # -*- encoding: utf-8 -*- ############################################################################## # ############################################################################## from openerp.osv import osv, fields import datetime from datetime import timedelta from datetime import date class res_partner(osv.osv): _inherit = 'res.partner' _columns = { 'installer': fields.boolean('Installateur'), } res_partner() class crm_software_version(osv.osv): _name = 'crm.software.version' _columns = { 'case_section_id': fields.many2one('crm.case.section', 'Type thermostaat', required=True, select=True), 'name': fields.char('Software versie', required=True), } crm_software_version() class crm_case_section(osv.osv): _inherit = 'crm.case.section' _columns = { 'software_ids': fields.one2many('crm.software.version', 'case_section_id', 'Software versies'), } crm_case_section() class crm_installation_request(osv.osv): _name = 'crm.installation.request' _inherit = ['mail.thread'] _columns = { 'partner_id': fields.many2one('res.partner', 'Relatie', required=True, select=True), 'cust_zip': fields.char('Postcode'), 'zip_id': fields.many2one('res.country.city', 'Postcodetabel'), 'street_id': fields.many2one('res.country.city.street', 'Straattabel'), 'street_nbr': fields.char('Huisnummer', size=16), 'phone': fields.char('Telefoon'), 'mobile': fields.char('Mobiel'), 'email': fields.char('E-mail'), 'name': fields.char('ID'), 'user_id': fields.many2one('res.users', 'Gebruiker', required=True, select=True), 'state': fields.selection([ ('new', 'Nieuw'), ('in_progress', 'In Behandeling'), ('problem', 'Probleem'), ('done', 'Ingepland'), ('cancel', 'Geannuleerd'), ], 'Status', readonly=True, track_visibility='onchange', select=True), 'case_section_id': fields.many2one('crm.case.section', 'Type thermostaat', required=True, select=True), 'software_version_id': fields.many2one('crm.software.version', 'Software versie', select=True), 'connected_to': 
fields.text('Aangesloten op'), 'problem': fields.text('Probleem'), 'installer_id': fields.many2one('res.partner', 'Installateur', select=True), 'request_date': fields.date('Aanvraagdatum'), 'installation_date': fields.date('Geplande installatiedatum'), 'first_name': fields.char('Voornaam', size=24), 'middle_name': fields.char('Tussenvoegsel(s)', size=24), 'last_name': fields.char('Achternaam', size=24), 'one': fields.integer('Een'), 'color': fields.integer('Color Index'), 'create_partner': fields.boolean('Relatie aanmaken'), 'address': fields.char('Adres'), } _defaults = { 'request_date': fields.date.context_today, 'user_id': lambda obj, cr, uid, context: uid, # 'name': lambda x, y, z, c: x.pool.get('ir.sequence').get(y, z, 'crm.installation.request'), 'state': 'new', 'one': 1, 'color': 0, } _order = 'id desc' def onchange_street_id(self, cr, uid, ids, cust_zip, zip_id, street_id, street_nbr, context=None): res = {} zip = zip_id street = street_id nbr = street_nbr partner_id = None if cust_zip: sql_stat = "select res_country_city_street.id as street_id, city_id, res_country_city_street.zip, res_country_city_street.name as street_name, res_country_city.name as city_name from res_country_city_street, res_country_city where replace(res_country_city_street.zip, ' ', '') = upper(replace('%s', ' ', '')) and city_id = res_country_city.id" % (cust_zip, ) cr.execute(sql_stat) sql_res = cr.dictfetchone() if sql_res and sql_res['street_id']: res['street_id'] = sql_res['street_id'] res['zip_id'] = sql_res['city_id'] res['cust_zip'] = sql_res['zip'] address = sql_res['street_name'] if street_nbr: address = address + ' ' + street_nbr address = address + ', ' + sql_res['zip'] + ' ' + sql_res['city_name'] res['address'] = address if zip_id and street_nbr: sql_stat = "select id as partner_id from res_partner where zip_id = %d and trim(street_nbr) = trim('%s')" % (zip_id, street_nbr, ) cr.execute(sql_stat) sql_res = cr.dictfetchone() if sql_res: if sql_res['partner_id']: 
res['partner_id'] = sql_res['partner_id'] else: res['partner_id'] = None else: res['partner_id'] = None return {'value':res} def icy_onchange_partner_id(self, cr, uid, ids, partner_id, context=None): res = {} if partner_id: partner_obj = self.pool.get('res.partner') partner = partner_obj.browse(cr, uid, partner_id, context=context) res['phone'] = partner.phone res['email'] = partner.email res['mobile'] = partner.mobile res['zip_id'] = partner.zip_id.id res['street_id'] = partner.street_id.id res['street_nbr'] = partner.street_nbr res['cust_zip'] = partner.zip else: res['phone'] = None res['email'] = None res['mobile'] = None return {'value':res} def button_in_progress(self, cr, uid, ids, context=None): self.write(cr, uid, ids, {'state': 'in_progress'}, context=context) return True def button_problem(self, cr, uid, ids, context=None): self.write(cr, uid, ids, {'state': 'problem'}, context=context) return True def button_done(self, cr, uid, ids, context=None): self.write(cr, uid, ids, {'state': 'done'}, context=context) return True def button_cancel(self, cr, uid, ids, context=None): self.write(cr, uid, ids, {'state': 'cancel'}, context=context) return True def button_reset(self, cr, uid, ids, context=None): self.write(cr, uid, ids, {'state': 'new'}, context=context) return True def onchange_create_partner(self, cr, uid, ids, partner_id, phone, email, mobile, zip_id, street_id, street_nbr, first_name, middle_name, last_name, context=None): res = {} if not partner_id: obj_partner = self.pool.get('res.partner') vals_partner = {} cust_name = '' if first_name: cust_name = first_name if middle_name: if cust_name == '': cust_name = middle_name else: cust_name = cust_name + ' ' + middle_name if last_name: if cust_name == '': cust_name = last_name else: cust_name = cust_name + ' ' + last_name vals_partner['name'] = cust_name vals_partner['lang'] = "nl_NL" vals_partner['company_id'] = 1 vals_partner['use_parent_address'] = False vals_partner['active'] = True sql_stat = 
"select res_country_city_street.name as cust_street, res_country_city.name as cust_city, res_country_city.zip as cust_zip from res_country_city_street, res_country_city where res_country_city_street.id = %d and city_id = res_country_city.id" % (street_id, ) cr.execute(sql_stat) sql_res = cr.dictfetchone() if sql_res and sql_res['cust_street']: cust_street = sql_res['cust_street'] cust_zip = sql_res['cust_zip'] cust_city = sql_res['cust_city'] if street_nbr: vals_partner['street'] = cust_street + ' ' + street_nbr else: vals_partner['street'] = cust_street vals_partner['supplier'] = False vals_partner['city'] = cust_city vals_partner['zip'] = cust_zip vals_partner['employee'] = False vals_partner['installer'] = False vals_partner['type'] = "contact" vals_partner['email'] = email vals_partner['phone'] = phone vals_partner['mobile'] = mobile vals_partner['customer'] = False vals_partner['is_company'] = False vals_partner['notification_email_send'] = "comment" vals_partner['opt_out'] = False vals_partner['display_name'] = cust_name vals_partner['purchase_warn'] = "no-message" vals_partner['sale_warn'] = "no-message" vals_partner['invoice_warn'] = "no-message" vals_partner['picking-warn'] = "no-message" vals_partner['received_via'] = False vals_partner['consumer'] = True vals_partner['subcontractor'] = False vals_partner['zip_id'] = zip_id vals_partner['street_id'] = street_id vals_partner['street_nbr'] = street_nbr vals_partner['first_name'] = first_name vals_partner['middle_name'] = middle_name vals_partner['last_name'] = last_name partner_id = obj_partner.create(cr, uid, vals=vals_partner, context=context) res['partner_id'] = partner_id return {'value':res} def create(self, cr, uid, vals, context=None): vals['name'] = self.pool.get('ir.sequence').get(cr, uid, 'crm.installation.request') if 'cust_zip' in vals and vals['cust_zip']: sql_stat = "select res_country_city_street.id as street_id, city_id, res_country_city_street.zip, res_country_city_street.name as 
street_name, res_country_city.name as city_name from res_country_city_street, res_country_city where replace(res_country_city_street.zip, ' ', '') = upper(replace('%s', ' ', '')) and city_id = res_country_city.id" % (vals['cust_zip'], ) cr.execute(sql_stat) sql_res = cr.dictfetchone() if sql_res and sql_res['street_id']: address = sql_res['street_name'] if 'street_nbr' in vals and vals['street_nbr']: address = address + ' ' + vals['street_nbr'] address = address + ', ' + sql_res['zip'] + ' ' + sql_res['city_name'] vals['address'] = address return super(crm_installation_request, self).create(cr, uid, vals, context=context) crm_installation_request() # vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
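The lookup methods above build SQL by %-formatting user-entered values (zip code, house number) directly into the query string, which is fragile and injection-prone. DB-API cursors accept bind parameters instead. A minimal standalone sketch of the same zip-normalizing lookup with sqlite3 (table and data are invented; sqlite uses `?` placeholders where OpenERP's psycopg2-backed `cr.execute` uses `%s`):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE street (id INTEGER, zip TEXT, name TEXT)')
cur.execute("INSERT INTO street VALUES (1, '1234AB', 'Stationsstraat')")

# Bind the zip code as a parameter instead of %-formatting it into the SQL
cust_zip = '1234 ab'
cur.execute(
    "SELECT id, name FROM street "
    "WHERE replace(zip, ' ', '') = upper(replace(?, ' ', ''))",
    (cust_zip,),
)
row = cur.fetchone()
print(row)  # (1, 'Stationsstraat')
```

The space-stripping and upper-casing normalization mirrors the `replace(..., ' ', '') = upper(replace(...))` comparison used in the module above, while the bound parameter removes the injection risk.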
The Oculus Rift S is a bit of an odd one. Three years after the Rift’s initial launch, Oculus has released a product that feels like a lateral move rather than a leap forward. It’s better in a few ways and worse in a few ways. After spending some time playing with it, and a lot more time thinking about it, it’s not entirely clear to me why Oculus made it. The best reason I can think of is that Facebook sees standalone VR as the area where it should completely ignore profits to reach a mass audience, while PC VR users essentially subsidize the broader market with hardware the company actually makes money on. Oculus seems to want it both ways, though: they could have released a headset that pushed the limits and charged more for it, but instead they launched a product that moved laterally and made sacrifices, and they still charge more for it. We reported that former Oculus CEO Brendan Iribe left his position as head of PC VR at Facebook partly over frustration that this project was settled on, something he saw as representative of the company’s “race to the bottom,” a source told us in October. I will say that the Rift S looks better in real life than it does on paper, but compared to the Oculus Quest and Oculus Go headsets, it still feels like Oculus is launching something below its own standards, and that co-designer Lenovo ultimately built them a headset on the cheap that got the job done while lowering the bill-of-materials cost. So what is there to like about the new headset? The new Insight tracking is great, and while the headset basically feels like a minor upgrade to Lenovo’s Mixed Reality headset, the tracking is undoubtedly better than what is available on Microsoft’s two-camera reference layout. By comparison, the Rift S has five cameras, which capture a much greater tracking volume and cover the edge cases where the controllers are far out of sight.
This is a great system, and while outside-in tracking will probably always be more accurate in certain situations, moving away from the old method was worth it for how much easier it makes setup. On that note, the new passthrough mode, which you can use to set up your boundaries in the Guardian system, seems quite a bit easier to use. As for the displays, Oculus made some sacrifices here, moving from OLED to LCD, from 90Hz to 80Hz, and from dual adjustable panels to a single panel, but I was largely pleased with the clarity of the new, higher-resolution single display. This is an area I’ll need to dig into more for a full review, but no huge issues were apparent. Otherwise, not a ton jumps out as a clear improvement. The new “halo” ring strap system isn’t for me, comfort-wise, but I can imagine others will prefer the fit. It also gives the headset a more rickety feel; build quality has taken an overall downgrade from the original Rift, in my opinion. Lenovo’s headsets have typically been bulkier and harder-feeling than the softer-edged products from Google, Oculus and HTC, and Lenovo’s VR design ethos is on full display here. The removal of built-in headphones seems like the most outright poor decision with this release, and while the integrated speakers are serviceable, you’ll clearly want to add wired headphones if you’re after a serious experience, which most PC VR users definitely are. The new Touch controllers are fine; they’re the same ones that will ship with the Oculus Quest. The design feels pretty familiar, but they’re smaller and feel a bit cheaper, and the tracking ring has moved from around your knuckles to the top of the controller. When it comes to gameplay, with the headset on and you buried in an experience, most of these issues aren’t as apparent as when you consider them individually.
The issue is that while the Quest and Go are miles better than any other products in their individual categories, this latest effort is just very meh. It’s actually odd how much higher-quality the Oculus Quest feels than the Rift S when trying one after the other; it seems like it should be the other way around. I’ll have to spend more time with the headset for a full review, of course, but on first approach the Rift S seems to be a misstep in Facebook’s otherwise stellar VR product line, even if the new Insight tracking system is a push forward in the hardware’s overall usability.
import re from pcs_test.tier1.cib_resource.common import ResourceTest from pcs_test.tier1.cib_resource.stonith_common import need_load_xvm_fence_agent from pcs_test.tools.misc import is_minimum_pacemaker_version PCMK_2_0_3_PLUS = is_minimum_pacemaker_version(2, 0, 3) PCMK_2_0_5_PLUS = is_minimum_pacemaker_version(2, 0, 5) ERRORS_HAVE_OCURRED = ( "Error: Errors have occurred, therefore pcs is unable to continue\n" ) class PlainStonith(ResourceTest): @need_load_xvm_fence_agent def test_simplest(self): self.assert_effect( "stonith create S fence_xvm".split(), """<resources> <primitive class="stonith" id="S" type="fence_xvm"> <operations> <op id="S-monitor-interval-60s" interval="60s" name="monitor" /> </operations> </primitive> </resources>""", ) def test_base_with_agent_that_provides_unfencing(self): self.assert_effect( "stonith create S fence_scsi".split(), """<resources> <primitive class="stonith" id="S" type="fence_scsi"> <meta_attributes id="S-meta_attributes"> <nvpair id="S-meta_attributes-provides" name="provides" value="unfencing" /> </meta_attributes> <operations> <op id="S-monitor-interval-60s" interval="60s" name="monitor" /> </operations> </primitive> </resources>""", ) def test_error_when_not_valid_name(self): self.assert_pcs_fail_regardless_of_force( "stonith create S fence_xvm:invalid".split(), "Error: Invalid stonith agent name 'fence_xvm:invalid'. List of" " agents can be obtained by using command 'pcs stonith list'." " Do not use the 'stonith:' prefix. Agent name cannot contain" " the ':' character.\n", ) def test_error_when_not_valid_agent(self): error = error_re = None if PCMK_2_0_3_PLUS: # pacemaker 2.0.5 adds 'crm_resource:' # The exact message returned form pacemaker differs from version to # version (sometimes from commit to commit), so we don't check for # the whole of it. error_re = re.compile( "^" "Error: Agent 'absent' is not installed or does not provide " "valid metadata:( crm_resource:)? 
Metadata query for " "stonith:absent failed:.+" "use --force to override\n$", re.MULTILINE, ) else: error = ( "Error: Agent 'absent' is not installed or does not provide " "valid metadata: Agent absent not found or does not support " "meta-data: Invalid argument (22), " "Metadata query for stonith:absent failed: Input/output error, " "use --force to override\n" ) self.assert_pcs_fail( "stonith create S absent".split(), stdout_full=error, stdout_regexp=error_re, ) def test_warning_when_not_valid_agent(self): error = error_re = None if PCMK_2_0_3_PLUS: # pacemaker 2.0.5 adds 'crm_resource:' # The exact message returned form pacemaker differs from version to # version (sometimes from commit to commit), so we don't check for # the whole of it. error_re = re.compile( "^" "Warning: Agent 'absent' is not installed or does not provide " "valid metadata:( crm_resource:)? Metadata query for " "stonith:absent failed:.+", re.MULTILINE, ) else: error = ( "Warning: Agent 'absent' is not installed or does not provide " "valid metadata: Agent absent not found or does not support " "meta-data: Invalid argument (22), " "Metadata query for stonith:absent failed: Input/output error\n" ) self.assert_effect( "stonith create S absent --force".split(), """<resources> <primitive class="stonith" id="S" type="absent"> <operations> <op id="S-monitor-interval-60s" interval="60s" name="monitor" /> </operations> </primitive> </resources>""", output=error, output_regexp=error_re, ) @need_load_xvm_fence_agent def test_disabled_puts_target_role_stopped(self): self.assert_effect( "stonith create S fence_xvm --disabled".split(), """<resources> <primitive class="stonith" id="S" type="fence_xvm"> <meta_attributes id="S-meta_attributes"> <nvpair id="S-meta_attributes-target-role" name="target-role" value="Stopped" /> </meta_attributes> <operations> <op id="S-monitor-interval-60s" interval="60s" name="monitor" /> </operations> </primitive> </resources>""", ) def test_debug_and_verbose_allowed(self): 
self.assert_effect( "stonith create S fence_apc ip=i username=u verbose=v debug=d".split(), """<resources> <primitive class="stonith" id="S" type="fence_apc"> <instance_attributes id="S-instance_attributes"> <nvpair id="S-instance_attributes-debug" name="debug" value="d" /> <nvpair id="S-instance_attributes-ip" name="ip" value="i" /> <nvpair id="S-instance_attributes-username" name="username" value="u" /> <nvpair id="S-instance_attributes-verbose" name="verbose" value="v" /> </instance_attributes> <operations> <op id="S-monitor-interval-60s" interval="60s" name="monitor" /> </operations> </primitive> </resources>""", ) @need_load_xvm_fence_agent def test_error_when_action_specified(self): self.assert_pcs_fail( "stonith create S fence_xvm action=reboot".split(), "Error: stonith option 'action' is deprecated and should not be" " used, use 'pcmk_off_action', 'pcmk_reboot_action' instead, " "use --force to override\n" + ERRORS_HAVE_OCURRED, ) @need_load_xvm_fence_agent def test_warn_when_action_specified_forced(self): self.assert_effect( "stonith create S fence_xvm action=reboot --force".split(), """<resources> <primitive class="stonith" id="S" type="fence_xvm"> <instance_attributes id="S-instance_attributes"> <nvpair id="S-instance_attributes-action" name="action" value="reboot" /> </instance_attributes> <operations> <op id="S-monitor-interval-60s" interval="60s" name="monitor" /> </operations> </primitive> </resources>""", "Warning: stonith option 'action' is deprecated and should not be" " used, use 'pcmk_off_action', 'pcmk_reboot_action' instead\n", ) class WithMeta(ResourceTest): @need_load_xvm_fence_agent def test_simplest_with_meta_provides(self): self.assert_effect( "stonith create S fence_xvm meta provides=something".split(), """<resources> <primitive class="stonith" id="S" type="fence_xvm"> <meta_attributes id="S-meta_attributes"> <nvpair id="S-meta_attributes-provides" name="provides" value="something" /> </meta_attributes> <operations> <op 
                id="S-monitor-interval-60s" interval="60s" name="monitor" />
        </operations>
    </primitive>
</resources>""",
        )

    def test_base_with_agent_that_provides_unfencing_with_meta_provides(self):
        self.assert_effect(
            "stonith create S fence_scsi meta provides=something".split(),
            """<resources>
    <primitive class="stonith" id="S" type="fence_scsi">
        <meta_attributes id="S-meta_attributes">
            <nvpair id="S-meta_attributes-provides" name="provides"
                value="unfencing" />
        </meta_attributes>
        <operations>
            <op id="S-monitor-interval-60s" interval="60s" name="monitor" />
        </operations>
    </primitive>
</resources>""",
        )


class InGroup(ResourceTest):
    @need_load_xvm_fence_agent
    def test_command_simply_puts_stonith_into_group(self):
        self.assert_effect(
            "stonith create S fence_xvm --group G".split(),
            """<resources>
    <group id="G">
        <primitive class="stonith" id="S" type="fence_xvm">
            <operations>
                <op id="S-monitor-interval-60s" interval="60s" name="monitor" />
            </operations>
        </primitive>
    </group>
</resources>""",
        )

    @need_load_xvm_fence_agent
    def test_command_simply_puts_stonith_into_group_at_the_end(self):
        self.assert_pcs_success("stonith create S1 fence_xvm --group G".split())
        self.assert_effect(
            "stonith create S2 fence_xvm --group G".split(),
            """<resources>
    <group id="G">
        <primitive class="stonith" id="S1" type="fence_xvm">
            <operations>
                <op id="S1-monitor-interval-60s" interval="60s" name="monitor" />
            </operations>
        </primitive>
        <primitive class="stonith" id="S2" type="fence_xvm">
            <operations>
                <op id="S2-monitor-interval-60s" interval="60s" name="monitor" />
            </operations>
        </primitive>
    </group>
</resources>""",
        )

    @need_load_xvm_fence_agent
    def test_command_simply_puts_stonith_into_group_before_another(self):
        self.assert_pcs_success("stonith create S1 fence_xvm --group G".split())
        self.assert_effect(
            "stonith create S2 fence_xvm --group G --before S1".split(),
            """<resources>
    <group id="G">
        <primitive class="stonith" id="S2" type="fence_xvm">
            <operations>
                <op id="S2-monitor-interval-60s" interval="60s" name="monitor" />
            </operations>
        </primitive>
        <primitive class="stonith" id="S1" type="fence_xvm">
            <operations>
                <op id="S1-monitor-interval-60s" interval="60s" name="monitor" />
            </operations>
        </primitive>
    </group>
</resources>""",
        )

    @need_load_xvm_fence_agent
    def test_command_simply_puts_stonith_into_group_after_another(self):
        self.assert_pcs_success_all(
            [
                "stonith create S1 fence_xvm --group G".split(),
                "stonith create S2 fence_xvm --group G".split(),
            ]
        )
        self.assert_effect(
            "stonith create S3 fence_xvm --group G --after S1".split(),
            """<resources>
    <group id="G">
        <primitive class="stonith" id="S1" type="fence_xvm">
            <operations>
                <op id="S1-monitor-interval-60s" interval="60s" name="monitor" />
            </operations>
        </primitive>
        <primitive class="stonith" id="S3" type="fence_xvm">
            <operations>
                <op id="S3-monitor-interval-60s" interval="60s" name="monitor" />
            </operations>
        </primitive>
        <primitive class="stonith" id="S2" type="fence_xvm">
            <operations>
                <op id="S2-monitor-interval-60s" interval="60s" name="monitor" />
            </operations>
        </primitive>
    </group>
</resources>""",
        )

    @need_load_xvm_fence_agent
    def test_fail_when_intended_before_item_does_not_exist(self):
        self.assert_pcs_fail(
            "stonith create S2 fence_xvm --group G --before S1".split(),
            "Error: there is no resource 'S1' in the group 'G'\n",
        )

    @need_load_xvm_fence_agent
    def test_fail_when_intended_after_item_does_not_exist(self):
        self.assert_pcs_fail(
            "stonith create S2 fence_xvm --group G --after S1".split(),
            "Error: there is no resource 'S1' in the group 'G'\n",
        )

    def test_fail_when_entered_both_after_and_before(self):
        self.assert_pcs_fail(
            "stonith create S fence_xvm --group G --after S1 --before S2".split(),
            "Error: you cannot specify both --before and --after\n",
        )

    def test_fail_when_after_is_used_without_group(self):
        self.assert_pcs_fail(
            "stonith create S fence_xvm --after S1".split(),
            "Error: you cannot use --after without --group\n",
        )

    def test_fail_when_before_is_used_without_group(self):
        self.assert_pcs_fail(
            "stonith create S fence_xvm --before S1".split(),
            "Error: you cannot use --before without --group\n",
        )

    def test_fail_when_before_after_conflicts_and_moreover_without_group(self):
        self.assert_pcs_fail(
            "stonith create S fence_xvm --after S1 --before S2".split(),
            "Error: you cannot specify both --before and --after"
            " and you have to specify --group\n",
        )
Building a multi-family residence is a great way to maximize your home's value by creating an additional living space you can take advantage of. For clients who’d like to make the most of their land, Alair Homes Salt Lake City is proud to add multi-family homebuilding to our list of services.

Sometimes the perfect house needs a little more space to make it functional. Completing a home addition makes a difference. We use the finest materials and maintain the highest standards for craftsmanship and client satisfaction. Discover your dream home in the house you already own.

Building a custom home in Salt Lake doesn't have to be stressful. Our goal is to educate you as best we can so that you can make the most informed decision for you, your family, and your future.

At Alair Homes Salt Lake, we understand what it takes to make a kitchen spectacular. Our dedicated team of general contractors and design professionals works one-on-one with each client, listening first in order to best understand your needs and providing expert advice and guidance to transform dreams into reality.

This spacious modern custom home features an open-concept great room with a gourmet kitchen, chic living room, and high-end finishes. The vaulted ceiling and wood support beams give this space a rustic flair.

This custom home features an open-concept great room with a soft color palette, hardwood floors, large windows, and high-end finishes. The large windows bring in lots of natural light, giving the space a bright, open atmosphere.

This stone and brick exterior custom home features a gourmet kitchen, a dining room with a view, a cozy living room, a master suite, and a large home theatre. The living room has a brick fireplace and a vaulted ceiling with exposed wooden beams.

This custom home has a beautiful stone and brick exterior.
The interior features a living room with a cozy brick fireplace, a large gourmet kitchen, family room, two bedrooms and master suite, home office, and large home theatre. A beautiful custom home, featuring hardwood floors in the main living space, large windows to let in plenty of natural light, and a modern kitchen with stainless steel appliances.
from diofant import QQ, GoldenRatio, Rational, RootOf, Sum, oo, pi, sqrt
from diofant.abc import n, x
from diofant.printing.lambdarepr import MpmathPrinter

__all__ = ()


def test_basic():
    p = MpmathPrinter()
    assert p.doprint(GoldenRatio) == 'phi'
    assert p.doprint(Rational(2)) == '2'
    assert p.doprint(Rational(2, 3)) == '2*power(3, -1)'


def test_Pow():
    p = MpmathPrinter()
    assert p.doprint(sqrt(pi)) == 'root(pi, 2)'
    assert p.doprint(pi**Rational(2, 3)) == 'root(pi, 3)**2'
    assert p.doprint(pi**Rational(-2, 3)) == 'power(root(pi, 3), -2)'
    assert p.doprint(pi**pi) == 'pi**pi'


def test_RootOf():
    p = MpmathPrinter()
    e = RootOf(x**3 + x - 1, x, 0)
    r = ('findroot(lambda x: x**3 + x - 1, (%s, %s), '
         "method='bisection')" % (p.doprint(QQ(0)), p.doprint(QQ(1))))
    assert p.doprint(e) == r

    e = RootOf(x**3 + x - 1, x, 1)
    r = ('findroot(lambda x: x**3 + x - 1, mpc(%s, %s), '
         "method='secant')" % (p.doprint(QQ(-3, 8)), p.doprint(QQ(-9, 8))))
    assert p.doprint(e) == r


def test_Sum():
    p = MpmathPrinter()
    s = Sum(n**(-2), (n, 1, oo))
    assert p.doprint(s) == 'nsum(lambda n: power(n, -2), (1, inf))'
Great weather yesterday, good powder but not pristine, some icy spots but that's okay. Great mountain, would come back again! They also had set up an inflated jump-off area... we wanted to try it but they took it down really quickly. Please keep it up next time!

Very sunny, morning and midday runs were fun!! Late afternoon ice spots from powder push-off. Great fun!! West Park uncrowded, lots of fun, great features!!!!

Nice weather, snow was nicer than I thought for "granular"; they must have done an awesome job grooming. I didn't hit one icy spot today... which was a surprise.
"""Tools for reading and analysis of data from TruBlu data loggers."""
import os

from pandas import read_csv


def readTruBlu(csvfile):
    """Read data from a CSV file exported from a TruBlu logger.

    Parameters
    ----------
    csvfile : string
        Name of the CSV file to be read.

    Returns
    -------
    df : pandas.DataFrame
        DataFrame containing the data from the TruBlu CSV file, indexed
        by the time-of-acquisition column.
    """
    sep = ','
    header = 0
    # NOTE: hard-coding the number of metadata lines is somewhat weak; the
    # header length could change over time, so automatically detecting the
    # header row would be more robust.
    skiprows = 16
    index_col = 3
    parse_dates = True
    try:
        if os.stat(csvfile).st_size > 0:
            df = read_csv(csvfile, sep=sep, skiprows=skiprows, header=header,
                          index_col=index_col, parse_dates=parse_dates)
            return df
        print(csvfile + " is empty")
    except OSError:
        print(csvfile + " does not exist")
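To illustrate the reader's parameters, here is a hypothetical round trip: it writes a synthetic TruBlu-style export (16 metadata lines, a header row, then data) and reads it back with the same `read_csv` arguments `readTruBlu` uses. The column names and metadata layout below are assumptions for illustration only; a real TruBlu export may differ.

```python
# Sketch only: the 16 metadata lines and column names are made up to
# match the skiprows=16 / index_col=3 assumptions in readTruBlu.
import os
import tempfile

from pandas import read_csv

lines = ["# meta line %d" % i for i in range(16)]  # 16 skipped metadata rows
lines.append("ID,Name,Address,Time of Acquisition,Level(PSI)")
lines.append("1,logger,0x01,2020-01-01 00:00:00,14.7")
lines.append("1,logger,0x01,2020-01-01 00:15:00,14.9")

with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write("\n".join(lines))
    path = f.name

# Same parameters as readTruBlu: skip the metadata block, take the next
# line as the header, and index/parse the time-of-acquisition column.
df = read_csv(path, sep=",", skiprows=16, header=0,
              index_col=3, parse_dates=True)
os.remove(path)
print(df["Level(PSI)"].tolist())  # → [14.7, 14.9]
```

The index comes back as a `DatetimeIndex` because `parse_dates=True` applies to the index column when `index_col` is given.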
Sportstalk Media Group is entering its sixth year of offering broadcast streaming services to Norman Public Schools athletics. This year is set to bring the most complete coverage yet thanks to the introduction of a pair of smartphone applications designed to serve as a one-stop shop for NPS athletics information.

A quick search in the App Store for iPhone users or the Play Store for Android users connects fans better than ever before to NPS athletics. Simply search “Norman High Athletics” or “Norman North Athletics” and download the respective free applications to be immersed in NPS athletics coverage.

Fans of both Norman High and Norman North athletics are likely familiar with Sportstalk’s web streaming via NormanSports.TV. It has served as the engine for NormanHighLive.com and NormanNorthLive.com, where Sportstalk has streamed video broadcasts of numerous NPS athletic events over the past five years.

With the introduction of these new applications, fans will have access to a wealth of NPS athletics information in one convenient location. The apps provide quick access to the live video streaming of games from NormanSports.TV as well as radio broadcasts on any smart device. In addition to the live games, fans can find teams’ schedules, rosters, highlights and the latest stories about NPS athletes from The Norman Transcript and The Oklahoman.

“Hopefully, this app will be what bridges it all together. You can go to the app and pick up the streaming, go to the app and pick up radio, go to the app and get rosters and scores and go to the app and get updates that have been through The Norman Transcript or The Oklahoman. It’s a way for everyone to go to the same spot to find NPS athletics,” Sportstalk Media Group owner Randy Laffoon said.

Sportstalk broadcast director Perry Spencer envisions the apps providing something that students, families and others can enjoy.

“We’re adding an element that we hope will engage the younger crowd and relate to the students.
It’s a way to give everyone a platform to connect with the Norman public schools and know about their athletes. All of the stuff that you might want to pull up on a phone without having to search the internet,” Spencer said.

With the app, fans will have an easy avenue to find stories from The Norman Transcript’s Thursday high school football Varsity section as well as all of its high school coverage.

“We’re always focusing on telling compelling stories, finding really interesting narratives. We’re not just going to tell you what the score is. We’re going to tell you something that you want to know, that you need to know, in an entertaining, engaging way. Those interesting stories that we find, we find across all the various sports that our student-athletes participate in,” Norman Transcript editor Caleb Slinkard said.

Similar to how The Norman Transcript covers area high school sports, Sportstalk is committed to providing the best coverage for all of the sports that make up NPS athletics. Sportstalk hosts weekly NPS coaches’ shows that feature players and coaches from sports ranging all the way from football to cheerleading.

“You’re not getting it just for the football. You’re getting softball, volleyball, cheerleading, swimming coverage and more. You’re getting exposure to these athletes through the Crosstown Clash coaches’ shows on a weekly basis. All of these students are having a platform to put their voice out there and give them the notoriety they all deserve,” Spencer said.
#!/usr/bin/python3
# Author: Luxing Huang
# Purpose: Merging a specific Chef environment into another, or into a
#          template.
# Note: This can also be applied to Role files and Data Bag files, as long
#       as they are valid JSON files.
#
#       This script always creates a 3rd json file because we wouldn't want
#       to overwrite the 2nd file, for backup purposes.
from __future__ import print_function

import argparse
import json
import os
import sys

DEFAULT_KEYWORD = "template_default"

arg = argparse.ArgumentParser(
    description="I merge environments/roles into each other, or into the template")
arg.add_argument("-y", "--yes",
                 help="Assume yes to overwrite the 3rd argument if that file already exists.",
                 dest="ifyes", action="store_true", required=False)
arg.add_argument("env1", type=str, help="first environment file, e.g. env1.json")
arg.add_argument("env2", type=str, help="second environment or template, e.g. env2.json")
arg.add_argument("merged", type=str, help="the target template file name, e.g. template.json")
arg.set_defaults(ifyes=False)
args = arg.parse_args()

env1_name = ""
env2_name = ""


def merge(j_env1, j_env2):
    """Merge two JSON objects into one and return the combined JSON."""
    # Start with an empty template for the merged output (argv[3]).
    j_template = {}
    for key in j_env1:
        # If env2 has no such key, carry the env1 entry over unchanged.
        if key not in j_env2:
            j_template[key] = j_env1[key]
            continue
        if j_env1[key] == j_env2[key]:
            # Identical values: keep the shared value in the template.
            j_template[key] = j_env2[key]
            continue
        # env1 is a dict but env2 is a string: cannot merge automatically.
        if isinstance(j_env1[key], dict) and isinstance(j_env2[key], str):
            print("Please do manual integration at key %s because env1 is a "
                  "dict but env2 is a string" % key)
            sys.exit(2)
        # Both values are strings: build a templated dict from the two.
        if isinstance(j_env1[key], str) and isinstance(j_env2[key], str):
            if env2_name == "":
                # If the env2 name is missing, we assume env2 is actually a
                # template, so its value becomes the default.
                j_template[key] = {DEFAULT_KEYWORD: j_env2[key],
                                   env1_name: j_env1[key]}
            else:
                # env2 is actually an environment.
                j_template[key] = {DEFAULT_KEYWORD: "",
                                   env1_name: j_env1[key],
                                   env2_name: j_env2[key]}
            continue
        # env2 is a template dict and env1 is merging into it.
        if isinstance(j_env1[key], str) and isinstance(j_env2[key], dict):
            # Make sure it is a templated dict.
            if DEFAULT_KEYWORD in j_env2[key]:
                # Copy env2 to the new template, then add/update env1's entry.
                j_template[key] = j_env2[key]
                j_template[key][env1_name] = j_env1[key]
                continue
            print("env2 file does not have a %s key on parent key %s, abort."
                  % (DEFAULT_KEYWORD, key), file=sys.stderr)
            sys.exit(1)
        if isinstance(j_env1[key], dict) and isinstance(j_env2[key], dict):
            # Neither side may already be a templated dict here.
            if DEFAULT_KEYWORD in j_env1[key] or DEFAULT_KEYWORD in j_env2[key]:
                print("either environment must not be a dict template on %s." % key,
                      file=sys.stderr)
                sys.exit(2)
            # Recursive call to build the JSON subtree.
            j_template[key] = merge(j_env1[key], j_env2[key])
            continue
    # Keep keys that exist only in env2 (e.g. template-only entries) so
    # they are not silently dropped from the merged output.
    for key in j_env2:
        if key not in j_env1:
            j_template[key] = j_env2[key]
    return j_template


if __name__ == "__main__":
    # Read env1 and env2 and parse them into dicts.
    with open(args.env1, "r") as env1_fp:
        try:
            env1_json = json.load(env1_fp)
        except ValueError:
            print("Cannot parse %s, check if it's valid JSON?" % args.env1,
                  file=sys.stderr)
            sys.exit(2)
    with open(args.env2, "r") as env2_fp:
        try:
            env2_json = json.load(env2_fp)
        except ValueError:
            print("Cannot parse %s, check if it's valid JSON?" % args.env2,
                  file=sys.stderr)
            sys.exit(2)

    # Set the global names for env1/env2.
    try:
        name = env1_json["name"]
    except KeyError:
        print("Name key not found in 1st environment. Giving up.", file=sys.stderr)
        sys.exit(1)
    if not isinstance(name, str):
        print("File 1 must be an environment, not a template!", file=sys.stderr)
        sys.exit(1)
    env1_name = name

    try:
        name = env2_json["name"]
    except KeyError:
        print("Required name key not found in 2nd environment/template. Giving up.",
              file=sys.stderr)
        sys.exit(1)
    if isinstance(name, str):
        # It's an environment.
        env2_name = name
    # Otherwise it's a template and env2_name stays "".

    merge_json = merge(env1_json, env2_json)

    if not args.ifyes and os.path.exists(args.merged):
        answer = input("Do you really want to overwrite %s? type YES to proceed: "
                       % args.merged).strip()
        if answer != "YES":
            print("Abort.", file=sys.stderr)
            sys.exit(2)

    with open(args.merged, "w") as merge_fp:
        json.dump(merge_json, merge_fp, sort_keys=True, indent=2)
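The templating rule described in the comments above can be illustrated with a small, self-contained sketch. This is not the script's own `merge` function; it re-implements only the string-vs-string case for a flat dict, treating the second argument as the template. The environment name `"prod"` and the sample keys are made up for illustration.

```python
# Minimal sketch of the merge rule, assuming env2 is the template: when
# both sides hold a plain string under the same key and the values
# differ, the merged template stores env2's value under
# "template_default" and env1's value under env1's own name.
TEMPLATE_DEFAULT = "template_default"


def merge_flat(env1, env2, env1_name):
    """Merge two flat dicts, treating env2 as the template."""
    merged = {}
    for key in env1:
        if key not in env2 or env1[key] == env2.get(key):
            # Missing from the template, or identical: keep env1's value.
            merged[key] = env1[key]
        elif isinstance(env2[key], dict) and TEMPLATE_DEFAULT in env2[key]:
            # Already templated: add/update this environment's entry.
            merged[key] = dict(env2[key], **{env1_name: env1[key]})
        else:
            # Two differing strings: build a fresh templated dict.
            merged[key] = {TEMPLATE_DEFAULT: env2[key], env1_name: env1[key]}
    return merged


env1 = {"chef_type": "environment", "log_level": "debug"}
template = {"chef_type": "environment", "log_level": "info"}
print(merge_flat(env1, template, "prod"))
# → {'chef_type': 'environment',
#    'log_level': {'template_default': 'info', 'prod': 'debug'}}
```

The full script extends this idea with recursion into nested dicts and with the three-way case where both inputs are plain environments.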
Like AmeriCares, many nonprofits today suffer from low or no brand awareness. This happens for many reasons: limited media and marketing budgets, insufficient insight into target donors or constituents, a lack of internal digital marketing expertise, and so on. Regardless, these nonprofit brands are unable to achieve their full potential to raise awareness for their cause, attract donors and volunteers, and ultimately deliver on their mission.

The most engaging and successful nonprofit websites are minimalistic in design. They have intuitive navigation, a visually appealing color scheme, easy-to-read text, and engaging videos or images that tell the story of the organization’s mission.

This article was written by Dave Sutton from Business2Community and was legally licensed through the NewsCred publisher network. Please direct all licensing questions to legal@newscred.com.
# -*- coding: utf-8 -*-
"""TODO(e-carlin): Doc

:copyright: Copyright (c) 2019 RadiaSoft LLC.  All Rights Reserved.
:license: http://www.apache.org/licenses/LICENSE-2.0.html
"""
from __future__ import absolute_import, division, print_function
from pykern import pkcollections
from pykern import pkconfig
from pykern import pkinspect
from pykern import pkio
from pykern.pkcollections import PKDict
from pykern.pkdebug import pkdp, pkdc, pkdformat, pkdlog, pkdexc
from sirepo import job
import asyncio
import contextlib
import copy
import datetime
import os
import pykern.pkio
import sirepo.auth
import sirepo.auth_db
import sirepo.http_reply
import sirepo.sim_data
import sirepo.simulation_db
import sirepo.srdb
import sirepo.srtime
import sirepo.tornado
import sirepo.util
import time
import tornado.ioloop
import tornado.locks


#: where supervisor state is persisted to disk
_DB_DIR = None

_NEXT_REQUEST_SECONDS = None

_HISTORY_FIELDS = frozenset((
    'alert',
    'canceledAfterSecs',
    'computeJobQueued',
    'computeJobSerial',
    'computeJobStart',
    'computeModel',
    'driverDetails',
    'error',
    'internalError',
    'isParallel',
    'isPremiumUser',
    'jobRunMode',
    'jobStatusMessage',
    'lastUpdateTime',
    'status',
))

_PARALLEL_STATUS_FIELDS = frozenset((
    'computeJobHash',
    'computeJobStart',
    'computeModel',
    'elapsedTime',
    'frameCount',
    'lastUpdateTime',
    'percentComplete',
))

cfg = None

#: how many times to restart request when Awaited() raised
_MAX_RETRIES = 10


class Awaited(Exception):
    """An await occurred, restart operation"""
    pass


class ServerReq(PKDict):

    def copy_content(self):
        return copy.deepcopy(self.content)

    def pkdebug_str(self):
        c = self.get('content')
        if not c:
            return 'ServerReq(<no content>)'
        return pkdformat('ServerReq({}, {})', c.api, c.get('computeJid'))

    async def receive(self):
        s = self.content.pkdel('serverSecret')
        # no longer contains secret so ok to log
        assert s, \
            'no secret in message content={}'.format(self.content)
        assert s == sirepo.job.cfg.server_secret, \
            'server_secret did not match content={}'.format(self.content)
        self.handler.write(await _ComputeJob.receive(self))


class SlotProxy(PKDict):

    def __init__(self, **kwargs):
        super().__init__(_value=None, **kwargs)

    async def alloc(self, situation):
        if self._value is not None:
            return
        try:
            self._value = self._q.get_nowait()
        except tornado.queues.QueueEmpty:
            pkdlog('{} situation={}', self._op, situation)
            with self._op.set_job_situation(situation):
                self._value = await self._q.get()
            raise Awaited()

    def free(self):
        if self._value is None:
            return
        self._q.task_done()
        self._q.put_nowait(self._value)
        self._value = None


class SlotQueue(sirepo.tornado.Queue):

    def __init__(self, maxsize=1):
        super().__init__(maxsize=maxsize)
        for i in range(1, maxsize + 1):
            self.put_nowait(i)

    def sr_slot_proxy(self, op):
        return SlotProxy(_op=op, _q=self)


def init():
    global cfg, _DB_DIR, _NEXT_REQUEST_SECONDS, job_driver
    if cfg:
        return
    job.init()
    from sirepo import job_driver
    job_driver.init(pkinspect.this_module())
    cfg = pkconfig.init(
        job_cache_secs=(300, int, 'when to re-read job state from disk'),
        max_secs=dict(
            analysis=(144, pkconfig.parse_seconds, 'maximum run-time for analysis job',),
            parallel=(3600, pkconfig.parse_seconds, 'maximum run-time for parallel job (except sbatch)'),
            parallel_premium=(3600*2, pkconfig.parse_seconds, 'maximum run-time for parallel job for premium user (except sbatch)'),
            sequential=(360, pkconfig.parse_seconds, 'maximum run-time for sequential job'),
        ),
        purge_non_premium_after_secs=(0, pkconfig.parse_seconds, 'how long to wait before purging non-premium users simulations'),
        purge_non_premium_task_secs=(None, pkconfig.parse_seconds, 'when to clean up simulation runs of non-premium users (%H:%M:%S)'),
        sbatch_poll_secs=(15, int, 'how often to poll squeue and parallel status'),
    )
    _DB_DIR = sirepo.srdb.supervisor_dir()
    _NEXT_REQUEST_SECONDS = PKDict({
        job.PARALLEL: 2,
        job.SBATCH: cfg.sbatch_poll_secs,
        job.SEQUENTIAL: 1,
    })
    sirepo.auth_db.init()
    tornado.ioloop.IOLoop.current().add_callback(
        _ComputeJob.purge_free_simulations,
    )


async def terminate():
    from sirepo import job_driver
    await job_driver.terminate()


class _ComputeJob(PKDict):

    instances = PKDict()
    _purged_jids_cache = set()

    def __init__(self, req, **kwargs):
        super().__init__(
            ops=[],
            run_op=None,
            run_dir_slot_q=SlotQueue(),
            **kwargs,
        )
        # At start we don't know anything about the run_dir so assume ready
        self.pksetdefault(db=lambda: self.__db_init(req))
        self.cache_timeout_set()

    def cache_timeout(self):
        if self.ops:
            self.cache_timeout_set()
        else:
            del self.instances[self.db.computeJid]

    def cache_timeout_set(self):
        self.timer = tornado.ioloop.IOLoop.current().call_later(
            cfg.job_cache_secs,
            self.cache_timeout,
        )

    def destroy_op(self, op):
        if op in self.ops:
            self.ops.remove(op)
        if self.run_op == op:
            self.run_op = None

    def elapsed_time(self):
        if not self.db.computeJobStart:
            return 0
        return (
            sirepo.srtime.utc_now_as_int() if self._is_running_pending()
            else self.db.dbUpdateTime
        ) - self.db.computeJobStart

    @classmethod
    def get_instance_or_class(cls, req):
        try:
            j = req.content.computeJid
        except AttributeError:
            return cls
        self = cls.instances.pksetdefault(j, lambda: cls.__create(req))[j]
        # SECURITY: must only return instances for authorized user
        assert req.content.uid == self.db.uid, \
            'req.content.uid={} is not same as db.uid={} for jid={}'.format(
                req.content.uid,
                self.db.uid,
                j,
            )
        return self

    def pkdebug_str(self):
        d = self.get('db')
        if not d:
            return '_ComputeJob()'
        return pkdformat(
            '_ComputeJob({} u={} {} {})',
            d.get('computeJid'),
            d.get('uid'),
            d.get('status'),
            self.ops,
        )

    @classmethod
    async def purge_free_simulations(cls):
        def _get_uids_and_files():
            r = []
            u = None
            p = sirepo.auth_db.UserRole.uids_of_paid_users()
            for f in pkio.sorted_glob(_DB_DIR.join('*{}'.format(
                    sirepo.simulation_db.JSON_SUFFIX,
            ))):
                n = sirepo.sim_data.split_jid(jid=f.purebasename).uid
                if n in p or f.mtime() > _too_old \
                        or f.purebasename in cls._purged_jids_cache:
                    continue
                if u != n:
                    # POSIT: Uid is the first part of each db file. The files
                    # are sorted so this should yield all of a user's files
                    if r:
                        yield u, r
                    u = n
                    r = []
                r.append(f)
            if r:
                yield u, r

        def _purge_sim(jid):
            d = cls.__db_load(jid)
            # OPTIMIZATION: We assume the uids_of_paid_users doesn't change very
            # frequently so we don't need to check again. A user could run a sim
            # at anytime so we need to check that they haven't
            if d.lastUpdateTime > _too_old:
                return
            cls._purged_jids_cache.add(jid)
            if d.status == job.JOB_RUN_PURGED:
                return
            p = sirepo.simulation_db.simulation_run_dir(d)
            pkio.unchecked_remove(p)
            n = cls.__db_init_new(d, d)
            n.status = job.JOB_RUN_PURGED
            cls.__db_write_file(n)

        if not cfg.purge_non_premium_task_secs:
            return
        s = sirepo.srtime.utc_now()
        u = None
        f = None
        try:
            _too_old = (
                sirepo.srtime.utc_now_as_int()
                - cfg.purge_non_premium_after_secs
            )
            with sirepo.auth_db.session():
                for u, v in _get_uids_and_files():
                    with sirepo.auth.set_user_outside_of_http_request(u):
                        for f in v:
                            _purge_sim(jid=f.purebasename)
                    await tornado.gen.sleep(0)
        except Exception as e:
            pkdlog('u={} f={} error={} stack={}', u, f, e, pkdexc())
        finally:
            tornado.ioloop.IOLoop.current().call_later(
                cfg.purge_non_premium_task_secs,
                cls.purge_free_simulations,
            )

    @classmethod
    async def receive(cls, req):
        if req.content.get('api') != 'api_runStatus':
            pkdlog('{}', req)
        try:
            o = cls.get_instance_or_class(req)
            return await getattr(
                o,
                '_receive_' + req.content.api,
            )(req)
        except sirepo.util.ASYNC_CANCELED_ERROR:
            return PKDict(state=job.CANCELED)
        except Exception as e:
            pkdlog('{} error={} stack={}', req, e, pkdexc())
            return sirepo.http_reply.gen_tornado_exception(e)

    def set_situation(self, op, situation, exception=None):
        if op.opName != job.OP_RUN:
            return
        s = self.db.jobStatusMessage
        p = 'Exception: '
        if situation is not None:
            # POSIT: no other situation begins with exception
            assert not s or not s.startswith(p), \
                f'Trying to overwrite existing jobStatusMessage="{s}" with situation="{situation}"'
        if exception is not None:
            if not str(exception):
                exception = repr(exception)
            situation = f'{p}{exception}, while {s}'
        self.__db_update(jobStatusMessage=situation)

    @classmethod
    def __create(cls, req):
        try:
            d = cls.__db_load(req.content.computeJid)
            self = cls(req, db=d)
            if self._is_running_pending():
                #TODO(robnagler) when we reconnect with running processes at
                # startup, we'll need to change this
                self.__db_update(status=job.CANCELED)
            return self
        except Exception as e:
            if pykern.pkio.exception_is_not_found(e):
                return cls(req).__db_write()
            raise

    @classmethod
    def __db_file(cls, computeJid):
        return _DB_DIR.join(
            computeJid + sirepo.simulation_db.JSON_SUFFIX,
        )

    def __db_init(self, req, prev_db=None):
        self.db = self.__db_init_new(req.content, prev_db)
        return self.db

    @classmethod
    def __db_init_new(cls, data, prev_db=None):
        db = PKDict(
            alert=None,
            canceledAfterSecs=None,
            computeJid=data.computeJid,
            computeJobHash=data.computeJobHash,
            computeJobQueued=0,
            computeJobSerial=0,
            computeJobStart=0,
            computeModel=data.computeModel,
            dbUpdateTime=sirepo.srtime.utc_now_as_int(),
            driverDetails=PKDict(),
            error=None,
            history=cls.__db_init_history(prev_db),
            isParallel=data.isParallel,
            isPremiumUser=data.get('isPremiumUser'),
            jobStatusMessage=None,
            lastUpdateTime=0,
            simName=None,
            simulationId=data.simulationId,
            simulationType=data.simulationType,
            status=job.MISSING,
            uid=data.uid,
        )
        r = data.get('jobRunMode')
        if not r:
            assert data.api != 'api_runSimulation', \
                'api_runSimulation must have a jobRunMode content={}'.format(data)
            # __db_init() will be called when runDirNotFound.
            # The api_* that initiated the request may not have
            # a jobRunMode (ex api_downloadDataFile). In that
            # case use the existing jobRunMode because the
            # request doesn't care about the jobRunMode
            r = prev_db.jobRunMode
        db.pkupdate(
            jobRunMode=r,
            nextRequestSeconds=_NEXT_REQUEST_SECONDS[r],
        )
        if db.isParallel:
            db.parallelStatus = PKDict(
                ((k, 0) for k in _PARALLEL_STATUS_FIELDS),
            )
        return db

    @classmethod
    def __db_init_history(cls, prev_db):
        if prev_db is None:
            return []
        return prev_db.history + [
            PKDict(((k, v) for k, v in prev_db.items() if k in _HISTORY_FIELDS)),
        ]

    @classmethod
    def __db_load(cls, compute_jid):
        v = None
        f = cls.__db_file(compute_jid)
        d = pkcollections.json_load_any(f)
        for k in [
                'alert',
                'canceledAfterSecs',
                'isPremiumUser',
                'jobStatusMessage',
                'internalError',
        ]:
            d.setdefault(k, v)
            for h in d.history:
                h.setdefault(k, v)
        d.pksetdefault(
            computeModel=lambda: sirepo.sim_data.split_jid(compute_jid).compute_model,
            dbUpdateTime=lambda: f.mtime(),
        )
        if 'cancelledAfterSecs' in d:
            d.canceledAfterSecs = d.pkdel('cancelledAfterSecs', default=v)
            for h in d.history:
                h.canceledAfterSecs = h.pkdel('cancelledAfterSecs', default=v)
        return d

    def __db_restore(self, db):
        self.db = db
        self.__db_write()

    def __db_update(self, **kwargs):
        self.db.pkupdate(**kwargs)
        return self.__db_write()

    def __db_write(self):
        self.db.dbUpdateTime = sirepo.srtime.utc_now_as_int()
        self.__db_write_file(self.db)
        return self

    @classmethod
    def __db_write_file(cls, db):
        sirepo.util.json_dump(db, path=cls.__db_file(db.computeJid))

    @classmethod
    def _get_running_pending_jobs(cls, uid=None):
        def _filter_jobs(job):
            if uid and job.db.uid != uid:
                return False
            return job._is_running_pending()

        def _get_header():
            h = [
                ['App', 'String'],
                ['Simulation id', 'String'],
                ['Start', 'DateTime'],
                ['Last update', 'DateTime'],
                ['Elapsed', 'Time'],
                ['Status', 'String'],
            ]
            if uid:
                h.insert(l, ['Name', 'String'])
            else:
                h.insert(l, ['User id', 'String'])
            h.extend([
                ['Queued', 'Time'],
                ['Driver details', 'String'],
                ['Premium user', 'String'],
            ])
            return h

        def _get_rows():
            def _get_queued_time(db):
                m = i.db.computeJobStart if i.db.status == job.RUNNING \
                    else sirepo.srtime.utc_now_as_int()
                return m - db.computeJobQueued

            r = []
            for i in filter(_filter_jobs, cls.instances.values()):
                d = [
                    i.db.simulationType,
                    i.db.simulationId,
                    i.db.computeJobStart,
                    i.db.lastUpdateTime,
                    i.elapsed_time(),
                    i.db.get('jobStatusMessage', ''),
                ]
                if uid:
                    d.insert(l, i.db.simName)
                else:
                    d.insert(l, i.db.uid)
                d.extend([
                    _get_queued_time(i.db),
                    ' | '.join(sorted(i.db.driverDetails.values())),
                    i.db.isPremiumUser,
                ])
                r.append(d)
            r.sort(key=lambda x: x[l])
            return r

        l = 2
        return PKDict(header=_get_header(), rows=_get_rows())

    def _is_running_pending(self):
        return self.db.status in (job.RUNNING, job.PENDING)

    def _init_db_missing_response(self, req):
        self.__db_init(req, prev_db=self.db)
        self.__db_write()
        assert self.db.status == job.MISSING, \
            'expecting missing status={}'.format(self.db.status)
        return PKDict(state=self.db.status)

    def _raise_if_purged_or_missing(self, req):
        if self.db.status in (job.MISSING, job.JOB_RUN_PURGED):
            sirepo.util.raise_not_found('purged or missing {}', req)

    @classmethod
    async def _receive_api_admJobs(cls, req):
        return cls._get_running_pending_jobs()

    async def _receive_api_downloadDataFile(self, req):
        self._raise_if_purged_or_missing(req)
        return await self._send_with_single_reply(
            job.OP_ANALYSIS,
            req,
            jobCmd='download_data_file',
            dataFileKey=req.content.pop('dataFileKey')
        )

    @classmethod
    async def _receive_api_ownJobs(cls, req):
        return cls._get_running_pending_jobs(uid=req.content.uid)

    async def _receive_api_runCancel(self, req, timed_out_op=None):
        """Cancel a run and related ops

        Analysis ops that are for a parallel run (ex. sim frames) will
        not be canceled.

        Args:
            req (ServerReq): The cancel request
            timed_out_op (_Op, Optional): the op that was timed out, which
                needs to be canceled
        Returns:
            PKDict: Message with state=canceled
        """

        def _ops_to_cancel():
            r = set(
                o for o in self.ops
                # Do not cancel sim frames. Allow them to come back
                # for a canceled run
                if not (self.db.isParallel and o.opName == job.OP_ANALYSIS)
            )
            if timed_out_op in self.ops:
                r.add(timed_out_op)
            return r

        r = PKDict(state=job.CANCELED)
        if (
            # a running simulation may be canceled due to a
            # downloadDataFile request timeout in another browser window (only the
            # computeJids must match between the two requests). This might be
            # a weird UX but it's important to do, because no op should take
            # longer than its timeout.
            #
            # timed_out_op might not be a valid request, because a new compute
            # may have been started so either we are canceling a compute by
            # user directive (left) or timing out an op (and canceling all).
            (not self._req_is_valid(req) and not timed_out_op)
            or (not self._is_running_pending() and not self.ops)
        ):
            # job is not relevant, but let the user know it isn't running
            return r
        candidates = _ops_to_cancel()
        c = None
        o = set()
        # No matter what happens the job is canceled
        self.__db_update(status=job.CANCELED)
        self._canceled_serial = self.db.computeJobSerial
        try:
            for i in range(_MAX_RETRIES):
                try:
                    o = _ops_to_cancel().intersection(candidates)
                    if o:
                        #TODO(robnagler) cancel run_op, not just by jid, which is insufficient (hash)
                        if not c:
                            c = self._create_op(job.OP_CANCEL, req)
                        await c.prepare_send()
                    elif c:
                        c.destroy()
                        c = None
                    pkdlog('{} cancel={}', self, o)
                    for x in o:
                        x.destroy(cancel=True)
                    if timed_out_op:
                        self.db.canceledAfterSecs = timed_out_op.max_run_secs
                    if c:
                        c.msg.opIdsToCancel = [x.opId for x in o]
                        c.send()
                        await c.reply_get()
                    return r
                except Awaited:
                    pass
            else:
                raise AssertionError('too many retries {}'.format(req))
        finally:
            if c:
                c.destroy(cancel=False)

    async def _receive_api_runSimulation(self, req, recursion_depth=0):
        def _set_error(compute_job_serial, internal_error):
            if self.db.computeJobSerial != compute_job_serial:
                # Another run has started
                return
            self.__db_update(
                error='Server error',
                internalError=internal_error,
                status=job.ERROR,
            )

        f = req.content.data.get('forceRun')
        if self._is_running_pending():
            if f or not self._req_is_valid(req):
                return PKDict(
                    state=job.ERROR,
                    error='another browser is running the simulation',
                )
            return self._status_reply(req)
        if (
            not f
            and self._req_is_valid(req)
            and self.db.status == job.COMPLETED
        ):
            # Valid, completed, transient simulation
            # Read this first https://github.com/radiasoft/sirepo/issues/2007
            r = await self._receive_api_runStatus(req)
            if r.state == job.MISSING:
                # happens when the run dir is deleted (ex _purge_free_simulations)
                assert recursion_depth == 0, \
                    'Infinite recursion detected. Already called from self. req={}'.format(
                        req,
                    )
                return await self._receive_api_runSimulation(
                    req,
                    recursion_depth + 1,
                )
            return r
        # Forced or canceled/errored/missing/invalid so run
        o = self._create_op(
            job.OP_RUN,
            req,
            jobCmd='compute',
            nextRequestSeconds=self.db.nextRequestSeconds,
        )
        t = sirepo.srtime.utc_now_as_int()
        s = self.db.status
        d = self.db
        self.__db_init(req, prev_db=d)
        self.__db_update(
            computeJobQueued=t,
            computeJobSerial=t,
            computeModel=req.content.computeModel,
            driverDetails=o.driver.driver_details,
            # run mode can change between runs so we must update the db
            jobRunMode=req.content.jobRunMode,
            simName=req.content.data.models.simulation.name,
            status=job.PENDING,
        )
        self._purged_jids_cache.discard(self.__db_file(self.db.computeJid).purebasename)
        c = self.db.computeJobSerial
        try:
            for i in range(_MAX_RETRIES):
                try:
                    await o.prepare_send()
                    self.run_op = o
                    o.make_lib_dir_symlink()
                    o.send()
                    r = self._status_reply(req)
                    assert r
                    o.run_callback = tornado.ioloop.IOLoop.current().call_later(
                        0,
                        self._run,
                        o,
                    )
                    o = None
                    return r
                except Awaited:
                    pass
            else:
                raise AssertionError('too many retries {}'.format(req))
        except sirepo.util.ASYNC_CANCELED_ERROR:
            if self.pkdel('_canceled_serial') == c:
                # We were canceled due to api_runCancel.
                # api_runCancel destroyed the op and updated the db
                raise
            # There was a timeout getting the run started. Set the
            # error and let the user know. The timeout has destroyed
            # the op so don't need to destroy here
            _set_error(c, o.internal_error)
            return self._status_reply(req)
        except Exception as e:
            # _run destroys in the happy path (never got to _run here)
            o.destroy(cancel=False)
            if isinstance(e, sirepo.util.SRException) and \
                    e.sr_args.params.get('isGeneral'):
                self.__db_restore(d)
            else:
                _set_error(c, o.internal_error)
            raise

    async def _receive_api_runStatus(self, req):
        r = self._status_reply(req)
        if r:
            return r
        r = await self._send_with_single_reply(
            job.OP_ANALYSIS,
            req,
            jobCmd='sequential_result',
        )
        if r.state == job.ERROR:
            return self._init_db_missing_response(req)
        return r

    async def _receive_api_sbatchLogin(self, req):
        return await self._send_with_single_reply(job.OP_SBATCH_LOGIN, req)

    async def _receive_api_simulationFrame(self, req):
        if not self._req_is_valid(req):
            sirepo.util.raise_not_found('invalid req={}', req)
        self._raise_if_purged_or_missing(req)
        return await self._send_with_single_reply(
            job.OP_ANALYSIS,
            req,
            jobCmd='get_simulation_frame'
        )

    async def _receive_api_statelessCompute(self, req):
        return await self._send_with_single_reply(
            job.OP_ANALYSIS,
            req,
            jobCmd='stateless_compute'
        )

    def _create_op(self, opName, req, **kwargs):
        #TODO(robnagler) kind should be set earlier in the queuing process.
        req.kind = job.PARALLEL if self.db.isParallel and opName != job.OP_ANALYSIS \
            else job.SEQUENTIAL
        req.simulationType = self.db.simulationType
        # run mode can change between runs so use req.content.jobRunMode
        # not self.db.jobRunMode
        r = req.content.get('jobRunMode', self.db.jobRunMode)
        if r not in sirepo.simulation_db.JOB_RUN_MODE_MAP:
            # happens only when config changes, and only when sbatch is missing
            sirepo.util.raise_not_found('invalid jobRunMode={} req={}', r, req)
        o = _Op(
            #TODO(robnagler) don't like the camelcase. It doesn't actually work
            # right because these values are never sent directly, only msg
            # which can be camelcase
            computeJob=self,
            kind=req.kind,
            msg=PKDict(req.copy_content()).pksetdefault(jobRunMode=r),
            opName=opName,
            task=asyncio.current_task(),
        )
        if 'dataFileKey' in kwargs:
            kwargs['dataFileUri'] = job.supervisor_file_uri(
                o.driver.cfg.supervisor_uri,
                job.DATA_FILE_URI,
                kwargs.pop('dataFileKey'),
            )
        o.msg.pkupdate(**kwargs)
        self.ops.append(o)
        return o

    def _req_is_valid(self, req):
        return (
            self.db.computeJobHash == req.content.computeJobHash
            and (
                not req.content.computeJobSerial
                or self.db.computeJobSerial == req.content.computeJobSerial
            )
        )

    async def _run(self, op):
        op.task = asyncio.current_task()
        op.pkdel('run_callback')
        try:
            with op.set_job_situation('Entered __create._run'):
                while True:
                    try:
                        r = await op.reply_get()
                        #TODO(robnagler) is this ever true?
                        if op != self.run_op:
                            return
                        # run_dir is in a stable state so don't need to lock
                        op.run_dir_slot.free()
                        self.db.status = r.state
                        self.db.alert = r.get('alert')
                        if self.db.status == job.ERROR:
                            self.db.error = r.get('error', '<unknown error>')
                        if 'computeJobStart' in r:
                            self.db.computeJobStart = r.computeJobStart
                        if 'parallelStatus' in r:
                            self.db.parallelStatus.update(r.parallelStatus)
                            self.db.lastUpdateTime = r.parallelStatus.lastUpdateTime
                        else:
                            # sequential jobs don't send this
                            self.db.lastUpdateTime = sirepo.srtime.utc_now_as_int()
                        #TODO(robnagler) will need final frame count
                        self.__db_write()
                        if r.state in job.EXIT_STATUSES:
                            break
                    except sirepo.util.ASYNC_CANCELED_ERROR:
                        return
        except Exception as e:
            pkdlog('error={} stack={}', e, pkdexc())
            if op == self.run_op:
                self.__db_update(
                    status=job.ERROR,
                    error='server error',
                )
        finally:
            op.destroy(cancel=False)

    async def _send_with_single_reply(self, opName, req, **kwargs):
        o = self._create_op(opName, req, **kwargs)
        try:
            for i in range(_MAX_RETRIES):
                try:
                    await o.prepare_send()
                    o.send()
                    r = await o.reply_get()
                    # POSIT: any api_* that could run into runDirNotFound
                    #
will call _send_with_single_reply() and this will # properly format the reply if r.get('runDirNotFound'): return self._init_db_missing_response(req) return r except Awaited: pass else: raise AssertionError('too many retries {}'.format(req)) finally: o.destroy(cancel=False) def _status_reply(self, req): def res(**kwargs): r = PKDict(**kwargs) if self.db.canceledAfterSecs is not None: r.canceledAfterSecs = self.db.canceledAfterSecs if self.db.error: r.error = self.db.error if self.db.alert: r.alert = self.db.alert if self.db.isParallel: r.update(self.db.parallelStatus) r.computeJobHash = self.db.computeJobHash r.computeJobSerial = self.db.computeJobSerial r.elapsedTime = self.elapsed_time() if self._is_running_pending(): c = req.content r.update( nextRequestSeconds=self.db.nextRequestSeconds, nextRequest=PKDict( computeJobHash=self.db.computeJobHash, computeJobSerial=self.db.computeJobSerial, computeJobStart=self.db.computeJobStart, report=c.analysisModel, simulationId=self.db.simulationId, simulationType=self.db.simulationType, ), ) return r if self.db.computeJobHash != req.content.computeJobHash: return PKDict(state=job.MISSING, reason='computeJobHash-mismatch') if ( req.content.computeJobSerial and self.db.computeJobSerial != req.content.computeJobSerial ): return PKDict(state=job.MISSING, reason='computeJobSerial-mismatch') if self.db.isParallel or self.db.status != job.COMPLETED: return res( state=self.db.status, dbUpdateTime=self.db.dbUpdateTime, ) return None class _Op(PKDict): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.update( do_not_send=False, internal_error=None, opId=job.unique_key(), run_dir_slot=self.computeJob.run_dir_slot_q.sr_slot_proxy(self), _reply_q=sirepo.tornado.Queue(), ) self.msg.update(opId=self.opId, opName=self.opName) self.driver = job_driver.assign_instance_op(self) self.cpu_slot = self.driver.cpu_slot_q.sr_slot_proxy(self) q = self.driver.op_slot_q.get(self.opName) self.op_slot = q and 
q.sr_slot_proxy(self) self.max_run_secs = self._get_max_run_secs() pkdlog('{} runDir={}', self, self.msg.get('runDir')) def destroy(self, cancel=True, internal_error=None): self.run_dir_slot.free() if cancel: if self.task: self.task.cancel() self.task = None # Ops can be destroyed multiple times # The first error is "closest to the source" so don't overwrite it if not self.internal_error: self.internal_error = internal_error for x in 'run_callback', 'timer': if x in self: tornado.ioloop.IOLoop.current().remove_timeout(self.pkdel(x)) self.computeJob.destroy_op(self) self.driver.destroy_op(self) def make_lib_dir_symlink(self): self.driver.make_lib_dir_symlink(self) def pkdebug_str(self): return pkdformat('_Op({}, {:.4})', self.opName, self.opId) async def prepare_send(self): """Ensures resources are available for sending to agent To maintain consistency, do not modify global state before calling this method. """ await self.driver.prepare_send(self) async def reply_get(self): # If we get an exception (canceled), task is not done. # Had to look at the implementation of Queue to see that # task_done should only be called if get actually removes # the item from the queue. 
pkdlog('{} await _reply_q.get()', self) r = await self._reply_q.get() self._reply_q.task_done() return r def reply_put(self, reply): self._reply_q.put_nowait(reply) async def run_timeout(self): """Can be any op that's timed""" pkdlog('{} max_run_secs={}', self, self.max_run_secs) await self.computeJob._receive_api_runCancel( ServerReq(content=self.msg), timed_out_op=self, ) def send(self): if self.max_run_secs: self.timer = tornado.ioloop.IOLoop.current().call_later( self.max_run_secs, self.run_timeout, ) self.driver.send(self) @contextlib.contextmanager def set_job_situation(self, situation): self.computeJob.set_situation(self, situation) try: yield self.computeJob.set_situation(self, None) except Exception as e: pkdlog('{} situation={} stack={}', self, situation, pkdexc()) self.computeJob.set_situation(self, None, exception=e) raise def _get_max_run_secs(self): if self.driver.op_is_untimed(self): return 0 if self.opName == sirepo.job.OP_ANALYSIS: return cfg.max_secs.analysis if self.kind == job.PARALLEL and self.msg.get('isPremiumUser'): return cfg.max_secs['parallel_premium'] return cfg.max_secs[self.kind] def __hash__(self): return hash((self.opId,))
Steel Seamless Pipes Manufacturer in Mexico, SS Welded Pipes Exporter in Mexico, Stainless Steel ERW Pipe Supplier in Mexico, Steel Tubing in Mexico, Steel Tubes Stockist in Mexico. Stainless pipe supplier in Mexico, SS pipes in Mexico, seamless pipe in Mexico, EFW & welded pipe in Mexico. Carbon Steel Pipes Exporter in Mexico, Alloy Steel Seamless Tubes in Mexico, Nickel Alloy Pipe & Tubes Supplier in Mexico, Steel Pipes Distributor in Mexico, Duplex Steel Pipe in Mexico.

Neon Alloy is acknowledged as a manufacturer, exporter & supplier of stainless steel pipe, steel pipe, SS pipe and steel tubes in Mexico, and as a stainless steel tube supplier in Mexico. We supply seamless pipe and welded pipe in Mexico. Our products are available in stainless steel pipe, carbon steel pipe, alloy steel pipe, duplex steel pipe and nickel alloy pipe grades. Available product types are round pipe, square pipe, seamless pipe, welded pipe, EFW pipe, rectangular pipe and ERW pipe. These seamless stainless steel pipes have excellent corrosion resistance and superior dimensional accuracy. We offer seamless stainless steel pipes and tubes in Mexico in different sizes and forms, ensuring the widest possible choice for customers. Our Stainless 304 product range includes tubes and pipes, SS pipes, SS tubes, stainless seamless pipes & tubes, etc., all offered at the best possible prices.

Alloy Steel Pipes in Mexico : A335 GR P1, P5, P9, P11, P12, P22, P23, P91. A213 GR T2, T5, T9, T11, T12, T22, T23, T91. Steel Pipes & Tubes in Mexico - Seamless, Welded, EFW, ERW, Fabricated / LSAW Pipes in Round, Square, Rectangular and Hydraulic.
Israel, Germany, Sudan, Thailand (Bangkok), Azerbaijan, Sri Lanka, Houston, Saudi Arabia, Tunisia, Mozambique, Iran, Malaysia, Canada, Russia, Morocco, Australia, Democratic Republic of the Congo, Angola, Iraq, China, Ethiopia, Mexico, Kuwait, Turkey, Trinidad and Tobago, Egypt, Dubai, Algeria, South Africa, Kazakhstan, Brazil, United States, Indonesia, Colombia, Argentina, Jordan, Africa, Ghana, Vietnam, Uganda, UK, New Zealand, Peru, Venezuela, UAE, Bahrain, Italy, Cameroon, London, Nigeria. Singapore, Lusaka, Durban, Mecca, Benin, Cairo, Yaoundé, Hong Kong, Dubai, Algiers, Doha, Tel Aviv, Bethlehem, Riyadh, Harare, Nairobi, Amman, Tehran, Istanbul, Cape Town, Kampala, Abidjan, Colombo, Manama, Tripoli, Beirut, Kolwezi, Abu Dhabi, Subra al-Haymah, Conakry, Accra, Jerusalem, Dar es Salaam, Mogadishu, New York, Douala, Muscat, Johannesburg, Ibadan, Byblos, Freetown, Aqaba, Addis Ababa, Kinshasa, Dammam, Soweto, Kaduna, Lagos, Port Harcourt, Antananarivo, Casablanca, Zaria, Maputo, Luanda, Port Elizabeth, Brazzaville, Omdurman, Maiduguri, Rabat, Kano, Khartoum, Bulawayo, Fez, Giza, Pretoria, Lubumbashi, Bamako, Mbuji-Mayi, Alexandria, Sharm el-Sheikh, Jeddah, Dakar, Ouagadougou.
# -*- coding: utf-8 -*-
from django.db import models
from django.utils.translation import ugettext_lazy as _

from videostore.models import Video

TICKET_STATUS_CHOICES = (
    ('pending', _('Pending view')),
    ('seen', _('Seen')),
    ('overdue', _('Overdue')),
    ('blocked', _('Blocked')),
)


class Server(models.Model):
    title = models.CharField(
        max_length=50,
        verbose_name=_('Title')
    )
    ip_address = models.CharField(
        max_length=50,
        verbose_name=_('IP address')
    )
    is_enabled = models.BooleanField(
        verbose_name=_('Enabled?'),
        default=True,
    )

    def __unicode__(self):
        return u'%s on %s' % (self.title, self.ip_address)

    class Meta:
        verbose_name = _(u'Server')
        verbose_name_plural = _(u'Servers')


class Ticket(models.Model):
    video = models.ForeignKey(
        Video,
        verbose_name=_('Video')
    )
    hash = models.CharField(
        max_length=50,
        editable=False,
        db_index=True,
    )
    created_at = models.DateTimeField(
        verbose_name=_('Created time'),
        auto_now_add=True,
    )
    seen_at = models.DateTimeField(
        verbose_name=_('Seen time'),
        null=True,
    )
    valid_to = models.DateTimeField(
        verbose_name=_('Valid to'),
        null=True,
    )
    status = models.CharField(
        max_length=50,
        choices=TICKET_STATUS_CHOICES,
        verbose_name=_('Status'),
        default='pending',
    )
    client_id = models.CharField(
        max_length=50,
        verbose_name=_('Client id'),
    )
    headers = models.TextField(
        verbose_name=_('Dump of viewer HTTP headers'),
        editable=False,
        null=True,
    )

    def __unicode__(self):
        return u'#%s %s %s' % (self.id, self.created_at, self.status)

    class Meta:
        verbose_name = _('Ticket')
        verbose_name_plural = _('Tickets')
Bujinkan Nottingham is located at 19 Greyhound Street, Nottingham, NG1 2DP. An interactive map of its location is shown below. You can get step-by-step driving directions to Bujinkan Nottingham.
from django import forms from django.conf import settings from django.forms import ModelForm, Form from django.forms.models import BaseInlineFormSet, inlineformset_factory from django.contrib.contenttypes.generic import BaseGenericInlineFormSet, generic_inlineformset_factory from django.utils.translation import ugettext as _ from crispy_forms.helper import FormHelper from crispy_forms.layout import * from crispy_forms.bootstrap import FormActions from filer.models.imagemodels import Image from django.contrib.admin import widgets as admin_widgets import autocomplete_light from alibrary.models import Media, Relation, Artist, MediaExtraartists from pagedown.widgets import PagedownWidget import selectable.forms as selectable from alibrary.lookups import ReleaseNameLookup, ArtistLookup, LicenseLookup import floppyforms as forms from django_date_extensions.fields import ApproximateDateFormField from ajax_select.fields import AutoCompleteSelectField from ajax_select import make_ajax_field from django.forms.widgets import FileInput, HiddenInput #from floppyforms.widgets import DateInput from tagging.forms import TagField from ac_tagging.widgets import TagAutocompleteTagIt from lib.widgets.widgets import ReadOnlyIconField ACTION_LAYOUT = action_layout = FormActions( HTML('<button type="submit" name="save" value="save" class="btn btn-primary pull-right ajax_submit" id="submit-id-save-i-classicon-arrow-upi"><i class="icon-save icon-white"></i> Save</button>'), HTML('<button type="reset" name="reset" value="reset" class="reset resetButton btn btn-secondary pull-right" id="reset-id-reset"><i class="icon-trash"></i> Cancel</button>'), ) ACTION_LAYOUT_EXTENDED = action_layout = FormActions( Field('publish', css_class='input-hidden' ), HTML('<button type="submit" name="save" value="save" class="btn btn-primary pull-right ajax_submit" id="submit-id-save-i-classicon-arrow-upi"><i class="icon-save icon-white"></i> Save</button>'), HTML('<button type="submit" name="save-and-publish" 
value="save" class="btn pull-right ajax_submit save-and-publish" id="submit-id-save-i-classicon-arrow-upi"><i class="icon-bullhorn icon-white"></i> Save & Publish</button>'), HTML('<button type="reset" name="reset" value="reset" class="reset resetButton btn btn-secondary pull-right" id="reset-id-reset"><i class="icon-trash"></i> Cancel</button>'), ) class MediaActionForm(Form): def __init__(self, *args, **kwargs): self.instance = kwargs.pop('instance', False) super(MediaActionForm, self).__init__(*args, **kwargs) self.helper = FormHelper() self.helper.form_class = 'form-horizontal' self.helper.form_tag = False self.helper.add_layout(ACTION_LAYOUT) """ if self.instance and self.instance.publish_date: self.helper.add_layout(ACTION_LAYOUT) else: self.helper.add_layout(ACTION_LAYOUT_EXTENDED) """ publish = forms.BooleanField(required=False) class MediaForm(ModelForm): class Meta: model = Media fields = ('name', 'description', 'artist', 'tracknumber', 'mediatype', 'license', 'release', 'd_tags', 'isrc', ) def __init__(self, *args, **kwargs): self.user = kwargs['initial']['user'] self.instance = kwargs['instance'] print self.instance print self.user.has_perm("alibrary.edit_release") print self.user.has_perm("alibrary.admin_release", self.instance) self.label = kwargs.pop('label', None) super(MediaForm, self).__init__(*args, **kwargs) """ Prototype function, set some fields to readonly depending on permissions """ """ if not self.user.has_perm("alibrary.admin_release", self.instance): self.fields['catalognumber'].widget.attrs['readonly'] = 'readonly' """ self.helper = FormHelper() self.helper.form_id = "id_feedback_form_%s" % 'asd' self.helper.form_class = 'form-horizontal' self.helper.form_method = 'post' self.helper.form_action = '' self.helper.form_tag = False base_layout = Fieldset( _('General'), LookupField('name', css_class='input-xlarge'), LookupField('release', css_class='input-xlarge'), LookupField('artist', css_class='input-xlarge'), LookupField('tracknumber', 
css_class='input-xlarge'), LookupField('mediatype', css_class='input-xlarge'), ) license_layout = Fieldset( _('License/Source'), Field('license', css_class='input-xlarge'), ) catalog_layout = Fieldset( _('Label/Catalog'), LookupField('label', css_class='input-xlarge'), LookupField('catalognumber', css_class='input-xlarge'), LookupField('release_country', css_class='input-xlarge'), # LookupField('releasedate', css_class='input-xlarge'), LookupField('releasedate_approx', css_class='input-xlarge'), ) meta_layout = Fieldset( 'Meta', LookupField('description', css_class='input-xxlarge'), ) tagging_layout = Fieldset( 'Tags', LookupField('d_tags'), ) identifiers_layout = Fieldset( _('Identifiers'), LookupField('isrc', css_class='input-xlarge'), ) layout = Layout( base_layout, # artist_layout, meta_layout, license_layout, tagging_layout, identifiers_layout, ) self.helper.add_layout(layout) d_tags = TagField(widget=TagAutocompleteTagIt(max_tags=9), required=False, label=_('Tags')) release = selectable.AutoCompleteSelectField(ReleaseNameLookup, allow_new=True, required=True) """ extra_artists = forms.ModelChoiceField(Artist.objects.all(), widget=autocomplete_light.ChoiceWidget('ArtistAutocomplete'), required=False) """ artist = selectable.AutoCompleteSelectField(ArtistLookup, allow_new=True, required=False) description = forms.CharField(widget=PagedownWidget(), required=False, help_text="Markdown enabled text") #license = selectable.AutoCompleteSelectField(LicenseLookup, widget=selectable.AutoComboboxSelectWidget(lookup_class=LicenseLookup), allow_new=False, required=False, label=_('License')) # aliases = selectable.AutoCompleteSelectMultipleField(ArtistLookup, required=False) # aliases = make_ajax_field(Media,'aliases','aliases',help_text=None) #members = selectable.AutoCompleteSelectMultipleField(ArtistLookup, required=False) def clean(self, *args, **kwargs): cd = super(MediaForm, self).clean() print "*************************************" print cd print 
"*************************************" """ if 'main_image' in cd and cd['main_image'] != None: try: ui = cd['main_image'] dj_file = DjangoFile(open(ui.temporary_file_path()), name='cover.jpg') cd['main_image'], created = Image.objects.get_or_create( original_filename='cover_%s.jpg' % self.instance.pk, file=dj_file, folder=self.instance.folder, is_public=True) except Exception, e: print e pass else: cd['main_image'] = self.instance.main_image """ return cd # TODO: take a look at save def save(self, *args, **kwargs): return super(MediaForm, self).save(*args, **kwargs) """ Album Artists """ class BaseExtraartistFormSet(BaseInlineFormSet): def __init__(self, *args, **kwargs): self.instance = kwargs['instance'] super(BaseExtraartistFormSet, self).__init__(*args, **kwargs) self.helper = FormHelper() self.helper.form_id = "id_artists_form_%s" % 'inline' self.helper.form_class = 'form-horizontal' self.helper.form_method = 'post' self.helper.form_action = '' self.helper.form_tag = False base_layout = Row( Column( Field('artist', css_class='input-large'), css_class='span5' ), Column( Field('profession', css_class='input-large'), css_class='span5' ), Column( Field('DELETE', css_class='input-mini'), css_class='span2' ), css_class='albumartist-row row-fluid form-autogrow', ) self.helper.add_layout(base_layout) def add_fields(self, form, index): # allow the super class to create the fields as usual super(BaseExtraartistFormSet, self).add_fields(form, index) # created the nested formset try: instance = self.get_queryset()[index] pk_value = instance.pk except IndexError: instance=None pk_value = hash(form.prefix) class BaseExtraartistForm(ModelForm): class Meta: model = MediaExtraartists parent_model = Media fields = ('artist','profession',) def __init__(self, *args, **kwargs): super(BaseExtraartistForm, self).__init__(*args, **kwargs) instance = getattr(self, 'instance', None) artist = selectable.AutoCompleteSelectField(ArtistLookup, allow_new=True, required=False) class 
BaseMediaReleationFormSet(BaseGenericInlineFormSet): def __init__(self, *args, **kwargs): self.instance = kwargs['instance'] super(BaseMediaReleationFormSet, self).__init__(*args, **kwargs) self.helper = FormHelper() self.helper.form_id = "id_releasemediainline_form_%s" % 'asdfds' self.helper.form_class = 'form-horizontal' self.helper.form_method = 'post' self.helper.form_action = '' self.helper.form_tag = False base_layout = Row( Column( Field('url', css_class='input-xlarge'), css_class='span6 relation-url' ), Column( Field('service', css_class='input-mini'), css_class='span4' ), Column( Field('DELETE', css_class='input-mini'), css_class='span2' ), css_class='row-fluid relation-row form-autogrow', ) self.helper.add_layout(base_layout) class BaseMediaReleationForm(ModelForm): class Meta: model = Relation parent_model = Media formset = BaseMediaReleationFormSet fields = ('url','service',) def __init__(self, *args, **kwargs): super(BaseMediaReleationForm, self).__init__(*args, **kwargs) instance = getattr(self, 'instance', None) self.fields['service'].widget.instance = instance if instance and instance.id: self.fields['service'].widget.attrs['readonly'] = True def clean_service(self): return self.instance.service service = forms.CharField(label='', widget=ReadOnlyIconField(), required=False) url = forms.URLField(label=_('Website / URL'), required=False) # Compose Formsets MediaRelationFormSet = generic_inlineformset_factory(Relation, form=BaseMediaReleationForm, formset=BaseMediaReleationFormSet, extra=10, exclude=('action',), can_delete=True) ExtraartistFormSet = inlineformset_factory(Media, MediaExtraartists, form=BaseExtraartistForm, formset=BaseExtraartistFormSet, fk_name = 'media', extra=10, #exclude=('position',), can_delete=True, can_order=False,)
Starting to feel more adjusted to the temperature, climate and culture. Remembering slowly why I came here in the first place. I feel today this is where I want to be and am much more relaxed than at home, it’s like an alternate reality. Sarah and I were talking about how much we prefer feeling closer to the culture of a place as opposed to sipping martinis on a beach being pampered. We have witnessed a scooter accident and a tourist evacuating his stomach from our balcony. Everything feels different and makes me realize I can get by on so much less. The Thai people seem to be a very happy people. Neither of us are in a vacation mode right now. We don’t mind missing out on the tourist sites to just relax, work on our projects and experience Thailand. We head to Lampang on Wednesday. We chose it because it’s in a completely different part of Thailand and supposedly isn’t so full of tourists. We will, however, be staying at a Guesthouse instead of Airbnb, so I expect a different experience.
#!/usr/bin/env python # -*- coding: utf-8 -*- from sys import argv import os def header_file( ): """ #ifndef SUBSYS_%ss-name%_H #define SUBSYS_%ss-name%_H #include "application.h" namespace msctl { namespace agent { class %ss-name%: public common::subsys_iface { struct impl; friend struct impl; impl *impl_; public: %ss-name%( application *app ); static std::shared_ptr<%ss-name%> create( application *app ); static const char *name( ) { return "%ss-name%"; } private: void init( ) override; void start( ) override; void stop( ) override; }; }} #endif // SUBSYS_%ss-name%_H """ return header_file.__doc__ def source_file( ): """ #include "subsys-%ss-name%.h" #define LOG(lev) log_(lev, "%ss-name%") #define LOGINF LOG(logger_impl::level::info) #define LOGDBG LOG(logger_impl::level::debug) #define LOGERR LOG(logger_impl::level::error) #define LOGWRN LOG(logger_impl::level::warning) namespace msctl { namespace agent { struct %ss-name%::impl { application *app_; %ss-name% *parent_; logger_impl &log_; impl( application *app ) :app_(app) ,log_(app_->log( )) { } }; %ss-name%::%ss-name%( application *app ) :impl_(new impl(app)) { impl_->parent_ = this; } void %ss-name%::init( ) { } void %ss-name%::start( ) { impl_->LOGINF << "Started."; } void %ss-name%::stop( ) { impl_->LOGINF << "Stopped."; } std::shared_ptr<%ss-name%> %ss-name%::create( application *app ) { return std::make_shared<%ss-name%>( app ); } }} """ return source_file.__doc__ def usage( ): """ usage: addsubsys.py <subsystem-name> """ print( usage.__doc__ ) def fix_iface_inc( ss_name ): src_path = os.path.join( 'subsys.inc' ) s = open( src_path, 'r' ); content = s.readlines( ) s.close() content.append( '#include "subsys-' + ss_name + '.h"\n') s = open( src_path, 'w' ); s.writelines( content ) if __name__ == '__main__': if len( argv ) < 2: usage( ) exit( 1 ) ss_file = argv[1] ss_name = ss_file # ss_file.replace( '-', '_' ) src_name = 'subsys-' + ss_file + '.cpp'; hdr_name = 'subsys-' + ss_file + '.h'; if os.path.exists( 
src_name ) or os.path.exists( hdr_name ): print ( "File already exists" ) exit(1) src_content = source_file( ).replace( '%ss-name%', ss_name ) hdr_content = header_file( ).replace( '%ss-name%', ss_name ) s = open( src_name, 'w' ); s.write( src_content ) h = open( hdr_name, 'w' ); h.write( hdr_content ) fix_iface_inc( ss_name )
Adtext subtitles over 5,500 commercials a year, making us the market leader in commercial subtitling. Digitally integrated with all the leading commercial distribution companies, Adtext can offer clients a single, combined service, delivering the highest quality subtitles in the fastest turnaround times across an extended working day. Adtext is the market leader for commercial subtitling (c.90%). Fully integrated with all the leading distribution companies - BEAM.TV, IMD and Adstream. Digitally streamlined subtitling operation – reducing turnaround times. Dedicated in-house team of six experienced commercial subtitlers working a shift system from 09.30 to 21.00, and later if required. Uniquely placed to provide the fastest, highest quality and most efficient service for clients. Significant investment in cutting-edge technology. 1 in 7 people in the UK (8.94 million)* are hard of hearing, which is 14% of the population. 1 in 6 (10.89 million people)* use subtitles sometimes, and 3.56 million use them all or most of the time. Subtitles are also used by people who live in houses where it is difficult to hear the television – such as student houses and homes with small children – and are frequently switched on in gyms, bars and airports. Subtitles are used by people who speak English as a second language. There is a legal requirement for commercial broadcasters to subtitle. ITV and C4 currently subtitle 90% of all programming, and the BBC now subtitles 100% of all programming. The majority of the UK's national advertisers now subtitle as a matter of policy. It is IPA (Institute of Practitioners in Advertising) policy. Over 6,000 commercials are subtitled every year.
# -*- coding: UTF-8 -*-
import os
import threading

import django.conf

from HowOldWebsite.models import RecordFace
from HowOldWebsite.utils.image import do_imread
from HowOldWebsite.utils.image import do_rgb2gray
from HowOldWebsite.utils.language import reflect_get_class

__author__ = 'Hao Yu'


class UtilTrainer:
    __busy = False

    # __threads = []

    @classmethod
    def is_busy(cls):
        return cls.__busy

    @classmethod
    def train(cls, model_names):
        if cls.is_busy():
            return False
        cls.__busy = True

        # Run the (potentially long) training in a background thread so the
        # calling request returns immediately.
        th = threading.Thread(target=UtilTrainer.__train_main,
                              args=(model_names,))
        th.start()
        return True

    @classmethod
    def __train_main(cls, model_names):
        model_names = [m.lower() for m in model_names]

        print("=" * 10 + " Train Start " + "=" * 10)
        try:
            faces = RecordFace.objects.filter(used_flag=1)
            if not django.conf.settings.DEBUG:
                if len(faces) < 100:
                    print("Error: The training set is too small.")
                    print("\t Skip the training!")
                    raise Exception()

            image_jar = dict()
            feature_jar = dict()
            target_jar = dict()
            estimator_jar = dict()
            threads = list()

            # Get estimator class
            for m in model_names:
                class_estimator = 'HowOldWebsite.estimators.estimator_{}.Estimator{}'.format(
                    m, m.capitalize()
                )
                estimator_jar[m] = reflect_get_class(class_estimator)

            for face in faces:
                face_id = face.id

                # Get image
                face_filename_color = os.path.join(
                    django.conf.settings.SAVE_DIR['FACE'],
                    str(face_id) + '.jpg'
                )
                # face_filename_gray = os.path.join(SAVE_DIR['FACE_GRAY'], str(face_id) + '.jpg')
                cv_face_image = do_imread(face_filename_color)
                cv_face_gray = do_rgb2gray(cv_face_image)

                if 'rgb' not in image_jar.keys():
                    image_jar['rgb'] = list()
                image_jar['rgb'].append(cv_face_image)

                if 'gray' not in image_jar.keys():
                    image_jar['gray'] = list()
                image_jar['gray'].append(cv_face_gray)

                # Get target
                if 'sex' not in target_jar.keys():
                    target_jar['sex'] = list()
                target_jar['sex'].append((face.recordsex_set.first()).value_user)

                if 'age' not in target_jar.keys():
                    target_jar['age'] = list()
target_jar['age'].append((face.recordage_set.first()).value_user) if 'smile' not in target_jar.keys(): target_jar['smile'] = list() target_jar['smile'].append((face.recordsmile_set.first()).value_user) # Extract features for m in model_names: feature_jar = estimator_jar[m].feature_extract(feature_jar, image_jar) # Train for m in model_names: th = threading.Thread(target=cls.__do_thread_train, args=(m, estimator_jar[m], feature_jar, target_jar[m]) ) threads.append(th) th.start() for item in threads: item.join() # Change the used flag if not django.conf.settings.DEBUG: faces.update(used_flag=2) except Exception as e: # print(e) print("Error occurred while training") pass print("=" * 10 + " Train Finish " + "=" * 10) # Set the busy flag UtilTrainer.__busy = False @classmethod def __do_thread_train(cls, model_name, estimator, feature_jar, target): print("{} Start".format(model_name.capitalize())) try: class_worker = 'HowOldWebsite.trainers.trainer_{}.Trainer{}'.format( model_name, model_name.capitalize() ) obj_worker = reflect_get_class(class_worker) worker = obj_worker(estimator) worker.train(feature_jar, target) except Exception as e: print(e) pass print("{} OK".format(model_name.capitalize()))
Honda “Walk Away” Storyboards and concept art. Production Company: 1st Ave. Machine. Stride Mystery Gum Storyboards.
# -*- coding: utf-8 -*- """ Build a wiki database from scratch. You should run this the FIRST TIME you install your wiki. """ # Imports import sys import os import shutil import time from copy import copy import __init__ # woo hackmagic __directory__ = os.path.dirname(__file__) share_directory = os.path.abspath(os.path.join(__directory__, '..', 'share')) sys.path.extend([share_directory]) from Sycamore import wikidb from Sycamore import config from Sycamore import wikiutil from Sycamore import maintenance from Sycamore import wikiacl from Sycamore.wikiutil import quoteFilename, unquoteFilename from Sycamore.action import Files class FlatPage(object): """ A basic flat page object containing text and possibly files to be imported. """ def __init__(self, text=""): self.text = text self.files = [] self.acl = None def add_file(self, filename, filecontent): self.files.append((filename, filecontent)) def parseACL(text): groupdict = None lines = text.split('\n') if lines: groupdict = {} for line in lines: line = line.strip() if not line: continue groupname = line[:line.find(':')] rights = line[line.find(':')+1:].split(',') for right in rights: if right == 'none': groupdict[groupname] = [False, False, False, False] break if not groupdict.has_key(groupname): groupdict[groupname] = [False, False, False, False] groupdict[groupname][wikiacl.ACL_RIGHTS_TABLE[right]] = True return groupdict def init_basic_pages(prefix='common'): """ Initializes basic pages from share/initial_pages directory. 
""" pages = {} # We do the basic database population here page_list = map(unquoteFilename, filter(lambda x: not x.startswith('.'), os.listdir(os.path.join(share_directory, 'initial_pages', prefix)))) for pagename in page_list: page_loc = os.path.join(share_directory, 'initial_pages', prefix, quoteFilename(pagename)) page_text_file = open(os.path.join(page_loc, "text")) page_text = ''.join(page_text_file.readlines()) page_text_file.close() pages[pagename] = FlatPage(text=page_text) if os.path.exists(os.path.join(page_loc, "files")): file_list = map(unquoteFilename, filter(lambda x: not x.startswith('.'), os.listdir(os.path.join(page_loc, "files")))) for filename in file_list: file = open(os.path.join(page_loc, "files", quoteFilename(filename))) file_content = ''.join(file.readlines()) file.close() pages[pagename].files.append((filename, file_content)) if os.path.exists(os.path.join(page_loc, "acl")): file = open(os.path.join(page_loc, "acl"), "r") text = ''.join(file.readlines()) file.close() pages[pagename].acl = parseACL(text) return pages def init_db(cursor): if config.db_type == 'postgres': cursor.execute("""CREATE FUNCTION UNIX_TIMESTAMP(timestamp) RETURNS integer AS 'SELECT date_part(''epoch'', $1)::int4 AS result' language 'sql'""", isWrite=True) def create_tables(cursor): print "creating tables.." 
if config.db_type == 'mysql': cursor.execute("""create table curPages ( name varchar(100) not null, text mediumtext, cachedText mediumblob, editTime double, cachedTime double, userEdited char(20), propercased_name varchar(100) not null, wiki_id int, primary key (name, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table curPages ( name varchar(100) not null, text text, cachedText bytea, editTime double precision, cachedTime double precision, userEdited char(20), propercased_name varchar(100) not null, wiki_id int, primary key (name, wiki_id) )""", isWrite=True) cursor.execute("CREATE INDEX curPages_userEdited on curPages (userEdited)") if config.db_type == 'mysql': cursor.execute("""create table allPages ( name varchar(100) not null, text mediumtext, editTime double, userEdited char(20), editType CHAR(30) CHECK (editType in ('SAVE','SAVENEW','ATTNEW','ATTDEL','RENAME','NEWEVENT', 'COMMENT_MACRO','SAVE/REVERT','DELETE', 'SAVEMAP')), comment varchar(194), userIP char(16), propercased_name varchar(100) not null, wiki_id int, primary key(name, editTime, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table allPages ( name varchar(100) not null, text text, editTime double precision, userEdited char(20), editType CHAR(30) CHECK (editType in ('SAVE','SAVENEW','ATTNEW','ATTDEL','RENAME','NEWEVENT', 'COMMENT_MACRO','SAVE/REVERT','DELETE', 'SAVEMAP')), comment varchar(194), userIP inet, propercased_name varchar(100) not null, wiki_id int, primary key (name, editTime, wiki_id) );""", isWrite=True) cursor.execute("""CREATE INDEX allPages_userEdited on allPages (userEdited);""", isWrite=True) cursor.execute("""CREATE INDEX allPages_userIP on allPages (userIP);""", isWrite=True) cursor.execute("""CREATE INDEX editTime_wiki_id on allPages (editTime, wiki_id);""", isWrite=True) #for local-wiki changes cursor.execute("""CREATE INDEX 
editTime on allPages (editTime);""", isWrite=True) # global changes if config.db_type == 'mysql': cursor.execute("""create table users ( id char(20) primary key not null, name varchar(100) unique not null, email varchar(255), enc_password varchar(255), language varchar(80), remember_me tinyint, css_url varchar(255), disabled tinyint, edit_cols smallint, edit_rows smallint, edit_on_doubleclick tinyint, theme_name varchar(40), last_saved double, join_date double, created_count int default 0, edit_count int default 0, file_count int default 0, last_page_edited varchar(255), last_edit_date double, rc_bookmark double, rc_showcomments tinyint default 1, tz varchar(50), propercased_name varchar(100) not null, last_wiki_edited int, wiki_for_userpage varchar(100), rc_group_by_wiki BOOLEAN default false ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table users ( id char(20) primary key not null, name varchar(100) unique not null, email varchar(255), enc_password varchar(255), language varchar(80), remember_me smallint, css_url varchar(255), disabled smallint, edit_cols smallint, edit_rows smallint, edit_on_doubleclick smallint, theme_name varchar(40), last_saved double precision, join_date double precision, created_count int default 0, edit_count int default 0, file_count int default 0, last_page_edited varchar(255), last_edit_date double precision, rc_bookmark double precision, rc_showcomments smallint default 1, tz varchar(50), propercased_name varchar(100) not null, last_wiki_edited int, wiki_for_userpage varchar(100), rc_group_by_wiki boolean default false, CHECK (disabled IN ('0', '1')), CHECK (remember_me IN ('0', '1')), CHECK (rc_showcomments IN ('0', '1')) );""", isWrite=True) cursor.execute("CREATE INDEX users_name on users (name);", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table userFavorites ( username varchar(100) not null, page varchar(100) not null, viewTime double, 
wiki_name varchar(100) not null, primary key (username, page, wiki_name) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table userFavorites ( username varchar(100) not null, page varchar(100) not null, viewTime double precision, wiki_name varchar(100) not null, primary key (username, page, wiki_name) );""", isWrite=True) cursor.execute("""CREATE INDEX userfavorites_username on userFavorites(username);""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table userWatchedWikis ( username varchar(100) not null, wiki_name varchar(100) not null, primary key (username, wiki_name) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table userWatchedWikis ( username varchar(100) not null, wiki_name varchar(100) not null, primary key (username, wiki_name) );""", isWrite=True) cursor.execute("""CREATE INDEX userWatchedWikis_username on userWatchedWikis (username);""", isWrite=True) cursor.execute("""CREATE INDEX userWatchedWikis_wiki_name on userWatchedWikis (wiki_name);""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table userPageOnWikis ( username varchar(100) not null, wiki_name varchar(100) not null, primary key (username, wiki_name) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table userPageOnWikis ( username varchar(100) not null, wiki_name varchar(100) not null, primary key (username, wiki_name) );""", isWrite=True) cursor.execute("""CREATE INDEX userPageOnWikis_username on userPageOnWikis (username);""", isWrite=True) if config.db_type == 'mysql': #This is throw-away data. 
User sessions aren't that important # so we'll use a MyISAM table for speed cursor.execute("""create table userSessions ( user_id char(20) not null, session_id char(28) not null, secret char(28) not null, expire_time double, primary key (user_id, session_id) ) ENGINE=MyISAM CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table userSessions ( user_id char(20) not null, session_id char(28) not null, secret char(28) not null, expire_time double precision, primary key (user_id, session_id) );""", isWrite=True) cursor.execute("""CREATE INDEX userSessions_expire_time on userSessions (expire_time);""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table links ( source_pagename varchar(100) not null, destination_pagename varchar(100) not null, destination_pagename_propercased varchar(100) not null, wiki_id int, primary key (source_pagename, destination_pagename, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table links ( source_pagename varchar(100) not null, destination_pagename varchar(100) not null, destination_pagename_propercased varchar(100) not null, wiki_id int, primary key (source_pagename, destination_pagename, wiki_id) );""", isWrite=True) cursor.execute("""CREATE INDEX links_source_pagename_wiki_id on links (source_pagename, wiki_id);""", isWrite=True) cursor.execute("""CREATE INDEX links_destination_pagename_wiki_id on links (destination_pagename, wiki_id);""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table events ( uid int not null AUTO_INCREMENT primary key, event_time double not null, posted_by varchar(100), text mediumtext not null, location mediumtext not null, event_name mediumtext not null, posted_by_ip char(16), posted_time double, wiki_id int ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) cursor.execute("ALTER TABLE events AUTO_INCREMENT = 1;", isWrite=True) elif 
config.db_type == 'postgres': cursor.execute("""CREATE sequence events_seq start 1 increment 1;""", isWrite=True) cursor.execute("""create table events ( uid int primary key not null, event_time double precision not null, posted_by varchar(100), text text not null, location text not null, event_name text not null, posted_by_ip inet, posted_time double precision, wiki_id int );""", isWrite=True) cursor.execute("""CREATE INDEX events_event_time on events (event_time, wiki_id);""", isWrite=True) cursor.execute("""CREATE INDEX events_posted_by on events (posted_by);""", isWrite=True) cursor.execute("""CREATE INDEX events_posted_by_ip on events (posted_by_ip);""", isWrite=True) cursor.execute("""CREATE INDEX events_posted_time on events (posted_time);""", isWrite=True) # global events cursor.execute("""CREATE INDEX events_posted_time_wiki_id on events (posted_time, wiki_id);""", isWrite=True) # global events if config.db_type == 'mysql': cursor.execute("""create table files ( name varchar(100) not null, file mediumblob not null, uploaded_time double not null, uploaded_by char(20), attached_to_pagename varchar(255) not null, uploaded_by_ip char(16), attached_to_pagename_propercased varchar(255) not null, wiki_id int, primary key (name, attached_to_pagename, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table files ( name varchar(100) not null, file bytea not null, uploaded_time double precision not null, uploaded_by char(20), attached_to_pagename varchar(255) not null, uploaded_by_ip inet, attached_to_pagename_propercased varchar(255) not null, wiki_id int, primary key (name, attached_to_pagename, wiki_id) );""", isWrite=True) cursor.execute("""CREATE INDEX files_uploaded_by on files (uploaded_by);""", isWrite=True) cursor.execute("""CREATE INDEX files_uploaded_time on files (uploaded_time);""", isWrite=True) # global rc cursor.execute("""CREATE INDEX files_uploaded_time_wiki_id on files 
(uploaded_time, wiki_id);""", isWrite=True) # local rc if config.db_type == 'mysql': cursor.execute("""create table oldFiles ( name varchar(100) not null, file mediumblob not null, uploaded_time double not null, uploaded_by char(20), attached_to_pagename varchar(255) not null, deleted_time double, deleted_by char(20), uploaded_by_ip char(16), deleted_by_ip char(16), attached_to_pagename_propercased varchar(255) not null, wiki_id int, primary key (name, attached_to_pagename, uploaded_time, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table oldFiles ( name varchar(100) not null, file bytea not null, uploaded_time double precision not null, uploaded_by char(20), attached_to_pagename varchar(255) not null, deleted_time double precision, deleted_by char(20), uploaded_by_ip inet, deleted_by_ip inet, attached_to_pagename_propercased varchar(255) not null, wiki_id int, primary key (name, attached_to_pagename, uploaded_time, wiki_id) );""", isWrite=True) cursor.execute("""CREATE INDEX oldFiles_deleted_time on oldFiles (deleted_time);""", isWrite=True) # global rc cursor.execute("""CREATE INDEX oldFiles_deleted_time_wiki_id on oldFiles (deleted_time, wiki_id);""", isWrite=True) # local rc if config.db_type == 'mysql': #throw-away and easily regenerated data cursor.execute("""create table thumbnails ( xsize smallint, ysize smallint, name varchar(100) not null, attached_to_pagename varchar(100) not null, image mediumblob not null, last_modified double, wiki_id int, primary key (name, attached_to_pagename, wiki_id) ) ENGINE=MyISAM CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table thumbnails ( xsize smallint, ysize smallint, name varchar(100) not null, attached_to_pagename varchar(100) not null, image bytea not null, last_modified double precision, wiki_id int, primary key (name, attached_to_pagename, wiki_id) );""", isWrite=True) if config.db_type 
== 'mysql': cursor.execute("""create table imageInfo ( name varchar(100) not null, attached_to_pagename varchar(255) not null, xsize smallint, ysize smallint, wiki_id int, primary key (name, attached_to_pagename, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table imageInfo ( name varchar(100) not null, attached_to_pagename varchar(255) not null, xsize smallint, ysize smallint, wiki_id int, primary key (name, attached_to_pagename, wiki_id) );""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table oldImageInfo ( name varchar(100) not null, attached_to_pagename varchar(255) not null, xsize smallint, ysize smallint, uploaded_time double not null, wiki_id int, primary key (name, attached_to_pagename, uploaded_time, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table oldImageInfo ( name varchar(100) not null, attached_to_pagename varchar(255) not null, xsize smallint, ysize smallint, uploaded_time double precision not null, wiki_id int, primary key (name, attached_to_pagename, uploaded_time, wiki_id) );""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table imageCaptions ( image_name varchar(100) not null, attached_to_pagename varchar(100) not null, linked_from_pagename varchar(100), caption text not null, wiki_id int, primary key (image_name, attached_to_pagename, linked_from_pagename, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table imageCaptions ( image_name varchar(100) not null, attached_to_pagename varchar(100) not null, linked_from_pagename varchar(100), caption text not null, wiki_id int, primary key (image_name, attached_to_pagename, linked_from_pagename, wiki_id) );""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table mapCategoryDefinitions ( id int not 
null, img varchar(100), name varchar(100) not null, wiki_id int, primary key (id, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table mapCategoryDefinitions ( id int not null, img varchar(100), name varchar(100) not null, wiki_id int, primary key (id, wiki_id) );""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table mapPoints ( pagename varchar(100) not null, x varchar(100) not null, y varchar(100) not null, created_time double, created_by char(20), created_by_ip char(16), pagename_propercased varchar(100) not null, address varchar(255), wiki_id int, primary key (pagename, x, y, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table mapPoints ( pagename varchar(100) not null, x varchar(100) not null, y varchar(100) not null, created_time double precision, created_by char(20), created_by_ip inet, pagename_propercased varchar(100) not null, address varchar(255), wiki_id int, primary key (pagename, x, y, wiki_id) );""", isWrite=True) cursor.execute("""CREATE INDEX mapPoints_pagename_wiki_id on mapPoints (pagename, wiki_id);""", isWrite=True) cursor.execute("""CREATE INDEX mapPoints_x on mapPoints (x);""", isWrite=True) cursor.execute("""CREATE INDEX mapPoints_y on mapPoints (y);""", isWrite=True) cursor.execute("""CREATE INDEX mapPoints_wiki on mapPoints (wiki_id);""", isWrite=True) cursor.execute("""CREATE INDEX mapPoints_created_time on mapPoints (created_time);""", isWrite=True) # global rc cursor.execute("""CREATE INDEX mapPoints_created_time_wiki_id on mapPoints (created_time, wiki_id);""", isWrite=True) # local rc cursor.execute("""CREATE INDEX mapPoints_address on mapPoints (address);""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table oldMapPoints ( pagename varchar(100) not null, x varchar(100) not null, y varchar(100) not null, created_time double, created_by 
char(20), created_by_ip char(16), deleted_time double, deleted_by char(20), deleted_by_ip char(16), pagename_propercased varchar(100) not null, address varchar(255), wiki_id int, primary key (pagename, x, y, deleted_time, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table oldMapPoints ( pagename varchar(100) not null, x varchar(100) not null, y varchar(100) not null, created_time double precision, created_by char(20), created_by_ip inet, deleted_time double precision, deleted_by char(20), deleted_by_ip inet, pagename_propercased varchar(100) not null, address varchar(255), wiki_id int, primary key (pagename, x, y, deleted_time, wiki_id) );""", isWrite=True) cursor.execute("""CREATE INDEX oldMapPoints_deleted_time on oldMapPoints (deleted_time);""", isWrite=True) # global rc cursor.execute("""CREATE INDEX oldMapPoints_deleted_time_wiki_id on oldMapPoints (deleted_time, wiki_id);""", isWrite=True) # local rc cursor.execute("""CREATE INDEX oldMapPoints_created_time on oldMapPoints (created_time);""", isWrite=True) # global rc cursor.execute("""CREATE INDEX oldMapPoints_created_time_wiki_id on oldMapPoints (created_time, wiki_id);""", isWrite=True) # local rc if config.db_type == 'mysql': cursor.execute("""create table mapPointCategories ( pagename varchar(100) not null, x varchar(100) not null, y varchar(100) not null, id int not null, wiki_id int, primary key (pagename, x, y, id, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table mapPointCategories ( pagename varchar(100) not null, x varchar(100) not null, y varchar(100) not null, id int not null, wiki_id int, primary key (pagename, x, y, id, wiki_id) );""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table oldMapPointCategories ( pagename varchar(100) not null, x varchar(100) not null, y varchar(100) not null, id int not null, 
deleted_time double, wiki_id int, primary key (pagename, x, y, id, deleted_time, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table oldMapPointCategories ( pagename varchar(100) not null, x varchar(100) not null, y varchar(100) not null, id int not null, deleted_time double precision, wiki_id int, primary key (pagename, x, y, id, deleted_time, wiki_id) );""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table pageDependencies ( page_that_depends varchar(100) not null, source_page varchar(100) not null, wiki_id int, primary key (page_that_depends, source_page, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table pageDependencies ( page_that_depends varchar(100) not null, source_page varchar(100) not null, wiki_id int, primary key (page_that_depends, source_page, wiki_id) );""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table metadata ( pagename varchar(100), type varchar(100), name varchar(100), value varchar(100), wiki_id int, primary key (pagename, type, name, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table metadata ( pagename varchar(100), type varchar(100), name varchar(100), value varchar(100), wiki_id int, primary key (pagename, type, name, wiki_id) );""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table wikis ( id int not null AUTO_INCREMENT, name varchar(100) unique not null, domain varchar(64), is_disabled BOOLEAN, sitename varchar(100), other_settings mediumblob, primary key (id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) cursor.execute("ALTER TABLE wikis AUTO_INCREMENT = 1;", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table wikis ( id int not null, name varchar(100) unique not null, domain varchar(64), 
is_disabled boolean, sitename varchar(100), other_settings bytea, primary key (id) )""", isWrite=True) cursor.execute("CREATE sequence wikis_seq start 1 increment 1;", isWrite=True) cursor.execute("CREATE INDEX wikis_name on wikis (name);", isWrite=True) cursor.execute("CREATE INDEX wikis_domain on wikis (domain);", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table userWikiInfo ( user_name varchar(100) not null, wiki_id int, first_edit_date double, created_count int default 0, edit_count int default 0, file_count int default 0, last_page_edited varchar(100), last_edit_date double, rc_bookmark double, primary key (user_name, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table userWikiInfo ( user_name varchar(100) not null, wiki_id int, first_edit_date double precision, created_count int default 0, edit_count int default 0, file_count int default 0, last_page_edited varchar(100), last_edit_date double precision, rc_bookmark double precision, primary key (user_name, wiki_id) )""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table pageAcls ( pagename varchar(100) not null, groupname varchar(100) not null, wiki_id int, may_read BOOLEAN, may_edit BOOLEAN, may_delete BOOLEAN, may_admin BOOLEAN, primary key (pagename, groupname, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table pageAcls ( pagename varchar(100) not null, groupname varchar(100) not null, wiki_id int, may_read boolean, may_edit boolean, may_delete boolean, may_admin boolean, primary key (pagename, groupname, wiki_id) )""", isWrite=True) cursor.execute("""CREATE INDEX pageAcls_pagename_wiki on pageAcls (pagename, wiki_id);""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table userGroups ( username varchar(100) not null, groupname varchar(100) not null, wiki_id int, primary key 
(username, groupname, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table userGroups ( username varchar(100) not null, groupname varchar(100) not null, wiki_id int, primary key (username, groupname, wiki_id) )""", isWrite=True) cursor.execute("""CREATE INDEX user_groups_group_wiki on userGroups (groupname, wiki_id);""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table userGroupsIPs ( ip char(16) not null, groupname varchar(100) not null, wiki_id int, primary key (ip, groupname, wiki_id) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table userGroupsIPs ( ip inet not null, groupname varchar(100) not null, wiki_id int, primary key (ip, groupname, wiki_id) )""", isWrite=True) cursor.execute("""CREATE INDEX user_groups_ip_ips on userGroupsIPs (groupname, wiki_id);""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table lostPasswords ( uid char(20) not null, code varchar(255), written_time double, primary key (uid, code, written_time) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table lostPasswords ( uid char(20) not null, code varchar(255), written_time double precision, primary key (uid, code, written_time) )""", isWrite=True) cursor.execute("""CREATE INDEX lostpasswords_uid on lostPasswords (uid);""", isWrite=True) cursor.execute("""CREATE INDEX lostpasswords_written_time on lostPasswords (written_time);""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table wikisPending ( wiki_name varchar(100) not null, code varchar(255) not null, written_time double not null, primary key (wiki_name, code, written_time) ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table wikisPending ( wiki_name varchar(100) not null, code 
varchar(255) not null, written_time double precision not null, primary key (wiki_name, code, written_time) )""", isWrite=True) cursor.execute("""CREATE INDEX wikispending_written_time on wikisPending (written_time);""", isWrite=True) if config.db_type == 'mysql': cursor.execute("""create table captchas ( id char(33) primary key, secret varchar(100) not null, human_readable_secret mediumblob, written_time double ) ENGINE=InnoDB CHARACTER SET utf8;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""create table captchas ( id char(33) primary key, secret varchar(100) not null, human_readable_secret bytea, written_time double precision )""", isWrite=True) cursor.execute("""CREATE INDEX captchas_written_time on captchas (written_time);""", isWrite=True) print "tables created" def create_views(cursor): print "creating views..." if config.db_type == 'mysql': cursor.execute("""CREATE VIEW eventChanges as SELECT 'Events Board'as name, events.posted_time as changeTime, users.id as id, 'NEWEVENT' as editType, events.event_name as comment, events.posted_by_IP as userIP, 'Events Board' as propercased_name, events.wiki_id as wiki_id from events, users where users.propercased_name=events.posted_by;""", isWrite=True) cursor.execute("""CREATE VIEW deletedFileChanges as SELECT oldFiles.attached_to_pagename as name, oldFiles.deleted_time as changeTime, oldFiles.deleted_by as id, 'ATTDEL' as editType, name as comment, oldFiles.deleted_by_ip as userIP, oldFiles.attached_to_pagename_propercased as propercased_name, oldFiles.wiki_id as wiki_id from oldFiles;""", isWrite=True) cursor.execute("""CREATE VIEW oldFileChanges as SELECT oldFiles.attached_to_pagename as name, oldFiles.uploaded_time as changeTime, oldFiles.uploaded_by as id, 'ATTNEW' as editType, name as comment, oldFiles.uploaded_by_ip as userIP, oldFiles.attached_to_pagename_propercased as propercased_name, oldFiles.wiki_id as wiki_id from oldFiles;""", isWrite=True) cursor.execute("""CREATE VIEW 
currentFileChanges as SELECT files.attached_to_pagename as name, files.uploaded_time as changeTime, files.uploaded_by as id, 'ATTNEW' as editType, name as comment, files.uploaded_by_ip as userIP, files.attached_to_pagename_propercased as propercased_name, files.wiki_id as wiki_id from files;""", isWrite=True) cursor.execute("""CREATE VIEW pageChanges as SELECT name, editTime as changeTime, userEdited as id, editType, comment, userIP, propercased_name, wiki_id from allPages;""", isWrite=True) cursor.execute("""CREATE VIEW currentMapChanges as SELECT mapPoints.pagename as name, mapPoints.created_time as changeTime, mapPoints.created_by as id, 'SAVEMAP' as editType, NULL as comment, mapPoints.created_by_ip as userIP, mapPoints.pagename_propercased as propercased_name, mapPoints.wiki_id as wiki_id from mapPoints;""", isWrite=True) cursor.execute("""CREATE VIEW oldMapChanges as SELECT oldMapPoints.pagename as name, oldMapPoints.created_time as changeTime, oldMapPoints.created_by as id, 'SAVEMAP' as editType, NULL as comment, oldMapPoints.created_by_ip as userIP, oldMapPoints.pagename_propercased as propercased_name, oldMapPoints.wiki_id as wiki_id from oldMapPoints;""", isWrite=True) cursor.execute("""CREATE VIEW deletedMapChanges as SELECT oldMapPoints.pagename as name, oldMapPoints.deleted_time as changeTime, oldMapPoints.deleted_by as id, 'SAVEMAP' as editType, NULL as comment, oldMapPoints.deleted_by_ip as userIP, oldMapPoints.pagename_propercased as propercased_name, oldMapPoints.wiki_id as wiki_id from oldMapPoints;""", isWrite=True) elif config.db_type == 'postgres': cursor.execute("""CREATE VIEW eventChanges as SELECT char 'Events Board' as name, events.posted_time as changeTime, users.id as id, char 'NEWEVENT' as editType, events.event_name as comment, events.posted_by_IP as userIP, events.wiki_id as wiki_id from events, users where users.propercased_name=events.posted_by;""", isWrite=True) cursor.execute("""CREATE VIEW deletedFileChanges as SELECT 
oldFiles.attached_to_pagename as name, oldFiles.deleted_time as changeTime, oldFiles.deleted_by as id, char 'ATTDEL' as editType, name as comment, oldFiles.deleted_by_ip as userIP, oldFiles.attached_to_pagename_propercased as propercased_name, oldFiles.wiki_id as wiki_id from oldFiles;""", isWrite=True) cursor.execute("""CREATE VIEW oldFileChanges as SELECT oldFiles.attached_to_pagename as name, oldFiles.uploaded_time as changeTime, oldFiles.uploaded_by as id, char 'ATTNEW' as editType, name as comment, oldFiles.uploaded_by_ip as userIP, oldFiles.attached_to_pagename_propercased as propercased_name, oldFiles.wiki_id as wiki_id from oldFiles;""", isWrite=True) cursor.execute("""CREATE VIEW currentFileChanges as SELECT files.attached_to_pagename as name, files.uploaded_time as changeTime, files.uploaded_by as id, char 'ATTNEW' as editType, name as comment, files.uploaded_by_ip as userIP, files.attached_to_pagename_propercased as propercased_name, files.wiki_id as wiki_id from files;""", isWrite=True) cursor.execute("""CREATE VIEW pageChanges as SELECT name, editTime as changeTime, userEdited as id, editType, comment, userIP, propercased_name, wiki_id from allPages;""", isWrite=True) cursor.execute("""CREATE VIEW currentMapChanges as SELECT mapPoints.pagename as name, mapPoints.created_time as changeTime, mapPoints.created_by as id, char 'SAVEMAP' as editType, char ''as comment, mapPoints.created_by_ip as userIP, mapPoints.pagename_propercased as propercased_name, mapPoints.wiki_id as wiki_id from mapPoints;""", isWrite=True) cursor.execute("""CREATE VIEW oldMapChanges as SELECT oldMapPoints.pagename as name, oldMapPoints.created_time as changeTime, oldMapPoints.created_by as id, char 'SAVEMAP' as editType, char '' as comment, oldMapPoints.created_by_ip as userIP, oldMapPoints.pagename_propercased as propercased_name, oldMapPoints.wiki_id as wiki_id from oldMapPoints;""", isWrite=True) cursor.execute("""CREATE VIEW deletedMapChanges as SELECT oldMapPoints.pagename as 
name, oldMapPoints.deleted_time as changeTime, oldMapPoints.deleted_by as id, char 'SAVEMAP' as editType, char '' as comment, oldMapPoints.deleted_by_ip as userIP, oldMapPoints.pagename_propercased as propercased_name, oldMapPoints.wiki_id as wiki_id from oldMapPoints;""", isWrite=True) print "views created" def create_config(request, wiki_id=None): config_dict = config.reduce_to_local_config(config.CONFIG_VARS) # want to make sure we increment the wiki_id properly del config_dict['wiki_id'] if wiki_id is not None: config_dict['wiki_id'] = wiki_id config_dict['active'] = True site_conf = config.Config(config.wiki_name, request, process_config=False) request.config = site_conf request.config.active = True request.config.set_config(request.config.wiki_name, config_dict, request) request.setup_basics() def create_other_stuff(request): print "creating other stuff..." d = {'wiki_id' : request.config.wiki_id} cursor = request.cursor cursor.execute("""INSERT into mapCategoryDefinitions values (1, 'food.png', 'Restaurants', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into mapCategoryDefinitions values (2, 'dollar.png', 'Shopping', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into mapCategoryDefinitions values (3, 'hand.png', 'Services', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into mapCategoryDefinitions values (4, 'run.png', 'Parks & Recreation', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into mapCategoryDefinitions values (5, 'people.png', 'Community', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into mapCategoryDefinitions values (6, 'arts.png', 'Arts & Entertainment', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into mapCategoryDefinitions values (7, 'edu.png', 'Education', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into mapCategoryDefinitions values (9, 'head.png', 'People', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into 
mapCategoryDefinitions values (10, 'gov.png', 'Government', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into mapCategoryDefinitions values (11, 'bike.png', 'Bars & Night Life', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into mapCategoryDefinitions values (12, 'coffee.png', 'Cafes', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into mapCategoryDefinitions values (13, 'house.png', 'Housing', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into mapCategoryDefinitions values (14, 'wifi.png', 'WiFi Hot Spots', %(wiki_id)s);""", d, isWrite=True) cursor.execute("""INSERT into mapCategoryDefinitions values (99, NULL, 'Other', %(wiki_id)s);""", d, isWrite=True) print "other stuff created" def insert_acl(plist, flat_page_dict, request): for pagename in plist: if flat_page_dict[pagename].acl: wikiacl.setACL(pagename, flat_page_dict[pagename].acl, request) def insert_pages(request, flat_page_dict=None, plist=None, without_files=False, global_pages=True): timenow = time.time() cursor = request.cursor if not flat_page_dict: if global_pages: flat_page_dict = all_pages else: flat_page_dict = basic_pages if not plist: plist = flat_page_dict.keys() file_dict = { 'uploaded_time': 0, 'uploaded_by': None, 'uploaded_by_ip': None } for pagename in plist: request.req_cache['pagenames'][ pagename.lower()] = pagename # for caching flatpage = flat_page_dict[pagename] cursor.execute("""INSERT into curPages (name, text, cachedText, editTime, cachedTime, userEdited, propercased_name, wiki_id) values (%(pagename)s, %(pagetext)s, NULL, %(timenow)s, NULL, NULL, %(propercased_name)s, %(wiki_id)s);""", {'pagename':pagename.lower(), 'pagetext':flatpage.text, 'propercased_name':pagename, 'wiki_id': request.config.wiki_id, 'timenow': timenow}, isWrite=True) cursor.execute("""INSERT into allPages (name, text, editTime, userEdited, editType, comment, userIP, propercased_name, wiki_id) values (%(pagename)s, %(pagetext)s, %(timenow)s, NULL, 
'SAVENEW', 'System page', NULL, %(propercased_name)s, %(wiki_id)s);""", {'pagename':pagename.lower(), 'pagetext':flatpage.text, 'propercased_name':pagename, 'wiki_id': request.config.wiki_id, 'timenow': timenow}, isWrite=True) file_dict['pagename'] = pagename for filename, content in flatpage.files: file_dict['filename'] = filename file_dict['filecontent'] = content if wikiutil.isImage(filename): xsize, ysize = Files.openImage(content).size file_dict['xsize'] = xsize file_dict['ysize'] = ysize wikidb.putFile(request, file_dict) insert_acl(plist, flat_page_dict, request) def build_search_index(): """ Builds the title and full text search indexes. """ if not config.has_xapian: print ("You don't have Xapian installed..." "skipping configuration of search index.") return if not os.path.exists(config.search_db_location): # create it if it doesn't exist, we don't want to create # intermediates though os.mkdir(config.search_db_location) # prune existing db directories, do this explicitly in case third party # extensions use this directory (they really shouldn't) for db in ('title', 'text'): dbpath = os.path.join(config.search_db_location, db) if os.path.exists(dbpath): shutil.rmtree(dbpath) print "Building search index..." from Sycamore import wikiutil, search pages = wikiutil.getPageList(req, objects=True) for page in pages: print " %s added to search index." 
% page.page_name # don't use remote server on initial build search.add_to_index(page, try_remote=False) def setup_admin(request): print "\n-------------" print "Enter the primary admin's wiki username:" username = raw_input() group = wikiacl.Group("Admin", request, fresh=True) groupdict = {username.lower(): None} group.update(groupdict) group.save() basic_pages = init_basic_pages() all_pages = copy(basic_pages) all_pages.update(init_basic_pages('global')) if __name__ == '__main__': from Sycamore import request # building for first time..don't try to load config from db req = request.RequestDummy(process_config=False) cursor = req.cursor init_db(cursor) create_tables(cursor) create_views(cursor) create_config(req) create_other_stuff(req) print "inserting basic pages..." insert_pages(req) build_search_index() setup_admin(req) req.db_disconnect() # commit before building caches req = request.RequestDummy(process_config=True) wiki_list = wikiutil.getWikiList(req) for wiki_name in wiki_list: req.config = config.Config(wiki_name, req, process_config=True) plist = wikiutil.getPageList(req) maintenance.buildCaches(wiki_name, plist, doprint=True) req.db_disconnect() print "-------------" print ("All done! Now, start up the sycamore server and " "create the admin account!")
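`insert_pages` primes `req_cache['pagenames']` with lower-cased keys that map back to the proper-cased page name, which is what makes later page lookups case-insensitive. A stand-alone sketch of that cache, outside Sycamore (names here are ours):

```python
pagename_cache = {}

def cache_page(pagename):
    # key by lower-cased name, remember the proper-cased form
    pagename_cache[pagename.lower()] = pagename

def lookup_page(name):
    # case-insensitive lookup returning the proper-cased name
    return pagename_cache.get(name.lower())

cache_page('Front Page')
lookup_page('front page')   # any casing resolves to 'Front Page'
```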
Apollo Beach, FL. (February 9, 2016) - Waterset by Newland Communities has been recognized with a Gold award in the 2015 Best in American Living™ Awards (BALA) by the National Association of Home Builders (NAHB). Waterset was honored in the Best Single-Family Community (100 units and over) category. This new master-planned community, located in the South Shore Apollo Beach area of Tampa Bay, is envisioned to create a “real town.” With close proximity to I-75 and nearby beach and gulf access, the community features shopping, gathering places, neighborhood parks and trails, and onsite public elementary and middle schools. New one- and two-story homes are priced from the $190,000s to $500,000.
#
# Copyright 2008-2014 Universidad Complutense de Madrid
#
# This file is part of PyEmir
#
# PyEmir is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# PyEmir is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with PyEmir. If not, see <http://www.gnu.org/licenses/>.
#

"""Preprocessing EMIR readout modes"""

from __future__ import division

import six
from astropy.io import fits
from numina.array import ramp_array, fowler_array

from .core import EMIR_READ_MODES

PREPROC_KEY = 'READPROC'
PREPROC_VAL = True


class ReadModeGuessing(object):
    def __init__(self, mode, info=None):
        self.mode = mode
        self.info = info


def image_readmode(hdulist, default=None):
    header = hdulist[0].header
    if 'READMODE' in header:
        p_readmode = header['READMODE'].lower()
        if p_readmode in EMIR_READ_MODES:
            return ReadModeGuessing(mode=p_readmode,
                                    info={'source': 'keyword'})
    # Using heuristics
    shape = hdulist[0].shape
    if len(shape) == 2:
        # A 2D image, mode is single
        return ReadModeGuessing(mode='single', info={'source': 'heuristics'})
    else:
        nd = min(shape)
        if nd == 2:
            # A NXNX2 image, mode is cds
            return ReadModeGuessing(mode='cds', info={'source': 'heuristics'})
    # Insufficient information
    if default:
        return ReadModeGuessing(mode=default, info={'source': 'default'})
    else:
        return None


def preprocess_single(hdulist):
    return hdulist


def preprocess_cds(hdulist):
    # CDS is Fowler with just one pair of reads
    return preprocess_fowler(hdulist)


def preprocess_fowler(hdulist):
    hdulist[0].header[PREPROC_KEY] = PREPROC_VAL
    # We need:
    tint = 0.0  # Integration time (from first read to last read)
    ts = 0.0    # Time between samples
    gain = 1.0  # Detector gain (number)
    ron = 1.0   # Detector RON (number)
    # A master badpixel mask
    cube = hdulist[0].data
    res, var, npix, mask = fowler_array(cube,
                                        ti=tint, ts=ts,
                                        gain=gain, ron=ron,
                                        badpixels=None,
                                        dtype='float32',
                                        saturation=55000.0)
    hdulist[0].data = res
    varhdu = fits.ImageHDU(var, name='VARIANCE')
    hdulist.append(varhdu)
    nmap = fits.ImageHDU(npix, name='MAP')
    hdulist.append(nmap)
    nmask_hdu = fits.ImageHDU(mask, name='MASK')
    hdulist.append(nmask_hdu)
    return hdulist


def preprocess_ramp(hdulist):
    hdulist[0].header[PREPROC_KEY] = PREPROC_VAL
    cube = hdulist[0].data
    # We need
    ti = 0.0    # Integration time
    gain = 1.0
    ron = 1.0
    rslt = ramp_array(cube, ti,
                      gain=gain, ron=ron,
                      badpixels=None,
                      dtype='float32',
                      saturation=55000.0)
    result, var, npix, mask = rslt
    hdulist[0].data = result
    varhdu = fits.ImageHDU(var, name='VARIANCE')
    hdulist.append(varhdu)
    nmap = fits.ImageHDU(npix, name='MAP')
    hdulist.append(nmap)
    nmask = fits.ImageHDU(mask, name='MASK')
    hdulist.append(nmask)
    return hdulist


def fits_wrapper(frame):
    if isinstance(frame, six.string_types):
        return fits.open(frame)
    elif isinstance(frame, fits.HDUList):
        return frame
    else:
        raise TypeError


def preprocess(input_, output):
    with fits_wrapper(input_) as hdulist:
        header = hdulist[0].header
        if PREPROC_KEY in header:
            # if the image is preprocessed, do nothing
            if input_ != output:
                hdulist.writeto(output, overwrite=True)
            return
        # determine the READ mode
        guess = image_readmode(hdulist, 'single')
        if guess is None:
            # We have a problem here
            return
        if guess.mode == 'single':
            hduproc = preprocess_single(hdulist)
        elif guess.mode == 'cds':
            hduproc = preprocess_cds(hdulist)
        elif guess.mode == 'fowler':
            hduproc = preprocess_fowler(hdulist)
        elif guess.mode == 'ramp':
            hduproc = preprocess_ramp(hdulist)
        else:
            hduproc = preprocess_single(hdulist)
        hduproc.writeto(output, overwrite=True)
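The shape heuristic in `image_readmode` is easy to exercise in isolation. The stand-in below works on bare shape tuples rather than HDULists; the function name is ours, not part of PyEmir:

```python
def guess_readmode(shape, default=None):
    """Mimic the shape heuristic: 2D frames are 'single',
    cubes with a length-2 axis are 'cds' (one Fowler pair)."""
    if len(shape) == 2:
        return 'single'
    if min(shape) == 2:
        return 'cds'
    return default

guess_readmode((2048, 2048))                      # a plain 2D frame
guess_readmode((2, 2048, 2048))                   # one correlated pair
guess_readmode((8, 2048, 2048), default='fowler')  # falls back to default
```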
VMG gathers competitive intelligence; identifies, engages, and builds a talent pipeline of passive candidates; and converts them to active candidates for our clients. Our company is a national executive search firm with over 27 years of experience, focused on recruiting the best candidates to meet each client's specific needs. We have spent years working hard to provide our clients with the best possible end result, and we have the track record to show for it. We are proud of the fact that 100% of our clients who use VMG repeatedly, month after month, quarter after quarter, year after year, say we're the best in recruiting. We are a strategic partner to our clients.
#!/usr/bin/python
# -*- coding: utf-8 -*-
##
# models.py: Likelihood models for quantum state and process tomography.
##
# © 2017, Chris Ferrie (csferrie@gmail.com) and
#         Christopher Granade (cgranade@cgranade.com).
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its
#    contributors may be used to endorse or promote products derived from
#    this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
##

# TODO: unit tests!

## DOCSTRING #################################################################

"""
Likelihood models for quantum state and process tomography.
"""

## FEATURES ##################################################################

from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals

## IMPORTS ###################################################################

from builtins import range, map

import qutip as qt

from qinfer import FiniteOutcomeModel

import numpy as np

## EXPORTS ###################################################################

# TODO

## DESIGN NOTES ##############################################################

"""
Bases are always assumed to have exactly one traceful element— in particular,
the zeroth basis element.
"""

## FUNCTIONS #################################################################

# TODO: document, contribute to QuTiP?
def heisenberg_weyl_operators(d=2):
    w = np.exp(2 * np.pi * 1j / d)
    X = qt.Qobj([
        qt.basis(d, (idx + 1) % d).data.todense().view(np.ndarray)[:, 0]
        for idx in range(d)
    ])
    Z = qt.Qobj(np.diag(w ** np.arange(d)))

    return [X**i * Z**j for i in range(d) for j in range(d)]

## CLASSES ###################################################################

class TomographyModel(FiniteOutcomeModel):
    r"""
    Model for tomographically learning a quantum state using
    two-outcome positive-operator valued measures (POVMs).

    :param TomographyBasis basis: Basis used in representing
        states as model parameter vectors.
    :param bool allow_subnormalized: If `False`, states
        :math:`\rho` are constrained during resampling such
        that :math:`\Tr(\rho) = 1`.
    """
    def __init__(self, basis, allow_subnormalized=False):
        self._dim = basis.dim
        self._basis = basis
        self._allow_subnormalied = allow_subnormalized
        super(TomographyModel, self).__init__()

    @property
    def dim(self):
        """
        Dimension of the Hilbert space on which density operators
        learned by this model act.
:type: `int` """ return self._dim @property def basis(self): """ Basis used in converting between :class:`~qutip.Qobj` and model parameter vector representations of states. :type: `TomographyBasis` """ return self._basis @property def n_modelparams(self): return self._dim ** 2 @property def modelparam_names(self): return list(map( r'\langle\!\langle{} | \rho\rangle\!\rangle'.format, self.basis.labels )) @property def is_n_outcomes_constant(self): return True @property def expparams_dtype(self): return [ (str('meas'), float, self._dim ** 2) ] def n_outcomes(self, expparams): return 2 def are_models_valid(self, modelparams): # This is wrong, but is wrong for the sake of speed. # As a future improvement, validity checking needs to # be enabled as a non-default option. return np.ones((modelparams.shape[0],), dtype=bool) def canonicalize(self, modelparams): """ Truncates negative eigenvalues and from each state represented by a tensor of model parameter vectors, and renormalizes as appropriate. :param np.ndarray modelparams: Array of shape ``(n_states, dim**2)`` containing model parameter representations of each of ``n_states`` different states. :return: The same model parameter tensor with all states truncated to be positive operators. If :attr:`~TomographyModel.allow_subnormalized` is `False`, all states are also renormalized to trace one. """ modelparams = np.apply_along_axis(self.trunc_neg_eigs, 1, modelparams) # Renormalizes particles if allow_subnormalized=False. if not self._allow_subnormalied: modelparams = self.renormalize(modelparams) return modelparams def trunc_neg_eigs(self, particle): """ Given a state represented as a model parameter vector, returns a model parameter vector representing the same state with any negative eigenvalues set to zero. :param np.ndarray particle: Vector of length ``(dim ** 2, )`` representing a state. :return: The same state with any negative eigenvalues set to zero. 
""" arr = np.tensordot(particle, self._basis.data.conj(), 1) w, v = np.linalg.eig(arr) if np.all(w >= 0): return particle else: w[w < 0] = 0 new_arr = np.dot(v * w, v.conj().T) new_particle = np.real(np.dot(self._basis.flat(), new_arr.flatten())) assert new_particle[0] > 0 return new_particle def renormalize(self, modelparams): """ Renormalizes one or more states represented as model parameter vectors, such that each state has trace 1. :param np.ndarray modelparams: Array of shape ``(n_states, dim ** 2)`` representing one or more states as model parameter vectors. :return: The same state, normalized to trace one. """ # The 0th basis element (identity) should have # a value 1 / sqrt{dim}, since the trace of that basis # element is fixed to be sqrt{dim} by convention. norm = modelparams[:, 0] * np.sqrt(self._dim) assert not np.sum(norm == 0) return modelparams / norm[:, None] def likelihood(self, outcomes, modelparams, expparams): super(TomographyModel, self).likelihood(outcomes, modelparams, expparams) pr1 = np.empty((modelparams.shape[0], expparams.shape[0])) pr1[:, :] = np.einsum( 'ei,mi->me', # This should be the Hermitian conjugate, but since # expparams['meas'] is real (that is, since the measurement) # is Hermitian, then that's not needed here. 
expparams['meas'], modelparams ) np.clip(pr1, 0, 1, out=pr1) return FiniteOutcomeModel.pr0_to_likelihood_array(outcomes, 1 - pr1) class DiffusiveTomographyModel(TomographyModel): @property def n_modelparams(self): return super(DiffusiveTomographyModel, self).n_modelparams + 1 @property def expparams_dtype(self): return super(DiffusiveTomographyModel, self).expparams_dtype + [ (str('t'), float) ] @property def modelparam_names(self): return super(DiffusiveTomographyModel, self).modelparam_names + [r'\epsilon'] def are_models_valid(self, modelparams): return np.logical_and( super(DiffusiveTomographyModel, self).are_models_valid(modelparams), modelparams[:, -1] > 0 ) def canonicalize(self, modelparams): return np.concatenate([ super(DiffusiveTomographyModel, self).canonicalize(modelparams[:, :-1]), modelparams[:, -1, None] ], axis=1) def likelihood(self, outcomes, modelparams, expparams): return super(DiffusiveTomographyModel, self).likelihood(outcomes, modelparams[:, :-1], expparams) def update_timestep(self, modelparams, expparams): # modelparams: [n_m, d² + 1] # expparams: [n_e,] # eps: [n_m, 1] * [n_e] → [n_m, n_e, 1] eps = (modelparams[:, -1, None] * np.sqrt(expparams['t']))[:, :, None] # steps: [n_m, n_e, 1] * [n_m, 1, d²] steps = eps * np.random.randn(*modelparams[:, None, :].shape) steps[:, :, [0, -1]] = 0 raw_modelparams = modelparams[:, None, :] + steps # raw_modelparams[:, :, :-1] = np.apply_along_axis(self.trunc_neg_eigs, 2, raw_modelparams[:, :, :-1]) for idx_experiment in range(len(expparams)): raw_modelparams[:, idx_experiment, :] = self.canonicalize(raw_modelparams[:, idx_experiment, :]) return raw_modelparams.transpose((0, 2, 1))
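The design note above (only the zeroth basis element is traceful, with trace fixed to sqrt(dim)) is exactly what `renormalize` relies on. A plain-Python check of that arithmetic, independent of qinfer; the function here is an illustrative stand-in, not the class method:

```python
import math

def renormalize(modelparam, dim):
    """Only the zeroth basis element is traceful and Tr(B0) = sqrt(dim),
    so Tr(rho) = modelparam[0] * sqrt(dim); divide that out to reach
    trace one."""
    norm = modelparam[0] * math.sqrt(dim)
    return [x / norm for x in modelparam]

# A qubit (dim = 2) parameter vector with leading coefficient 1.0;
# after renormalization the implied trace is exactly 1.
state = renormalize([1.0, 0.3, 0.1, 0.2], 2)
```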
Invent Investor Network - How much does it cost? FindIt, RaiseIt, CloseIt - how much does it cost? It is an unfortunate fact that it "costs money to raise money" - the good news is that Peter Hopkinson, Founder and MD of Invent Network, is an approved business coach on the government-funded Growth Accelerator Access to Finance scheme. This means that we can deliver exactly the support you need, including the FindIt service, for a cost to you of as little as £600. We will need to discuss your precise circumstances and assess your eligibility prior to acceptance on the scheme. If, after you have completed the Growth Accelerator support scheme, you decide to engage with our investor network, we will charge a success fee of 5% of funds raised from sources that we introduced to you. Where we are helping you engage with other networks and syndicates as part of the FindIt service, we do not charge a success fee on funds raised through those sources. If you are already in the fund-raising process, have some funds already committed, have prepared a full due diligence pack, and are prepared to populate our deal room and prepare a 5-minute video presentation, then you may not require the FindIt element of our service. In this case we will only charge you the 5% success fee.
import wx import wx.lib.wordwrap as wordwrap import util class Event(wx.PyEvent): def __init__(self, event_object, type): super(Event, self).__init__() self.SetEventType(type.typeId) self.SetEventObject(event_object) EVT_HYPERLINK = wx.PyEventBinder(wx.NewEventType()) class Line(wx.PyPanel): def __init__(self, parent, pen=wx.BLACK_PEN): super(Line, self).__init__(parent, -1, style=wx.BORDER_NONE) self.pen = pen self.SetBackgroundStyle(wx.BG_STYLE_CUSTOM) self.Bind(wx.EVT_PAINT, self.on_paint) self.Bind(wx.EVT_SIZE, self.on_size) def on_size(self, event): self.Refresh() def on_paint(self, event): dc = wx.AutoBufferedPaintDC(self) dc.Clear() dc.SetPen(self.pen) width, height = self.GetClientSize() y = height / 2 dc.DrawLine(0, y, width, y) def DoGetBestSize(self): return -1, self.pen.GetWidth() class Text(wx.PyPanel): def __init__(self, parent, width, text): super(Text, self).__init__(parent, -1, style=wx.BORDER_NONE) self.text = text self.width = width self.wrap = True self.rects = [] self.SetBackgroundStyle(wx.BG_STYLE_CUSTOM) self.Bind(wx.EVT_PAINT, self.on_paint) self.Bind(wx.EVT_SIZE, self.on_size) def on_size(self, event): self.Refresh() def on_paint(self, event): dc = wx.AutoBufferedPaintDC(self) self.setup_dc(dc) dc.Clear() self.draw_lines(dc) def setup_dc(self, dc): dc.SetFont(self.GetFont()) dc.SetTextBackground(self.GetBackgroundColour()) dc.SetTextForeground(self.GetForegroundColour()) dc.SetBackground(wx.Brush(self.GetBackgroundColour())) def draw_lines(self, dc, emulate=False): if self.wrap: text = wordwrap.wordwrap(self.text.strip(), self.width, dc) else: text = self.text.strip() lines = text.split('\n') lines = [line.strip() for line in lines] lines = [line for line in lines if line] x, y = 0, 0 rects = [] for line in lines: if not emulate: dc.DrawText(line, x, y) w, h = dc.GetTextExtent(line) rects.append(wx.Rect(x, y, w, h)) y += h if not emulate: self.rects = rects return y def compute_height(self): dc = wx.ClientDC(self) self.setup_dc(dc) height = 
self.draw_lines(dc, True) return height def fit_no_wrap(self): dc = wx.ClientDC(self) self.setup_dc(dc) width, height = dc.GetTextExtent(self.text.strip()) self.width = width self.wrap = False def DoGetBestSize(self): height = self.compute_height() return self.width, height class Link(Text): def __init__(self, parent, width, link, text): super(Link, self).__init__(parent, width, text) self.link = link self.trigger = False self.hover = False self.Bind(wx.EVT_LEAVE_WINDOW, self.on_leave) self.Bind(wx.EVT_MOTION, self.on_motion) self.Bind(wx.EVT_LEFT_DOWN, self.on_left_down) self.Bind(wx.EVT_LEFT_UP, self.on_left_up) def hit_test(self, point): for rect in self.rects: if rect.Contains(point): self.on_hover() break else: self.on_unhover() def on_motion(self, event): self.hit_test(event.GetPosition()) def on_leave(self, event): self.on_unhover() def on_hover(self): if self.hover: return self.hover = True font = self.GetFont() font.SetUnderlined(True) self.SetFont(font) self.SetCursor(wx.StockCursor(wx.CURSOR_HAND)) self.Refresh() def on_unhover(self): if not self.hover: return self.hover = False self.trigger = False font = self.GetFont() font.SetUnderlined(False) self.SetFont(font) self.SetCursor(wx.StockCursor(wx.CURSOR_DEFAULT)) self.Refresh() def on_left_down(self, event): if self.hover: self.trigger = True def on_left_up(self, event): if self.hover and self.trigger: event = Event(self, EVT_HYPERLINK) event.link = self.link wx.PostEvent(self, event) self.trigger = False class BitmapLink(wx.PyPanel): def __init__(self, parent, link, bitmap, hover_bitmap=None): super(BitmapLink, self).__init__(parent, -1) self.link = link self.bitmap = bitmap self.hover_bitmap = hover_bitmap or bitmap self.hover = False self.trigger = False self.SetInitialSize(bitmap.GetSize()) self.SetBackgroundStyle(wx.BG_STYLE_CUSTOM) self.Bind(wx.EVT_PAINT, self.on_paint) self.Bind(wx.EVT_ENTER_WINDOW, self.on_enter) self.Bind(wx.EVT_LEAVE_WINDOW, self.on_leave) self.Bind(wx.EVT_LEFT_DOWN, 
self.on_left_down) self.Bind(wx.EVT_LEFT_UP, self.on_left_up) def on_paint(self, event): dc = wx.AutoBufferedPaintDC(self) dc.SetBackground(wx.Brush(self.GetBackgroundColour())) dc.Clear() bitmap = self.hover_bitmap if self.hover else self.bitmap dc.DrawBitmap(bitmap, 0, 0, True) def on_enter(self, event): self.hover = True self.SetCursor(wx.StockCursor(wx.CURSOR_HAND)) self.Refresh() def on_leave(self, event): self.trigger = False self.hover = False self.SetCursor(wx.StockCursor(wx.CURSOR_DEFAULT)) self.Refresh() def on_left_down(self, event): self.trigger = True def on_left_up(self, event): if self.trigger: event = Event(self, EVT_HYPERLINK) event.link = self.link wx.PostEvent(self, event) self.trigger = False
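The hover logic in `Link.hit_test` reduces to checking the pointer against the per-line text rectangles that `draw_lines` records. A stand-alone sketch with plain `(x, y, w, h)` tuples in place of `wx.Rect`; the rectangle values and the function name are invented for illustration:

```python
def hit_test(rects, point):
    """Return True when the point falls inside any laid-out
    text-line rectangle, i.e. the link should show hover state."""
    px, py = point
    return any(x <= px < x + w and y <= py < y + h
               for (x, y, w, h) in rects)

# Two stacked lines of wrapped link text, 16 px tall each.
rects = [(0, 0, 100, 16), (0, 16, 80, 16)]
hit_test(rects, (50, 8))   # inside the first line
hit_test(rects, (90, 24))  # past the shorter second line
```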
Catholics in China should build a more independent, socialist church, a senior Beijing official has said, as the government remains at odds with the Vatican on the issue of ordaining bishops. The country’s roughly 12 million believers are divided between those loyal to Beijing, whose clergy are chosen by the Communist Party, and members of a so-called “underground” church which swears allegiance to the Vatican. The Holy See and Beijing have not had diplomatic ties since 1951, and although relations have improved in recent years as China’s Catholic population has grown, they remain at odds over which side has the authority to appoint senior clergy.
#!/usr/bin/env python # -*- coding: utf-8 -*- """ test_dyna_settings ---------------------------------- Tests for `dyna_settings` module. I normally use py.test. """ import unittest from dyna_settings.core import DynaSettings, _dyna_controller, register_dyna_settings, dyna_value, \ NoMatchingSettingsClass, DynaSettingsController __author__ = 'curtis' class ChildOK_Match(DynaSettings): def value_dict(self): return {'A': 'a', 'B': 'b', 'C': 9} def env_detector(self): return True class ChildOK_NoMatch(DynaSettings): def value_dict(self): return {'A': 'aa', 'B': 'bb', 'C': 99} def env_detector(self): return False class EnvSettingTrue(DynaSettings): def __init__(self): super(EnvSettingTrue, self).__init__() self._environ_vars_trump = True def value_dict(self): return { 'PATH': 'a very wrong path', 'AINT_THAR': 'This aint gonna be there' } def env_detector(self): return True class TestDynaSettings(unittest.TestCase): def test_parent_interface_excepts(self): bad = DynaSettings() with self.assertRaises(NotImplementedError): bad.env_detector() with self.assertRaises(NotImplementedError): bad.value_dict() def test_child_interface(self): good = ChildOK_Match() self.assertIsInstance(good.value_dict(), dict) self.assertTrue(good.env_detector()) good.init_values() self.assertEqual(good.get_value('A', 'x'), 'a') def test_no_match_child_interface(self): good = ChildOK_NoMatch() self.assertIsInstance(good.value_dict(), dict) self.assertFalse(good.env_detector()) good.init_values() self.assertEqual(good.get_value('A', 'x'), 'aa') def test_register_match(self): _dyna_controller.reset() instance = ChildOK_Match() register_dyna_settings(instance) register_dyna_settings(ChildOK_NoMatch()) self.assertEqual(_dyna_controller.detected_settings, instance) def test_register_nomatch(self): _dyna_controller.reset() register_dyna_settings(ChildOK_NoMatch()) self.assertIsNone(_dyna_controller.detected_settings) def test_get_values(self): _dyna_controller.reset() 
register_dyna_settings(ChildOK_Match()) register_dyna_settings(ChildOK_NoMatch()) val = dyna_value('A', production_value='x') self.assertEqual(val, 'a') val = dyna_value('B', production_value='x') self.assertEqual(val, 'b') val = dyna_value('UNDEFINED', production_value='prod') self.assertEqual(val, 'prod') def test_get_values_with_no_settings_class(self): _dyna_controller.reset() with self.assertRaises(NoMatchingSettingsClass): val = dyna_value('BAD') def test_environ_var_trump_global(self): """ Verify that with the global trump set True we'll get from the environment :return: """ DynaSettingsController.set_environ_vars_trump(flag=True) self.assertTrue(_dyna_controller.environ_vars_trump) import os path = os.environ.get('PATH') self.assertTrue(path) path_from_settings = dyna_value('PATH', production_value=None) self.assertTrue(path_from_settings) self.assertEqual(path_from_settings, path) def test_environ_var_trump_off(self): """ Verify that with the environment var trump off we obtain the value from our dyna settings and not the environment variable. :return: """ DynaSettingsController.set_environ_vars_trump(flag=False) self.assertFalse(_dyna_controller.environ_vars_trump) import os path = os.environ.get('PATH') self.assertTrue(path) path_from_settings = dyna_value('PATH', production_value='Internal path') self.assertTrue(path_from_settings) self.assertNotEqual(path_from_settings, path) def test_environ_var_trump_instance(self): """ Verify that, with a DynaSettings instance registered that sets trump True it behaves properly by obtaining the value from the environment variable. Should ignore both the production_value and the settings class definition. 
:return: """ _dyna_controller.reset() self.assertFalse(_dyna_controller.environ_vars_trump) register_dyna_settings(EnvSettingTrue()) import os path = os.environ.get('PATH') self.assertTrue(path) path_from_settings = dyna_value('PATH', production_value='Internal path') self.assertTrue(path_from_settings) self.assertEqual(path_from_settings, path) def test_environ_var_trump_no_env_var(self): """ Verify that if trump is True but the environment var is not defined we'll still pick up the value if the class instance has defined it :return: """ _dyna_controller.reset() register_dyna_settings(EnvSettingTrue()) path = dyna_value('AINT_THAR', production_value=None) self.assertTrue(path) def test_environ_var_trump_fail(self): """ Verifies that if Trump is true, environment doesn't have the variable, production_value doesn't define it, and the class does not either, then exception is raised. :return: """ _dyna_controller.reset() register_dyna_settings(EnvSettingTrue()) with self.assertRaises(NoMatchingSettingsClass): bad = dyna_value('VOODOOUDO', production_value=None) print bad if __name__ == '__main__': unittest.main()
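The precedence the tests above exercise is: environment variable (when the "trump" flag is on), then the matched settings class, then the `production_value` fallback. A minimal sketch of that lookup order with the stdlib only; `resolve_value` is our illustrative name, not part of the dyna_settings API:

```python
import os

def resolve_value(key, settings, production_value=None,
                  environ_vars_trump=False):
    """Lookup order: environment variable (if trump is enabled),
    then the detected settings dict, then the production fallback."""
    if environ_vars_trump and key in os.environ:
        return os.environ[key]
    if key in settings:
        return settings[key]
    return production_value

settings = {'A': 'a', 'B': 'b'}
resolve_value('A', settings, production_value='x')          # from settings
resolve_value('UNDEFINED', settings, production_value='x')  # fallback
```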
If you’ve been reading this blog for a while, you know that I love homemade Valentines. My girls have done Fortune Teller Valentines and Kool-Aid Valentines, among others. This year we’re doing Slap Bracelet Valentines and these Maze Valentines. Maze Valentines are nothing new, but in searching the internet, we never found one we really liked. Most of the ones we found either require you to attach an actual small plastic maze or, if the maze is printed on, look cheesy. I decided to make my own for the girls. These took a lot longer to make than expected, but we think they are well worth it. There are three different Maze Valentines, and each one has a different maze. Just because your friend with the pink Valentine solved their maze doesn’t mean your friend with the purple one can copy it and solve theirs! Fun, huh? Click on the photo above to download and print these aMAZEing Valentines for your kids. They won’t be disappointed!
#Embedded file name: ACEStream\Core\Statistics\Logger.pyo import sys import os import time import socket import threading from traceback import print_exc DEBUG = False log_separator = ' ' logger = None def create_logger(file_name): global logger logger = Logger(3, file_name) def get_logger(): if logger is None: create_logger('global.log') return logger def get_today(): return time.gmtime(time.time())[:3] class Logger: def __init__(self, threshold, file_name, file_dir = '.', prefix = '', prefix_date = False, open_mode = 'a+b'): self.threshold = threshold self.Log = self.log if file_name == '': self.logfile = sys.stderr else: try: if not os.access(file_dir, os.F_OK): try: os.mkdir(file_dir) except os.error as msg: raise 'logger: mkdir error: ' + msg file_path = self.get_file_path(file_dir, prefix, prefix_date, file_name) self.logfile = open(file_path, open_mode) except Exception as msg: self.logfile = None print >> sys.stderr, 'logger: cannot open log file', file_name, file_dir, prefix, prefix_date, msg print_exc() def __del__(self): self.close() def get_file_path(self, file_dir, prefix, prefix_date, file_name): if prefix_date is True: today = get_today() date = '%04d%02d%02d' % today else: date = '' return os.path.join(file_dir, prefix + date + file_name) def log(self, level, msg, showtime = True): if level <= self.threshold: if self.logfile is None: return if showtime: time_stamp = '%.01f' % time.time() self.logfile.write(time_stamp + log_separator) if isinstance(msg, str): self.logfile.write(msg) else: self.logfile.write(repr(msg)) self.logfile.write('\n') self.logfile.flush() def close(self): if self.logfile is not None: self.logfile.close() class OverlayLogger: __single = None __lock = threading.RLock() def __init__(self, file_name, file_dir = '.'): if OverlayLogger.__single: raise RuntimeError, 'OverlayLogger is singleton2' self.file_name = file_name self.file_dir = file_dir OverlayLogger.__single = self self.Log = self.log self.__call__ = self.log def 
getInstance(*args, **kw): OverlayLogger.__lock.acquire() try: if OverlayLogger.__single is None: OverlayLogger(*args, **kw) return OverlayLogger.__single finally: OverlayLogger.__lock.release() getInstance = staticmethod(getInstance) def log(self, *msgs): log_msg = '' nmsgs = len(msgs) if nmsgs < 2: print >> sys.stderr, 'Error message for log', msgs return for i in range(nmsgs): if isinstance(msgs[i], tuple) or isinstance(msgs[i], list): log_msg += log_separator for msg in msgs[i]: try: log_msg += str(msg) except: log_msg += repr(msg) log_msg += log_separator else: try: log_msg += str(msgs[i]) except: log_msg += repr(msgs[i]) log_msg += log_separator if log_msg: self._write_log(log_msg) def _write_log(self, msg): today = get_today() if not hasattr(self, 'today'): self.logger = self._make_logger(today) elif today != self.today: self.logger.close() self.logger = self._make_logger(today) self.logger.log(3, msg) def _make_logger(self, today): self.today = today hostname = socket.gethostname() logger = Logger(3, self.file_name, self.file_dir, hostname, True) logger.log(3, '# ACEStream Overlay Log Version 3', showtime=False) logger.log(3, '# BUCA_STA: nRound nPeer nPref nTorrent ' + 'nBlockSendList nBlockRecvList ' + 'nConnectionsInSecureOver nConnectionsInBuddyCast ' + 'nTasteConnectionList nRandomConnectionList nUnconnectableConnectionList', showtime=False) logger.log(3, '# BUCA_STA: Rd Pr Pf Tr Bs Br SO Co Ct Cr Cu', showtime=False) return logger if __name__ == '__main__': create_logger('test.log') get_logger().log(1, 'abc ' + str(['abc', 1, (2, 3)])) get_logger().log(0, [1, 'a', {(2, 3): 'asfadf'}]) ol = OverlayLogger('overlay.log') ol.log('CONN_TRY', '123.34.3.45', 34, 'asdfasdfasdfasdfsadf') ol.log('CONN_ADD', '123.34.3.45', 36, 'asdfasdfasdfasdfsadf', 3) ol.log('CONN_DEL', '123.34.3.45', 38, 'asdfasdfasdfasdfsadf', 'asbc') ol.log('SEND_MSG', '123.34.3.45', 39, 'asdfasdfasdfasdfsadf', 2, 'BC', 'abadsfasdfasf') ol.log('RECV_MSG', '123.34.3.45', 30, 
'asdfasdfasdfasdfsadf', 3, 'BC', 'bbbbbbbbbbbbb') ol.log('BUCA_STA', (1, 2, 3), (4, 5, 6), (7, 8), (9, 10, 11)) ol.log('BUCA_CON', ['asfd', 'bsdf', 'wevs', 'wwrewv'])
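`OverlayLogger._write_log` rolls to a new date-prefixed file whenever the UTC `(year, month, day)` tuple changes between writes. The decision itself is tiny and can be sketched with the stdlib; the function name is ours, not part of the module:

```python
import time

def needs_rollover(current_day, now=None):
    """Return (rollover?, today) where today is the UTC
    (year, month, day) tuple, matching get_today() above."""
    today = time.gmtime(now if now is not None else time.time())[:3]
    return today != current_day, today

# At the epoch (now=0) the UTC day is (1970, 1, 1): no rollover
# while the day matches, rollover once it changes.
needs_rollover((1970, 1, 1), now=0)
needs_rollover((2020, 1, 1), now=0)
```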
Louisiana has over half a million seniors, who make up about 12% of the population. Life in Louisiana has changed significantly since Katrina, so it has been hard to get accurate, up-to-date numbers. Louisiana ranked as the 19th cheapest state to live in in 2007, with an overall cost-of-living index of 95 (national average = 100). Utilities (90) and housing (93) are both below average, as is health care (94), while groceries are at 98. Although the main impact of Katrina was centered on New Orleans and the coast, the entire state has felt the financial impact. The state is starting to rebuild, but choosing a Louisiana assisted living facility can still be a challenge. One important question to ask any potential assisted living facility is exactly what its evacuation plan is. In general, the options for assisted living, independent living, Alzheimer's care and nursing homes are best in Baton Rouge, Shreveport and Lake Charles. Since the population of the state has been shrinking over the past few years, it is more difficult to get current information about every assisted living facility. Louisiana's Department of Social Services conducts annual assisted living facility inspections. To get a copy of inspection reports, call the Department of Social Services at 225-922-0015 or check online at www.dss.state.la.us. Complaints are also listed on the website. If you wish to file a complaint, contact the Department of Social Services at 225-922-0015. You may also add your feedback to this site by creating a review and noting that a formal complaint has been generated. This directory includes 806 assisted living options for Louisiana. Use the "Advanced Search" to find the nearest 40 senior services based on your desired location, or select a city and then choose a tab to see different types of senior care, including nursing homes, Alzheimer's care centers, CCRCs, independent living, in-home care and hospice care.
Here are the direct links to Alexandria Nursing Homes, Baton Rouge Nursing Homes, Lake Charles Nursing Homes, Monroe Nursing Homes, New Orleans Nursing Homes, Shreveport Nursing Homes, or use the advanced search and limit your results to Nursing Homes. Balancing these factors is different for every senior and area. Try to visit as many assisted living facilities as possible and speak to the staff and residents to get a feel for life in each facility.
#!/usr/bin/python3
import os
import re
import time

import touch_icd9 as touch

# read input data
inputFile = input("Please enter the input CSV file name: ")
# inputFile = "sample_icd9.csv"
try:
    fhand0 = open(inputFile)
except:
    print("Error: failed to open/find", inputFile)
    exit()

# define output file name
basePath = os.path.basename(inputFile)
outputFile = re.sub(".csv", "_touch.csv", basePath)

# read in dictionaries
touch.dicts_icd9()

# read input and write output line by line
fout = open(outputFile, "w")
firstObs = 1
for line in fhand0:
    tmpLine = line.strip().lower()
    tmpLine = re.sub('"|[ ]', '', tmpLine)
    oneLine = tmpLine.split(",")
    if firstObs:
        input_colNames = oneLine
        output_colNames = [touch.dicts_icd9.colNames[i].upper()
                           for i in range(len(touch.dicts_icd9.colNames))]
        fout.write(",".join(output_colNames) + '\n')
        dx_idx = []
        drg_idx = None
        for i in range(len(input_colNames)):
            if input_colNames[i].startswith("dx"):
                dx_idx.append(i)
            if input_colNames[i].startswith("drg"):
                drg_idx = i
        firstObs = 0
        # quick check on dx_idx and drg_idx
        if len(dx_idx) <= 1:
            print("Error: failed to locate (secondary) diagnoses code",
                  "in the input file:", inputFile)
            exit()
        if drg_idx is None:
            print("Error: failed to locate DRG code",
                  "in the input file:", inputFile)
            exit()
    else:
        tmp = touch.icd9(oneLine, drg_idx, dx_idx, touch.dicts_icd9)
        fout.write(",".join(list(map(str, tmp))) + "\n")
fout.close()

# output message
print("Comorbidity measures have been successfully generated.")
print("Output file:", outputFile)
print("The system and user CPU time: %.3f seconds." % time.process_time())
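The header-detection step in the script above (scanning the CSV header for "dx..." diagnosis columns and the "drg" column by name prefix) can be sketched standalone. `locate_columns` is a hypothetical helper written for illustration; it is not part of `touch_icd9`:

```python
def locate_columns(col_names):
    """Find diagnosis ('dx...') column indexes and the DRG column index
    in a list of CSV header names, mirroring the prefix scan above."""
    names = [c.strip().strip('"').lower() for c in col_names]
    dx_idx = [i for i, c in enumerate(names) if c.startswith("dx")]
    drg_idx = next((i for i, c in enumerate(names) if c.startswith("drg")), None)
    return dx_idx, drg_idx
```

The script's own checks (at least two dx columns, a non-None drg index) would then apply to the returned pair.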
This entry was posted on 18/06/2010 at 10:36 and is filed under Conservation, Environment, Inspirational People, Leadership, Service, Sustainable Living/Communities. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.
import random, math


class MapGenerator:
    def __init__(self, x, y, seed):
        self.x = x
        self.y = y
        self.rand = random.Random(seed)
        self.mapMatrix = self.emptyMap(x, y)

    def emptyMap(self, x, y):
        return [[0]*x for i in range(y)]

    def randomCord(self):
        x = int(self.rand.uniform(0, self.x-1)) % self.x  # int(self.rand.normalvariate(self.x/2, self.x) % self.x)
        y = int(self.rand.uniform(0, self.y-1)) % self.y  # int(self.rand.normalvariate(self.y/2, self.y) % self.y)
        return (x, y)

    def randomPlayable(self):
        point = self.randomCord()
        while not self.isPlayArea(point):
            point = self.randomCord()
        return point

    def neighborCord(self, point):
        ptList = []
        (x, y) = point
        for a in range(x-1, x+2):
            for b in range(y-1, y+2):
                if a >= 0 and b >= 0 and a < self.x and b < self.y:
                    ptList.append((a, b))
        ptList.remove(point)
        return ptList

    def manhatanDist(self, pt1, pt2):
        return abs(pt1[0] - pt2[0]) + abs(pt1[1] - pt2[1])

    def moveableNeighbors(self, point):
        (x, y) = point
        ptList = [(x-1, y), (x+1, y), (x, y-1), (x, y+1)]
        return filter(self.isPlayArea, ptList)

    def makeRandom(self):
        for i in range(int(self.x * self.y * 0.75)):
            (x, y) = self.randomCord()
            self.mapMatrix[y][x] ^= 1
        return self

    def smooth(self, r=1, factor=4):
        newMap = self.emptyMap(self.x, self.y)
        for x in range(self.x):
            for y in range(self.y):
                i = 0
                for a in range(x-r, x+r+1):
                    for b in range(y-r, y+r+1):
                        i += self.mapMatrix[b % self.y][a % self.x]
                newMap[y][x] = i >= factor
        self.mapMatrix = newMap
        return self

    def nearestPlayable(self, point):
        # breadth-first search outward from point until a playable cell is found
        frontier = [point]
        explored = []
        while frontier:
            node = frontier.pop(0)
            if self.isPlayArea(node):
                return node
            neighbors = self.neighborCord(node)
            toAdd = filter(lambda x: not (x in frontier or x in explored), neighbors)
            frontier.extend(toAdd)
            explored.append(node)

    def removeIslands(self):
        # flood-fill from a random playable cell; keep only the reached region
        newMap = self.emptyMap(self.x, self.y)
        point = self.randomPlayable()
        frontier = [point]
        explored = []
        while frontier:
            node = frontier.pop()
            neighbors = self.moveableNeighbors(node)
            toAdd = filter(lambda x: not (x in frontier or x in explored), neighbors)
            frontier.extend(toAdd)
            explored.append(node)
            newMap[node[1]][node[0]] = 1
        self.mapMatrix = newMap
        return self

    def removeIslands_n(self):
        newMap = self.emptyMap(self.x, self.y)
        (start, goal) = self.spawns
        frontier = [(start, self.manhatanDist(start, goal))]
        explored = []
        while frontier:
            frontier.sort(key=lambda tup: tup[1], reverse=True)
            (node, weight) = frontier.pop()
            if node == goal:
                self.searched = explored
                return self
            neighbors = self.moveableNeighbors(node)
            # materialize the filter: it is consumed twice below (Python 3)
            toAdd = list(filter(lambda x: not (x in explored), neighbors))
            toAddW = map(lambda x: (x, self.manhatanDist(x, goal)), toAdd)
            frontier.extend(toAddW)
            explored.extend(toAdd)
            newMap[node[1]][node[0]] = 1
        self.searched = explored
        self.mapMatrix = newMap
        self.findSpawns()
        return self

    def findSpawns(self):
        posibleSpawns = self.playableCords()
        s1 = self.randomPlayable()
        s2 = self.randomPlayable()
        for i in range(20):
            s1acum = (0, 0)
            s2acum = (0, 0)
            s1c = 0
            s2c = 0
            deadCord = (int((s1[0] + s2[0]) / 2), int((s1[1] + s2[1]) / 2))
            for s in posibleSpawns:
                s1Dis = self.manhatanDist(s, s1)
                s2Dis = self.manhatanDist(s, s2)
                deadDis = self.manhatanDist(s, deadCord) / 2
                if s1Dis < s2Dis:
                    if deadDis < s1Dis:
                        continue
                    s1c += 1
                    s1acum = (s1acum[0] + s[0], s1acum[1] + s[1])
                else:
                    if deadDis < s2Dis:
                        continue
                    s2c += 1
                    s2acum = (s2acum[0] + s[0], s2acum[1] + s[1])
            s1 = (s1acum[0] / s1c, s1acum[1] / s1c)
            s2 = (s2acum[0] / s2c, s2acum[1] / s2c)
        s1 = self.nearestPlayable((int(s1[0]), int(s1[1])))
        s2 = self.nearestPlayable((int(s2[0]), int(s2[1])))
        self.spawns = (s1, s2)
        return self

    def isPlayArea(self, point):
        (x, y) = point
        if x < 0 or y < 0 or x >= self.x or y >= self.y:
            return False
        return self.mapMatrix[y][x] == 1

    def isWall(self, point):
        neighbors = self.neighborCord(point)
        if self.isPlayArea(point):
            return len(neighbors) != 8
        allNeighborsPlayable = False
        for n in neighbors:
            allNeighborsPlayable |= self.isPlayArea(n)
        return allNeighborsPlayable

    def playableCords(self):
        ptLst = []
        for y in range(self.y):
            for x in range(self.x):
                if self.isPlayArea((x, y)):
                    ptLst.append((x, y))
        return ptLst

    def matrix(self):
        return self.mapMatrix

    def size(self):
        return (self.x, self.y)
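The core of the generator above — a seeded random fill followed by a cellular-automaton smoothing pass — can be condensed into two standalone functions. The fill ratio (0.75) and the wrap-around neighbourhood rule are taken from the class; the function names are illustrative, not part of the original:

```python
import random

def random_binary_map(w, h, seed, fill=0.75):
    # Same idea as MapGenerator.makeRandom: toggle ~75% of cells at random.
    rng = random.Random(seed)
    grid = [[0] * w for _ in range(h)]
    for _ in range(int(w * h * fill)):
        x, y = rng.randrange(w), rng.randrange(h)
        grid[y][x] ^= 1
    return grid

def smooth(grid, r=1, factor=4):
    # Same rule as MapGenerator.smooth: a cell survives if at least `factor`
    # cells in its (2r+1) x (2r+1) wrap-around neighbourhood are set.
    h, w = len(grid), len(grid[0])
    return [[int(sum(grid[b % h][a % w]
                     for a in range(x - r, x + r + 1)
                     for b in range(y - r, y + r + 1)) >= factor)
             for x in range(w)] for y in range(h)]
```

Iterating `smooth` a few times tends to merge noise into cave-like blobs, which is why the class then runs a flood fill (`removeIslands`) to keep a single connected play area.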
It’s tough to write an academic paper that will impress your readers, but with the right capstone research project editing services, you’ll be able to catch your readers’ attention right from the start. Editing is a must when it comes to writing, no matter what the subject may be. An edit-and-review pass is essential if you want to end up with work you can be proud of. In academic writing, it is important to have someone check your work to ensure that there are no errors and that all pertinent information is included. This way, you’ll be more confident that the paper you submit is error-free and the best it can be. Don’t be shy about asking for help from real experts, because no one writes anything outstanding on the first try. It is normal to look for someone you can trust to assist you. We provide business, marketing, nursing, and healthcare capstone paper writing services, among many others. A capstone proofreading service is not hard to find, especially when you use the Internet to search for the best service there is. The only problem is that there are dozens of editing services today, which can make it difficult to find the right one. If you are looking for the best, you don’t have to look far, because we are considered one of the best capstone paper help services available. Our specialists are always prepared to assist you whenever you need them. We will provide the quality results you expect. Revising your capstone paper is an essential stage in achieving a perfect, error-free, and well-written project. However, it’s quite a task to spot errors in your own writing. You need another pair of eyes to look for them. This is why you need a capstone paper revision service to help you. Look for years of experience. Some capstone revision services are just new to the business.
Entrust the editing of your paper to a revision service with years of experience and the technical know-how to correct capstone projects. Check for testimonials. No one can attest to the quality of a company’s service better than a customer who has used it. Look for previous clients’ testimonials and reviews of the services. If you can see positive reviews from clients, it means the company has delivered quality, timely products to them. Look at the services they provide. Look for capstone writing services that also provide editing and proofreading services; this means they have in-house editors to perform the job. Also, look for online services whose experts are familiar with different writing standards and requirements. Check for guarantees. As a client, you have to make sure that you get the best quality of service for your money. Look for revision services that offer a 100% money-back guarantee, on-time delivery, and an error-free, quality product. What makes our service really stand out is the way we edit academic papers, whether it’s a capstone project for nursing, marketing, management, business or another field. Unlike other editing companies that rely on editing programs, our editors manually check your work to see if there are any errors or changes that need to be made. Our high-standard online paper revision is the best and quickest way to get the results you want. We have expert academic editors in all majors and fields. We assign an editor who is knowledgeable in the subject you’ve written about, so that your information is checked for accuracy. Your specialist will double-check all spelling, grammar, and formatting mistakes. We will also fix any style issues so that your work has the correct style and format. An experienced editor in the needed field will polish your work the right way.
Provide us with detailed information and your expectations, and we will do our best for you. Our editors are fully certified to handle any capstone editing project, no matter how quickly you need it. Why bother looking for another capstone paper editing service when you can hire our capstone project expert editing service today? Not only will you get professional paper editing, but you won’t have to spend much of your time. It won’t take us long to edit your academic paper, especially since we have several editors working for us. Use our affordable paper editing service in order to succeed. Best capstone editors. Your capstone paper is proofread and edited by the best editors we have. Our editors and writers are skilled professionals who hold degrees and the education suited to the topic. We know how capstone papers work: their format, language and content. Our editors can spot errors and correct them immediately. Familiarity with writing standards. Our editors are skilled and familiar with the various writing standards required by colleges and universities. Rest assured that your capstone paper will follow the required standard. Error-free products. We make sure that all products are free from any possible error. Each capstone paper undergoes a series of editing passes to ensure quality and correctness. On-time delivery. We understand the value of your time. This is why we make it a point to deliver your capstone paper on time, and sometimes even ahead of time. Choosing us to revise and edit your capstone paper gives you a further advantage. Our years of experience, skilled and professional editors, and guarantees assure you that you are dealing with one of the best paper editing services online. Our clients’ testimonials and reviews also prove our dedication to providing quality service at all times. Want a professional capstone paper revision? We know how to help you. Just place an order right now!
# -*- coding: iso-8859-15 -*-
# =============================================================================
# Copyright (c) 2004, 2006 Sean C. Gillies
# Copyright (c) 2005 Nuxeo SARL <http://nuxeo.com>
#
# Authors : Sean Gillies <sgillies@frii.com>
#           Julien Anguenot <ja@nuxeo.com>
#
# Contact email: sgillies@frii.com
# =============================================================================

"""
API for Web Map Service (WMS) methods and metadata.

Currently supports only version 1.1.1 of the WMS protocol.
"""

from __future__ import (absolute_import, division, print_function)

import cgi
try:  # Python 3
    from urllib.parse import urlencode
except ImportError:  # Python 2
    from urllib import urlencode
import warnings

import six

from .etree import etree
from .util import openURL, testXMLValue, extract_xml_list, xmltag_split, OrderedDict
from .fgdc import Metadata
from .iso import MD_Metadata


class ServiceException(Exception):
    """WMS ServiceException

    Attributes:
        message -- short error message
        xml     -- full xml error message from server
    """

    def __init__(self, message, xml):
        self.message = message
        self.xml = xml

    def __str__(self):
        return repr(self.message)


class CapabilitiesError(Exception):
    pass


class WebMapService(object):
    """Abstraction for OGC Web Map Service (WMS).

    Implements IWebMapService.
    """

    def __getitem__(self, name):
        ''' check contents dictionary to allow dict like access to service layers'''
        if name in self.__getattribute__('contents'):
            return self.__getattribute__('contents')[name]
        else:
            raise KeyError("No content named %s" % name)

    def __init__(self, url, version='1.1.1', xml=None, username=None,
                 password=None, parse_remote_metadata=False, timeout=30):
        """Initialize."""
        self.url = url
        self.username = username
        self.password = password
        self.version = version
        self.timeout = timeout
        self._capabilities = None

        # Authentication handled by Reader
        reader = WMSCapabilitiesReader(self.version, url=self.url,
                                       un=self.username, pw=self.password)
        if xml:  # read from stored xml
            self._capabilities = reader.readString(xml)
        else:  # read from server
            self._capabilities = reader.read(self.url, timeout=self.timeout)

        # avoid building capabilities metadata if the response is a ServiceExceptionReport
        se = self._capabilities.find('ServiceException')
        if se is not None:
            err_message = str(se.text).strip()
            raise ServiceException(err_message, xml)

        # build metadata objects
        self._buildMetadata(parse_remote_metadata)

    def _getcapproperty(self):
        if not self._capabilities:
            reader = WMSCapabilitiesReader(
                self.version, url=self.url, un=self.username, pw=self.password)
            self._capabilities = ServiceMetadata(reader.read(self.url))
        return self._capabilities

    def _buildMetadata(self, parse_remote_metadata=False):
        ''' set up capabilities metadata objects '''

        # serviceIdentification metadata
        serviceelem = self._capabilities.find('Service')
        self.identification = ServiceIdentification(serviceelem, self.version)

        # serviceProvider metadata
        self.provider = ServiceProvider(serviceelem)

        # serviceOperations metadata
        self.operations = []
        for elem in self._capabilities.find('Capability/Request')[:]:
            self.operations.append(OperationMetadata(elem))

        # serviceContents metadata: our assumption is that services use a top-level
        # layer as a metadata organizer, nothing more.
        self.contents = OrderedDict()
        caps = self._capabilities.find('Capability')

        # recursively gather content metadata for all layer elements.
        # To the WebMapService.contents store only metadata of named layers.
        def gather_layers(parent_elem, parent_metadata):
            layers = []
            for index, elem in enumerate(parent_elem.findall('Layer')):
                cm = ContentMetadata(elem, parent=parent_metadata, index=index + 1,
                                     parse_remote_metadata=parse_remote_metadata)
                if cm.id:
                    if cm.id in self.contents:
                        warnings.warn('Content metadata for layer "%s" already exists. Using child layer' % cm.id)
                    layers.append(cm)
                    self.contents[cm.id] = cm
                cm.children = gather_layers(elem, cm)
            return layers
        gather_layers(caps, None)

        # exceptions
        self.exceptions = [f.text for f
                           in self._capabilities.findall('Capability/Exception/Format')]

    def items(self):
        '''supports dict-like items() access'''
        items = []
        for item in self.contents:
            items.append((item, self.contents[item]))
        return items

    def getcapabilities(self):
        """Request and return capabilities document from the WMS as a
        file-like object.

        NOTE: this is effectively redundant now"""
        reader = WMSCapabilitiesReader(
            self.version, url=self.url, un=self.username, pw=self.password)
        u = self._open(reader.capabilities_url(self.url))
        # check for service exceptions, and return
        if u.info()['Content-Type'] == 'application/vnd.ogc.se_xml':
            se_xml = u.read()
            se_tree = etree.fromstring(se_xml)
            err_message = str(se_tree.find('ServiceException').text).strip()
            raise ServiceException(err_message, se_xml)
        return u

    def __build_getmap_request(self, layers=None, styles=None, srs=None, bbox=None,
                               format=None, size=None, time=None, transparent=False,
                               bgcolor=None, exceptions=None, **kwargs):
        request = {'version': self.version, 'request': 'GetMap'}

        # check layers and styles
        assert len(layers) > 0
        request['layers'] = ','.join(layers)
        if styles:
            assert len(styles) == len(layers)
            request['styles'] = ','.join(styles)
        else:
            request['styles'] = ''

        # size
        request['width'] = str(size[0])
        request['height'] = str(size[1])

        request['srs'] = str(srs)
        request['bbox'] = ','.join([repr(x) for x in bbox])
        request['format'] = str(format)
        request['transparent'] = str(transparent).upper()
        request['bgcolor'] = '0x' + bgcolor[1:7]
        request['exceptions'] = str(exceptions)

        if time is not None:
            request['time'] = str(time)

        if kwargs:
            for kw in kwargs:
                request[kw] = kwargs[kw]

        return request

    def getmap(self, layers=None, styles=None, srs=None, bbox=None,
               format=None, size=None, time=None, transparent=False,
               bgcolor='#FFFFFF',
               exceptions='application/vnd.ogc.se_xml',
               method='Get',
               timeout=None,
               **kwargs
               ):
        """Request and return an image from the WMS as a file-like object.

        Parameters
        ----------
        layers : list
            List of content layer names.
        styles : list
            Optional list of named styles, must be the same length as the
            layers list.
        srs : string
            A spatial reference system identifier.
        bbox : tuple
            (left, bottom, right, top) in srs units.
        format : string
            Output image format such as 'image/jpeg'.
        size : tuple
            (width, height) in pixels.
        transparent : bool
            Optional. Transparent background if True.
        bgcolor : string
            Optional. Image background color.
        method : string
            Optional. HTTP DCP method name: Get or Post.
        **kwargs : extra arguments
            anything else e.g. vendor specific parameters

        Example
        -------
            >>> wms = WebMapService('http://giswebservices.massgis.state.ma.us/geoserver/wms', version='1.1.1')
            >>> img = wms.getmap(layers=['massgis:GISDATA.SHORELINES_ARC'],\
                                 styles=[''],\
                                 srs='EPSG:4326',\
                                 bbox=(-70.8, 42, -70, 42.8),\
                                 size=(300, 300),\
                                 format='image/jpeg',\
                                 transparent=True)
            >>> out = open('example.jpg', 'wb')
            >>> bytes_written = out.write(img.read())
            >>> out.close()

        """
        try:
            base_url = next((m.get('url') for m in
                             self.getOperationByName('GetMap').methods
                             if m.get('type').lower() == method.lower()))
        except StopIteration:
            base_url = self.url

        # forward vendor-specific parameters as keyword arguments
        request = self.__build_getmap_request(
            layers=layers, styles=styles, srs=srs, bbox=bbox,
            format=format, size=size, time=time, transparent=transparent,
            bgcolor=bgcolor, exceptions=exceptions, **kwargs)

        data = urlencode(request)

        u = openURL(base_url, data, method,
                    username=self.username, password=self.password,
                    timeout=timeout or self.timeout)

        # check for service exceptions, and return
        if u.info()['Content-Type'] == 'application/vnd.ogc.se_xml':
            se_xml = u.read()
            se_tree = etree.fromstring(se_xml)
            err_message = six.text_type(se_tree.find('ServiceException').text).strip()
            raise ServiceException(err_message, se_xml)
        return u

    def getfeatureinfo(self, layers=None, styles=None, srs=None, bbox=None,
                       format=None, size=None, time=None, transparent=False,
                       bgcolor='#FFFFFF',
                       exceptions='application/vnd.ogc.se_xml',
                       query_layers=None,
                       xy=None,
                       info_format=None,
                       feature_count=20,
                       method='Get',
                       timeout=None,
                       **kwargs
                       ):
        try:
            base_url = next((m.get('url') for m in
                             self.getOperationByName('GetFeatureInfo').methods
                             if m.get('type').lower() == method.lower()))
        except StopIteration:
            base_url = self.url

        # GetMap-Request
        request = self.__build_getmap_request(
            layers=layers, styles=styles, srs=srs, bbox=bbox,
            format=format, size=size, time=time, transparent=transparent,
            bgcolor=bgcolor, exceptions=exceptions, **kwargs)

        # extend to GetFeatureInfo-Request
        request['request'] = 'GetFeatureInfo'

        if not query_layers:
            __str_query_layers = ','.join(layers)
        else:
            __str_query_layers = ','.join(query_layers)

        request['query_layers'] = __str_query_layers
        request['x'] = str(xy[0])
        request['y'] = str(xy[1])
        request['info_format'] = info_format
        request['feature_count'] = str(feature_count)

        data = urlencode(request)

        u = openURL(base_url, data, method,
                    username=self.username, password=self.password,
                    timeout=timeout or self.timeout)

        # check for service exceptions, and return
        if u.info()['Content-Type'] == 'application/vnd.ogc.se_xml':
            se_xml = u.read()
            se_tree = etree.fromstring(se_xml)
            err_message = six.text_type(se_tree.find('ServiceException').text).strip()
            raise ServiceException(err_message, se_xml)
        return u

    def getServiceXML(self):
        xml = None
        if self._capabilities is not None:
            xml = etree.tostring(self._capabilities)
        return xml

    def getOperationByName(self, name):
        """Return a named content item."""
        for item in self.operations:
            if item.name == name:
                return item
        raise KeyError("No operation named %s" % name)


class ServiceIdentification(object):
    ''' Implements IServiceIdentificationMetadata '''

    def __init__(self, infoset, version):
        self._root = infoset
        self.type = testXMLValue(self._root.find('Name'))
        self.version = version
        self.title = testXMLValue(self._root.find('Title'))
        self.abstract = testXMLValue(self._root.find('Abstract'))
        self.keywords = extract_xml_list(self._root.findall('KeywordList/Keyword'))
        self.accessconstraints = testXMLValue(self._root.find('AccessConstraints'))
        self.fees = testXMLValue(self._root.find('Fees'))


class ServiceProvider(object):
    ''' Implements IServiceProviderMetatdata '''

    def __init__(self, infoset):
        self._root = infoset
        name = self._root.find('ContactInformation/ContactPersonPrimary/ContactOrganization')
        if name is not None:
            self.name = name.text
        else:
            self.name = None
        self.url = self._root.find('OnlineResource').attrib.get('{http://www.w3.org/1999/xlink}href', '')
        # contact metadata
        contact = self._root.find('ContactInformation')
        # sometimes there is a contact block that is empty, so make
        # sure there are children to parse
        if contact is not None and contact[:] != []:
            self.contact = ContactMetadata(contact)
        else:
            self.contact = None

    def getContentByName(self, name):
        """Return a named content item."""
        for item in self.contents:
            if item.name == name:
                return item
        raise KeyError("No content named %s" % name)

    def getOperationByName(self, name):
        """Return a named content item."""
        for item in self.operations:
            if item.name == name:
                return item
        raise KeyError("No operation named %s" % name)


class ContentMetadata:
    """
    Abstraction for WMS layer metadata.

    Implements IContentMetadata.
    """

    def __init__(self, elem, parent=None, children=None, index=0,
                 parse_remote_metadata=False, timeout=30):
        if elem.tag != 'Layer':
            raise ValueError('%s should be a Layer' % (elem,))

        self.parent = parent
        if parent:
            self.index = "%s.%d" % (parent.index, index)
        else:
            self.index = str(index)
        self._children = children

        self.id = self.name = testXMLValue(elem.find('Name'))

        # layer attributes
        self.queryable = int(elem.attrib.get('queryable', 0))
        self.cascaded = int(elem.attrib.get('cascaded', 0))
        self.opaque = int(elem.attrib.get('opaque', 0))
        self.noSubsets = int(elem.attrib.get('noSubsets', 0))
        self.fixedWidth = int(elem.attrib.get('fixedWidth', 0))
        self.fixedHeight = int(elem.attrib.get('fixedHeight', 0))

        # title is mandatory property
        self.title = None
        title = testXMLValue(elem.find('Title'))
        if title is not None:
            self.title = title.strip()

        self.abstract = testXMLValue(elem.find('Abstract'))

        # bboxes
        b = elem.find('BoundingBox')
        self.boundingBox = None
        if b is not None:
            try:  # sometimes the SRS attribute is (wrongly) not provided
                srs = b.attrib['SRS']
            except KeyError:
                srs = None
            self.boundingBox = (
                float(b.attrib['minx']),
                float(b.attrib['miny']),
                float(b.attrib['maxx']),
                float(b.attrib['maxy']),
                srs,
            )
        elif self.parent:
            if hasattr(self.parent, 'boundingBox'):
                self.boundingBox = self.parent.boundingBox

        # ScaleHint
        sh = elem.find('ScaleHint')
        self.scaleHint = None
        if sh is not None:
            if 'min' in sh.attrib and 'max' in sh.attrib:
                self.scaleHint = {'min': sh.attrib['min'], 'max': sh.attrib['max']}

        attribution = elem.find('Attribution')
        if attribution is not None:
            self.attribution = dict()
            title = attribution.find('Title')
            url = attribution.find('OnlineResource')
            logo = attribution.find('LogoURL')
            if title is not None:
                self.attribution['title'] = title.text
            if url is not None:
                self.attribution['url'] = url.attrib['{http://www.w3.org/1999/xlink}href']
            if logo is not None:
                self.attribution['logo_size'] = (int(logo.attrib['width']), int(logo.attrib['height']))
                self.attribution['logo_url'] = logo.find('OnlineResource').attrib['{http://www.w3.org/1999/xlink}href']

        b = elem.find('LatLonBoundingBox')
        if b is not None:
            self.boundingBoxWGS84 = (
                float(b.attrib['minx']),
                float(b.attrib['miny']),
                float(b.attrib['maxx']),
                float(b.attrib['maxy']),
            )
        elif self.parent:
            self.boundingBoxWGS84 = self.parent.boundingBoxWGS84
        else:
            self.boundingBoxWGS84 = None

        # SRS options
        self.crsOptions = []

        # Copy any parent SRS options (they are inheritable properties)
        if self.parent:
            self.crsOptions = list(self.parent.crsOptions)

        # Look for SRS option attached to this layer
        if elem.find('SRS') is not None:
            # some servers found in the wild use a single SRS
            # tag containing a whitespace separated list of SRIDs
            # instead of several SRS tags. hence the inner loop
            for srslist in [x.text for x in elem.findall('SRS')]:
                if srslist:
                    for srs in srslist.split():
                        self.crsOptions.append(srs)

        # Get rid of duplicate entries
        self.crsOptions = list(set(self.crsOptions))

        # Set self.crsOptions to None if the layer (and parents) had no SRS options
        if len(self.crsOptions) == 0:
            # raise ValueError('%s no SRS available!?' % (elem,))
            # Comment by D Lowe.
            # Do not raise ValueError as it is possible that a layer is purely a parent
            # layer and does not have SRS specified. Instead set crsOptions to None
            # Comment by Jachym:
            # Do not set it to None, but to [], which will make the code
            # work further. Fixed by anthonybaxter
            self.crsOptions = []

        # Styles
        self.styles = {}

        # Copy any parent styles (they are inheritable properties)
        if self.parent:
            self.styles = self.parent.styles.copy()

        # Get the styles for this layer (items with the same name are replaced)
        for s in elem.findall('Style'):
            name = s.find('Name')
            title = s.find('Title')
            if name is None or title is None:
                raise ValueError('%s missing name or title' % (s,))
            style = {'title': title.text}
            # legend url
            legend = s.find('LegendURL/OnlineResource')
            if legend is not None:
                style['legend'] = legend.attrib['{http://www.w3.org/1999/xlink}href']
            self.styles[name.text] = style

        # keywords
        self.keywords = [f.text for f in elem.findall('KeywordList/Keyword')]

        # timepositions - times for which data is available.
        self.timepositions = None
        self.defaulttimeposition = None
        for extent in elem.findall('Extent'):
            if extent.attrib.get("name").lower() == 'time':
                if extent.text:
                    self.timepositions = extent.text.split(',')
                    self.defaulttimeposition = extent.attrib.get("default")
                break

        # Elevations - available vertical levels
        self.elevations = None
        for extent in elem.findall('Extent'):
            if extent.attrib.get("name").lower() == 'elevation':
                if extent.text:
                    self.elevations = extent.text.split(',')
                break

        # MetadataURLs
        self.metadataUrls = []
        for m in elem.findall('MetadataURL'):
            metadataUrl = {
                'type': testXMLValue(m.attrib['type'], attrib=True),
                'format': testXMLValue(m.find('Format')),
                'url': testXMLValue(m.find('OnlineResource').attrib['{http://www.w3.org/1999/xlink}href'], attrib=True)
            }
            if metadataUrl['url'] is not None and parse_remote_metadata:
                # download URL
                try:
                    content = openURL(metadataUrl['url'], timeout=timeout)
                    doc = etree.parse(content)
                    if metadataUrl['type'] is not None:
                        if metadataUrl['type'] == 'FGDC':
                            metadataUrl['metadata'] = Metadata(doc)
                        if metadataUrl['type'] == 'TC211':
                            metadataUrl['metadata'] = MD_Metadata(doc)
                except Exception:
                    metadataUrl['metadata'] = None
            self.metadataUrls.append(metadataUrl)

        # DataURLs
        self.dataUrls = []
        for m in elem.findall('DataURL'):
            dataUrl = {
                'format': m.find('Format').text.strip(),
                'url': m.find('OnlineResource').attrib['{http://www.w3.org/1999/xlink}href']
            }
            self.dataUrls.append(dataUrl)

        self.layers = []
        for child in elem.findall('Layer'):
            self.layers.append(ContentMetadata(child, self))

    @property
    def children(self):
        return self._children

    @children.setter
    def children(self, value):
        if self._children is None:
            self._children = value
        else:
            self._children.extend(value)
        # If layer is a group and one of its children is queryable,
        # the layer must be queryable.
        if self._children and self.queryable == 0:
            for child in self._children:
                if child.queryable:
                    self.queryable = child.queryable
                    break

    def __str__(self):
        return 'Layer Name: %s Title: %s' % (self.name, self.title)


class OperationMetadata:
    """Abstraction for WMS OperationMetadata.

    Implements IOperationMetadata.
    """

    def __init__(self, elem):
        """."""
        self.name = xmltag_split(elem.tag)
        # formatOptions
        self.formatOptions = [f.text for f in elem.findall('Format')]
        self.methods = []
        for verb in elem.findall('DCPType/HTTP/*'):
            url = verb.find('OnlineResource').attrib['{http://www.w3.org/1999/xlink}href']
            self.methods.append({'type': xmltag_split(verb.tag), 'url': url})


class ContactMetadata:
    """Abstraction for contact details advertised in GetCapabilities.
    """

    def __init__(self, elem):
        name = elem.find('ContactPersonPrimary/ContactPerson')
        if name is not None:
            self.name = name.text
        else:
            self.name = None

        email = elem.find('ContactElectronicMailAddress')
        if email is not None:
            self.email = email.text
        else:
            self.email = None

        self.address = self.city = self.region = None
        self.postcode = self.country = None

        address = elem.find('ContactAddress')
        if address is not None:
            street = address.find('Address')
            if street is not None:
                self.address = street.text
            city = address.find('City')
            if city is not None:
                self.city = city.text
            region = address.find('StateOrProvince')
            if region is not None:
                self.region = region.text
            postcode = address.find('PostCode')
            if postcode is not None:
                self.postcode = postcode.text
            country = address.find('Country')
            if country is not None:
                self.country = country.text

        organization = elem.find('ContactPersonPrimary/ContactOrganization')
        if organization is not None:
            self.organization = organization.text
        else:
            self.organization = None

        position = elem.find('ContactPosition')
        if position is not None:
            self.position = position.text
        else:
            self.position = None


class WMSCapabilitiesReader:
    """Read and parse capabilities document into a lxml.etree infoset
    """

    def __init__(self, version='1.1.1', url=None, un=None, pw=None):
        """Initialize"""
        self.version = version
        self._infoset = None
        self.url = url
        self.username = un
        self.password = pw

        # if self.username and self.password:
        #     # Provide login information in order to use the WMS server
        #     # Create an OpenerDirector with support for Basic HTTP
        #     # Authentication...
        #     passman = HTTPPasswordMgrWithDefaultRealm()
        #     passman.add_password(None, self.url, self.username, self.password)
        #     auth_handler = HTTPBasicAuthHandler(passman)
        #     opener = build_opener(auth_handler)
        #     self._open = opener.open

    def capabilities_url(self, service_url):
        """Return a capabilities url
        """
        qs = []
        if service_url.find('?') != -1:
            qs = cgi.parse_qsl(service_url.split('?')[1])

        params = [x[0] for x in qs]

        if 'service' not in params:
            qs.append(('service', 'WMS'))
        if 'request' not in params:
            qs.append(('request', 'GetCapabilities'))
        if 'version' not in params:
            qs.append(('version', self.version))

        urlqs = urlencode(tuple(qs))
        return service_url.split('?')[0] + '?' + urlqs

    def read(self, service_url, timeout=30):
        """Get and parse a WMS capabilities document, returning an
        elementtree instance

        service_url is the base url, to which is appended the service,
        version, and request parameters
        """
        getcaprequest = self.capabilities_url(service_url)

        # now split it up again to use the generic openURL function...
        spliturl = getcaprequest.split('?')
        u = openURL(spliturl[0], spliturl[1], method='Get',
                    username=self.username, password=self.password,
                    timeout=timeout)
        return etree.fromstring(u.read())

    def readString(self, st):
        """Parse a WMS capabilities document, returning an elementtree instance

        string should be an XML capabilities document
        """
        if not isinstance(st, str) and not isinstance(st, bytes):
            raise ValueError("String must be of type string or bytes, not %s" % type(st))
        return etree.fromstring(st)
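The query-merging step in WMSCapabilitiesReader.capabilities_url can be exercised standalone. This sketch follows the same rules (preserve caller-supplied parameters, fill in the mandatory WMS ones) but uses urllib.parse.parse_qsl rather than the deprecated cgi.parse_qsl; the function name and signature are illustrative only:

```python
from urllib.parse import parse_qsl, urlencode

def capabilities_url(service_url, version='1.1.1'):
    # Keep whatever query parameters the caller supplied...
    qs = parse_qsl(service_url.split('?')[1]) if '?' in service_url else []
    present = [key for key, _ in qs]
    # ...and append only the mandatory WMS parameters that are missing.
    for key, value in (('service', 'WMS'),
                       ('request', 'GetCapabilities'),
                       ('version', version)):
        if key not in present:
            qs.append((key, value))
    return service_url.split('?')[0] + '?' + urlencode(qs)
```

Because existing keys are never overwritten, a caller who already set `request=GetMap` keeps that value, exactly as in the reader above.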
Get a hip '20s look for Halloween. This fabulous flapper dress features black fringe that sways with your movement and a silver-sequined headband that matches the trim of the dress. Get our flapper purse to heighten your authentic style.
#####################################################
#
# copyright.txt
#
# Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Hewlett-Packard and the Hewlett-Packard logo are trademarks of
# Hewlett-Packard Development Company, L.P. in the U.S. and/or other countries.
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
# Author:
#     Chris Frantz
#
# Description:
#     Import ElementTree.  Pick lxml as the preferred version
#
#####################################################

#try:
#    import lxml.etree as ET
#except ImportError:
#    import xml.etree.ElementTree as ET

import lxml.etree as ET


class xmlstr:
    '''
    A class that defers ET.tostring until the instance is evaluated in
    string context.  Use this to print xml etrees in debug statements:

        xml = ET.parse(...)
        log.debug("The xml was: %s", xmlstr(xml))
    '''
    def __init__(self, xml):
        self.xml = xml

    def __str__(self):
        return ET.tostring(self.xml, pretty_print=True)


__all__ = ['ET', 'xmlstr']

# VIM options (place at end of file)
# vim: ts=4 sts=4 sw=4 expandtab:
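The same deferred-serialization idea works with the standard-library ElementTree as well. A small stand-alone sketch (stdlib `xml.etree` has no `pretty_print` option, so this version uses `encoding='unicode'` to get a `str` back; the sample XML is illustrative):

```python
import xml.etree.ElementTree as ET

class xmlstr:
    """Defer ET.tostring until the instance is rendered in string
    context, so debug logging only pays the serialization cost when
    the message is actually formatted."""
    def __init__(self, xml):
        self.xml = xml

    def __str__(self):
        # encoding='unicode' makes tostring return str rather than bytes
        return ET.tostring(self.xml, encoding='unicode')

tree = ET.fromstring('<root><child/></root>')
print("The xml was: %s" % xmlstr(tree))
```

The design point is that `%`-style logging arguments are only formatted when a handler emits the record, so wrapping the tree in `xmlstr` avoids serializing XML for suppressed debug messages.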
"Would You Hold It Against Me" was definitely a Nashville Sound recording, with violins and other string instruments that can be heard. After Taking the Long Way, the band broke up for a decade (with two of its members continuing as the Court Yard Hounds) before embarking on a reunion tour in 2016. TNN was later revived from 2012 to 2013 after Jim Owens Entertainment (the company responsible for prominent TNN hosts Crook & Chase) acquired the trademark and licensed it to Luken Communications; that channel renamed itself Heartland after Luken was embroiled in an unrelated dispute. By 1966, Dottie West's professional career in country music was only getting started. The comments caused a rift between the band and the country music scene, and the band's fourth (and most recent) album, 2006's Taking the Long Way, took a more rock-oriented direction; the album was commercially successful overall but largely ignored among country audiences. The origins of country music lie in the folk music of working-class Americans, who blended popular songs, Irish and Celtic fiddle tunes, traditional English ballads, cowboy songs, and various musical traditions from European immigrants.
#: E261:4 pass # an inline comment #: E261:4 pass# an inline comment # Okay pass # an inline comment pass # an inline comment #: E262:11 x = x + 1 #Increment x #: E262:11 x = x + 1 # Increment x #: E262:11 x = y + 1 #: Increment x #: E265 #Block comment a = 1 #: E265+1 m = 42 #! This is important mx = 42 - 42 # Comment without anything is not an issue. # # However if there are comments at the end without anything it obviously # doesn't make too much sense. #: E262:9 foo = 1 # #: E266+2:4 E266+5:4 def how_it_feel(r): ### This is a variable ### a = 42 ### Of course it is unused return #: E266 E266+1 ##if DEBUG: ## logging.error() #: E266 ######################################### # Not at the beginning of a file #: E265 #!/usr/bin/env python # Okay pass # an inline comment x = x + 1 # Increment x y = y + 1 #: Increment x # Block comment a = 1 # Block comment1 # Block comment2 aaa = 1 # example of docstring (not parsed) def oof(): """ #foo not parsed """ ########################################################################### # A SEPARATOR # ########################################################################### # ####################################################################### # # ########################## another separator ########################## # # ####################################################################### #
We offer a wide range of services including TV Activation (TV In Motion) (for BMW, Range Rover, Mini, Rover, VW & Audi), Car Radio and Navigation Unit Decoding and Unlocking, as well as an Airbag Crash Data service and full vehicle diagnostics. We operate within the Lancashire and North West areas, more specifically around the Preston, Lancaster, Morecambe, Kendal, South Lakes & Blackpool areas. We are quite willing to travel further afield as required. In special cases we also offer a bespoke hardware and software development service. As part of our commitment to you, the customer, our services are second to none. We use only the very latest hardware, techniques and procedures to ensure that your valuable property is repaired to the highest possible standards. We are not backstreet merchants who will reprogram your vehicle with inferior, outdated laptop computers using a mish-mash of bug-ridden programs. All our work is guaranteed, we stand by our results, and total customer satisfaction is our goal. Please take some time to browse our website; if you have any questions about our services then please feel free to get in touch. The image above is of our in-house designed custom programmer for automotive use; more details can be seen on its dedicated website at www.ultraprog.co.uk and on our sister website at www.rampantapathy.co.uk. We can also convert your imported car to display speed and distances in mph/miles by way of a small conversion module fitted to your vehicle. For more information on this or any of our services please contact us by either email or telephone on 07870903552 (7 days a week).
import os

from helpers.python_ext import lmap
from parsing.acacia_lexer_desc import *


class Assumption:
    def __init__(self, data):
        self.data = data


class Guarantee:
    def __init__(self, data):
        self.data = data


precedence = (
    ('left', 'OR'),
    ('left', 'IMPLIES', 'EQUIV'),
    ('left', 'AND'),
    ('left', 'TEMPORAL_BINARY'),
    ('left', 'NEG'),             # left - right should not matter..
    ('left', 'TEMPORAL_UNARY'),  # left - right should not matter..
    ('nonassoc', 'EQUALS')
)


def p_start(p):
    """start : empty
             | units
    """
    if p[1] is not None:
        p[0] = p[1]
    else:
        p[0] = []


def p_empty(p):
    r"""empty :"""
    pass


def p_units(p):
    """units : unit
             | units unit"""
    if len(p) == 2:
        p[0] = [p[1]]
    else:
        p[0] = p[1] + [p[2]]


#TODO: figure out why conflict arises
#def p_unit_without_header(p):
#    """unit : unit_data """
#    p[0] = ('None', p[1])


def p_unit_with_header(p):
    """unit : unit_header unit_data """
    p[0] = (p[1], p[2])


def p_unit_header(p):
    """unit_header : LBRACKET SPEC_UNIT NAME RBRACKET"""
    p[0] = p[3]


def p_unit_data(p):
    """unit_data : empty
                 | expressions """
    if p[1] is None:
        p[0] = ([], [])
    else:
        assumptions = lmap(lambda e: e.data, filter(lambda e: isinstance(e, Assumption), p[1]))
        guarantees = lmap(lambda e: e.data, filter(lambda e: isinstance(e, Guarantee), p[1]))
        p[0] = (assumptions, guarantees)


def p_signal_name(p):
    """ signal_name : NAME """
    p[0] = Signal(p[1])


def p_unit_data_expressions(p):
    """expressions : expression SEP
                   | expressions expression SEP"""
    if len(p) == 3:
        p[0] = [p[1]]
    else:
        p[0] = p[1] + [p[2]]


def p_unit_data_expression_assumption(p):
    """ expression : ASSUME property """
    p[0] = Assumption(p[2])


def p_unit_data_expression_guarantee(p):
    """ expression : property """
    p[0] = Guarantee(p[1])


def p_unit_data_property_bool(p):
    """property : BOOL"""
    p[0] = p[1]


def p_unit_data_property_binary_operation(p):
    """property : signal_name EQUALS NUMBER
                | property AND property
                | property OR property
                | property IMPLIES property
                | property EQUIV property
                | property TEMPORAL_BINARY property"""
    assert p[2] in BIN_OPS
    p[0] = BinOp(p[2], p[1], p[3])


def p_unit_data_property_unary(p):
    """property : TEMPORAL_UNARY property
                | NEG property """
    p[0] = UnaryOp(p[1], p[2])


def p_unit_data_property_grouping(p):
    """property : LPAREN property RPAREN"""
    p[0] = p[2]


def p_error(p):
    if p:
        print("----> Syntax error at '%s'" % p.value)
        print("lineno: %d" % p.lineno)
    else:
        print('----> Syntax error, t is None')
    assert 0


from third_party.ply import yacc

acacia_parser = yacc.yacc(debug=0, outputdir=os.path.dirname(os.path.realpath(__file__)))
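The partitioning that `p_unit_data` performs on a mixed list of parsed expressions can be shown in isolation. A minimal, self-contained sketch (the wrapper classes mirror the `Assumption`/`Guarantee` definitions above; the LTL strings are made-up sample data):

```python
class Assumption:
    def __init__(self, data):
        self.data = data

class Guarantee:
    def __init__(self, data):
        self.data = data

def split_expressions(expressions):
    """Split a mixed list of wrapped expressions into (assumptions,
    guarantees), unwrapping the .data payload, as p_unit_data does."""
    assumptions = [e.data for e in expressions if isinstance(e, Assumption)]
    guarantees = [e.data for e in expressions if isinstance(e, Guarantee)]
    return assumptions, guarantees

exprs = [Assumption('G(r -> F g)'), Guarantee('G(!err)'), Guarantee('GF alive')]
print(split_expressions(exprs))
# → (['G(r -> F g)'], ['G(!err)', 'GF alive'])
```

This is why `p_unit_data_expression_guarantee` wraps every bare property in `Guarantee`: any expression not prefixed by `ASSUME` is treated as a guarantee by default.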
EngSoc are currently one of the largest societies on campus, boasting a huge number of members from every faculty, not just Engineering. We aim to promote engineering and technology across the University, as well as provide a fun, engaging and informative society for Engineering students to get involved in! EngSoc organises regular talks from established speakers from a wide range of Engineering disciplines, including our recent EEE event with the Energy Society and Entrepreneurship Society. These speakers included Micheal Campion, a lecturer in entrepreneurship; Eugene Greeaney, co-founder of DoughBros; and Joe Smyth, CTO and co-founder of Altocloud. Aside from talks, EngSoc also organises several social events, in association with our sponsor, Electric Garden and Theatre. These include an annual First Year Party, as well as several charity events throughout the year. One of these events is a team-building trip to Connemara, where 20 students will attend Tough Mudder, raising money for our chosen charity for the year, Console. Console is a suicide awareness charity that the society feels strongly about supporting. EngSoc also holds several overnight trips throughout the year. One trip is an overnight stay in Cork/Limerick, which includes activities such as paintballing, go-karting and rollerdisco! Our other annual trip is our International Mystery Trip, where EngSoc takes 40 lucky members to a mystery European city! Past locations include Budapest and Berlin. The Engineering and Nursing Ball, organised by EngSoc, is one of the largest events on the University calendar. Over 1,200 students attend annually and get to experience some of the best live acts in Ireland and around the world. Past acts include Le Galaxie, Delorentos, Nina Nesbitt and Watermat. Find us on Facebook and sign up to stay up to date with all our events!
# -*- coding: utf-8 -*-
# -*- Channel DoramedPlay -*-
# -*- BASED ON: Channel DramasJC -*-
# -*- Created for Alfa-addon -*-

import re
import sys

import requests

PY3 = False
if sys.version_info[0] >= 3:
    PY3 = True
    unicode = str
    unichr = chr
    long = int

from channelselector import get_thumb
from core import httptools
from core import scrapertools
from core import servertools
from core import tmdb
from core.item import Item
from platformcode import config, logger
from channels import autoplay
from channels import filtertools

host = 'https://doramedplay.com/'

IDIOMAS = {'VOSE': 'VOSE', 'LAT': 'LAT'}
list_language = list(IDIOMAS.values())
list_quality = []
list_servers = ['okru', 'mailru', 'openload']


def mainlist(item):
    logger.info()
    autoplay.init(item.channel, list_servers, list_quality)
    itemlist = list()

    itemlist.append(Item(channel=item.channel, title="Doramas", action="list_all", url=host + 'tvshows/',
                         type="tvshows", thumbnail=get_thumb('tvshows', auto=True)))

    itemlist.append(Item(channel=item.channel, title="Películas", action="list_all", url=host + 'movies/',
                         type='movies', thumbnail=get_thumb('movies', auto=True)))

    # itemlist.append(Item(channel=item.channel, title="Generos", action="section",
    #                      url=host + 'catalogue', thumbnail=get_thumb('genres', auto=True)))

    # itemlist.append(Item(channel=item.channel, title="Por Años", action="section", url=host + 'catalogue',
    #                      thumbnail=get_thumb('year', auto=True)))

    itemlist.append(Item(channel=item.channel, title="Buscar", action="search", url=host + '?s=',
                         thumbnail=get_thumb('search', auto=True)))

    autoplay.show_option(item.channel, itemlist)

    return itemlist


def get_source(url):
    logger.info()
    data = httptools.downloadpage(url).data
    data = re.sub(r'\n|\r|\t|&nbsp;|<br>|\s{2,}', "", data)
    return data


def list_all(item):
    logger.info()
    itemlist = []
    data = get_source(item.url)
    patron = '<article id="post-\d+".*?<img (?:data-)?src="([^"]+").*?<div class="rating">([^<]+)?<.*?'
    patron += '<h3><a href="([^"]+)".*?>([^<]+)<.*?<span>.*?, (\d{4})<.*?<div class="texto">([^<]+)<'
    matches = re.compile(patron, re.DOTALL).findall(data)

    for scrapedthumbnail, scrapedrating, scrapedurl, scrapedtitle, scrapedyear, scrapedplot in matches:
        url = scrapedurl
        year = scrapedyear
        filtro_tmdb = list({"first_air_date": year}.items())
        contentname = scrapedtitle
        title = '%s (%s) [%s]' % (contentname, scrapedrating, year)
        thumbnail = scrapedthumbnail

        new_item = Item(channel=item.channel,
                        title=title,
                        contentSerieName=contentname,
                        plot=scrapedplot,
                        url=url,
                        thumbnail=thumbnail,
                        infoLabels={'year': year, 'filtro': filtro_tmdb})
        if item.type == 'tvshows':
            new_item.action = 'seasons'
        else:
            new_item.action = 'findvideos'
        itemlist.append(new_item)

    tmdb.set_infoLabels_itemlist(itemlist, True)

    # Pagination
    url_next_page = scrapertools.find_single_match(data, "<span class=\"current\">.?<\/span>.*?<a href='([^']+)'")
    if url_next_page:
        itemlist.append(Item(channel=item.channel, type=item.type, title="Siguiente >>",
                             url=url_next_page, action='list_all'))

    return itemlist


def seasons(item):
    logger.info()
    itemlist = []
    data = get_source(item.url)
    patron = "<div id='seasons'>.*?>Temporada([^<]+)<i>([^<]+).*?<\/i>"
    matches = re.compile(patron, re.DOTALL).findall(data)

    for temporada, fecha in matches:
        title = 'Temporada %s (%s)' % (temporada.strip(), fecha)
        contentSeasonNumber = temporada.strip()
        item.infoLabels['season'] = contentSeasonNumber
        itemlist.append(item.clone(action='episodesxseason', title=title,
                                   contentSeasonNumber=contentSeasonNumber))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)

    if config.get_videolibrary_support() and len(itemlist) > 0 and item.extra != 'episodios':
        itemlist.append(Item(channel=item.channel,
                             title='[COLOR yellow]Añadir esta serie a la videoteca[/COLOR]',
                             url=item.url, action="add_serie_to_library", extra="episodios",
                             contentSerieName=item.contentSerieName,
                             contentSeasonNumber=contentSeasonNumber))

    return itemlist


def episodios(item):
    logger.info()
    itemlist = []
    templist = seasons(item)
    for tempitem in templist:
        itemlist += episodesxseason(tempitem)
    return itemlist


def episodesxseason(item):
    logger.info()
    itemlist = []
    season = item.contentSeasonNumber
    data = get_source(item.url)
    data = scrapertools.find_single_match(data, ">Temporada %s .*?<ul class='episodios'>(.*?)<\/ul>" % season)
    patron = "<a href='([^']+)'>([^<]+)<\/a>.*?<span[^>]+>([^<]+)<\/span>"
    matches = re.compile(patron, re.DOTALL).findall(data)

    ep = 1
    for scrapedurl, scrapedtitle, fecha in matches:
        epi = str(ep)
        title = season + 'x%s - Episodio %s (%s)' % (epi, epi, fecha)
        url = scrapedurl
        contentEpisodeNumber = epi
        item.infoLabels['episode'] = contentEpisodeNumber
        itemlist.append(item.clone(action='findvideos', title=title, url=url,
                                   contentEpisodeNumber=contentEpisodeNumber))
        ep += 1

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
    return itemlist


def findvideos(item):
    logger.info()
    itemlist = []
    data = get_source(item.url)
    post_id = scrapertools.find_single_match(data, "'https:\/\/doramedplay\.com\/\?p=(\d+)'")
    body = "action=doo_player_ajax&post=%s&nume=1&type=tv" % post_id

    source_headers = dict()
    source_headers["Content-Type"] = "application/x-www-form-urlencoded; charset=UTF-8"
    source_headers["X-Requested-With"] = "XMLHttpRequest"
    source_headers["Referer"] = host

    source_result = httptools.downloadpage(host + "wp-admin/admin-ajax.php", post=body, headers=source_headers)

    if source_result.code == 200:
        source_json = source_result.json
        if source_json['embed_url']:
            source_url = source_json['embed_url']
            logger.info("source: " + source_url)

            DIRECT_HOST = "v.pandrama.com"
            if DIRECT_HOST in source_url:
                directo_result = httptools.downloadpage(source_url, headers={"Referer": item.url})
                directo_result = re.sub(r'\n|\r|\t|&nbsp;|<br>|\s{2,}', "", directo_result.data)

                metadata_url = scrapertools.find_single_match(directo_result, 'videoSources\":\[{\"file\":\"([^"]+)\"')
                metadata_url = re.sub(r'\\', "", metadata_url)
                metadata_url = re.sub(r'/1/', "/" + DIRECT_HOST + "/", metadata_url)
                metadata_url += "?s=1&d="
                # e.g. https://1/cdn/hls/9be120188fe6b91e70db037b674c686d/master.txt

                metadata = requests.get(metadata_url, headers={"Referer": source_url}).content

                # Get the stream URLs, one per resolution
                patron = "RESOLUTION=(.*?)http([^#]+)"
                video_matches = re.compile(patron, re.DOTALL).findall(metadata)
                for video_resolution, video_url in video_matches:
                    final_url = "http" + video_url.strip()
                    url_video = final_url + "|referer=" + final_url
                    itemlist.append(Item(channel=item.channel,
                                         title='%s (' + video_resolution.strip() + ')',
                                         url=url_video, action='play'))
            else:
                itemlist.append(Item(channel=item.channel, title='%s',
                                     url=source_json['embed_url'], action='play'))

    itemlist = servertools.get_servers_itemlist(itemlist, lambda x: x.title % x.server.capitalize())

    # Required for FilterTools
    itemlist = filtertools.get_links(itemlist, item, list_language)

    # Required for AutoPlay
    autoplay.start(itemlist, item)

    if config.get_videolibrary_support() and len(itemlist) > 0 and item.extra != 'findvideos':
        itemlist.append(
            Item(channel=item.channel, title='[COLOR yellow]Añadir esta pelicula a la videoteca[/COLOR]',
                 url=item.url, action="add_pelicula_to_library", extra="findvideos",
                 contentTitle=item.contentTitle))

    return itemlist


def list_search(item):
    logger.info()
    itemlist = []
    data = get_source(item.url)
    patron = '<div class="result-item">.*?<div class="thumbnail.*?<a href="([^"]+)">'
    patron += '<img src="([^"]+)".*?<span class="([^"]+)".*?<div class="title">'
    patron += '<a href="[^"]+">([^<]+)<.*?<span class="year">([^<]+)<.*?<p>([^<]+)<'
    matches = re.compile(patron, re.DOTALL).findall(data)

    for scrapedurl, scrapedthumbnail, scrapedtype, scrapedtitle, scrapedyear, scrapedplot in matches:
        url = scrapedurl
        year = scrapedyear
        contentname = scrapedtitle
        title = '%s (%s) (%s)' % (contentname, scrapedtype, year)
        thumbnail = scrapedthumbnail

        new_item = Item(channel=item.channel, title=title, url=url, thumbnail=thumbnail,
                        plot=scrapedplot, type=scrapedtype, action='list_all',
                        infoLabels={'year': year})
        new_item.contentSerieName = contentname
        if new_item.type == 'tvshows':
            new_item.action = 'seasons'
        else:
            new_item.action = 'findvideos'
        itemlist.append(new_item)

    # tmdb.set_infoLabels_itemlist(itemlist, True)
    return itemlist


def search(item, texto):
    logger.info()
    texto = texto.replace(" ", "+")
    item.url = item.url + texto
    try:
        return list_search(item)
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []
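The cleanup step used by `get_source` above is worth seeing in isolation: the channel flattens the downloaded HTML (newlines, tabs, `&nbsp;`, `<br>`, runs of whitespace) so the `re.DOTALL` scraping patterns can match across what were originally line breaks. A runnable sketch with a made-up HTML snippet:

```python
import re

# Hypothetical fragment of a downloaded page, with the kinds of
# whitespace and entities the channel strips before scraping.
raw = 'VOSE  &nbsp;<br>\n\t<div class="title">\r\n  <a href="#">Dorama</a></div>'

# Same substitution as get_source(): remove line breaks, tabs,
# non-breaking spaces, <br> tags and runs of two-or-more spaces.
flat = re.sub(r'\n|\r|\t|&nbsp;|<br>|\s{2,}', "", raw)
print(flat)
# → 'VOSE<div class="title"><a href="#">Dorama</a></div>'
```

Note that single spaces survive (`\s{2,}` requires at least two), which is what keeps attribute separators like `class="title"` intact while the multi-line markup collapses into one scrapeable string.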
An instant musical rapport between Jenni & Javier evolved into the Miel Crudo experience – a unique celebration of ‘world’ music. How did Miel Crudo come to fruition? Jenni: My lifelong friend, Jules, got us to meet up about 3 years ago, and we had an instant musical rapport with each other; however, we both needed a catalyst or kick start to get the music going. The catalyst was meeting each other – we seem to bounce off each other’s music style with no stressful effort, which is a great thing. Javier: That’s it in a nutshell – we just clicked. You were originally known as Raw Honey. Why did the name change, and what does it mean? Javier: The name change came about as we evolved as a duo. We had more original and exotic tunes, so we wanted a more original and exotic name which defined and reflected what we are now – hence the name Miel Crudo came about. In addition, as I’m of Spanish heritage, it seemed appropriate and extremely relevant – the name translates to ‘Raw Honey’ in English. Jenni: ‘Raw Honey’ defined what we are doing – an unplugged feel without any additional instruments or special effects; however, we felt that the Spanish translation reflected our sound more accurately. Introduce us to the members of Miel Crudo and the instruments they each play. Javier: The band consists of two members – Jenni Willow, who is one of the most passionate and creative musicians I have had the pleasure of collaborating with, and myself, Javier Serrano. I play a Maton acoustic guitar, which has a distinctive and unique tone to it, and Jenni plays a powerful bellowing Taylor, which acts as the motherboard of our combined sound. Jenni: We also have the sweetest little 9 string acoustic/electric guitar (it was originally a 12 string, but we removed 3 strings to change the sound). Amazingly, we can play it to sound like a Mandolin, Balalaika or even a Sitar. It’s a joy! Tell us about your style of music and its origins. 
Javier: Our style of music is difficult to define, as it fits into many genres of music, each with its own unique sound. You could loosely call it ‘Intercontinental’ or ‘World Music’, as it originates from a collective memory of sounds and styles throughout both our lifetimes. Amazingly, this unique collective of sound just shines when we play together; mostly spontaneous, our music is never or rarely planned. Jenni: To be honest, we just pick around and something comes together. We don’t consciously set out to have a song with a theme; if it happens, it happens. We often say, “Gee, that sounds Russian” or “Hey, that sounds Arabic” or “That’s a bit Irish”– and around the world we go! Where did your passion for music, and in particular, this style, stem from? Javier: I don’t really know where MY passion for music comes from; however, I have definitely been influenced by a number of musical genres, from hard Rock to traditional classical to Flamenco guitar, but most of what we’re playing as Miel Crudo tends to be spontaneous. Jenni: We believe we have been bestowed with the ability to make wonderful music – an amazing gift! I can remember humming entire piano concertos when I was only 4 years old (I would hide in the cupboard and pretend to be the radio for my sisters). I’ve been playing piano since I was nine and love that medium just as much as the guitar. I’m not sure where my style comes from, but it definitely comes from the soul. Who and what have been some of your influences throughout your career? Javier: My love of music and natural curiosity for all things music amount to influences drawn from a massive range of genres and sub-cultures, from Van Halen and ACDC to Paco De Lucia and Julian Bream. Jenni: From an early age, we had Classical music constantly blaring in our house; add to that the delicious 1960s cocktail sounds and the Tijuana Brass (which was so cool), and I feel it’s safe to say my musical interests are eclectic. 
However, I’m not sure if I’m influenced by that style of music, as I prefer not to mold myself on other music or musicians, but allow my music to take its own path; although, I’ve always been nuts about Jeff Beck. When I first heard his Truth album, I almost went into orbit. Way ahead of his time, that’s for sure. Then there’s my love of Jimmy Page with his amazing riffs, Jimi Hendrix and Eric Clapton. You recently finished recording an album; tell us about that. Javier: The album is the culmination of the music and energy that continues to flow between us – a documentation of our work. It features all original music, which is raw, sweet, yet a little quirky … a tapestry of uniquely different sounds drawn from different cultures. Jenni: Our album Salt of the Earth is a juxtaposition of songs about daily life, observations, our surrounds, the random, the mundane, the exciting. It’s a musical journey that focuses entirely on the experiential aspect of sound, without any vocal additions. We are both so proud of the end result and can’t wait to share the Miel Crudo experience through this album. What other plans have you got in the pipeline for 2012? Javier: Our focus for the coming months is to promote our album and to develop and foster ongoing relationships with venues/clients with the objective of securing gig bookings up and down the East Coast. We also have plans for another album. Jenni: Our ultimate goal is to be out there playing. There’s nothing more rewarding than to have folks coming up to you and saying, “We loved that” or “You made me feel good”. That’s our utopia. However, we understand the importance of ensuring that Miel Crudo is ready from a business perspective (i.e. development and implementation of our online presence via our website and social media network marketing channels), so we have also concentrated our efforts on the actual ‘business’ side of things, so potential clients and fans have easy access to our music and direct contact with us. 
Where can people find out more, and how do they book you for a gig? Jenni: We have a dynamite promoter and marketing guru called Mel Cox of Marshmello Marketing. She is responsible for promoting and marketing Miel Crudo and is also focusing on the development and implementation of our online presence. She really believes in us and has been an absolute pearl. For bookings, contact Jenni on 0487 407 046 or Javier on 0432 519 933.
#Written by Reid McIlroy-Young for Dr. John McLevey, University of Waterloo 2015
import unittest

import metaknowledge


class TestCitation(unittest.TestCase):
    def setUp(self):
        self.Cite = metaknowledge.Citation("John D., 2015, TOPICS IN COGNITIVE SCIENCE, V1, P1, DOI 0.1063/1.1695064")

    def test_citation_author(self):
        self.assertEqual(self.Cite.author, "John D")

    def test_citation_year(self):
        self.assertEqual(self.Cite.year, 2015)

    def test_citation_journal(self):
        self.assertEqual(self.Cite.journal, "TOPICS IN COGNITIVE SCIENCE")

    def test_citation_v(self):
        self.assertEqual(self.Cite.V, "V1")

    def test_citation_p(self):
        self.assertEqual(self.Cite.P, "P1")

    def test_citation_DOI(self):
        self.assertEqual(self.Cite.DOI, "0.1063/1.1695064")

    def test_citation_id(self):
        self.assertEqual(self.Cite.ID(), "John D, 2015, TOPICS IN COGNITIVE SCIENCE")

    def test_citation_str(self):
        self.assertEqual(str(self.Cite), "John D., 2015, TOPICS IN COGNITIVE SCIENCE, V1, P1, DOI 0.1063/1.1695064")

    def test_citation_extra(self):
        self.assertEqual(self.Cite.Extra(), "V1, P1, 0.1063/1.1695064")

    def test_citation_badDetection(self):
        self.assertTrue(metaknowledge.Citation("").bad)

    def test_citation_equality(self):
        c1 = metaknowledge.Citation("John D., 2015, TOPICS IN COGNITIVE SCIENCE, P1, DOI 0.1063/1.1695064")
        c2 = metaknowledge.Citation("John D., 2015, TOPICS IN COGNITIVE SCIENCE, V1, P1")
        c3 = metaknowledge.Citation("John D., 2015, TOPICS IN COGNITIVE SCIENCE, V1, P2")
        self.assertTrue(c1 == self.Cite)
        self.assertTrue(c2 == self.Cite)
        self.assertFalse(c1 != c2)
        self.assertFalse(c3 != c1)

    def test_citation_hash(self):
        self.assertTrue(bool(hash(self.Cite)))
        self.assertTrue(bool(hash(metaknowledge.Citation("John D., 2015, TOPICS IN COGNITIVE SCIENCE, V1, P1"))))
        self.assertTrue(bool(hash(metaknowledge.Citation("John D., 2015"))))

    def test_citation_badLength(self):
        c = metaknowledge.Citation("ab, c")
        self.assertTrue(c.bad)
        self.assertEqual(str(c.error), "Not a complete set of author, year and journal")
        self.assertEqual(c.Extra(), '')
        self.assertEqual(c.author, 'Ab')
        self.assertEqual(c.ID(), 'Ab, C')

    def test_citation_badNumbers(self):
        c = metaknowledge.Citation("1, 2, 3, 4")
        self.assertTrue(c.bad)
        self.assertEqual(c.ID(), '1, 2')
        self.assertEqual(str(c.error), "The citation did not fully match the expected pattern")
LAGOS, Nigeria – Nigerian first lady Patience Jonathan returned home Wednesday following a mystery illness that kept her abroad for weeks as the nation wondered about her health. The state-run Nigerian Television Authority cut into its programming to show Jonathan waving from a presidential jet on the tarmac of Nnamdi Azikiwe International Airport in Abuja. Wearing a purple traditional dress, Jonathan slowly walked into a crowd of singers, drummers and supporters, surrounded by police officers. Addressing the nation, Jonathan denied she had a "terminal illness or cosmetic surgery." However, her voice sounded strained as she spoke and she appeared slimmer than before. "I believe that God has saved me," the first lady said. In September, Jonathan disappeared from public view after hosting an event for African first ladies. Authorities refused to say publicly why she left the country. An official later told The Associated Press the first lady fell ill with "food poisoning" and needed to be hospitalized in Germany. Yet as her absence lengthened, rumors began to swirl around the country that she had a more serious medical condition. Nigeria has had leaders die in office before, like military dictator Sani Abacha in the 1990s and late President Umaru Yar'Adua, whom her husband Goodluck Jonathan later succeeded. Former President Olusegun Obasanjo's wife Stella died in Spain while he was in office after what some have said was a botched cosmetic surgery. Seeking to halt questions, the state-run television network aired footage a week ago showing the president visiting his wife and their children in Germany. However, the video only fueled speculation about her absence, as well as criticism of government leaders who seek medical care abroad while hospitals in Nigeria routinely lack electricity and life-saving drugs.
from __future__ import print_function

import subprocess
import sys
import re
import collections
from functools import reduce  # a builtin in Python 2; imported here for Python 3

LICENSE_BLOCK_RE = r'/[*].*?This program is free software.*?\*/'

HEADER_TEMPLATE = """/*
 * This file is part of NumptyPhysics <http://thp.io/2015/numptyphysics/>
 * <<COPYRIGHT>>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation; either version 3 of the
 * License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 */"""


def update(filename):
    d = subprocess.check_output(['git', 'log', '--format=%an <%ae>|%ad', '--follow', filename])

    dd = collections.defaultdict(list)
    for author, date in map(lambda x: x.split('|'), d.decode('utf-8').splitlines()):
        dd[author].append(date)

    def combine(s, date):
        # "Sat Feb 18 12:58:00 2012 -0800"
        s.add(date.split()[4])
        return s

    for author in dd:
        dd[author] = sorted(reduce(combine, dd[author], set()))

    def year_sort(item):
        _, years = item
        return tuple(map(int, years))

    def inject():
        for line in HEADER_TEMPLATE.splitlines():
            line = line.rstrip('\n')
            if '<<COPYRIGHT>>' in line:
                for author, years in sorted(dd.items(), key=year_sort):
                    copyright = 'Copyright (c) {years} {author}'.format(years=', '.join(years), author=author)
                    yield line.replace('<<COPYRIGHT>>', copyright)
                continue
            yield line

    license = '\n'.join(inject())

    d = open(filename).read()
    if re.search(LICENSE_BLOCK_RE, d, re.DOTALL) is None:
        open(filename, 'w').write(license + '\n\n' + d)
    else:
        d = re.sub(LICENSE_BLOCK_RE, license, d, 0, re.DOTALL)
        open(filename, 'w').write(d)


for filename in sys.argv[1:]:
    print('Updating:', filename)
    update(filename)
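The year-collection step above (folding `git log` "author|date" lines into a sorted, de-duplicated list of copyright years per author) can be demonstrated without git. A self-contained sketch with hypothetical sample data:

```python
import collections

# Hypothetical `git log --format='%an <%ae>|%ad'` output lines.
log_lines = [
    "Alice <a@example.com>|Sat Feb 18 12:58:00 2012 -0800",
    "Alice <a@example.com>|Mon Mar  4 09:10:00 2013 +0100",
    "Bob <b@example.com>|Sat Feb 18 13:00:00 2012 -0800",
]

# A set per author de-duplicates years; the 5th whitespace-separated
# field of git's default date format is the year.
years_by_author = collections.defaultdict(set)
for author, date in (line.split('|') for line in log_lines):
    years_by_author[author].add(date.split()[4])

for author, years in sorted(years_by_author.items()):
    print('Copyright (c) %s %s' % (', '.join(sorted(years)), author))
# → Copyright (c) 2012, 2013 Alice <a@example.com>
#   Copyright (c) 2012 Bob <b@example.com>
```

Using a plain `set` plus a final `sorted()` is equivalent to the script's `reduce(combine, dates, set())`, just without the fold.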
Professor Hildi Froese Tiessen has organized a spectacular reading and lecture series focused on Mennonite literature that will run at Conrad Grebel College for nine weeks throughout the new year. In conjunction with ENGL 218, Mennonite Literature, there will be a series of public readings and lectures by well-known authors and academics, including Rudy Wiebe, Patrick Friesen, and Magdalena Redekop. The public events are open to alumni, faculty, staff, and students; in fact, everyone is invited! Please mark your calendars now.
from django.views.generic import DetailView, ListView, TemplateView from django.views.generic.detail import SingleObjectMixin from .models import Category, Product, ProductImage from django.shortcuts import render_to_response, redirect from django.template import RequestContext from fm.views import AjaxCreateView from .forms import SignUpForm class CategoryDetail(SingleObjectMixin, ListView): paginate_by = 2 template_name = "category_detail.html" def get(self, request, *args, **kwargs): self.object = self.get_object(queryset=Category.objects.all()) return super(CategoryDetail, self).get(request, *args, **kwargs) def get_context_data(self, **kwargs): context = super(CategoryDetail, self).get_context_data(**kwargs) context['category'] = self.object context['product_list'] = Product.objects.filter(category=self.object) return context class ProductDetailView(DetailView): template_name = "product_detail.html" model = Product #context_object_name = 'product' #def get(self, request, *args, **kwargs): # self.object = self.get_object(queryset=Product.objects.all()) # return super(ProductDetailView, self).get(request, *args, **kwargs) def get_context_data(self, **kwargs): context = super(ProductDetailView, self).get_context_data(**kwargs) context['product_images'] = ProductImage.objects.filter(product=self.get_object()) return context class TestView(AjaxCreateView): form_class = SignUpForm def add_quote(request): # Get the context from the request. context = RequestContext(request) # A HTTP POST? if request.method == 'POST': form = SignUpForm(request.POST or None) # Have we been provided with a valid form? if form.is_valid(): # Save the new category to the database. form.save(commit=True) #post_save.connect(send_update, sender=Book) # Now call the index() view. # The user will be shown the homepage. return redirect('/thanks/') else: # The supplied form contained errors - just print them to the terminal. 
            print(form.errors)
    else:
        # If the request was not a POST, display the form to enter details.
        form = SignUpForm()

    # Bad form (or form details), no form supplied...
    # Render the form with error messages (if any).
    return render_to_response('add_signup.html', {'form': form}, context)


class ThanksPage(TemplateView):
    template_name = "thanks.html"
Deputy Agriculture and Cooperatives Minister Dr. Wiwat Salyakamthorn is a supporter of organic gardening, so he has been in the crosshairs of large Thai companies that sell chemicals to farmers.
#!/usr/bin/env python
##
# This file is part of the carambot-usherpa project.
#
# Copyright (C) 2012 Stefan Wendler <sw@kaltpost.de>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
##

'''
This file is part of the pyscmpd project.

To install pyscmpd:

sudo python setup.py install
'''

import os

from distutils.core import setup
from distutils.sysconfig import get_python_lib

setup(name='pyscmpd',
      version='0.1',
      description='sound-cloud music server daemon',
      long_description='Python based sound-cloud music server talking MPD protocol',
      author='Stefan Wendler',
      author_email='sw@kaltpost.de',
      url='http://www.kaltpost.de/',
      license='GPL 3.0',
      platforms=['Linux'],
      packages=['pyscmpd', 'mpdserver'],
      package_dir={'pyscmpd': 'src/pyscmpd', 'mpdserver': 'extlib/python-mpd-server/mpdserver'},
      requires=['soundcloud(>=0.3.1)']
      )

# Symlink starter
linkSrc = "%s/pyscmpd/pyscmpdctrl.py" % get_python_lib(False, False, '/usr/local')
linkDst = "/usr/local/bin/pyscmpdctrl"

if not os.path.lexists(linkDst):
    os.symlink(linkSrc, linkDst)
    os.chmod(linkSrc, 0o755)
# This file is part of Indico. # Copyright (C) 2002 - 2016 European Organization for Nuclear Research (CERN). # # Indico is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License as # published by the Free Software Foundation; either version 3 of the # License, or (at your option) any later version. # # Indico is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Indico; if not, see <http://www.gnu.org/licenses/>. from MaKaC.services.implementation.base import ParameterManager from MaKaC.services.implementation.conference import ConferenceModifBase import MaKaC.user as user from MaKaC.common.fossilize import fossilize from MaKaC.user import AvatarHolder from MaKaC.services.interface.rpc.common import ServiceError, NoReportError class ChangeAbstractSubmitter(ConferenceModifBase): def _checkParams(self): ConferenceModifBase._checkParams(self) pm = ParameterManager(self._params) submitterId = pm.extract("submitterId", pType=str, allowEmpty=False) abstractId = pm.extract("abstractId", pType=str, allowEmpty=False) self._abstract = self._conf.getAbstractMgr().getAbstractById(abstractId) self._submitter = user.AvatarHolder().getById(submitterId) if self._submitter is None: raise NoReportError(_("The user that you are changing does not exist anymore in the database")) def _getAnswer(self): self._abstract.setSubmitter(self._submitter) return {"name": self._submitter.getFullName(), "affiliation": self._submitter.getAffiliation(), "email": self._submitter.getEmail()} class AddLateSubmissionAuthUser(ConferenceModifBase): def _checkParams(self): ConferenceModifBase._checkParams(self) pm = ParameterManager(self._params) self._userList = pm.extract("userList", pType=list, 
                                   allowEmpty=False)

    def _getAnswer(self):
        ah = AvatarHolder()
        for user in self._userList:
            if user["id"] is not None:
                self._conf.getAbstractMgr().addAuthorizedSubmitter(ah.getById(user["id"]))
            else:
                raise ServiceError("ERR-U0", _("User does not exist."))
        return fossilize(self._conf.getAbstractMgr().getAuthorizedSubmitterList())


class RemoveLateSubmissionAuthUser(ConferenceModifBase):

    def _checkParams(self):
        ConferenceModifBase._checkParams(self)
        pm = ParameterManager(self._params)
        ah = AvatarHolder()
        userId = pm.extract("userId", pType=str, allowEmpty=False)
        self._user = ah.getById(userId)
        if self._user is None:
            raise ServiceError("ERR-U0", _("User '%s' does not exist.") % userId)

    def _getAnswer(self):
        self._conf.getAbstractMgr().removeAuthorizedSubmitter(self._user)
        return fossilize(self._conf.getAbstractMgr().getAuthorizedSubmitterList())


methodMap = {
    "changeSubmitter": ChangeAbstractSubmitter,
    "lateSubmission.addExistingLateAuthUser": AddLateSubmissionAuthUser,
    "lateSubmission.removeLateAuthUser": RemoveLateSubmissionAuthUser
}
With the demands of today’s rapidly growing world, marketing communications has never been more complicated. Whether you’re an established brand, a start-up or somewhere in between, the creative consistency your business must maintain in order to stay competitive is more apparent today than ever before. The number one goal of NLK® Consulting & Design, LLC is to support you with an array of creative and tailored marketing services that will help you reach your greatest potential. NLK® Consulting & Design, LLC works in tandem with cutting-edge printers, creative photographers, graphic design visionaries and forward-looking web developers to design each individual program. Nancy Klein, founder of NLK® Consulting & Design, LLC, is an award-winning brand strategist and creative publicist whose goal is to improve the performance of your brand with consistent focus, intriguing creative and alluring marketing that will ultimately achieve extraordinary results. Nancy Klein will work hand in hand with your company as your marketing partner and brand manager. This will enable your company’s message to be streamlined and safely positioned while avoiding the complications of multiple suppliers, allowing you to focus on the fundamentals of what makes your business truly flourish.
import json import aiohttp import arrow import discord async def wanikani(cmd, message, args): if message.mentions: target = message.mentions[0] else: target = message.author api_document = cmd.db[cmd.db.db_cfg.database]['WaniKani'].find_one({'UserID': target.id}) if api_document: try: api_key = api_document['WKAPIKey'] url = f'https://www.wanikani.com/api/user/{api_key}' async with aiohttp.ClientSession() as session: async with session.get(url + '/srs-distribution') as data: srs = await data.read() srs = json.loads(srs) username = srs['user_information']['username'] sect = srs['user_information']['title'] level = srs['user_information']['level'] avatar = srs['user_information']['gravatar'] creation_date = srs['user_information']['creation_date'] apprentice = srs['requested_information']['apprentice']['total'] guru = srs['requested_information']['guru']['total'] master = srs['requested_information']['master']['total'] enlighten = srs['requested_information']['enlighten']['total'] burned = srs['requested_information']['burned']['total'] async with aiohttp.ClientSession() as session: async with session.get(url + '/study-queue') as data: study = await data.read() study = json.loads(study) lessons_available = study['requested_information']['lessons_available'] reviews_available = study['requested_information']['reviews_available'] next_review = study['requested_information']['next_review_date'] reviews_available_next_hour = study['requested_information']['reviews_available_next_hour'] reviews_available_next_day = study['requested_information']['reviews_available_next_day'] async with aiohttp.ClientSession() as session: async with session.get(url + '/level-progression') as data: progression = await data.read() progression = json.loads(progression) radicals_progress = progression['requested_information']['radicals_progress'] radicals_total = progression['requested_information']['radicals_total'] kanji_progress = progression['requested_information']['kanji_progress'] 
                    kanji_total = progression['requested_information']['kanji_total']

            level = '**Level {}** Apprentice'.format(level)
            avatar = f'https://www.gravatar.com/avatar/{avatar}.jpg?s=300&d='
            avatar += 'https://cdn.wanikani.com/default-avatar-300x300-20121121.png'
            creation_date = arrow.get(creation_date).format('MMMM DD, YYYY')
            radicals = 'Radicals: **{}**/**{}**'.format(radicals_progress, radicals_total)
            kanji = 'Kanji: **{}**/**{}**'.format(kanji_progress, kanji_total)

            embed = discord.Embed(color=target.color)

            level_progression = level + '\n'
            level_progression += radicals + '\n'
            level_progression += kanji
            embed.add_field(name='Level progression', value=level_progression)

            srs_distibution = 'Apprentice: **{}**\n'.format(apprentice)
            srs_distibution += 'Guru: **{}**\n'.format(guru)
            srs_distibution += 'Master: **{}**\n'.format(master)
            srs_distibution += 'Enlighten: **{}**\n'.format(enlighten)
            srs_distibution += 'Burned: **{}**'.format(burned)
            embed.add_field(name='SRS distribution', value=srs_distibution)

            study_queue = 'Lessons available: **{}**\n'.format(lessons_available)
            study_queue += 'Reviews available: **{}**\n'.format(reviews_available)
            if lessons_available or reviews_available:
                next_review = 'now'
            else:
                next_review = arrow.get(next_review).humanize()
            study_queue += 'Next review date: **{}**\n'.format(next_review)
            study_queue += 'Reviews in next hour: **{}**\n'.format(reviews_available_next_hour)
            study_queue += 'Reviews in next day: **{}**'.format(reviews_available_next_day)
            embed.add_field(name='Study queue', value=study_queue)

            embed.set_author(name='{} of Sect {}'.format(username, sect),
                             url='https://www.wanikani.com/community/people/{}'.format(username),
                             icon_url=avatar)
            embed.set_footer(text='Serving the Crabigator since {}'.format(creation_date))
        except KeyError:
            embed = discord.Embed(color=0xBE1931, title='❗ Invalid data was retrieved.')
    else:
        embed = discord.Embed(color=0xBE1931, title='❗ User has no Key saved.')
    await message.channel.send(None, embed=embed)
The Sundeck 203 is the perfect boat for a day on the water with a few friends. She features plenty of seating and a 150hp outboard to get you where you are going quickly. The boat also has a small bimini for sun protection, as well as a radio. Usually this boat operates around St Petersburg, FL. Welcome to one of the most famous year-round boating destinations in the world! Hop aboard your yacht charter in St Petersburg, FL and enjoy this amazing floating playground for both beginners and experienced sailors. The boat rental in St Petersburg, FL you have selected is a Hurricane motor boat, a popular choice to explore the natural beauty of Florida’s coast. Sailing on a St Petersburg, FL yacht charter is an experience you should not miss if you decide to visit the boating capital of the world. Picture yourself on a romantic sunset cruise on this St Petersburg, FL boat rental, or having fun with your family or friends on a sightseeing tour along the coast. For more ideas about things to do on your boat rental or yacht charter in St Petersburg, FL, make sure to check our destination guide for Sailing in South Florida! We invite you to browse through hundreds of Sailo boats perfect for sailing in Florida, and choose the dream motor boat rental or yacht charter for your nautical adventure. Whether you are looking to spend a relaxed afternoon on a classy motorboat or sailboat, or have fun on a sporty catamaran, our team is here to make sure you make the best of your time on the water. For details about this Hurricane 20.0 boat rental in St Petersburg, FL, or to make special arrangements for your trip, please click on the “Message Owner” blue button to send a direct message to the boat representative.
__author__ = 'daniel' from PyQt4 import QtGui, QtCore from PyQt4.QtCore import pyqtSignal class TrialDetailsWidget(QtGui.QWidget): def __init__(self, parent=None): super(TrialDetailsWidget, self).__init__(parent) self.layout = QtGui.QVBoxLayout() self.setLayout(self.layout) self.box = QtGui.QGroupBox("Trial Settings") self.layout.addWidget(self.box) self.box_layout = QtGui.QFormLayout() self.box_layout.setFormAlignment(QtCore.Qt.AlignTop | QtCore.Qt.AlignLeft) self.box.setLayout(self.box_layout) self.type = QtGui.QLabel("") self.box_layout.addRow("<b>Type</b>", self.type) self.name = QtGui.QLabel("") self.box_layout.addRow("<b>Name</b>", self.name) self.description = QtGui.QLabel("") self.box_layout.addRow("<b>Description</b>", self.description) class EchoTrialDetailsWidget(TrialDetailsWidget): def __init__(self, parent=None): super(EchoTrialDetailsWidget, self).__init__(parent) self.delay = QtGui.QLabel("") self.box_layout.addRow("<b>Delay (ns)</b>", self.delay) class HttpTrialDetailsWidget(TrialDetailsWidget): def __init__(self, parent=None): super(HttpTrialDetailsWidget, self).__init__(parent) self.request_url = QtGui.QLabel("") self.box_layout.addRow("<b>Request URL</b>", self.request_url) class RacerDetailsWidget(QtGui.QWidget): def __init__(self, parent=None): super(RacerDetailsWidget, self).__init__(parent) self.layout = QtGui.QVBoxLayout() self.setLayout(self.layout) self.box = QtGui.QGroupBox("Racer Settings") self.layout.addWidget(self.box) self.box_layout = QtGui.QFormLayout() self.box_layout.setFormAlignment(QtCore.Qt.AlignTop | QtCore.Qt.AlignLeft) self.box.setLayout(self.box_layout) self.racer = QtGui.QLabel("") self.box_layout.addRow("<b>Racer</b>", self.racer) self.core_id = QtGui.QLabel("") self.box_layout.addRow("<b>Core ID</b>", self.core_id) self.real_time = QtGui.QLabel("") self.box_layout.addRow("<b>Real-Time</b>", self.real_time) class TrialStatusWidget(QtGui.QWidget): trial_started = pyqtSignal() trial_stopped = pyqtSignal() trial_refreshed 
= pyqtSignal() trial_edit = pyqtSignal() def __init__(self, parent=None): super(TrialStatusWidget, self).__init__(parent) self.layout = QtGui.QVBoxLayout() self.setLayout(self.layout) self.box = QtGui.QGroupBox("Trial Status") self.super_box_layout = QtGui.QGridLayout() self.box_layout = QtGui.QFormLayout() self.box_layout.setFormAlignment(QtCore.Qt.AlignTop | QtCore.Qt.AlignLeft) self.box.setLayout(self.super_box_layout) self.super_box_layout.addLayout(self.box_layout,0,0,1,2) self.layout.addWidget(self.box) self.start = QtGui.QLabel("") self.box_layout.addRow("<b>Start</b>", self.start) self.end = QtGui.QLabel("") self.box_layout.addRow("<b>End</b>", self.end) self.job_status = QtGui.QLabel("") self.box_layout.addRow("<b>Job Status</b>", self.job_status) self.start_trial_button = QtGui.QPushButton("Start") self.start_trial_button.setEnabled(False) self.start_trial_button.released.connect(self.trial_started.emit) self.super_box_layout.addWidget(self.start_trial_button,1,0) self.stop_trial_button = QtGui.QPushButton("Cancel and Reset") self.stop_trial_button.setEnabled(False) self.stop_trial_button.released.connect(self.trial_stopped.emit) self.super_box_layout.addWidget(self.stop_trial_button,1,1) self.refresh_trial_button = QtGui.QPushButton("Refresh") self.refresh_trial_button.setEnabled(False) self.refresh_trial_button.released.connect(self.trial_refreshed.emit) self.layout.addWidget(self.refresh_trial_button) self.edit_trial_button = QtGui.QPushButton("Edit") self.edit_trial_button.setEnabled(False) self.edit_trial_button.released.connect(self.trial_edit.emit) self.layout.addWidget(self.edit_trial_button)
A novel method for color image enhancement is proposed as an extension of the scalar-diffusion-shock-filter coupling model, in which noisy and blurred images are denoised and sharpened. The proposed model is based on using single vectors of the gradient magnitude and the second derivatives as a means of relating the different color components of the image. This model can be viewed as a generalization of the Bettahar-Stambouli filter to multivalued images. The proposed algorithm is more efficient than the aforementioned filter and some previous works at denoising and deblurring color images without creating false colors.
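The channel-coupling idea in the abstract (driving every color component with one shared gradient magnitude rather than three independent ones) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's algorithm; the function name and the toy edge example are ours:

```python
import numpy as np

def joint_gradient_magnitude(img):
    """Single edge map shared by all color channels.

    img: float array of shape (H, W, C). Returns an (H, W) array that
    combines the per-channel gradients; diffusing each channel against
    this shared map is what keeps the channels coupled, instead of
    smoothing each channel with its own, independent edge map.
    """
    gy, gx = np.gradient(img, axis=(0, 1))           # per-channel derivatives
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))  # one magnitude for all channels

# Toy example: a flat image has a zero map, a vertical color edge a single ridge.
flat = np.zeros((4, 4, 3))
edge = flat.copy()
edge[:, 2:, :] = 1.0
```

Because all channels read the same edge map, an edge present in only one channel still slows diffusion in the others, which is one way such models avoid introducing false colors.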
# ------------------------------------ # Copyright (c) Microsoft Corporation. # Licensed under the MIT License. # ------------------------------------ """Policy implementing Key Vault's challenge authentication protocol. Normally the protocol is only used for the client's first service request, upon which: 1. The challenge authentication policy sends a copy of the request, without authorization or content. 2. Key Vault responds 401 with a header (the 'challenge') detailing how the client should authenticate such a request. 3. The policy authenticates according to the challenge and sends the original request with authorization. The policy caches the challenge and thus knows how to authenticate future requests. However, authentication requirements can change. For example, a vault may move to a new tenant. In such a case the policy will attempt the protocol again. """ import copy import time from azure.core.exceptions import ServiceRequestError from azure.core.pipeline import PipelineContext, PipelineRequest from azure.core.pipeline.policies import HTTPPolicy from azure.core.pipeline.transport import HttpRequest from .http_challenge import HttpChallenge from . import http_challenge_cache as ChallengeCache try: from typing import TYPE_CHECKING except ImportError: TYPE_CHECKING = False if TYPE_CHECKING: from typing import Any, Optional from azure.core.credentials import AccessToken, TokenCredential from azure.core.pipeline import PipelineResponse def _enforce_tls(request): # type: (PipelineRequest) -> None if not request.http_request.url.lower().startswith("https"): raise ServiceRequestError( "Bearer token authentication is not permitted for non-TLS protected (non-https) URLs." ) def _get_challenge_request(request): # type: (PipelineRequest) -> PipelineRequest # The challenge request is intended to provoke an authentication challenge from Key Vault, to learn how the # service request should be authenticated. 
    # It should be identical to the service request but with no body.
    challenge_request = HttpRequest(
        request.http_request.method, request.http_request.url, headers=request.http_request.headers
    )
    challenge_request.headers["Content-Length"] = "0"

    options = copy.deepcopy(request.context.options)
    context = PipelineContext(request.context.transport, **options)

    return PipelineRequest(http_request=challenge_request, context=context)


def _update_challenge(request, challenger):
    # type: (PipelineRequest, PipelineResponse) -> HttpChallenge
    """parse challenge from challenger, cache it, return it"""

    challenge = HttpChallenge(
        request.http_request.url,
        challenger.http_response.headers.get("WWW-Authenticate"),
        response_headers=challenger.http_response.headers,
    )
    ChallengeCache.set_challenge_for_url(request.http_request.url, challenge)
    return challenge


class ChallengeAuthPolicyBase(object):
    """Sans I/O base for challenge authentication policies"""

    def __init__(self, **kwargs):
        self._token = None  # type: Optional[AccessToken]
        super(ChallengeAuthPolicyBase, self).__init__(**kwargs)

    @property
    def _need_new_token(self):
        # type: () -> bool
        return not self._token or self._token.expires_on - time.time() < 300


class ChallengeAuthPolicy(ChallengeAuthPolicyBase, HTTPPolicy):
    """policy for handling HTTP authentication challenges"""

    def __init__(self, credential, **kwargs):
        # type: (TokenCredential, **Any) -> None
        self._credential = credential
        super(ChallengeAuthPolicy, self).__init__(**kwargs)

    def send(self, request):
        # type: (PipelineRequest) -> PipelineResponse
        _enforce_tls(request)

        challenge = ChallengeCache.get_challenge_for_url(request.http_request.url)
        if not challenge:
            challenge_request = _get_challenge_request(request)
            challenger = self.next.send(challenge_request)
            try:
                challenge = _update_challenge(request, challenger)
            except ValueError:
                # didn't receive the expected challenge -> nothing more this policy can do
                return challenger

        self._handle_challenge(request, challenge)
        response = self.next.send(request)

        if response.http_response.status_code == 401:
            # any cached token must be invalid
            self._token = None

            # cached challenge could be outdated; maybe this response has a new one?
            try:
                challenge = _update_challenge(request, response)
            except ValueError:
                # 401 with no legible challenge -> nothing more this policy can do
                return response

            self._handle_challenge(request, challenge)
            response = self.next.send(request)

        return response

    def _handle_challenge(self, request, challenge):
        # type: (PipelineRequest, HttpChallenge) -> None
        """authenticate according to challenge, add Authorization header to request"""

        if self._need_new_token:
            # azure-identity credentials require an AADv2 scope but the challenge may specify an AADv1 resource
            scope = challenge.get_scope() or challenge.get_resource() + "/.default"
            self._token = self._credential.get_token(scope)

        # ignore mypy's warning: although self._token is Optional, get_token raises when it fails to get a token
        request.http_request.headers["Authorization"] = "Bearer {}".format(self._token.token)  # type: ignore
CHARGE Award recipient Virta offers services across the electric vehicle charging value chain. It has been a good week for Finnish companies Virta, Fingrid and Canatu, with all three bagging awards at the CHARGE Awards or Automotive Brand Contest. Finnish electric vehicle (EV) charging company Virta was recognised as the Best Energy Product Brand at the CHARGE Awards 2018, held in connection with the Branding Energy seminar in Iceland on Monday. Fingrid’s senior VP Tiina Miettinen (left) and communications manager Marjaana Kivioja on hand at the CHARGE Awards 2018. The Finnish company offers EV charging services across the EV charging value chain, including to EV drivers, charging point owners, professional charging point operators and energy utilities. Another Finnish company was also awarded at the CHARGE Awards 2018, with transmission system operator Fingrid winning in the Best Transmission Brand category. Meanwhile, at the Automotive Brand Contest 2018 organised by the German Design Council, Finnish company Canatu received an award in the Concept Cars category for its 3D Touch Cockpit.
import sys, os
from pymongo import MongoClient
import gridfs
import re

isPython3 = bool(sys.version_info >= (3, 0))

if isPython3:
    from io import BytesIO
else:
    from StringIO import StringIO

if len(sys.argv) <= 5:
    print("Not enough arguments.")
    print("removeOrphanFiles.py <mongoURL> <mongoPort> <userName> <password> <localFolder>")
    sys.exit(0)

mongoURL = sys.argv[1]
mongoPort = sys.argv[2]
userName = sys.argv[3]
password = sys.argv[4]
localFolder = sys.argv[5]

if not os.path.exists(localFolder):
    print("LocalFolder " + localFolder + " does not exist.")
    sys.exit(0)

connString = "mongodb://" + userName + ":" + password + "@" + mongoURL + ":" + mongoPort + "/"

##### Enable dry run to not commit to the database #####
dryRun = True
verbose = True
ignoreDirs = ["toy_2019-05-31"]

##### Retrieve file list from local folder #####
fileList = {}
missing = []
ignoreDirs = [os.path.normpath(os.path.join(localFolder, x)) for x in ignoreDirs]

for (dirPath, dirNames, fileNames) in os.walk(localFolder):
    for fileName in fileNames:
        if dirPath not in ignoreDirs:
            entry = os.path.normpath(os.path.join(dirPath, fileName))
            fileList[entry] = False

##### Connect to the Database #####
db = MongoClient(connString)
for database in db.database_names():
    if database != "admin" and database != "local" and database != "notifications":
        db = MongoClient(connString)[database]
        if verbose:
            print("--database:" + database)

        ##### Get a model ID and find entries #####
        regex = re.compile(r".+\.ref$")
        for colName in db.collection_names():
            result = regex.match(colName)
            if result:
                if verbose:
                    print("\t--collection:" + colName)
                for refEntry in db[colName].find({"type": "fs"}):
                    filePath = os.path.normpath(os.path.join(localFolder, refEntry['link']))
                    inIgnoreDir = bool([x for x in ignoreDirs if filePath.find(x) + 1])
                    if not inIgnoreDir:
                        fileStatus = fileList.get(filePath)
                        if fileStatus is None:
                            refInfo = database + "." + colName + ": " + refEntry["_id"]
                            if dryRun:
                                missing.append(refInfo)
                            else:
                                ##### Upload missing files to FS and insert BSON #####
                                parentCol = colName[:-4]
                                fs = gridfs.GridFS(db, parentCol)
                                if ".stash.json_mpc" in parentCol or "stash.unity3d" in parentCol:
                                    modelId = parentCol.split(".")[0]
                                    if len(refEntry["_id"].split("/")) > 1:
                                        toRepair = "/" + database + "/" + modelId + "/revision/" + refEntry["_id"]
                                    else:
                                        toRepair = "/" + database + "/" + modelId + "/" + refEntry["_id"]
                                else:
                                    toRepair = refEntry["_id"]
                                gridFSEntry = fs.find_one({"filename": toRepair})
                                if gridFSEntry is not None:
                                    if not os.path.exists(os.path.dirname(filePath)):
                                        os.makedirs(os.path.dirname(filePath))
                                    file = open(filePath, 'wb')
                                    if isPython3:
                                        file.write(BytesIO(gridFSEntry.read()).getvalue())
                                    else:
                                        file.write(StringIO(gridFSEntry.read()).getvalue())
                                    file.close()
                                    missing.append(refInfo + " (Restored to: " + filePath + ")")
                                else:
                                    missing.append(refInfo + ": No backup found. Reference removed.")
                                    db[colName].remove({"_id": refEntry["_id"]})
                        else:
                            fileList[filePath] = True

print("===== Missing Files =====")
for entry in missing:
    print("\t" + entry)
print("=========================")

print("===== Orphaned Files =====")
for filePath in fileList:
    if not fileList[filePath]:
        if dryRun:
            print("\t" + filePath)
        else:
            ##### Delete orphan files #####
            os.remove(filePath)
            print("\t\t--Removed: " + filePath)
print("==========================")
Copyright © 2019 FilipinoInfo. All rights reserved.
'''
A Player class that moves with the keyboard and shoots.
'''
import pygame
from configuraciones import *

TIEMPODESTRUCION = 8  # number of frames the ship stays destroyed


class Vida(pygame.sprite.Sprite):
    def __init__(self, img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(img).convert()
        self.rect = self.image.get_rect()

    def setPos(self, x, y):
        self.rect.x = x
        self.rect.y = y


class Player(pygame.sprite.Sprite):
    def __init__(self, img, imgdestru, vidas=3, col=AZUL):
        pygame.sprite.Sprite.__init__(self)
        self.img = pygame.image.load(img).convert_alpha()
        self.image = pygame.image.load(img).convert_alpha()
        self.imagedestruida = pygame.image.load(imgdestru).convert_alpha()
        self.rect = self.image.get_rect()
        self.setPos(400, ALTO - 80)
        self.disparo = pygame.mixer.Sound('Sonidos/disparojg.ogg')
        # movement variables
        self.var_x = 0
        self.vidas = vidas
        # how long the ship stays destroyed
        self.destrucion = False
        self.tiempodes = TIEMPODESTRUCION

    def disparar(self):
        self.disparo.play()

    def setPos(self, x, y):
        self.rect.x = x
        self.rect.y = y

    def setX(self, x):
        self.rect.x = x

    def setY(self, y):
        self.rect.y = y

    def movX(self):
        if self.rect.x >= ANCHO - 50 and self.var_x >= 0:
            self.var_x = 0
        if self.rect.x <= 0 and self.var_x <= 0:
            self.var_x = 0
        self.rect.x += self.var_x

    def update(self):
        if self.destrucion:
            self.image = self.imagedestruida
            if self.tiempodes <= 0:
                self.image = self.img
                self.destrucion = False
                self.tiempodes = TIEMPODESTRUCION
            self.tiempodes -= 1
        else:
            self.movX()
The Connected Traveler, one of the world’s first web travel magazines, is now designed to be “mobile first,” to look and function like an app, an eBook and even an audio book on everything from the smallest mobile phone to a high-resolution flat-panel screen. What this means for armchair travelers is that they can actually read, listen to an audio story or watch a video from an armchair or an airport lounge. The Connected Traveler is about places, people and passions. “It is all about telling stories, without limitations, in whatever medium works best,” says editor-publisher Russell Johnson. It is, indeed, a partnership of passions between writer, photographer, film producer and former broadcaster Russ Johnson and painter, writer and former high-tech PR legend Pat Meier-Johnson. For years, they have traveled the world together in support of each other’s businesses, Russ producing media for international organizations, mostly in the tourism industry, and Pat introducing technological innovations. Now they are in full-time pursuit of their own passions for travel and the arts of painting, photography, writing and film making. Videos in formats up to 4K resolution can play on a mobile phone or a huge screen, and audio features are embedded in stories, so The Connected Traveler also functions as a portable audio book. After experimenting with a phone-in bulletin board, Russell Johnson founded The Connected Traveler in 1994 on The Well, the pioneering online community founded by Stewart Brand and Larry Brilliant. Johnson became inspired while researching a documentary on the future of travel, which took him to tech publisher O’Reilly and Associates in Sebastopol, California. O’Reilly had just introduced the Global Network Navigator, which would become the world’s first commercial web magazine. Seizing the challenge to develop his own magazine, he became one of the world’s first travel bloggers, long before the term blog was invented.
Early editions of The Connected Traveler featured guest authors including Jan Morris, Pico Iyer and Peter Mayle. Two of its original “podcasts,” before that term was invented, included a story recorded by travel writer Georgia Hesse as she scrunched through the snow at the North Pole and Johnson’s own interview with Arthur C. Clarke in Sri Lanka. The Connected Traveler was a “Cool Site of the Day,” garnering an unprecedented 230 thousand viewers on a single day in 1995. It was runner-up with Lonely Planet in the Best of the Web competition and has been the subject of scores of articles in books, magazines, newspapers and web sites including CNN, CBS, the BBC, Time Magazine, USA Today, The Wall Street Journal, the Guardian and the London Daily Telegraph. National Geographic named it as a resource on sustainable tourism. “His (Johnson’s) unabashedly refined tastes emerge throughout richly detailed accounts of places ranging from Boise, Idaho to Kunming, China.” The Connected Traveler was featured in Time and USA Today when it offered the web’s first online charity auction, in support of heritage and nature conservation, in the same year auction site eBay was founded. For almost ten years, before music publishers clamped down on licensing streaming music, The Connected Traveler hosted the world’s only 24×7 travel and world music internet radio station. Connected Traveler audio features still play on public radio stations, its YouTube channel has had well over a million views, and its recent 4K video “A Day in Peru” was featured at the American Museum of Natural History in New York.
from django.contrib import admin from quadraticlands.models import ( GTCSteward, InitialTokenDistribution, MissionStatus, QLVote, QuadLandsFAQ, SchwagCoupon, ) class InitialTokenDistributionAdmin(admin.ModelAdmin): raw_id_fields = ['profile'] search_fields = ['profile__handle'] list_display = ['id', 'profile', 'claim_total'] class MissionStatusAdmin(admin.ModelAdmin): search_fields = ['profile__handle'] list_display = ['id', 'profile', 'proof_of_use', 'proof_of_receive', 'proof_of_knowledge'] raw_id_fields = ['profile'] class GTCStewardAdmin(admin.ModelAdmin): raw_id_fields = ['profile'] search_fields = ['profile__handle'] list_display = ['id', 'profile', 'real_name', 'profile_link'] class QuadLandsFAQAdmin(admin.ModelAdmin): list_display = ['id', 'position', 'question'] class QLVoteAdmin(admin.ModelAdmin): raw_id_fields = ['profile'] list_display = ['id', 'profile'] class SchwagCouponAdmin(admin.ModelAdmin): raw_id_fields = ['profile'] search_fields = ['profile__handle', 'coupon_code', 'discount_type'] list_display = ['id', 'discount_type', 'coupon_code', 'profile'] admin.site.register(InitialTokenDistribution, InitialTokenDistributionAdmin) admin.site.register(MissionStatus, MissionStatusAdmin) admin.site.register(QuadLandsFAQ, QuadLandsFAQAdmin) admin.site.register(GTCSteward, GTCStewardAdmin) admin.site.register(QLVote, QLVoteAdmin) admin.site.register(SchwagCoupon, SchwagCouponAdmin)
Discover the 5 mistakes that most fighters make in their MMA strength and conditioning workouts… and how to avoid them in the 45-page NEVER GAS ebook. 2. If you had to choose, what do you need to work on most?
""" BlueSky plugin template. The text you put here will be visible in BlueSky as the description of your plugin. """ import numpy as np # Import the global bluesky objects. Uncomment the ones you need from bluesky import stack #, settings, navdb, traf, sim, scr, tools from bluesky import navdb from bluesky.tools.aero import ft from bluesky.tools import geo, areafilter ### Initialization function of your plugin. Do not change the name of this ### function, as it is the way BlueSky recognises this file as a plugin. def init_plugin(): # Addtional initilisation code # Configuration parameters config = { # The name of your plugin 'plugin_name': 'ILSGATE', # The type of this plugin. For now, only simulation plugins are possible. 'plugin_type': 'sim', # Update interval in seconds. By default, your plugin's update function(s) # are called every timestep of the simulation. If your plugin needs less # frequent updates provide an update interval. 'update_interval': 0.0, # The update function is called after traffic is updated. Use this if you # want to do things as a result of what happens in traffic. If you need to # something before traffic is updated please use preupdate. 'update': update, # If your plugin has a state, you will probably need a reset function to # clear the state in between simulations. 'reset': reset } stackfunctions = { # The command name for your function 'ILSGATE': [ # A short usage string. This will be printed if you type HELP <name> in the BlueSky console 'ILSGATE Airport/runway', # A list of the argument types your function accepts. For a description of this, see ... 'txt', # The name of your function in this plugin ilsgate, # a longer help text of your function. 'Define an ILS approach area for a given runway.'] } # init_plugin() should always return these two dicts. return config, stackfunctions ### Periodic update functions that are called by the simulation. 
### You can replace this by anything, so long as you communicate this in init_plugin

def update():
    pass


def reset():
    pass


### Other functions of your plugin
def ilsgate(rwyname):
    if '/' not in rwyname:
        return False, 'Argument is not a runway ' + rwyname
    apt, rwy = rwyname.split('/RW')
    rwy = rwy.lstrip('Y')  # accept both 'RW06' and 'RWY06' forms
    apt_thresholds = navdb.rwythresholds.get(apt)
    if not apt_thresholds:
        return False, 'Argument is not a runway (airport not found) ' + apt
    rwy_threshold = apt_thresholds.get(rwy)
    if not rwy_threshold:
        return False, 'Argument is not a runway (runway not found) ' + rwy
    # Extract runway threshold lat/lon, and runway heading
    lat, lon, hdg = rwy_threshold

    # The ILS gate is defined as a triangular area pointed away from the runway
    # First calculate the two far corners of the triangle
    cone_length = 50   # nautical miles
    cone_angle = 20.0  # degrees
    lat1, lon1 = geo.qdrpos(lat, lon, hdg - 180.0 + cone_angle, cone_length)
    lat2, lon2 = geo.qdrpos(lat, lon, hdg - 180.0 - cone_angle, cone_length)
    coordinates = np.array([lat, lon, lat1, lon1, lat2, lon2])
    areafilter.defineArea('ILS' + rwyname, 'POLYALT', coordinates, top=4000 * ft)
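The cone-corner computation in ilsgate() can be exercised on its own. BlueSky's geo.qdrpos does proper great-circle math, so the sketch below substitutes a simplified flat-earth stand-in (an assumption for illustration, not BlueSky's implementation); the corner geometry mirrors the plugin: two points at cone_length NM, cone_angle degrees either side of the reciprocal runway heading.

```python
import math

def qdrpos_flat(lat, lon, qdr_deg, dist_nm):
    """Flat-earth stand-in for BlueSky's geo.qdrpos: position at
    bearing qdr_deg and distance dist_nm from (lat, lon).
    One degree of latitude ~ 60 NM; longitude scaled by cos(lat)."""
    dlat = dist_nm * math.cos(math.radians(qdr_deg)) / 60.0
    dlon = dist_nm * math.sin(math.radians(qdr_deg)) / (60.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

def ils_gate_corners(lat, lon, hdg, cone_length=50.0, cone_angle=20.0):
    """The two far corners of the triangular gate, opened against the
    runway heading as in ilsgate() above."""
    lat1, lon1 = qdrpos_flat(lat, lon, hdg - 180.0 + cone_angle, cone_length)
    lat2, lon2 = qdrpos_flat(lat, lon, hdg - 180.0 - cone_angle, cone_length)
    return (lat1, lon1), (lat2, lon2)
```

For a runway pointing due north (hdg 360), both corners land south of the threshold, symmetric about its longitude.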
In 1874, Lewis Moulton purchased Rancho Niguel from Don Juan Avila and increased the original grant to 22,000 acres. Moulton and his partner, Jean Piedra Daguerre, used the ranch to raise sheep and cattle. The Moulton Ranch was eventually subdivided in the early 1960s, part of which is recognized as Laguna Hills. The city was incorporated in 1991. Laguna Hills gained recognition when the private senior community Leisure World was developed. The self-contained community includes swimming pools, golf courses, shopping centers and an internal transportation system. Home to some of Orange County's best medical facilities, a large mall and many fine restaurants, Laguna Hills enjoys one of the lowest crime rates in Orange County. Call it furnished short-term housing, furnished temporary apartments, corporate accommodations, temporary accommodations, temporary corporate apartments, temporary housing, furnished accommodations, temporary suites, corporate housing, temporary furnished housing, furnished corporate housing, temporary furnished apartments or furnished corporate apartments, chances are that SuiteNet members can accommodate you in Laguna Hills, California.
# -*- coding: utf-8 -*- # # Copyright © 2007-2008 Red Hat, Inc. All rights reserved. # # This file is part of python-fedora # # python-fedora is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # python-fedora is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with python-fedora; if not, see <http://www.gnu.org/licenses/> # # Adapted from code in the TurboGears project licensed under the MIT license. ''' This plugin provides integration with the Fedora Account System using JSON calls to the account system server. .. moduleauthor:: Toshio Kuratomi <tkuratom@redhat.com> ''' from turbogears import config from turbogears.visit.api import Visit, BaseVisitManager from fedora.client import FasProxyClient from fedora import _, __version__ import logging log = logging.getLogger("turbogears.identity.savisit") class JsonFasVisitManager(BaseVisitManager): ''' This proxies visit requests to the Account System Server running remotely. ''' fas_url = config.get('fas.url', 'https://admin.fedoraproject.org/accounts') fas = None def __init__(self, timeout): self.debug = config.get('jsonfas.debug', False) if not self.fas: self.fas = FasProxyClient( self.fas_url, debug=self.debug, session_name=config.get('visit.cookie.name', 'tg-visit'), useragent='JsonFasVisitManager/%s' % __version__) BaseVisitManager.__init__(self, timeout) log.debug('JsonFasVisitManager.__init__: exit') def create_model(self): ''' Create the Visit table if it doesn't already exist. Not needed as the visit tables reside remotely in the FAS2 database. 
''' pass def new_visit_with_key(self, visit_key): ''' Return a new Visit object with the given key. ''' log.debug('JsonFasVisitManager.new_visit_with_key: enter') # Hit any URL in fas2 with the visit_key set. That will call the # new_visit method in fas2 # We only need to get the session cookie from this request request_data = self.fas.refresh_session(visit_key) session_id = request_data[0] log.debug('JsonFasVisitManager.new_visit_with_key: exit') return Visit(session_id, True) def visit_for_key(self, visit_key): ''' Return the visit for this key or None if the visit doesn't exist or has expired. ''' log.debug('JsonFasVisitManager.visit_for_key: enter') # Hit any URL in fas2 with the visit_key set. That will call the # new_visit method in fas2 # We only need to get the session cookie from this request request_data = self.fas.refresh_session(visit_key) session_id = request_data[0] # Knowing what happens in turbogears/visit/api.py when this is called, # we can shortcircuit this step and avoid a round trip to the FAS # server. # if visit_key != session_id: # # visit has expired # return None # # Hitting FAS has already updated the visit. # return Visit(visit_key, False) log.debug('JsonFasVisitManager.visit_for_key: exit') if visit_key != session_id: return Visit(session_id, True) else: return Visit(visit_key, False) def update_queued_visits(self, queue): '''Update the visit information on the server''' log.debug('JsonFasVisitManager.update_queued_visits: enter') # Hit any URL in fas with each visit_key to update the sessions for visit_key in queue: log.info(_('updating visit (%s)'), visit_key) self.fas.refresh_session(visit_key) log.debug('JsonFasVisitManager.update_queued_visits: exit')
Fort Lauderdale is located in Florida. Fort Lauderdale, Florida 33324 has a population of 92,560. Fort Lauderdale 33324 is more family-centric than the surrounding county, with 30.89% of the households containing married families with children. The county average for households married with children is 30.54%. The median household income in Fort Lauderdale, Florida 33324 is $66,886. The median household income for the surrounding county is $51,574, compared to the national median of $53,482. The median age of people living in Fort Lauderdale 33324 is 40 years. The average high temperature in July is 89.8 degrees, with an average low temperature in January of 57.1 degrees. The average rainfall is approximately 63.1 inches per year, with 0 inches of snow per year.
# coding=utf-8 r""" This code was generated by \ / _ _ _| _ _ | (_)\/(_)(_|\/| |(/_ v1.0.0 / / """ from twilio.base import deserialize from twilio.base import values from twilio.base.instance_context import InstanceContext from twilio.base.instance_resource import InstanceResource from twilio.base.list_resource import ListResource from twilio.base.page import Page class SampleList(ListResource): """ PLEASE NOTE that this class contains preview products that are subject to change. Use them with caution. If you currently do not have developer preview access, please contact help@twilio.com. """ def __init__(self, version, assistant_sid, task_sid): """ Initialize the SampleList :param Version version: Version that contains the resource :param assistant_sid: The unique ID of the Assistant. :param task_sid: The unique ID of the Task associated with this Sample. :returns: twilio.rest.preview.understand.assistant.task.sample.SampleList :rtype: twilio.rest.preview.understand.assistant.task.sample.SampleList """ super(SampleList, self).__init__(version) # Path Solution self._solution = {'assistant_sid': assistant_sid, 'task_sid': task_sid, } self._uri = '/Assistants/{assistant_sid}/Tasks/{task_sid}/Samples'.format(**self._solution) def stream(self, language=values.unset, limit=None, page_size=None): """ Streams SampleInstance records from the API as a generator stream. This operation lazily loads records as efficiently as possible until the limit is reached. The results are returned as a generator, so this operation is memory efficient. :param unicode language: An ISO language-country string of the sample. :param int limit: Upper limit for the number of records to return. stream() guarantees to never return more than limit. Default is no limit :param int page_size: Number of records to fetch per request, when not set will use the default value of 50 records. 
If no page_size is defined but a limit is defined, stream() will attempt to read the limit with the most efficient page size, i.e. min(limit, 1000) :returns: Generator that will yield up to limit results :rtype: list[twilio.rest.preview.understand.assistant.task.sample.SampleInstance] """ limits = self._version.read_limits(limit, page_size) page = self.page(language=language, page_size=limits['page_size'], ) return self._version.stream(page, limits['limit']) def list(self, language=values.unset, limit=None, page_size=None): """ Lists SampleInstance records from the API as a list. Unlike stream(), this operation is eager and will load `limit` records into memory before returning. :param unicode language: An ISO language-country string of the sample. :param int limit: Upper limit for the number of records to return. list() guarantees never to return more than limit. Default is no limit :param int page_size: Number of records to fetch per request, when not set will use the default value of 50 records. If no page_size is defined but a limit is defined, list() will attempt to read the limit with the most efficient page size, i.e. min(limit, 1000) :returns: Generator that will yield up to limit results :rtype: list[twilio.rest.preview.understand.assistant.task.sample.SampleInstance] """ return list(self.stream(language=language, limit=limit, page_size=page_size, )) def page(self, language=values.unset, page_token=values.unset, page_number=values.unset, page_size=values.unset): """ Retrieve a single page of SampleInstance records from the API. Request is executed immediately :param unicode language: An ISO language-country string of the sample. 
:param str page_token: PageToken provided by the API :param int page_number: Page Number, this value is simply for client state :param int page_size: Number of records to return, defaults to 50 :returns: Page of SampleInstance :rtype: twilio.rest.preview.understand.assistant.task.sample.SamplePage """ data = values.of({ 'Language': language, 'PageToken': page_token, 'Page': page_number, 'PageSize': page_size, }) response = self._version.page(method='GET', uri=self._uri, params=data, ) return SamplePage(self._version, response, self._solution) def get_page(self, target_url): """ Retrieve a specific page of SampleInstance records from the API. Request is executed immediately :param str target_url: API-generated URL for the requested results page :returns: Page of SampleInstance :rtype: twilio.rest.preview.understand.assistant.task.sample.SamplePage """ response = self._version.domain.twilio.request( 'GET', target_url, ) return SamplePage(self._version, response, self._solution) def create(self, language, tagged_text, source_channel=values.unset): """ Create the SampleInstance :param unicode language: An ISO language-country string of the sample. :param unicode tagged_text: The text example of how end-users may express this task. The sample may contain Field tag blocks. :param unicode source_channel: The communication channel the sample was captured. It can be: voice, sms, chat, alexa, google-assistant, or slack. 
If not included the value will be null :returns: The created SampleInstance :rtype: twilio.rest.preview.understand.assistant.task.sample.SampleInstance """ data = values.of({'Language': language, 'TaggedText': tagged_text, 'SourceChannel': source_channel, }) payload = self._version.create(method='POST', uri=self._uri, data=data, ) return SampleInstance( self._version, payload, assistant_sid=self._solution['assistant_sid'], task_sid=self._solution['task_sid'], ) def get(self, sid): """ Constructs a SampleContext :param sid: A 34 character string that uniquely identifies this resource. :returns: twilio.rest.preview.understand.assistant.task.sample.SampleContext :rtype: twilio.rest.preview.understand.assistant.task.sample.SampleContext """ return SampleContext( self._version, assistant_sid=self._solution['assistant_sid'], task_sid=self._solution['task_sid'], sid=sid, ) def __call__(self, sid): """ Constructs a SampleContext :param sid: A 34 character string that uniquely identifies this resource. :returns: twilio.rest.preview.understand.assistant.task.sample.SampleContext :rtype: twilio.rest.preview.understand.assistant.task.sample.SampleContext """ return SampleContext( self._version, assistant_sid=self._solution['assistant_sid'], task_sid=self._solution['task_sid'], sid=sid, ) def __repr__(self): """ Provide a friendly representation :returns: Machine friendly representation :rtype: str """ return '<Twilio.Preview.Understand.SampleList>' class SamplePage(Page): """ PLEASE NOTE that this class contains preview products that are subject to change. Use them with caution. If you currently do not have developer preview access, please contact help@twilio.com. """ def __init__(self, version, response, solution): """ Initialize the SamplePage :param Version version: Version that contains the resource :param Response response: Response from the API :param assistant_sid: The unique ID of the Assistant. :param task_sid: The unique ID of the Task associated with this Sample. 
:returns: twilio.rest.preview.understand.assistant.task.sample.SamplePage :rtype: twilio.rest.preview.understand.assistant.task.sample.SamplePage """ super(SamplePage, self).__init__(version, response) # Path Solution self._solution = solution def get_instance(self, payload): """ Build an instance of SampleInstance :param dict payload: Payload response from the API :returns: twilio.rest.preview.understand.assistant.task.sample.SampleInstance :rtype: twilio.rest.preview.understand.assistant.task.sample.SampleInstance """ return SampleInstance( self._version, payload, assistant_sid=self._solution['assistant_sid'], task_sid=self._solution['task_sid'], ) def __repr__(self): """ Provide a friendly representation :returns: Machine friendly representation :rtype: str """ return '<Twilio.Preview.Understand.SamplePage>' class SampleContext(InstanceContext): """ PLEASE NOTE that this class contains preview products that are subject to change. Use them with caution. If you currently do not have developer preview access, please contact help@twilio.com. """ def __init__(self, version, assistant_sid, task_sid, sid): """ Initialize the SampleContext :param Version version: Version that contains the resource :param assistant_sid: The unique ID of the Assistant. :param task_sid: The unique ID of the Task associated with this Sample. :param sid: A 34 character string that uniquely identifies this resource. 
:returns: twilio.rest.preview.understand.assistant.task.sample.SampleContext :rtype: twilio.rest.preview.understand.assistant.task.sample.SampleContext """ super(SampleContext, self).__init__(version) # Path Solution self._solution = {'assistant_sid': assistant_sid, 'task_sid': task_sid, 'sid': sid, } self._uri = '/Assistants/{assistant_sid}/Tasks/{task_sid}/Samples/{sid}'.format(**self._solution) def fetch(self): """ Fetch the SampleInstance :returns: The fetched SampleInstance :rtype: twilio.rest.preview.understand.assistant.task.sample.SampleInstance """ payload = self._version.fetch(method='GET', uri=self._uri, ) return SampleInstance( self._version, payload, assistant_sid=self._solution['assistant_sid'], task_sid=self._solution['task_sid'], sid=self._solution['sid'], ) def update(self, language=values.unset, tagged_text=values.unset, source_channel=values.unset): """ Update the SampleInstance :param unicode language: An ISO language-country string of the sample. :param unicode tagged_text: The text example of how end-users may express this task. The sample may contain Field tag blocks. :param unicode source_channel: The communication channel the sample was captured. It can be: voice, sms, chat, alexa, google-assistant, or slack. 
If not included the value will be null :returns: The updated SampleInstance :rtype: twilio.rest.preview.understand.assistant.task.sample.SampleInstance """ data = values.of({'Language': language, 'TaggedText': tagged_text, 'SourceChannel': source_channel, }) payload = self._version.update(method='POST', uri=self._uri, data=data, ) return SampleInstance( self._version, payload, assistant_sid=self._solution['assistant_sid'], task_sid=self._solution['task_sid'], sid=self._solution['sid'], ) def delete(self): """ Deletes the SampleInstance :returns: True if delete succeeds, False otherwise :rtype: bool """ return self._version.delete(method='DELETE', uri=self._uri, ) def __repr__(self): """ Provide a friendly representation :returns: Machine friendly representation :rtype: str """ context = ' '.join('{}={}'.format(k, v) for k, v in self._solution.items()) return '<Twilio.Preview.Understand.SampleContext {}>'.format(context) class SampleInstance(InstanceResource): """ PLEASE NOTE that this class contains preview products that are subject to change. Use them with caution. If you currently do not have developer preview access, please contact help@twilio.com. 
""" def __init__(self, version, payload, assistant_sid, task_sid, sid=None): """ Initialize the SampleInstance :returns: twilio.rest.preview.understand.assistant.task.sample.SampleInstance :rtype: twilio.rest.preview.understand.assistant.task.sample.SampleInstance """ super(SampleInstance, self).__init__(version) # Marshaled Properties self._properties = { 'account_sid': payload.get('account_sid'), 'date_created': deserialize.iso8601_datetime(payload.get('date_created')), 'date_updated': deserialize.iso8601_datetime(payload.get('date_updated')), 'task_sid': payload.get('task_sid'), 'language': payload.get('language'), 'assistant_sid': payload.get('assistant_sid'), 'sid': payload.get('sid'), 'tagged_text': payload.get('tagged_text'), 'url': payload.get('url'), 'source_channel': payload.get('source_channel'), } # Context self._context = None self._solution = { 'assistant_sid': assistant_sid, 'task_sid': task_sid, 'sid': sid or self._properties['sid'], } @property def _proxy(self): """ Generate an instance context for the instance, the context is capable of performing various actions. All instance actions are proxied to the context :returns: SampleContext for this SampleInstance :rtype: twilio.rest.preview.understand.assistant.task.sample.SampleContext """ if self._context is None: self._context = SampleContext( self._version, assistant_sid=self._solution['assistant_sid'], task_sid=self._solution['task_sid'], sid=self._solution['sid'], ) return self._context @property def account_sid(self): """ :returns: The unique ID of the Account that created this Sample. 
:rtype: unicode """ return self._properties['account_sid'] @property def date_created(self): """ :returns: The date that this resource was created :rtype: datetime """ return self._properties['date_created'] @property def date_updated(self): """ :returns: The date that this resource was last updated :rtype: datetime """ return self._properties['date_updated'] @property def task_sid(self): """ :returns: The unique ID of the Task associated with this Sample. :rtype: unicode """ return self._properties['task_sid'] @property def language(self): """ :returns: An ISO language-country string of the sample. :rtype: unicode """ return self._properties['language'] @property def assistant_sid(self): """ :returns: The unique ID of the Assistant. :rtype: unicode """ return self._properties['assistant_sid'] @property def sid(self): """ :returns: A 34 character string that uniquely identifies this resource. :rtype: unicode """ return self._properties['sid'] @property def tagged_text(self): """ :returns: The text example of how end-users may express this task. The sample may contain Field tag blocks. :rtype: unicode """ return self._properties['tagged_text'] @property def url(self): """ :returns: The url :rtype: unicode """ return self._properties['url'] @property def source_channel(self): """ :returns: The communication channel the sample was captured. It can be: voice, sms, chat, alexa, google-assistant, or slack. If not included the value will be null :rtype: unicode """ return self._properties['source_channel'] def fetch(self): """ Fetch the SampleInstance :returns: The fetched SampleInstance :rtype: twilio.rest.preview.understand.assistant.task.sample.SampleInstance """ return self._proxy.fetch() def update(self, language=values.unset, tagged_text=values.unset, source_channel=values.unset): """ Update the SampleInstance :param unicode language: An ISO language-country string of the sample. :param unicode tagged_text: The text example of how end-users may express this task. 
The sample may contain Field tag blocks. :param unicode source_channel: The communication channel the sample was captured. It can be: voice, sms, chat, alexa, google-assistant, or slack. If not included the value will be null :returns: The updated SampleInstance :rtype: twilio.rest.preview.understand.assistant.task.sample.SampleInstance """ return self._proxy.update(language=language, tagged_text=tagged_text, source_channel=source_channel, ) def delete(self): """ Deletes the SampleInstance :returns: True if delete succeeds, False otherwise :rtype: bool """ return self._proxy.delete() def __repr__(self): """ Provide a friendly representation :returns: Machine friendly representation :rtype: str """ context = ' '.join('{}={}'.format(k, v) for k, v in self._solution.items()) return '<Twilio.Preview.Understand.SampleInstance {}>'.format(context)
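The stream()/list() docstrings above describe how a limit without an explicit page_size resolves to the most efficient page size, i.e. min(limit, 1000). A minimal sketch of that resolution logic (assumed behavior for illustration; the real logic lives behind the self._version.read_limits call in the code above):

```python
def read_limits(limit=None, page_size=None, max_page_size=1000):
    """Sketch of the limit/page_size resolution described in the
    stream()/list() docstrings (assumed behavior, not the actual
    implementation behind self._version.read_limits): with a limit
    but no page_size, use the most efficient page size,
    i.e. min(limit, 1000)."""
    if page_size is None and limit is not None:
        page_size = min(limit, max_page_size)
    return {'limit': limit, 'page_size': page_size}
```

So a limit of 30 fetches a single page of 30 records, while a limit of 5,000 pages through in chunks of 1,000; an explicit page_size always wins.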
I suppose you are aware of the disturbing influence of such "ignorance" on one of the unique and ancient languages of the world, whose alphabet dates to 362 A.D. and is practically unchanged till now! We have the same 39 letters (36+3) which our ancestors used many, many years and centuries ago... Can you imagine that feeling? Some translations of Aristotle and other greats are now the only surviving copies, due to the tragic fire of the Alexandrian library in the IV century, and many scientists all over the world travel to Armenia and learn the Armenian language so as to have a unique chance to read the original old manuscripts. One can also find the Armenian National Language Support site (Armenian NLS for Windows), where the unique Armenian fonts are available to be downloaded as well.
import sys
sys.path.append("../src/")
import unittest
from app.admin.admin_mgr import AdminManager
from app.database.models import AdminUser, AdminEmail, AdminUserHackathonRel
from hackathon import app
from mock import Mock, ANY
from flask import g


class AdminManagerTest(unittest.TestCase):
    def setUp(self):
        app.config['TESTING'] = True
        app.config['WTF_CSRF_ENABLED'] = False

    def tearDown(self):
        pass

    '''test method: get_hackid_from_adminid'''
    def test_get_hackid_from_adminid(self):
        admin_email_test = [AdminEmail(email='test@ms.com')]
        admin_user_hackathon_rel = [AdminUserHackathonRel(hackathon_id=-1)]
        mock_db = Mock()
        mock_db.find_all_objects_by.return_value = admin_email_test
        mock_db.find_all_objects.return_value = admin_user_hackathon_rel
        am = AdminManager(mock_db)
        self.assertEqual(am.get_hackid_from_adminid(1), [-1L])
        mock_db.find_all_objects_by.assert_called_once_with(AdminEmail, admin_id=1)
        mock_db.find_all_objects.assert_called_once_with(AdminUserHackathonRel, ANY)

    '''test method: check_role for decorators'''
    def test_check_role_super_admin_success(self):
        admin_email_test = [AdminEmail(email='test@ms.com')]
        admin_user_hackathon_rel = [AdminUserHackathonRel(hackathon_id=-1)]
        mock_db = Mock()
        mock_db.find_all_objects_by.return_value = admin_email_test
        mock_db.find_all_objects.return_value = admin_user_hackathon_rel
        am = AdminManager(mock_db)
        with app.test_request_context('/'):
            g.admin = AdminUser(id=1, name='testadmin')
            self.assertTrue(am.check_role(0))
            mock_db.find_all_objects_by.assert_called_once_with(AdminEmail, admin_id=1)
            mock_db.find_all_objects.assert_called_once_with(AdminUserHackathonRel, ANY)

    def test_check_role_super_admin_failed(self):
        admin_email_test = [AdminEmail(email='test@ms.com')]
        admin_user_hackathon_rel = [AdminUserHackathonRel(hackathon_id=2)]
        mock_db = Mock()
        mock_db.find_all_objects_by.return_value = admin_email_test
        mock_db.find_all_objects.return_value = admin_user_hackathon_rel
        am = AdminManager(mock_db)
        with app.test_request_context('/'):
            g.admin = AdminUser(id=1, name='testadmin')
            self.assertFalse(am.check_role(0))
            mock_db.find_all_objects_by.assert_called_once_with(AdminEmail, admin_id=1)
            mock_db.find_all_objects.assert_called_once_with(AdminUserHackathonRel, ANY)

    def test_check_role_common_admin_success(self):
        admin_email_test = [AdminEmail(email='test@ms.com')]
        admin_user_hackathon_rel = [AdminUserHackathonRel(hackathon_id=2)]
        mock_db = Mock()
        mock_db.find_all_objects_by.return_value = admin_email_test
        mock_db.find_all_objects.return_value = admin_user_hackathon_rel
        am = AdminManager(mock_db)
        with app.test_request_context('/'):
            g.admin = AdminUser(id=1, name='testadmin')
            self.assertTrue(am.check_role(1))
            mock_db.find_all_objects_by.assert_called_once_with(AdminEmail, admin_id=1)
            mock_db.find_all_objects.assert_called_once_with(AdminUserHackathonRel, ANY)

    def test_check_role_common_admin_failed(self):
        admin_email_test = [AdminEmail(email='test@ms.com')]
        admin_user_hackathon_rel = None
        mock_db = Mock()
        mock_db.find_all_objects_by.return_value = admin_email_test
        mock_db.find_all_objects.return_value = admin_user_hackathon_rel
        am = AdminManager(mock_db)
        with app.test_request_context('/'):
            g.admin = AdminUser(id=1, name='testadmin')
            self.assertFalse(am.check_role(1))
            mock_db.find_all_objects_by.assert_called_once_with(AdminEmail, admin_id=1)
Malik Andrews, Thomas Holley and Khendell Puryear signed their National Letters of Intent on Wednesday (pictured with assistant coach Joe Desiena, principal Ari Hoogenboom, and Shawn O’Connor). Photo courtesy of Lincoln High School. Wednesday was National Signing Day, the day high school football players can officially commit to play college football next season, and Lincoln’s Thomas Holley, Erasmus Hall’s Curtis Samuel, and Poly Prep’s Jay Hayes headlined a group of Brooklyn players that will play Division-I football next season. Samuel and Hayes made their verbal commitments months ago, choosing Ohio State and Notre Dame, respectively. Holley’s situation was different because he only started playing football last season. He initially made a verbal commitment to Penn State, but after its coach, Bill O’Brien, was hired by the Houston Texans, he decided to decommit and instead went with Florida. Holley had two teammates sign NLIs on Wednesday: Khendell Puryear and Malik Andrews. Andrews, a wide receiver, signed with American International College, and Puryear signed with Central Connecticut. The two nearly ended up at the same school, though, as Central Connecticut only came onto Puryear’s radar within the last two weeks. Rossomando, who went to high school at Port Richmond in Staten Island, eventually landed at Central Connecticut, and even at a different school he still had interest in Puryear. He also brought with him a coach from the University at Albany who also had interest in Puryear. Once they reached out to him, he quickly made up his mind and decided to forgo AIC and go with CCSU instead. There were quite a few others from Brooklyn who signed, including Erasmus Hall’s Jose Duncan, who signed with the University of Rhode Island, and Darin Peart, who signed with Stony Brook. Sheepshead Bay’s Aking Gaston signed with Auburn, Brooklyn Tech’s Deon Mash and New Utrecht’s Richard Wright both signed with C.W.
Post, and Mark Goldman, an offensive lineman from Midwood, signed with Harvard. A number of other signings are expected in the upcoming weeks, as some kids are waiting on report cards or other eligibility requirements. Expect Erasmus Hall’s Khalil Lewin and Lincoln’s Leroy Hancle to sign somewhere. Antoine Holloman should sign at Morrisville along with teammate Tyler Cowley, who has already committed there, and Dashawn Brice from Boys and Girls High School. I will keep you updated on those possible signings, as well as any others I may have missed or that materialize.