#encoding:UTF-8
import requests
import time
import datetime
import sys
from bs4 import BeautifulSoup


def getInfo(soup):
    print('in get info.')
    result = ''
    a = soup.findAll('dl')
    for b in a:
        c = b.find('a', {'class': 'title'})
        d = c.string
        print(d)
        result += d + '\n'
    return result


def getPage(url_path):
    print(url_path + ' ' + 'datetime? : ' + str(datetime.datetime.now()))
    try:
        print('Starting a new page.')
        response = requests.get(url_path)
        if response.status_code != 200:
            print("\n!!! Request returned an abnormal status code: " + str(response.status_code) + " !!!\n")
        time.sleep(1)  # throttle requests; the original slept after return, which never ran
        print('Finished fetching page content.')
        return response.text
    except Exception:
        print('get Exception.')
        print(sys.exc_info())


def getList(url_path, fout):
    url_path += '?start='
    for i in range(0, 100):
        p = url_path + str(i * 15)
        responseText = getPage(p)
        soup = BeautifulSoup(responseText, 'html.parser')  # name a parser explicitly
        r = getInfo(soup)
        print(r)
        fout.write(r)
        fout.write('\n')
        fout.write(' >>>>>> the ' + str(i) + ' page. <<<<<<\n')
        fout.write('\n')


if __name__ == '__main__':
    name = 'books about children\'s literature'
    root = 'http://www.douban.com/tag/%E5%84%BF%E7%AB%A5%E6%96%87%E5%AD%A6/book'
    fout = open('../rating_out/' + name + '.txt', 'w')
    getList(root, fout)
    fout.close()
On Sunday 18th November, in aid of BBC Children in Need, Michael Ball will be performing his BBC Radio 2 programme live from London's West End. Along with special guests Alfie Boe, Tom Jones, Sheridan Smith, Boyzone and the Kingdom Choir, you too are invited to join him at the Savoy Theatre for this showbiz extravaganza. After the show, Michael will be hosting an intimate lunch at the prestigious Savoy Hotel. Joining Michael at the lunch, and hosting their own tables, will be a selection of Radio 2's finest presenters and some of the guests from Michael's show. It will undoubtedly be an entertaining afternoon raising money for BBC Children in Need, and Michael would love you and your friends and family to join him there. All profits from the sale of tickets go to BBC Children in Need (charity number 802052 in England & Wales and SC039557 in Scotland). If all the tickets sell, the profit is expected to be at least £20,000. Please note: only children over the age of 14 can attend the gala reception and lunch, and they must be accompanied by an adult. Alcohol will only be served to over-18s. To pick your own seats for Michael Ball Live From The Savoy In Aid Of BBC Children In Need at the Savoy Theatre, simply find the performances you'd like to see from the listings below and you'll be given the option.
# Given two 1d vectors, implement an iterator to return their elements alternately.
#
# For example, given two 1d vectors:
#
# v1 = [1, 2]
# v2 = [3, 4, 5, 6]
# By calling next repeatedly until hasNext returns false, the order of elements returned by next should be:
# [1, 3, 2, 4, 5, 6].
#
# Follow up: What if you are given k 1d vectors? How well can your code be extended to such cases?
#
# Clarification for the follow up question - Update (2015-09-18):
# The "Zigzag" order is not clearly defined and is ambiguous for k > 2 cases.
# If "Zigzag" does not look right to you, replace "Zigzag" with "Cyclic". For example, given the following input:
#
# [1,2,3]
# [4,5,6,7]
# [8,9]
# It should return [1,4,8,2,5,9,3,6,7].


class ZigzagIterator(object):
    def __init__(self, v1, v2):
        """
        Initialize your data structure here.
        :type v1: List[int]
        :type v2: List[int]
        """
        self.all_lists = [(len(v), iter(v)) for v in (v1, v2) if v]

    def next(self):
        """
        :rtype: int
        """
        n, v = self.all_lists.pop(0)
        if n > 1:
            self.all_lists.append((n - 1, v))
        return next(v)  # built-in next() works on both Python 2 and 3 iterators

    def hasNext(self):
        """
        :rtype: bool
        """
        return bool(self.all_lists)

# Note:
# Maintain a (remaining-length, iterator) pair for each non-empty vector, popping a pair
# off the front and appending it back to the end while it still has elements left.
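The follow-up ("what if you are given k 1d vectors?") falls out of the same queue idea. Below is a hedged sketch; the class name `CyclicIterator` and its variable names are mine, not from the problem statement:

```python
from collections import deque

class CyclicIterator(object):
    """Round-robin over k 1d vectors using a queue of (remaining, iterator) pairs."""

    def __init__(self, vectors):
        self.queue = deque((len(v), iter(v)) for v in vectors if v)

    def next(self):
        n, it = self.queue.popleft()
        if n > 1:
            self.queue.append((n - 1, it))  # re-enqueue while elements remain
        return next(it)

    def hasNext(self):
        return bool(self.queue)


it = CyclicIterator([[1, 2, 3], [4, 5, 6, 7], [8, 9]])
result = []
while it.hasNext():
    result.append(it.next())
# result == [1, 4, 8, 2, 5, 9, 3, 6, 7]
```

Because each pair is re-enqueued only while elements remain, exhausted vectors drop out silently and the remaining ones keep cycling, which matches the expected [1,4,8,2,5,9,3,6,7] order.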
Do-it-all budget printer is a perfect fit for tight spaces, by Justin Yu. You can also control printer functionality from the web interface, including all printer settings, such as nesting; disabling the USB interface on the printer; and various levels of locking down the front-panel touch-screen interface. Please enter a Product Number to complete the request. I found that when making large-format copies, the Normal copy quality setting produces results that are more than acceptable, for both color and black-and-white jobs. If speed is more important than a perfect copy, you can switch to Fast mode. Max Printing Resolution. Finally, a multifunction inkjet printer with low-cost ink refills, by Justin Yu. Bose, Sony, Sennheiser and Jaybird: here's a selection of the best headphone shopping values this holiday season. One or more of the values entered is not permitted. See contents by R. McSwain. That includes processing and printing in the default Normal print quality setting. Select from the products you own. HP Matte Film gsm – 36″ x Warranty withdrawn refurbished product: Resolution Matte Coated Paper gsm – 36″ mm x 45m – a heavyweight matte coated paper offering vivid colours and fast drying times. We are unable to determine your warranty status for the product and serial number provided. It matches the product: That's not a problem if you have plenty of drive space, but it is certainly an issue if you plan to send the file via email. All exclusively from Cadalyst! Resolution Colour Print CAD Paper 90gsm – 24in x 50m x 4rl mm x 50m – good dot gain control giving fine line sharpness for area fills. Power Adapter Voltage Required. As expected, quality was excellent on both. Embedded Web Server: HP Designjet printers come with an embedded web server that lets you view, edit, and control the printer from any computer on the network.
Please send any technical comments or questions to our webmaster. HP Heavyweight Coated Paper gsm – 24″ x The serial number provided does not match the previously selected product. You must be careful when scanning in color at high resolutions because file sizes grow almost exponentially. Power Consumption (Sleep). Any warranty support needed would be completed by the reseller that sold the product. For more information or advice, please contact us via web form or call us. Scanning to a network location or a USB-connected device are nearly identical processes. HP Heavyweight Coated paper gsm – 42″ x Resolution Matte Coated Paper 90gsm – 36″ mm x 45m – for consistent, striking colours, high-contrast blacks plus crisp graphics and text.
#!/usr/bin/env python
"""
use paramiko to send cmd to RTR2
"""

import time
from getpass import getpass

import paramiko

MAX_BUFFER = 65535


def prepare_buffer(rtr2):
    # drain anything waiting in the receive buffer
    if rtr2.recv_ready():
        return rtr2.recv(MAX_BUFFER)


def disable_paging(rtr2):
    cmd = 'terminal length 0 \n'
    rtr2.send(cmd)
    time.sleep(1)
    prepare_buffer(rtr2)


def send_cmd(rtr2, cmd):
    rtr2.send(cmd)
    time.sleep(2)
    if rtr2.recv_ready():
        return rtr2.recv(MAX_BUFFER)
    else:
        print('buffer is empty')


def main():
    """
    set up paramiko cxn and send the cmd
    """
    ip_addr = '50.76.53.27'
    port = 8022
    username = 'pyclass'
    password = getpass()

    remote_conn_pre = paramiko.SSHClient()
    remote_conn_pre.load_system_host_keys()
    remote_conn_pre.connect(ip_addr, port=port, username=username, password=password,
                            look_for_keys=False, allow_agent=False)
    rtr2 = remote_conn_pre.invoke_shell()

    prepare_buffer(rtr2)
    disable_paging(rtr2)

    cmd = 'show run | inc logging \n'
    output = send_cmd(rtr2, cmd)
    print(output)

    cmd = 'conf t \n'
    send_cmd(rtr2, cmd)
    cmd = 'logging buffered 30000 \n'
    send_cmd(rtr2, cmd)
    cmd = 'exit \n'
    send_cmd(rtr2, cmd)
    cmd = 'wr \n'
    send_cmd(rtr2, cmd)

    cmd = 'show run | inc logging \n'
    output = send_cmd(rtr2, cmd)
    print(output)


if __name__ == "__main__":
    main()
We will be releasing the full line-up in the New Year, but as always you can expect three days of top-class jazz in a wonderful venue. We are currently putting the final touches to the line-up over the festive period, but you can be assured we will have our usual eclectic mix of established artists and rising stars, with a couple of surprises too. We look forward to seeing you in September 2019. Full line-up to be announced early in the New Year.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import os

try:
    from PySide import QtCore, QtGui
except ImportError:
    print("Qt module not found; falling back to the command line")
    os.system("python " + sys.path[0] + "/main.py")
    os._exit(0)

from StoreData import store_list, all_store, store_init, add_new_store
from main import main

# The uic-based loading below has been abandoned
'''
qtCreatorFile = sys.path[0] + "/UIView.ui"  # Enter file here.
Ui_MainWindow, QtBaseClass = uic.loadUiType(qtCreatorFile)
'''
from UIView import Ui_MainWindow


class MyApp(QtGui.QMainWindow, Ui_MainWindow):
    """define UI init"""

    def __init__(self):
        QtGui.QMainWindow.__init__(self)
        Ui_MainWindow.__init__(self)
        self.setupUi(self)
        self.viewlist()

    def viewlist(self):
        ss = "\n".join([st[1] for st in store_list])
        self.StoreListView.setText(ss)


class what_to_eat(MyApp):
    """Interaction handlers"""

    def __init__(self):
        super(what_to_eat, self).__init__()
        self.pushButton.clicked.connect(self.result)
        self.pushButton_2.clicked.connect(self.add)

    def result(self):
        self.resultBrowser.setText(main(1).decode("utf-8"))

    def add(self):
        a = store_init()
        a.name = self.newst.toPlainText()
        a.address = self.newadd.toPlainText()
        if a.name:
            add_new_store(a)


if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    window = what_to_eat()
    window.show()
    sys.exit(app.exec_())
Are you getting tired of your overpriced cable TV bill? We’ve been there too. Luckily, there’s a great alternative for you – satellite TV services in Cuyama. With prices and quality superior to cable television, we’re positive you’ll be more than happy with switching. Many others are beginning to realize why satellite TV is a much smarter choice for every household in Cuyama, CA. In fact, there has been a decline in cable TV subscriptions since 2008 and a rise in satellite TV subscriptions and Internet streaming services like Netflix! More about satellite TV services in Cuyama, California.
#!/usr/bin/env python
"""Provide the standard Python string.Formatter engine."""

from __future__ import absolute_import
from __future__ import print_function

import string

try:
    basestring
except NameError:
    basestring = str

from . import Engine


class MissingField(object):
    """Represent a missing field for unprocessed output."""

    def __init__(self, field_name):
        """Initialize field name."""
        self.field_name = field_name
        self.conversion = None
        self.format_spec = None

    def __str__(self):
        """Yield representation as close to original spec as possible."""
        return '{%s%s%s}' % (
            self.field_name,
            '!' + self.conversion if self.conversion else '',
            ':' + self.format_spec if self.format_spec else '',
        )


class FormatterWrapper(string.Formatter):
    """Wrap string.Formatter.

    Handle only a mapping and provide tolerance.
    """

    def __init__(self, tolerant=False, **kwargs):
        """Initialize FormatterWrapper."""
        super(FormatterWrapper, self).__init__(**kwargs)
        self.tolerant = tolerant

    def get_value(self, key, args, kwargs):
        """Get value only from mapping and possibly convert key to string."""
        if (self.tolerant and not isinstance(key, basestring)
                and key not in kwargs):
            key = str(key)
        return kwargs[key]

    def get_field(self, field_name, args, kwargs):
        """Create a special value when field missing and tolerant."""
        try:
            obj, arg_used = super(FormatterWrapper, self).get_field(
                field_name, args, kwargs)
        except (KeyError, IndexError, AttributeError):
            if not self.tolerant:
                raise
            obj = MissingField(field_name)
            arg_used = field_name
        return obj, arg_used

    def convert_field(self, value, conversion):
        """When field missing, store conversion specifier."""
        if isinstance(value, MissingField):
            if conversion is not None:
                value.conversion = conversion
            return value
        return super(FormatterWrapper, self).convert_field(value, conversion)

    def format_field(self, value, format_spec):
        """When field missing, return original spec."""
        if isinstance(value, MissingField):
            if format_spec is not None:
                value.format_spec = format_spec
            return str(value)
        return super(FormatterWrapper, self).format_field(value, format_spec)


class StringFormatter(Engine):
    """String.Formatter engine."""

    handle = 'string.Formatter'

    def __init__(self, template, tolerant=False, **kwargs):
        """Initialize string.Formatter."""
        super(StringFormatter, self).__init__(**kwargs)
        self.template = template
        self.formatter = FormatterWrapper(tolerant=tolerant)

    def apply(self, mapping):
        """Apply a mapping of name-value-pairs to a template."""
        return self.formatter.vformat(self.template, None, mapping)
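The tolerant behaviour can be illustrated with a much smaller, self-contained sketch. `TolerantFormatter` here is a simplified stand-in of my own (it only echoes missing keys back verbatim and does not preserve conversions or format specs the way `MissingField` does):

```python
import string

class TolerantFormatter(string.Formatter):
    """Leave unknown {fields} in place instead of raising KeyError."""

    def get_value(self, key, args, kwargs):
        try:
            return kwargs[key]
        except KeyError:
            return '{%s}' % key  # echo the placeholder back verbatim

tf = TolerantFormatter()
print(tf.vformat('{greeting}, {name}!', None, {'greeting': 'Hello'}))
# -> Hello, {name}!
```

This leaves partially filled templates re-formattable later, which is the point of the tolerant mode above.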
Originally known as Sunday Bankole (BBanks), he lets loose on this inspirational new single "SK", a song about self-love, hope and belief in oneself to get through life. What listeners will find clear is that Banks's feeling for "Self Knowledge" is about knowing himself, connecting with himself, being himself and loving more. Temmie's mellifluous tone will empower listeners to face their challenges with renewed energy. It's MuchMusic LessTalk Advancement! Listen to Self Knowledge below and let us know what you think.
# -*- coding: utf-8 -*-
"""
    HipparchiaServer: an interface to a database of Greek and Latin texts
    Copyright: E Gunderson 2016-21
    License: GNU GENERAL PUBLIC LICENSE 3
        (see LICENSE in the top level directory of the distribution)
"""

from collections import deque

from server.dbsupport.miscdbfunctions import resultiterator
from server.dbsupport.tablefunctions import assignuniquename
from server.hipparchiaobjects.connectionobject import ConnectionObject


def buildwordbags(searchobject, morphdict: dict, sentences: list) -> deque:
    """
    return the bags after picking which bagging method to use

    :param searchobject:
    :param morphdict:
    :param sentences:
    :return:
    """

    searchobject.poll.statusis('Building bags of words')

    baggingmethods = {'flat': buildflatbagsofwords,
                      'alternates': buildbagsofwordswithalternates,
                      'winnertakesall': buildwinnertakesallbagsofwords,
                      'unlemmatized': buidunlemmatizedbagsofwords}

    bagofwordsfunction = baggingmethods[searchobject.session['baggingmethod']]

    bagsofwords = bagofwordsfunction(morphdict, sentences)

    return bagsofwords


def buildwinnertakesallbagsofwords(morphdict, sentences) -> deque:
    """
    turn a list of sentences into a list of list of headwords

    here we figure out which headword is the dominant homonym
    then we just use that term

        esse ===> sum
        esse =/=> edo

    assuming that it is faster to do this 2x so you can do a temp table
    query rather than iterate into the DB

    not tested/profiled, though...

    :param morphdict:
    :param sentences:
    :return:
    """

    # PART ONE: figure out who the "winners" are going to be

    bagsofwords = buildflatbagsofwords(morphdict, sentences)

    allheadwords = {w for bag in bagsofwords for w in bag}

    dbconnection = ConnectionObject(readonlyconnection=False)
    dbconnection.setautocommit()
    dbcursor = dbconnection.cursor()

    rnd = assignuniquename(6)

    tqtemplate = """
    CREATE TEMPORARY TABLE temporary_headwordlist_{rnd} AS
        SELECT headwords AS hw FROM unnest(ARRAY[{allwords}]) headwords
    """

    qtemplate = """
    SELECT entry_name, total_count FROM {db}
        WHERE EXISTS
            (SELECT 1 FROM temporary_headwordlist_{rnd} temptable
                WHERE temptable.hw = {db}.entry_name)
    """

    tempquery = tqtemplate.format(rnd=rnd, allwords=list(allheadwords))
    dbcursor.execute(tempquery)

    # https://www.psycopg.org/docs/extras.html#psycopg2.extras.execute_values

    query = qtemplate.format(rnd=rnd, db='dictionary_headword_wordcounts')
    dbcursor.execute(query)
    results = resultiterator(dbcursor)

    rankedheadwords = {r[0]: r[1] for r in results}

    # PART TWO: let the winners take all

    bagsofwords = deque()
    for s in sentences:
        lemmatized = deque()
        for word in s:
            # [('x', 4), ('y', 5), ('z', 1)]
            try:
                possibilities = sorted([(item, rankedheadwords[item]) for item in morphdict[word]],
                                       key=lambda x: x[1])
                # the first item of the last tuple is the winner
                lemmatized.append(possibilities[-1][0])
            except KeyError:
                pass
        if lemmatized:
            bagsofwords.append(lemmatized)

    return bagsofwords


def buidunlemmatizedbagsofwords(morphdict, sentences) -> deque:
    """
    you wasted a bunch of cycles generating the morphdict,
    now you will fail to use it...

    what you see is what you get...

    :param morphdict:
    :param sentences:
    :return:
    """

    bagsofwords = sentences

    return bagsofwords


def buildflatbagsofwords(morphdict, sentences) -> deque:
    """
    turn a list of sentences into a list of list of headwords

    here we put alternate possibilities next to one another:
        ϲυγγενεύϲ ϲυγγενήϲ

    in buildbagsofwordswithalternates() we have one 'word':
        ϲυγγενεύϲ·ϲυγγενήϲ

    :param morphdict:
    :param sentences:
    :return:
    """

    bagsofwords = deque()
    for s in sentences:
        lemmatized = deque()
        for word in s:
            try:
                # WARNING: we are treating homonyms as if 2+ words were there instead of just one
                # 'rectum' will give you 'rectus' and 'rego'; 'res' will give you 'reor' and 'res'
                # this necessarily distorts the vector space
                lemmatized.append([item for item in morphdict[word]])
            except KeyError:
                pass
        # flatten
        bagsofwords.append([item for sublist in lemmatized for item in sublist])

    return bagsofwords


def buildbagsofwordswithalternates(morphdict, sentences) -> deque:
    """
    buildbagsofwords() in rudimentaryvectormath.py does this but flattens
    rather than joining multiple possibilities

    here we have one 'word': ϲυγγενεύϲ·ϲυγγενήϲ
    there we have two: ϲυγγενεύϲ ϲυγγενήϲ

    :param morphdict:
    :param sentences:
    :return:
    """

    bagsofwords = deque()
    for s in sentences:
        lemmatizedsentence = deque()
        for word in s:
            try:
                lemmatizedsentence.append('·'.join(morphdict[word]))
            except KeyError:
                pass
        bagsofwords.append(lemmatizedsentence)

    return bagsofwords
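The difference between the 'flat' and 'alternates' bagging strategies can be shown standalone with a toy morphdict (the data below is invented for illustration and is not the server's real lemma table):

```python
# Toy illustration: how 'flat' vs 'alternates' bagging treat a word
# that has two candidate headwords ('esse' -> 'sum' or 'edo').
morphdict = {'esse': ['sum', 'edo'], 'amat': ['amo']}
sentences = [['esse', 'amat']]

# flat: homonyms become separate bag entries
flat = [[hw for w in s if w in morphdict for hw in morphdict[w]] for s in sentences]

# alternates: homonyms are joined into one composite 'word'
alternates = [['·'.join(morphdict[w]) for w in s if w in morphdict] for s in sentences]

print(flat)        # [['sum', 'edo', 'amo']]
print(alternates)  # [['sum·edo', 'amo']]
```

The flat form inflates the count of every homonym (the distortion the WARNING comment describes), while the alternates form keeps one token per surface word at the cost of a composite vocabulary.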
Looking for a prepaid debit card in Haysi, VA? Searching for the perfect Visa® prepaid debit card in Haysi—one that's safe and easy with access to online banking? BB&T MoneyAccount combines the features of a checking account with the convenience of a reloadable prepaid debit card. You'll enjoy a variety of benefits that make it easy to bank when you want, how you want in Haysi. Take advantage of U by BB&T®, our online and mobile banking experience, with bill payment and budgeting tools to help you keep tabs on your money. When you open your MoneyAccount online in Haysi, you'll be able to spend with confidence. You'll never overdraw your account because you can only spend the amount that's available on your card. Prefer to sign up for a prepaid debit card in person? With more than 2,000 BB&T branches, we're right around the corner from you in Haysi, VA.
# Copyright 2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# pylint: disable=attribute-defined-outside-init

import os

from wlauto import Workload, Parameter
from wlauto import File
from wlauto.exceptions import ConfigError
from wlauto.utils.android import ApkInfo


class ApkLaunchWorkload(Workload):
    name = 'apklaunch'
    description = '''
    Installs and runs a .apk file, waits wait_time_seconds, and tests
    if the app has started successfully.
    '''

    supported_platforms = ['android']

    parameters = [
        Parameter('apk_file', description='Name of the .apk to run', mandatory=True),
        Parameter('uninstall_required', kind=bool, default=False,
                  description='Set to true if the package should be uninstalled'),
        Parameter('wait_time_seconds', kind=int, default=0,
                  description='Seconds to wait before testing if the app is still alive')
    ]

    def setup(self, context):
        apk_file = context.resolver.get(File(self, self.apk_file))
        self.package = ApkInfo(apk_file).package  # pylint: disable=attribute-defined-outside-init
        self.logger.info('Installing {}'.format(apk_file))
        return self.device.install(apk_file)

    def run(self, context):
        self.logger.info('Starting {}'.format(self.package))
        self.device.execute('am start -W {}'.format(self.package))
        self.logger.info('Waiting {} seconds'.format(self.wait_time_seconds))
        self.device.sleep(self.wait_time_seconds)

    def update_result(self, context):
        app_is_running = bool([p for p in self.device.ps() if p.name == self.package])
        context.result.add_metric('ran_successfully', app_is_running)

    def teardown(self, context):
        if self.uninstall_required:
            self.logger.info('Uninstalling {}'.format(self.package))
            self.device.execute('pm uninstall {}'.format(self.package))
Whew, it's been a long time!! I don't really have an excuse for not posting, other than we haven't been doing much and I've been staying inside lots because it's so darn hot outside! We have managed to make it to Disney a couple of times over the past few months now that we have our annual passes and we got Universal passes last week! Here are some moments from my phone from the past few months!
import ugfx
import badge
import wifi
import network
from time import sleep
import usocket as socket

state_map = {
    'up': 0,
    'down': 1,
    'left': 2,
    'right': 3,
    'a': 4,
    'b': 5,
    'start': 8,
    'select': 9,
}

states = [0 for _ in range(14)]


def handle_key(id, pressed):
    states[id] = pressed
    connection.send_key_states(states)


def handle_up(pressed):
    handle_key(state_map['up'], int(pressed))


def handle_down(pressed):
    handle_key(state_map['down'], int(pressed))


def handle_left(pressed):
    handle_key(state_map['left'], int(pressed))


def handle_right(pressed):
    handle_key(state_map['right'], int(pressed))


def handle_a(pressed):
    handle_key(state_map['a'], int(pressed))


def handle_b(pressed):
    handle_key(state_map['b'], int(pressed))


def handle_start(pressed):
    handle_key(state_map['start'], int(pressed))


def handle_select(pressed):
    handle_key(state_map['select'], int(pressed))


def connect_to_wifi(ssid='pymlbadge', password='pymlbadge'):
    show_message("Waiting for wifi...")
    wlan = network.WLAN(network.STA_IF)
    if not wlan.active() or not wlan.isconnected():
        wlan.active(True)
        print('connecting to:', ssid)
        wlan.connect(ssid, password)
        while not wlan.isconnected():
            sleep(0.1)
    print('network config:', wlan.ifconfig())
    show_message("Connected")


def init_badge():
    badge.init()
    ugfx.init()
    wifi.init()
    connect_to_wifi()


def show_message(message):
    ugfx.clear(ugfx.WHITE)
    ugfx.string(10, 10, message, "Roboto_Regular12", 0)
    ugfx.flush()


class Connection:
    def __init__(self, listen_port, control_addr, control_port):
        self.uid = None
        self.listen_port = listen_port
        self.listen_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, 0)
        self.listen_sock.setblocking(False)
        addr = socket.getaddrinfo('0.0.0.0', listen_port)
        self.listen_sock.bind(addr[0][-1])
        self.control_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, 0)
        self.control_dest = []
        while len(self.control_dest) == 0:
            self.control_dest = socket.getaddrinfo(control_addr, control_port)
        self.control_dest = self.control_dest[0][-1]
        print("registering")
        self.register()

    def ready(self):
        return self.uid is not None

    def register(self):
        command = '/controller/new/{port}'.format(port=self.listen_port)
        try:
            self.control_sock.sendto(command.encode('utf-8'), self.control_dest)
        except Exception as ex:
            print("failed to register controller: {}".format(ex))

    def handle_read(self, data):
        data = data.decode('utf-8')
        if '/' not in data:
            # bad, malicious data!!
            return
        command, data = data.rsplit('/', 1)
        if command.startswith('/uid'):
            self.handle_uid(data)
        elif command.startswith('/rumble'):
            pass  # self.handle_rumble(data)
        elif command.startswith('/message'):
            pass  # self.handle_message(data)
        elif command.startswith('/download'):
            pass  # self.handle_download(data)
        elif command.startswith('/play'):
            pass  # self.handle_play(data)

    def handle_uid(self, data):
        self.uid = data
        print("Got UID {}".format(data))
        self.init_inputs()

    def start_listening(self):
        self.listening = True
        self._listener_loop()

    def stop_listening(self):
        self.listening = False

    def _listener_loop(self):
        while self.listening:
            try:
                data, addr = self.listen_sock.recvfrom(1024)
                self.handle_read(data)
            except OSError:
                pass  # non-blocking socket: nothing to read yet
            sleep(0.01)

    def init_inputs(self):
        print("initializing input callbacks")
        ugfx.input_init()
        ugfx.input_attach(ugfx.JOY_UP, handle_up)
        ugfx.input_attach(ugfx.JOY_DOWN, handle_down)
        ugfx.input_attach(ugfx.JOY_LEFT, handle_left)
        ugfx.input_attach(ugfx.JOY_RIGHT, handle_right)
        # attach each button to its own handler (the original bound them all to handle_up)
        ugfx.input_attach(ugfx.BTN_A, handle_a)
        ugfx.input_attach(ugfx.BTN_B, handle_b)
        ugfx.input_attach(ugfx.BTN_SELECT, handle_select)
        ugfx.input_attach(ugfx.BTN_START, handle_start)

    def ping(self):
        command = '/controller/{uid}/ping/{port}'.format(
            uid=self.uid, port=self.listen_port)
        self.control_sock.sendto(command.encode('utf-8'), self.control_dest)

    def send_key_states(self, states):
        command = '/controller/{uid}/states/{states}'.format(
            uid=self.uid, states=''.join(map(str, states)))
        self.listen_sock.sendto(command.encode('utf-8'), self.control_dest)


init_badge()
destination = 'control.ilexlux.xyz'
show_message("Connecting to {}".format(destination))
connection = Connection(1338, destination, 1338)
connection.start_listening()
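The wire format that `handle_read` parses can be sketched standalone: each UDP datagram is a path-style command whose last `/`-separated chunk is the payload, and `rsplit('/', 1)` keeps any slashes inside the command prefix intact. The `parse_command` helper name below is mine, not from the badge code:

```python
def parse_command(datagram):
    """Split a '/command/.../payload' datagram into (command, payload)."""
    text = datagram.decode('utf-8')
    if '/' not in text:
        return None, None  # malformed datagram; caller ignores it
    command, payload = text.rsplit('/', 1)
    return command, payload

print(parse_command(b'/uid/42'))        # ('/uid', '42')
print(parse_command(b'/rumble/1/250'))  # ('/rumble/1', '250')
```

Splitting from the right means a multi-segment command like `/rumble/1/250` still yields a usable prefix for the `startswith` dispatch in `handle_read`.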
In this axis, skills in extraction, biotechnology, and microwave-assisted synthesis chemistry are applied to different biomasses (micro/macroalgae and marine bacteria), biological polymers, and secondary metabolites. In this way, different families of natural molecules or synthetic analogs are obtained (glycomolecules, heterocyclic compounds, peptides, pigments, polyphenols) for anti-cancer, cardio-metabolic, anti-microbial, and dermo-cosmetic applications. The team's varied skills enable us to evaluate these molecules for a range of biological activities: anti-proliferative, cytotoxic, regulation of tumor-progression processes (invasion, migration, angiogenesis), anti-microbial, anti-oxidant, anti-inflammatory, etc. (on cell lines or molecular targets). These projects are also accompanied by the development of technological methods for the purification and characterization (mass spectrometry-chromatography) of bioactive compounds. Design and synthesis of new pharmacological inhibitors of key kinases in carcinogenesis. Original synthesis of new indigoid analogs (diazaindoles, 3-substituted indolin-2-ones). Valérie Thiery, Lisianne Domon, Jean-René Cherouvrier. Screening and identification of probiotic strains able to produce growth inhibitors. Sophie Sablé, Romain Chevrot, Sandrine Didelot, Sabrina Saltaji (PhD student).
import os
import shutil
import logging


class Writer(object):
    def __init__(self, backupBasePath, sourceBackupBasePath, sourceSourceBasePath, id):
        self.backupBasePath = backupBasePath
        self.sourceBackupBasePath = sourceBackupBasePath
        self.sourceSourceBasePath = sourceSourceBasePath
        self.id = id

    def initialize(self):
        logging.info("writer initialize")

    def destroy(self):
        self.commit()

    def commit(self):
        pass

    def getFilePathInSource(self, npath):
        return "%s%s/%s" % (self.backupBasePath, self.id,
                            npath.replace(self.sourceSourceBasePath, '', 1))

    def getFilePathInBackup(self, npath):
        return self.sourceBackupBasePath + npath.replace(self.sourceSourceBasePath, '', 1)

    def getDestinationFilePathToContent(self, p):
        return p

    def updateFile(self, opath, npath):
        dst = self.getFilePathInSource(npath.path)
        src = self.getFilePathInBackup(npath.path)
        self.mkdir(dst)
        shutil.copyfile(src, dst)

    def deleteFile(self, opath):
        dst = self.getFilePathInSource(opath.path)
        src = self.getFilePathInBackup(opath.path)
        self.mkdir(dst)
        shutil.copyfile(src, dst)

    def mkdir(self, fpath):
        dname = os.path.dirname(fpath) + '/'
        if not os.path.isdir(dname):
            os.makedirs(dname)
The contoured electric treatment table provides contoured upholstery for bodywork, making it ideal for all styles of massage and many treatments. This adjustable treatment table addresses a wide range of manual-handling requirements, to the benefit of both patient and practitioner. 1. The adjustable treatment table comes standard with a manual operation system for the Hi-Lo function. 2. Height adjustable from 48cm to 93.5cm via the manual operation system. 3. The four frame feet are rounded to avoid injury. 4. Upgraded materials and improved design for some frame parts. 5. Adjustable head section featuring double gas struts, angle adjustable from +30° to -75°. 6. Able to lift up to a 225kg working weight from its lowest height. 7. Retractable 75mm castors for excellent mobility and safety. 8. New design for the table's triangular parts. 9. Fire/stain/mildew/oil/water/wear-resistant medical PVC available in 9 colors. 10. Certificates: CE, FDA, ISO9001-2008, ISO13485-2007. 2. Extra-wide upholstery up to 70cm or 62cm. 3. Extra 2cm of foam thickness for the upholstery.
from Main.models.categories import Category, Product_category
from Main.models.products import Product
from Main.utilities.logs import logError
from django.utils import timezone
import datetime


def getCategoryFromProduct(product):
    category = Product_category.objects.filter(product_id=product["id"])[0].category.get_as_dict()
    category2 = None
    category1 = None
    category0 = None
    try:
        if category["level_depth"] == 2:
            category2 = category
            category1 = Category.objects.get(id=category["parent_id"]).get_as_dict()
            # walk one more level up: category0 is the parent of category1
            category0 = Category.objects.get(id=category1["parent_id"]).get_as_dict()
        elif category["level_depth"] == 1:
            category1 = category
            category0 = Category.objects.get(id=category1["parent_id"]).get_as_dict()
        else:
            category0 = category
    except Exception:
        logError("category not found on product page id:" + str(product["id"]))
        return {"category0": category}
    return {
        "category2": category2,
        "category1": category1,
        "category0": category0
    }


def getDailyDeals():
    # TODO: modify to get the actual deals
    products = Product.objects.filter(daily_deal=True)[:5]
    tmp = []
    end = (timezone.localtime(timezone.now()) + datetime.timedelta(days=1)).strftime('%m/%d/%Y 00:00:00')
    for prod in products:
        tmp.append(prod.get_as_big_dict())
    return {
        "products": tmp,
        "endtime": end
    }


def getLayoutCategories():
    categories = Category.objects.filter(active=True, level_depth=0).order_by('position')
    final_categories = []
    for category in categories:
        final_categories.append(category.get_as_dict())
    return final_categories


def getMainDict():
    return {
        "layoutCategories": getLayoutCategories()
    }
Our visual communication is mainly based on typographical experimentation; in SKE! CREATIVE's Division, according to LOOP, typography is the vital element of communication: suggestions, emotions and intentions can be communicated even without the use of images or icons of various kinds; a word, written in a specific typeface, has a certain shape, and combined with color and layout it can communicate any message effectively. This obviously does not rule out the use of images, which must be included in the graphic project while managing a compositional balance between image and word. As for the compositional cage and breaking it down, the Russian Futurists and Constructivists, and later the Swiss School and the Bauhaus, taught the world how wise use of the cage can lead to incredible works. Today, in the image society we live in, amid the immensity of typefaces created and made available for free, why go into type design? 5. With typography, an identity system is created that solves any possible problem while saving space on screen and in print. These five points broadly represent the Creative Philosophy of LOOP, pursuing typographical and artistic research/experimentation in communication.
"""Skeleton for 'pytest'. Project: pytest 2.6.4 <http://pytest.org/latest/> Skeleton by: Bruno Oliveira <nicoddemus@gmail.com> Exposing everything that can be extracted from `pytest_namespace` hook in standard pytest modules, using original docstrings. """ # _pytest.genscript def freeze_includes(): """ Returns a list of module names used by py.test that should be included by cx_freeze. """ # _pytest.main class collect: class Item: """ a basic test invocation item. Note that for a single function there might be multiple test invocation items. """ class Collector: """ Collector instances create children through collect() and thus iteratively build a tree. """ class File: """ base class for collecting tests from a file. """ class Session: """ """ # _pytest.python class Module: """ Collector for test classes and functions. """ class Class: """ Collector for test methods. """ class Instance: """ """ class Function: """ a Function Item is responsible for setting up and executing a Python test function. """ class Generator: """ """ @staticmethod def _fillfuncargs(function): """ fill missing funcargs for a test function. """ # _pytest.mark class mark: def __getattr__(self, item): """ This class may have any attribute, so this method should exist """ pass @staticmethod def skipif(condition, reason=None): """skip the given test function if eval(condition) results in a True value. Optionally specify a reason for better reporting. Evaluation happens within the module global context. Example: ``skipif('sys.platform == "win32"')`` skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html """ @staticmethod def xfail(condition=None, reason=None, raises=None, run=True, strict=False): """mark the the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. 
See http://pytest.org/latest/skipping.html """ @staticmethod def parametrize(argnames, argvalues): """call a test function multiple times passing in different arguments in turn. :type argnames: str | list[str] :param argvalues: generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2. see http://pytest.org/latest/parametrize.html for more info and examples. """ @staticmethod def usefixtures(*fixturenames): """mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures """ @staticmethod def tryfirst(f): """mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible. """ @staticmethod def trylast(f): """mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible. """ @staticmethod def hookwrapper(f): """A hook wrapper is a generator function which yields exactly once. When pytest invokes hooks it first executes hook wrappers and passes the same arguments as to the regular hooks. """ # _pytest.pdb def set_trace(): """ invoke PDB set_trace debugging, dropping any IO capturing. """ # _pytest.python def raises(ExpectedException, *args, **kwargs): """ assert that a code block/function call raises @ExpectedException and raise a failure exception otherwise. :type ExpectedException: T This helper produces a ``py.code.ExceptionInfo()`` object. If using Python 2.5 or above, you may use this function as a context manager:: >>> with raises(ZeroDivisionError): ... 1/0 Or you can specify a callable by passing a to-be-called lambda:: >>> raises(ZeroDivisionError, lambda: 1/0) <ExceptionInfo ...> or you can specify an arbitrary callable with arguments:: >>> def f(x): return 1/x ... 
        >>> raises(ZeroDivisionError, f, 0)
        <ExceptionInfo ...>
        >>> raises(ZeroDivisionError, f, x=0)
        <ExceptionInfo ...>

        A third possibility is to use a string to be executed::

        >>> raises(ZeroDivisionError, "f(0)")
        <ExceptionInfo ...>

        Performance note:
        -----------------

        Similar to caught exception objects in Python, explicitly clearing
        local references to returned ``py.code.ExceptionInfo`` objects can
        help the Python interpreter speed up its garbage collection.

        Clearing those references breaks a reference cycle
        (``ExceptionInfo`` --> caught exception --> frame stack raising
        the exception --> current frame stack --> local variables -->
        ``ExceptionInfo``) which makes Python keep all objects referenced
        from that cycle (including all local variables in the current
        frame) alive until the next cyclic garbage collection run. See the
        official Python ``try`` statement documentation for more detailed
        information.
        """


def fixture(scope="function", params=None, autouse=False, ids=None):
    """ (return a) decorator to mark a fixture factory function.

    This decorator can be used (with or without parameters) to define
    a fixture function. The name of the fixture function can later be
    referenced to cause its invocation ahead of running tests: test
    modules or classes can use the pytest.mark.usefixtures(fixturename)
    marker. Test functions can directly use fixture names as input
    arguments in which case the fixture instance returned from the
    fixture function will be injected.

    :arg scope: the scope for which this fixture is shared, one of
        "function" (default), "class", "module", "session".

    :arg params: an optional list of parameters which will cause multiple
        invocations of the fixture function and all of the tests using it.

    :arg autouse: if True, the fixture func is activated for all tests
        that can see it. If False (the default) then an explicit reference
        is needed to activate the fixture.

    :arg ids: list of string ids each corresponding to the params so that
        they are part of the test id.
        If no ids are provided they will be generated automatically from
        the params.
    """


def yield_fixture(scope="function", params=None, autouse=False, ids=None):
    """ (return a) decorator to mark a yield-fixture factory function
    (EXPERIMENTAL).

    This takes the same arguments as :py:func:`pytest.fixture` but
    expects a fixture function to use a ``yield`` instead of a ``return``
    statement to provide a fixture.

    See http://pytest.org/en/latest/yieldfixture.html for more info.
    """


# _pytest.recwarn
def deprecated_call(func, *args, **kwargs):
    """ assert that calling ``func(*args, **kwargs)`` triggers a
    DeprecationWarning.
    """


# _pytest.runner
def exit(msg):
    """ exit testing process as if KeyboardInterrupt was triggered. """
exit.Exception = Exception


def skip(msg=""):
    """ skip an executing test with the given message. Note: it's usually
    better to use the pytest.mark.skipif marker to declare a test to be
    skipped under certain conditions like mismatching platforms or
    dependencies. See the pytest_skipping plugin for details.
    """
skip.Exception = Exception


def fail(msg="", pytrace=True):
    """ explicitly fail a currently-executing test with the given message.

    :arg pytrace: if false the msg represents the full failure information
        and no python traceback will be reported.
    """
fail.Exception = Exception


def importorskip(modname, minversion=None):
    """ return imported module if it has at least "minversion" as its
    __version__ attribute. If no minversion is specified then a skip is
    only triggered if the module cannot be imported. Note that version
    comparison only works with simple version strings like "1.2.3" but
    not "1.2.3.dev1" or others.
    """


# _pytest.skipping
def xfail(reason=""):
    """ xfail an executing test or setup functions with the given reason. """
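The ``raises`` helper documented above is easiest to understand as a context manager that swallows the expected exception and re-exposes it on the yielded info object. Below is a minimal, hypothetical sketch of that behaviour; the names ``raises_sketch`` and ``ExpectedExceptionInfo`` are invented for illustration, and this is not pytest's actual implementation:

```python
# Minimal sketch of a raises()-style context manager (NOT pytest's code).
class ExpectedExceptionInfo:
    """Holds the caught exception after the with-block exits."""
    def __init__(self):
        self.value = None


def raises_sketch(expected):
    class _Ctx:
        def __init__(self):
            self.info = ExpectedExceptionInfo()

        def __enter__(self):
            # The object bound by "as" inside the with-statement.
            return self.info

        def __exit__(self, exc_type, exc, tb):
            if exc_type is None:
                # Block finished without raising: that is a test failure.
                raise AssertionError("DID NOT RAISE %r" % (expected,))
            if issubclass(exc_type, expected):
                self.info.value = exc
                return True  # swallow the expected exception
            return False  # let unexpected exceptions propagate
    return _Ctx()
```

Usage mirrors the docstring above: ``with raises_sketch(ZeroDivisionError) as info: 1/0`` leaves the caught exception on ``info.value``.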
When Carlo and I first started to think about the Moto Photo project, the first thing that came to mind was out of the classic Italian movies…. in post-war Italy, scooters represented the new freedom. Then I began to think of my early trips to Italy, where I was excited by the buzz of scooters all around me in the cities and countryside. Last year in Sicily the first Moto Photo project took off and was a huge success. The students, Carlo, and I had a wonderful experience on the open road making photographs. Carlo selected a wonderful itinerary that kept our minds and hearts filled with excitement. Besides the beautiful light and landscape, we also had wonderful meals together! It reached beyond our expectations and now we are ready to do it again for a second time. The road trip is a classic motivating theme in literature, photography and the cinema. Think Paul Strand, Robert Frank, or Jack Kerouac… All were inspired through travel adventures, exploring not only themselves but also a culture while on the road. What better way to investigate yourself, your photography and the wonderful country of Italy than on a scooter? One can combine all of these passions into one week of exploring Sicily. The scooter is classic Italian as well as fun and easy to ride for people of all ages. You must, however, know how to ride a scooter in order to participate in this workshop. But please note – Moto Photo is not a race! We will drive at an easy pace, as this will be a trip of inquiry and discovery. Our peaceful photographic journey takes us to several locations in Sicily, a place both historical and beautiful. We will tour on two wheels, not only creating classic portraiture but also broadening our photographic storytelling abilities and amplifying a visual poetry that reflects our individual points of view. There will be no shortage of old-world charm, wonderful faces, still-life imagery, and magical landscapes where it appears that time stood still.
This lecture-light workshop will focus on studying design, gesture, movement and light in real time while touring the countryside. We will also have time together for inspired discussions, and our critiques will take place individually as well as in a group setting. This workshop will focus on creativity inspired from within, merging one’s personal point of view with the gamut of discoveries and opportunities we find along our road trip. Cruising on scooters will be as easy and comfortable as riding a bicycle. The breeze in our faces will refresh and open our minds. Moto Photo will be a journey of personal growth through examination within, while learning to see individually the environments and people we discover along the way. No matter the tool you choose for this adventure, whether a traditional camera or an iPhone, the images we create together will be life shaping and captivating. Paul has photographed such celebrities as AC/DC, Billy Corgan, John Goodman, Jennifer Hudson, Katy Perry, Ministry, Willie Nelson, Trent Reznor, Luciano Pavarotti and Oprah Winfrey, to name a few. His work has appeared worldwide in galleries and museums, and has been published in such publications as Audubon, Fast Company, Life, Men’s Health, National Geographic, People, Rolling Stone, Time, and Wired, among others. Paul’s book Luna, Bella Luna chronicles the people of Vesale, Italy, and he has also collaborated on several books with Chef Rick Bayless as well as the late chef Charlie Trotter. In addition to photography, he has directed music videos and television commercials. Paul’s interests outside of image making include vintage motorcycles, all things Italian, music and expressions of creativity. We will meet with the group on the 1st, have a first talk about the plans for the coming week, get the scooters, do some practice and have a first dinner together. On Sunday we will have a first portfolio review of participants’ work, and then go for a ride to the coast.
On Monday, we will start our adventure on two wheels. We will stay in Siracusa for the next two days, taking day trips from there. On Wednesday afternoon we will continue our trip, moving further south, and stay for 4 more days in that area, exploring the baroque towns and the small villages nearby. We will be back in Siracusa on Friday, in order to prepare the slide show for Friday night and return the scooters. The day after, we will travel back to Catania to get our flights back home – unless you have planned to extend your stay!!! We plan to travel no more than 3 hours a day, in order to have plenty of time to stop, shoot and discover the villages along the road and the magic of places such as Siracusa, Marzamemi, Noto, Modica and Ragusa Ibla. We will reserve rooms in comfortable hotels along the road, and will have dinners together in typical local restaurants and trattorie. A well-equipped car/van will follow us, carrying your bags, tripods and camera bags, so you will be able to travel very light on your scooter and enjoy the ride. In order to rent a scooter, you need to have a valid driving licence issued in your country, also valid for EEC countries. Please contact us if you need assistance with this. Considering the nature of the workshop, the mantra will be: travel light! We will have a backup car to transport your personal bags, camera bags, tripods etc., but consider that we will change at least three hotels, so it will be much easier if everyone has a small/medium-size bag. We will stay in Siracusa at the beginning and end of our trip, so some luggage can be left at the hotel, but still…. Bring the camera you plan to use for the workshop, plus a back-up body (or camera). And of course your laptop, in order to be able to edit your photographs for the critique sessions we will have along the road. A complete list of accessories and clothing will be given before the workshop.
When Carlo asked me for a picture to illustrate this terrific new workshop in Sicily, I decided to create a special one! I thought of a young woman riding in the countryside, enjoying the moment, the wind and the experience of the land. For me Italy brings to mind adventure and exploration. But of course, together in the program we will make images from your mind and heart! We will meet with the group on the 5th, have a first talk about the plans for the coming week, get the scooters, do some practice and have a first dinner together.
"""create invoice_details table Revision ID: 3eecce93ff43 Revises: 792b438b663 Create Date: 2016-03-29 22:16:01.022645 """ # revision identifiers, used by Alembic. revision = '3eecce93ff43' down_revision = '792b438b663' from alembic import op import sqlalchemy as sa def upgrade(): op.create_table('invoice_details', sa.Column('id', sa.Integer(), nullable=False), sa.Column('invoice_date', sa.DateTime(), nullable=False), sa.Column('invoice_period_from', sa.DateTime(), nullable=False), sa.Column('invoice_period_to', sa.DateTime(), nullable=False), sa.Column('tenant_id', sa.String(length=255), nullable=False), sa.Column('tenant_name', sa.String(length=255), nullable=False), sa.Column('invoice_id', sa.String(length=255), nullable=False), sa.Column('invoice_data', sa.Text(), nullable=False), sa.Column('total_cost', sa.Numeric(precision=13,scale=2), nullable=True), sa.Column('paid_cost', sa.Numeric(precision=13,scale=2), nullable=True), sa.Column('balance_cost', sa.Numeric(precision=13,scale=2), nullable=True), sa.Column('payment_status', sa.Integer(), nullable=False), sa.PrimaryKeyConstraint('id'), mysql_charset='utf8', mysql_engine='InnoDB') def downgrade(): op.drop_table('invoice_details')
We’ve all seen so many lists out there of gifts for children with disabilities. They’re great lists, much needed and appreciated, because sometimes kids with special needs can be tough to shop for. Still, I’ve found that there’s another group of people who are just as hard to shop for: their parents! We’ve compiled a list of ten great gifts for parents of kids with special needs, but we think almost any parent you know would happily accept these things! Give the gift of a subscription to a streaming music service like Spotify or Rdio so they can discover and listen to their own music! Or an iTunes gift card so they can buy their own music! 2. A new pair of headphones. What parent doesn’t crave a nice massage? Most don’t even care if it’s a Swedish, Thai, or deep-tissue massage, or even just a foot massage! Getting an hour to spend in a spa sounds heavenly! There are always great spa deals on Groupon! Offer to take the kids for the afternoon or an evening so the parents can go on a date! 5. Make the family a meal. Ok, this might end up being two meals, since it’s pretty rare that all members of a family will eat the same thing! Ordering take-out or preparing a home-cooked meal gives parents the night off from having to figure it all out themselves, and that is definitely a gift! 6. ______ of the month club. I’m not talking about once a week or even once a month. Even a ONE-TIME cleaning service or carpet cleaning is an awesome gift for parents of kids with special needs! Seriously, many of us run on this stuff! Someone got me this as a wedding gift, and it was definitely one of the best gifts ever! I’d be remiss not to mention this one. This holiday season, give a mom or dad in your life a full year of Birdhouse Premium – something that will reduce their stress & drastically improve their quality of life. Birdhouse for Autism allows parents to keep track of their child’s daily activities: meds, moods, diet, therapy, sleep, poop, etc. 
all in one place, in an app on their phone! Knowing all the information is safe, secure, and can be easily referenced gives parents more time to enjoy the important things, like their exceptional kiddos. And perhaps their new coffee subscription! This one’s free! Simply watch the kids while the parents do something totally unheard of in their day-to-day lives…. take a nap! Dani Gillman is Cofounder and Head of Marketing at Birdhouse, a Detroit-based startup empowering parents raising children with special needs to learn more about their children through a behavior journaling app for iPhone, Android and the web. She’s also mom to a 10 year old daughter (who happens to have Autism) and a 1 year old son (who has yet to appreciate the value of naps).
from six import iteritems
import struct

from .region import Region


class KeyspacesRegion(Region):
    """A region of memory which represents data formed from a list of
    :py:class:`~rig.bitfield.BitField` instances representing SpiNNaker
    routing keys. Each "row" represents a keyspace, and each "column" is
    formed by getting the result of a function applied to the keyspace.

    Each field will be one word long, and all keyspaces are expected to be
    32 bits long.
    """
    def __init__(self, signals_and_arguments, fields=list(),
                 partitioned_by_atom=False, prepend_num_keyspaces=False):
        """Create a new region representing keyspace information.

        Parameters
        ----------
        signals_and_arguments : [(Signal, **kwargs), ...]
            A list of tuples of Signals (which contain/refer to keyspaces)
            and arguments used to construct the keyspace from the signal.
            The keyspaces will (eventually) be generated by calling
            ``signal.keyspace(**kwargs)``.
        fields : iterable
            An iterable of callables which will be called on each key and
            must return an appropriate sized bytestring representing the
            data to write to memory. The appropriate size is the number of
            bytes required to represent a full key or mask (e.g., 4 bytes
            for 32-bit keyspaces).
        partitioned_by_atom : bool
            If True then one set of fields will be written out per atom, if
            False then fields for all keyspaces are written out regardless
            of the vertex slice.
        prepend_num_keyspaces : bool
            Prepend a word containing the number of keyspaces to the region
            data when it is written out.
        """
        # Save the keyspaces, fields and partitioned status
        self.signals_and_arguments = list(signals_and_arguments)
        self.fields = list(fields)
        self.partitioned = partitioned_by_atom
        self.prepend_num_keyspaces = prepend_num_keyspaces

        self.bytes_per_field = 4

    def sizeof(self, vertex_slice):
        """Get the size of a slice of this region in bytes.
See :py:meth:`.region.Region.sizeof` """ # Get the size from representing the fields if not self.partitioned: n_keys = len(self.signals_and_arguments) else: assert vertex_slice.stop < len(self.signals_and_arguments) + 1 n_keys = vertex_slice.stop - vertex_slice.start pp_size = 0 if not self.prepend_num_keyspaces else 4 return self.bytes_per_field * n_keys * len(self.fields) + pp_size def write_subregion_to_file(self, fp, vertex_slice=None, **field_args): """Write the data contained in a portion of this region out to file. """ data = b'' # Get a slice onto the keys if self.partitioned: assert vertex_slice.stop < len(self.signals_and_arguments) + 1 key_slice = vertex_slice if self.partitioned else slice(None) # Write the prepends if self.prepend_num_keyspaces: nks = len(self.signals_and_arguments[key_slice]) data += struct.pack("<I", nks) # For each key fill in each field for signal, kwargs in self.signals_and_arguments[key_slice]: ks = signal.keyspace(**kwargs) # Construct the keyspace for field in self.fields: data += struct.pack("<I", field(ks, **field_args)) # Write out fp.write(data) # NOTE: This closure intentionally tries to look like a class. # TODO: Neaten this docstring. def KeyField(maps={}, field=None, tag=None): """Create new field for a :py:class:`~KeyspacesRegion` that will fill in specified fields of the key and will then write out a key. Parameters ---------- maps : dict A mapping from keyword-argument of the field to the field of the key that this value should be inserted into. field : string or None The field to get the key or None for all fields. For example: ks = Keyspace() ks.add_field(i) # ... kf = KeyField(maps={'subvertex_index': 'i'}) k = Keyspace() kf(k, subvertex_index=11) Will return the key with the 'i' key set to 11. 
""" key_field = field def key_getter(keyspace, **kwargs): # Build a set of fields to fill in fills = {} for (kwarg, field) in iteritems(maps): fills[field] = kwargs[kwarg] # Build the key with these fills made key = keyspace(**fills) return key.get_value(field=key_field, tag=tag) return key_getter # NOTE: This closure intentionally tries to look like a class. def MaskField(**kwargs): """Create a new field for a :py:class:`~.KeyspacesRegion` that will write out a mask value from a keyspace. Parameters ---------- field : string The name of the keyspace field to store the mask for. tag : string The name of the keyspace tag to store the mask for. Raises ------ TypeError If both or neither field and tag are specified. Returns ------- function A function which can be used in the `fields` argument to :py:class:`~.KeyspacesRegion` that will include a specified mask in the region data. """ # Process the arguments field = kwargs.get("field") tag = kwargs.get("tag") # Create the field method if field is not None and tag is None: def mask_getter(keyspace, **kwargs): return keyspace.get_mask(field=field) return mask_getter elif tag is not None and field is None: def mask_getter(keyspace, **kwargs): return keyspace.get_mask(tag=tag) return mask_getter else: raise TypeError("MaskField expects 1 argument, " "either 'field' or 'tag'.")
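To make the byte layout concrete, here is a small self-contained sketch (not part of this module; ``pack_keyspaces`` and the dict-based "keyspaces" are invented for illustration) of how each keyspace contributes one little-endian 32-bit word per field, with an optional count word prepended, mirroring the loop in `write_subregion_to_file`:

```python
import struct

# Hypothetical sketch of the KeyspacesRegion packing scheme: one
# little-endian 32-bit word per (keyspace, field) pair, with an optional
# leading word holding the number of keyspaces.
def pack_keyspaces(keyspaces, fields, prepend_count=False):
    data = b''
    if prepend_count:
        data += struct.pack("<I", len(keyspaces))
    for ks in keyspaces:
        for field in fields:
            # Each field callable returns an integer to pack as a word.
            data += struct.pack("<I", field(ks))
    return data


# Toy "field" callables over dicts standing in for keyspace objects.
key_field = lambda ks: ks["key"]
mask_field = lambda ks: ks["mask"]
```

With two keyspaces, two fields and the count word, the packed blob is 4 + 2 * 2 * 4 = 20 bytes, matching the arithmetic in `sizeof` above.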
Making You Healthy - Is your fish oil safe? In the last e-zine, I talked about the excellent health benefits of omega-3 fatty acids. Increasing your omega-3 intake not only improves cholesterol levels, insulin sensitivity and mental function, but also decreases inflammation (the underlying issue in the majority of health problems). Click here to remind yourself of why it's so important. BUT you need to be so careful which omega-3 supplement you choose. First of all, you want to supplement with JUST omega-3. Not omega-3 & 6, or omega-3, 6 & 9. Like I said in the last e-zine, we already get WAAAAY too much of the other 2. The health benefits happen when you balance out the amounts, so if your supplement also has omega-6 & 9, you're increasing that imbalance. Second, flaxseed & flaxseed oil are not a sufficient omega-3 supplement. While flax IS a source of omega-3's, there's no way of knowing how much your body is actually getting. Flax contains alpha-linolenic acid, which needs to be converted by your body into the usable forms of omega-3's (DHA & EPA). Unfortunately, that conversion is a very inefficient process and can be interfered with by many other foods and medications. Fish oil has DHA & EPA in it, ready to be used by the body. So taking a fish oil supplement is the best way to make sure you're getting omega-3's in the amounts you need for good health. Lead, mercury and PCBs are concentrated in the fat of fish (because there's so much of it in our water), so when you eat fish or it's made into fish oil capsules, you're getting all of that with it. I'm sure I don't need to tell you about the many health problems associated with these toxins. Most fish oil supplements don't even address this issue, but now that more people are aware of this problem, some have started to use misleading labels. Even if it says that it's "Guaranteed Toxin Free", it just means that it's below acceptable safe levels. Newsflash: there is no acceptable safe level for these toxins. 
Even small amounts can cause future health problems. There are regulations that state that the toxin levels have to be under a certain amount, but the actual amounts in the product can vary significantly between batches. It's also up to the manufacturer to meet these quality standards- nobody actually checks them to be sure. Do I need to remind you of the woefully inadequate quality procedures of most supplement manufacturers? Click here to see the back issue that deals with supplement quality. In March 2010, a lawsuit was filed in California against manufacturers for not adhering to these standards. Some had up to 10 times the "acceptable limit" of these toxins! The most unfortunate thing is that people are taking these supplements, thinking that they're doing something good, but meanwhile they're loading their bodies with toxins. This is an excellent example of why you need to find supplements manufactured by a company with integrity and excellent quality control procedures. A poor choice may not only be completely ineffective, it may actually be harmful! They distill every batch of Biomega 3 times to remove ALL toxins. They test EVERY batch to ensure that it truly is toxin-free. They are regularly tested by independent laboratories to guarantee that they're meeting these claims. I have found that their integrity and commitment to quality in all products is unmatched by other manufacturers, so I trust their products wholeheartedly. Don't risk your health by taking fish oil/omega-3 supplements made by manufacturers that choose profit over quality & safety. Click here to order Biomega and start reaping all the health benefits of omega-3 supplementation. The CD was introduced 2 years before they were born. Isn't that crazy? Do you feel old yet? Pass this on to the other old fogies on your list.
# -*- coding: utf-8 -*- # ProjectEuler/src/python/problem023.py # # Non-abundant sums # ================= # Published on Friday, 2nd August 2002, 06:00 pm # # A perfect number is a number for which the sum of its proper divisors is # exactly equal to the number. For example, the sum of the proper divisors of # 28 would be 1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number. # A number n is called deficient if the sum of its proper divisors is less than # n and it is called abundant if this sum exceeds n. As 12 is the smallest # abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest number that can be # written as the sum of two abundant numbers is 24. By mathematical analysis, # it can be shown that all integers greater than 28123 can be written as the # sum of two abundant numbers. However, this upper limit cannot be reduced any # further by analysis even though it is known that the greatest number that # cannot be expressed as the sum of two abundant numbers is less than this # limit. Find the sum of all the positive integers which cannot be written as # the sum of two abundant numbers. import util MINIMUM_ABUNDANT_NUMBER = 12 def is_abundant(n): divisors = util.proper_divisors(n) return sum(divisors) > n def calculate_abundant_numbers(max_n): return [n for n in range(1, max_n) if is_abundant(n)] def non_abundant_sums(max_n): sums = [] abundant_numbers = calculate_abundant_numbers(max_n) abundant_set = set(abundant_numbers) for i in range(1, max_n+1): for a in abundant_numbers: difference = i - a if difference < MINIMUM_ABUNDANT_NUMBER: sums.append(i) break if difference in abundant_set: break else: sums.append(i) return sum(sums) def main(): total = non_abundant_sums(28123) print "The sum of all positive integers which cannot be written as the ", print "sum of 2 abundant numbers is %d." % (total,) if __name__ == "__main__": main()
Located in the heart of Kansas City, the University of Missouri-Kansas City (UMKC) is uniquely positioned to serve this community and address its health needs, while also addressing social and racial inequalities. The leadership of UMKC's schools of Medicine, Nursing, Pharmacy, and Dentistry has entered into a collaborative partnership entitled "Transformation of the Health Care Workforce to Combat Disparities." In 2011, at least 20% of graduates from the Health Science Schools were from underrepresented populations, and UMKC placed more than 1,500 health sciences students in high-disparities environments in urban Kansas City. The School of Medicine has incorporated cultural competency content in its curriculum since 1971, and began using the AAMC-endorsed Tool for Assessing Cultural Competence Training (TACCT) in August 2012. All four health science schools collaborated to develop the Data Dashboard Initiative. Through this initiative, which was funded by a local foundation, each school assembled diversity data and identified common metrics and methods. All four health science schools have developed programs that include diverse service-learning experiences, and the schools have received numerous national and regional community service awards. Students receive clinical training at several free health clinics in the urban core as well as at the Truman Medical Center, which primarily serves vulnerable populations.
"""
Functions for working with STIX 2 Data Markings.

These high level functions will operate on both object-level markings and
granular markings unless otherwise noted in each of the functions.

Note:
    These functions are also available as methods on SDOs, SROs, and
    Marking Definitions. The corresponding methods on those classes are
    identical to these functions except that the `obj` parameter is
    omitted.

.. autosummary::
   :toctree: markings

   granular_markings
   object_markings
   utils

|
"""

from stix2.markings import granular_markings, object_markings


def get_markings(obj, selectors=None, inherited=False, descendants=False,
                 marking_ref=True, lang=True):
    """
    Get all markings associated to the field(s) specified by selectors.

    Args:
        obj: An SDO or SRO object.
        selectors: string or list of selectors strings relative to the SDO
            or SRO in which the properties appear.
        inherited (bool): If True, include object level markings and
            granular markings inherited relative to the properties.
        descendants (bool): If True, include granular markings applied to
            any children relative to the properties.
        marking_ref (bool): If False, excludes markings that use
            ``marking_ref`` property.
        lang (bool): If False, excludes markings that use ``lang``
            property.

    Returns:
        list: Marking identifiers that matched the selectors expression.

    Note:
        If ``selectors`` is None, operation will be performed only on
        object level markings.

    """
    if selectors is None:
        return object_markings.get_markings(obj)

    results = granular_markings.get_markings(
        obj,
        selectors,
        inherited,
        descendants,
        marking_ref,
        lang,
    )

    if inherited:
        results.extend(object_markings.get_markings(obj))

    return list(set(results))


def set_markings(obj, marking, selectors=None, marking_ref=True, lang=True):
    """
    Remove all markings associated with the selectors and append a new
    granular marking. Refer to `clear_markings` and `add_markings` for
    details.

    Args:
        obj: An SDO or SRO object.
marking: identifier or list of marking identifiers that apply to the properties selected by `selectors`. selectors: string or list of selectors strings relative to the SDO or SRO in which the properties appear. marking_ref (bool): If False, markings that use the ``marking_ref`` property will not be removed. lang (bool): If False, markings that use the ``lang`` property will not be removed. Returns: A new version of the given SDO or SRO with specified markings removed and new ones added. Note: If ``selectors`` is None, operations will be performed on object level markings. Otherwise on granular markings. """ if selectors is None: return object_markings.set_markings(obj, marking) else: return granular_markings.set_markings(obj, marking, selectors, marking_ref, lang) def remove_markings(obj, marking, selectors=None): """ Remove a marking from this object. Args: obj: An SDO or SRO object. marking: identifier or list of marking identifiers that apply to the properties selected by `selectors`. selectors: string or list of selectors strings relative to the SDO or SRO in which the properties appear. Raises: InvalidSelectorError: If `selectors` fail validation. MarkingNotFoundError: If markings to remove are not found on the provided SDO or SRO. Returns: A new version of the given SDO or SRO with specified markings removed. Note: If ``selectors`` is None, operations will be performed on object level markings. Otherwise on granular markings. """ if selectors is None: return object_markings.remove_markings(obj, marking) else: return granular_markings.remove_markings(obj, marking, selectors) def add_markings(obj, marking, selectors=None): """ Append a marking to this object. Args: obj: An SDO or SRO object. marking: identifier or list of marking identifiers that apply to the properties selected by `selectors`. selectors: string or list of selectors strings relative to the SDO or SRO in which the properties appear. Raises: InvalidSelectorError: If `selectors` fail validation. 
Returns: A new version of the given SDO or SRO with specified markings added. Note: If ``selectors`` is None, operations will be performed on object level markings. Otherwise on granular markings. """ if selectors is None: return object_markings.add_markings(obj, marking) else: return granular_markings.add_markings(obj, marking, selectors) def clear_markings(obj, selectors=None, marking_ref=True, lang=True): """ Remove all markings associated with the selectors. Args: obj: An SDO or SRO object. selectors: string or list of selectors strings relative to the SDO or SRO in which the field(s) appear(s). marking_ref (bool): If False, markings that use the ``marking_ref`` property will not be removed. lang (bool): If False, markings that use the ``lang`` property will not be removed. Raises: InvalidSelectorError: If `selectors` fail validation. MarkingNotFoundError: If markings to remove are not found on the provided SDO or SRO. Returns: A new version of the given SDO or SRO with specified markings cleared. Note: If ``selectors`` is None, operations will be performed on object level markings. Otherwise on granular markings. """ if selectors is None: return object_markings.clear_markings(obj) else: return granular_markings.clear_markings(obj, selectors, marking_ref, lang) def is_marked(obj, marking=None, selectors=None, inherited=False, descendants=False): """ Check if field(s) is marked by any marking or by specific marking(s). Args: obj: An SDO or SRO object. marking: identifier or list of marking identifiers that apply to the properties selected by `selectors`. selectors: string or list of selectors strings relative to the SDO or SRO in which the field(s) appear(s). inherited (bool): If True, include object level markings and granular markings inherited to determine if the properties is/are marked. descendants (bool): If True, include granular markings applied to any children of the given selector to determine if the properties is/are marked. 
Returns: bool: True if ``selectors`` is found on internal SDO or SRO collection. False otherwise. Note: When a list of marking identifiers is provided, if ANY of the provided marking identifiers match, True is returned. If ``selectors`` is None, operation will be performed only on object level markings. """ if selectors is None: return object_markings.is_marked(obj, marking) result = granular_markings.is_marked( obj, marking, selectors, inherited, descendants, ) if inherited: granular_marks = granular_markings.get_markings(obj, selectors) object_marks = object_markings.get_markings(obj) if granular_marks: result = granular_markings.is_marked( obj, granular_marks, selectors, inherited, descendants, ) result = result or object_markings.is_marked(obj, object_marks) return result class _MarkingsMixin(object): pass # Note that all of these methods will return a new object because of immutability _MarkingsMixin.get_markings = get_markings _MarkingsMixin.set_markings = set_markings _MarkingsMixin.remove_markings = remove_markings _MarkingsMixin.add_markings = add_markings _MarkingsMixin.clear_markings = clear_markings _MarkingsMixin.is_marked = is_marked
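The dispatch convention used throughout this module — object-level markings when `selectors` is None, granular markings otherwise — can be illustrated with a toy, dict-based sketch. `toy_get_markings` and the dict layout below are illustrative stand-ins for this example only, not the stix2 API (real stix2 objects are immutable and these functions return new versions):

```python
def toy_get_markings(obj, selectors=None):
    """Toy illustration: return object-level markings when selectors is
    None, otherwise the granular markings covering the selectors."""
    if selectors is None:
        return list(obj.get("object_marking_refs", []))
    if isinstance(selectors, str):
        selectors = [selectors]
    results = []
    for gm in obj.get("granular_markings", []):
        # A granular marking applies if any requested selector matches.
        if any(s in gm["selectors"] for s in selectors):
            results.append(gm["marking_ref"])
    return results


obj = {
    "object_marking_refs": ["marking-definition--tlp-amber"],
    "granular_markings": [
        {"marking_ref": "marking-definition--tlp-red",
         "selectors": ["description"]},
    ],
}

object_level = toy_get_markings(obj)              # ['marking-definition--tlp-amber']
granular = toy_get_markings(obj, "description")   # ['marking-definition--tlp-red']
```

The same two-branch shape appears in every function above: a `selectors is None` test choosing between `object_markings` and `granular_markings`.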
Try out this fun educational baby jigsaw game with your little ones. Babies and toddlers will enjoy playing our baby game for a long time. Help the baby learn the animals, numbers, alphabet, fruits, transportation, professions, emotions, colors and musical instruments.
– It’s specifically designed as an educational tool interface for babies and up.
・Suitable for right brain exercise; the graphics activate the right brain. Improve the brain's observation skills, cognitive ability, concentration, memory, creativity and imagination.
・Improve the response speed and the coordination of the brain and the body. Exercise visual ability to observe dynamic objects.
・Simple and convenient, easy to operate. Suitable for children, the elderly, and their families and friends to play together.
-Ads are everywhere in this game. For a 2 year old who doesn’t understand how to exit out of the ad, it is frustrating. Ya, you have the ad-free option for $2.99, but when I searched for “free apps for 2 year olds” this shouldn’t come up in the search. Apps for toddlers should never have ads in them.
-My son likes this learning puzzle game, but it is incredibly frustrating that after every single puzzle he does, an ad pops up playing a video of a new app. I have to constantly take it away to exit out and get back into the game.
-The app has way too many ads, and for a toddler it is very frustrating. If you don’t want the ads you have to pay $3.99 for the app… for how basic the app is, it’s a lot of money…
-It would be great if my two year old had not just gotten onto YouTube during one of the ads.
-News flash to the developers: a game designed for toddlers shouldn’t have ads that toddlers can open, popping up between each round.
-Is there a way to lower the sensitivity of placement so the piece doesn’t have to be exactly over the empty shape?
-Too many ads. I understand some ads, but not after every level, making it impossible for my 2 year old to play.
-Wayyyyyyy too many ads.
Game is great, but literally after every puzzle, which takes 3 mins or less for a 2yr old to do, there are 2-3 ads!!!
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""HSM Sideband Backend Archive Module.

Module that implements the abstract_backend_archive class for a HSM Sideband
backend.
"""
import os
import stat
import shutil
from ...archive_utils import un_abs_path
from ...config import get_config
from ...exception import ArchiveInterfaceError
from .extended_file_factory import extended_hsmsideband_factory
from ..abstract.archive import AbstractBackendArchive
from ...id2filename import id2filename


def path_info_munge(filepath):
    """Munge the path for this filetype."""
    return_path = un_abs_path(id2filename(int(filepath)))
    return return_path


class HsmSidebandBackendArchive(AbstractBackendArchive):
    """HSM Sideband Backend Archive Class.

    Class that implements the abstract base class for the hsm sideband
    archive interface backend.
    """

    def __init__(self, prefix):
        """Constructor for HSM Sideband Backend Archive."""
        super(HsmSidebandBackendArchive, self).__init__(prefix)
        self._prefix = prefix
        self._file = None
        self._fpath = None
        self._filepath = None
        # the database prefix may be different from the system the file is mounted on
        self._sam_qfs_prefix = get_config().get(
            'hsm_sideband', 'sam_qfs_prefix')

    def open(self, filepath, mode):
        """Open a hsm sideband file."""
        # want to close any open files first
        try:
            self.close()
        except ArchiveInterfaceError as ex:
            err_str = "Can't close previous HSM Sideband file before opening new "\
                      'one with error: ' + str(ex)
            raise ArchiveInterfaceError(err_str)
        try:
            self._fpath = un_abs_path(filepath)
            filename = os.path.join(self._prefix, path_info_munge(self._fpath))
            self._filepath = filename
            # path the database refers to, rather than just the file system mount path
            sam_qfs_path = os.path.join(
                self._sam_qfs_prefix, path_info_munge(self._fpath))
            dirname = os.path.dirname(filename)
            if not os.path.isdir(dirname):
                os.makedirs(dirname, 0o755)
            self._file = extended_hsmsideband_factory(
                self._filepath, mode, sam_qfs_path)
            return self
        except Exception as ex:
            err_str = 
"Can't open HSM Sideband file with error: " + str(ex) raise ArchiveInterfaceError(err_str) def close(self): """Close a HSM Sideband file.""" try: if self._file: self._file.close() self._file = None except Exception as ex: err_str = "Can't close HSM Sideband file with error: " + str(ex) raise ArchiveInterfaceError(err_str) def read(self, blocksize): """Read a HSM Sideband file.""" try: if self._file: return self._file.read(blocksize) except Exception as ex: err_str = "Can't read HSM Sideband file with error: " + str(ex) raise ArchiveInterfaceError(err_str) err_str = 'Internal file handle invalid' raise ArchiveInterfaceError(err_str) def seek(self, offset): """Seek in the file to the offset.""" try: if self._file: return self._file.seek(offset) except Exception as ex: err_str = "Can't seek HSM Sideband file with error: " + str(ex) raise ArchiveInterfaceError(err_str) err_str = 'Internal file handle invalid' raise ArchiveInterfaceError(err_str) def write(self, buf): """Write a HSM Sideband file to the archive.""" try: if self._file: return self._file.write(buf) except Exception as ex: err_str = "Can't write HSM Sideband file with error: " + str(ex) raise ArchiveInterfaceError(err_str) err_str = 'Internal file handle invalid' raise ArchiveInterfaceError(err_str) def set_mod_time(self, mod_time): """Set the mod time on a HSM file.""" try: if self._filepath: os.utime(self._filepath, (mod_time, mod_time)) except Exception as ex: err_str = "Can't set HSM Sideband file mod time with error: " + \ str(ex) raise ArchiveInterfaceError(err_str) def set_file_permissions(self): """Set the file permissions for a posix file.""" try: if self._filepath: os.chmod(self._filepath, 0o444) except Exception as ex: err_str = "Can't set HSM Sideband file permissions with error: " + \ str(ex) raise ArchiveInterfaceError(err_str) def stage(self): """Stage a HSM Sideband file.""" try: if self._file: return self._file.stage() except Exception as ex: err_str = "Can't stage HSM Sideband file with 
error: " + str(ex) raise ArchiveInterfaceError(err_str) err_str = 'Internal file handle invalid' raise ArchiveInterfaceError(err_str) def status(self): """Get the status of a HSM Sideband file.""" try: if self._file: return self._file.status() except Exception as ex: err_str = "Can't get HSM Sideband file status with error: " + \ str(ex) raise ArchiveInterfaceError(err_str) err_str = 'Internal file handle invalid' raise ArchiveInterfaceError(err_str) def patch(self, file_id, old_path): """Move a hsm file.""" try: fpath = un_abs_path(file_id) new_filepath = os.path.join(self._prefix, path_info_munge(fpath)) new_directories = os.path.dirname(new_filepath) if not os.path.exists(new_directories): os.makedirs(new_directories) shutil.move(old_path, new_filepath) except Exception as ex: err_str = "Can't move posix file with error: " + str(ex) raise ArchiveInterfaceError(err_str) def remove(self): """Remove the file for a posix file.""" try: if self._filepath: os.chmod(self._filepath, stat.S_IWRITE) os.unlink(self._filepath) self._filepath = None except Exception as ex: err_str = "Can't remove posix file with error: " + str(ex) raise ArchiveInterfaceError(err_str)
Activating Windows 7 via KMS got error: product not found. Windows diagnostic log inside. The manufacturers say they install this to provide a better user experience, and they can lower the computer cost because they get a cut of any trial software you actually buy. What to do when you have lost your product key: even though you already have an activated copy of Windows, you should keep the product key just in case something goes wrong and you need to install a new copy of Windows. This is the default product key used to activate Windows 7 from the factory. Restart your computer and go to cmd again. Noel Paton Nil Carborundum Illegitemi No - I do not work for Microsoft, or any of its contractors. You can also choose to unmark the answer as you wish. This can be beneficial to other community members reading the thread. Note down the above key and save it somewhere safe. The product has been validated and Windows Update has worked fine. I bought a refurbished Lenovo Thinkpad last July 2015. The sticker key number may be different than the key number reported in one of the programs below. And once you have the key you could write it on a piece of paper and keep it safe for future use. Highlight the version that came with your computer. I'd prefer not to have to reinstall Windows 7, so any leads would be greatly appreciated. Type slmgr -rearm and press Enter. If not, your clients won't know where to go for activation. How to Change the Product Key Number in Windows 7: this shows you how to change the product key number in Windows 7 so that you can use another product key number to activate your Windows 7 instead. Microsoft allows them a special key which they can use on multiple computers. This can be helpful if you have entered an incorrect product key number, or none at all.
I have all my apps, utilities etc running and so was wanting to avoid a re-install. Oh well, if nobody can help me with actually activating the enterprise version I will just close the question and re-install. I've tried re-activating using slui. The above command will show you the product key associated with your Windows. Their initial solution was to try to sell me a new copy of Windows. Be sure to save this file to your Desktop for easy access. But now, since I would like to upgrade to Windows 8 using the Dell Rs:699 offer, I need to give them the Windows 7 product key for validation. How might I accomplish this? The product key sticker will usually be on the back of a desktop, and on the bottom or under the battery of a laptop or notebook. If you do not yet have a key, then you can temporarily use a default key to give you a 30 day trial. When Windows thinks it is not legitimate, it is complaining about some type of configuration abnormality. This could be because of corruption of the license store, or because your current trial of Windows has expired. In the left pane, click on Operating System. The data is still copied to the clipboard for pasting into your response. This could take anything up to a year or more. Download and run the free program towards the bottom of the link. Contact the seller and demand a refund! This option shows you how to use the free program Speccy to see what the product key number is from within a Windows 7 installation where it has already been entered (i.e., activated). Paste the report into your reply here. I am rather an old person who is still using Windows Vista Home Basic 32 bit version on my laptop. Wait for the message to pop up and click OK. By sharing your experience you can help other community members facing similar problems.
Type slmgr -rearm and press Enter, then click OK when the message pops up. Can someone please help me resolve this issue? Currently, the root cause is that the key is invalid; I would like to share the following article with you; hope it helps. Type powershell in the Windows Search, then right-click on it and select Run as administrator. Hello! This is what I believe happened: the security programs have somehow blocked the system from proper system restores until now, and I could not find out why. This means that in some cases at least the result is frustration rather than illumination. Miya, thanks for the command - but the response window is way taller than the screen and cannot be scrolled; attempting to pipe the output to a txt file doesn't work, and it's impossible to move the window so that the lower regions can be read.
# -*- coding: utf-8 -*-
##    Copyright 2015 Rasmus Scholer Sorensen, rasmusscholer@gmail.com
##
##    This file is part of Nascent.
##
##    Nascent is free software: you can redistribute it and/or modify
##    it under the terms of the GNU Affero General Public License as
##    published by the Free Software Foundation, either version 3 of the
##    License, or (at your option) any later version.
##
##    This program is distributed in the hope that it will be useful,
##    but WITHOUT ANY WARRANTY; without even the implied warranty of
##    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
##    GNU Affero General Public License for more details.
##
##    You should have received a copy of the GNU Affero General Public License
##    along with this program. If not, see <http://www.gnu.org/licenses/>.

# pylint: disable=C0103,W0212

"""
A connected multigraph that can be split into connected components.
"""

import networkx as nx


class ConnectedMultiGraph(nx.MultiGraph):
    """
    A NetworkX multigraph that makes it easy to break and merge connected
    component graphs.
    """

    def break_at_edge(self, source, target):
        """
        Break graph at edge and see if it's still connected.
        Four cases:
        Case 0: Nothing breaks off.
        Case 1: Two smaller connected components.
        Case 2: One node breaks off.
        Case 3: Two separate nodes.
        Returns (case, [list of surviving graphs], [list of free nodes]).
        """
        self.remove_edge(source, target)
        if len(self) == 2 and self.number_of_edges() == 0:
            # Case 3: only two nodes, now fully disconnected.
            return 3, None, [source, target]
        if len(self[source]) == 0:
            # Case 2: source is now isolated.
            self.remove_node(source)
            return 2, [self], [source]
        if len(self[target]) == 0:
            # Case 2: target is now isolated.
            self.remove_node(target)
            return 2, [self], [target]
        subgraphs = list(nx.connected_component_subgraphs(self, copy=False))
        if len(subgraphs) == 1:
            # Case 0: the graph is still connected.
            return 0, [self], None
        else:
            # Case 1: the graph split into exactly two components.
            assert len(subgraphs) == 2
            return 1, subgraphs, None
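The four-case analysis can also be exercised without networkx. This dependency-free sketch classifies the result of removing an edge from a plain adjacency-set graph; the function name and the convention of returning just the case number are illustrative, not part of the module above:

```python
def break_at_edge(adj, source, target):
    """Remove the edge source-target from an adjacency-set dict and
    return the case number: 0 still connected, 1 two components,
    2 one node breaks off, 3 two separate nodes."""
    adj[source].discard(target)
    adj[target].discard(source)

    def component(start):
        # Iterative depth-first search for the component containing start.
        seen, stack = {start}, [start]
        while stack:
            for n in adj[stack.pop()]:
                if n not in seen:
                    seen.add(n)
                    stack.append(n)
        return seen

    if target in component(source):
        return 0  # Case 0: nothing breaks off
    if len(adj) == 2:
        return 3  # Case 3: only two nodes, now separate
    if len(adj[source]) == 0 or len(adj[target]) == 0:
        return 2  # Case 2: one node breaks off
    return 1      # Case 1: two smaller components


triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
case = break_at_edge(triangle, 1, 2)  # 0: a triangle stays connected
```

Note that an isolated node has an empty adjacency set, which is why the Case 2 test checks for length 0 after the edge has been removed.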
Gather your supplies and let’s have some fun foiling. All you need for this fun project: an old frame, Foil Adhesive, Leopard Spots Foil and a brush. One of the things that I love about this project is there was NO prep! I didn’t do anything to this frame first (well, I did wipe it clean LOL) and then I brushed on my Foil Adhesive. The foil adhesive looks milky white but will dry completely clear. Just brush on a thin, even coat (if it seems a little thick just add a little water) and allow the foil adhesive to dry to a firm tack. Dry time is a minimum of 15 to 30 minutes BUT it can take longer during the cold winters! Make sure to allow the foil adhesive to dry completely to a firm tack. Do not rush it! Like I said before, minimum dry time is 15 to 30 minutes, but you can allow it to dry for hours, days, weeks, months… It will never dry beyond a firm tack. Yes, it stays sticky forever! We have a fantastic collection of foils, even Leopard Print! I like to crumple my foils first. Sometimes it helps the transfer, but it always makes them easier to handle. Lay your foils over the dry Foil Adhesive and use a soft rag to smooth them on. ALWAYS have the pattern or pretty color facing up! If you are not getting enough of the color or pattern to transfer with the soft rag, grab yourself a scrubber brush (stiff plastic bristles) and scrub on the foil for the best release/transfer. Continue to place pieces of foil across your project and transfer one section at a time until you have completed the entire frame. If you are more of a visual person, here is a short video on the process!
import os
import importlib

# Collect every test module under the "testing" package.
tests = []
for root, dirs, files in os.walk("testing"):
    for file in files:
        if file.endswith(".py") and not os.path.basename(file) == '__init__.py':
            # print('found test', os.path.join(root, file))
            tests.append(os.path.join(root, file))

passed_or_failed = []
failed_task = []  # what each test was doing when it failed, '' if it passed
for i, t in enumerate(tests):
    current_task = ''
    try:
        print('\n{}\n{:=^80}\n{}\n'.format('=' * 80, ' ' + t + ' ', '=' * 80))
        current_task = 'importing {}'.format(t)
        print('{}\n{}'.format('-' * 80, current_task))
        # Convert the file path into a dotted module name for importlib.
        t_path = os.path.splitext(t)[0].replace(os.sep, '.')
        current_test = importlib.import_module(t_path)
        print('## passed')
        current_task = 'running tests'
        print('{}\n{}\n##'.format('-' * 80, current_task))
        failed = current_test.test()
        passed_or_failed.append(not failed)
        failed_task.append(current_task if failed else '')
    except Exception as e:
        print('FAILED')
        passed_or_failed.append(False)
        failed_task.append(current_task)
        print('EXCEPTION:', e)

print('\n{}\n{:+^80}\n{}\n'.format('+' * 80, ' TEST RESULTS ', '+' * 80))
for i, t in enumerate(tests):
    status = 'successfully' if passed_or_failed[i]\
        else 'FAILED on {}'.format(failed_task[i])
    print('finished test {}\n -> {}\n'.format(t, status))
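Converting a file path like `testing/sub/case.py` into the dotted module name that `importlib.import_module` expects is easy to get subtly wrong (Windows path separators, filenames that do not end in `.py`). A small explicit helper, with an illustrative name, makes the conversion robust:

```python
import os


def module_name_from_path(path):
    """Convert a relative file path such as 'testing/sub/case.py'
    into the dotted module name importlib expects."""
    root, ext = os.path.splitext(path)
    if ext != '.py':
        raise ValueError('not a Python source file: %r' % path)
    # Normalize both separators so the result is stable across platforms.
    return root.replace(os.sep, '.').replace('/', '.')


name = module_name_from_path('testing/sub/case.py')  # 'testing.sub.case'
```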
Manufactures custom hi-pressure underground mine doors and various other equipment for the Mining Industry, including Cable Vulcanizers and Track Cleaners. John MSHA-certified engines are built to handle tough, off-highway applications, including in the mining industry. The mining industry remains 's largest contributor. Further research and development into this strategic market remains a priority.
# this code is taken from: https://gist.github.com/mbostock/4339083 d3_tree_template = """ <!DOCTYPE html> <meta charset="utf-8"> <style> .node { cursor: pointer; } .node circle { fill: #fff; stroke: steelblue; stroke-width: 1.5px; } .node text { font: 14px sans-serif; } .link { fill: none; stroke: #ccc; stroke-width: 3px; } </style> <body> <script src="http://d3js.org/d3.v3.min.js"></script> <script> var margin = {top: 20, right: 120, bottom: 20, left: 120}, width = 960 - margin['right'] - margin['left'], height = 800 - margin.top - margin.bottom; var i = 0, duration = 750, root; var tree = d3.layout.tree() .size([height, width]); var diagonal = d3.svg.diagonal() .projection(function(d) { return [d.y, d.x]; }); var svg = d3.select("body").append("svg") .attr("width", width + margin['right'] + margin['left']) .attr("height", height + margin.top + margin.bottom) .append("g") .attr("transform", "translate(" + margin['left'] + "," + margin.top + ")"); var myjson = '%s' root = JSON.parse( myjson ); root.x0 = height / 2; root.y0 = 0; function collapse(d) { if (d.children) { d._children = d.children; d._children.forEach(collapse); d.children = null; } } root.children.forEach(collapse); update(root); d3.select(self.frameElement).style("height", "800px"); function update(source) { // Compute the new tree layout. var nodes = tree.nodes(root).reverse(), links = tree.links(nodes); // Normalize for fixed-depth. nodes.forEach(function(d) { d.y = d.depth * 180; }); // Update the nodes var node = svg.selectAll("g.node") .data(nodes, function(d) { return d.id || (d.id = ++i); }); // Enter any new nodes at the parent's previous position. 
var nodeEnter = node.enter().append("g") .attr("class", "node") .attr("transform", function(d) { return "translate(" + source.y0 + "," + source.x0 + ")"; }) .on("click", click); nodeEnter.append("image") .attr("xlink:href", function(d) { return d.icon; }) .attr("x", "-40px") .attr("y", "-40px") .attr("width", "80px") .attr("height", "80px"); nodeEnter.append("circle") .attr("r", 1e-6) .attr("cx", "-2.2em") .attr("cy", "-2em") .style("fill", function(d) { return d._children ? "lightsteelblue" : "#fff"; }); nodeEnter.append("text") .attr("x", -28) .attr("dy", "-2em") .attr("text-anchor", "start") .text(function(d) { return d.name; }) .style("fill-opacity", 1e-6); // Transition nodes to their new position. var nodeUpdate = node.transition() .duration(duration) .attr("transform", function(d) { return "translate(" + d.y + "," + d.x + ")"; }); nodeUpdate.select("circle") .attr("r", 4.5) .style("fill", function(d) { return d._children ? "lightsteelblue" : "#fff"; }); nodeUpdate.select("text") .style("fill-opacity", 1); // Transition exiting nodes to the parent's new position. var nodeExit = node.exit().transition() .duration(duration) .attr("transform", function(d) { return "translate(" + source.y + "," + source.x + ")"; }) .remove(); nodeExit.select("circle") .attr("r", 1e-6); nodeExit.select("text") .style("fill-opacity", 1e-6); // Update the links var link = svg.selectAll("path.link") .data(links, function(d) { return d.target.id; }); // Enter any new links at the parent's previous position. link.enter().insert("path", "g") .attr("class", "link") .attr("d", function(d) { var o = {x: source.x0, y: source.y0}; return diagonal({source: o, target: o}); }); // Transition links to their new position. link.transition() .duration(duration) .attr("d", diagonal); // Transition exiting nodes to the parent's new position. 
link.exit().transition() .duration(duration) .attr("d", function(d) { var o = {x: source.x, y: source.y}; return diagonal({source: o, target: o}); }) .remove(); // Stash the old positions for transition. nodes.forEach(function(d) { d.x0 = d.x; d.y0 = d.y; }); } // Toggle children on click. function click(d) { if (d.children) { d._children = d.children; d.children = null; } else { d.children = d._children; d._children = null; } update(d); } </script> """
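The template above ends with a `%s` placeholder inside a JavaScript string literal (`var myjson = '%s'`), meant to be filled from Python with a JSON-serialized tree. A minimal sketch of that interpolation, using a stand-in template string (the field names `name`, `icon`, and `children` are the ones the template's callbacks actually read):

```python
import json

# Stand-in for the tail of d3_tree_template; the real template embeds the
# serialized tree the same way, inside a single-quoted JS string literal.
template = "var myjson = '%s'; root = JSON.parse(myjson);"

# d.name labels each node, d.icon supplies its image, d.children nests.
tree = {
    "name": "root",
    "icon": "root.png",
    "children": [
        {"name": "child-a", "icon": "a.png"},
        {"name": "child-b", "icon": "b.png"},
    ],
}

# json.dumps emits double quotes, so the single-quoted JS literal stays
# valid as long as the data contains no raw single quotes or newlines.
script = template % json.dumps(tree)
```

With the real template, the same `d3_tree_template % json.dumps(tree)` call would produce a complete HTML page.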
H1B Visa Fees and Costs associated with preparing, filing and processing H1B visa applications are normally ALL paid by the US Employer / Sponsor Company. The standard H1B filing fee is for the I-129 petition. This H1B fee is also payable for H1B extensions, H1B transfers, and amendments. Almost everyone has to pay this fee. There can also be additional H1B visa fees at the US consulate when applying from abroad. The employer must pay a training fee meant to fund the training of U.S. workers. But if the employer has fewer than 25 full-time employees, they must pay only one-half of the required fee [see Section 214(c)(9) of the Immigration & Nationality Act]. The training fee is paid one time to initially grant the H1B petition and to extend H1B status. But if this is the second or subsequent extension with the same employer, then the training fee is not required. The following are exempt from the training fee: primary or secondary educational institutions, institutions of higher education, nonprofit organizations related to or affiliated with any institutions of higher education, a nonprofit organization that engages in established curriculum-related clinical training of students registered at any institutions of higher education, nonprofit research organizations, or governmental research organizations [see Section 214(c)(9)(A) of the Immigration & Nationality Act and 8 C.F.R. §214.2(h)(19)(iii)-(iv)]. A $500 fraud prevention and detection fee is required for the initial H-1B petition or to switch employers. The fraud fee is not required for extensions with the same employer [see Section 214(c)(12) of the Immigration & Nationality Act]. Premium Processing - this fee is almost always worth every penny. Decisions are made within 15 business days by the USCIS. Your lawyer is provided a direct telephone number and email address for the office, and the specific officer, handling your matter (should any issue arise that needs attention).
And if applicable, your family’s H4 applications will be processed along with the primary H1B visa petition at no additional cost. Family members can apply as dependents of the primary H1B applicant. See Form I-539 for current fee.
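The training-fee and fraud-fee rules described above lend themselves to a small calculator. This is a simplified sketch for illustration only: the function name is illustrative, the base training fee amount is an assumption (verify current amounts with USCIS fee schedules), and none of this is legal advice:

```python
def h1b_extra_fees(full_time_employees, petition_type, training_exempt=False,
                   base_training_fee=1500, fraud_fee=500):
    """Rough sketch of the ACWIA training fee and fraud-prevention fee
    rules described above. Amounts are assumptions; verify with USCIS."""
    total = 0
    # Training fee: halved for employers with fewer than 25 full-time
    # employees; waived for exempt institutions and for a second or
    # subsequent extension with the same employer.
    if not training_exempt and petition_type in ('initial', 'extension'):
        fee = base_training_fee
        if full_time_employees < 25:
            fee //= 2
        total += fee
    # Fraud fee: initial petitions and employer changes only.
    if petition_type in ('initial', 'transfer'):
        total += fraud_fee
    return total


fees = h1b_extra_fees(10, 'initial')  # 750 training + 500 fraud = 1250
```

A small employer filing an initial petition pays the halved training fee plus the fraud fee; a nonprofit research organization filing the same petition would owe only the fraud fee.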
from collections import OrderedDict from clifunzone import reflection_utils from clifunzone import xml2json try: from lxml import etree as ET except ImportError: try: import xml.etree.cElementTree as ET except ImportError: import xml.etree.ElementTree as ET def contains_valid_xml(obj): """ Indicates whether a specified value contains valid and well-formed XML. :param obj: a file-like or string-like object. :return: True if valid, else False. >>> contains_valid_xml(None) False >>> contains_valid_xml('') False >>> contains_valid_xml('<') False >>> contains_valid_xml('<xml />') True >>> contains_valid_xml('<constants><constant id="pi" value="3.14" /><constant id="zero">0</constant></constants>') True """ if obj is None: return False try: if reflection_utils.is_file_like(obj): # read the contents of obj obj = obj.read() ET.fromstring(obj) except ET.ParseError: return False return True def load(obj): """ Parses a specified object using ElementTree. :param obj: a file-like or string-like object. :return: True if valid, else False. >>> load(None) Traceback (most recent call last): ValueError >>> load('') # doctest: +IGNORE_EXCEPTION_DETAIL Traceback (most recent call last): XMLSyntaxError: None # Note: the exception will be different without lxml: ParseError: no element found: line 1, column 0 >>> load('<') # doctest: +IGNORE_EXCEPTION_DETAIL Traceback (most recent call last): XMLSyntaxError: StartTag: invalid element name, line 1, column 2 # Note: the exception will be different without lxml: ParseError: unclosed token: line 1, column 0 >>> load('<abc />').tag 'abc' >>> load('<constants><constant id="pi" value="3.14" /><constant id="zero">0</constant></constants>').tag 'constants' """ if obj is None: raise ValueError if reflection_utils.is_file_like(obj): # read the contents of obj obj = obj.read() return ET.fromstring(obj) def xml_to_json(xmlstring, strip_attribute=False, strip_namespace=False, strip_whitespace=True, pretty=False): r""" Converts XML to JSON. 
    :param xmlstring: the XML string.
    :param strip_attribute: If True, attributes will be ignored.
    :param strip_namespace: If True, namespaces will be ignored.
    :param strip_whitespace: If True, 'unimportant' whitespace will be ignored.
    :param pretty: If True, the output will be pretty-formatted.
    :return: a JSON string.

    >>> xml_to_json(None) is None
    True

    >>> xml_to_json('')
    Traceback (most recent call last):
    ParseError: no element found: line 1, column 0

    >>> xml_to_json('<')
    Traceback (most recent call last):
    ParseError: unclosed token: line 1, column 0

    >>> xml_to_json('<a/>')
    '{"a": null}'

    >>> xml_to_json('<a/>', pretty=True)
    '{\n "a": null\n}'

    >>> xml_to_json('<constants><constant id="pi" value="3.14" />\n<constant id="zero">0</constant></constants>')
    '{"constants": {"constant": [{"@id": "pi", "@value": "3.14"}, {"@id": "zero", "#text": "0"}]}}'

    >>> xml_to_json('<z> <q qz="z" qy="y" /> <a az="z" ab="b" ay="y" /> <x/></z>', strip_whitespace=True)
    '{"z": {"q": {"@qy": "y", "@qz": "z"}, "a": {"@ay": "y", "@az": "z", "@ab": "b"}, "x": null}}'

    >>> xml_to_json('<royg> <r/> <o/> <y/> <r e="d"/> <g/></royg>', strip_whitespace=True)
    '{"royg": {"r": [null, {"@e": "d"}], "o": null, "y": null, "g": null}}'

    >>> xml_to_json('<a> <b\nid="b1" />\n<c/> <d> </d> </a>', strip_whitespace=False)
    '{"a": {"b": {"@id": "b1", "#tail": "\\n"}, "c": {"#tail": " "}, "d": {"#tail": " ", "#text": " "}, "#text": " "}}'

    >>> xml_to_json('<a> <b\nid="b1" />\n<c/> <d> </d> </a>', strip_whitespace=True)
    '{"a": {"b": {"@id": "b1"}, "c": null, "d": null}}'

    >>> xml_to_json("<a> <b\nid=\"b1\" />\n<c/> <d> </d> </a>", strip_namespace=False)
    '{"a": {"b": {"@id": "b1"}, "c": null, "d": null}}'

    >>> xml_to_json("<a> <b\nid=\"b1\" />\n<c/> <d> </d> </a>", strip_namespace=True)
    '{"a": {"b": {"@id": "b1"}, "c": null, "d": null}}'

    >>> xml_to_json('<royg> <r/> <o/> <y/> <r e="d"/> <g/></royg>', strip_whitespace=True, strip_attribute=True)
    '{"royg": {"r": [null, null], "o": null, "y": null, "g": null}}'

    >>> xml_to_json('<a> <b\nid="b1" />\n<c/> <d> </d> </a>', strip_whitespace=False, strip_attribute=True)
    '{"a": {"b": {"#tail": "\\n"}, "c": {"#tail": " "}, "d": {"#tail": " ", "#text": " "}, "#text": " "}}'

    >>> xml_to_json('<a> <b\nid="b1" />\n<c/> <d> </d> </a>', strip_whitespace=True, strip_attribute=True)
    '{"a": {"b": null, "c": null, "d": null}}'
    """
    if xmlstring is None:
        return None
    return xml2json.xml2json(xmlstring, strip_attribute=strip_attribute,
                             strip_namespace=strip_namespace,
                             strip_whitespace=strip_whitespace, pretty=pretty)


# def etree_to_dict(t):
#     d = {t.tag: map(etree_to_dict, t.iterchildren())}
#     d.update(('@' + k, v) for k, v in t.attrib.iteritems())
#     d['text'] = t.text
#     return d


def element_info(element, tree=None):
    """
    Returns a dict with (incomplete) info about a specified element/node.

    :param element: an <ElementTree.Element> instance.
    :param tree: an optional tree object; if it supports getpath(), the
        element's path is included in the result.
    :return: a <collections.OrderedDict> instance.
    """
    def get_distinct_tag_names(elements):
        # convert to tag names
        elements = [child.tag for child in elements]
        # filter out duplicates
        elements = set(elements)
        return elements

    def get_distinct_attribute_names(elements):
        names = set()
        for i in elements:
            names.update(i.attrib.keys())
        names = ('@' + k for k in names)
        return names

    d = OrderedDict()
    if tree:
        try:
            d.update({'path': tree.getpath(element)})
        except AttributeError:
            # tree.getpath() is only available in lxml, not in the builtin xml.etree
            # see: http://lxml.de/xpathxslt.html#xpath
            # see: http://stackoverflow.com/a/13352109
            pass

    d2 = {'tag': element.tag}
    if element.text:
        d2.update({'#text': element.text})
    if element.attrib:
        # get all attribs
        attribs = element.attrib.items()
        # prefix attrib names
        attribs = [('@' + k, v) for k, v in attribs]
        attribs = OrderedDict(attribs)
        d2.update({'attributes': attribs})
    d['content'] = d2

    d['metrics'] = {}

    # get all direct children
    children = get_elements(element, xpath='./*')
    children_count = len(children)
    if children_count:
        d2 = {'count': children_count}
        d2.update({'tags': (sorted(get_distinct_tag_names(children)))})
        d2.update({'attributes': (sorted(get_distinct_attribute_names(children)))})
        d['metrics']['children'] = d2

    # get all descendants
    descendants = get_elements(element, xpath='.//*')
    descendants_count = len(descendants)
    if descendants_count:
        d2 = {'count': descendants_count}
        d2.update({'tags': (sorted(get_distinct_tag_names(descendants)))})
        d2.update({'attributes': (sorted(get_distinct_attribute_names(descendants)))})
        d['metrics']['descendants'] = d2

    return d


def is_empty_element(elem):
    """
    Indicates whether an XML Element object is 'empty'.

    :param elem: an Element object
    :return: True if elem is empty
    """
    # return not bool(len(elem) or len(elem.attrib) or len(elem.text))
    return not bool(len(elem) or elem.attrib or elem.text)


def is_parent_element(elem):
    """
    Indicates whether an XML Element object has any children.

    :param elem: an Element object
    :return: True if elem has any child elements
    """
    return len(elem)


def count_elements(obj, xpath=None):
    """
    Returns a count of the XML elements that match a specified XPath
    expression.

    This function encapsulates API differences between the lxml and
    ElementTree packages.

    :param obj: a tree or element object
    :param xpath: an XPath node set/selection expression
    :return: an int
    """
    if not xpath:
        xpath = '//*'  # match all elements by default
    # try lxml syntax first (much faster!)
    try:
        return int(obj.xpath('count({xpath})'.format(xpath=xpath)))
    except AttributeError:
        # AttributeError: 'ElementTree' object has no attribute 'xpath'
        pass
    # else try ElementTree syntax
    if xpath.startswith('/'):
        # ElementTree's findall() doesn't like xpath expressions that start with a '/'.
        # e.g. "FutureWarning: This search is broken in 1.3 and earlier, and will be fixed in a future version. ..."
        xpath = '.' + xpath
    return len(obj.findall(xpath))


def get_elements(obj, xpath=None):
    """
    Returns all XML elements that match a specified XPath expression.

    This function encapsulates API differences between the lxml and
    ElementTree packages.

    :param obj: a tree or element object
    :param xpath: an XPath node set/selection expression
    :return: an iterable
    """
    if not xpath:
        xpath = '//*'  # match all elements by default
    # try lxml syntax first (much faster!)
    try:
        return obj.xpath(xpath)
    except AttributeError:
        # AttributeError: 'ElementTree' object has no attribute 'xpath'
        pass
    # else try ElementTree syntax
    if xpath.startswith('/'):
        # ElementTree's findall() doesn't like xpath expressions that start with a '/'.
        # e.g. "FutureWarning: This search is broken in 1.3 and earlier, and will be fixed in a future version. ..."
        xpath = '.' + xpath
    return obj.findall(xpath)


def remove_elements(obj, xpath):
    """
    Removes all XML elements that match a specified XPath expression.

    This function encapsulates API differences between the lxml and
    ElementTree packages.

    :param obj: a tree or element object
    :param xpath: an XPath node set/selection expression
    :return: an int count of the number of removed elements
    """
    if not xpath:
        raise ValueError('invalid xpath')
    elements = get_elements(obj, xpath=xpath)
    count = 0
    for i in elements:
        # try lxml syntax first
        try:
            parent = i.getparent()
            parent.remove(i)
        except AttributeError:
            # else try ElementTree syntax
            obj.remove(i)
        count += 1
    return count


def remove_attributes_with_name(element, attrib_name):
    """
    Removes all occurrences of a specific attribute from all elements.

    :param element: an XML element object.
    :param attrib_name: the name of the attribute to remove.
    """
    if attrib_name.startswith('@'):
        # remove the optional leading '@'
        attrib_name = attrib_name[1:]
    # find all elements that have the attribute
    xpath = '//*[@{attrib}]'.format(attrib=attrib_name)
    elements = get_elements(element, xpath=xpath)
    for i in elements:
        del i.attrib[attrib_name]


def remove_attributes_with_value(element, attrib_value):
    """
    Removes all attributes with a specified value from all elements.

    :param element: an XML element object.
    :param attrib_value: the attribute value to match on.
    """
    # find all elements that have 1+ matching attributes
    xpath = '//*[@*="{value}"]'.format(value=(attrib_value.replace('"', '\\"')))
    elements = get_elements(element, xpath=xpath)
    for i in elements:
        # determine the matching keys/attributes; materialize the list so we
        # don't mutate i.attrib while iterating over it
        keys = [k for k in i.attrib.keys() if i.attrib[k] == attrib_value]
        for attrib_name in keys:
            # remove each matching attribute from the current element
            del i.attrib[attrib_name]


def remove_attributes_with_empty_value(element):
    """
    Removes all attributes with an empty value from all elements.

    :param element: an XML element object.
    """
    remove_attributes_with_value(element, '')


# def remove_attributes_if(element, attrib_name, func):
#     """
#     Removes all occurrences of a specific attribute from all elements.
#
#     :param element: an XML element object.
#     :param attrib_name: the name of the attribute to remove.
#     :param func: a predicate function (i.e. returns a bool) with the signature:
#         f(attribute_name, attribute_value)
#     """
#     if attrib_name.startswith('@'):
#         # remove the optional leading '@'
#         attrib_name = attrib_name[1:]
#     # find all elements that have the attribute
#     xpath = '//*[@{attrib}]'.format(attrib=attrib_name)
#     elements = xml_utils.get_elements(element, xpath=xpath)
#     for i in elements:
#         del i.attrib[attrib_name]


def main():
    import doctest
    fail, total = doctest.testmod(optionflags=(doctest.REPORT_NDIFF |
                                               doctest.REPORT_ONLY_FIRST_FAILURE))
    print('Doctest: {f} FAILED ({p} of {t} PASSED).'.format(
        f=fail, p=(total - fail), t=total))


if __name__ == "__main__":
    main()
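The lxml-first, ElementTree-fallback pattern used throughout the module hinges on one detail: ElementTree's findall() rejects expressions that begin with '/', so absolute-style paths are prefixed with '.'. A minimal standalone sketch of just the stdlib path (the function name here is illustrative, not part of the module):

```python
import xml.etree.ElementTree as ET


def findall_compat(obj, xpath='//*'):
    # mirror get_elements(): prefix '.' so ElementTree accepts
    # absolute-style expressions such as '//*' or '//b'
    if xpath.startswith('/'):
        xpath = '.' + xpath
    return obj.findall(xpath)


root = ET.fromstring('<a><b id="b1"/><c/><b/></a>')
print(len(findall_compat(root, '//b')))   # 2: both <b> descendants
print(len(findall_compat(root, '//*')))   # 3: all descendants of the root
```

With lxml the same expressions would go through `obj.xpath()` untouched, which is why the module only rewrites the path on the fallback branch.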
Hon madr roman therva Kuaran. Now, now, Ethel. Be nice. Alright, set her down, Sue. It seems to have noticed us. That's not our assignment, Eshey. Come now, Hombre, I know you agree with Eshey, but this is not what we're here to do. Anyways, let's disembark and take a good look at it. Hombre, you're going the wrong way, turn left! OH, DON'T YOU SASS ME, MISTER! NO, MORON, YOUR OTHER LEFT!!! Meyla krafla mikli thur syr! NOT YOU, TOO! I'm going to stab you with your own helmet if you don't shut up also! CHILDREN! Let's just make our way over to it! That's better. Let's pick it up for a closer look. It looks like it's scared, sir. Something about this situation is inappropriate. What are you going on about, Eshey? You never make any sense. Sir, I know this assignment was a psychological test for marketing, which was confidential. But now that we are here, could you let us in on it? Well, as the major corporations here in Andromeda start conflict with each other over planets to dominate, our marketing department wanted to develop a vehicle that would take a form so terrifying that fear itself would be our ally. The idea is that the use of such a vehicle would discourage the other corporations from interfering with our assignments. Eshey and I had our brains scanned so they could find images of the thing that terrified us the most during our exploration assignments. I'm still not sure how I feel about this. You always worry too much, Eshey. Now, let's bring her about and return to the ship. And let's bring the alien back. I'm sure Dr. Long would want to take a look. No, no, no, Hombre. We'll bring this one back alive. Alright, sir, we're ready to leave Raduon and return to the Axle. More pics under the spoiler tag. Fantastic! Not only do you have probably the most errrrm `diverse` set of characters in the AG, making them big scale now is just hilarious. I truly love it, great work! Hahahaha, oh god, this is just so hilarious. Great job man, just... great job!
This is such a bizarre build! Nice one in taking a completely different direction. And good, uh, "Microscale" ship! Story and build are awesome. Excellent job. This is like multiple top notch builds in one. Dialogue is fantastic as always. I was surprised at first when I saw Dr. Long in the ship since I knew about mega-maid Dr. Long, but then it all made sense. 10/10 for me. Win on so many levels! The Maxifig Dr. Long is one of the greatest things this game has produced, the story is really captivating, and that spaceship is super cool! Well done sir! You are way too good at these story builds! Dr. Long mech is hilarious and very well built. I particularly like the contrast between the studded "clothes" and the smooth face. Hilarious build this week. I wasn't expecting a Dr. Long mech! The mech is amazing! By the title I was NOT expecting this! Super idea, looks brilliant! Great pair of builds, the massive Dr. Long is quite funny and so is the alien! I also really like the ship... which, considering Dr. Long's size, has got to be humongous! Superb. Great idea, very funny and very well made. This is a brilliant, brilliant build! I love everything about it! You perfectly captured Dr. Long in this towering Mecha suit! The legs are especially well shaped, as is the bun that you got perfectly! I am still just in awe that you even thought of this in the first place, let alone that you executed it so flawlessly!
"""Utilities for working with git refs.""" # ============================================================================= # CONTENTS # ----------------------------------------------------------------------------- # phlgitu_ref # # Public Classes: # Error # Name # .short # .fq # .is_remote # # Public Functions: # is_remote # is_fq # guess_fq_name # make_remote # make_local # fq_remote_to_short_local # fq_to_short # is_under_remote # is_fq_local_branch # # ----------------------------------------------------------------------------- # (this contents block is generated, edits will be lost) # ============================================================================= from __future__ import absolute_import from __future__ import division from __future__ import print_function class Error(Exception): pass class Name(object): """Vocabulary type for git ref names to remove ambiguity in passing. Usage examples: >>> a = Name('refs/heads/master') >>> a.short 'master' >>> a.fq 'refs/heads/master' >>> a.is_remote False >>> b = Name('refs/heads/master') >>> c = Name('refs/remotes/origin/master') >>> a == b True >>> a == c False >>> c.is_remote True >>> s = set([a, b, c]) >>> len(s) 2 """ def __init__(self, fq_name): super(Name, self).__init__() if not is_fq(fq_name): raise Error("'{}' is not fully qualified") self._fq = fq_name @property def short(self): return fq_to_short(self._fq) @property def fq(self): return self._fq @property def is_remote(self): return is_remote(self._fq) def __eq__(self, right): return self._fq.__eq__(right._fq) def __hash__(self): return self._fq.__hash__() def is_remote(fq_name): """Return True if 'fq_name' is a remote branch, False otherwise. 
Usage examples: >>> is_remote('refs/heads/master') False >>> is_remote('refs/remotes/origin/master') True :name: string fully-qualified name of the ref to test :returns: bool """ if not is_fq(fq_name): raise Error("'{}' is not fully qualified") return fq_name.startswith('refs/remotes/') def is_fq(name): """Return True if the supplied 'name' is fully-qualified, False otherwise. Usage examples: >>> is_fq('master') False >>> is_fq('refs/heads/master') True :name: string name of the ref to test :returns: bool """ return name.startswith('refs/') def guess_fq_name(name_to_guess_from, remote_list=None): """Return a best-guess of the fq name of a ref, given a list of remotes. The list of remotes defaults to ['origin'] if None is supplied. Usage examples: >>> guess_fq_name('master') 'refs/heads/master' >>> guess_fq_name('origin/master') 'refs/remotes/origin/master' >>> guess_fq_name('refs/notes') 'refs/notes' :name_to_guess_from: string name of the ref :remote_list: list of string names of remotes """ if not name_to_guess_from: raise Error("empty name to guess from") if is_fq(name_to_guess_from): return name_to_guess_from if remote_list is None: remote_list = ['origin'] for r in remote_list: if name_to_guess_from.startswith(r + '/'): return "refs/remotes/{}".format(name_to_guess_from) return "refs/heads/{}".format(name_to_guess_from) def make_remote(ref, remote): """Return a Git reference based on a local name and a remote name. Usage example: >>> make_remote("mywork", "origin") 'refs/remotes/origin/mywork' >>> make_remote("mywork", "github") 'refs/remotes/github/mywork' """ return "refs/remotes/" + remote + "/" + ref def make_local(ref): """Return a fully qualified Git reference based on a local name. Usage example: >>> make_local("mywork") 'refs/heads/mywork' """ # TODO: check that it isn't already fully qualified return "refs/heads/" + ref def fq_remote_to_short_local(ref): """Return a short Git branch name based on a fully qualified remote branch. 
Raise Error if the conversion can't be done. Usage example: >>> fq_remote_to_short_local("refs/remotes/origin/mywork") 'mywork' >>> fq_remote_to_short_local("refs/heads/mywork") Traceback (most recent call last): Error: ref can't be converted to short local: mywork """ # convert to e.g. 'origin/mywork' ref = fq_to_short(ref) slash_pos = ref.find('/') if slash_pos == -1: raise Error("ref can't be converted to short local: {}".format(ref)) # disregard before and including the first slash return ref[slash_pos + 1:] def fq_to_short(ref): """Return a short Git reference based on a fully qualified name. Raise Error if the conversion can't be done. Usage example: >>> fq_to_short("refs/heads/mywork") 'mywork' >>> fq_to_short("refs/remotes/origin/mywork") 'origin/mywork' """ refs_heads = 'refs/heads/' refs_remotes = 'refs/remotes/' if ref.startswith(refs_heads): return ref[len(refs_heads):] if ref.startswith(refs_remotes): return ref[len(refs_remotes):] raise Error("ref can't be converted to short: {}".format(ref)) def is_under_remote(ref, remote): """Return True if a Git reference is from a particular remote, else False. Note that behavior is undefined if the ref is not fully qualified, i.e. does not begin with 'refs/'. Usage example: >>> is_under_remote("refs/remotes/origin/mywork", "origin") True >>> is_under_remote("refs/remotes/origin/mywork", "alt") False >>> is_under_remote("refs/headsmywork", "origin") False """ return ref.startswith('refs/remotes/' + remote + '/') def is_fq_local_branch(ref): """Return True if a Git reference is a fully qualified local branch. Return False otherwise. Usage example: >>> is_fq_local_branch("refs/heads/master") True >>> is_fq_local_branch("refs/remotes/origin/master") False >>> is_fq_local_branch("refs/notes/commits") False """ return ref.startswith('refs/heads/') # ----------------------------------------------------------------------------- # Copyright (C) 2013-2014 Bloomberg Finance L.P. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ------------------------------ END-OF-FILE ----------------------------------
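The helpers compose: a short local name pushed through make_remote() can be recovered with fq_remote_to_short_local(), including branch names that themselves contain slashes. A standalone round-trip sketch using minimal inline copies of the two functions (simplified, not the module's exact bodies):

```python
def make_remote(ref, remote):
    # refs/remotes/<remote>/<branch>
    return "refs/remotes/" + remote + "/" + ref


def fq_remote_to_short_local(ref):
    # drop 'refs/remotes/', then everything up to the first '/' (the remote)
    short = ref[len("refs/remotes/"):]
    return short.split("/", 1)[1]


fq = make_remote("feature/login", "origin")
print(fq)                            # refs/remotes/origin/feature/login
print(fq_remote_to_short_local(fq))  # feature/login
```

Only the first slash after the remote prefix is significant, which is what lets slash-bearing branch names survive the round trip.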
White Plains, NY, August 19, 2015, Estelle R. Coffino, Program Director and Chairperson of The College of Westchester, has been recognized by Elite American Health Professionals, for dedication, achievement and leadership in higher education. Ms. Coffino has 32 years of professional experience, with eight years as a program director and chairperson of Allied Health Programs for The College of Westchester. On a daily basis, she is responsible for handling administrative responsibilities, as well as teaching students, developing policies and syllabi, reviewing curriculum, interviewing students, and reviewing publications for several textbooks that are part of the College’s curriculum. Ms. Coffino designed two clinical laboratories and administration classroom settings for the students. Looking back, Ms. Coffino attributes her success to believing in what she does and having the intention of contributing to something that makes a difference in the lives of others. She became involved in her profession because she was going to school to be a gymnastics teacher and was selected to enroll into a respiratory health care program because she did well in mathematics and science. Despite her dislike of blood and some of the physically revealing aspects of the profession, she took the class and developed an interest in the field. It was a natural progression from that point. After 30 years in health care, she wanted to give back and decided to pursue health care education. The highlight of her career was being named Employee of the Year in 2011. Ms. Coffino received a Master of Public Administration in health care administration from Long Island University and a Bachelor of Science in community health from Mercy College. In addition to her degrees, she is a Licensed Registered Respiratory Therapist with a certification in medical assistance. She is also a certified pulmonary function technologist and a certified CPR instructor.
She maintains affiliation with the National Healthcareer Association, the American Red Cross, ECSI and American Medical Technologists. In years to come, Ms. Coffino would like to become the dean of the School of Allied Health, which is currently being developed.
class Element(object):
    def __init__(self, parent, tag=None, attrib=None, close=True):
        self.parent = parent
        self.tag = tag
        self.children = []
        self.attrib = attrib if attrib is not None else {}
        self.close = close
        if parent is not None:
            parent.append_child(self)

    def append_child(self, e):
        self.children.append(e)
        return e

    def get_parent(self):
        return self.parent

    def to_html(self):
        ret = ''
        if self.tag:
            ret += '<%s' % self.tag
            for k, v in self.attrib.items():
                ret += ' %s="%s"' % (k, v)
            ret += '>'
        for child in self.children:
            ret += child.to_html()
        if self.tag and self.close:
            ret += '</%s>' % self.tag
        return ret


class TextElement(Element):
    def __init__(self, parent, text=''):
        Element.__init__(self, parent, None)
        self.text = text

    def append_child(self, e):
        # text nodes cannot have children
        assert False

    def to_html(self):
        return self.text
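Assuming the Element/TextElement classes above, a tree serializes depth-first: an untagged root acts as a document fragment, tagged nodes emit their open tag, attributes, children, and close tag in order. This sketch inlines trimmed copies of both classes so it runs standalone:

```python
class Element(object):
    def __init__(self, parent, tag=None, attrib=None, close=True):
        self.parent, self.tag, self.close = parent, tag, close
        self.children = []
        self.attrib = attrib if attrib is not None else {}
        if parent is not None:
            parent.append_child(self)

    def append_child(self, e):
        self.children.append(e)
        return e

    def to_html(self):
        ret = ''
        if self.tag:
            ret += '<%s' % self.tag
            for k, v in self.attrib.items():
                ret += ' %s="%s"' % (k, v)
            ret += '>'
        for child in self.children:
            ret += child.to_html()
        if self.tag and self.close:
            ret += '</%s>' % self.tag
        return ret


class TextElement(Element):
    def __init__(self, parent, text=''):
        Element.__init__(self, parent, None)
        self.text = text

    def to_html(self):
        return self.text


# a root with no tag acts as a document fragment
root = Element(None)
p = Element(root, 'p', {'class': 'note'})
TextElement(p, 'hello')
print(root.to_html())  # <p class="note">hello</p>
```

Note that attribute values are interpolated verbatim, so callers are responsible for any HTML escaping.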
Netflix has released a trailer for their upcoming sci-fi flick ARQ. Written and directed by Orphan Black's Tony Elliott, the film will have a world premiere at the Toronto festival on Friday, September 9 before hitting the streaming service on Friday, September 16. Robbie Amell stars in the time-twisting mystery along with Rachael Taylor (666 Park Avenue, Jessica Jones). In a future where corporations battle against sovereign nations over the last of the world's energy supplies, young engineers Renton and Hannah find themselves attempting to save an experimental energy technology that could end the wars. The catch is, the technology has created a time loop that causes them to relive a deadly home invasion over and over again. They must figure out how to stop the time loop and come out of it alive. ARQ was produced by XYZ Films. Just saw this last night and really enjoyed it! Just goes to show you can produce a thrilling mind-bender of a film that takes place in a minimal setting with a good writer/director behind the wheel!
import pygame

NEGRO = (0, 0, 0)  # black


class Img:
    def __init__(self, pantalla, img, x, y, Tampantalla, fondo=None):
        self.img = pygame.image.load(img)
        self.fondo = None
        if fondo is not None:
            self.fondo = pygame.image.load(fondo)
        self.p = pantalla
        self.x = x
        self.y = y
        self.AnAl = Tampantalla  # screen size (width, height)
        self.DrawImg()

    def DrawImg(self, Dx=0, Dy=0, pintar=False):
        if pintar:
            self.p.fill(NEGRO)
            if self.fondo is not None:
                self.p.blit(self.fondo, [0, 0])
        self.x += Dx
        self.y += Dy
        self.ValidarLimites()
        self.p.blit(self.img, [self.x, self.y])
        pygame.display.flip()

    def ValidarLimites(self):
        # wrap the image around the screen edges
        if self.y > self.AnAl[1]:
            self.y = 0
        if self.x > self.AnAl[0]:
            self.x = 0
        if self.y < 0:
            self.y = self.AnAl[1]
        if self.x < 0:
            self.x = self.AnAl[0]

    def MovTeclImg(self, event, pintar=False):
        if event == pygame.K_DOWN:
            self.DrawImg(0, 10, pintar)
            print('down')
        elif event == pygame.K_UP:
            self.DrawImg(0, -10, pintar)
            print('up')
        elif event == pygame.K_LEFT:
            self.DrawImg(-10, 0, pintar)
            print('left')
        elif event == pygame.K_RIGHT:
            self.DrawImg(10, 0, pintar)
            print('right')

    def HiddenMouse(self):
        pygame.mouse.set_visible(False)

    def ShowMouse(self):
        pygame.mouse.set_visible(True)  # show the cursor again

    def MovWhithMouse(self, pintar=False):
        tam = self.img.get_rect()  # get the image's width and height
        pos = pygame.mouse.get_pos()
        # center the image on the mouse position
        pos = [pos[0] - tam[2] // 2, pos[1] - tam[3] // 2]
        if pintar:
            self.p.fill(NEGRO)
            if self.fondo is not None:
                self.p.blit(self.fondo, [0, 0])
        self.p.blit(self.img, pos)
        pygame.display.flip()
One of the biggest players in the banking insurance market has a clear ambition: to double its turnover in the next five years. They just didn’t know how. They needed insight into what they had to change in their organisation to make this lofty ambition possible. ITDS was asked to advise the bank on what would be necessary to realise its objective of doubling its turnover in five years. Involving the bank’s staff in various workshops, we started off by outlining what the organisation currently looks like. Afterwards, on the strength of our expertise and experience, we described what the future organisation needed to be. Then, based on the differences between the current and future organisations, we presented our recommendations. The session in which we presented our results was particularly noteworthy. For every topic we discussed there was at least one manager or board member who disagreed with our analysis, often because a recommendation would come at the expense of a department or business unit that he or she was responsible for. However, a positive development was that when this happened we invariably received the backing of others in the audience. Lively discussions ensued, ensuring that our recommendations report received broad support.
from warnings import warn

from numpy import absolute, in1d

from ._plot_mountain import _plot_mountain


def single_sample_gsea(
    gene_score,
    gene_set_genes,
    statistic="ks",
    plot=True,
    title=None,
    gene_score_name=None,
    annotation_text_font_size=16,
    annotation_text_width=88,
    annotation_text_yshift=64,
    html_file_path=None,
    plotly_html_file_path=None,
):
    gene_score = gene_score.dropna()
    gene_score_sorted = gene_score.sort_values(ascending=False)

    in_ = in1d(gene_score_sorted.index, gene_set_genes.dropna(), assume_unique=True)
    in_sum = in_.sum()
    if in_sum == 0:
        warn("Gene scores did not have any of the gene-set genes.")
        return

    gene_score_sorted_values = gene_score_sorted.values
    gene_score_sorted_values_absolute = absolute(gene_score_sorted_values)
    in_int = in_.astype(int)

    hit = (
        gene_score_sorted_values_absolute * in_int
    ) / gene_score_sorted_values_absolute[in_].sum()
    miss = (1 - in_int) / (in_.size - in_sum)
    y = hit - miss
    cumulative_sums = y.cumsum()

    if statistic not in ("ks", "auc"):
        raise ValueError("Unknown statistic: {}.".format(statistic))

    if statistic == "ks":
        max_ = cumulative_sums.max()
        min_ = cumulative_sums.min()
        if absolute(min_) < absolute(max_):
            score = max_
        else:
            score = min_
    elif statistic == "auc":
        score = cumulative_sums.sum()

    if plot:
        _plot_mountain(
            cumulative_sums,
            in_,
            gene_score_sorted,
            score,
            None,
            None,
            title,
            gene_score_name,
            annotation_text_font_size,
            annotation_text_width,
            annotation_text_yshift,
            html_file_path,
            plotly_html_file_path,
        )

    return score
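The running-sum logic above is independent of pandas and the plotting helper: hits step the sum up in proportion to their absolute score, misses step it down evenly, and the "ks" statistic keeps the extreme of the running sum. A pure-Python sketch of just that branch (hypothetical name, plain dicts instead of a Series):

```python
def enrichment_score(gene_score, gene_set):
    # gene_score: {gene: score}; gene_set: iterable of member gene names
    members = set(gene_set)
    genes = sorted(gene_score, key=gene_score.get, reverse=True)
    values = [abs(gene_score[g]) for g in genes]

    hit_total = sum(v for g, v in zip(genes, values) if g in members)
    n_miss = len(genes) - sum(1 for g in genes if g in members)

    running, best = 0.0, 0.0
    for g, v in zip(genes, values):
        # hits step up proportionally to |score|; misses step down evenly
        running += v / hit_total if g in members else -1.0 / n_miss
        if abs(running) > abs(best):
            best = running  # keep the extreme of the running sum ('ks')
    return best


print(enrichment_score({'a': 3.0, 'b': 2.0, 'c': 1.0}, {'a'}))  # 1.0
```

The sketch assumes at least one hit and one miss (the real function warns and returns early when no gene-set gene is present).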
AAI Motorsports is the sole agent and importer of Buddy Club and TOM'S Racing automotive parts, which we distribute primarily within the North, Central and South American region. Along with this, AAI Motorsports is also an exporter of U.S.-sourced automotive parts products to Asia. Inspired by his love of motorsport, AAI Motorsports was established in 1998 by Mr. Jun San Chen, who since 1991 has been the owner and a driver of Team AAI Motorsports, a premier racing team based in Taiwan. Today, AAI Motorsports is a top ranking racing team in East Asia, having just completed the 2015 24 Hours of Le Mans race. It is the mission of AAI Motorsports to bring top brand products and years of experience in the motorsport industry to our customers throughout the Americas and Asia. AAI Motorsports’s online presence includes an e-commerce wholesale store which facilitates convenient order placement and processing.
# cmdline - command line utilities.
import sys
import string

import win32ui


def ParseArgs(str):
    ret = []
    pos = 0
    length = len(str)
    while pos < length:
        # skip leading whitespace
        try:
            while str[pos] in string.whitespace:
                pos = pos + 1
        except IndexError:
            break
        if pos >= length:
            break
        if str[pos] == '"':
            # quoted argument: consume up to the closing quote
            pos = pos + 1
            try:
                endPos = str.index('"', pos) - 1
                nextPos = endPos + 2
            except ValueError:
                endPos = length
                nextPos = endPos + 1
        else:
            # unquoted argument: consume up to the next whitespace
            endPos = pos
            while endPos < length and not str[endPos] in string.whitespace:
                endPos = endPos + 1
            nextPos = endPos + 1
        ret.append(str[pos:endPos + 1].strip())
        pos = nextPos
    return ret


def FixArgFileName(fileName):
    """Convert a filename on the commandline to something useful.

    Given an automatic filename on the commandline, turn it into a python
    module name, with the path added to sys.path.
    """
    import os
    path, fname = os.path.split(fileName)
    if len(path) == 0:
        path = os.curdir
    path = os.path.abspath(path)
    # must check that the command line arg's path is in sys.path
    for syspath in sys.path:
        if os.path.abspath(syspath) == path:
            break
    else:
        sys.path.append(path)
    return os.path.splitext(fname)[0]
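ParseArgs splits on whitespace while honoring double quotes. For simple inputs the stdlib's shlex.split behaves the same way, though its POSIX escaping rules differ in edge cases (backslashes, single quotes), which matters for Windows-style paths:

```python
import shlex

# forward slashes avoid shlex's POSIX backslash-escape handling;
# the quoted path survives as a single argument
print(shlex.split('run "C:/My Files/app.exe" --flag'))
# ['run', 'C:/My Files/app.exe', '--flag']
```

With a backslash path like `C:\My Files\app.exe`, shlex in POSIX mode would consume the backslashes as escapes, which is one reason a hand-rolled scanner like ParseArgs can be preferable on Windows.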
Yesterday morning I went to the dentist’s office. My dentist asked me, “How are you doing?” I could have said, “I’m fine, thank you.” But instead, I decided to give her an honest answer: “It’s been a bit stressful lately,” I said. She looked a little surprised by my response. Obviously she was expecting the usual polite and positive reply. “But it’s all good, right?” she asked. Then she paused for a moment and said, “It’s life. Life is supposed to be stressful.” We both smiled and I relaxed into the dental chair – ready for this stressful event. Indeed, life is not always smooth sailing. It has its ups and downs, and sometimes throws us off balance. External events involving work, relationships and finances can trigger stress responses within us and build up tensions in our body and mind. Over time, these tensions can cause congestion and blockage in the healthy flow of our life and energy. That’s why stress is often cited as the number one cause of various diseases and ailments, including PMS. Dealing with increasing demands from both external and internal sources, many people feel stressed, tired and stuck. Can’t we move any faster or do more? Even if the answer is no, we still try – and usually suffer for it. So last night, despite a long list of “unfinished To-Dos,” I decided to take a break. I went to yoga, took a hot bath when I got home, and went to bed early. Delightfully, I had a restful sleep, and woke up feeling rested, refreshed, and happier. Which areas of your life are causing you stress right now? What can you do to reduce or eliminate these stress factors and bring some balance into your life?
def reverse_dictionary(d):
    rev_d = {}
    for key in d:
        val = d[key]
        rev_d[val] = key
    return rev_d


class BiDict(object):
    """
    Bidirectional dictionary. Allows inverse listings, value -> key,
    as well as the usual key -> value.
    Only supports one-to-one mapping, i.e. unique keys and unique values.
    """

    def __init__(self, dictionary=None, bare=False):
        if dictionary is not None:
            self.fwd = dictionary
            self.bwd = reverse_dictionary(self.fwd)
        elif bare:
            pass
        else:
            self.fwd = {}
            self.bwd = {}

    def __str__(self):
        return str(self.fwd)

    def __getitem__(self, key):
        return self.fwd[key]

    def __setitem__(self, key, value):
        self.fwd[key] = value
        self.bwd[value] = key

    def __delitem__(self, key):
        self.remove(key)

    def __invert__(self):
        return self.inverse()

    def __iter__(self):
        return iter(self.fwd)

    def __iadd__(self, other):
        self.fwd.update(other.fwd)
        self.bwd.update(other.bwd)
        return self

    def __len__(self):
        return len(self.fwd)

    def copy(self):
        new = self.__class__(bare=True)
        new.fwd = self.fwd.copy()
        new.bwd = self.bwd.copy()
        return new

    def keys(self):
        return self.fwd.keys()

    def values(self):
        return self.fwd.values()

    def inverse(self):
        """
        Returns an inverse BiDict of itself.
        Data is still referenced from the parent object.
        Useful for listing data in the reverse direction.
        """
        new = self.__class__(bare=True)
        new.fwd = self.bwd
        new.bwd = self.fwd
        return new

    def remove(self, key):
        val = self.fwd[key]
        if val is not None:
            del self.bwd[val]
        del self.fwd[key]
        return self

    def reduce(self, external_dict):
        """
        Removes those keys and values that correspond to the keys in the
        external dict-like object. Note that the values for a shared key
        may differ between this BiDict and the external object.
        """
        for key in external_dict:
            if key in self.fwd:
                del self.bwd[self.fwd[key]]
                del self.fwd[key]
        return self

    def update(self, external_dict):
        """
        Updates this BiDict with any corresponding or new key/values found
        in the external dict-like object.
        """
        for key in external_dict:
            if key in self.fwd:
                # update the forward dict record
                new_val = external_dict[key]
                old_val = self.fwd[key]
                self.fwd[key] = new_val
                # remove the reverse dict record
                del self.bwd[old_val]
                # and add the new one
                self.bwd[new_val] = key
            else:
                self.fwd[key] = external_dict[key]
                self.bwd[self.fwd[key]] = key
        return self

    def difference(self, external_dict):
        """
        Return a BiDict containing all key-values whose keys are missing
        from the external dict-like object.
        """
        new_dict = self.copy()
        for key in external_dict:
            if key in new_dict:
                del new_dict[key]
        return new_dict
Searching for a stylish and inexpensive Bar-Stools-Counter-? Look no further! Our adaptable and customizable leather and fabric sofas can be found in a variety of colors and styles. Our trained Sales and Support Squad is available to answer your questions - Give us a call! At Wayfair, we carry a wide collection of Bar-Stools-Counter- so that you can choose from various different options for your house. We carry the best Sofas & Loveseats goods, so you can discover one that is just right for you. With this large selection of home goods, you might find something that you'll love. If you are looking for where to buy Sofas & Loveseats online, then you should have no problems finding an excellent option on !
#!/usr/bin/python
# @lint-avoid-python-3-compatibility-imports
#
# qos  implement a dynamic qos for using cgroups
#      For Linux, uses BCC, eBPF.
#
# USAGE: qos.py [-h] [-qc] [--max] [interval]
# requires a file named qos_setup, which can be changed via qosfile
# file has format: maj:min IOPS
# i.e. 8:0 40000
#
# Copyright (c) 2018 Allan McAleavy
# Licensed under the Apache License, Version 2.0 (the "License")

from __future__ import print_function
from bcc import BPF
from time import sleep, strftime
import argparse
import signal
import math
import collections

# arguments
examples = """examples:
    ./qos            # block device I/O QOS, 1 second refresh
    ./qos --max 5000 # set max IOP limit for average I/O size lookup
    ./qos 5          # 5 second summaries
    ./qos --qc 5     # check for qos every 5 seconds
"""
parser = argparse.ArgumentParser(
    description="Block device (disk) I/O by process and QOS",
    formatter_class=argparse.RawDescriptionHelpFormatter,
    epilog=examples)
parser.add_argument("-max", "--max", default=4000,
    help="maximum IOPS")
parser.add_argument("interval", nargs="?", default=1,
    help="output interval, in seconds")
parser.add_argument("count", nargs="?", default=99999999,
    help="number of outputs")
parser.add_argument("-qc", "--qc", default=5,
    help="QOS checktime")
parser.add_argument("--ebpf", action="store_true",
    help=argparse.SUPPRESS)
args = parser.parse_args()
interval = int(args.interval)
countdown = int(args.count)
checktime = int(args.qc)

# linux stats
diskstats = "/proc/diskstats"
rfile = "/sys/fs/cgroup/blkio/blkio.throttle.read_iops_device"
wfile = "/sys/fs/cgroup/blkio/blkio.throttle.write_iops_device"
qosfile = "/root/bcc/tools/qos_setup"

# signal handler
def signal_ignore(signal, frame):
    print()

def write_qos(dsk, typ, max_iops, sleepcnt):
    if sleepcnt > checktime:
        reload_qos(dsk)
    if typ == "W":
        with open(wfile, "w") as tf:
            tf.write("%s %d" % (dsk, max_iops))
    if typ == "R":
        with open(rfile, "w") as tf:
            tf.write("%s %d" % (dsk, max_iops))

# load qos settings at start
diskqos = {}
with open(qosfile) as stats:
    for line in stats:
        a = line.split()
        diskqos[str(a[0])] = a[1]

def reload_qos(dsk):
    with open(qosfile) as stats:
        for line in stats:
            a = line.split()
            diskqos[str(a[0])] = a[1]

def do_qos(avg, iops, typ, dsk, sleepcnt):
    if dsk in diskqos:
        max_iops = int(diskqos[dsk])
    else:
        # argparse stores --max as args.max
        max_iops = int(args.max)

    costs = {4: 100, 8: 160, 16: 270, 32: 500, 64: 1000, 128: 1950,
             256: 3900, 512: 7600, 1024: 15000}
    od = collections.OrderedDict(sorted(costs.items()))
    average_iopsize = float(avg) / 1024
    hbsize = 0

    if average_iopsize >= 1:
        hbsize = int(pow(2, math.ceil(math.log(average_iopsize, 2))))

    if hbsize < 4:
        hbsize = 4

    lbsize = (hbsize / 2)
    if lbsize < 4:
        lbsize = 4

    lbcost = od[lbsize]
    hbcost = od[hbsize]
    costep = float(hbcost - lbcost) / float(lbsize)
    curcost = ((average_iopsize - lbsize) * costep) + lbcost
    max_iops = (od[4] / float(curcost) * max_iops)
    write_qos(dsk, typ, max_iops, sleepcnt)
    return max_iops

# load BPF program
bpf_text = """
#include <uapi/linux/ptrace.h>
#include <linux/blkdev.h>

// the key for the output summary
struct info_t {
    int rwflag;
    int major;
    int minor;
};

// the value of the output summary
struct val_t {
    u64 bytes;
    u32 io;
};

BPF_HASH(counts, struct info_t, struct val_t);

int trace_req_start(struct pt_regs *ctx, struct request *req)
{
    struct val_t *valp, zero = {};
    struct info_t info = {};
    info.major = req->rq_disk->major;
    info.minor = req->rq_disk->first_minor;
#ifdef REQ_WRITE
    info.rwflag = !!(req->cmd_flags & REQ_WRITE);
#elif defined(REQ_OP_SHIFT)
    info.rwflag = !!((req->cmd_flags >> REQ_OP_SHIFT) == REQ_OP_WRITE);
#else
    info.rwflag = !!((req->cmd_flags & REQ_OP_MASK) == REQ_OP_WRITE);
#endif
    if (info.major > 0) {
        valp = counts.lookup_or_init(&info, &zero);
        valp->bytes += req->__data_len;
        valp->io++;
    }
    return 0;
}
"""

if args.ebpf:
    print(bpf_text)
    exit()

b = BPF(text=bpf_text)
b.attach_kprobe(event="blk_start_request", fn_name="trace_req_start")
b.attach_kprobe(event="blk_mq_start_request", fn_name="trace_req_start")

print('Tracing... Output every %d secs. Hit Ctrl-C to end' % interval)

disklookup = {}
with open(diskstats) as stats:
    for line in stats:
        a = line.split()
        disklookup[a[0] + ":" + a[1]] = a[2]

exiting = 0
sleepcnt = 0
diskname = "???"
wiops = 0
riops = 0
ravg = 0
wavg = 0
rqos = 0
wqos = 0
wbytes = 0
rbytes = 0

print("%-8s %-5s %-8s %-8s %-8s %-8s %-8s %-8s %-8s %-8s" %
      ("TIME", "DISK", "RIOPS", "R MB/s", "R_AvgIO", "R_QOS",
       "WIOPS", "W_AvgIO", "W MB/s", "W_QOS"))

while 1:
    try:
        sleep(interval)
    except KeyboardInterrupt:
        exiting = 1

    counts = b.get_table("counts")
    line = 0
    for k, v in sorted(counts.items(), key=lambda counts: counts[1].bytes):
        disk = str(k.major) + ":" + str(k.minor)
        if disk in disklookup:
            diskname = disklookup[disk]
        else:
            diskname = "???"

        if v.io and v.bytes >= 1 and diskname != "???":
            if k.rwflag == 1:
                wiops = v.io
                wavg = (v.bytes / wiops)
                wbytes = v.bytes
                wqos = do_qos(wavg, wiops, "W", disk, sleepcnt)
            else:
                riops = v.io
                ravg = (v.bytes / riops)
                rbytes = v.bytes
                rqos = do_qos(ravg, riops, "R", disk, sleepcnt)

            print("%-8s %-5s %-8d %-8d %-8d %-8d %-8d %-8d %-8d %-8d" %
                  (strftime("%H:%M:%S"), diskname, riops, rbytes / 1048576,
                   ravg / 1024, rqos, wiops, wavg / 1024,
                   wbytes / 1048576, wqos))

    counts.clear()
    wiops = 0
    riops = 0
    ravg = 0
    wavg = 0
    rqos = 0
    wqos = 0
    wbytes = 0
    rbytes = 0

    if sleepcnt > checktime:
        sleepcnt = 0
    else:
        sleepcnt = sleepcnt + 1

    countdown -= 1
    if exiting or countdown == 0:
        print("Detaching...")
        exit()
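The cost interpolation in do_qos above is the heart of the tool: the IOPS budget shrinks as the average I/O size grows, interpolating linearly between the two nearest power-of-two block-size costs and normalizing against the 4 KiB cost. A standalone sketch of that same arithmetic (cost table copied from the script; the helper name is mine) makes it easy to test in isolation:

```python
import collections
import math

# Cost table from qos.py: relative cost per I/O at each block size (KiB).
# The table covers I/O sizes up to 1 MiB, like the original.
COSTS = collections.OrderedDict(
    sorted({4: 100, 8: 160, 16: 270, 32: 500, 64: 1000,
            128: 1950, 256: 3900, 512: 7600, 1024: 15000}.items()))

def scaled_max_iops(avg_bytes, max_iops):
    """Scale max_iops down as the average I/O size grows, mirroring do_qos:
    interpolate linearly between the two nearest power-of-two block-size
    costs, then normalize against the 4 KiB cost."""
    avg_kib = float(avg_bytes) / 1024
    hbsize = 4
    if avg_kib >= 1:
        hbsize = max(4, int(pow(2, math.ceil(math.log(avg_kib, 2)))))
    lbsize = max(4, hbsize // 2)
    lbcost, hbcost = COSTS[lbsize], COSTS[hbsize]
    costep = float(hbcost - lbcost) / float(lbsize)
    curcost = (avg_kib - lbsize) * costep + lbcost
    return COSTS[4] / float(curcost) * max_iops

# A 4 KiB average keeps the full budget; larger I/O shrinks it.
print(scaled_max_iops(4096, 4000))   # 4000.0
print(scaled_max_iops(65536, 4000))  # 400.0 -- 64 KiB I/O costs 10x a 4 KiB I/O
```

With a 64 KiB average, the table cost is 1000 vs. 100 for 4 KiB, so the budget drops to one tenth; this is why a single `maj:min IOPS` line in qos_setup can serve all I/O sizes.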
This catalogue contains 48 different hazard I.D. labels without text. Four sizes, vertical or horizontal configuration. You can view our PDF outlining Size & Configuration Here.
# coding=utf-8
import urllib
import urllib2
import cookielib
import re
import sys

nID = ''
while 1:
    nID = raw_input("Input your id and press Enter plz ")
    if len(nID) != 7:
        print 'wrong length of id, input again'
    else:
        break
Pass = raw_input("Input your password and press Enter plz ")

url = 'http://fuxue.nankai.edu.cn/index.php/assessment/question/mod/show'
urllogin = 'http://fuxue.nankai.edu.cn/index.php/Account/doLogin'
cj = cookielib.CookieJar()
pattern = re.compile(r'<h3>\S*:\S*')
pattern1 = re.compile(r'"[0-9]+" >\S*')
valueslogin = {
    'Host': ' fuxue.nankai.edu.cn',
    'Connection': ' keep-alive',
    'Accept': ' text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'User-Agent': ' Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36',
    'DNT': ' 1',
    'Accept-Encoding': ' gzip,deflate,sdch',
    'Accept-Language': ' zh-CN,zh;q=0.8'
}
postdata = urllib.urlencode({'username': nID, 'password': Pass})
req3 = urllib2.Request(urllogin, headers=valueslogin)
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
response = opener.open(req3, postdata)
print 'Account Checking.........'

if not re.findall(re.escape("url=http://fuxue.nankai.edu.cn/index.php/index/index\' >"), response.read()):
    print 'Password Error'
    raw_input("Press Enter to continue")
    sys.exit(0)

for cookie in cj:
    cookie = cookie.value

IDStart = ''
while 1:
    IDStart = raw_input("Input the first id you want to assess and press Enter plz ")
    if len(IDStart) != 7:
        print 'wrong length of id, input again'
    else:
        break
IDEnd = ''
while 1:
    IDEnd = raw_input("Input the last id you want to assess and press Enter plz ")
    if len(IDEnd) != 7:
        print 'wrong length of id, input again'
    else:
        break

values = {
    'Host': ' fuxue.nankai.edu.cn',
    'Connection': ' keep-alive',
    'Accept': ' text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'User-Agent': ' Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36',
    'DNT': ' 1',
    'Referer': 'http://fuxue.nankai.edu.cn/index.php/assessment/xnmSelfAssessment',
    'Accept-Encoding': ' gzip,deflate,sdch',
    'Accept-Language': ' zh-CN,zh;q=0.8',
    'Cookie': ' PHPSESSID=' + cookie
}
IDS = int(IDStart)
IDE = int(IDEnd)
print 'connecting...................'

count = IDS
strup = 'http://fuxue.nankai.edu.cn/index.php/assessment/appraise_ajax'
Cook = ' PHPSESSID=' + cookie
for i in range(IDS, IDE + 1):
    Re = 'http://fuxue.nankai.edu.cn/index.php/assessment/appraise/num/'
    values2 = {
        'Host': ' fuxue.nankai.edu.cn',
        'Connection': ' keep-alive',
        'Accept': ' */*',
        'Origin': ' http://fuxue.nankai.edu.cn',
        'X-Requested-With': ' XMLHttpRequest',
        'User-Agent': ' Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36',
        'Content-Type': ' application/x-www-form-urlencoded; charset=UTF-8',
        'DNT': ' 1',
        'Referer': Re + str(count),
        'Accept-Encoding': ' gzip,deflate,sdch',
        'Accept-Language': ' zh-CN,zh;q=0.8',
        'Cookie': Cook
    }
    # Search
    req4 = urllib2.Request((Re + str(count)), headers=values)
    content2 = urllib2.urlopen(req4).read()
    url2 = (strup)
    # Upload
    req = urllib2.Request(url2, headers=values2)
    content = urllib2.urlopen(req, urllib.urlencode(
        [('num', str(count)), ('assproid', '9'), ('gong', '6'),
         ('neng1', '6'), ('neng2', '6'), ('neng3', '6'), ('neng4', '6'),
         ('neng5', '6'), ('good1', ''), ('good2', ''), ('good3', ''),
         ('bad1', ''), ('bad2', ''), ('bad3', '')])).read()
    print count
    count = count + 1
raw_input("\nMission Complete\nPress Enter to continue")
Wouldn't it be great if there were one product to fix everything? Even at Rising Sun Workshop the best of us still have skills to master; until then we'll continue to learn from (and cover up) our mistakes until some genius creates the great mythical cover-all. Printed locally on high-quality 100% cotton shirts. Models Bev & Rim wear a Women's M and a Men's M.
""" Module providing a single class (QualysConnectConfig) that parses a config
file and provides the information required to build QualysGuard sessions.
"""
import getpass
import logging
import os
import stat
from configparser import RawConfigParser

import qualysapi.settings as qcs

# Setup module level logging.
logger = logging.getLogger(__name__)

# try:
#     from requests_ntlm import HttpNtlmAuth
# except ImportError, e:
#     logger.warning('Warning: Cannot support NTML authentication.')

__author__ = "Parag Baxi <parag.baxi@gmail.com> & Colin Bell <colin.bell@uwaterloo.ca>"
__copyright__ = "Copyright 2011-2013, Parag Baxi & University of Waterloo"
__license__ = "BSD-new"


class QualysConnectConfig:
    """ Class to create a RawConfigParser and read user/password details from an
    ini file.
    """

    def __init__(
        self,
        filename=qcs.default_filename,
        section="info",
        remember_me=False,
        remember_me_always=False,
        username=None,
        password=None,
        hostname=None,
    ):

        self._cfgfile = None
        self._section = section

        # Prioritize local directory filename.
        # Check for file existence.
        if os.path.exists(filename):
            self._cfgfile = filename
        elif os.path.exists(os.path.join(os.path.expanduser("~"), filename)):
            # Set home path for file.
            self._cfgfile = os.path.join(os.path.expanduser("~"), filename)

        # create RawConfigParser to combine defaults and input from config file.
        self._cfgparse = RawConfigParser(qcs.defaults)

        if self._cfgfile:
            self._cfgfile = os.path.realpath(self._cfgfile)

            mode = stat.S_IMODE(os.stat(self._cfgfile)[stat.ST_MODE])

            # apply bitmask to current mode to check ONLY user access permissions.
            if (mode & (stat.S_IRWXG | stat.S_IRWXO)) != 0:
                logger.warning("%s permissions allow more than user access.", filename)

            self._cfgparse.read(self._cfgfile)

        # if 'info'/ specified section doesn't exist, create the section.
        if not self._cfgparse.has_section(self._section):
            self._cfgparse.add_section(self._section)

        # Use default hostname (if one isn't provided).
        if not self._cfgparse.has_option(self._section, "hostname"):
            if not hostname:
                if self._cfgparse.has_option("DEFAULT", "hostname"):
                    hostname = self._cfgparse.get("DEFAULT", "hostname")
                else:
                    raise Exception(
                        "No 'hostname' set. QualysConnect does not know who to connect to."
                    )
            self._cfgparse.set(self._section, "hostname", hostname)

        # Use default max_retries (if one isn't provided).
        if not self._cfgparse.has_option(self._section, "max_retries"):
            self.max_retries = qcs.defaults["max_retries"]
        else:
            self.max_retries = self._cfgparse.get(self._section, "max_retries")
        try:
            self.max_retries = int(self.max_retries)
        except Exception:
            logger.error("Value max_retries must be an integer.")
            print("Value max_retries must be an integer.")
            exit(1)
        self._cfgparse.set(self._section, "max_retries", str(self.max_retries))

        # Get template ID... user will need to set this to pull back CSV reports
        if not self._cfgparse.has_option(self._section, "template_id"):
            self.report_template_id = qcs.defaults["template_id"]
        else:
            self.report_template_id = self._cfgparse.get(self._section, "template_id")
        try:
            self.report_template_id = int(self.report_template_id)
        except Exception:
            logger.error("Report Template ID must be set and be an integer.")
            print("Value template ID must be an integer.")
            exit(1)
        self._cfgparse.set(self._section, "template_id", str(self.report_template_id))

        # Proxy support
        proxy_config = proxy_url = proxy_protocol = proxy_port = None
        proxy_username = proxy_password = None
        # User requires proxy?
        if self._cfgparse.has_option("proxy", "proxy_url"):
            proxy_url = self._cfgparse.get("proxy", "proxy_url")
            # Remove protocol prefix from url if included.
            for prefix in ("http://", "https://"):
                if proxy_url.startswith(prefix):
                    proxy_protocol = prefix
                    proxy_url = proxy_url[len(prefix):]
            # Default proxy protocol is https.
            if not proxy_protocol:
                proxy_protocol = "https://"
            # Check for proxy port request.
            if ":" in proxy_url:
                # Proxy port already specified in url.
                # Set proxy port.
                proxy_port = proxy_url[proxy_url.index(":") + 1:]
                # Remove proxy port from proxy url.
                proxy_url = proxy_url[: proxy_url.index(":")]
            if self._cfgparse.has_option("proxy", "proxy_port"):
                # Proxy requires specific port.
                if proxy_port:
                    # Warn that a proxy port was already specified in the url.
                    proxy_port_url = proxy_port
                    proxy_port = self._cfgparse.get("proxy", "proxy_port")
                    logger.warning(
                        "Proxy port from url overwritten by specified proxy_port from config: %s --> %s",
                        proxy_port_url,
                        proxy_port,
                    )
                else:
                    proxy_port = self._cfgparse.get("proxy", "proxy_port")
            if not proxy_port:
                # No proxy port specified.
                if proxy_protocol == "http://":
                    # Use default HTTP Proxy port.
                    proxy_port = "8080"
                else:
                    # Use default HTTPS Proxy port.
                    proxy_port = "443"

            # Check for proxy authentication request.
            if self._cfgparse.has_option("proxy", "proxy_username"):
                # Proxy requires username & password.
                proxy_username = self._cfgparse.get("proxy", "proxy_username")
                proxy_password = self._cfgparse.get("proxy", "proxy_password")
                # Not sure if this use case below is valid.
                # # Support proxy with username and empty password.
                # try:
                #     proxy_password = self._cfgparse.get('proxy','proxy_password')
                # except NoOptionError, e:
                #     # Set empty password.
                #     proxy_password = ''

        # Sample proxy config:
        # 'http://user:pass@10.10.1.10:3128'
        if proxy_url:
            # Proxy requested.
            proxy_config = proxy_url
            if proxy_port:
                # Proxy port requested.
                proxy_config += f":{proxy_port}"
            if proxy_username:
                # Proxy authentication requested.
                proxy_config = f"{proxy_username}:{proxy_password}@{proxy_config}"
            # Prefix by proxy protocol.
            proxy_config = proxy_protocol + proxy_config

        # Set up proxy if applicable.
        if proxy_config:
            self.proxies = {"https": proxy_config}
        else:
            self.proxies = None

        # ask username (if one doesn't exist)
        if not self._cfgparse.has_option(self._section, "username"):
            if not username:
                # The next line will pass Bandit, which is required for issue
                # B322:blacklist. QualysAPI no longer works with Python2, so
                # this doesn't apply.
                username = input("QualysGuard Username: ")  # nosec
            self._cfgparse.set(self._section, "username", username)

        # ask password (if one doesn't exist)
        if not self._cfgparse.has_option(self._section, "password"):
            if not password:
                password = getpass.getpass("QualysGuard Password: ")
            self._cfgparse.set(self._section, "password", password)

        logger.debug(self._cfgparse.items(self._section))

        if remember_me or remember_me_always:
            # Let's create that config file for next time...
            # Where to store this?
            if remember_me:
                # Store in current working directory.
                config_path = filename
            if remember_me_always:
                # Store in home directory.
                config_path = os.path.join(os.path.expanduser("~"), filename)
            if not os.path.exists(config_path):
                # Write file only if it doesn't already exist.
                # http://stackoverflow.com/questions/5624359/write-file-with-specific-permissions-in-python
                mode = stat.S_IRUSR | stat.S_IWUSR  # This is 0o600 in octal and 384 in decimal.
                umask_original = os.umask(0)
                try:
                    config_file = os.fdopen(
                        os.open(config_path, os.O_WRONLY | os.O_CREAT, mode), "w"
                    )
                finally:
                    os.umask(umask_original)
                # Add the settings to the structure of the file, and let's write it out...
                self._cfgparse.write(config_file)
                config_file.close()

    def get_config_filename(self):
        return self._cfgfile

    def get_config(self):
        return self._cfgparse

    def get_auth(self):
        """ Returns (username, password) from the configfile. """
        return (
            self._cfgparse.get(self._section, "username"),
            self._cfgparse.get(self._section, "password"),
        )

    def get_hostname(self):
        """ Returns hostname. """
        return self._cfgparse.get(self._section, "hostname")

    def get_template_id(self):
        return self._cfgparse.get(self._section, "template_id")
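The permission check in `__init__` (warning when any group/other bits are set on the config file) and the 0o600 file-creation dance in the remember_me branch are a reusable pair. A minimal stdlib-only sketch of that pattern, independent of qualysapi (the helper name is mine):

```python
import os
import stat
import tempfile

def too_permissive(path):
    """Return True if group or other have any access to path, mirroring
    the (mode & (stat.S_IRWXG | stat.S_IRWXO)) != 0 check above."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) != 0

# Create a scratch file and give it user-only permissions (0o600),
# as the remember_me branch does via os.open(..., S_IRUSR | S_IWUSR).
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
print(too_permissive(path))  # False

# World-readable file triggers the warning condition.
os.chmod(path, 0o644)
print(too_permissive(path))  # True
os.remove(path)
```

Because the credentials file holds a plaintext password, anything other than user-only permissions is worth warning about, which is exactly what the class does.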
ARCO National Construction Company has expanded its national presence with the opening of a New England office. The Boston area based regional office is located at 153 Cordaville Rd., Suite 320 in Southborough, MA. Business development efforts and operational oversight for ARCO National Construction – New England will be led by Vice President and Shareholder, Chris Wilson. Additional staff includes design/build construction professionals, George Green and Jason Grant, who have previously managed projects for ARCO’s largest clients. The company plans to significantly increase local staff over the next 18 months. ARCO’s New England office will support national expansion of its existing clients’ facilities including FedEx Ground, Performance Food Group, DHL and Scannell Properties, as well as new regional clientele. ARCO’s current portfolio in the area is largely industrial, but the company has experience in nearly all industries from entertainment venues such as TopGolf to large multifamily developments and class A offices. “Our expansion into the New England region perfectly represents our ability as a company to grow organically,” says Chris Wilson. “We are excited to bring our expertise to the region and look forward to continuing to build relationships with local clients and subcontractors,” he says. Since opening its doors in 1992, ARCO has grown from a single office in St. Louis, MO to a $2B national construction company with 17 locations coast to coast. The company was recently named the 15th largest Design/Build firm in the U.S. and the 49th largest General Contractor by Engineering News-Record. ARCO provides clients with complete, turnkey project delivery from site selection to building turnover. Having constructed more than 4,000 projects in more than 49 states and Canada, ARCO has developed the industry specific expertise to provide innovative design, creative solutions and uncompromising quality. 
More than 75% of ARCO’s annual revenue comes from repeat clients.
#!/usr/bin/python2
import collections
import argparse
from softwatch import timelog
from softwatch import timenode
import sys
import time
import os
import glob
import traceback
import codecs
import locale


def run1(args):
    run(args[0])


def run(args):
    try:
        # sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout)
        if sys.platform == "win32":
            class UniStream(object):
                __slots__ = ("fobj", "softspace",)

                def __init__(self, fileobject):
                    self.fobj = fileobject
                    # self.fileno = fileobject.fileno()
                    self.softspace = False

                def write(self, text):
                    try:
                        fno = self.fobj.fileno()
                        os.write(fno, text.encode("cp866") if isinstance(text, unicode) else text)
                    except BaseException as e:
                        traceback.print_exc()
                        # self.fobj.write(text)

            sys.stdout = UniStream(sys.stdout)
            sys.stderr = UniStream(sys.stderr)

        print "run " + str(args)
        if args.action == 'log':
            aw = timelog.TimeLog(args)
            aw.monitor_active_window()
            return 0
        if args.action == 'report':
            args.file = args.file or "*.log"
            opts = timenode.TimeQuery(samples=args.samples, tree=args.tree)
            opts.tasks = timenode.loadTasks(os.path.join(args.dir, "tasks.cat"))
            opts.total = timenode.loadTasks(os.path.join(args.dir, "tasks.cat"))
            opts.total.tag = "**ONLINE**"
            opts.min_time = int(args.duration * 60000)
            opts.min_percent = float(args.percent)
            opts.relative = args.relative
            if args.begin:
                opts.min_start = int((time.time() - 3600 * 24 * float(args.begin)) * 1000)
            if args.end:
                opts.max_start = int((time.time() - 3600 * 24 * float(args.end)) * 1000)
            print "directory:" + str(args.dir) + ", file:" + args.file
            # glob already returns paths joined with args.dir.
            logfiles = [f for f in glob.glob(os.path.join(args.dir, args.file))
                        if os.path.isfile(f)]
            logfiles = sorted(logfiles)
            for f in logfiles:
                if (os.path.getmtime(f) * 1000 - opts.min_start >= 0):
                    print "processing " + f
                    opts.process_file(f)
                # else:
                #     print "skipped " + f
            taglist = args.pattern  # [0].split(' ')
            print taglist
            opts.total.query(set(taglist), opts)
            return 0
    except BaseException as e:
        traceback.print_exc()
        var = traceback.format_exc()
        f = open("err", "w")
        f.write(str(e) + "\n" + var)
        f.close()
        print var
        return 1
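The `--begin`/`--end` handling above converts a day offset into a millisecond epoch cutoff: `int((time.time() - 3600*24*days) * 1000)`. Pulling that arithmetic into a helper (name mine, not part of softwatch) makes the unit conversion explicit and testable:

```python
import time

def cutoff_ms(days_ago, now=None):
    """Millisecond epoch timestamp `days_ago` days before `now`,
    matching int((time.time() - 3600*24*days)*1000) in run()."""
    now = time.time() if now is None else now
    return int((now - 3600 * 24 * float(days_ago)) * 1000)

# With a fixed 'now' the arithmetic is exact:
# 1 day = 86,400 s, so (1,000,000 - 86,400) * 1000 = 913,600,000.
print(cutoff_ms(1, now=1000000))  # 913600000
```

Note that the cutoff is then compared against `os.path.getmtime(f) * 1000`, so both sides are in milliseconds since the epoch.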
Vivekanand School cherishes an instinct, reaps a trust, carries a mission and nurtures a dream which every Vivian sees with open eyes: a dream of being an exceptional institution, where a student entrusts his life to his teacher, who, vowing his allegiance to accomplish his goals, leaves no stone unturned. Even in today's ambiance, when culture and traditions witness paradigm shifts, the Guru-Shishya parampara thrives here, with its celebrated dignity.
# -*- coding: utf-8 -*-
from django.utils.translation import ugettext_lazy as _
from django import forms

LEXER_CHOICES = (
    ('', '---------'),
    ('text', 'Text plain'),
    ('c', 'C'),
    ('cpp', 'C++'),
    ('d', 'D'),
    ('csharp', 'C#'),
    ('go', 'Go'),
    ('java', 'Java'),
    ('py', 'Python 2.x'),
    ('py3', 'Python 3.x'),
    ('php', 'PHP'),
    ('pl', 'Perl'),
    ('rb', 'Ruby'),
    ('vala', 'Vala'),
    ('css', 'CSS'),
    ('html', 'HTML/XHTML'),
    ('js', 'JavaScript'),
    ('xml', 'XML'),
    ('html+php', 'HTML+PHP'),
    ('html+django', 'HTML+Django'),
)


class PasteForm(forms.Form):
    paste = forms.CharField(widget=forms.Textarea, label=_(u"Paste content"))
    lexer = forms.ChoiceField(choices=LEXER_CHOICES, label=_(u"Lexer"))
    title = forms.CharField(max_length=100, required=False, label=_(u"Title"))
    group = forms.CharField(max_length=50, required=False, label=_(u"Group"))

    def __init__(self, *args, **kwargs):
        self._request = kwargs.pop('request')
        super(PasteForm, self).__init__(*args, **kwargs)

    def clean(self):
        cleaned_data = self.cleaned_data
        real_ip = "HTTP_X_REAL_IP" in self._request.META and \
            self._request.META['HTTP_X_REAL_IP'] or None
        if not real_ip:
            if "REMOTE_HOST" not in self._request.META:
                real_ip = "127.0.0.1"
            else:
                real_ip = self._request.META['REMOTE_HOST']
        return cleaned_data
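The fallback chain in clean() above (prefer a non-empty X-Real-IP header, then REMOTE_HOST, then loopback) can be expressed as a small function over the request's META dict; a sketch with a plain dict standing in for `request.META` (the function name is mine):

```python
def client_ip(meta):
    """Pick the client address the way PasteForm.clean() does:
    a non-empty HTTP_X_REAL_IP wins, else REMOTE_HOST if present,
    else the loopback address."""
    if meta.get('HTTP_X_REAL_IP'):
        return meta['HTTP_X_REAL_IP']
    if 'REMOTE_HOST' not in meta:
        return '127.0.0.1'
    return meta['REMOTE_HOST']

print(client_ip({'HTTP_X_REAL_IP': '10.0.0.5'}))  # 10.0.0.5
print(client_ip({'REMOTE_HOST': 'example.org'}))  # example.org
print(client_ip({}))                              # 127.0.0.1
```

The `X and Y or None` idiom in the original is the pre-ternary Python 2 spelling of the same preference: an empty header string is falsy, so it falls through to the REMOTE_HOST branch just as this function does.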
Josh Putnam does the delegate math, and abortion arguments at the Supreme Court. Why you should save your Trump-related freakouts until after March 15. On The Gist, Josh Putnam from Frontloading HQ explains what last night’s Super Tuesday results mean when you do the delegate math. He’s a lecturer at the University of Georgia, where he specializes in campaigns and elections. For the Spiel, common abortion arguments that aren’t good arguments. Roku and HBO NOW. Roku players offer the biggest selection of streaming channels, like HBO NOW. Learn more and try HBO NOW free for one month by going to roku.com/thegist.
from typing import Optional, Generator, Any, List

from .db_lib import DbInterface
from .db_types import (FontMapsList, FontUnimapsList, FontsList, KeymapsList,
                       DomainsList, TimezonesList, LocalesList)


class DbFontMaps(DbInterface):
    db_file_name = 'font_maps.csv'
    row_class = FontMapsList
    lines: List[FontMapsList]

    def get(self, value: str, item: str = None) -> Optional[FontMapsList]:
        return super().get(value, item)

    def get_all(self, value: str, item: str = None) -> Generator[FontMapsList, Any, None]:
        # noinspection PyTypeChecker
        return super().get_all(value, item)


class DbFontUnimaps(DbInterface):
    db_file_name = 'font_unimaps.csv'
    row_class = FontUnimapsList
    lines: List[FontUnimapsList]

    def get(self, value: str, item: str = None) -> Optional[FontUnimapsList]:
        return super().get(value, item)

    def get_all(self, value: str, item: str = None) -> Generator[FontUnimapsList, Any, None]:
        # noinspection PyTypeChecker
        return super().get_all(value, item)


class DbFonts(DbInterface):
    db_file_name = 'fonts.csv'
    row_class = FontsList
    lines: List[FontsList]

    def get(self, value: str, item: str = None) -> Optional[FontsList]:
        return super().get(value, item)

    def get_all(self, value: str, item: str = None) -> Generator[FontsList, Any, None]:
        # noinspection PyTypeChecker
        return super().get_all(value, item)


class DbKeymaps(DbInterface):
    db_file_name = 'keymaps.csv'
    row_class = KeymapsList
    lines: List[KeymapsList]

    def get(self, value: str, item: str = None) -> Optional[KeymapsList]:
        return super().get(value, item)

    def get_all(self, value: str, item: str = None) -> Generator[KeymapsList, Any, None]:
        # noinspection PyTypeChecker
        return super().get_all(value, item)


class DbDomains(DbInterface):
    db_file_name = 'domains.csv'
    row_class = DomainsList
    lines: List[DomainsList]

    def get(self, value: str, item: str = None) -> Optional[DomainsList]:
        return super().get(value, item)

    def get_all(self, value: str, item: str = None) -> Generator[DomainsList, Any, None]:
        # noinspection PyTypeChecker
        return super().get_all(value, item)


class DbTimezones(DbInterface):
    db_file_name = 'timezones.csv'
    row_class = TimezonesList
    lines: List[TimezonesList]

    def get(self, value: str, item: str = None) -> Optional[TimezonesList]:
        return super().get(value, item)

    def get_all(self, value: str, item: str = None) -> Generator[TimezonesList, Any, None]:
        # noinspection PyTypeChecker
        return super().get_all(value, item)


class DbLocales(DbInterface):
    db_file_name = 'locales.csv'
    row_class = LocalesList
    lines: List[LocalesList]

    def get(self, value: str, item: str = None) -> Optional[LocalesList]:
        return super().get(value, item)

    def get_all(self, value: str, item: str = None) -> Generator[LocalesList, Any, None]:
        # noinspection PyTypeChecker
        return super().get_all(value, item)
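Each subclass above only narrows the row type over a shared DbInterface; the underlying lookup pattern (get the first row whose field matches, or iterate every match) can be sketched standalone with csv and NamedTuple. TinyDb and KeymapRow are hypothetical stand-ins for DbInterface and a row class like KeymapsList, not part of the package:

```python
import csv
import io
from typing import Iterator, NamedTuple, Optional

class KeymapRow(NamedTuple):
    # Hypothetical two-column row; real row classes carry the CSV's columns.
    name: str
    layout: str

class TinyDb:
    """Minimal stand-in for DbInterface: get() returns the first matching
    row or None, get_all() yields every row whose `item` field == value."""
    def __init__(self, text: str):
        self.lines = [KeymapRow(*r) for r in csv.reader(io.StringIO(text))]

    def get(self, value: str, item: str = 'name') -> Optional[KeymapRow]:
        return next(self.get_all(value, item), None)

    def get_all(self, value: str, item: str = 'name') -> Iterator[KeymapRow]:
        return (row for row in self.lines if getattr(row, item) == value)

db = TinyDb("us,qwerty\nde,qwertz\ndvorak,dvorak\n")
print(db.get('de'))  # KeymapRow(name='de', layout='qwertz')
print([r.name for r in db.get_all('qwerty', 'layout')])  # ['us']
```

The per-subclass `get`/`get_all` overrides in the real module exist purely so type checkers see the concrete row class instead of the base interface's generic return type.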
Available by email 24/7: gogreendistributions@gmail.com

Go Green is a dedicated team with top medical marijuana products and services, delivering near you! Check out our menu and coupons, and feel free to call or email us with any questions!

DJ Short Blueberry is an indica cannabis strain that is short in stature, bearing dense, deep-green buds that contain purple, blue, and some red hues. Its scent and flavor are reminiscent of sweet blueberries, with undertones of skunk. The parent strains of DJ Short Blueberry are a cross between three: the indica Afghani, sativa Purple Thai, and sativa Thai.

Perhaps the champion of sofa-anchor strains, the aromatic GG4 cranks user relaxation to 11. On its fourth iteration, GG4 gets its name from extreme stickiness (that's the THC sticking to pistils, making crystals called trichomes; high concentrations of trichomes will make bud look frosty) and tranquil weight (you get "glued" to wherever you smoked it). When rolled up or sparked in a bowl, GG4 emits plumes of earthy smoke. Its use promotes cerebral sedation. This potent, indica-dominant hybrid almost never happened; Gorilla Glue, the mother strain kicking off the celebrated family tree, was a surprise baby.

Kashmir Kush is an indica-dominant cannabis strain that offers mid-weight sedation and dreamy euphoria. Leading with a complex aroma of burnt wood and ground pepper, it leaves a sweet mixture of coffee, mint, and caramel on the exhale. Enjoy Kashmir Kush later in the day, as its deeply relaxing effects are paired best with blankets and a movie.

Ordered from them twice now and I am just blown away by the quality of the flower. I've been extremely disappointed with the stuff from brick-and-mortar dispensaries, and Go Green's product is just absolute fire across the board.
They're really spot on with their recommendations too. Driver was right on time and friendly both times. Great all around service. drewk - Posted April 18, 2019, 7:19 p.m. Placed the 1st order on Monday and was happy enough to do it again today. Same result - everyone pleasant and helpful, driver communicated well, quality good and we appreciated the extras. Keep up the good work. 4 reviews - Posted April 17, 2019, 5:43 p.m. Nhinhi - Posted April 11, 2019, 10:42 a.m. 1st time patient , very pleased. Prompt, courteous, great products. Thanks for the goodies. Will order again. cwetom - Posted April 5, 2019, 5:18 a.m. I’ve strictly used Go Green over the past 12 months and I continue to be thrilled with their customer service and product. They always communicate back to you promptly about scheduled times, updates, and menu changes. Their product is top notch and always add free goodies to every delivery. Robby G is the best delivery driver out there, shoutout Robby! I will continue to use Go Green for my medicine! tb12gopats - Posted March 27, 2019, 4:40 p.m. I've used Go Green a bunch over the last year and they have proven themselves again and again. Absolutely amazing. Super fast service. The drivers are excellent (Rob is particularly awesome) and the flower and concentrates have been perfect. You should definitely order from these guys and gals. Thanks for being awesome Go Green! CagedChimp - Posted March 26, 2019, 6:41 p.m. Use them twice now. Helpful, Honest, Caring, Kind. What more can you ask for. everything is tops. kea1022 - Posted March 20, 2019, 7:30 a.m. Posted March 13, 2019, 11:59 a.m. 5 reviews - Posted March 10, 2019, 6:44 a.m. budzy - Posted March 8, 2019, 11:33 p.m.
# Copyright (c) 2010 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
#     * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#     * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
#     * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

import itertools
import random
import re

from webkitpy.common.config import irc as config_irc
from webkitpy.common.config import urls
from webkitpy.common.config.committers import CommitterList
from webkitpy.common.net.web import Web
from webkitpy.common.system.executive import ScriptError
from webkitpy.tool.bot.queueengine import TerminateQueue
from webkitpy.tool.grammar import join_with_separators
from webkitpy.tool.grammar import pluralize


def _post_error_and_check_for_bug_url(tool, nicks_string, exception):
    tool.irc().post("%s" % exception)
    bug_id = urls.parse_bug_id(exception.output)
    if bug_id:
        bug_url = tool.bugs.bug_url_for_bug_id(bug_id)
        tool.irc().post("%s: Ugg... Might have created %s" % (nicks_string, bug_url))


# FIXME: Merge with Command?
class IRCCommand(object):
    usage_string = None
    help_string = None

    def execute(self, nick, args, tool, sheriff):
        raise NotImplementedError("subclasses must implement")

    @classmethod
    def usage(cls, nick):
        return "%s: Usage: %s" % (nick, cls.usage_string)

    @classmethod
    def help(cls, nick):
        return "%s: %s" % (nick, cls.help_string)


class CreateBug(IRCCommand):
    usage_string = "create-bug BUG_TITLE"
    help_string = "Creates a Bugzilla bug with the given title."

    def execute(self, nick, args, tool, sheriff):
        if not args:
            return self.usage(nick)

        bug_title = " ".join(args)
        bug_description = "%s\nRequested by %s on %s." % (bug_title, nick, config_irc.channel)

        # There happens to be a committers list hung off of Bugzilla, so
        # re-using that one makes things easiest for now.
        requester = tool.bugs.committers.contributor_by_irc_nickname(nick)
        requester_email = requester.bugzilla_email() if requester else None

        try:
            bug_id = tool.bugs.create_bug(bug_title, bug_description, cc=requester_email, assignee=requester_email)
            bug_url = tool.bugs.bug_url_for_bug_id(bug_id)
            return "%s: Created bug: %s" % (nick, bug_url)
        except Exception, e:
            return "%s: Failed to create bug:\n%s" % (nick, e)


class Help(IRCCommand):
    usage_string = "help [COMMAND]"
    help_string = "Provides help on my individual commands."

    def execute(self, nick, args, tool, sheriff):
        if args:
            for command_name in args:
                if command_name in commands:
                    self._post_command_help(nick, tool, commands[command_name])
        else:
            tool.irc().post("%s: Available commands: %s" % (nick, ", ".join(sorted(visible_commands.keys()))))
            tool.irc().post('%s: Type "%s: help COMMAND" for help on my individual commands.' % (nick, sheriff.name()))

    def _post_command_help(self, nick, tool, command):
        tool.irc().post(command.usage(nick))
        tool.irc().post(command.help(nick))
        aliases = " ".join(sorted(filter(lambda alias: commands[alias] == command and alias not in visible_commands, commands)))
        if aliases:
            tool.irc().post("%s: Aliases: %s" % (nick, aliases))


class Hi(IRCCommand):
    usage_string = "hi"
    help_string = "Responds with hi."

    def execute(self, nick, args, tool, sheriff):
        if len(args) and re.match(sheriff.name() + r'_*\s*!\s*', ' '.join(args)):
            return "%s: hi %s!" % (nick, nick)
        if sheriff.name() == 'WKR':  # For some unknown reason, WKR can't use tool.bugs.quips().
            return "You're doing it wrong"
        quips = tool.bugs.quips()
        quips.append('"Only you can prevent forest fires." -- Smokey the Bear')
        return random.choice(quips)


class PingPong(IRCCommand):
    usage_string = "ping"
    help_string = "Responds with pong."

    def execute(self, nick, args, tool, sheriff):
        return nick + ": pong"


class YouThere(IRCCommand):
    usage_string = "yt?"
    help_string = "Responds with yes."

    def execute(self, nick, args, tool, sheriff):
        return "%s: yes" % nick


class Restart(IRCCommand):
    usage_string = "restart"
    help_string = "Restarts sheriffbot. Will update its WebKit checkout, and re-join the channel momentarily."

    def execute(self, nick, args, tool, sheriff):
        tool.irc().post("Restarting...")
        raise TerminateQueue()


class Rollout(IRCCommand):
    usage_string = "rollout SVN_REVISION [SVN_REVISIONS] REASON"
    help_string = "Opens a rollout bug, CCing author + reviewer, and attaching the reverse-diff of the given revisions marked as commit-queue=?."

    def _extract_revisions(self, arg):
        revision_list = []
        possible_revisions = arg.split(",")
        for revision in possible_revisions:
            revision = revision.strip()
            if not revision:
                continue
            revision = revision.lstrip("r")
            # If one part of the arg isn't in the correct format,
            # then none of the arg should be considered a revision.
            if not revision.isdigit():
                return None
            revision_list.append(int(revision))
        return revision_list

    def _parse_args(self, args):
        if not args:
            return (None, None)

        svn_revision_list = []
        remaining_args = args[:]
        # First process all revisions.
        while remaining_args:
            new_revisions = self._extract_revisions(remaining_args[0])
            if not new_revisions:
                break
            svn_revision_list += new_revisions
            remaining_args = remaining_args[1:]

        # Was there a revision number?
        if not len(svn_revision_list):
            return (None, None)

        # Everything left is the reason.
        rollout_reason = " ".join(remaining_args)
        return svn_revision_list, rollout_reason

    def _responsible_nicknames_from_revisions(self, tool, sheriff, svn_revision_list):
        commit_infos = map(tool.checkout().commit_info_for_revision, svn_revision_list)
        nickname_lists = map(sheriff.responsible_nicknames_from_commit_info, commit_infos)
        return sorted(set(itertools.chain(*nickname_lists)))

    def _nicks_string(self, tool, sheriff, requester_nick, svn_revision_list):
        # FIXME: _parse_args guarantees that our svn_revision_list is all numbers.
        # However, it's possible our checkout will not include one of the revisions,
        # so we may need to catch exceptions from commit_info_for_revision here.
        target_nicks = [requester_nick] + self._responsible_nicknames_from_revisions(tool, sheriff, svn_revision_list)
        return ", ".join(target_nicks)

    def _update_working_copy(self, tool):
        tool.scm().discard_local_changes()
        tool.executive.run_and_throw_if_fail(tool.deprecated_port().update_webkit_command(), quiet=True, cwd=tool.scm().checkout_root)

    def _check_diff_failure(self, error_log, tool):
        if not error_log:
            return None
        revert_failure_message_start = error_log.find("Failed to apply reverse diff for revision")
        if revert_failure_message_start == -1:
            return None
        lines = error_log[revert_failure_message_start:].split('\n')[1:]
        files = list(itertools.takewhile(lambda line: tool.filesystem.exists(tool.scm().absolute_path(line)), lines))
        if files:
            return "Failed to apply reverse diff for %s: %s" % (pluralize(len(files), "file", showCount=False), ", ".join(files))
        return None

    def execute(self, nick, args, tool, sheriff):
        svn_revision_list, rollout_reason = self._parse_args(args)

        if (not svn_revision_list or not rollout_reason):
            return self.usage(nick)

        revision_urls_string = join_with_separators([urls.view_revision_url(revision) for revision in svn_revision_list])
        tool.irc().post("%s: Preparing rollout for %s ..." % (nick, revision_urls_string))

        self._update_working_copy(tool)

        # FIXME: IRCCommand should bind to a tool and have a self._tool like Command objects do.
        # Likewise we should probably have a self._sheriff.
        nicks_string = self._nicks_string(tool, sheriff, nick, svn_revision_list)

        try:
            complete_reason = "%s (Requested by %s on %s)."
% ( rollout_reason, nick, config_irc.channel) bug_id = sheriff.post_rollout_patch(svn_revision_list, complete_reason) bug_url = tool.bugs.bug_url_for_bug_id(bug_id) tool.irc().post("%s: Created rollout: %s" % (nicks_string, bug_url)) except ScriptError, e: tool.irc().post("%s: Failed to create rollout patch:" % nicks_string) diff_failure = self._check_diff_failure(e.output, tool) if diff_failure: return "%s: %s" % (nicks_string, diff_failure) _post_error_and_check_for_bug_url(tool, nicks_string, e) class Whois(IRCCommand): usage_string = "whois SEARCH_STRING" help_string = "Searches known contributors and returns any matches with irc, email and full name. Wild card * permitted." def _full_record_and_nick(self, contributor): result = '' if contributor.irc_nicknames: result += ' (:%s)' % ', :'.join(contributor.irc_nicknames) if contributor.can_review: result += ' (r)' elif contributor.can_commit: result += ' (c)' return unicode(contributor) + result def execute(self, nick, args, tool, sheriff): if not args: return self.usage(nick) search_string = unicode(" ".join(args)) # FIXME: We should get the ContributorList off the tool somewhere. contributors = CommitterList().contributors_by_search_string(search_string) if not contributors: return unicode("%s: Sorry, I don't know any contributors matching '%s'.") % (nick, search_string) if len(contributors) > 5: return unicode("%s: More than 5 contributors match '%s', could you be more specific?") % (nick, search_string) if len(contributors) == 1: contributor = contributors[0] if not contributor.irc_nicknames: return unicode("%s: %s hasn't told me their nick. Boo hoo :-(") % (nick, contributor) return unicode("%s: %s is %s. Why do you ask?") % (nick, search_string, self._full_record_and_nick(contributor)) contributor_nicks = map(self._full_record_and_nick, contributors) contributors_string = join_with_separators(contributor_nicks, only_two_separator=" or ", last_separator=', or ') return unicode("%s: I'm not sure who you mean? 
%s could be '%s'.") % (nick, contributors_string, search_string) # FIXME: Lame. We should have an auto-registering CommandCenter. visible_commands = { "create-bug": CreateBug, "help": Help, "hi": Hi, "ping": PingPong, "restart": Restart, "rollout": Rollout, "whois": Whois, "yt?": YouThere, } # Add revert as an "easter egg" command. Why? # revert is the same as rollout and it would be confusing to list both when # they do the same thing. However, this command is a very natural thing for # people to use and it seems silly to have them hunt around for "rollout" instead. commands = visible_commands.copy() commands["revert"] = Rollout # "hello" Alias for "hi" command for the purposes of testing aliases commands["hello"] = Hi
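The `commands` table above drives dispatch by keyword: the first IRC token selects a command class, and aliases like `revert` and `hello` are just extra keys pointing at the same class object. A minimal, self-contained sketch of that pattern (these tiny classes are illustrative stand-ins, not the webkitpy ones):

```python
# Sketch of the keyword -> command-class dispatch used above.
class Ping(object):
    def execute(self, nick, args):
        return nick + ": pong"

class Hi(object):
    def execute(self, nick, args):
        return "%s: hi!" % nick

visible_commands = {"ping": Ping, "hi": Hi}
commands = dict(visible_commands)
commands["hello"] = Hi  # hidden alias, same class object

def dispatch(nick, line):
    tokens = line.split()
    command_class = commands.get(tokens[0]) if tokens else None
    if command_class is None:
        return "%s: unknown command" % nick
    return command_class().execute(nick, tokens[1:])

print(dispatch("alice", "ping"))   # alice: pong
print(dispatch("bob", "hello"))    # bob: hi!
```

Because aliases share the class object, `Help._post_command_help` can recover them by scanning `commands` for keys mapping to the same class but absent from `visible_commands`.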
A white T-shirt with a unique graphic design that shares a positive message? Say no more! We are sold! The Martha Women’s Christian T-Shirts are a stylish way to wear your faith and share great messages with those around you. The women’s t-shirt also supports a great cause: your purchase will go towards donating care packages to individuals in need of our help. The packages include hygiene products that these individuals need to survive. With the delivery of your clothing purchase, we will include a free prayer bracelet with the name of the individual you helped, along with a free prayer request. Contact Malachi Clothing today to speak with one of our sales representatives! Our team members can help you make your purchase, tell you more about our company, and explain our donation process further. Or, visit our online store at http://malachiclothing.com/ to view our entire collection of men’s, women’s, and children’s clothing and accessories. There, you can purchase the Martha Women’s Christian T-Shirts or any other item of clothing you desire. Images and videos with more information about our company and clothing are also available online. We look forward to making a global difference with you and to hearing from you!
#!/usr/bin/env python3.5
# -*- coding: utf-8 -*-
# Author: ChenLiang

import socket
import os
import json

ip_port = ('127.0.0.1', 8009)

# Buy a phone (create the socket)
s = socket.socket()
# Dial (connect to the server)
s.connect(ip_port)

# Receive the server's welcome message
welcome_msg = s.recv(1024)
print("from server:", welcome_msg.decode())

while True:
    send_data = input(">>: ").strip()
    if len(send_data) == 0:
        continue
    cmd_list = send_data.split()
    if len(cmd_list) < 2:
        continue
    task_type = cmd_list[0]
    if task_type == 'put':
        abs_filepath = cmd_list[1]
        if os.path.isfile(abs_filepath):
            file_size = os.stat(abs_filepath).st_size
            filename = abs_filepath.split("\\")[-1]
            print('file:%s size:%s' % (abs_filepath, file_size))
            msg_data = {"action": "put",
                        "filename": filename,
                        "file_size": file_size}
            s.send(bytes(json.dumps(msg_data), encoding="utf-8"))

            server_confirmation_msg = s.recv(1024)
            confirm_data = json.loads(server_confirmation_msg.decode())
            if confirm_data['status'] == 200:
                print("start sending file ", filename)
                f = open(abs_filepath, 'rb')
                for line in f:
                    s.send(line)
                print("send file done ")
        else:
            print("\033[31;1mfile [%s] does not exist\033[0m" % abs_filepath)
            continue
    else:
        print("doesn't support task type", task_type)
        continue

    # Receive the reply
    recv_data = s.recv(1024)
    print(str(recv_data, encoding='utf8'))

# Hang up (close the socket)
s.close()
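The client above frames each upload as a small JSON header (`action`, `filename`, `file_size`) and waits for a status-200 confirmation before streaming the file bytes. The header round trip on its own (field names are taken from the code; the two helper functions are mine, for illustration):

```python
import json

def build_put_header(filename, file_size):
    # Same fields the client sends before streaming the file body.
    return json.dumps({"action": "put",
                       "filename": filename,
                       "file_size": file_size}).encode("utf-8")

def parse_put_header(raw):
    # What the server side would do with the received header bytes.
    msg = json.loads(raw.decode("utf-8"))
    assert msg["action"] == "put"
    return msg["filename"], msg["file_size"]

header = build_put_header("report.txt", 4096)
name, size = parse_put_header(header)
print(name, size)  # report.txt 4096
```

The `file_size` field is what lets the receiver know when the raw byte stream that follows the header is complete, since the socket itself carries no message boundaries.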
LeanBigData partners are organizing the BOSS’16 workshop (Big Data Open Source Systems) at VLDB 2016. The results of the project will be presented in the Polyglot Data Management session. LeanBigData also organized, jointly with CoherentPaaS, its second public workshop, RTPBD 2016, targeting the integration of polyglot persistence and the blending of the OLTP and OLAP worlds. The goal of the workshop was to present the projects’ main results. The event was collocated with the DisCoTec conference.
# -*- coding: utf8 -*-

__author__ = 'sergey'

from dedupsqlfs.db.mysql.table import Table


class TableTmpIds(Table):

    _engine = "MEMORY"

    _table_name = "tmp_ids"

    def create(self):
        c = self.getCursor()
        # Create table
        c.execute(
            "CREATE TABLE IF NOT EXISTS `%s` (" % self.getName() +
            "`id` BIGINT UNSIGNED PRIMARY KEY" +
            ")" +
            self._getCreationAppendString()
        )
        return

    def insert(self, some_id):
        """
        :return: int
        """
        self.startTimer()
        cur = self.getCursor()
        cur.execute(
            "INSERT INTO `%s` " % self.getName() +
            " (`id`) VALUES (%(id)s)",
            {
                "id": some_id,
            }
        )
        item = cur.lastrowid
        self.stopTimer('insert')
        return item

    def find(self, some_id):
        """
        :param some_id: int
        :return: int
        """
        self.startTimer()
        cur = self.getCursor()
        cur.execute(
            "SELECT `id` FROM `%s` " % self.getName() +
            " WHERE `id`=%(id)s",
            {
                "id": some_id
            }
        )
        item = cur.fetchone()
        self.stopTimer('find')
        return item

    def get_ids_by_ids(self, ids):
        self.startTimer()

        ret_ids = ()
        id_str = ",".join(ids)
        if id_str:
            cur = self.getCursor()
            cur.execute(
                "SELECT `id` FROM `%s` " % self.getName() +
                " WHERE `id` IN (%s)" % (id_str,)
            )
            ret_ids = set(str(item["id"]) for item in cur)

        self.stopTimer('get_ids_by_ids')
        return ret_ids
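`get_ids_by_ids` above intersects a candidate id set with the table via a single `IN (...)` query. The same shape can be sketched against an in-memory SQLite table (SQLite stands in for MySQL here, the dedupsqlfs cursor plumbing is omitted, and parameter placeholders replace the string-interpolated id list as a safer variant of the same query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tmp_ids (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO tmp_ids (id) VALUES (?)", [(1,), (2,), (5,)])

def get_ids_by_ids(ids):
    # Return only the ids from `ids` that actually exist in the table.
    if not ids:
        return set()
    placeholders = ",".join("?" * len(ids))
    rows = conn.execute(
        "SELECT id FROM tmp_ids WHERE id IN (%s)" % placeholders, ids)
    return set(row[0] for row in rows)

print(get_ids_by_ids([1, 3, 5]))  # {1, 5}
```

With the `MEMORY` engine used above, this kind of temporary id table makes large set intersections cheap compared to issuing one `find` per id.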
We are a digital marketing firm specializing in Search Engine Optimization (SEO) for small businesses and startups. Providing high return-on-investment (ROI) SEO for our clients is our primary focus. To demonstrate our SEO ability, we create our own websites and rank them high on Google. These websites also give us a platform to test various SEO techniques and theories. Because we create our own websites from scratch, we understand what it means to be a startup or a small business. This is why we guide our clients to focus on high-ROI SEO activities first. SEO can be broadly divided into three steps, as shown below. Our approach is to understand the client’s business needs and create a tailored strategy within these three SEO steps.
""" Draws a vector or "quiver" plot of a set of random points. - Left-drag pans the plot. - Mousewheel up and down zooms the plot in and out. - Pressing "z" brings up the Zoom Box, and you can click-drag a rectangular region to zoom. If you use a sequence of zoom boxes, pressing alt-left-arrow and alt-right-arrow moves you forwards and backwards through the "zoom history". """ # Major library imports from numpy import array, sort from numpy.random import random # Enthought library imports from enable.api import Component, ComponentEditor from traits.api import HasTraits, Instance, Int from traitsui.api import Item, View # Chaco imports from chaco.api import add_default_grids, add_default_axes, ArrayPlotData, \ Plot, OverlayPlotContainer from chaco.tools.api import PanTool, ZoomTool class PlotExample(HasTraits): plot = Instance(Component) numpts = Int(400) vectorlen = Int(15) traits_view = View(Item('plot', editor=ComponentEditor(), show_label=False), width=600, height=600) def _plot_default(self): # Create starting points for the vectors. numpts = self.numpts x = sort(random(numpts)) y = random(numpts) # Create vectors. vectorlen = self.vectorlen vectors = array((random(numpts)*vectorlen, random(numpts)*vectorlen)).T data = ArrayPlotData() data.set_data('index', x) data.set_data('value', y) data.set_data('vectors', vectors) quiverplot = Plot(data) quiverplot.quiverplot(('index', 'value', 'vectors')) # Attach some tools to the plot quiverplot.tools.append(PanTool(quiverplot, constrain_key="shift")) zoom = ZoomTool(quiverplot) quiverplot.overlays.append(zoom) container = OverlayPlotContainer(quiverplot, padding=50) return container demo = PlotExample() if __name__ == "__main__": demo.configure_traits()
To win in San Diego, QB Joel Stave needs to avoid interceptions. Paul Chryst’s first season as head coach of the University of Wisconsin football team is wrapping up the same way the previous 13 Badgers seasons have ended: with a bowl game. Granted, the Holiday Bowl — in which UW will meet the University of Southern California at San Diego’s Qualcomm Stadium on Dec. 30 — isn’t as high-profile as one of the New Year’s Day bowl games, but it’s not the Quick Lane Bowl in Detroit, either. Incidentally, the Badgers’ streak of 14 consecutive bowl games is best in the Big Ten and sixth best in the country. Wisconsin brings a 9-3 record into the matchup with 8-5 USC. All of the Badgers’ losses this season came at the hands of nationally ranked teams (Alabama, Iowa and Northwestern), and the Hawkeyes and Wildcats won by a combined total of only 10 points. While USC landed in the last spot of the season’s final College Football Playoff rankings at No. 25, the Trojans still will prove a formidable opponent. In fact, they’re favored (and the school’s campus is just 110 miles from Qualcomm Stadium). USC’s offense, led by senior quarterback Cody Kessler, averaged almost 450 total yards and 35 points per game this season, and twice scored more than 50 points. The Badgers can counter that offensive attack with a defense anchored by senior outside linebacker Joe Schobert, who continues to rack up All-Big Ten and All-America honors. His season stats include 18.5 tackles for a loss (seventh in the country), 9.5 sacks (11th) and five forced fumbles (second). If quarterback Joel Stave can throw more touchdown passes than interceptions — by no means a sure thing — and a healthy Corey Clement returns to the running back position (after sitting out much of the season with a sports hernia and a disorderly conduct charge), look for Wisconsin to embrace its underdog status and beat USC. Fun fact: The last time UW and USC met was nearly a half-century ago, on Sept. 
24, 1966, when the Trojans stomped all over the Badgers, 38-3. UW has never before played in the Holiday Bowl, but Chryst spent three years working at Qualcomm as tight ends coach for the NFL’s San Diego Chargers, from 1999 to 2001. So he’s familiar with the environs, which otherwise shape up to be a home game for USC. But, as always, expect to see a large contingent of fans clad in red and white in the stands, too.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
"""Hook for Google Cloud Life Sciences service"""

import time
from typing import Any, Dict, Optional

import google.api_core.path_template
from googleapiclient.discovery import build

from airflow.exceptions import AirflowException
from airflow.providers.google.common.hooks.base_google import GoogleBaseHook

# Time to sleep between active checks of the operation results
TIME_TO_SLEEP_IN_SECONDS = 5


# noinspection PyAbstractClass
class LifeSciencesHook(GoogleBaseHook):
    """
    Hook for the Google Cloud Life Sciences APIs.

    All the methods in the hook where project_id is used must be called with
    keyword arguments rather than positional.

    :param api_version: API version used (for example v1 or v1beta1).
    :type api_version: str
    :param gcp_conn_id: The connection ID to use when fetching connection info.
    :type gcp_conn_id: str
    :param delegate_to: The account to impersonate, if any.
        For this to work, the service account making the request must have
        domain-wide delegation enabled.
    :type delegate_to: str
    """

    _conn = None  # type: Optional[Any]

    def __init__(
        self,
        api_version: str = "v2beta",
        gcp_conn_id: str = "google_cloud_default",
        delegate_to: Optional[str] = None
    ) -> None:
        super().__init__(gcp_conn_id, delegate_to)
        self.api_version = api_version

    def get_conn(self):
        """
        Retrieves the connection to Cloud Life Sciences.

        :return: Google Cloud Life Sciences service object.
        """
        if not self._conn:
            http_authorized = self._authorize()
            self._conn = build("lifesciences", self.api_version,
                               http=http_authorized, cache_discovery=False)
        return self._conn

    @GoogleBaseHook.fallback_to_default_project_id
    def run_pipeline(self, body: Dict, location: str, project_id: str):
        """
        Runs a pipeline

        :param body: The request body.
        :type body: dict
        :param location: The location of the project. For example: "us-east1".
        :type location: str
        :param project_id: Optional, Google Cloud Project project_id where the
            function belongs. If set to None or missing, the default project_id
            from the GCP connection is used.
        :type project_id: str
        :rtype: dict
        """
        parent = self._location_path(project_id=project_id, location=location)
        service = self.get_conn()

        request = (service.projects()  # pylint: disable=no-member
                   .locations()
                   .pipelines()
                   .run(parent=parent, body=body)
                   )

        response = request.execute(num_retries=self.num_retries)

        # wait
        operation_name = response['name']
        self._wait_for_operation_to_complete(operation_name)

        return response

    @GoogleBaseHook.fallback_to_default_project_id
    def _location_path(self, project_id: str, location: str):
        """
        Return a location string.

        :param project_id: Optional, Google Cloud Project project_id where the
            function belongs. If set to None or missing, the default project_id
            from the GCP connection is used.
        :type project_id: str
        :param location: The location of the project. For example: "us-east1".
        :type location: str
        """
        return google.api_core.path_template.expand(
            'projects/{project}/locations/{location}',
            project=project_id,
            location=location,
        )

    def _wait_for_operation_to_complete(self, operation_name: str) -> None:
        """
        Waits for the named operation to complete - checks status of the
        asynchronous call.

        :param operation_name: The name of the operation.
        :type operation_name: str
        :return: The response returned by the operation.
        :rtype: dict
        :exception: AirflowException in case error is returned.
        """
        service = self.get_conn()
        while True:
            operation_response = (service.projects()  # pylint: disable=no-member
                                  .locations()
                                  .operations()
                                  .get(name=operation_name)
                                  .execute(num_retries=self.num_retries))
            self.log.info('Waiting for pipeline operation to complete')
            if operation_response.get("done"):
                response = operation_response.get("response")
                error = operation_response.get("error")
                # Note, according to documentation always either response or error is
                # set when "done" == True
                if error:
                    raise AirflowException(str(error))
                return response
            time.sleep(TIME_TO_SLEEP_IN_SECONDS)
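`_wait_for_operation_to_complete` is a plain long-running-operation poll loop: fetch the operation, sleep while `done` is false, then either raise on `error` or return `response`. The same loop, isolated from the Google API client for clarity (the `fetch` callable is a stand-in for the `operations().get(...).execute()` chain):

```python
import time

def wait_for_operation(fetch, sleep_seconds=0):
    # `fetch` returns the current operation dict on each poll.
    while True:
        op = fetch()
        if op.get("done"):
            # Per the LRO contract, exactly one of response/error is set
            # once "done" is true.
            if op.get("error"):
                raise RuntimeError(str(op["error"]))
            return op.get("response")
        time.sleep(sleep_seconds)

# Simulate two polls: one in-progress, one finished with a response.
polls = iter([{"done": False}, {"done": True, "response": {"ok": 1}}])
result = wait_for_operation(lambda: next(polls))
print(result)  # {'ok': 1}
```

In the hook the sleep interval is the module constant `TIME_TO_SLEEP_IN_SECONDS`, and the failure path raises `AirflowException` so the task is marked failed rather than hanging.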
Conditions for civilians in Yemen’s capital city of Sana’a are “deteriorating by the hour” after six days of violence, the International Committee for the Red Cross said Monday, as the United Nations called for a “humanitarian pause” in the fighting. Monday saw a dramatic change in the more than two year conflict as former President Ali Abdullah Saleh, who until several days ago had been allied with Houthi rebels, was reportedly killed, according to Houthi media. Unverified images and video circulated on social media of what appeared to be Saleh’s body with a severe head wound. But days of clashes that began before Saleh’s offer, in addition to airstrikes Riyadh said were to aid the former president against the Houthis, have reportedly left civilians trapped in their homes, unable to seek help, move to safer locations or get out to buy food and water. Most aid workers in the city are trapped too and unable to venture out. The ICRC said in a statement that 125 people had been killed and 238 wounded in the latest round of violence, adding that “the targeting of our main medical warehouse by the fighting is hampering our work.” The ICRC is supporting medical teams in Sana’a’s hospitals. “The wounded must be afforded safe access to medical care,” he said in a Monday statement. There have been multiple calls for humanitarian pauses in Yemen’s war, with varying degrees of success. Aid workers said a May 2015 break in the fighting allowed for the distribution of some supplies but was not enough to make a real difference. More than two years later, with more than 5,350 civilians (and counting) killed in violence, 2,223 Yemenis dead of cholera since April, and warnings of famine, the situation is much worse. The UN says Yemen is the world’s largest humanitarian catastrophe and while a coalition blockade on aid (ostensibly to prevent arms smuggling) has been eased, the majority of commercial ships are still not being allowed into the country’s Red Sea ports.
# vim: noexpandtab:ts=4:sw=4

# This file is part of ReText
# Copyright: Dmitry Shachnev 2012-2015
# License: GNU GPL v2 or higher

from ReText import monofont, globalSettings, tablemode, DOCTYPE_MARKDOWN

from PyQt5.QtCore import QPoint, QSize, Qt
from PyQt5.QtGui import QColor, QPainter, QPalette, QTextCursor, QTextFormat
from PyQt5.QtWidgets import QLabel, QTextEdit, QWidget

def documentIndentMore(document, cursor, globalSettings=globalSettings):
	if cursor.hasSelection():
		block = document.findBlock(cursor.selectionStart())
		end = document.findBlock(cursor.selectionEnd()).next()
		cursor.beginEditBlock()
		while block != end:
			cursor.setPosition(block.position())
			if globalSettings.tabInsertsSpaces:
				cursor.insertText(' ' * globalSettings.tabWidth)
			else:
				cursor.insertText('\t')
			block = block.next()
		cursor.endEditBlock()
	else:
		indent = globalSettings.tabWidth - (cursor.positionInBlock()
			% globalSettings.tabWidth)
		if globalSettings.tabInsertsSpaces:
			cursor.insertText(' ' * indent)
		else:
			cursor.insertText('\t')

def documentIndentLess(document, cursor, globalSettings=globalSettings):
	if cursor.hasSelection():
		block = document.findBlock(cursor.selectionStart())
		end = document.findBlock(cursor.selectionEnd()).next()
	else:
		block = document.findBlock(cursor.position())
		end = block.next()
	cursor.beginEditBlock()
	while block != end:
		cursor.setPosition(block.position())
		if document.characterAt(cursor.position()) == '\t':
			cursor.deleteChar()
		else:
			pos = 0
			while document.characterAt(cursor.position()) == ' ' \
			and pos < globalSettings.tabWidth:
				pos += 1
				cursor.deleteChar()
		block = block.next()
	cursor.endEditBlock()

class ReTextEdit(QTextEdit):
	def __init__(self, parent):
		QTextEdit.__init__(self)
		self.parent = parent
		self.undoRedoActive = False
		self.tableModeEnabled = False
		self.setFont(monofont)
		self.setAcceptRichText(False)
		self.marginx = (self.cursorRect(self.cursorForPosition(QPoint())).topLeft().x()
			+ self.fontMetrics().width(" "*globalSettings.rightMargin))
		self.lineNumberArea = LineNumberArea(self)
		self.infoArea = InfoArea(self)
		self.document().blockCountChanged.connect(self.updateLineNumberAreaWidth)
		self.updateLineNumberAreaWidth()
		self.cursorPositionChanged.connect(self.highlightCurrentLine)
		self.document().contentsChange.connect(self.contentsChange)

	def paintEvent(self, event):
		if not globalSettings.rightMargin:
			return QTextEdit.paintEvent(self, event)
		painter = QPainter(self.viewport())
		painter.setPen(QColor(220, 210, 220))
		y1 = self.rect().topLeft().y()
		y2 = self.rect().bottomLeft().y()
		painter.drawLine(self.marginx, y1, self.marginx, y2)
		QTextEdit.paintEvent(self, event)

	def scrollContentsBy(self, dx, dy):
		QTextEdit.scrollContentsBy(self, dx, dy)
		self.lineNumberArea.repaint()

	def lineNumberAreaPaintEvent(self, event):
		painter = QPainter(self.lineNumberArea)
		painter.fillRect(event.rect(), Qt.cyan)
		cursor = QTextCursor(self.document())
		cursor.movePosition(QTextCursor.Start)
		atEnd = False
		while not atEnd:
			rect = self.cursorRect(cursor)
			block = cursor.block()
			if block.isVisible():
				number = str(cursor.blockNumber() + 1)
				painter.setPen(Qt.darkCyan)
				painter.drawText(0, rect.top(), self.lineNumberArea.width()-2,
					self.fontMetrics().height(), Qt.AlignRight, number)
			cursor.movePosition(QTextCursor.EndOfBlock)
			atEnd = cursor.atEnd()
			if not atEnd:
				cursor.movePosition(QTextCursor.NextBlock)

	def getHighlighter(self):
		return self.parent.highlighters[self.parent.ind]

	def contextMenuEvent(self, event):
		text = self.toPlainText()
		dictionary = self.getHighlighter().dictionary
		if (dictionary is None) or not text:
			return QTextEdit.contextMenuEvent(self, event)
		oldcursor = self.textCursor()
		cursor = self.cursorForPosition(event.pos())
		pos = cursor.positionInBlock()
		if pos == len(text):
			pos -= 1
		curchar = text[pos]
		isalpha = curchar.isalpha()
		cursor.select(QTextCursor.WordUnderCursor)
		if not isalpha or (oldcursor.hasSelection() and
		oldcursor.selectedText() != cursor.selectedText()):
			return QTextEdit.contextMenuEvent(self, event)
		self.setTextCursor(cursor)
		word = cursor.selectedText()
		if not word or dictionary.check(word):
			self.setTextCursor(oldcursor)
			return QTextEdit.contextMenuEvent(self, event)
		suggestions = dictionary.suggest(word)
		actions = [self.parent.act(sug, trig=self.fixWord(sug)) for sug in suggestions]
		menu = self.createStandardContextMenu()
		menu.insertSeparator(menu.actions()[0])
		for action in actions[::-1]:
			menu.insertAction(menu.actions()[0], action)
		menu.exec(event.globalPos())

	def fixWord(self, correctword):
		return lambda: self.insertPlainText(correctword)

	def keyPressEvent(self, event):
		key = event.key()
		cursor = self.textCursor()
		if event.text() and self.tableModeEnabled:
			cursor.beginEditBlock()
		if key == Qt.Key_Tab:
			documentIndentMore(self.document(), cursor)
		elif key == Qt.Key_Backtab:
			documentIndentLess(self.document(), cursor)
		elif key == Qt.Key_Return and not cursor.hasSelection():
			if event.modifiers() & Qt.ShiftModifier:
				# Insert Markdown-style line break (two trailing spaces)
				markupClass = self.parent.getMarkupClass()
				if markupClass and markupClass.name == DOCTYPE_MARKDOWN:
					cursor.insertText('  ')
			if event.modifiers() & Qt.ControlModifier:
				cursor.insertText('\n')
			else:
				self.handleReturn(cursor)
		else:
			QTextEdit.keyPressEvent(self, event)
		if event.text() and self.tableModeEnabled:
			cursor.endEditBlock()

	def handleReturn(self, cursor):
		# Select text between the cursor and the line start
		cursor.movePosition(QTextCursor.StartOfBlock, QTextCursor.KeepAnchor)
		text = cursor.selectedText()
		length = len(text)
		pos = 0
		while pos < length and (text[pos] in (' ', '\t')
		or text[pos:pos+2] in ('* ', '- ')):
			pos += 1
		# Reset the cursor
		cursor = self.textCursor()
		cursor.insertText('\n'+text[:pos])
		self.ensureCursorVisible()

	def lineNumberAreaWidth(self):
		if not globalSettings.lineNumbersEnabled:
			return 0
		cursor = QTextCursor(self.document())
		cursor.movePosition(QTextCursor.End)
		digits = len(str(cursor.blockNumber() + 1))
		return 5 + self.fontMetrics().width('9') * digits

	def updateLineNumberAreaWidth(self, blockcount=0):
		self.lineNumberArea.repaint()
		self.setViewportMargins(self.lineNumberAreaWidth(), 0, 0, 0)

	def resizeEvent(self, event):
		QTextEdit.resizeEvent(self, event)
		rect = self.contentsRect()
		self.lineNumberArea.setGeometry(rect.left(), rect.top(),
			self.lineNumberAreaWidth(), rect.height())
		self.infoArea.updateTextAndGeometry()

	def highlightCurrentLine(self):
		if not globalSettings.highlightCurrentLine:
			return self.setExtraSelections([])
		selection = QTextEdit.ExtraSelection()
		lineColor = QColor(255, 255, 200)
		selection.format.setBackground(lineColor)
		selection.format.setProperty(QTextFormat.FullWidthSelection, True)
		selection.cursor = self.textCursor()
		selection.cursor.clearSelection()
		self.setExtraSelections([selection])

	def enableTableMode(self, enable):
		self.tableModeEnabled = enable

	def backupCursorPositionOnLine(self):
		return self.textCursor().positionInBlock()

	def restoreCursorPositionOnLine(self, positionOnLine):
		cursor = self.textCursor()
		cursor.setPosition(cursor.block().position() + positionOnLine)
		self.setTextCursor(cursor)

	def contentsChange(self, pos, removed, added):
		if self.tableModeEnabled:
			markupClass = self.parent.getMarkupClass()
			docType = markupClass.name if markupClass else None
			cursorPosition = self.backupCursorPositionOnLine()
			tablemode.adjustTableToChanges(self.document(), pos,
				added - removed, docType)
			self.restoreCursorPositionOnLine(cursorPosition)

class LineNumberArea(QWidget):
	def __init__(self, editor):
		QWidget.__init__(self, editor)
		self.editor = editor

	def sizeHint(self):
		return QSize(self.editor.lineNumberAreaWidth(), 0)

	def paintEvent(self, event):
		if globalSettings.lineNumbersEnabled:
			return self.editor.lineNumberAreaPaintEvent(event)

class InfoArea(QLabel):
	def __init__(self, editor):
		QWidget.__init__(self, editor)
		self.editor = editor
		self.editor.cursorPositionChanged.connect(self.updateTextAndGeometry)
		self.updateTextAndGeometry()
		self.setAutoFillBackground(True)
		palette = self.palette()
		palette.setColor(QPalette.Window, QColor(0xaa, 0xff, 0x55, 0xaa))
		self.setPalette(palette)

	def updateTextAndGeometry(self):
		text = self.getText()
		self.setText(text)
		viewport = self.editor.viewport()
		metrics = self.fontMetrics()
		width = metrics.width(text)
		height = metrics.height()
		self.resize(width, height)
		rightSide = viewport.width() + self.editor.lineNumberAreaWidth()
		self.move(rightSide - width, viewport.height() - height)
		self.setVisible(not globalSettings.useFakeVim)

	def getText(self):
		template = '%d : %d'
		cursor = self.editor.textCursor()
		block = cursor.blockNumber() + 1
		position = cursor.positionInBlock()
		return template % (block, position)
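`handleReturn` copies the leading whitespace and any `* ` / `- ` list bullet from the current line onto the new one, so indented text and Markdown lists continue automatically. The prefix scan on its own, as pure string logic with no Qt (the function name is mine):

```python
def continuation_prefix(text):
    # Mirror of the scan in handleReturn: advance while the character is
    # indentation or the start of a "* " / "- " bullet.
    pos = 0
    length = len(text)
    while pos < length and (text[pos] in (' ', '\t')
                            or text[pos:pos + 2] in ('* ', '- ')):
        pos += 1
    return text[:pos]

print(repr(continuation_prefix('  * item one')))  # '  * '
print(repr(continuation_prefix('    code')))      # '    '
```

Pressing Return after "  * item one" therefore inserts a newline followed by "  * ", leaving the cursor ready for the next list item.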
Our premium leather brief is handmade in Montreal from thick, full-grain tanned leather. This brief holds your 15" laptop in a lined, padded case. Keep yourself organized with external zip pockets, an inside zip pocket, and a sleeve for your phone. Features include a raw-edge finish, thick topstitching, and YKK Excella zippers. Dimensions: 15" x 11".
# Copyright (c) 2015 "Hugo Herter http://hugoherter.com"
#
# This file is part of Billabong.
#
# Billabong is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

"""High-level interface above inventory and stores."""

import os.path
import time
from uuid import uuid4
from base64 import b64encode, b64decode
from datetime import datetime

import magic

from .encryption import random_key, copy_and_encrypt, decrypt_blob
from .check import compute_hash


class Billabong:
    """High-level interface above inventory and stores."""

    def __init__(self, inventory, stores):
        """Initialize a Billabong object with given inventory and stores."""
        self.inventory = inventory
        self.stores = stores

    def add_file(self, filepath, *, key=None, tags=None):
        """Import a file into Billabong and return the corresponding record."""
        # Resolve symlinks
        realpath = os.path.realpath(filepath)
        if not os.path.isfile(realpath):
            raise FileNotFoundError(realpath)

        if key is None:
            key = random_key()

        with open(realpath, 'rb') as source_file:
            file_hash = compute_hash(source_file)

        storage = self.stores[0]
        blob_hash = copy_and_encrypt(storage, realpath, key)

        record = {
            'key': b64encode(key).decode(),
            'hash': 'sha256-' + file_hash.hexdigest(),
            'blob': blob_hash,
            'size': os.path.getsize(realpath),
            'datetime': datetime.utcnow(),
            'timestamp': time.time(),
            'id': uuid4().hex,
            'info': {
                'type': magic.from_file(realpath),
                'mimetype': magic.from_file(realpath, mime=True),
                'filename': os.path.basename(filepath),
                'path': filepath,
                'tags': tags if tags else [],
            }
        }
        self.inventory.save_record(record)
        return record

    def get(self, id_):
        """Return a Record object from an id_."""
        return self.inventory.get_record(id_)

    def delete(self, id_):
        """Delete a Record and the corresponding blob."""
        blob_id = self.inventory.get_record(id_)['blob']
        for store in self.stores:
            try:
                store.delete(blob_id)
            except (NotImplementedError, FileNotFoundError):
                pass
        self.inventory.delete(id_)

    def read(self, id_, length=None, offset=0, chunk_size=1024):
        """Return data from the blob of this file."""
        record = self.inventory.get_record(id_)
        key = b64decode(record['key'])
        blob_id = record['blob']
        for store in self.stores:
            if store.contains(blob_id):
                return decrypt_blob(store, blob_id, key,
                                    offset=offset, length=length)
        raise FileNotFoundError(blob_id)
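The record that `add_file` saves is plain metadata: the encryption key is stored as base64 text and the hash covers the plaintext file. The stdlib-only sketch below mirrors that record shape without the encryption or store layers; `build_record` is a hypothetical helper for illustration, not part of Billabong.

```python
import hashlib
import time
from base64 import b64encode, b64decode
from datetime import datetime
from uuid import uuid4


def build_record(data: bytes, key: bytes, filename: str) -> dict:
    """Build a metadata record shaped like Billabong's add_file() output."""
    digest = hashlib.sha256(data).hexdigest()
    return {
        'key': b64encode(key).decode(),  # encryption key as base64 text
        'hash': 'sha256-' + digest,      # hash of the *plaintext* file
        'size': len(data),
        'datetime': datetime.utcnow(),
        'timestamp': time.time(),
        'id': uuid4().hex,
        'info': {'filename': filename, 'tags': []},
    }


record = build_record(b'hello world', b'\x00' * 32, 'hello.txt')
# The stored key round-trips through base64 for later decryption:
assert b64decode(record['key']) == b'\x00' * 32
assert record['hash'].startswith('sha256-')
```

Storing the key inside the record means the inventory must itself be kept private; only the blob stores hold ciphertext.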
I’ve been rereading Seveneves by Neal Stephenson, and really enjoying it. I’m a big fan of his and have read most of his sci-fi work. I think my all-time favorite is Anathem, followed by Snow Crash. If you enjoyed Ready Player One, you should check out Snow Crash, as the two share a similar theme of escaping into a virtual world.
# Copyright 2015 IBM Corp. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import json

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Fallback credentials for running on a local machine (outside Cloud Foundry).
VCAP_SERVICES = "{\"language_translation\":[{\"name\":\"Language Translation-bc\",\"label\":\"language_translation\",\"tags\":[\"watson\",\"ibm_created\",\"ibm_dedicated_public\"],\"plan\":\"standard\",\"credentials\":{\"url\":\"https://gateway.watsonplatform.net/language-translation/api\",\"username\":\"b22a962b-bb4f-489c-b66e-3d572f5659cb\",\"password\":\"xjzQ1pI4BvUE\"}}]}"


def getVcapServices():
    if 'VCAP_SERVICES' in os.environ:
        val = os.environ['VCAP_SERVICES']
    else:
        # local machine
        val = VCAP_SERVICES
    return json.loads(val)


def callLanguageTranslation(api, data=None):
    services = getVcapServices()
    url = services['language_translation'][0]['credentials']['url']
    user = services['language_translation'][0]['credentials']['username']
    password = services['language_translation'][0]['credentials']['password']
    auth = (user, password)
    headers = {'accept': 'application/json'}
    if not data:
        r = requests.get(url + api, auth=auth, headers=headers)
    else:
        r = requests.post(url + api, json=data, auth=auth, headers=headers)
    return jsonify(r.json())


@app.route('/', methods=['GET', 'POST'])
def Welcome():
    if request.method == 'POST':
        text = request.form['wiym']
        data = {'source': 'fr', 'target': 'en', 'text': [text]}
        return callLanguageTranslation('/v2/translate', data)
    else:
        return app.send_static_file('index.html')


@app.route('/myapp')
def WelcomeToMyapp():
    return callLanguageTranslation('/v2/identifiable_languages')


@app.route('/myapp1')
def WelcomeToMyapp1():
    data = {'source': 'fr', 'target': 'en', 'text': ['Bonjour mon pote']}
    return callLanguageTranslation('/v2/translate', data)


@app.route('/env')
def GetEnv():
    val = {'ENV': dict(os.environ)}
    val['VCAP_SERVICES'] = getVcapServices()
    return jsonify(val)


@app.route('/api/people')
def GetPeople():
    people = [
        {'name': 'John', 'age': 28},
        {'name': 'Bill', 'age': 26}
    ]
    return jsonify(results=people)


@app.route('/api/people/<name>')
def SayHello(name):
    message = {'message': 'Hello ' + name}
    return jsonify(results=message)


port = os.getenv('PORT', '5000')

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=int(port), debug=True)
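The app reads its Watson credentials out of the `VCAP_SERVICES` JSON document that Cloud Foundry injects (or the hard-coded fallback). The stdlib-only sketch below shows that lookup path in isolation; the sample document and the `extract_credentials` helper are illustrative, with redacted credentials.

```python
import json

# Minimal VCAP_SERVICES document with the shape the app expects
# (credentials redacted; service entry list has one element).
sample = json.dumps({
    "language_translation": [{
        "name": "Language Translation-bc",
        "credentials": {
            "url": "https://gateway.watsonplatform.net/language-translation/api",
            "username": "user",
            "password": "pass",
        },
    }]
})


def extract_credentials(vcap_json: str):
    """Pull (url, username, password) the way callLanguageTranslation does."""
    creds = json.loads(vcap_json)['language_translation'][0]['credentials']
    return creds['url'], creds['username'], creds['password']


url, user, password = extract_credentials(sample)
assert url.endswith('/language-translation/api')
```

Keeping this lookup in one helper means the rest of the app never cares whether the JSON came from the environment or the local fallback string.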
We have a friendly staff that can get you the fast title loan you need today. Come by our office today and get approved for your quick title loan. No keys are required when you apply for a title loan in Wilcox County. The amount you will receive for your title loan depends on the value of your vehicle. It's quick and simple to get approved for a short-term title loan in Wilcox County. When you are in the Wilcox County area, come by and apply for your immediate and easy title loan. A title loan is meant to be short-term; however, it can be extended as long as you pay the minimum payment. When you apply for a title loan, you will receive your cash immediately. Our title loan office in Wilcox County is one of the leading lenders in the area. Our office staff in Wilcox County is standing by to answer all of your title loan questions; call or email us today. We can get you approved for a title loan and give you the cash in 15 minutes. When you apply for a title loan, no credit check is performed. When you receive your title loan in Wilcox County, we will set up a payment plan for you that meets your budget. If you need immediate cash but have no credit, a title loan can help! Bring in your lien-free vehicle and we can give you a quote for a title loan.
from bson import ObjectId
from redis import Redis
from twisted.trial.unittest import TestCase

from vusion.component import FlyingMessageManager


class FlyingMessageTestCase(TestCase):

    def setUp(self):
        self.redis = Redis()
        self.prefix_key = 'unittest:testprogram'
        self.fm = FlyingMessageManager(
            self.prefix_key,
            self.redis)

    def tearDown(self):
        self.clearData()

    def clearData(self):
        keys = self.redis.keys("%s:*" % self.prefix_key)
        for key in keys:
            self.redis.delete(key)

    def test_append_get(self):
        history_id = ObjectId()
        self.fm.append_message_data('1', history_id, 3, 'ack')
        saved_history_id, credit, status = self.fm.get_message_data('1')
        self.assertEqual(history_id, saved_history_id)
        self.assertEqual(credit, 3)
        self.assertEqual(status, 'ack')

    def test_append_get_not_present(self):
        saved_history_id, credit, status = self.fm.get_message_data('1')
        self.assertTrue(saved_history_id is None)
        self.assertTrue(credit == 0)
        self.assertTrue(status is None)
Real Internet Secrets Workshop (RIS) is a two-and-a-half-day intensive internet marketing training by Fabian Lim, normally held at Singapore Management University (SMU). The program was once known as the Internet Marketing Bootcamp, and I was part of its last batch in March 2009. The last RIS Singapore event (there is also a RIS Malaysia) took place as usual at SMU on March 26-28, 2010. This time I was one of the mentors of the class, for the third time. Our role is to assist the students with any difficulties they encounter, which are usually technical challenges. 1. The "Success In Common" Strategy: The "people who succeed have one critical thing in common..." strategy tells your prospects they need one crucial thing in order to achieve their desired benefit. Of course, you have to persuade them that it is your product or service. You could tell them that some people have the right point of view but not always a practical system for improving their life. When you decide to earn income on the web with a business of your own, there are specific preparatory steps that you need to take. This is exactly where most of the hidden costs start showing themselves. When you decide to create your own products, you will have to put in a lot of innovation and must conduct extensive research before starting off. Online books, or e-books, are among the training materials the mentor usually supplies. They come as a package in Adobe PDF format covering the overall training. These materials are useful, and you may want to print them in hard copy so you can read them anytime. So if you are thinking "hey, this sounds great, but why would an expert do that for me?" then listen up. Take a minute and think about what is happening here. It's true: you are getting exposure to the other experts' subscriber lists, and they are getting exposure to each other's lists.
This is huge for everyone involved. Each expert will promote the series, so the number of people hearing about the series and registering for it grows dramatically. When is the best time to send email marketing campaigns? It is difficult to say, but "it all depends" is usually the safest answer. The true answer differs from marketer to marketer and is often found by conducting various tests. When testing to discover the right time to send your email campaigns, you have to take a number of variables into account. In this article we are going to discuss some of the factors you have to consider. A local landing page is the page where your web site visitors land, i.e. arrive, after they click a link. The local landing page can be your home page or another page on your web site specially designed to welcome these potential customers. Local landing pages can provide customized sales pitches for visitors. By designing great landing pages you can engage visitors more effectively and improve your conversion rates. Online thieves have plagued the web by stealing the bank details or credit card information of users who do their shopping online. Internet marketers who want to be spared these crimes prefer dedicated web hosting, since this way they can protect their financial transactions, and their clients' as well. Internet hackers are backed up with state-of-the-art tools designed to invade other people's websites and make money from them. Do not underestimate the skills of these online criminals by being complacent. It is always better to be safe than sorry when making choices for your web business. Problogger: Secrets for Blogging Your Way to a Six-Figure Income doesn't stop there, however. You will learn about niches and why they are so important. A niche is a part of the market that is very specialized, a topic that stands by itself.
One example of niche writing is the Little Leather Library collection published in the 1920s. These books are not common, but there is a definite interest in them for one segment of the population. That particular niche has a small but fairly well-to-do consumer base. Garret and Rowse teach you how to select one or two niches that are best for you and give you the tools to take advantage of them. Surely, you don't want to lose even one possible customer. There are many dedicated web hosts that you can tap for your hosting needs. You should do your research well on which of these companies has the best offer at a rate that fits your budget. Being too stingy when investing in your affiliate marketing business might lead you to greater financial losses. Stay away from mediocre services, since you will only benefit from them for a short period. Rather, it is usually practical to invest in quality, even if it means a bit more cash to spend.
#
# Copyright 2017-2021 Universidad Complutense de Madrid
#
# This file is part of Megara DRP
#
# SPDX-License-Identifier: GPL-3.0+
# License-Filename: LICENSE.txt
#

"""
Interpolation method based on:
'Hex-Splines: A Novel Spline Family for Hexagonal Lattices'
van de Ville et al. IEEE Transactions on Image Processing 2004, 13, 6
"""

from __future__ import print_function

import numpy as np
from scipy import signal
import astropy.units as u
import astropy.wcs

from numina.frame.utils import copy_img

from megaradrp.instrument.focalplane import FocalPlaneConf
# from megaradrp.datamodel import MegaraDataModel
from megaradrp.core.utils import atleast_2d_last
import megaradrp.processing.fixrss as fixrss
import megaradrp.processing.hexgrid as hg
import megaradrp.processing.hexspline as hspline
import megaradrp.instrument.constants as cons

# Helper function for equivalence conversion
GTC_PLATESCALE = u.plate_scale(cons.GTC_FC_A_PLATESCALE)
# Size scale of the spaxel grid in arcseconds
HEX_SCALE = cons.SPAXEL_SCALE.to(u.arcsec, GTC_PLATESCALE).value


def calc_matrix_from_fiberconf(fpconf, refid=614):
    """
    Compute hexagonal grid matrix from FocalPlaneConf

    Parameters
    ----------
    fpconf : megaradrp.instrument.focalplane.FocalPlaneConf
    refid : int
        fiber ID of reference fiber for grid coordinates

    Returns
    -------

    """
    # TODO: This should be in FIBERCONFS...
    spos1_x = []
    spos1_y = []
    for fiber in fpconf.connected_fibers():
        spos1_x.append(fiber.x)
        spos1_y.append(fiber.y)
    spos1_x = np.asarray(spos1_x)
    spos1_y = np.asarray(spos1_y)

    # FIBER in LOW LEFT corner is 614
    ref_fiber = fpconf.fibers[refid]
    minx, miny = ref_fiber.x, ref_fiber.y
    if fpconf.funit == 'arcsec':
        # arcsec
        ascale = HEX_SCALE
    else:
        # mm
        # fpconf.funit == 'mm'
        ascale = cons.SPAXEL_SCALE.to(u.mm).value
    ref = minx / ascale, miny / ascale
    rpos1_x = (spos1_x - minx) / ascale
    rpos1_y = (spos1_y - miny) / ascale
    r0l_1 = np.array([rpos1_x, rpos1_y])
    return r0l_1, ref


def create_cube(r0l, zval, p=1, target_scale=1.0):
    """
    Parameters
    ----------
    r0l
    zval
    p : {1, 2}
    target_scale : float, optional

    Returns
    -------

    Raises
    ------
    ValueError
        If `p` > 2

    """
    # geometry
    # Interpolation method. Allowed values are:
    # P = 1 NN
    # P = 2 Linear
    if p > 2:
        raise ValueError('p > 2 not implemented')

    rr1 = target_scale * np.array([[1.0, 0], [0, 1]])  # Unit scale

    # compute extremes of hexgrid to rectangular grid
    # with pixel size 'scale'
    (i1min, i1max), (j1min, j1max) = hg.hexgrid_extremes(r0l, target_scale)

    # Rectangular grid
    mk1 = np.arange(i1min, i1max + 1)
    mk2 = np.arange(j1min, j1max + 1)
    crow = len(mk1)
    ccol = len(mk2)

    # Result image
    # Add third last axis
    zval2 = atleast_2d_last(zval)
    # disp axis is last axis...
    dk = np.zeros((crow, ccol, zval2.shape[-1]))
    # print('result shape is ', dk.shape)

    # r1k = rr1 @ sk
    sk = np.flipud(np.transpose([np.tile(mk1, len(mk2)),
                                 np.repeat(mk2, len(mk1))]).T)  # x y
    r1k = np.dot(rr1, sk)

    # Prefiltering
    # For p = 1, prefilter coefficients with p = 1, coeff = 1
    # For p = 2, prefilter coefficients with p = 2, coeff = 1
    # No prefiltering in zval2 is required if p <= 2
    rbs = hspline.rescaling_kernel(p, scale=target_scale)

    # Loop to compute integrals...
    for s, r in zip(sk.T, r1k.T):
        allpos = -(r0l - r[:, np.newaxis])
        we = np.abs((rbs.ev(allpos[1], allpos[0])))
        dk[s[1] - i1min, s[0] - j1min] = np.sum(we[:, np.newaxis] * zval2,
                                                axis=0)

    # Postfiltering
    # For p = 1, final image in NN, postfilter coefficients with n = 1
    # For p = 2, final image is linear, postfilter coefficients with n = 3
    if p == 1:
        # Coefficients post filtering to n = 2 * p - 1 == 1
        cpk = dk
        # Nearest-neighbor samples equal to coefficients
        img = cpk
    elif p == 2:
        # Coefficients post filtering to n = 2 * p - 1 == 3
        cpk = np.zeros_like(dk)
        # last axis
        for k in range(dk.shape[-1]):
            cpk[..., k] = signal.cspline2d(dk[..., k])
        # Linear samples equal to coefficients
        img = cpk
    else:
        raise ValueError('p > 2 not implemented')

    return img


def create_cube_from_array(rss_data, fiberconf, p=1,
                           target_scale_arcsec=1.0, conserve_flux=True):
    """
    Create a cube array from a 2D or 1D array and focal plane configuration

    Parameters
    ----------
    rss_data
    fiberconf : megaradrp.instrument.focalplane.FocalPlaneConf
    p : {1, 2}
    target_scale_arcsec : float
    conserve_flux : bool

    Returns
    -------
    np.ndarray

    """
    target_scale = target_scale_arcsec / HEX_SCALE
    conected = fiberconf.connected_fibers()
    rows = [conf.fibid - 1 for conf in conected]

    rss_data = atleast_2d_last(rss_data)

    region = rss_data[rows, :]
    r0l, _ = calc_matrix_from_fiberconf(fiberconf)
    cube_data = create_cube(r0l, region[:, :], p, target_scale)

    if conserve_flux:
        # scale with areas
        cube_data *= (target_scale ** 2 / hg.HA_HEX)

    # Move axis to put WL first
    # so that is last in FITS
    result = np.moveaxis(cube_data, 2, 0)
    result = result.astype('float32')
    return result


def create_cube_from_rss(rss, p=1, target_scale_arcsec=1.0,
                         conserve_flux=True):
    """
    Create a cube HDUList from a RSS HDUList

    Parameters
    ----------
    rss : fits.HDUList
    p : {1, 2}
    target_scale_arcsec : float, optional
    conserve_flux : bool, optional

    Returns
    -------
    fits.HDUList

    """
    fiberconf = FocalPlaneConf.from_img(rss)

    result_arr = create_cube_from_array(
        rss[0].data, fiberconf, p=p,
        target_scale_arcsec=target_scale_arcsec,
        conserve_flux=conserve_flux
    )

    cube = copy_img(rss)
    cube[0].data = result_arr

    sky_header = rss['FIBERS'].header.copy()
    spec_header = rss[0].header

    # Update values of sky WCS
    # CRPIX1, CRPIX2
    # CDELT1, CDELT2
    # minx, miny
    # After shifting the array
    # refpixel is -i1min, -j1min
    target_scale = target_scale_arcsec / HEX_SCALE
    r0l, (refx, refy) = calc_matrix_from_fiberconf(fiberconf)
    (i1min, i1max), (j1min, j1max) = hg.hexgrid_extremes(r0l, target_scale)
    crpix_x = -refx / target_scale - j1min
    crpix_y = -refy / target_scale - i1min
    # Map the center of original field
    sky_header['CRPIX1'] = crpix_x
    sky_header['CRPIX2'] = crpix_y
    sky_header['CDELT1'] = -target_scale_arcsec / 3600.0
    sky_header['CDELT2'] = target_scale_arcsec / 3600.0

    # Merge headers
    # 2D from FIBERS
    # WL from PRIMARY
    merge_wcs(sky_header, spec_header, out=cube[0].header)
    # done
    return cube


def merge_wcs(hdr_sky, hdr_spec, out=None):
    """Merge sky WCS with spectral WCS

    Works only with main WCS and B WCS
    """
    if out is None:
        hdr = hdr_spec.copy()
    else:
        hdr = out

    allw = astropy.wcs.find_all_wcs(hdr_spec)
    wcsnames = [w.wcs.alt for w in allw]
    for ss in wcsnames:
        merge_wcs_alt(hdr_sky, hdr_spec, hdr, spec_suffix=ss)

    return hdr


def merge_wcs_alt(hdr_sky, hdr_spec, out, spec_suffix=' '):
    """Merge sky WCS with spectral WCS"""
    hdr = out
    if spec_suffix == ' ':
        sf = ''
    else:
        sf = spec_suffix

    # Extend header for third axis
    c_crpix = 'Pixel coordinate of reference point'
    c_cunit = 'Units of coordinate increment and value'

    wcsname_s = f'WCSNAME{sf}'
    if wcsname_s in hdr:
        prev = wcsname_s
    else:
        prev = f'CTYPE1{sf}'
    hdr.set(f'WCSAXES{sf}', value=3, before=prev)

    if sf != '':
        hdr.set(f'WCSNAME{sf}', value='', after='PC3_3')
        hdr.set(f'CTYPE1{sf}', value='', after=f'WCSNAME{sf}')
        hdr.set(f'CRPIX1{sf}', value=1.0, after=f'CTYPE1{sf}')
        hdr.set(f'CRVAL1{sf}', value=0.0, after=f'CRPIX1{sf}')
        hdr.set(f'CDELT1{sf}', value=1.0, after=f'CRVAL1{sf}')
        hdr.set(f'CUNIT1{sf}', value='deg', comment=c_cunit,
                after=f'CDELT1{sf}')
        hdr.set(f'CTYPE2{sf}', after=f'CUNIT1{sf}')

    if sf != '':
        hdr.set(f'CRPIX2{sf}', value=1.0, after=f'CTYPE2{sf}')
        hdr.set(f'CRVAL2{sf}', value=0.0, after=f'CRPIX2{sf}')
        hdr.set(f'CDELT2{sf}', value=1.0, after=f'CRVAL2{sf}')
        hdr.set(f'CUNIT2{sf}', value='deg', comment=c_cunit,
                after=f'CDELT2{sf}')

    hdr.set(f'CRPIX2{sf}', value=1, comment=c_crpix, after=f'CTYPE2{sf}')
    hdr.set(f'CTYPE3{sf}', after=f'CUNIT2{sf}')
    hdr.set(f'CRPIX3{sf}', value=1, comment=c_crpix, after=f'CTYPE3{sf}')
    hdr.set(f'CRVAL3{sf}', after=f'CRPIX3{sf}')
    hdr.set(f'CDELT3{sf}', after=f'CRVAL3{sf}')
    hdr.set(f'CUNIT3{sf}', comment=c_cunit, after=f'CDELT3{sf}')

    c_pc = 'Coordinate transformation matrix element'
    hdr.set(f'PC1_1{sf}', value=1.0, comment=c_pc, after=f'CUNIT3{sf}')
    hdr.set(f'PC1_2{sf}', value=0.0, comment=c_pc, after=f'PC1_1{sf}')
    hdr.set(f'PC2_1{sf}', value=0.0, comment=c_pc, after=f'PC1_2{sf}')
    hdr.set(f'PC2_2{sf}', value=1.0, comment=c_pc, after=f'PC2_1{sf}')
    hdr.set(f'PC3_3{sf}', value=1.0, comment=c_pc, after=f'PC2_2{sf}')

    # Mapping, which keyword comes from each header
    mappings = [('CRPIX3', 'CRPIX1', sf, 0),
                ('CDELT3', 'CDELT1', sf, 0),
                ('CRVAL3', 'CRVAL1', sf, 0),
                ('CTYPE3', 'CTYPE1', sf, 0),
                ('CRPIX1', 'CRPIX1', '', 1),
                ('CDELT1', 'CDELT1', '', 1),
                ('CRVAL1', 'CRVAL1', '', 1),
                ('CTYPE1', 'CTYPE1', '', 1),
                ('CUNIT3', 'CUNIT1', sf, 0),
                ('PC1_1', 'PC1_1', '', 1),
                ('PC1_2', 'PC1_2', '', 1),
                ('CRPIX2', 'CRPIX2', '', 1),
                ('CDELT2', 'CDELT2', '', 1),
                ('CRVAL2', 'CRVAL2', '', 1),
                ('CTYPE2', 'CTYPE2', '', 1),
                ('CUNIT2', 'CUNIT2', '', 1),
                ('PC2_1', 'PC2_1', '', 1),
                ('PC2_2', 'PC2_2', '', 1),
                ('LONPOLE', 'LONPOLE', '', 1),
                ('LATPOLE', 'LATPOLE', '', 1),
                ('RADESYS', 'RADESYS', '', 1),
                ('EQUINOX', 'EQUINOX', '', 1),
                ('WCSNAME', 'WCSNAME', sf, 0),
                ('specsys', 'SPECSYS', sf, 0),
                ('ssysobs', 'SSYSOBS', sf, 0),
                ('velosys', 'VELOSYS', sf, 0)
                ]

    hdr_in = dict()
    hdr_in[0] = hdr_spec
    hdr_in[1] = hdr_sky

    for dest, orig, key, idx in mappings:
        hdr_orig = hdr_in[idx]
        korig = orig + key
        kdest = dest + sf
        try:
            hdr[kdest] = hdr_orig[korig], hdr_orig.comments[korig]
        except KeyError:
            # Ignoring errors. Copy only if keyword exists
            pass

    return hdr


def main(args=None):
    import argparse
    import astropy.io.fits as fits

    # parse command-line options
    parser = argparse.ArgumentParser(prog='convert_rss_cube')
    # positional parameters
    methods = {'nn': 1, 'linear': 2}
    parser.add_argument("rss",
                        help="RSS file with fiber traces",
                        type=argparse.FileType('rb'))
    parser.add_argument('-p', '--pixel-size', type=float, default=0.3,
                        metavar='PIXEL_SIZE',
                        help="Pixel size in arc seconds")
    parser.add_argument('-o', '--outfile', default='cube.fits',
                        help="Name of the output cube file")
    parser.add_argument('-d', '--disable-scaling', action='store_true',
                        help="Disable flux conservation")
    parser.add_argument('-m', '--method', action='store',
                        choices=['nn', 'linear'], default='nn',
                        help="Method of interpolation")
    parser.add_argument('--wcs-pa-from-header', action='store_true',
                        help="Use PA angle from header",
                        dest='pa_from_header')
    parser.add_argument('--fix-missing', action='store_true',
                        help="Interpolate missing fibers")

    args = parser.parse_args(args=args)

    target_scale = args.pixel_size  # Arcsec
    p = methods[args.method]
    print(f'interpolation method is "{args.method}"')
    print('target scale is', target_scale, 'arcsec')

    conserve_flux = not args.disable_scaling

    with fits.open(args.rss) as rss:
        if not args.pa_from_header:
            # Doing it here so the change is propagated to
            # all alternative coordinates
            print('recompute WCS from IPA')
            ipa = rss['PRIMARY'].header['IPA']
            rss['FIBERS'].header = fixrss.recompute_wcs(
                rss['FIBERS'].header, ipa=ipa)

        if args.fix_missing:
            fibid = 623
            print(f'interpolate fiber {fibid}')
            rss = fixrss.fix_missing_fiber(rss, fibid)

        cube = create_cube_from_rss(rss, p, target_scale,
                                    conserve_flux=conserve_flux)

    cube.writeto(args.outfile, overwrite=True)


if __name__ == '__main__':
    main()
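The `p = 1` ("nn") mode of `create_cube` amounts to depositing each fiber's flux into the rectangular pixel closest to that fiber's position. The pure-Python sketch below illustrates that idea only; it is not the hex-spline kernel itself, which integrates a rescaling kernel over each hexagonal spaxel, and `nearest_neighbor_regrid` is a hypothetical helper, not part of Megara DRP.

```python
def nearest_neighbor_regrid(positions, values, scale, shape):
    """Deposit each fiber value into the nearest rectangular pixel
    (a much-simplified analogue of the p = 1 interpolation mode)."""
    grid = [[0.0] * shape[1] for _ in range(shape[0])]
    for (x, y), v in zip(positions, values):
        j = int(round(x / scale))  # column index from x position
        i = int(round(y / scale))  # row index from y position
        if 0 <= i < shape[0] and 0 <= j < shape[1]:
            grid[i][j] += v        # accumulate flux in that pixel
    return grid


# Two fibers near grid points (0, 0) and (1, 1):
grid = nearest_neighbor_regrid([(0.0, 0.0), (1.2, 0.9)], [5.0, 7.0],
                               scale=1.0, shape=(2, 2))
assert grid[0][0] == 5.0 and grid[1][1] == 7.0
```

Accumulating (rather than overwriting) is what makes a flux-conservation rescaling like the `target_scale ** 2 / HA_HEX` factor meaningful: total flux in the grid equals total fiber flux.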
It has been reported that the elephant-human conflict causes about 250 wild elephant deaths and the loss of about 60-70 human lives annually. Research has revealed that wild elephants cannot be removed from their home ranges, and that the current measures taken by the wildlife department, such as driving away and relocation, are failures. Moreover, wild elephants that fail to return to their native lands appear more violent in their new habitats. The island-wide elephant census conducted in 2011 revealed that 5,879 wild elephants lived in Sri Lankan forests, and it is estimated that around 6,000 wild elephants currently exist. Sixty-seven per cent of the wild elephants in the country live in forest reserves of the Department of Wildlife Conservation, 30% live in forest reserves of the Department of Forest Conservation, and 3% live in small forests outside these reserves; wild elephants roam among these forests. Many of the lands where wild elephants live and migrate are used by freehold land owners, unauthorized settlers and licensed cultivators. The proposal made by Hon. Gamini Jayawickrama Perera, Minister of Sustainable Development and Wildlife, to establish elephant corridors with a width of 500 m and to evacuate settlers and cultivators from these corridors in stages, under a method of providing them with alternative lands or compensation, was approved by the Cabinet of Ministers. According to the provisions of the Motor Traffic Act, steps should be taken for the registration of motor vehicles, transferring ownership, issuing driving licences, ensuring the safety of road users and protecting the environment. The proposal made by Hon.
Nimal Siripala de Silva, Minister of Transport and Civil Aviation, to amend the Motor Traffic Act (Chapter 203) with the view of using modern technology in this regard, and to discharge the duties of the Department of Motor Traffic in accordance with current needs and situations, was approved by the Cabinet of Ministers. As the Cosmetics, Devices and Drugs Act No. 27 of 1980 was repealed and replaced by the National Medicines Regulatory Authority Act No. 5 of 2015, laws to regulate plain cosmetics are due to be drafted. Accordingly, the proposal made by Hon. Rajitha Senaratne, Minister of Health, Nutrition and Indigenous Medicine, to draft the legislation required for plain cosmetics regulation was approved by the Cabinet of Ministers. The alarming increase in the incidence of land fraud has given rise to a public outcry for measures to prevent such frauds. It is necessary to amend the relevant provisions of the Power of Attorney Ordinance and the Notaries Ordinance in this regard. Amendments to the Notaries Ordinance are already being drafted. A committee of intellectuals has been appointed to identify the required amendments, and the proposal made by Hon. Wijayadasa Rajapaksa, Minister of Justice, to amend the Power of Attorney Ordinance according to the recommendations of the said committee was approved by the Cabinet of Ministers. A seafarer who possesses a competency certificate should obtain a certificate of recognition, issued by the country where the ship is registered, in order to serve on a ship that flies the flag of a foreign country. A convention or an MoU should be signed with the relevant states to issue such a certificate. Sri Lanka has already entered into such agreements with 30 countries. The Government of Switzerland has agreed to enter into such an agreement, which will help provide the opportunity for Sri Lankan seafarers to serve on Swiss ships. The proposal made by Hon.
Arjuna Ranatunga, Minister of Ports and Shipping, to enter into an MoU with Switzerland for the issue of certificates of recognition, was approved by the Cabinet of Ministers. Sinhalese people who lived in the Northern Province came to the South with the conflict that developed there, and Muslims were completely driven away by the LTTE in 1990, causing a large number of people to live in other districts (mostly in the Mannar and Puttalam Districts) as IDPs. It has been estimated that 5,543 houses are required to resettle Sinhala families and 16,120 houses are required to resettle Muslim families, and the infrastructure of the relevant villages should be improved. Considering the joint proposal made by Hon. Rishad Bathiudeen, Minister of Industry and Commerce, Hon. D.M. Swaminathan, Minister of Prison Reforms, Resettlement and Hindu Religious Affairs, and Hon. Faizer Mustafa, Minister of Provincial Councils and Local Government, on following a suitable method for the coordination, implementation and monitoring of resettlement, the Cabinet of Ministers decided to appoint a committee chaired by H.E. the President, including the Hon. Prime Minister and relevant Ministers, to make recommendations in this regard. There are 241 schools operating in the Polonnaruwa District of the North Central Province, and a total of 83,551 students are studying in these schools. "Pubudamu Polonnaruwa" is to be implemented from 2016-2020 as a part of the "Raja Rata Navodaya" project, a concept of H.E. the President, with the view of bringing rural children who are skilled in sports to the international level. The proposal made by Hon. Dayasiri Jayasekara, Minister of Sports, to implement 13 development projects at an estimated cost of Rs. 418 million, including the renovation of stadiums and the construction of comprehensive stadiums and pavilions in the Hingurakgoda, Thamankaduwa and Dimbulagala educational divisions, was approved by the Cabinet of Ministers.
Section I from Kossinna to Mirigama, Section II from Mirigama to Kurunegala, and Section III from Pothuhera to Galagedara of the Central Expressway are to be constructed. The proposal made by Hon. Lakshman Kiriella, Minister of Higher Education and Highways, to award the above contract to construct its section from Kadawatha to Kossinna, as recommended by the Cabinet Appointed Standing Procurement Committee, was approved by the Cabinet of Ministers. The proposal made by Hon. Mahinda Samarasinghe, Minister of Skills Development and Vocational Training, to award the contract for the supply and installation of furniture and fittings for the proposed Green University of the NSBM, which is being constructed in Pitipana, Homagama, to the lowest responsive bidder as per the recommendations made by the Cabinet Appointed Procurement Committee, was approved by the Cabinet of Ministers. The proposal made by Hon. Mahinda Samarasinghe, Minister of Skills Development and Vocational Training, to award the contract for the establishment of the Colombo Vocational Training Centre and the improvement of the Gampaha Technical College, which is implemented using funds granted by the Import and Export Bank, Korea, to the lowest responsive bidder, was approved by the Cabinet of Ministers. Sri Lanka has received a loan of US$ 147 million from the International Development Agency for the implementation of the above project in the cities of Kandy and Galle. The proposal made by Hon. Patali Champika Ranawaka, Minister of Megapolis and Western Development, to implement the following projects using a portion of that loan was approved by the Cabinet of Ministers.
- Rehabilitation and improvement of the road section from 0+000 to 3+390 km (including the Buwelikada sharp bend) of Dharmashoka Mawatha (B550), Kandy.
- Rehabilitation of the road section from Katugastota to Madawala (0+000 to 6+250 km) of the Katugastota-Madawala-Bambarella road (B205).
- Rehabilitation of the road section from Madawala to Digana (0+000 to 8+100 km) of the Madawala-Digana Road (B256).
- Rehabilitation of the Meda Ela in Kandy.
With the recent floods confronted by Sri Lanka, 639 tanks, canals and anicuts have been destroyed, and many cultivated lands, including paddy and additional crops, have been destroyed in the Gampaha, Colombo, Kaluthara, Matale, Anuradhapura, Kandy, Kurunegala, Ratnapura, Monaragala, Batticaloa, Kegalle, Puttalam, Nuwara Eliya, Vavuniya, Kilinochchi, Mullativu, Jaffna and Mannar Districts. Accordingly, the proposal made by Hon. Duminda Dissanayake, Minister of Agriculture, to implement an accelerated programme to rehabilitate the above tanks, canals and anicuts, in order to safeguard food security in the country, was approved by the Cabinet of Ministers. Rs. 212 million has been allocated by the 2011 budget for the above purpose, and the task has been assigned to the Central Engineering Consultancy Bureau. The construction of the pavilion of this stadium was conducted by the Central Provincial Council, and as it is now at a standstill the sports complex cannot be vested in the people. The proposal made by Hon. Dayasiri Jayasekara, Minister of Sports, to complete the pavilion, internal roads, landscaping and other related work at an estimated cost of Rs. 97.86 million was approved by the Cabinet of Ministers. According to statistics issued by the Disaster Management Centre, 500 houses have been completely damaged and 4,000 houses have been partially damaged in 22 districts due to recent floods and landslides caused by the bad weather conditions. The Cabinet of Ministers has assigned Hon. Anura Priyadarshana Yapa, Minister of Disaster Management, and Hon. Sajith Premadasa, Minister of Housing and Construction, to plan a suitable programme to reconstruct the completely damaged houses and to renovate the partially damaged houses, and to submit the same to the Cabinet of Ministers.
The most severe coastal erosion in Sri Lanka is currently reported in Marawila-Thalwila, where it spreads over an area of about 2 km. It has affected the lives of residents and the tourism industry in the area. As a measure, hard structures can be constructed using rocks, but this would affect the fisheries and tourism industries. The committee of intellectuals has identified sand nourishment as a sustainable measure for solving this issue. Accordingly, the proposal made by H.E. President Maithripala Sirisena, in his capacity as the Minister of Mahaweli Development and Environment, to implement the above project using the 2016 budget allocations of the Department of Coast Conservation and Coastal Resources Management, was approved by the Cabinet of Ministers. A large amount of furniture damaged by the recent floods has accumulated as waste in public spaces. For the disposal of the waste accumulated in the Kotikawatta-Mulleriyawa area by 30-05-2016, it has been estimated that tractors will have to be deployed for about 470 trips. These wastes may cause health issues and thus should be disposed of immediately. However, the local government authorities do not have sufficient resources to deploy in this regard. Therefore, the joint proposal made by Hon. Anura Priyadarshana Yapa, Minister of Disaster Management, and Hon. Faizer Mustafa, Minister of Provincial Councils and Local Government, to deploy 10 sets of equipment under the supervision of officers of the Ministry of Provincial Councils and Local Government and the Ministry of Disaster Management, was approved by the Cabinet of Ministers. It has been identified that, as many wetlands have been illegally reclaimed and used for illegal settlements, the area has faced flood threats.
The threat has been worsened by a lack of maintenance of the canal system designed for drainage, as identified by the preliminary study conducted by the Department of Irrigation and the Land Reclamation and Development Corporation. The flood situation could have been controlled if the canal system had been duly maintained. The proposal made by Hon. Anura Priyadarshana Yapa, Minister of Disaster Management, and Hon. Patali Champika Ranawaka, Minister of Megapolis and Western Development, to implement an accelerated programme to clean canals and secondary canals to prevent floods caused by the Kelani River, was approved by the Cabinet of Ministers.
import os import shutil import logging import re import socket from io import open import git import yaml from ..repository import KnowledgeRepository from ..utils.encoding import encode logger = logging.getLogger(__name__) class GitKnowledgeRepository(KnowledgeRepository): _registry_keys = ['git'] TEMPLATES = { 'README.md': os.path.abspath(os.path.join(os.path.dirname(__file__), '../templates', 'repository_readme.md')), '.knowledge_repo_config.yml': os.path.abspath(os.path.join(os.path.dirname(__file__), '../templates', 'repository_config.yml')) } @classmethod def create(cls, uri): path = uri.replace('git://', '') if os.path.exists(path): try: repo = git.Repo(path) logger.warning("Repository already exists for uri '{}'. Checking if configuration is needed...".format(uri)) except git.InvalidGitRepositoryError: if os.path.isdir(path): logger.warning("Upgrading existing directory at '{}' to a git knowledge repository...".format(path)) else: raise RuntimeError("File exists at nominated path: {}. 
Cannot proceed with repository initialization.".format(path)) repo = git.Repo.init(path, mkdir=True) # Add README and configuration templates added_files = 0 for filename, template in cls.TEMPLATES.items(): target = os.path.join(path, filename) if not os.path.exists(target): shutil.copy(template, target) repo.index.add([filename]) added_files += 1 else: logger.warning("Not overriding existing file '{}'.".format(filename)) if added_files > 0: repo.index.commit("Initial creation of knowledge repository.") return GitKnowledgeRepository(path) def init(self, config='git:///.knowledge_repo_config.yml', auto_create=False): self.config.update_defaults(published_branch='master') self.config.update_defaults(remote_name='origin') self.auto_create = auto_create self.path = self.uri.replace('git://', '') # Check if a legacy configuration exists, and if so, print a warning try: self.git_read('.knowledge_repo_config.py') logger.warning( "This knowledge repository has a legacy configuration file that " "will not be loaded due to security issues " "(.knowledge_repo_config.py). This may lead to unexpected " "behavior. Please talk to your local Knowledge Repo admins " "for advice if you are unsure." ) except: pass if config.startswith('git:///'): assert config.endswith('.yml'), "In-repository configuration must be a YAML file." try: self.config.update(yaml.safe_load(self.git_read(config.replace('git:///', '')))) except KeyError: logger.warning("Repository missing configuration file: {}".format(config)) else: self.config.update(config) @property def path(self): return self._path @path.setter def path(self, path): assert isinstance(path, str), "The path specified must be a string." 
path = os.path.abspath(os.path.expanduser(path)) if not os.path.exists(path): path = os.path.abspath(path) if self.auto_create: self.create(path) else: raise ValueError("Provided path '{}' does not exist.".format(path)) assert self.__is_valid_repo( path), "Provided path '{}' is not a valid repository.".format(path) self._path = path self.uri = path # Update uri to point to absolute path of repository def __is_valid_repo(self, path): try: git.Repo(path) return True except git.InvalidGitRepositoryError: return False @property def git(self): if not hasattr(self, '_git'): self._git = git.Repo(self.path) return self._git @property def git_has_remote(self): return hasattr(self.git.remotes, self.config.remote_name) @property def git_remote(self): if self.git_has_remote: return self.git.remote(self.config.remote_name) return None # ----------- Repository actions / state ------------------------------------ @property def revision(self): c = self.git.commit() return "{}_{}".format(str(c.committed_date), c.hexsha) def update(self, branch=None): branch = branch or self.config.published_branch if not self.git_has_remote: return if not self.__remote_available: logger.warning("Cannot connect to remote repository hosted on {}. 
Continuing locally with potentially outdated code.".format( self.__remote_host)) return logger.info("Fetching updates to the knowledge repository...") self.git_remote.fetch() current_branch = self.git.active_branch self.git.branches[branch].checkout() self.git_remote.pull(branch) current_branch.checkout() def set_active_draft(self, path): # TODO: deprecate branch = self.git_branch_for_post(self._kp_path(path)) self.config.published_branch = branch.name branch.checkout() @property def status(self): return { 'branch': self.git.active_branch.name, 'changed_files': [str(diff.a_path) for diff in self.git_diff()] } @property def status_message(self): status = self.status message = "Currently checked out on the `{branch}` branch.".format(branch=status['branch']) if len(status['changed_files']) > 0: message += "Files modified: \n {modified}".format(modified='\n\t- '.join(status['changed_files'])) return message # ---------------- Git properties and actions ------------------------- def git_dir(self, prefix=None, commit=None): commit = self.git.commit(commit or self.config.published_branch) tree = commit.tree if prefix is not None: tree = tree[prefix] return [o.path for o in tree.traverse( prune=lambda i, d: isinstance(i, git.Submodule) or os.path.dirname(i.path).endswith('.kp'), visit_once=False, predicate=lambda i, d: i.path.endswith('.kp') ) ] def git_read(self, path, commit=None): commit = self.git.commit(commit or self.config.published_branch) return commit.tree[path].data_stream.read() @property def git_local_branches(self): unmerged_branches = [branch.replace( '*', '').strip() for branch in self.git.git.branch('--no-merged', self.config.published_branch).split('\n')] return unmerged_branches def __get_path_from_ref(self, ref): refs = ref.split('/') for i, ref in enumerate(refs): if ref.endswith('.kp'): break if not ref.endswith('.kp'): return None return '/'.join(refs[:i + 1]) def git_local_posts(self, branches=None, as_dict=False): if branches is None: branches = 
self.git_local_branches posts = {} for branch in branches: posts[branch] = set([self.__get_path_from_ref(diff.a_path) for diff in self.git_diff(branch)]) posts[branch].discard(None) if not as_dict: out_posts = set() for branch, ps in posts.items(): out_posts.update(ps) return list(out_posts) return posts def git_branch_for_post(self, path, interactive=False): if path is None: return None if path in self.git_local_branches: return self.git_branch(path) branches = [] for branch in self.git_local_branches: if path in self.git_local_posts(branches=[branch]): branches.append(branch) if len(branches) == 0: if path in self.dir(): return self.git_branch(self.config.published_branch) return None if len(branches) == 1: return self.git_branch(branches[0]) # Deal with ambiguity if interactive: print("There are multiple branches for post '{}'.".format(path)) for i, branch in enumerate(branches): print("{}. {}".format(i, branch)) response = None while not isinstance(response, int): response = input('Please select the branch you would like to use: ') try: response = int(response) except: response = None else: response = 0 return self.git_branch(branches[response]) def git_branch(self, branch=None): if isinstance(branch, git.Head): return branch if branch is None: return self.git.active_branch if not isinstance(branch, str): raise ValueError("'{}' of type `{}` is not a valid branch descriptor.".format(branch, type(branch))) try: return self.git.branches[branch] except IndexError: raise ValueError("Specified branch `{}` does not exist.".format(branch)) def git_checkout(self, branch, soft=False, reset=False, create=False): if not create: branch_obj = self.git_branch(branch) branch_obj.checkout() return branch_obj if soft and self.git.active_branch.name not in [self.config.published_branch, branch] and not self.git.active_branch.name.endswith('.kp'): response = None while response not in ['y', 'n']: response = input('It looks like you have checked out the `{}` branch, whereas we were 
expecting to use `{}`. Do you want to use your current branch instead? (y/n) '.format(self.git.active_branch.name, branch)) if response == 'y': branch = self.git.active_branch.name if reset or branch not in [b.name for b in self.git.branches]: ref_head = None if self.git_has_remote: for ref in self.git_remote.refs: if ref.name == branch: ref_head = ref break if not ref_head: ref_head = self.git_remote.refs.master if self.git_has_remote else self.git.branches.master else: logger.warning( "The branch `{}` already exists as upstream, and you maybe clobbering someone's work. Please check.".format(ref_head.name)) branch = self.git.create_head(branch, ref_head, force=True) else: branch = self.git_branch(branch) branch.checkout() return branch def git_diff(self, ref=None, ref_base=None, patch=False): commit = self.git.commit(ref) ref = self.git.merge_base(self.git.commit(ref_base or self.config.published_branch), commit)[0] return commit.diff(ref, create_patch=patch) # ---------------- Post retrieval methods -------------------------------- def _dir(self, prefix, statuses): posts = set() if any([status != self.PostStatus.PUBLISHED for status in statuses]): local_posts = self.git_local_posts(as_dict=True) for status in statuses: if status == self.PostStatus.PUBLISHED: posts.update(self.git_dir(prefix=prefix, commit=self.config.published_branch)) else: for branch in local_posts: for post_path in local_posts[branch]: if prefix is not None and not post_path.startswith(prefix): continue if self._kp_status(post_path, branch=branch) in statuses: posts.add(post_path) for post in sorted(posts): yield post # ------------- Post submission / addition user flow ---------------------- def _add_prepare(self, kp, path, update=False, branch=None, squash=False, message=None): target = os.path.abspath(os.path.join(self.path, path)) if self.git_has_remote: branch = branch or path else: logger.warning("This repository does not have a remote, and so post review is being skipped. 
Adding post directly into published branch...") branch = self.config.published_branch # Create or checkout the appropriate branch for this project logger.info("Checking out (and/or creating) a new branch `{}`...".format(branch)) branch_obj = self.git_checkout(branch, soft=True, reset=squash, create=True) branch = branch_obj.name # Verify that post path does not exist (unless we are updating the post) assert update or not os.path.exists(target), "A knowledge post already exists at '{}'! If you wanted to update it, please pass the '--update' flag.".format(path) # Add knowledge post to local branch logger.info("Adding and committing '{}' to local branch `{}`...".format(path, branch)) def _add_cleanup(self, kp, path, update=False, branch=None, squash=False, message=None): self.git.index.add([path]) # Commit the knowledge post and rollback if it fails try: if message is None: message = input("Please enter a commit message for this post: ") self.git.index.commit(message) except (KeyboardInterrupt, Exception) as e: if message is None: logger.warning("No commit message input for post '{}'. Rolling back post addition...") else: logger.error("Something went wrong. Rolling back post addition...") self.git.index.reset() try: self.git.git.clean('-df', path) self.git.git.checkout('--', path) except: pass raise e def _submit(self, path=None, branch=None, force=False): if not self.git_has_remote: raise RuntimeError("Could not find remote repository `{}` into which this branch should be submitted.".format(self.config.remote_name)) if branch is None and path is None: raise ValueError("To submit a knowledge post, a path to the post and/or a git branch must be specified.") if branch is None: branch = self.git_branch_for_post(path) if branch is None: raise ValueError("It does not appear that you have any drafts in progress for '{}'.".format(path)) if not self.__remote_available: raise RuntimeError("Cannot connect to remote repository {} ({}). 
Please check your connection, and then try again.".format(self.config.remote_name, self.git_remote.url)) self.git_remote.push(branch, force=force) logger.info("Pushed local branch `{}` to upstream branch `{}`. Please consider starting a pull request, or otherwise merging into master.".format(branch, branch)) def _publish(self, path): # Publish a post for general perusal raise NotImplementedError def _unpublish(self, path): # unpublish a post for general perusal raise NotImplementedError def _accept(self, path): # Approve to publish a post for general perusal pass def _remove(self, path, all=False): raise NotImplementedError # ------------ Knowledge Post Data Retrieval Methods ------------------------- def _kp_uuid(self, path): try: return self._kp_read_ref(path, 'UUID') except: return None def _kp_path(self, path, rel=None): return KnowledgeRepository._kp_path(self, os.path.expanduser(path), rel=rel or self.path) def _kp_exists(self, path, revision=None): # For speed, first check whether it exists in the checked out branch, then search more deeply return os.path.isdir(os.path.join(self.path, path)) or (self.git_branch_for_post(path, interactive=False) is not None) def _kp_status(self, path, revision=None, detailed=False, branch=None): if not hasattr(self, '_dir_cache'): self._dir_cache = {path: None for path in self.dir()} if path in self._dir_cache: return self.PostStatus.PUBLISHED if branch is None: branch = self.git_branch_for_post(path, interactive=False) else: branch = self.git_branch(branch) if branch is None: return ValueError("No such post: {}".format(path)) if branch.name == self.config.published_branch: status = self.PostStatus.PUBLISHED, None elif self.git_has_remote and branch.name in self.git_remote.refs: remote_branch = self.git_remote.refs[branch.name].name behind = len(list(self.git.iter_commits('{}..{}'.format(branch, remote_branch)))) ahead = len(list(self.git.iter_commits('{}..{}'.format(remote_branch, branch)))) status = 
(self.PostStatus.SUBMITTED, (" - {} commits behind".format(behind) if behind else '') + (" - {} commits ahead".format(ahead) if ahead else '') + (" [On branch: {}]".format(branch) if branch != path else '')) else: status = self.PostStatus.DRAFT, None if detailed: return status return status[0] def _kp_get_revision(self, path): # We use a 'REVISION' file in the knowledge post folder rather than using git # revisions because using git rev-parse is slow. try: return int(self._kp_read_ref(path, 'REVISION')) except: return 0 def _kp_get_revisions(self, path): # slow # TODO: In the future, we may want to use something like: # self.git.iter_commits(paths=os.path.join(self.path, path, 'knowledge.md')) # But this will require a lot of piping and may not make sense in the context # of a non-bare git repository. raise NotImplementedError def _kp_write_ref(self, path, reference, data, uuid=None, revision=None): ref_path = os.path.join(self.path, path, reference) ref_dir = os.path.dirname(ref_path) if not os.path.exists(ref_dir): os.makedirs(ref_dir) with open(ref_path, 'wb') as f: return f.write(data) def _kp_dir(self, path, parent=None, revision=None): # TODO: Account for revision if parent: path = os.path.join(path, parent) for dirpath, dirnames, filenames in os.walk(os.path.join(self.path, path)): for filename in filenames: if dirpath == "" and filename == "REVISION": continue yield os.path.relpath(os.path.join(dirpath, filename), os.path.join(self.path, path)) def _kp_has_ref(self, path, reference, revision=None): # TODO: Account for revision return os.path.isfile(os.path.join(self.path, path, reference)) def _kp_diff(self, path, head, base): raise NotImplementedError def _kp_new_revision(self, path, uuid=None): self._kp_write_ref(path, "REVISION", encode(self._kp_get_revision(path) + 1)) if uuid: self._kp_write_ref(path, "UUID", encode(uuid)) def _kp_read_ref(self, path, reference, revision=None): with open(os.path.join(self.path, path, reference), 'rb') as f: return 
f.read() # ------------- Utility methods -------------------------------------- def __abspath(self, path): return os.path.abspath(os.path.join(self.path, path)) @property def __remote_host(self): if self.git_has_remote: # TODO: support more types of hosts m = re.match(r'.*?@(.*?):\.*?', self.git_remote.url) if m: # shorthand ssh uri return m.group(1) return None @property def __remote_port(self): port = 22 if self.git_has_remote: m = re.match(r'^(.*?)?@([^/:]*):?([0-9]+)?', self.git_remote.url) if m: if m.group(3): port = m.group(3) return int(port) @property def __remote_available(self): # TODO: support more types of hosts host = self.__remote_host port = self.__remote_port if host: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.settimeout(0.5) try: s.connect((socket.gethostbyname(host), port)) return True except: return False finally: s.close() return True
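The private `__get_path_from_ref` helper above is pure string logic: it walks the components of a git ref path and truncates at the first component ending in `.kp`. A standalone sketch of the same logic (the function name here is hypothetical, not part of the class):

```python
def kp_path_from_ref(ref):
    # Walk the path components and truncate at the first one ending
    # in '.kp'; return None if no component is a knowledge post dir.
    parts = ref.split('/')
    for i, part in enumerate(parts):
        if part.endswith('.kp'):
            return '/'.join(parts[:i + 1])
    return None

print(kp_path_from_ref('projects/demo.kp/knowledge.md'))  # projects/demo.kp
print(kp_path_from_ref('docs/README.md'))                 # None
```

This is why `git_local_posts` can call `.discard(None)` afterwards: diffs touching files outside any `.kp` directory map to `None` and are dropped.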
Once you have logged in successfully with your six-digit birthday, you will be prompted to change this number to a different six-digit PIN.

Q: What are the advantages of taking the ECE PhD preliminary exam while I am in the MS degree program?

A: Students who pass the ECE PhD preliminary exam have a higher success rate of securing a Graduate Research Assistantship (GRA) position at GT-Atlanta, if they wish to pursue PhD studies there. After enrolling in the GT ECE PhD program, students who passed the ECE PhD prelim exam in the MS phase will not need to take the prelim exam again. Check the ECE Graduate Handbook for additional details.

Q: I will be in school the fall, spring, and summer semesters to finish the GT MS ECE degree. How many chances do I have to pass the prelim exam?

A: ECE PhD prelim exams are offered during March (spring semester) and October (fall semester) of each year. Students admitted into the graduate program as M.S. students who wish to become Ph.D. students are allowed to take the preliminary examination while in the M.S. program. However, students classified as Ph.D. at the time of admission must pass the Ph.D. Preliminary Examination within their first four semesters (not counting the summer) in the program. All Ph.D. students are permitted three opportunities to take the exam while enrolled in the ECE graduate program at Georgia Tech.

Q: I plan to join the GT-Shenzhen program in the fall semester and complete the MS ECE degree the following fall semester. How many chances do I have to pass the prelim exam?

A: ECE PhD prelim exams are offered during March (spring semester) and October (fall semester) of each year. Based on your program plan, you will have three chances to take the prelim exam: during the first October, the next March, and the second October. Even if you will be spending a semester doing an internship, you can still take the prelim exam that semester.
Note that if you fail the prelim all three times, you will not be eligible to apply to the GT ECE PhD program any more, now or in the future.

Q: I plan to take the ECE PhD Preliminary Exam. On the prelim exam registration form, I am asked to select a primary area of interest for PhD research. Does it have to be the area that I entered in my ECE graduate application? Does it affect how I should complete the MS Coursework Form?

A: The area that you select on the PhD prelim exam registration form and the area(s) that you indicated on the ECE Graduate Application form do not have to be identical; both are for information and planning purposes and are non-binding. They do not affect how your MS Coursework Form should be completed.

Q: What is the format of the ECE PhD preliminary exam?

A: It is a 4-hour, closed-book, closed-notes exam on undergraduate material. The exam consists of 24 problems; each student must select 8 problems to work on and submit for grading. Students are only permitted to bring pencils, pens, an eraser, and a basic scientific calculator: trigonometric and log functions may be used, but the calculator must not have programming modes. The passing score for the preliminary exam is 65 out of 100. Be sure to read the preliminary exam study guide. All students who take the preliminary exam must choose one of two exam options: CompE or EE. Each option consists of a core, which is a set of 4 fixed classes, and electives, which is a set of 4 classes that the student chooses to work on. Each student receives a total of 24 problems and chooses the CompE or EE option after seeing the exam. Each student turns in 8 problems: all 4 core problems for the chosen option, plus 4 electives chosen from the remaining pool. The electives can be chosen from the other core (for instance, if a student chooses the EE core, they can work one or more of the CompE core problems as electives).
Q: Will the GT College of Computing (CoC) recognize my ECE PhD prelim exam results?

A: No. Doctoral programs at the CoC have an entirely different set of requirements.

Q: How do we transition from the GT-Shenzhen MS ECE program to the GT-Atlanta ECE PhD degree program?

A: First, pass the ECE PhD prelim exam at the earliest opportunity. Keep your GT GPA high (a minimum GPA of 3.5 is required for the PhD program). Approximately 6-9 months before the start of your planned PhD studies at GT-Atlanta, you can contact selected professors to inquire about Graduate Research Assistantship (GRA) opportunities. If any professor agrees to sponsor you, please inform the GT-Shenzhen Office about the GRA arrangement. You will need to fill out a simplified internal application for the ECE PhD program. After that application is accepted, we will arrange for the ECE Graduate Office and the Registrar's Office to switch your campus from GT-Shenzhen to GT-Atlanta once you have completed your GT MS ECE degree. Your I-20 form will be requested at that time.

Q: I have completed two semesters of coursework at GT-Shenzhen and have passed the ECE PhD prelim exam. I have contacted a GT-Atlanta professor and received a GRA offer from him to study for the PhD. Can I transfer to GT-Atlanta now?

A: Most professors offer the GRA position because the student will come in with the GT MS degree. If all parties are "on the same page" regarding the student's qualifications and intentions, the student can transfer to GT-Atlanta with the J-1 (not F-1) visa, as an Exchange Student.

Q: If I pass the ECE PhD preliminary exam, earn the GT ECE MS degree, and begin my GT ECE PhD studies at a later date, do I have to take the preliminary exam again?

A: Currently, there is no expiration date for PhD preliminary exam results, so you will not need to take the exam again once you have passed it.

A: Students should upload the electronic or scanned version of their homework to their account in T-square. You can use the scanner found in any GT-Shenzhen office. Remember to include a cover sheet with each homework assignment. The GT-Shenzhen office scans all exam papers and submits them to DLPE.
__author__ = 'tilmann.bruckhaus'


class Hanoi:
    def __init__(self, disks):
        print('Playing "Towers of Hanoi" for', disks, 'disks:')
        self.disks = disks
        self.source_peg = []
        self.helper_peg = []
        self.target_peg = []
        self.set_up_pegs()
        self.show()

    def solve(self):
        self.move(self.disks, self.source_peg, self.target_peg, self.helper_peg)
        print('\nSolved.')

    def move(self, disk_count, source_peg, target_peg, helper_peg):
        # Recursively move disk_count disks: shift the top disk_count - 1 to
        # the helper peg, move one disk, then shift them onto the target peg.
        if disk_count >= 1:
            self.move(disk_count - 1, source_peg, helper_peg, target_peg)
            target_peg.append(source_peg.pop())
            self.show()
            self.move(disk_count - 1, helper_peg, target_peg, source_peg)

    def show(self):
        print('\nsource_peg:', self.source_peg)
        print('helper_peg:', self.helper_peg)
        print('target_peg:', self.target_peg)

    def set_up_pegs(self):
        # Largest disk (highest number) goes on the bottom of the source peg.
        for i in reversed(range(1, self.disks + 1)):
            self.source_peg.append(i)


if __name__ == '__main__':
    Hanoi(4).solve()
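As a sanity check on the recursion above, a standalone counter (independent of the class, following the same recurrence) confirms the classic 2**n - 1 move count:

```python
def hanoi_moves(n):
    # Same recursion shape as Hanoi.move, but only counting moves:
    # solve n-1 disks, move one disk, solve n-1 disks again.
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1

for n in range(1, 6):
    assert hanoi_moves(n) == 2 ** n - 1

print(hanoi_moves(4))  # 15
```

So the `Hanoi(4).solve()` run prints exactly 15 intermediate board states, one per move.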
The latest embarrassing result of a "signature strike" in America's ever-growing drone war has sparked public protests in South Yemen's Shabwa Province, where locals expressed outrage at a US attack that killed at least seven civilian members of a single family.

The attack, which took place on January 28, was in a remote part of Shabwa. The victims were family members in a vehicle who had gone looking for a missing child in the Said District. The slain were not members of any political or religious organizations. They were a bunch of guys in a car, though, and when the US sees a crowded vehicle it assumes it's militants and attacks. These are called "signature strikes": strikes based on the victims giving the appearance of being a valid target.

The protesters are not only mad at the US for killing a bunch of innocent civilians, though they naturally are quite mad about that, but also at Saudi Arabia for not doing anything about it. Since Saudi Arabia is invading, and backing the occupation of Shabwa, locals argue it should be responsible for keeping their airspace safe.

That's not really Saudi Arabia's thing, though, with its own air campaign killing massive numbers of civilians on equally flimsy pretexts, and the fact that the US killed a few in a drone strike is likely of little consequence as far as they're concerned.
#!/usr/bin/env python # -*- coding: utf-8 -*- from __future__ import unicode_literals import ldap3 import windbreads.utils as wdu import wx from ldap3tool import LDAPTool import wxbreads.widgets as wxw from wxbreads.base import BaseDialog def ldap_login(server, base_dn, login_name, password, close=True, **kwargs): ldap = LDAPTool(server=server, base_dn=base_dn) try: use_ssl = kwargs.pop("use_ssl", False) kw = kwargs.copy() if use_ssl: kw.update(server=ldap.open_server(use_ssl=use_ssl)) ldap.connect(login_name, password, **kw) except ldap3.core.exceptions.LDAPSocketOpenError as ex: return 100, ex except ldap3.core.exceptions.LDAPBindError: return 200, None except Exception as ex: return 300, ex else: if close: ldap.close() else: return 0, ldap return 0, None class LoginWindow(BaseDialog): app_title = "User Login" def __init__(self, **kwargs): self.enable_cancel = kwargs.get("enable_cancel", True) self.ldap_kwargs = kwargs.pop("ldap_kwargs", {}) self.need_busy = kwargs.pop("need_busy", False) self.allowed_names = kwargs.pop("allowed_names", None) super(LoginWindow, self).__init__(**kwargs) self.panel = wx.Panel(self) self.parent = parent = kwargs.get("parent") root_user = kwargs.get("root_user") root_pass = kwargs.get("root_pass") last_user = kwargs.get("last_user") self.domain = kwargs.get("domain", "") if parent: if not root_user: root_user = getattr(parent, "root_user", "root") if not root_pass: root_pass = getattr(parent, "root_pass", "") if not last_user: last_user = getattr(parent, "login_user") self.root_user = root_user or "root" self.root_pass = root_pass or "" self.last_user = last_user or "" self.pwd = kwargs.get("password", "") self.current_user = None self.ldap_obj = None self.server = kwargs.get("server", "ldap-server") self.base_dn = kwargs.get("base_dn", "dc=corp,dc=company,dc=org") self.destroy = kwargs.get("destroy", True) self.can_exit = kwargs.get("can_exit", True) self.is_login = False self.setup_ui() self.Bind(wx.EVT_BUTTON, self.on_login, 
id=wx.ID_OK) self.Bind(wx.EVT_BUTTON, self.on_quit, id=wx.ID_CANCEL) if kwargs.get("show_win", True): self.show() def setup_ui(self): sizer = wx.BoxSizer(wx.VERTICAL) size = (120, -1) label_style = wx.ALIGN_RIGHT kwargs = dict( fsize=size, ssize=(200, -1), fstyle=label_style, t=self.t ) # Name field _, self.name_tc = wxw.add_text_row( self.panel, sizer, label="Login Name", value=self.last_user, **kwargs ) wxw.focus_on(self.name_tc) # Password _, self.pwd_tc = wxw.add_text_row( self.panel, sizer, label="Password", value=self.pwd, sstyle=wx.TE_PASSWORD, **kwargs ) ok_btn, cancel_btn = wxw.add_ok_buttons( self.panel, sizer, size=(100, 30), ok_text="Login", t=self.t ) self.ok_btn = ok_btn self.cancel_btn = cancel_btn cancel_btn.Enable(self.enable_cancel) self.panel.SetSizer(sizer) self.panel.Layout() sizer.Fit(self) def high_light(self, wgt, focus=True): wgt.Clear() wgt.SetBackgroundColour(wxw.HIGHLIGHT_RED) if focus: wgt.SetFocus() def get_field_values(self): self.login_name = self.name_tc.GetValue().strip().lower() self.password = self.pwd_tc.GetValue() def on_login(self, event): self.ok_btn.Enable(False) self.cancel_btn.Enable(False) self.is_login = True self.current_user = None self.ldap_obj = None self.start_delay_work(self.after_submit, self.do_submit) def after_submit(self, delay_result): try: delay_result.get() except Exception as e: self.popup("Error", e, "e") self.ok_btn.Enable(True) self.cancel_btn.Enable(True) self.is_login = False if self.current_user: if self.parent and hasattr(self.parent, "login_user"): self.parent.login_user = self.current_user self.parent.ldap_obj = self.ldap_obj self.on_quit() def do_submit(self): self.get_field_values() if not self.login_name: self.high_light(self.name_tc) self.Refresh() return self.name_tc.SetBackgroundColour(wx.NullColour) if not self.password: self.name_tc.ClearBackground() self.high_light(self.pwd_tc) self.Refresh() return if self.parent: root_user = getattr(self.parent, "root_user", "root") root_pass = 
getattr(self.parent, "root_pass", "") else: root_user = self.root_user root_pass = self.root_pass if self.login_name == root_user: if self.password == root_pass: self.current_user = self.login_name self.ldap_obj = None return self.high_light(self.pwd_tc) self.Refresh() return if "\\" in self.login_name: domain, username = self.login_name.split("\\", 1) else: domain, username = self.domain, self.login_name if self.allowed_names and username not in self.allowed_names: self.popup("Error", "This user is not allowed to login", "e") return busy = None if self.need_busy: busy = self.show_busy("connecting to server...") ec, msg = ldap_login( self.server, self.base_dn, "{}\\{}".format(domain, username) if domain else username, self.password, **self.ldap_kwargs ) if self.need_busy: self.hide_busy(busy) if ec in (100, 300): self.popup("Error", msg, "e") return elif ec == 200: self.popup( "Authentication Error", "Username/password not match", "e" ) self.high_light(self.pwd_tc) self.Refresh() return if self.parent and getattr(self.parent, "extra_login_check"): if not self.parent.extra_login_check(username, msg): return self.current_user = username self.ldap_obj = msg def on_quit(self, event=None): if self.can_exit and (not self.is_login): if self.destroy: self.Destroy() else: self.Hide() def test_run(): app = wx.App() LoginWindow(last_user="domain\\username", password="password") app.MainLoop() def test_i18n_run(): from functools import partial import windbreads.common_i18n as wdi18n zh = wdi18n.zh zh.setdefault("User Login", "用户登录") t = partial(wdu.tt, lang="zh", po=dict(zh=zh)) app = wx.App() LoginWindow(last_user="domain\\test", password="password", t=t) app.MainLoop() if __name__ == "__main__": test_run() test_i18n_run()
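The `domain\username` handling in `do_submit` is worth isolating: a login name may arrive either bare or prefixed with a domain, and only the first backslash splits it. A self-contained sketch of that parsing (the helper name is hypothetical, not part of the window class):

```python
def split_login(login_name, default_domain=''):
    # 'DOMAIN\\user' -> ('DOMAIN', 'user'); a bare name falls back to
    # the configured default domain (may be empty, meaning no domain).
    if '\\' in login_name:
        domain, username = login_name.split('\\', 1)
    else:
        domain, username = default_domain, login_name
    return domain, username

print(split_login('CORP\\alice'))      # ('CORP', 'alice')
print(split_login('bob', 'CORP'))      # ('CORP', 'bob')
```

The window then rebuilds `"{}\\{}".format(domain, username)` for the LDAP bind only when a domain is present, which is why the split uses `maxsplit=1`: a username containing a further backslash is passed through untouched.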
Located in the center of the Bottières resort, the residence Les Terrasses de la Toussuire comprises 66 apartments that can accommodate between 2 and 8 people. It offers a grandiose panorama of the surrounding peaks thanks to its large terraces. Charm and authenticity are the key words of this building, completed in 2008, which offers great comfort. The residence has a relaxation area with an indoor pool and sauna (surcharge). Not forgetting the many events held throughout the summer in the 7 villages that make up the Sybelles (sports challenges, concerts, folk festivals...).

51 to 57 m² apartment for 6 people, composed of a living room with a pull-out sofa bed for 2 people, a bedroom with 1 double bed, a bedroom with 2 single beds, a kitchenette, a bathroom with shower or bath, a toilet (separate or not), a second shower room, and a terrace or balcony.
# The problem data.
salidas = [
    ((2013,11,2), 'LAN123','NewYork','EMBARQUE'),
    ((2013,4,28), 'MX201', 'Cancun', 'ARRIBADO'),
    #...
]

vuelos = {
    'LAN123': {'16740623-7', '1111111-1', '555555-5'},
    'ALGO00': {'444444-4'},
    'MX201': {'777777-7'},
    # ...
}

personas = {'16740623-7':('OEncina', 'NewYork', (1987, 7, 22), 62000),
            '444444-4':('Edwar Lopez', 'Miami', (1900, 3, 11), 120000),
            '777777-7':('Jorge Perez', 'Santiago', (1989, 2, 17), 1000),
            '555555-5':('Daniela Perez', 'Roma', (1991, 8, 17), 12000),
            '1111111-1':('Sandra Lazo', 'Ibiza', (1970, 4, 14), 10000),
            # ...
            }

# Question a)
def estado_pasajero(nombre_pasajero):
    # Iterate over the personas dictionary, unpacking everything
    for rut, datos in personas.items():
        nombre, ciudad_origen, fecha_nacimiento, millas = datos
        # If this is the person we are looking for...
        if nombre_pasajero == nombre:
            # Check the flights...
            for codigo_vuelo, rut_pasajeros in vuelos.items():
                # If the person is on this flight...
                if rut in rut_pasajeros:
                    # Check the departures...
                    for datos_salida in salidas:
                        fecha_salida, codigo, ciudad, estado_vuelo = datos_salida
                        # If we find the flight...
                        if codigo_vuelo == codigo:
                            # Return all the data.
                            return (rut, ciudad_origen, estado_vuelo)
    # If there is no match, return None
    return None

# Question b)
def cambia_de_vuelo(rut, nuevo_vuelo, millas):
    for codigo, ruts in vuelos.items():
        # If the passenger exists...
        if rut in ruts:
            # Remove the passenger from their current flight...
            ruts.remove(rut)
            # ...and add them to the new flight. Remember that the vuelos
            # dictionary has a string as key and a set as value, so we
            # use .add()
            vuelos[nuevo_vuelo].add(rut)
            # Add the miles and update
            datos_persona = personas[rut]
            nombre, ciudad_origen, fecha_nacimiento, cantidad_millas = datos_persona
            cantidad_millas += millas
            datos_persona_actualizados = (nombre, ciudad_origen, fecha_nacimiento, cantidad_millas)
            personas[rut] = datos_persona_actualizados
            return True
    # If the passenger does not exist, return False.
    return False

# Question c)
def filtro_nac(fecha, estado):
    filtro = set()
    for rut, datos_persona in personas.items():
        # We only need the name and the date of birth; the rest of the
        # data can be skipped.
        nombre, _, fecha_nacimiento, _ = datos_persona
        # If the date of birth is later than the one requested...
        if fecha < fecha_nacimiento:
            # Get the data we need by reusing the first function
            # (BTW, the teachers like it when you do this)
            resultado = estado_pasajero(nombre)
            # estado_pasajero returns None when there is no match,
            # so guard before unpacking.
            if resultado is None:
                continue
            _, _, estado_vuelo = resultado
            # If the state is the one requested...
            if estado_vuelo == estado:
                # Add it
                filtro.add(rut)
    return filtro
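The set-based move in question b) — remove a passenger's rut from one flight's set and add it to another's — is worth seeing in isolation. The toy example below uses made-up flight codes and ruts, independent of the exercise data:

```python
# Hypothetical flight roster: flight code -> set of passenger ruts.
flights = {'AA1': {'11.111.111-1', '22.222.222-2'}, 'BB2': set()}

def move_passenger(flights, rut, new_flight):
    # Find the flight the passenger is on, take them off it, and put
    # them on the new one. Set membership, remove and add are all O(1).
    for code, ruts in flights.items():
        if rut in ruts:
            ruts.remove(rut)
            flights[new_flight].add(rut)
            return True
    # Passenger not found on any flight.
    return False

move_passenger(flights, '11.111.111-1', 'BB2')
# flights is now {'AA1': {'22.222.222-2'}, 'BB2': {'11.111.111-1'}}
```

Because the dictionary's values are mutable sets, no reassignment of `flights[code]` is needed; mutating `ruts` mutates the dictionary entry in place.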
3765 S 14th St, Milwaukee, WI 53221-1643 - This is a Short Sale. Check out this 4 Bedroom, 2 Bath Brick Cape Cod. Great room sizes! A few cosmetic updates and this home will shine again. Property being sold in as-is condition.
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import csv
import io
import ssl
import urllib.request
import sys

sys.path.insert(1, '../../../util')
from alpha2_to_dcid import COUNTRY_MAP

country_set = set(COUNTRY_MAP.values())

output_columns = [
    'Date', 'GeoId', 'CumulativeCount_Vaccine_COVID_19_Administered',
    'IncrementalCount_Vaccine_COVID_19_Administered',
    'CumulativeCount_MedicalConditionIncident_COVID_19_ConfirmedCase',
    'IncrementalCount_MedicalConditionIncident_COVID_19_ConfirmedCase',
    'CumulativeCount_MedicalConditionIncident_COVID_19_PatientDeceased',
    'IncrementalCount_MedicalConditionIncident_COVID_19_PatientDeceased',
    'Count_MedicalConditionIncident_COVID_19_PatientInICU',
    'Count_MedicalConditionIncident_COVID_19_PatientHospitalized',
    'CumulativeCount_MedicalTest_ConditionCOVID_19',
    'IncrementalCount_MedicalTest_ConditionCOVID_19'
]

# Automate Template MCF generation since there are many Statistical Variables.
TEMPLATE_MCF_TEMPLATE = """
Node: E:OurWorldInData_Covid19->E{index}
typeOf: dcs:StatVarObservation
variableMeasured: dcs:{stat_var}
measurementMethod: dcs:OurWorldInData_COVID19
observationAbout: C:OurWorldInData_Covid19->GeoId
observationDate: C:OurWorldInData_Covid19->Date
value: C:OurWorldInData_Covid19->{stat_var}
"""


def create_formatted_csv_file(f_in, csv_file_path):
    with open(csv_file_path, 'w', newline='') as f_out:
        writer = csv.DictWriter(f_out,
                                fieldnames=output_columns,
                                lineterminator='\n')
        writer.writeheader()
        reader = csv.DictReader(f_in)
        for row_dict in reader:
            place_dcid = 'country/%s' % row_dict['iso_code']
            # Skip invalid country ISO code.
            if place_dcid not in country_set:
                continue
            processed_dict = {
                'Date': row_dict['date'],
                'GeoId': place_dcid,
                'CumulativeCount_Vaccine_COVID_19_Administered':
                    row_dict['total_vaccinations'],
                'IncrementalCount_Vaccine_COVID_19_Administered':
                    row_dict['new_vaccinations'],
                'CumulativeCount_MedicalConditionIncident_COVID_19_ConfirmedCase':
                    row_dict['total_cases'],
                'IncrementalCount_MedicalConditionIncident_COVID_19_ConfirmedCase':
                    row_dict['new_cases'],
                'CumulativeCount_MedicalConditionIncident_COVID_19_PatientDeceased':
                    row_dict['total_deaths'],
                'IncrementalCount_MedicalConditionIncident_COVID_19_PatientDeceased':
                    row_dict['new_deaths'],
                'Count_MedicalConditionIncident_COVID_19_PatientInICU':
                    row_dict['icu_patients'],
                'Count_MedicalConditionIncident_COVID_19_PatientHospitalized':
                    row_dict['hosp_patients'],
                'CumulativeCount_MedicalTest_ConditionCOVID_19':
                    row_dict['total_tests'],
                'IncrementalCount_MedicalTest_ConditionCOVID_19':
                    row_dict['new_tests'],
            }
            writer.writerow(processed_dict)


def create_tmcf_file(tmcf_file_path):
    stat_vars = output_columns[2:]
    with open(tmcf_file_path, 'w', newline='') as f_out:
        for i, stat_var in enumerate(stat_vars):
            f_out.write(
                TEMPLATE_MCF_TEMPLATE.format_map({
                    'index': i,
                    'stat_var': stat_var
                }))


if __name__ == '__main__':
    gcontext = ssl.SSLContext()
    with urllib.request.urlopen(
'https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv', context=gcontext) as response: f_in = io.TextIOWrapper(response) create_formatted_csv_file(f_in, 'OurWorldInData_Covid19.csv') create_tmcf_file('OurWorldInData_Covid19.tmcf')
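The template expansion performed by `create_tmcf_file` can be checked in isolation: each statistical variable gets its own node, indexed by position, via `str.format_map`. The snippet below reuses the same mechanics on a trimmed-down template (the node text here is a sketch for illustration, not the exact TMCF output):

```python
# Trimmed-down copy of the template used above, for illustration only.
TEMPLATE = """Node: E:OurWorldInData_Covid19->E{index}
variableMeasured: dcs:{stat_var}
value: C:OurWorldInData_Covid19->{stat_var}
"""

stat_vars = ['CumulativeCount_Vaccine_COVID_19_Administered',
             'IncrementalCount_Vaccine_COVID_19_Administered']

# One template node per statistical variable, indexed by position.
nodes = [TEMPLATE.format_map({'index': i, 'stat_var': sv})
         for i, sv in enumerate(stat_vars)]

print(nodes[1].splitlines()[0])  # Node: E:OurWorldInData_Covid19->E1
```

`format_map` is used rather than `format` so the mapping can be passed directly without unpacking it into keyword arguments.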
I am writing to you all from a net cafe in Paris, after having forcibly removed myself from London town, which has been my home away from home for the past 7 nights (2 more nights than originally planned). I'm not sure if you've all heard the news or maybe seen the pictures published in Who Weekly or maybe NW in Melbourne yet (the subject is probably too cool for the likes of Woman's Day & New Idea - I mean it is London after all dahlinks), but whilst you've all been doing that W word thing (I can't even bring myself to utter the word), I have been hanging out with superstars. So, I'm expecting the headlines to read something along the lines of: "Samantha & Courteney - Exclusive Backstage Photos of Backpacking Extraordinaire Meeting Rock Star & Mother of Kurt Cobain's Child at Kick-Arse Vines Concert, Forum Theatre, London". Yes, you read correctly chumps - I HUNG OUT WITH COURTENEY LOVE AT THE VINES CONCERT IN LONDON !!!! (Disclaimer: "hung out" in this instance has been embellished from "pestered for a photo" to make for a better story). It was just that kind of week for me in the hip & happening city of London town, which I adored from the second I stepped off my ferry onto the drizzly, overcast & grey shores of Dover & began whinging about the crappy weather - only 20 degrees (& it's SUMMER, mind you). Quite apart from the fact that I could suddenly read menus, order food, ask questions about anything I bloody well liked, sing along to songs & do all those other things that you can suddenly do again when the world around you decides to speak English for the FIRST TIME IN TWO MONTHS, I loved London because of the brilliant night life, for Top Shop (seriously ... how have I lived my life without this store), & for the Tube, which simultaneously drove me absolutely mental & enchanted me with its cute "Mind The Gap" every two minutes. It's actually a little bit sad that it felt so bloody good to say "can you tell me if I'm on the right train?" 
instead of waiting in anticipation to see if the next stop was the one that I actually wanted. I arrived in London after spending five spectacular days in Paris, which is the most beautiful city I've visited, to date. Elegant & bohemian & artistic & grand - I've decided that Paris & I are soul mates. I spent 2 solid days absorbing the most prolific art collection in the world in the Louvre (thanks to Rob & Kellie's awesome birthday present !!! THANKS GUYS !!!), & climbed the Eiffel Tower, Arc de Triomphe, visited Sacre Coeur, Notre Dame & strolled the Champs Elysee - which I think I have longed to do since I was practically a foetus. Leaving Paris, I journeyed to East Putney in London, where my crazy party animal mates from the Greek Islands - Karen & Tim - offered to put me up in their gorgeous little gaff. How delightful to wake up in a room that doesn't have 10 other people sleeping in it ! How wondrous to take a shower not in my Havaianas ! Simply marvellous to make myself a cup of coffee & sit around in my pyjamas (I did this three times !!). Tim & Karen, & their equally cool housemates Brendan & Rae, pretty much gave me the run of the place & I was treated like a princess (of course) for five days ! We checked out Camden Markets (vintage shopping heaven), partied down at the Notting Hill Carnival (the biggest outdoor festival in Europe) & hit the pubs (surprise surprise) for my first Snakebite, as well as checking out the Vines & hanging out with COURTENEY LOVE at the Forum theatre. I left Tim & Karen after 4 nights of craziness, having caught up for dinner & drinks with another Greek Islander - Pia - in Soho. Afterwards, whilst strolling the backstreets of the West End & peering in the cute little boutique windows in Soho, I impulsively decided that I wasn't going to get on the Busabout bus the next day because I didn't feel like I could leave my London town just yet ! Pretty crazy but London was definitely worth it ! 
An absolute highlight of my time in London was managing to score tickets to see Phantom of the Opera at Her Majesty's Theatre - what an amazing experience !! Along with spending a goddamn fortune in British pounds (the cost of living here is seriously ridiculous - & also the reason for my forced removal), I managed to sneak in a few of the tourist attractions too; Big Ben, Tower Bridge, Westminster Abbey, the Dali Universe, Buckingham Palace (worst £12.50 I've ever spent in my life), London Eye, Trafalgar Square, Harrods, TOP SHOP, Piccadilly Circus, & Oxford Street just to name a few. Did I mention that I popped in to Top Shop ? Yowch - it's going to hurt when I get the bill on that one. After all of these whirlwind adventures, & many MANY many others that I just don't have time to elaborate on, I find myself in Paris for one last time - I'm leaving for a wild couple of days in Amsterdam tomorrow morning. Unfortunately, although I have been trying to avoid thinking about it, September means that I am beginning my last month here in Europe. On the bright side, September does mean that I get to party down with Johan (hi Johs ! can't wait to see ya !) & I'm also set to hang out with the lovely Anna in Hildesheim for a couple of days too. Hope you are all carrying on bravely without me ! In the meantime - take care everyone !
# Copyright 2019 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """Contains overloads to convert Python to equivalent JAX code.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from jax import lax from pyctr.overloads import py_defaults from pyctr.overloads import staging init = py_defaults.init assign = py_defaults.assign read = py_defaults.read call = py_defaults.call def if_stmt(cond, body, orelse, local_writes): """Functional form of an if statement. Args: cond: Callable with no arguments, predicate of conditional. body: Callable with no arguments, and outputs of the positive (if) branch as return type. orelse: Callable with no arguments, and outputs of the negative (else) branch as return type. local_writes: list(pyct.Variable), list of variables assigned in either body or orelse. Returns: Tuple containing the statement outputs. 
""" cond_result = cond() def if_body(*_): modified_vals, _ = staging.execute_isolated(body, local_writes) return modified_vals def if_orelse(*_): modified_vals, _ = staging.execute_isolated(orelse, local_writes) return modified_vals result_values = lax.cond(cond_result, (), if_body, (), if_orelse) for var, retval in zip(local_writes, result_values): var.val = retval return result_values def while_stmt(cond, body, _, local_writes): """Functional form of a while statement.""" local_writes = [ var for var in local_writes if not py_defaults.is_undefined(var.val) ] def while_test(state): for var, s in zip(local_writes, state): var.val = s _, result_values = staging.execute_isolated(cond, local_writes) return result_values def while_body(state): for var, s in zip(local_writes, state): var.val = s modified_vals, _ = staging.execute_isolated(body, local_writes) return modified_vals result_values = lax.while_loop(while_test, while_body, [var.val for var in local_writes]) for var, val in zip(local_writes, result_values): var.val = val return result_values def for_stmt(target, iter_, body, orelse, modified_vars): """Functional form of a for statement.""" del orelse modified_vars = [ var for var in modified_vars if not py_defaults.is_undefined(var.val) ] def for_body(idx, state): for var, s in zip(modified_vars, state): var.val = s target.val = iter_[idx] modified_vals, _ = staging.execute_isolated(body, modified_vars) return modified_vals results = lax.fori_loop(0, len(iter_), for_body, [var.val for var in modified_vars]) for var, val in zip(modified_vars, results): var.val = val
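The contract these overloads implement can be seen without pulling in JAX at all. Below is a plain, eager-Python sketch of `if_stmt`: both branches are callables, exactly one runs, and the variables written in either branch come back as a tuple. The `Var` class is a stand-in for the variable wrapper used above (an assumption for illustration, not the real `pyct` type), and `setattr` is just a way to assign inside a lambda:

```python
class Var:
    """Stand-in for the mutable variable wrapper used by the overloads (hypothetical)."""
    def __init__(self, val):
        self.val = val

def py_if_stmt(cond, body, orelse, local_writes):
    # Eager analogue of the lax.cond-based version above: run exactly one
    # branch for its side effects, then surface the values of every
    # variable either branch may have written.
    if cond():
        body()
    else:
        orelse()
    return tuple(var.val for var in local_writes)

x = Var(0)
result = py_if_stmt(lambda: 3 > 1,
                    lambda: setattr(x, 'val', 10),
                    lambda: setattr(x, 'val', -10),
                    [x])
# result == (10,) and x.val == 10
```

The JAX version differs in that both branches must be traceable and return the same output structure, which is why `staging.execute_isolated` collects the modified values explicitly instead of relying on side effects.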
As you travel over the dual-carriageway bridge between Vannes and Lorient, you can’t help but notice a pretty little port. Nestled at the bottom of an estuary, Saint-Goustan takes you back in time with its cobbled streets, stone bridge, half-timbered houses and bustling quays. The historic town of Auray is also home to one of Brittany’s major pilgrimage sites. Saint-Goustan is not a place for wobbling around on high heels. The ramps alongside the River Loch, built on the ruins of a castle, lead you down to the port, and this is a great starting point for a leisurely stroll along the shady terraces of the promenade, with delightful views down to the quays. The most picturesque side of the river is reached by crossing the four-arched stone bridge that dates back to the 13th century. Place Saint-Sauveur, with its round cobbles, is encircled by opulent-looking half-timbered and corbelled houses. The steep streets, cut into steps, trace a path through the town, lined by half-timbered facades. While the 15th and 16th century dwellings look dignified in the daytime, they bustle with café terraces in the evening. Auray-Saint-Goustan is a Town of Art and History that boasts two historic quarters. The upper town is centred around Saint-Gildas church, while the lower town is clustered along the banks of the River Loch. During the Middle Ages, the port’s strategic position meant that it collected duties from boats passing through. During the 16th and 17th centuries, the wine and grain trade made this place the third most important port in Brittany. The memories of that time still echo through the granite slabs, recalling the arrival of the American Benjamin Franklin, who landed here in 1776 to meet King Louis XVI. Nearby is the sanctuary city of Sainte-Anne d’Auray, the most important Catholic pilgrimage site in Brittany. 
The neo-Gothic basilica is the high point of any visit, but don’t miss the chance to wander around the cloister and pause by the miracle fountain, the memorial, the monumental statue and the Espace John Paul II. Guided tours are a great way to explore this remarkable heritage spot. Families can enjoy playing the game ‘In search of the Keys of Time’ (‘A la recherche des Clés du Temps’) using a pack available from the tourist office. Which film was made at Saint-Goustan? The stronghold of Georges Cadoudal – leader of the Breton Chouannerie uprisings of the late 1700s – was at Auray, so it made sense that Philippe de Broca should choose the old town to shoot several scenes of his movie ‘Chouans’!
# Departing.io, a web app to answer the question of "When will the next bus come?" # Copyright (C) 2016 Jake Coppinger # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. import json import xmltodict def printPrettyDict(d): print(json.dumps(d, sort_keys=True, indent=4, separators=(',', ': '))) class ParseNxtbus: def __init__(self,responseDict): jsonData = xmltodict.parse(responseDict) self._data = jsonData def stopMonitoring(self): serviceDelivery = self._data["Siri"]["ServiceDelivery"] arrivals = [] if "StopMonitoringDelivery" in serviceDelivery: if "MonitoredStopVisit" in serviceDelivery["StopMonitoringDelivery"]: monitoredBuses = serviceDelivery["StopMonitoringDelivery"]["MonitoredStopVisit"] # Check if there is more than one bus arriving if "MonitoredVehicleJourney" in monitoredBuses: busJourney = self._parseBusJourney(monitoredBuses) if busJourney: arrivals.append(busJourney) else: for journey in monitoredBuses: busJourney = self._parseBusJourney(journey) if busJourney: arrivals.append(busJourney) return arrivals def _parseBusJourney(self,busJourneyDictionary): if "AimedDepartureTime" in busJourneyDictionary["MonitoredVehicleJourney"]["MonitoredCall"]: # Is departing bus = busJourneyDictionary["MonitoredVehicleJourney"] busJourney = {} busJourney["busRouteNumber"] = bus["ExternalLineRef"] busJourney["destinationName"] = bus["DestinationName"] if bus["Monitored"] == "true": 
busJourney["departureTime"] = bus["MonitoredCall"]["ExpectedDepartureTime"] busJourney["monitored"] = True else: busJourney["departureTime"] = bus["MonitoredCall"]["AimedDepartureTime"] busJourney["monitored"] = False return busJourney return None
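Because `xmltodict.parse` produces plain nested dicts, the branching in `_parseBusJourney` can be exercised with a hand-built dict and no live NXTBUS response. The sketch below is a standalone version of that logic; the field values in `sample` are made up:

```python
def parse_bus_journey(journey):
    """Standalone version of the _parseBusJourney logic above."""
    call = journey["MonitoredVehicleJourney"]["MonitoredCall"]
    # Only journeys with an aimed departure time are departing buses.
    if "AimedDepartureTime" not in call:
        return None
    bus = journey["MonitoredVehicleJourney"]
    # xmltodict gives the Monitored flag as the string "true"/"false".
    monitored = bus["Monitored"] == "true"
    return {
        "busRouteNumber": bus["ExternalLineRef"],
        "destinationName": bus["DestinationName"],
        "monitored": monitored,
        # Prefer the live estimate when the vehicle is being tracked.
        "departureTime": (call["ExpectedDepartureTime"] if monitored
                          else call["AimedDepartureTime"]),
    }

# Hypothetical unmonitored journey, shaped like xmltodict output.
sample = {"MonitoredVehicleJourney": {
    "ExternalLineRef": "300",
    "DestinationName": "City",
    "Monitored": "false",
    "MonitoredCall": {"AimedDepartureTime": "2016-01-01T10:00:00"},
}}
# parse_bus_journey(sample)["departureTime"] == "2016-01-01T10:00:00"
```

This also illustrates why the class checks `"MonitoredVehicleJourney" in monitoredBuses` earlier: with a single arrival, xmltodict returns one dict rather than a list of them.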
This series features a curated collection of popular images selected by the Wright State University archivists. If you are interested in purchasing a print visit Prints for Purchase. "Prints for Purchase" (2019). CORE Scholar Slideshow. 64.
"""Script for syncing files form s3 down to your local machine. Code is heavily based on https://github.com/rll/rllab/blob/master/scripts/sync_s3.py """ import sys from flow import config import os import argparse sys.path.append('.') if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument('folder', type=str, default=None, nargs='?') parser.add_argument('--dry', action='store_true', default=False) parser.add_argument('--bare', action='store_true', default=False) args = parser.parse_args() remote_dir = config.AWS_S3_PATH local_dir = os.path.join(config.LOG_DIR, "s3") if args.folder: remote_dir = os.path.join(remote_dir, args.folder) local_dir = os.path.join(local_dir, args.folder) if args.bare: command = ( ("aws s3 sync {remote_dir} {local_dir} --exclude '*' --include " + "'*.csv' --include '*.json' --content-type \"UTF-8\"") .format(local_dir=local_dir, remote_dir=remote_dir)) else: command = ( ("aws s3 sync {remote_dir} {local_dir} --exclude '*stdout.log' " + "--exclude '*stdouterr.log' --content-type \"UTF-8\"") .format(local_dir=local_dir, remote_dir=remote_dir)) if args.dry: print(command) else: os.system(command)
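The command assembly above can be verified without touching AWS by inspecting the string it builds (which is what `--dry` prints). This sketch reproduces the `--bare` branch with placeholder paths; the bucket name is made up:

```python
import os

def build_bare_command(remote_dir, local_dir):
    # Mirrors the --bare branch above: exclude everything, then
    # re-include only csv and json files.
    return (("aws s3 sync {remote_dir} {local_dir} --exclude '*' --include " +
             "'*.csv' --include '*.json' --content-type \"UTF-8\"")
            .format(local_dir=local_dir, remote_dir=remote_dir))

cmd = build_bare_command('s3://my-bucket/experiments',   # hypothetical bucket
                         os.path.join('/tmp', 's3'))
# cmd begins "aws s3 sync s3://my-bucket/experiments /tmp/s3 ..."
```

Note the filter order matters to the AWS CLI: the blanket `--exclude '*'` must come first so the later `--include` patterns can re-admit the wanted files.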
Sikiu Perez Ceramic Studio Blog: Full week? I might have the whole week without appointments, I can't believe it! Alright, I'll be using my new table and working on installations and paintings, so my ceramics are going to come alive - let's start the magic. Rodolfo's piece is going to be the first; I already found a piece of oak as a base.