"""Rabin-Karp algorithm for pattern matching.

A string-searching algorithm that uses hashing to find occurrences of a
pattern string in a text.
"""


def rabin_karp(pat, txt):
    q = 19    # a prime number used to calculate hash values
    d = 256   # size of the input alphabet
    h = 1     # will hold d^(M-1) % q, used when sliding the window
    found = False
    M = len(pat)
    N = len(txt)

    if M > N:
        print("No pattern found")
        return

    # Precompute h = d^(M-1) % q
    for i in range(M - 1):
        h = (h * d) % q

    # Hash values of the pattern and the first window of text
    p = 0
    t = 0
    for i in range(M):
        p = (d * p + ord(pat[i])) % q
        t = (d * t + ord(txt[i])) % q

    # Slide the pattern over the text one position at a time
    for i in range(N - M + 1):
        # Only compare characters one by one when the hash values match
        if p == t:
            for j in range(M):
                if txt[i + j] != pat[j]:
                    break
            else:
                print("Pattern found at index", i)
                found = True
        # Hash value for the next window of text: remove the leading
        # character, add the trailing character
        if i < N - M:
            t = (d * (t - ord(txt[i]) * h) + ord(txt[i + M])) % q
            if t < 0:
                t += q

    if not found:
        print("No pattern found")


if __name__ == '__main__':
    txt = "Driver program to test above function"
    pat = "test"
    rabin_karp(pat, txt)
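A small stand-alone sketch (the helper names `window_hash` and `roll` are my own, not from the original) verifying the rolling-hash identity that rabin_karp relies on: removing the leading character and appending the trailing one must yield the same value as hashing the new window from scratch.

```python
def window_hash(s, q=19, d=256):
    """Hash a string the same way rabin_karp hashes a window."""
    t = 0
    for ch in s:
        t = (d * t + ord(ch)) % q
    return t


def roll(t, lead, trail, h, q=19, d=256):
    """Update hash t: drop `lead`, append `trail`; h = d^(M-1) % q."""
    t = (d * (t - ord(lead) * h) + ord(trail)) % q
    return t + q if t < 0 else t


txt = "abcde"
M = 3
h = pow(256, M - 1, 19)            # d^(M-1) % q, as precomputed above
t = window_hash(txt[:M])           # hash of "abc"
t_next = roll(t, txt[0], txt[M], h)
assert t_next == window_hash(txt[1:M + 1])   # same as hashing "bcd" directly
```

Because the update is O(1), the whole search runs in O(N + M) time on average, only falling back to character-by-character comparison when hashes collide.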
Now, a new microalgae product has emerged in the natural products market that joins the ranks of these high-potency superfoods: Marine Phytoplankton. And unlike most of the other microalgae products on the market that are freeze-dried into powders, tablets or capsules, the Marine Phytoplankton product I'm reviewing here is raw, unprocessed, in liquid form and full of life! But beware! In researching related products for this story, I've discovered at least two products in the marketplace today that sound like they deliver marine phytoplankton, but they actually deliver only minuscule portions of the phytoplankton along with lots and lots of filler. I'll reveal the names of those products that I don't recommend later in this article. You'll also get to view a video review where I compare these products right in front of your eyes so you can actually see the nutrient density in glasses of water (see below). I'll also reveal my No. 1 recommended "Editor's Choice" marine phytoplankton product and tell you where to get it at 22% off the retail price. I don't earn anything on the sale of this product, nor have I been paid anything to review it. This is a 100% independent, unbiased review of a product so impressive that I actually brought two bottles with me to Ecuador! Read this product review if you're interested in learning about the latest, most nutritionally potent superfood product that can provide you with astonishing health benefits. This is a superfood that contains potent anti-cancer nutrients, elements that help reverse heart disease, nutrients for protecting brain function and even specific minerals and phytochemicals that can help normalize body weight. Raw products always have more nutritional potency than cooked or dried products because heat quickly destroys delicate phytochemicals. So raw spirulina is better than dried spirulina. Raw chlorella is better than dried chlorella.
And raw marine phytoplankton is better than dried or pasteurized marine phytoplankton. Based on the research I've done, Oceans Alive from Sunfood is the only source currently offering raw, unprocessed, liquid marine phytoplankton that's truly nutrient dense. That's one of the reasons why it has earned my enthusiastic recommendation. Oceans Alive isn't actually harvested from the ocean. It's grown in a controlled ocean water environment called a "bioreactor." The name sounds ominous, but it simply means the algae are reproducing in a large container of water that's exposed to sunlight. Cyanotech uses the same process to grow their spirulina starter cultures, which are then transferred to outdoor ponds to continue multiplying. Oceans Alive Marine Phytoplankton is grown in a controlled, chemical-free environment, which makes the product highly consistent and free from environmental contaminants. It takes an enormous amount of space, ocean water and sunlight to make just a single ounce of marine phytoplankton, which is why the product seems so expensive at first, but when you consider the amount of nutrition you're getting in each ounce, it's actually quite a bargain. Once the phytoplankton have reached the desired density in the bioreactor, they are harvested by straining the ocean water through a large fine mesh strainer that collects the phytoplankton. These are washed and transferred to a larger container where they're mixed with desalinated ocean water that's rich in ionic trace minerals. The resulting liquid is then transferred into Oceans Alive bottles and shipped to Sunfood. Note that during this entire process, the product is never heated, cooked, pasteurized or dried. So virtually 100% of the original nutrition (the phytonutrients) in the marine phytoplankton stays intact. It's almost as good as eating the phytoplankton right out of the ocean, which is of course what whales do, and they're mammals too!
Note: Neither I nor NaturalNews earns anything at all from these product sales. We have not been paid to promote this product. This is a 100% independent, objective, non-commercial review offered to you under the Free Speech protections of the United States Constitution. If you want to help NaturalNews financially, you may choose to offer a small donation to us, below, which is the only way we financially benefit from this product review. As far as I can tell, this product is a complete joke. Some might call it a fraud, because the amount of marine phytoplankton in the bottle seems to be virtually nothing. This is true both for the liquid UMAC-CORE product (which seems to have absolutely no color at all) and the capsules, which contain 475mg of filler and only 25mg of actual phytoplankton. (Do the math: That's 95% filler!) You actually have to take more than an entire bottle of capsules just to get 3 grams of total marine phytoplankton. Bottom line? Avoid UMAC-CORE. I call it a consumer rip-off. This is a product sold under a network marketing structure. While it has nice promotional materials, it contains only a minuscule amount of actual phytoplankton. In fact, by weight it is 99.75% something else and only 0.25% phytoplankton. What are the other ingredients? Purified water, white grape juice, aloe vera juice and other "filler" juices. Bottom line? Avoid FrequenSea. It's not a rip-off like UMAC-CORE, but neither is it very potent in terms of marine phytoplankton. While it has some other nice ingredients (like astaxanthin and ginger), it's still an overpriced liquid, in my opinion. I like this company. Poseidon Health seems to have an honest product made with 100% dried marine phytoplankton. Although I've never met or interviewed these folks, their product looks legit. In fact, it's probably very similar to the microalgae used in Oceans Alive, except it's dried into a powder rather than being used in its raw, liquid state.
Their website needs a whole lot of work, but their product looks solid. I plan to interview this company at a later date, if they're willing, and learn more about them. Bottom line? Cautiously Recommended. This looks like an honest marine phytoplankton product, although it's not raw. I don't yet know what drying process they use, or how hot the phytoplankton get during drying, but I intend to find out and report that to you later. • Oceans Alive remains my No. 1 choice due to the reputation of Sunfood (and David Wolfe), the integrity of their product line, the raw, liquid state of the product and the phenomenal health benefits of raw microalgae vs. processed microalgae. I put this on my list of the top superfoods in the world, joining the ranks of astaxanthin, spirulina, chlorella, brown seaweed extract (Modifilan), blue-green algae and others. As of this writing, Sunfood Nutrition is out of stock but is expecting a shipment in just a few days. Vitacost has just 200 units in stock right now. This product is difficult to get and supplies are very small. If you really want some of this, place your order right now, or you'll probably have to wait several weeks to get in on the next shipment.
class TimeSettingsManager(object):
    def __init__(self):
        self.minutes = 10
        self.seconds = 0
        self.time_change_callbacks = []

    def get_time_string(self):
        return "{0:0>2}:{1:0>2}".format(self.minutes, self.seconds)

    def increment_minutes(self):
        self.minutes += 1
        self.fire_time_change_callbacks()

    def decrement_minutes(self):
        self.minutes -= 1
        if self.minutes < 0:
            self.minutes = 0
        self.fire_time_change_callbacks()

    def increment_seconds(self, increment=15):
        self.seconds = (self.seconds + increment) % 60
        self.fire_time_change_callbacks()

    def decrement_seconds(self, decrement=15):
        self.seconds = (self.seconds - decrement) % 60
        self.fire_time_change_callbacks()

    def subscribe_to_timechange(self, time_change_callback):
        self.time_change_callbacks.append(time_change_callback)
        self.fire_time_change_callbacks()

    def fire_time_change_callbacks(self, origin_station_name=None):
        for time_change_callback in self.time_change_callbacks:
            if time_change_callback:
                time_change_callback(self.get_time_string(), self.minutes,
                                     self.seconds, origin_station_name)

    def set_countdown_time(self, minutes, seconds, origin_station_name=None):
        self.minutes = minutes
        self.seconds = seconds
        self.fire_time_change_callbacks(origin_station_name)
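A quick stand-alone illustration of the format spec that get_time_string relies on: "{0:0>2}" pads its argument to width 2 with leading zeros, so single-digit minutes and seconds still render as two characters.

```python
# "{0:0>2}" means: fill with '0', right-align ('>'), minimum width 2.
# This is how a countdown of 5 minutes 7 seconds becomes "05:07".
minutes, seconds = 5, 7
time_string = "{0:0>2}:{1:0>2}".format(minutes, seconds)
print(time_string)  # 05:07
```

The same result can be had with the more common "{0:02d}" spec; "0>2" is simply the fully spelled-out fill-and-align form.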
Remote Fishing Expeditions with Big Fish Down Under and Moana III Luxury Charter Boat specialises in tailor-made adventure fishing charters to remote areas of Princess Charlotte Bay, Cape York Peninsula, Gulf of Carpentaria and Papua New Guinea. These trips are 100 per cent planned to suit your specific interests, skills or fishing experience. They will show you the best of scenery, the best light tackle sportsfishing, the best barramundi fishing and reef fishing, and give you not only a lifetime of memories of remarkable locations to take home, but also containers of fish! They catch unbelievable amounts of Barramundi, Fingermark, Giant Trevally, Mangrove Jacks, King Salmon, Coral Trout, Spanish Mackerel, Red Emperor, Yellow and Blue Fin Tuna, Large Mouth Nannygai and Wahoo! These are all beautiful fish on the menu, and they will whip them out of the ocean and straight onto the BBQ plate! Huge Mud Crabs and Large Black Lip Oysters are also plentiful in these parts, and they have their fair share on each charter. Princess Charlotte Bay is 350 km NNW of Cairns on the eastern side of Cape York Peninsula. Being shielded from the south-east trade winds, Princess Charlotte Bay provides an ideal haven for the keen angler, and that's the reason for its popularity. The area plays host to extensive mangrove river systems, lagoons and offshore coral reefs. The fishing is as varied as the Great Barrier Reef itself, with Coral Trout, Red Emperor, Mud Crabs galore, the mighty Barramundi, and larger Mackerel and pelagic fish. But it's not all about fishing… At your request, they can just cruise, go snorkelling on the Great Barrier Reef, visit beautiful and remote islands in the Coral Sea, or just relax and watch beautiful sunsets over the remote mainland! You can travel with them from Cairns, Cooktown, Lizard Island and Weipa, or fly to meet them by helicopter or float plane.
Expeditions to the Gulf of Carpentaria and Papua New Guinea take quite a lot of planning, so please contact us so we can provide you with the extensive information needed to put the trips together and let you experience a trip of a lifetime! Prior arrangements need to be made well in advance for all their Remote Expeditions to Far Northern Australia and beyond. Various dinner menus are available, including Lamb Roast and freshly caught fish cooked with their secret herbs and spices.
import copy

from django.conf.urls import (include, url)
from rest_framework import routers

from treeherder.webapp.api import (artifact, bug, bugzilla, classifiedfailure,
                                   failureline, job_log_url, jobs, logslice,
                                   note, performance_data, refdata, resultset,
                                   runnable_jobs, text_log_summary,
                                   text_log_summary_line)

# router for views that are bound to a project
# i.e. all those views that don't involve reference data
project_bound_router = routers.SimpleRouter()

project_bound_router.register(r'jobs', jobs.JobsViewSet, base_name='jobs')
project_bound_router.register(r'runnable_jobs', runnable_jobs.RunnableJobsViewSet,
                              base_name='runnable_jobs')
project_bound_router.register(r'resultset', resultset.ResultSetViewSet,
                              base_name='resultset')
project_bound_router.register(r'artifact', artifact.ArtifactViewSet,
                              base_name='artifact')
project_bound_router.register(r'note', note.NoteViewSet, base_name='note')
project_bound_router.register(r'bug-job-map', bug.BugJobMapViewSet,
                              base_name='bug-job-map')
project_bound_router.register(r'logslice', logslice.LogSliceView,
                              base_name='logslice')
project_bound_router.register(r'job-log-url', job_log_url.JobLogUrlViewSet,
                              base_name='job-log-url')
project_bound_router.register(r'performance/data',
                              performance_data.PerformanceDatumViewSet,
                              base_name='performance-data')
project_bound_router.register(r'performance/signatures',
                              performance_data.PerformanceSignatureViewSet,
                              base_name='performance-signatures')
project_bound_router.register(r'performance/platforms',
                              performance_data.PerformancePlatformViewSet,
                              base_name='performance-signatures-platforms')


# this is the default router for plain restful endpoints
class ExtendedRouter(routers.DefaultRouter):
    routes = copy.deepcopy(routers.DefaultRouter.routes)
    routes[0].mapping[u"put"] = u"update_many"


# refdata endpoints:
default_router = ExtendedRouter()
default_router.register(r'product', refdata.ProductViewSet)
default_router.register(r'machine', refdata.MachineViewSet)
default_router.register(r'machineplatform', refdata.MachinePlatformViewSet)
default_router.register(r'buildplatform', refdata.BuildPlatformViewSet)
default_router.register(r'jobgroup', refdata.JobGroupViewSet)
default_router.register(r'jobtype', refdata.JobTypeViewSet)
default_router.register(r'repository', refdata.RepositoryViewSet)
default_router.register(r'optioncollectionhash',
                        refdata.OptionCollectionHashViewSet,
                        base_name='optioncollectionhash')
default_router.register(r'failureclassification',
                        refdata.FailureClassificationViewSet)
default_router.register(r'user', refdata.UserViewSet, base_name='user')
default_router.register(r'exclusion-profile', refdata.ExclusionProfileViewSet)
default_router.register(r'job-exclusion', refdata.JobExclusionViewSet)
default_router.register(r'matcher', refdata.MatcherViewSet)
default_router.register(r'failure-line', failureline.FailureLineViewSet,
                        base_name='failure-line')
default_router.register(r'classified-failure',
                        classifiedfailure.ClassifiedFailureViewSet,
                        base_name='classified-failure')
default_router.register(r'text-log-summary',
                        text_log_summary.TextLogSummaryViewSet,
                        base_name='text-log-summary')
default_router.register(r'text-log-summary-line',
                        text_log_summary_line.TextLogSummaryLineViewSet,
                        base_name='text-log-summary-line')
default_router.register(r'performance/alertsummary',
                        performance_data.PerformanceAlertSummaryViewSet,
                        base_name='performance-alert-summaries')
default_router.register(r'performance/alert',
                        performance_data.PerformanceAlertViewSet,
                        base_name='performance-alerts')
default_router.register(r'performance/framework',
                        performance_data.PerformanceFrameworkViewSet,
                        base_name='performance-frameworks')
default_router.register(r'performance/bug-template',
                        performance_data.PerformanceBugTemplateViewSet,
                        base_name='performance-bug-template')
default_router.register(r'bugzilla', bugzilla.BugzillaViewSet,
                        base_name='bugzilla')
default_router.register(r'jobdetail', jobs.JobDetailViewSet,
                        base_name='jobdetail')

urlpatterns = [
    url(r'^project/(?P<project>[\w-]{0,50})/',
        include(project_bound_router.urls)),
    url(r'^', include(default_router.urls)),
]
Enable your users, such as employees, customers and prospects, to get complex tasks done via a simple chat: raise helpdesk tickets, get the latest status updates on their submitted claims, find the right product, set up facilities, ask queries related to service requests, complaints and business policies, find issues with their expenses, and more. The Advaiya Chatbot solution not only interprets user requests reliably, but also contextually connects to and automates tasks and queries across multiple business applications. It can automatically initiate conversations with users through greetings or interactive messages. The Advaiya Chat Framework, based on Microsoft Cognitive Services, analyzes natural language queries to identify the domain and context, calls the right enterprise database and provides appropriate responses. This enables automation of business processes through machine learning and artificial intelligence. It understands queries much as a human would, suggests related queries to maintain the conversation flow, and responds to queries in the least possible time. The Advaiya Bot solution can interact naturally with your users on a website, app, Cortana, Microsoft Teams, Skype, Slack, Facebook Messenger and more channels. It integrates easily with your line-of-business applications and serves natural-language queries about various transactions. Scale and automate repeated inquiries, allowing SSC Helpdesk staff to handle more complex requests. Help visitors quickly search flats and apartments in a required locality with specific attributes. Increase customer satisfaction and loyalty by allowing customers to report problems, request a demo or schedule an appointment. Build smarter, more intuitive and engaging FAQ experiences by converting FAQs into an information chatbot.
###############################
#  This file is part of PyLaDa.
#
#  Copyright (C) 2013 National Renewable Energy Lab
#
#  PyLaDa is a high throughput computational platform for Physics. It aims to
#  make it easier to submit large numbers of jobs on supercomputers. It
#  provides a python interface to physical input, such as crystal structures,
#  as well as to a number of DFT (VASP, CRYSTAL) and atomic potential
#  programs. It is able to organise and launch computational jobs on PBS and
#  SLURM.
#
#  PyLaDa is free software: you can redistribute it and/or modify it under
#  the terms of the GNU General Public License as published by the Free
#  Software Foundation, either version 3 of the License, or (at your option)
#  any later version.
#
#  PyLaDa is distributed in the hope that it will be useful, but WITHOUT ANY
#  WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
#  FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
#  details.
#
#  You should have received a copy of the GNU General Public License along
#  with PyLaDa. If not, see <http://www.gnu.org/licenses/>.
###############################
""" Defines a path given relative to another.

    The object is to make it easy to switch from one computer to another,
    using environment variables defined on both.
"""


class RelativePath(object):
    """ Directory property which is relative to the user's home.

        The path which is returned (e.g. via ``__get__``) is always absolute.
        However, it is stored relative to the user's home, and hence can be
        passed from one computer system to the next.

        Unless you know what you are doing, it is best to get and set using
        the ``path`` attribute, starting from the current working directory
        if a relative path is given, and from the '/' root if an absolute
        path is given.

        >>> from os import getcwd, environ
        >>> getcwd()
        '/home/me/inhere/'
        >>> relative_directory.path = 'use/this/attribute'
        >>> relative_directory.path
        '/home/me/inhere/use/this/attribute'

        Other descriptors have somewhat more complex behaviors. ``envvar`` is
        the root directory, aka the fixed point. Changing it will simply
        change the root directory.

        >>> environ["SCRATCH"]
        '/scratch/me/'
        >>> relative_directory.envvar = "$SCRATCH"
        >>> relative_directory.path
        '/scratch/me/use/this/attribute'

        Modifying ``relative`` will change the second part of the relative
        directory. If a relative path is given, that relative path is used as
        is, without reference to the working directory. It is an error to
        give an absolute directory.

        >>> relative_directory.relative = "now/here"
        '/scratch/me/now/here'
        >>> relative_directory.relative = "/now/here"
        ValueError: Cannot set relative with absolute path.
    """

    def __init__(self, path=None, envvar=None, hook=None):
        """ Initializes the relative directory.

            :Parameters:
              path : str or None
                Path to store here. It can be relative to the current working
                directory, include environment variables or shorthands for
                user homes. If None, will be set to `envvar`.
              envvar : str or None
                Fixed point which can be understood from system to system. It
                should be a shorthand to a user home directory ("~/") or use
                an environment variable ("$SCRATCH"). If None, defaults to
                the user's home.
              hook : callable or None
                This function will be called if/when the directory is
                changed. Note that it may be lost during pickling if it is
                not itself picklable.
        """
        super(RelativePath, self).__init__()
        self._relative = None
        """ Private path relative to fixed point. """
        self._envvar = None
        """ Private envvar variable. """
        self._hook = None
        """ Private hook variable. """
        self.path = path
        """ Relative path. """
        self.envvar = envvar
        """ Fixed point. """
        self.hook = hook
        """ An object to call when the path is changed.

            Callable with at most one argument.
        """

    @property
    def relative(self):
        """ Path relative to fixed point. """
        return self._relative if self._relative is not None else ""

    @relative.setter
    def relative(self, value):
        """ Path relative to fixed point. """
        from os.path import expandvars, expanduser
        if value is None:
            value = ""
        value = expandvars(expanduser(value.strip()))
        assert len(value) == 0 or value[0] != '/', \
            ValueError('Cannot set "relative" attribute with absolute path.')
        self._relative = value if len(value) else None
        self.hook(self.path)

    @property
    def envvar(self):
        """ Fixed point for relative directory. """
        from os.path import expanduser, expandvars, normpath
        from . import local_path
        from .. import global_root
        if self._envvar is None:
            if global_root is None:
                return '/'
            if '$' not in global_root and '~' not in global_root:
                return normpath(global_root)
            # Need to figure it out.
            try:
                local_path(global_root).ensure(dir=True)
                return str(local_path(global_root))
            except OSError as e:
                raise IOError('Could not figure out directory {0}.\n'
                              'Caught error OSError {1.errno}: {1.message}'
                              .format(global_root, e))
        return normpath(expandvars(expanduser(self._envvar)))

    @envvar.setter
    def envvar(self, value):
        path = self.path if self._relative is not None else None
        if value is None or len(value.strip()) == 0:
            self._envvar = None
        else:
            self._envvar = value
        if path is not None:
            self.path = path
        self.hook(self.path)

    @property
    def path(self):
        """ Returns absolute path, including fixed point. """
        from os.path import join, normpath
        if self._relative is None:
            return self.envvar
        return normpath(join(self.envvar, self._relative))

    @path.setter
    def path(self, value):
        from os.path import relpath, expandvars, expanduser, abspath
        from os import getcwd
        if value is None:
            value = getcwd()
        if isinstance(value, tuple) and len(value) == 2:
            self.envvar = value[0]
            self.relative = value[1]
            return
        if len(value.strip()) == 0:
            value = getcwd()
        # This is a python bug where things don't work out if the root path
        # is '/'. Seems corrected after 2.7.2.
        if self.envvar == '/':
            self._relative = abspath(expanduser(expandvars(value)))[1:]
        else:
            self._relative = relpath(expanduser(expandvars(value)),
                                     self.envvar)
        self.hook(self.path)

    @property
    def unexpanded(self):
        """ Unexpanded path (e.g. with envvar as is). """
        from os.path import join
        from .. import global_root
        e = global_root if self._envvar is None else self._envvar
        return e if self._relative is None else join(e, self._relative)

    @property
    def hook(self):
        from inspect import ismethod
        from sys import version_info
        if version_info[0] < 3:
            from inspect import getargspec
        else:
            from inspect import getfullargspec as getargspec
        if self._hook is None:
            return lambda x: None
        N = len(getargspec(self._hook).args)
        if ismethod(self._hook):
            N -= 1
        if N == 0:
            return lambda x: self._hook()
        return self._hook

    @hook.setter
    def hook(self, value):
        from sys import version_info
        from inspect import ismethod, isfunction
        if version_info[0] < 3:
            from inspect import getargspec
        else:
            from inspect import getfullargspec as getargspec
        if value is None:
            self._hook = None
            return
        assert ismethod(value) or isfunction(value), \
            TypeError("hook is not a function or bound method.")
        N = len(getargspec(value)[0])
        if ismethod(value):
            if getattr(value, '__self__',
                       getattr(value, 'im_self', None)) is None:
                raise TypeError("hook callable cannot be an unbound method.")
            N -= 1
        assert N < 2, \
            TypeError("hook callable cannot have more than one argument.")
        self._hook = value

    def __getstate__(self):
        """ Saves state.

            If hook was not picklable, then it will not be saved
            appropriately.
        """
        from pickle import dumps
        try:
            dumps(self._hook)
        except Exception:
            return self._relative, self._envvar
        else:
            return self._relative, self._envvar, self._hook

    def __setstate__(self, args):
        """ Resets state.

            If hook was not picklable, then it will not be reset.
        """
        if len(args) == 3:
            self._relative, self._envvar, self._hook = args
        else:
            self._relative, self._envvar = args

    def set(self, path=None, envvar=None):
        """ Sets path and envvar. Used by repr. """
        hook = self._hook
        self._hook = None
        self.envvar = envvar
        self.path = path
        self._hook = hook
        self.hook(self.path)

    def repr(self):
        """ Makes this instance somewhat representable.

            Since hook cannot be represented in most cases, and is most
            likely set on initialization, this method uses ``set`` to get
            away with representability.
        """
        return "{0}, {1}".format(repr(self._envvar), repr(self._relative))
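A minimal stand-alone sketch (not part of PyLaDa; the variable names are my own) of the idea behind RelativePath: store a path as a fixed point plus a relative part, so it can be rebuilt on a machine where the fixed point expands differently.

```python
import os
from os.path import expandvars, expanduser, join, normpath, relpath

os.environ["SCRATCH"] = "/scratch/me"          # assumed for this example

envvar = "$SCRATCH"                             # the fixed point
absolute = "/scratch/me/use/this/attribute"     # a concrete absolute path

# Split: keep only the part relative to the expanded fixed point.
relative = relpath(absolute, expandvars(expanduser(envvar)))

# Rebuild: expand the fixed point (possibly differently on another
# machine) and join the relative part back on.
rebuilt = normpath(join(expandvars(expanduser(envvar)), relative))

print(relative)   # use/this/attribute
print(rebuilt)    # /scratch/me/use/this/attribute
```

If $SCRATCH expanded to, say, /lustre/scratch/me on a second system, the same stored pair would rebuild to /lustre/scratch/me/use/this/attribute, which is exactly the portability the class is after.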
Blue Room was established in 1952 and is Mombasa's most famous landmark restaurant, famous for its bhajias, tea and BR ice cream. Blue Room maintains a long-standing tradition of quality, offering customers quality food at fair prices. Our products are always prepared with the highest quality ingredients and are certain to please the most discriminating connoisseurs. We serve multi-cuisine, Western and Indian foods. The menu includes a variety of Pizzas and Hamburgers, Calzones, Submarines, Steak, Chicken, Fish, Samosas, Bhajias, Fresh Fruit Juices, Milk Shakes, Faluda, Cappuccino, Cakes, Tarts and Croissants, all using traditional recipes. Even the hamburger buns and pizza dough are prepared in-house! They serve the best ice cream ever! I would definitely recommend it to anyone, and I promise you won't be disappointed. At first I thought Blue Room was expensive, but I went with a friend and found out their prices are friendly; now I go often. My first time at Blue Room was in 1998. I remember being a kid whose mum and dad would always take me, and now, as a grown adult, I normally go for their samosas. They have great dishes and quick snacks, I will continue going there, and I hope Blue Room keeps going strong after being around all this time. I used to go to Blue Room as a kid nearly three times a week. The food is good there and you get a huge variety. This place has been here for ages. You cannot go to Mombasa and not come here. It's one of those common places that holds heritage in our hearts.
import logging
import re
from collections import namedtuple
from operator import attrgetter, itemgetter

import click

from regparser.federalregister import fetch_notice_json
from regparser.history.versions import Version
from regparser.index import dependency, entry

logger = logging.getLogger(__name__)


def fetch_version_ids(cfr_title, cfr_part, notice_dir):
    """Returns a list of version ids after looking them up between the
    federal register and the local filesystem"""
    present_ids = [v.path[-1] for v in notice_dir.sub_entries()]
    final_rules = fetch_notice_json(cfr_title, cfr_part, only_final=True)

    version_ids = []
    pair_fn = itemgetter('document_number', 'full_text_xml_url')
    for fr_id, xml_url in map(pair_fn, final_rules):
        if xml_url:
            # Version_id concatenated with the date
            regex = re.compile(re.escape(fr_id) + r"_\d{8}")
            split_entries = [vid for vid in present_ids if regex.match(vid)]
            # Add either the split entries or the original version_id
            version_ids.extend(split_entries or [fr_id])
        else:
            logger.warning("No XML for %s; skipping", fr_id)
    return version_ids


Delay = namedtuple('Delay', ['by', 'until'])


def delays(xmls):
    """Find all changes to effective dates. Return the latest change to
    each version of the regulation"""
    delay_map = {}
    # Sort so that later modifications override earlier ones
    for delayer in sorted(xmls, key=attrgetter('published')):
        for delay in delayer.delays():
            for delayed in filter(delay.modifies_notice_xml, xmls):
                delay_map[delayed.version_id] = Delay(delayer.version_id,
                                                     delay.delayed_until)
    return delay_map


def generate_dependencies(version_dir, version_ids, delays_by_version):
    """Creates a dependency graph and adds all dependencies for input xml
    and delays between notices"""
    notice_dir = entry.Notice()
    deps = dependency.Graph()
    for version_id in version_ids:
        deps.add(version_dir / version_id, notice_dir / version_id)
    for delayed, delay in delays_by_version.items():
        deps.add(version_dir / delayed, notice_dir / delay.by)
    return deps


def write_to_disk(xml, version_entry, delay=None):
    """Serialize a Version instance to disk"""
    effective = xml.effective if delay is None else delay.until
    if effective:
        version = Version(xml.version_id, effective, xml.fr_citation)
        version_entry.write(version)
    else:
        logger.warning("No effective date for this rule: %s. Skipping",
                       xml.version_id)


def write_if_needed(cfr_title, cfr_part, version_ids, xmls,
                    delays_by_version):
    """All versions which are stale (either because they were never created
    or because their dependency has been updated) are written to disk. If
    any dependency is missing, an exception is raised"""
    version_dir = entry.FinalVersion(cfr_title, cfr_part)
    deps = generate_dependencies(version_dir, version_ids, delays_by_version)
    for version_id in version_ids:
        version_entry = version_dir / version_id
        deps.validate_for(version_entry)
        if deps.is_stale(version_entry):
            write_to_disk(xmls[version_id], version_entry,
                          delays_by_version.get(version_id))


@click.command()
@click.argument('cfr_title', type=int)
@click.argument('cfr_part', type=int)
def versions(cfr_title, cfr_part):
    """Find all Versions for a regulation. Accounts for locally modified
    notice XML and rules modifying the effective date of versions of a
    regulation"""
    cfr_title, cfr_part = str(cfr_title), str(cfr_part)
    notice_dir = entry.Notice()

    logger.info("Finding versions")
    version_ids = fetch_version_ids(cfr_title, cfr_part, notice_dir)
    logger.debug("Versions found: %r", version_ids)

    version_entries = [notice_dir / version_id for version_id in version_ids]
    # notices keyed by version_id
    xmls = {e.path[-1]: e.read() for e in version_entries if e.exists()}
    delays_by_version = delays(xmls.values())
    write_if_needed(cfr_title, cfr_part, version_ids, xmls, delays_by_version)
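A stand-alone sketch of the matching rule inside fetch_version_ids (the sample ids below are made up for illustration): a locally split notice is stored as the document number concatenated with an eight-digit date, and either all matching split entries or the original id are kept.

```python
import re

# Hypothetical local entries: two date-split copies of one rule plus an
# unrelated rule.
present_ids = ["2016-12345_20160301", "2016-12345_20160401", "2016-99999"]
fr_id = "2016-12345"   # document number from the Federal Register

# Same pattern as fetch_version_ids: the escaped id, an underscore, and
# a YYYYMMDD date.
regex = re.compile(re.escape(fr_id) + r"_\d{8}")
split_entries = [vid for vid in present_ids if regex.match(vid)]

# Either the split entries or the original version_id survive.
version_ids = split_entries or [fr_id]
print(version_ids)   # ['2016-12345_20160301', '2016-12345_20160401']
```

Note that re.escape matters here: document numbers contain a hyphen, which would otherwise be interpreted literally anyway, but escaping guards against any metacharacter sneaking into an id.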
“Ties that Bind” revolves around Allison McLean (Kelli Williams), a tough and experienced police detective, mother and wife in suburban Seattle. When she and her police partner (Dion Johnstone) must arrest her brother (Luke Perry) for aggravated assault, her world drastically changes as he’s convicted and sent to prison, leaving his two teenagers teetering on the brink of foster care. Ultimately, she takes them into her home and ends up raising four teenagers while solving local crimes at her demanding job. This review is for the first episode of the series only. Allison is a police detective and, as everyone knows, her job isn’t easy. She has a family of her own and a wonderful husband who stands by her side. But when she decides to take in her brother’s teenagers, will the challenge be too much for her? In this first episode, her brother goes to prison, and Allison decides to take the kids home to her family. The children and their aunt’s family harbor some hard feelings toward each other, partly because of the distance between Allison and her brother. Allison only wants what is best for the kids, but in the meantime she still has a responsibility to her job. “Ties That Bind” is an interesting story of family, respect and love, not to mention the competing responsibilities of work and home. The interaction of the family under the new circumstances is very entertaining. The show deals with the hard feelings that must be resolved and also features the cases she works as a police detective. We award this episode of “Ties that Bind” the Dove Family Approved Seal for all ages.
# -*- encoding:utf-8 -*-
"""
    Wraps the stock-picking worker, handling setup and teardown around it.
"""

from __future__ import absolute_import
from __future__ import print_function
from __future__ import division

from .ABuPickStockWorker import AbuPickStockWorker
from ..CoreBu.ABuEnvProcess import add_process_env_sig
from ..MarketBu.ABuMarket import split_k_market
from ..TradeBu.ABuKLManager import AbuKLManager
from ..CoreBu.ABuFixes import ThreadPoolExecutor

__author__ = '阿布'
__weixin__ = 'abu_quant'


@add_process_env_sig
def do_pick_stock_work(choice_symbols, benchmark, capital, stock_pickers):
    """
    Wrap AbuPickStockWorker to run stock picking.

    :param choice_symbols: initial sequence of candidate trading symbols
    :param benchmark: trading benchmark, an AbuBenchmark instance
    :param capital: an AbuCapital instance
    :param stock_pickers: sequence of stock-picking factors
    :return: the symbols selected by the worker
    """
    kl_pd_manager = AbuKLManager(benchmark, capital)
    stock_pick = AbuPickStockWorker(capital, benchmark, kl_pd_manager,
                                    choice_symbols=choice_symbols,
                                    stock_pickers=stock_pickers)
    stock_pick.fit()
    return stock_pick.choice_symbols


@add_process_env_sig
def do_pick_stock_thread_work(choice_symbols, benchmark, capital, stock_pickers, n_thread):
    """Wrap AbuPickStockWorker, starting threads to run stock picking."""
    result = []

    def when_thread_done(r):
        result.extend(r.result())

    with ThreadPoolExecutor(max_workers=n_thread) as pool:
        thread_symbols = split_k_market(n_thread, market_symbols=choice_symbols)
        for symbols in thread_symbols:
            future_result = pool.submit(do_pick_stock_work, symbols,
                                        benchmark, capital, stock_pickers)
            future_result.add_done_callback(when_thread_done)
    return result
Thames Water are planning roadworks on Queens Road (A329) from 16–26 May.

The apparently relentless drive to convert the area’s pubs, offices and vacant lots into flats and HMOs (houses in multiple occupation) continued apace this year. Some proposals, including the demolition of the After Dark Club and the Woodley Arms, were rejected, whilst others, such as the new residential care home near the Rising Sun Arts Centre, will proceed. The saving of the South Street Arts Centre was a major success in keeping Katesgrove at the front of Reading’s arts scene.

BT are planning roadworks on Queens Road near the Queen’s Road car park entrance, opposite the taxi rank, on 2–3 September.
# -*- coding: utf-8 -*-
import furl
import httplib as http
import urllib
import uuid

import markupsafe
from django.utils import timezone
from flask import request

from modularodm import Q
from modularodm.exceptions import NoResultsFound
from modularodm.exceptions import ValidationError
from modularodm.exceptions import ValidationValueError

from framework import forms, sentry, status
from framework import auth as framework_auth
from framework.auth import exceptions
from framework.auth import cas, campaigns
from framework.auth import logout as osf_logout
from framework.auth import get_user
from framework.auth.exceptions import DuplicateEmailError, ExpiredTokenError, InvalidTokenError
from framework.auth.core import generate_verification_key
from framework.auth.decorators import block_bing_preview, collect_auth, must_be_logged_in
from framework.auth.forms import ResendConfirmationForm, ForgotPasswordForm, ResetPasswordForm
from framework.auth.utils import ensure_external_identity_uniqueness, validate_recaptcha
from framework.exceptions import HTTPError
from framework.flask import redirect  # VOL-aware redirect
from framework.sessions.utils import remove_sessions_for_user, remove_session
from framework.sessions import get_session

from website import settings, mails, language
from website.models import User
from website.util import web_url_for
from website.util.time import throttle_period_expired
from website.util.sanitize import strip_html


@block_bing_preview
@collect_auth
def reset_password_get(auth, uid=None, token=None):
    """
    View for user to land on the reset password page.
    HTTP Method: GET

    :param auth: the authentication state
    :param uid: the user id
    :param token: the token in verification key
    :return
    :raises: HTTPError(http.BAD_REQUEST) if verification key for the user is invalid, has expired or was used
    """

    # if users are logged in, log them out and redirect back to this page
    if auth.logged_in:
        return auth_logout(redirect_url=request.url)

    # Check if request bears a valid pair of `uid` and `token`
    user_obj = User.load(uid)
    if not (user_obj and user_obj.verify_password_token(token=token)):
        error_data = {
            'message_short': 'Invalid Request.',
            'message_long': 'The requested URL is invalid, has expired, or was already used',
        }
        raise HTTPError(http.BAD_REQUEST, data=error_data)

    # refresh the verification key (v2)
    user_obj.verification_key_v2 = generate_verification_key(verification_type='password')
    user_obj.save()

    return {
        'uid': user_obj._id,
        'token': user_obj.verification_key_v2['token'],
    }


def reset_password_post(uid=None, token=None):
    """
    View for user to submit reset password form.
    HTTP Method: POST

    :param uid: the user id
    :param token: the token in verification key
    :return:
    :raises: HTTPError(http.BAD_REQUEST) if verification key for the user is invalid, has expired or was used
    """

    form = ResetPasswordForm(request.form)

    # Check if request bears a valid pair of `uid` and `token`
    user_obj = User.load(uid)
    if not (user_obj and user_obj.verify_password_token(token=token)):
        error_data = {
            'message_short': 'Invalid Request.',
            'message_long': 'The requested URL is invalid, has expired, or was already used',
        }
        raise HTTPError(http.BAD_REQUEST, data=error_data)

    if not form.validate():
        # Don't go anywhere
        forms.push_errors_to_status(form.errors)
    else:
        # clear verification key (v2)
        user_obj.verification_key_v2 = {}
        # new verification key (v1) for CAS
        user_obj.verification_key = generate_verification_key(verification_type=None)
        try:
            user_obj.set_password(form.password.data)
            user_obj.save()
        except exceptions.ChangePasswordError as error:
            for message in error.messages:
                status.push_status_message(message, kind='warning', trust=False)
        else:
            status.push_status_message('Password reset', kind='success', trust=False)
            # redirect to CAS and authenticate the user automatically with one-time verification key.
            return redirect(cas.get_login_url(
                web_url_for('user_account', _absolute=True),
                username=user_obj.username,
                verification_key=user_obj.verification_key
            ))

    return {
        'uid': user_obj._id,
        'token': user_obj.verification_key_v2['token'],
    }


@collect_auth
def forgot_password_get(auth):
    """
    View for user to land on the forgot password page.
    HTTP Method: GET

    :param auth: the authentication context
    :return
    """

    # if users are logged in, log them out and redirect back to this page
    if auth.logged_in:
        return auth_logout(redirect_url=request.url)

    return {}


def forgot_password_post():
    """
    View for user to submit forgot password form.
    HTTP Method: POST

    :return {}
    """

    form = ForgotPasswordForm(request.form, prefix='forgot_password')

    if not form.validate():
        # Don't go anywhere
        forms.push_errors_to_status(form.errors)
    else:
        email = form.email.data
        status_message = ('If there is an OSF account associated with {0}, an email with instructions on how to '
                          'reset the OSF password has been sent to {0}. If you do not receive an email and believe '
                          'you should have, please contact OSF Support. ').format(email)
        kind = 'success'
        # check if the user exists
        user_obj = get_user(email=email)
        if user_obj:
            # rate limit forgot_password_post
            if not throttle_period_expired(user_obj.email_last_sent, settings.SEND_EMAIL_THROTTLE):
                status_message = 'You have recently requested to change your password. Please wait a few minutes ' \
                                 'before trying again.'
                kind = 'error'
            else:
                # TODO [OSF-6673]: Use the feature in [OSF-6998] for user to resend claim email.
                # if the user account is not claimed yet
                if (user_obj.is_invited and user_obj.unclaimed_records and
                        not user_obj.date_last_login and
                        not user_obj.is_claimed and
                        not user_obj.is_registered):
                    status_message = 'You cannot reset password on this account. Please contact OSF Support.'
                    kind = 'error'
                else:
                    # new random verification key (v2)
                    user_obj.verification_key_v2 = generate_verification_key(verification_type='password')
                    user_obj.email_last_sent = timezone.now()
                    user_obj.save()
                    reset_link = furl.urljoin(
                        settings.DOMAIN,
                        web_url_for(
                            'reset_password_get',
                            uid=user_obj._id,
                            token=user_obj.verification_key_v2['token']
                        )
                    )
                    mails.send_mail(
                        to_addr=email,
                        mail=mails.FORGOT_PASSWORD,
                        reset_link=reset_link
                    )
        status.push_status_message(status_message, kind=kind, trust=False)

    return {}


def login_and_register_handler(auth, login=True, campaign=None, next_url=None, logout=None):
    """
    Non-view helper to handle `login` and `register` requests.

    :param auth: the auth context
    :param login: `True` if `GET /login`, `False` if `GET /register`
    :param campaign: a target campaign defined in `auth.campaigns`
    :param next_url: the service url for CAS login or redirect url for OSF
    :param logout: used only for `claim_user_registered`
    :return: data object that contains actions for `auth_register` and `auth_login`
    :raises: http.BAD_REQUEST
    """

    # Only allow redirects which are relative root or full domain. Disallows external redirects.
    if next_url and not validate_next_url(next_url):
        raise HTTPError(http.BAD_REQUEST)

    data = {
        'status_code': http.FOUND if login else http.OK,
        'next_url': next_url,
        'campaign': None,
        'must_login_warning': False,
    }

    # login or register with campaign parameter
    if campaign:
        if validate_campaign(campaign):
            # GET `/register` or `/login` with `campaign=institution`
            # unlike other campaigns, institution login serves as an alternative for authentication
            if campaign == 'institution':
                next_url = web_url_for('dashboard', _absolute=True)
                data['status_code'] = http.FOUND
                if auth.logged_in:
                    data['next_url'] = next_url
                else:
                    data['next_url'] = cas.get_login_url(next_url, campaign='institution')
            # for non-institution campaigns
            else:
                destination = next_url if next_url else campaigns.campaign_url_for(campaign)
                if auth.logged_in:
                    # if user is already logged in, go to the campaign landing page
                    data['status_code'] = http.FOUND
                    data['next_url'] = destination
                else:
                    # if user is logged out, go to the osf register page with campaign context
                    if login:
                        # `GET /login?campaign=...`
                        data['next_url'] = web_url_for('auth_register', campaign=campaign, next=destination)
                    else:
                        # `GET /register?campaign=...`
                        data['campaign'] = campaign
                        if campaigns.is_proxy_login(campaign):
                            data['next_url'] = web_url_for(
                                'auth_login',
                                next=destination,
                                _absolute=True
                            )
                        else:
                            data['next_url'] = destination
        else:
            # invalid campaign, inform sentry and redirect to non-campaign sign up or sign in
            redirect_view = 'auth_login' if login else 'auth_register'
            data['status_code'] = http.FOUND
            data['next_url'] = web_url_for(redirect_view, campaigns=None, next=next_url)
            data['campaign'] = None
            sentry.log_message(
                '{} is not a valid campaign. Please add it if this is a new one'.format(campaign)
            )
    # login or register with next parameter
    elif next_url:
        if logout:
            # handle `claim_user_registered`
            data['next_url'] = next_url
            if auth.logged_in:
                # log user out and come back
                data['status_code'] = 'auth_logout'
            else:
                # after logout, land on the register page with "must_login" warning
                data['status_code'] = http.OK
                data['must_login_warning'] = True
        elif auth.logged_in:
            # if user is already logged in, redirect to `next_url`
            data['status_code'] = http.FOUND
            data['next_url'] = next_url
        elif login:
            # `/login?next=next_url`: go to CAS login page with current request url as service url
            data['status_code'] = http.FOUND
            data['next_url'] = cas.get_login_url(request.url)
        else:
            # `/register?next=next_url`: land on OSF register page with request url as next url
            data['status_code'] = http.OK
            data['next_url'] = request.url
    else:
        # `/login/` or `/register/` without any parameter
        if auth.logged_in:
            data['status_code'] = http.FOUND
            data['next_url'] = web_url_for('dashboard', _absolute=True)

    return data


@collect_auth
def auth_login(auth):
    """
    View (no template) for OSF Login.
    Redirect user based on `data` returned from `login_and_register_handler`.

    `/login` only takes valid campaign, valid next, or no query parameter
    `login_and_register_handler()` handles the following cases:
        if campaign and logged in, go to campaign landing page (or valid next_url if presents)
        if campaign and logged out, go to campaign register page (with next_url if presents)
        if next_url and logged in, go to next url
        if next_url and logged out, go to cas login page with current request url as service parameter
        if none, go to `/dashboard` which is decorated by `@must_be_logged_in`

    :param auth: the auth context
    :return: redirects
    """

    campaign = request.args.get('campaign')
    next_url = request.args.get('next')
    data = login_and_register_handler(auth, login=True, campaign=campaign, next_url=next_url)
    if data['status_code'] == http.FOUND:
        return redirect(data['next_url'])


@collect_auth
def auth_register(auth):
    """
    View for OSF register. Land on the register page, redirect or go to `auth_logout`
    depending on `data` returned by `login_and_register_handler`.

    `/register` only takes a valid campaign, a valid next, the logout flag or no query parameter
    `login_and_register_handler()` handles the following cases:
        if campaign and logged in, go to campaign landing page (or valid next_url if presents)
        if campaign and logged out, go to campaign register page (with next_url if presents)
        if next_url and logged in, go to next url
        if next_url and logged out, go to cas login page with current request url as service parameter
        if next_url and logout flag, log user out first and then go to the next_url
        if none, go to `/dashboard` which is decorated by `@must_be_logged_in`

    :param auth: the auth context
    :return: land, redirect or `auth_logout`
    :raise: http.BAD_REQUEST
    """

    context = {}
    # a target campaign in `auth.campaigns`
    campaign = request.args.get('campaign')
    # the service url for CAS login or redirect url for OSF
    next_url = request.args.get('next')
    # used only for `claim_user_registered`
    logout = request.args.get('logout')

    # logout must have next_url
    if logout and not next_url:
        raise HTTPError(http.BAD_REQUEST)

    data = login_and_register_handler(auth, login=False, campaign=campaign, next_url=next_url, logout=logout)

    # land on register page
    if data['status_code'] == http.OK:
        if data['must_login_warning']:
            status.push_status_message(language.MUST_LOGIN, trust=False)
        destination = cas.get_login_url(data['next_url'])
        # "Already have an account?" link
        context['non_institution_login_url'] = destination
        # "Sign In" button in navigation bar, overwrite the default value set in routes.py
        context['login_url'] = destination
        # "Login through your institution" link
        context['institution_login_url'] = cas.get_login_url(data['next_url'], campaign='institution')
        context['campaign'] = data['campaign']
        return context, http.OK
    # redirect to url
    elif data['status_code'] == http.FOUND:
        return redirect(data['next_url'])
    # go to other views
    elif data['status_code'] == 'auth_logout':
        return auth_logout(redirect_url=data['next_url'])

    raise HTTPError(http.BAD_REQUEST)


@collect_auth
def auth_logout(auth, redirect_url=None, next_url=None):
    """
    Log out, delete current session and remove OSF cookie.
    If next url is valid and auth is logged in, redirect to CAS logout endpoint with the current request url as service.
    If next url is valid and auth is logged out, redirect directly to the next url.
    Otherwise, redirect to CAS logout or login endpoint with redirect url as service.
    The CAS logout endpoint clears sessions and cookies for CAS and Shibboleth.
    HTTP Method: GET

    Note 1: OSF tells CAS where it wants to be redirected back after successful logout. However, CAS logout flow
            may not respect this url if user is authenticated through remote identity provider.
    Note 2: The name of the query parameter is `next`; `next_url` is used to avoid python reserved word.

    :param auth: the authentication context
    :param redirect_url: url to DIRECTLY redirect after CAS logout, default is `OSF/goodbye`
    :param next_url: url to redirect after OSF logout, which is after CAS logout
    :return: the response
    """

    # For `?next=`:
    #   takes priority
    #   the url must be a valid OSF next url,
    #   the full request url is set to CAS service url,
    #   does not support `reauth`
    # For `?redirect_url=`:
    #   the url must be valid CAS service url
    #   the redirect url is set to CAS service url.
    #   support `reauth`

    # logout/?next=<an OSF verified next url>
    next_url = next_url or request.args.get('next', None)
    if next_url and validate_next_url(next_url):
        cas_logout_endpoint = cas.get_logout_url(request.url)
        if auth.logged_in:
            resp = redirect(cas_logout_endpoint)
        else:
            resp = redirect(next_url)
    # logout/ or logout/?redirect_url=<a CAS verified redirect url>
    else:
        redirect_url = redirect_url or request.args.get('redirect_url') or web_url_for('goodbye', _absolute=True)
        # set redirection to CAS log out (or log in if `reauth` is present)
        if 'reauth' in request.args:
            cas_endpoint = cas.get_login_url(redirect_url)
        else:
            cas_endpoint = cas.get_logout_url(redirect_url)
        resp = redirect(cas_endpoint)

    # perform OSF logout
    osf_logout()

    # set response to delete OSF cookie
    resp.delete_cookie(settings.COOKIE_NAME, domain=settings.OSF_COOKIE_DOMAIN)

    return resp


def auth_email_logout(token, user):
    """
    When a user is adding an email or merging an account, add the email to the user and log them out.
    """

    redirect_url = cas.get_logout_url(service_url=cas.get_login_url(service_url=web_url_for('index', _absolute=True)))
    try:
        unconfirmed_email = user.get_unconfirmed_email_for_token(token)
    except InvalidTokenError:
        raise HTTPError(http.BAD_REQUEST, data={
            'message_short': 'Bad token',
            'message_long': 'The provided token is invalid.'
        })
    except ExpiredTokenError:
        status.push_status_message('The private link you used is expired.')
        raise HTTPError(http.BAD_REQUEST, data={
            'message_short': 'Expired link',
            'message_long': 'The private link you used is expired.'
        })
    try:
        user_merge = User.find_one(Q('emails', 'eq', unconfirmed_email))
    except NoResultsFound:
        user_merge = False
    if user_merge:
        remove_sessions_for_user(user_merge)
    user.email_verifications[token]['confirmed'] = True
    user.save()
    remove_sessions_for_user(user)
    resp = redirect(redirect_url)
    resp.delete_cookie(settings.COOKIE_NAME, domain=settings.OSF_COOKIE_DOMAIN)
    return resp


@block_bing_preview
@collect_auth
def external_login_confirm_email_get(auth, uid, token):
    """
    View for email confirmation links when user first login through external identity provider.
    HTTP Method: GET

    When users click the confirm link, they are expected not to be logged in. If not, they will be logged out first
    and redirected back to this view. After OSF verifies the link and performs all actions, they will be automatically
    logged in through CAS and redirected back to this view again being authenticated.

    :param auth: the auth context
    :param uid: the user's primary key
    :param token: the verification token
    """

    user = User.load(uid)
    if not user:
        raise HTTPError(http.BAD_REQUEST)

    destination = request.args.get('destination')
    if not destination:
        raise HTTPError(http.BAD_REQUEST)

    # if user is already logged in
    if auth and auth.user:
        # if it is a wrong user
        if auth.user._id != user._id:
            return auth_logout(redirect_url=request.url)
        # if it is the expected user
        new = request.args.get('new', None)
        if destination in campaigns.get_campaigns():
            # external domain takes priority
            campaign_url = campaigns.external_campaign_url_for(destination)
            if not campaign_url:
                campaign_url = campaigns.campaign_url_for(destination)
            return redirect(campaign_url)
        if new:
            status.push_status_message(language.WELCOME_MESSAGE, kind='default', jumbotron=True, trust=True)
        return redirect(web_url_for('dashboard'))

    # token is invalid
    if token not in user.email_verifications:
        raise HTTPError(http.BAD_REQUEST)
    verification = user.email_verifications[token]
    email = verification['email']
    provider = verification['external_identity'].keys()[0]
    provider_id = verification['external_identity'][provider].keys()[0]
    # wrong provider
    if provider not in user.external_identity:
        raise HTTPError(http.BAD_REQUEST)
    external_status = user.external_identity[provider][provider_id]

    try:
        ensure_external_identity_uniqueness(provider, provider_id, user)
    except ValidationError as e:
        raise HTTPError(http.FORBIDDEN, e.message)

    if not user.is_registered:
        user.register(email)

    if email.lower() not in user.emails:
        user.emails.append(email.lower())

    user.date_last_logged_in = timezone.now()
    user.external_identity[provider][provider_id] = 'VERIFIED'
    user.social[provider.lower()] = provider_id
    del user.email_verifications[token]
    user.verification_key = generate_verification_key()
    user.save()

    service_url = request.url

    if external_status == 'CREATE':
        mails.send_mail(
            to_addr=user.username,
            mail=mails.WELCOME,
            mimetype='html',
            user=user
        )
        service_url += '&{}'.format(urllib.urlencode({'new': 'true'}))
    elif external_status == 'LINK':
        mails.send_mail(
            user=user,
            to_addr=user.username,
            mail=mails.EXTERNAL_LOGIN_LINK_SUCCESS,
            external_id_provider=provider,
        )

    # redirect to CAS and authenticate the user with the verification key
    return redirect(cas.get_login_url(
        service_url,
        username=user.username,
        verification_key=user.verification_key
    ))


@block_bing_preview
@collect_auth
def confirm_email_get(token, auth=None, **kwargs):
    """
    View for email confirmation links. Authenticates and redirects to user settings page if confirmation is
    successful, otherwise shows an "Expired Link" error.
    HTTP Method: GET
    """

    user = User.load(kwargs['uid'])
    is_merge = 'confirm_merge' in request.args
    is_initial_confirmation = not user.date_confirmed
    log_out = request.args.get('logout', None)

    if user is None:
        raise HTTPError(http.NOT_FOUND)

    # if the user is merging or adding an email (they already are an osf user)
    if log_out:
        return auth_email_logout(token, user)

    if auth and auth.user and (auth.user._id == user._id or auth.user._id == user.merged_by._id):
        if not is_merge:
            # determine if the user registered through a campaign
            campaign = campaigns.campaign_for_user(user)
            if campaign:
                return redirect(campaigns.campaign_url_for(campaign))
            # go to home page with push notification
            if len(auth.user.emails) == 1 and len(auth.user.email_verifications) == 0:
                status.push_status_message(language.WELCOME_MESSAGE, kind='default', jumbotron=True, trust=True)
            if token in auth.user.email_verifications:
                status.push_status_message(language.CONFIRM_ALTERNATE_EMAIL_ERROR, kind='danger', trust=True)
            return redirect(web_url_for('index'))

        status.push_status_message(language.MERGE_COMPLETE, kind='success', trust=False)
        return redirect(web_url_for('user_account'))

    try:
        user.confirm_email(token, merge=is_merge)
    except exceptions.EmailConfirmTokenError as e:
        raise HTTPError(http.BAD_REQUEST, data={
            'message_short': e.message_short,
            'message_long': e.message_long
        })

    if is_initial_confirmation:
        user.update_date_last_login()
        user.save()

        # send out our welcome message
        mails.send_mail(
            to_addr=user.username,
            mail=mails.WELCOME,
            mimetype='html',
            user=user
        )

    # new random verification key, allows CAS to authenticate the user w/o password one-time only.
    user.verification_key = generate_verification_key()
    user.save()
    # redirect to CAS and authenticate the user with a verification key.
    return redirect(cas.get_login_url(
        request.url,
        username=user.username,
        verification_key=user.verification_key
    ))


@must_be_logged_in
def unconfirmed_email_remove(auth=None):
    """
    Called at login if user cancels their merge or email add.
    HTTP Method: DELETE
    """

    user = auth.user
    json_body = request.get_json()
    try:
        given_token = json_body['token']
    except KeyError:
        raise HTTPError(http.BAD_REQUEST, data={
            'message_short': 'Missing token',
            'message_long': 'Must provide a token'
        })
    user.clean_email_verifications(given_token=given_token)
    user.save()
    return {
        'status': 'success',
        'removed_email': json_body['address']
    }, 200


@must_be_logged_in
def unconfirmed_email_add(auth=None):
    """
    Called at login if user confirms their merge or email add.
    HTTP Method: PUT
    """

    user = auth.user
    json_body = request.get_json()
    try:
        token = json_body['token']
    except KeyError:
        raise HTTPError(http.BAD_REQUEST, data={
            'message_short': 'Missing token',
            'message_long': 'Must provide a token'
        })
    try:
        user.confirm_email(token, merge=True)
    except exceptions.InvalidTokenError:
        raise HTTPError(http.BAD_REQUEST, data={
            'message_short': 'Invalid user token',
            'message_long': 'The user token is invalid'
        })
    except exceptions.EmailConfirmTokenError as e:
        raise HTTPError(http.BAD_REQUEST, data={
            'message_short': e.message_short,
            'message_long': e.message_long
        })

    user.save()
    return {
        'status': 'success',
        'removed_email': json_body['address']
    }, 200


def send_confirm_email(user, email, renew=False, external_id_provider=None, external_id=None, destination=None):
    """
    Sends `user` a confirmation to the given `email`.

    :param user: the user
    :param email: the email
    :param renew: refresh the token
    :param external_id_provider: user's external id provider
    :param external_id: user's external id
    :param destination: the destination page to redirect after confirmation
    :return:
    :raises: KeyError if user does not have a confirmation token for the given email.
    """

    confirmation_url = user.get_confirmation_url(
        email,
        external=True,
        force=True,
        renew=renew,
        external_id_provider=external_id_provider,
        destination=destination
    )

    try:
        merge_target = User.find_one(Q('emails', 'eq', email))
    except NoResultsFound:
        merge_target = None

    campaign = campaigns.campaign_for_user(user)
    branded_preprints_provider = None

    # Choose the appropriate email template to use and add existing_user flag if a merge or adding an email.
    if external_id_provider and external_id:
        # First time login through external identity provider, link or create an OSF account confirmation
        if user.external_identity[external_id_provider][external_id] == 'CREATE':
            mail_template = mails.EXTERNAL_LOGIN_CONFIRM_EMAIL_CREATE
        elif user.external_identity[external_id_provider][external_id] == 'LINK':
            mail_template = mails.EXTERNAL_LOGIN_CONFIRM_EMAIL_LINK
    elif merge_target:
        # Merge account confirmation
        mail_template = mails.CONFIRM_MERGE
        confirmation_url = '{}?logout=1'.format(confirmation_url)
    elif user.is_active:
        # Add email confirmation
        mail_template = mails.CONFIRM_EMAIL
        confirmation_url = '{}?logout=1'.format(confirmation_url)
    elif campaign:
        # Account creation confirmation: from campaign
        mail_template = campaigns.email_template_for_campaign(campaign)
        if campaigns.is_proxy_login(campaign) and campaigns.get_service_provider(campaign) != 'OSF':
            branded_preprints_provider = campaigns.get_service_provider(campaign)
    else:
        # Account creation confirmation: from OSF
        mail_template = mails.INITIAL_CONFIRM_EMAIL

    mails.send_mail(
        email,
        mail_template,
        'plain',
        user=user,
        confirmation_url=confirmation_url,
        email=email,
        merge_target=merge_target,
        external_id_provider=external_id_provider,
        branded_preprints_provider=branded_preprints_provider
    )


def register_user(**kwargs):
    """
    Register new user account.
    HTTP Method: POST

    :param-json str email1:
    :param-json str email2:
    :param-json str password:
    :param-json str fullName:
    :param-json str campaign:
    :raises: HTTPError(http.BAD_REQUEST) if validation fails or user already exists
    """

    # Verify that email addresses match.
    # Note: Both `landing.mako` and `register.mako` already have this check on the form. Users can not submit the
    # form if emails do not match. However, this check should not be removed given we may use the raw api call
    # directly.
    json_data = request.get_json()
    if str(json_data['email1']).lower() != str(json_data['email2']).lower():
        raise HTTPError(
            http.BAD_REQUEST,
            data=dict(message_long='Email addresses must match.')
        )

    # Verify that captcha is valid
    if settings.RECAPTCHA_SITE_KEY and not validate_recaptcha(json_data.get('g-recaptcha-response'),
                                                              remote_ip=request.remote_addr):
        raise HTTPError(
            http.BAD_REQUEST,
            data=dict(message_long='Invalid Captcha')
        )

    try:
        full_name = request.json['fullName']
        full_name = strip_html(full_name)

        campaign = json_data.get('campaign')
        if campaign and campaign not in campaigns.get_campaigns():
            campaign = None

        user = framework_auth.register_unconfirmed(
            request.json['email1'],
            request.json['password'],
            full_name,
            campaign=campaign,
        )
        framework_auth.signals.user_registered.send(user)
    except (ValidationValueError, DuplicateEmailError):
        raise HTTPError(
            http.BAD_REQUEST,
            data=dict(
                message_long=language.ALREADY_REGISTERED.format(
                    email=markupsafe.escape(request.json['email1'])
                )
            )
        )
    except ValidationError as e:
        raise HTTPError(
            http.BAD_REQUEST,
            data=dict(message_long=e.message)
        )

    if settings.CONFIRM_REGISTRATIONS_BY_EMAIL:
        send_confirm_email(user, email=user.username)
        message = language.REGISTRATION_SUCCESS.format(email=user.username)
        return {'message': message}
    else:
        return {'message': 'You may now log in.'}


@collect_auth
def resend_confirmation_get(auth):
    """
    View for user to land on resend confirmation page.
    HTTP Method: GET
    """

    # If user is already logged in, log user out
    if auth.logged_in:
        return auth_logout(redirect_url=request.url)

    form = ResendConfirmationForm(request.form)
    return {
        'form': form,
    }


@collect_auth
def resend_confirmation_post(auth):
    """
    View for user to submit resend confirmation form.
    HTTP Method: POST
    """

    # If user is already logged in, log user out
    if auth.logged_in:
        return auth_logout(redirect_url=request.url)

    form = ResendConfirmationForm(request.form)

    if form.validate():
        clean_email = form.email.data
        user = get_user(email=clean_email)
        status_message = ('If there is an OSF account associated with this unconfirmed email {0}, '
                          'a confirmation email has been resent to it. If you do not receive an email and believe '
                          'you should have, please contact OSF Support.').format(clean_email)
        kind = 'success'
        if user:
            if throttle_period_expired(user.email_last_sent, settings.SEND_EMAIL_THROTTLE):
                try:
                    send_confirm_email(user, clean_email, renew=True)
                except KeyError:
                    # already confirmed, redirect to dashboard
                    status_message = 'This email {0} has already been confirmed.'.format(clean_email)
                    kind = 'warning'
                user.email_last_sent = timezone.now()
                user.save()
            else:
                status_message = ('You have recently requested to resend your confirmation email. '
                                  'Please wait a few minutes before trying again.')
                kind = 'error'
        status.push_status_message(status_message, kind=kind, trust=False)
    else:
        forms.push_errors_to_status(form.errors)

    # Don't go anywhere
    return {'form': form}


def external_login_email_get():
    """
    Landing view for first-time oauth-login user to enter their email address.
    HTTP Method: GET
    """

    form = ResendConfirmationForm(request.form)
    session = get_session()
    if not session.is_external_first_login:
        raise HTTPError(http.UNAUTHORIZED)

    external_id_provider = session.data['auth_user_external_id_provider']

    return {
        'form': form,
        'external_id_provider': external_id_provider
    }


def external_login_email_post():
    """
    View to handle email submission for first-time oauth-login user.
    HTTP Method: POST
    """

    form = ResendConfirmationForm(request.form)
    session = get_session()
    if not session.is_external_first_login:
        raise HTTPError(http.UNAUTHORIZED)

    external_id_provider = session.data['auth_user_external_id_provider']
    external_id = session.data['auth_user_external_id']
    fullname = session.data['auth_user_fullname']
    service_url = session.data['service_url']

    # TODO: @cslzchen use user tags instead of destination
    destination = 'dashboard'
    for campaign in campaigns.get_campaigns():
        if campaign != 'institution':
            # Handle different url encoding schemes between `furl` and `urlparse/urllib`.
            # OSF use `furl` to parse service url during service validation with CAS. However, `web_url_for()` uses
            # `urlparse/urllib` to generate service url. `furl` handles `urlparse/urllib` generated urls but
            # not vice versa.
            campaign_url = furl.furl(campaigns.campaign_url_for(campaign)).url
            external_campaign_url = furl.furl(campaigns.external_campaign_url_for(campaign)).url
            if campaigns.is_proxy_login(campaign):
                # proxy campaigns: OSF Preprints and branded ones
                if check_service_url_with_proxy_campaign(str(service_url), campaign_url, external_campaign_url):
                    destination = campaign
                    # continue to check branded preprints even service url matches osf preprints
                    if campaign != 'osf-preprints':
                        break
            elif service_url.startswith(campaign_url):
                # osf campaigns: OSF Prereg and ERPC
                destination = campaign
                break

    if form.validate():
        clean_email = form.email.data
        user = get_user(email=clean_email)
        external_identity = {
            external_id_provider: {
                external_id: None,
            },
        }
        try:
            ensure_external_identity_uniqueness(external_id_provider, external_id, user)
        except ValidationError as e:
            raise HTTPError(http.FORBIDDEN, e.message)
        if user:
            # 1. update user oauth, with pending status
            external_identity[external_id_provider][external_id] = 'LINK'
            if external_id_provider in user.external_identity:
                user.external_identity[external_id_provider].update(external_identity[external_id_provider])
            else:
                user.external_identity.update(external_identity)
            # 2. add unconfirmed email and send confirmation email
            user.add_unconfirmed_email(clean_email, external_identity=external_identity)
            user.save()
            send_confirm_email(
                user,
                clean_email,
                external_id_provider=external_id_provider,
                external_id=external_id,
                destination=destination
            )
            # 3. notify user
            message = language.EXTERNAL_LOGIN_EMAIL_LINK_SUCCESS.format(
                external_id_provider=external_id_provider,
                email=user.username
            )
            kind = 'success'
            # 4. remove session and osf cookie
            remove_session(session)
        else:
            # 1. create unconfirmed user with pending status
            external_identity[external_id_provider][external_id] = 'CREATE'
            user = User.create_unconfirmed(
                username=clean_email,
                password=str(uuid.uuid4()),
                fullname=fullname,
                external_identity=external_identity,
                campaign=None
            )
            # TODO: [#OSF-6934] update social fields, verified social fields cannot be modified
            user.save()
            # 3. send confirmation email
            send_confirm_email(
                user,
                user.username,
                external_id_provider=external_id_provider,
                external_id=external_id,
                destination=destination
            )
            # 4. notify user
            message = language.EXTERNAL_LOGIN_EMAIL_CREATE_SUCCESS.format(
                external_id_provider=external_id_provider,
                email=user.username
            )
            kind = 'success'
            # 5. remove session
            remove_session(session)

        status.push_status_message(message, kind=kind, trust=False)
    else:
        forms.push_errors_to_status(form.errors)

    # Don't go anywhere
    return {
        'form': form,
        'external_id_provider': external_id_provider
    }


def validate_campaign(campaign):
    """
    Non-view helper function that validates `campaign`.

    :param campaign: the campaign to validate
    :return: True if valid, False otherwise
    """

    return campaign and campaign in campaigns.get_campaigns()


def validate_next_url(next_url):
    """
    Non-view helper function that checks `next_url`.
    Only allow redirects which are relative root or full domain (CAS, OSF and MFR).
    Disallows external redirects.

    :param next_url: the next url to check
    :return: True if valid, False otherwise
    """

    # disable external domain using `//`: the browser allows `//` as a shortcut for non-protocol specific requests
    # like http:// or https:// depending on the use of SSL on the page already.
    if next_url.startswith('//'):
        return False

    # only OSF, MFR, CAS and Branded Preprints domains are allowed
    if next_url[0] == '/' or next_url.startswith(settings.DOMAIN):
        # OSF
        return True
    if next_url.startswith(settings.CAS_SERVER_URL) or next_url.startswith(settings.MFR_SERVER_URL):
        # CAS or MFR
        return True
    for url in campaigns.get_external_domains():
        # Branded Preprints Phase 2
        if next_url.startswith(url):
            return True

    return False


def check_service_url_with_proxy_campaign(service_url, campaign_url, external_campaign_url=None):
    """
    Check if service url belongs to proxy campaigns: OSF Preprints and branded ones.
    Both service_url and campaign_url are parsed using `furl` encoding scheme.
:param service_url: the `furl` formatted service url :param campaign_url: the `furl` formatted campaign url :param external_campaign_url: the `furl` formatted external campaign url :return: the matched object or None """ prefix_1 = settings.DOMAIN + 'login/?next=' + campaign_url prefix_2 = settings.DOMAIN + 'login?next=' + campaign_url valid = service_url.startswith(prefix_1) or service_url.startswith(prefix_2) valid_external = False if external_campaign_url: prefix_3 = settings.DOMAIN + 'login/?next=' + external_campaign_url prefix_4 = settings.DOMAIN + 'login?next=' + external_campaign_url valid_external = service_url.startswith(prefix_3) or service_url.startswith(prefix_4) return valid or valid_external
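The open-redirect guard in `validate_next_url` above can be exercised in isolation. A minimal standalone sketch of the same checks, with hypothetical domain values standing in for the real `settings` module and campaign list:

```python
# Hypothetical stand-ins for settings.DOMAIN, CAS/MFR server URLs and
# campaigns.get_external_domains() -- not the real OSF configuration.
OSF_DOMAIN = 'https://osf.example.org/'
CAS_SERVER_URL = 'https://accounts.example.org/'
MFR_SERVER_URL = 'https://mfr.example.org/'
EXTERNAL_DOMAINS = ['https://preprints.example.org/']

def is_safe_next_url(next_url):
    # '//host' is a protocol-relative URL: the browser would leave the site,
    # so this check must run before the plain '/' (relative path) check.
    if next_url.startswith('//'):
        return False
    # relative paths and the OSF domain itself are allowed
    if next_url.startswith('/') or next_url.startswith(OSF_DOMAIN):
        return True
    # CAS and MFR are first-party services
    if next_url.startswith(CAS_SERVER_URL) or next_url.startswith(MFR_SERVER_URL):
        return True
    # branded preprint domains
    return any(next_url.startswith(url) for url in EXTERNAL_DOMAINS)
```

The ordering matters: because `//evil.com` also begins with `/`, the protocol-relative rejection has to precede the relative-path allowance, exactly as in the original function.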
Has the desire for hats, hats, delicious TF2 hats diminished over the last few years, or is the public's interest in digital head-adornment as strong as ever? I ask because Valve and Irrational are adding BioShock clobber to Team Fortress 2, and, hey, don't all load up the game at once. You'll need to buy BioShock Infinite's season pass on Steam to gain access to it, which I believe comes with a few pieces of downloadable content in addition to a very small selection of hats. Full details here.
# Copyright (c) 2013 Calin Crisan # This file is part of motionEye. # # motionEye is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. from jinja2 import Environment, FileSystemLoader import settings import utils _jinja_env = None def _init_jinja(): global _jinja_env _jinja_env = Environment( loader=FileSystemLoader(settings.TEMPLATE_PATH), trim_blocks=False) # globals _jinja_env.globals['settings'] = settings # filters _jinja_env.filters['pretty_date_time'] = utils.pretty_date_time _jinja_env.filters['pretty_date'] = utils.pretty_date _jinja_env.filters['pretty_time'] = utils.pretty_time _jinja_env.filters['pretty_duration'] = utils.pretty_duration def add_template_path(path): global _jinja_env if _jinja_env is None: _init_jinja() _jinja_env.loader.searchpath.append(path) def add_context(name, value): global _jinja_env if _jinja_env is None: _init_jinja() _jinja_env.globals[name] = value def render(template_name, **context): global _jinja_env if _jinja_env is None: _init_jinja() template = _jinja_env.get_template(template_name) return template.render(**context)
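The module above lazily builds one shared jinja2 `Environment` on first use and exposes a thin `render()` wrapper. A minimal standalone sketch of the same pattern, using an in-memory `DictLoader` (a hypothetical template) instead of motionEye's `FileSystemLoader` and settings:

```python
from jinja2 import Environment, DictLoader

_jinja_env = None

def _init_jinja():
    global _jinja_env
    # DictLoader stands in for the FileSystemLoader used by motionEye;
    # the template below is made up for illustration.
    _jinja_env = Environment(
        loader=DictLoader({'hello.html': 'Hello, {{ name }}!'}),
        trim_blocks=False)

def render(template_name, **context):
    # lazy initialization, mirroring the module above
    global _jinja_env
    if _jinja_env is None:
        _init_jinja()
    return _jinja_env.get_template(template_name).render(**context)
```

The lazy-init indirection lets callers such as `add_template_path` and `add_context` mutate the environment before any template is rendered, without forcing an import-time dependency on `settings`.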
18.50 acres, great for building, hunting & trapping. River access. Rolling and hilly. Outdoor paradise. Come take a look at this fully rented triplex, bringing in $1,175/month! 2 of the 3 units feature rehabs within the last 6 months. Great income-producing property near downtown! Make an offer today! Recently updated 2-story in a great neighborhood. New garage door and flooring going in next week. Come take a look before it is gone.
import re import hashlib import hmac import binascii from datetime import datetime as DateTime from urllib.parse import urlsplit, parse_qsl, urlencode from collections import namedtuple from collections.abc import MutableMapping, Mapping class Headers(MutableMapping): """ A case-insensitive dictionary-like object, for use in storing the headers. """ def __init__(self, init): self._map = {} for key, value in init.items(): self[key] = value def __getitem__(self, key): return self._map[key.lower()] def __setitem__(self, key, value): self._map[key.lower()] = value def __delitem__(self, key): del self._map[key.lower()] def __iter__(self): for key in self._map: yield key.lower() def __len__(self): return len(self._map) class CanonicalRequest(object): """ An object representing an HTTP request to be made to AWS. :param method: The HTTP method being used. :type method: str :param url: The full URL, including protocol, host, and optionally the query string. :type uri: str :param query: The request query as a dictionary or a string. Can be omitted if no query string or included in the URL. :type query: str or dict or list of two-tuples :param headers: A dictionary of headers. :type headers: dict :param payload: The request body. 
:type payload: bytes-like object """ def __init__( self, method, uri, query=None, headers=None, payload=b'', ): self.method = method self._parts = urlsplit(uri) if isinstance(query, Mapping): self.query = list(query.items()) elif isinstance(query, str): self.query = parse_qsl(query) else: self.query = query or [] self.headers = Headers(headers or {}) self.payload = payload if self._parts[1] and 'host' not in self.headers: self.headers['host'] = self._parts[1] def __str__(self): return '\n'.join([ self.method, self._parts[2], urlencode(self.query), self.canonical_headers, self.signed_headers, self.hashed_payload, ]) @property def hashed(self): return hashlib.sha256(str(self).encode('ascii')).hexdigest() @property def payload(self): raise NotImplementedError('Cannot directly access payload.') @payload.setter def payload(self, value): self.hashed_payload = hashlib.sha256(value).hexdigest() @property def canonical_headers(self): lines = [] for header, value in sorted( self.headers.items(), key=lambda x: x[0].lower(), ): value = value.strip() # Eliminate duplicate spaces in non-quoted headers if not (len(value) >= 2 and value[0] == '"' and value[-1] == '"'): value = re.sub(r' +', ' ', value) lines.append('{}:{}'.format(header.lower(), value)) return '\n'.join(lines) + '\n' @property def signed_headers(self): return ';'.join(sorted(self.headers.keys())) def _datetime(self): """ Return the current UTC datetime. """ return DateTime.utcnow() @property def datetime(self): """ Extract the datetime from the request """ if 'x-amz-date' in self.headers: datetime = self.headers['x-amz-date'] elif any(key == 'X-Amz-Date' for (key, _) in self.query): datetime = dict(self.query)['X-Amz-Date'] else: raise ValueError('No datetime is set in the request.') return DateTime.strptime( datetime, '%Y%m%dT%H%M%SZ', ) def set_date_header(self): """ Set the ``X-Amz-Date`` header to the current datetime, if not set. :returns: The datetime from the ``X-Amz-Date`` header. 
:rtype: :class:`datetime.datetime` """ if 'x-amz-date' not in self.headers: datetime = self._datetime().strftime('%Y%m%dT%H%M%SZ') self.headers['x-amz-date'] = datetime return datetime else: return None def set_date_param(self): """ Set the ``X-Amz-Date`` query parameter to the current datetime, if not set. :returns: The datetime from the ``X-Amz-Date`` parameter. :rtype: :class:`datetime.datetime` """ if not any(key == 'X-Amz-Date' for (key, _) in self.query): datetime = self._datetime().strftime('%Y%m%dT%H%M%SZ') self.query.append( ('X-Amz-Date', datetime) ) return datetime else: return None #: A signed request. Does not include the request body. SignedRequest = namedtuple('SignedRequest', [ 'method', 'uri', 'headers', ]) class CredentialScope( namedtuple('CredentialScope', ['region', 'service']) ): """ The credential scope, sans date. :param region: The region the request is querying. See `Regions and Endpoints`_ for a list of values. :type region: str :param service: The service the request is querying. :type service: str """ def date(self, date): """ Generate a :class:`DatedCredentialScope` from this objec.t """ return DatedCredentialScope( self.region, self.service, date, ) class DatedCredentialScope( namedtuple('DatedCredentialScope', ['region', 'service', 'date']) ): """ The credential scope, generated from the region and service. :param region: The region the request is querying. See `Regions and Endpoints`_ for a list of values. :type region: str :param service: The service the request is querying. :type service: str :param date: The date for the credential scope. :type date: :class:`datetime.date` or :class:`datetime.datetime` .. _`Regions and Endpoints`: http://docs.aws.amazon.com/general/latest/gr/rande.html """ def __str__(self): """ Calculate the credential scope for the given date. 
""" return '/'.join([ self.date.strftime('%Y%m%d'), self.region, self.service, 'aws4_request', ]) class SigningKey(object): """ A signing key from the secret and the credential scope. :param secret: The AWS key secret. :type secret: str :param scope: The credential scope with date. :type scope: :class:`DatedCredentialScope` """ #: The computed signing key as a bytes object key = None def __init__(self, secret, scope): date = scope.date.strftime('%Y%m%d') signed_date = self._sign(b'AWS4' + secret.encode('ascii'), date) signed_region = self._sign(signed_date, scope.region) signed_service = self._sign(signed_region, scope.service) self.key = self._sign(signed_service, 'aws4_request') def _sign(self, key, value): return hmac.new( key, value.encode('ascii'), hashlib.sha256, ).digest() def sign(self, string): """ Sign a string. Returns the hexidecimal digest. """ return binascii.hexlify(self._sign(self.key, string)).decode('ascii') def generate_string_to_sign(date, scope, request): """ Generate a string which should be signed by the signing key. :param date: The datetime of the request. :type date: :class:`datetime.datetime` :param scope: The credential scope. :type scope: :class:`CredentialScope` or :class:`DatedCredentialScope` :param request: The request to sign. :type request: :class:`CanonicalRequest` """ if isinstance(scope, CredentialScope): scope = scope.date(date) return '\n'.join([ 'AWS4-HMAC-SHA256', date.strftime('%Y%m%dT%H%M%SZ'), str(scope), request.hashed, ]) class Credentials(object): """ An object that encapsulates all the necessary credentials to sign a request. 
""" def __init__(self, key_id, key_secret, region, service): self._key_id = key_id self._key_secret = key_secret self._scope = CredentialScope(region, service) def scope(self, datetime): return self._scope.date(datetime) def signing_key(self, datetime): return SigningKey(self._key_secret, self.scope(datetime)) def sign_via_headers(self, request): """ Generate the appropriate headers to sign the request :param request: The request to sign. :type request: :class:`CanonicalRequest` :returns: A list of additional headers. :rtype: list of two-tuples """ headers = [] datetime_str = request.set_date_header() if datetime_str is not None: headers.append(('X-Amz-Date', datetime_str)) datetime = request.datetime scope = self.scope(datetime) key = self.signing_key(datetime) to_sign = generate_string_to_sign(datetime, scope, request) auth = 'AWS4-HMAC-SHA256 ' + ', '.join([ 'Credential={}/{}'.format(self._key_id, str(scope)), 'SignedHeaders={}'.format(request.signed_headers), 'Signature={}'.format(key.sign(to_sign)), ]) headers.append(('Authorization', auth)) return headers def sign_via_query_string(self, request, expires=60): """ Create a :clas:`SignedRequest` from the given request by adding the appropriate query parameters. :param credentials: The credentials with which to sign the request. :type credentials: :class:`Client` :returns: The signed request. 
:rtype: :class:`SignedRequest` """ params = [] datetime_str = request.set_date_param() if datetime_str is not None: params.append(('X-Amz-Date', datetime_str)) datetime = request.datetime scope = self.scope(datetime) key = self.signing_key(datetime) to_append = [ ('X-Amz-Algorithm', 'AWS4-HMAC-SHA256'), ('X-Amz-Credential', '{}/{}'.format(self._key_id, str(scope))), ('X-Amz-Expires', str(expires)), ('X-Amz-SignedHeaders', request.signed_headers), ] request.query = request.query[:-1] + to_append[:2] + request.query[-1:] + to_append[2:] params = to_append[:2] + params + to_append[2:] to_sign = generate_string_to_sign(datetime, scope, request) params.append( ('X-Amz-Signature', key.sign(to_sign)) ) return params
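The `SigningKey` constructor above implements AWS Signature Version 4 key derivation: an HMAC-SHA256 chain over the date stamp, region, service, and the literal string `aws4_request`. A standalone sketch of that chain using only the standard library, with made-up example credentials:

```python
import hmac
import hashlib

def derive_signing_key(secret, date_stamp, region, service):
    """SigV4 key derivation: HMAC-SHA256 chained over
    date -> region -> service -> 'aws4_request'."""
    def sign(key, msg):
        return hmac.new(key, msg.encode('ascii'), hashlib.sha256).digest()
    k_date = sign(b'AWS4' + secret.encode('ascii'), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, 'aws4_request')
```

The derived key is then used to HMAC the string-to-sign produced by `generate_string_to_sign`; because the key already binds the date, region and service, a leaked signature cannot be replayed against a different scope.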
Since the end of the BBA/ABI Bank Agreement in 2012, commercial property insurers have seen a sharp increase in lender-driven requests for bespoke amendments to policies. These requests are creating challenges for insurers and brokers because of their complexity and breadth. With thanks to Allianz Insurance for content development, this article explains what lenders are requesting and looks at the implications from both the insurers’ and policyholders’ points of view.
import tensorflow as tf import tensorgraph as tg from tensorgraph.layers import Reshape, Embedding, Conv2D, RELU, Linear, Flatten, ReduceSum, Softmax from nltk.tokenize import RegexpTokenizer from nlpbox import CharNumberEncoder, CatNumberEncoder from tensorgraph.utils import valid, split_df, make_one_hot from tensorgraph.cost import entropy, accuracy import pandas import numpy as np # character CNN def model(word_len, sent_len, nclass): unicode_size = 1000 ch_embed_dim = 20 X_ph = tf.placeholder('int32', [None, sent_len, word_len]) input_sn = tg.StartNode(input_vars=[X_ph]) charcnn_hn = tg.HiddenNode(prev=[input_sn], layers=[Reshape(shape=(-1, word_len)), Embedding(cat_dim=unicode_size, encode_dim=ch_embed_dim, zero_pad=True), Reshape(shape=(-1, ch_embed_dim, word_len, 1)), Conv2D(num_filters=20, padding='VALID', kernel_size=(ch_embed_dim,5), stride=(1,1)), RELU(), Conv2D(num_filters=40, padding='VALID', kernel_size=(1,5), stride=(1,1)), RELU(), Conv2D(num_filters=60, padding='VALID', kernel_size=(1,5), stride=(1,2)), RELU(), Flatten(), Linear(nclass), Reshape((-1, sent_len, nclass)), ReduceSum(1), Softmax() ]) output_en = tg.EndNode(prev=[charcnn_hn]) graph = tg.Graph(start=[input_sn], end=[output_en]) y_train_sb = graph.train_fprop()[0] y_test_sb = graph.test_fprop()[0] return X_ph, y_train_sb, y_test_sb def tweets(word_len, sent_len, train_valid_ratio=[5,1]): df = pandas.read_csv('tweets_large.csv') field = 'text' label = 'label' tokenizer = RegexpTokenizer(r'\w+') # encode characters into numbers encoder = CharNumberEncoder(df[field].values, tokenizer=tokenizer, word_len=word_len, sent_len=sent_len) encoder.build_char_map() encode_X = encoder.make_char_embed() # encode categories into one hot array cat_encoder = CatNumberEncoder(df[label]) cat_encoder.build_cat_map() encode_y = cat_encoder.make_cat_embed() nclass = len(np.unique(encode_y)) encode_y = make_one_hot(encode_y, nclass) return encode_X, encode_y, nclass def train(): from tensorgraph.trainobject 
import train as mytrain with tf.Session() as sess: word_len = 20 sent_len = 50 # load data X_train, y_train, nclass = tweets(word_len, sent_len) # build model X_ph, y_train_sb, y_test_sb = model(word_len, sent_len, nclass) y_ph = tf.placeholder('float32', [None, nclass]) # set cost and optimizer train_cost_sb = entropy(y_ph, y_train_sb) optimizer = tf.train.AdamOptimizer(0.001) test_accu_sb = accuracy(y_ph, y_test_sb) # train model mytrain(session=sess, feed_dict={X_ph:X_train, y_ph:y_train}, train_cost_sb=train_cost_sb, valid_cost_sb=-test_accu_sb, optimizer=optimizer, epoch_look_back=5, max_epoch=100, percent_decrease=0, train_valid_ratio=[5,1], batchsize=64, randomize_split=False) if __name__ == '__main__': train()
Clairvoyant Readings from 10am to 4pm, Wednesday to Sunday. *Due to popular demand, Luke is only reading Tea Leaves during Half Hour or Hour Clairvoyant Consultations. If you want a Tea Leaf reading as part of your consultation, you need to arrive 30 minutes prior to ensure your cup is ready to be read by the time of your appointment. Cash Payments - To save you from paying a PayPal Transaction Fee on all cards, please bring the correct change and pay Luke directly at the beginning of your reading. Card Payments - All cards will carry a PayPal Transaction Fee (1.95%). To book with Luke, use our Online Booking System, or call us on 07 3393 1101. Luke Quadrelli has been helping thousands of clients nationally and internationally over the last 30 years. In his readings, Luke gives insight into present and future events through the use of his clairvoyant abilities. He is able to read photographs, see auras and consult the Tarot. With his gifts, he can offer clarity, purpose and a spiritual sense of well-being, which can help you understand yourself in a deeper way. It may validate where you are heading in life and help you get back on track, giving you confirmation and peace of mind. A clairvoyant reading with Luke can help you to obtain those goals. Luke has opened The Rendezvous Tea Room, where you can enjoy morning or afternoon tea before your reading, or book here for a long-distance reading.
#coding:utf-8 from mailpile.commands import Command from mailpile.conn_brokers import Master as ConnBroker from mailpile.plugins import PluginManager from mailpile.plugins.search import Search from mailpile.mailutils import Email # from mailpile.crypto.state import * from mailpile.crypto.gpgi import GnuPG import httplib import re import socket import sys import urllib import urllib2 import ssl import json # TODO: # * SSL certificate validation # * Check nicknym server for a given host # * Store provider keys on first discovery # * Verify provider key signature class Nicknym: def __init__(self, config): self.config = config def get_key(self, address, keytype="openpgp", server=None): """ Request a key for address. """ result, signature = self._nickserver_get_key(address, keytype, server) if self._verify_result(result, signature): return self._import_key(result, keytype) return False def refresh_keys(self): """ Refresh all known keys. """ for addr, keytype in self._get_managed_keys(): result, signature = self._nickserver_get_key(addr, keytype) # TODO: Check whether it needs refreshing and is valid if self._verify_result(result, signature): self._import_key(result, keytype) def send_key(self, address, public_key, type): """ Send a new key to the nickserver """ # TODO: Unimplemented. There is currently no authentication mechanism # defined in Nicknym standard raise NotImplementedError() def _parse_result(self, result): """Parse the result into a JSON blob and a signature""" # TODO: No signature implemented on server side yet. 
# See https://leap.se/code/issues/5340 return json.loads(result), "" def _nickserver_get_key(self, address, keytype="openpgp", server=None): if server == None: server = self._discover_server(address) data = urllib.urlencode({"address": address}) with ConnBroker.context(need=[ConnBroker.OUTGOING_HTTP]): r = urllib2.urlopen(server, data) result = r.read() result, signature = self._parse_result(result) return result, signature def _import_key(self, result, keytype): if keytype == "openpgp": g = GnuPG(self.config) res = g.import_keys(result[keytype]) if len(res["updated"]): self._managed_keys_add(result["address"], keytype) return res else: # We currently only support OpenPGP keys return False def _get_providerkey(self, domain): """ Request a provider key for the appropriate domain. This is equivalent to get_key() with address=domain, except it should store the provider key in an appropriate key store """ pass def _verify_providerkey(self, domain): """ ... """ pass def _verify_result(self, result, signature): """ Verify that the JSON result blob is correctly signed, and that the signature is from the correct provider key. """ # No signature. See https://leap.se/code/issues/5340 return True def _discover_server(self, address): """ Automatically detect which nicknym server to query based on the address. """ # TODO: Actually perform some form of lookup addr = address.split("@") addr.reverse() domain = addr[0] return "https://nicknym.%s:6425/" % domain def _audit_key(self, address, keytype, server): """ Ask an alternative server for a key to verify that the same result is being provided. 
""" result, signature = self._nickserver_get_key(address, keytype, server) if self._verify_result(result, signature): # TODO: verify that the result is acceptable pass return True def _managed_keys_add(self, address, keytype): try: data = self.config.load_pickle("nicknym.cache") except IOError: data = [] data.append((address, keytype)) data = list(set(data)) self.config.save_pickle(data, "nicknym.cache") def _managed_keys_remove(self, address, keytype): try: data = self.config.load_pickle("nicknym.cache") except IOError: data = [] data.remove((address, keytype)) self.config.save_pickle(data, "nicknym.cache") def _get_managed_keys(self): try: return self.config.load_pickle("nicknym.cache") except IOError: return [] class NicknymGetKey(Command): """Get a key from a nickserver""" ORDER = ('', 0) SYNOPSIS = (None, 'crypto/nicknym/getkey', 'crypto/nicknym/getkey', '<address> [<keytype>] [<server>]') HTTP_CALLABLE = ('POST',) HTTP_QUERY_VARS = { 'address': 'The nick/address to fetch a key for', 'keytype': 'What type of key to import (defaults to OpenPGP)', 'server': 'The Nicknym server to use (defaults to autodetect)'} def command(self): address = self.data.get('address', self.args[0]) keytype = self.data.get('keytype', None) server = self.data.get('server', None) if len(self.args) > 1: keytype = self.args[1] else: keytype = 'openpgp' if len(self.args) > 2: server = self.args[2] n = Nicknym(self.session.config) return n.get_key(address, keytype, server) class NicknymRefreshKeys(Command): """Get a key from a nickserver""" ORDER = ('', 0) SYNOPSIS = (None, 'crypto/nicknym/refreshkeys', 'crypto/nicknym/refreshkeys', '') HTTP_CALLABLE = ('POST',) def command(self): n = Nicknym(self.session.config) n.refresh_keys() return True _plugins = PluginManager(builtin=__file__) _plugins.register_commands(NicknymGetKey) _plugins.register_commands(NicknymRefreshKeys) if __name__ == "__main__": n = Nicknym() print n.get_key("varac@bitmask.net")
If you’ve read Farrell’s latest book, The Third Way, then you will want to read this book to delve further into the inner workings of the Nazi mindset, and how they were able to accomplish what they did before, and more importantly after, the war. And even if you haven’t, this is a great starting place from which to branch out into Farrell’s many other books, which cover the many aspects of the Nazis that most people don’t realize are even documented. Farrell gives a lengthy top-down synopsis of a sizeable breadth of the Nazis’ post-WWII war machine and the extent to which the Nazis went to continue what they began before WWII. Farrell begins by noting what he’s mentioned in countless interviews: that the military of Germany – the Nazis – did not surrender. That alone is an excellent foundation for the notion that the Nazis weren’t ones to roll over and die, as many would have you believe. A small but notable data point, because it extends further into the future, all the way into the present, in fact. In contrast, the Japanese did in fact surrender during WWII. From there, the author continues with various examinations of the Nazi connection with key corporations, spearheaded by the notorious I.G. Farben and its penetration of big banks, and of the growth of the Nazi postwar network led by the notorious Martin Bormann. In conjunction with that, Farrell lays down many of the intricacies that allowed the Nazis not only to survive in a post-WWII environment, but to actually thrive. That is a rather disturbing prospect indeed. Another notable component that goes oft-overlooked is the analysis of Hitler’s alleged death. Many people take it at face value that he committed ‘suicide’. Farrell sifts through the evidence and narrows down what the probable truth might have been. That is rather intriguing, as it would literally change history in more ways than people could imagine. The Nazis surviving in a post-WWII establishment is a bold, but truthful, claim.
Operation Paperclip, which is on public record, bears this out. But the possibility of the boss – the very symbol of Nazism – also making it through? Now that will make some folks’ heads spin. Other notable components of the story are the Vatican’s connection to the Nazi ratlines, Argentina’s connection to the Nazis before and after the war, as well as how key individuals like Allen Dulles are involved in this whole ordeal. Another very incisive, and oft-overlooked, component of Farrell’s research is the fact that the Nazis played an integral role in the manipulation of the Muslim world. This has arguably continued to this day, which is quite distressing. If such is the case, the whole Muslims-are-terrorists meme needs to be examined with precision, because behind the scenes much of what seems to be one element might just be another pulling the strings, as other authors have also noted. The surge of terrorism that has ensued since must be questioned, given how much it has served to fracture how the US is seen in the world, how much profit the military-industrial complex has achieved, and how much control has been established over the Middle East and beyond. That’s not to say people are not responsible for their own actions. However, who drives those actions is just as, if not more, important, because that would be the root cause, rather than a symptom addressed by focusing only on Muslims. Farrell furthers his research by addressing technological components the Nazis were working on, such as the notorious Bell, while also examining NASA’s secret history in connection to his thesis. But when you couple all of the above with the fact that people with the same Nazi-like mindset are still around and share the same ideals, it becomes vital for individuals to know what’s going on, which makes this book that much more important.
import sys, traceback import mal_readline import mal_types as types import reader, printer from env import Env import core # read def READ(str): return reader.read_str(str) # eval def is_pair(x): return types._sequential_Q(x) and len(x) > 0 def quasiquote(ast): if not is_pair(ast): return types._list(types._symbol("quote"), ast) elif ast[0] == 'unquote': return ast[1] elif is_pair(ast[0]) and ast[0][0] == 'splice-unquote': return types._list(types._symbol("concat"), ast[0][1], quasiquote(ast[1:])) else: return types._list(types._symbol("cons"), quasiquote(ast[0]), quasiquote(ast[1:])) def is_macro_call(ast, env): return (types._list_Q(ast) and types._symbol_Q(ast[0]) and env.find(ast[0]) and hasattr(env.get(ast[0]), '_ismacro_')) def macroexpand(ast, env): while is_macro_call(ast, env): mac = env.get(ast[0]) ast = macroexpand(mac(*ast[1:]), env) return ast def eval_ast(ast, env): if types._symbol_Q(ast): return env.get(ast) elif types._list_Q(ast): return types._list(*map(lambda a: EVAL(a, env), ast)) elif types._vector_Q(ast): return types._vector(*map(lambda a: EVAL(a, env), ast)) elif types._hash_map_Q(ast): keyvals = [] for k in ast.keys(): keyvals.append(EVAL(k, env)) keyvals.append(EVAL(ast[k], env)) return types._hash_map(*keyvals) else: return ast # primitive value, return unchanged def EVAL(ast, env): while True: #print("EVAL %s" % printer._pr_str(ast)) if not types._list_Q(ast): return eval_ast(ast, env) # apply list ast = macroexpand(ast, env) if not types._list_Q(ast): return ast if len(ast) == 0: return ast a0 = ast[0] if "def!" == a0: a1, a2 = ast[1], ast[2] res = EVAL(a2, env) return env.set(a1, res) elif "let*" == a0: a1, a2 = ast[1], ast[2] let_env = Env(env) for i in range(0, len(a1), 2): let_env.set(a1[i], EVAL(a1[i+1], let_env)) ast = a2 env = let_env # Continue loop (TCO) elif "quote" == a0: return ast[1] elif "quasiquote" == a0: ast = quasiquote(ast[1]); # Continue loop (TCO) elif 'defmacro!' 
== a0: func = EVAL(ast[2], env) func._ismacro_ = True return env.set(ast[1], func) elif 'macroexpand' == a0: return macroexpand(ast[1], env) elif "do" == a0: eval_ast(ast[1:-1], env) ast = ast[-1] # Continue loop (TCO) elif "if" == a0: a1, a2 = ast[1], ast[2] cond = EVAL(a1, env) if cond is None or cond is False: if len(ast) > 3: ast = ast[3] else: ast = None else: ast = a2 # Continue loop (TCO) elif "fn*" == a0: a1, a2 = ast[1], ast[2] return types._function(EVAL, Env, a2, env, a1) else: el = eval_ast(ast, env) f = el[0] if hasattr(f, '__ast__'): ast = f.__ast__ env = f.__gen_env__(el[1:]) else: return f(*el[1:]) # print def PRINT(exp): return printer._pr_str(exp) # repl repl_env = Env() def REP(str): return PRINT(EVAL(READ(str), repl_env)) # core.py: defined using python for k, v in core.ns.items(): repl_env.set(types._symbol(k), v) repl_env.set(types._symbol('eval'), lambda ast: EVAL(ast, repl_env)) repl_env.set(types._symbol('*ARGV*'), types._list(*sys.argv[2:])) # core.mal: defined using the language itself REP("(def! not (fn* (a) (if a false true)))") REP("(def! load-file (fn* (f) (eval (read-string (str \"(do \" (slurp f) \")\")))))") REP("(defmacro! cond (fn* (& xs) (if (> (count xs) 0) (list 'if (first xs) (if (> (count xs) 1) (nth xs 1) (throw \"odd number of forms to cond\")) (cons 'cond (rest (rest xs)))))))") REP("(defmacro! or (fn* (& xs) (if (empty? xs) nil (if (= 1 (count xs)) (first xs) `(let* (or_FIXME ~(first xs)) (if or_FIXME or_FIXME (or ~@(rest xs))))))))") if len(sys.argv) >= 2: REP('(load-file "' + sys.argv[1] + '")') sys.exit(0) # repl loop while True: try: line = mal_readline.readline("user> ") if line == None: break if line == "": continue print(REP(line)) except reader.Blank: continue except Exception as e: print("".join(traceback.format_exception(*sys.exc_info())))
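The `quasiquote` expansion used by `EVAL` above can be exercised in isolation. A minimal sketch over plain Python lists, with strings standing in for mal symbols (so `["quote", x]` plays the role of `(quote x)`):

```python
def is_pair(x):
    # a non-empty sequence, mirroring the interpreter's is_pair
    return isinstance(x, list) and len(x) > 0

def quasiquote(ast):
    if not is_pair(ast):
        # atoms and empty lists are simply quoted
        return ["quote", ast]
    if ast[0] == "unquote":
        # (unquote x) evaluates x as-is
        return ast[1]
    if is_pair(ast[0]) and ast[0][0] == "splice-unquote":
        # (splice-unquote xs) splices xs into the surrounding list
        return ["concat", ast[0][1], quasiquote(ast[1:])]
    # otherwise cons the expanded head onto the expanded tail
    return ["cons", quasiquote(ast[0]), quasiquote(ast[1:])]
```

The recursion rewrites a quasiquoted form into ordinary `quote`/`cons`/`concat` calls, which is why `EVAL` can simply loop (`ast = quasiquote(ast[1])`) for tail-call optimization instead of treating quasiquote as a separate evaluator.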
Requires macOS 10.9+. Works with Mojave. It's Free and Open Source. Donate. Reduces image file sizes — so they take up less disk space and download faster — by applying advanced compression that preserves quality. Removes invisible junk: private EXIF metadata from digital cameras, embedded thumbnails, comments, and unnecessary color profiles. Seamlessly combines all the best image optimization tools: MozJPEG, pngquant, Pngcrush, 7zip, SVGO and Google Zopfli. All Free and Open-Source. English, French, German, Spanish, Portuguese, Italian, Dutch, Norwegian, Swedish, Danish, Japanese, Chinese, Korean, Vietnamese, Turkish, Russian, Lithuanian, Czech and Polish. Help translate it! ImageOptim is excellent for publishing images on the web (easily shrinks images “Saved for Web” in Photoshop). It's useful for making Mac and iPhone/iPad applications smaller (if you configure Xcode to allow better optimization). ImageOptim removes EXIF metadata, such as GPS position and the camera's serial number, so that you can publish images without exposing private information (but there's an option to keep the metadata if you need it). When you drag'n'drop images into ImageOptim's window it will run several image optimization tools automatically and combine their results, ensuring that you always get the smallest file. See installation and usage instructions. ImageOptim integrates well with macOS, so you can also drop files on ImageOptim's Dock icon, use the Services menu in Finder, or use the Markup menu on attached images in Apple Mail. ImageOptim can also be launched from the command line or from Sketch. If you enable Lossy minification you'll get the smallest file sizes possible. By default ImageOptim is very cautious and exactly preserves image quality, but if you allow it to change the quality — even only a little — it will be free to use much more aggressive optimizations that give the biggest results. You can configure lossy optimizations in ImageOptim's Preferences.
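The "run every tool, keep the smallest result" strategy described above is easy to sketch. This is an illustration of the idea only, not ImageOptim's actual code; the toy optimizer callables below stand in for real tools such as MozJPEG or pngquant.

```python
def smallest_output(data, optimizers):
    """Apply each optimizer to `data` and keep the smallest result.

    `optimizers` is a list of callables taking bytes and returning bytes
    (stand-ins here for external tools like MozJPEG or pngquant).
    A result is only accepted if it is smaller than what we already have,
    so the output is never worse than the input.
    """
    best = data
    for optimize in optimizers:
        candidate = optimize(data)
        if candidate is not None and len(candidate) < len(best):
            best = candidate
    return best

# Toy optimizers: one strips trailing padding, one does nothing.
strip_nulls = lambda b: b.rstrip(b"\x00")
identity = lambda b: b

print(len(smallest_output(b"data\x00\x00\x00", [identity, strip_nulls])))  # 4
```

Real optimizers are lossless here, so picking the byte-smallest candidate is always safe; with lossy tools you would also need a quality threshold.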
ImageOptim can apply lossy compression not only to JPEG, but to SVG, animated GIF and PNG as well! ImageOptim is free, open-source software under the terms of the GPL v2 or later. You can fork the code on GitHub and improve it! Feel free to contact me for assistance. PNGOUT is bundled with permission of Ardfry Imaging, LLC and is not covered by the GPL. How does ImageOptim compare to TinyPNG, MozJPEG or Guetzli? You can get the same or better compression if you enable the Lossy minification option in ImageOptim's preferences. Tools like ImageAlpha/pngquant/TinyPNG/JPEGMini/MozJPEG make files smaller by using lossy compression, which lowers image quality; ImageOptim doesn't do this by default, but can if you allow it. Can I keep embedded copyright and camera information? Yes. Uncheck Strip JPEG metadata in Preferences. It's slow on PNG files. How can I make it faster? In preferences, uncheck PNGOUT and Zopfli. Without these tools optimization will run much quicker, but will be a bit less effective. Will ImageOptim be in the App Store? No, and please beware of knock-offs in the App Store! Apple has been selling three already. ImageOptim is given away for free on terms that basically say “you can do whatever you want except taking this freedom away from others”. Apple does not allow such permissive terms. Apple requires all App Store users to accept DRM (copy protection) and legal restrictions in the iTunes EULA. You can get ImageOptim here, DRM-free. Its license allows you to share it, modify it, use it in any country in the world — even sell it — if you don't forbid anybody else from doing the same. You can donate to support the project. Pro tips about ImageOptim, ImageAlpha and image formats in general. News about upcoming features and access to preview versions of apps I develop. Created by Kornel Lesiński. Contact. Follow on Twitter. App icon by icons8. GitHub project. ImageAlpha. pngquant2. API.
#!/usr/bin/python
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

# This set of commands provides a way to use counters at debug time. By using
# these counters, you can track how many times your program takes a specific path.
#
# Sample Use Case:
# Let's say you have a function that logs some messages from various parts of
# your code, and you want to learn how many times logMessage is called on startup.
#
# 1. Add a breakpoint to the entry point of your program (e.g. main).
#    a. Add `zzz 10 printcounter` as an action.
#    b. Check "Automatically continue after evaluating actions"
# 2. Add a breakpoint to the logMessage function.
#    a. Add `incrementcounter log` as an action.
#    b. Add `incrementcounter log_{} message` as an action.
#    c. Check "Automatically continue after evaluating actions"
# 3. Run the program
#
# Format String:
# It uses Python's string.Formatter to format strings. You can use placeholders
# here as you can in Python:
# https://docs.python.org/3.4/library/string.html#string.Formatter.format
#
# Sample key_format_string:
# "key_{}" (int)5 -> Will build the key string as "key_5"

# Can be removed when Python 2 support is removed.
from __future__ import print_function

import fbchisellldbbase as fb

counters = {}


def lldbcommands():
    return [
        FBIncrementCounterCommand(),
        FBPrintCounterCommand(),
        FBPrintCountersCommand(),
        FBResetCounterCommand(),
        FBResetCountersCommand(),
    ]


def generateKey(arguments):
    keyFormatString = arguments[1]
    keyArgs = []
    for argument in arguments[2:]:
        if argument.startswith("("):
            value = fb.evaluateExpression(argument)
        else:
            value = fb.evaluateExpressionValue(argument).GetObjectDescription()
            if not value:
                value = fb.evaluateExpression(argument)
        keyArgs.append(value)
    return keyFormatString.format(*keyArgs).strip()


# Increments the counter for the key.
# (lldb) incrementcounter key_format_string key_args
class FBIncrementCounterCommand(fb.FBCommand):
    def name(self):
        return "incrementcounter"

    def description(self):
        return "Increments the counter for the key."

    def run(self, arguments, options):
        key = generateKey(arguments)
        counters[key] = counters.get(key, 0) + 1


# Prints the counter for the key.
# (lldb) printcounter key_format_string key_args
# 0
class FBPrintCounterCommand(fb.FBCommand):
    def name(self):
        return "printcounter"

    def description(self):
        return "Prints the counter for the key."

    def run(self, arguments, options):
        key = generateKey(arguments)
        print(str(counters[key]))


# Prints all the counters sorted by the keys.
# (lldb) printcounters
# key_1: 0
class FBPrintCountersCommand(fb.FBCommand):
    def name(self):
        return "printcounters"

    def description(self):
        return "Prints all the counters sorted by the keys."

    def run(self, arguments, options):
        keys = sorted(counters.keys())
        for key in keys:
            print(key + ": " + str(counters[key]))


# Resets the counter for the key.
# (lldb) resetcounter key_format_string key_args
class FBResetCounterCommand(fb.FBCommand):
    def name(self):
        return "resetcounter"

    def description(self):
        return "Resets the counter for the key."

    def run(self, arguments, options):
        key = generateKey(arguments)
        counters[key] = 0


# Resets all the counters.
# (lldb) resetcounters
class FBResetCountersCommand(fb.FBCommand):
    def name(self):
        return "resetcounters"

    def description(self):
        return "Resets all the counters."

    def run(self, arguments, options):
        counters.clear()
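The key-building and counting behaviour can be tried outside of lldb. This minimal sketch mirrors what `generateKey` plus `incrementcounter` do with a `key_format_string`; the `increment` helper is a simplified stand-in, not part of the actual commands.

```python
counters = {}

def increment(key_format_string, *key_args):
    # Mirror generateKey: format the key with string.Formatter-style
    # placeholders, strip whitespace, then bump its counter.
    key = key_format_string.format(*key_args).strip()
    counters[key] = counters.get(key, 0) + 1
    return key

increment("log")          # plain key  -> "log"
increment("key_{}", 5)    # formatted  -> "key_5"
increment("key_{}", 5)
print(counters["key_5"])  # 2
```

In the real commands the `{}` placeholders are filled from expressions evaluated in the debugger, but the formatting rules are ordinary Python `str.format` rules.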
Name changed to 'Charge Blade Effect Replace'. Name changed to 'Charge Blade Effect Version 1.1'. Name changed to '(Under Maintenance)Charge Blade Effect Version 1.1'. File 'Charge Blade Effect Version 1.1' category changed.
# -*- coding: utf-8 -*-
# ===============================================================================
# @ Creator: Hainnan.Zhang
# @ Date: 2016-3-24
# Simulates a browser
# ===============================================================================
import cookielib
import urllib2

from Stock_Interface_test.old.interface_test import *
from Stock_Interface_test.old.InterFace_List import *


class MyWeb():
    """
    Simulates a browser (Python 2: urllib2 + cookielib).
    """
    def __init__(self):
        self.header = {
            "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0"
        }
        self.cookie = cookielib.CookieJar()
        self.cookie_support = urllib2.HTTPCookieProcessor(self.cookie)
        self.opener = urllib2.build_opener(self.cookie_support, urllib2.HTTPHandler)
        # urllib2.install_opener(self.opener)

    def post(self, posturl, dictdata):
        """
        Simulates a POST request.
        :param string posturl: URL address
        :param dict dictdata: data to send
        """
        request = urllib2.Request(posturl, dictdata, self.header)
        try:
            content = self.opener.open(request)
            return content
        except Exception, e:
            print("post:" + str(e))
            return None

    def get(self, url):
        """
        Simulates a GET request.
        :param url: URL address
        :return content: usually read the response data with the read() method
        :rtype: instance or None
        """
        request = urllib2.Request(url, None, self.header)
        try:
            content = urllib2.urlopen(request)
            return content
        except Exception, e:
            print("open:" + str(e))
            return None


if __name__ == "__main__":
    web = MyWeb()
    code_use = get_magic_code(url_result)
    # print code_use
    ticket_use = get_ticket(code_use, user_name, pwd)
    # print ticket_use
    date_use = 'ticket=' + ticket_use
    login = web.post('https://stock-api.jimustock.com/api/v1/security/login', date_use)
    print login.read()
    url = 'https://stock-api.jimu.com/api/v1/us/trade/validateBuy'
    data = 'entrustAmount=1&entrustPrice=0.1&symbol=ACW&usAccountId=66&type=LIMIT&orderTimeInForce=DAY'
    res = web.post(url, data)
    print res.read()
This week began with my daughter's 12th birthday. I took her for a surprise kid's manicure after school (she loves doing her nails) and we had a family dinner. She celebrated with friends on the weekend. Work has been super busy with technology workshops each Monday. It's good though, I feel like we are reaching a lot of the staff with the workshops and one-on-one sessions. I finished The Woman Who Fell from the Sky by Jennifer Steil and found it quite interesting. My next book is The Queen of Water by Laura Resau, a YA book my colleague recommended. My daughter and I are in San Antonio, TX this weekend for Camp PULSE, a dance convention. The dancing is going really well with three choreographers: Lane Napper from Victorious (he acts in the show as well as choreographing all the dance); Janelle Ginestra from GLEE (choreographer); and David Moore (choreographer for the movie Step Up 3). Lots of high energy fun. For a break in the dance activities, we hung out with Amanda (Reading and Running) who lives here in San Antonio! It was fun to meet her and her family. We just walked along the river walk and chatted.
""" Photoshop metadata parser. References: - http://www.scribd.com/doc/32900475/Photoshop-File-Formats """ from hachoir_core.field import (FieldSet, ParserError, UInt8, UInt16, UInt32, Float32, Enum, SubFile, String, CString, PascalString8, NullBytes, RawBytes) from hachoir_core.text_handler import textHandler, hexadecimal from hachoir_core.tools import alignValue, createDict from hachoir_parser.image.iptc import IPTC from hachoir_parser.common.win32 import PascalStringWin32 BOOL = {0: False, 1: True} class Version(FieldSet): def createFields(self): yield UInt32(self, "version") yield UInt8(self, "has_realm") yield PascalStringWin32(self, "writer_name", charset="UTF-16-BE") yield PascalStringWin32(self, "reader_name", charset="UTF-16-BE") yield UInt32(self, "file_version") size = (self.size - self.current_size) // 8 if size: yield NullBytes(self, "padding", size) class FixedFloat32(FieldSet): def createFields(self): yield UInt16(self, "int_part") yield UInt16(self, "float_part") def createValue(self): return self["int_part"].value + float(self["float_part"].value) / (1<<16) class ResolutionInfo(FieldSet): def createFields(self): yield FixedFloat32(self, "horiz_res") yield Enum(UInt16(self, "horiz_res_unit"), {1:'px/in', 2:'px/cm'}) yield Enum(UInt16(self, "width_unit"), {1:'inches', 2:'cm', 3:'points', 4:'picas', 5:'columns'}) yield FixedFloat32(self, "vert_res") yield Enum(UInt16(self, "vert_res_unit"), {1:'px/in', 2:'px/cm'}) yield Enum(UInt16(self, "height_unit"), {1:'inches', 2:'cm', 3:'points', 4:'picas', 5:'columns'}) class PrintScale(FieldSet): def createFields(self): yield Enum(UInt16(self, "style"), {0:'centered', 1:'size to fit', 2:'user defined'}) yield Float32(self, "x_location") yield Float32(self, "y_location") yield Float32(self, "scale") class PrintFlags(FieldSet): def createFields(self): yield Enum(UInt8(self, "labels"), BOOL) yield Enum(UInt8(self, "crop_marks"), BOOL) yield Enum(UInt8(self, "color_bars"), BOOL) yield Enum(UInt8(self, 
"reg_marks"), BOOL) yield Enum(UInt8(self, "negative"), BOOL) yield Enum(UInt8(self, "flip"), BOOL) yield Enum(UInt8(self, "interpolate"), BOOL) yield Enum(UInt8(self, "caption"), BOOL) yield Enum(UInt8(self, "print_flags"), BOOL) yield Enum(UInt8(self, "unknown"), BOOL) def createValue(self): return [field.name for field in self if field.value] def createDisplay(self): return ', '.join(self.value) class PrintFlags2(FieldSet): def createFields(self): yield UInt16(self, "version") yield UInt8(self, "center_crop_marks") yield UInt8(self, "reserved") yield UInt32(self, "bleed_width") yield UInt16(self, "bleed_width_scale") class GridGuides(FieldSet): def createFields(self): yield UInt32(self, "version") yield UInt32(self, "horiz_cycle", "Horizontal grid spacing, in quarter inches") yield UInt32(self, "vert_cycle", "Vertical grid spacing, in quarter inches") yield UInt32(self, "guide_count", "Number of guide resource blocks (can be 0)") class Thumbnail(FieldSet): def createFields(self): yield Enum(UInt32(self, "format"), {0:'Raw RGB', 1:'JPEG RGB'}) yield UInt32(self, "width", "Width of thumbnail in pixels") yield UInt32(self, "height", "Height of thumbnail in pixels") yield UInt32(self, "widthbytes", "Padded row bytes = (width * bits per pixel + 31) / 32 * 4") yield UInt32(self, "uncompressed_size", "Total size = widthbytes * height * planes") yield UInt32(self, "compressed_size", "Size after compression. 
Used for consistency check") yield UInt16(self, "bits_per_pixel") yield UInt16(self, "num_planes") yield SubFile(self, "thumbnail", self['compressed_size'].value, "Thumbnail (JPEG file)", mime_type="image/jpeg") class Photoshop8BIM(FieldSet): TAG_INFO = { 0x03ed: ("res_info", ResolutionInfo, "Resolution information"), 0x03f3: ("print_flag", PrintFlags, "Print flags: labels, crop marks, colour bars, etc."), 0x03f5: ("col_half_info", None, "Colour half-toning information"), 0x03f8: ("color_trans_func", None, "Colour transfer function"), 0x0404: ("iptc", IPTC, "IPTC/NAA"), 0x0406: ("jpeg_qual", None, "JPEG quality"), 0x0408: ("grid_guide", GridGuides, "Grid guides informations"), 0x0409: ("thumb_res", Thumbnail, "Thumbnail resource (PS 4.0)"), 0x0410: ("watermark", UInt8, "Watermark"), 0x040a: ("copyright_flag", UInt8, "Copyright flag"), 0x040b: ("url", None, "URL"), 0x040c: ("thumb_res2", Thumbnail, "Thumbnail resource (PS 5.0)"), 0x040d: ("glob_angle", UInt32, "Global lighting angle for effects"), 0x0411: ("icc_tagged", None, "ICC untagged (1 means intentionally untagged)"), 0x0414: ("base_layer_id", UInt32, "Base value for new layers ID's"), 0x0416: ("indexed_colors", UInt16, "Number of colors in table that are actually defined"), 0x0417: ("transparency_index", UInt16, "Index of transparent color"), 0x0419: ("glob_altitude", UInt32, "Global altitude"), 0x041a: ("slices", None, "Slices"), 0x041e: ("url_list", None, "Unicode URLs"), 0x0421: ("version", Version, "Version information"), 0x0425: ("caption_digest", None, "16-byte MD5 caption digest"), 0x0426: ("printscale", PrintScale, "Printer scaling"), 0x2710: ("print_flag2", PrintFlags2, "Print flags (2)"), } TAG_NAME = createDict(TAG_INFO, 0) CONTENT_HANDLER = createDict(TAG_INFO, 1) TAG_DESC = createDict(TAG_INFO, 2) def __init__(self, *args, **kw): FieldSet.__init__(self, *args, **kw) try: self._name, self.handler, self._description = self.TAG_INFO[self["tag"].value] except KeyError: self.handler = None size = 
self["size"] self._size = size.address + size.size + alignValue(size.value, 2) * 8 def createFields(self): yield String(self, "signature", 4, "8BIM signature", charset="ASCII") if self["signature"].value != "8BIM": raise ParserError("Stream doesn't look like 8BIM item (wrong signature)!") yield textHandler(UInt16(self, "tag"), hexadecimal) if self.stream.readBytes(self.absolute_address + self.current_size, 4) != "\0\0\0\0": yield PascalString8(self, "name") size = 2 + (self["name"].size // 8) % 2 yield NullBytes(self, "name_padding", size) else: yield String(self, "name", 4, strip="\0") yield UInt16(self, "size") size = alignValue(self["size"].value, 2) if not size: return if self.handler: if issubclass(self.handler, FieldSet): yield self.handler(self, "content", size=size*8) else: yield self.handler(self, "content") else: yield RawBytes(self, "content", size) class PhotoshopMetadata(FieldSet): def createFields(self): yield CString(self, "signature", "Photoshop version") if self["signature"].value == "Photoshop 3.0": while not self.eof: yield Photoshop8BIM(self, "item[]") else: size = (self._size - self.current_size) / 8 yield RawBytes(self, "rawdata", size)
Anvil provides comprehensive designs for new or upgraded substations and components within a facility’s electrical distribution system. Our technical services range from early phase studies, option development, and project justifications through detailed design to factory and site acceptance testing and commissioning. Our team has extensive experience with engineering specifications and detailed designs involving facility main substations, distribution substations, and utilization substations.
import logging
import os
import readline
import shlex
import sys

import hvac

LOG_FILENAME = '/tmp/vaulty-completer.log'
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)


class REPLState:
    """
    Stores state for the user's session and also wraps `hvac`.
    """
    _pwd = 'secret/'  # pwd is wrapped to magically make `oldpwd` work
    oldpwd = None
    home = 'secret/'
    # This is only used to help assist tab completion
    _list_cache = {}

    def __init__(self, vault_client):
        self.vault = vault_client

    def list(self, path):
        try:
            results = self.vault.list(path)['data']['keys']
            self._list_cache[path] = results
            return results
        except TypeError:
            # TODO don't fail silently
            return []

    @property
    def pwd(self):
        return self._pwd

    @pwd.setter
    def pwd(self, new_pwd):
        self.oldpwd = self._pwd
        self._pwd = new_pwd

    def readline_completer(self, text, state):
        logging.debug('readline text:%s state:%d', text, state)
        if state > 5:
            # Why does this happen?
            logging.error('infinite loop detected, terminating')
            return None
        if state == 0:
            if self.pwd not in self._list_cache:
                self.list(self.pwd)
            current_options = [x for x in self._list_cache[self.pwd]
                               if x.startswith(text)]
            in_cd = readline.get_line_buffer().startswith('cd ')  # TODO this is awkward
            if in_cd:
                current_options = [x for x in current_options if x.endswith('/')]
            if len(current_options) == 1:
                return current_options[0]
            if current_options:
                print()
                print('\n'.join(current_options))
                # print(text, end='')
                print(f'{self.pwd}> {readline.get_line_buffer()}', end='')
                sys.stdout.flush()
        return None


def cmd_cd(state, dir_path=None):
    if dir_path is None:
        state.pwd = state.home
        return
    if dir_path == '-':
        new_pwd = state.oldpwd or state.pwd
    else:
        new_pwd = os.path.normpath(os.path.join(state.pwd, dir_path)) + '/'
    if state.list(new_pwd):
        state.pwd = new_pwd
        return
    return f'{new_pwd} is not a valid path'


def cmd_ls(state, path=None):
    """List secrets and paths in a path, defaults to PWD."""
    if path is None:
        target_path = state.pwd
    else:
        target_path = os.path.normpath(os.path.join(state.pwd, path)) + '/'
    results = state.list(target_path)
    if results:
        return '\n'.join(results)
    return f'{path} is not a valid path'


def cmd_rm(state, *paths):
    return 'rm is not implemented yet'


def repl(state):
    in_text = input(f'{state.pwd}> ')
    bits = shlex.split(in_text)
    if not bits:
        return
    if bits[0] == 'pwd':
        print(state.pwd)
        return
    if bits[0] == 'ls' or bits[0] == 'l':
        print(cmd_ls(state, *bits[1:]))
        return
    if bits[0] == 'cd':
        out = cmd_cd(state, *bits[1:])
        out and print(out)
        return
    if bits[0] == 'cat':
        if len(bits) != 2:
            return 'USAGE: cat <path>'
        secret_path = os.path.normpath(os.path.join(state.pwd, bits[1]))
        try:
            for key, value in state.vault.read(secret_path)['data'].items():
                print(f'{key}={value}')
        except TypeError:
            print(f'{bits[1]} does not exist')
        return
    if bits[0] == 'rm':
        print(cmd_rm(state, *bits[1:]))
        return
    print('DEBUG:', in_text)


def main():
    path = os.path.expanduser('~/.vault-token')
    if os.path.isfile(path):
        with open(path) as fh:
            token = fh.read().strip()
    client = hvac.Client(url=os.getenv('VAULT_ADDR'), token=token)
    assert client.is_authenticated()
    state = REPLState(client)
    team = os.getenv('VAULT_TEAM', '')
    state.home = state.pwd = os.path.join(state.pwd, team) + '/'
    readline.set_completer(state.readline_completer)
    readline.parse_and_bind('tab: complete')
    # readline.get_completer_delims()
    # readline.set_completer_delims('\n`~!@#$%^&*()-=+[{]}\\|;:\'",<>/? ')
    readline.set_completer_delims('\n`~!@#$%^&*()=+[{]}\\|;:\'",<>/? ')
    try:
        while True:
            try:
                repl(state)
            except hvac.exceptions.Forbidden as e:
                print(e)
    except (KeyboardInterrupt, EOFError):
        sys.exit(1)


if __name__ == "__main__":
    main()
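The readline completer protocol used above is easy to miss: readline calls the registered function repeatedly with the same `text` and `state` = 0, 1, 2, ... until it returns None, and each call should return the next match. A minimal generic sketch (simplified relative to the REPL's completer, which prints all candidates at once):

```python
def make_completer(options):
    """Return a readline-style completer over a fixed option list."""
    def completer(text, state):
        matches = [o for o in options if o.startswith(text)]
        # Return the state-th match, or None to stop the enumeration.
        return matches[state] if state < len(matches) else None
    return completer

complete = make_completer(['apps/', 'certs/', 'config'])
print(complete('c', 0))  # certs/
print(complete('c', 1))  # config
print(complete('c', 2))  # None
```

Registering it is then just `readline.set_completer(complete)` followed by `readline.parse_and_bind('tab: complete')`, as in `main()` above.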
AUDI A3 SEDAN AMBITION S-TRONIC 1.8 TFSI 180 CV 4P AUT./SEQ. 2015 AUDI A3 SEDAN AMBITION S-TRONIC 1.8 TFSI 180 CV 4P AUT./SEQ. 2014 FIAT STRADA WORKING 1.4 MPI 8V FLEX MEC.
#!/usr/bin/python # Copyright (c) 2012 The Chromium OS Authors. All rights reserved. # Use of this source code is governed by a BSD-style license that can be # found in the LICENSE file. """Test the commandline module.""" from __future__ import print_function import cPickle import signal import os import sys sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname( os.path.abspath(__file__))))) from chromite.lib import commandline from chromite.lib import cros_build_lib_unittest from chromite.lib import cros_test_lib from chromite.lib import gs from chromite.lib import partial_mock from chromite.cbuildbot import constants # pylint: disable=W0212 class TestShutDownException(cros_test_lib.TestCase): """Test that ShutDownException can be pickled.""" def testShutDownException(self): """Test that ShutDownException can be pickled.""" ex = commandline._ShutDownException(signal.SIGTERM, 'Received SIGTERM') ex2 = cPickle.loads(cPickle.dumps(ex)) self.assertEqual(ex.signal, ex2.signal) self.assertEqual(ex.message, ex2.message) class GSPathTest(cros_test_lib.TestCase): """Test type=gs_path normalization functionality.""" GS_REL_PATH = 'bucket/path/to/artifacts' @staticmethod def _ParseCommandLine(argv): parser = commandline.OptionParser() parser.add_option('-g', '--gs-path', type='gs_path', help=('GS path that contains the chrome to deploy.')) return parser.parse_args(argv) def _RunGSPathTestCase(self, raw, parsed): options, _ = self._ParseCommandLine(['--gs-path', raw]) self.assertEquals(options.gs_path, parsed) def testNoGSPathCorrectionNeeded(self): """Test case where GS path correction is not needed.""" gs_path = '%s/%s' % (gs.BASE_GS_URL, self.GS_REL_PATH) self._RunGSPathTestCase(gs_path, gs_path) def testTrailingSlashRemoval(self): """Test case where GS path ends with /.""" gs_path = '%s/%s/' % (gs.BASE_GS_URL, self.GS_REL_PATH) self._RunGSPathTestCase(gs_path, gs_path.rstrip('/')) def testDuplicateSlashesRemoved(self): """Test case where GS path contains many 
/ in a row.""" self._RunGSPathTestCase( '%s/a/dir/with//////////slashes' % gs.BASE_GS_URL, '%s/a/dir/with/slashes' % gs.BASE_GS_URL) def testRelativePathsRemoved(self): """Test case where GS path contain /../ logic.""" self._RunGSPathTestCase( '%s/a/dir/up/here/.././../now/down/there' % gs.BASE_GS_URL, '%s/a/dir/now/down/there' % gs.BASE_GS_URL) def testCorrectionNeeded(self): """Test case where GS path correction is needed.""" self._RunGSPathTestCase( '%s/%s/' % (gs.PRIVATE_BASE_HTTPS_URL, self.GS_REL_PATH), '%s/%s' % (gs.BASE_GS_URL, self.GS_REL_PATH)) def testInvalidPath(self): """Path cannot be normalized.""" with cros_test_lib.OutputCapturer(): self.assertRaises2( SystemExit, self._RunGSPathTestCase, 'http://badhost.com/path', '', check_attrs={'code': 2}) class DetermineCheckoutTest(cros_test_lib.MockTempDirTestCase): """Verify functionality for figuring out what checkout we're in.""" def setUp(self): self.rc_mock = cros_build_lib_unittest.RunCommandMock() self.StartPatcher(self.rc_mock) self.rc_mock.SetDefaultCmdResult() def RunTest(self, dir_struct, cwd, expected_root, expected_type, expected_src): """Run a test with specific parameters and expected results.""" cros_test_lib.CreateOnDiskHierarchy(self.tempdir, dir_struct) cwd = os.path.join(self.tempdir, cwd) checkout_info = commandline.DetermineCheckout(cwd) full_root = expected_root if expected_root is not None: full_root = os.path.join(self.tempdir, expected_root) full_src = expected_src if expected_src is not None: full_src = os.path.join(self.tempdir, expected_src) self.assertEquals(checkout_info.root, full_root) self.assertEquals(checkout_info.type, expected_type) self.assertEquals(checkout_info.chrome_src_dir, full_src) def testGclientRepo(self): dir_struct = [ 'a/.gclient', 'a/b/.repo/', 'a/b/c/.gclient', 'a/b/c/d/somefile', ] self.RunTest(dir_struct, 'a/b/c', 'a/b/c', commandline.CHECKOUT_TYPE_GCLIENT, 'a/b/c/src') self.RunTest(dir_struct, 'a/b/c/d', 'a/b/c', commandline.CHECKOUT_TYPE_GCLIENT, 
'a/b/c/src') self.RunTest(dir_struct, 'a/b', 'a/b', commandline.CHECKOUT_TYPE_REPO, None) self.RunTest(dir_struct, 'a', 'a', commandline.CHECKOUT_TYPE_GCLIENT, 'a/src') def testGitSubmodule(self): """Recognizes a chrome git submodule checkout.""" self.rc_mock.AddCmdResult( partial_mock.In('config'), output=constants.CHROMIUM_GOB_URL) dir_struct = [ 'a/.gclient', 'a/.repo', 'a/b/.git/', ] self.RunTest(dir_struct, 'a/b', 'a/b', commandline.CHECKOUT_TYPE_SUBMODULE, 'a/b') def testBadGit1(self): """.git is not a directory.""" self.RunTest(['a/.git'], 'a', None, commandline.CHECKOUT_TYPE_UNKNOWN, None) def testBadGit2(self): """'git config' returns nothing.""" self.RunTest(['a/.repo/', 'a/b/.git/'], 'a/b', 'a', commandline.CHECKOUT_TYPE_REPO, None) def testBadGit3(self): """'git config' returns error.""" self.rc_mock.AddCmdResult(partial_mock.In('config'), returncode=5) self.RunTest(['a/.git/'], 'a', None, commandline.CHECKOUT_TYPE_UNKNOWN, None) class CacheTest(cros_test_lib.MockTempDirTestCase): """Test cache dir specification and finding functionality.""" REPO_ROOT = '/fake/repo/root' GCLIENT_ROOT = '/fake/gclient/root' SUBMODULE_ROOT = '/fake/submodule/root' CACHE_DIR = '/fake/cache/dir' def setUp(self): self.PatchObject(commandline.ArgumentParser, 'ConfigureCacheDir') dir_struct = [ 'repo/.repo/', 'gclient/.gclient', 'submodule/.git/', ] cros_test_lib.CreateOnDiskHierarchy(self.tempdir, dir_struct) self.repo_root = os.path.join(self.tempdir, 'repo') self.gclient_root = os.path.join(self.tempdir, 'gclient') self.submodule_root = os.path.join(self.tempdir, 'submodule') self.nocheckout_root = os.path.join(self.tempdir, 'nothing') self.rc_mock = self.StartPatcher(cros_build_lib_unittest.RunCommandMock()) self.rc_mock.AddCmdResult( partial_mock.In('config'), output=constants.CHROMIUM_GOB_URL) self.cwd_mock = self.PatchObject(os, 'getcwd') self.parser = commandline.ArgumentParser(caching=True) def _CheckCall(self, expected): # pylint: disable=E1101 f = 
self.parser.ConfigureCacheDir self.assertEquals(1, f.call_count) self.assertTrue(f.call_args[0][0].startswith(expected)) def testRepoRoot(self): """Test when we are inside a repo checkout.""" self.cwd_mock.return_value = self.repo_root self.parser.parse_args([]) self._CheckCall(self.repo_root) def testGclientRoot(self): """Test when we are inside a gclient checkout.""" self.cwd_mock.return_value = self.gclient_root self.parser.parse_args([]) self._CheckCall(self.gclient_root) def testSubmoduleRoot(self): """Test when we are inside a git submodule Chrome checkout.""" self.cwd_mock.return_value = self.submodule_root self.parser.parse_args([]) self._CheckCall(self.submodule_root) def testTempdir(self): """Test when we are not in any checkout.""" self.cwd_mock.return_value = self.nocheckout_root self.parser.parse_args([]) self._CheckCall('/tmp') def testSpecifiedDir(self): """Test when user specifies a cache dir.""" self.cwd_mock.return_value = self.repo_root self.parser.parse_args(['--cache-dir', self.CACHE_DIR]) self._CheckCall(self.CACHE_DIR) class ParseArgsTest(cros_test_lib.TestCase): """Test parse_args behavior of our custom argument parsing classes.""" def _CreateOptionParser(self, cls): """Create a class of optparse.OptionParser with prepared config. Args: cls: Some subclass of optparse.OptionParser. Returns: The created OptionParser object. """ usage = 'usage: some usage' parser = cls(usage=usage) # Add some options. parser.add_option('-x', '--xxx', action='store_true', default=False, help='Gimme an X') parser.add_option('-y', '--yyy', action='store_true', default=False, help='Gimme a Y') parser.add_option('-a', '--aaa', type='string', default='Allan', help='Gimme an A') parser.add_option('-b', '--bbb', type='string', default='Barry', help='Gimme a B') parser.add_option('-c', '--ccc', type='string', default='Connor', help='Gimme a C') return parser def _CreateArgumentParser(self, cls): """Create a class of argparse.ArgumentParser with prepared config. 
Args: cls: Some subclass of argparse.ArgumentParser. Returns: The created ArgumentParser object. """ usage = 'usage: some usage' parser = cls(usage=usage) # Add some options. parser.add_argument('-x', '--xxx', action='store_true', default=False, help='Gimme an X') parser.add_argument('-y', '--yyy', action='store_true', default=False, help='Gimme a Y') parser.add_argument('-a', '--aaa', type=str, default='Allan', help='Gimme an A') parser.add_argument('-b', '--bbb', type=str, default='Barry', help='Gimme a B') parser.add_argument('-c', '--ccc', type=str, default='Connor', help='Gimme a C') parser.add_argument('args', type=str, nargs='*', help='args') return parser def _TestParser(self, parser): """Test the given parser with a prepared argv.""" argv = ['-x', '--bbb', 'Bobby', '-c', 'Connor', 'foobar'] parsed = parser.parse_args(argv) if isinstance(parser, commandline.OptionParser): # optparse returns options and args separately. options, args = parsed self.assertEquals(['foobar'], args) else: # argparse returns just options. Options configured above to have the # args stored at option "args". options = parsed self.assertEquals(['foobar'], parsed.args) self.assertTrue(options.xxx) self.assertFalse(options.yyy) self.assertEquals('Allan', options.aaa) self.assertEquals('Bobby', options.bbb) self.assertEquals('Connor', options.ccc) self.assertRaises(AttributeError, getattr, options, 'xyz') # Now try altering option values. options.aaa = 'Arick' self.assertEquals('Arick', options.aaa) # Now freeze the options and try altering again. 
options.Freeze() self.assertRaises(commandline.cros_build_lib.AttributeFrozenError, setattr, options, 'aaa', 'Arnold') self.assertEquals('Arick', options.aaa) def testOptionParser(self): self._TestParser(self._CreateOptionParser(commandline.OptionParser)) def testFilterParser(self): self._TestParser(self._CreateOptionParser(commandline.FilteringParser)) def testArgumentParser(self): self._TestParser(self._CreateArgumentParser(commandline.ArgumentParser)) class ScriptWrapperMainTest(cros_test_lib.MockTestCase): """Test the behavior of the ScriptWrapperMain function.""" def setUp(self): self.PatchObject(sys, 'exit') # pylint: disable=W0613 @staticmethod def _DummyChrootTarget(args): raise commandline.ChrootRequiredError() DUMMY_CHROOT_TARGET_ARGS = ['cmd', 'arg1', 'arg2'] @staticmethod def _DummyChrootTargetArgs(args): args = ScriptWrapperMainTest.DUMMY_CHROOT_TARGET_ARGS raise commandline.ChrootRequiredError(args) def testRestartInChroot(self): rc = self.StartPatcher(cros_build_lib_unittest.RunCommandMock()) rc.SetDefaultCmdResult() ret = lambda x: ScriptWrapperMainTest._DummyChrootTarget commandline.ScriptWrapperMain(ret) rc.assertCommandContains(enter_chroot=True) rc.assertCommandContains(self.DUMMY_CHROOT_TARGET_ARGS, expected=False) def testRestartInChrootArgs(self): rc = self.StartPatcher(cros_build_lib_unittest.RunCommandMock()) rc.SetDefaultCmdResult() ret = lambda x: ScriptWrapperMainTest._DummyChrootTargetArgs commandline.ScriptWrapperMain(ret) rc.assertCommandContains(self.DUMMY_CHROOT_TARGET_ARGS, enter_chroot=True) if __name__ == '__main__': cros_test_lib.main()
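The ParseArgsTest above exercises both optparse-style and argparse-style parsers with the same option set. A minimal standalone sketch of the argparse variant (plain `argparse`, without Chromium's `commandline` wrappers or the `Freeze()` behaviour, which are project-specific):

```python
import argparse

parser = argparse.ArgumentParser(usage='usage: some usage')
parser.add_argument('-x', '--xxx', action='store_true', default=False)
parser.add_argument('-b', '--bbb', type=str, default='Barry')
# argparse stores leftover positionals on the "args" attribute here,
# matching how the test configures its parser.
parser.add_argument('args', type=str, nargs='*')

opts = parser.parse_args(['-x', '--bbb', 'Bobby', 'foobar'])
print(opts.xxx, opts.bbb, opts.args)  # True Bobby ['foobar']
```

Unlike optparse, which returns `(options, args)` as a pair, argparse returns a single namespace, which is exactly the difference `_TestParser` branches on.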
The most recent projections and estimates for different types of power plants are in Levelized Cost and Levelized Avoided Cost of New Generation Resources in the Annual Energy Outlook 2018, which includes estimated costs in dollars per megawatt-hour (MWh) based on a 30-year cost recovery period for various types of power plants that start operation in 2020, 2022, and 2040. Coal-fired power plant mill equipment is manufactured by Shanghai Xuanshi of Shanghai, China, whose main business is mineral processing solutions. Oil- and gas-fired units come in fairly uniform sizes, but coal-fired units vary widely. High-ash coals are a challenge to power plants: optimisation of combustion in high-ash coal-fired boilers is of special interest because of the mix of organic and inorganic material and the large variation in the organics. Power plant O&M: how does the industry stack up on cost? Operations and maintenance costs vary widely between different forms of power generation but form an important part of any power plant's business case. Power Technology ranks average O&M costs in the energy sector to find out which generating facilities are the cheapest to run and maintain.
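Levelized cost is, roughly, discounted lifetime costs divided by discounted lifetime generation. A minimal sketch of that arithmetic (the plant figures and the 7% discount rate below are made up for illustration, not EIA numbers):

```python
def levelized_cost(capital, annual_fixed, annual_fuel, annual_mwh,
                   years=30, rate=0.07):
    """Simplified LCOE in $/MWh over a fixed cost-recovery period."""
    # Discount factor for each year of operation.
    discount = [(1 + rate) ** -t for t in range(1, years + 1)]
    costs = capital + sum((annual_fixed + annual_fuel) * d for d in discount)
    generation = sum(annual_mwh * d for d in discount)
    return costs / generation


# Hypothetical plant: $1.2B overnight cost, $40M/yr fixed O&M,
# $90M/yr fuel, 5.2 million MWh generated per year.
example = levelized_cost(1.2e9, 40e6, 90e6, 5.2e6)
```

Real LCOE calculations add construction schedules, tax treatment, and degradation; the point here is only the cost-over-output structure.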
import os from setuptools import setup here = lambda *a: os.path.join(os.path.dirname(__file__), *a) # read the long description with open(here('README.md'), 'r') as readme_file: long_description = readme_file.read() # read the requirements.txt with open(here('requirements.txt'), 'r') as requirements_file: requirements = [x.strip() for x in requirements_file.readlines()] setup( name='pyenergenie', version='0.0.1', description='A python interface to the Energenie line of products', long_description=long_description, author='whaleygeek', classifiers=[ 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3.6' ], packages=['pyenergenie', 'pyenergenie.energenie'], package_dir={ 'pyenergenie': 'src/', 'pyenergenie.energenie': 'src/energenie/' }, install_requires=requirements, package_data={ 'pyenergenie': [ 'energenie/drv/*' ] }, entry_points={ 'console_scripts': [ 'pyenergenie=pyenergenie.setup_tool:main' ] } )
KEDM 90.3 FM is one of the popular radio stations in the USA. If you are the kind of listener who doesn’t want to get stuck with a single station, things are about to change: you have just tuned into KEDM 90.3 FM, and this is a radio station whose programs will engage you so deeply that you will tune in again and again. A community is a social unit of any size that shares common values, or that is situated in a given geographical area (e.g. a village or town). It is a group of people who are connected by durable relations that extend beyond immediate genealogical ties, and who usually define that relationship as important to their social identity and practice. Although communities are usually small, “community” may also refer to large groups, such as national communities, international communities, and virtual communities. The word “community” derives from the Old French comuneté, which comes from the Latin communitas (from Latin communis, things held in common).
VOWELS = 'aeiouy' TRIPLE_SCORES = {} def word_groups(word): """ >>> list(word_groups('weight')) ['w', 'ei', 'ght'] >>> list(word_groups('Eightyfive')) ['ei', 'ght', 'y', 'f', 'i', 'v', 'e'] """ index = 0 word = word.lower() while index < len(word): # Find some consonants. start = index while index < len(word) and word[index] not in VOWELS: index += 1 if index > start: yield word[start:index] # Find some vowels. start = index while index < len(word) and word[index] in VOWELS: index += 1 if index > start: yield word[start:index] def word_triples(word): """ >>> list(word_triples('weight')) ['^wei', 'weight', 'eight$'] >>> list(word_triples('eightyfive')) ['^eight', 'eighty', 'ghtyf', 'yfi', 'fiv', 'ive', 've$'] """ groups = ['^'] + list(word_groups(word)) + ['$'] for start in range(len(groups) - 2): yield ''.join(groups[start:start + 3]) def word_score(word, triple_scores): triples = list(word_triples(word)) result = 0.0 for triple in triples: result += triple_scores.get(triple, 0.0) return result / len(triples) if __name__ == '__main__': import doctest doctest.testmod()
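The consonant/vowel run-splitting in `word_groups` can equivalently be written with `itertools.groupby`; a minimal standalone sketch:

```python
from itertools import groupby

VOWELS = 'aeiouy'

def word_groups(word):
    """Yield maximal runs of consonants and vowels, lowercased."""
    # groupby collapses consecutive characters with the same key
    # (here: "is this character a vowel?") into one run.
    for _is_vowel, run in groupby(word.lower(), key=lambda c: c in VOWELS):
        yield ''.join(run)


assert list(word_groups('weight')) == ['w', 'ei', 'ght']
assert list(word_groups('Eightyfive')) == ['ei', 'ght', 'y', 'f', 'i', 'v', 'e']
```

Same output as the explicit index-walking version above, with the state machine delegated to the standard library.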
Dear Wirecutter: Should I Switch to Aluminum-Free Deodorant? Q: Should I switch to aluminum-free deodorant? I’ve read studies about various health concerns with antiperspirant and deodorant ingredients, but I wasn’t sure if the risk was worth the switch. I also haven’t found one that lasts long enough. I used Every Man Jack and I’m currently using Tom’s, but both can barely handle everyday work sweat, let alone running or working out. Longer answer, with all the evidence and stuff: There are two rumors, older than “them thar hills,” that aluminum in underarm antiperspirant is potentially harmful to our health. The first is that it causes breast cancer; the second is that it causes Alzheimer’s. There isn’t any good evidence for either. Next: Alzheimer’s. “The connection between aluminum and Alzheimer’s disease is less a myth than a longstanding scientific controversy,” says this Washington Post article. There’s no question that people with Alzheimer’s have higher concentrations of aluminum in their brains than people without Alzheimer’s, but scientists still don’t really know what this means. They do think that the increased aluminum is a result, not a cause, of the disease, and that the metal has a very small role, if any at all, in Alzheimer’s. Also, remember what I said above: only 0.012 percent of antiperspirant aluminum gets absorbed into people’s skin. So you might get a tiny amount of a metal that scientists don’t really think causes Alzheimer’s anyway. Two unlikelys here make a really unlikely. Last, a word on the difference between deodorant—which doesn’t have aluminum in it—and antiperspirant, which does. If you’re trying to stop the sweat, use antiperspirant. The aluminum salts in antiperspirant dissolve in the sweat coming off your pits and make a kind of goo that partially blocks your sweat glands, keeping them from spewing out more moisture.
(Antiperspirants also usually have some kind of alcohol in them to help any moisture evaporate off your body faster, because azeotropes.) Deodorant doesn’t do this; it only masks the smell of your pits. So if you want to not get soaked when working out and you’re using a deodorant only, you’ve got the wrong product. The good news: You don’t have to choose between them, because there are lots of antiperspirant-deodorant combos out there. Every Man Jack is a deodorant only, btw. Tom’s of Maine makes antiperspirants, deodorants, and combos (and their antiperspirants do have aluminum). Make sure to check the label to verify you’re buying the right stuff—and antiperspirant away, my friend.
# -*- coding: utf-8 -*- # The LLVM Compiler Infrastructure # # This file is distributed under the University of Illinois Open Source # License. See LICENSE.TXT for details. """ This module implements basic shell escaping/unescaping methods. """ import re import shlex __all__ = ['encode', 'decode'] def encode(command): """ Takes a command as list and returns a string. """ def needs_quote(word): """ Returns true if arguments needs to be protected by quotes. Previous implementation was shlex.split method, but that's not good for this job. Currently is running through the string with a basic state checking. """ reserved = {' ', '$', '%', '&', '(', ')', '[', ']', '{', '}', '*', '|', '<', '>', '@', '?', '!'} state = 0 for current in word: if state == 0 and current in reserved: return True elif state == 0 and current == '\\': state = 1 elif state == 1 and current in reserved | {'\\'}: state = 0 elif state == 0 and current == '"': state = 2 elif state == 2 and current == '"': state = 0 elif state == 0 and current == "'": state = 3 elif state == 3 and current == "'": state = 0 return state != 0 def escape(word): """ Do protect argument if that's needed. """ table = {'\\': '\\\\', '"': '\\"'} escaped = ''.join([table.get(c, c) for c in word]) return '"' + escaped + '"' if needs_quote(word) else escaped return " ".join([escape(arg) for arg in command]) def decode(string): """ Takes a command string and returns as a list. """ def unescape(arg): """ Gets rid of the escaping characters. """ if len(arg) >= 2 and arg[0] == arg[-1] and arg[0] == '"': arg = arg[1:-1] return re.sub(r'\\(["\\])', r'\1', arg) return re.sub(r'\\([\\ $%&\(\)\[\]\{\}\*|<>@?!])', r'\1', arg) return [unescape(arg) for arg in shlex.split(string)]
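For comparison, Python 3's standard library offers a similar round trip with `shlex.quote` and `shlex.split` (POSIX-style single-quoting rather than the double-quote escaping above); a minimal sketch:

```python
import shlex

def encode(command):
    """Join a command list into one safely quoted shell string."""
    return ' '.join(shlex.quote(word) for word in command)

def decode(string):
    """Split a shell string back into a command list."""
    return shlex.split(string)


cmd = ['clang', '-DMSG=hello world', 'file.c']
assert decode(encode(cmd)) == cmd
```

The custom implementation above exists because the module needs double-quote output and its own reserved-character rules; the stdlib pair is the right tool when plain POSIX quoting suffices.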
A blog is an indispensable tool for engaging and expanding your audience. What insights can you share about your industry? What questions can you answer to help potential customers evaluate options or solve a problem? What do you care about? Good blogs build your reputation as a trusted resource and increase your online visibility. Great blogs are both evergreen and fresh. FSC can help you with editing, writing, and scheduling.
""" Cache This module provides a generic Cache extended to be used on RSS, RSSCache. This cache features a lazy update method. It will only be updated if it is empty and there is a new query. If not, it will remain in its previous state. However, Cache class internal cache: DictCache sets a validity to its entries. After that, the cache is empty. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function __RCSID__ = '$Id$' import six import itertools import random from DIRAC import gLogger, S_OK, S_ERROR from DIRAC.Core.Utilities.DictCache import DictCache from DIRAC.Core.Utilities.LockRing import LockRing from DIRAC.ResourceStatusSystem.Utilities.RssConfiguration import RssConfiguration class Cache(object): """ Cache basic class. WARNING: None of its methods is thread safe. Acquire / Release lock when using them ! """ def __init__(self, lifeTime, updateFunc): """ Constructor :Parameters: **lifeTime** - `int` Lifetime of the elements in the cache ( seconds ! ) **updateFunc** - `function` This function MUST return a S_OK | S_ERROR object. In the case of the first, its value must be a dictionary. """ # We set a 20% of the lifetime randomly, so that if we have thousands of jobs # starting at the same time, all the caches will not end at the same time. randomLifeTimeBias = 0.2 * random.random() self.log = gLogger.getSubLogger(self.__class__.__name__) self.__lifeTime = int(lifeTime * (1 + randomLifeTimeBias)) self.__updateFunc = updateFunc # The records returned from the cache must be valid at least 30 seconds. 
self.__validSeconds = 30 # Cache self.__cache = DictCache() self.__cacheLock = LockRing() self.__cacheLock.getLock(self.__class__.__name__) # internal cache object getter def cacheKeys(self): """ Cache keys getter :returns: list with keys in the cache valid for at least twice the validity period of the element """ # Here we need to have more than the validity period because of the logic of the matching: # * get all the keys with validity T # * for each key K, get the element K with validity T # This logic fails for elements just at the limit of the required time return self.__cache.getKeys(validSeconds=self.__validSeconds * 2) # acquire / release Locks def acquireLock(self): """ Acquires Cache lock """ self.__cacheLock.acquire(self.__class__.__name__) def releaseLock(self): """ Releases Cache lock """ self.__cacheLock.release(self.__class__.__name__) # Cache getters def get(self, cacheKeys): """ Gets values for the given cacheKeys; if all are found ( present in the cache and valid ), returns S_OK with the results. If any is either not present or not valid, returns S_ERROR. :Parameters: **cacheKeys** - `list` list of keys to be extracted from the cache :return: S_OK | S_ERROR """ result = {} for cacheKey in cacheKeys: cacheRow = self.__cache.get(cacheKey, validSeconds=self.__validSeconds) if not cacheRow: return S_ERROR('Cannot get %s' % str(cacheKey)) result.update({cacheKey: cacheRow}) return S_OK(result) def check(self, cacheKeys, vO): """ Modified get() method. Attempts to find keys with a vO value appended or 'all' value appended. The cacheKeys passed in are 'flattened' cache keys (no vO). Gets values for the given cacheKeys; if all are found ( present in the cache and valid ), returns S_OK with the results. If any is either not present or not valid, returns S_ERROR.
:Parameters: **cacheKeys** - `list` list of keys to be extracted from the cache :return: S_OK | S_ERROR """ result = {} for cacheKey in cacheKeys: longCacheKey = cacheKey + ('all',) cacheRow = self.__cache.get(longCacheKey, validSeconds=self.__validSeconds) if not cacheRow: longCacheKey = cacheKey + (vO,) cacheRow = self.__cache.get(longCacheKey, validSeconds=self.__validSeconds) if not cacheRow: return S_ERROR('Cannot get extended %s (neither for VO = %s nor for "all" Vos)' % (str(cacheKey), vO)) result.update({longCacheKey: cacheRow}) return S_OK(result) # Cache refreshers def refreshCache(self): """ Purges the cache and gets fresh data from the update function. :return: S_OK | S_ERROR. If the first, its content is the new cache. """ self.log.verbose('refreshing...') self.__cache.purgeAll() newCache = self.__updateFunc() if not newCache['OK']: self.log.error(newCache['Message']) return newCache newCache = self.__updateCache(newCache['Value']) self.log.verbose('refreshed') return newCache # Private methods def __updateCache(self, newCache): """ Given the new cache dictionary, updates the internal cache with it. It sets a duration to the entries of <self.__lifeTime> seconds. :Parameters: **newCache** - `dict` dictionary containing a new cache :return: dictionary. It is newCache argument. """ for cacheKey, cacheValue in newCache.items(): self.__cache.add(cacheKey, self.__lifeTime, value=cacheValue) # We are assuming nothing will fail while inserting in the cache. There is # no apparent reason to suspect from that piece of code. return S_OK(newCache) class RSSCache(Cache): """ The RSSCache is an extension of Cache in which the cache keys are pairs of the form: ( elementName, statusType ). When instantiating one object of RSSCache, we need to specify the RSS elementType it applies, e.g. : StorageElement, CE, Queue, ... It provides a unique public method `match` which is thread safe. All other methods are not !! 
""" def __init__(self, lifeTime, updateFunc): """ Constructor :Parameters: **elementType** - `string` RSS elementType, e.g.: StorageElement, CE, Queue... note that one RSSCache can only hold elements of a single elementType to avoid issues while doing the Cartesian product. **lifeTime** - `int` Lifetime of the elements in the cache ( seconds ! ) **updateFunc** - `function` This function MUST return a S_OK | S_ERROR object. In the case of the first, its value must follow the dict format: ( key, value ) being key ( elementName, statusType ) and value status. """ super(RSSCache, self).__init__(lifeTime, updateFunc) self.allStatusTypes = RssConfiguration().getConfigStatusType() def match(self, elementNames, elementType, statusTypes, vO): """ In first instance, if the cache is invalid, it will request a new one from the server. It make the Cartesian product of elementNames x statusTypes to generate a key set that will be compared against the cache set. If the first is included in the second, we have a positive match and a dictionary will be returned. Otherwise, we have a cache miss. However, arguments ( elementNames or statusTypes ) can have a None value. If that is the case, they are considered wildcards. :Parameters: **elementNames** - [ None, `string`, `list` ] name(s) of the elements to be matched **elementType** - [ `string` ] type of the elements to be matched **statusTypes** - [ None, `string`, `list` ] name(s) of the statusTypes to be matched :return: S_OK() || S_ERROR() """ self.acquireLock() try: return self._match(elementNames, elementType, statusTypes, vO) finally: # Release lock, no matter what ! self.releaseLock() # Private methods: NOT THREAD SAFE !! def _match(self, elementNames, elementType, statusTypes, vO): """ Method doing the actual work. It must be wrapped around locks to ensure no disaster happens. 
:Parameters: **elementNames** - [ None, `string`, `list` ] name(s) of the elements to be matched **elementType** - [ `string` ] type of the elements to be matched **statusTypes** - [ None, `string`, `list` ] name(s) of the statusTypes to be matched :return: S_OK() || S_ERROR() """ # Gets the entire cache or a new one if it is empty / invalid validCache = self.__getValidCache() if not validCache['OK']: return validCache validCache = validCache['Value'] # Gets matched keys try: matchKeys = self.__match(validCache, elementNames, elementType, statusTypes, vO) except IndexError: return S_ERROR("RSS cache empty?") if not matchKeys['OK']: return matchKeys # Gets objects for matched keys. It will return S_ERROR if the cache value # has expired in between. It has 30 valid seconds, which means something was # extremely slow above. if matchKeys['CheckVO']: cacheMatches = self.check(matchKeys['Value'], vO) # add an appropriate VO to the keys else: cacheMatches = self.get(matchKeys['Value']) if not cacheMatches['OK']: return cacheMatches cacheMatches = cacheMatches['Value'] if not cacheMatches: return S_ERROR('Empty cache for: %s, %s' % (elementNames, elementType)) # We undo the key into <elementName> and <statusType> try: cacheMatchesDict = self.__getDictFromCacheMatches(cacheMatches) except ValueError: cacheMatchesDict = cacheMatches return S_OK(cacheMatchesDict) def __getValidCache(self): """ Obtains the keys on the cache which are valid. If any, returns the complete valid dictionary. If the list is empty, we assume the cache is invalid or not filled, so we issue a cache refresh and return its data. :return: { ( elementName, statusType, vO ) : status, ... } """ cacheKeys = self.cacheKeys() # If cache is empty, we refresh it. if not cacheKeys: cache = self.refreshCache() else: cache = self.get(cacheKeys) return cache def __match(self, validCache, elementNames, elementType, statusTypes, vO): """ Obtains all keys on the cache ( should not be empty ! ). 
Gets the sets ( no duplicates ) of elementNames and statusTypes. There is a slight distinction. A priori we cannot know which are all the elementNames. So, if elementNames is None, we will consider all elementNames in the cacheKeys. However, if statusTypes is None, we will get the standard list from the ResourceStatus configuration in the CS. If the Cartesian product of our sets is on the cacheKeys set, we have a positive match. :Parameters: **validCache** - `dict` cache dictionary **elementNames** - [ None, `string`, `list` ] name(s) of the elements to be matched **elementType** - [ `string` ] type of the elements to be matched **statusTypes** - [ None, `string`, `list` ] name(s) of the statusTypes to be matched :return: S_OK() with a VO check marker || S_ERROR() """ cacheKeys = list(validCache) # flatten the cache. From our VO perspective we only want to keep: # 1) keys with vO tuple element equal to our vO, # 2) keys with vO tuple element equal to 'all', but only if no element described in 1) exists. # a resource key is set to have 3 elements to allow a comparison with a Cartesian product.
checkVo = False if len(cacheKeys[0]) == 4: # resource checkVo = True flattenedCache = {(key[0], key[1], key[2]): value for key, value in validCache.items() if key[3] == "all"} flattenedCache.update({(key[0], key[1], key[2]): value for key, value in validCache.items() if key[3] == vO}) validCache = flattenedCache else: # site, not VO specific in SiteStatus, eventually to be upgraded there to include the VO pass if isinstance(elementNames, six.string_types): elementNames = [elementNames] elif elementNames is None: if isinstance(cacheKeys[0], (tuple, list)): elementNames = [cacheKey[0] for cacheKey in cacheKeys] else: elementNames = cacheKeys # Remove duplicates, makes Cartesian product faster elementNamesSet = set(elementNames) if isinstance(elementType, six.string_types): if not elementType or elementType == 'Site': elementType = [] else: elementType = [elementType] elif elementType is None: elementType = [cacheKey[1] for cacheKey in cacheKeys] # Remove duplicates, makes Cartesian product faster elementTypeSet = set(elementType) if isinstance(statusTypes, six.string_types): if not statusTypes: statusTypes = [] else: statusTypes = [statusTypes] elif statusTypes is None: statusTypes = self.allStatusTypes # Remove duplicates, makes Cartesian product faster statusTypesSet = set(statusTypes) if not elementTypeSet and not statusTypesSet: cartesianProduct = elementNamesSet else: cartesianProduct = set(itertools.product(elementNamesSet, elementTypeSet, statusTypesSet)) # Some users find funny sending empty lists, which will make the cartesianProduct # be []. Problem: [] is always subset, no matter what ! 
if not cartesianProduct: self.log.warn('Empty cartesian product') return S_ERROR('Empty cartesian product') notInCache = list(cartesianProduct.difference(set(validCache))) if notInCache: self.log.warn('Cache misses: %s' % notInCache) return S_ERROR('Cache misses: %s' % notInCache) result = S_OK(cartesianProduct) result['CheckVO'] = checkVo return result @staticmethod def __getDictFromCacheMatches(cacheMatches): """ Formats the cacheMatches to a format expected by the RSS helpers clients. :Parameters: **cacheMatches** - `dict` cache dictionary of the form { ( elementName, elementType, statusType, vO ) : status, ... } :return: dict of the form { elementName : { statusType: status, ... }, ... } """ result = {} for cacheKey, cacheValue in cacheMatches.items(): elementName, _elementType, statusType, vO = cacheKey result.setdefault(elementName, {})[statusType] = cacheValue return result
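The subset test at the heart of `__match` can be sketched in isolation. Key shapes and names below are illustrative (real RSS keys also carry an element type and a VO component):

```python
import itertools

def match_keys(cache_keys, element_names, status_types):
    """Return the requested key set if every combination is cached."""
    wanted = set(itertools.product(element_names, status_types))
    if not wanted:
        # An empty Cartesian product would be a subset of anything; reject it,
        # mirroring the "empty cartesian product" guard in __match.
        raise ValueError('empty cartesian product')
    missing = wanted.difference(cache_keys)
    if missing:
        raise KeyError('cache misses: %s' % sorted(missing))
    return wanted


cache = {('SE1', 'ReadAccess'): 'Active', ('SE1', 'WriteAccess'): 'Banned'}
keys = match_keys(set(cache), ['SE1'], ['ReadAccess', 'WriteAccess'])
```

Either every requested (name, statusType) pair is cached and the full set comes back, or the first gap is reported as a cache miss — exactly the all-or-nothing contract the class documents.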
If you are looking for Nuffield tractor spares in Storwood then come to Tractor Spare Parts Ltd! We are the UK's largest stockist of tractor parts. Our extensive range of parts, merchandise, and booklets ensures anyone who is looking for Marshall tractor parts in Storwood can find what they need. We specialise in BMC engine parts and parts for Nuffield, Leyland and Marshall tractors. We offer a full restoration service as well as our parts business; this helps ensure all of our parts fit straight onto the tractors. Where possible we aim to improve both the design and materials used in our parts. Looking for Nuffield tractor spares in Storwood can become arduous if you do not know who or what to look for. Our reliable service is open to visitors by appointment only, so if you need to contact us, please send an email to info@tractorspareparts.co.uk with any queries you have and we will be happy to help. We also offer a fast, fully trackable home delivery courier service. If you need any help identifying the engine you are looking for, please get in touch with us. All our parts are re-manufactured where possible to the same standard as OEM suppliers. With our vast range of stock and engines, our team will help you find the parts you need straight away, all from one place and at a competitive price that cannot be matched anywhere else in the UK. Realising early on that the market was changing enabled Tractor Spare Parts Ltd to build a name in a market that is constantly growing. We aim to build strong relationships with our customers, as we do every day, whilst delivering top quality products throughout the UK and internationally. Our love for all things tractor stems from working around and using tractors on a daily basis; the work and care put into your project will be the best quality work in the UK.
BMC engines in Storwood are becoming more and more limited by the day; this can be down to tractor owners throwing away parts, or to the neglect of tractors leaving parts to rust and become unusable. At Tractor Spare Parts we will pass on our expert knowledge to ensure the parts are well maintained and will last for years to come. So if you are looking to purchase Nuffield tractor spares in Storwood, or would like to find out more about our services for purchasing or collecting Nuffield tractor spares in Storwood, contact Tractor Spare Parts Ltd on 01335 310 538 or alternatively email info@tractorspareparts.co.uk. We look forward to hearing from you.
'''Module containing EMCReader class to parse .emc files''' from __future__ import print_function import os import sys import numpy as np try: import h5py HDF5_MODE = True except ImportError: HDF5_MODE = False class EMCReader(object): """EMC file reader Provides access to assembled or raw frames given a list of .emc filenames __init__ arguments: photons_list - Path or sequence of paths to emc files. If single file, pass as [fname] geom_list - Single or list of Detector objects. geom_mapping (list, optional) - Mapping from photons_list to geom_list If there is only one entry in geom_list, all emc files are assumed to use \ that detector. Otherwise, a mapping must be provided. \ The mapping is a list of the same length as photons_list with entries \ giving indices in geom_list for the corresponding emc file. Methods: get_frame(num, raw=False, sparse=False, zoomed=False, sym=False) get_powder(raw=False, zoomed=False, sym=False) """ def __init__(self, photons_list, geom_list, geom_mapping=None): if hasattr(photons_list, 'strip') or not hasattr(photons_list, '__getitem__'): photons_list = [photons_list] if not hasattr(geom_list, '__getitem__'): geom_list = [geom_list] self.flist = [{'fname': fname} for fname in photons_list] num_files = len(photons_list) self.multiple_geom = False if len(geom_list) == 1: for i in range(num_files): self.flist[i]['geom'] = geom_list[0] else: try: for i in range(num_files): self.flist[i]['geom'] = geom_list[geom_mapping[i]] self.multiple_geom = True except TypeError: print('Need mapping if multiple geometries are provided') raise self._parse_headers() @staticmethod def _test_h5file(fname): if HDF5_MODE: return h5py.is_hdf5(fname) if os.path.splitext(fname)[1] == '.h5': fheader = np.fromfile(fname, '=c', count=8) # Compare against the 8-byte HDF5 file signature (\x89HDF\r\n\x1a\n) if fheader.tobytes() == b'\x89HDF\r\n\x1a\n': return True return False def _parse_headers(self): for i, pdict in enumerate(self.flist): pdict['is_hdf5'] = self._test_h5file(pdict['fname']) if pdict['is_hdf5'] and not HDF5_MODE:
print('Unable to parse HDF5 dataset') raise IOError elif not pdict['is_hdf5']: self._parse_binaryheader(pdict) else: self._parse_h5header(pdict) if pdict['num_pix'] != len(pdict['geom'].x): sys.stderr.write( 'Warning: num_pix for %s is different (%d vs %d)\n' % (pdict['fname'], pdict['num_pix'], len(pdict['geom'].x))) if i > 0: pdict['num_data'] += self.flist[i-1]['num_data'] self.num_frames = self.flist[-1]['num_data'] @staticmethod def _parse_binaryheader(pdict): with open(pdict['fname'], 'rb') as fptr: num_data = np.fromfile(fptr, dtype='i4', count=1)[0] pdict['num_pix'] = np.fromfile(fptr, dtype='i4', count=1)[0] fptr.seek(1024, 0) ones = np.fromfile(fptr, dtype='i4', count=num_data) multi = np.fromfile(fptr, dtype='i4', count=num_data) pdict['num_data'] = num_data pdict['ones_accum'] = np.cumsum(ones) pdict['multi_accum'] = np.cumsum(multi) @staticmethod def _parse_h5header(pdict): with h5py.File(pdict['fname'], 'r') as fptr: pdict['num_data'] = fptr['place_ones'].shape[0] pdict['num_pix'] = np.prod(fptr['num_pix'][()]) def get_frame(self, num, **kwargs): """Get particular frame from file list The method determines the file with that frame number and reads it Arguments: num (int) - Frame number Keyword arguments: raw (bool) - Whether to get unassembled frame (False) sparse (bool) - Whether to return sparse data (False) zoomed (bool) - Whether to zoom assembled frame to non-masked region (False) sym (bool) - Whether to centro-symmetrize assembled frame (False) Returns: Assembled or unassembled frame as a dense array """ file_num = np.where(num < np.array([pdict['num_data'] for pdict in self.flist]))[0][0] #file_num = np.where(num < self.num_data_list)[0][0] if file_num == 0: frame_num = num else: frame_num = num - self.flist[file_num-1]['num_data'] return self._read_frame(file_num, frame_num, **kwargs) def get_powder(self, raw=False, **kwargs): """Get virtual powder sum of all frames in file list Keyword arguments: raw (bool) - Whether to return unassembled 
powder sum (False) zoomed (bool) - Whether to zoom assembled frame to non-masked region (False) sym (bool) - Whether to centro-symmetrize assembled frame (False) Returns: Assembled or unassembled powder sum as a dense array """ if self.multiple_geom: raise ValueError('Powder sum unreasonable with multiple geometries') powder = np.zeros((self.flist[0]['num_pix'],), dtype='f8') for pdict in self.flist: if pdict['is_hdf5']: with h5py.File(pdict['fname'], 'r') as fptr: place_ones = np.hstack(fptr['place_ones'][:]) place_multi = np.hstack(fptr['place_multi'][:]) count_multi = np.hstack(fptr['count_multi'][:]) else: with open(pdict['fname'], 'rb') as fptr: num_data = np.fromfile(fptr, dtype='i4', count=1)[0] fptr.seek(1024, 0) ones = np.fromfile(fptr, dtype='i4', count=num_data) multi = np.fromfile(fptr, dtype='i4', count=num_data) place_ones = np.fromfile(fptr, dtype='i4', count=ones.sum()) place_multi = np.fromfile(fptr, dtype='i4', count=multi.sum()) count_multi = np.fromfile(fptr, dtype='i4', count=multi.sum()) np.add.at(powder, place_ones, 1) np.add.at(powder, place_multi, count_multi) #powder *= self.flist[0]['geom'].unassembled_mask if not raw: powder = self.flist[0]['geom'].assemble_frame(powder, **kwargs) return powder def _read_frame(self, file_num, frame_num, raw=False, sparse=False, **kwargs): pdict = self.flist[file_num] if pdict['is_hdf5']: po, pm, cm = self._read_h5frame(pdict, frame_num) # pylint: disable=invalid-name else: po, pm, cm = self._read_binaryframe(pdict, frame_num) # pylint: disable=invalid-name if sparse: return po, pm, cm frame = np.zeros(pdict['num_pix'], dtype='i4') np.add.at(frame, po, 1) np.add.at(frame, pm, cm) #frame *= pdict['geom'].unassembled_mask if not raw: frame = pdict['geom'].assemble_frame(frame, **kwargs) return frame @staticmethod def _read_h5frame(pdict, frame_num): with h5py.File(pdict['fname'], 'r') as fptr: place_ones = fptr['place_ones'][frame_num] place_multi = fptr['place_multi'][frame_num] count_multi = 
fptr['count_multi'][frame_num] return place_ones, place_multi, count_multi @staticmethod def _read_binaryframe(pdict, frame_num): with open(pdict['fname'], 'rb') as fptr: num_data = np.fromfile(fptr, dtype='i4', count=1)[0] accum = [pdict['ones_accum'], pdict['multi_accum']] offset = [0, 0] size = [0, 0] if frame_num == 0: size = [accum[0][frame_num], accum[1][frame_num]] else: offset = [accum[0][frame_num-1], accum[1][frame_num-1]] size[0] = accum[0][frame_num] - accum[0][frame_num - 1] size[1] = accum[1][frame_num] - accum[1][frame_num - 1] fptr.seek(1024 + num_data*8 + offset[0]*4, 0) place_ones = np.fromfile(fptr, dtype='i4', count=size[0]) fptr.seek(1024 + num_data*8 + accum[0][-1]*4 + offset[1]*4, 0) place_multi = np.fromfile(fptr, dtype='i4', count=size[1]) fptr.seek(1024 + num_data*8 + accum[0][-1]*4 + accum[1][-1]*4 + offset[1]*4, 0) count_multi = np.fromfile(fptr, dtype='i4', count=size[1]) return place_ones, place_multi, count_multi
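The sparse-to-dense accumulation used in `_read_frame` and `get_powder` can be sketched standalone; the photon lists below are made up:

```python
import numpy as np

def assemble_dense(num_pix, place_ones, place_multi, count_multi):
    """Expand sparse photon lists into a dense per-pixel count array."""
    frame = np.zeros(num_pix, dtype='i4')
    # np.add.at performs unbuffered in-place addition, so repeated pixel
    # indices accumulate correctly (frame[place_ones] += 1 would count
    # a duplicated index only once).
    np.add.at(frame, place_ones, 1)
    np.add.at(frame, place_multi, count_multi)
    return frame


frame = assemble_dense(6, place_ones=[0, 2, 2], place_multi=[4], count_multi=[3])
# pixel 2 occurs twice in place_ones, so it ends up with 2 counts
```

This is why the module uses `np.add.at` rather than fancy-indexed assignment for both single frames and the powder sum.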
Visit us on your mobile device and order on the go! VIP Flowers proudly serves many Portland and Washington areas. VIP Flowers in Portland, OR provides flower delivery service to the following cities and zip codes in Oregon: Portland, Milwaukie, Oregon City, West Linn, Clackamas, Gladstone, Lake Oswego, Tigard, Aloha, Hillsboro, Vancouver WA, St Johns, Happy Valley, Clackamas, Gresham, Troutdale, Wood Village, Fairview 97003, 97007, 97006, 97015, 97027, 97030, 97035, 97045, 97060, 97068, 97070, 97080, 97086, 97201 97202, 97203, 97204, 97205, 97206, 97207, 97208, 97209, 97210, 97211, 97212 97213, 97214, 97215, 97216, 97217, 97218, 97219, 97220, 97221, 97222, 97223 97224, 97225, 97227, 97228, 97229, 97230, 97232, 97233, 97236, 97238, 97239 97242, 97256, 97258, 97266, 97267, 97268, 97269, 97271, 97280, 97281, 97282 97283, 97286, 97290, 97291, 97292, 97293, 97294, 97296, 97298, 97299, 98664.
from django import template
from django.conf import settings
from django.core.urlresolvers import reverse, NoReverseMatch

from radpress import settings as radpress_settings, get_version
from radpress.compat import get_user_model
from radpress.models import Article
from radpress.readers import get_markup_choices, get_reader, trim

register = template.Library()


@register.inclusion_tag('radpress/tags/datetime.html')
def radpress_datetime(datetime):
    """
    Time format that is compatible with html5.

    Arguments:
    - `datetime`: datetime.datetime
    """
    context = {'datetime': datetime}
    return context


@register.inclusion_tag('radpress/tags/widget_latest_posts.html')
def radpress_widget_latest_posts():
    """
    Retrieves latest posts.
    """
    limit = radpress_settings.LIMIT
    context = {
        'object_list': Article.objects.all_published()[:limit]
    }
    return context


@register.simple_tag
def radpress_static_url(path):
    """
    Builds Radpress static urls.
    """
    version = get_version()
    return '%sradpress/%s?ver=%s' % (settings.STATIC_URL, path, version)


@register.assignment_tag
def radpress_get_markup_descriptions():
    """
    Provides markup options. It is used for adding descriptions in admin and
    zen mode.

    :return: list
    """
    result = []
    for markup in get_markup_choices():
        markup_name = markup[0]
        result.append({
            'name': markup_name,
            'title': markup[1],
            'description': trim(get_reader(markup=markup_name).description)
        })
    return result


@register.filter
def radpress_full_name(user):
    if not isinstance(user, get_user_model()):
        full_name = ''
    else:
        full_name = user.get_full_name()
        if not full_name:
            full_name = user.username
    return full_name


@register.assignment_tag(takes_context=True)
def radpress_get_url(context, obj):
    return '%s%s' % (context['DOMAIN'], obj.get_absolute_url())


@register.assignment_tag
def radpress_zen_mode_url(entry):
    try:
        if not isinstance(entry, Article):
            url = reverse('radpress-zen-mode')
        else:
            url = reverse('radpress-zen-mode-update', args=[entry.pk])
    except NoReverseMatch:
        url = ''
    return url
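The `radpress_static_url` tag appends the package version as a query string so browsers re-fetch static files after an upgrade (cache busting). A minimal standalone sketch of that format string, with a hypothetical `STATIC_URL` prefix and version:

```python
def static_url(static_url_prefix, path, version):
    # Mirrors the tag's format string:
    # STATIC_URL + 'radpress/' + path + '?ver=' + version
    return '%sradpress/%s?ver=%s' % (static_url_prefix, path, version)

print(static_url('/static/', 'css/base.css', '0.1.4'))
# /static/radpress/css/base.css?ver=0.1.4
```

Because the version only changes on release, the URL stays stable (and cacheable) between upgrades but changes whenever the package does.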
When I took up writing this blog, I was prepared to get spam in the comments. I’ve deleted quite a few so far. But when I got one that said “Super post, Need to mark it on Digg”, I have to admit, I almost approved it by default. That seems like just a “nice article” compliment. But, being the person I am, and having experienced so many people trying to trick me, I looked at the site in the commenter’s URL. It was a site in a language I couldn’t read, but the format suggested that it was a sales site of some kind.

So the surprise? I Googled for the exact string (with quotes) “Super post, Need to mark it on Digg” and got over 16,000 results. This exact string appears in blog after blog. The name and URL change in the blogs I examined, but they always lead to some sales site. Well, it was a surprise to me.

Funny one today: my default.png file would not display. My application launched with a black screen zooming out instead. A quick look at another app that did work properly showed the only difference: the “D” was capitalized. A quick check and sure enough, it worked if the D was upper-case. That was a surprise to me, being so used to the case-insensitive file system on the Mac for so long.

DevBits06: Debugging with Instruments causes Bugs?

After a LOT of work going over code trying to find the problem, I was thinking it must be something device-specific because it hadn’t happened in the simulator at all. I looked through all my notes, I looked through posts online, nothing seemed to narrow it down. Now, with all the logging, I finally saw that the index was jumping by 2 rather than 1 at the wrong moments. That was sending the index beyond the limit. Now why? WHY? The only increment code was in an IBAction called by a button on the view, and it only incremented by 1.

In the end, I could hardly believe the answer: I was incrementing by 1 – TWICE. The IBAction was being called multiple times by MY rapid clicking (touching) on the button.
I thought it was cool that while the animation was transitioning between 2 views, the destination view was actually loaded and ready to be touched. It made everything faster. I was going through views at a rapid pace in my testing and thinking nothing of it. Every new touch would interrupt the animation and do whatever the touch would do. Most of the time, the offending view would disappear before the button could be pushed again. However, with Instruments running, things on the device slowed down just enough for the button to sometimes get pushed again and send the message to the IBAction a second time. The indexes incremented incorrectly as a result.

The first thought I had was “I shouldn’t use Touch-Up-Inside. I should send the event on Value-Changed or something”. That might have worked, but the button in question was a bar button item. It didn’t have the option to choose the event.

The solution: add some code to the program to prevent incrementing beyond the array bounds. The solution I probably should have used (and will use in the next version): add code to the IBAction to prevent it from executing more than once before the view disappears (and re-appears a little later). I’ll just add a member variable to the view controller that turns false/NO on entering the IBAction, and resets to true/YES in viewDidAppear.

The moral: do not assume that an IBAction can only be called once just because it changes the view or view controller. The system controls that.

Your downloaded certificate file doesn’t seem to contain a private key? You turned down the arrow in the Keychain Utility and there wasn’t anything there? I’ve had this happen twice and I’ve got the following I wrote down. I wish I had written something more detailed at the time, but here is this if it helps.

– The original private key for the request is held on the computer that generated the certificate request (CSR) file, so you must download the certificate on that same computer.
(And hopefully a crash or drive corruption hasn’t hurt that key in the meantime.) I’m not an expert on security, but I believe the idea is that the request (CSR) is actually a public key, or contains one, so when Apple creates the certificate, only you can decrypt it with the private key for the CSR.

– Create a different CSR for each certificate: one for development, one for distribution. I’m not exactly positive on this one. I’m wishing I wrote more, but I’m sure that doing this won’t hurt anything.

Remember: to be able to give free updates to your customers, you must have the original distribution private key. Back it up in places that will survive the death, destruction, disappearance and theft of the hard drive it is stored on.

Changed your bundle identifier and now Xcode won’t load the resulting app onto the iPhone or iPod Touch device?

– Try deleting the old version of the application from the device before loading/running the new one. This one frustrated me for longer than I care to think about. (It wasn’t really all that long, but my pride was hurt that I didn’t figure it out much more quickly.)

It appeared to be something wrong in the generated token stream, but the messages gave no real clue as to why this was happening. My first thought was that I had accidentally edited an SDK file. Special thanks to John Muchow for his post on iphonedevelopertips.com, which quickly led me to the correct answer. No doubt you saved me time and hair-pulling.
# -*- coding: utf-8 -*-
# Generated by Django 1.11.5 on 2018-01-28 22:56
from __future__ import unicode_literals

from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    initial = True

    dependencies = [
        ('users', '0001_initial'),
        ('songs', '0001_initial'),
    ]

    operations = [
        migrations.CreateModel(
            name='UserSongRecommendations',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('similarity', models.IntegerField(default=0)),
                ('iLike', models.BooleanField(default=False)),
                ('score', models.IntegerField(blank=True, null=True)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('song', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='songs.Song')),
                ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='users.User')),
            ],
        ),
        migrations.AlterUniqueTogether(
            name='usersongrecommendations',
            unique_together=set([('user', 'song')]),
        ),
    ]
Benzoin Essential Oil or Onycha - Tincture of Benzoin?

Benzoin essential oil or Onycha (Styrax benzoin) is an essence that is extracted from the gum resin. When it is mixed with alcohol it is referred to as tincture of benzoin. Benzoin tincture was used for over 200 years in hospitals prior to World War II. Benzoin is a “warm” oil that may support a healthy cardiovascular system, emotions and healthy skin.

As I receive new tips and testimonials for Benzoin Essential Oil they will be added to the page, so check back frequently! Onycha or Benzoin Oil is only contained in the Twelve Oils of Ancient Scripture Kit, but that collection can be purchased at The Oil Shop!

What is the Genus Species? Styrax benzoin. Its common names are Onycha, Benzoin, Styrax, Friar's Balsam, Benjamin Tree and Java Frankincense. When it is mixed with ethyl alcohol it is referred to as tincture of benzoin.

Where did the name come from? Onycha derives from the Hebrew "shechelet", related to an Arabic word meaning "husks of wheat or barley". Styrax means fragrant gum and benzoin means incense of Java.

What is the ORAC Value? Can't seem to find it. Please let me know if you do! Did you know that Benzoin tincture was used in hospitals to cleanse?

What is the Aromatic Effect on the Mind? Its spicy and hot aroma is soothing and calming.

What is the Spiritual and Emotional Influence? Benzoin essential oil was placed in the Holy Incense Oil given to Moses for making sure the area used for sacrifices was free of disease, cleansed and purified. Benzoin oil also can cleanse and purify the spirit. It helps to release the energy of suppression and feelings of being eliminated or annihilated. It promotes feelings of liberation, freedom and emancipation.

Can this Oil be used for Pets? Yes, but I would use Purification instead.

What are the Safety Precautions? First, only use therapeutic grade essential oils for best results! Benzoin essential oil is approved by the FDA as a Food Additive (FA) and Flavoring Agent (FL).
Although it may be used as a dietary supplement, it is advised not to use it in this manner for children under 6 years of age.

Did you know that Onycha oil is mentioned in the Bible one time directly and 54 times indirectly? Yes, it was an important oil of ancient times!

Did you know that Onycha smells like vanilla? Yes, it contains vanillin aldehyde, which gives vanilla its distinctive smell!

Want to Purchase Therapeutic Benzoin Oil?
#!/usr/bin/env python3
# Copyright 2019 The Imaging Source Europe GmbH
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This example will show you how to start a simple live stream
#

import time
import sys
import gi

gi.require_version("Tcam", "0.1")
gi.require_version("Gst", "1.0")

from gi.repository import Tcam, Gst


def main():
    Gst.init(sys.argv)  # init gstreamer

    # This line sets the gstreamer default logging level.
    # It can be removed in normal applications.
    # Gstreamer logging can contain very useful information
    # when debugging your application.
    # See https://gstreamer.freedesktop.org/documentation/tutorials/basic/debugging-tools.html
    # for further details
    Gst.debug_set_default_threshold(Gst.DebugLevel.WARNING)

    serial = None

    pipeline = Gst.parse_launch("tcambin name=bin "
                                " ! videoconvert"
                                " ! ximagesink")

    # retrieve the bin element from the pipeline
    camera = pipeline.get_by_name("bin")

    # if a serial is defined, make the source open that device
    if serial is not None:
        camera.set_property("serial", serial)

    pipeline.set_state(Gst.State.PLAYING)

    print("Press Ctrl-C to stop.")

    # We wait with this thread until a
    # KeyboardInterrupt in the form of a Ctrl-C
    # arrives. This will cause the pipeline
    # to be set to state NULL
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        pass
    finally:
        pipeline.set_state(Gst.State.NULL)


if __name__ == "__main__":
    main()
The meaning of „Karanga“ (Swahili) is „peanut“; it is also the name of a part of Moshi, Tanzania. In this idyllic place the parish of Reverend Father Wilibald Maningi is situated. The Karanga guest house offers all kinds of tourists an authentic accommodation for their holidays, so it is not only suitable for volunteers! The house is open for everybody who wants to visit the beautiful Kilimanjaro region around Moshi. The Karanga area is guarded at night, so you do not have to worry about your possessions or well-being. For your culinary needs, delicious food is served by the cook Radegunde Laurent and her charming kitchen girls. They will even cook „Wiener Schnitzel“ and other Austrian food for you if you desire – and of course they know almost every regional dish as well. Furthermore, Karanga accommodates a seminary for young priests, and the area has been upgraded with a newly built swimming pool to refresh you when the thermometer climbs to 30 degrees or more.
import time

from emotion import Controller
from emotion import log as elog
from emotion.controller import add_axis_method
from emotion.axis import AxisState
import pi_gcs
from emotion.comm import tcp
from emotion import event

"""
Emotion controller for ethernet PI E517 piezo controller.
Cyril Guilloud ESRF BLISS
Thu 13 Feb 2014 15:51:41
"""


class PI_E517(Controller):
    def __init__(self, name, config, axes, encoders):
        Controller.__init__(self, name, config, axes, encoders)
        self.host = self.config.get("host")

    def move_done_event_received(self, state):
        if self.auto_gate_enabled:
            if state is True:
                elog.info("PI_E517.py : movement is finished")
                self._set_gate(0)
                elog.debug("mvt finished, gate set to 0")
            else:
                elog.info("PI_E517.py : movement is starting")
                self._set_gate(1)
                elog.debug("mvt started, gate set to 1")

    def initialize(self):
        """
        Opens a single socket for all 3 axes.
        """
        self.sock = tcp.Socket(self.host, 50000)

    def finalize(self):
        """
        Closes the controller socket.
        """
        self.sock.close()

    def initialize_axis(self, axis):
        """
        - Reads specific config
        - Adds specific methods
        - Switches piezo to ONLINE mode so that axis motion can be caused by
          move commands.
        Args:
            - <axis>
        Returns:
            - None
        """
        axis.channel = axis.config.get("channel", int)
        axis.chan_letter = axis.config.get("chan_letter")

        add_axis_method(axis, self.get_id, types_info=(None, str))

        # Closed loop
        add_axis_method(axis, self.open_loop, types_info=(None, None))
        add_axis_method(axis, self.close_loop, types_info=(None, None))

        # DCO
        add_axis_method(axis, self.activate_dco, types_info=(None, None))
        add_axis_method(axis, self.desactivate_dco, types_info=(None, None))

        # GATE
        # to enable automatic gating (ex: zap)
        add_axis_method(axis, self.enable_auto_gate, types_info=(bool, None))
        # to trig gate from external device (ex: HPZ with setpoint controller)
        add_axis_method(axis, self.set_gate, types_info=(bool, None))

        if axis.channel == 1:
            self.gate_axis = axis
            self.ctrl_axis = axis

        # NO automatic gating by default.
        self.auto_gate_enabled = False

        # end of move event
        event.connect(axis, "move_done", self.move_done_event_received)

        # Enables the closed-loop.
        # self.sock.write("SVO 1 1\n")

        self.send_no_ans(axis, "ONL %d 1" % axis.channel)

        # VCO for velocity control mode ?
        # self.send_no_ans(axis, "VCO %d 1" % axis.channel)

        # Updates cached value of closed loop status.
        self.closed_loop = self._get_closed_loop_status(axis)

    def initialize_encoder(self, encoder):
        pass

    def read_position(self, axis,
                      last_read={"t": time.time(), "pos": [None, None, None]}):
        """
        Returns position's setpoint.
        Setpoint position is MOV? or VOL? or SVA? depending on whether
        closed-loop mode is ON or OFF.

        Args:
            - <axis> : emotion axis.
        Returns:
            - <position> : float : piezo position in Micro-meters or in Volts.
""" cache = last_read if time.time() - cache["t"] < 0.005: # print "en cache not meas %f" % time.time() _pos = cache["pos"] else: # print "PAS encache not meas %f" % time.time() _pos = self._get_target_pos(axis) cache["pos"] = _pos cache["t"] = time.time() elog.debug("position setpoint read : %r" % _pos) return _pos[axis.channel - 1] def read_encoder(self, encoder, last_read={"t": time.time(), "pos": [None, None, None]}): cache = last_read if time.time() - cache["t"] < 0.005: # print "encache meas %f" % time.time() _pos = cache["pos"] else: # print "PAS encache meas %f" % time.time() _pos = self._get_pos(axis) cache["pos"] = _pos cache["t"] = time.time() elog.debug("position measured read : %r" % _pos) return _pos[axis.channel - 1] def read_velocity(self, axis): """ Args: - <axis> : Emotion axis object. Returns: - <velocity> : float """ _ans = self.send(axis, "VEL? %s" % axis.chan_letter) # _ans should looks like "A=+0012.0000" # removes 'X=' prefix _velocity = float(_ans[2:]) elog.debug("read_velocity : %g " % _velocity) return _velocity def set_velocity(self, axis, new_velocity): self.send_no_ans(axis, "VEL %s %f" % (axis.chan_letter, new_velocity)) elog.debug("velocity set : %g" % new_velocity) return self.read_velocity(axis) def read_acceleration(self, axis): """Returns axis current acceleration in steps/sec2""" return 1 def set_acceleration(self, axis, new_acc): """Set axis acceleration given in steps/sec2""" pass def state(self, axis): # if self._get_closed_loop_status(axis): if self.closed_loop: # elog.debug("CLOSED-LOOP is active") if self._get_on_target_status(axis): return AxisState("READY") else: return AxisState("MOVING") else: elog.debug("CLOSED-LOOP is not active") return AxisState("READY") def prepare_move(self, motion): """ - TODO for multiple move... Args: - <motion> : Emotion motion object. Returns: - Raises: - ? """ pass def start_one(self, motion): """ - Sends 'MOV' or 'SVA' depending on closed loop mode. 
        Args:
            - <motion> : Emotion motion object.
        Returns:
            - None
        """
        if self.closed_loop:
            # Command in position.
            self.send_no_ans(motion.axis, "MOV %s %g" %
                             (motion.axis.chan_letter, motion.target_pos))
        else:
            # Command in voltage.
            self.send_no_ans(motion.axis, "SVA %s %g" %
                             (motion.axis.chan_letter, motion.target_pos))

    def stop(self, axis):
        """
        * HLT -> stop smoothly
        * STP -> stop asap
        * 24  -> stop asap
        * to check : copy of current position into target position ???
        """
        self.send_no_ans(axis, "HLT %s" % axis.chan_letter)
        # self.sock.write("STP\n")

    """
    Communication
    """
    def raw_write(self, cmd):
        self.send_no_ans(self.ctrl_axis, cmd)

    def raw_write_read(self, cmd):
        return self.send(self.ctrl_axis, cmd)

    def send(self, axis, cmd):
        """
        - Adds the 'newline' terminator character : "\\n"
        - Sends command <cmd> to the PI E517 controller.
        - Channel is defined in <cmd>.
        - <axis> is passed for debugging purposes.
        - Returns answer from controller.

        Args:
            - <axis> : passed for debugging purposes.
            - <cmd> : GCS command to send to controller
              (Channel is already mentioned in <cmd>).
        Returns:
            - 1-line answer received from the controller
              (without "\\n" terminator).
        """
        _cmd = cmd + "\n"
        _t0 = time.time()
        _ans = self.sock.write_readline(_cmd)
        _duration = time.time() - _t0
        if _duration > 0.005:
            elog.info("PI_E517.py : Received %r from Send %s (duration : %g ms) " %
                      (_ans, _cmd, _duration * 1000))
        return _ans

    def send_no_ans(self, axis, cmd):
        """
        - Adds the 'newline' terminator character : "\\n"
        - Sends command <cmd> to the PI E517 controller.
        - Channel is already defined in <cmd>.
        - <axis> is passed for debugging purposes.
        - Used for answer-less commands, thus returns nothing.
        """
        _cmd = cmd + "\n"
        self.sock.write(_cmd)

    """
    E517 specific
    """
    def _get_pos(self, axis):
        """
        Args:
            - <axis> :
        Returns:
            - <position>
        Returns real position (POS? command) read by capacitive sensor.

        Raises:
            ?
""" # _ans = self.send(axis, "POS? %s" % axis.chan_letter) # _pos = float(_ans[2:]) _ans = self.sock.write_readlines("POS?\n", 3) _pos = map(float, [x[2:] for x in _ans]) return _pos def _get_target_pos(self, axis): """ Returns last target position (MOV?/SVA?/VOL? command) (setpoint value). - SVA? : Query the commanded output voltage (voltage setpoint). - VOL? : Query the current output voltage (real voltage). - MOV? : Returns the last valid commanded target position. Args: - <> Returns: - Raises: ? """ if self.closed_loop: # _ans = self.send(axis, "MOV? %s" % axis.chan_letter) _ans = self.sock.write_readlines("MOV?\n", 3) else: # _ans = self.send(axis, "SVA? %s" % axis.chan_letter) _ans = self.sock.write_readlines("SVA?\n", 3) # _pos = float(_ans[2:]) _pos = map(float, [x[2:] for x in _ans]) return _pos def open_loop(self, axis): self.send_no_ans(axis, "SVO %s 0" % axis.chan_letter) def close_loop(self, axis): self.send_no_ans(axis, "SVO %s 1" % axis.chan_letter) """ DCO : Drift Compensation Offset. """ def activate_dco(self, axis): self.send_no_ans(axis, "DCO %s 1" % axis.chan_letter) def desactivate_dco(self, axis): self.send_no_ans(axis, "DCO %s 0" % axis.chan_letter) """ Voltage commands """ def _get_voltage(self, axis): """ Returns Voltage Of Output Signal Channel (VOL? command) """ _ans = self.send(axis, "VOL? %s" % axis.channel) _vol = float(_ans.split("=+")[-1]) return _vol def _get_closed_loop_status(self, axis): """ Returns Closed loop status (Servo state) (SVO? command) -> True/False """ _ans = self.send(axis, "SVO? %s" % axis.chan_letter) _status = float(_ans[2:]) if _status == 1: return True else: return False def _get_on_target_status(self, axis): """ Returns << On Target >> status (ONT? command). True/False """ _ans = self.send(axis, "ONT? 
%s" % axis.chan_letter) _status = float(_ans[2:]) if _status == 1: return True else: return False def enable_auto_gate(self, axis, value): if value: # auto gating self.auto_gate_enabled = True self.gate_axis = axis elog.info("PI_E517.py : enable_gate " + value + "fro axis.channel " + axis.channel) else: self.auto_gate_enabled = False # To keep external gating possible. self.gate_axis = 1 def set_gate(self, axis, state): """ Method to wrap '_set_gate' to be exported to device server. <axis> parameter is requiered. """ self.gate_axis = axis self._set_gate(state) def _set_gate(self, state): """ CTO [<TrigOutID> <CTOPam> <Value>]+ - <TrigOutID> : {1, 2, 3} - <CTOPam> : - 3: trigger mode - <Value> : {0, 2, 3, 4} - 0 : position distance - 2 : OnTarget - 3 : MinMaxThreshold <---- - 4 : Wave Generator - 5: min threshold - 6: max threshold - 7: polarity : 0 / 1 ex : CTO 1 3 3 1 5 0 1 6 100 1 7 1 Args: - <state> : True / False Returns: - Raises: ? """ if state: _cmd = "CTO %d 3 3 1 5 0 1 6 100 1 7 1" % (self.gate_axis.channel) else: _cmd = "CTO %d 3 3 1 5 0 1 6 100 1 7 0" % (self.gate_axis.channel) self.send_no_ans(self.gate_axis, _cmd) def get_id(self, axis): """ Returns Identification information (\*IDN? command). """ return self.send(axis, "*IDN?") def get_error(self, axis): _error_number = self.send(axis, "ERR?") _error_str = pi_gcs.get_error_str(_error_number) return (_error_number, _error_str) def get_info(self, axis): """ Returns a set of usefull information about controller. Helpful to tune the device. Args: <axis> : emotion axis Returns: None Raises: ? """ _infos = [ ("Identifier ", "*IDN?"), ("Serial Number ", "SSN?"), ("Com level ", "CCL?"), ("GCS Syntax version ", "CSV?"), ("Last error code ", "ERR?"), ("Real Position ", "POS? %s" % axis.chan_letter), ("Position low limit ", "NLM? %s" % axis.chan_letter), ("Position high limit ", "PLM? %s" % axis.chan_letter), ("Closed loop status ", "SVO? %s" % axis.chan_letter), ("Voltage output high limit ", "VMA? 
%s" % axis.channel), ("Voltage output low limit ", "VMI? %s" % axis.channel), ("Output Voltage ", "VOL? %s" % axis.channel), ("Setpoint Position ", "MOV? %s" % axis.chan_letter), ("Drift compensation Offset ", "DCO? %s" % axis.chan_letter), ("Online ", "ONL? %s" % axis.channel), ("On target ", "ONT? %s" % axis.chan_letter), ("ADC Value of input signal ", "TAD? %s" % axis.channel), ("Input Signal Position value", "TSP? %s" % axis.channel), ("Velocity control mode ", "VCO? %s" % axis.chan_letter), ("Velocity ", "VEL? %s" % axis.chan_letter), ("Osensor ", "SPA? %s 0x02000200" % axis.channel), ("Ksensor ", "SPA? %s 0x02000300" % axis.channel), ("Digital filter type ", "SPA? %s 0x05000000" % axis.channel), ("Digital filter Bandwidth ", "SPA? %s 0x05000001" % axis.channel), ("Digital filter order ", "SPA? %s 0x05000002" % axis.channel), ] _txt = "" for i in _infos: _txt = _txt + " %s %s\n" % \ (i[0], self.send(axis, i[1])) _txt = _txt + " %s \n%s\n" % \ ("Communication parameters", "\n".join(self.sock.write_readlines("IFC?\n", 6))) _txt = _txt + " %s \n%s\n" % \ ("Firmware version", "\n".join(self.sock.write_readlines("VER?\n", 3))) return _txt
This might be the only pack you'll ever need for short trips with minimal gear. The comfortable harness adapts to your body for a snug fit whether you are hiking, running or biking. Lots of pockets keep all your essentials close at hand. Inside there's a sleeve for your hydration reservoir. Lightweight 3D harness wraps around your body for a comfortable fit. Large main compartment has a wide zippered opening for easy access. An external zippered pocket and side stretch pockets keep you organized. Stretch loops let you attach trekking poles. Clip a blinky light on to the reflective loop for extra nighttime visibility.
import dateutil.parser
import collections
import re

import gogdb.core.model as model
from gogdb.core.normalization import normalize_system


def parse_datetime(date_str):
    if date_str is None:
        return None
    else:
        return dateutil.parser.isoparse(date_str)


IMAGE_RE = re.compile(r"\w{64}")

def extract_imageid(image_url):
    if image_url is None:
        return None
    m = IMAGE_RE.search(image_url)
    if m is None:
        return None
    else:
        return m.group(0)


def extract_properties_v0(prod, v0_cont):
    prod.id = v0_cont["id"]
    prod.access = 1
    prod.title = v0_cont["title"]
    prod.type = v0_cont["game_type"]
    prod.slug = v0_cont["slug"]

    prod.cs_systems = []
    for cs_name in ["windows", "osx", "linux"]:
        if v0_cont["content_system_compatibility"][cs_name]:
            prod.cs_systems.append(normalize_system(cs_name))
    prod.cs_systems.sort(reverse=True)

    prod.store_date = parse_datetime(v0_cont["release_date"])
    prod.is_in_development = v0_cont["in_development"]["active"]
    prod.is_pre_order = v0_cont["is_pre_order"]

    prod.image_logo = extract_imageid(v0_cont["images"]["logo"])
    prod.image_background = extract_imageid(v0_cont["images"]["background"])
    prod.image_icon = extract_imageid(v0_cont["images"]["sidebarIcon"])

    prod.link_forum = v0_cont["links"]["forum"]
    prod.link_store = v0_cont["links"]["product_card"]
    prod.link_support = v0_cont["links"]["support"]

    prod.screenshots = [x["image_id"] for x in v0_cont.get("screenshots", [])]
    prod.videos = [
        model.Video(
            video_url=v["video_url"],
            thumbnail_url=v["thumbnail_url"],
            provider=v["provider"]
        )
        for v in v0_cont.get("videos", [])
    ]
    if v0_cont["dlcs"]:
        prod.dlcs = [x["id"] for x in v0_cont["dlcs"]["products"]]
    prod.changelog = v0_cont["changelog"] or None

    def parse_file(file_cont):
        return model.File(
            id = str(file_cont["id"]),
            size = file_cont["size"],
            downlink = file_cont["downlink"]
        )

    def parse_bonusdls(bonus_cont):
        return [
            model.BonusDownload(
                id = str(dl["id"]),
                name = dl["name"],
                total_size = dl["total_size"],
                bonus_type = dl["type"],
                count = dl["count"],
                files = [parse_file(dlfile) for dlfile in dl["files"]]
            )
            for dl in bonus_cont
        ]
    prod.dl_bonus = parse_bonusdls(v0_cont["downloads"]["bonus_content"])

    def parse_softwaredls(software_cont):
        return [
            model.SoftwareDownload(
                id = dl["id"],
                name = dl["name"],
                total_size = dl["total_size"],
                os = normalize_system(dl["os"]),
                language = model.Language(dl["language"], dl["language_full"]),
                version = dl["version"],
                files = [parse_file(dlfile) for dlfile in dl["files"]]
            )
            for dl in software_cont
        ]
    prod.dl_installer = parse_softwaredls(v0_cont["downloads"]["installers"])
    prod.dl_langpack = parse_softwaredls(v0_cont["downloads"]["language_packs"])
    prod.dl_patch = parse_softwaredls(v0_cont["downloads"]["patches"])


PRODID_RE = re.compile(r"games/(\d+)")

def extract_prodid(apiv2_url):
    m = PRODID_RE.search(apiv2_url)
    return int(m.group(1))


def extract_properties_v2(prod, v2_cont):
    v2_embed = v2_cont["_embedded"]
    v2_links = v2_cont["_links"]

    prod.features = [
        model.Feature(id=x["id"], name=x["name"])
        for x in v2_embed["features"]
    ]

    localizations_map = collections.defaultdict(lambda: model.Localization())
    for loc in v2_embed["localizations"]:
        loc_embed = loc["_embedded"]
        localization = localizations_map[loc_embed["language"]["code"]]
        localization.code = loc_embed["language"]["code"]
        localization.name = loc_embed["language"]["name"]
        if loc_embed["localizationScope"]["type"] == "text":
            localization.text = True
        elif loc_embed["localizationScope"]["type"] == "audio":
            localization.audio = True
    prod.localizations = list(localizations_map.values())

    prod.tags = [
        model.Tag(id=x["id"], level=x["level"], name=x["name"], slug=x["slug"])
        for x in v2_embed["tags"]
    ]

    prod.comp_systems = [
        normalize_system(support_entry["operatingSystem"]["name"])
        for support_entry in v2_embed["supportedOperatingSystems"]
    ]
    prod.comp_systems.sort(reverse=True)

    prod.is_using_dosbox = v2_cont["isUsingDosBox"]
    prod.developers = [x["name"] for x in v2_embed["developers"]]
    prod.publisher = v2_embed["publisher"]["name"]
    prod.copyright = v2_cont["copyrights"] or None
    prod.global_date = parse_datetime(v2_embed["product"].get("globalReleaseDate"))

    if "galaxyBackgroundImage" in v2_links:
        prod.image_galaxy_background = extract_imageid(
            v2_links["galaxyBackgroundImage"]["href"])
    prod.image_boxart = extract_imageid(v2_links["boxArtImage"]["href"])
    prod.image_icon_square = extract_imageid(v2_links["iconSquare"]["href"])

    prod.editions = [
        model.Edition(
            id=ed["id"], name=ed["name"], has_product_card=ed["hasProductCard"]
        )
        for ed in v2_embed["editions"]
    ]

    prod.includes_games = [
        extract_prodid(link["href"])
        for link in v2_links.get("includesGames", [])
    ]
    prod.is_included_in = [
        extract_prodid(link["href"])
        for link in v2_links.get("isIncludedInGames", [])
    ]
    prod.required_by = [
        extract_prodid(link["href"])
        for link in v2_links.get("isRequiredByGames", [])
    ]
    prod.requires = [
        extract_prodid(link["href"])
        for link in v2_links.get("requiresGames", [])
    ]

    if v2_embed["series"]:
        prod.series = model.Series(
            id=v2_embed["series"]["id"], name=v2_embed["series"]["name"])

    prod.description = v2_cont["description"]


META_ID_RE = re.compile(r"v2/meta/.{2}/.{2}/(\w+)")

def extract_metaid(meta_url):
    m = META_ID_RE.search(meta_url)
    if m is None:
        return None
    else:
        return m.group(1)


def extract_builds(prod, build_cont, system):
    for build in prod.builds:
        # Mark all builds as unlisted to relist them later
        if build.os == system:
            build.listed = False

    for build_item in build_cont["items"]:
        build_id = int(build_item["build_id"])
        # Find existing build based on id and set `build` to it
        for existing_build in prod.builds:
            if existing_build.id == build_id:
                build = existing_build
                break
        else:
            # No existing build found
            build = model.Build()
            prod.builds.append(build)

        build.id = build_id
        build.product_id = int(build_item["product_id"])
        build.os = build_item["os"]
        build.branch = build_item["branch"]
        build.version = build_item["version_name"] or None
        build.tags = build_item["tags"]
        build.public = build_item["public"]
        build.date_published = parse_datetime(build_item["date_published"])
        build.generation = build_item["generation"]
        build.legacy_build_id = build_item.get("legacy_build_id")
        build.meta_id = extract_metaid(build_item["link"])
        build.link = build_item["link"]
        build.listed = True

    prod.builds.sort(key=lambda b: b.date_published)
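The URL-extraction helpers above can be exercised standalone. The sample URLs below are made up for illustration; only the URL shapes (a `games/<id>` segment, a `v2/meta/xx/yy/<hash>` path) come from the regexes themselves:

```python
import re

# Same patterns as in the module
PRODID_RE = re.compile(r"games/(\d+)")
META_ID_RE = re.compile(r"v2/meta/.{2}/.{2}/(\w+)")

def extract_prodid(apiv2_url):
    # Pull the numeric product id out of a v2 games URL
    return int(PRODID_RE.search(apiv2_url).group(1))

def extract_metaid(meta_url):
    # Meta id is the last path segment after the two 2-char shards
    m = META_ID_RE.search(meta_url)
    return m.group(1) if m else None

print(extract_prodid("https://api.gog.com/v2/games/1207658924"))  # 1207658924
print(extract_metaid("https://cdn.gog.com/content-system/v2/meta/ab/cd/abcdef123456"))
# abcdef123456
```

Because `extract_metaid` returns `None` on a non-matching URL while `extract_prodid` assumes a match, the latter raises `AttributeError` on malformed input, matching the original code's behavior.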
import abc
import time

import redis


class Task:
    redisTaskOngoing = "task-meta/ongoing"
    rclient = redis.StrictRedis(host="localhost", port=6379, db=0)
    task_sleep = 0

    def __init__(self, name, logger, mail, retryFail=False, pipeline_key=[]):
        self.taskName = name
        self.mail = mail
        self.pipeline_key = pipeline_key
        self.retryFail = retryFail
        self.rclient = redis.StrictRedis(host="localhost", port=6379, db=0)
        self.redisTaskPopKey = "task-pending/" + name
        self.redisTaskPushKey = "task-finish/" + name
        self.redisTaskFailKey = "task-fail/" + name
        self.logger = logger
        self.task_sleep = 0

    def demon(self, time_sleep=10):
        while True:
            try:
                self.go()
            except Exception, e:
                self.mail.send_timed(600, self.taskName + " exception", str(e))
            finally:
                time.sleep(time_sleep)

    def getInitQueueBySetNames(self, set_names):
        ret = []
        for name in set_names:
            # iterate per set name (the original passed the whole list to smembers)
            ret.extend(self.rclient.smembers(name))
        return ret

    @abc.abstractmethod
    def getInitQueue(self):
        pass

    @abc.abstractmethod
    def taskOperation(self, hash):
        pass

    def init(self):
        pass

    def _singleTask(self, hash_id):
        try:
            r = self.taskOperation(hash_id)
            if r is not None:
                self.rclient.smove(self.redisTaskPopKey, self.redisTaskPushKey, hash_id)
                if r:
                    for key in self.pipeline_key:
                        self.rclient.sadd(key, hash_id)
        except Exception, e:
            self.rclient.smove(self.redisTaskPopKey, self.redisTaskFailKey, hash_id)
            self.logger.warning("not success on:" + hash_id + ":" + str(e))

    def _initTaskQueue(self):
        if not self.rclient.sismember(self.redisTaskOngoing, self.taskName):
            list_hash = self.getInitQueue()
            for hash_id in list_hash:
                self.rclient.sadd(self.redisTaskPopKey, hash_id)
                self.logger.info("add person to pending:" + hash_id)
            self.logger.info("task added")
            self.rclient.sadd(self.redisTaskOngoing, self.taskName)
        else:
            self.logger.info("task already exists, try to resume")
            if self.retryFail:
                list_hash = self.rclient.smembers(self.redisTaskFailKey)
                for hash_id in list_hash:
                    self.rclient.smove(self.redisTaskFailKey, self.redisTaskPopKey, hash_id)
                    self.logger.info("add person to pending:" + hash_id)
                self.logger.info("task added")

    def _summary(self):
        fail = self.rclient.scard(self.redisTaskFailKey)
        succ = self.rclient.scard(self.redisTaskPushKey)
        total = fail + succ
        self.logger.info("task finished:%d, fail:%d", total, fail)

    def _valid(self, hash_id):
        try:
            if self.filter_out(hash_id):
                self.rclient.smove(self.redisTaskPopKey, self.redisTaskFailKey, hash_id)
                self.logger.warning("filtered:" + hash_id + ":")
                return False
            return True
        except Exception, e:
            self.rclient.smove(self.redisTaskPopKey, self.redisTaskFailKey, hash_id)
            self.logger.warning("not success on filter:" + hash_id + ":" + str(e))
            return False

    def filter_out(self, hash_id):
        return False

    def go(self):
        self.init()
        self._initTaskQueue()
        while True:
            hash_id = self.rclient.srandmember(self.redisTaskPopKey)
            if hash_id is not None:
                if self._valid(hash_id):
                    self._singleTask(hash_id)
            else:
                break
        self._summary()
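The Redis set choreography above (pending moves to finish on success, to fail on error, via `smove`) can be sketched with plain Python sets, no Redis server required. The function below is illustrative only; its names mirror the class but it is not part of the original code:

```python
def run_once(pending, finish, fail, operation):
    """Drain `pending`, moving each id to `finish` or `fail`,
    mimicking the smove-based flow of Task.go()/_singleTask()."""
    while pending:
        hash_id = next(iter(pending))  # stand-in for srandmember
        try:
            operation(hash_id)
            pending.discard(hash_id)
            finish.add(hash_id)        # like smove pending -> finish
        except Exception:
            pending.discard(hash_id)
            fail.add(hash_id)          # like smove pending -> fail
    return len(finish), len(fail)

pending = {"a", "b", "c"}
finish, fail = set(), set()

def op(h):
    if h == "b":
        raise ValueError("boom")

assert run_once(pending, finish, fail, op) == (2, 1)
assert fail == {"b"}
```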
NPE designed, built, installed and commissioned an effective spray evaporation system for wastewater from the RNO process plant in an environmentally sensitive area. This project was completed for our client First Quantum Minerals at Ravensthorpe in Western Australia. The scope included monitoring and logging of drift-control data, including wind direction, wind speed, humidity and system run time; a unique, robust system designed to restrict overspray onto nearby vegetation; and ongoing operational and maintenance support.
#!/usr/bin/python

check_version = '0714'

from threading import Thread
import sys
import traceback
import os
import numpy as np
import time
#import datetime
import logging

mainPath = r'/opt/stationtest'
mainDataPath = r'/localhome/stationtest'
observationsPath = r'/opt/lofar/var/run'
beamletPath = r'/localhome/data/Beamlets'
libPath = os.path.join(mainPath, 'lib')
sys.path.insert(0, libPath)
confPath = os.path.join(mainDataPath, 'config')
logPath = os.path.join(mainDataPath, 'log')
rtsmPath = os.path.join(mainDataPath, 'rtsm_data')

from general_lib import *
from lofar_lib import *
from search_lib import *
from data_lib import *

os.umask(001)
os.nice(15)

# make path if not exists
if not os.access(logPath, os.F_OK):
    os.mkdir(logPath)
if not os.access(rtsmPath, os.F_OK):
    os.mkdir(rtsmPath)

logger = None


def lbaMode(mode):
    if mode in (1, 2, 3, 4):
        return (True)
    return (False)


def lbaLowMode(mode):
    if mode in (1, 2):
        return (True)
    return (False)


def lbaHighMode(mode):
    if mode in (3, 4):
        return (True)
    return (False)


def hbaMode(mode):
    if mode in (5, 6, 7):
        return (True)
    return (False)


def checkStr(key):
    checks = dict({'OSC': "Oscillation", 'HN': "High-noise", 'LN': "Low-noise", 'J': "Jitter",
                   'SN': "Summator-noise", 'CR': "Cable-reflection", 'M': "Modem-failure",
                   'DOWN': "Antenna-fallen", 'SHIFT': "Shifted-band"})
    return (checks.get(key, 'Unknown'))


def printHelp():
    print "----------------------------------------------------------------------------"
    print "Usage of arguments"
    print
    print "Set logging level, can be: debug|info|warning|error"
    print "-ls=debug : print all information on screen, default=info"
    print "-lf=info  : print debug|warning|error information to log file, default=debug"
    print
    print "----------------------------------------------------------------------------"


def getArguments():
    args = dict()
    key = ''
    value = '-'
    for arg in sys.argv[1:]:
        if arg[0] == '-':
            opt = arg[1:].upper()
            valpos = opt.find('=')
            if valpos != -1:
                key, value = opt.strip().split('=')
            else:
                key, value = opt, '-'
            if key in ('H', 'LS', 'LF'):
                if value != '-':
                    args[key] = value
                else:
                    args[key] = '-'
            else:
                sys.exit("Unknown key %s" % (key))
    return (args)


# get and unpack configuration file
class cConfiguration:
    def __init__(self):
        self.conf = dict()
        full_filename = os.path.join(confPath, 'checkHardware.conf')
        f = open(full_filename, 'r')
        data = f.readlines()
        f.close()
        for line in data:
            if line[0] in ('#', '\n', ' '):
                continue
            if line.find('#') > 0:
                line = line[:line.find('#')]
            try:
                key, value = line.strip().split('=')
                key = key.replace('_', '-')
                self.conf[key] = value
            except:
                print "Not a valid configuration setting: %s" % (line)

    def getInt(self, key, default=0):
        return (int(self.conf.get(key, str(default))))

    def getFloat(self, key, default=0.0):
        return (float(self.conf.get(key, str(default))))

    def getStr(self, key):
        return (self.conf.get(key, ''))


# setup default python logging system
# logstream for screen output
# filestream for program log file
def init_logging(args):
    log_levels = {'DEBUG': logging.DEBUG,
                  'INFO': logging.INFO,
                  'WARNING': logging.WARNING,
                  'ERROR': logging.ERROR}
    try:
        screen_log_level = args.get('LS', 'INFO')
        file_log_level = args.get('LF', 'DEBUG')
    except:
        print "Not a legal log level, try again"
        sys.exit(-1)

    station = getHostName()

    # create logger
    _logger = logging.getLogger()
    _logger.setLevel(logging.DEBUG)

    # create file handler
    filename = '%s_rtsm.log' % (getHostName())
    full_filename = os.path.join(logPath, filename)
    file_handler = logging.FileHandler(full_filename, mode='w')
    formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
    file_handler.setFormatter(formatter)
    file_handler.setLevel(log_levels[file_log_level])
    _logger.addHandler(file_handler)

    if (len(_logger.handlers) == 1) and ('LS' in args):
        # create console handler
        stream_handler = logging.StreamHandler()
        fmt = '%s %%(levelname)-8s %%(message)s' % (station)
        formatter = logging.Formatter(fmt)
        stream_handler.setFormatter(formatter)
        stream_handler.setLevel(log_levels[screen_log_level])
        _logger.addHandler(stream_handler)
    return (_logger)


def getRcuMode(n_rcus):
    # RCU[ 0].control=0x10337a9c => ON, mode:3, delay=28, att=06
    rcumode = -1
    rcu_info = {}
    answer = rspctl("--rcu")
    if answer.count('mode:') == n_rcus:
        for line in answer.splitlines():
            if line.find('mode:') == -1:
                continue
            rcu = line[line.find('[') + 1: line.find(']')].strip()
            state = line[line.find('=>') + 2: line.find(',')].strip()
            mode = line[line.find('mode:') + 5]
            if rcu.isdigit() and state in ("OFF", "ON") and mode.isdigit():
                rcu_info[int(rcu)] = (state, int(mode))
        for mode in range(8):
            mode_cnt = answer.count("mode:%d" % (mode))
            if mode == 0:
                if mode_cnt == n_rcus:
                    logger.debug("Not observing")
            elif mode_cnt > (n_rcus / 3) and answer.count("mode:0") == (n_rcus - mode_cnt):
                logger.debug("Now observing in rcumode %d" % (mode))
                rcumode = mode
    return (rcumode, rcu_info)


def getAntPol(rcumode, rcu):
    pol_str = ('X', 'Y')
    ant = rcu / 2
    if rcumode == 1:
        pol_str = ('Y', 'X')
        ant += 48
    pol = pol_str[rcu % 2]
    return (ant, pol)


class CSV:
    station = ""
    obs_id = ""
    filename = ""
    rcu_mode = 0
    record_timestamp = 0

    @staticmethod
    def setObsID(obs_id):
        CSV.station = getHostName()
        CSV.obs_id = obs_id
        CSV.filename = "%s_%s_open.dat" % (CSV.station, CSV.obs_id)
        CSV.rcu_mode = 0
        CSV.rec_timestamp = 0
        CSV.writeHeader()
        return

    @staticmethod
    def setRcuMode(rcumode):
        CSV.rcu_mode = rcumode
        return

    @staticmethod
    def setRecordTimestamp(timestamp):
        CSV.record_timestamp = timestamp
        return

    @staticmethod
    def writeHeader():
        full_filename = os.path.join(rtsmPath, CSV.filename)
        # write only if new file
        if not os.path.exists(full_filename):
            f = open(full_filename, 'w')
            f.write('# SPECTRA-INFO=rcu,rcumode,obs-id,check,startfreq,stopfreq,rec-timestamp\n')
            f.write('#\n')
            f.flush()
            f.close()
        return

    @staticmethod
    def writeSpectra(data, rcu, check):
        #dumpTime = time.gmtime(CSV.record_timestamp)
        #date_str = time.strftime("%Y%m%d", dumpTime)
        full_filename = os.path.join(rtsmPath, CSV.filename)
        logger.debug("start dumping data to %s" % (full_filename))
        f = open(full_filename, 'a')
        if CSV.rcu_mode in (1, 2, 3, 4):
            freq = (0, 100)
        elif CSV.rcu_mode in (5,):
            freq = (100, 200)
        elif CSV.rcu_mode in (6,):
            freq = (160, 240)
        elif CSV.rcu_mode in (7,):
            freq = (200, 300)
        spectra_info = "SPECTRA-INFO=%d,%d,%s,%s,%d,%d,%f\n" %\
                       (rcu, CSV.rcu_mode, CSV.obs_id, check, freq[0], freq[1], CSV.record_timestamp)

        mean_spectra = "MEAN-SPECTRA=["
        for i in np.nan_to_num(data.getMeanSpectra(rcu % 2)):
            mean_spectra += "%3.1f " % (i)
        mean_spectra += "]\n"

        bad_spectra = "BAD-SPECTRA=["
        for i in np.nan_to_num(data.getSpectra(rcu)):
            bad_spectra += "%3.1f " % (i)
        bad_spectra += "]\n\n"

        f.write(spectra_info)
        f.write(mean_spectra)
        f.write(bad_spectra)
        f.close()
        return

    @staticmethod
    def writeInfo(start_time, stop_time, obsid_samples):
        full_filename = os.path.join(rtsmPath, CSV.filename)
        logger.debug("add obs_info to %s" % (full_filename))
        f = open(full_filename, 'a')
        f.write('# OBS-ID-INFO=obs_id,start_time,stop_time,obsid_samples\n')
        f.write('OBS-ID-INFO=%s,%5.3f,%5.3f,%d\n\n' % (CSV.obs_id, start_time, stop_time, obsid_samples))
        f.flush()
        f.close()
        return

    @staticmethod
    def closeFile():
        full_filename = os.path.join(rtsmPath, CSV.filename)
        filename_new = CSV.filename.replace('open', 'closed')
        full_filename_new = os.path.join(rtsmPath, filename_new)
        logger.debug("rename file from %s to %s" % (full_filename, full_filename_new))
        os.rename(full_filename, full_filename_new)
        CSV.obs_id = ""
        CSV.filename = ""
        return


def checkForOscillation(data, rcumode, error_list, delta):
    logger.debug("start oscillation check")
    for pol_nr, pol in enumerate(('X', 'Y')):
        #test_data = data.getAll()[:,:1,:]
        result = search_oscillation(data, pol, delta)
        if len(result) > 1:
            # get mean values from all rcu's (rcu = -1)
            bin_nr, ref_max_sum, ref_n_peaks, ref_rcu_low = result[0]
            #rcu, max_sum, n_peaks, rcu_low = sorted(result[1:], reverse=True)[0]
            if len(result) == 2:
                bin_nr, max_sum, n_peaks, rcu_low = result[1]
            else:
                ref_low = result[0][3]
                max_low_rcu = (-1, -1)
                max_sum_rcu = (-1, -1)
                for i in result[1:]:
                    bin_nr, max_sum, n_peaks, rcu_low = i
                    if max_sum > max_sum_rcu[0]:
                        max_sum_rcu = (max_sum, bin_nr)
                    if (rcu_low - ref_low) > max_low_rcu[0]:
                        max_low_rcu = (rcu_low, bin_nr)
                rcu_low, bin_nr = max_low_rcu

            rcu = (bin_nr * 2) + pol_nr
            ant, pol = getAntPol(rcumode, rcu)
            if lbaMode(rcumode):
                logger.info("Mode-%d RCU-%03d Ant-%03d %c Oscillation, sum=%3.1f(%3.1f) peaks=%d(%d) low=%3.1fdB(%3.1f) (=ref)" %\
                            (rcumode, rcu, ant, pol, max_sum, ref_max_sum, n_peaks, ref_n_peaks, rcu_low, ref_rcu_low))
                if rcu not in error_list:
                    error_list.append(rcu)
                    CSV.writeSpectra(data, rcu, "OSC")
            if hbaMode(rcumode):
                if ((max_sum > 5000.0) or (n_peaks > 40)):
                    logger.info("Mode-%d RCU-%03d Tile-%02d %c Oscillation, sum=%3.1f(%3.1f) peaks=%d(%d) low=%3.1fdB(%3.1f) ref=()" %\
                                (rcumode, rcu, ant, pol, max_sum, ref_max_sum, n_peaks, ref_n_peaks, rcu_low, ref_rcu_low))
                    if rcu not in error_list:
                        error_list.append(rcu)
                        CSV.writeSpectra(data, rcu, "OSC")
    return


def checkForNoise(data, rcumode, error_list, low_deviation, high_deviation, max_diff):
    logger.debug("start noise check")
    for pol_nr, pol in enumerate(('X', 'Y')):
        low_noise, high_noise, jitter = search_noise(data, pol, low_deviation, high_deviation * 1.5, max_diff)

        for err in high_noise:
            bin_nr, val, bad_secs, ref, diff = err
            rcu = (bin_nr * 2) + pol_nr
            ant, pol = getAntPol(rcumode, rcu)
            if lbaMode(rcumode):
                logger.info("Mode-%d RCU-%03d Ant-%03d %c High-noise, value=%3.1fdB bad=%d(%d) limit=%3.1fdB diff=%3.1fdB" %\
                            (rcumode, rcu, ant, pol, val, bad_secs, data.frames, ref, diff))
                if rcu not in error_list:
                    error_list.append(rcu)
                    CSV.writeSpectra(data, rcu, "HN")
            if hbaMode(rcumode):
                logger.info("Mode-%d RCU-%03d Tile-%02d %c High-noise, value=%3.1fdB bad=%d(%d) limit=%3.1fdB diff=%3.1fdB" %\
                            (rcumode, rcu, ant, pol, val, bad_secs, data.frames, ref, diff))
                if rcu not in error_list:
                    error_list.append(rcu)
                    CSV.writeSpectra(data, rcu, "HN")

        for err in low_noise:
            bin_nr, val, bad_secs, ref, diff = err
            rcu = (bin_nr * 2) + pol_nr
            ant, pol = getAntPol(rcumode, rcu)
            if lbaMode(rcumode):
                logger.info("Mode-%d RCU-%03d Ant-%03d %c Low-noise, value=%3.1fdB bad=%d(%d) limit=%3.1fdB diff=%3.1fdB" %\
                            (rcumode, rcu, ant, pol, val, bad_secs, data.frames, ref, diff))
                if rcu not in error_list:
                    error_list.append(rcu)
                    CSV.writeSpectra(data, rcu, "LN")
            if hbaMode(rcumode):
                logger.info("Mode-%d RCU-%03d Tile-%02d %c Low-noise, value=%3.1fdB bad=%d(%d) limit=%3.1fdB diff=%3.1fdB" %\
                            (rcumode, rcu, ant, pol, val, bad_secs, data.frames, ref, diff))
                if rcu not in error_list:
                    error_list.append(rcu)
                    CSV.writeSpectra(data, rcu, "LN")
    return


def checkForSummatorNoise(data, rcumode, error_list):
    logger.debug("start summator-noise check")
    for pol_nr, pol in enumerate(('X', 'Y')):
        # sn=SummatorNoise  cr=CableReflections
        sn, cr = search_summator_noise(data=data, pol=pol, min_peak=2.0)
        for msg in sn:
            bin_nr, peaks, max_peaks = msg
            rcu = (bin_nr * 2) + pol_nr
            tile, pol = getAntPol(rcumode, rcu)
            logger.info("Mode-%d RCU-%03d Tile-%02d %c Summator-noise, cnt=%d peaks=%d" %\
                        (rcumode, rcu, tile, pol, peaks, max_peaks))
            if rcu not in error_list:
                error_list.append(rcu)
                CSV.writeSpectra(data, rcu, "SN")
        for msg in cr:
            bin_nr, peaks, max_peaks = msg
            rcu = (bin_nr * 2) + pol_nr
            tile, pol = getAntPol(rcumode, rcu)
            logger.info("Mode-%d RCU-%03d Tile-%02d %c Cable-reflections, cnt=%d peaks=%d" %\
                        (rcumode, rcu, tile, pol, peaks, max_peaks))
            #if rcu not in error_list:
            #    error_list.append(rcu)
            #    CSV.writeSpectra(data, rcu, "CR")
    return


def checkForDown(data, rcumode, error_list, subband):
    logger.debug("start down check")
    down, shifted = searchDown(data, subband)
    for msg in down:
        ant, max_x_sb, max_y_sb, mean_max_sb = msg
        rcu = ant * 2
        max_x_offset = max_x_sb - mean_max_sb
        max_y_offset = max_y_sb - mean_max_sb
        ant, pol = getAntPol(rcumode, rcu)
        logger.info("Mode-%d RCU-%02d/%02d Ant-%02d Down, x-offset=%d y-offset=%d" %\
                    (rcumode, rcu, (rcu + 1), ant, max_x_offset, max_y_offset))
        if rcu not in error_list:
            error_list.append(rcu)
            error_list.append(rcu + 1)
            CSV.writeSpectra(data, rcu, "DOWN")
            CSV.writeSpectra(data, rcu + 1, "DOWN")
    return


def checkForFlat(data, rcumode, error_list):
    logger.debug("start flat check")
    flat = searchFlat(data)
    for msg in flat:
        rcu, mean_val = msg
        ant, pol = getAntPol(rcumode, rcu)
        logger.info("Mode-%d RCU-%02d Ant-%02d Flat, value=%5.1fdB" %\
                    (rcumode, rcu, ant, mean_val))
        if rcu not in error_list:
            error_list.append(rcu)
            CSV.writeSpectra(data, rcu, "FLAT")
    return


def checkForShort(data, rcumode, error_list):
    logger.debug("start short check")
    short = searchShort(data)
    for msg in short:
        rcu, mean_val = msg
        ant, pol = getAntPol(rcumode, rcu)
        logger.info("Mode-%d RCU-%02d Ant-%02d Short, value=%5.1fdB" %\
                    (rcumode, rcu, ant, mean_val))
        if rcu not in error_list:
            error_list.append(rcu)
            CSV.writeSpectra(data, rcu, "SHORT")
    return


def closeAllOpenFiles():
    files = os.listdir(rtsmPath)
    for filename in files:
        if filename.find('open') > -1:
            full_filename = os.path.join(rtsmPath, filename)
            filename_new = filename.replace('open', 'closed')
            full_filename_new = os.path.join(rtsmPath, filename_new)
            os.rename(full_filename, full_filename_new)
    return


class cDayInfo:
    def __init__(self):
        self.date = time.strftime("%Y%m%d", time.gmtime(time.time()))
        self.filename = "%s_%s_dayinfo.dat" % (getHostName(), self.date)
        self.samples = [0, 0, 0, 0, 0, 0, 0]  # RCU-mode 1..7
        self.obs_info = list()
        self.deleteOldDays()
        self.readFile()

    def addSample(self, rcumode=-1):
        date = time.strftime("%Y%m%d", time.gmtime(time.time()))
        # new day, reset data and set new filename
        if self.date != date:
            self.date = date
            self.reset()
        if rcumode in range(1, 8, 1):
            self.samples[rcumode - 1] += 1
        self.writeFile()

    def addObsInfo(self, obs_id, start_time, stop_time, rcu_mode, samples):
        self.obs_info.append([obs_id, start_time, stop_time, rcu_mode, samples])

    def reset(self):
        self.filename = "%s_%s_dayinfo.dat" % (getHostName(), self.date)
        self.samples = [0, 0, 0, 0, 0, 0, 0]  # RCU-mode 1..7
        self.obs_info = list()
        self.deleteOldDays()

    # after a restart, earlier data is imported
    def readFile(self):
        full_filename = os.path.join(rtsmPath, self.filename)
        if os.path.exists(full_filename):
            f = open(full_filename, 'r')
            lines = f.readlines()
            f.close()
            for line in lines:
                if len(line.strip()) == 0 or line.strip()[0] == '#':
                    continue
                key, data = line.split('=')
                if key == 'DAY-INFO':
                    self.samples = [int(i) for i in data.split(',')[1:]]
                if key == 'OBSID-INFO':
                    d = data.split(',')
                    self.obs_info.append([d[0], float(d[1]), float(d[2]), int(d[3]), int(d[4])])

    # rewrite file every sample
    def writeFile(self):
        full_filename = os.path.join(rtsmPath, self.filename)
        f = open(full_filename, 'w')
        f.write('#DAY-INFO date,M1,M2,M3,M4,M5,M6,M7\n')
        f.write('DAY-INFO=%s,%d,%d,%d,%d,%d,%d,%d\n' %\
                (self.date, self.samples[0], self.samples[1], self.samples[2], self.samples[3],
                 self.samples[4], self.samples[5], self.samples[6]))
        f.write('\n#OBS-ID-INFO obs_id, start_time, stop_time, rcu_mode, samples\n')
        for i in self.obs_info:
            f.write('OBS-ID-INFO=%s,%5.3f,%5.3f,%d,%d\n' %\
                    (i[0], i[1], i[2], i[3], i[4]))
        f.close()

    def deleteOldDays(self):
        files = os.listdir(rtsmPath)
        backup = True
        for filename in files:
            if filename.find('closed') != -1:
                backup = False
        if backup == True:
            for filename in files:
                if filename.find('dayinfo') != -1:
                    if filename.split('.')[0].split('_')[1] != self.date:
                        full_filename = os.path.join(rtsmPath, filename)
                        os.remove(full_filename)


def getObsId():
    #obs_start_str = ""
    #obs_stop_str = ""
    #obs_start_time = 0.0
    #obs_stop_time = 0.0
    obsids = ""
    answer = sendCmd('swlevel')
    if answer.find("ObsID") > -1:
        s1 = answer.find("ObsID:") + 6
        s2 = answer.find("]")
        obsids = answer[s1:s2].strip().split()
    return (obsids)


def getObsIdInfo(obsid):
    filename = "Observation%s" % (obsid.strip())
    fullfilename = os.path.join(observationsPath, filename)
    f = open(fullfilename, 'r')
    obsinfo = f.read()
    f.close()

    m1 = obsinfo.find("Observation.startTime")
    m2 = obsinfo.find("\n", m1)
    obs_start_str = obsinfo[m1:m2].split("=")[1].strip()
    obs_start_time = time.mktime(time.strptime(obs_start_str, "%Y-%m-%d %H:%M:%S"))

    m1 = obsinfo.find("Observation.stopTime", m2)
    m2 = obsinfo.find("\n", m1)
    obs_stop_str = obsinfo[m1:m2].split("=")[1].strip()
    obs_stop_time = time.mktime(time.strptime(obs_stop_str, "%Y-%m-%d %H:%M:%S"))

    logger.debug("obsid %s  %s .. %s" % (obsid, obs_start_str, obs_stop_str))
    return (obsid, obs_start_time, obs_stop_time)


class RecordBeamletStatistics(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.running = False
        self.reset()

    def reset(self):
        self.dump_dir = ''
        self.obsid = ''
        self.duration = 0

    def set_obsid(self, obsid):
        self.dump_dir = os.path.join(beamletPath, obsid)
        try:
            os.mkdir(self.dump_dir)
        except:
            pass
        self.obsid = obsid

    def set_duration(self, duration):
        self.duration = duration

    def is_running(self):
        return self.running

    def kill_recording(self):
        if self.running:
            logger.debug("kill recording beamlet statistics")
            sendCmd(cmd='pkill', args='rspctl')
            logger.debug("recording killed")
            #self.running = False
            #self.make_plots()

    def make_plots(self):
        if self.obsid:
            try:
                response = sendCmd(cmd='/home/fallows/inspect_bsts.bash', args=self.obsid)
                logger.debug('response "inspect.bsts.bash" = {%s}' % response)
            except:
                logger.debug('exception while running "inspect.bsts.bash"')
            self.reset()

    def run(self):
        if self.duration:
            self.running = True
            logger.debug("start recording beamlet statistics for %d seconds" % self.duration)
            rspctl('--statistics=beamlet --duration=%d --integration=1 --directory=%s' %\
                   (self.duration, self.dump_dir))
            logger.debug("recording done")
            self.make_plots()
            self.running = False


def main():
    global logger

    obs_id = ""
    active_obs_id = ""
    rcumode = 0
    #station = getHostName()

    DI = cDayInfo()

    args = getArguments()
    if args.has_key('H'):
        printHelp()
        sys.exit()

    logger = init_logging(args)
    init_lofar_lib()
    init_data_lib()

    conf = cConfiguration()
    #StID = getHostName()

    logger.info('== Start rtsm (Real Time Station Monitor) ==')

    removeAllDataFiles()

    # Read in RemoteStation.conf
    ID, nRSP, nTBB, nLBL, nLBH, nHBA, HBA_SPLIT = readStationConfig()
    n_rcus = nRSP * 8

    data = cRCUdata(n_rcus)

    obs_start_time = 0
    obs_stop_time = 0
    obsid_samples = 0
    beamlet_recording = RecordBeamletStatistics()

    while True:
        try:
            # get active obsid from swlevel
            obsids = getObsId()
            time_now = time.time()

            # stop if no more obsids or observation is stopped
            if obs_stop_time > 0.0:
                if active_obs_id not in obsids or len(obsids) == 0 or time_now > obs_stop_time:
                    logger.debug("save obs_id %s" % (obs_id))
                    DI.addObsInfo(obs_id, obs_start_time, obs_stop_time, rcumode, obsid_samples)
                    DI.writeFile()
                    CSV.writeInfo(obs_start_time, obs_stop_time, obsid_samples)
                    CSV.closeFile()
                    active_obs_id = ""
                    obs_start_time = 0.0
                    obs_stop_time = 0.0
                    # if still running kill recording
                    if beamlet_recording:
                        if beamlet_recording.is_running():
                            beamlet_recording.kill_recording()
                        beamlet_recording = 0

            # if no active observation get obs info if obsid available
            if active_obs_id == "":
                # if still running kill recording
                if beamlet_recording:
                    if beamlet_recording.is_running():
                        beamlet_recording.kill_recording()
                    beamlet_recording = 0
                for id in obsids:
                    obsid, start, stop = getObsIdInfo(id)
                    if time_now >= (start - 60.0) and (time_now + 15) < stop:
                        active_obs_id = obsid
                        obs_start_time = start
                        obs_stop_time = stop
                        break

            if time_now < obs_start_time:
                logger.debug("waiting %d seconds for start of observation" % (int(obs_start_time - time_now)))
                time.sleep((obs_start_time - time_now) + 1.0)

            # start recording beamlets
            if not beamlet_recording:
                if obs_start_time > 0.0 and time.time() >= obs_start_time:
                    duration = obs_stop_time - time.time() - 10
                    if duration > 2:
                        beamlet_recording = RecordBeamletStatistics()
                        beamlet_recording.set_obsid(active_obs_id)
                        beamlet_recording.set_duration(duration)
                        beamlet_recording.start()

            check_start = time.time()

            # if new obs_id save data and reset settings
            if obs_id != active_obs_id:
                # start new file and set new obsid
                obs_id = active_obs_id
                obsid_samples = 0
                CSV.setObsID(obs_id)

            # it takes about 11 seconds to record data, for safety use 15
            if (time.time() + 15.0) < obs_stop_time:
                # observing, so check mode now
                rcumode, rcu_info = getRcuMode(n_rcus)
                if rcumode <= 0:
                    continue
                active_rcus = []
                for rcu in rcu_info:
                    state, mode = rcu_info[rcu]
                    if state == 'ON':
                        active_rcus.append(rcu)
                data.setActiveRcus(active_rcus)

                rec_timestamp = time.time() + 3.0
                data.record(rec_time=1, read=True, slow=True)
                #data.fetch()

                CSV.setRcuMode(rcumode)
                CSV.setRecordTimestamp(rec_timestamp)
                DI.addSample(rcumode)
                obsid_samples += 1
                logger.debug("do tests")

                mask = extractSelectStr(conf.getStr('mask-rcumode-%d' % (rcumode)))
                data.setMask(mask)
                if len(mask) > 0:
                    logger.debug("mask=%s" % (str(mask)))

                error_list = []
                # do LBA tests
                if lbaMode(rcumode):
                    checkForDown(data, rcumode, error_list, conf.getInt('lbh-test-sb', 301))
                    checkForShort(data, rcumode, error_list)
                    checkForFlat(data, rcumode, error_list)
                    checkForOscillation(data, rcumode, error_list, 6.0)
                    checkForNoise(data, rcumode, error_list,
                                  conf.getFloat('lba-noise-min-deviation', -3.0),
                                  conf.getFloat('lba-noise-max-deviation', 2.5),
                                  conf.getFloat('lba-noise-max-difference', 1.5))
                # do HBA tests
                if hbaMode(rcumode):
                    checkForOscillation(data, rcumode, error_list, 9.0)
                    checkForSummatorNoise(data, rcumode, error_list)
                    checkForNoise(data, rcumode, error_list,
                                  conf.getFloat('hba-noise-min-deviation', -3.0),
                                  conf.getFloat('hba-noise-max-deviation', 2.5),
                                  conf.getFloat('hba-noise-max-difference', 2.0))
            else:
                closeAllOpenFiles()

            if active_obs_id == "":
                # if not observing check every 30 seconds for observation start
                sleeptime = 30.0
                logger.debug("no observation, sleep %1.0f seconds" % (sleeptime))
            else:
                # if observing do check every 60 seconds
                check_stop = time.time()
                sleeptime = 60.0 - (check_stop - check_start)
                logger.debug("sleep %1.0f seconds till next check" % (sleeptime))
            while sleeptime > 0.0:
                wait = min(1.0, sleeptime)
                sleeptime -= wait
                time.sleep(wait)

        except KeyboardInterrupt:
            logger.info("stopped by user")
            sys.exit()
        except:
            logger.error('Caught %s', str(sys.exc_info()[0]))
            logger.error(str(sys.exc_info()[1]))
            logger.error('TRACEBACK:\n%s', traceback.format_exc())
            logger.error('Aborting NOW')
            sys.exit(0)

    # do test and write result files to log directory
    log_dir = conf.getStr('log-dir-local')
    if os.path.exists(log_dir):
        logger.info("write result data")
        # write result
    else:
        logger.warn("not a valid log directory")
    logger.info("Test ready.")

    # if still running kill recording
    if beamlet_recording:
        if beamlet_recording.is_running():
            beamlet_recording.kill_recording()
        beamlet_recording = 0

    # delete files from data directory
    removeAllDataFiles()
    sys.exit(0)


if __name__ == '__main__':
    main()
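The `rspctl --rcu` output parsed by getRcuMode has the shape shown in its inline comment. A standalone sketch of that find/slice logic, using the sample line from the comment (field values are illustrative):

```python
def parse_rcu_line(line):
    """Extract (rcu_number, state, mode) from one `rspctl --rcu` output
    line, using the same find/slice approach as getRcuMode()."""
    rcu = line[line.find('[') + 1: line.find(']')].strip()
    state = line[line.find('=>') + 2: line.find(',')].strip()
    mode = line[line.find('mode:') + 5]
    if rcu.isdigit() and state in ("OFF", "ON") and mode.isdigit():
        return int(rcu), state, int(mode)
    return None

sample = "RCU[ 0].control=0x10337a9c => ON, mode:3, delay=28, att=06"
assert parse_rcu_line(sample) == (0, "ON", 3)
```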
Haven’t seen the film myself, so can’t comment, but I have read both the book and the script on which Water for Elephants – the new film starring Robert Pattinson and Reese Witherspoon – is based, and can I just say, I think it’s a very solid, enjoyable yarn. I’m already predicting it might be one of my favorite films of the year – its story seems to encompass everything I require in a yarn. The film – and source material – tells of a veterinary student, played by Pattinson, who abandons his studies after his parents are killed and joins a traveling circus as their vet. Two different sources confirmed for me today that the film is undergoing reshoots this month. The first scene to be reshot “involves a 9-month old baby (blue-eyed) who is going to playing with a small dog – a Jack Russell, I believe”, I was informed earlier today. So… um, they’re adding cute stuff? Or a pre-credits sequence? Young Robert Pattinson? Whatever the case, I’m fairly confident this is gonna be a must-watch flick. The reshoots begin January 14. If true, I have a different idea about what shot that could be, but will leave it to your own imagination. While we haven’t heard much from Mr. Lawrence during post-production, he has been wonderful at popping in at the right times and keeping fans in the loop. If any more reshoot info unfolds, we’ll keep you posted. 100 more days until April 22nd! THEY BETTER NOT CUT HIS HAIR AGAIN. It’s already too short as it is for BD. I don’t think they involve him, Hon. Rob couldn’t be on two sets at the same time, and he’s committed to BD until end of March. He’s going to be packing up and heading to BC shortly.
Doh. My bad. I was wondering why I didn’t see any mention of him being on set. Surprise ending?? Not the one that Sara wrote?? It had better be a good one……. Seriously! This is what is getting me through this winter! This….and Rob coming to Toronto in May!!! Will Robert be in the reshoot? He is working on Breaking Dawn and has longer hair. Of course he could wear a wig, but can he get away? Would love more of him in the movie. I wonder if it’ll be in Piru, CA again. That’s where “Big Top” was before. Does anybody know any details? Reshoots. Oscar 2012. 100 more days! Oh my! I’m excited. (Lions, tigers and bears, oh my!) Dorothy girl, do your thing. Click those heels three times and make those days fly by.
# ===============================================================================
# Copyright 2013 Jake Ross
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ===============================================================================

# ============= enthought library imports =======================
from __future__ import absolute_import
from pyface.tasks.task_layout import TaskLayout, PaneItem
from traits.api import Instance
# ============= standard library imports ========================
# ============= local library imports  ==========================
from pychron.dashboard.tasks.server.panes import DashboardDevicePane, DashboardCentralPane
from pychron.dashboard.server import DashboardServer
from pychron.envisage.tasks.base_task import BaseTask


class DashboardServerTask(BaseTask):
    name = 'Dashboard Server'
    server = Instance(DashboardServer)

    def activated(self):
        self.server.activate()

    def create_central_pane(self):
        return DashboardCentralPane(model=self.server)

    def create_dock_panes(self):
        panes = [DashboardDevicePane(model=self.server)]
        return panes

    def _default_layout_default(self):
        return TaskLayout(left=PaneItem('pychron.dashboard.devices'))

# ============= EOF =============================================
The glute bridge is an easy-to-perform glute-isolation exercise, similar to a hip thrust; incidentally, both exercises are easily progressed by adding weight. For further reading on the differences, see Glute Bridge vs Hip Thrusts. Start by lying flat on a mat with your knees bent and feet firmly on the floor. Keep your hands by your sides and drive through your heels, bringing your hips as far off the floor as possible. Squeeze your glutes at the top, pause, and then lower yourself back down through the same controlled movement. Go for reps. Progress your glute bridge by resting a barbell across your hips and taking the additional weight through the same movement. To further progress this move, consider moving on to hip thrusts. While doing the exercise is far better than not doing it (in most cases), it should be noted that sloppy form on any exercise is not recommended. If you perform your glute bridges with incorrect form, you may recruit other muscle groups during the exercise, or poorly engage the intended muscle groups, which will hinder your gains at best and result in injury at worst. In addition to your glute bridges, you should also consider the bigger picture: how many reps and sets you perform of each exercise depends entirely on where you are physically and, of course, on your desired outcomes. Check out more bum exercises and, of course, be sure to track your lifts.
#!/usr/bin/env python
#
#  Author:  Jorg Bornschein <bornschein@fias.uni-frankfurt.de>
#  License: Academic Free License (AFL) v3.0
#

from __future__ import division

import sys
sys.path.insert(0, "lib/")

import numpy as np
import tables
from optparse import OptionParser
from scipy.signal import convolve2d

from pulp.utils.autotable import AutoTable
from pulp.utils.datalog import dlog
import pulp.utils.parallel as parallel

#from viz import *


def DoG(sigma_pos, sigma_neg, size):
    """ Difference of gaussians kernel of (size, size)-shape.

    The kernel is constructed to be mean free and to have a peak
    amplitude of 1.
    """
    s2 = size // 2
    gy, gx = np.ogrid[-s2:size-s2, -s2:size-s2]

    G1 = np.exp(-(gx*gx+gy*gy) / (2.*sigma_pos**2)) / (2*np.pi*sigma_pos**2)
    G2 = np.exp(-(gx*gx+gy*gy) / (2.*sigma_neg**2)) / (2*np.pi*sigma_neg**2)

    G2 = G2 / G2.sum() * G1.sum()  # make DC free
    G = G1 - G2                    # combine positive and negative Gaussians
    G = G / G.max()                # normalize peak to 1.
    return G

#=============================================================================
if __name__ == "__main__":
    parser = OptionParser(usage="Usage: %prog [options] <patches.h5>")
    parser.add_option("--mf", dest="mf", action="store_true",
                      help="make each patch individually mean-free")
    parser.add_option("--norm", dest="norm", action="store_true",
                      help="normalize each patch to [-1 .. 1]")
    parser.add_option("--varnorm", dest="varnorm", action="store_true",
                      help="normalize each patch to variance 1")
    parser.add_option("-n", "--num-patches", type="int", dest="num_patches",
                      default=None, help="number of patches to generate")
    options, args = parser.parse_args()

    if len(args) != 1:
        parser.print_help()
        exit(1)

    # Open input file
    in_fname = args[0]
    in_h5 = tables.openFile(in_fname, "r")
    in_patches = in_h5.root.patches
    in_oversized = in_h5.root.oversized

    # Some asserts on the input data
    assert in_patches.shape[0] == in_oversized.shape[0]    # number of patches
    assert in_patches.shape[1] == in_patches.shape[2]      # square patches
    assert in_oversized.shape[1] == in_oversized.shape[2]  # square oversized

    # Number of patches to extract
    N_patches = in_patches.shape[0]
    if options.num_patches is not None:
        N_patches = min(N_patches, options.num_patches)

    # Size of the patches
    size = in_patches.shape[1]
    oversize = in_oversized.shape[1]

    # Output file name
    out_fname = "patches-%d-dog" % size
    if options.mf:
        out_fname += "-mf"
    if options.norm:
        out_fname += "-norm"
    if options.varnorm:
        out_fname += "-varnorm"

    #
    print "Input file  : %s" % in_fname
    print "Output file : %s" % out_fname
    print "# of patches: %d" % N_patches
    print "Patch size  : %d x %d" % (size, size)

    # Create output file
    tbl_out = AutoTable(out_fname+".h5")

    # Size magic
    left = (oversize // 2) - (size // 2)
    right = left + size

    #============================================================
    # Start to do some real work
    batch_size = 1000
    dog = DoG(1., 3., 9)
    for n in xrange(0, N_patches):
        if n % batch_size == 0:
            dlog.progress("Preprocessing...", n/N_patches)

        P = in_oversized[n, :, :]
        P_ = convolve2d(P, dog, 'same')
        P_ = P_[left:right, left:right]

        # Normalize and mean-free
        if options.mf:
            P_ -= P_.mean()
        if options.norm:
            P_max = max(P_.max(), -P_.min())
            P_ /= (P_max+1e-5)
        if options.varnorm:
            P_var = np.var(P_)
            P_ /= (np.sqrt(P_var)+1e-5)

        tbl_out.append("patches", P_)

    in_h5.close()
    tbl_out.close()
    exit(0)
    #============================================================
    # Safe debug-output
    # NOTE: unreachable (comes after the exit(0) above) and relies on the
    # commented-out `from viz import *` plus names (U, D) not defined here.
    zoom = 6

    grid = U.transpose().reshape((D, size, size))
    img = tiled_gfs(grid, sym_cm=False, global_cm=True)
    img = img.resize((zoom*img.size[0], zoom*img.size[1]))
    img.save(out_fname+"-components.png")

    grid = P[:100, :, :]
    img = tiled_gfs(grid, sym_cm=False, global_cm=False)
    img = img.resize((zoom*img.size[0], zoom*img.size[1]))
    img.save(out_fname+"-orig.png")

    grid = P_[:100, :, :]
    img = tiled_gfs(grid, sym_cm=True, global_cm=False)
    img = img.resize((zoom*img.size[0], zoom*img.size[1]))
    img.save(out_fname+"-patches.png")
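The `DoG` function in the preprocessing script above builds a difference-of-Gaussians kernel that is mean-free (the wide Gaussian is rescaled to carry the same total mass as the narrow one before subtraction) and peak-normalized to 1. A small self-contained sketch of the same construction, checking both properties; the `1/(2πσ²)` prefactors of the original are dropped here, since the mass rescaling and peak normalization cancel them anyway:

```python
import numpy as np

def dog_kernel(sigma_pos, sigma_neg, size):
    # Difference-of-Gaussians kernel, built the same way as DoG() above:
    # subtract a wide Gaussian (rescaled to equal total mass) from a narrow
    # one, so the kernel sums to ~0, then normalize the peak to 1.
    s2 = size // 2
    gy, gx = np.ogrid[-s2:size - s2, -s2:size - s2]
    g1 = np.exp(-(gx * gx + gy * gy) / (2.0 * sigma_pos ** 2))
    g2 = np.exp(-(gx * gx + gy * gy) / (2.0 * sigma_neg ** 2))
    g2 = g2 / g2.sum() * g1.sum()   # match total mass -> mean-free result
    g = g1 - g2
    return g / g.max()              # peak amplitude 1

k = dog_kernel(1.0, 3.0, 9)         # same parameters the script uses
print(abs(k.sum()) < 1e-10)         # mean-free (DC-free)
print(k.max() == 1.0)               # peak normalized to 1
```

Applied with `scipy.signal.convolve2d(patch, k, 'same')`, this acts as a band-pass filter: it removes the patch's DC component and suppresses low-frequency illumination gradients while keeping edge structure.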
Dr. Towne earned a Bachelor of Science degree in Microbiology from the University of Arizona. She then moved to Colorado, where she earned a Doctor of Philosophy degree in Immunology from the University of Colorado. She worked as a post-doctoral research fellow at National Jewish Health, then as an Assistant Professor at Regis University, teaching both undergraduate and graduate students. Dr. Towne has a sweet and adventurous daughter, who keeps her on her toes, and a young son, the newest addition to the family. In their spare time, she and her husband love to plan their weekends around family-based activities such as hiking and Colorado Rapids soccer games. A little-known fact about her is that she was an accountant for three years before graduate school.
from __future__ import unicode_literals from dvc.utils.compat import str, open import os import errno class System(object): @staticmethod def is_unix(): return os.name != "nt" @staticmethod def hardlink(source, link_name): import ctypes from dvc.exceptions import DvcException if System.is_unix(): try: os.link(source, link_name) return except Exception as exc: raise DvcException("link", cause=exc) CreateHardLink = ctypes.windll.kernel32.CreateHardLinkW CreateHardLink.argtypes = [ ctypes.c_wchar_p, ctypes.c_wchar_p, ctypes.c_void_p, ] CreateHardLink.restype = ctypes.wintypes.BOOL res = CreateHardLink(link_name, source, None) if res == 0: raise DvcException("CreateHardLinkW", cause=ctypes.WinError()) @staticmethod def symlink(source, link_name): import ctypes from dvc.exceptions import DvcException if System.is_unix(): try: os.symlink(source, link_name) return except Exception as exc: msg = "failed to symlink '{}' -> '{}': {}" raise DvcException(msg.format(source, link_name, str(exc))) flags = 0 if source is not None and os.path.isdir(source): flags = 1 func = ctypes.windll.kernel32.CreateSymbolicLinkW func.argtypes = (ctypes.c_wchar_p, ctypes.c_wchar_p, ctypes.c_uint32) func.restype = ctypes.c_ubyte if func(link_name, source, flags) == 0: raise DvcException("CreateSymbolicLinkW", cause=ctypes.WinError()) @staticmethod def _reflink_darwin(src, dst): import ctypes import dvc.logger as logger LIBC = "libc.dylib" LIBC_FALLBACK = "/usr/lib/libSystem.dylib" try: clib = ctypes.CDLL(LIBC) except OSError as exc: logger.debug( "unable to access '{}' (errno '{}'). 
" "Falling back to '{}'.".format(LIBC, exc.errno, LIBC_FALLBACK) ) if exc.errno != errno.ENOENT: raise # NOTE: trying to bypass System Integrity Protection (SIP) clib = ctypes.CDLL(LIBC_FALLBACK) if not hasattr(clib, "clonefile"): return -1 clonefile = clib.clonefile clonefile.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_int] clonefile.restype = ctypes.c_int return clonefile( ctypes.c_char_p(src.encode("utf-8")), ctypes.c_char_p(dst.encode("utf-8")), ctypes.c_int(0), ) @staticmethod def _reflink_windows(src, dst): return -1 @staticmethod def _reflink_linux(src, dst): import os import fcntl FICLONE = 0x40049409 s = open(src, "r") d = open(dst, "w+") try: ret = fcntl.ioctl(d.fileno(), FICLONE, s.fileno()) except IOError: s.close() d.close() os.unlink(dst) raise s.close() d.close() if ret != 0: os.unlink(dst) return ret @staticmethod def reflink(source, link_name): import platform from dvc.exceptions import DvcException system = platform.system() try: if system == "Windows": ret = System._reflink_windows(source, link_name) elif system == "Darwin": ret = System._reflink_darwin(source, link_name) elif system == "Linux": ret = System._reflink_linux(source, link_name) else: ret = -1 except IOError: ret = -1 if ret != 0: raise DvcException("reflink is not supported") @staticmethod def getdirinfo(path): import ctypes from ctypes import c_void_p, c_wchar_p, Structure, WinError, POINTER from ctypes.wintypes import DWORD, HANDLE, BOOL # NOTE: use this flag to open symlink itself and not the target # See https://docs.microsoft.com/en-us/windows/desktop/api/ # fileapi/nf-fileapi-createfilew#symbolic-link-behavior FILE_FLAG_OPEN_REPARSE_POINT = 0x00200000 FILE_FLAG_BACKUP_SEMANTICS = 0x02000000 FILE_SHARE_READ = 0x00000001 OPEN_EXISTING = 3 class FILETIME(Structure): _fields_ = [("dwLowDateTime", DWORD), ("dwHighDateTime", DWORD)] class BY_HANDLE_FILE_INFORMATION(Structure): _fields_ = [ ("dwFileAttributes", DWORD), ("ftCreationTime", FILETIME), ("ftLastAccessTime", 
FILETIME), ("ftLastWriteTime", FILETIME), ("dwVolumeSerialNumber", DWORD), ("nFileSizeHigh", DWORD), ("nFileSizeLow", DWORD), ("nNumberOfLinks", DWORD), ("nFileIndexHigh", DWORD), ("nFileIndexLow", DWORD), ] flags = FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OPEN_REPARSE_POINT func = ctypes.windll.kernel32.CreateFileW func.argtypes = [ c_wchar_p, DWORD, DWORD, c_void_p, DWORD, DWORD, HANDLE, ] func.restype = HANDLE hfile = func( path, 0, FILE_SHARE_READ, None, OPEN_EXISTING, flags, None ) if hfile is None: raise WinError() func = ctypes.windll.kernel32.GetFileInformationByHandle func.argtypes = [HANDLE, POINTER(BY_HANDLE_FILE_INFORMATION)] func.restype = BOOL info = BY_HANDLE_FILE_INFORMATION() rv = func(hfile, info) func = ctypes.windll.kernel32.CloseHandle func.argtypes = [HANDLE] func.restype = BOOL func(hfile) if rv == 0: raise WinError() return info @staticmethod def inode(path): if System.is_unix(): import ctypes inode = os.lstat(path).st_ino # NOTE: See https://bugs.python.org/issue29619 and # https://stackoverflow.com/questions/34643289/ # pythons-os-stat-is-returning-wrong-inode-value inode = ctypes.c_ulong(inode).value else: # getdirinfo from ntfsutils works on both files and dirs info = System.getdirinfo(path) inode = abs( hash( ( info.dwVolumeSerialNumber, info.nFileIndexHigh, info.nFileIndexLow, ) ) ) assert inode >= 0 assert inode < 2 ** 64 return inode @staticmethod def _wait_for_input_windows(timeout): import sys import ctypes import msvcrt from ctypes.wintypes import DWORD, HANDLE # https://docs.microsoft.com/en-us/windows/desktop/api/synchapi/nf-synchapi-waitforsingleobject WAIT_OBJECT_0 = 0 WAIT_TIMEOUT = 0x00000102 func = ctypes.windll.kernel32.WaitForSingleObject func.argtypes = [HANDLE, DWORD] func.restype = DWORD rc = func(msvcrt.get_osfhandle(sys.stdin.fileno()), timeout * 1000) if rc not in [WAIT_OBJECT_0, WAIT_TIMEOUT]: raise RuntimeError(rc) @staticmethod def _wait_for_input_posix(timeout): import sys import select try: 
select.select([sys.stdin], [], [], timeout) except select.error: pass @staticmethod def wait_for_input(timeout): if System.is_unix(): return System._wait_for_input_posix(timeout) else: return System._wait_for_input_windows(timeout) @staticmethod def is_symlink(path): if System.is_unix(): return os.path.islink(path) # https://docs.microsoft.com/en-us/windows/desktop/fileio/ # file-attribute-constants FILE_ATTRIBUTE_REPARSE_POINT = 0x400 if os.path.lexists(path): info = System.getdirinfo(path) return info.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT return False @staticmethod def is_hardlink(path): if System.is_unix(): return os.stat(path).st_nlink > 1 info = System.getdirinfo(path) return info.nNumberOfLinks > 1
Casino bonuses are an effective way of encouraging players. Moreover, the best casino bonuses give the player a chance to win and beat the house. The player receives an online casino bonus after crediting their casino account. To receive a free casino bonus, the player doesn’t need to credit money. Deposit casino bonuses are much more popular than free casino bonuses and significantly exceed them in size. Refer-a-friend bonus. All you need to do is send your friend an email with a link to the casino. After your friend visits the casino website, opens an account, and makes a deposit, you will be given a refer-a-friend online casino bonus. Bonus for making a deposit via a certain payment method. To receive this type of bonus, you need to credit your account via a certain payment method, such as bank check, Neteller, Fire Pay or IGM checks. A free casino bonus is extra money for playing at the casino that is credited to a newly registered casino account. This is the best casino bonus to receive because you don’t need to make a deposit. You can spend this money on trying out all the casino games you want and getting your first impressions. Free casino bonuses vary from $5 to $25. This amount is enough to weigh your chances. Free spins are extra spins that you receive with or without a deposit. A sign-up casino bonus is a bonus that is credited to the player’s account after making a deposit. A VIP bonus is the largest amount of money that the casino grants its loyal clients. A cashback bonus is a certain part of the deposit that is returned to the player’s account in case of a loss. Moreover, online casinos often encourage their clients through attractive special casino offers. The best online casinos’ offers are extremely beneficial for players. We recommend following the latest news on special online casino offers and bonuses on the casino’s website.
import time from datetime import date from django.core.cache import get_cache from ella.core.cache.utils import normalize_key from hashlib import md5 from test_ella.cases import RedisTestCase as TestCase from django.test.client import RequestFactory from django.contrib.sites.models import Site from django.contrib.contenttypes.models import ContentType from ella.core.cache import utils, redis from ella.core.models import Listing, Publishable from ella.core.views import ListContentType from ella.core.managers import ListingHandler from ella.articles.models import Article from ella.utils.timezone import from_timestamp from test_ella.test_core import create_basic_categories, create_and_place_a_publishable, \ create_and_place_more_publishables, list_all_publishables_in_category_by_hour from nose import tools class CacheTestCase(TestCase): def setUp(self): self.old_cache = utils.cache self.cache = get_cache('locmem://') utils.cache = self.cache super(CacheTestCase, self).setUp() def tearDown(self): super(CacheTestCase, self).tearDown() utils.cache = self.old_cache class TestCacheUtils(CacheTestCase): def test_get_many_objects(self): ct_ct = ContentType.objects.get_for_model(ContentType) site_ct = ContentType.objects.get_for_model(Site) objs = utils.get_cached_objects([(ct_ct.id, ct_ct.id), (ct_ct.id, site_ct.id), (site_ct.id, 1)]) tools.assert_equals([ct_ct, site_ct, Site.objects.get(pk=1)], objs) def test_get_many_publishables_will_respect_their_content_type(self): create_basic_categories(self) create_and_place_a_publishable(self) objs = utils.get_cached_objects([self.publishable.pk], Publishable) tools.assert_true(isinstance(objs[0], Article)) def test_get_many_objects_raises_by_default(self): ct_ct = ContentType.objects.get_for_model(ContentType) site_ct = ContentType.objects.get_for_model(Site) tools.assert_raises(Site.DoesNotExist, utils.get_cached_objects, [(ct_ct.id, ct_ct.id), (ct_ct.id, site_ct.id), (site_ct.id, 1), (site_ct.id, 100)]) def 
test_get_many_objects_can_replace_missing_with_none(self): ct_ct = ContentType.objects.get_for_model(ContentType) site_ct = ContentType.objects.get_for_model(Site) objs = utils.get_cached_objects([(ct_ct.id, ct_ct.id), (ct_ct.id, site_ct.id), (site_ct.id, 1), (site_ct.id, 100)], missing=utils.NONE) tools.assert_equals([ct_ct, site_ct, Site.objects.get(pk=1), None], objs) def test_get_many_objects_can_skip(self): ct_ct = ContentType.objects.get_for_model(ContentType) site_ct = ContentType.objects.get_for_model(Site) objs = utils.get_cached_objects([(ct_ct.id, ct_ct.id), (ct_ct.id, site_ct.id), (site_ct.id, 1), (site_ct.id, 100)], missing=utils.SKIP) tools.assert_equals([ct_ct, site_ct, Site.objects.get(pk=1)], objs) def test_get_publishable_returns_subclass(self): create_basic_categories(self) create_and_place_a_publishable(self) tools.assert_equals(self.publishable, utils.get_cached_object(Publishable, pk=self.publishable.pk)) def test_get_article_uses_the_publishable_key_and_0_for_version(self): tools.assert_equals( ':'.join((utils.KEY_PREFIX, str(ContentType.objects.get_for_model(Publishable).pk), '123', '0')), utils._get_key(utils.KEY_PREFIX, ContentType.objects.get_for_model(Article), pk=123) ) def test_get_article_uses_the_publishable_key_and_version_from_cache(self): key = utils._get_key(utils.KEY_PREFIX, ContentType.objects.get_for_model(Article), pk=123, version_key=True) self.cache.set(key, 3) tools.assert_equals( ':'.join((utils.KEY_PREFIX, str(ContentType.objects.get_for_model(Publishable).pk), '123', '3')), utils._get_key(utils.KEY_PREFIX, ContentType.objects.get_for_model(Article), pk=123) ) class TestCacheInvalidation(CacheTestCase): def test_save_invalidates_object(self): self.ct = ContentType.objects.get_for_model(ContentType) ct = utils.get_cached_object(self.ct, pk=self.ct.pk) tools.assert_equals(ct, self.ct) tools.assert_equals(self.ct, self.cache.get(utils._get_key(utils.KEY_PREFIX, self.ct, pk=self.ct.pk))) self.ct.save() 
tools.assert_equals(None, self.cache.get(utils._get_key(utils.KEY_PREFIX, self.ct, pk=self.ct.pk))) class TestRedisListings(TestCase): def setUp(self): super(TestRedisListings, self).setUp() create_basic_categories(self) create_and_place_more_publishables(self) def test_access_to_individual_listings(self): list_all_publishables_in_category_by_hour(self) lh = Listing.objects.get_queryset_wrapper(category=self.category, children=ListingHandler.ALL, source='redis') l = lh[0] tools.assert_equals(l.publishable, self.listings[0].publishable) def test_listings_dont_propagate_where_they_shouldnt(self): self.category_nested.app_data = {'ella': {'propagate_listings': False}} self.category_nested.save() # small hack to remove the cached category on Publishable for p in self.publishables: del p._category_cache list_all_publishables_in_category_by_hour(self) ct_id = self.publishables[0].content_type_id tools.assert_equals(['%d:1' % ct_id], redis.client.zrange('listing:d:1', 0, 100)) tools.assert_equals(['%d:1' % ct_id], redis.client.zrange('listing:c:1', 0, 100)) tools.assert_equals(['%d:2' % ct_id, '%d:3' % ct_id], redis.client.zrange('listing:c:2', 0, 100)) tools.assert_equals(['%d:2' % ct_id, '%d:3' % ct_id], redis.client.zrange('listing:d:2', 0, 100)) def test_listing_gets_removed_when_publishable_goes_unpublished(self): list_all_publishables_in_category_by_hour(self) p = self.publishables[0] p.published = False p.save() ct_id = p.content_type_id tools.assert_equals(set([ 'listing:2', 'listing:3', 'listing:c:1', 'listing:c:2', 'listing:c:3', 'listing:d:1', 'listing:d:2', 'listing:d:3', 'listing:ct:%d' % ct_id, ]), set(redis.client.keys()) ) tools.assert_equals(['%d:2' % ct_id, '%d:3' % ct_id], redis.client.zrange('listing:ct:%d' % ct_id, 0, 100)) tools.assert_equals(['%d:2' % ct_id, '%d:3' % ct_id], redis.client.zrange('listing:d:1', 0, 100)) tools.assert_equals(['%d:2' % ct_id], redis.client.zrange('listing:c:1', 0, 100)) def
test_listing_save_adds_itself_to_relevant_zsets(self): list_all_publishables_in_category_by_hour(self) ct_id = self.publishables[0].content_type_id tools.assert_equals(set([ 'listing:1', 'listing:2', 'listing:3', 'listing:c:1', 'listing:c:2', 'listing:c:3', 'listing:d:1', 'listing:d:2', 'listing:d:3', 'listing:ct:%d' % ct_id, ]), set(redis.client.keys()) ) tools.assert_equals(['%d:3' % ct_id], redis.client.zrange('listing:3', 0, 100)) tools.assert_equals(['%d:1' % ct_id, '%d:2' % ct_id, '%d:3' % ct_id], redis.client.zrange('listing:ct:%d' % ct_id, 0, 100)) tools.assert_equals(['%d:1' % ct_id, '%d:2' % ct_id, '%d:3' % ct_id], redis.client.zrange('listing:d:1', 0, 100)) def test_listing_delete_removes_itself_from_redis(self): list_all_publishables_in_category_by_hour(self) self.listings[1].delete() ct_id = self.publishables[0].content_type_id tools.assert_equals(set([ 'listing:1', 'listing:3', 'listing:c:1', 'listing:c:2', 'listing:c:3', 'listing:d:1', 'listing:d:2', 'listing:d:3', 'listing:ct:%d' % ct_id, ]), set(redis.client.keys()) ) tools.assert_equals(['%d:3' % ct_id], redis.client.zrange('listing:3', 0, 100)) tools.assert_equals(['%d:3' % ct_id], redis.client.zrange('listing:c:2', 0, 100)) tools.assert_equals(['%d:3' % ct_id], redis.client.zrange('listing:d:2', 0, 100)) tools.assert_equals(['%d:1' % ct_id, '%d:3' % ct_id], redis.client.zrange('listing:d:1', 0, 100)) tools.assert_equals(['%d:1' % ct_id], redis.client.zrange('listing:c:1', 0, 100)) tools.assert_equals(['%d:1' % ct_id, '%d:3' % ct_id], redis.client.zrange('listing:ct:%d' % ct_id, 0, 100)) def test_get_listing_uses_data_from_redis(self): ct_id = self.publishables[0].content_type_id t1, t2 = time.time()-90, time.time()-100 redis.client.zadd('listing:c:2', '%d:1' % ct_id, repr(t1)) redis.client.zadd('listing:c:2', '%d:3' % ct_id, repr(t2)) dt1, dt2 = from_timestamp(t1), from_timestamp(t2) lh = Listing.objects.get_queryset_wrapper(category=self.category_nested, children=ListingHandler.IMMEDIATE, 
source='redis') tools.assert_equals(2, lh.count()) l1, l2 = lh.get_listings(0, 10) tools.assert_equals(l1.publishable, self.publishables[0]) tools.assert_equals(l2.publishable, self.publishables[2]) tools.assert_equals(l1.publish_from, dt1) tools.assert_equals(l2.publish_from, dt2) def test_get_listing_omits_excluded_publishable(self): ct_id = self.publishables[0].content_type_id t1, t2 = time.time()-90, time.time()-100 redis.client.zadd('listing:c:2', '%d:1' % ct_id, repr(t1)) redis.client.zadd('listing:c:2', '%d:3' % ct_id, repr(t2)) dt1, dt2 = from_timestamp(t1), from_timestamp(t2) lh = Listing.objects.get_queryset_wrapper(category=self.category_nested, children=ListingHandler.IMMEDIATE, exclude=self.publishables[0], source='redis') tools.assert_equals(1, lh.count()) l = lh.get_listings(0, 10) tools.assert_equals(l[0].publishable, self.publishables[2]) tools.assert_equals(l[0].publish_from, dt2) def test_redis_listing_handler_used_from_view_when_requested(self): ct_id = self.publishables[0].content_type_id t1, t2 = time.time()-90, time.time()-100 redis.client.zadd('listing:d:2', '%d:1' % ct_id, repr(t1)) redis.client.zadd('listing:d:2', '%d:3' % ct_id, repr(t2)) dt1, dt2 = from_timestamp(t1), from_timestamp(t2) rf = RequestFactory() request = rf.get(self.category_nested.get_absolute_url(), {'using': 'redis'}) lct = ListContentType() context = lct.get_context(request, self.category_nested) tools.assert_equals(2, len(context['listings'])) l1, l2 = context['listings'] tools.assert_equals(l1.publishable, self.publishables[0]) tools.assert_equals(l2.publishable, self.publishables[2]) tools.assert_equals(l1.publish_from, dt1) tools.assert_equals(l2.publish_from, dt2) def test_get_listing_uses_data_from_redis_correctly_for_pagination(self): ct_id = self.publishables[0].content_type_id t1, t2, t3 = time.time()-90, time.time()-100, time.time() - 110 redis.client.zadd('listing:c:2', '%d:1' % ct_id, repr(t1)) redis.client.zadd('listing:c:2', '%d:3' % ct_id, repr(t2)) 
redis.client.zadd('listing:c:2', '%d:2' % ct_id, repr(t3)) lh = Listing.objects.get_queryset_wrapper(category=self.category_nested, children=ListingHandler.IMMEDIATE, source='redis') tools.assert_equals(3, lh.count()) l = lh.get_listings(2, 1) tools.assert_equals(1, len(l)) tools.assert_equals(l[0].publishable, self.publishables[1]) def test_redis_lh_slicing(self): list_all_publishables_in_category_by_hour(self) # Instantiate the RedisListingHandler and have it fetch all children lh = redis.RedisListingHandler(self.category, ListingHandler.ALL) for offset, count in [(0, 10), (0, 1), (0, 2), (1, 2), (2, 3), (3, 3)]: partial = lh.get_listings(offset=offset, count=count) tools.assert_equals( [l.publishable for l in partial], [l.publishable for l in self.listings[offset:offset + count]] ) def test_time_based_lh_slicing(self): list_all_publishables_in_category_by_hour(self) # Instantiate the RedisListingHandler and have it fetch all children lh = redis.TimeBasedListingHandler(self.category, ListingHandler.ALL) for offset, count in [(0, 10), (0, 1), (0, 2), (1, 2), (2, 3), (3, 3)]: partial = lh.get_listings(offset=offset, count=count) tools.assert_equals( [l.publishable for l in partial], [l.publishable for l in self.listings[offset:offset + count]] ) class TestAuthorLH(TestCase): def setUp(self): from ella.core.models import Author super(TestAuthorLH, self).setUp() create_basic_categories(self) create_and_place_more_publishables(self) self.author = Author.objects.create(slug='testauthor') for p in self.publishables: p.authors = [self.author] p.save() def test_listing_save_adds_itself_to_relevant_zsets(self): list_all_publishables_in_category_by_hour(self) ct_id = self.publishables[0].content_type_id tools.assert_equals(set([ 'listing:1', 'listing:2', 'listing:3', 'listing:c:1', 'listing:c:2', 'listing:c:3', 'listing:d:1', 'listing:d:2', 'listing:d:3', 'listing:a:%d' % self.author.pk, 'listing:ct:%d' % ct_id, ]), set(redis.client.keys()) ) tools.assert_equals(['%d:1' % 
ct_id, '%d:2' % ct_id, '%d:3' % ct_id], redis.client.zrange('listing:a:1', 0, 100)) class SlidingLH(redis.SlidingListingHandler): PREFIX = 'sliding' class TestSlidingListings(TestCase): def setUp(self): super(TestSlidingListings, self).setUp() create_basic_categories(self) create_and_place_more_publishables(self) self.ct_id = self.publishables[0].content_type_id def test_remove_publishable_clears_all_windows(self): SlidingLH.add_publishable(self.category, self.publishables[0], 10) SlidingLH.remove_publishable(self.category, self.publishables[0]) tools.assert_equals(set(['sliding:KEYS', 'sliding:WINDOWS']), set(redis.client.keys(SlidingLH.PREFIX + '*'))) def test_add_publishable_pushes_to_day_and_global_keys(self): SlidingLH.add_publishable(self.category, self.publishables[0], 10) day = date.today().strftime('%Y%m%d') expected_base = [ 'sliding:1', 'sliding:c:1', 'sliding:d:1', 'sliding:ct:%s' % self.ct_id, ] expected = expected_base + [k + ':' + day for k in expected_base] + ['sliding:KEYS', 'sliding:WINDOWS'] tools.assert_equals(set(expected), set(redis.client.keys(SlidingLH.PREFIX + '*'))) tools.assert_equals(redis.client.zrange('sliding:d:1', 0, -1, withscores=True), redis.client.zrange('sliding:d:1' + ':' + day, 0, -1, withscores=True)) def test_slide_windows_regenerates_aggregates(self): SlidingLH.add_publishable(self.category, self.publishables[0], 10) # register the keys that should exist redis.client.sadd('sliding:KEYS', 'sliding:1', 'sliding:c:1') redis.client.zadd('sliding:1:20101010', **{'17:1': 10, '17:2': 1}) redis.client.zadd('sliding:1:20101009', **{'17:1': 9, '17:2': 2}) redis.client.zadd('sliding:1:20101007', **{'17:1': 8, '17:2': 3, '17:3': 11}) redis.client.zadd('sliding:1:20101001', **{'17:1': 8, '17:2': 3, '17:3': 11}) SlidingLH.regenerate(date(2010, 10, 10)) tools.assert_equals([('17:2', 6.0), ('17:3', 11.0), ('17:1', 27.0)], redis.client.zrange('sliding:1', 0, -1, withscores=True)) def test_regenerate_removes_old_slots(self): 
redis.client.zadd('sliding:WINDOWS', **{ 'sliding:1:20101010': 20101010, 'sliding:1:20101009': 20101009, 'sliding:1:20101007': 20101007, 'sliding:1:20101001': 20101001 }) redis.client.zadd('sliding:1:20101010', **{'17:1': 10, '17:2': 1}) redis.client.zadd('sliding:1:20101009', **{'17:1': 9, '17:2': 2}) redis.client.zadd('sliding:1:20101007', **{'17:1': 8, '17:2': 3, '17:3': 11}) redis.client.zadd('sliding:1:20101001', **{'17:1': 8, '17:2': 3, '17:3': 11}) SlidingLH.regenerate(date(2010, 10, 10)) tools.assert_false(redis.client.exists('sliding:1:20101001')) tools.assert_true(redis.client.exists('sliding:1:20101007')) tools.assert_equals([ ('sliding:1:20101007', 20101007), ('sliding:1:20101009', 20101009), ('sliding:1:20101010', 20101010) ], redis.client.zrange('sliding:WINDOWS', 0, -1, withscores=True) ) def test_normalize_key_doesnt_touch_short_key(): key = "thisistest" tools.assert_equals(key,normalize_key(key)) def test_normalize_key_md5s_long_key(): key = "0123456789" * 30 tools.assert_equals(md5(key).hexdigest(),normalize_key(key))
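The two module-level tests at the end of this file pin down `normalize_key`'s contract: keys short enough for the cache backend pass through unchanged, while long keys are replaced by their md5 hex digest. A minimal Python 3 sketch consistent with that contract; the exact cutoff (250 here, memcached's documented key-length limit) is an assumption, since the tests only exercise a 10-character and a 300-character key:

```python
from hashlib import md5

MAX_KEY_LENGTH = 250  # assumed cutoff; memcached rejects longer keys

def normalize_key(key, max_length=MAX_KEY_LENGTH):
    # Leave short keys readable; replace over-long ones with a fixed-size
    # md5 hex digest so the cache backend never sees an oversized key.
    if len(key) <= max_length:
        return key
    return md5(key.encode('utf-8')).hexdigest()

print(normalize_key("thisistest"))            # short key passes through
print(len(normalize_key("0123456789" * 30)))  # prints 32 (md5 hex digest)
```

The digest is deterministic, so two callers computing a key for the same long string still hit the same cache entry; the trade-off is that hashed keys are no longer human-readable in the backend.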
RSVP via LinkedIn Events. Max. 12 people. Questions? Contact us.
from distutils.core import setup

version = '0.5'

long_description = ''
try:
    with open('README.md') as readme:
        # load long_description into memory
        long_description = readme.read()
    # save README (no extension) for pypi
    with open('README', 'w') as myfile:
        myfile.write(long_description)
except IOError:
    with open('README') as readme:
        long_description = readme.read()

setup(
    name='sobidata',
    version=version,
    description='Downloads your Social Bicycles route data.',
    long_description=long_description,
    author='Ryan McGreal',
    author_email='ryan@quandyfactory.com',
    license='LICENCE.txt',
    url='https://github.com/quandyfactory/sobidata',
    py_modules=['sobidata'],
    install_requires=[
        'dicttoxml',
        'openpyxl',
        'requests'
    ],
    download_url='https://pypi.python.org/packages/source/s/sobidata/sobidata-%s.tar.gz?raw=true' % (version),
    platforms='Cross-platform',
    classifiers=[
        'Programming Language :: Python',
    ],
)
This Sunday the Jervis Bay Cruising Yacht Club Fleet will contest the Ladies Day and Forward Hands Races. This is the chance for the crew to show the skippers how it should be done. It will be the last race for 2013, so dress up your boat and dress up your crew. As well as trophies for the two races, there will be a prize for the best-dressed boat. Briefing will be at 9.45am with an 11.00am start. Following the race there will be a BBQ and a quiet little pre-Christmas drink at the clubhouse. It is also a great opportunity to give the Commodore his Christmas present.
from anoncreds.protocol.issuer import Issuer
from anoncreds.protocol.repo.attributes_repo import AttributeRepo
from anoncreds.protocol.repo.public_repo import PublicRepo
from anoncreds.protocol.wallet.issuer_wallet import IssuerWalletInMemory
from indy_client.anon_creds.indy_public_repo import IndyPublicRepo
from indy_client.client.wallet.wallet import Wallet


class IndyIssuer(Issuer):
    def __init__(self, client, wallet: Wallet, attrRepo: AttributeRepo,
                 publicRepo: PublicRepo = None):
        publicRepo = publicRepo or IndyPublicRepo(client=client, wallet=wallet)
        issuerWallet = IndyIssuerWalletInMemory(wallet.name, publicRepo)
        super().__init__(issuerWallet, attrRepo)

    def prepareForWalletPersistence(self):
        # TODO: If we don't set self.wallet._repo.client to None,
        # it hangs during wallet persistence; based on findings, it seems
        # it hangs somewhere while persisting client._ledger and
        # client.ledgerManager
        self.wallet._repo.client = None

    def restorePersistedWallet(self, issuerWallet):
        curRepoClient = self.wallet._repo.client
        self.wallet = issuerWallet
        self._primaryIssuer._wallet = issuerWallet
        self._nonRevocationIssuer._wallet = issuerWallet
        self.wallet._repo.client = curRepoClient


class IndyIssuerWalletInMemory(IssuerWalletInMemory):
    def __init__(self, name, pubRepo):
        IssuerWalletInMemory.__init__(self, name, pubRepo)

        # available claims to anyone whose connection is accepted by the agent
        self.availableClaimsToAll = []

        # available claims only for certain invitation (by nonce)
        self.availableClaimsByNonce = {}

        # available claims only for certain connection (by internal id)
        self.availableClaimsByInternalId = {}

        # mapping between specific identifier and available claims which would
        # have been available once they have provided requested information
        # like proof etc.
        self.availableClaimsByIdentifier = {}

        self._proofRequestsSchema = {}  # Dict[str, Dict[str, any]]
The 106th Nebraska Legislature, First Session, convened its 2019 session this week. On the first day, new and re-elected senators were sworn into office, and Senator Jim Scheer was re-elected Speaker of the Legislature for the next two years. Senators also elected chairs for the 14 standing committees, the Executive Board and the Committee on Committees. The first session of Nebraska's 105th Legislature adjourned Sine Die on May 23. Senators tackled many important issues and introduced nearly 700 bills, but the session was dominated by efforts to achieve a balanced budget despite significant revenue shortfalls. See our report on the bills we identified as high-priority during this legislative session.
"""This module has good intentions, like helping you debug API calls""" from collections import OrderedDict from copy import deepcopy from datetime import datetime from json import loads, dumps from os import listdir from os.path import join, isdir from re import search from click import command import io from json import JSONDecodeError from gobble.config import ROOT_DIR from gobble.logger import log from gobble.config import settings SNAPSHOTS_DIR = join(ROOT_DIR, 'assets', 'snapshots') def to_json(response): """Safely extract the payload from the response object""" try: return loads(response.text) except JSONDecodeError: return {} class SnapShot(OrderedDict): """A chatty wrapper around the API transaction""" def __init__(self, endpoint, url, reponse, params, headers=None, json=None, is_freeze=False): """Log, record and save before returning an instance""" self.is_freeze = is_freeze self.url = url self.endpoint = endpoint self.response = reponse self.headers = headers self.params = params self.request_payload = json super(SnapShot, self).__init__(self._template) self.timestamp = str(datetime.now()) self._log() self._record() self._save() def _log(self): """Is there such a thing as too much logging?""" code = self.response.status_code reason = self.response.reason response_json = to_json(self.response) begin = code, reason, self.endpoint, 'begin' end = code, reason, self.endpoint, 'end' transaction = ' [%s] %s - %s (%s) ' log.debug('{:*^100}'.format(transaction % begin)) messages = ( ('Request endpoint: %s', self.endpoint.url), ('Request time: %s', self.response.elapsed), ('Request parameters: %s', self.params), ('Request payload: %s', self.request_payload), ('Request headers: %s', self.headers), ('Response headers: %s', self.response.headers), ('Response payload: %s', response_json), ('Response cookies: %s', self.response.cookies), ('Request full URL: %s', self.url), ) for message in messages: log.debug(*message) indent = 4 if settings.EXPANDED_LOG_STYLE else 
None log.debug(dumps(response_json, ensure_ascii=False, indent=indent)) log.debug('{:*^100}'.format(transaction % end)) def _record(self): """Store the transaction info""" json = to_json(self.response) duplicate_json = deepcopy(json) self['timestamp'] = self.timestamp self['url'] = self.url self['query'] = self.params self['request_json'] = self.request_payload self['response_json'] = duplicate_json self['request_headers'] = self.headers self['response_headers'] = dict(self.response.headers) self['cookies'] = dict(self.response.cookies) @property def _template(self): return ( ('timestamp', None), ('host', settings.OS_URL), ('url', None), ('method', self.endpoint.method), ('path', self.endpoint.path), ('query', None), ('request_json', None), ('response_json', None), ('request_headers', None), ('response_headers', None), ('cookies', None), ) def _save(self): """Save the snapshot as JSON in the appropriate place""" with io.open(self._filepath, 'w+', encoding='utf-8') as file: file.write(dumps(self, ensure_ascii=False)) log.debug('Saved request + response to %s', self._filepath) @property def _folder(self): return SNAPSHOTS_DIR if self.is_freeze else settings.USER_DIR @property def _filepath(self): template = '{method}.{path}.json' dot_path = '.'.join(self.endpoint._path).rstrip('/') params = {'method': self.endpoint.method, 'path': dot_path} filename = template.format(**params) return join(self._folder, filename) def __str__(self): return str(self.endpoint) + ' at ' + self.timestamp def __repr__(self): return '<SnapShot %s>' % str(self) @property def json(self): return dumps(self, ensure_ascii=False) def freeze(json): """Recursively substitute unwanted strings inside a json-like object Basically, remove anything in the substitution list below, even when hidden in inside query strings. 
""" subs = { 'jwt': r'jwt=([^&^"]+)', "bucket_id": r'\/([\w]{32})\/', 'Signature': r'Signature=([^&^"]+)', 'AWSAccessKeyId': r'AWSAccessKeyId=([^&^"]+)', 'Expires': r'Expires=([^&^"]+)', 'Date': None, "Set-Cookie": None, 'token': None, } def regex(dummy_, json_, key_, pattern_, value_): match = search(pattern_, value_) if match: sub = match.group(1), dummy_ json_[key_] = value_.replace(*sub) if isinstance(json, list): for item in json: freeze(item) elif isinstance(json, dict): for field, pattern in subs.items(): for key, value in json.items(): dummy = field.upper() if key == field: json[key] = dummy elif isinstance(value, str): if pattern: regex(dummy, json, key, pattern, value) elif isinstance(value, dict): freeze(value) @command def archive(destination): """Freeze and move all snapshots to the destination folder.""" if not isdir(destination): raise NotADirectoryError(destination) for file in listdir(settings.USER_DIR): verb = file.split('.')[0] if verb in ['GET', 'POST', 'PUT']: with io.open(file) as source: snapshot = loads(source.read()) freeze(snapshot) # Overwrite if necessary output = join(destination, file) with io.open(output, 'w+', encoding='utf-8') as target: target.write(dumps(snapshot, ensure_ascii=False))
from rest_framework.permissions import BasePermission

DELETE_METHODS = ('DELETE',)
UPDATE_METHODS = ('PUT', 'PATCH', 'POST')
READ_METHODS = ('GET', 'HEAD', 'OPTIONS')


class UpdateDeletePermission(BasePermission):
    """
    Base permission which allows anyone to read the resources, while
    restricting updates and deletes to authenticated users. The actual
    update/delete checks are delegated to the object itself via
    ``can_update`` and ``can_delete``.
    """

    def has_update_obj_permission(self, user, obj):
        return obj.can_update(user)

    def has_delete_obj_permission(self, user, obj):
        return obj.can_delete(user)

    def has_object_permission(self, request, view, obj):
        method = request.method
        if method in READ_METHODS:
            return True
        elif request.user and request.user.is_authenticated:
            user = request.user
            if method in UPDATE_METHODS:
                return self.has_update_obj_permission(user, obj)
            elif method in DELETE_METHODS:
                return self.has_delete_obj_permission(user, obj)
        return False
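The branching in `has_object_permission` can be exercised without a running API. The sketch below re-implements the same dispatch in plain Python (so it runs without `djangorestframework` installed) and stubs the request, user, and model object with `SimpleNamespace`; all the stub names are invented for illustration:

```python
from types import SimpleNamespace

DELETE_METHODS = ('DELETE',)
UPDATE_METHODS = ('PUT', 'PATCH', 'POST')
READ_METHODS = ('GET', 'HEAD', 'OPTIONS')


def has_object_permission(request, obj):
    """Same branching as UpdateDeletePermission, minus the DRF base class."""
    if request.method in READ_METHODS:
        return True
    if request.user and request.user.is_authenticated:
        if request.method in UPDATE_METHODS:
            return obj.can_update(request.user)
        if request.method in DELETE_METHODS:
            return obj.can_delete(request.user)
    return False


owner = SimpleNamespace(is_authenticated=True)
anonymous = SimpleNamespace(is_authenticated=False)
# An object that only its owner may modify
article = SimpleNamespace(can_update=lambda user: user is owner,
                          can_delete=lambda user: user is owner)

assert has_object_permission(SimpleNamespace(method='GET', user=anonymous), article)
assert has_object_permission(SimpleNamespace(method='PATCH', user=owner), article)
assert not has_object_permission(SimpleNamespace(method='DELETE', user=anonymous), article)
```

Note the ordering matters: reads short-circuit before the user is ever inspected, which is what lets anonymous traffic through on `GET`/`HEAD`/`OPTIONS`.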
We’ve all experienced the long days, the late nights, and the work-filled weekends that come with any small business. When I first started my business, every weekend was consumed with work. I couldn’t go anywhere without my laptop (and I seriously mean anywhere). Being away from the computer for longer than an hour gave me anxiety because I was so worried I would take too long to get back to a client. I worked late all of the time and always felt behind on the deadlines I had to meet. The stress caused me to constantly oversleep in the morning (I could not drag myself out of bed no matter how hard I tried), which would then result in another long night of finishing my work. It’s exhausting just reading that mess, but it’s completely true. It was an awful cycle that I couldn’t pull myself out of, and it hurt me in so many ways. My body was physically tired all of the time, and I had a difficult time focusing on work. It not only took a physical and mental toll on me, but it took a toll on my personal relationships as well. After enduring this pattern for several months, I finally realized that I was missing balance in my business. Finding the right balance in your business is one of the most important things you can do for yourself when you’re an entrepreneur. So often, especially when we’re just starting out, we get so caught up in the idea that we have to constantly “hustle” in order to make it somewhere that we forget to take care of ourselves along the way. I finally reached a point where I couldn’t do it anymore, and I needed a change. I decided to set office hours and tried to stick to those hours to get work done, answer emails, and post on social media. Once 5:00 rolls around, I shut down the computer and leave my office. I can cook dinner and enjoy an evening with my husband without having to stress about deadlines. I also let myself have my weekends back.
One of the reasons I decided to give up photography and pursue editing is because I didn’t want to constantly be working on weekends; that is time I’d rather be spending with my family and traveling. It was a micro change that produced a macro result, and it gave me my life back. After I made this change, I started to see a huge difference in how I was operating my business. Because I didn’t feel tired and overwhelmed all of the time, I was more productive during my office hours, and I was able to get through catalogs faster and more efficiently. I quickly realized that clients are JUST FINE if you take 24-48 hours to respond to an email. Trust me – THEY UNDERSTAND! I don’t expect people to answer my emails within the hour, so why did I think everyone expected this of me? Your business isn’t going to fail because you didn’t answer that email the second you got it, and you don’t have to get a catalog back to someone that day if you’ve explained the expectation should be 3-5 days. You have to make sure you’re taking care of yourself, or you will burn out really quickly. So if you find you’re struggling with the same fatigue I was plagued with early on in my business, try finding the right balance for you. It could mean taking on fewer weddings a year, or it could be as simple as setting strict office hours. Listen to your body – is it telling you that you need more sleep? Also, listen to your friends and family. If you’re giving up your entire life to your business and sacrificing time with your support group, it’s probably time to make a change. Like Mary Marantz says, “I didn’t quit a 9-5 to work a 24/7.” Figuring out what you need in order to have a more productive workweek will put you in the right direction for healthy success. I love this and I really need to hear this. Thank you for sharing your thoughts! Last night I got a work email at 3 am. My phone dinged and I made the mistake of answering it. 3 hours later I went back to bed. This morning I needed a refocus.
Thank you! Yes!! So much truth in this. (Why is it the hardest thing in the world to do, though?!) My freedom came when I was able to rent studio space to work. Now, the majority of my stuff is out of my home so home stays home and work stays work. Girl, yes. This is such a hard thing for passionate business owners. You nailed it! Thanks for sharing! I so often get caught up in the hustle.. What a great post. I have told my husband so many times that we need to set hours for ourselves to “clock in and clock out”. It’s something we really need to work on. Thanks for sharing. Oh man, I so relate to this! I thought it was interesting that you found you actually worked better by working less. I am so guilty of working everywhere and anywhere I can (like today at my son’s Ninja class!). Time to make some changes. This is so true and important to recognize. It’s so easy to always be ON in our technology filled society. I had to turn off sounds and notifications for texts and e-mails. I now physically check a few times a day. I also have official office hours from 10-6. That has helped a TON! Great post!! This is an awesome post and something EVERYONE needs to read. Thanks for sharing! This is a really great post, now to implement this into my routine!! Really needed this! Thanks for your authenticity and posting this. This is such a great reminder. Someone once told me that no matter what, when the clock strikes 5pm you close your computer and be done. The work will be there again tomorrow. I have always remembered those words. Yes!! I love this! I completely agree, and sometimes have to remind myself that a client will be not only understanding, but okay with me taking a little time to respond. Great post!! Yes, yes, and yes. As a business owner, it’s so easy to lose the balance. This was such a good blog post! Thank you so much for sharing! “Balance” is my word for the year! Great post! What a great post! I need this information today!! I really struggle with balance.
I’m still in the first year and this read was really helpful. Sometimes I WANT to keep going.. keep editing, keep blogging, keep researching … because I LOVE my work. But it is a real-life struggle to make sure I am not ignoring other important parts of my life… my husband, my friends and family. Thank you for sharing this. Balance is something I am always struggling with! I finally turned off my email update on my phone so I have to physically click on emails to check it. That helped a little bit but I definitely have to set some hours where I shut it all down! Glad to hear it wasn’t just me that went through this! It’s easy to lose that balance. I need to take a page out of your book. I needed to read this. Thanks. Finding a balance is so important. I love the way this was written! Balance is def key. Especially in creative fields. Thanks for sharing your tips. I needed this today! With a full-time job, I am finding it really hard to get my blog up and running and today was the first time I couldn’t meet my goal of two posts per week. I am going to have to re-think whether this is feasible!
import os

import matplotlib.cm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import six
from matplotlib import colors as matplot_colors
from matplotlib.colors import rgb2hex

colors = list(six.iteritems(matplot_colors.cnames))
colors_hex = list(zip(*colors))[1]

# cmap = matplotlib.cm.get_cmap(name='viridis')
cmap = matplotlib.cm.get_cmap(name='hsv')

input_files = [
    "badmintonplayers.csv",
    "basketballplayers.csv",
    "boxers.csv",
    "cyclists.csv",
    "golfplayers3.csv",
    "gymnasts.csv",
    "handballplayers.csv",
    "Olympic Games.csv",
    "rower.csv",
    "soccerplayers4.csv",
    "stadiums2.csv",
    "swimmers.csv",
    "tennisplayers.csv",
    "volleyballplayers.csv",
    "wrestlers.csv",
]

d = "clean_input"


def get_outliers(df, k=1.5):
    """Split a series into (outliers, non-outliers) using Tukey fences."""
    q1 = df.quantile(q=0.25)
    q3 = df.quantile(q=0.75)
    low = q1 - k * (q3 - q1)
    high = q3 + k * (q3 - q1)
    return df[(df < low) | (df > high)], df[(df >= low) & (df <= high)]


def explore_input_files():
    color_idx = 0
    print("outliers in: ")
    for idx_inp, inpf in enumerate(input_files):
        df = (pd.read_csv(os.path.join(d, inpf))
              .select_dtypes(include=[np.number])
              .dropna(axis=1, how='any'))
        for idx, column in enumerate(df):
            if df[column].size == 0:
                continue
            plt.plot(df[column],
                     [column[0:6].lower() + "(" + inpf[0:6].lower() + ")"] * df[column].size,
                     ".", c=rgb2hex(cmap(color_idx % cmap.N)), alpha=0.5,
                     label=column[0:6] + "(" + inpf[0:4] + ")")
            outliers, _ = get_outliers(df[column])
            if outliers.size != 0:
                print(" > " + column.lower() + "(" + inpf.lower() + ")"
                      + " num of outliers is: " + str(outliers.size))
                plt.plot(outliers,
                         [column[0:6].lower() + "(" + inpf[0:6].lower() + ")"] * outliers.size,
                         "X", c=rgb2hex(cmap(color_idx % cmap.N)), alpha=1.0,
                         label=column[0:6] + "(" + inpf[0:4] + ")")
        color_idx += 15


def free_form_visualization():
    color_idx = 0
    for idx_inp, inpf in enumerate(input_files):
        df = (pd.read_csv(os.path.join(d, inpf))
              .select_dtypes(include=[np.number])
              .dropna(axis=1, how='any'))
        for idx, column in enumerate(df):
            if df[column].size == 0:
                continue
            plt.plot(df[column],
                     [column[0:6].lower() + "(" + inpf[0:6].lower() + ")"] * df[column].size,
                     "1", c=rgb2hex(cmap(color_idx % cmap.N)), alpha=0.3,
                     label=column[0:6] + "(" + inpf[0:4] + ")")
            # draw the mean
            plt.plot([df[column].mean()],
                     [column[0:6].lower() + "(" + inpf[0:6].lower() + ")"],
                     "s", c=rgb2hex(cmap(color_idx % cmap.N)), alpha=0.5,
                     label=column[0:6] + "(" + inpf[0:4] + ")")
            outliers, non_outliers = get_outliers(df[column])
            # draw the mean without the outliers
            plt.plot([non_outliers.mean()],
                     [column[0:6].lower() + "(" + inpf[0:6].lower() + ")"],
                     "D", c=rgb2hex(cmap(color_idx % cmap.N)), alpha=0.5,
                     label=column[0:6] + "(" + inpf[0:4] + ")")
        color_idx += 15

    line_up, = plt.plot([], [], "s", label='mean with outliers',
                        c=rgb2hex(cmap(15 % cmap.N)))
    line_down, = plt.plot([], [], "D", label='mean without outliers',
                          c=rgb2hex(cmap(15 * 10 % cmap.N)))
    plt.legend(handles=[line_up, line_down])


if input("Enter:\n1) Data Exploration\n2) Free Form Visualization\n") == "1":
    explore_input_files()
else:
    free_form_visualization()
plt.show()
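The `get_outliers` function above implements Tukey's fences: a point is an outlier if it falls more than `k` interquartile ranges outside the quartiles. The same rule can be checked without pandas using only the standard library; `method='inclusive'` below roughly matches pandas' default linear-interpolation quantiles:

```python
from statistics import quantiles


def tukey_outliers(values, k=1.5):
    """Return the points falling outside [q1 - k*IQR, q3 + k*IQR]."""
    q1, _, q3 = quantiles(values, n=4, method='inclusive')
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]


data = [1, 2, 3, 4, 5, 100]
# q1 = 2.25, q3 = 4.75, so the fences are [-1.5, 8.5] and only 100 is flagged
assert tukey_outliers(data) == [100]
```

With the default `k=1.5` this is the usual boxplot-whisker rule; a larger `k` (often 3) flags only extreme outliers.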
The Blue Springs South girls soccer team took a trip to Columbia, Mo., over the weekend and came away with a pair of wins against Columbia Rock Bridge and Columbia Hickman. Against the Kewpies, the Jaguars came away with a 3-1 win Saturday. Brie Severns scored in the third minute off an assist from Khaliana Garrett, and she scored again in the 17th minute off an assist from Kennedi Hooks to make it 2-0. That score held until halftime, and Hickman narrowed the gap to 2-1 with a goal in the 45th minute. But the Jaguars (7-1) scored the final goal as Emma Robinson converted a Severns assist. Against the Bruins, the Jaguars overcame a 2-1 deficit to take a 3-2 victory. Garrett scored the game-winner in the 76th minute off an assist from Braylee Childers. Hooks scored the tying goal in the 75th minute off an assist from Abigail Carino. Severns scored the team’s other goal off an assist from Logan Abernathy in the 25th minute. “To see the girls come back from adversity was very exciting,” Findley said. LIBERTY NORTH 1, LEE’S SUMMIT NORTH 0: The Broncos fell to 1-7 overall following a loss to the Eagles Monday.
'''Arsenal client Tags class.'''
import logging

from arsenalclient.interface.arsenal_interface import ArsenalInterface
from arsenalclient.exceptions import NoResultFound

LOG = logging.getLogger(__name__)


class Tags(ArsenalInterface):
    '''The arsenal client Tags class.'''

    def __init__(self, **kwargs):
        super(Tags, self).__init__(**kwargs)
        self.uri = '/api/tags'

    # Overridden methods
    def search(self, params=None):
        '''Search for tags.

        Usage:

          >>> params = {
          ...     'name': 'my_tag',
          ...     'exact_get': True,
          ... }
          >>> Tags.search(params)

        Args:
            params (dict): a dictionary of url parameters for the request.

        Returns:
            A json response from ArsenalInterface.check_response_codes().
        '''
        return super(Tags, self).search(params)

    def create(self, params):
        '''Create a new tag.

        Args:
            params (dict): A dictionary with the following attributes:

                tag_name : The name of the tag you wish to create.
                tag_value: The value of the tag you wish to create.

        Usage:

          >>> params = {
          ...     'name': 'meaning',
          ...     'value': 42,
          ... }
          >>> Tags.create(params)
          <Response [200]>
        '''
        return super(Tags, self).create(params)

    def update(self, params):
        '''Update a tag. There is nothing to update with tags as every field
        must be unique.'''
        pass

    def delete(self, params):
        '''Delete a tag object from the server.

        Args:
            params: A tag dictionary to delete. Must contain the tag id,
                name, and value.

        Usage:

          >>> params = {
          ...     'id': 1,
          ...     'name': 'my_tag',
          ...     'value': 'my_string',
          ... }
          >>> Tags.delete(params)
        '''
        return super(Tags, self).delete(params)

    def get_audit_history(self, results):
        '''Get the audit history for tags.'''
        return super(Tags, self).get_audit_history(results)

    def get_by_name(self, name):
        '''Get a single tag by its name. This is not possible as a tag's
        uniqueness is determined by both its name and value. Use
        Tags.get_by_name_value() instead.
        '''
        pass

    # Custom methods
    def get_by_name_value(self, name, value):
        '''Get a tag from the server based on its name and value.'''

        LOG.debug('Searching for tag name: {0} value: {1}'.format(name, value))
        data = {
            'name': name,
            'value': value,
            'exact_get': True,
        }

        resp = self.api_conn('/api/tags', data, log_success=False)
        LOG.debug('Results are: {0}'.format(resp))

        try:
            resource = resp['results'][0]
        except IndexError:
            msg = 'Tag not found: {0}={1}'.format(name, value)
            LOG.info(msg)
            raise NoResultFound(msg)

        if len(resp['results']) > 1:
            msg = 'More than one result found: {0}'.format(name)
            LOG.error(msg)
            raise RuntimeError(msg)

        return resource

    def _manage_assignments(self, name, value, object_type, results, api_method):
        '''Assign or de-assign a tag to a list of node, node_group, or
        data_center dictionaries.'''

        action_names = []
        action_ids = []
        msg = 'Assigning'
        if api_method == 'delete':
            msg = 'De-assigning'

        for action_object in results:
            try:
                action_names.append(action_object['name'])
            except KeyError:
                action_names.append(action_object['serial_number'])
            action_ids.append(action_object['id'])

        try:
            this_tag = self.get_by_name_value(name, value)
        except NoResultFound:
            if api_method == 'delete':
                LOG.debug('Tag not found, nothing to do.')
                return
            else:
                params = {
                    'name': name,
                    'value': value,
                }
                resp = self.create(params)
                this_tag = resp['results'][0]

        LOG.info('{0} tag: {1}={2}'.format(msg, this_tag['name'], this_tag['value']))
        for action_name in action_names:
            LOG.info('  {0}: {1}'.format(object_type, action_name))

        data = {object_type: action_ids}

        uri = '/api/tags/{0}/{1}'.format(this_tag['id'], object_type)
        resp = self.api_conn(uri, data, method=api_method)

        return resp

    def assign(self, name, value, object_type, results):
        '''Assign a tag to one or more nodes, node_groups, or data_centers.

        Args:
            name (str)       : The name of the tag to assign to the
                <Class>.search() results.
            value (str)      : The value of the tag to assign to the
                <Class>.search() results.
            object_type (str): A string representing the object_type to
                assign the tag to. One of nodes, node_groups or data_centers.
            results          : The nodes, node_groups, or data_centers from
                the results of <Class>.search() to assign the tag to.

        Usage:

          >>> Tags.assign('meaning', 42, 'nodes', <search results>)
          <json>
        '''
        return self._manage_assignments(name, value, object_type, results, 'put')

    def deassign(self, name, value, object_type, results):
        '''De-assign a tag from one or more nodes, node_groups, or data_centers.

        Args:
            name (str)       : The name of the tag to de-assign from the
                <Class>.search() results.
            value (str)      : The value of the tag to de-assign from the
                <Class>.search() results.
            object_type (str): A string representing the object_type to
                de-assign the tag from. One of nodes, node_groups or
                data_centers.
            results          : The nodes, node_groups, or data_centers from
                the results of <Class>.search() to de-assign the tag from.

        Usage:

          >>> Tags.deassign('meaning', 42, 'nodes', <search results>)
          <json>
        '''
        return self._manage_assignments(name, value, object_type, results, 'delete')
6793 Kirkwood Dr, Mentor, OH 44060 (MLS # 4067757) | NagelRealtors serving Lake, Geauga and Eastern Cuyahoga! Fantastic opportunity in Bellflower Terrace. Updated throughout with new flooring, fresh paint, updated kitchen and baths. The list goes on. New Electrical Panel, Furnace & AC in 2016, New Hot Water Tank 2018, New Concrete driveway 2016. All the big stuff is done for you, all you need to do is unpack and you're done!
# -*- coding: utf-8 -*-
"""
Display tasks in thunderbird calendar.

Configuration parameters:
    cache_timeout: how often we refresh usage in seconds (default 120)
    err_exception: error message when an exception is raised
        (default 'error: calendar parsing failed')
    err_profile: error message regarding profile path and read access
        (default 'error: profile not readable')
    format: see placeholders below (default 'tasks:[{due}] current:{current}')
    profile_path: path to the user thunderbird profile (not optional)
        (default '')

Format of status string placeholders:
    {completed} completed tasks
    {current} title of current running task (sorted by priority and stamp)
    {due} due tasks

Make sure to configure profile_path in your i3status config using the full
path or this module will not be able to retrieve any information from your
calendar.

ex: profile_path = "/home/user/.thunderbird/1yawevtp.default"

@author mrt-prodz

SAMPLE OUTPUT
{'full_text': 'tasks[3] current: finish the birdhouse'}
"""
from os import access, R_OK
from sqlite3 import connect
from time import time


class Py3status:
    # available configuration parameters
    cache_timeout = 120
    err_exception = 'error: calendar parsing failed'
    err_profile = 'error: profile not readable'
    format = 'tasks:[{due}] current:{current}'
    profile_path = ''

    def _response(self, text, color=None):
        response = {
            'cached_until': time() + self.cache_timeout,
            'full_text': text,
        }
        if color is not None:
            response['color'] = color
        return response

    # return calendar data
    def get_calendar(self, i3s_output_list, i3s_config):
        _err_color = i3s_config['color_bad']
        db = self.profile_path + '/calendar-data/local.sqlite'
        if not access(db, R_OK):
            return self._response(self.err_profile, _err_color)
        try:
            con = connect(db)
            cur = con.cursor()
            cur.execute('SELECT title, todo_completed FROM cal_todos '
                        'ORDER BY priority DESC, todo_stamp DESC')
            tasks = cur.fetchall()
            con.close()
            # task[0] is the task name, task[1] is the todo_completed column
            duetasks = [task[0] for task in tasks if task[1] is None]
            due = len(duetasks)
            completed = len(tasks) - due
            current = duetasks[0] if due else ''
            return self._response(
                self.format.format(due=due, completed=completed, current=current))
        except Exception:
            return self._response(self.err_exception, _err_color)


if __name__ == "__main__":
    x = Py3status()
    config = {
        'color_good': '#00FF00',
        'color_degraded': '#00FFFF',
        'color_bad': '#FF0000'
    }
    print(x.get_calendar([], config))
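The heart of the module is a single SQL query over Thunderbird's `cal_todos` table, plus the due/completed split done in Python. That logic can be exercised against an in-memory database; the schema below is a guessed minimal stand-in containing only the columns the query touches, not Thunderbird's real table definition:

```python
from sqlite3 import connect

con = connect(':memory:')
cur = con.cursor()
# Minimal stand-in for Thunderbird's cal_todos table: only the columns
# the module actually reads (this schema is an assumption for the sketch).
cur.execute('CREATE TABLE cal_todos '
            '(title TEXT, todo_completed INTEGER, priority INTEGER, todo_stamp INTEGER)')
cur.executemany('INSERT INTO cal_todos VALUES (?, ?, ?, ?)', [
    ('finish the birdhouse', None, 9, 3),   # due, highest priority
    ('buy paint', None, 5, 2),              # due
    ('sand the wood', 1561234567, 5, 1),    # completed (timestamp set)
])
cur.execute('SELECT title, todo_completed FROM cal_todos '
            'ORDER BY priority DESC, todo_stamp DESC')
tasks = cur.fetchall()
con.close()

# Same split as get_calendar: NULL todo_completed means the task is still due
duetasks = [task[0] for task in tasks if task[1] is None]
assert len(duetasks) == 2                       # {due}
assert len(tasks) - len(duetasks) == 1          # {completed}
assert duetasks[0] == 'finish the birdhouse'    # {current} = top priority
```

Because the `ORDER BY` sorts by priority then stamp, the first due row is always the "current" task shown in the bar.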
This post describes the three monthly update types that you can install. We are making an important change to .NET Framework updates to align with the Windows Monthly Rollup, also announced today. Beginning October 2016, you will be able to install a new update release called the .NET Framework Monthly Rollup. The rollup will update the .NET Framework with the latest security and quality improvements. Updated (4/2017): See .NET Framework Releases for newer releases. Today we are excited to announce the availability of the .NET Framework 4.6.2! Many of the changes are based on your feedback, including those submitted on UserVoice and Connect. Thanks for your continued help and engagement! To read last week’s post, see The week in .NET – 7/19/2016. Last week, we had Rowan Miller on the show to talk about Entity Framework. This week’s show has been canceled and is anticipated to return next week. To read the last post, see The week in .NET – 7/12/2016. Last week’s show was postponed and will be rescheduled at a later date. This week, we will have Rowan Miller on the show to talk about EF Core. To read the last post, see The week in .NET – 6/28/2016. Last week, we had Mukul Sabharwal on the show to talk about .NET Core usage at Bing. This week, we’ll have Lucas Meijer from Unity Technologies on the show. We are happy to announce the latest version of the .NET Framework Repair Tool that supports all versions of the .NET Framework from 3.5 SP1 to 4.6.1.
# Licensed under an MIT open source license - see LICENSE

'''
KS p-values for different properties.
'''

import numpy as np
from pandas import read_csv
import matplotlib.pyplot as p
import seaborn as sn

sn.set_context('talk')
sn.set_style('ticks')
# sn.mpl.rc("figure", figsize=(7, 9))

# Widths
widths = read_csv("width_ks_table_pvals.csv")
widths.index = widths["Unnamed: 0"]
del widths["Unnamed: 0"]

widths_arr = np.asarray(widths)
widths_arr[np.arange(0, 14), np.arange(0, 14)] = 1.0
widths_arr = -np.log10(widths_arr)

# p.figure(figsize=(12, 10))
p.subplot(111)
# p.xlabel("Widths")
p.imshow(widths_arr, origin='lower', cmap='binary', interpolation='nearest')
p.xticks(np.arange(0, 14), widths.columns, rotation=90)
# p.xticks(np.arange(0, 14), [], rotation=90)
p.yticks(np.arange(0, 14), widths.columns)
# p.figtext(0.05, 0.95, "a)", fontsize=20)
cb = p.colorbar()
cb.set_label(r'$-\log_{10}$ p-value')
cb.solids.set_edgecolor("face")

p.tight_layout()
p.show()

# Curvature
# curve = read_csv("curvature_ks_table_pvals.csv")
# curve.index = curve["Unnamed: 0"]
# del curve["Unnamed: 0"]

# curve_arr = np.asarray(curve)
# curve_arr[np.arange(0, 14), np.arange(0, 14)] = 1.0
# curve_arr = -np.log10(curve_arr)

# # p.figure(figsize=(12, 10))
# p.subplot(212)
# # p.xlabel("Curvature")
# p.imshow(curve_arr, interpolation='nearest', origin='lower', cmap='binary')
# p.xticks(np.arange(0, 14), curve.columns, rotation=90)
# p.yticks(np.arange(0, 14), curve.columns)
# p.figtext(0.05, 0.55, "b)", fontsize=20)
# cb = p.colorbar()
# cb.set_label(r'$-\log_{10}$ p-value')
# cb.solids.set_edgecolor("face")

# p.tight_layout()
# p.show()
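The only numerical step before plotting is setting the diagonal of the p-value matrix to 1 (so self-comparisons map to 0 after the transform) and taking the negative base-10 log, so small p-values become large scores. A dependency-free sketch of just that transform, on a made-up 2x2 matrix:

```python
from math import log10


def neg_log10_offdiag(pvals):
    """Set the diagonal to 1.0, then return -log10 of every entry."""
    n = len(pvals)
    return [[-log10(1.0 if i == j else pvals[i][j]) for j in range(n)]
            for i in range(n)]


matrix = [[0.5, 0.01],
          [0.01, 0.2]]
out = neg_log10_offdiag(matrix)
assert out[0][0] == 0.0 and out[1][1] == 0.0   # diagonal: -log10(1) = 0
assert round(out[0][1], 9) == 2.0               # -log10(0.01)
```

On this scale a conventional significance threshold of p = 0.05 sits at about 1.3, so darker cells in the plot are the pairs whose distributions the KS test distinguishes.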
Vampire Weekend have a new album on the way, titled FOTB. The FOTB acronym was, until recently, up for interpretation. Vampire Weekend, who haven't released music in six years, seemingly weren't keen to release the album's name either, and instead offered a reward via their Instagram for anyone who could crack the significance of the FOTB acronym. It did eventually come out, however, that FOTB stands for Father Of The Bride. With the album name released, Ezra and co. also dropped a two-track EP, featuring the singles "Harmony Hall" & "2021." Both songs will be part of the new Vampire Weekend album, and on top of that another four singles will be released before Father Of The Bride drops later this year. FOTB will be a double album (eighteen tracks long) and features The Internet's Steve Lacy. There will be three 2-song drops, one each month until the record is out:

1. hh/2021
2. s/bb
3. tl/uw

Which means two new songs will be out each month until the album's release. Whether that's March or April is ambiguous, but my bet is on April because that's my birthday month, and, more realistically, because unless the band is planning on releasing the final EP and Father Of The Bride in the same month, it'll be the month prior to the third batch of singles being released. Ezra yet again posted to his Instagram with information about the album's release date, this time as a snap of the Jumbotron at Madison Square Garden. It doesn't fit with my hypothesis above, but alas it's 9 / 16 / 2019. Now that we've gotten that out of the way, peep the first of Vampire Weekend's singles for Father Of The Bride here.
from json import loads

from django.core.exceptions import ValidationError
from django.db import models
from django.utils.functional import cached_property
from django.utils.module_loading import import_string
from django.utils.translation import ugettext_lazy as _

from ..conf import settings
from ..utils import first_upper


class Question(models.Model):
    name = models.CharField(_("name"), max_length=50, unique=True)
    question = models.CharField(_("question"), max_length=50)
    help_text = models.TextField(
        _("help text"),
        blank=True,
        null=True,
        help_text=_("This is help text. The help text is shown next to the form field."),
    )
    field = models.CharField(
        _("field"),
        max_length=150,
        choices=((key, val["name"]) for key, val in settings.LEPRIKON_QUESTION_FIELDS.items()),
    )
    field_args = models.TextField(
        _("field_args"),
        blank=True,
        default="{}",
        help_text=_("Enter valid JSON structure representing field configuration."),
    )
    active = models.BooleanField(_("active"), default=True)

    class Meta:
        app_label = "leprikon"
        verbose_name = _("additional question")
        verbose_name_plural = _("additional questions")

    def __str__(self):
        return self.question

    @cached_property
    def field_class(self):
        return import_string(settings.LEPRIKON_QUESTION_FIELDS[self.field]["class"])

    @cached_property
    def field_kwargs(self):
        return loads(self.field_args)

    @cached_property
    def field_label(self):
        return first_upper(self.question)

    def get_field(self, initial=None):
        return self.field_class(label=self.field_label, initial=initial,
                                help_text=self.help_text, **self.field_kwargs)

    def clean(self):
        try:
            self.get_field()
        except Exception as e:
            raise ValidationError({"field_args": [_("Failed to create field with given field args: {}").format(e)]})
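The pattern behind `Question` (store a dotted class path plus JSON-encoded kwargs, then instantiate the class at runtime) can be demonstrated with stdlib pieces. The `import_string` helper below is a minimal stand-in for Django's `django.utils.module_loading.import_string`, and `datetime.timedelta` is an arbitrary example class, not one of the configured question fields:

```python
from importlib import import_module
from json import loads


def import_string(dotted_path):
    """Minimal stand-in for django.utils.module_loading.import_string."""
    module_path, _, class_name = dotted_path.rpartition('.')
    return getattr(import_module(module_path), class_name)


# These two strings are what the model stores in its CharField/TextField:
field = 'datetime.timedelta'
field_args = '{"days": 2, "hours": 3}'

field_class = import_string(field)
instance = field_class(**loads(field_args))
assert instance.days == 2
```

This also shows why `clean()` simply tries to build the field and reports any exception: a typo in the JSON or a kwarg the class does not accept fails at construction time, so validation catches bad `field_args` before they are saved.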
This command updates the localisation file for the specified language tag (e.g. 'English').

Language Tag: The language tag of the localisation file you wish to reload (e.g. 'English').

Example: this command would update the English localisation files.
#!/usr/local/bin/python3.7
# for mac?
#!/usr/bin/env python3
#!/usr/bin/python3

# https://en.wikipedia.org/wiki/Alt_key

alt_a = "å"
ord_alt_a = ord(alt_a)
ord_a = ord("a")
alt_diff = ord_alt_a - ord_a
print("alt_a = %s, ord_a = %s, ord_alt_a = %s, alt_diff = %s"
      % (str(alt_a), str(ord_a), str(ord_alt_a), str(alt_diff)))

# √∫˜µ≤Ω≈ß∂ƒ©˙∆˚¬ÅÍÎÏ˝ÓÔ
# ¡£º

# this is not ord_b: why mess up the mapping? didn't make much sense for the
# chars they wanted I suspect
# I should have known when alt_diff wasn't 128 I guess
ord_b = ord_a + 1
ord_alt_b = ord_b + alt_diff
alt_b = chr(ord_alt_b)
print("alt_b = %s, ord_b = %s, ord_alt_b = %s, alt_diff = %s"
      % (str(alt_b), str(ord_b), str(ord_alt_b), str(alt_diff)))

# for i1 in range(256):
#     str2 = "%s %d 0x%x '%c'" % (str(i1), i1, i1, i1)
#     print(("str2 = %s" % str2))
#
# for i1 in range(32, 127):
#     c1 = chr(i1)
#     print "%d %s" % (i1, c1)

# 1 ¡
# ! ⁄
# 2 ™
# @ €

# ¡™£¢∞§¶•ªº
# 1234567890
#
# ⁄€‹›fifl‡‡°·‚
# !@#$%^&*()
#
# œ∑´®†\¨ˆøπ
# qwertyuiop
#
# Œ„´‰ˇÁ¨ˆØ∏
# QWERTYUIOP
Serving Suggestions: Stir-fry vegetables and vegetable dishes. General Serving Suggestions: Rub into chops, steak, ribs, oven-roasted lamb & beef; in mince meat dishes and home-made bread; sprinkled over roasted vegetables. Salt can be used in a grinder. Ingredients: Salt, olives, oreganum, rosemary, thyme & basil.
# -*- coding: utf-8 -*-
""" Tests for user authn views. """

from http.cookies import SimpleCookie
import logging
import re
from unittest import skipUnless
from urllib.parse import urlencode

import ddt
import mock
from django.conf import settings
from django.contrib import messages
from django.contrib.auth import get_user_model
from django.contrib.auth.models import AnonymousUser
from django.contrib.messages.middleware import MessageMiddleware
from django.contrib.sessions.middleware import SessionMiddleware
from django.core import mail
from django.core.files.uploadedfile import SimpleUploadedFile
from django.urls import reverse
from django.test import TestCase
from django.test.client import RequestFactory
from django.test.utils import override_settings
from django.utils.translation import ugettext as _
from edx_oauth2_provider.tests.factories import AccessTokenFactory, ClientFactory, RefreshTokenFactory
from oauth2_provider.models import AccessToken as dot_access_token
from oauth2_provider.models import RefreshToken as dot_refresh_token
from provider.oauth2.models import AccessToken as dop_access_token
from provider.oauth2.models import RefreshToken as dop_refresh_token
from testfixtures import LogCapture

from course_modes.models import CourseMode
from openedx.core.djangoapps.user_authn.views.login_form import login_and_registration_form
from openedx.core.djangoapps.oauth_dispatch.tests import factories as dot_factories
from openedx.core.djangoapps.site_configuration.tests.mixins import SiteMixin
from openedx.core.djangoapps.theming.tests.test_util import with_comprehensive_theme_context
from openedx.core.djangoapps.user_api.accounts.api import activate_account, create_account
from openedx.core.djangoapps.user_api.errors import UserAPIInternalError
from openedx.core.djangolib.js_utils import dump_js_escaped_json
from openedx.core.djangolib.markup import HTML, Text
from openedx.core.djangolib.testing.utils import CacheIsolationTestCase, skip_unless_lms
from third_party_auth.tests.testutil import ThirdPartyAuthTestMixin, simulate_running_pipeline
from util.testing import UrlResetMixin
from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase

LOGGER_NAME = 'audit'
User = get_user_model()  # pylint:disable=invalid-name

FEATURES_WITH_FAILED_PASSWORD_RESET_EMAIL = settings.FEATURES.copy()
FEATURES_WITH_FAILED_PASSWORD_RESET_EMAIL['ENABLE_PASSWORD_RESET_FAILURE_EMAIL'] = True


@skip_unless_lms
@ddt.ddt
class UserAccountUpdateTest(CacheIsolationTestCase, UrlResetMixin):
    """ Tests for views that update the user's account information. """

    USERNAME = u"heisenberg"
    ALTERNATE_USERNAME = u"walt"
    OLD_PASSWORD = u"ḅḷüëṡḳÿ"
    NEW_PASSWORD = u"🄱🄸🄶🄱🄻🅄🄴"

    OLD_EMAIL = u"walter@graymattertech.com"
    NEW_EMAIL = u"walt@savewalterwhite.com"

    INVALID_ATTEMPTS = 100
    INVALID_KEY = u"123abc"

    URLCONF_MODULES = ['student_accounts.urls']

    ENABLED_CACHES = ['default']

    def setUp(self):
        super(UserAccountUpdateTest, self).setUp()

        # Create/activate a new account
        activation_key = create_account(self.USERNAME, self.OLD_PASSWORD, self.OLD_EMAIL)
        activate_account(activation_key)

        # Login
        result = self.client.login(username=self.USERNAME, password=self.OLD_PASSWORD)
        self.assertTrue(result)

    @skipUnless(settings.ROOT_URLCONF == 'lms.urls', 'Test only valid in LMS')
    def test_password_change(self):
        # Request a password change while logged in, simulating
        # use of the password reset link from the account page
        response = self._change_password()
        self.assertEqual(response.status_code, 200)

        # Check that an email was sent
        self.assertEqual(len(mail.outbox), 1)

        # Retrieve the activation link from the email body
        email_body = mail.outbox[0].body
        result = re.search(r'(?P<url>https?://[^\s]+)', email_body)
        self.assertIsNot(result, None)
        activation_link = result.group('url')

        # Visit the activation link
        response = self.client.get(activation_link)
        self.assertEqual(response.status_code, 200)

        # Submit a new password and follow the redirect to the success page
        response = self.client.post(
            activation_link,
            # These keys are from the form on the current password reset confirmation page.
            {'new_password1': self.NEW_PASSWORD, 'new_password2': self.NEW_PASSWORD},
            follow=True
        )
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, "Your password has been reset.")

        # Log the user out to clear session data
        self.client.logout()

        # Verify that the new password can be used to log in
        result = self.client.login(username=self.USERNAME, password=self.NEW_PASSWORD)
        self.assertTrue(result)

        # Try reusing the activation link to change the password again
        # Visit the activation link again.
        response = self.client.get(activation_link)
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, "This password reset link is invalid. It may have been used already.")

        self.client.logout()

        # Verify that the old password cannot be used to log in
        result = self.client.login(username=self.USERNAME, password=self.OLD_PASSWORD)
        self.assertFalse(result)

        # Verify that the new password continues to be valid
        result = self.client.login(username=self.USERNAME, password=self.NEW_PASSWORD)
        self.assertTrue(result)

    def test_password_change_failure(self):
        with mock.patch('openedx.core.djangoapps.user_api.accounts.api.request_password_change',
                        side_effect=UserAPIInternalError):
            self._change_password()
            self.assertRaises(UserAPIInternalError)

    @override_settings(FEATURES=FEATURES_WITH_FAILED_PASSWORD_RESET_EMAIL)
    def test_password_reset_failure_email(self):
        """Test that a password reset failure email notification is sent, when enabled."""
        # Log the user out
        self.client.logout()

        bad_email = 'doesnotexist@example.com'
        response = self._change_password(email=bad_email)
        self.assertEqual(response.status_code, 200)

        # Check that an email was sent
        self.assertEqual(len(mail.outbox), 1)

        # Verify that the body contains the failed password reset message
        sent_message = mail.outbox[0]
        text_body = sent_message.body
        html_body = sent_message.alternatives[0][0]
        for
email_body in [text_body, html_body]: msg = 'However, there is currently no user account associated with your email address: {email}'.format( email=bad_email ) assert u'reset for your user account at {}'.format(settings.PLATFORM_NAME) in email_body assert 'password_reset_confirm' not in email_body, 'The link should not be added if user was not found' assert msg in email_body @ddt.data(True, False) def test_password_change_logged_out(self, send_email): # Log the user out self.client.logout() # Request a password change while logged out, simulating # use of the password reset link from the login page if send_email: response = self._change_password(email=self.OLD_EMAIL) self.assertEqual(response.status_code, 200) else: # Don't send an email in the POST data, simulating # its (potentially accidental) omission in the POST # data sent from the login page response = self._change_password() self.assertEqual(response.status_code, 400) def test_access_token_invalidation_logged_out(self): self.client.logout() user = User.objects.get(email=self.OLD_EMAIL) self._create_dop_tokens(user) self._create_dot_tokens(user) response = self._change_password(email=self.OLD_EMAIL) self.assertEqual(response.status_code, 200) self.assert_access_token_destroyed(user) def test_access_token_invalidation_logged_in(self): user = User.objects.get(email=self.OLD_EMAIL) self._create_dop_tokens(user) self._create_dot_tokens(user) response = self._change_password() self.assertEqual(response.status_code, 200) self.assert_access_token_destroyed(user) def test_password_change_inactive_user(self): # Log out the user created during test setup self.client.logout() # Create a second user, but do not activate it create_account(self.ALTERNATE_USERNAME, self.OLD_PASSWORD, self.NEW_EMAIL) # Send the view the email address tied to the inactive user response = self._change_password(email=self.NEW_EMAIL) # Expect that the activation email is still sent, # since the user may have lost the original activation email. 
self.assertEqual(response.status_code, 200) self.assertEqual(len(mail.outbox), 1) def test_password_change_no_user(self): # Log out the user created during test setup self.client.logout() with LogCapture(LOGGER_NAME, level=logging.INFO) as logger: # Send the view an email address not tied to any user response = self._change_password(email=self.NEW_EMAIL) self.assertEqual(response.status_code, 200) logger.check((LOGGER_NAME, 'INFO', 'Invalid password reset attempt')) def test_password_change_rate_limited(self): # Log out the user created during test setup, to prevent the view from # selecting the logged-in user's email address over the email provided # in the POST data self.client.logout() # Make many consecutive bad requests in an attempt to trigger the rate limiter for __ in xrange(self.INVALID_ATTEMPTS): self._change_password(email=self.NEW_EMAIL) response = self._change_password(email=self.NEW_EMAIL) self.assertEqual(response.status_code, 403) @ddt.data( ('post', 'password_change_request', []), ) @ddt.unpack def test_require_http_method(self, correct_method, url_name, args): wrong_methods = {'get', 'put', 'post', 'head', 'options', 'delete'} - {correct_method} url = reverse(url_name, args=args) for method in wrong_methods: response = getattr(self.client, method)(url) self.assertEqual(response.status_code, 405) def _change_password(self, email=None): """Request to change the user's password. 
""" data = {} if email: data['email'] = email return self.client.post(path=reverse('password_change_request'), data=data) def _create_dop_tokens(self, user=None): """Create dop access token for given user if user provided else for default user.""" if not user: user = User.objects.get(email=self.OLD_EMAIL) client = ClientFactory() access_token = AccessTokenFactory(user=user, client=client) RefreshTokenFactory(user=user, client=client, access_token=access_token) def _create_dot_tokens(self, user=None): """Create dop access token for given user if user provided else for default user.""" if not user: user = User.objects.get(email=self.OLD_EMAIL) application = dot_factories.ApplicationFactory(user=user) access_token = dot_factories.AccessTokenFactory(user=user, application=application) dot_factories.RefreshTokenFactory(user=user, application=application, access_token=access_token) def assert_access_token_destroyed(self, user): """Assert all access tokens are destroyed.""" self.assertFalse(dot_access_token.objects.filter(user=user).exists()) self.assertFalse(dot_refresh_token.objects.filter(user=user).exists()) self.assertFalse(dop_access_token.objects.filter(user=user).exists()) self.assertFalse(dop_refresh_token.objects.filter(user=user).exists()) @skip_unless_lms @ddt.ddt class LoginAndRegistrationTest(ThirdPartyAuthTestMixin, UrlResetMixin, ModuleStoreTestCase): """ Tests for the student account views that update the user's account information. 
""" shard = 7 USERNAME = "bob" EMAIL = "bob@example.com" PASSWORD = "password" URLCONF_MODULES = ['openedx.core.djangoapps.embargo'] @mock.patch.dict(settings.FEATURES, {'EMBARGO': True}) def setUp(self): # pylint: disable=arguments-differ super(LoginAndRegistrationTest, self).setUp() # Several third party auth providers are created for these tests: self.google_provider = self.configure_google_provider(enabled=True, visible=True) self.configure_facebook_provider(enabled=True, visible=True) self.configure_dummy_provider( visible=True, enabled=True, icon_class='', icon_image=SimpleUploadedFile('icon.svg', '<svg><rect width="50" height="100"/></svg>'), ) self.hidden_enabled_provider = self.configure_linkedin_provider( visible=False, enabled=True, ) self.hidden_disabled_provider = self.configure_azure_ad_provider() @ddt.data( ("signin_user", "login"), ("register_user", "register"), ) @ddt.unpack def test_login_and_registration_form(self, url_name, initial_mode): response = self.client.get(reverse(url_name)) expected_data = '"initial_mode": "{mode}"'.format(mode=initial_mode) self.assertContains(response, expected_data) @ddt.data("signin_user", "register_user") def test_login_and_registration_form_already_authenticated(self, url_name): # Create/activate a new account and log in activation_key = create_account(self.USERNAME, self.PASSWORD, self.EMAIL) activate_account(activation_key) result = self.client.login(username=self.USERNAME, password=self.PASSWORD) self.assertTrue(result) # Verify that we're redirected to the dashboard response = self.client.get(reverse(url_name)) self.assertRedirects(response, reverse("dashboard")) @ddt.data( (None, "signin_user"), (None, "register_user"), ("edx.org", "signin_user"), ("edx.org", "register_user"), ) @ddt.unpack def test_login_and_registration_form_signin_not_preserves_params(self, theme, url_name): params = [ ('course_id', 'edX/DemoX/Demo_Course'), ('enrollment_action', 'enroll'), ] # The response should not have a "Sign In" 
button with the URL # that preserves the querystring params with with_comprehensive_theme_context(theme): response = self.client.get(reverse(url_name), params, HTTP_ACCEPT="text/html") expected_url = '/login?{}'.format(self._finish_auth_url_param(params + [('next', '/dashboard')])) self.assertNotContains(response, expected_url) # Add additional parameters: params = [ ('course_id', 'edX/DemoX/Demo_Course'), ('enrollment_action', 'enroll'), ('course_mode', CourseMode.DEFAULT_MODE_SLUG), ('email_opt_in', 'true'), ('next', '/custom/final/destination') ] # Verify that this parameter is also preserved with with_comprehensive_theme_context(theme): response = self.client.get(reverse(url_name), params, HTTP_ACCEPT="text/html") expected_url = '/login?{}'.format(self._finish_auth_url_param(params)) self.assertNotContains(response, expected_url) @mock.patch.dict(settings.FEATURES, {"ENABLE_THIRD_PARTY_AUTH": False}) @ddt.data("signin_user", "register_user") def test_third_party_auth_disabled(self, url_name): response = self.client.get(reverse(url_name)) self._assert_third_party_auth_data(response, None, None, [], None) @mock.patch('openedx.core.djangoapps.user_authn.views.login_form.enterprise_customer_for_request') @mock.patch('openedx.core.djangoapps.user_api.api.enterprise_customer_for_request') @ddt.data( ("signin_user", None, None, None, False), ("register_user", None, None, None, False), ("signin_user", "google-oauth2", "Google", None, False), ("register_user", "google-oauth2", "Google", None, False), ("signin_user", "facebook", "Facebook", None, False), ("register_user", "facebook", "Facebook", None, False), ("signin_user", "dummy", "Dummy", None, False), ("register_user", "dummy", "Dummy", None, False), ( "signin_user", "google-oauth2", "Google", { 'name': 'FakeName', 'logo': 'https://host.com/logo.jpg', 'welcome_msg': 'No message' }, True ) ) @ddt.unpack def test_third_party_auth( self, url_name, current_backend, current_provider, 
expected_enterprise_customer_mock_attrs, add_user_details, enterprise_customer_mock_1, enterprise_customer_mock_2 ): params = [ ('course_id', 'course-v1:Org+Course+Run'), ('enrollment_action', 'enroll'), ('course_mode', CourseMode.DEFAULT_MODE_SLUG), ('email_opt_in', 'true'), ('next', '/custom/final/destination'), ] if expected_enterprise_customer_mock_attrs: expected_ec = { 'name': expected_enterprise_customer_mock_attrs['name'], 'branding_configuration': { 'logo': 'https://host.com/logo.jpg', 'welcome_message': expected_enterprise_customer_mock_attrs['welcome_msg'] } } else: expected_ec = None email = None if add_user_details: email = 'test@test.com' enterprise_customer_mock_1.return_value = expected_ec enterprise_customer_mock_2.return_value = expected_ec # Simulate a running pipeline if current_backend is not None: pipeline_target = "openedx.core.djangoapps.user_authn.views.login_form.third_party_auth.pipeline" with simulate_running_pipeline(pipeline_target, current_backend, email=email): response = self.client.get(reverse(url_name), params, HTTP_ACCEPT="text/html") # Do NOT simulate a running pipeline else: response = self.client.get(reverse(url_name), params, HTTP_ACCEPT="text/html") # This relies on the THIRD_PARTY_AUTH configuration in the test settings expected_providers = [ { "id": "oa2-dummy", "name": "Dummy", "iconClass": None, "iconImage": settings.MEDIA_URL + "icon.svg", "loginUrl": self._third_party_login_url("dummy", "login", params), "registerUrl": self._third_party_login_url("dummy", "register", params) }, { "id": "oa2-facebook", "name": "Facebook", "iconClass": "fa-facebook", "iconImage": None, "loginUrl": self._third_party_login_url("facebook", "login", params), "registerUrl": self._third_party_login_url("facebook", "register", params) }, { "id": "oa2-google-oauth2", "name": "Google", "iconClass": "fa-google-plus", "iconImage": None, "loginUrl": self._third_party_login_url("google-oauth2", "login", params), "registerUrl": 
self._third_party_login_url("google-oauth2", "register", params) }, ] self._assert_third_party_auth_data( response, current_backend, current_provider, expected_providers, expected_ec, add_user_details ) def _configure_testshib_provider(self, provider_name, idp_slug): """ Enable and configure the TestShib SAML IdP as a third_party_auth provider. """ kwargs = {} kwargs.setdefault('name', provider_name) kwargs.setdefault('enabled', True) kwargs.setdefault('visible', True) kwargs.setdefault('slug', idp_slug) kwargs.setdefault('entity_id', 'https://idp.testshib.org/idp/shibboleth') kwargs.setdefault('metadata_source', 'https://mock.testshib.org/metadata/testshib-providers.xml') kwargs.setdefault('icon_class', 'fa-university') kwargs.setdefault('attr_email', 'dummy-email-attr') kwargs.setdefault('max_session_length', None) self.configure_saml_provider(**kwargs) @mock.patch('django.conf.settings.MESSAGE_STORAGE', 'django.contrib.messages.storage.cookie.CookieStorage') @mock.patch('openedx.core.djangoapps.user_authn.views.login_form.enterprise_customer_for_request') @ddt.data( ( 'signin_user', 'tpa-saml', 'TestShib', ) ) @ddt.unpack def test_saml_auth_with_error( self, url_name, current_backend, current_provider, enterprise_customer_mock, ): params = [] request = RequestFactory().get(reverse(url_name), params, HTTP_ACCEPT='text/html') SessionMiddleware().process_request(request) request.user = AnonymousUser() self.enable_saml() dummy_idp = 'testshib' self._configure_testshib_provider(current_provider, dummy_idp) enterprise_customer_data = { 'uuid': '72416e52-8c77-4860-9584-15e5b06220fb', 'name': 'Dummy Enterprise', 'identity_provider': dummy_idp, } enterprise_customer_mock.return_value = enterprise_customer_data dummy_error_message = 'Authentication failed: SAML login failed ' \ '["invalid_response"] [SAML Response must contain 1 assertion]' # Add error message for error in auth pipeline MessageMiddleware().process_request(request) messages.error(request, 
dummy_error_message, extra_tags='social-auth') # Simulate a running pipeline pipeline_response = { 'response': { 'idp_name': dummy_idp } } pipeline_target = 'openedx.core.djangoapps.user_authn.views.login_form.third_party_auth.pipeline' with simulate_running_pipeline(pipeline_target, current_backend, **pipeline_response): with mock.patch('edxmako.request_context.get_current_request', return_value=request): response = login_and_registration_form(request) expected_error_message = Text(_( u'We are sorry, you are not authorized to access {platform_name} via this channel. ' u'Please contact your learning administrator or manager in order to access {platform_name}.' u'{line_break}{line_break}' u'Error Details:{line_break}{error_message}') ).format( platform_name=settings.PLATFORM_NAME, error_message=dummy_error_message, line_break=HTML('<br/>') ) self._assert_saml_auth_data_with_error( response, current_backend, current_provider, expected_error_message ) def test_hinted_login(self): params = [("next", "/courses/something/?tpa_hint=oa2-google-oauth2")] response = self.client.get(reverse('signin_user'), params, HTTP_ACCEPT="text/html") self.assertContains(response, '"third_party_auth_hint": "oa2-google-oauth2"') tpa_hint = self.hidden_enabled_provider.provider_id params = [("next", "/courses/something/?tpa_hint={0}".format(tpa_hint))] response = self.client.get(reverse('signin_user'), params, HTTP_ACCEPT="text/html") self.assertContains(response, '"third_party_auth_hint": "{0}"'.format(tpa_hint)) tpa_hint = self.hidden_disabled_provider.provider_id params = [("next", "/courses/something/?tpa_hint={0}".format(tpa_hint))] response = self.client.get(reverse('signin_user'), params, HTTP_ACCEPT="text/html") self.assertNotIn(response.content, tpa_hint) @ddt.data( ('signin_user', 'login'), ('register_user', 'register'), ) @ddt.unpack def test_hinted_login_dialog_disabled(self, url_name, auth_entry): """Test that the dialog doesn't show up for hinted logins when disabled. 
""" self.google_provider.skip_hinted_login_dialog = True self.google_provider.save() params = [("next", "/courses/something/?tpa_hint=oa2-google-oauth2")] response = self.client.get(reverse(url_name), params, HTTP_ACCEPT="text/html") expected_url = '/auth/login/google-oauth2/?auth_entry={}&next=%2Fcourses'\ '%2Fsomething%2F%3Ftpa_hint%3Doa2-google-oauth2'.format(auth_entry) self.assertRedirects( response, expected_url, target_status_code=302 ) @override_settings(FEATURES=dict(settings.FEATURES, THIRD_PARTY_AUTH_HINT='oa2-google-oauth2')) @ddt.data( 'signin_user', 'register_user', ) def test_settings_tpa_hinted_login(self, url_name): """ Ensure that settings.FEATURES['THIRD_PARTY_AUTH_HINT'] can set third_party_auth_hint. """ params = [("next", "/courses/something/")] response = self.client.get(reverse(url_name), params, HTTP_ACCEPT="text/html") self.assertContains(response, '"third_party_auth_hint": "oa2-google-oauth2"') # THIRD_PARTY_AUTH_HINT can be overridden via the query string tpa_hint = self.hidden_enabled_provider.provider_id params = [("next", "/courses/something/?tpa_hint={0}".format(tpa_hint))] response = self.client.get(reverse(url_name), params, HTTP_ACCEPT="text/html") self.assertContains(response, '"third_party_auth_hint": "{0}"'.format(tpa_hint)) # Even disabled providers in the query string will override THIRD_PARTY_AUTH_HINT tpa_hint = self.hidden_disabled_provider.provider_id params = [("next", "/courses/something/?tpa_hint={0}".format(tpa_hint))] response = self.client.get(reverse(url_name), params, HTTP_ACCEPT="text/html") self.assertNotIn(response.content, tpa_hint) @override_settings(FEATURES=dict(settings.FEATURES, THIRD_PARTY_AUTH_HINT='oa2-google-oauth2')) @ddt.data( ('signin_user', 'login'), ('register_user', 'register'), ) @ddt.unpack def test_settings_tpa_hinted_login_dialog_disabled(self, url_name, auth_entry): """Test that the dialog doesn't show up for hinted logins when disabled via settings.THIRD_PARTY_AUTH_HINT. 
""" self.google_provider.skip_hinted_login_dialog = True self.google_provider.save() params = [("next", "/courses/something/")] response = self.client.get(reverse(url_name), params, HTTP_ACCEPT="text/html") expected_url = '/auth/login/google-oauth2/?auth_entry={}&next=%2Fcourses'\ '%2Fsomething%2F%3Ftpa_hint%3Doa2-google-oauth2'.format(auth_entry) self.assertRedirects( response, expected_url, target_status_code=302 ) @mock.patch('openedx.core.djangoapps.user_authn.views.login_form.enterprise_customer_for_request') @ddt.data( ('signin_user', False, None, None), ('register_user', False, None, None), ('signin_user', True, 'Fake EC', 'http://logo.com/logo.jpg'), ('register_user', True, 'Fake EC', 'http://logo.com/logo.jpg'), ('signin_user', True, 'Fake EC', None), ('register_user', True, 'Fake EC', None), ) @ddt.unpack def test_enterprise_register(self, url_name, ec_present, ec_name, logo_url, mock_get_ec): """ Verify that when an EnterpriseCustomer is received on the login and register views, the appropriate sidebar is rendered. 
""" if ec_present: mock_get_ec.return_value = { 'name': ec_name, 'branding_configuration': {'logo': logo_url} } else: mock_get_ec.return_value = None response = self.client.get(reverse(url_name), HTTP_ACCEPT="text/html") enterprise_sidebar_div_id = u'enterprise-content-container' if not ec_present: self.assertNotContains(response, text=enterprise_sidebar_div_id) else: self.assertContains(response, text=enterprise_sidebar_div_id) welcome_message = settings.ENTERPRISE_SPECIFIC_BRANDED_WELCOME_TEMPLATE expected_message = Text(welcome_message).format( start_bold=HTML('<b>'), end_bold=HTML('</b>'), line_break=HTML('<br/>'), enterprise_name=ec_name, platform_name=settings.PLATFORM_NAME, privacy_policy_link_start=HTML("<a href='{pp_url}' target='_blank'>").format( pp_url=settings.MKTG_URLS.get('PRIVACY', 'https://www.edx.org/edx-privacy-policy') ), privacy_policy_link_end=HTML("</a>"), ) self.assertContains(response, expected_message) if logo_url: self.assertContains(response, logo_url) def test_enterprise_cookie_delete(self): """ Test that enterprise cookies are deleted in login/registration views. Cookies must be deleted in login/registration views so that *default* login/registration branding is displayed to subsequent requests from non-enterprise customers. """ cookies = SimpleCookie() cookies[settings.ENTERPRISE_CUSTOMER_COOKIE_NAME] = 'test-enterprise-customer' response = self.client.get(reverse('signin_user'), HTTP_ACCEPT="text/html", cookies=cookies) self.assertIn(settings.ENTERPRISE_CUSTOMER_COOKIE_NAME, response.cookies) enterprise_cookie = response.cookies[settings.ENTERPRISE_CUSTOMER_COOKIE_NAME] self.assertEqual(enterprise_cookie['domain'], settings.BASE_COOKIE_DOMAIN) self.assertEqual(enterprise_cookie.value, '') @override_settings(SITE_NAME=settings.MICROSITE_TEST_HOSTNAME) def test_microsite_uses_old_login_page(self): # Retrieve the login page from a microsite domain # and verify that we're served the old page. 
resp = self.client.get( reverse("signin_user"), HTTP_HOST=settings.MICROSITE_TEST_HOSTNAME ) self.assertContains(resp, "Log into your Test Site Account") self.assertContains(resp, "login-form") def test_microsite_uses_old_register_page(self): # Retrieve the register page from a microsite domain # and verify that we're served the old page. resp = self.client.get( reverse("register_user"), HTTP_HOST=settings.MICROSITE_TEST_HOSTNAME ) self.assertContains(resp, "Register for Test Site") self.assertContains(resp, "register-form") def test_login_registration_xframe_protected(self): resp = self.client.get( reverse("register_user"), {}, HTTP_REFERER="http://localhost/iframe" ) self.assertEqual(resp['X-Frame-Options'], 'DENY') self.configure_lti_provider(name='Test', lti_hostname='localhost', lti_consumer_key='test_key', enabled=True) resp = self.client.get( reverse("register_user"), HTTP_REFERER="http://localhost/iframe" ) self.assertEqual(resp['X-Frame-Options'], 'ALLOW') def _assert_third_party_auth_data(self, response, current_backend, current_provider, providers, expected_ec, add_user_details=False): """Verify that third party auth info is rendered correctly in a DOM data attribute. """ finish_auth_url = None if current_backend: finish_auth_url = reverse("social:complete", kwargs={"backend": current_backend}) + "?" auth_info = { "currentProvider": current_provider, "providers": providers, "secondaryProviders": [], "finishAuthUrl": finish_auth_url, "errorMessage": None, "registerFormSubmitButtonText": "Create Account", "syncLearnerProfileData": False, "pipeline_user_details": {"email": "test@test.com"} if add_user_details else {} } if expected_ec is not None: # If we set an EnterpriseCustomer, third-party auth providers ought to be hidden. 
auth_info['providers'] = [] auth_info = dump_js_escaped_json(auth_info) expected_data = '"third_party_auth": {auth_info}'.format( auth_info=auth_info ) self.assertContains(response, expected_data) def _assert_saml_auth_data_with_error( self, response, current_backend, current_provider, expected_error_message ): """ Verify that third party auth info is rendered correctly in a DOM data attribute. """ finish_auth_url = None if current_backend: finish_auth_url = reverse('social:complete', kwargs={'backend': current_backend}) + '?' auth_info = { 'currentProvider': current_provider, 'providers': [], 'secondaryProviders': [], 'finishAuthUrl': finish_auth_url, 'errorMessage': expected_error_message, 'registerFormSubmitButtonText': 'Create Account', 'syncLearnerProfileData': False, 'pipeline_user_details': {'response': {'idp_name': 'testshib'}} } auth_info = dump_js_escaped_json(auth_info) expected_data = '"third_party_auth": {auth_info}'.format( auth_info=auth_info ) self.assertContains(response, expected_data) def _third_party_login_url(self, backend_name, auth_entry, login_params): """Construct the login URL to start third party authentication. """ return u"{url}?auth_entry={auth_entry}&{param_str}".format( url=reverse("social:begin", kwargs={"backend": backend_name}), auth_entry=auth_entry, param_str=self._finish_auth_url_param(login_params), ) def _finish_auth_url_param(self, params): """ Make the next=... URL parameter that indicates where the user should go next. 
>>> _finish_auth_url_param([('next', '/dashboard')]) 'next=%2Faccount%2Ffinish_auth%3Fnext%3D%252Fdashboard' """ return urlencode({ 'next': '/account/finish_auth?{}'.format(urlencode(params)) }) def test_english_by_default(self): response = self.client.get(reverse('signin_user'), [], HTTP_ACCEPT="text/html") self.assertEqual(response['Content-Language'], 'en') def test_unsupported_language(self): response = self.client.get(reverse('signin_user'), [], HTTP_ACCEPT="text/html", HTTP_ACCEPT_LANGUAGE="ts-zx") self.assertEqual(response['Content-Language'], 'en') def test_browser_language(self): response = self.client.get(reverse('signin_user'), [], HTTP_ACCEPT="text/html", HTTP_ACCEPT_LANGUAGE="es") self.assertEqual(response['Content-Language'], 'es-419') def test_browser_language_dialect(self): response = self.client.get(reverse('signin_user'), [], HTTP_ACCEPT="text/html", HTTP_ACCEPT_LANGUAGE="es-es") self.assertEqual(response['Content-Language'], 'es-es') @skip_unless_lms @override_settings(SITE_NAME=settings.MICROSITE_LOGISTRATION_HOSTNAME) class MicrositeLogistrationTests(TestCase): """ Test to validate that microsites can display the logistration page """ def test_login_page(self): """ Make sure that we get the expected logistration page on our specialized microsite """ resp = self.client.get( reverse('signin_user'), HTTP_HOST=settings.MICROSITE_LOGISTRATION_HOSTNAME ) self.assertEqual(resp.status_code, 200) self.assertIn('<div id="login-and-registration-container"', resp.content) def test_registration_page(self): """ Make sure that we get the expected logistration page on our specialized microsite """ resp = self.client.get( reverse('register_user'), HTTP_HOST=settings.MICROSITE_LOGISTRATION_HOSTNAME ) self.assertEqual(resp.status_code, 200) self.assertIn('<div id="login-and-registration-container"', resp.content) @override_settings(SITE_NAME=settings.MICROSITE_TEST_HOSTNAME) def test_no_override(self): """ Make sure we get the old style login/registration if we don't override 
""" resp = self.client.get( reverse('signin_user'), HTTP_HOST=settings.MICROSITE_TEST_HOSTNAME ) self.assertEqual(resp.status_code, 200) self.assertNotIn('<div id="login-and-registration-container"', resp.content) resp = self.client.get( reverse('register_user'), HTTP_HOST=settings.MICROSITE_TEST_HOSTNAME ) self.assertEqual(resp.status_code, 200) self.assertNotIn('<div id="login-and-registration-container"', resp.content) @skip_unless_lms class AccountCreationTestCaseWithSiteOverrides(SiteMixin, TestCase): """ Test cases for Feature flag ALLOW_PUBLIC_ACCOUNT_CREATION which when turned off disables the account creation options in lms """ def setUp(self): """Set up the tests""" super(AccountCreationTestCaseWithSiteOverrides, self).setUp() # Set the feature flag ALLOW_PUBLIC_ACCOUNT_CREATION to False self.site_configuration_values = { 'ALLOW_PUBLIC_ACCOUNT_CREATION': False } self.site_domain = 'testserver1.com' self.set_up_site(self.site_domain, self.site_configuration_values) def test_register_option_login_page(self): """ Navigate to the login page and check the Register option is hidden when ALLOW_PUBLIC_ACCOUNT_CREATION flag is turned off """ response = self.client.get(reverse('signin_user')) self.assertNotIn('<a class="btn-neutral" href="/register?next=%2Fdashboard">Register</a>', response.content)
It isn’t all love for “Love & Hip Hop Hollywood” star Yung Berg. The super producer was arrested for domestic violence after allegedly choking an unnamed woman and causing her “obstruction of breathing.” Berg and the victim were at the Gershwin Hotel when it all happened. According to TMZ, Berg “allegedly grabbed her by the neck, threw her to the floor, dragged her by her hair and hit her in the face.” The woman complained of pain and Berg was taken into custody. Berg was photographed with his cast mate and rumored girlfriend Masika this morning, before the incident. He captioned the photo, “Bae.” Speculation leads us to believe she may be the unnamed woman in the altercation. She recently made Berg her #MCM. Berg’s Lothario reputation on “LHHH” doesn’t help his case either. After ending his “relationship” with Hazel E, he tried to bed her friend-turned-enemy Teairra Mari, then moved on to Hazel’s frenemy Masika. There’s no telling who his girlfriend actually is. And, in case you’re not yet convinced Berg is a jerk, it was recently exposed that he owes $86,000 in back child support. In other “Love & Hip Hop Hollywood” news, Ray J’s girlfriend Princess Love attacked Ray’s former assistant Morgan during the “Love & Hip Hop Hollywood” reunion. Rumor has it, Morgan claimed Ray had physically abused her, so Princess punched her. Sigh. The ratchetivity has reached new levels. UP NEXT: Did Chris Brown Apologize For Blasting Tamar & Adrienne Bailon?
import sys import re import nids from . import regex DEBUG = False #initialize on every call of extract_flows ts = None # request timestamp requestdata = None # buffer for requests from open connections responsedata = None # buffer for responses from open connections requestcounter = None http_req = None # contains data from closed connections NIDS_END_STATES = (nids.NIDS_CLOSE, nids.NIDS_TIMEOUT, nids.NIDS_RESET) class FlowHeader(object): def __init__(self, ts_request_start, ts_request_finish, ts_response_start, ts_response_finish, srcip, sport, dstip, dport): self.ts_request_start = ts_request_start self.ts_request_finish = ts_request_finish self.ts_response_start = ts_response_start self.ts_response_finish = ts_response_finish self.srcip = srcip self.sport = sport self.dstip = dstip self.dport = dport def __eq__(self, other): return (self.ts_request_start, self.ts_request_finish, self.ts_response_start, self.ts_response_finish, self.srcip, self.sport, self.dstip, self.dport) == \ (other.ts_request_start, other.ts_request_finish, other.ts_response_start, other.ts_response_finish, other.srcip, other.sport, other.dstip, other.dport) def __hash__(self): return hash((self.ts_request_start, self.ts_request_finish, self.ts_response_start, self.ts_response_finish, self.srcip, self.sport, self.dstip, self.dport)) def __repr__(self): return ("FlowHeader(ts_request_start=%r,ts_request_finish=%r,ts_response_start=%r" ",ts_response_finish=%r,srcip=%r,sport=%r,dstip=%r,dport=%r)") % \ (self.ts_request_start, self.ts_request_finish, self.ts_response_start, self.ts_response_finish, self.srcip, self.sport, self.dstip, self.dport) def __str__(self): return self.__repr__() #http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.6.1 #finds the ending index of a chunked response starting at the index start def find_chunk_end(h, start): matches = re.finditer(regex.END_CHUNK_REGEX, responsedata[h]) end_size_line = -1 for m in matches: if m.start() > start: #we subtract 2 because if 
there is no trailer after the #last chuck the first CRLF of the ending double CRLF is #the CRLF at the end of the regex end_size_line = m.end() - 2 break if end_size_line != -1: matches = re.finditer('\r\n\r\n', responsedata[h]) for m in matches: if m.start() >= end_size_line: return m.end() return None def get_response_headers(h, start): return get_headers(responsedata, h, start) def get_request_headers(h, start): return get_headers(requestdata, h, start) def get_headers(buf, h, start): header_start = None header_end = None matches = re.finditer('\r\n\r\n', responsedata[h]) for m in matches: if m.start() > start: header_end = m.end() break matches = re.finditer('\r\n', responsedata[h]) for m in matches: if m.start() > start: header_start = m.end() break if header_start is not None and header_end is not None: return buf[h][header_start:header_end] return None def split_responses(h): matches = re.finditer(regex.HTTP_RESP_REGEX, responsedata[h]) responses = list() start = -1 for m in matches: end = -1 if start != -1 and start < m.start(): headers = get_response_headers(h, start) if "Transfer-Encoding: chunked" in headers: end = find_chunk_end(h, start) else : end = m.start() responses.append(responsedata[h][start : end]) else: end = m.start() start = end responses.append(responsedata[h][start:]) return responses def split_requests(h): matches = re.finditer(regex.HTTP_REQ_REGEX, requestdata[h]) requests = list() start = -1 for m in matches: if start != -1: requests.append(requestdata[h][start : m.start()]) start = m.start() requests.append(requestdata[h][start:]) return requests def is_http_response(data): m = re.search(regex.HTTP_RESP_REGEX, data) if m: if m.start() == 0: return True return False def is_http_request(data): m = re.search(regex.HTTP_REQ_REGEX, data) if m: if m.start() == 0: return True return False def num_requests(h): return len(re.findall(regex.HTTP_REQ_REGEX, requestdata[h])) def num_responses(h): matches = re.finditer(regex.HTTP_RESP_REGEX, 
responsedata[h]) resp_count = 0 start = -1 for m in matches: end = -1 if start != -1 and start < m.start(): headers = get_response_headers(h, start) if "Transfer-Encoding: chunked" in headers: end = find_chunk_end(h, start) else: end = m.start() resp_count += 1 else: end = m.start() start = end if len(responsedata[h][start:].strip()) > 0: resp_count += 1 return resp_count # returns a list of tuple, each tuple contains (count, request, response) def add_reconstructed_flow(h): retval = list() requests = list() responses = list() if num_requests(h) > 1: requests = split_requests(h) else: requests.append(requestdata[h]) if num_responses(h) > 1: responses = split_responses(h) else: responses.append(responsedata[h]) maxlen = 0 if len(requests) > len(responses): maxlen = len(requests) else: maxlen = len(responses) if DEBUG and len(requests) != len(responses): print "Unequal number of requests and responses. " + str(h) print(str(len(requests)) + " " + str(len(responses)) + "\n") for i in range(maxlen): countval = None reqval = None respval = None if i < len(requests) and len(requests[i].strip()) > 0 and is_http_request(requests[i]): reqval = requests[i] if i < len(responses) and len(responses[i].strip()) > 0 and is_http_response(responses[i]): respval = responses[i] if reqval or respval: countval = requestcounter[h] requestcounter[h] = requestcounter[h] + 1 if countval != None: if DEBUG: print "Appending request " + str(countval) + " to " + str(h) retval.append((countval, reqval, respval)) requestdata[h] = '' responsedata[h] = '' if DEBUG: print "Tuples in list for " + str(h) + " = " + str(len(retval)) return retval def handle_tcp_stream(tcp): global DEBUG # print "tcps -", str(tcp.addr), " state:", tcp.nids_state if tcp.nids_state == nids.NIDS_JUST_EST: # new tcp flow ((srcip, sport), (dstip, dport)) = tcp.addr h = (srcip, sport, dstip, dport) #(req_start, req_stop, resp_start, resp_stop) ts[h] = [nids.get_pkt_ts(), 0, 0 ,0] requestcounter[h] = 0 requestdata[h] = '' 
responsedata[h] = '' if DEBUG: print "Reconstructing TCP flow:", tcp.addr tcp.client.collect = 1 # collects server -> client data tcp.server.collect = 1 # collects client -> server data elif tcp.nids_state == nids.NIDS_DATA: # keep all of the stream's new data tcp.discard(0) ((srcip, sport), (dstip, dport)) = tcp.addr h = (srcip, sport, dstip, dport) if requestdata.has_key(h): client2server_data = tcp.server.data[tcp.server.count-tcp.server.count_new:tcp.server.count] server2client_data = tcp.client.data[tcp.client.count-tcp.client.count_new:tcp.client.count] #this if statement is necessary to ensure proper ordering of request/response pairs in the output if is_http_request(client2server_data): if len(requestdata[h]) > 0: if DEBUG: print "Added request/response..." k = FlowHeader(ts[h][0], ts[h][1], ts[h][2], ts[h][3], h[0], h[1], h[2], h[3]) http_req[k] = add_reconstructed_flow(h) ts[h] = [nids.get_pkt_ts(), 0, 0 ,0] if len(client2server_data) > 0: #sets the start timestamp for request if(requestdata[h] == ''): ts[h][0] = nids.get_pkt_ts() requestdata[h] = requestdata[h] + client2server_data #sets the end timestamp for request ts[h][1] = nids.get_pkt_ts() if len(server2client_data) > 0: #sets the start timestamp for response if(responsedata[h] == ''): ts[h][2] = nids.get_pkt_ts() responsedata[h] = responsedata[h] + server2client_data #sets the end timestamp for response ts[h][3] = nids.get_pkt_ts() elif tcp.nids_state in NIDS_END_STATES: ((srcip, sport), (dstip, dport)) = tcp.addr if DEBUG: print "End of flow:", tcp.addr h = (srcip, sport, dstip, dport) if requestdata.has_key(h) and is_http_request(requestdata[h]) and is_http_response(responsedata[h]): k = FlowHeader(ts[h][0], ts[h][1], ts[h][2], ts[h][3], h[0], h[1], h[2], h[3]) http_req[k] = add_reconstructed_flow(h) else: if DEBUG: print "Failed to add flow" print str(h) print "has_key? " + str(requestdata.has_key(h)) print "is_http_request? " + str(is_http_request(requestdata[h])) print "is_http_response? 
" + str(is_http_response(responsedata[h])) del ts[h] del requestdata[h] del responsedata[h] del requestcounter[h] # adds the remaining open connections to the http_req dictionary def finalize_http_flows(): for h in requestdata.keys(): finalize_http_flow_header(ts[h]) k = FlowHeader(ts[h][0], ts[h][1], ts[h][2], ts[h][3], h[0], h[1], h[2], h[3]) if DEBUG: print "Finalizing flow", k http_req[k] = add_reconstructed_flow(h) for h in http_req.keys(): if len(http_req[h]) < 1: del http_req[h] if DEBUG: print "Num of flows " + str(len(http_req.keys())) # sets the empty timestamp values for the remaining open connections def finalize_http_flow_header(header): for i in range(len(header)): if header[i] == 0: header[i] = nids.get_pkt_ts() # prints flow headers in timestamp order def print_flows(http_req): for fh in sorted(http_req.keys(), key=lambda x: x.ts): print str(fh) + " " + str(len(http_req[fh])) # if DEBUG: # for tup in http_req[fh]: # print tup # extracts the http flows from a pcap file # returns a dictionary of the reconstructed flows, keys are FlowHeader objects # values are lists of tuples of the form (count, request, response) def extract_flows(pcap_file): global ts, requestdata, responsedata, requestcounter, http_req ts, requestdata, responsedata, requestcounter, http_req = \ dict([]), dict([]), dict([]), dict([]), dict([]) nids.param("tcp_workarounds", 1) nids.param("pcap_filter", "tcp") # bpf restrict to TCP only, note nids.param("scan_num_hosts", 0) # disable portscan detection nids.chksum_ctl([('0.0.0.0/0', False)]) # disable checksumming nids.param("filename", pcap_file) nids.init() nids.register_tcp(handle_tcp_stream) # print "pid", os.getpid() if DEBUG: print "Reading from pcap file:", pcap_file try: nids.run() except nids.error, e: print "nids/pcap error: ", pcap_file + " ", e except KeyboardInterrupt: print "Control C!" 
sys.exit(0) except Exception, e: print "Exception (runtime error in user callback?): ", pcap_file + " ", e finalize_http_flows() if DEBUG: print "Done!\n" return http_req
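The `find_chunk_end` logic above locates the zero-size chunk that terminates a chunked HTTP body, then scans forward to the closing blank line. A standalone sketch of the same idea on a literal chunked response (the regex and sample data here are illustrative stand-ins, not the module's `regex.END_CHUNK_REGEX`):

```python
import re

# A minimal chunked HTTP response: headers, two chunks, and a terminating
# zero-size chunk with no trailer.
resp = (
    "HTTP/1.1 200 OK\r\n"
    "Transfer-Encoding: chunked\r\n"
    "\r\n"
    "4\r\nWiki\r\n"
    "5\r\npedia\r\n"
    "0\r\n"
    "\r\n"
)

# Illustrative stand-in for END_CHUNK_REGEX: a zero-size chunk size line.
end_chunk = re.compile(r"\r\n0\r\n")

def chunk_body_end(data, start=0):
    """Return the index just past the blank line that closes the chunked body."""
    m = end_chunk.search(data, start)
    if m is None:
        return None
    # After the 0-size line, the body ends at the next CRLF CRLF (possibly
    # preceded by trailer headers, per RFC 2616 section 3.6.1).
    end = data.find("\r\n\r\n", m.end() - 2)
    return None if end == -1 else end + 4

print(chunk_body_end(resp))  # index of the first byte past the response
```

With no trailer present, the blank line found is the CRLF pair immediately after the `0` size line, mirroring the `m.end() - 2` adjustment in the module.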
TWO Of Richie’s Favourites To Walk Out On Him In ONE WEEK!

If you’ve been watching The Bachelor, no doubt you’re as addicted as the rest of the country. ‘Who will he CHOOSE?!’ I can hear you screaming. Well, the good news is that this week we’re about to get a couple of steps closer to knowing.

According to a report in NW, one of Richie’s bachelorettes will reject his rose during a ceremony, while another is evicted after a ‘failed one-on-one date’. It is not yet known who Richie sends packing following their alone time.

One of the beauties told NW that she doesn't believe Richie will miss her presence because 'he has, like 10 other girlfriends...he doesn't need me'.

According to TV Week magazine, one of the mystery women apologises after rejecting the rose and asking him to speak outside. 'I just explained that the environment - where there are just so many girls and it's very competitive - was distracting,' she told the publication.

This news comes just a week after New Idea reported that Megan Marx will leave Richie ‘devastated’ after walking out on him on the show. 'She [Megan] admitted she didn't think Richie was being true to himself by keeping drama queen Keira [Maguire] in for ratings,' a source told the publication.

Throughout the season, bachelorettes have hinted that Richie's body odour has been a mood killer. An insider told New Idea earlier in the year: ‘Many of them confessed they wouldn’t have applied for the show if they’d known Richie would be their prize, and a fair few expressed their desire to leave more than once'.
# Copyright (c) 2014, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree. An additional grant
# of patent rights can be found in the PATENTS file in the same directory.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from collections import defaultdict
from functools import reduce

from mcrouter.test.mock_servers import MockServer
from mcrouter.test.McrouterTestCase import McrouterTestCase


class EchoServer(MockServer):
    """A server that responds to get requests with its port number."""

    def runServer(self, client_socket, client_address):
        while not self.is_stopped():
            cmd = client_socket.recv(1000)
            if not cmd:
                return
            if cmd.startswith('get'):
                client_socket.send('VALUE hit 0 %d\r\n%s\r\nEND\r\n' %
                                   (len(str(self.port)), str(self.port)))


class TestWCH3(McrouterTestCase):
    config = './mcrouter/test/test_wch3.json'
    extra_args = []

    def setUp(self):
        for i in range(8):
            self.add_server(EchoServer())
        self.mcrouter = self.add_mcrouter(
            self.config, '/test/A/', extra_args=self.extra_args)

    def test_wch3(self):
        valid_ports = []
        for i in [1, 2, 4, 5, 6, 7]:
            valid_ports.append(self.get_open_ports()[i])
        invalid_ports = []
        for i in [0, 3]:
            invalid_ports.append(self.get_open_ports()[i])

        request_counts = defaultdict(int)
        n = 20000
        for i in range(0, n):
            key = 'someprefix:{}:|#|id=123'.format(i)
            resp = int(self.mcrouter.get(key))
            respB = int(self.mcrouter.get('/test/B/' + key))
            respC = int(self.mcrouter.get('/test/C/' + key))
            self.assertEqual(resp, respB)
            self.assertEqual(resp, respC)
            request_counts[resp] += 1
            self.assertTrue(resp in valid_ports)
            self.assertTrue(resp not in invalid_ports)

        # Make sure that the fraction of keys to a server are what we expect
        # within a tolerance
        expected_fractions = {
            0: 0,
            1: 1,
            2: 1,
            3: 0.0,
            4: 0.5,
            5: 1,
            6: 0.3,
            7: 0.5
        }
        tolerance = 0.075
        total_weight = reduce(lambda x, y: x + y,
                              map(lambda x: x[1], expected_fractions.items()))
        for i, weight in expected_fractions.items():
            expected_frac = weight / total_weight
            port = int(self.get_open_ports()[i])
            measured_frac = request_counts[port] / float(n)
            if expected_frac > 0:
                delta = measured_frac - expected_frac
                self.assertTrue(abs(delta) <= tolerance)
            else:
                self.assertEqual(measured_frac, 0.0)
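The tolerance check at the end of the test compares each server's measured key fraction against its weight divided by the total weight. A quick standalone sketch of that normalization arithmetic (weights copied from the test above):

```python
# Per-server weights, as in TestWCH3; expected key fraction is weight / total.
expected_fractions = {0: 0, 1: 1, 2: 1, 3: 0.0, 4: 0.5, 5: 1, 6: 0.3, 7: 0.5}

total_weight = sum(expected_fractions.values())  # 4.3

normalized = {i: w / total_weight for i, w in expected_fractions.items()}

# Sanity checks: fractions sum to 1, and zero-weight servers expect no keys.
assert abs(sum(normalized.values()) - 1.0) < 1e-9
assert normalized[0] == 0.0
print(normalized[1])  # each weight-1 server expects roughly 23% of keys
```

This is why a server with weight 0.5 receives about half the traffic of a weight-1 server, rather than 50% of all keys.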
Family of pendant, wall and ceiling lamps, with elements in extruded aluminium. Opal polycarbonate diffusers provide a warm, ambient light. The wall versions have adjustable elements, and the Wall 2 version is also available with a push dimmer on the body of the lamp. In the ceiling version, the fixed ceiling element rotates through 340° while the other element is adjustable by 90°.
################################################################################

class Color:
    """Shortcuts for the ANSI escape sequences to control formatting, color,
    etc. on text terminals. Use it like this:

        print Color.red + "Hello world" + Color.end
    """

    # Special #
    end = '\033[0m'

    # Regular #
    blk = '\033[0;30m'  # Black
    red = '\033[0;31m'  # Red
    grn = '\033[0;32m'  # Green
    ylw = '\033[0;33m'  # Yellow
    blu = '\033[0;34m'  # Blue
    pur = '\033[0;35m'  # Purple
    cyn = '\033[0;36m'  # Cyan
    wht = '\033[0;37m'  # White

    # Bold #
    bold = '\033[1m'
    b_blk = '\033[1;30m'  # Black
    b_red = '\033[1;31m'  # Red
    b_grn = '\033[1;32m'  # Green
    b_ylw = '\033[1;33m'  # Yellow
    b_blu = '\033[1;34m'  # Blue
    b_pur = '\033[1;35m'  # Purple
    b_cyn = '\033[1;36m'  # Cyan
    b_wht = '\033[1;37m'  # White

    # Light #
    light = '\033[2m'
    l_blk = '\033[2;30m'  # Black
    l_red = '\033[2;31m'  # Red
    l_grn = '\033[2;32m'  # Green
    l_ylw = '\033[2;33m'  # Yellow
    l_blu = '\033[2;34m'  # Blue
    l_pur = '\033[2;35m'  # Purple
    l_cyn = '\033[2;36m'  # Cyan
    l_wht = '\033[2;37m'  # White

    # Italic #
    italic = '\033[3m'
    i_blk = '\033[3;30m'  # Black
    i_red = '\033[3;31m'  # Red
    i_grn = '\033[3;32m'  # Green
    i_ylw = '\033[3;33m'  # Yellow
    i_blu = '\033[3;34m'  # Blue
    i_pur = '\033[3;35m'  # Purple
    i_cyn = '\033[3;36m'  # Cyan
    i_wht = '\033[3;37m'  # White

    # Underline #
    underline = '\033[4m'
    u_blk = '\033[4;30m'  # Black
    u_red = '\033[4;31m'  # Red
    u_grn = '\033[4;32m'  # Green
    u_ylw = '\033[4;33m'  # Yellow
    u_blu = '\033[4;34m'  # Blue
    u_pur = '\033[4;35m'  # Purple
    u_cyn = '\033[4;36m'  # Cyan
    u_wht = '\033[4;37m'  # White

    # Glitter #
    flash = '\033[5m'
    g_blk = '\033[5;30m'  # Black
    g_red = '\033[5;31m'  # Red
    g_grn = '\033[5;32m'  # Green
    g_ylw = '\033[5;33m'  # Yellow
    g_blu = '\033[5;34m'  # Blue
    g_pur = '\033[5;35m'  # Purple
    g_cyn = '\033[5;36m'  # Cyan
    g_wht = '\033[5;37m'  # White

    # Fill #
    f_blk = '\033[40m'  # Black
    f_red = '\033[41m'  # Red
    f_grn = '\033[42m'  # Green
    f_ylw = '\033[43m'  # Yellow
    f_blu = '\033[44m'  # Blue
    f_pur = '\033[45m'  # Purple
    f_cyn = '\033[46m'  # Cyan
    f_wht = '\033[47m'  # White
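A quick self-contained check of how these escape sequences compose; the class here is a trimmed copy of a few attributes from the Color class above so the snippet runs on its own, and `colorize` is a hypothetical helper:

```python
class Color:
    """Subset of the ANSI escape shortcuts defined above."""
    end = '\033[0m'
    red = '\033[0;31m'
    b_grn = '\033[1;32m'

def colorize(text, color):
    # Wrap text in a color sequence and reset formatting afterwards,
    # so later output is not accidentally colored.
    return color + text + Color.end

msg = colorize("Hello world", Color.red)
print(msg)        # renders red on an ANSI-capable terminal
print(repr(msg))  # '\x1b[0;31mHello world\x1b[0m'
```

Always appending `Color.end` is the important habit: without the reset, the terminal keeps the last attribute active for all subsequent text.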
The Hudson Catholic baseball team will count heavily on its seniors in order to contend again this year. Front row, from left, are Anneury Familia, Kris Salinas, Tino Salgado and Joshua Ortiz. Back row, from left, are Ethan Lopez, Dewayne Sims, head coach Alberto Vasquez, A.J. Perrenod and Kris Jacobsen. The Hudson Catholic Regional High School baseball team should definitely be one of Hudson County’s premier squads again this season. That much is guaranteed. The Hawks are perennially contending for both Hudson County Tournament and NJSIAA Non-Public B North championships. But what isn’t a sure thing is the manner in which the Hawks will contend this spring. Veteran head coach Alberto Vasquez, a product of the Hudson Catholic program himself, has enjoyed immense success since he took over the program, including an overall state championship four years ago. Vasquez is hoping that a strong finish in 2018 can lead to a solid season in 2019. The Hawks are fortunate to have two intense and fierce competitors returning, two Hudson Reporter All-Area honorees in junior pitcher/first baseman Jimmy Kemp and senior shortstop Tino Salgado. Kemp was one of the area’s top two-way performers last season, winning five games on the mound with a stingy 0.89 earned run average and batting .440 at the plate. Salgado earned Hudson Reporter Player of the Year honors, batting an even .500 with 28 RBI. He has already signed a national letter of intent to attend the University of Rhode Island in the fall. Together, Kemp and Salgado form the county’s premier 1-2 punch. Vasquez has the same feeling about Salgado. Sophomore right-hander Allen Sanchez moves into the Hawks’ rotation. He is also a player to watch. It means that Vasquez puts Sanchez almost on the same plateau as Kemp. Junior right-hander Rafael Solano is another standout pitcher. He will serve as the team’s closer as well as the team’s No. 1 catcher. Senior Kris Salinas is another fine hurler for the Hawks. 
Junior Isiah Olmo is a left-hander who will get his chance to be a solid starter. Junior Ivan Gonzalez is a right-handed hurler. Senior Joshua Ortiz is another righty that Vasquez can call upon. “He’s going to get outs,” Vasquez said. Needless to say, Vasquez likes his pitching staff, especially since all six of the top hurlers have no problems throwing strikes. Solano is an excellent backstop, learning from the best, as Vasquez was a standout All-Area catcher with the Hawks and later Rutgers University and the New Jersey Jackals of independent baseball. Kemp plays first base when he’s not pitching. When he’s on the hill, junior Ethan Lopez takes over at first. Sanchez plays second base and when he pitches, Gonzalez takes over at second. Salgado is the fixture at shortstop. Just pencil him in at the top of the Hawks’ lineup and leave him alone. Salinas is the third baseman. Senior A.J. Perranod is the team’s resident jack-of-all-trades. Perranod, who had five RBI in the Hawks’ season-opening win over Lincoln, can play second base, third base, first base and slide behind the plate. Such versatility is hard to find. Junior Isaiah Decias is the left fielder. Senior Dewayne Sims is the team’s centerfielder. Senior Chris Jacobsen and junior Michael Santiago are sharing right field duties. Needless to say, the Hawks are ready for a solid run at county and state titles. “We’re ready to handle whatever comes,” Vasquez said.
import sys

import parameters as pars
import html
from article import Article
from file_writer import FileWriter


def run(journal, num_articles):
    # Setup output file, get input parameters, and use brief run if testing
    writer = FileWriter(pars.filename)
    num_volumes = 18  # 18 volumes per year
    issue = 1         # sample issue for each volume
    # if len(sys.argv) > 1:
    #     print "Testing....."
    #     num_articles = 10
    #     num_volumes = 1

    # Sample papers accepted in previous year
    date = html.detect_start_volume()
    start_volume = date[0]
    acceptance_year = date[1]
    volumes = range(start_volume - num_volumes + 1, start_volume + 1)

    for volume in reversed(volumes):
        # Go to volume/issue contents page, and extract URLs of articles
        articles = html.build_urls(journal, volume, issue)
        for num in range(1, num_articles + 1):
            # For first 'num_articles' in this volume/issue, try to extract
            # date string from article webpage
            url = articles[num]
            try:
                date_string = html.get_date_div(url)
            except:
                print "Some error occurred (URL '", url, "' not available?). Skipping."
                break
            article = Article(date_string)
            if article.get_year() == acceptance_year:
                writer.write_to_file(article)

    writer.close_file()


if __name__ == "__main__":
    run(pars.journal, pars.num_articles)
Earlier this week, I got to play the role of a guest-host/panelist on the Cisco Champion Radio podcast, talking about tech predictions for 2016. It was a very lively conversation and a lot of fun. If you have any thoughts to add to the conversation, feel free to do so in the comments below.
#coding=utf-8
from flask import Flask, request, jsonify
from flask import g, Response
from flask_restful.reqparse import RequestParser
from flask.views import MethodView
import time, datetime

from Crypto.Cipher import AES
from Crypto import Random
import base64

app = Flask(__name__)

ENCRYPT_KEY = "AUJJSLSPDVMDSSJSODSLIDmlcxsxslin"


@app.before_request
def option_autoreply():
    """Always reply 200 on OPTIONS request"""
    if request.method == 'OPTIONS':
        resp = app.make_default_options_response()

        headers = None
        if 'Access-Control-Request-Headers' in request.headers:
            headers = request.headers['Access-Control-Request-Headers']

        h = resp.headers

        # Allow the origin which made the XHR
        h['Access-Control-Allow-Origin'] = request.headers['Origin']

        # Allow the actual method
        h['Access-Control-Allow-Methods'] = request.headers['Access-Control-Request-Method']

        # Allow for 10 seconds
        h['Access-Control-Max-Age'] = "10"

        # We also keep current headers
        if headers is not None:
            h['Access-Control-Allow-Headers'] = headers

        return resp


@app.after_request
def set_allow_origin(resp):
    """Set origin for GET, POST, PUT, DELETE requests"""
    h = resp.headers

    # Allow crossdomain for other HTTP Verbs
    if request.method != 'OPTIONS' and 'Origin' in request.headers:
        h['Access-Control-Allow-Origin'] = request.headers['Origin']

    return resp


def encrypt(data, encrypt_key):
    bs = AES.block_size
    pad = lambda s: s + (bs - len(s) % bs) * chr(bs - len(s) % bs)
    iv = Random.new().read(bs)
    cipher = AES.new(encrypt_key, AES.MODE_CBC, iv)
    data = cipher.encrypt(pad(data))
    data = iv + data
    data = base64.encodestring(data).strip()
    return data


def decrypt(data, encrypt_key):
    data = base64.decodestring(data)
    bs = AES.block_size
    if len(data) <= bs:
        return data
    unpad = lambda s: s[0:-ord(s[-1])]
    iv = data[:bs]
    cipher = AES.new(encrypt_key, AES.MODE_CBC, iv)
    data = unpad(cipher.decrypt(data[bs:]))
    return data


class Login(MethodView):
    @classmethod
    def login(cls):
        p_token = "123123"
        strs = request.json['account'] + request.json['password'] + \
            datetime.datetime.now().strftime('%Y%m%d%H%M%S')
        enrypt_data = encrypt(strs, ENCRYPT_KEY)
        res = {'account': request.json['account'],
               'p_token': enrypt_data
               # , "decrypt_data": decrypt(enrypt_data, ENCRYPT_KEY)
               }
        return jsonify(res)

app.add_url_rule('/prog/api/login', 'login', Login.login, methods=['POST'])


class Profile(MethodView):
    @classmethod
    def get(cls):
        res = {
            "member": {
                "name": "bill yang",
                "age": 26,
                "id_number": "R123123123123",
                "token": "GJKLGFSAS",
                "create_time": "2016/04/10 00:10:20"
            },
            "alerts": [
                {"event": "Morning Call", "level": 1, "disabled": False},
                {"event": "study", "level": 2, "disabled": True},
                {"event": "Go to taipei", "level": 3, "disabled": False}
            ],
            "messages": [
                {
                    "sender": "Shyshyhao",
                    "send_time": "2016/03/25 10:10:20",
                    "message": "Hello, yuchen Nice to meet you."
                },
                {
                    "sender": "Adela Lin",
                    "send_time": "2016/03/25 07:10:20",
                    "message": "Get up!! Now is your time."
                }
            ]
        }
        return jsonify(res)

app.add_url_rule('/prog/api/profile', 'user_profile', Profile.get, methods=['GET'])


if __name__ == '__main__':
    app.debug = True  # or add debug=True in param
    app.run(host='0.0.0.0')
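The `pad`/`unpad` lambdas in `encrypt` and `decrypt` implement PKCS#7-style block padding: append n copies of `chr(n)` so the plaintext length becomes a multiple of the cipher block size. A standalone sketch of that scheme in pure Python (no Crypto dependency, block size fixed at AES's 16 bytes):

```python
BS = 16  # AES block size in bytes

def pad(s):
    # Append n copies of chr(n), where n pads s up to a full block.
    n = BS - len(s) % BS
    return s + n * chr(n)

def unpad(s):
    # The last character encodes how many padding bytes to strip.
    return s[:-ord(s[-1])]

padded = pad("hello")
assert len(padded) % BS == 0
assert unpad(padded) == "hello"
print(len(padded))  # 16
```

Note that a plaintext already a multiple of 16 still gains a full padding block; that is what lets `unpad` always trust the final byte.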
Use the template that matches your order. Template files are all made with Adobe Illustrator. For Mac OS 9.x they are compressed as Sit files; for Windows and Mac OS X (version 10.2 or later) they are compressed as zip files.
# -*- coding: utf-8 -*-

# This file is part of the pymfony package.
#
# (c) Alexandre Quercia <alquerci@email.com>
#
# For the full copyright and license information, please view the LICENSE
# file that was distributed with this source code.

from __future__ import absolute_import;

import sys;
import inspect;

from pymfony.component.system import Object;
from pymfony.component.system import ClassLoader;
from pymfony.component.system import Tool;
from pymfony.component.system.oop import final;
from pymfony.component.system.types import String;
from pymfony.component.system.exception import StandardException;
from pymfony.component.system.oop import abstract;

"""
"""

class ReflectionException(StandardException):
    pass;


class ReflectionParameter(Object):
    def __init__(self, function, parameter):
        """Constructor

        @param: The function to reflect parameters from.
        @param: The parameter.
        """
        self.__name = str(parameter);
        self.__defaultValue = None;
        self.__isDefaultValueAvailable = None;
        self.__isOptional = None;
        self.__position = None;

        args, varargs, varkw, defaults = inspect.getargspec(function);

        offset = -1 if inspect.ismethod(function) else 0;
        self.__position = list(args).index(self.__name) + offset;

        defaults = defaults if defaults else tuple();
        firstOptional = len(args) + offset - len(defaults);

        if self.__position >= firstOptional:
            self.__isOptional = True;
            self.__isDefaultValueAvailable = True;
            self.__defaultValue = defaults[self.__position - firstOptional];
        else:
            self.__isOptional = False;
            self.__isDefaultValueAvailable = False;

    def __str__(self):
        return self.__name;

    @final
    def __clone__(self):
        raise TypeError();

    def getName(self):
        """Gets the name of the parameter.

        @return: string The name of the reflected parameter.
        """
        return self.__name;

    def getDefaultValue(self):
        """Gets the default value of the parameter for a user-defined
        function or method. If the parameter is not optional a
        ReflectionException will be raised.

        @return: mixed The parameter's default value.
        """
        if not self.isOptional():
            raise ReflectionException("The parameter {0} is not optional".format(
                self.__name
            ));
        return self.__defaultValue;

    def isDefaultValueAvailable(self):
        """Checks if a default value for the parameter is available.

        @return: Boolean TRUE if a default value is available, otherwise FALSE
        """
        return self.__isDefaultValueAvailable;

    def isOptional(self):
        """Checks if the parameter is optional.

        @return: Boolean TRUE if the parameter is optional, otherwise FALSE
        """
        return self.__isOptional;

    def getPosition(self):
        """Gets the position of the parameter.

        @return: int The position of the parameter, left to right,
            starting at position #0.
        """
        return self.__position;


@abstract
class AbstractReflectionFunction(Object):
    @final
    def __clone__(self):
        """The clone method prevents an object from being cloned.

        Reflection objects cannot be cloned.
        """
        raise TypeError("Reflection objects cannot be cloned.");

    @abstract
    def getParameters(self):
        """Get the parameters as a list of ReflectionParameter.

        @return: list A list of Parameters, as ReflectionParameter objects.
        """
        pass;


class ReflectionFunction(AbstractReflectionFunction):
    def __init__(self, function):
        """Constructs a ReflectionFunction object.

        @param: string|function The name of the function to reflect or
            a closure.

        @raise ReflectionException: When the name parameter does not contain
            a valid function.
        """
        if isinstance(function, String):
            try:
                function = ClassLoader.load(function);
            except ImportError:
                function = False;

        if not inspect.isfunction(function):
            raise ReflectionException(
                "The {0} parameter is not a valid function.".format(
                    function
            ));

        self._name = function.__name__;
        self._parameters = None;
        self._function = function;

    def __str__(self):
        return self._name;

    def getName(self):
        return self._name;

    def getParameters(self):
        """Get the parameters as a list of ReflectionParameter.

        @return: list A list of Parameters, as ReflectionParameter objects.
        """
        if self._parameters is None:
            self._parameters = list();
            args = inspect.getargspec(self._function)[0];
            for arg in args:
                self._parameters.append(ReflectionParameter(self._function, arg));
        return self._parameters;


class ReflectionMethod(AbstractReflectionFunction):
    IS_STATIC = 1;
    IS_ABSTRACT = 2;
    IS_FINAL = 4;
    IS_PUBLIC = 256;
    IS_PROTECTED = 512;
    IS_PRIVATE = 1024;

    def __init__(self, method):
        """Constructs a ReflectionMethod object.

        @param: method The method to reflect.

        @raise ReflectionException: When the name parameter does not contain
            a valid method.
        """
        if not inspect.ismethod(method):
            raise ReflectionException(
                "The {0} parameter is not a valid method.".format(
                    method
            ));

        self._className = None;
        self._parameters = None;
        self._mode = None;
        self._name = method.__name__;
        self._method = method;

    def __str__(self):
        return self._name;

    def getName(self):
        return self._name;

    def getClassName(self):
        if self._className is None:
            if sys.version_info < (2, 7):
                cls = self._method.im_class;
            else:
                cls = self._method.__self__.__class__;
            self._className = ReflectionClass(cls).getName();
        return self._className;

    def getMode(self):
        if self._mode is None:
            if self._name.startswith('__') and self._name.endswith('__'):
                self._mode = self.IS_PUBLIC;
            elif self._name.startswith('__'):
                self._mode = self.IS_PRIVATE;
            elif self._name.startswith('_'):
                self._mode = self.IS_PROTECTED;
            else:
                self._mode = self.IS_PUBLIC;

            if getattr(self._method, '__isabstractmethod__', False):
                self._mode = self._mode | self.IS_ABSTRACT;
            if getattr(self._method, '__isfinalmethod__', False):
                self._mode = self._mode | self.IS_FINAL;
            if isinstance(self._method, classmethod):
                self._mode = self._mode | self.IS_STATIC;
        return self._mode;

    def getParameters(self):
        """Get the parameters as a list of ReflectionParameter.

        @return: list A list of Parameters, as ReflectionParameter objects.
        """
        if self._parameters is None:
            self._parameters = list();
            args = inspect.getargspec(self._method)[0];
            for arg in args[1:]:
                self._parameters.append(ReflectionParameter(self._method, arg));
        return self._parameters;


class ReflectionClass(Object):
    def __init__(self, argument):
        if isinstance(argument, String):
            qualClassName = argument;
            try:
                argument = ClassLoader.load(argument);
            except ImportError:
                argument = False;

        if argument is not False:
            assert issubclass(argument, object);
            self.__exists = True;
            self._class = argument;
            self._fileName = None;
            self._mro = None;
            self._namespaceName = None;
            self._parentClass = None;
            self._name = None;
        else:
            self.__exists = False;
            self._name = qualClassName;
            self._fileName = '';
            self._mro = tuple();
            self._namespaceName = Tool.split(qualClassName)[0];
            self._parentClass = False;
            self._class = None;

        self._methods = None;

    def getFileName(self):
        if self._fileName is not None:
            return self._fileName;
        try:
            self._fileName = inspect.getabsfile(self._class);
        except TypeError:
            self._fileName = False;
        return self._fileName;

    def getParentClass(self):
        """
        @return: ReflectionClass|False
        """
        if self._parentClass is None:
            if len(self.getmro()) > 1:
                self._parentClass = ReflectionClass(self.getmro()[1]);
            else:
                self._parentClass = False;
        return self._parentClass;

    def getmro(self):
        if self._mro is None:
            self._mro = inspect.getmro(self._class);
        return self._mro;

    def getNamespaceName(self):
        if self._namespaceName is None:
            self._namespaceName = str(self._class.__module__);
        return self._namespaceName;

    def getName(self):
        if self._name is None:
            self._name = self.getNamespaceName() + '.' + str(self._class.__name__);
        return self._name;

    def exists(self):
        return self.__exists;

    def newInstance(self, *args, **kargs):
        return self._class(*args, **kargs);


class ReflectionObject(ReflectionClass):
    def __init__(self, argument):
        assert isinstance(argument, object);
        ReflectionClass.__init__(self, argument.__class__);
        self.__object = argument;

    def getMethod(self, name):
        """Gets a ReflectionMethod for a class method.

        @param: string The method name to reflect.

        @return: ReflectionMethod A ReflectionMethod.

        @raise: ReflectionException When the method does not exist.
        """
        if hasattr(self.__object, name):
            return ReflectionMethod(getattr(self.__object, name));
        raise ReflectionException("The method {0} of class {1} does not exist.".format(
            name,
            self.getName(),
        ));

    def getMethods(self, flag = 0):
        """Gets a list of methods for the class.

        @param: int flag Filter the results to include only methods with
            certain attributes. Defaults to no filtering. Any combination of
            ReflectionMethod.IS_STATIC, ReflectionMethod.IS_PUBLIC,
            ReflectionMethod.IS_PROTECTED, ReflectionMethod.IS_PRIVATE,
            ReflectionMethod.IS_ABSTRACT, ReflectionMethod.IS_FINAL.

        @return: list A list of ReflectionMethod objects reflecting
            each method.
        """
        if self._methods is None:
            self._methods = list();
            for name, method in inspect.getmembers(self.__object, inspect.ismethod):
                refMethod = ReflectionMethod(method);
                if flag == flag & refMethod.getMode():
                    self._methods.append(refMethod);
        return self._methods;
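ReflectionParameter derives each parameter's position, optionality, and default value from `inspect.getargspec`, which counts defaults from the end of the argument list. The same bookkeeping can be illustrated with the modern `inspect.signature` API (a different, newer interface than the one this module uses; the `greet` function is made up for the demo):

```python
import inspect

def greet(name, greeting="hello", punct="!"):
    return "%s, %s%s" % (greeting, name, punct)

# Mirror ReflectionParameter: (position, name, isOptional, defaultValue).
info = []
for pos, (pname, p) in enumerate(inspect.signature(greet).parameters.items()):
    optional = p.default is not inspect.Parameter.empty
    info.append((pos, pname, optional, p.default if optional else None))

print(info)
# [(0, 'name', False, None), (1, 'greeting', True, 'hello'), (2, 'punct', True, '!')]
```

`inspect.signature` attaches the default directly to each parameter, so there is no need for the `firstOptional` offset arithmetic that `getargspec` forces on the constructor above.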
The Madison Collection is a unisex collection that combines elegant flowing curves, arched tray tops, one-piece corner posts, and tapered legs. Dressed up or down to match any decor, this collection radiates timeless appeal and will provide both utility and pleasure over the years of your children’s lives. Classics Beatrice Combo Tower Chest in Cherry Red by Stork Craft, BUY HERE!