from collections import namedtuple
from coala_utils.decorators import generate_eq, generate_repr
@generate_repr()
@generate_eq("documentation", "language", "docstyle",
"indent", "marker", "range")
class DocumentationComment:
"""
The DocumentationComment holds information about a documentation comment
inside source-code, like position etc.
"""
Parameter = namedtuple('Parameter', 'name, desc')
ReturnValue = namedtuple('ReturnValue', 'desc')
Description = namedtuple('Description', 'desc')
def __init__(self, documentation, language,
docstyle, indent, marker, range):
"""
Instantiates a new DocumentationComment.
:param documentation: The documentation text.
:param language: The language of the documentation.
:param docstyle: The docstyle used in the documentation.
:param indent: The string of indentation used in front
of the first marker of the documentation.
:param marker: The three-element tuple of marker strings
that identify this documentation comment.
:param range: The position range of type TextRange.
"""
self.documentation = documentation
self.language = language.lower()
self.docstyle = docstyle.lower()
self.indent = indent
self.marker = marker
self.range = range
def __str__(self):
return self.documentation
def parse(self):
"""
Parses documentation independent of language and docstyle.
:return:
The list of all the parsed sections of the documentation. Every
section is a namedtuple of either ``Description`` or ``Parameter``
or ``ReturnValue``.
:raises NotImplementedError:
When no parsing method is present for the given language and
docstyle.
"""
if self.language == "python" and self.docstyle == "default":
return self._parse_documentation_with_symbols(
(":param ", ": "), ":return: ")
elif self.language == "python" and self.docstyle == "doxygen":
return self._parse_documentation_with_symbols(
("@param ", " "), "@return ")
elif self.language == "java" and self.docstyle == "default":
return self._parse_documentation_with_symbols(
("@param ", " "), "@return ")
else:
raise NotImplementedError(
"Documentation parsing for {0.language!r} in {0.docstyle!r}"
" has not been implemented yet".format(self))
def _parse_documentation_with_symbols(self, param_identifiers,
return_identifiers):
"""
Parses documentation based on parameter and return symbols.
:param param_identifiers:
A tuple of two strings with which a parameter starts and ends.
:param return_identifiers:
The string with which a return description starts.
:return:
The list of all the parsed sections of the documentation. Every
section is a namedtuple of either ``Description`` or ``Parameter``
or ``ReturnValue``.
"""
lines = self.documentation.splitlines(keepends=True)
parse_mode = self.Description
cur_param = ""
desc = ""
parsed = []
for line in lines:
stripped_line = line.strip()
if stripped_line.startswith(param_identifiers[0]):
parse_mode = self.Parameter
param_offset = line.find(
param_identifiers[0]) + len(param_identifiers[0])
splitted = line[param_offset:].split(param_identifiers[1], 1)
cur_param = splitted[0].strip()
# For cases where the param description is not on the
# same line, but on subsequent lines.
try:
param_desc = splitted[1]
except IndexError:
param_desc = ""
parsed.append(self.Parameter(name=cur_param, desc=param_desc))
elif stripped_line.startswith(return_identifiers):
parse_mode = self.ReturnValue
return_offset = line.find(
return_identifiers) + len(return_identifiers)
retval_desc = line[return_offset:]
parsed.append(self.ReturnValue(desc=retval_desc))
elif parse_mode == self.ReturnValue:
retval_desc += line
parsed.pop()
parsed.append(self.ReturnValue(desc=retval_desc))
elif parse_mode == self.Parameter:
param_desc += line
parsed.pop()
parsed.append(self.Parameter(name=cur_param, desc=param_desc))
else:
desc += line
# This is inside a try-except for cases where the list
# is empty and has nothing to pop.
try:
parsed.pop()
except IndexError:
pass
parsed.append(self.Description(desc=desc))
return parsed
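The symbol-based splitting used by `_parse_documentation_with_symbols` can be illustrated with a stripped-down, self-contained sketch (continuation lines across multiple lines are ignored here, unlike the full implementation above; the function name is illustrative):

```python
from collections import namedtuple

Parameter = namedtuple('Parameter', 'name desc')
ReturnValue = namedtuple('ReturnValue', 'desc')
Description = namedtuple('Description', 'desc')

def parse_default_python(doc):
    """Split a ``:param name: desc`` / ``:return: desc`` docstring into
    sections, mirroring the default-Python branch of parse() above."""
    parsed, desc = [], ""
    for line in doc.splitlines(keepends=True):
        stripped = line.strip()
        if stripped.startswith(":param "):
            # Split "name: description" at the first ": " separator.
            name, _, param_desc = stripped[len(":param "):].partition(": ")
            parsed.append(Parameter(name=name.strip(), desc=param_desc))
        elif stripped.startswith(":return: "):
            parsed.append(ReturnValue(desc=stripped[len(":return: "):]))
        else:
            desc += line
    if desc:
        parsed.insert(0, Description(desc=desc))
    return parsed
```

For example, `parse_default_python("Adds.\n:param x: first\n:return: sum\n")` yields a `Description`, a `Parameter`, and a `ReturnValue` section, in that order.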
|
4 Startups Revolutionizing the Car Industry
Companies today are constantly changing their offerings. With rapid changes in technology, the car industry is going through significant changes too. Thanks to artificial intelligence and the internet, it is easier to share information and start businesses. As-needed car rental companies and Google’s self-driven cars are changing how people approach buying vehicles, car rentals and cars in general.
Soon, people may not need driver’s licenses or need to own vehicles. People who want to make the most of the latest technology need to keep pace with these industry changes. Businesses that want long-term success need to adapt or they’ll be left behind. Following are some of the ways that start-up companies are drastically changing the car industry and drivers’ lives.
Not everyone wants the cost of dealing with a car’s maintenance, insurance or gas. Instead, many people just need cars for specific things: from day trips to shopping excursions or errand running. Companies like ZipCar and Car2Go make it easy for people to rent cars as and when they are needed.
By signing up for a membership, people can register and reserve their vehicles either online or telephonically. Cars are available around the country, and can be more cost-effective than using traditional car rental companies. Gas and insurance are also included in the memberships. One can reserve the car for a couple of hours or for the whole day. Thus, one does not have to pay thousands of dollars for a car or deal with expensive insurance and gas. This changes how people approach vehicle ownership and rental.
More and more people want to be eco-friendly in today’s environmentally conscious world. Electric and hybrid start-ups are becoming popular because these vehicles run on alternative energy options, which save drivers money on gas. Start-ups like Tesla, Think, and Commuter Cars are developing vehicles that run on electricity. In fact, Moteur Development International’s Air Car runs entirely on compressed air.
There are also vans and buses in development like Modec’s Electric Va. In 10 years, traditional gas vehicles may no longer be as common or may not be used at all. This will change how manufacturers design and develop future models.
Google has developed a self-driving car using radar sensors, video cameras and artificial intelligence software. These vehicles are gaining traction and are currently being tested and refined. While Google is the most famous company working on such vehicles, other groups have worked on similar projects as well. It is hoped that self-driving vehicles will be up for sale in 5 years or less.
This would mean that people would no longer need driver’s licenses. Google says that it has tested vehicles for as much as 300,000 miles, 50,000 of which were without any human assistance. The only fender benders that have occurred happened when a human was controlling the vehicle.
Experts hope that driverless cars will be safer than human-operated vehicles. There were about 33,000 deaths from motor vehicle accidents in 2010. Another benefit is that the flow of traffic will be more efficient and that people can focus on other things, like completing work or communicating with loved ones.
States are already starting to legalize cars that drive themselves. California now allows Google’s self-driving vehicle to be tested on the road as long as there is a human passenger in the car. Nevada passed a similar law last year approving driverless cars under the same condition. If the vehicles are legalized, blind individuals, among others, will have better access to transportation.
Companies today understand that people want to save money and contribute towards an eco-friendly world but still have reliable and safe transportation. This is why so many startup companies are looking for creative approaches to vehicles.
|
"""Create videoconference rooms
Revision ID: 233928da84b2
Revises: 50c2b5ee2726
Create Date: 2015-02-11 13:17:44.365589
"""
import sqlalchemy as sa
from alembic import op
from indico.core.db.sqlalchemy import PyIntEnum
from indico.core.db.sqlalchemy import UTCDateTime
from indico.modules.vc.models.vc_rooms import VCRoomLinkType, VCRoomStatus
from sqlalchemy.dialects import postgresql
# revision identifiers, used by Alembic.
revision = '233928da84b2'
down_revision = '50c2b5ee2726'
def upgrade():
op.create_table('vc_rooms',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('type', sa.String(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('status', PyIntEnum(VCRoomStatus), nullable=False),
sa.Column('created_by_id', sa.Integer(), nullable=False, index=True),
sa.Column('created_dt', UTCDateTime, nullable=False),
sa.Column('modified_dt', UTCDateTime, nullable=True),
sa.Column('data', postgresql.JSON(), nullable=False),
sa.PrimaryKeyConstraint('id'),
schema='events')
op.create_table('vc_room_events',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('event_id', sa.Integer(), autoincrement=False, nullable=False, index=True),
sa.Column('vc_room_id', sa.Integer(), nullable=False, index=True),
sa.Column('link_type', PyIntEnum(VCRoomLinkType), nullable=False),
sa.Column('link_id', sa.String(), nullable=True),
sa.Column('show', sa.Boolean(), nullable=False),
sa.Column('data', postgresql.JSON(), nullable=False),
sa.ForeignKeyConstraint(['vc_room_id'], ['events.vc_rooms.id']),
sa.PrimaryKeyConstraint('id'),
schema='events')
def downgrade():
op.drop_table('vc_room_events', schema='events')
op.drop_table('vc_rooms', schema='events')
|
Sweep Stone Cold, Behind the Sun or Interstellar across lids using the Eye Shadow Brush. Create dimension by applying Red Rock, Infra-Red or Fever Dream to creases with the Angle Eye Shadow Brush. Finish by smudging Slow Burn or Solar Flame along upper and lower lashlines with the Smokey Eye Liner Brush. Brushes sold separately.
|
"""Test connection to a MySQL server
Usage:
test_connect.py [options] <host> <user>
test_connect.py -h
Arguments:
host MySQL server IP address
user Username to connect with
Options:
-h, --help Show this screen
-d, --debug Show some debug information
-p <port>, --port=<port> MySQL port. Default is 3306.
--password=<password> User password.
Author: Avan Suinesiaputra - University of Auckland (2017)
"""
# Docopt is a library for parsing command line arguments
import docopt
import getpass
import mysql.connector
if __name__ == '__main__':
try:
# Parse arguments, use file docstring as a parameter definition
arguments = docopt.docopt(__doc__)
# Default values
if not arguments['--port']:
arguments['--port'] = 3306
# Check password
if arguments['--password'] is None:
arguments['--password'] = getpass.getpass('Password: ')
# print arguments for debug
if arguments['--debug']:
print(arguments)
# Handle invalid options
except docopt.DocoptExit as e:
print(e)
exit()
# everything goes fine
# let's go!
print('Connecting mysql://' + arguments['<host>'] + ':' + str(arguments['--port']) + ' ...')
try:
cnx = mysql.connector.connect(user=arguments['<user>'],
host=arguments['<host>'],
port=arguments['--port'],
password=arguments['--password'])
except mysql.connector.Error as err:
print(err)
else:
print("SUCCESS")
cnx.close()
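The option-defaulting and URL-building steps in the script above can be exercised standalone, given a docopt-style arguments dict (the keys follow the usage string above; docopt itself is not required for this sketch):

```python
def apply_defaults(arguments, default_port=3306):
    """Mirror the script's manual default handling for a docopt-style
    arguments dict."""
    if not arguments.get('--port'):
        arguments['--port'] = default_port
    return arguments

def connection_url(arguments):
    # Rebuild the URL string the script prints before connecting.
    return 'mysql://{}:{}'.format(arguments['<host>'], arguments['--port'])
```

For example, `connection_url(apply_defaults({'<host>': 'localhost', '--port': None}))` yields `'mysql://localhost:3306'`.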
|
Delivery - Rainbow Cosmetics - Wholesale supplier of branded Fragrances and Cosmetics.
The Rainbow business has built its success on fast and flexible delivery solutions offering customers a choice of service to meet their needs. We are able to deliver container shipment, central depot, direct to store and direct to consumer.
Upon receipt, your orders are processed quickly and delivered complete, intact, and when you need them. We offer next-day delivery on orders placed before 1pm and below the value of £2,000. All of our UK orders are sent out for guaranteed next-day delivery. All orders with a value over £300 on branded fragrance and £500 on Rainbow own brand products are delivered free to customers located on mainland UK.
If you have any questions about delivery in the UK try our FAQ section or contact our Customer Team.
Delivery is free of charge for orders over £300 on branded fragrance and £500 on Rainbow own brand products to locations in the UK (excluding Highlands/Islands and NI). Orders below these thresholds are subject to a delivery charge of a minimum of £6 plus VAT. We do not accept orders under £150. If an order of Own Brand Cosmetics (Sunkissed, Active, Tilly and Style & Grace) is over £500 there is no charge for delivery (excluding Scottish Highlands/Islands and Northern Ireland). If an order of Own Brand Cosmetics (Sunkissed, Active, Tilly and Style & Grace) is below £500 then actual carriage charges will be applied up to a value of £25.00. If both Own Brand Cosmetics and Fragrance are ordered and the value is below £500 then actual carriage charges will be applied up to a value of £15.00.
We currently ship products to over 32 countries worldwide, including into the US, Australia and Europe. If placing an order for export we require a minimum value of £1,500. On receipt of your order we will process the necessary paperwork (including dangerous goods notes for transit) and reserve the stock whilst you organise the transport details.
All export orders are ex-works. Rainbow is partnered with a number of freight companies who will be happy to provide you with a quote, or you can organise shipment with your own supplier. If you are not using a Rainbow preferred supplier (currently DSV and Trans Global), VAT will be charged on your order, but it can be claimed back once we have received a certificate of shipment (see below).
Payment must be cleared funds in our bank account before we will release your order. Payment can be received in GBP(£), USD($) and Euros(€).
Important: Please note that it is your responsibility to insure goods in transit from the Rainbow warehouse to your delivery address.
Q. Why do I need a dangerous goods note?
A. Fragrance, nail polish and a number of other products sold by Rainbow cosmetics are classed as hazardous* by the UN and it is a legal requirement to have a dangerous goods note for the transit of these products overseas.
If your order contains any products classed as dangerous then a dangerous good note will be required for transit. We will prepare these on your behalf free of charge.
Q. What is a certificate of shipment and how do I get one?
|
#!/usr/bin/env python
import vtk
from vtk.test import Testing
from vtk.util.misc import vtkGetDataRoot
VTK_DATA_ROOT = vtkGetDataRoot()
math = vtk.vtkMath()
math.RandomSeed(22)
sphere = vtk.vtkSphereSource()
sphere.SetPhiResolution(32)
sphere.SetThetaResolution(32)
extract = vtk.vtkExtractPolyDataPiece()
extract.SetInputConnection(sphere.GetOutputPort())
normals = vtk.vtkPolyDataNormals()
normals.SetInputConnection(extract.GetOutputPort())
ps = vtk.vtkPieceScalars()
ps.SetInputConnection(normals.GetOutputPort())
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(ps.GetOutputPort())
mapper.SetNumberOfPieces(2)
actor = vtk.vtkActor()
actor.SetMapper(mapper)
sphere2 = vtk.vtkSphereSource()
sphere2.SetPhiResolution(32)
sphere2.SetThetaResolution(32)
extract2 = vtk.vtkExtractPolyDataPiece()
extract2.SetInputConnection(sphere2.GetOutputPort())
mapper2 = vtk.vtkPolyDataMapper()
mapper2.SetInputConnection(extract2.GetOutputPort())
mapper2.SetNumberOfPieces(2)
mapper2.SetPiece(1)
mapper2.SetScalarRange(0, 4)
mapper2.SetScalarModeToUseCellFieldData()
mapper2.SetColorModeToMapScalars()
mapper2.ColorByArrayComponent(vtk.vtkDataSetAttributes.GhostArrayName(), 0)
mapper2.SetGhostLevel(4)
# check the pipeline size
extract2.UpdateInformation()
psize = vtk.vtkPipelineSize()
if (psize.GetEstimatedSize(extract2, 0, 0) > 100):
print ("ERROR: Pipeline Size increased")
pass
if (psize.GetNumberOfSubPieces(10, mapper2) != 1):
print ("ERROR: Number of sub pieces changed",
psize.GetNumberOfSubPieces(10, mapper2))
pass
actor2 = vtk.vtkActor()
actor2.SetMapper(mapper2)
actor2.SetPosition(1.5, 0, 0)
sphere3 = vtk.vtkSphereSource()
sphere3.SetPhiResolution(32)
sphere3.SetThetaResolution(32)
extract3 = vtk.vtkExtractPolyDataPiece()
extract3.SetInputConnection(sphere3.GetOutputPort())
ps3 = vtk.vtkPieceScalars()
ps3.SetInputConnection(extract3.GetOutputPort())
mapper3 = vtk.vtkPolyDataMapper()
mapper3.SetInputConnection(ps3.GetOutputPort())
mapper3.SetNumberOfSubPieces(8)
mapper3.SetScalarRange(0, 8)
actor3 = vtk.vtkActor()
actor3.SetMapper(mapper3)
actor3.SetPosition(0, -1.5, 0)
sphere4 = vtk.vtkSphereSource()
sphere4.SetPhiResolution(32)
sphere4.SetThetaResolution(32)
extract4 = vtk.vtkExtractPolyDataPiece()
extract4.SetInputConnection(sphere4.GetOutputPort())
ps4 = vtk.vtkPieceScalars()
ps4.RandomModeOn()
ps4.SetScalarModeToCellData()
ps4.SetInputConnection(extract4.GetOutputPort())
mapper4 = vtk.vtkPolyDataMapper()
mapper4.SetInputConnection(ps4.GetOutputPort())
mapper4.SetNumberOfSubPieces(8)
mapper4.SetScalarRange(0, 8)
actor4 = vtk.vtkActor()
actor4.SetMapper(mapper4)
actor4.SetPosition(1.5, -1.5, 0)
ren = vtk.vtkRenderer()
ren.AddActor(actor)
ren.AddActor(actor2)
ren.AddActor(actor3)
ren.AddActor(actor4)
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren)
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renWin)
iren.Initialize()
#iren.Start()
|
Saturday, April 27, 2019 - This paddle will launch from MacWilliams Park, at Merril Barber Bridge, Vero Beach at 8:30 a.m. and we will ride the incoming tide south to Wabasso Causeway Boat Ramp. This paddle will be south to north to accommodate the predicted SW winds. The mandatory safety brief will be at 8:15 a.m. Launch is at 8:30 a.m. All participants will be provided with water and a picnic lunch. Check-in is at 8 a.m. Registration is $35 per person. This is a 10-mile paddle. You must have your own board, paddle, leash, and life jacket. If you require rental equipment, please let us know at least 3 days in advance. Rental equipment is an additional $20 per day. Some areas we paddle have heavy boat traffic, so please wear a bright-colored shirt so that our safety boat and other boaters can see you from a distance. Sunscreen and water shoes are recommended. Proceeds from this paddle will go to the Alzheimer's Association. This event is for experienced paddlers. Please register in advance so we know you are coming. Weather conditions and wind directions sometimes create last-minute plan changes. We will do our best to keep you informed of changes. FOR ADDITIONAL INFORMATION CALL 904-484-6516.
|
#!/usr/bin/env python
"""Forseti is a tool to manage AWS autoscaling groups.
Usage:
{% for doc in command_docs -%}
forseti {{ doc }}
{% endfor -%}
forseti (-h | --help)
forseti --version
Options:
{% for doc in command_options -%}
{{ doc }}
{% endfor -%}
-h --help Show this screen.
--version Show version.
"""
import sys
from docopt import docopt
from forseti.metadata import __version__ as forseti_version
from forseti.configuration import ForsetiConfiguration
from forseti.commands.base import get_all_commands
from forseti.commands.commands import CleanUpAutoscaleConfigurationsCommand
from jinja2 import Template
import os.path
def get_configuration_file_path():
return os.path.abspath(os.path.expanduser('~/.forseti/config.json'))
def read_configuration_file():
config_path = get_configuration_file_path()
if not os.path.exists(config_path):
raise ValueError("Configuration file does not exist at %r" % config_path)
try:
return ForsetiConfiguration(config_path)
except ValueError as exception:
print("Invalid JSON configuration file {}\n".format(config_path))
raise exception
def generate_docstring():
commands_documentation = []
options_documentation = []
commands = get_all_commands()
for command_class in commands:
command = command_class()
command_doc = command.cli_command_doc()
if command_doc:
commands_documentation.append(command_doc)
command_options_docs = command.cli_command_options_doc()
if command_options_docs:
options_documentation.append(command_options_docs)
return Template(__doc__).render(
command_docs=commands_documentation,
command_options=options_documentation,
app_name=sys.argv[0]
)
def commands_arguments_mapper():
mapper = []
commands = get_all_commands()
for command_class in commands:
command = command_class()
mapper.append(
(command.cli_command_name(), command)
)
return mapper
def should_run_cleanup(forseti_command):
return forseti_command.cli_command_name() == "deploy"
def main():
arguments = docopt(generate_docstring())
if arguments['--version']:
print("Forseti {}".format(forseti_version))
return
configuration = read_configuration_file()
for cli_command, forseti_command in commands_arguments_mapper():
if arguments[cli_command]:
forseti_command.run(configuration, arguments)
if should_run_cleanup(forseti_command):
forseti_cleanup_command = CleanUpAutoscaleConfigurationsCommand()
forseti_cleanup_command.run(configuration, arguments)
if __name__ == '__main__':
main()
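The module docstring above is rendered with Jinja2 before being handed to docopt. The same render-then-parse idea can be sketched with only the standard library's `string.Template` (a simplified stand-in, not Forseti's actual code; the template text is abridged):

```python
from string import Template

# Abridged usage template; $command_lines stands in for the Jinja2 loop.
USAGE = Template("""Forseti is a tool to manage AWS autoscaling groups.

Usage:
$command_lines
    forseti (-h | --help)
    forseti --version
""")

def render_usage(command_docs):
    # One indented usage line per command, as the Jinja2 loop in the
    # docstring above produces.
    lines = "\n".join("    forseti " + doc for doc in command_docs)
    return USAGE.substitute(command_lines=lines)
```

Rendering with hypothetical command docs like `["deploy <app>", "status <app>"]` yields a complete usage string that docopt could then parse.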
|
The hardest part about being a Girl Scout cookie-lover is waiting until cookie-selling season. But good news for Thin Mint fans! The Girl Scouts of the USA and Biena Snacks recently partnered up to develop a Girl Scout Cookie-inspired treat that will be available year-round! Inspired by the iconic Thin Mint, the new snack is the same cookie flavor, but in chickpea-snack form (which tastes just as good frozen as regular thin mints do)!
More good news: they are a healthy Thin-Mint alternative. Biena considers itself a home for healthy snacks with no artificial ingredients or flavors, and this new addition to the family will be no different. The Thin Mints Chickpea Snacks will be made from only six ingredients – and they are gluten-free too!
The brand already offers delicious flavors like Sea Salt, Honey Roasted, Barbecue, and Habanero, and will be launching new Dark Chocolate and Salted Caramel flavors alongside the new Girl Scout-inspired treat.
Biena’s chickpea version will be available year-round at all Whole Foods Markets nationwide, starting in June. They will also be available in other retailers starting in September. Further product information and product availability can be found on Biena’s website.
The snacks will be sold in 3.15-ounce packs for $4.49 with 130 calories and four grams of protein and fiber per serving.
If the snack is tasty enough for the Girl Scout stamp of approval, I think it is safe to assume they will be nothing less than addicting!
|
import unittest
from ct.crypto import error
class TypeTestBase(unittest.TestCase):
# Test class for each concrete type should fill this in.
asn1_type = None
# Immutable types support hashing.
immutable = True
# Repeated types support lookup and assignment by index.
repeated = True
# Keyed types support lookup and assignment by key.
keyed = True
# A tuple of initializer tuples; components in each tuple should yield
# equal objects. The first component in each tuple should be the canonical
# value (returned by .value).
initializers = None
# A tuple of (bad_initializer, exception_raised) pairs.
bad_initializers = None
# A tuple of (value, hex_der_encoding) pairs.
# Note: test vectors should include the complete encoding (including tag
# and length). This is so we can lift test vectors directly from the ASN.1
# spec and test that we recognize the correct tag for each type.
# However test vectors for invalid encodings should focus on type-specific
# corner cases. It's not necessary for each type to verify that invalid
# tags and lengths are rejected: this is covered in separate tests.
encode_test_vectors = None
# A tuple of serialized, hex-encoded values.
bad_encodings = None
# A tuple of (value, hex_encoding) pairs that can only be decoded
# in non-strict mode.
bad_strict_encodings = None
def test_create(self):
for initializer_set in self.initializers:
value = initializer_set[0]
# The canonical initializer.
for i in initializer_set:
o1 = self.asn1_type(value=i)
self.assertEqual(o1.value, value)
# And other initializers that yield the same value.
for j in initializer_set:
o2 = self.asn1_type(value=j)
self.assertEqual(o2.value, value)
self.assertEqual(o1, o2)
if self.immutable:
self.assertEqual(hash(o1), hash(o2))
elif self.repeated:
self.assertEqual(len(o1), len(o2))
for i in range(len(o1)):
self.assertEqual(o1[i], o2[i])
elif self.keyed:
self.assertEqual(len(o1), len(o2))
self.assertEqual(o1.keys(), o2.keys())
for key in o1:
self.assertEqual(o1[key], o2[key])
# Sanity-check: different initializers yield different values.
for i in range(len(self.initializers)):
for j in range(i+1, len(self.initializers)):
o1 = self.asn1_type(value=self.initializers[i][0])
o2 = self.asn1_type(value=self.initializers[j][0])
self.assertNotEqual(o1, o2)
if self.immutable:
self.assertNotEqual(hash(o1), hash(o2))
self.assertNotEqual(o1.value, o2.value)
def test_create_fails(self):
for init, err in self.bad_initializers:
self.assertRaises(err, self.asn1_type, init)
def test_encode_decode(self):
for value, enc in self.encode_test_vectors:
o1 = self.asn1_type(value=value)
o2 = self.asn1_type.decode(enc.decode("hex"))
self.assertEqual(o1, o2)
self.assertEqual(o1.value, o2.value)
self.assertEqual(enc, o1.encode().encode("hex"))
self.assertEqual(enc, o2.encode().encode("hex"))
def test_decode_fails(self):
for bad_enc in self.bad_encodings:
self.assertRaises(error.ASN1Error, self.asn1_type.decode,
bad_enc.decode("hex"))
self.assertRaises(error.ASN1Error, self.asn1_type.decode,
bad_enc.decode("hex"), strict=False)
def test_strict_decode_fails(self):
for value, bad_enc in self.bad_strict_encodings:
o = self.asn1_type(value=value)
self.assertRaises(error.ASN1Error,
self.asn1_type.decode, bad_enc.decode("hex"))
o2 = self.asn1_type.decode(bad_enc.decode("hex"), strict=False)
self.assertEqual(o, o2)
# The object should keep its original encoding...
self.assertEqual(bad_enc, o2.encode().encode("hex"))
# ... which is not the canonical encoding.
self.assertNotEqual(bad_enc, o.encode().encode("hex"))
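The `.decode("hex")` and `.encode("hex")` calls above are Python 2 idioms; the `"hex"` codec on `str`/`bytes` was removed in Python 3. Under Python 3 the equivalents are `bytes.fromhex` and `bytes.hex` (a porting sketch, not part of the test base):

```python
def hex_decode(s):
    # Python 3 stand-in for the Python 2 idiom s.decode("hex") used above.
    return bytes.fromhex(s)

def hex_encode(b):
    # Python 3 stand-in for the Python 2 idiom b.encode("hex").
    return b.hex()
```

For example, `hex_decode("020101ff")` yields `b"\x02\x01\x01\xff"`, and `hex_encode` inverts it.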
|
Main Goal of the Project: Providing access to education for out-of-school children in 12 municipalities in North Macedonia.
The implementation of this project will support achieving the priority defined in the Education Strategy of the Republic of Macedonia for 2018-2025, which refers to increasing student enrollment and improving inclusion in primary education, through implementing a sustainable model for identifying children who are obliged to attend school and monitoring them until the completion of compulsory education.
– A procedure for free of charge nostrification of school completion documents obtained abroad for socially vulnerable children, in order to support them to continue their education.
The municipalities and schools will receive support in the process of increasing the rate of enrollment and retention of the children in primary school, through applying the data sharing protocol for the identification of school-age children in three pilot schools. Also, in cooperation with the schools, trainings will be organized for the teachers and school support staff, which will focus on applying children-centered methodological approaches, as well as extra-curricular intercultural activities, in order to strengthen the inclusion of children from socially vulnerable groups.
Within the project, and in cooperation with MoES, around 300 out-of-school children living in socially disadvantaged families will receive scholarship and support when enrolling in school, which will help the families to provide education for their children. Activities enhancing the cooperation between the schools and the parents will also be implemented in order to ensure that the children will receive the necessary support during their education and stay at school, but also better master the teaching material.
|
# -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution Addon
# Copyright (C) 2009-2013 IRSID (<http://irsid.ru>),
# Paul Korotkov (korotkov.paul@gmail.com).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import osv, fields
from openerp.tools.translate import _
from core import EDU_STATES
class edu_student_program(osv.Model):
_name = 'edu.student.program'
_description = 'Student Program'
_inherit = ['mail.thread']
def _get_state(self, cr, uid, ids, name, arg, context=None):
res = {}
for st_program in self.browse(cr, uid, ids, context):
res[st_program.id] = st_program.stage_id.state
return res
# Access Functions
def create(self, cr, uid, vals, context=None):
if vals.get('code','/')=='/':
vals['code'] = self.pool.get('ir.sequence').get(cr, uid, 'edu.student.program') or '/'
return super(edu_student_program, self).create(cr, uid, vals, context=context)
def write(self, cr, uid, ids, vals, context=None):
super(edu_student_program, self).write(cr, uid, ids, vals, context=context)
if isinstance(ids, (int, long)):
ids = [ids]
for st_program in self.browse(cr, uid, ids, context=context):
student_id = st_program.student_id.id
if student_id not in st_program.message_follower_ids:
self.message_subscribe(cr, uid, ids, [student_id], context=context)
return True
def copy(self, cr, uid, id, default=None, context=None):
default = default or {}
default.update({
'code': self.pool.get('ir.sequence').get(cr, uid, 'edu.student.program'),
})
return super(edu_student_program, self).copy(cr, uid, id, default, context=context)
def unlink(self, cr, uid, ids, context=None):
context = context or {}
for record in self.browse(cr, uid, ids, context=context):
if record.state not in ['draft']:
raise osv.except_osv(_('Invalid Action!'), _('Cannot delete document in state \'%s\'.') % record.state)
return super(edu_student_program, self).unlink(cr, uid, ids, context=context)
# Naming Functions
def _name_get_fnc(self, cr, uid, ids, field_name, arg, context=None):
result = {}
for st_program in self.browse(cr, uid, ids, context=context):
result[st_program.id] = st_program.code + ': ' + st_program.student_id.name
return result
# Update Functions
def _update_list_by_student(self, cr, uid, ids, context=None):
return self.pool.get('edu.student.program').search(cr, uid, [('student_id', 'in', ids)], context=context)
def _update_list_by_stage(self, cr, uid, ids, context=None):
return self.pool.get('edu.student.program').search(cr, uid, [('stage_id', 'in', ids)], context=context)
# Onchange Functions
def onchange_program_id(self, cr, uid, ids, program_id, context=None):
if program_id:
program = self.pool.get('edu.program').browse(cr, uid, program_id, context=context)
return {'value': {
'speciality_id': program.speciality_id.id,
'mode_id': program.mode_id.id,
'stage_id': program.stage_ids[0].id or False,
'plan_id': False,
}}
return {'value': {}}
# Other Functions
def make_work_orders(self, cr, uid, ids, context=None):
work_order_obj = self.pool.get('edu.work.order')
work_obj = self.pool.get('edu.work')
module_work_obj = self.pool.get('edu.module.work')
line_obj = self.pool.get('edu.order.line')
year_id = self.pool.get('edu.year').search(cr, uid, [], limit=1, context=context)[0]
cr.execute("""
SELECT DISTINCT
program_id,
stage_id
FROM
edu_student_program
WHERE
id IN %s
""",(tuple(ids),))
params = cr.fetchall()
if params:
for param in params:
cr.execute("""
SELECT DISTINCT
module_id
FROM
edu_plan_module_rel
WHERE
plan_id IN (
SELECT DISTINCT
plan_id
FROM
edu_student_program
WHERE
id IN %s AND
program_id = %s AND
stage_id = %s
)
""",(tuple(ids), param[0], param[1],))
module_ids = [r[0] for r in cr.fetchall()]
module_work_ids = module_work_obj.search(cr, uid, [
('time_id.period_id.stage_id','=',param[1]),
('module_id','in', module_ids),
], context=context)
if module_work_ids:
work_order_ids = work_order_obj.search(cr, uid, [
('year_id','=',year_id),
('program_id','=',param[0]),
('stage_id','=',param[1]),
('state','=','draft'),
], context=context)
if len(work_order_ids):
work_order_id = work_order_ids[0]
else:
vals = work_order_obj.onchange_year_id(cr, uid, ids, year_id, context=context)['value']
vals['year_id'] = year_id
vals['program_id'] = param[0]
vals['stage_id'] = param[1]
vals['name'] = 'Об установлении учебной нагрузки'  # Russian: "On establishing the teaching workload"
work_order_id = work_order_obj.create(cr, uid, vals, context=context)
cr.execute("""
SELECT
time_id,
date_start,
date_stop
FROM
edu_schedule_line
WHERE
year_id = %s AND
program_id = %s AND
state = 'approved'
""",(year_id, param[0],))
schedule_line = dict(map(lambda x: (x[0], (x[1],x[2])), cr.fetchall()))
for module_work in module_work_obj.browse(cr, uid, module_work_ids, context = context):
cr.execute("""
SELECT
id
FROM
edu_student_program
WHERE
id IN %s AND
program_id = %s AND
stage_id = %s AND
plan_id IN %s
""",(tuple(ids), param[0], param[1], tuple(plan.id for plan in module_work.module_id.plan_ids)))
st_program_ids = [r[0] for r in cr.fetchall()]
work_ids = work_obj.search(cr, uid, [('modulework_id','=',module_work.id),('order_id','=',work_order_id)], context=context)
if len(work_ids):
dates = schedule_line.get(module_work.time_id.id,(False, False))
work_obj.write(cr, uid, work_ids, {
'date_start': dates[0],
'date_stop': dates[1],
'st_program_ids': [(6, 0, st_program_ids)]
}, context=context)
else:
vals = work_obj.onchange_modulework_id(cr, uid, ids, module_work.id, context=context)['value']
vals['order_id'] = work_order_id
vals['modulework_id'] = module_work.id
dates = schedule_line.get(module_work.time_id.id,(False, False))
vals['date_start'] = dates[0]
vals['date_stop'] = dates[1]
vals['st_program_ids'] = [(6, 0, st_program_ids)]
work_obj.create(cr, uid, vals, context = context)
return True
# Fields
_columns = {
'code': fields.char(
'Code',
size = 32,
required = True,
readonly = True,
states = {'draft': [('readonly',False)]},
),
'name': fields.function(
_name_get_fnc,
type='char',
string = 'Name',
store = {
'edu.student.program': (lambda self, cr, uid, ids, c={}: ids, ['code', 'student_id'], 10),
'res.partner': (_update_list_by_student, ['name'], 20),
},
readonly = True,
),
'student_id': fields.many2one(
'res.partner',
'Student',
domain="[('student','=',True)]",
required = True,
readonly = True,
states = {'draft': [('readonly',False)]},
track_visibility='onchange',
),
'program_id': fields.many2one(
'edu.program',
'Education Program',
required = True,
readonly = True,
states = {'draft': [('readonly',False)]},
track_visibility='onchange',
),
'speciality_id': fields.related(
'program_id',
'speciality_id',
type='many2one',
relation = 'edu.speciality',
string = 'Speciality',
store = True,
readonly = True,
),
'mode_id': fields.related(
'program_id',
'mode_id',
type='many2one',
relation = 'edu.mode',
string = 'Mode Of Study',
store = True,
readonly = True,
),
'group_id': fields.many2one(
'edu.group',
'Group',
track_visibility='onchange',
),
'plan_id': fields.many2one(
'edu.plan',
'Training Plan',
readonly = True,
states = {'draft': [('readonly',False)]},
track_visibility='onchange',
),
'stage_id': fields.many2one(
'edu.stage',
'Stage',
readonly = True,
required = True,
states = {'draft': [('readonly',False)]},
track_visibility='onchange',
),
'color': fields.integer(
'Color Index',
),
'status': fields.selection(
[
('student', 'Student'),
('listener', 'Listener'),
],
'Status',
required = True,
readonly = True,
states = {'draft': [('readonly',False)]},
track_visibility='onchange',
),
'grade_ids': fields.many2many(
'edu.grade',
'edu_student_program_grade_rel',
'st_program_id',
'grade_id',
'Grades',
readonly = True,
),
'work_ids': fields.many2many(
'edu.work',
'edu_work_st_program_rel',
'st_program_id',
'work_id',
'Training Work',
readonly = True,
),
'record_ids': fields.many2many(
'edu.record',
'edu_student_program_record_rel',
'st_program_id',
'record_id',
'Record',
readonly = True,
),
'image_medium': fields.related(
'student_id',
'image_medium',
type = 'binary',
string = 'Medium-sized image',
readonly = True,
),
'state': fields.function(
_get_state,
type = 'selection',
selection = EDU_STATES,
string = 'State',
store = {
'edu.student.program': (lambda self, cr, uid, ids, c={}: ids, ['stage_id'], 10),
'edu.stage': (_update_list_by_stage, ['state'], 20),
},
readonly = True,
),
}
# Default Values
def _get_default_stage_id(self, cr, uid, context=None):
""" Gives default stage_id """
stage_ids = self.pool.get('edu.stage').search(cr, uid, [('state','=','draft')], order='sequence', context=context)
if stage_ids:
return stage_ids[0]
return False
_defaults = {
'stage_id': _get_default_stage_id,
'state': 'draft',
'status': 'student',
'code': '/',
}
# SQL Constraints
_sql_constraints = [
('student_program_uniq', 'unique(student_id, program_id)', 'Program must be unique per Student!'),
('code_uniq', 'unique(code)', 'Code must be unique!')
]
# Sorting Order
_order = 'program_id,stage_id,group_id,student_id'
|
This sticker is 6 inches wide by 2.5 inches tall when applied. It features the phrase, BULLDOG MOM, in blue letters with black outlining on a white background; a gray bulldog mascot emblem wearing a spiked, blue collar replaces the O in BULLDOG. This sticker is cut with a slight contour, so minimal white border will remain after application. The pink line in the image shows where the sticker is cut; however, the pink line is not printed on the sticker. This carefully crafted product is a great way to share your bulldog spirit!
StickerTalk stickers are made to the highest standards. Your Blue Bulldog Mom Sticker is made using only name brand materials. We start with solvent based inks that are scratch resistant, UV resistant and waterproof. We print that ink onto high quality vinyl that utilizes a special air release technology. This amazing tech incorporates micro channels into the adhesive of the sticker to allow you to smooth out most air bubbles after installation, so there isn’t a need to keep removing and reapplying until you get it right. However, if you do need to remove and reapply the sticker, because it isn’t straight or you just don’t like the location, this special adhesive allows for that too. We have tried many brands and many different product types, and this is the best product available for bumper stickers. To make your new Blue Bulldog Mom Sticker even better and to ensure that it will last for years in most outdoor environments, we laminate every one with a PVC UV resistant film. This provides extra protection against UV rays and scratches and adds more life to your sticker. This step is one that most online sellers skip because it is time consuming and costly. However, at StickerTalk, your satisfaction is our primary concern. We won’t sell a Blue Bulldog Mom Sticker unless we know it is the best we can make.
|
# Copyright 2020 The Johns Hopkins University Applied Physics Laboratory
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from intern.resource.dvid import DataInstanceResource, RepositoryResource
from intern.service.dvid import DVIDService
from intern.utils.parallel import *
from requests import HTTPError
import requests
import numpy as np
import json
import blosc
def check_data_instance(fcn):
"""Decorator that ensures a valid data instance is passed in.
Args:
fcn (function): Function that has a DataInstanceResource as its second argument.
Returns:
(function): Wraps given function with one that checks for a valid data instance.
"""
def wrapper(*args, **kwargs):
if not isinstance(args[1], DataInstanceResource):
raise RuntimeError(
"resource must be an instance of intern.resource.intern.DataInstanceResource."
)
return fcn(*args, **kwargs)
return wrapper
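check_data_instance is hardwired to DataInstanceResource, but the underlying pattern — a decorator that validates an argument's type before the wrapped function runs — is generic. A minimal standalone sketch of that pattern (DummyResource and DummyService are hypothetical stand-ins, not part of intern):

```python
def require_type(expected, position=1):
    """Build a decorator that type-checks the argument at `position`."""
    def decorator(fcn):
        def wrapper(*args, **kwargs):
            if not isinstance(args[position], expected):
                raise RuntimeError(
                    "argument %d must be an instance of %s"
                    % (position, expected.__name__)
                )
            return fcn(*args, **kwargs)
        return wrapper
    return decorator

class DummyResource:
    pass

class DummyService:
    # position=1 checks the first argument after `self`, as check_data_instance does
    @require_type(DummyResource)
    def get(self, resource):
        return "ok"

svc = DummyService()
print(svc.get(DummyResource()))  # a valid resource passes through -> "ok"
```

Wrapping with functools.wraps would additionally preserve the decorated function's name and docstring, something check_data_instance itself skips.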
class VolumeService(DVIDService):
"""VolumeService for DVID service.
"""
def __init__(self, base_url):
"""Constructor.
Args:
base_url (str): Base url (host) of project service.
Raises:
(KeyError): if given invalid version.
"""
DVIDService.__init__(self)
self.base_url = base_url
@check_data_instance
def get_cutout(self, resource, resolution, x_range, y_range, z_range, **kwargs):
"""Download a cutout from DVID data store.
Args:
resource (intern.resource.resource.Resource): Resource compatible
with cutout operations
resolution (int): 0 (not applicable on DVID Resource).
x_range (list[int]): x range such as [10, 20] which means x>=10 and x<20.
y_range (list[int]): y range such as [10, 20] which means y>=10 and y<20.
z_range (list[int]): z range such as [10, 20] which means z>=10 and z<20.
chunk_size (optional Tuple[int, int, int]): The chunk size to request
Returns:
(numpy.array): A 3D numpy matrix in ZYX order.
Raises:
requests.HTTPError
"""
x_size = x_range[1] - x_range[0]
y_size = y_range[1] - y_range[0]
z_size = z_range[1] - z_range[0]
# Make the request
resp = requests.get(
"{}/api/node/{}/{}/raw/0_1_2/{}_{}_{}/{}_{}_{}/octet-stream".format(
self.base_url,
resource.UUID,
resource.name,
x_size,
y_size,
z_size,
x_range[0],
y_range[0],
z_range[0],
)
)
if resp.status_code not in (200, 201):
msg = "Get cutout failed on {}, got HTTP response: ({}) - {}".format(
resource.name, resp.status_code, resp.text
)
raise HTTPError(msg, response=resp)
block = np.frombuffer(resp.content, dtype=resource.datatype)
cutout = block.reshape(z_size, y_size, x_size)
return cutout
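The last three lines of get_cutout convert the raw octet-stream body into an array. A round-trip sketch of that step (sizes invented for illustration): the server sends C-order, x-fastest bytes, and np.frombuffer plus a (z, y, x) reshape recovers the volume.

```python
import numpy as np

# Fabricate a small (z, y, x) volume and serialize it the way DVID would.
x_size, y_size, z_size = 4, 3, 2
volume = np.arange(x_size * y_size * z_size, dtype=np.uint8).reshape(
    z_size, y_size, x_size
)
raw = volume.tobytes(order="C")  # stands in for resp.content

# The same decoding get_cutout performs on the response body.
block = np.frombuffer(raw, dtype=np.uint8)
cutout = block.reshape(z_size, y_size, x_size)
print(cutout.shape)  # (2, 3, 4)
```

Note that np.frombuffer returns a read-only view over the bytes; a caller who needs to mutate the cutout should take a .copy().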
@check_data_instance
def create_cutout(
self, resource, resolution, x_range, y_range, z_range, numpyVolume, send_opts
):
"""Upload a cutout to the volume service.
NOTE: This method will fail if no metadata has been added to the data instance.
Args:
resource (intern.resource.Resource): Resource compatible with cutout operations.
resolution (int): 0 indicates native resolution.
x_range (list[int]): x range such as [10, 20] which means x>=10 and x<20.
y_range (list[int]): y range such as [10, 20] which means y>=10 and y<20.
z_range (list[int]): z range such as [10, 20] which means z>=10 and z<20.
numpyVolume (numpy.array): A 3D or 4D (time) numpy matrix in (time)ZYX order.
send_opts (dictionary): Additional arguments to pass to session.send().
"""
# Check that the data array is C-contiguous; serialization below assumes it
if not numpyVolume.flags["C_CONTIGUOUS"]:
numpyVolume = np.ascontiguousarray(numpyVolume)
blktypes = ["uint8blk", "labelblk", "rgba8blk"]
if resource._type == "tile":
# Compress the data
# NOTE: This is a convenient way for compressing/decompressing NumPy arrays, however
# this method uses pickle/unpickle which means we make additional copies that consume
# a bit of extra memory and time.
compressed = blosc.pack_array(numpyVolume)
url_req = "{}/api/node/{}/{}/tile/xy/{}/{}_{}_{}".format(
self.base_url,
resource.UUID,
resource.name,
resolution,
x_range[0],
y_range[0],
z_range[0],
)
out_data = compressed
# Make the request
elif resource._type in blktypes:
numpyVolume = numpyVolume.tobytes(order="C")
url_req = "{}/api/node/{}/{}/raw/0_1_2/{}_{}_{}/{}_{}_{}".format(
self.base_url,
resource.UUID,
resource.name,
x_range[1] - x_range[0],
y_range[1] - y_range[0],
z_range[1] - z_range[0],
x_range[0],
y_range[0],
z_range[0],
)
out_data = numpyVolume
else:
raise NotImplementedError(
"{} type is not yet implemented in create_cutout".format(resource._type)
)
resp = requests.post(url_req, data=out_data)
if resp.status_code not in (200, 201):
msg = "Create cutout failed on {}, got HTTP response: ({}) - {}".format(
resource.name, resp.status_code, resp.text
)
raise HTTPError(msg, response=resp)
return
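A standalone illustration of the contiguity guard at the top of create_cutout: a transposed array is a non-C-contiguous view, and np.ascontiguousarray makes the C-order copy that the subsequent .tobytes(order="C") serialization expects.

```python
import numpy as np

# A transposed view is F-contiguous, not C-contiguous.
vol = np.arange(24, dtype=np.uint8).reshape(2, 3, 4).T
print(vol.flags["C_CONTIGUOUS"])  # False

# Same guard as create_cutout: copy into C order only when needed.
if not vol.flags["C_CONTIGUOUS"]:
    vol = np.ascontiguousarray(vol)
print(vol.flags["C_CONTIGUOUS"])  # True

payload = vol.tobytes(order="C")  # 24 bytes, ready to POST
```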
|
As much as holiday shopping is a thrill-filled function, it can also overwhelm amid the glut of retail marketing messages asserting deals, steals and grand gifting opportunities galore. With so much to choose from, here’s a short list of a few tried-and-true gifts, gets and getaways that are well-suited for a holiday spending spree.
Funcl is a hardware startup devoted to making true wireless headphones accessible to everyone with price points starting at $19. The company recently launched two models: the Funcl W1 and Funcl AI. The Funcl W1 feature touch buttons on the headphones, which let you control the music and easily answer or reject calls. The Funcl W1 boasts integrated, sophisticated audio engineering to support AAC and provide good sound quality and bass performance for both calls and music. Designed by the company’s award-winning industrial design team, these earbuds also have a unique design for a comfortable fit and attractive aesthetic. Qualcomm chip powered, Funcl’s other model, the AI, provides Hi-Fi sound quality and extremely low latency. Also power friendly, a single charge provides six hours of battery life and, with three extra charges from the case, total battery life is a full 24 hours. Ranked with a waterproof level of IPX5, Funcl AI is well-suited for use during workouts and exercising, whether at the gym or outdoors. A free Funcl AI App also makes the AI voice assistant feature easier and more intuitive to access.
The Anova Precision Cooker Nano is an appliance innovation allowing you to cook delicious, restaurant-quality meals in the comfort of your own home. Paired with the Anova App, which features the world’s largest collection of sous vide recipes, this device makes it easy to cook, control and keep track of your recipe from your mobile phone. Simply choose a recipe, press start, and the Nano does the rest. Used by professional chefs for decades, sous vide is a simple and approachable technique that eliminates overcooked, dried out food and ensures edge-to-edge perfection every time. Whether it’s steak, chicken, fish, vegetables, or even crème brûlée, with the Anova Precision Cooker Nano anyone can cook like a pro!
Lamo is a Southern California-based company known for creating comfortable, high quality sheepskin boots, slippers and moccasins for men, women and children. Lamo footwear is extremely high quality, offered at a value price. With a SoCal fashion-forward feel, Lamo’s stylish and comfortable footwear ensures you’ll walk in style, whether you’re traversing snowy peaks, snagging a few rays at the beach, or even cozying up at home. For at-home coziness I particularly love the Ladies Scuff design, which features a luxurious 100% Australian sheepskin lining for $80. Or choose the ultra-affordable premium faux fur option for just $40—the western print is super cute!
Buckle up for a truly unique gift idea that fuses belts and wallets into one super stylish, extremely efficient wearable. As seen on ABC’s Shark Tank, Wallet Buckle changes the way men and women carry valuable credit cards and cash while out and about. This clever creation keeps these essentials secure and on hand in the most stylish, fun, convenient belt buckle that combines fashion with function. Wallet Buckle stores up to four credit cards and IDs, so it’s a great way to keep these items accessible and prevent them from getting lost or stolen.
Beauty gifts are a burgeoning category and here’s a fantastic idea in kind: The Eterno Device—a four-week face lift you can use at home instead of those creams and masks that just don’t work. This brand new, time-saving device is clinically-proven to use NASA’s red and infrared LED Light Therapy, the most innovative, age-defying technology on the planet. Its patented glass head is powerful and gentle, and the device effectively reduces wrinkles, softens fine lines, increases collagen and elastin production, lifts and tightens skin, and improves complexion through its technology. Eterno is a must-have for anyone who wants a fresh glow achieved in the comfort of their own home.
If there’s one company who knows cosmetics and self-care, it’s Shiseido. From the many fabulous things they offer, there are a few standouts this season that I particularly like. First is their ModernMatte Powder Lipstick Expressive Deluxe Mini Set—a limited-edition wardrobe of five mini ModernMatte Powder Lipsticks. They’re all housed in a deluxe acrylic container for the holidays, and the outer package features festive RIBBONESIA art. The shades range from soft neutrals to bold berries to festive reds, ensuring there’s something for every occasion. This creamy, non-drying formula transforms into a weightless powder and wraps lips in a velvety, matte pigment. Now what gal wouldn’t love that? Also from Shiseido is the Ult-Immune Power-Infusing Concentrate. This is Shiseido’s best-selling serum, actually having relaunched in a stronger formula powered by Imu-Generation Technology. It has anti-oxidant rich reishi mushroom and iris root extracts that strengthen skin, restore firmness and defend against daily damage. It’s made to produce fast results, with skin growing 28 percent stronger in just one week.
STICK-AMIS are nifty phone stickers that allow complete hands-free usage of any type of phone or tablet without the need for special case sizes or bulky contraptions. It’s ideal for the perfect selfie or group shot…often on the first click! Patent pending, STICK-AMIS is a super-handy phone accessory that instantly solves the problem of having to fumble with your phone while taking selfies, group photos, watching videos or even video conferencing. In addition, STICK-AMIS doubles as an instant stand and can eliminate the need for oft cumbersome accessories like pop-sockets, selfie-sticks and tripods! Use STICK-AMIS to also avoid awkward angles, shadows and limited range of your phone or tablet camera.
For the home improvement and child safety-minded is Shhhtop, which is better than conventional door dampers and stoppers for ensuring doors don't slam shut--even when other doors and windows are open to let in fresh air. Beyond the annoyance and stress from slamming doors, they're also a safety hazard to children, the elderly and pets. Shhhtop installs easily with no drilling or damage to door components. It's also discreet and invisible once installed.
Whether as a holiday getaway for yourself or an experiential gift for another, the Renaissance Indian Wells Resort & Spa is a splendid desert home-away-from-home for the holidays courtesy of a range of events and activities inspired by holiday storybooks and tales. This December, there are some super special offerings at this Palm Springs-area destination, including: a giant walk-in snow globe for family photo ops; kids' craft classes with Nutcracker, Grinch and other themes; nightly free holiday movies; a gingerbread house decorating contest; oversized storybook characters; an Elf-on-the-Shelf hunt; desert Santa in flip-flops; as well as holiday tree-lighting and entertainment. Then there’s the ultimate New Year's Eve Glow-in-the-Dark family dance party, which is a huge hit with kids. Their Storybook Holiday package includes $50 per night resort credit as well as milk and cookies and a children’s holiday storybook upon check-in.
Another fabulous holiday or travel gift is Luxe Rodeo Drive Hotel, the only hotel on Beverly Hills’ iconic Rodeo Drive. A blend of modern style, relaxed Southern California spirit and the elegance of Beverly Hills, this boutique hotel features inspired spaces from famed designer Vicente Wolf and genuine hospitality from a dedicated staff. Revel in unmatched access to world-class shopping, dining and entertainment, and exclusive Luxe Club amenities. Like a home-away-from-home, the Luxe Club benefits include delicious breakfasts, light lunches, evening tastings with happy hour house wines and cocktails to ensure every craving and necessity is met. The hotel also features complimentary bicycle rentals, a 2nd floor outdoor mezzanine and rooftop that provides incredible 360-degree views of the Los Angeles hills and skyline. The rooftop also has a Fitness Center and hosts “Cinema Under the Stars” events on the weekend. Luxe Rodeo Drive Hotel is a premier choice for an immersive Beverly Hills holiday.
|
# -*- coding: utf-8 -*-
#-------------------------------------------------------------------------
# Script for downloading files from any server supported on pelisalacarta
# http://blog.tvalacarta.info/plugin-xbmc/pelisalacarta/
#-------------------------------------------------------------------------
import re,urllib,urllib2,sys,os
sys.path.append ("lib")
from core import config
config.set_setting("debug","true")
from core import scrapertools
from core import downloadtools
from core.item import Item
from servers import servertools
def download_url(url,titulo,server):
url = url.replace("\\","")
print "Analyzing link "+url
# Identify the server
if server=="":
itemlist = servertools.find_video_items(data=url)
if len(itemlist)==0:
print "Unable to identify the link"
return
item = itemlist[0]
print "It is a link on "+item.server
else:
item = Item()
item.server = server
# Get the download URLs
video_urls, puedes, motivo = servertools.resolve_video_urls_for_playing(item.server,url)
if len(video_urls)==0:
print "Nothing found to download"
return
# Download the best-quality one, as pelisalacarta does
print "Downloading..."
print video_urls
devuelve = downloadtools.downloadbest(video_urls,titulo,continuar=True)
if __name__ == "__main__":
url = sys.argv[1]
title = sys.argv[2]
if len(sys.argv)>=4:
server = sys.argv[3]
else:
server = ""
if title.startswith("http://") or title.startswith("https://"):
url = sys.argv[2]
title = sys.argv[1]
download_url(url,title,server)
|
This is very true because in psychology we have studied that people tend to conform to a group. In one classic study (Solomon Asch's line-judgment experiment), there were six people in a group: one was the real subject and the rest were confederates of the researcher. When the researcher asked them which line on the board was longer, all of the confederates deliberately said the shorter line was the longer one. This was an obviously wrong answer, but surprisingly the subject gave the same answer as the other men. Why? Because he wanted to get along and be a part of the group rather than an outsider or an outlier. This phenomenon is called group conformity.
We have experienced this in our schools. In middle school, friends will threaten other friends: if you do not follow along or do what they are doing, then you are not part of the group anymore. So to stay in the group, a person may need to do something crazy, and many still do it just to belong. Many times it has a negative consequence rather than a positive one.
I would say listen to your heart. Do not let the voices of others interfere with what you want; doing what you as an individual want is very important. In the last paragraph Lewis says, “The quest of the Inner Ring will break your hearts unless you break it.” We need to break that desire to be in the inner ring because if we don’t, it will destroy us.
|
"""The first hint for this problem is the title of the webpage: 'peak hell'.
When pronounced, it sounds very similar to 'pickle', which is the builtin
python object serialization package. When viewing the source code of the
webpage, there is a 'peakhell' tag that links to a pickle file. We'll download
the file (prompting the user if they are okay with deserializing the file) then
view its contents."""
import pickle
import requests
import webbrowser
from bs4 import BeautifulSoup
webpage = "http://www.pythonchallenge.com/pc/def/peak.html"
r = requests.get(webpage)
soup = BeautifulSoup(r.content, "html.parser")
peakhell = soup.find("peakhell")["src"]
split_page = webpage.split("peak.html")
pickle_file = f"{split_page[0]}{peakhell}"
r = requests.get(pickle_file)
with open(peakhell, "wb") as fp:
fp.write(r.content)
# Unpickling untrusted data can execute arbitrary code, so ask first
# (as promised in the module docstring).
if input("Deserialize the downloaded pickle file? [y/N] ").strip().lower() != "y":
raise SystemExit("Aborted.")
# Print out each line to the console.
with open(peakhell, "rb") as fp:
msg = pickle.load(fp)
line = ""
for lst in msg:
for tup in lst:
line += tup[0] * tup[1]
print(line)
line = ""
print("opening new webpage...")
split_page = webpage.split("peak.html")
new_page = f"{split_page[0]}channel.html"
webbrowser.open(new_page)
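The nested loop above is a run-length decode: the pickled payload is a list of rows, each row a list of (character, count) pairs. The same logic on a tiny invented sample (not the real puzzle data):

```python
# Each tuple expands to `count` copies of `char`; joining a row yields one line.
sample = [
    [("#", 3), (" ", 2), ("#", 1)],
    [(" ", 1), ("#", 4)],
]
decoded = ["".join(ch * n for ch, n in row) for row in sample]
for line in decoded:
    print(line)
# ###  #
#  ####
```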
|
The Nike Kyrie 2 will be releasing in a Black History Month colorway on January 18, 2016. The strap, laces and outsole feature a vivid Pan African-inspired color palette, with custom geometric motifs. The Kyrie and BHM logos will be in blue, while the insole has a statement that reads “The Power of One” in a unique style.
Juan Martinez, Editor-In-Chief: Well, I guess we don’t need Nike’s official preview thanks to this boatload of images for the Nike Kyrie 2 BHM. If there are any changes to the shoe, they will most likely be very minor since it’s only a few months away from release. I’m still not sold on the Kyrie 2 as a whole based on all the leaked pics because it has too much of a KD 4 vibe to it, kind of like how the Kyrie 1 might as well have been called the Hyperfuse 2015. I am surprised to see Nike go with a multi-color vibe for their BHM collection after a few years of going with a more subdued take.
Andres Carrillo, News: I like how the graphic/print was only placed on the strap and not the entire shoe, it makes the shoe a lot more wearable.
|
# -*- coding: utf-8 -*-
#------------------------------------------------------------------------------
# file: $Id$
# auth: metagriffin <mg.github@uberdev.org>
# date: 2012/06/14
# copy: (C) Copyright 2012-EOT metagriffin -- see LICENSE.txt
#------------------------------------------------------------------------------
# This software is free software: you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This software is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
#------------------------------------------------------------------------------
'''
The ``pysyncml.model.store`` provides a SyncML datastore abstraction
via the :class:`pysyncml.model.store.Store` class, which includes both
the datastore meta information and, if the datastore is local, an
agent to execute data interactions.
'''
import sys, json, logging
import xml.etree.ElementTree as ET
from sqlalchemy import Column, Integer, Boolean, String, Text, ForeignKey
from sqlalchemy.orm import relation, synonym, backref
from sqlalchemy.orm.exc import NoResultFound
from .. import common, constants, ctype
log = logging.getLogger(__name__)
#------------------------------------------------------------------------------
def decorateModel(model):
#----------------------------------------------------------------------------
class Store(model.DatabaseObject):
allSyncTypes = [
constants.SYNCTYPE_TWO_WAY,
constants.SYNCTYPE_SLOW_SYNC,
constants.SYNCTYPE_ONE_WAY_FROM_CLIENT,
constants.SYNCTYPE_REFRESH_FROM_CLIENT,
constants.SYNCTYPE_ONE_WAY_FROM_SERVER,
constants.SYNCTYPE_REFRESH_FROM_SERVER,
constants.SYNCTYPE_SERVER_ALERTED,
]
adapter_id = Column(Integer, ForeignKey('%s_adapter.id' % (model.prefix,),
onupdate='CASCADE', ondelete='CASCADE'),
nullable=False, index=True)
adapter = relation('Adapter', backref=backref('_stores', # order_by=id,
cascade='all, delete-orphan',
passive_deletes=True))
uri = Column(String(4095), nullable=False, index=True)
displayName = Column(String(4095))
_syncTypes = Column('syncTypes', String(4095)) # note: default set in __init__
maxGuidSize = Column(Integer) # note: default set in __init__
maxObjSize = Column(Integer) # note: default set in __init__
_conflictPolicy = Column('conflictPolicy', Integer)
agent = None
@property
def syncTypes(self):
return json.loads(self._syncTypes or 'null')
@syncTypes.setter
def syncTypes(self, types):
self._syncTypes = json.dumps(types)
@property
def contentTypes(self):
if self.agent is not None:
return self.agent.contentTypes
return self._contentTypes
@property
def conflictPolicy(self):
if self._conflictPolicy is not None:
return self._conflictPolicy
# todo: this assumes that this store is the local one...
return self.adapter.conflictPolicy
@conflictPolicy.setter
def conflictPolicy(self, policy):
self._conflictPolicy = policy
@property
def peer(self):
return self.getPeerStore()
#--------------------------------------------------------------------------
def getPeerStore(self, adapter=None):
if not self.adapter.isLocal:
if adapter is None:
# todo: implement this...
raise common.InternalError('local adapter is required for call to remoteStore.getPeerStore()')
uri = adapter.router.getSourceUri(self.uri, mustExist=False)
if uri is None:
return None
return adapter.stores[uri]
if self.adapter.peer is None:
return None
ruri = self.adapter.router.getTargetUri(self.uri, mustExist=False)
if ruri is None:
return None
return self.adapter.peer.stores[ruri]
#--------------------------------------------------------------------------
def __init__(self, **kw):
# TODO: this is a little hack... it is because the .merge() will
# otherwise override valid values with null values when the merged-in
# store has not been flushed, and because this is a valid value,
# open flush, is being nullified. ugh.
# NOTE: the default is set here, not in the Column() definition, so that
# NULL values remain NULL during a flush) - since they are valid.
self._syncTypes = kw.get('syncTypes', json.dumps(Store.allSyncTypes))
self.maxGuidSize = kw.get('maxGuidSize', common.getAddressSize())
self.maxObjSize = kw.get('maxObjSize', common.getMaxMemorySize())
super(Store, self).__init__(**kw)
#----------------------------------------------------------------------------
def __repr__(self):
ret = '<Store "%s": uri=%s' % (self.displayName or self.uri, self.uri)
if self.maxGuidSize is not None:
ret += '; maxGuidSize=%d' % (self.maxGuidSize,)
if self.maxObjSize is not None:
ret += '; maxObjSize=%d' % (self.maxObjSize,)
if self.syncTypes is not None and len(self.syncTypes) > 0:
ret += '; syncTypes=%s' % (','.join([str(st) for st in self.syncTypes]),)
if self.contentTypes is not None and len(self.contentTypes) > 0:
ret += '; contentTypes=%s' % (','.join([str(ct) for ct in self.contentTypes]),)
return ret + '>'
#----------------------------------------------------------------------------
def merge(self, store):
if self.uri != store.uri:
raise common.InternalError('unexpected merging of stores with different URIs (%s != %s)'
% (self.uri, store.uri))
self.displayName = store.displayName
if cmp(self._contentTypes, store._contentTypes) != 0:
# todo: this is a bit drastic... perhaps have an operational setting
# which controls how paranoid to be?...
self.binding = None
self._contentTypes = [e.clone() for e in store._contentTypes]
self.syncTypes = store.syncTypes
self.maxGuidSize = store.maxGuidSize
self.maxObjSize = store.maxObjSize
self.agent = store.agent
return self
#----------------------------------------------------------------------------
def clearChanges(self):
if self.adapter.isLocal:
# TODO: THIS NEEDS TO BE SIGNIFICANTLY OPTIMIZED!... either:
# a) optimize this reverse lookup, or
# b) use a query that targets exactly the set of stores needed
# note that a pre-emptive model.session.flush() may be necessary.
for peer in self.adapter.getKnownPeers():
for store in peer._stores:
if store.binding is not None and store.binding.uri == self.uri:
store.clearChanges()
return
if self.id is None:
model.session.flush()
model.Change.q(store_id=self.id).delete()
#----------------------------------------------------------------------------
def registerChange(self, itemID, state, changeSpec=None, excludePeerID=None):
if self.adapter.isLocal:
# TODO: THIS NEEDS TO BE SIGNIFICANTLY OPTIMIZED!... either:
# a) optimize this reverse lookup, or
# b) use a query that targets exactly the set of stores needed
# note that a pre-emptive model.session.flush() may be necessary.
for peer in self.adapter.getKnownPeers():
if excludePeerID is not None and peer.id == excludePeerID:
continue
for store in peer._stores:
if store.binding is not None and store.binding.uri == self.uri:
store.registerChange(itemID, state, changeSpec=changeSpec)
return
if self.id is None:
model.session.flush()
itemID = str(itemID)
change = None
if changeSpec is not None:
try:
change = model.Change.q(store_id=self.id, itemID=itemID).one()
change.state = state
if change.changeSpec is not None:
change.changeSpec += ';' + changeSpec
if len(change.changeSpec) > model.Change.c.changeSpec.type.length:
change.changeSpec = None
except NoResultFound:
change = None
if change is None:
model.Change.q(store_id=self.id, itemID=itemID).delete()
change = model.Change(store_id=self.id, itemID=itemID,
state=state, changeSpec=changeSpec)
model.session.add(change)
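The changeSpec handling above (append with ';', then drop the spec entirely once it overflows the column) is easier to see in isolation. A minimal sketch, assuming the 4095-character cap of the Change.changeSpec column below; the helper name and sample specs are invented for illustration:

```python
MAX_SPEC_LEN = 4095  # mirrors model.Change.changeSpec's String(4095)

def merge_change_spec(current, new):
    # No pending change spec yet: just record the new one.
    if current is None:
        return new
    merged = current + ';' + new
    # Overflow: drop the spec entirely, forcing a full-item sync later.
    return merged if len(merged) <= MAX_SPEC_LEN else None

print(merge_change_spec('mod:name', 'mod:phone'))  # mod:name;mod:phone
print(merge_change_spec('x' * 4090, 'mod:phone'))  # None
```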
#----------------------------------------------------------------------------
def getRegisteredChanges(self):
return model.Change.q(store_id=self.id)
#----------------------------------------------------------------------------
def describe(self, s1):
s2 = common.IndentStream(s1)
s3 = common.IndentStream(s2)
print >>s1, self.displayName or self.uri
print >>s2, 'URI:', self.uri
print >>s2, 'Sync types:', ','.join([str(e) for e in self.syncTypes or []])
print >>s2, 'Max ID size:', self.maxGuidSize or '(none)'
print >>s2, 'Max object size:', self.maxObjSize or '(none)'
print >>s2, 'Capabilities:'
for cti in self.contentTypes or []:
cti.describe(s3)
#----------------------------------------------------------------------------
def toSyncML(self):
xstore = ET.Element('DataStore')
if self.uri is not None:
ET.SubElement(xstore, 'SourceRef').text = self.uri
if self.displayName is not None:
ET.SubElement(xstore, 'DisplayName').text = self.displayName
if self.maxGuidSize is not None:
# todo: this should ONLY be sent by the client... (according to the
# spec, but not according to funambol behavior...)
ET.SubElement(xstore, 'MaxGUIDSize').text = str(self.maxGuidSize)
if self.maxObjSize is not None:
ET.SubElement(xstore, 'MaxObjSize').text = str(self.maxObjSize)
if self.contentTypes is not None:
rxpref = [ct for ct in self.contentTypes if ct.receive and ct.preferred]
if len(rxpref) > 1:
raise common.InvalidAgent('agents can prefer at most one rx content-type, not %r' % (rxpref,))
if len(rxpref) == 1:
for idx, xnode in enumerate(rxpref[0].toSyncML('Rx-Pref', uniqueVerCt=True)):
if idx != 0:
xnode.tag = 'Rx'
xstore.append(xnode)
for rx in [ct for ct in self.contentTypes if ct.receive and not ct.preferred]:
for xnode in rx.toSyncML('Rx', uniqueVerCt=True):
xstore.append(xnode)
txpref = [ct for ct in self.contentTypes if ct.transmit and ct.preferred]
if len(txpref) > 1:
raise common.InvalidAgent('agents can prefer at most one tx content-type, not %r' % (txpref,))
if len(txpref) == 1:
for idx, xnode in enumerate(txpref[0].toSyncML('Tx-Pref', uniqueVerCt=True)):
if idx != 0:
xnode.tag = 'Tx'
xstore.append(xnode)
for tx in [ct for ct in self.contentTypes if ct.transmit and not ct.preferred]:
for xnode in tx.toSyncML('Tx', uniqueVerCt=True):
xstore.append(xnode)
if self.syncTypes is not None and len(self.syncTypes) > 0:
xcap = ET.SubElement(xstore, 'SyncCap')
for st in self.syncTypes:
ET.SubElement(xcap, 'SyncType').text = str(st)
return xstore
#----------------------------------------------------------------------------
@staticmethod
def fromSyncML(xnode):
store = model.Store()
store.uri = xnode.findtext('SourceRef')
store.displayName = xnode.findtext('DisplayName')
store.maxGuidSize = xnode.findtext('MaxGUIDSize')
if store.maxGuidSize is not None:
store.maxGuidSize = int(store.maxGuidSize)
store.maxObjSize = xnode.findtext('MaxObjSize')
if store.maxObjSize is not None:
store.maxObjSize = int(store.maxObjSize)
store.syncTypes = [int(x.text) for x in xnode.findall('SyncCap/SyncType')]
store._contentTypes = []
for child in xnode:
if child.tag not in ('Tx-Pref', 'Tx', 'Rx-Pref', 'Rx'):
continue
cti = model.ContentTypeInfo.fromSyncML(child)
for curcti in store._contentTypes:
if curcti.merge(cti):
break
else:
store._contentTypes.append(cti)
return store
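The parsing in fromSyncML() can be exercised without the model classes; a standalone ElementTree sketch with an invented DataStore element:

```python
import xml.etree.ElementTree as ET

xml = """<DataStore>
  <SourceRef>./contacts</SourceRef>
  <MaxGUIDSize>64</MaxGUIDSize>
  <SyncCap><SyncType>1</SyncType><SyncType>2</SyncType></SyncCap>
</DataStore>"""

xnode = ET.fromstring(xml)
uri = xnode.findtext('SourceRef')
maxGuidSize = xnode.findtext('MaxGUIDSize')
if maxGuidSize is not None:
    maxGuidSize = int(maxGuidSize)
syncTypes = [int(x.text) for x in xnode.findall('SyncCap/SyncType')]
print(uri, maxGuidSize, syncTypes)  # ./contacts 64 [1, 2]
```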
#----------------------------------------------------------------------------
class ContentTypeInfo(model.DatabaseObject, ctype.ContentTypeInfoMixIn):
store_id = Column(Integer, ForeignKey('%s_store.id' % (model.prefix,),
onupdate='CASCADE', ondelete='CASCADE'),
nullable=False, index=True)
store = relation('Store', backref=backref('_contentTypes', # order_by=id,
cascade='all, delete-orphan',
passive_deletes=True))
ctype = Column(String(4095))
_versions = Column('versions', String(4095))
preferred = Column(Boolean, default=False)
transmit = Column(Boolean, default=True)
receive = Column(Boolean, default=True)
@property
def versions(self):
return json.loads(self._versions or 'null')
@versions.setter
def versions(self, types):
self._versions = json.dumps(types)
def clone(self):
# TODO: this should be moved into `model.DatabaseObject`
# see:
# https://groups.google.com/forum/?fromgroups#!topic/sqlalchemy/bhYvmnRpegE
# http://www.joelanman.com/2008/09/making-a-copy-of-a-sqlalchemy-object/
return ContentTypeInfo(ctype=self.ctype, _versions=self._versions,
preferred=self.preferred, transmit=self.transmit, receive=self.receive)
def __str__(self):
return ctype.ContentTypeInfoMixIn.__str__(self)
def __repr__(self):
return ctype.ContentTypeInfoMixIn.__repr__(self)
def __cmp__(self, other):
for attr in ('ctype', 'versions', 'preferred', 'transmit', 'receive'):
ret = cmp(getattr(self, attr), getattr(other, attr))
if ret != 0:
return ret
return 0
#----------------------------------------------------------------------------
class Binding(model.DatabaseObject):
# todo: since store <=> binding is one-to-one, shouldn't this be a primary key?...
store_id = Column(Integer, ForeignKey('%s_store.id' % (model.prefix,),
onupdate='CASCADE', ondelete='CASCADE'),
nullable=False, index=True)
targetStore = relation('Store', backref=backref('binding', uselist=False,
cascade='all, delete-orphan',
passive_deletes=True))
# todo: this uri *could* be replaced by an actual reference to the Store object...
# and then the getSourceStore() method can go away...
# *BUT* this would require a one-to-many Adapter<=>Adapter relationship...
uri = Column(String(4095), nullable=True)
autoMapped = Column(Boolean)
sourceAnchor = Column(String(4095), nullable=True)
targetAnchor = Column(String(4095), nullable=True)
def getSourceStore(self, adapter):
return adapter.stores[self.uri]
#----------------------------------------------------------------------------
class Change(model.DatabaseObject):
store_id = Column(Integer, ForeignKey('%s_store.id' % (model.prefix,),
onupdate='CASCADE', ondelete='CASCADE'),
nullable=False, index=True)
# store = relation('Store', backref=backref('changes',
# cascade='all, delete-orphan',
# passive_deletes=True))
itemID = Column(String(4095), index=True, nullable=False)
state = Column(Integer)
registered = Column(Integer, default=common.ts)
changeSpec = Column(String(4095))
model.Store = Store
model.ContentTypeInfo = ContentTypeInfo
model.Binding = Binding
model.Change = Change
#------------------------------------------------------------------------------
# end of $Id$
#------------------------------------------------------------------------------
|
- The apartment is big (180 m²); it's perfect for families, groups, and couples.
- The area where you will be staying is called Pyramids Gardens; it's very close to the Giza Pyramids and all the 5-star hotels in Giza. The area is gated and there is a doorman at your service, so it's totally safe.
- Less than a 5-minute walk from the apartment you can find supermarkets, a pharmacy, coffee shops, and all kinds of restaurants; our apartment is on the main road, so it's easy to go everywhere.
Mahmoud is a Superhost: an experienced, highly rated host committed to providing great stays for guests.
|
import sys, os
sys.path.append(os.path.dirname(os.path.abspath(__file__))+"/..")
import Core.SentenceGraph as SentenceGraph
from Utils.ProgressCounter import ProgressCounter
from FindHeads import findHeads
import Utils.ElementTreeUtils as ETUtils
import Utils.InteractionXML.CorpusElements
import Utils.Range as Range
import Utils.Libraries.PorterStemmer as PorterStemmer
def getTriggers(corpus):
"""
Returns a dictionary of "entity type"->"entity text"->"count"
"""
corpus = ETUtils.ETFromObj(corpus)
trigDict = {}
for entity in corpus.getroot().getiterator("entity"):
if entity.get("given") == "True":
continue
eType = entity.get("type")
if not trigDict.has_key(eType):
trigDict[eType] = {}
eText = entity.get("text")
eText = PorterStemmer.stem(eText)
if not trigDict[eType].has_key(eText):
trigDict[eType][eText] = 0
trigDict[eType][eText] += 1
return trigDict
def getDistribution(trigDict):
"""
Converts a dictionary of "entity type"->"entity text"->"count"
to "entity text"->"entity type"->"(count, fraction)"
"""
distDict = {}
eTypes = trigDict.keys()
for eType in trigDict.keys():
for string in trigDict[eType].keys():
if not distDict.has_key(string):
distDict[string] = {}
for e in eTypes:
distDict[string][e] = [0, None]
distDict[string][eType] = [trigDict[eType][string], None]
# define ratios
for string in distDict.keys():
count = 0.0
for eType in distDict[string].keys():
count += distDict[string][eType][0]
for eType in distDict[string].keys():
distDict[string][eType][1] = distDict[string][eType][0] / count
return distDict
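To make the conversion concrete, here is a self-contained sketch of the same computation on invented trigger counts (Python 3 idioms rather than the has_key style above):

```python
trigDict = {
    "Phosphorylation": {"phosphoryl": 3, "addit": 1},
    "Regulation": {"addit": 3},
}

distDict = {}
eTypes = list(trigDict)
for eType, strings in trigDict.items():
    for string, count in strings.items():
        if string not in distDict:
            distDict[string] = {e: [0, None] for e in eTypes}
        distDict[string][eType] = [count, None]

# define ratios: each type's share of the string's total count
for string, byType in distDict.items():
    total = float(sum(pair[0] for pair in byType.values()))
    for pair in byType.values():
        pair[1] = pair[0] / total

print(distDict["addit"])  # {'Phosphorylation': [1, 0.25], 'Regulation': [3, 0.75]}
```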
def getHeads(corpus):
corpus = ETUtils.ETFromObj(corpus)
headDict = {}
headDict["None"] = {}
for sentence in corpus.getiterator("sentence"):
sentenceText = sentence.get("text") # needed for slicing out head texts below
tokens = [x for x in sentence.getiterator("token")] # needed for the "None" counts below
headOffsetStrings = set()
for entity in sentence.findall("entity"):
eType = entity.get("type")
if not headDict.has_key(eType):
headDict[eType] = {}
eText = entity.get("text")
headOffset = entity.get("headOffset")
headOffsetStrings.add(headOffset)
headOffset = Range.charOffsetToSingleTuple(headOffset)
charOffset = Range.charOffsetToSingleTuple(entity.get("charOffset"))
if headOffset == charOffset:
if not headDict[eType].has_key(eText): headDict[eType][eText] = 0
headDict[eType][eText] += 1
else:
headText = sentenceText[headOffset[0]-charOffset[0]:headOffset[1]-charOffset[0]+1]
if not headDict[eType].has_key(headText): headDict[eType][headText] = 0
headDict[eType][headText] += 1
for token in tokens:
if not token.get("charOffset") in headOffsetStrings: # token is not the head of any entity
headText = token.get("text")
if not headDict["None"].has_key(headText): headDict["None"][headText] = 0
headDict["None"][headText] += 1
return headDict
def getOverlap():
pass
def removeHeads(corpus):
print >> sys.stderr, "Removing existing head offsets"
removeCount = 0
xml = ETUtils.ETFromObj(corpus)
for d in xml.getroot().findall("document"):
for s in d.findall("sentence"):
for e in s.findall("entity"):
if e.get("headOffset") != None:
removeCount += 1
del e.attrib["headOffset"]
print >> sys.stderr, "Removed head offsets from", removeCount, "entities"
return [0, removeCount]
def findHeads(corpus, stringsFrom, methods, parse, tokenization):
for m in methods:
assert m in ["REMOVE", "SYNTAX", "DICT"]
corpus = ETUtils.ETFromObj(corpus)
counts = {}
for method in methods:
print >> sys.stderr, method, "pass"
if method == "REMOVE":
counts[method] = removeHeads(corpus)
elif method == "DICT":
counts[method] = findHeadsDictionary(corpus, stringsFrom, parse, tokenization)
elif method == "SYNTAX":
counts[method] = findHeadsSyntactic(corpus, parse, tokenization)
print >> sys.stderr, method, "pass added", counts[method][0], "and removed", counts[method][1], "heads"
print >> sys.stderr, "Summary (pass/added/removed):"
for method in methods:
print >> sys.stderr, " ", method, "/", counts[method][0], "/", counts[method][1]
def mapSplits(splits, string, stringOffset):
"""
Maps substrings to a string, and stems them
"""
begin = 0
tuples = []
for split in splits:
offset = string.find(split, begin)
assert offset != -1
tuples.append( (split, PorterStemmer.stem(split), (offset,len(split))) )
begin = offset + len(split)
return tuples
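A quick standalone illustration of the offset bookkeeping in mapSplits(), with stemming left out (identity instead of the Porter stemmer); the entity text is invented:

```python
def mapSplitsNoStem(splits, string):
    # Same bookkeeping as mapSplits() above, with stemming omitted.
    begin = 0
    tuples = []
    for split in splits:
        offset = string.find(split, begin)
        assert offset != -1
        tuples.append((split, (offset, len(split))))
        begin = offset + len(split)
    return tuples

text = "protein kinase C"
print(mapSplitsNoStem(text.split(), text))
# [('protein', (0, 7)), ('kinase', (8, 6)), ('C', (15, 1))]
```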
def findHeadsDictionary(corpus, stringsFrom, parse, tokenization):
print "Extracting triggers from", stringsFrom
trigDict = getTriggers(stringsFrom)
print "Determining trigger distribution"
distDict = getDistribution(trigDict)
allStrings = sorted(distDict.keys())
print "Determining heads for", corpus
corpusElements = Utils.InteractionXML.CorpusElements.loadCorpus(corpus, parse, tokenization, removeIntersentenceInteractions=False, removeNameInfo=False)
cases = {}
counts = [0,0]
for sentence in corpusElements.sentences:
#print sentence.sentence.get("id")
sText = sentence.sentence.get("text")
#tokenHeadScores = None
for entity in sentence.entities:
if entity.get("headOffset") != None:
continue
if entity.get("given") == "True": # Only for triggers
continue
#if tokenHeadScores == None:
# tokenHeadScores = getTokenHeadScores(sentence.tokens, sentence.dependencies, sentenceId=sentence.sentence.get("id"))
eText = entity.get("text")
eType = entity.get("type")
eOffset = Range.charOffsetToSingleTuple(entity.get("charOffset"))
wsSplits = eText.split() # Split by whitespace
if len(wsSplits) == 1 and eText.find("-") == -1: # unambiguous head will be assigned by SYNTAX pass
continue
else: # Entity text has multiple (whitespace or hyphen separated) parts
candidates = []
# Try to find entity substring in individual entity strings
for wsTuple in mapSplits(wsSplits, eText, eOffset):
if not distDict.has_key(wsTuple[1]): # string not found, low score
candidates.append( ((-1, -1), wsTuple[2], wsTuple[0], wsTuple[1]) )
else: # String found, more common ones get higher score
assert distDict[wsTuple[1]].has_key(eType), (distDict[wsTuple[0]], wsTuple[0], eText)
candidates.append( (tuple(distDict[wsTuple[1]][eType]), wsTuple[2], wsTuple[0], wsTuple[1]) )
# Split each whitespace-separated string further into hyphen-separated substrings
for candidate in candidates[:]:
hyphenSplits = candidate[2].split("-")
if len(hyphenSplits) > 1: # Substring has a hyphen
# Try to find entity substring in individual entity strings
for hyphenTuple in mapSplits(hyphenSplits, eText, candidate[1]):
if not distDict.has_key(hyphenTuple[1]):
candidates.append( ((-1, -1), hyphenTuple[2], hyphenTuple[0], hyphenTuple[1]) )
else:
candidates.append( (tuple(distDict[hyphenTuple[1]][eType]), hyphenTuple[2], hyphenTuple[0], hyphenTuple[1]) )
# Sort candidates, highest scores come first
candidates.sort(reverse=True)
# If no matches, look for substrings inside words
if candidates[0][0][0] in [-1, 0]: # no matches, look for substrings
print "Substring matching", candidates, "for entity", entity.get("id")
for i in range(len(candidates)):
candidate = candidates[i]
cText = candidate[2]
for string in allStrings:
subStringPos = cText.find(string)
if subStringPos != -1:
print " Substring match", string, cText,
score = tuple(distDict[string][eType])
if score > candidate[0]:
print score, candidate[0], "Substring selected" #, score > candidate[0], score < candidate[0]
subStringCoords = [candidate[1][0] + subStringPos, len(string)]
candidate = (score, subStringCoords, candidate[2], ">"+string+"<")
else:
print score, candidate[0]
candidates[i] = candidate
# Resort after possibly replacing some candidates
candidates.sort(reverse=True)
if candidates[0][0][0] not in [-1, 0]: # if it is in [-1, 0], let SYNTAX pass take care of it
candidateOffset = (candidates[0][1][0] + eOffset[0], candidates[0][1][0] + candidates[0][1][1] + eOffset[0])
entity.set("headOffset", str(candidateOffset[0]) + "-" + str(candidateOffset[1]-1))
entity.set("headMethod", "Dict")
entity.set("headString", sText[candidateOffset[0]:candidateOffset[1]])
counts[0] += 1
# Prepare results for printing
for i in range(len(candidates)):
c = candidates[i]
candidates[i] = (tuple(c[0]), c[2], c[3])
case = (eType, eText, tuple(candidates))
if not cases.has_key(case):
cases[case] = 0
cases[case] += 1
print entity.get("id"), eType + ": '" + eText + "'", candidates
#headToken = getEntityHeadToken(entity, sentence.tokens, tokenHeadScores)
# The ElementTree entity-element is modified by setting the headOffset attribute
#entity.set("headOffset", headToken.get("charOffset"))
#entity.set("headMethod", "Syntax")
print "Cases"
for case in sorted(cases.keys()):
print case, cases[case]
#return corpus
return counts
def findHeadsSyntactic(corpus, parse, tokenization):
"""
Determine the head token for a named entity or trigger. The head token is the token closest
to the root for the subtree of the dependency parse spanned by the text of the element.
@param entityElement: a semantic node (trigger or named entity)
@type entityElement: cElementTree.Element
@param verbose: Print selected head tokens on screen
@param verbose: boolean
"""
counts = [0,0]
sentences = [x for x in corpus.getiterator("sentence")]
counter = ProgressCounter(len(sentences), "SYNTAX")
for sentence in sentences:
counter.update()
tokElement = ETUtils.getElementByAttrib(sentence, "sentenceanalyses/tokenizations/tokenization", {"tokenizer":tokenization})
parseElement = ETUtils.getElementByAttrib(sentence, "sentenceanalyses/parses/parse", {"parser":parse})
if tokElement == None or parseElement == None:
print >> sys.stderr, "Warning, sentence", sentence.get("id"), "missing parse or tokenization"
continue
tokens = tokElement.findall("token")
tokenHeadScores = getTokenHeadScores(tokens, parseElement.findall("dependency"), sentenceId=sentence.get("id"))
for entity in sentence.findall("entity"):
if entity.get("headOffset") == None:
headToken = getEntityHeadToken(entity, tokens, tokenHeadScores)
# The ElementTree entity-element is modified by setting the headOffset attribute
entity.set("headOffset", headToken.get("charOffset"))
entity.set("headMethod", "Syntax")
entity.set("headString", headToken.get("text"))
counts[0] += 1
return counts
def getEntityHeadToken(entity, tokens, tokenHeadScores):
if entity.get("headOffset") != None:
charOffsets = Range.charOffsetToTuples(entity.get("headOffset"))
elif entity.get("charOffset") != "":
charOffsets = Range.charOffsetToTuples(entity.get("charOffset"))
else:
charOffsets = []
# Each entity can consist of multiple syntactic tokens, covered by its
# charOffset-range. One of these must be chosen as the head token.
headTokens = [] # potential head tokens
for token in tokens:
tokenOffset = Range.charOffsetToSingleTuple(token.get("charOffset"))
for offset in charOffsets:
if Range.overlap(offset, tokenOffset):
headTokens.append(token)
if len(headTokens)==1: # An unambiguous head token was found
selectedHeadToken = headTokens[0]
else: # One head token must be chosen from the candidates
selectedHeadToken = findHeadToken(headTokens, tokenHeadScores)
#if verbose:
# print >> sys.stderr, "Selected head:", token.attrib["id"], token.attrib["text"]
assert selectedHeadToken != None, entity.get("id")
return selectedHeadToken
def findHeadToken(candidateTokens, tokenHeadScores):
"""
Select the candidate token that is closest to the root of the subtree of the dependency parse
to which the candidate tokens belong. See the getTokenHeadScores method for the algorithm.
@param candidateTokens: the list of syntactic tokens from which the head token is selected
@type candidateTokens: list of cElementTree.Element objects
"""
if len(candidateTokens) == 0:
return None
highestScore = -9999999
bestTokens = []
for token in candidateTokens:
if tokenHeadScores[token] > highestScore:
highestScore = tokenHeadScores[token]
for token in candidateTokens:
if tokenHeadScores[token] == highestScore:
bestTokens.append(token)
return bestTokens[-1]
def getTokenHeadScores(tokens, dependencies, sentenceId=None):
"""
A head token is chosen using a heuristic that prefers tokens closer to the
root of the dependency parse. In a list of candidate tokens, the one with
the highest score is the head token. The return value of this method
is a dictionary that maps token elements to their scores.
"""
tokenHeadScores = {}
# Give all tokens initial scores
for token in tokens:
tokenHeadScores[token] = 0 # initialize score as zero (unconnected token)
for dependency in dependencies:
if dependency.get("t1") == token.get("id") or dependency.get("t2") == token.get("id"):
tokenHeadScores[token] = 1 # token is connected by a dependency
break
# Give a low score for tokens that clearly can't be head and are probably produced by hyphen-splitter
for token in tokens:
tokenText = token.get("text")
if tokenText == "\\" or tokenText == "/" or tokenText == "-":
tokenHeadScores[token] = -1
# Loop over all dependencies and increase the scores of all governor tokens
# until each governor token has a higher score than its dependent token.
# Some dependencies might form a loop so a list is used to define those
# dependency types used in determining head scores.
depTypesToInclude = ["prep", "nn", "det", "hyphen", "num", "amod", "nmod", "appos", "measure", "dep", "partmod"]
#depTypesToRemoveReverse = ["A/AN"]
modifiedScores = True
loopCount = 0 # loopcount for devel set approx. 2-4
while modifiedScores == True: # loop until the scores no longer change
if loopCount > 20: # survive loops
print >> sys.stderr, "Warning, possible loop in parse for sentence", sentenceId
break
modifiedScores = False
for token1 in tokens:
for token2 in tokens: # for each combination of tokens...
for dep in dependencies: # ... check each dependency
if dep.get("t1") == token1.get("id") and dep.get("t2") == token2.get("id") and (dep.get("type") in depTypesToInclude):
# The governor token of the dependency must have a higher score
# than the dependent token.
if tokenHeadScores[token1] <= tokenHeadScores[token2]:
tokenHeadScores[token1] = tokenHeadScores[token2] + 1
modifiedScores = True
loopCount += 1
return tokenHeadScores
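The governor-scoring fixed point described in the docstring can be shown on a toy parse. The tokens and dependencies below are invented, and every score starts at 1 rather than going through the connectivity and hyphen checks of the real code:

```python
tokens = ["binding", "of", "p53"]
deps = [("binding", "of"), ("of", "p53")]  # (governor, dependent) pairs

# Start every token at 1 (all "connected"); the real code also uses 0
# for unconnected tokens and -1 for hyphen-splitter debris.
scores = {t: 1 for t in tokens}
modified = True
while modified:
    modified = False
    for gov, dep in deps:
        # Every governor must outscore its dependent; iterate to a fixed point.
        if scores[gov] <= scores[dep]:
            scores[gov] = scores[dep] + 1
            modified = True

print(scores)  # {'binding': 3, 'of': 2, 'p53': 1}
```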
if __name__=="__main__":
print >> sys.stderr, "##### Calculating entity head token offsets #####"
from optparse import OptionParser
# Import Psyco if available
try:
import psyco
psyco.full()
print >> sys.stderr, "Found Psyco, using"
except ImportError:
print >> sys.stderr, "Psyco not installed"
optparser = OptionParser(usage="%prog [options]\nRecalculate head token offsets.")
optparser.add_option("-i", "--input", default=None, dest="input", help="Corpus in interaction xml format", metavar="FILE")
optparser.add_option("-o", "--output", default=None, dest="output", help="Output file in interaction xml format.")
optparser.add_option("-d", "--dictionary", default=None, dest="dictionary", help="Corpus file to use as dictionary of entity strings.")
optparser.add_option("-m", "--methods", default=None, dest="methods", help="")
optparser.add_option("-p", "--parse", default="split-McClosky", dest="parse", help="Parse element name for calculating head offsets")
optparser.add_option("-t", "--tokenization", default="split-McClosky", dest="tokenization", help="Tokenization element name for calculating head offsets")
(options, args) = optparser.parse_args()
print >> sys.stderr, "Loading corpus"
corpus = ETUtils.ETFromObj(options.input)
print >> sys.stderr, "Finding heads"
findHeads(corpus, options.dictionary, ["REMOVE", "DICT", "SYNTAX"], options.parse, options.tokenization)
#findHeadsDictionary(corpus, options.parse, options.tokenization)
if options.output != None:
print >> sys.stderr, "Writing corpus"
ETUtils.write(corpus, options.output)
|
Hear ye, hear ye! FX has revealed the title of American Horror Story's eighth season, and it really packs a punch. The next installment, previously confirmed as a Coven/Murder House crossover, will be called Apocalypse, and we can already feel the nightmares creeping up on us thanks to the first promotional images. Anyway, now that we know the end of times are on the horizon, we're doing what we do best: theorizing.
|
from __future__ import print_function, division
import pytest
import myhdl
from myhdl import always, delay, instance, now, StopSimulation
import rhea
from rhea.system import Global
from rhea.cores.video import VideoStream, HDMIExtInterface
from rhea.cores.video import hdmi_xcvr
# a video display model to check the timings
from rhea.models.video import VideoDisplay
from rhea.utils.test import run_testbench
# @todo move cosimulation to cosimulation directory
# from _hdmi_prep_cosim import prep_cosim
# from interfaces import HDMI
def test_hdmi():
""" simple test to demonstrate test framework
"""
@myhdl.block
def bench_hdmi():
glbl = Global()
clock, reset = glbl.clock, glbl.reset
vid = VideoStream()
ext = HDMIExtInterface()
tbdut = hdmi_xcvr(glbl, vid, ext)
# clock for the design
@always(delay(5))
def tbclk():
clock.next = not clock
@instance
def tbstim():
yield delay(13)
reset.next = reset.active
yield delay(33)
reset.next = not reset.active
yield clock.posedge
try:
for ii in range(100):
yield delay(100)
except AssertionError as err:
print("@E: assertion error @ %d ns" % (now(),))
print(" %s" % (str(err),))
# additional simulation cycles after the error
yield delay(111)
raise err
except Exception as err:
print("@E: error occurred")
print(" %s" % (str(err),))
raise err
raise StopSimulation
return tbclk, tbstim
# run the above test
run_testbench(bench_hdmi)
|
"Tangy, sweet old fashioned baked beans, are made the easy way in this side dish. This is my grandma's favorite semi home-made recipe. She makes it every Thanksgiving, and we usually end up scraping the pan clean! Never any leftovers!"
In a large bowl, stir together the baked beans, onion, brown sugar, syrup, ketchup and mustard. Pour into a 9x13 inch baking dish, and lay strips of bacon across the top.
Bake for 35 to 40 minutes in the preheated oven, until the bacon is browned and the beans have thickened.
An excellent baked bean recipe using Great Northern beans!
Yeah these were good. Not sure how much better they were with the extra ingredients, but yeah, they were tasty.
Needed a quick and easy baked bean recipe to take to a campground with our friends. Just the right amount of everything and really delicious! Thanks Apple!
These were quite tasty. I made these for a party and everyone liked them. I threw all the ingredients in my crock pot and cooked it for 2 hours on high.
These were really good and easy to make. Will be making these again.
Very good baked bean recipe. This is similar to how I make them, but with the addition of the pancake syrup. We really liked the flavor.
This was a really easy recipe, my husband enjoyed it. I also used real bacon bits because I was in a rush, will use this recipe every time.
I took these beans to a BBQ and everyone loved them, they were gone in minutes. Next time I will make a double batch. :) Thanks for the great recipe!
|
from billing import Integration, IntegrationNotConfigured
from billing.forms.authorize_net_forms import AuthorizeNetDPMForm
from billing.signals import transaction_was_successful, transaction_was_unsuccessful
from django.conf import settings
from django.conf.urls import patterns, url
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST
from django.utils.decorators import method_decorator
from django.http import HttpResponseForbidden
from django.shortcuts import render_to_response
from django.template import RequestContext
from django.core.urlresolvers import reverse
import hashlib
import hmac
import urllib
csrf_exempt_m = method_decorator(csrf_exempt)
require_POST_m = method_decorator(require_POST)
class AuthorizeNetDpmIntegration(Integration):
display_name = "Authorize.Net Direct Post Method"
template = "billing/authorize_net_dpm.html"
def __init__(self):
super(AuthorizeNetDpmIntegration, self).__init__()
merchant_settings = getattr(settings, "MERCHANT_SETTINGS", None)
if not merchant_settings or not merchant_settings.get("authorize_net"):
raise IntegrationNotConfigured("The '%s' integration is not correctly "
"configured." % self.display_name)
self.authorize_net_settings = merchant_settings["authorize_net"]
def form_class(self):
return AuthorizeNetDPMForm
def generate_form(self):
transaction_key = self.authorize_net_settings["TRANSACTION_KEY"]
login_id = self.authorize_net_settings["LOGIN_ID"]
initial_data = self.fields
x_fp_hash = hmac.new(transaction_key, "%s^%s^%s^%s^" % (login_id,
initial_data['x_fp_sequence'],
initial_data['x_fp_timestamp'],
initial_data['x_amount']),
hashlib.md5)
initial_data.update({'x_login': login_id,
'x_fp_hash': x_fp_hash.hexdigest()})
form = self.form_class()(initial=initial_data)
return form
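The x_fp_hash fingerprint above is an HMAC-MD5 over login id, sequence, timestamp and amount joined with '^'. A standalone Python 3 sketch (hmac needs bytes there; all values below are dummies, where the real ones come from MERCHANT_SETTINGS and self.fields):

```python
import hashlib
import hmac

# Dummy credentials and transaction fields.
login_id = "LOGIN123"
transaction_key = "TXNKEY456"
sequence, timestamp, amount = "1", "1300000000", "10.00"

msg = "%s^%s^%s^%s^" % (login_id, sequence, timestamp, amount)
fingerprint = hmac.new(transaction_key.encode(), msg.encode(), hashlib.md5)
x_fp_hash = fingerprint.hexdigest()
print(len(x_fp_hash))  # 32
```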
@property
def service_url(self):
if self.test_mode:
return "https://test.authorize.net/gateway/transact.dll"
return "https://secure.authorize.net/gateway/transact.dll"
def verify_response(self, request):
data = request.POST.copy()
md5_hash = self.authorize_net_settings["MD5_HASH"]
login_id = self.authorize_net_settings["LOGIN_ID"]
hash_str = "%s%s%s%s" % (md5_hash, login_id,
data.get("x_trans_id", ""),
data.get("x_amount", ""))
return hashlib.md5(hash_str).hexdigest() == data.get("x_MD5_Hash", "").lower()
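Authorize.Net's relay response carries x_MD5_Hash, the MD5 of the hash secret, login id, transaction id and amount concatenated; verify_response() recomputes it. A standalone Python 3 sketch with dummy values:

```python
import hashlib

# All values are dummies; in verify_response() they come from settings
# and the gateway's POST data.
md5_hash_setting = "SECRET"
login_id = "LOGIN123"
trans_id, amount = "987654", "10.00"

expected = hashlib.md5(
    ("%s%s%s%s" % (md5_hash_setting, login_id, trans_id, amount)).encode()
).hexdigest()
posted = expected.upper()  # the gateway posts the digest in upper case
print(expected == posted.lower())  # True
```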
@csrf_exempt_m
@require_POST_m
def authorizenet_notify_handler(self, request):
response_from_authorize_net = self.verify_response(request)
if not response_from_authorize_net:
return HttpResponseForbidden()
post_data = request.POST.copy()
result = post_data["x_response_reason_text"]
if request.POST['x_response_code'] == '1':
transaction_was_successful.send(sender=self,
type="sale",
response=post_data)
redirect_url = "%s?%s" % (request.build_absolute_uri(reverse("authorize_net_success_handler")),
urllib.urlencode({"response": result,
"transaction_id": request.POST["x_trans_id"]}))
return render_to_response("billing/authorize_net_relay_snippet.html",
{"redirect_url": redirect_url})
redirect_url = "%s?%s" % (request.build_absolute_uri(reverse("authorize_net_failure_handler")),
urllib.urlencode({"response": result}))
transaction_was_unsuccessful.send(sender=self,
type="sale",
response=post_data)
return render_to_response("billing/authorize_net_relay_snippet.html",
{"redirect_url": redirect_url})
def authorize_net_success_handler(self, request):
response = request.GET
return render_to_response("billing/authorize_net_success.html",
{"response": response},
context_instance=RequestContext(request))
def authorize_net_failure_handler(self, request):
response = request.GET
return render_to_response("billing/authorize_net_failure.html",
{"response": response},
context_instance=RequestContext(request))
def get_urls(self):
urlpatterns = patterns('',
url('^authorize_net-notify-handler/$', self.authorizenet_notify_handler, name="authorize_net_notify_handler"),
url('^authorize_net-success-handler/$', self.authorize_net_success_handler, name="authorize_net_success_handler"),
url('^authorize_net-failure-handler/$', self.authorize_net_failure_handler, name="authorize_net_failure_handler"),)
return urlpatterns
|
A homage to the simple beauty of the forger's craft. The texture of hammertone has been seen for millennia in metal objects, bearing silent testament to each stroke of the hammer. The surface that remains is faceted and meticulous without losing its random and organic feeling. 11" diameter; Limoges porcelain with platinum finish edge.
|
import random
from math import cos, pi
from PIL import Image
COLS = 20
ROWS = 12
def squash(v, low, high):
return min(max(low, v), high)
class FlatImage:
def __init__(self, file_name):
self.image = Image.open(file_name)
self.arr = self.image.load()
def tick(self, ts):
pass
def get_pixels(self):
return [self.arr[c, r][:3]
for r in range(0, ROWS)
for c in range(0, COLS)]
def randfloat(lo, hi):
    # Return a uniform random float in [lo, hi).
    return lo + random.random() * (hi - lo)
def sprite_off(sprite, target_img):
pass
def sprite_move(sprite, target_img):
target_img.paste(sprite,
(random.choice([-1, 1]), random.choice([-1, 1])),
mask=sprite)
def sprite_smooth(sprite, target_img):
blurred = sprite.copy()
blurred.paste(sprite, (0, -1), sprite)
blurred.paste(sprite, (0, 1), sprite)
blurred = Image.blend(blurred, sprite, 0.3)
blurred.paste(sprite, (0, 0), mask=sprite)
target_img.paste(blurred, None, mask=blurred)
def normal(sprite, target_img):
target_img.paste(sprite, None, mask=sprite)
class SpriteJitter:
EMPTY_COLOR = (0, 0, 0)
OPS = [sprite_smooth, sprite_off, sprite_move]
EFFECT_DURATION = [0.2, 1.5]
BREAK_DURATION = [0.5, 1]
SPRITE_IDX_DURATION = [3, 10]
def __init__(self, file_name):
self.image = Image.open(file_name)
self.sprites = self.find_sprites()
self.start_ts = None
self.effect_end = None
self.effect = None
def tick(self, ts):
if self.start_ts is None:
self.start_ts = ts
self.effect_end = ts
self.effect = '--start--'
self.sprite_idx_end = ts
if ts >= self.sprite_idx_end:
self.sprite_idx_end = ts + randfloat(*self.SPRITE_IDX_DURATION)
self.sprite_idx = random.randrange(0, len(self.sprites))
elif ts < self.effect_end:
return
if self.effect == normal:
self.effect = random.choice(self.OPS)
self.effect_end = ts + randfloat(*self.EFFECT_DURATION)
else:
self.effect = normal
self.effect_end = ts + randfloat(*self.BREAK_DURATION)
img = Image.new('RGBA', self.image.size, (0, 0, 0, 0))
for idx, s in enumerate(self.sprites):
if idx != self.sprite_idx:
img.paste(s, (0, 0), mask=s)
if self.sprite_idx is not None:
self.effect(self.sprites[self.sprite_idx], img)
self.arr = img.load()
def source_p(self, c, r):
if (c < 0) or (c >= self.image.width) or (r < 0) or (r >= self.image.height):
return (0, 0, 0, 0)
return self.image.load()[c, r]
def get_pixels(self):
return [self.arr[c, r][:3]
for r in range(0, ROWS)
for c in range(0, COLS)]
def find_sprites(self):
seen_img = Image.new('1', self.image.size, 0)
sprites = []
def seen(x, y): return seen_img.load()[x, y]
def mark_seen(x, y): seen_img.putpixel((x, y), 1)
def flood_copy(sprite_color, spr, x, y):
if seen(x, y):
return
for dx in [-1, 0, 1]:
for dy in [-1, 0, 1]:
if self.source_p(x + dx, y + dy) == sprite_color:
mark_seen(x, y)
spr.putpixel((x, y), sprite_color)
if (dx != 0) or (dy != 0):
flood_copy(sprite_color, spr, x+dx, y+dy)
def get_sprite(sx, sy):
spr = Image.new('RGBA', self.image.size, (0, 0, 0, 0))
flood_copy(self.source_p(sx, sy), spr, sx, sy)
return spr
for x in range(0, self.image.width):
for y in range(0, self.image.height):
if seen(x, y):
continue
if self.source_p(x, y)[:3] == (0, 0, 0):
continue
sprites.append(get_sprite(x, y))
return sprites
class GreenT:
LOOP_LEN = 10
def __init__(self, file_name):
self.image = Image.open(file_name)
(self.image_width, self.image_height) = self.image.size
self.arr = self.image.load()
self.start_ts = None
def tick(self, ts):
if self.start_ts is None:
self.start_ts = ts
self.dy = squash(
14*(0.5-cos((ts - self.start_ts) * 2 * pi / self.LOOP_LEN)),
0, 6,
)
def p(self, c, r):
return self.arr[
squash(c, 0, self.image_width-1),
squash(r, 0, self.image_height-1),
]
def get_pixels(self):
return [self.p(c, r + self.dy)[:3]
for r in range(0, ROWS)
for c in range(0, COLS)]
class MultiEffect:
TRANSITION_TIME = 0.1
def __init__(self, effects, duration):
self.effects = effects
self.duration = duration
self.effect_idx = 0
self.state = 'effect'
self.end_ts = None
self.offset = None
def tick(self, ts):
if self.end_ts is None:
self.end_ts = ts + self.duration
if self.state == 'transition':
self.offset = int(squash((self.TRANSITION_TIME - (self.end_ts - ts)) / self.TRANSITION_TIME * COLS, 0, COLS-1))
self.effects[self.effect_idx].tick(ts)
self.effects[(self.effect_idx + 1) % len(self.effects)].tick(ts)
self.pixels = [None] * ROWS * COLS
curr_pixels = self.effects[self.effect_idx].get_pixels()
next_pixels = self.effects[(self.effect_idx + 1) % len(self.effects)].get_pixels()
for c in range(0, COLS):
for r in range(0, ROWS):
                    idx = r * COLS + c
                    if c + self.offset < COLS:
                        self.pixels[idx] = curr_pixels[idx + self.offset]
                    else:
                        self.pixels[idx] = next_pixels[idx + self.offset - COLS]
if ts >= self.end_ts:
self.effect_idx = (self.effect_idx + 1) % len(self.effects)
self.end_ts = ts + self.duration
self.state = 'effect'
else:
self.effects[self.effect_idx].tick(ts)
self.pixels = self.effects[self.effect_idx].get_pixels()
if ts >= self.end_ts:
self.end_ts = ts + self.TRANSITION_TIME
self.state = 'transition'
def get_pixels(self):
return self.pixels
class Bunny:
# In memory of Bull Bunny aka Michael R Oddo
FILE_NAME = 'img/bunny4.png'
def __init__(self):
self.image = Image.open(self.FILE_NAME)
self.arr = self.image.load()
def tick(self, ts):
pass
def get_pixels(self):
return [self.arr[c, r][:3]
for r in range(0, ROWS)
for c in range(0, COLS)]
|
Growing Readers Together, an early literacy initiative funded by a grant from the Temple Hoyne Buell Foundation and administered by the Colorado State Library, provides early literacy activities and resources for family, friends and neighbors who care for children under the age of 6.
The best way to channel these resources is through parents and community partnerships.
Woodruff Memorial Library dedicated a portion of their grant funds toward an early literacy bag project called “Bright Beginnings.” The bags contain a board book, early literacy literature and Woodruff Memorial Library information.
Heather Maes, director of Library Services, and Kimberly Gallegos, Program and Outreach coordinator, presented the Obstetrics Department at Arkansas Valley Regional Medical Center with bags to distribute to each baby born at AVRMC.
For more information about upcoming opportunities for Growing Readers Together events and resources, call the library at 384-4612.
|
# Copyright (c) 2011, Peter Thatcher
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# 3. The name of the author may not be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR "AS IS" AND ANY EXPRESS OR IMPLIED
# WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
# EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
# OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# The purpose of this file is to abstractly access the FileSystem,
# especially for the purpose of scanning it to see what files are
# different. It works really hard to do so fast.
import hashlib
import logging
import os
import platform
import shutil
import sys
from util import Record
DELETED_SIZE = 0
DELETED_MTIME = 0
class RootedPath(Record("root", "rel")):
""" Represents a path (rel) that is relative to another path
(root). For examples, when scanning a large directory, it is
convenient to know the paths relative to the directory passed in.
In code a RootedPath is often called an "rpath"."""
@property
def full(self):
return join_paths(*self)
# An "rpath" is short for "RootedPath"
class FileStat(Record("rpath", "size", "mtime")):
@classmethod
def from_deleted(cls, rpath):
return cls.new(rpath, DELETED_SIZE, DELETED_MTIME)
    @property
    def deleted(self):
        return self.mtime == DELETED_MTIME
STAT_SIZE_INDEX = 6
STAT_MTIME_INDEX = 8
# All paths are unicode separated by "/". We encode for a given
# platform (Windows) as necessary.
PATH_SEP = "/"
def join_paths(*paths):
return PATH_SEP.join(paths)
def parent_path(path):
try:
parent, child = path.rsplit(PATH_SEP, 1)
    except ValueError:
parent, child = "", path
return parent
# Windows shaves off a bit of mtime info.
# TODO: Only do this sillyness on Windows.
def mtimes_eq(mtime1, mtime2):
return (mtime1 >> 1) == (mtime2 >> 1)
# Path encoding is needed because Windows has really funky rules for
# dealing with unicode paths. It seems like an all OSes, what you get
# back and what it expects from you isn't consistent. The PathEncoder
# stuff is there to be a single place where we can take care of this.
# Also, we want to deal with paths in a consistent way with "/" and
# not worry about Windows oddities ("\", etc).
def PathEncoder():
    is_mac = os.name == "posix" and platform.system() == "Darwin"
    is_windows = os.name in ["nt", "dos"]
decoding = sys.getfilesystemencoding()
encoding = None if os.path.supports_unicode_filenames else decoding
if is_windows:
return WindowsPathEncoder(encoding, decoding)
else:
return UnixPathEncoder(encoding, decoding)
class UnixPathEncoder(Record("encoding", "decoding")):
def encode_path(self, path):
if self.encoding:
return path.encode(self.encoding)
else:
return path
def decode_path(self, path):
return path.decode(self.decoding)
class WindowsPathEncoder(Record("encoding", "decoding")):
def encode_path(self, path):
win_path = "\\\\?\\" + os.path.abspath(path.replace(PATH_SEP, os.sep))
if self.encoding:
return win_path.encode(self.encoding)
else:
return win_path
def decode_path(self, win_path):
return win_path.replace(os.sep, PATH_SEP).decode(self.decoding)
class FileSystem(Record("slog", "path_encoder")):
"""Encapsulates all of the operations we need on the FileSystem.
The most important part is probably listing/stating."""
READ_MODE = "rb"
NEW_WRITE_MODE = "wb"
EXISTING_WRITE_MODE = "r+b"
    # slog needs to have a path_error(message, err) method
def __new__(cls, slog):
return cls.new(slog, PathEncoder())
def encode_path(fs, path):
return fs.path_encoder.encode_path(path)
def decode_path(fs, path):
return fs.path_encoder.decode_path(path)
def exists(fs, path):
encoded_path = fs.encode_path(path)
return os.path.exists(encoded_path)
def isdir(fs, path):
encoded_path = fs.encode_path(path)
return os.path.isdir(encoded_path)
def isfile(fs, path):
encoded_path = fs.encode_path(path)
return os.path.isfile(encoded_path)
def isempty(fs, path):
encoded_path = fs.encode_path(path)
for _ in fs.list(encoded_path):
return False
return True
# yields FileStat, with same "root marker" rules as self.list(...)
#
# On my 2008 Macbook, reads about 10,000 files/sec when doing small
# groups (5,000 files), and 4,000 files/sec when doing large
# (200,000). These means it can take anywhere from .1 sec to 1
# minute. Cacheing seems to improve performance by about 30%.
# While running, the CPU is pegged :(. Oh well, 60,000 files in 8
# sec isn't too bad. That's my whole home directory.
#
# On my faster linux desktop machine, it's about 30,000 files/sec
    # when cached, even for 200,000 files, which is a big improvement.
def list_stats(fs, root, root_marker = None, names_to_ignore = frozenset()):
return fs.stats(fs.list(
root, root_marker = root_marker, names_to_ignore = names_to_ignore))
# yields a RootedPath for each file found in the root. The intial
# root is the given root. Deeper in, if there is a "root_marker"
# file in a directory, that directory becomes a new root.
def list(fs, root, root_marker = None, names_to_ignore = frozenset()):
listdir = os.listdir
join = os.path.join
isdir = os.path.isdir
islink = os.path.islink
def decode(encoded_path):
try:
return fs.decode_path(encoded_path)
except Exception as err:
                fs.slog.path_error("Could not decode file path {0}"
                                   .format(repr(encoded_path)), err)
return None
# We pass root around so that we only have to decode it once.
def walk(root, encoded_root, encoded_parent):
child_names = listdir(encoded_parent)
if root_marker is not None:
if root_marker in child_names:
encoded_root = encoded_parent
root = decode(encoded_root)
            # If decoding root fails, no point in traversing any further.
if root is not None:
for child_name in child_names:
if child_name not in names_to_ignore:
encoded_full = join(encoded_parent, child_name)
if isdir(encoded_full):
if not islink(encoded_full):
for child in \
walk(root, encoded_root, encoded_full):
yield child
else:
rel = decode(encoded_full[len(encoded_root)+1:])
if rel:
yield RootedPath(root, rel)
encoded_root = fs.encode_path(root)
return walk(root, encoded_root, encoded_root)
# yields FileStats
def stats(fs, rpaths):
stat = os.stat
for rpath in rpaths:
try:
encoded_path = fs.encode_path(rpath.full)
stats = stat(encoded_path)
size = stats[STAT_SIZE_INDEX]
mtime = stats[STAT_MTIME_INDEX]
yield FileStat(rpath, size, mtime)
except OSError:
pass # Probably a link
# returns (size, mtime)
def stat(fs, path):
encoded_path = fs.encode_path(path)
stats = os.stat(encoded_path)
return stats[STAT_SIZE_INDEX], stats[STAT_MTIME_INDEX]
# Will not throw OSError for no path. Will return False in that case.
def stat_eq(fs, path, size, mtime):
try:
(current_size, current_mtime) = fs.stat(path)
return (current_size == size and
mtimes_eq(current_mtime, mtime))
except OSError:
return False
    def read(fs, path, start = 0, size = None):
        encoded_path = fs.encode_path(path)
        with open(encoded_path, fs.READ_MODE) as file:
            if start > 0:
                file.seek(start, 0)
if size:
return file.read(size)
else:
return file.read()
# On my 2008 Macbook, with SHA1, it can hash 50,000 files
# totalling 145GB (about 3MB each file) in 48min, which is 17
# files totalling 50MB/sec. So, if you scan 30GB of new files, it
# will take 10min. During that time, CPU usage is ~80%.
def hash(fs, path, hash_type = hashlib.sha1, chunk_size = 100000):
        if hash_type is None:
return ""
hasher = hash_type()
for chunk_data in fs._iter_chunks(path, chunk_size):
hasher.update(chunk_data)
return hasher.digest()
def _iter_chunks(fs, path, chunk_size):
encoded_path = fs.encode_path(path)
        with open(encoded_path, fs.READ_MODE) as file:
chunk = file.read(chunk_size)
while chunk:
yield chunk
chunk = file.read(chunk_size)
def write(fs, path, contents, start = None, mtime = None):
encoded_path = fs.encode_path(path)
fs.create_parent_dirs(path)
        if (start is not None) and fs.exists(path):
mode = fs.EXISTING_WRITE_MODE
else:
mode = fs.NEW_WRITE_MODE
with open(encoded_path, mode) as file:
if start is not None:
file.seek(start, 0)
assert start == file.tell(), \
"Failed to seek to proper location in file"
file.write(contents)
if mtime is not None:
            fs.touch(path, mtime)
def touch(fs, path, mtime):
encoded_path = fs.encode_path(path)
os.utime(encoded_path, (mtime, mtime))
def create_parent_dirs(fs, path):
fs.create_dir(parent_path(path))
def create_dir(fs, path):
encoded_path = fs.encode_path(path)
if not os.path.exists(encoded_path):
os.makedirs(encoded_path)
    # Blows up if existing stuff "in the way".
def move(fs, from_path, to_path, mtime = None):
encoded_from_path = fs.encode_path(from_path)
encoded_to_path = fs.encode_path(to_path)
fs.create_parent_dirs(to_path)
os.rename(encoded_from_path, encoded_to_path)
if mtime is not None:
fs.touch(to_path, mtime)
# Blows up if existing stuff "in the way".
def copy(fs, from_path, to_path, mtime = None):
encoded_from_path = fs.encode_path(from_path)
encoded_to_path = fs.encode_path(to_path)
fs.create_parent_dirs(to_path)
shutil.copyfile(encoded_from_path, encoded_to_path)
if mtime is not None:
fs.touch(to_path, mtime)
    # Blows up if non-empty directory
def delete(fs, path):
encoded_path = fs.encode_path(path)
if os.path.exists(encoded_path):
os.remove(encoded_path)
def remove_empty_parent_dirs(fs, path):
encoded_parent_path = fs.encode_path(parent_path(path))
try:
os.removedirs(encoded_parent_path)
except OSError:
pass # Not empty
|
I made vegan ice cream! My first homemade vegan ice cream, which I made last week! Coconut ice cream with berries! It was amazing! There are no excuses to keep supporting animal suffering and causing a huge impact on our environment, our planet! #veganicecream #homemade #homemadeicecream #coconuticecream #berries #berriesandcocunuticecream #tasty #delicious #allhereisvegan #compassion #respectforotherspecies #veganislove #beethical #veganforallofus #veganisethis #veganisthefuture #savetheworld #veganuk #sharethelove #veganwillsavetheplanet #starttocare #donotbeselfish #weallliveinthisplanet #oneearth #thereisnoplanetb🌍💚 #veganforyourhealth #veganlondon #veganinspiration #veganfortheenvironment 🌏 #veganfortheanimals🐖🐄🐓🐇🐏🐢🐕🐩🐁🐃🐂🐨🐘🐪🐊🐋🐠🐟🐞🐝 FOLLOW MY PAGE!
|
import os,sys #for system commands
import argparse #used for allowing command line switches
from stat import * #for stat command
import datetime #used for float to datetime
# import win32security
import ctypes as _ctypes #this is used to determine windows SID
from ctypes import wintypes as _wintypes
def convert(bytes, type):
    # Scale a byte count into a human-readable unit. (The "type" argument is
    # accepted for compatibility but unused; the unit is chosen automatically.)
    if bytes < 1024:
        number = bytes
        text = "BYTES"
    elif bytes < 1048576:
        number = bytes/1024
        text = "KB"
    elif bytes < 1073741824:
        number = bytes/1048576
        text = "MB"
    else:
        number = bytes/1073741824
        text = "GB"
    return str(round(number,2))+" "+text
def return_file_owners(file):
process = os.popen('Icacls '+"\""+file+"\"")
result = process.read()
process.close()
lines = result.split('\n')
for index, line in enumerate(lines):
if file in line:
line = line.split(file)[-1]
elif "Successfully processed 1 files;" in line:
line = ""
lines[index] = line.strip(" ")
lines = [x for x in lines if x]
return lines
def main():
#Available command line options
parser = argparse.ArgumentParser(description='Available Command Line Switches')
parser.add_argument('-F',metavar='F', nargs="+", help="Target File To Scan")
    #assign all available arguments to the 'args' variable
args = parser.parse_args()
for filePath in args.F:
try:
st = os.stat(os.path.abspath(filePath))
print("File Permissions","."*20,filemode(st.st_mode))
print("Size","."*32,convert(st.st_size, "MB"))
# #windows network all SIDs = wmic useraccount get name,sid
print("User ID","."*29,st.st_uid) #local windows SID = HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList
print("Group ID","."*28,st.st_gid)
if os.name == "nt":
owners = return_file_owners(os.path.abspath(filePath))
print("File Owner(s)","."*23,owners[0])
for index, owner in enumerate(owners):
if index != 0:
print(" "*37,owner)
print("Creation Time","."*23,datetime.datetime.fromtimestamp(st.st_ctime)) #windows = time of creation, unix = time of most recent metadata change
print("Last File Access","."*20,datetime.datetime.fromtimestamp(st.st_atime)) #time of most recent access
print("Last Mod Time","."*23,datetime.datetime.fromtimestamp(st.st_mtime)) #time of most recent content modification
print("Symbolic Link","."*23,S_ISLNK(st.st_mode)) #Return non-zero if the mode is from a symbolic link..
print("# of Locations on System","."*12,st.st_nlink) #number of hard links (number of locations in the file system)
print("Device","."*30,st.st_dev)
# print("st_mode:",st.st_mode) #protection bits
# print("st_ino:",st.st_ino) #inode number
# print("st_dev:",st.st_dev) #device
# print("is directory:",S_ISDIR(st.st_mode)) #is it a directory?
# print("Character Special Device:",S_ISCHR(st.st_mode)) #Return non-zero if the mode is from a character special device file.
# print("block special device file:",S_ISBLK(st.st_mode)) #Return non-zero if the mode is from a block special device file.
# print("Regular File:",S_ISREG(st.st_mode)) #Return non-zero if the mode is from a regular file.
# print("FIFO (named pipe):",S_ISFIFO(st.st_mode)) #Return non-zero if the mode is from a FIFO (named pipe).
# print("Is Socket:",S_ISSOCK(st.st_mode)) #Return non-zero if the mode is from a socket.
# print("Is Door:",S_ISDOOR(st.st_mode)) #Return non-zero if the mode is from a door.
# print("Event Port:",S_ISPORT(st.st_mode)) #Return non-zero if the mode is from an event port.
# print("whiteout:",S_ISWHT(st.st_mode)) #Return non-zero if the mode is from a whiteout.
# try:
# print("file’s permission bits:",S_IMODE(st.st_mode)) #Return the portion of the file’s mode that can be set by os.chmod()—that is, the file’s permission bits, plus the sticky bit, set-group-id, and set-user-id bits (on systems that support them).
# except:
# print("file's permission bits: Unable To Determine")
# print("file type:",S_IFMT(st.st_mode)) #Return the portion of the file’s mode that describes the file type (used by the S_IS*() functions above).
except IOError as e:
print ("I/O error({0}): {1}".format(e.errno, e.strerror))
except ValueError:
print ("Could not convert data to an integer.")
except:
print ("Unexpected error:", sys.exc_info()[0])
main()
|
I am willing to bet that there are very few people on this planet that do not like pizza. I mean what’s not to like? The aroma and that warm chewy cheesy texture when you take your first bite is what does it for me!
Pizza is enjoyed worldwide thanks to its versatility – there are so many different versions such as Neapolitan, deep dish, gourmet and New York style. And because of this, from a nutrition standpoint not all pizzas are the same. So the question is, is pizza healthy and can you eat it whilst on a diet?
A calorie is a unit of energy and all foods that you eat contain calories. However different foods contain different amounts of calories. For example mushrooms, pineapples and olives, all of which are common pizza toppings contain 34, 50 and 115 calories respectively, per 100 g serving. In general nuts & seeds (which contain a lot of fat) tend to have the most calories whilst fruits and vegetables (which contain a lot of water) have the lowest.
When you are trying to lose or maintain weight, you need to keep an eye on calorie consumption. In order to lose weight, you need to consume fewer calories than you use up (e.g. via exercise). Doing so will cause your body to use its fat stores as a source of energy, to make up for the calorie deficit. If you want to maintain your current weight, you need to eat the same amount of calories as you use up. And to gain weight you need to eat more calories than you are using.
You can use the calorie calculator on this page to estimate the number of calories you should be eating. For example, a 40 year old lady who weighs 80 kg, is 155 cm tall and exercises 1 – 3 days a week would need to consume approximately 1936 calories to maintain her weight, 1549 calories to start losing weight and 1162 calories to lose weight fast.
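The calculator itself isn't reproduced here, but the example figures are consistent with the Mifflin-St Jeor equation using a light-activity multiplier of 1.375, with the weight-loss targets at 80% and 60% of maintenance. A sketch under those assumptions (the function name and percentage cut-offs are illustrative, not the page's actual code):

```python
def daily_calories(weight_kg, height_cm, age, female=True, activity=1.375):
    # Mifflin-St Jeor resting metabolic rate (assumed formula)
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + (-161 if female else 5)
    maintain = round(bmr * activity)
    # Assumed deficits: 20% below maintenance to lose, 40% below to lose fast
    return maintain, round(maintain * 0.8), round(maintain * 0.6)

# The 40-year-old, 80 kg, 155 cm example from the text:
print(daily_calories(80, 155, 40))  # (1936, 1549, 1162)
```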
So how many calories are there in a slice of pizza?
As mentioned at the beginning of this article, not all pizza is created the same. There are so many different varieties all with different toppings that the number of calories varies greatly.
However on average, a 1 slice serving (103 g) of a 14 inch pizza with a regular crust and cheese topping contains approximately 272 calories. It also contains 12 g of protein, 10 g of fat (4 g saturated) and 34 g carbohydrate (4 g sugar).
A whole 14 inch pizza on the other hand contains a whopping 2389 calories! So this means that if the lady in the example above were to eat a whole pizza for lunch 2 or 3 times a week, she would most likely exceed the number of calories she needs to maintain her current weight. She would therefore put on weight, most of which would be stored in the form of fat.
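To put those numbers in perspective, here is a back-of-the-envelope comparison using the figures above and the commonly cited rule of thumb that roughly 3,500 surplus kcal corresponds to about 1 lb of body fat (the variable names are illustrative):

```python
slice_kcal = 272         # one slice (103 g) of a 14-inch regular-crust cheese pizza
whole_pizza_kcal = 2389  # the whole 14-inch pizza

# Three whole-pizza lunches a week versus three two-slice lunches:
weekly_surplus = 3 * whole_pizza_kcal - 3 * 2 * slice_kcal
print(weekly_surplus)                   # 5535 extra kcal per week
print(round(weekly_surplus / 3500, 1))  # roughly 1.6 lb of potential fat gain
```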
So does this mean she can’t enjoy pizza at all? Absolutely not! The key here, as with most things in life, is moderation. So long as she doesn’t overindulge and monitors her calorie consumption, she could perhaps enjoy 1 or 2 slices of pizza on a special day of the week.
Pizza doesn’t have to be a super fatty & calorie dense food, there are a number of different ways to make it much healthier.
Choose your toppings wisely – the first thing you want to do is load up on vegetables and get rid of toppings such as pepperoni. 100 g of pepperoni has 494 calories, compared to just 17 calories in 100 g of zucchini. That’s a huge difference! Additionally, consumption of processed meat has been linked to an increased risk of cancer. If you enjoy meat on your pizza, use grilled chicken or seafood such as shrimp instead.
Use whole-wheat flour for the crust – instead of using traditional white flour (364 calories / 100 g), opt for whole wheat flour instead (340 calories / 100 g). The difference in calories is not huge but whole wheat flour contains 4 times as much dietary fiber and slightly more protein than white flour. Both dietary fiber and protein help to fill you up faster which will lower your chances of overindulging. Whole wheat flour is also much richer in nutrients such as iron and magnesium.
Make a thin crust pizza – I personally prefer thin crust pizzas over thick crusts by default, I enjoy the crisp texture. Since flour is quite calorie dense, you should aim to minimise the amount that you use and the easiest way to do this is to make a thin crust. Additionally, make the dough and pizza base at home rather than buying it. This way you get full control of the ingredients used.
Make your own tomato sauce – a classic pizza tomato sauce is super easy to make. Store-bought varieties tend to have a lot of added sugar and salt, making them less healthy options. So make your own at home with fresh tomatoes instead; they are super healthy and a great source of lycopene, vitamin C and vitamin A.
Use less cheese – cheese is a great source of protein and calcium but it is also high in fat and that means it is high in calories. Mozzarella is one of the best options as it is both tasty and relatively low in fat. If cheese is not a deal breaker, you might even want to leave it out completely.
Be wary of side dips – a lot of people tend to dunk their pizza in dips, however doing so can quickly increase the number of calories whilst providing little nutritional value. If you must enjoy a dip that is not homemade, do so sparingly.
Pizza, if eaten in moderation can be perfectly healthy. However when eaten in abundance, it will most likely cause you to gain weight in the long run.
The best option would be to make your own pizza at home. Not only is it a fun activity, it also gives you full control over what goes in your belly!
|
namespace Seymour
{
partial class AddFeedDialog
{
/// <summary>
/// Required designer variable.
/// </summary>
private System.ComponentModel.IContainer components = null;
/// <summary>
/// Clean up any resources being used.
/// </summary>
/// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
#region Windows Form Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.dCancel = new System.Windows.Forms.Button();
this.dOk = new System.Windows.Forms.Button();
this.dFeedUrl = new System.Windows.Forms.TextBox();
this.SuspendLayout();
//
// dCancel
//
this.dCancel.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Right)));
this.dCancel.DialogResult = System.Windows.Forms.DialogResult.Cancel;
this.dCancel.Location = new System.Drawing.Point(205, 38);
this.dCancel.Name = "dCancel";
this.dCancel.Size = new System.Drawing.Size(75, 23);
this.dCancel.TabIndex = 0;
this.dCancel.Text = "Cancel";
this.dCancel.UseVisualStyleBackColor = true;
this.dCancel.Click += new System.EventHandler(this.dCancel_Click);
//
// dOk
//
this.dOk.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Right)));
this.dOk.Location = new System.Drawing.Point(124, 38);
this.dOk.Name = "dOk";
this.dOk.Size = new System.Drawing.Size(75, 23);
this.dOk.TabIndex = 1;
this.dOk.Text = "OK";
this.dOk.UseVisualStyleBackColor = true;
this.dOk.Click += new System.EventHandler(this.dOk_Click);
//
// dFeedUrl
//
this.dFeedUrl.Anchor = ((System.Windows.Forms.AnchorStyles)(((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Left)
| System.Windows.Forms.AnchorStyles.Right)));
this.dFeedUrl.Location = new System.Drawing.Point(12, 12);
this.dFeedUrl.Name = "dFeedUrl";
this.dFeedUrl.Size = new System.Drawing.Size(268, 20);
this.dFeedUrl.TabIndex = 2;
//
// AddFeedDialog
//
this.AcceptButton = this.dOk;
this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.CancelButton = this.dCancel;
this.ClientSize = new System.Drawing.Size(292, 72);
this.Controls.Add(this.dFeedUrl);
this.Controls.Add(this.dOk);
this.Controls.Add(this.dCancel);
this.Name = "AddFeedDialog";
this.Text = "AddFeedDialog";
this.Load += new System.EventHandler(this.AddFeedDialog_Load);
this.ResumeLayout(false);
this.PerformLayout();
}
#endregion
private System.Windows.Forms.Button dCancel;
private System.Windows.Forms.Button dOk;
private System.Windows.Forms.TextBox dFeedUrl;
}
}
|
I expressly consent that my personal data can be continuously stored and used by Eppendorf Group for advertising or marketing purposes, e.g. for providing information and offers concerning the goods and services of Eppendorf by email, mail or phone. This consent is voluntary and can be withdrawn completely or partly at any time with future effect. We will erase your data upon receipt of the withdrawal.
|
#!/usr/bin/env python
import math,re, os, random
import Polygon, Polygon.IO, Polygon.Utils
import project
from regions import *
import itertools
import decomposition
Polygon.setTolerance(0.1)
class parseLP:
"""
    A parser for locative prepositions in the specification
"""
def __init__(self):
pass
def main(self,argv):
""" Main function; run automatically when called from command-line """
spec_file = argv
self.regionNear = []
self.regionBetween = []
defaultNearDistance = 50
# load data
self.proj = project.Project()
self.proj.setSilent(True)
self.proj.loadProject(spec_file)
if self.proj.compile_options['decompose']:
# we will do the decomposition
# Look for a defined boundary region, and set it aside if available
self.boundaryRegion = None
for region in self.proj.rfi.regions:
if region.name.lower() == 'boundary':
self.boundaryRegion = region
self.proj.rfi.regions.remove(region)
break
# TODO: If not defined, use the minimum bounding polygon by default
if self.boundaryRegion is None:
print "ERROR: You need to define a boundary region (just create a region named 'boundary' in RegionEditor)"
return
# turn list of string into one string
spec = "\n".join([line for line in self.proj.spec_data['SPECIFICATION']['Spec'] if not line.startswith("#")])
# get all regions that need to find "region near"
            # the items in the list are tuples of (region name, distance from the region boundary); the default distance is 50
for m in re.finditer(r'near (?P<rA>\w+)', spec):
                if m.group("rA") not in [name for name, dist in self.regionNear]:
                    self.regionNear.append((m.group("rA"), defaultNearDistance))
# find "within distance from a region" is just special case of find "region near"
for m in re.finditer(r'within (?P<dist>\d+) (from|of) (?P<rA>\w+)', spec):
                if m.group("rA") not in [name for name, dist in self.regionNear]:
                    self.regionNear.append((m.group("rA"), int(m.group("dist"))))
# get all regions that need to find "region between"
# the items in the list are tuple with two region names
for m in re.finditer(r'between (?P<rA>\w+) and (?P<rB>\w+)', spec):
if (m.group("rA"),m.group("rB")) not in self.regionBetween and (m.group("rB"),m.group("rA")) not in self.regionBetween:
self.regionBetween.append((m.group("rA"),m.group("rB")))
# generate new regions
self.generateNewRegion()
            # break the overlapping regions into separate parts
self.checkOverLapping()
# remove small regions
self.removeSmallRegions()
            # decompose any regions that have holes or are concave
if self.proj.compile_options['convexify']:
self.decomp()
# store the regionMapping data to project file
self.proj.regionMapping = self.newPolysMap
# save the regions into new region file
fileName = self.proj.getFilenamePrefix()+'_decomposed.regions'
self.saveRegions(fileName)
else:
# if decompose option is disabled, we skip the following step but keep the mapping
self.newPolysMap = {} # {"nameOfRegion":a list holds name of portion}
for region in self.proj.rfi.regions:
self.newPolysMap[region.name] = [region.name]
# store the regionMapping data to project file
self.proj.regionMapping = self.newPolysMap
fileName = self.proj.getFilenamePrefix()+'_decomposed.regions'
self.proj.rfi.writeFile(fileName)
def generateNewRegion(self):
"""
Generate new regions for locative prepositions
"""
# regions related with "near/within" preposition
for (regionName,dist) in self.regionNear:
for region in self.proj.rfi.regions:
if region.name == regionName:
oldRegion = region
newRegion = oldRegion.findRegionNear(dist,mode="overEstimate",name='near$'+regionName+'$'+str(dist))
self.proj.rfi.regions.append(newRegion)
# regions related with "between" preposition
for (regionNameA,regionNameB) in self.regionBetween:
for region in self.proj.rfi.regions:
if region.name == regionNameA:
regionA = region
elif region.name == regionNameB:
regionB = region
newRegion = findRegionBetween(regionA,regionB,name='between$'+regionNameA+'$and$'+regionNameB+"$")
self.proj.rfi.regions.append(newRegion)
def checkOverLapping(self):
"""
Check if and regions overlap each other
Break the ones that overlap into portions that don't overlap
"""
oldRegionNames=[]
self.oldPolys = {} # {"nameOfRegion":polygon of that region}
self.newPolysMap = {} # {"nameOfRegion":a list holds name of portion}
self.portionOfRegion = {} # {"nameOfPortion":polygon of that portion}
for region in self.proj.rfi.regions:
points = [(pt.x,pt.y) for pt in region.getPoints()]
poly = Polygon.Polygon(points)
self.oldPolys[region.name] = self.intAllPoints(poly)
self.newPolysMap[region.name] = []
oldRegionNames = sorted(self.oldPolys.keys())
        self.newPolysMap['others'] = []  # parts outside of all regions
        # set up an iterator over lists of boolean values (0/1) for finding
        # overlapping regions: each item corresponds to one possible overlap
        # combination, and each boolean value corresponds to one region
boolList = itertools.product([0,1],repeat=len(oldRegionNames))
self.count = 1 # for naming the portion
# break the overlapping regions
for expr in boolList:
tempRegionList = []
result = self.intAllPoints(Polygon.Polygon([(pt.x,pt.y) for pt in self.boundaryRegion.getPoints()])) # starts with the boundary region
for i,item in enumerate(expr):
if item == 1:
# when the region is included
result = result & self.oldPolys[oldRegionNames[i]]
tempRegionList.append(oldRegionNames[i])
else:
# when the region is excluded
result = result - self.oldPolys[oldRegionNames[i]]
if result.nPoints()>0:
# there is a portion of region left
holeList = []
nonHoleList = []
for i,contour in enumerate(result):
if not result.isHole(i):
nonHoleList.append(Polygon.Polygon(result[i]))
else:
holeList.append(Polygon.Polygon(result[i]))
for nonHolePoly in nonHoleList:
polyWithoutOverlapNode = self.decomposeWithOverlappingPoint(nonHolePoly)
for poly in polyWithoutOverlapNode:
portionName = 'p'+str(self.count)
p = self.intAllPoints(poly)
for hole in holeList:
p = p - self.intAllPoints(hole)
self.portionOfRegion[portionName] = p
if len(tempRegionList) == 0:
self.newPolysMap['others'].append(portionName)
else:
for regionName in tempRegionList:
                                # update the mapping dictionary
self.newPolysMap[regionName].append(portionName)
self.count = self.count + 1
def decomposeWithOverlappingPoint(self,polygon):
"""
When there are points overlapping each other in a given polygon
First decompose this polygon into sub-polygons at the overlapping point
"""
# recursively break the polygon at any overlap point into two polygons until no overlap points are found
# here we are sure there is only one contour in the given polygon
ptDic = {}
overlapPtIndex = None
# look for overlap point and stop when one is found
for i,pt in enumerate(polygon[0]):
if pt not in ptDic:
ptDic[pt]=[i]
else:
ptDic[pt].append(i)
overlapPtIndex = ptDic[pt]
break
if overlapPtIndex:
polyWithoutOverlapNode = []
# break the polygon into sub-polygons
newPoly = Polygon.Polygon(polygon[0][overlapPtIndex[0]:overlapPtIndex[1]])
polyWithoutOverlapNode.extend(self.decomposeWithOverlappingPoint(newPoly))
reducedPoly = Polygon.Polygon(decomposition.removeDuplicatePoints((polygon-newPoly)[0]))
polyWithoutOverlapNode.extend(self.decomposeWithOverlappingPoint(reducedPoly))
else:
# no overlap point is found
return [polygon]
return polyWithoutOverlapNode
def decomp(self):
"""
Decompose the region with holes or are concave
"""
tempDic = {} # temporary variable for storing polygon
# will be merged at the end to self.portionOfRegion
for nameOfPortion,poly in self.portionOfRegion.iteritems():
result = [] # result list of polygon from decomposition
if len(poly)>1:
                # the polygon contains holes
                holes = []  # list of polygons representing the holes
for i,contour in enumerate(poly):
if poly.isHole(i):
holes.append(Polygon.Polygon(poly[i]))
else:
newPoly = Polygon.Polygon(poly[i])
de = decomposition.decomposition(newPoly,holes)
result = de.MP5()
else:
# if the polygon doesn't have any hole, decompose it if it is concave,
# nothing will be done if it is convex
de = decomposition.decomposition(poly)
result = de.MP5()
if len(result)>1:
# the region is decomposed to smaller parts
newPortionName=[]
# add the new portions
for item in result:
portionName = 'p'+str(self.count)
newPortionName.append(portionName)
tempDic[portionName] = item
self.count = self.count + 1
# update the mapping dictionary
for nameOfRegion,portionList in self.newPolysMap.iteritems():
if nameOfPortion in portionList:
self.newPolysMap[nameOfRegion].remove(nameOfPortion)
self.newPolysMap[nameOfRegion].extend(newPortionName)
else:
tempDic[nameOfPortion] = Polygon.Polygon(result[0])
self.portionOfRegion = tempDic
def drawAllPortions(self):
"""
Output a drawing of all the polygons that stored in self.portionOfRegion, for debug purpose
"""
if len(self.portionOfRegion)==0:
print "There is no polygon stored."
print
return
polyList = []
for nameOfPortion,poly in self.portionOfRegion.iteritems():
polyList.append(poly)
        Polygon.IO.writeSVG('/home/cornell/Desktop/ltlmop-google/allPortions.svg', polyList)  # hardcoded debug output path
def removeSmallRegions(self):
"""
A function to remove small region
"""
tolerance=0.0000001
        # find the area of the largest region
        area = 0
for nameOfPortion,poly in self.portionOfRegion.iteritems():
if area<poly.area():
area = poly.area()
# remove small regions
smallRegion = []
for nameOfPortion,poly in self.portionOfRegion.iteritems():
if poly.area()<tolerance*area:
smallRegion.append(nameOfPortion)
for nameOfRegion, portionList in self.newPolysMap.iteritems():
if nameOfPortion in portionList:
self.newPolysMap[nameOfRegion].remove(nameOfPortion)
for region in smallRegion:
#print "remove"+region
del self.portionOfRegion[region]
def intAllPoints(self,poly):
"""
Function that turn all point coordinates into integer
Return a new polygon
"""
return Polygon.Utils.prunePoints(Polygon.Polygon([(int(pt[0]),int(pt[1])) for pt in poly[0]]))
def saveRegions(self, fileName=''):
"""
Save the region data into a new region file
"""
        # start from the existing rfi; the only data that differs is the regions
self.proj.rfi.regions = []
for nameOfPortion,poly in self.portionOfRegion.iteritems():
newRegion = Region()
newRegion.name = nameOfPortion
newRegion.color = Color()
newRegion.color.SetFromName(random.choice(['RED','ORANGE','YELLOW','GREEN','BLUE','PURPLE']))
for i,ct in enumerate(poly):
if poly.isHole(i):
newRegion.holeList.append([Point(*x) for x in Polygon.Utils.pointList(Polygon.Polygon(poly[i]))])
else:
newRegion.pointArray = [Point(*x) for x in Polygon.Utils.pointList(Polygon.Polygon(poly[i]))]
newRegion.alignmentPoints = [False] * len([x for x in newRegion.getPoints()])
newRegion.recalcBoundingBox()
if newRegion.getDirection() == dir_CCW:
newRegion.pointArray.reverse()
self.proj.rfi.regions.append(newRegion)
# Giant loop!
for obj1 in self.proj.rfi.regions:
for obj2 in self.proj.rfi.regions:
self.proj.rfi.splitSubfaces(obj1, obj2)
self.proj.rfi.recalcAdjacency()
self.proj.rfi.writeFile(fileName)
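checkOverLapping above walks every inclusion/exclusion combination of the regions via itertools.product, intersecting the included polygons and subtracting the excluded ones from the boundary. The same idea can be sketched with plain sets standing in for polygons (all names here are illustrative, not part of the code above):

```python
import itertools

# Toy stand-ins for region polygons: each "region" is a set of grid cells.
regions = {
    "r1": {(0, 0), (0, 1), (1, 0), (1, 1)},
    "r2": {(1, 1), (1, 2), (2, 1), (2, 2)},
}
boundary = {(x, y) for x in range(3) for y in range(3)}

names = sorted(regions)
portions = {}
for mask in itertools.product([0, 1], repeat=len(names)):
    result = set(boundary)  # start from the boundary, like the code above
    for included, name in zip(mask, names):
        if included:
            result &= regions[name]   # this region is included
        else:
            result -= regions[name]   # this region is excluded
    if result:                        # a non-empty portion is left over
        portions[mask] = result

# (1, 1) is the overlap of r1 and r2; (0, 0) is everything outside both.
print(portions[(1, 1)])
```

The masks partition the boundary: every cell lands in exactly one portion, which is why the real code can later map each portion name back to the regions that contain it.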
|
AL-INJAZ INSTITUTE OF ISLAMIC FINANCE LTD was incorporated to serve as an impetus for the development of a robust Islamic finance industry in Africa and beyond. The vision started under the name and style of Shariah Lifestyle, and over the years our hallmark has been Shariah first, innovation, excellence and integrity. The institute is anchored by a team of passionate, innovative and highly qualified professionals with vast experience in Islamic finance. Indeed, we derive satisfaction from our commitment to offering timely, efficient and tailor-made solutions to our delighted customers.
To be the leading global Islamic finance institute shaping the future.
1. To continuously develop human capital and bridge the dearth of competent and all-rounded Islamic finance professionals in East Africa whilst enhancing research and development.
2. To offer tailor-made solutions to all Islamic finance stakeholders, with specific initial focus on government, Islamic financial and learning institutions, and the public at large.
3. To consistently champion public awareness programs and knowledge-sharing sessions through both print and electronic media as well as public forums.
|
# -*- coding: utf-8 -*-
import v2_swagger_client
from library.base import _assert_status_code
from v2_swagger_client.models.role_request import RoleRequest
from v2_swagger_client.rest import ApiException
import base
def is_member_exist_in_project(members, member_user_name, expected_member_role_id=None):
    for member in members:
        if member.entity_name == member_user_name:
            if expected_member_role_id is not None:
                if member.role_id == expected_member_role_id:
                    return True
            else:
                return True
    return False
def get_member_id_by_name(members, member_user_name):
for member in members:
if member.entity_name == member_user_name:
return member.id
return None
class Project(base.Base):
def __init__(self, username=None, password=None):
kwargs = dict(api_type="projectv2")
if username and password:
kwargs["credential"] = base.Credential('basic_auth', username, password)
super(Project, self).__init__(**kwargs)
    def create_project(self, name=None, registry_id=None, metadata=None, expect_status_code=201, expect_response_body=None, **kwargs):
        if name is None:
            name = base._random_name("project")
        if metadata is None:
            metadata = {}
        project = v2_swagger_client.ProjectReq(project_name=name, registry_id=registry_id, metadata=metadata)
try:
_, status_code, header = self._get_client(**kwargs).create_project_with_http_info(project)
except ApiException as e:
base._assert_status_code(expect_status_code, e.status)
if expect_response_body is not None:
base._assert_status_body(expect_response_body, e.body)
return
base._assert_status_code(expect_status_code, status_code)
base._assert_status_code(201, status_code)
return base._get_id_from_header(header), name
def get_projects(self, params, **kwargs):
data = []
data, status_code, _ = self._get_client(**kwargs).list_projects_with_http_info(**params)
base._assert_status_code(200, status_code)
return data
def get_project_id(self, project_name, **kwargs):
project_data = self.get_projects(dict(), **kwargs)
actual_count = len(project_data)
        if actual_count == 1 and str(project_data[0].project_name) == str(project_name):
return project_data[0].project_id
else:
return None
def projects_should_exist(self, params, expected_count = None, expected_project_id = None, **kwargs):
project_data = self.get_projects(params, **kwargs)
actual_count = len(project_data)
        if expected_count is not None and actual_count != expected_count:
raise Exception(r"Private project count should be {}.".format(expected_count))
if expected_project_id is not None and actual_count == 1 and str(project_data[0].project_id) != str(expected_project_id):
raise Exception(r"Project-id check failed, expect {} but got {}, please check this test case.".format(str(expected_project_id), str(project_data[0].project_id)))
def check_project_name_exist(self, name=None, **kwargs):
try:
_, status_code, _ = self._get_client(**kwargs).head_project_with_http_info(name)
        except ApiException:
            status_code = -1
return {
200: True,
404: False,
}.get(status_code,False)
def get_project(self, project_id, expect_status_code = 200, expect_response_body = None, **kwargs):
try:
data, status_code, _ = self._get_client(**kwargs).get_project_with_http_info(project_id)
except ApiException as e:
base._assert_status_code(expect_status_code, e.status)
if expect_response_body is not None:
base._assert_status_body(expect_response_body, e.body)
return
base._assert_status_code(expect_status_code, status_code)
base._assert_status_code(200, status_code)
print("Project {} info: {}".format(project_id, data))
return data
def update_project(self, project_id, expect_status_code=200, metadata=None, cve_allowlist=None, **kwargs):
project = v2_swagger_client.ProjectReq(metadata=metadata, cve_allowlist=cve_allowlist)
try:
_, sc, _ = self._get_client(**kwargs).update_project_with_http_info(project_id, project)
except ApiException as e:
base._assert_status_code(expect_status_code, e.status)
else:
base._assert_status_code(expect_status_code, sc)
def delete_project(self, project_id, expect_status_code = 200, **kwargs):
_, status_code, _ = self._get_client(**kwargs).delete_project_with_http_info(project_id)
base._assert_status_code(expect_status_code, status_code)
def get_project_log(self, project_name, expect_status_code = 200, **kwargs):
body, status_code, _ = self._get_client(**kwargs).get_logs_with_http_info(project_name)
base._assert_status_code(expect_status_code, status_code)
return body
def filter_project_logs(self, project_name, operator, resource, resource_type, operation, **kwargs):
access_logs = self.get_project_log(project_name, **kwargs)
count = 0
for each_access_log in list(access_logs):
if each_access_log.username == operator and \
each_access_log.resource_type == resource_type and \
each_access_log.resource == resource and \
each_access_log.operation == operation:
count = count + 1
return count
def get_project_members(self, project_id, **kwargs):
kwargs['api_type'] = 'member'
return self._get_client(**kwargs).list_project_members(project_id)
    def get_project_member(self, project_id, member_id, expect_status_code=200, expect_response_body=None, **kwargs):
        kwargs['api_type'] = 'member'
data = []
try:
data, status_code, _ = self._get_client(**kwargs).get_project_member_with_http_info(project_id, member_id,)
except ApiException as e:
base._assert_status_code(expect_status_code, e.status)
if expect_response_body is not None:
base._assert_status_body(expect_response_body, e.body)
return
base._assert_status_code(expect_status_code, status_code)
base._assert_status_code(200, status_code)
return data
def get_project_member_id(self, project_id, member_user_name, **kwargs):
kwargs['api_type'] = 'member'
members = self.get_project_members(project_id, **kwargs)
result = get_member_id_by_name(list(members), member_user_name)
        if result is None:
            raise Exception(r"Failed to get member id of member {} in project {}.".format(member_user_name, project_id))
        else:
            return result
def check_project_member_not_exist(self, project_id, member_user_name, **kwargs):
kwargs['api_type'] = 'member'
members = self.get_project_members(project_id, **kwargs)
result = is_member_exist_in_project(list(members), member_user_name)
        if result:
raise Exception(r"User {} should not be a member of project with ID {}.".format(member_user_name, project_id))
def check_project_members_exist(self, project_id, member_user_name, expected_member_role_id = None, **kwargs):
kwargs['api_type'] = 'member'
members = self.get_project_members(project_id, **kwargs)
result = is_member_exist_in_project(members, member_user_name, expected_member_role_id = expected_member_role_id)
        if not result:
raise Exception(r"User {} should be a member of project with ID {}.".format(member_user_name, project_id))
def update_project_member_role(self, project_id, member_id, member_role_id, expect_status_code = 200, **kwargs):
kwargs['api_type'] = 'member'
role = RoleRequest(role_id = member_role_id)
data, status_code, _ = self._get_client(**kwargs).update_project_member_with_http_info(project_id, member_id, role = role)
base._assert_status_code(expect_status_code, status_code)
base._assert_status_code(200, status_code)
return data
def delete_project_member(self, project_id, member_id, expect_status_code = 200, **kwargs):
kwargs['api_type'] = 'member'
_, status_code, _ = self._get_client(**kwargs).delete_project_member_with_http_info(project_id, member_id)
base._assert_status_code(expect_status_code, status_code)
base._assert_status_code(200, status_code)
def add_project_members(self, project_id, user_id = None, member_role_id = None, _ldap_group_dn=None, expect_status_code = 201, **kwargs):
kwargs['api_type'] = 'member'
projectMember = v2_swagger_client.ProjectMember()
if user_id is not None:
projectMember.member_user = {"user_id": int(user_id)}
if member_role_id is None:
projectMember.role_id = 1
else:
projectMember.role_id = member_role_id
if _ldap_group_dn is not None:
projectMember.member_group = v2_swagger_client.UserGroup(ldap_group_dn=_ldap_group_dn)
data = []
try:
data, status_code, header = self._get_client(**kwargs).create_project_member_with_http_info(project_id, project_member = projectMember)
except ApiException as e:
base._assert_status_code(expect_status_code, e.status)
else:
base._assert_status_code(expect_status_code, status_code)
return base._get_id_from_header(header)
def query_user_logs(self, project_name, status_code=200, **kwargs):
try:
logs = self.get_project_log(project_name, expect_status_code=status_code, **kwargs)
count = 0
for log in list(logs):
count = count + 1
return count
except ApiException as e:
_assert_status_code(status_code, e.status)
return 0
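Nearly every wrapper above follows the same pattern: call the client, catch ApiException, and assert that whatever status came back (success or error) matches the caller's expectation. A self-contained sketch of that pattern, using hypothetical FakeClient/ApiError stand-ins rather than the real swagger client:

```python
# Minimal sketch of the expect_status_code pattern used throughout the
# Project wrapper above; FakeClient and ApiError are illustrative stand-ins.
class ApiError(Exception):
    def __init__(self, status):
        self.status = status

def _assert_status_code(expected, actual):
    if expected != actual:
        raise Exception("expected status {}, got {}".format(expected, actual))

class FakeClient(object):
    def get_project(self, project_id):
        if project_id == 1:
            return {"project_id": 1, "project_name": "library"}, 200
        raise ApiError(404)

def get_project(client, project_id, expect_status_code=200):
    try:
        data, status_code = client.get_project(project_id)
    except ApiError as e:
        # an error is acceptable only if the caller expected that status
        _assert_status_code(expect_status_code, e.status)
        return None
    _assert_status_code(expect_status_code, status_code)
    return data

client = FakeClient()
ok = get_project(client, 1)                               # succeeds with 200
missing = get_project(client, 2, expect_status_code=404)  # the 404 was expected
```

The design lets one wrapper serve both positive and negative test cases: a test asserting access is denied simply passes the error status it expects.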
|
Andax Industries LLC has a new patent on its color-coded Spill Preparedness Control Center (SPCC). The color-coded labeling system of the SPCC ensures the correct Spill Pac will be used every time. This system consists of four Spill Pacs with unique color-coded labeling with matching color-coded equipment labels. Potential leak sources can be identified by placing color-coded labels on equipment before a spill or leak occurs. In the event of a spill, just grab the matching color-coded Pac for the equipment label to have the exact sorbents and pads needed to contain and clean up the spill or leak. The SPCC System includes four vacuum-packed Spill Pacs: The orange Combo Pac, yellow Chemical & Hazmat Pac, blue Oil & Oil-Based Pac and the pink Battery Pac. Each Pac will contain or clean up to a 10 gallon spill.
|
from __future__ import absolute_import
import requests
from zerver.models import get_user_profile_by_email, UserProfile
from zerver.lib.avatar import gravatar_hash
from zerver.lib.upload import upload_avatar_image
from django.core.management.base import BaseCommand, CommandError
from django.core.files.uploadedfile import SimpleUploadedFile
class Command(BaseCommand):
help = """Migrate the specified user's Gravatar over to an avatar that we serve. If two
email addresses are specified, use the Gravatar for the first and upload the image
for both email addresses."""
def add_arguments(self, parser):
parser.add_argument('old_email', metavar='<old email>', type=str,
help="user whose Gravatar should be migrated")
parser.add_argument('new_email', metavar='<new email>', type=str, nargs='?', default=None,
help="user to copy the Gravatar to")
def handle(self, *args, **options):
old_email = options['old_email']
if options['new_email']:
new_email = options['new_email']
else:
new_email = old_email
gravatar_url = "https://secure.gravatar.com/avatar/%s?d=identicon" % (gravatar_hash(old_email),)
gravatar_data = requests.get(gravatar_url).content
gravatar_file = SimpleUploadedFile('gravatar.jpg', gravatar_data, 'image/jpeg')
try:
user_profile = get_user_profile_by_email(old_email)
except UserProfile.DoesNotExist:
try:
user_profile = get_user_profile_by_email(new_email)
except UserProfile.DoesNotExist:
raise CommandError("Could not find specified user")
upload_avatar_image(gravatar_file, user_profile, old_email)
if old_email != new_email:
gravatar_file.seek(0)
upload_avatar_image(gravatar_file, user_profile, new_email)
user_profile.avatar_source = UserProfile.AVATAR_FROM_USER
user_profile.save(update_fields=['avatar_source'])
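gravatar_hash is imported from zerver.lib.avatar; the standard Gravatar convention it presumably follows is an MD5 digest of the trimmed, lowercased email address. A sketch of that convention (the real Zulip helper may differ in details):

```python
import hashlib

def gravatar_hash(email):
    # Standard Gravatar convention: MD5 of the trimmed, lowercased address.
    # (An assumption here; Zulip's zerver.lib.avatar helper may add details.)
    return hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()

# Same URL shape as the management command above
url = "https://secure.gravatar.com/avatar/%s?d=identicon" % (gravatar_hash("User@Example.com "),)
```

Normalizing before hashing is what makes the lookup stable across differently-cased entries of the same address.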
|
Chown is proud to present this medium bronze patina finished door handle set by Emtek. Made from premium materials, the 1441NARHMB door handle set offers great function and value for your home. This fixture is part of Emtek's decorative collection, so be sure to check out other styles of fixtures to accessorize your room.
|
# Config Wizard By: Blazetamer 2013-2014
# Thanks to Blazetamer, TheHighway, and the rest of the crew at TVADDONS.ag (XBMCHUB.com).
import urllib,urllib2,re,xbmcplugin,xbmcgui,xbmc,xbmcaddon,os,sys,downloader,extract,time,shutil
import wizardmain as main
AddonTitle='Config Wizard'; wizardUrl='http://tribeca.tvaddons.ag/tools/wizard/';
SiteDomain='TVADDONS.AG'; TeamName='TEAM TVADDONS';
addon=main.addon; net=main.net; settings=main.settings;
SkinBackGroundImg=os.path.join('special://','home','media','SKINDEFAULT.jpg')
RequiredHostsPath=xbmc.translatePath(os.path.join(main.AddonPath,'requiredhosts.py'))
RequiredHostsUrl=wizardUrl+'requiredhosts.txt'
RequiredHostsUrl='https://offshoregit.com/xbmchub/config-wizard-development/raw/master/requiredhosts.py'
LinksUrl=wizardUrl+'links.txt'
#LinksUrl='https://offshoregit.com/xbmchub/config-wizard-development/raw/master/links.txt'
LocalLinks=xbmc.translatePath(os.path.join(main.AddonPath,'links.txt'))
#==========================Help WIZARD=====================================================================================================
def HELPCATEGORIES():
    if ((XBMCversion['Ver']=='') or (int(XBMCversion['two']) < 12)) and (settings.getSetting('bypass-xbmcversion')=='false'):
eod(); addon.show_ok_dialog(["Compatibility Issue: Outdated Kodi Setup","Please upgrade to a newer version of XBMC first!","Visit %s for Support!"%SiteDomain],title="XBMC "+XBMCversion['Ver'],is_error=False); DoA('Back');
else:
if main.isFile(LocalLinks)==True: link=main.nolines(main.FileOpen(LocalLinks))
else: link=main.OPEN_URL(LinksUrl).replace('\n','').replace('\r','').replace('\a','')
match=re.compile('name="(.+?)".+?rl="(.+?)".+?mg="(.+?)".+?anart="(.+?)".+?escription="(.+?)".+?ype="(.+?)"').findall(link)
for name,url,iconimage,fanart,description,filetype in match:
#if 'status' in filetype:
#main.addHELPDir(name,url,'wizardstatus',iconimage,fanart,description,filetype)
#else:
main.addHELPDir(name,url,'helpwizard',iconimage,fanart,description,filetype)
CustomUrl=settings.getSetting('custom-url')
try:
if (len(CustomUrl) > 10) and ('://' in CustomUrl):
main.addHELPDir('Custom Url[CR](Addon Settings)',CustomUrl,'helpwizard',main.AddonIcon,main.AddonFanart,"Custom url found in addon settings.","main") ## For Testing to test a url with a FileHost.
except: pass
#main.addHELPDir('Testing','http://www.firedrive.com/file/################','helpwizard',iconimage,fanart,description,filetype) ## For Testing to test a url with a FileHost.
main.AUTO_VIEW('movies')
## ### ##
def xEBb(t): main.xEB('Skin.SetBool(%s)'%t)
def xEBS(t,n): main.xEB('Skin.SetString(%s,%s)'%(t,n))
def HELPWIZARD(name,url,description,filetype):
path=xbmc.translatePath(os.path.join('special://home','addons','packages')); confirm=xbmcgui.Dialog(); filetype=filetype.lower();
if filetype=='splash':
try: html=main.OPEN_URL(url)
except: return
import splash_highway as splash
SplashBH=xbmc.translatePath(os.path.join(main.AddonPath,'ContentPanel.png'))
ExitBH=xbmc.translatePath(os.path.join(main.AddonPath,'Exit.png'))
splash.do_My_TextSplash2(html,SplashBH,12,TxtColor='0xff00bfff',Font='font12',BorderWidth=40,ImgexitBtn=ExitBH,colorDiffuse='0xff00bfff');
return
if confirm.yesno(TeamName,"Would you like %s to "%SiteDomain,"customize your add-on selection? "," "):
dp=xbmcgui.DialogProgress(); dp.create(AddonTitle,"Downloading ",'','Please Wait')
lib=os.path.join(path,name+'.zip')
try: os.remove(lib)
except: pass
### ## ... ##
#try:
# if (main.isFile(LocalLinks)==False) or (main.isFile(RequiredHostsPath)==False): FHTML=main.OPEN_URL(RequiredHostsUrl); main.FileSave(RequiredHostsPath,FHTML); time.sleep(2)
#except: pass
if main.isFile(RequiredHostsPath)==False: dialog=xbmcgui.Dialog(); dialog.ok("Error!",'import not found.'); return
try: import requiredhosts as RequiredHosts
except: print "error attempting to import requiredhosts as RequiredHosts"; dialog=xbmcgui.Dialog(); dialog.ok("Error!","import failed."); return
#print {'url':url}
url=RequiredHosts.CheckForHosts(url); #print {'url':url}
### ## ... ##
if str(url).endswith('[error]'): print url; dialog=xbmcgui.Dialog(); dialog.ok("Error!",url); return
if '[error]' in url: print url; dialog=xbmcgui.Dialog(); dialog.ok("Error!",url); return
if not str(url).lower().startswith('http://'): print url; dialog=xbmcgui.Dialog(); dialog.ok("Error!",url); return
print {'url':url}
downloader.download(url,lib,dp)
### ## ... ##
#return ## For Testing 2 Black Overwrite of stuff. ##
### ## ... ##
if filetype=='main': addonfolder=xbmc.translatePath('special://home')
elif filetype=='addon': addonfolder=xbmc.translatePath(os.path.join('special://home','addons'))
else: print {'filetype':filetype}; dialog=xbmcgui.Dialog(); dialog.ok("Error!",'filetype: "%s"'%str(filetype)); return
#time.sleep(2)
xbmc.sleep(4000)
dp.update(0,"","Extracting Zip Please Wait")
print '======================================='; print addonfolder; print '======================================='
extract.all(lib,addonfolder,dp)
proname=xbmc.getInfoLabel("System.ProfileName")
if (filetype=='main') and (settings.getSetting('homescreen-shortcuts')=='true'):
link=main.OPEN_URL(wizardUrl+'shortcuts.txt')
shorts=re.compile('shortcut="(.+?)"').findall(link)
for shortname in shorts: main.xEB('Skin.SetString(%s)'%shortname)
if (filetype=='main') and (settings.getSetting('other-skin-settings')=='true'):
#main.xEB('Skin.SetString(CustomBackgroundPath,%s)' %img)
#main.xEB('Skin.SetBool(ShowBackgroundVideo)') ## Set to true so we can later set them to false.
#main.xEB('Skin.SetBool(ShowBackgroundVis)') ## Set to true so we can later set them to false.
#main.xEB('Skin.ToggleSetting(ShowBackgroundVideo)') ## Switching from true to false.
#main.xEB('Skin.ToggleSetting(ShowBackgroundVis)') ## Switching from true to false.
xEBb('HideBackGroundFanart')
xEBb('HideVisualizationFanart')
xEBb('AutoScroll')
if (filetype=='main') and (main.isFile(xbmc.translatePath(SkinBackGroundImg))==True):
xEBS('CustomBackgroundPath',SkinBackGroundImg)
xEBb('UseCustomBackground')
#time.sleep(2)
xbmc.sleep(4000)
xbmc.executebuiltin('UnloadSkin()'); xbmc.executebuiltin('ReloadSkin()'); xbmc.executebuiltin("LoadProfile(%s)" % proname)
dialog=xbmcgui.Dialog(); dialog.ok("Success!","Installation Complete"," [COLOR gold]Brought To You By %s[/COLOR]"%SiteDomain)
##
#==========
def DoA(a): xbmc.executebuiltin("Action(%s)" % a) #DoA('Back'); # to move to previous screen.
def eod(): addon.end_of_directory()
#==========OS Type & XBMC Version===========================================================================================
XBMCversion={}; XBMCversion['All']=xbmc.getInfoLabel("System.BuildVersion"); XBMCversion['Ver']=XBMCversion['All']; XBMCversion['Release']=''; XBMCversion['Date']='';
if ('Git:' in XBMCversion['All']) and ('-' in XBMCversion['All']): XBMCversion['Date']=XBMCversion['All'].split('Git:')[1].split('-')[0]
if ' ' in XBMCversion['Ver']: XBMCversion['Ver']=XBMCversion['Ver'].split(' ')[0]
if '-' in XBMCversion['Ver']: XBMCversion['Release']=XBMCversion['Ver'].split('-')[1]; XBMCversion['Ver']=XBMCversion['Ver'].split('-')[0]
if len(XBMCversion['Ver']) > 1: XBMCversion['two']=str(XBMCversion['Ver'][0])+str(XBMCversion['Ver'][1])
else: XBMCversion['two']='00'
if len(XBMCversion['Ver']) > 3: XBMCversion['three']=str(XBMCversion['Ver'][0])+str(XBMCversion['Ver'][1])+str(XBMCversion['Ver'][3])
else: XBMCversion['three']='000'
sOS=str(main.get_xbmc_os());
print [['Version All',XBMCversion['All']],['Version Number',XBMCversion['Ver']],['Version Release Name',XBMCversion['Release']],['Version Date',XBMCversion['Date']],['OS',sOS]]
#==========END HELP WIZARD==================================================================================================
params=main.get_params(); url=None; name=None; mode=None; year=None; imdb_id=None
def ParsUQP(s,Default=None):
try: return urllib.unquote_plus(params[s])
except: return Default
fanart=ParsUQP("fanart",""); description=ParsUQP("description",""); filetype=ParsUQP("filetype",""); url=ParsUQP("url",""); name=ParsUQP("name",""); mode=ParsUQP("mode"); year=ParsUQP("year");
print "Mode: "+str(mode); print "URL: "+str(url); print "Name: "+str(name); print "Year: "+str(year)
if mode==None or url==None or len(url)<1: HELPCATEGORIES()
elif mode=="wizardstatus": print""+url; items=main.WIZARDSTATUS(url)
elif mode=='helpwizard': HELPWIZARD(name,url,description,filetype)
xbmcplugin.endOfDirectory(int(sys.argv[1]))
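main.get_params presumably parses the ?key=value&... query string that Kodi passes to the plugin in sys.argv[2], which ParsUQP then reads with urllib.unquote_plus. A stand-alone sketch of that parsing, written for Python 3 (urllib.parse) even though the wizard above targets Python 2:

```python
from urllib.parse import unquote_plus

def get_params(paramstring):
    # Kodi passes plugin arguments as "?key=value&key2=value2" in sys.argv[2];
    # this is a hypothetical stand-in for main.get_params() in the wizard above.
    params = {}
    for pair in paramstring.lstrip("?").split("&"):
        if "=" in pair:
            key, _, value = pair.partition("=")
            params[key] = unquote_plus(value)
    return params

params = get_params("?mode=helpwizard&name=Config+Wizard&url=http%3A%2F%2Fexample.com%2Fa.zip")
```

Decoding with unquote_plus restores both percent-escapes and the `+` spaces that Kodi's URL building produces.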
|
Established in 1983, St. Hubert's acclaimed pet training school offers one of the most comprehensive and respected dog training and behavior programs in the nation.
A good education may be the most valuable gift you can ever give your canine companion. Proper training will enable your dog to become a well-behaved family member, participating in daily activities both at home and in public. Training also increases effective communication between dog and pet parent which strengthens the special bond you share with your best friend.
Classes are conducted in St. Hubert's modern training facility located at our 575 Woodland Avenue Campus in Madison, New Jersey. Fully air-conditioned for year-round comfort, our school features three large, fully matted and cushioned training rings. Outdoor classes are offered seasonally in a spacious yet secure area on site.
In addition, we are pleased to begin offering pet training classes in Ledgewood, New Jersey, where we have partnered with The Animal Hospital of Roxbury.
Through humane methods such as reward-based training, fair leadership and simple kindness, more than 4,000 canine diplomas are awarded annually to guardians and their dogs!
The schedule for training classes is listed online and can be accessed from the menu to the left. If you have questions about which class to enroll your dog or puppy in, please call us at 973-377-0116 and we will be glad to assist you!
Casey, an alumnus of St. Hubert's, recently graduated from training!
The St. Hubert’s Scholarship Fund was created to help animals that are adopted adjust to their new home and bond with their new family. We all know that the more time we spend with our pets, the stronger our connection and the better our communication with them is. For an adopted animal, it also increases the odds that they will not be returned to the shelter for behavior issues.
For every $100 collected, we will grant a full scholarship to one of our training classes (for our canine companions) or a private, in home session (for our feline friends). Our goal is to collect enough donations to offer this to every animal available for adoption at any St. Hubert’s Animal Welfare Center location.
Please consider donating to this amazing fund and help us give the gift of learning to homeless animals in need. Whether you choose to give a one-time gift or choose to make it a recurring, monthly donation, your help is greatly appreciated by all of us.
|
import random
import pickle
from sklearn import tree
from pprint import pprint
from PIL import Image
##choices = ["█", " ", "▓", "▒", "░"]
choices = ["██", " "]
WIDTH = 8
def clear(p):
    # p truthy: drop only the most recent training sample; otherwise reset the file
    name = 'training.dat'
    if p:
        with open(name, 'rb') as f:
            l = pickle.load(f)
        l[1] = l[1][:-1]
        with open(name, 'wb') as f:
            pickle.dump(l, f)
    else:
        with open(name, 'wb') as f:
            pickle.dump([0, []], f)
def run(width):
    # generate a random horizontally-symmetric sprite and print it
    total = [["" for _ in range(width)] for _ in range(width)]
    if width % 2 != 0:
        # fill the centre column for odd widths
        for i in range(width):
            total[i][(width - 1) // 2] = random.choice(choices)
    for i in range(width):
        # fill the left half of each row and mirror it onto the right
        for j in range(width // 2):
            x = random.choice(choices)
            total[i][j] = x
            total[i][width - 1 - j] = x
    for row in total:
        print("".join(row))
    return sprite_to_num(total)
def like(t):
#whether you like the image or not
name = 'training.dat'
with open(name, 'rb') as f:
l = pickle.load(f)
    ans = input("Do you like it? (y/n)\n")
if ans == "y":
#print('appending to yes list:', t)
l[1].append([t, 1]) # tell computer you like the image
im = Image.new("RGB", (WIDTH, WIDTH))
pix = im.load()
for x in range(WIDTH):
for y in range(WIDTH):
if t[y][x] == "0": #0 means black
pix[x,y] = (0,0,0)
else:
pix[x,y] = (255,255,255)
im.save("sprites/%d.png" % l[0], "PNG")
l[0] += 1
elif ans == "n":
#print('appending to no list:', t)
l[1].append([t, 0]) # tell computer you do not like the image
#print(l)
else:
return
with open(name, 'wb') as f:
pickle.dump(l,f)
def sprite_to_num(sprite):
#converts sprite into a readable format for sklearn
for i, row in enumerate(sprite):
s = ""
for j, char in enumerate(row): #char is the individual items in each row
s += str(choices.index(char))
sprite[i] = s
return sprite
def learn(width):
name = 'training.dat'
with open(name, 'rb') as f:
l = pickle.load(f)
l = l[1]
    if l == []:
        # no training data yet: just generate (and return) a random sprite
        return run(width)
features = []
labels = []
for sprite in l:
features.append(sprite[0])
labels.append(sprite[1])
clf = tree.DecisionTreeClassifier()
clf = clf.fit(features, labels)
total = run(width)
    x = clf.predict([total])[0]
    if x:
        print("Computer says YES: ")
    else:
        print("Computer says NO: ")
    pprint(total)
return total
#clear(0) #1 if remove last one, 0 if all
while True:
#like(run(8))
like(learn(WIDTH))
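The symmetric fill inside `run()` can be summarized by a small standalone helper. This is an illustration only, not part of the script; `mirror_row` and its "0"/"1" cell values are hypothetical stand-ins for the `choices` glyphs:

```python
import random

def mirror_row(width, rng=random):
    # build one horizontally symmetric row, as run() does per row:
    # choose the left half (plus the centre cell for odd widths) and mirror it
    row = [""] * width
    for j in range((width + 1) // 2):
        x = rng.choice(["0", "1"])
        row[j] = x
        row[width - 1 - j] = x
    return row
```

Every row produced this way reads the same forwards and backwards, which is what gives the generated sprites their left-right symmetry.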
|
A Bronx teenager is recovering in the hospital after a robber allegedly slashed her in the face.
The 18-year-old victim wants to remain anonymous because she says she's very scared after what happened. She says the incident occurred while she was going to get a snack at a deli grocery shop after being let out of Lehman High School for the day.
Police confirm that at 3:15 p.m. Monday, a man walked up to her on the street outside a Walgreens, demanded the chain around her neck and then ripped it off her, hurting her neck, cheek and earlobe. Police say he then went into the victim's right pocket, took the $6 she had in there and then punched her in the face.
The girl told News 12 The Bronx that he also pulled out a box cutter and slashed her face. Police say the suspect ran away toward the Bruckner Expressway.
|
"""
Django settings for simple_app project.
Generated by 'django-admin startproject' using Django 1.8.3.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.8/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'xj#n!k9!7lgce4yem@h9g%jpg_cg&4!&_eh6gknic_b%e$yndk'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'sample'
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
)
ROOT_URLCONF = 'simple_app.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'simple_app.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': ':memory:',
}
}
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
'LOCATION': 'cache',
        'TIMEOUT': 3  # cache entries expire after 3 seconds
}
}
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'loggers': {
        '': {
            'handlers': [],
            'level': 'DEBUG',
            'propagate': False,
        },
    },
}
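The `DatabaseCache` backend above stores entries in a database table (created with Django's `createcachetable` management command), and `TIMEOUT` controls how long each entry lives. As a rough, framework-free sketch of that expiry semantics (this is not Django's implementation, just an illustration of the timeout behaviour):

```python
import time

class TTLCache:
    # minimal sketch: entries expire `timeout` seconds after being set,
    # mirroring the TIMEOUT option of Django's cache backends
    def __init__(self, timeout=3):
        self.timeout = timeout
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.timeout)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expires = item
        if time.monotonic() >= expires:
            # lazily evict the stale entry on read
            del self._store[key]
            return default
        return value
```

A 3-second timeout like the one configured here means a value written by one request may already be gone by the next; that is usually intentional for rapidly-changing data.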
|
|
from .kern import Kern, CombinationKernel
import itertools
import numpy as np
from functools import reduce, partial
from .independent_outputs import index_to_slices
from paramz.caching import Cache_this
class ZeroKern(Kern):
def __init__(self):
super(ZeroKern, self).__init__(1, None, name='ZeroKern',useGPU=False)
def K(self, X ,X2=None):
if X2 is None:
X2 = X
return np.zeros((X.shape[0],X2.shape[0]))
def update_gradients_full(self,dL_dK, X, X2=None):
return np.zeros(dL_dK.shape)
def gradients_X(self,dL_dK, X, X2=None):
return np.zeros((X.shape[0],X.shape[1]))
class MultioutputKern(CombinationKernel):
"""
Multioutput kernel is a meta class for combining different kernels for multioutput GPs.
As an example let us have inputs x1 for output 1 with covariance k1 and x2 for output 2 with covariance k2.
In addition, we need to define the cross covariances k12(x1,x2) and k21(x2,x1). Then the kernel becomes:
k([x1,x2],[x1,x2]) = [k1(x1,x1) k12(x1, x2); k21(x2, x1), k2(x2,x2)]
For the kernel, the kernels of outputs are given as list in param "kernels" and cross covariances are
given in param "cross_covariances" as a dictionary of tuples (i,j) as keys. If no cross covariance is given,
it defaults to zero, as in k12(x1,x2)=0.
In the cross covariance dictionary, the value needs to be a struct with elements
-'kernel': a member of Kernel class that stores the hyper parameters to be updated when optimizing the GP
-'K': function defining the cross covariance
-'update_gradients_full': a function to be used for updating gradients
-'gradients_X': gives a gradient of the cross covariance with respect to the first input
"""
def __init__(self, kernels, cross_covariances={}, name='MultioutputKern'):
#kernels contains a list of kernels as input,
if not isinstance(kernels, list):
self.single_kern = True
self.kern = kernels
kernels = [kernels]
else:
self.single_kern = False
self.kern = kernels
        # The combination kernel ALWAYS puts the extra dimension last.
# Thus, the index dimension of this kernel is always the last dimension
# after slicing. This is why the index_dim is just the last column:
self.index_dim = -1
super(MultioutputKern, self).__init__(kernels=kernels, extra_dims=[self.index_dim], name=name, link_parameters=False)
nl = len(kernels)
#build covariance structure
covariance = [[None for i in range(nl)] for j in range(nl)]
linked = []
for i in range(0,nl):
unique=True
for j in range(0,nl):
if i==j or (kernels[i] is kernels[j]):
covariance[i][j] = {'kern': kernels[i], 'K': kernels[i].K, 'update_gradients_full': kernels[i].update_gradients_full, 'gradients_X': kernels[i].gradients_X}
if i>j:
unique=False
elif cross_covariances.get((i,j)) is not None: #cross covariance is given
covariance[i][j] = cross_covariances.get((i,j))
else: # zero covariance structure
kern = ZeroKern()
covariance[i][j] = {'kern': kern, 'K': kern.K, 'update_gradients_full': kern.update_gradients_full, 'gradients_X': kern.gradients_X}
if unique is True:
linked.append(i)
self.covariance = covariance
self.link_parameters(*[kernels[i] for i in linked])
@Cache_this(limit=3, ignore_args=())
def K(self, X ,X2=None):
if X2 is None:
X2 = X
slices = index_to_slices(X[:,self.index_dim])
slices2 = index_to_slices(X2[:,self.index_dim])
target = np.zeros((X.shape[0], X2.shape[0]))
[[[[ target.__setitem__((slices[i][k],slices2[j][l]), self.covariance[i][j]['K'](X[slices[i][k],:],X2[slices2[j][l],:])) for k in range( len(slices[i]))] for l in range(len(slices2[j])) ] for i in range(len(slices))] for j in range(len(slices2))]
return target
@Cache_this(limit=3, ignore_args=())
def Kdiag(self,X):
slices = index_to_slices(X[:,self.index_dim])
kerns = itertools.repeat(self.kern) if self.single_kern else self.kern
target = np.zeros(X.shape[0])
[[np.copyto(target[s], kern.Kdiag(X[s])) for s in slices_i] for kern, slices_i in zip(kerns, slices)]
return target
def _update_gradients_full_wrapper(self, cov_struct, dL_dK, X, X2):
gradient = cov_struct['kern'].gradient.copy()
cov_struct['update_gradients_full'](dL_dK, X, X2)
cov_struct['kern'].gradient += gradient
def _update_gradients_diag_wrapper(self, kern, dL_dKdiag, X):
gradient = kern.gradient.copy()
kern.update_gradients_diag(dL_dKdiag, X)
kern.gradient += gradient
def reset_gradients(self):
for kern in self.kern: kern.reset_gradients()
def update_gradients_full(self,dL_dK, X, X2=None):
self.reset_gradients()
slices = index_to_slices(X[:,self.index_dim])
if X2 is not None:
slices2 = index_to_slices(X2[:,self.index_dim])
[[[[ self._update_gradients_full_wrapper(self.covariance[i][j], dL_dK[slices[i][k],slices2[j][l]], X[slices[i][k],:], X2[slices2[j][l],:]) for k in range(len(slices[i]))] for l in range(len(slices2[j]))] for i in range(len(slices))] for j in range(len(slices2))]
else:
[[[[ self._update_gradients_full_wrapper(self.covariance[i][j], dL_dK[slices[i][k],slices[j][l]], X[slices[i][k],:], X[slices[j][l],:]) for k in range(len(slices[i]))] for l in range(len(slices[j]))] for i in range(len(slices))] for j in range(len(slices))]
def update_gradients_diag(self, dL_dKdiag, X):
self.reset_gradients()
slices = index_to_slices(X[:,self.index_dim])
[[ self._update_gradients_diag_wrapper(self.covariance[i][i]['kern'], dL_dKdiag[slices[i][k]], X[slices[i][k],:]) for k in range(len(slices[i]))] for i in range(len(slices))]
def gradients_X(self,dL_dK, X, X2=None):
slices = index_to_slices(X[:,self.index_dim])
target = np.zeros((X.shape[0], X.shape[1]) )
if X2 is not None:
slices2 = index_to_slices(X2[:,self.index_dim])
[[[[ target.__setitem__((slices[i][k]), target[slices[i][k],:] + self.covariance[i][j]['gradients_X'](dL_dK[slices[i][k],slices2[j][l]], X[slices[i][k],:], X2[slices2[j][l],:]) ) for k in range(len(slices[i]))] for l in range(len(slices2[j]))] for i in range(len(slices))] for j in range(len(slices2))]
else:
[[[[ target.__setitem__((slices[i][k]), target[slices[i][k],:] + self.covariance[i][j]['gradients_X'](dL_dK[slices[i][k],slices[j][l]], X[slices[i][k],:], (None if (i==j and k==l) else X[slices[j][l],:] )) ) for k in range(len(slices[i]))] for l in range(len(slices[j]))] for i in range(len(slices))] for j in range(len(slices))]
return target
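As an illustration of the block structure described in the MultioutputKern docstring, here is a framework-free sketch of how per-output and cross covariances tile into one matrix. It uses plain Python lists and toy kernels; `toy_k1` and `toy_k2` are hypothetical stand-ins, not GPy kernels:

```python
def toy_k1(a, b):
    # covariance for output 1: a simple product kernel
    return [[x * y for y in b] for x in a]

def toy_k2(a, b):
    # covariance for output 2: a constant kernel
    return [[1.0 for _ in b] for _ in a]

def zero_k(a, b):
    # cross covariance defaults to zero, mirroring ZeroKern above
    return [[0.0 for _ in b] for _ in a]

def block_K(x1, x2):
    # k([x1,x2],[x1,x2]) = [k1(x1,x1) k12(x1,x2); k21(x2,x1) k2(x2,x2)]
    top = [r1 + r2 for r1, r2 in zip(toy_k1(x1, x1), zero_k(x1, x2))]
    bottom = [r1 + r2 for r1, r2 in zip(zero_k(x2, x1), toy_k2(x2, x2))]
    return top + bottom
```

The real kernel does the same tiling with numpy slices derived from the index column, but the block layout is identical.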
|
In his monumental ''Frontiers,'' Noel Mostert, a white South African author residing in Tangier, has self-administered a cathartic cure for a kind of disorder only traditional doctors or herbalists may remedy: the angst of exile. Exacerbated here by the agony of abiding Afrikaner affection for Africa, this angst is coupled with the carnage that has sealed Afrikaner claims to it. But Mostert makes clear that there is ample enough blame to go around.
In classic cadences, ''Frontiers'' ushers us through spectacular visions of the Xhosa heartland, one of the linguistic subdivisions of the Nguni subgroup of the Southern Bantu-speaking peoples who, with their Zulu-speaking counterparts, played prominent roles in the 19th Century history of South Africa. Dispossessed and outdone by both internal and external forces, Xhosa-speakers faced an irruptive settler presence with apocalyptic, millenarian weapons. Against a background of ceaseless imperial aggression, they heeded the alarm of perennial prophecy in the form of one Nongqawuse, a Xhosa maiden who dreamed she foresaw a way out of their dilemma.
But this was the end of a process whose antecedents lay in the 17th and early 19th Centuries, when Holland and Britain respectively sought to reconfigure South Africa in strategies that eventually led to white migration. Of the two, Britain's was much the more damaging.
Clashes of culture, ethics, beliefs and essential goals led to a poignant juxtaposition: subjugation of many African peoples set against a phoenix-like prophecy of a permanent settler presence. But the drama in which outwardly separate racial and societal destinies became inextricably bound is Mostert's secondary concern. More evident is the subtle manner in which this son of the Huguenot diaspora traces his clan's forebears to the earliest era of Cape society.
Mostert tells a story well and achieves an admirable degree of balance in the presentation, having grounded himself in rich primary sources. The result fuses Michener with Michelet (the great 19th Century historian of France) and salves Mostert's own sestiger conscience, sestigers (sixties) being the term for Afrikaner and Anglo-South Africans who sought to escape their baleful homeland during the 1960s.
Since that time, and especially as the anti-apartheid opposition grew in strength and authority in the mid- to late-1980s, the stance of the sestiger has seemed increasingly mainstream in the white emigre community, its ideological center moving ever more leftward as a kind of corrective to traditional right-wing politics. Within the same two decades South African historiography has undergone a revolution of revision, at home and abroad. Forced to come to terms with the struggles of the black majority, and unwilling to suffer consignment to perennial pariahdom, historians have started to think differently.
Mostert had to be touched by all this; his spiritual odyssey resembles theirs. And thus, though he left home in 1947, ''Frontiers'' is a sestiger work. On a canvas as capacious as the veld itself, in strokes as broad and majestic as those routinely reserved for the movies, it strives to tell a story of how settlers and the indigenous population collide, cohabit, conflict and cooperate.
To be sure, difficulties abound here. Mostert purports to relate contrasting chronicles of ''conquering'' and ''conquered'' nations in one narrative, seeking to erect a level platform from which each has its say. But even as an honest broker he retains specific advantages that remind us that this playing field has never been an even one.
The settlers were at once repelled and fascinated by the indigenous residents of the Western Cape, for cultural reasons Mostert graphically illustrates. This oscillation between abhorrence and reverence parallels the feeling the Dutch and later their English antagonists held for the land itself, terrain as alluring and unforgiving as the siren's haunting song. Frontier realities created a ruthless, raw, rebellious run of renegades who grudgingly admired their Xhosa predecessors in what Boers saw as their Promised Land, fighting them and one another at every turn.
The bulk of the four parts of ''Frontiers'' covers in unsparingly excruciating detail those campaigns that led to the decay of the Xhosa and an apocalyptic cataclysm in 1857, situating this against the background of Anglo- Boer cultural warfare. It is an epic of perfidy and genocide very like the westward expansion of Manifest Destiny in North America, which was undertaken with equivalent zeal.
Boers (farmers) were Africanized by their testy new frontier environment. They would regularly trek to the interior, seasonally or permanently. Miscegenation was rife, giving rise to ''Coloured'' people of mixed race and giving the lie to Afrikaner pretense of ethnic purity.
Mostert`s treatment of the Xhosa is reverential but unsentimental. He portrays their cosmology, mores, social organization, political and spiritual ideas, as well as memorable leading figures in human terms.
Although their roots undoubtedly reach back into the far more distant past, modern Xhosa history begins in the 18th Century, with the passing of Tshiwo, great-great-great grandfather of Ngqika. This precipitated a succession crisis that rapidly escalated into civil war. Mdange, uncle and regent of Tshiwo's designated heir, Phalo, won it. Later, within Phalo's rule, marital indiscretion caused another schism. The Xhosa split into three, following Phalo or his sons Gcaleka and Rarabe. By the 1770s five Xhosa polities lay in the path of the Boer colonists. A turf battle waged by these parties was the first Xhosa frontier war.
What began as a simple conflict over land was exacerbated by the entry onto a South African stage of new European geopolitical factors. The Dutch East India Company, supporters of limited colonization in the 1600s, were virtually bankrupt by the late 1700s. The Netherlands, a sometime ally and frequent rival of Britain in the previous century, was confronted, like the rest of Europe, by Napoleon, and Britain used this as an excuse to undertake a limited occupation of the Cape. A fragile Batavian Republic was briefly allowed to resume Dutch control, before a second and much more thorough British return in the early 1800s.
Those ostensibly external forces intensified local conflict, as Boer settlers were joined by uitlanders, colonists coming under British aegis. This led to a shaky alliance between Boer and ''Briton'' and a struggle over how to deal with ''Natives,'' increasingly regarded as a problem. It took an entire century, nine frontier wars, two Anglo-Boer wars and a mineral revolution to resolve each of these vexing questions. In settler eyes, the Xhosa were obstacles to their endeavor. What happened between them set the tone for what South Africa is.
''Frontiers'' would be a courageous work for anyone to write at any time, but it is a most necessary one to read at this moment, as apartheid begins its death-although, like all tyrannies, its demise could take decades. Noel Mostert provides a mirror of the last hundred years and serves as a missal to help us in mourning the victims of the past five centuries.
|
import os, sys
import pystache
from classes.Extension import Extension
execDir = os.path.dirname(os.path.abspath(sys.argv[0])) + "/"
templateDir = "templates/"
templateExtension = "tpl"
tab = " "
tab2 = tab + tab
def versionBID(feature, core = False, ext = False):
if feature is None:
return ""
version = str(feature.major) + str(feature.minor)
if core:
return version + "core"
elif ext:
return version + "ext"
return version
def template(outputfile):
    with open(execDir + templateDir + outputfile + ".in", "r") as file:
return file.read()
def supportedLambda(obj):
return lambda feature, core, ext: ( not ext and obj.supported(feature, core)
or ext and not obj.supported(feature, False) )
def enumSuffixPriority(name):
index = name.rfind("_")
if index < 0:
return -1
ext = name[index + 1:]
if ext not in Extension.suffixes:
return -1
return Extension.suffixes.index(ext)
class Generator:
renderer = None
@classmethod
def generate(_class, context, outputPath, templateName=None):
if _class.renderer is None:
_class.renderer = pystache.Renderer(search_dirs=os.path.join(execDir, templateDir),
file_extension=templateExtension,
escape=lambda u: u)
outputDir = os.path.dirname(outputPath).format(**context)
if not os.path.exists(outputDir):
os.makedirs(outputDir)
outputFile = os.path.basename(outputPath)
if templateName is None:
templateName = outputFile
outputFile = outputFile.format(**context)
print("generating {} in {}".format(outputFile, outputDir)) #TODO-LW move logging to appropriate place
with open(os.path.join(outputDir, outputFile), 'w') as file:
file.write(_class.renderer.render_name(templateName, context))
class Status:
targetdir = ""
def status(file):
print("generating " + file.replace(Status.targetdir, ""))
# enum_binding_name_exceptions = [ "DOMAIN", "MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB", "FALSE", "TRUE", "NO_ERROR", "WAIT_FAILED" ]
def enumBID(enum):
return enum.name
# extension_binding_name_exceptions = [ ]
# ToDo: discuss - just use name for glbinding?
def extensionBID(extension):
return extension.name
def functionBID(function):
return function.name
def alphabeticallyGroupedLists():
    # create a dictionary of empty lists keyed by '0' and the upper-case letters
    keys = '0ABCDEFGHIJKLMNOPQRSTUVWXYZ'
lists = dict()
for key in keys:
lists[key] = list()
return lists
def alphabeticalGroupKeys():
return [str(c) for c in "0ABCDEFGHIJKLMNOPQRSTUVWXYZ"]
def alphabeticalGroupKey(identifier, prefix):
    # derives a grouping key from an identifier, e.g. one with a "GL_" prefix
index = identifier.find(prefix)
if index < 0:
return -1
index += len(prefix)
key = ((identifier[index:])[:1]).upper()
    if not ("A" <= key <= "Z"):
key = '0'
return key
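To make the grouping logic concrete, here is a standalone sketch equivalent to `alphabeticalGroupKey` above (the identifiers used are hypothetical examples):

```python
def group_key(identifier, prefix):
    # mirrors alphabeticalGroupKey: take the first character after the
    # prefix, upper-cased; anything outside A-Z falls into the "0" bucket
    index = identifier.find(prefix)
    if index < 0:
        return -1
    key = identifier[index + len(prefix):][:1].upper()
    return key if "A" <= key <= "Z" else "0"
```

Identifiers whose first post-prefix character is a digit (such as `GL_2D`) land in the `"0"` bucket, so the generated groups cover every name exactly once.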
|
This is a 4.4% ABV wheat ale brewed with lemon peel, with natural flavors and pumpkin added. It is from the Traveler Beer Company in collaboration with the Boston Beer Company. Traveler Beer used to be House of Shandy.
The beer pours a slightly hazed amber in color. There is a short-lived head of off-white foam. The aroma is highly spiced, I mean this is like church incense. There is a lesser amount of pumpkin behind there. The taste moderates the spice, amps up the pumpkin, and lets the lemon peel through. The toasted wheat is behind it. The taste is much more balanced than the aroma. The beer drinks easy, moderately carbonated.
Welcome! This is a collection of information, pictures and tasting notes, from the low end to the high end and everything in between. These are my thoughts on beer. Everybody has different tastes, so I don't declare my thoughts on any beer to be definitive. You just have to keep trying for yourself and see what you like. If you have tried a beer that is talked about, leave your comments on it.
|
#!/usr/bin/env python3
# Copyright (c) 2015-2020 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test multisig RPCs"""
import binascii
import decimal
import itertools
import json
import os
from test_framework.blocktools import COINBASE_MATURITY
from test_framework.authproxy import JSONRPCException
from test_framework.descriptors import descsum_create, drop_origins
from test_framework.key import ECPubKey, ECKey
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import (
assert_raises_rpc_error,
assert_equal,
)
from test_framework.wallet_util import bytes_to_wif
class RpcCreateMultiSigTest(BitcoinTestFramework):
def set_test_params(self):
self.setup_clean_chain = True
self.num_nodes = 3
self.supports_cli = False
def skip_test_if_missing_module(self):
self.skip_if_no_wallet()
def get_keys(self):
self.pub = []
self.priv = []
node0, node1, node2 = self.nodes
for _ in range(self.nkeys):
k = ECKey()
k.generate()
self.pub.append(k.get_pubkey().get_bytes().hex())
self.priv.append(bytes_to_wif(k.get_bytes(), k.is_compressed))
self.final = node2.getnewaddress()
def run_test(self):
node0, node1, node2 = self.nodes
self.check_addmultisigaddress_errors()
self.log.info('Generating blocks ...')
node0.generate(149)
self.sync_all()
self.moved = 0
for self.nkeys in [3, 5]:
for self.nsigs in [2, 3]:
for self.output_type in ["bech32", "p2sh-segwit", "legacy"]:
self.get_keys()
self.do_multisig()
self.checkbalances()
# Test mixed compressed and uncompressed pubkeys
self.log.info('Mixed compressed and uncompressed multisigs are not allowed')
pk0 = node0.getaddressinfo(node0.getnewaddress())['pubkey']
pk1 = node1.getaddressinfo(node1.getnewaddress())['pubkey']
pk2 = node2.getaddressinfo(node2.getnewaddress())['pubkey']
# decompress pk2
pk_obj = ECPubKey()
pk_obj.set(binascii.unhexlify(pk2))
pk_obj.compressed = False
pk2 = binascii.hexlify(pk_obj.get_bytes()).decode()
node0.createwallet(wallet_name='wmulti0', disable_private_keys=True)
wmulti0 = node0.get_wallet_rpc('wmulti0')
# Check all permutations of keys because order matters apparently
for keys in itertools.permutations([pk0, pk1, pk2]):
# Results should be the same as this legacy one
legacy_addr = node0.createmultisig(2, keys, 'legacy')['address']
assert_equal(legacy_addr, wmulti0.addmultisigaddress(2, keys, '', 'legacy')['address'])
# Generate addresses with the segwit types. These should all make legacy addresses
assert_equal(legacy_addr, wmulti0.createmultisig(2, keys, 'bech32')['address'])
assert_equal(legacy_addr, wmulti0.createmultisig(2, keys, 'p2sh-segwit')['address'])
assert_equal(legacy_addr, wmulti0.addmultisigaddress(2, keys, '', 'bech32')['address'])
assert_equal(legacy_addr, wmulti0.addmultisigaddress(2, keys, '', 'p2sh-segwit')['address'])
self.log.info('Testing sortedmulti descriptors with BIP 67 test vectors')
with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data/rpc_bip67.json'), encoding='utf-8') as f:
vectors = json.load(f)
for t in vectors:
key_str = ','.join(t['keys'])
desc = descsum_create('sh(sortedmulti(2,{}))'.format(key_str))
assert_equal(self.nodes[0].deriveaddresses(desc)[0], t['address'])
sorted_key_str = ','.join(t['sorted_keys'])
sorted_key_desc = descsum_create('sh(multi(2,{}))'.format(sorted_key_str))
assert_equal(self.nodes[0].deriveaddresses(sorted_key_desc)[0], t['address'])
# Check that bech32m is currently not allowed
assert_raises_rpc_error(-5, "createmultisig cannot create bech32m multisig addresses", self.nodes[0].createmultisig, 2, self.pub, "bech32m")
def check_addmultisigaddress_errors(self):
if self.options.descriptors:
return
self.log.info('Check that addmultisigaddress fails when the private keys are missing')
addresses = [self.nodes[1].getnewaddress(address_type='legacy') for _ in range(2)]
assert_raises_rpc_error(-5, 'no full public key for address', lambda: self.nodes[0].addmultisigaddress(nrequired=1, keys=addresses))
for a in addresses:
# Importing all addresses should not change the result
self.nodes[0].importaddress(a)
assert_raises_rpc_error(-5, 'no full public key for address', lambda: self.nodes[0].addmultisigaddress(nrequired=1, keys=addresses))
# Bech32m address type is disallowed for legacy wallets
pubs = [self.nodes[1].getaddressinfo(addr)["pubkey"] for addr in addresses]
assert_raises_rpc_error(-5, "Bech32m multisig addresses cannot be created with legacy wallets", self.nodes[0].addmultisigaddress, 2, pubs, "", "bech32m")
def checkbalances(self):
node0, node1, node2 = self.nodes
node0.generate(COINBASE_MATURITY)
self.sync_all()
bal0 = node0.getbalance()
bal1 = node1.getbalance()
bal2 = node2.getbalance()
height = node0.getblockchaininfo()["blocks"]
assert 150 < height < 350
total = 149 * 50 + (height - 149 - 100) * 25
assert bal1 == 0
assert bal2 == self.moved
assert bal0 + bal1 + bal2 == total
def do_multisig(self):
node0, node1, node2 = self.nodes
if 'wmulti' not in node1.listwallets():
try:
node1.loadwallet('wmulti')
except JSONRPCException as e:
path = os.path.join(self.options.tmpdir, "node1", "regtest", "wallets", "wmulti")
if e.error['code'] == -18 and "Wallet file verification failed. Failed to load database path '{}'. Path does not exist.".format(path) in e.error['message']:
node1.createwallet(wallet_name='wmulti', disable_private_keys=True)
else:
raise
wmulti = node1.get_wallet_rpc('wmulti')
# Construct the expected descriptor
desc = 'multi({},{})'.format(self.nsigs, ','.join(self.pub))
if self.output_type == 'legacy':
desc = 'sh({})'.format(desc)
elif self.output_type == 'p2sh-segwit':
desc = 'sh(wsh({}))'.format(desc)
elif self.output_type == 'bech32':
desc = 'wsh({})'.format(desc)
desc = descsum_create(desc)
msig = node2.createmultisig(self.nsigs, self.pub, self.output_type)
madd = msig["address"]
mredeem = msig["redeemScript"]
assert_equal(desc, msig['descriptor'])
if self.output_type == 'bech32':
assert madd[0:4] == "bcrt" # actually a bech32 address
# compare against addmultisigaddress
msigw = wmulti.addmultisigaddress(self.nsigs, self.pub, None, self.output_type)
maddw = msigw["address"]
mredeemw = msigw["redeemScript"]
assert_equal(desc, drop_origins(msigw['descriptor']))
        # addmultisigaddress and createmultisig work the same
assert maddw == madd
assert mredeemw == mredeem
txid = node0.sendtoaddress(madd, 40)
tx = node0.getrawtransaction(txid, True)
vout = [v["n"] for v in tx["vout"] if madd == v["scriptPubKey"]["address"]]
assert len(vout) == 1
vout = vout[0]
scriptPubKey = tx["vout"][vout]["scriptPubKey"]["hex"]
value = tx["vout"][vout]["value"]
prevtxs = [{"txid": txid, "vout": vout, "scriptPubKey": scriptPubKey, "redeemScript": mredeem, "amount": value}]
node0.generate(1)
outval = value - decimal.Decimal("0.00001000")
rawtx = node2.createrawtransaction([{"txid": txid, "vout": vout}], [{self.final: outval}])
prevtx_err = dict(prevtxs[0])
del prevtx_err["redeemScript"]
assert_raises_rpc_error(-8, "Missing redeemScript/witnessScript", node2.signrawtransactionwithkey, rawtx, self.priv[0:self.nsigs-1], [prevtx_err])
# if witnessScript specified, all ok
prevtx_err["witnessScript"] = prevtxs[0]["redeemScript"]
node2.signrawtransactionwithkey(rawtx, self.priv[0:self.nsigs-1], [prevtx_err])
# both specified, also ok
prevtx_err["redeemScript"] = prevtxs[0]["redeemScript"]
node2.signrawtransactionwithkey(rawtx, self.priv[0:self.nsigs-1], [prevtx_err])
# redeemScript mismatch to witnessScript
prevtx_err["redeemScript"] = "6a" # OP_RETURN
assert_raises_rpc_error(-8, "redeemScript does not correspond to witnessScript", node2.signrawtransactionwithkey, rawtx, self.priv[0:self.nsigs-1], [prevtx_err])
# redeemScript does not match scriptPubKey
del prevtx_err["witnessScript"]
assert_raises_rpc_error(-8, "redeemScript/witnessScript does not match scriptPubKey", node2.signrawtransactionwithkey, rawtx, self.priv[0:self.nsigs-1], [prevtx_err])
# witnessScript does not match scriptPubKey
prevtx_err["witnessScript"] = prevtx_err["redeemScript"]
del prevtx_err["redeemScript"]
assert_raises_rpc_error(-8, "redeemScript/witnessScript does not match scriptPubKey", node2.signrawtransactionwithkey, rawtx, self.priv[0:self.nsigs-1], [prevtx_err])
rawtx2 = node2.signrawtransactionwithkey(rawtx, self.priv[0:self.nsigs - 1], prevtxs)
rawtx3 = node2.signrawtransactionwithkey(rawtx2["hex"], [self.priv[-1]], prevtxs)
self.moved += outval
tx = node0.sendrawtransaction(rawtx3["hex"], 0)
blk = node0.generate(1)[0]
assert tx in node0.getblock(blk)["tx"]
txinfo = node0.getrawtransaction(tx, True, blk)
self.log.info("n/m=%d/%d %s size=%d vsize=%d weight=%d" % (self.nsigs, self.nkeys, self.output_type, txinfo["size"], txinfo["vsize"], txinfo["weight"]))
wmulti.unloadwallet()
if __name__ == '__main__':
RpcCreateMultiSigTest().main()
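The descriptor construction in `do_multisig` follows a simple wrapping rule per output type: a bare `multi()` expression is wrapped in `sh()`, `sh(wsh())`, or `wsh()` before the checksum is appended. A standalone sketch of just that wrapping step (function name is illustrative; the `descsum_create` checksum is omitted):

```python
def wrap_multisig_descriptor(nsigs, pubkeys, output_type):
    """Wrap a bare multi() descriptor the way each address type requires."""
    desc = 'multi({},{})'.format(nsigs, ','.join(pubkeys))
    if output_type == 'legacy':
        return 'sh({})'.format(desc)       # P2SH
    elif output_type == 'p2sh-segwit':
        return 'sh(wsh({}))'.format(desc)  # P2SH-wrapped witness script
    elif output_type == 'bech32':
        return 'wsh({})'.format(desc)      # native segwit
    return desc
```

The test above builds this string itself and asserts it equals the `descriptor` field returned by `createmultisig`.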
|
Focus on Jobs is dedicated to delivering professional vocational rehabilitation services. We have a proven track record in securing new and lasting employment for those out of work due to injury.
Occupational Therapist Fiona Lilley can help unleash your child's potential. Messy, illegible handwriting? Difficulty with schoolwork? Disorganised? Homework a stressful time? Contact Fiona now. Proven clinical assessment can pinpoint the difficulties and establish a program to help.
The New Zealand Association of Hand Therapists (NZAHT) Inc is New Zealand's only professional association representing Hand Therapists. It was established in the 1980s to provide support for its members through education, professional development, networking and representation at regional and national levels. It also acts as a central referral point for doctors and members of the public seeking the services of Hand Therapists in specific geographic locations.
Invacare is the world’s leading manufacturer of home care equipment. Invacare New Zealand supplies private homes, aged care and nursing homes, clinics, and hospitals with mobility aids, health care and therapeutic products with products for sale and hire.
|
import wave, struct, time
import numpy as np
import scipy.io.wavfile as wav
import scipy.fftpack as fft
def openWavFile(fileName):
data = wav.read(fileName)
ssize = data[1].shape[0]
nparray = data[1].astype('float32')
return nparray
def stftWindowFunction(xPhi, xMag):
oldShapePhi = xPhi.shape
oldShapeMag = xMag.shape
xPhi = np.reshape(xPhi, (-1, xPhi.shape[-1]))
xMag = np.reshape(xMag, (-1, xMag.shape[-1]))
retValPhi = []
retValMag = []
for xValPhi, xValMag in zip(xPhi, xMag):
w = np.hanning(xValPhi.shape[0])
phiObj = np.zeros(xValPhi.shape[0], dtype=complex)
phiObj.real, phiObj.imag = np.cos(xValPhi), np.sin(xValPhi)
xIfft = np.fft.ifft(xValMag * phiObj)
wFft = np.fft.fft(w*xIfft.real)
retValPhi.append(np.angle(wFft))
retValMag.append(np.abs(wFft))
retValMag = np.reshape(retValMag, oldShapeMag)
retValPhi = np.reshape(retValPhi, oldShapePhi)
return retValPhi, retValMag
def stft(x, framesz, hop):
framesamp = int(framesz)
hopsamp = int(hop)
#w = np.hanning(framesamp)
X = np.asarray([np.fft.fft(x[i:i+framesamp]) for i in range(0, len(x) - framesamp, hopsamp)])
xPhi = np.angle(X)
xMag = np.abs(X)
return xPhi, xMag
def istft(X, fs, hop, origs):
x = np.zeros(origs)
framesamp = X.shape[1]
hopsamp = int(hop*fs)
for n,i in enumerate(range(0, len(x)-framesamp, hopsamp)):
x[i:i+framesamp] += np.real(np.fft.ifft(X[n]))
return x
def waveToSTFT(waveData, sampCount, blkSize, hop):
initLen = len(waveData)
sampSize = int(initLen/sampCount)
phiObj = []
magObj = []
for sInd in xrange(0, sampCount):
tempTmSpls = []
sampBlk = waveData[sInd * sampSize:(sInd + 1) * sampSize]
stftPhi, stftMag = stft(sampBlk, blkSize, hop)
phiObj.append(stftPhi)
magObj.append(stftMag)
return ([], np.asarray(phiObj), np.asarray(magObj))
def waveToMFCC(waveData, sampCount, blkCount=False, blkSize=False):
waveLen = len(waveData)
sampSize = int(waveLen/sampCount)
retTmSpl = []
if blkSize:
blkCount = sampSize/blkSize
elif blkCount:
blkSize = sampSize/blkCount
else:
return False
for sInd in xrange(0, sampCount):
tempTmSpls = []
sampBlk = waveData[sInd * sampSize:(sInd + 1) * sampSize]
for bInd in xrange(0, blkCount):
tempBlk = sampBlk[bInd * blkSize:(bInd + 1) * blkSize]
complexSpectrum = np.fft.fft(tempBlk)
powerSpectrum = np.abs(complexSpectrum) ** 2
filteredSpectrum = powerSpectrum
logSpectrum = np.log(filteredSpectrum)
dctSpectrum = fft.dct(logSpectrum, type=2)
tempTmSpls.append(dctSpectrum)
retTmSpl.append(tempTmSpls)
retTmSpl = np.asarray(retTmSpl)
return retTmSpl
def waveToBlock(waveData, sampCount, blkCount=False, blkSize=False, olapf=1, shift=False):
if shift:
waveData = np.concatenate((waveData[shift:], waveData[:shift]))
waveLen = len(waveData)
sampSize = int(waveLen/sampCount)
retPhase = []
retMag = []
retTmSpl = []
if blkSize and blkCount:
tlen = sampCount * blkCount * blkSize
sampSize = blkCount * blkSize
diff = tlen - waveLen
if diff > 0:
waveData = np.pad(waveData, (0,diff), 'constant', constant_values=0)
elif blkSize:
blkCount = sampSize/blkSize
elif blkCount:
blkSize = sampSize/blkCount
else:
return False
for sInd in xrange(0, sampCount):
tempPhases = []
tempMags = []
tempTmSpls = []
sampBlk = waveData[sInd * sampSize:(sInd + 1) * sampSize]
for bInd in xrange(0, blkCount - (olapf - 1)):
tempBlk = sampBlk[bInd * blkSize:(bInd + olapf) * blkSize]
tempFFT = np.fft.fft(tempBlk)
tempPhase = np.angle(tempFFT)
tempMagn = np.abs(tempFFT)
tempPhases.append(tempPhase)
tempMags.append(tempMagn)
tempTmSpls.append(tempBlk)
retPhase.append(tempPhases)
retMag.append(tempMags)
retTmSpl.append(tempTmSpls)
retPhase = np.asarray(retPhase)
retTmSpl = np.asarray(retTmSpl)
retMag = np.asarray(retMag)
return (retTmSpl, retPhase, retMag)
def sectionFeatureScaling(data):
dataShape = data.shape
flatData = np.copy(data).flatten()
flatMax = np.max(flatData)
flatMin = np.min(flatData)
scaledData = (flatData - flatMin)/(flatMax- flatMin)
scaledData = np.reshape(scaledData, dataShape)
return scaledData, flatMax, flatMin
def blockFeatureScaling(kData):
data = np.copy(kData)
maxVal = np.max(np.max(data, axis=0), axis=0)
minVal = np.min(np.min(data, axis=0), axis=0)
scaledData = (data - minVal)/(maxVal- minVal)
return scaledData, maxVal, minVal
def sectionNormalize(data):
dataShape = data.shape
flatData = np.copy(data).flatten()
flatMean = np.mean(flatData)
flatStd = np.std(flatData)
scaledData = (flatData - flatMean)/flatStd
scaledData = np.reshape(scaledData, dataShape)
return scaledData, flatMean, flatStd
def blockNormalize(data):
dataStartShape = data.shape
if len(data.shape) == 2:
data = np.reshape(data, (1, data.shape[0], data.shape[1]))
if len(data.shape) == 1:
data = np.reshape(data, (1, 1, data.shape[0]))
npNorm = np.zeros_like(data)
xCount = data.shape[0]
yCount = data.shape[1]
for sectInd in xrange(xCount):
for blockInd in xrange(yCount):
npNorm[sectInd][blockInd] = data[sectInd][blockInd]
mean = np.mean(np.mean(npNorm, axis=0), axis=0)
std = np.sqrt(np.mean(np.mean(np.abs(npNorm-mean)**2, axis=0), axis=0))
std = np.maximum(1.0e-8, std)
norm = npNorm.copy()
norm[:] -= mean
norm[:] /= std
return norm, mean, std
def extractSTFTWaveData(wavData, sampCount, blkSize=False, returnObj="all", olapf=100):#(waveData, sampCount, blkCount, blkSize, hop):
#wavData = openWavFile(fileName)
#wavObj, wavPhi, wavMag = waveToBlock(wavData, sampCount, blkCount=blkCount, blkSize=blkSize, olapf=olapf, shift=False)
wavObj, wavPhi, wavMag = waveToSTFT(wavData, sampCount, blkSize=blkSize, hop=olapf)
#mfccObj = waveToMFCC(wavData, sampCount, blkCount, blkSize)
phiWav, meanPhiWav, stdPhiWav = blockNormalize(wavPhi)
magWav, meanMagWav, stdMagWav = blockNormalize(wavMag)
#MfccWav, meanMfcc, stdMfcc = blockNormalize(mfccObj)
if returnObj == "phase":
return phiWav, meanPhiWav, stdPhiWav
elif returnObj == "magnitude":
return magWav, meanMagWav, stdMagWav
else:
        return phiWav, meanPhiWav, stdPhiWav, magWav, meanMagWav, stdMagWav
def extractWaveData(wavData, sampCount, blkCount=False, blkSize=False, returnObj="all", olapf=1, shift=False):
#wavData = openWavFile(fileName)
wavObj, wavPhi, wavMag = waveToBlock(wavData, sampCount, blkCount=blkCount, blkSize=blkSize, olapf=olapf, shift=False)
#mfccObj = waveToMFCC(wavData, sampCount, blkCount, blkSize)
phiWav, meanPhiWav, stdPhiWav = blockNormalize(wavPhi)
magWav, meanMagWav, stdMagWav = blockNormalize(wavMag)
#MfccWav, meanMfcc, stdMfcc = blockNormalize(mfccObj)
if returnObj == "phase":
return phiWav, meanPhiWav, stdPhiWav
elif returnObj == "magnitude":
return magWav, meanMagWav, stdMagWav
else:
        return phiWav, meanPhiWav, stdPhiWav, magWav, meanMagWav, stdMagWav
def blockShift(data, shift=1):
retObj = []
for sectInd in xrange(data.shape[0]):
retObj.append( np.concatenate((data[sectInd][shift:], data[sectInd][0:shift])) )
return np.reshape(retObj, data.shape)
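The `stft` function above frames the signal and takes an FFT per frame without applying a window, returning phase and magnitude separately; a frame is then recoverable from its polar form as `mag * exp(1j * phase)`. A minimal, self-contained sketch of that frame-then-FFT round trip (signal parameters are illustrative):

```python
import numpy as np

# Synthetic signal: a 1 kHz sine sampled at 8 kHz.
fs = 8000
t = np.arange(0, 0.5, 1.0 / fs)
signal = np.sin(2 * np.pi * 1000 * t)

# Frame the signal and take an FFT per frame, as stft() above does (no window).
framesz, hop = 256, 128
X = np.asarray([np.fft.fft(signal[i:i + framesz])
                for i in range(0, len(signal) - framesz, hop)])
phase, magnitude = np.angle(X), np.abs(X)

# Each frame is recoverable from its polar form: magnitude * exp(1j * phase).
rebuilt = np.real(np.fft.ifft(magnitude[0] * np.exp(1j * phase[0])))
```

This is the same decomposition `waveToSTFT` stores in its `phiObj` and `magObj` arrays.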
|
JNN 01 May 2014 Tehran : Iran and Russia are negotiating a power deal worth up to $10 billion in the face of increasing US financial alienation. The construction of new thermal and hydroelectric plants and a transmission network are in the works.
Iran’s Energy Minister Hamid Chitchian met his Russian counterpart Aleksandr Novak in Tehran on Sunday in order to discuss the potential power deals, according to Iran’s Mehr news agency.
Moscow has additionally been discussing with Tehran the trade of 500,000 barrels a day of Iranian oil for Russian goods. The protracted deal, first reported at the beginning of April, could be worth as much as $20 billion, and has rattled Washington because it could bring Iran's crude exports above one million barrels a day – the threshold agreed upon in the nuclear deal between the P5+1 powers – US, Britain, France, China, Russia and Germany – and Iran.
Moscow and Tehran are far from finalizing the contract, according to Russian business daily Kommersant, which first broke the news. Nonetheless, the Obama administration has expressed distaste at the reports.
Tehran’s ambassador to Moscow Mehdi Sanaei said on Friday that the implementation of Iran-Russia energy agreements holds the key to economic expansion.
Sanaei underlined the importance of promoting Iran-Russia cooperation and called for the implementation of oil, gas and electricity deals, according to Press TV.
Russia-Iran trade is currently worth $5 billion a year, but economists say the two countries can at least quadruple the volume of trade.
Earlier this month, Iranian Oil Minister Bijan Namdar Zanganeh said the Islamic Republic is determined to raise the volume of its “economic transactions” with Russia.
This entry was posted in Asia Pacific, Europe, Iran News, Middle East and tagged Hydro Electric Power Generation, iran, Power Deal, Russia, Thermal Power Plants. Bookmark the permalink.
|
"""Test tools.
Attributes:
easy_clang_complete (module): this plugin module
SublBridge (SublBridge): class for subl bridge
"""
import imp
from os import path
from EasyClangComplete.plugin.settings import settings_manager
from EasyClangComplete.plugin.utils.subl import subl_bridge
from EasyClangComplete.tests.gui_test_wrapper import GuiTestWrapper
imp.reload(settings_manager)
imp.reload(subl_bridge)
SettingsManager = settings_manager.SettingsManager
SublBridge = subl_bridge.SublBridge
PosStatus = subl_bridge.PosStatus
class test_tools_command(GuiTestWrapper):
"""Test sublime commands."""
def set_text(self, string):
"""Set text to a view.
Args:
string (str): some string to set
"""
self.view.run_command("insert", {"characters": string})
def move(self, dist, forward=True):
"""Move the cursor by distance.
Args:
dist (int): pixels to move
forward (bool, optional): forward or backward in the file
"""
for _ in range(dist):
self.view.run_command("move",
{"by": "characters", "forward": forward})
def test_next_line(self):
"""Test returning next line."""
self.set_up_view()
self.set_text("hello\nworld!")
self.move(10, forward=False)
next_line = SublBridge.next_line(self.view)
self.assertEqual(next_line, "world!")
def test_wrong_triggers(self):
"""Test that we don't complete on numbers and wrong triggers."""
self.set_up_view(path.join(path.dirname(__file__),
'test_files',
'test_wrong_triggers.cpp'))
# Load the completions.
manager = SettingsManager()
settings = manager.user_settings()
# Check the current cursor position is completable.
self.assertEqual(self.get_row(2), " a > 2.")
# check that '>' does not trigger completions
pos = self.view.text_point(2, 5)
current_word = self.view.substr(self.view.word(pos))
self.assertEqual(current_word, "> ")
status = SublBridge.get_pos_status(pos, self.view, settings)
# Verify that we got the expected completions back.
self.assertEqual(status, PosStatus.WRONG_TRIGGER)
# check that 'a' does not trigger completions
pos = self.view.text_point(2, 3)
current_word = self.view.substr(self.view.word(pos))
self.assertEqual(current_word, "a")
status = SublBridge.get_pos_status(pos, self.view, settings)
# Verify that we got the expected completions back.
self.assertEqual(status, PosStatus.COMPLETION_NOT_NEEDED)
# check that '2.' does not trigger completions
pos = self.view.text_point(2, 8)
current_word = self.view.substr(self.view.word(pos))
self.assertEqual(current_word, ".\n")
status = SublBridge.get_pos_status(pos, self.view, settings)
# Verify that we got the expected completions back.
self.assertEqual(status, PosStatus.WRONG_TRIGGER)
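The tests above rely on Sublime's `view.text_point(row, col)` to turn a (row, col) pair into a character offset before querying the word under the cursor. Outside of Sublime, the same mapping can be sketched for a plain string (the helper name is hypothetical, not part of the Sublime API):

```python
def text_point(text, row, col):
    """Offset of (row, col) in text, counting one '\n' per preceding line,
    mirroring what Sublime's view.text_point does for a buffer."""
    lines = text.split("\n")
    return sum(len(line) + 1 for line in lines[:row]) + col

src = "hello\nworld!"
offset = text_point(src, 1, 0)  # start of "world!"
```

This is only the coordinate arithmetic; the real trigger logic additionally inspects the word and settings via `SublBridge.get_pos_status`.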
|
Description This image shows an empty pool in the basement of the Johnson Student Union. The space was eventually remodeled into offices and a gathering area called "The Dive." Notice the peeling paint on the back wall.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (C) Grigoriy A. Armeev, 2015
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License v2 for more details.
# Cheers, Satary.
#
from PyQt4 import QtGui,QtCore
import sys, csv
class TableWidget(QtGui.QTableWidget):
def __init__(self,parent=None):
super(TableWidget, self).__init__(parent)
# if you want to use parent's methods and be ugly as I do, use parent =)
self.parent=parent
self.clip = QtGui.QApplication.clipboard()
self.horizontalHeader().setMovable(True)
self.verticalHeader().setMovable(True)
self.horizontalHeader().setDefaultSectionSize(60)
self.setMinimumWidth(250)
self.setMinimumHeight(250)
self.setSizePolicy(QtGui.QSizePolicy.Expanding,QtGui.QSizePolicy.Minimum)
self.rowOrder=[]
self.columnOrder=[]
self.verticalHeader().sectionMoved.connect( self.getOrders)
self.horizontalHeader().sectionMoved.connect( self.getOrders)
def buildFromDict(self,inDict,rowOrder=[],columnOrder=[]):
self.setRowCount(0)
self.setColumnCount(0)
# finding all rows and cols in dict
newRow = []
newCol = []
for row in inDict:
if not(row in newRow):
newRow.append(row)
for col in inDict[row]:
if not(col in newCol):
newCol.append(col)
# adding new rows and cols in dict
sortNewRow=[]
sortNewCol=[]
for row in inDict:
if not(row in rowOrder):
sortNewRow.append(row)
for col in inDict[row]:
if not(col in columnOrder):
sortNewCol.append(col)
sortNewRow.sort()
sortNewCol.sort()
[rowOrder.append(row) for row in sortNewRow]
[columnOrder.append(col) for col in sortNewCol]
# creating ordered list of not empty values
visibleRows = []
visibleCols = []
for row in rowOrder:
if row in newRow:
visibleRows.append(row)
for col in columnOrder:
if col in newCol:
visibleCols.append(col)
        #drawing table and assigning row and column names
rows=[]
columns=[]
for row in visibleRows:
#if row in inDict:
rows.append(row)
self.insertRow(self.rowCount())
self.setVerticalHeaderItem(self.rowCount()-1, QtGui.QTableWidgetItem(row))
for col in visibleCols:
#if (col in inDict[row]):
if (not(col in columns)):
columns.append(col)
self.insertColumn(self.columnCount())
self.setHorizontalHeaderItem(self.columnCount()-1,QtGui.QTableWidgetItem(col))
        #assigning values
for row in rows:
for col in columns:
try:
item=QtGui.QTableWidgetItem(str(inDict[row][col]))
item.setFlags(QtCore.Qt.ItemIsSelectable | QtCore.Qt.ItemIsEnabled)
self.setItem(rows.index(row),columns.index(col),item)
except:
pass
self.verticalHeader().setDefaultSectionSize(self.verticalHeader().minimumSectionSize())
self.rowOrder = rowOrder #rows
self.columnOrder = columnOrder #columns
def getOrders(self,event=None):
#try:
rowNames = [str(self.verticalHeaderItem(i).text()) for i in range(self.rowCount())]
rowIndx = [self.visualRow(i) for i in range(self.rowCount())]
rowOrder = [x for (y,x) in sorted(zip(rowIndx,rowNames))]
for row in self.rowOrder:
if not(row in rowOrder):
rowOrder.append(row)
self.rowOrder = rowOrder
colNames = [str(self.horizontalHeaderItem(i).text()) for i in range(self.columnCount())]
colIndx = [self.visualColumn(i) for i in range(self.columnCount())]
columnOrder = [x for (y,x) in sorted(zip(colIndx,colNames))]
for col in self.columnOrder:
if not(col in columnOrder):
columnOrder.append(col)
self.columnOrder = columnOrder
def keyPressEvent(self, e):
if (e.modifiers() & QtCore.Qt.ControlModifier):
if e.key() == QtCore.Qt.Key_C:
self.copySelectionToClipboard()
def contextMenuEvent(self, pos):
menu = QtGui.QMenu()
copyAction = menu.addAction("Copy")
action = menu.exec_(QtGui.QCursor.pos())
if action == copyAction:
self.copySelectionToClipboard()
def handleSave(self,path):
rowLog = range(self.rowCount())
rowIndx = [self.visualRow(i) for i in rowLog]
rowVis = [x for (y,x) in sorted(zip(rowIndx,rowLog))]
colLog = range(self.columnCount())
colIndx = [self.visualColumn(i) for i in colLog]
colVis = [x for (y,x) in sorted(zip(colIndx,colLog))]
with open(unicode(path), 'wb') as stream:
writer = csv.writer(stream)
rowdata = []
rowdata.append("")
for column in colVis:
rowdata.append(unicode(self.horizontalHeaderItem(column).text()).encode('utf8'))
writer.writerow(rowdata)
for row in rowVis:
rowdata = []
rowdata.append(unicode(self.verticalHeaderItem(row).text()).encode('utf8'))
for column in colVis:
item = self.item(row, column)
if item is not None:
rowdata.append(
unicode(item.text()).encode('utf8'))
else:
rowdata.append('')
writer.writerow(rowdata)
def copySelectionToClipboard(self):
selected = self.selectedRanges()
s = ""
for r in xrange(selected[0].topRow(),selected[0].bottomRow()+1):
for c in xrange(selected[0].leftColumn(),selected[0].rightColumn()+1):
try:
s += str(self.item(r,c).text()) + "\t"
except AttributeError:
s += "\t"
s = s[:-1] + "\n" #eliminate last '\t'
self.clip.setText(s)
def main():
app = QtGui.QApplication(sys.argv)
ex = TableWidget()
ex.show()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
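`buildFromDict` expects a nested dict of the form `{row_name: {col_name: value}}` and merges newly seen row/column names into the remembered orders (known order first, unseen names sorted). That union-with-order pass can be sketched in plain Python, with no Qt dependency (function name is illustrative):

```python
def collect_axes(in_dict, row_order=None, col_order=None):
    """Return (rows, cols): names already in the given order come first,
    then any names new to this dict, sorted alphabetically."""
    row_order = list(row_order or [])
    col_order = list(col_order or [])
    new_rows = sorted(r for r in in_dict if r not in row_order)
    new_cols = sorted({c for r in in_dict for c in in_dict[r]} - set(col_order))
    return row_order + new_rows, col_order + new_cols

data = {"b": {"y": 2}, "a": {"x": 1, "y": 3}}
rows, cols = collect_axes(data, row_order=["b"])
```

The widget then only displays rows/columns actually present in the current dict, while the full remembered order persists across rebuilds.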
|
"Amy is an outstanding dental coach! She kept us on task, helped us problem-solve key practice issues and most importantly, gave us tools to implement the next day. As a dental coach, she has the keen ability to discern the problem, deliver feedback and help create a workable solution to most any problem a practice can face. I would highly recommend her to any practice that feels it’s not reached its potential."
"Amy Kirsch has been our dental consultant for almost 15 years. We feel she has been very helpful in supporting us as we have evolved over the years. We know we can count on Amy for up-to-date information on systems, hiring new dental team members, leadership, and all team issues. She has helped us learn how to work more efficiently and how to increase our level of care to our dental patients. Amy has always been there for us when we have a problem."
"Amy Kirsch has provided us with the building blocks to allow us to build a successful dental practice. She has helped my team and I implement important customer service skill and business systems. Through consistent monitoring and coaching, they have empowered us achieve our goals."
"Amy has created systems in our dental practice that helps us function more efficiently. They have improved our customer service through better communication skills, starting from the initial phone call to financial arrangements and scheduling. We have had a significant increase in case acceptance and production."
"Amy Kirsch has been committed to helping my dental practice achieve new levels of production, collection and profitability. Her endless energy and exceptional coaching skills have allowed us to become a more successful dental practice."
|
#!/usr/bin/env python
"""
Code to initilise the MultiQC logging
"""
import coloredlogs
import logging
import os
import shutil
import sys
import tempfile
from multiqc.utils import config, util_functions, mqc_colour
LEVELS = {0: "INFO", 1: "DEBUG"}
log_tmp_dir = None
log_tmp_fn = "/dev/null"
def init_log(logger, loglevel=0, no_ansi=False):
"""
Initializes logging.
Prints logs to console with level defined by loglevel
Also prints verbose log to the multiqc data directory if available.
(multiqc_data/multiqc.log)
Args:
loglevel (str): Determines the level of the log output.
"""
# File for logging
global log_tmp_dir, log_tmp_fn
log_tmp_dir = tempfile.mkdtemp()
log_tmp_fn = os.path.join(log_tmp_dir, "multiqc.log")
# Logging templates
debug_template = "[%(asctime)s] %(name)-50s [%(levelname)-7s] %(message)s"
info_template = "|%(module)18s | %(message)s"
# Base level setup
logger.setLevel(getattr(logging, "DEBUG"))
# Automatically set no_ansi if not a tty terminal
if not no_ansi:
if not sys.stderr.isatty() and not force_term_colors():
no_ansi = True
# Set up the console logging stream
console = logging.StreamHandler()
console.setLevel(getattr(logging, loglevel))
level_styles = coloredlogs.DEFAULT_LEVEL_STYLES
level_styles["debug"] = {"faint": True}
field_styles = coloredlogs.DEFAULT_FIELD_STYLES
field_styles["module"] = {"color": "blue"}
if loglevel == "DEBUG":
if no_ansi:
console.setFormatter(logging.Formatter(debug_template))
else:
console.setFormatter(
coloredlogs.ColoredFormatter(fmt=debug_template, level_styles=level_styles, field_styles=field_styles)
)
else:
if no_ansi:
console.setFormatter(logging.Formatter(info_template))
else:
console.setFormatter(
coloredlogs.ColoredFormatter(fmt=info_template, level_styles=level_styles, field_styles=field_styles)
)
logger.addHandler(console)
# Now set up the file logging stream if we have a data directory
file_handler = logging.FileHandler(log_tmp_fn, encoding="utf-8")
file_handler.setLevel(getattr(logging, "DEBUG")) # always DEBUG for the file
file_handler.setFormatter(logging.Formatter(debug_template))
logger.addHandler(file_handler)
def move_tmp_log(logger):
"""Move the temporary log file to the MultiQC data directory
if it exists."""
try:
# https://stackoverflow.com/questions/15435652/python-does-not-release-filehandles-to-logfile
logging.shutdown()
shutil.copy(log_tmp_fn, os.path.join(config.data_dir, "multiqc.log"))
os.remove(log_tmp_fn)
util_functions.robust_rmtree(log_tmp_dir)
except (AttributeError, TypeError, IOError):
pass
def get_log_stream(logger):
"""
Returns a stream to the root log file.
If there is no logfile return the stderr log stream
Returns:
A stream to the root log file or stderr stream.
"""
file_stream = None
log_stream = None
for handler in logger.handlers:
if isinstance(handler, logging.FileHandler):
file_stream = handler.stream
else:
log_stream = handler.stream
if file_stream:
return file_stream
return log_stream
def force_term_colors():
"""
Check if any environment variables are set to force Rich to use coloured output
"""
if os.getenv("GITHUB_ACTIONS") or os.getenv("FORCE_COLOR") or os.getenv("PY_COLORS"):
return True
return None
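The core of `init_log` is attaching two handlers at different levels to one logger: a console stream that stays at the user's level and a file stream that always captures DEBUG. The logger's own base level must be the most verbose of the two, or records are dropped before any handler sees them. The pattern in isolation (names and formats are illustrative, not MultiQC's exact ones):

```python
import logging
import os
import tempfile

logger = logging.getLogger("demo_two_handler_logger")
logger.setLevel(logging.DEBUG)  # base level must be the most verbose handler's

log_path = os.path.join(tempfile.mkdtemp(), "demo.log")

console = logging.StreamHandler()
console.setLevel(logging.INFO)        # console hides DEBUG chatter
file_handler = logging.FileHandler(log_path, encoding="utf-8")
file_handler.setLevel(logging.DEBUG)  # file keeps everything
file_handler.setFormatter(
    logging.Formatter("[%(asctime)s] %(name)s [%(levelname)s] %(message)s"))

logger.addHandler(console)
logger.addHandler(file_handler)

logger.debug("only in the file")
logger.info("in the file and on the console")
file_handler.close()
```

`move_tmp_log` then relocates the file handler's output into `multiqc_data/` once that directory exists.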
|
Added support for MP4 E-AC3.
Added information fields %_subsong% (subsong index) and %_subsong_count% (number of subsongs).
Compatibility improvements for raw AAC, which now also supports reading and writing of ID3v2 tags.
Fixed an occasional runtime error when pasting metadata.
|
# ########################## Copyrights and License #############################
# #
# Copyright 2016 Yang Fang <yangfangscu@gmail.com> #
# #
# This file is part of PhySpeTree. #
# https://xiaofeiyangyang.github.io/physpetools/ #
# #
# PhySpeTree is free software: you can redistribute it and/or modify it under #
# the terms of the GNU Lesser General Public License as published by the Free #
# Software Foundation, either version 3 of the License, or (at your option) #
# any later version. #
# #
# PhySpeTree is distributed in the hope that it will be useful, but WITHOUT ANY #
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
# details. #
# #
# You should have received a copy of the GNU Lesser General Public License #
# along with PhySpeTree. If not, see <http://www.gnu.org/licenses/>. #
# #
# ###############################################################################
"""
This module retrieves highly conserved proteins and downloads them from the KEGG database
"""
import shutil
import glob
import ftplib
import os
import sqlite3
import time
from physpetool.database.dbpath import getlocaldbpath
from physpetool.phylotree.log import getLogging
from physpetool.tools.keggapi import getprotein
logretrieveprotein = getLogging('KEGG INDEX DB')
KEGGDB = "KEGG_DB_3.0.db"
def getspecies(spelist, colname):
"""
Get species protein index for DB
:param name: a list contain abbreviation species nam
:param colname: a list contain colname of DB
:return: a list contain protein index can be retrieved and a match ko list (is a ko id list)
"""
dbpath = getlocaldbpath()
db = os.path.join(dbpath, KEGGDB)
relist = []
match_ko_name = []
conn = sqlite3.connect(db)
conn.text_factory = str
c = conn.cursor()
if len(spelist) >= 1000:
sp = splist(spelist,500)
else:
sp = [spelist]
for ko in colname:
tem_reslist = []
tem_none = 0
for line in sp:
connect = "' OR NAME = '".join(line)
query = "SELECT " + ko + " FROM proindex WHERE NAME = '" + connect + "'"
c.execute(query)
ids = list(c.fetchall())
idslist = [str(x[0]) for x in ids]
num_none = len([x for x in idslist if x == 'None'])
tem_none += num_none
tem_reslist.extend(idslist)
if tem_none != len(tem_reslist):
relist.append(tem_reslist)
match_ko_name.append(ko)
c.close()
return relist, match_ko_name
def getcolname():
"""get BD colnames"""
dbpath = getlocaldbpath()
db = os.path.join(dbpath, KEGGDB)
conn = sqlite3.connect(db)
conn.text_factory = str
c = conn.cursor()
c.execute("SELECT * FROM proindex")
col_name_list = [tuple[0] for tuple in c.description]
c.close()
return col_name_list[2:]
def splist(l, s):
    """split a list into sub-lists of at most s items"""
    return [l[i:i + s] for i in range(0, len(l), s)]
def retrieveprotein(proindexlist, outpath, matchlist, spelist, local_db):
"""
Retrieve proteins form Kegg DB
:param proindexlist: a list contain protein index
:param outpath: user input outpath
:return: retrieve protein path
"""
timeformat = '%Y%m%d%H%M%S'
timeinfo = str(time.strftime(timeformat))
subdir = 'temp/conserved_protein' + timeinfo
dirname = os.path.dirname(outpath)
dirname = os.path.join(dirname, subdir)
if not os.path.exists(dirname):
os.makedirs(dirname)
fasta = {}
p = 1
# get hcp proteins form ftp server
if local_db == "":
connect = ftplib.FTP("173.255.208.244")
connect.login('anonymous')
connect.cwd('/pub/databasehcp')
for line in spelist:
w_file = dirname + "/" + line + ".fa"
fw_ = open(w_file, 'ab')
retrievename = line + '.fasta'
remoteFileName = 'RETR ' + os.path.basename(retrievename)
connect.retrbinary(remoteFileName, fw_.write)
fw_.write(b'\n')
fw_.close()
logretrieveprotein.info("Retrieve " + line + " highly conserved proteins completed")
# read get sequences
with open(w_file, 'r') as f:
for line in f:
if line != "\n":
tem = line.strip()
if tem[0] == '>':
header = tem[1:]
else:
sequence = tem
fasta[header] = fasta.get(header, '') + sequence
connect.quit()
# get protein sequence from local
else:
for line in spelist:
file_name = line +".fasta"
file_name_new = line + ".fa"
abb_data_path = os.path.join(local_db,file_name)
abb_data_path_new = os.path.join(dirname,file_name_new)
shutil.copyfile(abb_data_path,abb_data_path_new)
logretrieveprotein.info("Retrieve " + line + " highly conserved proteins completed")
# read get sequences
with open(abb_data_path_new, 'r') as f:
for line in f:
if line != "\n":
tem = line.strip()
if tem[0] == '>':
header = tem[1:]
else:
sequence = tem
fasta[header] = fasta.get(header, '') + sequence
for index in proindexlist:
have_none = False
none_num = len([x for x in index if x == 'None'])
app_spe = []
if none_num != len(index):
q_index = [var for var in index if var != 'None']
have_spe = [x.split(":")[0] for x in q_index]
app_spe = [x for x in spelist if x not in have_spe]
have_none = True
else:
q_index = index
hcp_pro_name = hcp_name(matchlist[p - 1])
wfiles = "{0}/p{1}.fasta".format(dirname, p)
fw = open(wfiles, 'a')
for id in q_index:
abb_name = id.strip().split(":")[0]
if id in fasta.keys():
fw.write(">"+abb_name+"\n"+fasta[id]+"\n")
else:
name_none = ">" + abb_name + "\n"
fw.write(name_none + "M" + "\n")
if have_none:
for line in app_spe:
name_none = ">" + line + "\n"
fw.write(name_none + "M" + "\n")
fw.close()
logretrieveprotein.info(
"Retrieve and download of highly conserved protein '{0}' was successful store in p{1}.fasta file".format(
hcp_pro_name, str(p)))
p += 1
logretrieveprotein.info("Retrieve from KEGG database " + str(p - 1) + " highly conserved proteins")
for infile in glob.glob(os.path.join(dirname, '*.fa')):
os.remove(infile)
return dirname
def doretrieve(specieslistfile, outpath,local_db):
'''main to retrieve protein from kegg db'''
# spelist = []
# for line in specieslistfile:
# st = line.strip()
# spelist.append(st)
spelist = specieslistfile
logretrieveprotein.info("Reading organisms's names success!")
colname = getcolname()
proindexlist, matchlist = getspecies(spelist, colname)
dirpath = retrieveprotein(proindexlist, outpath, matchlist, spelist,local_db)
return dirpath
def hcp_name(index):
"""get highly conserved protein names from ko list"""
ko_path = getlocaldbpath()
pro_ko = os.path.join(ko_path, "protein_ko.txt")
with open(pro_ko) as ko:
for line in ko:
name = line.strip().split(',')
if name[1] == index:
return name[0]
if __name__ == '__main__':
print(getcolname())
print(getspecies(['swe'], ['K01409']))
# for line in getcolname():
# if getspecies(['mdm'], [line])[0] != []:
# proid = getspecies(['mdm'], [line])[0][0][0]
# print("http://rest.kegg.jp/get/" + proid + "/aaseq")
specieslistfile = ['zma', "ath", "eco"]
outpath = "/home/yangfang/test/alg2/"
doretrieve(specieslistfile, outpath,local_db="")
|
Graetzlhotel Neubau provides accommodation with free WiFi in Vienna, ideally located 1.7 km from Parliament of Austria and 1.7 km from MuseumsQuartier. The property is situated 1.7 km from Leopold Museum and 1.8 km from Vienna City Hall. The property is 1.9 km from Kunsthistorisches Museum and 1.9 km from Naturhistorisches Museum.
The units are fitted with a flat-screen TV with cable channels, a coffee machine, a shower, free toiletries and a desk. With a private bathroom, rooms at the hotel also feature a city view. Guest rooms will provide guests with a fridge.
Haus des Meeres is 3.4 km from Graetzlhotel Neubau. The nearest airport is Vienna International Airport, 22 km from the property.
|
import os
from pathlib import Path
#A script to find the fortran files within Isca's src directory
#that include namelists, and to check if namelist checking is done.
#find the location of the source code
GFDL_BASE = os.environ['GFDL_BASE']
#setup some output dictionaries and lists
fortran_file_dict = {}
includes_namelist_dict = {}
includes_check_namelist_dict = {}
n_check_namelist_dict = {}
if_def_internal_nml_dict = {}
files_with_namelists = []
namelists_to_flag = []
namelists_to_flag_possible = []
#find ALL of the fortran files within GFDL_BASE/src directory
for path in Path(f'{GFDL_BASE}/src/').rglob('*.*90'):
#exclude files with ._ at the start
if path.name[0:2] != '._':
#add all the remaining files to a dictionary
fortran_file_dict[path.name] = path
#go through each file and check if it contains a namelist, and if it does namelist checking
for file_name in fortran_file_dict.keys():
file_path = fortran_file_dict[file_name]
#initialise some of the checking variables
namelist_in_file = False
check_namelist_in_file = False
number_of_checks = 0
if_def_internal_nml_in_file = False
#open each of the fortran files
with open(file_path, 'r') as read_obj:
for line in read_obj:
#check if it contains a namelist
if 'namelist /' in line and not namelist_in_file:
namelist_in_file = True
# does it contain the check_nml_error command?
if 'check_nml_error' in line and not check_namelist_in_file:
check_namelist_in_file = True
# count how many times this string is mentioned
if 'check_nml_error' in line:
number_of_checks += 1
#check if there's more than one type of namelist reading available
if '#ifdef INTERNAL_FILE_NML' in line and not if_def_internal_nml_in_file:
if_def_internal_nml_in_file = True
#make a list of those files that do have a namelist
if namelist_in_file:
files_with_namelists.append(file_name)
#make a list of those files that do have a namelist but don't do checking
if namelist_in_file and not check_namelist_in_file:
namelists_to_flag.append(file_name)
#make a list of files that have namelists, read them in more than one way,
#and mention check_nml_error fewer than 3 times. This catches cases where some
#namelist checking takes place, but not on every method of namelist reading.
if namelist_in_file and if_def_internal_nml_in_file and number_of_checks < 3:
namelists_to_flag_possible.append(file_name)
#keep a record of the files that include a namelist
includes_namelist_dict[file_name] = namelist_in_file
#keep a record of the files that do and don't do namelist checking
includes_check_namelist_dict[file_name] = check_namelist_in_file
#keep a record of the number of checks taking place
n_check_namelist_dict[file_name] = number_of_checks
#create a list of files that appear in namelists_to_flag_possible
list_of_filepaths_to_check = [str(fortran_file_dict[path]) for path in namelists_to_flag_possible]
#print the number of checks
print([n_check_namelist_dict[path] for path in namelists_to_flag_possible])
#print the list of files
print(namelists_to_flag_possible)
#print their directories
print(list_of_filepaths_to_check)
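The three checks above can be distilled into a small helper that applies the same heuristics to a single file's source text (a sketch for illustration; `flags_namelist` is a hypothetical name, not part of the script):

```python
def flags_namelist(text, max_checks=3):
    """Apply the script's heuristics to one file's source text.

    Returns (definite, possible): definite flags files that declare a
    namelist but never call check_nml_error; possible flags files that
    support both namelist read paths (#ifdef INTERNAL_FILE_NML) with
    fewer than max_checks mentions of check_nml_error.
    """
    has_namelist = 'namelist /' in text
    has_check = 'check_nml_error' in text
    n_checks = text.count('check_nml_error')
    has_ifdef = '#ifdef INTERNAL_FILE_NML' in text
    definite = has_namelist and not has_check
    possible = has_namelist and has_ifdef and n_checks < max_checks
    return definite, possible
```

Run over `read_obj.read()` instead of line by line, this gives the same classification with one pass per file.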
|
← It is I! Super Katie!
It’s days like today that I love my job and couldn’t imagine doing anything else. I mean, seriously, what other job lets you bomb around on your bike all afternoon – and actually calls it work?
Slow Food Vancouver, which puts on these events throughout B.C., is a non-profit organization founded in 1989 to counteract fast food and fast life. The purpose of the Slow Food Cycle Tours is to foster a connection between local producers and urban consumers, and to reignite an interest in the food we eat: where it comes from, how it tastes, and so on. Last year, in its first year, Chilliwack attracted more than 1,000 cyclists to the event, many of whom came all the way out from the city. Impressive indeed.
Just a little country couture.
I didn’t exactly have a proper route to follow, as the organizers are a bit stingy in providing a map for those who don’t pay for registration, so I figured out a route of my own based on all the stops. Thank you mapmyride.com. The tour has 15 stops in total, but because it wasn’t the actual tour, and because many of the farms on the tour are private farms, not all of them were open for viewing. But that didn’t hinder my experience, oh no.
Cycling in farm country, with or without food, is an experience in itself.
One minute you’re mooing at the moo cows, or gushing over the billy goats, or in awe of the heron hovering over the corn fields, and the next you’re battling road-hogging feed trucks, or grannies with caved-in bumpers who can barely see over their steering wheels, or the smell. Ohhh, that smell. It’s a funny smell; kind of smells like poo. But ask anyone who lives out there and it’s like their sense of smell is turned off when it comes to the poo. What are you talking about? It doesn’t smell like poo. Hate to break it to y’all, but yes it does. Majorly!
Look! I learned how to ride no hands, something I've been trying to do since practically the first time I climbed onto a bike! The beauty of a nice, quiet, country road.
Where do you most like to ride? In the city? the country? the middle of nowhere?
|
#!/usr/bin/env python
###############################################################################
# TODO JMP JE JG JL JGE JLE SETJMP LONGJMP DATA LABEL
# TODO discover how to keep code resident and send it new data
# TODO discover how to reference other pixel data for convolution/correlation
# TODO Use Tower of Hanoi separate data stacks for each type and
# make different instructions (or modifiers) for each.
# TODO test whether the BLOCKSIZE approach interferes with referencing
# Perhaps convolve a checkerboard with a Gaussian blur.
###############################################################################
"""gpu11.py implements an RPN kernel constructor.
"""
import re
from sys import (argv, path)
from PIL import (Image)
from time import (time)
from numpy import (array, float32, int32, empty_like, uint8)
path.append('../Banner')
# from pprint import pprint
from Banner import (Banner)
# from operator import (add, sub, mul, div)
# pycuda imports do not pass pylint tests.
# pycuda.autoinit is needed for cuda.memalloc.
import pycuda.autoinit # noqa
from pycuda.driver import (mem_alloc, memcpy_htod, memcpy_dtoh) # noqa
from pycuda.compiler import (SourceModule) # noqa
###############################################################################
class CUDAMathConstants(object):
"""Initialize math constants for the interpreter."""
###########################################################################
def __init__(self, **kw):
"""Initialize math constants class."""
filename = kw.get(
'filename',
'/usr/local/cuda-5.5/targets/x86_64-linux/include/'
'math_constants.h')
self.caselist = []
self.identified = {}
with open('RPN_CUDA_constants.txt', 'w') as manual:
print>>manual, '# RPN CUDA constants'
self.hrule(manual)
print>>manual, '# PUSH CUDA constant onto RPN stack'
self.hrule(manual)
with open(filename) as source:
for line in source:
if line.startswith('#define'):
token = re.findall(r'(\S+)', line)
if len(token) != 3:
continue
define, name, value = token
if '.' not in value:
continue
# if name.endswith('_HI') or name.endswith('_LO'):
# continue
self.identified[name] = value
print>>manual, '%24s: %s' % (name, value)
self.hrule(manual)
###########################################################################
def hrule(self, stream):
"""Debugging: output horizontal rule."""
print>>stream, '#' + '_' * 78
###########################################################################
def functions(self):
"""Prepare function handling."""
end = '/*************************************************************/'
text = ''
for token in self.identified.iteritems():
name, value = token
text += ''.join((
'__device__ int %s\n' % (end),
'RPN_%s_RPN(Thep the) {' % (name),
' IPUP = %s;' % (name),
' return 0;',
'}\n',
))
return text
###########################################################################
def cases(self):
"""Prepare case handling."""
# case = []
# count = 0
for token in self.identified.iteritems():
name, value = token
# case += ['error = RPN_%s_RPN(&the)' % (name), ]
self.caselist += ['{ *dstack++ = %s; }' % (name), ]
return self.caselist
###############################################################################
class CUDAMathFunctions(object):
"""CUDAMathFunctions class"""
found = set()
###########################################################################
def __init__(self, **kw):
"""CUDAMathFunctions __init__"""
clip = kw.get(
'clip',
True)
filename = kw.get(
'filename',
'/usr/local/cuda-5.5/targets/x86_64-linux/include/'
'math_functions.h')
signature = kw.get(
'signature',
'extern __host__ __device__ __device_builtin__ float')
self.caselist = []
with open('RPN_CUDA_functions.txt', 'w') as manual:
print>>manual, '# RPN CUDA functions'
self.hrule(manual)
signatureAB = '(float x, float y)'
signatureA_ = '(float x)'
self.one = {}
self.two = {}
with open(filename) as source:
for line in source:
if line.startswith(signature):
A, B, C = line.partition('float')
if not C:
continue
function = C.strip()
if function.endswith(') __THROW;'):
function = function[:-9]
name, paren, args = function.partition('(')
if name in CUDAMathFunctions.found:
continue
else:
CUDAMathFunctions.found.add(name)
if signatureAB in function:
# print 'AB', function
if clip:
name = name[:-1] # remove f
self.two[name] = name
self.caselist += ['{ ab %s(a, b); }' % (name), ]
elif signatureA_ in function:
# print 'A_', function
self.one[name] = name
self.caselist += ['{ a_ %s(a); }' % (name), ]
else:
continue
print>>manual, '# functions of one float parameter'
print>>manual, '# pop A and push fun(A).'
self.hrule(manual)
for cuda, inner in self.one.iteritems():
print>>manual, 'float %s(float) // %s' % (inner, cuda)
self.hrule(manual)
print>>manual, '# functions of two float parameters'
print>>manual, '# pop A, pop B and push fun(A, B)'
self.hrule(manual)
for cuda, inner in self.two.iteritems():
print>>manual, 'float %s(float, float) // %s' % (inner, cuda)
self.hrule(manual)
###########################################################################
def hrule(self, stream):
"""CUDAMathFunctions hrule"""
print>>stream, '#' + '_' * 78
###########################################################################
def functions(self):
"""CUDAMathFunctions functions"""
return ''
###########################################################################
def cases(self):
"""CUDAMathFunctions cases"""
return self.caselist
###############################################################################
class Timing(object):
"""Timing class"""
text = ''
###########################################################################
def __init__(self, msg=''):
"""Timing __init__"""
self.msg = msg
###########################################################################
def __enter__(self):
"""Timing __enter__"""
self.t0 = time()
###########################################################################
def __exit__(self, typ, value, tb):
"""Timing __exit__"""
Timing.text += '%40s: %e\n' % (self.msg, (time() - self.t0))
###############################################################################
class Function(object):
"""Function class"""
###########################################################################
def __init__(self, **kw):
"""Function __init__"""
self.index = kw.get('start', 0)
self.name = {}
self.body = ""
self.case = ""
self.tab = " " * 12
self.final = [0]
self.code = {'#%d' % d: d for d in range(kw.get('bss', 64))}
self.bss = self.code.keys()
for i, name in enumerate(
kw.get('handcode', [
'swap', 'add', 'mul', 'ret', 'sub', 'div',
'call', 'noop', 'invert', 'push', 'pop', 'jmp', ])):
self.add_name(name, i)
###########################################################################
def add_name(self, name, index):
"""Function add_name"""
self.code[name] = index
self.name[index] = name
###########################################################################
def assemble(self, source, DATA, **kw):
"""Function assemble"""
self.label = {'code': [], 'data': [], }
self.data = []
fixups = {}
self.clabels = {}
self.backclabels = {}
self.dlabels = {}
self.backdlabels = {}
self.final = []
extra = 0
for offset, name in enumerate(DATA):
name = str(name)
label, colon, datum = name.partition(':')
if colon:
self.dlabels[label] = offset + extra
self.backdlabels[offset + extra] = label
self.label['data'] += [label, ]
# print '\t\t\tdata', label, offset + extra
else:
datum = label
values = datum.split()
self.data += values
extra += len(values) - 1
# print 'A0', self.backclabels
# print 'B0', self.clabels
for offset, name in enumerate(source):
name = re.sub(r'[ \t]', '', name)
label, colon, opname = name.partition(':')
if not colon:
label, opname = None, label
# print 'name = %s', (opname)
else:
assert label not in self.clabels.keys()
self.clabels[label] = offset
self.backclabels[offset] = label
self.label['code'] += [label, ]
# print '\t\t\tcode', label
if opname in self.code.keys():
self.final += [self.code[opname], ]
# print 'instruction'
else:
self.final += [stop, ]
fixups[opname] = fixups.get(opname, []) + [offset, ]
# print 'opname:fixup = %s/%s' %(opname, offset)
for label, offsets in fixups.iteritems():
if not label:
continue
if label in self.clabels:
for offset in offsets:
self.final[offset] = self.clabels[label]
if (not self.final) or (self.final[-1] != stop):
self.final += [stop, ]
# print 'A1', self.backclabels
# print 'B1', self.clabels
if kw.get('verbose', False):
# print source
# print self.final
direct = False
# print '(',
for code in self.final:
if not direct:
name = self.name[code]
# print "'%s'," % (name),
if name in ('push', 'call', 'jmp'):
direct = True
else:
label = self.backclabels.get(code, None)
if label is None:
# print label, "'#%d'" % (code),
pass
else:
# print "'#%d'," % (code),
pass
direct = False
# print ')'
# print 'A2', self.backclabels
# print 'B2', self.clabels
###########################################################################
def disassemble(self, **kw):
"""Function disassemble"""
verbose = kw.get('verbose', False)
if not verbose:
return
direct = False
# print self.data
# print self.label['data']
# print self.backclabels
print '#'*79
print '.data'
# print '#', self.data
nl = False
comma = ''
for offset, datum in enumerate(self.data):
if not datum:
continue
label = self.backdlabels.get(offset, None)
if label and label in self.label['data']:
if nl:
print
print '%-12s%+11.9f' % (label+':', float(datum)),
comma = ','
else:
print comma + ' %+11.9f' % (float(datum)),
comma = ','
nl = True
print
print '#'*79
print '.code'
# print '#', self.final
for offset, code in enumerate(self.final):
if direct:
clabel = self.backclabels.get(code, None)
if clabel:
print clabel
else:
print '#%d' % (code)
direct = False
else:
label = self.backclabels.get(offset, None)
name = self.name[code]
direct = (name in ('push', 'call', 'jmp'))
if label and label in self.label['code']:
print '%-12s%s' % (label+':', name),
else:
print ' %s' % (name),
if not direct:
print
print '.end'
print '#'*79
###########################################################################
def add_body(self, fmt, **kw):
"""Function add_body"""
cmt = '/*************************************************************/'
base = "__device__ int " + cmt + "\nRPN_%(name)s_RPN(Thep the) "
self.body += ((base + fmt) % kw) + '\n'
###########################################################################
def add_case(self, **kw):
"""Function add_case"""
k = {'number': self.index}
k.update(kw)
casefmt = "case %(number)d: error = RPN_%(name)s_RPN(&the); break;\n"
self.case += self.tab + casefmt % k
self.code[kw['name']] = self.index
self.add_name(kw['name'], self.index)
###########################################################################
def add_last(self):
"""Function add_last"""
self.index += 1
###########################################################################
def unary(self, **kw):
"""Function unary"""
self.add_case(**kw)
self.add_body("{ a_ %(name)s(a); return 0; }", **kw)
self.add_last()
###########################################################################
def binary(self, **kw):
"""Function binary"""
self.add_case(**kw)
self.add_body("{ ab %(name)s(a, b); return 0; }", **kw)
self.add_last()
###############################################################################
def CudaRPN(inPath, outPath, mycode, mydata, **kw):
"""CudaRPN implements the interface to the CUDA run environment.
"""
verbose = kw.get('verbose', False)
BLOCK_SIZE = 1024 # Kernel grid and block size
STACK_SIZE = 64
# OFFSETS = 64
# unary_operator_names = {'plus': '+', 'minus': '-'}
function = Function(
start=len(hardcase),
bss=64,
handcode=kw.get('handcode'))
with Timing('Total execution time'):
with Timing('Get and convert image data to gpu ready'):
im = Image.open(inPath)
px = array(im).astype(float32)
function.assemble(mycode, mydata, verbose=True)
function.disassemble(verbose=True)
cx = array(function.final).astype(int32)
dx = array(function.data).astype(float32)
with Timing('Allocate mem to gpu'):
d_px = mem_alloc(px.nbytes)
memcpy_htod(d_px, px)
d_cx = mem_alloc(cx.nbytes)
memcpy_htod(d_cx, cx)
d_dx = mem_alloc(dx.nbytes)
memcpy_htod(d_dx, dx)
with Timing('Kernel execution time'):
block = (BLOCK_SIZE, 1, 1)
checkSize = int32(im.size[0]*im.size[1])
grid = (int(im.size[0] * im.size[1] / BLOCK_SIZE) + 1, 1, 1)
kernel = INCLUDE + HEAD + function.body + convolve + TAIL
sourceCode = kernel % {
'pixelwidth': 3,
'stacksize': STACK_SIZE,
'case': function.case}
with open("RPN_sourceCode.c", "w") as target:
print>>target, sourceCode
module = SourceModule(sourceCode)
func = module.get_function("RPN")
func(d_px, d_cx, d_dx, checkSize, block=block, grid=grid)
with Timing('Get data from gpu and convert'):
RPNPx = empty_like(px)
memcpy_dtoh(RPNPx, d_px)
RPNPx = uint8(RPNPx)
with Timing('Save image time'):
pil_im = Image.fromarray(RPNPx, mode="RGB")
pil_im.save(outPath)
# Output final statistics
if verbose:
print '%40s: %s%s' % ('Target image', outPath, im.size)
print Timing.text
###############################################################################
INCLUDE = """// RPN_sourceCode.c
// GENERATED KERNEL IMPLEMENTING RPN ON CUDA
#include <math.h>
"""
HEAD = """
#define a_ float a = *--dstack; *dstack++ =
#define ab float a = *--dstack; float b = *--dstack; *dstack++ =
typedef struct _XY {
int x;
int y;
float n;
} XY, *XYp;
/************************** HANDCODE FUNCTIONS *******************************/
"""
handcode = {
'pop': "{ --dstack; }",
'quit': "{ stop = 1; }",
'noop': "{ }",
'invert': "{ a_ 1.0 - a; }",
'swap': """{
float a = *--dstack;
float b = *--dstack;
*dstack++ = a;
*dstack++ = b;
} """,
'push': "{ *dstack++ = data[code[ip++]]; }",
'add': "{ ab a + b; }",
'sub': "{ ab a - b; }",
'mul': "{ ab a * b; }",
'div': "{ ab a / b; }",
'call': """{
int to = code[ip++];
cstack[sp++] = ip;
ip = to;
} """,
'ret': "{ ip = cstack[--sp]; }",
'jmp': "{ ip = code[ip]; }",
}
hardcase = []
for i, (case, code) in enumerate(handcode.iteritems()):
hardcase += ['/* %s */ %s' % (case, code), ]
if 'stop' in code:
stop = i
HEAD += """
/************************** CUDA FUNCTIONS ***********************************/
"""
# Name header files and function signatures of linkable functions.
CUDA_sources = {
'/usr/local/cuda-5.5/targets/x86_64-linux/include/math_functions.h': [
'extern __host__ __device__ __device_builtin__ float',
'extern __device__ __device_builtin__ __cudart_builtin__ float',
'extern _CRTIMP __host__ __device__ __device_builtin__ float',
],
'/usr/local/cuda-5.5/targets/x86_64-linux/include/device_functions.h': [
# 'extern __device__ __device_builtin__ __cudart_builtin__ float',
'extern _CRTIMP __host__ __device__ __device_builtin__ float',
# 'extern __device__ __device_builtin__ float',
]
}
INCLUDE += '#include <%s>\n' % ('math_constants.h')
# Ingest header files to make use of linkable functions.
CUDA_constants = CUDAMathConstants()
hardcase += CUDA_constants.cases()
for filename, signatures in CUDA_sources.iteritems():
stars = max(2, 73 - len(filename))
pathname, twixt, basename = filename.partition('/include/')
INCLUDE += '#include <%s>\n' % (basename)
left = stars/2
right = stars - left
left, right = '*' * left, '*' * right
HEAD += '/*%s %s %s*/\n' % (left, filename, right)
for signature in signatures:
CUDA_functions = CUDAMathFunctions(
filename=filename,
signature=signature,
clip=True)
hardcase += CUDA_functions.cases()
###############################################################################
convolve = """
// data: the data field from which to convolve.
// kn: a length L array of coefficients (terminated by 0.0)
// kx: a length L array of x offsets
// ky: a length L array of y offsets
// X: width of data field (stride, not necessarily visible image width)
// Y: height of data field.
// C: color band (0, 1, or 2)
__device__ float planar_convolve(
float *data, float *kn, int *kx, int *ky, int X, int Y, int C)
{
float K = 0.0;
float V = 0.0;
int x0 = (threadIdx.x + blockIdx.x * blockDim.x);
int y0 = (threadIdx.y + blockIdx.y * blockDim.y);
int D = X * Y;
int N = 0;
float ki;
while((ki = *kn++) != 0.0) {
int xi = *kx++;
int yi = *ky++;
int x = (x0-xi);
int y = (y0-yi);
int d = C + (x + y * X) * 3;
if(d < 0 || d >= D) continue;
V += data[d];
K += ki;
N += 1;
};
if(N == 0) {
V = 0.0;
} else {
V /= K*N;
}
return V;
}
//__device__ void planar_ring_test(float *data, int C) {
// float kn[5] = { 1.0, 1.0, 1.0, 1.0 };
// int kx[5] = { +1, 0, -1, 0, 0 };
// int ky[5] = { 0, +1, 0, -1, 0 };
//}
"""
convolutionGPU = """
__global__ void convolutionGPU(
float *d_Result,
float *d_Data,
int dataW,
int dataH )
{
//////////////////////////////////////////////////////////////////////
// most slowest way to compute convolution
//////////////////////////////////////////////////////////////////////
// global mem address for this thread
const int gLoc = threadIdx.x +
blockIdx.x * blockDim.x +
threadIdx.y * dataW +
blockIdx.y * blockDim.y * dataW;
float sum = 0;
float value = 0;
for (int i = -KERNEL_RADIUS; i <= KERNEL_RADIUS; i++) // row wise
for (int j = -KERNEL_RADIUS; j <= KERNEL_RADIUS; j++) // col wise
{
// check row first
if (blockIdx.x == 0 && (threadIdx.x + i) < 0) // left apron
value = 0;
else if ( blockIdx.x == (gridDim.x - 1) &&
(threadIdx.x + i) > blockDim.x-1 ) // right apron
value = 0;
else
{
// check col next
if (blockIdx.y == 0 && (threadIdx.y + j) < 0) // top apron
value = 0;
else if ( blockIdx.y == (gridDim.y - 1) &&
(threadIdx.y + j) > blockDim.y-1 ) // bottom apron
value = 0;
else // safe case
value = d_Data[gLoc + i + j * dataW];
}
sum += value *
d_Kernel[KERNEL_RADIUS + i] *
d_Kernel[KERNEL_RADIUS + j];
}
d_Result[gLoc] = sum;
}
"""
###############################################################################
TAIL = """
__device__ int machine(int *code, float *data, float *value) {
const float numerator = 255.0;
const float denominator = 1.0 / numerator;
float DSTACK[%(stacksize)d];
int CSTACK[%(stacksize)d];
int opcode;
int error = 0;
int *cstack = &CSTACK[0];
float *dstack = &DSTACK[0];
int ip = 0, sp = 0, stop = 0;
*dstack++ = *value * denominator;
*value = 0.0;
while((!stop) && (opcode = code[ip++]) != 0) {
switch(opcode) {
"""
for i, case in enumerate(hardcase):
TAIL += ' '*12
TAIL += 'case %3d: %-49s; break;\n' % (i, case)
TAIL += """
%(case)s
default: error = opcode; break;
}
stop |= !!error;
}
if(error) {
*value = float(error);
} else {
*value = *--dstack * numerator;
}
return error;
}
__global__ void RPN( float *inIm, int *code, float *data, int check ) {
const int pw = %(pixelwidth)s;
const int idx = (threadIdx.x ) + blockDim.x * blockIdx.x ;
if(idx * pw < check * pw) {
const int offset = idx * pw;
int error = 0;
int c;
for(c=0; c<pw && !error; ++c) {
error += machine(code, data, inIm + offset + c);
}
}
}
"""
###############################################################################
if __name__ == "__main__":
Banner(arg=[argv[0] + ': main', ], bare=True)
if len(argv) == 1:
Banner(arg=[argv[0] + ': default code and data', ], bare=True)
DATA = [0.0, 1.0]
CODE = [
'push', '#1',
'sub',
'noop',
'call', 'here',
'quit',
'here:ret', ]
else:
Banner(arg=[argv[0] + ': code and data from file: ', ], bare=True)
DATA = []
CODE = []
STATE = 0
with open(argv[1]) as source:
for number, line in enumerate(source):
line = line.strip()
if STATE == 0:
if line.startswith('#'):
# print number, 'comment'
continue
elif line.startswith('.data'):
# print number, 'keyword .data'
STATE = 1
else:
assert False, '.data section must come first'
elif STATE == 1:
if line.startswith('#'):
# print number, 'comment'
continue
line = re.sub(r':\s+', ':', line)
if line.startswith('.code'):
# print number, 'keyword .code'
STATE = 2
else:
# print number, 'add data'
DATA += re.split(r'\s+', line)
elif STATE == 2:
if line.startswith('#'):
# print number, 'comment'
continue
line = re.sub(r':\s+', ':', line)
# print number, 'add code'
CODE += re.split(r'\s+', line)
# print '.data\n', '\n'.join([str(datum) for datum in data])
# print '.code\n', '\n'.join(code)
Banner(arg=[argv[0] + ': run in CUDA', ], bare=True)
CudaRPN(
'img/source.png',
'img/target.png',
CODE,
DATA,
handcode=handcode
)
###############################################################################
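For readers following the generated kernel: the machine emitted in TAIL pops operands from a data stack and pushes results back, and 'push' consumes an immediate index into the data array, mirroring the `ab` macro's operand order (the most recently pushed value is `a`). A minimal Python model of that stack discipline (illustrative only; `run_rpn` is a hypothetical helper covering just a few opcodes, not part of the CUDA code):

```python
def run_rpn(code, data, value):
    """Toy model of the generated RPN machine.

    The initial value is seeded onto the data stack, 'push' reads an
    immediate index into data, and binary ops pop a (top) then b and
    push the result, matching the kernel's `ab` macro.
    """
    dstack = [value]
    ip = 0
    while ip < len(code):
        op = code[ip]
        ip += 1
        if op == 'push':
            dstack.append(data[code[ip]])  # immediate operand: index into data
            ip += 1
        elif op == 'add':
            a, b = dstack.pop(), dstack.pop()
            dstack.append(a + b)
        elif op == 'sub':
            a, b = dstack.pop(), dstack.pop()
            dstack.append(a - b)
        elif op == 'quit':
            break
    return dstack[-1]
```

With the default program above (`push #1`, `sub`), a normalized pixel value v becomes 1.0 - v, i.e. an image inversion.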
|
Data recovery is something we hope we will never need. Whether you simply delete important files by mistake or lose everything in a computer crash, that valuable store of information may seem lost forever. But with complete, quality data-recovery software on your computer or system, you can be confident that a backup exists and your information survives when these things happen. That can be crucial for safeguarding your business or personal information.
What people do not realize is that there are a number of ways that things can go wrong on their computers. Whether you are responsible for the computers of a large corporation, a small business or even just your own personal computer, having a way to restore information when things go wrong is quite important. Here are some things that could happen to you, well, anytime, even right now. Your computer could be running slowly or be loaded with powerful spyware that can destroy files, transmit personal data or, even worse, cause the computer to crash. All of a sudden, it’s gone. Or, you could be working along nicely without a care in the world and bam! A power surge, an electric storm or something else electrical happens and it’s all gone.
There are many more ways to lose the personal data stored on your computer; you don't even have to have the programs running to lose it. Nevertheless, many people mistakenly believe this will not happen to them and so do nothing to prevent a total loss. Data recovery software prevents exactly that. The strange thing is, it takes only minutes to install, it virtually takes care of itself, and it is not overly costly either. So why don't more people use data recovery? They simply don't realize its importance, and that is one mistake we don't want to make. Data recovery is a need every computer user has.
|
# Copyright (C) 2011 Michal Zielinski (michal@zielinscy.org.pl)
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
import client.diplomacy
from client.diplomacy import (CLAUSE_ADVANCE, CLAUSE_GOLD, CLAUSE_MAP,
CLAUSE_SEAMAP, CLAUSE_CITY,
CLAUSE_CEASEFIRE, CLAUSE_PEACE, CLAUSE_ALLIANCE,
CLAUSE_VISION, CLAUSE_EMBASSY,
DS_WAR, DS_ARMISTICE, DS_CEASEFIRE, DS_ALLIANCE, DS_PEACE)
import ui
class Meeting(client.diplomacy.Meeting):
def init(self):
self.dialog = None
self.open_dialog()
def create_clause(self, giver, type, value):
self.open_dialog()
self.dialog.add_clause(giver, type, value)
print 'create_clause', giver, type, value
def remove_clause(self, giver, type, value):
print 'remove_clause', giver, type, value
def accept_treaty(self, me, other):
print 'accept_treaty', me, other
self.open_dialog()
self.dialog.set_accept_treaty(me, other)
def open_dialog(self):
if not self.dialog:
self.dialog = MeetingDialog(self)
ui.set_dialog(self.dialog, scroll=True)
class MeetingDialog(ui.LinearLayoutWidget):
def __init__(self, meeting):
super(MeetingDialog, self).__init__()
self.meeting = meeting
self.left = ConditionsWidget(meeting.client.get_playing())
self.right = ConditionsWidget(meeting.counterpart)
c = meeting.counterpart
self.top = ui.HorizontalLayoutWidget()
# Sir!, the %s ambassador has arrived \nWhat are your wishes?
self.top.add(ui.Label('Meeting with '))
self.top.add(ui.Label(' ', image=c.get_flag()))
self.top.add(ui.Label(' %s (%s)' % (c.get_nation_pl(), c.get_name())))
self.add(self.top)
self.middle = ui.HorizontalLayoutWidget(spacing=10)
w = 200
self.middle.add(ui.Bordered(self.left, force_width=w))
self.middle.add(ui.Bordered(self.right, force_width=w))
self.add(self.middle)
self.add(ui.Button('Add condition', self.add_condition))
self.bottom = ui.HorizontalLayoutWidget(spacing=10)
self.bottom.add(ui.Button('Cancel treaty', self.cancel_treaty))
self.bottom.add(ui.Button('Accept treaty', self.accept_treaty))
self.add(self.bottom)
def cancel_treaty(self):
self.meeting.cancel()
ui.back()
def accept_treaty(self):
self.meeting.accept()
def add_condition(self):
def ph(type): # pact handler
def handler():
ui.back()
self.meeting.pact(type)
return handler
panel = ui.LinearLayoutWidget()
c = self.meeting.counterpart
state = c.get_state()
if state not in (DS_ARMISTICE, DS_CEASEFIRE, DS_PEACE, DS_ALLIANCE):
panel.add(ui.Button('Ceasefire', ph(CLAUSE_CEASEFIRE)))
if state not in (DS_PEACE, DS_ALLIANCE):
panel.add(ui.Button('Peace', ph(CLAUSE_PEACE)))
if state not in (DS_ALLIANCE, ):
panel.add(ui.Button('Alliance', ph(CLAUSE_ALLIANCE)))
if not c.gives_shared_vision():
panel.add(ui.Button('Shared vision', ph(CLAUSE_VISION)))
ui.set_dialog(panel)
def add_clause(self, giver, type, value):
if giver == self.meeting.counterpart:
panel = self.right
else:
panel = self.left
panel.add_condition(type, value, self.meeting.get_clause_repr(type, value))
def set_accept_treaty(self, me, other):
self.left.set_accept(me)
self.right.set_accept(other)
class ConditionsWidget(ui.LinearLayoutWidget):
def __init__(self, player):
super(ConditionsWidget, self).__init__()
p = ui.HorizontalLayoutWidget()
p.add(ui.Spacing(10, 0))
p.add(ui.Label(' ', image=player.get_flag()))
p.add(ui.Spacing(10, 0))
self.accepting = ui.Label('?')
p.add(self.accepting)
self.add(p)
self.panel = ui.LinearLayoutWidget()
self.add(self.panel)
def add_condition(self, type, value, string):
self.panel.add(ui.Label(string))
def set_accept(self, b):
if b:
self.accepting.set_text('Accepts')
else:
self.accepting.set_text('Declines')
if __name__ == '__main__':
d = MeetingDialog()
ui.set_dialog(d)
ui.main()
|
Diamond Way Buddhist Center New York is part of an international network of over 600 meditation centers in the Karma Kagyu tradition of Tibetan Buddhism. The centers were started due to the unique inspiration of Lama Ole Nydahl according to the wishes of H.H. 16th Karmapa. They are now under the spiritual guidance of H.H. 17th Gyalwa Karmapa Trinley Thaye Dorje.
To receive information about visiting teachers, current events and latest Dharma news, subscribe to our RSS feed.
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Jan 25 22:34:05 2020
@author: mostafamousavi
"""
from EQTransformer.utils.downloader import downloadMseeds, makeStationList, downloadSacs
import pytest
import glob
import os
def test_downloader():
makeStationList(client_list=["SCEDC"],
min_lat=35.50,
max_lat=35.60,
min_lon=-117.80,
max_lon=-117.40,
start_time="2019-09-01 00:00:00.00",
end_time="2019-09-03 00:00:00.00",
channel_list=["HH[ZNE]", "HH[Z21]", "BH[ZNE]", "EH[ZNE]", "SH[ZNE]", "HN[ZNE]", "HN[Z21]", "DP[ZNE]"],
filter_network=["SY"],
filter_station=[])
downloadMseeds(client_list=["SCEDC", "IRIS"],
stations_json='station_list.json',
output_dir="downloads_mseeds",
start_time="2019-09-01 00:00:00.00",
end_time="2019-09-02 00:00:00.00",
min_lat=35.50,
max_lat=35.60,
min_lon=-117.80,
max_lon=-117.40,
chunck_size=1,
channel_list=[],
n_processor=2)
    dir_list = os.listdir('.')
    successful = ('downloads_mseeds' in dir_list) and ('station_list.json' in dir_list)
    assert successful
def test_mseeds():
mseeds = glob.glob("downloads_mseeds/CA06/*.mseed")
assert len(mseeds) > 0
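Both tests leave artifacts (`downloads_mseeds/`, `station_list.json`) in the working directory; a cleanup helper along these lines could be added (an illustrative sketch, not part of the EQTransformer API):

```python
import os
import shutil

def cleanup_downloads(workdir='.'):
    """Remove the artifacts the downloader tests above leave behind."""
    removed = []
    for name in ('downloads_mseeds', 'station_list.json'):
        path = os.path.join(workdir, name)
        if os.path.isdir(path):
            shutil.rmtree(path)
            removed.append(name)
        elif os.path.isfile(path):
            os.remove(path)
            removed.append(name)
    return removed
```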
|
Plus Checking - Earn Interest & Enjoy Exclusive Benefits!
Plus Checking is a continuation of Communication Federal Credit Union’s commitment to being your partner in financial success. Plus Checking offers all of the perks and features you expect from a checking account at a big bank, but without the fees and additional charges.
Information for the above comparison table was obtained January 1st, 2019 from the aforementioned banks' websites. Information is subject to change without notice.
1 Monthly Fee can be waived with qualifying account activities or if you are under 24 years old and enrolled in school. 2 CFCU does not charge out-of-network ATM fees, but other financial institutions or merchants may. 3 Up to $25 in fees assessed by out-of-network ATMs can be refunded each month with qualifying account activities. 4 Overdraft protection transfer from a linked Bank of America savings account or line of credit. 5 Overdraft protection transfer from a linked Chase savings account.
Switch to Plus Checking today and experience better banking at Communication Federal Credit Union – Oklahoma’s #1 Credit Union!
We’re here to make switching as easy as possible! Once your account has been approved, use the resources below to set up your new account and experience better checking at Communication Federal Credit Union.
Have questions about using your new account? Call us at 844.231.6818 or Contact Us Online.
Dividends paid monthly on the daily balance when it exceeds $750. Interest rate may vary. See our Rates page for most current interest rate information.
|
#
# This file is part of the CCP1 Graphical User Interface (ccp1gui)
#
# (C) 2002-2005 CCLRC Daresbury Laboratory
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
#
"""Job manager and editor Tkinter interface to control jobs
"""
import sys
import jobmanager.job
import jobmanager.ccp1gui_subprocess
import jobmanager.jobeditor
import jobmanager.jobthread
if sys.platform[:3] == 'win':
import jobmanager.winprocess
# Constants
#
# jmht - don't think these are used anywhere?
#MODIFIED = "Modified"
#SUBMITTED = "Submitted"
#RUNNING = "Running"
#KILLED = "Killed"
#DONE = "Done"
#STOPPED = "Stopped"
class JobManager:
def __init__(self):
self.registered_jobs = []
    def RegisterJob(self, job):
        if job not in self.registered_jobs:
            self.registered_jobs.append(job)
    def RemoveJob(self, job):
        self.registered_jobs.remove(job)
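The registry semantics above are easy to check in isolation; a standalone sketch (the class is re-declared here so the snippet runs without the jobmanager imports):

```python
class JobManager:
    """Standalone mirror of the registry above."""
    def __init__(self):
        self.registered_jobs = []

    def RegisterJob(self, job):
        # Duplicate registrations are ignored.
        if job not in self.registered_jobs:
            self.registered_jobs.append(job)

    def RemoveJob(self, job):
        self.registered_jobs.remove(job)

manager = JobManager()
job = object()  # any object can stand in for a jobmanager.job instance
manager.RegisterJob(job)
manager.RegisterJob(job)
assert len(manager.registered_jobs) == 1
manager.RemoveJob(job)
assert manager.registered_jobs == []
```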
|
If they can get past Rupert Murdoch's lovely parting gift of $40 million, a lot of people will revel in the denouement of Roger Ailes' Fox News career. But just because the hens at Fox might be safer from some predators, it's a limited victory. Sexual harassment remains a seemingly intractable problem in the U.S. workplace because a lot of fully civilized people don't know what it is.
I was discussing Ailes' presumably forced resignation on Thursday with one of my best friends, a man who makes a living in a high-profile profession populated by arrogant blowhards not unlike some of the people Ailes inflicted on American TV audiences. My friend is like none of them; he's wise, patient and fair. He's slow to anger. He listens. He's deeply wounded over stories about people hurting children and animals. A onetime Republican who wasn't very good at it, he's always been a conscientious citizen who prefers to solve problems than exploit them for attention. He's the person I most want to talk to when I'm troubled about injustice.
Until I told him, he hadn't heard that Fox had dumped Ailes after investigating harassment allegations by Gretchen Carlson and apparently finding that she wasn't singing solo. Alluding to the generally blond, youthful good looks of the women he supposedly harassed, my friend commented, "Why would he ever think women like that would be attracted to him?"
I was stunned. How could someone so smart be so ignorant? Was his disconnect just a momentary departure from humanity? Could someone I know so well be a misogynist?
Nah. Some white people just don't get that virtually all black people, at some time in their lives, have been treated differently just because they aren't white and, more important, that even if that unfairness was unconscious, it's still racist. Reflexive treatment based on someone's immutable traits is insidious, and it won't stop until empathy is a default response to human pain.
How does my compassionate friend not get that what Roger Ailes probably did, what so many men routinely do, is not romance? How do they not get that it's bullying, it's terrorism that isn't political, but personal? How can my friend, a generous donor to charity, a champion of disabled people, a guy steeped in family values, not grasp that sexual harassment is not about seduction, but about the abuse of power?
He's like a lot of us. We are decent people invested in the social contract of human equality but who unwittingly till the soil where the "isms" that rend society thrive.
You don't have to be a person of color to understand the cost of racism. You don't have to be female to know the difference between being wooed and being threatened.
But it's not enough to conclude, like Edmund Burke, that "the only thing necessary for the triumph of evil is for good men to do nothing." Because if you don't recognize evil, you do nothing to stop it.
|
import datetime
import os
import shutil
import sys
import build_python_code_block
args = sys.argv[1:]
this_script_path = sys.argv[0]
this_script_dir = os.path.split(this_script_path)[0]
CURRENT_DATE = datetime.datetime.now()
# CURRENT_DATE = datetime.datetime(2017, 9, 20)
update_site_versions = [
'6.3.2',
'6.3.1',
'6.3.0',
'6.2.0',
'6.1.0',
'6.0.0',
'5.9.2',
'5.9.1',
'5.9.0',
'5.8.0',
'5.7.0',
'5.6.0',
'5.5.0',
'5.4.0',
'5.3.1',
'5.3.0',
'5.2.0',
'5.1.2',
'5.1.1',
'5.0.0',
'4.5.5',
'4.5.4',
'4.5.3',
'4.5.1',
'4.5.0',
'old',
]
LAST_VERSION_TAG = update_site_versions[0]
DEFAULT_CONTENTS_TEMPLATE = '''<doc>
<contents_area></contents_area>
%s
</doc>
'''
DEFAULT_AREAS = '''
<right_area>
</right_area>
<image_area></image_area>
<quote_area></quote_area>
'''
DEFAULT_AREAS_MANUAL = '''
<right_area>
</right_area>
<image_area>manual.png</image_area>
<quote_area></quote_area>
'''
#=======================================================================================================================
# BuildFromRst
#=======================================================================================================================
def BuildFromRst(source_filename, is_new_homepage=False):
print source_filename
import os
from docutils import core
# dict of default settings to override (same as in the cmdline params, but as attribute names:
# "--embed-stylesheet" => "embed_stylesheet"
    settings_overrides = {}
# publish as html
ret = core.publish_file(
writer_name='html',
source_path=source_filename,
destination_path=os.tempnam(),
settings_overrides=settings_overrides,
)
final = ret[ret.find('<body>') + 6: ret.find('</body>')].strip()
if final.startswith('<div'):
final = final[final.find('\n'):]
final = final[:final.rfind('</div>')]
rst_contents = open(source_filename, 'r').read()
if rst_contents.startswith('..'):
image_area_right_area_and_quote_area = ''
# lines = []
# for line in rst_contents.splitlines():
# if line.strip().startswith('..'):
# lines.append(line.strip()[2:].strip())
# lines = lines[1:] #remove the first (empty) line
# image_area_right_area_and_quote_area = '\n'.join(lines)
else:
if rst_contents.startswith('manual_adv'):
image_area_right_area_and_quote_area = DEFAULT_AREAS
else:
image_area_right_area_and_quote_area = DEFAULT_AREAS_MANUAL
name = source_filename.split('.')[0]
if is_new_homepage:
if os.path.exists(name + '.contents.htm'):
raise AssertionError('This file should not exist: ' + name + '.contents.htm')
if os.path.exists(name + '.contents.html'):
raise AssertionError('This file should not exist: ' + name + '.contents.html')
contents = DEFAULT_CONTENTS_TEMPLATE % (image_area_right_area_and_quote_area,)
final = contents.replace('<contents_area></contents_area>', '<contents_area>%s</contents_area>' % final)
final = final.replace('\r\n', '\n').replace('\r', '\n')
f = open(name + '.contents.rst_html', 'wb')
print >> f, final
f.close()
COMPOSITE_CONTENT = '''<?xml version='1.0' encoding='UTF-8'?>
<?compositeMetadataRepository version='1.0.0'?>
<repository name='"Eclipse Project Test Site"'
type='org.eclipse.equinox.internal.p2.metadata.repository.CompositeMetadataRepository' version='1.0.0'>
<properties size='1'>
<property name='p2.timestamp' value='{timestamp}'/>
</properties>
<children size='1'>
<child location='https://dl.bintray.com/fabioz/pydev/{version}'/>
</children>
</repository>
'''
COMPOSITE_ARTIFACTS = '''<?xml version='1.0' encoding='UTF-8'?>
<?compositeArtifactRepository version='1.0.0'?>
<repository name='"Eclipse Project Test Site"'
type='org.eclipse.equinox.internal.p2.artifact.repository.CompositeArtifactRepository' version='1.0.0'>
<properties size='1'>
<property name='p2.timestamp' value='{timestamp}'/>
</properties>
  <children size='1'>
<child location='https://dl.bintray.com/fabioz/pydev/{version}'/>
</children>
</repository>
'''
INDEX_CONTENTS = '''<!DOCTYPE html>
<html>
<head></head>
<body>PyDev update site aggregator.<br>
<br>
Bundles the following PyDev update site(s):<br>
<br>
<a href="https://dl.bintray.com/fabioz/pydev/{version}">https://dl.bintray.com/fabioz/pydev/{version}</a><br>
</body>
</html>
'''
#=======================================================================================================================
# GenerateRstInDir
#=======================================================================================================================
def GenerateRstInDir(d, is_new_homepage=False):
for f in os.listdir(d):
if f.endswith('.rst'):
BuildFromRst(f, is_new_homepage)
if __name__ == '__main__':
this_script_dir = os.path.realpath(os.path.abspath(this_script_dir))
print 'Directory with this script:', this_script_dir
print 'Generating rst for homepage'
os.chdir(os.path.join(this_script_dir, 'homepage'))
# Copy the update site redirections
shutil.rmtree(os.path.join('final', 'updates'), ignore_errors=True)
shutil.copytree('updates', os.path.join('final', 'updates'))
shutil.rmtree(os.path.join('final', 'nightly'), ignore_errors=True)
shutil.copytree('nightly', os.path.join('final', 'nightly'))
import time
timestamp = str(int(time.time()))
def make_update_site_at_dir(directory, version, force):
try:
os.mkdir(directory)
except:
pass
xml1 = os.path.join(directory, 'compositeArtifacts.xml')
if force or not os.path.exists(xml1):
with open(xml1, 'w') as stream:
stream.write(COMPOSITE_ARTIFACTS.replace('{version}', version).replace('{timestamp}', timestamp))
xml2 = os.path.join(directory, 'compositeContent.xml')
if force or not os.path.exists(xml2):
with open(xml2, 'w') as stream:
stream.write(COMPOSITE_CONTENT.replace('{version}', version).replace('{timestamp}', timestamp))
html = os.path.join(directory, 'index.html')
if force or not os.path.exists(html):
with open(html, 'w') as stream:
stream.write(INDEX_CONTENTS.replace('{version}', version).replace('{timestamp}', timestamp))
make_update_site_at_dir(os.path.join('final', 'updates'), LAST_VERSION_TAG, force=True)
make_update_site_at_dir(os.path.join('final', 'nightly'), LAST_VERSION_TAG, force=True)
for update_site_version in update_site_versions:
make_update_site_at_dir(os.path.join('final', 'update_sites', update_site_version), update_site_version, force=False)
shutil.copyfile('stylesheet.css', os.path.join('final', 'stylesheet.css'))
shutil.copyfile('favicon.ico', os.path.join('final', 'favicon.ico'))
shutil.copyfile('pydev_certificate.cer', os.path.join('final', 'pydev_certificate.cer'))
shutil.copyfile('video_pydev_20.html', os.path.join('final', 'video_pydev_20.html'))
shutil.copyfile('video_swfobject.js', os.path.join('final', 'video_swfobject.js'))
GenerateRstInDir('.', True)
sys.path.insert(0, os.path.join(this_script_dir, 'homepage', 'scripts'))
sys.path.insert(0, '.')
# print 'PYTHONPATH changed. Using:'
# for p in sys.path:
# print ' - ', p
os.chdir(os.path.join(this_script_dir, 'homepage', 'scripts'))
import build_merged # @UnresolvedImport
os.chdir(os.path.join(this_script_dir, 'homepage'))
build_merged.LAST_VERSION_TAG = LAST_VERSION_TAG
build_merged.CURRENT_DATE = CURRENT_DATE
build_merged.DoIt()
sys.stdout.write('Finished\n')
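Note the templating convention used by make_update_site_at_dir above: plain str.replace on the {version} and {timestamp} markers. A standalone sketch of the same scheme:

```python
def render(template, version, timestamp):
    """Same substitution scheme as make_update_site_at_dir above."""
    return template.replace('{version}', version).replace('{timestamp}', timestamp)

child = "<child location='https://dl.bintray.com/fabioz/pydev/{version}'/>"
assert render(child, '6.3.2', '0') == \
    "<child location='https://dl.bintray.com/fabioz/pydev/6.3.2'/>"
```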
|
Bill: Nelson Amendment to S 2201, the Hollings privacy bill, 5/16/02.
Amendment to Manager's Amendment to S 2201, the Online Personal Privacy Act 2002.
Offered by Sen. Bill Nelson (D-FL).
This amendment, and the manager's amendment, as amended, were both approved at the Senate Commerce Committee's mark up meeting of May 16, 2002.
Purpose: To require that each internet service provider, online service provider, and operator of a commercial website designate a privacy compliance officer, and for other purposes.
IN THE SENATE OF THE UNITED STATES—107th Cong., 2d Sess.
To protect the online privacy of individuals who use the Internet.
(c) COMPLIANCE OFFICERS.—Each internet service provider, online service provider, and operator of a commercial website shall designate a privacy compliance officer, who shall be responsible for ensuring compliance with the requirements of this title and the privacy policies of that provider or operator.
|
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (C) 2011 ~ 2012 Deepin, Inc.
# 2011 ~ 2012 Hou Shaohui
#
# Author: Hou Shaohui <houshao55@gmail.com>
# Maintainer: Hou Shaohui <houshao55@gmail.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import dbus
import dbus.service
from dbus_utils import DBusProperty, DBusIntrospectable, type_convert
from events import event_manager
from common import Storage
REASON_EXPIRED = 1 # The notification expired.
REASON_DISMISSED = 2 # The notification was dismissed by the user.
REASON_CLOSED = 3 # The notification was closed by a call to CloseNotification.
REASON_UNDEFINED = 4 # Undefined/reserved reasons.
SERVER_CAPABILITIES = [
"action-icons", # Supports using icons instead of text for displaying actions.
"actions", # The server will provide the specified actions to the user.
"body", # Supports body text.
"body-hyperlinks", # The server supports hyperlinks in the notifications.
"body-images", # The server supports images in the notifications.
"body-markup", # Supports markup in the body text.
"icon-multi", # The server will render an animation of all the frames in a given image array.
"icon-static", # Supports display of exactly 1 frame of any given image array.
"persistence", # The server supports persistence of notifications.
    "sound", # The server supports sounds on notifications.
]
DEFAULT_STANDARD_HINTS = Storage({
"action-icons" : False, # The icon name should be compliant with the Freedesktop.org Icon Naming Specification.
"category" : "", # The type of notification this is.
"desktop-entry" : "", # This specifies the name of the desktop filename representing the calling program.
"image-data" : "", # This is a raw data image format.
"image-path" : "", # Alternative way to define the notification image.
"resident" : False, # This hint is likely only useful when the server has the "persistence" capability.
"sound-file" : "", # The path to a sound file to play when the notification pops up.
"sound-name" : "", # A themeable named sound from the freedesktop.org sound naming specification.
"suppress-sound" : False, # Causes the server to suppress playing any sounds, if it has that ability.
"transient" : False,
"x" : None,
"y" : None,
"urgency" : 1 # 0 Low, 1 Normal, 2 Critical
})
class Notifications(DBusProperty, DBusIntrospectable, dbus.service.Object):
BUS_NAME = "org.freedesktop.Notifications"
PATH = "/org/freedesktop/Notifications"
NOTIFY_IFACE = "org.freedesktop.Notifications"
NOTIFY_ISPEC = """
<method name="CloseNotification">
<arg direction="in" name="id" type="u"/>
</method>
<method name="GetCapabilities">
<arg direction="out" name="caps" type="as"/>
</method>
<method name="GetServerInformation">
<arg direction="out" name="name" type="s"/>
<arg direction="out" name="vendor" type="s"/>
<arg direction="out" name="version" type="s"/>
<arg direction="out" name="spec_version" type="s"/>
</method>
<method name="Notify">
<arg direction="in" name="app_name" type="s" />
<arg direction="in" name="id" type="u" />
<arg direction="in" name="icon" type="s" />
<arg direction="in" name="summary" type="s" />
<arg direction="in" name="body" type="s" />
<arg direction="in" name="actions" type="as" />
<arg direction="in" name="hints" type="a{sv}" />
<arg direction="in" name="timeout" type="i" />
<arg direction="out" name="id" type="u" />
</method>
<signal name="NotificationClosed">
<arg name="id" type="u" />
<arg name="reason" type="u" />
</signal>
<signal name="ActionInvoked">
<arg name="id" type="u" />
<arg name="action_key" type="s" />
</signal>
"""
def __init__(self):
DBusIntrospectable.__init__(self)
DBusProperty.__init__(self)
self.set_introspection(self.NOTIFY_IFACE, self.NOTIFY_ISPEC)
bus = dbus.SessionBus()
name = dbus.service.BusName(self.BUS_NAME, bus)
dbus.service.Object.__init__(self, bus, self.PATH, name)
self.id_cursor = long(0)
@dbus.service.method(NOTIFY_IFACE, in_signature="u")
def CloseNotification(self, replaces_id):
return replaces_id
@dbus.service.method(NOTIFY_IFACE, out_signature="as")
def GetCapabilities(self):
return SERVER_CAPABILITIES
@dbus.service.method(NOTIFY_IFACE, out_signature="ssss")
def GetServerInformation(self):
return "Notifications", "LinuxDeepin", "0.1", "1.2"
@dbus.service.method(NOTIFY_IFACE, in_signature="susssasa{sv}i", out_signature="u")
def Notify(self, app_name, replaces_id, app_icon, summary, body, actions, hints, timeout):
notify_storage = Storage({"app_name" : type_convert.dbus2py(app_name),
"replaces_id" : type_convert.dbus2py(replaces_id),
"app_icon" : type_convert.dbus2py(app_icon),
"summary" : type_convert.dbus2py(summary),
"body" : type_convert.dbus2py(body),
"actions" : type_convert.dbus2py(actions),
"hints" : type_convert.dbus2py(hints),
"expire_timeout" : type_convert.dbus2py(timeout)})
        if replaces_id:
            notify_storage.id = replaces_id
        else:
            self.id_cursor += 1
            notify_storage.id = self.id_cursor
        event_manager.emit("notify", notify_storage)
        return notify_storage.id
@dbus.service.signal(NOTIFY_IFACE, signature='uu')
def NotificationClosed(self, id, reason):
pass
@dbus.service.signal(NOTIFY_IFACE, signature='us')
def ActionInvoked(self, id, action_key):
print id, action_key
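The ID bookkeeping in Notify above (reuse replaces_id when given, otherwise advance a cursor) is the part worth testing on its own; a standalone Python 3 sketch:

```python
class IdCursor:
    """Standalone mirror of the notification-ID allocation in Notify above."""
    def __init__(self):
        self.id_cursor = 0

    def allocate(self, replaces_id):
        if replaces_id:
            # Replacing an existing notification keeps its id.
            return replaces_id
        self.id_cursor += 1
        return self.id_cursor

cursor = IdCursor()
assert cursor.allocate(0) == 1   # fresh notification gets a new id
assert cursor.allocate(0) == 2
assert cursor.allocate(7) == 7   # replacement reuses the caller's id
assert cursor.allocate(0) == 3   # the cursor is unaffected by replacements
```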
|
The majority of people show the world what they want it to see, and this is "not" their real self. How often have you thought "what a likeable person", only to discover shortly afterwards that they are not so likeable, as an immense amount of emotional baggage (and possibly bad behaviour) has suddenly surfaced.
What happened here is that the ego's ability to show the world only what we are comfortable showing, while hiding some of our true feelings, failed, and our true feelings emerged. That which we so wanted to hide has become evident to all.
To further complicate matters, the majority of people do not like themselves and therefore only want to project what they like about themselves. The complexity and contradictions of modern day life mean we are often hiding more than we would like to and this often causes profound frustration as we are unable to fully express ourselves and perhaps not be able to show the beauty, love and compassion we have inside.
"What you see is what you get." – I'd like a few cents for every time I have heard that in my life, but what they should have said is "What I am going to project is what you are going to get".
Part of the modern-day man's problem is fragmentation, and this example shows a typical scenario. What we need is integration: bringing all aspects of ourselves into harmony.
One problem with the mind and emotions is that we often cannot "think" ourselves to clarity or stability, because the impressions we have gained in the mind are alive and dominant. So, the key is to be able to go beyond those thoughts, and we do this successfully through using FISU meditation techniques. Our techniques allow you to transcend beyond the conscious and subconscious levels of the mind and descend into the upper regions of the super-conscious self. This is the higher self within, our essence. Here, the energies are subtler and therefore more powerful. We begin drawing these energies into the patterned areas of the mind that are the storehouse of thought and emotion, and because these energies are of a subtler nature, they refine, rebalance and bring integration.
Without much effort, we become the observer of our thoughts, and in doing so, they begin to lose their hold over us.
Through integration, we are super calm, and the mirror of the mind is showing us who we are and with greater clarity. Combined with the strength drawn from the deeper regions of the mind, as this is where strength resides within us, we can face ourselves, and in doing so, we know ourselves better. All aspects of our self have become unified – integrated. We are operating in oneness and not with a divided personality. The ego is in check and is now purely the vehicle through which your higher or spiritual self expresses itself.
Now you are unified in a way you never thought possible. You are stronger, you know your mind more intimately, and of course, you now know yourself. You can now show all that beauty and love you have within with greater confidence and ease.
|
"""
This module contains classes that hold the position information of the robots
"""
__author__ = 'Wuersch Marcel'
__license__ = "GPLv3"
import time
import threading
from libraries.can import MsgSender
import numpy as np
class RobotPosition():
""" parent class for PositionMyRobot and PositionOtherRobot
    The objects of this class wait for position information over CAN and save it.
They also draw a map where the robot has been on the table.
"""
def __init__(self, can_socket, msg_type, size):
self.size = size
self.position = (0, 0)
self.angle = 0
self.lock = threading.Lock()
resolution = 200
table_size = 2000
        self.map = np.zeros((int(resolution * 1.5) + 1, resolution + 1))
self.scale = table_size / resolution
self.last_position_update = 0
self.last_angle_update = 0
self.new_position_data = []
can_socket.create_interrupt(msg_type, self.can_robot_position)
def get_new_position_lock(self):
""" returns a lock which gets released each time new position information is received.
:return: lock
"""
lock = threading.Lock()
self.new_position_data.append(lock)
return lock
def can_robot_position(self, can_msg):
""" waits for new position information, saves them and puts them in the map """
margin = int(200 / self.scale) # minimum distance to an object
# TODO: check sender ID (in case drive and navigation both send)
if can_msg['position_correct'] and can_msg['sender'] == MsgSender.Navigation.value:
x, y = can_msg['x_position'], can_msg['y_position']
with self.lock:
self.position = x, y
self.map[round(x / self.scale) - margin: round(x / self.scale) + margin,
round(y / self.scale) - margin: round(y / self.scale) + margin] += 1
for lock in self.new_position_data: # release all locks
lock.acquire(False)
lock.release()
self.last_position_update = time.time()
if can_msg['angle_correct']:
with self.lock:
self.angle = can_msg['angle'] / 100
self.last_angle_update = time.time()
def get_position(self):
"""
:return: position of the robot (x, y)
"""
with self.lock:
return self.position
def get_angle(self):
"""
:return: angle of the robot
"""
with self.lock:
return self.angle
def get_map(self):
"""
:return: map where the robot has been
"""
with self.lock:
return self.map
class PositionMyRobot(RobotPosition):
""" Holds the position information of the robot on which the program is running. """
def __init__(self, can_socket, msg_type, name, size=20):
super().__init__(can_socket, msg_type, size)
self.name = name
class PositionOtherRobot(RobotPosition):
""" Holds the position information of all other robots. """
def __init__(self, can_socket, msg_type, size=20):
super().__init__(can_socket, msg_type, size)
self.check_thread = threading.Thread(target=self.check_navigation)
        self.check_thread.daemon = True
self.check_thread.start()
def check_navigation(self):
        """ checks if the position information from the navigation system is too old """
while True:
now = time.time()
            if now - self.last_position_update > 0.5:
                self.position = None
            if now - self.last_angle_update > 0.5:
                self.angle = None
time.sleep(0.5) # TODO: set correct time
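The map bookkeeping in can_robot_position above reduces to a few lines of index arithmetic; a standalone sketch (constants mirror RobotPosition.__init__, with the grid dimension coerced to int so NumPy accepts it):

```python
import numpy as np

resolution = 200
table_size = 2000                      # table size in mm
scale = table_size / resolution        # 10 mm of table per map cell
margin = int(200 / scale)              # 200 mm safety margin -> 20 cells

grid = np.zeros((int(resolution * 1.5) + 1, resolution + 1))

# One position report at (1000 mm, 1000 mm), as in can_robot_position:
x, y = 1000, 1000
gx, gy = round(x / scale), round(y / scale)
grid[gx - margin:gx + margin, gy - margin:gy + margin] += 1

assert grid.shape == (301, 201)
assert grid.sum() == 40 * 40           # a margin-sized block was incremented once
```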
|
Dry Needling is a physical therapy treatment technique that utilizes filament needles to make changes in myofascial trigger points.
Myofascial trigger points are “knots” in muscles that can contribute to pain and decreased muscle function. Dry Needling is an effective tool in the hand of a physical therapist managing soft tissue injuries, disorders and pain.
What Can Dry Needling at Redbud Do For You?
All Redbud locations offer Dry Needling treatment by Certified therapists.
Click here for FAQs about Dry Needling.
|
###############################################################################
# Caleydo - Visualization for Molecular Biology - http://caleydo.org
# Copyright (c) The Caleydo Team. All rights reserved.
# Licensed under the new BSD license, available at http://caleydo.org/license
###############################################################################
from builtins import str
import phovea_server.plugin
import phovea_server.range
import phovea_server.util
from phovea_server.dataset_def import to_idtype_description
import itertools
_providers_r = None
def _providers():
global _providers_r
if _providers_r is None:
_providers_r = [p.load().factory() for p in phovea_server.plugin.list('dataset-provider')]
return _providers_r
def iter():
"""
an iterator of all known datasets
:return:
"""
return itertools.chain(*_providers())
def list_datasets():
"""
list all known datasets
:return:
"""
return list(iter())
def get(dataset_id):
"""
:param dataset_id:
:return: returns the selected dataset identified by id
"""
for p in _providers():
r = p[dataset_id]
if r is not None:
return r
return None
def add(desc, files=[], id=None):
"""
adds a new dataset to this storage
:param desc: the dict description information
:param files: a list of FileStorage
:param id: optional the unique id to use
:return: the newly created dataset or None if an error occurred
"""
for p in _providers():
r = p.upload(desc, files, id)
if r:
return r
return None
def update(dataset, desc, files=[]):
"""
updates the given dataset
:param dataset: a dataset or a dataset id
:param desc: the dict description information
:param files: a list of FileStorage
:return:
"""
old = get(dataset) if isinstance(dataset, str) else dataset
if old is None:
return add(desc, files)
r = old.update(desc, files)
return r
def remove(dataset):
"""
removes the given dataset
:param dataset: a dataset or a dataset id
:return: boolean whether the operation was successful
"""
old = get(dataset) if isinstance(dataset, str) else dataset
if old is None:
return False
for p in _providers():
if p.remove(old):
return True
return False
def list_idtypes():
tmp = dict()
for d in list_datasets():
for idtype in d.to_idtype_descriptions():
tmp[idtype['id']] = idtype
# also include the known elements from the mapping graph
mapping = get_mappingmanager()
for idtype_id in mapping.known_idtypes():
tmp[idtype_id] = to_idtype_description(idtype_id)
return list(tmp.values())
def get_idmanager():
return phovea_server.plugin.lookup('idmanager')
def get_mappingmanager():
return phovea_server.plugin.lookup('mappingmanager')
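The lookup in get() above chains the providers and returns the first non-None hit; a standalone sketch with a minimal dict-backed provider (DictProvider is a hypothetical stand-in, not part of the phovea API):

```python
class DictProvider:
    """Hypothetical provider supporting the indexing protocol get() relies on."""
    def __init__(self, data):
        self._data = data

    def __getitem__(self, dataset_id):
        return self._data.get(dataset_id)

providers = [DictProvider({'a': 'dataset-a'}), DictProvider({'b': 'dataset-b'})]

def get(dataset_id):
    # Same chaining as the module-level get() above.
    for p in providers:
        r = p[dataset_id]
        if r is not None:
            return r
    return None

assert get('a') == 'dataset-a'
assert get('b') == 'dataset-b'   # found in the second provider
assert get('missing') is None
```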
|
I am a developer and maker from Hobart, Tasmania. I am familiar with web development, Python, Linux, embedded electronics and games.
I provide computer support, including system maintenance and upgrades.
Particularly proficient with Arduino. Experience with AVR, TI MSP430 and Raspberry Pi. Can design circuit boards.
Cracked screen? Broken ports? Worn battery? I can repair most mobile devices and laptops.
|
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import threading
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_poller
zmq = zmq_async.import_zmq()
LOG = logging.getLogger(__name__)
class ThreadingPoller(zmq_poller.ZmqPoller):
def __init__(self):
self.poller = zmq.Poller()
self.sockets_and_recv_methods = {}
def register(self, socket, recv_method=None):
socket_handle = socket.handle
if socket_handle in self.sockets_and_recv_methods:
return
LOG.debug("Registering socket %s", socket_handle.identity)
self.sockets_and_recv_methods[socket_handle] = (socket, recv_method)
self.poller.register(socket_handle, zmq.POLLIN)
def unregister(self, socket):
socket_handle = socket.handle
socket_and_recv_method = \
self.sockets_and_recv_methods.pop(socket_handle, None)
if socket_and_recv_method:
LOG.debug("Unregistering socket %s", socket_handle.identity)
self.poller.unregister(socket_handle)
def poll(self, timeout=None):
if timeout is not None and timeout > 0:
timeout *= 1000 # convert seconds to milliseconds
socket_handles = {}
try:
socket_handles = dict(self.poller.poll(timeout=timeout))
except zmq.ZMQError as e:
LOG.debug("Polling terminated with error: %s", e)
if not socket_handles:
return None, None
for socket_handle in socket_handles:
socket, recv_method = self.sockets_and_recv_methods[socket_handle]
if recv_method:
return recv_method(socket), socket
else:
return socket.recv_multipart(), socket
def close(self):
pass # Nothing to do for threading poller
class ThreadingExecutor(zmq_poller.Executor):
def __init__(self, method):
self._method = method
thread = threading.Thread(target=self._loop)
thread.daemon = True
super(ThreadingExecutor, self).__init__(thread)
self._stop = threading.Event()
def _loop(self):
while not self._stop.is_set():
self._method()
def execute(self):
self.thread.start()
def stop(self):
self._stop.set()
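The ThreadingExecutor above is a standard stoppable daemon-loop: a threading.Event flags shutdown and the thread re-checks it on every iteration. A minimal self-contained sketch of the same pattern (no zmq_poller dependency; the names are illustrative):

```python
import threading
import time

class StoppableLoop:
    """Run a callable repeatedly on a daemon thread until stopped."""

    def __init__(self, method):
        self._method = method
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._loop)
        self._thread.daemon = True

    def _loop(self):
        # Re-check the stop flag between iterations, like ThreadingExecutor._loop
        while not self._stop.is_set():
            self._method()

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

ticks = []
loop = StoppableLoop(lambda: (ticks.append(1), time.sleep(0.01)))
loop.start()
time.sleep(0.05)
loop.stop()
```

Unlike the original, this sketch joins the thread in stop() so callers know the loop has fully exited.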
|
As you inspect the sweeping intersecting lines supporting the pencil-edge top of the Frisco Occasional Table Collection, you have to ask yourself, “How did they do that?” With sleek brushed bronze legs fused to the tempered glass with brushed nickel cubes, it’s still a mystery, but definitely a modern treasure!
Materials: 10mm glass top with pencil edge, Metal, Fused metal pucks.
All pieces are constructed of 10mm clear glass top over painted metal base, connected by square metal pucks.
|
from flask_wtf import Form
from flask_wtf.file import FileField, FileRequired
from wtforms import StringField, SubmitField, SelectField
from wtforms.validators import InputRequired, IPAddress, URL
from mass_flask_core.models import TLPLevelField
class FileSampleSubmitForm(Form):
file = FileField('File', validators=[FileRequired()])
tlp_level = SelectField('Sample privacy (TLP level)', coerce=int, choices=[
(TLPLevelField.TLP_LEVEL_WHITE, 'WHITE (unlimited)'),
(TLPLevelField.TLP_LEVEL_GREEN, 'GREEN (community)'),
(TLPLevelField.TLP_LEVEL_AMBER, 'AMBER (limited distribution)'),
(TLPLevelField.TLP_LEVEL_RED, 'RED (personal for named recipients)'),
])
submit = SubmitField()
class IPSampleSubmitForm(Form):
ip_address = StringField('IPv4/IPv6 address', validators=[InputRequired(), IPAddress()])
tlp_level = SelectField('Sample privacy (TLP level)', coerce=int, choices=[
(TLPLevelField.TLP_LEVEL_WHITE, 'WHITE (unlimited)'),
(TLPLevelField.TLP_LEVEL_GREEN, 'GREEN (community)'),
(TLPLevelField.TLP_LEVEL_AMBER, 'AMBER (limited distribution)'),
(TLPLevelField.TLP_LEVEL_RED, 'RED (personal for named recipients)'),
])
submit = SubmitField()
class DomainSampleSubmitForm(Form):
domain = StringField('Domain name', validators=[InputRequired()])
tlp_level = SelectField('Sample privacy (TLP level)', coerce=int, choices=[
(TLPLevelField.TLP_LEVEL_WHITE, 'WHITE (unlimited)'),
(TLPLevelField.TLP_LEVEL_GREEN, 'GREEN (community)'),
(TLPLevelField.TLP_LEVEL_AMBER, 'AMBER (limited distribution)'),
(TLPLevelField.TLP_LEVEL_RED, 'RED (personal for named recipients)'),
])
submit = SubmitField()
class URISampleSubmitForm(Form):
uri = StringField('URI', validators=[InputRequired(), URL()])
tlp_level = SelectField('Sample privacy (TLP level)', coerce=int, choices=[
(TLPLevelField.TLP_LEVEL_WHITE, 'WHITE (unlimited)'),
(TLPLevelField.TLP_LEVEL_GREEN, 'GREEN (community)'),
(TLPLevelField.TLP_LEVEL_AMBER, 'AMBER (limited distribution)'),
(TLPLevelField.TLP_LEVEL_RED, 'RED (personal for named recipients)'),
])
submit = SubmitField()
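All four forms repeat the same TLP choice list. A hedged refactor (the numeric values below are illustrative stand-ins for the TLPLevelField constants) shares it as a module-level constant that each SelectField can reference:

```python
# Shared TLP level choices; in the real module the first tuple element
# would be TLPLevelField.TLP_LEVEL_WHITE and friends.
TLP_CHOICES = [
    (0, 'WHITE (unlimited)'),
    (1, 'GREEN (community)'),
    (2, 'AMBER (limited distribution)'),
    (3, 'RED (personal for named recipients)'),
]

def tlp_select_kwargs():
    """Keyword arguments shared by every TLP SelectField."""
    return {'coerce': int, 'choices': TLP_CHOICES}
```

Each form could then declare `tlp_level = SelectField('Sample privacy (TLP level)', **tlp_select_kwargs())`, so adding a TLP level means touching one list instead of four.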
|
An evening with former soldier and debut novelist Harry Parker, in partnership with ShortList.
Join ShortList and Faber & Faber for an evening with the hottest debut novelist of 2016, Harry Parker. Parker’s forthcoming book has already been praised by the likes of Hilary Mantel (Wolf Hall) and General (Ret) David Petraeus, Commander of the International Security Assistance Force in Afghanistan, 2010-11.
In his book Anatomy of a Soldier, Harry Parker tells the heart-stopping story of Captain Tom Barnes who is leading British troops in a war zone. The story opens with one catastrophic act of violence, and on all sides of the conflict people become caught up in the destruction - from the man who trains one boy to fight the infidel invaders to Barnes' family waiting for him to return home. We see them not as they see themselves, but as all the objects surrounding them do: a bicycle, a drone, an exploding IED, a beer glass, dog tags and weaponry.
Harry Parker joined the British Army when he was twenty-three and served in Iraq and Afghanistan. He is now a writer and an artist, and lives in London.
The evening represents the first London on-stage conversation Harry will do for Anatomy of a Soldier. He’ll do a live Q&A with ShortList editor-at-large and author David Whitehouse, followed by questions from the audience.
Please note: guests may be subject to a baggage search upon entry as this venue is property of the British military.
Thursday, 10 March, 6.30pm. Venue: The Rifles Club, 52-56 Davies St, Mayfair, London, W1K 5HR. Ticket price: £10 (includes a copy of the book and a beer). Your confirmation email is your ticket.
|
#
# Banana banana banana
#
# ~ ABorgna
#
#
# Explanation in 'README-esp.txt'.
#
#
# Packets are lists with n items, n >= 2;
# 1st value: FUNC - int
# following:  DATA - int || list || NumpyArray_t
# Includes
import constants as const
from sockets import *
from transmitter import *
from pygame import Surface,surfarray
import numpy as np
#from numpy import ndarray as NumpyArray_t
from sys import stderr
class ResponseError(Exception):
pass
class BadResponseError(ResponseError):
pass
class NullResponseError(ResponseError):
pass
class Driver(object):
"""
Interface with the POV display
"""
def __init__(self, socket, res, depth=1):
super(Driver, self).__init__()
# Variables
self.resolution = res
self.depth = depth
# The array buffer
if res[1] % 8:
raise ValueError("The display height must be a multiple of 8")
# Image buffer, the data to transmit
self.buffer = np.empty((res[0],res[1],3),dtype=np.uint8)
# Creates the transmitter and connects with the device
self.transmitter = Transmitter(socket)
self.transmitter.start()
# Set the resolution on the device
#self.setTotalWidth(res[0])
self.setResolution(res)
self.setDepth(depth)
self.setDim(0)
        # Go
        self.syncro()
def _send(self,packet,errorStr="Transmission error",retries=0):
"""
Sends the packet
and checks the response for error codes (0xff00-0xfffe)
Response:
>= 0 - Response
< 0 - Error
None - No response
"""
if retries >= 0:
retries += 1
while retries:
retries -= 1
self.transmitter.send(packet)
r = self.transmitter.recv()
            if r is None:
if not retries:
stderr.write(errorStr+", couldn't get response\n")
return None
elif 0xffff > r >= 0xff00:
stderr.write(errorStr+", {:#x}\n".format(r))
return -r
else:
return r
def _send_noRcv(self,packet):
"""
Sends the packet,
doesn't wait for the operation to finish
"""
self.transmitter.send(packet)
# Special commands
def ping(self):
r = self._send((const.PING|const.GET,),"Error when pinging")
        return r is not None
def syncro(self):
        self._send((const.STORE|const.SET,),"Error: Synchronization went bad :(")
def clean(self):
self._send((const.CLEAN|const.SET,),"Error cleaning the display")
# Variable setters
def setResolution(self,res):
if res[1] % 8:
raise ValueError("The display height must be a multiple of 8")
self.transmitter.txJoin()
# Height
self._send((const.HEIGHT|const.SET,res[1]),"Error setting the resolution")
# Width
self._send((const.WIDTH|const.SET,res[0]),"Error setting the resolution")
# Resizes the buffer
buffer = np.empty((res[0],res[1],3),dtype=np.uint8)
buffer[0:len(self.buffer)] = self.buffer
self.buffer = buffer
def setDepth(self,depth):
self.transmitter.txJoin()
self._send((const.DEPTH|const.SET,depth),"Error setting the depth")
def setTotalWidth(self,width):
self._send((const.TOTAL_WIDTH|const.SET,width),"Error setting the total width")
def setSpeed(self,s):
self._send((const.SPEED|const.SET,s),"Error setting the speed")
def setDim(self,s):
self._send((const.DIMM|const.SET,s),"Error setting the dimm")
# Variable getters
def getFPS(self):
return self._send((const.FPS|const.GET,),"Error getting the fps")
def getResolution(self):
# Height
h = self._send((const.HEIGHT|const.GET,),"Error getting the resolution")
# Width
        w = self._send((const.WIDTH|const.GET,),"Error getting the resolution")
return (w,h)
def getDepth(self):
return self._send((const.DEPTH|const.GET,),"Error getting the depth")
def getTotalWidth(self):
return self._send((const.TOTAL_WIDTH|const.GET,),"Error getting the total width")
def getSpeed(self):
return self._send((const.SPEED|const.GET,),"Error getting the speed")
def getDim(self):
return self._send((const.DIMM|const.GET,),"Error getting the dimm")
# Pygame data writers
def pgBlit(self,surface):
# Copy the matrix as a numpy array
self.buffer = np.copy(surfarray.pixels3d(surface).flatten())
        # If there isn't already a burst task in the queue, create one
if not self.transmitter.burstInQueue.isSet():
self.transmitter.burstInQueue.set()
self._send_noRcv([const.BURST|const.DATA, self.buffer])
def pgBlitColumn(self,surface,pos):
# Copy the column to a numpy array
self.buffer[pos:pos+1] = np.copy(surfarray.pixels3d(surface).flatten())
        # If there isn't already a burst task in the queue, create a write_column task
if not self.transmitter.burstInQueue.isSet():
self._send_noRcv([const.WRITE_COLUMN|const.DATA, pos, self.buffer[pos:pos+1]])
    def pgBlitSection(self,surface,pos,length):
        # Copy the section to a numpy array
        self.buffer[pos:pos+length] = np.copy(surfarray.pixels3d(surface).flatten())
        # If there isn't already a burst task in the queue, create a write_section task
        if not self.transmitter.burstInQueue.isSet():
            self._send_noRcv([const.WRITE_SECTION|const.DATA, pos, length,
                              self.buffer[pos:pos+length]])
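The response handling in `_send()` partitions device replies into three cases: no reply, an error code in the 0xff00-0xfffe window, or a plain result. That classification can be isolated for testing (an illustrative helper, not part of the original API):

```python
def classify_response(r):
    """Mirror _send()'s branches: None means the device never answered,
    values in [0xff00, 0xfffe] are device error codes, anything else
    is a normal result."""
    if r is None:
        return "no-response"
    if 0xffff > r >= 0xff00:
        return "error"
    return "result"
```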
|
Dr. Joanna Castaldi, PHD is a clinical psychologist in Richmond, VA. She specializes in clinical psychology.
|
__author__ = 'Neil Butcher'
from PyQt4 import QtGui, QtCore
import widget_core
from Rota_System.UI.Appointments import AppointmentsListWidget
class EventTemplateWidget(QtGui.QWidget):
commandIssued = QtCore.pyqtSignal(QtGui.QUndoCommand)
criticalCommandIssued = QtCore.pyqtSignal()
def __init__(self, parent=None):
QtGui.QWidget.__init__(self, parent)
self.layout = QtGui.QVBoxLayout(self)
self.core_widget = widget_core.EventTemplateWidget(self)
self.layout.addWidget(self.core_widget)
self.core_widget.commandIssued.connect(self.emitCommand)
self.core_widget.criticalCommandIssued.connect(self.emitCriticalCommand)
self.appointment_widget = AppointmentsListWidget(self)
self.layout.addWidget(self.appointment_widget)
self.appointment_widget.commandIssued.connect(self.emitCommand)
self.appointment_widget.criticalCommandIssued.connect(self.emitCriticalCommand)
@QtCore.pyqtSlot(QtCore.QObject)
def setEvent(self, item):
self.core_widget.setEvent(item)
self.appointment_widget.setEvent(item)
@QtCore.pyqtSlot(QtGui.QUndoCommand)
def emitCommand(self, command):
self.commandIssued.emit(command)
@QtCore.pyqtSlot()
def emitCriticalCommand(self):
self.criticalCommandIssued.emit()
import sys
from Rota_System.Roles import Role, GlobalRoleList
from Rota_System import Events
from Rota_System.UI.model_undo import MasterUndoModel
def main():
GlobalRoleList.add_role(Role('Baker', 'B', 2))
GlobalRoleList.add_role(Role('Steward', 'S', 9))
GlobalRoleList.add_role(Role('Fisherman', 'F', 7))
m = MasterUndoModel()
app = QtGui.QApplication(sys.argv)
w = EventTemplateWidget(None)
e = Events.Event(None)
w.setEvent(e)
m.add_command_contributer(w)
w.show()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
|
You may not have heard about it before, but this coverage “forgives” the surcharge associated with a single at-fault accident. If you are a new driver you can only benefit from this coverage if you have already qualified for the Excellent Driver Discount Plus under the Safe Driver Insurance Plan. If you already have insurance you can add this coverage to your plan, but you must first meet three requirements: your policy must have been active for the last 2 years, you must qualify for the Excellent Driver Discount Plus, and you must meet any other requirements imposed by your insurance company.
Another interesting coverage plan is the Disappearing Deductible. What it basically does is provide an automatic $100 credit per vehicle for every year that you are claim-free. You can apply it against the collision deductible, up to a maximum of $500 per vehicle. This can be a great addition to your discounts for Houston car insurance.
Have you just purchased a brand new car? This is great news, because Loan or Lease Gap coverage will pay the difference between the loss and the balance owed on loaned/leased vehicles. The only requirement is for the car to have fewer than 45,000 miles and to have been purchased within the last 36 months.
If these three points are met, your enhancement and repairs will apply in case of collision, limited collision and comprehensive coverage for your vehicles.
Claims can be a drag, but this will no longer be the case if you have Door-2-Door claim service. It ensures that your vehicle is picked up and repaired while you drive a rental. Once your car is restored to its former condition, it will be returned and the rental picked up. Another great thing about this coverage is that repairs done by the referral shop are guaranteed for as long as you own/lease the car.
There are other discounts for Houston car insurance available, but these ones are the ones with the best rates.
|
#!/usr/bin/env python
# Copyright 2018-present Facebook, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import errno
import glob
import json
import os
import os.path
import platform
import time
import uuid
from timing import monotonic_time_nanos
def create_symlink(original, symlink):
if platform.system() == "Windows":
# Not worth dealing with the convenience symlink on Windows.
return
else:
(symlink_dir, symlink_file) = os.path.split(symlink)
# Avoid race conditions with other processes by:
#
# 1) Creating a symlink /path/to/.symlink_file.UUID -> /path/to/original
# 2) Atomically renaming /path/to/.symlink_file.UUID -> /path/to/symlink_file
#
# If another process races with this one, the most recent one wins, which
# is the behavior we want.
temp_symlink_filename = ".{0}.{1}".format(symlink_file, uuid.uuid4())
temp_symlink_path = os.path.join(symlink_dir, temp_symlink_filename)
os.symlink(original, temp_symlink_path)
os.rename(temp_symlink_path, symlink)
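The create-then-rename trick is not specific to symlinks; the same two steps give atomic replacement for regular files, since `os.rename` within one filesystem is atomic on POSIX. A minimal sketch (the helper name is illustrative):

```python
import os
import tempfile
import uuid

def atomic_write(path, data):
    """Write data so that readers see either the old or the new file
    contents, never a half-written one."""
    directory, name = os.path.split(path)
    tmp = os.path.join(directory, ".{0}.{1}".format(name, uuid.uuid4()))
    with open(tmp, "w") as f:
        f.write(data)
    os.rename(tmp, path)  # atomic replacement, as in create_symlink above

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "launch.trace")
atomic_write(target, '{"ok": true}')
atomic_write(target, '{"ok": false}')  # replaces without a partial state
```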
class _TraceEventPhases(object):
BEGIN = "B"
END = "E"
IMMEDIATE = "I"
COUNTER = "C"
ASYNC_START = "S"
ASYNC_FINISH = "F"
OBJECT_SNAPSHOT = "O"
OBJECT_NEW = "N"
OBJECT_DELETE = "D"
METADATA = "M"
class Tracing(object):
_trace_events = [
{
"name": "process_name",
"ph": _TraceEventPhases.METADATA,
"pid": os.getpid(),
"args": {"name": "buck.py"},
}
]
def __init__(self, name, args={}):
self.name = name
self.args = args
self.pid = os.getpid()
def __enter__(self):
now_us = monotonic_time_nanos() / 1000
self._add_trace_event(
"buck-launcher",
self.name,
_TraceEventPhases.BEGIN,
self.pid,
1,
now_us,
self.args,
)
def __exit__(self, x_type, x_value, x_traceback):
now_us = monotonic_time_nanos() / 1000
self._add_trace_event(
"buck-launcher",
self.name,
_TraceEventPhases.END,
self.pid,
1,
now_us,
self.args,
)
@staticmethod
def _add_trace_event(category, name, phase, pid, tid, ts, args):
Tracing._trace_events.append(
{
"cat": category,
"name": name,
"ph": phase,
"pid": pid,
"tid": tid,
"ts": ts,
"args": args,
}
)
@staticmethod
def write_to_dir(buck_log_dir, build_id):
filename_time = time.strftime("%Y-%m-%d.%H-%M-%S")
trace_filename = os.path.join(
buck_log_dir, "launch.{0}.{1}.trace".format(filename_time, build_id)
)
trace_filename_link = os.path.join(buck_log_dir, "launch.trace")
try:
os.makedirs(buck_log_dir)
except OSError as e:
if e.errno != errno.EEXIST:
raise
with open(trace_filename, "w") as f:
json.dump(Tracing._trace_events, f)
create_symlink(trace_filename, trace_filename_link)
Tracing.clean_up_old_logs(buck_log_dir)
@staticmethod
def clean_up_old_logs(buck_log_dir, logs_to_keep=25):
traces = filter(
os.path.isfile, glob.glob(os.path.join(buck_log_dir, "launch.*.trace"))
)
try:
traces = sorted(traces, key=os.path.getmtime)
for f in traces[:-logs_to_keep]:
os.remove(f)
except OSError:
return # a concurrent run cleaned up the logs
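The retention rule in clean_up_old_logs (sort by mtime, drop everything but the newest N) can be sketched standalone:

```python
import os
import tempfile

def keep_newest(paths, n):
    """Return the n most recently modified paths, oldest-to-newest."""
    return sorted(paths, key=os.path.getmtime)[-n:]

workdir = tempfile.mkdtemp()
traces = []
for i in range(5):
    p = os.path.join(workdir, "launch.{0}.trace".format(i))
    with open(p, "w") as f:
        f.write("[]")
    os.utime(p, (i, i))  # force distinct, increasing mtimes
    traces.append(p)

survivors = keep_newest(traces, 3)
```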
|
WOBURN, MA--(Marketwire - March 7, 2011) - Axceler, the leader in Microsoft SharePoint administration and migration software, today announced the expansion of its presence in the Australia and New Zealand region with the opening of a new office in Sydney, bronze sponsorship of the upcoming Australia and New Zealand SharePoint Conferences, and the appointment of a new Channel Sales Manager for the Asia Pacific region. Axceler's ControlPoint, the company's award-winning SharePoint administration software, helps enterprises simplify, optimize and secure their large and complex SharePoint environments, which increasingly span international borders.
SharePoint is the fastest growing Microsoft product of all time, and the market for SharePoint add-on products continues to increase at an equally rapid pace. Axceler's expansion to Australia and New Zealand comes at a time of substantial growth in the SharePoint market both in the region and globally. The move also follows a year of record-breaking 145 percent growth overall for Axceler in 2010. More than one-third of all Axceler revenue now comes from outside the U.S., and the company has broadened its customer base globally and increased global staff by 50 percent.
"We've found two compelling sources of market opportunity that led us to make this move: SharePoint market potential in the Australia and New Zealand region, and the rising number of multinational organizations with extensive, global SharePoint deployments," said Michael Alden, President and CEO, Axceler. "Enterprises across many industries are deploying SharePoint as a strategic and increasingly global platform. We're seeing overwhelming demand for SharePoint add-on products, particularly in the area of SharePoint governance. This year we expect our growth to further increase as more and more organizations make SharePoint a bigger part of their IT strategy."
Also announced was the appointment of Vijay Raghvani, who will be based in the new Sydney office, as Channel Sales Manager for the Asia Pacific region. Raghvani will focus on developing the Asia Pacific channel for Axceler products starting with Australia and New Zealand. Prior to joining Axceler, he was working in the Microsoft channel for a leading SharePoint partner, and with Canon Australia in the solutions business. Raghvani moved to Australia in 2003 to join a leading Microsoft Dynamics partner after spending five years at Microsoft in England, where he also completed his BA in information systems.
The company's new Australian office is located at 100 Walker St, North Sydney, NSW 2059.
ControlPoint, the leading SharePoint administration product and winner of the most recent Best SharePoint Product award, includes comprehensive permissions management, in-depth activity and storage analysis and the ability to measure performance of SharePoint environments against governance policies. It also gives administrators complete control over the configuration and deployment of their SharePoint environments.
Davinci Migrator offers comprehensive, risk-based control when moving to the latest SharePoint platform, from either SharePoint 2003 or 2007. It reduces the risks, lowers the overall cost and shortens the time it takes to complete a SharePoint 2010 migration. By helping administrators with the discovery and planning of their migrations as well as managing metadata and taxonomies, Davinci Migrator reduces the number of failed migration attempts and dramatically shortens project schedules.
Copyright 2011, Axceler. Axceler and Axceler ControlPoint are registered trademarks. All other trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.
|
"""
homeassistant.components.binary_sensor.template
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Support for exposing a templated binary_sensor
"""
import logging
from homeassistant.components.binary_sensor import (BinarySensorDevice,
DOMAIN,
SENSOR_CLASSES)
from homeassistant.const import ATTR_FRIENDLY_NAME, CONF_VALUE_TEMPLATE
from homeassistant.core import EVENT_STATE_CHANGED
from homeassistant.exceptions import TemplateError
from homeassistant.helpers.entity import generate_entity_id
from homeassistant.helpers import template
from homeassistant.util import slugify
ENTITY_ID_FORMAT = DOMAIN + '.{}'
CONF_SENSORS = 'sensors'
_LOGGER = logging.getLogger(__name__)
def setup_platform(hass, config, add_devices, discovery_info=None):
"""Setup template binary sensors."""
sensors = []
if config.get(CONF_SENSORS) is None:
_LOGGER.error('Missing configuration data for binary_sensor platform')
return False
for device, device_config in config[CONF_SENSORS].items():
if device != slugify(device):
_LOGGER.error('Found invalid key for binary_sensor.template: %s. '
'Use %s instead', device, slugify(device))
continue
if not isinstance(device_config, dict):
_LOGGER.error('Missing configuration data for binary_sensor %s',
device)
continue
friendly_name = device_config.get(ATTR_FRIENDLY_NAME, device)
sensor_class = device_config.get('sensor_class')
value_template = device_config.get(CONF_VALUE_TEMPLATE)
if sensor_class not in SENSOR_CLASSES:
_LOGGER.error('Sensor class is not valid')
continue
if value_template is None:
_LOGGER.error(
'Missing %s for sensor %s', CONF_VALUE_TEMPLATE, device)
continue
sensors.append(
BinarySensorTemplate(
hass,
device,
friendly_name,
sensor_class,
value_template)
)
if not sensors:
_LOGGER.error('No sensors added')
return False
add_devices(sensors)
return True
class BinarySensorTemplate(BinarySensorDevice):
"""A virtual binary_sensor that triggers from another sensor."""
# pylint: disable=too-many-arguments
def __init__(self, hass, device, friendly_name, sensor_class,
value_template):
self._hass = hass
self._device = device
self._name = friendly_name
self._sensor_class = sensor_class
self._template = value_template
self._state = None
self.entity_id = generate_entity_id(
ENTITY_ID_FORMAT, device,
hass=hass)
_LOGGER.info('Started template sensor %s', device)
hass.bus.listen(EVENT_STATE_CHANGED, self._event_listener)
def _event_listener(self, event):
self.update_ha_state(True)
@property
def should_poll(self):
return False
@property
def sensor_class(self):
return self._sensor_class
@property
def name(self):
return self._name
@property
def is_on(self):
return self._state
def update(self):
try:
value = template.render(self._hass, self._template)
except TemplateError as ex:
if ex.args and ex.args[0].startswith(
"UndefinedError: 'None' has no attribute"):
# Common during HA startup - so just a warning
_LOGGER.warning(ex)
return
_LOGGER.error(ex)
value = 'false'
self._state = value.lower() == 'true'
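The string-to-boolean step at the end of update() is easy to misread: only the literal rendered string 'true' switches the sensor on. Isolated as a helper for clarity (illustrative name, same semantics as the original line):

```python
def template_result_to_bool(value):
    """Only the rendered string 'true', case-insensitively, turns the
    binary sensor on; '1', 'on', 'yes' etc. all evaluate to off."""
    return value.lower() == 'true'
```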
|
How to install the Mob Spawner skin for Minecraft?
Save the skin image to your desktop and rename it “steve”.
Delete or rename the existing “steve” image.
Paste the downloaded skin image in its place.
Become a whole Mob Spawner!
|
#
# startup_utils.py - code used during early startup with minimal dependencies
#
# Copyright (C) 2014 Red Hat, Inc.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions of
# the GNU General Public License v.2, or (at your option) any later version.
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY expressed or implied, including the implied warranties of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details. You should have received a copy of the
# GNU General Public License along with this program; if not, write to the
# Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA. Any Red Hat trademarks that are incorporated in the
# source code or documentation are not subject to the GNU General Public
# License and may only be used or replicated with the express permission of
# Red Hat, Inc.
#
from pyanaconda.i18n import _
import logging
log = logging.getLogger("anaconda")
stdout_log = logging.getLogger("anaconda.stdout")
import sys
import time
import imp
import os
from pyanaconda import iutil
from pyanaconda import product
from pyanaconda import constants
from pyanaconda import geoloc
from pyanaconda import anaconda_log
from pyanaconda import network
from pyanaconda import safe_dbus
from pyanaconda import kickstart
from pyanaconda.flags import flags
from pyanaconda.flags import can_touch_runtime_system
from pyanaconda.screensaver import inhibit_screensaver
import blivet
def module_exists(module_path):
    """Report whether a given module exists in the current module import path.
    Supports checking both modules ("foo") and submodules ("foo.bar.baz")
:param str module_path: (sub)module identifier
:returns: True if (sub)module exists in path, False if not
:rtype: bool
"""
module_path_components = module_path.split(".")
module_name = module_path_components.pop()
parent_module_path = None
if module_path_components:
# the path specifies a submodule ("bar.foo")
# we need to chain-import all the modules in the submodule path before
# we can check if the submodule itself exists
for name in module_path_components:
module_info = imp.find_module(name, parent_module_path)
module = imp.load_module(name, *module_info)
if module:
parent_module_path = module.__path__
else:
# one of the parents was not found, abort search
return False
# if we got this far we should have either some path or the module is
# not a submodule and the default set of paths will be used (path=None)
try:
# if the module is not found imp raises an ImportError
imp.find_module(module_name, parent_module_path)
return True
except ImportError:
return False
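On Python 3 the deprecated imp chain above can be replaced by importlib.util.find_spec, which resolves dotted submodule paths directly; a hedged sketch of an equivalent check:

```python
import importlib.util

def module_exists_modern(module_path):
    """True if the (sub)module can be located, without manually
    chain-importing each parent as the imp-based version does."""
    try:
        return importlib.util.find_spec(module_path) is not None
    except (ImportError, ValueError):
        # a parent package is missing, or the path is malformed
        return False
```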
def get_anaconda_version_string():
"""Return a string describing current Anaconda version.
If the current version can't be determined the string
"unknown" will be returned.
:returns: string describing Anaconda version
:rtype: str
"""
# we are importing the version module directly so that we don't drag in any
# non-necessary stuff; we also need to handle the possibility of the
# import itself failing
if module_exists("pyanaconda.version"):
# Ignore pylint not finding the version module, since thanks to automake
# there's a good chance that version.py is not in the same directory as
# the rest of pyanaconda.
from pyanaconda import version # pylint: disable=no-name-in-module
return version.__version__
else:
return "unknown"
def gtk_warning(title, reason):
"""A simple warning dialog for use during early startup of the Anaconda GUI.
:param str title: title of the warning dialog
:param str reason: warning message
TODO: this should be abstracted out to some kind of a "warning API" + UI code
that shows the actual warning
"""
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk
dialog = Gtk.MessageDialog(type=Gtk.MessageType.ERROR,
buttons=Gtk.ButtonsType.CLOSE,
message_format=reason)
dialog.set_title(title)
dialog.run()
dialog.destroy()
def check_memory(anaconda, options, display_mode=None):
    """Check if the system has enough RAM for installation.
:param anaconda: instance of the Anaconda class
:param options: command line/boot options
:param display_mode: a display mode to use for the check
(graphical mode usually needs more RAM, etc.)
"""
from pyanaconda import isys
reason_strict = _("%(product_name)s requires %(needed_ram)s MB of memory to "
"install, but you only have %(total_ram)s MB on this machine.\n")
reason_graphical = _("The %(product_name)s graphical installer requires %(needed_ram)s "
                             "MB of memory, but you only have %(total_ram)s MB.\n")
reboot_extra = _('\n'
'Press [Enter] to reboot your system.\n')
livecd_title = _("Not enough RAM")
livecd_extra = _(" Try the text mode installer by running:\n\n"
"'/usr/bin/liveinst -T'\n\n from a root terminal.")
nolivecd_extra = _(" Starting text mode.")
# skip the memory check in rescue mode
if options.rescue:
return
if not display_mode:
display_mode = anaconda.display_mode
reason = reason_strict
total_ram = int(isys.total_memory() / 1024)
needed_ram = int(isys.MIN_RAM)
graphical_ram = int(isys.MIN_GUI_RAM)
# count the squashfs.img in if it is kept in RAM
if not iutil.persistent_root_image():
needed_ram += isys.SQUASHFS_EXTRA_RAM
graphical_ram += isys.SQUASHFS_EXTRA_RAM
log.info("check_memory(): total:%s, needed:%s, graphical:%s",
total_ram, needed_ram, graphical_ram)
if not options.memcheck:
log.warning("CHECK_MEMORY DISABLED")
return
reason_args = {"product_name": product.productName,
"needed_ram": needed_ram,
"total_ram": total_ram}
if needed_ram > total_ram:
if options.liveinst:
# pylint: disable=logging-not-lazy
stdout_log.warning(reason % reason_args)
gtk_warning(livecd_title, reason % reason_args)
else:
reason += reboot_extra
print(reason % reason_args)
print(_("The installation cannot continue and the system will be rebooted"))
print(_("Press ENTER to continue"))
input()
iutil.ipmi_report(constants.IPMI_ABORTED)
sys.exit(1)
# override display mode if machine cannot nicely run X
if display_mode != constants.DisplayModes.TUI and not flags.usevnc:
needed_ram = graphical_ram
reason_args["needed_ram"] = graphical_ram
reason = reason_graphical
if needed_ram > total_ram:
if options.liveinst:
reason += livecd_extra
# pylint: disable=logging-not-lazy
stdout_log.warning(reason % reason_args)
title = livecd_title
gtk_warning(title, reason % reason_args)
iutil.ipmi_report(constants.IPMI_ABORTED)
sys.exit(1)
else:
reason += nolivecd_extra
# pylint: disable=logging-not-lazy
stdout_log.warning(reason % reason_args)
anaconda.display_mode = constants.DisplayModes.TUI
time.sleep(2)
def start_geolocation(provider_id=constants.GEOLOC_DEFAULT_PROVIDER):
"""Start an asynchronous geolocation attempt.
The data from geolocation is used to pre-select installation language and timezone.
:param str provider_id: geolocation provider id
"""
# check if the provider id is valid
parsed_id = geoloc.get_provider_id_from_option(provider_id)
if parsed_id is None:
log.error('geoloc: wrong provider id specified: %s', provider_id)
else:
provider_id = parsed_id
# instantiate the geolocation module and start location data refresh
geoloc.init_geolocation(provider_id=provider_id)
geoloc.refresh()
def setup_logging_from_options(options):
"""Configure logging according to Anaconda command line/boot options.
:param options: Anaconda command line/boot options
"""
if (options.debug or options.updateSrc) and not options.loglevel:
        # debugging means debug logging if an explicit level hasn't been set
options.loglevel = "debug"
if options.loglevel and options.loglevel in anaconda_log.logLevelMap:
log.info("Switching logging level to %s", options.loglevel)
level = anaconda_log.logLevelMap[options.loglevel]
anaconda_log.logger.loglevel = level
anaconda_log.setHandlersLevel(log, level)
storage_log = logging.getLogger("storage")
anaconda_log.setHandlersLevel(storage_log, level)
packaging_log = logging.getLogger("packaging")
anaconda_log.setHandlersLevel(packaging_log, level)
if can_touch_runtime_system("syslog setup"):
if options.syslog:
anaconda_log.logger.updateRemote(options.syslog)
if options.remotelog:
try:
host, port = options.remotelog.split(":", 1)
port = int(port)
anaconda_log.logger.setup_remotelog(host, port)
except ValueError:
            log.error("Could not set up remotelog with %s", options.remotelog)
def prompt_for_ssh():
"""Prompt the user to ssh to the installation environment on the s390."""
# Do some work here to get the ip addr / hostname to pass
# to the user.
import socket
ip = network.getFirstRealIP()
if not ip:
stdout_log.error("No IP addresses found, cannot continue installation.")
iutil.ipmi_report(constants.IPMI_ABORTED)
sys.exit(1)
ipstr = ip
try:
hinfo = socket.gethostbyaddr(ipstr)
except socket.herror as e:
stdout_log.debug("Exception caught trying to get host name of %s: %s", ipstr, e)
name = network.getHostname()
else:
        if len(hinfo) == 3:
            name = hinfo[0]
        else:
            name = network.getHostname()
if ip.find(':') != -1:
ipstr = "[%s]" % (ip,)
if (name is not None) and (not name.startswith('localhost')) and (ipstr is not None):
connxinfo = "%s (%s)" % (socket.getfqdn(name=name), ipstr,)
elif ipstr is not None:
connxinfo = "%s" % (ipstr,)
else:
connxinfo = None
if connxinfo:
stdout_log.info(_("Please ssh install@%s to begin the install."), connxinfo)
else:
stdout_log.info(_("Please ssh install@HOSTNAME to continue installation."))
def clean_pstore():
"""Remove files stored in nonvolatile ram created by the pstore subsystem.
Files in pstore are Linux (not distribution) specific, but we want to
make sure the entirety of them are removed so as to ensure that there
is sufficient free space on the flash part. On some machines this will
take effect immediately, which is the best case. Unfortunately on some,
an intervening reboot is needed.
"""
iutil.dir_tree_map("/sys/fs/pstore", os.unlink, files=True, dirs=False)
def print_startup_note(options):
"""Print Anaconda version and short usage instructions.
Print Anaconda version and short usage instruction to the TTY where Anaconda is running.
:param options: command line/boot options
"""
verdesc = "%s for %s %s" % (get_anaconda_version_string(),
product.productName, product.productVersion)
logs_note = " * installation log files are stored in /tmp during the installation"
shell_and_tmux_note = " * shell is available on TTY2"
shell_only_note = " * shell is available on TTY2 and in second TMUX pane (ctrl+b, then press 2)"
tmux_only_note = " * shell is available in second TMUX pane (ctrl+b, then press 2)"
text_mode_note = " * if the graphical installation interface fails to start, try again with the\n"\
" inst.text bootoption to start text installation"
separate_attachements_note = " * when reporting a bug add logs from /tmp as separate text/plain attachments"
if product.isFinal:
print("anaconda %s started." % verdesc)
else:
print("anaconda %s (pre-release) started." % verdesc)
if not options.images and not options.dirinstall:
print(logs_note)
# no fancy stuff like TTYs on a s390...
if not blivet.arch.is_s390():
if "TMUX" in os.environ and os.environ.get("TERM") == "screen":
print(shell_and_tmux_note)
else:
print(shell_only_note) # TMUX is not running
# ...but there is apparently TMUX during the manual installation on s390!
elif not options.ksfile:
print(tmux_only_note) # but not during kickstart installation
# no need to tell users how to switch to text mode
# if already in text mode
if options.display_mode == constants.DisplayModes.TUI:
print(text_mode_note)
print(separate_attachements_note)
def live_startup(anaconda, options):
"""Live environment startup tasks.
:param anaconda: instance of the Anaconda class
:param options: command line/boot options
"""
flags.livecdInstall = True
try:
anaconda.dbus_session_connection = safe_dbus.get_new_session_connection()
except safe_dbus.DBusCallError as e:
log.info("Unable to connect to DBus session bus: %s", e)
else:
anaconda.dbus_inhibit_id = inhibit_screensaver(anaconda.dbus_session_connection)
def set_installation_method_from_anaconda_options(anaconda, ksdata):
"""Set the installation method from Anaconda options.
This basically means to set the installation method from options provided
to Anaconda via command line/boot options.
:param anaconda: instance of the Anaconda class
:param ksdata: data model corresponding to the installation kickstart
"""
if anaconda.methodstr.startswith("cdrom"):
ksdata.method.method = "cdrom"
elif anaconda.methodstr.startswith("nfs"):
ksdata.method.method = "nfs"
nfs_options, server, path = iutil.parseNfsUrl(anaconda.methodstr)
ksdata.method.server = server
ksdata.method.dir = path
ksdata.method.opts = nfs_options
elif anaconda.methodstr.startswith("hd:"):
ksdata.method.method = "harddrive"
url = anaconda.methodstr.split(":", 1)[1]
url_parts = url.split(":")
device = url_parts[0]
path = ""
if len(url_parts) == 2:
path = url_parts[1]
elif len(url_parts) == 3:
path = url_parts[2]
ksdata.method.partition = device
ksdata.method.dir = path
elif anaconda.methodstr.startswith("http") or anaconda.methodstr.startswith("ftp") or anaconda.methodstr.startswith("file"):
ksdata.method.method = "url"
ksdata.method.url = anaconda.methodstr
# installation source specified by bootoption
# overrides source set from kickstart;
# the kickstart might have specified a mirror list,
# so we need to clear it here if plain url source is provided
# by a bootoption, because having both url & mirror list
# set at once is not supported and breaks dnf in
# unpredictable ways
# FIXME: Is this still needed for dnf?
ksdata.method.mirrorlist = None
elif anaconda.methodstr.startswith("livecd"):
ksdata.method.method = "harddrive"
device = anaconda.methodstr.split(":", 1)[1]
ksdata.method.partition = os.path.normpath(device)
else:
log.error("Unknown method: %s", anaconda.methodstr)
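The "hd:" branch above splits a method string of the form ``hd:<device>:<path>`` or ``hd:<device>:<fstype>:<path>`` into a device and a directory path. As a standalone sketch (``parse_hd_method`` is a hypothetical helper, not part of Anaconda):

```python
def parse_hd_method(methodstr):
    # Mirror the "hd:" branch: drop the "hd" scheme, then take the first
    # field as the device and the last field as the path, tolerating the
    # optional filesystem-type field in the middle.
    url = methodstr.split(":", 1)[1]
    url_parts = url.split(":")
    device = url_parts[0]
    path = ""
    if len(url_parts) == 2:
        path = url_parts[1]
    elif len(url_parts) == 3:
        path = url_parts[2]
    return device, path
```

For example, ``parse_hd_method("hd:sdb2:ext4:/repo")`` yields ``("sdb2", "/repo")``, and a bare ``hd:sdc3`` yields an empty path.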
def parse_kickstart(options, addon_paths):
"""Parse the input kickstart.
If we were given a kickstart file, parse (but do not execute) that now.
Otherwise, load in defaults from kickstart files shipped with the
installation media. Pick up any changes from interactive-defaults.ks
that would otherwise be covered by the dracut KS parser.
:param options: command line/boot options
:param dict addon_paths: addon paths dictionary
:returns: kickstart parsed to a data model
"""
ksdata = None
if options.ksfile and not options.liveinst:
if not os.path.exists(options.ksfile):
stdout_log.error("Kickstart file %s is missing.", options.ksfile)
iutil.ipmi_report(constants.IPMI_ABORTED)
sys.exit(1)
flags.automatedInstall = True
flags.eject = False
ks_files = [options.ksfile]
elif os.path.exists("/run/install/ks.cfg") and not options.liveinst:
# this is to handle such cases where a user has pre-loaded a
# ks.cfg onto an OEMDRV labeled device
flags.automatedInstall = True
flags.eject = False
ks_files = ["/run/install/ks.cfg"]
else:
ks_files = ["/tmp/updates/interactive-defaults.ks",
"/usr/share/anaconda/interactive-defaults.ks"]
for ks in ks_files:
if not os.path.exists(ks):
continue
kickstart.preScriptPass(ks)
        log.info("Parsing kickstart: %s", ks)
ksdata = kickstart.parseKickstart(ks)
# Only load the first defaults file we find.
break
if not ksdata:
ksdata = kickstart.AnacondaKSHandler(addon_paths["ks"])
return ksdata
|
Part of the problem with the 4 percent rule is that it was developed in the 1990s, when interest rates were significantly higher. Retirees with their savings in safe instruments such as bonds and annuities were getting more income than retirees today do with similar assets.
Another problem, though one with a positive side as well, is that life expectancies have increased. Americans are living longer after they stop working, which means their savings have to last longer. A man reaching age 65 in 1970 could expect to live 13 more years, but by 2011 that figure was 18 years. A woman's life expectancy at age 65 rose from 17 years in 1970 to 20 years in 2011 (the most recent year for which such data is available from the Centers for Disease Control).
"They have enough money. It really doesn't impact them," said Anand Rao, a partner at PwC and an author of the study. But those with less savings have less of a margin for error.
For them, PwC analyzed behavioral trends to describe another problem as well: the so-called sequence of consumption problem. The 4 percent rule expects people to draw down their money in a mostly linear pattern, but life is not linear. PwC found that retirees commonly spend more money—much of it discretionary—when they first retire, either because they don't know what they should be spending or because they are enjoying long-awaited activities such as travel.
PwC's findings add to concerns raised earlier about the viability of the 4 percent rule. Research published in 2013 by Michael Finke of Texas Tech University, Wade Pfau of The American College, and David Blanchett of Morningstar Investment Management found that using historical interest rate averages, a retiree drawing down savings for a 30-year retirement using the 4 percent rule had only a 6 percent chance of running out. But using interest rate levels from January 2013, when their research was published, the authors found that retirees' savings would grow so slowly that the chance of failure rose to 57 percent.
"The 4 percent rule cannot be treated as a safe initial withdrawal rate in today's low interest rate environment," they concluded.
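The mechanics behind these failure rates can be sketched with a toy drawdown model (illustrative only, not the researchers' simulation): withdraw a constant, inflation-adjusted 4 percent of the starting balance each year while the remainder compounds at a fixed real return.

```python
def years_until_depleted(savings, real_return, withdrawal_rate=0.04, horizon=30):
    # Toy version of the 4 percent rule: the annual withdrawal is fixed at
    # 4% of the starting balance (inflation-adjusted, hence the constant
    # real return), and the rest compounds. Returns how many annual
    # withdrawals the savings support, capped at the horizon.
    withdrawal = savings * withdrawal_rate
    balance = savings
    for year in range(horizon):
        if balance < withdrawal:
            return year
        balance = (balance - withdrawal) * (1 + real_return)
    return horizon
```

At a 5 percent real return the savings survive the full 30-year horizon, but at a 0 percent real return they cover only 25 annual withdrawals, which is why low-rate environments push the failure probability up so sharply.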
PwC has developed models to help investors and their advisors understand how prepared they are for retirement. Rao said the firm is also developing tools that mass market investors can use to determine how to draw down their savings. "The approach has to be much more personalized," he said.
As for drawdowns, he recommends "an actuarial view of the length of the plan remaining," or a calculation of how long a client's money has to last. Then he looks at clients' current balance net of recent market performance, and from there he can calculate how much is safe to draw down for a given time period. "If you do the household math, then you have a better sense of how much risk you are taking and whether it's a good idea," he said.
Elvin Turner, managing director of Turner Consulting, a financial services consultancy, said the new research on the 4 percent rule also points to the fact that firms can use big data to better understand how people accumulate and spend their savings, and thus develop better plans for them. "The tools are much more sophisticated today."
That's a good thing, he said, because people facing retirement today are looking at a much more complicated financial situation. "There is no longer a one-size-fits-all strategy," he said.
|
'''
Allmyvideos urlresolver plugin
Copyright (C) 2013 Vinnydude
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
'''
import re
from t0mm0.common.net import Net
from urlresolver.plugnplay.interfaces import UrlResolver
from urlresolver.plugnplay.interfaces import PluginSettings
from urlresolver.plugnplay import Plugin
import xbmc
class MooShareResolver(Plugin, UrlResolver, PluginSettings):
implements = [UrlResolver, PluginSettings]
name = "mooshare"
domains = [ "mooshare.biz" ]
    pattern = r'(?://|\.)(mooshare\.biz)/(?:embed-|iframe/)?([0-9a-zA-Z]+)'
def __init__(self):
p = self.get_setting('priority') or 100
self.priority = int(p)
self.net = Net()
def get_media_url(self, host, media_id):
url = self.get_url(host, media_id)
html = self.net.http_GET(url).content
data = {}
        if '<form role="search"' in html and '<Form method="POST" action=\'\'>' in html:
            html = html.split('<Form method="POST" action=\'\'>')[1]
r = re.findall(r'type="hidden" name="(.+?)"\s* value="?(.+?)">', html)
for name, value in r:
data[name] = value
        data[u'referer'] = ''
        data[u'usr_login'] = ''
        data[u'imhuman'] = 'Proceed to video'
        data[u'btn_download'] = 'Proceed to video'
xbmc.sleep(5000)
html = self.net.http_POST(url, data).content
        r = re.search(r'file\s*:\s*"(.+?)"', html)
if r:
return r.group(1)
else:
raise UrlResolver.ResolverError('could not find video')
def get_url(self, host, media_id):
return 'http://mooshare.biz/%s' % media_id
def get_host_and_id(self, url):
r = re.search(self.pattern, url)
if r:
return r.groups()
else:
return False
def valid_url(self, url, host):
return re.search(self.pattern, url) or self.name in host
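The form-scraping step in ``get_media_url`` above can be sketched on its own: collect every hidden ``<input>`` name/value pair into a dict that is ready to be re-POSTed. The sample HTML and the ``hidden_fields`` name are made up for illustration.

```python
import re

def hidden_fields(html):
    # Gather hidden <input> name/value pairs, as get_media_url does, into
    # a dict suitable for posting back to the page.
    pairs = re.findall(r'type="hidden" name="(.+?)"\s+value="(.+?)">', html)
    return dict(pairs)
```

This relies on the site emitting quoted attribute values in a fixed order, which is why the original resolver is so sensitive to page-layout changes.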
|
Port Washington, NY–Growth in activewear buoyed the total U.S. apparel market in the 12 months ending June 2014 (July’13—June’14), helping total apparel sales reach $206.3 billion, a 1% increase over the prior year, according to leading global information company The NPD Group.
Activewear sales accounted for $33.7 billion, representing 16% of the total apparel market, and have played a significant role in the overall success of the total apparel market over the last two years. Activewear-related accessories also have seen an upswing.
The top three primary uses for activewear are Casual/Every Day Use, Athletic/Sport/Exercise, and School. While use of activewear in Athletic/Sport/Exercise declined during the 12 months ending June 2014, School use experienced growth.
The activewear industry has experienced growth across all channels*, demonstrating that competition exists and it is not limited to athletic retailers.
“Retailers and manufacturers across the board know that activewear is active, and they all want a piece of the action,” added Cohen. “Because of this competition, it’s not enough for them to simply jump on the activewear bandwagon. To truly capitalize on this ongoing trend, they need to unveil products that are colorful and unique, and fuel the activewear demand not with repetition, but with innovation and creativity.”
|
# -*- coding: ascii -*-
"""The idea of the event binder decorator comes from Mr.NoboNobo and TurboGears. Thanks.
Mr.NoboNobo's site: http://python.matrix.jp
"""
__pyspec = 1
__all__ = ('expose',
'bind_event_handler',
'get_resource_path',
'load_png')
import pyspec.util
attr_key = "__pyspec_wxutil_eventhandler"
class binder_class(object):
def __init__(self, event, id):
self.event = event
self.id = id
def __call__(self, method):
from pyspec.util import Struct
event_info = Struct(event=self.event, id=self.id)
if hasattr(method, attr_key):
getattr(method, attr_key).append(event_info)
else:
setattr(method, attr_key, [event_info])
return method
def expose(event, id=None):
return binder_class(event, id)
def bind_event_handler(frame, controller=None):
import wx
from wx.xrc import XRCID
if controller is None:
controller = frame
for name in dir(controller):
obj = getattr(controller, name)
if hasattr(obj, attr_key):
for event_info in getattr(obj, attr_key):
if event_info.id is None:
frame.Bind(event_info.event, obj)
else:
frame.Bind(event_info.event, obj, id=XRCID(event_info.id))
def get_resource_path(filename):
import os
if os.path.exists("resource"):
return os.path.join("resource", filename)
path_in_lib = pyspec.util.pyspec_file_path("resource", filename)
if os.path.exists(path_in_lib):
return path_in_lib
return os.path.abspath(os.path.join(path_in_lib, "..", "..", "..", "resource", filename))
def load_png(filename):
import wx
return wx.Image(filename, wx.BITMAP_TYPE_PNG).ConvertToBitmap()
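The tag-and-collect pattern above can be sketched without wx: the decorator appends an (event, id) record onto each method, and a binder later walks ``dir()`` on the controller and wires up every tagged method. The names below (``expose_sketch``, ``collect_handlers``) are illustrative stand-ins, not part of pyspec.

```python
ATTR_KEY = "__tagged_events"

class expose_sketch(object):
    # Same idea as binder_class/expose above: each use of the decorator
    # appends an (event, id) record onto the function object, so stacking
    # decorators accumulates several bindings on one handler.
    def __init__(self, event, id=None):
        self.event = event
        self.id = id

    def __call__(self, method):
        records = getattr(method, ATTR_KEY, [])
        records.append((self.event, self.id))
        setattr(method, ATTR_KEY, records)
        return method

def collect_handlers(controller):
    # The binder side: walk the controller and gather every tagged method,
    # which bind_event_handler would then pass to frame.Bind().
    found = []
    for name in dir(controller):
        obj = getattr(controller, name)
        if hasattr(obj, ATTR_KEY):
            for event, widget_id in getattr(obj, ATTR_KEY):
                found.append((name, event, widget_id))
    return found

class Controller(object):
    @expose_sketch("EVT_BUTTON", "ok_button")
    def on_ok(self):
        pass

    @expose_sketch("EVT_CLOSE")
    def on_close(self):
        pass
```

``collect_handlers(Controller())`` reports both tagged methods along with their event names and optional widget ids, which is exactly the information ``bind_event_handler`` feeds to ``frame.Bind``.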
|
Kuantan and Pahang have changed over recent years in terms of tourism and business growth, and this has been accompanied by substantial growth in Kuantan and Pahang hotels to meet the demand from visitors who come to Kuantan and Pahang for holiday, vacation, leisure, or business.
Rehlat provides booking for a long list of hotels in Kuantan and Pahang including the top 5 star hotels in Kuantan and Pahang and the hotels in Kuantan and Pahang near airport for extra comfort and convenience. If you are flying to Kuantan and Pahang, why not stay at a Kuantan and Pahang hotel near airport. Visitors on a budget can choose from the cheap hotels in Kuantan and Pahang or from budget 3 star hotels in Kuantan and Pahang and low cost 4 star hotels in Kuantan and Pahang.
Our comprehensive range includes Kuantan and Pahang hotels, bed and breakfasts, guest houses, and hotel apartments.
Rehlat provides Kuantan and Pahang hotel accommodations for people visiting the Kuantan and Pahang's many attractions including the religious, adventure, and historic sites.
Are you in a hurry to get Kuantan and Pahang hotel booking done?
Are you already late booking a hotel? Save time with Rehlat’s quick search, which lists some of the most popular hotels in Kuantan and Pahang in a jiffy.
You can be specific with your hotel selection. For example, select Kuantan and Pahang hotels with Wi-Fi, or go for 5 star Kuantan and Pahang hotels with gymnasium and swimming pool facilities. Not only hotels: Kuantan and Pahang resorts, villas, apartments, lodges, and hostels are also listed on the Rehlat website.
A happening place with a filled calendar, you'll never be bored in buzzing and happening Kuantan and Pahang. Visit Kuantan and Pahang at any time of the year and enjoy the Kuantan and Pahang's event calendar.
Whether you want a unique 5 star luxury hotel for a special stay or a budget-friendly Kuantan and Pahang hotels for work trips, apartment hotels for groups and families, hostels, or resort stays – there’s something for every one and every trip.
Rehlat comes up with regular hotel discounts and last minute hotel booking deals for Kuantan and Pahang hotels. Best price is guaranteed for all hotels in Kuantan and Pahang.
Enjoy your trip the most; just book your stay at one of the popular Hotels in Kuantan And Pahang.
Rehlat has all the conveniently located Kuantan And Pahang hotels listed in one place, making it easy for you to explore all of the exciting things Kuantan And Pahang has to offer. Every popular hotel in Kuantan And Pahang has stylish rooms and world-class facilities and amenities, so you are guaranteed an unforgettable trip. Book your stay at one of the hotels in Kuantan And Pahang today and start planning your getaway. Rehlat has an interesting mix of hotels in Kuantan And Pahang, with everything from cheap hotels to famous 5 star hotel accommodations. We love to give discounts, so you are guaranteed to get your online hotel booking done at the lowest available discounted rates in Kuantan And Pahang. We offer Kuantan And Pahang hotel reservations from a vast list of 2 star, luxury, and boutique hotels, economy hotels, budget hotels, bed and breakfasts, guest houses, and resorts. Rehlat has exclusive offers on the cheap hotels in Kuantan And Pahang: book 5 star hotels, or find last minute deals on budget hotels, only on Rehlat, and save more on hotel bookings with affordable prices.
Compare prices for all popular hotels in Kuantan And Pahang to guarantee you get the best deal on your hotel booking. You can select anything from 5 star to single star hotels as suits you, or filter by your affordable price range. You can also choose a hotel by the amenities it offers; for example, if you need hotels with WiFi, select that filter and get all hotels in the city with WiFi. The search console on Rehlat lets you access 5 star hotels, 4 star hotels, 3 star hotels, and even cheap hotels, if available in the city. Not only this, we have also listed Kuantan And Pahang hotel amenities like WiFi availability or a gym option to get you the exact Kuantan And Pahang hotel that best fits your bill.
If you are looking for online hotel booking for Kuantan And Pahang, you have done the right thing and have arrived at the right place. Now, enjoy an easy search and book hotel reservation system and save time to plan your holiday!
So, let the journey begin with Rehlat’s simple hotel booking gateway.
|