"""
A directed graph is strongly connected if there is a path between all pairs of vertices. A strongly connected component (SCC) of a directed graph is a maximal strongly connected subgraph.
Time Complexity: The above algorithm calls DFS, fins reverse of the graph and again calls DFS. DFS takes O(V+E) for a graph represented using adjacency list. Reversing a graph also takes O(V+E) time. For reversing the graph, we simple traverse all adjacency lists.
Time Complexity: O(V + E)
"""
from collections import defaultdict
class Graph:
def __init__(self, vertices):
self.V = vertices
self.graph = defaultdict(list)
def add_edge(self, u, v):
self.graph[u].append(v)
def fill_order(self, vertex, visited, stack):
visited[vertex] = True
for neighbour in self.graph[vertex]:
if visited[neighbour] == False:
self.fill_order(neighbour, visited, stack)
stack.append(vertex)
def get_traspose(self):
g = Graph(self.V)
for u in self.graph:
for v in self.graph[u]:
g.add_edge(v, u)
return g
def dfs_util(self, vertex, visited, curr_res):
visited[vertex] = True
curr_res.append(vertex)
for u in self.graph[vertex]:
if visited[u] == False:
self.dfs_util(u, visited, curr_res)
def get_strongly_connected_component(self):
stack = []
result = []
visited = [False for i in xrange(self.V)]
for u in xrange(self.V):
if visited[u] == False:
self.fill_order(u, visited, stack)
transposed_graph = self.get_traspose()
visited = [False for i in xrange(self.V)]
while stack:
vertex = stack.pop()
if visited[vertex] == False:
curr_res = []
transposed_graph.dfs_util(vertex, visited, curr_res)
result.append(curr_res)
return result
g = Graph(5)
g.add_edge(1, 0)
g.add_edge(0, 2)
g.add_edge(2, 1)
g.add_edge(0, 3)
g.add_edge(3, 4)
print g.get_strongly_connected_component()
|
Bingley Bed and Breakfasts, Guest Houses. Yorkshire, UK.
Shown on this page are B&Bs in Bingley or near Bingley.
Named after the historic canal locks nearby, the hotel is a solid Victorian stone house. It's in a leafy, tranquil area, but is close to Leeds and Bradford, as well as the Dales and Bronte country. The hotel offers good views, individual decor, informal style, interesting artworks and good food.
Bingley, BD16 4DD, Yorkshire, England.
Room Types Available: Single Room, Four Poster Room, Double Rooms, Twin Rooms.
The Hotel is an impressive listed building set in peaceful woodland with ample free parking. All the rooms are individual with en suite facilities equipped to a very high standard with all the amenities you would expect to find in a quality hotel.
Bingley, BD16 4AW, Yorkshire, England.
|
"""
Copies names (value 1 in 1:2:3) from source file to target file
Requires manual use
"""
import os, sys
verbose = True # If True, print every replacement
pretend = False # If True, don't actually write
# The two files to be worked with
#source = "../../../mhag-read-only/src/org/mhag/model/data/mh3g/armor.dat"
#target = "../src/org/mhag/model/data/mh3g/armor.dat"
source = "../src/org/mhag/model/data/mh3g/armor.dat"
target = "../src/org/mhag/model/data/mh3g/armor_item.dat"
# Prevent cross contamination
try:
os.remove(source + ".new")
os.remove(target + ".new")
except OSError:
pass
# Read both files, line by line
with open(source) as f:
source_data = f.readlines()
with open(target) as f:
target_data = f.readlines()
# Generate list of armor names from source
source_names = []
for line in source_data:
# Exclude comments and blank lines
if line[0] != "#" and line[0] != "\n":
source_names.append(line.split(":")[0])
source_data = None
# Replace the names in the target
i = 0
for line in target_data:
# Exclude comments and blank lines
if line[0] != "#" and line[0] != "\n":
name = line.split(":")[0]
if verbose: print(name + "|" + source_names[i])
new_line = line.replace(name, source_names[i])
# Write replaced line to temporary file
with open(target + ".new", "a") as f:
f.write(new_line)
# Increment
i += 1
# Replace target file with temporary file
if not pretend:
os.remove(target)
os.rename(target + ".new", target)
|
Transactional memory (TM) is a promising paradigm for concurrent programming, in which threads of an application communicate, and synchronize their actions, via in-memory transactions. Each transaction can perform any number of operations on shared data, and then either commit or abort. When the transaction commits, the effects of all its operations become immediately visible to other transactions; when it aborts, however, those effects are entirely discarded. Transactions are atomic: programmers get the illusion that every transaction executes all its operations instantaneously, at some single and unique point in time. The TM paradigm has raised a lot of hope for mastering the complexity of concurrent programming. The aim is to provide programmers with an abstraction, i.e., the transaction, that makes handling concurrency as easy as with coarse-grained locking, while exploiting the parallelism of the underlying multi-core or multi-processor hardware as well as hand-crafted fine-grained locking (which is typically an engineering challenge). It is thus not surprising to see a large body of work devoted to implementing this paradigm efficiently, and integrating it with common programming languages. Very little work, however, has been devoted to the underlying theory and principles. The aim of this thesis is to provide theoretical foundations for transactional memory. This includes defining a model of a TM, as well as answering precisely when a TM implementation is correct, what kind of properties it can ensure, what the power and limitations of a TM are, and what inherent trade-offs are involved in designing a TM algorithm. In particular, this manuscript contains precise definitions of properties that capture the safety and progress semantics of TMs, as well as several fundamental results related to computability and complexity of TM implementations.
While the focus of the thesis is on theory, its goal is to capture the common intuition behind the semantics of TMs and the properties of existing TM implementations.
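The commit/abort semantics described above can be illustrated with a toy software TM sketch. Everything here (the class names, the version-per-location validation scheme, the retry loop) is an illustrative assumption, not the model defined in the thesis: writes are buffered privately and published in one step at commit, so other transactions see either all of a transaction's effects or none of them.

```python
import threading


class Abort(Exception):
    """Raised when a transaction detects a conflict and must restart."""


class TM:
    def __init__(self):
        self.store = {}  # shared memory: name -> (value, version)
        self.lock = threading.Lock()

    def atomic(self, fn):
        """Run fn(txn) until it commits; aborted attempts leave no trace."""
        while True:
            txn = Transaction(self)
            try:
                result = fn(txn)
                txn.commit()
                return result
            except Abort:
                continue  # effects discarded; retry the whole transaction


class Transaction:
    def __init__(self, tm):
        self.tm = tm
        self.reads = {}   # name -> version observed when first read
        self.writes = {}  # buffered updates, invisible until commit

    def read(self, name):
        if name in self.writes:  # read-your-own-writes
            return self.writes[name]
        value, version = self.tm.store.get(name, (None, 0))
        self.reads[name] = version
        return value

    def write(self, name, value):
        self.writes[name] = value  # buffered, not yet visible

    def commit(self):
        with self.tm.lock:
            # Validate: every location read is still at the observed version
            for name, version in self.reads.items():
                if self.tm.store.get(name, (None, 0))[1] != version:
                    raise Abort()
            # Publish all buffered writes at once (atomicity)
            for name, value in self.writes.items():
                old_version = self.tm.store.get(name, (None, 0))[1]
                self.tm.store[name] = (value, old_version + 1)
```

A transfer between two accounts then either happens entirely or not at all; a concurrent commit that invalidates a read simply forces a retry rather than exposing a partial update.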
|
import numpy as np

from robosuite.models.arenas import TableArena
from robosuite.utils.mjcf_utils import CustomMaterial, find_elements
from robosuite.models.objects import CylinderObject


class WipeArena(TableArena):
    """
    Workspace that contains an empty table with visual markers on its surface.

    Args:
        table_full_size (3-tuple): (L,W,H) full dimensions of the table
        table_friction (3-tuple): (sliding, torsional, rolling) friction parameters of the table
        table_offset (3-tuple): (x,y,z) offset from center of arena when placing table.
            Note that the z value sets the upper limit of the table
        coverage_factor (float): Fraction of table that will be sampled for dirt placement
        num_markers (int): Number of dirt (peg) particles to generate in a path on the table
        table_friction_std (float): Standard deviation to sample for the peg friction
        line_width (float): Diameter of dirt path trace
        two_clusters (bool): If set, will generate two separate dirt paths with half the number of sensors in each
    """

    def __init__(
        self,
        table_full_size=(0.8, 0.8, 0.05),
        table_friction=(0.01, 0.005, 0.0001),
        table_offset=(0, 0, 0.8),
        coverage_factor=0.9,
        num_markers=10,
        table_friction_std=0,
        line_width=0.02,
        two_clusters=False,
    ):
        # Tactile table-specific features
        self.table_friction_std = table_friction_std
        self.line_width = line_width
        self.markers = []
        self.coverage_factor = coverage_factor
        self.num_markers = num_markers
        self.two_clusters = two_clusters
        # Attribute to hold current direction of sampled dirt path
        self.direction = None
        # run superclass init
        super().__init__(
            table_full_size=table_full_size,
            table_friction=table_friction,
            table_offset=table_offset,
        )

    def configure_location(self):
        """Configures correct locations for this arena"""
        # Run superclass first
        super().configure_location()
        # Define start position for drawing the line
        pos = self.sample_start_pos()
        # Define dirt material for markers
        tex_attrib = {
            "type": "cube",
        }
        mat_attrib = {
            "texrepeat": "1 1",
            "specular": "0.0",
            "shininess": "0.0",
        }
        dirt = CustomMaterial(
            texture="Dirt",
            tex_name="dirt",
            mat_name="dirt_mat",
            tex_attrib=tex_attrib,
            mat_attrib=mat_attrib,
            shared=True,
        )
        # Define line(s) drawn on table
        for i in range(self.num_markers):
            # If we're using two clusters, resample the starting position and direction at the halfway point
            if self.two_clusters and i == int(np.floor(self.num_markers / 2)):
                pos = self.sample_start_pos()
            marker_name = f'contact{i}'
            marker = CylinderObject(
                name=marker_name,
                size=[self.line_width / 2, 0.001],
                rgba=[1, 1, 1, 1],
                material=dirt,
                obj_type="visual",
                joints=None,
            )
            # Manually add this object to the arena xml
            self.merge_assets(marker)
            table = find_elements(root=self.worldbody, tags="body", attribs={"name": "table"}, return_first=True)
            table.append(marker.get_obj())
            # Add this marker to our saved list of all markers
            self.markers.append(marker)
            # Add to the current dirt path
            pos = self.sample_path_pos(pos)

    def reset_arena(self, sim):
        """
        Reset the visual marker locations in the environment. Requires @sim (MjSim) reference to be passed in so that
        the Mujoco sim can be directly modified

        Args:
            sim (MjSim): Simulation instance containing this arena and visual markers
        """
        # Sample new initial position and direction for generated marker paths
        pos = self.sample_start_pos()
        # Loop through all visual markers
        for i, marker in enumerate(self.markers):
            # If we're using two clusters, resample the starting position and direction at the halfway point
            if self.two_clusters and i == int(np.floor(self.num_markers / 2)):
                pos = self.sample_start_pos()
            # Get IDs to the body, geom, and site of each marker
            body_id = sim.model.body_name2id(marker.root_body)
            geom_id = sim.model.geom_name2id(marker.visual_geoms[0])
            site_id = sim.model.site_name2id(marker.sites[0])
            # Determine new position for this marker
            position = np.array([pos[0], pos[1], self.table_half_size[2]])
            # Set the current marker (body) to this new position
            sim.model.body_pos[body_id] = position
            # Reset the marker visualization -- setting geom rgba alpha value to 1
            sim.model.geom_rgba[geom_id][3] = 1
            # Hide the default visualization site
            sim.model.site_rgba[site_id][3] = 0
            # Sample next values in local marker trajectory
            pos = self.sample_path_pos(pos)

    def sample_start_pos(self):
        """
        Helper function to return sampled start position of a new dirt (peg) location

        Returns:
            np.array: the (x,y) value of the newly sampled dirt starting location
        """
        # First define the random direction that we will start at
        self.direction = np.random.uniform(-np.pi, np.pi)
        return np.array(
            (
                np.random.uniform(
                    -self.table_half_size[0] * self.coverage_factor + self.line_width / 2,
                    self.table_half_size[0] * self.coverage_factor - self.line_width / 2),
                np.random.uniform(
                    -self.table_half_size[1] * self.coverage_factor + self.line_width / 2,
                    self.table_half_size[1] * self.coverage_factor - self.line_width / 2)
            )
        )

    def sample_path_pos(self, pos):
        """
        Helper function to add a sampled dirt (peg) position to a pre-existing dirt path, whose most
        recent dirt position is defined by @pos

        Args:
            pos (np.array): (x,y) value of most recent dirt position

        Returns:
            np.array: the (x,y) value of the newly sampled dirt position to add to the current dirt path
        """
        # Random chance to alter the current dirt direction
        if np.random.uniform(0, 1) > 0.7:
            self.direction += np.random.normal(0, 0.5)
        posnew0 = pos[0] + 0.005 * np.sin(self.direction)
        posnew1 = pos[1] + 0.005 * np.cos(self.direction)
        # Keep resampling until we get a valid new position that's on the table
        while abs(posnew0) >= self.table_half_size[0] * self.coverage_factor - self.line_width / 2 or \
                abs(posnew1) >= self.table_half_size[1] * self.coverage_factor - self.line_width / 2:
            self.direction += np.random.normal(0, 0.5)
            posnew0 = pos[0] + 0.005 * np.sin(self.direction)
            posnew1 = pos[1] + 0.005 * np.cos(self.direction)
        # Return this newly sampled position
        return np.array((posnew0, posnew1))
|
Need Utilities Professionals? Our Resume Database is Second to None.
Access the freshest Utilities resumes or target any of our 50+ communities. Get matching resumes emailed directly to you with saved search alerts.
Thank you very much for your interest in Resume Search. An account manager will follow up with you shortly. In the meantime, please feel free to review the other great products and services iHireUtilities has to offer!
Please fill out the brief form below and an account manager will contact you shortly to discuss how iHireUtilities can help you meet your hiring goals.
|
from __future__ import print_function

from os.path import join as pjoin
from copy import copy

from . import SETest
from ..data.LiF_g4 import nqpt, wtq, fnames, refdir


class Test_LiF_g4(SETest):

    common = dict(
        temperature=False,
        renormalization=False,
        broadening=False,
        self_energy=False,
        spectral_function=False,
        dynamical=True,
        split_active=True,
        double_grid=False,
        write=True,
        verbose=False,
        nqpt=nqpt,
        wtq=wtq,
        smearing_eV=0.01,
        temp_range=[0, 300, 300],
        omega_range=[-0.1, 0.1, 0.001],
        rootname='epc.out',
        **fnames)

    @property
    def refdir(self):
        return refdir

    # ZPR
    def test_zpr_dyn(self):
        """Dynamical zero-point renormalization"""
        self.run_compare_nc(
            function=self.get_zpr_dyn,
            key='zero_point_renormalization',
        )

    #def generate_zpr_dyn(self):
    #    """Generate epc data for this test."""
    #    return self.generate_test_ref(self.get_zpr_dyn)

    def test_tdr_dyn(self):
        """Dynamical temperature-dependent renormalization"""
        self.run_compare_nc(
            function=self.get_tdr_dyn,
            key='temperature_dependent_renormalization',
        )

    def test_zpr_stat_mode(self):
        """Static zero-point renormalization, resolved by mode"""
        self.run_compare_nc(
            function=self.get_zpr_stat_mode,
            key='zero_point_renormalization_by_modes',
        )

    def test_zpb_dyn(self):
        """Dynamical zero-point broadening"""
        self.run_compare_nc(
            function=self.get_zpb_dyn,
            key='zero_point_broadening',
        )

    def test_tdb_dyn(self):
        """Dynamical temperature-dependent broadening"""
        self.run_compare_nc(
            function=self.get_tdb_dyn,
            key='temperature_dependent_broadening',
        )

    def test_zpb_stat(self):
        """Static zero-point broadening"""
        self.run_compare_nc(
            function=self.get_zpb_stat,
            key='zero_point_broadening',
        )

    def test_tdb_stat(self):
        """Static temperature-dependent broadening"""
        self.run_compare_nc(
            function=self.get_tdb_stat,
            key='temperature_dependent_broadening',
        )

    #def generate_tdr_dyn(self):
    #    return self.generate_test_ref(self.get_tdr_dyn)

    # All
    def generate(self):
        """Generate epc data for all tests."""
        print('Generating reference data for tests in directory: {}'.format(
            self.refdir))
        for function in (
            self.get_zpr_dyn,
            self.get_tdr_dyn,
            self.get_zpr_stat_mode,
            self.get_zpb_dyn,
            self.get_tdb_dyn,
            self.get_zpb_stat,
            self.get_tdb_stat,
        ):
            self.generate_ref(function)
|
Event #63 Champion Bryn Kenney!
The final mixed game event of the 2014 World Series of Poker has officially come to a close with none other than Bryn Kenney earning his first gold bracelet as well as $153,220 in prize money!
Kenney dominated this tournament from the very beginning, bagging up the chip lead on Day 1. From there, Kenney continued to keep his seat at the top of the chip counts, remaining in the top ten chip counts throughout Day 2 and ultimately bagging second place chips to end the day. With just nine players remaining for Day 3, Kenney shot out to a substantial chip lead early and rode that all of the way to his first championship bracelet.
The day began with short-stacked David Blatte being eliminated from play during the first orbit of pot-limit Omaha. Following Blatte out the door was Haresh Thaker, prompting a total re-draw to the unofficial final table of seven. That unofficial final table quickly became official, as Michael Mixer was eliminated from play as the official final table bubble boy.
Randy Ohel came into the day looking for his second WSOP bracelet, but ended up falling as the first casualty of the final table. During pot-limit Omaha, Ohel made a full house on the river but ended up shipping all in into Daniel Zack's flopped quads. After Ohel's elimination, five-handed play wore on for quite some time. Ultimately it was Andrey Zaichenko who was next to go after shipping all in into Fabio Coppola's pocket kings in limit hold'em. Zaichenko did not improve and was forced to settle for fifth place.
Zack was eliminated in fourth place when he failed to make a four-card badugi against Jan Suchanek's jack badugi. Soon after, Coppola became the third place finisher after an unfortunate hand of no-limit 2-7 single draw.
This left Kenney heads up with Suchanek for the bracelet with Kenney holding a significant chip lead. Despite Suchanek being able to battle back in a few of the limit games, it was ultimately no-limit hold'em that did him in. On the final hand, Suchanek three-barreled on each street after flopping a pair of deuces against Kenney's turned trip nines. Kenney called the shove on the river and was awarded the bracelet.
That does it for our coverage of Event #63! Be sure to check out our Live Reporting page as we gear up for wall-to-wall coverage of the WSOP Main Event!
Jan Suchanek Eliminated in 2nd Place ($94,618); Bryn Kenney Wins!
Jan Suchanek limped on the button, Bryn Kenney raised to 33,000 in the big blind and Suchanek called.
The flop came down and Kenney checked to Suchanek, who bet 105,000. Kenney called, the turn hit and Kenney checked again. Suchanek fired 250,000 and Kenney called to see the river, which he checked. Suchanek shoved for about 520,000 and Kenney snap-called.
Suchanek tabled nines and deuces, but it was second best to Kenney's trip nines.
Bryn Kenney was the bring in and Jan Suchanek completed the action. Kenney raised, Suchanek re-raised, and Kenney called.
On fourth, Suchanek had first action and he led out with a bet. Kenney called and picked up the first action on fifth. He check-called a bet and both players checked sixth street. Kenney tossed out a bet on seventh and Suchanek called.
Kenney showed two pair, fives and deuces. Suchanek had that beat, however, with sevens and fours. Suchanek has now narrowed the gap, with his 900,000 to Kenney's 1.1 million.
Jan Suchanek opened his button to 30,000 and Bryn Kenney called. The flop came down and Kenney checked. Suchanek continued for 55,000 and Kenney called. Both players checked the turn and went to the river. Kenney led at the pot for 55,000 and Suchanek let go of his hand. Kenney took down the pot and increased his lead to about 1.465 million.
Fabio Coppola limped from the button and the floor staff informed him that he must come in for a raise. He made it 20,000 to go and it folded over to Bryn Kenney in the big blind who moved all in. Coppola called for his last 80,000 and the two were off to the draw.
Kenney stood pat and Fabio opted for one card.
Coppola squeezed out his final card, failing to make a hand to beat Kenney's jack-ten. He was eliminated in third place for $61,396.
Bryn Kenney raised from the button and Jan Suchanek made it three bets from the big blind. Kenney called and the two were off to the draw. Suchanek took two cards while Kenney picked just one new card. Suchanek bet and Kenney called. The second draw saw both players take one card, Suchanek fired again, and Kenney tossed out a call.
On the final draw, each player took one card once more. Suchanek bet and Kenney called. Suchanek showed a winning seven badugi, Kenney mucked, and Suchanek increased his stack to 660,000.
Bryn Kenney brought in with a queen and Jan Suchanek completed with a five showing. Suchanek continued out with a bet on fourth and Kenney called. Kenney had first action on fifth street and took this opportunity to lead out. Suchanek called and both players checked sixth street. Suchanek fired a bet on seventh, Kenney let it go, and Suchanek took down the pot.
|
from django.contrib.gis.db import models
from tenant_schemas.models import TenantMixin
from django.contrib.auth.models import User
import re

ACCESS_TYPES = (
    ('read', 'Read'),
    ('write', 'Write'),
    ('admin', 'Admin'),
)

SITE_STATUS = (
    ('active', 'Active'),
    ('disabled', 'Disabled'),
)


class Membership(models.Model):
    user = models.ForeignKey(User)
    access = models.CharField(max_length=10, choices=ACCESS_TYPES)


class GeoKitSite(TenantMixin):
    RESERVED = [
        'test', 'geokit', 'admin', 'public', 'topology', 'geometry', 'data',
        'raster', 'template', 'schema_template'
    ]

    user = models.ForeignKey(User)
    name = models.CharField(max_length=100, null=True)
    created = models.DateTimeField(auto_now_add=True, editable=False)
    modified = models.DateTimeField(auto_now=True)
    status = models.CharField(max_length=15, choices=SITE_STATUS, default='active')

    auto_create_schema = False

    @classmethod
    def is_allowed(cls, name):
        # Schema names must be lowercase alphanumeric and not reserved
        m = re.match(r'^[a-z0-9]+$', name)
        if m is None:
            return False
        return name not in cls.RESERVED

    @classmethod
    def is_available(cls, name):
        return cls.is_allowed(name) and \
            not cls.objects.filter(schema_name=name).exists()

    def __unicode__(self):
        return '%s - %s' % (self.user, self.name)
|
Maya was crying desperately and wouldn't talk.
"Mrs Hart it will be okay, but we need to know what happened"
"I need to find him" Maya said and Headed out without even taking an umbrella.
"It's okay, I don't need one," she said and left.
I needed to find Lucas so I headed out of the Matthews'. There was a foggy storm outside and I forgot to take an umbrella but I didn't care.
I had walked two blocks when I saw him.
"Maya!" He yelled back and he crossed the street.
All of a sudden, a car came out of nowhere and ran into him.
"Oh my god!" The driver a middle aged man said as he got out of the car.
The man called the police and an ambulance, but Lucas didn't wake up.
Maya finished talking to the officer and her face was red from crying.
Suddenly, the doctor came out.
"Mr. Lucas friar is in a coma" he said and Maya cried harder.
SO?! CLIFFHANGER !!!! COMMENT YOUR OPINIONS!
Ps: thanks for 1k reads!
|
r'''
Calculate the derivatives of a dihedral angle.
'''
import numpy as np
from oricreate.api import CreasePatternState, CustomCPFactory
from oricreate.util.einsum_utils import \
    DELTA, EPS

z_e = 0.5


def create_cp_factory():
    cp = CreasePatternState(X=[[0, 0, 0],
                               [1, 1, 0],
                               [1, 0, 0],
                               [2, 0, 0]],
                            # L=[[0, 1],
                            #    [1, 2],
                            #    [2, 0],
                            #    [1, 3],
                            #    [3, 2]],
                            L=[[0, 2],
                               [1, 2],
                               [1, 0],
                               [2, 3],
                               [3, 1]],
                            F=[[0, 1, 2],
                               [1, 3, 2]]
                            )
    cp_factory = CustomCPFactory(formed_object=cp)
    return cp_factory


if __name__ == '__main__':
    # end_doc
    cp_factory = create_cp_factory()
    cp = cp_factory.formed_object

    vl = cp.iL_vectors
    nl0, nl1 = np.einsum('fi...->if...', cp.iL_F_normals)
    print('vl', vl.shape)
    print(vl)
    print('nl0', nl0.shape)
    print(nl0)
    print('nl1', nl1.shape)
    print(nl1)

    norm_vl = np.sqrt(np.einsum('...i,...i->...', vl, vl))
    norm_nl0 = np.sqrt(np.einsum('...i,...i->...', nl0, nl0))
    norm_nl1 = np.sqrt(np.einsum('...i,...i->...', nl1, nl1))
    unit_vl = vl / norm_vl[:, np.newaxis]
    unit_nl0 = nl0 / norm_nl0[:, np.newaxis]
    unit_nl1 = nl1 / norm_nl1[:, np.newaxis]
    print('unit_vl', unit_vl.shape)
    print(unit_vl)
    print('unit_nl0', unit_nl0.shape)
    print(unit_nl0)
    print('unit_nl1', unit_nl1.shape)
    print(unit_nl1)

    Tl0 = np.einsum('ij...->ji...',
                    np.array(
                        [unit_vl,
                         unit_nl0,
                         np.einsum('...j,...k,...ijk->...i',
                                   unit_vl, unit_nl0, EPS)]
                    ))
    print('Tl0', Tl0.shape)
    print(Tl0)

    unit_nl01 = np.einsum('...ij,...j->...i', Tl0, unit_nl1)
    print('unit_nl01[:,2]', unit_nl01[:, 2])
    psi = np.arcsin(unit_nl01[:, 2])
    print('psi', psi)

    print('L_vectors', cp.L_vectors.shape)
    print(cp.L_vectors[1])
    print('L_vectors_du', cp.L_vectors_dul.shape)
    print(cp.L_vectors_dul[1])
    print('iL_within_F0')
    print(cp.iL_within_F0)
    print('F_L_vectors_dul', cp.F_L_vectors_dul.shape)
    print(cp.F_L_vectors_dul)

    vl_dul = cp.iL_vectors_dul
    nl0_dul0, nl1_dul1 = np.einsum('fi...->if...', cp.iL_F_normals_du)
    print(cp.iL_N.shape)
    print('vl_dul', vl_dul.shape)
    print(vl_dul)
    print('nl0_dul0', nl0_dul0.shape)
    print(nl0_dul0)
    print('nl1_dul1', nl1_dul1.shape)
    print(nl1_dul1)

    unit_nl0_dul0 = 1 / norm_nl0[:, np.newaxis, np.newaxis, np.newaxis] * (
        nl0_dul0 -
        np.einsum('...j,...i,...iNd->...jNd', unit_nl0, unit_nl0, nl0_dul0)
    )
    unit_nl1_dul1 = 1 / norm_nl1[:, np.newaxis, np.newaxis, np.newaxis] * (
        nl1_dul1 -
        np.einsum('...j,...i,...iNd->...jNd', unit_nl1, unit_nl1, nl1_dul1)
    )
    unit_vl_dul = 1 / norm_vl[:, np.newaxis, np.newaxis, np.newaxis] * (
        vl_dul -
        np.einsum('...j,...i,...iNd->...jNd', unit_vl, unit_vl, vl_dul)
    )
    print('unit_nl0_dul0', unit_nl0_dul0.shape)
    print(unit_nl0_dul0)
    print('unit_nl1_dul1', unit_nl1_dul1.shape)
    print(unit_nl1_dul1)
    print('unit_vl_dul', unit_vl_dul.shape)
    print(unit_vl_dul)

    Tl0_dul0 = np.einsum('ij...->ji...',
                         np.array([np.zeros_like(unit_nl0_dul0),
                                   unit_nl0_dul0,
                                   np.einsum(
                                       '...j,...kNd,...ijk->...iNd',
                                       unit_vl, unit_nl0_dul0, EPS)
                                   ]
                                  ))
    print('Tl0_dul0', Tl0_dul0.shape)
    print(Tl0_dul0)

    Tl0_dul = np.einsum('ij...->ji...',
                        np.array([unit_vl_dul,
                                  np.zeros_like(unit_vl_dul),
                                  np.einsum(
                                      '...jNd,...k,...ijk->...iNd',
                                      unit_vl_dul, unit_nl0, EPS)
                                  ]
                                 )
                        )
    print('Tl0_dul', Tl0_dul.shape)
    print(Tl0_dul)

    rho = 1 / np.sqrt(1 - unit_nl01[:, 2] ** 2)
    print('rho', rho)

    unit_nl01_dul = np.einsum(
        '...,...j,...ijNd->...iNd', rho, unit_nl1, Tl0_dul)[:, 2, ...]
    unit_nl01_dul0 = np.einsum(
        '...,...j,...ijNd->...iNd', rho, unit_nl1, Tl0_dul0)[:, 2, ...]
    unit_nl01_dul1 = np.einsum(
        '...,...jNd,...ij->...iNd', rho, unit_nl1_dul1, Tl0)[:, 2, ...]
    print('unit_nl01_dul', unit_nl01_dul.shape)
    print(unit_nl01_dul)
    print('unit_nl01_dul0', unit_nl01_dul0.shape)
    print(unit_nl01_dul0)
    print('unit_nl01_dul1', unit_nl01_dul1.shape)
    print(unit_nl01_dul1)

    # get the map of facet nodes attached to interior lines
    iL0_N_map = cp.F_N[cp.iL_F[:, 0]].reshape(cp.n_iL, -1)
    iL1_N_map = cp.F_N[cp.iL_F[:, 1]].reshape(cp.n_iL, -1)
    #iL_N_map = cp.iL_N
    iL_N_map = cp.F_L_N[cp.iL_within_F0]
    print('iL_N_map', iL_N_map.shape)
    print(iL_N_map)

    # enumerate the interior lines and broadcast them into N and D dimensions
    iL_map = np.arange(cp.n_iL)[:, np.newaxis, np.newaxis]
    # broadcast the facet node map into D dimension
    l0_map = iL0_N_map[:, :, np.newaxis]
    l1_map = iL1_N_map[:, :, np.newaxis]
    l_map = iL_N_map[:, :, np.newaxis]
    # broadcast the spatial dimension map into iL and N dimensions
    D_map = np.arange(3)[np.newaxis, np.newaxis, :]
    # allocate the psi derivatives of iL with respect to N and D dimensions
    psi_du = np.zeros((cp.n_iL, cp.n_N, cp.n_D), dtype='float_')
    # add the contributions psi_du from the left and right facet
    # Note: this cannot be done in a single step since the incremental
    # assembly is not possible within a single index expression.
    psi_du[iL_map, l_map, D_map] += unit_nl01_dul
    print('l_map', l_map.shape)
    print(l_map)
    print('psi_du', psi_du.shape)
    print(psi_du)
    psi_du[iL_map, l0_map, D_map] += unit_nl01_dul0
    print('l0_map', l0_map.shape)
    print(l0_map)
    print('psi_du', psi_du.shape)
    print(psi_du)
    psi_du[iL_map, l1_map, D_map] += unit_nl01_dul1
    print('l1_map', l1_map.shape)
    print(l1_map)
    print('psi_du', psi_du.shape)
    print(psi_du)
|
CHICAGO, May 17, 2017 /PRNewswire/ -- Grubhub, the nation's leading takeout marketplace, today announced its integration with industry-leading point of sale (POS) systems, Breadcrumb POS by Upserve and Toast. Restaurants using these systems can now manage all of their orders, both in-house and takeout, from one device. The new integration will also allow restaurants to more efficiently staff, save time on menu updates, consolidate financials and free up space on the crowded delivery tablet counter.
Grubhub's POS integration raises the bar for in-house restaurant technology efficiency and eliminates the need to use multiple tablets, offering restaurant employees a streamlined restaurant operations experience. This means more time for staff to focus on what matters most: serving up delicious food.
"We're completely focused on creating technology that enhances the experience of our restaurant partners," said Stan Chia, Chief Operating Officer of Grubhub. "With restaurant feedback in mind, we've built integrations designed to create efficiencies for restaurateurs. These improvements will positively impact the bottom line of restaurateurs, by helping restaurant owners spend less time on management logistics and more time creating great food."
"We're thrilled to welcome Grubhub to the Upserve Marketplace as our newest ordering partner, enabling instant online ordering to Breadcrumb POS to improve productivity, speed delivery time and eliminate errors," said Angus Davis, CEO and founder of Upserve. "Building on the early success we saw with shared customers during the pre-release shipped last quarter, we are excited to now bring this new program to many more of the 32 million active diners who enjoy over 23 million meals per month on the Upserve platform."
"Toast Restaurant POS is an all-in-one mobile, cloud-based platform built specifically for restaurants. Incorporating key facets of the guest experience into one tool, Toast integrates online ordering, gift card capabilities, loyalty programs, labor reporting, and sales data," said Chris Comparato, CEO of Toast. "We are excited to bring our best-in-class customer support and service to Grubhub users nationwide."
Grubhub's point of sale integration -- a top request from its restaurant partners -- is already creating efficiencies for much-loved restaurant chains including Protein Bar, along with local favorites like NYC's Mile End Delicatessen and Chicago's Bombay Wraps.
"Protein Bar and Kitchen's focus on serving delicious, healthy food to guests has been greatly accelerated due to the power of Grubhub's marketplace," said Jeff Drake, CEO Protein Bar and Kitchen. "In addition, Grubhub continues to demonstrate its leadership and restaurant partner focus by innovating on products such as POS integration, that will allow for unparalleled execution of 3rd party online orders."
"We work hard to create Montreal-style specialty foods for our diners to enjoy, and appreciate the additional reach and awareness brought about by online ordering," said Joel Tietolman, co-owner of Mile End Delicatessen in New York. "Grubhub and Breadcrumb's POS integration allows us to update our in-restaurant and takeout menus at the same time, on one platform. This saves us time each day, and ensures that our diners can choose from the most updated menu items, whether they're eating in our restaurant or dining in the comfort of their own home."
"Our restaurant has one simple mission: to bring the delicious street foods that we ate growing up in Bombay," said Ali Dewjee, owner of Bombay Wraps in Chicago. "Since the integration of our Breadcrumb POS with Grubhub, it's been easier than ever to manage our back-of-house functions, giving us more time to bring the amazing tastes of Bombay to the people of Chicago!"
To find takeout restaurants available in your area, check out Grubhub.com. If you are interested in becoming part of the Grubhub Delivery team, please visit driver.grubhub.com. To find out how your restaurant can join Grubhub, check out get.grubhub.com. To learn more about Grubhub and its portfolio of brands, please visit newsroom.grubhub.com.
Grubhub (NYSE: GRUB) is the nation's leading online and mobile takeout food-ordering marketplace with the most comprehensive network of restaurant partners and largest active diner base. Dedicated to moving eating forward and connecting diners with the food they love from their favorite local restaurants, the company's platforms and services strive to elevate food ordering through innovative restaurant technology, easy-to-use platforms and an improved delivery experience. Grubhub is proud to work with more than 50,000 restaurant partners in over 1,100 U.S. cities and London. The Grubhub portfolio of brands includes Grubhub, Seamless, AllMenus, and MenuPages.
|
#!/usr/bin/python
from pandas import DataFrame
from selenium import webdriver
from selenium.webdriver import PhantomJS
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
import click
import pandas as pd
import re
import time
@click.command()
@click.option('--u', prompt='Your Yahoo username', help='Your Yahoo account username')
@click.option('--p', hide_input=True, prompt='Your Yahoo password', help='Your Yahoo account password')
@click.option('--l', type=int, prompt='Your yahoo league id', help='Your yahoo league id')
@click.option('--f', prompt='your player value csv file', help='your player value csv file')
@click.option('--n', type=int, default=300, prompt='The number of players you\'d like to rank', help='top number of players you\'d like to rank')
@click.option('--h', type=bool, default=True, prompt='Do you want to run in headless mode? [True|False]', help='If True you won\'t see what\'s going on while it\'s running. If false you will see the browser render the steps.')
def import_player_ranks(u, p, l, f, n, h):
"""Given a csv file that has player values, Set pre draft player values for a yahoo fantasy basketball league."""
# read player values from csv file.
print('reading player ranks from csv file...')
df = pd.read_csv(f, encoding = "ISO-8859-1")
player_list = []
names = df[df.columns[0]].tolist()
for name in names:
name = name.replace(".", "") # C.J. McCollum -> CJ McCollum
name = name.replace(",", "") # Dennis Smith, Jr.
match = re.search(r'^(\S+\s\S+)(\s\S+)*$', name) # Larry Nance Jr. -> Larry Nance, Glen Robinson III -> Glen Robinson
name = match.group(1)
# print(name)
player_list.append(name)
# get selenium web driver
if h:
DesiredCapabilities.PHANTOMJS['phantomjs.page.settings.userAgent'] = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:16.0) Gecko/20121026 Firefox/16.0'
driver = webdriver.PhantomJS()
else:
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--dns-prefetch-disable")
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.set_window_size(1920, 1080)
driver.maximize_window()
    # log into yahoo
    print('logging into yahoo as {}'.format(u))
driver.get('https://login.yahoo.com/?.src=fantasy&specId=usernameRegWithName&.intl=us&.lang=en-US&authMechanism=primary&yid=&done=https%3A%2F%2Fbasketball.fantasysports.yahoo.com%2Fnba%2F%3Futmpdku%3D1&eid=100&add=1')
delay = 8 # seconds
WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.ID, 'login-username'))).send_keys(u)
driver.find_element_by_id('login-signin').click()
WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.ID, 'login-passwd'))).send_keys(p)
driver.find_element_by_id('login-signin').click()
# make sure the 'My Teams and Leagues' Table is loaded
WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.ID, "gamehome-teams")))
# find all leagues and teams
print('find all leagues and teams')
userTeamElements = driver.find_elements_by_xpath("//div[@class='Grid-table']//a[@class='Block Fz-sm Phone-fz-xs Pbot-xs']")
# get the url of pre draft value rank for this league
team_url = None
for teamElement in userTeamElements:
user_team_url = teamElement.get_attribute("href")
match = re.search(r'/nba/(\d+)/(\d+)/?$', user_team_url)
match_league_id = int(match.group(1))
if match_league_id == l:
team_url = user_team_url
            break
if team_url is None:
print('cannot find league id={}'.format( l))
return
    # there are usually about 600 players; set count to 800 so that all players display on one page
pre_draft_value_url = team_url + '/editprerank?count=800'
print('set pre draft values in {}'.format(pre_draft_value_url))
driver.get(pre_draft_value_url)
# first click 'Start Over'
print('click "Start Over"')
WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.XPATH, '//button[contains(@class, "reset-roster-btn")]'))).click()
alert = driver.switch_to.alert
alert.accept()
# save result
WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.ID, 'submit-editprerank'))).click()
time.sleep(2)
    # then click 'load all' to load all pages
    # clicking save resets the page state (count=800), so load all players explicitly again.
driver.get(pre_draft_value_url)
print('Load all players')
loadAllEle = WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.LINK_TEXT, 'Load More Players')))
hov = ActionChains(driver).move_to_element(loadAllEle)
hov.perform()
time.sleep(2)
ActionChains(driver).move_to_element(loadAllEle).click(loadAllEle).perform()
time.sleep(5)
playerElements = driver.find_elements_by_xpath('//ul[@id="all_player_list"]//li//span//div[contains(@class, "playersimple-adddrop")]//span[@class="Bfc"]//span[2]')
plusElements = driver.find_elements_by_xpath('//ul[@id="all_player_list"]//li//span//div[contains(@class, "playersimple-adddrop")]//div[@class="Fl-end"]//span[2]')
print('There are {} players in the table.'.format(len(playerElements)))
name_to_ele_map = {}
    for playerEle, plusEle in zip(playerElements, plusElements):
        player_name = playerEle.text.replace(".", "") # C.J. McCollum -> CJ McCollum
player_name = player_name.replace(",", "") # Dennis Smith, Jr.
match = re.search(r'^(\S+\s\S+)(\s\S+)*$', player_name) # Larry Nance Jr. -> Larry Nance, Glen Robinson III -> Glen Robinson
player_name = match.group(1)
# print(player_name)
name_to_ele_map[player_name] = plusEle
print('Set player ranks...')
for i, player_name in enumerate(player_list, start = 1):
# just need to rank the top n players
if i>n:
break
# special cases
if player_name not in name_to_ele_map:
if player_name == 'Guillermo Hernangomez':
player_name = 'Willy Hernangomez'
elif player_name == 'Juan Hernangomez':
player_name = 'Juancho Hernangomez'
elif player_name == 'Moe Harkless':
player_name = 'Maurice Harkless'
if player_name not in name_to_ele_map:
if i == 1:
print('***** Cannot find player {} in the table, please check the name and add it to the top manually *****'.format(player_name))
else:
print('***** Cannot find player {} in the table, please check the name and add it to the #{} position, just after {} *****'.format(player_name, i, player_list[i-2]))
continue
webEle = name_to_ele_map[player_name]
hov = ActionChains(driver).move_to_element(webEle)
hov.perform()
# time.sleep(2)
ActionChains(driver).move_to_element(webEle).click(webEle).perform()
# save result
WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.ID, 'submit-editprerank'))).click()
time.sleep(2)
# show result
print('show result')
driver.get(team_url + '/prerank')
time.sleep(60)
driver.quit()
if __name__ == '__main__':
import_player_ranks()
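The suffix-stripping regex used both when reading the CSV and when scraping the page is the crux of matching CSV names to Yahoo's player names. A standalone sketch of that normalization (the sample names are illustrative):

```python
import re

def normalize_name(name):
    # Drop periods and commas, then keep only the first two words,
    # discarding suffixes such as "Jr." or "III".
    name = name.replace(".", "").replace(",", "")
    match = re.search(r'^(\S+\s\S+)(\s\S+)*$', name)
    return match.group(1)

print(normalize_name("C.J. McCollum"))      # CJ McCollum
print(normalize_name("Larry Nance Jr."))    # Larry Nance
print(normalize_name("Glen Robinson III"))  # Glen Robinson
```

Note that two-word stripping is lossy: distinct players whose names differ only in a suffix would collide, which is why the script also carries a manual special-case table.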
|
How to get to the festival.
How to get to the festival including parking and shuttle information.
Showing the tour bus routes.
Map showing the streets of downtown closed for the festival.
Resource for general information about Mount Dora including attractions, dining and other valuable visitor information.
|
#!/usr/bin/env python
"""
@package mi.dataset.parser.dosta_abcdjm_mmp_cds
@file marine-integrations/mi/dataset/parser/dosta_abcdjm_mmp_cds.py
@author Mark Worden
@brief Parser for the DostaAbcdjmMmpCds dataset driver
Release notes:
initial release
"""
__author__ = 'Mark Worden'
__license__ = 'Apache 2.0'
from mi.core.log import get_logger
log = get_logger()
from mi.core.common import BaseEnum
from mi.dataset.parser.mmp_cds_base import MmpCdsParserDataParticle, MmpCdsParser
class DataParticleType(BaseEnum):
INSTRUMENT = 'dosta_abcdjm_mmp_cds_instrument'
class DostaAbcdjmMmpCdsParserDataParticleKey(BaseEnum):
CALIBRATED_PHASE = 'calibrated_phase'
OPTODE_TEMPERATURE = 'optode_temperature'
class DostaAbcdjmMmpCdsParserDataParticle(MmpCdsParserDataParticle):
"""
Class for parsing data from the DostaAbcdjmMmpCds data set
"""
_data_particle_type = DataParticleType.INSTRUMENT
def _get_mmp_cds_subclass_particle_params(self, dict_data):
"""
This method is required to be implemented by classes that extend the MmpCdsParserDataParticle class.
This implementation returns the particle parameters specific for DostaAbcdjmMmpCds. As noted in the
base, it is okay to allow the following exceptions to propagate: ValueError, TypeError, IndexError, KeyError.
@returns a list of particle params specific to DostaAbcdjmMmpCds
"""
calibrated_phase = self._encode_value(DostaAbcdjmMmpCdsParserDataParticleKey.CALIBRATED_PHASE,
dict_data['doconcs'], float)
optode_temperature = self._encode_value(DostaAbcdjmMmpCdsParserDataParticleKey.OPTODE_TEMPERATURE,
dict_data['t'], float)
subclass_particle_params = [calibrated_phase, optode_temperature]
return subclass_particle_params
class DostaAbcdjmMmpCdsParser(MmpCdsParser):
"""
Class for parsing data obtain from a DOSTA sensor, series A, B, C, D, J and M, as received from a McLane Moored
Profiler connected to the cabled docking station.
"""
def __init__(self,
config,
state,
stream_handle,
state_callback,
publish_callback,
*args, **kwargs):
"""
This method is a constructor that will instantiate a DostaAbcdjmMmpCdsParser object.
@param config The configuration for this MmpCdsParser parser
@param state The state the DostaAbcdjmMmpCdsParser should use to initialize itself
@param stream_handle The handle to the data stream containing the MmpCds data
@param state_callback The function to call upon detecting state changes
@param publish_callback The function to call to provide particles
"""
# Call the superclass constructor
super(DostaAbcdjmMmpCdsParser, self).__init__(config,
state,
stream_handle,
state_callback,
publish_callback,
*args, **kwargs)
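For readers without the marine-integrations framework installed, here is a framework-free sketch of what the subclass hook above produces; `encode_value` is a stand-in for the `_encode_value` helper inherited from `MmpCdsParserDataParticle`, and the sample readings are made up:

```python
def encode_value(name, value, encoder):
    # Stand-in for the base class helper: coerce the raw value with the
    # given type and pair it with its particle parameter name.
    return {'value_id': name, 'value': encoder(value)}

# Raw keys as they arrive in the MMP CDS record: 'doconcs' and 't'.
dict_data = {'doconcs': '31.52', 't': '11.03'}

subclass_particle_params = [
    encode_value('calibrated_phase', dict_data['doconcs'], float),
    encode_value('optode_temperature', dict_data['t'], float),
]
print(subclass_particle_params)
```

A bad or missing key raises `KeyError`/`ValueError`, which, as the docstring notes, the real implementation deliberately lets propagate to the base class.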
|
Saint-Germain is a destination for visitors from all around the world who come to shop and (most importantly) to eat. For this tour, we’ll be exploring some of the tiniest and most special food shops in this gastronomically-gifted neighborhood.
This upscale neighborhood boasts the most impressive concentration of chocolate and pastry shops in Paris, and perhaps the world. But don’t worry, we’ll also be sharing an impressive array of savory delights, from iconic breads to carefully aged cheeses to artisanal charcuterie. We’ll be tasting a bit along the way and finishing at a table where we’ll pair our “picnic” finds with different estate-bottled wines.
Price for a small group tour: 110€ per person, including all tastings. Our tours are all conducted in English and last approximately three hours. All tours finish with a seated tasting that includes wine, but we’ll be on our feet and moving for at least two hours throughout the tour. We’ll send you the exact meeting point upon booking, but you can plan to finish near the Saint-Germain-des-Prés métro stop. We maintain very small group sizes and are unable to add guests to tours that don’t have enough available tickets. Send us an email if your desired dates are sold out or if you have any other questions. We hope to meet you soon!
A refund of 75% is available for those who cancel with at least 48 hours advance notice. No refunds will be given for clients who cancel with less than 48 hours notice, or arrive more than 20 minutes late without calling, or don’t show up at all (no-shows). Our tours run rain or shine.
The Louvre? Sure. The Eiffel Tower? Yeah, you and 10,000 of your closest friends. Food tour of the gastronomic capital of the world? Insanely great. Maximum of 7 people. Three food shops, 1 food market and 1 wine shop. A taste at each of the 3 food shops, and an indoor picnic of food market items in the back room of the wine shop. Each purveyor was world class in their respective speciality. Seriously world class. Our guide Patrick is the most convivial and charming Irishman you’re likely to meet. Incredible command of the history and product of each shop as well as deep background of each speciality: bakery goods, chocolate, macarons, cheese, charcuterie and wine. The curveball? I am wheelchair bound and the “walking” tour was easy to roll with only 1 narrow doorway. The best experience of our Paris stay. Thanks Patrick.
|
#!/usr/bin/env python
# -*- Coding: UTF-8 -*-
# ---------------------------------------------------------------------------
# Open Asset Import Library (ASSIMP)
# ---------------------------------------------------------------------------
#
# Copyright (c) 2006-2010, ASSIMP Development Team
#
# All rights reserved.
#
# Redistribution and use of this software in source and binary forms,
# with or without modification, are permitted provided that the following
# conditions are met:
#
# * Redistributions of source code must retain the above
# copyright notice, this list of conditions and the
# following disclaimer.
#
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the
# following disclaimer in the documentation and/or other
# materials provided with the distribution.
#
# * Neither the name of the ASSIMP team, nor the names of its
# contributors may be used to endorse or promote products
# derived from this software without specific prior
# written permission of the ASSIMP Development Team.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# ---------------------------------------------------------------------------
"""Update PyAssimp's data structures to keep up with the
C/C++ headers.
This script is meant to be executed in the source tree, directly from
port/PyAssimp/gen
"""
import os
import re
#==[regexps]=================================================
# Clean desc
REdefine = re.compile(r''
r'(?P<desc>)' # /** *desc */
r'#\s*define\s(?P<name>[^(\n]+?)\s(?P<code>.+)$' # #define name value
, re.MULTILINE)
# Get structs
REstructs = re.compile(r''
#r'//\s?[\-]*\s(?P<desc>.*?)\*/\s' # /** *desc */
#r'//\s?[\-]*(?P<desc>.*?)\*/(?:.*?)' # garbage
r'//\s?[\-]*\s(?P<desc>.*?)\*/\W*?' # /** *desc */
r'struct\s(?:ASSIMP_API\s)?(?P<name>[a-z][a-z0-9_]\w+\b)' # struct name
r'[^{]*?\{' # {
r'(?P<code>.*?)' # code
r'\}\s*(PACK_STRUCT)?;' # };
, re.IGNORECASE + re.DOTALL + re.MULTILINE)
# Clean desc
REdesc = re.compile(r''
r'^\s*?([*]|/\*\*)(?P<line>.*?)' # * line
, re.IGNORECASE + re.DOTALL + re.MULTILINE)
# Remove #ifdef __cplusplus
RErmifdef = re.compile(r''
r'#ifdef __cplusplus' # #ifdef __cplusplus
r'(?P<code>.*)' # code
r'#endif(\s*//\s*!?\s*__cplusplus)*' # #endif
, re.IGNORECASE + re.DOTALL)
# Replace comments
RErpcom = re.compile(r''
r'\s*(/\*+\s|\*+/|\B\*\s|///?!?)' # /**
r'(?P<line>.*?)' # * line
, re.IGNORECASE + re.DOTALL)
# Restructure
def GetType(type, prefix='c_'):
t = type
while t.endswith('*'):
t = t[:-1]
if t[:5] == 'const':
t = t[5:]
# skip some types
if t in skiplist:
return None
t = t.strip()
types = {'unsigned int':'uint', 'unsigned char':'ubyte',}
if t in types:
t = types[t]
t = prefix + t
while type.endswith('*'):
t = "POINTER(" + t + ")"
type = type[:-1]
return t
def restructure( match ):
type = match.group("type")
if match.group("struct") == "":
type = GetType(type)
elif match.group("struct") == "C_ENUM ":
type = "c_uint"
else:
type = GetType(type[2:], '')
if type is None:
return ''
if match.group("index"):
type = type + "*" + match.group("index")
result = ""
for name in match.group("name").split(','):
result += "(\"" + name.strip() + "\", "+ type + "),"
return result
RErestruc = re.compile(r''
r'(?P<struct>C_STRUCT\s|C_ENUM\s|)' # [C_STRUCT]
r'(?P<type>\w+\s?\w+?[*]*)\s' # type
#r'(?P<name>\w+)' # name
r'(?P<name>\w+|[a-z0-9_, ]+)' # name
r'(:?\[(?P<index>\w+)\])?;' # []; (optional)
, re.DOTALL)
#==[template]================================================
template = """
class $NAME$(Structure):
\"\"\"
$DESCRIPTION$
\"\"\"
$DEFINES$
_fields_ = [
$FIELDS$
]
"""
templateSR = """
class $NAME$(Structure):
\"\"\"
$DESCRIPTION$
\"\"\"
$DEFINES$
$NAME$._fields_ = [
$FIELDS$
]
"""
skiplist = ("FileIO", "File", "locateFromAssimpHeap",'LogStream','MeshAnim','AnimMesh')
#============================================================
def Structify(fileName):
file = open(fileName, 'r')
text = file.read()
result = []
# Get defines.
defs = REdefine.findall(text)
# Create defines
defines = "\n"
for define in defs:
# Clean desc
desc = REdesc.sub('', define[0])
# Replace comments
desc = RErpcom.sub('#\g<line>', desc)
defines += desc
if len(define[2].strip()):
            # skip non-integral defines, we can't support them right now
try:
int(define[2],0)
except:
continue
defines += " "*4 + define[1] + " = " + define[2] + "\n"
# Get structs
rs = REstructs.finditer(text)
fileName = os.path.basename(fileName)
    print(fileName)
for r in rs:
name = r.group('name')[2:]
desc = r.group('desc')
# Skip some structs
if name in skiplist:
continue
text = r.group('code')
# Clean desc
desc = REdesc.sub('', desc)
desc = "See '"+ fileName +"' for details." #TODO
# Remove #ifdef __cplusplus
text = RErmifdef.sub('', text)
# Whether the struct contains more than just POD
primitive = text.find('C_STRUCT') == -1
# Restructure
text = RErestruc.sub(restructure, text)
# Replace comments
text = RErpcom.sub('# \g<line>', text)
text = text.replace("),#", "),\n#")
text = text.replace("#", "\n#")
text = "".join([l for l in text.splitlines(True) if not l.strip().endswith("#")]) # remove empty comment lines
# Whether it's selfreferencing: ex. struct Node { Node* parent; };
selfreferencing = text.find('POINTER('+name+')') != -1
complex = name == "Scene"
# Create description
description = ""
for line in desc.split('\n'):
description += " "*4 + line.strip() + "\n"
description = description.rstrip()
# Create fields
fields = ""
for line in text.split('\n'):
fields += " "*12 + line.strip() + "\n"
fields = fields.strip()
if selfreferencing:
templ = templateSR
else:
templ = template
# Put it all together
text = templ.replace('$NAME$', name)
text = text.replace('$DESCRIPTION$', description)
text = text.replace('$FIELDS$', fields)
if ((name.lower() == fileName.split('.')[0][2:].lower()) and (name != 'Material')) or name == "String":
text = text.replace('$DEFINES$', defines)
else:
text = text.replace('$DEFINES$', '')
result.append((primitive, selfreferencing, complex, text))
return result
text = "#-*- coding: UTF-8 -*-\n\n"
text += "from ctypes import POINTER, c_int, c_uint, c_size_t, c_char, c_float, Structure, c_char_p, c_double, c_ubyte\n\n"
structs1 = ""
structs2 = ""
structs3 = ""
structs4 = ""
path = '../../../include/assimp'
files = os.listdir (path)
#files = ["aiScene.h", "aiTypes.h"]
for fileName in files:
if fileName.endswith('.h'):
for struct in Structify(os.path.join(path, fileName)):
primitive, sr, complex, struct = struct
if primitive:
structs1 += struct
elif sr:
structs2 += struct
elif complex:
structs4 += struct
else:
structs3 += struct
text += structs1 + structs2 + structs3 + structs4
file = open('structs.py', 'w')
file.write(text)
file.close()
print("Generation done. You can now review the file 'structs.py' and merge it.")
|
Bag "Halzan " which came up in 2014 fall and winter becomes the novel 5Way bag overturning conventional common sense. It becomes it is simple, and slender made, but five ways of the clutch handbag long shoulder shortstop shoulder cross shoulder bag can arrange it and transform myself into the multi-bag which is available in various scenes! Because it comes from harness, it is a point that a handle becomes the form of the stirrup (stirrup). The size is not too big, too and is available with unisex because it is not too small. Bleu nuit where the color expressed night mysterious, calm blue. It is refined, and the blue with the depth has a few casual usability in a monotone color. "Halzan " usable formally casually is a check required!
*The strap shows damage at the belt holes.
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Created on Oct 16, 2015
# @author: Bo Zhao
# @email: bo_zhao@hks.harvard.edu
# @website: http://yenching.org
# @organization: Harvard Kennedy School
from wbcrawler.sna import generate_sematic_network
# generate_sematic_network(keywords=[u'社保', u'社会保险'], depth=[10, 10, 10], w2v_file=u'insurance/w2v.bin', gexf_file=u"insurance/社保.gexf")
# generate_sematic_network(keywords=[u'延退', u'延迟', u'65'], depth=[10, 10, 10], w2v_file=u'insurance/w2v.bin', gexf_file=u"insurance/延迟.gexf") # u'社保亏空'
# generate_sematic_network(keywords=[u'公积金'], depth=[10, 10, 10], w2v_file=u'insurance/w2v.bin', gexf_file=u"insurance/公积金.gexf")
# generate_sematic_network(keywords=[u'报销'], depth=[10, 10, 10], w2v_file=u'insurance/w2v.bin', gexf_file=u"insurance/报销.gexf")
# generate_sematic_network(keywords=[u'二孩', u'二胎'], depth=[10, 10, 10], w2v_file=u'insurance/w2v.bin', gexf_file=u"five/二孩.gexf")
# generate_sematic_network(keywords=[u'二孩', u'二胎'], depth=[10, 10, 10], w2v_file=u'insurance/w2v.bin', gexf_file=u"five/二孩.gexf")
# generate_sematic_network(keywords=[u'商业保险'], depth=[10, 10, 10], w2v_file=u'insurance/w2v.bin', gexf_file=u"insurance/商险.gexf")
# generate_sematic_network(keywords=[u'亏空', u'缺口', u'财政补贴'], depth=[10, 10, 1], w2v_file=u'insurance/w2v.bin', gexf_file=u"insurance/社保亏空2.gexf")
|
Allow me to take a personal privilege here and post the new brochure I have been working on for Restoration Systems. I post this because, first of all, most blogs have off-subject insights into the blog author’s other interests. And second, this is the kind of economic responsibility I feel above and beyond my fondness for the Cosmic Tusk. For the most part, my duties at RS are why I come and go, wax and wane, here as your subject maven. I enjoy producing good media for good ideas — but I can only handle one good idea at a time.
|
# Copyright (C) 2014-2016 Andrey Antukh <niwi@niwi.nz>
# Copyright (C) 2014-2016 Jesús Espino <jespinog@gmail.com>
# Copyright (C) 2014-2016 David Barragán <bameda@dbarragan.com>
# Copyright (C) 2014-2016 Alejandro Alonso <alejandro.alonso@kaleidos.net>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import sys
from django.apps import AppConfig
from django.db.models import signals
def connect_events_signals():
from . import signal_handlers as handlers
signals.post_save.connect(handlers.on_save_any_model, dispatch_uid="events_change")
signals.post_delete.connect(handlers.on_delete_any_model, dispatch_uid="events_delete")
def disconnect_events_signals():
from . import signal_handlers as handlers
signals.post_save.disconnect(dispatch_uid="events_change")
signals.post_delete.disconnect(dispatch_uid="events_delete")
class EventsAppConfig(AppConfig):
name = "taiga.events"
verbose_name = "Events App Config"
def ready(self):
connect_events_signals()
|
Have you struggled to stay on budget or track your spending? Let me guess: it seems like a punishment, or a restriction on your spending freedom.
Today I talked with a client who has been struggling to feel in control of her finances. She’s tried budgeting, but as soon as she felt money getting tight, she would break out the credit cards and swipe away. That temporary want-based shopping knocked her down even more, to a point where she gave up entirely.
What if you tracked your spending, but wrote things down in a “Want” or “Need” column?
So we now have realistic tracking with accountability. If you’d like to start keeping track, feel free to download this form as an Excel or printable PDF.
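If you'd rather tally the two columns automatically than by hand, the idea fits in a few lines of Python; the categories and amounts below are invented for illustration:

```python
# Each entry: (item, amount, which column it belongs in).
expenses = [
    ("groceries", 82.40, "need"),
    ("rent", 950.00, "need"),
    ("streaming", 15.99, "want"),
    ("new shoes", 64.00, "want"),
]

# Sum each column so the want/need split is visible at a glance.
totals = {"want": 0.0, "need": 0.0}
for item, amount, column in expenses:
    totals[column] += amount

print("Needs: ${:.2f}".format(totals["need"]))  # Needs: $1032.40
print("Wants: ${:.2f}".format(totals["want"]))  # Wants: $79.99
```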
|
#!/usr/bin/env python
"""A kernel that creates a new ASCII file with a given size and name.
"""
__author__ = "The ExTASY project <vivek.balasubramanian@rutgers.edu>"
__copyright__ = "Copyright 2015, http://www.extasy-project.org/"
__license__ = "MIT"
from copy import deepcopy
from radical.ensemblemd.exceptions import ArgumentError
from radical.ensemblemd.exceptions import NoKernelConfigurationError
from radical.ensemblemd.engine import get_engine
from radical.ensemblemd.kernel_plugins.kernel_base import KernelBase
# ------------------------------------------------------------------------------
_KERNEL_INFO = {
"name": "custom.tleap",
"description": "Creates a new file of given size and fills it with random ASCII characters.",
"arguments": {
"--numofsims=":
{
"mandatory": True,
"description": "No. of frontpoints = No. of simulation CUs"
},
"--cycle=":
{
"mandatory": True,
"description": "Output filename for postexec"
}
},
"machine_configs":
{
"*": {
"environment" : {"FOO": "bar"},
"pre_exec" : [],
"executable" : "python",
"uses_mpi" : False
},
"xsede.stampede":
{
"environment" : {},
"pre_exec" : [
"module load TACC",
"module load intel/13.0.2.146",
"module load python/2.7.9",
"module load netcdf/4.3.2",
"module load hdf5/1.8.13",
"export AMBERHOME=/opt/apps/intel13/mvapich2_1_9/amber/12.0",
"export PYTHONPATH=//work/02998/ardi/coco_installation/lib/python2.7/site-packages:$PYTHONPATH",
"export PATH=/work/02998/ardi/coco_installation/bin:$AMBERHOME/bin:$PATH"],
"executable" : ["python"],
"uses_mpi" : False
},
"epsrc.archer":
{
"environment" : {},
"pre_exec" : [
"module load python-compute/2.7.6",
"module load pc-numpy",
"module load pc-scipy",
"module load pc-coco",
"module load pc-netcdf4-python",
"module load amber"],
"executable" : ["python"],
"uses_mpi" : False
},
}
}
# ------------------------------------------------------------------------------
#
class kernel_tleap(KernelBase):
    def __init__(self):
        """Le constructor."""
        super(kernel_tleap, self).__init__(_KERNEL_INFO)
# --------------------------------------------------------------------------
#
@staticmethod
def get_name():
return _KERNEL_INFO["name"]
def _bind_to_resource(self, resource_key):
"""(PRIVATE) Implements parent class method.
"""
if resource_key not in _KERNEL_INFO["machine_configs"]:
if "*" in _KERNEL_INFO["machine_configs"]:
# Fall-back to generic resource key
resource_key = "*"
else:
raise NoKernelConfigurationError(kernel_name=_KERNEL_INFO["name"], resource_key=resource_key)
cfg = _KERNEL_INFO["machine_configs"][resource_key]
executable = cfg["executable"]
arguments = ['postexec.py','{0}'.format(self.get_arg("--numofsims=")),'{0}'.format(self.get_arg("--cycle="))]
self._executable = executable
self._arguments = arguments
self._environment = cfg["environment"]
self._uses_mpi = cfg["uses_mpi"]
self._pre_exec = cfg["pre_exec"]
self._post_exec = None
# ------------------------------------------------------------------------------
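`_bind_to_resource` resolves the machine configuration by exact resource key, falling back to the generic `"*"` entry when no exact match exists. That lookup pattern in isolation (the config dict here is a toy stand-in):

```python
def pick_config(machine_configs, resource_key):
    # Prefer an exact resource match; otherwise fall back to the
    # generic "*" entry; otherwise there is no usable configuration.
    if resource_key not in machine_configs:
        if "*" in machine_configs:
            resource_key = "*"
        else:
            raise KeyError("no configuration for " + resource_key)
    return machine_configs[resource_key]

configs = {"*": {"executable": "python"},
           "xsede.stampede": {"executable": ["python"]}}
print(pick_config(configs, "xsede.stampede"))  # {'executable': ['python']}
print(pick_config(configs, "epsrc.archer"))    # {'executable': 'python'}
```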
|
Set within the second part of the popular Greenwich Millennium Village, this modern and spacious one bedroom apartment offers a smart interior & stunning views of the Canary Wharf Skyline. Situated on the seventh floor and benefiting from a secure entry system, the property comprises an open-plan reception room with access to a well proportioned balcony and a simply beautiful integrated kitchen. You also benefit from a well decorated double bedroom & ample storage throughout. Greenwich Millennium Village offers a 24 hour concierge, as well as boasting a quiet residential feel within easy reach of the O2 arena and the beautiful centre of Greenwich where a fabulous selection of shops, bars and restaurants can be found as well as the relaxing retreat of Greenwich Park. Local transport links include North Greenwich Station (Jubilee Line), Westcombe Park Station (National Rail) and a choice of useful A roads providing access into and out of the City.
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2017, Simon Dodsley (simon@purestorage.com)
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: purefa_hg
version_added: "2.4"
short_description: Create, Delete and Modify hostgroups on Pure Storage FlashArray
description:
- This module creates, deletes or modifies hostgroups on Pure Storage FlashArray.
author: Simon Dodsley (@simondodsley)
options:
hostgroup:
description:
- Host Name.
required: true
state:
description:
- Creates or modifies hostgroup.
required: false
default: present
choices: [ "present", "absent" ]
host:
description:
- List of existing hosts to add to hostgroup.
required: false
volume:
description:
- List of existing volumes to add to hostgroup.
required: false
extends_documentation_fragment:
- purestorage
'''
EXAMPLES = '''
- name: Create new hostgroup
purefa_hg:
hostgroup: foo
fa_url: 10.10.10.2
api_token: e31060a7-21fc-e277-6240-25983c6c4592
- name: Delete hostgroup - this will disconnect all hosts and volume in the hostgroup
purefa_hg:
hostgroup: foo
state: absent
fa_url: 10.10.10.2
api_token: e31060a7-21fc-e277-6240-25983c6c4592
- name: Create host group with hosts and volumes
purefa_hg:
hostgroup: bar
host:
- host1
- host2
volume:
- vol1
- vol2
fa_url: 10.10.10.2
api_token: e31060a7-21fc-e277-6240-25983c6c4592
'''
RETURN = '''
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.pure import get_system, purefa_argument_spec
HAS_PURESTORAGE = True
try:
from purestorage import purestorage
except ImportError:
HAS_PURESTORAGE = False
def get_hostgroup(module, array):
hostgroup = None
for h in array.list_hgroups():
if h["name"] == module.params['hostgroup']:
hostgroup = h
break
return hostgroup
def make_hostgroup(module, array):
changed = True
if not module.check_mode:
host = array.create_hgroup(module.params['hostgroup'])
if module.params['host']:
array.set_hgroup(module.params['hostgroup'], hostlist=module.params['host'])
if module.params['volume']:
for v in module.params['volume']:
array.connect_hgroup(module.params['hostgroup'], v)
module.exit_json(changed=changed)
def update_hostgroup(module, array):
    # Modifying an existing hostgroup in place is not yet implemented;
    # simply report that nothing changed.
    changed = False
    hostgroup = module.params['hostgroup']
    module.exit_json(changed=changed)
def delete_hostgroup(module, array):
changed = True
if not module.check_mode:
for vol in array.list_hgroup_connections(module.params['hostgroup']):
array.disconnect_hgroup(module.params['hostgroup'], vol["vol"])
host = array.get_hgroup(module.params['hostgroup'])
array.set_hgroup(module.params['hostgroup'], remhostlist=host['hosts'])
array.delete_hgroup(module.params['hostgroup'])
module.exit_json(changed=changed)
def main():
argument_spec = purefa_argument_spec()
argument_spec.update(
dict(
hostgroup=dict(required=True),
state=dict(default='present', choices=['present', 'absent']),
host=dict(type='list'),
volume=dict(type='list'),
)
)
module = AnsibleModule(argument_spec, supports_check_mode=True)
if not HAS_PURESTORAGE:
        module.fail_json(msg='purestorage sdk is required for this module')
state = module.params['state']
array = get_system(module)
hostgroup = get_hostgroup(module, array)
if module.params['host']:
try:
for h in module.params['host']:
array.get_host(h)
        except Exception:
module.fail_json(msg='Host not found')
if module.params['volume']:
try:
for v in module.params['volume']:
array.get_volume(v)
        except Exception:
module.fail_json(msg='Volume not found')
if hostgroup and state == 'present':
update_hostgroup(module, array)
elif hostgroup and state == 'absent':
delete_hostgroup(module, array)
elif hostgroup is None and state == 'absent':
module.exit_json(changed=False)
else:
make_hostgroup(module, array)
if __name__ == '__main__':
main()
|
Here's how Shah Rukh spent his birthday!
After celebrating with the family, Shah Rukh Khan thanks his fans.
Shah Rukh Khan, who turns 53 today, November 2, had a late-night birthday celebration the day before with his family and close friend Karan Johar at his seaside villa, Mannat.
The celebrations included a game of Mono Deal with his girls, and greeting throngs of fans outside Mannat.
And how did the superstar spend the day?
Shah Rukh greeted his fans once again in the morning, as they struggled for a glimpse of their idol.
Even as the fans craned their necks for a peek, Shah Rukh tried his best to make himself visible to everyone, by climbing as high as he could.
'I believe ownership makes one very small. I believe I am the luckiest man that I no longer own my Birthday also...it belongs to all these beautiful ppl who love me and my family so much. Thank you God,' SRK tweeted.
Fans wave to their idol and try to catch that lucky picture.
Later, in the afternoon, Shah Rukh unveiled the trailer of his new film, Zero.
Directed by Aanand L Rai, it co-stars Anushka Sharma and Katrina Kaif.
We certainly loved the trailer!
|
"""AoC 2015.03 problem solver.
Takes input from STDIN by default.
(c) Alexander Kashev, 2017
"""
import sys
def chars(file, chunkSize=4096):
"""
    Take a file object, read it in chunks and iterate over it one character at a time.
Keyword arguments:
file --- a file object to iterate over
chunkSize --- buffer size for file reads (default=4096)
"""
chunk = file.read(chunkSize)
while chunk:
for char in chunk:
yield char
chunk = file.read(chunkSize)
def move(position, instruction):
"""
    Take a position and offset it based on instruction, or raise an error on invalid instruction.
Keyword arguments:
position --- current position as a tuple (x,y)
    instruction --- single-character instruction to move in ["^", "v", ">", "<"]
"""
if instruction == "^":
return (position[0], position[1] + 1)
elif instruction == "v":
return (position[0], position[1] - 1)
elif instruction == ">":
return (position[0] + 1, position[1])
elif instruction == "<":
return (position[0] - 1, position[1])
else:
raise ValueError("Instruction '{}' not recognized".format(instruction))
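
# Illustrative sanity checks for the mapping above (assumed usage, not part of
# the original file):
#   move((0, 0), "^")  -> (0, 1)
#   move((2, 3), "<")  -> (1, 3)
#   move((0, 0), "x")  raises ValueError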
def solver(file):
"""
Take a file object with input and solve AoC 2015.03 problem on the input.
Keyword arguments:
file --- a file object to read input from
"""
alone = set([(0, 0)])
alone_position = (0, 0)
together = set([(0, 0)])
santa_position = (0, 0)
robot_position = (0, 0)
robot_turn = False
for instruction in chars(file):
alone_position = move(alone_position, instruction)
alone.add(alone_position)
if robot_turn:
robot_position = move(robot_position, instruction)
together.add(robot_position)
else:
santa_position = move(santa_position, instruction)
together.add(santa_position)
robot_turn = not robot_turn
return (len(alone), len(together))
if __name__ == "__main__":
solution = solver(sys.stdin)
print("Part A: Santa alone will deliver presents to {} houses.".format(solution[0]))
print("Part B: Santa and Robo-Santa will deliver presents to {} houses.".format(solution[1]))
|
The headline: “U.S. investigates potential covert Russian plan to disrupt November elections.” To those unused to this kind of story, I can imagine that headline, from The Post this week, seemed strange. A secret Russian plot to throw a U.S. election through a massive hack of the electoral system? It sounds like a thriller, or a movie starring Harrison Ford.
1. Donald Trump, who is advised by several people with Russian links, will repeat and strengthen his “the election is rigged” narrative. The “polls are lying,” the “real” people aren’t being counted, the corrupt elites/Clinton clan/mainstream media are colluding to prevent him from taking office. Trump will continue to associate himself with Brexit — a vote that pollsters really did get wrong — and with Nigel Farage, the far-right British politician who now promotes Trump (and has, incidentally, just been offered his own show on RT, the Russian state-sponsored TV channel).
2. Russia will continue to distribute and publish the material its hackers have already obtained from attacks on the Democratic National Committee, George Soros’s Open Society Foundation, former NATO supreme commander Gen. Philip Breedlove and probably others. The point will be to discredit not just Hillary Clinton but also the U.S. democratic process and, again, the “elite” who supposedly run it. As we have learned in multiple countries, even benign private conversations and emails can, when published in a newspaper, look sinister. Speculation seems ominous; jokes menacing. Almost any leak of anything is damaging.
3. On or before Election Day, Russian hackers will seek to break into the U.S. voting system. We certainly know that this is possible: Hackers have already targeted voter registration systems in Illinois and Arizona, according to The Post, and the FBI has informed Arizona officials that it suspects Russian hacking teams. Possible breaches are being investigated in several other states, and it’s not hard to imagine that many are vulnerable. The U.S. election system is decentralized and in some places frankly amateurish, as we learned in Florida in 2000.
4. The Russians will attempt to throw the election. They might try to get Trump elected. Alternatively — and this would, of course, be even more devastating — they might try to rig the election for Clinton, perhaps leaving a trail of evidence designed to connect the rigging operation to Clinton’s campaign.
6. More likely, the hack will fail, or never even get off the ground. But what’s the downside in trying, or even in letting it be known that it was tried? Rumors of election fraud can create the same hysteria as real election fraud. Already, Russia’s propaganda wire service, sputniknews.com, has speculated that The Post’s article on Russian electoral manipulation is a clever plot “to hide the actual efforts at electoral manipulation” and a “good cover for vote-rigging.” That thought will be tweeted and posted and shared by a whole ecosystem of professional trolls and computer bots, over and over again until it finally shows up on authentic pro-Trump websites.
7. And what’s the downside for Trump? If he wins, he wins. If he loses — then there are all kinds of ways to make money from the “election was rigged” narrative. He could start a media company focused on the conspiracy. He could start a national movement. He could make movies. He could be a hero. Whatever happens, the political process is undermined, social trust plummets further and the appeal of U.S. democracy, both at home and around the world, diminishes. And that, of course, is the point.
|
import numpy as np
from numpy.testing import (assert_, assert_approx_equal,
assert_allclose, assert_array_equal, assert_equal,
assert_array_almost_equal_nulp, suppress_warnings)
import pytest
from pytest import raises as assert_raises
from scipy import signal
from scipy.fft import fftfreq
from scipy.signal import (periodogram, welch, lombscargle, csd, coherence,
spectrogram, stft, istft, check_COLA, check_NOLA)
from scipy.signal.spectral import _spectral_helper
class TestPeriodogram(object):
def test_real_onesided_even(self):
x = np.zeros(16)
x[0] = 1
f, p = periodogram(x)
assert_allclose(f, np.linspace(0, 0.5, 9))
q = np.ones(9)
q[0] = 0
q[-1] /= 2.0
q /= 8
assert_allclose(p, q)
def test_real_onesided_odd(self):
x = np.zeros(15)
x[0] = 1
f, p = periodogram(x)
assert_allclose(f, np.arange(8.0)/15.0)
q = np.ones(8)
q[0] = 0
q *= 2.0/15.0
assert_allclose(p, q, atol=1e-15)
def test_real_twosided(self):
x = np.zeros(16)
x[0] = 1
f, p = periodogram(x, return_onesided=False)
assert_allclose(f, fftfreq(16, 1.0))
q = np.full(16, 1/16.0)
q[0] = 0
assert_allclose(p, q)
def test_real_spectrum(self):
x = np.zeros(16)
x[0] = 1
f, p = periodogram(x, scaling='spectrum')
g, q = periodogram(x, scaling='density')
assert_allclose(f, np.linspace(0, 0.5, 9))
assert_allclose(p, q/16.0)
def test_integer_even(self):
x = np.zeros(16, dtype=int)
x[0] = 1
f, p = periodogram(x)
assert_allclose(f, np.linspace(0, 0.5, 9))
q = np.ones(9)
q[0] = 0
q[-1] /= 2.0
q /= 8
assert_allclose(p, q)
def test_integer_odd(self):
x = np.zeros(15, dtype=int)
x[0] = 1
f, p = periodogram(x)
assert_allclose(f, np.arange(8.0)/15.0)
q = np.ones(8)
q[0] = 0
q *= 2.0/15.0
assert_allclose(p, q, atol=1e-15)
def test_integer_twosided(self):
x = np.zeros(16, dtype=int)
x[0] = 1
f, p = periodogram(x, return_onesided=False)
assert_allclose(f, fftfreq(16, 1.0))
q = np.full(16, 1/16.0)
q[0] = 0
assert_allclose(p, q)
def test_complex(self):
x = np.zeros(16, np.complex128)
x[0] = 1.0 + 2.0j
f, p = periodogram(x, return_onesided=False)
assert_allclose(f, fftfreq(16, 1.0))
q = np.full(16, 5.0/16.0)
q[0] = 0
assert_allclose(p, q)
def test_unk_scaling(self):
assert_raises(ValueError, periodogram, np.zeros(4, np.complex128),
scaling='foo')
def test_nd_axis_m1(self):
x = np.zeros(20, dtype=np.float64)
x = x.reshape((2,1,10))
x[:,:,0] = 1.0
f, p = periodogram(x)
assert_array_equal(p.shape, (2, 1, 6))
assert_array_almost_equal_nulp(p[0,0,:], p[1,0,:], 60)
f0, p0 = periodogram(x[0,0,:])
assert_array_almost_equal_nulp(p0[np.newaxis,:], p[1,:], 60)
def test_nd_axis_0(self):
x = np.zeros(20, dtype=np.float64)
x = x.reshape((10,2,1))
x[0,:,:] = 1.0
f, p = periodogram(x, axis=0)
assert_array_equal(p.shape, (6,2,1))
assert_array_almost_equal_nulp(p[:,0,0], p[:,1,0], 60)
f0, p0 = periodogram(x[:,0,0])
assert_array_almost_equal_nulp(p0, p[:,1,0])
def test_window_external(self):
x = np.zeros(16)
x[0] = 1
f, p = periodogram(x, 10, 'hann')
win = signal.get_window('hann', 16)
fe, pe = periodogram(x, 10, win)
assert_array_almost_equal_nulp(p, pe)
assert_array_almost_equal_nulp(f, fe)
win_err = signal.get_window('hann', 32)
assert_raises(ValueError, periodogram, x,
10, win_err) # win longer than signal
def test_padded_fft(self):
x = np.zeros(16)
x[0] = 1
f, p = periodogram(x)
fp, pp = periodogram(x, nfft=32)
assert_allclose(f, fp[::2])
assert_allclose(p, pp[::2])
assert_array_equal(pp.shape, (17,))
def test_empty_input(self):
f, p = periodogram([])
assert_array_equal(f.shape, (0,))
assert_array_equal(p.shape, (0,))
for shape in [(0,), (3,0), (0,5,2)]:
f, p = periodogram(np.empty(shape))
assert_array_equal(f.shape, shape)
assert_array_equal(p.shape, shape)
def test_empty_input_other_axis(self):
for shape in [(3,0), (0,5,2)]:
f, p = periodogram(np.empty(shape), axis=1)
assert_array_equal(f.shape, shape)
assert_array_equal(p.shape, shape)
def test_short_nfft(self):
x = np.zeros(18)
x[0] = 1
f, p = periodogram(x, nfft=16)
assert_allclose(f, np.linspace(0, 0.5, 9))
q = np.ones(9)
q[0] = 0
q[-1] /= 2.0
q /= 8
assert_allclose(p, q)
def test_nfft_is_xshape(self):
x = np.zeros(16)
x[0] = 1
f, p = periodogram(x, nfft=16)
assert_allclose(f, np.linspace(0, 0.5, 9))
q = np.ones(9)
q[0] = 0
q[-1] /= 2.0
q /= 8
assert_allclose(p, q)
def test_real_onesided_even_32(self):
x = np.zeros(16, 'f')
x[0] = 1
f, p = periodogram(x)
assert_allclose(f, np.linspace(0, 0.5, 9))
q = np.ones(9, 'f')
q[0] = 0
q[-1] /= 2.0
q /= 8
assert_allclose(p, q)
assert_(p.dtype == q.dtype)
def test_real_onesided_odd_32(self):
x = np.zeros(15, 'f')
x[0] = 1
f, p = periodogram(x)
assert_allclose(f, np.arange(8.0)/15.0)
q = np.ones(8, 'f')
q[0] = 0
q *= 2.0/15.0
assert_allclose(p, q, atol=1e-7)
assert_(p.dtype == q.dtype)
def test_real_twosided_32(self):
x = np.zeros(16, 'f')
x[0] = 1
f, p = periodogram(x, return_onesided=False)
assert_allclose(f, fftfreq(16, 1.0))
q = np.full(16, 1/16.0, 'f')
q[0] = 0
assert_allclose(p, q)
assert_(p.dtype == q.dtype)
def test_complex_32(self):
x = np.zeros(16, 'F')
x[0] = 1.0 + 2.0j
f, p = periodogram(x, return_onesided=False)
assert_allclose(f, fftfreq(16, 1.0))
q = np.full(16, 5.0/16.0, 'f')
q[0] = 0
assert_allclose(p, q)
assert_(p.dtype == q.dtype)
class TestWelch(object):
def test_real_onesided_even(self):
x = np.zeros(16)
x[0] = 1
x[8] = 1
f, p = welch(x, nperseg=8)
assert_allclose(f, np.linspace(0, 0.5, 5))
q = np.array([0.08333333, 0.15277778, 0.22222222, 0.22222222,
0.11111111])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_real_onesided_odd(self):
x = np.zeros(16)
x[0] = 1
x[8] = 1
f, p = welch(x, nperseg=9)
assert_allclose(f, np.arange(5.0)/9.0)
q = np.array([0.12477455, 0.23430933, 0.17072113, 0.17072113,
0.17072113])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_real_twosided(self):
x = np.zeros(16)
x[0] = 1
x[8] = 1
f, p = welch(x, nperseg=8, return_onesided=False)
assert_allclose(f, fftfreq(8, 1.0))
q = np.array([0.08333333, 0.07638889, 0.11111111, 0.11111111,
0.11111111, 0.11111111, 0.11111111, 0.07638889])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_real_spectrum(self):
x = np.zeros(16)
x[0] = 1
x[8] = 1
f, p = welch(x, nperseg=8, scaling='spectrum')
assert_allclose(f, np.linspace(0, 0.5, 5))
q = np.array([0.015625, 0.02864583, 0.04166667, 0.04166667,
0.02083333])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_integer_onesided_even(self):
x = np.zeros(16, dtype=int)
x[0] = 1
x[8] = 1
f, p = welch(x, nperseg=8)
assert_allclose(f, np.linspace(0, 0.5, 5))
q = np.array([0.08333333, 0.15277778, 0.22222222, 0.22222222,
0.11111111])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_integer_onesided_odd(self):
x = np.zeros(16, dtype=int)
x[0] = 1
x[8] = 1
f, p = welch(x, nperseg=9)
assert_allclose(f, np.arange(5.0)/9.0)
q = np.array([0.12477455, 0.23430933, 0.17072113, 0.17072113,
0.17072113])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_integer_twosided(self):
x = np.zeros(16, dtype=int)
x[0] = 1
x[8] = 1
f, p = welch(x, nperseg=8, return_onesided=False)
assert_allclose(f, fftfreq(8, 1.0))
q = np.array([0.08333333, 0.07638889, 0.11111111, 0.11111111,
0.11111111, 0.11111111, 0.11111111, 0.07638889])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_complex(self):
x = np.zeros(16, np.complex128)
x[0] = 1.0 + 2.0j
x[8] = 1.0 + 2.0j
f, p = welch(x, nperseg=8, return_onesided=False)
assert_allclose(f, fftfreq(8, 1.0))
q = np.array([0.41666667, 0.38194444, 0.55555556, 0.55555556,
0.55555556, 0.55555556, 0.55555556, 0.38194444])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_unk_scaling(self):
assert_raises(ValueError, welch, np.zeros(4, np.complex128),
scaling='foo', nperseg=4)
def test_detrend_linear(self):
x = np.arange(10, dtype=np.float64) + 0.04
f, p = welch(x, nperseg=10, detrend='linear')
assert_allclose(p, np.zeros_like(p), atol=1e-15)
def test_no_detrending(self):
x = np.arange(10, dtype=np.float64) + 0.04
f1, p1 = welch(x, nperseg=10, detrend=False)
f2, p2 = welch(x, nperseg=10, detrend=lambda x: x)
assert_allclose(f1, f2, atol=1e-15)
assert_allclose(p1, p2, atol=1e-15)
def test_detrend_external(self):
x = np.arange(10, dtype=np.float64) + 0.04
f, p = welch(x, nperseg=10,
detrend=lambda seg: signal.detrend(seg, type='l'))
assert_allclose(p, np.zeros_like(p), atol=1e-15)
def test_detrend_external_nd_m1(self):
x = np.arange(40, dtype=np.float64) + 0.04
x = x.reshape((2,2,10))
f, p = welch(x, nperseg=10,
detrend=lambda seg: signal.detrend(seg, type='l'))
assert_allclose(p, np.zeros_like(p), atol=1e-15)
def test_detrend_external_nd_0(self):
x = np.arange(20, dtype=np.float64) + 0.04
x = x.reshape((2,1,10))
x = np.rollaxis(x, 2, 0)
f, p = welch(x, nperseg=10, axis=0,
detrend=lambda seg: signal.detrend(seg, axis=0, type='l'))
assert_allclose(p, np.zeros_like(p), atol=1e-15)
def test_nd_axis_m1(self):
x = np.arange(20, dtype=np.float64) + 0.04
x = x.reshape((2,1,10))
f, p = welch(x, nperseg=10)
assert_array_equal(p.shape, (2, 1, 6))
assert_allclose(p[0,0,:], p[1,0,:], atol=1e-13, rtol=1e-13)
f0, p0 = welch(x[0,0,:], nperseg=10)
assert_allclose(p0[np.newaxis,:], p[1,:], atol=1e-13, rtol=1e-13)
def test_nd_axis_0(self):
x = np.arange(20, dtype=np.float64) + 0.04
x = x.reshape((10,2,1))
f, p = welch(x, nperseg=10, axis=0)
assert_array_equal(p.shape, (6,2,1))
assert_allclose(p[:,0,0], p[:,1,0], atol=1e-13, rtol=1e-13)
f0, p0 = welch(x[:,0,0], nperseg=10)
assert_allclose(p0, p[:,1,0], atol=1e-13, rtol=1e-13)
def test_window_external(self):
x = np.zeros(16)
x[0] = 1
x[8] = 1
f, p = welch(x, 10, 'hann', nperseg=8)
win = signal.get_window('hann', 8)
fe, pe = welch(x, 10, win, nperseg=None)
assert_array_almost_equal_nulp(p, pe)
assert_array_almost_equal_nulp(f, fe)
assert_array_equal(fe.shape, (5,)) # because win length used as nperseg
assert_array_equal(pe.shape, (5,))
assert_raises(ValueError, welch, x,
10, win, nperseg=4) # because nperseg != win.shape[-1]
win_err = signal.get_window('hann', 32)
assert_raises(ValueError, welch, x,
10, win_err, nperseg=None) # win longer than signal
def test_empty_input(self):
f, p = welch([])
assert_array_equal(f.shape, (0,))
assert_array_equal(p.shape, (0,))
for shape in [(0,), (3,0), (0,5,2)]:
f, p = welch(np.empty(shape))
assert_array_equal(f.shape, shape)
assert_array_equal(p.shape, shape)
def test_empty_input_other_axis(self):
for shape in [(3,0), (0,5,2)]:
f, p = welch(np.empty(shape), axis=1)
assert_array_equal(f.shape, shape)
assert_array_equal(p.shape, shape)
def test_short_data(self):
x = np.zeros(8)
x[0] = 1
        # For a string-like window, an input signal shorter than nperseg gives a
        # UserWarning and nperseg is set to x.shape[-1]
with suppress_warnings() as sup:
sup.filter(UserWarning, "nperseg = 256 is greater than input length = 8, using nperseg = 8")
f, p = welch(x,window='hann') # default nperseg
f1, p1 = welch(x,window='hann', nperseg=256) # user-specified nperseg
f2, p2 = welch(x, nperseg=8) # valid nperseg, doesn't give warning
assert_allclose(f, f2)
assert_allclose(p, p2)
assert_allclose(f1, f2)
assert_allclose(p1, p2)
def test_window_long_or_nd(self):
assert_raises(ValueError, welch, np.zeros(4), 1, np.array([1,1,1,1,1]))
assert_raises(ValueError, welch, np.zeros(4), 1,
np.arange(6).reshape((2,3)))
def test_nondefault_noverlap(self):
x = np.zeros(64)
x[::8] = 1
f, p = welch(x, nperseg=16, noverlap=4)
q = np.array([0, 1./12., 1./3., 1./5., 1./3., 1./5., 1./3., 1./5.,
1./6.])
assert_allclose(p, q, atol=1e-12)
def test_bad_noverlap(self):
assert_raises(ValueError, welch, np.zeros(4), 1, 'hann', 2, 7)
def test_nfft_too_short(self):
assert_raises(ValueError, welch, np.ones(12), nfft=3, nperseg=4)
def test_real_onesided_even_32(self):
x = np.zeros(16, 'f')
x[0] = 1
x[8] = 1
f, p = welch(x, nperseg=8)
assert_allclose(f, np.linspace(0, 0.5, 5))
q = np.array([0.08333333, 0.15277778, 0.22222222, 0.22222222,
0.11111111], 'f')
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
assert_(p.dtype == q.dtype)
def test_real_onesided_odd_32(self):
x = np.zeros(16, 'f')
x[0] = 1
x[8] = 1
f, p = welch(x, nperseg=9)
assert_allclose(f, np.arange(5.0)/9.0)
q = np.array([0.12477458, 0.23430935, 0.17072113, 0.17072116,
0.17072113], 'f')
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
assert_(p.dtype == q.dtype)
def test_real_twosided_32(self):
x = np.zeros(16, 'f')
x[0] = 1
x[8] = 1
f, p = welch(x, nperseg=8, return_onesided=False)
assert_allclose(f, fftfreq(8, 1.0))
q = np.array([0.08333333, 0.07638889, 0.11111111,
0.11111111, 0.11111111, 0.11111111, 0.11111111,
0.07638889], 'f')
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
assert_(p.dtype == q.dtype)
def test_complex_32(self):
x = np.zeros(16, 'F')
x[0] = 1.0 + 2.0j
x[8] = 1.0 + 2.0j
f, p = welch(x, nperseg=8, return_onesided=False)
assert_allclose(f, fftfreq(8, 1.0))
q = np.array([0.41666666, 0.38194442, 0.55555552, 0.55555552,
0.55555558, 0.55555552, 0.55555552, 0.38194442], 'f')
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
assert_(p.dtype == q.dtype,
'dtype mismatch, %s, %s' % (p.dtype, q.dtype))
def test_padded_freqs(self):
x = np.zeros(12)
nfft = 24
f = fftfreq(nfft, 1.0)[:nfft//2+1]
f[-1] *= -1
fodd, _ = welch(x, nperseg=5, nfft=nfft)
feven, _ = welch(x, nperseg=6, nfft=nfft)
assert_allclose(f, fodd)
assert_allclose(f, feven)
nfft = 25
f = fftfreq(nfft, 1.0)[:(nfft + 1)//2]
fodd, _ = welch(x, nperseg=5, nfft=nfft)
feven, _ = welch(x, nperseg=6, nfft=nfft)
assert_allclose(f, fodd)
assert_allclose(f, feven)
def test_window_correction(self):
A = 20
fs = 1e4
nperseg = int(fs//10)
fsig = 300
ii = int(fsig*nperseg//fs) # Freq index of fsig
tt = np.arange(fs)/fs
x = A*np.sin(2*np.pi*fsig*tt)
for window in ['hann', 'bartlett', ('tukey', 0.1), 'flattop']:
_, p_spec = welch(x, fs=fs, nperseg=nperseg, window=window,
scaling='spectrum')
freq, p_dens = welch(x, fs=fs, nperseg=nperseg, window=window,
scaling='density')
# Check peak height at signal frequency for 'spectrum'
assert_allclose(p_spec[ii], A**2/2.0)
# Check integrated spectrum RMS for 'density'
assert_allclose(np.sqrt(np.trapz(p_dens, freq)), A*np.sqrt(2)/2,
rtol=1e-3)
def test_axis_rolling(self):
np.random.seed(1234)
x_flat = np.random.randn(1024)
_, p_flat = welch(x_flat)
for a in range(3):
newshape = [1,]*3
newshape[a] = -1
x = x_flat.reshape(newshape)
_, p_plus = welch(x, axis=a) # Positive axis index
_, p_minus = welch(x, axis=a-x.ndim) # Negative axis index
assert_equal(p_flat, p_plus.squeeze(), err_msg=a)
assert_equal(p_flat, p_minus.squeeze(), err_msg=a-x.ndim)
def test_average(self):
x = np.zeros(16)
x[0] = 1
x[8] = 1
f, p = welch(x, nperseg=8, average='median')
assert_allclose(f, np.linspace(0, 0.5, 5))
q = np.array([.1, .05, 0., 1.54074396e-33, 0.])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
assert_raises(ValueError, welch, x, nperseg=8,
average='unrecognised-average')
class TestCSD:
def test_pad_shorter_x(self):
x = np.zeros(8)
y = np.zeros(12)
f = np.linspace(0, 0.5, 7)
c = np.zeros(7,dtype=np.complex128)
f1, c1 = csd(x, y, nperseg=12)
assert_allclose(f, f1)
assert_allclose(c, c1)
def test_pad_shorter_y(self):
x = np.zeros(12)
y = np.zeros(8)
f = np.linspace(0, 0.5, 7)
c = np.zeros(7,dtype=np.complex128)
f1, c1 = csd(x, y, nperseg=12)
assert_allclose(f, f1)
assert_allclose(c, c1)
def test_real_onesided_even(self):
x = np.zeros(16)
x[0] = 1
x[8] = 1
f, p = csd(x, x, nperseg=8)
assert_allclose(f, np.linspace(0, 0.5, 5))
q = np.array([0.08333333, 0.15277778, 0.22222222, 0.22222222,
0.11111111])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_real_onesided_odd(self):
x = np.zeros(16)
x[0] = 1
x[8] = 1
f, p = csd(x, x, nperseg=9)
assert_allclose(f, np.arange(5.0)/9.0)
q = np.array([0.12477455, 0.23430933, 0.17072113, 0.17072113,
0.17072113])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_real_twosided(self):
x = np.zeros(16)
x[0] = 1
x[8] = 1
f, p = csd(x, x, nperseg=8, return_onesided=False)
assert_allclose(f, fftfreq(8, 1.0))
q = np.array([0.08333333, 0.07638889, 0.11111111, 0.11111111,
0.11111111, 0.11111111, 0.11111111, 0.07638889])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_real_spectrum(self):
x = np.zeros(16)
x[0] = 1
x[8] = 1
f, p = csd(x, x, nperseg=8, scaling='spectrum')
assert_allclose(f, np.linspace(0, 0.5, 5))
q = np.array([0.015625, 0.02864583, 0.04166667, 0.04166667,
0.02083333])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_integer_onesided_even(self):
x = np.zeros(16, dtype=int)
x[0] = 1
x[8] = 1
f, p = csd(x, x, nperseg=8)
assert_allclose(f, np.linspace(0, 0.5, 5))
q = np.array([0.08333333, 0.15277778, 0.22222222, 0.22222222,
0.11111111])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_integer_onesided_odd(self):
x = np.zeros(16, dtype=int)
x[0] = 1
x[8] = 1
f, p = csd(x, x, nperseg=9)
assert_allclose(f, np.arange(5.0)/9.0)
q = np.array([0.12477455, 0.23430933, 0.17072113, 0.17072113,
0.17072113])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_integer_twosided(self):
x = np.zeros(16, dtype=int)
x[0] = 1
x[8] = 1
f, p = csd(x, x, nperseg=8, return_onesided=False)
assert_allclose(f, fftfreq(8, 1.0))
q = np.array([0.08333333, 0.07638889, 0.11111111, 0.11111111,
0.11111111, 0.11111111, 0.11111111, 0.07638889])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_complex(self):
x = np.zeros(16, np.complex128)
x[0] = 1.0 + 2.0j
x[8] = 1.0 + 2.0j
f, p = csd(x, x, nperseg=8, return_onesided=False)
assert_allclose(f, fftfreq(8, 1.0))
q = np.array([0.41666667, 0.38194444, 0.55555556, 0.55555556,
0.55555556, 0.55555556, 0.55555556, 0.38194444])
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
def test_unk_scaling(self):
assert_raises(ValueError, csd, np.zeros(4, np.complex128),
np.ones(4, np.complex128), scaling='foo', nperseg=4)
def test_detrend_linear(self):
x = np.arange(10, dtype=np.float64) + 0.04
f, p = csd(x, x, nperseg=10, detrend='linear')
assert_allclose(p, np.zeros_like(p), atol=1e-15)
def test_no_detrending(self):
x = np.arange(10, dtype=np.float64) + 0.04
f1, p1 = csd(x, x, nperseg=10, detrend=False)
f2, p2 = csd(x, x, nperseg=10, detrend=lambda x: x)
assert_allclose(f1, f2, atol=1e-15)
assert_allclose(p1, p2, atol=1e-15)
def test_detrend_external(self):
x = np.arange(10, dtype=np.float64) + 0.04
f, p = csd(x, x, nperseg=10,
detrend=lambda seg: signal.detrend(seg, type='l'))
assert_allclose(p, np.zeros_like(p), atol=1e-15)
def test_detrend_external_nd_m1(self):
x = np.arange(40, dtype=np.float64) + 0.04
x = x.reshape((2,2,10))
f, p = csd(x, x, nperseg=10,
detrend=lambda seg: signal.detrend(seg, type='l'))
assert_allclose(p, np.zeros_like(p), atol=1e-15)
def test_detrend_external_nd_0(self):
x = np.arange(20, dtype=np.float64) + 0.04
x = x.reshape((2,1,10))
x = np.rollaxis(x, 2, 0)
f, p = csd(x, x, nperseg=10, axis=0,
detrend=lambda seg: signal.detrend(seg, axis=0, type='l'))
assert_allclose(p, np.zeros_like(p), atol=1e-15)
def test_nd_axis_m1(self):
x = np.arange(20, dtype=np.float64) + 0.04
x = x.reshape((2,1,10))
f, p = csd(x, x, nperseg=10)
assert_array_equal(p.shape, (2, 1, 6))
assert_allclose(p[0,0,:], p[1,0,:], atol=1e-13, rtol=1e-13)
f0, p0 = csd(x[0,0,:], x[0,0,:], nperseg=10)
assert_allclose(p0[np.newaxis,:], p[1,:], atol=1e-13, rtol=1e-13)
def test_nd_axis_0(self):
x = np.arange(20, dtype=np.float64) + 0.04
x = x.reshape((10,2,1))
f, p = csd(x, x, nperseg=10, axis=0)
assert_array_equal(p.shape, (6,2,1))
assert_allclose(p[:,0,0], p[:,1,0], atol=1e-13, rtol=1e-13)
f0, p0 = csd(x[:,0,0], x[:,0,0], nperseg=10)
assert_allclose(p0, p[:,1,0], atol=1e-13, rtol=1e-13)
def test_window_external(self):
x = np.zeros(16)
x[0] = 1
x[8] = 1
f, p = csd(x, x, 10, 'hann', 8)
win = signal.get_window('hann', 8)
fe, pe = csd(x, x, 10, win, nperseg=None)
assert_array_almost_equal_nulp(p, pe)
assert_array_almost_equal_nulp(f, fe)
assert_array_equal(fe.shape, (5,)) # because win length used as nperseg
assert_array_equal(pe.shape, (5,))
assert_raises(ValueError, csd, x, x,
10, win, nperseg=256) # because nperseg != win.shape[-1]
win_err = signal.get_window('hann', 32)
assert_raises(ValueError, csd, x, x,
10, win_err, nperseg=None) # because win longer than signal
def test_empty_input(self):
f, p = csd([],np.zeros(10))
assert_array_equal(f.shape, (0,))
assert_array_equal(p.shape, (0,))
f, p = csd(np.zeros(10),[])
assert_array_equal(f.shape, (0,))
assert_array_equal(p.shape, (0,))
for shape in [(0,), (3,0), (0,5,2)]:
f, p = csd(np.empty(shape), np.empty(shape))
assert_array_equal(f.shape, shape)
assert_array_equal(p.shape, shape)
f, p = csd(np.ones(10), np.empty((5,0)))
assert_array_equal(f.shape, (5,0))
assert_array_equal(p.shape, (5,0))
f, p = csd(np.empty((5,0)), np.ones(10))
assert_array_equal(f.shape, (5,0))
assert_array_equal(p.shape, (5,0))
def test_empty_input_other_axis(self):
for shape in [(3,0), (0,5,2)]:
f, p = csd(np.empty(shape), np.empty(shape), axis=1)
assert_array_equal(f.shape, shape)
assert_array_equal(p.shape, shape)
f, p = csd(np.empty((10,10,3)), np.zeros((10,0,1)), axis=1)
assert_array_equal(f.shape, (10,0,3))
assert_array_equal(p.shape, (10,0,3))
f, p = csd(np.empty((10,0,1)), np.zeros((10,10,3)), axis=1)
assert_array_equal(f.shape, (10,0,3))
assert_array_equal(p.shape, (10,0,3))
def test_short_data(self):
x = np.zeros(8)
x[0] = 1
        # For a string-like window, an input signal shorter than nperseg gives a
        # UserWarning and nperseg is set to x.shape[-1]
with suppress_warnings() as sup:
sup.filter(UserWarning, "nperseg = 256 is greater than input length = 8, using nperseg = 8")
f, p = csd(x, x, window='hann') # default nperseg
f1, p1 = csd(x, x, window='hann', nperseg=256) # user-specified nperseg
f2, p2 = csd(x, x, nperseg=8) # valid nperseg, doesn't give warning
assert_allclose(f, f2)
assert_allclose(p, p2)
assert_allclose(f1, f2)
assert_allclose(p1, p2)
def test_window_long_or_nd(self):
assert_raises(ValueError, csd, np.zeros(4), np.ones(4), 1,
np.array([1,1,1,1,1]))
assert_raises(ValueError, csd, np.zeros(4), np.ones(4), 1,
np.arange(6).reshape((2,3)))
def test_nondefault_noverlap(self):
x = np.zeros(64)
x[::8] = 1
f, p = csd(x, x, nperseg=16, noverlap=4)
q = np.array([0, 1./12., 1./3., 1./5., 1./3., 1./5., 1./3., 1./5.,
1./6.])
assert_allclose(p, q, atol=1e-12)
def test_bad_noverlap(self):
assert_raises(ValueError, csd, np.zeros(4), np.ones(4), 1, 'hann',
2, 7)
def test_nfft_too_short(self):
assert_raises(ValueError, csd, np.ones(12), np.zeros(12), nfft=3,
nperseg=4)
def test_real_onesided_even_32(self):
x = np.zeros(16, 'f')
x[0] = 1
x[8] = 1
f, p = csd(x, x, nperseg=8)
assert_allclose(f, np.linspace(0, 0.5, 5))
q = np.array([0.08333333, 0.15277778, 0.22222222, 0.22222222,
0.11111111], 'f')
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
assert_(p.dtype == q.dtype)
def test_real_onesided_odd_32(self):
x = np.zeros(16, 'f')
x[0] = 1
x[8] = 1
f, p = csd(x, x, nperseg=9)
assert_allclose(f, np.arange(5.0)/9.0)
q = np.array([0.12477458, 0.23430935, 0.17072113, 0.17072116,
0.17072113], 'f')
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
assert_(p.dtype == q.dtype)
def test_real_twosided_32(self):
x = np.zeros(16, 'f')
x[0] = 1
x[8] = 1
f, p = csd(x, x, nperseg=8, return_onesided=False)
assert_allclose(f, fftfreq(8, 1.0))
q = np.array([0.08333333, 0.07638889, 0.11111111,
0.11111111, 0.11111111, 0.11111111, 0.11111111,
0.07638889], 'f')
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
assert_(p.dtype == q.dtype)
def test_complex_32(self):
x = np.zeros(16, 'F')
x[0] = 1.0 + 2.0j
x[8] = 1.0 + 2.0j
f, p = csd(x, x, nperseg=8, return_onesided=False)
assert_allclose(f, fftfreq(8, 1.0))
q = np.array([0.41666666, 0.38194442, 0.55555552, 0.55555552,
0.55555558, 0.55555552, 0.55555552, 0.38194442], 'f')
assert_allclose(p, q, atol=1e-7, rtol=1e-7)
assert_(p.dtype == q.dtype,
'dtype mismatch, %s, %s' % (p.dtype, q.dtype))
def test_padded_freqs(self):
x = np.zeros(12)
y = np.ones(12)
nfft = 24
f = fftfreq(nfft, 1.0)[:nfft//2+1]
f[-1] *= -1
fodd, _ = csd(x, y, nperseg=5, nfft=nfft)
feven, _ = csd(x, y, nperseg=6, nfft=nfft)
assert_allclose(f, fodd)
assert_allclose(f, feven)
nfft = 25
f = fftfreq(nfft, 1.0)[:(nfft + 1)//2]
fodd, _ = csd(x, y, nperseg=5, nfft=nfft)
feven, _ = csd(x, y, nperseg=6, nfft=nfft)
assert_allclose(f, fodd)
assert_allclose(f, feven)
class TestCoherence(object):
def test_identical_input(self):
x = np.random.randn(20)
y = np.copy(x) # So `y is x` -> False
f = np.linspace(0, 0.5, 6)
C = np.ones(6)
f1, C1 = coherence(x, y, nperseg=10)
assert_allclose(f, f1)
assert_allclose(C, C1)
def test_phase_shifted_input(self):
x = np.random.randn(20)
y = -x
f = np.linspace(0, 0.5, 6)
C = np.ones(6)
f1, C1 = coherence(x, y, nperseg=10)
assert_allclose(f, f1)
assert_allclose(C, C1)
class TestSpectrogram(object):
def test_average_all_segments(self):
x = np.random.randn(1024)
fs = 1.0
window = ('tukey', 0.25)
nperseg = 16
noverlap = 2
f, _, P = spectrogram(x, fs, window, nperseg, noverlap)
fw, Pw = welch(x, fs, window, nperseg, noverlap)
assert_allclose(f, fw)
assert_allclose(np.mean(P, axis=-1), Pw)
def test_window_external(self):
x = np.random.randn(1024)
fs = 1.0
window = ('tukey', 0.25)
nperseg = 16
noverlap = 2
f, _, P = spectrogram(x, fs, window, nperseg, noverlap)
win = signal.get_window(('tukey', 0.25), 16)
fe, _, Pe = spectrogram(x, fs, win, nperseg=None, noverlap=2)
assert_array_equal(fe.shape, (9,)) # because win length used as nperseg
assert_array_equal(Pe.shape, (9,73))
assert_raises(ValueError, spectrogram, x,
fs, win, nperseg=8) # because nperseg != win.shape[-1]
win_err = signal.get_window(('tukey', 0.25), 2048)
assert_raises(ValueError, spectrogram, x,
fs, win_err, nperseg=None) # win longer than signal
def test_short_data(self):
x = np.random.randn(1024)
fs = 1.0
#for string-like window, input signal length < nperseg value gives
#UserWarning, sets nperseg to x.shape[-1]
f, _, p = spectrogram(x, fs, window=('tukey',0.25)) # default nperseg
with suppress_warnings() as sup:
sup.filter(UserWarning,
"nperseg = 1025 is greater than input length = 1024, using nperseg = 1024")
f1, _, p1 = spectrogram(x, fs, window=('tukey',0.25),
nperseg=1025) # user-specified nperseg
f2, _, p2 = spectrogram(x, fs, nperseg=256) # to compare w/default
f3, _, p3 = spectrogram(x, fs, nperseg=1024) # compare w/user-spec'd
assert_allclose(f, f2)
assert_allclose(p, p2)
assert_allclose(f1, f3)
assert_allclose(p1, p3)
class TestLombscargle(object):
def test_frequency(self):
"""Test if frequency location of peak corresponds to frequency of
generated input signal.
"""
# Input parameters
ampl = 2.
w = 1.
phi = 0.5 * np.pi
nin = 100
nout = 1000
p = 0.7 # Fraction of points to select
# Randomly select a fraction of an array with timesteps
np.random.seed(2353425)
r = np.random.rand(nin)
t = np.linspace(0.01*np.pi, 10.*np.pi, nin)[r >= p]
# Plot a sine wave for the selected times
x = ampl * np.sin(w*t + phi)
# Define the array of frequencies for which to compute the periodogram
f = np.linspace(0.01, 10., nout)
# Calculate Lomb-Scargle periodogram
P = lombscargle(t, x, f)
# Check if difference between found frequency maximum and input
# frequency is less than accuracy
delta = f[1] - f[0]
        assert_(abs(w - f[np.argmax(P)]) < (delta/2.))
def test_amplitude(self):
# Test if height of peak in normalized Lomb-Scargle periodogram
# corresponds to amplitude of the generated input signal.
# Input parameters
ampl = 2.
w = 1.
phi = 0.5 * np.pi
nin = 100
nout = 1000
p = 0.7 # Fraction of points to select
# Randomly select a fraction of an array with timesteps
np.random.seed(2353425)
r = np.random.rand(nin)
t = np.linspace(0.01*np.pi, 10.*np.pi, nin)[r >= p]
# Plot a sine wave for the selected times
x = ampl * np.sin(w*t + phi)
# Define the array of frequencies for which to compute the periodogram
f = np.linspace(0.01, 10., nout)
# Calculate Lomb-Scargle periodogram
pgram = lombscargle(t, x, f)
# Normalize
pgram = np.sqrt(4 * pgram / t.shape[0])
        # Check if the peak height of the normalized periodogram matches
        # the amplitude of the input signal
assert_approx_equal(np.max(pgram), ampl, significant=2)
def test_precenter(self):
# Test if precenter gives the same result as manually precentering.
# Input parameters
ampl = 2.
w = 1.
phi = 0.5 * np.pi
nin = 100
nout = 1000
p = 0.7 # Fraction of points to select
offset = 0.15 # Offset to be subtracted in pre-centering
# Randomly select a fraction of an array with timesteps
np.random.seed(2353425)
r = np.random.rand(nin)
t = np.linspace(0.01*np.pi, 10.*np.pi, nin)[r >= p]
# Plot a sine wave for the selected times
x = ampl * np.sin(w*t + phi) + offset
# Define the array of frequencies for which to compute the periodogram
f = np.linspace(0.01, 10., nout)
# Calculate Lomb-Scargle periodogram
pgram = lombscargle(t, x, f, precenter=True)
pgram2 = lombscargle(t, x - x.mean(), f, precenter=False)
# check if centering worked
assert_allclose(pgram, pgram2)
def test_normalize(self):
        # Test normalize option of Lomb-Scargle.
# Input parameters
ampl = 2.
w = 1.
phi = 0.5 * np.pi
nin = 100
nout = 1000
p = 0.7 # Fraction of points to select
# Randomly select a fraction of an array with timesteps
np.random.seed(2353425)
r = np.random.rand(nin)
t = np.linspace(0.01*np.pi, 10.*np.pi, nin)[r >= p]
# Plot a sine wave for the selected times
x = ampl * np.sin(w*t + phi)
# Define the array of frequencies for which to compute the periodogram
f = np.linspace(0.01, 10., nout)
# Calculate Lomb-Scargle periodogram
pgram = lombscargle(t, x, f)
pgram2 = lombscargle(t, x, f, normalize=True)
# check if normalization works as expected
assert_allclose(pgram * 2 / np.dot(x, x), pgram2)
assert_approx_equal(np.max(pgram2), 1.0, significant=2)
def test_wrong_shape(self):
t = np.linspace(0, 1, 1)
x = np.linspace(0, 1, 2)
f = np.linspace(0, 1, 3)
assert_raises(ValueError, lombscargle, t, x, f)
def test_zero_division(self):
t = np.zeros(1)
x = np.zeros(1)
f = np.zeros(1)
assert_raises(ZeroDivisionError, lombscargle, t, x, f)
def test_lombscargle_atan_vs_atan2(self):
# https://github.com/scipy/scipy/issues/3787
# This raised a ZeroDivisionError.
t = np.linspace(0, 10, 1000, endpoint=False)
x = np.sin(4*t)
f = np.linspace(0, 50, 500, endpoint=False) + 0.1
lombscargle(t, x, f*2*np.pi)
class TestSTFT(object):
def test_input_validation(self):
assert_raises(ValueError, check_COLA, 'hann', -10, 0)
assert_raises(ValueError, check_COLA, 'hann', 10, 20)
assert_raises(ValueError, check_COLA, np.ones((2,2)), 10, 0)
assert_raises(ValueError, check_COLA, np.ones(20), 10, 0)
assert_raises(ValueError, check_NOLA, 'hann', -10, 0)
assert_raises(ValueError, check_NOLA, 'hann', 10, 20)
assert_raises(ValueError, check_NOLA, np.ones((2,2)), 10, 0)
assert_raises(ValueError, check_NOLA, np.ones(20), 10, 0)
assert_raises(ValueError, check_NOLA, 'hann', 64, -32)
x = np.zeros(1024)
z = np.array(stft(x), dtype=object)
assert_raises(ValueError, stft, x, window=np.ones((2,2)))
assert_raises(ValueError, stft, x, window=np.ones(10), nperseg=256)
assert_raises(ValueError, stft, x, nperseg=-256)
assert_raises(ValueError, stft, x, nperseg=256, noverlap=1024)
assert_raises(ValueError, stft, x, nperseg=256, nfft=8)
assert_raises(ValueError, istft, x) # Not 2d
assert_raises(ValueError, istft, z, window=np.ones((2,2)))
assert_raises(ValueError, istft, z, window=np.ones(10), nperseg=256)
assert_raises(ValueError, istft, z, nperseg=-256)
assert_raises(ValueError, istft, z, nperseg=256, noverlap=1024)
assert_raises(ValueError, istft, z, nperseg=256, nfft=8)
assert_raises(ValueError, istft, z, nperseg=256, noverlap=0,
window='hann') # Doesn't meet COLA
assert_raises(ValueError, istft, z, time_axis=0, freq_axis=0)
assert_raises(ValueError, _spectral_helper, x, x, mode='foo')
assert_raises(ValueError, _spectral_helper, x[:512], x[512:],
mode='stft')
assert_raises(ValueError, _spectral_helper, x, x, boundary='foo')
def test_check_COLA(self):
settings = [
('boxcar', 10, 0),
('boxcar', 10, 9),
('bartlett', 51, 26),
('hann', 256, 128),
('hann', 256, 192),
('blackman', 300, 200),
(('tukey', 0.5), 256, 64),
('hann', 256, 255),
]
for setting in settings:
msg = '{0}, {1}, {2}'.format(*setting)
assert_equal(True, check_COLA(*setting), err_msg=msg)
def test_check_NOLA(self):
settings_pass = [
('boxcar', 10, 0),
('boxcar', 10, 9),
('boxcar', 10, 7),
('bartlett', 51, 26),
('bartlett', 51, 10),
('hann', 256, 128),
('hann', 256, 192),
('hann', 256, 37),
('blackman', 300, 200),
('blackman', 300, 123),
(('tukey', 0.5), 256, 64),
(('tukey', 0.5), 256, 38),
('hann', 256, 255),
('hann', 256, 39),
]
for setting in settings_pass:
msg = '{0}, {1}, {2}'.format(*setting)
assert_equal(True, check_NOLA(*setting), err_msg=msg)
w_fail = np.ones(16)
w_fail[::2] = 0
settings_fail = [
(w_fail, len(w_fail), len(w_fail) // 2),
('hann', 64, 0),
]
for setting in settings_fail:
msg = '{0}, {1}, {2}'.format(*setting)
assert_equal(False, check_NOLA(*setting), err_msg=msg)
def test_average_all_segments(self):
np.random.seed(1234)
x = np.random.randn(1024)
fs = 1.0
window = 'hann'
nperseg = 16
noverlap = 8
# Compare twosided, because onesided welch doubles non-DC terms to
# account for power at negative frequencies. stft doesn't do this,
# because it breaks invertibility.
f, _, Z = stft(x, fs, window, nperseg, noverlap, padded=False,
return_onesided=False, boundary=None)
fw, Pw = welch(x, fs, window, nperseg, noverlap, return_onesided=False,
scaling='spectrum', detrend=False)
assert_allclose(f, fw)
assert_allclose(np.mean(np.abs(Z)**2, axis=-1), Pw)
def test_permute_axes(self):
np.random.seed(1234)
x = np.random.randn(1024)
fs = 1.0
window = 'hann'
nperseg = 16
noverlap = 8
f1, t1, Z1 = stft(x, fs, window, nperseg, noverlap)
f2, t2, Z2 = stft(x.reshape((-1, 1, 1)), fs, window, nperseg, noverlap,
axis=0)
t3, x1 = istft(Z1, fs, window, nperseg, noverlap)
t4, x2 = istft(Z2.T, fs, window, nperseg, noverlap, time_axis=0,
freq_axis=-1)
assert_allclose(f1, f2)
assert_allclose(t1, t2)
assert_allclose(t3, t4)
assert_allclose(Z1, Z2[:, 0, 0, :])
assert_allclose(x1, x2[:, 0, 0])
def test_roundtrip_real(self):
np.random.seed(1234)
settings = [
('boxcar', 100, 10, 0), # Test no overlap
('boxcar', 100, 10, 9), # Test high overlap
('bartlett', 101, 51, 26), # Test odd nperseg
('hann', 1024, 256, 128), # Test defaults
(('tukey', 0.5), 1152, 256, 64), # Test Tukey
('hann', 1024, 256, 255), # Test overlapped hann
]
for window, N, nperseg, noverlap in settings:
t = np.arange(N)
x = 10*np.random.randn(t.size)
_, _, zz = stft(x, nperseg=nperseg, noverlap=noverlap,
window=window, detrend=None, padded=False)
tr, xr = istft(zz, nperseg=nperseg, noverlap=noverlap,
window=window)
msg = '{0}, {1}'.format(window, noverlap)
assert_allclose(t, tr, err_msg=msg)
assert_allclose(x, xr, err_msg=msg)
def test_roundtrip_not_nola(self):
np.random.seed(1234)
w_fail = np.ones(16)
w_fail[::2] = 0
settings = [
(w_fail, 256, len(w_fail), len(w_fail) // 2),
('hann', 256, 64, 0),
]
for window, N, nperseg, noverlap in settings:
msg = '{0}, {1}, {2}, {3}'.format(window, N, nperseg, noverlap)
assert not check_NOLA(window, nperseg, noverlap), msg
t = np.arange(N)
x = 10 * np.random.randn(t.size)
_, _, zz = stft(x, nperseg=nperseg, noverlap=noverlap,
window=window, detrend=None, padded=True,
boundary='zeros')
with pytest.warns(UserWarning, match='NOLA'):
tr, xr = istft(zz, nperseg=nperseg, noverlap=noverlap,
window=window, boundary=True)
assert np.allclose(t, tr[:len(t)]), msg
assert not np.allclose(x, xr[:len(x)]), msg
def test_roundtrip_nola_not_cola(self):
np.random.seed(1234)
settings = [
('boxcar', 100, 10, 3), # NOLA True, COLA False
('bartlett', 101, 51, 37), # NOLA True, COLA False
('hann', 1024, 256, 127), # NOLA True, COLA False
(('tukey', 0.5), 1152, 256, 14), # NOLA True, COLA False
('hann', 1024, 256, 5), # NOLA True, COLA False
]
for window, N, nperseg, noverlap in settings:
msg = '{0}, {1}, {2}'.format(window, nperseg, noverlap)
assert check_NOLA(window, nperseg, noverlap), msg
assert not check_COLA(window, nperseg, noverlap), msg
t = np.arange(N)
x = 10 * np.random.randn(t.size)
_, _, zz = stft(x, nperseg=nperseg, noverlap=noverlap,
window=window, detrend=None, padded=True,
boundary='zeros')
tr, xr = istft(zz, nperseg=nperseg, noverlap=noverlap,
window=window, boundary=True)
msg = '{0}, {1}'.format(window, noverlap)
assert_allclose(t, tr[:len(t)], err_msg=msg)
assert_allclose(x, xr[:len(x)], err_msg=msg)
def test_roundtrip_float32(self):
np.random.seed(1234)
settings = [('hann', 1024, 256, 128)]
for window, N, nperseg, noverlap in settings:
t = np.arange(N)
x = 10*np.random.randn(t.size)
x = x.astype(np.float32)
_, _, zz = stft(x, nperseg=nperseg, noverlap=noverlap,
window=window, detrend=None, padded=False)
tr, xr = istft(zz, nperseg=nperseg, noverlap=noverlap,
window=window)
msg = '{0}, {1}'.format(window, noverlap)
            assert_allclose(t, tr, err_msg=msg)
assert_allclose(x, xr, err_msg=msg, rtol=1e-4, atol=1e-5)
assert_(x.dtype == xr.dtype)
def test_roundtrip_complex(self):
np.random.seed(1234)
settings = [
('boxcar', 100, 10, 0), # Test no overlap
('boxcar', 100, 10, 9), # Test high overlap
('bartlett', 101, 51, 26), # Test odd nperseg
('hann', 1024, 256, 128), # Test defaults
(('tukey', 0.5), 1152, 256, 64), # Test Tukey
('hann', 1024, 256, 255), # Test overlapped hann
]
for window, N, nperseg, noverlap in settings:
t = np.arange(N)
x = 10*np.random.randn(t.size) + 10j*np.random.randn(t.size)
_, _, zz = stft(x, nperseg=nperseg, noverlap=noverlap,
window=window, detrend=None, padded=False,
return_onesided=False)
tr, xr = istft(zz, nperseg=nperseg, noverlap=noverlap,
window=window, input_onesided=False)
msg = '{0}, {1}, {2}'.format(window, nperseg, noverlap)
assert_allclose(t, tr, err_msg=msg)
assert_allclose(x, xr, err_msg=msg)
# Check that asking for onesided switches to twosided
with suppress_warnings() as sup:
sup.filter(UserWarning,
"Input data is complex, switching to return_onesided=False")
_, _, zz = stft(x, nperseg=nperseg, noverlap=noverlap,
window=window, detrend=None, padded=False,
return_onesided=True)
tr, xr = istft(zz, nperseg=nperseg, noverlap=noverlap,
window=window, input_onesided=False)
msg = '{0}, {1}, {2}'.format(window, nperseg, noverlap)
assert_allclose(t, tr, err_msg=msg)
assert_allclose(x, xr, err_msg=msg)
def test_roundtrip_boundary_extension(self):
np.random.seed(1234)
# Test against boxcar, since window is all ones, and thus can be fully
# recovered with no boundary extension
settings = [
('boxcar', 100, 10, 0), # Test no overlap
('boxcar', 100, 10, 9), # Test high overlap
]
for window, N, nperseg, noverlap in settings:
t = np.arange(N)
x = 10*np.random.randn(t.size)
_, _, zz = stft(x, nperseg=nperseg, noverlap=noverlap,
window=window, detrend=None, padded=True,
boundary=None)
_, xr = istft(zz, noverlap=noverlap, window=window, boundary=False)
for boundary in ['even', 'odd', 'constant', 'zeros']:
_, _, zz_ext = stft(x, nperseg=nperseg, noverlap=noverlap,
window=window, detrend=None, padded=True,
boundary=boundary)
_, xr_ext = istft(zz_ext, noverlap=noverlap, window=window,
boundary=True)
msg = '{0}, {1}, {2}'.format(window, noverlap, boundary)
assert_allclose(x, xr, err_msg=msg)
assert_allclose(x, xr_ext, err_msg=msg)
def test_roundtrip_padded_signal(self):
np.random.seed(1234)
settings = [
('boxcar', 101, 10, 0),
('hann', 1000, 256, 128),
]
for window, N, nperseg, noverlap in settings:
t = np.arange(N)
x = 10*np.random.randn(t.size)
_, _, zz = stft(x, nperseg=nperseg, noverlap=noverlap,
window=window, detrend=None, padded=True)
tr, xr = istft(zz, noverlap=noverlap, window=window)
msg = '{0}, {1}'.format(window, noverlap)
# Account for possible zero-padding at the end
assert_allclose(t, tr[:t.size], err_msg=msg)
assert_allclose(x, xr[:x.size], err_msg=msg)
def test_roundtrip_padded_FFT(self):
np.random.seed(1234)
settings = [
('hann', 1024, 256, 128, 512),
('hann', 1024, 256, 128, 501),
('boxcar', 100, 10, 0, 33),
(('tukey', 0.5), 1152, 256, 64, 1024),
]
for window, N, nperseg, noverlap, nfft in settings:
t = np.arange(N)
x = 10*np.random.randn(t.size)
xc = x*np.exp(1j*np.pi/4)
# real signal
_, _, z = stft(x, nperseg=nperseg, noverlap=noverlap, nfft=nfft,
window=window, detrend=None, padded=True)
# complex signal
_, _, zc = stft(xc, nperseg=nperseg, noverlap=noverlap, nfft=nfft,
window=window, detrend=None, padded=True,
return_onesided=False)
tr, xr = istft(z, nperseg=nperseg, noverlap=noverlap, nfft=nfft,
window=window)
tr, xcr = istft(zc, nperseg=nperseg, noverlap=noverlap, nfft=nfft,
window=window, input_onesided=False)
msg = '{0}, {1}'.format(window, noverlap)
assert_allclose(t, tr, err_msg=msg)
assert_allclose(x, xr, err_msg=msg)
assert_allclose(xc, xcr, err_msg=msg)
def test_axis_rolling(self):
np.random.seed(1234)
x_flat = np.random.randn(1024)
_, _, z_flat = stft(x_flat)
for a in range(3):
newshape = [1,]*3
newshape[a] = -1
x = x_flat.reshape(newshape)
_, _, z_plus = stft(x, axis=a) # Positive axis index
_, _, z_minus = stft(x, axis=a-x.ndim) # Negative axis index
assert_equal(z_flat, z_plus.squeeze(), err_msg=a)
assert_equal(z_flat, z_minus.squeeze(), err_msg=a-x.ndim)
# z_flat has shape [n_freq, n_time]
# Test vs. transpose
_, x_transpose_m = istft(z_flat.T, time_axis=-2, freq_axis=-1)
_, x_transpose_p = istft(z_flat.T, time_axis=0, freq_axis=1)
assert_allclose(x_flat, x_transpose_m, err_msg='istft transpose minus')
assert_allclose(x_flat, x_transpose_p, err_msg='istft transpose plus')
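The COLA settings exercised in `TestSTFT.test_check_COLA` can be verified by hand: for a periodic Hann window with 50% overlap, the shifted window copies sum to a constant, which is the condition ISTFT inversion relies on. A minimal NumPy-only sketch (the 256/128 values mirror the test settings; `scipy.signal.check_COLA` performs an equivalent check internally):

```python
import numpy as np

nperseg, noverlap = 256, 128
hop = nperseg - noverlap
n = np.arange(nperseg)
win = 0.5 * (1 - np.cos(2 * np.pi * n / nperseg))  # periodic Hann

# Overlap-add the window across several hops and inspect the interior,
# where edge effects from the first and last segments do not reach.
total = np.zeros(nperseg + 7 * hop)
for k in range(8):
    total[k * hop:k * hop + nperseg] += win

interior = total[nperseg:-nperseg]
print(np.allclose(interior, interior[0]))  # constant overlap-add holds
```

Each interior sample is covered by exactly two window copies, and the cosine terms of the two shifted Hann windows cancel, so the overlap-added sum is identically 1 there.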
|
To be honest, we just love design! No matter how complex a thing is, if you can make the experience great and make it look good, it works even better. Our entire mission is to make the world an even more beautiful place by providing beautifully designed and crafted stickers 🙂 This Official Designer Vinyl sticker is a tribute to all designers. Put it on your laptop whether you work at an MNC or a startup. Be an #OfficialDesigner.
Typography is a vital part of design. The sans-serif style of typeface has gone by several names in the past, including Egyptian, Antique, Grotesque, Doric, Heiti, Lineale, and Simplices. Do you have any interesting facts about design and designers? Tweet them to us @juststickersind. Be an Official Designer.
|
#!/usr/bin/env python
from horton import *
import numpy as np
###############################################################################
## Set up molecule, define basis set ##########################################
###############################################################################
# get the XYZ file from HORTON's test data directory
fn_xyz = context.get_fn('test/h2.xyz')
mol = IOData.from_file(fn_xyz)
obasis = get_gobasis(mol.coordinates, mol.numbers, 'aug-cc-pVDZ')
###############################################################################
## Define Occupation model, expansion coefficients and overlap ################
###############################################################################
lf = CholeskyLinalgFactory(obasis.nbasis)
occ_model = AufbauOccModel(1)
orb = lf.create_expansion(obasis.nbasis)
olp = obasis.compute_overlap(lf)
###############################################################################
## Construct Hamiltonian ######################################################
###############################################################################
kin = obasis.compute_kinetic(lf)
na = obasis.compute_nuclear_attraction(mol.coordinates, mol.pseudo_numbers, lf)
er = obasis.compute_electron_repulsion(lf)
external = {'nn': compute_nucnuc(mol.coordinates, mol.pseudo_numbers)}
terms = [
RTwoIndexTerm(kin, 'kin'),
RDirectTerm(er, 'hartree'),
RExchangeTerm(er, 'x_hf'),
RTwoIndexTerm(na, 'ne'),
]
ham = REffHam(terms, external)
###############################################################################
## Perform initial guess ######################################################
###############################################################################
guess_core_hamiltonian(olp, kin, na, orb)
###############################################################################
## Do a Hartree-Fock calculation ##############################################
###############################################################################
scf_solver = PlainSCFSolver(1e-6)
scf_solver(ham, lf, olp, occ_model, orb)
###############################################################################
## Combine one-electron integrals to single Hamiltonian #######################
###############################################################################
one = kin.copy()
one.iadd(na)
###############################################################################
## Do OO-AP1roG optimization ##################################################
###############################################################################
ap1rog = RAp1rog(lf, occ_model)
energy, c, l = ap1rog(one, er, external['nn'], orb, olp, True, **{
'indextrans': 'tensordot',
'warning': False,
'checkpoint': 1,
'levelshift': 1e-8,
'absolute': False,
'givensrot': np.array([[]]),
'swapa': np.array([[]]),
'sort': True,
'guess': {'type': 'random', 'factor': -0.1, 'geminal': None, 'lagrange': None},
'solver': {'wfn': 'krylov', 'lagrange': 'krylov'},
'maxiter': {'wfniter': 200, 'orbiter': 100},
    'dumpci': {'amplitudestofile': False,
               'amplitudesfilename': './ap1rog_amplitudes.dat'},
    'thresh': {'wfn': 1e-12, 'energy': 1e-8,
               'gradientnorm': 1e-4, 'gradientmax': 5e-5},
    'printoptions': {'geminal': True, 'ci': 0.01, 'excitationlevel': 1},
    'stepsearch': {'method': 'trust-region', 'alpha': 1.0, 'c1': 0.0001,
                   'minalpha': 1e-6, 'maxiterouter': 10, 'maxiterinner': 500,
                   'maxeta': 0.75, 'mineta': 0.25, 'upscale': 2.0,
                   'downscale': 0.25, 'trustradius': 0.75,
                   'maxtrustradius': 0.75, 'threshold': 1e-8,
                   'optimizer': 'ddl'},
'orbitaloptimizer': 'variational'
}
)
|
MD Milan Simko, Immunologist-Allergologist at University Hospital Bratislava, initially attributed the positive effect of staying in a cave on the airways to inhaled aerosols dissolving viscous mucus, which is then more easily cleared from the airways. Recent research, however, suggests an important regulatory role for the surface lining of the skin and respiratory tract.
-Doc. MD. Igor Kajaba, PhD.
-Prof. MD. Jiri Homolka, MD.
|
#!/usr/bin/python
#
# linearize-hashes.py: List blocks in a linear, no-fork version of the chain.
#
# Copyright (c) 2013 The Bitcoin developers
# Distributed under the MIT/X11 software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
#
import json
import struct
import re
import base64
import httplib
import sys
settings = {}
class BitcoinRPC:
OBJID = 1
def __init__(self, host, port, username, password):
authpair = "%s:%s" % (username, password)
self.authhdr = "Basic %s" % (base64.b64encode(authpair))
self.conn = httplib.HTTPConnection(host, port, False, 30)
def rpc(self, method, params=None):
self.OBJID += 1
obj = { 'version' : '1.1',
'method' : method,
'id' : self.OBJID }
if params is None:
obj['params'] = []
else:
obj['params'] = params
self.conn.request('POST', '/', json.dumps(obj),
{ 'Authorization' : self.authhdr,
'Content-type' : 'application/json' })
resp = self.conn.getresponse()
if resp is None:
print "JSON-RPC: no response"
return None
body = resp.read()
resp_obj = json.loads(body)
if resp_obj is None:
print "JSON-RPC: cannot JSON-decode body"
return None
if 'error' in resp_obj and resp_obj['error'] != None:
return resp_obj['error']
if 'result' not in resp_obj:
print "JSON-RPC: no result in object"
return None
return resp_obj['result']
def getblock(self, hash, verbose=True):
return self.rpc('getblock', [hash, verbose])
def getblockhash(self, index):
return self.rpc('getblockhash', [index])
def get_block_hashes(settings):
rpc = BitcoinRPC(settings['host'], settings['port'],
settings['rpcuser'], settings['rpcpassword'])
for height in xrange(settings['min_height'], settings['max_height']+1):
hash = rpc.getblockhash(height)
print(hash)
if __name__ == '__main__':
if len(sys.argv) != 2:
print "Usage: linearize-hashes.py CONFIG-FILE"
sys.exit(1)
f = open(sys.argv[1])
for line in f:
# skip comment lines
m = re.search('^\s*#', line)
if m:
continue
# parse key=value lines
m = re.search('^(\w+)\s*=\s*(\S.*)$', line)
if m is None:
continue
settings[m.group(1)] = m.group(2)
f.close()
if 'host' not in settings:
settings['host'] = '127.0.0.1'
if 'port' not in settings:
settings['port'] = 15715
if 'min_height' not in settings:
settings['min_height'] = 0
if 'max_height' not in settings:
settings['max_height'] = 319000
if 'rpcuser' not in settings or 'rpcpassword' not in settings:
print "Missing username and/or password in cfg file"
sys.exit(1)
settings['port'] = int(settings['port'])
settings['min_height'] = int(settings['min_height'])
settings['max_height'] = int(settings['max_height'])
get_block_hashes(settings)
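The script above reads its settings from a simple key=value file, skipping lines that begin with `#`. A minimal example configuration (the credentials are placeholders; the other values match the script's defaults):

```
# linearize-hashes.py configuration
host=127.0.0.1
port=15715
rpcuser=someuser
rpcpassword=somepassword
min_height=0
max_height=319000
```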
|
BladeWP provides this website located at www.bladewp.com (“bladewp.com”), all the content under this domain, certain related software, and its services to you subject to the following terms and conditions. By using bladewp.com you agree to be bound by the latest amended versions of this Agreement and BladeWP’s Privacy and Security Policy (see “Modifications” below).
By using the information, tools, features and functionality located on bladewp.com, through any BladeWP APIs, or through any software or other websites that interface with bladewp.com or its APIs (collectively the “Service”), you agree to be bound by this Agreement, whether you are a “Visitor” (meaning you merely browse the bladewp.com website) or you are a “Member” (meaning you have registered with BladeWP). The term “you” or “User” refers to a Visitor or a Member. If you wish to become a Member and make use of the Service you must read this Agreement and indicate your acceptance during the Registration process. If you accept this Agreement, you represent that you have the capacity to be bound by it or, if you are acting on behalf of a company or entity, that you have the authority to bind such entity.
All content included on bladewp.com, such as text, graphics, logos, button icons, images, audio clips, digital downloads, data compilations, and software, as well as the compilation of that content into one, coherent website, is the property of BladeWP and protected by international copyright laws. Reproduction of the content of bladewp.com without the written permission of BladeWP is prohibited.
BladeWP, the BladeWP logo, and other BladeWP graphics, logos, page headers, button icons, scripts, and service names are trademarks, certification marks, service marks, or other trade dress of BladeWP or its subsidiaries. BladeWP’s trademarks, certification marks, service marks, and trade dress have inherent meaning and substantial value because of their restricted use. They may not be used in connection with any product or service that is not BladeWP’s, in any manner without BladeWP’s permission. All other trademarks not owned by BladeWP or its subsidiaries that appear on bladewp.com are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by BladeWP or its subsidiaries.
In consideration for accessing bladewp.com and the Service, BladeWP grants you a limited license to access and make personal use of the website. This license prohibits your downloading (other than page caching) or modifying any portion of it, except with express, written consent of BladeWP. This license does not allow resale of BladeWP’s services without BladeWP’s written permission. You may not frame or utilize framing techniques to enclose any trademark, logo, or other proprietary information (including images, text, page layout, or form) of bladewp.com without express written consent of BladeWP. You may not use any meta tags or any other “hidden text” utilizing BladeWP’s name or trademarks without the express written consent of BladeWP. Any unauthorized use automatically terminates the permission or license granted by BladeWP and may incur legal liabilities for any damages.
You are granted a limited, revocable, and nonexclusive right to create a hyperlink to any non-password protected directories. You may not use any of BladeWP’s proprietary graphics or trademarks as part of the link without express written permission.
If you are issued an account, you are responsible for maintaining the confidentiality of your account and password, and you agree to accept responsibility for all activities that occur under your account or password. BladeWP reserves the right to, under its sole discretion, refuse service, suspend or terminate accounts, or otherwise restrict access to bladewp.com and the BladeWP Service.
BladeWP’s Service allows you to install or utilize certain Third Party Apps (“Apps”). These Apps are provided “AS IS” and governed by their own Terms of Service and Privacy Policies as set forth by the Third Parties that provide them. BladeWP does not endorse and is not responsible or liable for the services or features provided by these Apps you choose to install. You acknowledge and agree that BladeWP shall not be responsible or liable, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with use of or reliance on any Apps.
BladeWP will make it clear whenever a feature will modify your content and, whenever possible, provide you a mechanism to allow you to disable the feature.
As a visitor to bladewp.com and a user of the BladeWP Service, you consent to having your Internet Protocol address recorded and your activities monitored to prevent abuse.
You acknowledge that BladeWP’s Service is offered as a platform to cache and serve web pages and websites and is not offered for other purposes, such as remote storage. Accordingly, you understand and agree to use the Service solely for the purpose of hosting and serving web pages as viewed through a web browser or other application and the Hypertext Markup Language (HTML) protocol or other equivalent technology. BladeWP’s Service is also a shared web caching service, which means a number of customers’ websites are cached from the same server. To ensure that BladeWP’s Service is reliable and available for the greatest number of users, a customer’s usage cannot adversely affect the performance of other customers’ sites. Additionally, the purpose of BladeWP’s Service is to proxy web content, not store data. Using an account primarily as an online storage space, including the storage or caching of a disproportionate percentage of pictures, movies, audio files, or other non-HTML content, is prohibited. You further agree that if, at BladeWP’s sole discretion, you are deemed to have violated this section, or if BladeWP, in its sole discretion, deems it necessary due to excessive burden or potential adverse impact on BladeWP’s systems, potential adverse impact on other users, server processing power, server memory, abuse controls, or other reasons, BladeWP may suspend or terminate your account without notice to or liability to you.
BladeWP reserves the right to investigate you, your business, and/or your owners, officers, directors, managers, and other principals, your sites, and the materials comprising the sites at any time. These investigations will be conducted solely for BladeWP’s benefit, and not for your benefit or that of any third party. If the investigation reveals any information, act, or omission which, in BladeWP’s sole opinion, constitutes a violation of any law or regulation or this Agreement, or is otherwise deemed to harm the Service, BladeWP may immediately shut down your access to the Service. You agree to waive any cause of action or claim you may have against BladeWP for such action, including but not limited to any disruption to your website. You acknowledge that BladeWP may, at its own discretion, reveal information about your web server to alleged copyright holders or other complainants who have filed complaints with us.
You agree to indemnify and hold BladeWP, and its subsidiaries, affiliates, officers, agents, co-branders or other partners, and employees, harmless from any claim or demand, including reasonable attorneys’ fees, arising out of your use of the Service, your connection to the Service, your violation of the Terms of Service, or your violation of any rights of another.
BLADEWP.COM, THE BLADEWP SERVICE, AND DOWNLOADABLE SOFTWARE ARE PROVIDED BY BLADEWP ON AN “AS IS” AND “AS AVAILABLE” BASIS. BLADEWP MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, AS TO THE OPERATION OF bladewp.com, THE EFFECTIVENESS OF ITS SERVICES, OR THE INFORMATION, CONTENT, MATERIALS, OR PRODUCTS INCLUDED ON bladewp.com. YOU EXPRESSLY AGREE THAT YOUR USE OF bladewp.com, THE BLADEWP SERVICE, AND ANY DOWNLOADABLE SOFTWARE IS AT YOUR SOLE RISK. TO THE FULL EXTENT PERMISSIBLE BY APPLICABLE LAW, BLADEWP DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL BLADEWP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF bladewp.com, THE BLADEWP SERVICE, OR DOWNLOADABLE SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
BLADEWP MAKES REASONABLE EFFORTS, BUT DOES NOT WARRANT THAT bladewp.com, THE BLADEWP SERVICE, ANY DOWNLOADABLE SOFTWARE, THE BLADEWP SERVERS, OR EMAIL SENT FROM ANY OF ITS DOMAINS ARE FREE OF VIRUSES OR OTHER HARMFUL COMPONENTS. BLADEWP WILL NOT BE LIABLE FOR ANY DAMAGES OF ANY KIND ARISING FROM THE USE OF THE bladewp.com, THE BLADEWP SERVICE, OR BLADEWP DOWNLOADABLE SOFTWARE INCLUDING, BUT NOT LIMITED TO DIRECT, INDIRECT, INCIDENTAL, PUNITIVE, AND CONSEQUENTIAL DAMAGES.
BladeWP’s policy is to investigate violations of these Terms of Service and terminate repeat infringers. You agree that BladeWP may, under certain circumstances and without prior notice, immediately terminate your BladeWP account, any associated email address, and access to bladewp.com and associated Services. Cause for such termination shall include, but not be limited to: (a) breaches or violations of the Terms of Service or other incorporated agreements or guidelines; (b) requests by law enforcement or other government agencies; (c) a request by you (self-initiated account deletions); (d) discontinuance or material modification to the Service (or any part thereof); (e) unexpected technical or security issues or problems; (f) extended periods of inactivity; (g) you have engaged or are reasonably suspected to be engaged in fraudulent or illegal activities; (h) having provided false information as part of your account; (i) having failed to keep your account complete, true, and accurate; (j) any use of the Service deemed at BladeWP’s sole discretion to be prohibited; (k) use of fraudulent payment methods; and/or (l) nonpayment of any fees owed by you in connection with bladewp.com and associated Services. Further, you agree that all terminations for cause shall be made in BladeWP’s sole discretion and that BladeWP shall not be liable to you or any third-party for any termination of your account, access to the Service, or any disruption to your services such a termination may cause. You expressly agree that in the case of a termination for cause you will not have any opportunity to cure. You further acknowledge and agree that notwithstanding any termination, your obligations to BladeWP set forth in Sections 2, 3, 4, 8, 9, 11, 12, 13, 23, 24, 25 and 26 shall survive such termination.
BladeWP may modify this Agreement from time to time. Any and all changes to this Agreement will be posted on bladewp.com. In addition, the Agreement will indicate the date it was last revised. You are deemed to accept and agree to be bound by any changes to the Agreement when you use the Service after those changes are posted.
The Service may provide, or third parties may provide, links to other websites or resources. Because BladeWP has no control over such sites and resources, you acknowledge and agree that BladeWP is not responsible for the availability of such external sites or resources, and does not endorse and is not responsible or liable for any content, advertising, products, or other materials on or available from such sites or resources. You further acknowledge and agree that BladeWP shall not be responsible or liable, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with use of or reliance on any such content.
BladeWP shall be permitted to identify you as a customer, to use your website’s name in connection with proposals to prospective customers, to hyperlink to your website’s home page, to display your logo on the BladeWP’s web site, and to otherwise refer to you in print or electronic form for marketing or reference purposes.
The failure of BladeWP to exercise or enforce any right or provision of this Agreement shall not constitute a waiver of such right or provision. If any provision of the Terms of Service is found by a court of competent jurisdiction to be invalid, the parties nevertheless agree that the court should endeavor to give effect to the parties’ intentions as reflected in the provision, and the other provisions of the Terms of Service remain in full force and effect.
You agree that your BladeWP account is non-transferable except with the written consent of BladeWP.
You agree that regardless of any statute or law to the contrary, any claim or cause of action arising out of or related to use of bladewp.com must be filed within one year after such claim or cause of action arose or be forever barred.
The authoritative version of BladeWP’s Terms of Service is available at: www.bladewp.com/terms. While translations of these terms may be provided in multiple languages for your convenience, the English language version hosted at the link above is binding for all users of bladewp.com, the BladeWP Service, and any BladeWP Downloadable Software.
The headings and section titles in the Terms of Service are for convenience only and have no legal or contractual effect.
|
#!/usr/bin/python -tt
# Copyright 2010 Google Inc.
# Licensed under the Apache License, Version 2.0
# http://www.apache.org/licenses/LICENSE-2.0
# Google's Python Class
# http://code.google.com/edu/languages/google-python-class/
"""Mimic pyquick exercise -- optional extra exercise.
Google's Python Class
Read in the file specified on the command line.
Do a simple split() on whitespace to obtain all the words in the file.
Rather than read the file line by line, it's easier to read
it into one giant string and split it once.
Build a "mimic" dict that maps each word that appears in the file
to a list of all the words that immediately follow that word in the file.
The list of words can be in any order and should include
duplicates. So for example the key "and" might have the list
["then", "best", "then", "after", ...] listing
all the words which came after "and" in the text.
We'll say that the empty string is what comes before
the first word in the file.
With the mimic dict, it's fairly easy to emit random
text that mimics the original. Print a word, then look
up what words might come next and pick one at random as
the next word.
Use the empty string as the first word to prime things.
If we ever get stuck with a word that is not in the dict,
go back to the empty string to keep things moving.
Note: the standard python module 'random' includes a
random.choice(list) method which picks a random element
from a non-empty list.
For fun, feed your program to itself as input.
Could work on getting it to put in linebreaks around 70
columns, so the output looks better.
"""
import random
import sys
def mimic_dict(filename):
  """Returns mimic dict mapping each word to list of words which follow it."""
  words = open(filename).read().split()
  mimic = {}
  prev = ''
  for word in words:
    # Append each word to the list of followers of the previous word.
    if prev not in mimic:
      mimic[prev] = [word]
    else:
      mimic[prev].append(word)
    prev = word
  return mimic
def print_mimic(mimic_dict, word):
  """Given mimic dict and start word, prints 200 random words."""
  for i in range(200):
    print word,
    nexts = mimic_dict.get(word)
    if not nexts:
      # Unknown word -- restart from the empty-string key to keep going.
      nexts = mimic_dict['']
    word = random.choice(nexts)
# Provided main(), calls mimic_dict() and print_mimic()
def main():
  if len(sys.argv) != 2:
    print 'usage: ./mimic.py file-to-read'
    sys.exit(1)

  words_dict = mimic_dict(sys.argv[1])
  print words_dict
  print_mimic(words_dict, '')

if __name__ == '__main__':
  main()
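The mimic-dict construction above can be sanity-checked on a tiny inline text. This is a Python 3 sketch, independent of the script itself (the `build_mimic` helper name is for illustration only):

```python
def build_mimic(text):
    """Map each word to the list of words that follow it; '' precedes the first word."""
    mimic = {}
    prev = ''
    for word in text.split():
        mimic.setdefault(prev, []).append(word)
        prev = word
    return mimic

d = build_mimic("a b a c")
# '' maps to the first word; 'a' is followed by both 'b' and 'c', duplicates kept
```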
|
Electrical and electronic design tools attempt to model and predict the behaviour of systems in their working environment. However, as systems become ever more complex and not all possible usage scenarios can be imagined, the need to test products and systems both in the laboratory and in real-world conditions in the field will continue. This magazine is fully dedicated to ScopeCorders: precision measurement in the laboratory and now also in the field.
|
'''
Created on Sep 30, 2014
@author: mike
'''
import logging
import json
from twisted.web.resource import Resource
from exe.webui.renderable import RenderableResource
from exe.engine.readabilityutil import ReadabilityUtil
from exe.engine.path import Path
import os.path
from exe import globals as G
import re
import uuid
class ReadabilityPresetsPage(RenderableResource):
    '''
    This page delivers AJAX data about readability presets
    '''

    name = 'readabilitypresets'

    def __init__(self, parent):
        '''
        Constructor
        '''
        RenderableResource.__init__(self, parent)

    def render_POST(self, request):
        action = None
        result = {}
        if 'action' in request.args:
            action = request.args['action'][0]
        if action == "savepreset":
            json_str = request.args['presetval'][0]
            json_obj = json.loads(json_str, "utf-8")
            json_obj = ReadabilityUtil().save_readability_preset(json_obj)
            result = {
                "success": True,
                "uuid": json_obj['uuid']
            }
        return json.dumps(result)

    def _check_is_clean_filename(self, filename):
        """Check a filename is only letters, numbers and dots - no slashes etc"""
        filename_clean = re.sub("[^a-z0-9A-Z\\-\\.]+", "", filename)
        if filename_clean != filename:
            raise ValueError("Invalid chars in filename")

    def render_GET(self, request):
        action = None
        if 'action' in request.args:
            action = request.args['action'][0]

        # result that will be put into json and sent back
        result = {}
        if action == "list_params_by_lang":
            result = self.list_params_by_lang(request)
        elif action == "list_presets":
            result = ReadabilityUtil().list_readability_preset_ids("erp2")
        elif action == "get_preset_by_id":
            preset_id = request.args['presetid'][0]
            result = ReadabilityUtil().get_readability_preset_by_id(preset_id)
        elif action == "delete_preset_by_id":
            preset_id = request.args['presetid'][0]
            self._check_is_clean_filename(preset_id)
            result = ReadabilityUtil().delete_readability_preset_by_id(preset_id)
        else:
            extension_req = request.args['type'][0]
            extension_clean = re.sub("[^a-z0-9]+", "", extension_req)
            if extension_clean != extension_req:
                raise ValueError("Can only allow letters and numbers in extension")
            readability_presets = ReadabilityUtil().list_readability_presets(extension_req)
            result['rootList'] = []
            for preset_name in readability_presets:
                result['rootList'].append({"filename": preset_name, "basename": preset_name})
        return json.dumps(result)

    def list_params_by_lang(self, request):
        lang_code = request.args['lang'][0]
        return ReadabilityUtil.get_params_by_lang(lang_code)
|
lifeinspiringquotes.com ~ "Pleasant Mystery Grid Coloring Pages Colouring In Snazzy Mystery Grid Coloring Pages Art Games Pinterest Mystery Page" is one of our best image references about coloring pages. Our collection has been gathered from various sources. This reference was posted on September 14, 2018 and is tagged so that you can search for it more specifically.
The idea of printable coloring pages we present in this post relates to the popular search for "Pleasant Mystery Grid Coloring Pages Colouring In Snazzy Mystery Grid Coloring Pages Art Games Pinterest Mystery Page That You Can Print Out". We found that many people look for printable coloring pages on search engines such as Bing, so we try to present the most recent pictures for you.
If you like this idea, please support us by sharing this home interior design reference, or click one of the related posts below for more pictures and further information. You can also help us grow by sharing it on Facebook, Twitter, and Pinterest. The contact page is available for you to share your comments and suggestions with us; we welcome every one. Good luck selecting a great image for your dream home interior design.
|
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=protected-access
"""Utilities for Keras classes with v1 and v2 versions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.python.eager import context
from tensorflow.python.framework import ops
from tensorflow.python.keras.utils.generic_utils import LazyLoader
# TODO(b/134426265): Switch back to single-quotes once the issue
# with copybara is fixed.
# pylint: disable=g-inconsistent-quotes
training = LazyLoader(
"training", globals(),
"tensorflow.python.keras.engine.training")
training_v1 = LazyLoader(
"training_v1", globals(),
"tensorflow.python.keras.engine.training_v1")
base_layer = LazyLoader(
"base_layer", globals(),
"tensorflow.python.keras.engine.base_layer")
base_layer_v1 = LazyLoader(
"base_layer_v1", globals(),
"tensorflow.python.keras.engine.base_layer_v1")
callbacks = LazyLoader(
"callbacks", globals(),
"tensorflow.python.keras.callbacks")
callbacks_v1 = LazyLoader(
"callbacks_v1", globals(),
"tensorflow.python.keras.callbacks_v1")
# pylint: enable=g-inconsistent-quotes
class ModelVersionSelector(object):
  """Chooses between Keras v1 and v2 Model class."""

  def __new__(cls, *args, **kwargs):  # pylint: disable=unused-argument
    use_v2 = should_use_v2()
    cls = swap_class(cls, training.Model, training_v1.Model, use_v2)  # pylint: disable=self-cls-assignment
    return super(ModelVersionSelector, cls).__new__(cls)


class LayerVersionSelector(object):
  """Chooses between Keras v1 and v2 Layer class."""

  def __new__(cls, *args, **kwargs):  # pylint: disable=unused-argument
    use_v2 = should_use_v2()
    cls = swap_class(cls, base_layer.Layer, base_layer_v1.Layer, use_v2)  # pylint: disable=self-cls-assignment
    return super(LayerVersionSelector, cls).__new__(cls)


class TensorBoardVersionSelector(object):
  """Chooses between Keras v1 and v2 TensorBoard callback class."""

  def __new__(cls, *args, **kwargs):  # pylint: disable=unused-argument
    use_v2 = should_use_v2()
    start_cls = cls
    cls = swap_class(start_cls, callbacks.TensorBoard, callbacks_v1.TensorBoard,
                     use_v2)
    if start_cls == callbacks_v1.TensorBoard and cls == callbacks.TensorBoard:
      # Since the v2 class is not a subclass of the v1 class, __init__ has to
      # be called manually.
      return cls(*args, **kwargs)
    return super(TensorBoardVersionSelector, cls).__new__(cls)


def should_use_v2():
  """Determine if v1 or v2 version should be used."""
  if context.executing_eagerly():
    return True
  elif ops.executing_eagerly_outside_functions():
    # Check for a v1 `wrap_function` FuncGraph.
    # Code inside a `wrap_function` is treated like v1 code.
    graph = ops.get_default_graph()
    if (getattr(graph, "name", False) and
        graph.name.startswith("wrapped_function")):
      return False
    return True
  else:
    return False


def swap_class(cls, v2_cls, v1_cls, use_v2):
  """Swaps in v2_cls or v1_cls depending on graph mode."""
  if cls == object:
    return cls
  if cls in (v2_cls, v1_cls):
    return v2_cls if use_v2 else v1_cls

  # Recursively search superclasses to swap in the right Keras class.
  new_bases = []
  for base in cls.__bases__:
    if ((use_v2 and issubclass(base, v1_cls))
        # `v1_cls` often extends `v2_cls`, so it may still call `swap_class`
        # even if it doesn't need to. That being said, it may be the safest
        # not to over optimize this logic for the sake of correctness,
        # especially if we swap v1 & v2 classes that don't extend each other,
        # or when the inheritance order is different.
        or (not use_v2 and issubclass(base, v2_cls))):
      new_base = swap_class(base, v2_cls, v1_cls, use_v2)
    else:
      new_base = base
    new_bases.append(new_base)
  cls.__bases__ = tuple(new_bases)
  return cls


def disallow_legacy_graph(cls_name, method_name):
  if not ops.executing_eagerly_outside_functions():
    error_msg = (
        "Calling `{cls_name}.{method_name}` in graph mode is not supported "
        "when the `{cls_name}` instance was constructed with eager mode "
        "enabled. Please construct your `{cls_name}` instance in graph mode or"
        " call `{cls_name}.{method_name}` with eager mode enabled.")
    error_msg = error_msg.format(cls_name=cls_name, method_name=method_name)
    raise ValueError(error_msg)


def is_v1_layer_or_model(obj):
  return isinstance(obj, (base_layer_v1.Layer, training_v1.Model))
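As a standalone illustration of the `__new__`-based selection pattern used by these classes (the `LayerV1`/`LayerV2` stand-ins and the hard-coded flag are hypothetical, not TensorFlow code):

```python
class LayerV1(object):
    backend = "v1"

class LayerV2(object):
    backend = "v2"

class VersionSelector(object):
    """Picks the concrete class at construction time, like the selectors above."""
    use_v2 = True  # stands in for should_use_v2()

    def __new__(cls, *args, **kwargs):
        chosen = LayerV2 if cls.use_v2 else LayerV1
        # Returning an instance of a different class means __init__ of
        # VersionSelector is never invoked on it.
        return object.__new__(chosen)

layer = VersionSelector()
# layer is a LayerV2 instance, not a VersionSelector
```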
|
We, the operators of the DERMAPHARM HOLDING SE website, take the protection of your personal data very seriously and strictly adhere to the rules of the Data Protection Act (BDSG), the Telemedia Act (TMG) and other data protection regulations. Data protection deals with personal data. According to Section 3(1) BDSG, that means individual details of personal or material circumstances of a specific or identifiable natural person. This includes information such as name, postal address, email address or telephone number, and possibly also usage data such as your IP address.
Personal data is only stored if you voluntarily provide it to us, such as if you register on the DERMAPHARM HOLDING SE website, order informational material or subscribe to the newsletter. We use your personal information solely for the technical administration purposes of our websites, to give you access to specific information, and for other communication with you.
We take precautions to protect your personal information from loss, destruction, falsification, tampering and unauthorized access. Of course, all legal data protection regulations are observed. We will neither pass on your personal data to third parties nor otherwise use them without your express consent. If you have given your consent, you can revoke it at any time with future effect.
When you access a website, data is always logged for technical and organizational reasons. This includes: date and time, web browser, requesting IP address and domain, the page or file viewed, and status information about the success or failure of the request. The log data of the web server is stored for 30 days and deleted after a statistical evaluation. This data is not used further or passed on.
If you send us an email, we will use your email address only for correspondence with you.
You have the right of withdrawal at any time. Please contact us in writing at the address in the imprint. Your data will be deleted immediately.
|
#!/usr/bin/env python
import os
import os.path
from sys import exit, argv
import json
import oauth2 as oauth
from StringIO import StringIO
from urlparse import parse_qsl
# Please don't use this key and secret if you create a new version of this script.
# You can request your own API key at https://dev.twitter.com/apps/new
# (If you fork my repo to merely submit a pull request then you don't need to change this.)
consumer_key = 'I5Qy02p5CrIXw8Sa9ohw'
consumer_secret = 'ubG7dkIS6g2cjYshXM6gtN6dSZEekKTRZMKgjYIv4'
max_tweets_per_request = 200
access_token_filepath = '~/.config/twitter-backup.py/access-token.json'
def get_access_token_from_twitter():
    # Taken from https://github.com/simplegeo/python-oauth2#twitter-three-legged-oauth-example
    request_token_url = 'https://api.twitter.com/oauth/request_token'
    access_token_url = 'https://api.twitter.com/oauth/access_token'
    authorize_url = 'https://api.twitter.com/oauth/authorize'

    client = oauth.Client(consumer)

    # Step 1: Get a request token. This is a temporary token that is used for
    # having the user authorize an access token and to sign the request to obtain
    # said access token.
    resp, content = client.request(request_token_url, "GET")
    if resp['status'] != '200':
        raise Exception("Invalid response %s." % resp['status'])

    request_token = dict(parse_qsl(content))

    # Step 2: Redirect to the provider. Since this is a CLI script we do not
    # redirect. In a web application you would redirect the user to the URL
    # below.
    print "Visit %s?oauth_token=%s" % (authorize_url, request_token['oauth_token'])

    # After the user has granted access to you, the consumer, the provider will
    # redirect you to whatever URL you have told them to redirect to. You can
    # usually define this in the oauth_callback argument as well.
    oauth_verifier = raw_input('What is the PIN? ')

    # Step 3: Once the consumer has redirected the user back to the oauth_callback
    # URL you can request the access token the user has approved. You use the
    # request token to sign this request. After this is done you throw away the
    # request token and use the access token returned. You should store this
    # access token somewhere safe, like a database, for future use.
    token = oauth.Token(request_token['oauth_token'], request_token['oauth_token_secret'])
    token.set_verifier(oauth_verifier)
    client = oauth.Client(consumer, token)
    resp, content = client.request(access_token_url, "POST")
    access_token = dict(parse_qsl(content))
    if access_token == {}:
        print 'Invalid PIN was given'
        exit(1)
    return access_token
def fetch_tweets(access_token, screen_name, max_id=None):
    token = oauth.Token(access_token['oauth_token'], access_token['oauth_token_secret'])
    client = oauth.Client(consumer, token)
    screen_name = '' if screen_name is None else '&screen_name=' + screen_name
    max_id = '' if max_id is None else '&max_id=' + str(max_id)
    request_url = 'https://api.twitter.com/1.1/statuses/user_timeline.json?count=%d%s%s' % \
        (max_tweets_per_request, screen_name, max_id)
    response = client.request(request_url)
    response_headers, response_body = response
    tweets = json.load(StringIO(response_body))
    return tweets

def get_earliest_tweet_id(tweets):
    # Tweets are returned newest first, so the last one is the earliest.
    return tweets[-1]['id'] if tweets else None
def save_tweets(json_object, filepath):
    json_string = json.dumps(json_object, indent=4)
    with open(filepath, 'w') as file:
        file.write(json_string)

def save_access_token(token):
    token_directory = os.path.dirname(get_access_token_file_path())
    if not os.path.exists(token_directory):
        os.makedirs(token_directory)
    dumped_token = json.dumps(token)
    with open(get_access_token_file_path(), 'w') as file:
        file.write(dumped_token)

def load_access_token():
    try:
        with open(get_access_token_file_path(), 'r') as file:
            access_token = json.load(file)
        return access_token
    except IOError:
        return None

def get_access_token_file_path():
    return os.path.expanduser(access_token_filepath)
def print_help():
    print 'Usage: %s [SCREEN-NAME] | -h | --help' % (argv[0])
    print 'Fetch the tweets of SCREEN-NAME'
    print 'SCREEN-NAME is optional and defaults to the screen name of the authorizing user'
# Main program
if len(argv) >= 2:
    if argv[1] in ['-h', '--help']:
        print_help()
        exit(0)
    else:
        screen_name = argv[1]
else:
    screen_name = None

consumer = oauth.Consumer(consumer_key, consumer_secret)

access_token = load_access_token()
if access_token is None:
    access_token = get_access_token_from_twitter()
    save_access_token(access_token)

earliest_tweet_id = None
page_number = 1
tweet_index = 0
while True:
    tweets = fetch_tweets(access_token, screen_name, earliest_tweet_id)
    if len(tweets) > 0:
        dest_filename = '%02d.json' % (page_number)
        print 'Saving tweet %d to %d as %s' % (tweet_index, tweet_index + len(tweets), dest_filename)
        save_tweets(tweets, dest_filename)
        # max_id is inclusive, so step one below the earliest tweet fetched
        # to avoid re-fetching it on the next page.
        earliest_tweet_id = get_earliest_tweet_id(tweets) - 1
        page_number += 1
        tweet_index += len(tweets)
    if len(tweets) < max_tweets_per_request:
        break
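The pagination loop above can be exercised without network access using a stubbed fetcher. Everything here is a hypothetical stand-in; note the stub treats `max_id` as exclusive for simplicity, while the real Twitter parameter is inclusive:

```python
def paginate(fetch, page_size):
    """Collect pages until a short page signals the end of the timeline."""
    pages = []
    max_id = None
    while True:
        tweets = fetch(max_id)
        if tweets:
            pages.append(tweets)
            max_id = tweets[-1]['id']
        if len(tweets) < page_size:
            break
    return pages

# A stub timeline of five tweets, newest first, served two at a time.
timeline = [{'id': i} for i in (5, 4, 3, 2, 1)]

def fake_fetch(max_id):
    items = timeline if max_id is None else [t for t in timeline if t['id'] < max_id]
    return items[:2]

pages = paginate(fake_fetch, 2)
# pages is [[5, 4], [3, 2], [1]] by tweet id
```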
|
As one of the pioneers in using TIG and MIG welding processes for the repair of tools and dies, we provide a service tailor-made to suit your requirements: a quick and efficient solution to the industry problems that arise today.
We work closely with our customers listening to their specific needs and can match our processes and consumables accurately to achieve the best possible solution for each material specification.
These processes are generally for the larger repair areas but can be complemented by laser welding around edges to improve the general machining properties thus giving you the finish you require.
|
# This file is part of GridCal.
#
# GridCal is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# GridCal is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GridCal. If not, see <http://www.gnu.org/licenses/>.
import time
import json
import numpy as np
import numba as nb
from GridCal.Engine.Core.multi_circuit import MultiCircuit
from GridCal.Engine.Core.snapshot_pf_data import compile_snapshot_circuit
from GridCal.Engine.Simulations.LinearFactors.linear_analysis import LinearAnalysis, make_worst_contingency_transfer_limits
from GridCal.Engine.Simulations.driver_types import SimulationTypes
from GridCal.Engine.Simulations.result_types import ResultTypes
from GridCal.Engine.Simulations.results_model import ResultsModel
from GridCal.Engine.Simulations.results_template import ResultsTemplate
from GridCal.Engine.Simulations.driver_template import DriverTemplate
########################################################################################################################
# Net transfer capacity (NTC / ATC) classes
########################################################################################################################
@nb.njit()
def compute_ntc(ptdf, lodf, P0, flows, rates, idx1, idx2, threshold=0.02):
    """
    Compute all lines' ATC
    :param ptdf: Power transfer distribution factors (n-branch, n-bus)
    :param lodf: Line outage distribution factors (n-branch, n-outage branch)
    :param P0: all bus injections [p.u.]
    :param flows: Line Sf [MW]
    :param rates: all line rates vector
    :param idx1: bus indices of the sending region
    :param idx2: bus indices of the receiving region
    :param threshold: value that determines if a line is studied for the ATC calculation
    :return: ATC vector for all the lines
    """
    nbr = ptdf.shape[0]
    nbus = ptdf.shape[1]

    # declare the bus injections increment due to the transference
    dTi = np.zeros(nbus)

    # set the sending power increment proportional to the current power
    # dTi[idx1] = dT * (P0[idx1] / P0[idx1].sum())
    dTi[idx1] = P0[idx1] / P0[idx1].sum()

    # set the receiving power increment proportional to the current power
    # dTi[idx2] = -dT * (P0[idx2] / P0[idx2].sum())
    dTi[idx2] = -P0[idx2] / P0[idx2].sum()

    # compute the line flow increments due to the exchange increment dT in MW
    dFlow = ptdf.dot(dTi)

    # this operation proves the same result: dFlow == dFlow2
    # Pbr = np.dot(ptdf, (P0 * Sbase) + dTi)
    # dFlow2 = Pbr - flows

    # compute the sensitivities to the exchange (dT = 1.0)
    # alpha = dFlow / dT
    alpha = dFlow / 1.0

    # explore the ATC
    atc_max = np.zeros(nbr)
    atc_min = np.zeros(nbr)
    worst_max = 0
    worst_min = 0
    worst_contingency_max = 0
    worst_contingency_min = 0
    PS_max = -1e20  # should increase
    PS_min = 1e20  # should decrease
    for m in range(nbr):  # for each branch
        if abs(alpha[m]) > threshold and abs(flows[m]) < rates[m]:  # if the branch is relevant enough for the NTC...

            # explore the ATC in "N-1"
            for c in range(nbr):  # for each contingency
                if m != c:

                    # compute the OTDF
                    otdf = alpha[m] + lodf[m, c] * alpha[c]

                    # compute the contingency flow
                    contingency_flow = flows[m] + lodf[m, c] * flows[c]

                    if abs(otdf) > threshold:
                        # compute the branch+contingency ATC for each "sense" of the flow
                        alpha_ij = (rates[m] - contingency_flow) / otdf
                        beta_ij = (-rates[m] - contingency_flow) / otdf

                        alpha_p_ij = min(alpha_ij, beta_ij)
                        beta_p_ij = max(alpha_ij, beta_ij)

                        if alpha_p_ij > PS_max:
                            PS_max = alpha_p_ij
                            atc_max[m] = alpha_p_ij
                            worst_max = m
                            worst_contingency_max = c

                        if beta_p_ij < PS_min:
                            PS_min = beta_p_ij
                            atc_min[m] = alpha_p_ij
                            worst_min = m
                            worst_contingency_min = c

    if PS_min == 1e20:
        PS_min = 0.0
    if PS_max == -1e20:
        PS_max = 0.0

    return alpha, atc_max, atc_min, worst_max, worst_min, worst_contingency_max, worst_contingency_min, PS_max, PS_min
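A quick numeric check of the contingency formulas used in `compute_ntc` (toy numbers, not a real grid):

```python
flow_m, flow_c = 10.0, 50.0     # pre-contingency branch flows [MW]
alpha_m, alpha_c = 0.2, 0.5     # sensitivities of each branch to the exchange
lodf_mc = 0.3                   # line outage distribution factor of m w.r.t. c
rate_m = 100.0                  # rating of branch m [MW]

# Outage transfer distribution factor and post-contingency flow, as in the loop above
otdf = alpha_m + lodf_mc * alpha_c            # 0.35
contingency_flow = flow_m + lodf_mc * flow_c  # 25.0 MW

# Exchange increments that push branch m to +rate and -rate under the outage
alpha_ij = (rate_m - contingency_flow) / otdf   # ~214.3 MW
beta_ij = (-rate_m - contingency_flow) / otdf   # ~-357.1 MW
```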
class NetTransferCapacityResults(ResultsTemplate):

    def __init__(self, n_br, n_bus, br_names, bus_names, bus_types, bus_idx_from, bus_idx_to):
        """
        :param n_br:
        :param n_bus:
        :param br_names:
        :param bus_names:
        :param bus_types:
        """
        ResultsTemplate.__init__(self,
                                 name='ATC Results',
                                 available_results=[ResultTypes.NetTransferCapacityPS,
                                                    ResultTypes.NetTransferCapacityAlpha,
                                                    ResultTypes.NetTransferCapacityReport
                                                    ],
                                 data_variables=['alpha',
                                                 'atc_max',
                                                 'atc_min',
                                                 'worst_max',
                                                 'worst_min',
                                                 'worst_contingency_max',
                                                 'worst_contingency_min',
                                                 'PS_max',
                                                 'PS_min',
                                                 'report',
                                                 'report_headers',
                                                 'report_indices',
                                                 'branch_names',
                                                 'bus_names',
                                                 'bus_types',
                                                 'bus_idx_from',
                                                 'bus_idx_to'])
        self.n_br = n_br
        self.n_bus = n_bus
        self.branch_names = br_names
        self.bus_names = bus_names
        self.bus_types = bus_types
        self.bus_idx_from = bus_idx_from
        self.bus_idx_to = bus_idx_to

        # stores the worst transfer capacities (from to) and (to from)
        self.alpha = np.zeros(self.n_br)
        self.atc_max = np.zeros(self.n_br)
        self.atc_min = np.zeros(self.n_br)
        self.worst_max = 0
        self.worst_min = 0
        self.worst_contingency_max = 0
        self.worst_contingency_min = 0
        self.PS_max = 0.0
        self.PS_min = 0.0
        self.report = np.empty((1, 8), dtype=object)
        self.report_headers = ['Branch min',
                               'Branch max',
                               'Worst Contingency min',
                               'Worst Contingency max',
                               'ATC max',
                               'ATC min',
                               'PS max',
                               'PS min']
        self.report_indices = ['All']

    def get_steps(self):
        return
    def make_report(self):
        """
        Fill the one-row report table.
        """
        self.report = np.empty((1, 8), dtype=object)
        self.report_headers = ['Branch min',
                               'Branch max',
                               'Worst Contingency min',
                               'Worst Contingency max',
                               'ATC max',
                               'ATC min',
                               'PS max',
                               'PS min']
        self.report_indices = ['All']
        # order the values to match the headers above
        self.report[0, 0] = self.branch_names[self.worst_min]
        self.report[0, 1] = self.branch_names[self.worst_max]
        self.report[0, 2] = self.branch_names[self.worst_contingency_min]
        self.report[0, 3] = self.branch_names[self.worst_contingency_max]
        self.report[0, 4] = self.atc_max[self.worst_max]
        self.report[0, 5] = self.atc_min[self.worst_min]
        self.report[0, 6] = self.PS_max
        self.report[0, 7] = self.PS_min
    def get_results_dict(self):
        """
        Returns a dictionary with the results sorted in a dictionary
        :return: dictionary of 2D numpy arrays (probably of complex numbers)
        """
        data = {'atc_max': self.atc_max.tolist(),
                'atc_min': self.atc_min.tolist(),
                'PS_max': self.PS_max,
                'PS_min': self.PS_min}
        return data

    def mdl(self, result_type: ResultTypes):
        """
        Plot the results
        :param result_type:
        :return:
        """
        index = self.branch_names

        if result_type == ResultTypes.NetTransferCapacityPS:
            data = np.array([self.PS_min, self.PS_max])
            y_label = '(MW)'
            title, _ = result_type.value
            labels = ['Power shift']
            index = ['PS min', 'PS max']

        elif result_type == ResultTypes.NetTransferCapacityAlpha:
            data = self.alpha
            y_label = '(p.u.)'
            title, _ = result_type.value
            labels = ['Sensitivity to the exchange']
            index = self.branch_names

        elif result_type == ResultTypes.NetTransferCapacityReport:
            data = np.array(self.report)
            y_label = ''
            title, _ = result_type.value
            index = self.report_indices
            labels = self.report_headers

        else:
            raise Exception('Result type not understood:' + str(result_type))

        # assemble model
        mdl = ResultsModel(data=data,
                           index=index,
                           columns=labels,
                           title=title,
                           ylabel=y_label)
        return mdl
class NetTransferCapacityOptions:
def __init__(self, distributed_slack=True, correct_values=True,
bus_idx_from=list(), bus_idx_to=list(), dT=100.0, threshold=0.02):
"""
:param distributed_slack:
:param correct_values:
:param bus_idx_from:
:param bus_idx_to:
:param dT:
:param threshold:
"""
self.distributed_slack = distributed_slack
self.correct_values = correct_values
self.bus_idx_from = bus_idx_from
self.bus_idx_to = bus_idx_to
self.dT = dT
self.threshold = threshold
class NetTransferCapacityDriver(DriverTemplate):
tpe = SimulationTypes.NetTransferCapacity_run
name = tpe.value
def __init__(self, grid: MultiCircuit, options: NetTransferCapacityOptions):
"""
Power Transfer Distribution Factors class constructor
@param grid: MultiCircuit Object
@param options: OPF options
@:param pf_results: PowerFlowResults, this is to get the Sf
"""
DriverTemplate.__init__(self, grid=grid)
# Options to use
self.options = options
# NTC results
self.results = NetTransferCapacityResults(n_br=0,
n_bus=0,
br_names=[],
bus_names=[],
bus_types=[],
bus_idx_from=[],
bus_idx_to=[])
def run(self):
"""
Run thread
"""
start = time.time()
self.progress_text.emit('Analyzing')
self.progress_signal.emit(0)
# compile the circuit
nc = compile_snapshot_circuit(self.grid)
# get the converted bus indices
# idx1b, idx2b = compute_transfer_indices(idx1=self.options.bus_idx_from,
# idx2=self.options.bus_idx_to,
# bus_types=nc.bus_types)
idx1b = self.options.bus_idx_from
idx2b = self.options.bus_idx_to
# declare the linear analysis
linear = LinearAnalysis(grid=self.grid)
linear.run()
# declare the results
self.results = NetTransferCapacityResults(n_br=linear.numerical_circuit.nbr,
n_bus=linear.numerical_circuit.nbus,
br_names=linear.numerical_circuit.branch_names,
bus_names=linear.numerical_circuit.bus_names,
bus_types=linear.numerical_circuit.bus_types,
bus_idx_from=idx1b,
bus_idx_to=idx2b)
# compute NTC
alpha, atc_max, atc_min, worst_max, worst_min, \
worst_contingency_max, worst_contingency_min, PS_max, PS_min = compute_ntc(ptdf=linear.PTDF,
lodf=linear.LODF,
P0=nc.Sbus.real,
flows=linear.get_flows(nc.Sbus),
rates=nc.ContingencyRates,
idx1=idx1b,
idx2=idx2b)
# post-process and store the results
self.results.alpha = alpha
self.results.atc_max = atc_max
self.results.atc_min = atc_min
self.results.worst_max = worst_max
self.results.worst_min = worst_min
self.results.worst_contingency_max = worst_contingency_max
self.results.worst_contingency_min = worst_contingency_min
self.results.PS_max = PS_max
self.results.PS_min = PS_min
self.results.make_report()
end = time.time()
self.elapsed = end - start
self.progress_text.emit('Done!')
self.done_signal.emit()
def get_steps(self):
"""
Get variations list of strings
"""
return list()
if __name__ == '__main__':
from GridCal.Engine import *
fname = r'C:\Users\penversa\Git\GridCal\Grids_and_profiles\grids\IEEE 118 Bus - ntc_areas.gridcal'
main_circuit = FileOpen(fname).open()
options = NetTransferCapacityOptions()
driver = NetTransferCapacityDriver(main_circuit, options)
driver.run()
print()
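As a standalone illustration of the report layout assembled in make_report, the eight headers pair one-to-one with one value each. All names and numbers below are made up, and the worst-contingency columns reuse the same indices for brevity:

```python
# Hypothetical per-branch results for three branches (made-up values)
branch_names = ["line A", "line B", "line C"]
atc_max = [110.0, 95.0, 130.0]
atc_min = [-80.0, -60.0, -90.0]
worst_min, worst_max = 0, 2        # hypothetical limiting-branch indices
PS_max, PS_min = 240.0, -180.0     # hypothetical power shifts

headers = ['Branch min', 'Branch max',
           'Worst Contingency min', 'Worst Contingency max',
           'ATC max', 'ATC min', 'PS max', 'PS min']
row = [branch_names[worst_min], branch_names[worst_max],
       branch_names[worst_min], branch_names[worst_max],
       atc_max[worst_max], atc_min[worst_min],
       PS_max, PS_min]
# One row, same column order as self.report in make_report
summary = dict(zip(headers, row))
```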
|
Many people enjoy preparing home-cooked meals for their loved ones. Whether it’s a large family gathering during the holiday season or a weeknight meal for their immediate families, men, women and even children who like to cook enjoy the satisfied looks on their loved ones’ faces after sharing a delicious meal.
Come the holiday season, gift givers can put the same satisfied look on the faces of the home cooks in their lives by offering a variety of gifts that can make mealtime easier and/or more enjoyable.
1. Electric corkscrew: Nothing complements a good meal quite like an appropriately paired bottle of wine. Cooks who are too busy in the kitchen to utilize traditional corkscrews, which can be time-consuming and messy, might enjoy an electric corkscrew. Such corkscrews quickly remove corks from wine bottles, requiring little effort on the part of already busy cooks.
2. Cookbook: People who understand the joy of cooking often love to experiment in the kitchen. Cookbooks can be an ideal gift for such cooks. Choose a book that provides recipes from their favorite styles of cuisine, such as Italian or Indian food. Or find a book that offers an array of recipes that allows them to explore various types of cuisine.
3. Cookware: Even the best cookware can only take so much usage, and chances are home cooks’ pantries can afford an upgrade or two. Gift givers should keep in mind that many home cooks have strong preferences regarding their cookware, so it might be wise to give a gift card or ask a loved one which type of cookware he or she prefers. Of course, a covert inspection of a loved one’s pantry might provide the insight gift givers need as well.
4. Rolling pin: For the person who loves to bake, a rolling pin might make a better gift than non-cooks may appreciate. Rolling pins are necessary to prepare many baked goods, and a customizable rolling pin can flatten dough to the exact millimeter, helping bake-happy home cooks prepare the perfect plate of cookies.
5. Cooking class: Cooking classes can make the ideal gift for novice home cooks who are just beginning to explore their love of cooking. But advanced classes can help more seasoned cooks perfect their craft as they learn to prepare more complex dishes.
6. Wine aerator: Much like electric corkscrews can make opening bottles of wine much easier, wine aerators can help aerate red wine more quickly than decanters, which can take up to two hours to fully aerate wine. Aerators oxygenate red wine, softening its flavors and bringing out the aromas that can make a great bottle of wine that much more enjoyable.
Home cooks often enjoy preparing fresh meals for their loved ones. The holiday season presents a perfect opportunity to find gifts that make cooking that much more enjoyable for loved ones who can’t wait to whip up the next home-cooked meal for family and friends.
Hummus provides a delicious and healthy alternative to less nutritional dips. Versatile and available in various flavors, hummus can be whipped up at home for those who prefer to make their own dips. The following recipe for “Garbanzo-Carrot Hummus with Grilled Yogurt Flatbread” from James Campbell Caruso’s “España: Explore the Flavors of Spain” (Gibbs Smith) includes some Moroccan flavors that give this easy-to-prepare recipe a truly unique taste.
In a medium saucepan, combine the carrots with 2 quarts water and 2 teaspoons salt. Bring the mixture to a boil then reduce the heat and simmer for 8 to 10 minutes, until the carrots are tender. Remove the pan from the heat and allow the carrots to drain and cool in a colander.
Combine carrots and remaining ingredients, except for Yogurt Flatbread, in the work bowl of a food processor and puree until smooth. Season to taste with salt and pepper and garnish with the remaining cilantro. Serve with fresh, hot Yogurt Flatbread cut in wedges.
In a small resealable glass or plastic container, combine all of the ingredients.
Sift the flour, baking powder and salt into the work bowl of a stand mixer fitted with the dough hook. Add the yogurt and mix on low speed for 2 minutes. Cover the work bowl and allow the dough to rest at room temperature for 30 minutes.
Preheat a gas or charcoal grill to medium. Scrape the dough from the work bowl and turn it out onto a lightly floured surface. Roll the dough into a long log and divide it into 12 equal pieces. Roll each piece into a ball and use a rolling pin or tortilla press to flatten it into a 1/4-inch-thick tortilla shape. Brush each “tortilla” lightly with olive oil. Grill each for about 40 seconds then turn and cook another 40 seconds.
|
import os
import csv
import logging
import xml.dom.minidom
log = logging.getLogger("modules.Picasa")
import conduit
import conduit.utils as Utils
import conduit.Vfs as Vfs
import conduit.Exceptions as Exceptions
import conduit.dataproviders.DataProvider as DataProvider
import conduit.datatypes.Photo as Photo
from gettext import gettext as _
FILENAME_IDX = 0
DISPLAYNAME_IDX = 1
PHOTOS_IDX = 2
PICASA_DIR = os.path.join(os.path.expanduser("~"),".picasa")
if os.path.exists(PICASA_DIR):
MODULES = {
"PicasaDesktopSource" : { "type": "dataprovider" }
}
log.info("Picasa desktop directory detected")
else:
MODULES = {}
log.info("Picasa desktop not installed")
class PicasaDesktopSource(DataProvider.DataSource):
_name_ = _("Picasa Desktop")
_description_ = _("Synchronize Picasa from Picasa Desktop")
_category_ = conduit.dataproviders.CATEGORY_PHOTOS
_module_type_ = "source"
_in_type_ = "file/photo"
_out_type_ = "file/photo"
_icon_ = "picasa"
_configurable_ = True
def __init__(self, *args):
DataProvider.DataSource.__init__(self)
self.albums = []
self.enabledAlbums = []
def _fix_picasa_image_filename(self, filename):
#Picasa stores the image filename in some weird relative format
#with $ = $HOME and $My Pictures = xdg pictures dir
parts = filename.split("\\")
if parts[0] == "$My Pictures":
#FIXME: Use xdg user dirs to get localised photo dir
parts[0] = os.path.join(os.environ['HOME'],'Pictures')
elif parts[0][0] == "$":
#Take care of other photos in ~ by replacing $ with $HOME
parts[0] = os.path.join(os.environ['HOME'],parts[0][1:])
elif parts[0] == "[z]":
#absolute paths
parts[0] = "/"
else:
log.warn("Could not convert picasa photo path to unix path")
return None
path = os.path.abspath(os.sep.join(parts))
return path
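A self-contained sketch of the same conversion, with a hypothetical home directory passed in rather than read from os.environ['HOME'] as the method above does:

```python
import os

def picasa_path_to_unix(filename, home="/home/user"):
    # Picasa (under wine) stores paths with backslashes and a placeholder
    # first component: "$My Pictures" (the pictures folder), "$<dir>" (a
    # directory under $HOME) or "[z]" (an absolute path).
    parts = filename.split("\\")
    if parts[0] == "$My Pictures":
        parts[0] = os.path.join(home, "Pictures")
    elif parts[0].startswith("$"):
        parts[0] = os.path.join(home, parts[0][1:])
    elif parts[0] == "[z]":
        parts[0] = "/"
    else:
        return None  # unknown prefix: caller logs a warning
    return os.path.abspath(os.sep.join(parts))
```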
def _get_all_albums(self):
#only work if picasa has been configured to use a CSV DB
#http://www.zmarties.com/picasa/
dbfile = os.path.join(PICASA_DIR,'drive_c','Program Files','Picasa2','db','dirscanner.csv')
if not os.path.exists(dbfile):
raise Exceptions.RefreshError("Picasa Not Configured to use CSV Database")
pals = []
#Open the CSV file and find all entries with Type = 19 (albums)
f = open(dbfile, 'rt')
try:
reader = csv.DictReader(f)
for row in reader:
if row['Type'] == '19':
#wine picasa stores all pal files (that describes an album)
#in the following base dir
parts = [PICASA_DIR,
'drive_c',
'Documents and Settings',
os.getlogin(),
'Local Settings']
#and then as given in the csv file
#but first change the windows path to a linux one
parts += row['Name'].split("\\")
path = os.path.abspath(os.sep.join(parts))
pals.append(path)
finally:
f.close()
#parse each pal file to get album info
albums = []
for pal in pals:
log.debug("Parsing album file %s" % pal)
doc = xml.dom.minidom.parse(pal)
#album name (fall back to the pal file name so 'name' is always bound)
name = os.path.basename(pal)
for prop in doc.getElementsByTagName('property'):
if prop.hasAttribute("name") and prop.getAttribute("name") == "name":
name = prop.getAttribute("value")
#image filenames
photos = []
for f in doc.getElementsByTagName('filename'):
filename = self._fix_picasa_image_filename(f.firstChild.data)
if filename != None:
photos.append(filename)
albums.append((
pal, #FILENAME_IDX
name, #DISPLAYNAME_IDX
photos)) #PHOTOS_IDX
return albums
def initialize(self):
return True
def refresh(self):
DataProvider.DataSource.refresh(self)
self.albums = []
self.albums = self._get_all_albums()
def get_all(self):
DataProvider.DataSource.get_all(self)
photos = []
for album in self.albums:
if album[FILENAME_IDX] in self.enabledAlbums:
for photouri in album[PHOTOS_IDX]:
if Vfs.uri_exists(photouri):
photos.append(photouri)
return photos
def get(self, LUID):
DataProvider.DataSource.get(self, LUID)
f = Photo.Photo(URI=LUID)
f.set_UID(LUID)
f.set_open_URI(LUID)
return f
def finish(self, aborted, error, conflict):
DataProvider.DataSource.finish(self)
self.albums = []
def configure(self, window):
import gobject
import gtk
def col1_toggled_cb(cell, path, model ):
#invert: this callback fires before the toggle state changes
checked = not cell.get_active()
model[path][2] = checked
val = model[path][FILENAME_IDX]
if checked and val not in self.enabledAlbums:
self.enabledAlbums.append(val)
elif not checked and val in self.enabledAlbums:
self.enabledAlbums.remove(val)
tree = Utils.dataprovider_glade_get_widget(
__file__,
"config.glade",
"PicasaDesktopConfigDialog"
)
tagtreeview = tree.get_widget("albumtreeview")
#Build a list of all the tags
list_store = gtk.ListStore( gobject.TYPE_STRING, #FILENAME_IDX
gobject.TYPE_STRING, #DISPLAYNAME_IDX
gobject.TYPE_BOOLEAN, #active
)
#Fill the list store
for t in self._get_all_albums():
list_store.append((
t[FILENAME_IDX],
t[DISPLAYNAME_IDX],
t[FILENAME_IDX] in self.enabledAlbums)
)
#Set up the treeview
tagtreeview.set_model(list_store)
#column 1 is the album name
tagtreeview.append_column( gtk.TreeViewColumn(_("Album Name"),
gtk.CellRendererText(),
text=DISPLAYNAME_IDX)
)
#column 2 is a checkbox for selecting the album to sync
renderer1 = gtk.CellRendererToggle()
renderer1.set_property('activatable', True)
renderer1.connect( 'toggled', col1_toggled_cb, list_store )
tagtreeview.append_column( gtk.TreeViewColumn(_("Enabled"),
renderer1,
active=2)
)
dlg = tree.get_widget("PicasaDesktopConfigDialog")
response = Utils.run_dialog (dlg, window)
dlg.destroy()
def get_configuration(self):
return {"enabledAlbums": self.enabledAlbums}
def get_UID(self):
return Utils.get_user_string()
|
Electricity is potentially hazardous, and it’s always recommended that you employ an expert in Electrical Services in Adelaide whenever you need electrical work carried out. Whether you have to mend a line, rewire your house or premises, or replace a socket, you should contact an approved electrical services company like Trade Focus Electrical. However, identifying an efficient provider is easier said than done.
• Ask the service provider if they are approved.
• Ask the service provider for any other credentials & references from past clients. This is ‘common sense,’ but surprisingly, many people fail to ask this kind of question. Academic & professional credentials, along with references, can help you learn a great deal about a service provider.
• When looking for a provider of Electrical Services in Adelaide, ask family, friends & neighbours for contacts of trustworthy contractors they’ve worked with in the past.
However, it is important to note that some contractors specialize in particular types of Electrical Services in Adelaide. This is ideal if the nature of your work is highly specialized & calls for ‘in-depth’ knowledge of a specific category of expertise. Different companies excel in different fields of electrical servicing.
Finally, you have to be sure that your contractor knows all local & national regulations. Many projects require permits & inspections. An experienced provider of Electrical Services in Adelaide will be able to help you obtain the permits needed & ensure that the project passes any required inspections.
Today, one cannot imagine life without electricity. In the 21st century it is next to impossible to live without it, because we have grown accustomed to so many appliances that run only on electricity. Many a time, we find minor faults in electrical appliances & start repairing them ourselves. However, people who do not understand the complexity of home wiring systems should seek the help of a company that carries out Electrical Services in Adelaide. Electricity is hazardous if you do not know the working mechanism of electrical home appliances, so it’s better to use the available service companies even if you have a little knowledge of electricity. To be on the safe side, do not approach electrical wires on your own; a short circuit can cause massive destruction. Therefore, spend some money and hire a professional from a service company.
Providers of Electrical Services in Adelaide take a professional approach to fixing the problem. Nonetheless, it is an intimidating job to select a service from so many choices, especially when you only have the yellow pages or search engines for finding a provider. To obtain the best service, you need to know some vital factors that can help you choose the right company for carrying out Electrical Services in Adelaide.
Trust only those companies that provide qualified personnel for the service. You can certainly ask for proof that the engineers or professionals carrying out the job are qualified for it.
It goes without saying that the company’s charges should be within your budget, as only then will you be able to hire the Electrical Services in Adelaide. However, take care that you aren’t compromising on the quality of the work. Agree on a fixed price before work begins on your home. Also, try to find out how long the company will take to complete the work so you can judge whether it will be finished in the desired time period.
Some other things to take into consideration while hiring electricians in Adelaide are call-out charges & VAT. Some companies hide information on these matters; it is a pretty common practice, so be careful. Do not rely on such companies, as their Electrical Services in Adelaide might give you a shock at the end of the task. Also, find out the guarantee period the company offers for the electrical service you are employing.
|
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
#
# Copyright (C) 2005-2007 David Guerizec <david@guerizec.net>
#
# Last modified: 2008 Jan 22, 14:19:03 by david
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
import os, sys, threading, socket
import paramiko
from paramiko import AuthenticationException
from registry import Registry
import cipher, util, log, proxy
import ipc
from options import OptionParser
from util import chanfmt, convert, SSHProxyError
from config import get_config
from dispatcher import Dispatcher
class IPCClientInterface(ipc.IPCInterface):
def __init__(self, server):
self.server = server
def __call__(self, chan):
# simulate an instantiation
ipc.IPCInterface.__init__(self, chan)
return self
class Server(Registry, paramiko.ServerInterface):
_class_id = "Server"
_singleton = True
def __reginit__(self, client, addr, host_key_file):
self.client = client
self.client_addr = addr
ipc_address = get_config('sshproxy').get('ipc_address',
'sshproxy-control')
handler = IPCClientInterface(self)
try:
self.monitor = ipc.IPCClient(ipc_address, handler=handler)
except:
log.exception("Couldn't create IPC channel to monitor")
raise
self.host_key = paramiko.DSSKey(filename=host_key_file)
#self.ip_addr, self.port = client.getsockname()
self.event = threading.Event()
self.args = []
self._remotes = {}
self.exit_status = -1
def get_ns_tag(self, namespace, tag, default=None):
return self.monitor.call('get_ns_tag', namespace=namespace,
tag=tag,
default=default)
def update_ns(self, name, value):
return self.monitor.call('update_ns', name=name, value=value)
def check_acl(self, acl_name):
return self.monitor.call('check_acl', acl_name)
def authorize(self, user_site, need_login=True):
return self.monitor.call('authorize', user_site=user_site,
need_login=need_login)
def setup_forward_handler(self, check_channel_direct_tcpip_request):
if check_channel_direct_tcpip_request:
self.check_channel_direct_tcpip_request = \
check_channel_direct_tcpip_request
def check_direct_tcpip_acl(self, chanid, origin, destination):
o_ip, o_port = origin
d_ip, d_port = destination
self.update_ns('proxy', {
'forward_ip': origin[0],
'forward_port': origin[1]
})
if not (self.check_acl('local_forwarding')):
log.debug("Local Port Forwarding not allowed by ACLs")
self.chan_send("Local Port Forwarding not allowed by ACLs\n")
return False
log.debug("Local Port Forwarding allowed by ACLs")
return True
def check_channel_x11_request(self, channel, single_connection,
x11_auth_proto, x11_auth_cookie, x11_screen_number):
class X11Channel(object):
pass
x11 = X11Channel()
x11.single_connection = single_connection
x11.x11_auth_proto = x11_auth_proto
x11.x11_auth_cookie = x11_auth_cookie
x11.x11_screen_number = x11_screen_number
self.x11 = x11
return True
def check_x11_acl(self):
if not hasattr(self, 'x11'):
log.debug("X11Forwarding not requested by the client")
return False
if not (self.check_acl('x11_forwarding')):
log.debug("X11Forwarding not allowed by ACLs")
return False
log.debug("X11Forwarding allowed by ACLs")
return True
def check_remote_port_forwarding(self):
if (hasattr(self, 'tcpip_forward_ip') and
hasattr(self, 'tcpip_forward_port')):
self.update_ns('proxy', {
'forward_ip': self.tcpip_forward_ip,
'forward_port': self.tcpip_forward_port
})
if not (self.check_acl('remote_forwarding')):
log.debug("Remote Port Forwarding not allowed by ACLs")
self.chan_send("Remote Port Forwarding not allowed by ACLs\n")
return False
log.debug("Remote Port Forwarding allowed by ACLs")
return True
return False
### STANDARD PARAMIKO SERVER INTERFACE
def check_unhandled_channel_request(self, channel, kind, want_reply, m):
log.debug("check_unhandled_channel_request %s", kind)
if kind == "auth-agent-req@openssh.com":
return True
return False
def check_global_request(self, kind, m):
log.devdebug("check_global_request %s", kind)
return paramiko.OPEN_FAILED_ADMINISTRATIVELY_PROHIBITED
def check_port_forward_request(self, address, port):
log.devdebug("check_port_forward_request %s %s", address, port)
self.tcpip_forward_ip = address
self.tcpip_forward_port = port
log.debug('tcpip-forward %s:%s' % (self.tcpip_forward_ip,
self.tcpip_forward_port))
return str(self.tcpip_forward_port)
def check_channel_request(self, kind, chanid):
log.devdebug("check_channel_request %s %s", kind, chanid)
if kind == 'session':
return paramiko.OPEN_SUCCEEDED
return paramiko.OPEN_FAILED_ADMINISTRATIVELY_PROHIBITED
def check_auth_password(self, username, password):
if self.valid_auth(username=username, password=password):
return paramiko.AUTH_SUCCESSFUL
return paramiko.AUTH_FAILED
def check_auth_publickey(self, username, key):
if self.valid_auth(username=username, pubkey=key.get_base64()):
return paramiko.AUTH_SUCCESSFUL
return paramiko.AUTH_FAILED
def get_allowed_auths(self, username):
return 'password,publickey'
def check_channel_shell_request(self, channel):
log.devdebug("check_channel_shell_request")
self.event.set()
return True
def check_channel_subsystem_request(self, channel, name):
log.devdebug("check_channel_subsystem_request %s %s", channel, name)
return paramiko.ServerInterface.check_channel_subsystem_request(self,
channel, name)
def check_channel_exec_request(self, channel, command):
log.devdebug('check_channel_exec_request %s %s', channel, command)
self.set_channel(channel)
value = self.set_exec_args(command)
self.event.set()
return value
def check_channel_pty_request(self, channel, term, width, height,
pixelwidth, pixelheight, modes):
self.set_term(term, width, height)
return True
def window_change_handler(self):
return False
def setup_window_change_handler(self, window_change_handler):
self.window_change_handler = window_change_handler
def check_channel_window_change_request(self, channel, width, height,
pixelwidth, pixelheight):
log.devdebug('window_change: %s %s' % (width, height))
self.set_term(self.term, width, height)
return self.window_change_handler()
### SSHPROXY SERVER INTERFACE
def valid_auth(self, username, password=None, pubkey=None):
if not self.monitor.call('authenticate',
username=username,
password=password,
pubkey=pubkey,
ip_addr=self.client_addr[0]):
self._unauth_pubkey = pubkey
return False
self.username = username
self.monitor.call('update_ns', 'client', {'username': username})
if hasattr(self, '_unauth_pubkey') and self._unauth_pubkey:
if self.monitor.call('add_client_pubkey', self._unauth_pubkey):
self.message_client("WARNING: Your public key"
" has been added to the keyring\n")
return True
#we can put here some logging for connection failures
def report_failure(self, reason, *args, **kwargs):
log.error("Failure: %s %s" % (reason, args[0]))
def message_client(self, msg):
self.queue_message(msg)
def queue_message(self, msg=None):
chan = getattr(self, 'chan', None)
if not hasattr(self, 'qmsg'):
self.qmsg = []
if msg is not None:
self.qmsg.append(msg)
if not chan:
return
while len(self.qmsg):
chan.send(chanfmt(self.qmsg.pop(0)))
def set_username(self, username):
self.username = username
def set_channel(self, chan):
self.chan = chan
def set_term(self, term, width, height):
self.term, self.width, self.height = term, width, height
def set_exec_args(self, argstr):
# XXX: naive arguments splitting
self.args = argstr.strip().split()
return True
def is_admin(self):
return self.is_authenticated() and self.monitor.call('is_admin')
def is_authenticated(self):
return hasattr(self, 'username')
def add_cmdline_options(self, parser):
if self.check_acl('admin'):
parser.add_option("", "--admin", dest="action",
help=_(u"run administrative commands"),
action="store_const",
const='admin',
)
if self.check_acl('console_session'):
parser.add_option("", "--console", dest="action",
help=_(u"open administration console"),
action="store_const",
const='console',
)
if self.check_acl('opt_list_sites'):
parser.add_option("-l", "--list-sites", dest="action",
help=_(u"list allowed sites"),
action="store_const",
const='list_sites',
)
if self.check_acl('opt_get_pubkey') or self.check_acl('opt_get_pkey'):
parser.add_option("", "--get-pubkey", dest="action",
help=_(u"display public key for user@host."),
action="store_const",
const="get_pubkey",
)
def parse_cmdline(self, args):
usage = u"""
pssh [options]
pssh [user@site [cmd]]
"""
parser = OptionParser(self.chan, usage=usage)
# add options from a mapping or a Registry callback
self.add_cmdline_options(parser)
return parser.parse_args(args)
def opt_admin(self, options, *args):
if not len(args):
self.chan.send(chanfmt(_(u'Missing argument, try --admin help '
'to get a list of commands.\n')))
return
resp = self.dispatcher.console('%s' % ' '.join(args)) or ''
self.chan.send(chanfmt(resp+'\n'))
def opt_console(self, options, *args):
return self.do_console()
def opt_list_sites(self, options, *args):
self.chan_send(self.run_cmd('list_sites %s'% ' '.join(args)))
def chan_send(self, s):
chan = self.chan
s = chanfmt(s)
sz = len(s)
while sz:
sent = chan.send(s)
if sent:
s = s[sent:]
sz = sz - sent
def run_cmd(self, cmd):
result = self.dispatcher.dispatch(cmd) or ''
return result + '\n'
def readlines(self):
buffer = []
chan = self.chan
chan.setblocking(True)
while True:
data = chan.recv(4096)
if not data:
chan.shutdown_read()
yield ''.join(buffer)
break
if '\n' in data:
yield ''.join(buffer) + data[:data.index('\n')+1]
buffer = [ data[data.index('\n')+1:] ]
else:
buffer.append(data)
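For reference, a standalone generator that splits incoming chunks into complete lines, handling more than one newline per chunk (the loop above splits only at the first newline of each received block):

```python
def split_lines(chunks):
    # Buffer partial data and yield one newline-terminated line at a time;
    # any trailing data without a final newline is flushed at the end.
    buffer = ""
    for data in chunks:
        buffer += data
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            yield line + "\n"
    if buffer:
        yield buffer
```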
def console_no_pty(self):
from server import Server
chan = self.chan
for data in self.readlines():
if not data:
continue
response = self.run_cmd(data)
self.chan_send(response)
def opt_get_pubkey(self, options, *args):
result = []
for site in args:
spubkey = util.get_site_pubkey(site)
if spubkey is None:
result.append(_(u"%s: No privkey tag found") % site)
continue
if len(spubkey):
result.append('%s: %s' % (site, ' '.join(spubkey)))
else:
result.append(_(u"%s: No privkey found") % site)
if not result:
result.append(_(u'Please give at least a site.'))
self.chan.send(chanfmt('\n'.join(result)+'\n'))
def do_eval_options(self, options, args):
if options.action and hasattr(self, 'opt_%s' % options.action):
getattr(self, 'opt_%s' % options.action)(options, *args)
def init_subsystems(self):
#self.transport.set_subsystem_handler('sftp', paramiko.SFTPServer,
# ProxySFTPServer)
pass
def start(self):
# start transport for the client
self.transport = paramiko.Transport(self.client)
self.transport.set_log_channel("paramiko")
# debug !!
#self.transport.set_hexdump(1)
try:
self.transport.load_server_moduli()
except:
raise
self.transport.add_server_key(self.host_key)
# start the server interface
negotiation_ev = threading.Event()
self.init_subsystems()
self.transport.start_server(negotiation_ev, self)
while not negotiation_ev.isSet():
negotiation_ev.wait(0.5)
if not self.transport.is_active():
raise SSHProxyError('SSH negotiation failed')
chan = self.transport.accept(60)
if chan is None:
log.error('cannot open the channel. '
'Check the transport object. Exiting..')
return
log.info('Authenticated %s', self.username)
self.event.wait(15)
if not self.event.isSet():
log.error('client never asked for a shell or a command.'
' Exiting.')
sys.exit(1)
self.set_channel(chan)
namespace = self.monitor.call('get_namespace')
self.dispatcher = Dispatcher(self.monitor, namespace)
try:
try:
# this is the entry point after initialization have been done
self.do_work()
# after this point, client is disconnected
except SSHProxyError, msg:
log.exception(msg)
chan.send(chanfmt(str(msg)+'\n'))
except Exception, msg:
log.exception("An error occured: %s" % msg)
chan.send(chanfmt(_(u"An error occured: %s\n") % msg))
finally:
if self.chan.active:
self.chan.send_exit_status(self.exit_status)
# close what we can
for item in ('chan', 'transport', 'ipc'):
try:
getattr(self, item).close()
except:
pass
return
def do_console(self):
if not self.check_acl('console_session'):
self.chan.send(chanfmt(_(u"ERROR: You are not allowed to"
" open a console session.\n")))
return False
self.monitor.call('update_ns', 'client', {'type': 'console'})
if hasattr(self, 'term'):
return self.dispatcher.console()
else:
return self.console_no_pty()
def do_scp(self):
args = []
argv = self.args[1:]
while True:
if argv[0][0] == '-':
args.append(argv.pop(0))
continue
break
site, path = argv[0].split(':', 1)
if not self.authorize(site, need_login=True):
self.chan.send(chanfmt(_(u"ERROR: %s does not exist "
"in your scope\n") % site))
return False
if '-t' in args:
upload = True
scpdir = 'upload'
else:
upload = False
scpdir = 'download'
self.update_ns('proxy', {
'scp_dir': scpdir,
'scp_path': path or '.',
'scp_args': ' '.join(args)
})
# check ACL for the given direction, then if failed, check general ACL
if not ((self.check_acl('scp_' + scpdir)) or
self.check_acl('scp_transfer')):
self.chan.send(chanfmt(_(u"ERROR: You are not allowed to"
" do scp file transfert in this"
" directory or direction on %s\n") % site))
return False
self.update_ns('client', {
'type': 'scp_%s' % scpdir,
})
conn = proxy.ProxyScp(self.chan, self.connect_site(), self.monitor)
try:
self.exit_status = conn.loop()
except AuthenticationException, msg:
self.chan.send("\r\n ERROR: %s." % msg +
"\r\n Please report this error "
"to your administrator.\r\n\r\n")
self.report_failure("site_authentication_error", msg)
return False
return True
def do_remote_execution(self):
site = self.args.pop(0)
if not self.authorize(site, need_login=True):
self.chan.send(chanfmt(_(u"ERROR: %s does not exist in "
"your scope\n") % site))
return False
self.update_ns('proxy', {'cmdline': (' '.join(self.args)).strip()})
if not self.check_acl('remote_exec'):
self.chan.send(chanfmt(_(u"ERROR: You are not allowed to"
" exec that command on %s"
"\n") % site))
return False
self.update_ns('client', {
'type': 'remote_exec',
})
conn = proxy.ProxyCmd(self.chan, self.connect_site(), self.monitor)
try:
self.exit_status = conn.loop()
except AuthenticationException, msg:
self.chan.send(_(u"\r\n ERROR: %s.") % msg +
_(u"\r\n Please report this error "
"to your administrator.\r\n\r\n"))
self.report_failure("site_authentication_error", msg)
return False
conn = None
log.info("Exiting %s", site)
return True
def do_shell_session(self):
site = self.args.pop(0)
if not self.authorize(site, need_login=True):
self.chan.send(chanfmt(_(u"ERROR: %s does not exist in "
"your scope\n") % site))
return False
if not self.check_acl('shell_session'):
self.chan.send(chanfmt(_(u"ERROR: You are not allowed to"
" open a shell session on %s"
"\n") % site))
return False
self.update_ns('client', {
'type': 'shell_session'
})
log.info("Connecting to %s", site)
conn = proxy.ProxyShell(self.chan, self.connect_site(), self.monitor)
try:
self.exit_status = conn.loop()
except AuthenticationException, msg:
self.chan.send(_(u"\r\n ERROR: %s.") % msg +
_(u"\r\n Please report this error "
"to your administrator.\r\n\r\n"))
self.report_failure("site_authentication_error", msg)
return False
except KeyboardInterrupt:
return True
except Exception, e:
self.chan.send(_(u"\r\n ERROR: It seems you found a bug."
"\r\n Please report this error "
"to your administrator.\r\n"
"Exception class: <%s>\r\n\r\n")
% e.__class__.__name__)
self.report_failure("bug", str(e))
raise
# if the direct connection closed, then exit cleanly
conn = None
log.info("Exiting %s", site)
return True
# XXX: stage2: make it easier to extend
# make explicit the stage automaton
def do_work(self):
# empty the message queue now we've got a valid channel
self.queue_message()
# this is a connection to the proxy console
if not len(self.args):
return self.do_console()
else:
# this is an option list
if len(self.args[0]) and self.args[0][0] == '-':
try:
options, args = self.parse_cmdline(self.args)
except 'EXIT':
return False
return self.do_eval_options(options, args)
# this is an scp file transfer
elif self.args[0] == 'scp':
return self.do_scp()
else:
site = self.args[0]
# this is a remote command execution
if len(self.args) > 1:
return self.do_remote_execution()
# this is a shell session
else:
return self.do_shell_session()
# Should never get there
return False
def connect_site(self, site_tags=None, site_ref=None):
tags = self.monitor.call('get_namespace')
if site_tags:
tags['site'] = site_tags
name = '%s@%s' % (tags['site']['login'],
tags['site']['name'])
hostkey = tags['proxy'].get('hostkey', None) or None
if site_ref is None:
if not tags['site'].get('ip_address'):
raise ValueError('Missing site address in database')
site_ref = (tags['site']['ip_address'],
int(tags['site'].get('port', 22)))
import socket
try:
transport = paramiko.Transport(site_ref)
except socket.error, msg:
raise SSHProxyError("Could not connect to site %s: %s"
% (name, msg[1]))
except Exception, msg:
raise SSHProxyError("Could not connect to site %s: %s"
% (name, str(msg)))
transport.start_client()
if hostkey is not None:
transport._preferred_keys = [ hostkey.get_name() ]
key = transport.get_remote_server_key()
if (key.get_name() != hostkey.get_name()
or str(key) != str(hostkey)):
log.error('Bad host key from server (%s).' % name)
raise AuthenticationError('Bad host key from server (%s).'
% name)
log.info('Server host key verified (%s) for %s' % (key.get_name(),
name))
privkey = cipher.decipher(tags['site'].get('privkey',
tags['site'].get('pkey', '')))
password = cipher.decipher(tags['site'].get('password', ''))
password_encoding = tags['site'].get('password_encoding', 'utf8')
password = convert(password, password_encoding)
authentified = False
if privkey:
privkey = util.get_dss_key_from_string(privkey)
try:
transport.auth_publickey(tags['site']['login'], privkey)
authentified = True
except AuthenticationException:
log.warning('PKey for %s was not accepted' % name)
if not authentified and password:
try:
transport.auth_password(tags['site']['login'], password)
authentified = True
except AuthenticationException:
log.error('Password for %s is not valid' % name)
raise
if not authentified:
raise AuthenticationException('No valid authentication token for %s'
% name)
chan = transport.open_session()
chan.settimeout(1.0)
return chan
Server.register()
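The `do_scp` method above splits the server-side scp command line into flags, a `site:path` target, and a transfer direction (`-t` marks an upload). A standalone sketch of that same parsing logic, with a function name of my own invention rather than anything from this codebase:

```python
def parse_scp_request(argv):
    """Split an scp server-side command line into (flags, site, path, upload).

    Mirrors the logic in do_scp() above: leading '-' tokens are flags,
    the first non-flag token is 'site:path', and the '-t' flag marks
    an upload (scp sends '-f' for downloads).
    """
    flags = []
    while argv and argv[0].startswith('-'):
        flags.append(argv.pop(0))
    # The first remaining token is the 'site:path' target.
    site, _, path = argv[0].partition(':')
    upload = '-t' in flags
    # An empty path defaults to '.', as in do_scp().
    return flags, site, path or '.', upload

print(parse_scp_request(['-t', '-r', 'web01:/var/www']))
# → (['-t', '-r'], 'web01', '/var/www', True)
```

The same split could be done with `str.split(':', 1)` as in the original; `partition` just avoids the unpacking error when no colon is present.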
|
WILMINGTON, DE - The Mount Pleasant High School Field Hockey Team has joined the Muscle Movement Foundation (MMF) for their very first season. Head Coach, Pete Meisel, learned of the Muscle Movement Foundation through the widely spread MMF message in his home state of Delaware. A coach who strongly believes in implementing life lessons into coaching, he took immediate initiative to have his girls join the MMF - the Charity of Champions.
Coach Meisel invited Muscle Movement Fnd. Founder and President, Rob DeMasi, to speak to his team. The young ladies quickly learned about the obstacles that MMF families endure. They also learned how important their opportunity is as student-athletes and the significance of healthy muscular function. By the end of the speech, the girls were thrilled to be a part of the MMF and quickly started to discuss ideas to empower local Muscle Champions in need.
The Green Knights have created their own Muscle Movement Foundation page to raise contributions for the fight against neuromuscular disease. Please feel free to "sponsor" the team for their performance on and off of the field this season!
Click the following link to donate to the Muscle Movement Fnd. - Mount Pleasant Field Hockey page: STICK IT TO MUSCLE DISEASE!
The Green Knights look forward to holding an apparel fundraiser and to participating in the Muscle Movement Foundation's Annual Delaware Run for Strength 5k on April 7th, 2018!
|
# -*- coding: utf-8 -*-
"""
solace.utils.admin
~~~~~~~~~~~~~~~~~~
Admin helpers.
:copyright: (c) 2009 by Plurk Inc., see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from solace import settings
from solace.i18n import _
from solace.application import url_for
from solace.templating import render_template
from solace.utils.mail import send_email
from solace.models import User, session
def ban_user(user):
"""Bans a user if it was not already banned. This also sends the
user an email that he was banned.
"""
if user.is_banned:
return
user.is_banned = True
send_email(_(u'User account banned'),
render_template('mails/user_banned.txt', user=user),
user.email)
session.commit()
def unban_user(user):
"""Unbans the user. What this actually does is sending the user
an email with a link to reactivate his account. For reactivation
he has to give himself a new password.
"""
if not user.is_banned:
return
if settings.REQUIRE_NEW_PASSWORD_ON_UNBAN:
user.is_active = False
user.is_banned = False
reset_url = url_for('core.reset_password', email=user.email,
key=user.password_reset_key, _external=True)
send_email(_(u'Your ban was lifted'),
render_template('mails/user_unbanned.txt', user=user,
reset_url=reset_url), user.email)
session.commit()
|
Claudiare Motley still has scars on his neck after a teen shot him on June 21, 2014, during an attempted carjacking. Motley estimates the resulting medical bills, lost wages and travel costs for medical appointments and court hearings have cost him about $80,000 out-of-pocket. Motley is pictured here in downtown Milwaukee on Oct. 14. Photo by Mike De Sisti of the Milwaukee Journal Sentinel.
It was 1:30 a.m. on the first day of summer in 2014, and Claudiare Motley had just dropped off a friend after coming into town for his Milwaukee Tech High School 25-year class reunion. He was parked around North 63rd Street and West Capitol Drive, writing an email on his phone, as two cars pulled up.
Motley, then 43, knew “something was going on” as one of the vehicles turned in front of him and stopped. He put his phone into his pocket, shifted his car into gear. A teenager jumped from the car and tapped Motley’s window with a gun.
He accelerated as 15-year-old Nathan King fired, shattering glass. Motley rammed the car in front of him out of the way. He sped off and looked in the rearview mirror to see if they were chasing him.
“I just saw blood gushing out of my jaw,” Motley said.
After more than a year and six surgeries to repair his injuries, Motley estimates his out-of-pocket costs to be at least $80,000, and he expects more medical expenses as he continues to recover.
His efforts to get state victim’s compensation for his medical bills not covered by insurance have been unsuccessful so far. Motley’s credit has taken a hit, and he estimates lost earnings because of time he could not work in his family’s international law firm to be between $40,000 and $60,000.
Wisconsin taxpayers and health care providers also pay a high price for gun violence. In April, Mother Jones magazine pegged the cost of gun violence to Wisconsinites in 2012 at $2.9 billion in direct and indirect costs, or $508 for every person in the state.
Those figures include the financial and psychological tolls taken when a bullet forever alters the lives of victims and shooters alike. There are lost wages, stunted futures, shattered plans, life-changing trauma.
The credit of shooting victim Claudiare Motley has taken a hit after a 2014 shooting left him paying tens of thousands of dollars in out-of-pocket medical and other expenses. Photo by Haley Henschel of the Wisconsin Center for Investigative Journalism.
Firearms are a big factor in crime statewide. In 2014, guns were involved in 75 percent of murders, 56 percent of armed robberies, 27 percent of aggravated assaults and 3 percent of forcible rapes, according to the state Department of Justice.
Taxpayers pay all of the costs for police, prosecutors and incarceration — and sometimes to defend the accused — in gun crimes. And 79 percent of health care costs in Wisconsin associated with firearm-related injuries are paid by the public, according to a 2014 report using 2010 data by the Urban Institute, a Washington, D.C.-based think tank.
When he is released from prison, King — who is now paralyzed from the waist down because of a separate shooting days after he shot Motley — will probably face limited employment opportunities. Motley said he does not expect to see much of the $29,339 in court-ordered restitution.
Nathan King is wheeled out of court after being sentenced in July. King, who shot Claudiare Motley on June 21, 2014, was paralyzed from the waist down after being shot during another attempted robbery a few days later. Photo by John Klein for the Milwaukee Journal Sentinel.
State taxpayers paid about $1,500 for the 50 hours spent by Milwaukee County Assistant District Attorney Joy Hammond to prosecute King during the case that ended in September. They paid $1,537 for King’s attorney, Ann T. Bowe.
Residents of Wisconsin will spend about $405,000 to keep King in prison during his 12-and-a-half-year sentence for the Motley shooting and a later armed robbery in which King himself was shot. After he is released, King will be on extended supervision for seven and a half years at a cost to taxpayers of at least $21,000 in today’s dollars.
The tally for the Motley shooting — at least half a million dollars — is the cost of just one shooting in a city that this year has seen 691 people shot, including 131 killed, by firearms as of Nov. 15. That was a 77 percent increase in gun homicides from November 2014 and an 11 percent increase in nonfatal shootings.
In addition to victims, entire communities face costs, including reduced property values in high-crime areas and increased costs to keep the public safe.
“You always hear the bad things that could happen, but you never really think … that you actually encounter something like that,” he said.
A report from the Center for American Progress, a progressive public policy organization, suggests that a reduction in violent crime — including homicides, rapes and assaults — could have large impacts on urban areas. The report, which analyzed 2010 crime levels in Milwaukee and seven other cities, suggested a 10 percent reduction in homicides could boost residential real estate by $800 million in Milwaukee.
The report noted such a reduction could generate “large revenue gains” from property taxes, but it did not provide an estimate. Cutting homicides by 25 percent, it projected, could add $2 billion in increased housing values.
Police tape marks the scene where 13-year-old Giovonnie G. Cameron was shot and killed near Lincoln Park in Milwaukee on July 8. As of Nov. 15, fatal shootings in Milwaukee are up 77 percent compared to the same period in 2014. Photo by Mark Hoffman of the Milwaukee Journal Sentinel.
Experts say because the cost for each shooting is so high — often in the tens if not hundreds of thousands of dollars — anything that could reduce gun violence would likely be worth the investment.
“Almost any reasonable policy that reduces crime will pay for itself,” said David Weimer, a University of Wisconsin-Madison professor of political economy and expert in cost-benefit analysis.
But not everyone agrees on the best way to curb gun violence. Some have called for expanding background checks and for banning certain types of assault weapons.
Jeff Nass, executive director of Wisconsin Firearm Owners, Ranges, Clubs and Educators Inc., disagrees. He said any cost-benefit analysis should include the positive value that guns have when used for self-defense.
Nass, whose organization is affiliated with the National Rifle Association, argued that violence and gun issues are separate. He called for more prosecution of illegal gun possession and gun crimes.
It’s ironic that you feel that way, tomw… because Nathan King, the boy who shot Claudiare, was eventually stopped by a ccw permit holder when King tried to rob her as well.
How about we hold parents accountable for their children's actions… or at least for letting them run the streets at 1:30 am. Or how about we try to promote traditional family values and education. It's no surprise that Nathan King came from a single parent household. Where are the fathers in all this?
How about we rescind this ludicrous no chase policy in Milwaukee? Nathan King and his crew stole car after car and used them in multiple violent crimes. By not chasing “non-violent” car thieves the criminals are free to use those stolen cars in many other crimes.
Motley is right, Milwaukee has a “cultural acceptance of violence” and a “proliferation of guns and illegal drugs.” We need families that support their children, improve education, and provide opportunities. Not create a tax on law abiding citizens who have no relationship to this violence.
And Ed Flynn says part of the gun problem is the state’s concealed carry law, but what does he know? He’s only the police chief.
I’m all for holding parents accountable. Too many times they aren’t charged when one of their guns is used in a shooting, accidental or otherwise, and that’s absurd.
|
import sys
import json
from distutils.core import Command
class extract_dist(Command):
"""Custom distutils command to extract metadata form setup function."""
description = ("Assigns self.distribution to class attribute to make "
"it accessible from outside a class.")
user_options = [('stdout', None,
'print metadata in json format to stdout')]
class_metadata = None
def __init__(self, *args, **kwargs):
"""Metadata dictionary is created, all the metadata attributes,
that were not found are set to default empty values. Checks of data
types are performed.
"""
Command.__init__(self, *args, **kwargs)
self.metadata = {}
for attr in ['setup_requires', 'tests_require', 'install_requires',
'packages', 'py_modules', 'scripts']:
self.metadata[attr] = to_list(getattr(self.distribution, attr, []))
try:
for k, v in getattr(
self.distribution, 'extras_require', {}).items():
                if k in ['test', 'docs', 'doc', 'dev']:
attr = 'setup_requires'
else:
attr = 'install_requires'
self.metadata[attr] += to_list(v)
except (AttributeError, ValueError):
# extras require are skipped in case of wrong data format
# can't log here, because this file is executed in a subprocess
pass
for attr in ['url', 'long_description', 'description', 'license']:
self.metadata[attr] = to_str(
getattr(self.distribution.metadata, attr, None))
self.metadata['classifiers'] = to_list(
getattr(self.distribution.metadata, 'classifiers', []))
if isinstance(getattr(self.distribution, "entry_points", None), dict):
self.metadata['entry_points'] = self.distribution.entry_points
else:
self.metadata['entry_points'] = None
self.metadata['test_suite'] = getattr(
self.distribution, "test_suite", None) is not None
def initialize_options(self):
"""Sets default value of the stdout option."""
self.stdout = False
def finalize_options(self):
"""Abstract method of Command class have to be overridden."""
pass
def run(self):
"""Sends extracted metadata in json format to stdout if stdout
option is specified, assigns metadata dictionary to class_metadata
variable otherwise.
"""
if self.stdout:
sys.stdout.write("extracted json data:\n" + json.dumps(
self.metadata, default=to_str))
else:
extract_dist.class_metadata = self.metadata
def to_list(var):
"""Checks if given value is a list, tries to convert, if it is not."""
if var is None:
return []
if isinstance(var, str):
var = var.split('\n')
elif not isinstance(var, list):
try:
var = list(var)
except TypeError:
raise ValueError("{} cannot be converted to the list.".format(var))
return var
def to_str(var):
"""Similar to to_list function, but for string attributes."""
try:
return str(var)
except TypeError:
raise ValueError("{} cannot be converted to string.".format(var))
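The `to_list` helper above normalizes the loosely-typed values that `setup()` metadata can carry (None, newline-separated strings, tuples). A quick illustration, with the helper repeated verbatim so the snippet runs on its own:

```python
def to_list(var):
    """Coerce var into a list: None -> [], str -> split on newlines,
    any other non-list iterable -> list(var)."""
    if var is None:
        return []
    if isinstance(var, str):
        var = var.split('\n')
    elif not isinstance(var, list):
        try:
            var = list(var)
        except TypeError:
            raise ValueError("{} cannot be converted to the list.".format(var))
    return var

# Typical shapes seen in setup() metadata:
print(to_list(None))                 # → []
print(to_list("requests\nsix"))      # → ['requests', 'six']
print(to_list(("pytest",)))          # → ['pytest']
```

This is why `extract_dist.__init__` can safely use `+=` on the metadata entries: every attribute has been coerced to a real list first.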
|
The most expensive stormwater runoff problem to fix is the one that’s not addressed. That’s the first point CLF Massachusetts Staff Attorney Cynthia Liebman makes in this smart letter to the editor published yesterday in the MetroWest Daily News.
As the battle over Vermont Yankee’s future is waged, Conservation Law Foundation and VPIRG seek to join as a friend of the court, or amicus for this first stage.
On April 14 U.S. District Court Judge William G. Young issued a final judgment in CLF’s favor in our suit against the MassHighway Department, bringing to a close nearly five years of litigation to push the department to manage stormwater runoff from state roads that was polluting nearby waterbodies.
|
#### PATTERN | EN | PARSER | BRILL LEXICON #########################################################
# Copyright (c) 2010 University of Antwerp, Belgium
# Author: Tom De Smedt <tom@organisms.be>
# License: BSD (see LICENSE.txt for details).
# http://www.clips.ua.ac.be/pages/pattern
####################################################################################################
# Brill lexicon with lexical and contextual rules, using lazy-loading.
import os
try:
MODULE = os.path.dirname(__file__)
except:
MODULE = ""
#### BRILL LEXICAL RULES ###########################################################################
LEXICAL = ["char", "hassuf", "deletesuf", "addsuf", "haspref", "deletepref", "addpref"]
LEXICAL += ["goodleft", "goodright"]
LEXICAL.extend(["f"+x for x in LEXICAL])
LEXICAL = dict.fromkeys(LEXICAL, True)
class LexicalRules(list):
def __init__(self, lexicon, path=os.path.join(MODULE, "Brill_lexical_rules.txt")):
# Brill's lexical rules.
# An entry looks like: ('fhassuf', ['NN', 's', 'fhassuf', '1', 'NNS', 'x']).
# The first item is the lookup command.
# If prefixed with an "f", it means that the token needs to have the first given tag (NN).
# In this case, if the NN-word ends with an "s", it is tagged as NNS.
self.lexicon = lexicon
self.path = path
def load(self):
for i, rule in enumerate(open(self.path).read().strip().split("\n")):
rule = rule.split()
for cmd in rule:
if cmd in LEXICAL:
list.append(self, (cmd, rule)); break
def __iter__(self):
if len(self) == 0:
self.load()
return list.__iter__(self)
def apply(self, token, previous=(None,None), next=(None,None)):
""" Applies the lexical rules to the given token.
A token is a [word,tag]-item whose tag might change if it matches a rule.
Rules are lexically based on word characters, prefixes and suffixes.
"""
word, pos = token[0], token[1]
if word[:1].isdigit() and word.replace(".","").isdigit():
return [word, "CD"]
for cmd, rule in iter(self):
pos = rule[-2]
x = rule[0]
if cmd.startswith("f"):
# Word must be tagged as the f-rule states.
cmd = cmd[1:]
if token[1] != rule[0]: continue
x = rule[1]
if (cmd == "char" and x in word) \
or (cmd == "hassuf" and word.endswith(x)) \
or (cmd == "deletesuf" and word.endswith(x) and word[:-len(x)] in self.lexicon) \
or (cmd == "haspref" and word.startswith(x)) \
or (cmd == "deletepref" and word.startswith(x) and word[len(x):] in self.lexicon) \
or (cmd == "addsuf" and word+x in self.lexicon) \
or (cmd == "addpref" and x+word in self.lexicon) \
or (cmd == "goodleft" and x == previous[0]) \
or (cmd == "goodright" and x == next[0]):
return [word, pos]
return token
#### BRILL CONTEXTUAL RULES ########################################################################
CONTEXTUAL = ["PREVTAG", "NEXTTAG", "PREV1OR2TAG", "NEXT1OR2TAG", "PREV1OR2OR3TAG", "NEXT1OR2OR3TAG"]
CONTEXTUAL += ["SURROUNDTAG", "PREVBIGRAM", "NEXTBIGRAM", "LBIGRAM", "RBIGRAM", "PREV2TAG", "NEXT2TAG"]
CONTEXTUAL += ["CURWD", "PREVWD", "NEXTWD", "PREV1OR2WD", "NEXT1OR2WD", "WDPREVTAG"]
CONTEXTUAL = dict.fromkeys(CONTEXTUAL, True)
class ContextualRules(list):
def __init__(self, lexicon, path=os.path.join(MODULE, "Brill_contextual_rules.txt")):
# Brill's contextual rules.
# An entry looks like: ('PREVTAG', ['VBD', 'VB', 'PREVTAG', 'TO']).
# The first item is the lookup command.
# The example rule reads like:
# "If the previous word is tagged TO, change this word's tag from VBD to VB (if it is VBD)".
self.lexicon = lexicon
self.path = path
def load(self):
for i, rule in enumerate(open(self.path).read().strip().split("\n")):
rule = rule.split()
for cmd in rule:
if cmd in CONTEXTUAL:
list.append(self, (cmd, rule)); break
def __iter__(self):
if len(self) == 0:
self.load()
return list.__iter__(self)
def apply(self, tokens):
""" Applies the contextual rules to the given list of tokens.
Each token is a [word,tag]-item whose tag might change if it matches a rule.
Rules are contextually based on the token's position in the sentence.
"""
b = [(None,"STAART")] * 3 # Add empty tokens so we can scan ahead and behind.
T = b + tokens + b
for i, token in enumerate(T):
for cmd, rule in iter(self):
# If the word is tagged differently than required by the rule, skip it.
if token[1] != rule[0]:
continue
# Never allow rules to tag "be" anything but infinitive.
if token[0] == "be" and token[1] == "VB":
continue
# A rule involves scanning the previous/next word or tag,
# and all combinations thereof.
x = rule[3]
if (cmd == "PREVTAG" and x == T[i-1][1]) \
or (cmd == "NEXTTAG" and x == T[i+1][1]) \
or (cmd == "PREV1OR2TAG" and x in (T[i-1][1], T[i-2][1])) \
or (cmd == "NEXT1OR2TAG" and x in (T[i+1][1], T[i+2][1])) \
or (cmd == "PREV1OR2OR3TAG" and x in (T[i-1][1], T[i-2][1], T[i-3][1])) \
or (cmd == "NEXT1OR2OR3TAG" and x in (T[i+1][1], T[i+2][1], T[i+3][1])) \
or (cmd == "SURROUNDTAG" and x == T[i-1][1] and rule[4] == T[i+1][1]) \
or (cmd == "PREVBIGRAM" and x == T[i-2][1] and rule[4] == T[i-1][1]) \
or (cmd == "NEXTBIGRAM" and x == T[i+1][1] and rule[4] == T[i+2][1]) \
or (cmd == "LBIGRAM" and x == T[i-1][0] and rule[4] == T[i][0]) \
or (cmd == "RBIGRAM" and x == T[i][0] and rule[4] == T[i+1][0]) \
or (cmd == "PREV2TAG" and x == T[i-2][1]) \
or (cmd == "NEXT2TAG" and x == T[i+2][1]) \
or (cmd == "CURWD" and x == T[i][0]) \
or (cmd == "PREVWD" and x == T[i-1][0]) \
or (cmd == "NEXTWD" and x == T[i+1][0]) \
or (cmd == "PREV1OR2WD" and x in (T[i-1][0], T[i-2][0])) \
or (cmd == "NEXT1OR2WD" and x in (T[i+1][0], T[i+2][0])) \
or (cmd == "WDPREVTAG" and x == T[i][0] and rule[4] == T[i-1][1]) \
or (cmd == "WDNEXTTAG" and x == T[i][0] and rule[4] == T[i+1][1]):
tokens[i-len(b)] = [tokens[i-len(b)][0], rule[1]]
# Brill's contextual rules assign tags based on a statistical majority vote.
# Corrections, primarily based on user-feedback.
# with/IN
if token[0] == "with":
tokens[i-len(b)][1] = "IN"
# such/JJ as/IN
if i > 0 and T[i-1][0] == "such" and token[0] == "as":
tokens[i-1-len(b)][1] = "JJ"
tokens[i-0-len(b)][1] = "IN"
# a/DT burning/VBG candle/NN => a/DT burning/JJ candle/NN
if token[1] == "VBG":
if T[i-1][1] == "DT" and T[i+1][1].startswith("NN"):
tokens[i-len(b)][1] = "JJ"
# een/DT brandende/VBG kaars/NN => een/DT brandende/JJ kaars/NN
if token[1].startswith("V(") and "teg_dw" in token[1]:
if T[i-1][1].startswith("Art(") and T[i+1][1].startswith("N("):
tokens[i-len(b)][1] = "JJ"
return tokens
#### BRILL LEXICON #################################################################################
class Lexicon(dict):
def __init__(self, path=os.path.join(MODULE, "Brill_lexicon.txt")):
self.path = path
self.lexical_rules = LexicalRules(self)
self.contextual_rules = ContextualRules(self)
def load(self):
# Brill's lexicon is a list of common tokens and their part-of-speech tag.
# It takes a while to load but this happens only once when pattern.en.parser.parse() is called.
# Create a dictionary from the entries:
dict.__init__(self, (x.split(" ")[:2] for x in open(self.path).read().splitlines()))
def get(self, word, default=None):
return word in self and dict.__getitem__(self, word) or default
def __contains__(self, word):
if len(self) == 0:
self.load()
return dict.__contains__(self, word)
def __getitem__(self, word):
if len(self) == 0:
self.load()
return dict.__getitem__(self, word)
def __setitem__(self, word, pos):
if len(self) == 0:
self.load()
return dict.__setitem__(self, word, pos)
def keys(self):
if len(self) == 0:
self.load()
return dict.keys(self)
def values(self):
if len(self) == 0:
self.load()
return dict.values(self)
def items(self):
if len(self) == 0:
self.load()
return dict.items(self)
|
When You Have To Factory Reset Your BENQ E55?
Are you looking for a way to make your BENQ E55 work faster? Do you wish to clear all of the data on your BENQ E55 before selling it to someone else? Well, what you need is a factory reset.
What is it? Factory reset (aka hard reset) is an operation which deletes all data (including settings, applications, calendars, pictures etc) on your BENQ E55 and brings back the default settings which makes your device as if it came right from the manufacturer in 2008 year.
When do you need to perform such an operation? When you need your BENQ E55 to work faster, when there are difficulties in the performance of the operating system, or when you just want to get rid of everything that has been stored on your BENQ E55. After a while it is a good idea to perform a hard reset in order to speed the device up. After the restoring, the Li-Ion 900 mAh battery should work longer. Most importantly, you will have the whole 45 MB of storage available.
HardReset.Info is a website with factory reset descriptions for more than twenty thousand devices. But that is not all: you will also find some useful tricks, interesting articles, videos with tutorials, answers to frequently asked questions and more. Just click on the Hard Reset BENQ E55 button and you will see it all.
You can click here HardReset.Info YouTube Channel to find video related with BENQ E55.
|
"""Definitions for the semantics segment of the Cretonne language."""
from cdsl.ti import TypeEnv, ti_rtl, get_type_env
from cdsl.operands import ImmediateKind
from cdsl.ast import Var
try:
from typing import List, Dict, Tuple # noqa
from cdsl.ast import VarAtomMap # noqa
from cdsl.xform import XForm, Rtl # noqa
from cdsl.ti import VarTyping # noqa
from cdsl.instructions import Instruction, InstructionSemantics # noqa
except ImportError:
pass
def verify_semantics(inst, src, xforms):
# type: (Instruction, Rtl, InstructionSemantics) -> None
"""
Verify that the semantics transforms in xforms correctly describe the
instruction described by the src Rtl. This involves checking that:
0) src is a single instance of inst
1) For all x\in xforms x.src is a single instance of inst
2) For any concrete values V of Literals in inst:
For all concrete typing T of inst:
            Exists single x \in xforms that applies to src concretized to V
and T
"""
# 0) The source rtl is always a single instance of inst
assert len(src.rtl) == 1 and src.rtl[0].expr.inst == inst
# 1) For all XForms x, x.src is a single instance of inst
for x in xforms:
assert len(x.src.rtl) == 1 and x.src.rtl[0].expr.inst == inst
variants = [src] # type: List[Rtl]
# 2) For all enumerated immediates, compute all the possible
# versions of src with the concrete value filled in.
for i in inst.imm_opnums:
op = inst.ins[i]
if not (isinstance(op.kind, ImmediateKind) and
op.kind.is_enumerable()):
continue
new_variants = [] # type: List[Rtl]
for rtl_var in variants:
s = {v: v for v in rtl_var.vars()} # type: VarAtomMap
arg = rtl_var.rtl[0].expr.args[i]
assert isinstance(arg, Var)
for val in op.kind.possible_values():
s[arg] = val
new_variants.append(rtl_var.copy(s))
variants = new_variants
# For any possible version of the src with concrete enumerated immediates
for src in variants:
# 2) Any possible typing should be covered by exactly ONE semantic
# XForm
src = src.copy({})
typenv = get_type_env(ti_rtl(src, TypeEnv()))
typenv.normalize()
typenv = typenv.extract()
for t in typenv.concrete_typings():
matching_xforms = [] # type: List[XForm]
for x in xforms:
if src.substitution(x.src, {}) is None:
continue
# Translate t using x.symtab
t = {x.symtab[str(v)]: tv for (v, tv) in t.items()}
if (x.ti.permits(t)):
matching_xforms.append(x)
assert len(matching_xforms) == 1,\
("Possible typing {} of {} not matched by exactly one case " +
": {}").format(t, src.rtl[0], matching_xforms)
|
Thank you from the bottom of our hearts.
We had a tragic and sudden loss today of our beloved dog Shyla.
Went back outside and found her laying in one of her favorite spots in the sun.
We loved her for 10 years and she loved us as well.
We were so distraught and heartbroken over this, and Dereck (grinder) and his wife Ashley made all the arrangements for us; this was truly a blessing for us. Thank you very much for helping.
I'm so sorry for your Loss Bryan/Jen. Pets are family.
I'm sorry Brian and Jen.
So sorry for you and Jen, pets are our family.
She was a daddy's girl; every night she would lay with me on the couch watching TV.
Glad that Darrick and Ashley could help.
We are so sorry for the loss of your loyal friend. Monday we took our 19-year-old kitty to the vet for the last time. Very difficult. We are still a mess.
Hang in there. Our sympathy.
So sorry to hear that, Brian! My sincerest condolences to you and your family.
Don't listen to anybody about waiting. Go rescue a puppy!!!! Best cure!
Damn, Bryan. I'm sorry. Empty pit of the stomach, dry mouth sorry.
It is such a tough thing to lose a pet. I had to put Ruby down a few weeks ago. She is the dog in my Avatar. One of the hardest things I have ever done. Just remember, you gave her a great life and someday, you will see her again.
oh man...the hardest things I have ever had to do was lose a dog...sorry to hear that Brian, I hope everything is ok with you.
As much as I wish you never had to experience that, I am glad that we were able to help you in that hard time. I'm glad that Goodyear Animal Hospital was able to comfort you and give Shyla the respect in her passing that she earned in life.
If there is anything else you need, please don't hesitate to ask.
I'm a mess today, miss my time with her in the morning.
|
# -*- coding: utf-8 -*-
#
# Copyright (C) 2007-2013 by Erwin Marsi and TST-Centrale
#
# This file is part of the DAESO Framework.
#
# The DAESO Framework is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# The DAESO Framework is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
generic Pair class for a pair of source and target items (strings, nodes,
graphs, ...)
"""
__authors__ = "Erwin Marsi <e.marsi@gmail.com>"
# used namedtuple before, but immutable attributes turned out to be
# inconvenient on a number of occasions
class Pair(object):
"""
Pair of source and target objects
"""
def __init__(self, source=None, target=None):
self.set(source, target)
    def __eq__(self, other):
        # return NotImplemented (rather than None) for foreign types so
        # Python can try the reflected comparison
        if isinstance(other, Pair):
            return ( self.source == other.source and
                     self.target == other.target )
        return NotImplemented
def __repr__(self):
return 'Pair(source={pair.source!r}, target={pair.target!r})'.format(
pair=self)
def __str__(self):
return 'Pair(source={pair.source}, target={pair.target})'.format(
pair=self)
def __iter__(self):
return (role for role in (self.source, self.target))
def set(self, source=None, target=None):
        self.source = source
self.target = target
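The comment above notes that a namedtuple was tried first but its immutable attributes proved inconvenient; a standalone sketch of that difference (not part of the DAESO code):

```python
from collections import namedtuple

# namedtuple fields cannot be reassigned, which is the inconvenience
# the comment above refers to
PairTuple = namedtuple('PairTuple', ['source', 'target'])
frozen = PairTuple(source='a', target='b')
try:
    frozen.source = 'c'
    mutated = True
except AttributeError:
    mutated = False

assert mutated is False  # namedtuple attributes are read-only
```

A mutable class like `Pair` trades that safety for the ability to update `source`/`target` in place after construction.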
|
For rent at 40,000 THB/month, this fully furnished 2 bed, 2 bath unit offers 72.5 m2 of living space.
Located on the 19th floor of a modern high rise building, you will be minutes away from both Thonglor and Ekkamai.
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import unicode_literals
import codecs
import os
import re
from setuptools import find_packages
from setuptools import setup
def read(*parts):
path = os.path.join(os.path.dirname(__file__), *parts)
with codecs.open(path, encoding='utf-8') as fobj:
return fobj.read()
def find_version(*file_paths):
version_file = read(*file_paths)
version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
if version_match:
return version_match.group(1)
raise RuntimeError("Unable to find version string.")
install_requires = [
'gitpython >= 2',
'jinja2 >= 2',
'docopt >= 0.6.1, < 0.7',
'pyyaml >= 3.12',
'validators',
'requests',
'clint',
'prettytable'
]
setup(
name='docker-stack',
version=find_version("dockerstack", "__init__.py"),
    description='This tool is used to easily and dynamically generate config files for Docker.',
author='DSanchez',
author_email='dsanchez@kaliop.ca',
url='',
license='Apache License 2.0',
packages=find_packages(),
include_package_data=True,
install_requires=install_requires,
entry_points="""
[console_scripts]
docker-stack=dockerstack.main:main
""",
classifiers=[
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
'Intended Audience :: Developers',
'License :: OSI Approved :: Apache Software License',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
],
)
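The find_version helper above extracts the version by scanning the package's __init__.py for the __version__ assignment; a quick sketch of the regex it uses, applied to a sample file body:

```python
import re

# same pattern as find_version above, run against an in-memory sample
sample = "# -*- coding: utf-8 -*-\n__version__ = '0.9.2'\n"
match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", sample, re.M)

assert match is not None
assert match.group(1) == '0.9.2'  # the captured version string
```

The re.M flag makes `^` anchor at every line start, so the assignment is found even when it is not the first line of the file.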
|
Director: G. Messiaen, France, 2013, 55 min., French with English subtitles.
"You have the soul of an architect," Le Corbusier wrote to Hervé, and asked to visit him. From that time until his death in 1965, Le Corbusier did not cooperate with any other photographer. The London architect Denys Lasdun said that, thanks to Hervé, architecture could no longer be photographed as before. World-famous architects like Gropius, Nervi, Neutra, Niemeyer and Kenzo Tange also took advantage of his talent and his ability to capture the essence of architecture, which is not only about the building but also about emotions. The documentary by Gerrit Messiaen distills the life and work of one of the most important photographers of 20th-century architecture. It captures not only his personality but also describes Paris and the spirit of the era in which Hervé worked. Director Messiaen uses historical photos and audio in the film; the narrator is Hervé's wife, Judith Molnár.
Prior to the screening, there is a possibility of visiting the “BRNO and SUOMI” and “Echoes – 100 years of Finnish design and architecture” exhibitions located on the technical floor of the villa. For more details, click here.
Reservation is necessary, click here or e-mail info@tugendhat.eu.
|
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Drop duplicate indexes
Revision ID: 0864352e2168
Revises: 6a6eb0a95603
Create Date: 2018-08-15 20:27:08.429077
"""
from alembic import op
revision = "0864352e2168"
down_revision = "6a6eb0a95603"
def upgrade():
# This is an exact duplicate of the accounts_email_email_key index, minus the unique
# constraint.
op.drop_index("accounts_email_email_like", table_name="accounts_email")
# This is an exact duplicate of the journals_pkey index, minus the primary key
# constraint.
op.drop_index("journals_id_idx", table_name="journals")
# This is an exact duplicate of the trove_classifiers_classifier_key index, minus
# the unique constraint.
op.drop_index("trove_class_class_idx", table_name="trove_classifiers")
# This is an exact duplicate of the trove_classifiers_pkey index, minus the primary
# key constraint.
op.drop_index("trove_class_id_idx", table_name="trove_classifiers")
def downgrade():
op.create_index("trove_class_id_idx", "trove_classifiers", ["id"], unique=False)
op.create_index(
"trove_class_class_idx", "trove_classifiers", ["classifier"], unique=False
)
op.create_index("journals_id_idx", "journals", ["id"], unique=False)
op.create_index(
"accounts_email_email_like", "accounts_email", ["email"], unique=False
)
|
Colorado This is a State discussion forum and a no-bashing zone. This area is here for members who are looking for serious answers to serious questions or discussion, and are not looking for comments of a sarcastic or "bashing" nature. Please be aware that if you violate this policy you are in danger of losing your posting privileges on this website without warning. Calling out or posting another member's user name in the title of a thread, or a member's personal information (i.e. phone number or address), is prohibited.
Looking for Partner - Denver Front Range/Anywhere in Colorado.
Where to go, first trip to Colorado.
Performance Shops in no. NM or so. CO?
Looking to buy land in Colorado. What area for best hunting and riding?
|
#------------------------------------------------------------------------------
# Copyright 2014 Esri
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#------------------------------------------------------------------------------
# ==================================================
# UpdateRangeFans.py
# --------------------------------------------------
# Built for ArcGIS 10.1
# ==================================================
# IMPORTS ==========================================
import os, sys, math, traceback
import arcpy
from arcpy import env
# ARGUMENTS & LOCALS ===============================
inFeature = arcpy.GetParameterAsText(0)
weaponTable = arcpy.GetParameterAsText(1)
weaponField = arcpy.GetParameterAsText(2)
weaponModel = arcpy.GetParameterAsText(3)
maxRangeField = arcpy.GetParameterAsText(4)
maxRange = float(arcpy.GetParameterAsText(5)) #1000.0 # meters
geoBearing = float(arcpy.GetParameterAsText(6)) #45.0 # degrees
traversal = float(arcpy.GetParameterAsText(7)) #60.0 # degrees
outFeature = arcpy.GetParameterAsText(8)
deleteme = []
debug = True
leftAngle = 0.0 # degrees
rightAngle = 90.0 # degrees
# CONSTANTS ========================================
# FUNCTIONS ========================================
def Geo2Arithmetic(inAngle):
outAngle = -1.0
# force input angle into 0 to 360 range
if (inAngle > 360.0):
inAngle = math.fmod(inAngle,360.0)
# if 360, make it zero
if inAngle == 360.0: inAngle = 0.0
#0 to 90
if (inAngle >= 0.0 and inAngle <= 90.0):
outAngle = math.fabs(inAngle - 90.0)
# 90 to 360
if (inAngle > 90.0 and inAngle < 360.0):
outAngle = 360.0 - (inAngle - 90.0)
if debug == True: arcpy.AddMessage("G2A inAngle(" + str(inAngle) + "), outAngle(" + str(outAngle) + ")")
return outAngle
try:
currentOverwriteOutput = env.overwriteOutput
env.overwriteOutput = True
sr = arcpy.SpatialReference()
sr.factoryCode = 4326
sr.create()
GCS_WGS_1984 = sr
#GCS_WGS_1984 = arcpy.SpatialReference(r"WGS 1984")
wbsr = arcpy.SpatialReference()
wbsr.factoryCode = 3857
wbsr.create()
webMercator = wbsr
#webMercator = arcpy.SpatialReference(r"WGS 1984 Web Mercator (Auxiliary Sphere)")
env.overwriteOutput = True
scratch = env.scratchWorkspace
#Project doesn't like in_memory featureclasses, copy to scratch
copyInFeatures = os.path.join(scratch,"copyInFeatures")
arcpy.CopyFeatures_management(inFeature,copyInFeatures)
deleteme.append(copyInFeatures)
prjInFeature = os.path.join(scratch,"prjInFeature")
srInputPoints = arcpy.Describe(copyInFeatures).spatialReference
arcpy.AddMessage("Projecting input points to Web Mercator ...")
arcpy.Project_management(copyInFeatures,prjInFeature,webMercator)
deleteme.append(prjInFeature)
tempFans = os.path.join(scratch,"tempFans")
# put bearing into 0 - 360 range
geoBearing = math.fmod(geoBearing,360.0)
if debug == True: arcpy.AddMessage("geoBearing: " + str(geoBearing))
arithmeticBearing = Geo2Arithmetic(geoBearing) # need to convert from geographic angles (zero north clockwise) to arithmetic (zero east counterclockwise)
if debug == True: arcpy.AddMessage("arithmeticBearing: " + str(arithmeticBearing))
if traversal == 0.0:
traversal = 1.0 # modify so there is at least 1 degree of angle.
arcpy.AddWarning("Traversal is zero! Forcing traversal to 1.0 degrees.")
leftAngle = arithmeticBearing + (traversal / 2.0) # get left angle (arithmetic)
leftBearing = geoBearing - (traversal / 2.0) # get left bearing (geographic)
if leftBearing < 0.0: leftBearing = 360.0 + leftBearing
rightAngle = arithmeticBearing - (traversal / 2.0) # get right angle (arithmetic)
rightBearing = geoBearing + (traversal / 2.0) # get right bearing (geographic)
if rightBearing < 0.0: rightBearing = 360.0 + rightBearing
    if debug == True: arcpy.AddMessage("arithmetic left/right: " + str(leftAngle) + "/" + str(rightAngle))
if debug == True: arcpy.AddMessage("geo left/right: " + str(leftBearing) + "/" + str(rightBearing))
centerPoints = []
arcpy.AddMessage("Getting centers ....")
shapefieldname = arcpy.Describe(prjInFeature).ShapeFieldName
rows = arcpy.SearchCursor(prjInFeature)
for row in rows:
feat = row.getValue(shapefieldname)
pnt = feat.getPart()
centerPointX = pnt.X
centerPointY = pnt.Y
centerPoints.append([centerPointX,centerPointY])
del row
del rows
paths = []
arcpy.AddMessage("Creating paths ...")
for centerPoint in centerPoints:
path = []
centerPointX = centerPoint[0]
centerPointY = centerPoint[1]
path.append([centerPointX,centerPointY]) # add first point
step = -1.0 # step in degrees
rightAngleRelativeToLeft = leftAngle - traversal - 1
for d in range(int(leftAngle),int(rightAngleRelativeToLeft),int(step)):
x = centerPointX + (maxRange * math.cos(math.radians(d)))
y = centerPointY + (maxRange * math.sin(math.radians(d)))
path.append([x,y])
if debug == True: arcpy.AddMessage("d,x,y: " + str(d) + "," + str(x) + "," + str(y))
path.append([centerPointX,centerPointY]) # add last point
paths.append(path)
if debug == True: arcpy.AddMessage("Points in path: " + str(len(path)))
if debug == True: arcpy.AddMessage("paths: " + str(paths))
arcpy.AddMessage("Creating target feature class ...")
arcpy.CreateFeatureclass_management(os.path.dirname(tempFans),os.path.basename(tempFans),"Polygon","#","DISABLED","DISABLED",webMercator)
arcpy.AddField_management(tempFans,"Range","DOUBLE","#","#","#","Range (meters)")
arcpy.AddField_management(tempFans,"Bearing","DOUBLE","#","#","#","Bearing (degrees)")
arcpy.AddField_management(tempFans,"Traversal","DOUBLE","#","#","#","Traversal (degrees)")
arcpy.AddField_management(tempFans,"LeftAz","DOUBLE","#","#","#","Left Bearing (degrees)")
arcpy.AddField_management(tempFans,"RightAz","DOUBLE","#","#","#","Right Bearing (degrees)")
arcpy.AddField_management(tempFans,"Model","TEXT","#","#","#","Weapon Model")
deleteme.append(tempFans)
arcpy.AddMessage("Building " + str(len(paths)) + " fans ...")
cur = arcpy.InsertCursor(tempFans)
for outPath in paths:
lineArray = arcpy.Array()
for vertex in outPath:
pnt = arcpy.Point()
pnt.X = vertex[0]
pnt.Y = vertex[1]
lineArray.add(pnt)
del pnt
feat = cur.newRow()
feat.shape = lineArray
feat.Range = maxRange
feat.Bearing = geoBearing
feat.Traversal = traversal
feat.LeftAz = leftBearing
feat.RightAz = rightBearing
feat.Model = str(weaponModel)
cur.insertRow(feat)
del lineArray
del feat
del cur
arcpy.AddMessage("Projecting Range Fans back to " + str(srInputPoints.name))
arcpy.Project_management(tempFans,outFeature,srInputPoints)
arcpy.SetParameter(8,outFeature)
except arcpy.ExecuteError:
# Get the tool error messages
msgs = arcpy.GetMessages()
arcpy.AddError(msgs)
print(msgs)
except:
# Get the traceback object
tb = sys.exc_info()[2]
tbinfo = traceback.format_tb(tb)[0]
# Concatenate information together concerning the error into a message string
pymsg = "PYTHON ERRORS:\nTraceback info:\n" + tbinfo + "\nError Info:\n" + str(sys.exc_info()[1])
msgs = "\nArcPy ERRORS:\n" + arcpy.GetMessages() + "\n"
# Return python error messages for use in script tool or Python Window
arcpy.AddError(pymsg)
arcpy.AddError(msgs)
# Print Python error messages for use in Python / Python Window
print(pymsg + "\n")
print(msgs)
finally:
# cleanup intermediate datasets
if debug == True: arcpy.AddMessage("Removing intermediate datasets...")
for i in deleteme:
if debug == True: arcpy.AddMessage("Removing: " + str(i))
arcpy.Delete_management(i)
if debug == True: arcpy.AddMessage("Done")
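The Geo2Arithmetic conversion above is the heart of the fan geometry: geographic bearings run clockwise from north, while the trigonometry needs arithmetic angles running counterclockwise from east. The same logic can be exercised without arcpy (a standalone sketch, minus the debug logging):

```python
import math

def geo_to_arithmetic(angle):
    # geographic bearing (0 = north, clockwise) to arithmetic angle
    # (0 = east, counterclockwise), mirroring Geo2Arithmetic above
    angle = math.fmod(angle, 360.0)
    if angle == 360.0:
        angle = 0.0
    if 0.0 <= angle <= 90.0:
        return math.fabs(angle - 90.0)
    if 90.0 < angle < 360.0:
        return 360.0 - (angle - 90.0)
    return -1.0  # sentinel for out-of-range input, as in the original

assert geo_to_arithmetic(0.0) == 90.0     # north
assert geo_to_arithmetic(90.0) == 0.0     # east
assert geo_to_arithmetic(180.0) == 270.0  # south
assert geo_to_arithmetic(270.0) == 180.0  # west
```

The fan path loop then walks from leftAngle down to the right angle in one-degree steps, converting each arithmetic angle back to an x/y offset with cos/sin.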
|
Our customer's house had been on the market for a while without success.
After they approached STONEDRY Bespoke Basement Conversions, we transformed a tired, damp kitchen and an unsafe staircase into a stunning room that extends into the garden, taking in the breathtaking views. The walls and floor were fully membraned and insulated, and a new concrete floor was laid incorporating new drainage. STONEDRY provided the customer with a modern kitchen with stunning mood lighting; the French doors lead out to the new decking and AstroTurf grassed area.
|
from flask_babel import lazy_gettext
from aleph.model.role import Role
from aleph.model.alert import Alert
from aleph.model.entity import Entity
from aleph.model.entityset import EntitySet
from aleph.model.collection import Collection
from aleph.model.export import Export
class Event(object):
def __init__(self, title, template, params, link_to):
self.name = None
self.title = title
self.template = template
self.params = params
self.link_to = link_to
def to_dict(self):
return {
"name": self.name,
"title": self.title,
"template": self.template,
"params": {p: c.__name__.lower() for (p, c) in self.params.items()},
}
class EventsRegistry(type):
def __init__(cls, name, bases, dct):
cls.registry = {}
for ename, event in dct.items():
if isinstance(event, Event):
event.name = ename
cls.registry[ename] = event
super(EventsRegistry, cls).__init__(name, bases, dct)
class Events(object, metaclass=EventsRegistry):
@classmethod
def get(cls, name):
return cls.registry.get(name)
@classmethod
def names(cls):
return list(cls.registry.keys())
# CREATE COLLECTION
CREATE_COLLECTION = Event(
title=lazy_gettext("New datasets"),
template=lazy_gettext("{{actor}} created {{collection}}"),
params={"collection": Collection},
link_to="collection",
)
# UPLOAD DOCUMENT
INGEST_DOCUMENT = Event(
title=lazy_gettext("Document uploads"),
template=lazy_gettext("{{actor}} added {{document}} to {{collection}}"),
params={"document": Entity, "collection": Collection},
link_to="document",
)
# EXECUTE MAPPING
LOAD_MAPPING = Event(
title=lazy_gettext("Entities generated"),
template=lazy_gettext(
"{{actor}} generated entities from {{table}} in {{collection}}"
),
params={"table": Entity, "collection": Collection},
link_to="table",
)
# CREATE DIAGRAM
CREATE_DIAGRAM = Event(
title=lazy_gettext("New network diagram"),
template=lazy_gettext(
"{{actor}} began diagramming {{diagram}} in {{collection}}"
),
params={"diagram": EntitySet, "collection": Collection},
link_to="table",
)
# CREATE ENTITYSET
CREATE_ENTITYSET = Event(
title=lazy_gettext("New diagrams and lists"),
template=lazy_gettext("{{actor}} created {{entityset}} in {{collection}}"),
params={"entityset": EntitySet, "collection": Collection},
link_to="table",
)
# ALERT MATCH
MATCH_ALERT = Event(
title=lazy_gettext("Alert notifications"),
template=lazy_gettext("{{entity}} matches your alert for {{alert}}"), # noqa
params={"entity": Entity, "alert": Alert, "role": Role},
link_to="entity",
)
# GRANT COLLECTION
GRANT_COLLECTION = Event(
title=lazy_gettext("Dataset access change"),
template=lazy_gettext(
"{{actor}} gave {{role}} access to {{collection}}"
), # noqa
params={"collection": Collection, "role": Role},
link_to="collection",
)
# PUBLISH COLLECTION
PUBLISH_COLLECTION = Event(
title=lazy_gettext("Dataset published"),
template=lazy_gettext("{{actor}} published {{collection}}"),
params={"collection": Collection},
link_to="collection",
)
# EXPORT PUBLISHED
COMPLETE_EXPORT = Event(
title=lazy_gettext("Exports completed"),
template=lazy_gettext("{{export}} is ready for download"),
params={"export": Export},
link_to="export",
)
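The EventsRegistry metaclass above auto-collects every Event attribute into a lookup table at class-creation time, so Events.get() and Events.names() need no manual registration. The pattern in isolation (hypothetical Item/Catalog names, not part of aleph):

```python
class Item:
    """Stand-in for the Event class above."""
    def __init__(self, title):
        self.title = title
        self.name = None

class Registry(type):
    # scan the class body for Item instances and index them by attribute name,
    # also back-filling each item's .name, as EventsRegistry does
    def __init__(cls, name, bases, dct):
        cls.registry = {}
        for attr, value in dct.items():
            if isinstance(value, Item):
                value.name = attr
                cls.registry[attr] = value
        super().__init__(name, bases, dct)

class Catalog(metaclass=Registry):
    FIRST = Item("first event")
    SECOND = Item("second event")

assert sorted(Catalog.registry) == ["FIRST", "SECOND"]
assert Catalog.registry["FIRST"].name == "FIRST"
```

Because the metaclass's __init__ runs once per class definition, the registry is built exactly when the class body is evaluated, with no import-order bookkeeping.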
|
Because the original's too basic, let's play a game of Rock Paper Scissors Lizard Spock!
Its first use on The Big Bang Theory is to settle a debate about what to watch on TV between Sheldon and Raj - appropriately on the episode "The Lizard-Spock Expansion".
When used on the show, the boys almost always choose Spock, causing tie after tie and basically never ending the game.
A brilliant design for all Big Bang Theory fans to enjoy!
|
from operator import itemgetter
from nicedjango.utils.py import sliceable_as_chunks
from nicedjango.utils.py.chunk import as_chunks
from nicedjango.utils.py.iter import partition
from nicedjango.utils.py.operator import item_in
__all__ = ['partition_existing_pks', 'get_pks_queryset', 'queryset_as_chunks']
def partition_existing_pks(model, pk_index, values_list):
queryset = get_pks_queryset(model)
existing_pks = queryset_pk_in(queryset, map(itemgetter(pk_index), values_list))
return partition(item_in(pk_index, existing_pks), values_list)
def get_pks_queryset(model):
return model._default_manager.values_list(model._meta.pk.name, flat=True)
def queryset_pk_in(queryset, pks):
return queryset_in(queryset, queryset.model._meta.pk.name, pks)
def queryset_in(queryset, name, values):
filters = {'%s__in' % name: values}
return queryset.filter(**filters)
def queryset_in_list_getter(queryset, name):
def queryset_in_list_getter_(values):
return list(queryset_in(queryset, name, values))
return queryset_in_list_getter_
def queryset_as_chunks(queryset, chunksize=None, name=None, pks=None):
if name is not None and pks is not None:
values_getter = queryset_in_list_getter(queryset, name)
for chunk in as_chunks(pks, chunksize, None, values_getter):
yield chunk
else:
for chunk in sliceable_as_chunks(queryset, chunksize):
yield chunk
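partition_existing_pks above splits candidate rows into those whose primary keys already exist and those that are new. The imported partition helper presumably behaves like this generic two-way split (a sketch of the assumed behavior, not nicedjango's actual implementation):

```python
def partition(predicate, items):
    # split items into (matching, non-matching) lists, preserving order --
    # assumed contract of nicedjango.utils.py.iter.partition
    matching, rest = [], []
    for item in items:
        (matching if predicate(item) else rest).append(item)
    return matching, rest

existing, new = partition(lambda pk: pk in {1, 3}, [1, 2, 3, 4])
assert existing == [1, 3]
assert new == [2, 4]
```

In partition_existing_pks the predicate is item_in(pk_index, existing_pks), so the first list holds rows already in the database and the second holds rows still to be inserted.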
|
Hortenstine Ranch Company features some of the finest Texas ranches for sale and Oklahoma ranches for sale.
Check out our portfolio of featured ranches.
At Hortenstine Ranch Company, we’ve created a simple Texas map for searching ranches by Texas counties. Click on the Texas county on the map to the left to view listings in that county.
Have more questions about shopping, purchasing, or selling a ranch? Contact us here through our website or give our office a call.
Experience the difference with Hortenstine Ranch Company.
Have a question or comment about ranches?
Contact us here through our website.
Get the latest photos, videos, and listings right in your inbox. Sign-up today!
|
"""
Django settings for WebLair project.
Generated by 'django-admin startproject' using Django 1.10.5.
For more information on this file, see
https://docs.djangoproject.com/en/1.10/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.10/ref/settings/
"""
import os
import confidential # because for some reason django mixes sensitive info with non-sensitive info. 10/10 framework.
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = confidential.SECRET_KEY
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = [''] # Host here
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
SESSION_EXPIRE_AT_BROWSER_CLOSE = True
SECURE_SSL_REDIRECT = True
# Application definition
INSTALLED_APPS = [
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django_markup',
'MyWebLair',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'WebLair.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'WebLair.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.10/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': confidential.NAMEDB,
'USER': confidential.USERDB,
'PASSWORD': confidential.PASSWORDDB,
'HOST': confidential.HOSTDB,
'PORT': '3306',
}
}
# Password validation
# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.10/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.10/howto/static-files/
STATICFILES_FINDERS = [
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
]
'''
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),)
'''
STATIC_ROOT = "/home/AWilliams/WebLair/static"
STATIC_URL = '/static/'
|
We caught it early, it’s stage 1, they removed enough tissue to have clear margins, my lymph nodes came back negative for cancer. I have some complications from the surgery that I'll go into later but I have to keep my drain tube in for another week. I have some new drugs to try to help manage the pain better.
I meet with my oncologist next week to see what the next steps are. I’m really hoping I get to skip chemo and radiation and go straight for the drugs that cause menopause! Woo hoo, bring on the hot flashes. Ugh.
Love hearing all the positives!!! That's amazing to hear. PTL 🙏 Prayers answered!
Amazing news. So glad to hear.
|
from datetime import datetime
import re
import sys
from dateutil import parser
import pytz
import six
from . text import levenshtein
UTC_EPOCH = datetime(1970, 1, 1).replace(tzinfo=pytz.utc)
MAX_POSIX_TIMESTAMP = pow(2, 32) - 1
DAYS = {
'monday',
'tuesday',
'wednesday',
'thursday',
'friday',
'saturday',
'sunday'
}
DAYS_ABBR = [day[:3] for day in DAYS]
class timestamp_ms(object):
"""Build UTC timestamp in milliseconds
"""
    TIMEZONE_PARENTHESIS = re.compile(r'(.*)\(([a-zA-Z]+)[-+0-9:.]*\)$')
    TIMEZONE_SEPARATOR = re.compile(r'(.* .*)(\d\d)[.-](\d\d)$')
    QUOTED_TIMEZONE = re.compile(r"""(.*)['"]([\w:+-]+)['"]?$""")
    START_WITH_DAY_OF_WEEK = re.compile(r'^([a-zA-Z]*)[\s,](.*)')
@classmethod
def feeling_lucky(cls, obj):
"""Tries to convert given object to an UTC timestamp is ms, based
on its type.
"""
if isinstance(obj, six.string_types):
return cls.from_str(obj)
elif isinstance(obj, six.integer_types) and obj <= MAX_POSIX_TIMESTAMP:
return cls.from_posix_timestamp(obj)
elif isinstance(obj, datetime):
return cls.from_datetime(obj)
else:
raise ValueError(
u"Don't know how to get timestamp from '{}'".format(obj)
)
@classmethod
def fix_mispelled_day(cls, timestr):
"""fix mispelled day when written in english
:return: `None` if the day was not modified, the new date otherwise
"""
day_extraction = cls.START_WITH_DAY_OF_WEEK.match(timestr)
if day_extraction is not None:
day = day_extraction.group(1).lower()
if len(day) == 3:
dataset = DAYS_ABBR
else:
dataset = DAYS
if day not in dataset:
days = list(dataset)
days.sort(key=lambda e: levenshtein(day, e))
return days[0] + day_extraction.group(2)
@classmethod
def remove_parenthesis_around_tz(cls, timestr):
"""get rid of parenthesis around timezone: (GMT) => GMT
:return: the new string if parenthesis were found, `None` otherwise
"""
parenthesis = cls.TIMEZONE_PARENTHESIS.match(timestr)
if parenthesis is not None:
return parenthesis.group(1)
@classmethod
def remove_quotes_around_tz(cls, timestr):
"""Remove quotes (single and double) around timezone otherwise
`dateutil.parser.parse` raises
"""
quoted = cls.QUOTED_TIMEZONE.match(timestr)
if quoted is not None:
return quoted.group(1) + quoted.group(2)
@classmethod
def remove_timezone(cls, timestr):
"""Completely remove timezone information, if any.
:return: the new string if timezone was found, `None` otherwise
"""
if re.match(r".*[\-+]?\d{2}:\d{2}$", timestr):
return re.sub(
r"(.*)(\s[\+-]?\d\d:\d\d)$",
r"\1",
timestr
)
@classmethod
def fix_timezone_separator(cls, timestr):
"""Replace invalid timezone separator to prevent
`dateutil.parser.parse` to raise.
:return: the new string if invalid separators were found,
`None` otherwise
"""
tz_sep = cls.TIMEZONE_SEPARATOR.match(timestr)
if tz_sep is not None:
return tz_sep.group(1) + tz_sep.group(2) + ':' + tz_sep.group(3)
return timestr
@classmethod
def from_str(cls, timestr, shaked=False):
"""Use `dateutil` module to parse the give string
:param basestring timestr: string representing a date to parse
:param bool shaked: whether the input parameter been already
cleaned or not.
"""
orig = timestr
if not shaked:
timestr = cls.fix_timezone_separator(timestr)
try:
date = parser.parse(timestr)
except ValueError:
if not shaked:
shaked = False
for shaker in [
cls.fix_mispelled_day,
cls.remove_parenthesis_around_tz,
cls.remove_quotes_around_tz]:
new_timestr = shaker(timestr)
if new_timestr is not None:
timestr = new_timestr
shaked = True
if shaked:
try:
return cls.from_str(timestr, shaked=True)
except ValueError:
# raise ValueError below with proper message
pass
            msg = u"Unknown string format: {!r}".format(orig)
            # Python-2-only "raise x, y, tb" syntax replaced with
            # six.reraise, since this module already targets both via six
            six.reraise(ValueError, ValueError(msg), sys.exc_info()[2])
else:
try:
return cls.from_datetime(date)
except ValueError:
new_str = cls.remove_timezone(orig)
if new_str is not None:
return cls.from_str(new_str)
else:
raise
@classmethod
def from_ymd(cls, year, month=1, day=1):
return cls.from_datetime(datetime(
year=year, month=month, day=day
))
@classmethod
def from_posix_timestamp(cls, ts):
return cls.from_datetime(datetime.utcfromtimestamp(ts))
@classmethod
def from_datetime(cls, date):
if date.tzinfo is None:
date = date.replace(tzinfo=pytz.utc)
seconds = (date - UTC_EPOCH).total_seconds() * 1e3
micro_seconds = date.microsecond / 1e3
return int(seconds + micro_seconds)
@classmethod
def now(cls):
return cls.from_datetime(datetime.utcnow())
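from_datetime above computes milliseconds since the UTC epoch by subtracting UTC_EPOCH and adding the millisecond part of the microseconds. The arithmetic can be checked with a pytz-free sketch for naive UTC datetimes:

```python
from datetime import datetime
import calendar

def utc_ms(dt):
    # seconds since epoch * 1000, plus whole milliseconds from microseconds,
    # mirroring timestamp_ms.from_datetime for naive UTC input
    return calendar.timegm(dt.timetuple()) * 1000 + dt.microsecond // 1000

assert utc_ms(datetime(1970, 1, 1)) == 0
assert utc_ms(datetime(1970, 1, 1, 0, 0, 1, 500000)) == 1500
assert utc_ms(datetime(2000, 1, 1)) == 946684800000
```

Note that from_datetime assumes naive datetimes are UTC (it attaches pytz.utc when tzinfo is missing), which is why the sketch uses calendar.timegm rather than the local-time mktime.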
|
The North Face Mountain Festival Returns!
Fancy yourself as a festival goer? Good news: The North Face Mountain Festival is returning to Lauterbrunnen, Switzerland, and it doubles up as an action-packed European holiday. Pack your trail running shoes and a hip flask - this is very much an outdoor festival.
Running from 14-17 September, the event will take place at the foot of the iconic Eiger; a great backdrop to a weekend packed full of adventurous activities. Climbing, paragliding, running, canyon-jumping, hiking - there's a huge array of guided events on offer, but equally as attractive is the prospect of chilling in front of a campfire in this spectacular scenery. Base camp workshops include expedition cookery, adventure photography, and many more. There'll also be live music, inspirational talks from athletes and adventure films to watch.
Doubling in participants from 2016 with an extra night added to the itinerary, this is an event growing at a rapid pace, so don't miss out.
Tickets are available at www.thenorthface.com/innovation/discover/mountain-festival.html with early-bird prices starting from €199.
|
import os
import sys
from setuptools import setup
from setuptools.command.install_lib import install_lib as _install_lib
with open('requirements.txt') as f:
required = f.read().splitlines()
class install_lib(_install_lib):
def run(self):
from django.core.management.commands.compilemessages \
import compile_messages
os.chdir('bingo_tweets')
compile_messages(sys.stderr)
os.chdir("..")
setup(name='django-bingo-tweets',
description='Bingo Tweets',
long_description='Tweet if a new game '
'is created in a django-bingo instance',
author='Alexander Schier',
author_email='allo@laxu.de',
version='1.1.0',
url='https://github.com/allo-/django-bingo-tweet',
packages=['bingo_tweets'],
package_data={'bingo_tweets': ['locale/*/LC_MESSAGES/*.*']},
include_package_data=True,
      install_requires=required,
      # wire up the custom install_lib defined above, otherwise it never runs
      cmdclass={'install_lib': install_lib},
classifiers=[
'Framework :: Django',
'Topic :: Games/Entertainment :: Board Games',
'License :: OSI Approved :: GNU Affero General Public License v3',
'Operating System :: OS Independent',
'Programming Language :: Python'
]
)
|
It is believed that the name Palas de Rei comes from Pallatium regis ("royal palace"), as it was the residence of the Visigothic king Witiza in the early VIII century.
The council of Palas de Rei has a notable heritage, and it was one of the preferred places of residence for the Galician nobility. It is notable for its Castle of Pambre.
If you are traveling on the Camino de Santiago or are planning to do so soon, you should know that the distance separating Palas de Rei from Santiago de Compostela is 68 kilometers.
|
import json, os, Queue, datetime
from totalimpact import tiredis, backend, default_settings
from totalimpact import db, app
from totalimpact import item as item_module
from totalimpact.providers.provider import Provider, ProviderTimeout, ProviderFactory
from totalimpact import REDIS_UNITTEST_DATABASE_NUMBER
from nose.tools import raises, assert_equals, nottest
from test.utils import slow
from test import mocks
from test.utils import setup_postgres_for_unittests, teardown_postgres_for_unittests
class TestBackend():
def setUp(self):
self.config = None #placeholder
self.TEST_PROVIDER_CONFIG = [
("wikipedia", {})
]
self.d = None
        # set up the test redis database (a db number reserved for unit tests)
self.r = tiredis.from_url("redis://localhost:6379", db=REDIS_UNITTEST_DATABASE_NUMBER)
self.r.flushdb()
provider_queues = {}
providers = ProviderFactory.get_providers(self.TEST_PROVIDER_CONFIG)
for provider in providers:
provider_queues[provider.provider_name] = backend.PythonQueue(provider.provider_name+"_queue")
self.b = backend.Backend(
backend.RedisQueue("alias-unittest", self.r),
provider_queues,
[backend.PythonQueue("couch_queue")],
self.r)
self.fake_item = {
"_id": "1",
"type": "item",
"num_providers_still_updating":1,
"aliases":{"pmid":["111"]},
"biblio": {},
"metrics": {},
"last_modified": datetime.datetime(2013, 1, 1)
}
self.fake_aliases_dict = {"pmid":["222"]}
self.tiid = "abcd"
self.db = setup_postgres_for_unittests(db, app)
def teardown(self):
self.r.flushdb()
teardown_postgres_for_unittests(self.db)
class TestProviderWorker(TestBackend):
# warning: calls live provider right now
def test_add_to_couch_queue_if_nonzero(self):
test_couch_queue = backend.PythonQueue("test_couch_queue")
provider_worker = backend.ProviderWorker(mocks.ProviderMock("myfakeprovider"),
None, None, None, {"a": test_couch_queue}, None, self.r)
response = provider_worker.add_to_couch_queue_if_nonzero("aaatiid", #start fake tiid with "a" so in first couch queue
{"doi":["10.5061/dryad.3td2f"]},
"aliases",
"dummy")
# test that it put it on the queue
in_queue = test_couch_queue.pop()
expected = {'method_name': 'aliases', 'tiid': 'aaatiid', 'provider_name': 'myfakeprovider', 'analytics_credentials': 'dummy', 'new_content': {'doi': ['10.5061/dryad.3td2f']}}
assert_equals(in_queue, expected)
def test_add_to_couch_queue_if_nonzero_given_metrics(self):
test_couch_queue = backend.PythonQueue("test_couch_queue")
provider_worker = backend.ProviderWorker(mocks.ProviderMock("myfakeprovider"),
None, None, None, {"a": test_couch_queue}, None, self.r)
metrics_method_response = {'dryad:package_views': (361, 'http://dx.doi.org/10.5061/dryad.7898'),
'dryad:total_downloads': (176, 'http://dx.doi.org/10.5061/dryad.7898'),
'dryad:most_downloaded_file': (65, 'http://dx.doi.org/10.5061/dryad.7898')}
response = provider_worker.add_to_couch_queue_if_nonzero("aaatiid", #start fake tiid with "a" so in first couch queue
metrics_method_response,
"metrics",
"dummy")
# test that it put it on the queue
in_queue = test_couch_queue.pop()
expected = {'method_name': 'metrics', 'tiid': 'aaatiid', 'provider_name': 'myfakeprovider', 'analytics_credentials': 'dummy', 'new_content': metrics_method_response}
print in_queue
assert_equals(in_queue, expected)
# check nothing in redis since it had a value
response = self.r.get_num_providers_currently_updating("aaatiid")
assert_equals(response, 0)
def test_add_to_couch_queue_if_nonzero_given_empty_metrics_response(self):
test_couch_queue = backend.PythonQueue("test_couch_queue")
provider_worker = backend.ProviderWorker(mocks.ProviderMock("myfakeprovider"),
None, None, None, {"a": test_couch_queue}, None, self.r)
metrics_method_response = {}
response = provider_worker.add_to_couch_queue_if_nonzero("aaatiid", #start fake tiid with "a" so in first couch queue
metrics_method_response,
"metrics",
"dummy")
# test that it did not put it on the queue
in_queue = test_couch_queue.pop()
expected = None
assert_equals(in_queue, expected)
# check decremented in redis since the payload was null
        response = self.r.get_num_providers_currently_updating("aaatiid")
assert_equals(response, 0)
def test_wrapper(self):
def fake_callback(tiid, new_content, method_name, analytics_credentials, aliases_providers_run):
pass
response = backend.ProviderWorker.wrapper("123",
{'url': ['http://somewhere'], 'doi': ['10.123']},
mocks.ProviderMock("myfakeprovider"),
"aliases",
{}, # credentials
[], # aliases previously run
fake_callback)
print response
expected = {'url': ['http://somewhere'], 'doi': ['10.1', '10.123']}
assert_equals(response, expected)
class TestCouchWorker(TestBackend):
def test_update_item_with_new_aliases(self):
response = backend.CouchWorker.update_item_with_new_aliases(self.fake_aliases_dict, self.fake_item)
expected = {'metrics': {}, 'num_providers_still_updating': 1, 'biblio': {}, '_id': '1', 'type': 'item',
'aliases': {'pmid': ['222', '111']}, 'last_modified': datetime.datetime(2013, 1, 1, 0, 0)}
assert_equals(response, expected)
def test_update_item_with_new_aliases_using_dup_alias(self):
dup_alias_dict = self.fake_item["aliases"]
response = backend.CouchWorker.update_item_with_new_aliases(dup_alias_dict, self.fake_item)
expected = None # don't return the item if it already has all the aliases in it
assert_equals(response, expected)
def test_update_item_with_new_biblio(self):
new_biblio_dict = {"title":"A very good paper", "authors":"Smith, Lee, Khun"}
response = backend.CouchWorker.update_item_with_new_biblio(new_biblio_dict, self.fake_item)
expected = new_biblio_dict
assert_equals(response["biblio"], expected)
def test_update_item_with_new_biblio_existing_biblio(self):
item_with_some_biblio = self.fake_item
item_with_some_biblio["biblio"] = {"title":"Different title"}
new_biblio_dict = {"title":"A very good paper", "authors":"Smith, Lee, Khun"}
response = backend.CouchWorker.update_item_with_new_biblio(new_biblio_dict, item_with_some_biblio)
expected = {"authors": new_biblio_dict["authors"]}
assert_equals(response["biblio"], expected)
def test_update_item_with_new_metrics(self):
response = backend.CouchWorker.update_item_with_new_metrics("mendeley:groups", (3, "http://provenance"), self.fake_item)
expected = {'mendeley:groups': {'provenance_url': 'http://provenance', 'values': {'raw': 3, 'raw_history': {'2012-09-15T21:39:39.563710': 3}}}}
print response["metrics"]
assert_equals(response["metrics"]['mendeley:groups']["provenance_url"], 'http://provenance')
assert_equals(response["metrics"]['mendeley:groups']["values"]["raw"], 3)
assert_equals(response["metrics"]['mendeley:groups']["values"]["raw_history"].values(), [3])
# check year starts with 20
assert_equals(response["metrics"]['mendeley:groups']["values"]["raw_history"].keys()[0][0:2], "20")
def test_run_nothing_in_queue(self):
test_couch_queue = backend.PythonQueue("test_couch_queue")
couch_worker = backend.CouchWorker(test_couch_queue, self.r, self.d)
response = couch_worker.run()
expected = None
assert_equals(response, expected)
def test_run_aliases_in_queue(self):
test_couch_queue = backend.PythonQueue("test_couch_queue")
test_couch_queue_dict = {self.fake_item["_id"][0]:test_couch_queue}
provider_worker = backend.ProviderWorker(mocks.ProviderMock("myfakeprovider"),
None, None, None, test_couch_queue_dict, None, self.r)
response = provider_worker.add_to_couch_queue_if_nonzero(self.fake_item["_id"],
{"doi":["10.5061/dryad.3td2f"]},
"aliases",
"dummy")
# save basic item beforehand
item_obj = item_module.create_objects_from_item_doc(self.fake_item)
self.db.session.add(item_obj)
self.db.session.commit()
# run
couch_worker = backend.CouchWorker(test_couch_queue, self.r, self.d)
response = couch_worker.run()
expected = None
assert_equals(response, expected)
# check couch_queue has value after
response = item_module.get_item(self.fake_item["_id"], {}, self.r)
print response
expected = {'pmid': ['111'], 'doi': ['10.5061/dryad.3td2f']}
assert_equals(response["aliases"], expected)
# check has updated last_modified time
now = datetime.datetime.utcnow().isoformat()
assert_equals(response["last_modified"][0:10], now[0:10])
def test_run_metrics_in_queue(self):
test_couch_queue = backend.PythonQueue("test_couch_queue")
test_couch_queue_dict = {self.fake_item["_id"][0]:test_couch_queue}
provider_worker = backend.ProviderWorker(mocks.ProviderMock("myfakeprovider"),
None, None, None, test_couch_queue_dict, None, self.r)
metrics_method_response = {'dryad:package_views': (361, 'http://dx.doi.org/10.5061/dryad.7898'),
'dryad:total_downloads': (176, 'http://dx.doi.org/10.5061/dryad.7898'),
'dryad:most_downloaded_file': (65, 'http://dx.doi.org/10.5061/dryad.7898')}
response = provider_worker.add_to_couch_queue_if_nonzero(self.fake_item["_id"],
metrics_method_response,
"metrics",
"dummy")
# save basic item beforehand
item_obj = item_module.create_objects_from_item_doc(self.fake_item)
self.db.session.add(item_obj)
self.db.session.commit()
# run
couch_worker = backend.CouchWorker(test_couch_queue, self.r, self.d)
couch_worker.run()
# check couch_queue has value after
response = item_module.get_item(self.fake_item["_id"], {}, self.r)
print response
expected = 361
assert_equals(response["metrics"]['dryad:package_views']['values']["raw"], expected)
class TestBackendClass(TestBackend):
def test_decide_who_to_call_next_unknown(self):
aliases_dict = {"unknownnamespace":["111"]}
prev_aliases = []
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
        # nothing recognized yet; expect only the webpage alias lookup
expected = {'metrics': [], 'biblio': [], 'aliases': ['webpage']}
assert_equals(response, expected)
def test_decide_who_to_call_next_unknown_after_webpage(self):
aliases_dict = {"unknownnamespace":["111"]}
prev_aliases = ["webpage"]
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
        # webpage alias lookup already run; expect metrics and webpage biblio next
expected = {'metrics': ["wikipedia"], 'biblio': ["webpage"], 'aliases': []}
assert_equals(response, expected)
def test_decide_who_to_call_next_webpage_no_title(self):
aliases_dict = {"url":["http://a"]}
prev_aliases = []
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
        # expect all metrics and look up the biblio
expected = {'metrics': ['wikipedia'], 'biblio': ['webpage'], 'aliases': []}
assert_equals(response, expected)
def test_decide_who_to_call_next_webpage_with_title(self):
aliases_dict = {"url":["http://a"], "title":["A Great Paper"]}
prev_aliases = []
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
        # expect all metrics; biblio still comes from the webpage provider
expected = {'metrics': ['wikipedia'], 'biblio': ['webpage'], 'aliases': []}
assert_equals(response, expected)
def test_decide_who_to_call_next_slideshare_no_title(self):
aliases_dict = {"url":["http://abc.slideshare.net/def"]}
prev_aliases = []
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
# expect all metrics and look up the biblio
expected = {'metrics': ['wikipedia'], 'biblio': ['slideshare'], 'aliases': []}
assert_equals(response, expected)
def test_decide_who_to_call_next_dryad_no_url(self):
aliases_dict = {"doi":["10.5061/dryad.3td2f"]}
prev_aliases = ["altmetric_com"]
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
# expect need to resolve the dryad doi before can go get metrics
expected = {'metrics': [], 'biblio': [], 'aliases': ['dryad']}
assert_equals(response, expected)
def test_decide_who_to_call_next_dryad_with_url(self):
aliases_dict = { "doi":["10.5061/dryad.3td2f"],
"url":["http://dryadsomewhere"]}
prev_aliases = ["altmetric_com"]
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
# still need the dx.doi.org url
expected = {'metrics': [], 'biblio': [], 'aliases': ['dryad']}
assert_equals(response, expected)
def test_decide_who_to_call_next_dryad_with_doi_url(self):
aliases_dict = { "doi":["10.5061/dryad.3td2f"],
"url":["http://dx.doi.org/10.dryadsomewhere"]}
prev_aliases = ["altmetric_com", "dryad"]
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
# have url so now can go get all the metrics
expected = {'metrics': ['wikipedia'], 'biblio': ['dryad', 'mendeley'], 'aliases': []}
assert_equals(response, expected)
def test_decide_who_to_call_next_crossref_not_run(self):
aliases_dict = {"pmid":["111"]}
prev_aliases = ["mendeley"]
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
# expect need to get more aliases
expected = {'metrics': [], 'biblio': [], 'aliases': ['crossref']}
assert_equals(response, expected)
def test_decide_who_to_call_next_pmid_mendeley_not_run(self):
aliases_dict = {"pmid":["111"]}
prev_aliases = [""]
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
# expect need to get more aliases
expected = {'metrics': [], 'biblio': [], 'aliases': ['mendeley']}
assert_equals(response, expected)
def test_decide_who_to_call_next_pmid_prev_run(self):
aliases_dict = { "pmid":["1111"],
"url":["http://pubmedsomewhere"]}
prev_aliases = ["pubmed", "mendeley"]
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
        # expect the crossref alias lookup next
expected = {'metrics': [], 'biblio': [], 'aliases': ['crossref']}
assert_equals(response, expected)
def test_decide_who_to_call_next_doi_with_urls(self):
aliases_dict = { "doi":["10.234/345345"],
"url":["http://journalsomewhere"]}
prev_aliases = ["pubmed", "mendeley"]
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
# expect need to get metrics, biblio from crossref
expected = {'metrics': [], 'biblio': [], 'aliases': ['crossref']}
assert_equals(response, expected)
def test_decide_who_to_call_next_doi_crossref_prev_called(self):
aliases_dict = { "doi":["10.234/345345"],
"url":["http://journalsomewhere"]}
prev_aliases = ["crossref"]
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
        # crossref already run; expect the mendeley alias lookup next
expected = {'metrics': [], 'biblio': [], 'aliases': ['mendeley']}
assert_equals(response, expected)
def test_decide_who_to_call_next_doi_crossref_pubmed_mendeley_prev_called(self):
aliases_dict = { "doi":["10.234/345345"],
"url":["http://journalsomewhere"]}
prev_aliases = ["crossref", "pubmed", "mendeley", "altmetric_com"]
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
        # all alias providers run; expect metrics and biblio lookups
expected = {'metrics': ['wikipedia'], 'biblio': ['crossref', 'pubmed', 'mendeley', 'webpage'], 'aliases': []}
assert_equals(response, expected)
def test_decide_who_to_call_next_pmid_crossref_pubmed_prev_called(self):
aliases_dict = { "pmid":["1111"],
"url":["http://journalsomewhere"]}
prev_aliases = ["crossref", "pubmed", "mendeley", "altmetric_com"]
response = backend.Backend.sniffer(aliases_dict, prev_aliases, self.TEST_PROVIDER_CONFIG)
print response
        # all alias providers run; expect metrics and biblio lookups
expected = {'metrics': ['wikipedia'], 'biblio': ['crossref', 'pubmed', 'mendeley', 'webpage'], 'aliases': []}
assert_equals(response, expected)
|
Grave of Sarah “Sallie” Lancaster Hagler, Oakwood Cemetery, Oakwood, Texas, July 16, 2009. Photograph by Ed and Sue Anderson.
Clearly integral to the song’s contemporary appeal, the alto line to “The Last Words of Copernicus” is uncredited in the 1911 Original Sacred Harp in which it first appeared in its present-day form.
As Steel and others have shown, however, many of the alto parts Denson added to Original Sacred Harp were his arrangements or selections of alto parts published in earlier works. In particular, Denson often drew on the alto parts included in Wilson Marion Cooper’s 1902 revision of The Sacred Harp (commonly known as the “Cooper book”), and William Walker’s 1866 The Christian Harmony.
How Do the Alto Parts Line Up?
Miss Minnie Floyd’s alto part for “The Last Words of Copernicus,” from Cooper’s revision of The Sacred Harp, 1902.
J. L. White’s alto part for “The Last Words of Copernicus,” from his The Sacred Harp: Fifth Edition, 1909.
Blue notes are identical to the notes Floyd wrote for the “Cooper book.” Black notes are original to White’s version.
Some of these forty-seven matches seem likely to be coincidental. In other places, though, such as measures four through seven in the middle of the plain section (four through six in White’s version), and fifteen through eighteen toward the middle of the fuging section (fourteen through seventeen in White’s version), the similarities are striking, last for multiple measures, and feature unusual musical figures. It seems overwhelmingly likely, then, that White had access to a copy of Cooper’s Sacred Harp revision, and drew on Minnie Floyd’s alto part for “The Last Words of Copernicus” in fashioning his own alto part.
Seaborn McDaniel Denson’s alto part for the 1911 Original Sacred Harp, though presented as an addition to the three-part version of the song added to The Sacred Harp in 1870, appears to draw on both alto parts that preceded it in print.
Seaborn McDaniel Denson’s alto part for “The Last Words of Copernicus,” in James’s Original Sacred Harp, 1911.
Blue notes are identical to the “Cooper book” version, red notes are identical to the “White book” version, and purple notes are the same in all three alto parts. Black notes are original to Denson’s version.
Only eight notes in Denson’s alto part differ from both Floyd’s and White’s alto lines. In a couple of other measures Denson’s part is original as well.9 Just about everywhere else, however, Denson’s part largely follows Floyd’s, or White’s or both.
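The note-by-note comparison behind these color-coded figures can be sketched in a few lines of code. The short note sequences below are invented placeholders, not transcriptions of the actual alto lines; the sketch only illustrates how each note of a later part might be classified by which earlier part it matches:

```python
# Classify each note of a later alto part by which earlier part it matches.
# Notes are written in shape-note solfege; these sequences are invented
# examples, not the real alto lines discussed above.
def classify_notes(later, floyd, white):
    """Label each note as matching Floyd, White, both, or neither."""
    labels = []
    for d, f, w in zip(later, floyd, white):
        if d == f and d == w:
            labels.append("both")      # "purple" notes in the figures
        elif d == f:
            labels.append("floyd")     # "blue" notes
        elif d == w:
            labels.append("white")     # "red" notes
        else:
            labels.append("original")  # "black" notes
    return labels

denson = ["sol", "la", "mi", "fa", "sol"]
floyd  = ["sol", "la", "fa", "fa", "la"]
white  = ["sol", "fa", "mi", "fa", "sol"]

print(classify_notes(denson, floyd, white))
# → ['both', 'floyd', 'white', 'both', 'white']
```

Tallying labels in this way is how counts like the forty-seven matches mentioned above could be produced consistently across many songs.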
While White’s changes lowered the alto’s range, Denson’s shifted the part higher. Denson seems to have selected the higher of the two available figures from Floyd or White with few exceptions. And three of the eight notes original to Denson’s part replace the lowest note in the other two alto parts (the 7-mi) with a higher note (the 2-sol). In including high 5-sol notes but avoiding notes below the bottom space in the G clef, these changes are consistent with other alto parts in F major that Denson contributed to James’s Original Sacred Harp.
Comparing all three parts suggests that Denson likely used both Floyd’s and White’s alto parts as models. For much of the part’s plain section (measures three through eight), Denson follows Floyd almost exactly, copying note for note the unusual figure in measures seven through eight where the part soars upwards. Yet for much of the song’s fuging section, Denson’s alto imitates White’s, particularly at the start of the fuge (measures ten through twelve) and from the song’s alto treble duet nearly to its end (measures fifteen through twenty).10 While Denson may not have composed much of the Original Sacred Harp alto part for “The Last Words of Copernicus,” he does seem to have intentionally selected from the two previously composed alto parts available to him in putting the part together.
Who, then, composed the alto part to “The Last Words of Copernicus” that appears in The Sacred Harp, 1991 Edition?
S. M. Denson likely stitched together pieces of the two previously published alto parts by Minnie Floyd and J. L. White, making alterations or substituting his own inventions as he saw fit. Perhaps Minnie Floyd can take credit for portions of the plain section, though its final few notes owe more to J. L. White. White’s alto part seems to have been the model for the song’s fuging section, including the much loved entrance, though White’s fuging section cribbed extensively from Floyd’s.11 The alto entrance that Springsteen sampled is most effective in Denson’s version, which retains Sarah Lancaster’s half note pickup, lengthening the high 5-sol at the start of the fuge.
One answer is simply to credit S. M. Denson as the part’s arranger, piecing the alto part together from two or more earlier sources.
A better approach might be to continue to document the range of strategies Denson employed in devising 327 alto parts during the rush to bring Original Sacred Harp to publication between 1910 and 1911. How many other alto parts in the book (if any) did Denson stitch together in this manner? Are there alto parts where Denson stitched more of his own original writing together with music from one or more precursors?
|
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Country'
db.create_table('geoip_country', (
('code', self.gf('django.db.models.fields.CharField')(max_length=2, primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(unique=True, max_length=255)),
))
db.send_create_signal('geoip', ['Country'])
# Adding model 'Area'
db.create_table('geoip_area', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('country', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['geoip.Country'])),
('name', self.gf('django.db.models.fields.CharField')(max_length=255)),
))
db.send_create_signal('geoip', ['Area'])
# Adding unique constraint on 'Area', fields ['country', 'name']
db.create_unique('geoip_area', ['country_id', 'name'])
# Adding model 'City'
db.create_table('geoip_city', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('area', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['geoip.Area'])),
('name', self.gf('django.db.models.fields.CharField')(max_length=255)),
('latitude', self.gf('django.db.models.fields.DecimalField')(null=True, max_digits=9, decimal_places=6, blank=True)),
('longitude', self.gf('django.db.models.fields.DecimalField')(null=True, max_digits=9, decimal_places=6, blank=True)),
))
db.send_create_signal('geoip', ['City'])
# Adding unique constraint on 'City', fields ['area', 'name']
db.create_unique('geoip_city', ['area_id', 'name'])
# Adding model 'ISP'
db.create_table('geoip_isp', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=255)),
('country', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['geoip.Country'])),
))
db.send_create_signal('geoip', ['ISP'])
# Adding unique constraint on 'ISP', fields ['country', 'name']
db.create_unique('geoip_isp', ['country_id', 'name'])
# Adding model 'Provider'
db.create_table('geoip_provider', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(unique=True, max_length=255)),
('ranges', self.gf('django.db.models.fields.TextField')(null=True, blank=True)),
))
db.send_create_signal('geoip', ['Provider'])
# Adding M2M table for field isp on 'Provider'
db.create_table('geoip_provider_isp', (
('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)),
('provider', models.ForeignKey(orm['geoip.provider'], null=False)),
('isp', models.ForeignKey(orm['geoip.isp'], null=False))
))
db.create_unique('geoip_provider_isp', ['provider_id', 'isp_id'])
# Adding model 'Range'
db.create_table('geoip_range', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('start_ip', self.gf('django.db.models.fields.BigIntegerField')(db_index=True)),
('end_ip', self.gf('django.db.models.fields.BigIntegerField')(db_index=True)),
('country', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['geoip.Country'])),
('area', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['geoip.Area'], null=True)),
('city', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['geoip.City'], null=True)),
('isp', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['geoip.ISP'], null=True)),
('provider', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['geoip.Provider'], null=True)),
))
db.send_create_signal('geoip', ['Range'])
def backwards(self, orm):
# Removing unique constraint on 'ISP', fields ['country', 'name']
db.delete_unique('geoip_isp', ['country_id', 'name'])
# Removing unique constraint on 'City', fields ['area', 'name']
db.delete_unique('geoip_city', ['area_id', 'name'])
# Removing unique constraint on 'Area', fields ['country', 'name']
db.delete_unique('geoip_area', ['country_id', 'name'])
# Deleting model 'Country'
db.delete_table('geoip_country')
# Deleting model 'Area'
db.delete_table('geoip_area')
# Deleting model 'City'
db.delete_table('geoip_city')
# Deleting model 'ISP'
db.delete_table('geoip_isp')
# Deleting model 'Provider'
db.delete_table('geoip_provider')
# Removing M2M table for field isp on 'Provider'
db.delete_table('geoip_provider_isp')
# Deleting model 'Range'
db.delete_table('geoip_range')
models = {
'geoip.area': {
'Meta': {'unique_together': "(('country', 'name'),)", 'object_name': 'Area'},
'country': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['geoip.Country']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'geoip.city': {
'Meta': {'unique_together': "(('area', 'name'),)", 'object_name': 'City'},
'area': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['geoip.Area']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'latitude': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '9', 'decimal_places': '6', 'blank': 'True'}),
'longitude': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '9', 'decimal_places': '6', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'geoip.country': {
'Meta': {'object_name': 'Country'},
'code': ('django.db.models.fields.CharField', [], {'max_length': '2', 'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'})
},
'geoip.isp': {
'Meta': {'unique_together': "(('country', 'name'),)", 'object_name': 'ISP'},
'country': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['geoip.Country']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
'geoip.provider': {
'Meta': {'object_name': 'Provider'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'isp': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['geoip.ISP']", 'symmetrical': 'False', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'ranges': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'})
},
'geoip.range': {
'Meta': {'object_name': 'Range'},
'area': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['geoip.Area']", 'null': 'True'}),
'city': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['geoip.City']", 'null': 'True'}),
'country': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['geoip.Country']"}),
'end_ip': ('django.db.models.fields.BigIntegerField', [], {'db_index': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'isp': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['geoip.ISP']", 'null': 'True'}),
'provider': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['geoip.Provider']", 'null': 'True'}),
'start_ip': ('django.db.models.fields.BigIntegerField', [], {'db_index': 'True'})
}
}
complete_apps = ['geoip']
|
If you are here, you are most likely having a problem with honeybees. The bees have either invaded the walls or ceiling of your home or they have swarmed on or in a tree near your home. Don't worry, we can help you by providing professional swarm and established colony removal (Live Bee Removal) for the Carson City, Nevada area.
Colony removal is a fee-based service we provide. For pricing information, please call us at (775) 267-1451 to set up an inspection and receive a quote.
If you see a large clump of bees, you may have a swarm, not an established colony. Please see Swarm Capture.
Don't Place Yourself In Danger!
Removal from structures and outside locations should be done by a professional who has the equipment and knowledge necessary to pinpoint the location of the honey bees and properly remove them. Many people try to eradicate the bees on their own by spraying them with commercial pest spray. While these sprays will often kill some of the honey bees, they almost never kill them all. Spraying the honey bees will not remove the comb, honey, eggs and larvae that are present in the walls or ceilings of the structure. As many as 30,000 to 80,000 bees could be in your ceiling or walls. Don't put yourself in danger. Call us today.
What's in your walls? Look at the picture below: a small hole knocked in the siding.
Look at the size of the colony behind the siding! This colony had 4-5 panels of comb behind the siding, each one nearly 7' in length. That is a lot of honeybees!
Spraying Can Cause "A Dead Hive Condition"...Trouble!
A "dead hive" condition WILL cause you further problems, such as fermented (dripping) honey and rotting larvae within the wall or ceiling. The moisture and smell will attract other pests such as roaches, flies and rodents, which will feed on the remaining material. The moisture can also cause structural damage to your home.
Other honeybees will also try to establish new colonies if the void is not cleaned and sealed properly. If you have honeybees present in a structure and want to remove them, please call us.
We want to save them! The honey bee is one of our most precious resources and is responsible for much of the food that we eat. Honey bees commonly pollinate agricultural crops such as apples, oranges, cherries, melons, squash and almonds.
The honey bee has been under tremendous pressure over the past few years from problems such as Colony Collapse Disorder, the Varroa mite and the small hive beetle (SHB).
|
import os.path
import sys
import subprocess
import re
def _get_version_from_bzr_lib(path):
import bzrlib.tag, bzrlib.branch
fullpath = os.path.abspath(path)
if sys.platform == 'win32':
fullpath = fullpath.replace('\\', '/')
fullpath = '/' + fullpath
branch = bzrlib.branch.Branch.open('file://' + fullpath)
tags = bzrlib.tag.BasicTags(branch)
#print "Getting version information from bzr branch..."
branch.lock_read()
try:
history = branch.iter_merge_sorted_revisions(direction="reverse")
version = None
extra_version = []
for revid, depth, revno, end_of_merge in history:
for tag_name, tag_revid in tags.get_tag_dict().iteritems():
#print tag_revid, "<==>", revid
if tag_revid == revid:
#print "%s matches tag %s" % (revid, tag_name)
version = [int(s) for s in tag_name.split('.')]
## if the current revision does not match the last
## tag, we append current revno to the version
if tag_revid != branch.last_revision():
extra_version = [branch.revno()]
break
if version:
break
finally:
branch.unlock()
        assert version is not None
        return version + extra_version
def _get_version_from_bzr_command(path):
# get most recent tag first
most_recent_tag = None
proc = subprocess.Popen(['bzr', 'log', '--short'], stdout=subprocess.PIPE)
    reg = re.compile(r'{([0-9]+)\.([0-9]+)\.([0-9]+)}')
for line in proc.stdout:
result = reg.search(line)
if result is not None:
most_recent_tag = [int(result.group(1)), int(result.group(2)), int(result.group(3))]
break
proc.stdout.close()
proc.wait()
assert most_recent_tag is not None
# get most recent revno
most_recent_revno = None
proc = subprocess.Popen(['bzr', 'revno'], stdout=subprocess.PIPE)
most_recent_revno = int(proc.stdout.read().strip())
proc.wait()
version = most_recent_tag + [most_recent_revno]
return version
_version = None
def get_version_from_bzr(path):
    global _version
    if _version is not None:
        return _version
    try:
        import bzrlib.tag, bzrlib.branch
    except ImportError:
        # bzrlib is not importable; fall back to the command-line client
        _version = _get_version_from_bzr_command(path)
    else:
        _version = _get_version_from_bzr_lib(path)
    # cache the result so repeated calls do not re-read the branch
    return _version
def get_version(path=None):
if path is None:
path = os.path.dirname(__file__)
    try:
        return '.'.join([str(x) for x in get_version_from_bzr(path)])
    except (ImportError, OSError, AssertionError):
        # bzr is unavailable, the path is not a branch, or no version tag
        # was found.
        return 'unknown'
def generate_version_py(force=False, path=None):
"""generates pybindgen/version.py, unless it already exists"""
filename = os.path.join('pybindgen', 'version.py')
if not force and os.path.exists(filename):
return
if path is None:
path = os.path.dirname(__file__)
version = get_version_from_bzr(path)
dest = open(filename, 'w')
if isinstance(version, list):
dest.write('__version__ = %r\n' % (version,))
dest.write('"""[major, minor, micro, revno], '
'revno omitted in official releases"""\n')
else:
dest.write('__version__ = "%s"\n' % (version,))
dest.close()
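The tag-matching regex in `_get_version_from_bzr_command` can be exercised on its own; the sketch below re-implements the parsing standalone (the helper names here are illustrative, not part of the module):

```python
import re

# Mirrors the pattern used in _get_version_from_bzr_command: bzr's
# "log --short" output shows release tags as {major.minor.micro}.
TAG_RE = re.compile(r'\{(\d+)\.(\d+)\.(\d+)\}')

def parse_tag(line):
    """Return [major, minor, micro] if the line carries a version tag, else None."""
    match = TAG_RE.search(line)
    if match is None:
        return None
    return [int(group) for group in match.groups()]

def format_version(parts):
    """Join version components into a dotted string, as get_version() does."""
    return '.'.join(str(part) for part in parts)
```

When the working tree sits past the last tag, the revno is appended as a fourth component, which `format_version` handles transparently.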
|
"I just want my computers to work." I hear that all the time. You have a business to run. You can't be bothered worrying about the day-to-day computer issues. We heard you. And CMIT has spent over a decade honing our business model to align perfectly with yours. Simply stated, we worry about technology so you don't have to.
The key to our services and our success is not so much the tools and technologies we use, but our people. Sure, the tools we use are important and incorporate state-of-the-art technologies. But before the technology enters your doors, we do.
Each of our CMIT Solutions offices is individually owned and operated by small business owners just like you who know that developing trusting relationships with our customers is a big part of success. We understand the challenges you face in operating a small business, since we face many of those same challenges ourselves. So when you discuss your needs and desires, you're talking with someone who can understand your pain as well as your goals.
When you work with CMIT Solutions of West Metro Denver you have a single point of contact for all your technology needs. Me. You never have to wonder who is responsible for what or how to coordinate all your various IT tasks. As your technology advisor, I'll handle that. This is one of the benefits of working with a small, locally-owned business. But unlike small one- or two-man shops, our national network of 130 offices and 800 technology advisors and systems engineers, along with our strategic partnerships with Dell, Microsoft, Rackspace and other industry leaders allows us to give our clients a competitive edge not even available to many larger companies.
Although I am your point of contact, I look to our staff and partners for expert guidance in making the right technology decisions, and for providing superior support and maintenance. We hire only the best people, not only in skill, but in character. I wouldn't send someone to your office that I wouldn't want working in mine. We want to earn your trust. And that takes trustworthy people.
If you're looking for a technology advisor that you can count on to watch out for your best interests, I hope you'll consider us. We'll work hard to earn your trust every day.
|
# -*- Coding: utf-8 -*-
"""
* Copyright (C) 2010-2014 Loic BLOT, CNRS <http://www.unix-experience.fr/>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""
import sys, re, time, thread, subprocess, datetime
from pyPgSQL import PgSQL
import ZEyeUtil, zConfig
from DatabaseManager import ZEyeSQLMgr
class NetdiscoDataRefresher(ZEyeUtil.Thread):
def __init__(self):
""" 15 min between two netdisco updates """
self.sleepingTimer = 900
self.myName = "Netdisco Data Refresher"
ZEyeUtil.Thread.__init__(self)
def run(self):
self.launchMsg()
while True:
self.setRunning(True)
starttime = datetime.datetime.now()
self.launchRefresh()
"""
Because netdisco can be slow, modify the sleeping timer to
refresh datas faster
"""
			totaltime = (datetime.datetime.now() - starttime).total_seconds()
"""
If runtime exceed 10 mins, sleeping timer is 15 min - totaltime
But if there is less than 1 minute interval, let 1 min interval
"""
if totaltime > 600:
self.sleepingTimer = 900 - totaltime
if self.sleepingTimer < 60:
self.sleepingTimer = 60
self.setRunning(False)
def launchRefresh(self):
try:
cmd = "/usr/bin/perl /usr/local/bin/netdisco -C /usr/local/etc/netdisco/netdisco.conf -R"
subprocess.check_output(cmd,shell=True)
self.logInfo("Refresh OK, now nbtwalk")
			cmd = "/usr/bin/perl /usr/local/bin/netdisco -C /usr/local/etc/netdisco/netdisco.conf -w"
subprocess.check_output(cmd,shell=True)
self.logInfo("nbtwalk OK, now macwalk")
			cmd = "/usr/bin/perl /usr/local/bin/netdisco -C /usr/local/etc/netdisco/netdisco.conf -m"
subprocess.check_output(cmd,shell=True)
self.logInfo("macwalk OK, now arpwalk")
			cmd = "/usr/bin/perl /usr/local/bin/netdisco -C /usr/local/etc/netdisco/netdisco.conf -a"
subprocess.check_output(cmd,shell=True)
except Exception, e:
self.logCritical(e)
sys.exit(1);
class NetdiscoDataCleanup(ZEyeUtil.Thread):
def __init__(self):
""" 15 mins between two netdisco cleanups """
self.sleepingTimer = 900
self.myName = "Netdisco Data Cleanup"
ZEyeUtil.Thread.__init__(self)
def run(self):
self.launchMsg()
while True:
self.setRunning(True)
self.launchCleanup()
self.setRunning(False)
def launchCleanup(self):
try:
self.pgcon = PgSQL.connect(host=zConfig.pgHost,user=zConfig.pgUser,password=zConfig.pgPwd,database=zConfig.pgDB)
self.pgcursor = self.pgcon.cursor()
self.pgcursor.execute("DELETE FROM z_eye_switch_port_prises WHERE (ip,port) NOT IN (select host(ip),port from device_port)")
self.pgcon.commit()
except Exception, e:
self.logCritical(e)
sys.exit(1);
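The adaptive sleep interval computed in `NetdiscoDataRefresher.run` is easier to verify when factored into a pure function. This is a sketch, not part of the original module, assuming runtimes are expressed in seconds:

```python
def next_sleep_timer(runtime_seconds, base=900, threshold=600, minimum=60):
    """Shorten the base 15-minute interval when a refresh ran long.

    If the refresh exceeded `threshold` seconds, sleep only for the
    remainder of the base interval, but never less than `minimum` seconds.
    """
    if runtime_seconds > threshold:
        return max(base - runtime_seconds, minimum)
    return base
```

This makes the intent explicit: slow Netdisco runs shrink the idle time so data is refreshed roughly every 15 minutes of wall-clock time, with a one-minute floor.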
|
The Oncology Services Director Direct Mailing List provides easy and fast access to executives and doctors employed in specialty hospitals worldwide. The Mailing List of Oncology Services Directors contains the Physician's Name, Job Title, Hospital Name, Physical Address, Fax Number, Email Address, Telephone Number, Number of Employees, Primary Specialty of Physician, and Number of Years in Practice.
With our Oncology Services Director Email Address Lists, we give medical marketers the opportunity to keep their brand and services in the limelight and prosper in the ever-growing healthcare industry. We Triple-Verify our Oncology Services Director Email Database using telephone verification and company website information, and we are known as a top healthcare mailing list provider. With our highly accurate Oncology Services Director mailing list and email list, we make it possible to reach Oncology Services executives and medical professionals by both email and direct mail.
Opt for our healthcare industry email list to put thousands of key healthcare executives and decision makers at your fingertips. We always provide a verified, validated and updated Oncology Services Director Email Marketing List to generate many more qualified leads and clients for your business.
|
import os
from distutils.core import setup
from setuptools import find_packages
about_path = os.path.join(os.path.dirname(__file__), "para/about.py")
exec(compile(open(about_path).read(), about_path, "exec"))
def requirements(fname):
return [line.strip()
for line in open(os.path.join(os.path.dirname(__file__), fname))]
setup(
name=__name__, # noqa
version=__version__, # noqa
author=__author__, # noqa
author_email=__author_email__, # noqa
description=__description__, # noqa
url=__url__, # noqa
license=__license__, # noqa
packages=find_packages(),
scripts=[],
long_description=open("README.md").read(),
install_requires=[],
test_suite="nose.collector",
classifiers=[
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Environment :: Other Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Utilities",
"Topic :: Scientific/Engineering"
],
)
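The `exec(compile(...))` idiom above pulls `__version__`, `__author__`, and the other metadata out of `para/about.py` without importing the package, which could fail before its dependencies are installed. A standalone sketch of the same pattern, using an explicit namespace dict instead of the module globals:

```python
def load_metadata(path):
    """Execute a metadata file (e.g. about.py) and return its namespace."""
    namespace = {}
    with open(path) as f:
        # compile() attaches the filename so tracebacks point at about.py.
        exec(compile(f.read(), path, "exec"), namespace)
    return namespace
```

Collecting the names into a dict avoids polluting setup.py's own globals, at the cost of looking values up explicitly (e.g. `meta["__version__"]`).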
|
I watched this movie originally in 1976 when it first appeared on network television and found it very interesting. I went on to read Mr. Aldrin's book on which it was based. I feel Buzz Aldrin was and still is a man of courage for stepping up and addressing so many of the emotional issues he was challenged by at that time. I was only about 19 at that time but was impressed that so many people hold heroes to impossible ideals and standards without considering what so many may be going through emotionally or the price they paid to get there. I dealt with different emotional issues at that time but they were very, very real to me then and altered so many of my plans for my future. I did have to seek help later and then moved on from there. I talk to my children (Nathan and Ryann) about different forms of mental challenges (depression, anxiety, tension) and want to make sure I can guide them through so much of this. Buzz, like myself, was hit full in the face and had no idea of what he was going through until he got the right help. I feel that this is a difficult subject matter, but is important to work with on for so many people who suffer without feeling there is help available. Please watch this movie and learn from it.
|
import re
from functools import wraps
from django.conf import settings
from django.http import HttpResponse
class HttpResponseTooManyRequests(HttpResponse):
    # 429 ("Too Many Requests") is the semantically correct status; the
    # default here is 403, overridable via the RATELIMIT_STATUS_CODE setting.
    status_code = getattr(settings, 'RATELIMIT_STATUS_CODE', 403)
def _method_match(request, method=None):
if method is None:
method = ['GET', 'POST', 'PUT', 'DELETE', 'HEAD']
if not isinstance(method, list):
method = [method]
return request.method in method
_PERIODS = {
's': 1,
'm': 60,
'h': 60 * 60,
'd': 24 * 60 * 60,
}
rate_re = re.compile(r'(\d+)/(\d*)([smhd])')
def _split_rate(rate):
count, multi, period = rate_re.match(rate).groups()
count = int(count)
time = _PERIODS[period.lower()]
if multi:
time = time * int(multi)
return count, time
def get_class_by_path(path):
mod = __import__('.'.join(path.split('.')[:-1]))
components = path.split('.')
for comp in components[1:]:
mod = getattr(mod, comp)
return mod
# Allows you to override the CacheBackend in your settings.py
_backend_class = getattr(
settings,
'RATELIMIT_CACHE_BACKEND',
'brake.backends.cachebe.CacheBackend'
)
_backend = get_class_by_path(_backend_class)()
def ratelimit(
ip=True, use_request_path=False, block=False, method=None, field=None, rate='5/m', increment=None
):
def decorator(fn):
count, period = _split_rate(rate)
@wraps(fn)
def _wrapped(request, *args, **kw):
if use_request_path:
func_name = request.path
else:
func_name = fn.__name__
response = None
if _method_match(request, method):
limits = _backend.limit(
func_name, request, ip, field, count, period
)
if limits:
if block:
response = HttpResponseTooManyRequests()
request.limited = True
request.limits = limits
if response is None:
# If the response isn't HttpResponseTooManyRequests already, run
# the actual function to get the result.
response = fn(request, *args, **kw)
if not isinstance(response, HttpResponseTooManyRequests):
if _method_match(request, method) and \
(increment is None or (callable(increment) and increment(
request, response
))):
_backend.count(func_name, request, ip, field, period)
return response
return _wrapped
return decorator
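Rate strings such as '5/m' or '100/10s' are decoded by `_split_rate` above; here is the same parsing as a self-contained sketch (standalone names, not the module's), handy for sanity-checking what a given rate means in seconds:

```python
import re

PERIOD_SECONDS = {'s': 1, 'm': 60, 'h': 60 * 60, 'd': 24 * 60 * 60}
RATE_RE = re.compile(r'(\d+)/(\d*)([smhd])')

def split_rate(rate):
    """Return (count, seconds) for a rate string like '5/m' or '100/10s'."""
    count, multiplier, period = RATE_RE.match(rate).groups()
    seconds = PERIOD_SECONDS[period.lower()]
    if multiplier:
        # An optional multiplier scales the base period: '10s' -> 10 seconds.
        seconds *= int(multiplier)
    return int(count), seconds
```

So `@ratelimit(rate='100/10s', block=True)` would allow 100 requests per 10-second window before returning the too-many-requests response.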
|
The project is a single, four storey building consisting of a first storey open-air parking garage with several suites of mixed occupancy located above. The piping for the firefighting water supply runs through the open-air parking garage.
Sentence 3.2.7.9.(1) of the 2006 BC Building Code requires a 2 hour emergency power supply (from a generator) if the water supply for firefighting is dependent on electrical power supplied to the building. Sentences 3.2.5.9.(1) and 3.2.5.13.(1) require conformance with NFPA 14 (standpipe systems) and NFPA 13 (sprinkler systems), respectively. Protection from freezing is required by both standards and Article 3.2.5.18. The pipes are proposed to be insulated and heat traced.
The appellant contends that no emergency power is required as the water supply for firefighting is not dependent on a fire pump and is thus not dependent on electrical power. NFPA 13 and NFPA 14 require protection from freezing, which is satisfied by the pipe insulation and the heat tracing, and neither standard requires emergency power for the heat tracing.
The building official maintains that Sentence 3.2.5.18.(1) requires the fire protection system to be protected from freezing. NFPA 13 and NFPA 14 further specify the temperature at which the water supply must be maintained. Because the temperature of the water supply is maintained by heat tracing, it is dependent on electrical power as described in Sentence 3.2.7.9.(1) and an emergency power supply is required.
It is the determination of the Board that emergency power is not required for the heat tracing system. The Board relied on newer editions of NFPA 13 (2007 & 2013) that specifically reference listed heat tracing systems as an acceptable and reliable means of protection from freezing. Neither the Building Code nor NFPA require emergency power for these systems. The Building Code’s requirements for emergency power are not intended to address long term power failures.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import bisect
class Solution(object):
def is_subsequence(self, word, lookup):
# print lookup
if not word:
return True
left = -1
for c in word:
if c not in lookup:
return False
i = bisect.bisect_right(lookup[c], left)
if i < len(lookup[c]):
left = lookup[c][i]
else:
return False
return True
def numMatchingSubseq(self, S, words):
"""
:type S: str
:type words: List[str]
:rtype: int
"""
lookup = {}
for i, c in enumerate(S):
lookup[c] = lookup.get(c, [])
lookup[c].append(i)
count = 0
for word in words:
if self.is_subsequence(word, lookup):
count += 1
return count
class Solution2(object):
def validTicTacToe(self, board):
"""
:type board: List[str]
:rtype: bool
"""
num_x = 0
num_o = 0
for b in board:
for c in b:
if c == 'X':
num_x += 1
elif c == 'O':
num_o += 1
if not 0 <= num_x - num_o <= 1:
return False
if num_x < 3:
return True
rows = board
cols = [''.join([board[i][j] for i in range(3)]) for j in range(3)]
diags = [''.join([board[i][i] for i in range(3)]), ''.join([board[i][2-i] for i in range(3)])]
all_lines = rows + cols + diags
x_win = 'X'*3 in all_lines
o_win = 'O'*3 in all_lines
if x_win:
return num_x > num_o and not o_win
if o_win:
return num_x == num_o
return not (x_win and o_win)
def unit_test():
solution = Solution()
assert solution.numMatchingSubseq("dsahjpjauf", ["ahjpjau","ja","ahbwzgqnuk","tnmlanowax"]) == 2
assert solution.numMatchingSubseq("qlhxagxdqh", ["qlhxagxdq","qlhxagxdq","lhyiftwtut","yfzwraahab"]) == 2
assert solution.numMatchingSubseq("abcde", ["a", "bb", "acd", "ace"]) == 3
assert solution.numMatchingSubseq("abcdbcae", ["a", "bb", "acd", "ace", 'bb', 'bcbc']) == 6
solution = Solution2()
assert solution.validTicTacToe(["XXX","XOO","OO "]) == False
assert solution.validTicTacToe(["XOX","X O","X O"]) == True
assert solution.validTicTacToe(["O ", " ", " "]) == False
assert solution.validTicTacToe(["XOX", " X ", " "]) == False
assert solution.validTicTacToe(["XXX", " ", "OOO"]) == False
assert solution.validTicTacToe(["XOX", "O O", "XOX"]) == True
def test():
pass
if __name__ == '__main__':
unit_test()
test()
|
Bring a touch of Scandinavian chic to kids’ spaces with Danish design brand, CamCam and its charming Harlequin furniture range. Brought to you from Copenhagen, CamCam’s ethos is to create functional, imaginative and playful furniture for children with a focus on sustainability and eco-conscious design. Featuring the brand’s signature lattice detailing, the Harlequin collection includes a stylish cot, changing table, storage bench and kid’s chair, all available in a muted palette of grey, white and sky blue tones.
The perfect addition to nurseries, the Harlequin Baby Bed will create a beautiful focal point to accompany any interior style. The mattress base can be adjusted with three different height settings, and when your little one outgrows the cot, the side panel can be removed to transform it into a beautiful toddler bed. Crafted from 100% FSC-certified MDF and safety tested to meet European requirements, this is the perfect transitional sleep solution that will grow with your child. The Harlequin theme is carried over to CamCam’s Changing Table, which will make for a practical and stunning furniture piece. Featuring lattice door fronts and a framed tabletop to accommodate a changing mat, this minimalistic design can easily transition into a regular cabinet for living spaces and bedrooms. Choose from classic Scandi-style tones of White and Grey, to coordinate with the décor scheme of your choice.
PRICE: Harlequin Baby Bed: £799.00, Harlequin Storage Bench: £199.00, Harlequin Chair: £105.00, Harlequin Changing Table: £599.00.
|
import bpy
from bpy.props import *
from mathutils import Vector
from .... base_types.node import AnimationNode
from .... algorithms.mesh_generation.indices_utils import GridMeshIndices
from .... algorithms.mesh_generation.basic_shapes import gridVertices
from .... events import executionCodeChanged
class GridMeshNode(bpy.types.Node, AnimationNode):
bl_idname = "an_GridMeshNode"
bl_label = "Grid Mesh"
bl_width_default = 160
centerGrid = BoolProperty(name = "Center", default = True, update = executionCodeChanged)
def create(self):
self.newInput("Integer", "X Divisions", "xDivisions", value = 5, minValue = 2)
self.newInput("Integer", "Y Divisions", "yDivisions", value = 5, minValue = 2)
self.newInput("Float", "X Distance", "xDistance", value = 1)
self.newInput("Float", "Y Distance", "yDistance", value = 1)
self.newInput("Vector", "Offset", "offset", isDataModified = True)
self.newOutput("Vector List", "Vertices", "vertices")
self.newOutput("Edge Indices List", "Edge Indices", "edgeIndices")
self.newOutput("Polygon Indices List", "Polygon Indices", "polygonIndices")
def draw(self, layout):
layout.prop(self, "centerGrid")
def execute(self, xDivisions, yDivisions, xDistance, yDistance, offset):
xDivisions = max(xDivisions, 2)
yDivisions = max(yDivisions, 2)
offset = offset.copy()
offset.x -= (xDivisions - 1) * xDistance / 2 if self.centerGrid else 0
offset.y -= (yDivisions - 1) * yDistance / 2 if self.centerGrid else 0
vertices = gridVertices(xDivisions, yDivisions, xDistance, yDistance, offset) if self.outputs[0].isLinked else []
edgeIndices = GridMeshIndices.innerQuadEdges(xDivisions, yDivisions) if self.outputs[1].isLinked else []
polygonIndices = GridMeshIndices.innerQuadPolygons(xDivisions, yDivisions) if self.outputs[2].isLinked else []
return vertices, edgeIndices, polygonIndices
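For reference, the vertex layout the node expects from `gridVertices` can be sketched in plain Python. The actual Animation Nodes implementation lives in `algorithms.mesh_generation.basic_shapes` and may order components differently; this is an illustrative approximation:

```python
def grid_vertices(x_divisions, y_divisions, x_distance, y_distance,
                  offset=(0.0, 0.0, 0.0)):
    """Generate row-major (x, y, z) tuples for a regular grid of points."""
    ox, oy, oz = offset
    return [(ox + i * x_distance, oy + j * y_distance, oz)
            for j in range(y_divisions) for i in range(x_divisions)]
```

The centering logic in `execute` simply shifts this offset by half the grid extent, `(divisions - 1) * distance / 2`, on each axis.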
|
Sarah Harriman [Stiles], her husband Paul [Speedman] and their 6-year old daughter Hannah [Pixie Davies] arrive in Colombia eager and excited to start a new life, as Sarah prepares to take over management of the family’s paper mill from her father, Jordan [Rea]. Settling into a sprawling old mansion, the Harrimans are fascinated to learn about the village’s ancestral lore and traditions, including the tale of the Niños Santos, a group of children martyred by the Conquistadors centuries ago. Even today, the ghosts of the murdered innocents are blamed for any unexplained mischief in the town. The imaginative Hannah begins to explore her new home, wandering into the jungle in pursuit of playmates no one else can see. The child’s behavior continues to grow stranger until she falls inexplicably ill. As her parents struggle to find medical care in the remote town, the house is visited by a series of mysterious apparitions and suddenly Hannah vanishes. Sarah and Paul’s frantic search for their lost daughter plunges them into a shadowy supernatural world where they discover the shocking family secret that is at the heart of Hannah’s disappearance. To save their daughter, they will have to find a way to make amends for the sins of the past.
|
# -*- coding: utf-8 -*-
from openerp import fields, api, models
from openerp.osv import osv
from openerp.exceptions import Warning
from openerp.tools.safe_eval import safe_eval
from random import randint, shuffle
import datetime
import logging
import math
_logger = logging.getLogger(__name__)
evaluation_context = {
'datetime': datetime,
'context_today': datetime.datetime.now,
}
try:
from flanker.addresslib import address
def checkmail(mail):
return bool(address.validate_address(mail))
except ImportError:
_logger.warning('flanker not found, email validation disabled.')
def checkmail(mail):
return True
class team_user(models.Model):
_name = 'team.user'
@api.one
def _count_leads(self):
if self.id:
limit_date = datetime.datetime.now() - datetime.timedelta(days=30)
domain = [('user_id', '=', self.user_id.id),
('team_id', '=', self.team_id.id),
('assign_date', '>', fields.Datetime.to_string(limit_date))
]
self.leads_count = self.env['crm.lead'].search_count(domain)
else:
self.leads_count = 0
@api.one
def _get_percentage(self):
try:
self.percentage_leads = round(100 * self.leads_count / float(self.maximum_user_leads), 2)
except ZeroDivisionError:
self.percentage_leads = 0.0
@api.one
@api.constrains('team_user_domain')
def _assert_valid_domain(self):
try:
domain = safe_eval(self.team_user_domain or '[]', evaluation_context)
self.env['crm.lead'].search(domain)
except Exception:
raise Warning('The domain is incorrectly formatted')
team_id = fields.Many2one('crm.team', string='SaleTeam', required=True, oldname='section_id')
user_id = fields.Many2one('res.users', string='Saleman', required=True)
name = fields.Char(related='user_id.partner_id.display_name')
running = fields.Boolean(string='Running', default=True)
team_user_domain = fields.Char('Domain')
maximum_user_leads = fields.Integer('Leads Per Month')
leads_count = fields.Integer('Assigned Leads', compute='_count_leads', help='Assigned Leads this last month')
percentage_leads = fields.Float(compute='_get_percentage', string='Percentage leads')
@api.one
def toggle_active(self):
if isinstance(self.id, int): # if already saved
self.running = not self.running
class crm_team(osv.osv):
_inherit = "crm.team"
@api.one
def _count_leads(self):
if self.id:
self.leads_count = self.env['crm.lead'].search_count([('team_id', '=', self.id)])
else:
self.leads_count = 0
@api.one
def _assigned_leads(self):
limit_date = datetime.datetime.now() - datetime.timedelta(days=30)
domain = [('assign_date', '>=', fields.Datetime.to_string(limit_date)),
('team_id', '=', self.id),
('user_id', '!=', False)
]
self.assigned_leads = self.env['crm.lead'].search_count(domain)
@api.one
def _unassigned_leads(self):
self.unassigned_leads = self.env['crm.lead'].search_count(
[('team_id', '=', self.id), ('user_id', '=', False), ('assign_date', '=', False)]
)
@api.one
def _capacity(self):
self.capacity = sum(s.maximum_user_leads for s in self.team_user_ids)
@api.one
@api.constrains('score_team_domain')
def _assert_valid_domain(self):
try:
domain = safe_eval(self.score_team_domain or '[]', evaluation_context)
self.env['crm.lead'].search(domain)
except Exception:
raise Warning('The domain is incorrectly formatted')
ratio = fields.Float(string='Ratio')
score_team_domain = fields.Char('Domain')
leads_count = fields.Integer(compute='_count_leads')
assigned_leads = fields.Integer(compute='_assigned_leads')
unassigned_leads = fields.Integer(compute='_unassigned_leads')
capacity = fields.Integer(compute='_capacity')
team_user_ids = fields.One2many('team.user', 'team_id', string='Salesman')
min_for_assign = fields.Integer("Minimum score", help="Minimum score to be automatically assign (>=)", default=0, required=True)
@api.model
def direct_assign_leads(self, ids=[]):
ctx = dict(self._context, mail_notify_noemail=True)
self.with_context(ctx)._assign_leads()
@api.model
def dry_assign_leads(self, ids=[]):
self._assign_leads(dry=True)
@api.model
# Note: The dry mode assign only 50 leads per salesteam for speed issues
def assign_leads_to_salesteams(self, all_salesteams, dry=False):
shuffle(all_salesteams)
haslead = True
while haslead:
haslead = False
for salesteam in all_salesteams:
domain = safe_eval(salesteam['score_team_domain'], evaluation_context)
domain.extend([('team_id', '=', False), ('user_id', '=', False)])
domain.extend(['|', ('stage_id.on_change', '=', False), '&', ('stage_id.probability', '!=', 0), ('stage_id.probability', '!=', 100)])
leads = self.env["crm.lead"].search(domain, limit=50)
haslead = haslead or (len(leads) == 50 and not dry)
if not leads.exists():
continue
if dry:
for lead in leads:
values = {'lead_id': lead.id, 'team_id': salesteam['id']}
self.env['crm.leads.dry.run'].create(values)
else:
leads.write({'team_id': salesteam['id']})
# Erase fake/false email
spams = map(lambda x: x.id, filter(lambda x: x.email_from and not checkmail(x.email_from), leads))
if spams:
self.env["crm.lead"].browse(spams).write({'email_from': False})
# Merge duplicated lead
leads_done = set()
for lead in leads:
if lead.id not in leads_done:
leads_duplicated = lead.get_duplicated_leads(False)
if len(leads_duplicated) > 1:
self.env["crm.lead"].browse(leads_duplicated).merge_opportunity(False, False)
leads_done.update(leads_duplicated)
self._cr.commit()
self._cr.commit()
@api.model
def assign_leads_to_salesmen(self, all_team_users, dry=False):
users = []
for su in all_team_users:
if (su.maximum_user_leads - su.leads_count) <= 0:
continue
domain = safe_eval(su.team_user_domain or '[]', evaluation_context)
domain.extend([
('user_id', '=', False),
('assign_date', '=', False),
('score', '>=', su.team_id.min_for_assign)
])
            # assignment rhythm: hand out roughly 2 days' worth of leads
            # per pass when a lot of leads should be assigned
limit = int(math.ceil(su.maximum_user_leads / 15.0))
if dry:
dry_leads = self.env["crm.leads.dry.run"].search([('team_id', '=', su.team_id.id)])
domain.append(['id', 'in', dry_leads.mapped('lead_id.id')])
else:
domain.append(('team_id', '=', su.team_id.id))
leads = self.env["crm.lead"].search(domain, order='score desc', limit=limit * len(su.team_id.team_user_ids))
users.append({
"su": su,
"nbr": min(su.maximum_user_leads - su.leads_count, limit),
"leads": leads
})
assigned = set()
while users:
i = 0
# statistically select the user that should receive the next lead
idx = randint(0, reduce(lambda nbr, x: nbr + x['nbr'], users, 0) - 1)
while idx > users[i]['nbr']:
idx -= users[i]['nbr']
i += 1
user = users[i]
# Get the first unassigned leads available for this user
while user['leads'] and user['leads'][0] in assigned:
user['leads'] = user['leads'][1:]
if not user['leads']:
del users[i]
continue
# lead convert for this user
lead = user['leads'][0]
assigned.add(lead)
if dry:
values = {'lead_id': lead.id, 'team_id': user['su'].team_id.id, 'user_id': user['su'].user_id.id}
self.env['crm.leads.dry.run'].create(values)
else:
                # Assign date will be set by the write function
data = {'user_id': user['su'].user_id.id}
lead.write(data)
lead.convert_opportunity(lead.partner_id and lead.partner_id.id or None)
self._cr.commit()
user['nbr'] -= 1
if not user['nbr']:
del users[i]
@api.model
def _assign_leads(self, dry=False):
# Emptying the table
self._cr.execute("""
TRUNCATE TABLE crm_leads_dry_run;
""")
all_salesteams = self.search_read(fields=['score_team_domain'], domain=[('score_team_domain', '!=', False)])
all_team_users = self.env['team.user'].search([('running', '=', True)])
self.env['website.crm.score'].assign_scores_to_leads()
self.assign_leads_to_salesteams(all_salesteams, dry=dry)
# Compute score after assign to salesteam, because if a merge has been done, the score for leads is removed.
self.env['website.crm.score'].assign_scores_to_leads()
self.assign_leads_to_salesmen(all_team_users, dry=dry)
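The inner loop of `assign_leads_to_salesmen` selects the next salesman with probability proportional to remaining capacity (`nbr`). Extracted into a deterministic helper where the random draw becomes a parameter, the selection logic can be tested directly. This sketch reproduces the original `>` comparison, under which a draw equal to a user's capacity still selects that user:

```python
def pick_user_index(capacities, draw):
    """Return the index selected by a draw in [0, sum(capacities) - 1].

    Mirrors the loop in assign_leads_to_salesmen: walk the capacity
    list, subtracting each user's share until the draw falls within it.
    """
    i = 0
    while draw > capacities[i]:
        draw -= capacities[i]
        i += 1
    return i
```

In production the draw would be `randint(0, sum(capacities) - 1)`, so users with more remaining capacity receive proportionally more leads.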
|
This is an aged coffee from the west coast of India.
After harvesting, the coffee is stored in open-walled, wooden warehouses where the monsoon rains are allowed to “wash” over the coffee.
Water doesn’t actually come into contact with the aging beans, but the flavor is changed by the heat and humidity during the monsooning season.
This aging process gives the coffee a very spicy, musty aroma, similar to an Indonesian Sumatra.
Low acid coffee, a very syrupy body, with a spicy-buttery aroma.
|
# Licensed to the StackStorm, Inc ('StackStorm') under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from st2common.exceptions import db
from st2common.models.system.common import ResourceReference
def get_ref_from_model(model):
if model is None:
raise ValueError('Model has None value.')
model_id = getattr(model, 'id', None)
if model_id is None:
raise db.StackStormDBObjectMalformedError('model %s must contain id.' % str(model))
reference = {'id': str(model_id),
'name': getattr(model, 'name', None)}
return reference
def get_model_from_ref(db_api, reference):
if reference is None:
raise db.StackStormDBObjectNotFoundError('No reference supplied.')
model_id = reference.get('id', None)
if model_id is not None:
return db_api.get_by_id(model_id)
model_name = reference.get('name', None)
if model_name is None:
raise db.StackStormDBObjectNotFoundError('Both name and id are None.')
return db_api.get_by_name(model_name)
def get_model_by_resource_ref(db_api, ref):
"""
Retrieve a DB model based on the resource reference.
:param db_api: Class of the object to retrieve.
:type db_api: ``object``
:param ref: Resource reference.
:type ref: ``str``
:return: Retrieved object.
"""
ref_obj = ResourceReference.from_string_reference(ref=ref)
result = db_api.query(name=ref_obj.name, pack=ref_obj.pack).first()
return result
def get_resource_ref_from_model(model):
"""
Return a ResourceReference given db_model.
:param model: DB model that contains name and pack.
:type model: ``object``
:return: ResourceReference.
"""
try:
name = model.name
pack = model.pack
except AttributeError:
        raise Exception('Cannot build ResourceReference for model: %s. '
                        'Name or pack missing.' % (model,))
return ResourceReference(name=name, pack=pack)
def get_str_resource_ref_from_model(model):
"""
Return a resource reference as string given db_model.
:param model: DB model that contains name and pack.
:type model: ``object``
:return: String representation of ResourceReference.
"""
return get_resource_ref_from_model(model).ref
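The helpers above rely on `ResourceReference` round-tripping between a `pack.name` string and its components. A minimal stand-in sketch of that behaviour follows; the real class is `st2common.models.system.common.ResourceReference`, and the split-on-first-dot rule here is an assumption for illustration:

```python
class Ref(object):
    """Minimal stand-in for ResourceReference: 'pack.name' <-> parts."""

    def __init__(self, pack, name):
        self.pack = pack
        self.name = name
        self.ref = '%s.%s' % (pack, name)

    @classmethod
    def from_string_reference(cls, ref):
        # Only the first dot separates pack from name, so names may
        # themselves contain dots.
        pack, _, name = ref.partition('.')
        return cls(pack, name)
```

With this shape, `get_model_by_resource_ref` can query on `(name, pack)` and `get_str_resource_ref_from_model` can return the joined `ref` string.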
|
It is no secret that the romantic comedy genre is making a major comeback on the Hollywood circuit, and the Egyptian cinema industry isn’t far behind on trend! Writer/director Othman Abou Laban says audiences shouldn’t expect a cheesy rom-com: this film has heart and a big twist! Hint: the main takeaway is how love can truly be blind. We caught up with Ahmed Hatem, Ola Roshdy, and Abou Laban to find out more about this Valentine’s Day must-see.
CWM: Congratulations on the upcoming film! What do you think makes Youssef – the character you play – so special?
AH: Well, it starts out with him just being a regular guy like any other. He’s a bit lost in the love department, unable to figure out or make the right choice in finding that special someone. His life then gets turned upside down when an accident costs him his sight, leaving him blind. This naturally shifts all his priorities and changes his perspective completely.
I enjoy the romance genre, and I enjoy romantic comedies even more. The nature of the film has a lot of lightness; Youssef tries as hard as possible not to handle the issue too dramatically. The whole process of him learning how to cope with his new condition is funny, and how he interacts with others provides a lot of opportunities for comedy.
Obviously this was a huge undertaking for you as an actor, how did you prepare?
I couldn’t study people who have been blind since birth, because that is a totally different experience. I had to bear in mind that this is someone who is used to being able to see, and has to learn from scratch how to feel out his environment and assess obstacles. The idea was how to balance his newfound struggle with his attempts at living as if nothing is different about his life. The challenge was mainly fighting against my natural inclinations and defenses as a seeing person — not being startled, for example, by someone suddenly moving their hand towards my face!
Were there any particularly challenging scenes for you to film?
Absolutely! There is a scene where I’m at an event and the person accompanying me gets distracted greeting people, leaving me to fend for myself in a loud and crowded place. Youssef gets disoriented, bumping into people, and eventually falls into the pool and sinks to the bottom. This was an extremely difficult scene to film, because of working in the water and all the problems that arise from making it look realistic. I had to do the take over 20 times in a row: tripping, falling, sinking in the water without creating a lot of bubbles, over and over and over … and I had to wear weights to help me sink fast!
What message do you hope audiences will leave the cinema with?
That love involves sacrifice. I think we need more of that realization in life; we need love, and we also need sacrifice. I believe suffering and sacrifice aren’t always negative things; they can be beautiful. They can make you feel gratitude for everything you have. Love isn’t just hugs and kisses; part of it can be suffering, and that can be rewarding.
Since this is our Love Issue, what kind of relationship or marriage advice would you give lovebirds?
Different types of advice for the singles than the marrieds! If you’re just starting a relationship, the best advice I can give is to know yourself first before entering into a serious relationship. If you don’t know yourself, you’ll be making choices for the wrong reason, whether it’s society or superficial appearances or what you see on social media. As for married couples, I would say avoid stubbornness – no matter how hard it is. I think stubbornness can kill any relationship, even between friends. Try to live a balanced life between work, family, friends, and personal development. We don’t have to do everything with our partners.
Slumdog Millionaire. It has it all! Great story, fantastic music, amazing acting.
|
"""Contain the socket handler for players"""
from utils import check_types
from player import Player
from game import Game
from websocket import WebSocketHandler
import errcode
class PlayerWs(WebSocketHandler):
"""The socket handler for websocket"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.player = None
self.callable_from_json = {"login": self.login,
"logout": self.logout,
"updatePos": self.update_pos,
"choice": self.choice,
"getAllUsers": self.get_all_users}
def get_all_users(self):
"""send to player all players connected"""
players = Game().get_players()
msg = {'object': "usersConnected",
'tidu': [p.name for p in players if p.team == 'tidu'],
'tizef': [p.name for p in players if p.team == 'tizef']}
self.send(msg)
def reset(self):
"""Reset the socket (recreate a new Player...)"""
self.player = None
self.logged = False
@check_types
    def login(self, username: str, team: str):
        """Login player and check that username and team are valid"""
        if self.player:
            self.send(errcode.USERNAME_ALREADY_SET)
            return
        self.player = Player(username, team, self)
        if not Game().add_player(self.player):
            # Username taken: notify the client and roll back instead of
            # marking the socket as logged in.
            self.send(errcode.USERNAME_ALREADY_IN_USE)
            self.reset()
            return
        self.logged = True
def logout(self):
"""logout player and remove it from game"""
self.close()
@check_types
def update_pos(self, lat: float, lng: float):
"""update the player position"""
self.player.position = (lat, lng)
def choice(self, choice: str):
"""set choice for battle"""
Game().set_player_choice(self.player, choice)
    def on_close(self):
        # Guard against a socket that closes before any login succeeded:
        # self.player is None in that case.
        if self.player:
            print("player {} of team {} is exiting...".format(self.player.name, self.player.team))
            self.logout()
            Game().remove_player(self.player)
        self.reset()
def send(self, msg):
super().send(msg)
if self.player:
print('Send to {} of team {} : {}'.format(self.player.name, self.player.team, msg))
def on_message(self, msg):
if self.player:
print('Send by {} of team {} : {}'.format(self.player.name, self.player.team, msg))
super().on_message(msg)
|
Candidates can apply online at the official website opsc.gov.in.
New Delhi: Odisha Public Service Commission (OPSC) will begin online recruitment for Assistant Agriculture Officer in Class II (Group B) of Odisha Agriculture and Food Production Service under Agriculture and Farmers Empowerment Department. The online application form will be available till 18 December 2017. Candidates can start submitting their application from 17 November 2017. Applicants should note that this is a special recruitment drive for candidates belonging to Scheduled Tribes (ST) only. A total of 65 vacancies are available for the post.
In order to be eligible for the recruitment, candidates must hold a graduate degree in agriculture and must be in the age group of 21-32 years. Applicants are exempted from payment of application fees.
OPSC will select candidates on the basis of career marks and interview. A total of 130 candidates will be shortlisted for the interview, and the career-wise weightage will be class 10 (25%), class 12 (25%) and graduation (50%).
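The stated weightage works out to a simple weighted average. The sketch below (with hypothetical marks, purely for illustration) shows how a candidate's career score would be computed:

```python
# Hypothetical illustration of the stated career-marks weightage:
# class 10 (25%), class 12 (25%), graduation (50%).

def career_score(class10_pct, class12_pct, graduation_pct):
    """Weighted career score out of 100."""
    return 0.25 * class10_pct + 0.25 * class12_pct + 0.50 * graduation_pct

# Example: 80% in class 10, 70% in class 12, 90% in graduation
print(career_score(80, 70, 90))  # 82.5
```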
Details of the recruitment can be found at the official website of the Commission at opsc.gov.in.
|
#!/usr/bin/env python3
# -*- coding: utf8 -*-
import sys, argparse, random
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="Pattern generator")
parser.add_argument("infile", nargs='?', type=argparse.FileType('rb'), default=sys.stdin.buffer)
parser.add_argument("-p", type=int, default=10, help="number of patterns to generate")
parser.add_argument("-P", action='store_true', help="transform patterns in infile")
parser.add_argument("-m", type=int, default=None, help="minimum length of a pattern")
parser.add_argument("-M", type=int, default=None, help="maximum length of a pattern")
parser.add_argument("-n", type=int, default=None, help="minimum noise")
parser.add_argument("-N", type=int, default=None, help="maximum noise")
parser.add_argument("-r", action='store_true', help="rotate to create circular patterns")
parser.add_argument("-s", type=int, default=None, help="random seed")
args = parser.parse_args()
random.seed(args.s)
# Sanity checks
m = args.m if args.m is not None else args.M or 20
M = args.M if args.M is not None else m
n = args.n if args.n is not None else 0
N = args.N if args.N is not None else n
if m < 1 or m > M: raise Exception("Invalid min and max pattern lengths")
if n < 0 or n > N: raise Exception("Invalid min and max noise")
    # Get a random text substring of length m..M containing no newlines or NULs
    def randsubstring(T):
        while True:
            i = random.randrange(0, len(T) - M + 1)
            j = i + random.randint(m, M)
            P = T[i:j]
            if all(x not in b'\n\0' for x in P):
                return P
T = args.infile.read()
if args.P is True:
# Transform existing patterns
gen = (ln for ln in T.split(b'\n') if ln)
else:
# Generate new patterns
if len(T) < M: raise Exception("Too short infile")
gen = (randsubstring(T) for i in range(args.p))
# Character pool for noise
pool = list(set(T) - set(b'\n\r\t\0')) if N > 0 else []
def transform(P):
# Simulate noise
for noise in range(random.randint(n, N)):
k = random.randrange(0, len(P))
P = P[:k] + bytes([random.choice(pool)]) + P[k+1:]
# Rotate
if args.r is True:
k = random.randrange(0, len(P))
P = P[k:] + P[:k]
return P
for P in gen:
sys.stdout.buffer.write(transform(P) + b'\n')
sys.stdout.buffer.flush()
|
NoseFrida the SnotSucker, your go-to natural baby booger buster now comes in a keep-it-all-in-one-place travel case.
Let's face it, babies can't blow their nose. Sometimes, they need a little help from mom or dad. The NoseFrida is the very useful and 100% sanitary way to help your little ones clear out that stuffy or runny nose. Promise, if you just try it once, you'll never be able to live without it. Now with an included travel case for snot-sucking on the run!
|
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=invalid-name
"""Save and restore variables."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import os.path
import time
import numpy as np
import six
from google.protobuf import text_format
from tensorflow.python.client import graph_util
from tensorflow.python.client import session
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import constant_op
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import gen_array_ops
from tensorflow.python.ops import gen_io_ops
from tensorflow.python.ops import io_ops
from tensorflow.python.ops import state_ops
from tensorflow.python.ops import variables
from tensorflow.python.platform import gfile
from tensorflow.python.platform import logging
from tensorflow.python.training import saver_pb2
from tensorflow.python.training import training_util
from tensorflow.python.training.checkpoint_state_pb2 import CheckpointState
from tensorflow.python.util import compat
class BaseSaverBuilder(object):
"""Base class for Savers.
Can be extended to create different Ops.
"""
class VarToSave(object):
"""Class used to describe variable slices that need to be saved."""
def __init__(self, var, slice_spec, name):
self.var = var
self.slice_spec = slice_spec
self.name = name
def __init__(self):
pass
def save_op(self, filename_tensor, vars_to_save):
"""Create an Op to save 'vars_to_save'.
This is intended to be overridden by subclasses that want to generate
different Ops.
Args:
filename_tensor: String Tensor.
vars_to_save: a list of BaseSaverBuilder.VarToSave objects.
Returns:
      An Operation that saves the variables.
"""
return io_ops._save(
filename=filename_tensor,
tensor_names=[vs.name for vs in vars_to_save],
tensors=[vs.var for vs in vars_to_save],
tensor_slices=[vs.slice_spec for vs in vars_to_save])
def restore_op(self, filename_tensor, var_to_save, preferred_shard):
"""Create an Op to read the variable 'var_to_save'.
This is intended to be overridden by subclasses that want to generate
different Ops.
Args:
filename_tensor: String Tensor.
var_to_save: a BaseSaverBuilder.VarToSave object.
preferred_shard: Int. Shard to open first when loading a sharded file.
Returns:
A Tensor resulting from reading 'var_to_save' from 'filename'.
"""
return io_ops._restore_slice(
filename_tensor,
var_to_save.name,
var_to_save.slice_spec,
var_to_save.var.dtype,
preferred_shard=preferred_shard)
def sharded_filename(self, filename_tensor, shard, num_shards):
"""Append sharding information to a filename.
Args:
filename_tensor: a string tensor.
shard: integer. The shard for the filename.
num_shards: an int Tensor for the number of shards.
Returns:
A string tensor.
"""
return gen_io_ops._sharded_filename(filename_tensor, shard, num_shards)
def _AddSaveOps(self, filename_tensor, vars_to_save):
"""Add ops to save variables that are on the same shard.
Args:
filename_tensor: String Tensor.
vars_to_save: a list of _VarToSave objects.
Returns:
A tensor with the filename used to save.
"""
save = self.save_op(filename_tensor, vars_to_save)
return control_flow_ops.with_dependencies([save], filename_tensor)
def _AddShardedSaveOps(self, filename_tensor, per_device):
"""Add ops to save the params per shard.
Args:
filename_tensor: String Tensor.
per_device: A list of (device, BaseSaverBuilder.VarToSave) pairs, as
returned by _GroupByDevices().
Returns:
An op to save the variables.
"""
num_shards = len(per_device)
sharded_saves = []
num_shards_tensor = constant_op.constant(num_shards, name="num_shards")
for shard, (device, vars_to_save) in enumerate(per_device):
with ops.device(device):
sharded_filename = self.sharded_filename(
filename_tensor, shard, num_shards_tensor)
sharded_saves.append(self._AddSaveOps(sharded_filename, vars_to_save))
# Return the sharded name for the save path.
with ops.control_dependencies([x.op for x in sharded_saves]):
return gen_io_ops._sharded_filespec(filename_tensor, num_shards_tensor)
def _AddRestoreOps(self,
filename_tensor,
vars_to_save,
restore_sequentially,
reshape,
preferred_shard=-1,
name="restore_all"):
"""Add operations to restore vars_to_save.
Args:
filename_tensor: Tensor for the path of the file to load.
vars_to_save: a list of _VarToSave objects.
restore_sequentially: True if we want to restore variables sequentially
within a shard.
reshape: True if we want to reshape loaded tensors to the shape of
the corresponding variable.
preferred_shard: Shard to open first when loading a sharded file.
name: Name for the returned op.
Returns:
An Operation that restores the variables.
"""
assign_ops = []
for vs in vars_to_save:
v = vs.var
restore_control_inputs = assign_ops[-1:] if restore_sequentially else []
# Load and optionally reshape on the CPU, as string tensors are not
# available on the GPU.
# TODO(touts): Re-enable restore on GPU when we can support annotating
# string tensors as "HostMemory" inputs.
with ops.device(graph_util.set_cpu0(v.device) if v.device else None):
with ops.control_dependencies(restore_control_inputs):
values = self.restore_op(filename_tensor, vs, preferred_shard)
if reshape:
shape = v.get_shape()
if not shape.is_fully_defined():
shape = array_ops.shape(v)
values = array_ops.reshape(values, shape)
# Assign on the same device as the variable.
with ops.device(v.device):
assign_ops.append(state_ops.assign(v,
values,
validate_shape=not reshape))
# Create a Noop that has control dependencies from all the updates.
return control_flow_ops.group(*assign_ops, name=name)
def _AddShardedRestoreOps(self, filename_tensor, per_device,
restore_sequentially, reshape):
"""Add Ops to save variables from multiple devices.
Args:
filename_tensor: Tensor for the path of the file to load.
per_device: A list of (device, _VarToSave) pairs, as
returned by _GroupByDevices().
restore_sequentially: True if we want to restore variables sequentially
within a shard.
reshape: True if we want to reshape loaded tensors to the shape of
the corresponding variable.
Returns:
An Operation that restores the variables.
"""
sharded_restores = []
for shard, (device, vars_to_save) in enumerate(per_device):
with ops.device(device):
sharded_restores.append(self._AddRestoreOps(
filename_tensor,
vars_to_save,
restore_sequentially,
reshape,
preferred_shard=shard,
name="restore_shard"))
return control_flow_ops.group(*sharded_restores, name="restore_all")
def _IsVariable(self, v):
return isinstance(v, ops.Tensor) and (
v.op.type == "Variable" or v.op.type == "AutoReloadVariable")
def _GroupByDevices(self, vars_to_save):
"""Group Variable tensor slices per device.
    TODO(touts): Make sure that all the devices found are on different
    job/replica/task/cpu|gpu. It would be bad if 2 were on the same device.
    It can happen if the devices are unspecified.
Args:
vars_to_save: a list of BaseSaverBuilder.VarToSave objects.
Returns:
A list of tuples: (device_name, BaseSaverBuilder.VarToSave) tuples.
The list is sorted by ascending device_name.
"""
per_device = collections.defaultdict(lambda: [])
for var_to_save in vars_to_save:
per_device[var_to_save.var.device].append(var_to_save)
return sorted(per_device.items(), key=lambda t: t[0])
def _VarListToDict(self, var_list):
"""Create a dictionary of names to variable lists.
Args:
var_list: A list, tuple, or set of Variables.
Returns:
A dictionary of variable names to the variables that must be saved under
that name. Variables with save_slice_info are grouped together under the
same key in no particular order.
Raises:
TypeError: If the type of var_list or its elements is not supported.
ValueError: If at least two variables share the same name.
"""
if not isinstance(var_list, (list, tuple, set)):
raise TypeError("Variables to save should be passed in a dict or a "
"list: %s" % var_list)
var_list = set(var_list)
names_to_variables = {}
for var in var_list:
# pylint: disable=protected-access
if isinstance(var, variables.Variable) and var._save_slice_info:
name = var._save_slice_info.name
if name in names_to_variables:
if not isinstance(names_to_variables[name], list):
raise ValueError("Mixing slices and non-slices with the same name: "
"%s" % name)
names_to_variables[name].append(var)
else:
names_to_variables[name] = [var]
else:
var = ops.convert_to_tensor(var)
if not self._IsVariable(var):
raise TypeError("Variable to save is not a Variable: %s" % var)
name = var.op.name
if name in names_to_variables:
raise ValueError("At least two variables have the same name: %s" %
name)
names_to_variables[name] = var
# pylint: enable=protected-access
return names_to_variables
def _ValidateAndSliceInputs(self, names_to_variables):
"""Returns the variables and names that will be used for a Saver.
Args:
names_to_variables: A dict (k, v) where k is the name of a variable and v
is a Variable to save or a BaseSaverBuilder.Saver.
Returns:
A list of BaseSaverBuilder.VarToSave objects.
Raises:
TypeError: if any of the keys are not strings or any of the
values are not one of Tensor or Variable.
ValueError: if the same variable is given in more than one value
(this also applies to slices of SlicedVariables).
"""
if not isinstance(names_to_variables, dict):
names_to_variables = self._VarListToDict(names_to_variables)
vars_to_save = []
seen_variables = set()
for name in sorted(names_to_variables.keys()):
if not isinstance(name, six.string_types):
raise TypeError("names_to_variables must be a dict mapping string "
"names to variable Tensors. Name is not a string: %s" %
name)
v = names_to_variables[name]
if isinstance(v, (list, tuple)):
# A set of slices.
slice_name = None
# pylint: disable=protected-access
for variable in v:
if not isinstance(variable, variables.Variable):
raise ValueError("Slices must all be Variables: %s" % variable)
if not variable._save_slice_info:
raise ValueError("Slices must all be slices: %s" % variable)
if slice_name is None:
slice_name = variable._save_slice_info.name
elif slice_name != variable._save_slice_info.name:
raise variable("Slices must all be from the same tensor: %s != %s"
% (slice_name, variable._save_slice_info.name))
self._AddVarToSave(vars_to_save, seen_variables,
variable, variable._save_slice_info.spec, name)
# pylint: enable=protected-access
else:
# A variable or tensor.
variable = ops.convert_to_tensor(v)
if not self._IsVariable(variable):
raise TypeError("names_to_variables must be a dict mapping string "
"names to Tensors/Variables. Not a variable: %s" %
variable)
self._AddVarToSave(vars_to_save, seen_variables, variable, "", name)
return vars_to_save
def _AddVarToSave(self, vars_to_save, seen_variables, variable, slice_spec,
name):
"""Create a VarToSave and add it to the vars_to_save list.
Args:
vars_to_save: List to append the new VarToSave to.
seen_variables: Set of variables already processed. Used to check
that each variable is only saved once.
variable: Variable to save.
slice_spec: String. Slice spec for the variable.
name: Name to use to save the variable.
Raises:
ValueError: If the variable has already been processed.
"""
if variable in seen_variables:
raise ValueError("The same variable will be restored with two names: %s",
variable)
vars_to_save.append(BaseSaverBuilder.VarToSave(variable, slice_spec, name))
seen_variables.add(variable)
def build(self,
names_to_variables,
reshape=False,
sharded=False,
max_to_keep=5,
keep_checkpoint_every_n_hours=10000.0,
name=None,
restore_sequentially=False):
"""Adds save/restore nodes to the graph and creates a SaverDef proto.
Args:
names_to_variables: A dictionary mapping name to a Variable.
Each name will be associated with the
corresponding variable in the checkpoint.
      reshape: If True, allow restoring parameters from a checkpoint
        where the parameters have a different shape. This is only needed
        when you try to restore from a Dist-Belief checkpoint, and only
        sometimes.
sharded: If True, shard the checkpoints, one per device that has
Parameters nodes.
max_to_keep: maximum number of checkpoints to keep. As new checkpoints
are created, old ones are deleted. If None or 0, no checkpoints are
deleted. Presently the number is only roughly enforced. For example
in case of restarts more than max_to_keep checkpoints may be kept.
keep_checkpoint_every_n_hours: How often checkpoints should be kept.
Defaults to 10,000 hours.
name: string. Optional name to use as a prefix when adding operations.
restore_sequentially: A Bool, which if true, causes restore of different
variables to happen sequentially within each device.
Returns:
A SaverDef proto.
Raises:
TypeError: If 'names_to_variables' is not a dictionary mapping string
keys to variable Tensors.
ValueError: If any of the keys or values in 'names_to_variables' is not
unique.
"""
vars_to_save = self._ValidateAndSliceInputs(names_to_variables)
if max_to_keep is None:
max_to_keep = 0
with ops.op_scope([vs.var for vs in vars_to_save], name, "save") as name:
# Add the Constant string tensor for the filename.
filename_tensor = constant_op.constant("model")
# Add the save ops.
if sharded:
per_device = self._GroupByDevices(vars_to_save)
save_tensor = self._AddShardedSaveOps(filename_tensor, per_device)
restore_op = self._AddShardedRestoreOps(
filename_tensor, per_device, restore_sequentially, reshape)
else:
save_tensor = self._AddSaveOps(filename_tensor, vars_to_save)
restore_op = self._AddRestoreOps(
filename_tensor, vars_to_save, restore_sequentially, reshape)
assert restore_op.name.endswith("restore_all"), restore_op.name
return saver_pb2.SaverDef(
filename_tensor_name=filename_tensor.name,
save_tensor_name=save_tensor.name,
restore_op_name=restore_op.name,
max_to_keep=max_to_keep,
keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours,
sharded=sharded)
def _GetCheckpointFilename(save_dir, latest_filename):
"""Returns a filename for storing the CheckpointState.
Args:
save_dir: The directory for saving and restoring checkpoints.
latest_filename: Name of the file in 'save_dir' that is used
to store the CheckpointState.
Returns:
The path of the file that contains the CheckpointState proto.
"""
if latest_filename is None:
latest_filename = "checkpoint"
return os.path.join(save_dir, latest_filename)
def update_checkpoint_state(save_dir,
model_checkpoint_path,
all_model_checkpoint_paths=None,
latest_filename=None):
"""Updates the content of the 'checkpoint' file.
This updates the checkpoint file containing a CheckpointState
proto.
Args:
save_dir: Directory where the model was saved.
model_checkpoint_path: The checkpoint file.
all_model_checkpoint_paths: list of strings. Paths to all not-yet-deleted
checkpoints, sorted from oldest to newest. If this is a non-empty list,
the last element must be equal to model_checkpoint_path. These paths
are also saved in the CheckpointState proto.
latest_filename: Optional name of the checkpoint file. Default to
'checkpoint'.
Raises:
RuntimeError: If the save paths conflict.
"""
  if not all_model_checkpoint_paths:
    all_model_checkpoint_paths = []
  elif all_model_checkpoint_paths[-1] != model_checkpoint_path:
logging.warning(
"%s is not in all_model_checkpoint_paths! Manually adding it.",
model_checkpoint_path)
all_model_checkpoint_paths.append(model_checkpoint_path)
# Writes the "checkpoint" file for the coordinator for later restoration.
coord_checkpoint_filename = _GetCheckpointFilename(save_dir, latest_filename)
if coord_checkpoint_filename == model_checkpoint_path:
raise RuntimeError("Save path '%s' conflicts with path used for "
"checkpoint state. Please use a different save path." %
model_checkpoint_path)
coord_checkpoint_proto = CheckpointState(
model_checkpoint_path=model_checkpoint_path,
all_model_checkpoint_paths=all_model_checkpoint_paths)
f = gfile.FastGFile(coord_checkpoint_filename, mode="w")
f.write(text_format.MessageToString(coord_checkpoint_proto))
f.close()
def get_checkpoint_state(checkpoint_dir, latest_filename=None):
"""Returns CheckpointState proto from the "checkpoint" file.
If the "checkpoint" file contains a valid CheckpointState
proto, returns it.
Args:
checkpoint_dir: The directory of checkpoints.
latest_filename: Optional name of the checkpoint file. Default to
'checkpoint'.
Returns:
A CheckpointState if the state was available, None
otherwise.
"""
ckpt = None
coord_checkpoint_filename = _GetCheckpointFilename(
checkpoint_dir, latest_filename)
f = None
try:
# Check that the file exists before opening it to avoid
# many lines of errors from colossus in the logs.
if gfile.Exists(coord_checkpoint_filename):
f = gfile.FastGFile(coord_checkpoint_filename, mode="r")
ckpt = CheckpointState()
text_format.Merge(f.read(), ckpt)
except IOError:
# It's ok if the file cannot be read
return None
except text_format.ParseError as e:
logging.warning(str(e))
logging.warning("%s: Checkpoint ignored", coord_checkpoint_filename)
return None
finally:
if f:
f.close()
return ckpt
class Saver(object):
"""Saves and restores variables.
See [Variables](../../how_tos/variables/index.md)
for an overview of variables, saving and restoring.
The `Saver` class adds ops to save and restore variables to and from
*checkpoints*. It also provides convenience methods to run these ops.
Checkpoints are binary files in a proprietary format which map variable names
to tensor values. The best way to examine the contents of a checkpoint is to
load it using a `Saver`.
Savers can automatically number checkpoint filenames with a provided counter.
This lets you keep multiple checkpoints at different steps while training a
model. For example you can number the checkpoint filenames with the training
step number. To avoid filling up disks, savers manage checkpoint files
automatically. For example, they can keep only the N most recent files, or
one checkpoint for every N hours of training.
You number checkpoint filenames by passing a value to the optional
`global_step` argument to `save()`:
```python
saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0'
...
saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000'
```
Additionally, optional arguments to the `Saver()` constructor let you control
the proliferation of checkpoint files on disk:
* `max_to_keep` indicates the maximum number of recent checkpoint files to
keep. As new files are created, older files are deleted. If None or 0,
all checkpoint files are kept. Defaults to 5 (that is, the 5 most recent
checkpoint files are kept.)
* `keep_checkpoint_every_n_hours`: In addition to keeping the most recent
`max_to_keep` checkpoint files, you might want to keep one checkpoint file
for every N hours of training. This can be useful if you want to later
analyze how a model progressed during a long training session. For
example, passing `keep_checkpoint_every_n_hours=2` ensures that you keep
one checkpoint file for every 2 hours of training. The default value of
10,000 hours effectively disables the feature.
Note that you still have to call the `save()` method to save the model.
Passing these arguments to the constructor will not save variables
automatically for you.
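  As a rough illustration (this is a sketch of the policy described above, not
  the actual `Saver` bookkeeping), the combined effect of `max_to_keep` and
  `keep_checkpoint_every_n_hours` can be modeled in plain Python:

  ```python
  # Illustrative sketch only -- not the real Saver implementation.
  # checkpoints: list of (path, time_in_hours) pairs, oldest first.
  def checkpoints_to_delete(checkpoints, max_to_keep, keep_every_n_hours):
      kept, deleted = [], []
      next_keep_time = 0.0
      for path, t in checkpoints:
          kept.append((path, t))
          if len(kept) > max_to_keep:
              old_path, old_t = kept.pop(0)
              if old_t >= next_keep_time:
                  # Old enough to be an "every N hours" keeper: spare it.
                  next_keep_time = old_t + keep_every_n_hours
              else:
                  deleted.append(old_path)
      return deleted

  # Six hourly checkpoints, keep the 2 most recent plus one every 3 hours:
  ckpts = [('my-model-%d' % i, float(i)) for i in range(6)]
  print(checkpoints_to_delete(ckpts, max_to_keep=2, keep_every_n_hours=3.0))
  # ['my-model-1', 'my-model-2']
  ```

  The real implementation also manages sharded filenames and the 'checkpoint'
  state file; this sketch only shows the retention decision.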
A training program that saves regularly looks like:
```python
...
# Create a saver.
saver = tf.train.Saver(...variables...)
# Launch the graph and train, saving the model every 1,000 steps.
sess = tf.Session()
for step in xrange(1000000):
sess.run(..training_op..)
if step % 1000 == 0:
# Append the step number to the checkpoint name:
saver.save(sess, 'my-model', global_step=step)
```
In addition to checkpoint files, savers keep a protocol buffer on disk with
the list of recent checkpoints. This is used to manage numbered checkpoint
files and by `latest_checkpoint()`, which makes it easy to discover the path
to the most recent checkpoint. That protocol buffer is stored in a file named
'checkpoint' next to the checkpoint files.
If you create several savers, you can specify a different filename for the
protocol buffer file in the call to `save()`.
@@__init__
@@save
@@restore
Other utility methods.
@@last_checkpoints
@@set_last_checkpoints
@@as_saver_def
"""
def __init__(self,
var_list=None,
reshape=False,
sharded=False,
max_to_keep=5,
keep_checkpoint_every_n_hours=10000.0,
name=None,
restore_sequentially=False,
saver_def=None,
builder=None):
"""Creates a `Saver`.
The constructor adds ops to save and restore variables.
`var_list` specifies the variables that will be saved and restored. It can
be passed as a `dict` or a list:
* A `dict` of names to variables: The keys are the names that will be
used to save or restore the variables in the checkpoint files.
* A list of variables: The variables will be keyed with their op name in
the checkpoint files.
For example:
```python
v1 = tf.Variable(..., name='v1')
v2 = tf.Variable(..., name='v2')
# Pass the variables as a dict:
saver = tf.train.Saver({'v1': v1, 'v2': v2})
# Or pass them as a list.
saver = tf.train.Saver([v1, v2])
# Passing a list is equivalent to passing a dict with the variable op names
# as keys:
saver = tf.train.Saver({v.op.name: v for v in [v1, v2]})
```
The optional `reshape` argument, if `True`, allows restoring a variable from
a save file where the variable had a different shape, but the same number
of elements and type. This is useful if you have reshaped a variable and
want to reload it from an older checkpoint.
The optional `sharded` argument, if `True`, instructs the saver to shard
checkpoints per device.
Args:
var_list: A list of `Variable` objects or a dictionary mapping names to
variables. If `None`, defaults to the list of all variables.
reshape: If `True`, allows restoring parameters from a checkpoint
where the variables have a different shape.
sharded: If `True`, shard the checkpoints, one per device.
      max_to_keep: maximum number of recent checkpoints to keep.
        Defaults to 5.
keep_checkpoint_every_n_hours: How often to keep checkpoints.
Defaults to 10,000 hours.
name: string. Optional name to use as a prefix when adding operations.
restore_sequentially: A `Bool`, which if true, causes restore of different
variables to happen sequentially within each device. This can lower
memory usage when restoring very large models.
saver_def: Optional `SaverDef` proto to use instead of running the
builder. This is only useful for specialty code that wants to recreate
a `Saver` object for a previously built `Graph` that had a `Saver`.
The `saver_def` proto should be the one returned by the
`as_saver_def()` call of the `Saver` that was created for that `Graph`.
builder: Optional `SaverBuilder` to use if a `saver_def` was not provided.
Defaults to `BaseSaverBuilder()`.
Raises:
TypeError: If `var_list` is invalid.
ValueError: If any of the keys or values in `var_list` are not unique.
"""
    if saver_def is None:
      if builder is None:
        builder = BaseSaverBuilder()
      if var_list is None:
        var_list = variables.all_variables()
      if not var_list:
        raise ValueError("No variables to save")
      saver_def = builder.build(
          var_list,
          reshape=reshape,
          sharded=sharded,
          max_to_keep=max_to_keep,
          keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours,
          name=name,
          restore_sequentially=restore_sequentially)
    if not isinstance(saver_def, saver_pb2.SaverDef):
      raise ValueError("saver_def must be a saver_pb2.SaverDef: %s" % saver_def)
    if not saver_def.save_tensor_name:
      raise ValueError("saver_def must specify the save_tensor_name: %s"
                       % str(saver_def))
    if not saver_def.restore_op_name:
      raise ValueError("saver_def must specify the restore_op_name: %s"
                       % str(saver_def))
    self._filename_tensor_name = saver_def.filename_tensor_name
    self._save_tensor_name = saver_def.save_tensor_name
    self._restore_op_name = saver_def.restore_op_name
    self._max_to_keep = saver_def.max_to_keep
    # If keep_checkpoint_every_n_hours is not set, set it to 10000 hours.
    self._keep_checkpoint_every_n_hours = (
        saver_def.keep_checkpoint_every_n_hours if
        saver_def.keep_checkpoint_every_n_hours else 10000)
    self._next_checkpoint_time = (
        time.time() + self._keep_checkpoint_every_n_hours * 3600)
    self._sharded = saver_def.sharded
    self._last_checkpoints = []

  def _CheckpointFilename(self, p):
    """Returns the checkpoint filename given a `(filename, time)` pair.

    Args:
      p: (filename, time) pair.

    Returns:
      Checkpoint file name.
    """
    name, _ = p
    return name

  def _MaybeDeleteOldCheckpoints(self, latest_save_path):
    """Deletes old checkpoints if necessary.

    Always keep the last `max_to_keep` checkpoints. If
    `keep_checkpoint_every_n_hours` was specified, keep an additional checkpoint
    every `N` hours. For example, if `N` is 0.5, an additional checkpoint is
    kept for every 0.5 hours of training; if `N` is 10, an additional
    checkpoint is kept for every 10 hours of training.

    Args:
      latest_save_path: Name including path of checkpoint file to save.
    """
    if not self._max_to_keep:
      return
    # Remove first from list if the same name was used before.
    for p in self._last_checkpoints:
      if latest_save_path == self._CheckpointFilename(p):
        self._last_checkpoints.remove(p)
    # Append new path to list.
    self._last_checkpoints.append((latest_save_path, time.time()))
    # If more than max_to_keep, remove oldest.
    if len(self._last_checkpoints) > self._max_to_keep:
      p = self._last_checkpoints.pop(0)
      # Do not delete the file if keep_checkpoint_every_n_hours is set and we
      # have reached N hours of training.
      should_keep = p[1] > self._next_checkpoint_time
      if should_keep:
        self._next_checkpoint_time += (
            self._keep_checkpoint_every_n_hours * 3600)
        return
      # Otherwise delete the files.
      for f in gfile.Glob(self._CheckpointFilename(p)):
        try:
          gfile.Remove(f)
        except OSError as e:
          logging.warning("Ignoring: %s", str(e))

  def as_saver_def(self):
    """Generates a `SaverDef` representation of this saver.

    Returns:
      A `SaverDef` proto.
    """
    return saver_pb2.SaverDef(
        filename_tensor_name=self._filename_tensor_name,
        save_tensor_name=self._save_tensor_name,
        restore_op_name=self._restore_op_name,
        max_to_keep=self._max_to_keep,
        keep_checkpoint_every_n_hours=self._keep_checkpoint_every_n_hours,
        sharded=self._sharded)

  @property
  def last_checkpoints(self):
    """List of not-yet-deleted checkpoint filenames.

    You can pass any of the returned values to `restore()`.

    Returns:
      A list of checkpoint filenames, sorted from oldest to newest.
    """
    return list(self._CheckpointFilename(p) for p in self._last_checkpoints)

  def set_last_checkpoints(self, last_checkpoints):
    """Sets the list of old checkpoint filenames.

    Args:
      last_checkpoints: A list of checkpoint filenames.

    Raises:
      AssertionError: If the list of checkpoint filenames has already been set.
    """
    assert not self._last_checkpoints
    assert isinstance(last_checkpoints, list)
    # We use a timestamp of +inf so that this checkpoint will never be
    # deleted. This is both safe and backwards compatible to a previous
    # version of the code which used s[1] as the "timestamp".
    self._last_checkpoints = [(s, np.inf) for s in last_checkpoints]

  def save(self, sess, save_path, global_step=None, latest_filename=None):
    """Saves variables.

    This method runs the ops added by the constructor for saving variables.
    It requires a session in which the graph was launched. The variables to
    save must also have been initialized.

    The method returns the path of the newly created checkpoint file. This
    path can be passed directly to a call to `restore()`.

    Args:
      sess: A Session to use to save the variables.
      save_path: string. Path to the checkpoint filename. If the saver is
        `sharded`, this is the prefix of the sharded checkpoint filename.
      global_step: If provided the global step number is appended to
        `save_path` to create the checkpoint filename. The optional argument
        can be a `Tensor`, a `Tensor` name or an integer.
      latest_filename: Optional name for the protocol buffer file that will
        contain the list of most recent checkpoint filenames. That file,
        kept in the same directory as the checkpoint files, is automatically
        managed by the saver to keep track of recent checkpoints. Defaults to
        'checkpoint'.

    Returns:
      A string: path at which the variables were saved. If the saver is
      sharded, this string ends with: '-?????-of-nnnnn' where 'nnnnn'
      is the number of shards created.

    Raises:
      TypeError: If `sess` is not a `Session`.
    """
    if latest_filename is None:
      latest_filename = "checkpoint"
    if global_step is not None:
      if not isinstance(global_step, compat.integral_types):
        global_step = training_util.global_step(sess, global_step)
      checkpoint_file = "%s-%d" % (save_path, global_step)
    else:
      checkpoint_file = save_path
    save_path = os.path.dirname(save_path)
    if not isinstance(sess, session.SessionInterface):
      raise TypeError("'sess' must be a Session; %s" % sess)
    model_checkpoint_path = sess.run(
        self._save_tensor_name, {self._filename_tensor_name: checkpoint_file})
    model_checkpoint_path = compat.as_str(model_checkpoint_path)
    self._MaybeDeleteOldCheckpoints(model_checkpoint_path)
    update_checkpoint_state(save_path, model_checkpoint_path,
                            self.last_checkpoints, latest_filename)
    return model_checkpoint_path
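When `global_step` is given, `save()` derives the checkpoint filename by plain string formatting. With a hypothetical path, a save at step 1000 is written as:

```python
# Hypothetical values, to show the filename that `save()` produces.
save_path = "/tmp/model.ckpt"
global_step = 1000
checkpoint_file = "%s-%d" % (save_path, global_step)
assert checkpoint_file == "/tmp/model.ckpt-1000"
```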

  def restore(self, sess, save_path):
    """Restores previously saved variables.

    This method runs the ops added by the constructor for restoring variables.
    It requires a session in which the graph was launched. The variables to
    restore do not have to have been initialized, as restoring is itself a way
    to initialize variables.

    The `save_path` argument is typically a value previously returned from a
    `save()` call, or a call to `latest_checkpoint()`.

    Args:
      sess: A `Session` to use to restore the parameters.
      save_path: Path where parameters were previously saved.
    """
    sess.run([self._restore_op_name], {self._filename_tensor_name: save_path})


def latest_checkpoint(checkpoint_dir, latest_filename=None):
  """Finds the filename of latest saved checkpoint file.

  Args:
    checkpoint_dir: Directory where the variables were saved.
    latest_filename: Optional name for the protocol buffer file that
      contains the list of most recent checkpoint filenames.
      See the corresponding argument to `Saver.save()`.

  Returns:
    The full path to the latest checkpoint or `None` if no checkpoint was
    found.
  """
  # Pick the latest checkpoint based on checkpoint state.
  ckpt = get_checkpoint_state(checkpoint_dir, latest_filename)
  if ckpt and ckpt.model_checkpoint_path:
    checkpoint_pattern = os.path.join(
        checkpoint_dir, ckpt.model_checkpoint_path)
    if gfile.Glob(checkpoint_pattern):
      return checkpoint_pattern
  return None
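`get_checkpoint_state` (not shown in this excerpt) reads the small state file, by default named `checkpoint`, that `update_checkpoint_state` writes alongside the checkpoint files; conceptually it is a text-format `CheckpointState` proto. A self-contained sketch of parsing that format, where `parse_checkpoint_state` is a hypothetical helper written only to illustrate what the file contains:

```python
import re


def parse_checkpoint_state(text):
  """Sketch: extract checkpoint paths from a text-format CheckpointState."""
  latest = None
  all_paths = []
  for line in text.splitlines():
    m = re.match(r'\s*(\w+):\s*"([^"]*)"', line)
    if not m:
      continue
    key, value = m.groups()
    if key == "model_checkpoint_path":
      latest = value
    elif key == "all_model_checkpoint_paths":
      all_paths.append(value)
  return latest, all_paths


# Example contents of a `checkpoint` state file.
state = '''model_checkpoint_path: "model.ckpt-2000"
all_model_checkpoint_paths: "model.ckpt-1000"
all_model_checkpoint_paths: "model.ckpt-2000"
'''
```

`latest_checkpoint` then joins the `model_checkpoint_path` value with `checkpoint_dir` and checks that a matching file actually exists.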
|
I am going to remove Microsoft from my life in 2007. I made the move to Apple in 2006. But I have simply adopted the same Microsoft apps I used on my ThinkPad – Entourage/Outlook, Word, Excel, Powerpoint. I haven’t used Internet Explorer since making the switch and that’s gone fine.
I’ve been having huge problems with Entourage. I’ve known for months that I have to get off of it. It’s slow, bloated, gives me beach balls all day long, and it stinks as a mail app anyway. The only plus is that it syncs with Exchange. But I may have to drop Exchange too, or find a native Mac app that syncs with Exchange.
But I wasn’t planning on dropping all Microsoft apps until I saw this news today. Office 2007 apps – Word, Excel, Powerpoint, etc use "open XML formats". They should be called "closed XML formats" because Office for Mac won’t be able to open them.
The bottom line is Microsoft doesn’t care about Mac users. That’s been obvious for a while now. So I don’t care about Microsoft either.
It will be fun to use Google spreadsheet, Writely, and other more web native apps anyway. Not sure what I’ll do on airplanes, but I’ll figure something out.
Starting Jan 1, 2007 I will be on a mission to remove Microsoft from my life. It’s long overdue.