| repo_name | path | copies | size | content | license |
|---|---|---|---|---|---|
with-git/tensorflow | tensorflow/contrib/training/python/training/sgdr_learning_rate_decay.py | 24 | 8088 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""SGDR learning rate decay function."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
def sgdr_decay(learning_rate, global_step, initial_period_steps,
t_mul=2.0, m_mul=1.0, name=None):
"""Implements Stochastic Gradient Descent with Warm Restarts (SGDR).
As described in "SGDR: Stochastic Gradient Descent
with Warm Restarts" by Ilya Loshchilov & Frank Hutter, Proceedings of
ICLR'2017, available at https://arxiv.org/pdf/1608.03983.pdf
The learning rate decreases according to cosine annealing:
```python
learning_rate * 0.5 * (1 + cos(x_val * pi)) # for x_val defined in [0, 1]
```
Thus, at the beginning (when the restart index i = 0),
the learning rate decreases for `initial_period_steps` steps from the initial
learning rate `learning_rate` (when `x_val=0`, we get `cos(0)=1`) to
0 (when `x_val=1`, we get `cos(pi)=-1`).
The decrease within the i-th period takes `t_i` steps,
where `t_0` = `initial_period_steps` is the user-defined number of batch
iterations (not epochs as in the paper) to be performed before the first
restart is launched.
Then, we perform the first restart (i=1) by setting the learning rate to
`learning_rate*(m_mul^i)`, where `m_mul in [0,1]` (set to 1 by default).
The i-th restart runs for `t_i=t_0*(t_mul^i)` steps, i.e., every new
restart runs `t_mul` times longer than the previous one.
Importantly, when one has no access to a validation set, SGDR suggests
to report the best expected / recommended solution in the following way:
When we are within the initial run (i=0), every new solution represents
SGDR's recommended solution. For i>0, the recommended solution is
the one obtained at the end of each restart.
Note that the minimum learning rate is set to 0 for simplicity; you can
adjust the code to handle any positive minimum learning rate as defined in
the paper.
`initial_period_steps` is the duration of the first period measured in terms
of number of minibatch updates. If one wants to use epochs, one should compute
the number of updates required for an epoch.
For example, assume the following parameters and intention:
Minibatch size: 100
Training dataset size: 10000
If the user wants the first decay period to span across 5 epochs, then
`initial_period_steps` = 5 * 10000/100 = 500
Train for 10000 batch iterations with the initial learning rate set to
0.1, then restart to run 2 times longer, i.e., for 20000 batch iterations,
with the initial learning rate halved to 0.05; then restart again and
again, each new period running twice as long and starting at half the
previous initial learning rate.
To accomplish the above, one would write:
```python
...
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
learning_rate = sgdr_decay(starter_learning_rate, global_step,
initial_period_steps=10000, t_mul=2, m_mul=0.5)
# Passing global_step to minimize() will increment it at each step.
learning_step = (
tf.train.GradientDescentOptimizer(learning_rate)
.minimize(...my loss..., global_step=global_step)
)
# Step | 0 | 1000 | 5000 | 9000 | 9999 | 10000 | 11000 |
# LR | 0.1 | 0.097 | 0.05 | 0.002 | 0.00 | 0.05 | 0.0496 |
# Step | 20000 | 29000 | 29999 | 30000 |
# LR | 0.025 | 0.0003 | 0.00 | 0.025 |
```
Args:
learning_rate: A scalar `float32` or `float64` `Tensor` or a
Python number. The initial learning rate.
global_step: A scalar `int32` or `int64` `Tensor` or a Python number.
Global step to use for the decay computation. Must not be negative.
initial_period_steps: Duration of the first period measured as the number
of minibatch updates. If one wants to use epochs, one should compute
the number of updates required for an epoch.
t_mul: A scalar `float32` or `float64` `Tensor` or a Python number.
Must be positive.
Used to derive the number of iterations in the i-th period:
`initial_period_steps * (t_mul^i)`. Defaults to 2.0.
m_mul: A scalar `float32` or `float64` `Tensor` or a Python number.
Must be positive.
Used to derive the initial learning rate of the i-th period:
`learning_rate * (m_mul^i)`. Defaults to 1.0.
Returns:
A scalar `Tensor` of the same type as `learning_rate`.
The learning rate for a provided global_step.
Raises:
ValueError: if `global_step` is not supplied.
"""
if global_step is None:
raise ValueError("global_step is required for sgdr_decay.")
with ops.name_scope(name, "SGDRDecay",
[learning_rate, global_step,
initial_period_steps, t_mul, m_mul]) as name:
learning_rate = ops.convert_to_tensor(learning_rate,
name="initial_learning_rate")
dtype = learning_rate.dtype
global_step = math_ops.cast(global_step, dtype)
t_0 = math_ops.cast(initial_period_steps, dtype)
t_mul = math_ops.cast(t_mul, dtype)
m_mul = math_ops.cast(m_mul, dtype)
c_one = math_ops.cast(constant_op.constant(1.0), dtype)
c_half = math_ops.cast(constant_op.constant(0.5), dtype)
c_pi = math_ops.cast(constant_op.constant(math.pi), dtype)
# Find normalized value of the current step
x_val = math_ops.div(global_step, t_0)
def compute_step(x_val, geometric=False):
if geometric:
# Consider the geometric series for t_mul != 1:
# 1 + t_mul + ... + t_mul^(i_restart - 1) = (1 - t_mul^i_restart) / (1 - t_mul)
# First find how many restarts were performed for a given x_val
# Find maximal integer i_restart value for which this equation holds
# x_val >= (1 - t_mul^i_restart) / (1 - t_mul)
# x_val * (1 - t_mul) <= (1 - t_mul^i_restart)
# t_mul^i_restart <= (1 - x_val * (1 - t_mul))
# TensorFlow only provides the natural logarithm, so:
# i_restart <= log(1 - x_val * (1 - t_mul)) / log(t_mul)
# Find how many restarts were performed
i_restart = math_ops.floor(
math_ops.log(c_one - x_val * (c_one - t_mul)) / math_ops.log(t_mul))
# Compute the sum of all restarts before the current one
sum_r = (c_one - t_mul ** i_restart) / (c_one - t_mul)
# Compute our position within the current restart
x_val = (x_val - sum_r) / t_mul ** i_restart
else:
# Find how many restarts were performed
i_restart = math_ops.floor(x_val)
# Compute our position within the current restart
x_val = x_val - i_restart
return i_restart, x_val
i_restart, x_val = control_flow_ops.cond(
math_ops.equal(t_mul, c_one),
lambda: compute_step(x_val, geometric=False),
lambda: compute_step(x_val, geometric=True))
# If m_mul < 1, then the initial learning rate of every new restart will be
# smaller, i.e., by a factor of m_mul ** i_restart at i_restart-th restart
m_fac = learning_rate * (m_mul ** i_restart)
return math_ops.multiply(c_half * m_fac,
(math_ops.cos(x_val * c_pi) + c_one), name=name)
| apache-2.0 |
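The schedule that the `sgdr_decay` docstring tabulates can be reproduced with a pure-Python sketch of the same math, including the geometric-series inversion used when `t_mul != 1`. `sgdr_lr` is a hypothetical helper written for illustration, not part of the TensorFlow API:

```python
import math

def sgdr_lr(step, lr0=0.1, t0=10000, t_mul=2.0, m_mul=0.5):
    """Pure-Python sketch of the SGDR decay above (hypothetical helper)."""
    x = step / t0
    if t_mul == 1.0:
        # Constant-length periods: the restart index is just floor(x).
        i_restart = math.floor(x)
        x -= i_restart
    else:
        # Invert the geometric series to find the restart index, then
        # normalize the position within the current period to [0, 1].
        i_restart = math.floor(
            math.log(1 - x * (1 - t_mul)) / math.log(t_mul))
        sum_r = (1 - t_mul ** i_restart) / (1 - t_mul)
        x = (x - sum_r) / t_mul ** i_restart
    return 0.5 * lr0 * (m_mul ** i_restart) * (math.cos(x * math.pi) + 1)

print(sgdr_lr(0))      # 0.1   -- start of the first period
print(sgdr_lr(10000))  # 0.05  -- first restart: period doubles, lr halves
print(sgdr_lr(30000))  # 0.025 -- second restart
```

The printed values match the step/LR table in the docstring for `t_mul=2`, `m_mul=0.5`.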
kennedyshead/home-assistant | homeassistant/components/wunderground/sensor.py | 2 | 40590 | """Support for WUnderground weather service."""
from __future__ import annotations
import asyncio
from datetime import timedelta
import logging
import re
from typing import Any, Callable
import aiohttp
import async_timeout
import voluptuous as vol
from homeassistant.components import sensor
from homeassistant.components.sensor import PLATFORM_SCHEMA, SensorEntity
from homeassistant.const import (
ATTR_ATTRIBUTION,
CONF_API_KEY,
CONF_LATITUDE,
CONF_LONGITUDE,
CONF_MONITORED_CONDITIONS,
DEGREE,
IRRADIATION_WATTS_PER_SQUARE_METER,
LENGTH_FEET,
LENGTH_INCHES,
LENGTH_KILOMETERS,
LENGTH_MILES,
LENGTH_MILLIMETERS,
PERCENTAGE,
PRESSURE_INHG,
SPEED_KILOMETERS_PER_HOUR,
SPEED_MILES_PER_HOUR,
TEMP_CELSIUS,
TEMP_FAHRENHEIT,
)
from homeassistant.core import HomeAssistant
from homeassistant.exceptions import PlatformNotReady
from homeassistant.helpers.aiohttp_client import async_get_clientsession
import homeassistant.helpers.config_validation as cv
from homeassistant.helpers.typing import ConfigType
from homeassistant.util import Throttle
_RESOURCE = "http://api.wunderground.com/api/{}/{}/{}/q/"
_LOGGER = logging.getLogger(__name__)
ATTRIBUTION = "Data provided by the WUnderground weather service"
CONF_PWS_ID = "pws_id"
CONF_LANG = "lang"
DEFAULT_LANG = "EN"
MIN_TIME_BETWEEN_UPDATES = timedelta(minutes=5)
# Helper classes for declaring sensor configurations
class WUSensorConfig:
"""WU Sensor Configuration.
Defines basic HA properties of the weather sensor and
stores callbacks that can parse sensor values out of
the JSON data received by the WU API.
"""
def __init__(
self,
friendly_name: str | Callable,
feature: str,
value: Callable[[WUndergroundData], Any],
unit_of_measurement: str | None = None,
entity_picture=None,
icon: str = "mdi:gauge",
extra_state_attributes=None,
device_class=None,
) -> None:
"""Initialize sensor configuration.
:param friendly_name: Friendly name
:param feature: WU feature. See:
https://www.wunderground.com/weather/api/d/docs?d=data/index
:param value: callback that extracts desired value from WUndergroundData object
:param unit_of_measurement: unit of measurement
:param entity_picture: value or callback returning URL of entity picture
:param icon: icon name or URL
:param extra_state_attributes: dictionary of attributes, or callable that returns it
"""
self.friendly_name = friendly_name
self.unit_of_measurement = unit_of_measurement
self.feature = feature
self.value = value
self.entity_picture = entity_picture
self.icon = icon
self.extra_state_attributes = extra_state_attributes or {}
self.device_class = device_class
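The callback pattern above (store lambdas at declaration time, resolve them later against the fetched data) can be sketched in isolation. `FakeWU` and `SensorConfig` below are illustrative stand-ins, not the component's real classes:

```python
from typing import Any, Callable

class FakeWU:
    """Stand-in for WUndergroundData: just wraps the parsed JSON."""
    def __init__(self, data: dict) -> None:
        self.data = data

class SensorConfig:
    """Minimal config: store a value callback, apply it at update time."""
    def __init__(self, friendly_name: str,
                 value: Callable[[FakeWU], Any]) -> None:
        self.friendly_name = friendly_name
        self.value = value

cfg = SensorConfig("Temperature",
                   lambda wu: wu.data["current_observation"]["temp_c"])
wu = FakeWU({"current_observation": {"temp_c": 21.5}})
print(cfg.value(wu))  # the stored callback extracts 21.5
```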
class WUCurrentConditionsSensorConfig(WUSensorConfig):
"""Helper for defining sensor configurations for current conditions."""
def __init__(
self,
friendly_name: str | Callable,
field: str,
icon: str | None = "mdi:gauge",
unit_of_measurement: str | None = None,
device_class=None,
) -> None:
"""Initialize current conditions sensor configuration.
:param friendly_name: Friendly name of sensor
:param field: Field name in the "current_observation" dictionary.
:param icon: Icon name or URL; if None, the sensor will use the current weather symbol
:param unit_of_measurement: Unit of measurement
"""
super().__init__(
friendly_name,
"conditions",
value=lambda wu: wu.data["current_observation"][field],
icon=icon,
unit_of_measurement=unit_of_measurement,
entity_picture=lambda wu: wu.data["current_observation"]["icon_url"]
if icon is None
else None,
extra_state_attributes={
"date": lambda wu: wu.data["current_observation"]["observation_time"]
},
device_class=device_class,
)
class WUDailyTextForecastSensorConfig(WUSensorConfig):
"""Helper for defining sensor configurations for daily text forecasts."""
def __init__(
self, period: int, field: str, unit_of_measurement: str | None = None
) -> None:
"""Initialize daily text forecast sensor configuration.
:param period: forecast period number
:param field: field name to use as value
:param unit_of_measurement: unit of measurement
"""
super().__init__(
friendly_name=lambda wu: wu.data["forecast"]["txt_forecast"]["forecastday"][
period
]["title"],
feature="forecast",
value=lambda wu: wu.data["forecast"]["txt_forecast"]["forecastday"][period][
field
],
entity_picture=lambda wu: wu.data["forecast"]["txt_forecast"][
"forecastday"
][period]["icon_url"],
unit_of_measurement=unit_of_measurement,
extra_state_attributes={
"date": lambda wu: wu.data["forecast"]["txt_forecast"]["date"]
},
)
class WUDailySimpleForecastSensorConfig(WUSensorConfig):
"""Helper for defining sensor configurations for daily simpleforecasts."""
def __init__(
self,
friendly_name: str,
period: int,
field: str,
wu_unit: str | None = None,
ha_unit: str | None = None,
icon=None,
device_class=None,
) -> None:
"""Initialize daily simple forecast sensor configuration.
:param friendly_name: friendly_name of the sensor
:param period: forecast period number
:param field: field name to use as value
:param wu_unit: "fahrenheit", "celsius", "degrees", etc.; see the example JSON at:
https://www.wunderground.com/weather/api/d/docs?d=data/forecast&MR=1
:param ha_unit: corresponding unit in Home Assistant
"""
super().__init__(
friendly_name=friendly_name,
feature="forecast",
value=(
lambda wu: wu.data["forecast"]["simpleforecast"]["forecastday"][period][
field
][wu_unit]
)
if wu_unit
else (
lambda wu: wu.data["forecast"]["simpleforecast"]["forecastday"][period][
field
]
),
unit_of_measurement=ha_unit,
entity_picture=lambda wu: wu.data["forecast"]["simpleforecast"][
"forecastday"
][period]["icon_url"]
if not icon
else None,
icon=icon,
extra_state_attributes={
"date": lambda wu: wu.data["forecast"]["simpleforecast"]["forecastday"][
period
]["date"]["pretty"]
},
device_class=device_class,
)
class WUHourlyForecastSensorConfig(WUSensorConfig):
"""Helper for defining sensor configurations for hourly text forecasts."""
def __init__(self, period: int, field: str) -> None:
"""Initialize hourly forecast sensor configuration.
:param period: forecast period number
:param field: field name to use as value
"""
super().__init__(
friendly_name=lambda wu: (
f"{wu.data['hourly_forecast'][period]['FCTTIME']['weekday_name_abbrev']} "
f"{wu.data['hourly_forecast'][period]['FCTTIME']['civil']}"
),
feature="hourly",
value=lambda wu: wu.data["hourly_forecast"][period][field],
entity_picture=lambda wu: wu.data["hourly_forecast"][period]["icon_url"],
extra_state_attributes={
"temp_c": lambda wu: wu.data["hourly_forecast"][period]["temp"][
"metric"
],
"temp_f": lambda wu: wu.data["hourly_forecast"][period]["temp"][
"english"
],
"dewpoint_c": lambda wu: wu.data["hourly_forecast"][period]["dewpoint"][
"metric"
],
"dewpoint_f": lambda wu: wu.data["hourly_forecast"][period]["dewpoint"][
"english"
],
"precip_prop": lambda wu: wu.data["hourly_forecast"][period]["pop"],
"sky": lambda wu: wu.data["hourly_forecast"][period]["sky"],
"precip_mm": lambda wu: wu.data["hourly_forecast"][period]["qpf"][
"metric"
],
"precip_in": lambda wu: wu.data["hourly_forecast"][period]["qpf"][
"english"
],
"humidity": lambda wu: wu.data["hourly_forecast"][period]["humidity"],
"wind_kph": lambda wu: wu.data["hourly_forecast"][period]["wspd"][
"metric"
],
"wind_mph": lambda wu: wu.data["hourly_forecast"][period]["wspd"][
"english"
],
"pressure_mb": lambda wu: wu.data["hourly_forecast"][period]["mslp"][
"metric"
],
"pressure_inHg": lambda wu: wu.data["hourly_forecast"][period]["mslp"][
"english"
],
"date": lambda wu: wu.data["hourly_forecast"][period]["FCTTIME"][
"pretty"
],
},
)
class WUAlmanacSensorConfig(WUSensorConfig):
"""Helper for defining field configurations for almanac sensors."""
def __init__(
self,
friendly_name: str | Callable,
field: str,
value_type: str,
wu_unit: str,
unit_of_measurement: str,
icon: str,
device_class=None,
) -> None:
"""Initialize almanac sensor configuration.
:param friendly_name: Friendly name
:param field: value name returned in 'almanac' dict as returned by the WU API
:param value_type: "record" or "normal"
:param wu_unit: unit name in WU API
:param unit_of_measurement: unit of measurement
:param icon: icon name or URL
"""
super().__init__(
friendly_name=friendly_name,
feature="almanac",
value=lambda wu: wu.data["almanac"][field][value_type][wu_unit],
unit_of_measurement=unit_of_measurement,
icon=icon,
device_class="temperature",
)
class WUAlertsSensorConfig(WUSensorConfig):
"""Helper for defining field configuration for alerts."""
def __init__(self, friendly_name: str | Callable) -> None:
"""Initialiize alerts sensor configuration.
:param friendly_name: Friendly name
"""
super().__init__(
friendly_name=friendly_name,
feature="alerts",
value=lambda wu: len(wu.data["alerts"]),
icon=lambda wu: "mdi:alert-circle-outline"
if wu.data["alerts"]
else "mdi:check-circle-outline",
extra_state_attributes=self._get_attributes,
)
@staticmethod
def _get_attributes(rest):
attrs = {}
if "alerts" not in rest.data:
return attrs
alerts = rest.data["alerts"]
multiple_alerts = len(alerts) > 1
for data in alerts:
for alert in ALERTS_ATTRS:
if data[alert]:
if multiple_alerts:
dkey = f"{alert.capitalize()}_{data['type']}"
else:
dkey = alert.capitalize()
attrs[dkey] = data[alert]
return attrs
# Declaration of supported WU sensors
# (see above helper classes for argument explanation)
SENSOR_TYPES = {
"alerts": WUAlertsSensorConfig("Alerts"),
"dewpoint_c": WUCurrentConditionsSensorConfig(
"Dewpoint", "dewpoint_c", "mdi:water", TEMP_CELSIUS
),
"dewpoint_f": WUCurrentConditionsSensorConfig(
"Dewpoint", "dewpoint_f", "mdi:water", TEMP_FAHRENHEIT
),
"dewpoint_string": WUCurrentConditionsSensorConfig(
"Dewpoint Summary", "dewpoint_string", "mdi:water"
),
"feelslike_c": WUCurrentConditionsSensorConfig(
"Feels Like", "feelslike_c", "mdi:thermometer", TEMP_CELSIUS
),
"feelslike_f": WUCurrentConditionsSensorConfig(
"Feels Like", "feelslike_f", "mdi:thermometer", TEMP_FAHRENHEIT
),
"feelslike_string": WUCurrentConditionsSensorConfig(
"Feels Like", "feelslike_string", "mdi:thermometer"
),
"heat_index_c": WUCurrentConditionsSensorConfig(
"Heat index", "heat_index_c", "mdi:thermometer", TEMP_CELSIUS
),
"heat_index_f": WUCurrentConditionsSensorConfig(
"Heat index", "heat_index_f", "mdi:thermometer", TEMP_FAHRENHEIT
),
"heat_index_string": WUCurrentConditionsSensorConfig(
"Heat Index Summary", "heat_index_string", "mdi:thermometer"
),
"elevation": WUSensorConfig(
"Elevation",
"conditions",
value=lambda wu: wu.data["current_observation"]["observation_location"][
"elevation"
].split()[0],
unit_of_measurement=LENGTH_FEET,
icon="mdi:elevation-rise",
),
"location": WUSensorConfig(
"Location",
"conditions",
value=lambda wu: wu.data["current_observation"]["display_location"]["full"],
icon="mdi:map-marker",
),
"observation_time": WUCurrentConditionsSensorConfig(
"Observation Time", "observation_time", "mdi:clock"
),
"precip_1hr_in": WUCurrentConditionsSensorConfig(
"Precipitation 1hr", "precip_1hr_in", "mdi:umbrella", LENGTH_INCHES
),
"precip_1hr_metric": WUCurrentConditionsSensorConfig(
"Precipitation 1hr", "precip_1hr_metric", "mdi:umbrella", LENGTH_MILLIMETERS
),
"precip_1hr_string": WUCurrentConditionsSensorConfig(
"Precipitation 1hr", "precip_1hr_string", "mdi:umbrella"
),
"precip_today_in": WUCurrentConditionsSensorConfig(
"Precipitation Today", "precip_today_in", "mdi:umbrella", LENGTH_INCHES
),
"precip_today_metric": WUCurrentConditionsSensorConfig(
"Precipitation Today", "precip_today_metric", "mdi:umbrella", LENGTH_MILLIMETERS
),
"precip_today_string": WUCurrentConditionsSensorConfig(
"Precipitation Today", "precip_today_string", "mdi:umbrella"
),
"pressure_in": WUCurrentConditionsSensorConfig(
"Pressure", "pressure_in", "mdi:gauge", PRESSURE_INHG, device_class="pressure"
),
"pressure_mb": WUCurrentConditionsSensorConfig(
"Pressure", "pressure_mb", "mdi:gauge", "mb", device_class="pressure"
),
"pressure_trend": WUCurrentConditionsSensorConfig(
"Pressure Trend", "pressure_trend", "mdi:gauge", device_class="pressure"
),
"relative_humidity": WUSensorConfig(
"Relative Humidity",
"conditions",
value=lambda wu: int(wu.data["current_observation"]["relative_humidity"][:-1]),
unit_of_measurement=PERCENTAGE,
icon="mdi:water-percent",
device_class="humidity",
),
"station_id": WUCurrentConditionsSensorConfig(
"Station ID", "station_id", "mdi:home"
),
"solarradiation": WUCurrentConditionsSensorConfig(
"Solar Radiation",
"solarradiation",
"mdi:weather-sunny",
IRRADIATION_WATTS_PER_SQUARE_METER,
),
"temperature_string": WUCurrentConditionsSensorConfig(
"Temperature Summary", "temperature_string", "mdi:thermometer"
),
"temp_c": WUCurrentConditionsSensorConfig(
"Temperature",
"temp_c",
"mdi:thermometer",
TEMP_CELSIUS,
device_class="temperature",
),
"temp_f": WUCurrentConditionsSensorConfig(
"Temperature",
"temp_f",
"mdi:thermometer",
TEMP_FAHRENHEIT,
device_class="temperature",
),
"UV": WUCurrentConditionsSensorConfig("UV", "UV", "mdi:sunglasses"),
"visibility_km": WUCurrentConditionsSensorConfig(
"Visibility (km)", "visibility_km", "mdi:eye", LENGTH_KILOMETERS
),
"visibility_mi": WUCurrentConditionsSensorConfig(
"Visibility (miles)", "visibility_mi", "mdi:eye", LENGTH_MILES
),
"weather": WUCurrentConditionsSensorConfig("Weather Summary", "weather", None),
"wind_degrees": WUCurrentConditionsSensorConfig(
"Wind Degrees", "wind_degrees", "mdi:weather-windy", DEGREE
),
"wind_dir": WUCurrentConditionsSensorConfig(
"Wind Direction", "wind_dir", "mdi:weather-windy"
),
"wind_gust_kph": WUCurrentConditionsSensorConfig(
"Wind Gust", "wind_gust_kph", "mdi:weather-windy", SPEED_KILOMETERS_PER_HOUR
),
"wind_gust_mph": WUCurrentConditionsSensorConfig(
"Wind Gust", "wind_gust_mph", "mdi:weather-windy", SPEED_MILES_PER_HOUR
),
"wind_kph": WUCurrentConditionsSensorConfig(
"Wind Speed", "wind_kph", "mdi:weather-windy", SPEED_KILOMETERS_PER_HOUR
),
"wind_mph": WUCurrentConditionsSensorConfig(
"Wind Speed", "wind_mph", "mdi:weather-windy", SPEED_MILES_PER_HOUR
),
"wind_string": WUCurrentConditionsSensorConfig(
"Wind Summary", "wind_string", "mdi:weather-windy"
),
"temp_high_record_c": WUAlmanacSensorConfig(
lambda wu: (
f"High Temperature Record "
f"({wu.data['almanac']['temp_high']['recordyear']})"
),
"temp_high",
"record",
"C",
TEMP_CELSIUS,
"mdi:thermometer",
),
"temp_high_record_f": WUAlmanacSensorConfig(
lambda wu: (
f"High Temperature Record "
f"({wu.data['almanac']['temp_high']['recordyear']})"
),
"temp_high",
"record",
"F",
TEMP_FAHRENHEIT,
"mdi:thermometer",
),
"temp_low_record_c": WUAlmanacSensorConfig(
lambda wu: (
f"Low Temperature Record "
f"({wu.data['almanac']['temp_low']['recordyear']})"
),
"temp_low",
"record",
"C",
TEMP_CELSIUS,
"mdi:thermometer",
),
"temp_low_record_f": WUAlmanacSensorConfig(
lambda wu: (
f"Low Temperature Record "
f"({wu.data['almanac']['temp_low']['recordyear']})"
),
"temp_low",
"record",
"F",
TEMP_FAHRENHEIT,
"mdi:thermometer",
),
"temp_low_avg_c": WUAlmanacSensorConfig(
"Historic Average of Low Temperatures for Today",
"temp_low",
"normal",
"C",
TEMP_CELSIUS,
"mdi:thermometer",
),
"temp_low_avg_f": WUAlmanacSensorConfig(
"Historic Average of Low Temperatures for Today",
"temp_low",
"normal",
"F",
TEMP_FAHRENHEIT,
"mdi:thermometer",
),
"temp_high_avg_c": WUAlmanacSensorConfig(
"Historic Average of High Temperatures for Today",
"temp_high",
"normal",
"C",
TEMP_CELSIUS,
"mdi:thermometer",
),
"temp_high_avg_f": WUAlmanacSensorConfig(
"Historic Average of High Temperatures for Today",
"temp_high",
"normal",
"F",
TEMP_FAHRENHEIT,
"mdi:thermometer",
),
"weather_1d": WUDailyTextForecastSensorConfig(0, "fcttext"),
"weather_1d_metric": WUDailyTextForecastSensorConfig(0, "fcttext_metric"),
"weather_1n": WUDailyTextForecastSensorConfig(1, "fcttext"),
"weather_1n_metric": WUDailyTextForecastSensorConfig(1, "fcttext_metric"),
"weather_2d": WUDailyTextForecastSensorConfig(2, "fcttext"),
"weather_2d_metric": WUDailyTextForecastSensorConfig(2, "fcttext_metric"),
"weather_2n": WUDailyTextForecastSensorConfig(3, "fcttext"),
"weather_2n_metric": WUDailyTextForecastSensorConfig(3, "fcttext_metric"),
"weather_3d": WUDailyTextForecastSensorConfig(4, "fcttext"),
"weather_3d_metric": WUDailyTextForecastSensorConfig(4, "fcttext_metric"),
"weather_3n": WUDailyTextForecastSensorConfig(5, "fcttext"),
"weather_3n_metric": WUDailyTextForecastSensorConfig(5, "fcttext_metric"),
"weather_4d": WUDailyTextForecastSensorConfig(6, "fcttext"),
"weather_4d_metric": WUDailyTextForecastSensorConfig(6, "fcttext_metric"),
"weather_4n": WUDailyTextForecastSensorConfig(7, "fcttext"),
"weather_4n_metric": WUDailyTextForecastSensorConfig(7, "fcttext_metric"),
"weather_1h": WUHourlyForecastSensorConfig(0, "condition"),
"weather_2h": WUHourlyForecastSensorConfig(1, "condition"),
"weather_3h": WUHourlyForecastSensorConfig(2, "condition"),
"weather_4h": WUHourlyForecastSensorConfig(3, "condition"),
"weather_5h": WUHourlyForecastSensorConfig(4, "condition"),
"weather_6h": WUHourlyForecastSensorConfig(5, "condition"),
"weather_7h": WUHourlyForecastSensorConfig(6, "condition"),
"weather_8h": WUHourlyForecastSensorConfig(7, "condition"),
"weather_9h": WUHourlyForecastSensorConfig(8, "condition"),
"weather_10h": WUHourlyForecastSensorConfig(9, "condition"),
"weather_11h": WUHourlyForecastSensorConfig(10, "condition"),
"weather_12h": WUHourlyForecastSensorConfig(11, "condition"),
"weather_13h": WUHourlyForecastSensorConfig(12, "condition"),
"weather_14h": WUHourlyForecastSensorConfig(13, "condition"),
"weather_15h": WUHourlyForecastSensorConfig(14, "condition"),
"weather_16h": WUHourlyForecastSensorConfig(15, "condition"),
"weather_17h": WUHourlyForecastSensorConfig(16, "condition"),
"weather_18h": WUHourlyForecastSensorConfig(17, "condition"),
"weather_19h": WUHourlyForecastSensorConfig(18, "condition"),
"weather_20h": WUHourlyForecastSensorConfig(19, "condition"),
"weather_21h": WUHourlyForecastSensorConfig(20, "condition"),
"weather_22h": WUHourlyForecastSensorConfig(21, "condition"),
"weather_23h": WUHourlyForecastSensorConfig(22, "condition"),
"weather_24h": WUHourlyForecastSensorConfig(23, "condition"),
"weather_25h": WUHourlyForecastSensorConfig(24, "condition"),
"weather_26h": WUHourlyForecastSensorConfig(25, "condition"),
"weather_27h": WUHourlyForecastSensorConfig(26, "condition"),
"weather_28h": WUHourlyForecastSensorConfig(27, "condition"),
"weather_29h": WUHourlyForecastSensorConfig(28, "condition"),
"weather_30h": WUHourlyForecastSensorConfig(29, "condition"),
"weather_31h": WUHourlyForecastSensorConfig(30, "condition"),
"weather_32h": WUHourlyForecastSensorConfig(31, "condition"),
"weather_33h": WUHourlyForecastSensorConfig(32, "condition"),
"weather_34h": WUHourlyForecastSensorConfig(33, "condition"),
"weather_35h": WUHourlyForecastSensorConfig(34, "condition"),
"weather_36h": WUHourlyForecastSensorConfig(35, "condition"),
"temp_high_1d_c": WUDailySimpleForecastSensorConfig(
"High Temperature Today",
0,
"high",
"celsius",
TEMP_CELSIUS,
"mdi:thermometer",
device_class="temperature",
),
"temp_high_2d_c": WUDailySimpleForecastSensorConfig(
"High Temperature Tomorrow",
1,
"high",
"celsius",
TEMP_CELSIUS,
"mdi:thermometer",
device_class="temperature",
),
"temp_high_3d_c": WUDailySimpleForecastSensorConfig(
"High Temperature in 3 Days",
2,
"high",
"celsius",
TEMP_CELSIUS,
"mdi:thermometer",
device_class="temperature",
),
"temp_high_4d_c": WUDailySimpleForecastSensorConfig(
"High Temperature in 4 Days",
3,
"high",
"celsius",
TEMP_CELSIUS,
"mdi:thermometer",
device_class="temperature",
),
"temp_high_1d_f": WUDailySimpleForecastSensorConfig(
"High Temperature Today",
0,
"high",
"fahrenheit",
TEMP_FAHRENHEIT,
"mdi:thermometer",
device_class="temperature",
),
"temp_high_2d_f": WUDailySimpleForecastSensorConfig(
"High Temperature Tomorrow",
1,
"high",
"fahrenheit",
TEMP_FAHRENHEIT,
"mdi:thermometer",
device_class="temperature",
),
"temp_high_3d_f": WUDailySimpleForecastSensorConfig(
"High Temperature in 3 Days",
2,
"high",
"fahrenheit",
TEMP_FAHRENHEIT,
"mdi:thermometer",
device_class="temperature",
),
"temp_high_4d_f": WUDailySimpleForecastSensorConfig(
"High Temperature in 4 Days",
3,
"high",
"fahrenheit",
TEMP_FAHRENHEIT,
"mdi:thermometer",
device_class="temperature",
),
"temp_low_1d_c": WUDailySimpleForecastSensorConfig(
"Low Temperature Today",
0,
"low",
"celsius",
TEMP_CELSIUS,
"mdi:thermometer",
device_class="temperature",
),
"temp_low_2d_c": WUDailySimpleForecastSensorConfig(
"Low Temperature Tomorrow",
1,
"low",
"celsius",
TEMP_CELSIUS,
"mdi:thermometer",
device_class="temperature",
),
"temp_low_3d_c": WUDailySimpleForecastSensorConfig(
"Low Temperature in 3 Days",
2,
"low",
"celsius",
TEMP_CELSIUS,
"mdi:thermometer",
device_class="temperature",
),
"temp_low_4d_c": WUDailySimpleForecastSensorConfig(
"Low Temperature in 4 Days",
3,
"low",
"celsius",
TEMP_CELSIUS,
"mdi:thermometer",
device_class="temperature",
),
"temp_low_1d_f": WUDailySimpleForecastSensorConfig(
"Low Temperature Today",
0,
"low",
"fahrenheit",
TEMP_FAHRENHEIT,
"mdi:thermometer",
device_class="temperature",
),
"temp_low_2d_f": WUDailySimpleForecastSensorConfig(
"Low Temperature Tomorrow",
1,
"low",
"fahrenheit",
TEMP_FAHRENHEIT,
"mdi:thermometer",
device_class="temperature",
),
"temp_low_3d_f": WUDailySimpleForecastSensorConfig(
"Low Temperature in 3 Days",
2,
"low",
"fahrenheit",
TEMP_FAHRENHEIT,
"mdi:thermometer",
device_class="temperature",
),
"temp_low_4d_f": WUDailySimpleForecastSensorConfig(
"Low Temperature in 4 Days",
3,
"low",
"fahrenheit",
TEMP_FAHRENHEIT,
"mdi:thermometer",
device_class="temperature",
),
"wind_gust_1d_kph": WUDailySimpleForecastSensorConfig(
"Max. Wind Today",
0,
"maxwind",
SPEED_KILOMETERS_PER_HOUR,
SPEED_KILOMETERS_PER_HOUR,
"mdi:weather-windy",
),
"wind_gust_2d_kph": WUDailySimpleForecastSensorConfig(
"Max. Wind Tomorrow",
1,
"maxwind",
SPEED_KILOMETERS_PER_HOUR,
SPEED_KILOMETERS_PER_HOUR,
"mdi:weather-windy",
),
"wind_gust_3d_kph": WUDailySimpleForecastSensorConfig(
"Max. Wind in 3 Days",
2,
"maxwind",
SPEED_KILOMETERS_PER_HOUR,
SPEED_KILOMETERS_PER_HOUR,
"mdi:weather-windy",
),
"wind_gust_4d_kph": WUDailySimpleForecastSensorConfig(
"Max. Wind in 4 Days",
3,
"maxwind",
SPEED_KILOMETERS_PER_HOUR,
SPEED_KILOMETERS_PER_HOUR,
"mdi:weather-windy",
),
"wind_gust_1d_mph": WUDailySimpleForecastSensorConfig(
"Max. Wind Today",
0,
"maxwind",
SPEED_MILES_PER_HOUR,
SPEED_MILES_PER_HOUR,
"mdi:weather-windy",
),
"wind_gust_2d_mph": WUDailySimpleForecastSensorConfig(
"Max. Wind Tomorrow",
1,
"maxwind",
SPEED_MILES_PER_HOUR,
SPEED_MILES_PER_HOUR,
"mdi:weather-windy",
),
"wind_gust_3d_mph": WUDailySimpleForecastSensorConfig(
"Max. Wind in 3 Days",
2,
"maxwind",
SPEED_MILES_PER_HOUR,
SPEED_MILES_PER_HOUR,
"mdi:weather-windy",
),
"wind_gust_4d_mph": WUDailySimpleForecastSensorConfig(
"Max. Wind in 4 Days",
3,
"maxwind",
SPEED_MILES_PER_HOUR,
SPEED_MILES_PER_HOUR,
"mdi:weather-windy",
),
"wind_1d_kph": WUDailySimpleForecastSensorConfig(
"Avg. Wind Today",
0,
"avewind",
SPEED_KILOMETERS_PER_HOUR,
SPEED_KILOMETERS_PER_HOUR,
"mdi:weather-windy",
),
"wind_2d_kph": WUDailySimpleForecastSensorConfig(
"Avg. Wind Tomorrow",
1,
"avewind",
SPEED_KILOMETERS_PER_HOUR,
SPEED_KILOMETERS_PER_HOUR,
"mdi:weather-windy",
),
"wind_3d_kph": WUDailySimpleForecastSensorConfig(
"Avg. Wind in 3 Days",
2,
"avewind",
SPEED_KILOMETERS_PER_HOUR,
SPEED_KILOMETERS_PER_HOUR,
"mdi:weather-windy",
),
"wind_4d_kph": WUDailySimpleForecastSensorConfig(
"Avg. Wind in 4 Days",
3,
"avewind",
SPEED_KILOMETERS_PER_HOUR,
SPEED_KILOMETERS_PER_HOUR,
"mdi:weather-windy",
),
"wind_1d_mph": WUDailySimpleForecastSensorConfig(
"Avg. Wind Today",
0,
"avewind",
SPEED_MILES_PER_HOUR,
SPEED_MILES_PER_HOUR,
"mdi:weather-windy",
),
"wind_2d_mph": WUDailySimpleForecastSensorConfig(
"Avg. Wind Tomorrow",
1,
"avewind",
SPEED_MILES_PER_HOUR,
SPEED_MILES_PER_HOUR,
"mdi:weather-windy",
),
"wind_3d_mph": WUDailySimpleForecastSensorConfig(
"Avg. Wind in 3 Days",
2,
"avewind",
SPEED_MILES_PER_HOUR,
SPEED_MILES_PER_HOUR,
"mdi:weather-windy",
),
"wind_4d_mph": WUDailySimpleForecastSensorConfig(
"Avg. Wind in 4 Days",
3,
"avewind",
SPEED_MILES_PER_HOUR,
SPEED_MILES_PER_HOUR,
"mdi:weather-windy",
),
"precip_1d_mm": WUDailySimpleForecastSensorConfig(
"Precipitation Intensity Today",
0,
"qpf_allday",
LENGTH_MILLIMETERS,
LENGTH_MILLIMETERS,
"mdi:umbrella",
),
"precip_2d_mm": WUDailySimpleForecastSensorConfig(
"Precipitation Intensity Tomorrow",
1,
"qpf_allday",
LENGTH_MILLIMETERS,
LENGTH_MILLIMETERS,
"mdi:umbrella",
),
"precip_3d_mm": WUDailySimpleForecastSensorConfig(
"Precipitation Intensity in 3 Days",
2,
"qpf_allday",
LENGTH_MILLIMETERS,
LENGTH_MILLIMETERS,
"mdi:umbrella",
),
"precip_4d_mm": WUDailySimpleForecastSensorConfig(
"Precipitation Intensity in 4 Days",
3,
"qpf_allday",
LENGTH_MILLIMETERS,
LENGTH_MILLIMETERS,
"mdi:umbrella",
),
"precip_1d_in": WUDailySimpleForecastSensorConfig(
"Precipitation Intensity Today",
0,
"qpf_allday",
"in",
LENGTH_INCHES,
"mdi:umbrella",
),
"precip_2d_in": WUDailySimpleForecastSensorConfig(
"Precipitation Intensity Tomorrow",
1,
"qpf_allday",
"in",
LENGTH_INCHES,
"mdi:umbrella",
),
"precip_3d_in": WUDailySimpleForecastSensorConfig(
"Precipitation Intensity in 3 Days",
2,
"qpf_allday",
"in",
LENGTH_INCHES,
"mdi:umbrella",
),
"precip_4d_in": WUDailySimpleForecastSensorConfig(
"Precipitation Intensity in 4 Days",
3,
"qpf_allday",
"in",
LENGTH_INCHES,
"mdi:umbrella",
),
"precip_1d": WUDailySimpleForecastSensorConfig(
"Precipitation Probability Today",
0,
"pop",
None,
PERCENTAGE,
"mdi:umbrella",
),
"precip_2d": WUDailySimpleForecastSensorConfig(
"Precipitation Probability Tomorrow",
1,
"pop",
None,
PERCENTAGE,
"mdi:umbrella",
),
"precip_3d": WUDailySimpleForecastSensorConfig(
"Precipitation Probability in 3 Days",
2,
"pop",
None,
PERCENTAGE,
"mdi:umbrella",
),
"precip_4d": WUDailySimpleForecastSensorConfig(
"Precipitation Probability in 4 Days",
3,
"pop",
None,
PERCENTAGE,
"mdi:umbrella",
),
}
# Alert Attributes
ALERTS_ATTRS = ["date", "description", "expires", "message"]
# Language Supported Codes
LANG_CODES = [
"AF",
"AL",
"AR",
"HY",
"AZ",
"EU",
"BY",
"BU",
"LI",
"MY",
"CA",
"CN",
"TW",
"CR",
"CZ",
"DK",
"DV",
"NL",
"EN",
"EO",
"ET",
"FA",
"FI",
"FR",
"FC",
"GZ",
"DL",
"KA",
"GR",
"GU",
"HT",
"IL",
"HI",
"HU",
"IS",
"IO",
"ID",
"IR",
"IT",
"JP",
"JW",
"KM",
"KR",
"KU",
"LA",
"LV",
"LT",
"ND",
"MK",
"MT",
"GM",
"MI",
"MR",
"MN",
"NO",
"OC",
"PS",
"GN",
"PL",
"BR",
"PA",
"RO",
"RU",
"SR",
"SK",
"SL",
"SP",
"SI",
"SW",
"CH",
"TL",
"TT",
"TH",
"TR",
"TK",
"UA",
"UZ",
"VU",
"CY",
"SN",
"JI",
"YI",
]
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend(
{
vol.Required(CONF_API_KEY): cv.string,
vol.Optional(CONF_PWS_ID): cv.string,
vol.Optional(CONF_LANG, default=DEFAULT_LANG): vol.All(vol.In(LANG_CODES)),
vol.Inclusive(
CONF_LATITUDE, "coordinates", "Latitude and longitude must exist together"
): cv.latitude,
vol.Inclusive(
CONF_LONGITUDE, "coordinates", "Latitude and longitude must exist together"
): cv.longitude,
vol.Required(CONF_MONITORED_CONDITIONS): vol.All(
cv.ensure_list, vol.Length(min=1), [vol.In(SENSOR_TYPES)]
),
}
)
async def async_setup_platform(
hass: HomeAssistant, config: ConfigType, async_add_entities, discovery_info=None
):
"""Set up the WUnderground sensor."""
latitude = config.get(CONF_LATITUDE, hass.config.latitude)
longitude = config.get(CONF_LONGITUDE, hass.config.longitude)
pws_id = config.get(CONF_PWS_ID)
rest = WUndergroundData(
hass,
config.get(CONF_API_KEY),
pws_id,
config.get(CONF_LANG),
latitude,
longitude,
)
if pws_id is None:
unique_id_base = f"@{longitude:06f},{latitude:06f}"
else:
# Manually specified weather station, use that for unique_id
unique_id_base = pws_id
sensors = []
for variable in config[CONF_MONITORED_CONDITIONS]:
sensors.append(WUndergroundSensor(hass, rest, variable, unique_id_base))
await rest.async_update()
if not rest.data:
raise PlatformNotReady
async_add_entities(sensors, True)
class WUndergroundSensor(SensorEntity):
"""Implementing the WUnderground sensor."""
def __init__(
self, hass: HomeAssistant, rest, condition, unique_id_base: str
) -> None:
"""Initialize the sensor."""
self.rest = rest
self._condition = condition
self._state = None
self._attributes = {ATTR_ATTRIBUTION: ATTRIBUTION}
self._icon = None
self._entity_picture = None
self._unit_of_measurement = self._cfg_expand("unit_of_measurement")
self.rest.request_feature(SENSOR_TYPES[condition].feature)
# This is only the suggested entity id, it might get changed by
# the entity registry later.
self.entity_id = sensor.ENTITY_ID_FORMAT.format(f"pws_{condition}")
self._unique_id = f"{unique_id_base},{condition}"
self._device_class = self._cfg_expand("device_class")
def _cfg_expand(self, what, default=None):
"""Parse and return sensor data."""
cfg = SENSOR_TYPES[self._condition]
val = getattr(cfg, what)
if not callable(val):
return val
try:
val = val(self.rest)
except (KeyError, IndexError, TypeError, ValueError) as err:
_LOGGER.warning(
"Failed to expand cfg from WU API. Condition: %s Attr: %s Error: %s",
self._condition,
what,
repr(err),
)
val = default
return val
def _update_attrs(self):
"""Parse and update device state attributes."""
attrs = self._cfg_expand("extra_state_attributes", {})
for (attr, callback) in attrs.items():
if callable(callback):
try:
self._attributes[attr] = callback(self.rest)
except (KeyError, IndexError, TypeError, ValueError) as err:
_LOGGER.warning(
"Failed to update attrs from WU API."
" Condition: %s Attr: %s Error: %s",
self._condition,
attr,
repr(err),
)
else:
self._attributes[attr] = callback
@property
def name(self):
"""Return the name of the sensor."""
return self._cfg_expand("friendly_name")
@property
def state(self):
"""Return the state of the sensor."""
return self._state
@property
def extra_state_attributes(self):
"""Return the state attributes."""
return self._attributes
@property
def icon(self):
"""Return icon."""
return self._icon
@property
def entity_picture(self):
"""Return the entity picture."""
return self._entity_picture
@property
def unit_of_measurement(self):
"""Return the units of measurement."""
return self._unit_of_measurement
@property
def device_class(self):
"""Return the units of measurement."""
return self._device_class
async def async_update(self):
"""Update current conditions."""
await self.rest.async_update()
if not self.rest.data:
# no data, return
return
self._state = self._cfg_expand("value")
self._update_attrs()
self._icon = self._cfg_expand("icon", super().icon)
url = self._cfg_expand("entity_picture")
if isinstance(url, str):
self._entity_picture = re.sub(
r"^http://", "https://", url, flags=re.IGNORECASE
)
@property
def unique_id(self) -> str:
"""Return a unique ID."""
return self._unique_id
class WUndergroundData:
"""Get data from WUnderground."""
def __init__(self, hass, api_key, pws_id, lang, latitude, longitude):
"""Initialize the data object."""
self._hass = hass
self._api_key = api_key
self._pws_id = pws_id
self._lang = f"lang:{lang}"
self._latitude = latitude
self._longitude = longitude
self._features = set()
self.data = None
self._session = async_get_clientsession(self._hass)
def request_feature(self, feature):
"""Register feature to be fetched from WU API."""
self._features.add(feature)
def _build_url(self, baseurl=_RESOURCE):
url = baseurl.format(
self._api_key, "/".join(sorted(self._features)), self._lang
)
if self._pws_id:
url = f"{url}pws:{self._pws_id}"
else:
url = f"{url}{self._latitude},{self._longitude}"
return f"{url}.json"
@Throttle(MIN_TIME_BETWEEN_UPDATES)
async def async_update(self):
"""Get the latest data from WUnderground."""
try:
with async_timeout.timeout(10):
response = await self._session.get(self._build_url())
result = await response.json()
if "error" in result["response"]:
raise ValueError(result["response"]["error"]["description"])
self.data = result
except ValueError as err:
_LOGGER.error("Check WUnderground API %s", err.args)
except (asyncio.TimeoutError, aiohttp.ClientError) as err:
_LOGGER.error("Error fetching WUnderground data: %s", repr(err))
| apache-2.0 |
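The `_cfg_expand` helper in the sensor class above resolves config attributes that may be either plain constants or callables evaluated against the latest API payload, falling back to a default on bad data. A minimal standalone sketch of that pattern (the function name, config keys, and payload shape here are illustrative, not taken from the component):

```python
def cfg_expand(cfg, what, source, default=None):
    """Return cfg[what] directly, or call it with `source` if it is callable."""
    val = cfg[what]
    if not callable(val):
        return val
    try:
        return val(source)
    except (KeyError, IndexError, TypeError, ValueError):
        # Missing or malformed payload fields degrade to the default.
        return default

config = {
    "friendly_name": "Temperature",
    "value": lambda data: data["current"]["temp_c"],
}

constant = cfg_expand(config, "friendly_name", {})
computed = cfg_expand(config, "value", {"current": {"temp_c": 21.5}})
fallback = cfg_expand(config, "value", {}, default=None)
```

Keeping the per-sensor differences in data (a dict of constants and lambdas) rather than in subclasses is what lets one entity class serve the whole `SENSOR_TYPES` table.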
codrut3/tensorflow | tensorflow/contrib/kfac/python/ops/layer_collection_lib.py | 17 | 1628 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Registry for layers and their parameters/variables.
This represents the collection of all layers in the approximate Fisher
information matrix to which a particular FisherBlock may belong. That is, we
might have several layer collections for one TF graph (if we have multiple K-FAC
optimizers being used, for example.)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# pylint: disable=unused-import,line-too-long,wildcard-import
from tensorflow.contrib.kfac.python.ops.layer_collection import *
from tensorflow.python.util.all_util import remove_undocumented
# pylint: enable=unused-import,line-too-long,wildcard-import
_allowed_symbols = [
"LayerParametersDict",
"LayerCollection",
"APPROX_KRONECKER_NAME",
"APPROX_DIAGONAL_NAME",
"APPROX_FULL_NAME",
"VARIABLE_SCOPE",
]
remove_undocumented(__name__, allowed_exception_list=_allowed_symbols)
| apache-2.0 |
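The `remove_undocumented` call above prunes a module's namespace down to an allow-list of public symbols. A rough standalone sketch of that idea (simplified; TensorFlow's real helper also consults module docstrings, and the names below are invented for illustration):

```python
def prune_namespace(namespace, allowed):
    """Remove public names from a module-like dict unless allow-listed."""
    for name in list(namespace):
        # Underscore-prefixed names are already private; leave them alone.
        if not name.startswith("_") and name not in allowed:
            del namespace[name]
    return namespace

ns = {"LayerCollection": object, "stray_helper": object, "_internal": object}
prune_namespace(ns, ["LayerCollection"])
```

Pruning at import time keeps wildcard re-exports (`from ... import *`) from leaking implementation details into the public API.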
chenweihua/jumpserver | jumpserver/urls.py | 1 | 1085 | import xadmin
xadmin.autodiscover()
from django.conf.urls import include, url
from views import index,skin_config,Login,Logout,upload,exec_cmd,setting,web_terminal,download
from xadmin.plugins import xversion
xversion.register_models()
urlpatterns = [
url(r'^$', index, name='index'),
url(r'^skin_config/$', skin_config, name='skin_config'),
url(r'^login/$', Login, name='login'),
url(r'^logout/$', Logout, name='logout'),
url(r'^exec_cmd/$', exec_cmd, name='exec_cmd'),
url(r'^file/upload/$', upload, name='file_upload'),
url(r'^file/download/$', download, name='file_download'),
url(r'^setting', setting, name='setting'),
url(r'^terminal/$', web_terminal, name='terminal'),
url(r'^juser/', include('juser.urls')),
url(r'^jasset/', include('jasset.urls')),
url(r'^jlog/', include('jlog.urls')),
url(r'^jperm/', include('jperm.urls')),
url(r'^jproject/', include('jproject.urls')),
url(r'^ueditor/', include('DjangoUeditor.urls')),
url(r'^xadmin/', include(xadmin.site.urls), name='xadmin'),
]
| gpl-2.0 |
z0by/django | django/db/models/fields/files.py | 129 | 19629 | import datetime
import os
import warnings
from django import forms
from django.core import checks
from django.core.files.base import File
from django.core.files.images import ImageFile
from django.core.files.storage import default_storage
from django.db.models import signals
from django.db.models.fields import Field
from django.utils import six
from django.utils.deprecation import RemovedInDjango110Warning
from django.utils.encoding import force_str, force_text
from django.utils.inspect import func_supports_parameter
from django.utils.translation import ugettext_lazy as _
class FieldFile(File):
def __init__(self, instance, field, name):
super(FieldFile, self).__init__(None, name)
self.instance = instance
self.field = field
self.storage = field.storage
self._committed = True
def __eq__(self, other):
        # Older code may be expecting FileField values to be simple strings.
        # By overriding the == operator, it can remain backwards compatible.
if hasattr(other, 'name'):
return self.name == other.name
return self.name == other
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.name)
# The standard File contains most of the necessary properties, but
# FieldFiles can be instantiated without a name, so that needs to
# be checked for here.
def _require_file(self):
if not self:
raise ValueError("The '%s' attribute has no file associated with it." % self.field.name)
def _get_file(self):
self._require_file()
if not hasattr(self, '_file') or self._file is None:
self._file = self.storage.open(self.name, 'rb')
return self._file
def _set_file(self, file):
self._file = file
def _del_file(self):
del self._file
file = property(_get_file, _set_file, _del_file)
def _get_path(self):
self._require_file()
return self.storage.path(self.name)
path = property(_get_path)
def _get_url(self):
self._require_file()
return self.storage.url(self.name)
url = property(_get_url)
def _get_size(self):
self._require_file()
if not self._committed:
return self.file.size
return self.storage.size(self.name)
size = property(_get_size)
def open(self, mode='rb'):
self._require_file()
self.file.open(mode)
# open() doesn't alter the file's contents, but it does reset the pointer
open.alters_data = True
# In addition to the standard File API, FieldFiles have extra methods
# to further manipulate the underlying file, as well as update the
# associated model instance.
def save(self, name, content, save=True):
name = self.field.generate_filename(self.instance, name)
if func_supports_parameter(self.storage.save, 'max_length'):
self.name = self.storage.save(name, content, max_length=self.field.max_length)
else:
warnings.warn(
'Backwards compatibility for storage backends without '
'support for the `max_length` argument in '
'Storage.save() will be removed in Django 1.10.',
RemovedInDjango110Warning, stacklevel=2
)
self.name = self.storage.save(name, content)
setattr(self.instance, self.field.name, self.name)
# Update the filesize cache
self._size = content.size
self._committed = True
# Save the object because it has changed, unless save is False
if save:
self.instance.save()
save.alters_data = True
def delete(self, save=True):
if not self:
return
# Only close the file if it's already open, which we know by the
# presence of self._file
if hasattr(self, '_file'):
self.close()
del self.file
self.storage.delete(self.name)
self.name = None
setattr(self.instance, self.field.name, self.name)
# Delete the filesize cache
if hasattr(self, '_size'):
del self._size
self._committed = False
if save:
self.instance.save()
delete.alters_data = True
def _get_closed(self):
file = getattr(self, '_file', None)
return file is None or file.closed
closed = property(_get_closed)
def close(self):
file = getattr(self, '_file', None)
if file is not None:
file.close()
def __getstate__(self):
# FieldFile needs access to its associated model field and an instance
# it's attached to in order to work properly, but the only necessary
# data to be pickled is the file's name itself. Everything else will
# be restored later, by FileDescriptor below.
return {'name': self.name, 'closed': False, '_committed': True, '_file': None}
class FileDescriptor(object):
"""
The descriptor for the file attribute on the model instance. Returns a
FieldFile when accessed so you can do stuff like::
>>> from myapp.models import MyModel
>>> instance = MyModel.objects.get(pk=1)
>>> instance.file.size
Assigns a file object on assignment so you can do::
>>> with open('/tmp/hello.world', 'r') as f:
... instance.file = File(f)
"""
def __init__(self, field):
self.field = field
def __get__(self, instance=None, owner=None):
if instance is None:
raise AttributeError(
"The '%s' attribute can only be accessed from %s instances."
% (self.field.name, owner.__name__))
# This is slightly complicated, so worth an explanation.
        # `instance.file` needs to ultimately return some instance of `File`,
# probably a subclass. Additionally, this returned object needs to have
# the FieldFile API so that users can easily do things like
# instance.file.path and have that delegated to the file storage engine.
# Easy enough if we're strict about assignment in __set__, but if you
# peek below you can see that we're not. So depending on the current
# value of the field we have to dynamically construct some sort of
# "thing" to return.
# The instance dict contains whatever was originally assigned
# in __set__.
file = instance.__dict__[self.field.name]
# If this value is a string (instance.file = "path/to/file") or None
# then we simply wrap it with the appropriate attribute class according
# to the file field. [This is FieldFile for FileFields and
# ImageFieldFile for ImageFields; it's also conceivable that user
# subclasses might also want to subclass the attribute class]. This
# object understands how to convert a path to a file, and also how to
# handle None.
if isinstance(file, six.string_types) or file is None:
attr = self.field.attr_class(instance, self.field, file)
instance.__dict__[self.field.name] = attr
# Other types of files may be assigned as well, but they need to have
# the FieldFile interface added to them. Thus, we wrap any other type of
# File inside a FieldFile (well, the field's attr_class, which is
# usually FieldFile).
elif isinstance(file, File) and not isinstance(file, FieldFile):
file_copy = self.field.attr_class(instance, self.field, file.name)
file_copy.file = file
file_copy._committed = False
instance.__dict__[self.field.name] = file_copy
# Finally, because of the (some would say boneheaded) way pickle works,
# the underlying FieldFile might not actually itself have an associated
# file. So we need to reset the details of the FieldFile in those cases.
elif isinstance(file, FieldFile) and not hasattr(file, 'field'):
file.instance = instance
file.field = self.field
file.storage = self.field.storage
# That was fun, wasn't it?
return instance.__dict__[self.field.name]
def __set__(self, instance, value):
instance.__dict__[self.field.name] = value
class FileField(Field):
# The class to wrap instance attributes in. Accessing the file object off
# the instance will always return an instance of attr_class.
attr_class = FieldFile
# The descriptor to use for accessing the attribute off of the class.
descriptor_class = FileDescriptor
description = _("File")
def __init__(self, verbose_name=None, name=None, upload_to='', storage=None, **kwargs):
self._primary_key_set_explicitly = 'primary_key' in kwargs
self._unique_set_explicitly = 'unique' in kwargs
self.storage = storage or default_storage
self.upload_to = upload_to
kwargs['max_length'] = kwargs.get('max_length', 100)
super(FileField, self).__init__(verbose_name, name, **kwargs)
def check(self, **kwargs):
errors = super(FileField, self).check(**kwargs)
errors.extend(self._check_unique())
errors.extend(self._check_primary_key())
return errors
def _check_unique(self):
if self._unique_set_explicitly:
return [
checks.Error(
"'unique' is not a valid argument for a %s." % self.__class__.__name__,
hint=None,
obj=self,
id='fields.E200',
)
]
else:
return []
def _check_primary_key(self):
if self._primary_key_set_explicitly:
return [
checks.Error(
"'primary_key' is not a valid argument for a %s." % self.__class__.__name__,
hint=None,
obj=self,
id='fields.E201',
)
]
else:
return []
def deconstruct(self):
name, path, args, kwargs = super(FileField, self).deconstruct()
if kwargs.get("max_length") == 100:
del kwargs["max_length"]
kwargs['upload_to'] = self.upload_to
if self.storage is not default_storage:
kwargs['storage'] = self.storage
return name, path, args, kwargs
def get_internal_type(self):
return "FileField"
def get_prep_lookup(self, lookup_type, value):
if hasattr(value, 'name'):
value = value.name
return super(FileField, self).get_prep_lookup(lookup_type, value)
def get_prep_value(self, value):
"Returns field's value prepared for saving into a database."
value = super(FileField, self).get_prep_value(value)
# Need to convert File objects provided via a form to unicode for database insertion
if value is None:
return None
return six.text_type(value)
def pre_save(self, model_instance, add):
"Returns field's value just before saving."
file = super(FileField, self).pre_save(model_instance, add)
if file and not file._committed:
# Commit the file to storage prior to saving the model
file.save(file.name, file, save=False)
return file
def contribute_to_class(self, cls, name, **kwargs):
super(FileField, self).contribute_to_class(cls, name, **kwargs)
setattr(cls, self.name, self.descriptor_class(self))
def get_directory_name(self):
return os.path.normpath(force_text(datetime.datetime.now().strftime(force_str(self.upload_to))))
def get_filename(self, filename):
return os.path.normpath(self.storage.get_valid_name(os.path.basename(filename)))
def generate_filename(self, instance, filename):
# If upload_to is a callable, make sure that the path it returns is
# passed through get_valid_name() of the underlying storage.
if callable(self.upload_to):
directory_name, filename = os.path.split(self.upload_to(instance, filename))
filename = self.storage.get_valid_name(filename)
return os.path.normpath(os.path.join(directory_name, filename))
return os.path.join(self.get_directory_name(), self.get_filename(filename))
def save_form_data(self, instance, data):
# Important: None means "no change", other false value means "clear"
# This subtle distinction (rather than a more explicit marker) is
# needed because we need to consume values that are also sane for a
# regular (non Model-) Form to find in its cleaned_data dictionary.
if data is not None:
# This value will be converted to unicode and stored in the
# database, so leaving False as-is is not acceptable.
if not data:
data = ''
setattr(instance, self.name, data)
def formfield(self, **kwargs):
defaults = {'form_class': forms.FileField, 'max_length': self.max_length}
# If a file has been provided previously, then the form doesn't require
# that a new file is provided this time.
# The code to mark the form field as not required is used by
# form_for_instance, but can probably be removed once form_for_instance
# is gone. ModelForm uses a different method to check for an existing file.
if 'initial' in kwargs:
defaults['required'] = False
defaults.update(kwargs)
return super(FileField, self).formfield(**defaults)
class ImageFileDescriptor(FileDescriptor):
"""
Just like the FileDescriptor, but for ImageFields. The only difference is
assigning the width/height to the width_field/height_field, if appropriate.
"""
def __set__(self, instance, value):
previous_file = instance.__dict__.get(self.field.name)
super(ImageFileDescriptor, self).__set__(instance, value)
# To prevent recalculating image dimensions when we are instantiating
# an object from the database (bug #11084), only update dimensions if
# the field had a value before this assignment. Since the default
# value for FileField subclasses is an instance of field.attr_class,
# previous_file will only be None when we are called from
# Model.__init__(). The ImageField.update_dimension_fields method
# hooked up to the post_init signal handles the Model.__init__() cases.
# Assignment happening outside of Model.__init__() will trigger the
# update right here.
if previous_file is not None:
self.field.update_dimension_fields(instance, force=True)
class ImageFieldFile(ImageFile, FieldFile):
def delete(self, save=True):
# Clear the image dimensions cache
if hasattr(self, '_dimensions_cache'):
del self._dimensions_cache
super(ImageFieldFile, self).delete(save)
class ImageField(FileField):
attr_class = ImageFieldFile
descriptor_class = ImageFileDescriptor
description = _("Image")
def __init__(self, verbose_name=None, name=None, width_field=None,
height_field=None, **kwargs):
self.width_field, self.height_field = width_field, height_field
super(ImageField, self).__init__(verbose_name, name, **kwargs)
def check(self, **kwargs):
errors = super(ImageField, self).check(**kwargs)
errors.extend(self._check_image_library_installed())
return errors
def _check_image_library_installed(self):
try:
from PIL import Image # NOQA
except ImportError:
return [
checks.Error(
'Cannot use ImageField because Pillow is not installed.',
hint=('Get Pillow at https://pypi.python.org/pypi/Pillow '
'or run command "pip install Pillow".'),
obj=self,
id='fields.E210',
)
]
else:
return []
def deconstruct(self):
name, path, args, kwargs = super(ImageField, self).deconstruct()
if self.width_field:
kwargs['width_field'] = self.width_field
if self.height_field:
kwargs['height_field'] = self.height_field
return name, path, args, kwargs
def contribute_to_class(self, cls, name, **kwargs):
super(ImageField, self).contribute_to_class(cls, name, **kwargs)
# Attach update_dimension_fields so that dimension fields declared
# after their corresponding image field don't stay cleared by
# Model.__init__, see bug #11196.
# Only run post-initialization dimension update on non-abstract models
if not cls._meta.abstract:
signals.post_init.connect(self.update_dimension_fields, sender=cls)
def update_dimension_fields(self, instance, force=False, *args, **kwargs):
"""
Updates field's width and height fields, if defined.
This method is hooked up to model's post_init signal to update
dimensions after instantiating a model instance. However, dimensions
won't be updated if the dimensions fields are already populated. This
avoids unnecessary recalculation when loading an object from the
database.
Dimensions can be forced to update with force=True, which is how
ImageFileDescriptor.__set__ calls this method.
"""
# Nothing to update if the field doesn't have dimension fields.
has_dimension_fields = self.width_field or self.height_field
if not has_dimension_fields:
return
# getattr will call the ImageFileDescriptor's __get__ method, which
# coerces the assigned value into an instance of self.attr_class
# (ImageFieldFile in this case).
file = getattr(instance, self.attname)
# Nothing to update if we have no file and not being forced to update.
if not file and not force:
return
dimension_fields_filled = not(
(self.width_field and not getattr(instance, self.width_field))
or (self.height_field and not getattr(instance, self.height_field))
)
# When both dimension fields have values, we are most likely loading
# data from the database or updating an image field that already had
# an image stored. In the first case, we don't want to update the
# dimension fields because we are already getting their values from the
# database. In the second case, we do want to update the dimensions
# fields and will skip this return because force will be True since we
# were called from ImageFileDescriptor.__set__.
if dimension_fields_filled and not force:
return
# file should be an instance of ImageFieldFile or should be None.
if file:
width = file.width
height = file.height
else:
# No file, so clear dimensions fields.
width = None
height = None
# Update the width and height fields.
if self.width_field:
setattr(instance, self.width_field, width)
if self.height_field:
setattr(instance, self.height_field, height)
def formfield(self, **kwargs):
defaults = {'form_class': forms.ImageField}
defaults.update(kwargs)
return super(ImageField, self).formfield(**defaults)
| bsd-3-clause |
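The `FileDescriptor`/`FieldFile` pair above leans on Python's descriptor protocol: `__set__` stores whatever was assigned, and `__get__` lazily coerces it into a richer wrapper on first access. A minimal sketch of that coercion pattern outside Django (class names here are invented for illustration):

```python
class Wrapped(str):
    """Stand-in for FieldFile: a plain string enriched with extra behavior."""
    @property
    def upper_name(self):
        return self.upper()

class CoercingDescriptor:
    def __set_name__(self, owner, name):
        self.name = name

    def __set__(self, instance, value):
        # Accept any value, as FileDescriptor.__set__ does.
        instance.__dict__[self.name] = value

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        value = instance.__dict__[self.name]
        # Coerce plain strings into the wrapper type, caching the result.
        if isinstance(value, str) and not isinstance(value, Wrapped):
            value = Wrapped(value)
            instance.__dict__[self.name] = value
        return value

class Model:
    file = CoercingDescriptor()

m = Model()
m.file = "hello.txt"
```

Caching the coerced value back into `instance.__dict__` means the wrapping cost is paid once per assignment, which mirrors how Django avoids rebuilding `FieldFile` objects on every attribute access.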
geminy/aidear | oss/qt/qt-everywhere-opensource-src-5.9.0/qtwebengine/tools/scripts/git_submodule.py | 1 | 12677 | #############################################################################
##
## Copyright (C) 2016 The Qt Company Ltd.
## Contact: https://www.qt.io/licensing/
##
## This file is part of the QtWebEngine module of the Qt Toolkit.
##
## $QT_BEGIN_LICENSE:GPL-EXCEPT$
## Commercial License Usage
## Licensees holding valid commercial Qt licenses may use this file in
## accordance with the commercial license agreement provided with the
## Software or, alternatively, in accordance with the terms contained in
## a written agreement between you and The Qt Company. For licensing terms
## and conditions see https://www.qt.io/terms-conditions. For further
## information use the contact form at https://www.qt.io/contact-us.
##
## GNU General Public License Usage
## Alternatively, this file may be used under the terms of the GNU
## General Public License version 3 as published by the Free Software
## Foundation with exceptions as appearing in the file LICENSE.GPL3-EXCEPT
## included in the packaging of this file. Please review the following
## information to ensure the GNU General Public License requirements will
## be met: https://www.gnu.org/licenses/gpl-3.0.html.
##
## $QT_END_LICENSE$
##
#############################################################################
import glob
import os
import re
import subprocess
import sys
import version_resolver as resolver
extra_os = ['mac', 'win']
def subprocessCall(args):
print args
return subprocess.call(args)
def subprocessCheckOutput(args):
print args
return subprocess.check_output(args)
class DEPSParser:
def __init__(self):
self.global_scope = {
'Var': self.Lookup,
'deps_os': {},
}
self.local_scope = {}
self.topmost_supermodule_path_prefix = ''
def Lookup(self, var_name):
return self.local_scope["vars"][var_name]
def createSubmodulesFromScope(self, scope, os):
submodules = []
for dep in scope:
if (type(scope[dep]) == str):
repo_rev = scope[dep].split('@')
repo = repo_rev[0]
rev = repo_rev[1]
subdir = dep
if subdir.startswith('src/'):
subdir = subdir[4:]
# Don't skip submodules that have a supermodule path prefix set (at the moment these
# are 2nd level deep submodules).
elif not self.topmost_supermodule_path_prefix:
# Ignore the information about chromium itself since we get that from git,
# also ignore anything outside src/ (e.g. depot_tools)
continue
submodule = Submodule(subdir, repo, sp=self.topmost_supermodule_path_prefix)
submodule.os = os
if not submodule.matchesOS():
print '-- skipping ' + submodule.pathRelativeToTopMostSupermodule() + ' for this operating system. --'
continue
if len(rev) == 40: # Length of a git shasum
submodule.ref = rev
else:
sys.exit("Invalid shasum: " + str(rev))
submodules.append(submodule)
return submodules
def parse(self, deps_content):
exec(deps_content, self.global_scope, self.local_scope)
submodules = []
submodules.extend(self.createSubmodulesFromScope(self.local_scope['deps'], 'all'))
if 'deps_os' in self.local_scope:
for os_dep in self.local_scope['deps_os']:
submodules.extend(self.createSubmodulesFromScope(self.local_scope['deps_os'][os_dep], os_dep))
return submodules
# Strips suffix from end of text.
def strip_end(text, suffix):
if not text.endswith(suffix):
return text
return text[:len(text)-len(suffix)]
# Given supermodule_path = /chromium
# current directory = /chromium/buildtools
# submodule_path = third_party/foo/bar
# returns = buildtools
def computeRelativePathPrefixToTopMostSupermodule(submodule_path, supermodule_path):
relpath = os.path.relpath(submodule_path, supermodule_path)
topmost_supermodule_path_prefix = strip_end(relpath, submodule_path)
return topmost_supermodule_path_prefix
class Submodule:
def __init__(self, path='', url='', ref='', os=[], sp=''):
self.path = path
self.url = url
self.ref = ref
self.os = os
self.topmost_supermodule_path_prefix = sp
def pathRelativeToTopMostSupermodule(self):
return os.path.normpath(os.path.join(self.topmost_supermodule_path_prefix, self.path))
def matchesOS(self):
if not self.os:
return True
if 'all' in self.os:
return True
if sys.platform.startswith('linux') and 'unix' in self.os:
return True
if sys.platform.startswith('darwin') and ('unix' in self.os or 'mac' in self.os):
return True
if sys.platform.startswith('win32') or sys.platform.startswith('cygwin'):
if 'win' in self.os:
return True
else:
                # Skipping all dependencies of the extra_os on Windows platform, because it caused conflict.
return False
for os in extra_os:
if os in self.os:
return True
return False
def findShaAndCheckout(self):
oldCwd = os.getcwd()
os.chdir(self.path)
# Fetch the shasum we parsed from the DEPS file.
error = subprocessCall(['git', 'fetch', 'origin', self.ref])
if error != 0:
print('ERROR: Could not fetch ' + self.ref + ' from upstream origin.')
return error
error = subprocessCall(['git', 'checkout', 'FETCH_HEAD']);
current_shasum = subprocessCheckOutput(['git', 'rev-parse', 'HEAD']).strip()
current_tag = subprocessCheckOutput(['git', 'name-rev', '--tags', '--name-only', current_shasum]).strip()
if current_tag == resolver.currentVersion():
# We checked out a tagged version of chromium.
self.ref = current_shasum
if not self.ref:
# No shasum could be deduced, use the submodule shasum.
os.chdir(oldCwd)
line = subprocessCheckOutput(['git', 'submodule', 'status', self.path])
os.chdir(self.path)
line = line.lstrip(' -')
self.ref = line.split(' ')[0]
if not self.ref.startswith(current_shasum):
# In case HEAD differs check out the actual shasum we require.
subprocessCall(['git', 'fetch'])
error = subprocessCall(['git', 'checkout', self.ref])
os.chdir(oldCwd)
return error
def findGitDir(self):
try:
return subprocessCheckOutput(['git', 'rev-parse', '--git-dir']).strip()
except subprocess.CalledProcessError, e:
sys.exit("git dir could not be determined! - Initialization failed! " + e.output)
def reset(self):
currentDir = os.getcwd()
os.chdir(self.path)
gitdir = self.findGitDir()
if os.path.isdir(os.path.join(gitdir, 'rebase-merge')):
if os.path.isfile(os.path.join(gitdir, 'MERGE_HEAD')):
print 'merge in progress... aborting merge.'
subprocessCall(['git', 'merge', '--abort'])
else:
                print 'rebase in progress... aborting rebase.'
subprocessCall(['git', 'rebase', '--abort'])
if os.path.isdir(os.path.join(gitdir, 'rebase-apply')):
print 'am in progress... aborting am.'
subprocessCall(['git', 'am', '--abort'])
subprocessCall(['git', 'reset', '--hard'])
os.chdir(currentDir)
def initialize(self):
if self.matchesOS():
print '\n\n-- initializing ' + self.pathRelativeToTopMostSupermodule() + ' --'
oldCwd = os.getcwd()
# The submodule operations should be done relative to the current submodule's
# supermodule.
if self.topmost_supermodule_path_prefix:
os.chdir(self.topmost_supermodule_path_prefix)
if os.path.isdir(self.path):
self.reset()
if self.url:
subprocessCall(['git', 'submodule', 'add', '-f', self.url, self.path])
subprocessCall(['git', 'submodule', 'sync', '--', self.path])
subprocessCall(['git', 'submodule', 'init', self.path])
subprocessCall(['git', 'submodule', 'update', self.path])
if '3rdparty_upstream' in os.path.abspath(self.path):
if self.findShaAndCheckout() != 0:
sys.exit("!!! initialization failed !!!")
# Add baseline commit for upstream repository to be able to reset.
os.chdir(self.path)
commit = subprocessCheckOutput(['git', 'rev-list', '--max-count=1', 'HEAD'])
subprocessCall(['git', 'commit', '-a', '--allow-empty', '-m', '-- QtWebEngine baseline --\n\ncommit ' + commit])
os.chdir(oldCwd)
else:
print '-- skipping ' + self.path + ' for this operating system. --'
def listFiles(self):
if self.matchesOS() and os.path.isdir(self.pathRelativeToTopMostSupermodule()):
currentDir = os.getcwd()
os.chdir(self.pathRelativeToTopMostSupermodule())
files = subprocessCheckOutput(['git', 'ls-files']).splitlines()
os.chdir(currentDir)
return files
else:
print '-- skipping ' + self.path + ' for this operating system. --'
return []
def parseGitModulesFileContents(self, gitmodules_lines):
submodules = []
currentSubmodule = None
for line in gitmodules_lines:
if line.find('[submodule') == 0:
if currentSubmodule:
submodules.append(currentSubmodule)
currentSubmodule = Submodule()
            tokens = line.split('=', 1)
if len(tokens) >= 2:
key = tokens[0].strip()
value = tokens[1].strip()
if key == 'path':
currentSubmodule.path = value
elif key == 'url':
currentSubmodule.url = value
elif key == 'os':
currentSubmodule.os = value.split(',')
if currentSubmodule:
submodules.append(currentSubmodule)
return submodules
# Return a flattened list of submodules starting from module, and recursively collecting child
# submodules.
def readSubmodulesFromGitModules(self, module, gitmodules_file_name, top_level_path):
flattened_submodules = []
oldCwd = os.getcwd()
os.chdir(module.path)
if os.path.isfile(gitmodules_file_name):
gitmodules_file = open(gitmodules_file_name)
gitmodules_lines = gitmodules_file.readlines()
gitmodules_file.close()
submodules = self.parseGitModulesFileContents(gitmodules_lines)
# When inside a 2nd level submodule or deeper, store the path relative to the topmost
# module.
for submodule in submodules:
submodule.topmost_supermodule_path_prefix = computeRelativePathPrefixToTopMostSupermodule(submodule.path, top_level_path)
flattened_submodules.extend(submodules)
# Recurse into deeper submodules.
for submodule in submodules:
flattened_submodules.extend(self.readSubmodulesFromGitModules(submodule, gitmodules_file_name, top_level_path))
os.chdir(oldCwd)
return flattened_submodules
def readSubmodules(self):
submodules = []
if self.ref:
submodules = resolver.readSubmodules()
print 'DEPS file provides the following submodules:'
for submodule in submodules:
print '{:<80}'.format(submodule.pathRelativeToTopMostSupermodule()) + '{:<120}'.format(submodule.url) + submodule.ref
else: # Try .gitmodules since no ref has been specified
gitmodules_file_name = '.gitmodules'
submodules = self.readSubmodulesFromGitModules(self, gitmodules_file_name, self.path)
return submodules
def initSubmodules(self):
oldCwd = os.getcwd()
os.chdir(self.path)
submodules = self.readSubmodules()
for submodule in submodules:
submodule.initialize()
subprocessCall(['git', 'commit', '-a', '--amend', '--no-edit'])
os.chdir(oldCwd)
| gpl-3.0 |
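The relative-path computation at the top of the script leans on a `strip_end` helper defined elsewhere in the original file. A self-contained sketch, with `strip_end` reconstructed as an assumption about that helper's behaviour:

```python
import os.path

def strip_end(text, suffix):
    # Assumed behaviour of the helper used above: drop `suffix` from the
    # end of `text` when present, otherwise return `text` unchanged.
    if suffix and text.endswith(suffix):
        return text[:-len(suffix)]
    return text

def compute_relative_prefix(submodule_path, supermodule_path):
    # Mirrors computeRelativePathPrefixToTopMostSupermodule: express the
    # submodule's path relative to the topmost supermodule, then strip the
    # submodule's own path so only the prefix remains.
    relpath = os.path.relpath(submodule_path, supermodule_path)
    return strip_end(relpath, submodule_path)
```

With the docstring's example (top-level module at `/chromium`, submodule path `third_party/foo/bar` resolved from `/chromium/buildtools`), the relative path is `buildtools/third_party/foo/bar` and the prefix left after stripping is `buildtools/`.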
calmjs/nunja.stock | examples/fsnavtree.py | 1 | 1511 | #!/usr/bin/env python
from os import getcwd
from os.path import basename
from os import environ
import json
from nunja.core import engine
from nunja.stock.model import fsnav
from nunja.stock.model.base import Definition
nav = fsnav.Base(
Definition(
'baseid',
basename(__file__) + '?{+path}',
uri_template_json=basename(__file__) + '?{+path}',
css_class={
'table': 'pure-table',
},
),
getcwd(),
anchor_key='name',
)
target = environ.get('QUERY_STRING') or '/'
body_tmpl = """
<div id="layout">
<div id="main">
<div class="content">
<h1>Directory Listing</h1>
%s
</div>
</div>
</div>
"""
# XXX this does NOT work in PY3 due to
# - http://bugs.python.org/issue5053
# - http://bugs.python.org/issue5054
if environ.get('HTTP_ACCEPT') == 'application/json':
print("Content-Type: application/json")
print("")
print(json.dumps(nav.get_struct(target)))
else:
print("Content-Type: text/html")
print("")
body = body_tmpl % engine.execute(
'nunja.stock.molds/model', data=nav.get_struct(target))
html = engine.render('nunja.molds/html5', data={
'title': 'Example page',
'js': [
'/node_modules/requirejs/require.js',
'/nunja.stock.js',
'/nunja/config.js',
'init.js'
],
'css': [
'/pure-min.css',
'/local.css',
],
'body': body,
})
print(html)
| gpl-2.0 |
wlanslovenija/cmsplugin-filer | cmsplugin_filer_image/cms_plugins.py | 4 | 5245 | from __future__ import unicode_literals
import os
from cms.plugin_pool import plugin_pool
from cms.plugin_base import CMSPluginBase
from django.template.loader import select_template
from django.utils.translation import ugettext_lazy as _
from . import models
from .conf import settings
from filer.settings import FILER_STATICMEDIA_PREFIX
class FilerImagePlugin(CMSPluginBase):
module = 'Filer'
model = models.FilerImage
name = _("Image")
TEMPLATE_NAME = 'cmsplugin_filer_image/plugins/image/%s.html'
render_template = TEMPLATE_NAME % 'default'
text_enabled = True
raw_id_fields = ('image', 'page_link')
admin_preview = False
fieldsets = (
(None, {
'fields': [
'caption_text',
('image', 'image_url',),
'alt_text',
]
}),
(_('Image resizing options'), {
'fields': (
'use_original_image',
('width', 'height', 'crop', 'upscale'),
'thumbnail_option',
'use_autoscale',
)
}),
(None, {
'fields': ('alignment',)
}),
(_('More'), {
'classes': ('collapse',),
'fields': (('free_link', 'page_link', 'file_link', 'original_link', 'target_blank'), 'description',)
}),
)
if settings.CMSPLUGIN_FILER_IMAGE_STYLE_CHOICES:
fieldsets[0][1]['fields'].append('style')
def _get_thumbnail_options(self, context, instance):
"""
Return the size and options of the thumbnail that should be inserted
"""
width, height = None, None
crop, upscale = False, False
subject_location = False
placeholder_width = context.get('width', None)
placeholder_height = context.get('height', None)
if instance.thumbnail_option:
# thumbnail option overrides everything else
if instance.thumbnail_option.width:
width = instance.thumbnail_option.width
if instance.thumbnail_option.height:
height = instance.thumbnail_option.height
crop = instance.thumbnail_option.crop
upscale = instance.thumbnail_option.upscale
else:
if instance.use_autoscale and placeholder_width:
# use the placeholder width as a hint for sizing
width = int(placeholder_width)
elif instance.width:
width = instance.width
if instance.use_autoscale and placeholder_height:
height = int(placeholder_height)
elif instance.height:
height = instance.height
crop = instance.crop
upscale = instance.upscale
if instance.image:
if instance.image.subject_location:
subject_location = instance.image.subject_location
if not height and width:
# height was not externally defined: use ratio to scale it by the width
                height = int(float(width) * float(instance.image.height) / float(instance.image.width))
if not width and height:
# width was not externally defined: use ratio to scale it by the height
                width = int(float(height) * float(instance.image.width) / float(instance.image.height))
if not width:
# width is still not defined. fallback the actual image width
width = instance.image.width
if not height:
# height is still not defined. fallback the actual image height
height = instance.image.height
return {'size': (width, height),
'crop': crop,
'upscale': upscale,
'subject_location': subject_location}
def get_thumbnail(self, context, instance):
if instance.image:
return instance.image.file.get_thumbnail(self._get_thumbnail_options(context, instance))
def render(self, context, instance, placeholder):
self.render_template = select_template((
'cmsplugin_filer_image/plugins/image.html', # backwards compatibility. deprecated!
self.TEMPLATE_NAME % instance.style,
self.TEMPLATE_NAME % 'default')
)
options = self._get_thumbnail_options(context, instance)
context.update({
'instance': instance,
'link': instance.link,
'opts': options,
'size': options.get('size', None),
'placeholder': placeholder
})
return context
def icon_src(self, instance):
if instance.image:
if getattr(settings, 'FILER_IMAGE_USE_ICON', False) and '32' in instance.image.icons:
return instance.image.icons['32']
else:
# Fake the context with a reasonable width value because it is not
# available at this stage
thumbnail = self.get_thumbnail({'width': 200}, instance)
return thumbnail.url
else:
return os.path.normpath("%s/icons/missingfile_%sx%s.png" % (FILER_STATICMEDIA_PREFIX, 32, 32,))
plugin_pool.register_plugin(FilerImagePlugin)
| bsd-3-clause |
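The fallback sizing in `_get_thumbnail_options` derives whichever of width/height is missing from the image's aspect ratio, and keeps the original dimensions when neither is given. A stripped-down sketch of just that logic (the function name and signature are illustrative, not part of the plugin's API):

```python
def scale_to_fit(image_width, image_height, width=None, height=None):
    # Derive the missing dimension from the aspect ratio, matching the
    # fallback branches in _get_thumbnail_options above.
    if not height and width:
        height = int(float(width) * float(image_height) / float(image_width))
    if not width and height:
        width = int(float(height) * float(image_width) / float(image_height))
    if not width:
        # Neither dimension was derived: fall back to the actual image width.
        width = image_width
    if not height:
        height = image_height
    return width, height
```

For an 800x600 image, requesting only `width=400` yields a 400x300 thumbnail, and requesting only `height=300` yields the same.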
tedder/ansible | lib/ansible/modules/cloud/ovirt/ovirt_nic.py | 49 | 9548 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_nic
short_description: Module to manage network interfaces of Virtual Machines in oVirt/RHV
version_added: "2.3"
author:
- Ondra Machacek (@machacekondra)
description:
- Module to manage network interfaces of Virtual Machines in oVirt/RHV.
options:
id:
description:
- "ID of the nic to manage."
version_added: "2.8"
name:
description:
- Name of the network interface to manage.
required: true
vm:
description:
- Name of the Virtual Machine to manage.
- You must provide either C(vm) parameter or C(template) parameter.
template:
description:
- Name of the template to manage.
- You must provide either C(vm) parameter or C(template) parameter.
version_added: "2.4"
state:
description:
- Should the Virtual Machine NIC be present/absent/plugged/unplugged.
choices: [ absent, plugged, present, unplugged ]
default: present
network:
description:
- Logical network to which the VM network interface should use,
by default Empty network is used if network is not specified.
profile:
description:
- Virtual network interface profile to be attached to VM network interface.
interface:
description:
- "Type of the network interface. For example e1000, pci_passthrough, rtl8139, rtl8139_virtio, spapr_vlan or virtio."
- "It's required parameter when creating the new NIC."
mac_address:
description:
- Custom MAC address of the network interface, by default it's obtained from MAC pool.
extends_documentation_fragment: ovirt
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
- name: Add NIC to VM
ovirt_nic:
state: present
vm: myvm
name: mynic
interface: e1000
mac_address: 00:1a:4a:16:01:56
profile: ovirtmgmt
network: ovirtmgmt
- name: Plug NIC to VM
ovirt_nic:
state: plugged
vm: myvm
name: mynic
- name: Unplug NIC from VM
ovirt_nic:
state: unplugged
vm: myvm
name: mynic
- name: Add NIC to template
ovirt_nic:
auth: "{{ ovirt_auth }}"
state: present
template: my_template
name: nic1
interface: virtio
profile: ovirtmgmt
network: ovirtmgmt
- name: Remove NIC from VM
ovirt_nic:
state: absent
vm: myvm
name: mynic
# Change NIC Name
- ovirt_nic:
id: 00000000-0000-0000-0000-000000000000
name: "new_nic_name"
vm: myvm
'''
RETURN = '''
id:
description: ID of the network interface which is managed
returned: On success if network interface is found.
type: str
sample: 7de90f31-222c-436c-a1ca-7e655bd5b60c
nic:
description: "Dictionary of all the network interface attributes. Network interface attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/nic."
returned: On success if network interface is found.
type: dict
'''
try:
import ovirtsdk4.types as otypes
except ImportError:
pass
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
BaseModule,
check_sdk,
create_connection,
equal,
get_link_name,
ovirt_full_argument_spec,
search_by_name,
)
class EntityNicsModule(BaseModule):
def __init__(self, *args, **kwargs):
super(EntityNicsModule, self).__init__(*args, **kwargs)
self.vnic_id = None
@property
def vnic_id(self):
return self._vnic_id
@vnic_id.setter
def vnic_id(self, vnic_id):
self._vnic_id = vnic_id
def build_entity(self):
return otypes.Nic(
id=self._module.params.get('id'),
name=self._module.params.get('name'),
interface=otypes.NicInterface(
self._module.params.get('interface')
) if self._module.params.get('interface') else None,
vnic_profile=otypes.VnicProfile(
id=self.vnic_id,
) if self.vnic_id else None,
mac=otypes.Mac(
address=self._module.params.get('mac_address')
) if self._module.params.get('mac_address') else None,
)
def update_check(self, entity):
if self._module.params.get('vm'):
return (
equal(self._module.params.get('interface'), str(entity.interface)) and
equal(self._module.params.get('name'), str(entity.name)) and
equal(self._module.params.get('profile'), get_link_name(self._connection, entity.vnic_profile)) and
equal(self._module.params.get('mac_address'), entity.mac.address)
)
elif self._module.params.get('template'):
return (
equal(self._module.params.get('interface'), str(entity.interface)) and
equal(self._module.params.get('name'), str(entity.name)) and
equal(self._module.params.get('profile'), get_link_name(self._connection, entity.vnic_profile))
)
def main():
argument_spec = ovirt_full_argument_spec(
state=dict(type='str', default='present', choices=['absent', 'plugged', 'present', 'unplugged']),
vm=dict(type='str'),
id=dict(default=None),
template=dict(type='str'),
name=dict(type='str', required=True),
interface=dict(type='str'),
profile=dict(type='str'),
network=dict(type='str'),
mac_address=dict(type='str'),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=[['vm', 'template']],
)
check_sdk(module)
try:
# Locate the service that manages the virtual machines and use it to
# search for the NIC:
auth = module.params.pop('auth')
connection = create_connection(auth)
entity_name = None
if module.params.get('vm'):
# Locate the VM, where we will manage NICs:
entity_name = module.params.get('vm')
collection_service = connection.system_service().vms_service()
elif module.params.get('template'):
entity_name = module.params.get('template')
collection_service = connection.system_service().templates_service()
# TODO: We have to modify the search_by_name function to accept raise_error=True/False,
entity = search_by_name(collection_service, entity_name)
if entity is None:
raise Exception("Vm/Template '%s' was not found." % entity_name)
service = collection_service.service(entity.id)
cluster_id = entity.cluster
nics_service = service.nics_service()
entitynics_module = EntityNicsModule(
connection=connection,
module=module,
service=nics_service,
)
# Find vNIC id of the network interface (if any):
profile = module.params.get('profile')
if profile and module.params['network']:
cluster_name = get_link_name(connection, cluster_id)
dcs_service = connection.system_service().data_centers_service()
dc = dcs_service.list(search='Clusters.name=%s' % cluster_name)[0]
networks_service = dcs_service.service(dc.id).networks_service()
network = next(
(n for n in networks_service.list()
if n.name == module.params['network']),
None
)
if network is None:
raise Exception(
"Network '%s' was not found in datacenter '%s'." % (
module.params['network'],
dc.name
)
)
for vnic in connection.system_service().vnic_profiles_service().list():
if vnic.name == profile and vnic.network.id == network.id:
entitynics_module.vnic_id = vnic.id
# Handle appropriate action:
state = module.params['state']
if state == 'present':
ret = entitynics_module.create()
elif state == 'absent':
ret = entitynics_module.remove()
elif state == 'plugged':
entitynics_module.create()
ret = entitynics_module.action(
action='activate',
action_condition=lambda nic: not nic.plugged,
wait_condition=lambda nic: nic.plugged,
)
elif state == 'unplugged':
entitynics_module.create()
ret = entitynics_module.action(
action='deactivate',
action_condition=lambda nic: nic.plugged,
wait_condition=lambda nic: not nic.plugged,
)
module.exit_json(**ret)
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == "__main__":
main()
| gpl-3.0 |
celiasmith/syde556 | pendulum.py | 1 | 4959 | import nengo
import numpy as np
class Pendulum(object):
def __init__(self, mass=1.0, length=1.0, dt=0.001, g=10.0, seed=None,
max_torque=2, max_speed=8, limit=2.0):
self.mass = mass
self.length = length
self.dt = dt
self.g = g
self.max_torque = max_torque
self.max_speed = max_speed
self.limit = limit
self.reset(seed)
def reset(self, seed):
self.rng = np.random.RandomState(seed=seed)
self.theta = self.rng.uniform(-self.limit, self.limit)
self.dtheta = self.rng.uniform(-1, 1)
def step(self, u):
u = np.clip(u, -1, 1) * self.max_torque
self.dtheta += (-3*self.g/(2*self.length)*np.sin(self.theta+np.pi) +
3./(self.mass*self.length**2)*u) * self.dt
self.theta += self.dtheta * self.dt
self.dtheta = np.clip(self.dtheta, -self.max_speed, self.max_speed)
self.theta = np.clip(self.theta, -self.limit, self.limit)
#self.theta = (self.theta + np.pi) % (2*np.pi) - np.pi
def generate_html(self, desired):
len0 = 40*self.length
x1 = 50
y1 = 50
x2 = x1 + len0 * np.sin(self.theta)
y2 = y1 - len0 * np.cos(self.theta)
x3 = x1 + len0 * np.sin(desired)
y3 = y1 - len0 * np.cos(desired)
return '''
<svg width="100%" height="100%" viewbox="0 0 100 100">
<line x1="{x1}" y1="{y1}" x2="{x3}" y2="{y3}" style="stroke:blue"/>
<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" style="stroke:black"/>
</svg>
'''.format(**locals())
class PendulumNode(nengo.Node):
def __init__(self, **kwargs):
self.env = Pendulum(**kwargs)
def func(t, x):
self.env.step(x[0])
func._nengo_html_ = self.env.generate_html(desired=x[1])
return self.env.theta, np.sin(self.env.theta), np.cos(self.env.theta), self.env.dtheta
super(PendulumNode, self).__init__(func, size_in=2)
class PID(object):
def __init__(self, Kp, Ki, Kd, dimensions=1, dt=0.001):
self.Kp = Kp
self.Ki = Ki
self.Kd = Kd
self.dt = dt
self.last_error = np.zeros(dimensions)
self.sum_error = np.zeros(dimensions)
def step(self, error, derror):
self.sum_error += error * self.dt
#derror2 = (error - self.last_error) / self.dt
#if not np.allclose(derror, derror2):
# print derror, derror2
#derror = derror2
#self.last_error = error
return self.Kp*error + self.Ki*self.sum_error + self.Kd * derror
class PIDNode(nengo.Node):
def __init__(self, dimensions, **kwargs):
self.dimensions = dimensions
self.pid = PID(dimensions=dimensions, **kwargs)
self.last_desired = None
super(PIDNode, self).__init__(self.step, size_in=dimensions*4)
def step(self, t, x):
desired, actual = x[:self.dimensions], x[self.dimensions:self.dimensions*2]
ddesired, dactual = x[self.dimensions*2:self.dimensions*3], x[self.dimensions*3:self.dimensions*4]
diff = desired - actual
#if self.last_desired is None or not np.allclose(desired, self.last_desired):
# diff *= 0
# print t, 'changed'
# self.last_desired = desired
#print t, diff
#if diff[0]>2 or diff[0]<-2:
# diff *=0
#print t, x
return self.pid.step(diff, ddesired-dactual)
class FunctionPlot(nengo.Node):
def __init__(self, ens, pts):
self.w = np.zeros((1, ens.n_neurons))
if len(pts.shape)==1:
pts.shape = pts.shape[0],1
self.pts = pts
self.ens = ens
self.sim = None
min_x = -2
max_x = 2
self.svg_x = (pts[:,0] - min_x) * 100 / (max_x - min_x)
def plot(t):
#plot._nengo_html_ = ''
#return None
if self.sim is not None:
_, a = nengo.utils.ensemble.tuning_curves(self.ens, self.sim, self.pts)
else:
a = np.zeros((len(self.pts), self.ens.n_neurons))
y = np.dot(a, self.w.T)
min_y = -1.0
max_y = 1.0
data = (-y - min_y) * 100 / (max_y - min_y)
paths = []
# turn the data into a string for svg plotting
path = []
for j in range(len(data)):
path.append('%1.0f %1.0f' % (self.svg_x[j], data[j]))
paths.append('<path d="M%s" fill="none" stroke="blue"/>' %
('L'.join(path)))
plot._nengo_html_ = '''
<svg width="100%%" height="100%%" viewbox="0 0 100 100">
%s
<line x1=50 y1=0 x2=50 y2=100 stroke="#aaaaaa"/>
<line x1=0 y1=50 x2=100 y2=50 stroke="#aaaaaa"/>
</svg>
''' % (''.join(paths))
super(FunctionPlot, self).__init__(plot, size_in=0)
| gpl-2.0 |
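`Pendulum.step` is a plain Euler integration of the torque-driven pendulum; note the `theta + np.pi` offset, which makes `theta = 0` the upright, unstable equilibrium. A stdlib-only sketch of the same update rule, without the nengo wrapper:

```python
import math

def clip(value, lo, hi):
    return max(lo, min(hi, value))

def pendulum_step(theta, dtheta, u, dt=0.001, g=10.0, mass=1.0, length=1.0,
                  max_torque=2.0, max_speed=8.0, limit=2.0):
    # Same update rule and clipping order as Pendulum.step above:
    # theta is advanced with the unclipped velocity, then both are clipped.
    u = clip(u, -1.0, 1.0) * max_torque
    dtheta += (-3.0 * g / (2.0 * length) * math.sin(theta + math.pi)
               + 3.0 / (mass * length ** 2) * u) * dt
    theta += dtheta * dt
    dtheta = clip(dtheta, -max_speed, max_speed)
    theta = clip(theta, -limit, limit)
    return theta, dtheta

# With zero torque, a pendulum started near the upright equilibrium
# falls away until the position clip pins it at the limit.
theta, dtheta = 0.5, 0.0
for _ in range(1000):
    theta, dtheta = pendulum_step(theta, dtheta, u=0.0)
```

After one simulated second the state sits against the `limit` and `max_speed` clips, which is what makes the control problem in this file non-trivial.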
nhuntwalker/astroML | book_figures/chapter8/fig_total_least_squares.py | 3 | 4653 | """
Total Least Squares Figure
--------------------------
Figure 8.6
A linear fit to data with correlated errors in x and y. In the literature, this
is often referred to as total least squares or errors-in-variables fitting. The
left panel shows the lines of best fit; the right panel shows the likelihood
contours in slope/intercept space. The points are the same set used for the
examples in Hogg, Bovy & Lang 2010.
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from scipy import optimize
from matplotlib import pyplot as plt
from matplotlib.patches import Ellipse
from astroML.linear_model import TLS_logL
from astroML.plotting.mcmc import convert_to_stdev
from astroML.datasets import fetch_hogg2010test
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)
#------------------------------------------------------------
# Define some convenience functions
# translate between typical slope-intercept representation,
# and the normal vector representation
def get_m_b(beta):
b = np.dot(beta, beta) / beta[1]
m = -beta[0] / beta[1]
return m, b
def get_beta(m, b):
denom = (1 + m * m)
return np.array([-b * m / denom, b / denom])
# compute the ellipse principal axes and rotation from covariance
def get_principal(sigma_x, sigma_y, rho_xy):
sigma_xy2 = rho_xy * sigma_x * sigma_y
alpha = 0.5 * np.arctan2(2 * sigma_xy2,
(sigma_x ** 2 - sigma_y ** 2))
tmp1 = 0.5 * (sigma_x ** 2 + sigma_y ** 2)
tmp2 = np.sqrt(0.25 * (sigma_x ** 2 - sigma_y ** 2) ** 2 + sigma_xy2 ** 2)
return np.sqrt(tmp1 + tmp2), np.sqrt(tmp1 - tmp2), alpha
# plot ellipses
def plot_ellipses(x, y, sigma_x, sigma_y, rho_xy, factor=2, ax=None):
if ax is None:
ax = plt.gca()
sigma1, sigma2, alpha = get_principal(sigma_x, sigma_y, rho_xy)
for i in range(len(x)):
ax.add_patch(Ellipse((x[i], y[i]),
factor * sigma1[i], factor * sigma2[i],
alpha[i] * 180. / np.pi,
fc='none', ec='k'))
#------------------------------------------------------------
# We'll use the data from table 1 of Hogg et al. 2010
data = fetch_hogg2010test()
data = data[5:] # no outliers
x = data['x']
y = data['y']
sigma_x = data['sigma_x']
sigma_y = data['sigma_y']
rho_xy = data['rho_xy']
#------------------------------------------------------------
# Find best-fit parameters
X = np.vstack((x, y)).T
dX = np.zeros((len(x), 2, 2))
dX[:, 0, 0] = sigma_x ** 2
dX[:, 1, 1] = sigma_y ** 2
dX[:, 0, 1] = dX[:, 1, 0] = rho_xy * sigma_x * sigma_y
min_func = lambda beta: -TLS_logL(beta, X, dX)
beta_fit = optimize.fmin(min_func,
x0=[-1, 1])
#------------------------------------------------------------
# Plot the data and fits
fig = plt.figure(figsize=(5, 2.5))
fig.subplots_adjust(left=0.1, right=0.95, wspace=0.25,
bottom=0.15, top=0.9)
#------------------------------------------------------------
# first let's visualize the data
ax = fig.add_subplot(121)
ax.scatter(x, y, c='k', s=9)
plot_ellipses(x, y, sigma_x, sigma_y, rho_xy, ax=ax)
#------------------------------------------------------------
# plot the best-fit line
m_fit, b_fit = get_m_b(beta_fit)
x_fit = np.linspace(0, 300, 10)
ax.plot(x_fit, m_fit * x_fit + b_fit, '-k')
ax.set_xlim(40, 250)
ax.set_ylim(100, 600)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
#------------------------------------------------------------
# plot the likelihood contour in m, b
ax = fig.add_subplot(122)
m = np.linspace(1.7, 2.8, 100)
b = np.linspace(-60, 110, 100)
logL = np.zeros((len(m), len(b)))
for i in range(len(m)):
for j in range(len(b)):
logL[i, j] = TLS_logL(get_beta(m[i], b[j]), X, dX)
ax.contour(m, b, convert_to_stdev(logL.T),
levels=(0.683, 0.955, 0.997),
colors='k')
ax.set_xlabel('slope')
ax.set_ylabel('intercept')
ax.set_xlim(1.7, 2.8)
ax.set_ylim(-60, 110)
plt.show()
| bsd-2-clause |
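`get_beta` and `get_m_b` convert between slope/intercept and the normal-vector parametrization used by `TLS_logL`: beta is the foot of the perpendicular from the origin to the line y = m*x + b. The two converters are exact inverses, which makes a quick sanity check easy. A stdlib-only restatement of the same algebra:

```python
def slope_intercept_to_beta(m, b):
    # beta is the point on y = m*x + b closest to the origin.
    denom = 1.0 + m * m
    return (-b * m / denom, b / denom)

def beta_to_slope_intercept(beta):
    # Inverse map, matching get_m_b above: b = (beta . beta) / beta_y.
    bx, by = beta
    return -bx / by, (bx * bx + by * by) / by

# Round trip through both parametrizations.
m, b = beta_to_slope_intercept(slope_intercept_to_beta(2.2, 30.0))
```

The round trip recovers the slope and intercept exactly (up to floating-point error), so the likelihood grid over (m, b) in the right panel is a faithful reparametrization of the beta space the optimizer works in.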
trdean/grEME | volk/gen/volk_tmpl_utils.py | 40 | 2390 | #!/usr/bin/env python
#
# Copyright 2012 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
import os
import re
import sys
import optparse
import volk_arch_defs
import volk_machine_defs
import volk_kernel_defs
from Cheetah import Template
def __escape_pre_processor(code):
out = list()
for line in code.splitlines():
        m = re.match(r'^(\s*)#(\s*)(\w+)(.*)$', line)
if m:
p0, p1, fcn, stuff = m.groups()
conly = fcn in ('include', 'define', 'ifdef', 'ifndef', 'endif', 'elif', 'pragma')
both = fcn in ('if', 'else')
istmpl = '$' in stuff
if 'defined' in stuff: istmpl = False
if conly or (both and not istmpl):
line = '%s\\#%s%s%s'%(p0, p1, fcn, stuff)
out.append(line)
return '\n'.join(out)
def __parse_tmpl(_tmpl, **kwargs):
defs = {
'archs': volk_arch_defs.archs,
'arch_dict': volk_arch_defs.arch_dict,
'machines': volk_machine_defs.machines,
'machine_dict': volk_machine_defs.machine_dict,
'kernels': volk_kernel_defs.kernels,
}
defs.update(kwargs)
_tmpl = __escape_pre_processor(_tmpl)
_tmpl = """
/* this file was generated by volk template utils, do not edit! */
""" + _tmpl
return str(Template.Template(_tmpl, defs))
def main():
parser = optparse.OptionParser()
parser.add_option('--input', type='string')
parser.add_option('--output', type='string')
(opts, args) = parser.parse_args()
output = __parse_tmpl(open(opts.input).read(), args=args)
if opts.output: open(opts.output, 'w').write(output)
else: print output
if __name__ == '__main__': main()
| gpl-3.0 |
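`__escape_pre_processor` exists because Cheetah uses `#` as its own directive marker, so raw C preprocessor lines in a `.tmpl` file must be escaped to `\#` before templating. A simplified, self-contained sketch of the unconditional branch (the template-aware handling of `#if`/`#else` lines is deliberately omitted, and the directive list mirrors the original's `conly` tuple):

```python
import re

# Directives that are always C-only and therefore always escaped.
CONLY = ('include', 'define', 'ifdef', 'ifndef', 'endif', 'elif', 'pragma')

def escape_pp_line(line):
    # Match leading whitespace, the '#', optional whitespace, the directive
    # name, and the rest of the line -- same pattern as the original.
    m = re.match(r'^(\s*)#(\s*)(\w+)(.*)$', line)
    if m and m.group(3) in CONLY:
        p0, p1, fcn, stuff = m.groups()
        return '%s\\#%s%s%s' % (p0, p1, fcn, stuff)
    return line
```

Non-directive lines pass through untouched, so ordinary C code in the template is unaffected.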
hudsondossantos/only_some_numerical_methods | linear-sys_methods/gauss_seidel.py | 1 | 2080 | # -*- coding: utf-8 -*-
"""
This module is an implementation of the Gauss-Seidel method
"""
def norm(vector_x):
"""
    Computes the Euclidean norm of a vector
    Args:
        vector_x (list): Vector
    Returns:
        The Euclidean norm of the vector
"""
norm_v = sum([x ** 2 for x in vector_x])
norm_v **= (1/2.0)
return norm_v
def gauss_seidel(A, b, tolerance=1.0e-05, v_initial=None):
"""
Iterative method for solving systems of linear equations.
Args:
A (matrix): Coefficient matrix.
b (list): Vector of constant terms.
tolerance (float): Tolerance for the stopping criterion
v_initial (float): Vector Initial
"""
    iteration_max = 70
    order = len(A)
    x = v_initial[:] if v_initial is not None else [0] * order
    v = [0] * order
    while iteration_max:
        for i in range(order):
            addition = 0
            for j in range(i):
                addition = addition + A[i][j] * v[j]
            for j in range(i+1, order):
                addition = addition + A[i][j] * x[j]
            v[i] = (b[i] - addition) / A[i][i]
        # Stop when the update is small: compare the vectors themselves,
        # not just their norms (two different vectors can share a norm).
        if norm([v[i] - x[i] for i in range(order)]) < tolerance:
            x = v[:]
            break
        else:
            x = v[:]
            iteration_max -= 1
    print("Tolerance = ", tolerance)
    print("Solution:")
    for i in range(len(x)):
        print("X%d" % (i+1), " = %.8f" % x[i])
def main():
"""
    Reads matrix A, vector b, the tolerance and the initial vector from standard input
"""
    order = int(input("Enter the order: "))
    A = []
    print("Enter the matrix, one row per line:")
    for i in range(order):
        row = input()
        A.append(list(map(float, row.split())))
    print("Enter vector b:")
    b = list(map(float, input().split()))
    print("Enter the initial vector:")
    v_initial = list(map(float, input().split()))
    tolerance = float(input("Enter the tolerance: "))
    gauss_seidel(A, b, tolerance, v_initial)
if __name__ == '__main__':
main()
| mit |
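A quick way to sanity-check the sweep in `gauss_seidel` is a small, strictly diagonally dominant system, for which Gauss-Seidel is guaranteed to converge. This variant returns the solution vector instead of printing it (a sketch for testing, not the module's interface):

```python
def gauss_seidel_solve(A, b, tolerance=1e-10, iteration_max=100):
    # Same Gauss-Seidel sweep as above: row i uses the freshly updated
    # components v[0..i-1] and the previous iterate x[i+1..] for the rest.
    order = len(A)
    x = [0.0] * order
    for _ in range(iteration_max):
        v = []
        for i in range(order):
            addition = sum(A[i][j] * (v[j] if j < i else x[j])
                           for j in range(order) if j != i)
            v.append((b[i] - addition) / A[i][i])
        if max(abs(vi - xi) for vi, xi in zip(v, x)) < tolerance:
            return v
        x = v
    return x

# 4x + y = 9 and x + 3y = 5 have the exact solution x = 2, y = 1.
solution = gauss_seidel_solve([[4.0, 1.0], [1.0, 3.0]], [9.0, 5.0])
```

The error contracts by roughly a factor of (1/4)*(1/3) per sweep on this system, so the tolerance is reached in about a dozen iterations.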
0x535431/textaapp | lib/markupsafe/__init__.py | 701 | 10338 | # -*- coding: utf-8 -*-
"""
markupsafe
~~~~~~~~~~
Implements a Markup string.
:copyright: (c) 2010 by Armin Ronacher.
:license: BSD, see LICENSE for more details.
"""
import re
import string
from collections import Mapping
from markupsafe._compat import text_type, string_types, int_types, \
unichr, iteritems, PY2
__all__ = ['Markup', 'soft_unicode', 'escape', 'escape_silent']
_striptags_re = re.compile(r'(<!--.*?-->|<[^>]*>)')
_entity_re = re.compile(r'&([^;]+);')
class Markup(text_type):
r"""Marks a string as being safe for inclusion in HTML/XML output without
needing to be escaped. This implements the `__html__` interface a couple
of frameworks and web applications use. :class:`Markup` is a direct
subclass of `unicode` and provides all the methods of `unicode` just that
it escapes arguments passed and always returns `Markup`.
The `escape` function returns markup objects so that double escaping can't
happen.
The constructor of the :class:`Markup` class can be used for three
different things: When passed an unicode object it's assumed to be safe,
when passed an object with an HTML representation (has an `__html__`
method) that representation is used, otherwise the object passed is
converted into a unicode string and then assumed to be safe:
>>> Markup("Hello <em>World</em>!")
Markup(u'Hello <em>World</em>!')
>>> class Foo(object):
... def __html__(self):
... return '<a href="#">foo</a>'
...
>>> Markup(Foo())
Markup(u'<a href="#">foo</a>')
If you want object passed being always treated as unsafe you can use the
:meth:`escape` classmethod to create a :class:`Markup` object:
>>> Markup.escape("Hello <em>World</em>!")
Markup(u'Hello <em>World</em>!')
Operations on a markup string are markup aware which means that all
arguments are passed through the :func:`escape` function:
>>> em = Markup("<em>%s</em>")
>>> em % "foo & bar"
Markup(u'<em>foo & bar</em>')
>>> strong = Markup("<strong>%(text)s</strong>")
>>> strong % {'text': '<blink>hacker here</blink>'}
Markup(u'<strong><blink>hacker here</blink></strong>')
>>> Markup("<em>Hello</em> ") + "<foo>"
Markup(u'<em>Hello</em> <foo>')
"""
__slots__ = ()
def __new__(cls, base=u'', encoding=None, errors='strict'):
if hasattr(base, '__html__'):
base = base.__html__()
if encoding is None:
return text_type.__new__(cls, base)
return text_type.__new__(cls, base, encoding, errors)
def __html__(self):
return self
def __add__(self, other):
if isinstance(other, string_types) or hasattr(other, '__html__'):
return self.__class__(super(Markup, self).__add__(self.escape(other)))
return NotImplemented
def __radd__(self, other):
if hasattr(other, '__html__') or isinstance(other, string_types):
return self.escape(other).__add__(self)
return NotImplemented
def __mul__(self, num):
if isinstance(num, int_types):
return self.__class__(text_type.__mul__(self, num))
return NotImplemented
__rmul__ = __mul__
def __mod__(self, arg):
if isinstance(arg, tuple):
arg = tuple(_MarkupEscapeHelper(x, self.escape) for x in arg)
else:
arg = _MarkupEscapeHelper(arg, self.escape)
return self.__class__(text_type.__mod__(self, arg))
def __repr__(self):
return '%s(%s)' % (
self.__class__.__name__,
text_type.__repr__(self)
)
def join(self, seq):
return self.__class__(text_type.join(self, map(self.escape, seq)))
join.__doc__ = text_type.join.__doc__
def split(self, *args, **kwargs):
return list(map(self.__class__, text_type.split(self, *args, **kwargs)))
split.__doc__ = text_type.split.__doc__
def rsplit(self, *args, **kwargs):
return list(map(self.__class__, text_type.rsplit(self, *args, **kwargs)))
rsplit.__doc__ = text_type.rsplit.__doc__
def splitlines(self, *args, **kwargs):
return list(map(self.__class__, text_type.splitlines(
self, *args, **kwargs)))
splitlines.__doc__ = text_type.splitlines.__doc__
def unescape(self):
r"""Unescape markup again into an text_type string. This also resolves
known HTML4 and XHTML entities:
>>> Markup("Main » <em>About</em>").unescape()
u'Main \xbb <em>About</em>'
"""
from markupsafe._constants import HTML_ENTITIES
def handle_match(m):
name = m.group(1)
if name in HTML_ENTITIES:
return unichr(HTML_ENTITIES[name])
try:
if name[:2] in ('#x', '#X'):
return unichr(int(name[2:], 16))
elif name.startswith('#'):
return unichr(int(name[1:]))
except ValueError:
pass
return u''
return _entity_re.sub(handle_match, text_type(self))
def striptags(self):
r"""Unescape markup into an text_type string and strip all tags. This
also resolves known HTML4 and XHTML entities. Whitespace is
normalized to one:
>>> Markup("Main » <em>About</em>").striptags()
u'Main \xbb About'
"""
stripped = u' '.join(_striptags_re.sub('', self).split())
return Markup(stripped).unescape()
@classmethod
def escape(cls, s):
"""Escape the string. Works like :func:`escape` with the difference
that for subclasses of :class:`Markup` this function would return the
correct subclass.
"""
rv = escape(s)
if rv.__class__ is not cls:
return cls(rv)
return rv
def make_simple_escaping_wrapper(name):
orig = getattr(text_type, name)
def func(self, *args, **kwargs):
args = _escape_argspec(list(args), enumerate(args), self.escape)
_escape_argspec(kwargs, iteritems(kwargs), self.escape)
return self.__class__(orig(self, *args, **kwargs))
func.__name__ = orig.__name__
func.__doc__ = orig.__doc__
return func
for method in '__getitem__', 'capitalize', \
'title', 'lower', 'upper', 'replace', 'ljust', \
'rjust', 'lstrip', 'rstrip', 'center', 'strip', \
'translate', 'expandtabs', 'swapcase', 'zfill':
locals()[method] = make_simple_escaping_wrapper(method)
# new in python 2.5
if hasattr(text_type, 'partition'):
def partition(self, sep):
return tuple(map(self.__class__,
text_type.partition(self, self.escape(sep))))
def rpartition(self, sep):
return tuple(map(self.__class__,
text_type.rpartition(self, self.escape(sep))))
# new in python 2.6
if hasattr(text_type, 'format'):
def format(*args, **kwargs):
self, args = args[0], args[1:]
formatter = EscapeFormatter(self.escape)
kwargs = _MagicFormatMapping(args, kwargs)
return self.__class__(formatter.vformat(self, args, kwargs))
def __html_format__(self, format_spec):
if format_spec:
raise ValueError('Unsupported format specification '
'for Markup.')
return self
# not in python 3
if hasattr(text_type, '__getslice__'):
__getslice__ = make_simple_escaping_wrapper('__getslice__')
del method, make_simple_escaping_wrapper
class _MagicFormatMapping(Mapping):
"""This class implements a dummy wrapper to fix a bug in the Python
standard library for string formatting.
See http://bugs.python.org/issue13598 for information about why
this is necessary.
"""
def __init__(self, args, kwargs):
self._args = args
self._kwargs = kwargs
self._last_index = 0
def __getitem__(self, key):
if key == '':
idx = self._last_index
self._last_index += 1
try:
return self._args[idx]
except LookupError:
pass
key = str(idx)
return self._kwargs[key]
def __iter__(self):
return iter(self._kwargs)
def __len__(self):
return len(self._kwargs)
if hasattr(text_type, 'format'):
class EscapeFormatter(string.Formatter):
def __init__(self, escape):
self.escape = escape
def format_field(self, value, format_spec):
if hasattr(value, '__html_format__'):
rv = value.__html_format__(format_spec)
elif hasattr(value, '__html__'):
if format_spec:
raise ValueError('No format specification allowed '
'when formatting an object with '
'its __html__ method.')
rv = value.__html__()
else:
rv = string.Formatter.format_field(self, value, format_spec)
return text_type(self.escape(rv))
def _escape_argspec(obj, iterable, escape):
"""Helper for various string-wrapped functions."""
for key, value in iterable:
if hasattr(value, '__html__') or isinstance(value, string_types):
obj[key] = escape(value)
return obj
class _MarkupEscapeHelper(object):
"""Helper for Markup.__mod__"""
def __init__(self, obj, escape):
self.obj = obj
self.escape = escape
__getitem__ = lambda s, x: _MarkupEscapeHelper(s.obj[x], s.escape)
__unicode__ = __str__ = lambda s: text_type(s.escape(s.obj))
__repr__ = lambda s: str(s.escape(repr(s.obj)))
__int__ = lambda s: int(s.obj)
__float__ = lambda s: float(s.obj)
# we have to import it down here as the speedups and native
# modules import the markup type which is defined above.
try:
from markupsafe._speedups import escape, escape_silent, soft_unicode
except ImportError:
from markupsafe._native import escape, escape_silent, soft_unicode
if not PY2:
soft_str = soft_unicode
__all__.append('soft_str')
| bsd-3-clause |
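The `__html__` protocol that `Markup` builds on can be illustrated without the library: an escape helper trusts any object exposing `__html__` and escapes everything else. This is a simplified sketch of the idea, not markupsafe's actual implementation:

```python
def simple_escape(s):
    """Return s HTML-escaped, unless it declares itself safe via __html__."""
    if hasattr(s, '__html__'):
        return s.__html__()
    return (str(s).replace('&', '&amp;')    # ampersand must be replaced first
                  .replace('<', '&lt;')
                  .replace('>', '&gt;')
                  .replace('"', '&#34;')
                  .replace("'", '&#39;'))

class Link:
    """An object with an HTML representation; escaping leaves it untouched."""
    def __html__(self):
        return '<a href="#">home</a>'
```

This is also why `Markup.__html__` simply returns `self`: once a string is marked safe, escaping it again is a no-op, which is what prevents double escaping.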
bq/horus | src/horus/gui/workbench/control/main.py | 2 | 3260 | # -*- coding: utf-8 -*-
# This file is part of the Horus Project
__author__ = 'Jesús Arroyo Torrens <jesus.arroyo@bq.com>'
__copyright__ = 'Copyright (C) 2014-2016 Mundo Reader S.L.'
__license__ = 'GNU General Public License v2 http://www.gnu.org/licenses/gpl2.html'
from horus.util import profile
from horus.gui.engine import driver, calibration_data, image_capture
from horus.gui.util.video_view import VideoView
from horus.gui.workbench.workbench import Workbench
from horus.gui.workbench.control.panels import CameraControl, LaserControl, \
MotorControl, GcodeControl
class ControlWorkbench(Workbench):
def __init__(self, parent):
Workbench.__init__(self, parent, name=_('Control workbench'))
def add_panels(self):
self.add_panel('camera_control', CameraControl)
self.add_panel('laser_control', LaserControl)
# self.add_panel('ldr_value', LDRControl)
self.add_panel('motor_control', MotorControl)
self.add_panel('gcode_control', GcodeControl)
def add_pages(self):
self.add_page('video_view', VideoView(self, self._video_frame))
self.panels_collection.expandable_panels[
profile.settings['current_panel_control']].on_title_clicked(None)
def _video_frame(self):
return image_capture.capture_image()
def on_open(self):
self.pages_collection['video_view'].play()
def on_close(self):
try:
driver.board.lasers_off()
self.pages_collection['video_view'].stop()
laser_control = self.panels_collection.expandable_panels['laser_control']
laser_control.get_control('left_button').control.SetValue(False)
laser_control.get_control('right_button').control.SetValue(False)
except:
pass
def reset(self):
self.pages_collection['video_view'].reset()
def setup_engine(self):
driver.camera.set_frame_rate(int(profile.settings['frame_rate']))
driver.camera.set_resolution(
profile.settings['camera_width'], profile.settings['camera_height'])
driver.camera.set_rotate(profile.settings['camera_rotate'])
driver.camera.set_hflip(profile.settings['camera_hflip'])
driver.camera.set_vflip(profile.settings['camera_vflip'])
driver.camera.set_luminosity(profile.settings['luminosity'])
image_capture.set_mode_texture()
image_capture.texture_mode.set_brightness(profile.settings['brightness_control'])
image_capture.texture_mode.set_contrast(profile.settings['contrast_control'])
image_capture.texture_mode.set_saturation(profile.settings['saturation_control'])
image_capture.texture_mode.set_exposure(profile.settings['exposure_control'])
image_capture.set_use_distortion(profile.settings['use_distortion'])
width, height = driver.camera.get_resolution()
calibration_data.set_resolution(width, height)
calibration_data.camera_matrix = profile.settings['camera_matrix']
calibration_data.distortion_vector = profile.settings['distortion_vector']
driver.board.motor_speed(profile.settings['motor_speed_control'])
driver.board.motor_acceleration(profile.settings['motor_acceleration_control'])
| gpl-2.0 |
rlkelly/spotipy | examples/artist_discography.py | 7 | 1630 | import sys
import spotipy
''' shows the albums and tracks for a given artist.
'''
def get_artist(name):
results = sp.search(q='artist:' + name, type='artist')
items = results['artists']['items']
if len(items) > 0:
return items[0]
else:
return None
def show_album_tracks(album):
tracks = []
results = sp.album_tracks(album['id'])
tracks.extend(results['items'])
while results['next']:
results = sp.next(results)
tracks.extend(results['items'])
for track in tracks:
print(' ', track['name'])
print()
def show_artist_albums(artist):
albums = []
results = sp.artist_albums(artist['id'], album_type='album')
albums.extend(results['items'])
while results['next']:
results = sp.next(results)
albums.extend(results['items'])
print('Total albums:', len(albums))
unique = set() # skip duplicate albums
for album in albums:
name = album['name']
        if name not in unique:
print(name)
unique.add(name)
show_album_tracks(album)
def show_artist(artist):
print('====', artist['name'], '====')
print('Popularity: ', artist['popularity'])
if len(artist['genres']) > 0:
print('Genres: ', ','.join(artist['genres']))
if __name__ == '__main__':
sp = spotipy.Spotify()
sp.trace = False
if len(sys.argv) < 2:
print(('Usage: {0} artist name'.format(sys.argv[0])))
else:
name = ' '.join(sys.argv[1:])
artist = get_artist(name)
show_artist(artist)
show_artist_albums(artist)
| mit |
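Both `show_album_tracks` and `show_artist_albums` use the same cursor-paging idiom: take the first page, then keep following `results['next']` until it is falsy. The pattern in isolation, with a stub standing in for the Spotify client (the names here are illustrative, not spotipy's API):

```python
class StubPager:
    """Fake paged API: 'next' holds the index of the following page."""
    def __init__(self, pages):
        self.pages = pages

    def first(self):
        return self.pages[0]

    def next(self, results):
        return self.pages[results['next']]

def collect_items(pager):
    """Accumulate items across all pages, following the 'next' cursor."""
    items = []
    results = pager.first()
    items.extend(results['items'])
    while results['next']:
        results = pager.next(results)
        items.extend(results['items'])
    return items
```

With spotipy the same loop works because `sp.next(results)` resolves the page's `next` URL; the stub just replaces the URL with a list index.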
vshtanko/scikit-learn | examples/linear_model/plot_lasso_lars.py | 363 | 1080 | #!/usr/bin/env python
"""
=====================
Lasso path using LARS
=====================
Computes Lasso Path along the regularization parameter using the LARS
algorithm on the diabetes dataset. Each color represents a different
feature of the coefficient vector, and this is displayed as a function
of the regularization parameter.
"""
print(__doc__)
# Author: Fabian Pedregosa <fabian.pedregosa@inria.fr>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
from sklearn import datasets
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
print("Computing regularization path using the LARS ...")
alphas, _, coefs = linear_model.lars_path(X, y, method='lasso', verbose=True)
xx = np.sum(np.abs(coefs.T), axis=1)
xx /= xx[-1]
plt.plot(xx, coefs.T)
ymin, ymax = plt.ylim()
plt.vlines(xx, ymin, ymax, linestyle='dashed')
plt.xlabel('|coef| / max|coef|')
plt.ylabel('Coefficients')
plt.title('LASSO Path')
plt.axis('tight')
plt.show()
| bsd-3-clause |
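The x-axis of the plot above is the normalized ℓ1 norm `|coef| / max|coef|`, built from a `coefs` array of shape `(n_features, n_alphas)`. That normalization step on its own:

```python
import numpy as np

def l1_fraction(coefs):
    """Sum |coefficients| at each step of the path, scaled to end at 1."""
    xx = np.sum(np.abs(coefs.T), axis=1)  # one l1 norm per alpha value
    return xx / xx[-1]
```

Plotting the coefficient path against this quantity shows how each feature's coefficient grows as the regularization is relaxed.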
SergioChan/Stream-Framework | stream_framework/tests/storage/base.py | 6 | 10937 | from stream_framework.activity import Activity
from stream_framework.storage.base import BaseActivityStorage, BaseTimelineStorage
from stream_framework.verbs.base import Love as PinVerb
from stream_framework.tests.utils import FakeActivity, Pin
from mock import patch
import datetime
import unittest
def implementation(meth):
def wrapped_test(self, *args, **kwargs):
if self.storage.__class__ in (BaseActivityStorage, BaseTimelineStorage):
raise unittest.SkipTest('only test this on actual implementations')
return meth(self, *args, **kwargs)
return wrapped_test
def compare_lists(a, b, msg):
a_stringified = list(map(str, a))
b_stringified = list(map(str, b))
assert a_stringified == b_stringified, msg
class TestBaseActivityStorageStorage(unittest.TestCase):
'''
    Makes sure base wirings are not broken; you should
    subclass this test class for every BaseActivityStorage subclass
    to make sure the API is respected
'''
storage_cls = BaseActivityStorage
storage_options = {'activity_class': Activity}
def setUp(self):
self.pin = Pin(
id=1, created_at=datetime.datetime.now() - datetime.timedelta(hours=1))
self.storage = self.storage_cls(**self.storage_options)
self.activity = FakeActivity(
1, PinVerb, self.pin, 1, datetime.datetime.now(), {})
self.args = ()
self.kwargs = {}
def tearDown(self):
self.storage.flush()
def test_add_to_storage(self):
with patch.object(self.storage, 'add_to_storage') as add_to_storage:
self.storage.add(self.activity, *self.args, **self.kwargs)
add_to_storage.assert_called()
def test_remove_from_storage(self):
with patch.object(self.storage, 'remove_from_storage') as remove_from_storage:
self.storage.remove(self.activity)
remove_from_storage.assert_called()
remove_from_storage.assert_called_with(
[self.activity.serialization_id], *self.args, **self.kwargs)
def test_get_from_storage(self):
with patch.object(self.storage, 'get_from_storage') as get_from_storage:
self.storage.get(self.activity)
get_from_storage.assert_called()
get_from_storage.assert_called_with(
[self.activity], *self.args, **self.kwargs)
@implementation
def test_add(self):
add_count = self.storage.add(
self.activity, *self.args, **self.kwargs)
self.assertEqual(add_count, 1)
@implementation
def test_add_get(self):
self.storage.add(self.activity, *self.args, **self.kwargs)
result = self.storage.get(
self.activity.serialization_id, *self.args, **self.kwargs)
assert result == self.activity
@implementation
def test_add_twice(self):
self.storage.add(
self.activity, *self.args, **self.kwargs)
        # this shouldn't raise errors
self.storage.add(
self.activity, *self.args, **self.kwargs)
@implementation
def test_get_missing(self):
result = self.storage.get(
self.activity.serialization_id, *self.args, **self.kwargs)
assert result is None
@implementation
def test_remove(self):
self.storage.remove(self.activity, *self.args, **self.kwargs)
@implementation
def test_add_remove(self):
self.storage.add(self.activity, *self.args, **self.kwargs)
result = self.storage.get(
self.activity.serialization_id, *self.args, **self.kwargs)
assert result == self.activity
self.storage.remove(
self.activity, *self.args, **self.kwargs)
result = self.storage.get(
self.activity.serialization_id, *self.args, **self.kwargs)
assert result is None
class TestBaseTimelineStorageClass(unittest.TestCase):
storage_cls = BaseTimelineStorage
storage_options = {'activity_class': Activity}
def setUp(self):
self.storage = self.storage_cls(**self.storage_options)
self.test_key = 'key'
if self.__class__ != TestBaseTimelineStorageClass:
self.storage.delete(self.test_key)
self.storage.flush()
def tearDown(self):
if self.__class__ != TestBaseTimelineStorageClass:
self.storage.delete(self.test_key)
self.storage.flush()
def _build_activity_list(self, ids_list):
now = datetime.datetime.now()
pins = [Pin(id=i, created_at=now + datetime.timedelta(hours=i))
for i in ids_list]
pins_ids = zip(pins, ids_list)
return [FakeActivity(i, PinVerb, pin, i, now + datetime.timedelta(hours=i), {'i': i}) for pin, i in pins_ids]
def assert_results(self, results, activities, msg=''):
activity_ids = []
extra_context = []
for result in results:
if hasattr(result, 'serialization_id'):
activity_ids.append(result.serialization_id)
else:
activity_ids.append(result)
if hasattr(result, 'extra_context'):
extra_context.append(result.extra_context)
compare_lists(
activity_ids, [a.serialization_id for a in activities], msg)
if extra_context:
            self.assertEqual(
[a.extra_context for a in activities], extra_context)
@implementation
def test_count_empty(self):
assert self.storage.count(self.test_key) == 0
@implementation
def test_count_insert(self):
assert self.storage.count(self.test_key) == 0
activity = self._build_activity_list([1])[0]
self.storage.add(self.test_key, activity)
assert self.storage.count(self.test_key) == 1
@implementation
def test_add_many(self):
results = self.storage.get_slice(self.test_key, 0, None)
        # make sure there is no data pollution
assert results == []
activities = self._build_activity_list(range(3, 0, -1))
self.storage.add_many(self.test_key, activities)
results = self.storage.get_slice(self.test_key, 0, None)
self.assert_results(results, activities)
@implementation
def test_add_many_unique(self):
activities = self._build_activity_list(
list(range(3, 0, -1)) + list(range(3, 0, -1)))
self.storage.add_many(self.test_key, activities)
results = self.storage.get_slice(self.test_key, 0, None)
self.assert_results(results, activities[:3])
@implementation
def test_contains(self):
activities = self._build_activity_list(range(4, 0, -1))
self.storage.add_many(self.test_key, activities[:3])
results = self.storage.get_slice(self.test_key, 0, None)
if self.storage.contains:
self.assert_results(results, activities[:3])
for a in activities[:3]:
assert self.storage.contains(self.test_key, a.serialization_id)
assert not self.storage.contains(
self.test_key, activities[3].serialization_id)
@implementation
def test_index_of(self):
activities = self._build_activity_list(range(1, 43))
activity_ids = [a.serialization_id for a in activities]
self.storage.add_many(self.test_key, activities)
assert self.storage.index_of(self.test_key, activity_ids[41]) == 0
assert self.storage.index_of(self.test_key, activity_ids[7]) == 34
with self.assertRaises(ValueError):
self.storage.index_of(self.test_key, 0)
@implementation
def test_trim(self):
activities = self._build_activity_list(range(10, 0, -1))
self.storage.add_many(self.test_key, activities[5:])
self.storage.add_many(self.test_key, activities[:5])
assert self.storage.count(self.test_key) == 10
self.storage.trim(self.test_key, 5)
assert self.storage.count(self.test_key) == 5
results = self.storage.get_slice(self.test_key, 0, None)
self.assert_results(
results, activities[:5], 'check trim direction')
@implementation
def test_noop_trim(self):
activities = self._build_activity_list(range(10, 0, -1))
self.storage.add_many(self.test_key, activities)
assert self.storage.count(self.test_key) == 10
self.storage.trim(self.test_key, 12)
assert self.storage.count(self.test_key) == 10
@implementation
def test_trim_empty_feed(self):
self.storage.trim(self.test_key, 12)
@implementation
def test_remove_missing(self):
activities = self._build_activity_list(range(10))
self.storage.remove(self.test_key, activities[1])
self.storage.remove_many(self.test_key, activities[1:2])
@implementation
def test_add_remove(self):
assert self.storage.count(self.test_key) == 0
activities = self._build_activity_list(range(10, 0, -1))
self.storage.add_many(self.test_key, activities)
self.storage.remove_many(self.test_key, activities[5:])
results = self.storage.get_slice(self.test_key, 0, 20)
assert self.storage.count(self.test_key) == 5
self.assert_results(results, activities[:5])
@implementation
def test_get_many_empty(self):
assert self.storage.get_slice(self.test_key, 0, 10) == []
@implementation
def test_timeline_order(self):
activities = self._build_activity_list(range(10, 0, -1))
self.storage.add_many(self.test_key, activities)
self.storage.trim(self.test_key, 5)
self.storage.add_many(self.test_key, activities)
results = self.storage.get_slice(self.test_key, 0, 5)
self.assert_results(results, activities[:5])
@implementation
def test_implements_batcher_as_ctx_manager(self):
batcher = self.storage.get_batch_interface()
        assert hasattr(batcher, '__enter__')
        assert hasattr(batcher, '__exit__')
@implementation
def test_union_set_slice(self):
activities = self._build_activity_list(range(42, 0, -1))
self.storage.add_many(self.test_key, activities)
assert self.storage.count(self.test_key) == 42
s1 = self.storage.get_slice(self.test_key, 0, 21)
self.assert_results(s1, activities[0:21])
s2 = self.storage.get_slice(self.test_key, 22, 42)
self.assert_results(s2, activities[22:42])
s3 = self.storage.get_slice(self.test_key, 22, 23)
self.assert_results(s3, activities[22:23])
s4 = self.storage.get_slice(self.test_key, None, 23)
self.assert_results(s4, activities[:23])
s5 = self.storage.get_slice(self.test_key, None, None)
self.assert_results(s5, activities[:])
s6 = self.storage.get_slice(self.test_key, 1, None)
self.assert_results(s6, activities[1:])
# check intersections
assert len(set(s1 + s2)) == len(s1) + len(s2)
| bsd-3-clause |
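The `implementation` decorator used throughout these tests — running a test only on concrete subclasses and skipping it on the abstract base — can be sketched standalone (simplified: this version checks the test class itself rather than `self.storage.__class__`):

```python
import unittest

def concrete_only(meth):
    """Skip the wrapped test when invoked on the abstract base class."""
    def wrapped(self, *args, **kwargs):
        if type(self) is BaseStorageTest:
            raise unittest.SkipTest('only test this on actual implementations')
        return meth(self, *args, **kwargs)
    return wrapped

class BaseStorageTest(unittest.TestCase):
    @concrete_only
    def test_roundtrip(self):
        self.assertTrue(True)  # a real suite would exercise the storage here

class ConcreteStorageTest(BaseStorageTest):
    pass  # inherits test_roundtrip, which now actually runs
```

The payoff is that one shared test body covers every backend: each concrete storage gets the full suite for free, while the abstract base class still loads cleanly and reports its tests as skipped.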
flavour/iscram | models/impact.py | 3 | 5502 | # -*- coding: utf-8 -*-
"""
Impact
Impact resources used by the old Assessment module
"""
if deployment_settings.has_module("assess"):
# Impact as component of incident reports
s3mgr.model.add_component("impact_impact", irs_ireport="ireport_id")
def impact_tables():
""" Load the Impact tables as-needed """
sector_id = s3db.org_sector_id
ireport_id = s3db.irs_ireport_id
# Load the models we depend on
if deployment_settings.has_module("assess"):
s3mgr.load("assess_assess")
assess_id = response.s3.assess_id
module = "impact"
# -------------------------------------------------------------------------
# Impact Type
resourcename = "type"
tablename = "%s_%s" % (module, resourcename)
table = db.define_table(tablename,
Field("name", length=128, notnull=True, unique=True),
sector_id(),
*s3_meta_fields())
# CRUD strings
ADD_IMPACT_TYPE = T("Add Impact Type")
LIST_IMPACT_TYPE = T("List Impact Types")
s3.crud_strings[tablename] = Storage(
title_create = ADD_IMPACT_TYPE,
title_display = T("Impact Type Details"),
title_list = LIST_IMPACT_TYPE,
title_update = T("Edit Impact Type"),
title_search = T("Search Impact Type"),
subtitle_create = T("Add New Impact Type"),
subtitle_list = T("Impact Types"),
label_list_button = LIST_IMPACT_TYPE,
label_create_button = ADD_IMPACT_TYPE,
label_delete_button = T("Delete Impact Type"),
msg_record_created = T("Impact Type added"),
msg_record_modified = T("Impact Type updated"),
msg_record_deleted = T("Impact Type deleted"),
msg_list_empty = T("No Impact Types currently registered"),
name_nice = T("Impact"),
name_nice_plural = T("Impacts"))
def impact_type_comment():
if auth.has_membership(auth.id_group("'Administrator'")):
return DIV(A(ADD_IMPACT_TYPE,
_class="colorbox",
_href=URL(c="impact", f="type",
args="create",
vars=dict(format="popup",
child="impact_type_id")),
_target="top",
_title=ADD_IMPACT_TYPE
)
)
else:
return None
impact_type_id = S3ReusableField("impact_type_id", db.impact_type,
sortby="name",
requires = IS_NULL_OR(IS_ONE_OF(db, "impact_type.id","%(name)s", sort=True)),
represent = lambda id: s3_get_db_field_value(tablename = "impact_type",
fieldname = "name",
look_up_value = id),
label = T("Impact Type"),
comment = impact_type_comment(),
ondelete = "RESTRICT")
# =====================================================================
# Impact
# Load model
ireport_id = s3db.irs_ireport_id
resourcename = "impact"
tablename = "%s_%s" % (module, resourcename)
table = db.define_table(tablename,
ireport_id(readable=False, writable=False),
assess_id(readable=False, writable=False),
impact_type_id(),
Field("value", "double"),
Field("severity", "integer",
default = 0),
s3_comments(),
*s3_meta_fields())
table.severity.requires = IS_EMPTY_OR(IS_IN_SET(assess_severity_opts))
table.severity.widget=SQLFORM.widgets.radio.widget
table.severity.represent = s3_assess_severity_represent
# CRUD strings
ADD_IMPACT = T("Add Impact")
LIST_IMPACT = T("List Impacts")
s3.crud_strings[tablename] = Storage(
title_create = ADD_IMPACT,
title_display = T("Impact Details"),
title_list = LIST_IMPACT,
title_update = T("Edit Impact"),
title_search = T("Search Impacts"),
subtitle_create = T("Add New Impact"),
subtitle_list = T("Impacts"),
label_list_button = LIST_IMPACT,
label_create_button = ADD_IMPACT,
label_delete_button = T("Delete Impact"),
msg_record_created = T("Impact added"),
msg_record_modified = T("Impact updated"),
msg_record_deleted = T("Impact deleted"),
msg_list_empty = T("No Impacts currently registered"))
# Provide a handle to this load function
s3mgr.loader(impact_tables,
"impact_impact",
"impact_type")
# END =========================================================================
| mit |
ofirr/dojango | dojango/util/media.py | 1 | 4383 | from django.conf import settings
from dojango.conf import settings as dojango_settings
from django.conf.urls import url
from django.core.exceptions import ImproperlyConfigured
from django.utils._os import safe_join
from django.utils.encoding import force_str
from django.views.static import serve
from os import path, listdir
def find_app_dir(app_name):
"""Given an app name (from settings.INSTALLED_APPS) return the abspath
to that app directory"""
i = app_name.rfind('.')
if i == -1:
m, a = app_name, None
else:
m, a = app_name[:i], app_name[i+1:]
try:
if a is None:
mod = __import__(m, {}, {}, [])
else:
mod = getattr(__import__(m, {}, {}, [force_str(a)]), a)
return path.dirname(path.abspath(mod.__file__))
except ImportError as e:
raise ImproperlyConfigured('ImportError %s: %s' % (app_name, e.args[0]))
def find_app_dojo_dir(app_name):
"""Checks, if a dojo-media directory exists within a given app and returns the absolute path to it."""
base = find_app_dir(app_name)
if base:
media_dir = safe_join(base, 'dojo-media')
if path.isdir(media_dir): return media_dir
return None # no dojo-media directory was found within that app
def find_app_dojo_dir_and_url(app_name):
"""Returns a list of tuples of dojo modules within an apps 'dojo-media' directory.
Each tuple contains the abspath to the module directory and the module name.
"""
ret = []
media_dir = find_app_dojo_dir(app_name)
if not media_dir: return None
for d in listdir(media_dir):
if path.isdir(safe_join(media_dir, d)):
if d not in ("src", "release") and not d.startswith("."):
ret.append(tuple([safe_join(media_dir, d), "%(module)s" % {
'module': d
}]))
return tuple(ret)
dojo_media_library = dict((app, find_app_dojo_dir_and_url(app))
for app in settings.INSTALLED_APPS)
dojo_media_apps = tuple(app for app in settings.INSTALLED_APPS
if dojo_media_library[app])
def _check_app_dojo_dirs():
"""Checks, that each dojo module is just present once. Otherwise it would throw an error."""
check = {}
for app in dojo_media_apps:
root_and_urls = dojo_media_library[app]
for elem in root_and_urls:
root, url = elem
if url in check and root != dojo_media_library[check[url]][0]:
raise ImproperlyConfigured(
"Two apps (%s, %s) contain the same dojo module (%s) in the dojo-media-directory pointing to two different directories (%s, %s)" %
(repr(app), repr(check[url]), repr(root.split("/")[-1]), repr(root), repr(dojo_media_library[check[url]][0][0])))
check[url] = app
def _build_urlmap():
"""Creating a url map for all dojo modules (dojo-media directory), that are available within activated apps."""
seen = {}
valid_urls = [] # keep the order!
for app in dojo_media_apps:
root_and_urls = dojo_media_library[app]
for elem in root_and_urls:
root, url = elem
if url.startswith('/'): url = url[1:]
if url in seen: continue
valid_urls.append((url, root))
seen[url] = root
base_url = dojango_settings.DOJO_MEDIA_URL # dojango_settings.BASE_MEDIA_URL
if base_url.startswith('/'): base_url = base_url[1:]
# all new modules need to be available next to dojo, so we need to allow a version-string in between
# e.g. /dojo-media/1.3.1/mydojonamespace == /dojo-media/1.2.0/mydojonamespace
valid_urls = [("%(base_url)s/([\w\d\.\-]*/)?%(url)s" % {
'base_url': base_url,
'url': m[0]
}, m[1]) for m in valid_urls]
valid_urls.append(("%(base_url)s/release/" % {'base_url': base_url}, path.join(dojango_settings.BASE_MEDIA_ROOT, "release")))
valid_urls.append(("%(base_url)s/" % {'base_url': base_url}, path.join(dojango_settings.BASE_MEDIA_ROOT, "src")))
return valid_urls
_check_app_dojo_dirs() # is each dojo module just created once?
dojo_media_urls = _build_urlmap()
# url_patterns that can be used directly within urls.py
url_patterns = [url('^%s(?P<path>.*)$' % url_str, serve, {'document_root': root, 'show_indexes': True} )
for url_str, root in dojo_media_urls ]
| bsd-3-clause |
djpine/pyman | Book/chap8/Problems/test.py | 3 | 2735 | import numpy as np
import matplotlib.pyplot as plt
def LineFitWt(x, y, sig):
"""
Returns slope and y-intercept of weighted linear fit to
(x,y) data set.
    Inputs: x and y data arrays and uncertainty array (sig)
           for the y data set.
Outputs: slope and y-intercept of best fit to data and
uncertainties of slope & y-intercept.
"""
sig2 = sig**2
norm = (1./sig2).sum()
xhat = (x/sig2).sum() / norm
yhat = (y/sig2).sum() / norm
slope = ((x-xhat)*y/sig2).sum()/((x-xhat)*x/sig2).sum()
yint = yhat - slope*xhat
sig2_slope = 1./((x-xhat)*x/sig2).sum()
sig2_yint = sig2_slope * (x*x/sig2).sum() / norm
return slope, yint, np.sqrt(sig2_slope), np.sqrt(sig2_yint)
def redchisq(x, y, dy, slope, yint):
chisq = (((y-yint-slope*x)/dy)**2).sum()
return chisq/float(x.size-2)
# Read data from data file
t, V, dV = np.loadtxt("RLcircuit.txt", skiprows=2, unpack=True)
########## Code to transform & fit data starts here ##########
# Transform data and parameters from ln V = ln V0 - Gamma t
# to linear form: Y = A + B*X, where Y = ln V, X = t, dY = dV/V
X = t # transform t data for fitting (not needed as X=t)
Y = np.log(V) # transform V data for fitting
dY = dV/V # transform uncertainties for fitting
# Fit transformed data X, Y, dY to obtain fitting parameters
# B & A. Also returns uncertainties dA & dB in B & A
B, A, dB, dA = LineFitWt(X, Y, dY)
# Return reduced chi-squared
redchisqr = redchisq(X, Y, dY, B, A)
# Determine fitting parameters for original exponential function
# V = V0 exp(-Gamma t) ...
V0 = np.exp(A)
Gamma = -B
# ... and their uncertainties
dV0 = V0 * dA
dGamma = dB
###### Code to plot transformed data and fit starts here ######
# Create line corresponding to fit using fitting parameters
# Only two points are needed to specify a straight line
Xext = 0.05*(X.max()-X.min())
Xfit = np.array([X.min()-Xext, X.max()+Xext]) # smallest & largest X points
Yfit = B*Xfit + A # generates Y from X data &
# fitting function
plt.errorbar(X, Y, dY, fmt="b^")
plt.plot(Xfit, Yfit, "c-", zorder=-1)
plt.title(r"$\mathrm{Fit\ to:}\ \ln V = \ln V_0-\Gamma t$ or $Y = A + BX$")
plt.xlabel('time (ns)')
plt.ylabel('ln voltage (volts)')
plt.xlim(-50, 550)
plt.text(210, 1.5, u"A = ln V0 = {0:0.4f} \xb1 {1:0.4f}".format(A, dA))
plt.text(210, 1.1, u"B = -Gamma = {0:0.4f} \xb1 {1:0.4f} /ns".format(B, dB))
plt.text(210, 0.7, "$\chi_r^2$ = {0:0.3f}".format(redchisqr))
plt.text(210, 0.3, u"V0 = {0:0.2f} \xb1 {1:0.2f} V".format(V0, dV0))
plt.text(210, -0.1,u"Gamma = {0:0.4f} \xb1 {1:0.4f} /ns".format(Gamma, dGamma))
plt.savefig("RLcircuit.pdf")
plt.show() | cc0-1.0 |
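The algebra inside `LineFitWt` above can be sanity-checked in isolation: with uniform uncertainties the weighted estimator must recover an exact line. A trimmed re-derivation of just the slope/intercept part (the same formulas, no plotting):

```python
import numpy as np

def weighted_line_fit(x, y, sig):
    """Least-squares fit of y = yint + slope * x with weights 1/sig**2."""
    w = 1.0 / sig**2
    norm = w.sum()
    xhat = (w * x).sum() / norm      # weighted mean of x
    yhat = (w * y).sum() / norm      # weighted mean of y
    slope = (w * (x - xhat) * y).sum() / (w * (x - xhat) * x).sum()
    yint = yhat - slope * xhat
    return slope, yint
```

Feeding it noise-free data on the line y = 2x + 1 with unit uncertainties returns slope 2 and intercept 1 to machine precision, confirming the centered-sum formulation.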
abonaca/gary | gary/dynamics/nonlinear.py | 1 | 8350 | # coding: utf-8
""" Utilities for nonlinear dynamics """
from __future__ import division, print_function
__author__ = "adrn <adrn@astro.columbia.edu>"
# Standard library
import logging
# Third-party
import numpy as np
# Project
from ..util import gram_schmidt
__all__ = ['lyapunov_spectrum', 'fast_lyapunov_max', 'lyapunov_max']
# Create logger
logger = logging.getLogger(__name__)
def lyapunov_spectrum(w0, integrator, dt, nsteps, t1=0., deviation_vecs=None):
""" Compute the spectrum of Lyapunov exponents given equations of motions
for small deviations.
Parameters
----------
w0 : array_like
Initial conditions for all phase-space coordinates.
integrator : gary.Integrator
An instantiated Integrator object. Must have a run() method.
dt : numeric
Timestep.
nsteps : int
Number of steps to run for.
t1 : numeric (optional)
Time of initial conditions. Assumed to be t=0.
deviation_vecs : array_like (optional)
Specify your own initial deviation vectors.
"""
w0 = np.atleast_2d(w0)
# phase-space dimensionality
if w0.shape[0] > 1:
raise ValueError("Only a single initial condition (one orbit) is supported.")
ndim_ps = w0.shape[1]
if deviation_vecs is None:
# initialize (ndim_ps) deviation vectors
A = np.zeros((ndim_ps,ndim_ps))
for ii in range(ndim_ps):
A[ii] = np.random.normal(0.,1.,size=ndim_ps)
A[ii] /= np.linalg.norm(A[ii])
else:
raise NotImplementedError()
all_w0 = np.zeros((ndim_ps,ndim_ps*2))
for ii in range(ndim_ps):
all_w0[ii] = np.append(w0,A[ii])
# array to store the full, main orbit
full_w = np.zeros((nsteps+1,ndim_ps))
full_w[0] = w0
full_ts = np.zeros((nsteps+1,))
full_ts[0] = t1
# arrays to store the Lyapunov exponents and times
lyap = np.zeros((nsteps+1,ndim_ps))
rhi = np.zeros((nsteps+1,ndim_ps)) # sum of logs
ts = np.zeros(nsteps+1)
time = t1
for ii in range(1,nsteps+1):
tt,ww = integrator.run(all_w0, dt=dt, nsteps=1, t1=time)
time += dt
alf = gram_schmidt(ww[-1,:,ndim_ps:])
rhi[ii] = rhi[ii-1] + np.log(alf)
lyap[ii] = rhi[ii]/time
ts[ii] = time
full_w[ii:ii+1] = ww[1:,0,:ndim_ps]
full_ts[ii:ii+1] = tt[1:]
all_w0 = ww[-1].copy()
return lyap, full_ts, full_w
def fast_lyapunov_max(w0, potential, dt, nsteps, d0=1e-5,
nsteps_per_pullback=10, noffset_orbits=2, t1=0.,
atol=1E-9, rtol=1E-8):
from ..integrate._dop853 import dop853_lyapunov
if not hasattr(potential, 'c_instance'):
raise TypeError("Input potential must be a CPotential subclass.")
t,w,l = dop853_lyapunov(potential.c_instance, w0,
dt, nsteps, t1, atol, rtol,
d0, nsteps_per_pullback, noffset_orbits)
return l,t,w
def lyapunov_max(w0, integrator, dt, nsteps, d0=1e-5, nsteps_per_pullback=10,
noffset=8, t1=0.):
"""
Compute the maximum Lyapunov exponent of an orbit by integrating many
nearby orbits (``noffset``) separated with isotropically distributed
directions but the same initial deviation length, ``d0``. This algorithm
re-normalizes the offset orbits every ``nsteps_per_pullback`` steps.
Parameters
----------
w0 : array_like
Initial conditions for all phase-space coordinates.
integrator : gary.Integrator
An instantiated Integrator object. Must have a run() method.
dt : numeric
Timestep.
nsteps : int
Number of steps to run for.
d0 : numeric (optional)
The initial separation.
nsteps_per_pullback : int (optional)
Number of steps to run before re-normalizing the offset vectors.
noffset : int (optional)
Number of offset orbits to run.
t1 : numeric (optional)
Time of initial conditions. Assumed to be t=0.
Returns
-------
LEs : :class:`numpy.ndarray`
Lyapunov exponents calculated from each offset / deviation orbit.
ts : :class:`numpy.ndarray`
Array of times from integrating main orbit.
ws : :class:`numpy.ndarray`
All orbits -- main / parent orbit is index 0, all others are the
full orbits of the deviations. TODO: right now, only returns parent
orbit.
"""
w0 = np.atleast_2d(w0)
# number of iterations
niter = nsteps // nsteps_per_pullback
ndim = w0.shape[1]
# define offset vectors to start the offset orbits on
d0_vec = np.random.normal(size=(noffset,ndim)) # Gaussian draws give isotropic directions after normalization
d0_vec /= np.linalg.norm(d0_vec, axis=1)[:,np.newaxis]
d0_vec *= d0
w_offset = w0 + d0_vec
all_w0 = np.vstack((w0,w_offset))
# array to store the full, main orbit
full_w = np.zeros((nsteps+1,ndim))
full_w[0] = w0
full_ts = np.zeros((nsteps+1,))
full_ts[0] = t1
# arrays to store the Lyapunov exponents and times
LEs = np.zeros((niter,noffset))
ts = np.zeros_like(LEs)
time = t1
for i in range(1,niter+1):
ii = i * nsteps_per_pullback
tt,ww = integrator.run(all_w0, dt=dt, nsteps=nsteps_per_pullback, t1=time)
time += dt*nsteps_per_pullback
main_w = ww[-1,0][np.newaxis]
d1 = ww[-1,1:] - main_w
d1_mag = np.linalg.norm(d1, axis=1)
LEs[i-1] = np.log(d1_mag/d0)
ts[i-1] = time
w_offset = ww[-1,0] + d0 * d1 / d1_mag[:,np.newaxis]
all_w0 = np.vstack((ww[-1,0],w_offset))
full_w[(i-1)*nsteps_per_pullback+1:ii+1] = ww[1:,0]
full_ts[(i-1)*nsteps_per_pullback+1:ii+1] = tt[1:]
LEs = np.array([LEs[:ii].sum(axis=0)/ts[ii-1] for ii in range(1,niter+1)])
return LEs, full_ts, full_w
# def sali(w0, integrator, dt, nsteps, t1=0., deviation_vecs=None):
# """ Compute the Smaller Alignment Index (SALI)
# See: Skokos, Ch. 2001, J. Phys. A: Math. Gen., 34, 10029-10043
# Parameters
# ----------
# w0 : array_like
# Initial conditions for all phase-space coordinates.
# integrator : gary.Integrator
# An instantiated Integrator object. Must have a run() method.
# dt : numeric
# Timestep.
# nsteps : int
# Number of steps to run for.
# d0 : numeric (optional)
# The initial separation.
# nsteps_per_pullback : int (optional)
# Number of steps to run before re-normalizing the offset vectors.
# noffset : int (optional)
# Number of offset orbits to run.
# t1 : numeric (optional)
# Time of initial conditions. Assumed to be t=0.
# """
# w0 = np.atleast_2d(w0)
# # phase-space dimensionality
# if w0.shape[0] > 1:
# raise ValueError("Initial condition vector ")
# ndim_ps = w0.shape[1]
# if deviation_vecs is None:
# # initialize (ndim_ps) deviation vectors
# A = np.zeros((ndim_ps,ndim_ps))
# for ii in range(ndim_ps):
# A[ii] = np.random.normal(0.,1.,size=ndim_ps)
# A[ii] /= np.linalg.norm(A[ii])
# vec = gram_schmidt(A)
# A = A[:2]
# else:
# raise NotImplementedError()
# all_w0 = np.zeros((2,ndim_ps*2))
# for ii in range(2):
# all_w0[ii] = np.append(w0,A[ii])
# # array to store the full, main orbit
# full_w = np.zeros((nsteps+1,ndim_ps))
# full_w[0] = w0
# full_ts = np.zeros((nsteps+1,))
# full_ts[0] = t1
# # arrays to store the sali
# sali = np.zeros((nsteps+1,))
# time = t1
# for ii in range(1,nsteps+1):
# tt,ww = integrator.run(all_w0, dt=dt, nsteps=1, t1=time)
# time += dt
# dm = np.sqrt(np.sum((ww[-1,0,ndim_ps:] - ww[-1,1,ndim_ps:])**2))
# dq = np.sqrt(np.sum((ww[-1,0,ndim_ps:] + ww[-1,1,ndim_ps:])**2))
# sali[ii] = min(dm, dq)
# # renormalize
# ww[-1,0,ndim_ps:] = ww[-1,0,ndim_ps:] / np.linalg.norm(ww[-1,0,ndim_ps:])
# ww[-1,1,ndim_ps:] = ww[-1,1,ndim_ps:] / np.linalg.norm(ww[-1,1,ndim_ps:])
# full_w[ii:ii+1] = ww[-1,0,:ndim_ps]
# full_ts[ii:ii+1] = time
# all_w0 = ww[-1].copy()
# return sali, full_ts, full_w
| mit |
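The renormalization loop in `lyapunov_max` — integrate nearby orbits, log the stretching of the separation, pull the offsets back to length `d0` — can be reduced to a one-dimensional map, where the result is checkable against a known exponent. The helper below is illustrative only, not part of the package:

```python
import math

def lyapunov_max_map(f, x0, d0=1e-9, nsteps=20000):
    """Sketch of the pull-back scheme in lyapunov_max, reduced to a 1-D map:
    evolve a nearby orbit, log the stretching of the separation each step,
    then renormalize the separation back to d0."""
    x, y = x0, x0 + d0
    log_sum, count = 0.0, 0
    for _ in range(nsteps):
        x, y = f(x), f(y)
        d1 = abs(y - x)
        if d1 == 0.0:               # orbits collapsed; reseed the offset
            y = x + d0
            continue
        log_sum += math.log(d1 / d0)
        count += 1
        y = x + d0 * (y - x) / d1   # pull the offset orbit back to distance d0
    return log_sum / count

# logistic map at r=4: the maximal Lyapunov exponent is known to be ln 2
lam = lyapunov_max_map(lambda x: 4.0 * x * (1.0 - x), 0.3)
```

The estimate converges toward ln 2 ≈ 0.693; the same bookkeeping, done with vectors and `nsteps_per_pullback`-step batches, is what the function above implements.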
bookus/VideoClassification | VideoClassification/Experiments/ResNet_Temporal_EX.py | 1 | 6385 | import torch
import torch.nn as nn
from torch.autograd import Variable
try:
from cv2 import cv2
except:
import cv2
import VideoClassification.Config.Config as Config
from VideoClassification.model.resnet_twostream.resnet_twostream import resnet152_TemporalNet,resnet101_TemporalNet
from VideoClassification.utils.Logger import Logger
from VideoClassification.utils.DataSetLoader.UCF101Loader import train_UCF0101_Temporal,test_UCF0101_Temporal
from VideoClassification.utils.toolkits import accuracy,try_to_load_state_dict
from VideoClassification.utils.DataSetLoader.PictureQueue import PictureQueue,GenVariables_Spatial,GenVariables_Temporal
############ Config
logger = Logger(Config.LOGSpace+Config.EX_ID)
savepath = Config.ExWorkSpace+Config.EX_ID+'/'
import os.path
if os.path.isdir(savepath)==False:
os.mkdir(savepath)
batchsize = 86
############
def ResNet101_Temporal_Net_Run():
epochs = 80
loops = 2000
learningrate = 0.1
attenuation = 0.5
model = resnet101_TemporalNet(pretrained=False,dropout=0.4).cuda()
if Config.LOAD_SAVED_MODE_PATH is not None :
import types
model.try_to_load_state_dict = types.MethodType(try_to_load_state_dict,model)
model.try_to_load_state_dict(torch.load(Config.LOAD_SAVED_MODE_PATH))
print('LOAD {} done!'.format(Config.LOAD_SAVED_MODE_PATH))
lossfunc = nn.CrossEntropyLoss()
optim = torch.optim.SGD(model.parameters(),lr=learningrate,momentum=0.1)
pq_train = PictureQueue(dsl=train_UCF0101_Temporal(),Gen=GenVariables_Temporal,batchsize=batchsize)
pq_test = PictureQueue(dsl=test_UCF0101_Temporal(),Gen=GenVariables_Temporal,batchsize=batchsize)
cnt = 0
for epoch in range(epochs) :
for l in range(loops) :
cnt+=1
imgs,labels = pq_train.Get()
model.zero_grad()
pred = model(imgs)
loss = lossfunc(pred,labels)
logger.scalar_summary('ResNet101/Temporal/train_loss',loss.data[0],cnt)
loss.backward()
optim.step()
print('Temporal epoch: {} cnt: {} loss: {}'.format(epoch,cnt,loss.data[0]))
if cnt%20 == 0:
imgs,labels = pq_test.Get()
pred = model.inference(imgs)
loss = lossfunc(pred,labels)
logger.scalar_summary('ResNet101/Temporal/test_loss',loss.data[0],cnt)
#acc
acc = accuracy(pred,labels,topk=(1,5,10))
logger.scalar_summary('ResNet101/Temporal/test_acc@1',acc[0],cnt)
logger.scalar_summary('ResNet101/Temporal/test_acc@5',acc[1],cnt)
logger.scalar_summary('ResNet101/Temporal/test_acc@10',acc[2],cnt)
imgs,labels = pq_train.Get()
pred = model.inference(imgs)
acc = accuracy(pred,labels,topk=(1,5,10))
logger.scalar_summary('ResNet101/Temporal/train_acc@1',acc[0],cnt)
logger.scalar_summary('ResNet101/Temporal/train_acc@5',acc[1],cnt)
logger.scalar_summary('ResNet101/Temporal/train_acc@10',acc[2],cnt)
if cnt%2000 == 0:
savefile = savepath + 'ResNet101_Temporal_EX1_{:02d}.pt'.format(epoch%50)
print('Temporal save model to {}'.format(savefile))
torch.save(model.state_dict(),savefile)
if epoch in [10,20,50,60]:
learningrate = learningrate*attenuation
optim = torch.optim.SGD(model.parameters(),lr=learningrate,momentum=0.9)
def ResNet152_Temporal_Net_Run():
epochs = 80
loops = 2000
learningrate = 0.2
attenuation = 0.5
model = resnet152_TemporalNet(pretrained=False,dropout=0.4).cuda()
if Config.LOAD_SAVED_MODE_PATH is not None :
import types
model.try_to_load_state_dict = types.MethodType(try_to_load_state_dict,model)
model.try_to_load_state_dict(torch.load(Config.LOAD_SAVED_MODE_PATH))
print('LOAD {} done!'.format(Config.LOAD_SAVED_MODE_PATH))
lossfunc = nn.CrossEntropyLoss()
optim = torch.optim.SGD(model.parameters(),lr=learningrate,momentum=0.1)
pq_train = PictureQueue(dsl=train_UCF0101_Temporal(),Gen=GenVariables_Temporal,batchsize=batchsize)
pq_test = PictureQueue(dsl=test_UCF0101_Temporal(),Gen=GenVariables_Temporal,batchsize=batchsize)
cnt = 0
for epoch in range(epochs) :
for l in range(loops) :
cnt+=1
imgs,labels = pq_train.Get()
model.zero_grad()
pred = model(imgs)
loss = lossfunc(pred,labels)
logger.scalar_summary('ResNet152/Temporal/train_loss',loss.data[0],cnt)
loss.backward()
optim.step()
print('Temporal epoch: {} cnt: {} loss: {}'.format(epoch,cnt,loss.data[0]))
if cnt%20 == 0:
imgs,labels = pq_test.Get()
pred = model.inference(imgs)
loss = lossfunc(pred,labels)
logger.scalar_summary('ResNet152/Temporal/test_loss',loss.data[0],cnt)
#acc
acc = accuracy(pred,labels,topk=(1,5,10))
logger.scalar_summary('ResNet152/Temporal/test_acc@1',acc[0],cnt)
logger.scalar_summary('ResNet152/Temporal/test_acc@5',acc[1],cnt)
logger.scalar_summary('ResNet152/Temporal/test_acc@10',acc[2],cnt)
imgs,labels = pq_train.Get()
pred = model.inference(imgs)
acc = accuracy(pred,labels,topk=(1,5,10))
logger.scalar_summary('ResNet152/Temporal/train_acc@1',acc[0],cnt)
logger.scalar_summary('ResNet152/Temporal/train_acc@5',acc[1],cnt)
logger.scalar_summary('ResNet152/Temporal/train_acc@10',acc[2],cnt)
if cnt%2000 == 0:
savefile = savepath + 'ResNet152_Temporal_EX1_{:02d}.pt'.format(epoch%50)
print('Temporal save model to {}'.format(savefile))
torch.save(model.state_dict(),savefile)
if epoch in [10,20,50,60]:
learningrate = learningrate*attenuation
optim = torch.optim.SGD(model.parameters(),lr=learningrate,momentum=0.9)
if __name__=='__main__':
x = torch.randn(3,20,224,224)
x = Variable(x)
model = resnet101_TemporalNet()
y = model(x)
y.size()
| gpl-3.0 |
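The `accuracy(pred, labels, topk=(1,5,10))` helper imported from `utils.toolkits` is not shown in this excerpt. A pure-Python sketch of what a top-k accuracy of that shape computes (interface assumed from the call site: one percentage per `k`):

```python
def topk_accuracy(scores, labels, topk=(1, 5)):
    """Pure-Python sketch of the accuracy(pred, labels, topk=...) helper the
    script imports from utils.toolkits; interface assumed from the call site."""
    res = []
    for k in topk:
        hits = 0
        for row, label in zip(scores, labels):
            # class indices ranked by descending score
            ranked = sorted(range(len(row)), key=row.__getitem__, reverse=True)
            if label in ranked[:k]:
                hits += 1
        res.append(100.0 * hits / len(labels))
    return res

scores = [[0.1, 0.7, 0.2],   # sample 0: true class 1 ranks first
          [0.5, 0.3, 0.2]]   # sample 1: true class 2 ranks last
labels = [1, 2]
print(topk_accuracy(scores, labels, topk=(1, 2)))  # [50.0, 50.0]
```

The real helper works on CUDA tensors, but the ranking logic is the same.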
Joble/CumulusCI | setup.py | 1 | 2305 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from setuptools import setup
from pkgutil import walk_packages
import cumulusci
def find_packages(path='.', prefix=""):
yield prefix
prefix = prefix + "."
for _, name, ispkg in walk_packages(path, prefix):
if ispkg:
yield name
with open('README.rst') as readme_file:
readme = readme_file.read()
with open('HISTORY.rst') as history_file:
history = history_file.read()
requirements = [
'click>=6.2',
'coloredlogs>=5.2',
'docutils>=0.13.1',
'github3.py==0.9.6',
'plaintable==0.1.1',
'requests[security]>=2.9.1',
'responses>=0.5.1',
'rst2ansi>=0.1.5',
'sarge>=0.1.4',
'selenium',
'salesforce-bulk==1.1.0',
'simple-salesforce>=0.72',
'xmltodict==0.10.2',
'HiYaPyCo>=0.4.8',
'PyCrypto>=2.6.1',
'PyGithub>=1.25.1',
'PyYAML>=3.11',
'SQLAlchemy>=1.1.4',
]
test_requirements = [
'nose>=1.3.7',
'mock',
]
setup(
name='cumulusci',
version='2.0.0-beta30',
description="Build and release tools for Salesforce developers",
long_description=readme + '\n\n' + history,
author="Jason Lantz",
author_email='jlantz@salesforce.com',
url='https://github.com/SalesforceFoundation/CumulusCI',
packages = list(find_packages(cumulusci.__path__, cumulusci.__name__)),
package_dir={'cumulusci':
'cumulusci'},
entry_points={
'console_scripts': [
'cci=cumulusci.cli.cli:cli',
'cumulusci2=cumulusci.cli.cli:cli'
]
},
include_package_data=True,
install_requires=requirements,
license="BSD license",
zip_safe=False,
keywords='cumulusci',
classifiers=[
'Development Status :: 2 - Pre-Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Natural Language :: English',
"Programming Language :: Python :: 2",
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
],
test_suite='tests',
tests_require=test_requirements
)
| bsd-3-clause |
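The `find_packages` generator above yields the root package name plus every subpackage discovered by `pkgutil.walk_packages`. A self-contained check against a stdlib package (which ships modules but no subpackages) illustrates the behavior:

```python
from pkgutil import walk_packages
import json

def find_packages(path, prefix=""):
    # mirror of the generator defined in setup.py above
    yield prefix
    prefix = prefix + "."
    for _, name, ispkg in walk_packages(path, prefix):
        if ispkg:
            yield name

pkgs = list(find_packages(json.__path__, json.__name__))
print(pkgs)  # ['json'] -- json has plain modules, no subpackages
```

Run against `cumulusci.__path__` as in `setup()`, every nested subpackage would appear with its dotted name.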
brijeshkesariya/odoo | addons/resource/faces/pcalendar.py | 433 | 28436 | #@+leo-ver=4
#@+node:@file pcalendar.py
#@@language python
#@<< Copyright >>
#@+node:<< Copyright >>
############################################################################
# Copyright (C) 2005, 2006, 2007, 2008 by Reithinger GmbH
# mreithinger@web.de
#
# This file is part of faces.
#
# faces is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# faces is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the
# Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
############################################################################
#@-node:<< Copyright >>
#@nl
"""
This module contains all classes and functions for the project plan calendar
"""
#@<< Imports >>
#@+node:<< Imports >>
from string import *
import datetime
import time
import re
import locale
import bisect
import sys
TIME_RANGE_PATTERN = re.compile("(\\d+):(\\d+)\\s*-\\s*(\\d+):(\\d+)")
TIME_DELTA_PATTERN = re.compile("([-+]?\\d+(\\.\\d+)?)([dwmyMH])")
DEFAULT_MINIMUM_TIME_UNIT = 15
DEFAULT_WORKING_DAYS_PER_WEEK = 5
DEFAULT_WORKING_DAYS_PER_MONTH = 20
DEFAULT_WORKING_DAYS_PER_YEAR = 200
DEFAULT_WORKING_HOURS_PER_DAY = 8
DEFAULT_WORKING_TIMES = ( (8 * 60, 12 * 60 ),
(13 * 60, 17 * 60 ) )
DEFAULT_WORKING_DAYS = { 0 : DEFAULT_WORKING_TIMES,
1 : DEFAULT_WORKING_TIMES,
2 : DEFAULT_WORKING_TIMES,
3 : DEFAULT_WORKING_TIMES,
4 : DEFAULT_WORKING_TIMES,
5 : (),
6 : () }
#@-node:<< Imports >>
#@nl
#@+others
#@+node:to_time_range
def to_time_range(src):
"""
converts a string to a time range, i.e.
(from, to)
from, to are ints, specifying the minutes since midnight
"""
if not src: return ()
mo = TIME_RANGE_PATTERN.match(src)
if not mo:
raise ValueError("%s is not a time range" % src)
from_time = int(mo.group(1)) * 60 + int(mo.group(2))
to_time = int(mo.group(3)) * 60 + int(mo.group(4))
return from_time, to_time
#@-node:to_time_range
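A quick check of the conversion above, with the module's regex restated locally (as a raw string) so the snippet is self-contained:

```python
import re

# the module's pattern, restated with a raw string
TIME_RANGE_PATTERN = re.compile(r"(\d+):(\d+)\s*-\s*(\d+):(\d+)")

def to_time_range(src):
    """Minutes-since-midnight pair for an 'H:MM-H:MM' string."""
    if not src:
        return ()
    mo = TIME_RANGE_PATTERN.match(src)
    if not mo:
        raise ValueError("%s is not a time range" % src)
    return (int(mo.group(1)) * 60 + int(mo.group(2)),
            int(mo.group(3)) * 60 + int(mo.group(4)))

print(to_time_range("8:00-12:00"))  # (480, 720)
```

This is exactly the form `DEFAULT_WORKING_TIMES` uses: `(8*60, 12*60)` is the morning slot.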
#@+node:to_datetime
def to_datetime(src):
"""
a tolerant conversion function to convert different strings
to a datetime.datetime
"""
#to get the original value for wrappers
new = getattr(src, "_value", src)
while new is not src:
src = new
new = getattr(src, "_value", src)
if isinstance(src, _WorkingDateBase):
src = src.to_datetime()
if isinstance(src, datetime.datetime):
return src
src = str(src)
formats = [ "%x %H:%M",
"%x",
"%Y-%m-%d %H:%M",
"%y-%m-%d %H:%M",
"%d.%m.%Y %H:%M",
"%d.%m.%y %H:%M",
"%Y%m%d %H:%M",
"%d/%m/%y %H:%M",
"%d/%m/%Y %H:%M",
"%d/%m/%Y",
"%d/%m/%y",
"%Y-%m-%d",
"%y-%m-%d",
"%d.%m.%Y",
"%d.%m.%y",
"%Y%m%d" ]
for f in formats:
try:
conv = time.strptime(src, f)
return datetime.datetime(*conv[0:-3])
except Exception, e:
pass
raise TypeError("'%s' (%s) is not a datetime" % (src, str(type(src))))
#@-node:to_datetime
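The fallback loop in `to_datetime` — try each candidate format until one parses — reduces to this pattern. A minimal Python 3 sketch with a trimmed format list (the module's list is longer):

```python
import datetime
import time

def parse_dt(src, formats=("%Y-%m-%d %H:%M", "%Y-%m-%d", "%d.%m.%Y")):
    """Sketch of to_datetime's fallback loop (format list trimmed here)."""
    for f in formats:
        try:
            # struct_time[0:-3] is (year, month, day, hour, minute, second)
            return datetime.datetime(*time.strptime(src, f)[0:-3])
        except ValueError:
            pass
    raise TypeError("%r is not a datetime" % (src,))

print(parse_dt("2020-03-05"))   # 2020-03-05 00:00:00
print(parse_dt("05.03.2020"))   # same date via a later fallback format
```

Catching only `ValueError` (rather than the module's bare `Exception`) keeps unrelated errors visible.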
#@+node:_to_days
def _to_days(src):
"""
converts a string of the day abbreviations mon, tue, wed,
thu, fri, sat, sun to a dict with correct weekday indices.
For Example
convert_to_days('mon, tue, thu') results in
{ 0:1, 1:1, 3:1 }
"""
tokens = src.split(",")
result = { }
for t in tokens:
try:
index = { "mon" : 0,
"tue" : 1,
"wed" : 2,
"thu" : 3,
"fri" : 4,
"sat" : 5,
"sun" : 6 } [ lower(t.strip()) ]
result[index] = 1
except:
raise ValueError("%s is not a day" % (t))
return result
#@-node:_to_days
#@+node:_add_to_time_spans
def _add_to_time_spans(src, to_add, is_free):
if not isinstance(to_add, (tuple, list)):
to_add = (to_add,)
tmp = []
for start, end, f in src:
tmp.append((start, True, f))
tmp.append((end, False, f))
for v in to_add:
if isinstance(v, (tuple, list)):
start = to_datetime(v[0])
end = to_datetime(v[1])
else:
start = to_datetime(v)
end = start.replace(hour=0, minute=0) + datetime.timedelta(1)
tmp.append((start, start <= end, is_free))
tmp.append((end, start > end, is_free))
tmp.sort()
# 0: date
# 1: is_start
# 2: is_free
sequence = []
free_count = 0
work_count = 0
last = None
for date, is_start, is_free in tmp:
if is_start:
if is_free:
if not free_count and not work_count:
last = date
free_count += 1
else:
if not work_count:
if free_count: sequence.append((last, date, True))
last = date
work_count += 1
else:
if is_free:
assert(free_count > 0)
free_count -= 1
if not free_count and not work_count:
sequence.append((last, date, True))
else:
assert(work_count > 0)
work_count -= 1
if not work_count: sequence.append((last, date, False))
if free_count: last = date
return tuple(sequence)
#@-node:_add_to_time_spans
#@+node:to_timedelta
def to_timedelta(src, cal=None, is_duration=False):
"""
converts a string to a datetime.timedelta. If cal is specified
it will be used for getting the working times. If is_duration=True,
working times will not be considered. Valid units are
d for Days
w for Weeks
m for Months
y for Years
H for Hours
M for Minutes
"""
cal = cal or _default_calendar
if isinstance(src, datetime.timedelta):
return datetime.timedelta(src.days, seconds=src.seconds)
if isinstance(src, (long, int, float)):
src = "%sM" % str(src)
if not isinstance(src, basestring):
raise ValueError("%s is not a duration" % (repr(src)))
src = src.strip()
if is_duration:
d_p_w = 7
d_p_m = 30
d_p_y = 360
d_w_h = 24
else:
d_p_w = cal.working_days_per_week
d_p_m = cal.working_days_per_month
d_p_y = cal.working_days_per_year
d_w_h = cal.working_hours_per_day
def convert_minutes(minutes):
minutes = int(minutes)
hours = minutes / 60
minutes = minutes % 60
days = hours / d_w_h
hours = hours % d_w_h
return [ days, 0, 0, 0, minutes, hours ]
def convert_days(value):
days = int(value)
value -= days
value *= d_w_h
hours = int(value)
value -= hours
value *= 60
minutes = round(value)
return [ days, 0, 0, 0, minutes, hours ]
sum_args = [ 0, 0, 0, 0, 0, 0 ]
split = src.split(" ")
for s in split:
mo = TIME_DELTA_PATTERN.match(s)
if not mo:
raise ValueError(src +
" is not a valid duration: valid"
" units are: d w m y M H")
unit = mo.group(3)
val = float(mo.group(1))
if unit == 'd':
args = convert_days(val)
elif unit == 'w':
args = convert_days(val * d_p_w)
elif unit == 'm':
args = convert_days(val * d_p_m)
elif unit == 'y':
args = convert_days(val * d_p_y)
elif unit == 'M':
args = convert_minutes(val)
elif unit == 'H':
args = convert_minutes(val * 60)
sum_args = [ a + b for a, b in zip(sum_args, args) ]
sum_args = tuple(sum_args)
return datetime.timedelta(*sum_args)
#@-node:to_timedelta
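With the default calendar constants (5 working days per week, 20 per month, 200 per year, 8 hours per day), the unit arithmetic in `to_timedelta` reduces to a sum of working minutes. A minimal sketch — illustrative only, it skips the final timedelta packing:

```python
import re

# default calendar constants from this module (assumed unchanged)
D_P_W, D_P_M, D_P_Y, H_P_D = 5, 20, 200, 8
PAT = re.compile(r"([-+]?\d+(\.\d+)?)([dwmyMH])")

def working_minutes(src):
    """Illustrative sketch: total working minutes for a duration string."""
    total = 0.0
    for tok in src.split():
        mo = PAT.match(tok)
        if not mo:
            raise ValueError("%s is not a valid duration" % tok)
        val, unit = float(mo.group(1)), mo.group(3)
        days = {"d": 1, "w": D_P_W, "m": D_P_M, "y": D_P_Y}.get(unit)
        if days is not None:
            total += val * days * H_P_D * 60
        elif unit == "H":
            total += val * 60
        else:                      # "M" -> already minutes
            total += val
    return total

print(working_minutes("2w 3d"))   # (2*5 + 3) days * 8 h * 60 min = 6240.0
```

Passing `is_duration=True` to the real function swaps these constants for calendar time (7/30/360 days, 24 hours).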
#@+node:timedelta_to_str
def timedelta_to_str(delta, format, cal=None, is_duration=False):
cal = cal or _default_calendar
if is_duration:
d_p_w = 7
d_p_m = 30
d_p_y = 365
d_w_h = 24
else:
d_p_w = cal.working_days_per_week
d_p_m = cal.working_days_per_month
d_p_y = cal.working_days_per_year
d_w_h = cal.working_hours_per_day
has_years = format.find("%y") > -1
has_minutes = format.find("%M") > -1
has_hours = format.find("%H") > -1 or has_minutes
has_days = format.find("%d") > -1
has_weeks = format.find("%w") > -1
has_months = format.find("%m") > -1
result = format
days = delta.days
d_r = (days, format)
minutes = delta.seconds / 60
def rebase(d_r, cond1, cond2, letter, divisor):
#rebase the days
if not cond1: return d_r
days, result = d_r
if cond2:
val = days / divisor
if not val:
result = re.sub("{[^{]*?%" + letter + "[^}]*?}", "", result)
result = result.replace("%" + letter, str(val))
days %= divisor
else:
result = result.replace("%" + letter,
locale.format("%.2f",
(float(days) / divisor)))
return (days, result)
d_r = rebase(d_r, has_years, has_months or has_weeks or has_days, "y", d_p_y)
d_r = rebase(d_r, has_months, has_weeks or has_days, "m", d_p_m)
d_r = rebase(d_r, has_weeks, has_days, "w", d_p_w)
days, result = d_r
if not has_days:
minutes += days * d_w_h * 60
days = 0
if has_hours:
if not days:
result = re.sub("{[^{]*?%d[^}]*?}", "", result)
result = result.replace("%d", str(days))
else:
result = result.replace("%d",
"%.2f" % (days + float(minutes)
/ (d_w_h * 60)))
if has_hours:
if has_minutes:
val = minutes / 60
if not val:
result = re.sub("{[^{]*?%H[^}]*?}", "", result)
result = result.replace("%H", str(val))
minutes %= 60
else:
result = result.replace("%H", "%.2f" % (float(minutes) / 60))
if not minutes:
result = re.sub("{[^{]*?%M[^}]*?}", "", result)
result = result.replace("%M", str(minutes))
result = result.replace("{", "")
result = result.replace("}", "")
return result.strip()
#@-node:timedelta_to_str
#@+node:strftime
def strftime(dt, format):
"""
an extended version of strftime that introduces some new
directives:
%IW iso week number
%IY iso year
%IB full month name appropriate to iso week
%ib abbreviated month name appropriate to iso week
%im month as decimal number appropriate to iso week
"""
iso = dt.isocalendar()
if iso[0] != dt.year:
iso_date = dt.replace(day=1, month=1)
format = format \
.replace("%IB", iso_date.strftime("%B"))\
.replace("%ib", iso_date.strftime("%b"))\
.replace("%im", iso_date.strftime("%m"))
else:
format = format \
.replace("%IB", "%B")\
.replace("%ib", "%b")\
.replace("%im", "%m")
format = format \
.replace("%IW", str(iso[1]))\
.replace("%IY", str(iso[0]))
return dt.strftime(format)
#@-node:strftime
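The `%IW`/`%IY` substitution above is easiest to see on a date whose ISO year differs from its calendar year. A trimmed sketch covering just those two directives:

```python
import datetime

def iso_strftime(dt, fmt):
    # sketch: only the %IW (iso week) and %IY (iso year) directives
    iso = dt.isocalendar()
    fmt = fmt.replace("%IW", str(iso[1])).replace("%IY", str(iso[0]))
    return dt.strftime(fmt)

# 2016-01-01 is a Friday, so it falls in ISO week 53 of ISO year 2015
print(iso_strftime(datetime.datetime(2016, 1, 1), "%Y-%m-%d: week %IW of %IY"))
```

The mismatch case is exactly what the `iso[0] != dt.year` branch in the full function handles for the month directives.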
#@+node:union
def union(*calendars):
"""
returns a calendar that unifies all working times
"""
#@ << check arguments >>
#@+node:<< check arguments >>
if len(calendars) == 1:
calendars = calendars[0]
#@nonl
#@-node:<< check arguments >>
#@nl
#@ << intersect vacations >>
#@+node:<< intersect vacations >>
free_time = []
for c in calendars:
for start, end, is_free in c.time_spans:
if is_free:
free_time.append((start, False))
free_time.append((end, True))
count = len(calendars)
open = 0
time_spans = []
free_time.sort()
for date, is_end in free_time:
if is_end:
if open == count:
time_spans.append((start, date, True))
open -= 1
else:
open += 1
start = date
#@-node:<< intersect vacations >>
#@nl
#@ << unify extra worktime >>
#@+node:<< unify extra worktime >>
for c in calendars:
for start, end, is_free in c.time_spans:
if not is_free:
time_spans = _add_to_time_spans(time_spans, start, end)
#@nonl
#@-node:<< unify extra worktime >>
#@nl
#@ << unify working times >>
#@+node:<< unify working times >>
working_times = {}
for d in range(0, 7):
times = []
for c in calendars:
for start, end in c.working_times.get(d, []):
times.append((start, False))
times.append((end, True))
times.sort()
open = 0
ti = []
start = None
for time, is_end in times:
if not is_end:
if not start: start = time
open += 1
else:
open -= 1
if not open:
ti.append((start, time))
start = None
if ti:
working_times[d] = ti
#@-node:<< unify working times >>
#@nl
#@ << create result calendar >>
#@+node:<< create result calendar >>
result = Calendar()
result.working_times = working_times
result.time_spans = time_spans
result._recalc_working_time()
result._build_mapping()
#@nonl
#@-node:<< create result calendar >>
#@nl
return result
#@nonl
#@-node:union
#@+node:class _CalendarItem
class _CalendarItem(int):
#@ << class _CalendarItem declarations >>
#@+node:<< class _CalendarItem declarations >>
__slots__ = ()
calender = None
#@-node:<< class _CalendarItem declarations >>
#@nl
#@ @+others
#@+node:__new__
def __new__(cls, val):
try:
return int.__new__(cls, val)
except OverflowError:
return int.__new__(cls, sys.maxint)
#@-node:__new__
#@+node:round
def round(self, round_up=True):
m_t_u = self.calendar.minimum_time_unit
minutes = int(self)
base = (minutes / m_t_u) * m_t_u
minutes %= m_t_u
round_up = round_up and minutes > 0 or minutes > m_t_u / 2
if round_up: base += m_t_u
return self.__class__(base)
#@-node:round
#@-others
#@-node:class _CalendarItem
#@+node:class _Minutes
class _Minutes(_CalendarItem):
#@ << class _Minutes declarations >>
#@+node:<< class _Minutes declarations >>
__slots__ = ()
STR_FORMAT = "{%dd}{ %HH}{ %MM}"
#@-node:<< class _Minutes declarations >>
#@nl
#@ @+others
#@+node:__new__
def __new__(cls, src=0, is_duration=False):
"""
converts a timedelta to working minutes.
"""
if isinstance(src, cls) or type(src) is int:
return _CalendarItem.__new__(cls, src)
cal = cls.calendar
if not isinstance(src, datetime.timedelta):
src = to_timedelta(src, cal, is_duration)
d_w_h = is_duration and 24 or cal.working_hours_per_day
src = src.days * d_w_h * 60 + src.seconds / 60
return _CalendarItem.__new__(cls, src)
#@-node:__new__
#@+node:__cmp__
def __cmp__(self, other):
return cmp(int(self), int(self.__class__(other)))
#@-node:__cmp__
#@+node:__add__
def __add__(self, other):
try:
return self.__class__(int(self) + int(self.__class__(other)))
except:
return NotImplemented
#@-node:__add__
#@+node:__sub__
def __sub__(self, other):
try:
return self.__class__(int(self) - int(self.__class__(other)))
except:
return NotImplemented
#@-node:__sub__
#@+node:to_timedelta
def to_timedelta(self, is_duration=False):
d_w_h = is_duration and 24 or self.calendar.working_hours_per_day
minutes = int(self)
hours = minutes / 60
minutes = minutes % 60
days = hours / d_w_h
hours = hours % d_w_h
return datetime.timedelta(days, hours=hours, minutes=minutes)
#@nonl
#@-node:to_timedelta
#@+node:strftime
def strftime(self, format=None, is_duration=False):
td = self.to_timedelta(is_duration)
return timedelta_to_str(td, format or self.STR_FORMAT,
self.calendar, is_duration)
#@nonl
#@-node:strftime
#@-others
#@-node:class _Minutes
#@+node:class _WorkingDateBase
class _WorkingDateBase(_CalendarItem):
"""
A datetime which only has valid values within the
working times of a specific calendar
"""
#@ << class _WorkingDateBase declarations >>
#@+node:<< class _WorkingDateBase declarations >>
timetuple = True
STR_FORMAT = "%x %H:%M"
_minutes = _Minutes
__slots__ = ()
#@-node:<< class _WorkingDateBase declarations >>
#@nl
#@ @+others
#@+node:__new__
def __new__(cls, src):
#cls.__bases__[0] is the base of
#the calendar specific StartDate and EndDate
if isinstance(src, cls.__bases__[0]) or type(src) in (int, float):
return _CalendarItem.__new__(cls, src)
src = cls.calendar.from_datetime(to_datetime(src))
return _CalendarItem.__new__(cls, src)
#@-node:__new__
#@+node:__repr__
def __repr__(self):
return self.strftime()
#@-node:__repr__
#@+node:to_datetime
def to_datetime(self):
return self.to_starttime()
#@-node:to_datetime
#@+node:to_starttime
def to_starttime(self):
return self.calendar.to_starttime(self)
#@-node:to_starttime
#@+node:to_endtime
def to_endtime(self):
return self.calendar.to_endtime(self)
#@-node:to_endtime
#@+node:__cmp__
def __cmp__(self, other):
return cmp(int(self), int(self.__class__(other)))
#@-node:__cmp__
#@+node:__add__
def __add__(self, other):
try:
return self.__class__(int(self) + int(self._minutes(other)))
except ValueError, e:
raise e
except:
return NotImplemented
#@-node:__add__
#@+node:__sub__
def __sub__(self, other):
if isinstance(other, (datetime.timedelta, str, _Minutes)):
try:
other = self._minutes(other)
except:
pass
if isinstance(other, self._minutes):
return self.__class__(int(self) - int(other))
try:
return self._minutes(int(self) - int(self.__class__(other)))
except:
return NotImplemented
#@-node:__sub__
#@+node:strftime
def strftime(self, format=None):
return strftime(self.to_datetime(), format or self.STR_FORMAT)
#@-node:strftime
#@-others
#@-node:class _WorkingDateBase
#@+node:class Calendar
class Calendar(object):
"""
A calendar to specify working times and vacations.
The calendar's epoch starts at 1979-01-01.
"""
#@ << declarations >>
#@+node:<< declarations >>
# january the first must be a monday
EPOCH = datetime.datetime(1979, 1, 1)
minimum_time_unit = DEFAULT_MINIMUM_TIME_UNIT
working_days_per_week = DEFAULT_WORKING_DAYS_PER_WEEK
working_days_per_month = DEFAULT_WORKING_DAYS_PER_MONTH
working_days_per_year = DEFAULT_WORKING_DAYS_PER_YEAR
working_hours_per_day = DEFAULT_WORKING_HOURS_PER_DAY
now = EPOCH
#@-node:<< declarations >>
#@nl
#@ @+others
#@+node:__init__
def __init__(self):
self.time_spans = ()
self._dt_num_can = ()
self._num_dt_can = ()
self.working_times = { }
self._recalc_working_time()
self._make_classes()
#@-node:__init__
#@+node:__or__
def __or__(self, other):
if isinstance(other, Calendar):
return union(self, other)
return NotImplemented
#@nonl
#@-node:__or__
#@+node:clone
def clone(self):
result = Calendar()
result.working_times = self.working_times.copy()
result.time_spans = self.time_spans
result._recalc_working_time()
result._build_mapping()
return result
#@nonl
#@-node:clone
#@+node:set_working_days
def set_working_days(self, day_range, trange, *further_tranges):
"""
Sets the working days of a calendar
day_range is a string of day abbreviations like 'mon, tue'
trange and further_tranges is a time range string like
'8:00-10:00'
"""
time_ranges = [ trange ] + list(further_tranges)
time_ranges = filter(bool, map(to_time_range, time_ranges))
days = _to_days(day_range)
for k in days.keys():
self.working_times[k] = time_ranges
self._recalc_working_time()
self._build_mapping()
#@-node:set_working_days
#@+node:set_vacation
def set_vacation(self, value):
"""
Sets vacation time.
value is either a datetime literal or
a sequence of items that can be
a datetime literals and or pair of datetime literals
"""
self.time_spans = _add_to_time_spans(self.time_spans, value, True)
self._build_mapping()
#@-node:set_vacation
#@+node:set_extra_work
def set_extra_work(self, value):
"""
Sets extra working time
value is either a datetime literal or
a sequence of items that can be
a datetime literals and or pair of datetime literals
"""
self.time_spans = _add_to_time_spans(self.time_spans, value, False)
self._build_mapping()
#@-node:set_extra_work
#@+node:from_datetime
def from_datetime(self, value):
assert(isinstance(value, datetime.datetime))
delta = value - self.EPOCH
days = delta.days
minutes = delta.seconds / 60
# calculate the weektime
weeks = days / 7
wtime = self.week_time * weeks
# calculate the daytime
days %= 7
dtime = sum(self.day_times[:days])
# calculate the minute time
slots = self.working_times.get(days, DEFAULT_WORKING_DAYS[days])
mtime = 0
for start, end in slots:
if minutes > end:
mtime += end - start
else:
if minutes > start:
mtime += minutes - start
break
result = wtime + dtime + mtime
# map exceptional timespans
dt_num_can = self._dt_num_can
pos = bisect.bisect(dt_num_can, (value,)) - 1
if pos >= 0:
start, end, nstart, nend, cend = dt_num_can[pos]
if value < end:
if nstart < nend:
delta = value - start
delta = delta.days * 24 * 60 + delta.seconds / 60
result = nstart + delta
else:
result = nstart
else:
result += (nend - cend) # == (result - cend) + nend
return result
#@-node:from_datetime
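The week/day/minute decomposition performed by `from_datetime` above can be sketched in isolation. The following is a hedged, self-contained sketch, not part of the original module: the epoch (taken as a Monday), the 8:00-12:00 and 13:00-17:00 slots, and the Monday-through-Friday working week are all assumed values standing in for the calendar's configuration.

```python
# Self-contained sketch of the from_datetime decomposition: slot times are
# minutes since midnight; DAY_TIMES/WEEK_TIME mirror _recalc_working_time.
import datetime

EPOCH = datetime.datetime(2000, 1, 3)  # assumed epoch; 2000-01-03 is a Monday
SLOTS = [(8 * 60, 12 * 60), (13 * 60, 17 * 60)]  # 8:00-12:00 and 13:00-17:00
DAY_TIMES = [sum(e - s for s, e in SLOTS) if d < 5 else 0 for d in range(7)]
WEEK_TIME = sum(DAY_TIMES)  # 2400 working minutes per week

def working_minutes_since_epoch(value):
    """Map a datetime to cumulative working minutes since EPOCH."""
    delta = value - EPOCH
    days, minutes = delta.days, delta.seconds // 60
    weeks, days = divmod(days, 7)
    # whole weeks, then whole days, then the partial day's slots
    result = weeks * WEEK_TIME + sum(DAY_TIMES[:days])
    slots = SLOTS if days < 5 else []
    for start, end in slots:
        if minutes > end:
            result += end - start       # slot fully elapsed
        elif minutes > start:
            result += minutes - start   # partially inside this slot
            break
    return result
```

With these assumed slots, 9:00 on the epoch Monday maps to 60 working minutes, and the following Monday at midnight maps to exactly one working week.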
#@+node:split_time
def split_time(self, value):
#map exceptional timespans
num_dt_can = self._num_dt_can
pos = bisect.bisect(num_dt_can, (value, sys.maxint)) - 1
if pos >= 0:
nstart, nend, start, end, cend = num_dt_can[pos]
if value < nend:
value = start + datetime.timedelta(minutes=value - nstart)
delta = value - self.EPOCH
return delta.days / 7, delta.days % 7, delta.seconds / 60, -1
else:
value += (cend - nend) # (value - nend + cend)
#calculate the weeks since the epoch
weeks = value / self.week_time
value %= self.week_time
#calculate the remaining days
days = 0
for day_time in self.day_times:
if value < day_time: break
value -= day_time
days += 1
#calculate the remaining minutes
minutes = 0
slots = self.working_times.get(days, DEFAULT_WORKING_DAYS[days])
index = 0
for start, end in slots:
delta = end - start
if delta > value:
minutes = start + value
break
else:
value -= delta
index += 1
return weeks, days, minutes, index
#@-node:split_time
#@+node:to_starttime
def to_starttime(self, value):
weeks, days, minutes, index = self.split_time(value)
return self.EPOCH + datetime.timedelta(weeks=weeks,
days=days,
minutes=minutes)
#@-node:to_starttime
#@+node:to_endtime
def to_endtime(self, value):
return self.to_starttime(value - 1) + datetime.timedelta(minutes=1)
#@-node:to_endtime
#@+node:get_working_times
def get_working_times(self, day):
return self.working_times.get(day, DEFAULT_WORKING_DAYS[day])
#@-node:get_working_times
#@+node:_build_mapping
def _build_mapping(self):
self._dt_num_can = self._num_dt_can = ()
dt_num_can = []
num_dt_can = []
delta = self.Minutes()
for start, end, is_free in self.time_spans:
cstart = self.StartDate(start)
cend = self.EndDate(end)
nstart = cstart + delta
if not is_free:
d = end - start
d = d.days * 24 * 60 + d.seconds / 60
nend = nstart + d
else:
nend = nstart
delta += (nend - nstart) - (cend - cstart)
dt_num_can.append((start, end, nstart, nend, cend))
num_dt_can.append((nstart, nend, start, end, cend))
self._dt_num_can = tuple(dt_num_can)
self._num_dt_can = tuple(num_dt_can)
#@-node:_build_mapping
#@+node:_recalc_working_time
def _recalc_working_time(self):
def slot_sum_time(day):
slots = self.working_times.get(day, DEFAULT_WORKING_DAYS[day])
return sum(map(lambda slot: slot[1] - slot[0], slots))
self.day_times = map(slot_sum_time, range(0, 7))
self.week_time = sum(self.day_times)
#@-node:_recalc_working_time
#@+node:_make_classes
def _make_classes(self):
#ensure that the classes are instance specific
class minutes(_Minutes):
calendar = self
__slots__ = ()
class db(_WorkingDateBase):
calendar = self
_minutes = minutes
__slots__ = ()
class wdt(db): __slots__ = ()
class edt(db):
__slots__ = ()
def to_datetime(self):
return self.to_endtime()
self.Minutes, self.StartDate, self.EndDate = minutes, wdt, edt
self.WorkingDate = self.StartDate
#@-node:_make_classes
#@-others
_default_calendar = Calendar()
WorkingDate = _default_calendar.WorkingDate
StartDate = _default_calendar.StartDate
EndDate = _default_calendar.EndDate
Minutes = _default_calendar.Minutes
#@-node:class Calendar
#@-others
if __name__ == '__main__':
cal = Calendar()
start = EndDate("10.1.2005")
delay = Minutes("4H")
start2 = cal.StartDate(start)
start3 = cal.StartDate("10.1.2005")
#@-node:@file pcalendar.py
#@-leo
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
ap1/PixeeBel | code/bbb_visualize/pb_visualize.py | 1 | 1911 | #!/usr/bin/env python
import socket
import struct
import time
import select
import screen
import signal
import sys
NR_CHANNELS = 4
visibleMs = 200
selectMs = 60
# Each queue element is a tuple of ( time-to-vanish, channel-id, bin-id, magnitude )
queue = []
bind_port = ( '127.0.0.1', 3000 )
interrupted = False
# Control+C handler
def sigint_handler( signal, frame ):
global interrupted
interrupted = True
print "SIGINT"
def millis():
return int( time.time() * 1000.0 )
def evalSelectDelay( nowMs ):
if not queue:
return selectMs / 1000.0
nextEventTimeMs = queue[ -1 ][ 0 ]
deltaMs = max( min( nextEventTimeMs - nowMs, selectMs ), 0 )
return deltaMs / 1000.0
def maybeDrainList( nowMs ):
if not queue:
return
while queue and queue[-1][0] <= nowMs:
# pop the last item
( _, channelId, binId, mag ) = queue.pop()
screen.hide( nowMs, channelId, binId, mag, queue=queue )
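The queue discipline used by `maybeDrainList` above can be sketched on its own: new events are inserted at the front, so the oldest deadline — the next event to vanish — always sits at the end and can be popped cheaply. A minimal self-contained sketch (not part of the original file, with placeholder channel/bin values):

```python
def drain(queue, now_ms):
    """Pop and return every event whose time-to-vanish has passed."""
    expired = []
    while queue and queue[-1][0] <= now_ms:
        expired.append(queue.pop())  # oldest deadline is at the end
    return expired

queue = []
for deadline in (100, 200, 300):      # events arrive in increasing deadline order
    queue.insert(0, (deadline, 'chan', 'bin'))
expired = drain(queue, 250)           # deadlines 100 and 200 have passed
```

This relies on events arriving in deadline order, which holds above because every event's deadline is `nowMs + visibleMs` with a fixed `visibleMs`.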
if __name__ == "__main__":
sock = socket.socket( socket.AF_INET, socket.SOCK_DGRAM )
sock.bind( bind_port )
screen.init()
signal.signal( signal.SIGINT, sigint_handler )
iter = 0
while not interrupted:
iter += 1
if iter % 100 == 0:
print "Queue length %d" % len( queue )
nowMs = millis()
timeout = evalSelectDelay( nowMs )
try:
readReady, _, _ = select.select( [ sock ], [], [], timeout )
except select.error:
continue
if not readReady:
maybeDrainList( nowMs )
continue
data, _ = sock.recvfrom( 6 )
binId = struct.unpack( "!H", data[0:2] )[0]
channelId = struct.unpack( "!H", data[2:4] )[0]
mag = struct.unpack( "!H", data[4:6] )[0]
queue.insert( 0, ( nowMs + visibleMs, channelId, binId, mag ) )
screen.show( nowMs, channelId, binId, mag, queue=queue )
maybeDrainList( nowMs )
screen.stop()
sys.exit( 0 )
| mit |
TomBaxter/osf.io | osf_tests/test_include_queryset.py | 23 | 2339 | """Tests for osf.utils.manager.IncludeQueryset"""
import pytest
from framework.auth import Auth
from osf.models import Node
from osf_tests.factories import ProjectFactory, NodeFactory, UserFactory
pytestmark = pytest.mark.django_db
@pytest.fixture()
def create_n_nodes():
def _create_n_nodes(n, roots=True):
return [
ProjectFactory() if roots else NodeFactory()
for _ in range(n)
]
return _create_n_nodes
class TestIncludeQuerySet:
@pytest.mark.django_assert_num_queries
def test_include_guids(self, create_n_nodes, django_assert_num_queries):
create_n_nodes(3)
# Confirm guids included automagically
with django_assert_num_queries(1):
for node in Node.objects.all():
assert node._id is not None
with django_assert_num_queries(1):
for node in Node.objects.include('guids').all():
assert node._id is not None
@pytest.mark.django_assert_num_queries
def test_include_guids_filter(self, create_n_nodes, django_assert_num_queries):
nodes = create_n_nodes(3)
nids = [e.id for e in nodes[:-1]]
with django_assert_num_queries(1):
for node in Node.objects.include('guids').filter(id__in=nids):
assert node._id is not None
@pytest.mark.django_assert_num_queries
def test_include_root_guids(self, create_n_nodes, django_assert_num_queries):
nodes = create_n_nodes(3, roots=False)
queryset = Node.objects.filter(id__in=[e.id for e in nodes]).include('root__guids')
with django_assert_num_queries(1):
for node in queryset:
assert node.root._id is not None
@pytest.mark.django_assert_num_queries
def test_include_contributor_user_guids(self, create_n_nodes, django_assert_num_queries):
nodes = create_n_nodes(3)
for node in nodes:
for _ in range(3):
contrib = UserFactory()
node.add_contributor(contrib, auth=Auth(node.creator), save=True)
nodes = Node.objects.include('contributor__user__guids').all()
for node in nodes:
with django_assert_num_queries(0):
for contributor in node.contributor_set.all():
assert contributor.user._id is not None
| apache-2.0 |
amenonsen/ansible-modules-core | windows/win_feature.py | 96 | 2770 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, Paul Durivage <paul.durivage@rackspace.com>, Trond Hindenes <trond@hindenes.com> and others
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# this is a windows documentation stub. actual code lives in the .ps1
# file of the same name
DOCUMENTATION = '''
---
module: win_feature
version_added: "1.7"
short_description: Installs and uninstalls Windows Features
description:
- Installs or uninstalls Windows Roles or Features
options:
name:
description:
- Names of roles or features to install as a single feature or a comma-separated list of features
required: true
default: null
aliases: []
state:
description:
- State of the features or roles on the system
required: false
choices:
- present
- absent
default: present
aliases: []
restart:
description:
- Restarts the computer automatically when installation is complete, if restarting is required by the roles or features installed.
choices:
- yes
- no
default: null
aliases: []
include_sub_features:
description:
- Adds all subfeatures of the specified feature
choices:
- yes
- no
default: null
aliases: []
include_management_tools:
description:
- Adds the corresponding management tools to the specified feature
choices:
- yes
- no
default: null
aliases: []
author:
- "Paul Durivage (@angstwad)"
- "Trond Hindenes (@trondhindenes)"
'''
EXAMPLES = '''
# This installs IIS.
# The names of features available for install can be run by running the following Powershell Command:
# PS C:\Users\Administrator> Import-Module ServerManager; Get-WindowsFeature
$ ansible -i hosts -m win_feature -a "name=Web-Server" all
$ ansible -i hosts -m win_feature -a "name=Web-Server,Web-Common-Http" all
# Playbook example
---
- name: Install IIS
hosts: all
gather_facts: false
tasks:
- name: Install IIS
win_feature:
name: "Web-Server"
state: present
restart: yes
include_sub_features: yes
include_management_tools: yes
'''
| gpl-3.0 |
chenokay/ripozo | ripozo/manager_base.py | 2 | 10635 | """
Contains the BaseManager which must be
implemented fully in regards to the persistence
mechanism you are using.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from abc import ABCMeta, abstractmethod
from ripozo.decorators import classproperty
import logging
import six
_logger = logging.getLogger(__name__)
@six.add_metaclass(ABCMeta)
class BaseManager(object):
"""
The BaseManager implements some common methods that are valuable across all databases
and exposes an interface that must be implemented by any specific implementation.
This needs to be extended in order to implement a new database type. This should handle
all direct interactions with a database or ORM. The extended classes are injected into
viewsets in order to specify how to get the data from the database.
:param unicode pagination_pk_query_arg: The name of the query parameter
that specifies the page or pk
for the next set page to return when paginating over a list
:param unicode pagination_count_query_arg: The name of the
query parameter that specifies the maximum number of results
to return in a list retrieval
:param unicode pagination_next: The meta parameter to return that
specifies the next query parameters
:param int paginate_by: The number of results to return by default.
This gets overridden by pagination_count_query_arg
:param list order_by: A list of the fields to order the results by.
This may be restricted in certain databases
:param list _fields: A list of the fields that are able to be manipulated
or retrieved by the manager. These are the default fields if
_create_fields, _list_fields, or _update_fields are not defined.
:param list _create_fields: The fields to use if a model is being
created. Fields not in this list will not be applied
:param list _list_fields: The fields to use if a list of models
are being retrieved. If not defined, cls.fields will be used instead.
:param list _update_fields: The fields to use if the model is
being updated. Fields not in this list will not be used.
:param type model: The model that is being managed.
This is the individual model that is set by the user.
For any type of base class this should be None.
However, it is required for actual implementations
"""
pagination_pk_query_arg = 'pagination_pk'
pagination_count_query_arg = 'count'
pagination_next = 'next'
pagination_prev = 'previous'
paginate_by = 10000
order_by = None
model = None
arg_parser = None
_field_validators = None
@abstractmethod
def create(self, values, *args, **kwargs):
"""
Create a model with the values according to the values dictionary
:param dict values: A dictionary of values to create the model according to
:return: The dictionary of arguments that should be returned by the serializer
:rtype: dict
"""
pass
@abstractmethod
def retrieve(self, lookup_keys, *args, **kwargs):
"""
Retrieve a single model and nothing more as a python dictionary
:param dict lookup_keys: The lookup keys for the model and the associated values
:return: A tuple with the first object being a dictionary of key value
pairs according to the fields list and
the second object being the meta data (such as the next url for a paginated list)
:rtype: tuple
"""
pass
@abstractmethod
def retrieve_list(self, filters, *args, **kwargs):
"""
Retrieves a list of dictionaries containing the fields for the associated model
:param dict filters: The filters to apply to the query and their associated values
:return: A list of dictionaries of key value pairs according to the fields list
:rtype: list
"""
pass
@abstractmethod
def update(self, lookup_keys, updates, *args, **kwargs):
"""
Updates the model found with the lookup keys according to the updates dictionary where
the keys are the fields to update and the values are the new values for the field
:param dict lookup_keys: The keys to find the object that is to be updated
:param dict updates: The fields to update and their associated new update values
:return: A dictionary of the full updated model according to the fields class attribute
:rtype: dict
"""
pass
@abstractmethod
def delete(self, lookup_keys, *args, **kwargs):
"""
Deletes the model found with the lookup keys
:param dict lookup_keys: The keys with which to find the model to delete
:return: nothing.
:rtype: NoneType
"""
pass
@classmethod
def get_field_type(cls, name):
"""
Returns the BaseField instance (or subclass) instance
that corresponds to the named attribute on the model.
For example, if you the column "name" on the model
for this manager is a String column, then this should
return an instance of StringField with the appropriate
requirements.
:param unicode name: The name of the field on the model
for which you are getting the field name
:return: The BaseField class (or subclass) instance to handle
the field specified by the name
:rtype: ripozo.viewsets.fields.base.BaseField
"""
pass
@classproperty
def fields(cls):
"""
Simply makes sure that the _fields attribute is not
None. Returns [] if cls._fields evaluates to None
or some equivalent.
"""
return cls._fields or []
@classproperty
def create_fields(cls):
"""
These are the fields that are valid when
creating a model. This is necessary when you
want the user to only be able to specify certain
fields on creation. Defaults to ``cls.fields``
if ``cls._create_fields`` is not specified
:return: The list of fields to use when creating
a model using this manager
:rtype: list
"""
return cls._create_fields or cls.fields
@classproperty
def list_fields(cls):
"""
These are the fields that should be used for
retrieving a list of models. This is often necessary
for performance reasons when you only want the ids to
create links to the individual resource and not the full
resource
:return: The list fields, if the ``cls._list_fields`` attribute is
set, otherwise, ``cls.fields``
:rtype: list
"""
return cls._list_fields or cls.fields
@classproperty
def update_fields(cls):
"""
These are the valid fields for updating a model.
If the cls._update_fields is defined then it
returns that list, otherwise it returns cls.fields
:return: The list of fields to use when updating a model.
:rtype: list
"""
return cls._update_fields or cls.fields
@classproperty
def field_validators(cls):
"""
Gets the BaseField instances for all of the
fields on the manager.
:return: The BaseField (or subclass) validator instances for the manager's fields
:rtype: list
"""
cls._field_validators = cls._field_validators or {}
for field_name in cls.fields:
if field_name not in cls._field_validators:
cls._field_validators[field_name] = cls.get_field_type(field_name)
return list(cls._field_validators.values())
def get_pagination_count(self, filters):
"""
Get the pagination count from the args
:param filters: All of the args
:type filters: dict
:return: tuple of (pagination_count, updated_filters
:rtype: tuple
"""
# get the pagination count or else use the default
filters = filters.copy()
pagination_count = filters.pop(self.pagination_count_query_arg, self.paginate_by)
pagination_count = int(pagination_count)
_logger.debug('Paginating list by %s', pagination_count)
return pagination_count, filters
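The copy-then-pop pattern shared by `get_pagination_count` and `get_pagination_pks` above — strip the pagination arguments from the filters so the remaining keys can go straight to the query — can be sketched as one standalone helper. The names and the default below are illustrative assumptions, not the manager's actual API:

```python
def split_pagination(filters, count_arg='count', pk_arg='pagination_pk',
                     default_count=10000):
    """Extract pagination args, returning (count, last_pk, remaining_filters)."""
    filters = filters.copy()  # never mutate the caller's dict
    count = int(filters.pop(count_arg, default_count))
    last_pk = filters.pop(pk_arg, None)
    return count, last_pk, filters

count, last_pk, rest = split_pagination({'count': '25', 'name': 'x'})
```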
def get_pagination_pks(self, filters):
"""
Get the pagination pks from the args
:param dict filters: All of the args
:return: tuple of (pagination_pks, updated_filters
:rtype: tuple
"""
filters = filters.copy()
last_pagination_pk = filters.pop(self.pagination_pk_query_arg, None)
return last_pagination_pk, filters
def dot_field_list_to_dict(self, fields=None):
"""
Converts a list of dot delimited fields (and related fields)
and turns it into a dictionary for example, it would transform
.. code-block:: python
>>> dot_field_list_to_dict(['id', 'value', 'related.id', 'related.related_value'])
{
'id': None,
'value': None,
'related': {
'id': None,
'related_value': None
}
}
:param list fields:
:return: A dictionary of the fields layered as to
indicate relationships.
:rtype: dict
"""
# TODO find a better fucking way
field_dict = {}
fields = fields or self.fields
for field in fields:
field_parts = field.split('.')
current = field_dict
part = field_parts.pop(0)
while len(field_parts) > 0:
current[part] = current.get(part, dict())
current = current[part]
part = field_parts.pop(0)
current[part] = None
return field_dict
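The dot-delimited field transformation documented above can be exercised in isolation. This is a standalone sketch (not the class method itself) that reproduces the docstring's example using `setdefault` instead of the explicit while loop:

```python
def dot_field_list_to_dict(fields):
    """Turn dot-delimited field names into a nested dict of relationships."""
    field_dict = {}
    for field in fields:
        parts = field.split('.')
        current = field_dict
        for part in parts[:-1]:         # walk/create intermediate dicts
            current = current.setdefault(part, {})
        current[parts[-1]] = None       # leaf fields map to None
    return field_dict
```

Running it on the docstring's input yields the nested dictionary shown there.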
@staticmethod
def valid_fields(values, valid_fields):
"""
Returns a dictionary with only the fields
specified from the valid_fields
:param dict values: The original set of fields
:param list|tuple valid_fields: The set of fields that
are valid to use.
:return: The valid values dict
:rtype: dict
"""
new_values = dict()
for key, value in six.iteritems(values):
if key in valid_fields:
new_values[key] = value
return new_values
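The whitelist filter implemented by `valid_fields` above reduces to a single dict comprehension. A minimal sketch with made-up field names:

```python
def valid_fields(values, allowed):
    """Keep only the keys of values that appear in allowed."""
    return {k: v for k, v in values.items() if k in allowed}

cleaned = valid_fields({'name': 'x', 'admin': True}, ['name'])
```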
| gpl-2.0 |
salamer/django | tests/introspection/models.py | 216 | 1112 | from __future__ import unicode_literals
from django.db import models
from django.utils.encoding import python_2_unicode_compatible
@python_2_unicode_compatible
class Reporter(models.Model):
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=30)
email = models.EmailField()
facebook_user_id = models.BigIntegerField(null=True)
raw_data = models.BinaryField(null=True)
small_int = models.SmallIntegerField()
class Meta:
unique_together = ('first_name', 'last_name')
def __str__(self):
return "%s %s" % (self.first_name, self.last_name)
@python_2_unicode_compatible
class Article(models.Model):
headline = models.CharField(max_length=100)
pub_date = models.DateField()
body = models.TextField(default='')
reporter = models.ForeignKey(Reporter, models.CASCADE)
response_to = models.ForeignKey('self', models.SET_NULL, null=True)
def __str__(self):
return self.headline
class Meta:
ordering = ('headline',)
index_together = [
["headline", "pub_date"],
]
| bsd-3-clause |
crosswalk-project/web-testing-service | tools/pywebsocket/src/mod_pywebsocket/common.py | 489 | 9947 | # Copyright 2012, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""This file must not depend on any module specific to the WebSocket protocol.
"""
from mod_pywebsocket import http_header_util
# Additional log level definitions.
LOGLEVEL_FINE = 9
# Constants indicating WebSocket protocol version.
VERSION_HIXIE75 = -1
VERSION_HYBI00 = 0
VERSION_HYBI01 = 1
VERSION_HYBI02 = 2
VERSION_HYBI03 = 2
VERSION_HYBI04 = 4
VERSION_HYBI05 = 5
VERSION_HYBI06 = 6
VERSION_HYBI07 = 7
VERSION_HYBI08 = 8
VERSION_HYBI09 = 8
VERSION_HYBI10 = 8
VERSION_HYBI11 = 8
VERSION_HYBI12 = 8
VERSION_HYBI13 = 13
VERSION_HYBI14 = 13
VERSION_HYBI15 = 13
VERSION_HYBI16 = 13
VERSION_HYBI17 = 13
# Constants indicating WebSocket protocol latest version.
VERSION_HYBI_LATEST = VERSION_HYBI13
# Port numbers
DEFAULT_WEB_SOCKET_PORT = 80
DEFAULT_WEB_SOCKET_SECURE_PORT = 443
# Schemes
WEB_SOCKET_SCHEME = 'ws'
WEB_SOCKET_SECURE_SCHEME = 'wss'
# Frame opcodes defined in the spec.
OPCODE_CONTINUATION = 0x0
OPCODE_TEXT = 0x1
OPCODE_BINARY = 0x2
OPCODE_CLOSE = 0x8
OPCODE_PING = 0x9
OPCODE_PONG = 0xa
# UUIDs used by HyBi 04 and later opening handshake and frame masking.
WEBSOCKET_ACCEPT_UUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11'
# Opening handshake header names and expected values.
UPGRADE_HEADER = 'Upgrade'
WEBSOCKET_UPGRADE_TYPE = 'websocket'
WEBSOCKET_UPGRADE_TYPE_HIXIE75 = 'WebSocket'
CONNECTION_HEADER = 'Connection'
UPGRADE_CONNECTION_TYPE = 'Upgrade'
HOST_HEADER = 'Host'
ORIGIN_HEADER = 'Origin'
SEC_WEBSOCKET_ORIGIN_HEADER = 'Sec-WebSocket-Origin'
SEC_WEBSOCKET_KEY_HEADER = 'Sec-WebSocket-Key'
SEC_WEBSOCKET_ACCEPT_HEADER = 'Sec-WebSocket-Accept'
SEC_WEBSOCKET_VERSION_HEADER = 'Sec-WebSocket-Version'
SEC_WEBSOCKET_PROTOCOL_HEADER = 'Sec-WebSocket-Protocol'
SEC_WEBSOCKET_EXTENSIONS_HEADER = 'Sec-WebSocket-Extensions'
SEC_WEBSOCKET_DRAFT_HEADER = 'Sec-WebSocket-Draft'
SEC_WEBSOCKET_KEY1_HEADER = 'Sec-WebSocket-Key1'
SEC_WEBSOCKET_KEY2_HEADER = 'Sec-WebSocket-Key2'
SEC_WEBSOCKET_LOCATION_HEADER = 'Sec-WebSocket-Location'
# Extensions
DEFLATE_FRAME_EXTENSION = 'deflate-frame'
PERMESSAGE_COMPRESSION_EXTENSION = 'permessage-compress'
PERMESSAGE_DEFLATE_EXTENSION = 'permessage-deflate'
X_WEBKIT_DEFLATE_FRAME_EXTENSION = 'x-webkit-deflate-frame'
X_WEBKIT_PERMESSAGE_COMPRESSION_EXTENSION = 'x-webkit-permessage-compress'
MUX_EXTENSION = 'mux_DO_NOT_USE'
# Status codes
# Code STATUS_NO_STATUS_RECEIVED, STATUS_ABNORMAL_CLOSURE, and
# STATUS_TLS_HANDSHAKE are pseudo codes to indicate specific error cases.
# Could not be used for codes in actual closing frames.
# Application level errors must use codes in the range
# STATUS_USER_REGISTERED_BASE to STATUS_USER_PRIVATE_MAX. The codes in the
# range STATUS_USER_REGISTERED_BASE to STATUS_USER_REGISTERED_MAX are managed
# by IANA. Usually application must define user protocol level errors in the
# range STATUS_USER_PRIVATE_BASE to STATUS_USER_PRIVATE_MAX.
STATUS_NORMAL_CLOSURE = 1000
STATUS_GOING_AWAY = 1001
STATUS_PROTOCOL_ERROR = 1002
STATUS_UNSUPPORTED_DATA = 1003
STATUS_NO_STATUS_RECEIVED = 1005
STATUS_ABNORMAL_CLOSURE = 1006
STATUS_INVALID_FRAME_PAYLOAD_DATA = 1007
STATUS_POLICY_VIOLATION = 1008
STATUS_MESSAGE_TOO_BIG = 1009
STATUS_MANDATORY_EXTENSION = 1010
STATUS_INTERNAL_ENDPOINT_ERROR = 1011
STATUS_TLS_HANDSHAKE = 1015
STATUS_USER_REGISTERED_BASE = 3000
STATUS_USER_REGISTERED_MAX = 3999
STATUS_USER_PRIVATE_BASE = 4000
STATUS_USER_PRIVATE_MAX = 4999
# Following definitions are aliases to keep compatibility. Applications must
# not use these obsoleted definitions anymore.
STATUS_NORMAL = STATUS_NORMAL_CLOSURE
STATUS_UNSUPPORTED = STATUS_UNSUPPORTED_DATA
STATUS_CODE_NOT_AVAILABLE = STATUS_NO_STATUS_RECEIVED
STATUS_ABNORMAL_CLOSE = STATUS_ABNORMAL_CLOSURE
STATUS_INVALID_FRAME_PAYLOAD = STATUS_INVALID_FRAME_PAYLOAD_DATA
STATUS_MANDATORY_EXT = STATUS_MANDATORY_EXTENSION
# HTTP status codes
HTTP_STATUS_BAD_REQUEST = 400
HTTP_STATUS_FORBIDDEN = 403
HTTP_STATUS_NOT_FOUND = 404
def is_control_opcode(opcode):
return (opcode >> 3) == 1
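The bit test above works because RFC 6455 control frames occupy opcodes 0x8 through 0xF — exactly the opcodes with bit 3 set. A quick self-contained check (not in the original):

```python
def is_control_opcode(opcode):
    # Control opcodes are 0x8-0xF, i.e. those with bit 3 set.
    return (opcode >> 3) == 1

control = [op for op in range(0x10) if is_control_opcode(op)]
```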
class ExtensionParameter(object):
"""Holds information about an extension which is exchanged on extension
negotiation in opening handshake.
"""
def __init__(self, name):
self._name = name
# TODO(tyoshino): Change the data structure to more efficient one such
# as dict when the spec changes to say like
# - Parameter names must be unique
# - The order of parameters is not significant
self._parameters = []
def name(self):
return self._name
def add_parameter(self, name, value):
self._parameters.append((name, value))
def get_parameters(self):
return self._parameters
def get_parameter_names(self):
return [name for name, unused_value in self._parameters]
def has_parameter(self, name):
for param_name, param_value in self._parameters:
if param_name == name:
return True
return False
def get_parameter_value(self, name):
for param_name, param_value in self._parameters:
if param_name == name:
return param_value
class ExtensionParsingException(Exception):
def __init__(self, name):
super(ExtensionParsingException, self).__init__(name)
def _parse_extension_param(state, definition):
param_name = http_header_util.consume_token(state)
if param_name is None:
raise ExtensionParsingException('No valid parameter name found')
http_header_util.consume_lwses(state)
if not http_header_util.consume_string(state, '='):
definition.add_parameter(param_name, None)
return
http_header_util.consume_lwses(state)
# TODO(tyoshino): Add code to validate that parsed param_value is token
param_value = http_header_util.consume_token_or_quoted_string(state)
if param_value is None:
raise ExtensionParsingException(
'No valid parameter value found on the right-hand side of '
'parameter %r' % param_name)
definition.add_parameter(param_name, param_value)
def _parse_extension(state):
extension_token = http_header_util.consume_token(state)
if extension_token is None:
return None
extension = ExtensionParameter(extension_token)
while True:
http_header_util.consume_lwses(state)
if not http_header_util.consume_string(state, ';'):
break
http_header_util.consume_lwses(state)
try:
_parse_extension_param(state, extension)
except ExtensionParsingException, e:
raise ExtensionParsingException(
'Failed to parse parameter for %r (%r)' %
(extension_token, e))
return extension
def parse_extensions(data):
"""Parses Sec-WebSocket-Extensions header value returns a list of
ExtensionParameter objects.
Leading LWSes must be trimmed.
"""
state = http_header_util.ParsingState(data)
extension_list = []
while True:
extension = _parse_extension(state)
if extension is not None:
extension_list.append(extension)
http_header_util.consume_lwses(state)
if http_header_util.peek(state) is None:
break
if not http_header_util.consume_string(state, ','):
raise ExtensionParsingException(
'Failed to parse Sec-WebSocket-Extensions header: '
'Expected a comma but found %r' %
http_header_util.peek(state))
http_header_util.consume_lwses(state)
if len(extension_list) == 0:
raise ExtensionParsingException(
'No valid extension entry found')
return extension_list
def format_extension(extension):
"""Formats an ExtensionParameter object."""
formatted_params = [extension.name()]
for param_name, param_value in extension.get_parameters():
if param_value is None:
formatted_params.append(param_name)
else:
quoted_value = http_header_util.quote_if_necessary(param_value)
formatted_params.append('%s=%s' % (param_name, quoted_value))
return '; '.join(formatted_params)
def format_extensions(extension_list):
"""Formats a list of ExtensionParameter objects."""
formatted_extension_list = []
for extension in extension_list:
formatted_extension_list.append(format_extension(extension))
return ', '.join(formatted_extension_list)
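The header formatting above can be sketched without the surrounding classes. This simplified, self-contained version uses plain `(name, params)` tuples instead of `ExtensionParameter` objects and omits the quoting that the real code delegates to `http_header_util.quote_if_necessary`:

```python
def format_extension(name, params):
    """Render one extension entry: name plus ';'-joined parameters."""
    parts = [name]
    for pname, pvalue in params:
        parts.append(pname if pvalue is None else '%s=%s' % (pname, pvalue))
    return '; '.join(parts)

def format_extensions(extensions):
    """Render a Sec-WebSocket-Extensions header value from several entries."""
    return ', '.join(format_extension(n, p) for n, p in extensions)

header = format_extensions([
    ('permessage-deflate', [('client_max_window_bits', None),
                            ('server_max_window_bits', '10')]),
])
```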
# vi:sts=4 sw=4 et
| bsd-3-clause |
openaid-IATI/OIPA | OIPA/solr/dataset/indexing.py | 1 | 1948 |
from rest_framework.renderers import JSONRenderer
from api.publisher.serializers import PublisherSerializer
from solr.indexing import BaseIndexing
from solr.utils import get_child_attr, value_string
class DatasetIndexing(BaseIndexing):
def dataset_publisher(self):
publisher = get_child_attr(self.record, 'publisher')
if publisher:
self.add_field(
'publisher',
JSONRenderer().render(
PublisherSerializer(
fields=[
'iati_id',
'publisher_iati_id',
'display_name',
'name',
'activity_count'
],
instance=publisher
).data
).decode()
)
self.add_field('publisher_iati_id', publisher.publisher_iati_id)
self.add_field('publisher_name', publisher.name)
self.add_field('publisher_display_name', publisher.display_name)
def dataset(self):
dataset = self.record
self.add_field('id', dataset.id)
self.add_field('name', dataset.name)
self.add_field('title', dataset.title)
self.add_field('filetype', dataset.filetype)
self.add_field(
'date_created',
value_string(dataset.date_created).split(' ')[0]
)
self.add_field(
'date_updated',
value_string(dataset.date_updated).split(' ')[0]
)
self.add_field('iati_version', dataset.iati_version)
self.add_field('source_url', dataset.source_url)
self.dataset_publisher()
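The `split(' ')[0]` calls above trim a rendered datetime down to its date part. A hedged sketch of that idiom, assuming `value_string` renders datetimes in the default `str()` form `'YYYY-MM-DD HH:MM:SS'`:

```python
import datetime

def date_only(dt):
    """Keep only the date portion of a 'YYYY-MM-DD HH:MM:SS' rendering."""
    return str(dt).split(' ')[0]

d = date_only(datetime.datetime(2020, 5, 17, 12, 30, 0))
```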
def to_representation(self, dataset):
self.record = dataset
self.indexing = {}
self.representation = {}
self.dataset()
self.build()
return self.representation
| agpl-3.0 |
kamarush/caf_kernel_mm | scripts/tracing/draw_functrace.py | 14676 | 3560 | #!/usr/bin/python
"""
Copyright 2008 (c) Frederic Weisbecker <fweisbec@gmail.com>
Licensed under the terms of the GNU GPL License version 2
This script parses a trace provided by the function tracer in
kernel/trace/trace_functions.c
The resulting trace is processed into a tree to produce a more human-readable
view of the call stack by drawing a textual but hierarchical tree of
calls. Only the functions' names and the call times are provided.
Usage:
Be sure that you have CONFIG_FUNCTION_TRACER
# mount -t debugfs nodev /sys/kernel/debug
# echo function > /sys/kernel/debug/tracing/current_tracer
$ cat /sys/kernel/debug/tracing/trace_pipe > ~/raw_trace_func
Wait some time, but not too much: the script is a bit slow.
Break the pipe (Ctrl + Z)
$ scripts/draw_functrace.py < raw_trace_func > draw_functrace
Then you have your drawn trace in draw_functrace
"""
import sys, re
class CallTree:
""" This class provides a tree representation of the functions
call stack. If a function has no parent in the kernel (interrupt,
syscall, kernel thread...) then it is attached to a virtual parent
called ROOT.
"""
ROOT = None
def __init__(self, func, time = None, parent = None):
self._func = func
self._time = time
if parent is None:
self._parent = CallTree.ROOT
else:
self._parent = parent
self._children = []
def calls(self, func, calltime):
""" If a function calls another one, call this method to insert it
into the tree at the appropriate place.
@return: A reference to the newly created child node.
"""
child = CallTree(func, calltime, self)
self._children.append(child)
return child
def getParent(self, func):
""" Retrieve the last parent of the current node that
has the name given by func. If this function is not
on a parent, then create it as new child of root
@return: A reference to the parent.
"""
tree = self
while tree != CallTree.ROOT and tree._func != func:
tree = tree._parent
if tree == CallTree.ROOT:
child = CallTree.ROOT.calls(func, None)
return child
return tree
def __repr__(self):
return self.__toString("", True)
def __toString(self, branch, lastChild):
if self._time is not None:
s = "%s----%s (%s)\n" % (branch, self._func, self._time)
else:
s = "%s----%s\n" % (branch, self._func)
i = 0
if lastChild:
branch = branch[:-1] + " "
while i < len(self._children):
if i != len(self._children) - 1:
s += "%s" % self._children[i].__toString(branch +\
" |", False)
else:
s += "%s" % self._children[i].__toString(branch +\
" |", True)
i += 1
return s
class BrokenLineException(Exception):
"""If the last line is not complete because of the pipe breakage,
we want to stop the processing and ignore this line.
"""
pass
class CommentLineException(Exception):
""" If the line is a comment (as in the beginning of the trace file),
just ignore it.
"""
pass
def parseLine(line):
line = line.strip()
if line.startswith("#"):
raise CommentLineException
	m = re.match(r"[^]]+?\] +([0-9.]+): (\w+) <-(\w+)", line)
if m is None:
raise BrokenLineException
return (m.group(1), m.group(2), m.group(3))
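The regular expression above expects the format emitted by the function tracer: a task/CPU prefix ending in `]`, a timestamp, then `callee <-caller`. A minimal sketch of what `parseLine` extracts from one such line (the sample values are made up):

```python
import re

# Hypothetical trace_pipe line: "task-pid [cpu] timestamp: callee <-caller"
line = " <idle>-0     [001]  1234.567890: mutex_unlock <-rb_simple_write"
m = re.match(r"[^]]+?\] +([0-9.]+): (\w+) <-(\w+)", line)
calltime, callee, caller = m.groups()
```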
def main():
CallTree.ROOT = CallTree("Root (Nowhere)", None, None)
tree = CallTree.ROOT
for line in sys.stdin:
try:
calltime, callee, caller = parseLine(line)
except BrokenLineException:
break
except CommentLineException:
continue
tree = tree.getParent(caller)
tree = tree.calls(callee, calltime)
print CallTree.ROOT
if __name__ == "__main__":
main()
| gpl-2.0 |
hoangminhitvn/flask | flask/lib/python2.7/site-packages/flask/testsuite/helpers.py | 405 | 21973 | # -*- coding: utf-8 -*-
"""
flask.testsuite.helpers
~~~~~~~~~~~~~~~~~~~~~~~
Various helpers.
:copyright: (c) 2011 by Armin Ronacher.
:license: BSD, see LICENSE for more details.
"""
import os
import flask
import unittest
from logging import StreamHandler
from flask.testsuite import FlaskTestCase, catch_warnings, catch_stderr
from werkzeug.http import parse_cache_control_header, parse_options_header
from flask._compat import StringIO, text_type
def has_encoding(name):
try:
import codecs
codecs.lookup(name)
return True
except LookupError:
return False
class JSONTestCase(FlaskTestCase):
def test_json_bad_requests(self):
app = flask.Flask(__name__)
@app.route('/json', methods=['POST'])
def return_json():
return flask.jsonify(foo=text_type(flask.request.get_json()))
c = app.test_client()
rv = c.post('/json', data='malformed', content_type='application/json')
self.assert_equal(rv.status_code, 400)
def test_json_body_encoding(self):
app = flask.Flask(__name__)
app.testing = True
@app.route('/')
def index():
return flask.request.get_json()
c = app.test_client()
resp = c.get('/', data=u'"Hällo Wörld"'.encode('iso-8859-15'),
content_type='application/json; charset=iso-8859-15')
self.assert_equal(resp.data, u'Hällo Wörld'.encode('utf-8'))
def test_jsonify(self):
d = dict(a=23, b=42, c=[1, 2, 3])
app = flask.Flask(__name__)
@app.route('/kw')
def return_kwargs():
return flask.jsonify(**d)
@app.route('/dict')
def return_dict():
return flask.jsonify(d)
c = app.test_client()
for url in '/kw', '/dict':
rv = c.get(url)
self.assert_equal(rv.mimetype, 'application/json')
self.assert_equal(flask.json.loads(rv.data), d)
def test_json_as_unicode(self):
app = flask.Flask(__name__)
app.config['JSON_AS_ASCII'] = True
with app.app_context():
rv = flask.json.dumps(u'\N{SNOWMAN}')
self.assert_equal(rv, '"\\u2603"')
app.config['JSON_AS_ASCII'] = False
with app.app_context():
rv = flask.json.dumps(u'\N{SNOWMAN}')
self.assert_equal(rv, u'"\u2603"')
def test_json_attr(self):
app = flask.Flask(__name__)
@app.route('/add', methods=['POST'])
def add():
json = flask.request.get_json()
return text_type(json['a'] + json['b'])
c = app.test_client()
rv = c.post('/add', data=flask.json.dumps({'a': 1, 'b': 2}),
content_type='application/json')
self.assert_equal(rv.data, b'3')
def test_template_escaping(self):
app = flask.Flask(__name__)
render = flask.render_template_string
with app.test_request_context():
rv = flask.json.htmlsafe_dumps('</script>')
self.assert_equal(rv, u'"\\u003c/script\\u003e"')
self.assert_equal(type(rv), text_type)
rv = render('{{ "</script>"|tojson }}')
self.assert_equal(rv, '"\\u003c/script\\u003e"')
rv = render('{{ "<\0/script>"|tojson }}')
self.assert_equal(rv, '"\\u003c\\u0000/script\\u003e"')
rv = render('{{ "<!--<script>"|tojson }}')
self.assert_equal(rv, '"\\u003c!--\\u003cscript\\u003e"')
rv = render('{{ "&"|tojson }}')
self.assert_equal(rv, '"\\u0026"')
rv = render('{{ "\'"|tojson }}')
self.assert_equal(rv, '"\\u0027"')
rv = render("<a ng-data='{{ data|tojson }}'></a>",
data={'x': ["foo", "bar", "baz'"]})
self.assert_equal(rv,
'<a ng-data=\'{"x": ["foo", "bar", "baz\\u0027"]}\'></a>')
def test_json_customization(self):
class X(object):
def __init__(self, val):
self.val = val
class MyEncoder(flask.json.JSONEncoder):
def default(self, o):
if isinstance(o, X):
return '<%d>' % o.val
return flask.json.JSONEncoder.default(self, o)
class MyDecoder(flask.json.JSONDecoder):
def __init__(self, *args, **kwargs):
kwargs.setdefault('object_hook', self.object_hook)
flask.json.JSONDecoder.__init__(self, *args, **kwargs)
def object_hook(self, obj):
if len(obj) == 1 and '_foo' in obj:
return X(obj['_foo'])
return obj
app = flask.Flask(__name__)
app.testing = True
app.json_encoder = MyEncoder
app.json_decoder = MyDecoder
@app.route('/', methods=['POST'])
def index():
return flask.json.dumps(flask.request.get_json()['x'])
c = app.test_client()
rv = c.post('/', data=flask.json.dumps({
'x': {'_foo': 42}
}), content_type='application/json')
self.assertEqual(rv.data, b'"<42>"')
def test_modified_url_encoding(self):
class ModifiedRequest(flask.Request):
url_charset = 'euc-kr'
app = flask.Flask(__name__)
app.testing = True
app.request_class = ModifiedRequest
app.url_map.charset = 'euc-kr'
@app.route('/')
def index():
return flask.request.args['foo']
rv = app.test_client().get(u'/?foo=정상처리'.encode('euc-kr'))
self.assert_equal(rv.status_code, 200)
self.assert_equal(rv.data, u'정상처리'.encode('utf-8'))
if not has_encoding('euc-kr'):
test_modified_url_encoding = None
def test_json_key_sorting(self):
app = flask.Flask(__name__)
app.testing = True
self.assert_equal(app.config['JSON_SORT_KEYS'], True)
d = dict.fromkeys(range(20), 'foo')
@app.route('/')
def index():
return flask.jsonify(values=d)
c = app.test_client()
rv = c.get('/')
lines = [x.strip() for x in rv.data.strip().decode('utf-8').splitlines()]
self.assert_equal(lines, [
'{',
'"values": {',
'"0": "foo",',
'"1": "foo",',
'"2": "foo",',
'"3": "foo",',
'"4": "foo",',
'"5": "foo",',
'"6": "foo",',
'"7": "foo",',
'"8": "foo",',
'"9": "foo",',
'"10": "foo",',
'"11": "foo",',
'"12": "foo",',
'"13": "foo",',
'"14": "foo",',
'"15": "foo",',
'"16": "foo",',
'"17": "foo",',
'"18": "foo",',
'"19": "foo"',
'}',
'}'
])
class SendfileTestCase(FlaskTestCase):
def test_send_file_regular(self):
app = flask.Flask(__name__)
with app.test_request_context():
rv = flask.send_file('static/index.html')
self.assert_true(rv.direct_passthrough)
self.assert_equal(rv.mimetype, 'text/html')
with app.open_resource('static/index.html') as f:
rv.direct_passthrough = False
self.assert_equal(rv.data, f.read())
rv.close()
def test_send_file_xsendfile(self):
app = flask.Flask(__name__)
app.use_x_sendfile = True
with app.test_request_context():
rv = flask.send_file('static/index.html')
self.assert_true(rv.direct_passthrough)
self.assert_in('x-sendfile', rv.headers)
self.assert_equal(rv.headers['x-sendfile'],
os.path.join(app.root_path, 'static/index.html'))
self.assert_equal(rv.mimetype, 'text/html')
rv.close()
def test_send_file_object(self):
app = flask.Flask(__name__)
with catch_warnings() as captured:
with app.test_request_context():
f = open(os.path.join(app.root_path, 'static/index.html'))
rv = flask.send_file(f)
rv.direct_passthrough = False
with app.open_resource('static/index.html') as f:
self.assert_equal(rv.data, f.read())
self.assert_equal(rv.mimetype, 'text/html')
rv.close()
# mimetypes + etag
self.assert_equal(len(captured), 2)
app.use_x_sendfile = True
with catch_warnings() as captured:
with app.test_request_context():
f = open(os.path.join(app.root_path, 'static/index.html'))
rv = flask.send_file(f)
self.assert_equal(rv.mimetype, 'text/html')
self.assert_in('x-sendfile', rv.headers)
self.assert_equal(rv.headers['x-sendfile'],
os.path.join(app.root_path, 'static/index.html'))
rv.close()
# mimetypes + etag
self.assert_equal(len(captured), 2)
app.use_x_sendfile = False
with app.test_request_context():
with catch_warnings() as captured:
f = StringIO('Test')
rv = flask.send_file(f)
rv.direct_passthrough = False
self.assert_equal(rv.data, b'Test')
self.assert_equal(rv.mimetype, 'application/octet-stream')
rv.close()
# etags
self.assert_equal(len(captured), 1)
with catch_warnings() as captured:
f = StringIO('Test')
rv = flask.send_file(f, mimetype='text/plain')
rv.direct_passthrough = False
self.assert_equal(rv.data, b'Test')
self.assert_equal(rv.mimetype, 'text/plain')
rv.close()
# etags
self.assert_equal(len(captured), 1)
app.use_x_sendfile = True
with catch_warnings() as captured:
with app.test_request_context():
f = StringIO('Test')
rv = flask.send_file(f)
self.assert_not_in('x-sendfile', rv.headers)
rv.close()
# etags
self.assert_equal(len(captured), 1)
def test_attachment(self):
app = flask.Flask(__name__)
with catch_warnings() as captured:
with app.test_request_context():
f = open(os.path.join(app.root_path, 'static/index.html'))
rv = flask.send_file(f, as_attachment=True)
value, options = parse_options_header(rv.headers['Content-Disposition'])
self.assert_equal(value, 'attachment')
rv.close()
# mimetypes + etag
self.assert_equal(len(captured), 2)
with app.test_request_context():
self.assert_equal(options['filename'], 'index.html')
rv = flask.send_file('static/index.html', as_attachment=True)
value, options = parse_options_header(rv.headers['Content-Disposition'])
self.assert_equal(value, 'attachment')
self.assert_equal(options['filename'], 'index.html')
rv.close()
with app.test_request_context():
rv = flask.send_file(StringIO('Test'), as_attachment=True,
attachment_filename='index.txt',
add_etags=False)
self.assert_equal(rv.mimetype, 'text/plain')
value, options = parse_options_header(rv.headers['Content-Disposition'])
self.assert_equal(value, 'attachment')
self.assert_equal(options['filename'], 'index.txt')
rv.close()
def test_static_file(self):
app = flask.Flask(__name__)
# default cache timeout is 12 hours
with app.test_request_context():
# Test with static file handler.
rv = app.send_static_file('index.html')
cc = parse_cache_control_header(rv.headers['Cache-Control'])
self.assert_equal(cc.max_age, 12 * 60 * 60)
rv.close()
# Test again with direct use of send_file utility.
rv = flask.send_file('static/index.html')
cc = parse_cache_control_header(rv.headers['Cache-Control'])
self.assert_equal(cc.max_age, 12 * 60 * 60)
rv.close()
app.config['SEND_FILE_MAX_AGE_DEFAULT'] = 3600
with app.test_request_context():
# Test with static file handler.
rv = app.send_static_file('index.html')
cc = parse_cache_control_header(rv.headers['Cache-Control'])
self.assert_equal(cc.max_age, 3600)
rv.close()
# Test again with direct use of send_file utility.
rv = flask.send_file('static/index.html')
cc = parse_cache_control_header(rv.headers['Cache-Control'])
self.assert_equal(cc.max_age, 3600)
rv.close()
class StaticFileApp(flask.Flask):
def get_send_file_max_age(self, filename):
return 10
app = StaticFileApp(__name__)
with app.test_request_context():
# Test with static file handler.
rv = app.send_static_file('index.html')
cc = parse_cache_control_header(rv.headers['Cache-Control'])
self.assert_equal(cc.max_age, 10)
rv.close()
# Test again with direct use of send_file utility.
rv = flask.send_file('static/index.html')
cc = parse_cache_control_header(rv.headers['Cache-Control'])
self.assert_equal(cc.max_age, 10)
rv.close()
class LoggingTestCase(FlaskTestCase):
def test_logger_cache(self):
app = flask.Flask(__name__)
logger1 = app.logger
self.assert_true(app.logger is logger1)
self.assert_equal(logger1.name, __name__)
app.logger_name = __name__ + '/test_logger_cache'
self.assert_true(app.logger is not logger1)
def test_debug_log(self):
app = flask.Flask(__name__)
app.debug = True
@app.route('/')
def index():
app.logger.warning('the standard library is dead')
app.logger.debug('this is a debug statement')
return ''
@app.route('/exc')
def exc():
1 // 0
with app.test_client() as c:
with catch_stderr() as err:
c.get('/')
out = err.getvalue()
self.assert_in('WARNING in helpers [', out)
self.assert_in(os.path.basename(__file__.rsplit('.', 1)[0] + '.py'), out)
self.assert_in('the standard library is dead', out)
self.assert_in('this is a debug statement', out)
with catch_stderr() as err:
try:
c.get('/exc')
except ZeroDivisionError:
pass
else:
self.assert_true(False, 'debug log ate the exception')
def test_debug_log_override(self):
app = flask.Flask(__name__)
app.debug = True
app.logger_name = 'flask_tests/test_debug_log_override'
app.logger.level = 10
self.assert_equal(app.logger.level, 10)
def test_exception_logging(self):
out = StringIO()
app = flask.Flask(__name__)
app.logger_name = 'flask_tests/test_exception_logging'
app.logger.addHandler(StreamHandler(out))
@app.route('/')
def index():
1 // 0
rv = app.test_client().get('/')
self.assert_equal(rv.status_code, 500)
self.assert_in(b'Internal Server Error', rv.data)
err = out.getvalue()
self.assert_in('Exception on / [GET]', err)
self.assert_in('Traceback (most recent call last):', err)
self.assert_in('1 // 0', err)
self.assert_in('ZeroDivisionError:', err)
def test_processor_exceptions(self):
app = flask.Flask(__name__)
@app.before_request
def before_request():
if trigger == 'before':
1 // 0
@app.after_request
def after_request(response):
if trigger == 'after':
1 // 0
return response
@app.route('/')
def index():
return 'Foo'
@app.errorhandler(500)
def internal_server_error(e):
return 'Hello Server Error', 500
for trigger in 'before', 'after':
rv = app.test_client().get('/')
self.assert_equal(rv.status_code, 500)
self.assert_equal(rv.data, b'Hello Server Error')
def test_url_for_with_anchor(self):
app = flask.Flask(__name__)
@app.route('/')
def index():
return '42'
with app.test_request_context():
self.assert_equal(flask.url_for('index', _anchor='x y'),
'/#x%20y')
def test_url_for_with_scheme(self):
app = flask.Flask(__name__)
@app.route('/')
def index():
return '42'
with app.test_request_context():
self.assert_equal(flask.url_for('index',
_external=True,
_scheme='https'),
'https://localhost/')
def test_url_for_with_scheme_not_external(self):
app = flask.Flask(__name__)
@app.route('/')
def index():
return '42'
with app.test_request_context():
self.assert_raises(ValueError,
flask.url_for,
'index',
_scheme='https')
def test_url_with_method(self):
from flask.views import MethodView
app = flask.Flask(__name__)
class MyView(MethodView):
def get(self, id=None):
if id is None:
return 'List'
return 'Get %d' % id
def post(self):
return 'Create'
myview = MyView.as_view('myview')
app.add_url_rule('/myview/', methods=['GET'],
view_func=myview)
app.add_url_rule('/myview/<int:id>', methods=['GET'],
view_func=myview)
app.add_url_rule('/myview/create', methods=['POST'],
view_func=myview)
with app.test_request_context():
self.assert_equal(flask.url_for('myview', _method='GET'),
'/myview/')
self.assert_equal(flask.url_for('myview', id=42, _method='GET'),
'/myview/42')
self.assert_equal(flask.url_for('myview', _method='POST'),
'/myview/create')
class NoImportsTestCase(FlaskTestCase):
"""Test Flasks are created without import.
Avoiding ``__import__`` helps create Flask instances where there are errors
at import time. Those runtime errors will be apparent to the user soon
enough, but tools which build Flask instances meta-programmatically benefit
from a Flask which does not ``__import__``. Instead of importing to
retrieve file paths or metadata on a module or package, use the pkgutil and
imp modules in the Python standard library.
"""
def test_name_with_import_error(self):
try:
flask.Flask('importerror')
except NotImplementedError:
self.fail('Flask(import_name) is importing import_name.')
class StreamingTestCase(FlaskTestCase):
def test_streaming_with_context(self):
app = flask.Flask(__name__)
app.testing = True
@app.route('/')
def index():
def generate():
yield 'Hello '
yield flask.request.args['name']
yield '!'
return flask.Response(flask.stream_with_context(generate()))
c = app.test_client()
rv = c.get('/?name=World')
self.assertEqual(rv.data, b'Hello World!')
def test_streaming_with_context_as_decorator(self):
app = flask.Flask(__name__)
app.testing = True
@app.route('/')
def index():
@flask.stream_with_context
def generate():
yield 'Hello '
yield flask.request.args['name']
yield '!'
return flask.Response(generate())
c = app.test_client()
rv = c.get('/?name=World')
self.assertEqual(rv.data, b'Hello World!')
def test_streaming_with_context_and_custom_close(self):
app = flask.Flask(__name__)
app.testing = True
called = []
class Wrapper(object):
def __init__(self, gen):
self._gen = gen
def __iter__(self):
return self
def close(self):
called.append(42)
def __next__(self):
return next(self._gen)
next = __next__
@app.route('/')
def index():
def generate():
yield 'Hello '
yield flask.request.args['name']
yield '!'
return flask.Response(flask.stream_with_context(
Wrapper(generate())))
c = app.test_client()
rv = c.get('/?name=World')
self.assertEqual(rv.data, b'Hello World!')
self.assertEqual(called, [42])
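The `Wrapper` used in the last test shows the iterator protocol that `stream_with_context` must preserve: delegating `__next__` to the inner generator while keeping a custom `close()` hook. A standalone sketch of the same pattern (class and variable names are illustrative):

```python
closed = []

class ClosingIterator(object):
    """Delegates iteration to an inner generator and records close() calls."""
    def __init__(self, gen):
        self._gen = gen
    def __iter__(self):
        return self
    def __next__(self):
        return next(self._gen)
    next = __next__  # Python 2 spelling, as in the test above
    def close(self):
        closed.append(42)

it = ClosingIterator(iter(['Hello ', 'World', '!']))
result = ''.join(it)   # exhausts the inner generator via __next__
it.close()             # the custom hook still fires
```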
def suite():
suite = unittest.TestSuite()
if flask.json_available:
suite.addTest(unittest.makeSuite(JSONTestCase))
suite.addTest(unittest.makeSuite(SendfileTestCase))
suite.addTest(unittest.makeSuite(LoggingTestCase))
suite.addTest(unittest.makeSuite(NoImportsTestCase))
suite.addTest(unittest.makeSuite(StreamingTestCase))
return suite
| bsd-3-clause |
Zhongqilong/kbengine | kbe/src/lib/python/Lib/idlelib/idle_test/test_text.py | 75 | 6760 | # Test mock_tk.Text class against tkinter.Text class by running same tests with both.
import unittest
from test.support import requires
from _tkinter import TclError
import tkinter as tk
class TextTest(object):
hw = 'hello\nworld' # usual initial insert after initialization
hwn = hw+'\n' # \n present at initialization, before insert
Text = None
def setUp(self):
self.text = self.Text()
def test_init(self):
self.assertEqual(self.text.get('1.0'), '\n')
self.assertEqual(self.text.get('end'), '')
def test_index_empty(self):
index = self.text.index
for dex in (-1.0, 0.3, '1.-1', '1.0', '1.0 lineend', '1.end', '1.33',
'insert'):
self.assertEqual(index(dex), '1.0')
for dex in 'end', 2.0, '2.1', '33.44':
self.assertEqual(index(dex), '2.0')
def test_index_data(self):
index = self.text.index
self.text.insert('1.0', self.hw)
for dex in -1.0, 0.3, '1.-1', '1.0':
self.assertEqual(index(dex), '1.0')
for dex in '1.0 lineend', '1.end', '1.33':
self.assertEqual(index(dex), '1.5')
for dex in 'end', '33.44':
self.assertEqual(index(dex), '3.0')
def test_get(self):
get = self.text.get
Equal = self.assertEqual
self.text.insert('1.0', self.hw)
Equal(get('end'), '')
Equal(get('end', 'end'), '')
Equal(get('1.0'), 'h')
Equal(get('1.0', '1.1'), 'h')
Equal(get('1.0', '1.3'), 'hel')
Equal(get('1.1', '1.3'), 'el')
Equal(get('1.0', '1.0 lineend'), 'hello')
Equal(get('1.0', '1.10'), 'hello')
Equal(get('1.0 lineend'), '\n')
Equal(get('1.1', '2.3'), 'ello\nwor')
Equal(get('1.0', '2.5'), self.hw)
Equal(get('1.0', 'end'), self.hwn)
Equal(get('0.0', '5.0'), self.hwn)
def test_insert(self):
insert = self.text.insert
get = self.text.get
Equal = self.assertEqual
insert('1.0', self.hw)
Equal(get('1.0', 'end'), self.hwn)
insert('1.0', '') # nothing
Equal(get('1.0', 'end'), self.hwn)
insert('1.0', '*')
Equal(get('1.0', 'end'), '*hello\nworld\n')
insert('1.0 lineend', '*')
Equal(get('1.0', 'end'), '*hello*\nworld\n')
insert('2.3', '*')
Equal(get('1.0', 'end'), '*hello*\nwor*ld\n')
insert('end', 'x')
Equal(get('1.0', 'end'), '*hello*\nwor*ldx\n')
insert('1.4', 'x\n')
Equal(get('1.0', 'end'), '*helx\nlo*\nwor*ldx\n')
def test_no_delete(self):
# if index1 == 'insert' or 'end' or >= end, there is no deletion
delete = self.text.delete
get = self.text.get
Equal = self.assertEqual
self.text.insert('1.0', self.hw)
delete('insert')
Equal(get('1.0', 'end'), self.hwn)
delete('end')
Equal(get('1.0', 'end'), self.hwn)
delete('insert', 'end')
Equal(get('1.0', 'end'), self.hwn)
delete('insert', '5.5')
Equal(get('1.0', 'end'), self.hwn)
delete('1.4', '1.0')
Equal(get('1.0', 'end'), self.hwn)
delete('1.4', '1.4')
Equal(get('1.0', 'end'), self.hwn)
def test_delete_char(self):
delete = self.text.delete
get = self.text.get
Equal = self.assertEqual
self.text.insert('1.0', self.hw)
delete('1.0')
Equal(get('1.0', '1.end'), 'ello')
delete('1.0', '1.1')
Equal(get('1.0', '1.end'), 'llo')
# delete \n and combine 2 lines into 1
delete('1.end')
Equal(get('1.0', '1.end'), 'lloworld')
self.text.insert('1.3', '\n')
delete('1.10')
Equal(get('1.0', '1.end'), 'lloworld')
self.text.insert('1.3', '\n')
delete('1.3', '2.0')
Equal(get('1.0', '1.end'), 'lloworld')
def test_delete_slice(self):
delete = self.text.delete
get = self.text.get
Equal = self.assertEqual
self.text.insert('1.0', self.hw)
delete('1.0', '1.0 lineend')
Equal(get('1.0', 'end'), '\nworld\n')
delete('1.0', 'end')
Equal(get('1.0', 'end'), '\n')
self.text.insert('1.0', self.hw)
delete('1.0', '2.0')
Equal(get('1.0', 'end'), 'world\n')
delete('1.0', 'end')
Equal(get('1.0', 'end'), '\n')
self.text.insert('1.0', self.hw)
delete('1.2', '2.3')
Equal(get('1.0', 'end'), 'held\n')
def test_multiple_lines(self): # insert and delete
self.text.insert('1.0', 'hello')
self.text.insert('1.3', '1\n2\n3\n4\n5')
self.assertEqual(self.text.get('1.0', 'end'), 'hel1\n2\n3\n4\n5lo\n')
self.text.delete('1.3', '5.1')
self.assertEqual(self.text.get('1.0', 'end'), 'hello\n')
def test_compare(self):
compare = self.text.compare
Equal = self.assertEqual
# need data so indexes not squished to 1,0
self.text.insert('1.0', 'First\nSecond\nThird\n')
self.assertRaises(TclError, compare, '2.2', 'op', '2.2')
for op, less1, less0, equal, greater0, greater1 in (
('<', True, True, False, False, False),
('<=', True, True, True, False, False),
('>', False, False, False, True, True),
('>=', False, False, True, True, True),
('==', False, False, True, False, False),
('!=', True, True, False, True, True),
):
Equal(compare('1.1', op, '2.2'), less1, op)
Equal(compare('2.1', op, '2.2'), less0, op)
Equal(compare('2.2', op, '2.2'), equal, op)
Equal(compare('2.3', op, '2.2'), greater0, op)
Equal(compare('3.3', op, '2.2'), greater1, op)
class MockTextTest(TextTest, unittest.TestCase):
@classmethod
def setUpClass(cls):
from idlelib.idle_test.mock_tk import Text
cls.Text = Text
def test_decode(self):
# test endflags (-1, 0) not tested by test_index (which uses +1)
decode = self.text._decode
Equal = self.assertEqual
self.text.insert('1.0', self.hw)
Equal(decode('end', -1), (2, 5))
Equal(decode('3.1', -1), (2, 5))
Equal(decode('end', 0), (2, 6))
Equal(decode('3.1', 0), (2, 6))
class TkTextTest(TextTest, unittest.TestCase):
@classmethod
def setUpClass(cls):
requires('gui')
from tkinter import Tk, Text
cls.Text = Text
cls.root = Tk()
@classmethod
def tearDownClass(cls):
cls.root.destroy()
del cls.root
if __name__ == '__main__':
unittest.main(verbosity=2, exit=False)
| lgpl-3.0 |
Azure/azure-sdk-for-python | sdk/databoxedge/azure-mgmt-databoxedge/azure/mgmt/databoxedge/v2020_05_01_preview/aio/operations/_shares_operations.py | 1 | 26471 | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import Any, AsyncIterable, Callable, Dict, Generic, Optional, TypeVar, Union
import warnings
from azure.core.async_paging import AsyncItemPaged, AsyncList
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse, HttpRequest
from azure.core.polling import AsyncLROPoller, AsyncNoPolling, AsyncPollingMethod
from azure.mgmt.core.exceptions import ARMErrorFormat
from azure.mgmt.core.polling.async_arm_polling import AsyncARMPolling
from ... import models as _models
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class SharesOperations:
"""SharesOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~azure.mgmt.databoxedge.v2020_05_01_preview.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
def list_by_data_box_edge_device(
self,
device_name: str,
resource_group_name: str,
**kwargs
) -> AsyncIterable["_models.ShareList"]:
"""Lists all the shares in a Data Box Edge/Data Box Gateway device.
Lists all the shares in a Data Box Edge/Data Box Gateway device.
:param device_name: The device name.
:type device_name: str
:param resource_group_name: The resource group name.
:type resource_group_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either ShareList or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.mgmt.databoxedge.v2020_05_01_preview.models.ShareList]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.ShareList"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2020-05-01-preview"
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_by_data_box_edge_device.metadata['url'] # type: ignore
path_format_arguments = {
'deviceName': self._serialize.url("device_name", device_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('ShareList', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_by_data_box_edge_device.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/{deviceName}/shares'} # type: ignore
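The `prepare_request`/`get_next` pair above implements continuation-token paging: each response either carries a `next_link` for the following page or ends the iteration. A stdlib-only sketch of that loop, with a hypothetical `fetch_page` standing in for the pipeline call:

```python
import asyncio

async def fetch_page(token):
    # Hypothetical two-page backend keyed by continuation token.
    pages = {None: (["share-a", "share-b"], "page2"),
             "page2": (["share-c"], None)}
    return pages[token]

async def list_all():
    items, token = [], None
    while True:
        page, token = await fetch_page(token)
        items.extend(page)
        if token is None:   # no next_link -> iteration complete
            return items

shares = asyncio.run(list_all())
```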
async def get(
self,
device_name: str,
name: str,
resource_group_name: str,
**kwargs
) -> "_models.Share":
"""Gets a share by name.
Gets a share by name.
:param device_name: The device name.
:type device_name: str
:param name: The share name.
:type name: str
:param resource_group_name: The resource group name.
:type resource_group_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: Share, or the result of cls(response)
:rtype: ~azure.mgmt.databoxedge.v2020_05_01_preview.models.Share
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.Share"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2020-05-01-preview"
accept = "application/json"
# Construct URL
url = self.get.metadata['url'] # type: ignore
path_format_arguments = {
'deviceName': self._serialize.url("device_name", device_name, 'str'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('Share', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/{deviceName}/shares/{name}'} # type: ignore
async def _create_or_update_initial(
self,
device_name: str,
name: str,
resource_group_name: str,
share: "_models.Share",
**kwargs
) -> Optional["_models.Share"]:
cls = kwargs.pop('cls', None) # type: ClsType[Optional["_models.Share"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2020-05-01-preview"
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self._create_or_update_initial.metadata['url'] # type: ignore
path_format_arguments = {
'deviceName': self._serialize.url("device_name", device_name, 'str'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(share, 'Share')
body_content_kwargs['content'] = body_content
request = self._client.put(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('Share', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
_create_or_update_initial.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/{deviceName}/shares/{name}'} # type: ignore
async def begin_create_or_update(
self,
device_name: str,
name: str,
resource_group_name: str,
share: "_models.Share",
**kwargs
) -> AsyncLROPoller["_models.Share"]:
"""Creates a new share or updates an existing share on the device.
:param device_name: The device name.
:type device_name: str
:param name: The share name.
:type name: str
:param resource_group_name: The resource group name.
:type resource_group_name: str
:param share: The share properties.
:type share: ~azure.mgmt.databoxedge.v2020_05_01_preview.models.Share
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: Pass in True if you'd like the AsyncARMPolling polling method,
False for no polling, or your own initialized polling object for a personal polling strategy.
:paramtype polling: bool or ~azure.core.polling.AsyncPollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no Retry-After header is present.
:return: An instance of AsyncLROPoller that returns either Share or the result of cls(response)
:rtype: ~azure.core.polling.AsyncLROPoller[~azure.mgmt.databoxedge.v2020_05_01_preview.models.Share]
:raises ~azure.core.exceptions.HttpResponseError:
"""
polling = kwargs.pop('polling', True) # type: Union[bool, AsyncPollingMethod]
cls = kwargs.pop('cls', None) # type: ClsType["_models.Share"]
lro_delay = kwargs.pop(
'polling_interval',
self._config.polling_interval
)
cont_token = kwargs.pop('continuation_token', None) # type: Optional[str]
if cont_token is None:
raw_result = await self._create_or_update_initial(
device_name=device_name,
name=name,
resource_group_name=resource_group_name,
share=share,
cls=lambda x,y,z: x,
**kwargs
)
kwargs.pop('error_map', None)
kwargs.pop('content_type', None)
def get_long_running_output(pipeline_response):
deserialized = self._deserialize('Share', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
path_format_arguments = {
'deviceName': self._serialize.url("device_name", device_name, 'str'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
}
if polling is True: polling_method = AsyncARMPolling(lro_delay, path_format_arguments=path_format_arguments, **kwargs)
elif polling is False: polling_method = AsyncNoPolling()
else: polling_method = polling
if cont_token:
return AsyncLROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output
)
else:
return AsyncLROPoller(self._client, raw_result, get_long_running_output, polling_method)
begin_create_or_update.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/{deviceName}/shares/{name}'} # type: ignore
async def _delete_initial(
self,
device_name: str,
name: str,
resource_group_name: str,
**kwargs
) -> None:
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2020-05-01-preview"
accept = "application/json"
# Construct URL
url = self._delete_initial.metadata['url'] # type: ignore
path_format_arguments = {
'deviceName': self._serialize.url("device_name", device_name, 'str'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202, 204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
_delete_initial.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/{deviceName}/shares/{name}'} # type: ignore
async def begin_delete(
self,
device_name: str,
name: str,
resource_group_name: str,
**kwargs
) -> AsyncLROPoller[None]:
"""Deletes the share on the Data Box Edge/Data Box Gateway device.
:param device_name: The device name.
:type device_name: str
:param name: The share name.
:type name: str
:param resource_group_name: The resource group name.
:type resource_group_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: Pass in True if you'd like the AsyncARMPolling polling method,
False for no polling, or your own initialized polling object for a personal polling strategy.
:paramtype polling: bool or ~azure.core.polling.AsyncPollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no Retry-After header is present.
:return: An instance of AsyncLROPoller that returns either None or the result of cls(response)
:rtype: ~azure.core.polling.AsyncLROPoller[None]
:raises ~azure.core.exceptions.HttpResponseError:
"""
polling = kwargs.pop('polling', True) # type: Union[bool, AsyncPollingMethod]
cls = kwargs.pop('cls', None) # type: ClsType[None]
lro_delay = kwargs.pop(
'polling_interval',
self._config.polling_interval
)
cont_token = kwargs.pop('continuation_token', None) # type: Optional[str]
if cont_token is None:
raw_result = await self._delete_initial(
device_name=device_name,
name=name,
resource_group_name=resource_group_name,
cls=lambda x,y,z: x,
**kwargs
)
kwargs.pop('error_map', None)
kwargs.pop('content_type', None)
def get_long_running_output(pipeline_response):
if cls:
return cls(pipeline_response, None, {})
path_format_arguments = {
'deviceName': self._serialize.url("device_name", device_name, 'str'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
}
if polling is True: polling_method = AsyncARMPolling(lro_delay, path_format_arguments=path_format_arguments, **kwargs)
elif polling is False: polling_method = AsyncNoPolling()
else: polling_method = polling
if cont_token:
return AsyncLROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output
)
else:
return AsyncLROPoller(self._client, raw_result, get_long_running_output, polling_method)
begin_delete.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/{deviceName}/shares/{name}'} # type: ignore
async def _refresh_initial(
self,
device_name: str,
name: str,
resource_group_name: str,
**kwargs
) -> None:
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2020-05-01-preview"
accept = "application/json"
# Construct URL
url = self._refresh_initial.metadata['url'] # type: ignore
path_format_arguments = {
'deviceName': self._serialize.url("device_name", device_name, 'str'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.post(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
_refresh_initial.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/{deviceName}/shares/{name}/refresh'} # type: ignore
async def begin_refresh(
self,
device_name: str,
name: str,
resource_group_name: str,
**kwargs
) -> AsyncLROPoller[None]:
"""Refreshes the share metadata with the data from the cloud.
:param device_name: The device name.
:type device_name: str
:param name: The share name.
:type name: str
:param resource_group_name: The resource group name.
:type resource_group_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: Pass in True if you'd like the AsyncARMPolling polling method,
False for no polling, or your own initialized polling object for a personal polling strategy.
:paramtype polling: bool or ~azure.core.polling.AsyncPollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no Retry-After header is present.
:return: An instance of AsyncLROPoller that returns either None or the result of cls(response)
:rtype: ~azure.core.polling.AsyncLROPoller[None]
:raises ~azure.core.exceptions.HttpResponseError:
"""
polling = kwargs.pop('polling', True) # type: Union[bool, AsyncPollingMethod]
cls = kwargs.pop('cls', None) # type: ClsType[None]
lro_delay = kwargs.pop(
'polling_interval',
self._config.polling_interval
)
cont_token = kwargs.pop('continuation_token', None) # type: Optional[str]
if cont_token is None:
raw_result = await self._refresh_initial(
device_name=device_name,
name=name,
resource_group_name=resource_group_name,
cls=lambda x,y,z: x,
**kwargs
)
kwargs.pop('error_map', None)
kwargs.pop('content_type', None)
def get_long_running_output(pipeline_response):
if cls:
return cls(pipeline_response, None, {})
path_format_arguments = {
'deviceName': self._serialize.url("device_name", device_name, 'str'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
}
if polling is True: polling_method = AsyncARMPolling(lro_delay, path_format_arguments=path_format_arguments, **kwargs)
elif polling is False: polling_method = AsyncNoPolling()
else: polling_method = polling
if cont_token:
return AsyncLROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output
)
else:
return AsyncLROPoller(self._client, raw_result, get_long_running_output, polling_method)
begin_refresh.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataBoxEdge/dataBoxEdgeDevices/{deviceName}/shares/{name}/refresh'} # type: ignore
| mit |
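Each `begin_*` method above picks its polling strategy with the same three-way dispatch on the `polling` keyword: `True` selects ARM polling, `False` disables polling, and anything else is treated as a user-supplied, already-initialized polling object. A minimal stdlib sketch of that dispatch — `ArmPolling` and `NoPolling` here are hypothetical stand-ins, not the real `azure.core.polling` classes:

```python
# Sketch of the polling-method dispatch used by the begin_* operations
# above. ArmPolling/NoPolling are illustrative stand-ins only.

class ArmPolling:
    def __init__(self, delay):
        self.delay = delay  # seconds between polls when no Retry-After header

class NoPolling:
    delay = 0

def select_polling_method(polling, lro_delay):
    # Mirrors: `if polling is True: ... elif polling is False: ... else: ...`
    if polling is True:
        return ArmPolling(lro_delay)
    if polling is False:
        return NoPolling()
    return polling  # caller supplied an initialized polling object

method = select_polling_method(True, 30)
print(type(method).__name__, method.delay)  # ArmPolling 30
```

The identity checks (`is True` / `is False`) matter: a truthy custom polling object must fall through to the final branch rather than being replaced by the default.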
MichaelNedzelsky/intellij-community | plugins/hg4idea/testData/bin/hgext/interhg.py | 93 | 2836 | # interhg.py - interhg
#
# Copyright 2007 OHASHI Hideya <ohachige@gmail.com>
#
# Contributor(s):
# Edward Lee <edward.lee@engineering.uiuc.edu>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
'''expand expressions into changelog and summaries
This extension allows the use of a special syntax in summaries, which
will be automatically expanded into links or any other arbitrary
expression, much like InterWiki does.
A few example patterns (link to bug tracking, etc.) that may be used
in your hgrc::
[interhg]
issues = s!issue(\\d+)!<a href="http://bts/issue\\1">issue\\1</a>!
bugzilla = s!((?:bug|b=|(?=#?\\d{4,}))(?:\\s*#?)(\\d+))!<a..=\\2">\\1</a>!i
boldify = s!(^|\\s)#(\\d+)\\b! <b>#\\2</b>!
'''
import re
from mercurial.hgweb import hgweb_mod
from mercurial import templatefilters, extensions
from mercurial.i18n import _
testedwith = 'internal'
interhg_table = []
def uisetup(ui):
orig_escape = templatefilters.filters["escape"]
def interhg_escape(x):
escstr = orig_escape(x)
for regexp, format in interhg_table:
escstr = regexp.sub(format, escstr)
return escstr
templatefilters.filters["escape"] = interhg_escape
def interhg_refresh(orig, self, *args, **kwargs):
interhg_table[:] = []
for key, pattern in self.repo.ui.configitems('interhg'):
# grab the delimiter from the character after the "s"
unesc = pattern[1]
delim = re.escape(unesc)
# identify portions of the pattern, taking care to avoid escaped
# delimiters. the replace format and flags are optional, but delimiters
# are required.
match = re.match(r'^s%s(.+)(?:(?<=\\\\)|(?<!\\))%s(.*)%s([ilmsux])*$'
% (delim, delim, delim), pattern)
if not match:
self.repo.ui.warn(_("interhg: invalid pattern for %s: %s\n")
% (key, pattern))
continue
# we need to unescape the delimiter for regexp and format
delim_re = re.compile(r'(?<!\\)\\%s' % delim)
regexp = delim_re.sub(unesc, match.group(1))
format = delim_re.sub(unesc, match.group(2))
# the pattern allows for 6 regexp flags, so set them if necessary
flagin = match.group(3)
flags = 0
if flagin:
for flag in flagin.upper():
flags |= re.__dict__[flag]
try:
regexp = re.compile(regexp, flags)
interhg_table.append((regexp, format))
except re.error:
self.repo.ui.warn(_("interhg: invalid regexp for %s: %s\n")
% (key, regexp))
return orig(self, *args, **kwargs)
extensions.wrapfunction(hgweb_mod.hgweb, 'refresh', interhg_refresh)
| apache-2.0 |
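The `[interhg]` values above are sed-style `s!regexp!format!flags` substitutions applied to escaped changelog text, with the delimiter taken from the character after the leading `s`. A simplified, runnable illustration of the `issues` example — this naive `split` does not handle escaped delimiters, unlike the real extension's regexp-based parser:

```python
import re

def apply_interhg(pattern, text):
    # pattern looks like: s<delim>regexp<delim>format<delim>[flags]
    delim = pattern[1]
    _, regexp, fmt, flagstr = pattern.split(delim)
    flags = 0
    for flag in flagstr.upper():
        flags |= getattr(re, flag)  # 'i' -> re.I, 'm' -> re.M, ...
    return re.compile(regexp, flags).sub(fmt, text)

issues = r's!issue(\d+)!<a href="http://bts/issue\1">issue\1</a>!'
print(apply_interhg(issues, "fixed in issue42"))
# fixed in <a href="http://bts/issue42">issue42</a>
```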
oeeagle/quantum | neutron/extensions/servicetype.py | 18 | 3041 | # vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Salvatore Orlando, VMware
#
from neutron.api import extensions
from neutron.api.v2 import attributes
from neutron.api.v2 import base
from neutron.db import servicetype_db
from neutron.openstack.common import log as logging
LOG = logging.getLogger(__name__)
RESOURCE_NAME = "service_provider"
COLLECTION_NAME = "%ss" % RESOURCE_NAME
SERVICE_ATTR = 'service_type'
PLUGIN_ATTR = 'plugin'
DRIVER_ATTR = 'driver'
EXT_ALIAS = 'service-type'
# Attribute Map for Service Provider Resource
# Allow read-only access
RESOURCE_ATTRIBUTE_MAP = {
COLLECTION_NAME: {
'service_type': {'allow_post': False, 'allow_put': False,
'is_visible': True},
'name': {'allow_post': False, 'allow_put': False,
'is_visible': True},
'default': {'allow_post': False, 'allow_put': False,
'is_visible': True},
}
}
class Servicetype(extensions.ExtensionDescriptor):
@classmethod
def get_name(cls):
return _("Neutron Service Type Management")
@classmethod
def get_alias(cls):
return EXT_ALIAS
@classmethod
def get_description(cls):
return _("API for retrieving service providers for "
"Neutron advanced services")
@classmethod
def get_namespace(cls):
return "http://docs.openstack.org/ext/neutron/service-type/api/v1.0"
@classmethod
def get_updated(cls):
return "2013-01-20T00:00:00-00:00"
@classmethod
def get_resources(cls):
"""Returns Extended Resource for service type management."""
my_plurals = [(key, key[:-1]) for key in RESOURCE_ATTRIBUTE_MAP.keys()]
attributes.PLURALS.update(dict(my_plurals))
attr_map = RESOURCE_ATTRIBUTE_MAP[COLLECTION_NAME]
collection_name = COLLECTION_NAME.replace('_', '-')
controller = base.create_resource(
collection_name,
RESOURCE_NAME,
servicetype_db.ServiceTypeManager.get_instance(),
attr_map)
return [extensions.ResourceExtension(collection_name,
controller,
attr_map=attr_map)]
def get_extended_resources(self, version):
if version == "2.0":
return RESOURCE_ATTRIBUTE_MAP
else:
return {}
| apache-2.0 |
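The `get_resources` classmethod above derives singular resource names by chopping the trailing `s` from each collection key (`key[:-1]`) and converts underscores to hyphens for the URL segment. Both transforms in isolation:

```python
RESOURCE_NAME = "service_provider"
COLLECTION_NAME = "%ss" % RESOURCE_NAME

# Plural -> singular, as in: [(key, key[:-1]) for key in RESOURCE_ATTRIBUTE_MAP.keys()]
plurals = dict([(key, key[:-1]) for key in [COLLECTION_NAME]])
print(plurals)  # {'service_providers': 'service_provider'}

# Underscores become hyphens for the REST collection URL segment.
print(COLLECTION_NAME.replace('_', '-'))  # service-providers
```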
unnikrishnankgs/va | venv/lib/python3.5/site-packages/tensorflow/contrib/hooks/python/training/profiler_hook.py | 78 | 4153 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Additional `SessionRunHook` implementations to complement those in
tensorflow/python/training.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os.path
from tensorflow.core.protobuf import config_pb2
from tensorflow.python.client import timeline
from tensorflow.python.platform import gfile
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.training.basic_session_run_hooks import SecondOrStepTimer
from tensorflow.python.training.session_run_hook import SessionRunArgs
from tensorflow.python.training import session_run_hook
from tensorflow.python.training import training_util
class ProfilerHook(session_run_hook.SessionRunHook):
"""Captures CPU/GPU profiling information every N steps or seconds.
This produces files called "timeline-<step>.json", which are in Chrome
Trace format.
For more information see:
https://github.com/catapult-project/catapult/blob/master/tracing/README.md"""
def __init__(self,
save_steps=None,
save_secs=None,
output_dir="",
show_dataflow=True,
show_memory=False):
"""Initializes a hook that takes periodic profiling snapshots.
Args:
save_steps: `int`, save profile traces every N steps. Exactly one of
`save_secs` and `save_steps` should be set.
save_secs: `int`, save profile traces every N seconds.
output_dir: `string`, the directory to save the profile traces to.
Defaults to the current directory.
show_dataflow: `bool`, if True, add flow events to the trace connecting
producers and consumers of tensors.
show_memory: `bool`, if True, add object snapshot events to the trace
showing the sizes and lifetimes of tensors.
"""
self._output_file = os.path.join(output_dir, "timeline-{}.json")
self._show_dataflow = show_dataflow
self._show_memory = show_memory
self._timer = SecondOrStepTimer(every_secs=save_secs,
every_steps=save_steps)
def begin(self):
self._next_step = None
self._global_step_tensor = training_util.get_global_step()
if self._global_step_tensor is None:
raise RuntimeError(
"Global step should be created to use ProfilerHook.")
def before_run(self, run_context):
self._request_summary = (
self._next_step is None or
self._timer.should_trigger_for_step(self._next_step))
requests = {"global_step": self._global_step_tensor}
opts = (config_pb2.RunOptions(trace_level=config_pb2.RunOptions.FULL_TRACE)
if self._request_summary else None)
return SessionRunArgs(requests, options=opts)
def after_run(self, run_context, run_values):
global_step = run_values.results["global_step"]
if self._request_summary:
self._timer.update_last_triggered_step(global_step)
self._save(global_step,
self._output_file.format(global_step),
run_values.run_metadata.step_stats)
self._next_step = global_step + 1
def _save(self, step, save_path, step_stats):
logging.info("Saving timeline for %d into '%s'.", step, save_path)
with gfile.Open(save_path, "w") as f:
trace = timeline.Timeline(step_stats)
f.write(trace.generate_chrome_trace_format(
show_dataflow=self._show_dataflow,
show_memory=self._show_memory))
| bsd-2-clause |
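`ProfilerHook` above writes its `timeline-<step>.json` files in Chrome Trace format, viewable at `chrome://tracing`. The event shape can be sketched by hand with nothing but `json` — the field names (`name`, `ph`, `ts`, `dur`, `pid`, `tid`, timestamps in microseconds) follow the Chrome trace event spec, not any TensorFlow API:

```python
import json

# A minimal Chrome-trace document: two complete ("ph": "X") events,
# each with a start timestamp and duration in microseconds.
trace = {
    "traceEvents": [
        {"name": "MatMul", "ph": "X", "ts": 0, "dur": 1500,
         "pid": 0, "tid": 0},
        {"name": "Relu", "ph": "X", "ts": 1500, "dur": 300,
         "pid": 0, "tid": 0},
    ]
}

# Round-trip to confirm the document is valid JSON of the expected shape.
blob = json.dumps(trace)
events = json.loads(blob)["traceEvents"]
print(len(events), events[0]["name"])  # 2 MatMul
```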
statik/grr | lib/config_lib.py | 2 | 49103 | #!/usr/bin/env python
"""This is the GRR config management code.
This handles opening and parsing of config files.
"""
import collections
import ConfigParser
import errno
import os
import pickle
import platform
import re
import StringIO
import sys
import urlparse
import zipfile
import yaml
import logging
from grr.lib import flags
from grr.lib import lexer
from grr.lib import registry
from grr.lib import type_info
from grr.lib import utils
flags.DEFINE_string("config", None,
"Primary Configuration file to use.")
flags.DEFINE_list("secondary_configs", [],
"Secondary configuration files to load (These override "
"previous configuration files.).")
flags.DEFINE_bool("config_help", False,
"Print help about the configuration.")
flags.DEFINE_list("context", [],
"Use these contexts for the config.")
flags.DEFINE_list("plugins", [],
"Load these files as additional plugins.")
flags.DEFINE_bool("disallow_missing_config_definitions", False,
"If true, we raise an error on undefined config options.")
flags.PARSER.add_argument(
"-p", "--parameter", action="append",
default=[],
help="Global override of config values. "
"For example -p DataStore.implementation=MySQLDataStore")
class Error(Exception):
"""Base class for configuration exceptions."""
class ConfigFormatError(Error, type_info.TypeValueError):
"""Raised when configuration file is formatted badly."""
class ConfigWriteError(Error):
"""Raised when we failed to update the config."""
class UnknownOption(Error, KeyError):
"""Raised when an unknown option was requested."""
class FilterError(Error):
"""Raised when a filter fails to perform its function."""
class ConstModificationError(Error):
"""Raised when the config tries to change a constant option."""
class AlreadyInitializedError(Error):
"""Raised when an option is defined after initialization."""
class MissingConfigDefinitionError(Error):
"""Raised when a config contains an undefined config option."""
class InvalidContextError(Error):
"""Raised when an invalid context is used."""
class ConfigFilter(object):
"""A configuration filter can transform a configuration parameter."""
__metaclass__ = registry.MetaclassRegistry
name = "identity"
def Filter(self, data):
return data
class Literal(ConfigFilter):
"""A filter which does not interpolate."""
name = "literal"
class Lower(ConfigFilter):
name = "lower"
def Filter(self, data):
return data.lower()
class Upper(ConfigFilter):
name = "upper"
def Filter(self, data):
return data.upper()
class Filename(ConfigFilter):
name = "file"
def Filter(self, data):
try:
return open(data, "rb").read(1024000)
except IOError as e:
raise FilterError("%s: %s" % (data, e))
class FixPathSeparator(ConfigFilter):
name = "fixpathsep"
def Filter(self, data):
if platform.system() == "Windows":
# This will fix "X:\", and might add extra slashes to other paths, but
# this is OK.
return data.replace("\\", "\\\\")
else:
return data.replace("\\", "/")
class Base64(ConfigFilter):
name = "base64"
def Filter(self, data):
return data.decode("base64")
class Env(ConfigFilter):
"""Interpolate environment variables."""
name = "env"
def Filter(self, data):
return os.environ.get(data.upper(), "")
class Expand(ConfigFilter):
"""Expands the input as a configuration parameter."""
name = "expand"
def Filter(self, data):
return CONFIG.InterpolateValue(data)
class Flags(ConfigFilter):
"""Get the parameter from the flags."""
name = "flags"
def Filter(self, data):
try:
logging.debug("Overriding config option with flags.FLAGS.%s", data)
return getattr(flags.FLAGS, data)
except AttributeError as e:
raise FilterError(e)
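Each `ConfigFilter` subclass above exposes a `Filter(data)` transform and registers itself under its `name`; interpolation then applies a filter by looking that name up. A stdlib sketch of the lookup-and-apply pattern, with an explicit dict standing in for `registry.MetaclassRegistry`:

```python
import os

class ConfigFilter:
    name = "identity"
    def Filter(self, data):
        return data

class Lower(ConfigFilter):
    name = "lower"
    def Filter(self, data):
        return data.lower()

class Env(ConfigFilter):
    name = "env"
    def Filter(self, data):
        return os.environ.get(data.upper(), "")

# Hand-built registry standing in for the metaclass-based one.
FILTERS = {cls.name: cls for cls in (ConfigFilter, Lower, Env)}

def apply_filter(name, data):
    return FILTERS[name]().Filter(data)

print(apply_filter("lower", "GRR"))  # grr
```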
class GRRConfigParser(object):
"""The base class for all GRR configuration parsers."""
__metaclass__ = registry.MetaclassRegistry
# Configuration parsers are named. This name is used to select the correct
# parser from the --config parameter which is interpreted as a url.
name = None
# Set to True by the parsers if the file exists.
parsed = None
def RawData(self):
"""Convert the file to a more suitable data structure.
Returns:
The standard data format from this method is for example:
{
name: default_value;
name2: default_value2;
"Context1": {
name: value,
name2: value,
"Nested Context" : {
name: value;
};
},
"Context2": {
name: value,
}
}
i.e. raw_data is an OrderedYamlDict() with keys representing parameter names
and values representing values. Contexts are represented by nested
OrderedYamlDict() structures with similar format.
Note that support for contexts is optional and depends on the config file
format. If contexts are not supported, a flat OrderedYamlDict() is returned.
"""
class ConfigFileParser(ConfigParser.RawConfigParser, GRRConfigParser):
"""A parser for ini style config files."""
def __init__(self, filename=None, data=None, fd=None):
super(ConfigFileParser, self).__init__()
self.optionxform = str
if fd:
self.parsed = self.readfp(fd)
self.filename = filename or fd.name
elif filename:
self.parsed = self.read(filename)
self.filename = filename
elif data is not None:
fd = StringIO.StringIO(data)
self.parsed = self.readfp(fd)
self.filename = filename
else:
raise Error("Filename not specified.")
def __str__(self):
return "<%s filename=\"%s\">" % (self.__class__.__name__, self.filename)
def SaveData(self, raw_data):
"""Store the raw data as our configuration."""
if self.filename is None:
raise IOError("Unknown filename")
logging.info("Writing back configuration to file %s", self.filename)
# Ensure intermediate directories exist
try:
os.makedirs(os.path.dirname(self.filename))
except (IOError, OSError):
pass
try:
# We can not use the standard open() call because we need to
# enforce restrictive file permissions on the created file.
mode = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
fd = os.open(self.filename, mode, 0600)
with os.fdopen(fd, "wb") as config_file:
self.SaveDataToFD(raw_data, config_file)
except OSError as e:
logging.warn("Unable to write config file %s: %s.", self.filename, e)
def SaveDataToFD(self, raw_data, fd):
"""Merge the raw data with the config file and store it."""
for key, value in raw_data.items():
self.set("", key, value=value)
self.write(fd)
def RawData(self):
raw_data = OrderedYamlDict()
for section in self.sections():
for key, value in self.items(section):
raw_data[".".join([section, key])] = value
return raw_data
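`RawData` above flattens every `[Section]`/`key` pair of an ini file into a dotted `Section.key` name. The same flattening in modern Python 3 (`configparser` instead of the Python 2 `ConfigParser` this file targets):

```python
import configparser

ini = """
[Logging]
path = /var/log/grr
verbose = True
"""

parser = configparser.RawConfigParser()
parser.optionxform = str  # preserve key case, as ConfigFileParser does
parser.read_string(ini)

raw_data = {}
for section in parser.sections():
    for key, value in parser.items(section):
        raw_data[".".join([section, key])] = value

print(raw_data)  # {'Logging.path': '/var/log/grr', 'Logging.verbose': 'True'}
```

Setting `optionxform = str` disables the default lowercasing of option names, which is why GRR's parser does the same in its constructor.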
class OrderedYamlDict(yaml.YAMLObject, collections.OrderedDict):
"""A class which produces an ordered dict."""
yaml_tag = "tag:yaml.org,2002:map"
# pylint:disable=g-bad-name
@classmethod
def to_yaml(cls, dumper, data):
value = []
node = yaml.nodes.MappingNode(cls.yaml_tag, value)
for key, item in data.iteritems():
node_key = dumper.represent_data(key)
node_value = dumper.represent_data(item)
value.append((node_key, node_value))
return node
@classmethod
def construct_mapping(cls, loader, node, deep=False):
"""Based on yaml.loader.BaseConstructor.construct_mapping."""
if not isinstance(node, yaml.MappingNode):
raise yaml.loader.ConstructorError(
None, None, "expected a mapping node, but found %s" % node.id,
node.start_mark)
mapping = OrderedYamlDict()
for key_node, value_node in node.value:
key = loader.construct_object(key_node, deep=deep)
try:
hash(key)
except TypeError, exc:
raise yaml.loader.ConstructorError(
"while constructing a mapping", node.start_mark,
"found unacceptable key (%s)" % exc, key_node.start_mark)
value = loader.construct_object(value_node, deep=deep)
mapping[key] = value
return mapping
@classmethod
def from_yaml(cls, loader, node):
"""Parse the yaml file into an OrderedDict so we can preserve order."""
fields = cls.construct_mapping(loader, node, deep=True)
result = cls()
for k, v in fields.items():
result[k] = v
return result
# pylint:enable=g-bad-name
class YamlParser(GRRConfigParser):
"""A parser for yaml style config files."""
name = "yaml"
def _LoadYamlByName(self, filename):
return yaml.safe_load(open(filename, "rb"))
def _ParseYaml(self, filename="", fd=None):
"""Recursively parse included configs."""
if not filename and not fd:
raise IOError("Neither filename nor fd specified")
if fd:
data = yaml.safe_load(fd) or OrderedYamlDict()
elif filename:
data = self._LoadYamlByName(filename) or OrderedYamlDict()
for include in data.pop("ConfigIncludes", []):
path = os.path.join(os.path.dirname(filename), include)
parser_cls = GrrConfigManager.GetParserFromFilename(path)
parser = parser_cls(filename=path)
data.update(parser.RawData())
return data
def __init__(self, filename=None, data=None, fd=None):
super(YamlParser, self).__init__()
if fd:
self.fd = fd
try:
self.filename = fd.name
except AttributeError:
self.filename = None
self.parsed = self._ParseYaml(fd=fd, filename=(self.filename or ""))
elif filename:
self.filename = filename
try:
self.parsed = self._ParseYaml(filename=filename) or OrderedYamlDict()
except IOError as e:
if e.errno == errno.EACCES:
# Specifically catch access denied errors, this usually indicates the
# user wanted to read the file, and it existed, but they lacked the
# permissions.
          raise
else:
self.parsed = OrderedYamlDict()
except OSError:
self.parsed = OrderedYamlDict()
elif data is not None:
self.filename = filename
fd = StringIO.StringIO(data)
self.parsed = self._ParseYaml(fd=fd)
else:
raise Error("Filename not specified.")
def __str__(self):
return "<%s filename=\"%s\">" % (self.__class__.__name__, self.filename)
def SaveData(self, raw_data):
"""Store the raw data as our configuration."""
if self.filename is None:
raise IOError("Unknown filename")
logging.info("Writing back configuration to file %s", self.filename)
# Ensure intermediate directories exist
try:
os.makedirs(os.path.dirname(self.filename))
except (IOError, OSError):
pass
try:
# We can not use the standard open() call because we need to
# enforce restrictive file permissions on the created file.
mode = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
      fd = os.open(self.filename, mode, 0o600)
with os.fdopen(fd, "wb") as config_file:
self.SaveDataToFD(raw_data, config_file)
except OSError as e:
      logging.warning("Unable to write config file %s: %s.",
                      self.filename, e)
def SaveDataToFD(self, raw_data, fd):
"""Merge the raw data with the config file and store it."""
yaml.dump(raw_data, fd, default_flow_style=False)
def _RawData(self, data):
"""Convert data to common format.
Configuration options are normally grouped by the functional component which
define it (e.g. Logging.path is the path parameter for the logging
subsystem). However, sometimes it is more intuitive to write the config as a
flat string (e.g. Logging.path). In this case we group all the flat strings
in their respective sections and create the sections automatically.
Args:
data: A dict of raw data.
Returns:
a dict in common format. Any keys in the raw data which have a "." in them
are separated into their own sections. This allows the config to be
written explicitly in dot notation instead of using a section.
"""
if not isinstance(data, dict):
return data
result = OrderedYamlDict()
for k, v in data.items():
result[k] = self._RawData(v)
return result
def RawData(self):
return self._RawData(self.parsed)
class StringInterpolator(lexer.Lexer):
r"""Implements a lexer for the string interpolation language.
Config files may specify nested interpolation codes:
- The following form specifies an interpolation command:
%(arg string|filter)
Where arg string is an arbitrary string and filter is the name of a filter
function which will receive the arg string. If filter is omitted, the arg
string is interpreted as a section.parameter reference and expanded from
within the config system.
- Interpolation commands may be nested. In this case, the interpolation
proceeds from innermost to outermost:
e.g. %(arg1 %(arg2|filter2)|filter1)
1. First arg2 is passed through filter2.
2. The result of that is appended to arg1.
3. The combined string is then filtered using filter1.
  - The following characters must be escaped by preceding them with a single \:
- ()|
"""
tokens = [
# When in literal mode, only allow to escape }
lexer.Token("Literal", r"\\[^{}]", "AppendArg", None),
# Allow escaping of special characters
lexer.Token(None, r"\\(.)", "Escape", None),
# Literal sequence is %{....}. Literal states can not be nested further,
# i.e. we include anything until the next }. It is still possible to
# escape } if this character needs to be inserted literally.
lexer.Token("Literal", r"\}", "EndLiteralExpression,PopState", None),
lexer.Token("Literal", r"[^}\\]+", "AppendArg", None),
lexer.Token(None, r"\%\{", "StartExpression,PushState", "Literal"),
# Expansion sequence is %(....)
lexer.Token(None, r"\%\(", "StartExpression", None),
lexer.Token(None, r"\|([a-zA-Z_]+)\)", "Filter", None),
lexer.Token(None, r"\)", "ExpandArg", None),
# Glob up as much data as possible to increase efficiency here.
lexer.Token(None, r"[^()%{}|\\]+", "AppendArg", None),
lexer.Token(None, r".", "AppendArg", None),
]
STRING_ESCAPES = {"\\\\": "\\",
"\\(": "(",
"\\)": ")",
"\\{": "{",
"\\}": "}",
"\\%": "%"}
def __init__(self, data, config, default_section="", parameter=None,
context=None):
self.stack = [""]
self.default_section = default_section
self.parameter = parameter
self.config = config
self.context = context
super(StringInterpolator, self).__init__(data)
def Escape(self, string="", **_):
"""Support standard string escaping."""
# Translate special escapes:
self.stack[-1] += self.STRING_ESCAPES.get(string, string)
def Error(self, e):
"""Parse errors are fatal."""
raise ConfigFormatError("While parsing %s: %s" % (self.parameter, e))
def StartExpression(self, **_):
"""Start processing a new expression."""
# Extend the stack for the new expression.
self.stack.append("")
def EndLiteralExpression(self, **_):
if len(self.stack) <= 1:
raise lexer.ParseError(
"Unbalanced literal sequence: Can not expand '%s'" %
self.processed_buffer)
arg = self.stack.pop(-1)
self.stack[-1] += arg
def Filter(self, match=None, **_):
"""Filter the current expression."""
arg = self.stack.pop(-1)
# Filters can be specified as a comma separated list.
for filter_name in match.group(1).split(","):
filter_object = ConfigFilter.classes_by_name.get(filter_name)
if filter_object is None:
raise FilterError("Unknown filter function %r" % filter_name)
logging.debug("Applying filter %s for %s.", filter_name, arg)
arg = filter_object().Filter(arg)
self.stack[-1] += arg
def ExpandArg(self, **_):
"""Expand the args as a section.parameter from the config."""
# This function is called when we see close ) and the stack depth has to
# exactly match the number of (.
if len(self.stack) <= 1:
raise lexer.ParseError(
"Unbalanced parenthesis: Can not expand '%s'" % self.processed_buffer)
# This is the full parameter name: e.g. Logging.path
parameter_name = self.stack.pop(-1)
if "." not in parameter_name:
parameter_name = "%s.%s" % (self.default_section, parameter_name)
final_value = self.config.Get(parameter_name, context=self.context)
if final_value is None:
final_value = ""
type_info_obj = (self.config.FindTypeInfo(parameter_name) or
type_info.String())
# Encode the interpolated string according to its type.
self.stack[-1] += type_info_obj.ToString(final_value)
def AppendArg(self, string="", **_):
self.stack[-1] += string
def Parse(self):
self.Close()
if len(self.stack) != 1:
raise lexer.ParseError("Nested expression not balanced.")
return self.stack[0]
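The inside-out expansion described in the class docstring can be illustrated with a minimal, stand-alone sketch. The `expand` helper and its option dict below are hypothetical (not part of GRR, and without filters or escape handling): repeatedly substituting the innermost `%(name)` references reproduces the nesting behaviour of `StringInterpolator`.

```python
import re

# Hypothetical mini-expander: only expressions whose argument contains no
# further % ( ) | characters can match, i.e. the innermost ones.
_REF = re.compile(r"%\(([^()%|]+)\)")


def expand(text, options):
  # Substitute until a fixed point is reached; each pass resolves the
  # innermost references, mirroring StringInterpolator's evaluation order.
  while True:
    new = _REF.sub(lambda m: options[m.group(1)], text)
    if new == text:
      return new
    text = new
```

For example, `expand("%(%(inner))", {"inner": "Logging.path", "Logging.path": "/var/log"})` first resolves `%(inner)` to `Logging.path`, then resolves the resulting `%(Logging.path)` to `/var/log`.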
class GrrConfigManager(object):
"""Manage configuration system in GRR."""
def __init__(self):
"""Initialize the configuration manager."""
# The context is used to provide different configuration directives in
# different situations. The context can contain any string describing a
# different aspect of the running instance.
self.context = []
self.raw_data = OrderedYamlDict()
self.validated = set()
self.writeback = None
self.writeback_data = OrderedYamlDict()
self.global_override = dict()
self.context_descriptions = {}
self.constants = set()
self.valid_contexts = set()
# This is the type info set describing all configuration
# parameters.
self.type_infos = type_info.TypeDescriptorSet()
# We store the defaults here.
self.defaults = {}
# A cache of validated and interpolated results.
self.FlushCache()
self.initialized = False
def FlushCache(self):
self.cache = {}
def MakeNewConfig(self):
"""Creates a new configuration option based on this one.
Note that it is not normally possible to just instantiate the
config object because it will have an empty set of type
descriptors (i.e. no config options will be defined). Config
options are normally defined at import time, and then they get
added to the CONFIG global in this module.
To obtain a new configuration object, inheriting the regular
config options, this method must be called from the global CONFIG
object, to make a copy.
Returns:
      A new, empty config object which has the same parameter definitions
      as this one.
"""
result = self.__class__()
# We do not need to copy here since these never change.
result.type_infos = self.type_infos
result.defaults = self.defaults
result.context = self.context
result.valid_contexts = self.valid_contexts
return result
def SetWriteBack(self, filename):
"""Sets the config file which will receive any modifications.
    The main config file can remain read only, with all Set() operations
    directed into a secondary location. This secondary location receives
    any updates and its options override those of the main file.
Args:
filename: A url, or filename which will receive updates. The
file is parsed first and merged into the raw data from this
object.
"""
self.writeback = self.LoadSecondaryConfig(filename)
self.MergeData(self.writeback.RawData(), self.writeback_data)
logging.info("Configuration writeback is set to %s", filename)
def Validate(self, sections=None, parameters=None):
"""Validate sections or individual parameters.
The GRR configuration file contains several sections, used by different
components. Many of these components don't care about other sections. This
method allows a component to declare in advance what sections and parameters
it cares about, and have these validated.
Args:
sections: A list of sections to validate. All parameters within the
section are validated.
parameters: A list of specific parameters (in the format section.name) to
validate.
Returns:
dict of {parameter: Exception}, where parameter is a section.name string.
"""
if isinstance(sections, basestring):
sections = [sections]
if sections is None:
sections = []
if parameters is None:
parameters = []
validation_errors = {}
for section in sections:
for descriptor in self.type_infos:
if descriptor.name.startswith(section + "."):
try:
self.Get(descriptor.name)
except (Error, ValueError) as e:
validation_errors[descriptor.name] = e
for parameter in parameters:
for descriptor in self.type_infos:
if parameter == descriptor.name:
try:
self.Get(descriptor.name)
except (Error, ValueError) as e:
validation_errors[descriptor.name] = e
return validation_errors
def AddContext(self, context_string, description=None):
"""Adds a context string to the global configuration.
The context conveys information about the caller of the config system and
allows the configuration to have specialized results for different callers.
Note that the configuration file may specify conflicting options for
different contexts. In this case, later specified contexts (i.e. the later
AddContext() calls) will trump the earlier specified contexts. This allows
specialized contexts to be specified on the command line which override
normal operating options.
Args:
context_string: A string which describes the global program.
description: A description as to when this context applies.
Raises:
InvalidContextError: An undefined context was specified.
"""
if context_string not in self.context:
if context_string not in self.valid_contexts:
raise InvalidContextError(
"Invalid context specified: %s" % context_string)
self.context.append(context_string)
self.context_descriptions[context_string] = description
self.FlushCache()
def RemoveContext(self, context_string):
if context_string in self.context:
self.context.remove(context_string)
self.context_descriptions.pop(context_string)
self.FlushCache()
def ExportState(self):
return pickle.dumps(self)
def SetRaw(self, name, value):
"""Set the raw string without verification or escaping."""
if self.writeback is None:
      logging.warning("Attempting to modify a read only config object.")
if name in self.constants:
raise ConstModificationError(
"Attempting to modify constant value %s" % name)
self.writeback_data[name] = value
self.FlushCache()
def Set(self, name, value):
"""Update the configuration option with a new value.
Note that this forces the value to be set for all contexts. The value is
written to the writeback location if Save() is later called.
Args:
name: The name of the parameter to set.
value: The value to set it to. The value will be validated against the
option's type descriptor.
Raises:
ConstModificationError: When attempting to change a constant option.
"""
# If the configuration system has a write back location we use it,
# otherwise we use the primary configuration object.
if self.writeback is None:
logging.warn("Attempting to modify a read only config object for %s.",
name)
if name in self.constants:
raise ConstModificationError(
"Attempting to modify constant value %s" % name)
writeback_data = self.writeback_data
# Check if the new value conforms with the type_info.
if value is not None:
type_info_obj = self.FindTypeInfo(name)
value = type_info_obj.ToString(value)
if isinstance(value, basestring):
value = self.EscapeString(value)
writeback_data[name] = value
self.FlushCache()
def EscapeString(self, string):
"""Escape special characters when encoding to a string."""
return re.sub(r"([\\%){}])", r"\\\1", string)
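The escaping rule above can be exercised on its own. This stand-alone copy of the substitution (the `escape_config_string` name is ours) shows which characters gain a backslash before being written back:

```python
import re


def escape_config_string(string):
  # Backslash-escape the characters that are special to the interpolation
  # lexer (backslash, %, closing parenthesis, and braces), as
  # GrrConfigManager.EscapeString does.
  return re.sub(r"([\\%){}])", r"\\\1", string)
```

Note that an opening parenthesis is left alone; only the characters that could terminate or start an interpolation sequence are escaped.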
def Write(self):
"""Write out the updated configuration to the fd."""
if self.writeback:
self.writeback.SaveData(self.writeback_data)
else:
raise RuntimeError("Attempting to write a configuration without a "
"writeback location.")
def WriteToFD(self, fd):
"""Write out the updated configuration to the fd."""
if self.writeback:
self.writeback.SaveDataToFD(self.writeback_data, fd)
else:
raise RuntimeError("Attempting to write a configuration without a "
"writeback location.")
def AddOption(self, descriptor, constant=False):
"""Registers an option with the configuration system.
Args:
descriptor: A TypeInfoObject instance describing the option.
constant: If this is set, the option is treated as a constant - it can be
read at any time (before parsing the configuration) and it's an
error to try to override it in a config file.
Raises:
RuntimeError: The descriptor's name must contain a . to denote the section
name, otherwise we raise.
AlreadyInitializedError: If the config has already been read it's too late
to define new options.
"""
if self.initialized:
raise AlreadyInitializedError(
"Config was already initialized when defining %s" % descriptor.name)
descriptor.section = descriptor.name.split(".")[0]
if descriptor.name in self.type_infos:
logging.warning("Config Option %s multiply defined!", descriptor.name)
self.type_infos.Append(descriptor)
if constant:
self.constants.add(descriptor.name)
# Register this option's default value.
self.defaults[descriptor.name] = descriptor.GetDefault()
self.FlushCache()
def DefineContext(self, context_name):
self.valid_contexts.add(context_name)
def FormatHelp(self):
result = "Context: %s\n\n" % ",".join(self.context)
for descriptor in sorted(self.type_infos, key=lambda x: x.name):
result += descriptor.Help() + "\n"
try:
result += "* Value = %s\n" % self.Get(descriptor.name)
except Exception as e: # pylint:disable=broad-except
result += "* Value = %s (Error: %s)\n" % (
self.GetRaw(descriptor.name), e)
return result
def PrintHelp(self):
print self.FormatHelp()
default_descriptors = {
str: type_info.String,
unicode: type_info.String,
int: type_info.Integer,
list: type_info.List,
}
def MergeData(self, merge_data, raw_data=None):
self.FlushCache()
if raw_data is None:
raw_data = self.raw_data
for k, v in merge_data.items():
# A context clause.
if isinstance(v, OrderedYamlDict):
if k not in self.valid_contexts:
raise InvalidContextError("Invalid context specified: %s" % k)
context_data = raw_data.setdefault(k, OrderedYamlDict())
self.MergeData(v, context_data)
else:
# Find the descriptor for this field.
descriptor = self.type_infos.get(k)
if descriptor is None:
msg = ("Missing config definition for %s. This option is likely "
"deprecated or renamed. Check the release notes." % k)
if flags.FLAGS.disallow_missing_config_definitions:
raise MissingConfigDefinitionError(msg)
else:
logging.warning(msg)
if isinstance(v, basestring):
v = v.strip()
if k in self.constants:
raise ConstModificationError(
"Attempting to modify constant value %s" % k)
raw_data[k] = v
@classmethod
def GetParserFromFilename(cls, path):
"""Returns the appropriate parser class from the filename url."""
# Find the configuration parser.
url = urlparse.urlparse(path, scheme="file")
for parser_cls in GRRConfigParser.classes.values():
if parser_cls.name == url.scheme:
return parser_cls
# If url is a filename:
extension = os.path.splitext(path)[1]
if extension in [".yaml", ".yml"]:
return YamlParser
return ConfigFileParser
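The dispatch strategy above (URL scheme first, then file extension) can be sketched stand-alone. The `guess_parser_name` helper and its return labels are illustrative only; in GRR the actual return values are parser classes:

```python
import os
try:
  from urllib.parse import urlparse  # Python 3
except ImportError:
  from urlparse import urlparse      # Python 2


def guess_parser_name(path):
  # Hypothetical helper mirroring GetParserFromFilename: a non-"file"
  # scheme selects a registered parser by name; plain filenames fall
  # back to extension matching.
  url = urlparse(path, scheme="file")
  if url.scheme != "file":
    return url.scheme  # e.g. "reg" for reg:// registry paths
  if os.path.splitext(path)[1] in (".yaml", ".yml"):
    return "yaml"
  return "config"  # arbitrary label for the default ConfigFileParser
```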
def LoadSecondaryConfig(self, url):
"""Loads an additional configuration file.
The configuration system has the concept of a single Primary configuration
file, and multiple secondary files. The primary configuration file is the
main file that is used by the program. Any writebacks will only be made to
the primary configuration file. Secondary files contain additional
configuration data which will be merged into the configuration system.
This method adds an additional configuration file.
Args:
url: The url of the configuration file that will be loaded. For
example file:///etc/grr.conf
or reg://HKEY_LOCAL_MACHINE/Software/GRR.
Returns:
The parser used to parse this configuration source.
"""
parser_cls = self.GetParserFromFilename(url)
parser = parser_cls(filename=url)
logging.info("Loading configuration from %s", url)
self.MergeData(parser.RawData())
return parser
def Initialize(self, filename=None, data=None, fd=None, reset=True,
must_exist=False, parser=ConfigFileParser):
"""Initializes the config manager.
This method is used to add more config options to the manager. The config
can be given as one of the parameters as described in the Args section.
Args:
filename: The name of the configuration file to use.
data: The configuration given directly as a long string of data.
fd: A file descriptor of a configuration file.
reset: If true, the previous configuration will be erased.
must_exist: If true the data source must exist and be a valid
configuration file, or we raise an exception.
parser: The parser class to use (i.e. the format of the file). If not
specified guess from the filename url.
Raises:
RuntimeError: No configuration was passed in any of the parameters.
      ConfigFormatError: Raised when the configuration file is invalid or
        does not exist.
"""
self.FlushCache()
if reset:
# Clear previous configuration.
self.raw_data = OrderedYamlDict()
self.writeback_data = OrderedYamlDict()
self.writeback = None
self.initialized = False
if fd is not None:
self.parser = parser(fd=fd)
self.MergeData(self.parser.RawData())
elif filename is not None:
self.parser = self.LoadSecondaryConfig(filename)
if must_exist and not self.parser.parsed:
raise ConfigFormatError(
"Unable to parse config file %s" % filename)
elif data is not None:
self.parser = parser(data=data)
self.MergeData(self.parser.RawData())
else:
raise RuntimeError("Registry path not provided.")
self.initialized = True
def __getitem__(self, name):
"""Retrieve a configuration value after suitable interpolations."""
if name not in self.type_infos:
raise UnknownOption("Config parameter %s not known." % name)
return self.Get(name)
def GetRaw(self, name, context=None, default=utils.NotAValue):
"""Get the raw value without interpolations."""
if context is None:
context = self.context
    # Getting a raw value is pretty cheap so we won't bother with the cache
    # here.
_, value = self._GetValue(name, context, default=default)
return value
def Get(self, name, default=utils.NotAValue, context=None):
"""Get the value contained by the named parameter.
This method applies interpolation/escaping of the named parameter and
retrieves the interpolated value.
Args:
name: The name of the parameter to retrieve. This should be in the format
of "Section.name"
default: If retrieving the value results in an error, return this default.
      context: A list of context strings to resolve the configuration. This
        is a set of roles the caller is currently executing with. For
        example (client, windows). If not specified we take the context
        from the current thread's TLS stack.
Returns:
The value of the parameter.
Raises:
ConfigFormatError: if verify=True and the config doesn't validate.
RuntimeError: if a value is retrieved before the config is initialized.
ValueError: if a bad context is passed.
"""
if not self.initialized:
if name not in self.constants:
raise RuntimeError("Error while retrieving %s: "
"Configuration hasn't been initialized yet." % name)
if context:
# Make sure it's not just a string and is iterable.
if (isinstance(context, basestring) or
not isinstance(context, collections.Iterable)):
raise ValueError("context should be a list, got %s" % str(context))
calc_context = context
# Use a default global context if context is not provided.
if context is None:
# Only use the cache when no special context is specified.
if default is utils.NotAValue and name in self.cache:
return self.cache[name]
calc_context = self.context
type_info_obj = self.FindTypeInfo(name)
_, return_value = self._GetValue(
name, context=calc_context, default=default)
# If we returned the specified default, we just return it here.
if return_value is default:
return default
try:
return_value = self.InterpolateValue(
return_value, default_section=name.split(".")[0],
type_info_obj=type_info_obj, context=calc_context)
except (lexer.ParseError, ValueError) as e:
# We failed to parse the value, but a default was specified, so we just
# return that.
if default is not utils.NotAValue:
return default
raise ConfigFormatError("While parsing %s: %s" % (name, e))
try:
new_value = type_info_obj.Validate(return_value)
if new_value is not None:
# Update the stored value with the valid data.
return_value = new_value
except ValueError:
if default is not utils.NotAValue:
return default
raise
# Only use the cache when no special context is specified.
if context is None and default is utils.NotAValue:
self.cache[name] = return_value
return return_value
def _ResolveContext(self, context, name, raw_data, path=None):
"""Returns the config options active under this context."""
if path is None:
path = []
for element in context:
if element not in self.valid_contexts:
raise InvalidContextError("Invalid context specified: %s" % element)
if element in raw_data:
context_raw_data = raw_data[element]
value = context_raw_data.get(name)
if value is not None:
if isinstance(value, basestring):
value = value.strip()
yield context_raw_data, value, path + [element]
# Recurse into the new context configuration.
for context_raw_data, value, new_path in self._ResolveContext(
context, name, context_raw_data, path=path + [element]):
yield context_raw_data, value, new_path
def _GetValue(self, name, context, default=utils.NotAValue):
"""Search for the value based on the context."""
container = self.defaults
# The caller provided a default value.
if default is not utils.NotAValue:
value = default
# Take the default from the definition.
elif name in self.defaults:
value = self.defaults[name]
else:
raise UnknownOption("Option %s not defined." % name)
# We resolve the required key with the default raw data, and then iterate
# over all elements in the context to see if there are overriding context
# configurations.
new_value = self.raw_data.get(name)
if new_value is not None:
value = new_value
container = self.raw_data
    # Now check for any contexts. We enumerate all the possible resolutions
    # and sort by their path length. The last one will be the match with the
    # deepest path (i.e. the most specific match).
matches = list(self._ResolveContext(context, name, self.raw_data))
if matches:
# Sort by the length of the path - longest match path will be at the end.
matches.sort(key=lambda x: len(x[2]))
value = matches[-1][1]
container = matches[-1][0]
if (len(matches) >= 2 and
len(matches[-1][2]) == len(matches[-2][2]) and
matches[-1][2] != matches[-2][2] and
matches[-1][1] != matches[-2][1]):
        # This warning means the match was ambiguous. The config tries to
        # find the most specific value: e.g. given a value for X.y in
        # context A,B,C and another in D,B, it chooses the one in A,B,C.
        # This warning fires when there is a value in context A,B and one
        # in A,C: the config cannot tell which to pick, so it picks one
        # and warns.
        logging.warning("Ambiguous configuration for key %s: "
                        "Contexts of equal length: %s (%s) and %s (%s)",
                        name, matches[-1][2], matches[-1][1],
                        matches[-2][2], matches[-2][1])
# If there is a writeback location this overrides any previous
# values.
if self.writeback_data:
new_value = self.writeback_data.get(name)
if new_value is not None:
value = new_value
container = self.writeback_data
# Allow the global override to force an option value.
if name in self.global_override:
return self.global_override, self.global_override[name]
return container, value
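The precedence implemented by `_GetValue` and `_ResolveContext` can be summarised in a small, self-contained sketch (all names here are illustrative, and the real code also tracks the containing dict): defaults are weakest, then top-level raw data, then the deepest matching context section, then writeback data, with global overrides winning outright.

```python
def resolve_option(name, context, defaults, raw_data, writeback, override):
  # Illustrative re-statement of _GetValue's precedence, lowest to highest:
  # defaults < raw data < deepest matching context section < writeback
  # data < global override.
  value = defaults.get(name)
  if name in raw_data:
    value = raw_data[name]

  # Collect context matches with their nesting path; the deepest path
  # (most specific context) wins, as in _ResolveContext.
  matches = []
  def walk(section, path):
    for element in context:
      sub = section.get(element)
      if isinstance(sub, dict):
        if name in sub:
          matches.append((path + [element], sub[name]))
        walk(sub, path + [element])
  walk(raw_data, [])
  if matches:
    matches.sort(key=lambda m: len(m[0]))
    value = matches[-1][1]

  if name in writeback:
    value = writeback[name]
  if name in override:
    value = override[name]
  return value
```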
def FindTypeInfo(self, name):
"""Search for a type_info instance which describes this key."""
result = self.type_infos.get(name)
if result is None:
# Not found, assume string.
result = type_info.String(name=name, default="")
return result
def InterpolateValue(self, value, type_info_obj=type_info.String(),
default_section=None, context=None):
"""Interpolate the value and parse it with the appropriate type."""
# It is only possible to interpolate strings...
if isinstance(value, basestring):
value = StringInterpolator(
value, self, default_section=default_section,
parameter=type_info_obj.name, context=context).Parse()
# Parse the data from the string.
value = type_info_obj.FromString(value)
# ... and lists of strings.
if isinstance(value, list):
value = [self.InterpolateValue(
v, default_section=default_section, context=context) for v in value]
return value
def GetSections(self):
result = set()
for descriptor in self.type_infos:
result.add(descriptor.section)
return result
def MatchBuildContext(self, target_os, target_arch, target_package,
context=None):
"""Return true if target_platforms matches the supplied parameters.
Used by buildanddeploy to determine what clients need to be built.
Args:
target_os: which os we are building for in this run (linux, windows,
darwin)
target_arch: which arch we are building for in this run (i386, amd64)
target_package: which package type we are building (exe, dmg, deb, rpm)
context: config_lib context
Returns:
bool: True if target_platforms spec matches parameters.
"""
for spec in self.Get("ClientBuilder.target_platforms", context=context):
spec_os, arch, package = spec.split("_")
if (spec_os == target_os and arch == target_arch and
package == target_package):
return True
return False
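The `target_platforms` matching above can be shown in isolation. This sketch (the `matches_target` name is ours) demonstrates the "os_arch_package" spec format:

```python
def matches_target(specs, target_os, target_arch, target_package):
  # Each spec is an "os_arch_package" triple such as "linux_amd64_deb";
  # the target matches when any spec agrees on all three fields.
  for spec in specs:
    spec_os, arch, package = spec.split("_")
    if (spec_os, arch, package) == (target_os, target_arch, target_package):
      return True
  return False
```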
# pylint: disable=g-bad-name,redefined-builtin
def DEFINE_bool(self, name, default, help):
"""A helper for defining boolean options."""
self.AddOption(type_info.Bool(name=name, default=default,
description=help))
def DEFINE_float(self, name, default, help):
"""A helper for defining float options."""
self.AddOption(type_info.Float(name=name, default=default,
description=help))
def DEFINE_integer(self, name, default, help):
"""A helper for defining integer options."""
self.AddOption(type_info.Integer(name=name, default=default,
description=help))
def DEFINE_string(self, name, default, help):
"""A helper for defining string options."""
self.AddOption(type_info.String(name=name, default=default or "",
description=help))
def DEFINE_list(self, name, default, help):
"""A helper for defining lists of strings options."""
self.AddOption(type_info.List(name=name, default=default,
description=help,
validator=type_info.String()))
def DEFINE_constant_string(self, name, default, help):
"""A helper for defining constant strings."""
self.AddOption(type_info.String(name=name, default=default or "",
description=help), constant=True)
def DEFINE_context(self, name):
self.DefineContext(name)
# pylint: enable=g-bad-name
# Global for storing the config.
CONFIG = GrrConfigManager()
def ImportConfigManger(pickled_manager):
"""Import a config manager exported with GrrConfigManager.ExportState()."""
global CONFIG
CONFIG = pickle.loads(pickled_manager)
CONFIG.FlushCache()
return CONFIG
# pylint: disable=g-bad-name,redefined-builtin
def DEFINE_bool(name, default, help):
"""A helper for defining boolean options."""
CONFIG.AddOption(type_info.Bool(name=name, default=default,
description=help))
def DEFINE_float(name, default, help):
"""A helper for defining float options."""
CONFIG.AddOption(type_info.Float(name=name, default=default,
description=help))
def DEFINE_integer(name, default, help):
"""A helper for defining integer options."""
CONFIG.AddOption(type_info.Integer(name=name, default=default,
description=help))
def DEFINE_boolean(name, default, help):
"""A helper for defining boolean options."""
CONFIG.AddOption(type_info.Bool(name=name, default=default,
description=help))
def DEFINE_string(name, default, help):
"""A helper for defining string options."""
CONFIG.AddOption(type_info.String(name=name, default=default or "",
description=help))
def DEFINE_bytes(name, default, help):
"""A helper for defining bytes options."""
CONFIG.AddOption(type_info.Bytes(name=name, default=default or "",
description=help))
def DEFINE_choice(name, default, choices, help):
"""A helper for defining choice string options."""
CONFIG.AddOption(type_info.Choice(
name=name, default=default, choices=choices,
description=help))
def DEFINE_multichoice(name, default, choices, help):
"""Choose multiple options from a list."""
CONFIG.AddOption(type_info.MultiChoice(
name=name, default=default, choices=choices,
description=help))
def DEFINE_list(name, default, help):
"""A helper for defining lists of strings options."""
CONFIG.AddOption(type_info.List(name=name, default=default,
description=help,
validator=type_info.String()))
def DEFINE_semantic(semantic_type, name, default=None, description=""):
CONFIG.AddOption(type_info.RDFValueType(
rdfclass=semantic_type, name=name, default=default, help=description))
def DEFINE_option(type_descriptor):
CONFIG.AddOption(type_descriptor)
def DEFINE_constant_string(name, default, help):
"""A helper for defining constant strings."""
CONFIG.AddOption(type_info.String(name=name, default=default or "",
description=help), constant=True)
# pylint: enable=g-bad-name
def LoadConfig(config_obj, config_file, secondary_configs=None,
contexts=None, reset=False, parser=ConfigFileParser):
"""Initialize a ConfigManager with the specified options.
Args:
config_obj: The ConfigManager object to use and update. If None, one will
be created.
config_file: Filename, url or file like object to read the config from.
secondary_configs: A list of secondary config URLs to load.
contexts: Add these contexts to the config object.
reset: Completely wipe previous config before doing the load.
parser: Specify which parser to use.
Returns:
The resulting config object. The one passed in, unless None was specified.
"""
if config_obj is None or reset:
# Create a new config object.
config_obj = CONFIG.MakeNewConfig()
# Initialize the config with a filename or file like object.
if isinstance(config_file, basestring):
config_obj.Initialize(filename=config_file, must_exist=True, parser=parser)
elif hasattr(config_file, "read"):
config_obj.Initialize(fd=config_file, parser=parser)
# Load all secondary files.
if secondary_configs:
for config_url in secondary_configs:
config_obj.LoadSecondaryConfig(config_url)
if contexts:
for context in contexts:
config_obj.AddContext(context)
return config_obj
def ParseConfigCommandLine():
"""Parse all the command line options which control the config system."""
# The user may specify the primary config file on the command line.
if flags.FLAGS.config:
CONFIG.Initialize(filename=flags.FLAGS.config, must_exist=True)
# Allow secondary configuration files to be specified.
if flags.FLAGS.secondary_configs:
for config_url in flags.FLAGS.secondary_configs:
CONFIG.LoadSecondaryConfig(config_url)
# Allow individual options to be specified as global overrides.
for statement in flags.FLAGS.parameter:
if "=" not in statement:
raise RuntimeError(
"statement %s on command line not valid." % statement)
name, value = statement.split("=", 1)
CONFIG.global_override[name] = value
# Load additional contexts from the command line.
for context in flags.FLAGS.context:
if context:
CONFIG.AddContext(context)
if CONFIG["Config.writeback"]:
CONFIG.SetWriteBack(CONFIG["Config.writeback"])
# Does the user want to dump help? We do this after the config system is
# initialized so the user can examine what we think the value of all the
# parameters are.
if flags.FLAGS.config_help:
print "Configuration overview."
CONFIG.PrintHelp()
sys.exit(0)
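The `--parameter` handling above can be exercised in isolation. The sketch below mirrors the override loop; `parse_overrides` is a made-up helper name, not part of the real config system:

```python
def parse_overrides(statements):
    """Parse command-line overrides of the form name=value.

    Mirrors the --parameter loop above: each statement must contain '=',
    and the value may itself contain '=' because we split only once.
    """
    overrides = {}
    for statement in statements:
        if "=" not in statement:
            raise RuntimeError(
                "statement %s on command line not valid." % statement)
        name, value = statement.split("=", 1)
        overrides[name] = value
    return overrides
```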
class PluginLoader(registry.InitHook):
"""Loads additional plugins specified by the user."""
PYTHON_EXTENSIONS = [".py", ".pyo", ".pyc"]
def RunOnce(self):
for path in flags.FLAGS.plugins:
self.LoadPlugin(path)
@classmethod
def LoadPlugin(cls, path):
"""Load (import) the plugin at the path."""
if not os.access(path, os.R_OK):
logging.error("Unable to find %s", path)
return
path = os.path.abspath(path)
directory, filename = os.path.split(path)
module_name, ext = os.path.splitext(filename)
# It's a python file.
if ext in cls.PYTHON_EXTENSIONS:
# Make sure python can find the file.
sys.path.insert(0, directory)
try:
logging.info("Loading user plugin %s", path)
__import__(module_name)
except Exception as e: # pylint: disable=broad-except
logging.error("Error loading user plugin %s: %s", path, e)
finally:
sys.path.pop(0)
elif ext == ".zip":
zfile = zipfile.ZipFile(path)
# Make sure python can find the file.
sys.path.insert(0, path)
try:
logging.info("Loading user plugin archive %s", path)
for name in zfile.namelist():
# Change from filename to python package name.
module_name, ext = os.path.splitext(name)
if ext in cls.PYTHON_EXTENSIONS:
module_name = module_name.replace("/", ".").replace(
"\\", ".")
try:
__import__(module_name.strip("\\/"))
except Exception as e: # pylint: disable=broad-except
logging.error("Error loading user plugin %s: %s",
path, e)
finally:
sys.path.pop(0)
else:
logging.error("Plugin %s has incorrect extension.", path)
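The zip-archive branch converts entry names into dotted module names before importing them. A minimal sketch of just that conversion (the helper name and the trailing-dot strip are illustrative assumptions, not part of the class):

```python
import os

PYTHON_EXTENSIONS = (".py", ".pyo", ".pyc")


def zip_entry_to_module(name):
    """Turn a zip entry like 'pkg/mod.py' into an importable dotted name,
    mirroring PluginLoader's .zip handling; returns None for non-Python
    entries."""
    module_name, ext = os.path.splitext(name)
    if ext not in PYTHON_EXTENSIONS:
        return None
    return module_name.replace("/", ".").replace("\\", ".").strip(".")
```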
| apache-2.0 |
drxaero/calibre | src/calibre/ebooks/oeb/display/webview.py | 13 | 2192 | #!/usr/bin/env python2
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
from __future__ import (unicode_literals, division, absolute_import,
print_function)
__license__ = 'GPL v3'
__copyright__ = '2012, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import re
from calibre import guess_type
class EntityDeclarationProcessor(object): # {{{
def __init__(self, html):
self.declared_entities = {}
for match in re.finditer(r'<!\s*ENTITY\s+([^>]+)>', html):
tokens = match.group(1).split()
if len(tokens) > 1:
self.declared_entities[tokens[0].strip()] = tokens[1].strip().replace('"', '')
self.processed_html = html
for key, val in self.declared_entities.iteritems():
self.processed_html = self.processed_html.replace('&%s;'%key, val)
# }}}
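The entity substitution above can be exercised standalone. This sketch reproduces the same regex and replacement logic as a plain function; the name `expand_declared_entities` is invented for illustration:

```python
import re


def expand_declared_entities(html):
    """Collect <!ENTITY name "value"> declarations and substitute &name;
    occurrences in the markup, as EntityDeclarationProcessor does."""
    declared = {}
    for match in re.finditer(r'<!\s*ENTITY\s+([^>]+)>', html):
        tokens = match.group(1).split()
        if len(tokens) > 1:
            declared[tokens[0].strip()] = tokens[1].strip().replace('"', '')
    for key, val in declared.items():
        html = html.replace('&%s;' % key, val)
    return html
```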
def self_closing_sub(match):
tag = match.group(1)
if tag.lower().strip() == 'br':
return match.group()
return '<%s%s></%s>'%(match.group(1), match.group(2), match.group(1))
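`self_closing_sub` is applied via the `self_closing_pat` regex in `load_html()` below; together they rewrite self-closing tags (except `<br/>`) into open/close pairs so the HTML renderer does not mis-nest following content. A self-contained sketch of the pair:

```python
import re

# Same pattern as self_closing_pat in load_html() below.
SELF_CLOSING_PAT = re.compile(r'<\s*([:A-Za-z0-9-]+)([^>]*)/\s*>')


def expand_self_closing(html):
    def sub(match):
        tag = match.group(1)
        if tag.lower().strip() == 'br':
            return match.group()  # leave the void <br/> tag untouched
        return '<%s%s></%s>' % (tag, match.group(2), tag)
    return SELF_CLOSING_PAT.sub(sub, html)
```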
def load_html(path, view, codec='utf-8', mime_type=None,
pre_load_callback=lambda x:None, path_is_html=False,
force_as_html=False):
from PyQt5.Qt import QUrl, QByteArray
if mime_type is None:
mime_type = guess_type(path)[0]
if not mime_type:
mime_type = 'text/html'
if path_is_html:
html = path
else:
with open(path, 'rb') as f:
html = f.read().decode(codec, 'replace')
html = EntityDeclarationProcessor(html).processed_html
self_closing_pat = re.compile(r'<\s*([:A-Za-z0-9-]+)([^>]*)/\s*>')
html = self_closing_pat.sub(self_closing_sub, html)
loading_url = QUrl.fromLocalFile(path)
pre_load_callback(loading_url)
if force_as_html or re.search(r'<[a-zA-Z0-9-]+:svg', html) is None:
view.setHtml(html, loading_url)
else:
view.setContent(QByteArray(html.encode(codec)), mime_type,
loading_url)
mf = view.page().mainFrame()
elem = mf.findFirstElement('parsererror')
if not elem.isNull():
return False
return True
| gpl-3.0 |
palerdot/calibre | src/calibre/gui2/complete2.py | 2 | 15940 | #!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
from __future__ import (unicode_literals, division, absolute_import,
print_function)
__license__ = 'GPL v3'
__copyright__ = '2012, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import weakref
import sip
from PyQt4.Qt import (QLineEdit, QAbstractListModel, Qt, pyqtSignal, QObject,
QApplication, QListView, QPoint, QModelIndex, QFont, QFontInfo)
from calibre.constants import isosx, get_osx_version
from calibre.utils.icu import sort_key, primary_startswith, primary_contains
from calibre.gui2 import NONE
from calibre.gui2.widgets import EnComboBox, LineEditECM
from calibre.utils.config import tweaks
def containsq(x, prefix):
return primary_contains(prefix, x)
class CompleteModel(QAbstractListModel): # {{{
def __init__(self, parent=None, sort_func=sort_key):
QAbstractListModel.__init__(self, parent)
self.sort_func = sort_func
self.all_items = self.current_items = ()
self.current_prefix = ''
def set_items(self, items):
items = [unicode(x.strip()) for x in items]
items = [x for x in items if x]
items = tuple(sorted(items, key=self.sort_func))
self.all_items = self.current_items = items
self.current_prefix = ''
self.reset()
def set_completion_prefix(self, prefix):
old_prefix = self.current_prefix
self.current_prefix = prefix
if prefix == old_prefix:
return
if not prefix:
self.current_items = self.all_items
self.reset()
return
subset = prefix.startswith(old_prefix)
universe = self.current_items if subset else self.all_items
func = primary_startswith if tweaks['completion_mode'] == 'prefix' else containsq
self.current_items = tuple(x for x in universe if func(x, prefix))
self.reset()
def rowCount(self, *args):
return len(self.current_items)
def data(self, index, role):
if role == Qt.DisplayRole:
try:
return self.current_items[index.row()]
except IndexError:
pass
return NONE
def index_for_prefix(self, prefix):
for i, item in enumerate(self.current_items):
if primary_startswith(item, prefix):
return self.index(i)
# }}}
class Completer(QListView): # {{{
item_selected = pyqtSignal(object)
relayout_needed = pyqtSignal()
def __init__(self, completer_widget, max_visible_items=7, sort_func=sort_key):
QListView.__init__(self)
self.disable_popup = False
self.completer_widget = weakref.ref(completer_widget)
self.setWindowFlags(Qt.Popup)
self.max_visible_items = max_visible_items
self.setEditTriggers(self.NoEditTriggers)
self.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff)
self.setSelectionBehavior(self.SelectRows)
self.setSelectionMode(self.SingleSelection)
self.setAlternatingRowColors(True)
self.setModel(CompleteModel(self, sort_func=sort_func))
self.setMouseTracking(True)
self.entered.connect(self.item_entered)
self.activated.connect(self.item_chosen)
self.pressed.connect(self.item_chosen)
self.installEventFilter(self)
def hide(self):
self.setCurrentIndex(QModelIndex())
QListView.hide(self)
def item_chosen(self, index):
if not self.isVisible():
return
self.hide()
text = self.model().data(index, Qt.DisplayRole)
self.item_selected.emit(unicode(text))
def set_items(self, items):
self.model().set_items(items)
if self.isVisible():
self.relayout_needed.emit()
def set_completion_prefix(self, prefix):
self.model().set_completion_prefix(prefix)
if self.isVisible():
self.relayout_needed.emit()
def item_entered(self, idx):
self.setCurrentIndex(idx)
def next_match(self, previous=False):
c = self.currentIndex()
if c.isValid():
r = c.row()
else:
r = self.model().rowCount() if previous else -1
r = r + (-1 if previous else 1)
index = self.model().index(r % self.model().rowCount())
self.setCurrentIndex(index)
def scroll_to(self, orig):
if orig:
index = self.model().index_for_prefix(orig)
if index is not None and index.isValid():
self.setCurrentIndex(index)
def popup(self, select_first=True):
if self.disable_popup:
return
p = self
m = p.model()
widget = self.completer_widget()
if widget is None:
return
screen = QApplication.desktop().availableGeometry(widget)
h = (p.sizeHintForRow(0) * min(self.max_visible_items, m.rowCount()) +
3) + 3
hsb = p.horizontalScrollBar()
if hsb and hsb.isVisible():
h += hsb.sizeHint().height()
rh = widget.height()
pos = widget.mapToGlobal(QPoint(0, widget.height() - 2))
w = min(widget.width(), screen.width())
if (pos.x() + w) > (screen.x() + screen.width()):
pos.setX(screen.x() + screen.width() - w)
if pos.x() < screen.x():
pos.setX(screen.x())
top = pos.y() - rh - screen.top() + 2
bottom = screen.bottom() - pos.y()
h = max(h, p.minimumHeight())
if h > bottom:
h = min(max(top, bottom), h)
if top > bottom:
pos.setY(pos.y() - h - rh + 2)
p.setGeometry(pos.x(), pos.y(), w, h)
if (tweaks['preselect_first_completion'] and select_first and not
self.currentIndex().isValid() and self.model().rowCount() > 0):
self.setCurrentIndex(self.model().index(0))
if not p.isVisible():
if isosx and get_osx_version() >= (10, 9, 0):
# On mavericks the popup menu seems to use a font smaller than
# the widgets font, see for example:
# https://bugs.launchpad.net/bugs/1243761
fp = QFontInfo(widget.font())
f = QFont()
f.setPixelSize(fp.pixelSize())
self.setFont(f)
p.show()
def eventFilter(self, obj, e):
'Redirect key presses from the popup to the widget'
widget = self.completer_widget()
if widget is None or sip.isdeleted(widget):
return False
etype = e.type()
if obj is not self:
return QObject.eventFilter(self, obj, e)
if etype == e.KeyPress:
key = e.key()
if key == Qt.Key_Escape:
self.hide()
e.accept()
return True
if key == Qt.Key_F4 and e.modifiers() & Qt.AltModifier:
self.hide()
e.accept()
return True
if key in (Qt.Key_Enter, Qt.Key_Return):
# We handle this explicitly because on OS X activated() is
# not emitted on pressing Enter.
idx = self.currentIndex()
if idx.isValid():
self.item_chosen(idx)
self.hide()
e.accept()
return True
if key == Qt.Key_Tab:
idx = self.currentIndex()
if idx.isValid():
self.item_chosen(idx)
self.hide()
elif self.model().rowCount() > 0:
self.next_match()
e.accept()
return True
if key in (Qt.Key_PageUp, Qt.Key_PageDown):
# Let the list view handle these keys
return False
if key in (Qt.Key_Up, Qt.Key_Down):
self.next_match(previous=key == Qt.Key_Up)
e.accept()
return True
# Send to widget
widget.eat_focus_out = False
widget.keyPressEvent(e)
widget.eat_focus_out = True
if not widget.hasFocus():
# Widget lost focus hide the popup
self.hide()
if e.isAccepted():
return True
elif etype == e.MouseButtonPress:
if not self.underMouse():
self.hide()
e.accept()
return True
elif etype in (e.InputMethod, e.ShortcutOverride):
QApplication.sendEvent(widget, e)
return False
# }}}
class LineEdit(QLineEdit, LineEditECM):
'''
A line edit that completes on multiple items separated by a
separator. Use the :meth:`update_items_cache` to set the list of
all possible completions. Separator can be controlled with the
:meth:`set_separator` and :meth:`set_space_before_sep` methods.
A call to self.set_separator(None) allows this widget to be used
for completing single-value (non-list) fields as well.
'''
def __init__(self, parent=None, completer_widget=None, sort_func=sort_key):
QLineEdit.__init__(self, parent)
self.sep = ','
self.space_before_sep = False
self.add_separator = True
self.original_cursor_pos = None
completer_widget = (self if completer_widget is None else
completer_widget)
self.mcompleter = Completer(completer_widget, sort_func=sort_func)
self.mcompleter.item_selected.connect(self.completion_selected,
type=Qt.QueuedConnection)
self.mcompleter.relayout_needed.connect(self.relayout)
self.mcompleter.setFocusProxy(completer_widget)
self.textEdited.connect(self.text_edited)
self.no_popup = False
# Interface {{{
def update_items_cache(self, complete_items):
self.all_items = complete_items
def set_separator(self, sep):
self.sep = sep
def set_space_before_sep(self, space_before):
self.space_before_sep = space_before
def set_add_separator(self, what):
self.add_separator = bool(what)
@dynamic_property
def all_items(self):
def fget(self):
return self.mcompleter.model().all_items
def fset(self, items):
self.mcompleter.model().set_items(items)
return property(fget=fget, fset=fset)
@dynamic_property
def disable_popup(self):
def fget(self):
return self.mcompleter.disable_popup
def fset(self, val):
self.mcompleter.disable_popup = bool(val)
return property(fget=fget, fset=fset)
# }}}
def complete(self, show_all=False, select_first=True):
orig = None
if show_all:
orig = self.mcompleter.model().current_prefix
self.mcompleter.set_completion_prefix('')
if not self.mcompleter.model().current_items:
self.mcompleter.hide()
return
self.mcompleter.popup(select_first=select_first)
self.setFocus(Qt.OtherFocusReason)
self.mcompleter.scroll_to(orig)
def relayout(self):
self.mcompleter.popup()
self.setFocus(Qt.OtherFocusReason)
def text_edited(self, *args):
if self.no_popup:
return
self.update_completions()
select_first = len(self.mcompleter.model().current_prefix) > 0
if not select_first:
self.mcompleter.setCurrentIndex(QModelIndex())
self.complete(select_first=select_first)
def update_completions(self):
' Update the list of completions '
self.original_cursor_pos = cpos = self.cursorPosition()
text = unicode(self.text())
prefix = text[:cpos]
complete_prefix = prefix.lstrip()
if self.sep:
complete_prefix = prefix.split(self.sep)[-1].lstrip()
self.mcompleter.set_completion_prefix(complete_prefix)
def get_completed_text(self, text):
'Get completed text in before and after parts'
if self.sep is None:
return text, ''
else:
cursor_pos = self.original_cursor_pos
if cursor_pos is None:
cursor_pos = self.cursorPosition()
self.original_cursor_pos = None
# Split text
curtext = unicode(self.text())
before_text = curtext[:cursor_pos]
after_text = curtext[cursor_pos:].rstrip()
# Remove the completion prefix from the before text
before_text = self.sep.join(before_text.split(self.sep)[:-1]).rstrip()
if before_text:
# Add the separator to the end of before_text
if self.space_before_sep:
before_text += ' '
before_text += self.sep + ' '
if self.add_separator or after_text:
# Add separator to the end of completed text
if self.space_before_sep:
text = text.rstrip() + ' '
completed_text = text + self.sep + ' '
else:
completed_text = text
return before_text + completed_text, after_text
def completion_selected(self, text):
before_text, after_text = self.get_completed_text(unicode(text))
self.setText(before_text + after_text)
self.setCursorPosition(len(before_text))
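Both `update_completions()` and `get_completed_text()` hinge on splitting at the last separator before the cursor. A standalone sketch of the prefix extraction (the function name is invented for illustration):

```python
def completion_prefix(text, cursor_pos, sep=','):
    """Return the fragment to complete: everything between the last
    separator and the cursor, with leading whitespace stripped, as in
    LineEdit.update_completions()."""
    prefix = text[:cursor_pos]
    if sep:
        return prefix.split(sep)[-1].lstrip()
    return prefix.lstrip()
```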
class EditWithComplete(EnComboBox):
def __init__(self, *args):
EnComboBox.__init__(self, *args)
self.setLineEdit(LineEdit(self, completer_widget=self))
self.setCompleter(None)
self.eat_focus_out = True
self.installEventFilter(self)
# Interface {{{
def showPopup(self):
self.lineEdit().complete(show_all=True)
def update_items_cache(self, complete_items):
self.lineEdit().update_items_cache(complete_items)
def set_separator(self, sep):
self.lineEdit().set_separator(sep)
def set_space_before_sep(self, space_before):
self.lineEdit().set_space_before_sep(space_before)
def set_add_separator(self, what):
self.lineEdit().set_add_separator(what)
def show_initial_value(self, what):
what = unicode(what) if what else u''
self.setText(what)
self.lineEdit().selectAll()
@dynamic_property
def all_items(self):
def fget(self):
return self.lineEdit().all_items
def fset(self, val):
self.lineEdit().all_items = val
return property(fget=fget, fset=fset)
# }}}
def text(self):
return unicode(self.lineEdit().text())
def selectAll(self):
self.lineEdit().selectAll()
def setText(self, val):
le = self.lineEdit()
le.no_popup = True
le.setText(val)
le.no_popup = False
def setCursorPosition(self, *args):
self.lineEdit().setCursorPosition(*args)
@property
def textChanged(self):
return self.lineEdit().textChanged
def clear(self):
self.lineEdit().clear()
EnComboBox.clear(self)
def eventFilter(self, obj, e):
try:
c = self.lineEdit().mcompleter
except AttributeError:
return False
etype = e.type()
if self.eat_focus_out and self is obj and etype == e.FocusOut:
if c.isVisible():
return True
return EnComboBox.eventFilter(self, obj, e)
if __name__ == '__main__':
from PyQt4.Qt import QDialog, QVBoxLayout
app = QApplication([])
d = QDialog()
d.setLayout(QVBoxLayout())
le = EditWithComplete(d)
d.layout().addWidget(le)
items = ['one', 'otwo', 'othree', 'ooone', 'ootwo',
'oothree', 'a1', 'a2',u'Edgas', u'Èdgar', u'Édgaq', u'Edgar', u'Édgar']
le.update_items_cache(items)
le.show_initial_value('')
d.exec_()
| gpl-3.0 |
tsgit/invenio | modules/bibindex/lib/bibindex_engine_tokenizer_unit_tests.py | 5 | 20948 | # -*- coding:utf-8 -*-
##
## This file is part of Invenio.
## Copyright (C) 2010, 2011, 2012, 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""bibindex_engine_tokenizer_tests - unit tests for tokenizers
There should always be at least one test class for each class in b_e_t.
"""
from invenio.testutils import InvenioTestCase
from invenio.testutils import make_test_suite, run_test_suite
from invenio.bibindex_engine_utils import load_tokenizers
_TOKENIZERS = load_tokenizers()
class TestAuthorTokenizerScanning(InvenioTestCase):
"""Test BibIndex name tokenization"""
def setUp(self):
self.tokenizer = _TOKENIZERS["BibIndexAuthorTokenizer"]()
self.scan = self.tokenizer.scan_string_for_phrases
def test_bifnt_scan_single(self):
"""BibIndexAuthorTokenizer - scanning single names like 'Dido'"""
teststr = "Dido"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'], 'lastnames': ['Dido'], 'nonlastnames': [], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_simple_western_forward(self):
"""BibIndexAuthorTokenizer - scanning simple Western-style: first last"""
teststr = "Ringo Starr"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'], 'lastnames': ['Starr'], 'nonlastnames': ['Ringo'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_simple_western_reverse(self):
"""BibIndexAuthorTokenizer - scanning simple Western-style: last, first"""
teststr = "Starr, Ringo"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'], 'lastnames': ['Starr'], 'nonlastnames': ['Ringo'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_multiname_forward(self):
"""BibIndexAuthorTokenizer - scanning multiword: first middle last"""
teststr = "Michael Edward Peskin"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Peskin'], 'nonlastnames': ['Michael', 'Edward'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_multiname_dotcrammed(self):
"""BibIndexAuthorTokenizer - scanning multiword: f.m. last"""
teststr = "M.E. Peskin"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Peskin'], 'nonlastnames': ['M', 'E'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_multiname_dotcrammed_reversed(self):
"""BibIndexAuthorTokenizer - scanning multiword: last, f.m."""
teststr = "Peskin, M.E."
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Peskin'], 'nonlastnames': ['M', 'E'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_multiname_dashcrammed(self):
"""BibIndexAuthorTokenizer - scanning multiword: first-middle last"""
teststr = "Jean-Luc Picard"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Picard'], 'nonlastnames': ['Jean', 'Luc'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_multiname_dashcrammed_reversed(self):
"""BibIndexAuthorTokenizer - scanning multiword: last, first-middle"""
teststr = "Picard, Jean-Luc"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Picard'], 'nonlastnames': ['Jean', 'Luc'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_compound_lastname_dashes(self):
"""BibIndexAuthorTokenizer - scanning multiword: first middle last-last"""
teststr = "Cantina Octavia Jones-Smith"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Jones', 'Smith'], 'nonlastnames': ['Cantina', 'Octavia'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_compound_lastname_dashes_reverse(self):
"""BibIndexAuthorTokenizer - scanning multiword: last-last, first middle"""
teststr = "Jones-Smith, Cantina Octavia"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Jones', 'Smith'], 'nonlastnames': ['Cantina', 'Octavia'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_compound_lastname_reverse(self):
"""BibIndexAuthorTokenizer - scanning compound last: last last, first"""
teststr = "Alvarez Gaume, Joachim"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Alvarez', 'Gaume'], 'nonlastnames': ['Joachim'], 'titles': [], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_titled(self):
"""BibIndexAuthorTokenizer - scanning title-bearing: last, first, title"""
teststr = "Epstein, Brian, The Fifth Beatle"
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Epstein'], 'nonlastnames': ['Brian'], 'titles': ['The Fifth Beatle'], 'raw' : teststr}
self.assertEqual(output, anticipated)
def test_bifnt_scan_wildly_interesting(self):
"""BibIndexAuthorTokenizer - scanning last last last, first first, title, title"""
teststr = "Ibanez y Gracia, Maria Luisa, II., ed."
output = self.scan(teststr)
anticipated = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Ibanez', 'y', 'Gracia'], 'nonlastnames': ['Maria', 'Luisa'], 'titles': ['II.', 'ed.'], 'raw' : teststr}
self.assertEqual(output, anticipated)
class TestAuthorTokenizerTokens(InvenioTestCase):
"""Test BibIndex name variant token generation from scanned and tagged sets"""
def setUp(self):
self.tokenizer = _TOKENIZERS["BibIndexAuthorTokenizer"]()
self.get_index_tokens = self.tokenizer.parse_scanned_for_phrases
def test_bifnt_tokenize_single(self):
"""BibIndexAuthorTokenizer - tokens for single-word name
Ronaldo
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Ronaldo'], 'nonlastnames': [], 'titles': [], 'raw' : 'Ronaldo'}
output = self.get_index_tokens(tagged_data)
anticipated = ['Ronaldo']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_simple_forward(self):
"""BibIndexAuthorTokenizer - tokens for first last
Ringo Starr
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Starr'], 'nonlastnames': ['Ringo'], 'titles': [], 'raw' : 'Ringo Starr'}
output = self.get_index_tokens(tagged_data)
anticipated = ['R Starr', 'Ringo Starr', 'Starr, R', 'Starr, Ringo']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_simple_reverse(self):
"""BibIndexAuthorTokenizer - tokens for last, first
Starr, Ringo
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Starr'], 'nonlastnames': ['Ringo'], 'titles': [], 'raw' : 'Starr, Ringo'}
output = self.get_index_tokens(tagged_data)
anticipated = ['R Starr', 'Ringo Starr', 'Starr, R', 'Starr, Ringo']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_twoname_forward(self):
"""BibIndexAuthorTokenizer - tokens for first middle last
Michael Edward Peskin
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Peskin'], 'nonlastnames': ['Michael', 'Edward'], 'titles': [], 'raw' : 'Michael Edward Peskin'}
output = self.get_index_tokens(tagged_data)
anticipated = ['E Peskin', 'Edward Peskin', 'M E Peskin', 'M Edward Peskin', 'M Peskin',
'Michael E Peskin', 'Michael Edward Peskin', 'Michael Peskin',
'Peskin, E', 'Peskin, Edward', 'Peskin, M',
'Peskin, M E', 'Peskin, M Edward', 'Peskin, Michael',
'Peskin, Michael E', 'Peskin, Michael Edward']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_compound_last(self):
"""BibIndexAuthorTokenizer - tokens for last last, first
Alvarez Gaume, Joachim
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Alvarez', 'Gaume'], 'nonlastnames': ['Joachim'], 'titles': [], 'raw' : 'Alvarez Gaume, Joachim'}
output = self.get_index_tokens(tagged_data)
anticipated = ['Alvarez Gaume, J', 'Alvarez Gaume, Joachim', 'Alvarez, J', 'Alvarez, Joachim', 'Gaume, J',
'Gaume, Joachim', 'J Alvarez', 'J Alvarez Gaume', 'J Gaume', 'Joachim Alvarez',
'Joachim Alvarez Gaume', 'Joachim Gaume']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_titled(self):
"""BibIndexAuthorTokenizer - tokens for last, first, title
Epstein, Brian, The Fifth Beatle
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Epstein'], 'nonlastnames': ['Brian'], 'titles': ['The Fifth Beatle'], 'raw' : 'Epstein, Brian, The Fifth Beatle'}
output = self.get_index_tokens(tagged_data)
anticipated = ['B Epstein', 'B Epstein, The Fifth Beatle', 'Brian Epstein',
'Brian Epstein, The Fifth Beatle', 'Epstein, B', 'Epstein, B, The Fifth Beatle',
'Epstein, Brian', 'Epstein, Brian, The Fifth Beatle']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_wildly_interesting(self):
"""BibIndexAuthorTokenizer - tokens for last last last, first first, title, title
Ibanez y Gracia, Maria Luisa, II, (ed.)
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Ibanez', 'y', 'Gracia'], 'nonlastnames': ['Maria', 'Luisa'], 'titles': ['II', '(ed.)'], 'raw' : 'Ibanez y Gracia, Maria Luisa, II, (ed.)'}
output = self.get_index_tokens(tagged_data)
anticipated = ['Gracia, L', 'Gracia, Luisa', 'Gracia, M', 'Gracia, M L', 'Gracia, M Luisa',
'Gracia, Maria', 'Gracia, Maria L', 'Gracia, Maria Luisa',
'Ibanez y Gracia, L', 'Ibanez y Gracia, L, II',
'Ibanez y Gracia, Luisa', 'Ibanez y Gracia, Luisa, II',
'Ibanez y Gracia, M', 'Ibanez y Gracia, M L', 'Ibanez y Gracia, M L, II',
'Ibanez y Gracia, M Luisa', 'Ibanez y Gracia, M Luisa, II',
'Ibanez y Gracia, M, II',
'Ibanez y Gracia, Maria',
'Ibanez y Gracia, Maria L', 'Ibanez y Gracia, Maria L, II',
'Ibanez y Gracia, Maria Luisa', 'Ibanez y Gracia, Maria Luisa, II',
'Ibanez y Gracia, Maria, II',
'Ibanez, L', 'Ibanez, Luisa',
'Ibanez, M', 'Ibanez, M L', 'Ibanez, M Luisa', 'Ibanez, Maria',
'Ibanez, Maria L', 'Ibanez, Maria Luisa', 'L Gracia', 'L Ibanez',
'L Ibanez y Gracia', 'L Ibanez y Gracia, II', 'Luisa Gracia', 'Luisa Ibanez',
'Luisa Ibanez y Gracia', 'Luisa Ibanez y Gracia, II', 'M Gracia',
'M Ibanez', 'M Ibanez y Gracia', 'M Ibanez y Gracia, II', 'M L Gracia',
'M L Ibanez', 'M L Ibanez y Gracia', 'M L Ibanez y Gracia, II',
'M Luisa Gracia', 'M Luisa Ibanez', 'M Luisa Ibanez y Gracia', 'M Luisa Ibanez y Gracia, II',
'Maria Gracia',
'Maria Ibanez', 'Maria Ibanez y Gracia', 'Maria Ibanez y Gracia, II',
'Maria L Gracia', 'Maria L Ibanez', 'Maria L Ibanez y Gracia', 'Maria L Ibanez y Gracia, II',
'Maria Luisa Gracia', 'Maria Luisa Ibanez', 'Maria Luisa Ibanez y Gracia',
'Maria Luisa Ibanez y Gracia, II']
self.assertEqual(output, anticipated)
def test_bifnt_tokenize_multimiddle_forward(self):
"""BibIndexAuthorTokenizer - tokens for first middle middle last
W K H Panofsky
"""
tagged_data = {'TOKEN_TAG_LIST': ['lastnames', 'nonlastnames', 'titles', 'raw'],
'lastnames': ['Panofsky'], 'nonlastnames': ['W', 'K', 'H'], 'titles': [], 'raw' : 'W K H Panofsky'}
output = self.get_index_tokens(tagged_data)
anticipated = ['H Panofsky', 'K H Panofsky', 'K Panofsky', 'Panofsky, H', 'Panofsky, K',
'Panofsky, K H', 'Panofsky, W', 'Panofsky, W H', 'Panofsky, W K',
'Panofsky, W K H', 'W H Panofsky',
'W K H Panofsky', 'W K Panofsky', 'W Panofsky']
self.assertEqual(output, anticipated)
def test_tokenize(self):
"""BibIndexAuthorTokenizer - check tokenize_for_phrases()
Ringo Starr
"""
teststr = "Ringo Starr"
output = self.tokenizer.tokenize_for_phrases(teststr)
anticipated = ['R Starr', 'Ringo Starr', 'Starr, R', 'Starr, Ringo']
self.assertEqual(output, anticipated)
class TestExactAuthorTokenizer(InvenioTestCase):
"""Test exact author name tokenizer."""
def setUp(self):
"""setup"""
self.tokenizer = _TOKENIZERS["BibIndexExactAuthorTokenizer"]()
self.tokenize = self.tokenizer.tokenize_for_phrases
def test_exact_author_name_tokenizer_bare(self):
"""BibIndexExactNameTokenizer - bare name"""
self.assertEqual(self.tokenize('John Doe'),
['John Doe'])
def test_exact_author_name_tokenizer_dots(self):
"""BibIndexExactNameTokenizer - name with dots"""
self.assertEqual(self.tokenize('J. Doe'),
['J Doe'])
self.assertEqual(self.tokenize('J.R. Doe'),
['J R Doe'])
self.assertEqual(self.tokenize('J. R. Doe'),
['J R Doe'])
def test_exact_author_name_tokenizer_trailing_dots(self):
"""BibIndexExactNameTokenizer - name with trailing dots"""
self.assertEqual(self.tokenize('Doe, J'),
['Doe, J'])
self.assertEqual(self.tokenize('Doe, J.'),
['Doe, J'])
def test_exact_author_name_tokenizer_hyphens(self):
"""BibIndexExactNameTokenizer - name with hyphens"""
self.assertEqual(self.tokenize('Doe, Jean-Pierre'),
['Doe, Jean Pierre'])
class TestCJKTokenizer(InvenioTestCase):
"""Tests for CJK Tokenizer which splits CJK words into characters and treats
every single character as a word"""
def setUp(self):
self.tokenizer = _TOKENIZERS["BibIndexCJKTokenizer"]()
def test_tokenize_for_words_phrase_galaxy(self):
"""tokenizing phrase: galaxy s4据信"""
phrase = "galaxy s4据信"
result = self.tokenizer.tokenize_for_words(phrase)
self.assertEqual(sorted(['galaxy','s4','据','信']), sorted(result))
def test_tokenize_for_words_phrase_with_special_punctuation(self):
"""tokenizing phrase: 马英九:台湾民"""
phrase = u"马英九:台湾民"
result = self.tokenizer.tokenize_for_words(phrase)
self.assertEqual(sorted(['马','英','九','台','湾','民']), sorted(result))
def test_tokenize_for_words_phrase_with_special_punctuation_two(self):
"""tokenizing phrase: 色的“刀子嘴”"""
phrase = u"色的“刀子嘴”"
result = self.tokenizer.tokenize_for_words(phrase)
self.assertEqual(sorted(['色','的','刀','子','嘴']), sorted(result))
def test_tokenize_for_words_simple_phrase(self):
"""tokenizing phrase: 春眠暁覚"""
self.assertEqual(sorted(self.tokenizer.tokenize_for_words(u'春眠暁覚')), sorted(['春', '眠', '暁', '覚']))
def test_tokenize_for_words_mixed_phrase(self):
"""tokenizing phrase: 春眠暁ABC覚"""
self.assertEqual(sorted(self.tokenizer.tokenize_for_words(u'春眠暁ABC覚')), sorted(['春', '眠', '暁', 'abc', '覚']))
def test_tokenize_for_words_phrase_with_comma(self):
"""tokenizing phrase: 春眠暁, 暁"""
phrase = u"春眠暁, 暁"
self.assertEqual(sorted(self.tokenizer.tokenize_for_words(phrase)), sorted(['春','眠','暁']))
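The behaviour these tests pin down can be sketched as a small standalone function. This is a hypothetical re-implementation, not the actual BibIndexCJKTokenizer: the character range is an assumption covering only the CJK Unified Ideographs block, and the duplicate-collapsing step is inferred from the comma test above.

```python
import re

# Assumed character ranges: CJK Unified Ideographs only.
CJK_RE = re.compile(u'([\u4e00-\u9fff])')
LATIN_RE = re.compile(u'[a-zA-Z0-9]+')

def tokenize_for_words(phrase):
    """Split CJK characters into single-character tokens; keep and
    lowercase Latin/digit runs; drop punctuation; collapse duplicates."""
    tokens = []
    for chunk in CJK_RE.split(phrase):
        if CJK_RE.match(chunk):
            tokens.append(chunk)          # one token per CJK character
        else:
            tokens.extend(t.lower() for t in LATIN_RE.findall(chunk))
    seen = set()
    return [t for t in tokens if not (t in seen or seen.add(t))]

print(tokenize_for_words(u"galaxy s4据信"))  # ['galaxy', 's4', '据', '信']
```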
class TestJournalPageTokenizer(InvenioTestCase):
"""Tests for JournalPage Tokenizer"""
def setUp(self):
self.tokenizer = _TOKENIZERS["BibIndexJournalPageTokenizer"]()
def test_tokenize_for_single_page(self):
"""tokenizing for single page"""
test_pairs = [
# simple number
('1', ['1']),
('23', ['23']),
('12312', ['12312']),
# letter + number
('C85', ['C85']),
('L45', ['L45']),
# roman numbers
('VII', ['VII']),
('X', ['X']),
# prefix + simple number
('p.321', ['p.321', '321']),
('pp.321', ['pp.321', '321']),
('cpp.321', ['cpp.321', '321']),
('pag.321', ['pag.321', '321']),
# prefix + non-simple page
('p.A45', ['p.A45', 'A45']),
('pp.C83', ['pp.C83', 'C83']),
('p.V', ['p.V', 'V']),
('pp.IV', ['pp.IV', 'IV']),
]
for phrase, expected_tokens in test_pairs:
result = self.tokenizer.tokenize(phrase)
self.assertEqual(sorted(expected_tokens), sorted(result))
def test_tokenize_for_page_range(self):
"""tokenizing for page range"""
test_pairs = [
# simple number
('1-12', ['1', '1-12']),
('22-22', ['22', '22-22']),
('95-12312', ['95', '95-12312']),
# letter + number
('C85-D55', ['C85', 'C85-D55']),
('L45-L88', ['L45', 'L45-L88']),
# roman numbers
('I-VII', ['I', 'I-VII']),
('VIII-X', ['VIII', 'VIII-X']),
# mixed range
('III-12', ['III', 'III-12']),
('343-A10', ['343', '343-A10']),
('IX-B5', ['IX', 'IX-B5']),
# prefix + simple number
('p.56-123', ['p.56-123', '56-123', '56']),
('pp.56-123', ['pp.56-123', '56-123', '56']),
('cpp.56-123', ['cpp.56-123', '56-123', '56']),
('pag.56-123', ['pag.56-123', '56-123', '56']),
# prefix + non-simple page
('pp.VII-123', ['pp.VII-123', 'VII-123', 'VII']),
]
for phrase, expected_tokens in test_pairs:
result = self.tokenizer.tokenize(phrase)
self.assertEqual(sorted(expected_tokens), sorted(result))
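The page-token rules exercised by both test methods can be summarised in a short sketch (an assumed re-implementation built only from the test pairs above, not the real BibIndexJournalPageTokenizer): emit the raw token, the token with any `p.`/`pp.`/`cpp.`/`pag.` prefix stripped, and the first page of a range.

```python
import re

# Prefix list is an assumption taken directly from the test pairs.
PREFIX_RE = re.compile(r'^(?:p|pp|cpp|pag)\.(.+)$')

def tokenize(phrase):
    tokens = [phrase]                 # always keep the raw token
    pages = phrase
    match = PREFIX_RE.match(phrase)
    if match:
        pages = match.group(1)        # prefix-stripped page(s)
        tokens.append(pages)
    if '-' in pages:
        tokens.append(pages.split('-')[0])  # first page of a range
    return tokens

print(sorted(tokenize('pp.VII-123')))  # ['VII', 'VII-123', 'pp.VII-123']
```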
TEST_SUITE = make_test_suite(TestAuthorTokenizerScanning,
TestAuthorTokenizerTokens,
TestExactAuthorTokenizer,
TestCJKTokenizer,
TestJournalPageTokenizer)
if __name__ == '__main__':
run_test_suite(TEST_SUITE)
| gpl-2.0 |
CyanogenMod/android_external_chromium_org | tools/telemetry/telemetry/value/scalar.py | 36 | 2166 | # Copyright 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import numbers
from telemetry import value as value_module
from telemetry.value import list_of_scalar_values
class ScalarValue(value_module.Value):
def __init__(self, page, name, units, value, important=True):
"""A single value (float or integer) result from a test.
A test that counts the number of DOM elements in a page might produce a
scalar value:
ScalarValue(page, 'num_dom_elements', 'count', num_elements)
"""
super(ScalarValue, self).__init__(page, name, units, important)
assert isinstance(value, numbers.Number)
self.value = value
def __repr__(self):
if self.page:
page_name = self.page.url
else:
page_name = None
return 'ScalarValue(%s, %s, %s, %s, important=%s)' % (
page_name,
self.name, self.units,
self.value,
self.important)
def GetBuildbotDataType(self, output_context):
if self._IsImportantGivenOutputIntent(output_context):
return 'default'
return 'unimportant'
def GetBuildbotValue(self):
# Buildbot's print_perf_results method likes to get lists for all values,
# even when they are scalar, so list-ize the return value.
return [self.value]
def GetRepresentativeNumber(self):
return self.value
def GetRepresentativeString(self):
return str(self.value)
@classmethod
def MergeLikeValuesFromSamePage(cls, values):
assert len(values) > 0
v0 = values[0]
return list_of_scalar_values.ListOfScalarValues(
v0.page, v0.name, v0.units,
[v.value for v in values],
important=v0.important)
@classmethod
def MergeLikeValuesFromDifferentPages(cls, values,
group_by_name_suffix=False):
assert len(values) > 0
v0 = values[0]
if not group_by_name_suffix:
name = v0.name
else:
name = v0.name_suffix
return list_of_scalar_values.ListOfScalarValues(
None, name, v0.units,
[v.value for v in values],
important=v0.important)
| bsd-3-clause |
Mj258/weiboapi | srapyDemo/envs/Lib/site-packages/win32com/test/testvbscript_regexp.py | 40 | 1108 | import unittest
from win32com.client.gencache import EnsureDispatch
from win32com.client.dynamic import DumbDispatch
import win32com.test.util
class RegexTest(win32com.test.util.TestCase):
def _CheckMatches(self, match, expected):
found = []
for imatch in match:
found.append(imatch.FirstIndex)
self.assertEqual(list(found), list(expected))
def _TestVBScriptRegex(self, re):
StringToSearch = "Python python pYthon Python"
re.Pattern = "Python"
re.Global = True
re.IgnoreCase = True
match = re.Execute(StringToSearch)
expected = 0, 7, 14, 21
self._CheckMatches(match, expected)
re.IgnoreCase = False
match = re.Execute(StringToSearch)
expected = 0, 21
self._CheckMatches(match, expected)
def testDynamic(self):
re = DumbDispatch("VBScript.Regexp")
self._TestVBScriptRegex(re)
def testGenerated(self):
re = EnsureDispatch("VBScript.Regexp")
self._TestVBScriptRegex(re)
if __name__=='__main__':
unittest.main()
| mit |
mikewiebe-ansible/ansible | lib/ansible/modules/network/fortios/fortios_router_ripng.py | 13 | 23978 | #!/usr/bin/python
from __future__ import (absolute_import, division, print_function)
# Copyright 2019 Fortinet, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
__metaclass__ = type
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'}
DOCUMENTATION = '''
---
module: fortios_router_ripng
short_description: Configure RIPng in Fortinet's FortiOS and FortiGate.
description:
- This module is able to configure a FortiGate or FortiOS (FOS) device by allowing the
user to set and modify router feature and ripng category.
Examples include all parameters and values need to be adjusted to datasources before usage.
Tested with FOS v6.0.5
version_added: "2.9"
author:
- Miguel Angel Munoz (@mamunozgonzalez)
- Nicolas Thomas (@thomnico)
notes:
- Requires fortiosapi library developed by Fortinet
- Run as a local_action in your playbook
requirements:
- fortiosapi>=0.9.8
options:
host:
description:
- FortiOS or FortiGate IP address.
type: str
required: false
username:
description:
- FortiOS or FortiGate username.
type: str
required: false
password:
description:
- FortiOS or FortiGate password.
type: str
default: ""
vdom:
description:
- Virtual domain, among those defined previously. A vdom is a
virtual instance of the FortiGate that can be configured and
used as a different unit.
type: str
default: root
https:
description:
- Indicates if the requests towards FortiGate must use HTTPS protocol.
type: bool
default: true
ssl_verify:
description:
- Ensures FortiGate certificate must be verified by a proper CA.
type: bool
default: true
router_ripng:
description:
- Configure RIPng.
default: null
type: dict
suboptions:
aggregate_address:
description:
- Aggregate address.
type: list
suboptions:
id:
description:
- Aggregate address entry ID.
required: true
type: int
prefix6:
description:
- Aggregate address prefix.
type: str
default_information_originate:
description:
- Enable/disable generation of default route.
type: str
choices:
- enable
- disable
default_metric:
description:
- Metric that the FortiGate unit advertises to adjacent routers.
type: int
distance:
description:
- Administrative distance
type: list
suboptions:
access_list6:
description:
- Access list for route destination. Source router.access-list6.name.
type: str
distance:
description:
- Distance (1 - 255).
type: int
id:
description:
- Distance ID.
required: true
type: int
prefix6:
description:
- Distance prefix6.
type: str
distribute_list:
description:
- Use this to filter incoming or outgoing updates using an access list or a prefix list.
type: list
suboptions:
direction:
description:
- Distribute list direction.
type: str
choices:
- in
- out
id:
description:
- Distribute list ID.
required: true
type: int
interface:
description:
- Distribute list interface name. Source system.interface.name.
type: str
listname:
description:
- Distribute access/prefix list name. Source router.access-list6.name router.prefix-list6.name.
type: str
status:
description:
- Use this to activate or deactivate the distribute list.
type: str
choices:
- enable
- disable
garbage_timer:
description:
- Time in seconds that must elapse after the timeout interval for a route expires.
type: int
interface:
description:
- RIPng interface configuration.
type: list
suboptions:
flags:
description:
- Configuration flags of the interface.
type: int
name:
description:
- Interface name. Source system.interface.name.
required: true
type: str
split_horizon:
description:
- Configure RIP to use either regular or poisoned split horizon on this interface.
type: str
choices:
- poisoned
- regular
split_horizon_status:
description:
- Enable/disable split horizon.
type: str
choices:
- enable
- disable
max_out_metric:
description:
- Maximum metric allowed to output (0 means 'not set').
type: int
neighbor:
description:
- List of neighbors.
type: list
suboptions:
id:
description:
- Neighbor entry ID.
required: true
type: int
interface:
description:
- Interface name. Source system.interface.name.
type: str
ip6:
description:
- IPv6 link-local address.
type: str
network:
description:
- List of connected networks.
type: list
suboptions:
id:
description:
- Network entry ID.
required: true
type: int
prefix:
description:
- Network IPv6 link-local prefix.
type: str
offset_list:
description:
- Adds the specified offset to the metric (hop count) of a route.
type: list
suboptions:
access_list6:
description:
- IPv6 access list name. Source router.access-list6.name.
type: str
direction:
description:
- Offset list direction.
type: str
choices:
- in
- out
id:
description:
- Offset-list ID.
required: true
type: int
interface:
description:
- Interface name. Source system.interface.name.
type: str
offset:
description:
- Offset range
type: int
status:
description:
- Indicates if the offset is active or not
type: str
choices:
- enable
- disable
passive_interface:
description:
- Passive interface configuration.
type: list
suboptions:
name:
description:
- Passive interface name. Source system.interface.name.
required: true
type: str
redistribute:
description:
- Redistribute configuration.
type: list
suboptions:
metric:
description:
- Redistribute metric setting.
type: int
name:
description:
- Redistribute name.
required: true
type: str
routemap:
description:
- Route map name. Source router.route-map.name.
type: str
status:
description:
- Indicates if the redistribute is active or not
type: str
choices:
- enable
- disable
timeout_timer:
description:
- Time interval in seconds after which a route is declared unreachable.
type: int
update_timer:
description:
- The time interval in seconds between RIP updates.
type: int
'''
EXAMPLES = '''
- hosts: localhost
vars:
host: "192.168.122.40"
username: "admin"
password: ""
vdom: "root"
ssl_verify: "False"
tasks:
- name: Configure RIPng.
fortios_router_ripng:
host: "{{ host }}"
username: "{{ username }}"
password: "{{ password }}"
vdom: "{{ vdom }}"
https: "False"
router_ripng:
aggregate_address:
-
id: "4"
prefix6: "<your_own_value>"
default_information_originate: "enable"
default_metric: "7"
distance:
-
access_list6: "<your_own_value> (source router.access-list6.name)"
distance: "10"
id: "11"
prefix6: "<your_own_value>"
distribute_list:
-
direction: "in"
id: "15"
interface: "<your_own_value> (source system.interface.name)"
listname: "<your_own_value> (source router.access-list6.name router.prefix-list6.name)"
status: "enable"
garbage_timer: "19"
interface:
-
flags: "21"
name: "default_name_22 (source system.interface.name)"
split_horizon: "poisoned"
split_horizon_status: "enable"
max_out_metric: "25"
neighbor:
-
id: "27"
interface: "<your_own_value> (source system.interface.name)"
ip6: "<your_own_value>"
network:
-
id: "31"
prefix: "<your_own_value>"
offset_list:
-
access_list6: "<your_own_value> (source router.access-list6.name)"
direction: "in"
id: "36"
interface: "<your_own_value> (source system.interface.name)"
offset: "38"
status: "enable"
passive_interface:
-
name: "default_name_41 (source system.interface.name)"
redistribute:
-
metric: "43"
name: "default_name_44"
routemap: "<your_own_value> (source router.route-map.name)"
status: "enable"
timeout_timer: "47"
update_timer: "48"
'''
RETURN = '''
build:
description: Build number of the fortigate image
returned: always
type: str
sample: '1547'
http_method:
description: Last method used to provision the content into FortiGate
returned: always
type: str
sample: 'PUT'
http_status:
description: Last result given by FortiGate on last operation applied
returned: always
type: str
sample: "200"
mkey:
description: Master key (id) used in the last call to FortiGate
returned: success
type: str
sample: "id"
name:
description: Name of the table used to fulfill the request
returned: always
type: str
sample: "urlfilter"
path:
description: Path of the table used to fulfill the request
returned: always
type: str
sample: "webfilter"
revision:
description: Internal revision number
returned: always
type: str
sample: "17.0.2.10658"
serial:
description: Serial number of the unit
returned: always
type: str
sample: "FGVMEVYYQT3AB5352"
status:
description: Indication of the operation's result
returned: always
type: str
sample: "success"
vdom:
description: Virtual domain used
returned: always
type: str
sample: "root"
version:
description: Version of the FortiGate
returned: always
type: str
sample: "v5.6.3"
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.connection import Connection
from ansible.module_utils.network.fortios.fortios import FortiOSHandler
from ansible.module_utils.network.fortimanager.common import FAIL_SOCKET_MSG
def login(data, fos):
host = data['host']
username = data['username']
password = data['password']
ssl_verify = data['ssl_verify']
fos.debug('on')
if 'https' in data and not data['https']:
fos.https('off')
else:
fos.https('on')
fos.login(host, username, password, verify=ssl_verify)
def filter_router_ripng_data(json):
option_list = ['aggregate_address', 'default_information_originate', 'default_metric',
'distance', 'distribute_list', 'garbage_timer',
'interface', 'max_out_metric', 'neighbor',
'network', 'offset_list', 'passive_interface',
'redistribute', 'timeout_timer', 'update_timer']
dictionary = {}
for attribute in option_list:
if attribute in json and json[attribute] is not None:
dictionary[attribute] = json[attribute]
return dictionary
def underscore_to_hyphen(data):
if isinstance(data, list):
# write converted elements back; rebinding the loop variable alone would lose the converted dicts
for i, elem in enumerate(data):
data[i] = underscore_to_hyphen(elem)
elif isinstance(data, dict):
new_data = {}
for k, v in data.items():
new_data[k.replace('_', '-')] = underscore_to_hyphen(v)
data = new_data
return data
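For quick standalone checking of the key-renaming behaviour, a purely functional variant (illustrative only, not part of the Ansible module) can be written that returns new containers instead of mutating in place:

```python
def underscore_to_hyphen_pure(data):
    """Recursively replace '_' with '-' in dict keys, building new
    lists and dicts rather than mutating the input."""
    if isinstance(data, list):
        return [underscore_to_hyphen_pure(elem) for elem in data]
    if isinstance(data, dict):
        return {k.replace('_', '-'): underscore_to_hyphen_pure(v)
                for k, v in data.items()}
    return data

sample = {'default_metric': 7, 'offset_list': [{'access_list6': 'acl6'}]}
print(underscore_to_hyphen_pure(sample))
# {'default-metric': 7, 'offset-list': [{'access-list6': 'acl6'}]}
```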
def router_ripng(data, fos):
vdom = data['vdom']
router_ripng_data = data['router_ripng']
filtered_data = underscore_to_hyphen(filter_router_ripng_data(router_ripng_data))
return fos.set('router',
'ripng',
data=filtered_data,
vdom=vdom)
def is_successful_status(status):
return status['status'] == "success" or \
status['http_method'] == "DELETE" and status['http_status'] == 404
def fortios_router(data, fos):
if data['router_ripng']:
resp = router_ripng(data, fos)
return not is_successful_status(resp), \
resp['status'] == "success", \
resp
def main():
fields = {
"host": {"required": False, "type": "str"},
"username": {"required": False, "type": "str"},
"password": {"required": False, "type": "str", "default": "", "no_log": True},
"vdom": {"required": False, "type": "str", "default": "root"},
"https": {"required": False, "type": "bool", "default": True},
"ssl_verify": {"required": False, "type": "bool", "default": True},
"router_ripng": {
"required": False, "type": "dict", "default": None,
"options": {
"aggregate_address": {"required": False, "type": "list",
"options": {
"id": {"required": True, "type": "int"},
"prefix6": {"required": False, "type": "str"}
}},
"default_information_originate": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"default_metric": {"required": False, "type": "int"},
"distance": {"required": False, "type": "list",
"options": {
"access_list6": {"required": False, "type": "str"},
"distance": {"required": False, "type": "int"},
"id": {"required": True, "type": "int"},
"prefix6": {"required": False, "type": "str"}
}},
"distribute_list": {"required": False, "type": "list",
"options": {
"direction": {"required": False, "type": "str",
"choices": ["in", "out"]},
"id": {"required": True, "type": "int"},
"interface": {"required": False, "type": "str"},
"listname": {"required": False, "type": "str"},
"status": {"required": False, "type": "str",
"choices": ["enable", "disable"]}
}},
"garbage_timer": {"required": False, "type": "int"},
"interface": {"required": False, "type": "list",
"options": {
"flags": {"required": False, "type": "int"},
"name": {"required": True, "type": "str"},
"split_horizon": {"required": False, "type": "str",
"choices": ["poisoned", "regular"]},
"split_horizon_status": {"required": False, "type": "str",
"choices": ["enable", "disable"]}
}},
"max_out_metric": {"required": False, "type": "int"},
"neighbor": {"required": False, "type": "list",
"options": {
"id": {"required": True, "type": "int"},
"interface": {"required": False, "type": "str"},
"ip6": {"required": False, "type": "str"}
}},
"network": {"required": False, "type": "list",
"options": {
"id": {"required": True, "type": "int"},
"prefix": {"required": False, "type": "str"}
}},
"offset_list": {"required": False, "type": "list",
"options": {
"access_list6": {"required": False, "type": "str"},
"direction": {"required": False, "type": "str",
"choices": ["in", "out"]},
"id": {"required": True, "type": "int"},
"interface": {"required": False, "type": "str"},
"offset": {"required": False, "type": "int"},
"status": {"required": False, "type": "str",
"choices": ["enable", "disable"]}
}},
"passive_interface": {"required": False, "type": "list",
"options": {
"name": {"required": True, "type": "str"}
}},
"redistribute": {"required": False, "type": "list",
"options": {
"metric": {"required": False, "type": "int"},
"name": {"required": True, "type": "str"},
"routemap": {"required": False, "type": "str"},
"status": {"required": False, "type": "str",
"choices": ["enable", "disable"]}
}},
"timeout_timer": {"required": False, "type": "int"},
"update_timer": {"required": False, "type": "int"}
}
}
}
module = AnsibleModule(argument_spec=fields,
supports_check_mode=False)
# legacy_mode refers to using fortiosapi instead of HTTPAPI
legacy_mode = 'host' in module.params and module.params['host'] is not None and \
'username' in module.params and module.params['username'] is not None and \
'password' in module.params and module.params['password'] is not None
if not legacy_mode:
if module._socket_path:
connection = Connection(module._socket_path)
fos = FortiOSHandler(connection)
is_error, has_changed, result = fortios_router(module.params, fos)
else:
module.fail_json(**FAIL_SOCKET_MSG)
else:
try:
from fortiosapi import FortiOSAPI
except ImportError:
module.fail_json(msg="fortiosapi module is required")
fos = FortiOSAPI()
login(module.params, fos)
is_error, has_changed, result = fortios_router(module.params, fos)
fos.logout()
if not is_error:
module.exit_json(changed=has_changed, meta=result)
else:
module.fail_json(msg="Error in repo", meta=result)
if __name__ == '__main__':
main()
| gpl-3.0 |
GoUbiq/pyexchange | pyexchange/base/soap.py | 1 | 5464 | """
(c) 2013 LinkedIn Corp. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
"""
import logging
from lxml import etree
from lxml.builder import ElementMaker
from datetime import datetime
from pytz import utc
from dateutil import parser
from ..exceptions import FailedExchangeException
MSG_NS = u'http://schemas.microsoft.com/exchange/services/2006/messages'
TYPE_NS = u'http://schemas.microsoft.com/exchange/services/2006/types'
SOAP_NS = u'http://schemas.xmlsoap.org/soap/envelope/'
NAMESPACES = {u'm': MSG_NS, u't': TYPE_NS, u's': SOAP_NS}
M = ElementMaker(namespace=MSG_NS, nsmap=NAMESPACES)
T = ElementMaker(namespace=TYPE_NS, nsmap=NAMESPACES)
SOAP_NAMESPACES = {u's': SOAP_NS}
S = ElementMaker(namespace=SOAP_NS, nsmap=SOAP_NAMESPACES)
log = logging.getLogger('pyexchange')
class ExchangeServiceSOAP(object):
EXCHANGE_DATE_FORMAT = u"%Y-%m-%dT%H:%M:%S%z"
def __init__(self, connection):
self.connection = connection
def send(self, xml, mailbox_address=None, headers=None, retries=4, timeout=30, encoding="utf-8"):
request_xml = self._wrap_soap_xml_request(xml, mailbox_address)
log.info(etree.tostring(request_xml, encoding=encoding, pretty_print=True))
response = self._send_soap_request(request_xml, headers=headers, retries=retries, timeout=timeout, encoding=encoding)
return self._parse(response, encoding=encoding)
def _parse(self, response, encoding="utf-8"):
try:
tree = etree.XML(response.encode(encoding))
except (etree.XMLSyntaxError, TypeError) as err:
raise FailedExchangeException(u"Unable to parse response from Exchange - check your login information. Error: %s" % err)
self._check_for_errors(tree)
log.info(etree.tostring(tree, encoding=encoding, pretty_print=True))
return tree
def _check_for_errors(self, xml_tree):
self._check_for_SOAP_fault(xml_tree)
def _check_for_SOAP_fault(self, xml_tree):
# Check for SOAP errors. if <soap:Fault> is anywhere in the response, flip out
fault_nodes = xml_tree.xpath(u'//s:Fault', namespaces=SOAP_NAMESPACES)
if fault_nodes:
fault = fault_nodes[0]
log.debug(etree.tostring(fault, pretty_print=True))
# raise FailedExchangeException(u"SOAP Fault from Exchange server", fault.text)
def _send_soap_request(self, xml, headers=None, retries=2, timeout=30, encoding="utf-8"):
body = etree.tostring(xml, encoding=encoding)
response = self.connection.send(body, headers, retries, timeout)
return response
# def _add_impersonation_header(self, exchange_xml):
def _wrap_soap_xml_request(self, exchange_xml, mailbox_address=None):
if mailbox_address is None:
root = S.Envelope(S.Header(T.RequestServerVersion({u'Version': u'Exchange2010_SP2'})), S.Body(exchange_xml))
else:
root = S.Envelope(S.Header(T.RequestServerVersion({u'Version': u'Exchange2010_SP2'}), T.ExchangeImpersonation(T.ConnectingSID(T.SmtpAddress(mailbox_address))), T.TimeZoneContext(T.TimeZoneDefinition({u'Id':u'UTC'}))), S.Body(exchange_xml))
return root
def _parse_date(self, date_string):
# date = datetime.strptime(date_str, self.EXCHANGE_DATE_FORMAT)
date = parser.parse(date_string)
# date = date.replace(tzinfo=utc)
return date
def _parse_date_only_naive(self, date_string):
date = datetime.strptime(date_string[0:10], self.EXCHANGE_DATE_FORMAT[0:8])
return date.date()
def _xpath_to_dict(self, element, property_map, namespace_map):
"""
property_map = {
u'name' : { u'xpath' : u't:Mailbox/t:Name'},
u'email' : { u'xpath' : u't:Mailbox/t:EmailAddress'},
u'response' : { u'xpath' : u't:ResponseType'},
u'last_response': { u'xpath' : u't:LastResponseTime', u'cast': u'datetime'},
}
This runs the given xpath on the node and returns a dictionary
"""
result = {}
log.info(etree.tostring(element, pretty_print=True))
for key in property_map:
item = property_map[key]
log.info(u'Pulling xpath {xpath} into key {key}'.format(key=key, xpath=item[u'xpath']))
nodes = element.xpath(item[u'xpath'], namespaces=namespace_map)
if nodes:
result_for_node = []
for node in nodes:
cast_as = item.get(u'cast', None)
if cast_as == u'datetime':
result_for_node.append(self._parse_date(node.text))
elif cast_as == u'date_only_naive':
result_for_node.append(self._parse_date_only_naive(node.text))
elif cast_as == u'int':
result_for_node.append(int(node.text))
elif cast_as == u'bool':
if node.text.lower() == u'true':
result_for_node.append(True)
else:
result_for_node.append(False)
else:
result_for_node.append(node.text)
if not result_for_node:
result[key] = None
elif len(result_for_node) == 1:
result[key] = result_for_node[0]
else:
result[key] = result_for_node
return result
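The xpath-to-dict idea documented in the docstring above can be illustrated standalone with the stdlib `xml.etree.ElementTree` instead of lxml (a simplified sketch: the XML, namespace URI, and supported casts here are made up for the example and only the `int` cast is shown):

```python
import xml.etree.ElementTree as ET

def xpath_to_dict(element, property_map, namespace_map):
    """Run each mapped path against the element; None for no match,
    a scalar for one match, a list for several."""
    result = {}
    for key, item in property_map.items():
        nodes = element.findall(item['path'], namespace_map)
        values = [int(n.text) if item.get('cast') == 'int' else n.text
                  for n in nodes]
        if not values:
            result[key] = None
        elif len(values) == 1:
            result[key] = values[0]
        else:
            result[key] = values
    return result

xml = '<r xmlns:t="urn:t"><t:Name>Jane</t:Name><t:Age>30</t:Age></r>'
root = ET.fromstring(xml)
props = {'name': {'path': 't:Name'}, 'age': {'path': 't:Age', 'cast': 'int'}}
print(xpath_to_dict(root, props, {'t': 'urn:t'}))
# {'name': 'Jane', 'age': 30}
```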
| apache-2.0 |
sysadminmatmoz/pmis | project_time_schedule/__init__.py | 3 | 1030 | # -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (C) 2011 Eficent (<http://www.eficent.com/>)
# Jordi Ballester Alomar <jordi.ballester@eficent.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import models
import wizard
| agpl-3.0 |
rohit21122012/DCASE2013 | runs/2016/dnn2016med_traps/traps36/src/features.py | 40 | 11159 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import librosa
import numpy
import scipy
def feature_extraction_lfcc(audio_filename_with_path, statistics=True):
print audio_filename_with_path
with open(audio_filename_with_path, 'r') as f:
feature_matrix = numpy.loadtxt(f)
# Collect into data structure
# print feature_matrix.shape
if statistics:
return {
'feat': feature_matrix,
'stat': {
'mean': numpy.mean(feature_matrix, axis=0),
'std': numpy.std(feature_matrix, axis=0),
'N': feature_matrix.shape[0],
'S1': numpy.sum(feature_matrix, axis=0),
'S2': numpy.sum(feature_matrix ** 2, axis=0),
}
}
else:
return {
'feat': feature_matrix}
def feature_extraction_traps(y, fs=44100, statistics=True, traps_params=None, mfcc_params=None):
eps = numpy.spacing(1)
# Windowing function
if mfcc_params['window'] == 'hamming_asymmetric':
window = scipy.signal.hamming(mfcc_params['n_fft'], sym=False)
elif mfcc_params['window'] == 'hamming_symmetric':
window = scipy.signal.hamming(mfcc_params['n_fft'], sym=True)
elif mfcc_params['window'] == 'hann_asymmetric':
window = scipy.signal.hann(mfcc_params['n_fft'], sym=False)
elif mfcc_params['window'] == 'hann_symmetric':
window = scipy.signal.hann(mfcc_params['n_fft'], sym=True)
else:
window = None
# Calculate Static Coefficients
magnitude_spectrogram = numpy.abs(librosa.stft(y + eps,
n_fft=mfcc_params['n_fft'],
win_length=mfcc_params['win_length'],
hop_length=mfcc_params['hop_length'],
center=True,
window=window)) ** 2
S = librosa.feature.melspectrogram(S=magnitude_spectrogram, n_mels=mfcc_params['n_mels'])
frames = S.shape[1]
trap_window = traps_params['window']
trap_band = traps_params['band']
print S.shape
traps = numpy.empty([0, trap_window])
for i in xrange(trap_window/2,frames-(trap_window/2 + 1)):
#print S[1,i-50:i+51][None].shape
c = S[trap_band,i-trap_window/2:i+(trap_window/2 + 1)][None]
#print c.shape
traps = numpy.vstack((traps,c))
# Collect the feature matrix
feature_matrix = traps
# This should be (1501,40) in mfcc and (1400,101) in traps
print feature_matrix.shape
# Collect into data structure
if statistics:
return {
'feat': feature_matrix,
'stat': {
'mean': numpy.mean(feature_matrix, axis=0),
'std': numpy.std(feature_matrix, axis=0),
'N': feature_matrix.shape[0],
'S1': numpy.sum(feature_matrix, axis=0),
'S2': numpy.sum(feature_matrix ** 2, axis=0),
}
}
else:
return {
'feat': feature_matrix}
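The context-window ("TRAPS") extraction done by the vstack loop above can be written without the repeated `numpy.vstack` calls, which grow quadratically in cost. This is a behaviour-mirroring sketch (function and variable names are assumptions, and the toy matrix below stands in for a real mel spectrogram):

```python
import numpy

def extract_traps(S, band, window):
    """For one mel band, stack a sliding window of +/- window//2 frames
    around each frame, matching the loop bounds used above."""
    half = window // 2
    frames = S.shape[1]
    rows = [S[band, i - half:i + half + 1]
            for i in range(half, frames - (half + 1))]
    return numpy.array(rows)

S = numpy.arange(40, dtype=float).reshape(4, 10)  # 4 mel bands, 10 frames
traps = extract_traps(S, band=1, window=5)
print(traps.shape)  # (5, 5): frames 2..6, each with a 5-frame context
```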
def feature_extraction(y, fs=44100, statistics=True, include_mfcc0=True, include_delta=True,
include_acceleration=True, mfcc_params=None, delta_params=None, acceleration_params=None):
"""Feature extraction, MFCC based features
Outputs features in dict, format:
{
'feat': feature_matrix [shape=(frame count, feature vector size)],
'stat': {
'mean': numpy.mean(feature_matrix, axis=0),
'std': numpy.std(feature_matrix, axis=0),
'N': feature_matrix.shape[0],
'S1': numpy.sum(feature_matrix, axis=0),
'S2': numpy.sum(feature_matrix ** 2, axis=0),
}
}
Parameters
----------
y: numpy.array [shape=(signal_length, )]
Audio
fs: int > 0 [scalar]
Sample rate
(Default value=44100)
statistics: bool
Calculate feature statistics for extracted matrix
(Default value=True)
include_mfcc0: bool
Include 0th MFCC coefficient into static coefficients.
(Default value=True)
include_delta: bool
Include delta MFCC coefficients.
(Default value=True)
include_acceleration: bool
Include acceleration MFCC coefficients.
(Default value=True)
mfcc_params: dict or None
Parameters for extraction of static MFCC coefficients.
delta_params: dict or None
Parameters for extraction of delta MFCC coefficients.
acceleration_params: dict or None
Parameters for extraction of acceleration MFCC coefficients.
Returns
-------
result: dict
Feature dict
"""
eps = numpy.spacing(1)
# Windowing function
if mfcc_params['window'] == 'hamming_asymmetric':
window = scipy.signal.hamming(mfcc_params['n_fft'], sym=False)
elif mfcc_params['window'] == 'hamming_symmetric':
window = scipy.signal.hamming(mfcc_params['n_fft'], sym=True)
elif mfcc_params['window'] == 'hann_asymmetric':
window = scipy.signal.hann(mfcc_params['n_fft'], sym=False)
elif mfcc_params['window'] == 'hann_symmetric':
window = scipy.signal.hann(mfcc_params['n_fft'], sym=True)
else:
window = None
# Calculate Static Coefficients
magnitude_spectrogram = numpy.abs(librosa.stft(y + eps,
n_fft=mfcc_params['n_fft'],
win_length=mfcc_params['win_length'],
hop_length=mfcc_params['hop_length'],
center=True,
window=window)) ** 2
mel_basis = librosa.filters.mel(sr=fs,
n_fft=mfcc_params['n_fft'],
n_mels=mfcc_params['n_mels'],
fmin=mfcc_params['fmin'],
fmax=mfcc_params['fmax'],
htk=mfcc_params['htk'])
mel_spectrum = numpy.dot(mel_basis, magnitude_spectrogram)
mfcc = librosa.feature.mfcc(S=librosa.logamplitude(mel_spectrum))
# Collect the feature matrix
feature_matrix = mfcc
if include_delta:
# Delta coefficients
mfcc_delta = librosa.feature.delta(mfcc, **delta_params)
# Add Delta Coefficients to feature matrix
feature_matrix = numpy.vstack((feature_matrix, mfcc_delta))
if include_acceleration:
# Acceleration coefficients (aka delta-delta)
mfcc_delta2 = librosa.feature.delta(mfcc, order=2, **acceleration_params)
# Add Acceleration Coefficients to feature matrix
feature_matrix = numpy.vstack((feature_matrix, mfcc_delta2))
if not include_mfcc0:
# Omit mfcc0
feature_matrix = feature_matrix[1:, :]
feature_matrix = feature_matrix.T
print feature_matrix.shape
# Collect into data structure
if statistics:
return {
'feat': feature_matrix,
'stat': {
'mean': numpy.mean(feature_matrix, axis=0),
'std': numpy.std(feature_matrix, axis=0),
'N': feature_matrix.shape[0],
'S1': numpy.sum(feature_matrix, axis=0),
'S2': numpy.sum(feature_matrix ** 2, axis=0),
}
}
else:
return {
'feat': feature_matrix}
class FeatureNormalizer(object):
"""Feature normalizer class
Accumulates feature statistics
Examples
--------
>>> normalizer = FeatureNormalizer()
    >>> for stat in training_statistics:
    ...     normalizer.accumulate(stat)
    >>>
    >>> normalizer.finalize()
    >>> for feature_matrix in test_items:
    ...     feature_matrix_normalized = normalizer.normalize(feature_matrix)
    ...     # use the features
"""
def __init__(self, feature_matrix=None):
"""__init__ method.
Parameters
----------
feature_matrix : numpy.ndarray [shape=(frames, number of feature values)] or None
Feature matrix to be used in the initialization
"""
if feature_matrix is None:
self.N = 0
self.mean = 0
self.S1 = 0
self.S2 = 0
self.std = 0
else:
self.mean = numpy.mean(feature_matrix, axis=0)
self.std = numpy.std(feature_matrix, axis=0)
self.N = feature_matrix.shape[0]
self.S1 = numpy.sum(feature_matrix, axis=0)
self.S2 = numpy.sum(feature_matrix ** 2, axis=0)
self.finalize()
def __enter__(self):
# Initialize Normalization class and return it
self.N = 0
self.mean = 0
self.S1 = 0
self.S2 = 0
self.std = 0
return self
    def __exit__(self, exc_type, exc_value, traceback):
# Finalize accumulated calculation
self.finalize()
def accumulate(self, stat):
"""Accumalate statistics
Input is statistics dict, format:
{
'mean': numpy.mean(feature_matrix, axis=0),
'std': numpy.std(feature_matrix, axis=0),
'N': feature_matrix.shape[0],
'S1': numpy.sum(feature_matrix, axis=0),
'S2': numpy.sum(feature_matrix ** 2, axis=0),
}
Parameters
----------
stat : dict
Statistics dict
Returns
-------
nothing
"""
self.N += stat['N']
self.mean += stat['mean']
self.S1 += stat['S1']
self.S2 += stat['S2']
def finalize(self):
"""Finalize statistics calculation
Accumulated values are used to get mean and std for the seen feature data.
Parameters
----------
nothing
Returns
-------
nothing
"""
# Finalize statistics
self.mean = self.S1 / self.N
self.std = numpy.sqrt((self.N * self.S2 - (self.S1 * self.S1)) / (self.N * (self.N - 1)))
        # In case of degenerate input we get std = NaN => replace with 0.0
self.std = numpy.nan_to_num(self.std)
self.mean = numpy.reshape(self.mean, [1, -1])
self.std = numpy.reshape(self.std, [1, -1])
def normalize(self, feature_matrix):
"""Normalize feature matrix with internal statistics of the class
Parameters
----------
feature_matrix : numpy.ndarray [shape=(frames, number of feature values)]
Feature matrix to be normalized
Returns
-------
feature_matrix : numpy.ndarray [shape=(frames, number of feature values)]
Normalized feature matrix
"""
return (feature_matrix - self.mean) / self.std
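The running sums kept by `FeatureNormalizer` (`N`, `S1`, `S2`) are enough to recover the pooled mean and sample standard deviation over any number of accumulated chunks. A minimal stdlib check of the scalar form of the formula used in `finalize` (illustrative variable names, not part of the class):

```python
import math
import statistics

# Two "files" worth of a single feature value, accumulated as in the class
chunks = [[1.0, 2.0, 3.0], [4.0, 5.0]]
N = sum(len(c) for c in chunks)
S1 = sum(x for c in chunks for x in c)
S2 = sum(x * x for c in chunks for x in c)

# Same formulas as FeatureNormalizer.finalize(), scalar case
mean = S1 / N
std = math.sqrt((N * S2 - S1 * S1) / (N * (N - 1)))

# Matches a direct pass over the concatenated data
flat = [x for c in chunks for x in c]
assert abs(mean - statistics.mean(flat)) < 1e-12
assert abs(std - statistics.stdev(flat)) < 1e-12
```

This is why `accumulate` only needs the per-file statistics dict: sums of `x` and `x**2` compose across chunks, while per-file means and stds would not.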
| mit |
pv/scikit-learn | sklearn/tests/test_cross_validation.py | 70 | 41943 | """Test the cross_validation module"""
from __future__ import division
import warnings
import numpy as np
from scipy.sparse import coo_matrix
from scipy import stats
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_false
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_less
from sklearn.utils.testing import assert_not_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_warns_message
from sklearn.utils.testing import ignore_warnings
from sklearn.utils.mocking import CheckingClassifier, MockDataFrame
from sklearn import cross_validation as cval
from sklearn.datasets import make_regression
from sklearn.datasets import load_boston
from sklearn.datasets import load_digits
from sklearn.datasets import load_iris
from sklearn.metrics import explained_variance_score
from sklearn.metrics import make_scorer
from sklearn.metrics import precision_score
from sklearn.externals import six
from sklearn.externals.six.moves import zip
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.preprocessing import Imputer, LabelBinarizer
from sklearn.pipeline import Pipeline
class MockClassifier(object):
"""Dummy classifier to test the cross-validation"""
def __init__(self, a=0, allow_nd=False):
self.a = a
self.allow_nd = allow_nd
def fit(self, X, Y=None, sample_weight=None, class_prior=None,
sparse_sample_weight=None, sparse_param=None, dummy_int=None,
dummy_str=None, dummy_obj=None, callback=None):
"""The dummy arguments are to test that this fit function can
accept non-array arguments through cross-validation, such as:
- int
- str (this is actually array-like)
- object
- function
"""
self.dummy_int = dummy_int
self.dummy_str = dummy_str
self.dummy_obj = dummy_obj
if callback is not None:
callback(self)
if self.allow_nd:
X = X.reshape(len(X), -1)
if X.ndim >= 3 and not self.allow_nd:
raise ValueError('X cannot be d')
if sample_weight is not None:
assert_true(sample_weight.shape[0] == X.shape[0],
'MockClassifier extra fit_param sample_weight.shape[0]'
' is {0}, should be {1}'.format(sample_weight.shape[0],
X.shape[0]))
if class_prior is not None:
assert_true(class_prior.shape[0] == len(np.unique(y)),
'MockClassifier extra fit_param class_prior.shape[0]'
' is {0}, should be {1}'.format(class_prior.shape[0],
len(np.unique(y))))
if sparse_sample_weight is not None:
fmt = ('MockClassifier extra fit_param sparse_sample_weight'
'.shape[0] is {0}, should be {1}')
assert_true(sparse_sample_weight.shape[0] == X.shape[0],
fmt.format(sparse_sample_weight.shape[0], X.shape[0]))
if sparse_param is not None:
fmt = ('MockClassifier extra fit_param sparse_param.shape '
'is ({0}, {1}), should be ({2}, {3})')
assert_true(sparse_param.shape == P_sparse.shape,
fmt.format(sparse_param.shape[0],
sparse_param.shape[1],
P_sparse.shape[0], P_sparse.shape[1]))
return self
def predict(self, T):
if self.allow_nd:
T = T.reshape(len(T), -1)
return T[:, 0]
def score(self, X=None, Y=None):
return 1. / (1 + np.abs(self.a))
def get_params(self, deep=False):
return {'a': self.a, 'allow_nd': self.allow_nd}
X = np.ones((10, 2))
X_sparse = coo_matrix(X)
W_sparse = coo_matrix((np.array([1]), (np.array([1]), np.array([0]))),
shape=(10, 1))
P_sparse = coo_matrix(np.eye(5))
y = np.arange(10) // 2
##############################################################################
# Tests
def check_valid_split(train, test, n_samples=None):
# Use python sets to get more informative assertion failure messages
train, test = set(train), set(test)
# Train and test split should not overlap
assert_equal(train.intersection(test), set())
if n_samples is not None:
        # Check that the union of the train and test splits covers all the
        # indices
assert_equal(train.union(test), set(range(n_samples)))
def check_cv_coverage(cv, expected_n_iter=None, n_samples=None):
    # Check that all the samples appear at least once in a test fold
if expected_n_iter is not None:
assert_equal(len(cv), expected_n_iter)
else:
expected_n_iter = len(cv)
collected_test_samples = set()
iterations = 0
for train, test in cv:
check_valid_split(train, test, n_samples=n_samples)
iterations += 1
collected_test_samples.update(test)
# Check that the accumulated test samples cover the whole dataset
assert_equal(iterations, expected_n_iter)
if n_samples is not None:
assert_equal(collected_test_samples, set(range(n_samples)))
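`check_valid_split` and `check_cv_coverage` encode two invariants of any k-fold scheme: each test fold is disjoint from its training set, and the test folds jointly cover every sample. A stdlib sketch of the same checks against a hand-rolled contiguous splitter (a hypothetical helper for illustration, not sklearn's `KFold`):

```python
def contiguous_kfold(n_samples, n_folds):
    """Yield (train, test) index lists for contiguous, balanced folds."""
    fold_sizes = [n_samples // n_folds + (1 if i < n_samples % n_folds else 0)
                  for i in range(n_folds)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, test
        start += size

collected = set()
for train, test in contiguous_kfold(10, 3):
    assert not set(train) & set(test)                 # no overlap
    assert set(train) | set(test) == set(range(10))   # split covers everything
    collected.update(test)
assert collected == set(range(10))                    # every sample tested once
```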
def test_kfold_valueerrors():
    # Check that errors are raised if there are not enough samples
assert_raises(ValueError, cval.KFold, 3, 4)
# Check that a warning is raised if the least populated class has too few
# members.
y = [3, 3, -1, -1, 2]
cv = assert_warns_message(Warning, "The least populated class",
cval.StratifiedKFold, y, 3)
    # Check that despite the warning the folds are still computed even
    # though the classes are not necessarily all represented on each
    # side of each split
check_cv_coverage(cv, expected_n_iter=3, n_samples=len(y))
# Error when number of folds is <= 1
assert_raises(ValueError, cval.KFold, 2, 0)
assert_raises(ValueError, cval.KFold, 2, 1)
assert_raises(ValueError, cval.StratifiedKFold, y, 0)
assert_raises(ValueError, cval.StratifiedKFold, y, 1)
# When n is not integer:
assert_raises(ValueError, cval.KFold, 2.5, 2)
# When n_folds is not integer:
assert_raises(ValueError, cval.KFold, 5, 1.5)
assert_raises(ValueError, cval.StratifiedKFold, y, 1.5)
def test_kfold_indices():
# Check all indices are returned in the test folds
kf = cval.KFold(300, 3)
check_cv_coverage(kf, expected_n_iter=3, n_samples=300)
# Check all indices are returned in the test folds even when equal-sized
# folds are not possible
kf = cval.KFold(17, 3)
check_cv_coverage(kf, expected_n_iter=3, n_samples=17)
def test_kfold_no_shuffle():
# Manually check that KFold preserves the data ordering on toy datasets
splits = iter(cval.KFold(4, 2))
train, test = next(splits)
assert_array_equal(test, [0, 1])
assert_array_equal(train, [2, 3])
train, test = next(splits)
assert_array_equal(test, [2, 3])
assert_array_equal(train, [0, 1])
splits = iter(cval.KFold(5, 2))
train, test = next(splits)
assert_array_equal(test, [0, 1, 2])
assert_array_equal(train, [3, 4])
train, test = next(splits)
assert_array_equal(test, [3, 4])
assert_array_equal(train, [0, 1, 2])
def test_stratified_kfold_no_shuffle():
# Manually check that StratifiedKFold preserves the data ordering as much
# as possible on toy datasets in order to avoid hiding sample dependencies
# when possible
splits = iter(cval.StratifiedKFold([1, 1, 0, 0], 2))
train, test = next(splits)
assert_array_equal(test, [0, 2])
assert_array_equal(train, [1, 3])
train, test = next(splits)
assert_array_equal(test, [1, 3])
assert_array_equal(train, [0, 2])
splits = iter(cval.StratifiedKFold([1, 1, 1, 0, 0, 0, 0], 2))
train, test = next(splits)
assert_array_equal(test, [0, 1, 3, 4])
assert_array_equal(train, [2, 5, 6])
train, test = next(splits)
assert_array_equal(test, [2, 5, 6])
assert_array_equal(train, [0, 1, 3, 4])
def test_stratified_kfold_ratios():
# Check that stratified kfold preserves label ratios in individual splits
# Repeat with shuffling turned off and on
n_samples = 1000
labels = np.array([4] * int(0.10 * n_samples) +
[0] * int(0.89 * n_samples) +
[1] * int(0.01 * n_samples))
for shuffle in [False, True]:
for train, test in cval.StratifiedKFold(labels, 5, shuffle=shuffle):
assert_almost_equal(np.sum(labels[train] == 4) / len(train), 0.10,
2)
assert_almost_equal(np.sum(labels[train] == 0) / len(train), 0.89,
2)
assert_almost_equal(np.sum(labels[train] == 1) / len(train), 0.01,
2)
assert_almost_equal(np.sum(labels[test] == 4) / len(test), 0.10, 2)
assert_almost_equal(np.sum(labels[test] == 0) / len(test), 0.89, 2)
assert_almost_equal(np.sum(labels[test] == 1) / len(test), 0.01, 2)
def test_kfold_balance():
# Check that KFold returns folds with balanced sizes
for kf in [cval.KFold(i, 5) for i in range(11, 17)]:
sizes = []
for _, test in kf:
sizes.append(len(test))
assert_true((np.max(sizes) - np.min(sizes)) <= 1)
assert_equal(np.sum(sizes), kf.n)
def test_stratifiedkfold_balance():
    # Check that StratifiedKFold returns folds with balanced sizes (only when
# stratification is possible)
# Repeat with shuffling turned off and on
labels = [0] * 3 + [1] * 14
for shuffle in [False, True]:
for skf in [cval.StratifiedKFold(labels[:i], 3, shuffle=shuffle)
for i in range(11, 17)]:
sizes = []
for _, test in skf:
sizes.append(len(test))
assert_true((np.max(sizes) - np.min(sizes)) <= 1)
assert_equal(np.sum(sizes), skf.n)
def test_shuffle_kfold():
# Check the indices are shuffled properly, and that all indices are
# returned in the different test folds
kf = cval.KFold(300, 3, shuffle=True, random_state=0)
ind = np.arange(300)
all_folds = None
for train, test in kf:
sorted_array = np.arange(100)
assert_true(np.any(sorted_array != ind[train]))
sorted_array = np.arange(101, 200)
assert_true(np.any(sorted_array != ind[train]))
sorted_array = np.arange(201, 300)
assert_true(np.any(sorted_array != ind[train]))
if all_folds is None:
all_folds = ind[test].copy()
else:
all_folds = np.concatenate((all_folds, ind[test]))
all_folds.sort()
assert_array_equal(all_folds, ind)
def test_shuffle_stratifiedkfold():
# Check that shuffling is happening when requested, and for proper
# sample coverage
labels = [0] * 20 + [1] * 20
kf0 = list(cval.StratifiedKFold(labels, 5, shuffle=True, random_state=0))
kf1 = list(cval.StratifiedKFold(labels, 5, shuffle=True, random_state=1))
for (_, test0), (_, test1) in zip(kf0, kf1):
assert_true(set(test0) != set(test1))
check_cv_coverage(kf0, expected_n_iter=5, n_samples=40)
def test_kfold_can_detect_dependent_samples_on_digits(): # see #2372
# The digits samples are dependent: they are apparently grouped by authors
# although we don't have any information on the groups segment locations
    # for this data. We can highlight this fact by computing k-fold cross-
    # validation with and without shuffling: we observe that the shuffling case
    # wrongly makes the IID assumption and is therefore too optimistic: it
    # estimates a much higher accuracy (around 0.96) than the non-shuffling
    # variant (around 0.86).
digits = load_digits()
X, y = digits.data[:800], digits.target[:800]
model = SVC(C=10, gamma=0.005)
n = len(y)
cv = cval.KFold(n, 5, shuffle=False)
mean_score = cval.cross_val_score(model, X, y, cv=cv).mean()
assert_greater(0.88, mean_score)
assert_greater(mean_score, 0.85)
# Shuffling the data artificially breaks the dependency and hides the
# overfitting of the model with regards to the writing style of the authors
# by yielding a seriously overestimated score:
cv = cval.KFold(n, 5, shuffle=True, random_state=0)
mean_score = cval.cross_val_score(model, X, y, cv=cv).mean()
assert_greater(mean_score, 0.95)
cv = cval.KFold(n, 5, shuffle=True, random_state=1)
mean_score = cval.cross_val_score(model, X, y, cv=cv).mean()
assert_greater(mean_score, 0.95)
# Similarly, StratifiedKFold should try to shuffle the data as little
# as possible (while respecting the balanced class constraints)
# and thus be able to detect the dependency by not overestimating
# the CV score either. As the digits dataset is approximately balanced
# the estimated mean score is close to the score measured with
# non-shuffled KFold
cv = cval.StratifiedKFold(y, 5)
mean_score = cval.cross_val_score(model, X, y, cv=cv).mean()
assert_greater(0.88, mean_score)
assert_greater(mean_score, 0.85)
def test_shuffle_split():
ss1 = cval.ShuffleSplit(10, test_size=0.2, random_state=0)
ss2 = cval.ShuffleSplit(10, test_size=2, random_state=0)
ss3 = cval.ShuffleSplit(10, test_size=np.int32(2), random_state=0)
for typ in six.integer_types:
ss4 = cval.ShuffleSplit(10, test_size=typ(2), random_state=0)
for t1, t2, t3, t4 in zip(ss1, ss2, ss3, ss4):
assert_array_equal(t1[0], t2[0])
assert_array_equal(t2[0], t3[0])
assert_array_equal(t3[0], t4[0])
assert_array_equal(t1[1], t2[1])
assert_array_equal(t2[1], t3[1])
assert_array_equal(t3[1], t4[1])
def test_stratified_shuffle_split_init():
y = np.asarray([0, 1, 1, 1, 2, 2, 2])
# Check that error is raised if there is a class with only one sample
assert_raises(ValueError, cval.StratifiedShuffleSplit, y, 3, 0.2)
# Check that error is raised if the test set size is smaller than n_classes
assert_raises(ValueError, cval.StratifiedShuffleSplit, y, 3, 2)
# Check that error is raised if the train set size is smaller than
# n_classes
assert_raises(ValueError, cval.StratifiedShuffleSplit, y, 3, 3, 2)
y = np.asarray([0, 0, 0, 1, 1, 1, 2, 2, 2])
    # Check that errors are raised if there are not enough samples
assert_raises(ValueError, cval.StratifiedShuffleSplit, y, 3, 0.5, 0.6)
assert_raises(ValueError, cval.StratifiedShuffleSplit, y, 3, 8, 0.6)
assert_raises(ValueError, cval.StratifiedShuffleSplit, y, 3, 0.6, 8)
# Train size or test size too small
assert_raises(ValueError, cval.StratifiedShuffleSplit, y, train_size=2)
assert_raises(ValueError, cval.StratifiedShuffleSplit, y, test_size=2)
def test_stratified_shuffle_split_iter():
ys = [np.array([1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3]),
np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]),
np.array([0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2]),
np.array([1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4]),
np.array([-1] * 800 + [1] * 50)
]
for y in ys:
sss = cval.StratifiedShuffleSplit(y, 6, test_size=0.33,
random_state=0)
for train, test in sss:
assert_array_equal(np.unique(y[train]), np.unique(y[test]))
# Checks if folds keep classes proportions
p_train = (np.bincount(np.unique(y[train], return_inverse=True)[1])
/ float(len(y[train])))
p_test = (np.bincount(np.unique(y[test], return_inverse=True)[1])
/ float(len(y[test])))
assert_array_almost_equal(p_train, p_test, 1)
assert_equal(y[train].size + y[test].size, y.size)
assert_array_equal(np.lib.arraysetops.intersect1d(train, test), [])
def test_stratified_shuffle_split_even():
    # Test that StratifiedShuffleSplit draws indices with an equal
    # chance
n_folds = 5
n_iter = 1000
def assert_counts_are_ok(idx_counts, p):
# Here we test that the distribution of the counts
# per index is close enough to a binomial
threshold = 0.05 / n_splits
bf = stats.binom(n_splits, p)
for count in idx_counts:
p = bf.pmf(count)
assert_true(p > threshold,
"An index is not drawn with chance corresponding "
"to even draws")
for n_samples in (6, 22):
labels = np.array((n_samples // 2) * [0, 1])
splits = cval.StratifiedShuffleSplit(labels, n_iter=n_iter,
test_size=1. / n_folds,
random_state=0)
train_counts = [0] * n_samples
test_counts = [0] * n_samples
n_splits = 0
for train, test in splits:
n_splits += 1
for counter, ids in [(train_counts, train), (test_counts, test)]:
for id in ids:
counter[id] += 1
assert_equal(n_splits, n_iter)
assert_equal(len(train), splits.n_train)
assert_equal(len(test), splits.n_test)
assert_equal(len(set(train).intersection(test)), 0)
label_counts = np.unique(labels)
assert_equal(splits.test_size, 1.0 / n_folds)
assert_equal(splits.n_train + splits.n_test, len(labels))
assert_equal(len(label_counts), 2)
ex_test_p = float(splits.n_test) / n_samples
ex_train_p = float(splits.n_train) / n_samples
assert_counts_are_ok(train_counts, ex_train_p)
assert_counts_are_ok(test_counts, ex_test_p)
def test_predefinedsplit_with_kfold_split():
    # Check that PredefinedSplit can reproduce a split generated by KFold.
folds = -1 * np.ones(10)
kf_train = []
kf_test = []
for i, (train_ind, test_ind) in enumerate(cval.KFold(10, 5, shuffle=True)):
kf_train.append(train_ind)
kf_test.append(test_ind)
folds[test_ind] = i
ps_train = []
ps_test = []
ps = cval.PredefinedSplit(folds)
for train_ind, test_ind in ps:
ps_train.append(train_ind)
ps_test.append(test_ind)
assert_array_equal(ps_train, kf_train)
assert_array_equal(ps_test, kf_test)
def test_leave_label_out_changing_labels():
# Check that LeaveOneLabelOut and LeavePLabelOut work normally if
# the labels variable is changed before calling __iter__
labels = np.array([0, 1, 2, 1, 1, 2, 0, 0])
labels_changing = np.array(labels, copy=True)
lolo = cval.LeaveOneLabelOut(labels)
lolo_changing = cval.LeaveOneLabelOut(labels_changing)
lplo = cval.LeavePLabelOut(labels, p=2)
lplo_changing = cval.LeavePLabelOut(labels_changing, p=2)
labels_changing[:] = 0
for llo, llo_changing in [(lolo, lolo_changing), (lplo, lplo_changing)]:
for (train, test), (train_chan, test_chan) in zip(llo, llo_changing):
assert_array_equal(train, train_chan)
assert_array_equal(test, test_chan)
def test_cross_val_score():
clf = MockClassifier()
for a in range(-10, 10):
clf.a = a
# Smoke test
scores = cval.cross_val_score(clf, X, y)
assert_array_equal(scores, clf.score(X, y))
# test with multioutput y
scores = cval.cross_val_score(clf, X_sparse, X)
assert_array_equal(scores, clf.score(X_sparse, X))
scores = cval.cross_val_score(clf, X_sparse, y)
assert_array_equal(scores, clf.score(X_sparse, y))
# test with multioutput y
scores = cval.cross_val_score(clf, X_sparse, X)
assert_array_equal(scores, clf.score(X_sparse, X))
# test with X and y as list
list_check = lambda x: isinstance(x, list)
clf = CheckingClassifier(check_X=list_check)
scores = cval.cross_val_score(clf, X.tolist(), y.tolist())
clf = CheckingClassifier(check_y=list_check)
scores = cval.cross_val_score(clf, X, y.tolist())
assert_raises(ValueError, cval.cross_val_score, clf, X, y,
scoring="sklearn")
    # test with 3d X
X_3d = X[:, :, np.newaxis]
clf = MockClassifier(allow_nd=True)
scores = cval.cross_val_score(clf, X_3d, y)
clf = MockClassifier(allow_nd=False)
assert_raises(ValueError, cval.cross_val_score, clf, X_3d, y)
def test_cross_val_score_pandas():
# check cross_val_score doesn't destroy pandas dataframe
types = [(MockDataFrame, MockDataFrame)]
try:
from pandas import Series, DataFrame
types.append((Series, DataFrame))
except ImportError:
pass
for TargetType, InputFeatureType in types:
# X dataframe, y series
X_df, y_ser = InputFeatureType(X), TargetType(y)
check_df = lambda x: isinstance(x, InputFeatureType)
check_series = lambda x: isinstance(x, TargetType)
clf = CheckingClassifier(check_X=check_df, check_y=check_series)
cval.cross_val_score(clf, X_df, y_ser)
def test_cross_val_score_mask():
# test that cross_val_score works with boolean masks
svm = SVC(kernel="linear")
iris = load_iris()
X, y = iris.data, iris.target
cv_indices = cval.KFold(len(y), 5)
scores_indices = cval.cross_val_score(svm, X, y, cv=cv_indices)
cv_indices = cval.KFold(len(y), 5)
cv_masks = []
for train, test in cv_indices:
mask_train = np.zeros(len(y), dtype=np.bool)
mask_test = np.zeros(len(y), dtype=np.bool)
mask_train[train] = 1
mask_test[test] = 1
        cv_masks.append((mask_train, mask_test))
scores_masks = cval.cross_val_score(svm, X, y, cv=cv_masks)
assert_array_equal(scores_indices, scores_masks)
def test_cross_val_score_precomputed():
# test for svm with precomputed kernel
svm = SVC(kernel="precomputed")
iris = load_iris()
X, y = iris.data, iris.target
linear_kernel = np.dot(X, X.T)
score_precomputed = cval.cross_val_score(svm, linear_kernel, y)
svm = SVC(kernel="linear")
score_linear = cval.cross_val_score(svm, X, y)
assert_array_equal(score_precomputed, score_linear)
# Error raised for non-square X
svm = SVC(kernel="precomputed")
assert_raises(ValueError, cval.cross_val_score, svm, X, y)
# test error is raised when the precomputed kernel is not array-like
# or sparse
assert_raises(ValueError, cval.cross_val_score, svm,
linear_kernel.tolist(), y)
def test_cross_val_score_fit_params():
clf = MockClassifier()
n_samples = X.shape[0]
n_classes = len(np.unique(y))
DUMMY_INT = 42
DUMMY_STR = '42'
DUMMY_OBJ = object()
def assert_fit_params(clf):
# Function to test that the values are passed correctly to the
# classifier arguments for non-array type
assert_equal(clf.dummy_int, DUMMY_INT)
assert_equal(clf.dummy_str, DUMMY_STR)
assert_equal(clf.dummy_obj, DUMMY_OBJ)
fit_params = {'sample_weight': np.ones(n_samples),
'class_prior': np.ones(n_classes) / n_classes,
'sparse_sample_weight': W_sparse,
'sparse_param': P_sparse,
'dummy_int': DUMMY_INT,
'dummy_str': DUMMY_STR,
'dummy_obj': DUMMY_OBJ,
'callback': assert_fit_params}
cval.cross_val_score(clf, X, y, fit_params=fit_params)
def test_cross_val_score_score_func():
clf = MockClassifier()
_score_func_args = []
def score_func(y_test, y_predict):
_score_func_args.append((y_test, y_predict))
return 1.0
with warnings.catch_warnings(record=True):
scoring = make_scorer(score_func)
score = cval.cross_val_score(clf, X, y, scoring=scoring)
assert_array_equal(score, [1.0, 1.0, 1.0])
assert len(_score_func_args) == 3
def test_cross_val_score_errors():
class BrokenEstimator:
pass
assert_raises(TypeError, cval.cross_val_score, BrokenEstimator(), X)
def test_train_test_split_errors():
assert_raises(ValueError, cval.train_test_split)
assert_raises(ValueError, cval.train_test_split, range(3), train_size=1.1)
assert_raises(ValueError, cval.train_test_split, range(3), test_size=0.6,
train_size=0.6)
assert_raises(ValueError, cval.train_test_split, range(3),
test_size=np.float32(0.6), train_size=np.float32(0.6))
assert_raises(ValueError, cval.train_test_split, range(3),
test_size="wrong_type")
assert_raises(ValueError, cval.train_test_split, range(3), test_size=2,
train_size=4)
assert_raises(TypeError, cval.train_test_split, range(3),
some_argument=1.1)
assert_raises(ValueError, cval.train_test_split, range(3), range(42))
def test_train_test_split():
X = np.arange(100).reshape((10, 10))
X_s = coo_matrix(X)
y = np.arange(10)
# simple test
split = cval.train_test_split(X, y, test_size=None, train_size=.5)
X_train, X_test, y_train, y_test = split
assert_equal(len(y_test), len(y_train))
# test correspondence of X and y
assert_array_equal(X_train[:, 0], y_train * 10)
assert_array_equal(X_test[:, 0], y_test * 10)
# conversion of lists to arrays (deprecated?)
with warnings.catch_warnings(record=True):
split = cval.train_test_split(X, X_s, y.tolist(), allow_lists=False)
X_train, X_test, X_s_train, X_s_test, y_train, y_test = split
assert_array_equal(X_train, X_s_train.toarray())
assert_array_equal(X_test, X_s_test.toarray())
# don't convert lists to anything else by default
split = cval.train_test_split(X, X_s, y.tolist())
X_train, X_test, X_s_train, X_s_test, y_train, y_test = split
assert_true(isinstance(y_train, list))
assert_true(isinstance(y_test, list))
# allow nd-arrays
X_4d = np.arange(10 * 5 * 3 * 2).reshape(10, 5, 3, 2)
y_3d = np.arange(10 * 7 * 11).reshape(10, 7, 11)
split = cval.train_test_split(X_4d, y_3d)
assert_equal(split[0].shape, (7, 5, 3, 2))
assert_equal(split[1].shape, (3, 5, 3, 2))
assert_equal(split[2].shape, (7, 7, 11))
assert_equal(split[3].shape, (3, 7, 11))
# test stratification option
y = np.array([1, 1, 1, 1, 2, 2, 2, 2])
for test_size, exp_test_size in zip([2, 4, 0.25, 0.5, 0.75],
[2, 4, 2, 4, 6]):
train, test = cval.train_test_split(y,
test_size=test_size,
stratify=y,
random_state=0)
assert_equal(len(test), exp_test_size)
assert_equal(len(test) + len(train), len(y))
# check the 1:1 ratio of ones and twos in the data is preserved
assert_equal(np.sum(train == 1), np.sum(train == 2))
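The `stratify` option exercised above keeps the class proportions equal between the train and test sides. A minimal stdlib sketch of the idea (a hypothetical helper for illustration; sklearn's actual implementation handles rounding and edge cases differently):

```python
import random

def stratified_split(labels, test_frac, seed=0):
    """Split indices so each class contributes ~test_frac of its samples."""
    rng = random.Random(seed)
    by_class = {}
    for idx, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n_test = max(1, round(len(idxs) * test_frac))
        test.extend(idxs[:n_test])       # take test_frac from every class
        train.extend(idxs[n_test:])
    return sorted(train), sorted(test)
```

For `labels = [1, 1, 1, 1, 2, 2, 2, 2]` and `test_frac=0.25`, each class contributes exactly one test sample, so the 1:1 class ratio survives the split.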
def train_test_split_pandas():
    # check train_test_split doesn't destroy pandas dataframe
types = [MockDataFrame]
try:
from pandas import DataFrame
types.append(DataFrame)
except ImportError:
pass
for InputFeatureType in types:
# X dataframe
X_df = InputFeatureType(X)
X_train, X_test = cval.train_test_split(X_df)
assert_true(isinstance(X_train, InputFeatureType))
assert_true(isinstance(X_test, InputFeatureType))
def train_test_split_mock_pandas():
# X mock dataframe
X_df = MockDataFrame(X)
X_train, X_test = cval.train_test_split(X_df)
assert_true(isinstance(X_train, MockDataFrame))
assert_true(isinstance(X_test, MockDataFrame))
X_train_arr, X_test_arr = cval.train_test_split(X_df, allow_lists=False)
assert_true(isinstance(X_train_arr, np.ndarray))
assert_true(isinstance(X_test_arr, np.ndarray))
def test_cross_val_score_with_score_func_classification():
iris = load_iris()
clf = SVC(kernel='linear')
# Default score (should be the accuracy score)
scores = cval.cross_val_score(clf, iris.data, iris.target, cv=5)
assert_array_almost_equal(scores, [0.97, 1., 0.97, 0.97, 1.], 2)
# Correct classification score (aka. zero / one score) - should be the
# same as the default estimator score
zo_scores = cval.cross_val_score(clf, iris.data, iris.target,
scoring="accuracy", cv=5)
assert_array_almost_equal(zo_scores, [0.97, 1., 0.97, 0.97, 1.], 2)
    # F1 score (classes are balanced so f1_score should be equal to the
    # zero/one score)
f1_scores = cval.cross_val_score(clf, iris.data, iris.target,
scoring="f1_weighted", cv=5)
assert_array_almost_equal(f1_scores, [0.97, 1., 0.97, 0.97, 1.], 2)
def test_cross_val_score_with_score_func_regression():
X, y = make_regression(n_samples=30, n_features=20, n_informative=5,
random_state=0)
reg = Ridge()
# Default score of the Ridge regression estimator
scores = cval.cross_val_score(reg, X, y, cv=5)
assert_array_almost_equal(scores, [0.94, 0.97, 0.97, 0.99, 0.92], 2)
    # R2 score (aka. coefficient of determination) - should be the
# same as the default estimator score
r2_scores = cval.cross_val_score(reg, X, y, scoring="r2", cv=5)
assert_array_almost_equal(r2_scores, [0.94, 0.97, 0.97, 0.99, 0.92], 2)
# Mean squared error; this is a loss function, so "scores" are negative
mse_scores = cval.cross_val_score(reg, X, y, cv=5,
scoring="mean_squared_error")
expected_mse = np.array([-763.07, -553.16, -274.38, -273.26, -1681.99])
assert_array_almost_equal(mse_scores, expected_mse, 2)
# Explained variance
scoring = make_scorer(explained_variance_score)
ev_scores = cval.cross_val_score(reg, X, y, cv=5, scoring=scoring)
assert_array_almost_equal(ev_scores, [0.94, 0.97, 0.97, 0.99, 0.92], 2)
def test_permutation_score():
iris = load_iris()
X = iris.data
X_sparse = coo_matrix(X)
y = iris.target
svm = SVC(kernel='linear')
cv = cval.StratifiedKFold(y, 2)
score, scores, pvalue = cval.permutation_test_score(
svm, X, y, n_permutations=30, cv=cv, scoring="accuracy")
assert_greater(score, 0.9)
assert_almost_equal(pvalue, 0.0, 1)
score_label, _, pvalue_label = cval.permutation_test_score(
svm, X, y, n_permutations=30, cv=cv, scoring="accuracy",
labels=np.ones(y.size), random_state=0)
assert_true(score_label == score)
assert_true(pvalue_label == pvalue)
# check that we obtain the same results with a sparse representation
svm_sparse = SVC(kernel='linear')
cv_sparse = cval.StratifiedKFold(y, 2)
score_label, _, pvalue_label = cval.permutation_test_score(
svm_sparse, X_sparse, y, n_permutations=30, cv=cv_sparse,
scoring="accuracy", labels=np.ones(y.size), random_state=0)
assert_true(score_label == score)
assert_true(pvalue_label == pvalue)
# test with custom scoring object
def custom_score(y_true, y_pred):
return (((y_true == y_pred).sum() - (y_true != y_pred).sum())
/ y_true.shape[0])
scorer = make_scorer(custom_score)
score, _, pvalue = cval.permutation_test_score(
svm, X, y, n_permutations=100, scoring=scorer, cv=cv, random_state=0)
assert_almost_equal(score, .93, 2)
assert_almost_equal(pvalue, 0.01, 3)
# set random y
y = np.mod(np.arange(len(y)), 3)
score, scores, pvalue = cval.permutation_test_score(
svm, X, y, n_permutations=30, cv=cv, scoring="accuracy")
assert_less(score, 0.5)
assert_greater(pvalue, 0.2)
def test_cross_val_generator_with_indices():
X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
y = np.array([1, 1, 2, 2])
labels = np.array([1, 2, 3, 4])
# explicitly passing indices value is deprecated
loo = cval.LeaveOneOut(4)
lpo = cval.LeavePOut(4, 2)
kf = cval.KFold(4, 2)
skf = cval.StratifiedKFold(y, 2)
lolo = cval.LeaveOneLabelOut(labels)
lopo = cval.LeavePLabelOut(labels, 2)
ps = cval.PredefinedSplit([1, 1, 2, 2])
ss = cval.ShuffleSplit(2)
for cv in [loo, lpo, kf, skf, lolo, lopo, ss, ps]:
for train, test in cv:
assert_not_equal(np.asarray(train).dtype.kind, 'b')
assert_not_equal(np.asarray(train).dtype.kind, 'b')
X[train], X[test]
y[train], y[test]
@ignore_warnings
def test_cross_val_generator_with_default_indices():
X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
y = np.array([1, 1, 2, 2])
labels = np.array([1, 2, 3, 4])
loo = cval.LeaveOneOut(4)
lpo = cval.LeavePOut(4, 2)
kf = cval.KFold(4, 2)
skf = cval.StratifiedKFold(y, 2)
lolo = cval.LeaveOneLabelOut(labels)
lopo = cval.LeavePLabelOut(labels, 2)
ss = cval.ShuffleSplit(2)
ps = cval.PredefinedSplit([1, 1, 2, 2])
for cv in [loo, lpo, kf, skf, lolo, lopo, ss, ps]:
for train, test in cv:
assert_not_equal(np.asarray(train).dtype.kind, 'b')
assert_not_equal(np.asarray(train).dtype.kind, 'b')
X[train], X[test]
y[train], y[test]
def test_shufflesplit_errors():
assert_raises(ValueError, cval.ShuffleSplit, 10, test_size=2.0)
assert_raises(ValueError, cval.ShuffleSplit, 10, test_size=1.0)
assert_raises(ValueError, cval.ShuffleSplit, 10, test_size=0.1,
train_size=0.95)
assert_raises(ValueError, cval.ShuffleSplit, 10, test_size=11)
assert_raises(ValueError, cval.ShuffleSplit, 10, test_size=10)
assert_raises(ValueError, cval.ShuffleSplit, 10, test_size=8, train_size=3)
assert_raises(ValueError, cval.ShuffleSplit, 10, train_size=1j)
assert_raises(ValueError, cval.ShuffleSplit, 10, test_size=None,
train_size=None)
def test_shufflesplit_reproducible():
# Check that iterating twice on the ShuffleSplit gives the same
# sequence of train-test when the random_state is given
ss = cval.ShuffleSplit(10, random_state=21)
assert_array_equal(list(a for a, b in ss), list(a for a, b in ss))
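The reproducibility checked here comes from seeding the RNG once per iterator, so every pass over the splitter replays the same shuffles. A stdlib sketch of that mechanism (a toy generator, not the real `ShuffleSplit`):

```python
import random

def shuffle_split(n, n_iter=10, test_size=0.1, random_state=None):
    # Seed once up front; iterating with the same seed then yields
    # the same sequence of shuffled (train, test) index pairs.
    rng = random.Random(random_state)
    n_test = max(1, int(n * test_size))
    for _ in range(n_iter):
        idx = list(range(n))
        rng.shuffle(idx)
        yield idx[n_test:], idx[:n_test]
```

Two fresh generators built with the same `random_state` produce identical split sequences, mirroring the assertion above.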
def test_safe_split_with_precomputed_kernel():
clf = SVC()
clfp = SVC(kernel="precomputed")
iris = load_iris()
X, y = iris.data, iris.target
K = np.dot(X, X.T)
cv = cval.ShuffleSplit(X.shape[0], test_size=0.25, random_state=0)
tr, te = list(cv)[0]
X_tr, y_tr = cval._safe_split(clf, X, y, tr)
K_tr, y_tr2 = cval._safe_split(clfp, K, y, tr)
assert_array_almost_equal(K_tr, np.dot(X_tr, X_tr.T))
X_te, y_te = cval._safe_split(clf, X, y, te, tr)
K_te, y_te2 = cval._safe_split(clfp, K, y, te, tr)
assert_array_almost_equal(K_te, np.dot(X_te, X_tr.T))
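What `_safe_split` is being checked for can be sketched with plain lists: a precomputed kernel must be indexed on both axes, and the test block keeps *training* columns so that entry `[a][b]` is the inner product of test sample `a` with training sample `b`. The helper below is hypothetical, for illustration only:

```python
def kernel_split(K, tr, te):
    # Train block: rows and columns both restricted to training indices.
    K_tr = [[K[i][j] for j in tr] for i in tr]
    # Test block: test rows but training columns, matching the
    # K_te == dot(X_te, X_tr.T) assertion above.
    K_te = [[K[i][j] for j in tr] for i in te]
    return K_tr, K_te
```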
def test_cross_val_score_allow_nans():
# Check that cross_val_score allows input data with NaNs
X = np.arange(200, dtype=np.float64).reshape(10, -1)
X[2, :] = np.nan
y = np.repeat([0, 1], X.shape[0] / 2)
p = Pipeline([
('imputer', Imputer(strategy='mean', missing_values='NaN')),
('classifier', MockClassifier()),
])
cval.cross_val_score(p, X, y, cv=5)
def test_train_test_split_allow_nans():
# Check that train_test_split allows input data with NaNs
X = np.arange(200, dtype=np.float64).reshape(10, -1)
X[2, :] = np.nan
y = np.repeat([0, 1], X.shape[0] / 2)
cval.train_test_split(X, y, test_size=0.2, random_state=42)
def test_permutation_test_score_allow_nans():
# Check that permutation_test_score allows input data with NaNs
X = np.arange(200, dtype=np.float64).reshape(10, -1)
X[2, :] = np.nan
y = np.repeat([0, 1], X.shape[0] / 2)
p = Pipeline([
('imputer', Imputer(strategy='mean', missing_values='NaN')),
('classifier', MockClassifier()),
])
cval.permutation_test_score(p, X, y, cv=5)
def test_check_cv_return_types():
X = np.ones((9, 2))
cv = cval.check_cv(3, X, classifier=False)
assert_true(isinstance(cv, cval.KFold))
y_binary = np.array([0, 1, 0, 1, 0, 0, 1, 1, 1])
cv = cval.check_cv(3, X, y_binary, classifier=True)
assert_true(isinstance(cv, cval.StratifiedKFold))
y_multiclass = np.array([0, 1, 0, 1, 2, 1, 2, 0, 2])
cv = cval.check_cv(3, X, y_multiclass, classifier=True)
assert_true(isinstance(cv, cval.StratifiedKFold))
X = np.ones((5, 2))
y_seq_of_seqs = [[], [1, 2], [3], [0, 1, 3], [2]]
with warnings.catch_warnings(record=True):
# deprecated sequence of sequence format
cv = cval.check_cv(3, X, y_seq_of_seqs, classifier=True)
assert_true(isinstance(cv, cval.KFold))
y_indicator_matrix = LabelBinarizer().fit_transform(y_seq_of_seqs)
cv = cval.check_cv(3, X, y_indicator_matrix, classifier=True)
assert_true(isinstance(cv, cval.KFold))
y_multioutput = np.array([[1, 2], [0, 3], [0, 0], [3, 1], [2, 0]])
cv = cval.check_cv(3, X, y_multioutput, classifier=True)
assert_true(isinstance(cv, cval.KFold))
def test_cross_val_score_multilabel():
X = np.array([[-3, 4], [2, 4], [3, 3], [0, 2], [-3, 1],
[-2, 1], [0, 0], [-2, -1], [-1, -2], [1, -2]])
y = np.array([[1, 1], [0, 1], [0, 1], [0, 1], [1, 1],
[0, 1], [1, 0], [1, 1], [1, 0], [0, 0]])
clf = KNeighborsClassifier(n_neighbors=1)
scoring_micro = make_scorer(precision_score, average='micro')
scoring_macro = make_scorer(precision_score, average='macro')
scoring_samples = make_scorer(precision_score, average='samples')
score_micro = cval.cross_val_score(clf, X, y, scoring=scoring_micro, cv=5)
score_macro = cval.cross_val_score(clf, X, y, scoring=scoring_macro, cv=5)
score_samples = cval.cross_val_score(clf, X, y,
scoring=scoring_samples, cv=5)
assert_almost_equal(score_micro, [1, 1 / 2, 3 / 4, 1 / 2, 1 / 3])
assert_almost_equal(score_macro, [1, 1 / 2, 3 / 4, 1 / 2, 1 / 4])
assert_almost_equal(score_samples, [1, 1 / 2, 3 / 4, 1 / 2, 1 / 4])
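The averaging modes compared above differ in *when* the division happens: micro pools all label decisions first, macro averages per-label precisions. A minimal pure-Python sketch of the two (not scikit-learn's implementation; 'samples' averaging is omitted):

```python
def micro_macro_precision(y_true, y_pred, n_labels=2):
    # Count true/false positives per label over a binary indicator matrix.
    tp = [0] * n_labels
    fp = [0] * n_labels
    for t_row, p_row in zip(y_true, y_pred):
        for j in range(n_labels):
            if p_row[j] == 1:
                if t_row[j] == 1:
                    tp[j] += 1
                else:
                    fp[j] += 1
    # Micro: one global ratio; macro: mean of per-label ratios.
    micro = sum(tp) / float(sum(tp) + sum(fp))
    per_label = [tp[j] / float(tp[j] + fp[j]) if tp[j] + fp[j] else 0.0
                 for j in range(n_labels)]
    macro = sum(per_label) / float(n_labels)
    return micro, macro
```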
def test_cross_val_predict():
boston = load_boston()
X, y = boston.data, boston.target
cv = cval.KFold(len(boston.target))
est = Ridge()
# Naive loop (should be same as cross_val_predict):
preds2 = np.zeros_like(y)
for train, test in cv:
est.fit(X[train], y[train])
preds2[test] = est.predict(X[test])
preds = cval.cross_val_predict(est, X, y, cv=cv)
assert_array_almost_equal(preds, preds2)
preds = cval.cross_val_predict(est, X, y)
assert_equal(len(preds), len(y))
cv = cval.LeaveOneOut(len(y))
preds = cval.cross_val_predict(est, X, y, cv=cv)
assert_equal(len(preds), len(y))
Xsp = X.copy()
Xsp *= (Xsp > np.median(Xsp))
Xsp = coo_matrix(Xsp)
preds = cval.cross_val_predict(est, Xsp, y)
    assert_equal(len(preds), len(y))
preds = cval.cross_val_predict(KMeans(), X)
assert_equal(len(preds), len(y))
def bad_cv():
for i in range(4):
yield np.array([0, 1, 2, 3]), np.array([4, 5, 6, 7, 8])
assert_raises(ValueError, cval.cross_val_predict, est, X, y, cv=bad_cv())
def test_cross_val_predict_input_types():
clf = Ridge()
# Smoke test
predictions = cval.cross_val_predict(clf, X, y)
assert_equal(predictions.shape, (10,))
    # test with sparse X and multioutput y
    predictions = cval.cross_val_predict(clf, X_sparse, X)
    assert_equal(predictions.shape, (10, 2))
    # test with sparse X
    predictions = cval.cross_val_predict(clf, X_sparse, y)
    assert_array_equal(predictions.shape, (10,))
    # test with sparse X and multioutput y
    predictions = cval.cross_val_predict(clf, X_sparse, X)
    assert_array_equal(predictions.shape, (10, 2))
# test with X and y as list
list_check = lambda x: isinstance(x, list)
clf = CheckingClassifier(check_X=list_check)
predictions = cval.cross_val_predict(clf, X.tolist(), y.tolist())
clf = CheckingClassifier(check_y=list_check)
predictions = cval.cross_val_predict(clf, X, y.tolist())
    # test with 3d X
X_3d = X[:, :, np.newaxis]
check_3d = lambda x: x.ndim == 3
clf = CheckingClassifier(check_X=check_3d)
predictions = cval.cross_val_predict(clf, X_3d, y)
assert_array_equal(predictions.shape, (10,))
def test_cross_val_predict_pandas():
# check cross_val_score doesn't destroy pandas dataframe
types = [(MockDataFrame, MockDataFrame)]
try:
from pandas import Series, DataFrame
types.append((Series, DataFrame))
except ImportError:
pass
for TargetType, InputFeatureType in types:
# X dataframe, y series
X_df, y_ser = InputFeatureType(X), TargetType(y)
check_df = lambda x: isinstance(x, InputFeatureType)
check_series = lambda x: isinstance(x, TargetType)
clf = CheckingClassifier(check_X=check_df, check_y=check_series)
cval.cross_val_predict(clf, X_df, y_ser)
def test_sparse_fit_params():
iris = load_iris()
X, y = iris.data, iris.target
clf = MockClassifier()
fit_params = {'sparse_sample_weight': coo_matrix(np.eye(X.shape[0]))}
a = cval.cross_val_score(clf, X, y, fit_params=fit_params)
assert_array_equal(a, np.ones(3))
def test_check_is_partition():
p = np.arange(100)
assert_true(cval._check_is_partition(p, 100))
assert_false(cval._check_is_partition(np.delete(p, 23), 100))
p[0] = 23
assert_false(cval._check_is_partition(p, 100))
| bsd-3-clause |
alexmandujano/django | django/conf/locale/nb/formats.py | 118 | 1763 | # -*- encoding: utf-8 -*-
# This file is distributed under the same license as the Django package.
#
from __future__ import unicode_literals
# The *_FORMAT strings use the Django date format syntax,
# see http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
DATE_FORMAT = 'j. F Y'
TIME_FORMAT = 'H:i'
DATETIME_FORMAT = 'j. F Y H:i'
YEAR_MONTH_FORMAT = 'F Y'
MONTH_DAY_FORMAT = 'j. F'
SHORT_DATE_FORMAT = 'd.m.Y'
SHORT_DATETIME_FORMAT = 'd.m.Y H:i'
FIRST_DAY_OF_WEEK = 1 # Monday
# The *_INPUT_FORMATS strings use the Python strftime format syntax,
# see http://docs.python.org/library/datetime.html#strftime-strptime-behavior
# Kept ISO formats as they are in first position
DATE_INPUT_FORMATS = (
'%Y-%m-%d', '%d.%m.%Y', '%d.%m.%y', # '2006-10-25', '25.10.2006', '25.10.06'
# '%d. %b %Y', '%d %b %Y', # '25. okt 2006', '25 okt 2006'
# '%d. %b. %Y', '%d %b. %Y', # '25. okt. 2006', '25 okt. 2006'
# '%d. %B %Y', '%d %B %Y', # '25. oktober 2006', '25 oktober 2006'
)
DATETIME_INPUT_FORMATS = (
'%Y-%m-%d %H:%M:%S', # '2006-10-25 14:30:59'
'%Y-%m-%d %H:%M:%S.%f', # '2006-10-25 14:30:59.000200'
'%Y-%m-%d %H:%M', # '2006-10-25 14:30'
'%Y-%m-%d', # '2006-10-25'
'%d.%m.%Y %H:%M:%S', # '25.10.2006 14:30:59'
'%d.%m.%Y %H:%M:%S.%f', # '25.10.2006 14:30:59.000200'
'%d.%m.%Y %H:%M', # '25.10.2006 14:30'
'%d.%m.%Y', # '25.10.2006'
'%d.%m.%y %H:%M:%S', # '25.10.06 14:30:59'
'%d.%m.%y %H:%M:%S.%f', # '25.10.06 14:30:59.000200'
'%d.%m.%y %H:%M', # '25.10.06 14:30'
'%d.%m.%y', # '25.10.06'
)
DECIMAL_SEPARATOR = ','
THOUSAND_SEPARATOR = '\xa0' # non-breaking space
NUMBER_GROUPING = 3
| bsd-3-clause |
barbour-em/osf.io | scripts/log_analytics.py | 40 | 3817 | from __future__ import print_function, absolute_import
import time
import sys
import argparse
from faker import Faker
import itertools
from random import choice
from modularodm import Q
from framework.auth import Auth
from website.app import init_app
from website import models, security
from tests.factories import NodeFactory
from website.models import NodeLog, Node
fake = Faker()
app = None
CATEGORY_MAP = Node.CATEGORY_MAP
descriptors = CATEGORY_MAP.keys()
def create_fake_projects(creator, depth, num_logs, level=1, parent=None):
#auth = Auth(user=creator)
if depth < 0:
return None
descriptor = choice(descriptors) if (level % 2 == 0) else 'project'
project_title = parent.title + (': ' + CATEGORY_MAP[descriptor]) if (level % 2 == 0) else fake.word()
    project = NodeFactory.build(title=project_title, description=fake.sentence(), creator=creator, parent=parent, is_public=True, privacy='public', category=descriptor)
project.save()
for i in range(int(num_logs)):
project.add_log('wiki_updated', {
'node': project._id,
},
Auth(creator),
)
project.save()
nextlevel = level + 1
nextdepth = int(depth) - 1
for i in range(nextlevel):
create_fake_projects(creator, nextdepth, num_logs, nextlevel, project)
return project
class Result(object):
def __init__(self, *args, **kwargs):
self.keys = kwargs.keys()
for k, v in kwargs.iteritems():
setattr(self, k, v)
@property
def data(self):
ret = {}
for k in self.keys:
ret[k] = getattr(self, k)
return ret
def get_nodes_recursive(project, include):
    # Each recursive call already returns the child node itself, so the
    # children must not be concatenated in a second time.
    descendants = [item for node in project.nodes for item in get_nodes_recursive(node, include)]
    return [project] + [desc for desc in descendants if include(desc)]
def get_aggregate_logs(ids, user, count=100):
query = Q('params.node', 'in', ids)
return list(NodeLog.find(query).sort('date').limit(int(count)))
def get_logs(user, project, depth):
print ("Fetching logs")
t0 = time.clock()
nodes = get_nodes_recursive(project, lambda p: p.can_view(Auth(user)))
ids = [n._id for n in nodes]
t1 = time.clock()
logs = get_aggregate_logs(ids, user)
logs = [l for l in logs]
t2 = time.clock()
agg_time = t1 - t0
fetch_time = t2 - t1
total_time = t2 - t0
print ("Took {0}s to fetch {1} logs with a depth of {2}".format(total_time, len(logs), depth))
return Result(
total_time=total_time,
agg_time=agg_time,
fetch_time=fetch_time,
num_logs=len(logs),
depth=depth,
)
def clean_up(creator, project):
if len(project.nodes) == 0:
project.remove_node(Auth(creator))
else:
[clean_up(creator, node) for node in project.nodes]
def parse_args():
parser = argparse.ArgumentParser(description='Create fake data.')
parser.add_argument('-u', '--user', dest='user', required=True)
parser.add_argument('-d', '--depth', dest='depth', default=2)
parser.add_argument('-l', '--num-logs', dest='num_logs', default=10)
parser.add_argument('-c', '--clean_up', dest='clean_up', default='true')
return parser.parse_args()
def run(username, depth, num_logs, should_clean_up='true'):
    app = init_app('website.settings', set_backends=True, routes=True)
    creator = models.User.find(Q('username', 'eq', username))[0]
    project = create_fake_projects(creator, depth, num_logs)
    ret = get_logs(creator, project, depth)
    if should_clean_up in ('true', 'True', True):
        clean_up(creator, project)
    return ret
def main():
    args = parse_args()
    run(args.user, int(args.depth or 0), int(args.num_logs or 0),
        args.clean_up)
sys.exit(0)
if __name__ == '__main__':
main()
| apache-2.0 |
romankagan/DDBWorkbench | python/helpers/docutils/parsers/rst/__init__.py | 42 | 14143 | # $Id: __init__.py 6314 2010-04-26 10:04:17Z milde $
# Author: David Goodger <goodger@python.org>
# Copyright: This module has been placed in the public domain.
"""
This is the ``docutils.parsers.rst`` package. It exports a single class, `Parser`,
the reStructuredText parser.
Usage
=====
1. Create a parser::
parser = docutils.parsers.rst.Parser()
Several optional arguments may be passed to modify the parser's behavior.
Please see `Customizing the Parser`_ below for details.
2. Gather input (a multi-line string), by reading a file or the standard
input::
input = sys.stdin.read()
3. Create a new empty `docutils.nodes.document` tree::
document = docutils.utils.new_document(source, settings)
See `docutils.utils.new_document()` for parameter details.
4. Run the parser, populating the document tree::
parser.parse(input, document)
Parser Overview
===============
The reStructuredText parser is implemented as a state machine, examining its
input one line at a time. To understand how the parser works, please first
become familiar with the `docutils.statemachine` module, then see the
`states` module.
Customizing the Parser
----------------------
Anything that isn't already customizable is that way simply because that type
of customizability hasn't been implemented yet. Patches welcome!
When instantiating an object of the `Parser` class, two parameters may be
passed: ``rfc2822`` and ``inliner``. Pass ``rfc2822=1`` to enable an initial
RFC-2822 style header block, parsed as a "field_list" element (with "class"
attribute set to "rfc2822"). Currently this is the only body-level element
which is customizable without subclassing. (Tip: subclass `Parser` and change
its "state_classes" and "initial_state" attributes to refer to new classes.
Contact the author if you need more details.)
The ``inliner`` parameter takes an instance of `states.Inliner` or a subclass.
It handles inline markup recognition. A common extension is the addition of
further implicit hyperlinks, like "RFC 2822". This can be done by subclassing
`states.Inliner`, adding a new method for the implicit markup, and adding a
``(pattern, method)`` pair to the "implicit_dispatch" attribute of the
subclass. See `states.Inliner.implicit_inline()` for details. Explicit
inline markup can be customized in a `states.Inliner` subclass via the
``patterns.initial`` and ``dispatch`` attributes (and new methods as
appropriate).
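As a self-contained sketch of that ``(pattern, method)`` dispatch (simplified
names and return values, not the actual ``states.Inliner`` API):

```python
import re

class MiniInliner(object):
    # Simplified stand-in for states.Inliner: each implicit_dispatch
    # entry pairs a compiled pattern with the method that turns a
    # match into markup (here, just a URL string).
    def __init__(self):
        self.implicit_dispatch = [
            (re.compile(r'RFC[- ]?(\d+)'), self.rfc_reference),
        ]

    def rfc_reference(self, match):
        return 'http://www.faqs.org/rfcs/rfc%s.html' % match.group(1)

    def implicit_inline(self, text):
        for pattern, method in self.implicit_dispatch:
            match = pattern.search(text)
            if match:
                return method(match)
        return None
```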
"""
__docformat__ = 'reStructuredText'
import docutils.parsers
import docutils.statemachine
from docutils.parsers.rst import states
from docutils import frontend, nodes
class Parser(docutils.parsers.Parser):
"""The reStructuredText parser."""
supported = ('restructuredtext', 'rst', 'rest', 'restx', 'rtxt', 'rstx')
"""Aliases this parser supports."""
settings_spec = (
'reStructuredText Parser Options',
None,
(('Recognize and link to standalone PEP references (like "PEP 258").',
['--pep-references'],
{'action': 'store_true', 'validator': frontend.validate_boolean}),
('Base URL for PEP references '
'(default "http://www.python.org/dev/peps/").',
['--pep-base-url'],
{'metavar': '<URL>', 'default': 'http://www.python.org/dev/peps/',
'validator': frontend.validate_url_trailing_slash}),
('Template for PEP file part of URL. (default "pep-%04d")',
['--pep-file-url-template'],
{'metavar': '<URL>', 'default': 'pep-%04d'}),
('Recognize and link to standalone RFC references (like "RFC 822").',
['--rfc-references'],
{'action': 'store_true', 'validator': frontend.validate_boolean}),
('Base URL for RFC references (default "http://www.faqs.org/rfcs/").',
['--rfc-base-url'],
{'metavar': '<URL>', 'default': 'http://www.faqs.org/rfcs/',
'validator': frontend.validate_url_trailing_slash}),
('Set number of spaces for tab expansion (default 8).',
['--tab-width'],
{'metavar': '<width>', 'type': 'int', 'default': 8,
'validator': frontend.validate_nonnegative_int}),
('Remove spaces before footnote references.',
['--trim-footnote-reference-space'],
{'action': 'store_true', 'validator': frontend.validate_boolean}),
('Leave spaces before footnote references.',
['--leave-footnote-reference-space'],
{'action': 'store_false', 'dest': 'trim_footnote_reference_space'}),
('Disable directives that insert the contents of external file '
'("include" & "raw"); replaced with a "warning" system message.',
['--no-file-insertion'],
{'action': 'store_false', 'default': 1,
'dest': 'file_insertion_enabled',
'validator': frontend.validate_boolean}),
('Enable directives that insert the contents of external file '
'("include" & "raw"). Enabled by default.',
['--file-insertion-enabled'],
{'action': 'store_true'}),
('Disable the "raw" directives; replaced with a "warning" '
'system message.',
['--no-raw'],
{'action': 'store_false', 'default': 1, 'dest': 'raw_enabled',
'validator': frontend.validate_boolean}),
('Enable the "raw" directive. Enabled by default.',
['--raw-enabled'],
{'action': 'store_true'}),))
config_section = 'restructuredtext parser'
config_section_dependencies = ('parsers',)
def __init__(self, rfc2822=None, inliner=None):
if rfc2822:
self.initial_state = 'RFC2822Body'
else:
self.initial_state = 'Body'
self.state_classes = states.state_classes
self.inliner = inliner
def parse(self, inputstring, document):
"""Parse `inputstring` and populate `document`, a document tree."""
self.setup_parse(inputstring, document)
self.statemachine = states.RSTStateMachine(
state_classes=self.state_classes,
initial_state=self.initial_state,
debug=document.reporter.debug_flag)
inputlines = docutils.statemachine.string2lines(
inputstring, tab_width=document.settings.tab_width,
convert_whitespace=1)
self.statemachine.run(inputlines, document, inliner=self.inliner)
self.finish_parse()
class DirectiveError(Exception):
"""
Store a message and a system message level.
To be thrown from inside directive code.
Do not instantiate directly -- use `Directive.directive_error()`
instead!
"""
def __init__(self, level, message):
"""Set error `message` and `level`"""
Exception.__init__(self)
self.level = level
self.msg = message
class Directive(object):
"""
Base class for reStructuredText directives.
The following attributes may be set by subclasses. They are
interpreted by the directive parser (which runs the directive
class):
- `required_arguments`: The number of required arguments (default:
0).
- `optional_arguments`: The number of optional arguments (default:
0).
- `final_argument_whitespace`: A boolean, indicating if the final
argument may contain whitespace (default: False).
- `option_spec`: A dictionary, mapping known option names to
conversion functions such as `int` or `float` (default: {}, no
options). Several conversion functions are defined in the
directives/__init__.py module.
Option conversion functions take a single parameter, the option
argument (a string or ``None``), validate it and/or convert it
to the appropriate form. Conversion functions may raise
`ValueError` and `TypeError` exceptions.
- `has_content`: A boolean; True if content is allowed. Client
code must handle the case where content is required but not
supplied (an empty content list will be supplied).
Arguments are normally single whitespace-separated words. The
final argument may contain whitespace and/or newlines if
`final_argument_whitespace` is True.
If the form of the arguments is more complex, specify only one
argument (either required or optional) and set
`final_argument_whitespace` to True; the client code must do any
context-sensitive parsing.
When a directive implementation is being run, the directive class
is instantiated, and the `run()` method is executed. During
instantiation, the following instance variables are set:
- ``name`` is the directive type or name (string).
- ``arguments`` is the list of positional arguments (strings).
- ``options`` is a dictionary mapping option names (strings) to
values (type depends on option conversion functions; see
`option_spec` above).
- ``content`` is a list of strings, the directive content line by line.
- ``lineno`` is the absolute line number of the first line
of the directive.
- ``src`` is the name (or path) of the rst source of the directive.
- ``srcline`` is the line number of the first line of the directive
in its source. It may differ from ``lineno``, if the main source
includes other sources with the ``.. include::`` directive.
- ``content_offset`` is the line offset of the first line of the content from
the beginning of the current input. Used when initiating a nested parse.
- ``block_text`` is a string containing the entire directive.
- ``state`` is the state which called the directive function.
- ``state_machine`` is the state machine which controls the state which called
the directive function.
Directive functions return a list of nodes which will be inserted
into the document tree at the point where the directive was
encountered. This can be an empty list if there is nothing to
insert.
For ordinary directives, the list must contain body elements or
structural elements. Some directives are intended specifically
for substitution definitions, and must return a list of `Text`
nodes and/or inline elements (suitable for inline insertion, in
place of the substitution reference). Such directives must verify
substitution definition context, typically using code like this::
if not isinstance(state, states.SubstitutionDef):
error = state_machine.reporter.error(
'Invalid context: the "%s" directive can only be used '
'within a substitution definition.' % (name),
nodes.literal_block(block_text, block_text), line=lineno)
return [error]
"""
# There is a "Creating reStructuredText Directives" how-to at
# <http://docutils.sf.net/docs/howto/rst-directives.html>. If you
# update this docstring, please update the how-to as well.
required_arguments = 0
"""Number of required directive arguments."""
optional_arguments = 0
"""Number of optional arguments after the required arguments."""
final_argument_whitespace = False
"""May the final argument contain whitespace?"""
option_spec = None
"""Mapping of option names to validator functions."""
has_content = False
"""May the directive have content?"""
def __init__(self, name, arguments, options, content, lineno,
content_offset, block_text, state, state_machine):
self.name = name
self.arguments = arguments
self.options = options
self.content = content
self.lineno = lineno
self.content_offset = content_offset
self.block_text = block_text
self.state = state
self.state_machine = state_machine
        self.src, self.srcline = state_machine.get_source_and_line(lineno)
def run(self):
        raise NotImplementedError('Must override run() in subclass.')
# Directive errors:
def directive_error(self, level, message):
"""
Return a DirectiveError suitable for being thrown as an exception.
Call "raise self.directive_error(level, message)" from within
a directive implementation to return one single system message
at level `level`, which automatically gets the directive block
and the line number added.
You'd often use self.error(message) instead, which will
generate an ERROR-level directive error.
"""
return DirectiveError(level, message)
def debug(self, message):
return self.directive_error(0, message)
def info(self, message):
return self.directive_error(1, message)
def warning(self, message):
return self.directive_error(2, message)
def error(self, message):
return self.directive_error(3, message)
def severe(self, message):
return self.directive_error(4, message)
# Convenience methods:
def assert_has_content(self):
"""
Throw an ERROR-level DirectiveError if the directive doesn't
have contents.
"""
if not self.content:
raise self.error('Content block expected for the "%s" directive; '
'none found.' % self.name)
def convert_directive_function(directive_fn):
"""
Define & return a directive class generated from `directive_fn`.
`directive_fn` uses the old-style, functional interface.
"""
class FunctionalDirective(Directive):
option_spec = getattr(directive_fn, 'options', None)
has_content = getattr(directive_fn, 'content', False)
_argument_spec = getattr(directive_fn, 'arguments', (0, 0, False))
required_arguments, optional_arguments, final_argument_whitespace \
= _argument_spec
def run(self):
return directive_fn(
self.name, self.arguments, self.options, self.content,
self.lineno, self.content_offset, self.block_text,
self.state, self.state_machine)
# Return new-style directive.
return FunctionalDirective
| apache-2.0 |
quinot/ansible | test/units/module_utils/test_database.py | 178 | 4377 | import pytest
from ansible.module_utils.database import (
pg_quote_identifier,
SQLParseError,
)
# These are all valid strings
# The results are based on interpreting the identifier as a table name
VALID = {
# User quoted
'"public.table"': '"public.table"',
'"public"."table"': '"public"."table"',
'"schema test"."table test"': '"schema test"."table test"',
# We quote part
'public.table': '"public"."table"',
'"public".table': '"public"."table"',
'public."table"': '"public"."table"',
'schema test.table test': '"schema test"."table test"',
'"schema test".table test': '"schema test"."table test"',
'schema test."table test"': '"schema test"."table test"',
# Embedded double quotes
'table "test"': '"table ""test"""',
'public."table ""test"""': '"public"."table ""test"""',
'public.table "test"': '"public"."table ""test"""',
'schema "test".table': '"schema ""test"""."table"',
'"schema ""test""".table': '"schema ""test"""."table"',
'"""wat"""."""test"""': '"""wat"""."""test"""',
# Sigh, handle these as well:
'"no end quote': '"""no end quote"',
'schema."table': '"schema"."""table"',
'"schema.table': '"""schema"."table"',
'schema."table.something': '"schema"."""table"."something"',
# Embedded dots
'"schema.test"."table.test"': '"schema.test"."table.test"',
'"schema.".table': '"schema."."table"',
'"schema."."table"': '"schema."."table"',
'schema.".table"': '"schema".".table"',
'"schema".".table"': '"schema".".table"',
'"schema.".".table"': '"schema.".".table"',
# These are valid but maybe not what the user intended
'."table"': '".""table"""',
'table.': '"table."',
}
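The doubling rule these expectations encode, for a single already-split identifier part, can be sketched as follows (an illustrative helper, not `pg_quote_identifier` itself, which additionally handles dotted paths and user-quoted input):

```python
def quote_part(part):
    # PostgreSQL-style quoting of one identifier part: wrap in double
    # quotes and escape embedded double quotes by doubling them.
    return '"%s"' % part.replace('"', '""')
```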
INVALID = {
('test.too.many.dots', 'table'): 'PostgreSQL does not support table with more than 3 dots',
('"test.too".many.dots', 'database'): 'PostgreSQL does not support database with more than 1 dots',
('test.too."many.dots"', 'database'): 'PostgreSQL does not support database with more than 1 dots',
('"test"."too"."many"."dots"', 'database'): "PostgreSQL does not support database with more than 1 dots",
('"test"."too"."many"."dots"', 'schema'): "PostgreSQL does not support schema with more than 2 dots",
('"test"."too"."many"."dots"', 'table'): "PostgreSQL does not support table with more than 3 dots",
('"test"."too"."many"."dots"."for"."column"', 'column'): "PostgreSQL does not support column with more than 4 dots",
('"table "invalid" double quote"', 'table'): 'User escaped identifiers must escape extra quotes',
('"schema "invalid"""."table "invalid"', 'table'): 'User escaped identifiers must escape extra quotes',
('"schema."table"', 'table'): 'User escaped identifiers must escape extra quotes',
('"schema".', 'table'): 'Identifier name unspecified or unquoted trailing dot',
}
HOW_MANY_DOTS = (
('role', 'role', '"role"',
'PostgreSQL does not support role with more than 1 dots'),
('db', 'database', '"db"',
'PostgreSQL does not support database with more than 1 dots'),
('db.schema', 'schema', '"db"."schema"',
'PostgreSQL does not support schema with more than 2 dots'),
('db.schema.table', 'table', '"db"."schema"."table"',
'PostgreSQL does not support table with more than 3 dots'),
('db.schema.table.column', 'column', '"db"."schema"."table"."column"',
'PostgreSQL does not support column with more than 4 dots'),
)
VALID_QUOTES = ((test, VALID[test]) for test in VALID)
INVALID_QUOTES = ((test[0], test[1], INVALID[test]) for test in INVALID)
@pytest.mark.parametrize("identifier, quoted_identifier", VALID_QUOTES)
def test_valid_quotes(identifier, quoted_identifier):
assert pg_quote_identifier(identifier, 'table') == quoted_identifier
@pytest.mark.parametrize("identifier, id_type, msg", INVALID_QUOTES)
def test_invalid_quotes(identifier, id_type, msg):
with pytest.raises(SQLParseError) as ex:
pg_quote_identifier(identifier, id_type)
ex.match(msg)
@pytest.mark.parametrize("identifier, id_type, quoted_identifier, msg", HOW_MANY_DOTS)
def test_how_many_dots(identifier, id_type, quoted_identifier, msg):
assert pg_quote_identifier(identifier, id_type) == quoted_identifier
with pytest.raises(SQLParseError) as ex:
pg_quote_identifier('%s.more' % identifier, id_type)
ex.match(msg)
| gpl-3.0 |
dimid/ansible-modules-extras | cloud/amazon/ec2_vpc_igw.py | 15 | 4625 | #!/usr/bin/python
#
# This is a free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This Ansible library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this library. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: ec2_vpc_igw
short_description: Manage an AWS VPC Internet gateway
description:
- Manage an AWS VPC Internet gateway
version_added: "2.0"
author: Robert Estelle (@erydo)
options:
vpc_id:
description:
- The VPC ID for the VPC in which to manage the Internet Gateway.
required: true
default: null
state:
description:
- Create or terminate the IGW
required: false
default: present
choices: [ 'present', 'absent' ]
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Ensure that the VPC has an Internet Gateway.
# The Internet Gateway ID can be accessed via {{igw.gateway_id}} for use in setting up NATs etc.
ec2_vpc_igw:
vpc_id: vpc-abcdefgh
state: present
register: igw
'''
try:
import boto.ec2
import boto.vpc
from boto.exception import EC2ResponseError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
if __name__ != '__main__':
raise
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import AnsibleAWSError, connect_to_aws, ec2_argument_spec, get_aws_connection_info
class AnsibleIGWException(Exception):
pass
def ensure_igw_absent(vpc_conn, vpc_id, check_mode):
igws = vpc_conn.get_all_internet_gateways(
filters={'attachment.vpc-id': vpc_id})
if not igws:
return {'changed': False}
if check_mode:
return {'changed': True}
for igw in igws:
try:
vpc_conn.detach_internet_gateway(igw.id, vpc_id)
vpc_conn.delete_internet_gateway(igw.id)
except EC2ResponseError as e:
raise AnsibleIGWException(
'Unable to delete Internet Gateway, error: {0}'.format(e))
return {'changed': True}
def ensure_igw_present(vpc_conn, vpc_id, check_mode):
igws = vpc_conn.get_all_internet_gateways(
filters={'attachment.vpc-id': vpc_id})
if len(igws) > 1:
raise AnsibleIGWException(
'EC2 returned more than one Internet Gateway for VPC {0}, aborting'
.format(vpc_id))
if igws:
return {'changed': False, 'gateway_id': igws[0].id}
else:
if check_mode:
return {'changed': True, 'gateway_id': None}
try:
igw = vpc_conn.create_internet_gateway()
vpc_conn.attach_internet_gateway(igw.id, vpc_id)
return {'changed': True, 'gateway_id': igw.id}
except EC2ResponseError as e:
raise AnsibleIGWException(
'Unable to create Internet Gateway, error: {0}'.format(e))
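Stripped of the boto calls, `ensure_igw_present` follows the usual idempotent "ensure" shape: no change if the resource exists, report-only in check mode, otherwise create. A hypothetical skeleton of that control flow (not part of the module):

```python
def ensure_present(existing, check_mode, create):
    # Idempotent "ensure" pattern: existing resource means no change,
    # check mode reports the would-be change, otherwise create.
    if existing:
        return {'changed': False, 'id': existing[0]}
    if check_mode:
        return {'changed': True, 'id': None}
    return {'changed': True, 'id': create()}
```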
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
vpc_id = dict(required=True),
state = dict(default='present', choices=['present', 'absent'])
)
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
if not HAS_BOTO:
module.fail_json(msg='boto is required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
if region:
try:
connection = connect_to_aws(boto.vpc, region, **aws_connect_params)
except (boto.exception.NoAuthHandlerFound, AnsibleAWSError) as e:
module.fail_json(msg=str(e))
else:
module.fail_json(msg="region must be specified")
vpc_id = module.params.get('vpc_id')
state = module.params.get('state', 'present')
try:
if state == 'present':
result = ensure_igw_present(connection, vpc_id, check_mode=module.check_mode)
elif state == 'absent':
result = ensure_igw_absent(connection, vpc_id, check_mode=module.check_mode)
except AnsibleIGWException as e:
module.fail_json(msg=str(e))
module.exit_json(**result)
if __name__ == '__main__':
main()
| gpl-3.0 |
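The `ensure_igw_present`/`ensure_igw_absent` pair above follows the usual Ansible idempotency contract: report `changed` only when the cloud state actually differs from the requested state, and short-circuit before mutating anything when check mode is on. A minimal sketch of that contract with a stubbed connection (`FakeVPCConnection`, `ensure_present`, and the `igw-0example` id are hypothetical, for illustration only, not boto API):

```python
class FakeVPCConnection:
    """Hypothetical stand-in for a boto VPC connection (illustration only)."""

    def __init__(self, gateway_ids):
        # Gateway ids currently attached to the VPC.
        self.gateway_ids = list(gateway_ids)

    def get_all_internet_gateways(self, filters=None):
        return list(self.gateway_ids)

    def create_internet_gateway(self):
        self.gateway_ids.append('igw-0example')
        return 'igw-0example'


def ensure_present(conn, check_mode=False):
    """Mirror the module's result shape: {'changed': ..., 'gateway_id': ...}."""
    igws = conn.get_all_internet_gateways()
    if igws:
        # Desired state already holds: idempotent no-op.
        return {'changed': False, 'gateway_id': igws[0]}
    if check_mode:
        # Report what would change without touching anything.
        return {'changed': True, 'gateway_id': None}
    return {'changed': True, 'gateway_id': conn.create_internet_gateway()}
```

Run twice against the same connection, only the first call reports a change, which is exactly what makes the play re-runnable.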
martinbuc/missionplanner | Lib/site-packages/numpy/testing/nosetester.py | 53 | 13444 | """
Nose test running.
This module implements ``test()`` and ``bench()`` functions for NumPy modules.
"""
import os
import sys
def get_package_name(filepath):
"""
Given a path where a package is installed, determine its name.
Parameters
----------
filepath : str
Path to a file. If the determination fails, "numpy" is returned.
Examples
--------
>>> np.testing.nosetester.get_package_name('nonsense')
'numpy'
"""
fullpath = filepath[:]
pkg_name = []
while 'site-packages' in filepath or 'dist-packages' in filepath:
filepath, p2 = os.path.split(filepath)
if p2 in ('site-packages', 'dist-packages'):
break
pkg_name.append(p2)
# if package name determination failed, just default to numpy/scipy
if not pkg_name:
if 'scipy' in fullpath:
return 'scipy'
else:
return 'numpy'
# otherwise, reverse to get correct order and return
pkg_name.reverse()
# don't include the outer egg directory
if pkg_name[0].endswith('.egg'):
pkg_name.pop(0)
return '.'.join(pkg_name)
def import_nose():
""" Import nose only when needed.
"""
fine_nose = True
minimum_nose_version = (0,10,0)
try:
import nose
from nose.tools import raises
except ImportError:
fine_nose = False
else:
if nose.__versioninfo__ < minimum_nose_version:
fine_nose = False
if not fine_nose:
msg = 'Need nose >= %d.%d.%d for tests - see ' \
'http://somethingaboutorange.com/mrl/projects/nose' % \
minimum_nose_version
raise ImportError(msg)
return nose
def run_module_suite(file_to_run = None):
if file_to_run is None:
f = sys._getframe(1)
file_to_run = f.f_locals.get('__file__', None)
assert file_to_run is not None
import_nose().run(argv=['',file_to_run])
# constructs NoseTester method docstrings
def _docmethod(meth, testtype):
if not meth.__doc__:
return
test_header = \
'''Parameters
----------
label : {'fast', 'full', '', attribute identifier}
Identifies the %(testtype)ss to run. This can be a string to
pass to the nosetests executable with the '-A' option, or one of
several special values.
Special values are:
'fast' - the default - which corresponds to nosetests -A option
of 'not slow'.
'full' - fast (as above) and slow %(testtype)ss as in the
no -A option to nosetests - same as ''
None or '' - run all %(testtype)ss
attribute_identifier - string passed directly to nosetests as '-A'
verbose : integer
verbosity value for test outputs, 1-10
extra_argv : list
List with any extra args to pass to nosetests''' \
% {'testtype': testtype}
meth.__doc__ = meth.__doc__ % {'test_header':test_header}
class NoseTester(object):
"""
Nose test runner.
This class is made available as numpy.testing.Tester, and a test function
is typically added to a package's __init__.py like so::
from numpy.testing import Tester
test = Tester().test
Calling this test function finds and runs all tests associated with the
package and all its sub-packages.
Attributes
----------
package_path : str
Full path to the package to test.
package_name : str
Name of the package to test.
Parameters
----------
package : module, str or None
The package to test. If a string, this should be the full path to
the package. If None (default), `package` is set to the module from
which `NoseTester` is initialized.
"""
def __init__(self, package=None):
''' Test class init
Parameters
----------
package : string or module
If string, gives full path to package
If None, extract calling module path
Default is None
'''
package_name = None
if package is None:
f = sys._getframe(1)
package_path = f.f_locals.get('__file__', None)
assert package_path is not None
package_path = os.path.dirname(package_path)
package_name = f.f_locals.get('__name__', None)
elif isinstance(package, type(os)):
package_path = os.path.dirname(package.__file__)
package_name = getattr(package, '__name__', None)
elif os.path.isfile(package):
package_path = os.path.dirname(package)
else:
package_path = str(package)
self.package_path = package_path
# find the package name under test; this name is used to limit coverage
# reporting (if enabled)
if package_name is None:
package_name = get_package_name(package_path)
self.package_name = package_name
def _test_argv(self, label, verbose, extra_argv):
''' Generate argv for nosetest command
%(test_header)s
'''
argv = [__file__, self.package_path, '-s']
if label and label != 'full':
if not isinstance(label, basestring):
raise TypeError, 'Selection label should be a string'
if label == 'fast':
label = 'not slow'
argv += ['-A', label]
argv += ['--verbosity', str(verbose)]
if extra_argv:
argv += extra_argv
return argv
def _show_system_info(self):
nose = import_nose()
import numpy
print "NumPy version %s" % numpy.__version__
npdir = os.path.dirname(numpy.__file__)
print "NumPy is installed in %s" % npdir
if 'scipy' in self.package_name:
import scipy
print "SciPy version %s" % scipy.__version__
spdir = os.path.dirname(scipy.__file__)
print "SciPy is installed in %s" % spdir
pyversion = sys.version.replace('\n','')
print "Python version %s" % pyversion
print "nose version %d.%d.%d" % nose.__versioninfo__
def prepare_test_args(self, label='fast', verbose=1, extra_argv=None,
doctests=False, coverage=False):
"""
Run tests for module using nose.
This method does the heavy lifting for the `test` method. It takes all
the same arguments, for details see `test`.
See Also
--------
test
"""
# if doctests is in the extra args, remove it and set the doctest
# flag so the NumPy doctester is used instead
if extra_argv and '--with-doctest' in extra_argv:
extra_argv.remove('--with-doctest')
doctests = True
argv = self._test_argv(label, verbose, extra_argv)
if doctests:
argv += ['--with-numpydoctest']
if coverage:
argv += ['--cover-package=%s' % self.package_name, '--with-coverage',
'--cover-tests', '--cover-inclusive', '--cover-erase']
# bypass these samples under distutils
argv += ['--exclude','f2py_ext']
argv += ['--exclude','f2py_f90_ext']
argv += ['--exclude','gen_ext']
argv += ['--exclude','pyrex_ext']
argv += ['--exclude','swig_ext']
nose = import_nose()
# construct list of plugins
import nose.plugins.builtin
from noseclasses import NumpyDoctest, KnownFailure
plugins = [NumpyDoctest(), KnownFailure()]
plugins += [p() for p in nose.plugins.builtin.plugins]
return argv, plugins
def test(self, label='fast', verbose=1, extra_argv=None, doctests=False,
coverage=False):
"""
Run tests for module using nose.
Parameters
----------
label : {'fast', 'full', '', attribute identifier}, optional
Identifies the tests to run. This can be a string to pass to the
nosetests executable with the '-A' option, or one of
several special values.
Special values are:
'fast' - the default - which corresponds to the ``nosetests -A``
option of 'not slow'.
'full' - fast (as above) and slow tests as in the
'no -A' option to nosetests - this is the same as ''.
None or '' - run all tests.
attribute_identifier - string passed directly to nosetests as '-A'.
verbose : int, optional
Verbosity value for test outputs, in the range 1-10. Default is 1.
extra_argv : list, optional
List with any extra arguments to pass to nosetests.
doctests : bool, optional
If True, run doctests in module. Default is False.
coverage : bool, optional
If True, report coverage of NumPy code. Default is False.
(This requires the `coverage module:
<http://nedbatchelder.com/code/modules/coverage.html>`_).
Returns
-------
result : object
Returns the result of running the tests as a
``nose.result.TextTestResult`` object.
Notes
-----
Each NumPy module exposes `test` in its namespace to run all tests for it.
For example, to run all tests for numpy.lib::
>>> np.lib.test()
Examples
--------
>>> result = np.lib.test()
Running unit tests for numpy.lib
...
Ran 976 tests in 3.933s
OK
>>> result.errors
[]
>>> result.knownfail
[]
"""
# cap verbosity at 3 because nose becomes *very* verbose beyond that
verbose = min(verbose, 3)
import utils
utils.verbose = verbose
if doctests:
print "Running unit tests and doctests for %s" % self.package_name
else:
print "Running unit tests for %s" % self.package_name
self._show_system_info()
# reset doctest state on every run
import doctest
doctest.master = None
argv, plugins = self.prepare_test_args(label, verbose, extra_argv,
doctests, coverage)
from noseclasses import NumpyTestProgram
t = NumpyTestProgram(argv=argv, exit=False, plugins=plugins)
return t.result
def bench(self, label='fast', verbose=1, extra_argv=None):
"""
Run benchmarks for module using nose.
Parameters
----------
label : {'fast', 'full', '', attribute identifier}, optional
Identifies the tests to run. This can be a string to pass to the
nosetests executable with the '-A' option, or one of
several special values.
Special values are:
'fast' - the default - which corresponds to the ``nosetests -A``
option of 'not slow'.
'full' - fast (as above) and slow tests as in the
'no -A' option to nosetests - this is the same as ''.
None or '' - run all tests.
attribute_identifier - string passed directly to nosetests as '-A'.
verbose : int, optional
Verbosity value for test outputs, in the range 1-10. Default is 1.
extra_argv : list, optional
List with any extra arguments to pass to nosetests.
Returns
-------
success : bool
Returns True if running the benchmarks works, False if an error
occurred.
Notes
-----
Benchmarks are like tests, but have names starting with "bench" instead
of "test", and can be found under the "benchmarks" sub-directory of the
module.
Each NumPy module exposes `bench` in its namespace to run all benchmarks
for it.
Examples
--------
>>> success = np.lib.bench()
Running benchmarks for numpy.lib
...
using 562341 items:
unique:
0.11
unique1d:
0.11
ratio: 1.0
nUnique: 56230 == 56230
...
OK
>>> success
True
"""
print "Running benchmarks for %s" % self.package_name
self._show_system_info()
argv = self._test_argv(label, verbose, extra_argv)
argv += ['--match', r'(?:^|[\\b_\\.%s-])[Bb]ench' % os.sep]
nose = import_nose()
return nose.run(argv=argv)
# generate method docstrings
_docmethod(_test_argv, '(testtype)')
_docmethod(test, 'test')
_docmethod(bench, 'benchmark')
########################################################################
# Doctests for NumPy-specific nose/doctest modifications
# try the #random directive on the output line
def check_random_directive():
'''
>>> 2+2
<BadExample object at 0x084D05AC> #random: may vary on your system
'''
# check the implicit "import numpy as np"
def check_implicit_np():
'''
>>> np.array([1,2,3])
array([1, 2, 3])
'''
# there's some extraneous whitespace around the correct responses
def check_whitespace_enabled():
'''
# whitespace after the 3
>>> 1+2
3
# whitespace before the 7
>>> 3+4
7
'''
| gpl-3.0 |
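`get_package_name` above walks the install path upward until it crosses a `site-packages`/`dist-packages` component, collects the intermediate components, then reverses and joins them into a dotted package name (dropping an outer `.egg` directory). A condensed Python 3 sketch of the same walk (the function name `package_name_from_path` is mine, not numpy's):

```python
import os


def package_name_from_path(filepath, default='numpy'):
    """Derive a dotted package name from an install path, as above."""
    parts = []
    while 'site-packages' in filepath or 'dist-packages' in filepath:
        filepath, tail = os.path.split(filepath)
        if tail in ('site-packages', 'dist-packages'):
            break
        parts.append(tail)
    if not parts:
        # Determination failed: fall back to a default, like the original.
        return default
    parts.reverse()                # components were collected innermost-first
    if parts[0].endswith('.egg'):  # don't include the outer egg directory
        parts.pop(0)
    return '.'.join(parts)
```

For example, `/usr/lib/python2.7/site-packages/numpy/testing` maps to `numpy.testing`, while a path with no `site-packages` component falls back to the default.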
Aaron1992/v2ex | html5lib/tests/test_tokenizer.py | 72 | 6826 | import sys
import os
import unittest
import cStringIO
import warnings
import re
try:
import json
except ImportError:
import simplejson as json
from support import html5lib_test_files
from html5lib.tokenizer import HTMLTokenizer
from html5lib import constants
class TokenizerTestParser(object):
def __init__(self, initialState, lastStartTag=None):
self.tokenizer = HTMLTokenizer
self._state = initialState
self._lastStartTag = lastStartTag
def parse(self, stream, encoding=None, innerHTML=False):
tokenizer = self.tokenizer(stream, encoding)
self.outputTokens = []
tokenizer.state = getattr(tokenizer, self._state)
if self._lastStartTag is not None:
tokenizer.currentToken = {"type": "startTag",
"name":self._lastStartTag}
types = dict((v,k) for k,v in constants.tokenTypes.iteritems())
for token in tokenizer:
getattr(self, 'process%s' % types[token["type"]])(token)
return self.outputTokens
def processDoctype(self, token):
self.outputTokens.append([u"DOCTYPE", token["name"], token["publicId"],
token["systemId"], token["correct"]])
def processStartTag(self, token):
self.outputTokens.append([u"StartTag", token["name"],
dict(token["data"][::-1]), token["selfClosing"]])
def processEmptyTag(self, token):
if token["name"] not in constants.voidElements:
self.outputTokens.append(u"ParseError")
self.outputTokens.append([u"StartTag", token["name"], dict(token["data"][::-1])])
def processEndTag(self, token):
self.outputTokens.append([u"EndTag", token["name"],
token["selfClosing"]])
def processComment(self, token):
self.outputTokens.append([u"Comment", token["data"]])
def processSpaceCharacters(self, token):
self.outputTokens.append([u"Character", token["data"]])
# Rebind on the instance: after the first space-character token,
# subsequent ones are handled like ordinary character tokens.
self.processSpaceCharacters = self.processCharacters
def processCharacters(self, token):
self.outputTokens.append([u"Character", token["data"]])
def processEOF(self, token):
pass
def processParseError(self, token):
self.outputTokens.append([u"ParseError", token["data"]])
def concatenateCharacterTokens(tokens):
outputTokens = []
for token in tokens:
if "ParseError" not in token and token[0] == "Character":
if (outputTokens and "ParseError" not in outputTokens[-1] and
outputTokens[-1][0] == "Character"):
outputTokens[-1][1] += token[1]
else:
outputTokens.append(token)
else:
outputTokens.append(token)
return outputTokens
def normalizeTokens(tokens):
# TODO: convert tests to reflect arrays
for i, token in enumerate(tokens):
if token[0] == u'ParseError':
tokens[i] = token[0]
return tokens
def tokensMatch(expectedTokens, receivedTokens, ignoreErrorOrder,
ignoreErrors=False):
"""Test whether the test has passed or failed
If the ignoreErrorOrder flag is set to true we don't test the relative
positions of parse errors and non parse errors
"""
checkSelfClosing = False
for token in expectedTokens:
if (token[0] == "StartTag" and len(token) == 4
or token[0] == "EndTag" and len(token) == 3):
checkSelfClosing = True
break
if not checkSelfClosing:
for token in receivedTokens:
if token[0] == "StartTag" or token[0] == "EndTag":
token.pop()
if not ignoreErrorOrder and not ignoreErrors:
return expectedTokens == receivedTokens
else:
#Sort the tokens into two groups; non-parse errors and parse errors
tokens = {"expected":[[],[]], "received":[[],[]]}
for tokenType, tokenList in zip(tokens.keys(),
(expectedTokens, receivedTokens)):
for token in tokenList:
if token != "ParseError":
tokens[tokenType][0].append(token)
else:
if not ignoreErrors:
tokens[tokenType][1].append(token)
return tokens["expected"] == tokens["received"]
def unescape_test(test):
def decode(inp):
return inp.decode("unicode-escape")
test["input"] = decode(test["input"])
for token in test["output"]:
if token == "ParseError":
continue
else:
token[1] = decode(token[1])
if len(token) > 2:
for key, value in token[2].items():
del token[2][key]
token[2][decode(key)] = decode(value)
return test
def runTokenizerTest(test):
#XXX - move this out into the setup function
#concatenate all consecutive character tokens into a single token
if 'doubleEscaped' in test:
test = unescape_test(test)
expected = concatenateCharacterTokens(test['output'])
if 'lastStartTag' not in test:
test['lastStartTag'] = None
outBuffer = cStringIO.StringIO()
stdout = sys.stdout
sys.stdout = outBuffer
parser = TokenizerTestParser(test['initialState'],
test['lastStartTag'])
tokens = parser.parse(test['input'])
tokens = concatenateCharacterTokens(tokens)
received = normalizeTokens(tokens)
errorMsg = u"\n".join(["\n\nInitial state:",
test['initialState'] ,
"\nInput:", unicode(test['input']),
"\nExpected:", unicode(expected),
"\nreceived:", unicode(tokens)])
errorMsg = errorMsg.encode("utf-8")
ignoreErrorOrder = test.get('ignoreErrorOrder', False)
assert tokensMatch(expected, received, ignoreErrorOrder), errorMsg
def _doCapitalize(match):
return match.group(1).upper()
_capitalizeRe = re.compile(r"\W+(\w)").sub
def capitalize(s):
s = s.lower()
s = _capitalizeRe(_doCapitalize, s)
return s
def test_tokenizer():
for filename in html5lib_test_files('tokenizer', '*.test'):
tests = json.load(file(filename))
testName = os.path.basename(filename).replace(".test","")
if 'tests' in tests:
for index,test in enumerate(tests['tests']):
#Skip tests with a self closing flag
skip = False
if 'initialStates' not in test:
test["initialStates"] = ["Data state"]
for initialState in test["initialStates"]:
test["initialState"] = capitalize(initialState)
yield runTokenizerTest, test
| bsd-3-clause |
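`concatenateCharacterTokens` above merges runs of adjacent `Character` tokens so token streams compare equal regardless of how the tokenizer chunked its text, while leaving `ParseError` markers untouched. The same fold restated compactly (Python 3 sketch; the name `merge_character_tokens` is mine):

```python
def merge_character_tokens(tokens):
    """Collapse consecutive ["Character", text] tokens into one token."""
    out = []
    for token in tokens:
        is_char = (isinstance(token, list) and token
                   and token[0] == "Character")
        if (is_char and out and isinstance(out[-1], list)
                and out[-1][0] == "Character"):
            # Extend the current run without mutating the input token.
            out[-1] = ["Character", out[-1][1] + token[1]]
        else:
            out.append(token)
    return out
```

A `"ParseError"` entry breaks a run, so character tokens on either side of it stay separate.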
RAtechntukan/Sick-Beard | lib/imdb/__init__.py | 50 | 41019 | """
imdb package.
This package can be used to retrieve information about a movie or
a person from the IMDb database.
It can fetch data through different media (e.g.: the IMDb web pages,
a SQL database, etc.)
Copyright 2004-2012 Davide Alberani <da@erlug.linux.it>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""
__all__ = ['IMDb', 'IMDbError', 'Movie', 'Person', 'Character', 'Company',
'available_access_systems']
__version__ = VERSION = '4.9'
# Import compatibility module (importing it is enough).
import _compat
import sys, os, ConfigParser, logging
from types import MethodType
from imdb import Movie, Person, Character, Company
import imdb._logging
from imdb._exceptions import IMDbError, IMDbDataAccessError, IMDbParserError
from imdb.utils import build_title, build_name, build_company_name
_aux_logger = logging.getLogger('imdbpy.aux')
# URLs of the main pages for movies, persons, characters and queries.
imdbURL_base = 'http://akas.imdb.com/'
# NOTE: the urls below will be removed in a future version.
# please use the values in the 'urls' attribute
# of the IMDbBase subclass instance.
# http://akas.imdb.com/title/
imdbURL_movie_base = '%stitle/' % imdbURL_base
# http://akas.imdb.com/title/tt%s/
imdbURL_movie_main = imdbURL_movie_base + 'tt%s/'
# http://akas.imdb.com/name/
imdbURL_person_base = '%sname/' % imdbURL_base
# http://akas.imdb.com/name/nm%s/
imdbURL_person_main = imdbURL_person_base + 'nm%s/'
# http://akas.imdb.com/character/
imdbURL_character_base = '%scharacter/' % imdbURL_base
# http://akas.imdb.com/character/ch%s/
imdbURL_character_main = imdbURL_character_base + 'ch%s/'
# http://akas.imdb.com/company/
imdbURL_company_base = '%scompany/' % imdbURL_base
# http://akas.imdb.com/company/co%s/
imdbURL_company_main = imdbURL_company_base + 'co%s/'
# http://akas.imdb.com/keyword/%s/
imdbURL_keyword_main = imdbURL_base + 'keyword/%s/'
# http://akas.imdb.com/chart/top
imdbURL_top250 = imdbURL_base + 'chart/top'
# http://akas.imdb.com/chart/bottom
imdbURL_bottom100 = imdbURL_base + 'chart/bottom'
# http://akas.imdb.com/find?%s
imdbURL_find = imdbURL_base + 'find?%s'
# Name of the configuration file.
confFileName = 'imdbpy.cfg'
class ConfigParserWithCase(ConfigParser.ConfigParser):
"""A case-sensitive parser for configuration files."""
def __init__(self, defaults=None, confFile=None, *args, **kwds):
"""Initialize the parser.
*defaults* -- defaults values.
*confFile* -- the file (or list of files) to parse."""
ConfigParser.ConfigParser.__init__(self, defaults=defaults)
if confFile is None:
dotFileName = '.' + confFileName
# Current and home directory.
confFile = [os.path.join(os.getcwd(), confFileName),
os.path.join(os.getcwd(), dotFileName),
os.path.join(os.path.expanduser('~'), confFileName),
os.path.join(os.path.expanduser('~'), dotFileName)]
if os.name == 'posix':
sep = getattr(os.path, 'sep', '/')
# /etc/ and /etc/conf.d/
confFile.append(os.path.join(sep, 'etc', confFileName))
confFile.append(os.path.join(sep, 'etc', 'conf.d',
confFileName))
else:
# etc subdirectory of sys.prefix, for non-unix systems.
confFile.append(os.path.join(sys.prefix, 'etc', confFileName))
for fname in confFile:
try:
self.read(fname)
except (ConfigParser.MissingSectionHeaderError,
ConfigParser.ParsingError), e:
_aux_logger.warn('Troubles reading config file: %s' % e)
# Stop at the first valid file.
if self.has_section('imdbpy'):
break
def optionxform(self, optionstr):
"""Option names are case sensitive."""
return optionstr
def _manageValue(self, value):
"""Custom substitutions for values."""
if not isinstance(value, (str, unicode)):
return value
vlower = value.lower()
if vlower in self._boolean_states:
return self._boolean_states[vlower]
elif vlower == 'none':
return None
return value
def get(self, section, option, *args, **kwds):
"""Return the value of an option from a given section."""
value = ConfigParser.ConfigParser.get(self, section, option,
*args, **kwds)
return self._manageValue(value)
def items(self, section, *args, **kwds):
"""Return a list of (key, value) tuples of items of the
given section."""
if section != 'DEFAULT' and not self.has_section(section):
return []
keys = ConfigParser.ConfigParser.options(self, section)
return [(k, self.get(section, k, *args, **kwds)) for k in keys]
def getDict(self, section):
"""Return a dictionary of items of the specified section."""
return dict(self.items(section))
def IMDb(accessSystem=None, *arguments, **keywords):
"""Return an instance of the appropriate class.
The accessSystem parameter is used to specify the kind of
the preferred access system."""
if accessSystem is None or accessSystem in ('auto', 'config'):
try:
cfg_file = ConfigParserWithCase(*arguments, **keywords)
# Parameters set by the code take precedence.
kwds = cfg_file.getDict('imdbpy')
if 'accessSystem' in kwds:
accessSystem = kwds['accessSystem']
del kwds['accessSystem']
else:
accessSystem = 'http'
kwds.update(keywords)
keywords = kwds
except Exception, e:
logging.getLogger('imdbpy').warn('Unable to read configuration' \
' file; complete error: %s' % e)
# It just LOOKS LIKE a bad habit: we tried to read config
# options from some files, but something is gone horribly
# wrong: ignore everything and pretend we were called with
# the 'http' accessSystem.
accessSystem = 'http'
if 'loggingLevel' in keywords:
imdb._logging.setLevel(keywords['loggingLevel'])
del keywords['loggingLevel']
if 'loggingConfig' in keywords:
logCfg = keywords['loggingConfig']
del keywords['loggingConfig']
try:
import logging.config
logging.config.fileConfig(os.path.expanduser(logCfg))
except Exception, e:
logging.getLogger('imdbpy').warn('unable to read logger ' \
'config: %s' % e)
if accessSystem in ('httpThin', 'webThin', 'htmlThin'):
logging.warn('httpThin was removed since IMDbPY 4.8')
accessSystem = 'http'
if accessSystem in ('http', 'web', 'html'):
from parser.http import IMDbHTTPAccessSystem
return IMDbHTTPAccessSystem(*arguments, **keywords)
elif accessSystem in ('mobile',):
from parser.mobile import IMDbMobileAccessSystem
return IMDbMobileAccessSystem(*arguments, **keywords)
elif accessSystem in ('local', 'files'):
# The local access system was removed since IMDbPY 4.2.
raise IMDbError('the local access system was removed since IMDbPY 4.2')
elif accessSystem in ('sql', 'db', 'database'):
try:
from parser.sql import IMDbSqlAccessSystem
except ImportError:
raise IMDbError('the sql access system is not installed')
return IMDbSqlAccessSystem(*arguments, **keywords)
else:
raise IMDbError('unknown kind of data access system: "%s"' \
% accessSystem)
def available_access_systems():
"""Return the list of available data access systems."""
asList = []
# XXX: trying to import modules is a good thing?
try:
from parser.http import IMDbHTTPAccessSystem
asList.append('http')
except ImportError:
pass
try:
from parser.mobile import IMDbMobileAccessSystem
asList.append('mobile')
except ImportError:
pass
try:
from parser.sql import IMDbSqlAccessSystem
asList.append('sql')
except ImportError:
pass
return asList
# XXX: I'm not sure this is a good guess.
# I suppose that an argument of the IMDb function can be used to
# set a default encoding for the output, and then Movie, Person and
# Character objects can use this default encoding, returning strings.
# Anyway, passing unicode strings to search_movie(), search_person()
# and search_character() methods is always safer.
encoding = getattr(sys.stdin, 'encoding', '') or sys.getdefaultencoding()
class IMDbBase:
"""The base class used to search for a movie/person/character and
to get a Movie/Person/Character object.
This class cannot directly fetch data of any kind and so you
have to search the "real" code into a subclass."""
# The name of the preferred access system (MUST be overridden
# in the subclasses).
accessSystem = 'UNKNOWN'
# Top-level logger for IMDbPY.
_imdb_logger = logging.getLogger('imdbpy')
# Whether to re-raise caught exceptions or not.
_reraise_exceptions = False
def __init__(self, defaultModFunct=None, results=20, keywordsResults=100,
*arguments, **keywords):
"""Initialize the access system.
If specified, defaultModFunct is the function used by
default by the Person, Movie and Character objects, when
accessing their text fields.
"""
# The function used to output the strings that need modification (the
# ones containing references to movie titles and person names).
self._defModFunct = defaultModFunct
# Number of results to get.
try:
results = int(results)
except (TypeError, ValueError):
results = 20
if results < 1:
results = 20
self._results = results
try:
keywordsResults = int(keywordsResults)
except (TypeError, ValueError):
keywordsResults = 100
if keywordsResults < 1:
keywordsResults = 100
self._keywordsResults = keywordsResults
self._reraise_exceptions = keywords.get('reraiseExceptions') or False
self.set_imdb_urls(keywords.get('imdbURL_base') or imdbURL_base)
def set_imdb_urls(self, imdbURL_base):
"""Set the urls used accessing the IMDb site."""
imdbURL_base = imdbURL_base.strip().strip('"\'')
if not imdbURL_base.startswith('http://'):
imdbURL_base = 'http://%s' % imdbURL_base
if not imdbURL_base.endswith('/'):
imdbURL_base = '%s/' % imdbURL_base
# http://akas.imdb.com/title/
imdbURL_movie_base = '%stitle/' % imdbURL_base
# http://akas.imdb.com/title/tt%s/
imdbURL_movie_main = imdbURL_movie_base + 'tt%s/'
# http://akas.imdb.com/name/
imdbURL_person_base = '%sname/' % imdbURL_base
# http://akas.imdb.com/name/nm%s/
imdbURL_person_main = imdbURL_person_base + 'nm%s/'
# http://akas.imdb.com/character/
imdbURL_character_base = '%scharacter/' % imdbURL_base
# http://akas.imdb.com/character/ch%s/
imdbURL_character_main = imdbURL_character_base + 'ch%s/'
# http://akas.imdb.com/company/
imdbURL_company_base = '%scompany/' % imdbURL_base
# http://akas.imdb.com/company/co%s/
imdbURL_company_main = imdbURL_company_base + 'co%s/'
# http://akas.imdb.com/keyword/%s/
imdbURL_keyword_main = imdbURL_base + 'keyword/%s/'
# http://akas.imdb.com/chart/top
imdbURL_top250 = imdbURL_base + 'chart/top'
# http://akas.imdb.com/chart/bottom
imdbURL_bottom100 = imdbURL_base + 'chart/bottom'
# http://akas.imdb.com/find?%s
imdbURL_find = imdbURL_base + 'find?%s'
self.urls = dict(
movie_base=imdbURL_movie_base,
movie_main=imdbURL_movie_main,
person_base=imdbURL_person_base,
person_main=imdbURL_person_main,
character_base=imdbURL_character_base,
character_main=imdbURL_character_main,
company_base=imdbURL_company_base,
company_main=imdbURL_company_main,
keyword_main=imdbURL_keyword_main,
top250=imdbURL_top250,
bottom100=imdbURL_bottom100,
find=imdbURL_find)
def _normalize_movieID(self, movieID):
"""Normalize the given movieID."""
# By default, do nothing.
return movieID
def _normalize_personID(self, personID):
"""Normalize the given personID."""
# By default, do nothing.
return personID
def _normalize_characterID(self, characterID):
"""Normalize the given characterID."""
# By default, do nothing.
return characterID
def _normalize_companyID(self, companyID):
"""Normalize the given companyID."""
# By default, do nothing.
return companyID
def _get_real_movieID(self, movieID):
"""Handle title aliases."""
# By default, do nothing.
return movieID
def _get_real_personID(self, personID):
"""Handle name aliases."""
# By default, do nothing.
return personID
def _get_real_characterID(self, characterID):
"""Handle character name aliases."""
# By default, do nothing.
return characterID
def _get_real_companyID(self, companyID):
"""Handle company name aliases."""
# By default, do nothing.
return companyID
def _get_infoset(self, prefname):
"""Return methods with the name starting with prefname."""
infoset = []
excludes = ('%sinfoset' % prefname,)
preflen = len(prefname)
for name in dir(self.__class__):
if name.startswith(prefname) and name not in excludes:
member = getattr(self.__class__, name)
if isinstance(member, MethodType):
infoset.append(name[preflen:].replace('_', ' '))
return infoset
def get_movie_infoset(self):
"""Return the list of info set available for movies."""
return self._get_infoset('get_movie_')
def get_person_infoset(self):
"""Return the list of info set available for persons."""
return self._get_infoset('get_person_')
def get_character_infoset(self):
"""Return the list of info set available for characters."""
return self._get_infoset('get_character_')
def get_company_infoset(self):
"""Return the list of info set available for companies."""
return self._get_infoset('get_company_')
def get_movie(self, movieID, info=Movie.Movie.default_info, modFunct=None):
"""Return a Movie object for the given movieID.
The movieID is something used to univocally identify a movie;
it can be the imdbID used by the IMDb web server, a file
pointer, a line number in a file, an ID in a database, etc.
info is the list of sets of information to retrieve.
If specified, modFunct will be the function used by the Movie
object when accessing its text fields (like 'plot')."""
movieID = self._normalize_movieID(movieID)
movieID = self._get_real_movieID(movieID)
movie = Movie.Movie(movieID=movieID, accessSystem=self.accessSystem)
modFunct = modFunct or self._defModFunct
if modFunct is not None:
movie.set_mod_funct(modFunct)
self.update(movie, info)
return movie
get_episode = get_movie
def _search_movie(self, title, results):
"""Return a list of tuples (movieID, {movieData})"""
# XXX: for the real implementation, see the method of the
# subclass, somewhere under the imdb.parser package.
raise NotImplementedError('override this method')
def search_movie(self, title, results=None, _episodes=False):
"""Return a list of Movie objects for a query for the given title.
The results argument is the maximum number of results to return."""
if results is None:
results = self._results
try:
results = int(results)
except (ValueError, OverflowError):
results = 20
# XXX: I suppose it will be much safer if the user provides
# a unicode string... this is just a guess.
if not isinstance(title, unicode):
title = unicode(title, encoding, 'replace')
if not _episodes:
res = self._search_movie(title, results)
else:
res = self._search_episode(title, results)
return [Movie.Movie(movieID=self._get_real_movieID(mi),
data=md, modFunct=self._defModFunct,
accessSystem=self.accessSystem) for mi, md in res][:results]
def _search_episode(self, title, results):
"""Return a list of tuples (movieID, {movieData})"""
# XXX: for the real implementation, see the method of the
# subclass, somewhere under the imdb.parser package.
raise NotImplementedError('override this method')
def search_episode(self, title, results=None):
"""Return a list of Movie objects for a query for the given title.
The results argument is the maximum number of results to return;
this method searches only for titles of tv (mini) series' episodes."""
return self.search_movie(title, results=results, _episodes=True)
def get_person(self, personID, info=Person.Person.default_info,
modFunct=None):
"""Return a Person object for the given personID.
The personID is something used to uniquely identify a person;
it can be the imdbID used by the IMDb web server, a file
pointer, a line number in a file, an ID in a database, etc.
info is the list of sets of information to retrieve.
If specified, modFunct will be the function used by the Person
object when accessing its text fields (like 'mini biography')."""
personID = self._normalize_personID(personID)
personID = self._get_real_personID(personID)
person = Person.Person(personID=personID,
accessSystem=self.accessSystem)
modFunct = modFunct or self._defModFunct
if modFunct is not None:
person.set_mod_funct(modFunct)
self.update(person, info)
return person
def _search_person(self, name, results):
"""Return a list of tuples (personID, {personData})"""
# XXX: for the real implementation, see the method of the
# subclass, somewhere under the imdb.parser package.
raise NotImplementedError('override this method')
def search_person(self, name, results=None):
"""Return a list of Person objects for a query for the given name.
The results argument is the maximum number of results to return."""
if results is None:
results = self._results
try:
results = int(results)
except (ValueError, OverflowError):
results = 20
if not isinstance(name, unicode):
name = unicode(name, encoding, 'replace')
res = self._search_person(name, results)
return [Person.Person(personID=self._get_real_personID(pi),
data=pd, modFunct=self._defModFunct,
accessSystem=self.accessSystem) for pi, pd in res][:results]
def get_character(self, characterID, info=Character.Character.default_info,
modFunct=None):
"""Return a Character object for the given characterID.
The characterID is something used to uniquely identify a character;
it can be the imdbID used by the IMDb web server, a file
pointer, a line number in a file, an ID in a database, etc.
info is the list of sets of information to retrieve.
If specified, modFunct will be the function used by the Character
object when accessing its text fields (like 'biography')."""
characterID = self._normalize_characterID(characterID)
characterID = self._get_real_characterID(characterID)
character = Character.Character(characterID=characterID,
accessSystem=self.accessSystem)
modFunct = modFunct or self._defModFunct
if modFunct is not None:
character.set_mod_funct(modFunct)
self.update(character, info)
return character
def _search_character(self, name, results):
"""Return a list of tuples (characterID, {characterData})"""
# XXX: for the real implementation, see the method of the
# subclass, somewhere under the imdb.parser package.
raise NotImplementedError('override this method')
def search_character(self, name, results=None):
"""Return a list of Character objects for a query for the given name.
The results argument is the maximum number of results to return."""
if results is None:
results = self._results
try:
results = int(results)
except (ValueError, OverflowError):
results = 20
if not isinstance(name, unicode):
name = unicode(name, encoding, 'replace')
res = self._search_character(name, results)
return [Character.Character(characterID=self._get_real_characterID(pi),
data=pd, modFunct=self._defModFunct,
accessSystem=self.accessSystem) for pi, pd in res][:results]
def get_company(self, companyID, info=Company.Company.default_info,
modFunct=None):
"""Return a Company object for the given companyID.
The companyID is something used to uniquely identify a company;
it can be the imdbID used by the IMDb web server, a file
pointer, a line number in a file, an ID in a database, etc.
info is the list of sets of information to retrieve.
If specified, modFunct will be the function used by the Company
object when accessing its text fields (none, so far)."""
companyID = self._normalize_companyID(companyID)
companyID = self._get_real_companyID(companyID)
company = Company.Company(companyID=companyID,
accessSystem=self.accessSystem)
modFunct = modFunct or self._defModFunct
if modFunct is not None:
company.set_mod_funct(modFunct)
self.update(company, info)
return company
def _search_company(self, name, results):
"""Return a list of tuples (companyID, {companyData})"""
# XXX: for the real implementation, see the method of the
# subclass, somewhere under the imdb.parser package.
raise NotImplementedError('override this method')
def search_company(self, name, results=None):
"""Return a list of Company objects for a query for the given name.
The results argument is the maximum number of results to return."""
if results is None:
results = self._results
try:
results = int(results)
except (ValueError, OverflowError):
results = 20
if not isinstance(name, unicode):
name = unicode(name, encoding, 'replace')
res = self._search_company(name, results)
return [Company.Company(companyID=self._get_real_companyID(pi),
data=pd, modFunct=self._defModFunct,
accessSystem=self.accessSystem) for pi, pd in res][:results]
def _search_keyword(self, keyword, results):
"""Return a list of 'keyword' strings."""
# XXX: for the real implementation, see the method of the
# subclass, somewhere under the imdb.parser package.
raise NotImplementedError('override this method')
def search_keyword(self, keyword, results=None):
"""Search for existing keywords, similar to the given one."""
if results is None:
results = self._keywordsResults
try:
results = int(results)
except (ValueError, OverflowError):
results = 100
if not isinstance(keyword, unicode):
keyword = unicode(keyword, encoding, 'replace')
return self._search_keyword(keyword, results)
def _get_keyword(self, keyword, results):
"""Return a list of tuples (movieID, {movieData})"""
# XXX: for the real implementation, see the method of the
# subclass, somewhere under the imdb.parser package.
raise NotImplementedError('override this method')
def get_keyword(self, keyword, results=None):
"""Return a list of movies for the given keyword."""
if results is None:
results = self._keywordsResults
try:
results = int(results)
except (ValueError, OverflowError):
results = 100
# XXX: I suppose it will be much safer if the user provides
# a unicode string... this is just a guess.
if not isinstance(keyword, unicode):
keyword = unicode(keyword, encoding, 'replace')
res = self._get_keyword(keyword, results)
return [Movie.Movie(movieID=self._get_real_movieID(mi),
data=md, modFunct=self._defModFunct,
accessSystem=self.accessSystem) for mi, md in res][:results]
def _get_top_bottom_movies(self, kind):
"""Return the list of the top 250 or bottom 100 movies."""
# XXX: for the real implementation, see the method of the
# subclass, somewhere under the imdb.parser package.
# This method must return a list of (movieID, {movieDict})
# tuples. The kind parameter can be 'top' or 'bottom'.
raise NotImplementedError('override this method')
def get_top250_movies(self):
"""Return the list of the top 250 movies."""
res = self._get_top_bottom_movies('top')
return [Movie.Movie(movieID=self._get_real_movieID(mi),
data=md, modFunct=self._defModFunct,
accessSystem=self.accessSystem) for mi, md in res]
def get_bottom100_movies(self):
"""Return the list of the bottom 100 movies."""
res = self._get_top_bottom_movies('bottom')
return [Movie.Movie(movieID=self._get_real_movieID(mi),
data=md, modFunct=self._defModFunct,
accessSystem=self.accessSystem) for mi, md in res]
def new_movie(self, *arguments, **keywords):
"""Return a Movie object."""
# XXX: not really useful...
if 'title' in keywords:
if not isinstance(keywords['title'], unicode):
keywords['title'] = unicode(keywords['title'],
encoding, 'replace')
elif len(arguments) > 1:
if not isinstance(arguments[1], unicode):
arguments[1] = unicode(arguments[1], encoding, 'replace')
return Movie.Movie(accessSystem=self.accessSystem,
*arguments, **keywords)
def new_person(self, *arguments, **keywords):
"""Return a Person object."""
# XXX: not really useful...
if 'name' in keywords:
if not isinstance(keywords['name'], unicode):
keywords['name'] = unicode(keywords['name'],
encoding, 'replace')
elif len(arguments) > 1:
if not isinstance(arguments[1], unicode):
arguments[1] = unicode(arguments[1], encoding, 'replace')
return Person.Person(accessSystem=self.accessSystem,
*arguments, **keywords)
def new_character(self, *arguments, **keywords):
"""Return a Character object."""
# XXX: not really useful...
if 'name' in keywords:
if not isinstance(keywords['name'], unicode):
keywords['name'] = unicode(keywords['name'],
encoding, 'replace')
elif len(arguments) > 1:
if not isinstance(arguments[1], unicode):
arguments[1] = unicode(arguments[1], encoding, 'replace')
return Character.Character(accessSystem=self.accessSystem,
*arguments, **keywords)
def new_company(self, *arguments, **keywords):
"""Return a Company object."""
# XXX: not really useful...
if 'name' in keywords:
if not isinstance(keywords['name'], unicode):
keywords['name'] = unicode(keywords['name'],
encoding, 'replace')
elif len(arguments) > 1:
if not isinstance(arguments[1], unicode):
arguments[1] = unicode(arguments[1], encoding, 'replace')
return Company.Company(accessSystem=self.accessSystem,
*arguments, **keywords)
def update(self, mop, info=None, override=0):
"""Given a Movie, Person, Character or Company object with only
partial information, retrieve the required set of information.
info is the list of sets of information to retrieve.
If override is set, the information is retrieved and updated
even if it's already in the object."""
# XXX: should this be a method of the Movie/Person/Character/Company
# classes? No: what about instances created by external functions?
mopID = None
prefix = ''
if isinstance(mop, Movie.Movie):
mopID = mop.movieID
prefix = 'movie'
elif isinstance(mop, Person.Person):
mopID = mop.personID
prefix = 'person'
elif isinstance(mop, Character.Character):
mopID = mop.characterID
prefix = 'character'
elif isinstance(mop, Company.Company):
mopID = mop.companyID
prefix = 'company'
else:
raise IMDbError('object ' + repr(mop) + \
' is not a Movie, Person, Character or Company instance')
if mopID is None:
# XXX: enough? It's obvious that there are Characters
# objects without characterID, so I think they should
# just do nothing, when an i.update(character) is tried.
if prefix == 'character':
return
raise IMDbDataAccessError( \
'the supplied object has null movieID, personID or companyID')
if mop.accessSystem == self.accessSystem:
aSystem = self
else:
aSystem = IMDb(mop.accessSystem)
if info is None:
info = mop.default_info
elif info == 'all':
if isinstance(mop, Movie.Movie):
info = self.get_movie_infoset()
elif isinstance(mop, Person.Person):
info = self.get_person_infoset()
elif isinstance(mop, Character.Character):
info = self.get_character_infoset()
else:
info = self.get_company_infoset()
if not isinstance(info, (tuple, list)):
info = (info,)
res = {}
for i in info:
if i in mop.current_info and not override:
continue
if not i:
continue
self._imdb_logger.debug('retrieving "%s" info set', i)
try:
method = getattr(aSystem, 'get_%s_%s' %
(prefix, i.replace(' ', '_')))
except AttributeError:
self._imdb_logger.error('unknown information set "%s"', i)
# Keeps going.
method = lambda *x: {}
try:
ret = method(mopID)
except Exception, e:
self._imdb_logger.critical('caught an exception retrieving ' \
'or parsing "%s" info set for mopID ' \
'"%s" (accessSystem: %s)',
i, mopID, mop.accessSystem, exc_info=True)
ret = {}
# If requested by the user, reraise the exception.
if self._reraise_exceptions:
raise
keys = None
if 'data' in ret:
res.update(ret['data'])
if isinstance(ret['data'], dict):
keys = ret['data'].keys()
if 'info sets' in ret:
for ri in ret['info sets']:
mop.add_to_current_info(ri, keys, mainInfoset=i)
else:
mop.add_to_current_info(i, keys)
if 'titlesRefs' in ret:
mop.update_titlesRefs(ret['titlesRefs'])
if 'namesRefs' in ret:
mop.update_namesRefs(ret['namesRefs'])
if 'charactersRefs' in ret:
mop.update_charactersRefs(ret['charactersRefs'])
mop.set_data(res, override=0)
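The heart of `update()` is its name-based dispatch: a fetch method is looked up as `get_<prefix>_<infoset>` and replaced by a no-op when the info set is unknown. A minimal Python 3 illustration of that pattern (the `Fetcher` class and its data are invented for the example):

```python
class Fetcher:
    def get_movie_plot(self, movieID):
        # A stand-in for a real "plot" info-set fetcher.
        return {'data': {'plot': ['plot of %s' % movieID]}}

    def fetch(self, prefix, info_set, objID):
        try:
            method = getattr(self, 'get_%s_%s' %
                             (prefix, info_set.replace(' ', '_')))
        except AttributeError:
            # Unknown info set: the original logs an error and keeps
            # going; here we just return an empty result.
            method = lambda *args: {}
        return method(objID)

f = Fetcher()
print(f.fetch('movie', 'plot', '0094226'))
print(f.fetch('movie', 'alternate versions', '0094226'))  # {}
```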
def get_imdbMovieID(self, movieID):
"""Translate a movieID into an imdbID (the ID used by the IMDb
web server); must be overridden by the subclass."""
# XXX: for the real implementation, see the method of the
# subclass, somewhere under the imdb.parser package.
raise NotImplementedError('override this method')
def get_imdbPersonID(self, personID):
"""Translate a personID into an imdbID (the ID used by the IMDb
web server); must be overridden by the subclass."""
# XXX: for the real implementation, see the method of the
# subclass, somewhere under the imdb.parser package.
raise NotImplementedError('override this method')
def get_imdbCharacterID(self, characterID):
"""Translate a characterID into an imdbID (the ID used by the IMDb
web server); must be overridden by the subclass."""
# XXX: for the real implementation, see the method of the
# subclass, somewhere under the imdb.parser package.
raise NotImplementedError('override this method')
def get_imdbCompanyID(self, companyID):
"""Translate a companyID into an imdbID (the ID used by the IMDb
web server); must be overridden by the subclass."""
# XXX: for the real implementation, see the method of the
# subclass, somewhere under the imdb.parser package.
raise NotImplementedError('override this method')
def _searchIMDb(self, kind, ton):
"""Search the IMDb akas server for the given title or name."""
# The Exact Primary search system has gone AWOL, so we resort
# to the mobile search. :-/
if not ton:
return None
aSystem = IMDb('mobile')
if kind == 'tt':
searchFunct = aSystem.search_movie
check = 'long imdb canonical title'
elif kind == 'nm':
searchFunct = aSystem.search_person
check = 'long imdb canonical name'
elif kind == 'char':
searchFunct = aSystem.search_character
check = 'long imdb canonical name'
elif kind == 'co':
# XXX: are [COUNTRY] codes included in the results?
searchFunct = aSystem.search_company
check = 'long imdb name'
try:
searchRes = searchFunct(ton)
except IMDbError:
return None
# When only one result is returned, assume it was from an
# exact match.
if len(searchRes) == 1:
return searchRes[0].getID()
for item in searchRes:
# Return the first perfect match.
if item[check] == ton:
return item.getID()
return None
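The match heuristic in `_searchIMDb` — trust a lone result as exact, otherwise take the first perfect hit on the canonical key — can be sketched independently (the helper name and sample data are illustrative only):

```python
def best_match(search_results, check_key, target):
    # A single result is assumed to come from an exact match.
    if len(search_results) == 1:
        return search_results[0]['id']
    for item in search_results:
        # Otherwise return the first perfect match, if any.
        if item.get(check_key) == target:
            return item['id']
    return None

hits = [{'id': '1', 'title': 'Brazil (1985)'},
        {'id': '2', 'title': 'Brazil (1944)'}]
print(best_match(hits, 'title', 'Brazil (1944)'))  # '2'
print(best_match(hits, 'title', 'Brazil'))         # None
print(best_match(hits[:1], 'title', 'anything'))   # '1'
```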
def title2imdbID(self, title):
"""Translate a movie title (in the plain text data files format)
to an imdbID.
Try an Exact Primary Title search on IMDb;
return None if it's unable to get the imdbID."""
return self._searchIMDb('tt', title)
def name2imdbID(self, name):
"""Translate a person name into an imdbID.
Try an Exact Primary Name search on IMDb;
return None if it's unable to get the imdbID."""
return self._searchIMDb('nm', name)
def character2imdbID(self, name):
"""Translate a character name into an imdbID.
Try an Exact Primary Name search on IMDb;
return None if it's unable to get the imdbID."""
return self._searchIMDb('char', name)
def company2imdbID(self, name):
"""Translate a company name into an imdbID.
Try an Exact Primary Name search on IMDb;
return None if it's unable to get the imdbID."""
return self._searchIMDb('co', name)
def get_imdbID(self, mop):
"""Return the imdbID for the given Movie, Person, Character or Company
object."""
imdbID = None
if mop.accessSystem == self.accessSystem:
aSystem = self
else:
aSystem = IMDb(mop.accessSystem)
if isinstance(mop, Movie.Movie):
if mop.movieID is not None:
imdbID = aSystem.get_imdbMovieID(mop.movieID)
else:
imdbID = aSystem.title2imdbID(build_title(mop, canonical=0,
ptdf=1))
elif isinstance(mop, Person.Person):
if mop.personID is not None:
imdbID = aSystem.get_imdbPersonID(mop.personID)
else:
imdbID = aSystem.name2imdbID(build_name(mop, canonical=1))
elif isinstance(mop, Character.Character):
if mop.characterID is not None:
imdbID = aSystem.get_imdbCharacterID(mop.characterID)
else:
# canonical=0 ?
imdbID = aSystem.character2imdbID(build_name(mop, canonical=1))
elif isinstance(mop, Company.Company):
if mop.companyID is not None:
imdbID = aSystem.get_imdbCompanyID(mop.companyID)
else:
imdbID = aSystem.company2imdbID(build_company_name(mop))
else:
raise IMDbError('object ' + repr(mop) + \
' is not a Movie, Person, Character or Company instance')
return imdbID
def get_imdbURL(self, mop):
"""Return the main IMDb URL for the given Movie, Person,
Character or Company object, or None if unable to get it."""
imdbID = self.get_imdbID(mop)
if imdbID is None:
return None
if isinstance(mop, Movie.Movie):
url_firstPart = imdbURL_movie_main
elif isinstance(mop, Person.Person):
url_firstPart = imdbURL_person_main
elif isinstance(mop, Character.Character):
url_firstPart = imdbURL_character_main
elif isinstance(mop, Company.Company):
url_firstPart = imdbURL_company_main
else:
raise IMDbError('object ' + repr(mop) + \
' is not a Movie, Person, Character or Company instance')
return url_firstPart % imdbID
def get_special_methods(self):
"""Return the special methods defined by the subclass."""
sm_dict = {}
base_methods = []
for name in dir(IMDbBase):
member = getattr(IMDbBase, name)
if isinstance(member, MethodType):
base_methods.append(name)
for name in dir(self.__class__):
if name.startswith('_') or name in base_methods or \
name.startswith('get_movie_') or \
name.startswith('get_person_') or \
name.startswith('get_company_') or \
name.startswith('get_character_'):
continue
member = getattr(self.__class__, name)
if isinstance(member, MethodType):
sm_dict.update({name: member.__doc__})
return sm_dict
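`get_special_methods` collects the subclass-only public methods by comparing `dir()` listings against the base class. Under Python 3, unbound methods are plain functions, so the `MethodType` check no longer matches on classes; this hedged sketch uses `inspect.isfunction` instead (the classes are invented for the example):

```python
import inspect

class Base:
    def shared(self):
        "Defined on the base class; excluded."

class Child(Base):
    def special(self):
        "A subclass-only public method."
    def _hidden(self):
        "Excluded: leading underscore."

def special_methods(cls, base):
    # Gather the base-class method names first, then keep only the
    # public methods that the subclass adds on top of them.
    base_names = {n for n, _ in inspect.getmembers(base, inspect.isfunction)}
    return {name: m.__doc__
            for name, m in inspect.getmembers(cls, inspect.isfunction)
            if not name.startswith('_') and name not in base_names}

print(special_methods(Child, Base))  # {'special': 'A subclass-only public method.'}
```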
| gpl-3.0 |
commtrack/commtrack-core | reportlab/lib/codecharts.py | 15 | 13039 | #Copyright ReportLab Europe Ltd. 2000-2004
#see license.txt for license details
#history http://www.reportlab.co.uk/cgi-bin/viewcvs.cgi/public/reportlab/trunk/reportlab/lib/codecharts.py
#$Header $
__version__=''' $Id '''
__doc__="""Routines to print code page (character set) drawings. Predates unicode.
To be sure we can accurately represent characters in various encodings
and fonts, we need some routines to display all those characters.
These are defined herein. The idea is to include flowable, drawable
and graphic objects for single and multi-byte fonts. """
import string
import codecs
from reportlab.pdfgen.canvas import Canvas
from reportlab.platypus import Flowable
from reportlab.pdfbase import pdfmetrics, cidfonts
from reportlab.graphics.shapes import Drawing, Group, String, Circle, Rect
from reportlab.graphics.widgetbase import Widget
from reportlab.lib import colors
adobe2codec = {
'WinAnsiEncoding':'winansi',
'MacRomanEncoding':'macroman',
'MacExpert':'macexpert',
'PDFDoc':'pdfdoc',
}
class CodeChartBase(Flowable):
"""Basic bits of drawing furniture used by
single and multi-byte versions: ability to put letters
into boxes."""
def calcLayout(self):
"Work out x and y positions for drawing"
rows = self.codePoints * 1.0 / self.charsPerRow
if rows == int(rows):
self.rows = int(rows)
else:
self.rows = int(rows) + 1
# size allows for a gray column of labels
self.width = self.boxSize * (1+self.charsPerRow)
self.height = self.boxSize * (1+self.rows)
#handy lists
self.ylist = []
for row in range(self.rows + 2):
self.ylist.append(row * self.boxSize)
self.xlist = []
for col in range(self.charsPerRow + 2):
self.xlist.append(col * self.boxSize)
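The geometry computed by `calcLayout` boils down to a ceiling division plus one label row and column; this self-contained sketch (helper name assumed, not part of ReportLab) reproduces the arithmetic:

```python
import math

def grid_layout(code_points, chars_per_row, box_size):
    # rows is the ceiling of codePoints / charsPerRow; one extra
    # row and column are reserved for the gray label strip.
    rows = math.ceil(code_points / chars_per_row)
    width = box_size * (1 + chars_per_row)
    height = box_size * (1 + rows)
    return rows, width, height

# A 256-point single-byte chart, and a 94-point kuten row:
print(grid_layout(256, 16, 14))  # (16, 238, 238)
print(grid_layout(94, 20, 18))   # (5, 378, 108)
```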
def formatByte(self, byt):
if self.hex:
return '%02X' % byt
else:
return '%d' % byt
def drawChars(self, charList):
"""Fills boxes in order. None means skip a box.
Empty boxes at end get filled with gray"""
extraNeeded = (self.rows * self.charsPerRow - len(charList))
for i in range(extraNeeded):
charList.append(None)
#charList.extend([None] * extraNeeded)
row = 0
col = 0
self.canv.setFont(self.fontName, self.boxSize * 0.75)
for ch in charList: # may be 2 bytes or 1
if ch is None:
self.canv.setFillGray(0.9)
self.canv.rect((1+col) * self.boxSize, (self.rows - row - 1) * self.boxSize,
self.boxSize, self.boxSize, stroke=0, fill=1)
self.canv.setFillGray(0.0)
else:
try:
self.canv.drawCentredString(
(col+1.5) * self.boxSize,
(self.rows - row - 0.875) * self.boxSize,
ch,
)
except:
self.canv.setFillGray(0.9)
self.canv.rect((1+col) * self.boxSize, (self.rows - row - 1) * self.boxSize,
self.boxSize, self.boxSize, stroke=0, fill=1)
self.canv.drawCentredString(
(col+1.5) * self.boxSize,
(self.rows - row - 0.875) * self.boxSize,
'?',
)
self.canv.setFillGray(0.0)
col = col + 1
if col == self.charsPerRow:
row = row + 1
col = 0
def drawLabels(self, topLeft = ''):
"""Writes little labels in the top row and first column"""
self.canv.setFillGray(0.8)
self.canv.rect(0, self.ylist[-2], self.width, self.boxSize, fill=1, stroke=0)
self.canv.rect(0, 0, self.boxSize, self.ylist[-2], fill=1, stroke=0)
self.canv.setFillGray(0.0)
#label each row and column
self.canv.setFont('Helvetica-Oblique',0.375 * self.boxSize)
byt = 0
for row in range(self.rows):
if self.rowLabels:
label = self.rowLabels[row]
else: # format start bytes as hex or decimal
label = self.formatByte(row * self.charsPerRow)
self.canv.drawCentredString(0.5 * self.boxSize,
(self.rows - row - 0.75) * self.boxSize,
label
)
for col in range(self.charsPerRow):
self.canv.drawCentredString((col + 1.5) * self.boxSize,
(self.rows + 0.25) * self.boxSize,
self.formatByte(col)
)
if topLeft:
self.canv.setFont('Helvetica-BoldOblique',0.5 * self.boxSize)
self.canv.drawCentredString(0.5 * self.boxSize,
(self.rows + 0.25) * self.boxSize,
topLeft
)
class SingleByteEncodingChart(CodeChartBase):
def __init__(self, faceName='Helvetica', encodingName='WinAnsiEncoding',
charsPerRow=16, boxSize=14, hex=1):
self.codePoints = 256
self.faceName = faceName
self.encodingName = encodingName
self.fontName = self.faceName + '-' + self.encodingName
self.charsPerRow = charsPerRow
self.boxSize = boxSize
self.hex = hex
self.rowLabels = None
pdfmetrics.registerFont(pdfmetrics.Font(self.fontName,
self.faceName,
self.encodingName)
)
self.calcLayout()
def draw(self):
self.drawLabels()
charList = [None] * 32 + map(chr, range(32, 256))
#we need to convert these to Unicode, since ReportLab
#2.0 can only draw in Unicode.
encName = self.encodingName
#apply some common translations
encName = adobe2codec.get(encName, encName)
decoder = codecs.lookup(encName)[1]
def decodeFunc(txt):
if txt is None:
return None
else:
return decoder(txt, errors='replace')[0]
charList = [decodeFunc(ch) for ch in charList]
self.drawChars(charList)
self.canv.grid(self.xlist, self.ylist)
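The decode step inside `draw()` — look up a decoder via `codecs` and convert with `errors='replace'`, passing `None` (an empty box) straight through — works the same way in Python 3; a minimal sketch with an assumed helper name:

```python
import codecs

def decode_or_none(raw, encoding='latin-1'):
    # None marks an empty box and passes through untouched.
    if raw is None:
        return None
    # CodecInfo.decode returns a (text, bytes_consumed) pair.
    decode = codecs.lookup(encoding).decode
    return decode(raw, 'replace')[0]

print(decode_or_none(b'\xe9'))  # 'é'
print(decode_or_none(None))     # None
```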
class KutenRowCodeChart(CodeChartBase):
"""Formats one 'row' of the 94x94 space used in many Asian encodings.
These deliberately resemble the code charts in Ken Lunde's "Understanding
CJKV Information Processing", to enable manual checking. Due to the large
numbers of characters, we don't try to make one graphic with 10,000 characters,
but rather output a sequence of these."""
#would be cleaner if both shared one base class whose job
#was to draw the boxes, but never mind...
def __init__(self, row, faceName, encodingName):
self.row = row
self.codePoints = 94
self.boxSize = 18
self.charsPerRow = 20
self.rows = 5
self.rowLabels = ['00','20','40','60','80']
self.hex = 0
self.faceName = faceName
self.encodingName = encodingName
try:
# the dependent files might not be available
font = cidfonts.CIDFont(self.faceName, self.encodingName)
pdfmetrics.registerFont(font)
except:
# fall back to English and at least show we can draw the boxes
self.faceName = 'Helvetica'
self.encodingName = 'WinAnsiEncoding'
self.fontName = self.faceName + '-' + self.encodingName
self.calcLayout()
def makeRow(self, row):
"""Works out the character values for this kuten row"""
cells = []
if string.find(self.encodingName, 'EUC') > -1:
# it is an EUC family encoding.
for col in range(1, 95):
ch = chr(row + 160) + chr(col+160)
cells.append(ch)
## elif string.find(self.encodingName, 'GB') > -1:
## # it is an EUC family encoding.
## for col in range(1, 95):
## ch = chr(row + 160) + chr(col+160)
else:
cells.extend([None] * 94)
return cells
def draw(self):
self.drawLabels(topLeft= 'R%d' % self.row)
# work out which characters we need for the row
#assert string.find(self.encodingName, 'EUC') > -1, 'Only handles EUC encoding today, you gave me %s!' % self.encodingName
# pad out by 1 to match Ken Lunde's tables
charList = [None] + self.makeRow(self.row)
self.drawChars(charList)
self.canv.grid(self.xlist, self.ylist)
class Big5CodeChart(CodeChartBase):
"""Formats one 'row' of the 94x160 space used in Big 5
These deliberately resemble the code charts in Ken Lunde's "Understanding
CJKV Information Processing", to enable manual checking."""
def __init__(self, row, faceName, encodingName):
self.row = row
self.codePoints = 160
self.boxSize = 18
self.charsPerRow = 16
self.rows = 10
self.hex = 1
self.faceName = faceName
self.encodingName = encodingName
self.rowLabels = ['4','5','6','7','A','B','C','D','E','F']
try:
# the dependent files might not be available
font = cidfonts.CIDFont(self.faceName, self.encodingName)
pdfmetrics.registerFont(font)
except:
# fall back to English and at least show we can draw the boxes
self.faceName = 'Helvetica'
self.encodingName = 'WinAnsiEncoding'
self.fontName = self.faceName + '-' + self.encodingName
self.calcLayout()
def makeRow(self, row):
"""Works out the character values for this Big5 row.
Rows start at 0xA1"""
cells = []
if string.find(self.encodingName, 'B5') > -1:
# big 5, different row size
for y in [4,5,6,7,10,11,12,13,14,15]:
for x in range(16):
col = y*16+x
ch = chr(row) + chr(col)
cells.append(ch)
else:
cells.extend([None] * 160)
return cells
def draw(self):
self.drawLabels(topLeft='%02X' % self.row)
charList = self.makeRow(self.row)
self.drawChars(charList)
self.canv.grid(self.xlist, self.ylist)
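`makeRow` walks one Big5 high byte across ten groups of 16 low bytes (0x40-0x7F and 0xA0-0xFF, drawn as full columns even though real Big5 trails stop at 0x7E and 0xFE). A Python 3 sketch of the same enumeration, with an assumed helper name:

```python
def big5_row_cells(row):
    # Ten 16-column groups: high nibbles 4-7 and A-F of the low byte,
    # 160 two-byte cells in total for one chart row.
    cells = []
    for y in [4, 5, 6, 7, 10, 11, 12, 13, 14, 15]:
        for x in range(16):
            cells.append(bytes([row, y * 16 + x]))
    return cells

cells = big5_row_cells(0xA1)
print(len(cells))      # 160
print(cells[0].hex())  # a140
```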
def hBoxText(msg, canvas, x, y, fontName):
"""Helper for stringwidth tests on Asian fonts.
Registers font if needed. Then draws the string,
and a box around it derived from the stringWidth function"""
canvas.saveState()
try:
font = pdfmetrics.getFont(fontName)
except KeyError:
font = cidfonts.UnicodeCIDFont(fontName)
pdfmetrics.registerFont(font)
canvas.setFillGray(0.8)
canvas.rect(x,y,pdfmetrics.stringWidth(msg, fontName, 16),16,stroke=0,fill=1)
canvas.setFillGray(0)
canvas.setFont(fontName, 16, 16)
canvas.drawString(x,y,msg)
canvas.restoreState()
class CodeWidget(Widget):
"""Block showing all the characters"""
def __init__(self):
self.x = 0
self.y = 0
self.width = 160
self.height = 160
def draw(self):
dx = self.width / 16.0
dy = self.height / 16.0
g = Group()
g.add(Rect(self.x, self.y, self.width, self.height,
fillColor=None, strokeColor=colors.black))
for x in range(16):
for y in range(16):
charValue = y * 16 + x
if charValue > 32:
s = String(self.x + x * dx,
self.y + (self.height - y*dy), chr(charValue))
g.add(s)
return g
def test():
c = Canvas('codecharts.pdf')
c.setFont('Helvetica-Bold', 24)
c.drawString(72, 750, 'Testing code page charts')
cc1 = SingleByteEncodingChart()
cc1.drawOn(c, 72, 500)
cc2 = SingleByteEncodingChart(charsPerRow=32)
cc2.drawOn(c, 72, 300)
cc3 = SingleByteEncodingChart(charsPerRow=25, hex=0)
cc3.drawOn(c, 72, 100)
## c.showPage()
##
## c.setFont('Helvetica-Bold', 24)
## c.drawString(72, 750, 'Multi-byte Kuten code chart examples')
## KutenRowCodeChart(1, 'HeiseiMin-W3','EUC-H').drawOn(c, 72, 600)
## KutenRowCodeChart(16, 'HeiseiMin-W3','EUC-H').drawOn(c, 72, 450)
## KutenRowCodeChart(84, 'HeiseiMin-W3','EUC-H').drawOn(c, 72, 300)
##
## c.showPage()
## c.setFont('Helvetica-Bold', 24)
## c.drawString(72, 750, 'Big5 Code Chart Examples')
## #Big5CodeChart(0xA1, 'MSungStd-Light-Acro','ETenms-B5-H').drawOn(c, 72, 500)
c.save()
print 'saved codecharts.pdf'
if __name__=='__main__':
test()
| bsd-3-clause |
arnarb/greenhousedb | plants/xsettings.py | 1 | 2042 | """
Django settings for plants project.
For more information on this file, see
https://docs.djangoproject.com/en/1.7/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.7/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '(5lq03ub0m3dprqe&(nht*1uioi8q28snv*dtygltjdj*q)m##'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'plants.urls'
WSGI_APPLICATION = 'plants.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.7/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.7/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_URL = '/static/'
| gpl-3.0 |
open-synergy/account-closing | account_multicurrency_revaluation_report/__init__.py | 7 | 1046 | # -*- coding: utf-8 -*-
##############################################################################
#
# Author: Yannick Vaucher
# Copyright 2012 Camptocamp SA
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from . import wizard
from . import report
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
divio/django | django/contrib/admin/options.py | 15 | 81145 | import copy
import operator
from collections import OrderedDict
from functools import partial, reduce, update_wrapper
from django import forms
from django.conf import settings
from django.contrib import messages
from django.contrib.admin import helpers, widgets
from django.contrib.admin.checks import (
BaseModelAdminChecks, InlineModelAdminChecks, ModelAdminChecks,
)
from django.contrib.admin.exceptions import DisallowedModelAdminToField
from django.contrib.admin.templatetags.admin_static import static
from django.contrib.admin.templatetags.admin_urls import add_preserved_filters
from django.contrib.admin.utils import (
NestedObjects, flatten_fieldsets, get_deleted_objects,
lookup_needs_distinct, model_format_dict, quote, unquote,
)
from django.contrib.auth import get_permission_codename
from django.core.exceptions import (
FieldDoesNotExist, FieldError, PermissionDenied, ValidationError,
)
from django.core.paginator import Paginator
from django.core.urlresolvers import reverse
from django.db import models, router, transaction
from django.db.models.constants import LOOKUP_SEP
from django.db.models.fields import BLANK_CHOICE_DASH
from django.forms.formsets import DELETION_FIELD_NAME, all_valid
from django.forms.models import (
BaseInlineFormSet, inlineformset_factory, modelform_defines_fields,
modelform_factory, modelformset_factory,
)
from django.forms.widgets import CheckboxSelectMultiple, SelectMultiple
from django.http import Http404, HttpResponseRedirect
from django.http.response import HttpResponseBase
from django.template.response import SimpleTemplateResponse, TemplateResponse
from django.utils import six
from django.utils.decorators import method_decorator
from django.utils.encoding import force_text, python_2_unicode_compatible
from django.utils.html import escape, escapejs
from django.utils.http import urlencode
from django.utils.safestring import mark_safe
from django.utils.text import capfirst, get_text_list
from django.utils.translation import string_concat, ugettext as _, ungettext
from django.views.decorators.csrf import csrf_protect
from django.views.generic import RedirectView
IS_POPUP_VAR = '_popup'
TO_FIELD_VAR = '_to_field'
HORIZONTAL, VERTICAL = 1, 2
def get_content_type_for_model(obj):
# Since this module gets imported in the application's root package,
# it cannot import models from other applications at the module level.
from django.contrib.contenttypes.models import ContentType
return ContentType.objects.get_for_model(obj, for_concrete_model=False)
def get_ul_class(radio_style):
return 'radiolist' if radio_style == VERTICAL else 'radiolist inline'
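A quick standalone sketch of the helper above: a `radio_fields` entry declares a direction constant, and `get_ul_class()` maps it to the CSS class applied to the rendered `<ul>` (constants copied from this module):

```python
# HORIZONTAL/VERTICAL mirror the module-level constants defined above.
HORIZONTAL, VERTICAL = 1, 2

def get_ul_class(radio_style):
    # VERTICAL stacks the radio buttons; HORIZONTAL lays them out inline.
    return 'radiolist' if radio_style == VERTICAL else 'radiolist inline'

print(get_ul_class(VERTICAL))    # radiolist
print(get_ul_class(HORIZONTAL))  # radiolist inline
```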
class IncorrectLookupParameters(Exception):
pass
# Defaults for formfield_overrides. ModelAdmin subclasses can change this
# by adding to ModelAdmin.formfield_overrides.
FORMFIELD_FOR_DBFIELD_DEFAULTS = {
models.DateTimeField: {
'form_class': forms.SplitDateTimeField,
'widget': widgets.AdminSplitDateTime
},
models.DateField: {'widget': widgets.AdminDateWidget},
models.TimeField: {'widget': widgets.AdminTimeWidget},
models.TextField: {'widget': widgets.AdminTextareaWidget},
models.URLField: {'widget': widgets.AdminURLFieldWidget},
models.IntegerField: {'widget': widgets.AdminIntegerFieldWidget},
models.BigIntegerField: {'widget': widgets.AdminBigIntegerFieldWidget},
models.CharField: {'widget': widgets.AdminTextInputWidget},
models.ImageField: {'widget': widgets.AdminFileWidget},
models.FileField: {'widget': widgets.AdminFileWidget},
models.EmailField: {'widget': widgets.AdminEmailInputWidget},
}
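These defaults are merged with per-admin overrides in `BaseModelAdmin.__init__` below (dict copy, then update, so the subclass wins). A minimal standalone sketch of that merge, using string keys in place of the real model field classes:

```python
# Simplified stand-in for FORMFIELD_FOR_DBFIELD_DEFAULTS; the real dict is
# keyed by model field classes (models.TextField, ...), not strings.
DEFAULTS = {
    'TextField': {'widget': 'AdminTextareaWidget'},
    'CharField': {'widget': 'AdminTextInputWidget'},
}

class FakeAdmin:
    # A subclass only lists the field types it wants to change;
    # 'RichTextWidget' is a hypothetical widget name.
    formfield_overrides = {'TextField': {'widget': 'RichTextWidget'}}

    def __init__(self):
        # Same merge as BaseModelAdmin.__init__: defaults first, then
        # class-level overrides layered on top.
        overrides = DEFAULTS.copy()
        overrides.update(self.formfield_overrides)
        self.formfield_overrides = overrides

admin = FakeAdmin()
print(admin.formfield_overrides['TextField'])   # {'widget': 'RichTextWidget'}
print(admin.formfield_overrides['CharField'])   # {'widget': 'AdminTextInputWidget'}
```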
csrf_protect_m = method_decorator(csrf_protect)
class BaseModelAdmin(six.with_metaclass(forms.MediaDefiningClass)):
"""Functionality common to both ModelAdmin and InlineAdmin."""
raw_id_fields = ()
fields = None
exclude = None
fieldsets = None
form = forms.ModelForm
filter_vertical = ()
filter_horizontal = ()
radio_fields = {}
prepopulated_fields = {}
formfield_overrides = {}
readonly_fields = ()
ordering = None
view_on_site = True
show_full_result_count = True
checks_class = BaseModelAdminChecks
@classmethod
def check(cls, model, **kwargs):
return cls.checks_class().check(cls, model, **kwargs)
def __init__(self):
overrides = FORMFIELD_FOR_DBFIELD_DEFAULTS.copy()
overrides.update(self.formfield_overrides)
self.formfield_overrides = overrides
def formfield_for_dbfield(self, db_field, **kwargs):
"""
Hook for specifying the form Field instance for a given database Field
instance.
If kwargs are given, they're passed to the form Field's constructor.
"""
request = kwargs.pop("request", None)
# If the field specifies choices, we don't need to look for special
# admin widgets - we just need to use a select widget of some kind.
if db_field.choices:
return self.formfield_for_choice_field(db_field, request, **kwargs)
# ForeignKey or ManyToManyFields
if isinstance(db_field, (models.ForeignKey, models.ManyToManyField)):
# Combine the field kwargs with any options for formfield_overrides.
# Make sure the passed in **kwargs override anything in
# formfield_overrides because **kwargs is more specific, and should
# always win.
if db_field.__class__ in self.formfield_overrides:
kwargs = dict(self.formfield_overrides[db_field.__class__], **kwargs)
# Get the correct formfield.
if isinstance(db_field, models.ForeignKey):
formfield = self.formfield_for_foreignkey(db_field, request, **kwargs)
elif isinstance(db_field, models.ManyToManyField):
formfield = self.formfield_for_manytomany(db_field, request, **kwargs)
# For non-raw_id fields, wrap the widget with a wrapper that adds
# extra HTML -- the "add other" interface -- to the end of the
# rendered output. formfield can be None if it came from a
# OneToOneField with parent_link=True or a M2M intermediary.
if formfield and db_field.name not in self.raw_id_fields:
related_modeladmin = self.admin_site._registry.get(db_field.remote_field.model)
wrapper_kwargs = {}
if related_modeladmin:
wrapper_kwargs.update(
can_add_related=related_modeladmin.has_add_permission(request),
can_change_related=related_modeladmin.has_change_permission(request),
can_delete_related=related_modeladmin.has_delete_permission(request),
)
formfield.widget = widgets.RelatedFieldWidgetWrapper(
formfield.widget, db_field.remote_field, self.admin_site, **wrapper_kwargs
)
return formfield
# If we've got overrides for the formfield defined, use 'em. **kwargs
# passed to formfield_for_dbfield override the defaults.
for klass in db_field.__class__.mro():
if klass in self.formfield_overrides:
kwargs = dict(copy.deepcopy(self.formfield_overrides[klass]), **kwargs)
return db_field.formfield(**kwargs)
# For any other type of field, just call its formfield() method.
return db_field.formfield(**kwargs)
def formfield_for_choice_field(self, db_field, request=None, **kwargs):
"""
Get a form Field for a database Field that has declared choices.
"""
# If the field is named as a radio_field, use a RadioSelect
if db_field.name in self.radio_fields:
# Avoid stomping on custom widget/choices arguments.
if 'widget' not in kwargs:
kwargs['widget'] = widgets.AdminRadioSelect(attrs={
'class': get_ul_class(self.radio_fields[db_field.name]),
})
if 'choices' not in kwargs:
kwargs['choices'] = db_field.get_choices(
include_blank=db_field.blank,
blank_choice=[('', _('None'))]
)
return db_field.formfield(**kwargs)
def get_field_queryset(self, db, db_field, request):
"""
If the ModelAdmin specifies ordering, the queryset should respect that
        ordering. Otherwise don't specify the queryset; let the field decide
        (returns None in that case).
"""
related_admin = self.admin_site._registry.get(db_field.remote_field.model)
if related_admin is not None:
ordering = related_admin.get_ordering(request)
if ordering is not None and ordering != ():
return db_field.remote_field.model._default_manager.using(db).order_by(*ordering)
return None
def formfield_for_foreignkey(self, db_field, request=None, **kwargs):
"""
Get a form Field for a ForeignKey.
"""
db = kwargs.get('using')
if db_field.name in self.raw_id_fields:
kwargs['widget'] = widgets.ForeignKeyRawIdWidget(db_field.remote_field,
self.admin_site, using=db)
elif db_field.name in self.radio_fields:
kwargs['widget'] = widgets.AdminRadioSelect(attrs={
'class': get_ul_class(self.radio_fields[db_field.name]),
})
kwargs['empty_label'] = _('None') if db_field.blank else None
if 'queryset' not in kwargs:
queryset = self.get_field_queryset(db, db_field, request)
if queryset is not None:
kwargs['queryset'] = queryset
return db_field.formfield(**kwargs)
def formfield_for_manytomany(self, db_field, request=None, **kwargs):
"""
Get a form Field for a ManyToManyField.
"""
# If it uses an intermediary model that isn't auto created, don't show
# a field in admin.
if not db_field.remote_field.through._meta.auto_created:
return None
db = kwargs.get('using')
if db_field.name in self.raw_id_fields:
kwargs['widget'] = widgets.ManyToManyRawIdWidget(db_field.remote_field,
self.admin_site, using=db)
kwargs['help_text'] = ''
elif db_field.name in (list(self.filter_vertical) + list(self.filter_horizontal)):
kwargs['widget'] = widgets.FilteredSelectMultiple(
db_field.verbose_name,
db_field.name in self.filter_vertical
)
if 'queryset' not in kwargs:
queryset = self.get_field_queryset(db, db_field, request)
if queryset is not None:
kwargs['queryset'] = queryset
form_field = db_field.formfield(**kwargs)
if isinstance(form_field.widget, SelectMultiple) and not isinstance(form_field.widget, CheckboxSelectMultiple):
msg = _('Hold down "Control", or "Command" on a Mac, to select more than one.')
help_text = form_field.help_text
form_field.help_text = string_concat(help_text, ' ', msg) if help_text else msg
return form_field
def get_view_on_site_url(self, obj=None):
if obj is None or not self.view_on_site:
return None
if callable(self.view_on_site):
return self.view_on_site(obj)
elif self.view_on_site and hasattr(obj, 'get_absolute_url'):
# use the ContentType lookup if view_on_site is True
return reverse('admin:view_on_site', kwargs={
'content_type_id': get_content_type_for_model(obj).pk,
'object_id': obj.pk
})
def get_empty_value_display(self):
"""
Return the empty_value_display set on ModelAdmin or AdminSite.
"""
try:
return mark_safe(self.empty_value_display)
except AttributeError:
return mark_safe(self.admin_site.empty_value_display)
def get_fields(self, request, obj=None):
"""
Hook for specifying fields.
"""
return self.fields
def get_fieldsets(self, request, obj=None):
"""
Hook for specifying fieldsets.
"""
if self.fieldsets:
return self.fieldsets
return [(None, {'fields': self.get_fields(request, obj)})]
def get_ordering(self, request):
"""
Hook for specifying field ordering.
"""
return self.ordering or () # otherwise we might try to *None, which is bad ;)
def get_readonly_fields(self, request, obj=None):
"""
Hook for specifying custom readonly fields.
"""
return self.readonly_fields
def get_prepopulated_fields(self, request, obj=None):
"""
Hook for specifying custom prepopulated fields.
"""
return self.prepopulated_fields
def get_queryset(self, request):
"""
Returns a QuerySet of all model instances that can be edited by the
admin site. This is used by changelist_view.
"""
qs = self.model._default_manager.get_queryset()
# TODO: this should be handled by some parameter to the ChangeList.
ordering = self.get_ordering(request)
if ordering:
qs = qs.order_by(*ordering)
return qs
def lookup_allowed(self, lookup, value):
from django.contrib.admin.filters import SimpleListFilter
model = self.model
# Check FKey lookups that are allowed, so that popups produced by
# ForeignKeyRawIdWidget, on the basis of ForeignKey.limit_choices_to,
# are allowed to work.
for l in model._meta.related_fkey_lookups:
# As ``limit_choices_to`` can be a callable, invoke it here.
if callable(l):
l = l()
for k, v in widgets.url_params_from_lookup_dict(l).items():
if k == lookup and v == value:
return True
relation_parts = []
prev_field = None
for part in lookup.split(LOOKUP_SEP):
try:
field = model._meta.get_field(part)
except FieldDoesNotExist:
# Lookups on non-existent fields are ok, since they're ignored
# later.
break
# It is allowed to filter on values that would be found from local
# model anyways. For example, if you filter on employee__department__id,
# then the id value would be found already from employee__department_id.
if not prev_field or (prev_field.concrete and
field not in prev_field.get_path_info()[-1].target_fields):
relation_parts.append(part)
if not getattr(field, 'get_path_info', None):
# This is not a relational field, so further parts
# must be transforms.
break
prev_field = field
model = field.get_path_info()[-1].to_opts.model
if len(relation_parts) <= 1:
# Either a local field filter, or no fields at all.
return True
clean_lookup = LOOKUP_SEP.join(relation_parts)
valid_lookups = [self.date_hierarchy]
for filter_item in self.list_filter:
if isinstance(filter_item, type) and issubclass(filter_item, SimpleListFilter):
valid_lookups.append(filter_item.parameter_name)
elif isinstance(filter_item, (list, tuple)):
valid_lookups.append(filter_item[0])
else:
valid_lookups.append(filter_item)
return clean_lookup in valid_lookups
def to_field_allowed(self, request, to_field):
"""
Returns True if the model associated with this admin should be
allowed to be referenced by the specified field.
"""
opts = self.model._meta
try:
field = opts.get_field(to_field)
except FieldDoesNotExist:
return False
# Always allow referencing the primary key since it's already possible
# to get this information from the change view URL.
if field.primary_key:
return True
# Allow reverse relationships to models defining m2m fields if they
# target the specified field.
for many_to_many in opts.many_to_many:
if many_to_many.m2m_target_field_name() == to_field:
return True
# Make sure at least one of the models registered for this site
# references this field through a FK or a M2M relationship.
registered_models = set()
for model, admin in self.admin_site._registry.items():
registered_models.add(model)
for inline in admin.inlines:
registered_models.add(inline.model)
related_objects = (
f for f in opts.get_fields(include_hidden=True)
if (f.auto_created and not f.concrete)
)
for related_object in related_objects:
related_model = related_object.related_model
if (any(issubclass(model, related_model) for model in registered_models) and
related_object.field.remote_field.get_related_field() == field):
return True
return False
def has_add_permission(self, request):
"""
Returns True if the given request has permission to add an object.
Can be overridden by the user in subclasses.
"""
opts = self.opts
codename = get_permission_codename('add', opts)
return request.user.has_perm("%s.%s" % (opts.app_label, codename))
def has_change_permission(self, request, obj=None):
"""
        Returns True if the given request has permission to change the given
        Django model instance. The default implementation doesn't examine the
        `obj` parameter.
        Can be overridden by the user in subclasses. In that case it should
        return True if the given request has permission to change the `obj`
        model instance. If `obj` is None, this should return True if the given
        request has permission to change *any* object of the given type.
"""
opts = self.opts
codename = get_permission_codename('change', opts)
return request.user.has_perm("%s.%s" % (opts.app_label, codename))
def has_delete_permission(self, request, obj=None):
"""
        Returns True if the given request has permission to delete the given
        Django model instance. The default implementation doesn't examine the
        `obj` parameter.
        Can be overridden by the user in subclasses. In that case it should
        return True if the given request has permission to delete the `obj`
        model instance. If `obj` is None, this should return True if the given
        request has permission to delete *any* object of the given type.
"""
opts = self.opts
codename = get_permission_codename('delete', opts)
return request.user.has_perm("%s.%s" % (opts.app_label, codename))
def has_module_permission(self, request):
"""
Returns True if the given request has any permission in the given
app label.
        Can be overridden by the user in subclasses. In that case it should
return True if the given request has permission to view the module on
the admin index page and access the module's index page. Overriding it
does not restrict access to the add, change or delete views. Use
`ModelAdmin.has_(add|change|delete)_permission` for that.
"""
return request.user.has_module_perms(self.opts.app_label)
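The three `has_*_permission` hooks above all check a permission string of the form `"<app_label>.<action>_<model_name>"`. A standalone sketch of that naming scheme (the real `get_permission_codename` takes the model's `_meta` options; a bare, hypothetical model name is used here instead):

```python
# Simplified stand-in for django.contrib.auth.get_permission_codename.
def get_permission_codename(action, model_name):
    # Codename is "<action>_<model_name>", e.g. "add_book".
    return '%s_%s' % (action, model_name)

# The full permission string checked via request.user.has_perm():
for action in ('add', 'change', 'delete'):
    print('myapp.%s' % get_permission_codename(action, 'book'))
# myapp.add_book
# myapp.change_book
# myapp.delete_book
```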
@python_2_unicode_compatible
class ModelAdmin(BaseModelAdmin):
"Encapsulates all admin options and functionality for a given model."
list_display = ('__str__',)
list_display_links = ()
list_filter = ()
list_select_related = False
list_per_page = 100
list_max_show_all = 200
list_editable = ()
search_fields = ()
date_hierarchy = None
save_as = False
save_on_top = False
paginator = Paginator
preserve_filters = True
inlines = []
    # Custom templates (designed to be overridden in subclasses)
add_form_template = None
change_form_template = None
change_list_template = None
delete_confirmation_template = None
delete_selected_confirmation_template = None
object_history_template = None
# Actions
actions = []
action_form = helpers.ActionForm
actions_on_top = True
actions_on_bottom = False
actions_selection_counter = True
checks_class = ModelAdminChecks
def __init__(self, model, admin_site):
self.model = model
self.opts = model._meta
self.admin_site = admin_site
super(ModelAdmin, self).__init__()
def __str__(self):
return "%s.%s" % (self.model._meta.app_label, self.__class__.__name__)
def get_inline_instances(self, request, obj=None):
inline_instances = []
for inline_class in self.inlines:
inline = inline_class(self.model, self.admin_site)
if request:
if not (inline.has_add_permission(request) or
inline.has_change_permission(request, obj) or
inline.has_delete_permission(request, obj)):
continue
if not inline.has_add_permission(request):
inline.max_num = 0
inline_instances.append(inline)
return inline_instances
def get_urls(self):
from django.conf.urls import url
def wrap(view):
def wrapper(*args, **kwargs):
return self.admin_site.admin_view(view)(*args, **kwargs)
wrapper.model_admin = self
return update_wrapper(wrapper, view)
info = self.model._meta.app_label, self.model._meta.model_name
urlpatterns = [
url(r'^$', wrap(self.changelist_view), name='%s_%s_changelist' % info),
url(r'^add/$', wrap(self.add_view), name='%s_%s_add' % info),
url(r'^(.+)/history/$', wrap(self.history_view), name='%s_%s_history' % info),
url(r'^(.+)/delete/$', wrap(self.delete_view), name='%s_%s_delete' % info),
url(r'^(.+)/change/$', wrap(self.change_view), name='%s_%s_change' % info),
# For backwards compatibility (was the change url before 1.9)
url(r'^(.+)/$', wrap(RedirectView.as_view(
pattern_name='%s:%s_%s_change' % ((self.admin_site.name,) + info)
))),
]
return urlpatterns
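The URL names generated in `get_urls()` above follow the convention `"<app_label>_<model_name>_<view>"`, later reversed as `"admin:<name>"`. A standalone sketch of how the `info` tuple expands (app/model names are hypothetical):

```python
# info mirrors (self.model._meta.app_label, self.model._meta.model_name).
info = ('myapp', 'book')  # hypothetical (app_label, model_name)

# The five named views registered by get_urls(), in the same order:
names = [
    '%s_%s_changelist' % info,
    '%s_%s_add' % info,
    '%s_%s_history' % info,
    '%s_%s_delete' % info,
    '%s_%s_change' % info,
]
print(names[0])  # myapp_book_changelist
```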
def urls(self):
return self.get_urls()
urls = property(urls)
@property
def media(self):
extra = '' if settings.DEBUG else '.min'
js = [
'core.js',
'admin/RelatedObjectLookups.js',
'vendor/jquery/jquery%s.js' % extra,
'jquery.init.js',
'actions%s.js' % extra,
'urlify.js',
'prepopulate%s.js' % extra,
]
return forms.Media(js=[static('admin/js/%s' % url) for url in js])
def get_model_perms(self, request):
"""
Returns a dict of all perms for this model. This dict has the keys
        ``add``, ``change``, and ``delete`` mapping to True/False for each
of those actions.
"""
return {
'add': self.has_add_permission(request),
'change': self.has_change_permission(request),
'delete': self.has_delete_permission(request),
}
def get_fields(self, request, obj=None):
if self.fields:
return self.fields
form = self.get_form(request, obj, fields=None)
return list(form.base_fields) + list(self.get_readonly_fields(request, obj))
def get_form(self, request, obj=None, **kwargs):
"""
Returns a Form class for use in the admin add view. This is used by
add_view and change_view.
"""
if 'fields' in kwargs:
fields = kwargs.pop('fields')
else:
fields = flatten_fieldsets(self.get_fieldsets(request, obj))
if self.exclude is None:
exclude = []
else:
exclude = list(self.exclude)
readonly_fields = self.get_readonly_fields(request, obj)
exclude.extend(readonly_fields)
if self.exclude is None and hasattr(self.form, '_meta') and self.form._meta.exclude:
# Take the custom ModelForm's Meta.exclude into account only if the
# ModelAdmin doesn't define its own.
exclude.extend(self.form._meta.exclude)
# if exclude is an empty list we pass None to be consistent with the
# default on modelform_factory
exclude = exclude or None
# Remove declared form fields which are in readonly_fields.
new_attrs = OrderedDict(
(f, None) for f in readonly_fields
if f in self.form.declared_fields
)
form = type(self.form.__name__, (self.form,), new_attrs)
defaults = {
"form": form,
"fields": fields,
"exclude": exclude,
"formfield_callback": partial(self.formfield_for_dbfield, request=request),
}
defaults.update(kwargs)
if defaults['fields'] is None and not modelform_defines_fields(defaults['form']):
defaults['fields'] = forms.ALL_FIELDS
try:
return modelform_factory(self.model, **defaults)
except FieldError as e:
raise FieldError('%s. Check fields/fieldsets/exclude attributes of class %s.'
% (e, self.__class__.__name__))
def get_changelist(self, request, **kwargs):
"""
Returns the ChangeList class for use on the changelist page.
"""
from django.contrib.admin.views.main import ChangeList
return ChangeList
def get_object(self, request, object_id, from_field=None):
"""
        Returns an instance matching the field and value provided; the primary
        key is used if no field is provided. Returns ``None`` if no match is
        found or the object_id fails validation.
"""
queryset = self.get_queryset(request)
model = queryset.model
field = model._meta.pk if from_field is None else model._meta.get_field(from_field)
try:
object_id = field.to_python(object_id)
return queryset.get(**{field.name: object_id})
except (model.DoesNotExist, ValidationError, ValueError):
return None
def get_changelist_form(self, request, **kwargs):
"""
Returns a Form class for use in the Formset on the changelist page.
"""
defaults = {
"formfield_callback": partial(self.formfield_for_dbfield, request=request),
}
defaults.update(kwargs)
if (defaults.get('fields') is None
and not modelform_defines_fields(defaults.get('form'))):
defaults['fields'] = forms.ALL_FIELDS
return modelform_factory(self.model, **defaults)
def get_changelist_formset(self, request, **kwargs):
"""
Returns a FormSet class for use on the changelist page if list_editable
is used.
"""
defaults = {
"formfield_callback": partial(self.formfield_for_dbfield, request=request),
}
defaults.update(kwargs)
return modelformset_factory(self.model,
self.get_changelist_form(request), extra=0,
fields=self.list_editable, **defaults)
def get_formsets_with_inlines(self, request, obj=None):
"""
Yields formsets and the corresponding inlines.
"""
for inline in self.get_inline_instances(request, obj):
yield inline.get_formset(request, obj), inline
def get_paginator(self, request, queryset, per_page, orphans=0, allow_empty_first_page=True):
return self.paginator(queryset, per_page, orphans, allow_empty_first_page)
def log_addition(self, request, object, message):
"""
Log that an object has been successfully added.
The default implementation creates an admin LogEntry object.
"""
from django.contrib.admin.models import LogEntry, ADDITION
LogEntry.objects.log_action(
user_id=request.user.pk,
content_type_id=get_content_type_for_model(object).pk,
object_id=object.pk,
object_repr=force_text(object),
action_flag=ADDITION,
change_message=message,
)
def log_change(self, request, object, message):
"""
Log that an object has been successfully changed.
The default implementation creates an admin LogEntry object.
"""
from django.contrib.admin.models import LogEntry, CHANGE
LogEntry.objects.log_action(
user_id=request.user.pk,
content_type_id=get_content_type_for_model(object).pk,
object_id=object.pk,
object_repr=force_text(object),
action_flag=CHANGE,
change_message=message,
)
def log_deletion(self, request, object, object_repr):
"""
Log that an object will be deleted. Note that this method must be
called before the deletion.
The default implementation creates an admin LogEntry object.
"""
from django.contrib.admin.models import LogEntry, DELETION
LogEntry.objects.log_action(
user_id=request.user.pk,
content_type_id=get_content_type_for_model(object).pk,
object_id=object.pk,
object_repr=object_repr,
action_flag=DELETION,
)
def action_checkbox(self, obj):
"""
A list_display column containing a checkbox widget.
"""
return helpers.checkbox.render(helpers.ACTION_CHECKBOX_NAME, force_text(obj.pk))
action_checkbox.short_description = mark_safe('<input type="checkbox" id="action-toggle" />')
action_checkbox.allow_tags = True
def get_actions(self, request):
"""
Return a dictionary mapping the names of all actions for this
ModelAdmin to a tuple of (callable, name, description) for each action.
"""
# If self.actions is explicitly set to None that means that we don't
# want *any* actions enabled on this page.
if self.actions is None or IS_POPUP_VAR in request.GET:
return OrderedDict()
actions = []
# Gather actions from the admin site first
for (name, func) in self.admin_site.actions:
description = getattr(func, 'short_description', name.replace('_', ' '))
actions.append((func, name, description))
# Then gather them from the model admin and all parent classes,
# starting with self and working back up.
for klass in self.__class__.mro()[::-1]:
class_actions = getattr(klass, 'actions', [])
# Avoid trying to iterate over None
if not class_actions:
continue
actions.extend(self.get_action(action) for action in class_actions)
# get_action might have returned None, so filter any of those out.
actions = filter(None, actions)
# Convert the actions into an OrderedDict keyed by name.
actions = OrderedDict(
(name, (func, name, desc))
for func, name, desc in actions
)
return actions
def get_action_choices(self, request, default_choices=BLANK_CHOICE_DASH):
"""
Return a list of choices for use in a form object. Each choice is a
tuple (name, description).
"""
choices = [] + default_choices
for func, name, description in six.itervalues(self.get_actions(request)):
choice = (name, description % model_format_dict(self.opts))
choices.append(choice)
return choices
def get_action(self, action):
"""
Return a given action from a parameter, which can either be a callable,
or the name of a method on the ModelAdmin. Return is a tuple of
(callable, name, description).
"""
# If the action is a callable, just use it.
if callable(action):
func = action
action = action.__name__
# Next, look for a method. Grab it off self.__class__ to get an unbound
# method instead of a bound one; this ensures that the calling
# conventions are the same for functions and methods.
elif hasattr(self.__class__, action):
func = getattr(self.__class__, action)
# Finally, look for a named method on the admin site
else:
try:
func = self.admin_site.get_action(action)
except KeyError:
return None
if hasattr(func, 'short_description'):
description = func.short_description
else:
description = capfirst(action.replace('_', ' '))
return func, action, description
def get_list_display(self, request):
"""
Return a sequence containing the fields to be displayed on the
changelist.
"""
return self.list_display
def get_list_display_links(self, request, list_display):
"""
Return a sequence containing the fields to be displayed as links
on the changelist. The list_display parameter is the list of fields
returned by get_list_display().
"""
if self.list_display_links or self.list_display_links is None or not list_display:
return self.list_display_links
else:
# Use only the first item in list_display as link
return list(list_display)[:1]
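The fallback logic in `get_list_display_links()` above is compact; a standalone sketch of the same decision, pulled out as a plain function for illustration:

```python
def pick_links(list_display_links, list_display):
    # An explicit setting wins, as does None (which means "no links at all");
    # otherwise the first list_display column becomes the link.
    if list_display_links or list_display_links is None or not list_display:
        return list_display_links
    return list(list_display)[:1]

print(pick_links((), ['title', 'author']))           # ['title']
print(pick_links(('author',), ['title', 'author']))  # ('author',)
print(pick_links(None, ['title']))                   # None
```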
def get_list_filter(self, request):
"""
Returns a sequence containing the fields to be displayed as filters in
the right sidebar of the changelist page.
"""
return self.list_filter
def get_list_select_related(self, request):
"""
Returns a list of fields to add to the select_related() part of the
changelist items query.
"""
return self.list_select_related
def get_search_fields(self, request):
"""
Returns a sequence containing the fields to be searched whenever
somebody submits a search query.
"""
return self.search_fields
def get_search_results(self, request, queryset, search_term):
"""
Returns a tuple containing a queryset to implement the search,
and a boolean indicating if the results may contain duplicates.
"""
# Apply keyword searches.
def construct_search(field_name):
if field_name.startswith('^'):
return "%s__istartswith" % field_name[1:]
elif field_name.startswith('='):
return "%s__iexact" % field_name[1:]
elif field_name.startswith('@'):
return "%s__search" % field_name[1:]
else:
return "%s__icontains" % field_name
use_distinct = False
search_fields = self.get_search_fields(request)
if search_fields and search_term:
orm_lookups = [construct_search(str(search_field))
for search_field in search_fields]
for bit in search_term.split():
or_queries = [models.Q(**{orm_lookup: bit})
for orm_lookup in orm_lookups]
queryset = queryset.filter(reduce(operator.or_, or_queries))
if not use_distinct:
for search_spec in orm_lookups:
if lookup_needs_distinct(self.opts, search_spec):
use_distinct = True
break
return queryset, use_distinct
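The nested `construct_search()` helper above maps `search_fields` prefixes to ORM lookup suffixes; reproduced here standalone so the mapping can be seen at a glance:

```python
def construct_search(field_name):
    if field_name.startswith('^'):
        return "%s__istartswith" % field_name[1:]   # prefix match
    elif field_name.startswith('='):
        return "%s__iexact" % field_name[1:]        # exact (case-insensitive)
    elif field_name.startswith('@'):
        return "%s__search" % field_name[1:]        # full-text search
    else:
        return "%s__icontains" % field_name         # default: substring

print(construct_search('^title'))  # title__istartswith
print(construct_search('=isbn'))   # isbn__iexact
print(construct_search('body'))    # body__icontains
```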
def get_preserved_filters(self, request):
"""
Returns the preserved filters querystring.
"""
match = request.resolver_match
if self.preserve_filters and match:
opts = self.model._meta
current_url = '%s:%s' % (match.app_name, match.url_name)
changelist_url = 'admin:%s_%s_changelist' % (opts.app_label, opts.model_name)
if current_url == changelist_url:
preserved_filters = request.GET.urlencode()
else:
preserved_filters = request.GET.get('_changelist_filters')
if preserved_filters:
return urlencode({'_changelist_filters': preserved_filters})
return ''
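`get_preserved_filters()` above wraps the current changelist querystring into a single `_changelist_filters` parameter. A sketch of the resulting encoding, using the standard library's `urlencode` in place of Django's wrapper:

```python
from urllib.parse import urlencode

# Hypothetical changelist GET querystring (an ordering and a search term).
changelist_qs = 'o=1&q=django'

# The whole querystring is percent-encoded as one parameter value, so it
# survives being carried through add/change/delete view URLs.
print(urlencode({'_changelist_filters': changelist_qs}))
# _changelist_filters=o%3D1%26q%3Ddjango
```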
def construct_change_message(self, request, form, formsets, add=False):
"""
Construct a change message from a changed object.
"""
change_message = []
if add:
change_message.append(_('Added.'))
elif form.changed_data:
change_message.append(_('Changed %s.') % get_text_list(form.changed_data, _('and')))
if formsets:
for formset in formsets:
for added_object in formset.new_objects:
change_message.append(_('Added %(name)s "%(object)s".')
% {'name': force_text(added_object._meta.verbose_name),
'object': force_text(added_object)})
for changed_object, changed_fields in formset.changed_objects:
change_message.append(_('Changed %(list)s for %(name)s "%(object)s".')
% {'list': get_text_list(changed_fields, _('and')),
'name': force_text(changed_object._meta.verbose_name),
'object': force_text(changed_object)})
for deleted_object in formset.deleted_objects:
change_message.append(_('Deleted %(name)s "%(object)s".')
% {'name': force_text(deleted_object._meta.verbose_name),
'object': force_text(deleted_object)})
change_message = ' '.join(change_message)
return change_message or _('No fields changed.')
def message_user(self, request, message, level=messages.INFO, extra_tags='',
fail_silently=False):
"""
Send a message to the user. The default implementation
posts a message using the django.contrib.messages backend.
Exposes almost the same API as messages.add_message(), but accepts the
positional arguments in a different order to maintain backwards
compatibility. For convenience, it accepts the `level` argument as
a string rather than the usual level number.
"""
if not isinstance(level, int):
# attempt to get the level if passed a string
try:
level = getattr(messages.constants, level.upper())
except AttributeError:
levels = messages.constants.DEFAULT_TAGS.values()
levels_repr = ', '.join('`%s`' % l for l in levels)
raise ValueError('Bad message level string: `%s`. '
'Possible values are: %s' % (level, levels_repr))
messages.add_message(request, level, message, extra_tags=extra_tags,
fail_silently=fail_silently)
def save_form(self, request, form, change):
"""
        Given a ModelForm, return an unsaved instance. ``change`` is True if
the object is being changed, and False if it's being added.
"""
return form.save(commit=False)
def save_model(self, request, obj, form, change):
"""
        Given a model instance, save it to the database.
"""
obj.save()
def delete_model(self, request, obj):
"""
        Given a model instance, delete it from the database.
"""
obj.delete()
def save_formset(self, request, form, formset, change):
"""
        Given an inline formset, save it to the database.
"""
formset.save()
def save_related(self, request, form, formsets, change):
"""
Given the ``HttpRequest``, the parent ``ModelForm`` instance, the
list of inline formsets and a boolean value based on whether the
parent is being added or changed, save the related objects to the
database. Note that at this point save_form() and save_model() have
already been called.
"""
form.save_m2m()
for formset in formsets:
self.save_formset(request, form, formset, change=change)
def render_change_form(self, request, context, add=False, change=False, form_url='', obj=None):
opts = self.model._meta
app_label = opts.app_label
preserved_filters = self.get_preserved_filters(request)
form_url = add_preserved_filters({'preserved_filters': preserved_filters, 'opts': opts}, form_url)
view_on_site_url = self.get_view_on_site_url(obj)
context.update({
'add': add,
'change': change,
'has_add_permission': self.has_add_permission(request),
'has_change_permission': self.has_change_permission(request, obj),
'has_delete_permission': self.has_delete_permission(request, obj),
'has_file_field': True, # FIXME - this should check if form or formsets have a FileField,
'has_absolute_url': view_on_site_url is not None,
'absolute_url': view_on_site_url,
'form_url': form_url,
'opts': opts,
'content_type_id': get_content_type_for_model(self.model).pk,
'save_as': self.save_as,
'save_on_top': self.save_on_top,
'to_field_var': TO_FIELD_VAR,
'is_popup_var': IS_POPUP_VAR,
'app_label': app_label,
})
if add and self.add_form_template is not None:
form_template = self.add_form_template
else:
form_template = self.change_form_template
request.current_app = self.admin_site.name
return TemplateResponse(request, form_template or [
"admin/%s/%s/change_form.html" % (app_label, opts.model_name),
"admin/%s/change_form.html" % app_label,
"admin/change_form.html"
], context)
def response_add(self, request, obj, post_url_continue=None):
"""
Determines the HttpResponse for the add_view stage.
"""
opts = obj._meta
pk_value = obj._get_pk_val()
preserved_filters = self.get_preserved_filters(request)
msg_dict = {'name': force_text(opts.verbose_name), 'obj': force_text(obj)}
# Here, we distinguish between different save types by checking for
# the presence of keys in request.POST.
if IS_POPUP_VAR in request.POST:
to_field = request.POST.get(TO_FIELD_VAR)
if to_field:
attr = str(to_field)
else:
attr = obj._meta.pk.attname
value = obj.serializable_value(attr)
return SimpleTemplateResponse('admin/popup_response.html', {
'value': value,
'obj': obj,
})
elif "_continue" in request.POST:
msg = _('The %(name)s "%(obj)s" was added successfully. You may edit it again below.') % msg_dict
self.message_user(request, msg, messages.SUCCESS)
if post_url_continue is None:
post_url_continue = reverse('admin:%s_%s_change' %
(opts.app_label, opts.model_name),
args=(quote(pk_value),),
current_app=self.admin_site.name)
post_url_continue = add_preserved_filters(
{'preserved_filters': preserved_filters, 'opts': opts},
post_url_continue
)
return HttpResponseRedirect(post_url_continue)
elif "_addanother" in request.POST:
msg = _('The %(name)s "%(obj)s" was added successfully. You may add another %(name)s below.') % msg_dict
self.message_user(request, msg, messages.SUCCESS)
redirect_url = request.path
redirect_url = add_preserved_filters({'preserved_filters': preserved_filters, 'opts': opts}, redirect_url)
return HttpResponseRedirect(redirect_url)
else:
msg = _('The %(name)s "%(obj)s" was added successfully.') % msg_dict
self.message_user(request, msg, messages.SUCCESS)
return self.response_post_save_add(request, obj)
def response_change(self, request, obj):
"""
Determines the HttpResponse for the change_view stage.
"""
if IS_POPUP_VAR in request.POST:
to_field = request.POST.get(TO_FIELD_VAR)
attr = str(to_field) if to_field else obj._meta.pk.attname
# Retrieve the `object_id` from the resolved pattern arguments.
value = request.resolver_match.args[0]
new_value = obj.serializable_value(attr)
return SimpleTemplateResponse('admin/popup_response.html', {
'action': 'change',
'value': escape(value),
'obj': escapejs(obj),
'new_value': escape(new_value),
})
opts = self.model._meta
pk_value = obj._get_pk_val()
preserved_filters = self.get_preserved_filters(request)
msg_dict = {'name': force_text(opts.verbose_name), 'obj': force_text(obj)}
if "_continue" in request.POST:
msg = _('The %(name)s "%(obj)s" was changed successfully. You may edit it again below.') % msg_dict
self.message_user(request, msg, messages.SUCCESS)
redirect_url = request.path
redirect_url = add_preserved_filters({'preserved_filters': preserved_filters, 'opts': opts}, redirect_url)
return HttpResponseRedirect(redirect_url)
elif "_saveasnew" in request.POST:
msg = _('The %(name)s "%(obj)s" was added successfully. You may edit it again below.') % msg_dict
self.message_user(request, msg, messages.SUCCESS)
redirect_url = reverse('admin:%s_%s_change' %
(opts.app_label, opts.model_name),
args=(pk_value,),
current_app=self.admin_site.name)
redirect_url = add_preserved_filters({'preserved_filters': preserved_filters, 'opts': opts}, redirect_url)
return HttpResponseRedirect(redirect_url)
elif "_addanother" in request.POST:
msg = _('The %(name)s "%(obj)s" was changed successfully. You may add another %(name)s below.') % msg_dict
self.message_user(request, msg, messages.SUCCESS)
redirect_url = reverse('admin:%s_%s_add' %
(opts.app_label, opts.model_name),
current_app=self.admin_site.name)
redirect_url = add_preserved_filters({'preserved_filters': preserved_filters, 'opts': opts}, redirect_url)
return HttpResponseRedirect(redirect_url)
else:
msg = _('The %(name)s "%(obj)s" was changed successfully.') % msg_dict
self.message_user(request, msg, messages.SUCCESS)
return self.response_post_save_change(request, obj)
def response_post_save_add(self, request, obj):
"""
Figure out where to redirect after the 'Save' button has been pressed
when adding a new object.
"""
opts = self.model._meta
if self.has_change_permission(request, None):
post_url = reverse('admin:%s_%s_changelist' %
(opts.app_label, opts.model_name),
current_app=self.admin_site.name)
preserved_filters = self.get_preserved_filters(request)
post_url = add_preserved_filters({'preserved_filters': preserved_filters, 'opts': opts}, post_url)
else:
post_url = reverse('admin:index',
current_app=self.admin_site.name)
return HttpResponseRedirect(post_url)
def response_post_save_change(self, request, obj):
"""
Figure out where to redirect after the 'Save' button has been pressed
when editing an existing object.
"""
opts = self.model._meta
if self.has_change_permission(request, None):
post_url = reverse('admin:%s_%s_changelist' %
(opts.app_label, opts.model_name),
current_app=self.admin_site.name)
preserved_filters = self.get_preserved_filters(request)
post_url = add_preserved_filters({'preserved_filters': preserved_filters, 'opts': opts}, post_url)
else:
post_url = reverse('admin:index',
current_app=self.admin_site.name)
return HttpResponseRedirect(post_url)
def response_action(self, request, queryset):
"""
Handle an admin action. This is called if a request is POSTed to the
changelist; it returns an HttpResponse if the action was handled, and
None otherwise.
"""
# There can be multiple action forms on the page (at the top
# and bottom of the change list, for example). Get the action
# whose button was pushed.
try:
action_index = int(request.POST.get('index', 0))
except ValueError:
action_index = 0
# Construct the action form.
data = request.POST.copy()
data.pop(helpers.ACTION_CHECKBOX_NAME, None)
data.pop("index", None)
# Use the action whose button was pushed
try:
data.update({'action': data.getlist('action')[action_index]})
except IndexError:
            # If the chosen form didn't supply an action, the POST data is
            # invalid; leaving 'action' unset makes the validation check
            # below fail, so nothing more needs to be done here.
pass
action_form = self.action_form(data, auto_id=None)
action_form.fields['action'].choices = self.get_action_choices(request)
# If the form's valid we can handle the action.
if action_form.is_valid():
action = action_form.cleaned_data['action']
select_across = action_form.cleaned_data['select_across']
func = self.get_actions(request)[action][0]
# Get the list of selected PKs. If nothing's selected, we can't
# perform an action on it, so bail. Except we want to perform
# the action explicitly on all objects.
selected = request.POST.getlist(helpers.ACTION_CHECKBOX_NAME)
if not selected and not select_across:
# Reminder that something needs to be selected or nothing will happen
msg = _("Items must be selected in order to perform "
"actions on them. No items have been changed.")
self.message_user(request, msg, messages.WARNING)
return None
if not select_across:
# Perform the action only on the selected objects
queryset = queryset.filter(pk__in=selected)
response = func(self, request, queryset)
# Actions may return an HttpResponse-like object, which will be
# used as the response from the POST. If not, we'll be a good
# little HTTP citizen and redirect back to the changelist page.
if isinstance(response, HttpResponseBase):
return response
else:
return HttpResponseRedirect(request.get_full_path())
else:
msg = _("No action selected.")
self.message_user(request, msg, messages.WARNING)
return None
def response_delete(self, request, obj_display, obj_id):
"""
Determines the HttpResponse for the delete_view stage.
"""
opts = self.model._meta
if IS_POPUP_VAR in request.POST:
return SimpleTemplateResponse('admin/popup_response.html', {
'action': 'delete',
'value': escape(obj_id),
})
self.message_user(request,
_('The %(name)s "%(obj)s" was deleted successfully.') % {
'name': force_text(opts.verbose_name),
'obj': force_text(obj_display),
}, messages.SUCCESS)
if self.has_change_permission(request, None):
post_url = reverse('admin:%s_%s_changelist' %
(opts.app_label, opts.model_name),
current_app=self.admin_site.name)
preserved_filters = self.get_preserved_filters(request)
post_url = add_preserved_filters(
{'preserved_filters': preserved_filters, 'opts': opts}, post_url
)
else:
post_url = reverse('admin:index',
current_app=self.admin_site.name)
return HttpResponseRedirect(post_url)
def render_delete_form(self, request, context):
opts = self.model._meta
app_label = opts.app_label
request.current_app = self.admin_site.name
context.update(
to_field_var=TO_FIELD_VAR,
is_popup_var=IS_POPUP_VAR,
)
return TemplateResponse(request,
self.delete_confirmation_template or [
"admin/{}/{}/delete_confirmation.html".format(app_label, opts.model_name),
"admin/{}/delete_confirmation.html".format(app_label),
"admin/delete_confirmation.html"
], context)
def get_inline_formsets(self, request, formsets, inline_instances,
obj=None):
inline_admin_formsets = []
for inline, formset in zip(inline_instances, formsets):
fieldsets = list(inline.get_fieldsets(request, obj))
readonly = list(inline.get_readonly_fields(request, obj))
prepopulated = dict(inline.get_prepopulated_fields(request, obj))
inline_admin_formset = helpers.InlineAdminFormSet(inline, formset,
fieldsets, prepopulated, readonly, model_admin=self)
inline_admin_formsets.append(inline_admin_formset)
return inline_admin_formsets
def get_changeform_initial_data(self, request):
"""
Get the initial form data.
Unless overridden, this populates from the GET params.
"""
initial = dict(request.GET.items())
for k in initial:
try:
f = self.model._meta.get_field(k)
except FieldDoesNotExist:
continue
# We have to special-case M2Ms as a list of comma-separated PKs.
if isinstance(f, models.ManyToManyField):
initial[k] = initial[k].split(",")
return initial
@csrf_protect_m
@transaction.atomic
def changeform_view(self, request, object_id=None, form_url='', extra_context=None):
to_field = request.POST.get(TO_FIELD_VAR, request.GET.get(TO_FIELD_VAR))
if to_field and not self.to_field_allowed(request, to_field):
raise DisallowedModelAdminToField("The field %s cannot be referenced." % to_field)
model = self.model
opts = model._meta
add = object_id is None
if add:
if not self.has_add_permission(request):
raise PermissionDenied
obj = None
else:
obj = self.get_object(request, unquote(object_id), to_field)
if not self.has_change_permission(request, obj):
raise PermissionDenied
if obj is None:
raise Http404(_('%(name)s object with primary key %(key)r does not exist.') % {
'name': force_text(opts.verbose_name), 'key': escape(object_id)})
if request.method == 'POST' and "_saveasnew" in request.POST:
object_id = None
obj = None
ModelForm = self.get_form(request, obj)
if request.method == 'POST':
form = ModelForm(request.POST, request.FILES, instance=obj)
if form.is_valid():
form_validated = True
new_object = self.save_form(request, form, change=not add)
else:
form_validated = False
new_object = form.instance
formsets, inline_instances = self._create_formsets(request, new_object, change=not add)
if all_valid(formsets) and form_validated:
self.save_model(request, new_object, form, not add)
self.save_related(request, form, formsets, not add)
change_message = self.construct_change_message(request, form, formsets, add)
if add:
self.log_addition(request, new_object, change_message)
return self.response_add(request, new_object)
else:
self.log_change(request, new_object, change_message)
return self.response_change(request, new_object)
else:
form_validated = False
else:
if add:
initial = self.get_changeform_initial_data(request)
form = ModelForm(initial=initial)
formsets, inline_instances = self._create_formsets(request, form.instance, change=False)
else:
form = ModelForm(instance=obj)
formsets, inline_instances = self._create_formsets(request, obj, change=True)
adminForm = helpers.AdminForm(
form,
list(self.get_fieldsets(request, obj)),
self.get_prepopulated_fields(request, obj),
self.get_readonly_fields(request, obj),
model_admin=self)
media = self.media + adminForm.media
inline_formsets = self.get_inline_formsets(request, formsets, inline_instances, obj)
for inline_formset in inline_formsets:
media = media + inline_formset.media
context = dict(self.admin_site.each_context(request),
title=(_('Add %s') if add else _('Change %s')) % force_text(opts.verbose_name),
adminform=adminForm,
object_id=object_id,
original=obj,
is_popup=(IS_POPUP_VAR in request.POST or
IS_POPUP_VAR in request.GET),
to_field=to_field,
media=media,
inline_admin_formsets=inline_formsets,
errors=helpers.AdminErrorList(form, formsets),
preserved_filters=self.get_preserved_filters(request),
)
# Hide the "Save" and "Save and continue" buttons if "Save as New" was
# previously chosen to prevent the interface from getting confusing.
if request.method == 'POST' and not form_validated and "_saveasnew" in request.POST:
context['show_save'] = False
context['show_save_and_continue'] = False
context.update(extra_context or {})
return self.render_change_form(request, context, add=add, change=not add, obj=obj, form_url=form_url)
def add_view(self, request, form_url='', extra_context=None):
return self.changeform_view(request, None, form_url, extra_context)
def change_view(self, request, object_id, form_url='', extra_context=None):
return self.changeform_view(request, object_id, form_url, extra_context)
@csrf_protect_m
def changelist_view(self, request, extra_context=None):
"""
The 'change list' admin view for this model.
"""
from django.contrib.admin.views.main import ERROR_FLAG
opts = self.model._meta
app_label = opts.app_label
if not self.has_change_permission(request, None):
raise PermissionDenied
list_display = self.get_list_display(request)
list_display_links = self.get_list_display_links(request, list_display)
list_filter = self.get_list_filter(request)
search_fields = self.get_search_fields(request)
list_select_related = self.get_list_select_related(request)
# Check actions to see if any are available on this changelist
actions = self.get_actions(request)
if actions:
# Add the action checkboxes if there are any actions available.
list_display = ['action_checkbox'] + list(list_display)
ChangeList = self.get_changelist(request)
try:
cl = ChangeList(request, self.model, list_display,
list_display_links, list_filter, self.date_hierarchy,
search_fields, list_select_related, self.list_per_page,
self.list_max_show_all, self.list_editable, self)
except IncorrectLookupParameters:
# Wacky lookup parameters were given, so redirect to the main
# changelist page, without parameters, and pass an 'invalid=1'
# parameter via the query string. If wacky parameters were given
# and the 'invalid=1' parameter was already in the query string,
# something is screwed up with the database, so display an error
# page.
if ERROR_FLAG in request.GET.keys():
return SimpleTemplateResponse('admin/invalid_setup.html', {
'title': _('Database error'),
})
return HttpResponseRedirect(request.path + '?' + ERROR_FLAG + '=1')
# If the request was POSTed, this might be a bulk action or a bulk
# edit. Try to look up an action or confirmation first, but if this
# isn't an action the POST will fall through to the bulk edit check,
# below.
action_failed = False
selected = request.POST.getlist(helpers.ACTION_CHECKBOX_NAME)
# Actions with no confirmation
if (actions and request.method == 'POST' and
'index' in request.POST and '_save' not in request.POST):
if selected:
response = self.response_action(request, queryset=cl.get_queryset(request))
if response:
return response
else:
action_failed = True
else:
msg = _("Items must be selected in order to perform "
"actions on them. No items have been changed.")
self.message_user(request, msg, messages.WARNING)
action_failed = True
# Actions with confirmation
if (actions and request.method == 'POST' and
helpers.ACTION_CHECKBOX_NAME in request.POST and
'index' not in request.POST and '_save' not in request.POST):
if selected:
response = self.response_action(request, queryset=cl.get_queryset(request))
if response:
return response
else:
action_failed = True
# If we're allowing changelist editing, we need to construct a formset
# for the changelist given all the fields to be edited. Then we'll
# use the formset to validate/process POSTed data.
formset = cl.formset = None
# Handle POSTed bulk-edit data.
if (request.method == "POST" and cl.list_editable and
'_save' in request.POST and not action_failed):
FormSet = self.get_changelist_formset(request)
formset = cl.formset = FormSet(request.POST, request.FILES, queryset=cl.result_list)
if formset.is_valid():
changecount = 0
for form in formset.forms:
if form.has_changed():
obj = self.save_form(request, form, change=True)
self.save_model(request, obj, form, change=True)
self.save_related(request, form, formsets=[], change=True)
change_msg = self.construct_change_message(request, form, None)
self.log_change(request, obj, change_msg)
changecount += 1
if changecount:
if changecount == 1:
name = force_text(opts.verbose_name)
else:
name = force_text(opts.verbose_name_plural)
msg = ungettext("%(count)s %(name)s was changed successfully.",
"%(count)s %(name)s were changed successfully.",
changecount) % {'count': changecount,
'name': name,
'obj': force_text(obj)}
self.message_user(request, msg, messages.SUCCESS)
return HttpResponseRedirect(request.get_full_path())
# Handle GET -- construct a formset for display.
elif cl.list_editable:
FormSet = self.get_changelist_formset(request)
formset = cl.formset = FormSet(queryset=cl.result_list)
# Build the list of media to be used by the formset.
if formset:
media = self.media + formset.media
else:
media = self.media
# Build the action form and populate it with available actions.
if actions:
action_form = self.action_form(auto_id=None)
action_form.fields['action'].choices = self.get_action_choices(request)
else:
action_form = None
selection_note_all = ungettext('%(total_count)s selected',
'All %(total_count)s selected', cl.result_count)
context = dict(
self.admin_site.each_context(request),
module_name=force_text(opts.verbose_name_plural),
selection_note=_('0 of %(cnt)s selected') % {'cnt': len(cl.result_list)},
selection_note_all=selection_note_all % {'total_count': cl.result_count},
title=cl.title,
is_popup=cl.is_popup,
to_field=cl.to_field,
cl=cl,
media=media,
has_add_permission=self.has_add_permission(request),
opts=cl.opts,
action_form=action_form,
actions_on_top=self.actions_on_top,
actions_on_bottom=self.actions_on_bottom,
actions_selection_counter=self.actions_selection_counter,
preserved_filters=self.get_preserved_filters(request),
)
context.update(extra_context or {})
request.current_app = self.admin_site.name
return TemplateResponse(request, self.change_list_template or [
'admin/%s/%s/change_list.html' % (app_label, opts.model_name),
'admin/%s/change_list.html' % app_label,
'admin/change_list.html'
], context)
@csrf_protect_m
@transaction.atomic
def delete_view(self, request, object_id, extra_context=None):
"The 'delete' admin view for this model."
opts = self.model._meta
app_label = opts.app_label
to_field = request.POST.get(TO_FIELD_VAR, request.GET.get(TO_FIELD_VAR))
if to_field and not self.to_field_allowed(request, to_field):
raise DisallowedModelAdminToField("The field %s cannot be referenced." % to_field)
obj = self.get_object(request, unquote(object_id), to_field)
if not self.has_delete_permission(request, obj):
raise PermissionDenied
if obj is None:
raise Http404(
_('%(name)s object with primary key %(key)r does not exist.') %
{'name': force_text(opts.verbose_name), 'key': escape(object_id)}
)
using = router.db_for_write(self.model)
# Populate deleted_objects, a data structure of all related objects that
# will also be deleted.
(deleted_objects, model_count, perms_needed, protected) = get_deleted_objects(
[obj], opts, request.user, self.admin_site, using)
if request.POST: # The user has already confirmed the deletion.
if perms_needed:
raise PermissionDenied
obj_display = force_text(obj)
attr = str(to_field) if to_field else opts.pk.attname
obj_id = obj.serializable_value(attr)
self.log_deletion(request, obj, obj_display)
self.delete_model(request, obj)
return self.response_delete(request, obj_display, obj_id)
object_name = force_text(opts.verbose_name)
if perms_needed or protected:
title = _("Cannot delete %(name)s") % {"name": object_name}
else:
title = _("Are you sure?")
context = dict(
self.admin_site.each_context(request),
title=title,
object_name=object_name,
object=obj,
deleted_objects=deleted_objects,
model_count=dict(model_count).items(),
perms_lacking=perms_needed,
protected=protected,
opts=opts,
app_label=app_label,
preserved_filters=self.get_preserved_filters(request),
is_popup=(IS_POPUP_VAR in request.POST or
IS_POPUP_VAR in request.GET),
to_field=to_field,
)
context.update(extra_context or {})
return self.render_delete_form(request, context)
def history_view(self, request, object_id, extra_context=None):
"The 'history' admin view for this model."
from django.contrib.admin.models import LogEntry
# First check if the user can see this history.
model = self.model
obj = self.get_object(request, unquote(object_id))
if obj is None:
raise Http404(_('%(name)s object with primary key %(key)r does not exist.') % {
'name': force_text(model._meta.verbose_name),
'key': escape(object_id),
})
if not self.has_change_permission(request, obj):
raise PermissionDenied
# Then get the history for this object.
opts = model._meta
app_label = opts.app_label
action_list = LogEntry.objects.filter(
object_id=unquote(object_id),
content_type=get_content_type_for_model(model)
).select_related().order_by('action_time')
context = dict(self.admin_site.each_context(request),
title=_('Change history: %s') % force_text(obj),
action_list=action_list,
module_name=capfirst(force_text(opts.verbose_name_plural)),
object=obj,
opts=opts,
preserved_filters=self.get_preserved_filters(request),
)
context.update(extra_context or {})
request.current_app = self.admin_site.name
return TemplateResponse(request, self.object_history_template or [
"admin/%s/%s/object_history.html" % (app_label, opts.model_name),
"admin/%s/object_history.html" % app_label,
"admin/object_history.html"
], context)
def _create_formsets(self, request, obj, change):
"Helper function to generate formsets for add/change_view."
formsets = []
inline_instances = []
prefixes = {}
get_formsets_args = [request]
if change:
get_formsets_args.append(obj)
for FormSet, inline in self.get_formsets_with_inlines(*get_formsets_args):
prefix = FormSet.get_default_prefix()
prefixes[prefix] = prefixes.get(prefix, 0) + 1
if prefixes[prefix] != 1 or not prefix:
prefix = "%s-%s" % (prefix, prefixes[prefix])
formset_params = {
'instance': obj,
'prefix': prefix,
'queryset': inline.get_queryset(request),
}
if request.method == 'POST':
formset_params.update({
'data': request.POST,
'files': request.FILES,
'save_as_new': '_saveasnew' in request.POST
})
formsets.append(FormSet(**formset_params))
inline_instances.append(inline)
return formsets, inline_instances
class InlineModelAdmin(BaseModelAdmin):
"""
Options for inline editing of ``model`` instances.
Provide ``fk_name`` to specify the attribute name of the ``ForeignKey``
from ``model`` to its parent. This is required if ``model`` has more than
one ``ForeignKey`` to its parent.
"""
model = None
fk_name = None
formset = BaseInlineFormSet
extra = 3
min_num = None
max_num = None
template = None
verbose_name = None
verbose_name_plural = None
can_delete = True
show_change_link = False
checks_class = InlineModelAdminChecks
def __init__(self, parent_model, admin_site):
self.admin_site = admin_site
self.parent_model = parent_model
self.opts = self.model._meta
self.has_registered_model = admin_site.is_registered(self.model)
super(InlineModelAdmin, self).__init__()
if self.verbose_name is None:
self.verbose_name = self.model._meta.verbose_name
if self.verbose_name_plural is None:
self.verbose_name_plural = self.model._meta.verbose_name_plural
@property
def media(self):
extra = '' if settings.DEBUG else '.min'
js = ['vendor/jquery/jquery%s.js' % extra, 'jquery.init.js',
'inlines%s.js' % extra]
if self.filter_vertical or self.filter_horizontal:
js.extend(['SelectBox.js', 'SelectFilter2.js'])
return forms.Media(js=[static('admin/js/%s' % url) for url in js])
def get_extra(self, request, obj=None, **kwargs):
"""Hook for customizing the number of extra inline forms."""
return self.extra
def get_min_num(self, request, obj=None, **kwargs):
"""Hook for customizing the min number of inline forms."""
return self.min_num
def get_max_num(self, request, obj=None, **kwargs):
"""Hook for customizing the max number of extra inline forms."""
return self.max_num
def get_formset(self, request, obj=None, **kwargs):
"""Returns a BaseInlineFormSet class for use in admin add/change views."""
if 'fields' in kwargs:
fields = kwargs.pop('fields')
else:
fields = flatten_fieldsets(self.get_fieldsets(request, obj))
if self.exclude is None:
exclude = []
else:
exclude = list(self.exclude)
exclude.extend(self.get_readonly_fields(request, obj))
if self.exclude is None and hasattr(self.form, '_meta') and self.form._meta.exclude:
# Take the custom ModelForm's Meta.exclude into account only if the
# InlineModelAdmin doesn't define its own.
exclude.extend(self.form._meta.exclude)
# If exclude is an empty list we use None, since that's the actual
# default.
exclude = exclude or None
can_delete = self.can_delete and self.has_delete_permission(request, obj)
defaults = {
"form": self.form,
"formset": self.formset,
"fk_name": self.fk_name,
"fields": fields,
"exclude": exclude,
"formfield_callback": partial(self.formfield_for_dbfield, request=request),
"extra": self.get_extra(request, obj, **kwargs),
"min_num": self.get_min_num(request, obj, **kwargs),
"max_num": self.get_max_num(request, obj, **kwargs),
"can_delete": can_delete,
}
defaults.update(kwargs)
base_model_form = defaults['form']
class DeleteProtectedModelForm(base_model_form):
def hand_clean_DELETE(self):
"""
                We don't validate the 'DELETE' field itself because, in
                templates, it is not rendered from the field information but
                via a generic "deletion_field" of the InlineModelAdmin.
"""
if self.cleaned_data.get(DELETION_FIELD_NAME, False):
using = router.db_for_write(self._meta.model)
collector = NestedObjects(using=using)
if self.instance.pk is None:
return
collector.collect([self.instance])
if collector.protected:
objs = []
for p in collector.protected:
objs.append(
# Translators: Model verbose name and instance representation,
# suitable to be an item in a list.
_('%(class_name)s %(instance)s') % {
'class_name': p._meta.verbose_name,
'instance': p}
)
params = {'class_name': self._meta.model._meta.verbose_name,
'instance': self.instance,
'related_objects': get_text_list(objs, _('and'))}
msg = _("Deleting %(class_name)s %(instance)s would require "
"deleting the following protected related objects: "
"%(related_objects)s")
raise ValidationError(msg, code='deleting_protected', params=params)
def is_valid(self):
result = super(DeleteProtectedModelForm, self).is_valid()
self.hand_clean_DELETE()
return result
defaults['form'] = DeleteProtectedModelForm
if defaults['fields'] is None and not modelform_defines_fields(defaults['form']):
defaults['fields'] = forms.ALL_FIELDS
return inlineformset_factory(self.parent_model, self.model, **defaults)
def get_fields(self, request, obj=None):
if self.fields:
return self.fields
form = self.get_formset(request, obj, fields=None).form
return list(form.base_fields) + list(self.get_readonly_fields(request, obj))
def get_queryset(self, request):
queryset = super(InlineModelAdmin, self).get_queryset(request)
if not self.has_change_permission(request):
queryset = queryset.none()
return queryset
def has_add_permission(self, request):
if self.opts.auto_created:
# We're checking the rights to an auto-created intermediate model,
# which doesn't have its own individual permissions. The user needs
# to have the change permission for the related model in order to
# be able to do anything with the intermediate model.
return self.has_change_permission(request)
return super(InlineModelAdmin, self).has_add_permission(request)
def has_change_permission(self, request, obj=None):
opts = self.opts
if opts.auto_created:
# The model was auto-created as intermediary for a
# ManyToMany-relationship, find the target model
for field in opts.fields:
if field.remote_field and field.remote_field.model != self.parent_model:
opts = field.remote_field.model._meta
break
codename = get_permission_codename('change', opts)
return request.user.has_perm("%s.%s" % (opts.app_label, codename))
def has_delete_permission(self, request, obj=None):
if self.opts.auto_created:
# We're checking the rights to an auto-created intermediate model,
# which doesn't have its own individual permissions. The user needs
# to have the change permission for the related model in order to
# be able to do anything with the intermediate model.
return self.has_change_permission(request, obj)
return super(InlineModelAdmin, self).has_delete_permission(request, obj)
class StackedInline(InlineModelAdmin):
template = 'admin/edit_inline/stacked.html'
class TabularInline(InlineModelAdmin):
template = 'admin/edit_inline/tabular.html'
| bsd-3-clause |
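The `ModelAdmin` file above exposes its save pipeline as overridable hooks: `changeform_view` calls `save_form()` to get an unsaved instance, `save_model()` to persist it, then `save_related()`, which saves many-to-many data and each inline formset via `save_formset()`. Below is a minimal plain-Python sketch of that calling contract — no Django required, and the `FakeForm`/`MiniAdmin` names are invented for illustration, not part of Django's API:

```python
# Records the order in which the hooks fire, mirroring ModelAdmin's
# contract: by the time save_related() runs, save_form() and
# save_model() have already been called.
calls = []

class FakeForm:
    instance = "obj"
    def save(self, commit=True):
        calls.append(("save_form", commit))
        return self.instance
    def save_m2m(self):
        calls.append(("save_m2m", None))

class FakeFormset:
    def save(self):
        calls.append(("save_formset", None))

class MiniAdmin:
    def save_form(self, request, form, change):
        # Returns an *unsaved* instance (commit=False), as above.
        return form.save(commit=False)
    def save_model(self, request, obj, form, change):
        calls.append(("save_model", obj))
    def save_formset(self, request, form, formset, change):
        formset.save()
    def save_related(self, request, form, formsets, change):
        form.save_m2m()
        for formset in formsets:
            self.save_formset(request, form, formset, change=change)

admin = MiniAdmin()
form = FakeForm()
obj = admin.save_form(None, form, change=False)
admin.save_model(None, obj, form, change=False)
admin.save_related(None, form, [FakeFormset()], change=False)
print([name for name, _ in calls])
# → ['save_form', 'save_model', 'save_m2m', 'save_formset']
```

Because each step is a separate method, subclasses can override just one hook (e.g. stamping `obj.owner = request.user` in `save_model`) without re-implementing the whole pipeline.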
dpetzold/django-extensions | tests/test_uuid_field.py | 24 | 2355 | import re
import uuid
import six
from django.test import TestCase
from django_extensions.db.fields import PostgreSQLUUIDField
from .testapp.models import (
UUIDTestAgregateModel, UUIDTestManyToManyModel, UUIDTestModel_field,
UUIDTestModel_pk,
)
class UUIDFieldTest(TestCase):
def test_UUID_field_create(self):
j = UUIDTestModel_field.objects.create(a=6, uuid_field=six.u('550e8400-e29b-41d4-a716-446655440000'))
self.assertEqual(j.uuid_field, six.u('550e8400-e29b-41d4-a716-446655440000'))
def test_UUID_field_pk_create(self):
j = UUIDTestModel_pk.objects.create(uuid_field=six.u('550e8400-e29b-41d4-a716-446655440000'))
self.assertEqual(j.uuid_field, six.u('550e8400-e29b-41d4-a716-446655440000'))
self.assertEqual(j.pk, six.u('550e8400-e29b-41d4-a716-446655440000'))
def test_UUID_field_pk_agregate_create(self):
j = UUIDTestAgregateModel.objects.create(a=6, uuid_field=six.u('550e8400-e29b-41d4-a716-446655440001'))
self.assertEqual(j.a, 6)
self.assertIsInstance(j.pk, six.string_types)
self.assertEqual(len(j.pk), 36)
def test_UUID_field_manytomany_create(self):
j = UUIDTestManyToManyModel.objects.create(uuid_field=six.u('550e8400-e29b-41d4-a716-446655440010'))
self.assertEqual(j.uuid_field, six.u('550e8400-e29b-41d4-a716-446655440010'))
self.assertEqual(j.pk, six.u('550e8400-e29b-41d4-a716-446655440010'))
class PostgreSQLUUIDFieldTest(TestCase):
def test_uuid_casting(self):
        # As explained by the postgres documentation
        # http://www.postgresql.org/docs/9.1/static/datatype-uuid.html
        # a uuid needs to be a sequence of lower-case hexadecimal digits, in
        # several groups separated by hyphens, specifically a group of 8 digits
        # followed by three groups of 4 digits followed by a group of 12 digits
        matcher = re.compile(r'^[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}'
                             r'-[\da-f]{12}$')
field = PostgreSQLUUIDField()
for value in (str(uuid.uuid4()), uuid.uuid4().urn, uuid.uuid4().hex,
uuid.uuid4().int, uuid.uuid4().bytes):
prepared_value = field.get_db_prep_value(value, None)
self.assertTrue(matcher.match(prepared_value) is not None,
prepared_value)
| mit |
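The canonical-form coercion that `PostgreSQLUUIDFieldTest` above exercises through Django's field machinery can be sketched with the standard library alone. `canonicalize` is a hypothetical helper name for illustration, not the django-extensions API:

```python
import re
import uuid

# Lower-case canonical form: 8-4-4-4-12 hex digits, as PostgreSQL expects.
UUID_RE = re.compile(r'^[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12}$')

def canonicalize(value):
    """Coerce any common uuid representation to the canonical string form."""
    if isinstance(value, uuid.UUID):
        return str(value)
    if isinstance(value, int):
        return str(uuid.UUID(int=value))
    if isinstance(value, bytes):
        return str(uuid.UUID(bytes=value))
    return str(uuid.UUID(value))  # canonical, hex, or urn string forms

u = uuid.uuid4()
for value in (str(u), u.urn, u.hex, u.int, u.bytes):
    assert UUID_RE.match(canonicalize(value))
```

This accepts the same five input shapes the test feeds to `get_db_prep_value`.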
jack198345/volatility | contrib/plugins/example.py | 58 | 2769 | # Volatility
#
# Authors:
# Mike Auty <mike.auty@gmail.com>
#
# This file is part of Volatility.
#
# Volatility is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# Volatility is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Volatility. If not, see <http://www.gnu.org/licenses/>.
#
import volatility.timefmt as timefmt
import volatility.obj as obj
import volatility.utils as utils
import volatility.commands as commands
#pylint: disable-msg=C0111
class DateTime(commands.Command):
"""A simple example plugin that gets the date/time information from a Windows image"""
def calculate(self):
"""Calculate and carry out any processing that may take time upon the image"""
# Load the address space
addr_space = utils.load_as(self._config)
# Call a subfunction so that it can be used by other plugins
return self.get_image_time(addr_space)
def get_image_time(self, addr_space):
"""Extracts the time and date from the KUSER_SHARED_DATA area"""
# Get the Image Datetime
result = {}
# Create a VOLATILITY_MAGIC object to look up the location of certain constants
# Get the KUSER_SHARED_DATA location
KUSER_SHARED_DATA = obj.VolMagic(addr_space).KUSER_SHARED_DATA.v()
# Create the _KUSER_SHARED_DATA object at the appropriate offset
k = obj.Object("_KUSER_SHARED_DATA",
offset = KUSER_SHARED_DATA,
vm = addr_space)
# Start reading members from it
result['ImageDatetime'] = k.SystemTime
result['ImageTz'] = timefmt.OffsetTzInfo(-k.TimeZoneBias.as_windows_timestamp() / 10000000)
# Return any results we got
return result
def render_text(self, outfd, data):
"""Renders the calculated data as text to outfd"""
# Convert the result into a datetime object for display in local and non local format
dt = data['ImageDatetime'].as_datetime()
# Display the datetime in UTC as taken from the image
outfd.write("Image date and time : {0}\n".format(data['ImageDatetime']))
# Display the datetime taking into account the timezone of the image itself
outfd.write("Image local date and time : {0}\n".format(timefmt.display_datetime(dt, data['ImageTz'])))
| gpl-2.0 |
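The `TimeZoneBias` expression above divides by 10000000 because Windows timestamps count 100-nanosecond intervals. A self-contained sketch of that conversion — the helper names here are illustrative, not Volatility APIs:

```python
from datetime import datetime, timedelta

# Windows FILETIME-style timestamps count 100ns intervals since 1601-01-01 UTC.
WINDOWS_EPOCH = datetime(1601, 1, 1)

def windows_timestamp_to_datetime(ts):
    """Convert a count of 100ns intervals into a naive UTC datetime."""
    return WINDOWS_EPOCH + timedelta(microseconds=ts // 10)

def windows_timestamp_to_seconds(ts):
    """The divide-by-10000000 used for TimeZoneBias above."""
    return ts / 10000000.0

# The Unix epoch lies 11644473600 seconds after the Windows epoch.
assert windows_timestamp_to_datetime(116444736000000000) == datetime(1970, 1, 1)
```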
pklaus/silhouette | src/utils.py | 1 | 2069 | import math
import re
# units
from pint import UnitRegistry
units = UnitRegistry()
# my guess as to why 508 is the magic number is the motor steps and the belt TPI
# I am guessing the belt is 2.54 TPI, and the motor can provide 200 steps
# 2.54 * 200 = 508
units.define('steps = inch / 508.0 = step')
# For SVG files, Silhouette Studio defines one inch as 72 points
units.define('dpi = inch / 72.0')
def steps(val):
val = unit(val, unit=None)
return int(val.to("steps").magnitude)
## units
DEFAULT_UNIT = "mm"
def unit(val, **kw):
_unit = kw.get("unit", DEFAULT_UNIT)
if _unit is not None:
_unit = units.parse_expression(_unit)
if type(val) != units.Quantity:
if type(val) in (int, float):
assert _unit, "value %r of type '%r' requires a unit definition" % (val, type(val))
val = val * _unit
elif type(val) in (str, unicode):
val = units.parse_expression(val)
else:
raise TypeError("I don't know how to convert type '%s' to a unit" % str(type(val)))
assert type(val) == units.Quantity, "%r != %r" % (type(val), units.Quantity)
if _unit:
val = val.to(_unit)
return val
def inch2mm(inches):
inches = unit(inches, unit="inch")
return inches.to(units.mm).magnitude
def mm2inch(mm):
mm = unit(mm, unit="mm")
return mm.to(units.inch).magnitude
def circle(**kw):
assert "radius" in kw, "Need radius keyword argument"
defs = {"steps": 20, "center_x": "0in", "center_y": "0in", "phase": 0}
_kw = defs.copy()
_kw.update(kw)
_steps = int(_kw["steps"])
radius = unit(_kw["radius"])
center_x = unit(_kw["center_x"])
center_y = unit(_kw["center_y"])
phase = float(_kw["phase"])
#
if _steps < 3:
raise ValueError("3 or more steps are required")
radstep = (2 * math.pi) / float(_steps - 1)
for rad in range(int(_steps)):
x = math.cos(rad * radstep + phase) * radius + center_x
y = math.sin(rad * radstep + phase) * radius + center_y
yield (steps(x), steps(y))
| mit |
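The `steps` unit above is defined through pint, but the conversion itself is simple arithmetic; a dependency-free sketch with hypothetical helper names:

```python
STEPS_PER_INCH = 508.0  # 200 motor steps * 2.54 TPI, per the comment above
MM_PER_INCH = 25.4

def inch_to_steps(inches):
    """Inches to device steps, truncated like steps() above."""
    return int(inches * STEPS_PER_INCH)

def mm_to_steps(mm):
    """Millimetres to device steps via the inch definition."""
    return int(mm / MM_PER_INCH * STEPS_PER_INCH)
```

For example, one inch maps to exactly 508 steps, so `inch_to_steps(1)` returns 508 and `mm_to_steps(25.4)` returns the same.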
ingokegel/intellij-community | python/helpers/pycharm/appcfg_fetcher.py | 45 | 1634 | import sys
import optparse
from django_manage_commands_provider import _xml
class Option:
def __init__(self):
self.long = []
self.short = []
self.arg = None
self.help = None
def dump(self, dumper):
dumper.add_command_option(self.long, self.short, self.help, self.arg)
def parse_option_desc(option_desc):
option = Option()
option.short = option_desc._short_opts
option.long = option_desc._long_opts
option.help = option_desc.help
if option_desc.nargs > 0:
option.arg = (option_desc.nargs, option_desc.type)
return option
def get_options(options_parser):
return map(parse_option_desc, options_parser.option_list)
def dump_actions(dumper, app):
common_options = get_options(app._GetOptionParser())
for name, action in app.actions.iteritems():
dumper.start_command(name, action.short_desc)
args = action.usage.split(name.split(' ')[0])[-1].strip()
dumper.set_arguments(args)
for option in common_options:
option.dump(dumper)
if action.options:
parser = optparse.OptionParser(conflict_handler='resolve')
action.options(app, parser)
for option in get_options(parser):
option.dump(dumper)
dumper.close_command()
if __name__ == "__main__":
sys.path.append(sys.argv[1])
import appcfg
try:
appcfg.run_file('appcfg.py', globals())
finally:
app = AppCfgApp(['appcfg.py', 'help'])
dumper = _xml.XmlDumper()
dump_actions(dumper, app)
print(dumper.xml)
sys.exit(0)
| apache-2.0 |
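`parse_option_desc` above reads optparse's private `Option` attributes (`_short_opts`, `_long_opts`, `nargs`, `type`). A standalone demonstration of what those attributes hold:

```python
import optparse

parser = optparse.OptionParser()
parser.add_option('-v', '--verbose', action='store_true', help='be chatty')
parser.add_option('-o', '--output', type='string', help='output file')

# Same private attributes parse_option_desc() relies on.
opts = {o.get_opt_string(): o for o in parser.option_list}
assert opts['--output']._short_opts == ['-o']
assert opts['--output'].nargs == 1          # typed options take one argument
assert opts['--verbose'].nargs is None      # store_true takes no argument
```

Note these are private attributes, so this coupling can break across optparse versions, which is presumably why the dumper isolates it in one function.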
kenwang76/readthedocs.org | readthedocs/rtd_tests/tests/test_redirects_utils.py | 24 | 1731 | from django.test import TestCase
from django.test.utils import override_settings
from readthedocs.projects.models import Project
from django.core.urlresolvers import reverse
from readthedocs.redirects.utils import redirect_filename
class RedirectFilenameTests(TestCase):
fixtures = ["eric", "test_data"]
def setUp(self):
self.proj = Project.objects.get(slug="read-the-docs")
def test_http_filenames_return_themselves(self):
self.assertEqual(
redirect_filename(None, 'http'),
'http'
)
def test_redirects_no_subdomain(self):
self.assertEqual(
redirect_filename(self.proj, 'index.html'),
'/docs/read-the-docs/en/latest/index.html'
)
@override_settings(
USE_SUBDOMAIN=True, PRODUCTION_DOMAIN='rtfd.org'
)
def test_redirects_with_subdomain(self):
self.assertEqual(
redirect_filename(self.proj, 'faq.html'),
'http://read-the-docs.rtfd.org/en/latest/faq.html'
)
@override_settings(
USE_SUBDOMAIN=True, PRODUCTION_DOMAIN='rtfd.org'
)
def test_single_version_with_subdomain(self):
self.proj.single_version = True
self.assertEqual(
redirect_filename(self.proj, 'faq.html'),
'http://read-the-docs.rtfd.org/faq.html'
)
def test_single_version_no_subdomain(self):
self.proj.single_version = True
self.assertEqual(
redirect_filename(self.proj, 'faq.html'),
reverse(
'docs_detail',
kwargs={
'project_slug': self.proj.slug,
'filename': 'faq.html',
}
)
)
| mit |
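The URL shapes asserted above follow a simple rule: scheme plus subdomain when `USE_SUBDOMAIN` is on, a `/docs/<slug>/` prefix otherwise, with the language/version segment dropped for single-version projects. A framework-free sketch — the signature is hypothetical, since the real `redirect_filename` takes a `Project` instance:

```python
def build_redirect(slug, filename, use_subdomain=False,
                   production_domain='rtfd.org', single_version=False,
                   lang='en', version='latest'):
    if filename.startswith('http'):
        return filename  # absolute URLs pass through unchanged
    if use_subdomain:
        host = 'http://%s.%s' % (slug, production_domain)
        if single_version:
            return '%s/%s' % (host, filename)
        return '%s/%s/%s/%s' % (host, lang, version, filename)
    if single_version:
        return '/docs/%s/%s' % (slug, filename)
    return '/docs/%s/%s/%s/%s' % (slug, lang, version, filename)
```

Run against the fixtures used in the tests, this reproduces each expected URL.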
ykaneko/quantum | quantum/openstack/common/periodic_task.py | 2 | 6922 | # vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import time
from oslo.config import cfg
from quantum.openstack.common.gettextutils import _
from quantum.openstack.common import log as logging
from quantum.openstack.common import timeutils
periodic_opts = [
cfg.BoolOpt('run_external_periodic_tasks',
default=True,
help=('Some periodic tasks can be run in a separate process. '
'Should we run them here?')),
]
CONF = cfg.CONF
CONF.register_opts(periodic_opts)
LOG = logging.getLogger(__name__)
DEFAULT_INTERVAL = 60.0
class InvalidPeriodicTaskArg(Exception):
message = _("Unexpected argument for periodic task creation: %(arg)s.")
def periodic_task(*args, **kwargs):
"""Decorator to indicate that a method is a periodic task.
This decorator can be used in two ways:
1. Without arguments '@periodic_task', this will be run on every cycle
of the periodic scheduler.
2. With arguments:
@periodic_task(spacing=N [, run_immediately=[True|False]])
this will be run on approximately every N seconds. If this number is
negative the periodic task will be disabled. If the run_immediately
argument is provided and has a value of 'True', the first run of the
task will be shortly after task scheduler starts. If
run_immediately is omitted or set to 'False', the first time the
task runs will be approximately N seconds after the task scheduler
starts.
"""
def decorator(f):
# Test for old style invocation
if 'ticks_between_runs' in kwargs:
raise InvalidPeriodicTaskArg(arg='ticks_between_runs')
# Control if run at all
f._periodic_task = True
f._periodic_external_ok = kwargs.pop('external_process_ok', False)
if f._periodic_external_ok and not CONF.run_external_periodic_tasks:
f._periodic_enabled = False
else:
f._periodic_enabled = kwargs.pop('enabled', True)
# Control frequency
f._periodic_spacing = kwargs.pop('spacing', 0)
f._periodic_immediate = kwargs.pop('run_immediately', False)
if f._periodic_immediate:
f._periodic_last_run = None
else:
f._periodic_last_run = timeutils.utcnow()
return f
# NOTE(sirp): The `if` is necessary to allow the decorator to be used with
# and without parens.
#
# In the 'with-parens' case (with kwargs present), this function needs to
# return a decorator function since the interpreter will invoke it like:
#
# periodic_task(*args, **kwargs)(f)
#
# In the 'without-parens' case, the original function will be passed
# in as the first argument, like:
#
# periodic_task(f)
if kwargs:
return decorator
else:
return decorator(args[0])
class _PeriodicTasksMeta(type):
def __init__(cls, names, bases, dict_):
"""Metaclass that allows us to collect decorated periodic tasks."""
super(_PeriodicTasksMeta, cls).__init__(names, bases, dict_)
# NOTE(sirp): if the attribute is not present then we must be the base
# class, so go ahead and initialize it. If the attribute is present,
# then we're a subclass so make a copy of it so we don't step on our
# parent's toes.
try:
cls._periodic_tasks = cls._periodic_tasks[:]
except AttributeError:
cls._periodic_tasks = []
try:
cls._periodic_last_run = cls._periodic_last_run.copy()
except AttributeError:
cls._periodic_last_run = {}
try:
cls._periodic_spacing = cls._periodic_spacing.copy()
except AttributeError:
cls._periodic_spacing = {}
for value in cls.__dict__.values():
if getattr(value, '_periodic_task', False):
task = value
name = task.__name__
if task._periodic_spacing < 0:
LOG.info(_('Skipping periodic task %(task)s because '
'its interval is negative'),
{'task': name})
continue
if not task._periodic_enabled:
LOG.info(_('Skipping periodic task %(task)s because '
'it is disabled'),
{'task': name})
continue
# A periodic spacing of zero indicates that this task should
# be run every pass
if task._periodic_spacing == 0:
task._periodic_spacing = None
cls._periodic_tasks.append((name, task))
cls._periodic_spacing[name] = task._periodic_spacing
cls._periodic_last_run[name] = task._periodic_last_run
class PeriodicTasks(object):
__metaclass__ = _PeriodicTasksMeta
def run_periodic_tasks(self, context, raise_on_error=False):
"""Tasks to be run at a periodic interval."""
idle_for = DEFAULT_INTERVAL
for task_name, task in self._periodic_tasks:
full_task_name = '.'.join([self.__class__.__name__, task_name])
now = timeutils.utcnow()
spacing = self._periodic_spacing[task_name]
last_run = self._periodic_last_run[task_name]
# If a periodic task is _nearly_ due, then we'll run it early
if spacing is not None and last_run is not None:
due = last_run + datetime.timedelta(seconds=spacing)
if not timeutils.is_soon(due, 0.2):
idle_for = min(idle_for, timeutils.delta_seconds(now, due))
continue
if spacing is not None:
idle_for = min(idle_for, spacing)
LOG.debug(_("Running periodic task %(full_task_name)s"), locals())
self._periodic_last_run[task_name] = timeutils.utcnow()
try:
task(self, context)
except Exception as e:
if raise_on_error:
raise
LOG.exception(_("Error during %(full_task_name)s: %(e)s"),
locals())
time.sleep(0)
return idle_for
| apache-2.0 |
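The NOTE(sirp) comment above describes the with-parens / without-parens decorator pattern; a minimal self-contained version of just that dispatch:

```python
def periodic(*args, **kwargs):
    """Sketch of a decorator usable both bare and with keyword arguments."""
    def decorator(f):
        f._periodic = True
        f._spacing = kwargs.pop('spacing', 0)
        return f
    if kwargs:
        # @periodic(spacing=N): the interpreter calls periodic(...) first,
        # so return the real decorator for it to apply.
        return decorator
    # bare @periodic: the function itself arrives as args[0].
    return decorator(args[0])

@periodic
def every_pass():
    pass

@periodic(spacing=60)
def every_minute():
    pass
```

Both decorated functions end up flagged, which is how `_PeriodicTasksMeta` above discovers them via `getattr(value, '_periodic_task', False)`.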
2014cdag9/2014cdag9 | wsgi/static/Brython2.1.0-20140419-113919/Lib/unittest/test/testmock/testhelpers.py | 737 | 25793 | import unittest
from unittest.mock import (
call, _Call, create_autospec, MagicMock,
Mock, ANY, _CallList, patch, PropertyMock
)
from datetime import datetime
class SomeClass(object):
def one(self, a, b):
pass
def two(self):
pass
def three(self, a=None):
pass
class AnyTest(unittest.TestCase):
def test_any(self):
self.assertEqual(ANY, object())
mock = Mock()
mock(ANY)
mock.assert_called_with(ANY)
mock = Mock()
mock(foo=ANY)
mock.assert_called_with(foo=ANY)
def test_repr(self):
self.assertEqual(repr(ANY), '<ANY>')
self.assertEqual(str(ANY), '<ANY>')
def test_any_and_datetime(self):
mock = Mock()
mock(datetime.now(), foo=datetime.now())
mock.assert_called_with(ANY, foo=ANY)
def test_any_mock_calls_comparison_order(self):
mock = Mock()
d = datetime.now()
class Foo(object):
def __eq__(self, other):
return False
def __ne__(self, other):
return True
for d in datetime.now(), Foo():
mock.reset_mock()
mock(d, foo=d, bar=d)
mock.method(d, zinga=d, alpha=d)
mock().method(a1=d, z99=d)
expected = [
call(ANY, foo=ANY, bar=ANY),
call.method(ANY, zinga=ANY, alpha=ANY),
call(), call().method(a1=ANY, z99=ANY)
]
self.assertEqual(expected, mock.mock_calls)
self.assertEqual(mock.mock_calls, expected)
class CallTest(unittest.TestCase):
def test_call_with_call(self):
kall = _Call()
self.assertEqual(kall, _Call())
self.assertEqual(kall, _Call(('',)))
self.assertEqual(kall, _Call(((),)))
self.assertEqual(kall, _Call(({},)))
self.assertEqual(kall, _Call(('', ())))
self.assertEqual(kall, _Call(('', {})))
self.assertEqual(kall, _Call(('', (), {})))
self.assertEqual(kall, _Call(('foo',)))
self.assertEqual(kall, _Call(('bar', ())))
self.assertEqual(kall, _Call(('baz', {})))
self.assertEqual(kall, _Call(('spam', (), {})))
kall = _Call(((1, 2, 3),))
self.assertEqual(kall, _Call(((1, 2, 3),)))
self.assertEqual(kall, _Call(('', (1, 2, 3))))
self.assertEqual(kall, _Call(((1, 2, 3), {})))
self.assertEqual(kall, _Call(('', (1, 2, 3), {})))
kall = _Call(((1, 2, 4),))
self.assertNotEqual(kall, _Call(('', (1, 2, 3))))
self.assertNotEqual(kall, _Call(('', (1, 2, 3), {})))
kall = _Call(('foo', (1, 2, 4),))
self.assertNotEqual(kall, _Call(('', (1, 2, 4))))
self.assertNotEqual(kall, _Call(('', (1, 2, 4), {})))
self.assertNotEqual(kall, _Call(('bar', (1, 2, 4))))
self.assertNotEqual(kall, _Call(('bar', (1, 2, 4), {})))
kall = _Call(({'a': 3},))
self.assertEqual(kall, _Call(('', (), {'a': 3})))
self.assertEqual(kall, _Call(('', {'a': 3})))
self.assertEqual(kall, _Call(((), {'a': 3})))
self.assertEqual(kall, _Call(({'a': 3},)))
def test_empty__Call(self):
args = _Call()
self.assertEqual(args, ())
self.assertEqual(args, ('foo',))
self.assertEqual(args, ((),))
self.assertEqual(args, ('foo', ()))
self.assertEqual(args, ('foo',(), {}))
self.assertEqual(args, ('foo', {}))
self.assertEqual(args, ({},))
def test_named_empty_call(self):
args = _Call(('foo', (), {}))
self.assertEqual(args, ('foo',))
self.assertEqual(args, ('foo', ()))
self.assertEqual(args, ('foo',(), {}))
self.assertEqual(args, ('foo', {}))
self.assertNotEqual(args, ((),))
self.assertNotEqual(args, ())
self.assertNotEqual(args, ({},))
self.assertNotEqual(args, ('bar',))
self.assertNotEqual(args, ('bar', ()))
self.assertNotEqual(args, ('bar', {}))
def test_call_with_args(self):
args = _Call(((1, 2, 3), {}))
self.assertEqual(args, ((1, 2, 3),))
self.assertEqual(args, ('foo', (1, 2, 3)))
self.assertEqual(args, ('foo', (1, 2, 3), {}))
self.assertEqual(args, ((1, 2, 3), {}))
def test_named_call_with_args(self):
args = _Call(('foo', (1, 2, 3), {}))
self.assertEqual(args, ('foo', (1, 2, 3)))
self.assertEqual(args, ('foo', (1, 2, 3), {}))
self.assertNotEqual(args, ((1, 2, 3),))
self.assertNotEqual(args, ((1, 2, 3), {}))
def test_call_with_kwargs(self):
args = _Call(((), dict(a=3, b=4)))
self.assertEqual(args, (dict(a=3, b=4),))
self.assertEqual(args, ('foo', dict(a=3, b=4)))
self.assertEqual(args, ('foo', (), dict(a=3, b=4)))
self.assertEqual(args, ((), dict(a=3, b=4)))
def test_named_call_with_kwargs(self):
args = _Call(('foo', (), dict(a=3, b=4)))
self.assertEqual(args, ('foo', dict(a=3, b=4)))
self.assertEqual(args, ('foo', (), dict(a=3, b=4)))
self.assertNotEqual(args, (dict(a=3, b=4),))
self.assertNotEqual(args, ((), dict(a=3, b=4)))
def test_call_with_args_call_empty_name(self):
args = _Call(((1, 2, 3), {}))
self.assertEqual(args, call(1, 2, 3))
self.assertEqual(call(1, 2, 3), args)
self.assertTrue(call(1, 2, 3) in [args])
def test_call_ne(self):
self.assertNotEqual(_Call(((1, 2, 3),)), call(1, 2))
self.assertFalse(_Call(((1, 2, 3),)) != call(1, 2, 3))
self.assertTrue(_Call(((1, 2), {})) != call(1, 2, 3))
def test_call_non_tuples(self):
kall = _Call(((1, 2, 3),))
for value in 1, None, self, int:
self.assertNotEqual(kall, value)
self.assertFalse(kall == value)
def test_repr(self):
self.assertEqual(repr(_Call()), 'call()')
self.assertEqual(repr(_Call(('foo',))), 'call.foo()')
self.assertEqual(repr(_Call(((1, 2, 3), {'a': 'b'}))),
"call(1, 2, 3, a='b')")
self.assertEqual(repr(_Call(('bar', (1, 2, 3), {'a': 'b'}))),
"call.bar(1, 2, 3, a='b')")
self.assertEqual(repr(call), 'call')
self.assertEqual(str(call), 'call')
self.assertEqual(repr(call()), 'call()')
self.assertEqual(repr(call(1)), 'call(1)')
self.assertEqual(repr(call(zz='thing')), "call(zz='thing')")
self.assertEqual(repr(call().foo), 'call().foo')
self.assertEqual(repr(call(1).foo.bar(a=3).bing),
'call().foo.bar().bing')
self.assertEqual(
repr(call().foo(1, 2, a=3)),
"call().foo(1, 2, a=3)"
)
self.assertEqual(repr(call()()), "call()()")
self.assertEqual(repr(call(1)(2)), "call()(2)")
self.assertEqual(
repr(call()().bar().baz.beep(1)),
"call()().bar().baz.beep(1)"
)
def test_call(self):
self.assertEqual(call(), ('', (), {}))
self.assertEqual(call('foo', 'bar', one=3, two=4),
('', ('foo', 'bar'), {'one': 3, 'two': 4}))
mock = Mock()
mock(1, 2, 3)
mock(a=3, b=6)
self.assertEqual(mock.call_args_list,
[call(1, 2, 3), call(a=3, b=6)])
def test_attribute_call(self):
self.assertEqual(call.foo(1), ('foo', (1,), {}))
self.assertEqual(call.bar.baz(fish='eggs'),
('bar.baz', (), {'fish': 'eggs'}))
mock = Mock()
mock.foo(1, 2, 3)
mock.bar.baz(a=3, b=6)
self.assertEqual(mock.method_calls,
[call.foo(1, 2, 3), call.bar.baz(a=3, b=6)])
def test_extended_call(self):
result = call(1).foo(2).bar(3, a=4)
self.assertEqual(result, ('().foo().bar', (3,), dict(a=4)))
mock = MagicMock()
mock(1, 2, a=3, b=4)
self.assertEqual(mock.call_args, call(1, 2, a=3, b=4))
self.assertNotEqual(mock.call_args, call(1, 2, 3))
self.assertEqual(mock.call_args_list, [call(1, 2, a=3, b=4)])
self.assertEqual(mock.mock_calls, [call(1, 2, a=3, b=4)])
mock = MagicMock()
mock.foo(1).bar()().baz.beep(a=6)
last_call = call.foo(1).bar()().baz.beep(a=6)
self.assertEqual(mock.mock_calls[-1], last_call)
self.assertEqual(mock.mock_calls, last_call.call_list())
def test_call_list(self):
mock = MagicMock()
mock(1)
self.assertEqual(call(1).call_list(), mock.mock_calls)
mock = MagicMock()
mock(1).method(2)
self.assertEqual(call(1).method(2).call_list(),
mock.mock_calls)
mock = MagicMock()
mock(1).method(2)(3)
self.assertEqual(call(1).method(2)(3).call_list(),
mock.mock_calls)
mock = MagicMock()
int(mock(1).method(2)(3).foo.bar.baz(4)(5))
kall = call(1).method(2)(3).foo.bar.baz(4)(5).__int__()
self.assertEqual(kall.call_list(), mock.mock_calls)
def test_call_any(self):
self.assertEqual(call, ANY)
m = MagicMock()
int(m)
self.assertEqual(m.mock_calls, [ANY])
self.assertEqual([ANY], m.mock_calls)
def test_two_args_call(self):
args = _Call(((1, 2), {'a': 3}), two=True)
self.assertEqual(len(args), 2)
self.assertEqual(args[0], (1, 2))
self.assertEqual(args[1], {'a': 3})
other_args = _Call(((1, 2), {'a': 3}))
self.assertEqual(args, other_args)
class SpecSignatureTest(unittest.TestCase):
def _check_someclass_mock(self, mock):
self.assertRaises(AttributeError, getattr, mock, 'foo')
mock.one(1, 2)
mock.one.assert_called_with(1, 2)
self.assertRaises(AssertionError,
mock.one.assert_called_with, 3, 4)
self.assertRaises(TypeError, mock.one, 1)
mock.two()
mock.two.assert_called_with()
self.assertRaises(AssertionError,
mock.two.assert_called_with, 3)
self.assertRaises(TypeError, mock.two, 1)
mock.three()
mock.three.assert_called_with()
self.assertRaises(AssertionError,
mock.three.assert_called_with, 3)
self.assertRaises(TypeError, mock.three, 3, 2)
mock.three(1)
mock.three.assert_called_with(1)
mock.three(a=1)
mock.three.assert_called_with(a=1)
def test_basic(self):
for spec in (SomeClass, SomeClass()):
mock = create_autospec(spec)
self._check_someclass_mock(mock)
def test_create_autospec_return_value(self):
def f():
pass
mock = create_autospec(f, return_value='foo')
self.assertEqual(mock(), 'foo')
class Foo(object):
pass
mock = create_autospec(Foo, return_value='foo')
self.assertEqual(mock(), 'foo')
def test_autospec_reset_mock(self):
m = create_autospec(int)
int(m)
m.reset_mock()
self.assertEqual(m.__int__.call_count, 0)
def test_mocking_unbound_methods(self):
class Foo(object):
def foo(self, foo):
pass
p = patch.object(Foo, 'foo')
mock_foo = p.start()
Foo().foo(1)
mock_foo.assert_called_with(1)
def test_create_autospec_unbound_methods(self):
# see mock issue 128
# this is expected to fail until the issue is fixed
return
class Foo(object):
def foo(self):
pass
klass = create_autospec(Foo)
instance = klass()
self.assertRaises(TypeError, instance.foo, 1)
# Note: no type checking on the "self" parameter
klass.foo(1)
klass.foo.assert_called_with(1)
self.assertRaises(TypeError, klass.foo)
def test_create_autospec_keyword_arguments(self):
class Foo(object):
a = 3
m = create_autospec(Foo, a='3')
self.assertEqual(m.a, '3')
def test_create_autospec_keyword_only_arguments(self):
def foo(a, *, b=None):
pass
m = create_autospec(foo)
m(1)
m.assert_called_with(1)
self.assertRaises(TypeError, m, 1, 2)
m(2, b=3)
m.assert_called_with(2, b=3)
def test_function_as_instance_attribute(self):
obj = SomeClass()
def f(a):
pass
obj.f = f
mock = create_autospec(obj)
mock.f('bing')
mock.f.assert_called_with('bing')
def test_spec_as_list(self):
# because spec as a list of strings in the mock constructor means
# something very different we treat a list instance as the type.
mock = create_autospec([])
mock.append('foo')
mock.append.assert_called_with('foo')
self.assertRaises(AttributeError, getattr, mock, 'foo')
class Foo(object):
foo = []
mock = create_autospec(Foo)
mock.foo.append(3)
mock.foo.append.assert_called_with(3)
self.assertRaises(AttributeError, getattr, mock.foo, 'foo')
def test_attributes(self):
class Sub(SomeClass):
attr = SomeClass()
sub_mock = create_autospec(Sub)
for mock in (sub_mock, sub_mock.attr):
self._check_someclass_mock(mock)
def test_builtin_functions_types(self):
# we could replace builtin functions / methods with a function
# with *args / **kwargs signature. Using the builtin method type
# as a spec seems to work fairly well though.
class BuiltinSubclass(list):
def bar(self, arg):
pass
sorted = sorted
attr = {}
mock = create_autospec(BuiltinSubclass)
mock.append(3)
mock.append.assert_called_with(3)
self.assertRaises(AttributeError, getattr, mock.append, 'foo')
mock.bar('foo')
mock.bar.assert_called_with('foo')
self.assertRaises(TypeError, mock.bar, 'foo', 'bar')
self.assertRaises(AttributeError, getattr, mock.bar, 'foo')
mock.sorted([1, 2])
mock.sorted.assert_called_with([1, 2])
self.assertRaises(AttributeError, getattr, mock.sorted, 'foo')
mock.attr.pop(3)
mock.attr.pop.assert_called_with(3)
self.assertRaises(AttributeError, getattr, mock.attr, 'foo')
def test_method_calls(self):
class Sub(SomeClass):
attr = SomeClass()
mock = create_autospec(Sub)
mock.one(1, 2)
mock.two()
mock.three(3)
expected = [call.one(1, 2), call.two(), call.three(3)]
self.assertEqual(mock.method_calls, expected)
mock.attr.one(1, 2)
mock.attr.two()
mock.attr.three(3)
expected.extend(
[call.attr.one(1, 2), call.attr.two(), call.attr.three(3)]
)
self.assertEqual(mock.method_calls, expected)
def test_magic_methods(self):
class BuiltinSubclass(list):
attr = {}
mock = create_autospec(BuiltinSubclass)
self.assertEqual(list(mock), [])
self.assertRaises(TypeError, int, mock)
self.assertRaises(TypeError, int, mock.attr)
self.assertEqual(list(mock), [])
self.assertIsInstance(mock['foo'], MagicMock)
self.assertIsInstance(mock.attr['foo'], MagicMock)
def test_spec_set(self):
class Sub(SomeClass):
attr = SomeClass()
for spec in (Sub, Sub()):
mock = create_autospec(spec, spec_set=True)
self._check_someclass_mock(mock)
self.assertRaises(AttributeError, setattr, mock, 'foo', 'bar')
self.assertRaises(AttributeError, setattr, mock.attr, 'foo', 'bar')
def test_descriptors(self):
class Foo(object):
@classmethod
def f(cls, a, b):
pass
@staticmethod
def g(a, b):
pass
class Bar(Foo):
pass
class Baz(SomeClass, Bar):
pass
for spec in (Foo, Foo(), Bar, Bar(), Baz, Baz()):
mock = create_autospec(spec)
mock.f(1, 2)
mock.f.assert_called_once_with(1, 2)
mock.g(3, 4)
mock.g.assert_called_once_with(3, 4)
def test_recursive(self):
class A(object):
def a(self):
pass
foo = 'foo bar baz'
bar = foo
A.B = A
mock = create_autospec(A)
mock()
self.assertFalse(mock.B.called)
mock.a()
mock.B.a()
self.assertEqual(mock.method_calls, [call.a(), call.B.a()])
self.assertIs(A.foo, A.bar)
self.assertIsNot(mock.foo, mock.bar)
mock.foo.lower()
self.assertRaises(AssertionError, mock.bar.lower.assert_called_with)
def test_spec_inheritance_for_classes(self):
class Foo(object):
def a(self):
pass
class Bar(object):
def f(self):
pass
class_mock = create_autospec(Foo)
self.assertIsNot(class_mock, class_mock())
for this_mock in class_mock, class_mock():
this_mock.a()
this_mock.a.assert_called_with()
self.assertRaises(TypeError, this_mock.a, 'foo')
self.assertRaises(AttributeError, getattr, this_mock, 'b')
instance_mock = create_autospec(Foo())
instance_mock.a()
instance_mock.a.assert_called_with()
self.assertRaises(TypeError, instance_mock.a, 'foo')
self.assertRaises(AttributeError, getattr, instance_mock, 'b')
# The return value isn't callable
self.assertRaises(TypeError, instance_mock)
instance_mock.Bar.f()
instance_mock.Bar.f.assert_called_with()
self.assertRaises(AttributeError, getattr, instance_mock.Bar, 'g')
instance_mock.Bar().f()
instance_mock.Bar().f.assert_called_with()
self.assertRaises(AttributeError, getattr, instance_mock.Bar(), 'g')
def test_inherit(self):
class Foo(object):
a = 3
Foo.Foo = Foo
# class
mock = create_autospec(Foo)
instance = mock()
self.assertRaises(AttributeError, getattr, instance, 'b')
attr_instance = mock.Foo()
self.assertRaises(AttributeError, getattr, attr_instance, 'b')
# instance
mock = create_autospec(Foo())
self.assertRaises(AttributeError, getattr, mock, 'b')
self.assertRaises(TypeError, mock)
# attribute instance
call_result = mock.Foo()
self.assertRaises(AttributeError, getattr, call_result, 'b')
def test_builtins(self):
# used to fail with infinite recursion
create_autospec(1)
create_autospec(int)
create_autospec('foo')
create_autospec(str)
create_autospec({})
create_autospec(dict)
create_autospec([])
create_autospec(list)
create_autospec(set())
create_autospec(set)
create_autospec(1.0)
create_autospec(float)
create_autospec(1j)
create_autospec(complex)
create_autospec(False)
create_autospec(True)
def test_function(self):
def f(a, b):
pass
mock = create_autospec(f)
self.assertRaises(TypeError, mock)
mock(1, 2)
mock.assert_called_with(1, 2)
f.f = f
mock = create_autospec(f)
self.assertRaises(TypeError, mock.f)
mock.f(3, 4)
mock.f.assert_called_with(3, 4)
def test_skip_attributeerrors(self):
class Raiser(object):
def __get__(self, obj, type=None):
if obj is None:
raise AttributeError('Can only be accessed via an instance')
class RaiserClass(object):
raiser = Raiser()
@staticmethod
def existing(a, b):
return a + b
s = create_autospec(RaiserClass)
self.assertRaises(TypeError, lambda x: s.existing(1, 2, 3))
s.existing(1, 2)
self.assertRaises(AttributeError, lambda: s.nonexisting)
# check we can fetch the raiser attribute and it has no spec
obj = s.raiser
obj.foo, obj.bar
def test_signature_class(self):
class Foo(object):
def __init__(self, a, b=3):
pass
mock = create_autospec(Foo)
self.assertRaises(TypeError, mock)
mock(1)
mock.assert_called_once_with(1)
mock(4, 5)
mock.assert_called_with(4, 5)
def test_class_with_no_init(self):
# this used to raise an exception
# due to trying to get a signature from object.__init__
class Foo(object):
pass
create_autospec(Foo)
def test_signature_callable(self):
class Callable(object):
def __init__(self):
pass
def __call__(self, a):
pass
mock = create_autospec(Callable)
mock()
mock.assert_called_once_with()
self.assertRaises(TypeError, mock, 'a')
instance = mock()
self.assertRaises(TypeError, instance)
instance(a='a')
instance.assert_called_once_with(a='a')
instance('a')
instance.assert_called_with('a')
mock = create_autospec(Callable())
mock(a='a')
mock.assert_called_once_with(a='a')
self.assertRaises(TypeError, mock)
mock('a')
mock.assert_called_with('a')
def test_signature_noncallable(self):
class NonCallable(object):
def __init__(self):
pass
mock = create_autospec(NonCallable)
instance = mock()
mock.assert_called_once_with()
self.assertRaises(TypeError, mock, 'a')
self.assertRaises(TypeError, instance)
self.assertRaises(TypeError, instance, 'a')
mock = create_autospec(NonCallable())
self.assertRaises(TypeError, mock)
self.assertRaises(TypeError, mock, 'a')
def test_create_autospec_none(self):
class Foo(object):
bar = None
mock = create_autospec(Foo)
none = mock.bar
self.assertNotIsInstance(none, type(None))
none.foo()
none.foo.assert_called_once_with()
def test_autospec_functions_with_self_in_odd_place(self):
class Foo(object):
def f(a, self):
pass
a = create_autospec(Foo)
a.f(self=10)
a.f.assert_called_with(self=10)
def test_autospec_property(self):
class Foo(object):
@property
def foo(self):
return 3
foo = create_autospec(Foo)
mock_property = foo.foo
# no spec on properties
self.assertTrue(isinstance(mock_property, MagicMock))
mock_property(1, 2, 3)
mock_property.abc(4, 5, 6)
mock_property.assert_called_once_with(1, 2, 3)
mock_property.abc.assert_called_once_with(4, 5, 6)
def test_autospec_slots(self):
class Foo(object):
__slots__ = ['a']
foo = create_autospec(Foo)
mock_slot = foo.a
# no spec on slots
mock_slot(1, 2, 3)
mock_slot.abc(4, 5, 6)
mock_slot.assert_called_once_with(1, 2, 3)
mock_slot.abc.assert_called_once_with(4, 5, 6)
class TestCallList(unittest.TestCase):
def test_args_list_contains_call_list(self):
mock = Mock()
self.assertIsInstance(mock.call_args_list, _CallList)
mock(1, 2)
mock(a=3)
mock(3, 4)
mock(b=6)
for kall in call(1, 2), call(a=3), call(3, 4), call(b=6):
self.assertTrue(kall in mock.call_args_list)
calls = [call(a=3), call(3, 4)]
self.assertTrue(calls in mock.call_args_list)
calls = [call(1, 2), call(a=3)]
self.assertTrue(calls in mock.call_args_list)
calls = [call(3, 4), call(b=6)]
self.assertTrue(calls in mock.call_args_list)
calls = [call(3, 4)]
self.assertTrue(calls in mock.call_args_list)
self.assertFalse(call('fish') in mock.call_args_list)
self.assertFalse([call('fish')] in mock.call_args_list)
def test_call_list_str(self):
mock = Mock()
mock(1, 2)
mock.foo(a=3)
mock.foo.bar().baz('fish', cat='dog')
expected = (
"[call(1, 2),\n"
" call.foo(a=3),\n"
" call.foo.bar(),\n"
" call.foo.bar().baz('fish', cat='dog')]"
)
self.assertEqual(str(mock.mock_calls), expected)
def test_propertymock(self):
p = patch('%s.SomeClass.one' % __name__, new_callable=PropertyMock)
mock = p.start()
try:
SomeClass.one
mock.assert_called_once_with()
s = SomeClass()
s.one
mock.assert_called_with()
self.assertEqual(mock.mock_calls, [call(), call()])
s.one = 3
self.assertEqual(mock.mock_calls, [call(), call(), call(3)])
finally:
p.stop()
def test_propertymock_returnvalue(self):
m = MagicMock()
p = PropertyMock()
type(m).foo = p
returned = m.foo
p.assert_called_once_with()
self.assertIsInstance(returned, MagicMock)
self.assertNotIsInstance(returned, PropertyMock)
if __name__ == '__main__':
unittest.main()
| gpl-2.0 |
maximeolivier/pyCAF | pycaf/architecture/devices/network_features/switchport.py | 1 | 2088 | #| This file is part of pyCAF. |
#| |
#| pyCAF is free software: you can redistribute it and/or modify |
#| it under the terms of the GNU General Public License as published by |
#| the Free Software Foundation, either version 3 of the License, or |
#| (at your option) any later version. |
#| |
#| pyCAF is distributed in the hope that it will be useful, |
#| but WITHOUT ANY WARRANTY; without even the implied warranty of |
#| MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
#| GNU General Public License for more details. |
#| |
#| You should have received a copy of the GNU General Public License |
#| along with this program. If not, see <http://www.gnu.org/licenses/>. |
# -*- coding: utf-8 -*-
"""
Created on Fri Aug 1 15:24:25 2014
@author: thierry
"""
class Switchport():
"""@brief Switchport definition and features
@param interface : interface name
@param active : True or False
@param description
@param vlan : number of the VLAN, native VLAN if mode trunk for Cisco
@param mode : trunk, access, ...
"""
def __init__(self):
self.interface = ''
self.active = ''
self.description = ''
self.vlan = ''
self.vlan_allowed = []
self.mode = ''
def __str__(self):
"""
        Print the characteristics of the Switchport
"""
        # General characteristics
        return "%s%s%s%s%s%s" %((str(self.interface)).ljust(30),\
        (str(self.active)).ljust(10), (str(self.vlan)).ljust(10), (str(self.vlan_allowed)[1:-1]).ljust(35), (str(self.mode)).ljust(20), (str(self.description)).ljust(30))
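A minimal usage sketch of the fixed-width formatting produced by `__str__` (the class is re-declared here so the snippet runs standalone; in pyCAF you would import it, and the port values are made-up examples):

```python
class Switchport(object):
    def __init__(self):
        self.interface = ''
        self.active = ''
        self.description = ''
        self.vlan = ''
        self.vlan_allowed = []
        self.mode = ''

    def __str__(self):
        # Each field is left-justified into a fixed-width column;
        # str(list)[1:-1] strips the surrounding brackets from vlan_allowed.
        return "%s%s%s%s%s%s" % (str(self.interface).ljust(30),
                                 str(self.active).ljust(10),
                                 str(self.vlan).ljust(10),
                                 str(self.vlan_allowed)[1:-1].ljust(35),
                                 str(self.mode).ljust(20),
                                 str(self.description).ljust(30))

port = Switchport()
port.interface = 'GigabitEthernet0/1'
port.active = True
port.vlan = 10
port.vlan_allowed = [10, 20]
port.mode = 'trunk'
port.description = 'uplink'
print(str(port))
```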
| gpl-3.0 |
vertigo235/Sick-Beard-XEM | lib/html5lib/ihatexml.py | 129 | 15309 | import re
baseChar = """[#x0041-#x005A] | [#x0061-#x007A] | [#x00C0-#x00D6] | [#x00D8-#x00F6] | [#x00F8-#x00FF] | [#x0100-#x0131] | [#x0134-#x013E] | [#x0141-#x0148] | [#x014A-#x017E] | [#x0180-#x01C3] | [#x01CD-#x01F0] | [#x01F4-#x01F5] | [#x01FA-#x0217] | [#x0250-#x02A8] | [#x02BB-#x02C1] | #x0386 | [#x0388-#x038A] | #x038C | [#x038E-#x03A1] | [#x03A3-#x03CE] | [#x03D0-#x03D6] | #x03DA | #x03DC | #x03DE | #x03E0 | [#x03E2-#x03F3] | [#x0401-#x040C] | [#x040E-#x044F] | [#x0451-#x045C] | [#x045E-#x0481] | [#x0490-#x04C4] | [#x04C7-#x04C8] | [#x04CB-#x04CC] | [#x04D0-#x04EB] | [#x04EE-#x04F5] | [#x04F8-#x04F9] | [#x0531-#x0556] | #x0559 | [#x0561-#x0586] | [#x05D0-#x05EA] | [#x05F0-#x05F2] | [#x0621-#x063A] | [#x0641-#x064A] | [#x0671-#x06B7] | [#x06BA-#x06BE] | [#x06C0-#x06CE] | [#x06D0-#x06D3] | #x06D5 | [#x06E5-#x06E6] | [#x0905-#x0939] | #x093D | [#x0958-#x0961] | [#x0985-#x098C] | [#x098F-#x0990] | [#x0993-#x09A8] | [#x09AA-#x09B0] | #x09B2 | [#x09B6-#x09B9] | [#x09DC-#x09DD] | [#x09DF-#x09E1] | [#x09F0-#x09F1] | [#x0A05-#x0A0A] | [#x0A0F-#x0A10] | [#x0A13-#x0A28] | [#x0A2A-#x0A30] | [#x0A32-#x0A33] | [#x0A35-#x0A36] | [#x0A38-#x0A39] | [#x0A59-#x0A5C] | #x0A5E | [#x0A72-#x0A74] | [#x0A85-#x0A8B] | #x0A8D | [#x0A8F-#x0A91] | [#x0A93-#x0AA8] | [#x0AAA-#x0AB0] | [#x0AB2-#x0AB3] | [#x0AB5-#x0AB9] | #x0ABD | #x0AE0 | [#x0B05-#x0B0C] | [#x0B0F-#x0B10] | [#x0B13-#x0B28] | [#x0B2A-#x0B30] | [#x0B32-#x0B33] | [#x0B36-#x0B39] | #x0B3D | [#x0B5C-#x0B5D] | [#x0B5F-#x0B61] | [#x0B85-#x0B8A] | [#x0B8E-#x0B90] | [#x0B92-#x0B95] | [#x0B99-#x0B9A] | #x0B9C | [#x0B9E-#x0B9F] | [#x0BA3-#x0BA4] | [#x0BA8-#x0BAA] | [#x0BAE-#x0BB5] | [#x0BB7-#x0BB9] | [#x0C05-#x0C0C] | [#x0C0E-#x0C10] | [#x0C12-#x0C28] | [#x0C2A-#x0C33] | [#x0C35-#x0C39] | [#x0C60-#x0C61] | [#x0C85-#x0C8C] | [#x0C8E-#x0C90] | [#x0C92-#x0CA8] | [#x0CAA-#x0CB3] | [#x0CB5-#x0CB9] | #x0CDE | [#x0CE0-#x0CE1] | [#x0D05-#x0D0C] | [#x0D0E-#x0D10] | [#x0D12-#x0D28] | [#x0D2A-#x0D39] | [#x0D60-#x0D61] | [#x0E01-#x0E2E] | #x0E30 | 
[#x0E32-#x0E33] | [#x0E40-#x0E45] | [#x0E81-#x0E82] | #x0E84 | [#x0E87-#x0E88] | #x0E8A | #x0E8D | [#x0E94-#x0E97] | [#x0E99-#x0E9F] | [#x0EA1-#x0EA3] | #x0EA5 | #x0EA7 | [#x0EAA-#x0EAB] | [#x0EAD-#x0EAE] | #x0EB0 | [#x0EB2-#x0EB3] | #x0EBD | [#x0EC0-#x0EC4] | [#x0F40-#x0F47] | [#x0F49-#x0F69] | [#x10A0-#x10C5] | [#x10D0-#x10F6] | #x1100 | [#x1102-#x1103] | [#x1105-#x1107] | #x1109 | [#x110B-#x110C] | [#x110E-#x1112] | #x113C | #x113E | #x1140 | #x114C | #x114E | #x1150 | [#x1154-#x1155] | #x1159 | [#x115F-#x1161] | #x1163 | #x1165 | #x1167 | #x1169 | [#x116D-#x116E] | [#x1172-#x1173] | #x1175 | #x119E | #x11A8 | #x11AB | [#x11AE-#x11AF] | [#x11B7-#x11B8] | #x11BA | [#x11BC-#x11C2] | #x11EB | #x11F0 | #x11F9 | [#x1E00-#x1E9B] | [#x1EA0-#x1EF9] | [#x1F00-#x1F15] | [#x1F18-#x1F1D] | [#x1F20-#x1F45] | [#x1F48-#x1F4D] | [#x1F50-#x1F57] | #x1F59 | #x1F5B | #x1F5D | [#x1F5F-#x1F7D] | [#x1F80-#x1FB4] | [#x1FB6-#x1FBC] | #x1FBE | [#x1FC2-#x1FC4] | [#x1FC6-#x1FCC] | [#x1FD0-#x1FD3] | [#x1FD6-#x1FDB] | [#x1FE0-#x1FEC] | [#x1FF2-#x1FF4] | [#x1FF6-#x1FFC] | #x2126 | [#x212A-#x212B] | #x212E | [#x2180-#x2182] | [#x3041-#x3094] | [#x30A1-#x30FA] | [#x3105-#x312C] | [#xAC00-#xD7A3]"""
ideographic = """[#x4E00-#x9FA5] | #x3007 | [#x3021-#x3029]"""
combiningCharacter = """[#x0300-#x0345] | [#x0360-#x0361] | [#x0483-#x0486] | [#x0591-#x05A1] | [#x05A3-#x05B9] | [#x05BB-#x05BD] | #x05BF | [#x05C1-#x05C2] | #x05C4 | [#x064B-#x0652] | #x0670 | [#x06D6-#x06DC] | [#x06DD-#x06DF] | [#x06E0-#x06E4] | [#x06E7-#x06E8] | [#x06EA-#x06ED] | [#x0901-#x0903] | #x093C | [#x093E-#x094C] | #x094D | [#x0951-#x0954] | [#x0962-#x0963] | [#x0981-#x0983] | #x09BC | #x09BE | #x09BF | [#x09C0-#x09C4] | [#x09C7-#x09C8] | [#x09CB-#x09CD] | #x09D7 | [#x09E2-#x09E3] | #x0A02 | #x0A3C | #x0A3E | #x0A3F | [#x0A40-#x0A42] | [#x0A47-#x0A48] | [#x0A4B-#x0A4D] | [#x0A70-#x0A71] | [#x0A81-#x0A83] | #x0ABC | [#x0ABE-#x0AC5] | [#x0AC7-#x0AC9] | [#x0ACB-#x0ACD] | [#x0B01-#x0B03] | #x0B3C | [#x0B3E-#x0B43] | [#x0B47-#x0B48] | [#x0B4B-#x0B4D] | [#x0B56-#x0B57] | [#x0B82-#x0B83] | [#x0BBE-#x0BC2] | [#x0BC6-#x0BC8] | [#x0BCA-#x0BCD] | #x0BD7 | [#x0C01-#x0C03] | [#x0C3E-#x0C44] | [#x0C46-#x0C48] | [#x0C4A-#x0C4D] | [#x0C55-#x0C56] | [#x0C82-#x0C83] | [#x0CBE-#x0CC4] | [#x0CC6-#x0CC8] | [#x0CCA-#x0CCD] | [#x0CD5-#x0CD6] | [#x0D02-#x0D03] | [#x0D3E-#x0D43] | [#x0D46-#x0D48] | [#x0D4A-#x0D4D] | #x0D57 | #x0E31 | [#x0E34-#x0E3A] | [#x0E47-#x0E4E] | #x0EB1 | [#x0EB4-#x0EB9] | [#x0EBB-#x0EBC] | [#x0EC8-#x0ECD] | [#x0F18-#x0F19] | #x0F35 | #x0F37 | #x0F39 | #x0F3E | #x0F3F | [#x0F71-#x0F84] | [#x0F86-#x0F8B] | [#x0F90-#x0F95] | #x0F97 | [#x0F99-#x0FAD] | [#x0FB1-#x0FB7] | #x0FB9 | [#x20D0-#x20DC] | #x20E1 | [#x302A-#x302F] | #x3099 | #x309A"""
digit = """[#x0030-#x0039] | [#x0660-#x0669] | [#x06F0-#x06F9] | [#x0966-#x096F] | [#x09E6-#x09EF] | [#x0A66-#x0A6F] | [#x0AE6-#x0AEF] | [#x0B66-#x0B6F] | [#x0BE7-#x0BEF] | [#x0C66-#x0C6F] | [#x0CE6-#x0CEF] | [#x0D66-#x0D6F] | [#x0E50-#x0E59] | [#x0ED0-#x0ED9] | [#x0F20-#x0F29]"""
extender = """#x00B7 | #x02D0 | #x02D1 | #x0387 | #x0640 | #x0E46 | #x0EC6 | #x3005 | [#x3031-#x3035] | [#x309D-#x309E] | [#x30FC-#x30FE]"""
letter = " | ".join([baseChar, ideographic])
#Without the
name = " | ".join([letter, digit, ".", "-", "_", combiningCharacter,
extender])
nameFirst = " | ".join([letter, "_"])
reChar = re.compile(r"#x([\d|A-F]{4,4})")
reCharRange = re.compile(r"\[#x([\d|A-F]{4,4})-#x([\d|A-F]{4,4})\]")
def charStringToList(chars):
charRanges = [item.strip() for item in chars.split(" | ")]
rv = []
for item in charRanges:
foundMatch = False
for regexp in (reChar, reCharRange):
match = regexp.match(item)
if match is not None:
rv.append([hexToInt(item) for item in match.groups()])
if len(rv[-1]) == 1:
rv[-1] = rv[-1]*2
foundMatch = True
break
if not foundMatch:
assert len(item) == 1
rv.append([ord(item)] * 2)
rv = normaliseCharList(rv)
return rv
def normaliseCharList(charList):
charList = sorted(charList)
for item in charList:
assert item[1] >= item[0]
rv = []
i = 0
while i < len(charList):
j = 1
rv.append(charList[i])
while i + j < len(charList) and charList[i+j][0] <= rv[-1][1] + 1:
rv[-1][1] = charList[i+j][1]
j += 1
i += j
return rv
#We don't really support characters above the BMP :(
max_unicode = int("FFFF", 16)
def missingRanges(charList):
rv = []
    if charList[0][0] != 0:
rv.append([0, charList[0][0] - 1])
for i, item in enumerate(charList[:-1]):
rv.append([item[1]+1, charList[i+1][0] - 1])
if charList[-1][1] != max_unicode:
rv.append([charList[-1][1] + 1, max_unicode])
return rv
def listToRegexpStr(charList):
rv = []
for item in charList:
if item[0] == item[1]:
rv.append(escapeRegexp(unichr(item[0])))
else:
rv.append(escapeRegexp(unichr(item[0])) + "-" +
escapeRegexp(unichr(item[1])))
return "[%s]"%"".join(rv)
def hexToInt(hex_str):
return int(hex_str, 16)
def escapeRegexp(string):
specialCharacters = (".", "^", "$", "*", "+", "?", "{", "}",
"[", "]", "|", "(", ")", "-")
    for char in specialCharacters:
        string = string.replace(char, "\\" + char)
    return string
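A standalone Python 3 sketch of the range-merging step performed by `normaliseCharList` above: adjacent or overlapping `[start, end]` ranges are collapsed into a minimal sorted list. (This version adds a `max()` guard for nested ranges, a slight hardening over the original, which assumes sorted, non-nested input.)

```python
def merge_ranges(ranges):
    # Merge sorted [start, end] ranges that overlap or touch (end + 1).
    ranges = sorted(ranges)
    merged = []
    i = 0
    while i < len(ranges):
        j = 1
        merged.append(list(ranges[i]))
        while i + j < len(ranges) and ranges[i + j][0] <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], ranges[i + j][1])
            j += 1
        i += j
    return merged

# [0x41-0x5A] touches [0x5B-0x60], so they merge; [0x70-0x7A] stays apart.
print(merge_ranges([[0x41, 0x5A], [0x5B, 0x60], [0x70, 0x7A]]))
# -> [[65, 96], [112, 122]]
```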
#output from the above
nonXmlNameBMPRegexp = re.compile(u'[\x00-,/:-@\\[-\\^`\\{-\xb6\xb8-\xbf\xd7\xf7\u0132-\u0133\u013f-\u0140\u0149\u017f\u01c4-\u01cc\u01f1-\u01f3\u01f6-\u01f9\u0218-\u024f\u02a9-\u02ba\u02c2-\u02cf\u02d2-\u02ff\u0346-\u035f\u0362-\u0385\u038b\u038d\u03a2\u03cf\u03d7-\u03d9\u03db\u03dd\u03df\u03e1\u03f4-\u0400\u040d\u0450\u045d\u0482\u0487-\u048f\u04c5-\u04c6\u04c9-\u04ca\u04cd-\u04cf\u04ec-\u04ed\u04f6-\u04f7\u04fa-\u0530\u0557-\u0558\u055a-\u0560\u0587-\u0590\u05a2\u05ba\u05be\u05c0\u05c3\u05c5-\u05cf\u05eb-\u05ef\u05f3-\u0620\u063b-\u063f\u0653-\u065f\u066a-\u066f\u06b8-\u06b9\u06bf\u06cf\u06d4\u06e9\u06ee-\u06ef\u06fa-\u0900\u0904\u093a-\u093b\u094e-\u0950\u0955-\u0957\u0964-\u0965\u0970-\u0980\u0984\u098d-\u098e\u0991-\u0992\u09a9\u09b1\u09b3-\u09b5\u09ba-\u09bb\u09bd\u09c5-\u09c6\u09c9-\u09ca\u09ce-\u09d6\u09d8-\u09db\u09de\u09e4-\u09e5\u09f2-\u0a01\u0a03-\u0a04\u0a0b-\u0a0e\u0a11-\u0a12\u0a29\u0a31\u0a34\u0a37\u0a3a-\u0a3b\u0a3d\u0a43-\u0a46\u0a49-\u0a4a\u0a4e-\u0a58\u0a5d\u0a5f-\u0a65\u0a75-\u0a80\u0a84\u0a8c\u0a8e\u0a92\u0aa9\u0ab1\u0ab4\u0aba-\u0abb\u0ac6\u0aca\u0ace-\u0adf\u0ae1-\u0ae5\u0af0-\u0b00\u0b04\u0b0d-\u0b0e\u0b11-\u0b12\u0b29\u0b31\u0b34-\u0b35\u0b3a-\u0b3b\u0b44-\u0b46\u0b49-\u0b4a\u0b4e-\u0b55\u0b58-\u0b5b\u0b5e\u0b62-\u0b65\u0b70-\u0b81\u0b84\u0b8b-\u0b8d\u0b91\u0b96-\u0b98\u0b9b\u0b9d\u0ba0-\u0ba2\u0ba5-\u0ba7\u0bab-\u0bad\u0bb6\u0bba-\u0bbd\u0bc3-\u0bc5\u0bc9\u0bce-\u0bd6\u0bd8-\u0be6\u0bf0-\u0c00\u0c04\u0c0d\u0c11\u0c29\u0c34\u0c3a-\u0c3d\u0c45\u0c49\u0c4e-\u0c54\u0c57-\u0c5f\u0c62-\u0c65\u0c70-\u0c81\u0c84\u0c8d\u0c91\u0ca9\u0cb4\u0cba-\u0cbd\u0cc5\u0cc9\u0cce-\u0cd4\u0cd7-\u0cdd\u0cdf\u0ce2-\u0ce5\u0cf0-\u0d01\u0d04\u0d0d\u0d11\u0d29\u0d3a-\u0d3d\u0d44-\u0d45\u0d49\u0d4e-\u0d56\u0d58-\u0d5f\u0d62-\u0d65\u0d70-\u0e00\u0e2f\u0e3b-\u0e3f\u0e4f\u0e5a-\u0e80\u0e83\u0e85-\u0e86\u0e89\u0e8b-\u0e8c\u0e8e-\u0e93\u0e98\u0ea0\u0ea4\u0ea6\u0ea8-\u0ea9\u0eac\u0eaf\u0eba\u0ebe-\u0ebf\u0ec5\u0ec7\u0ece-\u0ecf\u0eda-\u0f17\u0f1a-\u0f1f\u0f2a-\u0f34\u0f36\u0f38\u0f3a-\u0f3d\u0f48\u0f6a-\u0f70\u0f85\u0f8c-\u0f8f\u0f96\u0f98\u0fae-\u0fb0\u0fb8\u0fba-\u109f\u10c6-\u10cf\u10f7-\u10ff\u1101\u1104\u1108\u110a\u110d\u1113-\u113b\u113d\u113f\u1141-\u114b\u114d\u114f\u1151-\u1153\u1156-\u1158\u115a-\u115e\u1162\u1164\u1166\u1168\u116a-\u116c\u116f-\u1171\u1174\u1176-\u119d\u119f-\u11a7\u11a9-\u11aa\u11ac-\u11ad\u11b0-\u11b6\u11b9\u11bb\u11c3-\u11ea\u11ec-\u11ef\u11f1-\u11f8\u11fa-\u1dff\u1e9c-\u1e9f\u1efa-\u1eff\u1f16-\u1f17\u1f1e-\u1f1f\u1f46-\u1f47\u1f4e-\u1f4f\u1f58\u1f5a\u1f5c\u1f5e\u1f7e-\u1f7f\u1fb5\u1fbd\u1fbf-\u1fc1\u1fc5\u1fcd-\u1fcf\u1fd4-\u1fd5\u1fdc-\u1fdf\u1fed-\u1ff1\u1ff5\u1ffd-\u20cf\u20dd-\u20e0\u20e2-\u2125\u2127-\u2129\u212c-\u212d\u212f-\u217f\u2183-\u3004\u3006\u3008-\u3020\u3030\u3036-\u3040\u3095-\u3098\u309b-\u309c\u309f-\u30a0\u30fb\u30ff-\u3104\u312d-\u4dff\u9fa6-\uabff\ud7a4-\uffff]')
nonXmlNameFirstBMPRegexp = re.compile(u'[\x00-@\\[-\\^`\\{-\xbf\xd7\xf7\u0132-\u0133\u013f-\u0140\u0149\u017f\u01c4-\u01cc\u01f1-\u01f3\u01f6-\u01f9\u0218-\u024f\u02a9-\u02ba\u02c2-\u0385\u0387\u038b\u038d\u03a2\u03cf\u03d7-\u03d9\u03db\u03dd\u03df\u03e1\u03f4-\u0400\u040d\u0450\u045d\u0482-\u048f\u04c5-\u04c6\u04c9-\u04ca\u04cd-\u04cf\u04ec-\u04ed\u04f6-\u04f7\u04fa-\u0530\u0557-\u0558\u055a-\u0560\u0587-\u05cf\u05eb-\u05ef\u05f3-\u0620\u063b-\u0640\u064b-\u0670\u06b8-\u06b9\u06bf\u06cf\u06d4\u06d6-\u06e4\u06e7-\u0904\u093a-\u093c\u093e-\u0957\u0962-\u0984\u098d-\u098e\u0991-\u0992\u09a9\u09b1\u09b3-\u09b5\u09ba-\u09db\u09de\u09e2-\u09ef\u09f2-\u0a04\u0a0b-\u0a0e\u0a11-\u0a12\u0a29\u0a31\u0a34\u0a37\u0a3a-\u0a58\u0a5d\u0a5f-\u0a71\u0a75-\u0a84\u0a8c\u0a8e\u0a92\u0aa9\u0ab1\u0ab4\u0aba-\u0abc\u0abe-\u0adf\u0ae1-\u0b04\u0b0d-\u0b0e\u0b11-\u0b12\u0b29\u0b31\u0b34-\u0b35\u0b3a-\u0b3c\u0b3e-\u0b5b\u0b5e\u0b62-\u0b84\u0b8b-\u0b8d\u0b91\u0b96-\u0b98\u0b9b\u0b9d\u0ba0-\u0ba2\u0ba5-\u0ba7\u0bab-\u0bad\u0bb6\u0bba-\u0c04\u0c0d\u0c11\u0c29\u0c34\u0c3a-\u0c5f\u0c62-\u0c84\u0c8d\u0c91\u0ca9\u0cb4\u0cba-\u0cdd\u0cdf\u0ce2-\u0d04\u0d0d\u0d11\u0d29\u0d3a-\u0d5f\u0d62-\u0e00\u0e2f\u0e31\u0e34-\u0e3f\u0e46-\u0e80\u0e83\u0e85-\u0e86\u0e89\u0e8b-\u0e8c\u0e8e-\u0e93\u0e98\u0ea0\u0ea4\u0ea6\u0ea8-\u0ea9\u0eac\u0eaf\u0eb1\u0eb4-\u0ebc\u0ebe-\u0ebf\u0ec5-\u0f3f\u0f48\u0f6a-\u109f\u10c6-\u10cf\u10f7-\u10ff\u1101\u1104\u1108\u110a\u110d\u1113-\u113b\u113d\u113f\u1141-\u114b\u114d\u114f\u1151-\u1153\u1156-\u1158\u115a-\u115e\u1162\u1164\u1166\u1168\u116a-\u116c\u116f-\u1171\u1174\u1176-\u119d\u119f-\u11a7\u11a9-\u11aa\u11ac-\u11ad\u11b0-\u11b6\u11b9\u11bb\u11c3-\u11ea\u11ec-\u11ef\u11f1-\u11f8\u11fa-\u1dff\u1e9c-\u1e9f\u1efa-\u1eff\u1f16-\u1f17\u1f1e-\u1f1f\u1f46-\u1f47\u1f4e-\u1f4f\u1f58\u1f5a\u1f5c\u1f5e\u1f7e-\u1f7f\u1fb5\u1fbd\u1fbf-\u1fc1\u1fc5\u1fcd-\u1fcf\u1fd4-\u1fd5\u1fdc-\u1fdf\u1fed-\u1ff1\u1ff5\u1ffd-\u2125\u2127-\u2129\u212c-\u212d\u212f-\u217f\u2183-\u3006\u3008-\u3020\u302a-\u3040\u3095-\u30a0\u30fb-\u3104\u312d-\u4dff\u9fa6-\uabff\ud7a4-\uffff]')
class InfosetFilter(object):
replacementRegexp = re.compile(r"U[\dA-F]{5,5}")
def __init__(self, replaceChars = None,
dropXmlnsLocalName = False,
dropXmlnsAttrNs = False,
preventDoubleDashComments = False,
preventDashAtCommentEnd = False,
replaceFormFeedCharacters = True):
self.dropXmlnsLocalName = dropXmlnsLocalName
self.dropXmlnsAttrNs = dropXmlnsAttrNs
self.preventDoubleDashComments = preventDoubleDashComments
self.preventDashAtCommentEnd = preventDashAtCommentEnd
self.replaceFormFeedCharacters = replaceFormFeedCharacters
self.replaceCache = {}
def coerceAttribute(self, name, namespace=None):
if self.dropXmlnsLocalName and name.startswith("xmlns:"):
#Need a datalosswarning here
return None
elif (self.dropXmlnsAttrNs and
namespace == "http://www.w3.org/2000/xmlns/"):
return None
else:
return self.toXmlName(name)
def coerceElement(self, name, namespace=None):
return self.toXmlName(name)
def coerceComment(self, data):
if self.preventDoubleDashComments:
while "--" in data:
data = data.replace("--", "- -")
return data
def coerceCharacters(self, data):
if self.replaceFormFeedCharacters:
data = data.replace("\x0C", " ")
#Other non-xml characters
return data
def toXmlName(self, name):
nameFirst = name[0]
nameRest = name[1:]
m = nonXmlNameFirstBMPRegexp.match(nameFirst)
if m:
nameFirstOutput = self.getReplacementCharacter(nameFirst)
else:
nameFirstOutput = nameFirst
nameRestOutput = nameRest
replaceChars = set(nonXmlNameBMPRegexp.findall(nameRest))
for char in replaceChars:
replacement = self.getReplacementCharacter(char)
nameRestOutput = nameRestOutput.replace(char, replacement)
return nameFirstOutput + nameRestOutput
def getReplacementCharacter(self, char):
if char in self.replaceCache:
replacement = self.replaceCache[char]
else:
replacement = self.escapeChar(char)
return replacement
def fromXmlName(self, name):
for item in set(self.replacementRegexp.findall(name)):
name = name.replace(item, self.unescapeChar(item))
return name
def escapeChar(self, char):
replacement = "U" + hex(ord(char))[2:].upper().rjust(5, "0")
self.replaceCache[char] = replacement
return replacement
def unescapeChar(self, charcode):
return unichr(int(charcode[1:], 16))
| gpl-3.0 |
aswinpj/Pygments | pygments/styles/vs.py | 50 | 1073 | # -*- coding: utf-8 -*-
"""
pygments.styles.vs
~~~~~~~~~~~~~~~~~~
Simple style with MS Visual Studio colors.
:copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.style import Style
from pygments.token import Keyword, Name, Comment, String, Error, \
Operator, Generic
class VisualStudioStyle(Style):
background_color = "#ffffff"
default_style = ""
styles = {
Comment: "#008000",
Comment.Preproc: "#0000ff",
Keyword: "#0000ff",
Operator.Word: "#0000ff",
Keyword.Type: "#2b91af",
Name.Class: "#2b91af",
String: "#a31515",
Generic.Heading: "bold",
Generic.Subheading: "bold",
Generic.Emph: "italic",
Generic.Strong: "bold",
Generic.Prompt: "bold",
Error: "border:#FF0000"
}
| bsd-2-clause |
pixelipo/stormjar | test.py | 1 | 1189 | import time
import json
class DarkSkyTest:
def __init__(self):
with open('config.json', 'r') as f:
self.config = json.load(f)
self.pixel = {}
def lightningStrobe(self, red, green, blue):
        # Flash the given color on and off across the display very quickly,
        # pausing delays[i] milliseconds between frames
        delays = [5, 20, 3, 8, 5, 10]
        for i in range(0, len(delays)):
            for j in range(self.config['led']['count']):
                if i % 2 == 0:
                    self.pixel[j] = (0, 0, 0)
                else:
                    self.pixel[j] = (red, green, blue)
            print(self.pixel)
            time.sleep(delays[i] / 1000.0)
def colorWipe(self, red, green, blue, wait_ms=80):
# Wipe color across display a pixel at a time
for i in range(self.config['led']['count']):
            # Set max brightness as percentage
brightness = self.config['led']['brightness']
self.pixel[i] = [round(red * brightness), round(green * brightness), round(blue * brightness)]
print(self.pixel)
time.sleep(wait_ms / 1000.0)
# Initiate instance of the class
test = DarkSkyTest()
print(test.colorWipe(255, 0, 0))
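The brightness scaling inside `colorWipe` above can be sketched standalone (the RGB values below are illustrative; note that Python 3's `round` uses banker's rounding, so 127.5 rounds to 128):

```python
def scale(rgb, brightness):
    # Multiply each channel by a 0..1 brightness factor and round,
    # mirroring the per-channel scaling done in colorWipe.
    return [round(c * brightness) for c in rgb]

print(scale([255, 0, 0], 0.5))
# -> [128, 0, 0]
```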
| gpl-3.0 |
yelongyu/chihu | venv/lib/python2.7/site-packages/sqlalchemy/dialects/mysql/pymysql.py | 44 | 1504 | # mysql/pymysql.py
# Copyright (C) 2005-2016 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""
.. dialect:: mysql+pymysql
:name: PyMySQL
:dbapi: pymysql
:connectstring: mysql+pymysql://<username>:<password>@<host>/<dbname>\
[?<options>]
:url: http://www.pymysql.org/
Unicode
-------
Please see :ref:`mysql_unicode` for current recommendations on unicode
handling.
MySQL-Python Compatibility
--------------------------
The pymysql DBAPI is a pure Python port of the MySQL-python (MySQLdb) driver,
and targets 100% compatibility. Most behavioral notes for MySQL-python apply
to the pymysql driver as well.
"""
from .mysqldb import MySQLDialect_mysqldb
from ...util import py3k
class MySQLDialect_pymysql(MySQLDialect_mysqldb):
driver = 'pymysql'
description_encoding = None
# generally, these two values should be both True
# or both False. PyMySQL unicode tests pass all the way back
# to 0.4 either way. See [ticket:3337]
supports_unicode_statements = True
supports_unicode_binds = True
@classmethod
def dbapi(cls):
return __import__('pymysql')
if py3k:
def _extract_error_code(self, exception):
if isinstance(exception.args[0], Exception):
exception = exception.args[0]
return exception.args[0]
dialect = MySQLDialect_pymysql
| gpl-3.0 |
OSGeoLabBp/tutorials | hungarian/python/code/shp_demo.py | 1 | 1753 | #! /usr/bin/python
# -*- coding: UTF-8 -*-
""" sample code for demonstrating shapely functionality """
import shapely
import shapely.wkt
import os
import sys
from osgeo import ogr
if len(sys.argv) < 2:
print "Usage: {} <shp_file> [buffer_distance]".format(sys.argv[0])
exit(1)
shapename = sys.argv[1]
shapefile = ogr.Open(shapename) # input point shape
if shapefile is None:
print "shape file not found"
exit(1)
layer = shapefile.GetLayer(0)
if len(sys.argv) < 3:
bufdist = 30000 # default buffer distance 30 km
else:
try:
bufdist = float(sys.argv[2])
except ValueError:
print "Illegal buffer distance parameter"
exit(1)
driver = ogr.GetDriverByName("ESRI Shapefile")
outshp = os.path.splitext(shapename)[0] + "_buf.shp"
if os.path.exists(outshp): # remove output shape
driver.DeleteDataSource(outshp)
dstFile = driver.CreateDataSource(outshp) # create output shape
dstLayer = dstFile.CreateLayer("layer", geom_type=ogr.wkbPolygon)
field = ogr.FieldDefn("id", ogr.OFTInteger) # create output field
dstLayer.CreateField(field)
for i in range(layer.GetFeatureCount()):
feature = layer.GetFeature(i) # get point from input
geometry = feature.GetGeometryRef()
wkt = geometry.ExportToWkt() # change to wkt format
p = shapely.wkt.loads(wkt) # convert to shapely geom
pb = p.buffer(bufdist) # buffer
wktb = shapely.wkt.dumps(pb) # export to wkt
feature = ogr.Feature(dstLayer.GetLayerDefn())
feature.SetGeometry(ogr.CreateGeometryFromWkt(wktb))
feature.SetField("id", i) # set id
dstLayer.CreateFeature(feature)
dstFile.Destroy() # close output
| cc0-1.0 |
psnovichkov/narrative | src/biokbase/Genotype_PhenotypeAPI/Client.py | 7 | 11261 | ############################################################
#
# Autogenerated by the KBase type compiler -
# any changes made here will be overwritten
#
# Passes on URLError, timeout, and BadStatusLine exceptions.
# See:
# http://docs.python.org/2/library/urllib2.html
# http://docs.python.org/2/library/httplib.html
#
############################################################
try:
import json
except ImportError:
import sys
sys.path.append('simplejson-2.3.3')
import simplejson as json
import urllib2, httplib, urlparse
from urllib2 import URLError, HTTPError
_CT = 'content-type'
_AJ = 'application/json'
_URL_SCHEME = frozenset(['http', 'https'])
class ServerError(Exception):
def __init__(self, name, code, message):
self.name = name
self.code = code
self.message = message
def __str__(self):
return self.name + ': ' + str(self.code) + '. ' + self.message
class Genotype_PhenotypeAPI:
def __init__(self, url, timeout = 30 * 60):
if url is None:
raise ValueError('A url is required')
scheme, _, _, _, _, _ = urlparse.urlparse(url)
if scheme not in _URL_SCHEME:
raise ValueError(url + " isn't a valid http url")
self.url = url
self.timeout = int(timeout)
if self.timeout < 1:
raise ValueError('Timeout value must be at least 1 second')
    def genomes_with_trait(self):
arg_hash = { 'method': 'Genotype_PhenotypeAPI.genomes_with_trait',
'params': [],
'version': '1.1'
}
body = json.dumps(arg_hash)
try:
ret = urllib2.urlopen(self.url, body, timeout = self.timeout)
except HTTPError as h:
if _CT in h.headers and h.headers[_CT] == _AJ:
err = json.loads(h.read())
if 'error' in err:
raise ServerError(**err['error'])
else: #this should never happen... if it does
raise h # h.read() will return '' in the calling code.
else:
raise h
if ret.code != httplib.OK:
            raise URLError('Received bad response code from server: ' + str(ret.code))
resp = json.loads(ret.read())
if 'result' in resp:
return resp['result'][0]
else:
raise ServerError('Unknown', 0, 'An unknown server error occurred')
def get_experiments(self, kb_genome):
arg_hash = { 'method': 'Genotype_PhenotypeAPI.get_experiments',
'params': [kb_genome],
'version': '1.1'
}
body = json.dumps(arg_hash)
try:
ret = urllib2.urlopen(self.url, body, timeout = self.timeout)
except HTTPError as h:
if _CT in h.headers and h.headers[_CT] == _AJ:
err = json.loads(h.read())
if 'error' in err:
raise ServerError(**err['error'])
else: #this should never happen... if it does
raise h # h.read() will return '' in the calling code.
else:
raise h
if ret.code != httplib.OK:
            raise URLError('Received bad response code from server: ' + str(ret.code))
resp = json.loads(ret.read())
if 'result' in resp:
return resp['result'][0]
else:
raise ServerError('Unknown', 0, 'An unknown server error occurred')
def get_traits(self, kb_study_experiment):
arg_hash = { 'method': 'Genotype_PhenotypeAPI.get_traits',
'params': [kb_study_experiment],
'version': '1.1'
}
body = json.dumps(arg_hash)
try:
ret = urllib2.urlopen(self.url, body, timeout = self.timeout)
except HTTPError as h:
if _CT in h.headers and h.headers[_CT] == _AJ:
err = json.loads(h.read())
if 'error' in err:
raise ServerError(**err['error'])
else: #this should never happen... if it does
raise h # h.read() will return '' in the calling code.
else:
raise h
if ret.code != httplib.OK:
            raise URLError('Received bad response code from server: ' + str(ret.code))
resp = json.loads(ret.read())
if 'result' in resp:
return resp['result'][0]
else:
raise ServerError('Unknown', 0, 'An unknown server error occurred')
def traits_to_variations(self, trait, pvaluecutoff):
arg_hash = { 'method': 'Genotype_PhenotypeAPI.traits_to_variations',
'params': [trait, pvaluecutoff],
'version': '1.1'
}
body = json.dumps(arg_hash)
try:
ret = urllib2.urlopen(self.url, body, timeout = self.timeout)
except HTTPError as h:
if _CT in h.headers and h.headers[_CT] == _AJ:
err = json.loads(h.read())
if 'error' in err:
raise ServerError(**err['error'])
else: #this should never happen... if it does
raise h # h.read() will return '' in the calling code.
else:
raise h
if ret.code != httplib.OK:
            raise URLError('Received bad response code from server: ' + str(ret.code))
resp = json.loads(ret.read())
if 'result' in resp:
return resp['result'][0]
else:
raise ServerError('Unknown', 0, 'An unknown server error occurred')
def chromosome_position_from_variation_details(self, variation_details):
arg_hash = { 'method': 'Genotype_PhenotypeAPI.chromosome_position_from_variation_details',
'params': [variation_details],
'version': '1.1'
}
body = json.dumps(arg_hash)
try:
ret = urllib2.urlopen(self.url, body, timeout = self.timeout)
except HTTPError as h:
if _CT in h.headers and h.headers[_CT] == _AJ:
err = json.loads(h.read())
if 'error' in err:
raise ServerError(**err['error'])
else: #this should never happen... if it does
raise h # h.read() will return '' in the calling code.
else:
raise h
if ret.code != httplib.OK:
            raise URLError('Received bad response code from server: ' + str(ret.code))
resp = json.loads(ret.read())
if 'result' in resp:
return resp['result'][0]
else:
raise ServerError('Unknown', 0, 'An unknown server error occurred')
def traits_to_genes(self, trait, pvaluecutoff, distance):
arg_hash = { 'method': 'Genotype_PhenotypeAPI.traits_to_genes',
'params': [trait, pvaluecutoff, distance],
'version': '1.1'
}
body = json.dumps(arg_hash)
try:
ret = urllib2.urlopen(self.url, body, timeout = self.timeout)
except HTTPError as h:
if _CT in h.headers and h.headers[_CT] == _AJ:
err = json.loads(h.read())
if 'error' in err:
raise ServerError(**err['error'])
else: #this should never happen... if it does
raise h # h.read() will return '' in the calling code.
else:
raise h
if ret.code != httplib.OK:
            raise URLError('Received bad response code from server: ' + str(ret.code))
resp = json.loads(ret.read())
if 'result' in resp:
return resp['result'][0]
else:
raise ServerError('Unknown', 0, 'An unknown server error occurred')
def variations_to_genes(self, chromosomal_positions, distance):
arg_hash = { 'method': 'Genotype_PhenotypeAPI.variations_to_genes',
'params': [chromosomal_positions, distance],
'version': '1.1'
}
body = json.dumps(arg_hash)
try:
ret = urllib2.urlopen(self.url, body, timeout = self.timeout)
except HTTPError as h:
if _CT in h.headers and h.headers[_CT] == _AJ:
err = json.loads(h.read())
if 'error' in err:
raise ServerError(**err['error'])
else: #this should never happen... if it does
raise h # h.read() will return '' in the calling code.
else:
raise h
if ret.code != httplib.OK:
            raise URLError('Received bad response code from server: ' + str(ret.code))
resp = json.loads(ret.read())
if 'result' in resp:
return resp['result'][0]
else:
raise ServerError('Unknown', 0, 'An unknown server error occurred')
def find_common_snps(self, trait_list_pvalue):
arg_hash = { 'method': 'Genotype_PhenotypeAPI.find_common_snps',
'params': [trait_list_pvalue],
'version': '1.1'
}
body = json.dumps(arg_hash)
try:
ret = urllib2.urlopen(self.url, body, timeout = self.timeout)
except HTTPError as h:
if _CT in h.headers and h.headers[_CT] == _AJ:
err = json.loads(h.read())
if 'error' in err:
raise ServerError(**err['error'])
else: #this should never happen... if it does
raise h # h.read() will return '' in the calling code.
else:
raise h
if ret.code != httplib.OK:
            raise URLError('Received bad response code from server: ' + str(ret.code))
resp = json.loads(ret.read())
if 'result' in resp:
return resp['result'][0]
else:
raise ServerError('Unknown', 0, 'An unknown server error occurred')
def selected_locations_to_genes(self, trait, pmin, pmax, chromosomal_locations, distance):
arg_hash = { 'method': 'Genotype_PhenotypeAPI.selected_locations_to_genes',
'params': [trait, pmin, pmax, chromosomal_locations, distance],
'version': '1.1'
}
body = json.dumps(arg_hash)
try:
ret = urllib2.urlopen(self.url, body, timeout = self.timeout)
except HTTPError as h:
if _CT in h.headers and h.headers[_CT] == _AJ:
err = json.loads(h.read())
if 'error' in err:
raise ServerError(**err['error'])
else: #this should never happen... if it does
raise h # h.read() will return '' in the calling code.
else:
raise h
if ret.code != httplib.OK:
            raise URLError('Received bad response code from server: ' + str(ret.code))
resp = json.loads(ret.read())
if 'result' in resp:
return resp['result'][0]
else:
raise ServerError('Unknown', 0, 'An unknown server error occurred')
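Every method in the client above assembles the same JSON-RPC 1.1 request body before POSTing it to `self.url`. A standalone sketch of that body construction (the experiment id `'kb|g.3899.exp.1'` is a made-up example):

```python
import json

def make_rpc_body(method, params):
    # Mirror the arg_hash each Genotype_PhenotypeAPI method builds.
    arg_hash = {
        'method': 'Genotype_PhenotypeAPI.' + method,
        'params': params,
        'version': '1.1',
    }
    return json.dumps(arg_hash)

body = make_rpc_body('get_traits', ['kb|g.3899.exp.1'])
print(json.loads(body)['method'])
# -> Genotype_PhenotypeAPI.get_traits
```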
| mit |
cloudera/hue | desktop/core/ext-py/pysaml2-4.9.0/src/saml2/extension/mdui.py | 36 | 13834 | #!/usr/bin/env python
#
# Generated Mon May 2 14:23:33 2011 by parse_xsd.py version 0.4.
#
import saml2
from saml2 import SamlBase
from saml2 import md
NAMESPACE = 'urn:oasis:names:tc:SAML:metadata:ui'
class DisplayName(md.LocalizedNameType_):
"""The urn:oasis:names:tc:SAML:metadata:ui:DisplayName element """
c_tag = 'DisplayName'
c_namespace = NAMESPACE
c_children = md.LocalizedNameType_.c_children.copy()
c_attributes = md.LocalizedNameType_.c_attributes.copy()
c_child_order = md.LocalizedNameType_.c_child_order[:]
c_cardinality = md.LocalizedNameType_.c_cardinality.copy()
def display_name_from_string(xml_string):
return saml2.create_class_from_xml_string(DisplayName, xml_string)
class Description(md.LocalizedNameType_):
"""The urn:oasis:names:tc:SAML:metadata:ui:Description element """
c_tag = 'Description'
c_namespace = NAMESPACE
c_children = md.LocalizedNameType_.c_children.copy()
c_attributes = md.LocalizedNameType_.c_attributes.copy()
c_child_order = md.LocalizedNameType_.c_child_order[:]
c_cardinality = md.LocalizedNameType_.c_cardinality.copy()
def description_from_string(xml_string):
return saml2.create_class_from_xml_string(Description, xml_string)
class InformationURL(md.LocalizedURIType_):
"""The urn:oasis:names:tc:SAML:metadata:ui:InformationURL element """
c_tag = 'InformationURL'
c_namespace = NAMESPACE
c_children = md.LocalizedURIType_.c_children.copy()
c_attributes = md.LocalizedURIType_.c_attributes.copy()
c_child_order = md.LocalizedURIType_.c_child_order[:]
c_cardinality = md.LocalizedURIType_.c_cardinality.copy()
def information_url_from_string(xml_string):
return saml2.create_class_from_xml_string(InformationURL, xml_string)
class PrivacyStatementURL(md.LocalizedURIType_):
"""The urn:oasis:names:tc:SAML:metadata:ui:PrivacyStatementURL element """
c_tag = 'PrivacyStatementURL'
c_namespace = NAMESPACE
c_children = md.LocalizedURIType_.c_children.copy()
c_attributes = md.LocalizedURIType_.c_attributes.copy()
c_child_order = md.LocalizedURIType_.c_child_order[:]
c_cardinality = md.LocalizedURIType_.c_cardinality.copy()
def privacy_statement_url_from_string(xml_string):
return saml2.create_class_from_xml_string(PrivacyStatementURL, xml_string)
class ListOfStrings_(SamlBase):
"""The urn:oasis:names:tc:SAML:metadata:ui:listOfStrings element """
c_tag = 'listOfStrings'
c_namespace = NAMESPACE
c_value_type = {'member': 'string', 'base': 'list'}
c_children = SamlBase.c_children.copy()
c_attributes = SamlBase.c_attributes.copy()
c_child_order = SamlBase.c_child_order[:]
c_cardinality = SamlBase.c_cardinality.copy()
def list_of_strings__from_string(xml_string):
return saml2.create_class_from_xml_string(ListOfStrings_, xml_string)
class KeywordsType_(ListOfStrings_):
"""The urn:oasis:names:tc:SAML:metadata:ui:KeywordsType element """
c_tag = 'KeywordsType'
c_namespace = NAMESPACE
c_children = ListOfStrings_.c_children.copy()
c_attributes = ListOfStrings_.c_attributes.copy()
c_child_order = ListOfStrings_.c_child_order[:]
c_cardinality = ListOfStrings_.c_cardinality.copy()
c_attributes['{http://www.w3.org/XML/1998/namespace}lang'] = (
'lang', 'mdui:listOfStrings', True)
def __init__(self,
lang=None,
text=None,
extension_elements=None,
extension_attributes=None):
ListOfStrings_.__init__(self,
text=text,
extension_elements=extension_elements,
extension_attributes=extension_attributes)
self.lang = lang
def keywords_type__from_string(xml_string):
return saml2.create_class_from_xml_string(KeywordsType_, xml_string)
class LogoType_(SamlBase):
"""The urn:oasis:names:tc:SAML:metadata:ui:LogoType element """
c_tag = 'LogoType'
c_namespace = NAMESPACE
c_value_type = {'base': 'anyURI'}
c_children = SamlBase.c_children.copy()
c_attributes = SamlBase.c_attributes.copy()
c_child_order = SamlBase.c_child_order[:]
c_cardinality = SamlBase.c_cardinality.copy()
c_attributes['height'] = ('height', 'positiveInteger', True)
c_attributes['width'] = ('width', 'positiveInteger', True)
c_attributes['{http://www.w3.org/XML/1998/namespace}lang'] = (
'lang', 'anyURI', False)
def __init__(self,
height=None,
width=None,
lang=None,
text=None,
extension_elements=None,
extension_attributes=None):
SamlBase.__init__(self,
text=text,
extension_elements=extension_elements,
extension_attributes=extension_attributes)
self.height = height
self.width = width
self.lang = lang
def logo_type__from_string(xml_string):
return saml2.create_class_from_xml_string(LogoType_, xml_string)
class IPHint(SamlBase):
"""The urn:oasis:names:tc:SAML:metadata:ui:IPHint element """
c_tag = 'IPHint'
c_namespace = NAMESPACE
c_value_type = {'base': 'string'}
c_children = SamlBase.c_children.copy()
c_attributes = SamlBase.c_attributes.copy()
c_child_order = SamlBase.c_child_order[:]
c_cardinality = SamlBase.c_cardinality.copy()
def ip_hint_from_string(xml_string):
return saml2.create_class_from_xml_string(IPHint, xml_string)
class DomainHint(SamlBase):
"""The urn:oasis:names:tc:SAML:metadata:ui:DomainHint element """
c_tag = 'DomainHint'
c_namespace = NAMESPACE
c_value_type = {'base': 'string'}
c_children = SamlBase.c_children.copy()
c_attributes = SamlBase.c_attributes.copy()
c_child_order = SamlBase.c_child_order[:]
c_cardinality = SamlBase.c_cardinality.copy()
def domain_hint_from_string(xml_string):
return saml2.create_class_from_xml_string(DomainHint, xml_string)
class GeolocationHint(SamlBase):
"""The urn:oasis:names:tc:SAML:metadata:ui:GeolocationHint element """
c_tag = 'GeolocationHint'
c_namespace = NAMESPACE
c_value_type = {'base': 'anyURI'}
c_children = SamlBase.c_children.copy()
c_attributes = SamlBase.c_attributes.copy()
c_child_order = SamlBase.c_child_order[:]
c_cardinality = SamlBase.c_cardinality.copy()
def geolocation_hint_from_string(xml_string):
return saml2.create_class_from_xml_string(GeolocationHint, xml_string)
class Keywords(KeywordsType_):
"""The urn:oasis:names:tc:SAML:metadata:ui:Keywords element """
c_tag = 'Keywords'
c_namespace = NAMESPACE
c_children = KeywordsType_.c_children.copy()
c_attributes = KeywordsType_.c_attributes.copy()
c_child_order = KeywordsType_.c_child_order[:]
c_cardinality = KeywordsType_.c_cardinality.copy()
def keywords_from_string(xml_string):
return saml2.create_class_from_xml_string(Keywords, xml_string)
class Logo(LogoType_):
"""The urn:oasis:names:tc:SAML:metadata:ui:Logo element """
c_tag = 'Logo'
c_namespace = NAMESPACE
c_children = LogoType_.c_children.copy()
c_attributes = LogoType_.c_attributes.copy()
c_child_order = LogoType_.c_child_order[:]
c_cardinality = LogoType_.c_cardinality.copy()
def logo_from_string(xml_string):
return saml2.create_class_from_xml_string(Logo, xml_string)
class DiscoHintsType_(SamlBase):
"""The urn:oasis:names:tc:SAML:metadata:ui:DiscoHintsType element """
c_tag = 'DiscoHintsType'
c_namespace = NAMESPACE
c_children = SamlBase.c_children.copy()
c_attributes = SamlBase.c_attributes.copy()
c_child_order = SamlBase.c_child_order[:]
c_cardinality = SamlBase.c_cardinality.copy()
c_children['{urn:oasis:names:tc:SAML:metadata:ui}IPHint'] = (
'ip_hint', [IPHint])
c_cardinality['ip_hint'] = {"min": 0}
c_children['{urn:oasis:names:tc:SAML:metadata:ui}DomainHint'] = (
'domain_hint', [DomainHint])
c_cardinality['domain_hint'] = {"min": 0}
c_children['{urn:oasis:names:tc:SAML:metadata:ui}GeolocationHint'] = (
'geolocation_hint', [GeolocationHint])
c_cardinality['geolocation_hint'] = {"min": 0}
c_child_order.extend(['ip_hint', 'domain_hint', 'geolocation_hint'])
def __init__(self,
ip_hint=None,
domain_hint=None,
geolocation_hint=None,
text=None,
extension_elements=None,
extension_attributes=None):
SamlBase.__init__(self,
text=text,
extension_elements=extension_elements,
extension_attributes=extension_attributes)
self.ip_hint = ip_hint or []
self.domain_hint = domain_hint or []
self.geolocation_hint = geolocation_hint or []
def disco_hints_type__from_string(xml_string):
return saml2.create_class_from_xml_string(DiscoHintsType_, xml_string)
class UIInfoType_(SamlBase):
"""The urn:oasis:names:tc:SAML:metadata:ui:UIInfoType element """
c_tag = 'UIInfoType'
c_namespace = NAMESPACE
c_children = SamlBase.c_children.copy()
c_attributes = SamlBase.c_attributes.copy()
c_child_order = SamlBase.c_child_order[:]
c_cardinality = SamlBase.c_cardinality.copy()
c_children['{urn:oasis:names:tc:SAML:metadata:ui}DisplayName'] = (
'display_name', [DisplayName])
c_cardinality['display_name'] = {"min": 0}
c_children['{urn:oasis:names:tc:SAML:metadata:ui}Description'] = (
'description', [Description])
c_cardinality['description'] = {"min": 0}
c_children['{urn:oasis:names:tc:SAML:metadata:ui}Keywords'] = (
'keywords', [Keywords])
c_cardinality['keywords'] = {"min": 0}
c_children['{urn:oasis:names:tc:SAML:metadata:ui}Logo'] = ('logo', [Logo])
c_cardinality['logo'] = {"min": 0}
c_children['{urn:oasis:names:tc:SAML:metadata:ui}InformationURL'] = (
'information_url', [InformationURL])
c_cardinality['information_url'] = {"min": 0}
c_children['{urn:oasis:names:tc:SAML:metadata:ui}PrivacyStatementURL'] = (
'privacy_statement_url', [PrivacyStatementURL])
c_cardinality['privacy_statement_url'] = {"min": 0}
c_child_order.extend(
['display_name', 'description', 'keywords', 'logo', 'information_url',
'privacy_statement_url'])
def __init__(self,
display_name=None,
description=None,
keywords=None,
logo=None,
information_url=None,
privacy_statement_url=None,
text=None,
extension_elements=None,
extension_attributes=None):
SamlBase.__init__(self,
text=text,
extension_elements=extension_elements,
extension_attributes=extension_attributes)
self.display_name = display_name or []
self.description = description or []
self.keywords = keywords or []
self.logo = logo or []
self.information_url = information_url or []
self.privacy_statement_url = privacy_statement_url or []
def ui_info_type__from_string(xml_string):
return saml2.create_class_from_xml_string(UIInfoType_, xml_string)
class DiscoHints(DiscoHintsType_):
"""The urn:oasis:names:tc:SAML:metadata:ui:DiscoHints element """
c_tag = 'DiscoHints'
c_namespace = NAMESPACE
c_children = DiscoHintsType_.c_children.copy()
c_attributes = DiscoHintsType_.c_attributes.copy()
c_child_order = DiscoHintsType_.c_child_order[:]
c_cardinality = DiscoHintsType_.c_cardinality.copy()
def disco_hints_from_string(xml_string):
return saml2.create_class_from_xml_string(DiscoHints, xml_string)
class UIInfo(UIInfoType_):
"""The urn:oasis:names:tc:SAML:metadata:ui:UIInfo element """
c_tag = 'UIInfo'
c_namespace = NAMESPACE
c_children = UIInfoType_.c_children.copy()
c_attributes = UIInfoType_.c_attributes.copy()
c_child_order = UIInfoType_.c_child_order[:]
c_cardinality = UIInfoType_.c_cardinality.copy()
def ui_info_from_string(xml_string):
return saml2.create_class_from_xml_string(UIInfo, xml_string)
ELEMENT_FROM_STRING = {
UIInfo.c_tag: ui_info_from_string,
UIInfoType_.c_tag: ui_info_type__from_string,
DisplayName.c_tag: display_name_from_string,
Description.c_tag: description_from_string,
InformationURL.c_tag: information_url_from_string,
PrivacyStatementURL.c_tag: privacy_statement_url_from_string,
Keywords.c_tag: keywords_from_string,
KeywordsType_.c_tag: keywords_type__from_string,
ListOfStrings_.c_tag: list_of_strings__from_string,
Logo.c_tag: logo_from_string,
LogoType_.c_tag: logo_type__from_string,
DiscoHints.c_tag: disco_hints_from_string,
DiscoHintsType_.c_tag: disco_hints_type__from_string,
IPHint.c_tag: ip_hint_from_string,
DomainHint.c_tag: domain_hint_from_string,
GeolocationHint.c_tag: geolocation_hint_from_string,
}
ELEMENT_BY_TAG = {
'UIInfo': UIInfo,
'UIInfoType': UIInfoType_,
'DisplayName': DisplayName,
'Description': Description,
'InformationURL': InformationURL,
'PrivacyStatementURL': PrivacyStatementURL,
'Keywords': Keywords,
'KeywordsType': KeywordsType_,
'listOfStrings': ListOfStrings_,
'Logo': Logo,
'LogoType': LogoType_,
'DiscoHints': DiscoHints,
'DiscoHintsType': DiscoHintsType_,
'IPHint': IPHint,
'DomainHint': DomainHint,
'GeolocationHint': GeolocationHint,
}
def factory(tag, **kwargs):
return ELEMENT_BY_TAG[tag](**kwargs)
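The `factory`/`ELEMENT_BY_TAG` pair above is a tag-to-class dispatch table. A minimal stand-alone sketch of the same pattern, using hypothetical placeholder classes rather than the real saml2 element types:

```python
# Hypothetical stand-ins illustrating the tag -> class dispatch used by
# factory() above; these are NOT the saml2 classes, just the same shape.
class UIInfo(object):
    c_tag = 'UIInfo'
    def __init__(self, display_name=None):
        self.display_name = display_name or []

class DiscoHints(object):
    c_tag = 'DiscoHints'
    def __init__(self, ip_hint=None):
        self.ip_hint = ip_hint or []

# Build the lookup table from each class's XML tag.
ELEMENT_BY_TAG = {cls.c_tag: cls for cls in (UIInfo, DiscoHints)}

def factory(tag, **kwargs):
    # Resolve the class by tag and forward keyword arguments, which map
    # to the element's children and attributes.
    return ELEMENT_BY_TAG[tag](**kwargs)

ui = factory('UIInfo', display_name=['Example IdP'])
```

Callers thus never hard-code a class; they name the XML tag and let the table pick the right constructor.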
| apache-2.0 |
Leila20/django | tests/auth_tests/test_tokens.py | 42 | 2516 | import unittest
from datetime import date, timedelta
from django.conf import settings
from django.contrib.auth.models import User
from django.contrib.auth.tokens import PasswordResetTokenGenerator
from django.test import TestCase
from django.utils.six import PY3
class TokenGeneratorTest(TestCase):
def test_make_token(self):
"""
Ensure that we can make a token and that it is valid
"""
user = User.objects.create_user('tokentestuser', 'test2@example.com', 'testpw')
p0 = PasswordResetTokenGenerator()
tk1 = p0.make_token(user)
self.assertTrue(p0.check_token(user, tk1))
def test_10265(self):
"""
Ensure that the token generated for a user created in the same request
will work correctly.
"""
# See ticket #10265
user = User.objects.create_user('comebackkid', 'test3@example.com', 'testpw')
p0 = PasswordResetTokenGenerator()
tk1 = p0.make_token(user)
reload = User.objects.get(username='comebackkid')
tk2 = p0.make_token(reload)
self.assertEqual(tk1, tk2)
def test_timeout(self):
"""
Ensure we can use the token after n days, but no greater.
"""
# Uses a mocked version of PasswordResetTokenGenerator so we can change
# the value of 'today'
class Mocked(PasswordResetTokenGenerator):
def __init__(self, today):
self._today_val = today
def _today(self):
return self._today_val
user = User.objects.create_user('tokentestuser', 'test2@example.com', 'testpw')
p0 = PasswordResetTokenGenerator()
tk1 = p0.make_token(user)
p1 = Mocked(date.today() + timedelta(settings.PASSWORD_RESET_TIMEOUT_DAYS))
self.assertTrue(p1.check_token(user, tk1))
p2 = Mocked(date.today() + timedelta(settings.PASSWORD_RESET_TIMEOUT_DAYS + 1))
self.assertFalse(p2.check_token(user, tk1))
@unittest.skipIf(PY3, "Unnecessary test with Python 3")
def test_date_length(self):
"""
Make sure we don't allow overly long dates, causing a potential DoS.
"""
user = User.objects.create_user('ima1337h4x0r', 'test4@example.com', 'p4ssw0rd')
p0 = PasswordResetTokenGenerator()
# This will put a 14-digit base36 timestamp into the token, which is too large.
with self.assertRaises(ValueError):
p0._make_token_with_timestamp(user, 175455491841851871349)
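The `Mocked` subclass in `test_timeout` shows a useful testing idiom: the generator factors "what day is it" into an overridable `_today()` hook so tests can advance the clock without patching globals. A self-contained toy version of that idiom (hypothetical generator, not Django's real token scheme):

```python
from datetime import date, timedelta

TIMEOUT_DAYS = 3  # hypothetical stand-in for settings.PASSWORD_RESET_TIMEOUT_DAYS

class TokenGenerator(object):
    """Toy generator: the token just embeds the ordinal day it was issued."""
    def make_token(self):
        return 'tok-%d' % self._today().toordinal()

    def check_token(self, token):
        issued = int(token.split('-')[1])
        age = self._today().toordinal() - issued
        return 0 <= age <= TIMEOUT_DAYS

    def _today(self):
        # Factored out so tests can override the clock, as Mocked does above.
        return date.today()

class Mocked(TokenGenerator):
    def __init__(self, today):
        self._today_val = today
    def _today(self):
        return self._today_val

tok = TokenGenerator().make_token()
assert Mocked(date.today() + timedelta(TIMEOUT_DAYS)).check_token(tok)
assert not Mocked(date.today() + timedelta(TIMEOUT_DAYS + 1)).check_token(tok)
```

The same seam (override a single time accessor) works for any expiry logic and keeps the production path free of test hooks.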
| bsd-3-clause |
pku9104038/edx-platform | common/djangoapps/course_groups/models.py | 38 | 1447 | import logging
from django.contrib.auth.models import User
from django.db import models
log = logging.getLogger(__name__)
class CourseUserGroup(models.Model):
"""
This model represents groups of users in a course. Groups may have different types,
which may be treated specially. For example, a user can be in at most one cohort per
course, and cohorts are used to split up the forums by group.
"""
class Meta:
unique_together = (('name', 'course_id'), )
name = models.CharField(max_length=255,
help_text=("What is the name of this group? "
"Must be unique within a course."))
users = models.ManyToManyField(User, db_index=True, related_name='course_groups',
help_text="Who is in this group?")
# Note: groups associated with particular runs of a course. E.g. Fall 2012 and Spring
# 2013 versions of 6.00x will have separate groups.
course_id = models.CharField(max_length=255, db_index=True,
help_text="Which course is this group associated with?")
# For now, only have group type 'cohort', but adding a type field to support
# things like 'question_discussion', 'friends', 'off-line-class', etc
COHORT = 'cohort'
GROUP_TYPE_CHOICES = ((COHORT, 'Cohort'),)
group_type = models.CharField(max_length=20, choices=GROUP_TYPE_CHOICES)
| agpl-3.0 |
batermj/algorithm-challenger | code-analysis/programming_anguage/python/source_codes/Python3.5.9/Python-3.5.9/Lib/test/test_property.py | 5 | 8541 | # Test case for property
# more tests are in test_descr
import sys
import unittest
class PropertyBase(Exception):
pass
class PropertyGet(PropertyBase):
pass
class PropertySet(PropertyBase):
pass
class PropertyDel(PropertyBase):
pass
class BaseClass(object):
def __init__(self):
self._spam = 5
@property
def spam(self):
"""BaseClass.getter"""
return self._spam
@spam.setter
def spam(self, value):
self._spam = value
@spam.deleter
def spam(self):
del self._spam
class SubClass(BaseClass):
@BaseClass.spam.getter
def spam(self):
"""SubClass.getter"""
raise PropertyGet(self._spam)
@spam.setter
def spam(self, value):
raise PropertySet(self._spam)
@spam.deleter
def spam(self):
raise PropertyDel(self._spam)
class PropertyDocBase(object):
_spam = 1
def _get_spam(self):
return self._spam
spam = property(_get_spam, doc="spam spam spam")
class PropertyDocSub(PropertyDocBase):
@PropertyDocBase.spam.getter
def spam(self):
"""The decorator does not use this doc string"""
return self._spam
class PropertySubNewGetter(BaseClass):
@BaseClass.spam.getter
def spam(self):
"""new docstring"""
return 5
class PropertyNewGetter(object):
@property
def spam(self):
"""original docstring"""
return 1
@spam.getter
def spam(self):
"""new docstring"""
return 8
class PropertyTests(unittest.TestCase):
def test_property_decorator_baseclass(self):
# see #1620
base = BaseClass()
self.assertEqual(base.spam, 5)
self.assertEqual(base._spam, 5)
base.spam = 10
self.assertEqual(base.spam, 10)
self.assertEqual(base._spam, 10)
delattr(base, "spam")
self.assertTrue(not hasattr(base, "spam"))
self.assertTrue(not hasattr(base, "_spam"))
base.spam = 20
self.assertEqual(base.spam, 20)
self.assertEqual(base._spam, 20)
def test_property_decorator_subclass(self):
# see #1620
sub = SubClass()
self.assertRaises(PropertyGet, getattr, sub, "spam")
self.assertRaises(PropertySet, setattr, sub, "spam", None)
self.assertRaises(PropertyDel, delattr, sub, "spam")
@unittest.skipIf(sys.flags.optimize >= 2,
"Docstrings are omitted with -O2 and above")
def test_property_decorator_subclass_doc(self):
sub = SubClass()
self.assertEqual(sub.__class__.spam.__doc__, "SubClass.getter")
@unittest.skipIf(sys.flags.optimize >= 2,
"Docstrings are omitted with -O2 and above")
def test_property_decorator_baseclass_doc(self):
base = BaseClass()
self.assertEqual(base.__class__.spam.__doc__, "BaseClass.getter")
def test_property_decorator_doc(self):
base = PropertyDocBase()
sub = PropertyDocSub()
self.assertEqual(base.__class__.spam.__doc__, "spam spam spam")
self.assertEqual(sub.__class__.spam.__doc__, "spam spam spam")
@unittest.skipIf(sys.flags.optimize >= 2,
"Docstrings are omitted with -O2 and above")
def test_property_getter_doc_override(self):
newgettersub = PropertySubNewGetter()
self.assertEqual(newgettersub.spam, 5)
self.assertEqual(newgettersub.__class__.spam.__doc__, "new docstring")
newgetter = PropertyNewGetter()
self.assertEqual(newgetter.spam, 8)
self.assertEqual(newgetter.__class__.spam.__doc__, "new docstring")
def test_property___isabstractmethod__descriptor(self):
for val in (True, False, [], [1], '', '1'):
class C(object):
def foo(self):
pass
foo.__isabstractmethod__ = val
foo = property(foo)
self.assertIs(C.foo.__isabstractmethod__, bool(val))
# check that the property's __isabstractmethod__ descriptor does the
# right thing when presented with a value that fails truth testing:
class NotBool(object):
def __bool__(self):
raise ValueError()
__len__ = __bool__
with self.assertRaises(ValueError):
class C(object):
def foo(self):
pass
foo.__isabstractmethod__ = NotBool()
foo = property(foo)
C.foo.__isabstractmethod__
@unittest.skipIf(sys.flags.optimize >= 2,
"Docstrings are omitted with -O2 and above")
def test_property_builtin_doc_writable(self):
p = property(doc='basic')
self.assertEqual(p.__doc__, 'basic')
p.__doc__ = 'extended'
self.assertEqual(p.__doc__, 'extended')
@unittest.skipIf(sys.flags.optimize >= 2,
"Docstrings are omitted with -O2 and above")
def test_property_decorator_doc_writable(self):
class PropertyWritableDoc(object):
@property
def spam(self):
"""Eggs"""
return "eggs"
sub = PropertyWritableDoc()
self.assertEqual(sub.__class__.spam.__doc__, 'Eggs')
sub.__class__.spam.__doc__ = 'Spam'
self.assertEqual(sub.__class__.spam.__doc__, 'Spam')
# Issue 5890: subclasses of property do not preserve method __doc__ strings
class PropertySub(property):
"""This is a subclass of property"""
class PropertySubSlots(property):
"""This is a subclass of property that defines __slots__"""
__slots__ = ()
class PropertySubclassTests(unittest.TestCase):
def test_slots_docstring_copy_exception(self):
try:
class Foo(object):
@PropertySubSlots
def spam(self):
"""Trying to copy this docstring will raise an exception"""
return 1
except AttributeError:
pass
else:
raise Exception("AttributeError not raised")
@unittest.skipIf(sys.flags.optimize >= 2,
"Docstrings are omitted with -O2 and above")
def test_docstring_copy(self):
class Foo(object):
@PropertySub
def spam(self):
"""spam wrapped in property subclass"""
return 1
self.assertEqual(
Foo.spam.__doc__,
"spam wrapped in property subclass")
@unittest.skipIf(sys.flags.optimize >= 2,
"Docstrings are omitted with -O2 and above")
def test_property_setter_copies_getter_docstring(self):
class Foo(object):
def __init__(self): self._spam = 1
@PropertySub
def spam(self):
"""spam wrapped in property subclass"""
return self._spam
@spam.setter
def spam(self, value):
"""this docstring is ignored"""
self._spam = value
foo = Foo()
self.assertEqual(foo.spam, 1)
foo.spam = 2
self.assertEqual(foo.spam, 2)
self.assertEqual(
Foo.spam.__doc__,
"spam wrapped in property subclass")
class FooSub(Foo):
@Foo.spam.setter
def spam(self, value):
"""another ignored docstring"""
self._spam = 'eggs'
foosub = FooSub()
self.assertEqual(foosub.spam, 1)
foosub.spam = 7
self.assertEqual(foosub.spam, 'eggs')
self.assertEqual(
FooSub.spam.__doc__,
"spam wrapped in property subclass")
@unittest.skipIf(sys.flags.optimize >= 2,
"Docstrings are omitted with -O2 and above")
def test_property_new_getter_new_docstring(self):
class Foo(object):
@PropertySub
def spam(self):
"""a docstring"""
return 1
@spam.getter
def spam(self):
"""a new docstring"""
return 2
self.assertEqual(Foo.spam.__doc__, "a new docstring")
class FooBase(object):
@PropertySub
def spam(self):
"""a docstring"""
return 1
class Foo2(FooBase):
@FooBase.spam.getter
def spam(self):
"""a new docstring"""
return 2
 self.assertEqual(Foo2.spam.__doc__, "a new docstring")
if __name__ == '__main__':
unittest.main()
| apache-2.0 |
ardekantur/pyglet | tests/text/PLAIN.py | 33 | 4927 | #!/usr/bin/env python
'''Test an unformatted document is editable.
Examine and type over the text in the window that appears. The window
contents can be scrolled with the mouse wheel.
Press ESC to exit the test.
'''
__docformat__ = 'restructuredtext'
__version__ = '$Id: STYLE.py 1754 2008-02-10 13:26:52Z Alex.Holkner $'
import unittest
from pyglet import app
from pyglet import gl
from pyglet import graphics
from pyglet import text
from pyglet.text import caret
from pyglet.text import layout
from pyglet import window
from pyglet.window import key, mouse
doctext = '''PLAIN.py test document.
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas aliquet quam sit amet enim. Donec iaculis, magna vitae imperdiet convallis, lectus sem ultricies nulla, non fringilla quam felis tempus velit. Etiam et velit. Integer euismod. Aliquam a diam. Donec sed ante. Mauris enim pede, dapibus sed, dapibus vitae, consectetuer in, est. Donec aliquam risus eu ipsum. Integer et tortor. Ut accumsan risus sed ante.
Aliquam dignissim, massa a imperdiet fermentum, orci dolor facilisis ante, ut vulputate nisi nunc sed massa. Morbi sodales hendrerit tortor. Nunc id tortor ut lacus mollis malesuada. Sed nibh tellus, rhoncus et, egestas eu, laoreet eu, urna. Vestibulum massa leo, convallis et, pharetra vitae, iaculis at, ante. Pellentesque volutpat porta enim. Morbi ac nunc eget mi pretium viverra. Pellentesque felis risus, lobortis vitae, malesuada vitae, bibendum eu, tortor. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Phasellus dapibus tortor ac neque. Curabitur pulvinar bibendum lectus. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Aliquam tellus. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Nulla turpis leo, rhoncus vel, euismod non, consequat sed, massa. Quisque ultricies. Aliquam fringilla faucibus est. Proin nec felis eget felis suscipit vehicula.
Etiam quam. Aliquam at ligula. Aenean quis dolor. Suspendisse potenti. Sed lacinia leo eu est. Nam pede ligula, molestie nec, tincidunt vel, posuere in, tellus. Donec fringilla dictum dolor. Aenean tellus orci, viverra id, vehicula eget, tempor a, dui. Morbi eu dolor nec lacus fringilla dapibus. Nulla facilisi. Nulla posuere. Nunc interdum. Donec convallis libero vitae odio.
Aenean metus lectus, faucibus in, malesuada at, fringilla nec, risus. Integer enim. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Proin bibendum felis vel neque. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Donec ipsum dui, euismod at, dictum eu, congue tincidunt, urna. Sed quis odio. Integer aliquam pretium augue. Vivamus nonummy, dolor vel viverra rutrum, lacus dui congue pede, vel sodales dui diam nec libero. Morbi et leo sit amet quam sollicitudin laoreet. Vivamus suscipit.
Duis arcu eros, iaculis ut, vehicula in, elementum a, sapien. Phasellus ut tellus. Integer feugiat nunc eget odio. Morbi accumsan nonummy ipsum. Donec condimentum, tortor non faucibus luctus, neque mi mollis magna, nec gravida risus elit nec ipsum. Donec nec sem. Maecenas varius libero quis diam. Curabitur pulvinar. Morbi at sem eget mauris tempor vulputate. Aenean eget turpis.
'''
class TestWindow(window.Window):
def __init__(self, *args, **kwargs):
super(TestWindow, self).__init__(*args, **kwargs)
self.batch = graphics.Batch()
self.document = text.decode_text(doctext)
self.margin = 2
self.layout = layout.IncrementalTextLayout(self.document,
self.width - self.margin * 2, self.height - self.margin * 2,
multiline=True,
batch=self.batch)
self.caret = caret.Caret(self.layout)
self.push_handlers(self.caret)
self.set_mouse_cursor(self.get_system_mouse_cursor('text'))
def on_resize(self, width, height):
super(TestWindow, self).on_resize(width, height)
self.layout.begin_update()
self.layout.x = self.margin
self.layout.y = self.margin
self.layout.width = width - self.margin * 2
self.layout.height = height - self.margin * 2
self.layout.end_update()
def on_mouse_scroll(self, x, y, scroll_x, scroll_y):
self.layout.view_x -= scroll_x
self.layout.view_y += scroll_y * 16
def on_draw(self):
gl.glClearColor(1, 1, 1, 1)
self.clear()
self.batch.draw()
def on_key_press(self, symbol, modifiers):
super(TestWindow, self).on_key_press(symbol, modifiers)
if symbol == key.TAB:
self.caret.on_text('\t')
class TestCase(unittest.TestCase):
def test(self):
self.window = TestWindow(resizable=True, visible=False)
self.window.set_visible()
app.run()
if __name__ == '__main__':
unittest.main()
| bsd-3-clause |
kenshay/ImageScripter | Script_Runner/PYTHON/Lib/site-packages/cryptography/hazmat/backends/openssl/rsa.py | 2 | 17999 | # This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function
import math
from cryptography import utils
from cryptography.exceptions import (
InvalidSignature, UnsupportedAlgorithm, _Reasons
)
from cryptography.hazmat.backends.openssl.utils import (
_calculate_digest_and_algorithm, _check_not_prehashed,
_warn_sign_verify_deprecated
)
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import (
AsymmetricSignatureContext, AsymmetricVerificationContext, rsa
)
from cryptography.hazmat.primitives.asymmetric.padding import (
AsymmetricPadding, MGF1, OAEP, PKCS1v15, PSS, calculate_max_pss_salt_length
)
from cryptography.hazmat.primitives.asymmetric.rsa import (
RSAPrivateKeyWithSerialization, RSAPublicKeyWithSerialization
)
def _get_rsa_pss_salt_length(pss, key, hash_algorithm):
salt = pss._salt_length
if salt is MGF1.MAX_LENGTH or salt is PSS.MAX_LENGTH:
return calculate_max_pss_salt_length(key, hash_algorithm)
else:
return salt
def _enc_dec_rsa(backend, key, data, padding):
if not isinstance(padding, AsymmetricPadding):
raise TypeError("Padding must be an instance of AsymmetricPadding.")
if isinstance(padding, PKCS1v15):
padding_enum = backend._lib.RSA_PKCS1_PADDING
elif isinstance(padding, OAEP):
padding_enum = backend._lib.RSA_PKCS1_OAEP_PADDING
if not isinstance(padding._mgf, MGF1):
raise UnsupportedAlgorithm(
"Only MGF1 is supported by this backend.",
_Reasons.UNSUPPORTED_MGF
)
if not backend.rsa_padding_supported(padding):
raise UnsupportedAlgorithm(
"This combination of padding and hash algorithm is not "
"supported by this backend.",
_Reasons.UNSUPPORTED_PADDING
)
else:
raise UnsupportedAlgorithm(
"{0} is not supported by this backend.".format(
padding.name
),
_Reasons.UNSUPPORTED_PADDING
)
return _enc_dec_rsa_pkey_ctx(backend, key, data, padding_enum, padding)
def _enc_dec_rsa_pkey_ctx(backend, key, data, padding_enum, padding):
if isinstance(key, _RSAPublicKey):
init = backend._lib.EVP_PKEY_encrypt_init
crypt = backend._lib.EVP_PKEY_encrypt
else:
init = backend._lib.EVP_PKEY_decrypt_init
crypt = backend._lib.EVP_PKEY_decrypt
pkey_ctx = backend._lib.EVP_PKEY_CTX_new(
key._evp_pkey, backend._ffi.NULL
)
backend.openssl_assert(pkey_ctx != backend._ffi.NULL)
pkey_ctx = backend._ffi.gc(pkey_ctx, backend._lib.EVP_PKEY_CTX_free)
res = init(pkey_ctx)
backend.openssl_assert(res == 1)
res = backend._lib.EVP_PKEY_CTX_set_rsa_padding(
pkey_ctx, padding_enum)
backend.openssl_assert(res > 0)
buf_size = backend._lib.EVP_PKEY_size(key._evp_pkey)
backend.openssl_assert(buf_size > 0)
if (
isinstance(padding, OAEP) and
backend._lib.Cryptography_HAS_RSA_OAEP_MD
):
mgf1_md = backend._evp_md_non_null_from_algorithm(
padding._mgf._algorithm)
res = backend._lib.EVP_PKEY_CTX_set_rsa_mgf1_md(pkey_ctx, mgf1_md)
backend.openssl_assert(res > 0)
oaep_md = backend._evp_md_non_null_from_algorithm(padding._algorithm)
res = backend._lib.EVP_PKEY_CTX_set_rsa_oaep_md(pkey_ctx, oaep_md)
backend.openssl_assert(res > 0)
if (
isinstance(padding, OAEP) and
padding._label is not None and
len(padding._label) > 0
):
# set0_rsa_oaep_label takes ownership of the char * so we need to
# copy it into some new memory
labelptr = backend._lib.OPENSSL_malloc(len(padding._label))
backend.openssl_assert(labelptr != backend._ffi.NULL)
backend._ffi.memmove(labelptr, padding._label, len(padding._label))
res = backend._lib.EVP_PKEY_CTX_set0_rsa_oaep_label(
pkey_ctx, labelptr, len(padding._label)
)
backend.openssl_assert(res == 1)
outlen = backend._ffi.new("size_t *", buf_size)
buf = backend._ffi.new("unsigned char[]", buf_size)
res = crypt(pkey_ctx, buf, outlen, data, len(data))
if res <= 0:
_handle_rsa_enc_dec_error(backend, key)
return backend._ffi.buffer(buf)[:outlen[0]]
def _handle_rsa_enc_dec_error(backend, key):
errors = backend._consume_errors()
backend.openssl_assert(errors)
assert errors[0].lib == backend._lib.ERR_LIB_RSA
if isinstance(key, _RSAPublicKey):
assert (errors[0].reason ==
backend._lib.RSA_R_DATA_TOO_LARGE_FOR_KEY_SIZE)
raise ValueError(
"Data too long for key size. Encrypt less data or use a "
"larger key size."
)
else:
decoding_errors = [
backend._lib.RSA_R_BLOCK_TYPE_IS_NOT_01,
backend._lib.RSA_R_BLOCK_TYPE_IS_NOT_02,
backend._lib.RSA_R_OAEP_DECODING_ERROR,
# Though this error looks similar to the
# RSA_R_DATA_TOO_LARGE_FOR_KEY_SIZE, this occurs on decrypts,
# rather than on encrypts
backend._lib.RSA_R_DATA_TOO_LARGE_FOR_MODULUS,
]
if backend._lib.Cryptography_HAS_RSA_R_PKCS_DECODING_ERROR:
decoding_errors.append(backend._lib.RSA_R_PKCS_DECODING_ERROR)
assert errors[0].reason in decoding_errors
raise ValueError("Decryption failed.")
def _rsa_sig_determine_padding(backend, key, padding, algorithm):
if not isinstance(padding, AsymmetricPadding):
raise TypeError("Expected provider of AsymmetricPadding.")
pkey_size = backend._lib.EVP_PKEY_size(key._evp_pkey)
backend.openssl_assert(pkey_size > 0)
if isinstance(padding, PKCS1v15):
padding_enum = backend._lib.RSA_PKCS1_PADDING
elif isinstance(padding, PSS):
if not isinstance(padding._mgf, MGF1):
raise UnsupportedAlgorithm(
"Only MGF1 is supported by this backend.",
_Reasons.UNSUPPORTED_MGF
)
# Size of key in bytes - 2 is the maximum
# PSS signature length (salt length is checked later)
if pkey_size - algorithm.digest_size - 2 < 0:
raise ValueError("Digest too large for key size. Use a larger "
"key or different digest.")
padding_enum = backend._lib.RSA_PKCS1_PSS_PADDING
else:
raise UnsupportedAlgorithm(
"{0} is not supported by this backend.".format(padding.name),
_Reasons.UNSUPPORTED_PADDING
)
return padding_enum
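The PSS guard above rejects a digest when `pkey_size - algorithm.digest_size - 2 < 0`. The arithmetic is easy to check numerically (the `- 2` hedges the fixed overhead bytes of the EMSA-PSS encoding; salt length is validated separately, as the comment notes):

```python
# Numeric illustration of the PSS size guard in _rsa_sig_determine_padding:
# a digest fits only if key_bytes - digest_size - 2 >= 0.
def digest_fits(key_bits, digest_size):
    key_bytes = key_bits // 8  # EVP_PKEY_size for an RSA key is the modulus size in bytes
    return key_bytes - digest_size - 2 >= 0

print(digest_fits(2048, 32))  # SHA-256 under a 2048-bit key: fits
print(digest_fits(512, 64))   # SHA-512 under a 512-bit key: rejected
```

So a 2048-bit key leaves 256 - 32 - 2 = 222 bytes of headroom for SHA-256, while a 512-bit key cannot even hold a SHA-512 digest (64 - 64 - 2 = -2), which is exactly the case that raises "Digest too large for key size."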
def _rsa_sig_setup(backend, padding, algorithm, key, data, init_func):
padding_enum = _rsa_sig_determine_padding(backend, key, padding, algorithm)
evp_md = backend._evp_md_non_null_from_algorithm(algorithm)
pkey_ctx = backend._lib.EVP_PKEY_CTX_new(key._evp_pkey, backend._ffi.NULL)
backend.openssl_assert(pkey_ctx != backend._ffi.NULL)
pkey_ctx = backend._ffi.gc(pkey_ctx, backend._lib.EVP_PKEY_CTX_free)
res = init_func(pkey_ctx)
backend.openssl_assert(res == 1)
res = backend._lib.EVP_PKEY_CTX_set_signature_md(pkey_ctx, evp_md)
if res == 0:
backend._consume_errors()
raise UnsupportedAlgorithm(
"{0} is not supported by this backend for RSA signing.".format(
algorithm.name
),
_Reasons.UNSUPPORTED_HASH
)
res = backend._lib.EVP_PKEY_CTX_set_rsa_padding(pkey_ctx, padding_enum)
backend.openssl_assert(res > 0)
if isinstance(padding, PSS):
res = backend._lib.EVP_PKEY_CTX_set_rsa_pss_saltlen(
pkey_ctx, _get_rsa_pss_salt_length(padding, key, algorithm)
)
backend.openssl_assert(res > 0)
mgf1_md = backend._evp_md_non_null_from_algorithm(
padding._mgf._algorithm)
res = backend._lib.EVP_PKEY_CTX_set_rsa_mgf1_md(pkey_ctx, mgf1_md)
backend.openssl_assert(res > 0)
return pkey_ctx
def _rsa_sig_sign(backend, padding, algorithm, private_key, data):
pkey_ctx = _rsa_sig_setup(
backend, padding, algorithm, private_key, data,
backend._lib.EVP_PKEY_sign_init
)
buflen = backend._ffi.new("size_t *")
res = backend._lib.EVP_PKEY_sign(
pkey_ctx,
backend._ffi.NULL,
buflen,
data,
len(data)
)
backend.openssl_assert(res == 1)
buf = backend._ffi.new("unsigned char[]", buflen[0])
res = backend._lib.EVP_PKEY_sign(
pkey_ctx, buf, buflen, data, len(data))
if res != 1:
errors = backend._consume_errors()
assert errors[0].lib == backend._lib.ERR_LIB_RSA
reason = None
if (errors[0].reason ==
backend._lib.RSA_R_DATA_TOO_LARGE_FOR_KEY_SIZE):
reason = ("Salt length too long for key size. Try using "
"MAX_LENGTH instead.")
else:
assert (errors[0].reason ==
backend._lib.RSA_R_DIGEST_TOO_BIG_FOR_RSA_KEY)
reason = "Digest too large for key size. Use a larger key."
assert reason is not None
raise ValueError(reason)
return backend._ffi.buffer(buf)[:]
def _rsa_sig_verify(backend, padding, algorithm, public_key, signature, data):
pkey_ctx = _rsa_sig_setup(
backend, padding, algorithm, public_key, data,
backend._lib.EVP_PKEY_verify_init
)
res = backend._lib.EVP_PKEY_verify(
pkey_ctx, signature, len(signature), data, len(data)
)
# The previous call can return negative numbers in the event of an
# error. This is not a signature failure but we need to fail if it
# occurs.
backend.openssl_assert(res >= 0)
if res == 0:
backend._consume_errors()
raise InvalidSignature
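For intuition about what `EVP_PKEY_sign`/`EVP_PKEY_verify` ultimately compute, here is a toy textbook-RSA sketch — no padding, insecure, purely illustrative, and not what the OpenSSL calls above actually do bit-for-bit:

```python
# Tiny textbook RSA: sign = digest^d mod n, verify = sig^e mod n.
p, q = 61, 53
n = p * q                          # modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (needs Python 3.8+)

digest = 65                        # stand-in for a hashed message
sig = pow(digest, d, n)            # "sign" with the private exponent
assert pow(sig, e, n) == digest    # "verify" recovers the digest
```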
@utils.register_interface(AsymmetricSignatureContext)
class _RSASignatureContext(object):
def __init__(self, backend, private_key, padding, algorithm):
self._backend = backend
self._private_key = private_key
# We now call _rsa_sig_determine_padding in _rsa_sig_setup. However
# we need to make a pointless call to it here so we maintain the
# API of erroring on init with this context if the values are invalid.
_rsa_sig_determine_padding(backend, private_key, padding, algorithm)
self._padding = padding
self._algorithm = algorithm
self._hash_ctx = hashes.Hash(self._algorithm, self._backend)
def update(self, data):
self._hash_ctx.update(data)
def finalize(self):
return _rsa_sig_sign(
self._backend,
self._padding,
self._algorithm,
self._private_key,
self._hash_ctx.finalize()
)
@utils.register_interface(AsymmetricVerificationContext)
class _RSAVerificationContext(object):
def __init__(self, backend, public_key, signature, padding, algorithm):
self._backend = backend
self._public_key = public_key
self._signature = signature
self._padding = padding
# We now call _rsa_sig_determine_padding in _rsa_sig_setup. However
# we need to make a pointless call to it here so we maintain the
# API of erroring on init with this context if the values are invalid.
_rsa_sig_determine_padding(backend, public_key, padding, algorithm)
self._algorithm = algorithm
self._hash_ctx = hashes.Hash(self._algorithm, self._backend)
def update(self, data):
self._hash_ctx.update(data)
def verify(self):
return _rsa_sig_verify(
self._backend,
self._padding,
self._algorithm,
self._public_key,
self._signature,
self._hash_ctx.finalize()
)
@utils.register_interface(RSAPrivateKeyWithSerialization)
class _RSAPrivateKey(object):
def __init__(self, backend, rsa_cdata, evp_pkey):
self._backend = backend
self._rsa_cdata = rsa_cdata
self._evp_pkey = evp_pkey
n = self._backend._ffi.new("BIGNUM **")
self._backend._lib.RSA_get0_key(
self._rsa_cdata, n, self._backend._ffi.NULL,
self._backend._ffi.NULL
)
self._backend.openssl_assert(n[0] != self._backend._ffi.NULL)
self._key_size = self._backend._lib.BN_num_bits(n[0])
key_size = utils.read_only_property("_key_size")
def signer(self, padding, algorithm):
_warn_sign_verify_deprecated()
_check_not_prehashed(algorithm)
return _RSASignatureContext(self._backend, self, padding, algorithm)
def decrypt(self, ciphertext, padding):
key_size_bytes = int(math.ceil(self.key_size / 8.0))
if key_size_bytes != len(ciphertext):
raise ValueError("Ciphertext length must be equal to key size.")
return _enc_dec_rsa(self._backend, self, ciphertext, padding)
def public_key(self):
ctx = self._backend._lib.RSAPublicKey_dup(self._rsa_cdata)
self._backend.openssl_assert(ctx != self._backend._ffi.NULL)
ctx = self._backend._ffi.gc(ctx, self._backend._lib.RSA_free)
res = self._backend._lib.RSA_blinding_on(ctx, self._backend._ffi.NULL)
self._backend.openssl_assert(res == 1)
evp_pkey = self._backend._rsa_cdata_to_evp_pkey(ctx)
return _RSAPublicKey(self._backend, ctx, evp_pkey)
def private_numbers(self):
n = self._backend._ffi.new("BIGNUM **")
e = self._backend._ffi.new("BIGNUM **")
d = self._backend._ffi.new("BIGNUM **")
p = self._backend._ffi.new("BIGNUM **")
q = self._backend._ffi.new("BIGNUM **")
dmp1 = self._backend._ffi.new("BIGNUM **")
dmq1 = self._backend._ffi.new("BIGNUM **")
iqmp = self._backend._ffi.new("BIGNUM **")
self._backend._lib.RSA_get0_key(self._rsa_cdata, n, e, d)
self._backend.openssl_assert(n[0] != self._backend._ffi.NULL)
self._backend.openssl_assert(e[0] != self._backend._ffi.NULL)
self._backend.openssl_assert(d[0] != self._backend._ffi.NULL)
self._backend._lib.RSA_get0_factors(self._rsa_cdata, p, q)
self._backend.openssl_assert(p[0] != self._backend._ffi.NULL)
self._backend.openssl_assert(q[0] != self._backend._ffi.NULL)
self._backend._lib.RSA_get0_crt_params(
self._rsa_cdata, dmp1, dmq1, iqmp
)
self._backend.openssl_assert(dmp1[0] != self._backend._ffi.NULL)
self._backend.openssl_assert(dmq1[0] != self._backend._ffi.NULL)
self._backend.openssl_assert(iqmp[0] != self._backend._ffi.NULL)
return rsa.RSAPrivateNumbers(
p=self._backend._bn_to_int(p[0]),
q=self._backend._bn_to_int(q[0]),
d=self._backend._bn_to_int(d[0]),
dmp1=self._backend._bn_to_int(dmp1[0]),
dmq1=self._backend._bn_to_int(dmq1[0]),
iqmp=self._backend._bn_to_int(iqmp[0]),
public_numbers=rsa.RSAPublicNumbers(
e=self._backend._bn_to_int(e[0]),
n=self._backend._bn_to_int(n[0]),
)
)
def private_bytes(self, encoding, format, encryption_algorithm):
return self._backend._private_key_bytes(
encoding,
format,
encryption_algorithm,
self._evp_pkey,
self._rsa_cdata
)
def sign(self, data, padding, algorithm):
data, algorithm = _calculate_digest_and_algorithm(
self._backend, data, algorithm
)
return _rsa_sig_sign(self._backend, padding, algorithm, self, data)
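The `decrypt` length check in `_RSAPrivateKey` above boils down to rounding the key size up to whole bytes; a minimal sketch of the same arithmetic:

```python
import math

def expected_ciphertext_len(key_size_bits):
    # Ciphertext must be exactly the key size, rounded up to bytes.
    return int(math.ceil(key_size_bits / 8.0))

assert expected_ciphertext_len(2048) == 256
assert expected_ciphertext_len(2049) == 257   # a partial byte rounds up
```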
@utils.register_interface(RSAPublicKeyWithSerialization)
class _RSAPublicKey(object):
def __init__(self, backend, rsa_cdata, evp_pkey):
self._backend = backend
self._rsa_cdata = rsa_cdata
self._evp_pkey = evp_pkey
n = self._backend._ffi.new("BIGNUM **")
self._backend._lib.RSA_get0_key(
self._rsa_cdata, n, self._backend._ffi.NULL,
self._backend._ffi.NULL
)
self._backend.openssl_assert(n[0] != self._backend._ffi.NULL)
self._key_size = self._backend._lib.BN_num_bits(n[0])
key_size = utils.read_only_property("_key_size")
def verifier(self, signature, padding, algorithm):
_warn_sign_verify_deprecated()
if not isinstance(signature, bytes):
raise TypeError("signature must be bytes.")
_check_not_prehashed(algorithm)
return _RSAVerificationContext(
self._backend, self, signature, padding, algorithm
)
def encrypt(self, plaintext, padding):
return _enc_dec_rsa(self._backend, self, plaintext, padding)
def public_numbers(self):
n = self._backend._ffi.new("BIGNUM **")
e = self._backend._ffi.new("BIGNUM **")
self._backend._lib.RSA_get0_key(
self._rsa_cdata, n, e, self._backend._ffi.NULL
)
self._backend.openssl_assert(n[0] != self._backend._ffi.NULL)
self._backend.openssl_assert(e[0] != self._backend._ffi.NULL)
return rsa.RSAPublicNumbers(
e=self._backend._bn_to_int(e[0]),
n=self._backend._bn_to_int(n[0]),
)
def public_bytes(self, encoding, format):
return self._backend._public_key_bytes(
encoding,
format,
self,
self._evp_pkey,
self._rsa_cdata
)
def verify(self, signature, data, padding, algorithm):
data, algorithm = _calculate_digest_and_algorithm(
self._backend, data, algorithm
)
return _rsa_sig_verify(
self._backend, padding, algorithm, self, signature, data
)
| gpl-3.0 |
praekelt/familyconnect-registration | changes/views.py | 1 | 1910 | import django_filters
from .models import Source, Change
from rest_framework import viewsets, mixins, generics, filters
from rest_framework.permissions import IsAuthenticated
from .serializers import ChangeSerializer
class ChangePost(mixins.CreateModelMixin, generics.GenericAPIView):
permission_classes = (IsAuthenticated,)
queryset = Change.objects.all()
serializer_class = ChangeSerializer
def post(self, request, *args, **kwargs):
# load the users sources - posting users should only have one source
source = Source.objects.get(user=self.request.user)
request.data["source"] = source.id
return self.create(request, *args, **kwargs)
# TODO make this work in test harness, works in production
# def perform_create(self, serializer):
# serializer.save(created_by=self.request.user,
# updated_by=self.request.user)
# def perform_update(self, serializer):
# serializer.save(updated_by=self.request.user)
class ChangeFilter(filters.FilterSet):
"""Filter for changes created, using ISO 8601 formatted dates"""
created_before = django_filters.IsoDateTimeFilter(name="created_at",
lookup_type="lte")
created_after = django_filters.IsoDateTimeFilter(name="created_at",
lookup_type="gte")
class Meta:
model = Change
fields = ['action', 'mother_id', 'validated', 'source',
'created_before', 'created_after']
class ChangeGetViewSet(viewsets.ReadOnlyModelViewSet):
"""
API endpoint that allows Changes to be viewed.
"""
permission_classes = (IsAuthenticated,)
queryset = Change.objects.all()
serializer_class = ChangeSerializer
filter_class = ChangeFilter
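The `created_before`/`created_after` filters above implement a simple range test on `created_at`. A stdlib-only sketch of the same idea, with hypothetical data and no Django required:

```python
from datetime import datetime

changes = [
    {"id": 1, "created_at": datetime.fromisoformat("2016-01-15T10:00:00")},
    {"id": 2, "created_at": datetime.fromisoformat("2016-03-01T10:00:00")},
]

def filter_created(items, created_after=None, created_before=None):
    # Keep items whose created_at falls inside the (optional) bounds,
    # mirroring the gte/lte lookups in ChangeFilter.
    out = []
    for item in items:
        if created_after and item["created_at"] < created_after:
            continue
        if created_before and item["created_at"] > created_before:
            continue
        out.append(item)
    return out

hits = filter_created(
    changes, created_after=datetime.fromisoformat("2016-02-01T00:00:00"))
assert [c["id"] for c in hits] == [2]
```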
| bsd-3-clause |
has2k1/numpy | numpy/core/tests/test_memmap.py | 64 | 4296 | from __future__ import division, absolute_import, print_function
import sys
import os
import shutil
from tempfile import NamedTemporaryFile, TemporaryFile, mktemp, mkdtemp
from numpy import memmap
from numpy import arange, allclose, asarray
from numpy.testing import (
TestCase, run_module_suite, assert_, assert_equal, assert_array_equal,
dec
)
class TestMemmap(TestCase):
def setUp(self):
self.tmpfp = NamedTemporaryFile(prefix='mmap')
self.tempdir = mkdtemp()
self.shape = (3, 4)
self.dtype = 'float32'
self.data = arange(12, dtype=self.dtype)
self.data.resize(self.shape)
def tearDown(self):
self.tmpfp.close()
shutil.rmtree(self.tempdir)
def test_roundtrip(self):
# Write data to file
fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+',
shape=self.shape)
fp[:] = self.data[:]
del fp # Test __del__ machinery, which handles cleanup
# Read data back from file
newfp = memmap(self.tmpfp, dtype=self.dtype, mode='r',
shape=self.shape)
assert_(allclose(self.data, newfp))
assert_array_equal(self.data, newfp)
def test_open_with_filename(self):
tmpname = mktemp('', 'mmap', dir=self.tempdir)
fp = memmap(tmpname, dtype=self.dtype, mode='w+',
shape=self.shape)
fp[:] = self.data[:]
del fp
def test_unnamed_file(self):
with TemporaryFile() as f:
fp = memmap(f, dtype=self.dtype, shape=self.shape)
del fp
def test_attributes(self):
offset = 1
mode = "w+"
fp = memmap(self.tmpfp, dtype=self.dtype, mode=mode,
shape=self.shape, offset=offset)
self.assertEqual(offset, fp.offset)
self.assertEqual(mode, fp.mode)
del fp
def test_filename(self):
tmpname = mktemp('', 'mmap', dir=self.tempdir)
fp = memmap(tmpname, dtype=self.dtype, mode='w+',
shape=self.shape)
abspath = os.path.abspath(tmpname)
fp[:] = self.data[:]
self.assertEqual(abspath, fp.filename)
b = fp[:1]
self.assertEqual(abspath, b.filename)
del b
del fp
def test_filename_fileobj(self):
fp = memmap(self.tmpfp, dtype=self.dtype, mode="w+",
shape=self.shape)
self.assertEqual(fp.filename, self.tmpfp.name)
@dec.knownfailureif(sys.platform == 'gnu0', "This test is known to fail on hurd")
def test_flush(self):
fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+',
shape=self.shape)
fp[:] = self.data[:]
assert_equal(fp[0], self.data[0])
fp.flush()
def test_del(self):
# Make sure a view does not delete the underlying mmap
fp_base = memmap(self.tmpfp, dtype=self.dtype, mode='w+',
shape=self.shape)
fp_base[0] = 5
fp_view = fp_base[0:1]
assert_equal(fp_view[0], 5)
del fp_view
# Should still be able to access and assign values after
# deleting the view
assert_equal(fp_base[0], 5)
fp_base[0] = 6
assert_equal(fp_base[0], 6)
def test_arithmetic_drops_references(self):
fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+',
shape=self.shape)
tmp = (fp + 10)
if isinstance(tmp, memmap):
assert tmp._mmap is not fp._mmap
def test_indexing_drops_references(self):
fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+',
shape=self.shape)
tmp = fp[[(1, 2), (2, 3)]]
if isinstance(tmp, memmap):
assert tmp._mmap is not fp._mmap
def test_slicing_keeps_references(self):
fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+',
shape=self.shape)
assert fp[:2, :2]._mmap is fp._mmap
def test_view(self):
fp = memmap(self.tmpfp, dtype=self.dtype, shape=self.shape)
new1 = fp.view()
new2 = new1.view()
assert(new1.base is fp)
assert(new2.base is fp)
new_array = asarray(fp)
assert(new_array.base is fp)
if __name__ == "__main__":
run_module_suite()
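The roundtrip test above can be mimicked with only the standard library — the same write/reopen/compare pattern, using `array` in place of `numpy.memmap` (a simplified sketch, not a memory-mapped view):

```python
import array
import os
import tempfile

data = array.array('f', range(12))   # 12 float32 values, like the test fixture

# Write the raw bytes out to a named temporary file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data.tobytes())
    name = f.name

# Read them back and compare, as test_roundtrip does with memmap.
readback = array.array('f')
with open(name, 'rb') as f:
    readback.frombytes(f.read())
os.unlink(name)

assert list(readback) == [float(i) for i in range(12)]
```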
| bsd-3-clause |
monikasulik/django-oscar | src/oscar/apps/catalogue/receivers.py | 60 | 1249 | # -*- coding: utf-8 -*-
from django.conf import settings
if settings.OSCAR_DELETE_IMAGE_FILES:
from oscar.core.loading import get_model
from django.db import models
from django.db.models.signals import post_delete
from sorl import thumbnail
from sorl.thumbnail.helpers import ThumbnailError
ProductImage = get_model('catalogue', 'ProductImage')
Category = get_model('catalogue', 'Category')
def delete_image_files(sender, instance, **kwargs):
"""
Deletes the original image, created thumbnails, and any entries
in sorl's key-value store.
"""
image_fields = (models.ImageField, thumbnail.ImageField)
for field in instance._meta.fields:
if isinstance(field, image_fields):
# Make Django return ImageFieldFile instead of ImageField
fieldfile = getattr(instance, field.name)
try:
thumbnail.delete(fieldfile)
except ThumbnailError:
pass
# connect for all models with ImageFields - add as needed
models_with_images = [ProductImage, Category]
for sender in models_with_images:
post_delete.connect(delete_image_files, sender=sender)
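The `post_delete.connect` wiring above is an instance of the observer pattern. A minimal framework-free sketch of the same dispatch (hypothetical names, not Django's actual signal implementation):

```python
class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver, sender=None):
        # sender=None means "receive from any sender", like Django.
        self._receivers.append((receiver, sender))

    def send(self, sender, instance):
        for receiver, wanted in self._receivers:
            if wanted is None or wanted is sender:
                receiver(sender, instance)

post_delete = Signal()
deleted = []
post_delete.connect(lambda sender, instance: deleted.append(instance),
                    sender="ProductImage")
post_delete.send("ProductImage", instance="img-1")
post_delete.send("Category", instance="cat-1")   # no receiver for this sender
assert deleted == ["img-1"]
```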
| bsd-3-clause |
jabesq/home-assistant | homeassistant/components/raspihats/binary_sensor.py | 7 | 3835 | """Support for raspihats board binary sensors."""
import logging
import voluptuous as vol
from homeassistant.components.binary_sensor import (
PLATFORM_SCHEMA, BinarySensorDevice)
from homeassistant.const import (
CONF_ADDRESS, CONF_DEVICE_CLASS, CONF_NAME, DEVICE_DEFAULT_NAME)
import homeassistant.helpers.config_validation as cv
from . import (
CONF_BOARD, CONF_CHANNELS, CONF_I2C_HATS, CONF_INDEX, CONF_INVERT_LOGIC,
I2C_HAT_NAMES, I2C_HATS_MANAGER, I2CHatsException)
_LOGGER = logging.getLogger(__name__)
DEFAULT_INVERT_LOGIC = False
DEFAULT_DEVICE_CLASS = None
_CHANNELS_SCHEMA = vol.Schema([{
vol.Required(CONF_INDEX): cv.positive_int,
vol.Required(CONF_NAME): cv.string,
vol.Optional(CONF_INVERT_LOGIC, default=DEFAULT_INVERT_LOGIC): cv.boolean,
vol.Optional(CONF_DEVICE_CLASS, default=DEFAULT_DEVICE_CLASS): cv.string,
}])
_I2C_HATS_SCHEMA = vol.Schema([{
vol.Required(CONF_BOARD): vol.In(I2C_HAT_NAMES),
vol.Required(CONF_ADDRESS): vol.Coerce(int),
vol.Required(CONF_CHANNELS): _CHANNELS_SCHEMA,
}])
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend({
vol.Optional(CONF_I2C_HATS): _I2C_HATS_SCHEMA,
})
def setup_platform(hass, config, add_entities, discovery_info=None):
"""Set up the raspihats binary_sensor devices."""
I2CHatBinarySensor.I2C_HATS_MANAGER = hass.data[I2C_HATS_MANAGER]
binary_sensors = []
i2c_hat_configs = config.get(CONF_I2C_HATS)
for i2c_hat_config in i2c_hat_configs:
address = i2c_hat_config[CONF_ADDRESS]
board = i2c_hat_config[CONF_BOARD]
try:
I2CHatBinarySensor.I2C_HATS_MANAGER.register_board(board, address)
for channel_config in i2c_hat_config[CONF_CHANNELS]:
binary_sensors.append(
I2CHatBinarySensor(
address,
channel_config[CONF_INDEX],
channel_config[CONF_NAME],
channel_config[CONF_INVERT_LOGIC],
channel_config[CONF_DEVICE_CLASS]
)
)
except I2CHatsException as ex:
_LOGGER.error("Failed to register %s I2CHat@%s %s",
board, hex(address), str(ex))
add_entities(binary_sensors)
class I2CHatBinarySensor(BinarySensorDevice):
"""Representation of a binary sensor that uses a I2C-HAT digital input."""
I2C_HATS_MANAGER = None
def __init__(self, address, channel, name, invert_logic, device_class):
"""Initialize the raspihats sensor."""
self._address = address
self._channel = channel
self._name = name or DEVICE_DEFAULT_NAME
self._invert_logic = invert_logic
self._device_class = device_class
self._state = self.I2C_HATS_MANAGER.read_di(
self._address, self._channel)
def online_callback():
"""Call fired when board is online."""
self.schedule_update_ha_state()
self.I2C_HATS_MANAGER.register_online_callback(
self._address, self._channel, online_callback)
def edge_callback(state):
"""Read digital input state."""
self._state = state
self.schedule_update_ha_state()
self.I2C_HATS_MANAGER.register_di_callback(
self._address, self._channel, edge_callback)
@property
def device_class(self):
"""Return the class of this sensor."""
return self._device_class
@property
def name(self):
"""Return the name of this sensor."""
return self._name
@property
def should_poll(self):
"""No polling needed for this sensor."""
return False
@property
def is_on(self):
"""Return the state of this sensor."""
return self._state != self._invert_logic
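The `is_on` property above reduces to a single XOR-style comparison between the raw input state and the inversion flag; a quick sketch:

```python
def is_on(state, invert_logic):
    # True when the raw input differs from the configured inversion flag.
    return state != invert_logic

assert is_on(True, False)        # normal logic: high input -> on
assert not is_on(True, True)     # inverted logic: high input -> off
assert is_on(False, True)        # inverted logic: low input -> on
```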
| apache-2.0 |
foer/linuxmuster-client-unity | tests/autopilot/unity/emulators/__init__.py | 2 | 1738 | # -*- Mode: Python; coding: utf-8; indent-tabs-mode: nil; tab-width: 4 -*-
# Copyright 2012 Canonical
# Author: Thomi Richards
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 3, as published
# by the Free Software Foundation.
#
"""A collection of Unity-specific emulators."""
from time import sleep
from autopilot.introspection import (
get_proxy_object_for_existing_process,
ProcessSearchError
)
from autopilot.introspection.dbus import CustomEmulatorBase
from autopilot.introspection.backends import DBusAddress
from dbus import DBusException
class UnityIntrospectionObject(CustomEmulatorBase):
DBUS_SERVICE = "com.canonical.Unity"
DBUS_OBJECT = "/com/canonical/Unity/Debug"
_Backend = DBusAddress.SessionBus(DBUS_SERVICE, DBUS_OBJECT)
def ensure_unity_is_running(timeout=300):
"""Poll the unity debug interface, and return when it's ready for use.
The default timeout is 300 seconds (5 minutes) to account for the case where
Unity has crashed and is taking a while to get restarted (on a slow VM for
example).
If, after the timeout period, unity is still not up, this function raises a
RuntimeError exception.
"""
    sleep_period = 10
for i in range(0, timeout, sleep_period):
try:
get_proxy_object_for_existing_process(
connection_name=UnityIntrospectionObject.DBUS_SERVICE,
object_path=UnityIntrospectionObject.DBUS_OBJECT
)
return True
except ProcessSearchError:
sleep(sleep_period)
raise RuntimeError("Unity debug interface is down after %d seconds of polling." % (timeout))
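`ensure_unity_is_running` is a poll-until-ready loop. The same pattern in a generic stdlib form (a sketch, not the autopilot API):

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.05):
    """Poll predicate() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    raise RuntimeError(
        "condition not met after %.1f seconds of polling" % timeout)

# The predicate succeeds on the third poll.
attempts = []
assert wait_until(lambda: attempts.append(1) or len(attempts) >= 3)
assert len(attempts) == 3
```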
| gpl-3.0 |
squidsoup/snapcraft | snapcraft/internal/cache/_cache.py | 5 | 1337 | # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
#
# Copyright (C) 2016 Canonical Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
from xdg import BaseDirectory
class SnapcraftCache:
"""Generic cache base class.
This class is responsible for cache location, notification and pruning.
"""
def __init__(self):
self.cache_root = os.path.join(
BaseDirectory.xdg_cache_home, 'snapcraft')
def cache(self):
raise NotImplementedError
def prune(self, *args, **kwargs):
raise NotImplementedError
class SnapcraftProjectCache(SnapcraftCache):
"""Project specific cache"""
def __init__(self, *, project_name):
super().__init__()
self.project_cache_root = os.path.join(
self.cache_root, project_name)
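The cache layout above nests a per-project directory under the XDG cache home. Without the `xdg` dependency, the same paths can be computed with `os` alone (a sketch; `"my-project"` is a placeholder name):

```python
import os

# BaseDirectory.xdg_cache_home is $XDG_CACHE_HOME, defaulting to ~/.cache.
xdg_cache_home = os.environ.get("XDG_CACHE_HOME") or os.path.expanduser("~/.cache")
cache_root = os.path.join(xdg_cache_home, "snapcraft")
project_cache_root = os.path.join(cache_root, "my-project")

assert project_cache_root.endswith(os.path.join("snapcraft", "my-project"))
```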
| gpl-3.0 |
GertBurger/pygame_cffi | pygame/_sdl_keys.py | 1 | 12801 | """Special separate FFI module for keyboard constants.
These are enums that take many long seconds to build and they don't change at
all, so having them in a separate FFI unit makes startup faster when we've
changed the cdef and have to rebuild.
"""
import cffi
ffi = cffi.FFI()
ffi.cdef("""
typedef enum {
SDLK_UNKNOWN,
SDLK_FIRST,
SDLK_BACKSPACE,
SDLK_TAB,
SDLK_CLEAR,
SDLK_RETURN,
SDLK_PAUSE,
SDLK_ESCAPE,
SDLK_SPACE,
SDLK_EXCLAIM,
SDLK_QUOTEDBL,
SDLK_HASH,
SDLK_DOLLAR,
SDLK_AMPERSAND,
SDLK_QUOTE,
SDLK_LEFTPAREN,
SDLK_RIGHTPAREN,
SDLK_ASTERISK,
SDLK_PLUS,
SDLK_COMMA,
SDLK_MINUS,
SDLK_PERIOD,
SDLK_SLASH,
SDLK_0,
SDLK_1,
SDLK_2,
SDLK_3,
SDLK_4,
SDLK_5,
SDLK_6,
SDLK_7,
SDLK_8,
SDLK_9,
SDLK_COLON,
SDLK_SEMICOLON,
SDLK_LESS,
SDLK_EQUALS,
SDLK_GREATER,
SDLK_QUESTION,
SDLK_AT,
SDLK_LEFTBRACKET,
SDLK_BACKSLASH,
SDLK_RIGHTBRACKET,
SDLK_CARET,
SDLK_UNDERSCORE,
SDLK_BACKQUOTE,
SDLK_a,
SDLK_b,
SDLK_c,
SDLK_d,
SDLK_e,
SDLK_f,
SDLK_g,
SDLK_h,
SDLK_i,
SDLK_j,
SDLK_k,
SDLK_l,
SDLK_m,
SDLK_n,
SDLK_o,
SDLK_p,
SDLK_q,
SDLK_r,
SDLK_s,
SDLK_t,
SDLK_u,
SDLK_v,
SDLK_w,
SDLK_x,
SDLK_y,
SDLK_z,
SDLK_DELETE,
SDLK_WORLD_0,
SDLK_WORLD_1,
SDLK_WORLD_2,
SDLK_WORLD_3,
SDLK_WORLD_4,
SDLK_WORLD_5,
SDLK_WORLD_6,
SDLK_WORLD_7,
SDLK_WORLD_8,
SDLK_WORLD_9,
SDLK_WORLD_10,
SDLK_WORLD_11,
SDLK_WORLD_12,
SDLK_WORLD_13,
SDLK_WORLD_14,
SDLK_WORLD_15,
SDLK_WORLD_16,
SDLK_WORLD_17,
SDLK_WORLD_18,
SDLK_WORLD_19,
SDLK_WORLD_20,
SDLK_WORLD_21,
SDLK_WORLD_22,
SDLK_WORLD_23,
SDLK_WORLD_24,
SDLK_WORLD_25,
SDLK_WORLD_26,
SDLK_WORLD_27,
SDLK_WORLD_28,
SDLK_WORLD_29,
SDLK_WORLD_30,
SDLK_WORLD_31,
SDLK_WORLD_32,
SDLK_WORLD_33,
SDLK_WORLD_34,
SDLK_WORLD_35,
SDLK_WORLD_36,
SDLK_WORLD_37,
SDLK_WORLD_38,
SDLK_WORLD_39,
SDLK_WORLD_40,
SDLK_WORLD_41,
SDLK_WORLD_42,
SDLK_WORLD_43,
SDLK_WORLD_44,
SDLK_WORLD_45,
SDLK_WORLD_46,
SDLK_WORLD_47,
SDLK_WORLD_48,
SDLK_WORLD_49,
SDLK_WORLD_50,
SDLK_WORLD_51,
SDLK_WORLD_52,
SDLK_WORLD_53,
SDLK_WORLD_54,
SDLK_WORLD_55,
SDLK_WORLD_56,
SDLK_WORLD_57,
SDLK_WORLD_58,
SDLK_WORLD_59,
SDLK_WORLD_60,
SDLK_WORLD_61,
SDLK_WORLD_62,
SDLK_WORLD_63,
SDLK_WORLD_64,
SDLK_WORLD_65,
SDLK_WORLD_66,
SDLK_WORLD_67,
SDLK_WORLD_68,
SDLK_WORLD_69,
SDLK_WORLD_70,
SDLK_WORLD_71,
SDLK_WORLD_72,
SDLK_WORLD_73,
SDLK_WORLD_74,
SDLK_WORLD_75,
SDLK_WORLD_76,
SDLK_WORLD_77,
SDLK_WORLD_78,
SDLK_WORLD_79,
SDLK_WORLD_80,
SDLK_WORLD_81,
SDLK_WORLD_82,
SDLK_WORLD_83,
SDLK_WORLD_84,
SDLK_WORLD_85,
SDLK_WORLD_86,
SDLK_WORLD_87,
SDLK_WORLD_88,
SDLK_WORLD_89,
SDLK_WORLD_90,
SDLK_WORLD_91,
SDLK_WORLD_92,
SDLK_WORLD_93,
SDLK_WORLD_94,
SDLK_WORLD_95,
SDLK_KP0,
SDLK_KP1,
SDLK_KP2,
SDLK_KP3,
SDLK_KP4,
SDLK_KP5,
SDLK_KP6,
SDLK_KP7,
SDLK_KP8,
SDLK_KP9,
SDLK_KP_PERIOD,
SDLK_KP_DIVIDE,
SDLK_KP_MULTIPLY,
SDLK_KP_MINUS,
SDLK_KP_PLUS,
SDLK_KP_ENTER,
SDLK_KP_EQUALS,
SDLK_UP,
SDLK_DOWN,
SDLK_RIGHT,
SDLK_LEFT,
SDLK_INSERT,
SDLK_HOME,
SDLK_END,
SDLK_PAGEUP,
SDLK_PAGEDOWN,
SDLK_F1,
SDLK_F2,
SDLK_F3,
SDLK_F4,
SDLK_F5,
SDLK_F6,
SDLK_F7,
SDLK_F8,
SDLK_F9,
SDLK_F10,
SDLK_F11,
SDLK_F12,
SDLK_F13,
SDLK_F14,
SDLK_F15,
SDLK_NUMLOCK,
SDLK_CAPSLOCK,
SDLK_SCROLLOCK,
SDLK_RSHIFT,
SDLK_LSHIFT,
SDLK_RCTRL,
SDLK_LCTRL,
SDLK_RALT,
SDLK_LALT,
SDLK_RMETA,
SDLK_LMETA,
SDLK_LSUPER,
SDLK_RSUPER,
SDLK_MODE,
SDLK_COMPOSE,
SDLK_HELP,
SDLK_PRINT,
SDLK_SYSREQ,
SDLK_BREAK,
SDLK_MENU,
SDLK_POWER,
SDLK_EURO,
SDLK_UNDO,
SDLK_LAST,
...
} SDLKey;
typedef enum {
KMOD_NONE,
KMOD_LSHIFT,
KMOD_RSHIFT,
KMOD_LCTRL,
KMOD_RCTRL,
KMOD_LALT,
KMOD_RALT,
KMOD_LMETA,
KMOD_RMETA,
KMOD_NUM,
KMOD_CAPS,
KMOD_MODE,
KMOD_RESERVED,
...
} SDLMod;
#define KMOD_CTRL ...
#define KMOD_SHIFT ...
#define KMOD_ALT ...
#define KMOD_META ...
""")
_sdl_keys = ffi.verify(
include_dirs=['/usr/include/SDL', '/usr/local/include/SDL'],
source="""
#include <SDL_keysym.h>
"""
)
K_UNKNOWN = _sdl_keys.SDLK_UNKNOWN
K_FIRST = _sdl_keys.SDLK_FIRST
K_BACKSPACE = _sdl_keys.SDLK_BACKSPACE
K_TAB = _sdl_keys.SDLK_TAB
K_CLEAR = _sdl_keys.SDLK_CLEAR
K_RETURN = _sdl_keys.SDLK_RETURN
K_PAUSE = _sdl_keys.SDLK_PAUSE
K_ESCAPE = _sdl_keys.SDLK_ESCAPE
K_SPACE = _sdl_keys.SDLK_SPACE
K_EXCLAIM = _sdl_keys.SDLK_EXCLAIM
K_QUOTEDBL = _sdl_keys.SDLK_QUOTEDBL
K_HASH = _sdl_keys.SDLK_HASH
K_DOLLAR = _sdl_keys.SDLK_DOLLAR
K_AMPERSAND = _sdl_keys.SDLK_AMPERSAND
K_QUOTE = _sdl_keys.SDLK_QUOTE
K_LEFTPAREN = _sdl_keys.SDLK_LEFTPAREN
K_RIGHTPAREN = _sdl_keys.SDLK_RIGHTPAREN
K_ASTERISK = _sdl_keys.SDLK_ASTERISK
K_PLUS = _sdl_keys.SDLK_PLUS
K_COMMA = _sdl_keys.SDLK_COMMA
K_MINUS = _sdl_keys.SDLK_MINUS
K_PERIOD = _sdl_keys.SDLK_PERIOD
K_SLASH = _sdl_keys.SDLK_SLASH
K_0 = _sdl_keys.SDLK_0
K_1 = _sdl_keys.SDLK_1
K_2 = _sdl_keys.SDLK_2
K_3 = _sdl_keys.SDLK_3
K_4 = _sdl_keys.SDLK_4
K_5 = _sdl_keys.SDLK_5
K_6 = _sdl_keys.SDLK_6
K_7 = _sdl_keys.SDLK_7
K_8 = _sdl_keys.SDLK_8
K_9 = _sdl_keys.SDLK_9
K_COLON = _sdl_keys.SDLK_COLON
K_SEMICOLON = _sdl_keys.SDLK_SEMICOLON
K_LESS = _sdl_keys.SDLK_LESS
K_EQUALS = _sdl_keys.SDLK_EQUALS
K_GREATER = _sdl_keys.SDLK_GREATER
K_QUESTION = _sdl_keys.SDLK_QUESTION
K_AT = _sdl_keys.SDLK_AT
K_LEFTBRACKET = _sdl_keys.SDLK_LEFTBRACKET
K_BACKSLASH = _sdl_keys.SDLK_BACKSLASH
K_RIGHTBRACKET = _sdl_keys.SDLK_RIGHTBRACKET
K_CARET = _sdl_keys.SDLK_CARET
K_UNDERSCORE = _sdl_keys.SDLK_UNDERSCORE
K_BACKQUOTE = _sdl_keys.SDLK_BACKQUOTE
K_a = _sdl_keys.SDLK_a
K_b = _sdl_keys.SDLK_b
K_c = _sdl_keys.SDLK_c
K_d = _sdl_keys.SDLK_d
K_e = _sdl_keys.SDLK_e
K_f = _sdl_keys.SDLK_f
K_g = _sdl_keys.SDLK_g
K_h = _sdl_keys.SDLK_h
K_i = _sdl_keys.SDLK_i
K_j = _sdl_keys.SDLK_j
K_k = _sdl_keys.SDLK_k
K_l = _sdl_keys.SDLK_l
K_m = _sdl_keys.SDLK_m
K_n = _sdl_keys.SDLK_n
K_o = _sdl_keys.SDLK_o
K_p = _sdl_keys.SDLK_p
K_q = _sdl_keys.SDLK_q
K_r = _sdl_keys.SDLK_r
K_s = _sdl_keys.SDLK_s
K_t = _sdl_keys.SDLK_t
K_u = _sdl_keys.SDLK_u
K_v = _sdl_keys.SDLK_v
K_w = _sdl_keys.SDLK_w
K_x = _sdl_keys.SDLK_x
K_y = _sdl_keys.SDLK_y
K_z = _sdl_keys.SDLK_z
K_DELETE = _sdl_keys.SDLK_DELETE
K_WORLD_0 = _sdl_keys.SDLK_WORLD_0
K_WORLD_1 = _sdl_keys.SDLK_WORLD_1
K_WORLD_2 = _sdl_keys.SDLK_WORLD_2
K_WORLD_3 = _sdl_keys.SDLK_WORLD_3
K_WORLD_4 = _sdl_keys.SDLK_WORLD_4
K_WORLD_5 = _sdl_keys.SDLK_WORLD_5
K_WORLD_6 = _sdl_keys.SDLK_WORLD_6
K_WORLD_7 = _sdl_keys.SDLK_WORLD_7
K_WORLD_8 = _sdl_keys.SDLK_WORLD_8
K_WORLD_9 = _sdl_keys.SDLK_WORLD_9
K_WORLD_10 = _sdl_keys.SDLK_WORLD_10
K_WORLD_11 = _sdl_keys.SDLK_WORLD_11
K_WORLD_12 = _sdl_keys.SDLK_WORLD_12
K_WORLD_13 = _sdl_keys.SDLK_WORLD_13
K_WORLD_14 = _sdl_keys.SDLK_WORLD_14
K_WORLD_15 = _sdl_keys.SDLK_WORLD_15
K_WORLD_16 = _sdl_keys.SDLK_WORLD_16
K_WORLD_17 = _sdl_keys.SDLK_WORLD_17
K_WORLD_18 = _sdl_keys.SDLK_WORLD_18
K_WORLD_19 = _sdl_keys.SDLK_WORLD_19
K_WORLD_20 = _sdl_keys.SDLK_WORLD_20
K_WORLD_21 = _sdl_keys.SDLK_WORLD_21
K_WORLD_22 = _sdl_keys.SDLK_WORLD_22
K_WORLD_23 = _sdl_keys.SDLK_WORLD_23
K_WORLD_24 = _sdl_keys.SDLK_WORLD_24
K_WORLD_25 = _sdl_keys.SDLK_WORLD_25
K_WORLD_26 = _sdl_keys.SDLK_WORLD_26
K_WORLD_27 = _sdl_keys.SDLK_WORLD_27
K_WORLD_28 = _sdl_keys.SDLK_WORLD_28
K_WORLD_29 = _sdl_keys.SDLK_WORLD_29
K_WORLD_30 = _sdl_keys.SDLK_WORLD_30
K_WORLD_31 = _sdl_keys.SDLK_WORLD_31
K_WORLD_32 = _sdl_keys.SDLK_WORLD_32
K_WORLD_33 = _sdl_keys.SDLK_WORLD_33
K_WORLD_34 = _sdl_keys.SDLK_WORLD_34
K_WORLD_35 = _sdl_keys.SDLK_WORLD_35
K_WORLD_36 = _sdl_keys.SDLK_WORLD_36
K_WORLD_37 = _sdl_keys.SDLK_WORLD_37
K_WORLD_38 = _sdl_keys.SDLK_WORLD_38
K_WORLD_39 = _sdl_keys.SDLK_WORLD_39
K_WORLD_40 = _sdl_keys.SDLK_WORLD_40
K_WORLD_41 = _sdl_keys.SDLK_WORLD_41
K_WORLD_42 = _sdl_keys.SDLK_WORLD_42
K_WORLD_43 = _sdl_keys.SDLK_WORLD_43
K_WORLD_44 = _sdl_keys.SDLK_WORLD_44
K_WORLD_45 = _sdl_keys.SDLK_WORLD_45
K_WORLD_46 = _sdl_keys.SDLK_WORLD_46
K_WORLD_47 = _sdl_keys.SDLK_WORLD_47
K_WORLD_48 = _sdl_keys.SDLK_WORLD_48
K_WORLD_49 = _sdl_keys.SDLK_WORLD_49
K_WORLD_50 = _sdl_keys.SDLK_WORLD_50
K_WORLD_51 = _sdl_keys.SDLK_WORLD_51
K_WORLD_52 = _sdl_keys.SDLK_WORLD_52
K_WORLD_53 = _sdl_keys.SDLK_WORLD_53
K_WORLD_54 = _sdl_keys.SDLK_WORLD_54
K_WORLD_55 = _sdl_keys.SDLK_WORLD_55
K_WORLD_56 = _sdl_keys.SDLK_WORLD_56
K_WORLD_57 = _sdl_keys.SDLK_WORLD_57
K_WORLD_58 = _sdl_keys.SDLK_WORLD_58
K_WORLD_59 = _sdl_keys.SDLK_WORLD_59
K_WORLD_60 = _sdl_keys.SDLK_WORLD_60
K_WORLD_61 = _sdl_keys.SDLK_WORLD_61
K_WORLD_62 = _sdl_keys.SDLK_WORLD_62
K_WORLD_63 = _sdl_keys.SDLK_WORLD_63
K_WORLD_64 = _sdl_keys.SDLK_WORLD_64
K_WORLD_65 = _sdl_keys.SDLK_WORLD_65
K_WORLD_66 = _sdl_keys.SDLK_WORLD_66
K_WORLD_67 = _sdl_keys.SDLK_WORLD_67
K_WORLD_68 = _sdl_keys.SDLK_WORLD_68
K_WORLD_69 = _sdl_keys.SDLK_WORLD_69
K_WORLD_70 = _sdl_keys.SDLK_WORLD_70
K_WORLD_71 = _sdl_keys.SDLK_WORLD_71
K_WORLD_72 = _sdl_keys.SDLK_WORLD_72
K_WORLD_73 = _sdl_keys.SDLK_WORLD_73
K_WORLD_74 = _sdl_keys.SDLK_WORLD_74
K_WORLD_75 = _sdl_keys.SDLK_WORLD_75
K_WORLD_76 = _sdl_keys.SDLK_WORLD_76
K_WORLD_77 = _sdl_keys.SDLK_WORLD_77
K_WORLD_78 = _sdl_keys.SDLK_WORLD_78
K_WORLD_79 = _sdl_keys.SDLK_WORLD_79
K_WORLD_80 = _sdl_keys.SDLK_WORLD_80
K_WORLD_81 = _sdl_keys.SDLK_WORLD_81
K_WORLD_82 = _sdl_keys.SDLK_WORLD_82
K_WORLD_83 = _sdl_keys.SDLK_WORLD_83
K_WORLD_84 = _sdl_keys.SDLK_WORLD_84
K_WORLD_85 = _sdl_keys.SDLK_WORLD_85
K_WORLD_86 = _sdl_keys.SDLK_WORLD_86
K_WORLD_87 = _sdl_keys.SDLK_WORLD_87
K_WORLD_88 = _sdl_keys.SDLK_WORLD_88
K_WORLD_89 = _sdl_keys.SDLK_WORLD_89
K_WORLD_90 = _sdl_keys.SDLK_WORLD_90
K_WORLD_91 = _sdl_keys.SDLK_WORLD_91
K_WORLD_92 = _sdl_keys.SDLK_WORLD_92
K_WORLD_93 = _sdl_keys.SDLK_WORLD_93
K_WORLD_94 = _sdl_keys.SDLK_WORLD_94
K_WORLD_95 = _sdl_keys.SDLK_WORLD_95
K_KP0 = _sdl_keys.SDLK_KP0
K_KP1 = _sdl_keys.SDLK_KP1
K_KP2 = _sdl_keys.SDLK_KP2
K_KP3 = _sdl_keys.SDLK_KP3
K_KP4 = _sdl_keys.SDLK_KP4
K_KP5 = _sdl_keys.SDLK_KP5
K_KP6 = _sdl_keys.SDLK_KP6
K_KP7 = _sdl_keys.SDLK_KP7
K_KP8 = _sdl_keys.SDLK_KP8
K_KP9 = _sdl_keys.SDLK_KP9
K_KP_PERIOD = _sdl_keys.SDLK_KP_PERIOD
K_KP_DIVIDE = _sdl_keys.SDLK_KP_DIVIDE
K_KP_MULTIPLY = _sdl_keys.SDLK_KP_MULTIPLY
K_KP_MINUS = _sdl_keys.SDLK_KP_MINUS
K_KP_PLUS = _sdl_keys.SDLK_KP_PLUS
K_KP_ENTER = _sdl_keys.SDLK_KP_ENTER
K_KP_EQUALS = _sdl_keys.SDLK_KP_EQUALS
K_UP = _sdl_keys.SDLK_UP
K_DOWN = _sdl_keys.SDLK_DOWN
K_RIGHT = _sdl_keys.SDLK_RIGHT
K_LEFT = _sdl_keys.SDLK_LEFT
K_INSERT = _sdl_keys.SDLK_INSERT
K_HOME = _sdl_keys.SDLK_HOME
K_END = _sdl_keys.SDLK_END
K_PAGEUP = _sdl_keys.SDLK_PAGEUP
K_PAGEDOWN = _sdl_keys.SDLK_PAGEDOWN
K_F1 = _sdl_keys.SDLK_F1
K_F2 = _sdl_keys.SDLK_F2
K_F3 = _sdl_keys.SDLK_F3
K_F4 = _sdl_keys.SDLK_F4
K_F5 = _sdl_keys.SDLK_F5
K_F6 = _sdl_keys.SDLK_F6
K_F7 = _sdl_keys.SDLK_F7
K_F8 = _sdl_keys.SDLK_F8
K_F9 = _sdl_keys.SDLK_F9
K_F10 = _sdl_keys.SDLK_F10
K_F11 = _sdl_keys.SDLK_F11
K_F12 = _sdl_keys.SDLK_F12
K_F13 = _sdl_keys.SDLK_F13
K_F14 = _sdl_keys.SDLK_F14
K_F15 = _sdl_keys.SDLK_F15
K_NUMLOCK = _sdl_keys.SDLK_NUMLOCK
K_CAPSLOCK = _sdl_keys.SDLK_CAPSLOCK
K_SCROLLOCK = _sdl_keys.SDLK_SCROLLOCK
K_RSHIFT = _sdl_keys.SDLK_RSHIFT
K_LSHIFT = _sdl_keys.SDLK_LSHIFT
K_RCTRL = _sdl_keys.SDLK_RCTRL
K_LCTRL = _sdl_keys.SDLK_LCTRL
K_RALT = _sdl_keys.SDLK_RALT
K_LALT = _sdl_keys.SDLK_LALT
K_RMETA = _sdl_keys.SDLK_RMETA
K_LMETA = _sdl_keys.SDLK_LMETA
K_LSUPER = _sdl_keys.SDLK_LSUPER
K_RSUPER = _sdl_keys.SDLK_RSUPER
K_MODE = _sdl_keys.SDLK_MODE
K_COMPOSE = _sdl_keys.SDLK_COMPOSE
K_HELP = _sdl_keys.SDLK_HELP
K_PRINT = _sdl_keys.SDLK_PRINT
K_SYSREQ = _sdl_keys.SDLK_SYSREQ
K_BREAK = _sdl_keys.SDLK_BREAK
K_MENU = _sdl_keys.SDLK_MENU
K_POWER = _sdl_keys.SDLK_POWER
K_EURO = _sdl_keys.SDLK_EURO
K_UNDO = _sdl_keys.SDLK_UNDO
KMOD_NONE = _sdl_keys.KMOD_NONE
KMOD_LSHIFT = _sdl_keys.KMOD_LSHIFT
KMOD_RSHIFT = _sdl_keys.KMOD_RSHIFT
KMOD_LCTRL = _sdl_keys.KMOD_LCTRL
KMOD_RCTRL = _sdl_keys.KMOD_RCTRL
KMOD_LALT = _sdl_keys.KMOD_LALT
KMOD_RALT = _sdl_keys.KMOD_RALT
KMOD_LMETA = _sdl_keys.KMOD_LMETA
KMOD_RMETA = _sdl_keys.KMOD_RMETA
KMOD_NUM = _sdl_keys.KMOD_NUM
KMOD_CAPS = _sdl_keys.KMOD_CAPS
KMOD_MODE = _sdl_keys.KMOD_MODE
KMOD_CTRL = _sdl_keys.KMOD_CTRL
KMOD_SHIFT = _sdl_keys.KMOD_SHIFT
KMOD_ALT = _sdl_keys.KMOD_ALT
KMOD_META = _sdl_keys.KMOD_META
| lgpl-2.1 |
notspiff/kodi-cmake | lib/gtest/test/gtest_color_test.py | 3259 | 4911 | #!/usr/bin/env python
#
# Copyright 2008, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Verifies that Google Test correctly determines whether to use colors."""
__author__ = 'wan@google.com (Zhanyong Wan)'
import os
import gtest_test_utils
IS_WINDOWS = os.name == 'nt'
COLOR_ENV_VAR = 'GTEST_COLOR'
COLOR_FLAG = 'gtest_color'
COMMAND = gtest_test_utils.GetTestExecutablePath('gtest_color_test_')
def SetEnvVar(env_var, value):
"""Sets the env variable to 'value'; unsets it when 'value' is None."""
if value is not None:
os.environ[env_var] = value
elif env_var in os.environ:
del os.environ[env_var]
def UsesColor(term, color_env_var, color_flag):
"""Runs gtest_color_test_ and returns its exit code."""
SetEnvVar('TERM', term)
SetEnvVar(COLOR_ENV_VAR, color_env_var)
if color_flag is None:
args = []
else:
args = ['--%s=%s' % (COLOR_FLAG, color_flag)]
p = gtest_test_utils.Subprocess([COMMAND] + args)
return not p.exited or p.exit_code
class GTestColorTest(gtest_test_utils.TestCase):
def testNoEnvVarNoFlag(self):
"""Tests the case when there's neither GTEST_COLOR nor --gtest_color."""
if not IS_WINDOWS:
self.assert_(not UsesColor('dumb', None, None))
self.assert_(not UsesColor('emacs', None, None))
self.assert_(not UsesColor('xterm-mono', None, None))
self.assert_(not UsesColor('unknown', None, None))
self.assert_(not UsesColor(None, None, None))
self.assert_(UsesColor('linux', None, None))
self.assert_(UsesColor('cygwin', None, None))
self.assert_(UsesColor('xterm', None, None))
self.assert_(UsesColor('xterm-color', None, None))
self.assert_(UsesColor('xterm-256color', None, None))
def testFlagOnly(self):
"""Tests the case when there's --gtest_color but not GTEST_COLOR."""
self.assert_(not UsesColor('dumb', None, 'no'))
self.assert_(not UsesColor('xterm-color', None, 'no'))
if not IS_WINDOWS:
self.assert_(not UsesColor('emacs', None, 'auto'))
self.assert_(UsesColor('xterm', None, 'auto'))
self.assert_(UsesColor('dumb', None, 'yes'))
self.assert_(UsesColor('xterm', None, 'yes'))
def testEnvVarOnly(self):
"""Tests the case when there's GTEST_COLOR but not --gtest_color."""
self.assert_(not UsesColor('dumb', 'no', None))
self.assert_(not UsesColor('xterm-color', 'no', None))
if not IS_WINDOWS:
self.assert_(not UsesColor('dumb', 'auto', None))
self.assert_(UsesColor('xterm-color', 'auto', None))
self.assert_(UsesColor('dumb', 'yes', None))
self.assert_(UsesColor('xterm-color', 'yes', None))
def testEnvVarAndFlag(self):
"""Tests the case when there are both GTEST_COLOR and --gtest_color."""
self.assert_(not UsesColor('xterm-color', 'no', 'no'))
self.assert_(UsesColor('dumb', 'no', 'yes'))
self.assert_(UsesColor('xterm-color', 'no', 'auto'))
def testAliasesOfYesAndNo(self):
"""Tests using aliases in specifying --gtest_color."""
self.assert_(UsesColor('dumb', None, 'true'))
self.assert_(UsesColor('dumb', None, 'YES'))
self.assert_(UsesColor('dumb', None, 'T'))
self.assert_(UsesColor('dumb', None, '1'))
self.assert_(not UsesColor('xterm', None, 'f'))
self.assert_(not UsesColor('xterm', None, 'false'))
self.assert_(not UsesColor('xterm', None, '0'))
self.assert_(not UsesColor('xterm', None, 'unknown'))
if __name__ == '__main__':
gtest_test_utils.Main()
| gpl-2.0 |
jadore/nosqlsploit | lib/utils/db.py | 1 | 3370 | # -*- coding: utf-8 -*-
from prettyPrint import *
from sqlite3 import *
from os import listdir,system
class DB:
def __init__(self):
self.db = 'db/nss.db'
self.plugin = 'plugins'
self.mongodb = 'mongodb'
self.multi = 'multi'
self.sep = "/"
def initDB(self):
self.execSQL("create table if not exists nss(id integer primary key,type text,path text)")
self.execSQL("delete from nss")
self.insertToDB(self.getPlugins(self.mongodb),self.mongodb)
self.insertToDB(self.getPlugins(self.multi),self.multi)
def insertToDB(self,plugins,dbType):
'''insert data to DB'''
for plugin in plugins:
plugin = plugin[:len(plugin)-3]  # strip the .py extension
self.execSQL('insert into nss(type,path) values("%s","%s/%s")'%(dbType,dbType,plugin))
def execSQL(self,sql):
'''execute a sql'''
conn = connect(self.db)
conn.execute(sql)
conn.commit()
conn.close()
def getPlugins(self,path):
'''get plugins list'''
return listdir(self.plugin+self.sep+path)
def fetchAll(self,sql):
'''sqlite3=>cur.fetchall()'''
conn = connect(self.db)
cur = conn.cursor()
cur.execute(sql)
res = cur.fetchall()
cur.close()
conn.close()
return res
def getPluginPath(self):
""" get the plugin path"""
sql = 'select path from nss'
result = self.fetchAll(sql)
path = []
for res in result:
path.append(res[0])
return path
def searchPlugin(self,keyword):
'''search plugins'''
sql = 'select * from nss where path like "%'+keyword+'%"'
result = self.fetchAll(sql)
self.showSearchResult(result)
def showSearchResult(self,result):
'''format print results'''
prettyPrint.prettyPrint("\n",GREY)
msg = " Matching Modules"
prettyPrint.prettyPrint(msg,YELLOW)
prettyPrint.prettyPrint(" "+"="*len(msg),GREY)
prettyPrint.prettyPrint(" %-5s %-60s %-7s"%("ID","PATH","TYPE"),YELLOW)
prettyPrint.prettyPrint(" %-5s %-60s %-7s"%("-"*5,"-"*60,"-"*7),GREY)
for res in result:
pluginId = res[0]
pluginType = res[1]
pluginPath = res[2]
if len(pluginPath)>70:
pluginPath = pluginPath[:68]+".."
prettyPrint.prettyPrint(" %-5s %-60s %-7s"%(pluginId,pluginPath,pluginType),CYAN)
prettyPrint.prettyPrint(" "+"="*74,GREY)
prettyPrint.prettyPrint(" total [%s] results found "%len(result),GREEN)
prettyPrint.prettyPrint("\n",GREY)
def showPlugins(self,pluginType):
'''show plugins'''
if pluginType.lower() == 'all':
sql = 'select * from nss'
else:
sql = 'select * from nss where type like "%'+pluginType+'%"'
self.showSearchResult(self.fetchAll(sql))
def getPluginNums(self,pluginType):
'''get plugins nums'''
if pluginType.lower() == 'all':
return len(self.fetchAll('select * from nss'))
else:
return len(self.fetchAll('select * from nss where type like "%'+pluginType+'%"'))
if __name__=='__main__':
print __doc__
else:
db = DB()
| gpl-3.0 |
nidhisaini28/shipping-costs-sample | api-ai-python-master/apiai/requests/query/text.py | 3 | 1590 | # -*- coding: utf-8 -*-
from . import QueryRequest
import json
class TextRequest(QueryRequest):
"""
TextRequest request class
Sends simple text queries.
The query can be a string or an array of strings.
"""
@property
def query(self):
"""
Query parameter; can be a string or an array of strings.
Defaults to None, but you should set this field before sending the
request.
:rtype: str or unicode
"""
return self._query
@query.setter
def query(self, query):
self._query = query
def __init__(self, client_access_token, base_url, version, session_id):
super(TextRequest, self).__init__(client_access_token,
base_url,
version,
session_id)
self.query = None
def _prepare_headers(self):
return {
'Content-Type': 'application/json; charset=utf-8',
'Content-Length': len(self._prepage_end_request_data()),
'devMode': 'true'
}
def _prepage_begin_request_data(self):
return None
def _prepage_end_request_data(self):
data = {
'query': self.query,
'lang': self.lang,
'sessionId': self.session_id,
'contexts': self.contexts,
'timezone': self.time_zone,
'resetContexts': self.resetContexts,
'entities': self._prepare_entities(),
}
return json.dumps(data)
| mit |
p0cisk/Quantum-GIS | python/plugins/processing/algs/gdal/onesidebuffer.py | 4 | 5719 | # -*- coding: utf-8 -*-
"""
***************************************************************************
onesidebuffer.py
---------------------
Date : January 2015
Copyright : (C) 2015 by Giovanni Manghi
Email : giovanni dot manghi at naturalgis dot pt
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Giovanni Manghi'
__date__ = 'January 2015'
__copyright__ = '(C) 2015, Giovanni Manghi'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '$Format:%H$'
from processing.core.parameters import ParameterVector
from processing.core.parameters import ParameterString
from processing.core.parameters import ParameterBoolean
from processing.core.parameters import ParameterTableField
from processing.core.parameters import ParameterSelection
from processing.core.outputs import OutputVector
from processing.algs.gdal.GdalAlgorithm import GdalAlgorithm
from processing.algs.gdal.GdalUtils import GdalUtils
from processing.tools import dataobjects
from processing.tools.system import isWindows
from processing.tools.vector import ogrConnectionString, ogrLayerName
class OneSideBuffer(GdalAlgorithm):
OUTPUT_LAYER = 'OUTPUT_LAYER'
INPUT_LAYER = 'INPUT_LAYER'
GEOMETRY = 'GEOMETRY'
RADIUS = 'RADIUS'
LEFTRIGHT = 'LEFTRIGHT'
LEFTRIGHTLIST = ['Right', 'Left']
DISSOLVEALL = 'DISSOLVEALL'
FIELD = 'FIELD'
MULTI = 'MULTI'
OPTIONS = 'OPTIONS'
def defineCharacteristics(self):
self.name, self.i18n_name = self.trAlgorithm('Single sided buffer for lines')
self.group, self.i18n_group = self.trAlgorithm('[OGR] Geoprocessing')
self.addParameter(ParameterVector(self.INPUT_LAYER,
self.tr('Input layer'), [dataobjects.TYPE_VECTOR_LINE], False))
self.addParameter(ParameterString(self.GEOMETRY,
self.tr('Geometry column name ("geometry" for Shapefiles, may be different for other formats)'),
'geometry', optional=False))
self.addParameter(ParameterString(self.RADIUS,
self.tr('Buffer distance'), '1000', optional=False))
self.addParameter(ParameterSelection(self.LEFTRIGHT,
self.tr('Buffer side'), self.LEFTRIGHTLIST, 0))
self.addParameter(ParameterBoolean(self.DISSOLVEALL,
self.tr('Dissolve all results'), False))
self.addParameter(ParameterTableField(self.FIELD,
self.tr('Dissolve by attribute'), self.INPUT_LAYER, optional=True))
self.addParameter(ParameterBoolean(self.MULTI,
self.tr('Output as singlepart geometries (only used when dissolving by attribute)'), False))
self.addParameter(ParameterString(self.OPTIONS,
self.tr('Additional creation options (see ogr2ogr manual)'),
'', optional=True))
self.addOutput(OutputVector(self.OUTPUT_LAYER, self.tr('Single sided buffer')))
def getConsoleCommands(self):
inLayer = self.getParameterValue(self.INPUT_LAYER)
geometry = self.getParameterValue(self.GEOMETRY)
distance = self.getParameterValue(self.RADIUS)
leftright = self.getParameterValue(self.LEFTRIGHT)
dissolveall = self.getParameterValue(self.DISSOLVEALL)
field = self.getParameterValue(self.FIELD)
multi = self.getParameterValue(self.MULTI)
options = self.getParameterValue(self.OPTIONS)
ogrLayer = ogrConnectionString(inLayer)[1:-1]
layername = ogrLayerName(inLayer)
output = self.getOutputFromName(self.OUTPUT_LAYER)
outFile = output.value
output = ogrConnectionString(outFile)
layername = ogrLayerName(inLayer)
arguments = []
arguments.append(output)
arguments.append(ogrLayer)
arguments.append(layername)
arguments.append('-dialect')
arguments.append('sqlite')
arguments.append('-sql')
if dissolveall or field is not None:
sql = "SELECT ST_Union(ST_SingleSidedBuffer({}, {}, {})), * FROM '{}'".format(geometry, distance, leftright, layername)
else:
sql = "SELECT ST_SingleSidedBuffer({},{},{}), * FROM '{}'".format(geometry, distance, leftright, layername)
if field is not None:
sql = '"{} GROUP BY {}"'.format(sql, field)
arguments.append(sql)
if field is not None and multi:
arguments.append('-explodecollections')
if len(options) > 0:
arguments.append(options)
commands = []
if isWindows():
commands = ['cmd.exe', '/C ', 'ogr2ogr.exe',
GdalUtils.escapeAndJoin(arguments)]
else:
commands = ['ogr2ogr', GdalUtils.escapeAndJoin(arguments)]
return commands
def commandName(self):
return 'ogr2ogr'
| gpl-2.0 |
BackupGGCode/sphinx | sphinx/ext/ifconfig.py | 14 | 1969 | # -*- coding: utf-8 -*-
"""
sphinx.ext.ifconfig
~~~~~~~~~~~~~~~~~~~
Provides the ``ifconfig`` directive, which allows writing documentation
that is included conditionally, depending on configuration variables.
Usage::
.. ifconfig:: releaselevel in ('alpha', 'beta', 'rc')
This stuff is only included in the built docs for unstable versions.
The argument for ``ifconfig`` is a plain Python expression, evaluated in the
namespace of the project configuration (that is, all variables from ``conf.py``
are available.)
:copyright: 2008 by Georg Brandl.
:license: BSD.
"""
from docutils import nodes
class ifconfig(nodes.Element): pass
def ifconfig_directive(name, arguments, options, content, lineno,
content_offset, block_text, state, state_machine):
node = ifconfig()
node.line = lineno
node['expr'] = arguments[0]
state.nested_parse(content, content_offset, node)
return [node]
def process_ifconfig_nodes(app, doctree, docname):
ns = app.config.__dict__.copy()
ns['builder'] = app.builder.name
for node in doctree.traverse(ifconfig):
try:
res = eval(node['expr'], ns)
except Exception, err:
# handle exceptions in a clean fashion
from traceback import format_exception_only
msg = ''.join(format_exception_only(err.__class__, err))
newnode = doctree.reporter.error('Exception occurred in '
'ifconfig expression: \n%s' %
msg, base_node=node)
node.replace_self(newnode)
else:
if not res:
node.replace_self([])
else:
node.replace_self(node.children)
def setup(app):
app.add_node(ifconfig)
app.add_directive('ifconfig', ifconfig_directive, 1, (1, 0, 1))
app.connect('doctree-resolved', process_ifconfig_nodes)
| bsd-3-clause |
dangeles/WormFiles | applomics/src/applomics_analysis_160508.py | 1 | 1800 | """
A script to analyze applomics data.
author: dangeles@caltech.edu
"""
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df1 = pd.read_csv('../input/apple_inoculation_expt_160508.csv')
# count colonies adjusted for dilution and volume plated
df1['cfu_per_apple'] = df1.colonies * \
df1.dilution_factor/df1.volume_plated*df1.apple_mass + 5
df1.dropna(inplace=True)
# plot cfu vs inoculation factor
fig, ax = plt.subplots()
df1[df1.worms == 0].plot('inculation_factor', 'cfu_per_apple', 'scatter',
logx=True, logy=True)
df1[df1.worms == 1].plot('inculation_factor', 'cfu_per_apple', 'scatter',
logx=True, logy=True)
plt.show()
# 10**-8 seems like an outlier, so remove it from this experiment.
df2 = df1[df1.inculation_factor > 10**-8].copy()
mean_growth_7 = df1[(df1.worms == 0) &
(df1.inculation_factor == 10**-7)].cfu_per_apple.mean()
mean_growth_6 = df1[(df1.worms == 0) &
(df1.inculation_factor == 10**-6)].cfu_per_apple.mean()
divider = np.repeat([mean_growth_6, mean_growth_7], [6, 6])
df2['fold_change'] = df2.cfu_per_apple/divider
plt.plot(df2[df2.worms == 0].inculation_factor,
df2[df2.worms == 0].fold_change, 'bo', ms=10, alpha=0.65,
label='No Worms/Mean(No Worms)')
plt.plot(df2[df2.worms == 1].inculation_factor,
df2[df2.worms == 1].fold_change, 'ro', ms=10, alpha=0.65,
label='Worms/Mean(No Worms)')
plt.xlim(5*10**-8, 2*10**-6)
plt.xscale('log')
plt.ylabel('Fold Change (worms/no worms)')
plt.xlabel('Inoculation Factor (dilution from sat. soln)')
plt.title('Effect of Worms on Bacteria')
plt.legend()
plt.savefig('../output/Fold_Change_Applomics_160508_Expt1.pdf')
plt.show()
| mit |
WadeBarnes/TheOrgBook | tob-api/api_v2/migrations/0001_initial.py | 2 | 8988 | # -*- coding: utf-8 -*-
# Generated by Django 1.11.14 on 2018-07-05 17:03
from __future__ import unicode_literals
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Address',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('create_timestamp', models.DateTimeField(auto_now_add=True, null=True)),
('update_timestamp', models.DateTimeField(auto_now=True, null=True)),
('addressee', models.TextField(null=True)),
('civic_address', models.TextField(null=True)),
('city', models.TextField(null=True)),
('province', models.TextField(null=True)),
('postal_code', models.TextField(null=True)),
('country', models.TextField(null=True)),
],
options={
'db_table': 'address',
},
),
migrations.CreateModel(
name='Claim',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('create_timestamp', models.DateTimeField(auto_now_add=True, null=True)),
('update_timestamp', models.DateTimeField(auto_now=True, null=True)),
('name', models.TextField(blank=True, null=True)),
('value', models.TextField(blank=True, null=True)),
],
options={
'db_table': 'claim',
},
),
migrations.CreateModel(
name='Contact',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('create_timestamp', models.DateTimeField(auto_now_add=True, null=True)),
('update_timestamp', models.DateTimeField(auto_now=True, null=True)),
('text', models.TextField(null=True)),
('type', models.TextField(null=True)),
],
options={
'db_table': 'contact',
},
),
migrations.CreateModel(
name='Credential',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('create_timestamp', models.DateTimeField(auto_now_add=True, null=True)),
('update_timestamp', models.DateTimeField(auto_now=True, null=True)),
('wallet_id', models.TextField()),
('start_date', models.DateField(default=django.utils.timezone.now)),
('end_date', models.DateField(blank=True, null=True)),
],
options={
'db_table': 'credential',
},
),
migrations.CreateModel(
name='CredentialType',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('create_timestamp', models.DateTimeField(auto_now_add=True, null=True)),
('update_timestamp', models.DateTimeField(auto_now=True, null=True)),
('description', models.TextField(blank=True, null=True)),
('processor_config', django.contrib.postgres.fields.jsonb.JSONField(blank=True, null=True)),
],
options={
'db_table': 'credential_type',
},
),
migrations.CreateModel(
name='Issuer',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('create_timestamp', models.DateTimeField(auto_now_add=True, null=True)),
('update_timestamp', models.DateTimeField(auto_now=True, null=True)),
('did', models.TextField(unique=True)),
('name', models.TextField()),
('abbreviation', models.TextField()),
('email', models.TextField()),
('url', models.TextField()),
],
options={
'db_table': 'issuer',
},
),
migrations.CreateModel(
name='Name',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('create_timestamp', models.DateTimeField(auto_now_add=True, null=True)),
('update_timestamp', models.DateTimeField(auto_now=True, null=True)),
('text', models.TextField(null=True)),
('language', models.TextField(null=True)),
('credential', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='names', to='api_v2.Credential')),
],
options={
'db_table': 'name',
},
),
migrations.CreateModel(
name='Person',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('create_timestamp', models.DateTimeField(auto_now_add=True, null=True)),
('update_timestamp', models.DateTimeField(auto_now=True, null=True)),
('full_name', models.TextField(null=True)),
('credential', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='people', to='api_v2.Credential')),
],
options={
'db_table': 'person',
},
),
migrations.CreateModel(
name='Schema',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('create_timestamp', models.DateTimeField(auto_now_add=True, null=True)),
('update_timestamp', models.DateTimeField(auto_now=True, null=True)),
('name', models.TextField()),
('version', models.TextField()),
('origin_did', models.TextField()),
],
options={
'db_table': 'schema',
},
),
migrations.CreateModel(
name='Topic',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('create_timestamp', models.DateTimeField(auto_now_add=True, null=True)),
('update_timestamp', models.DateTimeField(auto_now=True, null=True)),
('source_id', models.TextField()),
('type', models.TextField()),
],
options={
'db_table': 'topic',
},
),
migrations.AlterUniqueTogether(
name='schema',
unique_together=set([('name', 'version', 'origin_did')]),
),
migrations.AddField(
model_name='credentialtype',
name='issuer',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='credential_types', to='api_v2.Issuer'),
),
migrations.AddField(
model_name='credentialtype',
name='schema',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='credential_types', to='api_v2.Schema'),
),
migrations.AddField(
model_name='credential',
name='credential_type',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='credentials', to='api_v2.CredentialType'),
),
migrations.AddField(
model_name='credential',
name='topics',
field=models.ManyToManyField(related_name='credentials', to='api_v2.Topic'),
),
migrations.AddField(
model_name='contact',
name='credential',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='contacts', to='api_v2.Credential'),
),
migrations.AddField(
model_name='claim',
name='credential',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='claims', to='api_v2.Credential'),
),
migrations.AddField(
model_name='address',
name='credential',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='addresses', to='api_v2.Credential'),
),
migrations.AlterUniqueTogether(
name='credentialtype',
unique_together=set([('schema', 'issuer')]),
),
]
| apache-2.0 |
moshez/nanoauto | nanoauto/web.py | 1 | 2934 | import json
from zope import interface
from twisted.python import usage, filepath
from twisted.internet import reactor, endpoints
from twisted.cred import portal, checkers
from twisted.web import guard, resource, server
from twisted.application import internet
import klein
class Outline(object):
app = klein.Klein()
def __init__(self, store):
self.store = store
@app.route('/toc', methods=['GET'])
def toc(self, request):
ret = {}
for child in self.store.children():
content = child.getContent()
parsed = json.loads(content)
logicalName = parsed['logicalName']
realName = child.basename()
ret[logicalName] = realName
return json.dumps(ret)
@app.route('/children/<child>', methods=['PUT'])
def add(self, request, child):
fp = self.store.child(child)
if fp.exists():
raise ValueError("cannot modify")
request.content.seek(0, 0)
content = json.loads(request.content.read())
parent = content.get('parent')
if parent:
parentFP = self.store.child(parent)
parentContents = json.loads(parentFP.getContent())
lastID = parentContents['lastID'] = parentContents.get('lastID', 0)+1
parentFP.setContent(json.dumps(parentContents))
content['logicalName'] = parentContents['logicalName'] + '.' + str(lastID)
fp.setContent(json.dumps(content))
return 'Done'
class SimpleRealm(object):
interface.implements(portal.IRealm)
def __init__(self, guarded):
self.guarded = guarded
def requestAvatar(self, avatarId, mind, *interfaces):
if resource.IResource in interfaces:
return resource.IResource, self.guarded, lambda:None
raise NotImplementedError()
def makeWrapper(guarded, username, pwd):
checkerList = [checkers.InMemoryUsernamePasswordDatabaseDontUse(**{username: pwd})]
realm = SimpleRealm(guarded)
myPortal = portal.Portal(realm, checkerList)
webGuard = guard.BasicCredentialFactory("nanoauto")
wrapper = guard.HTTPAuthSessionWrapper(myPortal, [webGuard])
return wrapper
def makeService(opt):
outline = Outline(filepath.FilePath(opt['store']))
resource = makeWrapper(outline.app.resource(), opt['username'], opt['password'])
site = server.Site(resource)
port = endpoints.TCP4ServerEndpoint(reactor, opt['port'], interface=opt['interface'])
webService = internet.StreamServerEndpointService(endpoint=port, factory=site)
return webService
class Options(usage.Options):
optParameters = [['username', None, 'root', 'Username'],
['password', None, 'root', 'Password'],
['store', None, '.', 'Store'],
['interface', None, '0.0.0.0', 'Interface to serve on'],
['port', None, 8080, 'Port to serve on', int],
]
| mit |
dkodnik/Ant | addons/base_report_designer/wizard/__init__.py | 421 | 1081 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import base_report_designer_modify
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
ARTFL-Project/PhiloLogic4 | www/scripts/get_query_terms.py | 2 | 1117 | #!/usr/bin/env python3
import json
import os
from wsgiref.handlers import CGIHandler
from philologic.runtime.DB import DB
from philologic.runtime.Query import get_expanded_query
import sys
sys.path.append("..")
import custom_functions
try:
from custom_functions import WebConfig
except ImportError:
from philologic.runtime import WebConfig
try:
from custom_functions import WSGIHandler
except ImportError:
from philologic.runtime import WSGIHandler
def term_list(environ, start_response):
status = "200 OK"
headers = [("Content-type", "application/json; charset=UTF-8"), ("Access-Control-Allow-Origin", "*")]
start_response(status, headers)
config = WebConfig(os.path.abspath(os.path.dirname(__file__)).replace("scripts", ""))
db = DB(config.db_path + "/data/")
request = WSGIHandler(environ, config)
hits = db.query(request["q"], request["method"], request["arg"], **request.metadata)
hits.finish()
expanded_terms = get_expanded_query(hits)
yield json.dumps(expanded_terms[0]).encode("utf8")
if __name__ == "__main__":
CGIHandler().run(term_list)
| gpl-3.0 |
MotorolaMobilityLLC/external-chromium_org | tools/sharding_supervisor/sharding_supervisor.py | 23 | 2622 | #!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Defer to run_test_cases.py."""
import os
import optparse
import sys
ROOT_DIR = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
def pop_known_arguments(args):
"""Extracts known arguments from the args if present."""
rest = []
run_test_cases_extra_args = []
for arg in args:
if arg.startswith(('--gtest_filter=', '--gtest_output=')):
run_test_cases_extra_args.append(arg)
elif arg == '--run-manual':
run_test_cases_extra_args.append(arg)
elif arg == '--gtest_print_time':
# Ignore.
pass
elif 'interactive_ui_tests' in arg:
# Run this test in a single thread. It is useful to run it under
# run_test_cases so automatic flaky test workaround is still used.
run_test_cases_extra_args.append('-j1')
rest.append(arg)
elif 'browser_tests' in arg:
# Test cases in this executable fire up *a lot* of child processes,
# causing huge memory bottleneck. So use less than N-cpus jobs.
run_test_cases_extra_args.append('--use-less-jobs')
rest.append(arg)
else:
rest.append(arg)
return run_test_cases_extra_args, rest
def main():
parser = optparse.OptionParser()
group = optparse.OptionGroup(
parser, 'Compability flag with the old sharding_supervisor')
group.add_option(
'--no-color', action='store_true', help='Ignored')
group.add_option(
'--retry-failed', action='store_true', help='Ignored')
group.add_option(
'-t', '--timeout', type='int', help='Kept as --timeout')
group.add_option(
'--total-slaves', type='int', default=1, help='Converted to --index')
group.add_option(
'--slave-index', type='int', default=0, help='Converted to --shards')
parser.add_option_group(group)
parser.disable_interspersed_args()
options, args = parser.parse_args()
swarm_client_dir = os.path.join(
ROOT_DIR, 'tools', 'swarm_client', 'googletest')
sys.path.insert(0, swarm_client_dir)
cmd = [
'--shards', str(options.total_slaves),
'--index', str(options.slave_index),
'--no-dump',
'--no-cr',
]
if options.timeout is not None:
cmd.extend(['--timeout', str(options.timeout)])
run_test_cases_extra_args, rest = pop_known_arguments(args)
import run_test_cases # pylint: disable=F0401
return run_test_cases.main(cmd + run_test_cases_extra_args + ['--'] + rest)
if __name__ == '__main__':
sys.exit(main())
| bsd-3-clause |