| code (stringlengths 3-1.05M) | repo_name (stringlengths 5-104) | path (stringlengths 4-251) | language (stringclasses, 1 value) | license (stringclasses, 15 values) | size (int64, 3-1.05M) |
|---|---|---|---|---|---|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
################################################################################
# #
# tmux-control #
# #
################################################################################
# #
# LICENCE INFORMATION #
# #
# This program is a way to configure and control tmux. #
# #
# copyright (C) 2015 William Breaden Madden #
# #
# This software is released under the terms of the GNU General Public License #
# version 3 (GPLv3). #
# #
# This program is free software: you can redistribute it and/or modify it #
# under the terms of the GNU General Public License as published by the Free #
# Software Foundation, either version 3 of the License, or (at your option) #
# any later version. #
# #
# This program is distributed in the hope that it will be useful, but WITHOUT #
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or #
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for #
# more details. #
# #
# For a copy of the GNU General Public License, see #
# <http://www.gnu.org/licenses/>. #
# #
################################################################################
# tmux customised configuration and launcher
# introduction
tmux is a terminal multiplexer. It enables a number of terminals to be created,
accessed and controlled from a single screen. tmux can be detached from and
continue running in the background, then reattached. tmux is essentially a tiled
window manager for the terminal.
When launched, tmux creates a new session with a single window and displays it.
A status line at the bottom of the display details information on the current
session and is used to enter interactive commands. The status line displays the
session number, the window number and other information. An active pane has a
bright green border while inactive panes have white borders.
A session is a single collection of pseudoterminals under the management of
tmux. Each session has one or more windows linked to it. A window occupies the
entire screen and may be split into rectangular panes, each of which is a
pseudoterminal. Any number of tmux instances may connect to the same session,
and any number of windows may be present in the same session. Once all sessions
are killed, tmux exits.
Each session is persistent and can survive accidental disconnection (e.g. via
SSH timeout) or intentional detaching (Ctrl a d). tmux can be reattached using
the command tmux attach.
In tmux, a session is displayed on screen by a client and all sessions are
managed by a single server. The server and each client are separate processes
which communicate through a socket in /tmp.
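For example, the server can be queried from Python (a minimal sketch using only
the standard library; it assumes tmux is installed and a session exists):
    import subprocess
    print(subprocess.check_output(["tmux", "list-sessions"]))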
# keybindings
--------------------------------------------------------------------------------
|keybinding |description |
|-----------------------|------------------------------------------------------|
|Ctrl a ? |List all keybindings (press q to exit). |
|Ctrl a - |Split window horizontally. |
|Ctrl a | |Split window vertically. |
|Ctrl a x |Kill the current pane. |
|Ctrl a o |Change to the next pane in the current window. |
|Ctrl a p |Change to previous pane in the current window. |
|Ctrl a <direction> |Change to a pane adjacent to the current pane. |
|Ctrl a Ctrl <direction>|Resize the current pane in steps of one cell. |
|Ctrl a Alt <direction> |Resize the current pane in steps of five cells. |
|Ctrl a f |Prompt to search for text in open windows. |
|Ctrl a [ |Enter copy mode to copy text or view the history. |
|Ctrl a ] |Paste the most recently copied buffer of text. |
|Ctrl a = |Choose which buffer to paste interactively from a |
| |list. |
|Ctrl a : |Enter the tmux command prompt. |
|Ctrl a t |Display the time in the current pane (press q to |
| |exit). |
|Ctrl a d |Detach the current client. |
|Ctrl a & |Kill the current window. |
|Ctrl a k |Kill the current session. |
|Ctrl a K |Kill the current server. |
|------------------------------------------------------------------------------|
|Ctrl a c |Create a new window. |
|Ctrl a , |Name a window. |
|Ctrl a w |Display a list of all windows. |
|Ctrl a l |Change to the previous window. |
|Ctrl a n |Change to the next window. |
|Ctrl a Alt n |Change to the next window with a bell or activity |
| |marker. |
|Ctrl a Alt p |Change to the previous window with a bell or activity |
| |marker. |
|Ctrl a Ctrl o |Rotate the panes of the current window forwards. |
|Ctrl a Alt o |Rotate the panes of the current window backwards. |
|Ctrl a Alt <1 -- 5> |Arrange panes in one of five preset layouts: even |
| |horizontal, even vertical, main horizontal, main |
| |vertical or tiled. |
|Ctrl a { |Swap the current pane with the previous pane. |
|Ctrl a } |Swap the current pane with the next pane. |
|Ctrl a ! |Break the current pane out of the window. |
|Ctrl a % |Split the current pane into two, left and right. |
|Ctrl a " |Split the current pane into two, top and bottom. |
|Ctrl a ; |Change to the previously-active pane. |
|Ctrl a $ |Rename the current session. |
|Ctrl a , |Rename the current window. |
|Ctrl a <0 -- 9> |Select window 0 to 9. |
|Ctrl a q |Display pane indexes briefly. |
|Ctrl a i |Display information about the current window. |
|Ctrl a ~ |Show previous messages from tmux, if they exist (press|
| |q to exit). |
|Ctrl a r |Force a redraw of the attached client. |
|Ctrl a z |Suspend the tmux client. |
--------------------------------------------------------------------------------
# launch modes
analysis mode:
-----------------------
| |
| ranger |
| |
| |
|---------------------|
| |
| terminal |
| |
| |
-----------------------
badass mode (does not work via SSH):
-----------------------
| ranger | |
|----------| |
| terminal | |
|----------| |
| htop | ranger |
|----------| |
| arXiv | |
|----------| |
| cmus | |
-----------------------
detail mode:
-----------------------
| | |
| ranger | |
|----------| |
| terminal | |
|----------| ranger |
| htop | |
|----------| |
| arXiv | |
| | |
-----------------------
edit mode (default):
-----------------------
| | |
| | |
| | |
| | |
| terminal | ranger |
| | |
| | |
| | |
| | |
-----------------------
Nvidia mode:
-----------------------
| |
| htop |
| |
| |
|---------------------|
| |
| nvidia-smi |
| |
| |
-----------------------
run mode:
-----------------------
| |
| |
| |
| |
| scripts |
| |
| |
| |
| |
-----------------------
work mode:
-----------------------
| |
| ranger |
|---------------------|
| |
| terminal |
|---------------------|
| | |
| ranger | cmus |
| | |
-----------------------
# cmus
-----------------------------------------------
|command |action |
|----------|----------------------------------|
|:a ~/music|load music from directory |
|x |start playing |
|b |next |
|z |previous |
|c |pause |
|v |stop |
|r |toggle repeat |
|s |toggle shuffle |
|1 |view tree (artists and songs list)|
|space |expand tree |
|2 |view sorted (songs list) |
|3 |view playlist |
|4 |view queue |
|5 |view browser |
|6 |view filters |
|7 |view settings |
|right |seek forward |
|left |seek backward |
|q |quit |
-----------------------------------------------
usage:
program [options]
options:
-h, --help display help message
--version display version and exit
--analysis analysis configuration
--badass badass configuration
--detail detail configuration
--edit edit configuration
--nvidia Nvidia configuration
--run run configuration
--work work configuration
--directory=NAME directory of scripts to run [default: scripts]
--extension=EXT scripts extension (set to none for any) [default: sh]
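example (launches run mode, spelling out the documented defaults):
    tmux-control.py --run --directory=scripts --extension=sh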
"""
import docopt
import os
import re
import sys
name = "tmux-control"
version = "2017-06-11T1539Z"
def main(options):
engage_configuration_analysis = options["--analysis"]
engage_configuration_badass = options["--badass"]
engage_configuration_detail = options["--detail"]
engage_configuration_edit = options["--edit"]
engage_configuration_Nvidia = options["--nvidia"]
engage_configuration_run = options["--run"]
engage_configuration_work = options["--work"]
directoryname = options["--directory"]
extension_required = options["--extension"]
if extension_required.lower() == "none":
extension_required = None
prerequisites = [
"cmus",
"elinks",
"htop",
"ranger",
"tmux"
]
ensure_prerequisites(prerequisites)
host_name = os.uname()[1]
if "cern.ch" in host_name:
executable = "/usr/bin/tmux"
elif "physics.gla.ac.uk" in host_name:
executable = "/usr/bin/tmux"
else:
executable = "tmux"
configuration_analysis = \
"""
set -g set-remain-on-exit on
new -s "ANALYSIS"
set-option -g prefix C-a
unbind C-b
bind - split-window -v
bind | split-window -h
bind k kill-session
bind K kill-server
## colours
set-option -g window-status-current-bg yellow
set-option -g pane-active-border-fg yellow
set -g status-fg black
set -g status-bg '#FEFE0A'
set -g message-fg black
set -g message-bg '#FEFE0A'
set -g message-command-fg black
set -g message-command-bg '#FEFE0A'
set-option -g mode-keys vi
set -g history-limit 20000
## mouse mode
set -g mouse on
## status
set-option -g status-interval 1
set-option -g status-left-length 20
set-option -g status-left ''
set-option -g status-right '%Y-%m-%dT%H%M%S '
## run programs in panes
# split up-down
split-window -v
# split left up
select-pane -t 0
send-keys 'ranger' Enter
select-pane -t 1
send-keys 'clear' Enter
set -g set-remain-on-exit off
"""
configuration_badass = \
"""
set -g set-remain-on-exit on
new -s "BADASS"
set-option -g prefix C-a
unbind C-b
bind - split-window -v
bind | split-window -h
bind k kill-session
bind K kill-server
## colours
set-option -g window-status-current-bg yellow
set-option -g pane-active-border-fg yellow
set -g status-fg black
set -g status-bg '#FEFE0A'
set -g message-fg black
set -g message-bg '#FEFE0A'
set -g message-command-fg black
set -g message-command-bg '#FEFE0A'
set-option -g mode-keys vi
set -g history-limit 20000
## mouse mode
set -g mouse on
## status
set-option -g status-interval 1
set-option -g status-left-length 20
set-option -g status-left ''
set-option -g status-right '%Y-%m-%dT%H%M%S '
## run programs in panes
# split left-right
split-window -h
# split left up
select-pane -t 0
split-window -v
select-pane -t 0
split-window -v
select-pane -t 0
split-window -v
select-pane -t 3
split-window -v
select-pane -t 0
send-keys 'ranger' Enter
select-pane -t 1
send-keys 'clear' Enter
select-pane -t 2
send-keys 'htop' Enter
select-pane -t 3
send-keys 'elinks http://arxiv.org/list/hep-ph/new' Enter
select-pane -t 4
send-keys 'cmus' Enter
select-pane -t 5
send-keys 'ranger' Enter
select-pane -t 4
set -g set-remain-on-exit off
"""
configuration_detail = \
"""
set -g set-remain-on-exit on
new -s "DETAIL"
set-option -g prefix C-a
unbind C-b
bind - split-window -v
bind | split-window -h
bind k kill-session
bind K kill-server
## colours
set-option -g window-status-current-bg yellow
set-option -g pane-active-border-fg yellow
set -g status-fg black
set -g status-bg '#FEFE0A'
set -g message-fg black
set -g message-bg '#FEFE0A'
set -g message-command-fg black
set -g message-command-bg '#FEFE0A'
set-option -g mode-keys vi
set -g history-limit 20000
## mouse mode
set -g mouse on
## status
set-option -g status-interval 1
set-option -g status-left-length 20
set-option -g status-left ''
set-option -g status-right '%Y-%m-%dT%H%M%S '
## run programs in panes
# split left-right
split-window -h
# split left up
select-pane -t 0
split-window -v
select-pane -t 0
split-window -v
select-pane -t 0
split-window -v
select-pane -t 0
send-keys 'ranger' Enter
select-pane -t 1
send-keys 'clear' Enter
select-pane -t 2
send-keys 'htop' Enter
select-pane -t 3
send-keys 'elinks http://arxiv.org/list/hep-ph/new' Enter
select-pane -t 4
send-keys 'ranger' Enter
set -g set-remain-on-exit off
"""
configuration_edit = \
"""
set -g set-remain-on-exit on
new -s "EDIT"
set-option -g prefix C-a
unbind C-b
bind - split-window -v
bind | split-window -h
bind k kill-session
bind K kill-server
## colours
set-option -g window-status-current-bg yellow
set-option -g pane-active-border-fg yellow
set -g status-fg black
set -g status-bg '#FEFE0A'
set -g message-fg black
set -g message-bg '#FEFE0A'
set -g message-command-fg black
set -g message-command-bg '#FEFE0A'
set-option -g mode-keys vi
set -g history-limit 20000
## mouse mode
set -g mouse on
## status
set-option -g status-interval 1
set-option -g status-left-length 20
set-option -g status-left ''
set-option -g status-right '%Y-%m-%dT%H%M%S '
## run programs in panes
# split left-right
split-window -h
select-pane -t 0
send-keys 'clear' Enter
select-pane -t 1
send-keys 'ranger' Enter
set -g set-remain-on-exit off
"""
configuration_Nvidia = \
"""
set -g set-remain-on-exit on
new -s "NVIDIA"
set-option -g prefix C-a
unbind C-b
bind - split-window -v
bind | split-window -h
bind k kill-session
bind K kill-server
## colours
set-option -g window-status-current-bg yellow
set-option -g pane-active-border-fg yellow
set -g status-fg black
set -g status-bg '#FEFE0A'
set -g message-fg black
set -g message-bg '#FEFE0A'
set -g message-command-fg black
set -g message-command-bg '#FEFE0A'
set-option -g mode-keys vi
set -g history-limit 20000
## mouse mode
set -g mouse on
## status
set-option -g status-interval 1
set-option -g status-left-length 20
set-option -g status-left ''
set-option -g status-right '%Y-%m-%dT%H%M%S '
## run programs in panes
# split up-down
split-window -v
# split left up
select-pane -t 0
send-keys 'htop' Enter
select-pane -t 1
send-keys 'watch -n 0.5 nvidia-smi' Enter
set -g set-remain-on-exit off
"""
configuration_run = \
"""
set -g set-remain-on-exit on
new -s "RUN"
set-option -g prefix C-a
unbind C-b
bind - split-window -v
bind | split-window -h
bind k kill-session
bind K kill-server
## colours
set-option -g window-status-current-bg yellow
set-option -g pane-active-border-fg yellow
set -g status-fg black
set -g status-bg '#FEFE0A'
set -g message-fg black
set -g message-bg '#FEFE0A'
set -g message-command-fg black
set -g message-command-bg '#FEFE0A'
set-option -g mode-keys vi
set -g history-limit 20000
## mouse mode
set -g mouse on
## status
set-option -g status-interval 1
set-option -g status-left-length 20
set-option -g status-left ''
set-option -g status-right '%Y-%m-%dT%H%M%S '
set -g set-remain-on-exit off
## run scripts in windows
"""
configuration_work = \
"""
set -g set-remain-on-exit on
new -s "WORK"
set-option -g prefix C-a
unbind C-b
bind - split-window -v
bind | split-window -h
bind k kill-session
bind K kill-server
## colours
set-option -g window-status-current-bg yellow
set-option -g pane-active-border-fg yellow
set -g status-fg black
set -g status-bg '#FEFE0A'
set -g message-fg black
set -g message-bg '#FEFE0A'
set -g message-command-fg black
set -g message-command-bg '#FEFE0A'
set-option -g mode-keys vi
set -g history-limit 20000
## mouse mode
set -g mouse on
## status
set-option -g status-interval 1
set-option -g status-left-length 20
set-option -g status-left ''
set-option -g status-right '%Y-%m-%dT%H%M%S '
## run programs in panes
split-window -v
select-pane -t 1
split-window -v
select-pane -t 3
split-window -h
select-pane -t 0
send-keys 'ranger' Enter
select-pane -t 1
send-keys 'clear' Enter
select-pane -t 2
send-keys 'ranger' Enter
select-pane -t 3
send-keys 'cmus' Enter
set -g set-remain-on-exit off
"""
if engage_configuration_run:
filepaths = natural_sort(filepaths_at_directory(
directory = directoryname,
extension_required = extension_required
))
for index, filepath in enumerate(filepaths):
configuration_run += " new-window\n"
configuration_run += " rename-window -t {index} '{name}'\n".format(
index = index + 1,
name = os.path.split(filepath)[1]
)
configuration_run += " select-window -t {index}\n".format(
index = index + 1
)
configuration_run += " send-keys '{filepath}' Enter\n".format(
filepath = filepath
)
#configuration_run += " choose-window\n"
configuration_run += " select-window -t 0\n"
if engage_configuration_analysis:
#print("engage configuration analysis")
configuration_tmux = configuration_analysis
elif engage_configuration_badass:
#print("engage configuration badass")
configuration_tmux = configuration_badass
elif engage_configuration_detail:
#print("engage configuration detail")
configuration_tmux = configuration_detail
elif engage_configuration_edit:
#print("engage configuration edit")
configuration_tmux = configuration_edit
elif engage_configuration_Nvidia:
#print("engage configuration Nvidia")
configuration_tmux = configuration_Nvidia
elif engage_configuration_run:
#print("engage configuration run")
configuration_tmux = configuration_run
elif engage_configuration_work:
#print("engage configuration work")
configuration_tmux = configuration_work
else:
#print("engage configuration edit (default)")
configuration_tmux = configuration_edit
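    # A brief note on the mechanism used below: the selected configuration is
    # written to a temporary file (mktemp), tmux is started with it via -f and
    # attached, and the file is unlinked after the tmux client exits.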
command = \
"configuration_tmux=\"$(mktemp)\" && { echo \"" + \
configuration_tmux + \
"\" > \"${configuration_tmux}\"; " + \
executable + \
" -f \"${configuration_tmux}\" attach; " + \
"unlink \"${configuration_tmux}\"; }"
os.system(command)
sys.exit()
def get_keystroke():
import tty
import termios
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
tty.setraw(sys.stdin.fileno())
character = sys.stdin.read(1)
finally:
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
return character
def get_y_or_n():
character = None
while character != "y" and character != "n":
character = get_keystroke().lower()
return character
def ensure_prerequisites(prerequisites):
successes = {}
for prerequisite in prerequisites:
if which(prerequisite) is None:
success = instate(prerequisite)
successes[prerequisite] = success
if False in successes.values():
print("error: dependencies not met -- continue? (y/n)\n")
print(successes)
y_or_n = get_y_or_n()
if y_or_n == "n":
sys.exit()
def instate(program):
print("instate {program}".format(program = program))
if program == "hollywood":
if which("apt-get") is None:
print(
"error: \"apt-get\" not detected -- "
"install program \"{program}\" manually".format(
program = program
)
)
success = False
else:
os.system("sudo apt-add-repository -y ppa:hollywood/ppa")
os.system("sudo apt-get -y update")
os.system("sudo apt-get -y install byobu hollywood")
success = True
else:
if which("apt-get") is None:
print(
"error: \"apt-get\" not detected -- "
"install program \"{program}\" manually".format(
program = program
)
)
success = False
else:
command = "sudo apt-get -y install " + program
os.system(command)
success = True
return success
def natural_sort(
list_object
):
convert = lambda text: int(text) if text.isdigit() else text.lower()
alphanumeric_key = lambda key: [
convert(text) for text in re.split("([0-9]+)", key)
]
return sorted(list_object, key = alphanumeric_key)
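# Example (a minimal sketch with hypothetical filenames):
# natural_sort(["run10.sh", "run2.sh", "run1.sh"]) returns
# ["run1.sh", "run2.sh", "run10.sh"], whereas plain sorted() would place
# "run10.sh" before "run2.sh" because it compares character by character.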
def filepaths_at_directory(
directory = None,
extension_required = None
):
    if not os.path.isdir(directory):
        print("error -- directory {directory} not found".format(directory = directory))
        raise IOError
filepaths = [
os.path.abspath(os.path.join(directory, filename))\
for filename in os.listdir(directory)\
if os.path.isfile(os.path.join(directory, filename))
]
if extension_required:
filepaths = [filepath for filepath in filepaths if extension_required in os.path.splitext(filepath)[1]]
return filepaths
def which(program):
def is_exe(fpath):
return(os.path.isfile(fpath) and os.access(fpath, os.X_OK))
fpath, fname = os.path.split(program)
if fpath:
if is_exe(program):
return(program)
else:
for path in os.environ["PATH"].split(os.pathsep):
path = path.strip('"')
exe_file = os.path.join(path, program)
if is_exe(exe_file):
return(exe_file)
return(None)
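# Example: which("tmux") returns the absolute path of the tmux executable if
# one is found on PATH (or the argument itself if it is already a path to an
# executable), and None otherwise, mirroring the shell utility of the same name.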
if __name__ == "__main__":
options = docopt.docopt(__doc__)
if options["--version"]:
print(version)
exit()
main(options)
| wdbm/tmux-control | tmux-control.py | Python | gpl-3.0 | 26,871 |
# Staging environment settings for us_ignite
import datetime
import os
import urlparse
from us_ignite.settings.base import *
DEBUG = True
TEMPLATE_DEBUG = DEBUG
# Sensitive values are saved as env variables:
env = os.getenv
PROJECT_ROOT = os.path.dirname(os.path.realpath(__file__))
# settings is one directory up now
here = lambda *x: os.path.join(PROJECT_ROOT, '..', *x)
SITE_URL = 'https://us-ignite-staging.herokuapp.com'
ALLOWED_HOSTS = [
'us-ignite-staging.herokuapp.com',
]
# HTTPS configuration:
SESSION_COOKIE_SECURE = True
SECURE_SSL_REDIRECT = True
SECURE_HSTS_SECONDS = 60 * 5
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
# Make this unique, and don't share it with anybody.
SECRET_KEY = env('SECRET_KEY')
# Remote storage settings:
STATICFILES_STORAGE = 'us_ignite.common.storage.StaticS3Storage'
DEFAULT_FILE_STORAGE = 'us_ignite.common.storage.MediaS3Storage'
THUMBNAIL_DEFAULT_STORAGE = DEFAULT_FILE_STORAGE
AWS_ACCESS_KEY_ID = env('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = env('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = 'staging-us-ignite-org'
expire_date = datetime.date.today() + datetime.timedelta(days=365)
expire_seconds = 30 * 24 * 60 * 60
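# 30 days expressed in seconds (2,592,000), used for Cache-Control max-age below.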
AWS_HEADERS = {
'Expires': expire_date.strftime('%a, %d %b %Y 00:00:00 GMT'),
'Cache-Control': 'max-age=%s' % expire_seconds,
}
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
STATIC_URL = 'https://%s/static/' % AWS_S3_CUSTOM_DOMAIN
redis_url = urlparse.urlparse(env('REDISTOGO_URL'))
CACHES = {
'default': {
'BACKEND': 'redis_cache.RedisCache',
'LOCATION': '%s:%s' % (redis_url.hostname, redis_url.port),
'OPTIONS': {
'DB': 0,
'PASSWORD': redis_url.password,
}
}
}
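# For illustration (hypothetical URL): REDISTOGO_URL='redis://user:secret@host.example.com:9712/'
# would yield hostname 'host.example.com', port 9712 and password 'secret' above.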
# Email:
EMAIL_SUBJECT_PREFIX = '[STAGE US Ignite] '
DEFAULT_FROM_EMAIL = 'info@staging.us-ignite.org'
SERVER_EMAIL = 'info@staging.us-ignite.org'
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = env('EMAIL_HOST')
EMAIL_PORT = env('EMAIL_PORT')
EMAIL_HOST_USER = env('EMAIL_HOST_USER')
EMAIL_HOST_PASSWORD = env('EMAIL_HOST_PASSWORD')
# Twitter API:
TWITTER_API_KEY = env('TWITTER_API_KEY')
TWITTER_API_SECRET = env('TWITTER_API_SECRET')
# WP email
WP_EMAIL = env('WP_EMAIL')
# Enable dummy content generation on this build:
ENABLE_DUMMY = True
if ENABLE_DUMMY:
INSTALLED_APPS += ('us_ignite.dummy', )
# List of words:
WORDS_PATH = here('..', 'words')
MAILCHIMP_API_KEY = env('MAILCHIMP_API_KEY')
MAILCHIMP_LIST = env('MAILCHIMP_LIST')
GOOGLE_ANALYTICS_ID = 'DUMMY'
# Production flag:
IS_PRODUCTION = True
# Asset compressor:
COMPRESS_ENABLED = True
STATIC_FILES_VERSION = 'v1'
# Heroku has an ephemeral filesystem, so the compressed assets are deployed to S3:
COMPRESS_STORAGE = 'us_ignite.common.storage.CachedS3BotoStorage'
USE_DEBUG_TOOLBAR = False
| us-ignite/us_ignite | us_ignite/settings/staging.py | Python | bsd-3-clause | 2,837 |
#!/usr/bin/env python
#
# Copyright 2009 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
from gnuradio import gr
class pfb_decimator_ccf(gr.hier_block2):
    '''
    Make a Polyphase Filter decimator (complex in, complex out, floating-point taps).
    This simplifies the interface by allowing a single input stream to connect to
    this block, which then outputs the decimated stream.
    '''
def __init__(self, decim, taps, channel=0):
gr.hier_block2.__init__(self, "pfb_decimator_ccf",
gr.io_signature(1, 1, gr.sizeof_gr_complex), # Input signature
gr.io_signature(1, 1, gr.sizeof_gr_complex)) # Output signature
self._decim = decim
self._taps = taps
self._channel = channel
self.s2ss = gr.stream_to_streams(gr.sizeof_gr_complex, self._decim)
self.pfb = gr.pfb_decimator_ccf(self._decim, self._taps, self._channel)
self.connect(self, self.s2ss)
for i in xrange(self._decim):
self.connect((self.s2ss,i), (self.pfb,i))
self.connect(self.pfb, self)
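# A minimal usage sketch (assumes the classic gnuradio-core API; the sample
# rate, taps and decimation factor are illustrative, not taken from this file):
#
#   from gnuradio import gr, blks2
#   tb = gr.top_block()
#   taps = gr.firdes.low_pass(1.0, 32000, 1000, 500)        # prototype filter
#   src = gr.sig_source_c(32000, gr.GR_SIN_WAVE, 350, 1.0)  # test tone
#   pfb = blks2.pfb_decimator_ccf(8, taps, 0)               # keep channel 0
#   snk = gr.null_sink(gr.sizeof_gr_complex)
#   tb.connect(src, pfb, snk)
#   tb.run()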
| GREO/GNU-Radio | gnuradio-core/src/python/gnuradio/blks2impl/pfb_decimator.py | Python | gpl-3.0 | 1,791 |
# -*- coding: utf-8 -*-
class Solution:
# @param dungeon, a list of lists of integers
    # @return an integer
def calculateMinimumHP(self, map):
dp = [ [0]*len(map[0]) for _ in xrange(len(map))]
pre = [ [0]*len(map[0]) for _ in xrange(len(map))]
def get(init):
dp[0][0] = init+map[0][0]
if dp[0][0] < 1:
return 0
for i in xrange(1, len(map[0])):
if dp[0][i-1] > 0:
dp[0][i] = dp[0][i-1] + map[0][i]
pre[0][i] = 1
else:
dp[0][i] = -1
for i in xrange(1, len(map)):
if dp[i-1][0] > 0:
dp[i][0] = dp[i-1][0] + map[i][0]
pre[i][0] = 2
else:
dp[i][0] = -1
for i in xrange(1, len(map)):
for j in xrange(1, len(map[0])):
if dp[i][j-1] <= 0 and dp[i-1][j] <= 0:
dp[i][j] = -1
continue
if dp[i][j-1] > dp[i-1][j]:
dp[i][j] = dp[i][j-1] + map[i][j]
pre[i][j] = 1
else:
dp[i][j] = dp[i-1][j] + map[i][j]
pre[i][j] = 2
return dp[-1][-1] > 0
        low, high = 1, 1000000000
while low < high:
mid = (low+high)>>1
if not get(mid):
low = mid+1
else:
high = mid
return low
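# The solution binary-searches the knight's initial HP over [1, 1e9]: get(init)
# is monotone in init (more starting health never hurts), so the smallest value
# for which the DP reaches the bottom-right cell alive is the required minimum.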
s = Solution()
print s.calculateMinimumHP([[-61,-52,10,-82],[-77,-22,-83,48],[-59,-61,-42,-34],[231,47,-6,19]])
| atupal/oj | leetcode/174_hard_dungeon-game.py | Python | mit | 1,426 |
##########################################################################
#
# Copyright (c) 2014, Image Engine Design Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above
# copyright notice, this list of conditions and the following
# disclaimer.
#
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided with
# the distribution.
#
# * Neither the name of John Haddon nor the names of
# any other contributors to this software may be used to endorse or
# promote products derived from this software without specific prior
# written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
##########################################################################
import functools
import IECore
import Gaffer
import GafferUI
##########################################################################
# Public methods
##########################################################################
## May be called to connect the DotUI functionality to an application
# instance. This isn't done automatically because some applications
# may have graphs for which it doesn't make sense to use Dots. Typically
# this function would be called from an application startup file.
def connect( applicationRoot ) :
applicationRoot.__dotUIConnected = True
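# For example, from an application startup file (a sketch; it assumes the
# startup script is passed an `application` variable, as Gaffer startup
# files are):
#
#   import GafferUI.DotUI
#   GafferUI.DotUI.connect( application.root() )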
##########################################################################
# Metadata
##########################################################################
Gaffer.Metadata.registerNodeDescription(
Gaffer.Dot,
"""A utility node which can be used for organising large graphs.""",
)
Gaffer.Metadata.registerNodeValue( Gaffer.Dot, "nodeGadget:minWidth", 0.0 )
Gaffer.Metadata.registerNodeValue( Gaffer.Dot, "nodeGadget:padding", 0.5 )
##########################################################################
# NodeGraph menus
##########################################################################
def __insertDot( menu, destinationPlug ) :
nodeGraph = menu.ancestor( GafferUI.NodeGraph )
gadgetWidget = nodeGraph.graphGadgetWidget()
graphGadget = nodeGraph.graphGadget()
with Gaffer.UndoContext( destinationPlug.ancestor( Gaffer.ScriptNode ) ) :
node = Gaffer.Dot()
graphGadget.getRoot().addChild( node )
node.setup( destinationPlug )
node["in"].setInput( destinationPlug.getInput() )
destinationPlug.setInput( node["out"] )
menuPosition = menu.popupPosition( relativeTo = gadgetWidget )
position = gadgetWidget.getViewportGadget().rasterToGadgetSpace(
IECore.V2f( menuPosition.x, menuPosition.y ),
gadget = graphGadget
).p0
graphGadget.setNodePosition( node, IECore.V2f( position.x, position.y ) )
def __connectionContextMenu( nodeGraph, destinationPlug, menuDefinition ) :
applicationRoot = nodeGraph.scriptNode().ancestor( Gaffer.ApplicationRoot )
connected = False
with IECore.IgnoredExceptions( AttributeError ) :
connected = applicationRoot.__dotUIConnected
if not connected :
return
if len( menuDefinition.items() ) :
menuDefinition.append( "/DotDivider", { "divider" : True } )
menuDefinition.append(
"/Insert Dot",
{
"command" : functools.partial( __insertDot, destinationPlug = destinationPlug ),
"active" : not destinationPlug.getFlags( Gaffer.Plug.Flags.ReadOnly ),
}
)
__connectionContextMenuConnection = GafferUI.NodeGraph.connectionContextMenuSignal().connect( __connectionContextMenu )
##########################################################################
# PlugValueWidget registrations
##########################################################################
GafferUI.PlugValueWidget.registerCreator( Gaffer.Dot, "in", None )
GafferUI.PlugValueWidget.registerCreator( Gaffer.Dot, "out", None )
| goddardl/gaffer | python/GafferUI/DotUI.py | Python | bsd-3-clause | 4,789 |
{
'name': 'To-Do Website',
'description': 'Manage your personal To-Do tasks.',
'author': 'Daniel Reis',
'depends': ['todo_stage', 'website_form'],
'data': [
'views/todo_web.xml',
'views/todo_extend.xml',
'data/config_data.xml',
],
}
| dreispt/todo_app | todo_website/__manifest__.py | Python | agpl-3.0 | 281 |
from BaseController import BaseController
import tornado.ioloop
import tornado.web
import dateutil.parser
import datetime
class TopCommandsController(BaseController):
def get(self):
return_data = dict(data=[],
timestamp=datetime.datetime.now().isoformat())
server = self.get_argument("server")
from_date = self.get_argument("from", None)
to_date = self.get_argument("to", None)
if not from_date or not to_date:
end = datetime.datetime.now()
delta = datetime.timedelta(seconds=120)
start = end - delta
else:
start = dateutil.parser.parse(from_date)
end = dateutil.parser.parse(to_date)
for data in self.stats_provider.get_top_commands_stats(server, start,
end):
return_data['data'].append([data[0], data[1]])
self.write(return_data)
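# Example request (hypothetical route and server name): GET /topcommands?server=redis01
# with no "from"/"to" arguments reports the top commands over the trailing 120 seconds.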
| udomsak/RedisLive | src/api/controller/TopCommandsController.py | Python | mit | 971 |
# -*- coding: utf-8 -*-
#
# Copyright (C) 2007-2009 Andrew Resch <andrewresch@gmail.com>
#
# This file is part of Deluge and is licensed under GNU General Public License 3.0, or later, with
# the additional special exception to link portions of this program with the OpenSSL library.
# See LICENSE for more details.
#
# We skip isorting this file as it wants to move the gtk2reactor.install() below the imports
# isort:skip_file
import logging
import os
import sys
import warnings
import gobject
import gtk
from twisted.internet import gtk2reactor
from twisted.internet.error import ReactorAlreadyInstalledError
try:
# Install twisted reactor, before any other modules import reactor.
reactor = gtk2reactor.install()
except ReactorAlreadyInstalledError:
    # Running unit tests, so trial already installed a reactor
pass
import deluge.common
import deluge.component as component
from deluge.configmanager import ConfigManager, get_config_dir
from deluge.error import AuthenticationRequired, BadLoginError, DaemonRunningError
from deluge.ui.client import client
from deluge.ui.gtkui.addtorrentdialog import AddTorrentDialog
from deluge.ui.gtkui.common import associate_magnet_links
from deluge.ui.gtkui.dialogs import AuthenticationDialog, ErrorDialog, YesNoDialog
from deluge.ui.gtkui.filtertreeview import FilterTreeView
from deluge.ui.gtkui.ipcinterface import IPCInterface
from deluge.ui.gtkui.mainwindow import MainWindow
from deluge.ui.gtkui.menubar import MenuBar
from deluge.ui.gtkui.pluginmanager import PluginManager
from deluge.ui.gtkui.preferences import Preferences
from deluge.ui.gtkui.queuedtorrents import QueuedTorrents
from deluge.ui.gtkui.sidebar import SideBar
from deluge.ui.gtkui.statusbar import StatusBar
from deluge.ui.gtkui.systemtray import SystemTray
from deluge.ui.gtkui.toolbar import ToolBar
from deluge.ui.gtkui.torrentdetails import TorrentDetails
from deluge.ui.gtkui.torrentview import TorrentView
from deluge.ui.sessionproxy import SessionProxy
from deluge.ui.tracker_icons import TrackerIcons
from deluge.ui.ui import _UI
gobject.set_prgname("deluge")
log = logging.getLogger(__name__)
try:
from setproctitle import setproctitle, getproctitle
except ImportError:
setproctitle = lambda t: None
getproctitle = lambda: None
class Gtk(_UI):
help = """Starts the Deluge GTK+ interface"""
def __init__(self):
super(Gtk, self).__init__("gtk")
def start(self):
super(Gtk, self).start()
GtkUI(self.args)
def start():
Gtk().start()
DEFAULT_PREFS = {
"classic_mode": True,
"interactive_add": True,
"focus_add_dialog": True,
"enable_system_tray": True,
"close_to_tray": False,
"start_in_tray": False,
"enable_appindicator": False,
"lock_tray": False,
"tray_password": "",
"check_new_releases": True,
"default_load_path": None,
"window_maximized": False,
"window_x_pos": 0,
"window_y_pos": 0,
"window_width": 640,
"window_height": 480,
"pref_dialog_width": None,
"pref_dialog_height": None,
"edit_trackers_dialog_width": None,
"edit_trackers_dialog_height": None,
"window_pane_position": 235,
"tray_download_speed_list": [5.0, 10.0, 30.0, 80.0, 300.0],
"tray_upload_speed_list": [5.0, 10.0, 30.0, 80.0, 300.0],
"connection_limit_list": [50, 100, 200, 300, 500],
"enabled_plugins": [],
"show_connection_manager_on_start": True,
"autoconnect": False,
"autoconnect_host_id": None,
"autostart_localhost": False,
"autoadd_queued": False,
"choose_directory_dialog_path": deluge.common.get_default_download_dir(),
"show_new_releases": True,
"signal_port": 40000,
"ntf_tray_blink": True,
"ntf_sound": False,
"ntf_sound_path": deluge.common.get_default_download_dir(),
"ntf_popup": False,
"ntf_email": False,
"ntf_email_add": "",
"ntf_username": "",
"ntf_pass": "",
"ntf_server": "",
"ntf_security": None,
"signal_port": 40000,
"show_sidebar": True,
"show_toolbar": True,
"show_statusbar": True,
"sidebar_show_zero": False,
"sidebar_show_trackers": True,
"sidebar_position": 170,
"show_rate_in_title": False,
"createtorrent.trackers": [],
"show_piecesbar": False,
"pieces_color_missing": [65535, 0, 0],
"pieces_color_waiting": [4874, 56494, 0],
"pieces_color_downloading": [65535, 55255, 0],
"pieces_color_completed": [4883, 26985, 56540],
"focus_main_window_on_add": True,
"language": None,
}
class GtkUI(object):
def __init__(self, args):
self.daemon_bps = (0, 0, 0)
        # Set up gtkbuilder/glade translation
deluge.common.setup_translations(setup_gettext=False, setup_pygtk=True)
# Setup signals
try:
import gnome.ui
import gnome
# Suppress: Warning: Attempt to add property GnomeProgram::*** after class was initialised
original_filters = warnings.filters[:]
warnings.simplefilter("ignore")
try:
self.gnome_prog = gnome.init("Deluge", deluge.common.get_version())
finally:
warnings.filters = original_filters
self.gnome_client = gnome.ui.master_client()
def on_die(*args):
reactor.stop()
self.gnome_client.connect("die", on_die)
log.debug("GNOME session 'die' handler registered!")
except Exception as ex:
log.warning("Unable to register a 'die' handler with the GNOME session manager: %s", ex)
if deluge.common.windows_check():
from win32api import SetConsoleCtrlHandler
from win32con import CTRL_CLOSE_EVENT
from win32con import CTRL_SHUTDOWN_EVENT
def win_handler(ctrl_type):
log.debug("ctrl_type: %s", ctrl_type)
if ctrl_type in (CTRL_CLOSE_EVENT, CTRL_SHUTDOWN_EVENT):
reactor.stop()
return 1
SetConsoleCtrlHandler(win_handler)
if deluge.common.osx_check() and gtk.gdk.WINDOWING == "quartz":
import gtkosx_application
self.osxapp = gtkosx_application.gtkosx_application_get()
def on_die(*args):
reactor.stop()
self.osxapp.connect("NSApplicationWillTerminate", on_die)
# Set process name again to fix gtk issue
setproctitle(getproctitle())
# Attempt to register a magnet URI handler with gconf, but do not overwrite
# if already set by another program.
associate_magnet_links(False)
# Make sure gtkui.conf has at least the defaults set
self.config = ConfigManager("gtkui.conf", DEFAULT_PREFS)
# Make sure the gtkui state folder has been created
if not os.path.exists(os.path.join(get_config_dir(), "gtkui_state")):
os.makedirs(os.path.join(get_config_dir(), "gtkui_state"))
# We need to check on exit if it was started in classic mode to ensure we
# shutdown the daemon.
self.started_in_classic = self.config["classic_mode"]
# Set language
if not self.config["language"] is None:
deluge.common.set_language(self.config["language"])
# Start the IPC Interface before anything else.. Just in case we are
# already running.
self.queuedtorrents = QueuedTorrents()
self.ipcinterface = IPCInterface(args)
# Initialize gdk threading
gtk.gdk.threads_init()
# We make sure that the UI components start once we get a core URI
client.set_disconnect_callback(self.__on_disconnect)
self.trackericons = TrackerIcons()
self.sessionproxy = SessionProxy()
# Initialize various components of the gtkui
self.mainwindow = MainWindow()
self.menubar = MenuBar()
self.toolbar = ToolBar()
self.torrentview = TorrentView()
self.torrentdetails = TorrentDetails()
self.sidebar = SideBar()
self.filtertreeview = FilterTreeView()
self.preferences = Preferences()
self.systemtray = SystemTray()
self.statusbar = StatusBar()
self.addtorrentdialog = AddTorrentDialog()
if deluge.common.osx_check() and gtk.gdk.WINDOWING == "quartz":
def nsapp_open_file(osxapp, filename):
# Will be raised at app launch (python opening main script)
if filename.endswith('Deluge-bin'):
return True
from deluge.ui.gtkui.ipcinterface import process_args
process_args([filename])
self.osxapp.connect("NSApplicationOpenFile", nsapp_open_file)
from deluge.ui.gtkui.menubar_osx import menubar_osx
menubar_osx(self, self.osxapp)
self.osxapp.ready()
        # Initialize the plugins
self.plugins = PluginManager()
# Late import because of setting up translations
from deluge.ui.gtkui.connectionmanager import ConnectionManager
# Show the connection manager
self.connectionmanager = ConnectionManager()
from twisted.internet.task import LoopingCall
rpc_stats = LoopingCall(self.print_rpc_stats)
rpc_stats.start(10)
reactor.callWhenRunning(self._on_reactor_start)
# Initialize gdk threading
gtk.gdk.threads_enter()
reactor.run()
self.shutdown()
gtk.gdk.threads_leave()
def shutdown(self, *args, **kwargs):
log.debug("gtkui shutting down..")
component.stop()
# Process any pending gtk events since the mainloop has been quit
while gtk.events_pending():
gtk.main_iteration(0)
# Shutdown all components
component.shutdown()
# Make sure the config is saved.
self.config.save()
def print_rpc_stats(self):
import time
try:
recv = client.get_bytes_recv()
sent = client.get_bytes_sent()
except AttributeError:
return
log.debug("sent: %s recv: %s", deluge.common.fsize(sent), deluge.common.fsize(recv))
t = time.time()
delta_time = t - self.daemon_bps[0]
delta_sent = sent - self.daemon_bps[1]
delta_recv = recv - self.daemon_bps[2]
sent_rate = deluge.common.fspeed(float(delta_sent) / float(delta_time))
recv_rate = deluge.common.fspeed(float(delta_recv) / float(delta_time))
log.debug("sent rate: %s recv rate: %s", sent_rate, recv_rate)
self.daemon_bps = (t, sent, recv)
def _on_reactor_start(self):
log.debug("_on_reactor_start")
self.mainwindow.first_show()
if self.config["classic_mode"]:
def on_dialog_response(response):
if response != gtk.RESPONSE_YES:
# The user does not want to turn Standalone Mode off, so just quit
self.mainwindow.quit()
return
# Turning off classic_mode
self.config["classic_mode"] = False
self.__start_non_classic()
try:
try:
client.start_classic_mode()
except DaemonRunningError:
d = YesNoDialog(
_("Switch to Thin Client Mode?"),
_("A Deluge daemon process (deluged) is already running. "
"To use Standalone mode, stop this daemon and restart Deluge."
"\n\n"
"Continue in Thin Client mode?")).run()
self.started_in_classic = False
d.addCallback(on_dialog_response)
except ImportError as ex:
if "No module named libtorrent" in ex.message:
d = YesNoDialog(
_("Switch to Thin Client Mode?"),
_("Only Thin Client mode is available because libtorrent is not installed."
"\n\n"
"To use Deluge Standalone mode, please install libtorrent.")).run()
self.started_in_classic = False
d.addCallback(on_dialog_response)
else:
raise ex
else:
component.start()
return
except Exception:
import traceback
tb = sys.exc_info()
ed = ErrorDialog(
_("Error Starting Core"),
_("An error occurred starting the core component required to run Deluge in Standalone mode."
"\n\n"
"Please see the details below for more information."), details=traceback.format_exc(tb[2])).run()
def on_ed_response(response):
d = YesNoDialog(
_("Switch to Thin Client Mode?"),
_("Unable to start Standalone mode would you like to continue in Thin Client mode?")
).run()
self.started_in_classic = False
d.addCallback(on_dialog_response)
ed.addCallback(on_ed_response)
else:
self.__start_non_classic()
def __start_non_classic(self):
# Autoconnect to a host
if self.config["autoconnect"]:
def update_connection_manager():
if not self.connectionmanager.running:
return
self.connectionmanager.builder.get_object("button_refresh").emit("clicked")
def close_connection_manager():
if not self.connectionmanager.running:
return
self.connectionmanager.builder.get_object("button_close").emit("clicked")
for host_config in self.connectionmanager.config["hosts"]:
hostid, host, port, user, passwd = host_config
if hostid == self.config["autoconnect_host_id"]:
try_connect = True
# Check to see if we need to start the localhost daemon
if self.config["autostart_localhost"] and host in ("localhost", "127.0.0.1"):
log.debug("Autostarting localhost:%s", host)
try_connect = client.start_daemon(
port, get_config_dir()
)
log.debug("Localhost started: %s", try_connect)
if not try_connect:
ErrorDialog(
_("Error Starting Daemon"),
_("There was an error starting the daemon "
"process. Try running it from a console "
"to see if there is an error.")
).run()
                        # Daemon started, let's update its info
reactor.callLater(0.5, update_connection_manager)
def on_connect(connector):
component.start()
reactor.callLater(0.2, update_connection_manager)
reactor.callLater(0.5, close_connection_manager)
def on_connect_fail(reason, try_counter,
host, port, user, passwd):
if not try_counter:
return
if reason.check(AuthenticationRequired, BadLoginError):
log.debug("PasswordRequired exception")
dialog = AuthenticationDialog(reason.value.message, reason.value.username)
def dialog_finished(response_id, host, port):
if response_id == gtk.RESPONSE_OK:
reactor.callLater(
0.5, do_connect, try_counter - 1,
host, port, dialog.get_username(),
dialog.get_password())
dialog.run().addCallback(dialog_finished, host, port)
return
log.info("Connection to host failed..")
log.info("Retrying connection.. Retries left: "
"%s", try_counter)
reactor.callLater(0.5, update_connection_manager)
reactor.callLater(0.5, do_connect, try_counter - 1,
host, port, user, passwd)
def do_connect(try_counter, host, port, user, passwd):
log.debug("Trying to connect to %s@%s:%s",
user, host, port)
d = client.connect(host, port, user, passwd)
d.addCallback(on_connect)
d.addErrback(on_connect_fail, try_counter,
host, port, user, passwd)
if try_connect:
reactor.callLater(
0.5, do_connect, 6, host, port, user, passwd
)
break
if self.config["show_connection_manager_on_start"]:
# XXX: We need to call a simulate() here, but this could be a bug in twisted
try:
reactor._simulate()
except AttributeError:
# twisted < 12
reactor.simulate()
self.connectionmanager.show()
def __on_disconnect(self):
"""
Called when disconnected from the daemon. We basically just stop all
the components here.
"""
component.stop()
| Arzie/deluge | deluge/ui/gtkui/gtkui.py | Python | gpl-3.0 | 18,283 |
# coding: utf-8
import base64
import datetime
import logging
import time
from hashlib import sha1
from pprint import pformat
from unicodedata import normalize
import requests
from lxml import etree, objectify
from werkzeug import urls, url_encode
from odoo import api, fields, models, _
from odoo.addons.payment.models.payment_acquirer import ValidationError
from odoo.addons.payment_ogone.controllers.main import OgoneController
from odoo.addons.payment_ogone.data import ogone
from odoo.tools import DEFAULT_SERVER_DATE_FORMAT, ustr
from odoo.tools.float_utils import float_compare, float_repr, float_round
_logger = logging.getLogger(__name__)
class PaymentAcquirerOgone(models.Model):
_inherit = 'payment.acquirer'
provider = fields.Selection(selection_add=[('ogone', 'Ogone')])
ogone_pspid = fields.Char('PSPID', required_if_provider='ogone', groups='base.group_user')
ogone_userid = fields.Char('API User ID', required_if_provider='ogone', groups='base.group_user')
ogone_password = fields.Char('API User Password', required_if_provider='ogone', groups='base.group_user')
ogone_shakey_in = fields.Char('SHA Key IN', size=32, required_if_provider='ogone', groups='base.group_user')
ogone_shakey_out = fields.Char('SHA Key OUT', size=32, required_if_provider='ogone', groups='base.group_user')
ogone_alias_usage = fields.Char('Alias Usage', default="Allow saving my payment data",
help="If you want to use Ogone Aliases, this default "
"Alias Usage will be presented to the customer as the "
"reason you want to keep his payment data")
def _get_feature_support(self):
"""Get advanced feature support by provider.
        Each provider should add its technical name in the corresponding
        key for the following features:
* fees: support payment fees computations
* authorize: support authorizing payment (separates
authorization and capture)
* tokenize: support saving payment data in a payment.tokenize
object
"""
res = super(PaymentAcquirerOgone, self)._get_feature_support()
res['tokenize'].append('ogone')
return res
def _get_ogone_urls(self, environment):
""" Ogone URLS:
- standard order: POST address for form-based """
return {
'ogone_standard_order_url': 'https://secure.ogone.com/ncol/%s/orderstandard_utf8.asp' % (environment,),
'ogone_direct_order_url': 'https://secure.ogone.com/ncol/%s/orderdirect_utf8.asp' % (environment,),
'ogone_direct_query_url': 'https://secure.ogone.com/ncol/%s/querydirect_utf8.asp' % (environment,),
'ogone_afu_agree_url': 'https://secure.ogone.com/ncol/%s/AFU_agree.asp' % (environment,),
}
def _ogone_generate_shasign(self, inout, values):
""" Generate the shasign for incoming or outgoing communications.
:param string inout: 'in' (odoo contacting ogone) or 'out' (ogone
contacting odoo). In this last case only some
fields should be contained (see e-Commerce basic)
:param dict values: transaction values
:return string: shasign
"""
assert inout in ('in', 'out')
assert self.provider == 'ogone'
key = getattr(self, 'ogone_shakey_' + inout)
def filter_key(key):
if inout == 'in':
return True
else:
# SHA-OUT keys
# source https://viveum.v-psp.com/Ncol/Viveum_e-Com-BAS_EN.pdf
keys = [
'AAVADDRESS',
'AAVCHECK',
'AAVMAIL',
'AAVNAME',
'AAVPHONE',
'AAVZIP',
'ACCEPTANCE',
'ALIAS',
'AMOUNT',
'BIC',
'BIN',
'BRAND',
'CARDNO',
'CCCTY',
'CN',
'COMPLUS',
'CREATION_STATUS',
'CURRENCY',
'CVCCHECK',
'DCC_COMMPERCENTAGE',
'DCC_CONVAMOUNT',
'DCC_CONVCCY',
'DCC_EXCHRATE',
'DCC_EXCHRATESOURCE',
'DCC_EXCHRATETS',
'DCC_INDICATOR',
'DCC_MARGINPERCENTAGE',
'DCC_VALIDHOURS',
'DIGESTCARDNO',
'ECI',
'ED',
'ENCCARDNO',
'FXAMOUNT',
'FXCURRENCY',
'IBAN',
'IP',
'IPCTY',
'NBREMAILUSAGE',
'NBRIPUSAGE',
'NBRIPUSAGE_ALLTX',
'NBRUSAGE',
'NCERROR',
'NCERRORCARDNO',
'NCERRORCN',
'NCERRORCVC',
'NCERRORED',
'ORDERID',
'PAYID',
'PAYIDSUB',
'PM',
'SCO_CATEGORY',
'SCORING',
'STATUS',
'SUBBRAND',
'SUBSCRIPTION_ID',
'TRXDATE',
'VC'
]
return key.upper() in keys
items = sorted((k.upper(), v) for k, v in values.items())
sign = ''.join('%s=%s%s' % (k, v, key) for k, v in items if v and filter_key(k))
sign = sign.encode("utf-8")
shasign = sha1(sign).hexdigest()
return shasign
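    # For illustration (hypothetical key and values): with key 'Mysecretsig1875!?'
    # and values {'AMOUNT': '1500', 'CURRENCY': 'EUR', 'ORDERID': '1234',
    # 'PSPID': 'MyPSPID'}, the signed string is
    # 'AMOUNT=1500Mysecretsig1875!?CURRENCY=EURMysecretsig1875!?ORDERID=...' and
    # SHASIGN is its SHA-1 hex digest (compared case-insensitively by callers).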
def ogone_form_generate_values(self, values):
base_url = self.env['ir.config_parameter'].sudo().get_param('web.base.url')
ogone_tx_values = dict(values)
param_plus = {
'return_url': ogone_tx_values.pop('return_url', False)
}
temp_ogone_tx_values = {
'PSPID': self.ogone_pspid,
'ORDERID': values['reference'],
'AMOUNT': float_repr(float_round(values['amount'], 2) * 100, 0),
'CURRENCY': values['currency'] and values['currency'].name or '',
'LANGUAGE': values.get('partner_lang'),
'CN': values.get('partner_name'),
'EMAIL': values.get('partner_email'),
'OWNERZIP': values.get('partner_zip'),
'OWNERADDRESS': values.get('partner_address'),
'OWNERTOWN': values.get('partner_city'),
'OWNERCTY': values.get('partner_country') and values.get('partner_country').code or '',
'OWNERTELNO': values.get('partner_phone'),
'ACCEPTURL': urls.url_join(base_url, OgoneController._accept_url),
'DECLINEURL': urls.url_join(base_url, OgoneController._decline_url),
'EXCEPTIONURL': urls.url_join(base_url, OgoneController._exception_url),
'CANCELURL': urls.url_join(base_url, OgoneController._cancel_url),
'PARAMPLUS': url_encode(param_plus),
}
if self.save_token in ['ask', 'always']:
temp_ogone_tx_values.update({
'ALIAS': 'ODOO-NEW-ALIAS-%s' % time.time(), # something unique,
'ALIASUSAGE': values.get('alias_usage') or self.ogone_alias_usage,
})
shasign = self._ogone_generate_shasign('in', temp_ogone_tx_values)
temp_ogone_tx_values['SHASIGN'] = shasign
ogone_tx_values.update(temp_ogone_tx_values)
return ogone_tx_values
def ogone_get_form_action_url(self):
return self._get_ogone_urls(self.environment)['ogone_standard_order_url']
def ogone_s2s_form_validate(self, data):
error = dict()
mandatory_fields = ["cc_number", "cc_cvc", "cc_holder_name", "cc_expiry", "cc_brand"]
# Validation
for field_name in mandatory_fields:
if not data.get(field_name):
error[field_name] = 'missing'
return False if error else True
def ogone_s2s_form_process(self, data):
values = {
'cc_number': data.get('cc_number'),
'cc_cvc': int(data.get('cc_cvc')),
'cc_holder_name': data.get('cc_holder_name'),
'cc_expiry': data.get('cc_expiry'),
'cc_brand': data.get('cc_brand'),
'acquirer_id': int(data.get('acquirer_id')),
'partner_id': int(data.get('partner_id'))
}
pm_id = self.env['payment.token'].sudo().create(values)
return pm_id
class PaymentTxOgone(models.Model):
_inherit = 'payment.transaction'
# ogone status
_ogone_valid_tx_status = [5, 9, 8]
_ogone_wait_tx_status = [41, 50, 51, 52, 55, 56, 91, 92, 99]
_ogone_pending_tx_status = [46, 81, 82] # 46 = 3DS HTML response
_ogone_cancel_tx_status = [1]
# --------------------------------------------------
# FORM RELATED METHODS
# --------------------------------------------------
@api.model
def _ogone_form_get_tx_from_data(self, data):
""" Given a data dict coming from ogone, verify it and find the related
transaction record. Create a payment token if an alias is returned."""
reference, pay_id, shasign, alias = data.get('orderID'), data.get('PAYID'), data.get('SHASIGN'), data.get('ALIAS')
if not reference or not pay_id or not shasign:
error_msg = _('Ogone: received data with missing reference (%s) or pay_id (%s) or shasign (%s)') % (reference, pay_id, shasign)
_logger.info(error_msg)
raise ValidationError(error_msg)
# find tx -> @TDENOTE use paytid ?
tx = self.search([('reference', '=', reference)])
if not tx or len(tx) > 1:
error_msg = _('Ogone: received data for reference %s') % (reference)
if not tx:
error_msg += _('; no order found')
else:
                error_msg += _('; multiple orders found')
_logger.info(error_msg)
raise ValidationError(error_msg)
# verify shasign
shasign_check = tx.acquirer_id._ogone_generate_shasign('out', data)
if shasign_check.upper() != shasign.upper():
error_msg = _('Ogone: invalid shasign, received %s, computed %s, for data %s') % (shasign, shasign_check, data)
_logger.info(error_msg)
raise ValidationError(error_msg)
if not tx.acquirer_reference:
tx.acquirer_reference = pay_id
# alias was created on ogone server, store it
if alias and tx.type == 'form_save':
Token = self.env['payment.token']
domain = [('acquirer_ref', '=', alias)]
cardholder = data.get('CN')
if not Token.search_count(domain):
_logger.info('Ogone: saving alias %s for partner %s' % (data.get('CARDNO'), tx.partner_id))
ref = Token.create({'name': data.get('CARDNO') + (' - ' + cardholder if cardholder else ''),
'partner_id': tx.partner_id.id,
'acquirer_id': tx.acquirer_id.id,
'acquirer_ref': alias})
tx.write({'payment_token_id': ref.id})
return tx
def _ogone_form_get_invalid_parameters(self, data):
invalid_parameters = []
# TODO: txn_id: should be false at draft, set afterwards, and verified with txn details
if self.acquirer_reference and data.get('PAYID') != self.acquirer_reference:
invalid_parameters.append(('PAYID', data.get('PAYID'), self.acquirer_reference))
# check what is bought
if float_compare(float(data.get('amount', '0.0')), self.amount, 2) != 0:
invalid_parameters.append(('amount', data.get('amount'), '%.2f' % self.amount))
if data.get('currency') != self.currency_id.name:
invalid_parameters.append(('currency', data.get('currency'), self.currency_id.name))
return invalid_parameters
def _ogone_form_validate(self, data):
if self.state in ['done', 'refunding', 'refunded']:
_logger.info('Ogone: trying to validate an already validated tx (ref %s)', self.reference)
return True
status = int(data.get('STATUS', '0'))
if status in self._ogone_valid_tx_status:
vals = {
'state': 'done',
'date_validate': datetime.datetime.strptime(data['TRXDATE'], '%m/%d/%y').strftime(DEFAULT_SERVER_DATE_FORMAT),
'acquirer_reference': data['PAYID'],
}
if data.get('ALIAS') and self.partner_id and \
(self.type == 'form_save' or self.acquirer_id.save_token == 'always')\
and not self.payment_token_id:
pm = self.env['payment.token'].create({
'partner_id': self.partner_id.id,
'acquirer_id': self.acquirer_id.id,
'acquirer_ref': data.get('ALIAS'),
'name': '%s - %s' % (data.get('CARDNO'), data.get('CN'))
})
vals.update(payment_token_id=pm.id)
self.write(vals)
if self.payment_token_id:
self.payment_token_id.verified = True
self.execute_callback()
# if this transaction is a validation one, then we refund the money we just withdrawn
if self.type == 'validation':
self.s2s_do_refund()
return True
elif status in self._ogone_cancel_tx_status:
self.write({
'state': 'cancel',
'acquirer_reference': data.get('PAYID'),
})
elif status in self._ogone_pending_tx_status or status in self._ogone_wait_tx_status:
self.write({
'state': 'pending',
'acquirer_reference': data.get('PAYID'),
})
else:
error = 'Ogone: feedback error: %(error_str)s\n\n%(error_code)s: %(error_msg)s' % {
'error_str': data.get('NCERRORPLUS'),
'error_code': data.get('NCERROR'),
'error_msg': ogone.OGONE_ERROR_MAP.get(data.get('NCERROR')),
}
_logger.info(error)
self.write({
'state': 'error',
'state_message': error,
'acquirer_reference': data.get('PAYID'),
})
return False
# --------------------------------------------------
# S2S RELATED METHODS
# --------------------------------------------------
def ogone_s2s_do_transaction(self, **kwargs):
# TODO: create tx with s2s type
account = self.acquirer_id
reference = self.reference or "ODOO-%s-%s" % (datetime.datetime.now().strftime('%y%m%d_%H%M%S'), self.partner_id.id)
param_plus = {
'return_url': kwargs.get('return_url', False)
}
data = {
'PSPID': account.ogone_pspid,
'USERID': account.ogone_userid,
'PSWD': account.ogone_password,
'ORDERID': reference,
'AMOUNT': int(self.amount * 100),
'CURRENCY': self.currency_id.name,
'OPERATION': 'SAL',
'ECI': 2, # Recurring (from MOTO)
'ALIAS': self.payment_token_id.acquirer_ref,
'RTIMEOUT': 30,
'PARAMPLUS' : url_encode(param_plus)
}
if kwargs.get('3d_secure'):
data.update({
'FLAG3D': 'Y',
'LANGUAGE': self.partner_id.lang or 'en_US',
})
for url in 'accept decline exception'.split():
key = '{0}_url'.format(url)
val = kwargs.pop(key, None)
if val:
key = '{0}URL'.format(url).upper()
data[key] = val
data['SHASIGN'] = self.acquirer_id._ogone_generate_shasign('in', data)
direct_order_url = 'https://secure.ogone.com/ncol/%s/orderdirect.asp' % (self.acquirer_id.environment)
logged_data = data.copy()
logged_data.pop('PSWD')
_logger.info("ogone_s2s_do_transaction: Sending values to URL %s, values:\n%s", direct_order_url, pformat(logged_data))
result = requests.post(direct_order_url, data=data).content
try:
tree = objectify.fromstring(result)
_logger.info('ogone_s2s_do_transaction: Values received:\n%s', etree.tostring(tree, pretty_print=True, encoding='utf-8'))
except etree.XMLSyntaxError:
# invalid response from ogone
_logger.exception('Invalid xml response from ogone')
_logger.info('ogone_s2s_do_transaction: Values received:\n%s', result)
raise
return self._ogone_s2s_validate_tree(tree)
def ogone_s2s_do_refund(self, **kwargs):
        # only refund if this transaction was paid and has not already been refunded
if self.state != 'done':
return False
self.state = 'refunding'
account = self.acquirer_id
reference = self.reference or "ODOO-%s-%s" % (datetime.datetime.now().strftime('%y%m%d_%H%M%S'), self.partner_id.id)
data = {
'PSPID': account.ogone_pspid,
'USERID': account.ogone_userid,
'PSWD': account.ogone_password,
'ORDERID': reference,
'AMOUNT': int(self.amount * 100),
'CURRENCY': self.currency_id.name,
'OPERATION': 'RFS',
'PAYID': self.acquirer_reference,
}
data['SHASIGN'] = self.acquirer_id._ogone_generate_shasign('in', data)
direct_order_url = 'https://secure.ogone.com/ncol/%s/maintenancedirect.asp' % (self.acquirer_id.environment)
logged_data = data.copy()
logged_data.pop('PSWD')
_logger.info("ogone_s2s_do_refund: Sending values to URL %s, values:\n%s", direct_order_url, pformat(logged_data))
result = requests.post(direct_order_url, data=data).content
try:
tree = objectify.fromstring(result)
_logger.info('ogone_s2s_do_refund: Values received:\n%s', etree.tostring(tree, pretty_print=True, encoding='utf-8'))
except etree.XMLSyntaxError:
# invalid response from ogone
_logger.exception('Invalid xml response from ogone')
_logger.info('ogone_s2s_do_refund: Values received:\n%s', result)
raise
return self._ogone_s2s_validate_tree(tree)
def _ogone_s2s_validate(self):
tree = self._ogone_s2s_get_tx_status()
return self._ogone_s2s_validate_tree(tree)
def _ogone_s2s_validate_tree(self, tree, tries=2):
if self.state not in ('draft', 'pending', 'refunding'):
_logger.info('Ogone: trying to validate an already validated tx (ref %s)', self.reference)
return True
status = int(tree.get('STATUS') or 0)
if status in self._ogone_valid_tx_status:
new_state = 'refunded' if self.state == 'refunding' else 'done'
self.write({
'state': new_state,
'date_validate': datetime.date.today().strftime(DEFAULT_SERVER_DATE_FORMAT),
'acquirer_reference': tree.get('PAYID'),
})
if tree.get('ALIAS') and self.partner_id and \
(self.type == 'form_save' or self.acquirer_id.save_token == 'always')\
and not self.payment_token_id:
pm = self.env['payment.token'].create({
'partner_id': self.partner_id.id,
'acquirer_id': self.acquirer_id.id,
'acquirer_ref': tree.get('ALIAS'),
'name': tree.get('CARDNO'),
})
self.write({'payment_token_id': pm.id})
if self.payment_token_id:
self.payment_token_id.verified = True
self.execute_callback()
            # if this transaction is a validation one, then we refund the money we just withdrew
if self.type == 'validation':
self.s2s_do_refund()
return True
elif status in self._ogone_cancel_tx_status:
self.write({
'state': 'cancel',
'acquirer_reference': tree.get('PAYID'),
})
elif status in self._ogone_pending_tx_status:
new_state = 'refunding' if self.state == 'refunding' else 'pending'
vals = {
'state': new_state,
'acquirer_reference': tree.get('PAYID'),
}
if status == 46: # HTML 3DS
vals['html_3ds'] = ustr(base64.b64decode(tree.HTML_ANSWER.text))
self.write(vals)
elif status in self._ogone_wait_tx_status and tries > 0:
time.sleep(0.5)
self.write({'acquirer_reference': tree.get('PAYID')})
tree = self._ogone_s2s_get_tx_status()
return self._ogone_s2s_validate_tree(tree, tries - 1)
else:
error = 'Ogone: feedback error: %(error_str)s\n\n%(error_code)s: %(error_msg)s' % {
'error_str': tree.get('NCERRORPLUS'),
'error_code': tree.get('NCERROR'),
'error_msg': ogone.OGONE_ERROR_MAP.get(tree.get('NCERROR')),
}
_logger.info(error)
self.write({
'state': 'error',
'state_message': error,
'acquirer_reference': tree.get('PAYID'),
})
return False
def _ogone_s2s_get_tx_status(self):
account = self.acquirer_id
#reference = tx.reference or "ODOO-%s-%s" % (datetime.datetime.now().strftime('%Y%m%d_%H%M%S'), tx.partner_id.id)
data = {
'PAYID': self.acquirer_reference,
'PSPID': account.ogone_pspid,
'USERID': account.ogone_userid,
'PSWD': account.ogone_password,
}
query_direct_url = 'https://secure.ogone.com/ncol/%s/querydirect.asp' % (self.acquirer_id.environment)
logged_data = data.copy()
logged_data.pop('PSWD')
_logger.info("_ogone_s2s_get_tx_status: Sending values to URL %s, values:\n%s", query_direct_url, pformat(logged_data))
result = requests.post(query_direct_url, data=data).content
try:
tree = objectify.fromstring(result)
_logger.info('_ogone_s2s_get_tx_status: Values received:\n%s', etree.tostring(tree, pretty_print=True, encoding='utf-8'))
except etree.XMLSyntaxError:
# invalid response from ogone
_logger.exception('Invalid xml response from ogone')
_logger.info('_ogone_s2s_get_tx_status: Values received:\n%s', result)
raise
return tree
class PaymentToken(models.Model):
_inherit = 'payment.token'
def ogone_create(self, values):
if values.get('cc_number'):
            # create an alias via batch
values['cc_number'] = values['cc_number'].replace(' ', '')
acquirer = self.env['payment.acquirer'].browse(values['acquirer_id'])
alias = 'ODOO-NEW-ALIAS-%s' % time.time()
expiry = str(values['cc_expiry'][:2]) + str(values['cc_expiry'][-2:])
line = 'ADDALIAS;%(alias)s;%(cc_holder_name)s;%(cc_number)s;%(expiry)s;%(cc_brand)s;%(pspid)s'
line = line % dict(values, alias=alias, expiry=expiry, pspid=acquirer.ogone_pspid)
data = {
'FILE_REFERENCE': alias,
'TRANSACTION_CODE': 'MTR',
'OPERATION': 'SAL',
                'NB_PAYMENTS': 1, # even though we do not actually have any payment, Ogone requires a nonzero value
'FILE': normalize('NFKD', line).encode('ascii','ignore'), # Ogone Batch must be ASCII only
'REPLY_TYPE': 'XML',
'PSPID': acquirer.ogone_pspid,
'USERID': acquirer.ogone_userid,
'PSWD': acquirer.ogone_password,
'PROCESS_MODE': 'CHECKANDPROCESS',
}
url = 'https://secure.ogone.com/ncol/%s/AFU_agree.asp' % (acquirer.environment,)
result = requests.post(url, data=data).content
try:
tree = objectify.fromstring(result)
except etree.XMLSyntaxError:
_logger.exception('Invalid xml response from ogone')
return None
error_code = error_str = None
if hasattr(tree, 'PARAMS_ERROR'):
error_code = tree.NCERROR.text
error_str = 'PARAMS ERROR: %s' % (tree.PARAMS_ERROR.text or '',)
else:
node = tree.FORMAT_CHECK
error_node = getattr(node, 'FORMAT_CHECK_ERROR', None)
if error_node is not None:
error_code = error_node.NCERROR.text
error_str = 'CHECK ERROR: %s' % (error_node.ERROR.text or '',)
if error_code:
error_msg = tree.get(error_code)
error = '%s\n\n%s: %s' % (error_str, error_code, error_msg)
_logger.error(error)
raise Exception(error)
return {
'acquirer_ref': alias,
'name': 'XXXXXXXXXXXX%s - %s' % (values['cc_number'][-4:], values['cc_holder_name'])
}
return {}
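# --------------------------------------------------
# SHASIGN SKETCH
# --------------------------------------------------
# A minimal sketch of the signing scheme the _ogone_generate_shasign calls
# above rely on (a sketch of Ogone's documented convention, not the Odoo
# implementation): parameter names are upper-cased and sorted alphabetically,
# every non-empty parameter is concatenated as KEY=value<passphrase>, and the
# result is hashed (SHA-1 here; SHA-256/512 are also supported). The function
# name and the passphrase argument are placeholders.
import hashlib

def ogone_shasign_sketch(params, passphrase, algorithm='sha1'):
    items = sorted((k.upper(), v) for k, v in params.items()
                   if v not in (None, ''))
    raw = ''.join('%s=%s%s' % (k, v, passphrase) for k, v in items)
    return getattr(hashlib, algorithm)(raw.encode('utf-8')).hexdigest().upper()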
| maxive/erp | addons/payment_ogone/models/payment.py | Python | agpl-3.0 | 25,633 |
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import unittest
from unittest.mock import patch
import pytest
from airflow.providers.google.cloud.transfers.presto_to_gcs import PrestoToGCSOperator
TASK_ID = "test-presto-to-gcs"
PRESTO_CONN_ID = "my-presto-conn"
GCP_CONN_ID = "my-gcp-conn"
IMPERSONATION_CHAIN = ["ACCOUNT_1", "ACCOUNT_2", "ACCOUNT_3"]
SQL = "SELECT * FROM memory.default.test_multiple_types"
BUCKET = "gs://test"
FILENAME = "test_{}.ndjson"
NDJSON_LINES = [
b'{"some_num": 42, "some_str": "mock_row_content_1"}\n',
b'{"some_num": 43, "some_str": "mock_row_content_2"}\n',
b'{"some_num": 44, "some_str": "mock_row_content_3"}\n',
]
CSV_LINES = [
b"some_num,some_str\r\n",
b"42,mock_row_content_1\r\n",
b"43,mock_row_content_2\r\n",
b"44,mock_row_content_3\r\n",
]
SCHEMA_FILENAME = "schema_test.json"
SCHEMA_JSON = b'[{"name": "some_num", "type": "INT64"}, {"name": "some_str", "type": "STRING"}]'
@pytest.mark.integration("presto")
class TestPrestoToGCSOperator(unittest.TestCase):
def test_init(self):
"""Test PrestoToGCSOperator instance is properly initialized."""
op = PrestoToGCSOperator(
task_id=TASK_ID,
sql=SQL,
bucket=BUCKET,
filename=FILENAME,
impersonation_chain=IMPERSONATION_CHAIN,
)
assert op.task_id == TASK_ID
assert op.sql == SQL
assert op.bucket == BUCKET
assert op.filename == FILENAME
assert op.impersonation_chain == IMPERSONATION_CHAIN
@patch("airflow.providers.google.cloud.transfers.presto_to_gcs.PrestoHook")
@patch("airflow.providers.google.cloud.transfers.sql_to_gcs.GCSHook")
def test_save_as_json(self, mock_gcs_hook, mock_presto_hook):
def _assert_upload(bucket, obj, tmp_filename, mime_type, gzip):
assert BUCKET == bucket
assert FILENAME.format(0) == obj
assert "application/json" == mime_type
assert not gzip
with open(tmp_filename, "rb") as file:
assert b"".join(NDJSON_LINES) == file.read()
mock_gcs_hook.return_value.upload.side_effect = _assert_upload
mock_cursor = mock_presto_hook.return_value.get_conn.return_value.cursor
mock_cursor.return_value.description = [
("some_num", "INTEGER", None, None, None, None, None),
("some_str", "VARCHAR", None, None, None, None, None),
]
mock_cursor.return_value.fetchone.side_effect = [
[42, "mock_row_content_1"],
[43, "mock_row_content_2"],
[44, "mock_row_content_3"],
None,
]
op = PrestoToGCSOperator(
task_id=TASK_ID,
sql=SQL,
bucket=BUCKET,
filename=FILENAME,
presto_conn_id=PRESTO_CONN_ID,
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=IMPERSONATION_CHAIN,
)
op.execute(None)
mock_presto_hook.assert_called_once_with(presto_conn_id=PRESTO_CONN_ID)
mock_gcs_hook.assert_called_once_with(
delegate_to=None,
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=IMPERSONATION_CHAIN,
)
mock_gcs_hook.return_value.upload.assert_called()
@patch("airflow.providers.google.cloud.transfers.presto_to_gcs.PrestoHook")
@patch("airflow.providers.google.cloud.transfers.sql_to_gcs.GCSHook")
def test_save_as_json_with_file_splitting(self, mock_gcs_hook, mock_presto_hook):
"""Test that ndjson is split by approx_max_file_size_bytes param."""
expected_upload = {
FILENAME.format(0): b"".join(NDJSON_LINES[:2]),
FILENAME.format(1): NDJSON_LINES[2],
}
def _assert_upload(bucket, obj, tmp_filename, mime_type, gzip):
assert BUCKET == bucket
assert "application/json" == mime_type
assert not gzip
with open(tmp_filename, "rb") as file:
assert expected_upload[obj] == file.read()
mock_gcs_hook.return_value.upload.side_effect = _assert_upload
mock_cursor = mock_presto_hook.return_value.get_conn.return_value.cursor
mock_cursor.return_value.description = [
("some_num", "INTEGER", None, None, None, None, None),
("some_str", "VARCHAR(20)", None, None, None, None, None),
]
mock_cursor.return_value.fetchone.side_effect = [
[42, "mock_row_content_1"],
[43, "mock_row_content_2"],
[44, "mock_row_content_3"],
None,
]
op = PrestoToGCSOperator(
task_id=TASK_ID,
sql=SQL,
bucket=BUCKET,
filename=FILENAME,
approx_max_file_size_bytes=len(expected_upload[FILENAME.format(0)]),
)
op.execute(None)
mock_gcs_hook.return_value.upload.assert_called()
@patch("airflow.providers.google.cloud.transfers.presto_to_gcs.PrestoHook")
@patch("airflow.providers.google.cloud.transfers.sql_to_gcs.GCSHook")
def test_save_as_json_with_schema_file(self, mock_gcs_hook, mock_presto_hook):
"""Test writing schema files."""
def _assert_upload(bucket, obj, tmp_filename, mime_type, gzip):
if obj == SCHEMA_FILENAME:
with open(tmp_filename, "rb") as file:
assert SCHEMA_JSON == file.read()
mock_gcs_hook.return_value.upload.side_effect = _assert_upload
mock_cursor = mock_presto_hook.return_value.get_conn.return_value.cursor
mock_cursor.return_value.description = [
("some_num", "INTEGER", None, None, None, None, None),
("some_str", "VARCHAR", None, None, None, None, None),
]
mock_cursor.return_value.fetchone.side_effect = [
[42, "mock_row_content_1"],
[43, "mock_row_content_2"],
[44, "mock_row_content_3"],
None,
]
op = PrestoToGCSOperator(
task_id=TASK_ID,
sql=SQL,
bucket=BUCKET,
filename=FILENAME,
schema_filename=SCHEMA_FILENAME,
export_format="csv",
presto_conn_id=PRESTO_CONN_ID,
gcp_conn_id=GCP_CONN_ID,
)
op.execute(None)
# once for the file and once for the schema
assert 2 == mock_gcs_hook.return_value.upload.call_count
@patch("airflow.providers.google.cloud.transfers.sql_to_gcs.GCSHook")
@patch("airflow.providers.google.cloud.transfers.presto_to_gcs.PrestoHook")
def test_save_as_csv(self, mock_presto_hook, mock_gcs_hook):
def _assert_upload(bucket, obj, tmp_filename, mime_type, gzip):
assert BUCKET == bucket
assert FILENAME.format(0) == obj
assert "text/csv" == mime_type
assert not gzip
with open(tmp_filename, "rb") as file:
assert b"".join(CSV_LINES) == file.read()
mock_gcs_hook.return_value.upload.side_effect = _assert_upload
mock_cursor = mock_presto_hook.return_value.get_conn.return_value.cursor
mock_cursor.return_value.description = [
("some_num", "INTEGER", None, None, None, None, None),
("some_str", "VARCHAR", None, None, None, None, None),
]
mock_cursor.return_value.fetchone.side_effect = [
[42, "mock_row_content_1"],
[43, "mock_row_content_2"],
[44, "mock_row_content_3"],
None,
]
op = PrestoToGCSOperator(
task_id=TASK_ID,
sql=SQL,
bucket=BUCKET,
filename=FILENAME,
export_format="csv",
presto_conn_id=PRESTO_CONN_ID,
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=IMPERSONATION_CHAIN,
)
op.execute(None)
mock_gcs_hook.return_value.upload.assert_called()
mock_presto_hook.assert_called_once_with(presto_conn_id=PRESTO_CONN_ID)
mock_gcs_hook.assert_called_once_with(
delegate_to=None,
gcp_conn_id=GCP_CONN_ID,
impersonation_chain=IMPERSONATION_CHAIN,
)
@patch("airflow.providers.google.cloud.transfers.presto_to_gcs.PrestoHook")
@patch("airflow.providers.google.cloud.transfers.sql_to_gcs.GCSHook")
def test_save_as_csv_with_file_splitting(self, mock_gcs_hook, mock_presto_hook):
"""Test that csv is split by approx_max_file_size_bytes param."""
expected_upload = {
FILENAME.format(0): b"".join(CSV_LINES[:3]),
FILENAME.format(1): b"".join([CSV_LINES[0], CSV_LINES[3]]),
}
def _assert_upload(bucket, obj, tmp_filename, mime_type, gzip):
assert BUCKET == bucket
assert "text/csv" == mime_type
assert not gzip
with open(tmp_filename, "rb") as file:
assert expected_upload[obj] == file.read()
mock_gcs_hook.return_value.upload.side_effect = _assert_upload
mock_cursor = mock_presto_hook.return_value.get_conn.return_value.cursor
mock_cursor.return_value.description = [
("some_num", "INTEGER", None, None, None, None, None),
("some_str", "VARCHAR(20)", None, None, None, None, None),
]
mock_cursor.return_value.fetchone.side_effect = [
[42, "mock_row_content_1"],
[43, "mock_row_content_2"],
[44, "mock_row_content_3"],
None,
]
op = PrestoToGCSOperator(
task_id=TASK_ID,
sql=SQL,
bucket=BUCKET,
filename=FILENAME,
approx_max_file_size_bytes=len(expected_upload[FILENAME.format(0)]),
export_format="csv",
)
op.execute(None)
mock_gcs_hook.return_value.upload.assert_called()
@patch("airflow.providers.google.cloud.transfers.presto_to_gcs.PrestoHook")
@patch("airflow.providers.google.cloud.transfers.sql_to_gcs.GCSHook")
def test_save_as_csv_with_schema_file(self, mock_gcs_hook, mock_presto_hook):
"""Test writing schema files."""
def _assert_upload(bucket, obj, tmp_filename, mime_type, gzip):
if obj == SCHEMA_FILENAME:
with open(tmp_filename, "rb") as file:
assert SCHEMA_JSON == file.read()
mock_gcs_hook.return_value.upload.side_effect = _assert_upload
mock_cursor = mock_presto_hook.return_value.get_conn.return_value.cursor
mock_cursor.return_value.description = [
("some_num", "INTEGER", None, None, None, None, None),
("some_str", "VARCHAR", None, None, None, None, None),
]
mock_cursor.return_value.fetchone.side_effect = [
[42, "mock_row_content_1"],
[43, "mock_row_content_2"],
[44, "mock_row_content_3"],
None,
]
op = PrestoToGCSOperator(
task_id=TASK_ID,
sql=SQL,
bucket=BUCKET,
filename=FILENAME,
schema_filename=SCHEMA_FILENAME,
export_format="csv",
)
op.execute(None)
# once for the file and once for the schema
assert 2 == mock_gcs_hook.return_value.upload.call_count
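# A minimal sketch (assuming a standard Airflow 2.x deployment with the
# Google provider installed) of how the operator under test is typically
# wired into a DAG; the constants are the ones defined at the top of this
# test module, and the dag_id is hypothetical.
from datetime import datetime
from airflow import DAG

with DAG(dag_id="presto_to_gcs_example",
         start_date=datetime(2021, 1, 1),
         schedule_interval=None) as example_dag:
    export_table = PrestoToGCSOperator(
        task_id=TASK_ID,
        sql=SQL,
        bucket=BUCKET,
        filename=FILENAME,
        presto_conn_id=PRESTO_CONN_ID,
        gcp_conn_id=GCP_CONN_ID,
    )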
| apache/incubator-airflow | tests/providers/google/cloud/transfers/test_presto_to_gcs.py | Python | apache-2.0 | 12,057 |
# This file is part of BurnMan - a thermoelastic and thermodynamic toolkit for the Earth and Planetary Sciences
# Copyright (C) 2012 - 2017 by the BurnMan team, released under the GNU
# GPL v2 or later.
from __future__ import absolute_import
import warnings
import numpy as np
from . import equation_of_state as eos
def tait_constants(params):
"""
returns parameters for the modified Tait equation of state
derived from K_T and its two first pressure derivatives
EQ 4 from Holland and Powell, 2011
"""
a = (1. + params['Kprime_0']) / (
1. + params['Kprime_0'] + params['K_0'] * params['Kdprime_0'])
b = params['Kprime_0'] / params['K_0'] - \
params['Kdprime_0'] / (1. + params['Kprime_0'])
c = (1. + params['Kprime_0'] + params['K_0'] * params['Kdprime_0']) / (
params['Kprime_0'] * params['Kprime_0'] + params['Kprime_0'] - params['K_0'] * params['Kdprime_0'])
return a, b, c
def modified_tait(x, params):
"""
    equation for the modified Tait equation of state; takes x = V/V_0 and
    returns pressure in the same units that are supplied for the reference
    bulk modulus (params['K_0'])
    EQ 2 from Holland and Powell, 2011
"""
a, b, c = tait_constants(params)
return (np.power((x + a - 1.) / a, -1. / c) - 1.) / b + params['P_0']
def volume(pressure, params):
"""
    Returns volume [m^3] as a function of pressure [Pa]
EQ 12
"""
a, b, c = tait_constants(params)
x = 1 - a * \
(1. - np.power((1. + b * (pressure - params['P_0'])), -1.0 * c))
return x * params['V_0']
def bulk_modulus(pressure, params):
"""
Returns isothermal bulk modulus :math:`K_T` of the mineral. :math:`[Pa]`.
EQ 13+2
"""
a, b, c = tait_constants(params)
return params['K_0'] * (1. + b * (pressure - params['P_0'])) * (a + (1. - a) * np.power((1. + b * (pressure - params['P_0'])), c))
class MT(eos.EquationOfState):
"""
Base class for the generic modified Tait equation of state.
References for this can be found in :cite:`HC1974`
and :cite:`HP2011` (followed here).
An instance "m" of a Mineral can be assigned this
equation of state with the command m.set_method('mt')
(or by initialising the class with the param
equation_of_state = 'mt').
"""
def volume(self, pressure, temperature, params):
"""
Returns volume :math:`[m^3]` as a function of pressure :math:`[Pa]`.
"""
return volume(pressure, params)
    def pressure(self, temperature, volume, params):
        """
        Returns pressure [Pa] as a function of temperature [K] and volume [m^3]
        """
        # EQ 2 inverts EQ 12, so modified_tait expects x = V/V_0
        return modified_tait(volume / params['V_0'], params)
def isothermal_bulk_modulus(self, pressure, temperature, volume, params):
"""
Returns isothermal bulk modulus :math:`K_T` of the mineral. :math:`[Pa]`.
"""
return bulk_modulus(pressure, params)
def adiabatic_bulk_modulus(self, pressure, temperature, volume, params):
"""
Since this equation of state does not contain temperature effects, simply return a very large number. :math:`[Pa]`
"""
return 1.e99
def shear_modulus(self, pressure, temperature, volume, params):
"""
Not implemented in the Modified Tait EoS. :math:`[Pa]`
Returns 0.
        Could potentially apply a fixed Poisson's ratio as a rough estimate.
"""
return 0.
def entropy(self, pressure, temperature, volume, params):
"""
Returns the molar entropy :math:`\mathcal{S}` of the mineral. :math:`[J/K/mol]`
"""
return 0.
def internal_energy(self, pressure, temperature, volume, params):
"""
Returns the internal energy :math:`\mathcal{E}` of the mineral. :math:`[J/mol]`
"""
return self.gibbs_free_energy(pressure, temperature, volume, params) - volume*pressure
def gibbs_free_energy(self, pressure, temperature, volume, params):
"""
Returns the Gibbs free energy :math:`\mathcal{G}` of the mineral. :math:`[J/mol]`
"""
# G = int VdP = [PV] - int PdV = E + PV
a, b, c = tait_constants(params)
intVdP = params['V_0']*( a/(b*(1. - c)) *
(np.power(b*(pressure - params['P_0']) + 1.,
1. - c) - 1.) +
(1. - a)*(pressure - params['P_0']))
return intVdP + params['E_0'] + params['V_0']*params['P_0']
def heat_capacity_v(self, pressure, temperature, volume, params):
"""
Since this equation of state does not contain temperature effects, simply return a very large number. :math:`[J/K/mol]`
"""
return 1.e99
def heat_capacity_p(self, pressure, temperature, volume, params):
"""
Since this equation of state does not contain temperature effects, simply return a very large number. :math:`[J/K/mol]`
"""
return 1.e99
def thermal_expansivity(self, pressure, temperature, volume, params):
"""
Since this equation of state does not contain temperature effects, simply return zero. :math:`[1/K]`
"""
return 0.
def grueneisen_parameter(self, pressure, temperature, volume, params):
"""
Since this equation of state does not contain temperature effects, simply return zero. :math:`[unitless]`
"""
return 0.
def validate_parameters(self, params):
"""
Check for existence and validity of the parameters
"""
if 'E_0' not in params:
params['E_0'] = 0.
if 'P_0' not in params:
params['P_0'] = 1.e5
# G and Gprime are not defined in this equation of state,
# We can model density and bulk modulus just fine without them,
# so just add them to the dictionary as nans
if 'G_0' not in params:
params['G_0'] = float('nan')
if 'Gprime_0' not in params:
params['Gprime_0'] = float('nan')
# Check that all the required keys are in the dictionary
expected_keys = [
'V_0', 'K_0', 'Kprime_0', 'Kdprime_0', 'G_0', 'Gprime_0']
for k in expected_keys:
if k not in params:
raise KeyError('params object missing parameter : ' + k)
# Finally, check that the values are reasonable.
if params['P_0'] < 0.:
warnings.warn('Unusual value for P_0', stacklevel=2)
if params['V_0'] < 1.e-7 or params['V_0'] > 1.e-2:
warnings.warn('Unusual value for V_0', stacklevel=2)
if params['K_0'] < 1.e9 or params['K_0'] > 1.e13:
warnings.warn('Unusual value for K_0', stacklevel=2)
if params['Kprime_0'] < 0. or params['Kprime_0'] > 10.:
warnings.warn('Unusual value for Kprime_0', stacklevel=2)
if params['G_0'] < 0.0 or params['G_0'] > 1.e13:
warnings.warn('Unusual value for G_0', stacklevel=2)
if params['Gprime_0'] < -5. or params['Gprime_0'] > 10.:
warnings.warn('Unusual value for Gprime_0', stacklevel=2)
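# Illustrative usage of the standalone functions above. The parameter values
# are round numbers of the right order of magnitude for a mantle oxide, not a
# fitted dataset; Kdprime_0 ~ -Kprime_0/K_0 follows the Holland and Powell
# (2011) heuristic. Evaluating modified_tait at V/V_0 should recover the
# input pressure.
if __name__ == '__main__':
    params = {'V_0': 1.1e-5,               # m^3/mol
              'K_0': 1.6e11,               # Pa
              'Kprime_0': 4.0,
              'Kdprime_0': -4.0 / 1.6e11,  # 1/Pa
              'P_0': 1.e5}                 # Pa
    V = volume(10.e9, params)                     # volume at 10 GPa
    P = modified_tait(V / params['V_0'], params)  # round trip, ~10 GPa
    K = bulk_modulus(10.e9, params)               # stiffens with pressure
    print(V, P, K)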
| sannecottaar/burnman | burnman/eos/modified_tait.py | Python | gpl-2.0 | 7,218 |
"""Test that anonymous structs/unions are transparent to member access"""
import lldb
from lldbsuite.test.decorators import *
from lldbsuite.test.lldbtest import *
from lldbsuite.test import lldbutil
class AnonymousTestCase(TestBase):
mydir = TestBase.compute_mydir(__file__)
@skipIf(
compiler="icc",
bugnumber="llvm.org/pr15036: LLDB generates an incorrect AST layout for an anonymous struct when DWARF is generated by ICC")
def test_expr_nest(self):
self.build()
self.common_setup(self.line0)
# These should display correctly.
self.expect("expression n->foo.d", VARIABLES_DISPLAYED_CORRECTLY,
substrs=["= 4"])
self.expect("expression n->b", VARIABLES_DISPLAYED_CORRECTLY,
substrs=["= 2"])
def test_expr_child(self):
self.build()
self.common_setup(self.line1)
# These should display correctly.
self.expect("expression c->foo.d", VARIABLES_DISPLAYED_CORRECTLY,
substrs=["= 4"])
self.expect(
"expression c->grandchild.b",
VARIABLES_DISPLAYED_CORRECTLY,
substrs=["= 2"])
@skipIf(
compiler="icc",
bugnumber="llvm.org/pr15036: This particular regression was introduced by r181498")
def test_expr_grandchild(self):
self.build()
self.common_setup(self.line2)
# These should display correctly.
self.expect("expression g.child.foo.d", VARIABLES_DISPLAYED_CORRECTLY,
substrs=["= 4"])
self.expect("expression g.child.b", VARIABLES_DISPLAYED_CORRECTLY,
substrs=["= 2"])
def test_expr_parent(self):
self.build()
if "clang" in self.getCompiler() and "3.4" in self.getCompilerVersion():
self.skipTest(
"llvm.org/pr16214 -- clang emits partial DWARF for structures referenced via typedef")
self.common_setup(self.line2)
# These should display correctly.
self.expect("expression pz", VARIABLES_DISPLAYED_CORRECTLY,
substrs=["(type_z *) $", " = 0x0000"])
self.expect("expression z.y", VARIABLES_DISPLAYED_CORRECTLY,
substrs=["(type_y) $", "dummy = 2"])
def test_expr_null(self):
self.build()
self.common_setup(self.line2)
        # This should fail because pz is 0, but it succeeds on OS X.
# This fails on Linux with an upstream error "Couldn't dematerialize struct", as does "p *n" with "int *n = 0".
# Note that this can also trigger llvm.org/pr15036 when run
# interactively at the lldb command prompt.
self.expect("expression *(type_z *)pz", error=True)
def test_child_by_name(self):
self.build()
# Set debugger into synchronous mode
self.dbg.SetAsync(False)
# Create a target
exe = self.getBuildArtifact("a.out")
target = self.dbg.CreateTarget(exe)
self.assertTrue(target, VALID_TARGET)
break_in_main = target.BreakpointCreateBySourceRegex(
'// Set breakpoint 2 here.', lldb.SBFileSpec(self.source))
self.assertTrue(break_in_main, VALID_BREAKPOINT)
process = target.LaunchSimple(
None, None, self.get_process_working_directory())
self.assertTrue(process, PROCESS_IS_VALID)
threads = lldbutil.get_threads_stopped_at_breakpoint(
process, break_in_main)
if len(threads) != 1:
self.fail("Failed to stop at breakpoint in main.")
thread = threads[0]
frame = thread.frames[0]
if not frame.IsValid():
self.fail("Failed to get frame 0.")
var_n = frame.FindVariable("n")
if not var_n.IsValid():
self.fail("Failed to get the variable 'n'")
elem_a = var_n.GetChildMemberWithName("a")
if not elem_a.IsValid():
self.fail("Failed to get the element a in n")
error = lldb.SBError()
value = elem_a.GetValueAsSigned(error, 1000)
if not error.Success() or value != 0:
self.fail("failed to get the correct value for element a in n")
def test_nest_flat(self):
self.build()
self.common_setup(self.line2)
# These should display correctly.
self.expect('frame variable n --flat',
substrs=['n.a = 0',
'n.b = 2',
'n.foo.c = 0',
'n.foo.d = 4'])
def setUp(self):
# Call super's setUp().
TestBase.setUp(self)
# Find the line numbers to break in main.c.
self.source = 'main.c'
self.line0 = line_number(self.source, '// Set breakpoint 0 here.')
self.line1 = line_number(self.source, '// Set breakpoint 1 here.')
self.line2 = line_number(self.source, '// Set breakpoint 2 here.')
def common_setup(self, line):
# Set debugger into synchronous mode
self.dbg.SetAsync(False)
# Create a target
exe = self.getBuildArtifact("a.out")
target = self.dbg.CreateTarget(exe)
self.assertTrue(target, VALID_TARGET)
# Set breakpoints inside and outside methods that take pointers to the
# containing struct.
lldbutil.run_break_set_by_file_and_line(
self, self.source, line, num_expected_locations=1, loc_exact=True)
# Now launch the process, and do not stop at entry point.
process = target.LaunchSimple(
None, None, self.get_process_working_directory())
self.assertTrue(process, PROCESS_IS_VALID)
# The stop reason of the thread should be breakpoint.
self.expect("thread list", STOPPED_DUE_TO_BREAKPOINT,
substrs=['stopped',
'stop reason = breakpoint'])
# The breakpoint should have a hit count of 1.
self.expect("breakpoint list -f", BREAKPOINT_HIT_ONCE,
substrs=[' resolved, hit count = 1'])
| endlessm/chromium-browser | third_party/llvm/lldb/test/API/lang/c/anonymous/TestAnonymous.py | Python | bsd-3-clause | 6,073 |
#!/usr/bin/env python
"""
Part 3, doaj
============
We repeat the same steps for DOAJ.
----
To run:
(vm) $ python scaffold3_doaj.py
To clean output:
(vm) $ make clean-3
"""
from __future__ import print_function
import os
import luigi
from gluish.task import BaseTask
from gluish.utils import shellout
class Task(BaseTask):
BASE = 'output'
TAG = '3'
def inputdir(self):
__dir__ = os.path.dirname(os.path.realpath(__file__))
return os.path.join(__dir__, '../input')
class DOAJInput(luigi.ExternalTask, Task):
"""
A single DOAJ dump.
"""
def output(self):
path = os.path.join(self.inputdir(), 'doaj/date-2016-02-01.ldj.gz')
return luigi.LocalTarget(path=path)
class DOAJIntermediateSchema(Task):
"""
Convert DOAJ into an intermediate schema. Takes 10 minutes.
"""
def requires(self):
"""
TODO: require the right input.
"""
def run(self):
"""
        TODO: convert to intermediate schema; assign 'output' to the
        temporary file path returned by shellout (imported above) before
        the move below. See the sketch after this class.
"""
luigi.File(output).move(self.output().path)
def output(self):
return luigi.LocalTarget(path=self.path(ext='ldj.gz'))
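# One possible completion of the scaffold above, as a sketch. The converter
# command "span-import -i doaj" is an assumption based on the span toolkit
# used elsewhere in this workshop; adjust it to your toolchain.
class DOAJIntermediateSchemaSolved(Task):
    """
    Possible solution: convert the DOAJ dump into the intermediate schema.
    """
    def requires(self):
        return DOAJInput()
    def run(self):
        output = shellout("span-import -i doaj {input} | gzip -c > {output}",
                          input=self.input().path)
        luigi.File(output).move(self.output().path)
    def output(self):
        return luigi.LocalTarget(path=self.path(ext='ldj.gz'))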
if __name__ == '__main__':
luigi.run(['DOAJIntermediateSchema', '--workers', '1', '--local-scheduler'])
| miku/siskin | docs/elag-2016/byoi/code/scaffold3_doaj.py | Python | gpl-3.0 | 1,281 |
import numpy as np
import matplotlib.pyplot as plt
import time
from pylab import *
from astropy import constants as cs
from scipy import integrate
UA=1.49597860*10**13 #1 AU in cm
##Part 1
#Load each column of the data file into its own variable
longitud= np.loadtxt('sun_AM0.dat', usecols = [0])
flujo= np.loadtxt('sun_AM0.dat', usecols = [1])
#Convert units: nm -> Angstrom and W m^-2 nm^-1 -> erg s^-1 cm^-2 A^-1
longitudb=longitud*10
flujob=flujo*100
#Plot
semilogx(longitudb,flujob)
xlabel('$Wavelength\; (\lambda)\; [\AA]$')
ylabel('$Flux\; [ergs\cdot s^{-1} \cdot cm^{-2} \cdot \AA^{-1}]$')
title('Solar spectrum')
grid(True)
savefig("EspectroSolar.png")
show()
##Part 2
#Integration method: left-endpoint rectangle rule
def cuadrilatero(x,y):
    s=0
    for i in range(len(x)-1):
        s=s+(x[i+1]-x[i])*y[i]
    return s
#Luminosity calculation
Luminosidad2=cuadrilatero(longitudb,flujob)*4*np.pi*UA**2
print "Luminosity part 2 = " + str(Luminosidad2) + " erg/s"
##Part 3
T=5777
h=cs.h.cgs.value
c=cs.c.cgs.value
k=cs.k_B.cgs.value
def fun(x):
    #Integrand after applying the suggested change of variable u=tan(x)
    f=((np.tan(x)**3)*(1+np.tan(x)**2))/(np.exp(np.tan(x))-1)
    return f
def medio(f,a,b):
    #Midpoint method suggested in class for computing integrals
    m=(a+b)/2
    s=(b-a)*f(m)
    return s
def linspace(a,b,step):
    #Generator analogous to the MATLAB function (note: shadows pylab's linspace)
    while a<=b:
        yield a
        a+=step
def simpson(f,a,b):
    #Composite Simpson's rule for computing integrals
    n=100000
    s=f(a)+f(b)
    h=(b-a)/n
    for i in linspace(1,n-1,2):
        s=s+4*f(a+i*h)
    for j in linspace(2,n-2,2):
        s=s+2*f(a+j*h)
    return s*h/3
constante=((2*pi*h/c**2)*(k*T/h)**4)
t0=time.time()
Energia3=constante*(medio(fun,0.0,0.25)+simpson(fun,0.25,(np.pi/2)-0.25)+medio(fun,(np.pi/2)-0.25,(np.pi/2)))
tf=time.time()
#The integral is evaluated over the full range after the change of variable;
#the first and last terms handle the endpoints that could not be evaluated directly
Energia3a=constante*((np.pi**4)/15) #Analytic value used for comparison
print "Surface flux part 3 = " + str(Energia3) + " erg s^-1 cm^-2"
print "Takes " + str(tf-t0) + " seconds to run"
print "Analytically computed value = "+str(Energia3a)
r=np.sqrt(Luminosidad2/(4*np.pi*Energia3))
print "Estimated solar radius = "+str(r)+" cm"
##Part 4
#Luminosity computed with numpy's "trapz" method
t0=time.time()
Luminosidad4=np.trapz(flujob,x=longitudb)*4*np.pi*UA**2
tf=time.time()
print "Luminosity part 4 = " + str(Luminosidad4) + " erg/s"
#Total emitted flux computed with scipy's "quad" method
t0=time.time()
F=lambda x:(x**3)/(np.exp(x)-1) #Integrand of the Planck integral
integrando4=((2*pi*h/c**2)*(k*T/h)**4)
i4=integrate.quad(F,0,np.inf)
Energia4=integrando4*i4[0]
tf=time.time()
print "Surface flux part 4 = " + str(Energia4) + " erg s^-1 cm^-2"
print "Takes " + str(tf-t0) + " seconds to run"
| alvarocesped/01Tarea | Tarea1.py | Python | mit | 2,964 |
from django import forms
from django.forms.util import flatatt
from django.utils.encoding import smart_unicode
from django.utils.html import escape
from django.utils.safestring import mark_safe
from django.utils.translation import ugettext as _
# From http://www.djangosnippets.org/snippets/200/
# widget for select with optional opt groups
# modified from ticket 3442
# not sure if it's better but it doesn't force all options to be grouped
# Example:
# groceries = ((False, (('milk','milk'), (-1,'eggs'))),
# ('fruit', ((0,'apple'), (1,'orange'))),
# ('', (('yum','beer'), )),
# )
# grocery_list = GroupedChoiceField(choices=groceries)
# Renders:
# <select name="grocery_list" id="id_grocery_list">
# <option value="milk">milk</option>
# <option value="-1">eggs</option>
# <optgroup label="fruit">
# <option value="0">apple</option>
# <option value="1">orange</option>
# </optgroup>
# <option value="yum">beer</option>
# </select>
class GroupedSelect(forms.Select):
def render(self, name, value, attrs=None, choices=()):
if value is None:
value = ''
final_attrs = self.build_attrs(attrs, name=name)
output = [u'<select%s>' % flatatt(final_attrs)]
str_value = smart_unicode(value)
for group_label, group in self.choices:
if group_label: # should belong to an optgroup
group_label = smart_unicode(group_label)
output.append(u'<optgroup label="%s">' % escape(group_label))
for k, v in group:
option_value = smart_unicode(k)
option_label = smart_unicode(v)
selected_html = ((option_value == str_value) and
u' selected="selected"' or '')
output.append(u'<option value="%s"%s>%s</option>' % (
escape(option_value), selected_html,
escape(option_label)
))
if group_label:
output.append(u'</optgroup>')
output.append(u'</select>')
return mark_safe(u'\n'.join(output))
# field for grouped choices, handles cleaning of funky choice tuple
class GroupedChoiceField(forms.ChoiceField):
def __init__(self, choices=(), required=True, widget=GroupedSelect,
label=None, initial=None, help_text=None):
super(forms.ChoiceField, self).__init__(required, widget, label,
initial, help_text)
self.choices = choices
def clean(self, value):
"""
Validates that the input is in self.choices.
"""
value = super(forms.ChoiceField, self).clean(value)
if value in (None, ''):
value = u''
value = smart_unicode(value)
if value == u'':
return value
valid_values = []
for group_label, group in self.choices:
valid_values += [str(k) for k, v in group]
if value not in valid_values:
raise forms.ValidationError(
_(u'Select a valid choice. That choice is not one of the '
'available choices.')
)
return value
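# A minimal sketch of the field in use, following the grocery example from
# the comments at the top of this module; the form name is hypothetical.
class GroceryForm(forms.Form):
    groceries = ((False, (('milk', 'milk'), (-1, 'eggs'))),
                 ('fruit', ((0, 'apple'), (1, 'orange'))),
                 ('', (('yum', 'beer'),)))
    grocery_list = GroupedChoiceField(choices=groceries)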
| polinom/djangopeople | djangopeople/djangopeople/groupedselect.py | Python | mit | 3,218 |
#!/usr/bin/env python
# Copyright (C) 2017 Daniel Asarnow
# University of California, San Francisco
#
# Generate subparticles for "local reconstruction" methods.
# See help text and README file for more information.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import logging
import json
import numpy as np
import os
import os.path
import sys
from pyem import geom
from pyem import star
from pyem import util
def main(args):
log = logging.getLogger(__name__)
hdlr = logging.StreamHandler(sys.stdout)
log.addHandler(hdlr)
log.setLevel(logging.getLevelName(args.loglevel.upper()))
if args.target is None and args.sym is None and args.transform is None and args.euler is None:
log.error("At least a target, transformation matrix, Euler angles, or a symmetry group must be provided")
return 1
elif (args.target is not None or args.transform is not None) and args.boxsize is None and args.origin is None:
log.error("An origin must be provided via --boxsize or --origin")
return 1
if args.apix is None:
df = star.parse_star(args.input, nrows=1)
args.apix = star.calculate_apix(df)
if args.apix is None:
log.warn("Could not compute pixel size, default is 1.0 Angstroms per pixel")
args.apix = 1.0
df[star.Relion.MAGNIFICATION] = 10000
df[star.Relion.DETECTORPIXELSIZE] = 1.0
if args.target is not None:
try:
args.target = np.array([np.double(tok) for tok in args.target.split(",")])
        except ValueError:
log.error("Target must be comma-separated list of x,y,z coordinates")
return 1
if args.euler is not None:
try:
args.euler = np.deg2rad(np.array([np.double(tok) for tok in args.euler.split(",")]))
args.transform = np.zeros((3, 4))
args.transform[:, :3] = geom.euler2rot(*args.euler)
if args.target is not None:
args.transform[:, -1] = args.target
        except (ValueError, TypeError):
log.error("Euler angles must be comma-separated list of rotation, tilt, skew in degrees")
return 1
if args.transform is not None and not hasattr(args.transform, "dtype"):
if args.target is not None:
log.warn("--target supersedes --transform")
try:
args.transform = np.array(json.loads(args.transform))
        except ValueError:
log.error("Transformation matrix must be in JSON/Numpy format")
return 1
if args.origin is not None:
if args.boxsize is not None:
log.warn("--origin supersedes --boxsize")
try:
args.origin = np.array([np.double(tok) for tok in args.origin.split(",")])
args.origin /= args.apix
        except ValueError:
log.error("Origin must be comma-separated list of x,y,z coordinates")
return 1
elif args.boxsize is not None:
args.origin = np.ones(3) * args.boxsize / 2
if args.sym is not None:
args.sym = util.relion_symmetry_group(args.sym)
df = star.parse_star(args.input)
if star.calculate_apix(df) != args.apix:
log.warn("Using specified pixel size of %f instead of calculated size %f" %
(args.apix, star.calculate_apix(df)))
if args.cls is not None:
df = star.select_classes(df, args.cls)
if args.target is not None:
args.target /= args.apix
c = args.target - args.origin
c = np.where(np.abs(c) < 1, 0, c) # Ignore very small coordinates.
d = np.linalg.norm(c)
ax = c / d
r = geom.euler2rot(*np.array([np.arctan2(ax[1], ax[0]), np.arccos(ax[2]), np.deg2rad(args.psi)]))
d = -d
elif args.transform is not None:
r = args.transform[:, :3]
if args.transform.shape[1] == 4:
d = args.transform[:, -1] / args.apix
d = r.dot(args.origin) + d - args.origin
else:
d = 0
elif args.sym is not None:
r = np.identity(3)
d = -args.displacement / args.apix
else:
log.error("At least a target or symmetry group must be provided via --target or --sym")
return 1
log.debug("Final rotation: %s" % str(r).replace("\n", "\n" + " " * 16))
ops = [op.dot(r.T) for op in args.sym] if args.sym is not None else [r.T]
log.debug("Final translation: %s (%f px)" % (str(d), np.linalg.norm(d)))
dfs = list(subparticle_expansion(df, ops, d, rotate=args.shift_only, invert=args.invert, adjust_defocus=args.adjust_defocus))
if args.recenter:
for s in dfs:
star.recenter(s, inplace=True)
if args.suffix is None and not args.skip_join:
if len(dfs) > 1:
df = util.interleave(dfs)
else:
df = dfs[0]
df = star.compatible(df, relion2=args.relion2, inplace=True)
star.write_star(args.output, df, optics=(not args.relion2))
else:
for i, s in enumerate(dfs):
s = star.compatible(s, relion2=args.relion2, inplace=True)
star.write_star(os.path.join(args.output, args.suffix + "_%d" % i), s, optics=(not args.relion2))
return 0
def subparticle_expansion(s, ops=None, dists=0, rots=None, rotate=True, invert=False, adjust_defocus=False):
log = logging.getLogger(__name__)
if ops is None:
ops = [np.eye(3)]
if rots is None:
rots = geom.e2r_vec(np.deg2rad(s[star.Relion.ANGLES].values))
dists = np.atleast_2d(dists)
if len(dists) == 1:
dists = np.repeat(dists, len(ops), axis=0)
for i in range(len(ops)):
log.debug("Yielding expansion %d" % i)
log.debug("Rotation: %s" % str(ops[i]).replace("\n", "\n" + " " * 10))
log.debug("Translation: %s (%f px)" % (str(dists[i]), np.linalg.norm(dists[i])))
yield star.transform_star(s, ops[i], dists[i], rots=rots, rotate=rotate, invert=invert, adjust_defocus=adjust_defocus)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("input", help="STAR file with source particles")
parser.add_argument("output", help="Output file path (and prefix for output files)")
parser.add_argument("--apix", "--angpix", help="Angstroms per pixel (calculate from STAR by default)", type=float)
parser.add_argument("--boxsize", help="Particle box size in pixels (used to define origin only)", type=int)
parser.add_argument("--class", help="Keep this class in output, may be passed multiple times",
action="append", type=int, dest="cls")
parser.add_argument("--displacement", help="Distance of new origin from symmetrix axis in Angstroms",
type=float, default=0)
parser.add_argument("--origin", help="Origin coordinates in Angstroms", metavar="x,y,z")
parser.add_argument("--target", help="Target coordinates in Angstroms", metavar="x,y,z")
parser.add_argument("--invert", help="Invert the transformation", action="store_true")
parser.add_argument("--target-invert", action="store_true", dest="invert", help=argparse.SUPPRESS)
parser.add_argument("--psi", help="Additional in-plane rotation of target in degrees", type=float, default=0)
parser.add_argument("--euler", help="Euler angles (ZYZ intrinsic) to rotate particles", metavar="rot,tilt,psi")
parser.add_argument("--transform", help="Transformation matrix (3x3 or 3x4) in Numpy format")
parser.add_argument("--recenter", help="Recenter subparticle coordinates by subtracting X and Y shifts (e.g. for "
"extracting outside Relion)", action="store_true")
parser.add_argument("--adjust-defocus", help="Add Z component of shifts to defocus", action="store_true")
parser.add_argument("--shift-only", help="Keep original view axis after target transformation", action="store_false")
parser.add_argument("--loglevel", "-l", type=str, default="WARNING", help="Logging level and debug output")
parser.add_argument("--skip-join", help="Force multiple output files even if no suffix provided",
action="store_true", default=False)
parser.add_argument("--suffix", help="Suffix for multiple output files")
parser.add_argument("--sym", help="Symmetry group for whole-particle expansion or symmetry-derived subparticles ("
"Relion conventions)")
parser.add_argument("--relion2", "-r2", help="Write Relion2 compatible STAR file", action="store_true")
sys.exit(main(parser.parse_args()))
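# Example invocations (illustrative; the flags are those defined by the
# argument parser above, the file names are placeholders):
#   python subparticles.py particles.star expanded.star --sym C6
#   python subparticles.py particles.star subparts.star --target 50,50,80 --boxsize 256
#   python subparticles.py particles.star subparts.star --sym C3 --displacement 85 --recenter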
| asarnow/pyem | subparticles.py | Python | gpl-3.0 | 9,149 |
# -*- coding: utf-8 -*-
"""
Created on Wed Nov 16 11:44:25 2016
@author: steven
"""
import numpy as npy
import math
import scipy.linalg as linalg
def CrossProductMatrix(u):
L=npy.array([[0,-u[2],u[1]],[u[2],0,-u[0]],[-u[1],u[0],0]])
return L
def Euler2TransferMatrix(psi,theta,phi):
"""
    Give the transfer matrix computed from Euler angles
Angles in radians
"""
cpsi=math.cos(psi)
spsi=math.sin(psi)
ctheta=math.cos(theta)
stheta=math.sin(theta)
cphi=math.cos(phi)
sphi=math.sin(phi)
P=npy.array([[cphi*cpsi-sphi*ctheta*spsi,-spsi*cphi-cpsi*ctheta*sphi,stheta*sphi],[cpsi*sphi+spsi*ctheta*cphi,-sphi*spsi+cphi*ctheta*cpsi,-stheta*cphi],[spsi*stheta,cpsi*stheta,ctheta]])
return P
def TransferMatrix2Euler(R):
    if R[2,2]!=1 and R[2,2]!=-1:
# R[2,2]=R[2,2]/abs(R[2,2])
theta=math.acos(R[2,2])
psi=math.atan2(R[2,0]/math.sin(theta),R[2,1]/math.sin(theta))
phi=math.atan2(R[0,2]/math.sin(theta),-R[1,2]/math.sin(theta))
else:
phi=0
if R[2,2]==1:
theta=0
psi=math.atan2(R[1,0],R[0,0])
else:
theta=math.pi
psi=-math.atan2(R[1,0],R[0,0])
return(npy.array([psi,theta,phi]))
def Direction2Euler(u,v=None):
    # u=npy.array([ux,uy,uz])
    if v is None:
        # draw a fresh random v per call; a default argument expression
        # would be evaluated only once, at import time
        v=npy.random.random(3)
    u=u/linalg.norm(u)
R=npy.zeros((3,3))
R[:,0]=u
v=v-npy.dot(u,v)*u
v=v/linalg.norm(v)
w=npy.cross(u,v)
R[:,1]=v
R[:,2]=w
euler=TransferMatrix2Euler(R)
return euler
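# Round-trip sanity check (illustrative): converting Euler angles to a
# transfer matrix and back should recover the angles, away from the
# theta=0 or theta=pi singularities handled above.
if __name__ == '__main__':
    angles = npy.array([0.3, 1.1, -0.7])
    P = Euler2TransferMatrix(*angles)
    print(TransferMatrix2Euler(P))  # expected: approximately [0.3, 1.1, -0.7]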
| masfaraud/genmechanics | genmechanics/geometry.py | Python | gpl-3.0 | 1,524 |
class Solution:
# @param {integer} n
# @return {boolean}
def isHappy(self, n):
nums = []
while n not in nums:
nums.append(n)
next_n = sum([int(d)*int(d) for d in str(n)])
if next_n == 1:
return True
n = next_n
return False
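# Note: a set would make the membership test O(1); the list above is O(n)
# per check, though fine for the short cycles involved.
# Example (illustrative): 19 is happy (19 -> 82 -> 68 -> 100 -> 1), while 4
# falls into the cycle 4 -> 16 -> 37 -> 58 -> 89 -> 145 -> 42 -> 20 -> 4.
if __name__ == '__main__':
    print(Solution().isHappy(19))  # True
    print(Solution().isHappy(4))   # False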
| lutianming/leetcode | happy_number.py | Python | mit | 323 |
from plotly.basedatatypes import BaseLayoutHierarchyType as _BaseLayoutHierarchyType
import copy as _copy
class Domain(_BaseLayoutHierarchyType):
# class properties
# --------------------
_parent_path_str = "layout.scene"
_path_str = "layout.scene.domain"
_valid_props = {"column", "row", "x", "y"}
# column
# ------
@property
def column(self):
"""
If there is a layout grid, use the domain for this column in
        the grid for this scene subplot.
        The 'column' property is an integer and may be specified as:
- An int (or float that will be cast to an int)
in the interval [0, 9223372036854775807]
Returns
-------
int
"""
return self["column"]
@column.setter
def column(self, val):
self["column"] = val
# row
# ---
@property
def row(self):
"""
If there is a layout grid, use the domain for this row in the
        grid for this scene subplot.
        The 'row' property is an integer and may be specified as:
- An int (or float that will be cast to an int)
in the interval [0, 9223372036854775807]
Returns
-------
int
"""
return self["row"]
@row.setter
def row(self, val):
self["row"] = val
# x
# -
@property
def x(self):
"""
Sets the horizontal domain of this scene subplot (in plot
fraction).
The 'x' property is an info array that may be specified as:
* a list or tuple of 2 elements where:
(0) The 'x[0]' property is a number and may be specified as:
- An int or float in the interval [0, 1]
(1) The 'x[1]' property is a number and may be specified as:
- An int or float in the interval [0, 1]
Returns
-------
list
"""
return self["x"]
@x.setter
def x(self, val):
self["x"] = val
# y
# -
@property
def y(self):
"""
Sets the vertical domain of this scene subplot (in plot
fraction).
The 'y' property is an info array that may be specified as:
* a list or tuple of 2 elements where:
(0) The 'y[0]' property is a number and may be specified as:
- An int or float in the interval [0, 1]
(1) The 'y[1]' property is a number and may be specified as:
- An int or float in the interval [0, 1]
Returns
-------
list
"""
return self["y"]
@y.setter
def y(self, val):
self["y"] = val
# Self properties description
# ---------------------------
@property
def _prop_descriptions(self):
return """\
column
If there is a layout grid, use the domain for this
            column in the grid for this scene subplot.
row
If there is a layout grid, use the domain for this row
            in the grid for this scene subplot.
x
Sets the horizontal domain of this scene subplot (in
plot fraction).
y
Sets the vertical domain of this scene subplot (in plot
fraction).
"""
def __init__(self, arg=None, column=None, row=None, x=None, y=None, **kwargs):
"""
Construct a new Domain object
Parameters
----------
arg
dict of properties compatible with this constructor or
an instance of
:class:`plotly.graph_objs.layout.scene.Domain`
column
If there is a layout grid, use the domain for this
            column in the grid for this scene subplot.
row
If there is a layout grid, use the domain for this row
            in the grid for this scene subplot.
x
Sets the horizontal domain of this scene subplot (in
plot fraction).
y
Sets the vertical domain of this scene subplot (in plot
fraction).
Returns
-------
Domain
"""
super(Domain, self).__init__("domain")
if "_parent" in kwargs:
self._parent = kwargs["_parent"]
return
# Validate arg
# ------------
if arg is None:
arg = {}
elif isinstance(arg, self.__class__):
arg = arg.to_plotly_json()
elif isinstance(arg, dict):
arg = _copy.copy(arg)
else:
raise ValueError(
"""\
The first argument to the plotly.graph_objs.layout.scene.Domain
constructor must be a dict or
an instance of :class:`plotly.graph_objs.layout.scene.Domain`"""
)
# Handle skip_invalid
# -------------------
self._skip_invalid = kwargs.pop("skip_invalid", False)
self._validate = kwargs.pop("_validate", True)
# Populate data dict with properties
# ----------------------------------
_v = arg.pop("column", None)
_v = column if column is not None else _v
if _v is not None:
self["column"] = _v
_v = arg.pop("row", None)
_v = row if row is not None else _v
if _v is not None:
self["row"] = _v
_v = arg.pop("x", None)
_v = x if x is not None else _v
if _v is not None:
self["x"] = _v
_v = arg.pop("y", None)
_v = y if y is not None else _v
if _v is not None:
self["y"] = _v
# Process unknown kwargs
# ----------------------
self._process_kwargs(**dict(arg, **kwargs))
# Reset skip_invalid
# ------------------
self._skip_invalid = False
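# Minimal usage sketch (illustrative, not part of the generated module):
# constrain a 3-d scene to the left half of the figure via its domain.
if __name__ == "__main__":
    import plotly.graph_objects as go

    fig = go.Figure(data=[go.Scatter3d(x=[0, 1], y=[0, 1], z=[0, 1])])
    fig.update_layout(scene=dict(domain=dict(x=[0, 0.5], y=[0, 1])))
    fig.show()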
| plotly/python-api | packages/python/plotly/plotly/graph_objs/layout/scene/_domain.py | Python | mit | 5,784 |
import pytest
import numpy as np
from instrument_models.mastcam import Mastcam
from . import mastcam_label1, mastcam_label2, EMPTY_LABEL
class TestMastcam(object):
mastcam1 = Mastcam(mastcam_label1)
mastcam2 = Mastcam(mastcam_label2)
empty = Mastcam(EMPTY_LABEL)
def test_group(self):
assert self.mastcam1.group == 'INSTRUMENT_STATE_PARMS'
def test_wavelength_key(self):
assert self.mastcam1.wavelength_key1 == 'CENTER_FILTER_WAVELENGTH'
assert self.mastcam1.wavelength_key2 == 'FILTER_CENTER_WAVELENGTH'
@pytest.mark.parametrize(
'unit, wavelength',
[
('nm', 500),
('um', 0.5),
('AA', 5000)
]
)
def test_get_wavelength(self, unit, wavelength):
assert self.mastcam1.get_wavelength(unit) == wavelength
assert self.mastcam2.get_wavelength(unit) == wavelength
assert np.isnan(self.empty.get_wavelength(unit))
| planetarypy/pdsspect | tests/test_mastcam.py | Python | bsd-3-clause | 954 |
# (c) Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
import time
from unittest import mock
from os_brick import exception
from os_brick.tests import base
from os_brick import utils
class WrongException(exception.BrickException):
pass
class TestRetryDecorator(base.TestCase):
def test_no_retry_required(self):
self.counter = 0
with mock.patch.object(utils, '_time_sleep') as mock_sleep:
@utils.retry(exception.VolumeDeviceNotFound,
interval=2,
retries=3,
backoff_rate=2)
def succeeds():
self.counter += 1
return 'success'
ret = succeeds()
self.assertFalse(mock_sleep.called)
self.assertEqual('success', ret)
self.assertEqual(1, self.counter)
def test_retries_once(self):
self.counter = 0
interval = 2
backoff_rate = 2
retries = 3
with mock.patch.object(utils, '_time_sleep') as mock_sleep:
@utils.retry(exception.VolumeDeviceNotFound,
interval,
retries,
backoff_rate)
def fails_once():
self.counter += 1
if self.counter < 2:
raise exception.VolumeDeviceNotFound(device='fake')
else:
return 'success'
ret = fails_once()
self.assertEqual('success', ret)
self.assertEqual(2, self.counter)
self.assertEqual(1, mock_sleep.call_count)
mock_sleep.assert_called_with(interval)
def test_limit_is_reached(self):
self.counter = 0
retries = 3
interval = 2
backoff_rate = 4
with mock.patch.object(utils, '_time_sleep') as mock_sleep:
@utils.retry(exception.VolumeDeviceNotFound,
interval,
retries,
backoff_rate)
def always_fails():
self.counter += 1
raise exception.VolumeDeviceNotFound(device='fake')
self.assertRaises(exception.VolumeDeviceNotFound,
always_fails)
self.assertEqual(retries, self.counter)
expected_sleep_arg = []
for i in range(retries):
if i > 0:
interval *= (backoff_rate ** (i - 1))
expected_sleep_arg.append(float(interval))
mock_sleep.assert_has_calls(
list(map(mock.call, expected_sleep_arg)))
def test_wrong_exception_no_retry(self):
with mock.patch.object(utils, '_time_sleep') as mock_sleep:
@utils.retry(exception.VolumeDeviceNotFound)
def raise_unexpected_error():
raise WrongException("wrong exception")
self.assertRaises(WrongException, raise_unexpected_error)
self.assertFalse(mock_sleep.called)
@mock.patch('tenacity.nap.sleep')
def test_retry_exit_code(self, sleep_mock):
exit_code = 5
exception = utils.processutils.ProcessExecutionError
@utils.retry(retry=utils.retry_if_exit_code, retry_param=exit_code)
def raise_retriable_exit_code():
raise exception(exit_code=exit_code)
self.assertRaises(exception, raise_retriable_exit_code)
self.assertEqual(0, sleep_mock.call_count)
@mock.patch('tenacity.nap.sleep')
def test_retry_exit_code_non_retriable(self, sleep_mock):
exit_code = 5
exception = utils.processutils.ProcessExecutionError
@utils.retry(retry=utils.retry_if_exit_code, retry_param=exit_code)
def raise_non_retriable_exit_code():
raise exception(exit_code=exit_code + 1)
self.assertRaises(exception, raise_non_retriable_exit_code)
sleep_mock.assert_not_called()
class LogTracingTestCase(base.TestCase):
"""Test out the log tracing."""
def test_utils_trace_method_default_logger(self):
mock_log = self.mock_object(utils, 'LOG')
@utils.trace
def _trace_test_method_custom_logger(*args, **kwargs):
return 'OK'
result = _trace_test_method_custom_logger()
self.assertEqual('OK', result)
self.assertEqual(2, mock_log.debug.call_count)
def test_utils_trace_method_inner_decorator(self):
mock_logging = self.mock_object(utils, 'logging')
mock_log = mock.Mock()
mock_log.isEnabledFor = lambda x: True
mock_logging.getLogger = mock.Mock(return_value=mock_log)
def _test_decorator(f):
def blah(*args, **kwargs):
return f(*args, **kwargs)
return blah
@_test_decorator
@utils.trace
def _trace_test_method(*args, **kwargs):
return 'OK'
result = _trace_test_method(self)
self.assertEqual('OK', result)
self.assertEqual(2, mock_log.debug.call_count)
# Ensure the correct function name was logged
for call in mock_log.debug.call_args_list:
self.assertIn('_trace_test_method', str(call))
self.assertNotIn('blah', str(call))
def test_utils_trace_method_outer_decorator(self):
mock_logging = self.mock_object(utils, 'logging')
mock_log = mock.Mock()
mock_log.isEnabledFor = lambda x: True
mock_logging.getLogger = mock.Mock(return_value=mock_log)
def _test_decorator(f):
def blah(*args, **kwargs):
return f(*args, **kwargs)
return blah
@utils.trace
@_test_decorator
def _trace_test_method(*args, **kwargs):
return 'OK'
result = _trace_test_method(self)
self.assertEqual('OK', result)
self.assertEqual(2, mock_log.debug.call_count)
# Ensure the incorrect function name was logged
for call in mock_log.debug.call_args_list:
self.assertNotIn('_trace_test_method', str(call))
self.assertIn('blah', str(call))
def test_utils_trace_method_outer_decorator_with_functools(self):
mock_log = mock.Mock()
mock_log.isEnabledFor = lambda x: True
self.mock_object(utils.logging, 'getLogger', mock_log)
mock_log = self.mock_object(utils, 'LOG')
def _test_decorator(f):
@functools.wraps(f)
def wraps(*args, **kwargs):
return f(*args, **kwargs)
return wraps
@utils.trace
@_test_decorator
def _trace_test_method(*args, **kwargs):
return 'OK'
result = _trace_test_method()
self.assertEqual('OK', result)
self.assertEqual(2, mock_log.debug.call_count)
        # Ensure the correct function name was logged (functools.wraps
        # preserves it through the inner decorator)
for call in mock_log.debug.call_args_list:
self.assertIn('_trace_test_method', str(call))
self.assertNotIn('wraps', str(call))
def test_utils_trace_method_with_exception(self):
self.LOG = self.mock_object(utils, 'LOG')
@utils.trace
def _trace_test_method(*args, **kwargs):
raise exception.VolumeDeviceNotFound('test message')
self.assertRaises(exception.VolumeDeviceNotFound, _trace_test_method)
exception_log = self.LOG.debug.call_args_list[1]
self.assertIn('exception', str(exception_log))
self.assertIn('test message', str(exception_log))
def test_utils_trace_method_with_time(self):
mock_logging = self.mock_object(utils, 'logging')
mock_log = mock.Mock()
mock_log.isEnabledFor = lambda x: True
mock_logging.getLogger = mock.Mock(return_value=mock_log)
mock_time = mock.Mock(side_effect=[3.1, 6])
self.mock_object(time, 'time', mock_time)
@utils.trace
def _trace_test_method(*args, **kwargs):
return 'OK'
result = _trace_test_method(self)
self.assertEqual('OK', result)
return_log = mock_log.debug.call_args_list[1]
self.assertIn('2900', str(return_log))
def test_utils_trace_method_with_password_dict(self):
mock_logging = self.mock_object(utils, 'logging')
mock_log = mock.Mock()
mock_log.isEnabledFor = lambda x: True
mock_logging.getLogger = mock.Mock(return_value=mock_log)
@utils.trace
def _trace_test_method(*args, **kwargs):
return {'something': 'test',
'password': 'Now you see me'}
result = _trace_test_method(self)
expected_unmasked_dict = {'something': 'test',
'password': 'Now you see me'}
self.assertEqual(expected_unmasked_dict, result)
self.assertEqual(2, mock_log.debug.call_count)
self.assertIn("'password': '***'",
str(mock_log.debug.call_args_list[1]))
def test_utils_trace_method_with_password_str(self):
mock_logging = self.mock_object(utils, 'logging')
mock_log = mock.Mock()
mock_log.isEnabledFor = lambda x: True
mock_logging.getLogger = mock.Mock(return_value=mock_log)
@utils.trace
def _trace_test_method(*args, **kwargs):
return "'adminPass': 'Now you see me'"
result = _trace_test_method(self)
expected_unmasked_str = "'adminPass': 'Now you see me'"
self.assertEqual(expected_unmasked_str, result)
self.assertEqual(2, mock_log.debug.call_count)
self.assertIn("'adminPass': '***'",
str(mock_log.debug.call_args_list[1]))
def test_utils_trace_method_with_password_in_formal_params(self):
mock_logging = self.mock_object(utils, 'logging')
mock_log = mock.Mock()
mock_log.isEnabledFor = lambda x: True
mock_logging.getLogger = mock.Mock(return_value=mock_log)
@utils.trace
def _trace_test_method(*args, **kwargs):
self.assertEqual('verybadpass',
kwargs['connection']['data']['auth_password'])
pass
connector_properties = {
'data': {
'auth_password': 'verybadpass'
}
}
_trace_test_method(self, connection=connector_properties)
self.assertEqual(2, mock_log.debug.call_count)
self.assertIn("'auth_password': '***'",
str(mock_log.debug.call_args_list[0]))
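# --- Editor's note: hedged demo, not part of the upstream test file. ---
# A minimal sketch of the behaviour pinned down above: utils.trace logs one
# DEBUG record on entry and one on exit, masking password-like values in the
# log output while leaving the real return value untouched.
if __name__ == '__main__':
    @utils.trace
    def _demo_attach(volume_id, password='supersecret'):
        return {'volume_id': volume_id, 'password': password}

    result = _demo_attach('vol-1')
    assert result['password'] == 'supersecret'  # return value is unmasked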
|
openstack/os-brick
|
os_brick/tests/test_utils.py
|
Python
|
apache-2.0
| 11,110
|
# Copyright 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""TestEnvironment classes.
These classes abstract away the various setups needed to run the WebDriver java
tests in various environments.
"""
import logging
import os
import sys
import chrome_paths
import util
_THIS_DIR = os.path.abspath(os.path.dirname(__file__))
if util.IsLinux():
sys.path.insert(0, os.path.join(chrome_paths.GetSrc(), 'third_party',
'catapult', 'devil'))
from devil.android import device_errors
from devil.android import device_utils
from devil.android import forwarder
sys.path.insert(0, os.path.join(chrome_paths.GetSrc(), 'build', 'android'))
import devil_chromium
ANDROID_TEST_HTTP_PORT = 2311
ANDROID_TEST_HTTPS_PORT = 2411
_EXPECTATIONS = {}
execfile(os.path.join(_THIS_DIR, 'test_expectations'), _EXPECTATIONS)
class BaseTestEnvironment(object):
"""Manages the environment java tests require to run."""
def __init__(self, chrome_version='HEAD'):
"""Initializes a desktop test environment.
Args:
chrome_version: Optionally a chrome version to run the tests against.
"""
self._chrome_version = chrome_version
def GetOS(self):
"""Name of the OS."""
raise NotImplementedError
def GlobalSetUp(self):
"""Sets up the global test environment state."""
pass
def GlobalTearDown(self):
"""Tears down the global test environment state."""
pass
def GetDisabledJavaTestMatchers(self):
"""Get the list of disabled java test matchers.
Returns:
List of disabled test matchers, which may contain '*' wildcards.
"""
return _EXPECTATIONS['GetDisabledTestMatchers'](self.GetOS())
def GetReadyToRunJavaTestMatchers(self):
"""Get the list of disabled for Chrome java test matchers
but which already works.
Returns:
List of disabled for Chrome java test matchers
but which already works.
"""
return _EXPECTATIONS['GetReadyToRunTestMatchers']()
def GetPassedJavaTests(self):
"""Get the list of passed java tests.
Returns:
List of passed test names.
"""
with open(os.path.join(_THIS_DIR, 'java_tests.txt'), 'r') as f:
return _EXPECTATIONS['ApplyJavaTestFilter'](
self.GetOS(), [t.strip('\n') for t in f.readlines()])
class DesktopTestEnvironment(BaseTestEnvironment):
"""Manages the environment java tests require to run on Desktop."""
# override
def GetOS(self):
return util.GetPlatformName()
class AndroidTestEnvironment(DesktopTestEnvironment):
"""Manages the environment java tests require to run on Android."""
def __init__(self, package, chrome_version='HEAD'):
super(AndroidTestEnvironment, self).__init__(chrome_version)
self._package = package
self._device = None
self._forwarder = None
# override
def GlobalSetUp(self):
devil_chromium.Initialize()
os.putenv('TEST_HTTP_PORT', str(ANDROID_TEST_HTTP_PORT))
os.putenv('TEST_HTTPS_PORT', str(ANDROID_TEST_HTTPS_PORT))
devices = device_utils.DeviceUtils.HealthyDevices()
if not devices:
raise device_errors.NoDevicesError()
elif len(devices) > 1:
logging.warning('Multiple devices attached. Using %s.' % devices[0])
self._device = devices[0]
forwarder.Forwarder.Map(
[(ANDROID_TEST_HTTP_PORT, ANDROID_TEST_HTTP_PORT),
(ANDROID_TEST_HTTPS_PORT, ANDROID_TEST_HTTPS_PORT)],
self._device)
# override
def GlobalTearDown(self):
if self._device:
forwarder.Forwarder.UnmapAllDevicePorts(self._device)
# override
def GetOS(self):
return 'android:%s' % self._package
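# --- Editor's note: hedged usage sketch, not part of the upstream file. ---
# Typical lifecycle as driven by a runner script (the names below are
# illustrative, not taken from this file):
#
#   env = AndroidTestEnvironment('chrome', chrome_version='HEAD')
#   env.GlobalSetUp()                      # forwards the HTTP/HTTPS test ports
#   try:
#       disabled = env.GetDisabledJavaTestMatchers()
#       passed = env.GetPassedJavaTests()
#       ...  # launch the java tests, filtering with the matchers above
#   finally:
#       env.GlobalTearDown()               # unmaps the device ports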
|
endlessm/chromium-browser
|
chrome/test/chromedriver/test/test_environment.py
|
Python
|
bsd-3-clause
| 3,734
|
from os.path import join
from sys import exit
from ..utils.text import mark_for_translation as _, red
from ..utils.ui import io
OPERATIONS = (
'bytes',
'decrypt',
'encrypt',
'human',
'password',
)
def get_operation(args):
opcount = 0
selected_op = None
for op in OPERATIONS:
if args[op]:
selected_op = op
opcount += 1
if opcount > 1:
io.stdout(_("{x} More than one operation selected").format(x=red("!!!")))
exit(1)
elif opcount == 0:
return 'password'
return selected_op
def bw_pw(repo, args):
if args['length'] < 1:
io.stdout(_("{x} length must be > 1").format(x=red("!!!")))
exit(1)
op = get_operation(args)
if op == 'bytes':
io.stdout(repo.vault.random_bytes_as_base64_for(
args['string'],
key=args['key'] or 'generate',
length=args['length'],
).value)
elif op == 'decrypt':
if args['file']:
content = repo.vault.decrypt_file(
args['string'],
key=args['key'],
).value
with open(join(repo.data_dir, args['file']), 'wb') as f:
f.write(content.encode('utf-8'))
else:
try:
key, cryptotext = args['string'].split("$", 1)
except ValueError:
cryptotext = args['string']
key = args['key'] or 'encrypt'
io.stdout(repo.vault.decrypt(
cryptotext,
key=key,
).value)
elif op == 'encrypt':
if args['file']:
repo.vault.encrypt_file(
args['string'],
args['file'],
key=args['key'] or 'encrypt',
)
else:
io.stdout(repo.vault.encrypt(
args['string'],
key=args['key'] or 'encrypt',
))
elif op == 'human':
io.stdout(repo.vault.human_password_for(
args['string'],
key=args['key'] or 'generate',
).value)
elif op == 'password':
io.stdout(repo.vault.password_for(
args['string'],
key=args['key'] or 'generate',
length=args['length'],
).value)
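# --- Editor's note: hedged sketch, not part of the upstream module. ---
# get_operation() resolves the mutually exclusive CLI flags; with no flag
# set it falls back to 'password' (the args dict shape is assumed from the
# flag names in OPERATIONS):
#
#   >>> get_operation({op: False for op in OPERATIONS})
#   'password'
#   >>> get_operation(dict({op: False for op in OPERATIONS}, human=True))
#   'human'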
|
bundlewrap/bundlewrap
|
bundlewrap/cmdline/pw.py
|
Python
|
gpl-3.0
| 2,296
|
'''
Copyright (C) 2013 Travis DeWolf
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
'''
import numpy as np
import scipy.optimize
class Arm3Link:
def __init__(self, q=None, q0=None, L=None):
"""Set up the basic parameters of the arm.
All lists are in order [shoulder, elbow, wrist].
q : np.array
the initial joint angles of the arm
q0 : np.array
the default (resting state) joint configuration
L : np.array
the arm segment lengths
"""
# initial joint angles
self.q = [.3, .3, 0] if q is None else q
# some default arm positions
self.q0 = np.array([np.pi/4, np.pi/4, np.pi/4]) if q0 is None else q0
# arm segment lengths
self.L = np.array([1, 1, 1]) if L is None else L
self.max_angles = [np.pi, np.pi, np.pi/4]
self.min_angles = [0, 0, -np.pi/4]
def get_xy(self, q=None):
"""Returns the corresponding hand xy coordinates for
a given set of joint angle values [shoulder, elbow, wrist],
and the above defined arm segment lengths, L
q : np.array
the list of current joint angles
returns : list
the [x,y] position of the arm
"""
if q is None:
q = self.q
x = self.L[0]*np.cos(q[0]) + \
self.L[1]*np.cos(q[0]+q[1]) + \
self.L[2]*np.cos(np.sum(q))
y = self.L[0]*np.sin(q[0]) + \
self.L[1]*np.sin(q[0]+q[1]) + \
self.L[2]*np.sin(np.sum(q))
return [x, y]
def inv_kin(self, xy):
"""This is just a quick write up to find the inverse kinematics
for a 3-link arm, using the SciPy optimize package minimization
function.
Given an (x,y) position of the hand, return a set of joint angles (q)
using constraint based minimization, constraint is to match hand (x,y),
minimize the distance of each joint from it's default position (q0).
xy : tuple
the desired xy position of the arm
returns : list
the optimal [shoulder, elbow, wrist] angle configuration
"""
def distance_to_default(q, *args):
"""Objective function to minimize
Calculates the Euclidean distance through joint space to the
default arm configuration. The weight list scales the penalty for
each joint's deviation from its resting position, so the arm stays
closer to its resting state for heavily weighted joints than for
lightly weighted ones.
q : np.array
the list of current joint angles
returns : scalar
euclidean distance to the default arm position
"""
# weights found with trial and error,
# get some wrist bend, but not much
weight = [1, 1, 1.3]
return np.sqrt(np.sum([(qi - q0i)**2 * wi
for qi, q0i, wi in zip(q, self.q0, weight)]))
def x_constraint(q, xy):
"""Returns the corresponding hand xy coordinates for
a given set of joint angle values [shoulder, elbow, wrist],
and the above defined arm segment lengths, L
q : np.array
the list of current joint angles
xy : np.array
current xy position (not used)
returns : np.array
the difference between current and desired x position
"""
x = (self.L[0]*np.cos(q[0]) + self.L[1]*np.cos(q[0]+q[1]) +
self.L[2]*np.cos(np.sum(q))) - xy[0]
return x
def y_constraint(q, xy):
"""Returns the corresponding hand xy coordinates for
a given set of joint angle values [shoulder, elbow, wrist],
and the above defined arm segment lengths, L
q : np.array
the list of current joint angles
xy : np.array
current xy position (not used)
returns : np.array
the difference between current and desired y position
"""
y = (self.L[0]*np.sin(q[0]) + self.L[1]*np.sin(q[0]+q[1]) +
self.L[2]*np.sin(np.sum(q))) - xy[1]
return y
def joint_limits_upper_constraint(q, xy):
"""Used in the function minimization such that the output from
this function must be greater than 0 to be successfully passed.
q : np.array
the current joint angles
xy : np.array
current xy position (not used)
returns : np.array
all > 0 if constraint matched
"""
return self.max_angles - q
def joint_limits_lower_constraint(q, xy):
"""Used in the function minimization such that the output from
this function must be greater than 0 to be successfully passed.
q : np.array
the current joint angles
xy : np.array
current xy position (not used)
returns : np.array
all > 0 if constraint matched
"""
return q - self.min_angles
return scipy.optimize.fmin_slsqp(
func=distance_to_default,
x0=self.q,
eqcons=[x_constraint,
y_constraint],
# uncomment to add in min / max angles for the joints
# ieqcons=[joint_limits_upper_constraint,
# joint_limits_lower_constraint],
args=(xy,),
iprint=0) # iprint=0 suppresses output
def test():
# ###########Test it!##################
arm = Arm3Link()
# set of desired (x,y) hand positions
x = np.arange(-.75, .75, .05)
y = np.arange(.25, .75, .05)
# threshold for printing out information, to find trouble spots
thresh = .025
count = 0
total_error = 0
# test it across the range of specified x and y values
for xi in range(len(x)):
for yi in range(len(y)):
# test the inv_kin function on a range of different targets
xy = [x[xi], y[yi]]
# run the inv_kin function, get the optimal joint angles
q = arm.inv_kin(xy=xy)
# find the (x,y) position of the hand given these angles
actual_xy = arm.get_xy(q)
# calculate the root squared error
error = np.sqrt(np.sum((np.array(xy) - np.array(actual_xy))**2))
# total the error
total_error += np.nan_to_num(error)
# if the error was high, print out more information
if np.sum(error) > thresh:
print('-------------------------')
print('Initial joint angles', arm.q)
print('Final joint angles: ', q)
print('Desired hand position: ', xy)
print('Actual hand position: ', actual_xy)
print('Error: ', error)
print('-------------------------')
count += 1
print('\n---------Results---------')
print('Total number of trials: ', count)
print('Total error: ', total_error)
print('-------------------------')
if __name__ == '__main__':
test()
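# --- Editor's note: hedged single-target example, not part of the original. ---
#   arm = Arm3Link()
#   q = arm.inv_kin(xy=(0.4, 0.6))   # optimal joint angles for that target
#   arm.get_xy(q)                    # ~[0.4, 0.6] when the target is reachable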
|
studywolf/blog
|
InvKin/Arm.py
|
Python
|
gpl-3.0
| 7,959
|
#!/usr/bin/env python
# -*- coding: utf8 -*-
# *****************************************************************
# ** PTS -- Python Toolkit for working with SKIRT **
# ** © Astronomical Observatory, Ghent University **
# *****************************************************************
## \package pts.do.magic.soften Soften edges of an image.
# -----------------------------------------------------------------
# Import the relevant PTS classes and modules
from pts.core.basics.configuration import ConfigurationDefinition, parse_arguments
from pts.magic.core.rgba import RGBAImage
from pts.magic.region.ellipse import PixelEllipseRegion
from pts.magic.basics.stretch import PixelStretch
from pts.magic.basics.coordinate import PixelCoordinate
from pts.core.basics.range import RealRange
# -----------------------------------------------------------------
# Create the definition
definition = ConfigurationDefinition()
definition.add_required("file_path", "file_path", "name of the input image file")
# Parse the command line arguments
config = parse_arguments("soften", definition)
# -----------------------------------------------------------------
image = RGBAImage.from_file(config.file_path)
# -----------------------------------------------------------------
radius = PixelStretch(500.,300.)
center = PixelCoordinate(750., 1200.)
ellipse = PixelEllipseRegion(center, radius)
# -----------------------------------------------------------------
factor_range = RealRange(0.4, 1.2)
image.soften_edges(ellipse, factor_range)
# -----------------------------------------------------------------
image.show()
# -----------------------------------------------------------------
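# Editor's note: hedged usage sketch, not part of the original script. The
# script is meant to be run through the PTS 'do' framework, e.g. (the image
# name is illustrative):
#   pts soften galaxy.png
# The hard-coded ellipse (center 750,1200; radii 500x300 pixels) and the
# softening factor range would normally be adapted to the input image.
# -----------------------------------------------------------------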
|
SKIRT/PTS
|
do/magic/soften.py
|
Python
|
agpl-3.0
| 1,722
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2013 Radim Rehurek <radimrehurek@seznam.cz>
# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
"""This module contains classes for analyzing the texts of a corpus to accumulate
statistical information about word occurrences."""
import itertools
import logging
import multiprocessing as mp
import sys
from collections import Counter
import numpy as np
import scipy.sparse as sps
from gensim import utils
from gensim.models.word2vec import Word2Vec
logger = logging.getLogger(__name__)
def _ids_to_words(ids, dictionary):
"""Convert an iterable of ids to their corresponding words using a dictionary.
Abstract away the differences between the HashDictionary and the standard one.
Parameters
----------
ids: dict
Dictionary of ids and their words.
dictionary: :class:`~gensim.corpora.dictionary.Dictionary`
Input gensim dictionary
Returns
-------
set
Corresponding words.
Examples
--------
.. sourcecode:: pycon
>>> from gensim.corpora.dictionary import Dictionary
>>> from gensim.topic_coherence import text_analysis
>>>
>>> dictionary = Dictionary()
>>> ids = {1: 'fake', 4: 'cats'}
>>> dictionary.id2token = {1: 'fake', 2: 'tokens', 3: 'rabbids', 4: 'cats'}
>>>
>>> text_analysis._ids_to_words(ids, dictionary)
set(['cats', 'fake'])
"""
if not dictionary.id2token: # may not be initialized in the standard gensim.corpora.Dictionary
setattr(dictionary, 'id2token', {v: k for k, v in dictionary.token2id.items()})
top_words = set()
for word_id in ids:
word = dictionary.id2token[word_id]
if isinstance(word, set):
top_words = top_words.union(word)
else:
top_words.add(word)
return top_words
class BaseAnalyzer:
"""Base class for corpus and text analyzers.
Attributes
----------
relevant_ids : dict
Mapping of relevant word ids (word_id -> word).
_vocab_size : int
Size of vocabulary.
id2contiguous : dict
Mapping word_id -> number.
log_every : int
Interval for logging.
_num_docs : int
Number of documents.
"""
def __init__(self, relevant_ids):
"""
Parameters
----------
relevant_ids : dict
Mapping of relevant word ids (word_id -> word).
Examples
--------
.. sourcecode:: pycon
>>> from gensim.topic_coherence import text_analysis
>>> ids = {1: 'fake', 4: 'cats'}
>>> base = text_analysis.BaseAnalyzer(ids)
>>> # should return {1: 'fake', 4: 'cats'} 2 {1: 0, 4: 1} 1000 0
>>> print(base.relevant_ids, base._vocab_size, base.id2contiguous, base.log_every, base._num_docs)
{1: 'fake', 4: 'cats'} 2 {1: 0, 4: 1} 1000 0
"""
self.relevant_ids = relevant_ids
self._vocab_size = len(self.relevant_ids)
self.id2contiguous = {word_id: n for n, word_id in enumerate(self.relevant_ids)}
self.log_every = 1000
self._num_docs = 0
@property
def num_docs(self):
return self._num_docs
@num_docs.setter
def num_docs(self, num):
self._num_docs = num
if self._num_docs % self.log_every == 0:
logger.info(
"%s accumulated stats from %d documents",
self.__class__.__name__, self._num_docs)
def analyze_text(self, text, doc_num=None):
raise NotImplementedError("Base classes should implement analyze_text.")
def __getitem__(self, word_or_words):
if isinstance(word_or_words, str) or not hasattr(word_or_words, '__iter__'):
return self.get_occurrences(word_or_words)
else:
return self.get_co_occurrences(*word_or_words)
def get_occurrences(self, word_id):
"""Return number of docs the word occurs in, once `accumulate` has been called."""
return self._get_occurrences(self.id2contiguous[word_id])
def _get_occurrences(self, word_id):
raise NotImplementedError("Base classes should implement occurrences")
def get_co_occurrences(self, word_id1, word_id2):
"""Return number of docs the words co-occur in, once `accumulate` has been called."""
return self._get_co_occurrences(self.id2contiguous[word_id1], self.id2contiguous[word_id2])
def _get_co_occurrences(self, word_id1, word_id2):
raise NotImplementedError("Base classes should implement co_occurrences")
class UsesDictionary(BaseAnalyzer):
"""A BaseAnalyzer that uses a Dictionary, hence can translate tokens to counts.
The standard BaseAnalyzer can only deal with token ids since it doesn't have the token2id
mapping.
Attributes
----------
relevant_words : set
Set of words that occurrences should be accumulated for.
dictionary : :class:`~gensim.corpora.dictionary.Dictionary`
Dictionary based on text
token2id : dict
Mapping from :class:`~gensim.corpora.dictionary.Dictionary`
"""
def __init__(self, relevant_ids, dictionary):
"""
Parameters
----------
relevant_ids : dict
Mapping of relevant word ids (word_id -> word).
dictionary : :class:`~gensim.corpora.dictionary.Dictionary`
Dictionary based on text
Examples
--------
.. sourcecode:: pycon
>>> from gensim.topic_coherence import text_analysis
>>> from gensim.corpora.dictionary import Dictionary
>>>
>>> ids = {1: 'foo', 2: 'bar'}
>>> dictionary = Dictionary([['foo', 'bar', 'baz'], ['foo', 'bar', 'bar', 'baz']])
>>> udict = text_analysis.UsesDictionary(ids, dictionary)
>>>
>>> print(udict.relevant_words)
set([u'foo', u'baz'])
"""
super(UsesDictionary, self).__init__(relevant_ids)
self.relevant_words = _ids_to_words(self.relevant_ids, dictionary)
self.dictionary = dictionary
self.token2id = dictionary.token2id
def get_occurrences(self, word):
"""Return number of docs the word occurs in, once `accumulate` has been called."""
try:
word_id = self.token2id[word]
except KeyError:
word_id = word
return self._get_occurrences(self.id2contiguous[word_id])
def _word2_contiguous_id(self, word):
try:
word_id = self.token2id[word]
except KeyError:
word_id = word
return self.id2contiguous[word_id]
def get_co_occurrences(self, word1, word2):
"""Return number of docs the words co-occur in, once `accumulate` has been called."""
word_id1 = self._word2_contiguous_id(word1)
word_id2 = self._word2_contiguous_id(word2)
return self._get_co_occurrences(word_id1, word_id2)
class InvertedIndexBased(BaseAnalyzer):
"""Analyzer that builds up an inverted index to accumulate stats."""
def __init__(self, *args):
"""
Parameters
----------
args : dict
Look at :class:`~gensim.topic_coherence.text_analysis.BaseAnalyzer`
Examples
--------
.. sourcecode:: pycon
>>> from gensim.topic_coherence import text_analysis
>>>
>>> ids = {1: 'fake', 4: 'cats'}
>>> ininb = text_analysis.InvertedIndexBased(ids)
>>>
>>> print(ininb._inverted_index)
[set([]) set([])]
"""
super(InvertedIndexBased, self).__init__(*args)
self._inverted_index = np.array([set() for _ in range(self._vocab_size)])
def _get_occurrences(self, word_id):
return len(self._inverted_index[word_id])
def _get_co_occurrences(self, word_id1, word_id2):
s1 = self._inverted_index[word_id1]
s2 = self._inverted_index[word_id2]
return len(s1.intersection(s2))
def index_to_dict(self):
contiguous2id = {n: word_id for word_id, n in self.id2contiguous.items()}
return {contiguous2id[n]: doc_id_set for n, doc_id_set in enumerate(self._inverted_index)}
class CorpusAccumulator(InvertedIndexBased):
"""Gather word occurrence stats from a corpus by iterating over its BoW representation."""
def analyze_text(self, text, doc_num=None):
"""Build an inverted index from a sequence of corpus texts."""
doc_words = frozenset(x[0] for x in text)
top_ids_in_doc = self.relevant_ids.intersection(doc_words)
for word_id in top_ids_in_doc:
self._inverted_index[self.id2contiguous[word_id]].add(self._num_docs)
def accumulate(self, corpus):
for document in corpus:
self.analyze_text(document)
self.num_docs += 1
return self
class WindowedTextsAnalyzer(UsesDictionary):
"""Gather some stats about relevant terms of a corpus by iterating over windows of texts."""
def __init__(self, relevant_ids, dictionary):
"""
Parameters
----------
relevant_ids : set of int
Relevant id
dictionary : :class:`~gensim.corpora.dictionary.Dictionary`
Dictionary instance with mappings for the relevant_ids.
"""
super(WindowedTextsAnalyzer, self).__init__(relevant_ids, dictionary)
self._none_token = self._vocab_size # see _iter_texts for use of none token
def accumulate(self, texts, window_size):
relevant_texts = self._iter_texts(texts)
windows = utils.iter_windows(
relevant_texts, window_size, ignore_below_size=False, include_doc_num=True)
for doc_num, virtual_document in windows:
self.analyze_text(virtual_document, doc_num)
self.num_docs += 1
return self
def _iter_texts(self, texts):
dtype = np.uint16 if np.iinfo(np.uint16).max >= self._vocab_size else np.uint32
for text in texts:
if self.text_is_relevant(text):
yield np.fromiter((
self.id2contiguous[self.token2id[w]] if w in self.relevant_words
else self._none_token
for w in text), dtype=dtype, count=len(text))
def text_is_relevant(self, text):
"""Check if the text has any relevant words."""
for word in text:
if word in self.relevant_words:
return True
return False
class InvertedIndexAccumulator(WindowedTextsAnalyzer, InvertedIndexBased):
"""Build an inverted index from a sequence of corpus texts."""
def analyze_text(self, window, doc_num=None):
for word_id in window:
if word_id is not self._none_token:
self._inverted_index[word_id].add(self._num_docs)
class WordOccurrenceAccumulator(WindowedTextsAnalyzer):
"""Accumulate word occurrences and co-occurrences from a sequence of corpus texts."""
def __init__(self, *args):
super(WordOccurrenceAccumulator, self).__init__(*args)
self._occurrences = np.zeros(self._vocab_size, dtype='uint32')
self._co_occurrences = sps.lil_matrix((self._vocab_size, self._vocab_size), dtype='uint32')
self._uniq_words = np.zeros((self._vocab_size + 1,), dtype=bool) # add 1 for none token
self._counter = Counter()
def __str__(self):
return self.__class__.__name__
def accumulate(self, texts, window_size):
self._co_occurrences = self._co_occurrences.tolil()
self.partial_accumulate(texts, window_size)
self._symmetrize()
return self
def partial_accumulate(self, texts, window_size):
"""Meant to be called several times to accumulate partial results.
Notes
-----
The final accumulation should be performed with the `accumulate` method as opposed to this one.
This method does not ensure the co-occurrence matrix is in lil format and does not
symmetrize it after accumulation.
"""
self._current_doc_num = -1
self._token_at_edge = None
self._counter.clear()
super(WordOccurrenceAccumulator, self).accumulate(texts, window_size)
for combo, count in self._counter.items():
self._co_occurrences[combo] += count
return self
def analyze_text(self, window, doc_num=None):
self._slide_window(window, doc_num)
mask = self._uniq_words[:-1] # to exclude none token
if mask.any():
self._occurrences[mask] += 1
self._counter.update(itertools.combinations(np.nonzero(mask)[0], 2))
def _slide_window(self, window, doc_num):
if doc_num != self._current_doc_num:
self._uniq_words[:] = False
self._uniq_words[np.unique(window)] = True
self._current_doc_num = doc_num
else:
self._uniq_words[self._token_at_edge] = False
self._uniq_words[window[-1]] = True
self._token_at_edge = window[0]
def _symmetrize(self):
"""Word pairs may have been encountered in (i, j) and (j, i) order.
Notes
-----
Rather than enforcing a particular ordering during the update process,
we choose to symmetrize the co-occurrence matrix after accumulation has completed.
"""
co_occ = self._co_occurrences
co_occ.setdiag(self._occurrences) # diagonal should be equal to occurrence counts
self._co_occurrences = \
co_occ + co_occ.T - sps.diags(co_occ.diagonal(), offsets=0, dtype='uint32')
def _get_occurrences(self, word_id):
return self._occurrences[word_id]
def _get_co_occurrences(self, word_id1, word_id2):
return self._co_occurrences[word_id1, word_id2]
def merge(self, other):
self._occurrences += other._occurrences
self._co_occurrences += other._co_occurrences
self._num_docs += other._num_docs
class PatchedWordOccurrenceAccumulator(WordOccurrenceAccumulator):
"""Monkey patched for multiprocessing worker usage, to move some of the logic to the master process."""
def _iter_texts(self, texts):
return texts # master process will handle this
class ParallelWordOccurrenceAccumulator(WindowedTextsAnalyzer):
"""Accumulate word occurrences in parallel.
Attributes
----------
processes : int
Number of processes to use; must be at least two.
args :
Should include `relevant_ids` and `dictionary` (see :class:`~UsesDictionary.__init__`).
kwargs :
Can include `batch_size`, which is the number of docs to send to a worker at a time.
If not included, it defaults to 64.
"""
def __init__(self, processes, *args, **kwargs):
super(ParallelWordOccurrenceAccumulator, self).__init__(*args)
if processes < 2:
raise ValueError(
"Must have at least 2 processes to run in parallel; got %d" % processes)
self.processes = processes
self.batch_size = kwargs.get('batch_size', 64)
def __str__(self):
return "%s<processes=%s, batch_size=%s>" % (
self.__class__.__name__, self.processes, self.batch_size)
def accumulate(self, texts, window_size):
workers, input_q, output_q = self.start_workers(window_size)
try:
self.queue_all_texts(input_q, texts, window_size)
interrupted = False
except KeyboardInterrupt:
logger.warning("stats accumulation interrupted; <= %d documents processed", self._num_docs)
interrupted = True
accumulators = self.terminate_workers(input_q, output_q, workers, interrupted)
return self.merge_accumulators(accumulators)
def start_workers(self, window_size):
"""Set up an input and output queue and start processes for each worker.
Notes
-----
The input queue is used to transmit batches of documents to the workers.
The output queue is used by workers to transmit the WordOccurrenceAccumulator instances.
Parameters
----------
window_size : int
Returns
-------
tuple
(list of workers, input queue, output queue).
"""
input_q = mp.Queue(maxsize=self.processes)
output_q = mp.Queue()
workers = []
for _ in range(self.processes):
accumulator = PatchedWordOccurrenceAccumulator(self.relevant_ids, self.dictionary)
worker = AccumulatingWorker(input_q, output_q, accumulator, window_size)
worker.start()
workers.append(worker)
return workers, input_q, output_q
def yield_batches(self, texts):
"""Return a generator over the given texts that yields batches of `batch_size` texts at a time."""
batch = []
for text in self._iter_texts(texts):
batch.append(text)
if len(batch) == self.batch_size:
yield batch
batch = []
if batch:
yield batch
def queue_all_texts(self, q, texts, window_size):
"""Sequentially place batches of texts on the given queue until `texts` is consumed.
The texts are filtered so that only those with at least one relevant token are queued.
"""
for batch_num, batch in enumerate(self.yield_batches(texts)):
q.put(batch, block=True)
before = self._num_docs / self.log_every
self._num_docs += sum(len(doc) - window_size + 1 for doc in batch)
if before < (self._num_docs / self.log_every):
logger.info(
"%d batches submitted to accumulate stats from %d documents (%d virtual)",
(batch_num + 1), (batch_num + 1) * self.batch_size, self._num_docs)
def terminate_workers(self, input_q, output_q, workers, interrupted=False):
"""Wait until all workers have transmitted their WordOccurrenceAccumulator instances, then terminate each.
Warnings
--------
We do not use join here because it has been shown to have some issues
in Python 2.7 (and even in later versions). This method also closes both the input and output queue.
If `interrupted` is False (normal execution), a None value is placed on the input queue for
each worker. The workers are looking for this sentinel value and interpret it as a signal to
terminate themselves. If `interrupted` is True, a KeyboardInterrupt occurred. The workers are
programmed to recover from this and continue on to transmit their results before terminating.
So in this instance, the sentinel values are not queued, but the rest of the execution
continues as usual.
"""
if not interrupted:
for _ in workers:
input_q.put(None, block=True)
accumulators = []
while len(accumulators) != len(workers):
accumulators.append(output_q.get())
logger.info("%d accumulators retrieved from output queue", len(accumulators))
for worker in workers:
if worker.is_alive():
worker.terminate()
input_q.close()
output_q.close()
return accumulators
def merge_accumulators(self, accumulators):
"""Merge the list of accumulators into a single `WordOccurrenceAccumulator` with all
occurrence and co-occurrence counts, and a `num_docs` that reflects the total observed
by all the individual accumulators.
"""
accumulator = WordOccurrenceAccumulator(self.relevant_ids, self.dictionary)
for other_accumulator in accumulators:
accumulator.merge(other_accumulator)
# Workers do partial accumulation, so none of the co-occurrence matrices are symmetrized.
# This is by design, to avoid unnecessary matrix additions/conversions during accumulation.
accumulator._symmetrize()
logger.info("accumulated word occurrence stats for %d virtual documents", accumulator.num_docs)
return accumulator
class AccumulatingWorker(mp.Process):
"""Accumulate stats from texts fed in from queue."""
def __init__(self, input_q, output_q, accumulator, window_size):
super(AccumulatingWorker, self).__init__()
self.input_q = input_q
self.output_q = output_q
self.accumulator = accumulator
self.accumulator.log_every = sys.maxsize # avoid logging in workers
self.window_size = window_size
def run(self):
try:
self._run()
except KeyboardInterrupt:
logger.info(
"%s interrupted after processing %d documents",
self.__class__.__name__, self.accumulator.num_docs)
except Exception:
logger.exception("worker encountered unexpected exception")
finally:
self.reply_to_master()
def _run(self):
batch_num = -1
n_docs = 0
while True:
batch_num += 1
docs = self.input_q.get(block=True)
if docs is None: # sentinel value
logger.debug("observed sentinel value; terminating")
break
self.accumulator.partial_accumulate(docs, self.window_size)
n_docs += len(docs)
logger.debug(
"completed batch %d; %d documents processed (%d virtual)",
batch_num, n_docs, self.accumulator.num_docs)
logger.debug(
"finished all batches; %d documents processed (%d virtual)",
n_docs, self.accumulator.num_docs)
def reply_to_master(self):
logger.info("serializing accumulator to return to master...")
self.output_q.put(self.accumulator, block=False)
logger.info("accumulator serialized")
class WordVectorsAccumulator(UsesDictionary):
"""Accumulate context vectors for words using word vector embeddings.
Attributes
----------
model: Word2Vec (:class:`~gensim.models.keyedvectors.KeyedVectors`)
If None, a new Word2Vec model is trained on the given text corpus. Otherwise,
it should be a set of pre-trained Word2Vec context vectors (KeyedVectors).
model_kwargs:
if model is None, these keyword arguments will be passed through to the Word2Vec constructor.
"""
def __init__(self, relevant_ids, dictionary, model=None, **model_kwargs):
super(WordVectorsAccumulator, self).__init__(relevant_ids, dictionary)
self.model = model
self.model_kwargs = model_kwargs
def not_in_vocab(self, words):
uniq_words = set(utils.flatten(words))
return set(word for word in uniq_words if word not in self.model)
def get_occurrences(self, word):
"""Return number of docs the word occurs in, once `accumulate` has been called."""
try:
self.token2id[word] # is this a token or an id?
except KeyError:
word = self.dictionary.id2token[word]
return self.model.get_vecattr(word, 'count')
def get_co_occurrences(self, word1, word2):
"""Return number of docs the words co-occur in, once `accumulate` has been called."""
raise NotImplementedError("Word2Vec model does not support co-occurrence counting")
def accumulate(self, texts, window_size):
if self.model is not None:
logger.debug("model is already trained; no accumulation necessary")
return self
kwargs = self.model_kwargs.copy()
if window_size is not None:
kwargs['window'] = window_size
kwargs['min_count'] = kwargs.get('min_count', 1)
kwargs['sg'] = kwargs.get('sg', 1)
kwargs['hs'] = kwargs.get('hs', 0)
self.model = Word2Vec(**kwargs)
self.model.build_vocab(texts)
self.model.train(texts, total_examples=self.model.corpus_count, epochs=self.model.epochs)
self.model = self.model.wv # retain KeyedVectors
return self
def ids_similarity(self, ids1, ids2):
words1 = self._words_with_embeddings(ids1)
words2 = self._words_with_embeddings(ids2)
return self.model.n_similarity(words1, words2)
def _words_with_embeddings(self, ids):
if not hasattr(ids, '__iter__'):
ids = [ids]
words = [self.dictionary.id2token[word_id] for word_id in ids]
return [word for word in words if word in self.model]
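if __name__ == '__main__':
    # Editor's note: hedged, minimal demo; not part of the upstream module.
    # CorpusAccumulator works directly on BoW documents, so no Dictionary is
    # required; the token ids below are arbitrary.
    demo_corpus = [[(1, 2), (3, 1)], [(1, 1), (4, 1)], [(4, 3)]]
    acc = CorpusAccumulator({1, 4}).accumulate(demo_corpus)
    print(acc.get_occurrences(1))        # 2: documents containing token id 1
    print(acc.get_co_occurrences(1, 4))  # 1: documents containing both ids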
|
RaRe-Technologies/gensim
|
gensim/topic_coherence/text_analysis.py
|
Python
|
lgpl-2.1
| 24,505
|
default_app_config = 'django.contrib.contenttypes.apps.ContentTypesConfig'
|
diego-d5000/MisValesMd
|
env/lib/python2.7/site-packages/django/contrib/contenttypes/__init__.py
|
Python
|
mit
| 76
|
import tempfile
import unittest
from collections import namedtuple
import six
from conans.client.rest.file_uploader import FileUploader
from conans.errors import AuthenticationException, ForbiddenException, InternalErrorException
from conans.test.utils.mocks import TestBufferConanOutput
from conans.util.files import save
class _ConfigMock:
def __init__(self):
self.retry = 0
self.retry_wait = 0
class MockRequester(object):
retry = 0
retry_wait = 0
def __init__(self, response):
self._response = response
def put(self, *args, **kwargs):
return namedtuple("response", "status_code content")(self._response, "tururu")
class UploaderUnitTest(unittest.TestCase):
def setUp(self):
_, self.f = tempfile.mkstemp()
save(self.f, "some contents")
self.out = TestBufferConanOutput()
def test_401_raises_unauthorized_exception(self):
uploader = FileUploader(MockRequester(401), self.out, verify=False, config=_ConfigMock())
with six.assertRaisesRegex(self, AuthenticationException, "tururu"):
uploader.upload("fake_url", self.f)
def test_403_raises_unauthorized_exception_if_no_token(self):
auth = namedtuple("auth", "token")(None)
uploader = FileUploader(MockRequester(403), self.out, verify=False, config=_ConfigMock())
with six.assertRaisesRegex(self, AuthenticationException, "tururu"):
uploader.upload("fake_url", self.f, auth=auth)
def test_403_raises_unauthorized_exception_if_no_auth(self):
uploader = FileUploader(MockRequester(403), self.out, verify=False, config=_ConfigMock())
with six.assertRaisesRegex(self, AuthenticationException, "tururu"):
uploader.upload("fake_url", self.f)
def test_403_raises_forbidden_exception_if_token(self):
auth = namedtuple("auth", "token")("SOMETOKEN")
uploader = FileUploader(MockRequester(403), self.out, verify=False, config=_ConfigMock())
with six.assertRaisesRegex(self, ForbiddenException, "tururu"):
uploader.upload("fake_url", self.f, auth=auth)
def test_500_raises_internal_error(self):
out = TestBufferConanOutput()
uploader = FileUploader(MockRequester(500), out, verify=False, config=_ConfigMock())
_, f = tempfile.mkstemp()
save(f, "some contents")
with six.assertRaisesRegex(self, InternalErrorException, "tururu"):
uploader.upload("fake_url", self.f, dedup=True)
|
conan-io/conan
|
conans/test/unittests/client/rest/uploader_test.py
|
Python
|
mit
| 2,508
|
from a10sdk.common.A10BaseClass import A10BaseClass
class Udp(A10BaseClass):
"""Class Description::
Set UDP STUN timeout.
Class udp supports CRUD Operations and inherits from `common/A10BaseClass`.
This class is the `"PARENT"` class for this module.
:param port_start: {"description": "Port Range (Port Range Start)", "format": "number", "type": "number", "maximum": 65535, "minimum": 1, "optional": false}
:param port_end: {"description": "Port Range (Port Range End)", "format": "number", "type": "number", "maximum": 65535, "minimum": 1, "optional": false}
:param timeout: {"description": "STUN timeout in minutes (default: 2 minutes)", "format": "number", "type": "number", "maximum": 60, "minimum": 0, "optional": true}
:param uuid: {"description": "uuid of the object", "format": "string", "minLength": 1, "modify-not-allowed": 1, "optional": true, "maxLength": 64, "type": "string"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
URL for this object::
`https://<Hostname|Ip address>/axapi/v3/cgnv6/lsn/stun-timeout/udp/{port_start}+{port_end}`.
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.required = [ "port_start","port_end"]
self.b_key = "udp"
self.a10_url="/axapi/v3/cgnv6/lsn/stun-timeout/udp/{port_start}+{port_end}"
self.DeviceProxy = ""
self.port_start = ""
self.port_end = ""
self.timeout = ""
self.uuid = ""
for keys, value in kwargs.items():
setattr(self,keys, value)
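# --- Editor's note: hedged usage sketch, not part of the generated module. ---
# (Device proxy construction is assumed, and `create()` is assumed from the
# CRUD helpers that A10BaseClass advertises in the docstring.)
#   udp = Udp(port_start=3478, port_end=3479, timeout=5)
#   udp.DeviceProxy = device_proxy   # see common/device_proxy.py
#   udp.create()                     # POSTs to the a10_url defined above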
|
amwelch/a10sdk-python
|
a10sdk/core/cgnv6/cgnv6_lsn_stun_timeout_udp.py
|
Python
|
apache-2.0
| 1,644
|
import os
import unittest
from application import create_app
config_name = 'development'
app = create_app(config_name)
@app.cli.command()
def test():
tests = unittest.TestLoader().discover('tests')
unittest.TextTestRunner(verbosity=2).run(tests)
if __name__ == '__main__':
app.run()
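# --- Editor's note: hedged usage sketch, not part of the original file. ---
# The custom command registered above is invoked through Flask's CLI, e.g.:
#   FLASK_APP=run.py flask test
# while `python run.py` starts the development server directly.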
|
mrpoor/heart_telehealth
|
run.py
|
Python
|
mit
| 300
|
#!/usr/bin/env python
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
"""
Support for amqp 'reference' content (as opposed to inline content)
"""
import threading
from queue import Queue, Closed
class NotOpened(Exception): pass
class AlreadyOpened(Exception): pass
"""
A representation of a reference id; can be passed wherever amqp
content is required in place of inline data
"""
class ReferenceId:
def __init__(self, id):
self.id = id
"""
Holds content received through the 'reference api'. Instances of this
class will be placed in the consumer's queue on receiving a transfer
(assuming the reference has been opened). Data can be retrieved in
chunks (as append calls are received) or in full (after the reference
has been closed, signalling the data is complete).
"""
class Reference:
def __init__(self, id):
self.id = id
self.chunks = Queue(0)
def close(self):
self.chunks.close()
def append(self, bytes):
self.chunks.put(bytes)
def get_chunk(self):
return self.chunks.get()
def get_complete(self):
data = ""
for chunk in self:
data += chunk
return data
def next(self):
try:
return self.get_chunk()
except Closed:
raise StopIteration
def __iter__(self):
return self
"""
Manages a set of opened references. New references can be opened and
existing references can be retrieved or closed.
"""
class References:
def __init__(self):
self.map = {}
self.lock = threading.Lock()
def get(self, id):
self.lock.acquire()
try:
try:
ref = self.map[id]
except KeyError:
raise NotOpened()
finally:
self.lock.release()
return ref
def open(self, id):
self.lock.acquire()
try:
if id in self.map: raise AlreadyOpened()
self.map[id] = Reference(id)
finally:
self.lock.release()
def close(self, id):
self.get(id).close()
self.lock.acquire()
try:
self.map.pop(id)
finally:
self.lock.release()
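# --- Editor's note: hedged usage sketch, not part of the original module. ---
# Typical flow, with one thread appending chunks while another consumes:
#   refs = References()
#   refs.open("msg-1")
#   ref = refs.get("msg-1")
#   ref.append("chunk one ")
#   ref.append("chunk two")
#   refs.close("msg-1")      # signals completion and drops it from the map
#   ref.get_complete()       # -> "chunk one chunk two"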
|
mbroadst/debian-qpid-python
|
qpid/reference.py
|
Python
|
apache-2.0
| 2,955
|
"""FimPolicy and FimBaseline classes"""
import cloudpassage.sanity as sanity
from .halo_endpoint import HaloEndpoint
from .http_helper import HttpHelper
class FimPolicy(HaloEndpoint):
"""FimPolicy class:
The list_all() method allows filtering of results with keyword arguments.
An exhaustive list of keyword arguments can be found here:
https://api-doc.cloudpassage.com/help#file-integrity-policies
Args:
session (:class:`cloudpassage.HaloSession`): This will define how you
interact with the Halo API, including proxy settings and API keys
used for authentication.
Keyword args:
endpoint_version (int): Endpoint version override.
"""
object_name = "fim_policy"
objects_name = "fim_policies"
default_endpoint_version = 1
def endpoint(self):
"""Return endpoint for API requests."""
return "/v{}/{}".format(self.endpoint_version, self.objects_name)
@classmethod
def pagination_key(cls):
"""Defines the pagination key for parsing paged results"""
return cls.objects_name
@classmethod
def object_key(cls):
"""Defines the key used to pull the policy from the json document"""
return cls.object_name
class FimBaseline(HaloEndpoint):
"""Initializing the FimBaseline class:
Args:
session (:class:`cloudpassage.HaloSession`): This will define how you
interact with the Halo API, including proxy settings and API keys
used for authentication.
"""
object_name = "baseline"
objects_name = "baselines"
default_endpoint_version = 1
def endpoint(self, policy_id):
"""Return endpoint for API requests."""
return "/v{}/fim_policies/{}/{}".format(self.endpoint_version,
policy_id, self.objects_name)
def list_all(self, fim_policy_id):
"""Returns a list of all baselines for the indicated FIM policy
Args:
fim_policy_id (str): ID of fim policy
Returns:
list: List of all baselines for the given policy
"""
request = HttpHelper(self.session)
endpoint = self.endpoint(fim_policy_id)
max_pages = 30
response = request.get_paginated(endpoint, self.objects_name,
max_pages)
return response
def describe(self, fim_policy_id, baseline_id):
"""Returns the body of the baseline indicated by fim_baseline_id.
Args
fim_policy_id (str): ID of FIM policy
fim_baseline_id (str): ID of baseline
Returns:
dict: Dictionary describing FIM baseline
"""
request = HttpHelper(self.session)
endpoint = "{}/{}/details".format(self.endpoint(fim_policy_id),
baseline_id)
response = request.get(endpoint)
result = response[self.object_name]
return result
def create(self, fim_policy_id, server_id, **kwargs):
"""Creates a FIM baseline
Args:
fim_policy_id (str): ID of FIM policy to baseline
server_id (str): ID of server to use for generating baseline
Keyword Args:
expires (int): Number of days from today for expiration of baseline
comment (str): Comment to attach to the baseline.
Returns:
str: ID of new baseline
"""
sanity.validate_object_id([fim_policy_id, server_id])
request = HttpHelper(self.session)
endpoint = self.endpoint(fim_policy_id)
request_body = {"baseline": {"server_id": server_id,
"expires": None,
"comment": None}}
if "expires" in kwargs:
request_body["baseline"]["expires"] = kwargs["expires"]
if "comment" in kwargs:
request_body["baseline"]["comment"] = kwargs["comment"]
response = request.post(endpoint, request_body)
policy_id = response["baseline"]["id"]
return policy_id
def delete(self, fim_policy_id, fim_baseline_id):
"""Delete a FIM baseline by ID
Args:
fim_policy_id (str): ID of FIM policy
fim_baseline_id (str): ID of baseline to be deleted
Returns:
None if successful; an exception is raised otherwise.
"""
sanity.validate_object_id([fim_policy_id, fim_baseline_id])
request = HttpHelper(self.session)
endpoint = "{}/{}".format(self.endpoint(fim_policy_id),
fim_baseline_id)
request.delete(endpoint)
return None
def update(self, fim_policy_id, fim_baseline_id, server_id):
"""Update a FIM policy baseline.
Args:
fim_policy_id (str): ID of fim policy
fim_baseline_id (str): ID of baseline to be updated
server_id (str): ID of server to use when generating new baseline
Returns:
None if successful; an exception is raised otherwise.
"""
sanity.validate_object_id([fim_policy_id, fim_baseline_id, server_id])
request = HttpHelper(self.session)
endpoint = "{}/{}".format(self.endpoint(fim_policy_id),
fim_baseline_id)
request_body = {"baseline": {"server_id": server_id}}
request.put(endpoint, request_body)
return None
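# --- Editor's note: hedged usage sketch, not part of the upstream module. ---
# (HaloSession construction is assumed; the key/secret are placeholders.)
#   import cloudpassage
#   session = cloudpassage.HaloSession(api_key, api_secret)
#   baselines = FimBaseline(session)
#   new_id = baselines.create(policy_id, server_id, expires=30,
#                             comment="golden image")
#   baselines.describe(policy_id, new_id)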
|
cloudpassage/cloudpassage-halo-python-sdk
|
cloudpassage/fim_policy.py
|
Python
|
bsd-3-clause
| 5,464
|
__author__ = 'anthony <>'
from collections import OrderedDict
from django import forms
class FormOrderMixin(object):
def order_fields(self, field_order):
"""
Rearranges the fields according to field_order.
field_order is a list of field names specifying the order. Fields not
included in the list are appended in the default order for backward
compatibility with subclasses not overriding field_order. If field_order
is None, all fields are kept in the order defined in the class.
Unknown fields in field_order are ignored to allow disabling fields in
form subclasses without redefining ordering.
"""
if field_order is None:
return
fields = OrderedDict()
for key in field_order:
try:
fields[key] = self.fields.pop(key)
except KeyError: # ignore unknown fields
pass
fields.update(self.fields) # add remaining fields in original order
self.fields = fields
def get_form_field_no_validation(fieldname):
class FieldNoValidation(fieldname):
def clean(self, value):
return value
return FieldNoValidation
class Icons(object):
icons = {}
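# --- Editor's note: hedged usage sketch, not part of the original module. ---
#   class ContactForm(FormOrderMixin, forms.Form):
#       email = forms.EmailField()
#       name = forms.CharField()
#
#       def __init__(self, *args, **kwargs):
#           super(ContactForm, self).__init__(*args, **kwargs)
#           # 'name' first; unknown names in the list are silently ignored
#           self.order_fields(['name', 'email'])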
|
unicefuganda/uSurvey
|
survey/forms/form_helper.py
|
Python
|
bsd-3-clause
| 1,258
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import unittest
class PyDashingUt(unittest.TestCase):
def test_basic(self):
print "Basic test"
|
bdastur/utils
|
testdash/tests/pydashingut.py
|
Python
|
apache-2.0
| 166
|
# -*- coding: utf-8 -*-
from __future__ import print_function
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
"""
Test validating of GlobalSection.
"""
from base import TestBase
from virtwho.config import GlobalSection, str_to_bool, MinimumSendInterval, DefaultInterval
from virtwho.log import DEFAULT_LOG_DIR
# Values used for testing GlobalConfigSection
GLOBAL_SECTION_VALUES = {
'debug': True,
'oneshot': 'true',
'print': 'false',
'wrong_bool_value': 'nein',
'configs': ['local_libvirt.conf'],
'log_file': 'my_custom.log',
'interval': '120'
}
class TestGlobalConfigSection(TestBase):
"""
Test base for testing class GlobalSection
"""
def setUp(self):
self.init_global_config_section()
def init_global_config_section(self):
"""
Method executed before each unit test
"""
self.global_config = GlobalSection('global', None)
# We need to set values using this way, because we need
# to trigger __setitem__ of global_config
for key, value in GLOBAL_SECTION_VALUES.items():
self.global_config[key] = value
def test_validate_debug_value_bool(self):
"""
Test validating of correct bool value
"""
result = self.global_config._validate_str_to_bool('debug')
self.assertIsNone(result)
value = self.global_config.get('debug')
self.assertIs(value, True)
def test_validate_oneshot_value_string_true(self):
"""
Test validating of correct bool value stored as string ('true')
"""
result = self.global_config._validate_str_to_bool('oneshot')
self.assertIsNone(result)
value = self.global_config.get('oneshot')
self.assertIs(value, True)
def test_validate_print_value_string_false(self):
"""
Test validating of correct bool value stored as string ('false')
"""
result = self.global_config._validate_str_to_bool('print')
self.assertIsNone(result)
value = self.global_config.get('print')
self.assertIs(value, False)
def test_validate_wrong_bool_value(self):
"""
Test validating of wrong string representing bool ('nein')
"""
self.assertRaises(ValueError, str_to_bool, self.global_config['wrong_bool_value'])
result = self.global_config._validate_str_to_bool('wrong_bool_value')
self.assertIsNotNone(result)
self.assertIsInstance(result, tuple)
def test_validate_non_existing_key(self):
"""
Test validation of non-existing key
"""
result = self.global_config._validate_str_to_bool('does_not_exist')
self.assertIsNotNone(result)
self.assertIsInstance(result, tuple)
def test_validate_string(self):
"""
Test validation of correct string value
"""
result = self.global_config._validate_non_empty_string('log_file')
self.assertIsNone(result)
def test_validate_interval(self):
"""
Test validation of time interval
"""
result = self.global_config._validate_interval('interval')
self.assertIsNone(result)
def test_validate_wrong_interval(self):
"""
Test validation of wrong time interval
"""
self.global_config['interval'] = '10'
self.global_config.validate()
interval = self.global_config.get('interval')
# The behavior of this has changed. We now replace any invalid value with the default
# No special cases
self.assertIs(interval, DefaultInterval)
def test_validate_missing_interval(self):
"""
Test validation of missing time interval
"""
self.global_config = GlobalSection('global', None)
for key, value in GLOBAL_SECTION_VALUES.items():
if key != 'interval':
self.global_config[key] = value
result = self.global_config._validate_interval('interval')
self.assertIsNotNone(result)
interval = self.global_config['interval']
self.assertIs(interval, DefaultInterval)
def test_validate_configs(self):
"""
Test validation of configs (list of paths to config files)
"""
result = self.global_config._validate_configs()
self.assertIsNone(result)
def test_validate_section_values(self):
"""
Test validation of all config values
"""
validate_messages = self.global_config.validate()
# TODO: use following expected messages, when format of warning/error messages will be settled down.
# expected_results = [
# ('warning', 'Ignoring unknown configuration option "wrong_bool_value"'),
# (
# 'warning',
# 'log_per_config must be a valid boolean, using default. See man virt-who-config for more info'
# ),
# ('warning', 'Value for reporter_id not set, using default'),
# ('warning', 'Value for log_dir not set, using default'),
# ]
self.assertGreater(len(validate_messages), 0)
# self.assertEqual(expected_results, validate_messages)
def test_validate_default_value(self):
"""
Test the validate method will set default value, when an option
is missing and default value exist in DEFAULT
"""
self.global_config.validate()
default_value = self.global_config.get('log_dir')
self.assertIs(default_value, DEFAULT_LOG_DIR)
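# --- Editor's note: hedged sketch, not part of the upstream test file. ---
# The pattern these tests pin down, in miniature: validate() replaces
# invalid values with defaults instead of raising.
#   section = GlobalSection('global', None)
#   section['interval'] = '10'   # below the minimum send interval
#   section.validate()
#   section.get('interval')      # -> DefaultInterval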
|
virt-who/virt-who
|
tests/test_config_section_global.py
|
Python
|
gpl-2.0
| 6,223
|
#----------------------------------------------------------------------
# This file was generated by D:\personal\src\airs\gui\images\anim\make_images.py
#
from wx import ImageFromStream, BitmapFromImage, EmptyIcon
import cStringIO, zlib
def getData():
return zlib.decompress(
'x\xda\x01\x9f\x03`\xfc\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00$\x00\
\x00\x00$\x08\x02\x00\x00\x00nb\x0f\xcf\x00\x00\x00\x03sBIT\x08\x08\x08\xdb\
\xe1O\xe0\x00\x00\x03WIDATH\x89\xa5\x97m\x93\x9b6\x14\x85\xcf\x95d\x8c]\xdaI\
\xb6\xc9L2\x93v\xf2\xff\xffT\xb2\xf9\xb0\xc9\xecn\x9b\x9d\xd6\xb1\x01K\xf7\
\xf4\x83\x00\x0b,X6\xbd\xc3\xd8\x06#=\xdc\x17\x9d+DU\xb1\xce<er\xc5\tW\x8e\
\xed\xee_\xc3\xf0\x8c\x93^O}\xc1\xaf\x01\xcb\x82g\xb5\xc2+\x00\x18\x81\x99z\
\x05\'W\x97\x9eC\xe6=\xf3\x94:,\x0e\xcb\x91\xe2\xc0\x05^\x06V+\xbcN\x07(3\
\xce\xbd\xd4\xcc5\xa9\x0e\xf0/K\xfcO\xc1")Z\xca\xbb\xf2s\xd6\x96sv\x81y\xca@\
\xcaZ\x8a\xec\xeb\xf3\x05\xa4\x11\xec\xba"\x16\x9c\x9b+\x90U\xb0c@\x9d\x0b\
\xd6\xfa\xe4\xcd\xb9\xf5\xf9\xa8_O]\xc4\xbajl\x15\x00\xc8\x00@\xc4NxN\x80\
\xc5\x82\xcc\x92\x1e[~\xfa\xa1M\xa03x[\x8a\x13:\xc4Z\xefs\x10`,C\x96\xe7\x0c\
\x9c\x0c\xcb\x88\xfd_\x19\xfe\xc1\xe3\xf6\xa8\xdf\xdbN.\xbc\xe2\xa1\xf6\xefw\
\xd6\x01\x97\n\x0c0\xf1\xd3\x01\x1c#K\x1b1S\x0f&>y\xca\x97c\xb8;MU\xe9[\x83\
\xf7;8O\xf1\x9c\xfe\xe7I\'\xd6\x93\x16*b+\xb7\xa4\x0b\x03\xe6\xa1\xf6\xb7\'\
\xfa\x9c\xfc\x9d\x02k\x85\xf3d6\x191\xb0\x01foV\x89\xec\x97c\xb8;\xcd\xde\
\xa6\xc4\xc9\x07\xa3\x84\x02\xca\xcb1\xb1b\x9dL\xdd\x14S1\x9a\xd8\x8f`\\\x9c\
=u=^1=ie\xd3\xaa\x9cl\xad4\xf3\xf2\xdd*\xcd\xdcJ\xd2\xf1\x13<kN\xb8]\xf4\xed\
\x14\xae\x84\xf8\xff[l~C\xec\x93\x1f\xcc\xf73\xaa\x02\x080-\x98\xb6\xe3\x05\
\xf3\x940.\xb4\x94\xa7\xc4\xd6\x88\x03\xe0\x15\x16\xdd\xec\xc3\x9d\xf1\xab\
\xd5g\xfa\xe1`u`\xa3\x1c\x00\xd9B3\xa5E\xc4dC\xea\xc9\xc3:}\xbc\xabC\n\x18\
\x829\x1c\x95\x13\xb3\xa0\xdfq\xa9=\x9d\xf5\xb8\xd8z\x00\xdc7\xfc\xbb\xe5\
\xc2\x1a1\x82\xd2\xc28\xa1\xbbr\xc9\x93i\xc7\xbao\xf4\xe0\x97H\xb7G\xb5"\xf1\
\xb8\xbeA\x89\xcaIi\xe0\x00TV\x1e\x83:\xe9\xb7l\xf13\x19vV\xdc\xd5\xe1W\x8b\
\xd7[;hq\xdc\x14\xdd7\xfaW;\x8a\xb3\x15\t\xe3\xd6\xaa\xc4\xef\x1b \xb6\x98\
\xd2\xca\x85\x14\xad\'\ri\x10\xc8\x93\xe7\x93W\'\xb0\x02\x00\x8d2\xeeVb\x03J\
3\x1b\xfd\x1b\x90[+o\nA\xac:\'\xac\x9c(\xd1\x89\xfa\x98\x145\x81\xa0\xed\'=x\
\x1e<\xcf\n+\x1d\t\xb8\xfcH\x91VD\x89\xb7[\xd9X\x83\xa1\xc4+\'\x85\x91tI\xa5\
\xa4h\xa1o]Vd\xe5\xb6.\x90\x95\x93\x0f\xbb\xaeU\x99\xfe\xb9xS\xc8\x10\xb4\
\xc9*a\xd2\xc6\x94\xddi\xe4\xd9\xe7\xa8\x7f\xee\xed\xb0L/\x85\xb8\xb7\xb8)L*\
\xfc\x8b{\xe2\x8cM"\x19\xc8\x8f\xbf\xd8W\x9b\xcb\x95Q\xd5\xbf\xda\xe0\xb7\
\x8d\x84q\x00\xa3\x1f!\xd7\xfe\xa3\xa5\xce\xb9>\x8b\x81\xfc\xb0\xb3\xef\xb6\
\xa3Q\x99\x17\x8b\xa73\x1e\x1a%(\x90\x08KI\xda\xd7Kz\x9a\xc6 \x96\xe5\x1f{3!\
\xe5a\x00\x0e\x1e\xf7\x8d6\n\xdb\x87{\xe0\r9\xcb\xc2<\xb1\xb3\xf8\xb8\xb7UN\
\xe1g_\x99<\xe5\xb1\xd5\x7f\xce\x1a\x88I\x95fa\x9e\xd8\x1a\xdc\x14\xe6]i\xe6\
\x84{\xe9\xfd,"\xff=\xfb\xefgi\xb4\xc3L\n5\x10V\xb01\xf2\xba\x907\xc5,f\x15l\
\xb0\xfe\xed&\xaa\x06\xa3|;\x83\xd2HiQ\xae\xeb\xc1\xff\x01~\xff\x10pu~\xc8#\
\x00\x00\x00\x00IEND\xaeB`\x82\xdbm\xc0z' )
def getBitmap():
return BitmapFromImage(getImage())
def getImage():
stream = cStringIO.StringIO(getData())
return ImageFromStream(stream)
|
jorgb/airs
|
gui/images/anim/progress_1_04.py
|
Python
|
gpl-2.0
| 3,301
|
from django.contrib import admin
class DvdCategoryAdmin(admin.ModelAdmin):
pass
class DvdAdmin(admin.ModelAdmin):
pass
class UserDvdAdmin(admin.ModelAdmin):
def get_queryset(self, request):
"""
        Show only the current user's objects.
"""
qs = super(UserDvdAdmin, self).get_queryset(request)
return qs.filter(owner=request.user)
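# A minimal wiring sketch (not in the original file): how these admin classes
# would typically be registered, assuming the app's models module defines
# DvdCategory, Dvd and UserDvd (the model names are assumptions here):
#
# from .models import DvdCategory, Dvd, UserDvd
# admin.site.register(DvdCategory, DvdCategoryAdmin)
# admin.site.register(Dvd, DvdAdmin)
# admin.site.register(UserDvd, UserDvdAdmin)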
|
barszczmm/django-wpadmin
|
test_project/apps/dvds/admin.py
|
Python
|
mit
| 382
|
# -*- coding: utf-8 -*-
"""This file is part of the TPOT library.
TPOT was primarily developed at the University of Pennsylvania by:
- Randal S. Olson (rso@randalolson.com)
- Weixuan Fu (weixuanf@upenn.edu)
- Daniel Angell (dpa34@drexel.edu)
- and many more generous open source contributors
TPOT is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as
published by the Free Software Foundation, either version 3 of
the License, or (at your option) any later version.
TPOT is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with TPOT. If not, see <http://www.gnu.org/licenses/>.
"""
import numpy as np
# Check the TPOT documentation for information on the structure of config dicts
regressor_config_dict = {
'sklearn.linear_model.ElasticNetCV': {
'l1_ratio': np.arange(0.0, 1.01, 0.05),
'tol': [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]
},
'sklearn.ensemble.ExtraTreesRegressor': {
'n_estimators': [100],
'max_features': np.arange(0.05, 1.01, 0.05),
'min_samples_split': range(2, 21),
'min_samples_leaf': range(1, 21),
'bootstrap': [True, False]
},
'sklearn.ensemble.GradientBoostingRegressor': {
'n_estimators': [100],
'loss': ["ls", "lad", "huber", "quantile"],
'learning_rate': [1e-3, 1e-2, 1e-1, 0.5, 1.],
'max_depth': range(1, 11),
'min_samples_split': range(2, 21),
'min_samples_leaf': range(1, 21),
'subsample': np.arange(0.05, 1.01, 0.05),
'max_features': np.arange(0.05, 1.01, 0.05),
'alpha': [0.75, 0.8, 0.85, 0.9, 0.95, 0.99]
},
'sklearn.ensemble.AdaBoostRegressor': {
'n_estimators': [100],
'learning_rate': [1e-3, 1e-2, 1e-1, 0.5, 1.],
'loss': ["linear", "square", "exponential"],
'max_depth': range(1, 11)
},
'sklearn.tree.DecisionTreeRegressor': {
'max_depth': range(1, 11),
'min_samples_split': range(2, 21),
'min_samples_leaf': range(1, 21)
},
'sklearn.neighbors.KNeighborsRegressor': {
'n_neighbors': range(1, 101),
'weights': ["uniform", "distance"],
'p': [1, 2]
},
'sklearn.linear_model.LassoLarsCV': {
'normalize': [True, False]
},
'sklearn.svm.LinearSVR': {
'loss': ["epsilon_insensitive", "squared_epsilon_insensitive"],
'dual': [True, False],
'tol': [1e-5, 1e-4, 1e-3, 1e-2, 1e-1],
'C': [1e-4, 1e-3, 1e-2, 1e-1, 0.5, 1., 5., 10., 15., 20., 25.],
'epsilon': [1e-4, 1e-3, 1e-2, 1e-1, 1.]
},
'sklearn.ensemble.RandomForestRegressor': {
'n_estimators': [100],
'max_features': np.arange(0.05, 1.01, 0.05),
'min_samples_split': range(2, 21),
'min_samples_leaf': range(1, 21),
'bootstrap': [True, False]
},
'sklearn.linear_model.RidgeCV': {
},
'xgboost.XGBRegressor': {
'n_estimators': [100],
'max_depth': range(1, 11),
'learning_rate': [1e-3, 1e-2, 1e-1, 0.5, 1.],
'subsample': np.arange(0.05, 1.01, 0.05),
'min_child_weight': range(1, 21),
'nthread': [1]
},
    # Preprocessors
'sklearn.preprocessing.Binarizer': {
'threshold': np.arange(0.0, 1.01, 0.05)
},
'sklearn.decomposition.FastICA': {
'tol': np.arange(0.0, 1.01, 0.05)
},
'sklearn.cluster.FeatureAgglomeration': {
'linkage': ['ward', 'complete', 'average'],
'affinity': ['euclidean', 'l1', 'l2', 'manhattan', 'cosine']
},
'sklearn.preprocessing.MaxAbsScaler': {
},
'sklearn.preprocessing.MinMaxScaler': {
},
'sklearn.preprocessing.Normalizer': {
'norm': ['l1', 'l2', 'max']
},
'sklearn.kernel_approximation.Nystroem': {
'kernel': ['rbf', 'cosine', 'chi2', 'laplacian', 'polynomial', 'poly', 'linear', 'additive_chi2', 'sigmoid'],
'gamma': np.arange(0.0, 1.01, 0.05),
'n_components': range(1, 11)
},
'sklearn.decomposition.PCA': {
'svd_solver': ['randomized'],
'iterated_power': range(1, 11)
},
'sklearn.preprocessing.PolynomialFeatures': {
'degree': [2],
'include_bias': [False],
'interaction_only': [False]
},
'sklearn.kernel_approximation.RBFSampler': {
'gamma': np.arange(0.0, 1.01, 0.05)
},
'sklearn.preprocessing.RobustScaler': {
},
'sklearn.preprocessing.StandardScaler': {
},
'tpot.builtins.ZeroCount': {
},
'tpot.builtins.OneHotEncoder': {
'minimum_fraction': [0.05, 0.1, 0.15, 0.2, 0.25],
'sparse': [False],
'threshold': [10]
},
# Selectors
'sklearn.feature_selection.SelectFwe': {
'alpha': np.arange(0, 0.05, 0.001),
'score_func': {
'sklearn.feature_selection.f_regression': None
}
},
'sklearn.feature_selection.SelectPercentile': {
'percentile': range(1, 100),
'score_func': {
'sklearn.feature_selection.f_regression': None
}
},
'sklearn.feature_selection.VarianceThreshold': {
'threshold': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.2]
},
'sklearn.feature_selection.SelectFromModel': {
'threshold': np.arange(0, 1.01, 0.05),
'estimator': {
'sklearn.ensemble.ExtraTreesRegressor': {
'n_estimators': [100],
'max_features': np.arange(0.05, 1.01, 0.05)
}
}
}
}
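# A minimal usage sketch (not part of this config module): TPOT estimators
# accept a dictionary with this structure through the `config_dict` argument,
# so the dict above can be plugged in directly.
#
# from tpot import TPOTRegressor
# tpot = TPOTRegressor(generations=5, population_size=20,
#                      config_dict=regressor_config_dict)
# tpot.fit(X_train, y_train)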
|
weixuanfu2016/tpot
|
tpot/config/regressor.py
|
Python
|
lgpl-3.0
| 5,792
|
"""Define tests for the GeoNet NZ Quakes general setup."""
from unittest.mock import patch
from homeassistant.components.geonetnz_quakes import DOMAIN, FEED
async def test_component_unload_config_entry(hass, config_entry):
"""Test that loading and unloading of a config entry works."""
config_entry.add_to_hass(hass)
with patch(
"aio_geojson_geonetnz_quakes.GeonetnzQuakesFeedManager.update"
) as mock_feed_manager_update:
# Load config entry.
assert await hass.config_entries.async_setup(config_entry.entry_id)
await hass.async_block_till_done()
assert mock_feed_manager_update.call_count == 1
assert hass.data[DOMAIN][FEED][config_entry.entry_id] is not None
# Unload config entry.
assert await hass.config_entries.async_unload(config_entry.entry_id)
await hass.async_block_till_done()
assert hass.data[DOMAIN][FEED].get(config_entry.entry_id) is None
|
jawilson/home-assistant
|
tests/components/geonetnz_quakes/test_init.py
|
Python
|
apache-2.0
| 953
|
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class UpgradePolicy(Model):
"""Describes an upgrade policy - automatic, manual, or rolling.
:param mode: Specifies the mode of an upgrade to virtual machines in the
scale set.<br /><br /> Possible values are:<br /><br /> **Manual** - You
control the application of updates to virtual machines in the scale set.
You do this by using the manualUpgrade action.<br /><br /> **Automatic** -
All virtual machines in the scale set are automatically updated at the
same time. Possible values include: 'Automatic', 'Manual', 'Rolling'
:type mode: str or ~azure.mgmt.compute.v2017_03_30.models.UpgradeMode
:param rolling_upgrade_policy: The configuration parameters used while
performing a rolling upgrade.
:type rolling_upgrade_policy:
~azure.mgmt.compute.v2017_03_30.models.RollingUpgradePolicy
:param automatic_os_upgrade: Whether OS upgrades should automatically be
applied to scale set instances in a rolling fashion when a newer version
of the image becomes available.
:type automatic_os_upgrade: bool
"""
_attribute_map = {
'mode': {'key': 'mode', 'type': 'UpgradeMode'},
'rolling_upgrade_policy': {'key': 'rollingUpgradePolicy', 'type': 'RollingUpgradePolicy'},
'automatic_os_upgrade': {'key': 'automaticOSUpgrade', 'type': 'bool'},
}
def __init__(self, *, mode=None, rolling_upgrade_policy=None, automatic_os_upgrade: bool=None, **kwargs) -> None:
super(UpgradePolicy, self).__init__(**kwargs)
self.mode = mode
self.rolling_upgrade_policy = rolling_upgrade_policy
self.automatic_os_upgrade = automatic_os_upgrade
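# A minimal construction sketch (not part of the generated model; the import
# path and parameter values below are assumptions based on the docstring):
#
# from azure.mgmt.compute.v2017_03_30.models import RollingUpgradePolicy
# policy = UpgradePolicy(
#     mode='Rolling',
#     rolling_upgrade_policy=RollingUpgradePolicy(),
#     automatic_os_upgrade=True,
# )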
|
lmazuel/azure-sdk-for-python
|
azure-mgmt-compute/azure/mgmt/compute/v2017_03_30/models/upgrade_policy_py3.py
|
Python
|
mit
| 2,176
|
# disable missing docstring
#pylint: disable=C0111
from lettuce import world
from nose.tools import assert_equal
from terrain.steps import reload_the_page
@world.absorb
def create_component_instance(step, component_button_css, category,
expected_css, boilerplate=None,
has_multiple_templates=True):
click_new_component_button(step, component_button_css)
if has_multiple_templates:
click_component_from_menu(category, boilerplate, expected_css)
assert_equal(1, len(world.css_find(expected_css)))
@world.absorb
def click_new_component_button(step, component_button_css):
step.given('I have clicked the new unit button')
world.css_click(component_button_css)
@world.absorb
def click_component_from_menu(category, boilerplate, expected_css):
"""
Creates a component from `instance_id`. For components with more
than one template, clicks on `elem_css` to create the new
component. Components with only one template are created as soon
as the user clicks the appropriate button, so we assert that the
expected component is present.
"""
if boilerplate:
elem_css = "a[data-category='{}'][data-boilerplate='{}']".format(category, boilerplate)
else:
elem_css = "a[data-category='{}']:not([data-boilerplate])".format(category)
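    # e.g. category='html' with boilerplate='announcement.yaml' yields
    # "a[data-category='html'][data-boilerplate='announcement.yaml']"
    # (illustrative values, not taken from the original code).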
elements = world.css_find(elem_css)
assert_equal(len(elements), 1)
world.css_click(elem_css)
@world.absorb
def edit_component_and_select_settings():
world.css_click('a.edit-button')
world.css_click('#settings-mode')
@world.absorb
def verify_setting_entry(setting, display_name, value, explicitly_set):
assert_equal(display_name, setting.find_by_css('.setting-label')[0].value)
assert_equal(value, setting.find_by_css('.setting-input')[0].value)
settingClearButton = setting.find_by_css('.setting-clear')[0]
assert_equal(explicitly_set, settingClearButton.has_class('active'))
assert_equal(not explicitly_set, settingClearButton.has_class('inactive'))
@world.absorb
def verify_all_setting_entries(expected_entries):
settings = world.browser.find_by_css('.wrapper-comp-setting')
assert_equal(len(expected_entries), len(settings))
for (counter, setting) in enumerate(settings):
world.verify_setting_entry(
setting, expected_entries[counter][0],
expected_entries[counter][1], expected_entries[counter][2]
)
@world.absorb
def save_component_and_reopen(step):
world.css_click("a.save-button")
    # We have a known issue that modifications are still shown within the edit
    # window after cancel (though they are not persisted). Refresh the browser
    # to make sure the changes WERE persisted after Save.
reload_the_page(step)
edit_component_and_select_settings()
@world.absorb
def cancel_component(step):
world.css_click("a.cancel-button")
    # We have a known issue that modifications are still shown within the edit
    # window after cancel (though they are not persisted). Refresh the browser
    # to make sure the changes were not persisted.
reload_the_page(step)
@world.absorb
def revert_setting_entry(label):
get_setting_entry(label).find_by_css('.setting-clear')[0].click()
@world.absorb
def get_setting_entry(label):
settings = world.browser.find_by_css('.wrapper-comp-setting')
for setting in settings:
if setting.find_by_css('.setting-label')[0].value == label:
return setting
return None
|
rationalAgent/edx-platform-custom
|
cms/djangoapps/contentstore/features/component_settings_editor_helpers.py
|
Python
|
agpl-3.0
| 3,516
|
import zstackwoodpecker.test_state as ts_header
import os
TestAction = ts_header.TestAction
def path():
return dict(initial_formation="template5", checking_point=1, faild_point=100000, path_list=[
[TestAction.create_mini_vm, 'vm1', 'cluster=cluster2'],
[TestAction.destroy_vm, 'vm1'],
[TestAction.create_mini_vm, 'vm2', 'cluster=cluster2'],
[TestAction.create_volume, 'volume1', 'cluster=cluster2', 'flag=scsi'],
[TestAction.attach_volume, 'vm2', 'volume1'],
[TestAction.create_volume_backup, 'volume1', 'volume1-backup1'],
[TestAction.start_vm, 'vm2'],
[TestAction.stop_vm, 'vm2'],
[TestAction.resize_volume, 'vm2', 5*1024*1024],
[TestAction.poweroff_only, 'cluster=cluster2'],
[TestAction.create_mini_vm, 'vm3', 'cluster=cluster1'],
[TestAction.create_volume, 'volume2', 'cluster=cluster1', 'flag=scsi'],
[TestAction.attach_volume, 'vm3', 'volume2'],
[TestAction.delete_volume, 'volume1'],
[TestAction.add_image, 'image1', 'root', 'http://172.20.1.28/mirror/diskimages/centos_vdbench.qcow2'],
[TestAction.start_vm, 'vm2'],
[TestAction.create_vm_backup, 'vm2', 'vm2-backup2'],
[TestAction.stop_vm, 'vm2'],
[TestAction.use_vm_backup, 'vm2-backup2'],
[TestAction.delete_image, 'image1'],
[TestAction.recover_image, 'image1'],
[TestAction.delete_image, 'image1'],
[TestAction.expunge_image, 'image1'],
[TestAction.create_vm_backup, 'vm3', 'vm3-backup3'],
[TestAction.create_mini_vm, 'vm4', 'cluster=cluster2'],
[TestAction.poweroff_only, 'cluster=cluster2'],
[TestAction.create_image_from_volume, 'vm2', 'vm2-image2'],
[TestAction.create_volume, 'volume3', 'size=random', 'cluster=cluster1', 'flag=scsi'],
[TestAction.create_volume, 'volume4', 'cluster=cluster2', 'flag=scsi'],
[TestAction.delete_volume, 'volume4'],
[TestAction.delete_vm_backup, 'vm3-backup3'],
[TestAction.start_vm, 'vm2'],
[TestAction.stop_vm, 'vm2'],
[TestAction.expunge_volume, 'volume1'],
[TestAction.create_mini_vm, 'vm5', 'cluster=cluster2'],
[TestAction.create_vm_backup, 'vm5', 'vm5-backup5'],
[TestAction.start_vm, 'vm2'],
[TestAction.migrate_vm, 'vm2'],
[TestAction.poweroff_only, 'cluster=cluster1'],
[TestAction.delete_vm_backup, 'vm5-backup5'],
])
'''
The final status:
Running:['vm5', 'vm2']
Stopped:['vm4', 'vm3']
Enabled:['volume1-backup1', 'vm2-backup2', 'vm2-image2']
attached:['volume2']
Detached:['volume3']
Deleted:['vm1', 'volume4', 'vm3-backup3', 'volume2-backup3', 'vm5-backup5']
Expunged:['volume1', 'image1']
Ha:[]
Group:
vm_backup1:['vm2-backup2']---vm2@
'''
|
zstackio/zstack-woodpecker
|
integrationtest/vm/mini/multiclusters/paths/multi_path110.py
|
Python
|
apache-2.0
| 2,543
|
#! /usr/bin/python
# -*- coding: utf8 -*-
import numpy as np
import tensorflow as tf
import tensorlayer as tl
from tensorlayer.layers import set_keep
import time
"""Examples of Stacked Denoising Autoencoder, Dropout, Dropconnect and CNN.
This tutorial uses placeholders to control all keep probabilities,
so we need to set the non-one probabilities during training, and set them
all to 1 during evaluation and testing.
$ Set keeping probabilities.
>>> feed_dict = {x: X_train_a, y_: y_train_a}
>>> feed_dict.update( network.all_drop )
$ Set all keeping probabilities to 1 for evaluating and testing.
>>> dp_dict = tl.utils.dict_to_one( network.all_drop )
>>> feed_dict = {x: X_train_a, y_: y_train_a}
>>> feed_dict.update(dp_dict)
Alternatively, if you don't want to use placeholder to control them, you can
build different inferences for training, evaluating and testing,
and all inferences share the same model parameters.
(see tutorial_ptb_lstm.py)
"""
def main_test_layers(model='relu'):
X_train, y_train, X_val, y_val, X_test, y_test = \
tl.files.load_mnist_dataset(shape=(-1,784))
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int32)
X_val = np.asarray(X_val, dtype=np.float32)
y_val = np.asarray(y_val, dtype=np.int32)
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int32)
print('X_train.shape', X_train.shape)
print('y_train.shape', y_train.shape)
print('X_val.shape', X_val.shape)
print('y_val.shape', y_val.shape)
print('X_test.shape', X_test.shape)
print('y_test.shape', y_test.shape)
print('X %s y %s' % (X_test.dtype, y_test.dtype))
sess = tf.InteractiveSession()
# placeholder
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int32, shape=[None, ], name='y_')
# Note: the softmax is implemented internally in tl.cost.cross_entropy(y, y_)
# to speed up computation, so we use identity in the last layer.
# see tf.nn.sparse_softmax_cross_entropy_with_logits()
if model == 'relu':
network = tl.layers.InputLayer(x, name='input_layer')
network = tl.layers.DropoutLayer(network, keep=0.8, name='drop1')
network = tl.layers.DenseLayer(network, n_units=800,
act = tf.nn.relu, name='relu1')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop2')
network = tl.layers.DenseLayer(network, n_units=800,
act = tf.nn.relu, name='relu2')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop3')
network = tl.layers.DenseLayer(network, n_units=10,
act = tf.identity,
name='output_layer')
elif model == 'dropconnect':
network = tl.layers.InputLayer(x, name='input_layer')
network = tl.layers.DropconnectDenseLayer(network, keep = 0.8,
n_units=800, act = tf.nn.relu,
name='dropconnect_relu1')
network = tl.layers.DropconnectDenseLayer(network, keep = 0.5,
n_units=800, act = tf.nn.relu,
name='dropconnect_relu2')
network = tl.layers.DropconnectDenseLayer(network, keep = 0.5,
n_units=10,
act = tf.identity,
name='output_layer')
# To print all attributes of a Layer.
# attrs = vars(network)
# print(', '.join("%s: %s\n" % item for item in attrs.items()))
#
# print(network.all_drop) # {'drop1': 0.8, 'drop2': 0.5, 'drop3': 0.5}
# print(drop1, drop2, drop3) # Tensor("Placeholder_2:0", dtype=float32) Tensor("Placeholder_3:0", dtype=float32) Tensor("Placeholder_4:0", dtype=float32)
# exit()
y = network.outputs
y_op = tf.argmax(tf.nn.softmax(y), 1)
cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=y_))
# Alternatively, you can use TensorLayer's function to compute cost:
# cost = tl.cost.cross_entropy(y, y_)
# You can add more penalty to the cost function as follow.
# cost = cost + tl.cost.maxnorm_regularizer(1.0)(network.all_params[0]) + tl.cost.maxnorm_regularizer(1.0)(network.all_params[2])
# cost = cost + tl.cost.lo_regularizer(0.0001)(network.all_params[0]) + tl.cost.lo_regularizer(0.0001)(network.all_params[2])
# cost = cost + tl.cost.maxnorm_o_regularizer(0.001)(network.all_params[0]) + tl.cost.maxnorm_o_regularizer(0.001)(network.all_params[2])
params = network.all_params
# train
n_epoch = 1
batch_size = 128
learning_rate = 0.0001
print_freq = 10
train_op = tf.train.AdamOptimizer(learning_rate, beta1=0.9, beta2=0.999,
epsilon=1e-08, use_locking=False).minimize(cost)
sess.run(tf.initialize_all_variables()) # initialize all variables
network.print_params()
network.print_layers()
print(' learning_rate: %f' % learning_rate)
print(' batch_size: %d' % batch_size)
for epoch in range(n_epoch):
start_time = time.time()
for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train,
batch_size, shuffle=True):
feed_dict = {x: X_train_a, y_: y_train_a}
feed_dict.update( network.all_drop ) # enable dropout or dropconnect layers
sess.run(train_op, feed_dict=feed_dict)
# The optional feed_dict argument allows the caller to override the value of tensors in the graph. Each key in feed_dict can be one of the following types:
# If the key is a Tensor, the value may be a Python scalar, string, list, or numpy ndarray that can be converted to the same dtype as that tensor. Additionally, if the key is a placeholder, the shape of the value will be checked for compatibility with the placeholder.
# If the key is a SparseTensor, the value should be a SparseTensorValue.
if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time))
dp_dict = tl.utils.dict_to_one( network.all_drop ) # disable all dropout/dropconnect/denoising layers
feed_dict = {x: X_train, y_: y_train}
feed_dict.update(dp_dict)
print(" train loss: %f" % sess.run(cost, feed_dict=feed_dict))
dp_dict = tl.utils.dict_to_one( network.all_drop )
feed_dict = {x: X_val, y_: y_val}
feed_dict.update(dp_dict)
print(" val loss: %f" % sess.run(cost, feed_dict=feed_dict))
print(" val acc: %f" % np.mean(y_val ==
sess.run(y_op, feed_dict=feed_dict)))
try:
# You can visualize the weight of 1st hidden layer as follow.
tl.visualize.W(network.all_params[0].eval(), second=10,
saveable=True, shape=[28, 28],
name='output/w1_'+str(epoch+1), fig_idx=2012)
# You can also save the weight of 1st hidden layer to .npz file.
# tl.files.save_npz([network.all_params[0]] , name='w1'+str(epoch+1)+'.npz')
except:
raise Exception("You should change visualize_W(), if you want \
to save the feature images for different dataset")
print('Evaluation')
dp_dict = tl.utils.dict_to_one( network.all_drop )
feed_dict = {x: X_test, y_: y_test}
feed_dict.update(dp_dict)
print(" test loss: %f" % sess.run(cost, feed_dict=feed_dict))
print(" test acc: %f" % np.mean(y_test == sess.run(y_op,
feed_dict=feed_dict)))
# Add ops to save and restore all the variables, including variables for training.
# ref: https://www.tensorflow.org/versions/r0.8/how_tos/variables/index.html
saver = tf.train.Saver()
save_path = saver.save(sess, "output/model.ckpt")
print("Model saved in file: %s" % save_path)
# You can also save the parameters into .npz file.
tl.files.save_npz(network.all_params , name='output/model.npz')
# You can only save one parameter as follow.
# tl.files.save_npz([network.all_params[0]] , name='output/model.npz')
# Then, restore the parameters as follow.
# load_params = tl.files.load_npz(path='', name='output/model.npz')
# tl.files.assign_params(sess, load_params, network)
# In the end, close TensorFlow session.
sess.close()
def main_test_denoise_AE(model='relu'):
X_train, y_train, X_val, y_val, X_test, y_test = \
tl.files.load_mnist_dataset(shape=(-1,784))
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int64)
X_val = np.asarray(X_val, dtype=np.float32)
y_val = np.asarray(y_val, dtype=np.int64)
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int64)
print('X_train.shape', X_train.shape)
print('y_train.shape', y_train.shape)
print('X_val.shape', X_val.shape)
print('y_val.shape', y_val.shape)
print('X_test.shape', X_test.shape)
print('y_test.shape', y_test.shape)
print('X %s y %s' % (X_test.dtype, y_test.dtype))
sess = tf.InteractiveSession()
# placeholder
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')
print("Build Network")
if model == 'relu':
network = tl.layers.InputLayer(x, name='input_layer')
network = tl.layers.DropoutLayer(network, keep=0.5, name='denoising1') # if drop some inputs, it is denoise AE
network = tl.layers.DenseLayer(network, n_units=196,
act = tf.nn.relu, name='relu1')
recon_layer1 = tl.layers.ReconLayer(network, x_recon=x, n_units=784,
act = tf.nn.softplus, name='recon_layer1')
elif model == 'sigmoid':
# sigmoid - set keep to 1.0, if you want a vanilla Autoencoder
network = tl.layers.InputLayer(x, name='input_layer')
network = tl.layers.DropoutLayer(network, keep=0.5, name='denoising1')
network = tl.layers.DenseLayer(network, n_units=196,
act=tf.nn.sigmoid, name='sigmoid1')
recon_layer1 = tl.layers.ReconLayer(network, x_recon=x, n_units=784,
act=tf.nn.sigmoid, name='recon_layer1')
## ready to train
sess.run(tf.initialize_all_variables())
## print all params
print("All Network Params")
network.print_params()
## pretrain
print("Pre-train Layer 1")
recon_layer1.pretrain(sess, x=x, X_train=X_train, X_val=X_val,
denoise_name='denoising1', n_epoch=200,
batch_size=128, print_freq=10, save=True,
save_name='w1pre_')
    # You can also disable denoising by setting denoise_name=None.
# recon_layer1.pretrain(sess, x=x, X_train=X_train, X_val=X_val,
# denoise_name=None, n_epoch=500, batch_size=128,
# print_freq=10, save=True, save_name='w1pre_')
# Add ops to save and restore all the variables.
# ref: https://www.tensorflow.org/versions/r0.8/how_tos/variables/index.html
saver = tf.train.Saver()
# you may want to save the model
save_path = saver.save(sess, "output/model.ckpt")
print("Model saved in file: %s" % save_path)
sess.close()
def main_test_stacked_denoise_AE(model='relu'):
X_train, y_train, X_val, y_val, X_test, y_test = \
tl.files.load_mnist_dataset(shape=(-1,784))
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int64)
X_val = np.asarray(X_val, dtype=np.float32)
y_val = np.asarray(y_val, dtype=np.int64)
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int64)
print('X_train.shape', X_train.shape)
print('y_train.shape', y_train.shape)
print('X_val.shape', X_val.shape)
print('y_val.shape', y_val.shape)
print('X_test.shape', X_test.shape)
print('y_test.shape', y_test.shape)
print('X %s y %s' % (X_test.dtype, y_test.dtype))
sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')
if model == 'relu':
act = tf.nn.relu
act_recon = tf.nn.softplus
elif model == 'sigmoid':
act = tf.nn.sigmoid
act_recon = act
# Define network
print("\nBuild Network")
network = tl.layers.InputLayer(x, name='input_layer')
# denoise layer for AE
network = tl.layers.DropoutLayer(network, keep=0.5, name='denoising1')
# 1st layer
network = tl.layers.DropoutLayer(network, keep=0.8, name='drop1')
network = tl.layers.DenseLayer(network, n_units=800, act = act, name=model+'1')
x_recon1 = network.outputs
recon_layer1 = tl.layers.ReconLayer(network, x_recon=x, n_units=784,
act = act_recon, name='recon_layer1')
# 2nd layer
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop2')
network = tl.layers.DenseLayer(network, n_units=800, act = act, name=model+'2')
recon_layer2 = tl.layers.ReconLayer(network, x_recon=x_recon1, n_units=800,
act = act_recon, name='recon_layer2')
# 3rd layer
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop3')
network = tl.layers.DenseLayer(network, n_units=10,
act = tf.identity,
name='output_layer')
# Define fine-tune process
y = network.outputs
y_op = tf.argmax(tf.nn.softmax(y), 1)
    ce = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=y_))
cost = ce
n_epoch = 200
batch_size = 128
learning_rate = 0.0001
print_freq = 10
train_params = network.all_params
# train_op = tf.train.GradientDescentOptimizer(0.5).minimize(cost)
train_op = tf.train.AdamOptimizer(learning_rate , beta1=0.9, beta2=0.999,
epsilon=1e-08, use_locking=False).minimize(cost, var_list=train_params)
# Initialize all variables including weights, biases and the variables in train_op
sess.run(tf.initialize_all_variables())
# Pre-train
print("\nAll Network Params before pre-train")
network.print_params()
print("\nPre-train Layer 1")
recon_layer1.pretrain(sess, x=x, X_train=X_train, X_val=X_val,
denoise_name='denoising1', n_epoch=200,
batch_size=128, print_freq=10, save=True,
save_name='w1pre_')
print("\nPre-train Layer 2")
recon_layer2.pretrain(sess, x=x, X_train=X_train, X_val=X_val,
denoise_name='denoising1', n_epoch=200,
batch_size=128, print_freq=10, save=False)
print("\nAll Network Params after pre-train")
network.print_params()
# Fine-tune
print("\nFine-tune Network")
correct_prediction = tf.equal(tf.argmax(y, 1), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(' learning_rate: %f' % learning_rate)
print(' batch_size: %d' % batch_size)
for epoch in range(n_epoch):
start_time = time.time()
for X_train_a, y_train_a in tl.iterate.minibatches(
X_train, y_train, batch_size, shuffle=True):
feed_dict = {x: X_train_a, y_: y_train_a}
feed_dict.update( network.all_drop ) # enable noise layers
feed_dict[set_keep['denoising1']] = 1 # disable denoising layer
sess.run(train_op, feed_dict=feed_dict)
if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time))
train_loss, train_acc, n_batch = 0, 0, 0
for X_train_a, y_train_a in tl.iterate.minibatches(
X_train, y_train, batch_size, shuffle=True):
dp_dict = tl.utils.dict_to_one( network.all_drop ) # disable noise layers
feed_dict = {x: X_train_a, y_: y_train_a}
feed_dict.update(dp_dict)
err, ac = sess.run([cost, acc], feed_dict=feed_dict)
train_loss += err
train_acc += ac
n_batch += 1
print(" train loss: %f" % (train_loss/ n_batch))
print(" train acc: %f" % (train_acc/ n_batch))
val_loss, val_acc, n_batch = 0, 0, 0
for X_val_a, y_val_a in tl.iterate.minibatches(
X_val, y_val, batch_size, shuffle=True):
dp_dict = tl.utils.dict_to_one( network.all_drop ) # disable noise layers
feed_dict = {x: X_val_a, y_: y_val_a}
feed_dict.update(dp_dict)
err, ac = sess.run([cost, acc], feed_dict=feed_dict)
val_loss += err
val_acc += ac
n_batch += 1
print(" val loss: %f" % (val_loss/ n_batch))
print(" val acc: %f" % (val_acc/ n_batch))
try:
# visualize the 1st hidden layer during fine-tune
tl.visualize.W(network.all_params[0].eval(), second=10,
saveable=True, shape=[28, 28],
name='output/w1_'+str(epoch+1), fig_idx=2012)
except:
raise Exception("# You should change visualize_W(), if you want to save the feature images for different dataset")
print('Evaluation')
test_loss, test_acc, n_batch = 0, 0, 0
for X_test_a, y_test_a in tl.iterate.minibatches(
X_test, y_test, batch_size, shuffle=True):
dp_dict = tl.utils.dict_to_one( network.all_drop ) # disable noise layers
feed_dict = {x: X_test_a, y_: y_test_a}
feed_dict.update(dp_dict)
err, ac = sess.run([cost, acc], feed_dict=feed_dict)
test_loss += err
test_acc += ac
n_batch += 1
print(" test loss: %f" % (test_loss/n_batch))
print(" test acc: %f" % (test_acc/n_batch))
# print(" test acc: %f" % np.mean(y_test == sess.run(y_op, feed_dict=feed_dict)))
# Add ops to save and restore all the variables.
# ref: https://www.tensorflow.org/versions/r0.8/how_tos/variables/index.html
saver = tf.train.Saver()
# you may want to save the model
save_path = saver.save(sess, "output/model.ckpt")
print("Model saved in file: %s" % save_path)
sess.close()
def main_test_cnn_layer():
"""Reimplementation of the TensorFlow official MNIST CNN tutorials:
# https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros/index.html
# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/mnist/convolutional.py
More TensorFlow official CNN tutorials can be found here:
# tutorial_cifar10.py
# https://www.tensorflow.org/versions/master/tutorials/deep_cnn/index.html
"""
X_train, y_train, X_val, y_val, X_test, y_test = \
tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1))
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int64)
X_val = np.asarray(X_val, dtype=np.float32)
y_val = np.asarray(y_val, dtype=np.int64)
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int64)
print('X_train.shape', X_train.shape)
print('y_train.shape', y_train.shape)
print('X_val.shape', X_val.shape)
print('y_val.shape', y_val.shape)
print('X_test.shape', X_test.shape)
print('y_test.shape', y_test.shape)
print('X %s y %s' % (X_test.dtype, y_test.dtype))
sess = tf.InteractiveSession()
    # Define the batch size at the beginning. Giving a fixed batch size in x
    # and y_ rather than 'None' allows TensorFlow to apply some optimizations,
    # especially for convolutional layers.
batch_size = 128
x = tf.placeholder(tf.float32, shape=[batch_size, 28, 28, 1]) # [batch_size, height, width, channels]
y_ = tf.placeholder(tf.int64, shape=[batch_size,])
network = tl.layers.InputLayer(x, name='input_layer')
network = tl.layers.Conv2dLayer(network,
act = tf.nn.relu,
shape = [5, 5, 1, 32], # 32 features for each 5x5 patch
strides=[1, 1, 1, 1],
padding='SAME',
name ='cnn_layer1') # output: (?, 28, 28, 32)
network = tl.layers.PoolLayer(network,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME',
pool = tf.nn.max_pool,
name ='pool_layer1',) # output: (?, 14, 14, 32)
network = tl.layers.Conv2dLayer(network,
act = tf.nn.relu,
shape = [5, 5, 32, 64], # 64 features for each 5x5 patch
strides=[1, 1, 1, 1],
padding='SAME',
name ='cnn_layer2') # output: (?, 14, 14, 64)
network = tl.layers.PoolLayer(network,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME',
pool = tf.nn.max_pool,
name ='pool_layer2',) # output: (?, 7, 7, 64)
network = tl.layers.FlattenLayer(network, name='flatten_layer') # output: (?, 3136)
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop1') # output: (?, 3136)
network = tl.layers.DenseLayer(network, n_units=256,
act = tf.nn.relu, name='relu1') # output: (?, 256)
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop2') # output: (?, 256)
network = tl.layers.DenseLayer(network, n_units=10,
act = tf.identity,
name='output_layer') # output: (?, 10)
y = network.outputs
    ce = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=y_))
cost = ce
correct_prediction = tf.equal(tf.argmax(y, 1), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# train
n_epoch = 200
learning_rate = 0.0001
print_freq = 10
train_params = network.all_params
train_op = tf.train.AdamOptimizer(learning_rate, beta1=0.9, beta2=0.999,
epsilon=1e-08, use_locking=False).minimize(cost, var_list=train_params)
sess.run(tf.initialize_all_variables())
network.print_params()
network.print_layers()
print(' learning_rate: %f' % learning_rate)
print(' batch_size: %d' % batch_size)
for epoch in range(n_epoch):
start_time = time.time()
for X_train_a, y_train_a in tl.iterate.minibatches(
X_train, y_train, batch_size, shuffle=True):
feed_dict = {x: X_train_a, y_: y_train_a}
feed_dict.update( network.all_drop ) # enable noise layers
sess.run(train_op, feed_dict=feed_dict)
if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time))
train_loss, train_acc, n_batch = 0, 0, 0
for X_train_a, y_train_a in tl.iterate.minibatches(
X_train, y_train, batch_size, shuffle=True):
dp_dict = tl.utils.dict_to_one( network.all_drop ) # disable noise layers
feed_dict = {x: X_train_a, y_: y_train_a}
feed_dict.update(dp_dict)
err, ac = sess.run([cost, acc], feed_dict=feed_dict)
train_loss += err; train_acc += ac; n_batch += 1
print(" train loss: %f" % (train_loss/ n_batch))
print(" train acc: %f" % (train_acc/ n_batch))
val_loss, val_acc, n_batch = 0, 0, 0
for X_val_a, y_val_a in tl.iterate.minibatches(
X_val, y_val, batch_size, shuffle=True):
dp_dict = tl.utils.dict_to_one( network.all_drop ) # disable noise layers
feed_dict = {x: X_val_a, y_: y_val_a}
feed_dict.update(dp_dict)
err, ac = sess.run([cost, acc], feed_dict=feed_dict)
val_loss += err; val_acc += ac; n_batch += 1
print(" val loss: %f" % (val_loss/ n_batch))
print(" val acc: %f" % (val_acc/ n_batch))
try:
tl.visualize.CNN2d(network.all_params[0].eval(),
second=10, saveable=True,
name='cnn1_'+str(epoch+1), fig_idx=2012)
except:
raise Exception("# You should change visualize.CNN(), if you want to save the feature images for different dataset")
print('Evaluation')
test_loss, test_acc, n_batch = 0, 0, 0
for X_test_a, y_test_a in tl.iterate.minibatches(
X_test, y_test, batch_size, shuffle=True):
dp_dict = tl.utils.dict_to_one( network.all_drop ) # disable noise layers
feed_dict = {x: X_test_a, y_: y_test_a}
feed_dict.update(dp_dict)
err, ac = sess.run([cost, acc], feed_dict=feed_dict)
test_loss += err; test_acc += ac; n_batch += 1
print(" test loss: %f" % (test_loss/n_batch))
print(" test acc: %f" % (test_acc/n_batch))
if __name__ == '__main__':
sess = tf.InteractiveSession()
sess = tl.ops.set_gpu_fraction(sess, gpu_fraction = 0.3)
try:
"""Dropout and Dropconnect"""
main_test_layers(model='relu') # model = relu, dropconnect
"""Single Denoising Autoencoder"""
# main_test_denoise_AE(model='sigmoid') # model = relu, sigmoid
"""Stacked Denoising Autoencoder"""
# main_test_stacked_denoise_AE(model='relu') # model = relu, sigmoid
"""CNN"""
# main_test_cnn_layer()
tl.ops.exit_tf(sess) # close sess, tensorboard and nvidia-process
except KeyboardInterrupt:
print('\nKeyboardInterrupt')
tl.ops.exit_tf(sess)
|
emperrors/fetchLinuxIDC
|
tensor_layer/tutorial_mnist.py
|
Python
|
gpl-3.0
| 27,007
|
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2002-2006 Donald N. Allingham
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# $Id$
#-------------------------------------------------------------------------
#
# Standard Python modules
#
#-------------------------------------------------------------------------
from ....const import GRAMPS_LOCALE as glocale
_ = glocale.get_translation().gettext
#-------------------------------------------------------------------------
#
# GRAMPS modules
#
#-------------------------------------------------------------------------
from .._isprivate import IsPrivate
#-------------------------------------------------------------------------
# "Repo marked private"
#-------------------------------------------------------------------------
class NotePrivate(IsPrivate):
"""Note marked private"""
name = _('Notes marked private')
description = _("Matches notes that are indicated as private")
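# A hedged usage sketch (not part of this module; the apply() signature is
# assumed from the Rule base class that IsPrivate derives from):
#
# rule = NotePrivate([])
# private_notes = [note for note in notes if rule.apply(db, note)]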
|
Forage/Gramps
|
gramps/gen/filters/rules/note/_noteprivate.py
|
Python
|
gpl-2.0
| 1,631
|
#-------------------------------------------------------------------------------
# py3compat.py
#
# Some Python2&3 compatibility code
#-------------------------------------------------------------------------------
import sys
PY3 = sys.version_info[0] == 3
try:
from collections.abc import MutableMapping # python >= 3.3
except ImportError:
from collections import MutableMapping # python < 3.3
if PY3:
import io
StringIO = io.StringIO
BytesIO = io.BytesIO
def bchr(i):
""" When iterating over b'...' in Python 2 you get single b'_' chars
and in Python 3 you get integers. Call bchr to always turn this
to single b'_' chars.
"""
return bytes((i,))
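    # Illustration (values assumed, not from the original module):
    # [bchr(c) for c in b'ab'] == [b'a', b'b'] on Python 3, matching the
    # natural iteration behaviour of Python 2.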
def u(s):
return s
def int2byte(i):
return bytes((i,))
def byte2int(b):
return b
def str2bytes(s):
return s.encode("latin-1")
def str2unicode(s):
return s
def bytes2str(b):
return b.decode('latin-1')
def decodebytes(b, encoding):
return bytes(b, encoding)
advance_iterator = next
else:
import cStringIO
StringIO = BytesIO = cStringIO.StringIO
int2byte = chr
byte2int = ord
bchr = lambda i: i
def u(s):
return unicode(s, "unicode_escape")
def str2bytes(s):
return s
def str2unicode(s):
return unicode(s, "unicode_escape")
def bytes2str(b):
return b
def decodebytes(b, encoding):
return b.decode(encoding)
def advance_iterator(it):
return it.next()
|
pombredanne/pyelftools
|
elftools/construct/lib/py3compat.py
|
Python
|
unlicense
| 1,578
|
#!/usr/bin/env python
"""limiter.py
Checks whether something was called more than a permitted number of
times within a given timeframe.
Usage:
-h, --help
-q, --quiet
-f, --file
-m, --max
-t, --nseconds
[-l, --logfile]
Example:
./limiter.py -f /tmp/limit -l /tmp/limit.log -m 1 -t 10 -q "Comment" && echo "OK"
Copyright 2011 Holger Mueller
This program is free software: you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation, either version 3 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this program. If not, see
<http://www.gnu.org/licenses/>.
"""
import sys
import getopt
import time
from fcntl import flock, LOCK_EX, LOCK_UN
import bsddb
class Limiter:
def __init__(self, dbpath, log=None):
self.db = dbpath + ".db"
self.lck = dbpath + ".lck"
self.lckfd = None
self.tstamp = time.time()
if log:
self.logfile = open(log, "a")
else:
self.logfile = None
def lock(self):
self.tstamp = time.time()
self.lckfd = open(self.lck, "w")
flock(self.lckfd, LOCK_EX)
def unlock(self):
flock(self.lckfd, LOCK_UN)
self.lckfd.close()
def insert(self, entry):
self.tstamp = time.time()
log = bsddb.btopen(self.db)
log["%10.6f" % time.time()] = entry
log.sync()
log.close()
def last(self, count=1):
ret = []
if count == 0:
return ret
try:
log = bsddb.btopen(self.db, "r")
except bsddb.db.DBNoSuchFileError:
return ret
try:
ret.append(log.last())
except bsddb.error:
return ret
i = 1
while i < count:
i += 1
try:
ret.append(log.previous())
except bsddb.error:
return ret
return ret
def maxpernseconds(self, max, nseconds):
"""checks if no more then max calls per nseconds"""
now = time.time()
recent = self.last(max)
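        # self.last() yields (timestamp, entry) pairs newest-first, so
        # recent[-1][0] below is the timestamp of the oldest of the last
        # `max` calls.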
if len(recent) < max:
return False
if now - float(recent[-1][0]) > nseconds:
return False
return True
def log(self, entry, timestamp=None):
try:
if not timestamp:
self.logfile.write("%10.6f: " % self.tstamp)
else:
self.logfile.write("%10.6f: " % timestamp)
self.logfile.write(entry)
self.logfile.write("\n")
self.logfile.flush()
except:
pass
def limit(dbpath, entry="", max=3, nseconds=900, log=None):
"""
Check if this functions was called in the limits of given parameters
@type dbpath: filename
@param dbpath: The filename of the locking an db file without an extension
@type entry: string
@param entry: an optional log entry
@type max: int
@param max: maximum amount of calls
@type nseconds int
@param nseconds: time limit
@type log: filename
@param log: optional logfile where each call to limit will be logged
@rtype: boolean
@return: True if calls are in limit
"""
limit = False
l = Limiter(dbpath, log)
l.lock()
if not l.maxpernseconds(max, nseconds):
l.insert(entry)
limit = True
l.unlock()
if log:
if limit:
l.log("OK - " + entry)
else:
l.log("Error - " + entry)
return limit
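# A minimal library-level usage sketch mirroring the CLI example in the
# module docstring (paths and entry text are illustrative):
#
# if limit("/tmp/limit", "nightly job", max=1, nseconds=10,
#          log="/tmp/limit.log"):
#     print "OK"  # still within the allowed rate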
class Usage(Exception):
def __init__(self, msg):
self.msg = msg
def main(argv=None):
dbpath, entry, max, nseconds, log, quiet = (None, "", 3, 900, None, False)
if argv is None:
argv = sys.argv
try:
try:
opts, args = getopt.getopt(argv[1:],
"hf:m:t:l:q",
["help", "file=", "max=", "nseconds=",
"logfile=", "quiet"])
except getopt.error, msg:
raise Usage(msg)
for o, a in opts:
if o in ("-h", "--help"):
print __doc__
sys.exit(0)
if o in ("-f", "--file"):
dbpath = a
if o in ("-m", "--max"):
max = int(a)
if o in ("-t", "--nseconds"):
nseconds = int(a)
if o in ("-l", "--logfile"):
log = a
if o in ("-q", "--quiet"):
quiet = True
entry = "".join(args)
ret = limit(dbpath, entry, max, nseconds, log)
if ret:
if not quiet: print "OK"
return 0
else:
if not quiet: print "Error"
return 1
except Usage, err:
print >>sys.stderr, err.msg
print >>sys.stderr, "for help use --help"
return 2
except TypeError:
print __doc__
sys.exit(0)
if __name__ == "__main__":
    sys.exit(main())
|
zarath/python-utils
|
limiter.py
|
Python
|
lgpl-3.0
| 5,293
|
# -*- coding: utf-8 -*-
import sys
sys.path[0:0] = [""]
import bson
import os
import pickle
import unittest
import uuid
from datetime import datetime
from bson import DBRef
from tests.fixtures import (PickleEmbedded, PickleTest, PickleSignalsTest,
PickleDyanmicEmbedded, PickleDynamicTest)
from mongoengine import *
from mongoengine.errors import (NotRegistered, InvalidDocumentError,
InvalidQueryError, NotUniqueError)
from mongoengine.queryset import NULLIFY, Q
from mongoengine.connection import get_db
from mongoengine.base import get_document
from mongoengine.context_managers import switch_db, query_counter
from mongoengine import signals
TEST_IMAGE_PATH = os.path.join(os.path.dirname(__file__),
'../fields/mongoengine.png')
__all__ = ("InstanceTest",)
class InstanceTest(unittest.TestCase):
def setUp(self):
connect(db='mongoenginetest')
self.db = get_db()
class Person(Document):
name = StringField()
age = IntField()
non_field = True
meta = {"allow_inheritance": True}
self.Person = Person
def tearDown(self):
for collection in self.db.collection_names():
if 'system.' in collection:
continue
self.db.drop_collection(collection)
def test_capped_collection(self):
"""Ensure that capped collections work properly.
"""
class Log(Document):
date = DateTimeField(default=datetime.now)
meta = {
'max_documents': 10,
'max_size': 90000,
}
Log.drop_collection()
# Ensure that the collection handles up to its maximum
for _ in range(10):
Log().save()
self.assertEqual(Log.objects.count(), 10)
# Check that extra documents don't increase the size
Log().save()
self.assertEqual(Log.objects.count(), 10)
options = Log.objects._collection.options()
self.assertEqual(options['capped'], True)
self.assertEqual(options['max'], 10)
self.assertEqual(options['size'], 90000)
# Check that the document cannot be redefined with different options
def recreate_log_document():
class Log(Document):
date = DateTimeField(default=datetime.now)
meta = {
'max_documents': 11,
}
# Create the collection by accessing Document.objects
Log.objects
self.assertRaises(InvalidCollectionError, recreate_log_document)
Log.drop_collection()
def test_repr(self):
"""Ensure that unicode representation works
"""
class Article(Document):
title = StringField()
def __unicode__(self):
return self.title
doc = Article(title=u'привет мир')
self.assertEqual('<Article: привет мир>', repr(doc))
def test_queryset_resurrects_dropped_collection(self):
self.Person.drop_collection()
self.assertEqual([], list(self.Person.objects()))
class Actor(self.Person):
pass
        # Ensure it works correctly with inherited classes
Actor.objects()
self.Person.drop_collection()
self.assertEqual([], list(Actor.objects()))
def test_polymorphic_references(self):
"""Ensure that the correct subclasses are returned from a query when
using references / generic references
"""
class Animal(Document):
meta = {'allow_inheritance': True}
class Fish(Animal): pass
class Mammal(Animal): pass
class Dog(Mammal): pass
class Human(Mammal): pass
class Zoo(Document):
animals = ListField(ReferenceField(Animal))
Zoo.drop_collection()
Animal.drop_collection()
Animal().save()
Fish().save()
Mammal().save()
Dog().save()
Human().save()
# Save a reference to each animal
zoo = Zoo(animals=Animal.objects)
zoo.save()
zoo.reload()
classes = [a.__class__ for a in Zoo.objects.first().animals]
self.assertEqual(classes, [Animal, Fish, Mammal, Dog, Human])
Zoo.drop_collection()
class Zoo(Document):
animals = ListField(GenericReferenceField(Animal))
# Save a reference to each animal
zoo = Zoo(animals=Animal.objects)
zoo.save()
zoo.reload()
classes = [a.__class__ for a in Zoo.objects.first().animals]
self.assertEqual(classes, [Animal, Fish, Mammal, Dog, Human])
Zoo.drop_collection()
Animal.drop_collection()
def test_reference_inheritance(self):
class Stats(Document):
created = DateTimeField(default=datetime.now)
meta = {'allow_inheritance': False}
class CompareStats(Document):
generated = DateTimeField(default=datetime.now)
stats = ListField(ReferenceField(Stats))
Stats.drop_collection()
CompareStats.drop_collection()
list_stats = []
for i in xrange(10):
s = Stats()
s.save()
list_stats.append(s)
cmp_stats = CompareStats(stats=list_stats)
cmp_stats.save()
self.assertEqual(list_stats, CompareStats.objects.first().stats)
def test_db_field_load(self):
"""Ensure we load data correctly
"""
class Person(Document):
name = StringField(required=True)
_rank = StringField(required=False, db_field="rank")
@property
def rank(self):
return self._rank or "Private"
Person.drop_collection()
Person(name="Jack", _rank="Corporal").save()
Person(name="Fred").save()
self.assertEqual(Person.objects.get(name="Jack").rank, "Corporal")
self.assertEqual(Person.objects.get(name="Fred").rank, "Private")
def test_db_embedded_doc_field_load(self):
"""Ensure we load embedded document data correctly
"""
class Rank(EmbeddedDocument):
title = StringField(required=True)
class Person(Document):
name = StringField(required=True)
rank_ = EmbeddedDocumentField(Rank,
required=False,
db_field='rank')
@property
def rank(self):
if self.rank_ is None:
return "Private"
return self.rank_.title
Person.drop_collection()
Person(name="Jack", rank_=Rank(title="Corporal")).save()
Person(name="Fred").save()
self.assertEqual(Person.objects.get(name="Jack").rank, "Corporal")
self.assertEqual(Person.objects.get(name="Fred").rank, "Private")
def test_custom_id_field(self):
"""Ensure that documents may be created with custom primary keys.
"""
class User(Document):
username = StringField(primary_key=True)
name = StringField()
meta = {'allow_inheritance': True}
User.drop_collection()
self.assertEqual(User._fields['username'].db_field, '_id')
self.assertEqual(User._meta['id_field'], 'username')
def create_invalid_user():
User(name='test').save() # no primary key field
self.assertRaises(ValidationError, create_invalid_user)
def define_invalid_user():
class EmailUser(User):
email = StringField(primary_key=True)
self.assertRaises(ValueError, define_invalid_user)
class EmailUser(User):
email = StringField()
user = User(username='test', name='test user')
user.save()
user_obj = User.objects.first()
self.assertEqual(user_obj.id, 'test')
self.assertEqual(user_obj.pk, 'test')
user_son = User.objects._collection.find_one()
self.assertEqual(user_son['_id'], 'test')
self.assertTrue('username' not in user_son['_id'])
User.drop_collection()
user = User(pk='mongo', name='mongo user')
user.save()
user_obj = User.objects.first()
self.assertEqual(user_obj.id, 'mongo')
self.assertEqual(user_obj.pk, 'mongo')
user_son = User.objects._collection.find_one()
self.assertEqual(user_son['_id'], 'mongo')
self.assertTrue('username' not in user_son['_id'])
User.drop_collection()
def test_document_not_registered(self):
class Place(Document):
name = StringField()
meta = {'allow_inheritance': True}
class NicePlace(Place):
pass
Place.drop_collection()
Place(name="London").save()
NicePlace(name="Buckingham Palace").save()
# Mimic Place and NicePlace definitions being in a different file
# and the NicePlace model not being imported in at query time.
from mongoengine.base import _document_registry
del(_document_registry['Place.NicePlace'])
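        # (Deleting the registry entry simulates the subclass never having
        # been imported in this process.)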
def query_without_importing_nice_place():
print Place.objects.all()
self.assertRaises(NotRegistered, query_without_importing_nice_place)
def test_document_registry_regressions(self):
class Location(Document):
name = StringField()
meta = {'allow_inheritance': True}
class Area(Location):
location = ReferenceField('Location', dbref=True)
Location.drop_collection()
self.assertEqual(Area, get_document("Area"))
self.assertEqual(Area, get_document("Location.Area"))
def test_creation(self):
"""Ensure that document may be created using keyword arguments.
"""
person = self.Person(name="Test User", age=30)
self.assertEqual(person.name, "Test User")
self.assertEqual(person.age, 30)
def test_to_dbref(self):
"""Ensure that you can get a dbref of a document"""
person = self.Person(name="Test User", age=30)
self.assertRaises(OperationError, person.to_dbref)
person.save()
person.to_dbref()
def test_reload(self):
"""Ensure that attributes may be reloaded.
"""
person = self.Person(name="Test User", age=20)
person.save()
person_obj = self.Person.objects.first()
person_obj.name = "Mr Test User"
person_obj.age = 21
person_obj.save()
self.assertEqual(person.name, "Test User")
self.assertEqual(person.age, 20)
person.reload()
self.assertEqual(person.name, "Mr Test User")
self.assertEqual(person.age, 21)
def test_reload_sharded(self):
class Animal(Document):
superphylum = StringField()
meta = {'shard_key': ('superphylum',)}
Animal.drop_collection()
doc = Animal(superphylum='Deuterostomia')
doc.save()
doc.reload()
Animal.drop_collection()
def test_reload_referencing(self):
"""Ensures reloading updates weakrefs correctly
"""
class Embedded(EmbeddedDocument):
dict_field = DictField()
list_field = ListField()
class Doc(Document):
dict_field = DictField()
list_field = ListField()
embedded_field = EmbeddedDocumentField(Embedded)
Doc.drop_collection()
doc = Doc()
doc.dict_field = {'hello': 'world'}
doc.list_field = ['1', 2, {'hello': 'world'}]
embedded_1 = Embedded()
embedded_1.dict_field = {'hello': 'world'}
embedded_1.list_field = ['1', 2, {'hello': 'world'}]
doc.embedded_field = embedded_1
doc.save()
doc = doc.reload(10)
doc.list_field.append(1)
doc.dict_field['woot'] = "woot"
doc.embedded_field.list_field.append(1)
doc.embedded_field.dict_field['woot'] = "woot"
self.assertEqual(doc._get_changed_fields(), [
'list_field', 'dict_field', 'embedded_field.list_field',
'embedded_field.dict_field'])
doc.save()
doc = doc.reload(10)
self.assertEqual(doc._get_changed_fields(), [])
self.assertEqual(len(doc.list_field), 4)
self.assertEqual(len(doc.dict_field), 2)
self.assertEqual(len(doc.embedded_field.list_field), 4)
self.assertEqual(len(doc.embedded_field.dict_field), 2)
def test_dictionary_access(self):
"""Ensure that dictionary-style field access works properly.
"""
person = self.Person(name='Test User', age=30)
self.assertEqual(person['name'], 'Test User')
self.assertRaises(KeyError, person.__getitem__, 'salary')
self.assertRaises(KeyError, person.__setitem__, 'salary', 50)
person['name'] = 'Another User'
self.assertEqual(person['name'], 'Another User')
# Length = length(assigned fields + id)
self.assertEqual(len(person), 3)
self.assertTrue('age' in person)
person.age = None
self.assertFalse('age' in person)
self.assertFalse('nationality' in person)
def test_embedded_document_to_mongo(self):
class Person(EmbeddedDocument):
name = StringField()
age = IntField()
meta = {"allow_inheritance": True}
class Employee(Person):
salary = IntField()
self.assertEqual(Person(name="Bob", age=35).to_mongo().keys(),
['_cls', 'name', 'age'])
self.assertEqual(Employee(name="Bob", age=35, salary=0).to_mongo().keys(),
['_cls', 'name', 'age', 'salary'])
def test_embedded_document_to_mongo_id(self):
class SubDoc(EmbeddedDocument):
id = StringField(required=True)
sub_doc = SubDoc(id="abc")
self.assertEqual(sub_doc.to_mongo().keys(), ['id'])
def test_embedded_document(self):
"""Ensure that embedded documents are set up correctly.
"""
class Comment(EmbeddedDocument):
content = StringField()
self.assertTrue('content' in Comment._fields)
self.assertFalse('id' in Comment._fields)
def test_embedded_document_instance(self):
"""Ensure that embedded documents can reference parent instance
"""
class Embedded(EmbeddedDocument):
string = StringField()
class Doc(Document):
embedded_field = EmbeddedDocumentField(Embedded)
Doc.drop_collection()
Doc(embedded_field=Embedded(string="Hi")).save()
doc = Doc.objects.get()
self.assertEqual(doc, doc.embedded_field._instance)
def test_embedded_document_complex_instance(self):
"""Ensure that embedded documents in complex fields can reference
parent instance"""
class Embedded(EmbeddedDocument):
string = StringField()
class Doc(Document):
embedded_field = ListField(EmbeddedDocumentField(Embedded))
Doc.drop_collection()
Doc(embedded_field=[Embedded(string="Hi")]).save()
doc = Doc.objects.get()
self.assertEqual(doc, doc.embedded_field[0]._instance)
def test_instance_is_set_on_setattr(self):
class Email(EmbeddedDocument):
email = EmailField()
def clean(self):
print "instance:"
print self._instance
class Account(Document):
email = EmbeddedDocumentField(Email)
Account.drop_collection()
acc = Account()
acc.email = Email(email='test@example.com')
self.assertTrue(hasattr(acc._data["email"], "_instance"))
acc.save()
acc1 = Account.objects.first()
self.assertTrue(hasattr(acc1._data["email"], "_instance"))
def test_document_clean(self):
class TestDocument(Document):
status = StringField()
pub_date = DateTimeField()
def clean(self):
if self.status == 'draft' and self.pub_date is not None:
msg = 'Draft entries may not have a publication date.'
raise ValidationError(msg)
# Set the pub_date for published items if not set.
if self.status == 'published' and self.pub_date is None:
self.pub_date = datetime.now()
TestDocument.drop_collection()
t = TestDocument(status="draft", pub_date=datetime.now())
try:
t.save()
except ValidationError, e:
expect_msg = "Draft entries may not have a publication date."
self.assertTrue(expect_msg in e.message)
self.assertEqual(e.to_dict(), {'__all__': expect_msg})
t = TestDocument(status="published")
t.save(clean=False)
self.assertEqual(t.pub_date, None)
t = TestDocument(status="published")
t.save(clean=True)
self.assertEqual(type(t.pub_date), datetime)
def test_document_embedded_clean(self):
class TestEmbeddedDocument(EmbeddedDocument):
x = IntField(required=True)
y = IntField(required=True)
z = IntField(required=True)
meta = {'allow_inheritance': False}
def clean(self):
if self.z:
if self.z != self.x + self.y:
raise ValidationError('Value of z != x + y')
else:
self.z = self.x + self.y
class TestDocument(Document):
doc = EmbeddedDocumentField(TestEmbeddedDocument)
status = StringField()
TestDocument.drop_collection()
t = TestDocument(doc=TestEmbeddedDocument(x=10, y=25, z=15))
try:
t.save()
        except ValidationError as e:
expect_msg = "Value of z != x + y"
self.assertTrue(expect_msg in e.message)
self.assertEqual(e.to_dict(), {'doc': {'__all__': expect_msg}})
t = TestDocument(doc=TestEmbeddedDocument(x=10, y=25)).save()
self.assertEqual(t.doc.z, 35)
        # Assert that save(clean=False) does not raise a ValidationError
t = TestDocument(doc=TestEmbeddedDocument(x=15, y=35, z=5))
t.save(clean=False)
def test_save(self):
"""Ensure that a document may be saved in the database.
"""
# Create person object and save it to the database
person = self.Person(name='Test User', age=30)
person.save()
# Ensure that the object is in the database
collection = self.db[self.Person._get_collection_name()]
person_obj = collection.find_one({'name': 'Test User'})
self.assertEqual(person_obj['name'], 'Test User')
self.assertEqual(person_obj['age'], 30)
self.assertEqual(person_obj['_id'], person.id)
# Test skipping validation on save
class Recipient(Document):
email = EmailField(required=True)
recipient = Recipient(email='root@localhost')
self.assertRaises(ValidationError, recipient.save)
try:
recipient.save(validate=False)
except ValidationError:
self.fail()
def test_save_to_a_value_that_equates_to_false(self):
class Thing(EmbeddedDocument):
count = IntField()
class User(Document):
thing = EmbeddedDocumentField(Thing)
User.drop_collection()
user = User(thing=Thing(count=1))
user.save()
user.reload()
user.thing.count = 0
user.save()
user.reload()
self.assertEqual(user.thing.count, 0)
def test_save_max_recursion_not_hit(self):
class Person(Document):
name = StringField()
parent = ReferenceField('self')
friend = ReferenceField('self')
Person.drop_collection()
p1 = Person(name="Wilson Snr")
p1.parent = None
p1.save()
p2 = Person(name="Wilson Jr")
p2.parent = p1
p2.save()
p1.friend = p2
p1.save()
# Confirm can save and it resets the changed fields without hitting
# max recursion error
p0 = Person.objects.first()
p0.name = 'wpjunior'
p0.save()
def test_save_max_recursion_not_hit_with_file_field(self):
class Foo(Document):
name = StringField()
picture = FileField()
bar = ReferenceField('self')
Foo.drop_collection()
a = Foo(name='hello').save()
a.bar = a
with open(TEST_IMAGE_PATH, 'rb') as test_image:
a.picture = test_image
a.save()
# Confirm can save and it resets the changed fields without hitting
# max recursion error
b = Foo.objects.with_id(a.id)
b.name = 'world'
b.save()
        self.assertEqual(b.picture, b.bar.picture)
        self.assertEqual(b.bar.picture, b.bar.bar.picture)
def test_save_cascades(self):
class Person(Document):
name = StringField()
parent = ReferenceField('self')
Person.drop_collection()
p1 = Person(name="Wilson Snr")
p1.parent = None
p1.save()
p2 = Person(name="Wilson Jr")
p2.parent = p1
p2.save()
p = Person.objects(name="Wilson Jr").get()
p.parent.name = "Daddy Wilson"
p.save(cascade=True)
p1.reload()
self.assertEqual(p1.name, p.parent.name)
def test_save_cascade_kwargs(self):
class Person(Document):
name = StringField()
parent = ReferenceField('self')
Person.drop_collection()
p1 = Person(name="Wilson Snr")
p1.parent = None
p1.save()
p2 = Person(name="Wilson Jr")
p2.parent = p1
p1.name = "Daddy Wilson"
p2.save(force_insert=True, cascade_kwargs={"force_insert": False})
p1.reload()
p2.reload()
self.assertEqual(p1.name, p2.parent.name)
def test_save_cascade_meta_false(self):
class Person(Document):
name = StringField()
parent = ReferenceField('self')
meta = {'cascade': False}
Person.drop_collection()
p1 = Person(name="Wilson Snr")
p1.parent = None
p1.save()
p2 = Person(name="Wilson Jr")
p2.parent = p1
p2.save()
p = Person.objects(name="Wilson Jr").get()
p.parent.name = "Daddy Wilson"
p.save()
p1.reload()
self.assertNotEqual(p1.name, p.parent.name)
p.save(cascade=True)
p1.reload()
self.assertEqual(p1.name, p.parent.name)
def test_save_cascade_meta_true(self):
class Person(Document):
name = StringField()
parent = ReferenceField('self')
meta = {'cascade': False}
Person.drop_collection()
p1 = Person(name="Wilson Snr")
p1.parent = None
p1.save()
p2 = Person(name="Wilson Jr")
p2.parent = p1
p2.save(cascade=True)
p = Person.objects(name="Wilson Jr").get()
p.parent.name = "Daddy Wilson"
p.save()
p1.reload()
self.assertNotEqual(p1.name, p.parent.name)
def test_save_cascades_generically(self):
class Person(Document):
name = StringField()
parent = GenericReferenceField()
Person.drop_collection()
p1 = Person(name="Wilson Snr")
p1.save()
p2 = Person(name="Wilson Jr")
p2.parent = p1
p2.save()
p = Person.objects(name="Wilson Jr").get()
p.parent.name = "Daddy Wilson"
p.save()
p1.reload()
self.assertNotEqual(p1.name, p.parent.name)
p.save(cascade=True)
p1.reload()
self.assertEqual(p1.name, p.parent.name)
def test_update(self):
"""Ensure that an existing document is updated instead of be
overwritten."""
# Create person object and save it to the database
person = self.Person(name='Test User', age=30)
person.save()
# Create same person object, with same id, without age
same_person = self.Person(name='Test')
same_person.id = person.id
same_person.save()
# Confirm only one object
self.assertEqual(self.Person.objects.count(), 1)
# reload
person.reload()
same_person.reload()
# Confirm the same
self.assertEqual(person, same_person)
self.assertEqual(person.name, same_person.name)
self.assertEqual(person.age, same_person.age)
# Confirm the saved values
self.assertEqual(person.name, 'Test')
self.assertEqual(person.age, 30)
# Test only / exclude only updates included fields
person = self.Person.objects.only('name').get()
person.name = 'User'
person.save()
person.reload()
self.assertEqual(person.name, 'User')
self.assertEqual(person.age, 30)
# test exclude only updates set fields
person = self.Person.objects.exclude('name').get()
person.age = 21
person.save()
person.reload()
self.assertEqual(person.name, 'User')
self.assertEqual(person.age, 21)
# Test only / exclude can set non excluded / included fields
person = self.Person.objects.only('name').get()
person.name = 'Test'
person.age = 30
person.save()
person.reload()
self.assertEqual(person.name, 'Test')
self.assertEqual(person.age, 30)
# test exclude only updates set fields
person = self.Person.objects.exclude('name').get()
person.name = 'User'
person.age = 21
person.save()
person.reload()
self.assertEqual(person.name, 'User')
self.assertEqual(person.age, 21)
# Confirm does remove unrequired fields
person = self.Person.objects.exclude('name').get()
person.age = None
person.save()
person.reload()
self.assertEqual(person.name, 'User')
self.assertEqual(person.age, None)
person = self.Person.objects.get()
person.name = None
person.age = None
person.save()
person.reload()
self.assertEqual(person.name, None)
self.assertEqual(person.age, None)
def test_inserts_if_you_set_the_pk(self):
p1 = self.Person(name='p1', id=bson.ObjectId()).save()
p2 = self.Person(name='p2')
p2.id = bson.ObjectId()
p2.save()
self.assertEqual(2, self.Person.objects.count())
def test_can_save_if_not_included(self):
class EmbeddedDoc(EmbeddedDocument):
pass
class Simple(Document):
pass
class Doc(Document):
string_field = StringField(default='1')
int_field = IntField(default=1)
float_field = FloatField(default=1.1)
boolean_field = BooleanField(default=True)
datetime_field = DateTimeField(default=datetime.now)
embedded_document_field = EmbeddedDocumentField(
EmbeddedDoc, default=lambda: EmbeddedDoc())
list_field = ListField(default=lambda: [1, 2, 3])
dict_field = DictField(default=lambda: {"hello": "world"})
objectid_field = ObjectIdField(default=bson.ObjectId)
reference_field = ReferenceField(Simple, default=lambda:
Simple().save())
map_field = MapField(IntField(), default=lambda: {"simple": 1})
decimal_field = DecimalField(default=1.0)
complex_datetime_field = ComplexDateTimeField(default=datetime.now)
url_field = URLField(default="http://mongoengine.org")
dynamic_field = DynamicField(default=1)
generic_reference_field = GenericReferenceField(
default=lambda: Simple().save())
sorted_list_field = SortedListField(IntField(),
default=lambda: [1, 2, 3])
email_field = EmailField(default="ross@example.com")
geo_point_field = GeoPointField(default=lambda: [1, 2])
sequence_field = SequenceField()
uuid_field = UUIDField(default=uuid.uuid4)
generic_embedded_document_field = GenericEmbeddedDocumentField(
default=lambda: EmbeddedDoc())
Simple.drop_collection()
Doc.drop_collection()
Doc().save()
my_doc = Doc.objects.only("string_field").first()
my_doc.string_field = "string"
my_doc.save()
my_doc = Doc.objects.get(string_field="string")
self.assertEqual(my_doc.string_field, "string")
self.assertEqual(my_doc.int_field, 1)
def test_document_update(self):
def update_not_saved_raises():
person = self.Person(name='dcrosta')
person.update(set__name='Dan Crosta')
self.assertRaises(OperationError, update_not_saved_raises)
author = self.Person(name='dcrosta')
author.save()
author.update(set__name='Dan Crosta')
author.reload()
p1 = self.Person.objects.first()
self.assertEqual(p1.name, author.name)
def update_no_value_raises():
person = self.Person.objects.first()
person.update()
self.assertRaises(OperationError, update_no_value_raises)
def update_no_op_raises():
person = self.Person.objects.first()
person.update(name="Dan")
self.assertRaises(InvalidQueryError, update_no_op_raises)
def test_update_unique_field(self):
class Doc(Document):
name = StringField(unique=True)
doc1 = Doc(name="first").save()
doc2 = Doc(name="second").save()
self.assertRaises(NotUniqueError, lambda:
doc2.update(set__name=doc1.name))
def test_embedded_update(self):
"""
Test update on `EmbeddedDocumentField` fields
"""
class Page(EmbeddedDocument):
log_message = StringField(verbose_name="Log message",
required=True)
class Site(Document):
page = EmbeddedDocumentField(Page)
Site.drop_collection()
site = Site(page=Page(log_message="Warning: Dummy message"))
site.save()
# Update
site = Site.objects.first()
site.page.log_message = "Error: Dummy message"
site.save()
site = Site.objects.first()
self.assertEqual(site.page.log_message, "Error: Dummy message")
def test_embedded_update_db_field(self):
"""
Test update on `EmbeddedDocumentField` fields when db_field is other
than default.
"""
class Page(EmbeddedDocument):
log_message = StringField(verbose_name="Log message",
db_field="page_log_message",
required=True)
class Site(Document):
page = EmbeddedDocumentField(Page)
Site.drop_collection()
site = Site(page=Page(log_message="Warning: Dummy message"))
site.save()
# Update
site = Site.objects.first()
site.page.log_message = "Error: Dummy message"
site.save()
site = Site.objects.first()
self.assertEqual(site.page.log_message, "Error: Dummy message")
def test_save_only_changed_fields(self):
"""Ensure save only sets / unsets changed fields
"""
class User(self.Person):
active = BooleanField(default=True)
User.drop_collection()
# Create person object and save it to the database
user = User(name='Test User', age=30, active=True)
user.save()
user.reload()
# Simulated Race condition
same_person = self.Person.objects.get()
same_person.active = False
user.age = 21
user.save()
same_person.name = 'User'
same_person.save()
person = self.Person.objects.get()
self.assertEqual(person.name, 'User')
self.assertEqual(person.age, 21)
self.assertEqual(person.active, False)
def test_query_count_when_saving(self):
"""Ensure references don't cause extra fetches when saving"""
class Organization(Document):
name = StringField()
class User(Document):
name = StringField()
orgs = ListField(ReferenceField('Organization'))
class Feed(Document):
name = StringField()
class UserSubscription(Document):
name = StringField()
user = ReferenceField(User)
feed = ReferenceField(Feed)
Organization.drop_collection()
User.drop_collection()
Feed.drop_collection()
UserSubscription.drop_collection()
o1 = Organization(name="o1").save()
o2 = Organization(name="o2").save()
u1 = User(name="Ross", orgs=[o1, o2]).save()
f1 = Feed(name="MongoEngine").save()
sub = UserSubscription(user=u1, feed=f1).save()
user = User.objects.first()
        # Even if stored as ObjectIds internally, mongoengine uses DBRefs,
        # as ObjectIds aren't automatically dereferenced
self.assertTrue(isinstance(user._data['orgs'][0], DBRef))
self.assertTrue(isinstance(user.orgs[0], Organization))
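        # Accessing the field dereferences the DBRef and caches the
        # Organization back into _data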
self.assertTrue(isinstance(user._data['orgs'][0], Organization))
# Changing a value
with query_counter() as q:
self.assertEqual(q, 0)
sub = UserSubscription.objects.first()
self.assertEqual(q, 1)
sub.name = "Test Sub"
sub.save()
self.assertEqual(q, 2)
# Changing a value that will cascade
with query_counter() as q:
self.assertEqual(q, 0)
sub = UserSubscription.objects.first()
self.assertEqual(q, 1)
sub.user.name = "Test"
self.assertEqual(q, 2)
sub.save(cascade=True)
self.assertEqual(q, 3)
# Changing a value and one that will cascade
with query_counter() as q:
self.assertEqual(q, 0)
sub = UserSubscription.objects.first()
sub.name = "Test Sub 2"
self.assertEqual(q, 1)
sub.user.name = "Test 2"
self.assertEqual(q, 2)
sub.save(cascade=True)
self.assertEqual(q, 4) # One for the UserSub and one for the User
# Saving with just the refs
with query_counter() as q:
self.assertEqual(q, 0)
sub = UserSubscription(user=u1.pk, feed=f1.pk)
self.assertEqual(q, 0)
sub.save()
self.assertEqual(q, 1)
# Saving with just the refs on a ListField
with query_counter() as q:
self.assertEqual(q, 0)
User(name="Bob", orgs=[o1.pk, o2.pk]).save()
self.assertEqual(q, 1)
# Saving new objects
with query_counter() as q:
self.assertEqual(q, 0)
user = User.objects.first()
self.assertEqual(q, 1)
feed = Feed.objects.first()
self.assertEqual(q, 2)
sub = UserSubscription(user=user, feed=feed)
self.assertEqual(q, 2) # Check no change
sub.save()
self.assertEqual(q, 3)
def test_set_unset_one_operation(self):
"""Ensure that $set and $unset actions are performed in the same
operation.
"""
class FooBar(Document):
foo = StringField(default=None)
bar = StringField(default=None)
FooBar.drop_collection()
# write an entity with a single prop
foo = FooBar(foo='foo').save()
self.assertEqual(foo.foo, 'foo')
del foo.foo
foo.bar = 'bar'
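        # A single save() should combine the $unset of foo and the $set of
        # bar into one update operation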
with query_counter() as q:
self.assertEqual(0, q)
foo.save()
self.assertEqual(1, q)
def test_save_only_changed_fields_recursive(self):
"""Ensure save only sets / unsets changed fields
"""
class Comment(EmbeddedDocument):
published = BooleanField(default=True)
class User(self.Person):
comments_dict = DictField()
comments = ListField(EmbeddedDocumentField(Comment))
active = BooleanField(default=True)
User.drop_collection()
# Create person object and save it to the database
person = User(name='Test User', age=30, active=True)
person.comments.append(Comment())
person.save()
person.reload()
person = self.Person.objects.get()
self.assertTrue(person.comments[0].published)
person.comments[0].published = False
person.save()
person = self.Person.objects.get()
self.assertFalse(person.comments[0].published)
        # Simple dict with an embedded document value
person.comments_dict['first_post'] = Comment()
person.save()
person = self.Person.objects.get()
self.assertTrue(person.comments_dict['first_post'].published)
person.comments_dict['first_post'].published = False
person.save()
person = self.Person.objects.get()
self.assertFalse(person.comments_dict['first_post'].published)
def test_delete(self):
"""Ensure that document may be deleted using the delete method.
"""
person = self.Person(name="Test User", age=30)
person.save()
self.assertEqual(self.Person.objects.count(), 1)
person.delete()
self.assertEqual(self.Person.objects.count(), 0)
def test_save_custom_id(self):
"""Ensure that a document may be saved with a custom _id.
"""
# Create person object and save it to the database
person = self.Person(name='Test User', age=30,
id='497ce96f395f2f052a494fd4')
person.save()
# Ensure that the object is in the database with the correct _id
collection = self.db[self.Person._get_collection_name()]
person_obj = collection.find_one({'name': 'Test User'})
self.assertEqual(str(person_obj['_id']), '497ce96f395f2f052a494fd4')
def test_save_custom_pk(self):
"""Ensure that a document may be saved with a custom _id using pk alias.
"""
# Create person object and save it to the database
person = self.Person(name='Test User', age=30,
pk='497ce96f395f2f052a494fd4')
person.save()
# Ensure that the object is in the database with the correct _id
collection = self.db[self.Person._get_collection_name()]
person_obj = collection.find_one({'name': 'Test User'})
self.assertEqual(str(person_obj['_id']), '497ce96f395f2f052a494fd4')
def test_save_list(self):
"""Ensure that a list field may be properly saved.
"""
class Comment(EmbeddedDocument):
content = StringField()
class BlogPost(Document):
content = StringField()
comments = ListField(EmbeddedDocumentField(Comment))
tags = ListField(StringField())
BlogPost.drop_collection()
post = BlogPost(content='Went for a walk today...')
post.tags = tags = ['fun', 'leisure']
comments = [Comment(content='Good for you'), Comment(content='Yay.')]
post.comments = comments
post.save()
collection = self.db[BlogPost._get_collection_name()]
post_obj = collection.find_one()
self.assertEqual(post_obj['tags'], tags)
for comment_obj, comment in zip(post_obj['comments'], comments):
self.assertEqual(comment_obj['content'], comment['content'])
BlogPost.drop_collection()
def test_list_search_by_embedded(self):
class User(Document):
username = StringField(required=True)
meta = {'allow_inheritance': False}
class Comment(EmbeddedDocument):
comment = StringField()
user = ReferenceField(User,
required=True)
meta = {'allow_inheritance': False}
class Page(Document):
comments = ListField(EmbeddedDocumentField(Comment))
meta = {'allow_inheritance': False,
'indexes': [
{'fields': ['comments.user']}
]}
User.drop_collection()
Page.drop_collection()
u1 = User(username="wilson")
u1.save()
u2 = User(username="rozza")
u2.save()
u3 = User(username="hmarr")
u3.save()
p1 = Page(comments=[Comment(user=u1, comment="Its very good"),
Comment(user=u2, comment="Hello world"),
Comment(user=u3, comment="Ping Pong"),
Comment(user=u1, comment="I like a beer")])
p1.save()
p2 = Page(comments=[Comment(user=u1, comment="Its very good"),
Comment(user=u2, comment="Hello world")])
p2.save()
p3 = Page(comments=[Comment(user=u3, comment="Its very good")])
p3.save()
p4 = Page(comments=[Comment(user=u2, comment="Heavy Metal song")])
p4.save()
self.assertEqual([p1, p2], list(Page.objects.filter(comments__user=u1)))
self.assertEqual([p1, p2, p4], list(Page.objects.filter(comments__user=u2)))
self.assertEqual([p1, p3], list(Page.objects.filter(comments__user=u3)))
def test_save_embedded_document(self):
"""Ensure that a document with an embedded document field may be
saved in the database.
"""
class EmployeeDetails(EmbeddedDocument):
position = StringField()
class Employee(self.Person):
salary = IntField()
details = EmbeddedDocumentField(EmployeeDetails)
# Create employee object and save it to the database
employee = Employee(name='Test Employee', age=50, salary=20000)
employee.details = EmployeeDetails(position='Developer')
employee.save()
# Ensure that the object is in the database
collection = self.db[self.Person._get_collection_name()]
employee_obj = collection.find_one({'name': 'Test Employee'})
self.assertEqual(employee_obj['name'], 'Test Employee')
self.assertEqual(employee_obj['age'], 50)
# Ensure that the 'details' embedded object saved correctly
self.assertEqual(employee_obj['details']['position'], 'Developer')
def test_embedded_update_after_save(self):
"""
Test update of `EmbeddedDocumentField` attached to a newly saved
document.
"""
class Page(EmbeddedDocument):
log_message = StringField(verbose_name="Log message",
required=True)
class Site(Document):
page = EmbeddedDocumentField(Page)
Site.drop_collection()
site = Site(page=Page(log_message="Warning: Dummy message"))
site.save()
# Update
site.page.log_message = "Error: Dummy message"
site.save()
site = Site.objects.first()
self.assertEqual(site.page.log_message, "Error: Dummy message")
def test_updating_an_embedded_document(self):
"""Ensure that a document with an embedded document field may be
saved in the database.
"""
class EmployeeDetails(EmbeddedDocument):
position = StringField()
class Employee(self.Person):
salary = IntField()
details = EmbeddedDocumentField(EmployeeDetails)
# Create employee object and save it to the database
employee = Employee(name='Test Employee', age=50, salary=20000)
employee.details = EmployeeDetails(position='Developer')
employee.save()
# Test updating an embedded document
promoted_employee = Employee.objects.get(name='Test Employee')
promoted_employee.details.position = 'Senior Developer'
promoted_employee.save()
promoted_employee.reload()
self.assertEqual(promoted_employee.name, 'Test Employee')
self.assertEqual(promoted_employee.age, 50)
# Ensure that the 'details' embedded object saved correctly
self.assertEqual(promoted_employee.details.position, 'Senior Developer')
# Test removal
promoted_employee.details = None
promoted_employee.save()
promoted_employee.reload()
self.assertEqual(promoted_employee.details, None)
def test_object_mixins(self):
class NameMixin(object):
name = StringField()
class Foo(EmbeddedDocument, NameMixin):
quantity = IntField()
self.assertEqual(['name', 'quantity'], sorted(Foo._fields.keys()))
class Bar(Document, NameMixin):
widgets = StringField()
self.assertEqual(['id', 'name', 'widgets'], sorted(Bar._fields.keys()))
def test_mixin_inheritance(self):
class BaseMixIn(object):
count = IntField()
data = StringField()
class DoubleMixIn(BaseMixIn):
comment = StringField()
class TestDoc(Document, DoubleMixIn):
age = IntField()
TestDoc.drop_collection()
t = TestDoc(count=12, data="test",
comment="great!", age=19)
t.save()
t = TestDoc.objects.first()
self.assertEqual(t.age, 19)
self.assertEqual(t.comment, "great!")
self.assertEqual(t.data, "test")
self.assertEqual(t.count, 12)
def test_save_reference(self):
"""Ensure that a document reference field may be saved in the database.
"""
class BlogPost(Document):
meta = {'collection': 'blogpost_1'}
content = StringField()
author = ReferenceField(self.Person)
BlogPost.drop_collection()
author = self.Person(name='Test User')
author.save()
post = BlogPost(content='Watched some TV today... how exciting.')
# Should only reference author when saving
post.author = author
post.save()
post_obj = BlogPost.objects.first()
# Test laziness
self.assertTrue(isinstance(post_obj._data['author'],
bson.DBRef))
self.assertTrue(isinstance(post_obj.author, self.Person))
self.assertEqual(post_obj.author.name, 'Test User')
# Ensure that the dereferenced object may be changed and saved
post_obj.author.age = 25
post_obj.author.save()
author = list(self.Person.objects(name='Test User'))[-1]
self.assertEqual(author.age, 25)
BlogPost.drop_collection()
def test_duplicate_db_fields_raise_invalid_document_error(self):
"""Ensure a InvalidDocumentError is thrown if duplicate fields
declare the same db_field"""
def throw_invalid_document_error():
class Foo(Document):
name = StringField()
name2 = StringField(db_field='name')
self.assertRaises(InvalidDocumentError, throw_invalid_document_error)
def test_invalid_son(self):
"""Raise an error if loading invalid data"""
class Occurrence(EmbeddedDocument):
number = IntField()
class Word(Document):
stem = StringField()
count = IntField(default=1)
forms = ListField(StringField(), default=list)
occurs = ListField(EmbeddedDocumentField(Occurrence), default=list)
def raise_invalid_document():
Word._from_son({'stem': [1, 2, 3], 'forms': 1, 'count': 'one',
'occurs': {"hello": None}})
self.assertRaises(InvalidDocumentError, raise_invalid_document)
def test_reverse_delete_rule_cascade_and_nullify(self):
"""Ensure that a referenced document is also deleted upon deletion.
"""
class BlogPost(Document):
content = StringField()
author = ReferenceField(self.Person, reverse_delete_rule=CASCADE)
reviewer = ReferenceField(self.Person, reverse_delete_rule=NULLIFY)
self.Person.drop_collection()
BlogPost.drop_collection()
author = self.Person(name='Test User')
author.save()
reviewer = self.Person(name='Re Viewer')
reviewer.save()
post = BlogPost(content='Watched some TV')
post.author = author
post.reviewer = reviewer
post.save()
reviewer.delete()
self.assertEqual(BlogPost.objects.count(), 1) # No effect on the BlogPost
self.assertEqual(BlogPost.objects.get().reviewer, None)
# Delete the Person, which should lead to deletion of the BlogPost, too
author.delete()
self.assertEqual(BlogPost.objects.count(), 0)
def test_reverse_delete_rule_with_document_inheritance(self):
"""Ensure that a referenced document is also deleted upon deletion
of a child document.
"""
class Writer(self.Person):
pass
class BlogPost(Document):
content = StringField()
author = ReferenceField(self.Person, reverse_delete_rule=CASCADE)
reviewer = ReferenceField(self.Person, reverse_delete_rule=NULLIFY)
self.Person.drop_collection()
BlogPost.drop_collection()
author = Writer(name='Test User')
author.save()
reviewer = Writer(name='Re Viewer')
reviewer.save()
post = BlogPost(content='Watched some TV')
post.author = author
post.reviewer = reviewer
post.save()
reviewer.delete()
self.assertEqual(BlogPost.objects.count(), 1)
self.assertEqual(BlogPost.objects.get().reviewer, None)
        # Deleting the Writer should lead to deletion of the BlogPost
author.delete()
self.assertEqual(BlogPost.objects.count(), 0)
def test_reverse_delete_rule_cascade_and_nullify_complex_field(self):
"""Ensure that a referenced document is also deleted upon deletion for
complex fields.
"""
class BlogPost(Document):
content = StringField()
authors = ListField(ReferenceField(self.Person, reverse_delete_rule=CASCADE))
reviewers = ListField(ReferenceField(self.Person, reverse_delete_rule=NULLIFY))
self.Person.drop_collection()
BlogPost.drop_collection()
author = self.Person(name='Test User')
author.save()
reviewer = self.Person(name='Re Viewer')
reviewer.save()
post = BlogPost(content='Watched some TV')
post.authors = [author]
post.reviewers = [reviewer]
post.save()
# Deleting the reviewer should have no effect on the BlogPost
reviewer.delete()
self.assertEqual(BlogPost.objects.count(), 1)
self.assertEqual(BlogPost.objects.get().reviewers, [])
# Delete the Person, which should lead to deletion of the BlogPost, too
author.delete()
self.assertEqual(BlogPost.objects.count(), 0)
def test_reverse_delete_rule_cascade_triggers_pre_delete_signal(self):
        '''Ensure the pre_delete signal is triggered upon a cascading deletion.

        Set up a blog post with content, an author and an editor; delete the
        author, which triggers deletion of the blog post via cascade; the
        blog post's pre_delete signal alters an editor attribute.
        '''
class Editor(self.Person):
review_queue = IntField(default=0)
class BlogPost(Document):
content = StringField()
author = ReferenceField(self.Person, reverse_delete_rule=CASCADE)
editor = ReferenceField(Editor)
@classmethod
def pre_delete(cls, sender, document, **kwargs):
# decrement the docs-to-review count
document.editor.update(dec__review_queue=1)
signals.pre_delete.connect(BlogPost.pre_delete, sender=BlogPost)
self.Person.drop_collection()
BlogPost.drop_collection()
Editor.drop_collection()
author = self.Person(name='Will S.').save()
editor = Editor(name='Max P.', review_queue=1).save()
BlogPost(content='wrote some books', author=author,
editor=editor).save()
# delete the author, the post is also deleted due to the CASCADE rule
author.delete()
# the pre-delete signal should have decremented the editor's queue
editor = Editor.objects(name='Max P.').get()
self.assertEqual(editor.review_queue, 0)
def test_two_way_reverse_delete_rule(self):
"""Ensure that Bi-Directional relationships work with
reverse_delete_rule
"""
class Bar(Document):
content = StringField()
foo = ReferenceField('Foo')
class Foo(Document):
content = StringField()
bar = ReferenceField(Bar)
Bar.register_delete_rule(Foo, 'bar', NULLIFY)
Foo.register_delete_rule(Bar, 'foo', NULLIFY)
Bar.drop_collection()
Foo.drop_collection()
b = Bar(content="Hello")
b.save()
f = Foo(content="world", bar=b)
f.save()
b.foo = f
b.save()
f.delete()
        self.assertEqual(Bar.objects.count(), 1)  # No effect on the Bar
self.assertEqual(Bar.objects.get().foo, None)
def test_invalid_reverse_delete_rules_raise_errors(self):
def throw_invalid_document_error():
class Blog(Document):
content = StringField()
authors = MapField(ReferenceField(self.Person, reverse_delete_rule=CASCADE))
reviewers = DictField(field=ReferenceField(self.Person, reverse_delete_rule=NULLIFY))
self.assertRaises(InvalidDocumentError, throw_invalid_document_error)
def throw_invalid_document_error_embedded():
class Parents(EmbeddedDocument):
father = ReferenceField('Person', reverse_delete_rule=DENY)
mother = ReferenceField('Person', reverse_delete_rule=DENY)
self.assertRaises(InvalidDocumentError, throw_invalid_document_error_embedded)
def test_reverse_delete_rule_cascade_recurs(self):
"""Ensure that a chain of documents is also deleted upon cascaded
deletion.
"""
class BlogPost(Document):
content = StringField()
author = ReferenceField(self.Person, reverse_delete_rule=CASCADE)
class Comment(Document):
text = StringField()
post = ReferenceField(BlogPost, reverse_delete_rule=CASCADE)
self.Person.drop_collection()
BlogPost.drop_collection()
Comment.drop_collection()
author = self.Person(name='Test User')
author.save()
        post = BlogPost(content='Watched some TV')
post.author = author
post.save()
        comment = Comment(text='Kudos.')
comment.post = post
comment.save()
# Delete the Person, which should lead to deletion of the BlogPost, and,
# recursively to the Comment, too
author.delete()
self.assertEqual(Comment.objects.count(), 0)
self.Person.drop_collection()
BlogPost.drop_collection()
Comment.drop_collection()
def test_reverse_delete_rule_deny(self):
"""Ensure that a document cannot be referenced if there are still
documents referring to it.
"""
class BlogPost(Document):
content = StringField()
author = ReferenceField(self.Person, reverse_delete_rule=DENY)
self.Person.drop_collection()
BlogPost.drop_collection()
author = self.Person(name='Test User')
author.save()
        post = BlogPost(content='Watched some TV')
post.author = author
post.save()
        # Deleting the Person should be denied
        self.assertRaises(OperationError, author.delete)
self.assertEqual(BlogPost.objects.count(), 1) # No objects may have been deleted
self.assertEqual(self.Person.objects.count(), 1)
        # Other users that don't have BlogPosts must be removable as normal
author = self.Person(name='Another User')
author.save()
self.assertEqual(self.Person.objects.count(), 2)
author.delete()
self.assertEqual(self.Person.objects.count(), 1)
self.Person.drop_collection()
BlogPost.drop_collection()
    def test_subclasses_and_unique_keys_works(self):
class A(Document):
pass
class B(A):
foo = BooleanField(unique=True)
A.drop_collection()
B.drop_collection()
A().save()
A().save()
B(foo=True).save()
self.assertEqual(A.objects.count(), 2)
self.assertEqual(B.objects.count(), 1)
A.drop_collection()
B.drop_collection()
def test_document_hash(self):
"""Test document in list, dict, set
"""
class User(Document):
pass
class BlogPost(Document):
pass
        # Clear out old data
User.drop_collection()
BlogPost.drop_collection()
u1 = User.objects.create()
u2 = User.objects.create()
u3 = User.objects.create()
u4 = User() # New object
b1 = BlogPost.objects.create()
b2 = BlogPost.objects.create()
# in List
all_user_list = list(User.objects.all())
self.assertTrue(u1 in all_user_list)
self.assertTrue(u2 in all_user_list)
self.assertTrue(u3 in all_user_list)
self.assertFalse(u4 in all_user_list) # New object
self.assertFalse(b1 in all_user_list) # Other object
self.assertFalse(b2 in all_user_list) # Other object
# in Dict
all_user_dic = {}
for u in User.objects.all():
all_user_dic[u] = "OK"
self.assertEqual(all_user_dic.get(u1, False), "OK")
self.assertEqual(all_user_dic.get(u2, False), "OK")
self.assertEqual(all_user_dic.get(u3, False), "OK")
self.assertEqual(all_user_dic.get(u4, False), False) # New object
self.assertEqual(all_user_dic.get(b1, False), False) # Other object
self.assertEqual(all_user_dic.get(b2, False), False) # Other object
# in Set
all_user_set = set(User.objects.all())
self.assertTrue(u1 in all_user_set)
def test_picklable(self):
pickle_doc = PickleTest(number=1, string="One", lists=['1', '2'])
pickle_doc.embedded = PickleEmbedded()
pickled_doc = pickle.dumps(pickle_doc) # make sure pickling works even before the doc is saved
pickle_doc.save()
pickled_doc = pickle.dumps(pickle_doc)
resurrected = pickle.loads(pickled_doc)
self.assertEqual(resurrected, pickle_doc)
# Test pickling changed data
pickle_doc.lists.append("3")
pickled_doc = pickle.dumps(pickle_doc)
resurrected = pickle.loads(pickled_doc)
self.assertEqual(resurrected, pickle_doc)
resurrected.string = "Two"
resurrected.save()
pickle_doc = PickleTest.objects.first()
self.assertEqual(resurrected, pickle_doc)
self.assertEqual(pickle_doc.string, "Two")
self.assertEqual(pickle_doc.lists, ["1", "2", "3"])
def test_dynamic_document_pickle(self):
pickle_doc = PickleDynamicTest(name="test", number=1, string="One", lists=['1', '2'])
pickle_doc.embedded = PickleDyanmicEmbedded(foo="Bar")
pickled_doc = pickle.dumps(pickle_doc) # make sure pickling works even before the doc is saved
pickle_doc.save()
pickled_doc = pickle.dumps(pickle_doc)
resurrected = pickle.loads(pickled_doc)
self.assertEqual(resurrected, pickle_doc)
self.assertEqual(resurrected._fields_ordered,
pickle_doc._fields_ordered)
self.assertEqual(resurrected._dynamic_fields.keys(),
pickle_doc._dynamic_fields.keys())
self.assertEqual(resurrected.embedded, pickle_doc.embedded)
self.assertEqual(resurrected.embedded._fields_ordered,
pickle_doc.embedded._fields_ordered)
self.assertEqual(resurrected.embedded._dynamic_fields.keys(),
pickle_doc.embedded._dynamic_fields.keys())
def test_picklable_on_signals(self):
pickle_doc = PickleSignalsTest(number=1, string="One", lists=['1', '2'])
pickle_doc.embedded = PickleEmbedded()
pickle_doc.save()
pickle_doc.delete()
def test_throw_invalid_document_error(self):
        # Defining a field that shadows a Document method (validate) must raise
def throw_invalid_document_error():
class Blog(Document):
validate = DictField()
self.assertRaises(InvalidDocumentError, throw_invalid_document_error)
def test_mutating_documents(self):
class B(EmbeddedDocument):
field1 = StringField(default='field1')
class A(Document):
b = EmbeddedDocumentField(B, default=lambda: B())
A.drop_collection()
a = A()
a.save()
a.reload()
self.assertEqual(a.b.field1, 'field1')
class C(EmbeddedDocument):
c_field = StringField(default='cfield')
class B(EmbeddedDocument):
field1 = StringField(default='field1')
field2 = EmbeddedDocumentField(C, default=lambda: C())
class A(Document):
b = EmbeddedDocumentField(B, default=lambda: B())
a = A.objects()[0]
a.b.field2.c_field = 'new value'
a.save()
a.reload()
self.assertEqual(a.b.field2.c_field, 'new value')
def test_can_save_false_values(self):
"""Ensures you can save False values on save"""
class Doc(Document):
foo = StringField()
archived = BooleanField(default=False, required=True)
Doc.drop_collection()
d = Doc()
d.save()
d.archived = False
d.save()
self.assertEqual(Doc.objects(archived=False).count(), 1)
def test_can_save_false_values_dynamic(self):
"""Ensures you can save False values on dynamic docs"""
class Doc(DynamicDocument):
foo = StringField()
Doc.drop_collection()
d = Doc()
d.save()
d.archived = False
d.save()
self.assertEqual(Doc.objects(archived=False).count(), 1)
def test_do_not_save_unchanged_references(self):
"""Ensures cascading saves dont auto update"""
class Job(Document):
name = StringField()
class Person(Document):
name = StringField()
age = IntField()
job = ReferenceField(Job)
Job.drop_collection()
Person.drop_collection()
job = Job(name="Job 1")
# job should not have any changed fields after the save
job.save()
person = Person(name="name", age=10, job=job)
from pymongo.collection import Collection
orig_update = Collection.update
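        # Monkey-patch Collection.update so that any update issued during
        # save() fails the test: the unchanged Job reference must not be
        # re-saved when the new Person is inserted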
try:
def fake_update(*args, **kwargs):
self.fail("Unexpected update for %s" % args[0].name)
return orig_update(*args, **kwargs)
Collection.update = fake_update
person.save()
finally:
Collection.update = orig_update
def test_db_alias_tests(self):
""" DB Alias tests """
        # mongoenginetest is the default connection alias from setUp()
# Register Aliases
register_connection('testdb-1', 'mongoenginetest2')
register_connection('testdb-2', 'mongoenginetest3')
register_connection('testdb-3', 'mongoenginetest4')
class User(Document):
name = StringField()
meta = {"db_alias": "testdb-1"}
class Book(Document):
name = StringField()
meta = {"db_alias": "testdb-2"}
# Drops
User.drop_collection()
Book.drop_collection()
# Create
bob = User.objects.create(name="Bob")
hp = Book.objects.create(name="Harry Potter")
# Selects
self.assertEqual(User.objects.first(), bob)
self.assertEqual(Book.objects.first(), hp)
# DeReference
class AuthorBooks(Document):
author = ReferenceField(User)
book = ReferenceField(Book)
meta = {"db_alias": "testdb-3"}
# Drops
AuthorBooks.drop_collection()
ab = AuthorBooks.objects.create(author=bob, book=hp)
# select
self.assertEqual(AuthorBooks.objects.first(), ab)
self.assertEqual(AuthorBooks.objects.first().book, hp)
self.assertEqual(AuthorBooks.objects.first().author, bob)
self.assertEqual(AuthorBooks.objects.filter(author=bob).first(), ab)
self.assertEqual(AuthorBooks.objects.filter(book=hp).first(), ab)
# DB Alias
self.assertEqual(User._get_db(), get_db("testdb-1"))
self.assertEqual(Book._get_db(), get_db("testdb-2"))
self.assertEqual(AuthorBooks._get_db(), get_db("testdb-3"))
# Collections
self.assertEqual(User._get_collection(), get_db("testdb-1")[User._get_collection_name()])
self.assertEqual(Book._get_collection(), get_db("testdb-2")[Book._get_collection_name()])
self.assertEqual(AuthorBooks._get_collection(), get_db("testdb-3")[AuthorBooks._get_collection_name()])
def test_db_alias_overrides(self):
"""db_alias can be overriden
"""
# Register a connection with db_alias testdb-2
register_connection('testdb-2', 'mongoenginetest2')
class A(Document):
"""Uses default db_alias
"""
name = StringField()
meta = {"allow_inheritance": True}
class B(A):
"""Uses testdb-2 db_alias
"""
meta = {"db_alias": "testdb-2"}
A.objects.all()
self.assertEqual('testdb-2', B._meta.get('db_alias'))
self.assertEqual('mongoenginetest',
A._get_collection().database.name)
self.assertEqual('mongoenginetest2',
B._get_collection().database.name)
def test_db_alias_propagates(self):
"""db_alias propagates?
"""
register_connection('testdb-1', 'mongoenginetest2')
class A(Document):
name = StringField()
meta = {"db_alias": "testdb-1", "allow_inheritance": True}
class B(A):
pass
self.assertEqual('testdb-1', B._meta.get('db_alias'))
def test_db_ref_usage(self):
""" DB Ref usage in dict_fields"""
class User(Document):
name = StringField()
class Book(Document):
name = StringField()
author = ReferenceField(User)
extra = DictField()
meta = {
'ordering': ['+name']
}
def __unicode__(self):
return self.name
def __str__(self):
return self.name
# Drops
User.drop_collection()
Book.drop_collection()
# Authors
bob = User.objects.create(name="Bob")
jon = User.objects.create(name="Jon")
# Redactors
karl = User.objects.create(name="Karl")
susan = User.objects.create(name="Susan")
peter = User.objects.create(name="Peter")
# Bob
Book.objects.create(name="1", author=bob, extra={
"a": bob.to_dbref(), "b": [karl.to_dbref(), susan.to_dbref()]})
Book.objects.create(name="2", author=bob, extra={
"a": bob.to_dbref(), "b": karl.to_dbref()})
Book.objects.create(name="3", author=bob, extra={
"a": bob.to_dbref(), "c": [jon.to_dbref(), peter.to_dbref()]})
Book.objects.create(name="4", author=bob)
# Jon
Book.objects.create(name="5", author=jon)
Book.objects.create(name="6", author=peter)
Book.objects.create(name="7", author=jon)
Book.objects.create(name="8", author=jon)
Book.objects.create(name="9", author=jon,
extra={"a": peter.to_dbref()})
# Checks
self.assertEqual(",".join([str(b) for b in Book.objects.all()]),
"1,2,3,4,5,6,7,8,9")
# bob related books
self.assertEqual(",".join([str(b) for b in Book.objects.filter(
Q(extra__a=bob) |
Q(author=bob) |
Q(extra__b=bob))]),
"1,2,3,4")
# Susan & Karl related books
self.assertEqual(",".join([str(b) for b in Book.objects.filter(
Q(extra__a__all=[karl, susan]) |
Q(author__all=[karl, susan]) |
Q(extra__b__all=[
karl.to_dbref(), susan.to_dbref()]))
]), "1")
# $Where
self.assertEqual(u",".join([str(b) for b in Book.objects.filter(
__raw__={
"$where": """
function(){
return this.name == '1' ||
this.name == '2';}"""
})]),
"1,2")
def test_switch_db_instance(self):
register_connection('testdb-1', 'mongoenginetest2')
class Group(Document):
name = StringField()
Group.drop_collection()
with switch_db(Group, 'testdb-1') as Group:
Group.drop_collection()
Group(name="hello - default").save()
self.assertEqual(1, Group.objects.count())
group = Group.objects.first()
group.switch_db('testdb-1')
group.name = "hello - testdb!"
group.save()
with switch_db(Group, 'testdb-1') as Group:
group = Group.objects.first()
self.assertEqual("hello - testdb!", group.name)
group = Group.objects.first()
self.assertEqual("hello - default", group.name)
# Slightly contrived now - perform an update
# Only works as they have the same object_id
group.switch_db('testdb-1')
group.update(set__name="hello - update")
with switch_db(Group, 'testdb-1') as Group:
group = Group.objects.first()
self.assertEqual("hello - update", group.name)
Group.drop_collection()
self.assertEqual(0, Group.objects.count())
group = Group.objects.first()
self.assertEqual("hello - default", group.name)
# Totally contrived now - perform a delete
# Only works as they have the same object_id
group.switch_db('testdb-1')
group.delete()
with switch_db(Group, 'testdb-1') as Group:
self.assertEqual(0, Group.objects.count())
group = Group.objects.first()
self.assertEqual("hello - default", group.name)
    def test_no_overwriting_no_data_loss(self):
class User(Document):
username = StringField(primary_key=True)
name = StringField()
@property
def foo(self):
return True
User.drop_collection()
user = User(username="Ross", foo="bar")
self.assertTrue(user.foo)
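        # Write raw data directly through pymongo, bypassing the document
        # layer, to simulate fields unknown to the User model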
User._get_collection().save({"_id": "Ross", "foo": "Bar",
"data": [1, 2, 3]})
user = User.objects.first()
self.assertEqual("Ross", user.username)
self.assertEqual(True, user.foo)
self.assertEqual("Bar", user._data["foo"])
self.assertEqual([1, 2, 3], user._data["data"])
def test_spaces_in_keys(self):
class Embedded(DynamicEmbeddedDocument):
pass
class Doc(DynamicDocument):
pass
Doc.drop_collection()
doc = Doc()
setattr(doc, 'hello world', 1)
doc.save()
one = Doc.objects.filter(**{'hello world': 1}).count()
self.assertEqual(1, one)
def test_shard_key(self):
class LogEntry(Document):
machine = StringField()
log = StringField()
meta = {
'shard_key': ('machine',)
}
LogEntry.drop_collection()
log = LogEntry()
log.machine = "Localhost"
log.save()
log.log = "Saving"
log.save()
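        # Shard key fields are immutable once the document has been saved;
        # changing one must raise OperationError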
def change_shard_key():
log.machine = "127.0.0.1"
self.assertRaises(OperationError, change_shard_key)
def test_shard_key_primary(self):
class LogEntry(Document):
machine = StringField(primary_key=True)
log = StringField()
meta = {
'shard_key': ('machine',)
}
LogEntry.drop_collection()
log = LogEntry()
log.machine = "Localhost"
log.save()
log.log = "Saving"
log.save()
def change_shard_key():
log.machine = "127.0.0.1"
self.assertRaises(OperationError, change_shard_key)
def test_kwargs_simple(self):
class Embedded(EmbeddedDocument):
name = StringField()
class Doc(Document):
doc_name = StringField()
doc = EmbeddedDocumentField(Embedded)
classic_doc = Doc(doc_name="my doc", doc=Embedded(name="embedded doc"))
dict_doc = Doc(**{"doc_name": "my doc",
"doc": {"name": "embedded doc"}})
self.assertEqual(classic_doc, dict_doc)
self.assertEqual(classic_doc._data, dict_doc._data)
def test_kwargs_complex(self):
class Embedded(EmbeddedDocument):
name = StringField()
class Doc(Document):
doc_name = StringField()
docs = ListField(EmbeddedDocumentField(Embedded))
classic_doc = Doc(doc_name="my doc", docs=[
Embedded(name="embedded doc1"),
Embedded(name="embedded doc2")])
dict_doc = Doc(**{"doc_name": "my doc",
"docs": [{"name": "embedded doc1"},
{"name": "embedded doc2"}]})
self.assertEqual(classic_doc, dict_doc)
self.assertEqual(classic_doc._data, dict_doc._data)
def test_positional_creation(self):
"""Ensure that document may be created using positional arguments.
"""
person = self.Person("Test User", 42)
self.assertEqual(person.name, "Test User")
self.assertEqual(person.age, 42)
def test_mixed_creation(self):
"""Ensure that document may be created using mixed arguments.
"""
person = self.Person("Test User", age=42)
self.assertEqual(person.name, "Test User")
self.assertEqual(person.age, 42)
def test_mixed_creation_dynamic(self):
"""Ensure that document may be created using mixed arguments.
"""
class Person(DynamicDocument):
name = StringField()
person = Person("Test User", age=42)
self.assertEqual(person.name, "Test User")
self.assertEqual(person.age, 42)
def test_bad_mixed_creation(self):
"""Ensure that document gives correct error when duplicating arguments
"""
def construct_bad_instance():
return self.Person("Test User", 42, name="Bad User")
self.assertRaises(TypeError, construct_bad_instance)
def test_data_contains_id_field(self):
"""Ensure that asking for _data returns 'id'
"""
class Person(Document):
name = StringField()
Person.drop_collection()
Person(name="Harry Potter").save()
person = Person.objects.first()
self.assertTrue('id' in person._data.keys())
self.assertEqual(person._data.get('id'), person.id)
def test_complex_nesting_document_and_embedded_document(self):
class Macro(EmbeddedDocument):
value = DynamicField(default="UNDEFINED")
class Parameter(EmbeddedDocument):
macros = MapField(EmbeddedDocumentField(Macro))
def expand(self):
self.macros["test"] = Macro()
class Node(Document):
parameters = MapField(EmbeddedDocumentField(Parameter))
def expand(self):
self.flattened_parameter = {}
for parameter_name, parameter in self.parameters.iteritems():
parameter.expand()
class System(Document):
name = StringField(required=True)
nodes = MapField(ReferenceField(Node, dbref=False))
def save(self, *args, **kwargs):
for node_name, node in self.nodes.iteritems():
node.expand()
node.save(*args, **kwargs)
super(System, self).save(*args, **kwargs)
System.drop_collection()
Node.drop_collection()
system = System(name="system")
system.nodes["node"] = Node()
system.save()
system.nodes["node"].parameters["param"] = Parameter()
system.save()
system = System.objects.first()
self.assertEqual("UNDEFINED", system.nodes["node"].parameters["param"].macros["test"].value)
def test_embedded_document_equality(self):
class Test(Document):
field = StringField(required=True)
class Embedded(EmbeddedDocument):
ref = ReferenceField(Test)
Test.drop_collection()
test = Test(field='123').save() # has id
e = Embedded(ref=test)
f1 = Embedded._from_son(e.to_mongo())
f2 = Embedded._from_son(e.to_mongo())
self.assertEqual(f1, f2)
f1.ref # Dereferences lazily
self.assertEqual(f1, f2)
if __name__ == '__main__':
unittest.main()
|
Multiposting/mongoengine
|
tests/document/instance.py
|
Python
|
mit
| 78,838
|
import numpy as np
from mathpy.numtheory import integers
def test_numbertests():
a = 13
b = 20
np.testing.assert_equal(integers.iscomposite(a), False)
np.testing.assert_equal(integers.iscomposite(b), True)
np.testing.assert_equal(integers.isodd(a), True)
np.testing.assert_equal(integers.isodd(b), False)
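    # Non-integral values are neither odd nor even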
np.testing.assert_equal(integers.isodd(13.4), False)
np.testing.assert_equal(integers.iseven(a), False)
np.testing.assert_equal(integers.iseven(b), True)
np.testing.assert_equal(integers.iseven(12.4), False)
np.testing.assert_equal(integers.issquare(25), True)
np.testing.assert_equal(integers.issquare(30), False)
|
aschleg/mathpy
|
mathpy/numtheory/tests/test_integers.py
|
Python
|
mit
| 735
|
# Copyright (c) 2016-2017, troposphere project
# All rights reserved.
#
# See LICENSE file for full license.
from . import AWSObject, AWSProperty
from .validators import boolean, integer, positive_integer
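# Each validator below restricts an enum-like string property to the values
# accepted by CloudFormation, raising ValueError otherwise.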
def processor_type_validator(x):
valid_types = ["Lambda"]
if x not in valid_types:
raise ValueError("Type must be one of: %s" %
", ".join(valid_types))
return x
def delivery_stream_type_validator(x):
valid_types = ["DirectPut", "KinesisStreamAsSource"]
if x not in valid_types:
raise ValueError("DeliveryStreamType must be one of: %s" %
", ".join(valid_types))
return x
def index_rotation_period_validator(x):
valid_types = ["NoRotation", "OneHour", "OneDay", "OneWeek", "OneMonth"]
if x not in valid_types:
raise ValueError("IndexRotationPeriod must be one of: %s" %
", ".join(valid_types))
return x
def s3_backup_mode_elastic_search_validator(x):
valid_types = ["FailedDocumentsOnly", "AllDocuments"]
if x not in valid_types:
raise ValueError("S3BackupMode must be one of: %s" %
", ".join(valid_types))
return x
def s3_backup_mode_extended_s3_validator(x):
valid_types = ["Disabled", "Enabled"]
if x not in valid_types:
raise ValueError("S3BackupMode must be one of: %s" %
", ".join(valid_types))
return x
class BufferingHints(AWSProperty):
props = {
'IntervalInSeconds': (positive_integer, True),
'SizeInMBs': (positive_integer, True)
}
class CloudWatchLoggingOptions(AWSProperty):
props = {
'Enabled': (boolean, False),
'LogGroupName': (basestring, False), # Conditional
'LogStreamName': (basestring, False), # Conditional
}
class RetryOptions(AWSProperty):
props = {
'DurationInSeconds': (positive_integer, True),
}
class KMSEncryptionConfig(AWSProperty):
props = {
'AWSKMSKeyARN': (basestring, True),
}
class EncryptionConfiguration(AWSProperty):
props = {
'KMSEncryptionConfig': (KMSEncryptionConfig, False),
'NoEncryptionConfig': (basestring, False),
}
class S3Configuration(AWSProperty):
props = {
'BucketARN': (basestring, True),
'BufferingHints': (BufferingHints, True),
'CloudWatchLoggingOptions': (CloudWatchLoggingOptions, False),
'CompressionFormat': (basestring, True),
'EncryptionConfiguration': (EncryptionConfiguration, False),
'Prefix': (basestring, False),
'RoleARN': (basestring, True)
}
class CopyCommand(AWSProperty):
props = {
'CopyOptions': (basestring, False),
'DataTableColumns': (basestring, False),
'DataTableName': (basestring, True),
}
class ProcessorParameter(AWSProperty):
props = {
'ParameterName': (basestring, True),
'ParameterValue': (basestring, True),
}
class Processor(AWSProperty):
props = {
'Parameters': ([ProcessorParameter], True),
'Type': (processor_type_validator, True),
}
class ProcessingConfiguration(AWSProperty):
props = {
'Enabled': (boolean, True),
'Processors': ([Processor], True),
}
class ElasticsearchDestinationConfiguration(AWSProperty):
props = {
'BufferingHints': (BufferingHints, True),
'CloudWatchLoggingOptions': (CloudWatchLoggingOptions, False),
'DomainARN': (basestring, True),
'IndexName': (basestring, True),
'IndexRotationPeriod': (index_rotation_period_validator, True),
'ProcessingConfiguration': (ProcessingConfiguration, False),
'RetryOptions': (RetryOptions, False),
'RoleARN': (basestring, True),
'S3BackupMode': (s3_backup_mode_elastic_search_validator, True),
'S3Configuration': (S3Configuration, False),
'TypeName': (basestring, True),
}
class RedshiftDestinationConfiguration(AWSProperty):
props = {
'CloudWatchLoggingOptions': (CloudWatchLoggingOptions, False),
'ClusterJDBCURL': (basestring, True),
'CopyCommand': (CopyCommand, True),
'Password': (basestring, True),
'ProcessingConfiguration': (ProcessingConfiguration, False),
'RoleARN': (basestring, True),
'S3Configuration': (S3Configuration, True),
'Username': (basestring, True),
}
class S3DestinationConfiguration(AWSProperty):
props = {
'BucketARN': (basestring, True),
'BufferingHints': (BufferingHints, True),
'CloudWatchLoggingOptions': (CloudWatchLoggingOptions, False),
'CompressionFormat': (basestring, True),
'EncryptionConfiguration': (EncryptionConfiguration, False),
'ErrorOutputPrefix': (basestring, False),
'Prefix': (basestring, False),
'RoleARN': (basestring, True),
}
class HiveJsonSerDe(AWSProperty):
props = {
'TimestampFormats': ([basestring], False),
}
class OpenXJsonSerDe(AWSProperty):
props = {
'CaseInsensitive': (boolean, False),
'ColumnToJsonKeyMappings': (dict, False),
'ConvertDotsInJsonKeysToUnderscores': (boolean, False),
}
class Deserializer(AWSProperty):
props = {
'HiveJsonSerDe': (HiveJsonSerDe, False),
'OpenXJsonSerDe': (OpenXJsonSerDe, False),
}
class InputFormatConfiguration(AWSProperty):
props = {
'Deserializer': (Deserializer, True),
}
class OrcSerDe(AWSProperty):
props = {
'BlockSizeBytes': (integer, False),
'BloomFilterColumns': ([basestring], False),
'BloomFilterFalsePositiveProbability': (float, False),
'Compression': (basestring, False),
'DictionaryKeyThreshold': (float, False),
'EnablePadding': (boolean, False),
'FormatVersion': (basestring, False),
'PaddingTolerance': (float, False),
'RowIndexStride': (integer, False),
'StripeSizeBytes': (integer, False),
}
class ParquetSerDe(AWSProperty):
props = {
'BlockSizeBytes': (integer, False),
'Compression': (basestring, False),
'EnableDictionaryCompression': (boolean, False),
'MaxPaddingBytes': (integer, False),
'PageSizeBytes': (integer, False),
'WriterVersion': (basestring, False),
}
class Serializer(AWSProperty):
props = {
'OrcSerDe': (OrcSerDe, False),
'ParquetSerDe': (ParquetSerDe, False),
}
class OutputFormatConfiguration(AWSProperty):
props = {
'Serializer': (Serializer, True),
}
class SchemaConfiguration(AWSProperty):
props = {
'CatalogId': (basestring, True),
'DatabaseName': (basestring, True),
'Region': (basestring, True),
'RoleARN': (basestring, True),
'TableName': (basestring, True),
'VersionId': (basestring, True),
}
class DataFormatConversionConfiguration(AWSProperty):
props = {
'Enabled': (boolean, True),
'InputFormatConfiguration': (InputFormatConfiguration, True),
'OutputFormatConfiguration': (OutputFormatConfiguration, True),
'SchemaConfiguration': (SchemaConfiguration, True),
}
class ExtendedS3DestinationConfiguration(AWSProperty):
props = {
'BucketARN': (basestring, True),
'BufferingHints': (BufferingHints, True),
'CloudWatchLoggingOptions': (CloudWatchLoggingOptions, False),
'CompressionFormat': (basestring, True),
'DataFormatConversionConfiguration':
(DataFormatConversionConfiguration, False),
'EncryptionConfiguration': (EncryptionConfiguration, False),
'ErrorOutputPrefix': (basestring, False),
'Prefix': (basestring, False),
'ProcessingConfiguration': (ProcessingConfiguration, False),
'RoleARN': (basestring, True),
'S3BackupConfiguration': (S3DestinationConfiguration, False),
'S3BackupMode': (s3_backup_mode_extended_s3_validator, False),
}
class KinesisStreamSourceConfiguration(AWSProperty):
props = {
'KinesisStreamARN': (basestring, True),
'RoleARN': (basestring, True)
}
class SplunkRetryOptions(AWSProperty):
props = {
'DurationInSeconds': (positive_integer, True),
}
class SplunkDestinationConfiguration(AWSProperty):
props = {
'CloudWatchLoggingOptions': (CloudWatchLoggingOptions, False),
'HECAcknowledgmentTimeoutInSeconds': (positive_integer, False),
'HECEndpoint': (basestring, True),
'HECEndpointType': (basestring, True),
'HECToken': (basestring, True),
'ProcessingConfiguration': (ProcessingConfiguration, False),
'RetryOptions': (SplunkRetryOptions, False),
'S3BackupMode': (basestring, False),
'S3Configuration': (S3DestinationConfiguration, True),
}
class DeliveryStream(AWSObject):
resource_type = "AWS::KinesisFirehose::DeliveryStream"
props = {
'DeliveryStreamName': (basestring, False),
'DeliveryStreamType': (delivery_stream_type_validator, False),
'ElasticsearchDestinationConfiguration': (ElasticsearchDestinationConfiguration, False), # noqa
'ExtendedS3DestinationConfiguration': (ExtendedS3DestinationConfiguration, False), # noqa
'KinesisStreamSourceConfiguration': (KinesisStreamSourceConfiguration, False), # noqa
'RedshiftDestinationConfiguration': (RedshiftDestinationConfiguration, False), # noqa
'S3DestinationConfiguration': (S3DestinationConfiguration, False),
'SplunkDestinationConfiguration':
(SplunkDestinationConfiguration, False),
}
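
# Minimal usage sketch (not part of the original module; the ARNs and names
# below are placeholders):
#
#     stream = DeliveryStream(
#         'ExampleDeliveryStream',
#         DeliveryStreamType='DirectPut',
#         S3DestinationConfiguration=S3DestinationConfiguration(
#             BucketARN='arn:aws:s3:::example-bucket',
#             RoleARN='arn:aws:iam::123456789012:role/example-firehose-role',
#             BufferingHints=BufferingHints(IntervalInSeconds=300, SizeInMBs=5),
#             CompressionFormat='UNCOMPRESSED',
#         ),
#     )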
|
ikben/troposphere
|
troposphere/firehose.py
|
Python
|
bsd-2-clause
| 9,684
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from os import system, environ
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
with open('README.rst', 'r') as f:
readme = f.read()
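# system() returns the shell's exit status: grep exits 0 only when curl is
# built against NSS, so a non-zero status selects the OpenSSL build of pycurl.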
if system('curl --version | grep NSS 2>/dev/null') != 0:
environ['PYCURL_SSL_LIBRARY'] = 'openssl'
system('pip install --compile --install-option="--with-openssl" pycurl')
else:
environ['PYCURL_SSL_LIBRARY'] = 'nss'
system('pip install --compile --install-option="--with-nss" pycurl')
setup(
name='automation_tools',
version='0.1.0',
description='Tools to help automating testing Foreman with Robottelo.',
long_description=readme,
author=u'Elyézer Rezende',
author_email='erezende@redhat.com',
url='https://github.com/SatelliteQE/automation-tools',
packages=['automation_tools', 'automation_tools/satellite6'],
package_data={'': ['LICENSE']},
package_dir={'automation_tools': 'automation_tools'},
include_package_data=True,
install_requires=[
'beautifulsoup4',
'Fabric3',
'lxml',
'pycurl',
'pytest',
'python-bugzilla==1.2.2',
'requests',
'robozilla',
'six',
'unittest2',
],
license='GNU GPL v3.0',
classifiers=(
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'Natural Language :: English',
'License :: OSI Approved :: GNU General Public License v3 (GPLv3)',
'Programming Language :: Python',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3.6',
),
)
|
lpramuk/automation-tools
|
setup.py
|
Python
|
gpl-3.0
| 1,663
|
from django.conf.urls import url
from oioioi.testrun import views
app_name = 'testrun'
contest_patterns = [
url(r'^testrun_submit/$', views.testrun_submit_view, name='testrun_submit'),
url(
r'^s/(?P<submission_id>\d+)/tr/input/$',
views.show_input_file_view,
name='get_testrun_input',
),
url(
r'^s/(?P<submission_id>\d+)/tr/output/$',
views.show_output_file_view,
name='get_testrun_output',
),
url(
r'^s/(?P<submission_id>\d+)/tr/output/(?P<testrun_report_id>\d+)/$',
views.show_output_file_view,
name='get_specific_testrun_output',
),
url(
r'^s/(?P<submission_id>\d+)/tr/input/download/$',
views.download_input_file_view,
name='download_testrun_input',
),
url(
r'^s/(?P<submission_id>\d+)/tr/output/download/$',
views.download_output_file_view,
name='download_testrun_output',
),
url(
r'^s/(?P<submission_id>\d+)/tr/output/'
r'(?P<testrun_report_id>\d+)/download/$',
views.download_output_file_view,
name='download_specific_testrun_output',
),
]
|
sio2project/oioioi
|
oioioi/testrun/urls.py
|
Python
|
gpl-3.0
| 1,154
|
#!/usr/bin/env python
import logging
import sys
import time
import rtmidi
from rtmidi.midiutil import open_midiport
log = logging.getLogger('test_midiin_callback')
logging.basicConfig(level=logging.DEBUG)
class MidiInputHandler(object):
def __init__(self, port):
self.port = port
self._wallclock = time.time()
def __call__(self, event, data=None):
message, deltatime = event
self._wallclock += deltatime
        # print to STDOUT
        if message == play:
            print("play")
        elif message == rec:
            print("rec")
        # print("@%0.6f %r" % (deltatime, message))
play = [148, 36, 100]
rec = [148, 37, 100]
port = sys.argv[1] if len(sys.argv) > 1 else None
try:
midiin, port_name = open_midiport(port, use_virtual = True)
except (EOFError, KeyboardInterrupt):
sys.exit()
print("Attaching MIDI input callback handler.")
midiin.set_callback(MidiInputHandler(port_name))
print("Entering main loop. Press Control-C to exit.")
try:
# just wait for keyboard interrupt in main thread
while True:
time.sleep(1)
except KeyboardInterrupt:
print('')
finally:
print("Exit.")
midiin.close_port()
del midiin
|
schef/midiplay
|
source/poc/midiin2stdout.py
|
Python
|
gpl-3.0
| 1,224
|
from __future__ import unicode_literals
from django.db import models
from django.utils.translation import ugettext_lazy as _
IMPORTANT_FIELD_GUESSES = ['id', 'pk', 'name', 'last', 'first', 'full_name', 'summary', 'description', 'user', 'person']
def representation(model, field_names=[], max_fields=None):
"""Unicode representation of Django model instance (object/record/row)"""
representation.max_fields = max_fields if max_fields is not None else representation.max_fields
if not field_names:
field_names = getattr(model, 'IMPORTANT_FIELDS', None)
if field_names is None:
field_names = []
# model_fields = set([f.name for f in model._meta.fields])
for f in model._meta.fields:
field_names += [f.name] if f.name in IMPORTANT_FIELD_GUESSES else []
retval = model.__class__.__name__ + u'('
retval += ', '.join("{}".format(repr(getattr(model, s, '') or ''))
for s in field_names[:min(len(field_names), representation.max_fields)])
return retval + u')'
representation.max_fields = 5
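# A small sketch of what representation() produces (hypothetical model):
# given an instance `person` of a Person model with fields id=1 and
# name='Ada' and no IMPORTANT_FIELDS attribute, both field names match
# IMPORTANT_FIELD_GUESSES, so:
#
#     representation(person)  # -> "Person(1, 'Ada')"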
def name_similarity():
"""Compute the similarity (inverse distance) matrix between committe names"""
pass
class LongCharField(models.CharField):
"An unlimited-length CharField to satisfy by Django and postgreSQL varchar."
description = _("Unlimited-length string")
def __init__(self, *args, **kwargs):
kwargs['max_length'] = int(1e9) # Satisfy management validation.
super(models.CharField, self).__init__(*args, **kwargs)
# Don't add max-length validator like CharField does.
def get_internal_type(self):
# This has no function, since this value is used as a lookup in
# db_type(). Put something that isn't known by django so it
# raises an error if it is ever used.
return 'LongCharField'
def db_type(self, connection):
# *** This is probably only compatible with Postgres.
# 'varchar' with no max length is equivalent to 'text' in Postgres,
# but put 'varchar' so we can tell LongCharFields from TextFields
# when we're looking at the db.
return 'varchar'
def formfield(self, **kwargs):
# Don't pass max_length to form field like CharField does.
return super(models.CharField, self).formfield(**kwargs)
models.LongCharField = LongCharField
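# Usage sketch (hypothetical model); after the assignment above the field
# is reachable as models.LongCharField, and on PostgreSQL it is created
# as an unbounded varchar column:
#
#     class Committee(models.Model):
#         name = models.LongCharField()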
|
totalgood/twote
|
twote/model_utils.py
|
Python
|
mit
| 2,398
|
from setuptools import setup, find_packages
from setuptools.command.test import test as TestCommand
import sys
class PyTest(TestCommand):
def finalize_options(self):
TestCommand.finalize_options(self)
self.test_args = []
self.test_suite = True
def run_tests(self):
        # import here, because outside the eggs aren't loaded
import pytest
errno = pytest.main(self.test_args)
sys.exit(errno)
setup(
name='skeleton_for_pytest',
version='0.0.1',
url='https://github.com/tddbc/python_pytest.git',
author='TDD BaseCamp',
author_email='tddbc@googlegroups.com',
description='The skeleton of py.test for python users',
license='MIT',
packages=find_packages(exclude=['tests']),
tests_require=['pytest'],
cmdclass={'test': PyTest},
classifiers=[
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Environment :: Console',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: Implementation :: PyPy',
],
)
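# With cmdclass={'test': PyTest} registered above, the test suite runs via:
#
#     python setup.py test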
|
mokkeee/python_tddbc4th
|
setup.py
|
Python
|
mit
| 1,323
|
#!/usr/bin/python
__author__ = 'Alexandre Menai'
import unittest
from bin.math_utilities import *
class TestPolygon(unittest.TestCase):
    def setUp(self):
        self.mysquare = Polygon((0, 0), (0, 1), (1, 1), (1, 0))
        self.point = (0.25, 0.25)
    def test_init(self):
        self.assertEqual(self.mysquare.points, ((0, 0), (0, 1), (1, 1), (1, 0)))
    def test_point_inside(self):
        # (0.25, 0.25) lies inside the unit square, so this should be truthy.
        self.assertTrue(self.mysquare.point_inside_polygon(self.point[0], self.point[1]))
if __name__ == '__main__':
unittest.main()
|
amenai1979/pylot
|
test/test_math_utilities.py
|
Python
|
gpl-2.0
| 495
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, Timothy Vandenbrande <timothy.vandenbrande@gmail.com>
# Copyright: (c) 2017, Artem Zinenko <zinenkoartem@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: win_firewall_rule
version_added: "2.0"
short_description: Windows firewall automation
description:
- Allows you to create/remove/update firewall rules.
options:
enabled:
description:
- Whether this firewall rule is enabled or disabled.
- Defaults to C(true) when creating a new rule.
type: bool
aliases: [ enable ]
state:
description:
- Should this rule be added or removed.
type: str
choices: [ absent, present ]
default: present
name:
description:
- The rule's display name.
type: str
required: yes
group:
description:
- The group name for the rule.
version_added: '2.9'
type: str
direction:
description:
- Whether this rule is for inbound or outbound traffic.
- Defaults to C(in) when creating a new rule.
type: str
choices: [ in, out ]
action:
description:
- What to do with the items this rule is for.
- Defaults to C(allow) when creating a new rule.
type: str
choices: [ allow, block ]
description:
description:
- Description for the firewall rule.
type: str
localip:
description:
- The local ip address this rule applies to.
- Set to C(any) to apply to all local ip addresses.
- Defaults to C(any) when creating a new rule.
type: str
remoteip:
description:
- The remote ip address/range this rule applies to.
- Set to C(any) to apply to all remote ip addresses.
- Defaults to C(any) when creating a new rule.
type: str
localport:
description:
- The local port this rule applies to.
- Set to C(any) to apply to all local ports.
- Defaults to C(any) when creating a new rule.
    - Must have I(protocol) set.
type: str
remoteport:
description:
- The remote port this rule applies to.
- Set to C(any) to apply to all remote ports.
- Defaults to C(any) when creating a new rule.
    - Must have I(protocol) set.
type: str
program:
description:
- The program this rule applies to.
- Set to C(any) to apply to all programs.
- Defaults to C(any) when creating a new rule.
type: str
service:
description:
- The service this rule applies to.
- Set to C(any) to apply to all services.
- Defaults to C(any) when creating a new rule.
type: str
protocol:
description:
- The protocol this rule applies to.
    - Set to C(any) to apply to all protocols.
- Defaults to C(any) when creating a new rule.
type: str
profiles:
description:
- The profile this rule applies to.
- Defaults to C(domain,private,public) when creating a new rule.
type: list
aliases: [ profile ]
icmp_type_code:
description:
- The ICMP types and codes for the rule.
- This is only valid when I(protocol) is C(icmpv4) or C(icmpv6).
- Each entry follows the format C(type:code) where C(type) is the type
number and C(code) is the code number for that type or C(*) for all
codes.
- Set the value to just C(*) to apply the rule for all ICMP type codes.
- See U(https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml)
for a list of ICMP types and the codes that apply to them.
type: list
version_added: '2.10'
seealso:
- module: win_firewall
author:
- Artem Zinenko (@ar7z1)
- Timothy Vandenbrande (@TimothyVandenbrande)
'''
EXAMPLES = r'''
- name: Firewall rule to allow SMTP on TCP port 25
win_firewall_rule:
name: SMTP
localport: 25
action: allow
direction: in
protocol: tcp
state: present
enabled: yes
- name: Firewall rule to allow RDP on TCP port 3389
win_firewall_rule:
name: Remote Desktop
localport: 3389
action: allow
direction: in
protocol: tcp
profiles: private
state: present
enabled: yes
- name: Firewall rule to be created for application group
win_firewall_rule:
name: SMTP
group: application
localport: 25
action: allow
direction: in
protocol: tcp
state: present
enabled: yes
- name: Firewall rule to allow port range
win_firewall_rule:
name: Sample port range
localport: 5000-5010
action: allow
direction: in
protocol: tcp
state: present
enabled: yes
- name: Firewall rule to allow ICMP v4 echo (ping)
win_firewall_rule:
name: ICMP Allow incoming V4 echo request
enabled: yes
state: present
profiles: private
action: allow
direction: in
protocol: icmpv4
icmp_type_code:
- '8:*'
- name: Firewall rule to allow ICMP v4 on all type codes
win_firewall_rule:
name: ICMP Allow incoming V4 echo request
enabled: yes
state: present
profiles: private
action: allow
direction: in
protocol: icmpv4
icmp_type_code: '*'
'''
|
roadmapper/ansible
|
lib/ansible/modules/windows/win_firewall_rule.py
|
Python
|
gpl-3.0
| 5,309
|
import pytest
from mitmproxy.addons import intercept
from mitmproxy import exceptions
from mitmproxy.proxy import layers
from mitmproxy.test import taddons
from mitmproxy.test import tflow
def test_simple():
r = intercept.Intercept()
with taddons.context(r) as tctx:
assert not r.filt
tctx.configure(r, intercept="~q")
assert r.filt
assert tctx.options.intercept_active
with pytest.raises(exceptions.OptionsError):
tctx.configure(r, intercept="~~")
tctx.configure(r, intercept=None)
assert not r.filt
assert not tctx.options.intercept_active
tctx.configure(r, intercept="~s")
f = tflow.tflow(resp=True)
tctx.cycle(r, f)
assert f.intercepted
f = tflow.tflow(resp=False)
tctx.cycle(r, f)
assert not f.intercepted
f = tflow.tflow(resp=True)
r.response(f)
assert f.intercepted
tctx.configure(r, intercept_active=False)
f = tflow.tflow(resp=True)
tctx.cycle(r, f)
assert not f.intercepted
tctx.configure(r, intercept_active=True)
f = tflow.tflow(resp=True)
tctx.cycle(r, f)
assert f.intercepted
def test_tcp():
r = intercept.Intercept()
with taddons.context(r) as tctx:
tctx.configure(r, intercept="~tcp")
f = tflow.ttcpflow()
tctx.cycle(r, f)
assert f.intercepted
tctx.configure(r, intercept_active=False)
f = tflow.ttcpflow()
tctx.cycle(r, f)
assert not f.intercepted
def test_already_taken():
r = intercept.Intercept()
with taddons.context(r) as tctx:
tctx.configure(r, intercept="~q")
f = tflow.tflow()
tctx.invoke(r, layers.http.HttpRequestHook(f))
assert f.intercepted
f = tflow.tflow()
f.reply.take()
tctx.invoke(r, layers.http.HttpRequestHook(f))
assert not f.intercepted
|
Kriechi/mitmproxy
|
test/mitmproxy/addons/test_intercept.py
|
Python
|
mit
| 1,963
|
#!/usr/bin/env python
# Copyright 2016 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
If should_use_hermetic_xcode.py emits "1", and the current toolchain is out of
date:
* Downloads the hermetic mac toolchain
* Requires gsutil to be configured.
* Accepts the license.
* If xcode-select and xcodebuild are not passwordless in sudoers, requires
user interaction.
The toolchain version can be overridden by setting IOS_TOOLCHAIN_REVISION or
MAC_TOOLCHAIN_REVISION with the full revision, e.g. 9A235-1.
"""
from distutils.version import LooseVersion
import os
import platform
import plistlib
import shutil
import subprocess
import sys
import tarfile
import time
import tempfile
import urllib2
# This can be changed after running /build/package_mac_toolchain.py.
MAC_TOOLCHAIN_VERSION = '8E2002'
MAC_TOOLCHAIN_SUB_REVISION = 3
MAC_TOOLCHAIN_VERSION = '%s-%s' % (MAC_TOOLCHAIN_VERSION,
MAC_TOOLCHAIN_SUB_REVISION)
# The toolchain will not be downloaded if the minimum OS version is not met.
# 16 is the major version number for macOS 10.12.
MAC_MINIMUM_OS_VERSION = 16
IOS_TOOLCHAIN_VERSION = '9A235'
IOS_TOOLCHAIN_SUB_REVISION = 1
IOS_TOOLCHAIN_VERSION = '%s-%s' % (IOS_TOOLCHAIN_VERSION,
IOS_TOOLCHAIN_SUB_REVISION)
# Absolute path to src/ directory.
REPO_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Absolute path to a file with gclient solutions.
GCLIENT_CONFIG = os.path.join(os.path.dirname(REPO_ROOT), '.gclient')
BASE_DIR = os.path.abspath(os.path.dirname(__file__))
TOOLCHAIN_BUILD_DIR = os.path.join(BASE_DIR, '%s_files', 'Xcode.app')
STAMP_FILE = os.path.join(BASE_DIR, '%s_files', 'toolchain_build_revision')
TOOLCHAIN_URL = 'gs://chrome-mac-sdk/'
def PlatformMeetsHermeticXcodeRequirements(target_os):
if target_os == 'ios':
return True
return int(platform.release().split('.')[0]) >= MAC_MINIMUM_OS_VERSION
def GetPlatforms():
target_os = set(['mac'])
try:
env = {}
execfile(GCLIENT_CONFIG, env, env)
target_os |= set(env.get('target_os', target_os))
except:
pass
return target_os
def ReadStampFile(target_os):
"""Return the contents of the stamp file, or '' if it doesn't exist."""
try:
with open(STAMP_FILE % target_os, 'r') as f:
return f.read().rstrip()
except IOError:
return ''
def WriteStampFile(target_os, s):
"""Write s to the stamp file."""
EnsureDirExists(os.path.dirname(STAMP_FILE % target_os))
with open(STAMP_FILE % target_os, 'w') as f:
f.write(s)
f.write('\n')
def EnsureDirExists(path):
if not os.path.exists(path):
os.makedirs(path)
def DownloadAndUnpack(url, output_dir):
"""Decompresses |url| into a cleared |output_dir|."""
temp_name = tempfile.mktemp(prefix='mac_toolchain')
try:
print 'Downloading new toolchain.'
subprocess.check_call(['gsutil.py', 'cp', url, temp_name])
if os.path.exists(output_dir):
print 'Deleting old toolchain.'
shutil.rmtree(output_dir)
EnsureDirExists(output_dir)
print 'Unpacking new toolchain.'
tarfile.open(mode='r:gz', name=temp_name).extractall(path=output_dir)
finally:
if os.path.exists(temp_name):
os.unlink(temp_name)
def CanAccessToolchainBucket():
"""Checks whether the user has access to |TOOLCHAIN_URL|."""
proc = subprocess.Popen(['gsutil.py', 'ls', TOOLCHAIN_URL],
stdout=subprocess.PIPE)
proc.communicate()
return proc.returncode == 0
def LoadPlist(path):
"""Loads Plist at |path| and returns it as a dictionary."""
fd, name = tempfile.mkstemp()
try:
subprocess.check_call(['plutil', '-convert', 'xml1', '-o', name, path])
with os.fdopen(fd, 'r') as f:
return plistlib.readPlist(f)
finally:
os.unlink(name)
def FinalizeUnpack(output_dir, target_os):
"""Use xcodebuild to accept new toolchain license and run first launch
installers if necessary. Don't accept the license if a newer license has
already been accepted. This only works if xcodebuild and xcode-select are
passwordless in sudoers."""
# Check old license
try:
target_license_plist_path = os.path.join(
output_dir, 'Contents','Resources','LicenseInfo.plist')
target_license_plist = LoadPlist(target_license_plist_path)
build_type = target_license_plist['licenseType']
build_version = target_license_plist['licenseID']
accepted_license_plist = LoadPlist(
'/Library/Preferences/com.apple.dt.Xcode.plist')
agreed_to_key = 'IDELast%sLicenseAgreedTo' % build_type
last_license_agreed_to = accepted_license_plist[agreed_to_key]
# Historically all Xcode build numbers have been in the format of AANNNN, so
# a simple string compare works. If Xcode's build numbers change this may
# need a more complex compare.
if build_version <= last_license_agreed_to:
# Don't accept the license of older toolchain builds, this will break the
# license of newer builds.
return
except (subprocess.CalledProcessError, KeyError):
# If there's never been a license of type |build_type| accepted,
# |target_license_plist_path| or |agreed_to_key| may not exist.
pass
print "Accepting license."
target_version_plist_path = os.path.join(
output_dir, 'Contents','version.plist')
target_version_plist = LoadPlist(target_version_plist_path)
short_version_string = target_version_plist['CFBundleShortVersionString']
old_path = subprocess.Popen(['/usr/bin/xcode-select', '-p'],
stdout=subprocess.PIPE).communicate()[0].strip()
try:
build_dir = os.path.join(output_dir, 'Contents/Developer')
subprocess.check_call(['sudo', '/usr/bin/xcode-select', '-s', build_dir])
subprocess.check_call(['sudo', '/usr/bin/xcodebuild', '-license', 'accept'])
if target_os == 'ios' and \
LooseVersion(short_version_string) >= LooseVersion("9.0"):
print "Installing packages."
subprocess.check_call(['sudo', '/usr/bin/xcodebuild', '-runFirstLaunch'])
finally:
subprocess.check_call(['sudo', '/usr/bin/xcode-select', '-s', old_path])
def _UseHermeticToolchain(target_os):
current_dir = os.path.dirname(os.path.realpath(__file__))
script_path = os.path.join(current_dir, 'mac/should_use_hermetic_xcode.py')
proc = subprocess.Popen([script_path, target_os], stdout=subprocess.PIPE)
return '1' in proc.stdout.readline()
def RequestGsAuthentication():
"""Requests that the user authenticate to be able to access gs://.
"""
print 'Access to ' + TOOLCHAIN_URL + ' not configured.'
print '-----------------------------------------------------------------'
print
print 'You appear to be a Googler.'
print
print 'I\'m sorry for the hassle, but you need to do a one-time manual'
print 'authentication. Please run:'
print
print ' download_from_google_storage --config'
print
print 'and follow the instructions.'
print
print 'NOTE 1: Use your google.com credentials, not chromium.org.'
print 'NOTE 2: Enter 0 when asked for a "project-id".'
print
print '-----------------------------------------------------------------'
print
sys.stdout.flush()
sys.exit(1)
def DownloadHermeticBuild(target_os, toolchain_version, toolchain_filename):
if not _UseHermeticToolchain(target_os):
print 'Using local toolchain for %s.' % target_os
return 0
toolchain_output_path = TOOLCHAIN_BUILD_DIR % target_os
if ReadStampFile(target_os) == toolchain_version:
print 'Toolchain (%s) is already up to date.' % toolchain_version
FinalizeUnpack(toolchain_output_path, target_os)
return 0
if not CanAccessToolchainBucket():
RequestGsAuthentication()
return 1
# Reset the stamp file in case the build is unsuccessful.
WriteStampFile(target_os, '')
toolchain_file = '%s.tgz' % toolchain_version
toolchain_full_url = TOOLCHAIN_URL + toolchain_file
print 'Updating toolchain to %s...' % toolchain_version
try:
toolchain_file = toolchain_filename % toolchain_version
toolchain_full_url = TOOLCHAIN_URL + toolchain_file
DownloadAndUnpack(toolchain_full_url, toolchain_output_path)
FinalizeUnpack(toolchain_output_path, target_os)
print 'Toolchain %s unpacked.' % toolchain_version
WriteStampFile(target_os, toolchain_version)
return 0
except Exception as e:
print 'Failed to download toolchain %s.' % toolchain_file
print 'Exception %s' % e
print 'Exiting.'
return 1
def main():
if sys.platform != 'darwin':
return 0
for target_os in GetPlatforms():
if not PlatformMeetsHermeticXcodeRequirements(target_os):
print 'OS version does not support toolchain.'
continue
if target_os == 'ios':
toolchain_version = os.environ.get('IOS_TOOLCHAIN_REVISION',
IOS_TOOLCHAIN_VERSION)
toolchain_filename = 'ios-toolchain-%s.tgz'
else:
toolchain_version = os.environ.get('MAC_TOOLCHAIN_REVISION',
MAC_TOOLCHAIN_VERSION)
toolchain_filename = 'toolchain-%s.tgz'
return_value = DownloadHermeticBuild(
target_os, toolchain_version, toolchain_filename)
if return_value:
return return_value
return 0
if __name__ == '__main__':
sys.exit(main())
|
chrisdickinson/nojs
|
build/mac_toolchain.py
|
Python
|
bsd-3-clause
| 9,424
|
#!/usr/bin/env python
"""
Copyright (C) 2012 Roman Mohr <roman@fenkhuber.at>
"""
"""static - A stupidly simple WSGI way to serve static (or mixed) content.
(See the docstrings of the various functions and classes.)
Copyright (C) 2006-2009 Luke Arno - http://lukearno.com/
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to:
The Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor,
Boston, MA 02110-1301, USA.
Luke Arno can be found at http://lukearno.com/
"""
import mimetypes
import email.utils as rfc822
import time
import string
import sys
from os import path, stat
from wsgiref import util
from wsgiref.headers import Headers
from wsgiref.simple_server import make_server
from optparse import OptionParser
try:
from pkg_resources import resource_filename, Requirement
except:
pass
try:
import kid
except:
pass
try:
from genshi.template import MarkupTemplate
except:
pass
if sys.version < '3':
import codecs
def u(x):
return codecs.unicode_escape_decode(x)[0]
else:
def u(x):
return x
class MagicError(Exception):
pass
def _encode(string, encoding):
if sys.version_info[0] > 2:
return string.encode(encoding=encoding, errors='strict')
else:
if type(u('')) == type(string):
string = string.encode(encoding)
return string
def _decode(string, encoding):
if sys.version_info[0] > 2:
return string.decode(encoding=encoding, errors='strict')
else:
return string
def _open(filename, encoding):
if sys.version_info[0] > 2:
return open(filename, 'r', encoding=encoding, errors='strict')
else:
return open(filename, 'rb')
class StatusApp:
"""Used by WSGI apps to return some HTTP status."""
def __init__(self, status, message=None, encoding=sys.getdefaultencoding()):
self.status = status
self.encoding = encoding
if message is None:
self.message = status
else:
self.message = message
def __call__(self, environ, start_response, headers=[]):
if self.message:
Headers(headers).add_header('Content-type', 'text/plain')
start_response(self.status, headers)
if environ['REQUEST_METHOD'] == 'HEAD':
return [_encode("", self.econding)]
else:
return [_encode(self.message, self.encoding)]
class Cling(object):
"""A stupidly simple way to serve static content via WSGI.
Serve the file of the same path as PATH_INFO in self.datadir.
Look up the Content-type in self.content_types by extension
or use 'text/plain' if the extension is not found.
Serve up the contents of the file or delegate to self.not_found.
"""
block_size = 16 * 4096
index_file = 'index.html'
not_found = StatusApp('404 Not Found')
not_modified = StatusApp('304 Not Modified', "")
moved_permanently = StatusApp('301 Moved Permanently')
method_not_allowed = StatusApp('405 Method Not Allowed')
def __init__(self, root, **kw):
"""Just set the root and any other attribs passes via **kw."""
self.root = root
self.encoding = sys.getdefaultencoding()
for k, v in kw.items():
setattr(self, k, v)
def __call__(self, environ, start_response):
"""Respond to a request when called in the usual WSGI way."""
if environ['REQUEST_METHOD'] not in ('GET', 'HEAD'):
headers = [('Allow', 'GET, HEAD')]
return self.method_not_allowed(environ, start_response, headers)
path_info = environ.get('PATH_INFO', '')
full_path = self._full_path(path_info)
if not self._is_under_root(full_path):
return self.not_found(environ, start_response)
if path.isdir(full_path):
if full_path[-1] != '/' or full_path == self.root:
location = util.request_uri(environ, include_query=False) + '/'
if environ.get('QUERY_STRING'):
location += '?' + environ.get('QUERY_STRING')
headers = [('Location', location)]
return self.moved_permanently(environ, start_response, headers)
else:
full_path = self._full_path(path_info + self.index_file)
content_type = self._guess_type(full_path)
try:
etag, last_modified = self._conditions(full_path, environ)
headers = [('Date', rfc822.formatdate(time.time())),
('Last-Modified', last_modified),
('ETag', etag)]
if_modified = environ.get('HTTP_IF_MODIFIED_SINCE')
if if_modified and (rfc822.parsedate(if_modified)
>= rfc822.parsedate(last_modified)):
return self.not_modified(environ, start_response, headers)
if_none = environ.get('HTTP_IF_NONE_MATCH')
if if_none and (if_none == '*' or etag in if_none):
return self.not_modified(environ, start_response, headers)
file_like = self._file_like(full_path)
headers.append(('Content-Type', content_type))
start_response("200 OK", headers)
if environ['REQUEST_METHOD'] == 'GET':
return self._body(full_path, environ, file_like)
else:
return ['']
except (IOError, OSError) as e:
print(e)
return self.not_found(environ, start_response)
def _full_path(self, path_info):
"""Return the full path from which to read."""
return self.root + path_info
def _is_under_root(self, full_path):
"""Guard against arbitrary file retrieval."""
if (path.abspath(full_path) + path.sep)\
.startswith(path.abspath(self.root) + path.sep):
return True
else:
return False
def _guess_type(self, full_path):
"""Guess the mime type using the mimetypes module."""
return mimetypes.guess_type(full_path)[0] or 'text/plain'
def _conditions(self, full_path, environ):
"""Return a tuple of etag, last_modified by mtime from stat."""
mtime = stat(full_path).st_mtime
return str(mtime), rfc822.formatdate(mtime)
def _file_like(self, full_path):
"""Return the appropriate file object."""
return open(full_path, 'rb')
def _body(self, full_path, environ, file_like):
"""Return an iterator over the body of the response."""
way_to_send = environ.get('wsgi.file_wrapper', iter_and_close)
return way_to_send(file_like, self.block_size)
def iter_and_close(file_like, block_size):
"""Yield file contents by block then close the file."""
    while 1:
        block = file_like.read(block_size)
        if block:
            yield block
        else:
            # Close the file and end iteration; raising StopIteration
            # inside a generator is deprecated (PEP 479).
            file_like.close()
            return
def cling_wrap(package_name, dir_name, **kw):
"""Return a Cling that serves from the given package and dir_name.
This uses pkg_resources.resource_filename which is not the
recommended way, since it extracts the files.
I think this works fine unless you have some _very_ serious
requirements for static content, in which case you probably
shouldn't be serving it through a WSGI app, IMHO. YMMV.
"""
resource = Requirement.parse(package_name)
return Cling(resource_filename(resource, dir_name), **kw)
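# Usage sketch (hypothetical package and directory names): serve the
# `static` directory bundled with an installed package called `mypkg`:
#
#     app = cling_wrap('mypkg', 'static')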
class Shock(Cling):
"""A stupidly simple way to serve up mixed content.
    Serves static content just like Cling (its superclass)
    except that it processes content with the first matching
magic from self.magics if any apply.
See Cling and classes with "Magic" in their names in this module.
If you are using Shock with the StringMagic class for instance:
shock = Shock('/data', magics=[StringMagic(food='cheese')])
Let's say you have a file called /data/foo.txt.stp containing one line:
"I love to eat $food!"
When you do a GET on /foo.txt you will see this in your browser:
"I love to eat cheese!"
This is really nice if you have a color variable in your css files or
something trivial like that. It seems silly to create or change a
handful of objects for a couple of dynamic bits of text.
"""
magics = ()
def _match_magic(self, full_path):
"""Return the first magic that matches this path or None."""
for magic in self.magics:
if magic.matches(full_path):
return magic
def _full_path(self, path_info):
"""Return the full path from which to read."""
full_path = self.root + path_info
if path.exists(full_path):
return full_path
else:
for magic in self.magics:
if path.exists(magic.new_path(full_path)):
return magic.new_path(full_path)
else:
return full_path
def _guess_type(self, full_path):
"""Guess the mime type magically or using the mimetypes module."""
magic = self._match_magic(full_path)
if magic is not None:
return (mimetypes.guess_type(magic.old_path(full_path))[0]
or 'text/plain')
else:
return mimetypes.guess_type(full_path)[0] or 'text/plain'
def _conditions(self, full_path, environ):
"""Return Etag and Last-Modified values defaults to now for both."""
magic = self._match_magic(full_path)
if magic is not None:
return magic.conditions(full_path, environ)
else:
mtime = stat(full_path).st_mtime
return str(mtime), rfc822.formatdate(mtime)
def _file_like(self, full_path):
"""Return the appropriate file object."""
magic = self._match_magic(full_path)
if magic is not None:
return magic.file_like(full_path, self.encoding)
else:
return open(full_path, 'rb')
def _body(self, full_path, environ, file_like):
"""Return an iterator over the body of the response."""
magic = self._match_magic(full_path)
if magic is not None:
return [_encode(s,self.encoding) for s in magic.body(environ,
file_like)]
else:
way_to_send = environ.get('wsgi.file_wrapper', iter_and_close)
return way_to_send(file_like, self.block_size)
class BaseMagic(object):
"""Base class for magic file handling.
Really a do nothing if you were to use this directly.
    In a straightforward case you would just override .extension and body().
(See StringMagic in this module for a simple example of subclassing.)
In a more complex case you may need to override many or all methods.
"""
extension = ''
def exists(self, full_path):
"""Check that self.new_path(full_path) exists."""
if path.exists(self.new_path(full_path)):
return self.new_path(full_path)
def new_path(self, full_path):
"""Add the self.extension to the path."""
return full_path + self.extension
def old_path(self, full_path):
"""Remove self.extension from path or raise MagicError."""
if self.matches(full_path):
return full_path[:-len(self.extension)]
else:
raise MagicError("Path does not match this magic.")
def matches(self, full_path):
"""Check that path ends with self.extension."""
if full_path.endswith(self.extension):
return full_path
def conditions(self, full_path, environ):
"""Return Etag and Last-Modified values (based on mtime)."""
mtime = int(time.time())
return str(mtime), rfc822.formatdate(mtime)
def file_like(self, full_path, encoding):
"""Return a file object for path."""
return _open(full_path, encoding)
def body(self, environ, file_like):
"""Return an iterator over the body of the response."""
return [file_like.read()]
class StringMagic(BaseMagic):
"""Magic to replace variables in file contents using string.Template.
Using this requires Python2.4.
"""
extension = '.stp'
safe = False
def __init__(self, **variables):
"""Keyword arguments populate self.variables."""
self.variables = variables
def body(self, environ, file_like):
"""Pass environ and self.variables in to template.
        self.variables overrides environ so that surprises in environ don't
cause unexpected output if you are passing a value in explicitly.
"""
variables = environ.copy()
variables.update(self.variables)
template = string.Template(file_like.read())
if self.safe is True:
return [template.safe_substitute(variables)]
else:
return [template.substitute(variables)]
class KidMagic(StringMagic):
"""Like StringMagic only using the Kid templating language.
Using this requires Kid: http://kid.lesscode.org/
"""
extension = '.kid'
def body(self, environ, full_path):
"""Pass environ and **self.variables into the template."""
template = kid.Template(file=full_path,
environ=environ,
**self.variables)
return [template.serialize()]
class GenshiMagic(StringMagic):
"""Like StringMagic only using the Genshi templating language.
Using this requires Genshi
"""
extension = '.genshi'
def body(self, environ, full_path):
"""Pass environ and **self.variables into the template."""
template = MarkupTemplate(full_path.read())
variables = self.variables.copy()
variables["environ"] = environ
return [template.generate(**variables)
.render('html', doctype='html')]
def command():
parser = OptionParser(usage="%prog DIR [HOST][:][PORT]",
version="static 0.3.6")
options, args = parser.parse_args()
if len(args) in (1, 2):
if len(args) == 2:
parts = args[1].split(":")
if len(parts) == 1:
host = parts[0]
port = None
elif len(parts) == 2:
host, port = parts
else:
sys.exit("Invalid host:port specification.")
elif len(args) == 1:
host, port = None, None
if not host:
host = '0.0.0.0'
if not port:
port = 9999
try:
port = int(port)
except:
sys.exit("Invalid host:port specification.")
app = Cling(args[0])
try:
make_server(host, port, app).serve_forever()
except KeyboardInterrupt as ki:
print("Cio, baby!")
except:
sys.exit("Problem initializing server.")
else:
parser.print_help(sys.stderr)
sys.exit(1)
def test():
from wsgiref.validate import validator
magics = (StringMagic(title="String Test"),
KidMagic(title="Kid Test"), GenshiMagic(title="Genshi Test"))
app = Shock('testdata/pub', magics=magics)
try:
make_server('localhost', 9999, validator(app)).serve_forever()
except KeyboardInterrupt as ki:
print("Ciao, baby!")
if __name__ == '__main__':
test()
|
ramcn/demo3
|
venv/lib/python3.4/site-packages/static.py
|
Python
|
mit
| 16,038
|
from datetime import datetime
import tornado.ioloop
import yaml, json, glob
import os
import sys
import time
import argparse
import subprocess
import distutils
import traceback
import functools
import imp
from va_master.handlers.datastore_handler import DatastoreHandler
from va_master.api import login
from va_master.va_master_project import config
import cli_environment
import unittest
consul_conf_path = '/etc/consul.json'
run_sync = tornado.ioloop.IOLoop.instance().run_sync
folder_pwd = os.path.join(os.path.dirname(os.path.realpath(__file__)), '')
def is_cli():
# TODO: Find a way to separate a CLI entrypoint execution versus
# a standard module import
return True
def cli_info(msg):
"""Outputs a CLI information message to the console."""
if not is_cli(): return
sys.stdout.write('%(yellow)s[info]%(nocolor)s %(msg)s\n' % {
'yellow': '\033[93m',
'nocolor': '\033[0m',
'msg': msg
})
def cli_success(msg):
"""Outputs a CLI success message to the console."""
if not is_cli(): return
sys.stdout.write('%(green)s[success]%(nocolor)s %(msg)s\n' % {
'green': '\033[92m',
'nocolor': '\033[0m',
'msg': msg
})
def cli_error(msg):
"""Outputs a CLI error message to the console."""
if not is_cli(): return
sys.stderr.write('%(red)s[error]%(nocolor)s %(msg)s\n' % {
'red': '\033[91m',
'nocolor': '\033[0m',
'msg': msg
})
def generate_store_config(values):
store_config = {
'va_flavours' :
{
'va-small' :
{
'vol_capacity' : 5,
'memory' : 2**20,
'max_memory' : 2**20,
'num_cpus' : 1
},
'debian' :
{
'vol_capacity' : 5,
'memory' : 2**20,
'max_memory' : 2**20,
'num_cpus' : 1
}
}
}
store_config.update(values)
return store_config
def get_values_from_args(args):
# If optional arguments weren't specified, interactively ask.
attrs = [
        ('company-name', 'The name of your company. It will be used in the VPN certificates [VapourApps cloud] '),
        ('domain_name', 'Enter the default domain name for this installation [va.mk]'),
        ('fqdn', 'Enter an IP address or FQDN [master.va.mk]'),
        ('admin-user', 'Enter username for the first admin [admin]'),
        ('admin-pass', 'Enter password for the first admin [admin]'),
        ('vpn-port', 'Enter the OpenVPN port accessible from Internet to this host [8443]'),
]
values = {}
if args.get('skip_args'):
cli_info('Setting up the va_master module without prompting for arguments. If this is the first time you are setting up the environment, you might want to make sure you enter all arguments or run init again without skip_args. ')
for attr in attrs:
name = attr[0].replace('-', '_')
cmdhelp = attr[1]
values[name] = args.get(name)
if (values[name] is None) and not args.get('skip_args'): # The CLI `args` doesn't have it, ask.
values[name] = raw_input('%s: ' % cmdhelp)
values = {k: v for k, v in values.items() if v}
return values
# We used to try to set up Consul through this; at the moment it is done manually.
def handle_configurations(fqdn = None):
result = True
if fqdn:
try:
cli_environment.write_supervisor_conf()
cli_success('Configured Supervisor.')
except:
import traceback
cli_error('Failed configuring Supervisor: ')
traceback.print_exc()
result = False # We failed with step #1
try:
cli_environment.write_consul_conf(fqdn)
cli_success('Configured Consul.')
except:
import traceback
cli_error('Failed configuring Consul: ')
traceback.print_exc()
result = False # We failed with step #2
if not result:
cli_error('Initialization failed because of one or more errors.')
sys.exit(1)
else:
try:
pass
# cli_environment.reload_daemon()
# cli_success('Started daemon.')
except:
import traceback
cli_error('Failed reloading the daemon, check supervisor logs.' + \
'\nYou may try `service supervisor restart` or ' + \
'/var/log/supervisor/supervisord.log')
traceback.print_exc()
sys.exit(1)
def check_datastore_connection(values, store):
attempts, failed = 1, True
cli_info('Waiting for the key value store to come alive...')
while attempts <= cli_environment.DATASTORE_ATTEMPTS:
is_running = run_sync(store.check_connection)
cli_info(' -> attempt #%i...' % attempts)
if is_running:
failed = False
break
else:
time.sleep(cli_environment.DATASTORE_RETRY_TIME)
            attempts += 1
if failed:
cli_error('Store connection timeout after %i attempts.' \
% attempts)
sys.exit(1)
try:
cli_info('Trying to start VPN. ')
cli_environment.write_vpn_pillar(values['domain_name'])
cli_success('VPN is running. ')
except:
cli_error('Failed to start VPN. Error was : ')
import traceback
traceback.print_exc()
return not failed
def create_admin_user(admin_user, admin_pass, datastore_handler):
if admin_user and admin_pass:
create_user = functools.partial(login.create_user,
datastore_handler, admin_user, admin_pass, 'admin')
create_user_run = run_sync(create_user)
else:
cli_info('No username and password; will not create user')
def handle_store_init(cli_config, values, store, datastore_handler):
states_data = run_sync(functools.partial(datastore_handler.import_states_from_states_data))
store_config = generate_store_config(values)
store_config = {x : store_config[x] for x in store_config if x not in ['admin_pass']}
run_sync(functools.partial(datastore_handler.insert_init_vals, store_config))
# run_sync(functools.partial(datastore_handler.create_standalone_provider))
return store_config
def create_ssh_keys(cli_config, store_config):
key_full_path = cli_config.ssh_key_path + cli_config.ssh_key_name
if all ([os.path.isfile(key_full_path + file_type) for file_type in ['.pem', '.pub']]):
return
try:
os.mkdir(cli_config.ssh_key_path)
except Exception as e:
import traceback
        print 'Could not create ssh path; it probably exists. Error was:'
traceback.print_exc()
try:
ssh_cmd = ['ssh-keygen', '-t', 'rsa', '-f', key_full_path, '-P', '']
subprocess.call(ssh_cmd)
subprocess.call(['mv', key_full_path, key_full_path + '.pem'])
except:
print ('Could not generate a key. Probably already exists. ')
import traceback
traceback.print_exc()
def handle_init(args):
"""Handles cli `init` command. Should write proper conf and start daemon."""
values = get_values_from_args(args)
result = True # If `result` is True, all actions completed successfully
# handle_configurations(values.get('fqdn'))
cli_config = config.Config(init_vals = values)
store = cli_config.datastore
datastore_handler = DatastoreHandler(store)
check_datastore_connection(values, store)
create_admin_user(values.get('admin_user'), values.get('admin_pass'), datastore_handler)
store_config = handle_store_init(cli_config, values, store, datastore_handler)
#Generate an ssh-key
create_ssh_keys(cli_config, store_config)
cli_success('Created first account. Setup is finished.')
# cli_config.init_handler(init_vals = values)
def handle_manage(args):
"""Handles cli `manage` command. """
cli_config = config.Config()
store = cli_config.datastore
datastore_handler = DatastoreHandler(store)
states_data = run_sync(functools.partial(datastore_handler.import_states_from_states_data, delete_panels = args['clear_panels']))
def handle_jsbuild(args):
try:
build_path = os.path.join(os.path.dirname(__file__), 'dashboard',
'build.js')
build_path = os.path.abspath(build_path)
subprocess.check_call(['node', build_path])
except (OSError, subprocess.CalledProcessError):
        cli_error('An error occurred during compile invocation. Make sure' + \
            ' NodeJS interpreter is in PATH, with name `node`.')
traceback.print_exc()
else:
cli_success(('Compiled JS using the command `node %s`, into `' + \
'dashboard/static/*`') % build_path)
def handle_add_module(args):
file_path = args['module_path']
file_name = file_path.split('/')[-1]
file_contents = ''
with open(file_path, 'r') as f:
file_contents = f.read()
cli_info('Read file ' + file_path)
try:
new_module = imp.load_source(file_name, file_path)
cli_info('Imported module. Checking for get_paths()')
paths = getattr(new_module, 'get_paths')()
paths_list = {}
for key in ['get', 'post', 'delete', 'put']:
paths_list.update(paths.get(key, {}))
for path in paths_list:
if not callable(paths_list[path]):
raise Exception('Attribute ' + str(paths_list[path]) + ' is not callable. ')
except Exception as e:
cli_error('Error adding module: ' + e.message)
return
#TODO more checks.
cli_success('Module looks fine. Adding to api. ')
va_path = '/'.join((os.path.realpath(__file__).split('/')[:-1]))
api_path = os.path.join(va_path, 'api/')
with open(api_path + file_name, 'w') as f:
f.write(file_contents)
cli_success('Module copied to : ' + api_path + file_name)
def handle_new_user(args):
from va_master.api import login
cli_config = config.Config()
store = cli_config.datastore
datastore_handler = DatastoreHandler(store)
run_sync = tornado.ioloop.IOLoop.instance().run_sync
user_type = 'user' if args.get('ordinary_user') else 'admin'
create_user = functools.partial(login.create_user, datastore_handler, args['username'], args['password'], user_type)
create_user_run = run_sync(create_user)
def handle_test_api(args):
from va_master import tests
from va_master.tests import va_panels_tests, va_providers_tests, va_states_tests, va_users_tests, va_vpn_tests, va_services_tests
tests = args.get('tests')
all_tests = [
va_panels_tests.VAPanelsTests,
va_providers_tests.VAProvidersTests,
va_states_tests.VAStatesTests,
va_users_tests.VAUsersTests,
va_vpn_tests.VAVPNTests,
va_services_tests.VAServicesTests,
]
if tests:
all_tests = [x for x in all_tests for t in tests if t in str(x)]
for t in all_tests:
t = t(va_pass = args.get('password'))
t.do_tests()
def entry():
parser = argparse.ArgumentParser(description='A VapourApps client interface')
subparsers = parser.add_subparsers(help='action')
init_sub = subparsers.add_parser('init', help='Initializes and starts server')
expected_args = [
        ('company-name', 'The name of your company. It will be used in the VPN certificates [VapourApps cloud] '),
        ('domain_name', 'Enter the default domain name for this installation [va.mk]'),
        ('fqdn', 'Enter an IP address or FQDN [master.va.mk]'),
        ('admin-user', 'Enter username for the first admin [admin]'),
        ('admin-pass', 'Enter password for the first admin [admin]'),
        ('vpn-port', 'Enter the OpenVPN port accessible from Internet to this host [8443]'),
]
for arg in expected_args:
init_sub.add_argument('--' + arg[0], help = arg[1])
init_sub.add_argument('--skip-args', help = 'If set, the cli will not prompt you for values for arguments which were not supplied. ', action = 'store_true')
# args.sub will equal 'start' if this subparser is used
init_sub.set_defaults(sub='init')
manage_sub = subparsers.add_parser('manage', help = 'Helps with managing some va_master properties. ')
manage_sub.add_argument('--reset-states', help = 'Reads all appinfo.json files from /srv/salt/* and updates the states. ', action = 'store_true')
manage_sub.add_argument('--clear-panels', help = 'Reads all appinfo.json files from /srv/salt/* and clears all the panels. ', action = 'store_true')
manage_sub.set_defaults(sub = 'manage')
jsbuild_sub = subparsers.add_parser('jsbuild', help='Compiles and' + \
' minifies JavaScript')
jsbuild_sub.set_defaults(sub='jsbuild')
stop_sub = subparsers.add_parser('stop', help='Stops the server')
stop_sub.set_defaults(sub='stop')
add_module = subparsers.add_parser('add_module', help='Adds an api module. Check the documentation on how to write api modules. ')
add_module.add_argument('--module-path', help = 'Path to the python module. ')
add_module.set_defaults(sub='add_module')
new_user = subparsers.add_parser('new_user', help='Creates a new user. ')
new_user.add_argument('--username', help = 'The username for the user. ')
new_user.add_argument('--password', help = 'The password. Will be hashed in the datastore. ')
new_user.add_argument('--ordinary-user', help = 'If set, will create a normal user instead of an admin. ', action = 'store_true', default = False)
new_user.set_defaults(sub='new_user')
test_api = subparsers.add_parser('test-api', help='Runs through (some of) the API endpoints and tests their results. ')
test_api.add_argument('--tests', nargs = '+', help = 'List of test_modules to run, example --tests va_vpn_tests va_provider_tests', default = [])
test_api.add_argument('--password', default = 'admin')
test_api.set_defaults(sub='test_api')
args = parser.parse_args()
# Define handlers for each subparser
print ('Have args. ')
handlers = {
'init': handle_init,
'manage' : handle_manage,
'jsbuild': handle_jsbuild,
'add_module' : handle_add_module,
'new_user' : handle_new_user,
'test_api' : handle_test_api,
'stop': lambda x: None
}
# Call the proper handler based on the subparser argument
handlers[args.sub](vars(args))
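# Example invocations (illustrative; assumes the module is run directly,
# the actual console-script name may differ):
#
#     python cli.py init --admin-user admin --admin-pass secret --fqdn master.va.mk
#     python cli.py new_user --username bob --password hunter2 --ordinary-user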
if __name__ == '__main__':
entry()
|
VapourApps/va_master
|
va_master/cli/cli.py
|
Python
|
gpl-3.0
| 14,600
|
import bpy
from mathutils import Vector
from ...utils import copy_bone, flip_bone, put_bone, org
from ...utils import strip_org, make_deformer_name, connected_children_names
from ...utils import create_circle_widget, create_sphere_widget, create_widget
from ...utils import MetarigError, make_mechanism_name, create_cube_widget
from rna_prop_ui import rna_idprop_ui_prop_get
script = """
controls = [%s]
torso = '%s'
if is_selected( controls ):
layout.prop( pose_bones[ torso ], '["%s"]', slider = True )
layout.prop( pose_bones[ torso ], '["%s"]', slider = True )
"""
class Rig:
def __init__(self, obj, bone_name, params):
""" Initialize torso rig and key rig properties """
eb = obj.data.edit_bones
self.obj = obj
self.org_bones = [bone_name] + connected_children_names(obj, bone_name)
self.params = params
self.spine_length = sum( [ eb[b].length for b in self.org_bones ] )
# Check if user provided the positions of the neck and pivot
if params.neck_pos and params.pivot_pos:
self.neck_pos = params.neck_pos
self.pivot_pos = params.pivot_pos
else:
raise MetarigError(
"RIGIFY ERROR: please specify neck and pivot bone positions"
)
# Check if neck is lower than pivot
if params.neck_pos <= params.pivot_pos:
raise MetarigError(
"RIGIFY ERROR: Neck cannot be below or the same as pivot"
)
# TODO:
# Limit neck_pos prop to 1 --> num of bones - 1 (last is head)
# Limit pivot_pos prop to 2 --> num of bones (must leave place for lower torso)
if params.tail_pos:
self.tail_pos = params.tail_pos
# Assign values to tweak layers props if opted by user
if params.tweak_extra_layers:
self.tweak_layers = list(params.tweak_layers)
else:
self.tweak_layers = None
        # Report an error if the user created less than the minimum of 4 bones for the rig
        if len(self.org_bones) <= 4:
            raise MetarigError(
                "RIGIFY ERROR: invalid rig structure: %s" % (strip_org(bone_name))
            )
def build_bone_structure( self ):
""" Divide meta-rig into lists of bones according to torso rig anatomy:
Neck --> Upper torso --> Lower torso --> Tail (optional) """
if self.pivot_pos and self.neck_pos:
neck_index = self.neck_pos - 1
pivot_index = self.pivot_pos - 1
tail_index = 0
if 'tail_pos' in dir(self):
tail_index = self.tail_pos - 1
neck_bones = self.org_bones[neck_index::]
upper_torso_bones = self.org_bones[pivot_index:neck_index]
lower_torso_bones = self.org_bones[tail_index:pivot_index]
tail_bones = []
if tail_index:
tail_bones = self.org_bones[::tail_index+1]
return {
'neck' : neck_bones,
'upper' : upper_torso_bones,
'lower' : lower_torso_bones,
'tail' : tail_bones
}
else:
return 'ERROR'
def orient_bone( self, eb, axis, scale, reverse = False ):
v = Vector((0,0,0))
setattr(v,axis,scale)
if reverse:
tail_vec = v * self.obj.matrix_world
eb.head[:] = eb.tail
eb.tail[:] = eb.head + tail_vec
else:
tail_vec = v * self.obj.matrix_world
eb.tail[:] = eb.head + tail_vec
def create_pivot( self, pivot ):
""" Create the pivot control and mechanism bones """
org_bones = self.org_bones
pivot_name = org_bones[pivot-1]
bpy.ops.object.mode_set(mode ='EDIT')
eb = self.obj.data.edit_bones
# Create torso control bone
torso_name = 'torso'
ctrl_name = copy_bone(self.obj, pivot_name, torso_name)
ctrl_eb = eb[ ctrl_name ]
self.orient_bone( ctrl_eb, 'y', self.spine_length / 2.5 )
# Create mch_pivot
mch_name = make_mechanism_name( 'pivot' )
mch_name = copy_bone(self.obj, ctrl_name, mch_name)
mch_eb = eb[ mch_name ]
mch_eb.length /= 4
# Positioning pivot in a more usable location for animators
pivot_loc = ( eb[ org_bones[0]].head + eb[ org_bones[0]].tail ) / 2
put_bone( self.obj, ctrl_name, pivot_loc )
return {
'ctrl' : ctrl_name,
'mch' : mch_name
}
def create_deform( self ):
org_bones = self.org_bones
bpy.ops.object.mode_set(mode ='EDIT')
eb = self.obj.data.edit_bones
def_bones = []
for org in org_bones:
def_name = make_deformer_name( strip_org( org ) )
def_name = copy_bone( self.obj, org, def_name )
def_bones.append( def_name )
return def_bones
def create_neck( self, neck_bones ):
org_bones = self.org_bones
bpy.ops.object.mode_set(mode ='EDIT')
eb = self.obj.data.edit_bones
# Create neck control
neck = copy_bone( self.obj, org(neck_bones[0]), 'neck' )
neck_eb = eb[ neck ]
# Neck spans all neck bones (except head)
neck_eb.tail[:] = eb[ org(neck_bones[-1]) ].head
# Create head control
head = copy_bone( self.obj, org(neck_bones[-1]), 'head' )
# MCH bones
# Neck MCH stretch
mch_str = copy_bone( self.obj, neck, make_mechanism_name('STR-neck') )
# Neck MCH rotation
mch_neck = copy_bone(
self.obj, neck, make_mechanism_name('ROT-neck')
)
self.orient_bone( eb[mch_neck], 'y', self.spine_length / 10 )
# Head MCH rotation
mch_head = copy_bone(
self.obj, head, make_mechanism_name('ROT-head')
)
self.orient_bone( eb[mch_head], 'y', self.spine_length / 10 )
twk,mch = [],[]
# Intermediary bones
for b in neck_bones[1:-1]: # All except 1st neck and (last) head
mch_name = copy_bone( self.obj, org(b), make_mechanism_name(b) )
eb[mch_name].length /= 4
mch += [ mch_name ]
# Tweak bones
for b in neck_bones[:-1]: # All except last bone
twk_name = "tweak_" + b
twk_name = copy_bone( self.obj, org(b), twk_name )
eb[twk_name].length /= 2
twk += [ twk_name ]
return {
'ctrl_neck' : neck,
'ctrl' : head,
'mch_str' : mch_str,
'mch_neck' : mch_neck,
'mch_head' : mch_head,
'mch' : mch,
'tweak' : twk
}
def create_chest( self, chest_bones ):
org_bones = self.org_bones
bpy.ops.object.mode_set(mode ='EDIT')
eb = self.obj.data.edit_bones
# get total spine length
# Create chest control bone
chest = copy_bone( self.obj, org( chest_bones[0] ), 'chest' )
self.orient_bone( eb[chest], 'y', self.spine_length / 3 )
# create chest mch_wgt
mch_wgt = copy_bone(
self.obj, org( chest_bones[-1] ),
make_mechanism_name( 'WGT-chest' )
)
# Create mch and twk bones
twk,mch = [],[]
for b in chest_bones:
mch_name = copy_bone( self.obj, org(b), make_mechanism_name(b) )
self.orient_bone( eb[mch_name], 'y', self.spine_length / 10 )
twk_name = "tweak_" + b
twk_name = copy_bone( self.obj, org(b), twk_name )
eb[twk_name].length /= 2
mch += [ mch_name ]
twk += [ twk_name ]
return {
'ctrl' : chest,
'mch' : mch,
'tweak' : twk,
'mch_wgt' : mch_wgt
}
def create_hips( self, hip_bones ):
org_bones = self.org_bones
bpy.ops.object.mode_set(mode ='EDIT')
eb = self.obj.data.edit_bones
# Create hips control bone
hips = copy_bone( self.obj, org( hip_bones[-1] ), 'hips' )
self.orient_bone(
eb[hips],
'y',
self.spine_length / 4,
reverse = True
)
# create hips mch_wgt
mch_wgt = copy_bone(
self.obj, org( hip_bones[0] ),
make_mechanism_name( 'WGT-hips' )
)
# Create mch and tweak bones
twk,mch = [],[]
for b in hip_bones:
mch_name = copy_bone( self.obj, org(b), make_mechanism_name(b) )
self.orient_bone(
eb[mch_name], 'y', self.spine_length / 10, reverse = True
)
twk_name = "tweak_" + b
twk_name = copy_bone( self.obj, org( b ), twk_name )
eb[twk_name].length /= 2
mch += [ mch_name ]
twk += [ twk_name ]
return {
'ctrl' : hips,
'mch' : mch,
'tweak' : twk,
'mch_wgt' : mch_wgt
}
def create_tail( self, tail_bones ):
pass
def parent_bones( self, bones ):
org_bones = self.org_bones
bpy.ops.object.mode_set(mode ='EDIT')
eb = self.obj.data.edit_bones
# Parent deform bones
for i,b in enumerate( bones['def'] ):
if i > 0: # For all bones but the first (which has no parent)
eb[b].parent = eb[ bones['def'][i-1] ] # to previous
eb[b].use_connect = True
# Parent control bones
# Head control => MCH-rotation_head
eb[ bones['neck']['ctrl'] ].parent = eb[ bones['neck']['mch_head'] ]
# MCH stretch => neck ctrl
eb[ bones['neck']['mch_str'] ].parent = eb[ bones['neck']['ctrl_neck'] ]
# Neck control => MCH-rotation_neck
eb[ bones['neck']['ctrl_neck'] ].parent = eb[ bones['neck']['mch_neck'] ]
# Parent hips and chest controls to torso
eb[ bones['chest']['ctrl'] ].parent = eb[ bones['pivot']['ctrl'] ]
eb[ bones['hips']['ctrl'] ].parent = eb[ bones['pivot']['ctrl'] ]
# Parent mch bones
# Neck mch
eb[ bones['neck']['mch_head'] ].parent = eb[ bones['neck']['ctrl_neck'] ]
parent = eb[ bones['neck']['mch_str'] ]
for i,b in enumerate([ eb[n] for n in bones['neck']['mch'] ]):
b.parent = parent
# Chest mch bones and neck mch
chest_mch = bones['chest']['mch'] + [ bones['neck']['mch_neck'] ]
for i,b in enumerate(chest_mch):
if i == 0:
eb[b].parent = eb[ bones['pivot']['ctrl'] ]
else:
eb[b].parent = eb[ chest_mch[i-1] ]
# Hips mch bones
for i,b in enumerate( bones['hips']['mch'] ):
if i == len(bones['hips']['mch']) - 1:
eb[b].parent = eb[ bones['pivot']['ctrl'] ]
else:
eb[b].parent = eb[ bones['hips']['mch'][i+1] ]
# mch pivot
eb[ bones['pivot']['mch'] ].parent = eb[ bones['chest']['mch'][0] ]
# MCH widgets
eb[ bones['chest']['mch_wgt'] ].parent = eb[ bones['chest']['mch'][-1] ]
eb[ bones['hips' ]['mch_wgt'] ].parent = eb[ bones['hips' ]['mch'][0 ] ]
# Tweaks
# Neck tweaks
for i,twk in enumerate( bones['neck']['tweak'] ):
if i == 0:
eb[ twk ].parent = eb[ bones['neck']['ctrl_neck'] ]
else:
eb[ twk ].parent = eb[ bones['neck']['mch'][i-1] ]
# Chest tweaks
for twk,mch in zip( bones['chest']['tweak'], bones['chest']['mch'] ):
if bones['chest']['tweak'].index( twk ) == 0:
eb[ twk ].parent = eb[ bones['pivot']['mch'] ]
else:
eb[ twk ].parent = eb[ mch ]
# Hips tweaks
for i,twk in enumerate(bones['hips']['tweak']):
if i == 0:
eb[twk].parent = eb[ bones['hips']['mch'][i] ]
else:
eb[twk].parent = eb[ bones['hips']['mch'][i-1] ]
# Parent orgs to matching tweaks
tweaks = bones['hips']['tweak'] + bones['chest']['tweak']
tweaks += bones['neck']['tweak'] + [ bones['neck']['ctrl'] ]
if 'tail' in bones.keys():
tweaks += bones['tail']['tweak']
for org, twk in zip( org_bones, tweaks ):
eb[ org ].parent = eb[ twk ]
def make_constraint( self, bone, constraint ):
bpy.ops.object.mode_set(mode = 'OBJECT')
pb = self.obj.pose.bones
owner_pb = pb[bone]
const = owner_pb.constraints.new( constraint['constraint'] )
const.target = self.obj
        # filter constraint props to those that actually exist in the current
        # type of constraint, then assign values to each
for p in [ k for k in constraint.keys() if k in dir(const) ]:
setattr( const, p, constraint[p] )
def constrain_bones( self, bones ):
# MCH bones
# head and neck MCH bones
for b in [ bones['neck']['mch_head'], bones['neck']['mch_neck'] ]:
self.make_constraint( b, {
'constraint' : 'COPY_ROTATION',
'subtarget' : bones['pivot']['ctrl'],
} )
self.make_constraint( b, {
'constraint' : 'COPY_SCALE',
'subtarget' : bones['pivot']['ctrl'],
} )
# Neck MCH Stretch
self.make_constraint( bones['neck']['mch_str'], {
'constraint' : 'DAMPED_TRACK',
'subtarget' : bones['neck']['ctrl'],
})
self.make_constraint( bones['neck']['mch_str'], {
'constraint' : 'STRETCH_TO',
'subtarget' : bones['neck']['ctrl'],
})
# Intermediary mch bones
intermediaries = [ bones['neck'], bones['chest'], bones['hips'] ]
if 'tail' in bones.keys():
intermediaries += bones['tail']
for i,l in enumerate(intermediaries):
mch = l['mch']
factor = float( 1 / len( l['tweak'] ) )
for j,b in enumerate(mch):
if i == 0:
nfactor = float( (j + 1) / len( mch ) )
self.make_constraint( b, {
'constraint' : 'COPY_ROTATION',
'subtarget' : l['ctrl'],
'influence' : nfactor
} )
else:
self.make_constraint( b, {
'constraint' : 'COPY_TRANSFORMS',
'subtarget' : l['ctrl'],
'influence' : factor,
'owner_space' : 'LOCAL',
'target_space' : 'LOCAL'
} )
# MCH pivot
self.make_constraint( bones['pivot']['mch'], {
'constraint' : 'COPY_TRANSFORMS',
'subtarget' : bones['hips']['mch'][-1],
'owner_space' : 'LOCAL',
'target_space' : 'LOCAL'
})
# DEF bones
deform = bones['def']
tweaks = bones['hips']['tweak'] + bones['chest']['tweak']
tweaks += bones['neck']['tweak'] + [ bones['neck']['ctrl'] ]
for d,t in zip(deform, tweaks):
tidx = tweaks.index(t)
self.make_constraint( d, {
'constraint' : 'COPY_TRANSFORMS',
'subtarget' : t
})
if tidx != len(tweaks) - 1:
self.make_constraint( d, {
'constraint' : 'DAMPED_TRACK',
'subtarget' : tweaks[ tidx + 1 ],
})
self.make_constraint( d, {
'constraint' : 'STRETCH_TO',
'subtarget' : tweaks[ tidx + 1 ],
})
def create_drivers( self, bones ):
bpy.ops.object.mode_set(mode ='OBJECT')
pb = self.obj.pose.bones
# Setting the torso's props
torso = pb[ bones['pivot']['ctrl'] ]
props = [ "head_follow", "neck_follow" ]
owners = [ bones['neck']['mch_head'], bones['neck']['mch_neck'] ]
for prop in props:
if prop == 'neck_follow':
torso[prop] = 0.5
else:
torso[prop] = 0.0
            rna_prop = rna_idprop_ui_prop_get( torso, prop, create=True )
            rna_prop["min"] = 0.0
            rna_prop["max"] = 1.0
            rna_prop["soft_min"] = 0.0
            rna_prop["soft_max"] = 1.0
            rna_prop["description"] = prop
# driving the follow rotation switches for neck and head
for bone, prop, in zip( owners, props ):
# Add driver to copy rotation constraint
drv = pb[ bone ].constraints[ 0 ].driver_add("influence").driver
drv.type = 'AVERAGE'
var = drv.variables.new()
var.name = prop
var.type = "SINGLE_PROP"
var.targets[0].id = self.obj
var.targets[0].data_path = \
torso.path_from_id() + '['+ '"' + prop + '"' + ']'
drv_modifier = self.obj.animation_data.drivers[-1].modifiers[0]
drv_modifier.mode = 'POLYNOMIAL'
drv_modifier.poly_order = 1
drv_modifier.coefficients[0] = 1.0
drv_modifier.coefficients[1] = -1.0
def locks_and_widgets( self, bones ):
bpy.ops.object.mode_set(mode ='OBJECT')
pb = self.obj.pose.bones
        # deform bones bbone segments
for bone in bones['def'][:-1]:
self.obj.data.bones[bone].bbone_segments = 8
self.obj.data.bones[ bones['def'][0] ].bbone_in = 0.0
self.obj.data.bones[ bones['def'][-2] ].bbone_out = 0.0
# Locks
tweaks = bones['neck']['tweak'] + bones['chest']['tweak']
tweaks += bones['hips']['tweak']
if 'tail' in bones.keys():
tweaks += bones['tail']['tweak']
# Tweak bones locks
for bone in tweaks:
pb[bone].lock_rotation = True, False, True
pb[bone].lock_scale = False, True, False
# Widgets
# Assigning a widget to torso bone
create_cube_widget(
self.obj,
bones['pivot']['ctrl'],
radius = 0.5,
bone_transform_name = None
)
# Assigning widgets to control bones
gen_ctrls = [
bones['neck']['ctrl_neck'],
bones['chest']['ctrl'],
bones['hips']['ctrl']
]
if 'tail' in bones.keys():
gen_ctrls += [ bones['tail']['ctrl'] ]
for bone in gen_ctrls:
create_circle_widget(
self.obj,
bone,
radius = 1.0,
head_tail = 0.5,
with_line = False,
bone_transform_name = None
)
# Head widget
create_circle_widget(
self.obj,
bones['neck']['ctrl'],
radius = 0.75,
head_tail = 1.0,
with_line = False,
bone_transform_name = None
)
# place widgets on correct bones
chest_widget_loc = pb[ bones['chest']['mch_wgt'] ]
pb[ bones['chest']['ctrl'] ].custom_shape_transform = chest_widget_loc
hips_widget_loc = pb[ bones['hips']['mch_wgt'] ]
if 'tail' in bones.keys():
            hips_widget_loc = pb[ bones['def'][self.tail_pos - 1] ]
pb[ bones['hips']['ctrl'] ].custom_shape_transform = hips_widget_loc
# Assigning widgets to tweak bones and layers
for bone in tweaks:
create_sphere_widget(self.obj, bone, bone_transform_name=None)
if self.tweak_layers:
pb[bone].bone.layers = self.tweak_layers
def generate( self ):
# Torso Rig Anatomy:
# Neck: all bones above neck point, last bone is head
# Upper torso: all bones between pivot and neck start
# Lower torso: all bones below pivot until tail point
# Tail: all bones below tail point
bone_chains = self.build_bone_structure()
bpy.ops.object.mode_set(mode ='EDIT')
eb = self.obj.data.edit_bones
# Clear parents for org bones
for bone in self.org_bones:
eb[bone].use_connect = False
eb[bone].parent = None
if bone_chains != 'ERROR':
# Create lists of bones and strip "ORG" from their names
neck_bones = [ strip_org(b) for b in bone_chains['neck' ] ]
upper_torso_bones = [ strip_org(b) for b in bone_chains['upper'] ]
lower_torso_bones = [ strip_org(b) for b in bone_chains['lower'] ]
tail_bones = [ strip_org(b) for b in bone_chains['tail' ] ]
bones = {}
bones['def'] = self.create_deform() # Gets org bones from self
bones['pivot'] = self.create_pivot( self.pivot_pos )
bones['neck'] = self.create_neck( neck_bones )
bones['chest'] = self.create_chest( upper_torso_bones )
bones['hips'] = self.create_hips( lower_torso_bones )
# TODO: Add create tail
if tail_bones:
bones['tail'] = self.create_tail( tail_bones )
# TEST
bpy.ops.object.mode_set(mode ='EDIT')
eb = self.obj.data.edit_bones
self.parent_bones( bones )
self.constrain_bones( bones )
self.create_drivers( bones )
self.locks_and_widgets( bones )
controls = [ bones['neck']['ctrl'], bones['neck']['ctrl_neck'] ]
controls += [ bones['chest']['ctrl'], bones['hips']['ctrl'] ]
controls += [ bones['pivot']['ctrl'] ]
if 'tail' in bones.keys():
controls += [ bones['tail']['ctrl'] ]
# Create UI
controls_string = ", ".join(["'" + x + "'" for x in controls])
return [script % (
controls_string,
bones['pivot']['ctrl'],
'head_follow',
'neck_follow'
)]
def add_parameters( params ):
""" Add the parameters of this rig type to the
RigifyParameters PropertyGroup
"""
params.neck_pos = bpy.props.IntProperty(
name = 'neck_position',
default = 6,
min = 0,
description = 'Neck start position'
)
params.pivot_pos = bpy.props.IntProperty(
name = 'pivot_position',
default = 3,
min = 0,
description = 'Position of the torso control and pivot point'
)
params.tail_pos = bpy.props.IntProperty(
name = 'tail_position',
default = 0,
min = 0,
description = 'Where the tail starts (change from 0 to enable)'
)
# Setting up extra layers for the FK and tweak
params.tweak_extra_layers = bpy.props.BoolProperty(
name = "tweak_extra_layers",
default = True,
description = ""
)
params.tweak_layers = bpy.props.BoolVectorProperty(
size = 32,
description = "Layers for the tweak controls to be on",
default = tuple( [ i == 1 for i in range(0, 32) ] )
)
def parameters_ui(layout, params):
""" Create the ui for the rig parameters."""
r = layout.row()
r.prop(params, "neck_pos")
r = layout.row()
r.prop(params, "pivot_pos")
r = layout.row()
r.prop(params, "tail_pos")
r = layout.row()
r.prop(params, "tweak_extra_layers")
r.active = params.tweak_extra_layers
col = r.column(align=True)
row = col.row(align=True)
for i in range(8):
row.prop(params, "tweak_layers", index=i, toggle=True, text="")
row = col.row(align=True)
for i in range(16,24):
row.prop(params, "tweak_layers", index=i, toggle=True, text="")
col = r.column(align=True)
row = col.row(align=True)
for i in range(8,16):
row.prop(params, "tweak_layers", index=i, toggle=True, text="")
row = col.row(align=True)
for i in range(24,32):
row.prop(params, "tweak_layers", index=i, toggle=True, text="")
def create_sample(obj):
# generated by rigify.utils.write_metarig
bpy.ops.object.mode_set(mode='EDIT')
arm = obj.data
bones = {}
bone = arm.edit_bones.new('spine')
bone.head[:] = 0.0000, 0.0552, 1.0099
bone.tail[:] = 0.0000, 0.0172, 1.1573
bone.roll = 0.0000
bone.use_connect = False
bones['spine'] = bone.name
bone = arm.edit_bones.new('spine.001')
bone.head[:] = 0.0000, 0.0172, 1.1573
bone.tail[:] = 0.0000, 0.0004, 1.2929
bone.roll = 0.0000
bone.use_connect = True
bone.parent = arm.edit_bones[bones['spine']]
bones['spine.001'] = bone.name
bone = arm.edit_bones.new('spine.002')
bone.head[:] = 0.0000, 0.0004, 1.2929
bone.tail[:] = 0.0000, 0.0059, 1.4657
bone.roll = 0.0000
bone.use_connect = True
bone.parent = arm.edit_bones[bones['spine.001']]
bones['spine.002'] = bone.name
bone = arm.edit_bones.new('spine.003')
bone.head[:] = 0.0000, 0.0059, 1.4657
bone.tail[:] = 0.0000, 0.0114, 1.6582
bone.roll = 0.0000
bone.use_connect = True
bone.parent = arm.edit_bones[bones['spine.002']]
bones['spine.003'] = bone.name
bone = arm.edit_bones.new('spine.004')
bone.head[:] = 0.0000, 0.0114, 1.6582
bone.tail[:] = 0.0000, -0.0067, 1.7197
bone.roll = 0.0000
bone.use_connect = True
bone.parent = arm.edit_bones[bones['spine.003']]
bones['spine.004'] = bone.name
bone = arm.edit_bones.new('spine.005')
bone.head[:] = 0.0000, -0.0067, 1.7197
bone.tail[:] = 0.0000, -0.0247, 1.7813
bone.roll = 0.0000
bone.use_connect = True
bone.parent = arm.edit_bones[bones['spine.004']]
bones['spine.005'] = bone.name
bone = arm.edit_bones.new('spine.006')
bone.head[:] = 0.0000, -0.0247, 1.7813
bone.tail[:] = 0.0000, -0.0247, 1.9796
bone.roll = 0.0000
bone.use_connect = True
bone.parent = arm.edit_bones[bones['spine.005']]
bones['spine.006'] = bone.name
bpy.ops.object.mode_set(mode='OBJECT')
pbone = obj.pose.bones[bones['spine']]
pbone.rigify_type = 'pitchipoy.super_torso_turbo'
pbone.lock_location = (False, False, False)
pbone.lock_rotation = (False, False, False)
pbone.lock_rotation_w = False
pbone.lock_scale = (False, False, False)
pbone.rotation_mode = 'QUATERNION'
try:
pbone.rigify_parameters.chain_bone_controls = "1, 2, 3"
except AttributeError:
pass
try:
pbone.rigify_parameters.neck_pos = 5
except AttributeError:
pass
try:
pbone.rigify_parameters.tweak_layers = [False, False, False, False, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False]
except AttributeError:
pass
pbone = obj.pose.bones[bones['spine.001']]
pbone.rigify_type = ''
pbone.lock_location = (False, False, False)
pbone.lock_rotation = (False, False, False)
pbone.lock_rotation_w = False
pbone.lock_scale = (False, False, False)
pbone.rotation_mode = 'QUATERNION'
pbone = obj.pose.bones[bones['spine.002']]
pbone.rigify_type = ''
pbone.lock_location = (False, False, False)
pbone.lock_rotation = (False, False, False)
pbone.lock_rotation_w = False
pbone.lock_scale = (False, False, False)
pbone.rotation_mode = 'QUATERNION'
pbone = obj.pose.bones[bones['spine.003']]
pbone.rigify_type = ''
pbone.lock_location = (False, False, False)
pbone.lock_rotation = (False, False, False)
pbone.lock_rotation_w = False
pbone.lock_scale = (False, False, False)
pbone.rotation_mode = 'QUATERNION'
pbone = obj.pose.bones[bones['spine.004']]
pbone.rigify_type = ''
pbone.lock_location = (False, False, False)
pbone.lock_rotation = (False, False, False)
pbone.lock_rotation_w = False
pbone.lock_scale = (False, False, False)
pbone.rotation_mode = 'QUATERNION'
pbone = obj.pose.bones[bones['spine.005']]
pbone.rigify_type = ''
pbone.lock_location = (False, False, False)
pbone.lock_rotation = (False, False, False)
pbone.lock_rotation_w = False
pbone.lock_scale = (False, False, False)
pbone.rotation_mode = 'QUATERNION'
pbone = obj.pose.bones[bones['spine.006']]
pbone.rigify_type = ''
pbone.lock_location = (False, False, False)
pbone.lock_rotation = (False, False, False)
pbone.lock_rotation_w = False
pbone.lock_scale = (False, False, False)
pbone.rotation_mode = 'QUATERNION'
bpy.ops.object.mode_set(mode='EDIT')
for bone in arm.edit_bones:
bone.select = False
bone.select_head = False
bone.select_tail = False
for b in bones:
bone = arm.edit_bones[bones[b]]
bone.select = True
bone.select_head = True
bone.select_tail = True
arm.edit_bones.active = bone
|
Passtechsoft/TPEAlpGen
|
blender/release/scripts/addons/rigify/rigs/pitchipoy/super_torso_turbo.py
|
Python
|
gpl-3.0
| 29,912
|
import unittest
import urlparse
class URLTestCase(unittest.TestCase):
def assertQueryStringArgsEqual(self, querystring1, querystring2):
# clean-up '?'
if isinstance(querystring1, basestring) and querystring1.startswith('?'):
querystring1 = querystring1[1:]
if isinstance(querystring2, basestring) and querystring2.startswith('?'):
querystring2 = querystring2[1:]
if isinstance(querystring1, str) or isinstance(querystring1, unicode):
querystring1 = urlparse.parse_qs(querystring1)
if isinstance(querystring2, str) or isinstance(querystring2, unicode):
querystring2 = urlparse.parse_qs(querystring2)
self.assertDictEqual(querystring1, querystring2)
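# A minimal usage sketch (hypothetical test case, not part of the original
# module): the helper treats differently ordered query strings as equal.
class ExampleURLTestCase(URLTestCase):
    def test_order_is_ignored(self):
        self.assertQueryStringArgsEqual('?a=1&b=2', 'b=2&a=1')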
|
bmentges/brainiak_api
|
tests/utils.py
|
Python
|
gpl-2.0
| 754
|
import numpy as np
x = np.random.normal(size=[10,10])
y = np.random.normal(size=[10,10])
z = np.dot(x,y)
# print(z)
import tensorflow as tf
#Example 1
x = tf.random_normal([10,10])
y = tf.random_normal([10,10])
z = tf.matmul(x,y)
sess = tf.Session()
z_val = sess.run(z)
print(z_val)
#Example 2
# placeholders are used to feed values from python to Tf ops. We define two placeholders
# one for input feature x and one for output y
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
# since we know the size of the weights vector
w = tf.get_variable("w", shape=[3,1])
# define yhat to be an estimate of y
f = tf.stack([tf.square(x),x,tf.ones_like(x)], 1)
yhat = tf.squeeze(tf.matmul(f,w),1)
# The loss is defined to be the l2 distance between our estimate of y and its true value
# adding a normalizing value on weights to prevent overfitting
loss = tf.nn.l2_loss(yhat-y) + 0.1*tf.nn.l2_loss(w)
train_op = tf.train.AdamOptimizer(0.1).minimize(loss)
def generate_data():
x_val = np.random.uniform(-10.0, 10.0, size=100)
y_val = 5 * np.square(x_val) + 3
return x_val, y_val
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for _ in range(1000):
x_val, y_val = generate_data()
_, loss_val = sess.run([train_op, loss], {x:x_val, y:y_val})
# print(loss_val)
print(sess.run([w]))
# understanding static and dynamic types
# Tensors in tf have a static shape attribute which is determined during graph construction.
# The static shape may be underspecified.
a = tf.placeholder(tf.float32, [None, 128])
# This means that the first dimension is determined dynamically during Session.run()
static_shape = a.shape.as_list()
dynamic_shape = tf.shape(a)
# It's convenient to have a function that returns the static shape when available and dynamic shape when it's not
def get_shape(tensor):
static_shape = tensor.shape.as_list()
dynamic_shape = tf.unstack(tf.shape(tensor))
dims = [s[1] if s[0] is None else s[0]
for s in zip(static_shape, dynamic_shape)]
return dims
b = tf.placeholder(tf.float32, [None, 10, 32])
shape = get_shape(b)
b = tf.reshape(b, [shape[0], shape[1]*shape[2]])
# general purpose reshape function
def reshape(tensor, dims_list):
    shape = get_shape(tensor)
dims_prod = []
for dims in dims_list:
if isinstance(dims, int):
dims_prod.append(shape[dims])
        elif all([isinstance(shape[d], int) for d in dims]):
dims_prod.append(np.prod([shape[d] for d in dims]))
else:
            dims_prod.append(tf.reduce_prod([shape[d] for d in dims]))
tensor = tf.reshape(tensor, dims_prod)
return tensor
b = tf.placeholder(tf.float32, [None, 10, 32])
b = reshape(b, [0, [1, 2]])
# tf introduces two different context managers to alter the names of the tensors - very much like namespaces in cpp
with tf.name_scope("scope"):
a = tf.constant(1, name="a")
print(a.name) # prints "scope/a:0"
b = tf.Variable(1, name="b")
print(b.name) # prints "scope/b:0"
# tf.get_variable creates a new variable with the given name but raises a ValueError exception when trying to redeclare it
# tf.name_scope affects the names of the tensors and variables created with tf.Variable but doesn't affect the names of tensors created with
# tf.get_variable
# unlike tf.name_scope, tf.variable_scope modifies the names of the variables created with tf.get_variable as well.
# REUSING PREVIOUSLY DECLARED VARIABLES
with tf.variable_scope("scope"):
a1 = tf.get_variable(name = "a", shape=[])
with tf.variable_scope("scope", reuse=True):
a2 = tf.get_variable(name="a", shape=[])
# comes in handy when building neural network layers
features1 = tf.layers.conv2d(image1, filters=32, kernel_size=3)
# use the same convolutional weights to process the second image
with tf.variable_scope(tf.get_variable_scope(), reuse=True):
features2 = tf.layers.conv2d(image2, filters=32, kernel_size=3)
# Making templates in tf is easy peasy lemon squeezy: for example
conv3x32 = tf.make_template("conv3x32", lambda x: tf.layers.conv2d(x, 32, 3))
features1 = conv3x32(image1)
features2 = conv3x32(image2)
# one can turn any function into a template using the syntax above. Upon the first call to a template, the variables
# defined inside the function are declared, and in subsequent invocations they automatically get reused
# Broadcasting the good and the ugly
def merge(a, b, units, activation=tf.nn.relu):
pa = tf.layers.dense(a, units, activation=None)
pb = tf.layers.dense(b, units, activation=None)
c = pa + pb
if activation is not None:
c = activation(c)
return c
# A general rule of thumb is to always specify the dimensions in reduction operations and when using tf.squeeze.
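# A minimal sketch of why (illustrative values, not from the original notes):
# implicit broadcasting can silently grow the shape before a reduction, so
# spelling out the axis keeps the intended shape visible and catches bugs early.
bad_a = tf.constant([[1.], [2.]])
bad_b = tf.constant([1., 2.])
total = tf.reduce_sum(bad_a + bad_b)            # broadcasts to [2, 2] and sums to 12.0
per_row = tf.reduce_sum(bad_a + bad_b, axis=1)  # explicit axis: [5., 7.]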
#Feeding data to tensorflow
# Constants: not modular, and all the data must fit in memory, so only feasible for very small datasets
# Placeholders solve both these problems
data = tf.placeholder(tf.float32)
prediction = tf.square(data) + 1
actual_data = np.random.normal(size=[100])
tf.Session().run(prediction, feed_dict = {data:actual_data})
# Dataset API - the recommended way of reading the data in tf
actual_data = np.random.normal(size=[100])
dataset = tf.contrib.data.Dataset.from_tensor_slices(actual_data)
data = dataset.make_one_shot_iterator().get_next()
# if you have created tfrecord files then use this
dataset = tf.contrib.data.TFRecordDataset(path_to_data)
#overloaded operators in tf
# Just like numpy, tf overloads a number of python operators to make building graphs easier and the code more readable
# Slicing operation is very costly though so don't use it.
# tf doesn't allow using tensors as bools
# Instead use tf.cond, tf.equal or tf.not_equal
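# A hedged sketch of the two points above (shapes are illustrative):
# summing rows via the overloaded [] operator creates one slice op per row,
# while a single reduction does the same work in one op; and element-wise
# comparison must go through tf.equal rather than ==.
xs = tf.random_normal([100, 10])
row_sum_slow = tf.zeros([10])
for idx in range(100):
    row_sum_slow += xs[idx]                  # 100 strided-slice ops
row_sum_fast = tf.reduce_sum(xs, axis=0)     # one op, same result
same = tf.equal(row_sum_slow, row_sum_fast)  # element-wise boolean tensor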
# Understanding order of execution and control dependencies
# Only use variables if tensors don't do the job.
a = tf.constant(1)
b = tf.constant(2)
a = a+b
tf.Session().run(a)
# Evaluating 'a' will return 3 as expected. Note that here we are creating 3 tensors, two constant tensors and another tensor that stores the result of the addition.
# Note that you can't overwrite the value of a tensor. If you want to modify it, you have to create a new tensor, as we did here.
# Unlike tensors, variables can be updated.
a = tf.Variable(1)
b = tf.constant(2)
assign = tf.assign(a, a+b)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run(assign))
# Note that tf.assign returns a tensor representing the value of the assignment.
a = tf.Variable(1)
b = tf.constant(2)
c = a + b
assign = tf.assign(a,5)
sess = tf.Session()
for i in range(15):
sess.run(tf.global_variables_initializer())
print(sess.run(assign))
# Note that the tensor c here won't have a deterministic value. It might be 3 or 7 depending on whether the assignment or the addition gets executed first.
# When dealing with variables, you may need to explicitly define dependencies using tf.control_dependencies() as follows:
a = tf.Variable(1)
b = tf.constant(2)
c = a + b
with tf.control_dependencies([c]):
assign = tf.assign(a, 5)
sess = tf.Session()
for i in range(10):
sess.run(tf.global_variables_initializer())
print(sess.run([assign, c]))
# This will make sure that the assign op will be called after the addition
# Control flow operations: conditionals and loops
# When building complex models such as RNNs we may need to control the flow of ops through conditionals and loops.
# eg: suppose you want to decide whether to multiply or add two tensors based on a predicate.
a = tf.constant(1)
b = tf.constant(1)
p = tf.constant(True)
x = tf.cond(p, lambda: a+b, lambda: a*b)
print(tf.Session().run(x))
# Since the predicate is true in this case the output will be addition.
# To perform operations in batch, use tf.where
a = tf.constant([1,2])
b = tf.constant([2,2])
p = tf.constant([True, False])
x = tf.where(p, a+b, a*b)
print(tf.Session().run(x))
# While loop in tf
# Example to generate Fibonacci numbers with while loops
n = tf.constant(5)
def cond(i, a, b):
return i < n
def body(i, a, b):
return i+1, b, a+b
i,a,b = tf.while_loop(cond, body, (2,1,1))
print(tf.Session().run(b))
# Suppose you want to save these vals.
n = tf.constant(5)
def cond(i, a, b, c):
return i < n
def body(i, a, b, c):
return i + 1, b, a + b, tf.concat([c, [a + b]], 0)
i, a, b, c = tf.while_loop(cond, body, (2, 1, 1, tf.constant([1, 1])),
shape_invariants = (tf.TensorShape([]),
tf.TensorShape([]),
tf.TensorShape([]),
tf.TensorShape([None]))
)
print(tf.Session().run(c))
# ^ that is ugly and somewhat inefficient. TF provides tf.TensorArray for such arrays that grow in size. So to do
# the same thing as above we can do the following
n = tf.constant(5)
c = tf.TensorArray(tf.int32, n)
c = c.write(0, 1)
c = c.write(1, 1)
def cond(i, a, b, c):
return i < n
def body(i, a, b, c):
    return i + 1, b, a + b, c.write(i, a + b)
i, a, b, c = tf.while_loop(cond, body, (2, 1, 1, c))
c = c.stack()
print(tf.Session().run(c))
|
karanchawla/100DaysofCode
|
day19/tensorflow_basics.py
|
Python
|
mit
| 8,939
|
# Copyright (c) 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Bob Melander, Cisco Systems, Inc.
from oslo.config import cfg
from neutron.api.rpc.agentnotifiers import l3_rpc_agent_api
from neutron.common import constants as q_const
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron.db import api as qdbapi
from neutron.db import common_db_mixin
from neutron.db import extraroute_db
from neutron.db import l3_dvr_db
from neutron.db import l3_dvrscheduler_db
from neutron.db import l3_gwmode_db
from neutron.db import l3_rpc_base
from neutron.db import model_base
from neutron.openstack.common import importutils
from neutron.plugins.common import constants
class L3RouterPluginRpcCallbacks(n_rpc.RpcCallback,
l3_rpc_base.L3RpcCallbackMixin):
RPC_API_VERSION = '1.2'
# history
# 1.2 Added methods for DVR support
class L3RouterPlugin(common_db_mixin.CommonDbMixin,
extraroute_db.ExtraRoute_db_mixin,
l3_dvr_db.L3_NAT_with_dvr_db_mixin,
l3_gwmode_db.L3_NAT_db_mixin,
l3_dvrscheduler_db.L3_DVRsch_db_mixin):
"""Implementation of the Neutron L3 Router Service Plugin.
This class implements a L3 service plugin that provides
router and floatingip resources and manages associated
request/response.
All DB related work is implemented in classes
l3_db.L3_NAT_db_mixin, l3_dvr_db.L3_NAT_with_dvr_db_mixin, and
extraroute_db.ExtraRoute_db_mixin.
"""
supported_extension_aliases = ["dvr", "router", "ext-gw-mode",
"extraroute", "l3_agent_scheduler"]
def __init__(self):
qdbapi.register_models(base=model_base.BASEV2)
self.setup_rpc()
self.router_scheduler = importutils.import_object(
cfg.CONF.router_scheduler_driver)
def setup_rpc(self):
# RPC support
self.topic = topics.L3PLUGIN
self.conn = n_rpc.create_connection(new=True)
self.agent_notifiers.update(
{q_const.AGENT_TYPE_L3: l3_rpc_agent_api.L3AgentNotifyAPI()})
self.endpoints = [L3RouterPluginRpcCallbacks()]
self.conn.create_consumer(self.topic, self.endpoints,
fanout=False)
self.conn.consume_in_threads()
def get_plugin_type(self):
return constants.L3_ROUTER_NAT
def get_plugin_description(self):
"""returns string description of the plugin."""
return ("L3 Router Service Plugin for basic L3 forwarding"
" between (L2) Neutron networks and access to external"
" networks via a NAT gateway.")
def create_floatingip(self, context, floatingip):
"""Create floating IP.
:param context: Neutron request context
        :param floatingip: data of the floating IP being created
        :returns: A floating IP object on success
        As the L3 router plugin asynchronously creates floating IPs
        leveraging the L3 agent, the initial status for the floating
        IP object will be DOWN.
"""
return super(L3RouterPlugin, self).create_floatingip(
context, floatingip,
initial_status=q_const.FLOATINGIP_STATUS_DOWN)
|
shakamunyi/neutron-dvr
|
neutron/services/l3_router/l3_router_plugin.py
|
Python
|
apache-2.0
| 3,884
|
#!/user/bin/env python
'makeTextFile.py -- create text file'
import os
ls = os.linesep  # first function
while True:
fname = raw_input("The File name will be(remember the '.'):")
    if os.path.exists(fname):  # second function
print "ERROR: '%s' already exists" % fname
else:
break
all = []
print "\nEnter lines ('.' by itself to quit).\n"
while True:
entry = raw_input('>')
if entry == '.':
break
else:
all.append(entry)
fobj = open(fname, 'w')
fobj.writelines(['%s%s' % (x, ls) for x in all])  # third function
fobj.close()
print 'DONE!'
|
chidaobanjiu/My_python_scripts
|
makeTextFile.py
|
Python
|
cc0-1.0
| 556
|
from django import forms
from django.contrib.auth import get_user_model
from django.utils.translation import ugettext_lazy as _
from django.contrib.auth.models import Group, Permission
from django.forms.models import inlineformset_factory
from wagtail.wagtailcore import hooks
from wagtail.wagtailadmin.widgets import AdminPageChooser
from wagtail.wagtailusers.models import UserProfile
from wagtail.wagtailcore.models import Page, UserPagePermissionsProxy, GroupPagePermission
User = get_user_model()
# The standard fields each user model is expected to have, as a minimum.
standard_fields = set(['email', 'first_name', 'last_name', 'is_superuser', 'groups'])
class UsernameForm(forms.ModelForm):
"""
Intelligently sets up the username field if it is infact a username. If the
User model has been swapped out, and the username field is an email or
something else, dont touch it.
"""
def __init__(self, *args, **kwargs):
super(UsernameForm, self).__init__(*args, **kwargs)
if User.USERNAME_FIELD == 'username':
field = self.fields['username']
field.regex = r"^[\w.@+-]+$"
field.help_text = _("Required. 30 characters or fewer. Letters, "
"digits and @/./+/-/_ only.")
field.error_messages = field.error_messages.copy()
field.error_messages.update({
'invalid': _("This value may contain only letters, numbers "
"and @/./+/-/_ characters.")})
@property
def username_field(self):
return self[User.USERNAME_FIELD]
def separate_username_field(self):
return User.USERNAME_FIELD not in standard_fields
class UserCreationForm(UsernameForm):
required_css_class = "required"
error_messages = {
'duplicate_username': _("A user with that username already exists."),
'password_mismatch': _("The two password fields didn't match."),
}
is_superuser = forms.BooleanField(
label=_("Administrator"),
required=False,
help_text=_("If ticked, this user has the ability to manage user accounts.")
)
password1 = forms.CharField(
label=_("Password"),
required=False,
widget=forms.PasswordInput,
help_text=_("Leave blank if not changing."))
password2 = forms.CharField(
label=_("Password confirmation"), required=False,
widget=forms.PasswordInput,
help_text=_("Enter the same password as above, for verification."))
email = forms.EmailField(required=True, label=_("Email"))
first_name = forms.CharField(required=True, label=_("First Name"))
last_name = forms.CharField(required=True, label=_("Last Name"))
class Meta:
model = User
fields = set([User.USERNAME_FIELD]) | standard_fields
widgets = {
'groups': forms.CheckboxSelectMultiple
}
def clean_username(self):
username_field = User.USERNAME_FIELD
username = self.cleaned_data[username_field]
try:
User._default_manager.get(**{username_field: username})
except User.DoesNotExist:
return username
raise forms.ValidationError(
self.error_messages['duplicate_username'],
code='duplicate_username',
)
def clean_password2(self):
password1 = self.cleaned_data.get("password1")
password2 = self.cleaned_data.get("password2")
if password1 and password2 and password1 != password2:
raise forms.ValidationError(
self.error_messages['password_mismatch'],
code='password_mismatch',
)
return password2
def save(self, commit=True):
user = super(UserCreationForm, self).save(commit=False)
user.set_password(self.cleaned_data["password1"])
# users can access django-admin iff they are a superuser
user.is_staff = user.is_superuser
if commit:
user.save()
self.save_m2m()
return user
# Largely the same as django.contrib.auth.forms.UserCreationForm, but with enough subtle changes
# (to make password non-required) that it isn't worth inheriting...
class UserEditForm(UsernameForm):
required_css_class = "required"
error_messages = {
'duplicate_username': _("A user with that username already exists."),
'password_mismatch': _("The two password fields didn't match."),
}
email = forms.EmailField(required=True, label=_("Email"))
first_name = forms.CharField(required=True, label=_("First Name"))
last_name = forms.CharField(required=True, label=_("Last Name"))
password1 = forms.CharField(
label=_("Password"),
required=False,
widget=forms.PasswordInput,
help_text=_("Leave blank if not changing."))
password2 = forms.CharField(
label=_("Password confirmation"), required=False,
widget=forms.PasswordInput,
help_text=_("Enter the same password as above, for verification."))
is_superuser = forms.BooleanField(
label=_("Administrator"),
required=False,
help_text=_("Administrators have the ability to manage user accounts.")
)
class Meta:
model = User
fields = set([User.USERNAME_FIELD, "is_active"]) | standard_fields
widgets = {
'groups': forms.CheckboxSelectMultiple
}
def clean_username(self):
# Since User.username is unique, this check is redundant,
# but it sets a nicer error message than the ORM. See #13147.
username = self.cleaned_data["username"]
username_field = User.USERNAME_FIELD
try:
User._default_manager.exclude(id=self.instance.id).get(**{
username_field: username})
except User.DoesNotExist:
return username
raise forms.ValidationError(self.error_messages['duplicate_username'])
def clean_password2(self):
password1 = self.cleaned_data.get("password1")
password2 = self.cleaned_data.get("password2")
if password1 != password2:
raise forms.ValidationError(
self.error_messages['password_mismatch'])
return password2
def save(self, commit=True):
user = super(UserEditForm, self).save(commit=False)
# users can access django-admin iff they are a superuser
user.is_staff = user.is_superuser
if self.cleaned_data["password1"]:
user.set_password(self.cleaned_data["password1"])
if commit:
user.save()
self.save_m2m()
return user
class GroupForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(GroupForm, self).__init__(*args, **kwargs)
self.registered_permissions = Permission.objects.none()
for fn in hooks.get_hooks('register_permissions'):
self.registered_permissions = self.registered_permissions | fn()
self.fields['permissions'].queryset = self.registered_permissions
required_css_class = "required"
error_messages = {
'duplicate_name': _("A group with that name already exists."),
}
is_superuser = forms.BooleanField(
label=_("Administrator"),
required=False,
help_text=_("Administrators have the ability to manage user accounts.")
)
class Meta:
model = Group
fields = ("name", "permissions", )
def clean_name(self):
# Since Group.name is unique, this check is redundant,
# but it sets a nicer error message than the ORM. See #13147.
name = self.cleaned_data["name"]
try:
Group._default_manager.exclude(id=self.instance.id).get(name=name)
except Group.DoesNotExist:
return name
raise forms.ValidationError(self.error_messages['duplicate_name'])
def save(self):
# We go back to the object to read (in order to reapply) the
# permissions which were set on this group, but which are not
# accessible in the wagtail admin interface, as otherwise these would
# be clobbered by this form.
try:
untouchable_permissions = self.instance.permissions.exclude(pk__in=self.registered_permissions)
bool(untouchable_permissions) # force this to be evaluated, as it's about to change
except ValueError:
# this form is not bound; we're probably creating a new group
untouchable_permissions = []
group = super(GroupForm, self).save()
group.permissions.add(*untouchable_permissions)
return group
class GroupPagePermissionForm(forms.ModelForm):
page = forms.ModelChoiceField(queryset=Page.objects.all(),
widget=AdminPageChooser(show_edit_link=False))
class Meta:
model = GroupPagePermission
fields = ('page', 'permission_type')
class BaseGroupPagePermissionFormSet(forms.models.BaseInlineFormSet):
def __init__(self, *args, **kwargs):
super(BaseGroupPagePermissionFormSet, self).__init__(*args, **kwargs)
self.form = GroupPagePermissionForm
for form in self.forms:
form.fields['DELETE'].widget = forms.HiddenInput()
@property
def empty_form(self):
empty_form = super(BaseGroupPagePermissionFormSet, self).empty_form
empty_form.fields['DELETE'].widget = forms.HiddenInput()
return empty_form
GroupPagePermissionFormSet = inlineformset_factory(
Group,
GroupPagePermission,
formset=BaseGroupPagePermissionFormSet,
extra=0,
fields=('page', 'permission_type'),
)
class NotificationPreferencesForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(NotificationPreferencesForm, self).__init__(*args, **kwargs)
user_perms = UserPagePermissionsProxy(self.instance.user)
if not user_perms.can_publish_pages():
del self.fields['submitted_notifications']
if not user_perms.can_edit_pages():
del self.fields['approved_notifications']
del self.fields['rejected_notifications']
class Meta:
model = UserProfile
fields = ("submitted_notifications", "approved_notifications", "rejected_notifications")
|
Klaudit/wagtail
|
wagtail/wagtailusers/forms.py
|
Python
|
bsd-3-clause
| 10,300
|
"""
These meta-datasources operate on :class:`revscoring.Datasource`'s that
return a list of items and produce a list of portable hashes.
.. autoclass:: revscoring.datasources.meta.hashing.hash
"""
import json
import mmh3
from ..datasource import Datasource
class hash(Datasource):
"""
Converts a sequence of items into a sequence of portable hashes (`int`)
based on the result of applying `str()`. E.g. `str(["foo"]) = '["foo"]'`
:Parameters:
items_datasource : :class:`revscoring.Datasource`
A datasource that generates a list of items to be hashed
n : `int`
The number of potential hashes that can be produced
name : `str`
A name for the datasource.
"""
def __init__(self, items_datasource, n=2 ** 20, name=None):
name = self._format_name(name, [items_datasource, n])
super().__init__(name, self.process,
depends_on=[items_datasource])
self.n = n
def process(self, items):
return [mmh3_item(item, self.n) for item in items]
def mmh3_item(item, n):
return (2**32 + mmh3.hash(json.dumps(item))) % n
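# A minimal sanity-check sketch (requires the mmh3 package; hypothetical,
# not part of the original module): identical items always land in the
# same bucket within [0, n).
if __name__ == '__main__':
    assert mmh3_item("foo", 2 ** 20) == mmh3_item("foo", 2 ** 20)
    assert 0 <= mmh3_item(["a", "b"], 2 ** 20) < 2 ** 20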
|
yafeunteun/wikipedia-spam-classifier
|
revscoring/revscoring/datasources/meta/hashing.py
|
Python
|
mit
| 1,189
|
try:
from xml.etree import cElementTree as ElementTree
except ImportError:
from xml.etree import ElementTree
import types
#####################
# String operations #
#####################
def normalize_value(val):
"""
Normalize strings with booleans into Python types.
"""
if val is not None:
if val.lower() == 'false':
val = False
elif val.lower() == 'true':
val = True
return val
def normalize_dictionary_values(dictionary):
"""
    Normalizes the values in a dictionary recursively.
"""
for key, val in dictionary.iteritems():
if isinstance(val, dict):
dictionary[key] = normalize_dictionary_values(val)
elif isinstance(val, list):
dictionary[key] = list(val)
else:
dictionary[key] = normalize_value(val)
return dictionary
def smart_str(s, encoding='utf-8', strings_only=False, errors='strict'):
"""
Returns a bytestring version of 's', encoded as specified in 'encoding'.
If strings_only is True, don't convert (some) non-string-like objects.
Source:
django.utils.encoding.smart_str
"""
if strings_only and isinstance(s, (types.NoneType, int)):
return s
elif not isinstance(s, basestring):
try:
return str(s)
except UnicodeEncodeError:
if isinstance(s, Exception):
# An Exception subclass containing non-ASCII data that doesn't
# know how to print itself properly. We shouldn't raise a
# further exception.
return ' '.join([smart_str(arg, encoding, strings_only,
errors) for arg in s])
return unicode(s).encode(encoding, errors)
elif isinstance(s, unicode):
return s.encode(encoding, errors)
elif s and encoding != 'utf-8':
return s.decode('utf-8', errors).encode(encoding, errors)
else:
return s
#####################
# XML to dictionary #
#####################
class XmlListConfig(list):
def __init__(self, aList):
for element in aList:
if element:
# treat like dict
if len(element) == 1 or element[0].tag != element[1].tag:
self.append(XmlDictConfig(element))
# treat like list
elif element[0].tag == element[1].tag:
self.append(XmlListConfig(element))
elif element.text:
text = element.text.strip()
if text:
self.append(text)
class XmlDictConfig(dict):
"""
Converts XML to Python dictionaries.
Source:
http://code.activestate.com/recipes/410469-xml-as-dictionary/
Example usage:
>>> tree = ElementTree.parse('your_file.xml')
>>> root = tree.getroot()
>>> xmldict = XmlDictConfig(root)
Or, if you want to use an XML string:
>>> root = ElementTree.XML(xml_string)
>>> xmldict = XmlDictConfig(root)
"""
def __init__(self, parent_element):
if parent_element.items():
self.update(dict(parent_element.items()))
for element in parent_element:
if len(element) > 0:
# treat like dict - we assume that if the first two tags
# in a series are different, then they are all different.
if len(element) == 1 or element[0].tag != element[1].tag:
aDict = XmlDictConfig(element)
# treat like list - we assume that if the first two tags
# in a series are the same, then the rest are the same.
else:
# here, we put the list in dictionary; the key is the
# tag name the list elements all share in common, and
# the value is the list itself
aDict = {element[0].tag: XmlListConfig(element)}
# if the tag has attributes, add those to the dict
if element.items():
aDict.update(dict(element.items()))
self.update({element.tag: aDict})
# this assumes that if you've got an attribute in a tag,
# you won't be having any text. This may or may not be a
# good idea -- time will tell. It works for the way we are
# currently doing XML configuration files...
elif element.items():
self.update({element.tag: dict(element.items())})
# finally, if there are no child tags and no attributes, extract
# the text
else:
self.update({element.tag: element.text})
|
funkbit/pypayex
|
payex/utils.py
|
Python
|
bsd-2-clause
| 4,857
|
# Copyright 2018 TVB-HPC contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .base import BaseKernel
class Covar:
template = """
// stable one-pass co-moment algo, cf wikipedia
__kernel void update_cov(int i_sample,
int n_node,
__global float *cov,
__global float *means,
__global float *data)
{
int it = get_global_id(0), nt = get_global_size(0);
if (i_sample == 0)
{
for (int i_node = 0; i_node < n_node; i_node++)
means[i_node * nt + it] = data[i_node * nt + it];
return;
}
float recip_n = 1.0f / i_sample;
// double buffer to avoid copying memory
__global float *next_mean = means, *prev_mean = means;
if (i_sample % 2 == 0) {
prev_mean += n_node * nt;
} else {
next_mean += n_node * nt;
}
for (int i_node = 0; i_node < n_node; i_node++)
{
int i_idx = i_node * nt + it;
next_mean[i_idx] = prev_mean[i_idx] \
+ (data[i_idx] - prev_mean[i_idx]) * recip_n;
}
for (int i_node = 0; i_node < n_node; i_node++)
{
int i_idx = i_node * nt + it;
float data_mean_i = data[i_idx] - prev_mean[i_idx];
for (int j_node = 0; j_node < n_node; j_node++)
{
int j_idx = j_node * nt + it;
float data_mean_j = data[j_idx] - next_mean[j_idx];
int cij_idx = (j_node * n_node + i_node) * nt + it;
cov[cij_idx] += data_mean_j * data_mean_i;
}
}
}
"""
class BatchCov(BaseKernel):
domains = '{[i,j,t]: 0<=i<n and 0<=j<n and 0<t<m}'
dtypes = {'cov,x,u': 'f', 'm,n': 'i'}
instructions = """
for i
u[i] = sum(t, x[t, i])
end
for i
for j
cov[j, i] = sum(t, (x[t, i] - u[i]) * (x[t, j] - u[j]))
end
end
"""
class OnlineCov(BaseKernel):
domains = '{[i,j]: 0<=i<n and 0<=j<n}'
dtypes = {'cov,x,u0,u1': 'f', 't,n': 'i'}
instructions = """
if (t == 0)
for i
u0[i] = x[i]
end
end
for i
u1[i] = u0[i] + (x[i] - u0[i]) / t
end
for i
<> dui = x[i] - u0[i]
for j
<> duj = x[j] - u1[j]
cov[j, i] = cov[j, i] + duj * dui
end
end
"""
class CovToCorr:
template = """
__kernel void cov_to_corr(int n_sample, int n_node,
__global float *cov,
__global float *corr)
{
int it = get_global_id(0), nt = get_global_size(0);
float recip_n_samp = 1.0f / n_sample;
// normalize comoment to covariance
for (int ij = 0; ij < (n_node * n_node); ij++)
cov[ij*nt + it] *= recip_n_samp;
// compute correlation coefficient
#define COV(i, j) cov[(i*n_node + j)*nt + it]
#define CORR(i, j) corr[(i*n_node + j)*nt + it]
for (int i = 0; i < n_node; i++)
{
float var_i = COV(i, i);
for (int j = 0; j < n_node; j++)
{
float var_j = COV(j, j);
CORR(i, j) = COV(i, j) / sqrt(var_i * var_j);
}
}
}
"""
|
the-virtual-brain/tvb-hpc
|
tvb_hpc/metric.py
|
Python
|
apache-2.0
| 3,683
|
# -*- coding: utf-8 -*-
from django.conf.urls import patterns, url
from website.views import IndexView
urlpatterns = patterns('',
url(r'^$', IndexView.as_view(),
name='website-index'),
url(r'^contact/$', 'website.views.contact',
name='website-contact'),
# -------------------------------------------------------------------------
# API views
# -------------------------------------------------------------------------
url(r'^api/contact/$',
'website.views.api_contact',
name='api-website-contact'),
url(r'^api/task/result/(?P<task_id>.+)/$',
'website.views.api_task_result',
name='api-website-task-result'),
url(r'^api/server-info/$',
'website.views.api_server_info',
name='api-website-server-info-no-token'),
url(r'^api/server-info/(?P<update_token>\w+)/$',
'website.views.api_server_info',
name='api-website-server-info'),
)
|
IL2HorusTeam/il2ds-events-commander
|
il2ec/apps/website/urls.py
|
Python
|
gpl-2.0
| 948
|
#!/usr/bin/python
import picamera
import time
import pygame
import tty
import sys
import os
from PIL import Image
from PIL import ImageOps
cam = picamera.PiCamera()
cam.hflip = True
cam.vflip=True
#cam.start_preview()
#orig_settings = termios.tcgetattr(sys.stdin)
tty.setraw(sys.stdin)
x = 0
i=0
while x != chr(27):
x = sys.stdin.read(1)[0]
if x == "w":
now=time.time()
file_string ="data/forward/img_" + time.strftime("%y%m%d_%H-%M-%S") + ".jpg"
cam.capture(file_string)
file_string ="data/forward/img_" + time.strftime("%y%m%d_%H-%M-%S") + ".jpg"
cam.capture(file_string)
then=time.time()
print("%s function took %f ms" % (file_string, (then-now)*1000.0))
elif x == "d":
file_string ="data/right/img_" + time.strftime("%y%m%d_%H-%M-%S") + ".jpg"
cam.capture(file_string)
# cam.stop_preview()
#var = raw_input("Command:")
#i=0
#while var != 'q':
# if var == 'w':
# y = "testimage%s.jpg"%(i)
# cam.capture(y)
# print(y)
# i=i+1
# else:
# print('invalid')
# var = raw_input("Command:")
|
qdm8t2/capstone-autonomous-car
|
Data_Capture_Mode.py
|
Python
|
mit
| 1,087
|
### BEGIN LICENSE
# Copyright (C) 2012 Owais Lone <hello@owaislone.org>
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 3, as published
# by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranties of
# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
# PURPOSE. See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program. If not, see <http://www.gnu.org/licenses/>.
### END LICENSE
#from gi.repository import Gtk, Unity, Notify, Dbusmenu, TelepathyGLib
from gi.repository import Gtk, Notify, TelepathyGLib
Notify.init('fogger')
TELEPATHY_PRESENCE_MAP = {
TelepathyGLib.ConnectionPresenceType.AVAILABLE: 'available',
TelepathyGLib.ConnectionPresenceType.AWAY: 'away',
TelepathyGLib.ConnectionPresenceType.EXTENDED_AWAY: 'away',
TelepathyGLib.ConnectionPresenceType.BUSY: 'busy',
TelepathyGLib.ConnectionPresenceType.OFFLINE: 'offline',
}
class DesktopBridge:
icon_name = None
desktop_file = None
actions = {}
launcher_actions = {}
def __init__(self, root, desktop_file, icon_name=None):
self.W = root
self.desktop_file = desktop_file
self.icon_name = icon_name
#self.launcher_entry = Unity.LauncherEntry.get_for_desktop_file(self.desktop_file)
#self.quicklist = Dbusmenu.Menuitem.new()
#self.launcher_entry.set_property("quicklist", self.quicklist)
self.launcher_entry = None
self.quicklist = None
self.indicator = None
#self._rename_methods = {
# Dbusmenu.Menuitem: self._rename_dbus_menu_item,
# Gtk.MenuItem: self._rename_gtk_menu_item,
#}
self.launcher_entry = None
self.quicklist = None
self.indicator = None
self._rename_methods = { }
self.telepathy_account_manager = TelepathyGLib.AccountManager.new(
TelepathyGLib.DBusDaemon.dup())
self.telepathy_account_manager.connect(
'most-available-presence-changed', self.notify_presence_change)
self.W.connect('notify::is-active', self.notify_window_state)
def _js(self, W, jscode):
if W:
            windows = [W]
else:
windows = self.W.popups
for win in windows:
win.webview.execute_script(jscode)
def _dispatch_dom_event(self, W, event, params):
js = 'var e = document.createEvent("Event"); e.initEvent("%s"); var params={};' % event
for k,v in params.items():
js = js + 'params.%s = "%s";' % (k, v,)
js = js + 'e.foggerData = params; document.dispatchEvent(e);'
self._js(W, js)
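    # For example, _dispatch_dom_event(W, 'foo', {'k': 'v'}) executes roughly
    # this JS in the page (a sketch derived from the string building above):
    #   var e = document.createEvent("Event"); e.initEvent("foo");
    #   var params={}; params.k = "v";
    #   e.foggerData = params; document.dispatchEvent(e);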
def _rename_gtk_menu_item(self, item, name):
item.props.label = name
def _rename_dbus_menu_item(self, item, name):
item.property_set(Dbusmenu.MENUITEM_PROP_LABEL, name)
def rename_item(self, W, data):
widget_id = data['id'][0]
item = self.widgets.get(widget_id)
rename = self._rename_methods.get(item.__class__)
if rename:
rename(item, data['name'][0])
def notify_window_state(self, window, active):
self._dispatch_dom_event(self.W, 'foggerWindowStateChange', {
'active': self.W.props.is_active})
def notify_presence_change(self, manager, presence, status, message):
self._dispatch_dom_event(None, 'foggerPresenceChange',
{'presence': TELEPATHY_PRESENCE_MAP.get(presence, 'offline')})
def get_presence(self, W, data):
presence = self.telepathy_account_manager.get_most_available_presence()
print W
def notify(self, W, data):
Notify.Notification.new(data.get('summary', [''])[0], data.get('body', [''])[0], self.icon_name).show()
def set_progress(self, W, data):
self.launcher_entry.props.progress = float(data['progress'][0])
self.launcher_entry.props.progress_visible = True
def clear_progress(self, W, data):
self.launcher_entry.props.progress_visible = False
def set_count(self, W, data):
self.launcher_entry.props.count = int(data['count'][0])
self.launcher_entry.props.count_visible = True
def clear_count(self, W, data):
self.launcher_entry.props.count_visible = False
def set_urgent(self, W, data):
self.launcher_entry.props.urgent = True
def clear_urgent(self, W, data):
self.launcher_entry.props.urgent = False
def add_launcher_action(self, W, data):
name = data['name'][0]
if name in self.launcher_actions:
return
item = self.launcher_actions[name] = Dbusmenu.Menuitem.new()
item.property_set(Dbusmenu.MENUITEM_PROP_LABEL, name)
item.property_set_bool(Dbusmenu.MENUITEM_PROP_VISIBLE, True)
item.connect('item-activated', lambda *a, **kw:
self._dispatch_dom_event(None, 'foggerQLCallbackEvent',
{'name': name}))
self.quicklist.child_append(item)
def remove_launcher_action(self, W, data):
name = data['name'][0]
action = self.launcher_actions.get(name)
if action:
self.quicklist.child_delete(action)
del self.launcher_actions[name]
def remove_launcher_actions(self, W, data):
for action in self.launcher_actions.values():
self.quicklist.child_delete(action)
self.launcher_actions = {}
def add_action(self, W, data):
action_path = data['name'][0]
if action_path in self.actions:
return
action_items = []
action_parts = [X for X in action_path.split('/') if X]
parent = self.W.menubar
length = len(action_parts)
for i, action in enumerate(action_parts, 1):
if i == length:
prepend = False
if i == 1:
parent = self.W.menubar.get_children()[0].get_submenu()
prepend = True
if [X for X in parent.get_children() if X.props.label == action]:
return
item = Gtk.MenuItem(action)
item.props.use_underline = True
if prepend:
parent.prepend(item)
else:
parent.append(item)
item.connect('activate', lambda *a, **kw:
self._dispatch_dom_event(W, 'foggerActionCallbackEvent',
{'name': action_path}))
item.show()
action_items.append(item)
else:
children = [X.props.label for X in parent.get_children()]
if action in children:
parent = [X for X in parent.get_children() if X.props.label == action][0].get_submenu()
action_items.append(parent)
else:
menu = Gtk.MenuItem(action)
menu.props.use_underline = True
menu.set_submenu(Gtk.Menu())
parent.append(menu)
parent = menu.get_submenu()
menu.show()
action_items.append(menu)
self.actions[action_path] = action_items
def remove_action(self, W, data):
action_path = data['name'][0]
action_items = self.actions.get(action_path)
if action_items:
for item in reversed(action_items):
if isinstance(item, Gtk.MenuItem):
submenu = item.get_submenu()
else:
submenu = item
if not submenu or not submenu.get_children():
action_items.remove(item)
item.destroy()
del self.actions[action_path]
def remove_actions(self, W, data):
for value in self.actions.values():
root = value[0]
root.destroy()
self.actions = {}
|
andrenam/Fogger
|
fogger_lib/Bridge.py
|
Python
|
gpl-3.0
| 8,215
|
# coding: utf-8
#
# Copyright 2018 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Domain objects for Classroom."""
from __future__ import absolute_import # pylint: disable=import-only-modules
from __future__ import unicode_literals # pylint: disable=import-only-modules
import python_utils
class Classroom(python_utils.OBJECT):
"""Domain object for a classroom."""
def __init__(
self, name, url_fragment, topic_ids,
course_details, topic_list_intro):
"""Constructs a Classroom domain object.
Args:
name: str. The name of the classroom.
url_fragment: str. The url fragment of the classroom.
topic_ids: list(str). List of topic ids attached to the classroom.
course_details: str. Course details for the classroom.
topic_list_intro: str. Topic list introduction for the classroom.
"""
self.name = name
self.url_fragment = url_fragment
self.topic_ids = topic_ids
self.course_details = course_details
self.topic_list_intro = topic_list_intro
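# A minimal usage sketch (illustrative values only, not part of the module):
#   classroom = Classroom(
#       'math', 'math', ['topic_id_1'], 'Course details.', 'Topic list intro.')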
|
prasanna08/oppia
|
core/domain/classroom_domain.py
|
Python
|
apache-2.0
| 1,646
|
# -*- coding: utf-8 -*-
# Licensed under the Apache-2.0 License, see LICENSE for details.
import logging
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())
import six
import ctypes
import numpy as np
from numpy.ctypeslib import ndpointer
from .constants import RS_API_VERSION, rs_stream, rs_option
from .stream import ColorStream, DepthStream, PointStream, CADStream, DACStream, InfraredStream
from .extstruct import rs_error, rs_intrinsics, rs_extrinsics, rs_context, rs_device
from .utils import pp, _check_error, RealsenseError, StreamMode, DeviceOptionRange
from .extlib import lrs, rsutilwrapper
class Service(object):
"""Context manager for librealsense service."""
def __init__(self):
super(Service, self).__init__()
self.ctx = None
self.start()
def start(self):
"""Start librealsense service."""
e = ctypes.POINTER(rs_error)()
if not self.ctx:
lrs.rs_create_context.restype = ctypes.POINTER(rs_context)
self.ctx = lrs.rs_create_context(RS_API_VERSION, ctypes.byref(e))
_check_error(e)
# mirror librealsense behaviour of printing number of connected devices
n_devices = lrs.rs_get_device_count(self.ctx, ctypes.byref(e))
_check_error(e)
logger.info('There are {} connected RealSense devices.'.format(n_devices))
def stop(self):
"""Stop librealsense service."""
if self.ctx:
e = ctypes.POINTER(rs_error)()
lrs.rs_delete_context(self.ctx, ctypes.byref(e))
_check_error(e)
self.ctx = None
def get_devices(self):
"""Returns a generator that yields a dictionnary containing 'id', 'name', 'serial',
'firmware' and 'is_streaming' keys.
"""
e = ctypes.POINTER(rs_error)()
n_devices = lrs.rs_get_device_count(self.ctx, ctypes.byref(e))
_check_error(e)
lrs.rs_get_device.restype = ctypes.POINTER(rs_device)
for idx in range(n_devices):
dev = lrs.rs_get_device(self.ctx, idx, ctypes.byref(e))
_check_error(e)
name = pp(lrs.rs_get_device_name, dev, ctypes.byref(e))
_check_error(e)
serial = pp(lrs.rs_get_device_serial, dev, ctypes.byref(e))
_check_error(e)
version = pp(lrs.rs_get_device_firmware_version, dev, ctypes.byref(e))
_check_error(e)
is_streaming = lrs.rs_is_device_streaming(dev, ctypes.byref(e))
_check_error(e)
yield {'id': idx, 'name': name, 'serial': serial,
'firmware': version, 'is_streaming': is_streaming}
def get_device_modes(self, device_id):
"""Generates all different modes for the device which `id` is provided.
Args:
device_id (int): the device id as hinted by the output from :func:`start` or :func:`get_devices`.
Returns: :obj:`generator` that yields all possible streaming modes as :obj:`StreamMode`.
"""
e = ctypes.POINTER(rs_error)()
dev = lrs.rs_get_device(self.ctx, device_id, ctypes.byref(e))
_check_error(e)
for stream_id in range(rs_stream.RS_STREAM_COUNT):
mode_count = lrs.rs_get_stream_mode_count(dev, stream_id, ctypes.byref(e))
_check_error(e)
for idx in range(mode_count):
width = ctypes.c_int()
height = ctypes.c_int()
fmt = ctypes.c_int()
fps = ctypes.c_int()
lrs.rs_get_stream_mode(dev, stream_id, idx,
ctypes.byref(width),
ctypes.byref(height),
ctypes.byref(fmt),
ctypes.byref(fps),
ctypes.byref(e))
_check_error(e)
yield StreamMode(stream_id, width.value, height.value,
fmt.value, fps.value)
def is_device_streaming(self, device_id):
"""Indicates if device is streaming.
Utility function which does not require to enumerate all devices
or to initialize a Device object.
"""
e = ctypes.POINTER(rs_error)()
lrs.rs_get_device.restype = ctypes.POINTER(rs_device)
dev = lrs.rs_get_device(self.ctx, device_id, ctypes.byref(e))
_check_error(e)
is_streaming = lrs.rs_is_device_streaming(dev, ctypes.byref(e))
_check_error(e)
return is_streaming
def Device(self, *args, **kwargs):
"""Factory function which returns a :obj:`Device`, also accepts optionnal arguments.
"""
return Device(self, *args, **kwargs)
def __enter__(self):
self.start()
return self
def __exit__(self, *args):
self.stop()
def __del__(self):
self.stop()
def __nonzero__(self):
return bool(self.ctx)
def __bool__(self):
return bool(self.ctx)
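# A minimal, hedged usage sketch of Service (assumes at least one connected
# camera; the keys mirror those yielded by get_devices above):
#
#     with Service() as serv:
#         for dev_info in serv.get_devices():
#             print(dev_info['id'], dev_info['name'], dev_info['serial'])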
def Device(service, device_id=0, streams=None, depth_control_preset=None, ivcam_preset=None):
"""Camera device, which subclass :class:`DeviceBase` and create properties for each input
streams to expose their data. It should be instantiated through :func:`Service.Device`.
Args:
service (:obj:`Service`): any running service.
device_id (int): the device id as hinted by the output from :func:`start`.
streams (:obj:`list` of :obj:`pyrealsense.stream.Stream`): if None, all streams will be
enabled with their default parameters (e.g `640x480@30FPS`)
depth_control_preset (int): optional preset to be applied.
ivcam_preset (int): optional preset to be applied with input value from
:obj:`pyrealsense.constants.rs_ivcam_preset`.
Returns:
A subclass of :class:`DeviceBase` which class name includes the device serial number.
"""
e = ctypes.POINTER(rs_error)()
assert service.ctx, 'Service needs to be started'
ctx = service.ctx
if streams is None:
streams = [ColorStream(), DepthStream(), PointStream(), CADStream(), DACStream(), InfraredStream()]
lrs.rs_get_device.restype = ctypes.POINTER(rs_device)
dev = lrs.rs_get_device(ctx, device_id, ctypes.byref(e))
_check_error(e)
name = pp(lrs.rs_get_device_name, dev, ctypes.byref(e))
_check_error(e)
serial = pp(lrs.rs_get_device_serial, dev, ctypes.byref(e))
_check_error(e)
version = pp(lrs.rs_get_device_firmware_version, dev, ctypes.byref(e))
_check_error(e)
logger.info("Using device {}, an {}".format(device_id, name))
logger.info(" Serial number: {}".format(serial))
logger.info(" Firmware version: {}".format(version))
# create a new class for the device
class_name = name.split(" ")[-1] + "-" + serial
NewDevice = type(class_name, (DeviceBase,), dict())
nd = NewDevice(dev, device_id, name, serial, version, streams)
if nd.is_streaming():
# Device is already running.
# It is not possible to enable further streams.
return nd
# enable the stream and start device
for s in streams:
if s.native:
lrs.rs_enable_stream(dev, s.stream, s.width, s.height, s.format, s.fps, ctypes.byref(e))
_check_error(e)
lrs.rs_start_device(dev, ctypes.byref(e))
# depth control preset
if depth_control_preset:
rsutilwrapper.apply_depth_control_preset(dev, depth_control_preset)
# ivcam preset
if ivcam_preset:
rsutilwrapper.apply_ivcam_preset(dev, ivcam_preset)
# add stream property and intrinsics
for s in streams:
if s.native:
setattr(NewDevice, s.name + '_intrinsics', nd._get_stream_intrinsics(s.stream))
setattr(NewDevice, s.name, property(nd._get_stream_data_closure(s)))
    # manually add the depth_scale and pointcloud properties
for s in streams:
if s.name == 'depth':
setattr(NewDevice, 'depth_scale', property(lambda x: nd._get_depth_scale()))
if s.name == 'points':
setattr(NewDevice, 'pointcloud', property(lambda x: nd._get_pointcloud()))
return nd
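# A hedged sketch of using the generated class: the factory attaches one data
# property per native stream (e.g. `color`, `depth`), plus `depth_scale` and
# `pointcloud` when the matching streams are enabled:
#
#     with Service() as serv:
#         dev = serv.Device()      # all streams at their defaults
#         dev.wait_for_frames()
#         rgb = dev.color          # ndarray shaped by ColorStream
#         xyz = dev.pointcloud     # (height, width, 3) float32 array
#         dev.stop()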
class DeviceBase(object):
"""Camera device base class which is called via the :func:`Device` factory. It
exposes the main functions from librealsense.
"""
def __init__(self, dev, device_id, name, serial, version, streams):
super(DeviceBase, self).__init__()
assert dev, 'Device was not initialized correctly'
self.dev = dev
self.device_id = device_id
self.name = name
self.serial = serial
self.version = version
self.streams = streams
def stop(self):
"""End data acquisition.
Raises:
:class:`utils.RealsenseError`: in case librealsense reports a problem.
"""
if self.dev and self.is_streaming():
e = ctypes.POINTER(rs_error)()
lrs.rs_stop_device(self.dev, ctypes.byref(e))
_check_error(e)
self.dev = None
def is_streaming(self):
"""Indicates if device is streaming.
Returns:
(bool): return value of `lrs.rs_is_device_streaming`.
"""
if self.dev:
e = ctypes.POINTER(rs_error)()
is_streaming = lrs.rs_is_device_streaming(self.dev, ctypes.byref(e))
_check_error(e)
return bool(is_streaming)
else:
return False
def poll_for_frame(self):
"""Check if new frames are available, without blocking.
Returns:
int: 1 if new frames are available, 0 if no new frames have arrived
Raises:
:class:`utils.RealsenseError`: in case librealsense reports a problem.
"""
e = ctypes.POINTER(rs_error)()
res = lrs.rs_poll_for_frames(self.dev, ctypes.byref(e))
_check_error(e)
return res
def wait_for_frames(self):
"""Block until new frames are available.
Raises:
:class:`utils.RealsenseError`: in case librealsense reports a problem.
"""
e = ctypes.POINTER(rs_error)()
lrs.rs_wait_for_frames(self.dev, ctypes.byref(e))
_check_error(e)
def get_frame_timestamp(self, stream):
"""Retrieve the time at which the latest frame on a specific stream was captured.
Args:
stream (int): stream id
Returns:
(long): timestamp
"""
lrs.rs_get_frame_timestamp.restype = ctypes.c_double
e = ctypes.POINTER(rs_error)()
return lrs.rs_get_frame_timestamp(self.dev, stream, ctypes.byref(e))
def get_frame_number(self, stream):
"""Retrieve the frame number for specific stream.
Args:
stream (int): value from :class:`pyrealsense.constants.rs_stream`.
Returns:
(double): frame number.
"""
lrs.rs_get_frame_number.restype = ctypes.c_ulonglong
e = ctypes.POINTER(rs_error)()
return lrs.rs_get_frame_number(self.dev, stream, ctypes.byref(e))
def get_device_extrinsics(self, from_stream, to_stream):
"""Retrieve extrinsic transformation between the viewpoints of two different streams.
Args:
from_stream (:class:`pyrealsense.constants.rs_stream`): from stream.
to_stream (:class:`pyrealsense.constants.rs_stream`): to stream.
Returns:
(:class:`pyrealsense.extstruct.rs_extrinsics`): extrinsics parameters as a structure
"""
e = ctypes.POINTER(rs_error)()
_rs_extrinsics = rs_extrinsics()
lrs.rs_get_device_extrinsics(
self.dev,
from_stream,
to_stream,
ctypes.byref(_rs_extrinsics),
ctypes.byref(e))
_check_error(e)
return _rs_extrinsics
def get_device_modes(self):
"""Returns a generator that yields all possible streaming modes as :obj:`StreamMode`."""
e = ctypes.POINTER(rs_error)()
for stream in self.streams:
mode_count = lrs.rs_get_stream_mode_count(self.dev, stream.stream, ctypes.byref(e))
_check_error(e)
for idx in range(mode_count):
width = ctypes.c_int()
height = ctypes.c_int()
fmt = ctypes.c_int()
fps = ctypes.c_int()
lrs.rs_get_stream_mode(self.dev, stream.stream, idx,
ctypes.byref(width),
ctypes.byref(height),
ctypes.byref(fmt),
ctypes.byref(fps),
ctypes.byref(e))
_check_error(e)
yield StreamMode(stream.stream, width.value, height.value,
fmt.value, fps.value)
def get_available_options(self):
"""Returns available options as a list of (:obj:`DeviceOptionRange`, value).
"""
avail_opt_ranges = []
for option in range(rs_option.RS_OPTION_COUNT):
try:
opt_range = self.get_device_option_range_ex(option)
except RealsenseError:
pass
else:
avail_opt_ranges.append(opt_range)
avail_opt = [r.option for r in avail_opt_ranges]
return six.moves.zip(avail_opt_ranges, self.get_device_options(avail_opt))
def get_device_options(self, options):
"""Get device options.
Args:
            options (:obj:`list` of int): taken from :class:`pyrealsense.constants.rs_option`.
Returns:
(:obj:`iter` of double): options values.
"""
e = ctypes.POINTER(rs_error)()
current_values = (ctypes.c_double*len(options))()
option_array_type = ctypes.c_int*len(options)
option_array = option_array_type(*options)
lrs.rs_get_device_options.argtypes = [ctypes.POINTER(rs_device),
option_array_type,
ctypes.c_int,
ctypes.POINTER(ctypes.c_double),
ctypes.POINTER(ctypes.POINTER(rs_error))]
lrs.rs_get_device_options.restype = None
lrs.rs_get_device_options(self.dev, option_array, len(options), current_values, ctypes.byref(e))
_check_error(e)
return iter(current_values)
def set_device_options(self, options, values):
"""Set device options.
Args:
            options (:obj:`list` of int): taken from :class:`pyrealsense.constants.rs_option`.
values (:obj:`list` of double): options values.
"""
assert len(options) == len(values)
e = ctypes.POINTER(rs_error)()
count = len(options)
option_array_type = ctypes.c_int * count
values_array_type = ctypes.c_double * count
lrs.rs_set_device_options.argtypes = [ctypes.POINTER(rs_device),
option_array_type,
ctypes.c_int,
values_array_type,
ctypes.POINTER(ctypes.POINTER(rs_error))]
lrs.rs_set_device_options.restype = None
c_options = option_array_type(*options)
c_values = values_array_type(*values)
lrs.rs_set_device_options(self.dev, c_options, count, c_values, ctypes.byref(e))
_check_error(e)
def get_device_option(self, option):
"""Get device option.
Args:
option (int): taken from :class:`pyrealsense.constants.rs_option`.
Returns:
(double): option value.
"""
lrs.rs_get_device_option.restype = ctypes.c_double
e = ctypes.POINTER(rs_error)()
return lrs.rs_get_device_option(self.dev, option, ctypes.byref(e))
def set_device_option(self, option, value):
"""Set device option.
Args:
option (int): taken from :class:`pyrealsense.constants.rs_option`.
value (double): value to be set for the option.
"""
e = ctypes.POINTER(rs_error)()
lrs.rs_set_device_option(self.dev, ctypes.c_uint(option), ctypes.c_double(value), ctypes.byref(e))
_check_error(e)
def get_device_option_range_ex(self, option):
"""Get device option range.
Args:
option (int): taken from :class:`pyrealsense.constants.rs_option`.
Returns:
(:obj:`DeviceOptionRange`): option range.
"""
e = ctypes.POINTER(rs_error)()
min_ = ctypes.c_double()
max_ = ctypes.c_double()
step = ctypes.c_double()
defv = ctypes.c_double()
lrs.rs_get_device_option_range_ex(self.dev, option, ctypes.byref(min_),
ctypes.byref(max_), ctypes.byref(step),
ctypes.byref(defv), ctypes.byref(e))
_check_error(e)
return DeviceOptionRange(option, min_.value, max_.value, step.value, defv.value)
def get_device_option_description(self, option):
"""Get the device option description.
Args:
option (int): taken from :class:`pyrealsense.constants.rs_option`.
Returns:
(str): option value.
"""
e = ctypes.POINTER(rs_error)()
return pp(lrs.rs_get_device_option_description, self.dev, ctypes.c_uint(option), ctypes.byref(e))
def reset_device_options_to_default(self, options):
"""Reset device options to default.
Args:
            options (:obj:`list` of int): taken from :class:`pyrealsense.constants.rs_option`.
"""
e = ctypes.POINTER(rs_error)()
count = len(options)
option_array_type = ctypes.c_int * count
lrs.rs_reset_device_options_to_default.argtypes = [ctypes.POINTER(rs_device),
option_array_type,
ctypes.c_int,
ctypes.POINTER(ctypes.POINTER(rs_error))]
lrs.rs_reset_device_options_to_default.restype = None
c_options = option_array_type(*options)
lrs.rs_reset_device_options_to_default(self.dev, c_options, count, ctypes.byref(e))
_check_error(e)
def _get_stream_intrinsics(self, stream):
e = ctypes.POINTER(rs_error)()
_rs_intrinsics = rs_intrinsics()
lrs.rs_get_stream_intrinsics(
self.dev,
stream,
ctypes.byref(_rs_intrinsics),
ctypes.byref(e))
        _check_error(e)
        return _rs_intrinsics
def _get_stream_data_closure(self, s):
def get_stream_data(s):
e = ctypes.POINTER(rs_error)()
lrs.rs_get_frame_data.restype = ndpointer(dtype=s.dtype, shape=s.shape)
try:
data = lrs.rs_get_frame_data(self.dev, s.stream, ctypes.byref(e))
except TypeError:
_check_error(e)
raise
else:
return data
return lambda x: get_stream_data(s)
def _get_depth_scale(self):
e = ctypes.POINTER(rs_error)()
lrs.rs_get_device_depth_scale.restype = ctypes.c_float
return lrs.rs_get_device_depth_scale(self.dev, ctypes.byref(e))
def _get_pointcloud(self):
ds = [s for s in self.streams if type(s) is DepthStream][0]
e = ctypes.POINTER(rs_error)()
lrs.rs_get_frame_data.restype = ndpointer(dtype=ctypes.c_uint16, shape=(ds.height, ds.width))
depth = lrs.rs_get_frame_data(self.dev, rs_stream.RS_STREAM_DEPTH, ctypes.byref(e))
pointcloud = np.zeros((ds.height * ds.width * 3), dtype=np.float32)
# ugly fix for outliers
depth[0, :2] = 0
rsutilwrapper.deproject_depth(pointcloud, self.depth_intrinsics, depth, self.depth_scale)
return pointcloud.reshape((ds.height, ds.width, 3))
def apply_ivcam_preset(self, preset):
"""Provide access to several recommend sets of option presets for ivcam.
Args:
preset (int): preset from (:obj:`pyrealsense.constants.rs_ivcam_preset`)
"""
rsutilwrapper.apply_ivcam_preset(self.dev, preset)
def project_point_to_pixel(self, point):
"""Project a 3d point to its 2d pixel coordinate by calling rsutil's
rs_project_point_to_pixel under the hood.
Args:
point (np.array): (x,y,z) coordinate of the point
Returns:
pixel (np.array): (x,y) coordinate of the pixel
"""
pixel = np.ones(2, dtype=np.float32) * np.NaN
rsutilwrapper.project_point_to_pixel(pixel, self.depth_intrinsics, point)
return pixel
def deproject_pixel_to_point(self, pixel, depth):
"""Deproject a 2d pixel to its 3d point coordinate by calling rsutil's
rs_deproject_pixel_to_point under the hood.
Args:
pixel (np.array): (x,y) coordinate of the point
depth (float): depth at that pixel
Returns:
point (np.array): (x,y,z) coordinate of the point
"""
point = np.ones(3, dtype=np.float32) * np.NaN
rsutilwrapper.deproject_pixel_to_point(point, self.depth_intrinsics, pixel, depth)
return point
def __enter__(self):
return self
def __exit__(self, *args):
self.stop()
def __del__(self):
self.stop()
    def __str__(self):
        return '{}(serial={}, firmware={})'.format(self.__class__.__name__,
                                                   self.serial, self.version)
def __nonzero__(self):
return self.is_streaming()
def __bool__(self):
return self.is_streaming()
|
toinsson/pyrealsense
|
pyrealsense/core.py
|
Python
|
apache-2.0
| 22,079
|
'''
Created on Oct 27, 2010
Logistic Regression Working Module
@author: Peter
'''
from numpy import *
def loadDataSet():
dataMat = []; labelMat = []
fr = open('testSet.txt')
for line in fr.readlines():
lineArr = line.strip().split()
dataMat.append([1.0, float(lineArr[0]), float(lineArr[1])])
labelMat.append(int(lineArr[2]))
return dataMat,labelMat
def sigmoid(inX):
return 1.0/(1+exp(-inX))
def gradAscent(dataMatIn, classLabels):
dataMatrix = mat(dataMatIn) #convert to NumPy matrix
labelMat = mat(classLabels).transpose() #convert to NumPy matrix
m,n = shape(dataMatrix)
alpha = 0.001
maxCycles = 500
weights = ones((n,1))
for k in range(maxCycles): #heavy on matrix operations
h = sigmoid(dataMatrix*weights) #matrix mult
error = (labelMat - h) #vector subtraction
weights = weights + alpha * dataMatrix.transpose()* error #matrix mult
return weights
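# A hedged usage sketch (assumes testSet.txt is present, as loadDataSet expects):
#
#     dataArr, labelMat = loadDataSet()
#     weights = gradAscent(dataArr, labelMat)   # returns a 3x1 numpy matrix
#     plotBestFit(weights.getA())               # getA() unwraps the matrix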
def plotBestFit(weights):
import matplotlib.pyplot as plt
dataMat,labelMat=loadDataSet()
dataArr = array(dataMat)
n = shape(dataArr)[0]
xcord1 = []; ycord1 = []
xcord2 = []; ycord2 = []
for i in range(n):
if int(labelMat[i])== 1:
xcord1.append(dataArr[i,1]); ycord1.append(dataArr[i,2])
else:
xcord2.append(dataArr[i,1]); ycord2.append(dataArr[i,2])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(xcord1, ycord1, s=30, c='red', marker='s')
ax.scatter(xcord2, ycord2, s=30, c='green')
x = arange(-3.0, 3.0, 0.1)
y = (-weights[0]-weights[1]*x)/weights[2]
ax.plot(x, y)
plt.xlabel('X1'); plt.ylabel('X2');
plt.show()
def stocGradAscent0(dataMatrix, classLabels):
m,n = shape(dataMatrix)
alpha = 0.01
weights = ones(n) #initialize to all ones
for i in range(m):
h = sigmoid(sum(dataMatrix[i]*weights))
error = classLabels[i] - h
weights = weights + alpha * error * dataMatrix[i]
return weights
def stocGradAscent1(dataMatrix, classLabels, numIter=150):
m,n = shape(dataMatrix)
weights = ones(n) #initialize to all ones
for j in range(numIter):
dataIndex = range(m)
for i in range(m):
            alpha = 4/(1.0+j+i)+0.0001    # alpha decreases with iteration but never
            randIndex = int(random.uniform(0,len(dataIndex)))  # reaches 0, because of the constant
h = sigmoid(sum(dataMatrix[randIndex]*weights))
error = classLabels[randIndex] - h
weights = weights + alpha * error * dataMatrix[randIndex]
del(dataIndex[randIndex])
return weights
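# Note: this module targets Python 2 (print statements; range() returns a
# list). Under Python 3, `dataIndex = range(m)` above would need to become
# `dataIndex = list(range(m))` for the `del` call to work.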
def classifyVector(inX, weights):
prob = sigmoid(sum(inX*weights))
if prob > 0.5: return 1.0
else: return 0.0
def colicTest():
frTrain = open('horseColicTraining.txt'); frTest = open('horseColicTest.txt')
trainingSet = []; trainingLabels = []
for line in frTrain.readlines():
currLine = line.strip().split('\t')
lineArr =[]
for i in range(21):
lineArr.append(float(currLine[i]))
trainingSet.append(lineArr)
trainingLabels.append(float(currLine[21]))
trainWeights = stocGradAscent1(array(trainingSet), trainingLabels, 1000)
errorCount = 0; numTestVec = 0.0
for line in frTest.readlines():
numTestVec += 1.0
currLine = line.strip().split('\t')
lineArr =[]
for i in range(21):
lineArr.append(float(currLine[i]))
if int(classifyVector(array(lineArr), trainWeights))!= int(currLine[21]):
errorCount += 1
errorRate = (float(errorCount)/numTestVec)
print "the error rate of this test is: %f" % errorRate
return errorRate
def multiTest():
numTests = 10; errorSum=0.0
for k in range(numTests):
errorSum += colicTest()
print "after %d iterations the average error rate is: %f" % (numTests, errorSum/float(numTests))
|
loganlinn/mlia
|
resources/Ch05/logRegres.py
|
Python
|
epl-1.0
| 4,099
|
#!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ed_reviews.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
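# Typical invocations of this entry point (standard Django management
# commands; not specific to this project):
#
#     python manage.py migrate
#     python manage.py runserver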
|
davejlin/treehouse
|
python/django/ed_reviews_REST/manage.py
|
Python
|
unlicense
| 253
|
# coding: latin-1
import os
import re
s = r"D:\Dupre\_data\informatique\support\vba\image/vbatd1_4.png"
print re.compile ("[\\\\/]image[\\\\/].*[.]png").search(s)
print re.compile ("[\\\\/]image[\\\\/].*[.]png").match(s)
def liste_fichier_repertoire (folder) :
file, rep = [], []
for r, d, f in os.walk (folder) :
#for a in d : rep.append (r + "/" + a)
for a in f :
e = r + "/" + a
if re.compile ("[\\\\/]image[\\\\/].*[.]png$").search(e) :
file.append (r + "/" + a)
return file, rep
folder = r"D:\Dupre\_data\informatique"
file,fold = liste_fichier_repertoire (folder)
for f in file : print "file ", f
for f in fold : print "directory ", f
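# For the sample path above, search() prints a match object because the
# pattern occurs mid-string, while match() prints None: match() only succeeds
# when the pattern matches at the very beginning of the string.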
|
sdpython/teachpyx
|
_todo/programme/filelist3.py
|
Python
|
mit
| 722
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
#
# Copyright 2015 Nandaja Varma <nvarma@redhat.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
#
# playbook_gen.py
# ----------------------
# PlaybookGen, with the help of other helper methods from various
# classes in the package, creates the variable files for the ansible
# playbooks to read from.
#
# This script can also be called separately, providing the configuration
# file and, optionally, the directory to which the ansible playbooks and
# variable files are to be generated (default is '/tmp/playbooks').
# Usage: ./playbook_gen.py <configuration_file> [<directory name>]
#
import os
import sys
from yaml_writer import YamlWriter
from global_vars import Global
from gluster_conf_writer import GlusterConfWriter
class PlaybookGen(GlusterConfWriter):
def __init__(self, config_file):
self.config = self.read_config(config_file)
self.options = self.config_get_sections(self.config)
self.get_hostnames()
self.create_files_and_dirs()
self.get_var_file_type()
output = {'host_vars': self.host_vars_gen,
'group_vars': self.group_vars_gen
}[self.var_file]()
        '''
        Since the client configuration data are to be written
        to the global_vars file no matter what, this method
        is called separately.
        '''
self.parse_gluster_info(self.config)
self.create_inventory()
self.write_host_names(self.config, self.hosts)
def get_hostnames(self):
self.hosts = self.config_get_options(self.config, 'hosts', False)
if not self.hosts:
print "Error: Section `hosts' not found.\ngluster-deploy will "\
"continue only if volume name is given in the format " \
"<hostname>:<volumename>"
def create_files_and_dirs(self):
'''
Creates required directories for all the configuration files
to go in. Since the client data for gluster confs are common for all the
hosts, creating a group var file anyway.
'''
self.mk_dir(Global.base_dir)
self.mk_dir(Global.group_vars_dir)
self.touch_file(Global.group_file)
self.move_templates_to_playbooks()
self.mk_dir(Global.host_vars_dir)
    def get_var_file_type(self):
        '''
        Decides whether host_vars are to be created or everything can
        fit into the group_vars file, based on the options provided
        in the configuration file. If all the hostnames are present
        as sections in the configuration file, host_vars are used;
        if only some of them are, it fails with an error.
        '''
if set(self.hosts).intersection(set(self.options)):
if set(self.hosts).issubset(set(self.options)):
self.var_file = 'host_vars'
else:
print "Error: Looks like you missed to give configurations " \
"for one or many host(s). Exiting!"
self.cleanup_and_quit()
else:
self.var_file = 'group_vars'
def create_inventory(self):
self.hosts and self.write_config(
Global.group,
self.hosts,
Global.inventory)
if self.hosts or Global.master:
self.write_config('master', Global.master or [self.hosts[0]],
Global.inventory)
def host_vars_gen(self):
        '''
        If host_vars are to be created, this creates a host_vars file for
        each host and writes data to it with the help of YamlWriter.
        '''
for host in self.hosts:
host_file = self.get_file_dir_path(Global.host_vars_dir, host)
self.touch_file(host_file)
devices = self.config_section_map(self.config, host,
'devices', False)
device_names = self.split_comma_seperated_options(devices)
YamlWriter(device_names, self.config, host_file, self.var_file)
def group_vars_gen(self):
'''
Calls YamlWriter for writing data to the group_vars file
'''
device_names = self.config_get_options(self.config,
'devices', False)
YamlWriter(device_names, self.config, Global.group_file, self.var_file)
def template_files_create(self, temp_file):
if not os.path.isdir(temp_file):
return False
self.exec_cmds('cp %s/*' % temp_file, Global.base_dir)
return True
def move_templates_to_playbooks(self):
        '''
        The templates directory shipped with this codebase is copied to
        Global.base_dir ('/tmp/playbooks' by default).
        '''
# Is the templates present as a part of ansible installation?
templates_path_pkg = '/usr/share/ansible/gdeploy/templates'
# Or is it present in the source directory or installed via setuptools
templates_path_bk = self.get_file_dir_path(self.uppath(__file__, 2),
'templates')
templates_dir = [
path for path in [
templates_path_pkg,
templates_path_bk] if os.path.isdir(path)]
if not templates_dir:
print "Error: Template files not found at %s or %s. " \
"Check your ansible-gluster " \
"installation and try " \
"again." % (templates_path_pkg, templates_path_bk)
self.cleanup_and_quit()
self.template_files_create(templates_dir[0])
if __name__ == '__main__':
# For playbook_gen to be standalone script.
if len(sys.argv) < 2:
print "Usage: var_populate configuration_file"
sys.exit(0)
PlaybookGen(sys.argv[1])
print "You can find your configuration file inside " \
"'%s' directory" % Global.base_dir
|
nandajavarma/gluster-deploy
|
glusterlib/playbook_gen.py
|
Python
|
gpl-2.0
| 6,565
|
from django.conf import settings
from django.core.urlresolvers import reverse
from django.http import HttpResponseRedirect
from django.shortcuts import render_to_response
from django.template import RequestContext
from django.utils.translation import ugettext
from django.contrib import messages
from django.contrib.admin.views.decorators import staff_member_required
from pinax.apps.account.utils import get_default_redirect, user_display
from pinax.apps.signup_codes.models import SignupCode
from pinax.apps.signup_codes.forms import SignupForm, InviteUserForm
def group_and_bridge(request):
"""
Given the request we can depend on the GroupMiddleware to provide the
group and bridge.
"""
# be group aware
group = getattr(request, "group", None)
if group:
bridge = request.bridge
else:
bridge = None
return group, bridge
def group_context(group, bridge):
# @@@ use bridge
ctx = {
"group": group,
}
if group:
ctx["group_base"] = bridge.group_base_template()
return ctx
def signup(request, **kwargs):
form_class = kwargs.pop("form_class", SignupForm)
template_name = kwargs.pop("template_name", "account/signup.html")
template_name_failure = kwargs.pop("template_name_failure", "signup_codes/failure.html")
success_url = kwargs.pop("success_url", None)
group, bridge = group_and_bridge(request)
ctx = group_context(group, bridge)
if success_url is None:
if hasattr(settings, "SIGNUP_REDIRECT_URLNAME"):
fallback_url = reverse(settings.SIGNUP_REDIRECT_URLNAME)
else:
if hasattr(settings, "LOGIN_REDIRECT_URLNAME"):
fallback_url = reverse(settings.LOGIN_REDIRECT_URLNAME)
else:
fallback_url = settings.LOGIN_REDIRECT_URL
success_url = get_default_redirect(request, fallback_url)
code = request.GET.get("code")
if request.method == "POST":
form = form_class(request.POST, request.FILES, group=group)
if form.is_valid():
user = form.save(request=request)
signup_code = form.cleaned_data["signup_code"]
if signup_code:
signup_code.use(user)
form.login(request, user)
messages.add_message(request, messages.SUCCESS,
ugettext("Successfully logged in as %(username)s.") % {
"username": user_display(user),
}
)
return HttpResponseRedirect(success_url)
else:
signup_code = SignupCode.check(code)
if signup_code:
initial = {
"signup_code": code,
"email": signup_code.email,
}
form = form_class(initial=initial, group=group)
else:
if not settings.ACCOUNT_OPEN_SIGNUP:
ctx.update({
"code": code,
})
ctx = RequestContext(request, ctx)
                # if account signup is not open we want to fail when no
                # signup code was provided or the provided one did not validate
return render_to_response(template_name_failure, ctx)
else:
form = form_class(group=group)
ctx.update({
"code": code,
"form": form,
})
return render_to_response(template_name, RequestContext(request, ctx))
@staff_member_required
def admin_invite_user(request, **kwargs):
"""
This view, by default, works inside the Django admin.
"""
form_class = kwargs.pop("form_class", InviteUserForm)
template_name = kwargs.pop("template_name", "signup_codes/admin_invite_user.html")
group, bridge = group_and_bridge(request)
if request.method == "POST":
form = form_class(request.POST, group=group)
if form.is_valid():
email = form.cleaned_data["email"]
form.send_signup_code()
messages.add_message(request, messages.INFO,
ugettext("An email has been sent to %(email)s.") % {
"email": email
}
)
form = form_class() # reset
else:
form = form_class(group=group)
ctx = group_context(group, bridge)
ctx.update({
"title": ugettext("Invite user"),
"form": form,
})
return render_to_response(template_name, RequestContext(request, ctx))
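# A hedged sketch of wiring these views into a URLconf (Django-1.x-era
# `patterns` API; the URL names are illustrative):
#
#     from django.conf.urls.defaults import patterns, url
#
#     urlpatterns = patterns("",
#         url(r"^signup/$", signup, name="acct_signup"),
#         url(r"^invite_user/$", admin_invite_user, name="admin_invite_user"),
#     )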
|
jhaus/pinax
|
pinax/apps/signup_codes/views.py
|
Python
|
mit
| 4,511
|
from django.db import models
from django.conf import settings
from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.dispatch import receiver
from colorful.fields import RGBColorField
# Create your models here.
class Reservation(models.Model):
time_slot = models.DateTimeField('time slot scheduled')
created_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
reserved_for = models.CharField(max_length=120)
creation_date = models.DateTimeField(auto_now_add=True)
res_color = RGBColorField(default='#5DADE2')
    def __str__(self):
        return str(self.time_slot)
class UserProfile(models.Model):
user = models.OneToOneField(settings.AUTH_USER_MODEL, related_name='profile', on_delete=models.CASCADE)
band_name = models.CharField(max_length=120, blank=True, default='')
fav_color = RGBColorField(blank=True, default='#5DADE2')
def __str__(self):
return self.user.email
# Function to automatically generate UserProfile db entry whenever a new user is created
@receiver(post_save, sender=User)
def create_or_update_user_profile(sender, instance, created, **kwargs):
if created:
UserProfile.objects.create(user=instance)
instance.profile.save()
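# A hedged sketch of the signal in action: creating a user is enough to get a
# matching UserProfile row (the names below are illustrative):
#
#     user = User.objects.create_user('alice', 'alice@example.com', 'secret')
#     user.profile.band_name   # -> '' (the UserProfile default)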
|
djb1815/Essex-MuSoc
|
musoc_web/schedule/models.py
|
Python
|
mit
| 1,281
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import numpy as np
import tvm
from tvm import te
from tvm import relay
from tvm.relay import transform
from tvm.relay.build_module import bind_params_by_name
from tvm.relay.testing import run_infer_type, create_workload
import tvm.topi.testing
import tvm.testing
def run_opt_pass(expr, opt_pass):
assert isinstance(opt_pass, tvm.transform.Pass)
mod = tvm.IRModule.from_expr(expr)
mod = opt_pass(mod)
entry = mod["main"]
return entry if isinstance(expr, relay.Function) else entry.body
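# A minimal, hedged sketch of the helper in use (shape and pass choice are
# illustrative):
#
#     x = relay.var("x", shape=(2, 2), dtype="float32")
#     f = relay.Function([x], x + x)
#     f = run_opt_pass(f, transform.InferType())   # returns a relay.Function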
def verify_func(func, data, ref_res, rtol=1e-5, atol=1e-7):
assert isinstance(data, list)
for target, ctx in tvm.testing.enabled_targets():
for kind in ["graph", "vm", "debug"]:
mod = tvm.ir.IRModule.from_expr(func)
intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target)
op_res = intrp.evaluate()(*data)
tvm.testing.assert_allclose(op_res.asnumpy(), ref_res, rtol=rtol, atol=atol)
@tvm.testing.uses_gpu
def test_dynamic_to_static_reshape():
def verify_reshape(shape, newshape, oshape):
x = relay.var("x", relay.TensorType(shape, "float32"))
y = relay.var("y", relay.TensorType(newshape, "float32"))
z = relay.reshape(x, relay.shape_of(y))
func = run_infer_type(relay.Function([x, y], z))
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
zz = func2.body
assert isinstance(zz, relay.Call)
assert zz.op == relay.op.get("reshape")
assert "newshape=" in zz.astext()
assert zz.checked_type == relay.ty.TensorType(oshape, "float32")
x_data = np.random.uniform(low=-1, high=1, size=shape).astype("float32")
y_data = np.random.uniform(low=-1, high=1, size=newshape).astype("float32")
ref_res = np.reshape(x_data, oshape)
verify_func(func2, [x_data, y_data], ref_res)
verify_reshape((2, 3, 4), (8, 3), (8, 3))
verify_reshape((4, 7), (2, 7, 2), (2, 7, 2))
@tvm.testing.uses_gpu
def test_dynamic_to_static_double_reshape():
def verify_reshape(shape, newshape):
x = relay.var("x", relay.TensorType(shape, "float32"))
y = relay.var("y", relay.TensorType(newshape, "float32"))
z = relay.reshape(x, relay.shape_of(y))
z = relay.reshape(z, relay.shape_of(x))
func = run_infer_type(relay.Function([x, y], z))
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
zz = func2.body
assert isinstance(zz, relay.Call)
assert zz.op == relay.op.get("reshape")
assert "newshape=" in zz.astext()
assert zz.checked_type == relay.ty.TensorType(shape, "float32")
x_data = np.random.uniform(low=-1, high=1, size=shape).astype("float32")
y_data = np.random.uniform(low=-1, high=1, size=newshape).astype("float32")
verify_func(func2, [x_data, y_data], x_data)
verify_reshape((2, 3, 4), (8, 3))
verify_reshape((4, 7), (2, 7, 2))
@tvm.testing.uses_gpu
def test_dynamic_to_static_quad_reshape():
def verify_reshape(shape, newshape):
x = relay.var("x", relay.TensorType(shape, "float32"))
y = relay.var("y", relay.TensorType(newshape, "float32"))
z1 = relay.reshape(x, relay.shape_of(y))
z2 = relay.reshape(z1, relay.shape_of(x))
z3 = relay.reshape(z2, relay.shape_of(z1))
z4 = relay.reshape(z3, relay.shape_of(z2))
func = run_infer_type(relay.Function([x, y], z4))
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
zz = func2.body
assert isinstance(zz, relay.Call)
assert zz.op == relay.op.get("reshape")
assert "newshape=" in zz.astext()
assert zz.checked_type == relay.ty.TensorType(shape, "float32")
x_data = np.random.uniform(low=-1, high=1, size=shape).astype("float32")
y_data = np.random.uniform(low=-1, high=1, size=newshape).astype("float32")
verify_func(func2, [x_data, y_data], x_data)
verify_reshape((2, 3, 4), (8, 3))
verify_reshape((4, 7), (2, 7, 2))
@tvm.testing.uses_gpu
def test_dynamic_to_static_tile():
def verify_tile(shape, reps, oshape):
x = relay.var("x", relay.TensorType(shape, "float32"))
y = relay.var("y", relay.TensorType(reps, "float32"))
z = relay.tile(x, relay.shape_of(y))
func = run_infer_type(relay.Function([x, y], z))
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
zz = func2.body
assert isinstance(zz, relay.Call)
assert zz.op == relay.op.get("tile")
assert zz.checked_type == relay.ty.TensorType(oshape, "float32")
x_data = np.random.uniform(low=-1, high=1, size=shape).astype("float32")
y_data = np.random.uniform(low=-1, high=1, size=reps).astype("float32")
ref_res = np.tile(x_data, reps)
verify_func(func2, [x_data, y_data], ref_res)
verify_tile((2, 3, 4), (2, 1, 5), (4, 3, 20))
verify_tile((4, 7), (4, 2), (16, 14))
@tvm.testing.uses_gpu
def test_dynamic_to_static_topk():
def verify_topk(k, axis, ret_type, is_ascend, dtype):
shape = (20, 100)
x = relay.var("x", relay.TensorType(shape, "float32"))
k_var = relay.const(k)
out = relay.topk(x, k_var, axis, ret_type, is_ascend, dtype)
if isinstance(out, relay.expr.TupleWrapper):
out = out.astuple()
func = relay.Function([x], out)
np_data = np.random.uniform(size=shape).astype("float32")
if is_ascend:
np_indices = np.argsort(np_data, axis=axis)
else:
np_indices = np.argsort(-np_data, axis=axis)
kk = k if k >= 1 else shape[axis]
if axis == 0:
np_indices = np_indices[:kk, :]
np_values = np.zeros(np_indices.shape).astype("float32")
for i in range(shape[1]):
np_values[:, i] = np_data[np_indices[:, i], i]
else:
np_indices = np_indices[:, :kk]
np_values = np.zeros(np_indices.shape).astype("float32")
for i in range(shape[0]):
np_values[i, :] = np_data[i, np_indices[i, :]]
np_indices = np_indices.astype(dtype)
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
zz = func2.body
assert isinstance(zz, relay.Call)
assert zz.op == relay.op.get("topk")
for target, ctx in tvm.testing.enabled_targets():
if "llvm" not in target:
continue
for kind in ["graph", "vm", "debug"]:
mod = tvm.ir.IRModule.from_expr(func2)
intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target)
op_res = intrp.evaluate()(np_data)
if ret_type == "both":
tvm.testing.assert_allclose(op_res[0].asnumpy(), np_values)
tvm.testing.assert_allclose(op_res[1].asnumpy(), np_indices)
elif ret_type == "values":
tvm.testing.assert_allclose(op_res.asnumpy(), np_values)
else:
tvm.testing.assert_allclose(op_res.asnumpy(), np_indices)
np.random.seed(0)
for k in [0, 1, 5]:
for axis in [0, -1, 1]:
for ret_type in ["both", "values", "indices"]:
verify_topk(k, axis, ret_type, True, "int64")
verify_topk(k, axis, ret_type, False, "float32")
@tvm.testing.uses_gpu
def test_dynamic_to_static_broadcast_to():
def verify_broadcast_to(shape, broadcast_shape):
x = relay.var("x", relay.TensorType(shape, "float32"))
y = relay.var("y", relay.TensorType(broadcast_shape, "float32"))
z = relay.broadcast_to(x, shape=relay.shape_of(y))
func = run_infer_type(relay.Function([x, y], z))
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
zz = func2.body
assert isinstance(zz, relay.Call)
assert zz.op == relay.op.get("broadcast_to")
assert zz.checked_type == relay.ty.TensorType(broadcast_shape, "float32")
x_data = np.random.uniform(low=-1, high=1, size=shape).astype("float32")
y_data = np.random.uniform(low=-1, high=1, size=broadcast_shape).astype("float32")
ref_res = np.broadcast_to(x_data, y_data.shape)
verify_func(func2, [x_data, y_data], ref_res)
verify_broadcast_to((3, 1), (3, 3))
@tvm.testing.uses_gpu
def test_dynamic_to_static_zeros_ones():
def verify_ones_zeros(shape, dtype):
for op, ref in [(relay.zeros, np.zeros), (relay.ones, np.ones)]:
x = relay.var("x", relay.TensorType(shape, dtype))
y = op(relay.shape_of(x), dtype)
func = run_infer_type(relay.Function([x], y))
func2 = run_opt_pass(
run_opt_pass(func, transform.DynamicToStatic()), transform.InferType()
)
zz = func2.body
assert isinstance(zz, relay.Constant)
assert zz.checked_type == relay.ty.TensorType(shape, dtype)
x_data = np.random.uniform(low=1, high=1, size=shape)
ref_res = ref(x_data.shape)
verify_func(func2, [x_data], ref_res)
verify_ones_zeros((1, 2, 3), "int64")
verify_ones_zeros((9, 8, 3, 4), "float32")
@tvm.testing.uses_gpu
def test_dynamic_to_static_resize():
def verify_resize(shape, scale, method, layout):
if layout == "NHWC":
size = (shape[1] * scale, shape[2] * scale)
else:
size = (shape[2] * scale, shape[3] * scale)
x = relay.var("x", relay.TensorType(shape, "float32"))
size_var = relay.const(np.array(size).astype("float32"))
coord_trans = "asymmetric" if method == "nearest_neighbor" else "align_corners"
z = relay.image.resize(
x, size_var, layout, method, coordinate_transformation_mode=coord_trans
)
func = run_infer_type(relay.Function([x], z))
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
zz = func2.body
assert isinstance(zz, relay.Call)
assert zz.op == relay.op.get("image.resize")
x_data = np.random.uniform(low=-1, high=1, size=shape).astype("float32")
if method == "bilinear":
ref_res = tvm.topi.testing.bilinear_resize_python(x_data, size, layout)
else:
ref_res = tvm.topi.testing.upsampling_python(x_data, (scale, scale), layout)
verify_func(func2, [x_data], ref_res, rtol=1e-4, atol=1e-6)
for method in ["bilinear", "nearest_neighbor"]:
for layout in ["NCHW", "NHWC"]:
verify_resize((1, 4, 4, 4), 2, method, layout)
@tvm.testing.uses_gpu
def test_dynamic_to_static_one_hot():
def _verify(indices_shape, depth, on_value, off_value, axis, dtype):
indices = relay.var("indices", relay.TensorType(indices_shape, "int32"))
depth_var = relay.const(depth)
on_value_const = relay.const(on_value)
off_value_const = relay.const(off_value)
out = relay.one_hot(indices, on_value_const, off_value_const, depth_var, axis, dtype)
func = relay.Function([indices], out)
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
zz = func2.body
assert isinstance(zz, relay.Call)
assert zz.op == relay.op.get("one_hot")
indices_np = np.random.randint(0, depth, size=indices_shape).astype("int32")
out_np = tvm.topi.testing.one_hot(indices_np, on_value, off_value, depth, axis, dtype)
verify_func(func2, [indices_np], out_np)
_verify((3,), 3, 1, 0, -1, "int32")
_verify((3,), 3, 1.0, 0.0, -1, "float32")
_verify((2, 2), 5, 2, -2, 0, "int32")
_verify((2, 2), 5, 0.5, -0.5, 1, "float32")
_verify((3, 2, 4, 5), 6, 1, 0, 1, "int32")
_verify((3, 2, 4, 5), 6, 1.0, 0.0, 0, "float32")
@tvm.testing.uses_gpu
def test_dynamic_to_static_full():
def verify_full(fill_value, fill_shape, dtype):
x = relay.var("x", relay.scalar_type(dtype))
y = relay.var("y", relay.TensorType(fill_shape, "int64"))
z = relay.full(x, relay.shape_of(y), dtype)
func = run_infer_type(relay.Function([x, y], z))
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
zz = func2.body
assert isinstance(zz, relay.Call)
assert zz.op == relay.op.get("full")
ref_res = np.full(fill_shape, fill_value).astype(dtype)
y_data = np.random.uniform(low=-1, high=1, size=fill_shape).astype("int64")
verify_func(func2, [fill_value, y_data], ref_res)
verify_full(4, (1, 2, 3, 4), "int32")
verify_full(4.0, (1, 2, 8, 10), "float32")
def test_dynamic_to_static_upsampling():
def verify_upsampling(data_shape, scale_h_val, scale_w_val, dtype):
x = relay.var("x", relay.TensorType(data_shape, dtype))
scale_h = relay.const(scale_h_val)
scale_w = relay.const(scale_w_val)
z = relay.nn.upsampling(x, scale_h, scale_w)
func = run_infer_type(relay.Function([x], z))
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
zz = func2.body
assert isinstance(zz, relay.Call)
assert zz.op == relay.op.get("nn.upsampling")
x_data = np.random.uniform(size=data_shape).astype(dtype)
ref_res = tvm.topi.testing.upsampling_python(x_data, (scale_h_val, scale_w_val), "NCHW")
verify_func(func2, [x_data], ref_res)
verify_upsampling((1, 16, 32, 32), 2, 2, "int8")
verify_upsampling((1, 16, 32, 32), 4, 4, "int32")
def test_dynamic_to_static_upsampling3d():
def verify_upsampling3d(data_shape, scale_d_val, scale_h_val, scale_w_val, dtype):
x = relay.var("x", relay.TensorType(data_shape, dtype))
scale_d = relay.const(scale_d_val)
scale_h = relay.const(scale_h_val)
scale_w = relay.const(scale_w_val)
z = relay.nn.upsampling3d(x, scale_d, scale_h, scale_w)
func = run_infer_type(relay.Function([x], z))
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
zz = func2.body
assert isinstance(zz, relay.Call)
assert zz.op == relay.op.get("nn.upsampling3d")
x_data = np.random.uniform(size=data_shape).astype(dtype)
ref_res = tvm.topi.testing.upsampling3d_python(
x_data, (scale_d_val, scale_h_val, scale_w_val), "NCDHW"
)
verify_func(func2, [x_data], ref_res)
verify_upsampling3d((1, 1, 1, 1, 1), 2, 3, 4, "int8")
verify_upsampling3d((5, 7, 8, 10, 32), 3, 2, 2, "int8")
verify_upsampling3d((1, 4, 2, 5, 3), 5, 4, 3, "int32")
def test_dynamic_to_static_pad():
def verify_pad(data_shape, pad_width, pad_val, dtype):
x = relay.var("x", relay.TensorType(data_shape, dtype))
z = relay.nn.pad(x, relay.const(np.array(pad_width)), pad_val)
func = run_infer_type(relay.Function([x], z))
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
zz = func2.body
assert isinstance(zz, relay.Call)
assert zz.op == relay.op.get("nn.pad")
x_data = np.random.uniform(size=data_shape).astype(dtype)
ref_res = np.pad(
x_data, pad_width, "constant", constant_values=(((pad_val,) * 2),) * len(data_shape)
)
verify_func(func2, [x_data], ref_res)
verify_pad((4, 10, 7, 7), ((1, 1), (2, 2), (3, 3), (4, 4)), 2.0, "int32")
verify_pad((2, 7), ((1, 4), (2, 2)), 4.0, "float64")
def test_dynamic_to_static_strided_slice():
def verify(dshape, begin, end, strides, output, slice_mode="end", test_ref=True, dtype="int32"):
x = relay.var("x", relay.TensorType(dshape, "float32"))
ndim = len(dshape)
begin = begin if begin else [0] * ndim
end = end if end else list(dshape)
if strides:
if len(strides) == 1:
strides = strides * ndim
else:
strides = [1] * ndim
# target numpy result
x_data = np.random.uniform(size=dshape).astype("float32")
ref_res = tvm.topi.testing.strided_slice_python(x_data, begin, end, strides, slice_mode)
data = [x_data, np.array(begin), np.array(end)]
begin = relay.const(begin, dtype=dtype)
end = relay.const(end, dtype=dtype)
if strides:
data.append(np.array(strides))
strides = relay.const(strides, dtype=dtype)
z = relay.strided_slice(x, begin=begin, end=end, strides=strides, slice_mode=slice_mode)
else:
z = relay.strided_slice(x, begin=begin, end=end, slice_mode=slice_mode)
func = relay.Function([x], z)
func = run_infer_type(func)
func2 = run_opt_pass(run_opt_pass(func, transform.DynamicToStatic()), transform.InferType())
assert isinstance(func2.body, relay.Call)
assert func2.body.op == relay.op.get("strided_slice")
verify_func(func2, [x_data], ref_res)
verify((1, 3, 10, 10), [0, 0, 0, 0], [1, 3, 10, 10], [1], (0, 3, 10, 10), dtype="int64")
verify(
(1, 224, 224, 3),
[0, 20, 20, 0],
[1, 140, 140, 3],
[1, 1, 1, 1],
(1, 120, 120, 3),
dtype="int64",
)
verify((3, 4, 3), [1, 1, 0], [4, 4, 3], [2, 1, 1], (1, 3, 3), dtype="int16")
verify((3, 4, 3), [0, 0, 0], [4, -5, 4], [1, -1, 2], (3, 1, 2))
verify((3, 4, 3), [1, 1, 0], [4, 4, 3], None, (2, 3, 3))
verify((3, 4, 3), [1, 1, 0], [4, 1000, 3], None, (2, 3, 3))
verify((3, 4, 3), [1, 1, 0], [4, 4, 4], None, (2, 3, 3))
verify((3, 4, 3), [1, 1, 0], [4, 4, 3], None, (2, 3, 3))
verify((3, 4, 3), [1, -1, 0], [4, -5, 3], [2, -1, 1], (1, 4, 3))
verify((3, 4, 3), [1, -1, 0], [2, -3, 3], [1, -1, 1], (1, 2, 3))
verify(
(3, 4, 3), [1, 0, 0], [3, -1, 3], [1, 1, 1], (2, 4, 3), slice_mode="size", test_ref=False
)
verify((3, 4, 3), [1, 0, 0], [-1, 2, 3], [1, 1, 1], (2, 2, 3), slice_mode="size", test_ref=True)
if __name__ == "__main__":
test_dynamic_to_static_reshape()
test_dynamic_to_static_double_reshape()
test_dynamic_to_static_quad_reshape()
test_dynamic_to_static_tile()
test_dynamic_to_static_topk()
test_dynamic_to_static_broadcast_to()
test_dynamic_to_static_zeros_ones()
test_dynamic_to_static_resize()
test_dynamic_to_static_one_hot()
test_dynamic_to_static_full()
test_dynamic_to_static_upsampling()
test_dynamic_to_static_pad()
test_dynamic_to_static_strided_slice()
|
tqchen/tvm
|
tests/python/relay/test_pass_dynamic_to_static.py
|
Python
|
apache-2.0
| 19,598
|
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contrib version of MirroredStrategy."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
from tensorflow.python.distribute import distribute_lib
from tensorflow.python.distribute import mirrored_strategy
from tensorflow.python.distribute import values
# pylint: disable=protected-access,invalid-name
_call_for_each_replica = mirrored_strategy._call_for_each_replica
_create_mirrored_variable = mirrored_strategy._create_mirrored_variable
all_local_devices = mirrored_strategy.all_local_devices
CoreMirroredStrategy = mirrored_strategy.MirroredStrategy
CoreMirroredExtended = mirrored_strategy.MirroredExtended
# pylint: enable=protected-access,invalid-name
class MirroredStrategy(distribute_lib.DistributionStrategy):
"""Mirrors vars to distribute across multiple devices and machines.
*** contrib version ***
This strategy uses one replica per device and sync replication for its
multi-GPU version.
  When `cluster_spec` is given by the `configure` method, it turns into the
  multi-worker version that works on multiple workers with in-graph replication.
Note: `configure` will be called by higher-level APIs if running in
distributed environment.
  There are several important concepts for distributed TensorFlow, e.g.
  `client`, `job`, `task`, `cluster`, `in-graph replication` and
  `synchronous training`, and they have already been defined in the
  [TensorFlow documentation](https://www.tensorflow.org/deploy/distributed).
The distribution strategy inherits these concepts as well and in addition to
that we also clarify several more concepts:
* **In-graph replication**: the `client` creates a single `tf.Graph` that
specifies tasks for devices on all workers. The `client` then creates a
client session which will talk to the `master` service of a `worker`. Then
the `master` will partition the graph and distribute the work to all
participating workers.
* **Worker**: A `worker` is a TensorFlow `task` that usually maps to one
physical machine. We will have multiple `worker`s with different `task`
index. They all do similar things except for one worker checkpointing model
variables, writing summaries, etc. in addition to its ordinary work.
The multi-worker version of this class maps one replica to one device on a
worker. It mirrors all model variables on all replicas. For example, if you
have two `worker`s and each `worker` has 4 GPUs, it will create 8 copies of
the model variables on these 8 GPUs. Then like in MirroredStrategy, each
replica performs their computation with their own copy of variables unless in
cross-replica model where variable or tensor reduction happens.
Args:
devices: a list of device strings.
num_gpus: number of GPUs. For local training, either specify `devices` or
`num_gpus`. In distributed training, this must be specified as number of
GPUs on each worker.
num_gpus_per_worker: number of GPUs per worker. This is the same as
`num_gpus` and only one of `num_gpus` and `num_gpus_per_worker` can be
specified.
    cross_device_ops: optional, a descendant of `CrossDeviceOps`. If this is not
set, the `configure` method will try to find the best one.
auto_shard_dataset: whether to auto-shard the dataset when there are
multiple workers.
cross_tower_ops: Deprecated alias for `cross_device_ops`.
"""
def __init__(self,
devices=None,
num_gpus=None,
num_gpus_per_worker=None,
cross_device_ops=None,
auto_shard_dataset=False,
cross_tower_ops=None):
assert not (cross_device_ops and cross_tower_ops)
if num_gpus is not None and num_gpus_per_worker is not None:
raise ValueError(
"You cannot specify both `num_gpus` and `num_gpus_per_worker`.")
if num_gpus is None:
num_gpus = num_gpus_per_worker
extended = MirroredExtended(self, devices, num_gpus,
cross_device_ops or cross_tower_ops,
auto_shard_dataset)
super(MirroredStrategy, self).__init__(extended)
class MirroredExtended(CoreMirroredExtended):
"""Implementation of (contrib) MirroredStrategy."""
def __init__(self,
container_strategy,
devices=None,
num_gpus_per_worker=None,
cross_device_ops=None,
auto_shard_dataset=False):
if devices is None:
devices = mirrored_strategy.all_local_devices(num_gpus_per_worker)
elif num_gpus_per_worker is not None:
raise ValueError(
"Must only specify one of `devices` and `num_gpus_per_worker`.")
super(MirroredExtended, self).__init__(container_strategy, devices,
cross_device_ops)
self._auto_shard_dataset = auto_shard_dataset
def _make_dataset_iterator(self, dataset):
"""Make iterator from dataset without splitting the batch.
This implementation is different than the one in
`tf.distribute.MirroredStrategy` for purposes of backward compatibility.
We treat the incoming dataset's batch size as per replica batch size.
Args:
dataset: `tf.data.Dataset` for input.
Returns:
An `InputIterator` which returns inputs for each step of the computation.
"""
return values.DatasetIterator(dataset, self._input_workers)
def _distribute_dataset(self, dataset_fn):
if self._local_mode:
return values.PerReplicaDataset(
self._call_dataset_fn(dataset_fn), self._input_workers, 0)
else:
return values.MultiWorkerDataset(
functools.partial(self._call_dataset_fn, dataset_fn),
self._input_workers,
auto_shard=self._auto_shard_dataset)
# TODO(priyag): Delete this once all strategies use global batch size.
@property
def _global_batch_size(self):
return False
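# A hedged usage sketch of the contrib strategy (assumes two local GPUs;
# `build_model` stands in for a user-supplied constructor):
#
#     strategy = MirroredStrategy(num_gpus=2)
#     with strategy.scope():
#         model = build_model()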
|
Bismarrck/tensorflow
|
tensorflow/contrib/distribute/python/mirrored_strategy.py
|
Python
|
apache-2.0
| 6,670
|
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from code import Code
from model import PropertyType, Property, Type
import cpp_util
import schema_util
import util_cc_helper
from cpp_namespace_environment import CppNamespaceEnvironment
class CCGenerator(object):
def __init__(self, type_generator):
self._type_generator = type_generator
def Generate(self, namespace):
return _Generator(namespace, self._type_generator).Generate()
class _Generator(object):
"""A .cc generator for a namespace.
"""
def __init__(self, namespace, cpp_type_generator):
assert type(namespace.environment) is CppNamespaceEnvironment
self._namespace = namespace
self._type_helper = cpp_type_generator
self._util_cc_helper = (
util_cc_helper.UtilCCHelper(self._type_helper))
self._generate_error_messages = namespace.compiler_options.get(
'generate_error_messages', False)
def Generate(self):
"""Generates a Code object with the .cc for a single namespace.
"""
cpp_namespace = cpp_util.GetCppNamespace(
self._namespace.environment.namespace_pattern,
self._namespace.unix_name)
c = Code()
(c.Append(cpp_util.CHROMIUM_LICENSE)
.Append()
.Append(cpp_util.GENERATED_FILE_MESSAGE %
cpp_util.ToPosixPath(self._namespace.source_file))
.Append()
.Append('#include "%s/%s.h"' %
(cpp_util.ToPosixPath(self._namespace.source_file_dir),
self._namespace.short_filename))
.Append()
.Append('#include <memory>')
.Append('#include <ostream>')
.Append('#include <string>')
.Append('#include <utility>')
.Append('#include <vector>')
.Append()
.Append('#include "base/check.h"')
.Append('#include "base/check_op.h"')
.Append('#include "base/notreached.h"')
.Append('#include "base/strings/string_number_conversions.h"')
.Append('#include "base/strings/utf_string_conversions.h"')
.Append('#include "base/values.h"')
.Append(self._util_cc_helper.GetIncludePath())
.Cblock(self._GenerateManifestKeysIncludes())
.Cblock(self._type_helper.GenerateIncludes(include_soft=True))
.Append()
.Append('using base::UTF8ToUTF16;')
.Append()
.Concat(cpp_util.OpenNamespace(cpp_namespace))
)
if self._namespace.properties:
(c.Append('//')
.Append('// Properties')
.Append('//')
.Append()
)
for prop in self._namespace.properties.values():
property_code = self._type_helper.GeneratePropertyValues(
prop,
'const %(type)s %(name)s = %(value)s;',
nodoc=True)
if property_code:
c.Cblock(property_code)
if self._namespace.types:
(c.Append('//')
.Append('// Types')
.Append('//')
.Append()
.Cblock(self._GenerateTypes(None, self._namespace.types.values()))
)
if self._namespace.manifest_keys:
(c.Append('//')
.Append('// Manifest Keys')
.Append('//')
.Append()
.Cblock(self._GenerateManifestKeys())
)
if self._namespace.functions:
(c.Append('//')
.Append('// Functions')
.Append('//')
.Append()
)
for function in self._namespace.functions.values():
c.Cblock(self._GenerateFunction(function))
if self._namespace.events:
(c.Append('//')
.Append('// Events')
.Append('//')
.Append()
)
for event in self._namespace.events.values():
c.Cblock(self._GenerateEvent(event))
c.Cblock(cpp_util.CloseNamespace(cpp_namespace))
c.Append()
return c
def _GenerateType(self, cpp_namespace, type_):
"""Generates the function definitions for a type.
"""
classname = cpp_util.Classname(schema_util.StripNamespace(type_.name))
c = Code()
if type_.functions:
# Wrap functions within types in the type's namespace.
(c.Append('namespace %s {' % classname)
.Append())
for function in type_.functions.values():
c.Cblock(self._GenerateFunction(function))
c.Append('} // namespace %s' % classname)
elif type_.property_type == PropertyType.ARRAY:
c.Cblock(self._GenerateType(cpp_namespace, type_.item_type))
elif type_.property_type in (PropertyType.CHOICES,
PropertyType.OBJECT):
if cpp_namespace is None:
classname_in_namespace = classname
else:
classname_in_namespace = '%s::%s' % (cpp_namespace, classname)
if type_.property_type == PropertyType.OBJECT:
c.Cblock(self._GeneratePropertyFunctions(classname_in_namespace,
type_.properties.values()))
else:
c.Cblock(self._GenerateTypes(classname_in_namespace, type_.choices))
(c.Append('%s::%s()' % (classname_in_namespace, classname))
.Cblock(self._GenerateInitializersAndBody(type_))
.Append('%s::~%s() = default;' % (classname_in_namespace, classname))
)
# Note: we use 'rhs' because some API objects have a member 'other'.
(c.Append('%s::%s(%s&& rhs) = default;' %
(classname_in_namespace, classname, classname))
.Append('%s& %s::operator=(%s&& rhs) = default;' %
(classname_in_namespace, classname_in_namespace,
classname))
)
if type_.origin.from_manifest_keys:
c.Cblock(
self._GenerateManifestKeyConstants(
classname_in_namespace, type_.properties.values()))
if type_.origin.from_json:
c.Cblock(self._GenerateTypePopulate(classname_in_namespace, type_))
if cpp_namespace is None: # only generate for top-level types
c.Cblock(self._GenerateTypeFromValue(classname_in_namespace, type_))
if type_.origin.from_client:
c.Cblock(self._GenerateTypeToValue(classname_in_namespace, type_))
if type_.origin.from_manifest_keys:
c.Cblock(
self._GenerateParseFromDictionary(
classname, classname_in_namespace, type_))
elif type_.property_type == PropertyType.ENUM:
(c.Cblock(self._GenerateEnumToString(cpp_namespace, type_))
.Cblock(self._GenerateEnumFromString(cpp_namespace, type_))
)
return c
def _GenerateInitializersAndBody(self, type_):
items = []
for prop in type_.properties.values():
t = prop.type_
real_t = self._type_helper.FollowRef(t)
if real_t.property_type == PropertyType.ENUM:
namespace_prefix = ('%s::' % real_t.namespace.unix_name
if real_t.namespace != self._namespace
else '')
items.append('%s(%s%s)' % (prop.unix_name,
namespace_prefix,
self._type_helper.GetEnumNoneValue(t)))
elif prop.optional:
continue
elif t.property_type == PropertyType.INTEGER:
items.append('%s(0)' % prop.unix_name)
elif t.property_type == PropertyType.DOUBLE:
items.append('%s(0.0)' % prop.unix_name)
elif t.property_type == PropertyType.BOOLEAN:
items.append('%s(false)' % prop.unix_name)
elif (t.property_type == PropertyType.ANY or
t.property_type == PropertyType.ARRAY or
t.property_type == PropertyType.BINARY or
t.property_type == PropertyType.CHOICES or
t.property_type == PropertyType.OBJECT or
t.property_type == PropertyType.FUNCTION or
t.property_type == PropertyType.REF or
t.property_type == PropertyType.STRING):
# TODO(miket): It would be nice to initialize CHOICES, but we
# don't presently have the semantics to indicate which one of a set
# should be the default.
continue
else:
raise TypeError(t)
if items:
s = ': %s' % (',\n'.join(items))
else:
s = ''
s = s + ' {}'
return Code().Append(s)
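  # Illustrative sketch (an assumption, not literal output captured from this
  # file): for a type with a required INTEGER member "id" and BOOLEAN member
  # "enabled", the logic above would emit an initializer list of the form:
  #   : id(0),
  #   enabled(false) {}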
def _GenerateTypePopulate(self, cpp_namespace, type_):
"""Generates the function for populating a type given a pointer to it.
    E.g. for type "Foo", generates Foo::Populate().
"""
classname = cpp_util.Classname(schema_util.StripNamespace(type_.name))
c = Code()
(c.Append('// static')
.Append('bool %(namespace)s::Populate(')
.Sblock(' %s) {' % self._GenerateParams(
('const base::Value& value', '%(name)s* out'))))
if self._generate_error_messages:
c.Append('DCHECK(error);')
if type_.property_type == PropertyType.CHOICES:
for choice in type_.choices:
(c.Sblock('if (%s) {' % self._GenerateValueIsTypeExpression('value',
choice))
.Concat(self._GeneratePopulateVariableFromValue(
choice,
'value',
'out->as_%s' % choice.unix_name,
'false',
is_ptr=True))
.Append('return true;')
.Eblock('}')
)
(c.Concat(self._AppendError16(
'u"expected %s, got " + %s' %
(" or ".join(choice.name for choice in type_.choices),
self._util_cc_helper.GetValueTypeString('value'))))
.Append('return false;'))
elif type_.property_type == PropertyType.OBJECT:
(c.Sblock('if (!value.is_dict()) {')
.Concat(self._AppendError16(
'u"expected dictionary, got " + ' +
self._util_cc_helper.GetValueTypeString('value')))
.Append('return false;')
.Eblock('}'))
if type_.properties or type_.additional_properties is not None:
c.Append('const auto* dict = '
'static_cast<const base::DictionaryValue*>(&value);')
# TODO(crbug.com/1145154): The generated code here will ignore
# unrecognized keys, but the parsing code for types passed to APIs in the
# renderer will hard-error on them. We should probably be consistent with
# the renderer here (at least for types also parsed in the renderer).
for prop in type_.properties.values():
c.Concat(self._InitializePropertyToDefault(prop, 'out'))
for prop in type_.properties.values():
c.Concat(self._GenerateTypePopulateProperty(prop, 'dict', 'out'))
if type_.additional_properties is not None:
if type_.additional_properties.property_type == PropertyType.ANY:
c.Append('out->additional_properties.MergeDictionary(dict);')
else:
cpp_type = self._type_helper.GetCppType(type_.additional_properties,
is_in_container=True)
(c.Append('for (base::DictionaryValue::Iterator it(*dict);')
.Sblock(' !it.IsAtEnd(); it.Advance()) {')
.Append('%s tmp;' % cpp_type)
.Concat(self._GeneratePopulateVariableFromValue(
type_.additional_properties,
'it.value()',
'tmp',
'false'))
.Append('out->additional_properties[it.key()] = tmp;')
.Eblock('}')
)
c.Append('return true;')
(c.Eblock('}')
.Substitute({'namespace': cpp_namespace, 'name': classname}))
return c
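  # Illustrative sketch of the emitted Populate() for an OBJECT type named
  # "Foo" (name hypothetical, error handling elided):
  #   // static
  #   bool Foo::Populate(const base::Value& value, Foo* out) {
  #     if (!value.is_dict()) {
  #       return false;
  #     }
  #     ... per-property population ...
  #     return true;
  #   }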
def _GenerateValueIsTypeExpression(self, var, type_):
real_type = self._type_helper.FollowRef(type_)
if real_type.property_type is PropertyType.CHOICES:
return '(%s)' % ' || '.join(self._GenerateValueIsTypeExpression(var,
choice)
for choice in real_type.choices)
return '%s.type() == %s' % (var, cpp_util.GetValueType(real_type))
def _GenerateTypePopulateProperty(self, prop, src, dst):
"""Generate the code to populate a single property in a type.
src: base::DictionaryValue*
dst: Type*
"""
c = Code()
value_var = prop.unix_name + '_value'
c.Append('const base::Value* %(value_var)s = %(src)s->FindKey("%(key)s");')
if prop.optional:
(c.Sblock(
'if (%(value_var)s) {')
.Concat(self._GeneratePopulatePropertyFromValue(
prop, '(*%s)' % value_var, dst, 'false')))
underlying_type = self._type_helper.FollowRef(prop.type_)
if underlying_type.property_type == PropertyType.ENUM:
namespace_prefix = ('%s::' % underlying_type.namespace.unix_name
if underlying_type.namespace != self._namespace
else '')
(c.Append('} else {')
.Append('%%(dst)s->%%(name)s = %s%s;' %
(namespace_prefix,
self._type_helper.GetEnumNoneValue(prop.type_))))
c.Eblock('}')
else:
(c.Sblock(
'if (!%(value_var)s) {')
.Concat(self._AppendError16('u"\'%%(key)s\' is required"'))
.Append('return false;')
.Eblock('}')
.Concat(self._GeneratePopulatePropertyFromValue(
prop, '(*%s)' % value_var, dst, 'false'))
)
c.Append()
c.Substitute({
'value_var': value_var,
'key': prop.name,
'src': src,
'dst': dst,
'name': prop.unix_name
})
return c
def _GenerateTypeFromValue(self, cpp_namespace, type_):
classname = cpp_util.Classname(schema_util.StripNamespace(type_.name))
c = Code()
(c.Append('// static')
.Append('std::unique_ptr<%s> %s::FromValue(%s) {' % (classname,
cpp_namespace, self._GenerateParams(('const base::Value& value',))))
)
    c.Sblock()
if self._generate_error_messages:
c.Append('DCHECK(error);')
c.Append('auto out = std::make_unique<%s>();' % classname)
c.Append('bool result = Populate(%s);' %
self._GenerateArgs(('value', 'out.get()')))
if self._generate_error_messages:
c.Append('DCHECK_EQ(result, error->empty());')
c.Sblock('if (!result)')
c.Append('return nullptr;')
c.Eblock('return out;')
c.Eblock('}')
return c
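  # Illustrative sketch of the emitted factory for a hypothetical type "Foo"
  # (without error-message parameters), following the Appends above:
  #   // static
  #   std::unique_ptr<Foo> Foo::FromValue(const base::Value& value) {
  #     auto out = std::make_unique<Foo>();
  #     bool result = Populate(value, out.get());
  #     if (!result)
  #       return nullptr;
  #     return out;
  #   }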
def _GenerateTypeToValue(self, cpp_namespace, type_):
"""Generates a function that serializes the type into a base::Value.
    E.g. for type "Foo", generates Foo::ToValue().
"""
if type_.property_type == PropertyType.OBJECT:
return self._GenerateObjectTypeToValue(cpp_namespace, type_)
elif type_.property_type == PropertyType.CHOICES:
return self._GenerateChoiceTypeToValue(cpp_namespace, type_)
else:
raise ValueError("Unsupported property type %s" % type_.type_)
def _GenerateManifestKeysIncludes(self):
# type: () -> (Code)
"""Returns the includes needed for manifest key parsing.
"""
c = Code()
if not self._namespace.manifest_keys:
return c
c.Append('#include "tools/json_schema_compiler/manifest_parse_util.h"')
return c
def _GenerateManifestKeyConstants(self, classname_in_namespace, properties):
# type: (str, List[Property]) -> Code
""" Generates the definition for manifest key constants declared in the
header.
"""
c = Code()
for prop in properties:
c.Comment('static')
c.Append('constexpr char %s::%s[];' %
(classname_in_namespace,
cpp_util.UnixNameToConstantName(prop.unix_name)))
return c
def _GenerateManifestKeys(self):
# type: () -> Code
"""Generates the types and parsing code for manifest keys.
"""
assert self._namespace.manifest_keys
assert self._namespace.manifest_keys.property_type == PropertyType.OBJECT
return self._GenerateType(None, self._namespace.manifest_keys)
def _GenerateParseFromDictionary(
self, classname, classname_in_namespace, type_):
# type: (str, str, Type) -> Code
"""Generates a function that deserializes the type from the passed
dictionary. E.g. for type "Foo", generates Foo::ParseFromDictionary().
"""
assert type_.property_type == PropertyType.OBJECT, \
('Manifest type %s must be an object, but it is: %s' %
(type_.name, type_.property_type))
if type_.IsRootManifestKeyType():
return self._GenerateParseFromDictionaryForRootManifestType(
classname, classname_in_namespace, type_.properties.values())
return self._GenerateParseFromDictionaryForChildManifestType(
classname, classname_in_namespace, type_.properties.values())
def _GenerateParseFromDictionaryForRootManifestType(
self, classname, classname_in_namespace, properties):
# type: (str, str, List[Property]) -> Code
"""Generates definition for ManifestKeys::ParseFromDictionary.
"""
params = [
'const base::DictionaryValue& root_dict',
'%(classname)s* out'
]
c = Code()
    c.Append('// static')
c.Append('bool %(classname_in_namespace)s::ParseFromDictionary(')
# Make |generate_error_messages| True since we always generate error
# messages for manifest parsing.
c.Sblock('%s) {' %
self._GenerateParams(params, generate_error_messages=True))
c.Append('DCHECK(out);')
c.Append('DCHECK(error);')
c.Append()
c.Append('std::vector<base::StringPiece> error_path_reversed_vec;')
c.Append('auto* error_path_reversed = &error_path_reversed_vec;')
c.Append('const base::DictionaryValue& dict = root_dict;')
for prop in properties:
c.Concat(self._InitializePropertyToDefault(prop, 'out'))
for prop in properties:
c.Cblock(
self._ParsePropertyFromDictionary(prop, is_root_manifest_type=True))
c.Append('return true;')
c.Eblock('}')
c.Substitute({
'classname_in_namespace': classname_in_namespace,
'classname': classname
})
return c
def _GenerateParseFromDictionaryForChildManifestType(
self, classname, classname_in_namespace, properties):
# type: (str, str, List[Property]) -> Code
"""Generates T::ParseFromDictionary for a child manifest type.
"""
params = [
'const base::DictionaryValue& root_dict',
'base::StringPiece key',
'%(classname)s* out',
'std::u16string* error',
'std::vector<base::StringPiece>* error_path_reversed'
]
c = Code()
    c.Append('// static')
c.Append('bool %(classname_in_namespace)s::ParseFromDictionary(')
# Make |generate_error_messages| False since |error| is already included
# within |params|.
c.Sblock('%s) {' %
self._GenerateParams(params, generate_error_messages=False))
c.Append('DCHECK(out);')
c.Append('DCHECK(error);')
c.Append('DCHECK(error_path_reversed);')
c.Append()
c.Append(
'const base::Value* value = '
'::json_schema_compiler::manifest_parse_util::FindKeyOfType('
'root_dict, key, base::Value::Type::DICTIONARY, error, '
'error_path_reversed);'
)
c.Sblock('if (!value)')
c.Append('return false;')
c.Eblock('const base::DictionaryValue& dict = '
'base::Value::AsDictionaryValue(*value);')
for prop in properties:
c.Concat(self._InitializePropertyToDefault(prop, 'out'))
for prop in properties:
c.Cblock(
self._ParsePropertyFromDictionary(prop, is_root_manifest_type=False))
c.Append('return true;')
c.Eblock('}')
c.Substitute({
'classname_in_namespace': classname_in_namespace,
'classname': classname
})
return c
def _ParsePropertyFromDictionary(self, property, is_root_manifest_type):
# type: (Property, bool) -> Code
"""Generates the code to parse a single property from a dictionary.
"""
supported_property_types = {
PropertyType.ARRAY,
PropertyType.BOOLEAN,
PropertyType.DOUBLE,
PropertyType.INT64,
PropertyType.INTEGER,
PropertyType.OBJECT,
PropertyType.STRING,
PropertyType.ENUM
}
c = Code()
underlying_type = self._type_helper.FollowRef(property.type_)
underlying_property_type = underlying_type.property_type
underlying_item_type = (
self._type_helper.FollowRef(underlying_type.item_type)
if underlying_property_type is PropertyType.ARRAY
else None)
assert (underlying_property_type in supported_property_types), (
'Property type not implemented for %s, type: %s, namespace: %s' %
(underlying_property_type, underlying_type.name,
underlying_type.namespace.name))
property_constant = cpp_util.UnixNameToConstantName(property.unix_name)
out_expression = '&out->%s' % property.unix_name
def get_enum_params(enum_type, include_optional_param):
# type: (Type, bool) -> List[str]
enum_name = cpp_util.Classname(
schema_util.StripNamespace(enum_type.name))
cpp_type_namespace = (''
if enum_type.namespace == self._namespace
else '%s::' % enum_type.namespace.unix_name)
params = [
'dict',
'%s' % property_constant,
'&%sParse%s' % (cpp_type_namespace, enum_name)
]
if include_optional_param:
params.append('true' if property.optional else 'false')
params += [
'%s%s' % (cpp_type_namespace,
self._type_helper.GetEnumNoneValue(enum_type)),
'%s' % out_expression,
'error',
'error_path_reversed'
]
return params
if underlying_property_type == PropertyType.ENUM:
params = get_enum_params(underlying_type, include_optional_param=True)
func_name = 'ParseEnumFromDictionary'
elif underlying_item_type and \
underlying_item_type.property_type == PropertyType.ENUM:
# Array of enums.
params = get_enum_params(underlying_item_type,
include_optional_param=False)
func_name = 'ParseEnumArrayFromDictionary'
else:
params = [
'dict',
'%s' % property_constant,
'%s' % out_expression,
'error',
'error_path_reversed'
]
func_name = 'ParseFromDictionary'
c.Sblock(
'if (!::json_schema_compiler::manifest_parse_util::%s(%s)) {'
% (func_name, self._GenerateParams(params, generate_error_messages=False))
)
if is_root_manifest_type:
c.Append('::json_schema_compiler::manifest_parse_util::'
'PopulateFinalError(error, error_path_reversed);')
else:
c.Append('error_path_reversed->push_back(key);')
c.Append('return false;')
c.Eblock('}')
return c
def _GenerateObjectTypeToValue(self, cpp_namespace, type_):
"""Generates a function that serializes an object-representing type
into a base::DictionaryValue.
"""
c = Code()
(c.Sblock('std::unique_ptr<base::DictionaryValue> %s::ToValue() const {' %
cpp_namespace)
.Append('auto to_value_result =')
.Append(' std::make_unique<base::DictionaryValue>();')
.Append()
)
for prop in type_.properties.values():
prop_var = 'this->%s' % prop.unix_name
if prop.optional:
underlying_type = self._type_helper.FollowRef(prop.type_)
if underlying_type.property_type == PropertyType.ENUM:
# Optional enum values are generated with a NONE enum value,
# potentially from another namespace.
maybe_namespace = ''
if underlying_type.namespace != self._namespace:
maybe_namespace = '%s::' % underlying_type.namespace.unix_name
c.Sblock('if (%s != %s%s) {' %
(prop_var,
maybe_namespace,
self._type_helper.GetEnumNoneValue(prop.type_)))
else:
c.Sblock('if (%s.get()) {' % prop_var)
# ANY is a base::Value which is abstract and cannot be a direct member, so
# it will always be a pointer.
is_ptr = prop.optional or prop.type_.property_type == PropertyType.ANY
c.Cblock(self._CreateValueFromType(
'to_value_result->SetWithoutPathExpansion("%s", %%s);' % prop.name,
prop.name,
prop.type_,
prop_var,
is_ptr=is_ptr))
if prop.optional:
c.Eblock('}')
if type_.additional_properties is not None:
if type_.additional_properties.property_type == PropertyType.ANY:
c.Append('to_value_result->MergeDictionary(&additional_properties);')
else:
(c.Sblock('for (const auto& it : additional_properties) {')
.Cblock(self._CreateValueFromType(
'to_value_result->SetWithoutPathExpansion(it.first, %s);',
type_.additional_properties.name,
type_.additional_properties,
'it.second'))
.Eblock('}')
)
return (c.Append()
.Append('return to_value_result;')
.Eblock('}'))
def _GenerateChoiceTypeToValue(self, cpp_namespace, type_):
"""Generates a function that serializes a choice-representing type
into a base::Value.
"""
c = Code()
c.Sblock('std::unique_ptr<base::Value> %s::ToValue() const {' %
cpp_namespace)
c.Append('std::unique_ptr<base::Value> result;')
for choice in type_.choices:
choice_var = 'as_%s' % choice.unix_name
      # Enums cannot be wrapped in std::unique_ptr, but the XXX_NONE enum
      # value is equal to 0.
(c.Sblock('if (%s) {' % choice_var)
.Append('DCHECK(!result) << "Cannot set multiple choices for %s";' %
type_.unix_name).Cblock(self._CreateValueFromType(
'result = %s;', choice.name, choice, choice_var, True))
.Eblock('}'))
(c.Append('DCHECK(result) << "Must set at least one choice for %s";' %
type_.unix_name).Append('return result;').Eblock('}'))
return c
def _GenerateFunction(self, function):
"""Generates the definitions for function structs.
"""
c = Code()
# TODO(kalman): use function.unix_name not Classname.
function_namespace = cpp_util.Classname(function.name)
    # Windows has a #define for SendMessage, so to avoid any issues, we
    # must not use that name.
if function_namespace == 'SendMessage':
function_namespace = 'PassMessage'
(c.Append('namespace %s {' % function_namespace)
.Append()
)
# Params::Populate function
if function.params:
c.Concat(self._GeneratePropertyFunctions('Params', function.params))
(c.Append('Params::Params() = default;')
.Append('Params::~Params() = default;')
.Append()
.Cblock(self._GenerateFunctionParamsCreate(function))
)
# Results::Create function
if function.returns_async:
c.Concat(
self._GenerateAsyncResponseArguments('Results',
function.returns_async.params))
c.Append('} // namespace %s' % function_namespace)
return c
def _GenerateEvent(self, event):
# TODO(kalman): use event.unix_name not Classname.
c = Code()
event_namespace = cpp_util.Classname(event.name)
(c.Append('namespace %s {' % event_namespace)
.Append()
.Cblock(self._GenerateEventNameConstant(event))
.Cblock(self._GenerateAsyncResponseArguments(None, event.params))
.Append('} // namespace %s' % event_namespace)
)
return c
def _CreateValueFromType(self, code, prop_name, type_, var, is_ptr=False):
"""Creates a base::Value given a type. Generated code passes ownership
to caller via std::unique_ptr.
var: variable or variable*
    E.g. for std::string, generates std::make_unique<base::Value>(var).
"""
c = Code()
underlying_type = self._type_helper.FollowRef(type_)
if underlying_type.property_type == PropertyType.ARRAY:
# Enums are treated specially because C++ templating thinks that they're
# ints, but really they're strings. So we create a vector of strings and
# populate it with the names of the enum in the array. The |ToString|
# function of the enum can be in another namespace when the enum is
      # referenced. Templates cannot be used here because C++ templating does
# not support passing a namespace as an argument.
item_type = self._type_helper.FollowRef(underlying_type.item_type)
if item_type.property_type == PropertyType.ENUM:
varname = ('*' if is_ptr else '') + '(%s)' % var
maybe_namespace = ''
if type_.item_type.property_type == PropertyType.REF:
maybe_namespace = '%s::' % item_type.namespace.unix_name
enum_list_var = '%s_list' % prop_name
# Scope the std::vector variable declaration inside braces.
(c.Sblock('{')
.Append('std::vector<std::string> %s;' % enum_list_var)
.Append('for (const auto& it : %s) {' % varname)
.Append('%s.push_back(%sToString(it));' % (enum_list_var,
maybe_namespace))
.Eblock('}'))
# Because the std::vector above is always created for both required and
# optional enum arrays, |is_ptr| is set to false and uses the
# std::vector to create the values.
(c.Append(code %
self._GenerateCreateValueFromType(type_, enum_list_var, False))
.Append('}'))
return c
c.Append(code % self._GenerateCreateValueFromType(type_, var, is_ptr))
return c
def _GenerateCreateValueFromType(self, type_, var, is_ptr):
"""Generates the statement to create a base::Value given a type.
type_: The type of the values being converted.
var: The name of the variable.
is_ptr: Whether |type_| is optional.
"""
underlying_type = self._type_helper.FollowRef(type_)
if (underlying_type.property_type == PropertyType.CHOICES or
underlying_type.property_type == PropertyType.OBJECT):
if is_ptr:
return '(%s)->ToValue()' % var
else:
return '(%s).ToValue()' % var
elif (underlying_type.property_type == PropertyType.ANY or
(underlying_type.property_type == PropertyType.FUNCTION and
not underlying_type.is_serializable_function)):
if is_ptr:
vardot = '(%s)->' % var
else:
vardot = '(%s).' % var
return '%sCreateDeepCopy()' % vardot
elif underlying_type.property_type == PropertyType.ENUM:
maybe_namespace = ''
if type_.property_type == PropertyType.REF:
maybe_namespace = '%s::' % underlying_type.namespace.unix_name
return 'std::make_unique<base::Value>(%sToString(%s))' % (
maybe_namespace, var)
elif underlying_type.property_type == PropertyType.BINARY:
if is_ptr:
var = '*%s' % var
return 'std::make_unique<base::Value>(%s)' % var
elif underlying_type.property_type == PropertyType.ARRAY:
if is_ptr:
var = '*%s' % var
return '%s' % self._util_cc_helper.CreateValueFromArray(var)
elif (underlying_type.property_type.is_fundamental or
underlying_type.is_serializable_function):
if is_ptr:
var = '*%s' % var
return 'std::make_unique<base::Value>(%s)' % var
else:
raise NotImplementedError('Conversion of %s to base::Value not '
                                'implemented' % repr(type_.property_type))
def _GenerateParamsCheck(self, function, var):
"""Generates a check for the correct number of arguments when creating
Params.
"""
c = Code()
num_required = 0
for param in function.params:
if not param.optional:
num_required += 1
if num_required == len(function.params):
c.Sblock('if (%(var)s.size() != %(total)d) {')
elif not num_required:
c.Sblock('if (%(var)s.size() > %(total)d) {')
else:
c.Sblock('if (%(var)s.size() < %(required)d'
' || %(var)s.size() > %(total)d) {')
(c.Concat(self._AppendError16(
'u"expected %%(total)d arguments, got " '
'+ base::NumberToString16(%%(var)s.size())'))
.Append('return nullptr;')
.Eblock('}')
.Substitute({
'var': var,
'required': num_required,
'total': len(function.params),
}))
return c
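  # Illustrative sketch of the emitted check for a function with exactly two
  # required parameters, assuming error messages are enabled:
  #   if (args.size() != 2) {
  #     *error = u"expected 2 arguments, got "
  #         + base::NumberToString16(args.size());
  #     return nullptr;
  #   }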
def _GenerateFunctionParamsCreate(self, function):
"""Generate function to create an instance of Params. The generated
function takes a base::Value::ConstListView of arguments.
E.g for function "Bar", generate Bar::Params::Create()
"""
c = Code()
(c.Append('// static')
.Sblock('std::unique_ptr<Params> Params::Create(%s) {' %
self._GenerateParams([
'const base::Value::ConstListView& args']))
)
if self._generate_error_messages:
c.Append('DCHECK(error);')
(c.Concat(self._GenerateParamsCheck(function, 'args'))
.Append('std::unique_ptr<Params> params(new Params());')
)
for param in function.params:
c.Concat(self._InitializePropertyToDefault(param, 'params'))
for i, param in enumerate(function.params):
# Any failure will cause this function to return. If any argument is
# incorrect or missing, those following it are not processed. Note that
# for optional arguments, we allow missing arguments and proceed because
# there may be other arguments following it.
failure_value = 'std::unique_ptr<Params>()'
c.Append()
value_var = param.unix_name + '_value'
(c.Append('if (%(i)s < args.size() &&')
.Sblock(' !args[%(i)s].is_none()) {')
.Append('const base::Value& %(value_var)s = args[%(i)s];')
.Concat(self._GeneratePopulatePropertyFromValue(
param, value_var, 'params', failure_value))
.Eblock('}')
)
if not param.optional:
(c.Sblock('else {')
.Concat(self._AppendError16('u"\'%%(key)s\' is required"'))
.Append('return %s;' % failure_value)
.Eblock('}'))
c.Substitute({'value_var': value_var, 'i': i, 'key': param.name})
(c.Append()
.Append('return params;')
.Eblock('}')
.Append()
)
return c
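  # Illustrative sketch of the emitted creator (argument handling elided):
  #   // static
  #   std::unique_ptr<Params> Params::Create(
  #       const base::Value::ConstListView& args) {
  #     ... argument-count check ...
  #     std::unique_ptr<Params> params(new Params());
  #     ... populate each parameter from args by position ...
  #     return params;
  #   }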
def _GeneratePopulatePropertyFromValue(self,
prop,
src_var,
dst_class_var,
failure_value):
"""Generates code to populate property |prop| of |dst_class_var| (a
pointer) from a Value. See |_GeneratePopulateVariableFromValue| for
semantics.
"""
return self._GeneratePopulateVariableFromValue(prop.type_,
src_var,
'%s->%s' % (dst_class_var,
prop.unix_name),
failure_value,
is_ptr=prop.optional)
def _GeneratePopulateVariableFromValue(self,
type_,
src_var,
dst_var,
failure_value,
is_ptr=False):
"""Generates code to populate a variable |dst_var| of type |type_| from a
Value |src_var|. In the generated code, if |dst_var| fails to be populated
then Populate will return |failure_value|.
"""
c = Code()
underlying_type = self._type_helper.FollowRef(type_)
if (underlying_type.property_type.is_fundamental or
underlying_type.is_serializable_function):
is_string_or_function = (
underlying_type.property_type == PropertyType.STRING
or (underlying_type.property_type == PropertyType.FUNCTION
and underlying_type.is_serializable_function))
c.Append('auto%s temp = %s;' % (
'*' if is_string_or_function else '',
cpp_util.GetAsFundamentalValue(underlying_type, src_var)
))
if is_string_or_function:
(c.Sblock('if (!temp) {')
.Concat(self._AppendError16(
'u"\'%%(key)s\': expected ' + '%s, got " + %s' % (
type_.name,
self._util_cc_helper.GetValueTypeString('%%(src_var)s')))))
else:
(c.Sblock('if (!temp.has_value()) {')
.Concat(self._AppendError16(
'u"\'%%(key)s\': expected ' + '%s, got " + %s' % (
type_.name,
self._util_cc_helper.GetValueTypeString('%%(src_var)s')))))
if is_ptr:
c.Append('%(dst_var)s.reset();')
c.Append('return %(failure_value)s;')
(c.Eblock('}'))
if is_ptr:
if is_string_or_function:
c.Append('%(dst_var)s = std::make_unique<%(cpp_type)s>(*temp);')
else:
c.Append('%(dst_var)s = ' +
'std::make_unique<%(cpp_type)s>(temp.value());')
else:
if is_string_or_function:
c.Append('%(dst_var)s = *temp;')
else:
c.Append('%(dst_var)s = temp.value();')
elif underlying_type.property_type == PropertyType.OBJECT:
if is_ptr:
(c.Append('const base::DictionaryValue* dictionary = nullptr;')
.Sblock('if (!%(src_var)s.GetAsDictionary(&dictionary)) {')
.Concat(self._AppendError16(
'u"\'%%(key)s\': expected dictionary, got " + ' +
self._util_cc_helper.GetValueTypeString('%%(src_var)s')))
.Append('return %(failure_value)s;')
)
(c.Eblock('}')
.Sblock('else {')
.Append('auto temp = std::make_unique<%(cpp_type)s>();')
.Append('if (!%%(cpp_type)s::Populate(%s)) {' % self._GenerateArgs(
('*dictionary', 'temp.get()')))
.Append(' return %(failure_value)s;')
)
(c.Append('}')
.Append('else')
.Append(' %(dst_var)s = std::move(temp);')
.Eblock('}')
)
else:
(c.Append('const base::DictionaryValue* dictionary = nullptr;')
.Sblock('if (!%(src_var)s.GetAsDictionary(&dictionary)) {')
.Concat(self._AppendError16(
'u"\'%%(key)s\': expected dictionary, got " + ' +
self._util_cc_helper.GetValueTypeString('%%(src_var)s')))
.Append('return %(failure_value)s;')
.Eblock('}')
.Append('if (!%%(cpp_type)s::Populate(%s)) {' % self._GenerateArgs(
('*dictionary', '&%(dst_var)s')))
.Append(' return %(failure_value)s;')
.Append('}')
)
elif underlying_type.property_type == PropertyType.FUNCTION:
assert not underlying_type.is_serializable_function, \
'Serializable functions should have been handled above.'
if is_ptr: # Non-serializable functions are just represented as dicts.
c.Append('%(dst_var)s = std::make_unique<base::DictionaryValue>();')
elif underlying_type.property_type == PropertyType.ANY:
c.Append('%(dst_var)s = %(src_var)s.CreateDeepCopy();')
elif underlying_type.property_type == PropertyType.ARRAY:
# util_cc_helper deals with optional and required arrays
(c.Sblock('if (!%(src_var)s.is_list()) {')
.Concat(self._AppendError16(
'u"\'%%(key)s\': expected list, got " + ' +
self._util_cc_helper.GetValueTypeString('%%(src_var)s')))
.Append('return %(failure_value)s;')
)
c.Eblock('}')
c.Sblock('else {')
item_type = self._type_helper.FollowRef(underlying_type.item_type)
if item_type.property_type == PropertyType.ENUM:
c.Concat(self._GenerateListValueToEnumArrayConversion(
item_type,
src_var,
dst_var,
failure_value,
is_ptr=is_ptr))
else:
args = ['%(src_var)s.GetList()', '&%(dst_var)s']
if self._generate_error_messages:
c.Append('std::u16string array_parse_error;')
args.append('&array_parse_error')
c.Append('if (!%s(%s)) {' % (
self._util_cc_helper.PopulateArrayFromListFunction(is_ptr),
self._GenerateArgs(args, generate_error_messages=False)))
c.Sblock()
if self._generate_error_messages:
c.Append(
'array_parse_error = u"Error at key \'%(key)s\': " + '
'array_parse_error;'
)
c.Concat(self._AppendError16('array_parse_error'))
c.Append('return %(failure_value)s;')
c.Eblock('}')
c.Eblock('}')
elif underlying_type.property_type == PropertyType.CHOICES:
if is_ptr:
(c.Append('auto temp = std::make_unique<%(cpp_type)s>();')
.Append('if (!%%(cpp_type)s::Populate(%s))' % self._GenerateArgs(
('%(src_var)s', 'temp.get()')))
.Append(' return %(failure_value)s;')
.Append('%(dst_var)s = std::move(temp);')
)
else:
(c.Append('if (!%%(cpp_type)s::Populate(%s))' % self._GenerateArgs(
('%(src_var)s', '&%(dst_var)s')))
.Append(' return %(failure_value)s;'))
elif underlying_type.property_type == PropertyType.ENUM:
c.Concat(self._GenerateStringToEnumConversion(underlying_type,
src_var,
dst_var,
failure_value))
elif underlying_type.property_type == PropertyType.BINARY:
(c.Sblock('if (!%(src_var)s.is_blob()) {')
.Concat(self._AppendError16(
'u"\'%%(key)s\': expected binary, got " + ' +
self._util_cc_helper.GetValueTypeString('%%(src_var)s')))
.Append('return %(failure_value)s;')
)
(c.Eblock('}')
.Sblock('else {')
)
if is_ptr:
c.Append('%(dst_var)s = std::make_unique<std::vector<uint8_t>>('
'%(src_var)s.GetBlob());')
else:
c.Append('%(dst_var)s = %(src_var)s.GetBlob();')
c.Eblock('}')
else:
raise NotImplementedError(type_)
if c.IsEmpty():
return c
return Code().Sblock('{').Concat(c.Substitute({
'cpp_type': self._type_helper.GetCppType(type_),
'src_var': src_var,
'dst_var': dst_var,
'failure_value': failure_value,
'key': type_.name,
'parent_key': type_.parent.name,
})).Eblock('}')
def _GenerateListValueToEnumArrayConversion(self,
item_type,
src_var,
dst_var,
failure_value,
is_ptr=False):
"""Returns Code that converts a list Value of string constants from
|src_var| into an array of enums of |type_| in |dst_var|. On failure,
returns |failure_value|.
"""
c = Code()
accessor = '.'
if is_ptr:
accessor = '->'
cpp_type = self._type_helper.GetCppType(item_type, is_in_container=True)
c.Append('%s = std::make_unique<std::vector<%s>>();' %
(dst_var, cpp_type))
(c.Sblock('for (const auto& it : (%s).GetList()) {' % src_var)
.Append('%s tmp;' % self._type_helper.GetCppType(item_type))
.Concat(self._GenerateStringToEnumConversion(item_type,
'(it)',
'tmp',
failure_value))
.Append('%s%spush_back(tmp);' % (dst_var, accessor))
.Eblock('}')
)
return c
def _GenerateStringToEnumConversion(self,
type_,
src_var,
dst_var,
failure_value):
"""Returns Code that converts a string type in |src_var| to an enum with
type |type_| in |dst_var|. In the generated code, if |src_var| is not
a valid enum name then the function will return |failure_value|.
"""
if type_.property_type != PropertyType.ENUM:
raise TypeError(type_)
c = Code()
enum_as_string = '%s_as_string' % type_.unix_name
cpp_type_namespace = ''
if type_.namespace != self._namespace:
cpp_type_namespace = '%s::' % type_.namespace.unix_name
(c.Append('const std::string* %s = %s.GetIfString();' % (enum_as_string,
src_var))
.Sblock('if (!%s) {' % enum_as_string)
.Concat(self._AppendError16(
'u"\'%%(key)s\': expected string, got " + ' +
self._util_cc_helper.GetValueTypeString('%%(src_var)s')))
.Append('return %s;' % failure_value)
.Eblock('}')
.Append('%s = %sParse%s(*%s);' % (dst_var,
cpp_type_namespace,
cpp_util.Classname(type_.name),
enum_as_string))
.Sblock('if (%s == %s%s) {' % (dst_var,
cpp_type_namespace,
self._type_helper.GetEnumNoneValue(type_)))
.Concat(self._AppendError16(
'u\"\'%%(key)s\': expected \\"' +
'\\" or \\"'.join(
enum_value.name
for enum_value in self._type_helper.FollowRef(type_).enum_values) +
'\\", got \\"" + UTF8ToUTF16(*%s) + u"\\""' % enum_as_string))
.Append('return %s;' % failure_value)
.Eblock('}')
.Substitute({'src_var': src_var, 'key': type_.name})
)
return c
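  # Illustrative sketch of the emitted conversion for a hypothetical enum
  # "Mode", assuming |failure_value| is false (error appends elided):
  #   const std::string* mode_as_string = value.GetIfString();
  #   if (!mode_as_string) {
  #     return false;
  #   }
  #   out->mode = ParseMode(*mode_as_string);
  #   if (out->mode == MODE_NONE) {
  #     return false;
  #   }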
def _GeneratePropertyFunctions(self, namespace, params):
"""Generates the member functions for a list of parameters.
"""
return self._GenerateTypes(namespace, (param.type_ for param in params))
def _GenerateTypes(self, namespace, types):
"""Generates the member functions for a list of types.
"""
c = Code()
for type_ in types:
c.Cblock(self._GenerateType(namespace, type_))
return c
def _GenerateEnumToString(self, cpp_namespace, type_):
"""Generates ToString() which gets the string representation of an enum.
"""
c = Code()
classname = cpp_util.Classname(schema_util.StripNamespace(type_.name))
if cpp_namespace is not None:
c.Append('// static')
maybe_namespace = '' if cpp_namespace is None else '%s::' % cpp_namespace
c.Sblock('const char* %sToString(%s enum_param) {' %
(maybe_namespace, classname))
c.Sblock('switch (enum_param) {')
for enum_value in self._type_helper.FollowRef(type_).enum_values:
name = enum_value.name
(c.Append('case %s: ' % self._type_helper.GetEnumValue(type_, enum_value))
.Append(' return "%s";' % name))
(c.Append('case %s:' % self._type_helper.GetEnumNoneValue(type_))
.Append(' return "";')
.Eblock('}')
.Append('NOTREACHED();')
.Append('return "";')
.Eblock('}')
)
return c
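  # Illustrative sketch of the emitted function for a hypothetical enum
  # "Mode" with values "auto" and "manual":
  #   const char* ToString(Mode enum_param) {
  #     switch (enum_param) {
  #       case MODE_AUTO: return "auto";
  #       case MODE_MANUAL: return "manual";
  #       case MODE_NONE: return "";
  #     }
  #     NOTREACHED();
  #     return "";
  #   }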
def _GenerateEnumFromString(self, cpp_namespace, type_):
"""Generates FromClassNameString() which gets an enum from its string
representation.
"""
c = Code()
classname = cpp_util.Classname(schema_util.StripNamespace(type_.name))
if cpp_namespace is not None:
c.Append('// static')
maybe_namespace = '' if cpp_namespace is None else '%s::' % cpp_namespace
c.Sblock('%s%s %sParse%s(const std::string& enum_string) {' %
(maybe_namespace, classname, maybe_namespace, classname))
    for enum_value in self._type_helper.FollowRef(type_).enum_values:
# This is broken up into all ifs with no else ifs because we get
# "fatal error C1061: compiler limit : blocks nested too deeply"
# on Windows.
name = enum_value.name
(c.Append('if (enum_string == "%s")' % name)
.Append(' return %s;' %
self._type_helper.GetEnumValue(type_, enum_value)))
(c.Append('return %s;' % self._type_helper.GetEnumNoneValue(type_))
.Eblock('}')
)
return c
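  # Illustrative sketch of the emitted parser for the same hypothetical
  # "Mode" enum (flat ifs, per the compiler-limit comment above):
  #   Mode ParseMode(const std::string& enum_string) {
  #     if (enum_string == "auto")
  #       return MODE_AUTO;
  #     if (enum_string == "manual")
  #       return MODE_MANUAL;
  #     return MODE_NONE;
  #   }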
def _GenerateAsyncResponseArguments(self, function_scope, params):
"""Generate the function that creates base::Value parameters to return to a
callback, promise or pass to an event listener.
E.g for function "Bar", generate Bar::Results::Create
E.g for event "Baz", generate Baz::Create
function_scope: the function scope path, e.g. Foo::Bar for the function
Foo::Bar::Baz(). May be None if there is no function scope.
params: the parameters passed as results or event details.
"""
c = Code()
c.Concat(self._GeneratePropertyFunctions(function_scope, params))
(c.Sblock('std::vector<base::Value> %(function_scope)s'
'Create(%(declaration_list)s) {')
.Append('std::vector<base::Value> create_results;')
.Append('create_results.reserve(%d);' % len(params) if len(params)
else '')
)
declaration_list = []
for param in params:
declaration_list.append(cpp_util.GetParameterDeclaration(
param, self._type_helper.GetCppType(param.type_)))
c.Cblock(self._CreateValueFromType(
'create_results.push_back(base::Value::FromUniquePtrValue(%s));',
param.name,
param.type_,
param.unix_name))
c.Append('return create_results;')
c.Eblock('}')
c.Substitute({
'function_scope': ('%s::' % function_scope) if function_scope else '',
'declaration_list': ', '.join(declaration_list),
'param_names': ', '.join(param.unix_name for param in params)
})
return c
def _GenerateEventNameConstant(self, event):
"""Generates a constant string array for the event name.
"""
c = Code()
c.Append('const char kEventName[] = "%s.%s";' % (
self._namespace.name, event.name))
return c
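  # Illustrative output for a hypothetical namespace "alarms" with an event
  # "onAlarm":
  #   const char kEventName[] = "alarms.onAlarm";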
def _InitializePropertyToDefault(self, prop, dst):
"""Initialize a model.Property to its default value inside an object.
    E.g. for optional enum "state", generates dst->state = STATE_NONE;
dst: Type*
"""
c = Code()
underlying_type = self._type_helper.FollowRef(prop.type_)
if (underlying_type.property_type == PropertyType.ENUM and
prop.optional):
namespace_prefix = ('%s::' % underlying_type.namespace.unix_name
if underlying_type.namespace != self._namespace
else '')
c.Append('%s->%s = %s%s;' % (
dst,
prop.unix_name,
namespace_prefix,
self._type_helper.GetEnumNoneValue(prop.type_)))
return c
def _AppendError16(self, error16):
"""Appends the given |error16| expression/variable to |error|.
"""
c = Code()
if not self._generate_error_messages:
return c
c.Append('DCHECK(error->empty());')
c.Append('*error = %s;' % error16)
return c
def _GenerateParams(self, params, generate_error_messages=None):
"""Builds the parameter list for a function, given an array of parameters.
If |generate_error_messages| is specified, it overrides
|self._generate_error_messages|.
"""
if generate_error_messages is None:
generate_error_messages = self._generate_error_messages
if generate_error_messages:
params = list(params) + ['std::u16string* error']
return ', '.join(str(p) for p in params)
def _GenerateArgs(self, args, generate_error_messages=None):
"""Builds the argument list for a function, given an array of arguments.
If |generate_error_messages| is specified, it overrides
|self._generate_error_messages|.
"""
if generate_error_messages is None:
generate_error_messages = self._generate_error_messages
if generate_error_messages:
args = list(args) + ['error']
return ', '.join(str(a) for a in args)
|
ric2b/Vivaldi-browser
|
chromium/tools/json_schema_compiler/cc_generator.py
|
Python
|
bsd-3-clause
| 51,753
|
#!/usr/bin/env python
from flask_rest_1 import app
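# Note: debug=True enables Flask's auto-reloader and interactive debugger;
# it is intended for development only, not production.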
app.run(debug=True)
|
bable5/Backbone-Playground
|
runserver.py
|
Python
|
mit
| 74
|
# -*- coding: utf-8 -*-
# Copyright 2018 Therp BV <https://therp.nl>
# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl.html).
from base64 import b64encode
from openerp.tests.common import TransactionCase
from openerp.exceptions import AccessError, ValidationError
class TestAttachmentLock(TransactionCase):
def test_attachment_lock(self):
demo = self.env.ref('base.user_demo')
testattachment = self.env['ir.attachment'].create({
'name': 'testattachment',
'datas': b64encode('hello world'),
'datas_fname': 'test.txt',
})
self.assertTrue(testattachment.can_lock)
self.assertFalse(testattachment.locked)
testattachment.lock()
self.assertTrue(testattachment.can_lock)
self.assertTrue(testattachment.locked)
with self.assertRaises(ValidationError):
testattachment.sudo(demo).write({
'datas': b64encode('hello world2'),
})
with self.assertRaises(AccessError):
testattachment.sudo(demo).lock()
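        # (4, id) is the ORM many2many command that links an existing record
        # to the field without creating or removing anything else.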
demo.write({'groups_id': [
(4, self.env.ref('attachment_lock.group_attachment_lock').id),
]})
with self.assertRaises(AccessError):
testattachment.sudo(demo).lock()
testattachment.unlock()
self.assertTrue(testattachment.sudo(demo).can_lock)
testattachment.sudo(demo).lock()
self.assertTrue(testattachment.sudo(demo).can_lock)
self.assertTrue(testattachment.sudo(demo).locked)
|
acsone/knowledge
|
attachment_lock/tests/test_attachment_lock.py
|
Python
|
agpl-3.0
| 1,537
|
"""
Admonition extension for Python-Markdown
========================================
Adds rST-style admonitions. Inspired by [rST][] feature with the same name.
[rST]: http://docutils.sourceforge.net/docs/ref/rst/directives.html#specific-admonitions # noqa
See <https://pythonhosted.org/Markdown/extensions/admonition.html>
for documentation.
Original code Copyright [Tiago Serafim](http://www.tiagoserafim.com/).
All changes Copyright The Python Markdown Project
License: [BSD](http://www.opensource.org/licenses/bsd-license.php)
"""
from __future__ import absolute_import
from __future__ import unicode_literals
from . import Extension
from ..blockprocessors import BlockProcessor
from ..util import etree
import re
class AdmonitionExtension(Extension):
""" Admonition extension for Python-Markdown. """
def extendMarkdown(self, md, md_globals):
""" Add Admonition to Markdown instance. """
md.registerExtension(self)
md.parser.blockprocessors.add('admonition',
AdmonitionProcessor(md.parser),
'_begin')
class AdmonitionProcessor(BlockProcessor):
CLASSNAME = 'admonition'
CLASSNAME_TITLE = 'admonition-title'
RE = re.compile(r'(?:^|\n)!!!\ ?([\w\-]+)(?:\ "(.*?)")?')
def test(self, parent, block):
sibling = self.lastChild(parent)
return self.RE.search(block) or \
(block.startswith(' ' * self.tab_length) and sibling is not None and
sibling.get('class', '').find(self.CLASSNAME) != -1)
def run(self, parent, blocks):
sibling = self.lastChild(parent)
block = blocks.pop(0)
m = self.RE.search(block)
if m:
block = block[m.end() + 1:] # removes the first line
block, theRest = self.detab(block)
if m:
klass, title = self.get_class_and_title(m)
div = etree.SubElement(parent, 'div')
div.set('class', '%s %s' % (self.CLASSNAME, klass))
if title:
p = etree.SubElement(div, 'p')
p.text = title
p.set('class', self.CLASSNAME_TITLE)
else:
div = sibling
self.parser.parseChunk(div, block)
if theRest:
# This block contained unindented line(s) after the first indented
# line. Insert these lines as the first block of the master blocks
# list for future processing.
blocks.insert(0, theRest)
def get_class_and_title(self, match):
klass, title = match.group(1).lower(), match.group(2)
if title is None:
# no title was provided, use the capitalized classname as title
# e.g.: `!!! note` will render
# `<p class="admonition-title">Note</p>`
title = klass.capitalize()
elif title == '':
# an explicit blank title should not be rendered
# e.g.: `!!! warning ""` will *not* render `p` with a title
title = None
return klass, title
def makeExtension(*args, **kwargs):
return AdmonitionExtension(*args, **kwargs)
|
unnikrishnankgs/va
|
venv/lib/python3.5/site-packages/markdown/extensions/admonition.py
|
Python
|
bsd-2-clause
| 3,158
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# setup.py
import sys
from cx_Freeze import setup, Executable
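# base="Win32GUI" builds a windowed executable (no console window) on Windows.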
executables = [Executable("TRGUI.py", icon="exe.ico", base="Win32GUI", appendScriptToExe=True, appendScriptToLibrary=False)]
setup(
name = "Travian Raider",
version = "2.1",
description = "A toolset for the browser game Travian",
executables = executables)
|
Elkasitu/bearded-octo-bear
|
src/setup.py
|
Python
|
gpl-3.0
| 375
|
#!/usr/bin/env python
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# See LICENSE for more details.
#
# Copyright: 2016 IBM
# Author: Pavithra <pavrampu@linux.vnet.ibm.com>
import os
from shutil import copyfile
from avocado import Test
from avocado import main
from avocado.utils import process, distro
from avocado import skipIf
from avocado.utils.software_manager import SoftwareManager
IS_POWER_NV = 'PowerNV' in open('/proc/cpuinfo', 'r').read()
IS_KVM_GUEST = 'qemu' in open('/proc/cpuinfo', 'r').read()
class RASTools(Test):
"""
This test checks various RAS tools:
"""
is_fail = 0
def run_cmd(self, cmd):
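        # Runs |cmd| with sudo and tallies failures in self.is_fail instead
        # of failing immediately, so each test can report a total count.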
cmd_result = process.run(cmd, ignore_status=True, sudo=True,
shell=True)
if cmd_result.exit_status != 0:
self.is_fail += 1
return
def setUp(self):
if "ppc" not in distro.detect().arch:
self.cancel("supported only on Power platform")
sm = SoftwareManager()
for package in ("ppc64-diag", "powerpc-utils", "lsvpd", "ipmitool"):
if not sm.check_installed(package) and not sm.install(package):
self.error("Fail to install %s required for this"
" test." % package)
@skipIf(IS_POWER_NV or IS_KVM_GUEST, "This test is not supported on KVM guest or PowerNV platform")
def test1_set_poweron_time(self):
"""
set_poweron_time schedules the power on time
"""
self.log.info("===============Executing set_poweron_time tool test===="
"===========")
self.run_cmd("set_poweron_time -m")
self.run_cmd("set_poweron_time -h")
self.run_cmd("set_poweron_time -d m2")
self.run_cmd("set_poweron_time -t M6D15h12")
if self.is_fail >= 1:
self.fail("%s command(s) failed in set_poweron_time tool "
"verification" % self.is_fail)
@skipIf(IS_POWER_NV or IS_KVM_GUEST, "This test is not supported on KVM guest or PowerNV platform")
def test2_sys_ident_tool(self):
"""
sys_ident provides unique system identification information
"""
self.log.info("===============Executing sys_ident_tool test==========="
"====")
self.run_cmd("sys_ident -p")
self.run_cmd("sys_ident -s")
if self.is_fail >= 1:
self.fail("%s command(s) failed in sys_ident tool verification"
% self.is_fail)
def test3_lsmcode(self):
"""
lsmcode provides FW version information
"""
self.log.info("===============Executing lsmcode tool test============="
"==")
self.run_cmd("vpdupdate")
self.run_cmd("lsmcode")
self.run_cmd("lsmcode -A")
self.run_cmd("lsmcode -v")
self.run_cmd("lsmcode -D")
path_db = process.system_output("find /var/lib/lsvpd/ -iname vpd.db | "
"head -1", shell=True).strip()
if path_db:
copyfile_path = os.path.join(self.outputdir, 'vpd.db')
copyfile(path_db, copyfile_path)
self.run_cmd("lsmcode --path=%s" % copyfile_path)
path_tar = process.system_output("find /var/lib/lsvpd/ -iname vpd.*.gz"
" | head -1", shell=True).strip()
if path_tar:
self.run_cmd("lsmcode --zip=%s" % path_tar)
if self.is_fail >= 1:
self.fail("%s command(s) failed in lsmcode tool verification"
% self.is_fail)
@skipIf(IS_POWER_NV, "Skipping test in PowerNV platform")
def test4_drmgr(self):
"""
drmgr can be used for pci, cpu or memory hotplug
"""
self.log.info("===============Executing drmgr tool test============="
"==")
self.run_cmd("drmgr -h")
self.run_cmd("drmgr -C")
lcpu_count = process.system_output("lparstat -i | "
"grep \"Online Virtual CPUs\" | "
"cut -d':' -f2",
shell=True).strip()
if lcpu_count:
lcpu_count = int(lcpu_count)
if lcpu_count >= 2:
self.run_cmd("drmgr -c cpu -r 1")
self.run_cmd("lparstat")
self.run_cmd("drmgr -c cpu -a 1")
self.run_cmd("lparstat")
if self.is_fail >= 1:
self.fail("%s command(s) failed in drmgr tool verification"
% self.is_fail)
def test5_lsprop(self):
"""
lsprop provides device tree information
"""
self.log.info("===============Executing lsprop tool test============="
"==")
self.run_cmd("lsprop")
if self.is_fail >= 1:
self.fail("%s command(s) failed in lsprop tool verification"
% self.is_fail)
@skipIf(IS_POWER_NV, "Skipping test in PowerNV platform")
def test6_lsslot(self):
"""
lsslot lists the slots based on the option provided
"""
self.log.info("===============Executing lsslot tool test============="
"==")
self.run_cmd("lsslot")
self.run_cmd("lsslot -c mem")
self.run_cmd("lsslot -ac pci")
if not IS_KVM_GUEST:
self.run_cmd("lsslot -c cpu -b")
self.run_cmd("lsslot -c pci -o")
slot = process.system_output("lsslot | cut -d' ' -f1 | head -2 | "
"tail -1", shell=True).strip()
if slot:
self.run_cmd("lsslot -s %s" % slot)
if self.is_fail >= 1:
self.fail("%s command(s) failed in lsslot tool verification"
% self.is_fail)
@skipIf(IS_POWER_NV, "Skipping test in PowerNV platform")
def test7_lsvio(self):
"""
lsvio lists the virtual I/O adopters and devices
"""
self.log.info("===============Executing lsvio tool test============="
"==")
self.run_cmd("lsvio -h")
self.run_cmd("lsvio -v")
self.run_cmd("lsvio -s")
self.run_cmd("lsvio -e")
self.run_cmd("lsvio -d")
if self.is_fail >= 1:
self.fail("%s command(s) failed in lsvio tool verification"
% self.is_fail)
def test8_nvram(self):
"""
nvram command retrieves and displays NVRAM data
"""
self.log.info("===============Executing nvram tool test============="
"==")
self.run_cmd("nvram --help")
self.run_cmd("nvram --partitions")
self.run_cmd("nvram --print-config -p common")
self.run_cmd("nvram --dump common --verbose")
if self.is_fail >= 1:
self.fail("%s command(s) failed in nvram tool verification"
% self.is_fail)
@skipIf(IS_POWER_NV, "Skipping test in PowerNV platform")
def test9_ofpathname(self):
"""
ofpathname translates the device name between logical name and Open
Firmware name
"""
self.log.info("===============Executing ofpathname tool test=========="
"=====")
self.run_cmd("ofpathname -h")
self.run_cmd("ofpathname -V")
disk_name = process.system_output("df -h | egrep '(s|v)da[1-8]' | "
"tail -1 | cut -d' ' -f1",
shell=True).strip()
if disk_name:
self.run_cmd("ofpathname %s" % disk_name)
of_name = process.system_output("ofpathname %s"
% disk_name).strip()
self.run_cmd("ofpathname -l %s" % of_name)
if self.is_fail >= 1:
self.fail("%s command(s) failed in ofpathname tool verification"
% self.is_fail)
@skipIf(IS_POWER_NV or IS_KVM_GUEST, "This test is not supported on KVM guest or PowerNV platform")
def test11_rtas_ibm_get_vpd(self):
"""
rtas_ibm_get_vpd gives vpd data
"""
self.log.info("===============Executing rtas_ibm_get_vpd tool test===="
"===========")
output_file = os.path.join(self.outputdir, 'output')
self.run_cmd("rtas_ibm_get_vpd >> %s 2>&1" % output_file)
if self.is_fail >= 1:
self.fail("%s command(s) failed in rtas_ibm_get_vpd tool "
"verification" % self.is_fail)
@skipIf(IS_POWER_NV, "Skipping test in PowerNV platform")
def test12_rtas_errd_and_rtas_dump(self):
"""
rtas_errd adds RTAS events to /var/log/platform and rtas_dump dumps
RTAS events
"""
self.log.info("===============Executing rtas_errd and rtas_dump tools"
" test===============")
self.log.info("1 - Injecting event")
rtas_file = os.path.join(self.datadir, 'rtas')
self.run_cmd("/usr/sbin/rtas_errd -d -f %s" % rtas_file)
self.log.info("2 - Checking if the event was dumped to /var/log/"
"platform")
self.run_cmd("cat /var/log/platform")
myplatform_file = os.path.join(self.outputdir, 'myplatformfile')
my_log = os.path.join(self.outputdir, 'mylog')
self.run_cmd("/usr/sbin/rtas_errd -d -f %s -p %s -l %s" %
(rtas_file, myplatform_file, my_log))
self.run_cmd("cat %s" % myplatform_file)
self.run_cmd("cat %s" % my_log)
self.log.info("3 - Verifying rtas_dump command")
self.run_cmd("rtas_dump -f %s" % rtas_file)
self.log.info("4 - Verifying rtas_dump with event number 2302")
self.run_cmd("rtas_dump -f %s -n 2302" % rtas_file)
self.log.info("5 - Verifying rtas_dump with verbose option")
self.run_cmd("rtas_dump -f %s -v" % rtas_file)
self.log.info("6 - Verifying rtas_dump with width 20")
self.run_cmd("rtas_dump -f %s -w 20" % rtas_file)
if self.is_fail >= 1:
self.fail("%s command(s) failed in rtas_errd and rtas_dump tools "
"verification" % self.is_fail)
@skipIf(IS_POWER_NV, "This test is not supported on PowerNV platform")
def test13_rtas_event_decode(self):
self.log.info("===============Executing rtas_event_decode tool test===="
"===========")
self.run_cmd("rtas_event_decode -w 500 -dv -n 2302 < %s" %
os.path.join(self.datadir, 'rtas'))
if self.is_fail >= 1:
self.fail("%s command(s) failed in rtas_event_decode tool "
"verification" % self.is_fail)
if __name__ == "__main__":
main()
|
vrbagalkote/avocado-misc-tests-1
|
generic/ras.py
|
Python
|
gpl-2.0
| 11,208
|
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import multiprocessing
import os
import tempfile
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.executor.play_iterator import PlayIterator
from ansible.executor.stats import AggregateStats
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text
from ansible.playbook.block import Block
from ansible.playbook.play_context import PlayContext
from ansible.plugins.loader import callback_loader, strategy_loader, module_loader
from ansible.plugins.callback import CallbackBase
from ansible.template import Templar
from ansible.utils.helpers import pct_to_int
from ansible.vars.hostvars import HostVars
from ansible.vars.reserved import warn_if_reserved
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
__all__ = ['TaskQueueManager']
class TaskQueueManager:
'''
This class handles the multiprocessing requirements of Ansible by
creating a pool of worker forks, a result handler fork, and a
manager object with shared datastructures/queues for coordinating
work between all processes.
The queue manager is responsible for loading the play strategy plugin,
which dispatches the Play's tasks to hosts.
'''
RUN_OK = 0
RUN_ERROR = 1
RUN_FAILED_HOSTS = 2
RUN_UNREACHABLE_HOSTS = 4
RUN_FAILED_BREAK_PLAY = 8
RUN_UNKNOWN_ERROR = 255
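    # Apart from RUN_OK and RUN_UNKNOWN_ERROR, these return codes are bit
    # flags and may be combined, e.g. RUN_FAILED_HOSTS | RUN_UNREACHABLE_HOSTS.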
def __init__(self, inventory, variable_manager, loader, options, passwords, stdout_callback=None, run_additional_callbacks=True, run_tree=False):
self._inventory = inventory
self._variable_manager = variable_manager
self._loader = loader
self._options = options
self._stats = AggregateStats()
self.passwords = passwords
self._stdout_callback = stdout_callback
self._run_additional_callbacks = run_additional_callbacks
self._run_tree = run_tree
self._callbacks_loaded = False
self._callback_plugins = []
self._start_at_done = False
# make sure any module paths (if specified) are added to the module_loader
if options.module_path:
for path in options.module_path:
if path:
module_loader.add_directory(path)
# a special flag to help us exit cleanly
self._terminated = False
# this dictionary is used to keep track of notified handlers
self._notified_handlers = dict()
self._listening_handlers = dict()
# dictionaries to keep track of failed/unreachable hosts
self._failed_hosts = dict()
self._unreachable_hosts = dict()
self._final_q = multiprocessing.Queue()
# A temporary file (opened pre-fork) used by connection
# plugins for inter-process locking.
self._connection_lockfile = tempfile.TemporaryFile()
def _initialize_processes(self, num):
self._workers = []
for i in range(num):
rslt_q = multiprocessing.Queue()
self._workers.append([None, rslt_q])
def _initialize_notified_handlers(self, play):
'''
Clears and initializes the shared notified handlers dict with entries
for each handler in the play, which is an empty array that will contain
inventory hostnames for those hosts triggering the handler.
'''
        # Zero the dictionaries first by removing any existing entries.
self._notified_handlers.clear()
self._listening_handlers.clear()
def _process_block(b):
temp_list = []
for t in b.block:
if isinstance(t, Block):
temp_list.extend(_process_block(t))
else:
temp_list.append(t)
return temp_list
handler_list = []
for handler_block in play.handlers:
handler_list.extend(_process_block(handler_block))
# then initialize it with the given handler list
self.update_handler_list(handler_list)
def update_handler_list(self, handler_list):
for handler in handler_list:
if handler._uuid not in self._notified_handlers:
display.debug("Adding handler %s to notified list" % handler.name)
self._notified_handlers[handler._uuid] = []
if handler.listen:
listeners = handler.listen
if not isinstance(listeners, list):
listeners = [listeners]
for listener in listeners:
if listener not in self._listening_handlers:
self._listening_handlers[listener] = []
display.debug("Adding handler %s to listening list" % handler.name)
self._listening_handlers[listener].append(handler._uuid)
def load_callbacks(self):
'''
Loads all available callbacks, with the exception of those which
utilize the CALLBACK_TYPE option. When CALLBACK_TYPE is set to 'stdout',
only one such callback plugin will be loaded.
'''
if self._callbacks_loaded:
return
stdout_callback_loaded = False
if self._stdout_callback is None:
self._stdout_callback = C.DEFAULT_STDOUT_CALLBACK
if isinstance(self._stdout_callback, CallbackBase):
stdout_callback_loaded = True
elif isinstance(self._stdout_callback, string_types):
if self._stdout_callback not in callback_loader:
raise AnsibleError("Invalid callback for stdout specified: %s" % self._stdout_callback)
else:
self._stdout_callback = callback_loader.get(self._stdout_callback)
self._stdout_callback.set_options(C.config.get_plugin_options('callback', self._stdout_callback._load_name))
stdout_callback_loaded = True
else:
raise AnsibleError("callback must be an instance of CallbackBase or the name of a callback plugin")
for callback_plugin in callback_loader.all(class_only=True):
if hasattr(callback_plugin, 'CALLBACK_VERSION') and callback_plugin.CALLBACK_VERSION >= 2.0:
# we only allow one callback of type 'stdout' to be loaded, so check
# the name of the current plugin and type to see if we need to skip
# loading this callback plugin
callback_type = getattr(callback_plugin, 'CALLBACK_TYPE', None)
callback_needs_whitelist = getattr(callback_plugin, 'CALLBACK_NEEDS_WHITELIST', False)
(callback_name, _) = os.path.splitext(os.path.basename(callback_plugin._original_path))
if callback_type == 'stdout':
if callback_name != self._stdout_callback or stdout_callback_loaded:
continue
stdout_callback_loaded = True
elif callback_name == 'tree' and self._run_tree:
pass
elif not self._run_additional_callbacks or (callback_needs_whitelist and (
C.DEFAULT_CALLBACK_WHITELIST is None or callback_name not in C.DEFAULT_CALLBACK_WHITELIST)):
continue
callback_obj = callback_plugin()
                callback_obj.set_options(C.config.get_plugin_options('callback', callback_plugin._load_name))
self._callback_plugins.append(callback_obj)
self._callbacks_loaded = True
def run(self, play):
'''
Iterates over the roles/tasks in a play, using the given (or default)
strategy for queueing tasks. The default is the linear strategy, which
operates like classic Ansible by keeping all hosts in lock-step with
a given task (meaning no hosts move on to the next task until all hosts
are done with the current task).
'''
if not self._callbacks_loaded:
self.load_callbacks()
all_vars = self._variable_manager.get_vars(play=play)
warn_if_reserved(all_vars)
templar = Templar(loader=self._loader, variables=all_vars)
new_play = play.copy()
new_play.post_validate(templar)
new_play.handlers = new_play.compile_roles_handlers() + new_play.handlers
self.hostvars = HostVars(
inventory=self._inventory,
variable_manager=self._variable_manager,
loader=self._loader,
)
        # fork as many workers as the lowest of: configured forks, the largest serial batch size, and the host count
num_hosts = len(self._inventory.get_hosts(new_play.hosts, ignore_restrictions=True))
max_serial = 0
if new_play.serial:
            # serial may be specified as a scalar or a list, so normalize it
            # to a list before computing the largest batch size
serial_items = new_play.serial
if not isinstance(serial_items, list):
serial_items = [serial_items]
max_serial = max([pct_to_int(x, num_hosts) for x in serial_items])
contenders = [self._options.forks, max_serial, num_hosts]
contenders = [v for v in contenders if v is not None and v > 0]
self._initialize_processes(min(contenders))
play_context = PlayContext(new_play, self._options, self.passwords, self._connection_lockfile.fileno())
for callback_plugin in self._callback_plugins:
if hasattr(callback_plugin, 'set_play_context'):
callback_plugin.set_play_context(play_context)
self.send_callback('v2_playbook_on_play_start', new_play)
# initialize the shared dictionary containing the notified handlers
self._initialize_notified_handlers(new_play)
# load the specified strategy (or the default linear one)
strategy = strategy_loader.get(new_play.strategy, self)
if strategy is None:
raise AnsibleError("Invalid play strategy specified: %s" % new_play.strategy, obj=play._ds)
# build the iterator
iterator = PlayIterator(
inventory=self._inventory,
play=new_play,
play_context=play_context,
variable_manager=self._variable_manager,
all_vars=all_vars,
start_at_done=self._start_at_done,
)
# Because the TQM may survive multiple play runs, we start by marking
# any hosts as failed in the iterator here which may have been marked
# as failed in previous runs. Then we clear the internal list of failed
# hosts so we know what failed this round.
for host_name in self._failed_hosts.keys():
host = self._inventory.get_host(host_name)
iterator.mark_host_failed(host)
self.clear_failed_hosts()
# during initialization, the PlayContext will clear the start_at_task
# field to signal that a matching task was found, so check that here
# and remember it so we don't try to skip tasks on future plays
if getattr(self._options, 'start_at_task', None) is not None and play_context.start_at_task is None:
self._start_at_done = True
# and run the play using the strategy and cleanup on way out
play_return = strategy.run(iterator, play_context)
# now re-save the hosts that failed from the iterator to our internal list
for host_name in iterator.get_failed_hosts():
self._failed_hosts[host_name] = True
strategy.cleanup()
self._cleanup_processes()
return play_return
def cleanup(self):
display.debug("RUNNING CLEANUP")
self.terminate()
self._final_q.close()
self._cleanup_processes()
def _cleanup_processes(self):
if hasattr(self, '_workers'):
for (worker_prc, rslt_q) in self._workers:
rslt_q.close()
if worker_prc and worker_prc.is_alive():
try:
worker_prc.terminate()
except AttributeError:
pass
def clear_failed_hosts(self):
self._failed_hosts = dict()
def get_inventory(self):
return self._inventory
def get_variable_manager(self):
return self._variable_manager
def get_loader(self):
return self._loader
def get_workers(self):
return self._workers[:]
def terminate(self):
self._terminated = True
def has_dead_workers(self):
        # A worker killed by a signal reports a negative exitcode, e.g.
        #   <WorkerProcess(WorkerProcess-2, stopped[SIGKILL])>  -> exitcode -9
        #   <WorkerProcess(WorkerProcess-2, stopped[SIGTERM])>  -> exitcode -15
defunct = False
for (idx, x) in enumerate(self._workers):
if hasattr(x[0], 'exitcode'):
if x[0].exitcode in [-9, -11, -15]:
defunct = True
return defunct
def send_callback(self, method_name, *args, **kwargs):
for callback_plugin in [self._stdout_callback] + self._callback_plugins:
# a plugin that set self.disabled to True will not be called
# see osx_say.py example for such a plugin
if getattr(callback_plugin, 'disabled', False):
continue
# try to find v2 method, fallback to v1 method, ignore callback if no method found
methods = []
for possible in [method_name, 'v2_on_any']:
gotit = getattr(callback_plugin, possible, None)
if gotit is None:
gotit = getattr(callback_plugin, possible.replace('v2_', ''), None)
if gotit is not None:
methods.append(gotit)
for method in methods:
try:
method(*args, **kwargs)
except Exception as e:
# TODO: add config toggle to make this fatal or not?
display.warning(u"Failure using method (%s) in callback plugin (%s): %s" % (to_text(method_name), to_text(callback_plugin), to_text(e)))
from traceback import format_tb
from sys import exc_info
display.vvv('Callback Exception: \n' + ' '.join(format_tb(exc_info()[2])))
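
# A minimal driving sketch (hypothetical; not part of this module). Callers
# such as PlaybookExecutor construct the TQM, run plays one at a time, and
# always clean up:
#
#   tqm = TaskQueueManager(inventory=inventory, variable_manager=variable_manager,
#                          loader=loader, options=options, passwords=passwords)
#   try:
#       result = tqm.run(play)
#   finally:
#       tqm.cleanup()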
|
rmfitzpatrick/ansible
|
lib/ansible/executor/task_queue_manager.py
|
Python
|
gpl-3.0
| 15,133
|
import sys
from vacs.models import Command, Experiment, Vac, Evaluation,\
Assignment, Participant, Score, ValAssignment, Validation
from django.contrib.auth import get_user_model
import csv
import numpy as np
# Get all the Scores for the experiment
experiment_id = 77
scores = Score.objects.filter(experiment__id=experiment_id)
vacs = Vac.objects.filter(experiment__id=experiment_id)
commands = Command.objects.all()
write_data = [["Lexicons"],[1],[2],[3],[4],[5],[6],[7],[8],[9]]
with open('gestureclean/analytics/avg_scores_full.csv', 'w') as filewriter:
writer = csv.writer(filewriter)
for vac in vacs:
write_data[0].append(vac.name)
write_data[0].append("mean")
write_data[0].append("std")
print"you should happen once"
for lexicon_index in range(1,10):
full_scores = []
for vac in vacs:
scores_for_vac = scores.filter(vac=vac)
scores_for_lexicon = scores_for_vac.filter(lexicon_number=lexicon_index)
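            # "Complexity" and "Amount of movement" are negative qualities, so
            # their scores are inverted (1 - score) before averaging, keeping
            # "higher is better" consistent across all vacs.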
if vac.name == "Complexity" or vac.name == "Amount of movement":
vac_mean = round(np.mean([1-s.score for s in scores_for_lexicon]),2)
full_scores += [1-s.score for s in scores_for_lexicon]
else:
vac_mean = round(np.mean([s.score for s in scores_for_lexicon]),2)
full_scores += [s.score for s in scores_for_lexicon]
write_data[lexicon_index].append(vac_mean)
lexicon_mean = round(np.mean(full_scores),2)
lexicon_std = round(np.std(full_scores),2)
write_data[lexicon_index].append(lexicon_mean)
write_data[lexicon_index].append(lexicon_std)
print write_data
writer.writerows(write_data)
for vac in vacs:
write_data = [["Count", "Command Name",
"L1",
"L2",
"L3",
"L4",
"L5",
"L6",
"L7",
"L8",
"L9"]]
with open('gestureclean/analytics/'+vac.name+'_scores.csv', 'w') as filewriter:
writer = csv.writer(filewriter)
counter = 0
for command in commands:
row = []
            counter += 1
score_vals = [-10000000]*9
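            # -10000000 is a sentinel for "no score recorded for this lexicon",
            # so missing data stands out in the exported CSV.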
row.append(counter)
row.append(command.name)
scores_vac_command = scores.filter(vac=vac,command=command)
for s in scores_vac_command:
score_vals[s.lexicon_number-1] = s.score
row = row + score_vals
write_data.append(row)
writer.writerows(write_data)
|
glebysg/GC_server
|
gestureclean/score_analysis.py
|
Python
|
mit
| 2,491
|
# This file is distributed under the terms of the GNU General Public
# license. Copyright (C) 2012 Anthony Pesce <timetopat@gmail.com> (See
# the file COPYING for details).
from atlas import *
from physics import *
# class Pioneeringconstruction(server.Task):
# """A task for creating a Wooden structures such as
# A Frames with lumber and rope"""
#
# materials = "lumber"
#
# def aframe_operation(self, op):
# """ Op handler for Pioneeringconstruction op
# which activates this task """
#
# if len(op) < 1:
# sys.stderr.write("Pioneeringconstruction task has no target "
#                              "in op")
#
# self.target = server.world.get_object(op[0].id)
# self.tool = op.to
#
# self.pos = Point3D(op[0].pos)
#
# def info_operation(self, op):
# print("Aframe info")
# aframe = server.world.get_object(op[0].id)
# self.lcount = 0
#
# raw_materials = []
# for item in self.character.contains:
# if item.type[0] == str(self.materials):
# raw_materials.append(item)
# self.lcount = self.lcount + 1
# if self.lcount == 3:
# break
# else:
# print("No materials in inventory for A frame")
# self.irrelevant()
# return
#
# chunk_loc = Location(aframe())
# chunk_loc.pos = Point3D([0, 0, 0])
#
# count = self.lcount
# res = Oplist()
# # loops through raw_materials and places 3 lumber
# # in inventory infront of user
# offset = Vector3D(0, 0, 0)
# while (count > 0):
# tar = raw_materials.pop()
# # length of the lumber obtained
# lumberlength = tar.location.bbox.high_corner[2] - \
# tar.location.bbox.low_corner[2]
# lumberheight = tar.location.bbox.high_corner[1] - \
# tar.location.bbox.low_corner[1]
# # rough length to position lumber
# lumber_length = lumberlength / 4
#
# if count == 3:
# # left component
# chunk_loc.orientation = Quaternion([.653, 0.27, .27, .653])
# if count == 2:
# # right component
# chunk_loc.orientation = Quaternion([.653, -0.27, -.27, .653])
# offset = Vector3D(lumber_length, 0, 0)
# chunk_loc.pos = chunk_loc.pos + offset
# if count == 1:
# # bottom component
# chunk_loc.pos = Point3D([0, 0, 0]) # self.pos
#                 # .707 is sin(45 degrees), the quaternion half-angle term for a 90 degree rotation
# chunk_loc.orientation = Quaternion([.707, 0, .707, 0])
# offset = Vector3D(-(1.5 * lumber_length), 0, (2.5 * lumber_length))
# chunk_loc.pos = chunk_loc.pos + offset
#
# move = Operation("move", Entity(tar.id, location=chunk_loc,
# mode="fixed"), to=tar)
# res.append(move)
# count = count - 1
#
# self.progress = 1
# self.irrelevant()
# return res
#
# def tick_operation(self, op):
#
# """ Op handler for regular tick op """
# target = self.target()
# if not target:
# # print "Target is no more"
# self.irrelevant()
# return
#
# self.rate = 0.5 / 0.75
# self.progress += 1
#
# if not target:
# print("Target is no more")
# self.irrelevant()
# return
#
# if self.progress < 1:
# # print "Not done yet"
# return self.next_tick(1.75)
#
# self.progress = 0
#
# chunk_loc = Location(self.character.location.parent)
# chunk_loc.pos = self.pos
# lumberh = 0 # lumberheight
# lumberl = 0 # lumberlength
# res = Oplist()
# lcount = 0
# # makes sure we have 3 lumber to construct A frame
# for item in self.character.contains:
# if item.type[0] == str(self.materials):
# lcount = lcount + 1
# lumberl = item.location.bbox.high_corner[2] - \
# item.location.bbox.low_corner[2]
# lumberh = item.location.bbox.high_corner[1] - \
# item.location.bbox.low_corner[1]
#
# if lcount == 3:
# break
# else:
# print("No materials in inventory for A frame")
# self.irrelevant()
# return
#
# bbox1 = [-lumberl / 2, -lumberl / 2, -lumberh / 2, lumberl / 2, lumberl / 2, lumberh / 2]
# # bbox of a frame
# create = Operation("create", Entity(name="A_Frame",
# type="construction",
# bbox=bbox1, location=chunk_loc),
# to=target)
# create.set_serialno(0)
# res.append(create)
# res.append(self.next_tick(1.75))
# return res
|
olekw/cyphesis
|
data/rulesets/deeds/scripts/world/tasks/Pioneeringconstruction.py
|
Python
|
gpl-2.0
| 5,158
|
"""Fetch and edit raster dataset metadata from the command line."""
from __future__ import absolute_import
import code
import logging
import sys
import collections
import warnings
import numpy as np
import click
from . import options
import rasterio
from rasterio.plot import show, show_hist
try:
import matplotlib.pyplot as plt
except ImportError: # pragma: no cover
plt = None
except RuntimeError as e: # pragma: no cover
# Certain environment configurations can trigger a RuntimeError like:
    #
    #   Trying to import matplotlib
    #   RuntimeError: Python is not installed as a framework. The Mac OS X
    #   backend will not be able to function correctly if Python is not
    #   installed as a framework. See the Python ...
warnings.warn(str(e), RuntimeWarning, stacklevel=2)
plt = None
logger = logging.getLogger('rasterio')
Stats = collections.namedtuple('Stats', ['min', 'max', 'mean'])
# Collect dictionary of functions for use in the interpreter in main()
funcs = locals()
def stats(source):
"""Return a tuple with raster min, max, and mean."""
if isinstance(source, tuple):
arr = source[0].read(source[1])
else:
arr = source
return Stats(np.min(arr), np.max(arr), np.mean(arr))
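# Example use inside the interpreter started by main() below (illustrative
# values; "src" is the open dataset bound into the session's namespace):
#
#   >>> stats((src, 1))        # pass a (dataset, band index) tuple
#   Stats(min=0.0, max=255.0, mean=127.3)
#   >>> stats(src.read(1))     # or pass the ndarray directly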
def main(banner, dataset, alt_interpreter=None):
"""Main entry point for use with python interpreter."""
local = dict(funcs, src=dataset, np=np, rio=rasterio, plt=plt)
if not alt_interpreter:
code.interact(banner, local=local)
elif alt_interpreter == 'ipython': # pragma: no cover
import IPython
IPython.InteractiveShell.banner1 = banner
IPython.start_ipython(argv=[], user_ns=local)
else:
raise ValueError("Unsupported interpreter '%s'" % alt_interpreter)
return 0
@click.command(short_help="Open a data file and start an interpreter.")
@options.file_in_arg
@click.option('--ipython', 'interpreter', flag_value='ipython',
help="Use IPython as interpreter.")
@click.option(
'-m',
'--mode',
type=click.Choice(['r', 'r+']),
default='r',
help="File mode (default 'r').")
@click.pass_context
def insp(ctx, input, mode, interpreter):
"""Open the input file in a Python interpreter."""
logger = logging.getLogger('rio')
try:
with ctx.obj['env']:
with rasterio.open(input, mode) as src:
main(
'Rasterio %s Interactive Inspector (Python %s)\n'
'Type "src.meta", "src.read(1)", or "help(src)" '
'for more information.' % (
rasterio.__version__,
'.'.join(map(str, sys.version_info[:3]))),
src, interpreter)
except Exception:
logger.exception("Exception caught during processing")
raise click.Abort()
|
brendan-ward/rasterio
|
rasterio/rio/insp.py
|
Python
|
bsd-3-clause
| 2,830
|
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2011 Rackspace
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova import context
from nova import db
from nova import exception
from nova import log as logging
from nova import test
from nova.network import manager as network_manager
import mox
LOG = logging.getLogger('nova.tests.network')
HOST = "testhost"
class FakeModel(dict):
"""Represent a model from the db"""
def __init__(self, *args, **kwargs):
self.update(kwargs)
def __getattr__(self, name):
return self[name]
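# FakeModel lets these tests hand plain dicts to code that expects ORM rows
# with attribute access, e.g. FakeModel(**networks[0]).label == 'test0' and
# FakeModel(**networks[0])['label'] == 'test0'.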
networks = [{'id': 0,
'uuid': "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
'label': 'test0',
'injected': False,
'multi_host': False,
'cidr': '192.168.0.0/24',
'cidr_v6': '2001:db8::/64',
'gateway_v6': '2001:db8::1',
'netmask_v6': '64',
'netmask': '255.255.255.0',
'bridge': 'fa0',
'bridge_interface': 'fake_fa0',
'gateway': '192.168.0.1',
'broadcast': '192.168.0.255',
'dns1': '192.168.0.1',
'dns2': '192.168.0.2',
'vlan': None,
'host': HOST,
'project_id': 'fake_project',
'vpn_public_address': '192.168.0.2'},
{'id': 1,
'uuid': "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
'label': 'test1',
'injected': False,
'multi_host': False,
'cidr': '192.168.1.0/24',
'cidr_v6': '2001:db9::/64',
'gateway_v6': '2001:db9::1',
'netmask_v6': '64',
'netmask': '255.255.255.0',
'bridge': 'fa1',
'bridge_interface': 'fake_fa1',
'gateway': '192.168.1.1',
'broadcast': '192.168.1.255',
'dns1': '192.168.0.1',
'dns2': '192.168.0.2',
'vlan': None,
'host': HOST,
'project_id': 'fake_project',
'vpn_public_address': '192.168.1.2'}]
fixed_ips = [{'id': 0,
'network_id': 0,
'address': '192.168.0.100',
'instance_id': 0,
'allocated': False,
'virtual_interface_id': 0,
'floating_ips': []},
{'id': 0,
'network_id': 1,
'address': '192.168.1.100',
'instance_id': 0,
'allocated': False,
'virtual_interface_id': 0,
'floating_ips': []}]
flavor = {'id': 0,
'rxtx_cap': 3}
floating_ip_fields = {'id': 0,
'address': '192.168.10.100',
'fixed_ip_id': 0,
'project_id': None,
'auto_assigned': False}
vifs = [{'id': 0,
'address': 'DE:AD:BE:EF:00:00',
'uuid': '00000000-0000-0000-0000-0000000000000000',
'network_id': 0,
'network': FakeModel(**networks[0]),
'instance_id': 0},
{'id': 1,
'address': 'DE:AD:BE:EF:00:01',
'uuid': '00000000-0000-0000-0000-0000000000000001',
'network_id': 1,
'network': FakeModel(**networks[1]),
'instance_id': 0},
{'id': 2,
'address': 'DE:AD:BE:EF:00:02',
'uuid': '00000000-0000-0000-0000-0000000000000002',
'network_id': 2,
'network': None,
'instance_id': 0}]
class FlatNetworkTestCase(test.TestCase):
def setUp(self):
super(FlatNetworkTestCase, self).setUp()
self.network = network_manager.FlatManager(host=HOST)
self.network.db = db
self.context = context.RequestContext('testuser', 'testproject',
is_admin=False)
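    # The tests below follow mox's record/replay pattern: each db call is
    # stubbed with StubOutWithMock, the expected invocations are recorded via
    # AndReturn(...), and ReplayAll() switches to replay mode so the code
    # under test receives the canned return values.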
def test_get_instance_nw_info(self):
self.mox.StubOutWithMock(db, 'fixed_ip_get_by_instance')
self.mox.StubOutWithMock(db, 'virtual_interface_get_by_instance')
self.mox.StubOutWithMock(db, 'instance_type_get')
db.fixed_ip_get_by_instance(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(fixed_ips)
db.virtual_interface_get_by_instance(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(vifs)
db.instance_type_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(flavor)
self.mox.ReplayAll()
nw_info = self.network.get_instance_nw_info(None, 0, 0, None)
self.assertTrue(nw_info)
for i, nw in enumerate(nw_info):
i8 = i + 8
check = {'bridge': 'fa%s' % i,
'cidr': '192.168.%s.0/24' % i,
'cidr_v6': '2001:db%s::/64' % i8,
'id': i,
'multi_host': False,
'injected': 'DONTCARE',
'bridge_interface': 'fake_fa%s' % i,
'vlan': None}
self.assertDictMatch(nw[0], check)
check = {'broadcast': '192.168.%s.255' % i,
'dhcp_server': '192.168.%s.1' % i,
'dns': 'DONTCARE',
'gateway': '192.168.%s.1' % i,
'gateway6': '2001:db%s::1' % i8,
'ip6s': 'DONTCARE',
'ips': 'DONTCARE',
'label': 'test%s' % i,
'mac': 'DE:AD:BE:EF:00:0%s' % i,
'vif_uuid': ('00000000-0000-0000-0000-000000000000000%s' %
i),
'rxtx_cap': 'DONTCARE',
'should_create_vlan': False,
'should_create_bridge': False}
self.assertDictMatch(nw[1], check)
check = [{'enabled': 'DONTCARE',
'ip': '2001:db%s::dcad:beff:feef:%s' % (i8, i),
'netmask': '64'}]
self.assertDictListMatch(nw[1]['ip6s'], check)
check = [{'enabled': '1',
'ip': '192.168.%s.100' % i,
'netmask': '255.255.255.0'}]
self.assertDictListMatch(nw[1]['ips'], check)
def test_validate_networks(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
self.mox.StubOutWithMock(db, "fixed_ip_get_by_address")
requested_networks = [("bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
"192.168.1.100")]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
fixed_ips[1]['network'] = FakeModel(**networks[1])
fixed_ips[1]['instance'] = None
db.fixed_ip_get_by_address(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(fixed_ips[1])
self.mox.ReplayAll()
self.network.validate_networks(self.context, requested_networks)
def test_validate_networks_none_requested_networks(self):
self.network.validate_networks(self.context, None)
def test_validate_networks_empty_requested_networks(self):
requested_networks = []
self.mox.ReplayAll()
self.network.validate_networks(self.context, requested_networks)
def test_validate_networks_invalid_fixed_ip(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
requested_networks = [(1, "192.168.0.100.1")]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
self.assertRaises(exception.FixedIpInvalid,
self.network.validate_networks, None,
requested_networks)
def test_validate_networks_empty_fixed_ip(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
requested_networks = [(1, "")]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
self.assertRaises(exception.FixedIpInvalid,
self.network.validate_networks,
None, requested_networks)
def test_validate_networks_none_fixed_ip(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
requested_networks = [(1, None)]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
self.network.validate_networks(None, requested_networks)
def test_add_fixed_ip_instance_without_vpn_requested_networks(self):
self.mox.StubOutWithMock(db, 'network_get')
self.mox.StubOutWithMock(db, 'network_update')
self.mox.StubOutWithMock(db, 'fixed_ip_associate_pool')
self.mox.StubOutWithMock(db, 'instance_get')
self.mox.StubOutWithMock(db,
'virtual_interface_get_by_instance_and_network')
self.mox.StubOutWithMock(db, 'fixed_ip_update')
db.fixed_ip_update(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg())
db.virtual_interface_get_by_instance_and_network(mox.IgnoreArg(),
mox.IgnoreArg(), mox.IgnoreArg()).AndReturn({'id': 0})
db.instance_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn({'security_groups':
[{'id': 0}]})
db.fixed_ip_associate_pool(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn('192.168.0.101')
db.network_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks[0])
db.network_update(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
self.mox.ReplayAll()
self.network.add_fixed_ip_to_instance(self.context, 1, HOST,
networks[0]['id'])
class VlanNetworkTestCase(test.TestCase):
def setUp(self):
super(VlanNetworkTestCase, self).setUp()
self.network = network_manager.VlanManager(host=HOST)
self.network.db = db
self.context = context.RequestContext('testuser', 'testproject',
is_admin=False)
def test_vpn_allocate_fixed_ip(self):
self.mox.StubOutWithMock(db, 'fixed_ip_associate')
self.mox.StubOutWithMock(db, 'fixed_ip_update')
self.mox.StubOutWithMock(db,
'virtual_interface_get_by_instance_and_network')
db.fixed_ip_associate(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg(),
reserved=True).AndReturn('192.168.0.1')
db.fixed_ip_update(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg())
db.virtual_interface_get_by_instance_and_network(mox.IgnoreArg(),
mox.IgnoreArg(), mox.IgnoreArg()).AndReturn({'id': 0})
self.mox.ReplayAll()
network = dict(networks[0])
network['vpn_private_address'] = '192.168.0.2'
self.network.allocate_fixed_ip(None, 0, network, vpn=True)
def test_allocate_fixed_ip(self):
self.mox.StubOutWithMock(db, 'fixed_ip_associate_pool')
self.mox.StubOutWithMock(db, 'fixed_ip_update')
self.mox.StubOutWithMock(db,
'virtual_interface_get_by_instance_and_network')
self.mox.StubOutWithMock(db, 'instance_get')
db.instance_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn({'security_groups':
[{'id': 0}]})
db.fixed_ip_associate_pool(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn('192.168.0.1')
db.fixed_ip_update(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg())
db.virtual_interface_get_by_instance_and_network(mox.IgnoreArg(),
mox.IgnoreArg(), mox.IgnoreArg()).AndReturn({'id': 0})
self.mox.ReplayAll()
network = dict(networks[0])
network['vpn_private_address'] = '192.168.0.2'
self.network.allocate_fixed_ip(self.context, 0, network)
def test_create_networks_too_big(self):
self.assertRaises(ValueError, self.network.create_networks, None,
num_networks=4094, vlan_start=1)
def test_create_networks_too_many(self):
self.assertRaises(ValueError, self.network.create_networks, None,
num_networks=100, vlan_start=1,
cidr='192.168.0.1/24', network_size=100)
def test_validate_networks(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
self.mox.StubOutWithMock(db, "fixed_ip_get_by_address")
requested_networks = [("bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
"192.168.1.100")]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
fixed_ips[1]['network'] = FakeModel(**networks[1])
fixed_ips[1]['instance'] = None
db.fixed_ip_get_by_address(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(fixed_ips[1])
self.mox.ReplayAll()
self.network.validate_networks(self.context, requested_networks)
def test_validate_networks_none_requested_networks(self):
self.network.validate_networks(self.context, None)
def test_validate_networks_empty_requested_networks(self):
requested_networks = []
self.mox.ReplayAll()
self.network.validate_networks(self.context, requested_networks)
def test_validate_networks_invalid_fixed_ip(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
requested_networks = [(1, "192.168.0.100.1")]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
self.assertRaises(exception.FixedIpInvalid,
self.network.validate_networks, self.context,
requested_networks)
def test_validate_networks_empty_fixed_ip(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
requested_networks = [(1, "")]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
self.assertRaises(exception.FixedIpInvalid,
self.network.validate_networks,
self.context, requested_networks)
def test_validate_networks_none_fixed_ip(self):
self.mox.StubOutWithMock(db, 'network_get_all_by_uuids')
requested_networks = [(1, None)]
db.network_get_all_by_uuids(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks)
self.mox.ReplayAll()
self.network.validate_networks(self.context, requested_networks)
def test_cant_associate_associated_floating_ip(self):
ctxt = context.RequestContext('testuser', 'testproject',
is_admin=False)
def fake_floating_ip_get_by_address(context, address):
return {'address': '10.10.10.10',
'fixed_ip': {'address': '10.0.0.1'}}
self.stubs.Set(self.network.db, 'floating_ip_get_by_address',
fake_floating_ip_get_by_address)
self.assertRaises(exception.FloatingIpAlreadyInUse,
self.network.associate_floating_ip,
ctxt,
mox.IgnoreArg(),
mox.IgnoreArg())
def test_add_fixed_ip_instance_without_vpn_requested_networks(self):
self.mox.StubOutWithMock(db, 'network_get')
self.mox.StubOutWithMock(db, 'fixed_ip_associate_pool')
self.mox.StubOutWithMock(db, 'instance_get')
self.mox.StubOutWithMock(db,
'virtual_interface_get_by_instance_and_network')
self.mox.StubOutWithMock(db, 'fixed_ip_update')
db.fixed_ip_update(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg())
db.virtual_interface_get_by_instance_and_network(mox.IgnoreArg(),
mox.IgnoreArg(), mox.IgnoreArg()).AndReturn({'id': 0})
db.instance_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn({'security_groups':
[{'id': 0}]})
db.fixed_ip_associate_pool(mox.IgnoreArg(),
mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn('192.168.0.101')
db.network_get(mox.IgnoreArg(),
mox.IgnoreArg()).AndReturn(networks[0])
self.mox.ReplayAll()
self.network.add_fixed_ip_to_instance(self.context, 1, HOST,
networks[0]['id'])
def test_ip_association_and_allocation_of_other_project(self):
"""Makes sure that we cannot deallocaate or disassociate
a public ip of other project"""
context1 = context.RequestContext('user', 'project1')
context2 = context.RequestContext('user', 'project2')
address = '1.2.3.4'
float_addr = db.floating_ip_create(context1.elevated(),
{'address': address,
'project_id': context1.project_id})
instance = db.instance_create(context1,
{'project_id': 'project1'})
fix_addr = db.fixed_ip_associate_pool(context1.elevated(),
1, instance['id'])
# Associate the IP with non-admin user context
self.assertRaises(exception.NotAuthorized,
self.network.associate_floating_ip,
context2,
float_addr,
fix_addr)
# Deallocate address from other project
self.assertRaises(exception.NotAuthorized,
self.network.deallocate_floating_ip,
context2,
float_addr)
# Now Associates the address to the actual project
self.network.associate_floating_ip(context1, float_addr, fix_addr)
# Now try dis-associating from other project
self.assertRaises(exception.NotAuthorized,
self.network.disassociate_floating_ip,
context2,
float_addr)
# Clean up the ip addresses
self.network.deallocate_floating_ip(context1, float_addr)
self.network.deallocate_fixed_ip(context1, fix_addr)
db.floating_ip_destroy(context1.elevated(), float_addr)
db.fixed_ip_disassociate(context1.elevated(), fix_addr)
class CommonNetworkTestCase(test.TestCase):
class FakeNetworkManager(network_manager.NetworkManager):
"""This NetworkManager doesn't call the base class so we can bypass all
inherited service cruft and just perform unit tests.
"""
class FakeDB:
def fixed_ip_get_by_instance(self, context, instance_id):
return [dict(address='10.0.0.0'), dict(address='10.0.0.1'),
dict(address='10.0.0.2')]
def network_get_by_cidr(self, context, cidr):
raise exception.NetworkNotFoundForCidr()
def network_create_safe(self, context, net):
fakenet = dict(net)
fakenet['id'] = 999
return fakenet
def network_get_all(self, context):
raise exception.NoNetworksFound()
def __init__(self):
self.db = self.FakeDB()
self.deallocate_called = None
def deallocate_fixed_ip(self, context, address):
self.deallocate_called = address
def _create_fixed_ips(self, context, network_id):
pass
def fake_create_fixed_ips(self, context, network_id):
return None
def test_remove_fixed_ip_from_instance(self):
manager = self.FakeNetworkManager()
manager.remove_fixed_ip_from_instance(None, 99, '10.0.0.1')
self.assertEquals(manager.deallocate_called, '10.0.0.1')
def test_remove_fixed_ip_from_instance_bad_input(self):
manager = self.FakeNetworkManager()
self.assertRaises(exception.FixedIpNotFoundForSpecificInstance,
manager.remove_fixed_ip_from_instance,
None, 99, 'bad input')
def test_validate_cidrs(self):
manager = self.FakeNetworkManager()
nets = manager.create_networks(None, 'fake', '192.168.0.0/24',
False, 1, 256, None, None, None,
None)
self.assertEqual(1, len(nets))
cidrs = [str(net['cidr']) for net in nets]
self.assertTrue('192.168.0.0/24' in cidrs)
def test_validate_cidrs_split_exact_in_half(self):
manager = self.FakeNetworkManager()
nets = manager.create_networks(None, 'fake', '192.168.0.0/24',
False, 2, 128, None, None, None,
None)
self.assertEqual(2, len(nets))
cidrs = [str(net['cidr']) for net in nets]
self.assertTrue('192.168.0.0/25' in cidrs)
self.assertTrue('192.168.0.128/25' in cidrs)
def test_validate_cidrs_split_cidr_in_use_middle_of_range(self):
manager = self.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
manager.db.network_get_all(ctxt).AndReturn([{'id': 1,
'cidr': '192.168.2.0/24'}])
self.mox.ReplayAll()
nets = manager.create_networks(None, 'fake', '192.168.0.0/16',
False, 4, 256, None, None, None,
None)
self.assertEqual(4, len(nets))
cidrs = [str(net['cidr']) for net in nets]
exp_cidrs = ['192.168.0.0/24', '192.168.1.0/24', '192.168.3.0/24',
'192.168.4.0/24']
for exp_cidr in exp_cidrs:
self.assertTrue(exp_cidr in cidrs)
self.assertFalse('192.168.2.0/24' in cidrs)
def test_validate_cidrs_smaller_subnet_in_use(self):
manager = self.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
manager.db.network_get_all(ctxt).AndReturn([{'id': 1,
'cidr': '192.168.2.9/25'}])
self.mox.ReplayAll()
# ValueError: requested cidr (192.168.2.0/24) conflicts with
# existing smaller cidr
args = (None, 'fake', '192.168.2.0/24', False, 1, 256, None, None,
None, None)
self.assertRaises(ValueError, manager.create_networks, *args)
def test_validate_cidrs_split_smaller_cidr_in_use(self):
manager = self.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
manager.db.network_get_all(ctxt).AndReturn([{'id': 1,
'cidr': '192.168.2.0/25'}])
self.mox.ReplayAll()
nets = manager.create_networks(None, 'fake', '192.168.0.0/16',
False, 4, 256, None, None, None, None)
self.assertEqual(4, len(nets))
cidrs = [str(net['cidr']) for net in nets]
exp_cidrs = ['192.168.0.0/24', '192.168.1.0/24', '192.168.3.0/24',
'192.168.4.0/24']
for exp_cidr in exp_cidrs:
self.assertTrue(exp_cidr in cidrs)
self.assertFalse('192.168.2.0/24' in cidrs)
def test_validate_cidrs_split_smaller_cidr_in_use2(self):
manager = self.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
manager.db.network_get_all(ctxt).AndReturn([{'id': 1,
'cidr': '192.168.2.9/29'}])
self.mox.ReplayAll()
nets = manager.create_networks(None, 'fake', '192.168.2.0/24',
False, 3, 32, None, None, None, None)
self.assertEqual(3, len(nets))
cidrs = [str(net['cidr']) for net in nets]
exp_cidrs = ['192.168.2.32/27', '192.168.2.64/27', '192.168.2.96/27']
for exp_cidr in exp_cidrs:
self.assertTrue(exp_cidr in cidrs)
self.assertFalse('192.168.2.0/27' in cidrs)
def test_validate_cidrs_split_all_in_use(self):
manager = self.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
in_use = [{'id': 1, 'cidr': '192.168.2.9/29'},
{'id': 2, 'cidr': '192.168.2.64/26'},
{'id': 3, 'cidr': '192.168.2.128/26'}]
manager.db.network_get_all(ctxt).AndReturn(in_use)
self.mox.ReplayAll()
args = (None, 'fake', '192.168.2.0/24', False, 3, 64, None, None,
None, None)
# ValueError: Not enough subnets avail to satisfy requested num_
# networks - some subnets in requested range already
# in use
self.assertRaises(ValueError, manager.create_networks, *args)
def test_validate_cidrs_one_in_use(self):
manager = self.FakeNetworkManager()
args = (None, 'fake', '192.168.0.0/24', False, 2, 256, None, None,
None, None)
# ValueError: network_size * num_networks exceeds cidr size
self.assertRaises(ValueError, manager.create_networks, *args)
def test_validate_cidrs_already_used(self):
manager = self.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
manager.db.network_get_all(ctxt).AndReturn([{'id': 1,
'cidr': '192.168.0.0/24'}])
self.mox.ReplayAll()
# ValueError: cidr already in use
args = (None, 'fake', '192.168.0.0/24', False, 1, 256, None, None,
None, None)
self.assertRaises(ValueError, manager.create_networks, *args)
def test_validate_cidrs_too_many(self):
manager = self.FakeNetworkManager()
args = (None, 'fake', '192.168.0.0/24', False, 200, 256, None, None,
None, None)
# ValueError: Not enough subnets avail to satisfy requested
# num_networks
self.assertRaises(ValueError, manager.create_networks, *args)
def test_validate_cidrs_split_partial(self):
manager = self.FakeNetworkManager()
nets = manager.create_networks(None, 'fake', '192.168.0.0/16',
False, 2, 256, None, None, None, None)
returned_cidrs = [str(net['cidr']) for net in nets]
self.assertTrue('192.168.0.0/24' in returned_cidrs)
self.assertTrue('192.168.1.0/24' in returned_cidrs)
def test_validate_cidrs_conflict_existing_supernet(self):
manager = self.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
fakecidr = [{'id': 1, 'cidr': '192.168.0.0/8'}]
manager.db.network_get_all(ctxt).AndReturn(fakecidr)
self.mox.ReplayAll()
args = (None, 'fake', '192.168.0.0/24', False, 1, 256, None, None,
None, None)
# ValueError: requested cidr (192.168.0.0/24) conflicts
# with existing supernet
self.assertRaises(ValueError, manager.create_networks, *args)
def test_create_networks(self):
cidr = '192.168.0.0/24'
manager = self.FakeNetworkManager()
self.stubs.Set(manager, '_create_fixed_ips',
self.fake_create_fixed_ips)
args = [None, 'foo', cidr, None, 1, 256, 'fd00::/48', None, None,
None]
        result = manager.create_networks(*args)
        self.assertTrue(result)
def test_create_networks_cidr_already_used(self):
manager = self.FakeNetworkManager()
self.mox.StubOutWithMock(manager.db, 'network_get_all')
ctxt = mox.IgnoreArg()
fakecidr = [{'id': 1, 'cidr': '192.168.0.0/24'}]
manager.db.network_get_all(ctxt).AndReturn(fakecidr)
self.mox.ReplayAll()
args = [None, 'foo', '192.168.0.0/24', None, 1, 256,
'fd00::/48', None, None, None]
self.assertRaises(ValueError, manager.create_networks, *args)
def test_create_networks_many(self):
cidr = '192.168.0.0/16'
manager = self.FakeNetworkManager()
self.stubs.Set(manager, '_create_fixed_ips',
self.fake_create_fixed_ips)
args = [None, 'foo', cidr, None, 10, 256, 'fd00::/48', None, None,
None]
self.assertTrue(manager.create_networks(*args))
|
xushiwei/nova
|
nova/tests/test_network.py
|
Python
|
apache-2.0
| 30,254
|
# Copyright 2017 The Tangram Developers. See the AUTHORS file at the
# top-level directory of this distribution and at
# https://github.com/renatoGarcia/tangram/blob/master/AUTHORS.
#
# This file is part of Tangram.
#
# Tangram is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Tangram is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Tangram in the COPYING and COPYING.LESSER files.
# If not, see <http://www.gnu.org/licenses/>.
import tangram as tg
def test_imshow(img_data):
mw = tg.widget.MainWindow(
tg.widget.ImageViewer(tg.Pixmap(img_data, name='img'))
)
mw.show()
mw.close()
|
renatoGarcia/tangram
|
test/test_image_viewer.py
|
Python
|
lgpl-3.0
| 1,079
|
import _plotly_utils.basevalidators
class LineValidator(_plotly_utils.basevalidators.CompoundValidator):
def __init__(self, plotly_name="line", parent_name="funnelarea.marker", **kwargs):
super(LineValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
data_class_str=kwargs.pop("data_class_str", "Line"),
data_docs=kwargs.pop(
"data_docs",
"""
color
Sets the color of the line enclosing each
sector. Defaults to the `paper_bgcolor` value.
colorsrc
Sets the source reference on Chart Studio Cloud
for `color`.
width
Sets the width (in px) of the line enclosing
each sector.
widthsrc
Sets the source reference on Chart Studio Cloud
for `width`.
""",
),
**kwargs
)
|
plotly/plotly.py
|
packages/python/plotly/plotly/validators/funnelarea/marker/_line.py
|
Python
|
mit
| 987
|
import time
from datetime import date, timedelta, datetime
from sqlalchemy import Column, Integer, String, Enum, ForeignKey, DateTime, \
Date, Text, Boolean
from sqlalchemy.dialects import postgresql
from conditional import db
attendance_enum = Enum('Attended', 'Excused', 'Absent', name='attendance_enum')
class FreshmanAccount(db.Model):
__tablename__ = 'freshman_accounts'
id = Column(Integer, primary_key=True)
name = Column(String(64), nullable=False)
eval_date = Column(Date, nullable=False)
onfloor_status = Column(Boolean)
room_number = Column(String)
signatures_missed = Column(Integer)
rit_username = Column(String(10), nullable=True)
def __init__(self, name, onfloor, room=None, missed=None, rit_username=None):
self.name = name
today = date.fromtimestamp(time.time())
self.eval_date = today + timedelta(weeks=6)
self.onfloor_status = onfloor
self.room_number = room
self.signatures_missed = missed
self.rit_username = rit_username
class FreshmanEvalData(db.Model):
__tablename__ = 'freshman_eval_data'
id = Column(Integer, primary_key=True)
uid = Column(String(32), nullable=False)
freshman_project = Column(Enum('Pending', 'Passed', 'Failed',
name="freshman_project_enum"), nullable=True)
eval_date = Column(DateTime, nullable=False)
signatures_missed = Column(Integer, nullable=False)
social_events = Column(Text)
other_notes = Column(Text)
freshman_eval_result = Column(Enum('Pending', 'Passed', 'Failed',
name="freshman_eval_enum"), nullable=False)
active = Column(Boolean)
def __init__(self, uid, signatures_missed):
self.uid = uid
self.freshman_project = None
self.freshman_eval_result = 'Pending'
self.signatures_missed = signatures_missed
self.social_events = ""
self.other_notes = ""
self.active = True
class CommitteeMeeting(db.Model):
__tablename__ = 'committee_meetings'
id = Column(Integer, primary_key=True)
committee = Column(Enum('Evaluations', 'History', 'Social', 'Opcomm',
'R&D', 'House Improvements', 'Financial', 'Chairman', 'Ad-Hoc', name="committees_enum"),
nullable=False)
timestamp = Column(DateTime, nullable=False)
approved = Column(Boolean, nullable=False)
active = Column(Boolean)
def __init__(self, committee, timestamp, approved):
self.committee = committee
self.timestamp = timestamp
self.approved = approved
self.active = True
class MemberCommitteeAttendance(db.Model):
__tablename__ = 'member_committee_attendance'
id = Column(Integer, primary_key=True)
uid = Column(String(32), nullable=False)
meeting_id = Column(ForeignKey('committee_meetings.id'), nullable=False)
def __init__(self, uid, meeting_id):
self.uid = uid
self.meeting_id = meeting_id
class FreshmanCommitteeAttendance(db.Model):
__tablename__ = 'freshman_committee_attendance'
id = Column(Integer, primary_key=True)
fid = Column(ForeignKey('freshman_accounts.id'), nullable=False)
meeting_id = Column(ForeignKey('committee_meetings.id'), nullable=False)
def __init__(self, fid, meeting_id):
self.fid = fid
self.meeting_id = meeting_id
class TechnicalSeminar(db.Model):
__tablename__ = 'technical_seminars'
id = Column(Integer, primary_key=True)
name = Column(String(128), nullable=False)
timestamp = Column(DateTime, nullable=False)
approved = Column(Boolean, nullable=False)
active = Column(Boolean)
def __init__(self, name, timestamp, approved):
self.name = name
self.timestamp = timestamp
self.approved = approved
self.active = True
class MemberSeminarAttendance(db.Model):
__tablename__ = 'member_seminar_attendance'
id = Column(Integer, primary_key=True)
uid = Column(String(32), nullable=False)
seminar_id = Column(ForeignKey('technical_seminars.id'), nullable=False)
def __init__(self, uid, seminar_id):
self.uid = uid
self.seminar_id = seminar_id
class FreshmanSeminarAttendance(db.Model):
__tablename__ = 'freshman_seminar_attendance'
id = Column(Integer, primary_key=True)
fid = Column(ForeignKey('freshman_accounts.id'), nullable=False)
seminar_id = Column(ForeignKey('technical_seminars.id'), nullable=False)
def __init__(self, fid, seminar_id):
self.fid = fid
self.seminar_id = seminar_id
class MajorProject(db.Model):
__tablename__ = 'major_projects'
id = Column(Integer, primary_key=True)
date = Column(Date, nullable=False)
uid = Column(String(32), nullable=False)
name = Column(String(64), nullable=False)
description = Column(Text)
active = Column(Boolean, nullable=False)
status = Column(Enum('Pending', 'Passed', 'Failed',
name="major_project_enum"),
nullable=False)
def __init__(self, uid, name, desc):
self.uid = uid
self.date = datetime.now()
self.name = name
self.description = desc
self.status = 'Pending'
self.active = True
class HouseMeeting(db.Model):
__tablename__ = 'house_meetings'
id = Column(Integer, primary_key=True)
date = Column(Date, nullable=False)
active = Column(Boolean, nullable=False)
def __init__(self, hm_date):
self.date = hm_date
self.active = True
class MemberHouseMeetingAttendance(db.Model):
__tablename__ = 'member_hm_attendance'
id = Column(Integer, primary_key=True)
uid = Column(String(32), nullable=False)
meeting_id = Column(ForeignKey('house_meetings.id'), nullable=False)
excuse = Column(Text)
attendance_status = Column(attendance_enum)
def __init__(self, uid, meeting_id, excuse, status):
self.uid = uid
self.meeting_id = meeting_id
self.excuse = excuse
self.attendance_status = status
class FreshmanHouseMeetingAttendance(db.Model):
__tablename__ = 'freshman_hm_attendance'
id = Column(Integer, primary_key=True)
fid = Column(ForeignKey('freshman_accounts.id'), nullable=False)
meeting_id = Column(ForeignKey('house_meetings.id'), nullable=False)
excuse = Column(Text)
attendance_status = Column(attendance_enum)
def __init__(self, fid, meeting_id, excuse, status):
self.fid = fid
self.meeting_id = meeting_id
self.excuse = excuse
self.attendance_status = status
class CurrentCoops(db.Model):
__tablename__ = 'current_coops'
id = Column(Integer, primary_key=True)
uid = Column(String(32), nullable=False)
date_created = Column(Date, nullable=False)
    semester = Column(Enum('Fall', 'Spring', name="co_op_enum"), nullable=False)
    active = Column(Boolean)
def __init__(self, uid, semester):
self.uid = uid
self.active = True
self.date_created = datetime.now()
self.semester = semester
class OnFloorStatusAssigned(db.Model):
__tablename__ = 'onfloor_datetime'
uid = Column(String(32), primary_key=True)
onfloor_granted = Column(DateTime, primary_key=True)
def __init__(self, uid, time_granted):
self.uid = uid
self.onfloor_granted = time_granted
class Conditional(db.Model):
__tablename__ = 'conditional'
id = Column(Integer, primary_key=True)
uid = Column(String(32), nullable=False)
description = Column(String(512), nullable=False)
date_created = Column(Date, nullable=False)
date_due = Column(Date, nullable=False)
active = Column(Boolean, nullable=False)
status = Column(Enum('Pending', 'Passed', 'Failed',
name="conditional_enum"),
nullable=False)
s_evaluation = Column(ForeignKey('spring_evals.id'))
i_evaluation = Column(ForeignKey('freshman_eval_data.id'))
def __init__(self, uid, description, due, s_eval=None, i_eval=None):
self.uid = uid
self.description = description
self.date_due = due
self.date_created = datetime.now()
self.status = "Pending"
self.active = True
self.s_evaluation = s_eval
self.i_evaluation = i_eval
class EvalSettings(db.Model):
__tablename__ = 'settings'
id = Column(Integer, primary_key=True)
housing_form_active = Column(Boolean)
intro_form_active = Column(Boolean)
site_lockdown = Column(Boolean)
accept_dues_until = Column(Date)
def __init__(self):
self.housing_form_active = True
self.intro_form_active = True
self.site_lockdown = False
self.accept_dues_until = datetime.now()
class SpringEval(db.Model):
__tablename__ = 'spring_evals'
id = Column(Integer, primary_key=True)
uid = Column(String(32), nullable=False)
active = Column(Boolean, nullable=False)
date_created = Column(Date, nullable=False)
status = Column(Enum('Pending', 'Passed', 'Failed',
name="spring_eval_enum"),
nullable=False)
def __init__(self, uid):
self.uid = uid
self.active = True
self.date_created = datetime.now()
self.status = "Pending"
class InHousingQueue(db.Model):
__tablename__ = 'in_housing_queue'
uid = Column(String(32), primary_key=True)
http_enum = Enum('GET', 'HEAD', 'POST', 'PUT', 'DELETE', 'CONNECT', 'OPTIONS', 'TRACE', 'PATCH', name='http_enum')
class UserLog(db.Model):
__tablename__ = 'user_log'
id = Column(Integer, primary_key=True)
ipaddr = Column(postgresql.INET, nullable=False)
timestamp = Column(DateTime, nullable=False)
uid = Column(String(32), nullable=False)
method = Column(http_enum)
blueprint = Column(String(32), nullable=False)
path = Column(String(128), nullable=False)
description = Column(String(128), nullable=False)
def __init__(self, ipaddr, user, method, blueprint, path, description):
self.ipaddr = ipaddr
self.timestamp = datetime.now()
self.uid = user
self.method = method
self.blueprint = blueprint
self.path = path
self.description = description
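
# A usage sketch (hypothetical; assumes a Flask application context with the
# shared `db` session from `conditional`):
#
#   meeting = CommitteeMeeting('Evaluations', datetime.now(), approved=True)
#   db.session.add(meeting)
#   db.session.commit()
#   db.session.add(MemberCommitteeAttendance('someuid', meeting.id))
#   db.session.commit()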
|
ComputerScienceHouse/conditional
|
conditional/models/models.py
|
Python
|
mit
| 10,287
|
#!/usr/bin/env python
import sys
import json
for i in sys.argv[1:]:
with open(i) as infile:
data = json.load(infile)
cleaned = json.dumps(data, sort_keys=False, indent=2)
with open(i, "w") as output:
for line in cleaned.split('\n'):
output.write(line.rstrip() + '\n')
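# Usage sketch: ./reformat-json.py data/*.json
# Each named file is rewritten in place with its original key order (the
# sort_keys flag is off), two-space indentation, and trailing whitespace
# stripped from every line.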
|
simonsonc/mn-glo-mosaic
|
reformat-json.py
|
Python
|
mit
| 312
|
# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
# License: GNU General Public License v3. See license.txt
from __future__ import unicode_literals
import frappe
import datetime
from frappe import _, msgprint, scrub
from frappe.defaults import get_user_permissions
from frappe.utils import add_days, getdate, formatdate, flt, get_first_day, date_diff, nowdate
from erpnext.utilities.doctype.address.address import get_address_display
from erpnext.utilities.doctype.contact.contact import get_contact_details
@frappe.whitelist()
def get_party_details(party=None, account=None, party_type="Customer", company=None,
posting_date=None, price_list=None, currency=None, doctype=None):
if not party:
return {}
if not frappe.db.exists(party_type, party):
frappe.throw(_("{0}: {1} does not exists").format(party_type, party))
return _get_party_details(party, account, party_type,
company, posting_date, price_list, currency, doctype)
def _get_party_details(party=None, account=None, party_type="Customer", company=None,
posting_date=None, price_list=None, currency=None, doctype=None, ignore_permissions=False):
out = frappe._dict(set_account_and_due_date(party, account, party_type, company, posting_date, doctype))
party = out[party_type.lower()]
if not ignore_permissions and not frappe.has_permission(party_type, "read", party):
frappe.throw(_("Not permitted"), frappe.PermissionError)
party = frappe.get_doc(party_type, party)
set_address_details(out, party, party_type)
set_contact_details(out, party, party_type)
set_other_values(out, party, party_type)
set_price_list(out, party, party_type, price_list)
if not out.get("currency"):
out["currency"] = currency
# sales team
if party_type=="Customer":
out["sales_team"] = [{
"sales_person": d.sales_person,
"sales_designation": d.sales_designation,
"allocated_percentage": d.allocated_percentage
} for d in party.get("sales_team")]
return out
def set_address_details(out, party, party_type):
billing_address_field = "customer_address" if party_type == "Lead" \
else party_type.lower() + "_address"
out[billing_address_field] = frappe.db.get_value("Address",
{party_type.lower(): party.name, "is_primary_address":1}, "name")
# address display
out.address_display = get_address_display(out[billing_address_field])
# shipping address
if party_type in ["Customer", "Lead"]:
out.shipping_address_name = frappe.db.get_value("Address",
{party_type.lower(): party.name, "is_shipping_address":1}, "name")
out.shipping_address = get_address_display(out["shipping_address_name"])
def set_contact_details(out, party, party_type):
out.contact_person = frappe.db.get_value("Contact",
{party_type.lower(): party.name, "is_primary_contact":1}, "name")
if not out.contact_person:
return
out.update(get_contact_details(out.contact_person))
def set_other_values(out, party, party_type):
# copy
if party_type=="Customer":
to_copy = ["customer_name", "customer_group", "territory"]
else:
to_copy = ["supplier_name", "supplier_type"]
for f in to_copy:
out[f] = party.get(f)
# fields prepended with default in Customer doctype
for f in ['currency', 'taxes_and_charges'] \
+ (['sales_partner', 'commission_rate'] if party_type=="Customer" else []):
if party.get("default_" + f):
out[f] = party.get("default_" + f)
def set_price_list(out, party, party_type, given_price_list):
# price list
price_list = filter(None, get_user_permissions().get("Price List", []))
if isinstance(price_list, list):
price_list = price_list[0] if len(price_list)==1 else None
if not price_list:
price_list = party.default_price_list
if not price_list and party_type=="Customer":
price_list = frappe.db.get_value("Customer Group",
party.customer_group, "default_price_list")
if not price_list:
price_list = given_price_list
if price_list:
out.price_list_currency = frappe.db.get_value("Price List", price_list, "currency")
out["selling_price_list" if party.doctype=="Customer" else "buying_price_list"] = price_list
def set_account_and_due_date(party, account, party_type, company, posting_date, doctype):
if doctype not in ["Sales Invoice", "Purchase Invoice"]:
# not an invoice
return {
party_type.lower(): party
}
if party:
account = get_party_account(company, party, party_type)
account_fieldname = "debit_to" if party_type=="Customer" else "credit_to"
out = {
party_type.lower(): party,
account_fieldname : account,
"due_date": get_due_date(posting_date, party_type, party, company)
}
return out
@frappe.whitelist()
def get_party_account(company, party, party_type):
"""Returns the account for the given `party`.
Will first search in party (Customer / Supplier) record, if not found,
will search in group (Customer Group / Supplier Type),
finally will return default."""
if not company:
frappe.throw(_("Please select company first."))
if party:
account = frappe.db.get_value("Party Account",
{"parenttype": party_type, "parent": party, "company": company}, "account")
if not account:
party_group_doctype = "Customer Group" if party_type=="Customer" else "Supplier Type"
group = frappe.db.get_value(party_type, party, scrub(party_group_doctype))
account = frappe.db.get_value("Party Account",
{"parenttype": party_group_doctype, "parent": group, "company": company}, "account")
if not account:
default_account_name = "default_receivable_account" if party_type=="Customer" else "default_payable_account"
account = frappe.db.get_value("Company", company, default_account_name)
return account
@frappe.whitelist()
def get_due_date(posting_date, party_type, party, company):
"""Set Due Date = Posting Date + Credit Days"""
due_date = None
if posting_date and party:
due_date = posting_date
if party_type=="Customer":
credit_days_based_on, credit_days = get_credit_days(party_type, party, company)
if credit_days_based_on == "Fixed Days" and credit_days:
due_date = add_days(posting_date, credit_days)
elif credit_days_based_on == "Last Day of the Next Month":
due_date = (get_first_day(posting_date, 0, 2) + datetime.timedelta(-1)).strftime("%Y-%m-%d")
else:
credit_days = get_credit_days(party_type, party, company)
if credit_days:
due_date = add_days(posting_date, credit_days)
return due_date
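# Worked example (illustrative dates): with posting_date = "2015-01-15" and a
# customer on "Last Day of the Next Month" terms, get_first_day(posting_date,
# 0, 2) is 2015-03-01, so the due date becomes 2015-02-28. On "Fixed Days"
# terms with credit_days = 30, the due date is add_days("2015-01-15", 30),
# i.e. 2015-02-14.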
def get_credit_days(party_type, party, company):
if party_type and party:
if party_type == "Customer":
credit_days_based_on, credit_days, customer_group = \
frappe.db.get_value(party_type, party, ["credit_days_based_on", "credit_days", "customer_group"])
if not credit_days_based_on:
credit_days_based_on, credit_days = \
frappe.db.get_value("Customer Group", customer_group, ["credit_days_based_on", "credit_days"]) \
or frappe.db.get_value("Company", company, ["credit_days_based_on", "credit_days"])
return credit_days_based_on, credit_days
else:
credit_days, supplier_type = frappe.db.get_value(party_type, party, ["credit_days", "supplier_type"])
if not credit_days:
credit_days = frappe.db.get_value("Supplier Type", supplier_type, "credit_days") \
or frappe.db.get_value("Company", company, "credit_days")
return credit_days
def validate_due_date(posting_date, due_date, party_type, party, company):
if getdate(due_date) < getdate(posting_date):
frappe.throw(_("Due Date cannot be before Posting Date"))
else:
default_due_date = get_due_date(posting_date, party_type, party, company)
if default_due_date != posting_date and getdate(due_date) > getdate(default_due_date):
is_credit_controller = frappe.db.get_single_value("Accounts Settings", "credit_controller") in frappe.get_roles()
if is_credit_controller:
msgprint(_("Note: Due / Reference Date exceeds allowed customer credit days by {0} day(s)")
.format(date_diff(due_date, default_due_date)))
else:
frappe.throw(_("Due / Reference Date cannot be after {0}").format(formatdate(default_due_date)))
|
hatwar/Das_erpnext
|
erpnext/accounts/party.py
|
Python
|
agpl-3.0
| 8,025
|
# -*- coding: utf8 -*-
import json
from utils.es_manager import ES_Manager
from permission_admin.models import Dataset
class ActiveDataset:
""" Dataset class
"""
def __init__(self, id, dataset):
self.id = id
self.index = dataset['index']
self.mapping = dataset['mapping']
class Datasets:
""" Datasets class
"""
def __init__(self):
self.datasets = self._build_datasets_map()
self.mapping_id = None
self.active_datasets = []
@staticmethod
def _build_datasets_map():
datasets = {}
for dataset in Dataset.objects.all():
pk = dataset.pk
index = dataset.index
mapping = dataset.mapping
datasets[pk] = {'index': index, 'mapping': mapping}
return datasets
def activate_datasets(self, session):
""" Activate datasets for a given session. If the session does not contain
information about the dataset, initiates with the first valid dataset
Returns: the session object containing the active dataset mapping_id
"""
if len(self.datasets.keys()) > 0:
if 'dataset' not in session:
# Activate first if not defined in session
session['dataset'] = [int(list(self.datasets.keys())[0])]
# Check if dataset in map and activate
self.active_datasets = [ActiveDataset(int(ds), self.datasets[int(ds)]) for ds in session['dataset'] if int(ds) in self.datasets]
return self
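    # A usage sketch (hypothetical view code): resolve the datasets a session
    # points at, then build a search manager over them:
    #
    #   datasets = Datasets().activate_datasets(request.session)
    #   es_m = datasets.build_manager(ES_Manager)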
def activate_datasets_by_id(self, _ids):
""" Activate dataset by ID
Expects list of datasets id-s
"""
if len(self.datasets.keys()) > 0:
self.active_datasets = [ActiveDataset(int(_id), self.datasets[int(_id)]) for _id in _ids if int(_id) in self.datasets]
return self
    def get_datasets(self):
        """ Returns: dict mapping dataset pk to {'index': ..., 'mapping': ...}
        """
        return self.datasets
    def build_manager(self, manager_class):
        """ Builds manager_class as:
                ManagerClass(active_datasets)
        """
        return manager_class(self.active_datasets)
    def sort_datasets(self, indices):
        """ Returns datasets sorted by index name, keeping only wildcard datasets
            and those whose index is currently open in Elasticsearch
        """
out = []
open_indices = [index['index'] for index in indices if index['status'] == 'open']
for dataset in sorted(self.datasets.items(), key=lambda l: l[1]['index']):
ds = dataset[1]
ds['id'] = dataset[0]
# wildcard dataset
if '*' in ds['index']:
out.append(ds)
elif ds['index'] in open_indices:
out.append(ds)
return out
    def get_allowed_datasets(self, user):
        """ Returns the open/wildcard datasets that the user has permission to access
        """
        indices = ES_Manager.get_indices()
        datasets = self.sort_datasets(indices)
        return [dataset for dataset in datasets
                if user.has_perm('permission_admin.can_access_dataset_' + str(dataset['id']))]
|
texta-tk/texta
|
utils/datasets.py
|
Python
|
gpl-3.0
| 2,551
|
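A hypothetical usage sketch for the Datasets helper above, e.g. from a Django
view in the same project. The view function and session contents are invented
for illustration, and whether ES_Manager itself is constructed from the
active-dataset list is an assumption here; build_manager simply calls
manager_class(active_datasets):

from utils.datasets import Datasets
from utils.es_manager import ES_Manager

def search_view(request):
    # Resolve the session's active datasets (falls back to the first one).
    datasets = Datasets().activate_datasets(request.session)
    # Hand the list of ActiveDataset objects to the manager class.
    es_m = datasets.build_manager(ES_Manager)
    # Filter down to the datasets this user may actually see.
    return datasets.get_allowed_datasets(request.user)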
#!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Unittest for js_map_format.py.
"""
import os
import re
import sys
if __name__ == '__main__':
sys.path.append(os.path.join(os.path.dirname(sys.argv[0]), '../..'))
import unittest
import StringIO
from grit import grd_reader
from grit import util
from grit.tool import build
class JsMapFormatUnittest(unittest.TestCase):
def testMessages(self):
grd_text = u"""
<messages>
<message name="IDS_SIMPLE_MESSAGE">
Simple message.
</message>
<message name="IDS_QUOTES">
element\u2019s \u201c<ph name="NAME">%s<ex>name</ex></ph>\u201d attribute
</message>
<message name="IDS_PLACEHOLDERS">
<ph name="ERROR_COUNT">%1$d<ex>1</ex></ph> error, <ph name="WARNING_COUNT">%2$d<ex>1</ex></ph> warning
</message>
<message name="IDS_STARTS_WITH_SPACE">
''' (<ph name="COUNT">%d<ex>2</ex></ph>)
</message>
<message name="IDS_DOUBLE_QUOTES">
A "double quoted" message.
</message>
</messages>
"""
root = grd_reader.Parse(StringIO.StringIO(grd_text.encode('utf-8')),
flexible_root=True)
util.FixRootForUnittest(root)
buf = StringIO.StringIO()
build.RcBuilder.ProcessNode(root, DummyOutput('js_map_format', 'en'), buf)
output = buf.getvalue()
test = u"""
localizedStrings["Simple message."] = "Simple message.";
localizedStrings["element\u2019s \u201c%s\u201d attribute"] = "element\u2019s \u201c%s\u201d attribute";
localizedStrings["%d error, %d warning"] = "%1$d error, %2$d warning";
localizedStrings[" (%d)"] = " (%d)";
localizedStrings["A \\\"double quoted\\\" message."] = "A \\\"double quoted\\\" message.";
"""
    self.assertEqual(output.strip(), test.strip())
def testTranslations(self):
root = grd_reader.Parse(StringIO.StringIO("""
<messages>
<message name="ID_HELLO">Hello!</message>
<message name="ID_HELLO_USER">Hello <ph name="USERNAME">%s<ex>
Joi</ex></ph></message>
</messages>
"""), flexible_root=True)
util.FixRootForUnittest(root)
buf = StringIO.StringIO()
build.RcBuilder.ProcessNode(root, DummyOutput('js_map_format', 'fr'), buf)
output = buf.getvalue()
test = u"""
localizedStrings["Hello!"] = "H\xe9P\xe9ll\xf4P\xf4!";
localizedStrings["Hello %s"] = "H\xe9P\xe9ll\xf4P\xf4 %s";
"""
    self.assertEqual(output.strip(), test.strip())
class DummyOutput(object):
def __init__(self, type, language):
self.type = type
self.language = language
def GetType(self):
return self.type
def GetLanguage(self):
return self.language
def GetOutputFilename(self):
return 'hello.gif'
if __name__ == '__main__':
unittest.main()
|
JoKaWare/WTL-DUI
|
tools/grit/grit/format/js_map_format_unittest.py
|
Python
|
bsd-3-clause
| 2,917
|
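The expected strings in the unittest above pin down the shape of js_map_format
output. Below is a small standalone sketch of that key/value convention (not
grit's implementation): keys are source messages with numbered placeholders
such as %1$d collapsed to plain %d, values keep the original placeholders, and
double quotes are escaped on both sides.

import re

def escape(s):
    return s.replace('"', '\\"')

def js_map_lines(pairs):
    # pairs: iterable of (source_message, translation) tuples.
    lines = []
    for source, translation in pairs:
        key = re.sub(r'%\d+\$', '%', source)  # "%1$d" -> "%d" in the lookup key
        lines.append('localizedStrings["%s"] = "%s";' % (escape(key), escape(translation)))
    return lines

print('\n'.join(js_map_lines([
    ('Hello!', 'Hello!'),
    ('%1$d error, %2$d warning', '%1$d error, %2$d warning'),
])))
# localizedStrings["Hello!"] = "Hello!";
# localizedStrings["%d error, %d warning"] = "%1$d error, %2$d warning";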