#!/usr/bin/env python
# File: glumpy/ext/freetype/ft_structs.py (repo: glumpy/glumpy, license: BSD-3-Clause)
# -*- coding: utf-8 -*-
# -----------------------------------------------------------------------------
#
# FreeType high-level python API - Copyright 2011 Nicolas P. Rougier
# Distributed under the terms of the new BSD license.
#
# -----------------------------------------------------------------------------
'''
Freetype structured types
-------------------------
FT_Library: A handle to a FreeType library instance.
FT_Vector: A simple structure used to store a 2D vector.
FT_BBox: A structure used to hold an outline's bounding box.
FT_Matrix: A simple structure used to store a 2x2 matrix.
FT_UnitVector: A simple structure used to store a 2D unit vector.
FT_Bitmap: A structure used to describe a bitmap or pixmap to the raster.
FT_Data: Read-only binary data represented as a pointer and a length.
FT_Generic: Client applications generic data.
FT_Bitmap_Size: Metrics of a bitmap strike.
FT_Charmap: The base charmap structure.
FT_Glyph_Metrics: A structure used to model the metrics of a single glyph.
FT_Outline: This structure is used to describe an outline to the scan-line
converter.
FT_GlyphSlot: FreeType root glyph slot class structure.
FT_Glyph: The root glyph structure contains a given glyph image plus its
advance width in 16.16 fixed float format.
FT_Size_Metrics: The size metrics structure gives the metrics of a size object.
FT_Size: FreeType root size class structure.
FT_Face: FreeType root face class structure.
FT_Parameter: A simple structure used to pass more or less generic parameters
to FT_Open_Face.
FT_Open_Args: A structure used to indicate how to open a new font file or
stream.
FT_SfntName: A structure used to model an SFNT 'name' table entry.
FT_Stroker: Opaque handle to a path stroker object.
FT_BitmapGlyph: A structure used for bitmap glyph images.
'''
from .ft_types import *
# -----------------------------------------------------------------------------
# A handle to a FreeType library instance. Each 'library' is completely
# independent from the others; it is the 'root' of a set of objects like fonts,
# faces, sizes, etc.
class FT_LibraryRec(Structure):
'''
A handle to a FreeType library instance. Each 'library' is completely
independent from the others; it is the 'root' of a set of objects like
fonts, faces, sizes, etc.
'''
_fields_ = [ ]
FT_Library = POINTER(FT_LibraryRec)
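`FT_LibraryRec` above follows the standard ctypes opaque-handle idiom: an empty `Structure` that is only ever used behind a `POINTER`, so the C library owns the layout and Python just passes the pointer around. A minimal, self-contained sketch of the same pattern (the names here are illustrative, not part of this module):

```python
from ctypes import Structure, POINTER

class _OpaqueRec(Structure):
    # No public fields: the C library owns the real layout; Python code
    # only ever creates, stores and passes pointers to it.
    _fields_ = []

OpaqueHandle = POINTER(_OpaqueRec)

handle = OpaqueHandle()   # a NULL handle until the C library fills it in
print(bool(handle))       # NULL ctypes pointers are falsy
```

A real binding would pass `byref(handle)` to an init function such as `FT_Init_FreeType` so the library can populate it.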
# -----------------------------------------------------------------------------
# A simple structure used to store a 2D vector; coordinates are of the FT_Pos
# type.
class FT_Vector(Structure):
'''
A simple structure used to store a 2D vector; coordinates are of the FT_Pos
type.
x: The horizontal coordinate.
y: The vertical coordinate.
'''
_fields_ = [('x', FT_Pos),
('y', FT_Pos)]
# -----------------------------------------------------------------------------
# A structure used to hold an outline's bounding box, i.e., the coordinates of
# its extrema in the horizontal and vertical directions.
#
# The bounding box is specified with the coordinates of the lower left and the
# upper right corner. In PostScript, those values are often called (llx,lly)
# and (urx,ury), respectively.
#
# If 'yMin' is negative, this value gives the glyph's descender. Otherwise, the
# glyph doesn't descend below the baseline. Similarly, if 'ymax' is positive,
# this value gives the glyph's ascender.
#
# 'xMin' gives the horizontal distance from the glyph's origin to the left edge
# of the glyph's bounding box. If 'xMin' is negative, the glyph extends to the
# left of the origin.
class FT_BBox(Structure):
'''
A structure used to hold an outline's bounding box, i.e., the coordinates
of its extrema in the horizontal and vertical directions.
The bounding box is specified with the coordinates of the lower left and
the upper right corner. In PostScript, those values are often called
(llx,lly) and (urx,ury), respectively.
If 'yMin' is negative, this value gives the glyph's descender. Otherwise,
the glyph doesn't descend below the baseline. Similarly, if 'ymax' is
positive, this value gives the glyph's ascender.
'xMin' gives the horizontal distance from the glyph's origin to the left
edge of the glyph's bounding box. If 'xMin' is negative, the glyph extends
to the left of the origin.
xMin: The horizontal minimum (left-most).
yMin: The vertical minimum (bottom-most).
xMax: The horizontal maximum (right-most).
yMax: The vertical maximum (top-most).
'''
_fields_ = [('xMin', FT_Pos),
('yMin', FT_Pos),
('xMax', FT_Pos),
('yMax', FT_Pos)]
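When a bounding box comes from `FT_Outline_Get_BBox`, its coordinates are in 26.6 fixed point (64 units per pixel). A hedged sketch of the usual outward rounding to whole-pixel extents — the function name is illustrative, not part of the FreeType API:

```python
def bbox_size_pixels(xMin, yMin, xMax, yMax):
    # Round the maxima up and the minima down (grid-fitting the box),
    # then take differences to get integer pixel extents.
    width  = ((xMax + 63) >> 6) - (xMin >> 6)
    height = ((yMax + 63) >> 6) - (yMin >> 6)
    return width, height
```

For a glyph spanning x in [0, 640] and y in [-128, 448] in 26.6 units, this yields a 10x9 pixel box (the negative `yMin` is the descender, as documented above).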
# -----------------------------------------------------------------------------
# A simple structure used to store a 2x2 matrix. Coefficients are in 16.16
# fixed float format. The computation performed is:
# x' = x*xx + y*xy
# y' = x*yx + y*yy
class FT_Matrix(Structure):
'''
A simple structure used to store a 2x2 matrix. Coefficients are in 16.16
fixed float format. The computation performed is:
x' = x*xx + y*xy
y' = x*yx + y*yy
xx: Matrix coefficient.
xy: Matrix coefficient.
yx: Matrix coefficient.
yy: Matrix coefficient.
'''
_fields_ = [('xx', FT_Fixed),
('xy', FT_Fixed),
('yx', FT_Fixed),
('yy', FT_Fixed)]
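The coefficients above are `FT_Fixed`, i.e. 16.16 fixed point. A small sketch of converting floats to that representation and building the coefficients of a rotation for the transform documented above (helper names are illustrative):

```python
import math

def to_16_16(value):
    # FT_Fixed is 16.16 fixed point: multiply by 2**16 and round.
    return int(round(value * 0x10000))

def rotation_coefficients(degrees):
    # Coefficients (xx, xy, yx, yy) for:
    #   x' = x*xx + y*xy
    #   y' = x*yx + y*yy
    a = math.radians(degrees)
    return (to_16_16(math.cos(a)), to_16_16(-math.sin(a)),
            to_16_16(math.sin(a)), to_16_16(math.cos(a)))
```

With these values one would fill an `FT_Matrix` and pass it to `FT_Set_Transform`.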
# -----------------------------------------------------------------------------
# A simple structure used to store a 2D unit vector. Uses FT_F2Dot14
# types.
class FT_UnitVector(Structure):
'''
    A simple structure used to store a 2D unit vector. Uses FT_F2Dot14
    types.
x: The horizontal coordinate.
y: The vertical coordinate.
'''
_fields_ = [('x', FT_F2Dot14),
('y', FT_F2Dot14)]
# -----------------------------------------------------------------------------
# A structure used to describe a bitmap or pixmap to the raster. Note that we
# now manage pixmaps of various depths through the 'pixel_mode' field.
class FT_Bitmap(Structure):
'''
A structure used to describe a bitmap or pixmap to the raster. Note that we
now manage pixmaps of various depths through the 'pixel_mode' field.
rows: The number of bitmap rows.
width: The number of pixels in bitmap row.
pitch: The pitch's absolute value is the number of bytes taken by one
bitmap row, including padding. However, the pitch is positive when
the bitmap has a 'down' flow, and negative when it has an 'up'
flow. In all cases, the pitch is an offset to add to a bitmap
pointer in order to go down one row.
Note that 'padding' means the alignment of a bitmap to a byte
border, and FreeType functions normally align to the smallest
possible integer value.
For the B/W rasterizer, 'pitch' is always an even number.
To change the pitch of a bitmap (say, to make it a multiple of 4),
use FT_Bitmap_Convert. Alternatively, you might use callback
functions to directly render to the application's surface; see the
file 'example2.py' in the tutorial for a demonstration.
buffer: A typeless pointer to the bitmap buffer. This value should be
aligned on 32-bit boundaries in most cases.
num_grays: This field is only used with FT_PIXEL_MODE_GRAY; it gives the
number of gray levels used in the bitmap.
pixel_mode: The pixel mode, i.e., how pixel bits are stored. See
FT_Pixel_Mode for possible values.
palette_mode: This field is intended for paletted pixel modes; it indicates
how the palette is stored. Not used currently.
palette: A typeless pointer to the bitmap palette; this field is intended
for paletted pixel modes. Not used currently.
'''
_fields_ = [
('rows', c_int),
('width', c_int),
('pitch', c_int),
# declaring buffer as c_char_p confuses ctypes
('buffer', POINTER(c_ubyte)),
('num_grays', c_short),
('pixel_mode', c_ubyte),
('palette_mode', c_char),
('palette', c_void_p) ]
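The `pitch` semantics described in the docstring — padded row stride, sign encoding the row flow — can be sketched for the simple `FT_PIXEL_MODE_GRAY` case (one byte per pixel). This is an illustrative helper, not part of this module:

```python
def gray_bitmap_rows(buf, rows, width, pitch):
    # For FT_PIXEL_MODE_GRAY each pixel is one byte; |pitch| bytes per
    # stored row include any padding beyond 'width'. A negative pitch
    # marks an 'up' flow, so the rows come out bottom-to-top.
    step = abs(pitch)
    out = [list(buf[r * step : r * step + width]) for r in range(rows)]
    if pitch < 0:
        out.reverse()
    return out
```

In real code `buf` would be the `buffer` field read through ctypes (e.g. via slicing the `POINTER(c_ubyte)` up to `rows * abs(pitch)` bytes).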
# -----------------------------------------------------------------------------
# Read-only binary data represented as a pointer and a length.
class FT_Data(Structure):
'''
Read-only binary data represented as a pointer and a length.
pointer: The data.
length: The length of the data in bytes.
'''
    _fields_ = [('pointer', POINTER(FT_Byte)),
                ('length', FT_Int)]
# -----------------------------------------------------------------------------
# Client applications often need to associate their own data to a variety of
# FreeType core objects. For example, a text layout API might want to associate
# a glyph cache to a given size object.
#
# Most FreeType objects contain a 'generic' field, of type FT_Generic, whose
# usage is left to client applications and font servers.
#
# It can be used to store a pointer to client-specific data, as well as the
# address of a 'finalizer' function, which will be called by FreeType when the
# object is destroyed (for example, the previous client example would put the
# address of the glyph cache destructor in the 'finalizer' field).
class FT_Generic(Structure):
'''
Client applications often need to associate their own data to a variety of
FreeType core objects. For example, a text layout API might want to
associate a glyph cache to a given size object.
    Most FreeType objects contain a 'generic' field, of type FT_Generic, whose
    usage is left to client applications and font servers.
It can be used to store a pointer to client-specific data, as well as the
address of a 'finalizer' function, which will be called by FreeType when
the object is destroyed (for example, the previous client example would put
the address of the glyph cache destructor in the 'finalizer' field).
data: A typeless pointer to any client-specified data. This field is
completely ignored by the FreeType library.
finalizer: A pointer to a 'generic finalizer' function, which will be
called when the object is destroyed. If this field is set to
NULL, no code will be called.
'''
_fields_ = [('data', c_void_p),
('finalizer', FT_Generic_Finalizer)]
# -----------------------------------------------------------------------------
# This structure models the metrics of a bitmap strike (i.e., a set of glyphs
# for a given point size and resolution) in a bitmap font. It is used for the
# 'available_sizes' field of FT_Face.
class FT_Bitmap_Size(Structure):
'''
This structure models the metrics of a bitmap strike (i.e., a set of glyphs
for a given point size and resolution) in a bitmap font. It is used for the
'available_sizes' field of FT_Face.
height: The vertical distance, in pixels, between two consecutive
baselines. It is always positive.
width: The average width, in pixels, of all glyphs in the strike.
size: The nominal size of the strike in 26.6 fractional points. This field
is not very useful.
x_ppem: The horizontal ppem (nominal width) in 26.6 fractional pixels.
y_ppem: The vertical ppem (nominal height) in 26.6 fractional pixels.
'''
_fields_ = [
('height', FT_Short),
('width', FT_Short),
('size', FT_Pos),
('x_ppem', FT_Pos),
('y_ppem', FT_Pos) ]
# -----------------------------------------------------------------------------
# The base charmap structure.
class FT_CharmapRec(Structure):
'''
The base charmap structure.
face : A handle to the parent face object.
encoding : An FT_Encoding tag identifying the charmap. Use this with
FT_Select_Charmap.
platform_id: An ID number describing the platform for the following
encoding ID. This comes directly from the TrueType
specification and should be emulated for other formats.
encoding_id: A platform specific encoding number. This also comes from the
TrueType specification and should be emulated similarly.
'''
_fields_ = [
        ('face', c_void_p),  # Should be FT_Face
('encoding', FT_Encoding),
('platform_id', FT_UShort),
('encoding_id', FT_UShort),
]
FT_Charmap = POINTER(FT_CharmapRec)
# -----------------------------------------------------------------------------
# A structure used to model the metrics of a single glyph. The values are
# expressed in 26.6 fractional pixel format; if the flag FT_LOAD_NO_SCALE has
# been used while loading the glyph, values are expressed in font units
# instead.
class FT_Glyph_Metrics(Structure):
'''
A structure used to model the metrics of a single glyph. The values are
expressed in 26.6 fractional pixel format; if the flag FT_LOAD_NO_SCALE has
been used while loading the glyph, values are expressed in font units
instead.
width: The glyph's width.
height: The glyph's height.
horiBearingX: Left side bearing for horizontal layout.
horiBearingY: Top side bearing for horizontal layout.
horiAdvance: Advance width for horizontal layout.
vertBearingX: Left side bearing for vertical layout.
vertBearingY: Top side bearing for vertical layout.
vertAdvance: Advance height for vertical layout.
'''
_fields_ = [
('width', FT_Pos),
('height', FT_Pos),
('horiBearingX', FT_Pos),
('horiBearingY', FT_Pos),
('horiAdvance', FT_Pos),
('vertBearingX', FT_Pos),
('vertBearingY', FT_Pos),
('vertAdvance', FT_Pos),
]
# -----------------------------------------------------------------------------
# This structure is used to describe an outline to the scan-line converter.
class FT_Outline(Structure):
'''
This structure is used to describe an outline to the scan-line converter.
n_contours: The number of contours in the outline.
n_points: The number of points in the outline.
points: A pointer to an array of 'n_points' FT_Vector elements, giving the
outline's point coordinates.
tags: A pointer to an array of 'n_points' chars, giving each outline
point's type.
If bit 0 is unset, the point is 'off' the curve, i.e., a Bezier
control point, while it is 'on' if set.
Bit 1 is meaningful for 'off' points only. If set, it indicates a
third-order Bezier arc control point; and a second-order control
point if unset.
If bit 2 is set, bits 5-7 contain the drop-out mode (as defined in
the OpenType specification; the value is the same as the argument to
the SCANMODE instruction).
Bits 3 and 4 are reserved for internal purposes.
contours: An array of 'n_contours' shorts, giving the end point of each
contour within the outline. For example, the first contour is
defined by the points '0' to 'contours[0]', the second one is
defined by the points 'contours[0]+1' to 'contours[1]', etc.
flags: A set of bit flags used to characterize the outline and give hints
to the scan-converter and hinter on how to convert/grid-fit it. See
FT_OUTLINE_FLAGS.
'''
_fields_ = [
('n_contours', c_short),
('n_points', c_short),
('points', POINTER(FT_Vector)),
        # declaring tags as c_char_p would prevent us from accessing all tags
('tags', POINTER(c_ubyte)),
('contours', POINTER(c_short)),
('flags', c_int),
]
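The per-point tag bits described in the docstring can be decoded with a few mask tests. A hedged sketch (the function name and return labels are illustrative):

```python
def classify_tag(tag):
    # Decode one entry of FT_Outline.tags per the bit layout above.
    if tag & 0x01:
        return 'on-curve'
    if tag & 0x02:
        return 'cubic-control'   # third-order Bezier control point
    return 'conic-control'       # second-order (quadratic) control point
```

Walking `tags[0:n_points]` with this classifier is the first step of any outline decomposition done by hand instead of via `FT_Outline_Decompose`.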
# -----------------------------------------------------------------------------
# The root glyph structure contains a given glyph image plus its advance width
# in 16.16 fixed float format.
class FT_GlyphRec(Structure):
'''
The root glyph structure contains a given glyph image plus its advance
width in 16.16 fixed float format.
library: A handle to the FreeType library object.
clazz: A pointer to the glyph's class. Private.
format: The format of the glyph's image.
advance: A 16.16 vector that gives the glyph's advance width.
'''
_fields_ = [
('library', FT_Library),
('clazz', c_void_p),
('format', FT_Glyph_Format),
('advance', FT_Vector)
]
FT_Glyph = POINTER(FT_GlyphRec)
# -----------------------------------------------------------------------------
# FreeType root glyph slot class structure. A glyph slot is a container where
# individual glyphs can be loaded, be they in outline or bitmap format.
class FT_GlyphSlotRec(Structure):
'''
FreeType root glyph slot class structure. A glyph slot is a container where
individual glyphs can be loaded, be they in outline or bitmap format.
library: A handle to the FreeType library instance this slot belongs to.
face: A handle to the parent face object.
next: In some cases (like some font tools), several glyph slots per face
object can be a good thing. As this is rare, the glyph slots are
listed through a direct, single-linked list using its 'next' field.
generic: A typeless pointer which is unused by the FreeType library or any
of its drivers. It can be used by client applications to link
their own data to each glyph slot object.
metrics: The metrics of the last loaded glyph in the slot. The returned
values depend on the last load flags (see the FT_Load_Glyph API
function) and can be expressed either in 26.6 fractional pixels or
font units.
Note that even when the glyph image is transformed, the metrics
are not.
linearHoriAdvance: The advance width of the unhinted glyph. Its value is
expressed in 16.16 fractional pixels, unless
FT_LOAD_LINEAR_DESIGN is set when loading the
glyph. This field can be important to perform correct
WYSIWYG layout. Only relevant for outline glyphs.
linearVertAdvance: The advance height of the unhinted glyph. Its value is
expressed in 16.16 fractional pixels, unless
FT_LOAD_LINEAR_DESIGN is set when loading the
glyph. This field can be important to perform correct
WYSIWYG layout. Only relevant for outline glyphs.
advance: This shorthand is, depending on FT_LOAD_IGNORE_TRANSFORM, the
transformed advance width for the glyph (in 26.6 fractional pixel
format). As specified with FT_LOAD_VERTICAL_LAYOUT, it uses either
the 'horiAdvance' or the 'vertAdvance' value of 'metrics' field.
format: This field indicates the format of the image contained in the glyph
slot. Typically FT_GLYPH_FORMAT_BITMAP, FT_GLYPH_FORMAT_OUTLINE, or
FT_GLYPH_FORMAT_COMPOSITE, but others are possible.
bitmap: This field is used as a bitmap descriptor when the slot format is
FT_GLYPH_FORMAT_BITMAP. Note that the address and content of the
bitmap buffer can change between calls of FT_Load_Glyph and a few
other functions.
bitmap_left: This is the bitmap's left bearing expressed in integer
pixels. Of course, this is only valid if the format is
FT_GLYPH_FORMAT_BITMAP.
bitmap_top: This is the bitmap's top bearing expressed in integer
pixels. Remember that this is the distance from the baseline to
the top-most glyph scanline, upwards y coordinates being
positive.
outline: The outline descriptor for the current glyph image if its format
is FT_GLYPH_FORMAT_OUTLINE. Once a glyph is loaded, 'outline' can
be transformed, distorted, embolded, etc. However, it must not be
freed.
num_subglyphs: The number of subglyphs in a composite glyph. This field is
only valid for the composite glyph format that should
normally only be loaded with the FT_LOAD_NO_RECURSE
flag. For now this is internal to FreeType.
subglyphs: An array of subglyph descriptors for composite glyphs. There are
'num_subglyphs' elements in there. Currently internal to
FreeType.
control_data: Certain font drivers can also return the control data for a
given glyph image (e.g. TrueType bytecode, Type 1
charstrings, etc.). This field is a pointer to such data.
control_len: This is the length in bytes of the control data.
other: Really wicked formats can use this pointer to present their own
glyph image to client applications. Note that the application needs
to know about the image format.
lsb_delta: The difference between hinted and unhinted left side bearing
while autohinting is active. Zero otherwise.
rsb_delta: The difference between hinted and unhinted right side bearing
while autohinting is active. Zero otherwise.
'''
_fields_ = [
('library', FT_Library),
('face', c_void_p),
('next', c_void_p),
('reserved', c_uint),
('generic', FT_Generic),
('metrics', FT_Glyph_Metrics),
('linearHoriAdvance', FT_Fixed),
('linearVertAdvance', FT_Fixed),
('advance', FT_Vector),
('format', FT_Glyph_Format),
('bitmap', FT_Bitmap),
('bitmap_left', FT_Int),
('bitmap_top', FT_Int),
('outline', FT_Outline),
('num_subglyphs', FT_UInt),
('subglyphs', c_void_p),
('control_data', c_void_p),
('control_len', c_long),
('lsb_delta', FT_Pos),
('rsb_delta', FT_Pos),
('other', c_void_p),
('internal', c_void_p),
]
FT_GlyphSlot = POINTER(FT_GlyphSlotRec)
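Two fixed-point formats coexist in the slot above: `advance` is 26.6 while `linearHoriAdvance`/`linearVertAdvance` are 16.16. Small conversion helpers make the difference explicit (names are illustrative, not part of this module):

```python
def f26_6_to_float(value):
    # 26.6 fixed point (e.g. FT_GlyphSlotRec.advance.x): 64 units per pixel.
    return value / 64.0

def f16_16_to_float(value):
    # 16.16 fixed point (e.g. linearHoriAdvance): 65536 units per pixel.
    return value / 65536.0
```

Mixing the two divisors is a classic layout bug; keeping the conversion behind named helpers avoids it.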
# -----------------------------------------------------------------------------
# The size metrics structure gives the metrics of a size object.
class FT_Size_Metrics(Structure):
'''
The size metrics structure gives the metrics of a size object.
x_ppem: The width of the scaled EM square in pixels, hence the term 'ppem'
(pixels per EM). It is also referred to as 'nominal width'.
y_ppem: The height of the scaled EM square in pixels, hence the term 'ppem'
(pixels per EM). It is also referred to as 'nominal height'.
x_scale: A 16.16 fractional scaling value used to convert horizontal
metrics from font units to 26.6 fractional pixels. Only relevant
for scalable font formats.
y_scale: A 16.16 fractional scaling value used to convert vertical metrics
from font units to 26.6 fractional pixels. Only relevant for
scalable font formats.
ascender: The ascender in 26.6 fractional pixels. See FT_FaceRec for the
details.
descender: The descender in 26.6 fractional pixels. See FT_FaceRec for the
details.
height: The height in 26.6 fractional pixels. See FT_FaceRec for the
details.
max_advance: The maximal advance width in 26.6 fractional pixels. See
FT_FaceRec for the details.
'''
_fields_ = [
('x_ppem', FT_UShort),
('y_ppem', FT_UShort),
('x_scale', FT_Fixed),
('y_scale', FT_Fixed),
('ascender', FT_Pos),
('descender', FT_Pos),
('height', FT_Pos),
('max_advance', FT_Pos),
]
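`x_scale`/`y_scale` above are the 16.16 factors that turn font units into 26.6 fractional pixels. A sketch of that multiplication — roughly what FreeType's `FT_MulFix` does, up to rounding details:

```python
def scale_font_units(value, scale_16_16):
    # Multiply a font-unit length by a 16.16 scale (x_scale or y_scale),
    # yielding a 26.6 fractional-pixel value.
    return (value * scale_16_16) >> 16
```

With the identity scale `0x10000`, 64 font units stay 64, i.e. exactly 1.0 pixel in 26.6 terms.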
# -----------------------------------------------------------------------------
# FreeType root size class structure. A size object models a face object at a
# given size.
class FT_SizeRec(Structure):
'''
FreeType root size class structure. A size object models a face object at a
given size.
face: Handle to the parent face object.
generic: A typeless pointer, which is unused by the FreeType library or any
of its drivers. It can be used by client applications to link
their own data to each size object.
metrics: Metrics for this size object. This field is read-only.
'''
_fields_ = [
('face', c_void_p),
('generic', FT_Generic),
('metrics', FT_Size_Metrics),
('internal', c_void_p),
]
FT_Size = POINTER(FT_SizeRec)
# -----------------------------------------------------------------------------
# FreeType root face class structure. A face object models a typeface in a font
# file.
class FT_FaceRec(Structure):
'''
FreeType root face class structure. A face object models a typeface in a
font file.
num_faces: The number of faces in the font file. Some font formats can have
multiple faces in a font file.
face_index: The index of the face in the font file. It is set to 0 if there
is only one face in the font file.
face_flags: A set of bit flags that give important information about the
face; see FT_FACE_FLAG_XXX for the details.
style_flags: A set of bit flags indicating the style of the face; see
FT_STYLE_FLAG_XXX for the details.
num_glyphs: The number of glyphs in the face. If the face is scalable and
has sbits (see 'num_fixed_sizes'), it is set to the number of
outline glyphs.
For CID-keyed fonts, this value gives the highest CID used in
the font.
family_name: The face's family name. This is an ASCII string, usually in
English, which describes the typeface's family (like 'Times
New Roman', 'Bodoni', 'Garamond', etc). This is a least common
denominator used to list fonts. Some formats (TrueType &
OpenType) provide localized and Unicode versions of this
string. Applications should use the format specific interface
to access them. Can be NULL (e.g., in fonts embedded in a PDF
file).
style_name: The face's style name. This is an ASCII string, usually in
English, which describes the typeface's style (like 'Italic',
'Bold', 'Condensed', etc). Not all font formats provide a style
name, so this field is optional, and can be set to NULL. As for
'family_name', some formats provide localized and Unicode
versions of this string. Applications should use the format
specific interface to access them.
num_fixed_sizes: The number of bitmap strikes in the face. Even if the face
is scalable, there might still be bitmap strikes, which
are called 'sbits' in that case.
available_sizes: An array of FT_Bitmap_Size for all bitmap strikes in the
face. It is set to NULL if there is no bitmap strike.
num_charmaps: The number of charmaps in the face.
charmaps: An array of the charmaps of the face.
generic: A field reserved for client uses. See the FT_Generic type
description.
bbox: The font bounding box. Coordinates are expressed in font units (see
'units_per_EM'). The box is large enough to contain any glyph from
the font. Thus, 'bbox.yMax' can be seen as the 'maximal ascender',
and 'bbox.yMin' as the 'minimal descender'. Only relevant for
scalable formats.
Note that the bounding box might be off by (at least) one pixel for
hinted fonts. See FT_Size_Metrics for further discussion.
units_per_EM: The number of font units per EM square for this face. This is
typically 2048 for TrueType fonts, and 1000 for Type 1
fonts. Only relevant for scalable formats.
ascender: The typographic ascender of the face, expressed in font
units. For font formats not having this information, it is set to
'bbox.yMax'. Only relevant for scalable formats.
descender: The typographic descender of the face, expressed in font
units. For font formats not having this information, it is set
to 'bbox.yMin'. Note that this field is usually negative. Only
relevant for scalable formats.
height: The height is the vertical distance between two consecutive
baselines, expressed in font units. It is always positive. Only
relevant for scalable formats.
max_advance_width: The maximal advance width, in font units, for all glyphs
in this face. This can be used to make word wrapping
computations faster. Only relevant for scalable formats.
max_advance_height: The maximal advance height, in font units, for all
glyphs in this face. This is only relevant for vertical
layouts, and is set to 'height' for fonts that do not
provide vertical metrics. Only relevant for scalable
formats.
underline_position: The position, in font units, of the underline line for
this face. It is the center of the underlining
stem. Only relevant for scalable formats.
underline_thickness: The thickness, in font units, of the underline for
this face. Only relevant for scalable formats.
glyph: The face's associated glyph slot(s).
size: The current active size for this face.
charmap: The current active charmap for this face.
'''
_fields_ = [
('num_faces', FT_Long),
('face_index', FT_Long),
('face_flags', FT_Long),
('style_flags', FT_Long),
('num_glyphs', FT_Long),
('family_name', FT_String_p),
('style_name', FT_String_p),
('num_fixed_sizes', FT_Int),
('available_sizes', POINTER(FT_Bitmap_Size)),
('num_charmaps', c_int),
('charmaps', POINTER(FT_Charmap)),
('generic', FT_Generic),
# The following member variables (down to `underline_thickness')
# are only relevant to scalable outlines; cf. @FT_Bitmap_Size
# for bitmap fonts.
('bbox', FT_BBox),
('units_per_EM', FT_UShort),
('ascender', FT_Short),
('descender', FT_Short),
('height', FT_Short),
('max_advance_width', FT_Short),
('max_advance_height', FT_Short),
('underline_position', FT_Short),
('underline_thickness', FT_Short),
('glyph', FT_GlyphSlot),
('size', FT_Size),
('charmap', FT_Charmap),
# private
('driver', c_void_p),
('memory', c_void_p),
('stream', c_void_p),
('sizes_list_head', c_void_p),
('sizes_list_tail', c_void_p),
('autohint', FT_Generic),
('extensions', c_void_p),
('internal', c_void_p),
]
FT_Face = POINTER(FT_FaceRec)
# -----------------------------------------------------------------------------
# A simple structure used to pass more or less generic parameters to
# FT_Open_Face.
class FT_Parameter(Structure):
'''
A simple structure used to pass more or less generic parameters to
FT_Open_Face.
tag: A four-byte identification tag.
data: A pointer to the parameter data
'''
_fields_ = [
('tag', FT_ULong),
('data', FT_Pointer) ]
FT_Parameter_p = POINTER(FT_Parameter)
# -----------------------------------------------------------------------------
# A structure used to indicate how to open a new font file or stream. A pointer
# to such a structure can be used as a parameter for the functions FT_Open_Face
# and FT_Attach_Stream.
class FT_Open_Args(Structure):
'''
A structure used to indicate how to open a new font file or stream. A pointer
to such a structure can be used as a parameter for the functions FT_Open_Face
and FT_Attach_Stream.
flags: A set of bit flags indicating how to use the structure.
memory_base: The first byte of the file in memory.
memory_size: The size in bytes of the file in memory.
pathname: A pointer to an 8-bit file pathname.
stream: A handle to a source stream object.
driver: This field is exclusively used by FT_Open_Face; it simply specifies
the font driver to use to open the face. If set to 0, FreeType
tries to load the face with each one of the drivers in its list.
num_params: The number of extra parameters.
params: Extra parameters passed to the font driver when opening a new face.
'''
_fields_ = [
('flags', FT_UInt),
('memory_base', POINTER(FT_Byte)),
('memory_size', FT_Long),
('pathname', FT_String_p),
('stream', c_void_p),
('driver', c_void_p),
('num_params', FT_Int),
('params', FT_Parameter_p) ]
# -----------------------------------------------------------------------------
# A structure used to model an SFNT 'name' table entry.
class FT_SfntName(Structure):
'''
platform_id: The platform ID for 'string'.
encoding_id: The encoding ID for 'string'.
    language_id: The language ID for 'string'.
    name_id: An identifier for 'string'.
string: The 'name' string. Note that its format differs depending on the
(platform,encoding) pair. It can be a Pascal String, a UTF-16 one,
etc.
Generally speaking, the string is not zero-terminated. Please refer
to the TrueType specification for details.
string_len: The length of 'string' in bytes.
'''
_fields_ = [
('platform_id', FT_UShort),
('encoding_id', FT_UShort),
('language_id', FT_UShort),
('name_id', FT_UShort),
# this string is *not* null-terminated!
('string', POINTER(FT_Byte)),
('string_len', FT_UInt) ]
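Because the `string` bytes are not NUL-terminated and their encoding depends on the (platform, encoding) pair, decoding takes a dispatch step. This heuristic sketch covers only the common cases (Unicode/Windows vs. everything else); see the TrueType specification for the full rules:

```python
def decode_sfnt_string(platform_id, raw):
    # Unicode (0) and Windows (3) entries are UTF-16 big-endian; other
    # platforms are approximated here with latin-1, which is only a rough
    # stand-in for e.g. Mac Roman.
    if platform_id in (0, 3):
        return raw.decode('utf-16-be', errors='replace')
    return raw.decode('latin-1', errors='replace')
```

A caller would slice `string_len` bytes out of the `string` pointer before decoding.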
# -----------------------------------------------------------------------------
# Opaque handle to a path stroker object.
class FT_StrokerRec(Structure):
'''
    Opaque handle to a path stroker object.
'''
_fields_ = [ ]
FT_Stroker = POINTER(FT_StrokerRec)
# -----------------------------------------------------------------------------
# A structure used for bitmap glyph images. This really is a 'sub-class' of
# FT_GlyphRec.
#
class FT_BitmapGlyphRec(Structure):
'''
A structure used for bitmap glyph images. This really is a 'sub-class' of
FT_GlyphRec.
'''
_fields_ = [
('root' , FT_GlyphRec),
('left', FT_Int),
('top', FT_Int),
('bitmap', FT_Bitmap)
]
FT_BitmapGlyph = POINTER(FT_BitmapGlyphRec)
# File: glumpy/geometry/colored_cube.py (repo: glumpy/glumpy, license: BSD-3-Clause)
# -----------------------------------------------------------------------------
# Copyright (c) 2009-2016 Nicolas P. Rougier. All rights reserved.
# Distributed under the (new) BSD License.
# -----------------------------------------------------------------------------
import numpy as np
from glumpy import gloo
def colored_cube(size=2.0):
""" Generate vertices & indices for a filled and outlined cube """
vtype = [('position', np.float32, 3),
('texcoord', np.float32, 2),
('normal', np.float32, 3),
('color', np.float32, 4)]
itype = np.uint32
# Vertices positions
p = np.array([[1, 1, 1], [-1, 1, 1], [-1, -1, 1], [1, -1, 1],
[1, -1, -1], [1, 1, -1], [-1, 1, -1], [-1, -1, -1]])
p *= size/2.0
    # Face normals (unit vectors for the +z, +x, +y, -x, -y and -z faces)
    n = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0],
                  [-1, 0, 0], [0, -1, 0], [0, 0, -1]])
    # Vertex colors
c = np.array([[0, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 1], [0, 1, 0, 1],
[1, 1, 0, 1], [1, 1, 1, 1], [1, 0, 1, 1], [1, 0, 0, 1]])
# Texture coords
t = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
faces_p = [0, 1, 2, 3,
0, 3, 4, 5,
0, 5, 6, 1,
1, 6, 7, 2,
7, 4, 3, 2,
4, 7, 6, 5]
faces_c = [0, 1, 2, 3,
0, 3, 4, 5,
0, 5, 6, 1,
1, 6, 7, 2,
7, 4, 3, 2,
4, 7, 6, 5]
faces_n = [0, 0, 0, 0,
1, 1, 1, 1,
2, 2, 2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
5, 5, 5, 5]
faces_t = [0, 1, 2, 3,
0, 1, 2, 3,
0, 1, 2, 3,
3, 2, 1, 0,
0, 1, 2, 3,
0, 1, 2, 3]
vertices = np.zeros(24, vtype)
vertices['position'] = p[faces_p]
vertices['normal'] = n[faces_n]
vertices['color'] = c[faces_c]
vertices['texcoord'] = t[faces_t]
filled = np.resize(
np.array([0, 1, 2, 0, 2, 3], dtype=itype), 6 * (2 * 3))
filled += np.repeat(4 * np.arange(6, dtype=itype), 6)
outline = np.resize(
np.array([0, 1, 1, 2, 2, 3, 3, 0], dtype=itype), 6 * (2 * 4))
outline += np.repeat(4 * np.arange(6, dtype=itype), 8)
vertices = vertices.view(gloo.VertexBuffer)
filled = filled.view(gloo.IndexBuffer)
outline = outline.view(gloo.IndexBuffer)
return vertices, filled, outline
| bsd-3-clause | 6c781461fc6aef49255e108b0de3bcae | 31.460526 | 79 | 0.38711 | 2.88538 | false | false | false | false |
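The index-buffer construction above (tile one quad's two triangles, then offset by 4 indices per face) can be checked on its own; this sketch assumes only numpy:

```python
import numpy as np

# Two triangles per quad face, tiled across the cube's 6 faces.
base = np.array([0, 1, 2, 0, 2, 3], dtype=np.uint32)
filled = np.resize(base, 6 * (2 * 3))                 # 36 indices total
# Each face owns 4 consecutive vertices, so shift face i's indices by 4*i.
filled += np.repeat(4 * np.arange(6, dtype=np.uint32), 6)

print(filled[:6])    # face 0 -> [0 1 2 0 2 3]
print(filled[6:12])  # face 1 -> [4 5 6 4 6 7]
```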
glumpy/glumpy | glumpy/transforms/conic_equal_area.py | 3 | 3901 | # -----------------------------------------------------------------------------
# Copyright (c) 2009-2016 Nicolas P. Rougier. All rights reserved.
# Distributed under the (new) BSD License.
# -----------------------------------------------------------------------------
"""
Conic Equal Area projection
See: https://github.com/mbostock/d3/blob/master/src/geo/conic-equal-area.js
http://mathworld.wolfram.com/AlbersEqual-AreaConicProjection.html
http://en.wikipedia.org/wiki/Albers_projection
"""
from glumpy import library
from . transform import Transform
class ConicEqualArea(Transform):
""" Conic Equal Area projection """
aliases = { "clip" : "conic_clip",
"scale" : "conic_scale",
"center" : "conic_center",
"rotate" : "conic_rotate",
"translate" : "conic_translate",
"parallels" : "conic_parallels" }
def __init__(self, *args, **kwargs):
"""
Initialize the transform.
Note that parameters must be passed by name (param=value).
Kwargs parameters
-----------------
        clip : tuple of 4 floats
            Clipping extents as (min longitude, max longitude, min latitude, max latitude)
scale : float
Scale factor applied to normalized Cartesian coordinates
center : float, float
Center of the projection as (longitude,latitude)
rotate : float, float, [float]
Rotation as yaw, pitch and roll.
translate : float, float
Translation (in scaled coordinates)
parallels : float, float
Parallels as define in conic equal area projection.
"""
self._clip = Transform._get_kwarg("clip", kwargs, (-180,180,-90,90))
self._scale = Transform._get_kwarg("scale", kwargs, 1.0)
self._center = Transform._get_kwarg("center", kwargs, (0,0))
self._rotate = Transform._get_kwarg("rotate", kwargs, (0,0))
self._translate = Transform._get_kwarg("translate", kwargs, (0,0))
self._parallels = Transform._get_kwarg("parallels", kwargs, (0,90))
code = library.get("transforms/conic-equal-area.glsl")
# Make sure to call the forward function
kwargs["call"] = "forward"
Transform.__init__(self, code, *args, **kwargs)
@property
def scale(self):
return self._scale
@scale.setter
def scale(self, value):
self._scale = float(value)
if self.is_attached:
self["scale"] = self._scale
@property
def clip(self):
return self._clip
@clip.setter
def clip(self, value):
        self._clip = value
if self.is_attached:
self["clip"] = self._clip
@property
def translate(self):
return self._translate
@translate.setter
def translate(self, value):
        self._translate = value
if self.is_attached:
self["translate"] = self._translate
@property
def center(self):
return self._center
@center.setter
def center(self, value):
self._center = value
if self.is_attached:
self["center"] = self._center
@property
def rotate(self):
return self._rotate
@rotate.setter
def rotate(self, value):
self._rotate = value
if self.is_attached:
self["rotate"] = self._rotate
@property
def parallels(self):
return self._parallels
@parallels.setter
def parallels(self, value):
self._parallels = value
if self.is_attached:
self["parallels"] = self._parallels
def on_attach(self, program):
""" Initialization event """
self["clip"] = self._clip
self["scale"] = self._scale
self["center"] = self._center
self["rotate"] = self._rotate
self["translate"] = self._translate
self["parallels"] = self._parallels
| bsd-3-clause | ed2648cffdf10174e7f667c5a1fc85d1 | 27.268116 | 79 | 0.555755 | 4.203664 | false | false | false | false |
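For reference, the forward conic (Albers) equal-area mapping that the GLSL code is expected to implement can be sketched in plain Python from the standard formulas. This is an illustrative sketch, not glumpy's actual shader:

```python
from math import radians, sin, cos, sqrt

def conic_equal_area(lon, lat, lon0=0.0, lat0=0.0, parallels=(0.0, 90.0)):
    """Forward Albers equal-area projection (degrees in, unit-sphere units out)."""
    phi1, phi2 = radians(parallels[0]), radians(parallels[1])
    lam, phi = radians(lon - lon0), radians(lat)
    n = 0.5 * (sin(phi1) + sin(phi2))          # cone constant
    C = cos(phi1) ** 2 + 2.0 * n * sin(phi1)
    rho = sqrt(C - 2.0 * n * sin(phi)) / n
    rho0 = sqrt(C - 2.0 * n * sin(radians(lat0))) / n
    theta = n * lam
    return rho * sin(theta), rho0 - rho * cos(theta)

print(conic_equal_area(0, 0))   # projection center -> (0.0, 0.0)
print(conic_equal_area(0, 90))  # north pole -> (0.0, 2.0)
```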
ihmeuw/vivarium | src/vivarium/framework/artifact/hdf.py | 1 | 13279 | """
=============
HDF Interface
=============
A convenience wrapper around the `tables <https://www.pytables.org>`_ and
:mod:`pandas` HDF interfaces.
Public Interface
----------------
The public interface consists of 5 functions:
.. list-table:: HDF Public Interface
:widths: 20 60
:header-rows: 1
* - Function
- Description
* - :func:`touch`
- Creates an HDF file, wiping an existing file if necessary.
* - :func:`write`
- Stores data at a key in an HDF file.
* - :func:`load`
- Loads (potentially filtered) data from a key in an HDF file.
* - :func:`remove`
- Clears data from a key in an HDF file.
* - :func:`get_keys`
- Gets all available HDF keys from an HDF file.
Contracts
+++++++++
- All functions in the public interface accept both :class:`pathlib.Path` and
normal Python :class:`str` objects for paths.
- All functions in the public interface accept only :class:`str` objects
as representations of the keys in the hdf file. The strings must be
formatted as ``"type.name.measure"`` or ``"type.measure"``.
"""
import json
import re
from pathlib import Path
from typing import Any, List, Optional, Union
import pandas as pd
import tables
from tables.nodes import filenode
PandasObj = (pd.DataFrame, pd.Series)
####################
# Public interface #
####################
def touch(path: Union[str, Path]):
"""Creates an HDF file, wiping an existing file if necessary.
    If the given path has a valid HDF suffix, a new empty HDF file is
    created, replacing any existing file at that path.
Parameters
----------
path
The path to the HDF file.
Raises
------
ValueError
        If the given path does not have a valid HDF suffix.
"""
path = _get_valid_hdf_path(path)
with tables.open_file(str(path), mode="w"):
pass
def write(path: Union[str, Path], entity_key: str, data: Any):
"""Writes data to the HDF file at the given path to the given key.
Parameters
----------
path
The path to the HDF file to write to.
entity_key
A string representation of the internal HDF path where we want to
write the data. The key must be formatted as ``"type.name.measure"``
or ``"type.measure"``.
data
The data to write. If it is a :mod:`pandas` object, it will be
written using a
`pandas.HDFStore <https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#hdf5-pytables>`_
or :meth:`pandas.DataFrame.to_hdf`. If it is some other kind of python
object, it will first be encoded as json with :func:`json.dumps` and
then written to the provided key.
Raises
------
ValueError
If the path or entity_key are improperly formatted.
"""
path = _get_valid_hdf_path(path)
entity_key = EntityKey(entity_key)
if isinstance(data, PandasObj):
_write_pandas_data(path, entity_key, data)
else:
_write_json_blob(path, entity_key, data)
def load(
path: Union[str, Path],
entity_key: str,
filter_terms: Optional[List[str]],
column_filters: Optional[List[str]],
) -> Any:
"""Loads data from an HDF file.
Parameters
----------
path
The path to the HDF file to load the data from.
entity_key
A representation of the internal HDF path where the data is located.
filter_terms
An optional list of terms used to filter the rows in the data.
The terms must be formatted in a way that is suitable for use with
the ``where`` argument of :func:`pandas.read_hdf`. Only
filters applying to existing columns in the data are used.
column_filters
An optional list of columns to load from the data.
Raises
------
ValueError
If the path or entity_key are improperly formatted.
Returns
-------
Any
        The data stored at the given key in the HDF file.
"""
path = _get_valid_hdf_path(path)
entity_key = EntityKey(entity_key)
with tables.open_file(str(path)) as file:
node = file.get_node(entity_key.path)
if isinstance(node, tables.earray.EArray):
# This should be a json encoded document rather than a pandas dataframe
with filenode.open_node(node) as file_node:
data = json.load(file_node)
else:
filter_terms = _get_valid_filter_terms(filter_terms, node.table.colnames)
with pd.HDFStore(str(path), complevel=9, mode="r") as store:
metadata = store.get_storer(
entity_key.path
                ).attrs.metadata  # NOTE: metadata must be read via the storer's attrs
if metadata.get("is_empty", False):
data = pd.read_hdf(path, entity_key.path, where=filter_terms)
data = data.set_index(
list(data.columns)
) # undoing transform performed on write
else:
data = pd.read_hdf(
path, entity_key.path, where=filter_terms, columns=column_filters
)
return data
def remove(path: Union[str, Path], entity_key: str):
"""Removes a piece of data from an HDF file.
Parameters
----------
    path
        The path to the HDF file to remove the data from.
    entity_key
        A representation of the internal HDF path where the data is located.
Raises
------
ValueError
If the path or entity_key are improperly formatted.
"""
path = _get_valid_hdf_path(path)
entity_key = EntityKey(entity_key)
with tables.open_file(str(path), mode="a") as file:
file.remove_node(entity_key.path, recursive=True)
def get_keys(path: str) -> List[str]:
"""Gets key representation of all paths in an HDF file.
Parameters
----------
    path
        The path to the HDF file.
Returns
-------
List[str]
A list of key representations of the internal paths in the HDF.
"""
path = _get_valid_hdf_path(path)
with tables.open_file(str(path)) as file:
keys = _get_keys(file.root)
return keys
class EntityKey(str):
"""A convenience wrapper that translates artifact keys.
This class provides several representations of the artifact keys that
are useful when working with the :mod:`pandas` and
`tables <https://www.pytables.org>`_ HDF interfaces.
"""
def __init__(self, key):
"""
Parameters
----------
key
The string representation of the entity key. Must be formatted
as ``"type.name.measure"`` or ``"type.measure"``.
"""
elements = [e for e in key.split(".") if e]
if len(elements) not in [2, 3] or len(key.split(".")) != len(elements):
raise ValueError(
f"Invalid format for HDF key: {key}. "
'Acceptable formats are "type.name.measure" and "type.measure"'
)
super().__init__()
@property
def type(self) -> str:
"""The type of the entity represented by the key."""
return self.split(".")[0]
@property
def name(self) -> str:
"""The name of the entity represented by the key"""
return self.split(".")[1] if len(self.split(".")) == 3 else ""
@property
def measure(self) -> str:
"""The measure associated with the data represented by the key."""
return self.split(".")[-1]
@property
def group_prefix(self) -> str:
"""The HDF group prefix for the key."""
return "/" + self.type if self.name else "/"
@property
def group_name(self) -> str:
"""The HDF group name for the key."""
return self.name if self.name else self.type
@property
def group(self) -> str:
"""The full path to the group for this key."""
return (
self.group_prefix + "/" + self.group_name
if self.name
else self.group_prefix + self.group_name
)
@property
def path(self) -> str:
"""The full HDF path associated with this key."""
return self.group + "/" + self.measure
def with_measure(self, measure: str) -> "EntityKey":
"""Replaces this key's measure with the provided one.
Parameters
----------
        measure
            The measure to replace this key's measure with.
Returns
-------
EntityKey
A new EntityKey with the updated measure.
"""
if self.name:
return EntityKey(f"{self.type}.{self.name}.{measure}")
else:
return EntityKey(f"{self.type}.{measure}")
def __eq__(self, other: "EntityKey") -> bool:
return isinstance(other, str) and str(self) == str(other)
def __ne__(self, other: "EntityKey") -> bool:
return not self == other
def __hash__(self):
return hash(str(self))
def __repr__(self) -> str:
return f"EntityKey({str(self)})"
#####################
# Private utilities #
#####################
def _get_valid_hdf_path(path: Union[str, Path]) -> Path:
valid_suffixes = [".hdf", ".h5"]
path = Path(path)
if path.suffix not in valid_suffixes:
raise ValueError(
f"{str(path)} has an invalid HDF suffix {path.suffix}."
f" HDF files must have one of {valid_suffixes} as a path suffix."
)
return path
def _write_pandas_data(path: Path, entity_key: EntityKey, data: Union[PandasObj]):
"""Write data in a pandas format to an HDF file.
    This function currently supports :class:`pandas.DataFrame` objects,
    with or without columns, and :class:`pandas.Series` objects.
"""
if data.empty:
# Our data is indexed, sometimes with no other columns. This leaves an
# empty dataframe that store.put will silently fail to write in table
# format.
data = data.reset_index()
if data.empty:
raise ValueError("Cannot write an empty dataframe that does not have an index.")
metadata = {"is_empty": True}
data_columns = True
else:
metadata = {"is_empty": False}
data_columns = None
with pd.HDFStore(str(path), complevel=9) as store:
store.put(entity_key.path, data, format="table", data_columns=data_columns)
store.get_storer(
entity_key.path
        ).attrs.metadata = metadata  # NOTE: metadata must be written via the storer's attrs
def _write_json_blob(path: Path, entity_key: EntityKey, data: Any):
"""Writes a Python object as json to the HDF file at the given path."""
with tables.open_file(str(path), "a") as store:
if entity_key.group_prefix not in store:
store.create_group("/", entity_key.type)
if entity_key.group not in store:
store.create_group(entity_key.group_prefix, entity_key.group_name)
with filenode.new_node(
store, where=entity_key.group, name=entity_key.measure
) as fnode:
fnode.write(bytes(json.dumps(data), "utf-8"))
def _get_keys(root: tables.node.Node, prefix: str = "") -> List[str]:
"""Recursively formats the paths in an HDF file into a key format."""
keys = []
for child in root:
child_name = _get_node_name(child)
if isinstance(child, tables.earray.EArray): # This is the last node
keys.append(f"{prefix}.{child_name}")
elif isinstance(child, tables.table.Table): # Parent was the last node
keys.append(prefix)
else:
new_prefix = f"{prefix}.{child_name}" if prefix else child_name
keys.extend(_get_keys(child, new_prefix))
# Clean up some weird meta groups that get written with dataframes.
keys = [k for k in keys if ".meta." not in k]
return keys
def _get_node_name(node: tables.node.Node) -> str:
"""Gets the name of a node from its string representation."""
node_string = str(node)
node_path = node_string.split()[0]
node_name = node_path.split("/")[-1]
return node_name
def _get_valid_filter_terms(filter_terms, colnames):
"""Removes any filter terms referencing non-existent columns
Parameters
----------
filter_terms
A list of terms formatted so as to be used in the `where` argument of
:func:`pd.read_hdf`.
    colnames
        A list of column names present in the data that will be filtered.
Returns
-------
The list of valid filter terms (terms that do not reference any column
not existing in the data). Returns none if the list is empty because
the `where` argument doesn't like empty lists.
"""
if not filter_terms:
return None
valid_terms = filter_terms.copy()
for term in filter_terms:
# first strip out all the parentheses - the where in read_hdf
# requires all references to be valid
t = re.sub("[()]", "", term)
# then split each condition out
t = re.split("[&|]", t)
# get the unique columns referenced by this term
        term_columns = {re.split(r"[<=>\s]", i.strip())[0] for i in t}
if not term_columns.issubset(colnames):
valid_terms.remove(term)
return valid_terms if valid_terms else None
| bsd-3-clause | b856d25f5197f4db4cce580e72657dea | 29.738426 | 106 | 0.59756 | 3.998494 | false | false | false | false |
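The key-to-path translation that `EntityKey` performs can be exercised without touching an HDF file. This standalone sketch (a hypothetical helper, not part of the module) mirrors the group/path rules above:

```python
def hdf_path(key: str) -> str:
    """Translate 'type.name.measure' or 'type.measure' into an HDF node path."""
    parts = [p for p in key.split(".") if p]
    if len(parts) == 3:
        type_, name, measure = parts
        return f"/{type_}/{name}/{measure}"
    if len(parts) == 2:
        type_, measure = parts
        return f"/{type_}/{measure}"
    raise ValueError(f"Invalid HDF key: {key}")

print(hdf_path("population.structure"))           # -> /population/structure
print(hdf_path("cause.all_causes.restrictions"))  # -> /cause/all_causes/restrictions
```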
ihmeuw/vivarium | src/vivarium/interface/interactive.py | 1 | 7890 | """
==========================
Vivarium Interactive Tools
==========================
This module provides an interface for interactive simulation usage. The main
part is the :class:`InteractiveContext`, a sub-class of the main simulation
object in ``vivarium`` that has been extended to include convenience
methods for running and exploring the simulation in an interactive setting.
See the associated tutorials for :ref:`running <interactive_tutorial>` and
:ref:`exploring <exploration_tutorial>` for more information.
"""
from math import ceil
from typing import Any, Callable, Dict, List
import pandas as pd
from vivarium.framework.engine import SimulationContext
from vivarium.framework.time import Time, Timedelta
from vivarium.framework.values import Pipeline
from .utilities import log_progress, run_from_ipython
class InteractiveContext(SimulationContext):
"""A simulation context with helper methods for running simulations interactively."""
def __init__(self, *args, setup=True, **kwargs):
super().__init__(*args, **kwargs)
if setup:
self.setup()
@property
def current_time(self) -> Time:
"""Returns the current simulation time."""
return self._clock.time
def setup(self):
super().setup()
self.initialize_simulants()
def step(self, step_size: Timedelta = None):
"""Advance the simulation one step.
Parameters
----------
step_size
An optional size of step to take. Must be the same type as the
simulation clock's step size (usually a pandas.Timedelta).
"""
old_step_size = self._clock.step_size
if step_size is not None:
if not isinstance(step_size, type(self._clock.step_size)):
raise ValueError(
f"Provided time must be an instance of {type(self._clock.step_size)}"
)
self._clock._step_size = step_size
super().step()
self._clock._step_size = old_step_size
def run(self, with_logging: bool = True) -> int:
"""Run the simulation for the duration specified in the configuration.
Parameters
----------
with_logging
Whether or not to log the simulation steps. Only works in an ipython
environment.
Returns
-------
int
The number of steps the simulation took.
"""
return self.run_until(self._clock.stop_time, with_logging=with_logging)
def run_for(self, duration: Timedelta, with_logging: bool = True) -> int:
"""Run the simulation for the given time duration.
Parameters
----------
duration
The length of time to run the simulation for. Should be the same
type as the simulation clock's step size (usually a pandas
Timedelta).
with_logging
Whether or not to log the simulation steps. Only works in an ipython
environment.
Returns
-------
int
The number of steps the simulation took.
"""
return self.run_until(self._clock.time + duration, with_logging=with_logging)
def run_until(self, end_time: Time, with_logging: bool = True) -> int:
"""Run the simulation until the provided end time.
Parameters
----------
end_time
The time to run the simulation until. The simulation will run until
its clock is greater than or equal to the provided end time.
with_logging
Whether or not to log the simulation steps. Only works in an ipython
environment.
Returns
-------
int
The number of steps the simulation took.
"""
if not isinstance(end_time, type(self._clock.time)):
raise ValueError(f"Provided time must be an instance of {type(self._clock.time)}")
iterations = int(ceil((end_time - self._clock.time) / self._clock.step_size))
self.take_steps(number_of_steps=iterations, with_logging=with_logging)
assert self._clock.time - self._clock.step_size < end_time <= self._clock.time
return iterations
def take_steps(
self, number_of_steps: int = 1, step_size: Timedelta = None, with_logging: bool = True
):
"""Run the simulation for the given number of steps.
Parameters
----------
number_of_steps
The number of steps to take.
step_size
An optional size of step to take. Must be the same type as the
simulation clock's step size (usually a pandas.Timedelta).
with_logging
Whether or not to log the simulation steps. Only works in an ipython
environment.
"""
if not isinstance(number_of_steps, int):
raise ValueError("Number of steps must be an integer.")
if run_from_ipython() and with_logging:
for _ in log_progress(range(number_of_steps), name="Step"):
self.step(step_size)
else:
for _ in range(number_of_steps):
self.step(step_size)
def get_population(self, untracked: bool = False) -> pd.DataFrame:
"""Get a copy of the population state table.
Parameters
----------
untracked
Whether or not to return simulants who are no longer being tracked
by the simulation.
"""
return self._population.get_population(untracked)
def list_values(self) -> List[str]:
"""List the names of all pipelines in the simulation."""
return list(self._values.keys())
def get_value(self, value_pipeline_name: str) -> Pipeline:
"""Get the value pipeline associated with the given name."""
return self._values.get_value(value_pipeline_name)
def list_events(self) -> List[str]:
"""List all event types registered with the simulation."""
return self._events.list_events()
def get_listeners(self, event_type: str) -> List[Callable]:
"""Get all listeners of a particular type of event.
Available event types can be found by calling
:func:`InteractiveContext.list_events`.
Parameters
----------
event_type
The type of event to grab the listeners for.
"""
if event_type not in self._events:
raise ValueError(f"No event {event_type} in system.")
return self._events.get_listeners(event_type)
def get_emitter(self, event_type: str) -> Callable:
"""Get the callable that emits the given type of events.
Available event types can be found by calling
:func:`InteractiveContext.list_events`.
Parameters
----------
event_type
The type of event to grab the listeners for.
"""
if event_type not in self._events:
raise ValueError(f"No event {event_type} in system.")
return self._events.get_emitter(event_type)
def list_components(self) -> Dict[str, Any]:
"""Get a mapping of component names to components currently in the simulation.
Returns
-------
Dict[str, Any]
A dictionary mapping component names to components.
"""
return self._component_manager.list_components()
def get_component(self, name: str) -> Any:
"""Get the component in the simulation that has ``name``, if present.
Names are guaranteed to be unique.
Parameters
----------
name
A component name.
Returns
-------
A component that has the name ``name`` else None.
"""
return self._component_manager.get_component(name)
def __repr__(self):
return "InteractiveContext()"
| bsd-3-clause | 79ed10e63280f5b7f278809d13604a0e | 32.151261 | 94 | 0.6 | 4.555427 | false | false | false | false |
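The step count computed in `run_until` is just the ceiling of the remaining time divided by the clock's step size. This standard-library sketch (illustrative, not part of vivarium) checks that arithmetic with plain timestamps:

```python
from datetime import datetime, timedelta
from math import ceil

def steps_until(now, end, step):
    """Number of clock ticks needed so that now + n * step >= end."""
    return int(ceil((end - now) / step))

now = datetime(2005, 1, 1)
step = timedelta(days=3)
print(steps_until(now, datetime(2005, 1, 10), step))  # 9 days / 3-day step -> 3
print(steps_until(now, datetime(2005, 1, 11), step))  # 10 days / 3-day step -> 4
```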
ihmeuw/vivarium | tests/config_tree/test_basic_functionality.py | 1 | 15651 | import textwrap
import pytest
import yaml
from vivarium.config_tree import (
ConfigNode,
ConfigTree,
ConfigurationError,
ConfigurationKeyError,
DuplicatedConfigurationError,
)
@pytest.fixture(params=list(range(1, 5)))
def layers(request):
return [f"layer_{i}" for i in range(1, request.param + 1)]
@pytest.fixture
def layers_and_values(layers):
return {layer: f"test_value_{i+1}" for i, layer in enumerate(layers)}
@pytest.fixture
def empty_node(layers):
return ConfigNode(layers, name="test_node")
@pytest.fixture
def full_node(layers_and_values):
n = ConfigNode(list(layers_and_values.keys()), name="test_node")
for layer, value in layers_and_values.items():
n.update(value, layer, source=None)
return n
@pytest.fixture
def empty_tree(layers):
return ConfigTree(layers=layers)
def test_node_creation(empty_node):
assert not empty_node
assert not empty_node.accessed
assert not empty_node.metadata
assert not repr(empty_node)
assert not str(empty_node)
def test_full_node_update(full_node):
assert full_node
assert not full_node.accessed
assert len(full_node.metadata) == len(full_node._layers)
assert repr(full_node)
assert str(full_node)
def test_node_update_no_args():
n = ConfigNode(["base"], name="test_node")
n.update("test_value", layer=None, source=None)
assert n._values["base"] == (None, "test_value")
n = ConfigNode(["layer_1", "layer_2"], name="test_node")
n.update("test_value", layer=None, source=None)
assert "layer_1" not in n._values
assert n._values["layer_2"] == (None, "test_value")
def test_node_update_with_args():
n = ConfigNode(["base"], name="test_node")
n.update("test_value", layer=None, source="test")
assert n._values["base"] == ("test", "test_value")
n = ConfigNode(["base"], name="test_node")
n.update("test_value", layer="base", source="test")
assert n._values["base"] == ("test", "test_value")
n = ConfigNode(["layer_1", "layer_2"], name="test_node")
n.update("test_value", layer=None, source="test")
assert "layer_1" not in n._values
assert n._values["layer_2"] == ("test", "test_value")
n = ConfigNode(["layer_1", "layer_2"], name="test_node")
n.update("test_value", layer="layer_1", source="test")
assert "layer_2" not in n._values
assert n._values["layer_1"] == ("test", "test_value")
n = ConfigNode(["layer_1", "layer_2"], name="test_node")
n.update("test_value", layer="layer_2", source="test")
assert "layer_1" not in n._values
assert n._values["layer_2"] == ("test", "test_value")
n = ConfigNode(["layer_1", "layer_2"], name="test_node")
n.update("test_value", layer="layer_1", source="test")
n.update("test_value", layer="layer_2", source="test")
assert n._values["layer_1"] == ("test", "test_value")
assert n._values["layer_2"] == ("test", "test_value")
def test_node_frozen_update():
n = ConfigNode(["base"], name="test_node")
n.freeze()
with pytest.raises(ConfigurationError):
n.update("test_val", layer=None, source=None)
def test_node_bad_layer_update():
n = ConfigNode(["base"], name="test_node")
with pytest.raises(ConfigurationKeyError):
n.update("test_value", layer="layer_1", source=None)
def test_node_duplicate_update():
n = ConfigNode(["base"], name="test_node")
n.update("test_value", layer=None, source=None)
with pytest.raises(DuplicatedConfigurationError):
n.update("test_value", layer=None, source=None)
def test_node_get_value_with_source_empty(empty_node):
with pytest.raises(ConfigurationKeyError):
empty_node._get_value_with_source(layer=None)
for layer in empty_node._layers:
with pytest.raises(ConfigurationKeyError):
empty_node._get_value_with_source(layer=layer)
assert not empty_node.accessed
def test_node_get_value_with_source(full_node):
assert full_node._get_value_with_source(layer=None) == (
None,
f"test_value_{len(full_node._layers)}",
)
for i, layer in enumerate(full_node._layers):
assert full_node._get_value_with_source(layer=layer) == (None, f"test_value_{i+1}")
assert not full_node.accessed
def test_node_get_value_empty(empty_node):
with pytest.raises(ConfigurationKeyError):
empty_node.get_value(layer=None)
for layer in empty_node._layers:
with pytest.raises(ConfigurationKeyError):
empty_node.get_value(layer=layer)
assert not empty_node.accessed
def test_node_get_value(full_node):
assert full_node.get_value(layer=None) == f"test_value_{len(full_node._layers)}"
assert full_node.accessed
full_node._accessed = False
for i, layer in enumerate(full_node._layers):
assert full_node.get_value(layer=layer) == f"test_value_{i + 1}"
assert full_node.accessed
full_node._accessed = False
assert not full_node.accessed
def test_node_repr():
n = ConfigNode(["base"], name="test_node")
n.update("test_value", layer="base", source="test")
s = """\
base: test_value
source: test"""
assert repr(n) == textwrap.dedent(s)
n = ConfigNode(["base", "layer_1"], name="test_node")
n.update("test_value", layer="base", source="test")
s = """\
base: test_value
source: test"""
assert repr(n) == textwrap.dedent(s)
n = ConfigNode(["base", "layer_1"], name="test_node")
n.update("test_value", layer=None, source="test")
s = """\
layer_1: test_value
source: test"""
assert repr(n) == textwrap.dedent(s)
n = ConfigNode(["base", "layer_1"], name="test_node")
n.update("test_value", layer="base", source="test")
n.update("test_value", layer="layer_1", source="test")
s = """\
layer_1: test_value
source: test
base: test_value
source: test"""
assert repr(n) == textwrap.dedent(s)
def test_node_str():
n = ConfigNode(["base"], name="test_node")
n.update("test_value", layer="base", source="test")
s = "base: test_value"
assert str(n) == s
n = ConfigNode(["base", "layer_1"], name="test_node")
n.update("test_value", layer="base", source="test")
s = "base: test_value"
assert str(n) == s
n = ConfigNode(["base", "layer_1"], name="test_node")
n.update("test_value", layer=None, source="test")
s = "layer_1: test_value"
assert str(n) == s
n = ConfigNode(["base", "layer_1"], name="test_node")
n.update("test_value", layer="base", source="test")
n.update("test_value", layer="layer_1", source="test")
s = "layer_1: test_value"
assert str(n) == s
def test_tree_creation(empty_tree):
assert len(empty_tree) == 0
assert not empty_tree.items()
assert not empty_tree.values()
assert not empty_tree.keys()
assert not repr(empty_tree)
assert not str(empty_tree)
assert not empty_tree._children
assert empty_tree.to_dict() == {}
def test_tree_coerce_dict():
d, s = {}, "test"
assert ConfigTree._coerce(d, s) == (d, s)
d, s = {"key": "val"}, "test"
assert ConfigTree._coerce(d, s) == (d, s)
d = {"key1": {"sub_key1": ["val", "val", "val"], "sub_key2": "val"}, "key2": "val"}
s = "test"
assert ConfigTree._coerce(d, s) == (d, s)
def test_tree_coerce_str():
d = """"""
s = "test"
assert ConfigTree._coerce(d, s) == (None, s)
d = """\
key: val"""
assert ConfigTree._coerce(d, s) == ({"key": "val"}, s)
d = """\
key1:
sub_key1:
- val
- val
- val
sub_key2: val
key2: val"""
r = {"key1": {"sub_key1": ["val", "val", "val"], "sub_key2": "val"}, "key2": "val"}
assert ConfigTree._coerce(d, s) == (r, s)
d = """\
key1:
sub_key1: [val, val, val]
sub_key2: val
key2: val"""
r = {"key1": {"sub_key1": ["val", "val", "val"], "sub_key2": "val"}, "key2": "val"}
assert ConfigTree._coerce(d, s) == (r, s)
def test_tree_coerce_yaml(tmpdir):
d = """\
key1:
sub_key1:
- val
- val
- val
sub_key2: [val, val]
key2: val"""
r = {
"key1": {"sub_key1": ["val", "val", "val"], "sub_key2": ["val", "val"]},
"key2": "val",
}
s = "test"
p = tmpdir.join("model_spec.yaml")
with p.open("w") as f:
f.write(d)
assert ConfigTree._coerce(str(p), s) == (r, s)
assert ConfigTree._coerce(str(p), None) == (r, str(p))
def test_single_layer():
d = ConfigTree()
d.update({"test_key": "test_value", "test_key2": "test_value2"})
assert d.test_key == "test_value"
assert d.test_key2 == "test_value2"
with pytest.raises(DuplicatedConfigurationError):
d.test_key2 = "test_value3"
assert d.test_key2 == "test_value2"
assert d.test_key == "test_value"
def test_dictionary_style_access():
d = ConfigTree()
d.update({"test_key": "test_value", "test_key2": "test_value2"})
assert d["test_key"] == "test_value"
assert d["test_key2"] == "test_value2"
with pytest.raises(DuplicatedConfigurationError):
d["test_key2"] = "test_value3"
assert d["test_key2"] == "test_value2"
assert d["test_key"] == "test_value"
def test_get_missing_key():
d = ConfigTree()
with pytest.raises(ConfigurationKeyError):
_ = d.missing_key
def test_set_missing_key():
d = ConfigTree()
with pytest.raises(ConfigurationKeyError):
d.missing_key = "test_value"
with pytest.raises(ConfigurationKeyError):
d["missing_key"] = "test_value"
def test_multiple_layer_get():
d = ConfigTree(layers=["first", "second", "third"])
d._set_with_metadata("test_key", "test_with_source_value", "first", source=None)
d._set_with_metadata("test_key", "test_value2", "second", source=None)
d._set_with_metadata("test_key", "test_value3", "third", source=None)
d._set_with_metadata("test_key2", "test_value4", "first", source=None)
d._set_with_metadata("test_key2", "test_value5", "second", source=None)
d._set_with_metadata("test_key3", "test_value6", "first", source=None)
assert d.test_key == "test_value3"
assert d.test_key2 == "test_value5"
assert d.test_key3 == "test_value6"
def test_outer_layer_set():
d = ConfigTree(layers=["inner", "outer"])
d._set_with_metadata("test_key", "test_value", "inner", source=None)
d._set_with_metadata("test_key", "test_value3", layer=None, source=None)
assert d.test_key == "test_value3"
assert d["test_key"] == "test_value3"
d = ConfigTree(layers=["inner", "outer"])
d._set_with_metadata("test_key", "test_value", "inner", source=None)
d.test_key = "test_value3"
assert d.test_key == "test_value3"
assert d["test_key"] == "test_value3"
d = ConfigTree(layers=["inner", "outer"])
d._set_with_metadata("test_key", "test_value", "inner", source=None)
d["test_key"] = "test_value3"
assert d.test_key == "test_value3"
assert d["test_key"] == "test_value3"
def test_update_dict():
d = ConfigTree(layers=["inner", "outer"])
d.update({"test_key": "test_value", "test_key2": "test_value2"}, layer="inner")
d.update({"test_key": "test_value3"}, layer="outer")
assert d.test_key == "test_value3"
assert d.test_key2 == "test_value2"
def test_update_dict_nested():
d = ConfigTree(layers=["inner", "outer"])
d.update(
{"test_container": {"test_key": "test_value", "test_key2": "test_value2"}},
layer="inner",
)
with pytest.raises(DuplicatedConfigurationError):
d.update({"test_container": {"test_key": "test_value3"}}, layer="inner")
assert d.test_container.test_key == "test_value"
assert d.test_container.test_key2 == "test_value2"
d.update({"test_container": {"test_key2": "test_value4"}}, layer="outer")
assert d.test_container.test_key2 == "test_value4"
def test_source_metadata():
d = ConfigTree(layers=["inner", "outer"])
d.update({"test_key": "test_value"}, layer="inner", source="initial_load")
d.update({"test_key": "test_value2"}, layer="outer", source="update")
assert d.metadata("test_key") == [
{"layer": "inner", "source": "initial_load", "value": "test_value"},
{"layer": "outer", "source": "update", "value": "test_value2"},
]
def test_exception_on_source_for_missing_key():
d = ConfigTree(layers=["inner", "outer"])
d.update({"test_key": "test_value"}, layer="inner", source="initial_load")
with pytest.raises(ConfigurationKeyError):
d.metadata("missing_key")
def test_unused_keys():
d = ConfigTree({"test_key": {"test_key2": "test_value", "test_key3": "test_value2"}})
assert d.unused_keys() == ["test_key.test_key2", "test_key.test_key3"]
_ = d.test_key.test_key2
assert d.unused_keys() == ["test_key.test_key3"]
_ = d.test_key.test_key3
assert not d.unused_keys()
def test_to_dict_dict():
test_dict = {"configuration": {"time": {"start": {"year": 2000}}}}
config = ConfigTree(test_dict)
assert config.to_dict() == test_dict
def test_to_dict_yaml(test_spec):
config = ConfigTree(str(test_spec))
with test_spec.open() as f:
yaml_config = yaml.full_load(f)
assert yaml_config == config.to_dict()
def test_freeze():
config = ConfigTree(data={"configuration": {"time": {"start": {"year": 2000}}}})
config.freeze()
with pytest.raises(ConfigurationError):
config.update(data={"configuration": {"time": {"end": {"year": 2001}}}})
def test_retrieval_behavior():
layer_inner = "inner"
layer_middle = "middle"
layer_outer = "outer"
default_cfg_value = "value_a"
layer_list = [layer_inner, layer_middle, layer_outer]
    # update the ConfigTree layers in different orders and verify that this has no
    # effect on the values retrieved ("outer" wins when no layer is specified,
    # regardless of the initialization order)
for scenario in [layer_list, reversed(layer_list)]:
cfg = ConfigTree(layers=layer_list)
for layer in scenario:
cfg.update({default_cfg_value: layer}, layer=layer)
assert cfg.get_from_layer(default_cfg_value) == layer_outer
assert cfg.get_from_layer(default_cfg_value, layer=layer_outer) == layer_outer
assert cfg.get_from_layer(default_cfg_value, layer=layer_middle) == layer_middle
assert cfg.get_from_layer(default_cfg_value, layer=layer_inner) == layer_inner
def test_repr_display():
expected_repr = """\
Key1:
override_2: value_ov_2
source: ov2_src
override_1: value_ov_1
source: ov1_src
base: value_base
source: base_src"""
# codifies the notion that repr() displays values from most to least overridden
# regardless of initialization order
layers = ["base", "override_1", "override_2"]
cfg = ConfigTree(layers=layers)
cfg.update({"Key1": "value_ov_2"}, layer="override_2", source="ov2_src")
cfg.update({"Key1": "value_ov_1"}, layer="override_1", source="ov1_src")
cfg.update({"Key1": "value_base"}, layer="base", source="base_src")
assert repr(cfg) == textwrap.dedent(expected_repr)
cfg = ConfigTree(layers=layers)
cfg.update({"Key1": "value_base"}, layer="base", source="base_src")
cfg.update({"Key1": "value_ov_1"}, layer="override_1", source="ov1_src")
cfg.update({"Key1": "value_ov_2"}, layer="override_2", source="ov2_src")
assert repr(cfg) == textwrap.dedent(expected_repr)
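The layered lookup behaviour these tests pin down — outer layers override inner ones, duplicate writes to a layer are rejected — can be sketched with a minimal stand-in. `LayeredConfig` and its method names are assumptions for illustration; the real `ConfigTree` API is richer (nesting, sources, metadata):

```python
class LayeredConfig:
    """Minimal sketch of layered key lookup: outer layers override inner ones."""

    def __init__(self, layers):
        self._layers = list(layers)                  # ordered inner -> outer
        self._data = {layer: {} for layer in layers}

    def set(self, key, value, layer):
        # mirror DuplicatedConfigurationError: a layer's value is write-once
        if key in self._data[layer]:
            raise ValueError(f"{key!r} already set on layer {layer!r}")
        self._data[layer][key] = value

    def get(self, key, layer=None):
        # search from the requested (or outermost) layer inward
        start = self._layers.index(layer) if layer else len(self._layers) - 1
        for name in reversed(self._layers[: start + 1]):
            if key in self._data[name]:
                return self._data[name][key]
        raise KeyError(key)


cfg = LayeredConfig(["inner", "middle", "outer"])
cfg.set("k", "inner-value", "inner")
cfg.set("k", "outer-value", "outer")
```

With this sketch, `cfg.get("k")` resolves to `"outer-value"` while `cfg.get("k", layer="inner")` still sees `"inner-value"`, matching the retrieval-order behaviour asserted in `test_retrieval_behavior`.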
# --- scrapy/scrapy / tests/test_linkextractors.py ---
import pickle
import re
import unittest
from scrapy.http import HtmlResponse, XmlResponse
from scrapy.link import Link
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
from tests import get_testdata
# a hack to skip base class tests in pytest
class Base:
class LinkExtractorTestCase(unittest.TestCase):
extractor_cls = None
def setUp(self):
body = get_testdata('link_extractor', 'linkextractor.html')
self.response = HtmlResponse(url='http://example.com/index', body=body)
def test_urls_type(self):
''' Test that the resulting urls are str objects '''
lx = self.extractor_cls()
self.assertTrue(all(isinstance(link.url, str)
for link in lx.extract_links(self.response)))
def test_extract_all_links(self):
lx = self.extractor_cls()
page4_url = 'http://example.com/page%204.html'
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
Link(url='http://example.com/sample3.html', text='sample 3 text'),
Link(url='http://example.com/sample3.html#foo', text='sample 3 repetition with fragment'),
Link(url='http://www.google.com/something', text=''),
Link(url='http://example.com/innertag.html', text='inner tag'),
Link(url=page4_url, text='href with whitespaces'),
])
def test_extract_filter_allow(self):
lx = self.extractor_cls(allow=('sample', ))
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
Link(url='http://example.com/sample3.html', text='sample 3 text'),
Link(url='http://example.com/sample3.html#foo', text='sample 3 repetition with fragment')
])
def test_extract_filter_allow_with_duplicates(self):
lx = self.extractor_cls(allow=('sample', ), unique=False)
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
Link(url='http://example.com/sample3.html', text='sample 3 text'),
Link(url='http://example.com/sample3.html', text='sample 3 repetition'),
Link(url='http://example.com/sample3.html#foo', text='sample 3 repetition with fragment')
])
def test_extract_filter_allow_with_duplicates_canonicalize(self):
lx = self.extractor_cls(allow=('sample', ), unique=False,
canonicalize=True)
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
Link(url='http://example.com/sample3.html', text='sample 3 text'),
Link(url='http://example.com/sample3.html', text='sample 3 repetition'),
Link(url='http://example.com/sample3.html', text='sample 3 repetition with fragment')
])
def test_extract_filter_allow_no_duplicates_canonicalize(self):
lx = self.extractor_cls(allow=('sample',), unique=True,
canonicalize=True)
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
Link(url='http://example.com/sample3.html', text='sample 3 text'),
])
def test_extract_filter_allow_and_deny(self):
lx = self.extractor_cls(allow=('sample', ), deny=('3', ))
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
])
def test_extract_filter_allowed_domains(self):
lx = self.extractor_cls(allow_domains=('google.com', ))
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://www.google.com/something', text=''),
])
def test_extraction_using_single_values(self):
            '''Test the extractor's behaviour in different situations'''
lx = self.extractor_cls(allow='sample')
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
Link(url='http://example.com/sample3.html', text='sample 3 text'),
Link(url='http://example.com/sample3.html#foo',
text='sample 3 repetition with fragment')
])
lx = self.extractor_cls(allow='sample', deny='3')
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
])
lx = self.extractor_cls(allow_domains='google.com')
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://www.google.com/something', text=''),
])
lx = self.extractor_cls(deny_domains='example.com')
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://www.google.com/something', text=''),
])
def test_nofollow(self):
'''Test the extractor's behaviour for links with rel="nofollow"'''
html = b"""<html><head><title>Page title<title>
<body>
<div class='links'>
<p><a href="/about.html">About us</a></p>
</div>
<div>
<p><a href="/follow.html">Follow this link</a></p>
</div>
<div>
<p><a href="/nofollow.html" rel="nofollow">Dont follow this one</a></p>
</div>
<div>
<p><a href="/nofollow2.html" rel="blah">Choose to follow or not</a></p>
</div>
<div>
<p><a href="http://google.com/something" rel="external nofollow">External link not to follow</a></p>
</div>
</body></html>"""
response = HtmlResponse("http://example.org/somepage/index.html", body=html)
lx = self.extractor_cls()
self.assertEqual(lx.extract_links(response), [
Link(url='http://example.org/about.html', text='About us'),
Link(url='http://example.org/follow.html', text='Follow this link'),
Link(url='http://example.org/nofollow.html', text='Dont follow this one', nofollow=True),
Link(url='http://example.org/nofollow2.html', text='Choose to follow or not'),
Link(url='http://google.com/something', text='External link not to follow', nofollow=True),
])
def test_matches(self):
url1 = 'http://lotsofstuff.com/stuff1/index'
url2 = 'http://evenmorestuff.com/uglystuff/index'
lx = self.extractor_cls(allow=(r'stuff1', ))
self.assertEqual(lx.matches(url1), True)
self.assertEqual(lx.matches(url2), False)
lx = self.extractor_cls(deny=(r'uglystuff', ))
self.assertEqual(lx.matches(url1), True)
self.assertEqual(lx.matches(url2), False)
lx = self.extractor_cls(allow_domains=('evenmorestuff.com', ))
self.assertEqual(lx.matches(url1), False)
self.assertEqual(lx.matches(url2), True)
lx = self.extractor_cls(deny_domains=('lotsofstuff.com', ))
self.assertEqual(lx.matches(url1), False)
self.assertEqual(lx.matches(url2), True)
lx = self.extractor_cls(allow=['blah1'], deny=['blah2'],
allow_domains=['blah1.com'],
deny_domains=['blah2.com'])
self.assertEqual(lx.matches('http://blah1.com/blah1'), True)
self.assertEqual(lx.matches('http://blah1.com/blah2'), False)
self.assertEqual(lx.matches('http://blah2.com/blah1'), False)
self.assertEqual(lx.matches('http://blah2.com/blah2'), False)
def test_restrict_xpaths(self):
lx = self.extractor_cls(restrict_xpaths=('//div[@id="subwrapper"]', ))
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
])
def test_restrict_xpaths_encoding(self):
"""Test restrict_xpaths with encodings"""
html = b"""<html><head><title>Page title<title>
<body><p><a href="item/12.html">Item 12</a></p>
<div class='links'>
<p><a href="/about.html">About us\xa3</a></p>
</div>
<div>
<p><a href="/nofollow.html">This shouldn't be followed</a></p>
</div>
</body></html>"""
response = HtmlResponse("http://example.org/somepage/index.html", body=html, encoding='windows-1252')
lx = self.extractor_cls(restrict_xpaths="//div[@class='links']")
self.assertEqual(lx.extract_links(response),
[Link(url='http://example.org/about.html', text='About us\xa3')])
def test_restrict_xpaths_with_html_entities(self):
html = b'<html><body><p><a href="/♥/you?c=€">text</a></p></body></html>'
response = HtmlResponse("http://example.org/somepage/index.html", body=html, encoding='iso8859-15')
links = self.extractor_cls(restrict_xpaths='//p').extract_links(response)
self.assertEqual(links,
[Link(url='http://example.org/%E2%99%A5/you?c=%A4', text='text')])
def test_restrict_xpaths_concat_in_handle_data(self):
"""html entities cause SGMLParser to call handle_data hook twice"""
body = b"""<html><body><div><a href="/foo">>\xbe\xa9<\xb6\xab</a></body></html>"""
response = HtmlResponse("http://example.org", body=body, encoding='gb18030')
lx = self.extractor_cls(restrict_xpaths="//div")
self.assertEqual(lx.extract_links(response),
[Link(url='http://example.org/foo', text='>\u4eac<\u4e1c',
fragment='', nofollow=False)])
def test_restrict_css(self):
lx = self.extractor_cls(restrict_css=('#subwrapper a',))
self.assertEqual(lx.extract_links(self.response), [
Link(url='http://example.com/sample2.html', text='sample 2')
])
def test_restrict_css_and_restrict_xpaths_together(self):
lx = self.extractor_cls(restrict_xpaths=('//div[@id="subwrapper"]', ),
restrict_css=('#subwrapper + a', ))
self.assertEqual([link for link in lx.extract_links(self.response)], [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
Link(url='http://example.com/sample3.html', text='sample 3 text'),
])
def test_area_tag_with_unicode_present(self):
body = b"""<html><body>\xbe\xa9<map><area href="http://example.org/foo" /></map></body></html>"""
response = HtmlResponse("http://example.org", body=body, encoding='utf-8')
lx = self.extractor_cls()
lx.extract_links(response)
lx.extract_links(response)
lx.extract_links(response)
self.assertEqual(lx.extract_links(response),
[Link(url='http://example.org/foo', text='',
fragment='', nofollow=False)])
def test_encoded_url(self):
body = b"""<html><body><div><a href="?page=2">BinB</a></body></html>"""
response = HtmlResponse("http://known.fm/AC%2FDC/", body=body, encoding='utf8')
lx = self.extractor_cls()
self.assertEqual(lx.extract_links(response), [
Link(url='http://known.fm/AC%2FDC/?page=2', text='BinB', fragment='', nofollow=False),
])
def test_encoded_url_in_restricted_xpath(self):
body = b"""<html><body><div><a href="?page=2">BinB</a></body></html>"""
response = HtmlResponse("http://known.fm/AC%2FDC/", body=body, encoding='utf8')
lx = self.extractor_cls(restrict_xpaths="//div")
self.assertEqual(lx.extract_links(response), [
Link(url='http://known.fm/AC%2FDC/?page=2', text='BinB', fragment='', nofollow=False),
])
def test_ignored_extensions(self):
# jpg is ignored by default
html = b"""<a href="page.html">asd</a> and <a href="photo.jpg">"""
response = HtmlResponse("http://example.org/", body=html)
lx = self.extractor_cls()
self.assertEqual(lx.extract_links(response), [
Link(url='http://example.org/page.html', text='asd'),
])
# override denied extensions
lx = self.extractor_cls(deny_extensions=['html'])
self.assertEqual(lx.extract_links(response), [
Link(url='http://example.org/photo.jpg'),
])
def test_process_value(self):
"""Test restrict_xpaths with encodings"""
html = b"""
<a href="javascript:goToPage('../other/page.html','photo','width=600,height=540,scrollbars'); return false">Text</a>
<a href="/about.html">About us</a>
"""
response = HtmlResponse("http://example.org/somepage/index.html", body=html, encoding='windows-1252')
def process_value(value):
m = re.search(r"javascript:goToPage\('(.*?)'", value)
if m:
return m.group(1)
lx = self.extractor_cls(process_value=process_value)
self.assertEqual(lx.extract_links(response),
[Link(url='http://example.org/other/page.html', text='Text')])
def test_base_url_with_restrict_xpaths(self):
html = b"""<html><head><title>Page title<title><base href="http://otherdomain.com/base/" />
<body><p><a href="item/12.html">Item 12</a></p>
</body></html>"""
response = HtmlResponse("http://example.org/somepage/index.html", body=html)
lx = self.extractor_cls(restrict_xpaths="//p")
self.assertEqual(lx.extract_links(response),
[Link(url='http://otherdomain.com/base/item/12.html', text='Item 12')])
def test_attrs(self):
lx = self.extractor_cls(attrs="href")
page4_url = 'http://example.com/page%204.html'
self.assertEqual(lx.extract_links(self.response), [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
Link(url='http://example.com/sample3.html', text='sample 3 text'),
Link(url='http://example.com/sample3.html#foo', text='sample 3 repetition with fragment'),
Link(url='http://www.google.com/something', text=''),
Link(url='http://example.com/innertag.html', text='inner tag'),
Link(url=page4_url, text='href with whitespaces'),
])
lx = self.extractor_cls(attrs=("href", "src"), tags=("a", "area", "img"), deny_extensions=())
self.assertEqual(lx.extract_links(self.response), [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
Link(url='http://example.com/sample2.jpg', text=''),
Link(url='http://example.com/sample3.html', text='sample 3 text'),
Link(url='http://example.com/sample3.html#foo', text='sample 3 repetition with fragment'),
Link(url='http://www.google.com/something', text=''),
Link(url='http://example.com/innertag.html', text='inner tag'),
Link(url=page4_url, text='href with whitespaces'),
])
lx = self.extractor_cls(attrs=None)
self.assertEqual(lx.extract_links(self.response), [])
def test_tags(self):
html = (
b'<html><area href="sample1.html"></area>'
b'<a href="sample2.html">sample 2</a><img src="sample2.jpg"/></html>'
)
response = HtmlResponse("http://example.com/index.html", body=html)
lx = self.extractor_cls(tags=None)
self.assertEqual(lx.extract_links(response), [])
lx = self.extractor_cls()
self.assertEqual(lx.extract_links(response), [
Link(url='http://example.com/sample1.html', text=''),
Link(url='http://example.com/sample2.html', text='sample 2'),
])
lx = self.extractor_cls(tags="area")
self.assertEqual(lx.extract_links(response), [
Link(url='http://example.com/sample1.html', text=''),
])
lx = self.extractor_cls(tags="a")
self.assertEqual(lx.extract_links(response), [
Link(url='http://example.com/sample2.html', text='sample 2'),
])
lx = self.extractor_cls(tags=("a", "img"), attrs=("href", "src"), deny_extensions=())
self.assertEqual(lx.extract_links(response), [
Link(url='http://example.com/sample2.html', text='sample 2'),
Link(url='http://example.com/sample2.jpg', text=''),
])
def test_tags_attrs(self):
html = b"""
<html><body>
<div id="item1" data-url="get?id=1"><a href="#">Item 1</a></div>
<div id="item2" data-url="get?id=2"><a href="#">Item 2</a></div>
</body></html>
"""
response = HtmlResponse("http://example.com/index.html", body=html)
lx = self.extractor_cls(tags='div', attrs='data-url')
self.assertEqual(lx.extract_links(response), [
Link(url='http://example.com/get?id=1', text='Item 1', fragment='', nofollow=False),
Link(url='http://example.com/get?id=2', text='Item 2', fragment='', nofollow=False)
])
lx = self.extractor_cls(tags=('div',), attrs=('data-url',))
self.assertEqual(lx.extract_links(response), [
Link(url='http://example.com/get?id=1', text='Item 1', fragment='', nofollow=False),
Link(url='http://example.com/get?id=2', text='Item 2', fragment='', nofollow=False)
])
def test_xhtml(self):
xhtml = b"""
<?xml version="1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>XHTML document title</title>
</head>
<body>
<div class='links'>
<p><a href="/about.html">About us</a></p>
</div>
<div>
<p><a href="/follow.html">Follow this link</a></p>
</div>
<div>
<p><a href="/nofollow.html" rel="nofollow">Dont follow this one</a></p>
</div>
<div>
<p><a href="/nofollow2.html" rel="blah">Choose to follow or not</a></p>
</div>
<div>
<p><a href="http://google.com/something" rel="external nofollow">External link not to follow</a></p>
</div>
</body>
</html>
"""
response = HtmlResponse("http://example.com/index.xhtml", body=xhtml)
lx = self.extractor_cls()
self.assertEqual(
lx.extract_links(response),
[
Link(url='http://example.com/about.html', text='About us', fragment='', nofollow=False),
Link(url='http://example.com/follow.html', text='Follow this link', fragment='', nofollow=False),
Link(url='http://example.com/nofollow.html', text='Dont follow this one',
fragment='', nofollow=True),
Link(url='http://example.com/nofollow2.html', text='Choose to follow or not',
fragment='', nofollow=False),
Link(url='http://google.com/something', text='External link not to follow', nofollow=True),
]
)
response = XmlResponse("http://example.com/index.xhtml", body=xhtml)
lx = self.extractor_cls()
self.assertEqual(
lx.extract_links(response),
[
Link(url='http://example.com/about.html', text='About us', fragment='', nofollow=False),
Link(url='http://example.com/follow.html', text='Follow this link', fragment='', nofollow=False),
Link(url='http://example.com/nofollow.html', text='Dont follow this one',
fragment='', nofollow=True),
Link(url='http://example.com/nofollow2.html', text='Choose to follow or not',
fragment='', nofollow=False),
Link(url='http://google.com/something', text='External link not to follow', nofollow=True),
]
)
def test_link_wrong_href(self):
html = b"""
<a href="http://example.org/item1.html">Item 1</a>
<a href="http://[example.org/item2.html">Item 2</a>
<a href="http://example.org/item3.html">Item 3</a>
"""
response = HtmlResponse("http://example.org/index.html", body=html)
lx = self.extractor_cls()
self.assertEqual([link for link in lx.extract_links(response)], [
Link(url='http://example.org/item1.html', text='Item 1', nofollow=False),
Link(url='http://example.org/item3.html', text='Item 3', nofollow=False),
])
def test_ftp_links(self):
body = b"""
<html><body>
<div><a href="ftp://www.external.com/">An Item</a></div>
</body></html>"""
response = HtmlResponse("http://www.example.com/index.html", body=body, encoding='utf8')
lx = self.extractor_cls()
self.assertEqual(lx.extract_links(response), [
Link(url='ftp://www.external.com/', text='An Item', fragment='', nofollow=False),
])
def test_pickle_extractor(self):
lx = self.extractor_cls()
self.assertIsInstance(pickle.loads(pickle.dumps(lx)), self.extractor_cls)
class LxmlLinkExtractorTestCase(Base.LinkExtractorTestCase):
extractor_cls = LxmlLinkExtractor
def test_link_wrong_href(self):
html = b"""
<a href="http://example.org/item1.html">Item 1</a>
<a href="http://[example.org/item2.html">Item 2</a>
<a href="http://example.org/item3.html">Item 3</a>
"""
response = HtmlResponse("http://example.org/index.html", body=html)
lx = self.extractor_cls()
self.assertEqual([link for link in lx.extract_links(response)], [
Link(url='http://example.org/item1.html', text='Item 1', nofollow=False),
Link(url='http://example.org/item3.html', text='Item 3', nofollow=False),
])
def test_link_restrict_text(self):
html = b"""
<a href="http://example.org/item1.html">Pic of a cat</a>
<a href="http://example.org/item2.html">Pic of a dog</a>
<a href="http://example.org/item3.html">Pic of a cow</a>
"""
response = HtmlResponse("http://example.org/index.html", body=html)
# Simple text inclusion test
lx = self.extractor_cls(restrict_text='dog')
self.assertEqual([link for link in lx.extract_links(response)], [
Link(url='http://example.org/item2.html', text='Pic of a dog', nofollow=False),
])
# Unique regex test
lx = self.extractor_cls(restrict_text=r'of.*dog')
self.assertEqual([link for link in lx.extract_links(response)], [
Link(url='http://example.org/item2.html', text='Pic of a dog', nofollow=False),
])
# Multiple regex test
lx = self.extractor_cls(restrict_text=[r'of.*dog', r'of.*cat'])
self.assertEqual([link for link in lx.extract_links(response)], [
Link(url='http://example.org/item1.html', text='Pic of a cat', nofollow=False),
Link(url='http://example.org/item2.html', text='Pic of a dog', nofollow=False),
])
def test_restrict_xpaths_with_html_entities(self):
super().test_restrict_xpaths_with_html_entities()
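For readers without lxml at hand, the core loop these tests exercise — walk the tags, resolve hrefs against the page URL, honour `rel="nofollow"` — can be approximated with the standard library's `html.parser`. This is a rough sketch, not Scrapy's implementation: it simply drops nofollow links, whereas `LxmlLinkExtractor` keeps them flagged with `nofollow=True`, and it ignores `<base>`, `<area>`, allow/deny filters, and deduplication.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class SimpleLinkExtractor(HTMLParser):
    """Collect absolute hrefs from <a> tags; a crude stand-in for LxmlLinkExtractor."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        # skip rel="nofollow" (and rel="external nofollow") links entirely
        if href and "nofollow" not in (attrs.get("rel") or "").split():
            self.links.append(urljoin(self.base_url, href))


parser = SimpleLinkExtractor("http://example.com/index.html")
parser.feed('<a href="sample1.html">one</a>'
            '<a href="/sample2.html" rel="nofollow">two</a>')
```

Feeding the two anchors above leaves only the first, absolutized URL in `parser.links`, mirroring the nofollow handling asserted in `test_nofollow`.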
# --- scrapy/scrapy / scrapy/commands/__init__.py ---
"""
Base class for Scrapy commands
"""
import os
import argparse
from pathlib import Path
from typing import Any, Dict, Optional
from twisted.python import failure
from scrapy.crawler import CrawlerProcess
from scrapy.utils.conf import arglist_to_dict, feed_process_params_from_cli
from scrapy.exceptions import UsageError
class ScrapyCommand:
requires_project = False
crawler_process: Optional[CrawlerProcess] = None
# default settings to be used for this command instead of global defaults
default_settings: Dict[str, Any] = {}
exitcode = 0
def __init__(self):
self.settings: Any = None # set in scrapy.cmdline
def set_crawler(self, crawler):
if hasattr(self, '_crawler'):
raise RuntimeError("crawler already set")
self._crawler = crawler
def syntax(self):
"""
Command syntax (preferably one-line). Do not include command name.
"""
return ""
def short_desc(self):
"""
A short description of the command
"""
return ""
def long_desc(self):
"""A long description of the command. Return short description when not
available. It cannot contain newlines since contents will be formatted
        by optparse, which removes newlines and wraps text.
"""
return self.short_desc()
def help(self):
"""An extensive help for the command. It will be shown when using the
"help" command. It can contain newlines since no post-formatting will
be applied to its contents.
"""
return self.long_desc()
def add_options(self, parser):
"""
        Populate the option parser with options available for this command
"""
group = parser.add_argument_group(title='Global Options')
group.add_argument("--logfile", metavar="FILE",
help="log file. if omitted stderr will be used")
group.add_argument("-L", "--loglevel", metavar="LEVEL", default=None,
help=f"log level (default: {self.settings['LOG_LEVEL']})")
group.add_argument("--nolog", action="store_true",
help="disable logging completely")
group.add_argument("--profile", metavar="FILE", default=None,
help="write python cProfile stats to FILE")
group.add_argument("--pidfile", metavar="FILE",
help="write process ID to FILE")
group.add_argument("-s", "--set", action="append", default=[], metavar="NAME=VALUE",
help="set/override setting (may be repeated)")
group.add_argument("--pdb", action="store_true", help="enable pdb on failure")
def process_options(self, args, opts):
try:
self.settings.setdict(arglist_to_dict(opts.set),
priority='cmdline')
except ValueError:
raise UsageError("Invalid -s value, use -s NAME=VALUE", print_help=False)
if opts.logfile:
self.settings.set('LOG_ENABLED', True, priority='cmdline')
self.settings.set('LOG_FILE', opts.logfile, priority='cmdline')
if opts.loglevel:
self.settings.set('LOG_ENABLED', True, priority='cmdline')
self.settings.set('LOG_LEVEL', opts.loglevel, priority='cmdline')
if opts.nolog:
self.settings.set('LOG_ENABLED', False, priority='cmdline')
if opts.pidfile:
Path(opts.pidfile).write_text(str(os.getpid()) + os.linesep)
if opts.pdb:
failure.startDebugMode()
def run(self, args, opts):
"""
Entry point for running commands
"""
raise NotImplementedError
class BaseRunSpiderCommand(ScrapyCommand):
"""
Common class used to share functionality between the crawl, parse and runspider commands
"""
def add_options(self, parser):
ScrapyCommand.add_options(self, parser)
parser.add_argument("-a", dest="spargs", action="append", default=[], metavar="NAME=VALUE",
help="set spider argument (may be repeated)")
parser.add_argument("-o", "--output", metavar="FILE", action="append",
help="append scraped items to the end of FILE (use - for stdout),"
" to define format set a colon at the end of the output URI (i.e. -o FILE:FORMAT)")
parser.add_argument("-O", "--overwrite-output", metavar="FILE", action="append",
help="dump scraped items into FILE, overwriting any existing file,"
" to define format set a colon at the end of the output URI (i.e. -O FILE:FORMAT)")
parser.add_argument("-t", "--output-format", metavar="FORMAT",
help="format to use for dumping items")
def process_options(self, args, opts):
ScrapyCommand.process_options(self, args, opts)
try:
opts.spargs = arglist_to_dict(opts.spargs)
except ValueError:
raise UsageError("Invalid -a value, use -a NAME=VALUE", print_help=False)
if opts.output or opts.overwrite_output:
feeds = feed_process_params_from_cli(
self.settings,
opts.output,
opts.output_format,
opts.overwrite_output,
)
self.settings.set('FEEDS', feeds, priority='cmdline')
class ScrapyHelpFormatter(argparse.HelpFormatter):
"""
Help Formatter for scrapy command line help messages.
"""
def __init__(self, prog, indent_increment=2, max_help_position=24, width=None):
super().__init__(prog, indent_increment=indent_increment,
max_help_position=max_help_position, width=width)
def _join_parts(self, part_strings):
parts = self.format_part_strings(part_strings)
return super()._join_parts(parts)
def format_part_strings(self, part_strings):
"""
Underline and title case command line help message headers.
"""
if part_strings and part_strings[0].startswith("usage: "):
part_strings[0] = "Usage\n=====\n " + part_strings[0][len('usage: '):]
headings = [i for i in range(len(part_strings)) if part_strings[i].endswith(':\n')]
for index in headings[::-1]:
char = '-' if "Global Options" in part_strings[index] else '='
part_strings[index] = part_strings[index][:-2].title()
underline = ''.join(["\n", (char * len(part_strings[index])), "\n"])
part_strings.insert(index + 1, underline)
return part_strings
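The `add_options` / `process_options` split above follows a simple pattern: the command declares its flags on an `argparse` parser, then post-processes the parsed namespace (e.g. turning repeated `-s NAME=VALUE` flags into a dict, as `arglist_to_dict` does). A stripped-down, Scrapy-free sketch of that pattern — `Command` here is a hypothetical stand-in, not `ScrapyCommand` itself:

```python
import argparse


class Command:
    """Stand-in for the ScrapyCommand declare/process split (no Scrapy imports)."""

    def add_options(self, parser):
        parser.add_argument("-s", "--set", action="append", default=[],
                            metavar="NAME=VALUE",
                            help="set/override setting (may be repeated)")

    def process_options(self, args, opts):
        # mirror arglist_to_dict: ["a=1", "b=2"] -> {"a": "1", "b": "2"}
        try:
            opts.settings = dict(x.split("=", 1) for x in opts.set)
        except ValueError:
            raise SystemExit("Invalid -s value, use -s NAME=VALUE")


cmd = Command()
parser = argparse.ArgumentParser()
cmd.add_options(parser)
opts = parser.parse_args(["-s", "LOG_LEVEL=DEBUG", "-s", "JOBDIR=crawl1"])
cmd.process_options([], opts)
```

After processing, `opts.settings` holds `{"LOG_LEVEL": "DEBUG", "JOBDIR": "crawl1"}`; the real command feeds that dict into `self.settings.setdict(..., priority='cmdline')` instead.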
# --- scrapy/scrapy / tests/test_utils_signal.py ---
import asyncio
from pydispatch import dispatcher
from pytest import mark
from testfixtures import LogCapture
from twisted.internet import defer, reactor
from twisted.python.failure import Failure
from twisted.trial import unittest
from scrapy.utils.signal import send_catch_log, send_catch_log_deferred
from scrapy.utils.test import get_from_asyncio_queue
class SendCatchLogTest(unittest.TestCase):
@defer.inlineCallbacks
def test_send_catch_log(self):
test_signal = object()
handlers_called = set()
dispatcher.connect(self.error_handler, signal=test_signal)
dispatcher.connect(self.ok_handler, signal=test_signal)
with LogCapture() as log:
result = yield defer.maybeDeferred(
self._get_result, test_signal, arg='test',
handlers_called=handlers_called
)
assert self.error_handler in handlers_called
assert self.ok_handler in handlers_called
self.assertEqual(len(log.records), 1)
record = log.records[0]
self.assertIn('error_handler', record.getMessage())
self.assertEqual(record.levelname, 'ERROR')
self.assertEqual(result[0][0], self.error_handler)
self.assertIsInstance(result[0][1], Failure)
self.assertEqual(result[1], (self.ok_handler, "OK"))
dispatcher.disconnect(self.error_handler, signal=test_signal)
dispatcher.disconnect(self.ok_handler, signal=test_signal)
def _get_result(self, signal, *a, **kw):
return send_catch_log(signal, *a, **kw)
def error_handler(self, arg, handlers_called):
handlers_called.add(self.error_handler)
1 / 0
def ok_handler(self, arg, handlers_called):
handlers_called.add(self.ok_handler)
assert arg == 'test'
return "OK"
class SendCatchLogDeferredTest(SendCatchLogTest):
def _get_result(self, signal, *a, **kw):
return send_catch_log_deferred(signal, *a, **kw)
class SendCatchLogDeferredTest2(SendCatchLogDeferredTest):
def ok_handler(self, arg, handlers_called):
handlers_called.add(self.ok_handler)
assert arg == 'test'
d = defer.Deferred()
reactor.callLater(0, d.callback, "OK")
return d
@mark.usefixtures('reactor_pytest')
class SendCatchLogDeferredAsyncDefTest(SendCatchLogDeferredTest):
async def ok_handler(self, arg, handlers_called):
handlers_called.add(self.ok_handler)
assert arg == 'test'
await defer.succeed(42)
return "OK"
def test_send_catch_log(self):
return super().test_send_catch_log()
@mark.only_asyncio()
class SendCatchLogDeferredAsyncioTest(SendCatchLogDeferredTest):
async def ok_handler(self, arg, handlers_called):
handlers_called.add(self.ok_handler)
assert arg == 'test'
await asyncio.sleep(0.2)
return await get_from_asyncio_queue("OK")
def test_send_catch_log(self):
return super().test_send_catch_log()
class SendCatchLogTest2(unittest.TestCase):
def test_error_logged_if_deferred_not_supported(self):
def test_handler():
return defer.Deferred()
test_signal = object()
dispatcher.connect(test_handler, test_signal)
with LogCapture() as log:
send_catch_log(test_signal)
self.assertEqual(len(log.records), 1)
self.assertIn("Cannot return deferreds from signal handler", str(log))
dispatcher.disconnect(test_handler, test_signal)
| bsd-3-clause | 586a3c988885ccc2a84e4099af4c3e01 | 31.211009 | 78 | 0.670179 | 3.837158 | false | true | false | false |
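The `send_catch_log` contract that the tests above exercise — every connected handler runs, exceptions are caught and logged rather than propagated, and each handler's result (or its exception) is collected — can be sketched without Twisted or pydispatch. The dispatch machinery below is a simplified stand-in, not Scrapy's implementation:

```python
import logging

logger = logging.getLogger(__name__)

def send_catch_log(handlers, *args, **kwargs):
    """Call every handler; catch and log exceptions instead of propagating."""
    results = []
    for handler in handlers:
        try:
            results.append((handler, handler(*args, **kwargs)))
        except Exception as exc:
            # a failing handler is logged and recorded, but never
            # prevents the remaining handlers from running
            logger.error("Error caught on signal handler: %r", handler)
            results.append((handler, exc))
    return results

def error_handler(arg):
    1 / 0  # raises: caught and logged, the next handler still runs

def ok_handler(arg):
    assert arg == "test"
    return "OK"

results = send_catch_log([error_handler, ok_handler], "test")
assert isinstance(results[0][1], ZeroDivisionError)
assert results[1] == (ok_handler, "OK")
```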
scrapy/scrapy | scrapy/commands/parse.py | 2 | 10567 | import json
import logging
from typing import Dict
from itemadapter import is_item, ItemAdapter
from w3lib.url import is_url
from twisted.internet.defer import maybeDeferred
from scrapy.commands import BaseRunSpiderCommand
from scrapy.http import Request
from scrapy.utils import display
from scrapy.utils.spider import iterate_spider_output, spidercls_for_request
from scrapy.exceptions import UsageError
logger = logging.getLogger(__name__)
class Command(BaseRunSpiderCommand):
requires_project = True
spider = None
items: Dict[int, list] = {}
requests: Dict[int, list] = {}
first_response = None
def syntax(self):
return "[options] <url>"
def short_desc(self):
return "Parse URL (using its spider) and print the results"
def add_options(self, parser):
BaseRunSpiderCommand.add_options(self, parser)
parser.add_argument("--spider", dest="spider", default=None,
help="use this spider without looking for one")
parser.add_argument("--pipelines", action="store_true",
help="process items through pipelines")
parser.add_argument("--nolinks", dest="nolinks", action="store_true",
help="don't show links to follow (extracted requests)")
parser.add_argument("--noitems", dest="noitems", action="store_true",
help="don't show scraped items")
parser.add_argument("--nocolour", dest="nocolour", action="store_true",
help="avoid using pygments to colorize the output")
parser.add_argument("-r", "--rules", dest="rules", action="store_true",
help="use CrawlSpider rules to discover the callback")
parser.add_argument("-c", "--callback", dest="callback",
help="use this callback for parsing, instead looking for a callback")
parser.add_argument("-m", "--meta", dest="meta",
help="inject extra meta into the Request, it must be a valid raw json string")
parser.add_argument("--cbkwargs", dest="cbkwargs",
help="inject extra callback kwargs into the Request, it must be a valid raw json string")
parser.add_argument("-d", "--depth", dest="depth", type=int, default=1,
help="maximum depth for parsing requests [default: %(default)s]")
parser.add_argument("-v", "--verbose", dest="verbose", action="store_true",
help="print each depth level one by one")
@property
def max_level(self):
max_items, max_requests = 0, 0
if self.items:
max_items = max(self.items)
if self.requests:
max_requests = max(self.requests)
return max(max_items, max_requests)
def add_items(self, lvl, new_items):
old_items = self.items.get(lvl, [])
self.items[lvl] = old_items + new_items
def add_requests(self, lvl, new_reqs):
old_reqs = self.requests.get(lvl, [])
self.requests[lvl] = old_reqs + new_reqs
def print_items(self, lvl=None, colour=True):
if lvl is None:
items = [item for lst in self.items.values() for item in lst]
else:
items = self.items.get(lvl, [])
print("# Scraped Items ", "-" * 60)
display.pprint([ItemAdapter(x).asdict() for x in items], colorize=colour)
def print_requests(self, lvl=None, colour=True):
if lvl is None:
if self.requests:
requests = self.requests[max(self.requests)]
else:
requests = []
else:
requests = self.requests.get(lvl, [])
print("# Requests ", "-" * 65)
display.pprint(requests, colorize=colour)
def print_results(self, opts):
colour = not opts.nocolour
if opts.verbose:
for level in range(1, self.max_level + 1):
print(f'\n>>> DEPTH LEVEL: {level} <<<')
if not opts.noitems:
self.print_items(level, colour)
if not opts.nolinks:
self.print_requests(level, colour)
else:
print(f'\n>>> STATUS DEPTH LEVEL {self.max_level} <<<')
if not opts.noitems:
self.print_items(colour=colour)
if not opts.nolinks:
self.print_requests(colour=colour)
def _get_items_and_requests(self, spider_output, opts, depth, spider, callback):
items, requests = [], []
for x in spider_output:
if is_item(x):
items.append(x)
elif isinstance(x, Request):
requests.append(x)
return items, requests, opts, depth, spider, callback
def run_callback(self, response, callback, cb_kwargs=None):
cb_kwargs = cb_kwargs or {}
d = maybeDeferred(iterate_spider_output, callback(response, **cb_kwargs))
return d
def get_callback_from_rules(self, spider, response):
if getattr(spider, 'rules', None):
for rule in spider.rules:
if rule.link_extractor.matches(response.url):
return rule.callback or "parse"
else:
logger.error('No CrawlSpider rules found in spider %(spider)r, '
'please specify a callback to use for parsing',
{'spider': spider.name})
def set_spidercls(self, url, opts):
spider_loader = self.crawler_process.spider_loader
if opts.spider:
try:
self.spidercls = spider_loader.load(opts.spider)
except KeyError:
logger.error('Unable to find spider: %(spider)s',
{'spider': opts.spider})
else:
self.spidercls = spidercls_for_request(spider_loader, Request(url))
if not self.spidercls:
logger.error('Unable to find spider for: %(url)s', {'url': url})
def _start_requests(spider):
yield self.prepare_request(spider, Request(url), opts)
if self.spidercls:
self.spidercls.start_requests = _start_requests
def start_parsing(self, url, opts):
self.crawler_process.crawl(self.spidercls, **opts.spargs)
self.pcrawler = list(self.crawler_process.crawlers)[0]
self.crawler_process.start()
if not self.first_response:
logger.error('No response downloaded for: %(url)s',
{'url': url})
def scraped_data(self, args):
items, requests, opts, depth, spider, callback = args
if opts.pipelines:
itemproc = self.pcrawler.engine.scraper.itemproc
for item in items:
itemproc.process_item(item, spider)
self.add_items(depth, items)
self.add_requests(depth, requests)
scraped_data = items if opts.output else []
if depth < opts.depth:
for req in requests:
req.meta['_depth'] = depth + 1
req.meta['_callback'] = req.callback
req.callback = callback
scraped_data += requests
return scraped_data
def prepare_request(self, spider, request, opts):
def callback(response, **cb_kwargs):
# memorize first response
if not self.first_response:
self.first_response = response
# determine real callback
cb = response.meta['_callback']
if not cb:
if opts.callback:
cb = opts.callback
elif opts.rules and self.first_response == response:
cb = self.get_callback_from_rules(spider, response)
if not cb:
logger.error('Cannot find a rule that matches %(url)r in spider: %(spider)s',
{'url': response.url, 'spider': spider.name})
return
else:
cb = 'parse'
if not callable(cb):
cb_method = getattr(spider, cb, None)
if callable(cb_method):
cb = cb_method
else:
logger.error('Cannot find callback %(callback)r in spider: %(spider)s',
{'callback': cb, 'spider': spider.name})
return
# parse items and requests
depth = response.meta['_depth']
d = self.run_callback(response, cb, cb_kwargs)
d.addCallback(self._get_items_and_requests, opts, depth, spider, callback)
d.addCallback(self.scraped_data)
return d
# update request meta if any extra meta was passed through the --meta/-m opts.
if opts.meta:
request.meta.update(opts.meta)
# update cb_kwargs if any extra values were passed through the --cbkwargs option.
if opts.cbkwargs:
request.cb_kwargs.update(opts.cbkwargs)
request.meta['_depth'] = 1
request.meta['_callback'] = request.callback
request.callback = callback
return request
def process_options(self, args, opts):
BaseRunSpiderCommand.process_options(self, args, opts)
self.process_request_meta(opts)
self.process_request_cb_kwargs(opts)
def process_request_meta(self, opts):
if opts.meta:
try:
opts.meta = json.loads(opts.meta)
except ValueError:
raise UsageError("Invalid -m/--meta value, pass a valid json string to -m or --meta. "
"Example: --meta='{\"foo\" : \"bar\"}'", print_help=False)
def process_request_cb_kwargs(self, opts):
if opts.cbkwargs:
try:
opts.cbkwargs = json.loads(opts.cbkwargs)
except ValueError:
raise UsageError("Invalid --cbkwargs value, pass a valid json string to --cbkwargs. "
"Example: --cbkwargs='{\"foo\" : \"bar\"}'", print_help=False)
def run(self, args, opts):
# parse arguments
if not len(args) == 1 or not is_url(args[0]):
raise UsageError()
else:
url = args[0]
# prepare spidercls
self.set_spidercls(url, opts)
if self.spidercls and opts.depth > 0:
self.start_parsing(url, opts)
self.print_results(opts)
| bsd-3-clause | a697ed1dfb2b15d5e024dd2fb85af071 | 38.137037 | 117 | 0.559478 | 4.27987 | false | false | false | false |
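The per-depth bookkeeping that `add_items`, `add_requests`, and `max_level` implement in the command above can be exercised in isolation; this sketch reuses only the dictionary logic, outside the command class:

```python
# dicts mapping crawl depth -> list of things collected at that depth
items = {}
requests = {}

def add_items(lvl, new_items):
    # extend (not replace) the list of items seen at this depth
    items[lvl] = items.get(lvl, []) + new_items

def add_requests(lvl, new_reqs):
    requests[lvl] = requests.get(lvl, []) + new_reqs

def max_level():
    # deepest level at which anything (item or request) was collected
    return max(max(items, default=0), max(requests, default=0))

add_items(1, [{"title": "a"}])
add_items(2, [{"title": "b"}])
add_requests(1, ["http://example.com/next"])
assert max_level() == 2
assert items[1] == [{"title": "a"}]
```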
scrapy/scrapy | scrapy/utils/spider.py | 3 | 2195 | import inspect
import logging
from scrapy.spiders import Spider
from scrapy.utils.defer import deferred_from_coro
from scrapy.utils.misc import arg_to_iter
logger = logging.getLogger(__name__)
def iterate_spider_output(result):
if inspect.isasyncgen(result):
return result
elif inspect.iscoroutine(result):
d = deferred_from_coro(result)
d.addCallback(iterate_spider_output)
return d
else:
return arg_to_iter(deferred_from_coro(result))
def iter_spider_classes(module):
"""Return an iterator over all spider classes defined in the given module
that can be instantiated (i.e. which have a name)
"""
# this needs to be imported here until we get rid of the spider manager
# singleton in scrapy.spider.spiders
from scrapy.spiders import Spider
for obj in vars(module).values():
if (
inspect.isclass(obj)
and issubclass(obj, Spider)
and obj.__module__ == module.__name__
and getattr(obj, 'name', None)
):
yield obj
def spidercls_for_request(spider_loader, request, default_spidercls=None,
log_none=False, log_multiple=False):
"""Return a spider class that handles the given Request.
This will look for the spiders that can handle the given request (using
the spider loader) and return a Spider class if (and only if) there is
only one Spider able to handle the Request.
If multiple spiders (or no spider) are found, it will return the
default_spidercls passed. It can optionally log if multiple or no spiders
are found.
"""
snames = spider_loader.find_by_request(request)
if len(snames) == 1:
return spider_loader.load(snames[0])
if len(snames) > 1 and log_multiple:
logger.error('More than one spider can handle: %(request)s - %(snames)s',
{'request': request, 'snames': ', '.join(snames)})
if len(snames) == 0 and log_none:
logger.error('Unable to find spider that handles: %(request)s',
{'request': request})
return default_spidercls
class DefaultSpider(Spider):
name = 'default'
| bsd-3-clause | 40d2b355788c655eb721399e91790d02 | 30.811594 | 81 | 0.651025 | 4.012797 | false | false | false | false |
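The filtering that `iter_spider_classes` performs — keep only classes that subclass `Spider`, are defined in the module itself, and have a non-empty `name` — can be demonstrated with a synthetic module. The `Spider` stand-in below replaces `scrapy.spiders.Spider` so the sketch runs without Scrapy installed:

```python
import inspect
import types

class Spider:  # minimal stand-in for scrapy.spiders.Spider
    name = None

def iter_spider_classes(module):
    for obj in vars(module).values():
        if (
            inspect.isclass(obj)
            and issubclass(obj, Spider)
            and obj.__module__ == module.__name__
            and getattr(obj, "name", None)
        ):
            yield obj

mod = types.ModuleType("demo")

class GoodSpider(Spider):
    name = "good"

class NamelessSpider(Spider):  # no name: filtered out
    pass

# pretend both were defined inside the "demo" module
GoodSpider.__module__ = NamelessSpider.__module__ = "demo"
mod.GoodSpider = GoodSpider
mod.NamelessSpider = NamelessSpider
mod.Spider = Spider  # imported from elsewhere: filtered by the module check

assert [c.name for c in iter_spider_classes(mod)] == ["good"]
```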
scrapy/scrapy | scrapy/cmdline.py | 2 | 5706 | import sys
import os
import argparse
import cProfile
import inspect
import pkg_resources
import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.commands import ScrapyCommand, ScrapyHelpFormatter, BaseRunSpiderCommand
from scrapy.exceptions import UsageError
from scrapy.utils.misc import walk_modules
from scrapy.utils.project import inside_project, get_project_settings
from scrapy.utils.python import garbage_collect
class ScrapyArgumentParser(argparse.ArgumentParser):
def _parse_optional(self, arg_string):
# a token starting with "-:" is a positional value, not an option flag
if arg_string[:2] == '-:':
return None
return super()._parse_optional(arg_string)
def _iter_command_classes(module_name):
# TODO: add `name` attribute to commands and merge this function with
# scrapy.utils.spider.iter_spider_classes
for module in walk_modules(module_name):
for obj in vars(module).values():
if (
inspect.isclass(obj)
and issubclass(obj, ScrapyCommand)
and obj.__module__ == module.__name__
and obj not in (ScrapyCommand, BaseRunSpiderCommand)
):
yield obj
def _get_commands_from_module(module, inproject):
d = {}
for cmd in _iter_command_classes(module):
if inproject or not cmd.requires_project:
cmdname = cmd.__module__.split('.')[-1]
d[cmdname] = cmd()
return d
def _get_commands_from_entry_points(inproject, group='scrapy.commands'):
cmds = {}
for entry_point in pkg_resources.iter_entry_points(group):
obj = entry_point.load()
if inspect.isclass(obj):
cmds[entry_point.name] = obj()
else:
raise Exception(f"Invalid entry point {entry_point.name}")
return cmds
def _get_commands_dict(settings, inproject):
cmds = _get_commands_from_module('scrapy.commands', inproject)
cmds.update(_get_commands_from_entry_points(inproject))
cmds_module = settings['COMMANDS_MODULE']
if cmds_module:
cmds.update(_get_commands_from_module(cmds_module, inproject))
return cmds
def _pop_command_name(argv):
i = 0
for arg in argv[1:]:
if not arg.startswith('-'):
del argv[i]
return arg
i += 1
def _print_header(settings, inproject):
version = scrapy.__version__
if inproject:
print(f"Scrapy {version} - active project: {settings['BOT_NAME']}\n")
else:
print(f"Scrapy {version} - no active project\n")
def _print_commands(settings, inproject):
_print_header(settings, inproject)
print("Usage:")
print(" scrapy <command> [options] [args]\n")
print("Available commands:")
cmds = _get_commands_dict(settings, inproject)
for cmdname, cmdclass in sorted(cmds.items()):
print(f" {cmdname:<13} {cmdclass.short_desc()}")
if not inproject:
print()
print(" [ more ] More commands available when run from project directory")
print()
print('Use "scrapy <command> -h" to see more info about a command')
def _print_unknown_command(settings, cmdname, inproject):
_print_header(settings, inproject)
print(f"Unknown command: {cmdname}\n")
print('Use "scrapy" to see available commands')
def _run_print_help(parser, func, *a, **kw):
try:
func(*a, **kw)
except UsageError as e:
if str(e):
parser.error(str(e))
if e.print_help:
parser.print_help()
sys.exit(2)
def execute(argv=None, settings=None):
if argv is None:
argv = sys.argv
if settings is None:
settings = get_project_settings()
# set EDITOR from environment if available
try:
editor = os.environ['EDITOR']
except KeyError:
pass
else:
settings['EDITOR'] = editor
inproject = inside_project()
cmds = _get_commands_dict(settings, inproject)
cmdname = _pop_command_name(argv)
if not cmdname:
_print_commands(settings, inproject)
sys.exit(0)
elif cmdname not in cmds:
_print_unknown_command(settings, cmdname, inproject)
sys.exit(2)
cmd = cmds[cmdname]
parser = ScrapyArgumentParser(formatter_class=ScrapyHelpFormatter,
usage=f"scrapy {cmdname} {cmd.syntax()}",
conflict_handler='resolve',
description=cmd.long_desc())
settings.setdict(cmd.default_settings, priority='command')
cmd.settings = settings
cmd.add_options(parser)
opts, args = parser.parse_known_args(args=argv[1:])
_run_print_help(parser, cmd.process_options, args, opts)
cmd.crawler_process = CrawlerProcess(settings)
_run_print_help(parser, _run_command, cmd, args, opts)
sys.exit(cmd.exitcode)
def _run_command(cmd, args, opts):
if opts.profile:
_run_command_profiled(cmd, args, opts)
else:
cmd.run(args, opts)
def _run_command_profiled(cmd, args, opts):
if opts.profile:
sys.stderr.write(f"scrapy: writing cProfile stats to {opts.profile!r}\n")
loc = locals()
p = cProfile.Profile()
p.runctx('cmd.run(args, opts)', globals(), loc)
if opts.profile:
p.dump_stats(opts.profile)
if __name__ == '__main__':
try:
execute()
finally:
# Twisted prints errors in DebugInfo.__del__, but PyPy does not run gc.collect() on exit:
# http://doc.pypy.org/en/latest/cpython_differences.html
# ?highlight=gc.collect#differences-related-to-garbage-collection-strategies
garbage_collect()
| bsd-3-clause | 3526f97882507aa994f433a6ff393d31 | 30.180328 | 97 | 0.627234 | 3.763852 | false | false | false | false |
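`ScrapyArgumentParser._parse_optional` exists so that values beginning with `-:` (for example `-o -:json`, which Scrapy uses to mean "write JSON to stdout") are taken as option values rather than rejected as unknown flags; returning `None` tells `argparse` to classify the token as a positional. A minimal reproduction — the `-o/--output` option here is illustrative, not the command's full option surface:

```python
import argparse

class ScrapyArgumentParser(argparse.ArgumentParser):
    def _parse_optional(self, arg_string):
        # a token starting with "-:" is a value, not an option flag
        if arg_string[:2] == "-:":
            return None  # None = "treat this token as a positional"
        return super()._parse_optional(arg_string)

parser = ScrapyArgumentParser()
parser.add_argument("-o", "--output")
opts = parser.parse_args(["-o", "-:json"])
assert opts.output == "-:json"
# a stock ArgumentParser would instead fail with
# "argument -o/--output: expected one argument"
```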
scrapy/scrapy | scrapy/utils/request.py | 1 | 13713 | """
This module provides some useful functions for working with
scrapy.http.Request objects
"""
import hashlib
import json
import warnings
from typing import Dict, Iterable, List, Optional, Tuple, Union
from urllib.parse import urlunparse
from weakref import WeakKeyDictionary
from w3lib.http import basic_auth_header
from w3lib.url import canonicalize_url
from scrapy import Request, Spider
from scrapy.exceptions import ScrapyDeprecationWarning
from scrapy.utils.httpobj import urlparse_cached
from scrapy.utils.misc import load_object
from scrapy.utils.python import to_bytes, to_unicode
_deprecated_fingerprint_cache: "WeakKeyDictionary[Request, Dict[Tuple[Optional[Tuple[bytes, ...]], bool], str]]"
_deprecated_fingerprint_cache = WeakKeyDictionary()
def _serialize_headers(headers, request):
for header in headers:
if header in request.headers:
yield header
for value in request.headers.getlist(header):
yield value
def request_fingerprint(
request: Request,
include_headers: Optional[Iterable[Union[bytes, str]]] = None,
keep_fragments: bool = False,
) -> str:
"""
Return the request fingerprint as a hexadecimal string.
The request fingerprint is a hash that uniquely identifies the resource the
request points to. For example, take the following two urls:
http://www.example.com/query?id=111&cat=222
http://www.example.com/query?cat=222&id=111
Even though those are two different URLs, both point to the same resource
and are equivalent (i.e. they should return the same response).
Another example is cookies used to store session ids. Suppose the
following page is only accessible to authenticated users:
http://www.example.com/members/offers.html
Lots of sites use a cookie to store the session id, which adds a random
component to the HTTP Request and thus should be ignored when calculating
the fingerprint.
For this reason, request headers are ignored by default when calculating
the fingerprint. If you want to include specific headers use the
include_headers argument, which is a list of Request headers to include.
Also, servers usually ignore fragments in urls when handling requests,
so they are also ignored by default when calculating the fingerprint.
If you want to include them, set the keep_fragments argument to True
(for instance when handling requests with a headless browser).
"""
if include_headers or keep_fragments:
message = (
'Call to deprecated function '
'scrapy.utils.request.request_fingerprint().\n'
'\n'
'If you are using this function in a Scrapy component because you '
'need a non-default fingerprinting algorithm, and you are OK '
'with that non-default fingerprinting algorithm being used by '
'all Scrapy components and not just the one calling this '
'function, use crawler.request_fingerprinter.fingerprint() '
'instead in your Scrapy component (you can get the crawler '
'object from the \'from_crawler\' class method), and use the '
'\'REQUEST_FINGERPRINTER_CLASS\' setting to configure your '
'non-default fingerprinting algorithm.\n'
'\n'
'Otherwise, consider using the '
'scrapy.utils.request.fingerprint() function instead.\n'
'\n'
'If you switch to \'fingerprint()\', or assign the '
'\'REQUEST_FINGERPRINTER_CLASS\' setting a class that uses '
'\'fingerprint()\', the generated fingerprints will not only be '
'bytes instead of a string, but they will also be different from '
'those generated by \'request_fingerprint()\'. Before you switch, '
'make sure that you understand the consequences of this (e.g. '
'cache invalidation) and are OK with them; otherwise, consider '
'implementing your own function which returns the same '
'fingerprints as the deprecated \'request_fingerprint()\' function.'
)
else:
message = (
'Call to deprecated function '
'scrapy.utils.request.request_fingerprint().\n'
'\n'
'If you are using this function in a Scrapy component, and you '
'are OK with users of your component changing the fingerprinting '
'algorithm through settings, use '
'crawler.request_fingerprinter.fingerprint() instead in your '
'Scrapy component (you can get the crawler object from the '
'\'from_crawler\' class method).\n'
'\n'
'Otherwise, consider using the '
'scrapy.utils.request.fingerprint() function instead.\n'
'\n'
'Either way, the resulting fingerprints will be returned as '
'bytes, not as a string, and they will also be different from '
'those generated by \'request_fingerprint()\'. Before you switch, '
'make sure that you understand the consequences of this (e.g. '
'cache invalidation) and are OK with them; otherwise, consider '
'implementing your own function which returns the same '
'fingerprints as the deprecated \'request_fingerprint()\' function.'
)
warnings.warn(message, category=ScrapyDeprecationWarning, stacklevel=2)
processed_include_headers: Optional[Tuple[bytes, ...]] = None
if include_headers:
processed_include_headers = tuple(
to_bytes(h.lower()) for h in sorted(include_headers)
)
cache = _deprecated_fingerprint_cache.setdefault(request, {})
cache_key = (processed_include_headers, keep_fragments)
if cache_key not in cache:
fp = hashlib.sha1()
fp.update(to_bytes(request.method))
fp.update(to_bytes(canonicalize_url(request.url, keep_fragments=keep_fragments)))
fp.update(request.body or b'')
if processed_include_headers:
for part in _serialize_headers(processed_include_headers, request):
fp.update(part)
cache[cache_key] = fp.hexdigest()
return cache[cache_key]
def _request_fingerprint_as_bytes(*args, **kwargs):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
return bytes.fromhex(request_fingerprint(*args, **kwargs))
_fingerprint_cache: "WeakKeyDictionary[Request, Dict[Tuple[Optional[Tuple[bytes, ...]], bool], bytes]]"
_fingerprint_cache = WeakKeyDictionary()
def fingerprint(
request: Request,
*,
include_headers: Optional[Iterable[Union[bytes, str]]] = None,
keep_fragments: bool = False,
) -> bytes:
"""
Return the request fingerprint.
The request fingerprint is a hash that uniquely identifies the resource the
request points to. For example, take the following two urls:
http://www.example.com/query?id=111&cat=222
http://www.example.com/query?cat=222&id=111
Even though those are two different URLs, both point to the same resource
and are equivalent (i.e. they should return the same response).
Another example is cookies used to store session ids. Suppose the
following page is only accessible to authenticated users:
http://www.example.com/members/offers.html
Lots of sites use a cookie to store the session id, which adds a random
component to the HTTP Request and thus should be ignored when calculating
the fingerprint.
For this reason, request headers are ignored by default when calculating
the fingerprint. If you want to include specific headers use the
include_headers argument, which is a list of Request headers to include.
Also, servers usually ignore fragments in urls when handling requests,
so they are also ignored by default when calculating the fingerprint.
If you want to include them, set the keep_fragments argument to True
(for instance when handling requests with a headless browser).
"""
processed_include_headers: Optional[Tuple[bytes, ...]] = None
if include_headers:
processed_include_headers = tuple(
to_bytes(h.lower()) for h in sorted(include_headers)
)
cache = _fingerprint_cache.setdefault(request, {})
cache_key = (processed_include_headers, keep_fragments)
if cache_key not in cache:
# To decode bytes reliably (JSON does not support bytes), regardless of
# character encoding, we use bytes.hex()
headers: Dict[str, List[str]] = {}
if processed_include_headers:
for header in processed_include_headers:
if header in request.headers:
headers[header.hex()] = [
header_value.hex()
for header_value in request.headers.getlist(header)
]
fingerprint_data = {
'method': to_unicode(request.method),
'url': canonicalize_url(request.url, keep_fragments=keep_fragments),
'body': (request.body or b'').hex(),
'headers': headers,
}
fingerprint_json = json.dumps(fingerprint_data, sort_keys=True)
cache[cache_key] = hashlib.sha1(fingerprint_json.encode()).digest()
return cache[cache_key]
class RequestFingerprinter:
"""Default fingerprinter.
It takes into account a canonical version
(:func:`w3lib.url.canonicalize_url`) of :attr:`request.url
<scrapy.http.Request.url>` and the values of :attr:`request.method
<scrapy.http.Request.method>` and :attr:`request.body
<scrapy.http.Request.body>`. It then generates an `SHA1
<https://en.wikipedia.org/wiki/SHA-1>`_ hash.
.. seealso:: :setting:`REQUEST_FINGERPRINTER_IMPLEMENTATION`.
"""
@classmethod
def from_crawler(cls, crawler):
return cls(crawler)
def __init__(self, crawler=None):
if crawler:
implementation = crawler.settings.get(
'REQUEST_FINGERPRINTER_IMPLEMENTATION'
)
else:
implementation = '2.6'
if implementation == '2.6':
message = (
'\'2.6\' is a deprecated value for the '
'\'REQUEST_FINGERPRINTER_IMPLEMENTATION\' setting.\n'
'\n'
'It is also the default value. In other words, it is normal '
'to get this warning if you have not defined a value for the '
'\'REQUEST_FINGERPRINTER_IMPLEMENTATION\' setting. This is so '
'for backward compatibility reasons, but it will change in a '
'future version of Scrapy.\n'
'\n'
'See the documentation of the '
'\'REQUEST_FINGERPRINTER_IMPLEMENTATION\' setting for '
'information on how to handle this deprecation.'
)
warnings.warn(message, category=ScrapyDeprecationWarning, stacklevel=2)
self._fingerprint = _request_fingerprint_as_bytes
elif implementation == '2.7':
self._fingerprint = fingerprint
else:
raise ValueError(
f'Got an invalid value on setting '
f'\'REQUEST_FINGERPRINTER_IMPLEMENTATION\': '
f'{implementation!r}. Valid values are \'2.6\' (deprecated) '
f'and \'2.7\'.'
)
def fingerprint(self, request: Request):
return self._fingerprint(request)
def request_authenticate(
request: Request,
username: str,
password: str,
) -> None:
"""Authenticate the given request (in place) using the HTTP basic access
authentication mechanism (RFC 2617) and the given username and password
"""
request.headers['Authorization'] = basic_auth_header(username, password)
def request_httprepr(request: Request) -> bytes:
"""Return the raw HTTP representation (as bytes) of the given request.
This is provided only for reference since it's not the actual stream of
bytes that will be sent when performing the request (that's controlled
by Twisted).
"""
parsed = urlparse_cached(request)
path = urlunparse(('', '', parsed.path or '/', parsed.params, parsed.query, ''))
s = to_bytes(request.method) + b" " + to_bytes(path) + b" HTTP/1.1\r\n"
s += b"Host: " + to_bytes(parsed.hostname or b'') + b"\r\n"
if request.headers:
s += request.headers.to_string() + b"\r\n"
s += b"\r\n"
s += request.body
return s
def referer_str(request: Request) -> Optional[str]:
""" Return Referer HTTP header suitable for logging. """
referrer = request.headers.get('Referer')
if referrer is None:
return referrer
return to_unicode(referrer, errors='replace')
def request_from_dict(d: dict, *, spider: Optional[Spider] = None) -> Request:
"""Create a :class:`~scrapy.Request` object from a dict.
If a spider is given, it will try to resolve the callbacks looking at the
spider for methods with the same name.
"""
request_cls = load_object(d["_class"]) if "_class" in d else Request
kwargs = {key: value for key, value in d.items() if key in request_cls.attributes}
if d.get("callback") and spider:
kwargs["callback"] = _get_method(spider, d["callback"])
if d.get("errback") and spider:
kwargs["errback"] = _get_method(spider, d["errback"])
return request_cls(**kwargs)
def _get_method(obj, name):
"""Helper function for request_from_dict"""
name = str(name)
try:
return getattr(obj, name)
except AttributeError:
raise ValueError(f"Method {name!r} not found in: {obj}")
| bsd-3-clause | 61b5de7a231db9cd8a42780f3225d6af | 40.807927 | 112 | 0.651426 | 4.310909 | false | false | false | false |
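The `fingerprint()` algorithm above — canonical URL, method, hex-encoded body, and sorted-key JSON hashed with SHA1 — can be sketched with only the standard library. The `canonicalize` helper below is a simplified stand-in for `w3lib.url.canonicalize_url` (it only sorts query parameters and drops the fragment), so its digests will differ from Scrapy's; the point is that the two equivalent URLs from the docstring collapse to a single fingerprint:

```python
import hashlib
import json
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def canonicalize(url):
    # simplified stand-in for w3lib.url.canonicalize_url:
    # sort the query string and strip the fragment
    parts = urlparse(url)
    query = urlencode(sorted(parse_qsl(parts.query)))
    return urlunparse(parts._replace(query=query, fragment=""))

def fingerprint(method, url, body=b""):
    data = {
        "method": method,
        "url": canonicalize(url),
        "body": body.hex(),  # bytes are hex-encoded, as JSON has no bytes
        "headers": {},       # headers ignored by default, as documented
    }
    payload = json.dumps(data, sort_keys=True).encode()
    return hashlib.sha1(payload).digest()

# the two equivalent URLs from the docstring hash to the same value
fp1 = fingerprint("GET", "http://www.example.com/query?id=111&cat=222")
fp2 = fingerprint("GET", "http://www.example.com/query?cat=222&id=111")
assert fp1 == fp2
assert len(fp1) == 20  # raw SHA1 digest
```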
moraes/tipfy | tipfy/utils.py | 9 | 4320 | #!/usr/bin/env python
#
# Copyright 2009 Facebook
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Escaping/unescaping methods for HTML, JSON, URLs, and others."""
import base64
import htmlentitydefs
import re
import unicodedata
import urllib
import xml.sax.saxutils
# Imported here for compatibility.
from .json import json_encode, json_decode, json_b64encode, json_b64decode
from .local import get_request
from .routing import url_for
def xhtml_escape(value):
"""Escapes a string so it is valid within XML or XHTML.
:param value:
The value to be escaped.
:returns:
The escaped value.
"""
return utf8(xml.sax.saxutils.escape(value, {'"': "&quot;"}))
def xhtml_unescape(value):
"""Un-escapes an XML-escaped string.
:param value:
The value to be un-escaped.
:returns:
The un-escaped value.
"""
return re.sub(r"&(#?)(\w+?);", _convert_entity, _unicode(value))
def render_json_response(*args, **kwargs):
"""Renders a JSON response.
:param args:
Arguments to be passed to json_encode().
:param kwargs:
Keyword arguments to be passed to json_encode().
:returns:
A :class:`Response` object with a JSON string in the body and
mimetype set to ``application/json``.
"""
return get_request().app.response_class(json_encode(*args, **kwargs),
mimetype='application/json')
def squeeze(value):
"""Replace all sequences of whitespace chars with a single space."""
return re.sub(r"[\x00-\x20]+", " ", value).strip()
def url_escape(value):
"""Returns a valid URL-encoded version of the given value."""
return urllib.quote_plus(utf8(value))
def url_unescape(value):
"""Decodes the given value from a URL."""
return _unicode(urllib.unquote_plus(value))
def utf8(value):
"""Encodes a unicode value to UTF-8 if not yet encoded.
:param value:
Value to be encoded.
:returns:
An encoded string.
"""
if isinstance(value, unicode):
return value.encode("utf-8")
assert isinstance(value, str)
return value
def _unicode(value):
"""Encodes a string value to unicode if not yet decoded.
:param value:
Value to be decoded.
:returns:
A decoded string.
"""
if isinstance(value, str):
return value.decode("utf-8")
assert isinstance(value, unicode)
return value
def _convert_entity(m):
if m.group(1) == "#":
try:
return unichr(int(m.group(2)))
except ValueError:
return "&#%s;" % m.group(2)
try:
return _HTML_UNICODE_MAP[m.group(2)]
except KeyError:
return "&%s;" % m.group(2)
def _build_unicode_map():
return dict((name, unichr(value)) for \
name, value in htmlentitydefs.name2codepoint.iteritems())
def slugify(value, max_length=None, default=None):
"""Converts a string to slug format (all lowercase, words separated by
dashes).
:param value:
The string to be slugified.
:param max_length:
An integer to restrict the resulting string to a maximum length.
Words are not broken when restricting length.
:param default:
A default value in case the resulting string is empty.
:returns:
A slugified string.
"""
value = _unicode(value)
s = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').lower()
s = re.sub('-+', '-', re.sub('[^a-zA-Z0-9-]+', '-', s)).strip('-')
if not s:
return default
if max_length:
# Restrict length without breaking words.
while len(s) > max_length:
if s.find('-') == -1:
s = s[:max_length]
else:
s = s.rsplit('-', 1)[0]
return s
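The truncation loop above never breaks words: it drops trailing dash-separated chunks until the slug fits. A Python 3 sketch of the same algorithm (byte/str handling adapted, behavior otherwise unchanged):

```python
import re
import unicodedata

def slugify(value, max_length=None, default=None):
    # Normalize to ASCII, lowercase, replace non-alphanumeric runs
    # with single dashes, then trim dashes from both ends.
    s = unicodedata.normalize('NFKD', value)
    s = s.encode('ascii', 'ignore').decode('ascii').lower()
    s = re.sub('-+', '-', re.sub('[^a-zA-Z0-9-]+', '-', s)).strip('-')
    if not s:
        return default
    if max_length:
        # Restrict length without breaking words: drop trailing chunks.
        while len(s) > max_length:
            if '-' not in s:
                s = s[:max_length]
            else:
                s = s.rsplit('-', 1)[0]
    return s
```

For example, `slugify("Hello, World!", max_length=5)` yields `"hello"` rather than a mid-word cut like `"hello-w"[:5]`.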
_HTML_UNICODE_MAP = _build_unicode_map()
| bsd-3-clause | 46e7ec1021452299f8a6643d70a6a1eb | 26 | 78 | 0.634491 | 3.799472 | false | false | false | false |
moraes/tipfy | tipfy/appengine/acl.py | 9 | 11435 | # -*- coding: utf-8 -*-
"""
tipfy.appengine.acl
~~~~~~~~~~~~~~~~~~~
Simple Access Control List
This module provides utilities to manage permissions for anything that
requires some level of restriction, such as datastore models or handlers.
Access permissions can be grouped into roles for convenience, so that a new
user can be assigned to a role directly instead of having all
permissions defined manually. Individual access permissions can then
override or extend the role permissions.
.. note::
Roles are optional, so this module doesn't define a roles model (to keep
things simple and fast). Role definitions are set directly in the Acl
class. The strategy to load roles is open to the implementation; for
best performance, define them statically in a module.
Usage example::
# Set a dict of roles with an 'admin' role that has full access and
# assign users to it. Each role maps to a list of rules. Each rule is a
# tuple (topic, name, flag), where flag is a bool to allow or disallow

# access. Wildcard '*' can be used to match all topics and/or names.
Acl.roles_map = {
'admin': [
('*', '*', True),
],
}
# Assign users 'user_1' and 'user_2' to the 'admin' role.
AclRules.insert_or_update(area='my_area', user='user_1',
roles=['admin'])
AclRules.insert_or_update(area='my_area', user='user_2',
roles=['admin'])
# Restrict 'user_2' from accessing a specific resource, adding a new
# rule with flag set to False. Now this user has access to everything
# except this resource.
user_acl = AclRules.get_by_area_and_user('my_area', 'user_2')
user_acl.rules.append(('UserAdmin', '*', False))
user_acl.put()
# Check that 'user_2' permissions are correct.
acl = Acl(area='my_area', user='user_2')
assert acl.has_access(topic='UserAdmin', name='save') is False
assert acl.has_access(topic='AnythingElse', name='put') is True
The Acl object should be created once after a user is loaded, so that
it becomes available for the app to do all necessary permission checks.
Based on concept from `Solar <http://solarphp.com>`_ Access and Role
classes.
:copyright: 2011 by tipfy.org.
:license: BSD, see LICENSE.txt for more details.
"""
from google.appengine.ext import db
from google.appengine.api import memcache
from werkzeug import cached_property
from tipfy.appengine import CURRENT_VERSION_ID
from tipfy.appengine.db import PickleProperty
from tipfy.local import get_request
#: Cache for loaded rules.
_rules_map = {}
class AclMixin(object):
"""A mixin that adds an acl property to a ``tipfy.RequestHandler``.
The handler *must* have the properties area and current_user set for
it to work.
"""
roles_map = None
roles_lock = None
@cached_property
def acl(self):
"""Loads and returns the access permission for the currently logged in
user. This requires the handler to have the area and
current_user attributes. Cast to a string, they must return the
object identifiers.
"""
return Acl(str(self.area.key()), str(self.current_user.key()),
self.roles_map, self.roles_lock)
def validate_rules(rules):
"""Ensures that the list of rule tuples is set correctly."""
assert isinstance(rules, list), 'Rules must be a list'
for rule in rules:
assert isinstance(rule, tuple), 'Each rule must be tuple'
assert len(rule) == 3, 'Each rule must have three elements'
assert isinstance(rule[0], basestring), 'Rule topic must be a string'
assert isinstance(rule[1], basestring), 'Rule name must be a string'
assert isinstance(rule[2], bool), 'Rule flag must be a bool'
class AclRules(db.Model):
"""Stores roles and rules for a user in a given area."""
#: Creation date.
created = db.DateTimeProperty(auto_now_add=True)
#: Modification date.
updated = db.DateTimeProperty(auto_now=True)
#: Area to which this role is related.
area = db.StringProperty(required=True)
#: User identifier.
user = db.StringProperty(required=True)
#: List of role names.
roles = db.StringListProperty()
#: Lists of rules. Each rule is a tuple (topic, name, flag).
rules = PickleProperty(validator=validate_rules)
@classmethod
def get_key_name(cls, area, user):
"""Returns this entity's key name, also used as memcache key.
:param area:
Area string identifier.
:param user:
User string identifier.
:returns:
The key name.
"""
return '%s:%s' % (str(area), str(user))
@classmethod
def get_by_area_and_user(cls, area, user):
"""Returns an AclRules entity for a given user in a given area.
:param area:
Area string identifier.
:param user:
User string identifier.
:returns:
An AclRules entity.
"""
return cls.get_by_key_name(cls.get_key_name(area, user))
@classmethod
def insert_or_update(cls, area, user, roles=None, rules=None):
"""Inserts or updates ACL rules and roles for a given user. This will
reset roles and rules if the user exists and the values are not passed.
:param area:
Area string identifier.
:param user:
User string identifier.
:param roles:
List of the roles for the user.
:param rules:
List of the rules for the user.
:returns:
An AclRules entity.
"""
if roles is None:
roles = []
if rules is None:
rules = []
user_acl = cls(key_name=cls.get_key_name(area, user), area=area,
user=user, roles=roles, rules=rules)
user_acl.put()
return user_acl
@classmethod
def get_roles_and_rules(cls, area, user, roles_map, roles_lock):
"""Returns a tuple (roles, rules) for a given user in a given area.
:param area:
Area string identifier.
:param user:
User string identifier.
:param roles_map:
Dictionary of available role names mapping to list of rules.
:param roles_lock:
Lock for the roles map: a unique identifier to track changes.
:returns:
A tuple of (roles, rules) for the given user in the given area.
"""
res = None
cache_key = cls.get_key_name(area, user)
if cache_key in _rules_map:
res = _rules_map[cache_key]
else:
res = memcache.get(cache_key, namespace=cls.__name__)
if res is not None:
lock, roles, rules = res
if res is None or lock != roles_lock or get_request().app.debug:
entity = cls.get_by_key_name(cache_key)
if entity is None:
res = (roles_lock, [], [])
else:
rules = []
# Apply role rules.
for role in entity.roles:
rules.extend(roles_map.get(role, []))
# Extend with rules, eventually overriding some role rules.
rules.extend(entity.rules)
# Reverse everything, as rules are checked from last to first.
rules.reverse()
# Set results for cache, applying current roles_lock.
res = (roles_lock, entity.roles, rules)
cls.set_cache(cache_key, res)
return (res[1], res[2])
@classmethod
def set_cache(cls, cache_key, spec):
"""Sets a memcache value.
:param cache_key:
The cache key.
:param spec:
Value to be saved.
"""
_rules_map[cache_key] = spec
memcache.set(cache_key, spec, namespace=cls.__name__)
@classmethod
def delete_cache(cls, cache_key):
"""Deletes a memcache value.
:param cache_key:
The cache key.
"""
if cache_key in _rules_map:
del _rules_map[cache_key]
memcache.delete(cache_key, namespace=cls.__name__)
def put(self):
"""Saves the entity and clears the cache."""
self.delete_cache(self.get_key_name(self.area, self.user))
super(AclRules, self).put()
def delete(self):
"""Deletes the entity and clears the cache."""
self.delete_cache(self.get_key_name(self.area, self.user))
super(AclRules, self).delete()
def is_rule_set(self, topic, name, flag):
"""Checks if a given rule is set.
:param topic:
A rule topic, as a string.
:param name:
A rule name, as a string.
:param flag:
A rule flag, a boolean.
:returns:
True if the rule already exists, False otherwise.
"""
for rule_topic, rule_name, rule_flag in self.rules:
if rule_topic == topic and rule_name == name and rule_flag == flag:
return True
return False
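The merge-and-reverse step performed in get_roles_and_rules can be isolated as a small helper; a simplified sketch (`effective_rules` is a hypothetical name, not part of the module):

```python
def effective_rules(roles, rules, roles_map):
    # Flatten the rules of each role, append the user's own rules
    # (which may override role rules), then reverse the list so the
    # most recently added rule is checked first.
    merged = []
    for role in roles:
        merged.extend(roles_map.get(role, []))
    merged.extend(rules)
    merged.reverse()
    return merged
```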
class Acl(object):
"""Loads access rules and roles for a given user in a given area and
provides a centralized interface to check permissions. Each Acl object
checks the permissions for a single user. For example::
from tipfy.appengine.acl import Acl
# Build an Acl object for user 'John' in the 'code-reviews' area.
acl = Acl('code-reviews', 'John')
# Check if 'John' is 'admin' in the 'code-reviews' area.
is_admin = acl.is_one('admin')
# Check if 'John' can approve new reviews.
can_edit = acl.has_access('EditReview', 'approve')
"""
#: Dictionary of available role names mapping to list of rules.
roles_map = {}
#: Lock for role changes. This is needed because if role definitions change
#: we must invalidate existing cache that applied the previous definitions.
roles_lock = None
def __init__(self, area, user, roles_map=None, roles_lock=None):
"""Loads access privileges and roles for a given user in a given area.
:param area:
An area identifier, as a string.
:param user:
A user identifier, as a string.
:param roles_map:
A dictionary of roles mapping to a list of rule tuples.
:param roles_lock:
Roles lock string to validate cache. If not set, uses
the application version id.
"""
if roles_map is not None:
self.roles_map = roles_map
if roles_lock is not None:
self.roles_lock = roles_lock
elif self.roles_lock is None:
# Set roles_lock default.
self.roles_lock = CURRENT_VERSION_ID
if area and user:
self._roles, self._rules = AclRules.get_roles_and_rules(area, user,
self.roles_map, self.roles_lock)
else:
self.reset()
def reset(self):
"""Resets the currently loaded access rules and user roles."""
self._rules = []
self._roles = []
def is_one(self, role):
"""Check to see if a user is in a role group.
:param role:
A role name, as a string.
:returns:
True if the user is in this role group, False otherwise.
"""
return role in self._roles
def is_any(self, roles):
"""Check to see if a user is in any of the listed role groups.
:param roles:
An iterable of role names.
:returns:
True if the user is in any of the role groups, False otherwise.
"""
for role in roles:
if role in self._roles:
return True
return False
def is_all(self, roles):
"""Check to see if a user is in all of the listed role groups.
:param roles:
An iterable of role names.
:returns:
True if the user is in all of the role groups, False otherwise.
"""
for role in roles:
if role not in self._roles:
return False
return True
def has_any_access(self):
"""Checks if the user has any access or roles.
:returns:
True if the user has any access rule or role set, False otherwise.
"""
if self._rules or self._roles:
return True
return False
def has_access(self, topic, name):
"""Checks if the user has access to a topic/name combination.
:param topic:
A rule topic, as a string.
:param name:
A rule name, as a string.
:returns:
True if the user has access to this rule, False otherwise.
"""
if topic == '*' or name == '*':
raise ValueError("has_access() can't be called passing '*'")
for rule_topic, rule_name, rule_flag in self._rules:
if (rule_topic == topic or rule_topic == '*') and \
(rule_name == name or rule_name == '*'):
# Topic and name matched, so return the flag.
return rule_flag
# No match.
return False
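Because the rule list is stored most-recent-first (see AclRules.get_roles_and_rules), the first matching rule wins. A standalone sketch of the matching loop (`check_access` is a hypothetical name):

```python
def check_access(rules, topic, name):
    # rules: list of (topic, name, flag), most specific/latest first.
    # '*' in a stored rule matches any topic or name.
    for rule_topic, rule_name, rule_flag in rules:
        if rule_topic in (topic, '*') and rule_name in (name, '*'):
            return rule_flag
    return False

rules = [('UserAdmin', '*', False), ('*', '*', True)]
print(check_access(rules, 'UserAdmin', 'save'))    # False
print(check_access(rules, 'AnythingElse', 'put'))  # True
```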
| bsd-3-clause | 352701344724e868eaa9fac037f0ca88 | 27.516209 | 76 | 0.686576 | 3.290647 | false | false | false | false |
moraes/tipfy | tipfy/appengine/blobstore.py | 9 | 12717 | # -*- coding: utf-8 -*-
"""
tipfy.appengine.blobstore
~~~~~~~~~~~~~~~~~~~~~~~~~
Handler library for Blobstore API.
Contains handler mixins to help with uploading and downloading blobs.
BlobstoreDownloadMixin: has a helper method for easily sending blobs
to the client.
BlobstoreUploadMixin: a mixin for receiving upload notification requests.
Based on the original App Engine library and the adaptation to Werkzeug
from Kay framework.
:copyright: 2007 Google Inc.
:copyright: 2009 Accense Technology, Inc. All rights reserved.
:copyright: 2011 tipfy.org.
:license: Apache 2.0 License, see LICENSE.txt for more details.
"""
import cgi
import cStringIO
import datetime
import email
import logging
import re
import sys
import time
from google.appengine.ext import blobstore
from google.appengine.api import blobstore as api_blobstore
from webob import byterange
from werkzeug import FileStorage, Response
_BASE_CREATION_HEADER_FORMAT = '%Y-%m-%d %H:%M:%S'
_CONTENT_DISPOSITION_FORMAT = 'attachment; filename="%s"'
_SEND_BLOB_PARAMETERS = frozenset(['use_range'])
_RANGE_NUMERIC_FORMAT = r'([0-9]*)-([0-9]*)'
_RANGE_FORMAT = r'([a-zA-Z]+)=%s' % _RANGE_NUMERIC_FORMAT
_RANGE_FORMAT_REGEX = re.compile('^%s$' % _RANGE_FORMAT)
_UNSUPPORTED_RANGE_FORMAT_REGEX = re.compile(
'^%s(?:,%s)+$' % (_RANGE_FORMAT, _RANGE_NUMERIC_FORMAT))
_BYTES_UNIT = 'bytes'
class CreationFormatError(api_blobstore.Error):
"""Raised when attempting to parse bad creation date format."""
class Error(Exception):
"""Base class for all errors in blobstore handlers module."""
class RangeFormatError(Error):
"""Raised when Range header incorrectly formatted."""
class UnsupportedRangeFormatError(RangeFormatError):
"""Raised when Range format is correct, but not supported."""
def _check_ranges(start, end, use_range_set, use_range, range_header):
"""Set the range header.
Args:
start: As passed in from send_blob.
end: As passed in from send_blob.
use_range_set: Use range was explicitly set during call to send_blob.
use_range: As passed in from send blob.
range_header: Range header as received in HTTP request.
Returns:
Range header appropriate for placing in blobstore.BLOB_RANGE_HEADER.
Raises:
ValueError if parameters are incorrect. This happens:
- start > end.
- start < 0 and end is also provided.
- end < 0
- If index provided AND using the HTTP header, they don't match.
This is a safeguard.
"""
if end is not None and start is None:
raise ValueError('May not specify end value without start.')
use_indexes = start is not None
if use_indexes:
if end is not None:
if start > end:
raise ValueError('start must be < end.')
range_indexes = byterange.Range.serialize_bytes(_BYTES_UNIT, [(start,
end)])
if use_range_set and use_range and use_indexes:
if range_header != range_indexes:
raise ValueError('May not provide non-equivalent range indexes '
'and range headers: (header) %s != (indexes) %s'
% (range_header, range_indexes))
if use_range and range_header is not None:
return range_header
elif use_indexes:
return range_indexes
else:
return None
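The precedence implemented above (an explicit HTTP Range header beats start/end indexes, which beat nothing) can be summarized in a simplified sketch. `choose_range_header` is hypothetical and serializes only the single-range `bytes=start-end` form, not everything webob's `byterange` supports:

```python
def choose_range_header(start, end, use_range, range_header):
    # Simplified precedence of _check_ranges: a caller-supplied HTTP
    # Range header wins; otherwise explicit indexes are serialized.
    if end is not None and start is None:
        raise ValueError('May not specify end value without start.')
    if start is not None and end is not None and start > end:
        raise ValueError('start must be < end.')
    if use_range and range_header is not None:
        return range_header
    if start is not None:
        return 'bytes=%d-%s' % (start, '' if end is None else end)
    return None

print(choose_range_header(0, 499, False, None))           # bytes=0-499
print(choose_range_header(None, None, True, 'bytes=5-'))  # bytes=5-
```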
class BlobstoreDownloadMixin(object):
"""Mixin for handlers that may send blobs to users."""
__use_range_unset = object()
def send_blob(self, blob_key_or_info, content_type=None, save_as=None,
start=None, end=None, **kwargs):
"""Sends a blob-response based on a blob_key.
Sets the correct response header for serving a blob. If BlobInfo
is provided and no content_type specified, will set request content type
to BlobInfo's content type.
:param blob_key_or_info:
BlobKey or BlobInfo record to serve.
:param content_type:
Content-type to override when known.
:param save_as:
If True, and a BlobInfo record is provided, use the BlobInfo's filename
to save-as. If string is provided, use string as filename. If
None or False, do not send as attachment.
:returns:
A :class:`tipfy.app.Response` object.
:raises:
``ValueError`` on invalid save_as parameter.
"""
# Response headers.
headers = {}
if set(kwargs) - _SEND_BLOB_PARAMETERS:
invalid_keywords = []
for keyword in kwargs:
if keyword not in _SEND_BLOB_PARAMETERS:
invalid_keywords.append(keyword)
if len(invalid_keywords) == 1:
raise TypeError('send_blob got unexpected keyword argument '
'%s.' % invalid_keywords[0])
else:
raise TypeError('send_blob got unexpected keyword arguments: '
'%s.' % sorted(invalid_keywords))
use_range = kwargs.get('use_range', self.__use_range_unset)
use_range_set = use_range is not self.__use_range_unset
if use_range:
self.get_range()
range_header = _check_ranges(start,
end,
use_range_set,
use_range,
self.request.headers.get('range', None))
if range_header is not None:
headers[blobstore.BLOB_RANGE_HEADER] = range_header
if isinstance(blob_key_or_info, blobstore.BlobInfo):
blob_key = blob_key_or_info.key()
blob_info = blob_key_or_info
else:
blob_key = blob_key_or_info
blob_info = None
headers[blobstore.BLOB_KEY_HEADER] = str(blob_key)
if content_type:
if isinstance(content_type, unicode):
content_type = content_type.encode('utf-8')
headers['Content-Type'] = content_type
else:
headers['Content-Type'] = ''
def send_attachment(filename):
if isinstance(filename, unicode):
filename = filename.encode('utf-8')
headers['Content-Disposition'] = (
_CONTENT_DISPOSITION_FORMAT % filename)
if save_as:
if isinstance(save_as, basestring):
send_attachment(save_as)
elif blob_info and save_as is True:
send_attachment(blob_info.filename)
else:
if not blob_info:
raise ValueError('Expected BlobInfo value for '
'blob_key_or_info.')
else:
raise ValueError('Unexpected value for save_as')
return Response('', headers=headers)
def get_range(self):
"""Get range from header if it exists.
Returns:
Tuple (start, end):
start: Start index. None if there is None.
end: End index. None if there is None.
None if there is no request header.
Raises:
UnsupportedRangeFormatError: If the range format in the header is
valid, but not supported.
RangeFormatError: If the range format in the header is not valid.
"""
range_header = self.request.headers.get('range', None)
if range_header is None:
return None
try:
original_stdout = sys.stdout
sys.stdout = cStringIO.StringIO()
try:
parsed_range = byterange.Range.parse_bytes(range_header)
finally:
sys.stdout = original_stdout
except TypeError, err:
raise RangeFormatError('Invalid range header: %s' % err)
if parsed_range is None:
raise RangeFormatError('Invalid range header: %s' % range_header)
units, ranges = parsed_range
if len(ranges) != 1:
raise UnsupportedRangeFormatError(
'Unable to support multiple range values in Range header.')
if units != _BYTES_UNIT:
raise UnsupportedRangeFormatError(
'Invalid unit in range header type: %s' % range_header)
return ranges[0]
class BlobstoreUploadMixin(object):
"""Mixin for blob upload handlers."""
def get_uploads(self, field_name=None):
"""Returns uploads sent to this handler.
:param field_name:
Only select uploads that were sent as a specific field.
:returns:
A list of BlobInfo records corresponding to each upload. Empty list
if there are no blob-info records for field_name.
"""
if getattr(self, '_BlobstoreUploadMixin__uploads', None) is None:
self.__uploads = {}
for key, value in self.request.files.items():
if isinstance(value, FileStorage):
for option in value.headers['Content-Type'].split(';'):
if 'blob-key' in option:
self.__uploads.setdefault(key, []).append(
parse_blob_info(value, key))
if field_name:
try:
return list(self.__uploads[field_name])
except KeyError:
return []
else:
results = []
for uploads in self.__uploads.itervalues():
results += uploads
return results
def parse_blob_info(file_storage, field_name=None):
"""Parse a BlobInfo record from file upload field_storage.
:param file_storage:
``werkzeug.FileStorage`` that represents uploaded blob.
:returns:
BlobInfo record as parsed from the field-storage instance.
None if there was no field_storage.
:raises:
BlobInfoParseError when provided field_storage does not contain enough
information to construct a BlobInfo object.
"""
if file_storage is None:
return None
field_name = field_name or file_storage.name
def get_value(dict, name):
value = dict.get(name, None)
if value is None:
raise blobstore.BlobInfoParseError('Field %s has no %s.' %
(field_name, name))
return value
filename = file_storage.filename
content_type, cdict = cgi.parse_header(file_storage.headers['Content-Type'])
blob_key = blobstore.BlobKey(get_value(cdict, 'blob-key'))
upload_content = email.message_from_file(file_storage.stream)
content_type = get_value(upload_content, 'content-type')
size = get_value(upload_content, 'content-length')
creation_string = get_value(upload_content,
blobstore.UPLOAD_INFO_CREATION_HEADER)
try:
size = int(size)
except (TypeError, ValueError):
raise blobstore.BlobInfoParseError(
'%s is not a valid value for %s size.' % (size, field_name))
try:
creation = parse_creation(creation_string, field_name)
except CreationFormatError, e:
raise blobstore.BlobInfoParseError(
'Could not parse creation for %s: %s' % (field_name, str(e)))
return blobstore.BlobInfo(blob_key, {
'content_type': content_type,
'creation': creation,
'filename': filename,
'size': size,
})
def parse_creation(creation_string, field_name):
"""Parses upload creation string from header format.
Parse creation date of the format:
YYYY-mm-dd HH:MM:SS.ffffff
Y: Year
m: Month (01-12)
d: Day (01-31)
H: Hour (00-23)
M: Minute (00-59)
S: Second (00-59)
f: Microsecond
Args:
creation_string: String creation date format.
Returns:
datetime object parsed from creation_string.
Raises:
CreationFormatError when the creation string is formatted incorrectly.
"""
split_creation_string = creation_string.split('.', 1)
if len(split_creation_string) != 2:
raise CreationFormatError(
'Could not parse creation %s in field %s.' % (creation_string,
field_name))
timestamp_string, microsecond = split_creation_string
try:
timestamp = time.strptime(timestamp_string,
_BASE_CREATION_HEADER_FORMAT)
microsecond = int(microsecond)
except ValueError:
raise CreationFormatError('Could not parse creation %s in field %s.'
% (creation_string, field_name))
return datetime.datetime(*timestamp[:6] + tuple([microsecond]))
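On Python 3, `strptime` handles the microsecond part directly via `%f`, so the manual split can be avoided entirely; a hypothetical standalone version of the parser above:

```python
import datetime

_FORMAT = '%Y-%m-%d %H:%M:%S.%f'  # base creation format plus microseconds

def parse_creation(creation_string):
    # Parse "YYYY-mm-dd HH:MM:SS.ffffff" in one strptime call.
    try:
        return datetime.datetime.strptime(creation_string, _FORMAT)
    except ValueError:
        raise ValueError('Could not parse creation %r' % creation_string)

print(parse_creation('2011-03-01 12:30:45.123456'))
# 2011-03-01 12:30:45.123456
```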
| bsd-3-clause | a788edd8ed971f34b0f6f095a410dd21 | 32.55409 | 80 | 0.595659 | 4.334356 | false | false | false | false |
moraes/tipfy | tipfy/auth/twitter.py | 9 | 7220 | # -*- coding: utf-8 -*-
"""
tipfy.auth.twitter
~~~~~~~~~~~~~~~~~~
Implementation of Twitter authentication scheme.
Ported from `tornado.auth`_.
:copyright: 2009 Facebook.
:copyright: 2011 tipfy.org.
:license: Apache License Version 2.0, see LICENSE.txt for more details.
"""
from __future__ import absolute_import
import functools
import logging
import urllib
from google.appengine.api import urlfetch
from tipfy import REQUIRED_VALUE
from tipfy.utils import json_decode, json_encode
from tipfy.auth.oauth import OAuthMixin
#: Default configuration values for this module. Keys are:
#:
#: consumer_key
#: Key provided when you register an application with Twitter.
#:
#: consumer_secret
#: Secret provided when you register an application with Twitter.
default_config = {
'consumer_key': REQUIRED_VALUE,
'consumer_secret': REQUIRED_VALUE,
}
class TwitterMixin(OAuthMixin):
"""Twitter OAuth authentication.
To authenticate with Twitter, register your application with
Twitter at http://twitter.com/apps. Then copy your Consumer Key and
Consumer Secret to the application settings 'twitter_consumer_key' and
'twitter_consumer_secret'. Use this Mixin on the handler for the URL
you registered as your application's Callback URL.
When your application is set up, you can use this Mixin like this
to authenticate the user with Twitter and get access to their stream:
class TwitterHandler(tornado.web.RequestHandler,
tornado.auth.TwitterMixin):
@tornado.web.asynchronous
def get(self):
if self.get_argument("oauth_token", None):
self.get_authenticated_user(self.async_callback(self._on_auth))
return
self.authorize_redirect()
def _on_auth(self, user):
if not user:
raise tornado.web.HTTPError(500, "Twitter auth failed")
# Save the user using, e.g., set_secure_cookie()
The user object returned by get_authenticated_user() includes the
attributes 'username', 'name', and all of the custom Twitter user
attributes described at
http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-users%C2%A0show
in addition to 'access_token'. You should save the access token with
the user; it is required to make requests on behalf of the user later
with twitter_request().
"""
_OAUTH_REQUEST_TOKEN_URL = 'http://api.twitter.com/oauth/request_token'
_OAUTH_ACCESS_TOKEN_URL = 'http://api.twitter.com/oauth/access_token'
_OAUTH_AUTHORIZE_URL = 'http://api.twitter.com/oauth/authorize'
_OAUTH_AUTHENTICATE_URL = 'http://api.twitter.com/oauth/authenticate'
def authenticate_redirect(self):
"""Just like authorize_redirect(), but auto-redirects if authorized.
This is generally the right interface to use if you are using
Twitter for single-sign on.
"""
url = self._oauth_request_token_url()
try:
response = urlfetch.fetch(url, deadline=10)
except urlfetch.DownloadError, e:
logging.exception(e)
response = None
return self._on_request_token(self._OAUTH_AUTHENTICATE_URL, None,
response)
def twitter_request(self, path, callback, access_token=None,
post_args=None, **args):
"""Fetches the given API path, e.g., "/statuses/user_timeline/btaylor"
The path should not include the format (we automatically append
".json" and parse the JSON output).
If the request is a POST, post_args should be provided. Query
string arguments should be given as keyword arguments.
All the Twitter methods are documented at
http://apiwiki.twitter.com/Twitter-API-Documentation.
Many methods require an OAuth access token which you can obtain
through authorize_redirect() and get_authenticated_user(). The
user returned through that process includes an 'access_token'
attribute that can be used to make authenticated requests via
this method. Example usage:
class MainHandler(tornado.web.RequestHandler,
tornado.auth.TwitterMixin):
@tornado.web.authenticated
@tornado.web.asynchronous
def get(self):
self.twitter_request(
"/statuses/update",
post_args={"status": "Testing Tornado Web Server"},
access_token=user["access_token"],
callback=self.async_callback(self._on_post))
def _on_post(self, new_entry):
if not new_entry:
# Call failed; perhaps missing permission?
self.authorize_redirect()
return
self.finish("Posted a message!")
"""
# Add the OAuth resource request signature if we have credentials
url = 'http://api.twitter.com/1' + path + '.json'
if access_token:
all_args = {}
all_args.update(args)
all_args.update(post_args or {})
consumer_token = self._oauth_consumer_token()
if post_args is not None:
method = 'POST'
else:
method = 'GET'
oauth = self._oauth_request_parameters(url, access_token,
all_args, method=method)
args.update(oauth)
if args:
url += '?' + urllib.urlencode(args)
try:
if post_args is not None:
response = urlfetch.fetch(url, method='POST',
payload=urllib.urlencode(post_args), deadline=10)
else:
response = urlfetch.fetch(url, deadline=10)
except urlfetch.DownloadError, e:
logging.exception(e)
response = None
return self._on_twitter_request(callback, response)
def _on_twitter_request(self, callback, response):
if not response:
logging.warning('Could not get Twitter request token.')
return callback(None)
elif response.status_code < 200 or response.status_code >= 300:
logging.warning('Invalid Twitter response (%d): %s',
response.status_code, response.content)
return callback(None)
return callback(json_decode(response.content))
def _twitter_consumer_key(self):
return self.app.config[__name__]['consumer_key']
def _twitter_consumer_secret(self):
return self.app.config[__name__]['consumer_secret']
def _oauth_consumer_token(self):
return dict(
key=self._twitter_consumer_key(),
secret=self._twitter_consumer_secret())
def _oauth_get_user(self, access_token, callback):
callback = functools.partial(self._parse_user_response, callback)
return self.twitter_request(
'/users/show/' + access_token['screen_name'],
access_token=access_token, callback=callback)
def _parse_user_response(self, callback, user):
if user:
user['username'] = user['screen_name']
return callback(user)
| bsd-3-clause | 3d35747b00abb399f3a37a42c2667d8a | 36.025641 | 79 | 0.624515 | 4.399756 | false | false | false | false |
pika/pika | tests/acceptance/async_adapter_tests.py | 1 | 44865 |
# too-many-lines
# pylint: disable=C0302
# Suppress pylint messages concerning missing class and method docstrings
# pylint: disable=C0111
# Suppress pylint warning about attribute defined outside __init__
# pylint: disable=W0201
# Suppress pylint warning about access to protected member
# pylint: disable=W0212
# Suppress pylint warning about unused argument
# pylint: disable=W0613
# invalid-name
# pylint: disable=C0103
import functools
import socket
import threading
import uuid
import pika
from pika.adapters.utils import connection_workflow
from pika import spec
from pika.compat import as_bytes, time_now
import pika.connection
import pika.exceptions
from pika.exchange_type import ExchangeType
import pika.frame
from tests.base import async_test_base
from tests.base.async_test_base import (AsyncTestCase, BoundQueueTestCase, AsyncAdapters)
class TestA_Connect(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Connect, open channel and disconnect"
def begin(self, channel):
self.stop()
class TestConstructAndImmediatelyCloseConnection(AsyncTestCase,
AsyncAdapters):
DESCRIPTION = "Construct and immediately close connection."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
connection_class = self.connection.__class__
params = self.new_connection_params()
@async_test_base.make_stop_on_error_with_self(self)
def on_opened(connection):
self.fail('Connection should have aborted, but got '
'on_opened({!r})'.format(connection))
@async_test_base.make_stop_on_error_with_self(self)
def on_open_error(connection, error):
self.assertIsInstance(error,
pika.exceptions.ConnectionOpenAborted)
self.stop()
conn = connection_class(params,
on_open_callback=on_opened,
on_open_error_callback=on_open_error,
custom_ioloop=self.connection.ioloop)
conn.close()
class TestCloseConnectionDuringAMQPHandshake(AsyncTestCase, AsyncAdapters):
DESCRIPTION = "Close connection during AMQP handshake."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
base_class = self.connection.__class__ # type: pika.adapters.BaseConnection
params = self.new_connection_params()
class MyConnectionClass(base_class):
# Cause an exception if _on_stream_connected doesn't exist
base_class._on_stream_connected # pylint: disable=W0104
@async_test_base.make_stop_on_error_with_self(self)
def _on_stream_connected(self, *args, **kwargs):
# Now that AMQP handshake has begun, schedule imminent closing
# of the connection
self._nbio.add_callback_threadsafe(self.close)
return super(MyConnectionClass, self)._on_stream_connected(
*args,
**kwargs)
@async_test_base.make_stop_on_error_with_self(self)
def on_opened(connection):
self.fail('Connection should have aborted, but got '
'on_opened({!r})'.format(connection))
@async_test_base.make_stop_on_error_with_self(self)
def on_open_error(connection, error):
self.assertIsInstance(error, pika.exceptions.ConnectionOpenAborted)
self.stop()
conn = MyConnectionClass(params,
on_open_callback=on_opened,
on_open_error_callback=on_open_error,
custom_ioloop=self.connection.ioloop)
conn.close()
class TestSocketConnectTimeoutWithTinySocketTimeout(AsyncTestCase,
AsyncAdapters):
DESCRIPTION = "Force socket.connect() timeout with very tiny socket_timeout."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
connection_class = self.connection.__class__
params = self.new_connection_params()
# socket_timeout expects something > 0
params.socket_timeout = 0.0000000000000000001
@async_test_base.make_stop_on_error_with_self(self)
def on_opened(connection):
self.fail('Socket connection should have timed out, but got '
'on_opened({!r})'.format(connection))
@async_test_base.make_stop_on_error_with_self(self)
def on_open_error(connection, error):
self.assertIsInstance(error,
pika.exceptions.AMQPConnectionError)
self.stop()
connection_class(
params,
on_open_callback=on_opened,
on_open_error_callback=on_open_error,
custom_ioloop=self.connection.ioloop)
class TestStackConnectionTimeoutWithTinyStackTimeout(AsyncTestCase, AsyncAdapters):
DESCRIPTION = "Force stack bring-up timeout with very tiny stack_timeout."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
connection_class = self.connection.__class__
params = self.new_connection_params()
# stack_timeout expects something > 0
params.stack_timeout = 0.0000000000000000001
@async_test_base.make_stop_on_error_with_self(self)
def on_opened(connection):
self.fail('Stack connection should have timed out, but got '
'on_opened({!r})'.format(connection))
def on_open_error(connection, exception):
error = None
if not isinstance(exception, pika.exceptions.AMQPConnectionError):
error = AssertionError(
'Expected AMQPConnectionError, but got {!r}'.format(
exception))
self.stop(error)
connection_class(
params,
on_open_callback=on_opened,
on_open_error_callback=on_open_error,
custom_ioloop=self.connection.ioloop)
class TestCreateConnectionViaDefaultConnectionWorkflow(AsyncTestCase,
AsyncAdapters):
DESCRIPTION = "Connect via adapter's create_connection() method with single config."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
configs = [self.parameters]
connection_class = self.connection.__class__ # type: pika.adapters.BaseConnection
@async_test_base.make_stop_on_error_with_self(self)
def on_done(conn):
self.assertIsInstance(conn, connection_class)
conn.add_on_close_callback(on_my_connection_closed)
conn.close()
@async_test_base.make_stop_on_error_with_self(self)
def on_my_connection_closed(_conn, error):
self.assertIsInstance(error,
pika.exceptions.ConnectionClosedByClient)
self.stop()
workflow = connection_class.create_connection(configs,
on_done,
self.connection.ioloop)
self.assertIsInstance(
workflow,
connection_workflow.AbstractAMQPConnectionWorkflow)
class TestCreateConnectionViaCustomConnectionWorkflow(AsyncTestCase,
AsyncAdapters):
DESCRIPTION = "Connect via adapter's create_connection() method using custom workflow."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
configs = [self.parameters]
connection_class = self.connection.__class__ # type: pika.adapters.BaseConnection
@async_test_base.make_stop_on_error_with_self(self)
def on_done(conn):
self.assertIsInstance(conn, connection_class)
self.assertIs(conn.i_was_here, MyWorkflow)
conn.add_on_close_callback(on_my_connection_closed)
conn.close()
@async_test_base.make_stop_on_error_with_self(self)
def on_my_connection_closed(_conn, error):
self.assertIsInstance(error,
pika.exceptions.ConnectionClosedByClient)
self.stop()
class MyWorkflow(connection_workflow.AMQPConnectionWorkflow):
if not hasattr(connection_workflow.AMQPConnectionWorkflow,
'_report_completion_and_cleanup'):
raise AssertionError('_report_completion_and_cleanup not in '
'AMQPConnectionWorkflow.')
def _report_completion_and_cleanup(self, result):
"""Override implementation to tag the presumed connection"""
result.i_was_here = MyWorkflow
super(MyWorkflow, self)._report_completion_and_cleanup(result)
original_workflow = MyWorkflow()
workflow = connection_class.create_connection(
configs,
on_done,
self.connection.ioloop,
workflow=original_workflow)
self.assertIs(workflow, original_workflow)
class TestCreateConnectionMultipleConfigsDefaultConnectionWorkflow(
AsyncTestCase,
AsyncAdapters):
DESCRIPTION = "Connect via adapter's create_connection() method with multiple configs."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
good_params = self.parameters
connection_class = self.connection.__class__ # type: pika.adapters.BaseConnection
sock = socket.socket()
self.addCleanup(sock.close)
sock.bind(('127.0.0.1', 0))
bad_host, bad_port = sock.getsockname()
sock.close() # so that attempt to connect will fail immediately
bad_params = pika.ConnectionParameters(host=bad_host, port=bad_port)
@async_test_base.make_stop_on_error_with_self(self)
def on_done(conn):
self.assertIsInstance(conn, connection_class)
self.assertEqual(conn.params.host, good_params.host)
self.assertEqual(conn.params.port, good_params.port)
self.assertNotEqual((conn.params.host, conn.params.port),
(bad_host, bad_port))
conn.add_on_close_callback(on_my_connection_closed)
conn.close()
@async_test_base.make_stop_on_error_with_self(self)
def on_my_connection_closed(_conn, error):
self.assertIsInstance(error,
pika.exceptions.ConnectionClosedByClient)
self.stop()
workflow = connection_class.create_connection([bad_params, good_params],
on_done,
self.connection.ioloop)
self.assertIsInstance(
workflow,
connection_workflow.AbstractAMQPConnectionWorkflow)
class TestCreateConnectionRetriesWithDefaultConnectionWorkflow(
AsyncTestCase,
AsyncAdapters):
DESCRIPTION = "Connect via adapter's create_connection() method with multiple retries."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
base_class = self.connection.__class__ # type: pika.adapters.BaseConnection
first_config = self.parameters
second_config = self.new_connection_params()
# Configure retries (default connection workflow keys off the last
# config in the sequence)
second_config.retry_delay = 0.001
second_config.connection_attempts = 2
# MyConnectionClass will use connection_attempts to distinguish between
# first and second configs
self.assertNotEqual(first_config.connection_attempts,
second_config.connection_attempts)
logger = self.logger
class MyConnectionClass(base_class):
got_second_config = False
def __init__(self, parameters, *args, **kwargs):
logger.info('Entered MyConnectionClass constructor: %s',
parameters)
if (parameters.connection_attempts ==
second_config.connection_attempts):
MyConnectionClass.got_second_config = True
logger.info('Got second config.')
raise Exception('Reject second config.')
if not MyConnectionClass.got_second_config:
logger.info('Still on first attempt with first config.')
raise Exception('Still on first attempt with first config.')
logger.info('Start of retry cycle detected.')
super(MyConnectionClass, self).__init__(parameters,
*args,
**kwargs)
@async_test_base.make_stop_on_error_with_self(self)
def on_done(conn):
self.assertIsInstance(conn, MyConnectionClass)
self.assertEqual(conn.params.connection_attempts,
first_config.connection_attempts)
conn.add_on_close_callback(on_my_connection_closed)
conn.close()
@async_test_base.make_stop_on_error_with_self(self)
def on_my_connection_closed(_conn, error):
self.assertIsInstance(error,
pika.exceptions.ConnectionClosedByClient)
self.stop()
MyConnectionClass.create_connection([first_config, second_config],
on_done,
self.connection.ioloop)
class TestCreateConnectionConnectionWorkflowSocketConnectionFailure(
AsyncTestCase,
AsyncAdapters):
DESCRIPTION = "Connect via adapter's create_connection() fails to connect socket."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
connection_class = self.connection.__class__ # type: pika.adapters.BaseConnection
sock = socket.socket()
self.addCleanup(sock.close)
sock.bind(('127.0.0.1', 0))
bad_host, bad_port = sock.getsockname()
sock.close() # so that attempt to connect will fail immediately
bad_params = pika.ConnectionParameters(host=bad_host, port=bad_port)
@async_test_base.make_stop_on_error_with_self(self)
def on_done(exc):
self.assertIsInstance(
exc,
connection_workflow.AMQPConnectionWorkflowFailed)
self.assertIsInstance(
exc.exceptions[-1],
connection_workflow.AMQPConnectorSocketConnectError)
self.stop()
connection_class.create_connection([bad_params,],
on_done,
self.connection.ioloop)
class TestCreateConnectionAMQPHandshakeTimesOutDefaultWorkflow(AsyncTestCase,
AsyncAdapters):
DESCRIPTION = "AMQP handshake timeout handling in adapter's create_connection()."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
base_class = self.connection.__class__ # type: pika.adapters.BaseConnection
params = self.parameters
workflow = None # type: connection_workflow.AMQPConnectionWorkflow
class MyConnectionClass(base_class):
# Cause an exception if _on_stream_connected doesn't exist
base_class._on_stream_connected # pylint: disable=W0104
@async_test_base.make_stop_on_error_with_self(self)
def _on_stream_connected(self, *args, **kwargs):
# Now that AMQP handshake has begun, simulate imminent stack
# timeout in AMQPConnector
connector = workflow._connector # type: connection_workflow.AMQPConnector
connector._stack_timeout_ref.cancel()
connector._stack_timeout_ref = connector._nbio.call_later(
0,
connector._on_overall_timeout)
return super(MyConnectionClass, self)._on_stream_connected(
*args,
**kwargs)
@async_test_base.make_stop_on_error_with_self(self)
def on_done(error):
self.assertIsInstance(
error,
connection_workflow.AMQPConnectionWorkflowFailed)
self.assertIsInstance(
error.exceptions[-1],
connection_workflow.AMQPConnectorAMQPHandshakeError)
self.assertIsInstance(
error.exceptions[-1].exception,
connection_workflow.AMQPConnectorStackTimeout)
self.stop()
workflow = MyConnectionClass.create_connection([params],
on_done,
self.connection.ioloop)
class TestCreateConnectionAndImmediatelyAbortDefaultConnectionWorkflow(
AsyncTestCase,
AsyncAdapters):
DESCRIPTION = "Immediately abort workflow initiated via adapter's create_connection()."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
configs = [self.parameters]
connection_class = self.connection.__class__ # type: pika.adapters.BaseConnection
@async_test_base.make_stop_on_error_with_self(self)
def on_done(exc):
self.assertIsInstance(
exc,
connection_workflow.AMQPConnectionWorkflowAborted)
self.stop()
workflow = connection_class.create_connection(configs,
on_done,
self.connection.ioloop)
workflow.abort()
class TestCreateConnectionAndAsynchronouslyAbortDefaultConnectionWorkflow(
AsyncTestCase,
AsyncAdapters):
    DESCRIPTION = "Asynchronously abort workflow initiated via adapter's create_connection()."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
configs = [self.parameters]
connection_class = self.connection.__class__ # type: pika.adapters.BaseConnection
@async_test_base.make_stop_on_error_with_self(self)
def on_done(exc):
self.assertIsInstance(
exc,
connection_workflow.AMQPConnectionWorkflowAborted)
self.stop()
workflow = connection_class.create_connection(configs,
on_done,
self.connection.ioloop)
self.connection._nbio.add_callback_threadsafe(workflow.abort)
class TestUpdateSecret(AsyncTestCase, AsyncAdapters):
DESCRIPTION = "Update secret and receive confirmation"
def begin(self, channel):
self.connection.update_secret(
"new_secret", "reason", self.on_secret_update)
def on_secret_update(self, frame):
self.assertIsInstance(frame.method, spec.Connection.UpdateSecretOk)
self.stop()
class TestConfirmSelect(AsyncTestCase, AsyncAdapters):
DESCRIPTION = "Receive confirmation of Confirm.Select"
def begin(self, channel):
channel.confirm_delivery(ack_nack_callback=self.ack_nack_callback,
callback=self.on_complete)
@staticmethod
def ack_nack_callback(frame):
pass
def on_complete(self, frame):
self.assertIsInstance(frame.method, spec.Confirm.SelectOk)
self.stop()
class TestBlockingNonBlockingBlockingRPCWontStall(AsyncTestCase, AsyncAdapters):
DESCRIPTION = ("Verify that a sequence of blocking, non-blocking, blocking "
"RPC requests won't stall")
def begin(self, channel):
# Queue declaration params table: queue name, nowait value
self._expected_queue_params = (
("blocking-non-blocking-stall-check-" + uuid.uuid1().hex, False),
("blocking-non-blocking-stall-check-" + uuid.uuid1().hex, True),
("blocking-non-blocking-stall-check-" + uuid.uuid1().hex, False)
)
self._declared_queue_names = []
for queue, nowait in self._expected_queue_params:
cb = self._queue_declare_ok_cb if not nowait else None
channel.queue_declare(queue=queue,
auto_delete=True,
arguments={'x-expires': self.TIMEOUT * 1000},
callback=cb)
def _queue_declare_ok_cb(self, declare_ok_frame):
self._declared_queue_names.append(declare_ok_frame.method.queue)
if len(self._declared_queue_names) == 2:
# Initiate check for creation of queue declared with nowait=True
self.channel.queue_declare(queue=self._expected_queue_params[1][0],
passive=True,
callback=self._queue_declare_ok_cb)
elif len(self._declared_queue_names) == 3:
self.assertSequenceEqual(
sorted(self._declared_queue_names),
sorted(item[0] for item in self._expected_queue_params))
self.stop()
class TestConsumeCancel(AsyncTestCase, AsyncAdapters):
DESCRIPTION = "Consume and cancel"
def begin(self, channel):
self.queue_name = self.__class__.__name__ + ':' + uuid.uuid1().hex
channel.queue_declare(self.queue_name, callback=self.on_queue_declared)
def on_queue_declared(self, frame):
for i in range(0, 100):
msg_body = '{}:{}:{}'.format(self.__class__.__name__, i,
time_now())
self.channel.basic_publish('', self.queue_name, msg_body)
self.ctag = self.channel.basic_consume(self.queue_name,
self.on_message,
auto_ack=True)
def on_message(self, _channel, _frame, _header, body):
self.channel.basic_cancel(self.ctag, callback=self.on_cancel)
def on_cancel(self, _frame):
self.channel.queue_delete(self.queue_name, callback=self.on_deleted)
def on_deleted(self, _frame):
self.stop()
class TestExchangeDeclareAndDelete(AsyncTestCase, AsyncAdapters):
    DESCRIPTION = "Create and delete an exchange"
X_TYPE = ExchangeType.direct
def begin(self, channel):
self.name = self.__class__.__name__ + ':' + uuid.uuid1().hex
channel.exchange_declare(self.name,
exchange_type=self.X_TYPE,
passive=False,
durable=False,
auto_delete=True,
callback=self.on_exchange_declared)
def on_exchange_declared(self, frame):
self.assertIsInstance(frame.method, spec.Exchange.DeclareOk)
self.channel.exchange_delete(self.name, callback=self.on_exchange_delete)
def on_exchange_delete(self, frame):
self.assertIsInstance(frame.method, spec.Exchange.DeleteOk)
self.stop()
class TestExchangeRedeclareWithDifferentValues(AsyncTestCase, AsyncAdapters):
    DESCRIPTION = "Should close chan: re-declared exchange w/ diff params"
X_TYPE1 = ExchangeType.direct
X_TYPE2 = ExchangeType.topic
def begin(self, channel):
self.name = self.__class__.__name__ + ':' + uuid.uuid1().hex
self.channel.add_on_close_callback(self.on_channel_closed)
channel.exchange_declare(self.name,
exchange_type=self.X_TYPE1,
passive=False,
durable=False,
auto_delete=True,
callback=self.on_exchange_declared)
def on_cleanup_channel(self, channel):
channel.exchange_delete(self.name)
self.stop()
def on_channel_closed(self, _channel, _reason):
self.connection.channel(on_open_callback=self.on_cleanup_channel)
def on_exchange_declared(self, frame):
self.channel.exchange_declare(self.name,
exchange_type=self.X_TYPE2,
passive=False,
durable=False,
auto_delete=True,
callback=self.on_bad_result)
def on_bad_result(self, frame):
self.channel.exchange_delete(self.name)
raise AssertionError("Should not have received an Exchange.DeclareOk")
class TestNoDeadlockWhenClosingChannelWithPendingBlockedRequestsAndConcurrentChannelCloseFromBroker(
AsyncTestCase, AsyncAdapters):
DESCRIPTION = ("No deadlock when closing a channel with pending blocked "
"requests and concurrent Channel.Close from broker.")
# To observe the behavior that this is testing, comment out this line
# in pika/channel.py - _on_close:
#
# self._drain_blocked_methods_on_remote_close()
#
# With the above line commented out, this test will hang
def begin(self, channel):
base_exch_name = self.__class__.__name__ + ':' + uuid.uuid1().hex
self.channel.add_on_close_callback(self.on_channel_closed)
for i in range(0, 99):
# Passively declare a non-existent exchange to force Channel.Close
# from broker
exch_name = base_exch_name + ':' + str(i)
cb = functools.partial(self.on_bad_result, exch_name)
channel.exchange_declare(exch_name,
exchange_type=ExchangeType.direct,
passive=True,
callback=cb)
channel.close()
def on_channel_closed(self, _channel, _reason):
# The close is expected because the requested exchange doesn't exist
self.stop()
def on_bad_result(self, exch_name, frame):
self.fail("Should not have received an Exchange.DeclareOk")
class TestClosingAChannelPermitsBlockedRequestToComplete(AsyncTestCase,
AsyncAdapters):
DESCRIPTION = "Closing a channel permits blocked requests to complete."
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
self._queue_deleted = False
channel.add_on_close_callback(self.on_channel_closed)
q_name = self.__class__.__name__ + ':' + uuid.uuid1().hex
# NOTE we pass callback to make it a blocking request
channel.queue_declare(q_name,
exclusive=True,
callback=lambda _frame: None)
self.assertIsNotNone(channel._blocking)
# The Queue.Delete should block on completion of Queue.Declare
channel.queue_delete(q_name, callback=self.on_queue_deleted)
self.assertTrue(channel._blocked)
        # This Channel.Close should allow the blocked Queue.Delete to complete
        # before closing the channel
channel.close()
def on_queue_deleted(self, _frame):
# Getting this callback shows that the blocked request was processed
self._queue_deleted = True
@async_test_base.stop_on_error_in_async_test_case_method
def on_channel_closed(self, _channel, _reason):
self.assertTrue(self._queue_deleted)
self.stop()
class TestQueueUnnamedDeclareAndDelete(AsyncTestCase, AsyncAdapters):
DESCRIPTION = "Create and delete an unnamed queue"
@async_test_base.stop_on_error_in_async_test_case_method
def begin(self, channel):
channel.queue_declare(queue='',
passive=False,
durable=False,
exclusive=True,
auto_delete=False,
arguments={'x-expires': self.TIMEOUT * 1000},
callback=self.on_queue_declared)
@async_test_base.stop_on_error_in_async_test_case_method
def on_queue_declared(self, frame):
self.assertIsInstance(frame.method, spec.Queue.DeclareOk)
self.channel.queue_delete(frame.method.queue, callback=self.on_queue_delete)
@async_test_base.stop_on_error_in_async_test_case_method
def on_queue_delete(self, frame):
self.assertIsInstance(frame.method, spec.Queue.DeleteOk)
self.stop()
class TestQueueNamedDeclareAndDelete(AsyncTestCase, AsyncAdapters):
DESCRIPTION = "Create and delete a named queue"
def begin(self, channel):
self._q_name = self.__class__.__name__ + ':' + uuid.uuid1().hex
channel.queue_declare(self._q_name,
passive=False,
durable=False,
exclusive=True,
auto_delete=True,
arguments={'x-expires': self.TIMEOUT * 1000},
callback=self.on_queue_declared)
def on_queue_declared(self, frame):
self.assertIsInstance(frame.method, spec.Queue.DeclareOk)
# Frame's method's queue is encoded (impl detail)
self.assertEqual(frame.method.queue, self._q_name)
self.channel.queue_delete(frame.method.queue, callback=self.on_queue_delete)
def on_queue_delete(self, frame):
self.assertIsInstance(frame.method, spec.Queue.DeleteOk)
self.stop()
class TestQueueRedeclareWithDifferentValues(AsyncTestCase, AsyncAdapters):
DESCRIPTION = "Should close chan: re-declared queue w/ diff params"
def begin(self, channel):
self._q_name = self.__class__.__name__ + ':' + uuid.uuid1().hex
self.channel.add_on_close_callback(self.on_channel_closed)
channel.queue_declare(self._q_name,
passive=False,
durable=False,
exclusive=True,
auto_delete=True,
arguments={'x-expires': self.TIMEOUT * 1000},
callback=self.on_queue_declared)
def on_channel_closed(self, _channel, _reason):
self.stop()
def on_queue_declared(self, frame):
self.channel.queue_declare(self._q_name,
passive=False,
durable=True,
exclusive=False,
auto_delete=True,
arguments={'x-expires': self.TIMEOUT * 1000},
callback=self.on_bad_result)
def on_bad_result(self, frame):
self.channel.queue_delete(self._q_name)
raise AssertionError("Should not have received a Queue.DeclareOk")
class TestTX1_Select(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Receive confirmation of Tx.Select"
def begin(self, channel):
channel.tx_select(callback=self.on_complete)
def on_complete(self, frame):
self.assertIsInstance(frame.method, spec.Tx.SelectOk)
self.stop()
class TestTX2_Commit(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Start a transaction, and commit it"
def begin(self, channel):
channel.tx_select(callback=self.on_selectok)
def on_selectok(self, frame):
self.assertIsInstance(frame.method, spec.Tx.SelectOk)
self.channel.tx_commit(callback=self.on_commitok)
def on_commitok(self, frame):
self.assertIsInstance(frame.method, spec.Tx.CommitOk)
self.stop()
class TestTX2_CommitFailure(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Close the channel: commit without a TX"
def begin(self, channel):
self.channel.add_on_close_callback(self.on_channel_closed)
self.channel.tx_commit(callback=self.on_commitok)
def on_channel_closed(self, _channel, _reason):
self.stop()
def on_selectok(self, frame):
self.assertIsInstance(frame.method, spec.Tx.SelectOk)
@staticmethod
def on_commitok(frame):
raise AssertionError("Should not have received a Tx.CommitOk")
class TestTX3_Rollback(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Start a transaction, then rollback"
def begin(self, channel):
channel.tx_select(callback=self.on_selectok)
def on_selectok(self, frame):
self.assertIsInstance(frame.method, spec.Tx.SelectOk)
self.channel.tx_rollback(callback=self.on_rollbackok)
def on_rollbackok(self, frame):
self.assertIsInstance(frame.method, spec.Tx.RollbackOk)
self.stop()
class TestTX3_RollbackFailure(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Close the channel: rollback without a TX"
def begin(self, channel):
self.channel.add_on_close_callback(self.on_channel_closed)
self.channel.tx_rollback(callback=self.on_commitok)
def on_channel_closed(self, _channel, _reason):
self.stop()
@staticmethod
def on_commitok(frame):
raise AssertionError("Should not have received a Tx.RollbackOk")
class TestZ_PublishAndConsume(BoundQueueTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Publish a message and consume it"
def on_ready(self, frame):
self.ctag = self.channel.basic_consume(self.queue, self.on_message)
self.msg_body = "%s: %i" % (self.__class__.__name__, time_now())
self.channel.basic_publish(self.exchange, self.routing_key,
self.msg_body)
def on_cancelled(self, frame):
self.assertIsInstance(frame.method, spec.Basic.CancelOk)
self.stop()
def on_message(self, channel, method, header, body):
self.assertIsInstance(method, spec.Basic.Deliver)
self.assertEqual(body, as_bytes(self.msg_body))
self.channel.basic_ack(method.delivery_tag)
self.channel.basic_cancel(self.ctag, callback=self.on_cancelled)
class TestZ_PublishAndConsumeBig(BoundQueueTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Publish a big message and consume it"
@staticmethod
def _get_msg_body():
return '\n'.join(["%s" % i for i in range(0, 2097152)])
def on_ready(self, frame):
self.ctag = self.channel.basic_consume(self.queue, self.on_message)
self.msg_body = self._get_msg_body()
self.channel.basic_publish(self.exchange, self.routing_key,
self.msg_body)
def on_cancelled(self, frame):
self.assertIsInstance(frame.method, spec.Basic.CancelOk)
self.stop()
def on_message(self, channel, method, header, body):
self.assertIsInstance(method, spec.Basic.Deliver)
self.assertEqual(body, as_bytes(self.msg_body))
self.channel.basic_ack(method.delivery_tag)
self.channel.basic_cancel(self.ctag, callback=self.on_cancelled)
class TestZ_PublishAndGet(BoundQueueTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Publish a message and get it"
def on_ready(self, frame):
self.msg_body = "%s: %i" % (self.__class__.__name__, time_now())
self.channel.basic_publish(self.exchange, self.routing_key,
self.msg_body)
self.channel.basic_get(self.queue, self.on_get)
def on_get(self, channel, method, header, body):
self.assertIsInstance(method, spec.Basic.GetOk)
self.assertEqual(body, as_bytes(self.msg_body))
self.channel.basic_ack(method.delivery_tag)
self.stop()
class TestZ_AccessDenied(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Unknown vhost results in ProbableAccessDeniedError."
def start(self, *args, **kwargs): # pylint: disable=W0221
self.parameters.virtual_host = str(uuid.uuid4())
self.error_captured = None
super(TestZ_AccessDenied, self).start(*args, **kwargs)
self.assertIsInstance(self.error_captured,
pika.exceptions.ProbableAccessDeniedError)
def on_open_error(self, connection, error):
self.error_captured = error
self.stop()
def on_open(self, connection):
super(TestZ_AccessDenied, self).on_open(connection)
self.stop()
class TestBlockedConnectionTimesOut(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Verify that blocked connection terminates on timeout"
def start(self, *args, **kwargs): # pylint: disable=W0221
self.parameters.blocked_connection_timeout = 0.001
self.on_closed_error = None
super(TestBlockedConnectionTimesOut, self).start(*args, **kwargs)
self.assertIsInstance(self.on_closed_error,
pika.exceptions.ConnectionBlockedTimeout)
def begin(self, channel):
# Simulate Connection.Blocked
channel.connection._on_connection_blocked(
channel.connection,
pika.frame.Method(0,
spec.Connection.Blocked(
'Testing blocked connection timeout')))
def on_closed(self, connection, error):
"""called when the connection has finished closing"""
self.on_closed_error = error
self.stop() # acknowledge that closed connection is expected
super(TestBlockedConnectionTimesOut, self).on_closed(connection, error)
class TestBlockedConnectionUnblocks(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Verify that blocked-unblocked connection closes normally"
def start(self, *args, **kwargs): # pylint: disable=W0221
self.parameters.blocked_connection_timeout = 0.001
self.on_closed_error = None
super(TestBlockedConnectionUnblocks, self).start(*args, **kwargs)
self.assertIsInstance(self.on_closed_error,
pika.exceptions.ConnectionClosedByClient)
self.assertEqual(
(self.on_closed_error.reply_code, self.on_closed_error.reply_text),
(200, 'Normal shutdown'))
def begin(self, channel):
# Simulate Connection.Blocked
channel.connection._on_connection_blocked(
channel.connection,
pika.frame.Method(0,
spec.Connection.Blocked(
'Testing blocked connection unblocks')))
# Simulate Connection.Unblocked
channel.connection._on_connection_unblocked(
channel.connection,
pika.frame.Method(0, spec.Connection.Unblocked()))
# Schedule shutdown after blocked connection timeout would expire
channel.connection._adapter_call_later(0.005, self.on_cleanup_timer)
def on_cleanup_timer(self):
self.stop()
def on_closed(self, connection, error):
"""called when the connection has finished closing"""
self.on_closed_error = error
super(TestBlockedConnectionUnblocks, self).on_closed(connection, error)
class TestAddCallbackThreadsafeRequestBeforeIOLoopStarts(AsyncTestCase, AsyncAdapters):
DESCRIPTION = (
"Test _adapter_add_callback_threadsafe request before ioloop starts.")
def _run_ioloop(self, *args, **kwargs): # pylint: disable=W0221
"""We intercept this method from AsyncTestCase in order to call
_adapter_add_callback_threadsafe before AsyncTestCase starts the ioloop.
"""
self.my_start_time = time_now()
# Request a callback from our current (ioloop's) thread
self.connection._adapter_add_callback_threadsafe(
self.on_requested_callback)
return super(
TestAddCallbackThreadsafeRequestBeforeIOLoopStarts, self)._run_ioloop(
*args, **kwargs)
def start(self, *args, **kwargs): # pylint: disable=W0221
self.loop_thread_ident = threading.current_thread().ident
self.my_start_time = None
self.got_callback = False
super(TestAddCallbackThreadsafeRequestBeforeIOLoopStarts, self).start(*args, **kwargs)
self.assertTrue(self.got_callback)
def begin(self, channel):
self.stop()
def on_requested_callback(self):
self.assertEqual(threading.current_thread().ident,
self.loop_thread_ident)
self.assertLess(time_now() - self.my_start_time, 0.25)
self.got_callback = True
class TestAddCallbackThreadsafeFromIOLoopThread(AsyncTestCase, AsyncAdapters):
DESCRIPTION = (
"Test _adapter_add_callback_threadsafe request from same thread.")
def start(self, *args, **kwargs): # pylint: disable=W0221
self.loop_thread_ident = threading.current_thread().ident
self.my_start_time = None
self.got_callback = False
super(TestAddCallbackThreadsafeFromIOLoopThread, self).start(*args, **kwargs)
self.assertTrue(self.got_callback)
def begin(self, channel):
self.my_start_time = time_now()
# Request a callback from our current (ioloop's) thread
channel.connection._adapter_add_callback_threadsafe(
self.on_requested_callback)
def on_requested_callback(self):
self.assertEqual(threading.current_thread().ident,
self.loop_thread_ident)
self.assertLess(time_now() - self.my_start_time, 0.25)
self.got_callback = True
self.stop()
class TestAddCallbackThreadsafeFromAnotherThread(AsyncTestCase, AsyncAdapters):
DESCRIPTION = (
"Test _adapter_add_callback_threadsafe request from another thread.")
def start(self, *args, **kwargs): # pylint: disable=W0221
self.loop_thread_ident = threading.current_thread().ident
self.my_start_time = None
self.got_callback = False
super(TestAddCallbackThreadsafeFromAnotherThread, self).start(*args, **kwargs)
self.assertTrue(self.got_callback)
def begin(self, channel):
self.my_start_time = time_now()
# Request a callback from ioloop while executing in another thread
timer = threading.Timer(
0,
lambda: channel.connection._adapter_add_callback_threadsafe(
self.on_requested_callback))
self.addCleanup(timer.cancel)
timer.start()
def on_requested_callback(self):
self.assertEqual(threading.current_thread().ident,
self.loop_thread_ident)
self.assertLess(time_now() - self.my_start_time, 0.25)
self.got_callback = True
self.stop()
class TestIOLoopStopBeforeIOLoopStarts(AsyncTestCase, AsyncAdapters):
DESCRIPTION = "Test ioloop.stop() before ioloop starts causes ioloop to exit quickly."
def _run_ioloop(self, *args, **kwargs): # pylint: disable=W0221
"""We intercept this method from AsyncTestCase in order to call
ioloop.stop() before AsyncTestCase starts the ioloop.
"""
# Request ioloop to stop before it starts
my_start_time = time_now()
self.stop_ioloop_only()
super(
TestIOLoopStopBeforeIOLoopStarts, self)._run_ioloop(*args, **kwargs)
self.assertLess(time_now() - my_start_time, 0.25)
# Resume I/O loop to facilitate normal course of events and closure
# of connection in order to prevent reporting of a socket resource leak
# from an unclosed connection.
super(
TestIOLoopStopBeforeIOLoopStarts, self)._run_ioloop(*args, **kwargs)
def begin(self, channel):
self.stop()
class TestViabilityOfMultipleTimeoutsWithSameDeadlineAndCallback(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103
DESCRIPTION = "Test viability of multiple timeouts with same deadline and callback"
def begin(self, channel):
timer1 = channel.connection._adapter_call_later(0, self.on_my_timer)
timer2 = channel.connection._adapter_call_later(0, self.on_my_timer)
self.assertIsNot(timer1, timer2)
channel.connection._adapter_remove_timeout(timer1)
# Wait for timer2 to fire
def on_my_timer(self):
self.stop()
# -----------------------------------------------------------------------------
# pika/pika - tests/unit/diagnostic_utils_test.py (BSD-3-Clause)
# -----------------------------------------------------------------------------
"""
Test of `pika.diagnostic_utils`
"""
import unittest
import logging
import pika.compat
from pika import diagnostic_utils
# Suppress invalid-name, since our test names are descriptive and quite long
# pylint: disable=C0103
# Suppress missing-docstring to allow test method names to be printed by our
# test runner
# pylint: disable=C0111
class DiagnosticUtilsTest(unittest.TestCase):
def test_args_and_return_value_propagation(self):
bucket = []
log_exception = diagnostic_utils.create_log_exception_decorator(
logging.getLogger(__name__))
return_value = (1, 2, 3)
@log_exception
def my_func(*args, **kwargs):
bucket.append((args, kwargs))
return return_value
# Test with args and kwargs
expected_args = ('a', 2, 'B', Exception('oh-oh'))
expected_kwargs = dict(hello='world', bye='hello', error=RuntimeError())
result = my_func(*expected_args, **expected_kwargs)
self.assertIs(result, return_value)
self.assertEqual(bucket, [(expected_args, expected_kwargs)])
# Make sure that the original instances were passed through, not copies
for i in pika.compat.xrange(len(expected_args)):
self.assertIs(bucket[0][0][i], expected_args[i])
for key in pika.compat.dictkeys(expected_kwargs):
self.assertIs(bucket[0][1][key], expected_kwargs[key])
# Now, repeat without any args/kwargs
expected_args = tuple()
expected_kwargs = dict()
del bucket[:] # .clear() doesn't exist in python 2.7
result = my_func()
self.assertIs(result, return_value)
self.assertEqual(bucket, [(expected_args, expected_kwargs)])
def test_exception_propagation(self):
logger = logging.getLogger(__name__)
log_exception = diagnostic_utils.create_log_exception_decorator(logger)
# Suppress log output and capture LogRecord
log_record_bucket = []
logger.handle = log_record_bucket.append
exception = Exception('Oops!')
@log_exception
def my_func_that_raises():
raise exception
with self.assertRaises(Exception) as ctx:
my_func_that_raises()
# Make sure the expected exception was raised
self.assertIs(ctx.exception, exception)
# Check log message
self.assertEqual(len(log_record_bucket), 1)
log_record = log_record_bucket[0] # type: logging.LogRecord
print(log_record.getMessage())
expected_ending = 'Exception: Oops!\n'
self.assertEqual(log_record.getMessage()[-len(expected_ending):],
expected_ending)
| bsd-3-clause | b262f56288382aec158a71abffbbd663 | 29.505618 | 80 | 0.634254 | 4.082707 | false | true | false | false |
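The tests above exercise `pika.diagnostic_utils.create_log_exception_decorator`. The following is a minimal sketch of the pattern under test — a decorator factory that logs any escaping exception and re-raises it unchanged — not pika's actual implementation:

```python
import functools
import logging

def create_log_exception_decorator(logger):
    """Return a decorator that logs exceptions escaping the wrapped
    function via the given logger, then re-raises them unchanged."""
    def log_exception(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except BaseException:
                logger.exception('Wrapped function exited with exception.')
                raise
        return wrapper
    return log_exception

log_exception = create_log_exception_decorator(logging.getLogger(__name__))

@log_exception
def divide(a, b):
    # Args, kwargs and the return value pass through untouched.
    return a / b
```

Because the wrapper re-raises, callers still see the original exception instance, which is exactly what `test_exception_propagation` above asserts.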
pika/pika | examples/consumer_simple.py | 1 | 1745 | # -*- coding: utf-8 -*-
# pylint: disable=C0111,C0103,R0205
import json
import logging
import pika
from pika.exchange_type import ExchangeType
print('pika version: %s' % pika.__version__)
connection = pika.BlockingConnection(
pika.ConnectionParameters(host='localhost'))
main_channel = connection.channel()
consumer_channel = connection.channel()
bind_channel = connection.channel()
main_channel.exchange_declare(exchange='com.micex.sten', exchange_type=ExchangeType.direct)
main_channel.exchange_declare(
exchange='com.micex.lasttrades', exchange_type=ExchangeType.direct)
queue = main_channel.queue_declare('', exclusive=True).method.queue
queue_tickers = main_channel.queue_declare('', exclusive=True).method.queue
main_channel.queue_bind(
exchange='com.micex.sten', queue=queue, routing_key='order.stop.create')
def hello():
print('Hello world')
connection.call_later(5, hello)
def callback(_ch, _method, _properties, body):
body = json.loads(body)['order.stop.create']
ticker = None
if 'ticker' in body['data']['params']['condition']:
ticker = body['data']['params']['condition']['ticker']
if not ticker:
return
print('got ticker %s, gonna bind it...' % ticker)
bind_channel.queue_bind(
exchange='com.micex.lasttrades',
queue=queue_tickers,
routing_key=str(ticker))
    print('ticker %s bound ok' % ticker)
logging.basicConfig(level=logging.INFO)
# Note: consuming with automatic acknowledgements has its risks
# and is used here for simplicity.
# See https://www.rabbitmq.com/confirms.html.
consumer_channel.basic_consume(queue, callback, auto_ack=True)
try:
consumer_channel.start_consuming()
finally:
connection.close()
| bsd-3-clause | 5de1001d9f480ed000ff99fd12a96cd1 | 26.698413 | 91 | 0.708309 | 3.455446 | false | false | false | false |
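The `callback` in the consumer example above digs a ticker out of a nested JSON payload before binding a queue. Isolated from the channel machinery, the lookup reduces to the following sketch (the message body here is made up for illustration):

```python
import json

def extract_ticker(raw_body):
    """Return the ticker from an order.stop.create message, or None
    when the condition carries no ticker."""
    body = json.loads(raw_body)['order.stop.create']
    condition = body['data']['params']['condition']
    return condition.get('ticker')

# A hypothetical message shaped like the one the consumer expects.
message = json.dumps({
    'order.stop.create': {
        'data': {'params': {'condition': {'ticker': 'GAZP'}}}
    }
})
```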
pika/pika | pika/adapters/select_connection.py | 1 | 45096 | """A connection adapter that tries to use the best polling method for the
platform pika is running on.
"""
import abc
import collections
import errno
import heapq
import logging
import select
import time
import threading
import pika.compat
from pika.adapters.utils import nbio_interface
from pika.adapters.base_connection import BaseConnection
from pika.adapters.utils.selector_ioloop_adapter import (
SelectorIOServicesAdapter, AbstractSelectorIOLoop)
LOGGER = logging.getLogger(__name__)
# One of select, epoll, kqueue or poll
SELECT_TYPE = None
# The reason for this unconventional dict initialization is that on some
# platforms select.error is an alias for OSError. We don't want the lambda
# for select.error to win over the one for OSError.
_SELECT_ERROR_CHECKERS = {}
if pika.compat.PY3:
# InterruptedError is undefined in PY2
# pylint: disable=E0602
_SELECT_ERROR_CHECKERS[InterruptedError] = lambda e: True
_SELECT_ERROR_CHECKERS[select.error] = lambda e: e.args[0] == errno.EINTR
_SELECT_ERROR_CHECKERS[IOError] = lambda e: e.errno == errno.EINTR
_SELECT_ERROR_CHECKERS[OSError] = lambda e: e.errno == errno.EINTR
# We can reduce the number of elements in the list by looking at the super-sub
# class relationship because only the most generic ones need to be caught.
# For now the optimization is left out.
# Following is better but still incomplete.
# _SELECT_ERRORS = tuple(filter(lambda e: not isinstance(e, OSError),
# _SELECT_ERROR_CHECKERS.keys())
# + [OSError])
_SELECT_ERRORS = tuple(_SELECT_ERROR_CHECKERS.keys())
def _is_resumable(exc):
"""Check if caught exception represents EINTR error.
:param exc: exception; must be one of classes in _SELECT_ERRORS
"""
checker = _SELECT_ERROR_CHECKERS.get(exc.__class__, None)
if checker is not None:
return checker(exc)
else:
return False
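`_is_resumable` exists to support the classic EINTR-retry loop around blocking syscalls: if `select`/`poll` was merely interrupted by a signal, retry it rather than fail. A standalone sketch of that loop, using a fake poll function that fails once:

```python
import errno

def call_with_eintr_retry(syscall):
    """Keep retrying `syscall` while it raises an EINTR-flavored OSError;
    propagate any other error."""
    while True:
        try:
            return syscall()
        except OSError as exc:
            if exc.errno == errno.EINTR:  # interrupted by a signal: retry
                continue
            raise

attempts = []

def flaky_poll():
    # Simulates a poll interrupted by a signal on the first call.
    attempts.append(1)
    if len(attempts) == 1:
        raise OSError(errno.EINTR, 'Interrupted system call')
    return ['fd-ready']
```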
class SelectConnection(BaseConnection):
"""An asynchronous connection adapter that attempts to use the fastest
event loop adapter for the given platform.
"""
def __init__(
self, # pylint: disable=R0913
parameters=None,
on_open_callback=None,
on_open_error_callback=None,
on_close_callback=None,
custom_ioloop=None,
internal_connection_workflow=True):
"""Create a new instance of the Connection object.
:param pika.connection.Parameters parameters: Connection parameters
:param callable on_open_callback: Method to call on connection open
:param None | method on_open_error_callback: Called if the connection
can't be established or connection establishment is interrupted by
`Connection.close()`: on_open_error_callback(Connection, exception).
:param None | method on_close_callback: Called when a previously fully
open connection is closed:
`on_close_callback(Connection, exception)`, where `exception` is
either an instance of `exceptions.ConnectionClosed` if closed by
user or broker or exception of another type that describes the cause
of connection failure.
:param None | IOLoop | nbio_interface.AbstractIOServices custom_ioloop:
Provide a custom I/O Loop object.
:param bool internal_connection_workflow: True for autonomous connection
establishment which is default; False for externally-managed
connection workflow via the `create_connection()` factory.
:raises: RuntimeError
"""
if isinstance(custom_ioloop, nbio_interface.AbstractIOServices):
nbio = custom_ioloop
else:
nbio = SelectorIOServicesAdapter(custom_ioloop or IOLoop())
super().__init__(
parameters,
on_open_callback,
on_open_error_callback,
on_close_callback,
nbio,
internal_connection_workflow=internal_connection_workflow)
@classmethod
def create_connection(cls,
connection_configs,
on_done,
custom_ioloop=None,
workflow=None):
"""Implement
:py:classmethod:`pika.adapters.BaseConnection.create_connection()`.
"""
nbio = SelectorIOServicesAdapter(custom_ioloop or IOLoop())
def connection_factory(params):
"""Connection factory."""
if params is None:
raise ValueError('Expected pika.connection.Parameters '
'instance, but got None in params arg.')
return cls(
parameters=params,
custom_ioloop=nbio,
internal_connection_workflow=False)
return cls._start_connection_workflow(
connection_configs=connection_configs,
connection_factory=connection_factory,
nbio=nbio,
workflow=workflow,
on_done=on_done)
def _get_write_buffer_size(self):
"""
:returns: Current size of output data buffered by the transport
:rtype: int
"""
return self._transport.get_write_buffer_size()
class _Timeout:
"""Represents a timeout"""
__slots__ = (
'deadline',
'callback',
)
def __init__(self, deadline, callback):
"""
:param float deadline: timer expiration as non-negative epoch number
:param callable callback: callback to call when timeout expires
:raises ValueError, TypeError:
"""
if deadline < 0:
raise ValueError(
'deadline must be non-negative epoch number, but got %r' %
(deadline,))
if not callable(callback):
raise TypeError(
'callback must be a callable, but got {!r}'.format(callback))
self.deadline = deadline
self.callback = callback
def __eq__(self, other):
"""NOTE: not supporting sort stability"""
if isinstance(other, _Timeout):
return self.deadline == other.deadline
return NotImplemented
def __ne__(self, other):
"""NOTE: not supporting sort stability"""
result = self.__eq__(other)
if result is not NotImplemented:
return not result
return NotImplemented
def __lt__(self, other):
"""NOTE: not supporting sort stability"""
if isinstance(other, _Timeout):
return self.deadline < other.deadline
return NotImplemented
def __gt__(self, other):
"""NOTE: not supporting sort stability"""
if isinstance(other, _Timeout):
return self.deadline > other.deadline
return NotImplemented
def __le__(self, other):
"""NOTE: not supporting sort stability"""
if isinstance(other, _Timeout):
return self.deadline <= other.deadline
return NotImplemented
def __ge__(self, other):
"""NOTE: not supporting sort stability"""
if isinstance(other, _Timeout):
return self.deadline >= other.deadline
return NotImplemented
class _Timer:
"""Manage timeouts for use in ioloop"""
# Cancellation count threshold for triggering garbage collection of
# cancelled timers
_GC_CANCELLATION_THRESHOLD = 1024
def __init__(self):
self._timeout_heap = []
# Number of canceled timeouts on heap; for scheduling garbage
# collection of canceled timeouts
self._num_cancellations = 0
def close(self):
"""Release resources. Don't use the `_Timer` instance after closing
it
"""
# Eliminate potential reference cycles to aid garbage-collection
if self._timeout_heap is not None:
for timeout in self._timeout_heap:
timeout.callback = None
self._timeout_heap = None
def call_later(self, delay, callback):
"""Schedule a one-shot timeout given delay seconds.
NOTE: you may cancel the timer before dispatch of the callback. Timer
Manager cancels the timer upon dispatch of the callback.
:param float delay: Non-negative number of seconds from now until
expiration
:param callable callback: The callback method, having the signature
`callback()`
:rtype: _Timeout
        :raises ValueError, TypeError:
"""
if self._timeout_heap is None:
            raise ValueError('call_later() called on a closed _Timer')
if delay < 0:
raise ValueError(
'call_later: delay must be non-negative, but got {!r}'.format(delay))
now = pika.compat.time_now()
timeout = _Timeout(now + delay, callback)
heapq.heappush(self._timeout_heap, timeout)
LOGGER.debug(
'call_later: added timeout %r with deadline=%r and '
'callback=%r; now=%s; delay=%s', timeout, timeout.deadline,
timeout.callback, now, delay)
return timeout
def remove_timeout(self, timeout):
"""Cancel the timeout
:param _Timeout timeout: The timer to cancel
"""
# NOTE removing from the heap is difficult, so we just deactivate the
# timeout and garbage-collect it at a later time; see discussion
# in http://docs.python.org/library/heapq.html
if timeout.callback is None:
LOGGER.debug(
'remove_timeout: timeout was already removed or called %r',
timeout)
else:
LOGGER.debug(
'remove_timeout: removing timeout %r with deadline=%r '
'and callback=%r', timeout, timeout.deadline, timeout.callback)
timeout.callback = None
self._num_cancellations += 1
def get_remaining_interval(self):
"""Get the interval to the next timeout expiration
:returns: non-negative number of seconds until next timer expiration;
None if there are no timers
:rtype: float
"""
if self._timeout_heap:
now = pika.compat.time_now()
interval = max(0, self._timeout_heap[0].deadline - now)
else:
interval = None
return interval
def process_timeouts(self):
"""Process pending timeouts, invoking callbacks for those whose time has
come
"""
if self._timeout_heap:
now = pika.compat.time_now()
# Remove ready timeouts from the heap now to prevent IO starvation
# from timeouts added during callback processing
ready_timeouts = []
while self._timeout_heap and self._timeout_heap[0].deadline <= now:
timeout = heapq.heappop(self._timeout_heap)
if timeout.callback is not None:
ready_timeouts.append(timeout)
else:
self._num_cancellations -= 1
# Invoke ready timeout callbacks
for timeout in ready_timeouts:
if timeout.callback is None:
# Must have been canceled from a prior callback
self._num_cancellations -= 1
continue
timeout.callback()
timeout.callback = None
# Garbage-collect canceled timeouts if they exceed threshold
if (self._num_cancellations >= self._GC_CANCELLATION_THRESHOLD and
self._num_cancellations > (len(self._timeout_heap) >> 1)):
self._num_cancellations = 0
self._timeout_heap = [
t for t in self._timeout_heap if t.callback is not None
]
heapq.heapify(self._timeout_heap)
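`_Timer` uses the standard `heapq` idiom for cancellation: removing an arbitrary entry from a heap is expensive, so a cancelled timeout merely has its callback nulled and is discarded lazily when it reaches the top (with periodic garbage collection). A standalone sketch of that pattern, with a sequence counter as tie-breaker so callbacks are never compared:

```python
import heapq
import itertools

class MiniTimer:
    """Heap-based one-shot timers with lazy cancellation (sketch only)."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker for equal deadlines

    def call_at(self, deadline, callback):
        entry = [deadline, next(self._seq), callback]
        heapq.heappush(self._heap, entry)
        return entry

    def cancel(self, entry):
        entry[2] = None  # lazy cancellation: the entry stays on the heap

    def fire_due(self, now):
        """Pop and run every non-cancelled timer whose deadline has passed;
        return how many callbacks actually fired."""
        fired = 0
        while self._heap and self._heap[0][0] <= now:
            _, _, callback = heapq.heappop(self._heap)
            if callback is not None:  # dead entry: just discard it
                callback()
                fired += 1
        return fired
```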
class PollEvents:
"""Event flags for I/O"""
# Use epoll's constants to keep life easy
READ = getattr(select, 'POLLIN', 0x01) # available for read
WRITE = getattr(select, 'POLLOUT', 0x04) # available for write
ERROR = getattr(select, 'POLLERR', 0x08) # error on associated fd
class IOLoop(AbstractSelectorIOLoop):
"""I/O loop implementation that picks a suitable poller (`select`,
`poll`, `epoll`, `kqueue`) to use based on platform.
Implements the
`pika.adapters.utils.selector_ioloop_adapter.AbstractSelectorIOLoop`
interface.
"""
# READ/WRITE/ERROR per `AbstractSelectorIOLoop` requirements
READ = PollEvents.READ
WRITE = PollEvents.WRITE
ERROR = PollEvents.ERROR
def __init__(self):
self._timer = _Timer()
# Callbacks requested via `add_callback`
self._callbacks = collections.deque()
self._poller = self._get_poller(self._get_remaining_interval,
self.process_timeouts)
def close(self):
"""Release IOLoop's resources.
`IOLoop.close` is intended to be called by the application or test code
only after `IOLoop.start()` returns. After calling `close()`, no other
interaction with the closed instance of `IOLoop` should be performed.
"""
if self._callbacks is not None:
self._poller.close()
self._timer.close()
# Set _callbacks to empty list rather than None so that race from
# another thread calling add_callback_threadsafe() won't result in
# AttributeError
self._callbacks = []
@staticmethod
def _get_poller(get_wait_seconds, process_timeouts):
"""Determine the best poller to use for this environment and instantiate
it.
:param get_wait_seconds: Function for getting the maximum number of
seconds to wait for IO for use by the poller
:param process_timeouts: Function for processing timeouts for use by the
poller
:returns: The instantiated poller instance supporting `_PollerBase` API
:rtype: object
"""
poller = None
kwargs = dict(
get_wait_seconds=get_wait_seconds,
process_timeouts=process_timeouts)
if hasattr(select, 'epoll'):
if not SELECT_TYPE or SELECT_TYPE == 'epoll':
LOGGER.debug('Using EPollPoller')
poller = EPollPoller(**kwargs)
if not poller and hasattr(select, 'kqueue'):
if not SELECT_TYPE or SELECT_TYPE == 'kqueue':
LOGGER.debug('Using KQueuePoller')
poller = KQueuePoller(**kwargs)
if (not poller and hasattr(select, 'poll') and
hasattr(select.poll(), 'modify')): # pylint: disable=E1101
if not SELECT_TYPE or SELECT_TYPE == 'poll':
LOGGER.debug('Using PollPoller')
poller = PollPoller(**kwargs)
if not poller:
LOGGER.debug('Using SelectPoller')
poller = SelectPoller(**kwargs)
return poller
def call_later(self, delay, callback):
"""Add the callback to the IOLoop timer to be called after delay seconds
from the time of call on best-effort basis. Returns a handle to the
timeout.
:param float delay: The number of seconds to wait to call callback
:param callable callback: The callback method
:returns: handle to the created timeout that may be passed to
`remove_timeout()`
:rtype: object
"""
return self._timer.call_later(delay, callback)
def remove_timeout(self, timeout_handle):
"""Remove a timeout
:param timeout_handle: Handle of timeout to remove
"""
self._timer.remove_timeout(timeout_handle)
def add_callback_threadsafe(self, callback):
"""Requests a call to the given function as soon as possible in the
context of this IOLoop's thread.
NOTE: This is the only thread-safe method in IOLoop. All other
manipulations of IOLoop must be performed from the IOLoop's thread.
For example, a thread may request a call to the `stop` method of an
ioloop that is running in a different thread via
`ioloop.add_callback_threadsafe(ioloop.stop)`
:param callable callback: The callback method
"""
if not callable(callback):
raise TypeError(
'callback must be a callable, but got {!r}'.format(callback))
# NOTE: `deque.append` is atomic
self._callbacks.append(callback)
# Wake up the IOLoop which may be running in another thread
self._poller.wake_threadsafe()
LOGGER.debug('add_callback_threadsafe: added callback=%r', callback)
# To satisfy `AbstractSelectorIOLoop` requirement
add_callback = add_callback_threadsafe
def process_timeouts(self):
"""[Extension] Process pending callbacks and timeouts, invoking those
whose time has come. Internal use only.
"""
# Avoid I/O starvation by postponing new callbacks to the next iteration
for _ in pika.compat.xrange(len(self._callbacks)):
callback = self._callbacks.popleft()
LOGGER.debug('process_timeouts: invoking callback=%r', callback)
callback()
self._timer.process_timeouts()
def _get_remaining_interval(self):
"""Get the remaining interval to the next callback or timeout
expiration.
:returns: non-negative number of seconds until next callback or timer
expiration; None if there are no callbacks and timers
:rtype: float
"""
if self._callbacks:
return 0
return self._timer.get_remaining_interval()
def add_handler(self, fd, handler, events):
"""Start watching the given file descriptor for events
:param int fd: The file descriptor
:param callable handler: When requested event(s) occur,
`handler(fd, events)` will be called.
:param int events: The event mask using READ, WRITE, ERROR.
"""
self._poller.add_handler(fd, handler, events)
def update_handler(self, fd, events):
"""Changes the events we watch for
:param int fd: The file descriptor
:param int events: The event mask using READ, WRITE, ERROR
"""
self._poller.update_handler(fd, events)
def remove_handler(self, fd):
"""Stop watching the given file descriptor for events
:param int fd: The file descriptor
"""
self._poller.remove_handler(fd)
def start(self):
"""[API] Start the main poller loop. It will loop until requested to
exit. See `IOLoop.stop`.
"""
self._poller.start()
def stop(self):
"""[API] Request exit from the ioloop. The loop is NOT guaranteed to
stop before this method returns.
To invoke `stop()` safely from a thread other than this IOLoop's thread,
call it via `add_callback_threadsafe`; e.g.,
`ioloop.add_callback_threadsafe(ioloop.stop)`
"""
self._poller.stop()
def activate_poller(self):
"""[Extension] Activate the poller
"""
self._poller.activate_poller()
def deactivate_poller(self):
"""[Extension] Deactivate the poller
"""
self._poller.deactivate_poller()
def poll(self):
"""[Extension] Wait for events of interest on registered file
descriptors until an event of interest occurs or next timer deadline or
`_PollerBase._MAX_POLL_TIMEOUT`, whichever is sooner, and dispatch the
corresponding event handlers.
"""
self._poller.poll()
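`add_callback_threadsafe` leans on `deque.append` being atomic in CPython, so no lock is needed around the callback queue itself; only the poller wake-up requires care, and draining postpones newly added callbacks to the next iteration to avoid I/O starvation. The queueing half in isolation (a sketch, without the poller):

```python
import collections
import threading

class CallbackQueue:
    """Thread-safe FIFO of zero-argument callables, drained on the loop
    thread (sketch of the IOLoop callback machinery above)."""

    def __init__(self):
        self._callbacks = collections.deque()  # append is atomic in CPython

    def add_callback_threadsafe(self, callback):
        if not callable(callback):
            raise TypeError(
                'callback must be a callable, but got {!r}'.format(callback))
        self._callbacks.append(callback)
        # A real IOLoop would also wake its poller here.

    def drain(self):
        # Run only the callbacks present at entry, so callbacks that
        # schedule further callbacks cannot starve I/O processing.
        for _ in range(len(self._callbacks)):
            self._callbacks.popleft()()

q = CallbackQueue()
results = []
threads = [threading.Thread(target=q.add_callback_threadsafe,
                            args=(lambda i=i: results.append(i),))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
q.drain()  # executed on the "loop" thread
```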
class _PollerBase(pika.compat.AbstractBase): # pylint: disable=R0902
"""Base class for select-based IOLoop implementations"""
# Drop out of the poll loop every _MAX_POLL_TIMEOUT secs as a worst case;
# this is only a backstop value; we will run timeouts when they are
# scheduled.
_MAX_POLL_TIMEOUT = 5
    # If the poller's timeout unit is milliseconds, override with 1000
POLL_TIMEOUT_MULT = 1
def __init__(self, get_wait_seconds, process_timeouts):
"""
:param get_wait_seconds: Function for getting the maximum number of
seconds to wait for IO for use by the poller
:param process_timeouts: Function for processing timeouts for use by the
poller
"""
self._get_wait_seconds = get_wait_seconds
self._process_timeouts = process_timeouts
# We guard access to the waking file descriptors to avoid races from
# closing them while another thread is calling our `wake()` method.
self._waking_mutex = threading.Lock()
# fd-to-handler function mappings
self._fd_handlers = dict()
# event-to-fdset mappings
self._fd_events = {
PollEvents.READ: set(),
PollEvents.WRITE: set(),
PollEvents.ERROR: set()
}
self._processing_fd_event_map = {}
# Reentrancy tracker of the `start` method
self._running = False
self._stopping = False
# Create ioloop-interrupt socket pair and register read handler.
self._r_interrupt, self._w_interrupt = self._get_interrupt_pair()
self.add_handler(self._r_interrupt.fileno(), self._read_interrupt,
PollEvents.READ)
def close(self):
"""Release poller's resources.
`close()` is intended to be called after the poller's `start()` method
returns. After calling `close()`, no other interaction with the closed
poller instance should be performed.
"""
# Unregister and close ioloop-interrupt socket pair; mutual exclusion is
# necessary to avoid race condition with `wake_threadsafe` executing in
# another thread's context
assert not self._running, 'Cannot call close() before start() unwinds.'
with self._waking_mutex:
if self._w_interrupt is not None:
self.remove_handler(self._r_interrupt.fileno()) # pylint: disable=E1101
self._r_interrupt.close()
self._r_interrupt = None
self._w_interrupt.close()
self._w_interrupt = None
self.deactivate_poller()
self._fd_handlers = None
self._fd_events = None
self._processing_fd_event_map = None
def wake_threadsafe(self):
"""Wake up the poller as soon as possible. As the name indicates, this
method is thread-safe.
"""
with self._waking_mutex:
if self._w_interrupt is None:
return
try:
# Send byte to interrupt the poll loop, use send() instead of
# os.write for Windows compatibility
self._w_interrupt.send(b'X')
except pika.compat.SOCKET_ERROR as err:
if err.errno != errno.EWOULDBLOCK:
raise
except Exception as err:
# There's nothing sensible to do here, we'll exit the interrupt
# loop after POLL_TIMEOUT secs in worst case anyway.
LOGGER.warning("Failed to send interrupt to poller: %s", err)
raise
def _get_max_wait(self):
"""Get the interval to the next timeout event, or a default interval
:returns: maximum number of self.POLL_TIMEOUT_MULT-scaled time units
to wait for IO events
:rtype: int
"""
delay = self._get_wait_seconds()
if delay is None:
delay = self._MAX_POLL_TIMEOUT
else:
delay = min(delay, self._MAX_POLL_TIMEOUT)
return delay * self.POLL_TIMEOUT_MULT
def add_handler(self, fileno, handler, events):
"""Add a new fileno to the set to be monitored
:param int fileno: The file descriptor
:param callable handler: What is called when an event happens
:param int events: The event mask using READ, WRITE, ERROR
"""
self._fd_handlers[fileno] = handler
self._set_handler_events(fileno, events)
# Inform the derived class
self._register_fd(fileno, events)
def update_handler(self, fileno, events):
"""Set the events to the current events
:param int fileno: The file descriptor
:param int events: The event mask using READ, WRITE, ERROR
"""
# Record the change
events_cleared, events_set = self._set_handler_events(fileno, events)
# Inform the derived class
self._modify_fd_events(
fileno,
events=events,
events_to_clear=events_cleared,
events_to_set=events_set)
def remove_handler(self, fileno):
"""Remove a file descriptor from the set
:param int fileno: The file descriptor
"""
try:
del self._processing_fd_event_map[fileno]
except KeyError:
pass
events_cleared, _ = self._set_handler_events(fileno, 0)
del self._fd_handlers[fileno]
# Inform the derived class
self._unregister_fd(fileno, events_to_clear=events_cleared)
def _set_handler_events(self, fileno, events):
"""Set the handler's events to the given events; internal to
`_PollerBase`.
:param int fileno: The file descriptor
:param int events: The event mask (READ, WRITE, ERROR)
:returns: a 2-tuple (events_cleared, events_set)
:rtype: tuple
"""
events_cleared = 0
events_set = 0
for evt in (PollEvents.READ, PollEvents.WRITE, PollEvents.ERROR):
if events & evt:
if fileno not in self._fd_events[evt]:
self._fd_events[evt].add(fileno)
events_set |= evt
else:
if fileno in self._fd_events[evt]:
self._fd_events[evt].discard(fileno)
events_cleared |= evt
return events_cleared, events_set
def activate_poller(self):
"""Activate the poller
"""
# Activate the underlying poller and register current events
self._init_poller()
fd_to_events = collections.defaultdict(int)
for event, file_descriptors in self._fd_events.items():
for fileno in file_descriptors:
fd_to_events[fileno] |= event
for fileno, events in fd_to_events.items():
self._register_fd(fileno, events)
def deactivate_poller(self):
"""Deactivate the poller
"""
self._uninit_poller()
def start(self):
"""Start the main poller loop. It will loop until requested to exit.
This method is not reentrant and will raise an error if called
recursively (pika/pika#1095)
:raises: RuntimeError
"""
if self._running:
raise RuntimeError('IOLoop is not reentrant and is already running')
LOGGER.debug('Entering IOLoop')
self._running = True
self.activate_poller()
try:
# Run event loop
while not self._stopping:
self.poll()
self._process_timeouts()
finally:
try:
LOGGER.debug('Deactivating poller')
self.deactivate_poller()
finally:
self._stopping = False
self._running = False
def stop(self):
"""Request exit from the ioloop. The loop is NOT guaranteed to stop
before this method returns.
"""
LOGGER.debug('Stopping IOLoop')
self._stopping = True
self.wake_threadsafe()
@abc.abstractmethod
def poll(self):
"""Wait for events on interested filedescriptors.
"""
raise NotImplementedError
@abc.abstractmethod
def _init_poller(self):
"""Notify the implementation to allocate the poller resource"""
raise NotImplementedError
@abc.abstractmethod
def _uninit_poller(self):
"""Notify the implementation to release the poller resource"""
raise NotImplementedError
@abc.abstractmethod
def _register_fd(self, fileno, events):
"""The base class invokes this method to notify the implementation to
register the file descriptor with the polling object. The request must
be ignored if the poller is not activated.
:param int fileno: The file descriptor
:param int events: The event mask (READ, WRITE, ERROR)
"""
raise NotImplementedError
@abc.abstractmethod
def _modify_fd_events(self, fileno, events, events_to_clear, events_to_set):
        """The base class invokes this method to notify the implementation to
modify an already registered file descriptor. The request must be
ignored if the poller is not activated.
:param int fileno: The file descriptor
:param int events: absolute events (READ, WRITE, ERROR)
:param int events_to_clear: The events to clear (READ, WRITE, ERROR)
:param int events_to_set: The events to set (READ, WRITE, ERROR)
"""
raise NotImplementedError
@abc.abstractmethod
def _unregister_fd(self, fileno, events_to_clear):
"""The base class invokes this method to notify the implementation to
unregister the file descriptor being tracked by the polling object. The
request must be ignored if the poller is not activated.
:param int fileno: The file descriptor
:param int events_to_clear: The events to clear (READ, WRITE, ERROR)
"""
raise NotImplementedError
def _dispatch_fd_events(self, fd_event_map):
""" Helper to dispatch callbacks for file descriptors that received
events.
Before doing so we re-calculate the event mask based on what is
currently set in case it has been changed under our feet by a
        previous callback. We also store a reference to the
        fd_event_map so that we can detect removal of a
        fileno during processing of another callback and not generate
        spurious callbacks for it.
:param dict fd_event_map: Map of fds to events received on them.
"""
# Reset the prior map; if the call is nested, this will suppress the
# remaining dispatch in the earlier call.
self._processing_fd_event_map.clear()
self._processing_fd_event_map = fd_event_map
for fileno in pika.compat.dictkeys(fd_event_map):
if fileno not in fd_event_map:
# the fileno has been removed from the map under our feet.
continue
events = fd_event_map[fileno]
for evt in [PollEvents.READ, PollEvents.WRITE, PollEvents.ERROR]:
if fileno not in self._fd_events[evt]:
events &= ~evt
if events:
handler = self._fd_handlers[fileno]
handler(fileno, events)
@staticmethod
def _get_interrupt_pair():
""" Use a socketpair to be able to interrupt the ioloop if called
from another thread. Socketpair() is not supported on some OS (Win)
so use a pair of simple TCP sockets instead. The sockets will be
closed and garbage collected by python when the ioloop itself is.
"""
return pika.compat._nonblocking_socketpair() # pylint: disable=W0212
def _read_interrupt(self, _interrupt_fd, _events):
        """ Read the interrupt byte(s). We ignore the event mask as we can only
get here if there's data to be read on our fd.
:param int _interrupt_fd: (unused) The file descriptor to read from
:param int _events: (unused) The events generated for this fd
"""
try:
# NOTE Use recv instead of os.read for windows compatibility
self._r_interrupt.recv(512) # pylint: disable=E1101
except pika.compat.SOCKET_ERROR as err:
if err.errno != errno.EAGAIN:
raise
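The interrupt pair used by `_get_interrupt_pair`, `wake_threadsafe` and `_read_interrupt` is the classic self-pipe trick: a byte written from any thread makes a blocked `select()` return immediately. A standalone sketch using `socket.socketpair` (pika uses its own compat helper for portability):

```python
import select
import socket

# The write end wakes the poller from any thread; the read end sits in the
# poller's read set next to the real file descriptors.
r_interrupt, w_interrupt = socket.socketpair()
r_interrupt.setblocking(False)
w_interrupt.setblocking(False)

# With nothing sent yet, a zero-timeout select reports no readables.
idle = select.select([r_interrupt], [], [], 0)[0]

# Another thread calls this to make a blocked select() return immediately.
w_interrupt.send(b'X')

woke = r_interrupt in select.select([r_interrupt], [], [], 1.0)[0]
drained = r_interrupt.recv(512)  # read and discard the wake-up byte(s)

r_interrupt.close()
w_interrupt.close()
```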
class SelectPoller(_PollerBase):
    """Default behavior is to use Select since it's the most widely supported
    and has all of the methods we need for child classes as well. One should only need
to override the update_handler and start methods for additional types.
"""
    # If the poller's timeout unit is milliseconds, specify 1000
POLL_TIMEOUT_MULT = 1
def poll(self):
"""Wait for events of interest on registered file descriptors until an
event of interest occurs or next timer deadline or _MAX_POLL_TIMEOUT,
whichever is sooner, and dispatch the corresponding event handlers.
"""
while True:
try:
if (self._fd_events[PollEvents.READ] or
self._fd_events[PollEvents.WRITE] or
self._fd_events[PollEvents.ERROR]):
read, write, error = select.select(
self._fd_events[PollEvents.READ],
self._fd_events[PollEvents.WRITE],
self._fd_events[PollEvents.ERROR], self._get_max_wait())
else:
# NOTE When called without any FDs, select fails on
# Windows with error 10022, 'An invalid argument was
# supplied'.
time.sleep(self._get_max_wait())
read, write, error = [], [], []
break
except _SELECT_ERRORS as error:
if _is_resumable(error):
continue
else:
raise
# Build an event bit mask for each fileno we've received an event for
fd_event_map = collections.defaultdict(int)
for fd_set, evt in zip(
(read, write, error),
(PollEvents.READ, PollEvents.WRITE, PollEvents.ERROR)):
for fileno in fd_set:
fd_event_map[fileno] |= evt
self._dispatch_fd_events(fd_event_map)
def _init_poller(self):
"""Notify the implementation to allocate the poller resource"""
# It's a no op in SelectPoller
def _uninit_poller(self):
"""Notify the implementation to release the poller resource"""
# It's a no op in SelectPoller
def _register_fd(self, fileno, events):
"""The base class invokes this method to notify the implementation to
register the file descriptor with the polling object. The request must
be ignored if the poller is not activated.
:param int fileno: The file descriptor
:param int events: The event mask using READ, WRITE, ERROR
"""
# It's a no op in SelectPoller
def _modify_fd_events(self, fileno, events, events_to_clear, events_to_set):
        """The base class invokes this method to notify the implementation to
modify an already registered file descriptor. The request must be
ignored if the poller is not activated.
:param int fileno: The file descriptor
:param int events: absolute events (READ, WRITE, ERROR)
:param int events_to_clear: The events to clear (READ, WRITE, ERROR)
:param int events_to_set: The events to set (READ, WRITE, ERROR)
"""
# It's a no op in SelectPoller
def _unregister_fd(self, fileno, events_to_clear):
"""The base class invokes this method to notify the implementation to
unregister the file descriptor being tracked by the polling object. The
request must be ignored if the poller is not activated.
:param int fileno: The file descriptor
:param int events_to_clear: The events to clear (READ, WRITE, ERROR)
"""
# It's a no op in SelectPoller
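`SelectPoller.poll` folds the three fd lists returned by `select.select` into a single fd-to-bitmask map before dispatch. That folding step in isolation, with illustrative fd numbers:

```python
import collections
import select

# Same flag scheme as PollEvents above, with fallbacks for platforms
# (e.g. Windows) whose select module lacks the poll constants.
READ = getattr(select, 'POLLIN', 0x01)
WRITE = getattr(select, 'POLLOUT', 0x04)
ERROR = getattr(select, 'POLLERR', 0x08)

def build_fd_event_map(read, write, error):
    """Combine select.select()'s three fd lists into {fd: event_bitmask}."""
    fd_event_map = collections.defaultdict(int)
    for fd_set, evt in zip((read, write, error), (READ, WRITE, ERROR)):
        for fileno in fd_set:
            fd_event_map[fileno] |= evt
    return fd_event_map

# fd 5 is both readable and writable; fd 7 only errored.
events = build_fd_event_map(read=[4, 5], write=[5], error=[7])
```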
class KQueuePoller(_PollerBase):
# pylint: disable=E1101
"""KQueuePoller works on BSD based systems and is faster than select"""
def __init__(self, get_wait_seconds, process_timeouts):
"""Create an instance of the KQueuePoller
"""
self._kqueue = None
super().__init__(get_wait_seconds, process_timeouts)
@staticmethod
def _map_event(kevent):
"""return the event type associated with a kevent object
:param kevent kevent: a kevent object as returned by kqueue.control()
"""
mask = 0
if kevent.filter == select.KQ_FILTER_READ:
mask = PollEvents.READ
elif kevent.filter == select.KQ_FILTER_WRITE:
mask = PollEvents.WRITE
if kevent.flags & select.KQ_EV_EOF:
# May be set when the peer reader disconnects. We don't check
# KQ_EV_EOF for KQ_FILTER_READ because in that case it may be
# set before the remaining data is consumed from sockbuf.
mask |= PollEvents.ERROR
elif kevent.flags & select.KQ_EV_ERROR:
mask = PollEvents.ERROR
else:
LOGGER.critical('Unexpected kevent: %s', kevent)
return mask
def poll(self):
"""Wait for events of interest on registered file descriptors until an
event of interest occurs or next timer deadline or _MAX_POLL_TIMEOUT,
whichever is sooner, and dispatch the corresponding event handlers.
"""
while True:
try:
kevents = self._kqueue.control(None, 1000, self._get_max_wait())
break
except _SELECT_ERRORS as error:
if _is_resumable(error):
continue
else:
raise
fd_event_map = collections.defaultdict(int)
for event in kevents:
fd_event_map[event.ident] |= self._map_event(event)
self._dispatch_fd_events(fd_event_map)
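Both pollers funnel kernel events into a per-descriptor bitmask before dispatch; a minimal standalone sketch of that accumulation (using made-up bit values, not pika's actual PollEvents constants):

```python
import collections

# Illustrative bit values only; pika's real masks live in PollEvents.
READ, WRITE, ERROR = 1, 2, 4

# Several events reported for the same file descriptor are OR-ed into a
# single mask, so each fd is dispatched exactly once with its full mask.
reported = [(7, READ), (7, WRITE), (9, READ)]  # (fileno, mapped event) pairs
fd_event_map = collections.defaultdict(int)
for fileno, mask in reported:
    fd_event_map[fileno] |= mask

print(dict(fd_event_map))  # -> {7: 3, 9: 1}
```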
def _init_poller(self):
"""Notify the implementation to allocate the poller resource"""
assert self._kqueue is None
self._kqueue = select.kqueue()
def _uninit_poller(self):
"""Notify the implementation to release the poller resource"""
if self._kqueue is not None:
self._kqueue.close()
self._kqueue = None
def _register_fd(self, fileno, events):
"""The base class invokes this method to notify the implementation to
register the file descriptor with the polling object. The request must
be ignored if the poller is not activated.
:param int fileno: The file descriptor
:param int events: The event mask using READ, WRITE, ERROR
"""
self._modify_fd_events(
fileno, events=events, events_to_clear=0, events_to_set=events)
def _modify_fd_events(self, fileno, events, events_to_clear, events_to_set):
"""The base class invoikes this method to notify the implementation to
modify an already registered file descriptor. The request must be
ignored if the poller is not activated.
:param int fileno: The file descriptor
:param int events: absolute events (READ, WRITE, ERROR)
:param int events_to_clear: The events to clear (READ, WRITE, ERROR)
:param int events_to_set: The events to set (READ, WRITE, ERROR)
"""
if self._kqueue is None:
return
kevents = list()
if events_to_clear & PollEvents.READ:
kevents.append(
select.kevent(
fileno,
filter=select.KQ_FILTER_READ,
flags=select.KQ_EV_DELETE))
if events_to_set & PollEvents.READ:
kevents.append(
select.kevent(
fileno,
filter=select.KQ_FILTER_READ,
flags=select.KQ_EV_ADD))
if events_to_clear & PollEvents.WRITE:
kevents.append(
select.kevent(
fileno,
filter=select.KQ_FILTER_WRITE,
flags=select.KQ_EV_DELETE))
if events_to_set & PollEvents.WRITE:
kevents.append(
select.kevent(
fileno,
filter=select.KQ_FILTER_WRITE,
flags=select.KQ_EV_ADD))
self._kqueue.control(kevents, 0)
def _unregister_fd(self, fileno, events_to_clear):
"""The base class invokes this method to notify the implementation to
unregister the file descriptor being tracked by the polling object. The
request must be ignored if the poller is not activated.
:param int fileno: The file descriptor
:param int events_to_clear: The events to clear (READ, WRITE, ERROR)
"""
self._modify_fd_events(
fileno, events=0, events_to_clear=events_to_clear, events_to_set=0)
class PollPoller(_PollerBase):
"""Poll works on Linux and can have better performance than EPoll in
certain scenarios. Both are faster than select.
"""
POLL_TIMEOUT_MULT = 1000
def __init__(self, get_wait_seconds, process_timeouts):
"""Create an instance of the KQueuePoller
"""
self._poll = None
super().__init__(get_wait_seconds, process_timeouts)
@staticmethod
def _create_poller():
"""
:rtype: `select.poll`
"""
return select.poll() # pylint: disable=E1101
def poll(self):
"""Wait for events of interest on registered file descriptors until an
event of interest occurs or next timer deadline or _MAX_POLL_TIMEOUT,
whichever is sooner, and dispatch the corresponding event handlers.
"""
while True:
try:
events = self._poll.poll(self._get_max_wait())
break
except _SELECT_ERRORS as error:
if _is_resumable(error):
continue
else:
raise
fd_event_map = collections.defaultdict(int)
for fileno, event in events:
# NOTE: On OS X, when poll() sets POLLHUP, it's mutually-exclusive with
# POLLOUT and it doesn't seem to set POLLERR along with POLLHUP when
# socket connection fails, for example. So, we need to at least add
# POLLERR when we see POLLHUP
if (event & select.POLLHUP) and pika.compat.ON_OSX:
event |= select.POLLERR
fd_event_map[fileno] |= event
self._dispatch_fd_events(fd_event_map)
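The OS X POLLHUP note above can be sketched in isolation; this is an illustrative helper, not part of pika:

```python
import select

# Hypothetical sketch of the workaround: when the kernel reports POLLHUP
# (peer hung up) without POLLERR, add POLLERR so downstream handlers
# treat the descriptor as errored.
def normalize_event(event, on_osx=True):
    if (event & select.POLLHUP) and on_osx:
        event |= select.POLLERR
    return event

print(bool(normalize_event(select.POLLHUP) & select.POLLERR))  # -> True
```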
def _init_poller(self):
"""Notify the implementation to allocate the poller resource"""
assert self._poll is None
self._poll = self._create_poller()
def _uninit_poller(self):
"""Notify the implementation to release the poller resource"""
if self._poll is not None:
if hasattr(self._poll, "close"):
self._poll.close()
self._poll = None
def _register_fd(self, fileno, events):
"""The base class invokes this method to notify the implementation to
register the file descriptor with the polling object. The request must
be ignored if the poller is not activated.
:param int fileno: The file descriptor
:param int events: The event mask using READ, WRITE, ERROR
"""
if self._poll is not None:
self._poll.register(fileno, events)
def _modify_fd_events(self, fileno, events, events_to_clear, events_to_set):
"""The base class invoikes this method to notify the implementation to
modify an already registered file descriptor. The request must be
ignored if the poller is not activated.
:param int fileno: The file descriptor
:param int events: absolute events (READ, WRITE, ERROR)
:param int events_to_clear: The events to clear (READ, WRITE, ERROR)
:param int events_to_set: The events to set (READ, WRITE, ERROR)
"""
if self._poll is not None:
self._poll.modify(fileno, events)
def _unregister_fd(self, fileno, events_to_clear):
"""The base class invokes this method to notify the implementation to
unregister the file descriptor being tracked by the polling object. The
request must be ignored if the poller is not activated.
:param int fileno: The file descriptor
:param int events_to_clear: The events to clear (READ, WRITE, ERROR)
"""
if self._poll is not None:
self._poll.unregister(fileno)
class EPollPoller(PollPoller):
"""EPoll works on Linux and can have better performance than Poll in
certain scenarios. Both are faster than select.
"""
POLL_TIMEOUT_MULT = 1
@staticmethod
def _create_poller():
"""
:rtype: `select.epoll`
"""
return select.epoll() # pylint: disable=E1101
| bsd-3-clause | 65b262dcea31e739c6a67810b4ad7cb7 | 34.592739 | 88 | 0.603801 | 4.570849 | false | false | false | false |
pika/pika | pika/heartbeat.py | 1 | 8167 | """Handle AMQP Heartbeats"""
import logging
import pika.exceptions
from pika import frame
LOGGER = logging.getLogger(__name__)
class HeartbeatChecker:
"""Sends heartbeats to the broker. The provided timeout is used to
determine if the connection is stale - if no heartbeats are received and
no other activity occurs, the connection is closed. See the parameter list for more
details.
"""
_STALE_CONNECTION = "No activity or too many missed heartbeats in the last %i seconds"
def __init__(self, connection, timeout):
"""Create an object that will check for activity on the provided
connection as well as receive heartbeat frames from the broker. The
timeout parameter defines a window within which this activity must
happen. If not, the connection is considered dead and closed.
The value passed for timeout is also used to calculate an interval
at which a heartbeat frame is sent to the broker. The interval is
equal to the timeout value divided by two.
:param pika.connection.Connection connection: Connection object
:param int timeout: Connection idle timeout. If no activity occurs on the
connection nor heartbeat frames received during the
timeout window the connection will be closed. The
interval used to send heartbeats is calculated from
this value by dividing it by two.
"""
if timeout < 1:
raise ValueError('timeout must be >= 1, but got {!r}'.format(timeout))
self._connection = connection
# Note: see the following documents:
# https://www.rabbitmq.com/heartbeats.html#heartbeats-timeout
# https://github.com/pika/pika/pull/1072
# https://groups.google.com/d/topic/rabbitmq-users/Fmfeqe5ocTY/discussion
# There is a certain amount of confusion around how client developers
# interpret the spec. The spec talks about 2 missed heartbeats as a
# *timeout*, plus that any activity on the connection counts for a
# heartbeat. This is to avoid edge cases and not to depend on network
# latency.
self._timeout = timeout
self._send_interval = float(timeout) / 2
# Note: Pika will calculate the heartbeat / connectivity check interval
# by adding 5 seconds to the negotiated timeout to leave a bit of room
# for broker heartbeats that may be right at the edge of the timeout
# window. This is different behavior from the RabbitMQ Java client and
# the spec that suggests a check interval equivalent to two times the
# heartbeat timeout value. But, one advantage of adding a small amount
# is that bad connections will be detected faster.
# https://github.com/pika/pika/pull/1072#issuecomment-397850795
# https://github.com/rabbitmq/rabbitmq-java-client/blob/b55bd20a1a236fc2d1ea9369b579770fa0237615/src/main/java/com/rabbitmq/client/impl/AMQConnection.java#L773-L780
# https://github.com/ruby-amqp/bunny/blob/3259f3af2e659a49c38c2470aa565c8fb825213c/lib/bunny/session.rb#L1187-L1192
self._check_interval = timeout + 5
LOGGER.debug('timeout: %f send_interval: %f check_interval: %f',
self._timeout, self._send_interval, self._check_interval)
# Initialize counters
self._bytes_received = 0
self._bytes_sent = 0
self._heartbeat_frames_received = 0
self._heartbeat_frames_sent = 0
self._idle_byte_intervals = 0
self._send_timer = None
self._check_timer = None
self._start_send_timer()
self._start_check_timer()
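The interval arithmetic above can be isolated in a small sketch; this is an illustrative reimplementation, not pika's API:

```python
# Hypothetical sketch of HeartbeatChecker's interval derivation: heartbeat
# frames are sent at half the negotiated timeout, while staleness is
# checked at the timeout plus a 5-second grace period.
def heartbeat_intervals(timeout):
    if timeout < 1:
        raise ValueError('timeout must be >= 1, but got {!r}'.format(timeout))
    send_interval = float(timeout) / 2   # how often heartbeat frames go out
    check_interval = timeout + 5         # how often idleness is evaluated
    return send_interval, check_interval

print(heartbeat_intervals(60))  # -> (30.0, 65)
```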
@property
def bytes_received_on_connection(self):
"""Return the number of bytes received by the connection bytes object.
:rtype int
"""
return self._connection.bytes_received
@property
def connection_is_idle(self):
"""Returns true if the byte count hasn't changed in enough intervals
to trip the max idle threshold.
"""
return self._idle_byte_intervals > 0
def received(self):
"""Called when a heartbeat is received"""
LOGGER.debug('Received heartbeat frame')
self._heartbeat_frames_received += 1
def _send_heartbeat(self):
"""Invoked by a timer to send a heartbeat when we need to.
"""
LOGGER.debug('Sending heartbeat frame')
self._send_heartbeat_frame()
self._start_send_timer()
def _check_heartbeat(self):
"""Invoked by a timer to check for broker heartbeats. Checks to see
if we've missed any heartbeats and disconnect our connection if it's
been idle too long.
"""
if self._has_received_data:
self._idle_byte_intervals = 0
else:
# Connection has not received any data, increment the counter
self._idle_byte_intervals += 1
LOGGER.debug(
'Received %i heartbeat frames, sent %i, '
'idle intervals %i', self._heartbeat_frames_received,
self._heartbeat_frames_sent, self._idle_byte_intervals)
if self.connection_is_idle:
self._close_connection()
return
self._start_check_timer()
def stop(self):
"""Stop the heartbeat checker"""
if self._send_timer:
LOGGER.debug('Removing timer for next heartbeat send interval')
self._connection._adapter_remove_timeout(self._send_timer) # pylint: disable=W0212
self._send_timer = None
if self._check_timer:
LOGGER.debug('Removing timer for next heartbeat check interval')
self._connection._adapter_remove_timeout(self._check_timer) # pylint: disable=W0212
self._check_timer = None
def _close_connection(self):
"""Close the connection with the AMQP Connection-Forced value."""
LOGGER.info('Connection is idle, %i stale byte intervals',
self._idle_byte_intervals)
text = HeartbeatChecker._STALE_CONNECTION % self._timeout
# Abort the stream connection. There is no point trying to gracefully
# close the AMQP connection since lack of heartbeat suggests that the
# stream is dead.
self._connection._terminate_stream( # pylint: disable=W0212
pika.exceptions.AMQPHeartbeatTimeout(text))
@property
def _has_received_data(self):
"""Returns True if the connection has received data.
:rtype: bool
"""
return self._bytes_received != self.bytes_received_on_connection
@staticmethod
def _new_heartbeat_frame():
"""Return a new heartbeat frame.
:rtype: pika.frame.Heartbeat
"""
return frame.Heartbeat()
def _send_heartbeat_frame(self):
"""Send a heartbeat frame on the connection.
"""
LOGGER.debug('Sending heartbeat frame')
self._connection._send_frame( # pylint: disable=W0212
self._new_heartbeat_frame())
self._heartbeat_frames_sent += 1
def _start_send_timer(self):
"""Start a new heartbeat send timer."""
self._send_timer = self._connection._adapter_call_later( # pylint: disable=W0212
self._send_interval,
self._send_heartbeat)
def _start_check_timer(self):
"""Start a new heartbeat check timer."""
# Note: update counters now to get current values
# at the start of the timeout window. Values will be
# checked against the connection's byte count at the
# end of the window
self._update_counters()
self._check_timer = self._connection._adapter_call_later( # pylint: disable=W0212
self._check_interval,
self._check_heartbeat)
def _update_counters(self):
"""Update the internal counters for bytes sent and received and the
number of frames received
"""
self._bytes_sent = self._connection.bytes_sent
self._bytes_received = self._connection.bytes_received
| bsd-3-clause | d3b6e6badde4571ec0afdd890d0e44b1 | 38.076555 | 172 | 0.639525 | 4.348775 | false | false | false | false |
pika/pika | examples/direct_reply_to.py | 1 | 2394 | # -*- coding: utf-8 -*-
# pylint: disable=C0111,C0103,R0205
"""
This example demonstrates RabbitMQ's "Direct reply-to" usage via
`pika.BlockingConnection`. See https://www.rabbitmq.com/direct-reply-to.html
for more info about this feature.
"""
import pika
SERVER_QUEUE = 'rpc.server.queue'
def main():
""" Here, Client sends "Marco" to RPC Server, and RPC Server replies with
"Polo".
NOTE Normally, the server would be running separately from the client, but
in this very simple example both are running in the same thread and sharing
connection and channel.
"""
with pika.BlockingConnection() as conn:
channel = conn.channel()
# Set up server
channel.queue_declare(
queue=SERVER_QUEUE, exclusive=True, auto_delete=True)
channel.basic_consume(SERVER_QUEUE, on_server_rx_rpc_request)
# Set up client
# NOTE Client must create its consumer and publish RPC requests on the
# same channel to enable the RabbitMQ broker to make the necessary
# associations.
#
# Also, client must create the consumer *before* starting to publish the
# RPC requests.
#
# Client must create its consumer with auto_ack=True, because the reply-to
# queue isn't real.
channel.basic_consume(
'amq.rabbitmq.reply-to',
on_client_rx_reply_from_server,
auto_ack=True)
channel.basic_publish(
exchange='',
routing_key=SERVER_QUEUE,
body='Marco',
properties=pika.BasicProperties(reply_to='amq.rabbitmq.reply-to'))
channel.start_consuming()
def on_server_rx_rpc_request(ch, method_frame, properties, body):
print('RPC Server got request: %s' % body)
ch.basic_publish('', routing_key=properties.reply_to, body='Polo')
ch.basic_ack(delivery_tag=method_frame.delivery_tag)
print('RPC Server says good bye')
def on_client_rx_reply_from_server(ch, _method_frame, _properties, body):
print('RPC Client got reply: %s' % body)
# NOTE A real client might want to make additional RPC requests, but in this
# simple example we're closing the channel after getting our first reply
# to force control to return from channel.start_consuming()
print('RPC Client says bye')
ch.close()
if __name__ == '__main__':
main()
| bsd-3-clause | 8b910da7182b4ff231d5632caabd3cdd | 29.303797 | 82 | 0.651629 | 3.855072 | false | false | false | false |
djc/couchdb-python | couchdb/tests/couchhttp.py | 1 | 3677 | # -*- coding: utf-8 -*-
#
# Copyright (C) 2009 Christopher Lenz
# All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution.
import socket
import time
import unittest
from couchdb import http, util
from couchdb.tests import testutil
class SessionTestCase(testutil.TempDatabaseMixin, unittest.TestCase):
def test_timeout(self):
dbname, db = self.temp_db()
timeout = 1
session = http.Session(timeout=timeout)
start = time.time()
status, headers, body = session.request('GET', db.resource.url + '/_changes?feed=longpoll&since=1000&timeout=%s' % (timeout*2*1000,))
self.assertRaises(socket.timeout, body.read)
self.assertTrue(time.time() - start < timeout * 1.3)
def test_timeout_retry(self):
dbname, db = self.temp_db()
timeout = 1e-12
session = http.Session(timeout=timeout, retryable_errors=["timed out"])
self.assertRaises(socket.timeout, session.request, 'GET', db.resource.url)
class ResponseBodyTestCase(unittest.TestCase):
def test_close(self):
class TestStream(util.StringIO):
def isclosed(self):
return len(self.getvalue()) == self.tell()
class ConnPool(object):
def __init__(self):
self.value = 0
def release(self, url, conn):
self.value += 1
conn_pool = ConnPool()
stream = TestStream(b'foobar')
stream.msg = {}
response = http.ResponseBody(stream, conn_pool, 'a', 'b')
response.read(10) # read more than stream has. close() is called
response.read() # steam ended. another close() call
self.assertEqual(conn_pool.value, 1)
def test_double_iteration_over_same_response_body(self):
class TestHttpResp(object):
msg = {'transfer-encoding': 'chunked'}
def __init__(self, fp):
self.fp = fp
def close(self):
pass
def isclosed(self):
return len(self.fp.getvalue()) == self.fp.tell()
data = b'foobarbaz\n'
data = b'\n'.join([hex(len(data))[2:].encode('utf-8'), data])
response = http.ResponseBody(TestHttpResp(util.StringIO(data)),
None, None, None)
self.assertEqual(list(response.iterchunks()), [b'foobarbaz\n'])
self.assertEqual(list(response.iterchunks()), [])
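The test above builds its chunked body by hand; a hedged sketch of that framing rule (the test uses a bare "\n" separator where real HTTP chunked encoding uses "\r\n"):

```python
# Hypothetical helper mirroring the test's hand-rolled chunk framing:
# "<hex length>\n<payload>"; hex() yields "0xa" so [2:] strips the prefix.
def frame_chunk(payload):
    return hex(len(payload))[2:].encode('ascii') + b'\n' + payload

print(frame_chunk(b'foobarbaz\n'))  # -> b'a\nfoobarbaz\n'
```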
class CacheTestCase(testutil.TempDatabaseMixin, unittest.TestCase):
def test_remove_miss(self):
"""Check that a cache remove miss is handled gracefully."""
url = 'http://localhost:5984/foo'
cache = http.Cache()
cache.put(url, (None, None, None))
cache.remove(url)
cache.remove(url)
def test_cache_clean(self):
cache = http.Cache()
cache.put('foo', (None, {'Date': 'Sat, 14 Feb 2009 02:31:28 -0000'}, None))
cache.put('bar', (None, {'Date': 'Sat, 14 Feb 2009 02:31:29 -0000'}, None))
cache.put('baz', (None, {'Date': 'Sat, 14 Feb 2009 02:31:30 -0000'}, None))
cache.keep_size = 1
cache._clean()
self.assertEqual(len(cache.by_url), 1)
self.assertTrue('baz' in cache.by_url)
def suite():
suite = unittest.TestSuite()
suite.addTest(testutil.doctest_suite(http))
suite.addTest(unittest.makeSuite(SessionTestCase, 'test'))
suite.addTest(unittest.makeSuite(ResponseBodyTestCase, 'test'))
suite.addTest(unittest.makeSuite(CacheTestCase, 'test'))
return suite
if __name__ == '__main__':
unittest.main(defaultTest='suite')
| bsd-3-clause | fdfb03f1b4bc78515d46790d7532c357 | 33.688679 | 141 | 0.608376 | 3.802482 | false | true | false | false |
django/channels | tests/security/test_auth.py | 3 | 7021 | from importlib import import_module
from unittest import mock
import pytest
from asgiref.sync import sync_to_async
from django.conf import settings
from django.contrib.auth import (
BACKEND_SESSION_KEY,
HASH_SESSION_KEY,
SESSION_KEY,
get_user_model,
user_logged_in,
user_logged_out,
)
from django.contrib.auth.models import AnonymousUser
from channels.auth import get_user, login, logout
from channels.db import database_sync_to_async
class CatchSignal:
"""
Capture (and detect) a Django signal event.
This should be used as a context manager.
:Example:
with CatchSignal(user_logged_in) as handler:
# do the django action here that will create the signal
assert handler.called
:Async Example:
async with CatchSignal(user_logged_in) as handler:
await ... # the Django action that creates the signal
assert handler.called
"""
def __init__(self, signal):
self.handler = mock.Mock()
self.signal = signal
async def __aenter__(self):
await sync_to_async(self.signal.connect)(self.handler)
return self.handler
async def __aexit__(self, exc_type, exc, tb):
await sync_to_async(self.signal.disconnect)(self.handler)
def __enter__(self):
self.signal.connect(self.handler)
return self.handler
def __exit__(self, exc_type, exc_val, exc_tb):
self.signal.disconnect(self.handler)
@pytest.fixture
def user_bob():
return get_user_model().objects.create(username="bob", email="bob@example.com")
@pytest.fixture
def user_bill():
return get_user_model().objects.create(username="bill", email="bill@example.com")
@pytest.fixture
def session():
SessionStore = import_module(settings.SESSION_ENGINE).SessionStore
session = SessionStore()
session.create()
return session
async def assert_is_logged_in(scope, user):
"""
Assert that the provided user is logged in to the session contained within
the scope.
"""
assert "user" in scope
assert scope["user"] == user
session = scope["session"]
# logged in!
assert SESSION_KEY in session
assert BACKEND_SESSION_KEY in session
assert HASH_SESSION_KEY in session
assert isinstance(
await get_user(scope), await database_sync_to_async(get_user_model)()
)
assert await get_user(scope) == user
@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_login_no_session_in_scope():
"""
Test to ensure that a `ValueError` is raised when trying to log in a user
to a scope that has no session.
"""
msg = (
"Cannot find session in scope. You should wrap your consumer in "
"SessionMiddleware."
)
with pytest.raises(ValueError, match=msg):
await login(scope={}, user=None)
@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_login_no_user_in_scope(session):
"""
Test the login method to ensure it raises a `ValueError` if no user is
passed and there is no user in the scope.
"""
scope = {"session": session}
with pytest.raises(
ValueError,
match="User must be passed as an argument or must be present in the scope.",
):
await login(scope, user=None)
@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_login_user_as_argument(session, user_bob):
"""
Test that one can log in to a scope that has a session by passing the scope
and user as arguments to the login function.
"""
scope = {"session": session}
assert isinstance(await get_user(scope), AnonymousUser)
# not logged in
assert SESSION_KEY not in session
async with CatchSignal(user_logged_in) as handler:
assert not handler.called
await login(scope, user=user_bob)
assert handler.called
await assert_is_logged_in(scope, user_bob)
@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_login_user_on_scope(session, user_bob):
"""
Test that in the absence of a user being passed to the `login` function the
function will use the user set on the scope.
"""
scope = {"session": session, "user": user_bob}
# check that we are not logged in on the session
assert isinstance(await get_user(scope), AnonymousUser)
async with CatchSignal(user_logged_in) as handler:
assert not handler.called
await login(scope, user=None)
assert handler.called
await assert_is_logged_in(scope, user_bob)
@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_login_change_user(session, user_bob, user_bill):
"""
Test logging in a second user into a scope where another user is already logged in.
"""
scope = {"session": session}
# check that we are not logged in on the session
assert isinstance(await get_user(scope), AnonymousUser)
async with CatchSignal(user_logged_in) as handler:
assert not handler.called
await login(scope, user=user_bob)
assert handler.called
await assert_is_logged_in(scope, user_bob)
session_key = session[SESSION_KEY]
assert session_key
async with CatchSignal(user_logged_in) as handler:
assert not handler.called
await login(scope, user=user_bill)
assert handler.called
await assert_is_logged_in(scope, user_bill)
assert session_key != session[SESSION_KEY]
@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_logout(session, user_bob):
"""
Test that one can log out a user from a logged-in session.
"""
scope = {"session": session}
# check that we are not logged in on the session
assert isinstance(await get_user(scope), AnonymousUser)
async with CatchSignal(user_logged_in) as handler:
assert not handler.called
await login(scope, user=user_bob)
assert handler.called
await assert_is_logged_in(scope, user_bob)
assert SESSION_KEY in session
session_key = session[SESSION_KEY]
assert session_key
async with CatchSignal(user_logged_out) as handler:
assert not handler.called
await logout(scope)
assert handler.called
assert isinstance(await get_user(scope), AnonymousUser)
assert isinstance(scope["user"], AnonymousUser)
assert SESSION_KEY not in session
@pytest.mark.django_db(transaction=True)
@pytest.mark.asyncio
async def test_logout_not_logged_in(session):
"""
Test that the `logout` function does nothing in the case where there is no
user logged in.
"""
scope = {"session": session}
# check that we are not logged in on the session
assert isinstance(await get_user(scope), AnonymousUser)
async with CatchSignal(user_logged_out) as handler:
assert not handler.called
await logout(scope)
assert not handler.called
assert "user" not in scope
assert isinstance(await get_user(scope), AnonymousUser)
| bsd-3-clause | 16b7975007c96954736a2310a77f0dcd | 26.861111 | 86 | 0.681242 | 3.915784 | false | true | false | false |
jjhelmus/nmrglue | tests/test_spinsolve.py | 2 | 1779 | """ Tests for the fileio.spinsolve submodule """
import nmrglue as ng
from pathlib import Path
from setup import DATA_DIR
def test_acqu():
""" read nmr_fid.dx """
dic, data = ng.spinsolve.read(Path(DATA_DIR) / "spinsolve" / "ethanol", "nmr_fid.dx")
assert dic["acqu"]["Sample"] == "EtOH"
assert dic["acqu"]["Solvent"] == "None"
def test_jcamp_dx():
""" read nmr_fid.dx """
dic, data = ng.spinsolve.read(Path(DATA_DIR) / "spinsolve" / "ethanol", "nmr_fid.dx")
assert data.size == 32768
assert data.shape == (32768,)
assert "Magritek Spinsolve" in dic["dx"]["_comments"][0]
def test_data1d():
""" read nmr_fid.dx """
dic, data = ng.spinsolve.read(Path(DATA_DIR) / "spinsolve" / "ethanol", "data.1d")
assert dic["spectrum"]["xDim"] == 32768
assert len(dic["spectrum"]["xaxis"]) == 32768
assert data.size == 32768
assert data.shape == (32768,)
def test_guess_acqu():
""" guess_udic based on acqu dictionary """
dic, data = ng.spinsolve.read(Path(DATA_DIR) / "spinsolve" / "ethanol", "nmr_fid.dx")
udic = ng.spinsolve.guess_udic(dic, data)
assert udic[0]["sw"] == 5000
assert 43.49 < udic[0]["obs"] < 43.50
assert 206 < udic[0]["car"] < 207
assert udic[0]["size"] == 32768
assert udic[0]["label"] == "1H"
def test_guess_jcamp_dx():
""" guess_udic based on dx dictionary """
dic, data = ng.spinsolve.read(Path(DATA_DIR) / "spinsolve" / "ethanol", "nmr_fid.dx")
# Drop acqu dict that would be used as default
dic["acqu"] = {}
udic = ng.spinsolve.guess_udic(dic, data)
assert 4999 < udic[0]["sw"] < 5001
assert 43.49 < udic[0]["obs"] < 43.50
assert 206 < udic[0]["car"] < 207
assert udic[0]["size"] == 32768
assert udic[0]["label"] == "1H"
| bsd-3-clause | 1db00bb5a30cea72a3d27802eb5a0c9d | 30.210526 | 89 | 0.600337 | 2.79717 | false | true | false | false |
jjhelmus/nmrglue | nmrglue/fileio/rnmrtk.py | 2 | 22364 | """
Functions for reading and writing Rowland NMR Toolkit (RNMRTK) files
"""
from __future__ import division
__developer_info__ = """
Information of the Rowland NMR Toolkit file format
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
"""
import numpy as np
from warnings import warn
from . import fileiobase
###################
# unit conversion #
###################
def make_uc(dic, data, dim=-1):
"""
Create a unit conversion object
Parameters
----------
dic : dict
Dictionary of RNMRTK parameters.
data : ndarray
Array of NMR data.
dim : int, optional
Dimension number to create unit conversion object for. Default is for
the last dimension.
Returns
-------
uc : unit conversion object.
Unit conversion object for given dimension.
"""
if dim < 0: # negative dimensions
dim = data.ndim + dim
size = data.shape[dim] # R|I
ddim = find_dic_dim(dic, dim)
cplx = {'R': False, 'C': True}[dic['nptype'][ddim]]
sw = dic['sw'][ddim] # Hz
obs = dic['sf'][ddim] # MHz
car = dic['ppm'][ddim] * obs # Hz
return fileiobase.unit_conversion(size, cplx, sw, obs, car)
#################
# data creation #
#################
def create_data(data):
"""
Create a RNMRTK data array (recast into float32 or complex64)
"""
if np.iscomplexobj(data):
return np.array(data, dtype="complex64")
else:
return np.array(data, dtype="float32")
########################
# universal dictionary #
########################
def guess_udic(dic, data):
"""
Guess parameters of a universal dictionary from a dic, data pair.
Parameters
----------
dic : dict
Dictionary of RNMRTK parameters.
data : ndarray
Array of NMR data.
Returns
-------
udic : dict
Universal dictionary of spectral parameters.
"""
# create an empty universal dictionary
ndim = dic['ndim']
udic = fileiobase.create_blank_udic(ndim)
# fill in parameters from RNMRTK dictionary for each dimension
for iudim in range(ndim):
# find the corresponding dimension in the RNMRTK parameter dictionary
idim = find_dic_dim(dic, iudim)
udic[iudim]['encoding'] = dic['quad'][idim].lower()
udic[iudim]['car'] = dic['ppm'][idim] * dic['sf'][idim]
udic[iudim]['obs'] = dic['sf'][idim]
udic[iudim]['sw'] = dic['sw'][idim]
udic[iudim]['size'] = dic['npts'][idim]
# set quadrature and correct size
if dic['nptype'][idim] == 'C':
if iudim != ndim - 1:
udic[iudim]['size'] *= 2 # don't double size of last dim
udic[iudim]['complex'] = True
else:
udic[iudim]['complex'] = False
# set label to T1 or F1, etc
udic[iudim]['label'] = dic['dom'][idim] + str(idim + 1)
# set time or frequency domain
if dic['dom'][idim].upper() == 'T':
udic[iudim]['freq'] = False
udic[iudim]['time'] = True
else:
udic[iudim]['freq'] = True
udic[iudim]['time'] = False
return udic
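The size handling in guess_udic can be isolated; a sketch of the doubling rule, assuming the per-axis lists are already in universal-dictionary order:

```python
# Hypothetical sketch of guess_udic's size rule: complex ('C') axes store
# half the real-point count, so every complex axis except the last is
# doubled when filling the universal dictionary.
def udic_sizes(npts, nptypes):
    ndim = len(npts)
    return [n * 2 if t == 'C' and i != ndim - 1 else n
            for i, (n, t) in enumerate(zip(npts, nptypes))]

print(udic_sizes([128, 1024], ['C', 'C']))  # -> [256, 1024]
```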
def create_dic(udic, dim_order=None):
"""
Create a RNMRTK dictionary from a universal dictionary.
Parameters
----------
udic : dict
Universal dictionary of spectral parameters.
dim_order : list, optional
List mapping axis numbers in the universal dictionary to the order
in which they will appear in the RNMRTK dictionary. If None, the
default [0, 1, 2, ...] is used.
Returns
--------
dic : dict
Dictionary of RNMRTK parameters.
"""
# create the RNMRTK dictionary and fill with some default values
dic = {}
dic['comment'] = ''
dic['ndim'] = ndim = int(udic['ndim'])
dic['format'] = np.dtype('float32').str
if dim_order is None:
dim_order = range(ndim) # default to 0, 1, 2, ...
# set various parameters from the universal dictionary
dic['dom'] = [['F', 'T'][udic[i]['time']] for i in dim_order]
dic['nptype'] = [['R', 'C'][udic[i]['complex']] for i in dim_order]
dic['ppm'] = [udic[i]['car'] / udic[i]['obs'] for i in dim_order]
dic['sf'] = [udic[i]['obs'] for i in dim_order]
dic['sw'] = [udic[i]['sw'] for i in dim_order]
dic['npts'] = [udic[i]['size'] for i in dim_order]
dic['quad'] = [udic[i]['encoding'].lower() for i in dim_order]
# xfirst and xstep are freq domain values, corrected later for time domain
dic['xfirst'] = [-0.5 * i for i in dic['sw']]
dic['xstep'] = [udic[i]['sw'] / udic[i]['size'] for i in dim_order]
# these we guess on, they may be incorrect
dic['cphase'] = [0.0] * ndim
dic['lphase'] = [0.0] * ndim
dic['nacq'] = [udic[i]['size'] for i in dim_order]
# make small corrections as needed
rnmrtk_quads = ['states', 'states-tppi', 'tppi', 'tppi-redfield']
for i in range(ndim):
# fix quadrature if not a valid RNMRTK quadrature
if dic['quad'][i] not in rnmrtk_quads:
dic['quad'][i] = 'states'
# fix parameters if time domain data
if dic['dom'][i] == 'T': # time domain data
dic['xfirst'][i] = 0.0
if dic['quad'][i] in ['states']:
# states time domain data
dic['xstep'][i] = 1. / dic['sw'][i]
else:
# tppi time domain data
dic['xstep'][i] = 0.5 / dic['sw'][i]
# half the number of points if dimension is complex
if dic['nptype'][i] == 'C':
dic['npts'][i] //= 2
# determine and set layout
size = [udic[i]['size'] for i in range(ndim)]
domains = [dic['dom'][i] + str(i + 1) for i in range(ndim)]
# correct size of last dimension if complex
if dic['nptype'][dim_order[-1]] == 'C':
dic['npts'][dim_order[-1]] *= 2
size[-1] *= 2
if dic['dom'][dim_order[-1]] == 'F':
dic['xstep'][dim_order[-1]] /= 2.
dic['layout'] = (size, domains)
return dic
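The States vs. TPPI dwell-time correction applied above can be restated on its own: States quadrature advances the time axis by a full dwell (1/sw) per complex increment, while TPPI-style quadratures advance by half a dwell (0.5/sw). A minimal sketch (the helper name is illustrative, not part of nmrglue):

```python
# Hypothetical helper restating the xstep rule used above: a full dwell
# for States sampling, half a dwell for TPPI-style quadratures.
def time_domain_xstep(sw, quad):
    if quad == 'states':
        return 1.0 / sw
    return 0.5 / sw

print(time_domain_xstep(10000.0, 'states'))  # 0.0001
print(time_domain_xstep(10000.0, 'tppi'))    # 5e-05
```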
#######################
# Reading and Writing #
#######################
def read(filename, par_file=None):
"""
Read RNMRTK files.
Parameters
----------
filename : str
Filename of RNMRTK file to read (.sec).
par_file : str or None, optional
Filename of RNMRTK parameter file. If None (default) the last four
characters of `filename` are changed to .par.
Returns
-------
dic : dict
Dictionary of RNMRTK parameters.
data : ndarray
Array of NMR data.
Notes
-----
The dictionary parameters are ordered opposite the data layout, that is to
say that the FIRST parameter in each list corresponds to the LAST axis in
the data array.
See Also
--------
read_lowmem : Read RNMRTK files with minimal memory usage.
write : Write RNMRTK files.
"""
# determine par_file name if not given
if par_file is None:
par_file = filename[:-4] + ".par"
dic = read_par(par_file)
# determine sec file parameters from parameter dictionary
dtype = dic["format"]
shape = dic["layout"][0]
cplex = {'R': False, 'C': True}[dic['nptype'][-1]]
# read in the data
data = read_sec(filename, dtype, shape, cplex)
return dic, data
def read_lowmem(filename, par_file=None):
"""
Read RNMRTK files with minimal memory usage
Parameters
----------
filename : str
Filename of RNMRTK file to read (.sec).
par_file : str or None, optional
Filename of RNMRTK parameter file. If None (default) the last four
characters of `filename` are changed to .par.
Returns
-------
dic : dict
Dictionary of RNMRTK parameters.
data : array_like
Low memory object which can access NMR data on demand.
Notes
-----
The dictionary parameters are ordered opposite the data layout, that is to
say that the FIRST parameter in each list corresponds to the LAST axis in
the data array.
See Also
--------
read : Read RNMRTK files.
write : Write RNMRTK files.
"""
# determine par_file name if not given
if par_file is None:
par_file = filename[:-4] + ".par"
dic = read_par(par_file)
# determine shape, complexity and endianness from dictionary
fshape = list(dic["layout"][0])
cplex = {'R': False, 'C': True}[dic['nptype'][-1]]
if cplex:
fshape[-1] //= 2
big = {'<': False, '>': True}[dic['format'][0]]
data = rnmrtk_nd(filename, fshape, cplex, big)
return dic, data
def write(filename, dic, data, par_file=None, overwrite=False):
"""
Write RNMRTK files.
Parameters
----------
filename : str
Filename of RNMRTK file to write to (.sec).
dic : dict
Dictionary of RNMRTK parameters.
data : ndarray
Array of NMR data.
par_file : str or None, optional
Filename of RNMRTK parameter file. If None (default) the last four
characters of `filename` are changed to .par.
overwrite : bool, optional
True to overwrite existing files. False will raise a Warning if the
file exists.
See Also
--------
write_lowmem : Write RNMRTK files using minimal amounts of memory.
read : Read RNMRTK files.
"""
# determine par_file name if not given
if par_file is None:
par_file = filename[:-4] + ".par"
write_par(par_file, dic, overwrite)
dtype = dic["format"]
write_sec(filename, data, dtype, overwrite)
def write_lowmem(filename, dic, data, par_file=None, overwrite=False):
"""
Write RNMRTK files using minimal amounts of memory (trace by trace).
Parameters
----------
filename : str
Filename of RNMRTK file to write to (.sec).
dic : dict
Dictionary of RNMRTK parameters.
data : array_like
Array of NMR data.
par_file : str or None, optional
Filename of RNMRTK parameter file. If None (default) the last four
characters of `filename` are changed to .par.
overwrite : bool, optional
True to overwrite existing files. False will raise a Warning if the
file exists.
See Also
--------
write : Write RNMRTK files.
read_lowmem : Read RNMRTK files using minimal amounts of memory.
"""
# determine par_file name if not given
if par_file is None:
par_file = filename[:-4] + ".par"
write_par(par_file, dic, overwrite)
# open the file for writing
f = fileiobase.open_towrite(filename, overwrite=overwrite, mode='wb')
# write out the file trace by trace
for tup in np.ndindex(data.shape[:-1]):
put_trace(f, data[tup])
f.close()
return
#######################
# sec reading/writing #
#######################
def write_sec(filename, data, dtype='f4', overwrite=False):
"""
Write a RNMRTK .sec file.
Parameters
----------
filename : str
Filename of RNMRTK file to write to (.sec).
data : array_like
Array of NMR data.
dtype : dtype
Data type to convert data to before writing to disk.
overwrite : bool, optional
True to overwrite existing files. False will raise a Warning if the
file exists.
See Also
--------
write : Write RNMRTK files.
"""
# open file
f = fileiobase.open_towrite(filename, overwrite, mode='wb')
# interleave real/imag if needed
if np.iscomplexobj(data):
data = interleave_data(data)
# write data and close file
f.write(data.astype(dtype).tobytes())
f.close()
return
def read_sec(filename, dtype, shape, cplex):
"""
Read a RNMRTK data file (.sec).
Parameters
----------
filename : str
Filename of RNMRTK (.sec) file to read.
dtype : dtype
Type of data in file, typically 'float32'.
shape : tuple
Shape of data.
cplex : bool
True if the last (fast) dimension is complex, False if real only.
Returns
-------
data : ndarray
Array of NMR data.
"""
data = get_data(filename, dtype)
data = data.reshape(shape)
if cplex:
data = uninterleave_data(data)
return data
##########################
# data get/put functions #
##########################
def get_data(filename, dtype):
"""
Get spectral data from a RNMRTK file.
Parameters
----------
filename : str
Filename of RNMRTK file (.sec) to get data from.
dtype : dtype
Type of data in file, typically 'float32'
Returns
-------
rdata : ndarray
Raw NMR data, unshaped and typically not complex.
"""
return np.fromfile(filename, dtype)
def get_trace(f, num_points, big):
"""
Get a trace from an open RNMRTK file.
Parameters
-----------
f : file object
Open file object to read from.
num_points : int
Number of points in trace (R+I)
big : bool
True for data that is big-endian, False for little-endian.
Returns
-------
trace : ndarray
Raw trace of NMR data.
"""
if big:
bsize = num_points * np.dtype('>f4').itemsize
return np.frombuffer(f.read(bsize), dtype='>f4')
else:
bsize = num_points * np.dtype('<f4').itemsize
return np.frombuffer(f.read(bsize), dtype='<f4')
def put_trace(f, trace):
"""
Put a trace to an open RNMRTK file.
Parameters
----------
f : file object
Open file object to write to.
trace : ndarray
Raw trace of NMR data, may be complex64.
"""
f.write(trace.view('float32').tobytes())
def uninterleave_data(data):
"""
Remove interleaving of real, imag data in the last dimension of data.
"""
return data.view('complex64')
def interleave_data(data):
"""
Interleave real, imag data in data
"""
return data.view('float32')
# size = list(data.shape)
# size[-1] = size[-1]*2
# data_out = np.empty(size,dtype="float32")
# data_out[...,::2] = data.real
# data_out[...,1::2] = data.imag
# return data_out
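The view-based (un)interleaving above relies on NumPy's memory layout for complex64: viewing the buffer as float32 exposes R, I, R, I, ... along the last axis, and viewing back recovers the complex pairs. A standalone round trip, assuming C-contiguous data:

```python
import numpy as np

c = np.array([[1 + 2j, 3 + 4j]], dtype='complex64')
f = c.view('float32')        # interleaved real/imag values
print(f.tolist())            # [[1.0, 2.0, 3.0, 4.0]]
r = f.view('complex64')      # back to complex pairs
print(np.allclose(r, c))     # True
```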
######################
# low-memory objects #
######################
class rnmrtk_nd(fileiobase.data_nd):
"""
Emulate a ndarray objects without loading data into memory for low memory
reading of RNMRTK files.
* slicing operations return ndarray objects.
* can iterate over with expected results.
* transpose and swapaxes methods create a new objects with correct axes
ordering.
* has ndim, shape, and dtype attributes.
Parameters
----------
filename : str
Filename of RNMRTK file (.sec) to read.
fshape : tuple of ints
Shape of data in file.
cplex : bool
True if the last (fast) axis is complex.
big : bool
True for big-endian data, False for little-endian.
order : tuple
Ordering of axes against file. None for (0, 1, 2, ...).
"""
def __init__(self, filename, fshape, cplex, big, order=None):
"""
Create and set up
"""
# check and set order
if order is None:
order = range(len(fshape))
self.order = order
# set additional parameters
self.fshape = fshape # shape on disk
self.cplex = cplex
self.filename = filename
self.big = big
if self.cplex:
self.dtype = np.dtype('complex64')
else:
self.dtype = np.dtype('float32')
self.__setdimandshape__() # set ndim and shape attributes
def __fcopy__(self, order):
"""
Create a copy
"""
n = rnmrtk_nd(self.filename, self.fshape, self.cplex, self.big, order)
return n
def __fgetitem__(self, slices):
"""
Return ndarray of selected values.
slices is a well formatted tuple of slices
"""
# separate the last slice from the leading slices
lslice = slices[-1]
fslice = slices[:-1]
# and the same for fshape
lfshape = self.fshape[-1]
ffshape = self.fshape[:-1]
# find the output size and make a to/from nd_iterator
osize, nd_iter = fileiobase.size_and_ndtofrom_iter(ffshape, fslice)
osize.append(len(range(lfshape)[lslice]))
# create an empty array to store the selected slices
out = np.empty(tuple(osize), dtype=self.dtype)
# open the file for reading
f = open(self.filename, 'rb')
# read in the data trace by trace
for out_index, in_index in nd_iter:
# determine the trace number from the index
ntrace = fileiobase.index2trace_flat(ffshape, in_index)
# seek to the correct place in the file
if self.cplex:
ts = ntrace * lfshape * 2 * 4
f.seek(ts)
trace = get_trace(f, lfshape * 2, self.big)
trace = uninterleave_data(trace)
else:
ts = ntrace * lfshape * 4
f.seek(ts)
trace = get_trace(f, lfshape, self.big)
# put the trace into the output array
out[out_index] = trace[lslice]
# close the file and return
f.close()
return out
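The seek arithmetic above (ts = ntrace * lfshape * 2 * 4) can be shown standalone; the helper name here is illustrative only. Each trace holds lfshape float32 points at 4 bytes each, doubled when the last axis is complex:

```python
# Illustrative restatement of the byte-offset arithmetic used above.
def trace_offset(ntrace, npts, cplex):
    itemsize = 4                   # float32
    per_point = 2 if cplex else 1  # real + imag pairs when complex
    return ntrace * npts * per_point * itemsize

print(trace_offset(3, 128, True))   # 3072
print(trace_offset(3, 128, False))  # 1536
```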
##################################
# Parameter dictionary utilities #
##################################
def find_dic_dim(dic, dim):
"""
Find dimension in dictionary which corresponds to array dimension.
Parameters
----------
dic : dict
Dictionary of RNMRTK parameters.
dim : int, non-negative
Dimension of data array.
Returns
-------
ddim : int
Dimension in dic which corresponds to array dimension, dim.
"""
dic_dims = [int(i[1]) - 1 for i in dic['layout'][1]]
return dic_dims.index(dim)
def find_array_dim(dic, ddim):
"""
Find array dimension which corresponds to dictionary dimension.
Parameters
----------
dic : dict
Dictionary of RNMRTK parameters.
ddim : int, non-negative
Dimension in dictionary.
Returns
-------
dim : int
Dimension in array which corresponds to dictionary dimension, ddim.
"""
dic_dims = [int(i[1]) for i in dic['layout'][1]]
return dic_dims[ddim]
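Both helpers above key off the 1-based dimension number embedded in each layout domain label (e.g. 'T2'). A self-contained illustration with a made-up layout (not real RNMRTK data):

```python
# Hypothetical layout domains: second character is a 1-based dim number.
layout_domains = ['T2', 'T1']
dic_dims = [int(d[1]) - 1 for d in layout_domains]
print(dic_dims)           # [1, 0]
print(dic_dims.index(0))  # 1 -> dictionary entry describing array dim 0
```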
############################
# parameter file functions #
############################
def read_par(filename):
"""
Parse a RNMRTK parameter (.par) file.
Parameters
----------
filename : str
Filename of RNMRTK parameter file (.par) to read.
Returns
-------
dic : dict
Dictionary of RNMRTK parameters.
"""
dic = {}
with open(filename, 'r') as f:
for line in f:
if len(line.split()) >= 2:
parse_par_line(line, dic)
# check that order and layout match, if they do remove from dictionary
if dic['order'] != [int(i[1]) for i in dic['layout'][1]]:
warn('Dom order and layout order do not match')
else:
dic.pop('order')
return dic
def write_par(par_file, dic, overwrite):
"""
Write a RNMRTK parameter file (.par).
Parameters
-----------
par_file : str
Filename of RNMRTK parameter file (.par) to write.
dic : dict
Dictionary of NMR parameters.
overwrite : bool
Set True to overwrite existing files, False will raise a Warning if the
file exists.
"""
# open file for writing
f = fileiobase.open_towrite(par_file, overwrite, mode='w')
# write comment line
f.write('Comment \'' + dic['comment'] + '\'\n')
# Dom line, set from layout
l = "Dom " + " ".join(dic['layout'][1])
f.write(l + "\n")
# N line
s = ["%14i %c" % (t) for t in zip(dic['npts'], dic['nptype'])]
l = "N".ljust(8) + "".join(s)
f.write(l + "\n")
# write out additional lines Command Value lines
order = ['Sw', 'Xfirst', 'Xstep', 'Cphase', 'Lphase', 'Sf', 'Ppm', 'Nacq']
codes = {'Sw': '%16.3f', 'Xfirst': '%16.5f', 'Xstep': '%16.5G',
'Cphase': '%16.3f', 'Lphase': '%16.3f', 'Sf': '%16.2f',
'Ppm': '%16.3f', 'Nacq': '%16i'}
for lc in order:
t = [codes[lc] % i for i in dic[lc.lower()]]
l = lc.ljust(8) + "".join(t)
f.write(l + "\n")
# Quad line
quad_dic = {'states': 'States', 'states-tppi': 'States-TPPI',
'tppi': 'TPPI', 'tppi-redfield': 'TPPI-Redfield'}
t = ["%16s" % (quad_dic[i]) for i in dic['quad']]
l = "Quad".ljust(8) + "".join(t)
f.write(l + "\n")
# write format line
if dic['format'][0] == '<':
f.write('Format Little-endian IEEE-Float\n')
else:
f.write('Format Big-endian IEEE-Float\n')
l = "Layout " + " ".join([j + ":" + str(i) for i, j in
zip(*dic['layout'])])
f.write(l + "\n")
f.close()
return
def parse_par_line(line, dic):
"""
Parse a line from a RNMRTK parameter file (.par).
"""
c, pl = line.split()[0], line.split()[1:]
c = c.upper()
if c == 'COMMENT':
dic['comment'] = pl[0].strip('\'')
elif c == 'DOM':
dom = [s[0] for s in pl] # dom as it appears in the file
dic['ndim'] = ndim = len(pl)
dic['order'] = order = [int(s[1]) for s in pl] # dimension order
# dom in ascending order (to match other parameter)
dic['dom'] = [dom[order.index(i)] for i in range(1, ndim + 1)]
elif c == 'N':
dic['npts'] = [int(i) for i in pl[::2]]
dic['nptype'] = list(pl[1::2])
elif c in ['SW', 'XFIRST', 'XSTEP', 'CPHASE', 'LPHASE', 'SF', 'PPM']:
dic[c.lower()] = [float(i) for i in pl]
elif c == "NACQ":
dic['nacq'] = [int(i) for i in pl]
elif c == "QUAD":
dic['quad'] = [str(s).lower() for s in pl]
# format assumes IEEE-Float type, only checks endianness
elif c == 'FORMAT':
if pl[0].upper() == "LITTLE-ENDIAN":
dic['format'] = '<f4'
else:
dic['format'] = '>f4'
elif c == 'LAYOUT':
size = [int(p.split(":")[1]) for p in pl]
domains = [p.split(":")[0] for p in pl]
dic['layout'] = size, domains
return
| bsd-3-clause | dceda070939f1cfcfb4be35fc2c0fab4 | 26.107879 | 79 | 0.553702 | 3.653651 | false | false | false | false |
jjhelmus/nmrglue | tests/test_simpson.py | 4 | 5071 |
import nmrglue.fileio.simpson as simpson
import numpy as np
from numpy.testing import assert_allclose, assert_raises
import os.path
from setup import DATA_DIR
DD_1D = os.path.join(DATA_DIR, 'simpson_1d')
DD_2D = os.path.join(DATA_DIR, 'simpson_2d')
def test_1d_time():
""" reading 1D time domain files """
# read the text, binary, xreim, and rawbin data
text_dic, text_data = simpson.read(os.path.join(DD_1D, '1d_text.fid'))
bin_dic, bin_data = simpson.read(os.path.join(DD_1D, '1d_bin.fid'))
xreim_units, xreim_data = simpson.read(os.path.join(DD_1D, '1d_ftext.fid'))
rd, rawbin_data = simpson.read(
os.path.join(DD_1D, '1d_rawbin.fid'), spe=False, ndim=1)
# check data in text file
assert text_data.shape == (4096, )
assert text_data.dtype == 'complex64'
assert np.abs(text_data[0].real - 2.0) <= 0.01
assert np.abs(text_data[0].imag - 0.0) <= 0.01
assert np.abs(text_data[1].real - 1.78) <= 0.01
assert np.abs(text_data[1].imag - -0.01) <= 0.01
# data in all files should be close
assert np.allclose(rawbin_data, text_data)
assert np.allclose(rawbin_data, bin_data)
assert np.allclose(rawbin_data, xreim_data)
def test_1d_freq():
""" reading 1D freq domain files """
# read the text, binary, xreim, and rawbin data
text_dic, text_data = simpson.read(os.path.join(DD_1D, '1d_text.spe'))
bin_dic, bin_data = simpson.read(os.path.join(DD_1D, '1d_bin.spe'))
xreim_units, xreim_data = simpson.read(os.path.join(DD_1D, '1d_ftext.spe'))
rd, rawbin_data = simpson.read(
os.path.join(DD_1D, '1d_rawbin.spe'), spe=True, ndim=1)
# check data in text file
assert text_data.shape == (4096, )
assert text_data.dtype == 'complex64'
assert np.abs(text_data[2048].real - 40.34) <= 0.01
assert np.abs(text_data[2048].imag - -1.51) <= 0.01
assert np.abs(text_data[2049].real - 39.58) <= 0.01
assert np.abs(text_data[2049].imag - -3.97) <= 0.01
# data in all file should be close
assert np.allclose(rawbin_data, text_data)
assert np.allclose(rawbin_data, bin_data)
assert np.allclose(rawbin_data, xreim_data)
def test_2d_time():
""" reading 2D time domain files """
# read the text, binary, xreim, and rawbin data
text_dic, text_data = simpson.read(os.path.join(DD_2D, '2d_text.fid'))
bin_dic, bin_data = simpson.read(os.path.join(DD_2D, '2d.fid'))
xyreim_units, xyreim_data = simpson.read(
os.path.join(DD_2D, '2d_ftext.fid'))
rd, rawbin_data = simpson.read(
os.path.join(DD_2D, '2d_raw.fid'), NP=128, NI=48, ndim=2, spe=False)
# check data in text file
assert text_data.shape == (48, 128)
assert text_data.dtype == 'complex64'
assert np.abs(text_data[0, 0].real - 1.00) <= 0.01
assert np.abs(text_data[0, 0].imag - 0.03) <= 0.01
assert np.abs(text_data[0, 1].real - 0.75) <= 0.01
assert np.abs(text_data[0, 1].imag - 0.59) <= 0.01
assert np.abs(text_data[1, 0].real - 0.89) <= 0.01
assert np.abs(text_data[1, 0].imag - 0.03) <= 0.01
# data in all files should be close
assert np.allclose(rawbin_data, text_data)
assert np.allclose(rawbin_data, bin_data)
assert np.allclose(rawbin_data, xyreim_data)
def test_2d_freq():
""" reading 2D freq domain files """
# read the text, binary, xreim, and rawbin data
text_dic, text_data = simpson.read(os.path.join(DD_2D, '2d_text.spe'))
bin_dic, bin_data = simpson.read(os.path.join(DD_2D, '2d.spe'))
xyreim_units, xyreim_data = simpson.read(
os.path.join(DD_2D, '2d_ftext.spe'))
rd, rawbin_data = simpson.read(
os.path.join(DD_2D, '2d_raw.spe'), ndim=2, NP=256, NI=512, spe=True)
# check data in text file
assert text_data.shape == (512, 256)
assert text_data.dtype == 'complex64'
assert np.abs(text_data[4, 150].real - 0.29) <= 0.01
assert np.abs(text_data[4, 150].imag - 0.34) <= 0.01
assert np.abs(text_data[4, 151].real - 0.13) <= 0.01
assert np.abs(text_data[4, 151].imag - 0.16) <= 0.01
assert np.abs(text_data[5, 150].real - 0.41) <= 0.01
assert np.abs(text_data[5, 150].imag - 0.14) <= 0.01
# data in text, bin and xyreim files should all be close
assert np.allclose(text_data, bin_data)
assert np.allclose(text_data, xyreim_data)
# rawbin should be close except for first point along each vector
assert np.allclose(rawbin_data[:, 1:], text_data[:, 1:])
def test_exceptions_read():
""" raising exceptions due to missing read parameters """
# missing spe parameter
assert_raises(
ValueError, simpson.read, os.path.join(DD_1D, '1d_rawbin.fid'))
# missing ndim parameter
assert_raises(
ValueError, simpson.read, os.path.join(DD_1D, '1d_rawbin.fid'),
spe=False)
# missing NP/NI parameter
assert_raises(
ValueError, simpson.read, os.path.join(DD_2D, '2d_raw.fid'),
spe=False, ndim=2)
# bad ftype
assert_raises(
ValueError, simpson.read, os.path.join(DD_1D, '1d_rawbin.fid'),
ftype='a')
| bsd-3-clause | 225388e3f6608dd53f71b9604a1a61ef | 37.12782 | 79 | 0.634786 | 2.717578 | false | false | false | false |
jjhelmus/nmrglue | nmrglue/analysis/analysisbase.py | 2 | 10485 | """
analysisbase provides general purpose analysis functions and classes used by
several nmrglue.analysis modules
"""
import numpy as np
pi = np.pi
# helper functions
def neighbors(pt, shape, structure):
"""
Generate a list of all neighbors to a point.
Parameters
----------
pt : tuple of ints
Index of the point to find neighbors of.
shape : tuple of ints
Shape of the region.
structure : ndarray of bools
Structure element that defines connections.
Returns
-------
pts : list of int tuples
List of tuples which represent indices for all points neighboring pt.
Edges are treated as stopping points.
"""
# set middle of structure to False
s = np.copy(structure) # copy structure
middle = [int(np.floor(i / 2.)) for i in s.shape] # find middle
s.flat[np.ravel_multi_index(middle, s.shape)] = False
offsets = np.argwhere(s) - middle
# loop over the offset adding all valid points
pts = []
for offset in offsets:
npt = pt - offset
if valid_pt(npt, shape):
pts.append(tuple(npt))
return pts
def valid_pt(pt, shape):
"""
Determine if a point (indices) is valid for a given shape.
"""
for i, j in zip(pt, shape):
if i < 0: # index is negative
return False
if i >= j: # index is at or beyond the edge
return False
return True
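The bounds test in valid_pt can be collapsed to a single expression; this equivalent (illustrative) form makes the accepted range [0, size) explicit:

```python
# Equivalent one-liner for the bounds check: every index must lie in
# [0, size) along its axis.
def in_bounds(pt, shape):
    return all(0 <= i < j for i, j in zip(pt, shape))

print(in_bounds((2, 3), (3, 4)))   # True
print(in_bounds((3, 0), (3, 4)))   # False
print(in_bounds((0, -1), (3, 4)))  # False
```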
dimension_names = ['A', 'Z', 'Y', 'X']
# utility functions
def find_limits(pts):
"""
Find the limits which outline the provided list of points
Parameters
----------
pts : list of int tuples
List of points [(z0, y0, x0), (z1, y1, x1), ...]
Returns
-------
min : ndarray
Array of minimum indices: array([zmin, ymin, xmin])
max : ndarray
Array of maximum indices: array([zmax, ymax, xmax])
See Also
--------
limits2slice : Create a list of slices from min, max limits
"""
arr_pts = np.array(pts)
return np.min(arr_pts, 0), np.max(arr_pts, 0)
def limits2slice(limits):
"""
Create a set of slice objects given an array of min, max limits.
Parameters
----------
limits: tuple, (ndarray, ndarray)
Two tuple consisting of array of the minimum and maximum indices.
Returns
-------
slices : list
List of slice objects which return points between limits
See Also
--------
find_limits : Find the minimum and maximum limits from a list of points.
slice2limits : Find a minimum and maximum limits for a list of slices.
"""
mins, maxs = limits
return tuple([slice(i, j + 1) for i, j in zip(mins, maxs)])
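find_limits and limits2slice compose into a bounding-box selector over a list of points. A standalone sketch of that round trip:

```python
import numpy as np

pts = [(1, 2), (3, 0), (2, 4)]
arr = np.array(pts)
mins, maxs = arr.min(0), arr.max(0)  # what find_limits computes
# what limits2slice computes (int() keeps the repr free of NumPy scalars)
slices = tuple(slice(int(i), int(j) + 1) for i, j in zip(mins, maxs))
print(slices)  # (slice(1, 4, None), slice(0, 5, None))
```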
def slice2limits(slices):
"""
Create a tuple of minimum, maximum limits from a set of slices.
Parameters
----------
slices : list
List of slice objects which return points between limits
Returns
-------
limits: tuple, (ndarray, ndarray)
Two tuple consisting of array of the minimum and maximum indices.
See Also
--------
limits2slice : Find a list of slices given minimum and maximum limits.
"""
mins = [s.start for s in slices]
maxs = [s.stop - 1 for s in slices]
return mins, maxs
def squish(r, axis):
"""
Squish array along an axis.
Determine the sum along all but one axis for an array.
Parameters
----------
r : ndarray
Array to squish.
axis : int
Axis of r to squish along.
Returns
-------
s : 1D ndarray
Array r squished into a single dimension.
"""
# put axis to be squished as the last axis
N = int(r.ndim)
r = r.swapaxes(axis, N - 1)
# sum along leading axis N-1 times
for i in range(N - 1):
r = r.sum(0)
return r
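squish's repeated sum(0) after a swapaxes is equivalent to a single np.sum over every axis except the kept one. An equivalent standalone computation:

```python
import numpy as np

r = np.arange(24).reshape(2, 3, 4)
axis = 1
other = tuple(i for i in range(r.ndim) if i != axis)  # axes summed away
squished = r.sum(axis=other)
print(squished.tolist())  # [60, 92, 124]
```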
# Windowing classes
class ndwindow(object):
"""
An N-dimensional iterator to slice arrays into windows.
Given the shape of an array and a window size, an 'ndwindow' instance
iterates over tuples of slices which slice the array into wsize
sub-arrays. At each iteration, the index of the center of the sub-array
is incremented by one along the last dimension. Array borders are ignored
so the resulting sub-array can be smaller than wsize. If wsize contains
even values the window is off center containing an additional point with
lower index.
Parameters
----------
size : tuple of ints
Size of array to generate tuples of slices from.
wsize : tuple of ints
Window/sub-array size. Size of the area to select from array. This is
the maximum size of the window.
Examples
--------
>>> a = np.arange(12).reshape(3,4)
>>> for s in ndwindow(a.shape,(3,3)):
... print(a[s])
[[0 1]
[4 5]]
[[0 1 2]
[4 5 6]]
[[1 2 3]
[5 6 7]]
[[2 3]
[6 7]]
[[0 1]
[4 5]
[8 9]]
[[ 0 1 2]
[ 4 5 6]
[ 8 9 10]]
[[ 1 2 3]
[ 5 6 7]
[ 9 10 11]]
[[ 2 3]
[ 6 7]
[10 11]]
[[4 5]
[8 9]]
[[ 4 5 6]
[ 8 9 10]]
[[ 5 6 7]
[ 9 10 11]]
[[ 6 7]
[10 11]]
See Also
--------
ndwindow_index : Iterator of a ndwindow and index of the window center
ndwindow_inside : Iterator over equal sized windows in the array.
"""
def __init__(self, shape, wsize):
""" Set up the ndwindow object """
if len(shape) != len(wsize):
raise ValueError("shape and wsize do not match")
self.ndindex = np.ndindex(shape)
wsize = np.array(wsize)
self.sub = np.ceil((wsize - 1.) / 2.)
self.add = wsize - 1. - self.sub
def __next__(self):
""" next iterator. """
return self.next()
def next(self):
""" x.next() -> the next value, or raise StopIteration """
center = next(self.ndindex)
start = [max(0, i - j) for i, j in zip(center, self.sub)]
stop = [i + j + 1 for i, j in zip(center, self.add)]
return tuple([slice(x, y) for x, y in zip(start, stop)])
def __iter__(self):
""" x.__iter__() <==> iter(x) """
return self
class ndwindow_index(object):
"""
An N-dimensional iterator object which returns the index of the window
center and a :py:class:`ndwindow` slice array. See :py:class:`ndwindow`
for additional documentation.
This class is equivalent to:
for index, slices in zip(np.ndindex(shape), ndwindow(shape, wsize)):
return (index, slices)
See Also
--------
ndwindow: Iterator over only the window slices.
ndwindow_inside : Iterator over equal sized windows in the array.
"""
def __init__(self, shape, wsize):
""" Set up the object """
if len(shape) != len(wsize):
raise ValueError("shape and wsize do not match")
self.ndindex = np.ndindex(shape)
wsize = np.array(wsize)
self.sub = np.ceil((wsize - 1.) / 2.)
self.add = wsize - 1. - self.sub
def __next__(self):
""" next iterator. """
return self.next()
def next(self):
""" x.next() -> the next value, or raise StopIteration """
center = next(self.ndindex)
start = [max(0, i - j) for i, j in zip(center, self.sub)]
stop = [i + j + 1 for i, j in zip(center, self.add)]
return center, tuple([slice(x, y) for x, y in zip(start, stop)])
def __iter__(self):
""" x.__iter__() <==> iter(x) """
return self
class ndwindow_inside(object):
"""
An N-dimensional iterator to slice arrays into uniform size windows.
Given the shape of an array and a window size, an 'ndwindow_inside'
instance iterates over tuples of slices which slice the array into
uniform size wsize windows/sub-arrays. At each iteration, the index of
the top left of the sub-array is incremented by one along the last
dimension until the resulting windows would extend past the array border.
All sub-arrays are equal sized (wsize).
Parameters
----------
size : tuple of ints
Size of array to generate tuples of slices from.
wsize : tuple of ints
Size of the area to select from array (widow size).
Examples
--------
>>> a = np.arange(9).reshape(3,3)
>>> for s in ndwindow_inside(a.shape,(2,2)):
... print(a[s])
[[0 1]
[3 4]]
[[1 2]
[4 5]]
[[3 4]
[6 7]]
[[4 5]
[7 8]]
See Also
--------
ndwindow : Iterator over non-uniform windows.
ndwindow_inside_index : Iterator of a ndwindow_inside and the index of the
window's top left point.
"""
def __init__(self, shape, wsize):
""" Set up the object """
if len(shape) != len(wsize):
raise ValueError("shape and wsize do not match")
self.ndindex = np.ndindex(
tuple(np.array(shape) - np.array(wsize) + 1))
self.wsize = wsize
def __next__(self):
""" next iterator. """
return self.next()
def next(self):
""" x.next() -> the next value, or raise StopIteration """
start = next(self.ndindex)
stop = np.array(start) + np.array(self.wsize)
return tuple([slice(x, y) for x, y in zip(start, stop)])
def __iter__(self):
""" x.__iter__() <==> iter(x) """
return self
class ndwindow_inside_index(object):
"""
An N-dimensional iterator object which returns the index of the window
top-left and a :py:class:`ndwindow_inside` slice array.
Similar to :py:class:`ndwindow_index` but reports top left index of
window.
See :py:class:`ndwindow_inside` and :py:class:`ndwindow_index` for additional
documentation.
"""
def __init__(self, shape, wsize):
""" Set up the object """
if len(shape) != len(wsize):
raise ValueError("shape and wsize do not match")
self.ndindex = np.ndindex(
tuple(np.array(shape) - np.array(wsize) + 1))
self.wsize = wsize
def __next__(self):
""" next iterator. """
return self.next()
def next(self):
""" x.next() -> the next value, or raise StopIteration """
start = next(self.ndindex)
stop = np.array(start) + np.array(self.wsize)
return (start, tuple([slice(x, y) for x, y in zip(start, stop)]))
def __iter__(self):
""" x.__iter__() <==> iter(x) """
return self
| bsd-3-clause | d61f09a2111e8bce15066a64de32bcc4 | 26.093023 | 78 | 0.574344 | 3.754028 | false | false | false | false |
jjhelmus/nmrglue | nmrglue/analysis/linesh.py | 2 | 26044 | """
Functions for fitting and simulating arbitrary dimensional lineshapes commonly
found in NMR experiments
"""
from __future__ import print_function
import numpy as np
from .leastsqbound import leastsqbound
from .analysisbase import squish
from .lineshapes1d import ls_str2class
from ..fileio import table
pi = np.pi
# table packing/unpacking
def add_to_table(rec, columns, column_names):
"""
Add (append) multiple columns to a records array.
Parameters
----------
rec : recarray
Records array (table).
columns : list of ndarrays
List of columns data to append to table.
column_names : list of str
List of names of columns.
Returns
-------
nrec : recarray
Records array with columns added
"""
for col, col_name in zip(columns, column_names):
rec = table.append_column(rec, col, name=col_name)
return rec
def pack_table(pbest, abest, iers, rec, param_columns, amp_column,
ier_column=None):
"""
Pack fitting parameters into table
Parameters
----------
pbest : list
List of best-fit parameters. See :py:func:`fit_NDregion` for format.
abest : list
List of best-fit amplitudes.
iers : list
List of fitting error return values.
rec : recarray
Records array (table) to save fitting parameters into. Updated with
fitting parameter in place.
param_columns : list
List of parameter columns in rec. Format is the same as pbest.
amp_column : str
Name of amplitude column in rec.
ier_column : str or None, optional
Name of column in rec to save iers to. None will not record this in the
table.
"""
# pack the amplitudes
rec[amp_column] = abest
# pack the parameters
for dbest, dcolumns in zip(zip(*pbest), param_columns):
for p, c in zip(zip(*dbest), dcolumns):
rec[c] = p
# pack the iers
if ier_column is not None:
rec[ier_column] = iers
def unpack_table(rec, param_columns, amp_column):
"""
Unpack initial fitting parameters from a table.
Parameters
----------
rec : recarray
Records array (table) holding parameters.
param_columns : list
List of column names which hold lineshape parameters. See
:py:func:`fit_NDregion` for format.
amp_column : str
Name of columns in rec holding initial amplitudes.
Returns
-------
params : list
List of initial parameter in the format required for
:py:func:`fit_NDregion`.
amps : list
List of initial peak amplitudes.
"""
params = zip(*[zip(*[rec[c] for c in dc]) for dc in param_columns])
amps = rec[amp_column]
return params, amps
def estimate_scales(spectrum, centers, box_width, scale_axis=0):
"""
Estimate scale parameter for peaks in a spectrum.
Parameters
----------
spectrum : array_like
NMR spectral data. ndarray or emulated type which can be sliced.
centers : list
List of N-tuples indicating peak centers.
box_width : tuple
N-tuple indicating box width to add and subtract from peak centers to
form region around peak to fit.
scale_axis : int
Axis number to estimate scale parameters for.
Returns
-------
scales : list
List of estimated scale parameters.
"""
shape = spectrum.shape
bcenters = np.round(np.array(centers)).astype('int')
scales = []
# loop over the box centers
for bc in bcenters:
# calculate box limits
bmin = [max(c - w, 0) for c, w in zip(bc, box_width)]
bmax = [min(c + w + 1, s) for c, w, s in zip(bc, box_width, shape)]
# cut the spectrum and squish
s = tuple([slice(mn, mx) for mn, mx in zip(bmin, bmax)])
scale = squish(spectrum[s], scale_axis)
scale = scale / scale[0]
scales.append(scale[1:])
return scales
# User facing fit/simulation functions
def fit_spectrum(spectrum, lineshapes, params, amps, bounds, ampbounds,
centers, rIDs, box_width, error_flag, verb=True, **kw):
"""
Fit a NMR spectrum by regions which contain one or more peaks.
Parameters
----------
spectrum : array_like
NMR data. ndarray or emulated type, must be sliceable.
lineshapes : list
List of lineshapes by label (str) or a lineshape class. See
:py:func:`fit_NDregion` for details.
params : list
P-length list (P is the number of peaks in region) of N-length lists
of tuples where each tuple is the optimization starting parameters
for a given peak and dimension lineshape.
amps : list
P-length list of amplitudes.
bounds : list
List of bounds for parameter of same shape as params. If none of the
parameters in a given dimension have limits None can be used,
otherwise each dimension should have a list or tuple of (min,max) or
None for each parameter. min or max may be None when there is no
bounds in a given direction.
ampbounds : list
P-length list of bounds for the amplitude with format similar to
bounds.
centers : list
List of N-tuples indicating peak centers.
rIDs : list
        P-length list of region numbers. Peaks with the same region number
        are fit together.
box_width : tuple
Tuple of length N indicating box width to add and subtract from peak
centers to form regions around peak to fit.
error_flag : bool
True to estimate errors for each lineshape parameter and amplitude.
verb : bool, optional
        True (the default) to print a summary of each region fit; False
        suppresses all printing.
**kw : optional
Additional keywords passed to the scipy.optimize.leastsq function.
Returns
-------
params_best : list
Optimal values for lineshape parameters with same format as params
input parameter.
amp_best : list
List of optimal peak amplitudes.
param_err : list, only returned when error_flag is True
Estimated lineshape parameter errors with same format as params.
amp_err : list, only returned when error_flag is True
Estimated peak amplitude errors.
iers : list
        List of integer flags from scipy.optimize.leastsq indicating whether
        a solution was found for a given peak. 1, 2, 3 or 4 indicates that a
        solution was found; other values indicate an error.
"""
pbest = [[]] * len(params)
pbest_err = [[]] * len(params)
abest = [[]] * len(params)
abest_err = [[]] * len(params)
iers = [[]] * len(params)
shape = spectrum.shape
ls_classes = []
for l in lineshapes:
if isinstance(l, str):
ls_classes.append(ls_str2class(l))
else:
ls_classes.append(l)
cIDs = set(rIDs) # region values to loop over
for cID in cIDs:
cpeaks = [i for i, v in enumerate(rIDs) if v == cID]
# select the parameter
cparams = [params[i] for i in cpeaks]
camps = [amps[i] for i in cpeaks]
cbounds = [bounds[i] for i in cpeaks]
campbounds = [ampbounds[i] for i in cpeaks]
ccenters = [centers[i] for i in cpeaks]
# find the box edges
bcenters = np.round(np.array(ccenters)).astype('int')
bmin = bcenters - box_width
bmax = bcenters + box_width + 1
# correct for spectrum edges
for i in range(len(shape)):
bmin[:, i][np.where(bmin[:, i] < 0)] = 0
for i, v in enumerate(shape):
bmax[:, i][np.where(bmax[:, i] > v)] = v
# find the region limits
rmin = edge = np.array(bmin).min(0)
rmax = np.array(bmax).max(0)
# cut the spectrum
s = tuple([slice(mn, mx) for mn, mx in zip(rmin, rmax)])
region = spectrum[s]
# add edge to the box limits
ebmin = bmin - edge
ebmax = bmax - edge
# create the weight mask array
wmask = np.zeros(region.shape, dtype='bool')
for bmn, bmx in zip(ebmin, ebmax):
s = tuple([slice(mn, mx) for mn, mx in zip(bmn, bmx)])
wmask[s] = True
# add edges to the initial parameters
ecparams = [[ls.add_edge(p, (mn, mx)) for ls, mn, mx, p in
zip(ls_classes, rmin, rmax, g)] for g in cparams]
# TODO make this better...
ecbounds = [[list(zip(*[ls.add_edge(b, (mn, mx)) for b in zip(*db)]))
for ls, mn, mx, db in zip(ls_classes, rmin, rmax, pb)]
for pb in cbounds]
# fit the region
t = fit_NDregion(region, ls_classes, ecparams, camps, ecbounds,
campbounds, wmask, error_flag, **kw)
if error_flag:
ecpbest, acbest, ecpbest_err, acbest_err, ier = t
cpbest_err = [[ls.remove_edge(p, (mn, mx)) for ls, mn, mx, p in
zip(ls_classes, rmin, rmax, g)] for g in ecpbest_err]
else:
ecpbest, acbest, ier = t
# remove edges from best fit parameters
cpbest = [[ls.remove_edge(p, (mn, mx)) for ls, mn, mx, p in
zip(ls_classes, rmin, rmax, g)] for g in ecpbest]
if verb:
print("-----------------------")
print("cID:", cID, "ier:", ier, "Peaks fit", cpeaks)
print("fit parameters:", cpbest)
print("fit amplitudes", acbest)
for i, pb, ab in zip(cpeaks, cpbest, acbest):
pbest[i] = pb
abest[i] = ab
iers[i] = ier
if error_flag:
for i, pb, ab in zip(cpeaks, cpbest_err, acbest_err):
pbest_err[i] = pb
abest_err[i] = ab
if error_flag is False:
return pbest, abest, iers
return pbest, abest, pbest_err, abest_err, iers
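The weight-mask construction inside `fit_spectrum` marks only the per-peak boxes as significant for the error calculation; a standalone sketch with hypothetical region and box limits:

```python
import numpy as np

# Mark only the per-peak boxes (hypothetical, region coordinates) as True
# inside the fitting region, as done for wmask above.
region_shape = (6, 8)
ebmin = np.array([[0, 0], [3, 4]])   # box minima
ebmax = np.array([[2, 3], [6, 8]])   # box maxima (exclusive)
wmask = np.zeros(region_shape, dtype='bool')
for bmn, bmx in zip(ebmin, ebmax):
    s = tuple(slice(mn, mx) for mn, mx in zip(bmn, bmx))
    wmask[s] = True
```

Points outside every box stay False and therefore contribute zero weight when the residual is multiplied by the mask.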
def fit_NDregion(region, lineshapes, params, amps, bounds=None,
ampbounds=None, wmask=None, error_flag=False, **kw):
"""
    Fit an N-dimensional region.
Parameters
----------
region : ndarray
Region of a NMR data to fit.
    lineshapes : list
List of lineshapes by label (str) or a lineshape class. See
Notes for details.
params : list
P-length list (P is the number of peaks in region) of N-length lists
        of tuples where each tuple is the optimization starting parameters
for a given peak and dimension lineshape.
amps : list
P-length list of amplitudes.
bounds : list
List of bounds for parameter of same shape as params. If none of the
parameters in a given dimension have limits None can be used,
otherwise each dimension should have a list or tuple of (min,max) or
        None for each parameter. min or max may be None when there is no
        bound in a given direction.
ampbounds : list
P-length list of bounds for the amplitude with format similar to
bounds.
wmask : ndarray, optional
Array with same shape as region which is used to weight points in the
error calculation, typically a boolean array is used to exclude
certain points in the region. Default of None will include all
points in the region equally in the error calculation.
error_flag : bool
True to estimate errors for each lineshape parameter and amplitude.
**kw : optional
Additional keywords passed to the scipy.optimize.leastsq function.
Returns
-------
params_best : list
Optimal values for lineshape parameters with same format as params
input parameter.
amp_best : list
List of optimal peak amplitudes.
param_err : list, only returned when error_flag is True
Estimated lineshape parameter errors with same format as params.
amp_err : list, only returned when error_flag is True
Estimated peak amplitude errors.
iers : list
        List of integer flags from scipy.optimize.leastsq indicating whether
        a solution was found for a given peak. 1, 2, 3 or 4 indicates that a
        solution was found; other values indicate an error.
Notes
-----
The lineshape parameter:
    Elements of the lineshape parameter list can be a string indicating the
    lineshape of a given dimension or an instance of a lineshape class.
    Such a class must provide a sim method, which takes two arguments (the
    length of the lineshape and a list of lineshape parameters) and returns
    a simulated lineshape, as well as a nparam method, which given the
    length of the lineshape returns the number of parameters needed to
    describe it. Currently the following strings are allowed:
* 'g' or 'gauss' Gaussian (normal) lineshape.
* 'l' or 'lorentz' Lorentzian lineshape.
* 'v' or 'voigt' Voigt lineshape.
    * 'pv' or 'pvoight' Pseudo Voigt lineshape.
* 's' or 'scale' Scaled lineshape.
The first four lineshapes (Gaussian, Lorentzian, Voigt and Pseudo Voigt)
all take a FWHM scale parameter.
The following are all valid lineshapes parameters for a 2D Gaussian peak:
* ['g','g']
* ['gauss','gauss']
* [ng.lineshapes1d.gauss(),ng.lineshapes1d.gauss()]
"""
    # this function parses the user-friendly input into a format digestible
    # by f_NDregion, performs the fitting, then formats the fitting results
    # into a user-friendly format
# parse the region parameter
ndim = region.ndim
shape = region.shape
# parse the lineshape parameter
if len(lineshapes) != ndim:
raise ValueError("Incorrect number of lineshapes provided")
ls_classes = []
for l in lineshapes:
if isinstance(l, str):
ls_classes.append(ls_str2class(l))
else:
ls_classes.append(l)
# determine the number of parameter in each dimension
dim_nparam = [c.nparam(l) for l, c in zip(shape, ls_classes)]
# parse params
n_peaks = len(params)
p0 = []
for i, guess in enumerate(params): # peak loop
if len(guess) != ndim:
err = "Incorrect number of params for peak %i"
raise ValueError(err % (i))
for j, dim_guess in enumerate(guess): # dimension loop
if len(dim_guess) != dim_nparam[j]:
err = "Incorrect number of parameters in peak %i dimension %i"
raise ValueError(err % (i, j))
for g in dim_guess: # parameter loop
p0.append(g)
# parse the bounds parameter
if bounds is None: # No bounds
peak_bounds = [[(None, None)] * i for i in dim_nparam]
bounds = [peak_bounds] * n_peaks
if len(bounds) != n_peaks:
raise ValueError("Incorrect number of parameter bounds provided")
# build the parameter bound list to be passed to f_NDregion
p_bounds = []
for i, peak_bounds in enumerate(bounds): # peak loop
if peak_bounds is None:
peak_bounds = [[(None, None)] * i for i in dim_nparam]
if len(peak_bounds) != ndim:
err = "Incorrect number of bounds for peak %i"
raise ValueError(err % (i))
for j, dim_bounds in enumerate(peak_bounds): # dimension loop
if dim_bounds is None:
dim_bounds = [(None, None)] * dim_nparam[j]
if len(dim_bounds) != dim_nparam[j]:
err = "Incorrect number of bounds for peak %i dimension %i"
raise ValueError(err % (i, j))
for k, b in enumerate(dim_bounds): # parameter loop
if b is None:
b = (None, None)
if len(b) != 2:
err = "No min/max for peak %i dim %i parameter %i"
raise ValueError(err % (i, j, k))
p_bounds.append(b)
# parse amps parameter
if len(amps) != n_peaks:
raise ValueError("Incorrect number of amplitude guesses provided")
p0 = list(amps) + p0 # amplitudes appended to front of p0
# parse ampbounds parameter
if ampbounds is None:
ampbounds = [(None, None)] * n_peaks
if len(ampbounds) != n_peaks:
raise ValueError("Incorrect number of amplitude bounds")
to_add = []
for k, b in enumerate(ampbounds):
if b is None:
b = (None, None)
if len(b) != 2:
err = "No min/max for amplitude bound %i"
raise ValueError(err % (k))
to_add.append(b)
p_bounds = to_add + p_bounds # amplitude bound at front of p_bounds
# parse the wmask parameter
if wmask is None: # default is to include all points in region
wmask = np.ones(shape, dtype='bool')
if wmask.shape != shape:
err = "wmask has incorrect shape:" + str(wmask.shape) + \
" should be " + str(shape)
raise ValueError(err)
# DEBUGGING
# print("--------------------------------")
# print(region)
# print(ls_classes)
# print(p0)
# print(p_bounds)
# print(n_peaks)
# print(dim_nparam)
# print("=================================")
# for i,j in zip(p0,p_bounds):
# print(i, j)
# include full_output=True when errors requested
if error_flag:
kw["full_output"] = True
# perform fitting
r = f_NDregion(region, ls_classes, p0, p_bounds, n_peaks, wmask, **kw)
# DEBUGGING
# print(r)
    # unpack results depending on whether full output was requested
if "full_output" in kw and kw["full_output"]:
p_best, cov_xi, infodic, mesg, ier = r
else:
p_best, ier = r
# unpack and repack p_best
    # pull off the amplitudes
amp_best = p_best[:n_peaks]
# split the remaining parameters into n_peaks equal sized lists
p_list = split_list(list(p_best[n_peaks:]), n_peaks)
# for each peak repack the flat parameter lists to reference by dimension
param_best = [make_slist(l, dim_nparam) for l in p_list]
# return as is if no errors requested
if error_flag is False:
return param_best, amp_best, ier
# calculate errors
p_err = calc_errors(region, ls_classes, p_best, cov_xi, n_peaks, wmask)
# unpack and repack the error p_err
# pull off the amplitude errors
amp_err = p_err[:n_peaks]
# split the remaining errors into n_peaks equal sized lists
pe_list = split_list(list(p_err[n_peaks:]), n_peaks)
# for each peak repack the flat errors list to reference by dimension
param_err = [make_slist(l, dim_nparam) for l in pe_list]
return param_best, amp_best, param_err, amp_err, ier
def sim_NDregion(shape, lineshapes, params, amps):
"""
Simulate an N-dimensional region with one or more peaks.
Parameters
----------
shape : tuple of ints
Shape of region.
lineshapes : list
List of lineshapes by label (str) or a lineshape class. See
:py:func:`fit_NDregion` for additional documentation.
params : list
P-length list (P is the number of peaks in region) of N-length lists
of tuples where each each tuple is lineshape parameters for a given
peak and dimension.
amps : list
        P-length list of peak amplitudes.
Returns
-------
    sim : ndarray of shape `shape`
Simulated region.
"""
    # parse the user-friendly input into a format digestible by s_NDregion
# parse the shape
ndim = len(shape)
# parse the lineshape parameters
if len(lineshapes) != ndim:
raise ValueError("Incorrect number of lineshapes provided")
ls_classes = []
for l in lineshapes:
if isinstance(l, str):
ls_classes.append(ls_str2class(l))
else:
ls_classes.append(l)
# determine the number of parameters in each dimension.
dim_nparam = [c.nparam(l) for l, c in zip(shape, ls_classes)]
# parse the params parameter
n_peaks = len(params)
p = []
for i, param in enumerate(params):
if len(param) != ndim:
err = "Incorrect number of parameters for peak %i"
raise ValueError(err % (i))
for j, dim_param in enumerate(param):
if len(dim_param) != dim_nparam[j]:
err = "Incorrect number of parameters in peak %i dimension %i"
raise ValueError(err % (i, j))
for g in dim_param:
p.append(g)
# parse the amps parameter
if len(amps) != n_peaks:
raise ValueError("Incorrect number of amplitudes provided")
p = list(amps) + p # amplitudes appended to front of p
# DEBUGGING
# print("p",p)
# print("shape",shape)
# print("ls_classes",ls_classes)
# print("n_peaks",n_peaks)
return s_NDregion(p, shape, ls_classes, n_peaks)
def make_slist(l, t_sizes):
"""
    Create a list of sublists of given sizes from a flat list.
    Parameters
    ----------
    l : list or ndarray
        List or array to pack into a shaped list.
    t_sizes : list of ints
        List of sublist sizes.
    Returns
    -------
    slist : list
        List of sublists with lengths given by t_sizes.
"""
out = [] # output
start = 0
for s in t_sizes:
out.append(l[start:start + s])
start = start + s
return out
def split_list(l, N):
""" Split list l into N sublists of equal size """
step = int(len(l) / N)
div_points = range(0, len(l) + 1, step)
return [l[div_points[i]:div_points[i + 1]] for i in range(N)]
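The flat-parameter repacking performed in `fit_NDregion` combines these two helpers: amplitudes sit at the front of the flat vector, the remainder is split per peak and then shaped per dimension. A standalone sketch (helpers re-defined locally so the snippet runs on its own; the parameter values are hypothetical):

```python
def make_slist(l, t_sizes):
    # pack a flat list into sublists of the given sizes
    out, start = [], 0
    for s in t_sizes:
        out.append(l[start:start + s])
        start += s
    return out

def split_list(l, N):
    # split list l into N sublists of equal size
    step = int(len(l) / N)
    div_points = range(0, len(l) + 1, step)
    return [l[div_points[i]:div_points[i + 1]] for i in range(N)]

n_peaks = 2
dim_nparam = [2, 1]              # e.g. 2 params in dim 0, 1 in dim 1
p_best = [5.0, 7.0,              # amplitudes (front of the vector)
          10.0, 1.2, 0.5,        # peak 0: (x0, fwhm), (x1,)
          20.0, 1.4, 0.6]        # peak 1
amp_best = p_best[:n_peaks]
p_list = split_list(p_best[n_peaks:], n_peaks)
param_best = [make_slist(l, dim_nparam) for l in p_list]
```

This recovers the nested `params`-style structure from the flat vector that the optimizer works with.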
def calc_errors(region, ls_classes, p, cov, n_peaks, wmask):
"""
Calculate the parameter errors from the standard errors of the estimate.
Parameters
----------
region : ndarray
Region which was fit.
ls_classes : list
List of lineshape classes.
p : ndarray
Fit parameters.
cov : ndarray
Covariance matrix from least squares fitting.
    n_peaks : int
        Number of peaks in the region.
    wmask : ndarray
        Array with the same shape as region used to weight points in the
        error calculation.
Returns
-------
errors : ndarray
Array of standard errors of parameters in p.
"""
# calculate the residuals
resid = err_NDregion(p, region, region.shape, ls_classes, n_peaks, wmask)
SS_err = np.power(resid, 2).sum() # Sum of squared residuals
n = region.size # size of sample XXX not sure if this always makes sense
k = p.size - 1 # free parameters
st_err = np.sqrt(SS_err / (n - k - 1)) # standard error of estimate
if cov is None: # indicate that parameter errors cannot be calculated.
return [None] * len(p)
return st_err * np.sqrt(np.diag(cov))
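The error propagation in `calc_errors` reduces to scaling the square root of the covariance diagonal by the standard error of the estimate; a self-contained sketch with made-up residuals and covariance:

```python
import numpy as np

resid = np.array([0.1, -0.2, 0.05, 0.15])   # hypothetical residuals
cov = np.diag([0.5, 2.0])                    # hypothetical covariance matrix
n = resid.size                               # sample size
k = 2 - 1                                    # free parameters (p.size - 1)
ss_err = np.power(resid, 2).sum()            # sum of squared residuals
st_err = np.sqrt(ss_err / (n - k - 1))       # standard error of estimate
errors = st_err * np.sqrt(np.diag(cov))      # per-parameter standard errors
```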
# internal functions
def s_NDregion(p, shape, ls_classes, n_peaks):
"""
Simulate an N-dimensional region with one or more peaks.
Parameters
----------
p : list
        List of parameters; must be a list, as it is modified in place by
        this function.
shape : tuple of ints
Shape of region.
ls_classes : list
List of lineshape classes.
n_peaks : int
Number of peaks in region.
Returns
-------
r : ndarray
Simulated region.
"""
# split the parameter list into a list of amplitudes and peak param lists
As = [p.pop(0) for i in range(n_peaks)]
ps = split_list(p, n_peaks)
# simulate the first region
A, curr_p = As.pop(0), ps.pop(0)
r = s_single_NDregion([A] + curr_p, shape, ls_classes)
# simulate any additional regions
for A, curr_p in zip(As, ps):
r = r + s_single_NDregion([A] + curr_p, shape, ls_classes)
return r
def s_single_NDregion(p, shape, ls_classes):
"""
Simulate an N-dimensional region with a single peak.
This function is called repeatedly by s_NDregion to build up a full
simulated region.
Parameters
----------
p : list
        List of parameters; must be a list, as it is consumed by this
        function.
shape : tuple
Shape of region.
ls_classes : list
List of lineshape classes.
Returns
-------
r : ndarray
Simulated region.
"""
A = p.pop(0) # amplitude is ALWAYS the first parameter
r = np.array(A, dtype='float')
for length, ls_class in zip(shape, ls_classes):
# print("Making lineshape of", ls_class.name, "with length:", length)
s_p = [p.pop(0) for i in range(ls_class.nparam(length))]
ls = ls_class.sim(length, s_p)
# print("Lineshape is:", ls)
r = np.kron(r, ls) # vector direct product flattened
return r.reshape(shape)
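The `np.kron` accumulation above builds a separable N-dimensional peak as the direct product of 1-D lineshapes; a minimal standalone sketch using a hypothetical Gaussian in place of a lineshape class:

```python
import numpy as np

def gauss1d(length, params):
    # hypothetical 1-D Gaussian lineshape: params = (center, fwhm)
    c, fwhm = params
    x = np.arange(length)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))

shape = (5, 7)
A = 2.0                                   # amplitude, always first
r = np.array(A, dtype='float')
for length, s_p in zip(shape, [(2.0, 2.0), (3.0, 2.5)]):
    r = np.kron(r, gauss1d(length, s_p))  # vector direct product, flattened
peak = r.reshape(shape)
```

For two dimensions this is the outer product of the two 1-D lineshapes scaled by the amplitude; the `kron`-then-`reshape` form generalizes the same construction to any number of dimensions.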
def err_NDregion(p, region, shape, ls_classes, n_peaks, wmask):
"""
Error function for an N-dimensional region, called by :py:func:`f_NDregion`
"""
sim_region = s_NDregion(list(p), shape, ls_classes, n_peaks)
return ((region - sim_region) * wmask).flatten()
def f_NDregion(region, ls_classes, p0, p_bounds, n_peaks, wmask, **kw):
"""
    Fit an N-dimensional region containing one or more peaks.
    The region is fit using a constrained Levenberg-Marquardt optimization
    algorithm. See :py:func:`fit_NDregion` for additional documentation.
Parameters
----------
region : ndarray
Region to fit.
ls_classes : list
List of lineshape classes.
p0 : ndarray
Initial parameters.
p_bounds : list of tuples
List of (min, max) bounds for each element of p0.
n_peaks : int
Number of peaks in the simulated region.
wmask : ndarray
Array with same shape as region which is used to weight points in the
error calculation, typically a boolean array is used to exclude
certain points in the region.
**kw : optional
Additional keywords passed to the scipy.optimize.leastsq function.
See Also
--------
    fit_NDregion : Fit an N-dimensional region with user-friendly parameters.
"""
args = (region, region.shape, ls_classes, n_peaks, wmask)
p_best = leastsqbound(err_NDregion, p0, bounds=p_bounds, args=args, **kw)
return p_best
| bsd-3-clause | 7f566c677c7735e82a0f1b1a52d4b910 | 31.11344 | 80 | 0.606781 | 3.824938 | false | false | false | false |
jobovy/galpy | galpy/snapshot/nemo_util.py | 1 | 1972 | ###############################################################################
# nemo_util.py: some utilities for handling NEMO snapshots
###############################################################################
import os
import subprocess
import tempfile
import numpy
def read(filename,ext=None,swapyz=False):
"""
NAME:
read
PURPOSE:
read a NEMO snapshot file consisting of mass,position,velocity
INPUT:
filename - name of the file
ext= if set, 'nemo' for NEMO binary format, otherwise assumed ASCII; if not set, gleaned from extension
swapyz= (False) if True, swap the y and z axes in the output (only for position and velocity)
OUTPUT:
snapshots [nbody,ndim,nt]
HISTORY:
2015-11-18 - Written - Bovy (UofT)
"""
if ext is None and filename.split('.')[-1] == 'nemo':
ext= 'nemo'
elif ext is None:
ext= 'dat'
# Convert to ASCII if necessary
if ext.lower() == 'nemo':
file_handle, asciifilename= tempfile.mkstemp()
os.close(file_handle)
stderr= open('/dev/null','w')
try:
subprocess.check_call(['s2a',filename,asciifilename])#,stderr=stderr)
except subprocess.CalledProcessError:
os.remove(asciifilename)
finally:
stderr.close()
else:
asciifilename= filename
# Now read
out= numpy.loadtxt(asciifilename,comments='#')
if ext.lower() == 'nemo': os.remove(asciifilename)
if swapyz:
out[:,[2,3]]= out[:,[3,2]]
out[:,[5,6]]= out[:,[6,5]]
# Get the number of snapshots
nt= (_wc(asciifilename)-out.shape[0])//13 # 13 comments/snapshot
out= numpy.reshape(out,(nt,out.shape[0]//nt,out.shape[1]))
return numpy.swapaxes(numpy.swapaxes(out,0,1),1,2)
def _wc(filename):
try:
return int(subprocess.check_output(['wc','-l',filename]).split()[0])
except subprocess.CalledProcessError:
return numpy.nan
| bsd-3-clause | 6974d077aaf2a1ab9d563f2d4a3e5521 | 33 | 110 | 0.567951 | 3.88189 | false | false | false | false |
jobovy/galpy | doc/source/examples/sellwood-jrjp.py | 1 | 4179 | import csv
import os
import os.path
import re
import sys
import cPickle as pickle
import numpy as nu
from galpy.orbit import Orbit
from galpy.potential import LogarithmicHaloPotential, PowerSphericalPotential
from galpy.util import plot
_degtorad= nu.pi/180.
def hms_to_rad(ra):
spl= re.split(r' ',ra)
return (float(spl[0])*15.+float(spl[1])*0.25+
float(spl[1])*0.25/60.)*_degtorad
def dms_to_rad(dec):
spl= re.split(r' ',dec)
return (float(spl[0])+float(spl[1])/60.+float(spl[2])/60./60.)*_degtorad
def read_float(f):
if f == '':
return -9999
else:
return float(f)
def calcj(rotcurve):
if rotcurve == 'flat':
savefilename= 'myjs.sav'
elif rotcurve == 'power':
savefilename= 'myjs_power.sav'
if os.path.exists(savefilename):
savefile= open(savefilename,'rb')
myjr= pickle.load(savefile)
myjp= pickle.load(savefile)
mye= pickle.load(savefile)
myzmax= pickle.load(savefile)
e= pickle.load(savefile)
zmax= pickle.load(savefile)
savefile.close()
else:
dialect= csv.excel
dialect.skipinitialspace=True
reader= csv.reader(open('../data/gcs.tsv'),delimiter='|',dialect=dialect)
vxvs= []
es= []
zmaxs= []
for row in reader:
if row[0][0] == '#':
continue
thisra= row[0]
thisdec= row[1]
thisd= read_float(row[2])/1000.
if thisd > 0.2: continue
thisu= read_float(row[3])
thisv= read_float(row[4])
thisw= read_float(row[5])
thise= read_float(row[6])
thiszmax= read_float(row[7])
if thisd == -9999 or thisu == -9999 or thisv == -9999 or thisw == -9999:
continue
vxvs.append([hms_to_rad(thisra),dms_to_rad(thisdec),
thisd,thisu,thisv,thisw])
es.append(thise)
zmaxs.append(thiszmax)
vxvv= nu.array(vxvs)
e= nu.array(es)
zmax= nu.array(zmaxs)
#Define potential
lp= LogarithmicHaloPotential(normalize=1.)
pp= PowerSphericalPotential(normalize=1.,alpha=-2.)
ts= nu.linspace(0.,100.,10000)
myjr= nu.zeros(len(e))
myjp= nu.zeros(len(e))
mye= nu.zeros(len(e))
myzmax= nu.zeros(len(e))
for ii in range(len(e)):
#Integrate the orbit
o= Orbit(vxvv[ii,:],radec=True,uvw=True,vo=220.,ro=8.)
if rotcurve == 'flat':
o.integrate(ts,lp)
mye[ii]= o.e()
myzmax[ii]= o.zmax()*8.
print(e[ii], mye[ii], zmax[ii], myzmax[ii])
myjr[ii]= o.jr(lp)
else:
myjr[ii]= o.jr(pp)
myjp[ii]= o.jp()
#Save
savefile= open(savefilename,'wb')
pickle.dump(myjr,savefile)
pickle.dump(myjp,savefile)
pickle.dump(mye,savefile)
pickle.dump(myzmax,savefile)
pickle.dump(e,savefile)
pickle.dump(zmax,savefile)
savefile.close()
#plot
if rotcurve == 'flat':
plot.print()
plot.plot(nu.array([0.,1.]),nu.array([0.,1.]),'k-',
xlabel=r'$\mathrm{Holmberg\ et\ al.}\ e$',
ylabel=r'$\mathrm{galpy}\ e$')
plot.plot(e,mye,'k,',overplot=True)
plot.end_print('myee.png')
plot.print()
plot.plot(nu.array([0.,2.5]),
nu.array([0.,2.5]),'k-',
xlabel=r'$\mathrm{Holmberg\ et\ al.}\ z_{\mathrm{max}}$',
ylabel=r'$\mathrm{galpy}\ z_{\mathrm{max}}$')
plot.plot(zmax,myzmax,'k,',overplot=True)
plot.end_print('myzmaxzmax.png')
plot.print()
plot.plot(myjp,myjr,'k.',ms=2.,
xlabel=r'$J_{\phi}$',
ylabel=r'$J_R$',
xrange=[0.7,1.3],
yrange=[0.,0.05])
if rotcurve == 'flat':
plot.end_print('jrjp.png')
else:
plot.end_print('jrjp_power.png')
if __name__ == '__main__':
if len(sys.argv) > 1:
calcj(sys.argv[1])
else:
calcj('flat')
| bsd-3-clause | b663729614e7053cf4962f0aa682af27 | 30.186567 | 84 | 0.520699 | 3.070536 | false | false | false | false |
jobovy/galpy | galpy/potential/SCFPotential.py | 1 | 40308 | import hashlib
import numpy
from numpy.polynomial.legendre import leggauss
from scipy import integrate
from scipy.special import gamma, gammaln, lpmn
from ..util import conversion, coords
from ..util._optional_deps import _APY_LOADED
from .Potential import Potential
if _APY_LOADED:
from astropy import units
from .NumericalPotentialDerivativesMixin import \
NumericalPotentialDerivativesMixin
class SCFPotential(Potential,NumericalPotentialDerivativesMixin):
"""Class that implements the `Hernquist & Ostriker (1992) <http://adsabs.harvard.edu/abs/1992ApJ...386..375H>`_ Self-Consistent-Field-type potential.
Note that we divide the amplitude by 2 such that :math:`Acos = \\delta_{0n}\\delta_{0l}\\delta_{0m}` and :math:`Asin = 0` corresponds to :ref:`Galpy's Hernquist Potential <hernquist_potential>`.
.. math::
\\rho(r, \\theta, \\phi) = \\frac{amp}{2}\\sum_{n=0}^{\\infty} \\sum_{l=0}^{\\infty} \\sum_{m=0}^l N_{lm} P_{lm}(\\cos(\\theta)) \\tilde{\\rho}_{nl}(r) \\left(A_{cos, nlm} \\cos(m\\phi) + A_{sin, nlm} \\sin(m\\phi)\\right)
where
.. math::
\\tilde{\\rho}_{nl}(r) = \\frac{K_{nl}}{\\sqrt{\\pi}} \\frac{(a r)^l}{(r/a) (a + r)^{2l + 3}} C_{n}^{2l + 3/2}(\\xi)
.. math::
\\Phi(r, \\theta, \\phi) = \\sum_{n=0}^{\\infty} \\sum_{l=0}^{\\infty} \\sum_{m=0}^l N_{lm} P_{lm}(\\cos(\\theta)) \\tilde{\\Phi}_{nl}(r) \\left(A_{cos, nlm} \\cos(m\\phi) + A_{sin, nlm} \\sin(m\\phi)\\right)
where
.. math::
\\tilde{\\Phi}_{nl}(r) = -\\sqrt{4 \\pi}K_{nl} \\frac{(ar)^l}{(a + r)^{2l + 1}} C_{n}^{2l + 3/2}(\\xi)
where
.. math::
\\xi = \\frac{r - a}{r + a} \\qquad
N_{lm} = \\sqrt{\\frac{2l + 1}{4\\pi} \\frac{(l - m)!}{(l + m)!}}(2 - \\delta_{m0}) \\qquad
K_{nl} = \\frac{1}{2} n (n + 4l + 3) + (l + 1)(2l + 1)
and :math:`P_{lm}` is the Associated Legendre Polynomials whereas :math:`C_n^{\\alpha}` is the Gegenbauer polynomial.
"""
def __init__(self, amp=1., Acos=numpy.array([[[1]]]),Asin=None, a = 1., normalize=False, ro=None,vo=None):
"""
NAME:
__init__
PURPOSE:
initialize a SCF Potential from a set of expansion coefficients (use SCFPotential.from_density to directly initialize from a density)
INPUT:
amp - amplitude to be applied to the potential (default: 1); can be a Quantity with units of mass or Gxmass
Acos - The real part of the expansion coefficient (NxLxL matrix, or optionally NxLx1 if Asin=None)
Asin - The imaginary part of the expansion coefficient (NxLxL matrix or None)
a - scale length (can be Quantity)
normalize - if True, normalize such that vc(1.,0.)=1., or, if given as a number, such that the force is this fraction of the force necessary to make vc(1.,0.)=1.
ro=, vo= distance and velocity scales for translation into internal units (default from configuration file)
OUTPUT:
SCFPotential object
HISTORY:
2016-05-13 - Written - Aladdin Seaifan (UofT)
"""
NumericalPotentialDerivativesMixin.__init__(self,{}) # just use default dR etc.
Potential.__init__(self,amp=amp/2.,ro=ro,vo=vo,amp_units='mass')
a= conversion.parse_length(a,ro=self._ro)
##Errors
shape = Acos.shape
errorMessage = None
if len(shape) != 3:
errorMessage="Acos must be a 3 dimensional numpy array"
elif Asin is not None and shape[1] != shape[2]:
errorMessage="The second and third dimension of the expansion coefficients must have the same length"
elif Asin is None and not (shape[2] == 1 or shape[1] == shape[2]):
errorMessage="The third dimension must have length=1 or equal to the length of the second dimension"
elif Asin is None and shape[1] > 1 and numpy.any(Acos[:,:,1:] !=0):
errorMessage="Acos has non-zero elements at indices m>0, which implies a non-axi symmetric potential.\n" +\
"Asin=None which implies an axi symmetric potential.\n" + \
"Contradiction."
elif Asin is not None and Asin.shape != shape:
errorMessage = "The shape of Asin does not match the shape of Acos."
if errorMessage is not None:
raise RuntimeError(errorMessage)
##Warnings
warningMessage=None
if numpy.any(numpy.triu(Acos,1) != 0) or (Asin is not None and numpy.any(numpy.triu(Asin,1) != 0)):
warningMessage="Found non-zero values at expansion coefficients where m > l\n" + \
"The Mth and Lth dimension is expected to make a lower triangular matrix.\n" + \
"All values found above the diagonal will be ignored."
if warningMessage is not None:
raise RuntimeWarning(warningMessage)
##Is non axi?
self.isNonAxi= True
if Asin is None or shape[1] == 1 or (numpy.all(Acos[:,:,1:] == 0) and numpy.all(Asin[:,:,:]==0)):
self.isNonAxi = False
self._a = a
NN = self._Nroot(Acos.shape[1], Acos.shape[2])
self._Acos= Acos*NN[numpy.newaxis,:,:]
if Asin is not None:
self._Asin = Asin*NN[numpy.newaxis,:,:]
else:
self._Asin = numpy.zeros_like(Acos)
self._force_hash= None
self.hasC= True
self.hasC_dxdv=True
self.hasC_dens=True
if normalize or \
(isinstance(normalize,(int,float)) \
and not isinstance(normalize,bool)):
self.normalize(normalize)
return None
@classmethod
def from_density(cls,dens,N,L=None,a=1.,symmetry=None,
radial_order=None,costheta_order=None,phi_order=None,
ro=None,vo=None):
"""
NAME:
from_density
PURPOSE:
initialize an SCF Potential from from a given density
INPUT:
dens - density function that takes parameters R, z and phi; z and phi are optional for spherical profiles, phi is optional for axisymmetric profiles. The density function must take input positions in internal units (R/ro, z/ro), but can return densities in physical units. You can use the member dens of Potential instances or the density from evaluateDensities
N - Number of radial basis functions
L - Number of costheta basis functions; for non-axisymmetric profiles also sets the number of azimuthal (phi) basis functions to M = 2L+1)
a - expansion scale length (can be Quantity)
symmetry= (None) symmetry of the profile to assume: 'spherical', 'axisymmetry', or None (for the general, non-axisymmetric case)
radial_order - Number of sample points for the radial integral. If None, radial_order=max(20, N + 3/2L + 1)
costheta_order - Number of sample points of the costheta integral. If None, If costheta_order=max(20, L + 1)
phi_order - Number of sample points of the phi integral. If None, If costheta_order=max(20, L + 1)
ro=, vo= distance and velocity scales for translation into internal units (default from configuration file)
OUTPUT:
SCFPotential object
HISTORY:
2022-06-20 - Written - Jo Bovy (UofT)
"""
# Dummy object for ro/vo handling, to ensure consistency
dumm= cls(ro=ro,vo=vo)
internal_ro= dumm._ro
internal_vo= dumm._vo
a= conversion.parse_length(a,ro=internal_ro)
if not symmetry is None and symmetry.startswith('spher'):
Acos, Asin= scf_compute_coeffs_spherical(dens,N,a=a,
radial_order=radial_order)
elif not symmetry is None and symmetry.startswith('axi'):
Acos, Asin= scf_compute_coeffs_axi(dens,N,L,a=a,
radial_order=radial_order,
costheta_order=costheta_order)
else:
Acos, Asin= scf_compute_coeffs(dens,N,L,a=a,
radial_order=radial_order,
costheta_order=costheta_order,
phi_order=phi_order)
# Turn on physical outputs if input density was physical
if _APY_LOADED:
# First need to determine number of parameters, like in
# scf_compute_coeffs_spherical/axi
numOfParam = 0
try:
dens(0)
numOfParam=1
except:
try:
dens(0,0)
numOfParam=2
except:
numOfParam=3
param= [1]*numOfParam
try:
dens(*param).to(units.kg/units.m**3)
except (AttributeError,units.UnitConversionError):
# We'll just assume that unit conversion means density
# is scalar Quantity
pass
else:
ro= internal_ro
vo= internal_vo
return cls(Acos=Acos,Asin=Asin,a=a,ro=ro,vo=vo)
def _Nroot(self, L, M=None):
"""
NAME:
_Nroot
PURPOSE:
Evaluate the square root of equation (3.15) with the (2 - del_m,0) term outside the square root
INPUT:
L - evaluate Nroot for 0 <= l <= L
M - evaluate Nroot for 0 <= m <= M
OUTPUT:
The square root of equation (3.15) with the (2 - del_m,0) outside
HISTORY:
2016-05-16 - Written - Aladdin Seaifan (UofT)
"""
if M is None: M =L
NN = numpy.zeros((L,M),float)
l = numpy.arange(0,L)[:,numpy.newaxis]
m = numpy.arange(0,M)[numpy.newaxis, :]
nLn = gammaln(l-m+1) - gammaln(l+m+1)
NN[:,:] = ((2*l+1.)/(4.*numpy.pi) * numpy.e**nLn)**.5 * 2
NN[:,0] /= 2.
NN = numpy.tril(NN)
return NN
def _calculateXi(self, r):
"""
NAME:
_calculateXi
PURPOSE:
Calculate xi given r
INPUT:
r - Evaluate at radius r
OUTPUT:
xi
HISTORY:
2016-05-18 - Written - Aladdin Seaifan (UofT)
"""
a = self._a
if r == 0:
return -1
else:
return (1.-a/r)/(1.+a/r)
def _rhoTilde(self, r, N,L):
"""
NAME:
_rhoTilde
PURPOSE:
Evaluate rho_tilde as defined in equation 3.9 and 2.24 for 0 <= n < N and 0 <= l < L
INPUT:
r - Evaluate at radius r
N - size of the N dimension
L - size of the L dimension
OUTPUT:
rho tilde
HISTORY:
2016-05-17 - Written - Aladdin Seaifan (UofT)
"""
xi = self._calculateXi(r)
CC = _C(xi,N,L)
a = self._a
rho = numpy.zeros((N,L), float)
n = numpy.arange(0,N, dtype=float)[:, numpy.newaxis]
l = numpy.arange(0, L, dtype=float)[numpy.newaxis,:]
K = 0.5 * n * (n + 4*l + 3) + (l + 1.)*(2*l + 1)
rho[:,:] = K * ((a*r)**l) / ((r/a)*(a + r)**(2*l + 3.)) * CC[:,:]* (numpy.pi)**-0.5
return rho
def _phiTilde(self, r, N,L):
"""
NAME:
_phiTilde
PURPOSE:
Evaluate phi_tilde as defined in equation 3.10 and 2.25 for 0 <= n < N and 0 <= l < L
INPUT:
r - Evaluate at radius r
N - size of the N dimension
L - size of the L dimension
OUTPUT:
phi tilde
HISTORY:
2016-05-17 - Written - Aladdin Seaifan (UofT)
"""
xi = self._calculateXi(r)
CC = _C(xi,N,L)
a = self._a
phi = numpy.zeros((N,L), float)
n = numpy.arange(0,N)[:, numpy.newaxis]
l = numpy.arange(0, L)[numpy.newaxis,:]
if r == 0:
phi[:,:]= -1./a* CC[:,:]*(4*numpy.pi)**0.5
else:
phi[:,:] = - a**l*r**(-l-1.)/ ((1.+a/r)**(2*l + 1.)) * CC[:,:]* (4*numpy.pi)**0.5
return phi
def _compute(self, funcTilde, R, z, phi):
"""
NAME:
_compute
PURPOSE:
evaluate the NxLxM density or potential
INPUT:
           funcTilde - must be _rhoTilde or _phiTilde
R - Cylindrical Galactocentric radius
z - vertical height
phi - azimuth
OUTPUT:
An NxLxM density or potential at (R,z, phi)
HISTORY:
2016-05-18 - Written - Aladdin Seaifan (UofT)
"""
Acos, Asin = self._Acos, self._Asin
N, L, M = Acos.shape
r, theta, phi = coords.cyl_to_spher(R,z,phi)
PP = lpmn(M-1,L-1,numpy.cos(theta))[0].T ##Get the Legendre polynomials
func_tilde = funcTilde(r, N, L) ## Tilde of the function of interest
func = numpy.zeros((N,L,M), float) ## The function of interest (density or potential)
m = numpy.arange(0, M)[numpy.newaxis, numpy.newaxis, :]
mcos = numpy.cos(m*phi)
msin = numpy.sin(m*phi)
func = func_tilde[:,:,None]*(Acos[:,:,:]*mcos + Asin[:,:,:]*msin)*PP[None,:,:]
return func
def _computeArray(self, funcTilde, R, z, phi):
"""
NAME:
_computeArray
PURPOSE:
evaluate the density or potential for a given array of coordinates
INPUT:
           funcTilde - must be _rhoTilde or _phiTilde
R - Cylindrical Galactocentric radius
z - vertical height
phi - azimuth
OUTPUT:
density or potential evaluated at (R,z, phi)
HISTORY:
2016-06-02 - Written - Aladdin Seaifan (UofT)
"""
R = numpy.array(R,dtype=float); z = numpy.array(z,dtype=float); phi = numpy.array(phi,dtype=float);
shape = (R*z*phi).shape
if shape == (): return numpy.sum(self._compute(funcTilde, R,z,phi))
R = R*numpy.ones(shape); z = z*numpy.ones(shape); phi = phi*numpy.ones(shape);
func = numpy.zeros(shape, float)
li = _cartesian(shape)
for i in range(li.shape[0]):
j= tuple(numpy.split(li[i], li.shape[1]))
func[j] = numpy.sum(self._compute(funcTilde, R[j][0],z[j][0],phi[j][0]))
return func
def _dens(self, R, z, phi=0., t=0.):
"""
NAME:
_dens
PURPOSE:
evaluate the density at (R,z, phi)
INPUT:
R - Cylindrical Galactocentric radius
z - vertical height
phi - azimuth
t - time
OUTPUT:
density at (R,z, phi)
HISTORY:
2016-05-17 - Written - Aladdin Seaifan (UofT)
"""
if not self.isNonAxi and phi is None:
phi= 0.
return self._computeArray(self._rhoTilde, R,z,phi)
def _mass(self,R,z=None,t=0.):
"""
NAME:
_mass
PURPOSE:
           evaluate the mass within R (and z) for this potential; if z is None, integrate over the spherical volume
INPUT:
R - Galactocentric cylindrical radius
z - vertical height
t - time
OUTPUT:
the mass enclosed
HISTORY:
2021-03-09 - Written - Bovy (UofT)
2021-03-18 - Switched to using Gauss' theorem - Bovy (UofT)
"""
        if z is not None: raise AttributeError # Hack to fall back to general
# when integrating over spherical volume, all non-zero l,m vanish
N= len(self._Acos)
return R**2.*numpy.sum(self._Acos[:,0,0]*self._dphiTilde(R,N,1)[:,0])
def _evaluate(self,R,z,phi=0.,t=0.):
"""
NAME:
_evaluate
PURPOSE:
evaluate the potential at (R,z, phi)
INPUT:
R - Cylindrical Galactocentric radius
z - vertical height
phi - azimuth
t - time
OUTPUT:
potential at (R,z, phi)
HISTORY:
2016-05-17 - Written - Aladdin Seaifan (UofT)
"""
if not self.isNonAxi and phi is None:
phi= 0.
return self._computeArray(self._phiTilde, R,z,phi)
def _dphiTilde(self, r, N, L):
"""
NAME:
_dphiTilde
PURPOSE:
Evaluate the derivative of phiTilde with respect to r
INPUT:
r - spherical radius
N - size of the N dimension
L - size of the L dimension
OUTPUT:
the derivative of phiTilde with respect to r
HISTORY:
2016-06-06 - Written - Aladdin Seaifan (UofT)
"""
a = self._a
l = numpy.arange(0, L, dtype=float)[numpy.newaxis, :]
n = numpy.arange(0, N, dtype=float)[:, numpy.newaxis]
xi = self._calculateXi(r)
dC = _dC(xi,N,L)
return -(4*numpy.pi)**.5 * (numpy.power(a*r, l)*(l*(a + r)*numpy.power(r,-1) -(2*l + 1))/((a + r)**(2*l + 2))*_C(xi,N,L) +
a**-1*(1 - xi)**2 * (a*r)**l / (a + r)**(2*l + 1) *dC/2.)
def _computeforce(self,R,z,phi=0,t=0):
"""
NAME:
_computeforce
PURPOSE:
Evaluate the first derivative of Phi with respect to R, z and phi
INPUT:
R - Cylindrical Galactocentric radius
z - vertical height
phi - azimuth
t - time
OUTPUT:
dPhi/dr, dPhi/dtheta, dPhi/dphi
HISTORY:
2016-06-07 - Written - Aladdin Seaifan (UofT)
"""
Acos, Asin = self._Acos, self._Asin
N, L, M = Acos.shape
r, theta, phi = coords.cyl_to_spher(R,z,phi)
new_hash= hashlib.md5(numpy.array([R, z,phi])).hexdigest()
if new_hash == self._force_hash:
dPhi_dr = self._cached_dPhi_dr
dPhi_dtheta = self._cached_dPhi_dtheta
dPhi_dphi = self._cached_dPhi_dphi
else:
PP, dPP = lpmn(M-1,L-1,numpy.cos(theta)) ##Get the Legendre polynomials
PP = PP.T[None,:,:]
dPP = dPP.T[None,:,:]
phi_tilde = self._phiTilde(r, N, L)[:,:,numpy.newaxis]
dphi_tilde = self._dphiTilde(r,N,L)[:,:,numpy.newaxis]
m = numpy.arange(0, M)[numpy.newaxis, numpy.newaxis, :]
mcos = numpy.cos(m*phi)
msin = numpy.sin(m*phi)
dPhi_dr = -numpy.sum((Acos*mcos + Asin*msin)*PP*dphi_tilde)
dPhi_dtheta = -numpy.sum((Acos*mcos + Asin*msin)*phi_tilde*dPP*(-numpy.sin(theta)))
dPhi_dphi =-numpy.sum(m*(Asin*mcos - Acos*msin)*phi_tilde*PP)
self._force_hash = new_hash
self._cached_dPhi_dr = dPhi_dr
self._cached_dPhi_dtheta = dPhi_dtheta
self._cached_dPhi_dphi = dPhi_dphi
return dPhi_dr,dPhi_dtheta,dPhi_dphi
def _computeforceArray(self,dr_dx, dtheta_dx, dphi_dx, R, z, phi):
"""
NAME:
_computeforceArray
PURPOSE:
evaluate the forces in the x direction for a given array of coordinates
INPUT:
dr_dx - the derivative of r with respect to the chosen variable x
dtheta_dx - the derivative of theta with respect to the chosen variable x
dphi_dx - the derivative of phi with respect to the chosen variable x
R - Cylindrical Galactocentric radius
z - vertical height
phi - azimuth
OUTPUT:
The forces in the x direction
HISTORY:
2016-06-02 - Written - Aladdin Seaifan (UofT)
"""
R = numpy.array(R,dtype=float); z = numpy.array(z,dtype=float); phi = numpy.array(phi,dtype=float);
shape = (R*z*phi).shape
if shape == ():
dPhi_dr,dPhi_dtheta,dPhi_dphi = \
self._computeforce(R,z,phi)
return dr_dx*dPhi_dr + dtheta_dx*dPhi_dtheta +dPhi_dphi*dphi_dx
R = R*numpy.ones(shape);
z = z* numpy.ones(shape);
phi = phi* numpy.ones(shape);
force = numpy.zeros(shape, float)
dr_dx = dr_dx*numpy.ones(shape); dtheta_dx = dtheta_dx*numpy.ones(shape);dphi_dx = dphi_dx*numpy.ones(shape);
li = _cartesian(shape)
for i in range(li.shape[0]):
j = tuple(numpy.split(li[i], li.shape[1]))
dPhi_dr,dPhi_dtheta,dPhi_dphi = \
self._computeforce(R[j][0],z[j][0],phi[j][0])
force[j] = dr_dx[j][0]*dPhi_dr + dtheta_dx[j][0]*dPhi_dtheta +dPhi_dphi*dphi_dx[j][0]
return force
def _Rforce(self, R, z, phi=0, t=0):
"""
NAME:
_Rforce
PURPOSE:
evaluate the radial force at (R,z, phi)
INPUT:
R - Cylindrical Galactocentric radius
z - vertical height
phi - azimuth
t - time
OUTPUT:
radial force at (R,z, phi)
HISTORY:
2016-06-06 - Written - Aladdin Seaifan (UofT)
"""
if not self.isNonAxi and phi is None:
phi= 0.
r, theta, phi = coords.cyl_to_spher(R,z,phi)
#x = R
dr_dR = numpy.divide(R,r); dtheta_dR = numpy.divide(z,r**2); dphi_dR = 0
return self._computeforceArray(dr_dR, dtheta_dR, dphi_dR, R,z,phi)
def _zforce(self, R, z, phi=0., t=0.):
"""
NAME:
_zforce
PURPOSE:
evaluate the vertical force at (R,z, phi)
INPUT:
R - Cylindrical Galactocentric radius
z - vertical height
phi - azimuth
t - time
OUTPUT:
vertical force at (R,z, phi)
HISTORY:
2016-06-06 - Written - Aladdin Seaifan (UofT)
"""
if not self.isNonAxi and phi is None:
phi= 0.
r, theta, phi = coords.cyl_to_spher(R,z,phi)
#x = z
dr_dz = numpy.divide(z,r); dtheta_dz = numpy.divide(-R,r**2); dphi_dz = 0
return self._computeforceArray(dr_dz, dtheta_dz, dphi_dz, R,z,phi)
def _phitorque(self, R,z,phi=0,t=0):
"""
NAME:
_phitorque
PURPOSE:
           evaluate the azimuthal torque at (R,z, phi)
INPUT:
R - Cylindrical Galactocentric radius
z - vertical height
phi - azimuth
t - time
OUTPUT:
           azimuthal torque at (R,z, phi)
HISTORY:
2016-06-06 - Written - Aladdin Seaifan (UofT)
"""
if not self.isNonAxi and phi is None:
phi= 0.
r, theta, phi = coords.cyl_to_spher(R,z,phi)
#x = phi
dr_dphi = 0; dtheta_dphi = 0; dphi_dphi = 1
return self._computeforceArray(dr_dphi, dtheta_dphi, dphi_dphi, R,z,phi)
def OmegaP(self):
return 0
def _xiToR(xi, a=1):
return a*numpy.divide((1. + xi),(1. - xi))
def _RToxi(r, a=1):
    out= numpy.divide((r/a-1.),(r/a+1.),where=~numpy.isinf(r))
if numpy.any(numpy.isinf(r)):
if hasattr(r,'__len__'):
out[numpy.isinf(r)]= 1.
else:
return 1.
return out
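# The pair _xiToR/_RToxi above implement the radial map xi = (r/a - 1)/(r/a + 1),
# which sends r in [0, inf) onto xi in [-1, 1) so that the Gegenbauer machinery
# below works on a finite interval. A minimal standalone roundtrip check
# (illustrative sketch only, not part of the galpy API; the helper name is mine):

```python
import numpy

def demo_xi_roundtrip(r, a=1.):
    """Verify that the xi <-> r maps used above invert each other (finite r)."""
    xi = (r/a - 1.)/(r/a + 1.)        # same algebra as _RToxi
    r_back = a*(1. + xi)/(1. - xi)    # same algebra as _xiToR
    return bool(numpy.allclose(r, r_back))
```
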
def _C(xi,N,L,alpha=lambda x: 2*x + 3./2,singleL=False):
"""
NAME:
_C
PURPOSE:
           Evaluate C_n,l (the Gegenbauer polynomial) for 0 <= l < L and 0 <= n < N
INPUT:
xi - radial transformed variable
N - Size of the N dimension
L - Size of the L dimension
alpha = A lambda function of l. Default alpha = 2l + 3/2
singleL= (False), if True only compute the L-th polynomial
OUTPUT:
           An NxL array of Gegenbauer polynomial values (with an extra xi axis for array input)
HISTORY:
2016-05-16 - Written - Aladdin Seaifan (UofT)
2021-02-22 - Upgraded to array xi - Bovy (UofT)
2021-02-22 - Added singleL for use in compute...nbody - Bovy (UofT)
"""
floatIn= False
if isinstance(xi,(float,int)):
floatIn= True
xi= numpy.array([xi])
if singleL:
Ls= [L]
else:
Ls= range(L)
CC= numpy.zeros((N,len(Ls),len(xi)))
for l,ll in enumerate(Ls):
for n in range(N):
a= alpha(ll)
if n==0:
CC[n,l]= 1.
continue
elif n==1:
CC[n,l]= 2.*a*xi
if n + 1 != N:
CC[n+1,l]= (2*(n + a)*xi*CC[n,l]-(n + 2*a - 1)*CC[n-1,l])\
/(n+1.)
if floatIn:
return CC[:,:,0]
else:
return CC
def _dC(xi, N, L):
l = numpy.arange(0,L)[numpy.newaxis, :]
CC = _C(xi,N + 1,L, alpha = lambda x: 2*x + 5./2)
CC = numpy.roll(CC, 1, axis=0)[:-1,:]
CC[0, :] = 0
CC *= 2*(2*l + 3./2)
return CC
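# _C above builds Gegenbauer polynomials C_n^alpha(xi) through the standard
# three-term recursion. As a hedged cross-check (illustrative only, not used by
# galpy; the helper name is mine), the same recursion can be compared against
# SciPy's closed-form evaluator:

```python
import numpy
from scipy.special import eval_gegenbauer

def demo_gegenbauer_recursion(xi, alpha, N):
    """Return [C_0^alpha(xi), ..., C_{N-1}^alpha(xi)] via the recursion in _C."""
    CC = numpy.zeros(N)
    CC[0] = 1.
    if N > 1:
        CC[1] = 2.*alpha*xi
    for n in range(1, N - 1):
        # (n+1) C_{n+1} = 2(n+alpha) xi C_n - (n + 2 alpha - 1) C_{n-1}
        CC[n+1] = (2.*(n + alpha)*xi*CC[n] - (n + 2.*alpha - 1.)*CC[n-1])/(n + 1.)
    return CC
```

# With the l-dependent alpha(l) = 2l + 3/2 used above, this reproduces column l
# of _C(xi, N, L).
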
def scf_compute_coeffs_spherical_nbody(pos,N,mass=1.,a=1.):
"""
NAME:
scf_compute_coeffs_spherical_nbody
PURPOSE:
Numerically compute the expansion coefficients for a spherical expansion for a given $N$-body set of points
INPUT:
pos - positions of particles in rectangular coordinates with shape [3,n]
N - size of the Nth dimension of the expansion coefficients
mass= (1.) mass of particles (scalar or array with size n)
a= (1.) parameter used to scale the radius
OUTPUT:
(Acos,Asin) - Expansion coefficients for density dens that can be given to SCFPotential.__init__
HISTORY:
2020-11-18 - Written - Morgan Bennett (UofT)
2021-02-22 - Sped-up - Bovy (UofT)
"""
Acos = numpy.zeros((N,1,1), float)
Asin = None
r= numpy.sqrt(pos[0]**2+pos[1]**2+pos[2]**2)
RhoSum= numpy.einsum('j,ij',mass/(1.+r/a),_C(_RToxi(r,a=a),N,1)[:,0])
n = numpy.arange(0,N)
K = 4*(n + 3./2)/((n + 2)*(n + 1)*(1 + n*(n + 3.)/2.))
Acos[n,0,0] = 2*K*RhoSum
return Acos, Asin
def _scf_compute_determine_dens_kwargs(dens,param):
try:
param[0]= 1.
dens(*param,use_physical=False)
    except Exception:
dens_kw= {}
else:
dens_kw= {'use_physical': False}
return dens_kw
def scf_compute_coeffs_spherical(dens, N, a=1., radial_order=None):
"""
NAME:
scf_compute_coeffs_spherical
PURPOSE:
Numerically compute the expansion coefficients for a given spherical density
INPUT:
dens - A density function that takes a parameter R
N - size of expansion coefficients
a= (1.) parameter used to scale the radius
radial_order - Number of sample points of the radial integral. If None, radial_order=max(20, N + 1)
OUTPUT:
(Acos,Asin) - Expansion coefficients for density dens that can be given to SCFPotential.__init__
HISTORY:
2016-05-18 - Written - Aladdin Seaifan (UofT)
"""
numOfParam = 0
    try:
        dens(0)
        numOfParam=1
    except Exception:
        try:
            dens(0,0)
            numOfParam=2
        except Exception:
            numOfParam=3
param = [0]*numOfParam
dens_kw= _scf_compute_determine_dens_kwargs(dens,param)
def integrand(xi):
r = _xiToR(xi, a)
R = r
param[0] = R
return a**3. * dens(*param,**dens_kw)*(1 + xi)**2. * (1 - xi)**-3. \
* _C(xi, N, 1)[:,0]
Acos = numpy.zeros((N,1,1), float)
Asin = None
Ksample = [max(N + 1, 20)]
    if radial_order is not None:
        Ksample[0] = radial_order
integrated = _gaussianQuadrature(integrand, [[-1., 1.]], Ksample=Ksample)
n = numpy.arange(0,N)
K = 16*numpy.pi*(n + 3./2)/((n + 2)*(n + 1)*(1 + n*(n + 3.)/2.))
Acos[n,0,0] = 2*K*integrated
return Acos, Asin
def scf_compute_coeffs_axi_nbody(pos,N,L,mass=1.,a=1.):
"""
NAME:
scf_compute_coeffs_axi_nbody
PURPOSE:
Numerically compute the expansion coefficients for a given $N$-body set of points assuming that the density is axisymmetric
INPUT:
pos - positions of particles in rectangular coordinates with shape [3,n]
N - size of the Nth dimension of the expansion coefficients
L - size of the Lth dimension of the expansion coefficients
mass= (1.) mass of particles (scalar or array with size n)
a= (1.) parameter used to scale the radius
OUTPUT:
(Acos,Asin) - Expansion coefficients for density dens that can be given to SCFPotential.__init__
HISTORY:
2021-02-22 - Written based on general code - Bovy (UofT)
"""
r= numpy.sqrt(pos[0]**2+pos[1]**2+pos[2]**2)
costheta = pos[2]/r
mass= numpy.atleast_1d(mass)
Acos, Asin= numpy.zeros([N,L,1]), None
Pll= numpy.ones(len(r)) # Set up Assoc. Legendre recursion
# (n,l) dependent constant
n= numpy.arange(0,N)[:,numpy.newaxis]
l= numpy.arange(0,L)[numpy.newaxis,:]
Knl= 0.5*n*(n+4.*l+3.)+(l+1)*(2.*l+1.)
Inl= -Knl*2.*numpy.pi/2.**(8.*l+6.)*gamma(n+4.*l+3.)\
/gamma(n+1)/(n+2.*l+1.5)/gamma(2.*l+1.5)**2/numpy.sqrt(2.*l+1)
# Set up Assoc. Legendre recursion
Plm= Pll
Plmm1= 0.
for ll in range(L):
# Compute Gegenbauer polys for this l
Cn= _C(_RToxi(r,a=a),N,ll,singleL=True)
phinlm= -(r/a)**ll/(1.+r/a)**(2.*ll+1)*Cn[:,0]*Plm
# Acos
Sum= numpy.sum(mass[numpy.newaxis,:]*phinlm,axis=-1)
Acos[:,ll,0]= Sum/Inl[:,ll]
# Recurse Assoc. Legendre
if ll < L:
tmp= Plm
Plm= ((2*ll+1.)*costheta*Plm-ll*Plmm1)/(ll+1)
Plmm1= tmp
return Acos,Asin
def scf_compute_coeffs_axi(dens, N, L, a=1.,radial_order=None, costheta_order=None):
"""
NAME:
scf_compute_coeffs_axi
PURPOSE:
Numerically compute the expansion coefficients for a given axi-symmetric density
INPUT:
dens - A density function that takes a parameter R and z
N - size of the Nth dimension of the expansion coefficients
L - size of the Lth dimension of the expansion coefficients
a - parameter used to shift the basis functions
           radial_order - Number of sample points of the radial integral. If None, radial_order=max(20, N + 3L/2 + 1)
           costheta_order - Number of sample points of the costheta integral. If None, costheta_order=max(20, L + 1)
OUTPUT:
(Acos,Asin) - Expansion coefficients for density dens that can be given to SCFPotential.__init__
HISTORY:
2016-05-20 - Written - Aladdin Seaifan (UofT)
"""
numOfParam = 0
    try:
        dens(0,0)
        numOfParam=2
    except Exception:
        numOfParam=3
param = [0]*numOfParam
dens_kw= _scf_compute_determine_dens_kwargs(dens,param)
def integrand(xi, costheta):
l = numpy.arange(0, L)[numpy.newaxis, :]
r = _xiToR(xi,a)
R = r*numpy.sqrt(1 - costheta**2.)
z = r*costheta
        Legendre = lpmn(0,L-1,costheta)[0].T[numpy.newaxis,:,0]
        dV = (1. + xi)**2. * numpy.power(1. - xi, -4.)
        phi_nl = a**3*(1. + xi)**l * (1. - xi)**(l + 1.)*_C(xi, N, L)[:,:] * Legendre
param[0] = R
param[1] = z
return phi_nl*dV * dens(*param,**dens_kw)
Acos = numpy.zeros((N,L,1), float)
Asin = None
    ##This should save us some computation time since we're only taking the double integral once, rather than L times
Ksample = [max(N + 3*L//2 + 1, 20) , max(L + 1,20) ]
    if radial_order is not None:
        Ksample[0] = radial_order
    if costheta_order is not None:
        Ksample[1] = costheta_order
integrated = _gaussianQuadrature(integrand, [[-1, 1], [-1, 1]], Ksample = Ksample)*(2*numpy.pi)
n = numpy.arange(0,N)[:,numpy.newaxis]
l = numpy.arange(0,L)[numpy.newaxis,:]
K = .5*n*(n + 4*l + 3) + (l + 1)*(2*l + 1)
#I = -K*(4*numpy.pi)/(2.**(8*l + 6)) * gamma(n + 4*l + 3)/(gamma(n + 1)*(n + 2*l + 3./2)*gamma(2*l + 3./2)**2)
##Taking the ln of I will allow bigger size coefficients
lnI = -(8*l + 6)*numpy.log(2) + gammaln(n + 4*l + 3) - gammaln(n + 1) - numpy.log(n + 2*l + 3./2) - 2*gammaln(2*l + 3./2)
I = -K*(4*numpy.pi) * numpy.e**(lnI)
constants = -2.**(-2*l)*(2*l + 1.)**.5
Acos[:,:,0] = 2*I**-1 * integrated*constants
return Acos, Asin
def scf_compute_coeffs_nbody(pos,N,L,mass=1.,a=1.):
"""
NAME:
scf_compute_coeffs_nbody
PURPOSE:
Numerically compute the expansion coefficients for a given $N$-body set of points
INPUT:
pos - positions of particles in rectangular coordinates with shape [3,n]
N - size of the Nth dimension of the expansion coefficients
L - size of the Lth and Mth dimension of the expansion coefficients
mass= (1.) mass of particles (scalar or array with size n)
a= (1.) parameter used to scale the radius
OUTPUT:
(Acos,Asin) - Expansion coefficients for density dens that can be given to SCFPotential.__init__
HISTORY:
2020-11-18 - Written - Morgan Bennett (UofT)
"""
r= numpy.sqrt(pos[0]**2+pos[1]**2+pos[2]**2)
phi= numpy.arctan2(pos[1],pos[0])
costheta= pos[2]/r
sintheta= numpy.sqrt(1.-costheta**2.)
mass= numpy.atleast_1d(mass)
Acos, Asin= numpy.zeros([N,L,L]), numpy.zeros([N,L,L])
Pll= numpy.ones(len(r)) # Set up Assoc. Legendre recursion
# (n,l) dependent constant
n= numpy.arange(0,N)[:,numpy.newaxis]
l= numpy.arange(0,L)[numpy.newaxis,:]
Knl= 0.5*n*(n+4.*l+3.)+(l+1)*(2.*l+1.)
Inl= -Knl*2.*numpy.pi/2.**(8.*l+6.)*gamma(n+4.*l+3.)\
/gamma(n+1)/(n+2.*l+1.5)/gamma(2.*l+1.5)**2
for mm in range(L): # Loop over m
cosmphi= numpy.cos(phi*mm)
sinmphi= numpy.sin(phi*mm)
# Set up Assoc. Legendre recursion
Plm= Pll
Plmm1= 0.
for ll in range(mm,L):
# Compute Gegenbauer polys for this l
Cn= _C(_RToxi(r,a=a),N,ll,singleL=True)
phinlm= -(r/a)**ll/(1.+r/a)**(2.*ll+1)*Cn[:,0]*Plm
# Acos
Sum= numpy.sqrt((2.*ll+1)*gamma(ll-mm+1)/gamma(ll+mm+1))\
*numpy.sum((mass*cosmphi)[numpy.newaxis,:]*phinlm,axis=-1)
Acos[:,ll,mm]= Sum/Inl[:,ll]
# Asin
Sum= numpy.sqrt((2.*ll+1)*gamma(ll-mm+1)/gamma(ll+mm+1))\
*numpy.sum((mass*sinmphi)[numpy.newaxis,:]*phinlm,axis=-1)
Asin[:,ll,mm]= Sum/Inl[:,ll]
# Recurse Assoc. Legendre
if ll < L:
tmp= Plm
Plm= ((2*ll+1.)*costheta*Plm-(ll+mm)*Plmm1)/(ll-mm+1)
Plmm1= tmp
# Recurse Assoc. Legendre
Pll*= -(2*mm+1.)*sintheta
return Acos,Asin
def scf_compute_coeffs(dens,N,L,a=1.,
radial_order=None,costheta_order=None,phi_order=None):
"""
NAME:
scf_compute_coeffs
PURPOSE:
Numerically compute the expansion coefficients for a given triaxial density
INPUT:
dens - A density function that takes a parameter R, z and phi
N - size of the Nth dimension of the expansion coefficients
L - size of the Lth and Mth dimension of the expansion coefficients
a - parameter used to shift the basis functions
           radial_order - Number of sample points of the radial integral. If None, radial_order=max(20, N + 3L/2 + 1)
           costheta_order - Number of sample points of the costheta integral. If None, costheta_order=max(20, L + 1)
           phi_order - Number of sample points of the phi integral. If None, phi_order=max(20, L + 1)
OUTPUT:
(Acos,Asin) - Expansion coefficients for density dens that can be given to SCFPotential.__init__
HISTORY:
2016-05-27 - Written - Aladdin Seaifan (UofT)
"""
dens_kw= _scf_compute_determine_dens_kwargs(dens,[0.1,0.1,0.1])
def integrand(xi, costheta, phi):
l = numpy.arange(0, L)[numpy.newaxis, :, numpy.newaxis]
m = numpy.arange(0, L)[numpy.newaxis,numpy.newaxis,:]
r = _xiToR(xi, a)
R = r*numpy.sqrt(1 - costheta**2.)
z = r*costheta
        Legendre = lpmn(L - 1,L-1,costheta)[0].T[numpy.newaxis,:,:]
        dV = (1. + xi)**2. * numpy.power(1. - xi, -4.)
        phi_nl = - a**3*(1. + xi)**l * (1. - xi)**(l + 1.)*_C(xi, N, L)[:,:,numpy.newaxis] * Legendre
return dens(R,z, phi,**dens_kw) * phi_nl[numpy.newaxis, :,:,:]*numpy.array([numpy.cos(m*phi), numpy.sin(m*phi)])*dV
Acos = numpy.zeros((N,L,L), float)
Asin = numpy.zeros((N,L,L), float)
Ksample = [max(N + 3*L//2 + 1,20), max(L + 1,20 ), max(L + 1,20)]
    if radial_order is not None:
        Ksample[0] = radial_order
    if costheta_order is not None:
        Ksample[1] = costheta_order
    if phi_order is not None:
        Ksample[2] = phi_order
integrated = _gaussianQuadrature(integrand, [[-1., 1.], [-1., 1.], [0, 2*numpy.pi]], Ksample = Ksample)
n = numpy.arange(0,N)[:,numpy.newaxis, numpy.newaxis]
l = numpy.arange(0,L)[numpy.newaxis,:, numpy.newaxis]
m = numpy.arange(0,L)[numpy.newaxis,numpy.newaxis,:]
K = .5*n*(n + 4*l + 3) + (l + 1)*(2*l + 1)
Nln = .5*gammaln(l - m + 1) - .5*gammaln(l + m + 1) - (2*l)*numpy.log(2)
NN = numpy.e**(Nln)
NN[numpy.where(NN == numpy.inf)] = 0 ## To account for the fact that m can't be bigger than l
constants = NN*(2*l + 1.)**.5
lnI = -(8*l + 6)*numpy.log(2) + gammaln(n + 4*l + 3) - gammaln(n + 1) - numpy.log(n + 2*l + 3./2) - 2*gammaln(2*l + 3./2)
I = -K*(4*numpy.pi) * numpy.e**(lnI)
Acos[:,:,:],Asin[:,:,:] = 2*(I**-1.)[numpy.newaxis,:,:,:] * integrated * constants[numpy.newaxis,:,:,:]
return Acos, Asin
def _cartesian(arraySizes, out=None):
"""
NAME:
cartesian
PURPOSE:
Generate a cartesian product of input arrays.
INPUT:
arraySizes - list of size of arrays
out - Array to place the cartesian product in.
OUTPUT:
2-D array of shape (product(arraySizes), len(arraySizes)) containing cartesian products
formed of input arrays.
HISTORY:
2016-06-02 - Obtained from
http://stackoverflow.com/questions/1208118/using-numpy-to-build-an-array-of-all-combinations-of-two-arrays
"""
arrays = []
for i in range(len(arraySizes)):
arrays.append(numpy.arange(0, arraySizes[i]))
arrays = [numpy.asarray(x) for x in arrays]
dtype = arrays[0].dtype
n = numpy.prod([x.size for x in arrays])
if out is None:
out = numpy.zeros([n, len(arrays)], dtype=dtype)
m = n // arrays[0].size
out[:,0] = numpy.repeat(arrays[0], m)
if arrays[1:]:
_cartesian(arraySizes[1:], out=out[0:m,1:])
for j in range(1, arrays[0].size):
out[j*m:(j+1)*m,1:] = out[0:m,1:]
return out
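# _cartesian above enumerates every index tuple of a multidimensional grid
# (first axis slowest). A hedged equivalent using numpy.meshgrid (illustrative
# only; the helper name is mine) yields the same row ordering:

```python
import numpy

def demo_index_product(shape):
    """All index tuples of an array of the given shape, first axis slowest."""
    grids = numpy.meshgrid(*[numpy.arange(s) for s in shape], indexing='ij')
    return numpy.stack(grids, axis=-1).reshape(-1, len(shape))
```
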
def _gaussianQuadrature(integrand, bounds, Ksample=[20], roundoff=0):
"""
NAME:
_gaussianQuadrature
PURPOSE:
Numerically take n integrals over a function that returns a float or an array
INPUT:
integrand - The function you're integrating over.
bounds - The bounds of the integral in the form of [[a_0, b_0], [a_1, b_1], ... , [a_n, b_n]]
where a_i is the lower bound and b_i is the upper bound
Ksample - Number of sample points in the form of [K_0, K_1, ..., K_n] where K_i is the sample point
of the ith integral.
roundoff - if the integral is less than this value, round it to 0.
OUTPUT:
The integral of the function integrand
HISTORY:
2016-05-24 - Written - Aladdin Seaifan (UofT)
"""
##Maps the sample point and weights
xp = numpy.zeros((len(bounds), numpy.max(Ksample)), float)
wp = numpy.zeros((len(bounds), numpy.max(Ksample)), float)
for i in range(len(bounds)):
x,w = leggauss(Ksample[i]) ##Calculates the sample points and weights
a,b = bounds[i]
xp[i, :Ksample[i]] = .5*(b-a)*x + .5*(b+a)
wp[i, :Ksample[i]] = .5*(b - a)*w
##Determines the shape of the integrand
s = 0.
shape=None
s_temp = integrand(*numpy.zeros(len(bounds)))
    if isinstance(s_temp, numpy.ndarray):
shape = s_temp.shape
s = numpy.zeros(shape, float)
#gets all combinations of indices from each integrand
li = _cartesian(Ksample)
##Performs the actual integration
for i in range(li.shape[0]):
index = (numpy.arange(len(bounds)),li[i])
s+= numpy.prod(wp[index])*integrand(*xp[index])
##Rounds values that are less than roundoff to zero
    if shape is not None:
        s[numpy.fabs(s) < roundoff] = 0
    else:
        s *= numpy.fabs(s) > roundoff
return s
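# The affine map used in _gaussianQuadrature above, x' = (b-a)/2 * x + (b+a)/2
# with weights scaled by (b-a)/2, turns Gauss-Legendre nodes on [-1, 1] into a
# rule that is exact for polynomials of degree <= 2K-1 on [a, b]. A
# one-dimensional sketch (illustrative only; the helper name is mine):

```python
import numpy
from numpy.polynomial.legendre import leggauss

def demo_leggauss_interval(f, a, b, K):
    """Integrate f over [a, b] with a K-point Gauss-Legendre rule."""
    x, w = leggauss(K)                 # nodes/weights on [-1, 1]
    xp = .5*(b - a)*x + .5*(b + a)     # same mapping as _gaussianQuadrature
    wp = .5*(b - a)*w
    return numpy.sum(wp*f(xp))
```

# For example, the integral of x**3 over [0, 2] equals 4 and is recovered
# exactly for any K >= 2.
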
import numpy
from packaging.version import parse as parse_version
_NUMPY_VERSION= parse_version(numpy.__version__)
_NUMPY_1_23= (_NUMPY_VERSION > parse_version('1.22'))\
*(_NUMPY_VERSION < parse_version('1.24')) # For testing 1.23 precision issues
import unittest
from numpy.testing import assert_allclose
from scipy.misc import derivative as deriv
from galpy.potential import SpiralArmsPotential as spiral
class TestSpiralArmsPotential(unittest.TestCase):
def test_constructor(self):
"""Test that constructor initializes and converts units correctly."""
sp = spiral() # default values
assert sp._amp == 1
assert sp._N == -2 # trick to change to left handed coordinate system
assert sp._alpha == -0.2
assert sp._r_ref == 1
assert sp._phi_ref == 0
assert sp._Rs == 0.3
assert sp._H == 0.125
assert sp._Cs == [1]
assert sp._omega == 0
assert sp._rho0 == 1 / (4 * numpy.pi)
assert sp.isNonAxi == True
assert sp.hasC == True
assert sp.hasC_dxdv == True
assert sp._ro == 8
assert sp._vo == 220
def test_Rforce(self):
"""Tests Rforce against a numerical derivative -d(Potential) / dR."""
dx = 1e-8
rtol = 1e-5 # relative tolerance
pot = spiral()
assert_allclose(pot.Rforce(1., 0.), -deriv(lambda x: pot(x, 0.), 1., dx=dx), rtol=rtol)
R, z, t = 0.3, 0, 0
assert_allclose(pot.Rforce(R, z, 0, t), -deriv(lambda x: pot(x, z, 0, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi/2.2, t), -deriv(lambda x: pot(x, z, numpy.pi/2.2, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi, t), -deriv(lambda x: pot(x, z, numpy.pi, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 3.7*numpy.pi/2, t), -deriv(lambda x: pot(x, z, 3.7*numpy.pi/2, t), R, dx=dx), rtol=rtol)
R, z, t = 1, -.7, 3
assert_allclose(pot.Rforce(R, z, 0, t), -deriv(lambda x: pot(x, z, 0, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi/2, t), -deriv(lambda x: pot(x, z, numpy.pi/2, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi, t), -deriv(lambda x: pot(x, z, numpy.pi, 0), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 3.3*numpy.pi/2, t), -deriv(lambda x: pot(x, z, 3.3*numpy.pi/2, t), R, dx=dx), rtol=rtol)
R, z = 3.14, .7
assert_allclose(pot.Rforce(R, z, 0), -deriv(lambda x: pot(x, z, 0), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi / 2), -deriv(lambda x: pot(x, z, numpy.pi / 2), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi), -deriv(lambda x: pot(x, z, numpy.pi), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(x, z, 3*numpy.pi/2), R, dx=dx), rtol=rtol)
pot = spiral(amp=13, N=7, alpha=-0.3, r_ref=0.5, phi_ref=0.3, Rs=0.7, H=0.7, Cs=[1, 2, 3], omega=3)
assert_allclose(pot.Rforce(1., 0.), -deriv(lambda x: pot(x, 0.), 1., dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(0.01, 0.), -deriv(lambda x: pot(x, 0.), 0.01, dx=dx), rtol=rtol)
R, z, t = 0.3, 0, 1.123
assert_allclose(pot.Rforce(R, z, 0, t), -deriv(lambda x: pot(x, z, 0, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi/2, t), -deriv(lambda x: pot(x, z, numpy.pi/2, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi, t), -deriv(lambda x: pot(x, z, numpy.pi, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 3*numpy.pi/2, t), -deriv(lambda x: pot(x, z, 3*numpy.pi/2, t), R, dx=dx), rtol=rtol)
R, z, t = 1, -.7, 121
assert_allclose(pot.Rforce(R, z, 0, t), -deriv(lambda x: pot(x, z, 0, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi / 2, t), -deriv(lambda x: pot(x, z, numpy.pi / 2, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi, t), -deriv(lambda x: pot(x, z, numpy.pi, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 3* numpy.pi/2, t), -deriv(lambda x: pot(x, z, 3* numpy.pi/2, t), R, dx=dx), rtol=rtol)
R, z, t = 3.14, .7, 0.123
assert_allclose(pot.Rforce(R, z, 0, t), -deriv(lambda x: pot(x, z, 0, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi/2, t), -deriv(lambda x: pot(x, z, numpy.pi / 2, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi, t), -deriv(lambda x: pot(x, z, numpy.pi, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 3*numpy.pi/2, t), -deriv(lambda x: pot(x, z, 3*numpy.pi/2, t), R, dx=dx), rtol=rtol)
pot = spiral(amp=13, N=1, alpha=0.01, r_ref=1.12, phi_ref=0, Cs=[1, 1.5, 8.], omega=-3)
assert_allclose(pot.Rforce(1., 0.), -deriv(lambda x: pot(x, 0.), 1., dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(0.1, 0.), -deriv(lambda x: pot(x, 0.), 0.1, dx=dx), rtol=rtol)
R, z, t = 0.3, 0, -4.5
assert_allclose(pot.Rforce(R, z, 0, t), -deriv(lambda x: pot(x, z, 0, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi/2, t), -deriv(lambda x: pot(x, z, numpy.pi/2, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi, t), -deriv(lambda x: pot(x, z, numpy.pi, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 3* numpy.pi/2, t), -deriv(lambda x: pot(x, z, 3* numpy.pi/2, t), R, dx=dx), rtol=rtol)
R, z, t = 1, -.7, -123
assert_allclose(pot.Rforce(R, z, 0, t), -deriv(lambda x: pot(x, z, 0, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi / 2, t), -deriv(lambda x: pot(x, z, numpy.pi / 2, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi, t), -deriv(lambda x: pot(x, z, numpy.pi, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 3* numpy.pi/2, t), -deriv(lambda x: pot(x, z, 3*numpy.pi/2, t), R, dx=dx), rtol=rtol)
R, z, t = 3.14, .7, -123.123
assert_allclose(pot.Rforce(R, z, 0, t), -deriv(lambda x: pot(x, z, 0, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi/2, t), -deriv(lambda x: pot(x, z, numpy.pi/2, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi, t), -deriv(lambda x: pot(x, z, numpy.pi, t), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 3*numpy.pi/2, t), -deriv(lambda x: pot(x, z, 3*numpy.pi/2, t), R, dx=dx), rtol=rtol)
pot = spiral(N=10, r_ref=15, phi_ref=5, Cs=[8./(3.*numpy.pi), 0.5, 8./(15.*numpy.pi)])
assert_allclose(pot.Rforce(1., 0.), -deriv(lambda x: pot(x, 0.), 1., dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(0.01, 0.), -deriv(lambda x: pot(x, 0.), 0.01, dx=dx), rtol=rtol)
R, z = 0.3, 0
assert_allclose(pot.Rforce(R, z, 0), -deriv(lambda x: pot(x, z, 0), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi/2.1), -deriv(lambda x: pot(x, z, numpy.pi/2.1), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 1.3 *numpy.pi), -deriv(lambda x: pot(x, z, 1.3 *numpy.pi), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(x, z, 3*numpy.pi/2), R, dx=dx), rtol=rtol)
R, z = 1, -.7
assert_allclose(pot.Rforce(R, z, 0), -deriv(lambda x: pot(x, z, 0), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi / 2), -deriv(lambda x: pot(x, z, numpy.pi / 2), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, .9 *numpy.pi), -deriv(lambda x: pot(x, z, .9 *numpy.pi), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 3.3*numpy.pi/2), -deriv(lambda x: pot(x, z, 3.3*numpy.pi/2), R, dx=dx), rtol=rtol)
R, z = 3.14, .7
assert_allclose(pot.Rforce(R, z, 0), -deriv(lambda x: pot(x, z, 0), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, numpy.pi / 2.3), -deriv(lambda x: pot(x, z, numpy.pi / 2.3), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 1.1 *numpy.pi), -deriv(lambda x: pot(x, z, 1.1 *numpy.pi), R, dx=dx), rtol=rtol)
assert_allclose(pot.Rforce(R, z, 3.5*numpy.pi/2), -deriv(lambda x: pot(x, z, 3.5*numpy.pi/2), R, dx=dx), rtol=rtol)
def test_zforce(self):
"""Test zforce against a numerical derivative -d(Potential) / dz"""
dx = 1e-8
rtol = 1e-6 # relative tolerance
pot = spiral()
# zforce is zero in the plane of the galaxy
assert_allclose(0, pot.zforce(0.3, 0, 0), rtol=rtol)
assert_allclose(0, pot.zforce(0.3, 0, numpy.pi/2), rtol=rtol)
assert_allclose(0, pot.zforce(0.3, 0, numpy.pi), rtol=rtol)
assert_allclose(0, pot.zforce(0.3, 0, 3*numpy.pi/2), rtol=rtol)
# test zforce against -dPhi/dz
R, z = 1, -.7
assert_allclose(pot.zforce(R, z, 0), -deriv(lambda x: pot(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi/2), -deriv(lambda x: pot(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi), -deriv(lambda x: pot(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
R, z = 3.7, .7
assert_allclose(pot.zforce(R, z, 0), -deriv(lambda x: pot(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi/2), -deriv(lambda x: pot(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi), -deriv(lambda x: pot(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
pot = spiral(amp=13, N=3, alpha=-.3, r_ref=0.5, phi_ref=0.3, Rs=0.7, H=0.7, Cs=[1, 2], omega=3)
# zforce is zero in the plane of the galaxy
assert_allclose(0, pot.zforce(0.3, 0, 0, 1), rtol=rtol)
assert_allclose(0, pot.zforce(0.6, 0, numpy.pi/2, 2), rtol=rtol)
assert_allclose(0, pot.zforce(0.9, 0, numpy.pi, 3), rtol=rtol)
assert_allclose(0, pot.zforce(1.2, 0, 2*numpy.pi, 4), rtol=rtol)
# test zforce against -dPhi/dz
R, z, t = 1, -.7, 123
assert_allclose(pot.zforce(R, z, 0, t), -deriv(lambda x: pot(R, x, 0, t), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi/2, t), -deriv(lambda x: pot(R, x, numpy.pi/2, t), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi, t), -deriv(lambda x: pot(R, x, numpy.pi, t), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, 3*numpy.pi/2, t), -deriv(lambda x: pot(R, x, 3*numpy.pi/2, t), z, dx=dx), rtol=rtol)
R, z = 3.7, .7
assert_allclose(pot.zforce(R, z, 0), -deriv(lambda x: pot(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi/2), -deriv(lambda x: pot(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi), -deriv(lambda x: pot(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
pot = spiral(N=1, alpha=-0.2, r_ref=.5, Cs=[1, 1.5], omega=-3)
# zforce is zero in the plane of the galaxy
assert_allclose(0, pot.zforce(0.3, 0, 0, 123), rtol=rtol)
assert_allclose(0, pot.zforce(0.3, 0, numpy.pi/2, -321), rtol=rtol)
assert_allclose(0, pot.zforce(32, 0, numpy.pi, 1.23), rtol=rtol)
assert_allclose(0, pot.zforce(0.123, 0, 3.33*numpy.pi/2, -3.21), rtol=rtol)
# test zforce against -dPhi/dz
R, z = 1, -1.5
assert_allclose(pot.zforce(R, z, 0), -deriv(lambda x: pot(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi/2), -deriv(lambda x: pot(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi), -deriv(lambda x: pot(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, 3*numpy.pi/2.1), -deriv(lambda x: pot(R, x, 3*numpy.pi/2.1), z, dx=dx), rtol=rtol)
R, z, t = 3.7, .7, -100
assert_allclose(pot.zforce(R, z, 0, t), -deriv(lambda x: pot(R, x, 0, t), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi/2, t), -deriv(lambda x: pot(R, x, numpy.pi/2, t), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi, t), -deriv(lambda x: pot(R, x, numpy.pi, t), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, 3.4*numpy.pi/2, t), -deriv(lambda x: pot(R, x, 3.4*numpy.pi/2, t), z, dx=dx), rtol=rtol)
pot = spiral(N=5, r_ref=1.5, phi_ref=0.5, Cs=[8./(3. *numpy.pi), 0.5, 8./(15. *numpy.pi)])
# zforce is zero in the plane of the galaxy
assert_allclose(0, pot.zforce(0.3, 0, 0), rtol=rtol)
assert_allclose(0, pot.zforce(0.4, 0, numpy.pi/2), rtol=rtol)
assert_allclose(0, pot.zforce(0.5, 0, numpy.pi*1.1), rtol=rtol)
assert_allclose(0, pot.zforce(0.6, 0, 3*numpy.pi/2), rtol=rtol)
# test zforce against -dPhi/dz
R, z = 1, -.7
assert_allclose(pot.zforce(R, z, 0), -deriv(lambda x: pot(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi/2), -deriv(lambda x: pot(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi), -deriv(lambda x: pot(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
R, z = 37, 1.7
assert_allclose(pot.zforce(R, z, 0), -deriv(lambda x: pot(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi/2), -deriv(lambda x: pot(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, numpy.pi), -deriv(lambda x: pot(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.zforce(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
def test_phitorque(self):
"""Test phitorque against a numerical derivative -d(Potential) / d(phi)."""
dx = 1e-8
rtol = 1e-5 # relative tolerance
pot = spiral()
R, z = .3, 0
assert_allclose(pot.phitorque(R, z, 0), -deriv(lambda x: pot(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi/2), -deriv(lambda x: pot(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi), -deriv(lambda x: pot(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z = .1, -.3
assert_allclose(pot.phitorque(R, z, 0), -deriv(lambda x: pot(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi/2), -deriv(lambda x: pot(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi), -deriv(lambda x: pot(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z = 3, 7
assert_allclose(pot.phitorque(R, z, 0), -deriv(lambda x: pot(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi/2.1), -deriv(lambda x: pot(R, z, x), numpy.pi/2.1, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi), -deriv(lambda x: pot(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
pot = spiral(N=7, alpha=-0.3, r_ref=0.5, phi_ref=0.3, Rs=0.7, H=0.7, Cs=[1, 1, 1], omega=2 *numpy.pi)
R, z, t = .3, 0, 1.2
assert_allclose(pot.phitorque(R, z, 0, 0), -deriv(lambda x: pot(R, z, x, 0), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi/2, t), -deriv(lambda x: pot(R, z, x, t), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi, t), -deriv(lambda x: pot(R, z, x, t), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, 3*numpy.pi/2, t), -deriv(lambda x: pot(R, z, x, t), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z = 1, -.7
assert_allclose(pot.phitorque(R, z, 0), -deriv(lambda x: pot(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi/2), -deriv(lambda x: pot(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi), -deriv(lambda x: pot(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z, t = 3.7, .7, -5.1
assert_allclose(pot.phitorque(R, z, 0, t), -deriv(lambda x: pot(R, z, x, t), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi/2, t), -deriv(lambda x: pot(R, z, x, t), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi, t), -deriv(lambda x: pot(R, z, x, t), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, 3.2*numpy.pi/2, t), -deriv(lambda x: pot(R, z, x, t), 3.2*numpy.pi/2, dx=dx), rtol=rtol)
pot = spiral(N=1, alpha=0.1, phi_ref=0, Cs=[1, 1.5], omega=-.333)
R, z = .3, 0
assert_allclose(pot.phitorque(R, z, 0), -deriv(lambda x: pot(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi/2), -deriv(lambda x: pot(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi), -deriv(lambda x: pot(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, 3.2*numpy.pi/2), -deriv(lambda x: pot(R, z, x), 3.2*numpy.pi/2, dx=dx), rtol=rtol)
R, z, t = 1, -.7, 123
assert_allclose(pot.phitorque(R, z, 0, t), -deriv(lambda x: pot(R, z, x, t), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi/2, t), -deriv(lambda x: pot(R, z, x, t), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi, t), -deriv(lambda x: pot(R, z, x, t), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, 3*numpy.pi/2, t), -deriv(lambda x: pot(R, z, x, t), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z, t = 3, 4, 5
assert_allclose(pot.phitorque(R, z, 0, t), -deriv(lambda x: pot(R, z, x, t), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi/2, t), -deriv(lambda x: pot(R, z, x, t), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi, t), -deriv(lambda x: pot(R, z, x, t), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, 3*numpy.pi/2, t), -deriv(lambda x: pot(R, z, x, t), 3*numpy.pi/2, dx=dx), rtol=rtol)
pot = spiral(N=4, r_ref=1.5, phi_ref=5, Cs=[8./(3. *numpy.pi), 0.5, 8./(15. *numpy.pi)])
R, z = .3, 0
assert_allclose(pot.phitorque(R, z, 0), -deriv(lambda x: pot(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi/2), -deriv(lambda x: pot(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi), -deriv(lambda x: pot(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z = 1, -.7
assert_allclose(pot.phitorque(R, z, 0), -deriv(lambda x: pot(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi/2), -deriv(lambda x: pot(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi), -deriv(lambda x: pot(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, 3*numpy.pi/2), -deriv(lambda x: pot(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z = 2.1, .12345
assert_allclose(pot.phitorque(R, z, 0), -deriv(lambda x: pot(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi/2), -deriv(lambda x: pot(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, numpy.pi), -deriv(lambda x: pot(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phitorque(R, z, 2 *numpy.pi), -deriv(lambda x: pot(R, z, x), 2*numpy.pi, dx=dx), rtol=rtol)
def test_R2deriv(self):
"""Test R2deriv against a numerical derivative -d(Rforce) / dR."""
dx = 1e-8
rtol = 1e-6 # relative tolerance
pot = spiral()
assert_allclose(pot.R2deriv(1., 0.), -deriv(lambda x: pot.Rforce(x, 0.), 1., dx=dx), rtol=rtol)
R, z = 0.3, 0
assert_allclose(pot.R2deriv(R, z, 0), -deriv(lambda x: pot.Rforce(x, z, 0), R, dx=dx), rtol=rtol)
assert_allclose(pot.R2deriv(R, z, numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, numpy.pi/2), R, dx=dx), rtol=rtol)
assert_allclose(pot.R2deriv(R, z, numpy.pi), -deriv(lambda x: pot.Rforce(x, z, numpy.pi), R, dx=dx), rtol=rtol)
assert_allclose(pot.R2deriv(R, z, 3.1*numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, 3.1*numpy.pi/2), R, dx=dx), rtol=rtol)
R, z = 1, -.7
assert_allclose(pot.R2deriv(R, z, 0), -deriv(lambda x: pot.Rforce(x, z, 0), R, dx=dx), rtol=rtol)
assert_allclose(pot.R2deriv(R, z, numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, numpy.pi / 2), R, dx=dx), rtol=rtol)
assert_allclose(pot.R2deriv(R, z, numpy.pi), -deriv(lambda x: pot.Rforce(x, z, numpy.pi), R, dx=dx), rtol=rtol)
assert_allclose(pot.R2deriv(R, z, 2*numpy.pi), -deriv(lambda x: pot.Rforce(x, z, 2*numpy.pi), R, dx=dx), rtol=rtol)
R, z = 5, .9
assert_allclose(pot.R2deriv(R, z, 0), -deriv(lambda x: pot.Rforce(x, z, 0), R, dx=dx), rtol=rtol)
assert_allclose(pot.R2deriv(R, z, numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, numpy.pi / 2), R, dx=dx), rtol=rtol)
assert_allclose(pot.R2deriv(R, z, numpy.pi), -deriv(lambda x: pot.Rforce(x, z, numpy.pi), R, dx=dx), rtol=rtol)
assert_allclose(pot.R2deriv(R, z, 3 * numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, 3*numpy.pi/2), R, dx=dx), rtol=rtol)
# pot = spiral(N=1, alpha=-.3, r_ref=.1, phi_ref=numpy.pi, Rs=1, H=1, Cs=[1, 2, 3], omega=3)
# assert_allclose(pot.R2deriv(1e-3, 0.), -deriv(lambda x: pot.Rforce(x, 0.), 1e-3, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(1., 0.), -deriv(lambda x: pot.Rforce(x, 0.), 1., dx=dx), rtol=rtol)
# R, z = 0.3, 0
# assert_allclose(pot.R2deriv(R, z, 0), -deriv(lambda x: pot.Rforce(x, z, 0), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, numpy.pi/2), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi), -deriv(lambda x: pot.Rforce(x, z, numpy.pi), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, 3 * numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, 3*numpy.pi/2), R, dx=dx), rtol=rtol)
# R, z = 1, -.7
# assert_allclose(pot.R2deriv(R, z, 0), -deriv(lambda x: pot.Rforce(x, z, 0), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, numpy.pi / 2), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi), -deriv(lambda x: pot.Rforce(x, z, numpy.pi), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, 3.1*numpy.pi/2), -deriv(lambda x: pot.Rforce(x, z, 3.1*numpy.pi/2), R, dx=dx), rtol=rtol)
# R, z = 5, .9
# assert_allclose(pot.R2deriv(R, z, 0), -deriv(lambda x: pot.Rforce(x, z, 0), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi / 2.4), -deriv(lambda x: pot.Rforce(x, z, numpy.pi / 2.4), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi), -deriv(lambda x: pot.Rforce(x, z, numpy.pi), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, 3 * numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, 3*numpy.pi/2), R, dx=dx), rtol=rtol)
#
# pot = spiral(N=7, alpha=.1, r_ref=1, phi_ref=1, Rs=1.1, H=.1, Cs=[8./(3. *numpy.pi), 0.5, 8./(15. *numpy.pi)], omega=-.3)
# assert_allclose(pot.R2deriv(1., 0.), -deriv(lambda x: pot.Rforce(x, 0.), 1., dx=dx), rtol=rtol)
# R, z = 0.3, 0
# assert_allclose(pot.R2deriv(R, z, 0), -deriv(lambda x: pot.Rforce(x, z, 0), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.Rforce(x, z, numpy.pi/2), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi), -deriv(lambda x: pot.Rforce(x, z, numpy.pi), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, 3 * numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, 3*numpy.pi/2), R, dx=dx), rtol=rtol)
# R, z = 1, -.7
# assert_allclose(pot.R2deriv(R, z, 0), -deriv(lambda x: pot.Rforce(x, z, 0), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, numpy.pi / 2), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi), -deriv(lambda x: pot.Rforce(x, z, numpy.pi), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, 3 * numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, 3*numpy.pi/2), R, dx=dx), rtol=rtol)
# R, z = 5, .9
# assert_allclose(pot.R2deriv(R, z, 0), -deriv(lambda x: pot.Rforce(x, z, 0), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, numpy.pi / 2), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi), -deriv(lambda x: pot.Rforce(x, z, numpy.pi), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, 3 * numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, 3*numpy.pi/2), R, dx=dx), rtol=rtol)
#
# pot = spiral(N=4, alpha=numpy.pi/2, r_ref=1, phi_ref=1, Rs=.7, H=.77, Cs=[3, 4], omega=-1.3)
# assert_allclose(pot.R2deriv(1e-3, 0.), -deriv(lambda x: pot.Rforce(x, 0.), 1e-3, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(1., 0.), -deriv(lambda x: pot.Rforce(x, 0.), 1., dx=dx), rtol=rtol)
# R, z = 0.3, 0
# assert_allclose(pot.R2deriv(R, z, 0), -deriv(lambda x: pot.Rforce(x, z, 0), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, numpy.pi/2), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi), -deriv(lambda x: pot.Rforce(x, z, numpy.pi), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, 3 * numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, 3*numpy.pi/2), R, dx=dx), rtol=rtol)
# R, z = 1, -.7
# assert_allclose(pot.R2deriv(R, z, 0), -deriv(lambda x: pot.Rforce(x, z, 0), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi / 2), -deriv(lambda x: pot.Rforce(x, z, numpy.pi / 2), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, numpy.pi), -deriv(lambda x: pot.Rforce(x, z, numpy.pi), R, dx=dx), rtol=rtol)
# assert_allclose(pot.R2deriv(R, z, .33*numpy.pi/2), -deriv(lambda x: pot.Rforce(x, z, .33*numpy.pi/2), R, dx=dx), rtol=rtol)
def test_z2deriv(self):
"""Test z2deriv against a numerical derivative -d(zforce) / dz"""
dx = 1e-8
rtol = 1e-6 # relative tolerance
pot = spiral()
R, z = .3, 0
assert_allclose(pot.z2deriv(R, z, 0), -deriv(lambda x: pot.zforce(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi), -deriv(lambda x: pot.zforce(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
R, z = 1, -.3
assert_allclose(pot.z2deriv(R, z, 0), -deriv(lambda x: pot.zforce(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi), -deriv(lambda x: pot.zforce(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
R, z = 1.2, .1
assert_allclose(pot.z2deriv(R, z, 0), -deriv(lambda x: pot.zforce(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi), -deriv(lambda x: pot.zforce(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
pot = spiral(N=3, alpha=-0.3, r_ref=.25, Cs=[8./(3. *numpy.pi), 0.5, 8./(15. *numpy.pi)])
R, z = .3, 0
assert_allclose(pot.z2deriv(R, z, 0), -deriv(lambda x: pot.zforce(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi), -deriv(lambda x: pot.zforce(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
R, z = 1, -.3
assert_allclose(pot.z2deriv(R, z, 0), -deriv(lambda x: pot.zforce(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi), -deriv(lambda x: pot.zforce(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
R, z = 3.3, .7
assert_allclose(pot.z2deriv(R, z, 0), -deriv(lambda x: pot.zforce(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi), -deriv(lambda x: pot.zforce(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
pot = spiral(amp=5, N=1, alpha=0.1, r_ref=0.5, phi_ref=0.3, Rs=0.7, H=0.7, Cs=[1, 2], omega=3)
R, z = .3, 0
assert_allclose(pot.z2deriv(R, z, 0), -deriv(lambda x: pot.zforce(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi), -deriv(lambda x: pot.zforce(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
R, z = 1, -.3
assert_allclose(pot.z2deriv(R, z, 0), -deriv(lambda x: pot.zforce(R, x, 0), z, dx=dx), rtol=2*rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi), -deriv(lambda x: pot.zforce(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, 3*numpy.pi/2), z, dx=dx), rtol=2*rtol)
R, z = 3.3, .7
assert_allclose(pot.z2deriv(R, z, 0), -deriv(lambda x: pot.zforce(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi), -deriv(lambda x: pot.zforce(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
pot = spiral(N=1, alpha=1, r_ref=3, phi_ref=numpy.pi, Cs=[1, 2], omega=-3)
R, z = .7, 0
assert_allclose(pot.z2deriv(R, z, 0), -deriv(lambda x: pot.zforce(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi), -deriv(lambda x: pot.zforce(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
R, z = 1, -.3
assert_allclose(pot.z2deriv(R, z, 0), -deriv(lambda x: pot.zforce(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi), -deriv(lambda x: pot.zforce(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, 3*numpy.pi/2), z, dx=dx), rtol=2*rtol)
R, z = 2.1, .99
assert_allclose(pot.z2deriv(R, z, 0), -deriv(lambda x: pot.zforce(R, x, 0), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, numpy.pi/2), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, numpy.pi), -deriv(lambda x: pot.zforce(R, x, numpy.pi), z, dx=dx), rtol=rtol)
assert_allclose(pot.z2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.zforce(R, x, 3*numpy.pi/2), z, dx=dx), rtol=rtol)
def test_phi2deriv(self):
"""Test phi2deriv against a numerical derivative -d(phitorque) / d(phi)."""
dx = 1e-8
rtol = _NUMPY_1_23 * 3e-7 + (1 - _NUMPY_1_23) * 1e-7  # relative tolerance
pot = spiral()
R, z = .3, 0
assert_allclose(pot.phi2deriv(R, z, 0), -deriv(lambda x: pot.phitorque(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi/2.1), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi/2.1, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, 3*numpy.pi/2.5), -deriv(lambda x: pot.phitorque(R, z, x), 3*numpy.pi/2.5, dx=dx), rtol=rtol)
R, z = 1, -.3
assert_allclose(pot.phi2deriv(R, z, 0), -deriv(lambda x: pot.phitorque(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z = 3.3, .7
assert_allclose(pot.phi2deriv(R, z, 0), -deriv(lambda x: pot.phitorque(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi/2.1), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi/2.1, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
pot = spiral(amp=13, N=1, alpha=-.3, r_ref=0.5, phi_ref=0.1, Rs=0.7, H=0.7, Cs=[1, 2, 3], omega=3)
R, z = .3, 0
assert_allclose(pot.phi2deriv(R, z, 0), -deriv(lambda x: pot.phitorque(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, 3.3*numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), 3.3*numpy.pi/2, dx=dx), rtol=rtol)
R, z = 1, -.3
assert_allclose(pot.phi2deriv(R, z, 0), -deriv(lambda x: pot.phitorque(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z = 3.3, .7
assert_allclose(pot.phi2deriv(R, z, 0), -deriv(lambda x: pot.phitorque(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi/2.1), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi/2.1, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
pot = spiral(amp=13, N=5, alpha=0.1, r_ref=.3, phi_ref=.1, Rs=0.77, H=0.747, Cs=[3, 2], omega=-3)
R, z = .3, 0
assert_allclose(pot.phi2deriv(R, z, 0), -deriv(lambda x: pot.phitorque(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z = 1, -.3
assert_allclose(pot.phi2deriv(R, z, 0), -deriv(lambda x: pot.phitorque(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z = 3.3, .7
assert_allclose(pot.phi2deriv(R, z, 0), -deriv(lambda x: pot.phitorque(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi/2.1), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi/2.1, dx=dx), rtol=rtol*3)
assert_allclose(pot.phi2deriv(R, z, numpy.pi), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol*3)
pot = spiral(amp=11, N=7, alpha=.777, r_ref=7, phi_ref=.7, Cs=[8./(3. *numpy.pi), 0.5, 8./(15. *numpy.pi)])
R, z = .7, 0
assert_allclose(pot.phi2deriv(R, z, 0), -deriv(lambda x: pot.phitorque(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi/2, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z = 1, -.33
assert_allclose(pot.phi2deriv(R, z, 0), -deriv(lambda x: pot.phitorque(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi/2.2), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi/2.2, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
R, z = 1.123, .123
assert_allclose(pot.phi2deriv(R, z, 0), -deriv(lambda x: pot.phitorque(R, z, x), 0, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi/2.1), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi/2.1, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, numpy.pi), -deriv(lambda x: pot.phitorque(R, z, x), numpy.pi, dx=dx), rtol=rtol)
assert_allclose(pot.phi2deriv(R, z, 3*numpy.pi/2), -deriv(lambda x: pot.phitorque(R, z, x), 3*numpy.pi/2, dx=dx), rtol=rtol)
def test_dens(self):
"""Test dens against density obtained using Poisson's equation."""
rtol = 1e-2 # relative tolerance (this one isn't as precise)
pot = spiral()
assert_allclose(pot.dens(1, 0, 0, forcepoisson=False), pot.dens(1, 0, 0, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(1, 1, .5, forcepoisson=False), pot.dens(1, 1, .5, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(1, -1, -1, forcepoisson=False), pot.dens(1, -1, -1, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(.1, .1, .1, forcepoisson=False), pot.dens(.1, .1, .1, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(33, .777, .747, forcepoisson=False), pot.dens(33, .777, .747, forcepoisson=True), rtol=rtol)
pot = spiral(amp=3, N=5, alpha=.3, r_ref=.7, omega=5)
assert_allclose(pot.dens(1, 0, 0, forcepoisson=False), pot.dens(1, 0, 0, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(1.2, 1.2, 1.2, forcepoisson=False), pot.dens(1.2, 1.2, 1.2, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(1, -1, -1, forcepoisson=False), pot.dens(1, -1, -1, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(.1, .1, .1, forcepoisson=False), pot.dens(.1, .1, .1, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(33.3, .007, .747, forcepoisson=False), pot.dens(33.3, .007, .747, forcepoisson=True), rtol=rtol)
pot = spiral(amp=0.6, N=3, alpha=.24, r_ref=1, phi_ref=numpy.pi, Cs=[8./(3. *numpy.pi), 0.5, 8./(15. *numpy.pi)], omega=-3)
assert_allclose(pot.dens(1, 0, 0, forcepoisson=False), pot.dens(1, 0, 0, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(1, 1, 1, forcepoisson=False), pot.dens(1, 1, 1, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(1, -1, -1, forcepoisson=False), pot.dens(1, -1, -1, forcepoisson=True), rtol=rtol)
# assert_allclose(pot.dens(.1, .1, .1, forcepoisson=False), pot.dens(.1, .1, .1, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(3.33, -7.77, -.747, forcepoisson=False), pot.dens(3.33, -7.77, -.747, forcepoisson=True), rtol=rtol)
pot = spiral(amp=100, N=4, alpha=numpy.pi/2, r_ref=1, phi_ref=1, Rs=7, H=77, Cs=[3, 1, 1], omega=-1.3)
assert_allclose(pot.dens(1, 0, 0, forcepoisson=False), pot.dens(1, 0, 0, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(3, 2, numpy.pi, forcepoisson=False), pot.dens(3, 2, numpy.pi, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(1, -1, -1, forcepoisson=False), pot.dens(1, -1, -1, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(.1, .123, .1, forcepoisson=False), pot.dens(.1, .123, .1, forcepoisson=True), rtol=rtol)
assert_allclose(pot.dens(333, -.777, .747, forcepoisson=False), pot.dens(333, -.777, .747, forcepoisson=True), rtol=rtol)
    def test_Rzderiv(self):
        """Test Rzderiv against a numerical derivative."""
        dx = 1e-8
        rtol = _NUMPY_1_23 * 3e-6 + (1 - _NUMPY_1_23) * 1e-6
        # Check each potential at its own grid of (R, z, phi, t) points.
        pots_and_points = [
            (spiral(),
             [(1, 0, 0, 0), (0.7, 0.3, numpy.pi/3, 0), (1.1, -0.3, numpy.pi/4.2, 3),
              (.777, .747, .343, 2.5), (12, 1, 2, 3), (3, 4, 5, 6),
              (5, -.7, 3*numpy.pi/2, 5), (11, 11, 11, 1.123), (4, 7, 2, 10000),
              (.01, 0, 0, 0), (1.23, 0, 44, 343), (7, 7, 7, 7)]),
            (spiral(amp=13, N=7, alpha=.1, r_ref=1.123, phi_ref=.3, Rs=0.777,
                    H=.5, Cs=[4.5], omega=-3.4),
             [(1, 0, 0, 0), (.777, 0.333, numpy.pi/3, 0.), (1.1, -0.3, numpy.pi/4.2, 3),
              (.777, .747, .343, 2.5), (12, 1, 2, 3), (3, 4, 5, 6),
              (2, -.7, 3*numpy.pi/2, 5), (11, 11, 11, 1.123), (4, 7, 2, 10000),
              (.01, 0, 0, 0), (1.23, 0, 44, 343), (7, 7, 7, 7)]),
            (spiral(amp=11, N=2, alpha=.777, r_ref=7, Cs=[8.], omega=0.1),
             [(1, 0, 0, 0), (0.7, 0.3, numpy.pi/12, 0), (1.1, -0.3, numpy.pi/4.2, 3),
              (.777, .747, .343, 2.5), (2, 1, 2, 3), (3, 4, 5, 6),
              (5, -.7, 3*numpy.pi/2, 5), (11, 11, 11, 1.123), (4, 7, 2, 10000),
              (.01, 0, 0, 0), (1.23, 0, 44, 343), (7, 7, 7, 7)]),
            (spiral(amp=2, N=1, alpha=-0.1, r_ref=5, Rs=5, H=.7, Cs=[3.5], omega=3),
             [(1, 0, 0, 0), (0.77, 0.3, numpy.pi/3, 0), (3.1, -0.3, numpy.pi/5, 2),
              (.777, .747, .343, 2.5), (12, 1, 2, 3), (3, 4, 5, 6),
              (5, -.7, 3*numpy.pi/2, 5), (11, 11, 11, 1.123), (4, 7, 2, 10000),
              (.01, 0, 0, 0), (1.23, 0, 44, 343), (7, 7, 7, 7)]),
        ]
        for pot, points in pots_and_points:
            for R, z, phi, t in points:
                assert_allclose(pot.Rzderiv(R, z, phi, t),
                                -deriv(lambda x: pot.Rforce(R, x, phi, t), z, dx=dx),
                                rtol=rtol)
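Every assertion in this suite compares an analytic derivative with a numerical one; the underlying pattern is a central finite difference. A dependency-free sketch of that idea (`central_diff` below is illustrative, not the `deriv` helper the suite actually imports):

```python
def central_diff(f, x, dx=1e-6):
    # Two-sided difference quotient: truncation error is O(dx**2),
    # which is why the tight rtol values used in these tests are attainable.
    return (f(x + dx) - f(x - dx)) / (2.0 * dx)

# d/dx x**3 at x = 2 is exactly 12; the approximation agrees to ~1e-12.
approx = central_diff(lambda x: x ** 3, 2.0)
assert abs(approx - 12.0) < 1e-5
```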
    def test_Rphideriv(self):
        """Test Rphideriv against a numerical derivative."""
        dx = 1e-8
        rtol = 5e-5
        pots_and_points = [
            (spiral(),
             [(1, 0, 0, 0), (0.7, 0.3, numpy.pi/3, 0), (1.1, -0.3, numpy.pi/4.2, 3),
              (.777, .747, .343, 2.5), (12, 1, 2, 3), (3, 4, 5, 6),
              (5, -.7, 3*numpy.pi/2, 5), (11, 11, 11, 1.123), (4, 7, 2, 1000),
              (.01, 0, 0, 0), (1.23, 0, 44, 343), (7, 1, 7, 7)]),
            (spiral(N=3, alpha=.21, r_ref=.5, phi_ref=numpy.pi, Cs=[2.], omega=-3),
             [(1, 0, 0, 0), (0.7, 0.3, numpy.pi/3, 0), (1.1, -0.3, numpy.pi/4.2, 3),
              (.777, .747, .343, 2.5), (12, 1, 2, 3), (3, 4, 5, 6),
              (5, -.7, 3*numpy.pi/2, 5), (11, 11, 11, 1.123), (3, 2, 1, 100),
              (.01, 0, 0, 0), (1.12, 0, 2, 343), (7, 7, 7, 7)]),
        ]
        for pot, points in pots_and_points:
            for R, z, phi, t in points:
                assert_allclose(pot.Rphideriv(R, z, phi, t),
                                -deriv(lambda x: pot.Rforce(R, z, x, t), phi, dx=dx),
                                rtol=rtol)
def test_OmegaP(self):
sp = spiral()
assert sp.OmegaP() == 0
sp = spiral(N=1, alpha=2, r_ref=.1, phi_ref=.5, Rs=0.2, H=0.7, Cs=[1,2], omega=-123)
assert sp.OmegaP() == -123
sp = spiral(omega=123.456)
assert sp.OmegaP() == 123.456
def test_K(self):
pot = spiral()
R = 1
assert_allclose([pot._K(R)], [pot._ns * pot._N / R / numpy.sin(pot._alpha)])
R = 1e-6
assert_allclose([pot._K(R)], [pot._ns * pot._N / R / numpy.sin(pot._alpha)])
R = 0.5
assert_allclose([pot._K(R)], [pot._ns * pot._N / R / numpy.sin(pot._alpha)])
def test_B(self):
pot = spiral()
R = 1
assert_allclose([pot._B(R)], [pot._K(R) * pot._H * (1 + 0.4 * pot._K(R) * pot._H)])
R = 1e-6
assert_allclose([pot._B(R)], [pot._K(R) * pot._H * (1 + 0.4 * pot._K(R) * pot._H)])
R = 0.3
assert_allclose([pot._B(R)], [pot._K(R) * pot._H * (1 + 0.4 * pot._K(R) * pot._H)])
def test_D(self):
pot = spiral()
assert_allclose([pot._D(3)], [(1. + pot._K(3)*pot._H + 0.3 * pot._K(3)**2 * pot._H**2.) / (1. + 0.3*pot._K(3) * pot._H)])
assert_allclose([pot._D(1e-6)], [(1. + pot._K(1e-6)*pot._H + 0.3 * pot._K(1e-6)**2 * pot._H**2.) / (1. + 0.3*pot._K(1e-6) * pot._H)])
assert_allclose([pot._D(.5)], [(1. + pot._K(.5)*pot._H + 0.3 * pot._K(.5)**2 * pot._H**2.) / (1. + 0.3*pot._K(.5) * pot._H)])
def test_dK_dR(self):
pot = spiral()
dx = 1e-8
assert_allclose(pot._dK_dR(3), deriv(pot._K, 3, dx=dx))
assert_allclose(pot._dK_dR(2.3), deriv(pot._K, 2.3, dx=dx))
assert_allclose(pot._dK_dR(-2.3), deriv(pot._K, -2.3, dx=dx))
def test_dB_dR(self):
pot = spiral()
dx = 1e-8
assert_allclose(pot._dB_dR(3.3), deriv(pot._B, 3.3, dx=dx))
assert_allclose(pot._dB_dR(1e-3), deriv(pot._B, 1e-3, dx=dx))
assert_allclose(pot._dB_dR(3), deriv(pot._B, 3, dx=dx))
def test_dD_dR(self):
pot = spiral()
dx = 1e-8
assert_allclose(pot._dD_dR(1e-3), deriv(pot._D, 1e-3, dx=dx))
assert_allclose(pot._dD_dR(2), deriv(pot._D, 2, dx=dx))
def test_gamma(self):
pot = spiral()
R, phi = 1, 2
assert_allclose(pot._gamma(R, phi), [pot._N * (float(phi) - pot._phi_ref - numpy.log(float(R) / pot._r_ref) /
numpy.tan(pot._alpha))])
R , phi = .1, -.2
assert_allclose(pot._gamma(R, phi), [pot._N * (float(phi) - pot._phi_ref - numpy.log(float(R) / pot._r_ref) /
numpy.tan(pot._alpha))])
R, phi = 0.01, 0
assert_allclose(pot._gamma(R, phi), [pot._N * (float(phi) - pot._phi_ref - numpy.log(float(R) / pot._r_ref) /
numpy.tan(pot._alpha))])
def test_dgamma_dR(self):
pot = spiral()
dx = 1e-8
assert_allclose(pot._dgamma_dR(3.), deriv(lambda x: pot._gamma(x, 1), 3., dx=dx))
assert_allclose(pot._dgamma_dR(3), deriv(lambda x: pot._gamma(x, 1), 3, dx=dx))
assert_allclose(pot._dgamma_dR(0.01), deriv(lambda x: pot._gamma(x, 1), 0.01, dx=dx))
if __name__ == '__main__':
suite = unittest.TestLoader().loadTestsFromTestCase(TestSpiralArmsPotential)
unittest.TextTestRunner(verbosity=2).run(suite)
# File: galpy/potential/Irrgang13.py (repo: jobovy/galpy, license: bsd-3-clause)
# Milky-Way mass models from Irrgang et al. (2013)
import numpy
from ..potential import (MiyamotoNagaiPotential, NFWPotential,
PlummerPotential, SCFPotential,
scf_compute_coeffs_spherical)
from ..util import conversion
# Their mass unit
mgal_in_msun= 1e5/conversion._G
# Model I: updated version of Allen & Santillan
# Unit normalizations
ro, vo= 8.4, 242.
Irrgang13I_bulge= PlummerPotential(\
amp=409.*mgal_in_msun/conversion.mass_in_msol(vo,ro),
b=0.23/ro,ro=ro,vo=vo)
Irrgang13I_disk= MiyamotoNagaiPotential(\
amp=2856.*mgal_in_msun/conversion.mass_in_msol(vo,ro),
a=4.22/ro,b=0.292/ro,ro=ro,vo=vo)
# The halo is a little more difficult, because the Irrgang13I halo model is
# not in galpy, so we use SCF to represent it (because we're lazy...). The
# sharp cut-off in the Irrgang13I halo model makes SCF difficult, so we
# replace it with a smooth cut-off; this only affects the very outer halo
def Irrgang13I_halo_dens(\
r,amp=1018*mgal_in_msun/conversion.mass_in_msol(vo,ro),
ah=2.562/ro,gamma=2.,Lambda=200./ro):
r_over_ah_gamma= (r/ah)**(gamma-1.)
return amp/4./numpy.pi/ah*r_over_ah_gamma*(r_over_ah_gamma+gamma)/r**2\
/(1.+r_over_ah_gamma)**2.\
*((1.-numpy.tanh((r-Lambda)/(Lambda/20.)))/2.)
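The tanh factor above is the smooth replacement for the sharp cut-off: roughly 1 inside r < Lambda, roughly 0 outside, with a transition width of about Lambda/20. A dependency-free sketch of just that factor (using `math.tanh` instead of numpy for illustration):

```python
import math

def smooth_cutoff(r, Lambda):
    # ~1 for r well below Lambda, ~0 well above; transition width Lambda/20
    return (1.0 - math.tanh((r - Lambda) / (Lambda / 20.0))) / 2.0

Lam = 200.0 / 8.4  # model-I halo cut-off in internal units (Lambda = 200/ro)
assert smooth_cutoff(1.0, Lam) > 0.999        # deep inside: essentially 1
assert smooth_cutoff(10.0 * Lam, Lam) < 1e-6  # far outside: essentially 0
```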
a_for_scf= 20.
# scf_compute_coeffs_spherical currently seems to require a function of 3 parameters...
Acos= scf_compute_coeffs_spherical(\
lambda r,z,p: Irrgang13I_halo_dens(r),40,a=a_for_scf)[0]
Irrgang13I_halo= SCFPotential(Acos=Acos,a=a_for_scf,ro=ro,vo=vo)
# Final model I
Irrgang13I= Irrgang13I_bulge+Irrgang13I_disk+Irrgang13I_halo
# Model II
# Unit normalizations
ro, vo= 8.35, 240.4
Irrgang13II_bulge= PlummerPotential(\
amp=175.*mgal_in_msun/conversion.mass_in_msol(vo,ro),
b=0.184/ro,ro=ro,vo=vo)
Irrgang13II_disk= MiyamotoNagaiPotential(\
amp=2829.*mgal_in_msun/conversion.mass_in_msol(vo,ro),
a=4.85/ro,b=0.305/ro,ro=ro,vo=vo)
# Again use SCF because the Irrgang13II halo model is not in galpy; because
# the halo model is quite different from Hernquist both in the inner and outer
# part, need quite a few basis functions...
def Irrgang13II_halo_dens(\
r,amp=69725*mgal_in_msun/conversion.mass_in_msol(vo,ro),
ah=200./ro):
return amp/4./numpy.pi*ah**2./r**2./(r**2.+ah**2.)**1.5
a_for_scf= 0.15
# scf_compute_coeffs_spherical currently seems to require a function of 3 parameters...
Acos= scf_compute_coeffs_spherical(\
lambda r,z,p: Irrgang13II_halo_dens(r),75,a=a_for_scf)[0]
Irrgang13II_halo= SCFPotential(Acos=Acos,a=a_for_scf,ro=ro,vo=vo)
# Final model II
Irrgang13II= Irrgang13II_bulge+Irrgang13II_disk+Irrgang13II_halo
# Model III
# Unit normalizations
ro, vo= 8.33, 239.7
Irrgang13III_bulge= PlummerPotential(\
amp=439.*mgal_in_msun/conversion.mass_in_msol(vo,ro),
b=0.236/ro,ro=ro,vo=vo)
Irrgang13III_disk= MiyamotoNagaiPotential(\
amp=3096.*mgal_in_msun/conversion.mass_in_msol(vo,ro),
a=3.262/ro,b=0.289/ro,ro=ro,vo=vo)
Irrgang13III_halo= NFWPotential(\
amp=142200.*mgal_in_msun/conversion.mass_in_msol(vo,ro),
a=45.02/ro,ro=ro,vo=vo)
# Final model III
Irrgang13III= Irrgang13III_bulge+Irrgang13III_disk+Irrgang13III_halo
# File: pyecore/ordered_set_patch.py (repo: pyecore/pyecore, license: bsd-3-clause)
import ordered_set
from typing import Iterable
SLICE_ALL = ordered_set.SLICE_ALL
# monkey patching the OrderedSet implementation
def insert(self, index, key):
"""Adds an element at a dedicated position in an OrderedSet.
This implementation is meant for the OrderedSet from the ordered_set
package only.
"""
if key in self.map:
return
# compute the right index
size = len(self.items)
if index < 0:
index = size + index if size + index > 0 else 0
else:
index = index if index < size else size
# insert the value
self.items.insert(index, key)
for k, v in self.map.items():
if v >= index:
self.map[k] = v + 1
self.map[key] = index
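The point of the index bookkeeping in `insert` is to preserve the invariant `map[k] == items.index(k)` for every key; the shifting logic can be checked in isolation on plain containers:

```python
def shifted_insert(items, index_map, index, key):
    # Mirror of the patch above: bump every stored index at or past the
    # insertion point, then record the new key's position.
    items.insert(index, key)
    for k, v in index_map.items():
        if v >= index:
            index_map[k] = v + 1
    index_map[key] = index

items, imap = [10, 20, 30], {10: 0, 20: 1, 30: 2}
shifted_insert(items, imap, 1, 99)
assert items == [10, 99, 20, 30]
assert all(imap[k] == items.index(k) for k in items)  # invariant holds
```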
def pop(self, index=-1):
"""Removes an element at the tail of the OrderedSet or at a dedicated
position.
This implementation is meant for the OrderedSet from the ordered_set
package only.
"""
if not self.items:
raise KeyError('Set is empty')
elem = self.items[index]
del self.items[index]
del self.map[elem]
    if index != -1:  # popping the tail removes the highest index; no shifting needed
for k, v in self.map.items():
if v >= index and v > 0:
self.map[k] = v - 1
return elem
def __setitem__(self, index, item):
if isinstance(index, slice):
raise KeyError('Item assignation using slices is not yet supported '
f'for {self.__class__.__name__}')
if index < 0:
index = len(self.items) + index
if index < 0:
raise IndexError('assignement index out of range')
self.pop(index)
self.insert(index, item)
def __getitem__(self, index):
if isinstance(index, slice) and index == SLICE_ALL:
return self.copy()
elif isinstance(index, Iterable):
return self.subcopy(self.items[i] for i in index)
elif isinstance(index, slice) or hasattr(index, "__index__"):
result = self.items[index]
if isinstance(result, list):
return self.subcopy(result)
else:
return result
else:
raise TypeError("Don't know how to index an OrderedSet by %r" % index)
def __delitem__(self, index):
if isinstance(index, slice) and index == SLICE_ALL:
self.clear()
return
elif isinstance(index, slice):
raise KeyError('Item deletion using slices is not yet supported '
f'for {self.__class__.__name__}')
self.pop(index)
def subcopy(self, subitems):
"""
This method is here mainly for overriding
"""
return self.__class__(subitems)
ordered_set.OrderedSet.insert = insert
ordered_set.OrderedSet.pop = pop
ordered_set.OrderedSet.__setitem__ = __setitem__
ordered_set.OrderedSet.__getitem__ = __getitem__
ordered_set.OrderedSet.__delitem__ = __delitem__
ordered_set.OrderedSet.subcopy = subcopy
# File: tests/test_derived.py (repo: pyecore/pyecore, license: bsd-3-clause)
import pytest
from pyecore.ecore import *
def test_default_derived_collection():
collection = EDerivedCollection.create(None, None)
with pytest.raises(AttributeError):
len(collection)
with pytest.raises(AttributeError):
collection[0]
with pytest.raises(AttributeError):
collection[0] = 4
with pytest.raises(AttributeError):
collection.insert(0, 4)
with pytest.raises(AttributeError):
collection.discard(4)
with pytest.raises(AttributeError):
del collection[0]
with pytest.raises(AttributeError):
collection.add(4)
def test_new_derived_collection():
A = EClass('A')
A.eStructuralFeatures.append(EAttribute('mod2', EInt, upper=-1))
a = A()
assert len(a.mod2) == 0
class DerivedMod2(EDerivedCollection):
def _get_collection(self):
return [x for x in self.owner.ages if x % 2 == 0]
def __len__(self):
return len(self._get_collection())
def __getitem__(self, index):
return self._get_collection()[index]
def test_new_factory_derived_collection():
A = EClass('A')
A.eStructuralFeatures.append(EAttribute('ages', EInt, upper=-1))
A.eStructuralFeatures.append(EAttribute('mod2', EInt, upper=-1,
derived_class=DerivedMod2))
a = A()
a.ages.extend([1, 2, 3, 4, 5, 6])
assert isinstance(a.mod2, DerivedMod2)
assert a.mod2
assert len(a.mod2) == 3
assert a.mod2[0] == 2
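The behaviour these tests pin down — a collection whose contents are recomputed from the owner at access time — can be sketched without pyecore (all names below are illustrative, not pyecore API):

```python
class DerivedView:
    """Read-only view whose items are recomputed from the owner on access."""
    def __init__(self, owner, compute):
        self.owner, self.compute = owner, compute
    def _items(self):
        return self.compute(self.owner)
    def __len__(self):
        return len(self._items())
    def __getitem__(self, index):
        return self._items()[index]

class Person:
    def __init__(self, ages):
        self.ages = ages
        self.even_ages = DerivedView(
            self, lambda o: [a for a in o.ages if a % 2 == 0])

p = Person([1, 2, 3, 4, 5, 6])
assert len(p.even_ages) == 3 and p.even_ages[0] == 2
p.ages.append(8)
assert len(p.even_ages) == 4  # the view tracks the owner live
```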
# File: pyecore/commands.py (repo: pyecore/pyecore, license: bsd-3-clause)
"""This module introduces the command system, which allows various commands to
be defined and executed on a command stack. Each command can also be undone
('undo') and redone ('redo').
"""
from abc import ABCMeta, abstractmethod
from collections import UserList
from .ecore import EObject, BadValueError
from .resources import ResourceSet
class Command(metaclass=ABCMeta):
"""Provides the basic elements that must be implemented by a custom
command.
The methods/properties that need to be implemented are:
* can_execute (@property)
* can_undo (@property)
* execute (method)
* undo (method)
* redo (method)
"""
@property
@abstractmethod
def can_execute(self):
pass
@abstractmethod
def execute(self):
pass
@property
@abstractmethod
def can_undo(self):
pass
@abstractmethod
def undo(self):
pass
@abstractmethod
def redo(self):
pass
class AbstractCommand(Command):
def __init__(self, owner=None, feature=None, value=None, label=None):
if owner and not isinstance(owner, EObject):
raise BadValueError(got=owner, expected=EObject)
self.owner = owner
        self.resource = owner.eResource if owner is not None else None
self.feature = feature
self.value = value
self.previous_value = None
self.label = label
self._is_prepared = False
        self._is_executable = False
        self._executed = False  # set by execute(); read back by can_undo
@property
def can_execute(self):
execute = False
eclass = self.owner.eClass
if isinstance(self.feature, str):
actual = eclass.findEStructuralFeature(self.feature)
self.feature = actual
execute = actual is not None
else:
actual = eclass.findEStructuralFeature(self.feature.name)
execute = self.feature is actual
return execute
@property
def can_undo(self):
return self._executed
def execute(self):
self.do_execute()
self._executed = True
def __repr__(self):
if self.feature is None:
feature = 'NO_FEATURE'
elif not isinstance(self.feature, str):
feature = self.feature.name
else:
feature = self.feature
        return (f'{self.__class__.__name__} '
                f'{self.owner}.{feature} <- {self.value}')
class Set(AbstractCommand):
def __init__(self, owner=None, feature=None, value=None):
super().__init__(owner, feature, value)
@property
def can_execute(self):
can = super().can_execute
return can and not self.feature.many
def undo(self):
self.owner.eSet(self.feature, self.previous_value)
def redo(self):
self.owner.eSet(self.feature, self.value)
def do_execute(self):
object_ = self.owner
self.previous_value = object_.eGet(self.feature)
object_.eSet(self.feature, self.value)
class Add(AbstractCommand):
def __init__(self, owner=None, feature=None, value=None, index=None):
super().__init__(owner, feature, value)
self.index = index
self._collection = None
@property
def can_execute(self):
executable = super().can_execute
executable = executable and self.value is not None
self._collection = self.owner.eGet(self.feature)
return executable
@property
def can_undo(self):
can = super().can_undo
return can and self.value in self._collection
def undo(self):
self._collection.pop(self.index)
def redo(self):
self._collection.insert(self.index, self.value)
def do_execute(self):
if self.index is not None:
self._collection.insert(self.index, self.value)
else:
self.index = len(self._collection)
self._collection.append(self.value)
class Remove(AbstractCommand):
def __init__(self, owner=None, feature=None, value=None, index=None):
super().__init__(owner, feature, value)
self.index = index
self._collection = None
        if bool(self.index is not None) == bool(self.value is not None):
            raise ValueError('Remove command requires exactly one of index '
                             'or value to be set.')
@property
def can_execute(self):
executable = super().can_execute
self._collection = self.owner.eGet(self.feature)
if self.index is None:
executable = executable and self.value is not None
else:
self.value = self._collection[self.index]
return executable
def undo(self):
self._collection.insert(self.index, self.value)
def redo(self):
self._collection.pop(self.index)
def do_execute(self):
if self.index is None:
self.index = self._collection.index(self.value)
self._collection.pop(self.index)
class Move(AbstractCommand):
def __init__(self, owner=None, feature=None, from_index=None,
to_index=None, value=None):
super().__init__(owner, feature, value=value)
self.from_index = from_index
self.to_index = to_index
        if bool(self.from_index is not None) == bool(self.value is not None):
            raise ValueError('Move command requires exactly one of from_index '
                             'or value to be set.')
@property
def can_execute(self):
can = super().can_execute
self._collection = self.owner.eGet(self.feature)
if self.value is None:
self.value = self._collection[self.from_index]
if self.from_index is None:
self.from_index = self._collection.index(self.value)
return can and self.value in self._collection
@property
def can_undo(self):
can = super().can_undo
obj = self._collection[self.to_index]
return can and obj is self.value
def undo(self):
self.value = self._collection.pop(self.to_index)
self._collection.insert(self.from_index, self.value)
def redo(self):
self.do_execute()
def do_execute(self):
self.value = self._collection.pop(self.from_index)
self._collection.insert(self.to_index, self.value)
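Note the order of operations in `do_execute`: the value is popped first, so `to_index` addresses the list after the removal. The same semantics on plain lists:

```python
def move(lst, from_index, to_index):
    # Same two-step move as Move.do_execute: pop, then insert into the
    # already-shortened list.
    value = lst.pop(from_index)
    lst.insert(to_index, value)
    return value

data = list("abcde")
move(data, 1, 3)
assert data == list("acdbe")  # 'b' lands after 'd', not after 'c'
```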
class Delete(AbstractCommand):
def __init__(self, owner=None):
super().__init__(owner=owner)
@property
def can_execute(self):
self.feature = self.owner.eContainmentFeature()
self.references = {}
elements = {self.owner}
elements.update(self.owner.eAllContents())
for element in elements:
rels_tuple = [(ref, element.eGet(ref))
for ref in element.eClass.eAllReferences()]
self.references[element] = rels_tuple
self.inverse_references = {}
for element in elements:
rels_tuple = []
for obj, reference in element._inverse_rels:
if reference.many:
index = obj.eGet(reference).index(element)
else:
index = 0
rels_tuple.append((index, obj, reference))
self.inverse_references[element] = rels_tuple
return True
def undo(self):
for element, v in self.references.items():
for reference, content in v:
if reference.many:
element.eGet(reference).extend(content)
else:
element.eSet(reference, content)
for element, v in self.inverse_references.items():
for i, obj, reference in v:
if reference.many:
obj.eGet(reference).insert(i, element)
else:
obj.eSet(reference, element)
def redo(self):
self.do_execute()
def do_execute(self):
self.owner.delete()
def __repr__(self):
return f'{self.__class__.__name__} {self.owner}'
class Compound(Command, UserList):
def __init__(self, *commands):
super().__init__(commands)
@property
def can_execute(self):
return all(command.can_execute for command in self)
def execute(self):
for command in self:
command.execute()
@property
def can_undo(self):
return all(command.can_undo for command in self)
def undo(self):
for command in reversed(self):
command.undo()
def redo(self):
for command in self:
command.redo()
def unwrap(self):
return self[0] if len(self) == 1 else self
def __repr__(self):
return f'{self.__class__.__name__}({self.data})'
class CommandStack(object):
def __init__(self):
self.stack = []
self.stack_index = -1
@property
def top(self):
return self.stack[self.stack_index]
@property
def peek_next_top(self):
return self.stack[self.stack_index + 1]
@top.setter
def top(self, command):
index = self.stack_index + 1
self.stack[index:index] = [command]
self.stack_index = index
@top.deleter
def top(self):
self.stack_index -= 1
def __bool__(self):
return self.stack_index > -1
def execute(self, *commands):
for command in commands:
if command.can_execute:
command.execute()
self.top = command
else:
raise ValueError(f'Cannot execute command {command}')
def undo(self):
if not self:
raise IndexError('Command stack is empty')
if self.top.can_undo:
self.top.undo()
del self.top
def redo(self):
self.peek_next_top.redo()
self.stack_index += 1
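The `stack_index` bookkeeping above amounts to the classic undo/redo cursor: execute inserts after the cursor, undo steps it back, redo steps it forward. A minimal dependency-free sketch of that mechanism (`ToyCommand`/`ToyStack` are illustrative, not pyecore API):

```python
class ToyCommand:
    """Reversible increment on shared state, for demonstration only."""
    def __init__(self, state, delta):
        self.state, self.delta = state, delta
    def execute(self):
        self.state[0] += self.delta
    def undo(self):
        self.state[0] -= self.delta
    def redo(self):
        self.execute()

class ToyStack:
    def __init__(self):
        self.stack, self.index = [], -1
    def execute(self, cmd):
        cmd.execute()
        self.index += 1
        self.stack[self.index:self.index] = [cmd]  # insert after the cursor
    def undo(self):
        self.stack[self.index].undo()
        self.index -= 1
    def redo(self):
        self.index += 1
        self.stack[self.index].redo()

state = [0]
st = ToyStack()
st.execute(ToyCommand(state, 5))
st.execute(ToyCommand(state, 2))
assert state[0] == 7
st.undo()
assert state[0] == 5
st.redo()
assert state[0] == 7
```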
class EditingDomain(object):
def __init__(self, resource_set=None, command_stack_class=CommandStack):
self.resource_set = resource_set or ResourceSet()
self.__stack = command_stack_class()
self.clipboard = []
def create_resource(self, uri):
return self.resource_set.create_resource(uri)
def load_resource(self, uri):
return self.resource_set.get_resource(uri)
def execute(self, cmd):
if cmd.resource not in self.resource_set.resources.values():
raise ValueError(f"Cannot execute command '{cmd}', the resource's "
"command is not contained in the editing domain "
"resource set.")
self.__stack.execute(cmd)
def undo(self):
self.__stack.undo()
def redo(self):
self.__stack.redo()
# File: umap/parametric_umap.py (repo: lmcinnes/umap, license: bsd-3-clause)
import numpy as np
from umap import UMAP
from warnings import warn, catch_warnings, filterwarnings
from numba import TypingError
import os
from umap.spectral import spectral_layout
from sklearn.utils import check_random_state
import codecs, pickle
from sklearn.neighbors import KDTree
import sys
try:
import tensorflow as tf
except ImportError:
warn(
"""The umap.parametric_umap package requires Tensorflow > 2.0 to be installed.
You can install Tensorflow at https://www.tensorflow.org/install
or you can install the CPU version of Tensorflow using
pip install umap-learn[parametric_umap]
"""
)
raise ImportError("umap.parametric_umap requires Tensorflow >= 2.0") from None
TF_MAJOR_VERSION = int(tf.__version__.split(".")[0])
if TF_MAJOR_VERSION < 2:
warn(
"""The umap.parametric_umap package requires Tensorflow > 2.0 to be installed.
You can install Tensorflow at https://www.tensorflow.org/install
or you can install the CPU version of Tensorflow using
pip install umap-learn[parametric_umap]
"""
)
raise ImportError("umap.parametric_umap requires Tensorflow >= 2.0") from None
try:
import tensorflow_probability
except ImportError:
warn(
""" Global structure preservation in the umap.parametric_umap package requires
tensorflow_probability to be installed. You can install tensorflow_probability at
https://www.tensorflow.org/probability,
or via
pip install --upgrade tensorflow-probability
Please ensure to install a version which is compatible to your tensorflow
installation. You can verify the correct release at
https://github.com/tensorflow/probability/releases.
"""
)
class ParametricUMAP(UMAP):
def __init__(
self,
optimizer=None,
batch_size=None,
dims=None,
encoder=None,
decoder=None,
parametric_embedding=True,
parametric_reconstruction=False,
parametric_reconstruction_loss_fcn=tf.keras.losses.BinaryCrossentropy(
from_logits=True
),
parametric_reconstruction_loss_weight=1.0,
autoencoder_loss=False,
reconstruction_validation=None,
loss_report_frequency=10,
n_training_epochs=1,
global_correlation_loss_weight=0,
run_eagerly=False,
keras_fit_kwargs={},
**kwargs
):
"""
Parametric UMAP subclassing UMAP-learn, based on keras/tensorflow.
There is also a non-parametric implementation contained within to compare
with the base non-parametric implementation.
Parameters
----------
optimizer : tf.keras.optimizers, optional
The tensorflow optimizer used for embedding, by default None
batch_size : int, optional
size of batch used for batch training, by default None
dims : tuple, optional
dimensionality of data, if not flat (e.g. (32x32x3 images for ConvNet), by default None
encoder : tf.keras.Sequential, optional
The encoder Keras network
decoder : tf.keras.Sequential, optional
the decoder Keras network
parametric_embedding : bool, optional
Whether the embedder is parametric or non-parametric, by default True
parametric_reconstruction : bool, optional
Whether the decoder is parametric or non-parametric, by default False
        parametric_reconstruction_loss_fcn : callable, optional
            What loss function to use for parametric reconstruction, by default tf.keras.losses.BinaryCrossentropy
parametric_reconstruction_loss_weight : float, optional
How to weight the parametric reconstruction loss relative to umap loss, by default 1.0
        autoencoder_loss : bool, optional
            Whether to combine the reconstruction loss with the UMAP loss and
            train encoder and decoder jointly as an autoencoder, by default False
reconstruction_validation : array, optional
validation X data for reconstruction loss, by default None
        loss_report_frequency : int, optional
            how many times per epoch to report loss, by default 10
n_training_epochs : int, optional
number of epochs to train for, by default 1
global_correlation_loss_weight : float, optional
Whether to additionally train on correlation of global pairwise relationships (>0), by default 0
run_eagerly : bool, optional
Whether to run tensorflow eagerly
keras_fit_kwargs : dict, optional
additional arguments for model.fit (like callbacks), by default {}
"""
super().__init__(**kwargs)
# add to network
self.dims = dims # if this is an image, we should reshape for network
self.encoder = encoder # neural network used for embedding
self.decoder = decoder # neural network used for decoding
self.parametric_embedding = (
parametric_embedding # nonparametric vs parametric embedding
)
self.parametric_reconstruction = parametric_reconstruction
self.parametric_reconstruction_loss_fcn = parametric_reconstruction_loss_fcn
self.parametric_reconstruction_loss_weight = (
parametric_reconstruction_loss_weight
)
self.run_eagerly = run_eagerly
self.autoencoder_loss = autoencoder_loss
self.batch_size = batch_size
self.loss_report_frequency = (
loss_report_frequency # how many times per epoch to report loss in keras
)
if "tensorflow_probability" in sys.modules:
self.global_correlation_loss_weight = global_correlation_loss_weight
else:
warn(
"tensorflow_probability not installed or incompatible to current \
tensorflow installation. Setting global_correlation_loss_weight to zero."
)
self.global_correlation_loss_weight = 0
self.reconstruction_validation = (
reconstruction_validation # holdout data for reconstruction acc
)
self.keras_fit_kwargs = keras_fit_kwargs # arguments for model.fit
self.parametric_model = None
# how many epochs to train for (different than n_epochs which is specific to each sample)
self.n_training_epochs = n_training_epochs
# set optimizer
if optimizer is None:
if parametric_embedding:
# Adam is better for parametric_embedding
self.optimizer = tf.keras.optimizers.Adam(1e-3)
else:
# Larger learning rate can be used for embedding
self.optimizer = tf.keras.optimizers.Adam(1e-1)
else:
self.optimizer = optimizer
if parametric_reconstruction and not parametric_embedding:
warn(
"Parametric decoding is not implemented with nonparametric \
embedding. Turning off parametric decoding"
)
self.parametric_reconstruction = False
if self.encoder is not None:
if encoder.outputs[0].shape[-1] != self.n_components:
raise ValueError(
(
                    "Dimensionality of embedder network output ({}) does "
"not match n_components ({})".format(
encoder.outputs[0].shape[-1], self.n_components
)
)
)
def fit(self, X, y=None, precomputed_distances=None):
if self.metric == "precomputed":
if precomputed_distances is None:
raise ValueError(
"Precomputed distances must be supplied if metric \
is precomputed."
)
# prepare X for training the network
self._X = X
            # generate the graph on precomputed distances
return super().fit(precomputed_distances, y)
else:
return super().fit(X, y)
def fit_transform(self, X, y=None, precomputed_distances=None):
if self.metric == "precomputed":
if precomputed_distances is None:
raise ValueError(
"Precomputed distances must be supplied if metric \
is precomputed."
)
# prepare X for training the network
self._X = X
# generate the graph on precomputed distances
return super().fit_transform(precomputed_distances, y)
else:
return super().fit_transform(X, y)
def transform(self, X):
"""Transform X into the existing embedded space and return that
transformed output.
Parameters
----------
X : array, shape (n_samples, n_features)
New data to be transformed.
Returns
-------
X_new : array, shape (n_samples, n_components)
Embedding of the new data in low-dimensional space.
"""
if self.parametric_embedding:
return self.encoder.predict(
np.asanyarray(X), batch_size=self.batch_size, verbose=self.verbose
)
else:
warn(
"Embedding new data is not supported by ParametricUMAP. \
Using original embedder."
)
return super().transform(X)
def inverse_transform(self, X):
""" Transform X in the existing embedded space back into the input
data space and return that transformed output.
Parameters
----------
X : array, shape (n_samples, n_components)
New points to be inverse transformed.
Returns
-------
X_new : array, shape (n_samples, n_features)
Generated data points new data in data space.
"""
if self.parametric_reconstruction:
return self.decoder.predict(
np.asanyarray(X), batch_size=self.batch_size, verbose=self.verbose
)
else:
return super().inverse_transform(X)
def _define_model(self):
"""Define the model in keras"""
# network outputs
outputs = {}
# inputs
if self.parametric_embedding:
to_x = tf.keras.layers.Input(shape=self.dims, name="to_x")
from_x = tf.keras.layers.Input(shape=self.dims, name="from_x")
inputs = [to_x, from_x]
# parametric embedding
embedding_to = self.encoder(to_x)
embedding_from = self.encoder(from_x)
if self.parametric_reconstruction:
# parametric reconstruction
if self.autoencoder_loss:
embedding_to_recon = self.decoder(embedding_to)
else:
# stop gradient of reconstruction loss before it reaches the encoder
embedding_to_recon = self.decoder(tf.stop_gradient(embedding_to))
embedding_to_recon = tf.keras.layers.Lambda(
lambda x: x, name="reconstruction"
)(embedding_to_recon)
outputs["reconstruction"] = embedding_to_recon
else:
# this is the sham input (it's just a 0) to make keras think there is input data
batch_sample = tf.keras.layers.Input(
                shape=(1,), dtype=tf.int32, name="batch_sample"
)
# gather all of the edges (so keras model is happy)
to_x = tf.squeeze(tf.gather(self.head, batch_sample[0]))
from_x = tf.squeeze(tf.gather(self.tail, batch_sample[0]))
# grab relevant embeddings
embedding_to = self.encoder(to_x)[:, -1, :]
embedding_from = self.encoder(from_x)[:, -1, :]
inputs = [batch_sample]
# concatenate to/from projections for loss computation
embedding_to_from = tf.concat([embedding_to, embedding_from], axis=1)
embedding_to_from = tf.keras.layers.Lambda(lambda x: x, name="umap")(
embedding_to_from
)
outputs["umap"] = embedding_to_from
if self.global_correlation_loss_weight > 0:
outputs["global_correlation"] = tf.keras.layers.Lambda(
lambda x: x, name="global_correlation"
)(embedding_to)
# create model
# self.parametric_model = tf.keras.Model(inputs=inputs, outputs=outputs)
self.parametric_model = GradientClippedModel(inputs=inputs, outputs=outputs)
def _compile_model(self):
"""
Compiles keras model with losses
"""
losses = {}
loss_weights = {}
umap_loss_fn = umap_loss(
self.batch_size,
self.negative_sample_rate,
self._a,
self._b,
self.edge_weight,
self.parametric_embedding,
)
losses["umap"] = umap_loss_fn
loss_weights["umap"] = 1.0
if self.global_correlation_loss_weight > 0:
losses["global_correlation"] = distance_loss_corr
loss_weights["global_correlation"] = self.global_correlation_loss_weight
            if not self.run_eagerly:
# this is needed to avoid a 'NaN' error bug in tensorflow_probability (v0.12.2)
warn("Setting tensorflow to run eagerly for global_correlation_loss.")
self.run_eagerly = True
if self.parametric_reconstruction:
losses["reconstruction"] = self.parametric_reconstruction_loss_fcn
loss_weights["reconstruction"] = self.parametric_reconstruction_loss_weight
self.parametric_model.compile(
optimizer=self.optimizer,
loss=losses,
loss_weights=loss_weights,
run_eagerly=self.run_eagerly,
)
def _fit_embed_data(self, X, n_epochs, init, random_state):
if self.metric == "precomputed":
X = self._X
# get dimensionality of dataset
if self.dims is None:
self.dims = [np.shape(X)[-1]]
else:
# reshape data for network
if len(self.dims) > 1:
X = np.reshape(X, [len(X)] + list(self.dims))
if self.parametric_reconstruction and (np.max(X) > 1.0 or np.min(X) < 0.0):
warn(
"Data should be scaled to the range 0-1 for cross-entropy reconstruction loss."
)
# get dataset of edges
(
edge_dataset,
self.batch_size,
n_edges,
head,
tail,
self.edge_weight,
) = construct_edge_dataset(
X,
self.graph_,
self.n_epochs,
self.batch_size,
self.parametric_embedding,
self.parametric_reconstruction,
self.global_correlation_loss_weight,
)
self.head = tf.constant(tf.expand_dims(head.astype(np.int64), 0))
self.tail = tf.constant(tf.expand_dims(tail.astype(np.int64), 0))
if self.parametric_embedding:
init_embedding = None
else:
init_embedding = init_embedding_from_graph(
X,
self.graph_,
self.n_components,
self.random_state,
self.metric,
self._metric_kwds,
init="spectral",
)
# create encoder and decoder model
n_data = len(X)
self.encoder, self.decoder = prepare_networks(
self.encoder,
self.decoder,
self.n_components,
self.dims,
n_data,
self.parametric_embedding,
self.parametric_reconstruction,
init_embedding,
)
# create the model
self._define_model()
self._compile_model()
# report every loss_report_frequency subdivision of an epochs
if self.parametric_embedding:
steps_per_epoch = int(
n_edges / self.batch_size / self.loss_report_frequency
)
else:
# all edges are trained simultaneously with nonparametric, so this is arbitrary
steps_per_epoch = 100
# Validation dataset for reconstruction
if (
self.parametric_reconstruction
and self.reconstruction_validation is not None
):
# reshape data for network
if len(self.dims) > 1:
self.reconstruction_validation = np.reshape(
self.reconstruction_validation,
[len(self.reconstruction_validation)] + list(self.dims),
)
validation_data = (
(
self.reconstruction_validation,
tf.zeros_like(self.reconstruction_validation),
),
{"reconstruction": self.reconstruction_validation},
)
else:
validation_data = None
# create embedding
history = self.parametric_model.fit(
edge_dataset,
epochs=self.loss_report_frequency * self.n_training_epochs,
steps_per_epoch=steps_per_epoch,
max_queue_size=100,
validation_data=validation_data,
**self.keras_fit_kwargs
)
# save loss history dictionary
self._history = history.history
# get the final embedding
if self.parametric_embedding:
embedding = self.encoder.predict(X, verbose=self.verbose)
else:
embedding = self.encoder.trainable_variables[0].numpy()
return embedding, {}
def __getstate__(self):
# this function supports pickling, making sure that objects can be pickled
return dict(
(k, v)
for (k, v) in self.__dict__.items()
if should_pickle(k, v) and k not in ("optimizer", "encoder", "decoder", "parametric_model")
)
def save(self, save_location, verbose=True):
# save encoder
if self.encoder is not None:
encoder_output = os.path.join(save_location, "encoder")
self.encoder.save(encoder_output)
if verbose:
print("Keras encoder model saved to {}".format(encoder_output))
# save decoder
if self.decoder is not None:
decoder_output = os.path.join(save_location, "decoder")
self.decoder.save(decoder_output)
if verbose:
print("Keras decoder model saved to {}".format(decoder_output))
# save parametric_model
if self.parametric_model is not None:
parametric_model_output = os.path.join(save_location, "parametric_model")
self.parametric_model.save(parametric_model_output)
if verbose:
print("Keras full model saved to {}".format(parametric_model_output))
        # save model.pkl (ignoring unpickleable warnings)
with catch_warnings():
filterwarnings("ignore")
# work around optimizers not pickling anymore (since tf 2.4)
self._optimizer_dict = self.optimizer.get_config()
model_output = os.path.join(save_location, "model.pkl")
with open(model_output, "wb") as output:
pickle.dump(self, output, pickle.HIGHEST_PROTOCOL)
if verbose:
print("Pickle of ParametricUMAP model saved to {}".format(model_output))
def get_graph_elements(graph_, n_epochs):
"""
gets elements of graphs, weights, and number of epochs per edge
Parameters
----------
graph_ : scipy.sparse.csr.csr_matrix
umap graph of probabilities
n_epochs : int
maximum number of epochs per edge
Returns
-------
graph scipy.sparse.csr.csr_matrix
umap graph
epochs_per_sample np.array
number of epochs to train each sample for
head np.array
edge head
tail np.array
edge tail
weight np.array
edge weight
n_vertices int
number of vertices in graph
"""
    # TODO: should redundant edges be removed here?
    # graph_ = remove_redundant_edges(graph_)
graph = graph_.tocoo()
# eliminate duplicate entries by summing them together
graph.sum_duplicates()
# number of vertices in dataset
n_vertices = graph.shape[1]
# get the number of epochs based on the size of the dataset
if n_epochs is None:
# For smaller datasets we can use more epochs
if graph.shape[0] <= 10000:
n_epochs = 500
else:
n_epochs = 200
# remove elements with very low probability
graph.data[graph.data < (graph.data.max() / float(n_epochs))] = 0.0
graph.eliminate_zeros()
# get epochs per sample based upon edge probability
epochs_per_sample = n_epochs * graph.data
head = graph.row
tail = graph.col
weight = graph.data
return graph, epochs_per_sample, head, tail, weight, n_vertices
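The pruning and epochs-per-sample rule above can be sketched in plain NumPy, without a SciPy graph (the weights here are hypothetical, for illustration only):

```python
import numpy as np

n_epochs = 200
weights = np.array([0.9, 0.5, 0.004, 0.2])  # hypothetical edge probabilities

# edges weaker than max_weight / n_epochs are dropped entirely
keep = weights >= weights.max() / float(n_epochs)
pruned = weights[keep]

# stronger edges are sampled on more epochs:
# strongest edge trains on 180 epochs, weakest kept edge on 40
epochs_per_sample = n_epochs * pruned
```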
def init_embedding_from_graph(
_raw_data, graph, n_components, random_state, metric, _metric_kwds, init="spectral"
):
"""Initialize embedding using graph. This is for direct embeddings.
Parameters
----------
init : str, optional
Type of initialization to use. Either random, or spectral, by default "spectral"
Returns
-------
embedding : np.array
the initialized embedding
"""
if random_state is None:
random_state = check_random_state(None)
if isinstance(init, str) and init == "random":
embedding = random_state.uniform(
low=-10.0, high=10.0, size=(graph.shape[0], n_components)
).astype(np.float32)
elif isinstance(init, str) and init == "spectral":
# We add a little noise to avoid local minima for optimization to come
initialisation = spectral_layout(
_raw_data,
graph,
n_components,
random_state,
metric=metric,
metric_kwds=_metric_kwds,
)
expansion = 10.0 / np.abs(initialisation).max()
embedding = (initialisation * expansion).astype(
np.float32
) + random_state.normal(
scale=0.0001, size=[graph.shape[0], n_components]
).astype(
np.float32
)
else:
init_data = np.array(init)
if len(init_data.shape) == 2:
if np.unique(init_data, axis=0).shape[0] < init_data.shape[0]:
tree = KDTree(init_data)
dist, ind = tree.query(init_data, k=2)
nndist = np.mean(dist[:, 1])
embedding = init_data + random_state.normal(
scale=0.001 * nndist, size=init_data.shape
).astype(np.float32)
else:
embedding = init_data
return embedding
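The spectral branch above rescales the layout so coordinates span roughly [-10, 10] and then jitters it; a small NumPy sketch of just that rescaling step (with a random stand-in for the spectral layout):

```python
import numpy as np

rng = np.random.RandomState(42)
layout = rng.normal(size=(100, 2))  # stand-in for a spectral layout

# rescale so the largest coordinate magnitude is 10, then add tiny noise
# so no two points start the optimization at identical positions
expansion = 10.0 / np.abs(layout).max()
embedding = (layout * expansion).astype(np.float32) + rng.normal(
    scale=1e-4, size=layout.shape
).astype(np.float32)
```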
def convert_distance_to_log_probability(distances, a=1.0, b=1.0):
"""
convert distance representation into log probability,
as a function of a, b params
Parameters
----------
distances : array
euclidean distance between two points in embedding
a : float, optional
parameter based on min_dist, by default 1.0
b : float, optional
parameter based on min_dist, by default 1.0
Returns
-------
float
log probability in embedding space
"""
return -tf.math.log1p(a * distances ** (2 * b))
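The same curve can be sanity-checked without TensorFlow; this NumPy sketch (not part of the library API) mirrors the formula above:

```python
import numpy as np

def distance_to_log_prob(d, a=1.0, b=1.0):
    # -log(1 + a * d^(2b)): log of UMAP's low-dimensional membership kernel
    return -np.log1p(a * np.asarray(d) ** (2 * b))

# membership probability is exactly 1 at distance 0, 1/2 at distance 1
# (for a = b = 1), and decays monotonically with distance
p0 = np.exp(distance_to_log_prob(0.0))
p1 = np.exp(distance_to_log_prob(1.0))
```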
def compute_cross_entropy(
probabilities_graph, log_probabilities_distance, EPS=1e-4, repulsion_strength=1.0
):
"""
Compute cross entropy between low and high probability
Parameters
----------
probabilities_graph : array
high dimensional probabilities
log_probabilities_distance : array
low dimensional log probabilities
EPS : float, optional
offset to ensure log is taken of a positive number, by default 1e-4
repulsion_strength : float, optional
strength of repulsion between negative samples, by default 1.0
Returns
-------
attraction_term: tf.float32
attraction term for cross entropy loss
repellant_term: tf.float32
repellent term for cross entropy loss
cross_entropy: tf.float32
cross entropy umap loss
"""
# cross entropy
attraction_term = -probabilities_graph * tf.math.log_sigmoid(
log_probabilities_distance
)
# use numerically stable repellent term
# Shi et al. 2022 (https://arxiv.org/abs/2111.08851)
# log(1 - sigmoid(logits)) = log(sigmoid(logits)) - logits
repellant_term = (
-(1.0 - probabilities_graph)
* (tf.math.log_sigmoid(log_probabilities_distance) - log_probabilities_distance)
* repulsion_strength
)
# balance the expected losses between attraction and repel
CE = attraction_term + repellant_term
return attraction_term, repellant_term, CE
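The repellent term relies on the identity log(1 - sigmoid(x)) = log(sigmoid(x)) - x cited above; a quick NumPy check that the two sides agree:

```python
import numpy as np

def log_sigmoid(x):
    # numerically stable log(sigmoid(x)) = -log(1 + exp(-x))
    return -np.logaddexp(0.0, -x)

x = np.array([-5.0, 0.0, 3.0])
naive = np.log(1.0 - 1.0 / (1.0 + np.exp(-x)))  # log(1 - sigmoid(x)), direct
stable = log_sigmoid(x) - x                     # the identity used above
```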
def umap_loss(
batch_size,
negative_sample_rate,
_a,
_b,
edge_weights,
parametric_embedding,
repulsion_strength=1.0,
):
"""
Generate a keras-compatible loss function for UMAP loss
Parameters
----------
batch_size : int
size of mini-batches
negative_sample_rate : int
number of negative samples per positive samples to train on
_a : float
distance parameter in embedding space
_b : float
distance parameter in embedding space
edge_weights : array
weights of all edges from sparse UMAP graph
parametric_embedding : bool
whether the embedding is parametric or nonparametric
repulsion_strength : float, optional
strength of repulsion vs attraction for cross-entropy, by default 1.0
Returns
-------
loss : function
loss function that takes in a placeholder (0) and the output of the keras network
"""
if not parametric_embedding:
# multiply loss by weights for nonparametric
weights_tiled = np.tile(edge_weights, negative_sample_rate + 1)
@tf.function
def loss(placeholder_y, embed_to_from):
# split out to/from
embedding_to, embedding_from = tf.split(
embed_to_from, num_or_size_splits=2, axis=1
)
# get negative samples
embedding_neg_to = tf.repeat(embedding_to, negative_sample_rate, axis=0)
repeat_neg = tf.repeat(embedding_from, negative_sample_rate, axis=0)
embedding_neg_from = tf.gather(
repeat_neg, tf.random.shuffle(tf.range(tf.shape(repeat_neg)[0]))
)
# distances between samples (and negative samples)
distance_embedding = tf.concat(
[
tf.norm(embedding_to - embedding_from, axis=1),
tf.norm(embedding_neg_to - embedding_neg_from, axis=1),
],
axis=0,
)
# convert distances to probabilities
log_probabilities_distance = convert_distance_to_log_probability(
distance_embedding, _a, _b
)
# set true probabilities based on negative sampling
probabilities_graph = tf.concat(
[tf.ones(batch_size), tf.zeros(batch_size * negative_sample_rate)], axis=0
)
# compute cross entropy
(attraction_loss, repellant_loss, ce_loss) = compute_cross_entropy(
probabilities_graph,
log_probabilities_distance,
repulsion_strength=repulsion_strength,
)
if not parametric_embedding:
ce_loss = ce_loss * weights_tiled
return tf.reduce_mean(ce_loss)
return loss
def distance_loss_corr(x, z_x):
"""Loss based on the distance between elements in a batch"""
# flatten data
x = tf.keras.layers.Flatten()(x)
z_x = tf.keras.layers.Flatten()(z_x)
## z score data
def z_score(x):
return (x - tf.reduce_mean(x)) / tf.math.reduce_std(x)
x = z_score(x)
z_x = z_score(z_x)
# clip distances to 10 standard deviations for stability
x = tf.clip_by_value(x, -10, 10)
z_x = tf.clip_by_value(z_x, -10, 10)
dx = tf.math.reduce_euclidean_norm(x[1:] - x[:-1], axis=1)
dz = tf.math.reduce_euclidean_norm(z_x[1:] - z_x[:-1], axis=1)
# jitter dz to prevent mode collapse
dz = dz + tf.random.uniform(dz.shape) * 1e-10
# compute correlation
corr_d = tf.squeeze(
tensorflow_probability.stats.correlation(
x=tf.expand_dims(dx, -1), y=tf.expand_dims(dz, -1)
)
)
if tf.math.is_nan(corr_d):
raise ValueError("NaN values found in correlation loss.")
return -corr_d
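Underneath the TensorFlow Probability call, this loss is just negative Pearson correlation between consecutive-pair distances; a NumPy rendering of the core idea (a sketch, not the library function):

```python
import numpy as np

def distance_corr_loss_np(x, z):
    # distances between consecutive batch elements, in input and embedding space
    dx = np.linalg.norm(x[1:] - x[:-1], axis=1)
    dz = np.linalg.norm(z[1:] - z[:-1], axis=1)
    return -np.corrcoef(dx, dz)[0, 1]  # minimized when distances correlate

rng = np.random.RandomState(0)
x = rng.normal(size=(32, 5))
```

An embedding with identical geometry (e.g. a uniform scaling of the input) achieves the minimum loss of -1.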
def prepare_networks(
encoder,
decoder,
n_components,
dims,
n_data,
parametric_embedding,
parametric_reconstruction,
init_embedding=None,
):
"""
Generates a set of keras networks for the encoder and decoder if one has not already
been predefined.
Parameters
----------
encoder : tf.keras.Sequential
The encoder Keras network
decoder : tf.keras.Sequential
the decoder Keras network
n_components : int
the dimensionality of the latent space
dims : tuple of shape (dim1, dim2, dim3...)
dimensionality of data
    n_data : int
        number of elements in training dataset
parametric_embedding : bool
Whether the embedder is parametric or non-parametric
parametric_reconstruction : bool
Whether the decoder is parametric or non-parametric
init_embedding : array (optional, default None)
The initial embedding, for nonparametric embeddings
Returns
-------
encoder: tf.keras.Sequential
encoder keras network
decoder: tf.keras.Sequential
decoder keras network
"""
if parametric_embedding:
if encoder is None:
encoder = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=dims),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=100, activation="relu"),
tf.keras.layers.Dense(units=100, activation="relu"),
tf.keras.layers.Dense(units=100, activation="relu"),
tf.keras.layers.Dense(units=n_components, name="z"),
]
)
else:
embedding_layer = tf.keras.layers.Embedding(
n_data, n_components, input_length=1
)
embedding_layer.build(input_shape=(1,))
embedding_layer.set_weights([init_embedding])
encoder = tf.keras.Sequential([embedding_layer])
if decoder is None:
if parametric_reconstruction:
decoder = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=n_components),
tf.keras.layers.Dense(units=100, activation="relu"),
tf.keras.layers.Dense(units=100, activation="relu"),
tf.keras.layers.Dense(units=100, activation="relu"),
tf.keras.layers.Dense(
                        units=np.prod(dims), name="recon", activation=None
),
tf.keras.layers.Reshape(dims),
]
)
return encoder, decoder
def construct_edge_dataset(
X,
graph_,
n_epochs,
batch_size,
parametric_embedding,
parametric_reconstruction,
global_correlation_loss_weight,
):
"""
Construct a tf.data.Dataset of edges, sampled by edge weight.
Parameters
----------
X : array, shape (n_samples, n_features)
New data to be transformed.
graph_ : scipy.sparse.csr.csr_matrix
Generated UMAP graph
n_epochs : int
# of epochs to train each edge
batch_size : int
batch size
parametric_embedding : bool
Whether the embedder is parametric or non-parametric
parametric_reconstruction : bool
Whether the decoder is parametric or non-parametric
"""
def gather_index(index):
return X[index]
# if X is > 512Mb in size, we need to use a different, slower method for
# batching data.
    gather_indices_in_python = X.nbytes * 1e-9 > 0.5
def gather_X(edge_to, edge_from):
# gather data from indexes (edges) in either numpy of tf, depending on array size
if gather_indices_in_python:
edge_to_batch = tf.py_function(gather_index, [edge_to], [tf.float32])[0]
edge_from_batch = tf.py_function(gather_index, [edge_from], [tf.float32])[0]
else:
edge_to_batch = tf.gather(X, edge_to)
edge_from_batch = tf.gather(X, edge_from)
return edge_to_batch, edge_from_batch
def get_outputs(edge_to_batch, edge_from_batch):
outputs = {"umap": tf.repeat(0, batch_size)}
if global_correlation_loss_weight > 0:
outputs["global_correlation"] = edge_to_batch
if parametric_reconstruction:
# add reconstruction to iterator output
# edge_out = tf.concat([edge_to_batch, edge_from_batch], axis=0)
outputs["reconstruction"] = edge_to_batch
return (edge_to_batch, edge_from_batch), outputs
def make_sham_generator():
"""
The sham generator is a placeholder when all data is already intrinsic to
the model, but keras wants some input data. Used for non-parametric
embedding.
"""
def sham_generator():
while True:
yield tf.zeros(1, dtype=tf.int32), tf.zeros(1, dtype=tf.int32)
return sham_generator
# get data from graph
graph, epochs_per_sample, head, tail, weight, n_vertices = get_graph_elements(
graph_, n_epochs
)
# number of elements per batch for embedding
if batch_size is None:
# batch size can be larger if its just over embeddings
if parametric_embedding:
batch_size = np.min([n_vertices, 1000])
else:
batch_size = len(head)
edges_to_exp, edges_from_exp = (
np.repeat(head, epochs_per_sample.astype("int")),
np.repeat(tail, epochs_per_sample.astype("int")),
)
# shuffle edges
shuffle_mask = np.random.permutation(range(len(edges_to_exp)))
edges_to_exp = edges_to_exp[shuffle_mask].astype(np.int64)
edges_from_exp = edges_from_exp[shuffle_mask].astype(np.int64)
# create edge iterator
if parametric_embedding:
edge_dataset = tf.data.Dataset.from_tensor_slices(
(edges_to_exp, edges_from_exp)
)
edge_dataset = edge_dataset.repeat()
edge_dataset = edge_dataset.shuffle(10000)
edge_dataset = edge_dataset.batch(batch_size, drop_remainder=True)
edge_dataset = edge_dataset.map(
gather_X, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
edge_dataset = edge_dataset.map(
get_outputs, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
edge_dataset = edge_dataset.prefetch(10)
else:
# nonparametric embedding uses a sham dataset
gen = make_sham_generator()
edge_dataset = tf.data.Dataset.from_generator(
gen,
(tf.int32, tf.int32),
output_shapes=(tf.TensorShape(1), tf.TensorShape((1,))),
)
return edge_dataset, batch_size, len(edges_to_exp), head, tail, weight
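The edge oversampling step above is worth seeing in isolation: repeating each edge `epochs_per_sample` times means a uniform shuffle draws strong edges more often (hypothetical values for illustration):

```python
import numpy as np

head = np.array([0, 1, 2])
tail = np.array([1, 2, 0])
epochs_per_sample = np.array([3.0, 1.0, 2.0])  # hypothetical per-edge counts

# duplicate each edge by its epoch count, then shuffle both arrays in lockstep
edges_to_exp = np.repeat(head, epochs_per_sample.astype("int"))
edges_from_exp = np.repeat(tail, epochs_per_sample.astype("int"))
perm = np.random.permutation(len(edges_to_exp))
edges_to_exp, edges_from_exp = edges_to_exp[perm], edges_from_exp[perm]
```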
def should_pickle(key, val):
"""
Checks if a dictionary item can be pickled
Parameters
----------
    key : str
        key for dictionary element
    val : object
        element of dictionary
Returns
-------
picklable: bool
whether the dictionary item can be pickled
"""
try:
## make sure object can be pickled and then re-read
# pickle object
pickled = codecs.encode(pickle.dumps(val), "base64").decode()
# unpickle object
unpickled = pickle.loads(codecs.decode(pickled.encode(), "base64"))
except (
pickle.PicklingError,
tf.errors.InvalidArgumentError,
TypeError,
tf.errors.InternalError,
tf.errors.NotFoundError,
OverflowError,
TypingError,
AttributeError,
) as e:
warn("Did not pickle {}: {}".format(key, e))
return False
except ValueError as e:
warn(f"Failed at pickling {key}:{val} due to {e}")
return False
return True
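The base64 pickle round trip above, reduced to a standalone boolean helper (a sketch, not the library function; the catch-all `except` is for brevity):

```python
import codecs
import pickle

def can_pickle(val):
    # mirror of the round-trip check above: serialize, base64-encode,
    # then decode and deserialize, reporting success as a boolean
    try:
        blob = codecs.encode(pickle.dumps(val), "base64").decode()
        pickle.loads(codecs.decode(blob.encode(), "base64"))
    except Exception:
        return False
    return True

print(can_pickle({"a": 1}))         # True
print(can_pickle(lambda v: v + 1))  # False: lambdas cannot be pickled
```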
def load_ParametricUMAP(save_location, verbose=True):
"""
Load a parametric UMAP model consisting of a umap-learn UMAP object
and corresponding keras models.
Parameters
----------
save_location : str
the folder that the model was saved in
verbose : bool, optional
Whether to print the loading steps, by default True
Returns
-------
parametric_umap.ParametricUMAP
Parametric UMAP objects
"""
## Loads a ParametricUMAP model and its related keras models
model_output = os.path.join(save_location, "model.pkl")
    with open(model_output, "rb") as f:
        model = pickle.load(f)
if verbose:
print("Pickle of ParametricUMAP model loaded from {}".format(model_output))
# Work around optimizer not pickling anymore (since tf 2.4)
class_name = model._optimizer_dict["name"]
OptimizerClass = getattr(tf.keras.optimizers, class_name)
model.optimizer = OptimizerClass.from_config(model._optimizer_dict)
# load encoder
encoder_output = os.path.join(save_location, "encoder")
if os.path.exists(encoder_output):
model.encoder = tf.keras.models.load_model(encoder_output)
if verbose:
print("Keras encoder model loaded from {}".format(encoder_output))
    # load decoder
decoder_output = os.path.join(save_location, "decoder")
if os.path.exists(decoder_output):
        model.decoder = tf.keras.models.load_model(decoder_output)
        if verbose:
            print("Keras decoder model loaded from {}".format(decoder_output))
# get the custom loss function
umap_loss_fn = umap_loss(
model.batch_size,
model.negative_sample_rate,
model._a,
model._b,
model.edge_weight,
model.parametric_embedding,
)
    # load parametric_model
parametric_model_output = os.path.join(save_location, "parametric_model")
    if os.path.exists(parametric_model_output):
        model.parametric_model = tf.keras.models.load_model(
            parametric_model_output, custom_objects={"loss": umap_loss_fn}
        )
        if verbose:
            print("Keras full model loaded from {}".format(parametric_model_output))
return model
class GradientClippedModel(tf.keras.Model):
"""
We need to define a custom keras model here for gradient clipping,
to stabilize training.
"""
def train_step(self, data):
# Unpack the data. Its structure depends on your model and
# on what you pass to `fit()`.
x, y = data
with tf.GradientTape() as tape:
y_pred = self(x, training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
# Compute gradients
trainable_vars = self.trainable_variables
gradients = tape.gradient(loss, trainable_vars)
gradients = [tf.clip_by_value(grad, -4.0, 4.0) for grad in gradients]
gradients = [
(tf.where(tf.math.is_nan(grad), tf.zeros_like(grad), grad))
for grad in gradients
]
# Update weights
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Update metrics (includes the metric that tracks the loss)
self.compiled_metrics.update_state(y, y_pred)
# Return a dict mapping metric names to current value
return {m.name: m.result() for m in self.metrics}
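The gradient sanitization in `train_step` can be exercised without a model; a NumPy sketch of the same per-gradient treatment:

```python
import numpy as np

def sanitize_gradient(grad, clip=4.0):
    # value-clip first, then zero out any NaNs, mirroring the two list
    # comprehensions in train_step above (np.clip propagates NaN, so the
    # NaN replacement must come second)
    grad = np.clip(grad, -clip, clip)
    return np.where(np.isnan(grad), 0.0, grad)

g = np.array([10.0, -7.5, np.nan, 0.3])
print(sanitize_gradient(g))  # clipped to [-4, 4], NaN replaced by 0
```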
# File: umap/tests/test_umap_nn.py (repo: lmcinnes/umap)
import numpy as np
import pytest
from numpy.testing import assert_array_almost_equal
from sklearn.neighbors import KDTree
from sklearn.preprocessing import normalize
from umap import distances as dist
from umap.umap_ import (
nearest_neighbors,
smooth_knn_dist,
)
# ===================================================
# Nearest Neighbour Test cases
# ===================================================
# nearest_neighbours metric parameter validation
# -----------------------------------------------
def test_nn_bad_metric(nn_data):
with pytest.raises(ValueError):
nearest_neighbors(nn_data, 10, 42, {}, False, np.random)
def test_nn_bad_metric_sparse_data(sparse_nn_data):
with pytest.raises(ValueError):
nearest_neighbors(
sparse_nn_data,
10,
"seuclidean",
{},
False,
np.random,
)
# -------------------------------------------------
# Utility functions for Nearest Neighbour
# -------------------------------------------------
def knn(indices, nn_data): # pragma: no cover
tree = KDTree(nn_data)
true_indices = tree.query(nn_data, 10, return_distance=False)
num_correct = 0.0
for i in range(nn_data.shape[0]):
num_correct += np.sum(np.in1d(true_indices[i], indices[i]))
return num_correct / (nn_data.shape[0] * 10)
def smooth_knn(nn_data, local_connectivity=1.0):
knn_indices, knn_dists, _ = nearest_neighbors(
nn_data, 10, "euclidean", {}, False, np.random
)
sigmas, rhos = smooth_knn_dist(
knn_dists, 10.0, local_connectivity=local_connectivity
)
shifted_dists = knn_dists - rhos[:, np.newaxis]
shifted_dists[shifted_dists < 0.0] = 0.0
vals = np.exp(-(shifted_dists / sigmas[:, np.newaxis]))
norms = np.sum(vals, axis=1)
return norms
@pytest.mark.skip()
def test_nn_descent_neighbor_accuracy(nn_data): # pragma: no cover
knn_indices, knn_dists, _ = nearest_neighbors(
nn_data, 10, "euclidean", {}, False, np.random
)
percent_correct = knn(knn_indices, nn_data)
    assert (
        percent_correct >= 0.85
    ), "NN-descent did not get 85% accuracy on nearest neighbors"
@pytest.mark.skip()
def test_nn_descent_neighbor_accuracy_low_memory(nn_data): # pragma: no cover
knn_indices, knn_dists, _ = nearest_neighbors(
nn_data, 10, "euclidean", {}, False, np.random, low_memory=True
)
percent_correct = knn(knn_indices, nn_data)
assert (
percent_correct >= 0.89
), "NN-descent did not get 89% accuracy on nearest neighbors"
@pytest.mark.skip()
def test_angular_nn_descent_neighbor_accuracy(nn_data): # pragma: no cover
knn_indices, knn_dists, _ = nearest_neighbors(
nn_data, 10, "cosine", {}, True, np.random
)
angular_data = normalize(nn_data, norm="l2")
percent_correct = knn(knn_indices, angular_data)
    assert (
        percent_correct >= 0.85
    ), "NN-descent did not get 85% accuracy on nearest neighbors"
@pytest.mark.skip()
def test_sparse_nn_descent_neighbor_accuracy(sparse_nn_data): # pragma: no cover
knn_indices, knn_dists, _ = nearest_neighbors(
sparse_nn_data, 20, "euclidean", {}, False, np.random
)
percent_correct = knn(knn_indices, sparse_nn_data.todense())
    assert (
        percent_correct >= 0.75
    ), "Sparse NN-descent did not get 75% accuracy on nearest neighbors"
@pytest.mark.skip()
def test_sparse_nn_descent_neighbor_accuracy_low_memory(
sparse_nn_data,
): # pragma: no cover
knn_indices, knn_dists, _ = nearest_neighbors(
sparse_nn_data, 20, "euclidean", {}, False, np.random, low_memory=True
)
percent_correct = knn(knn_indices, sparse_nn_data.todense())
    assert (
        percent_correct >= 0.85
    ), "Sparse NN-descent did not get 85% accuracy on nearest neighbors"
@pytest.mark.skip()
def test_nn_descent_neighbor_accuracy_callable_metric(nn_data): # pragma: no cover
knn_indices, knn_dists, _ = nearest_neighbors(
nn_data, 10, dist.euclidean, {}, False, np.random
)
percent_correct = knn(knn_indices, nn_data)
assert (
percent_correct >= 0.95
), "NN-descent did not get 95% accuracy on nearest neighbors with callable metric"
@pytest.mark.skip()
def test_sparse_angular_nn_descent_neighbor_accuracy(
sparse_nn_data,
): # pragma: no cover
knn_indices, knn_dists, _ = nearest_neighbors(
sparse_nn_data, 20, "cosine", {}, True, np.random
)
angular_data = normalize(sparse_nn_data, norm="l2").toarray()
percent_correct = knn(knn_indices, angular_data)
assert (
percent_correct >= 0.90
), "Sparse NN-descent did not get 90% accuracy on nearest neighbors"
def test_smooth_knn_dist_l1norms(nn_data):
norms = smooth_knn(nn_data)
assert_array_almost_equal(
norms,
1.0 + np.log2(10) * np.ones(norms.shape[0]),
decimal=3,
err_msg="Smooth knn-dists does not give expected" "norms",
)
def test_smooth_knn_dist_l1norms_w_connectivity(nn_data):
norms = smooth_knn(nn_data, local_connectivity=1.75)
assert_array_almost_equal(
norms,
1.0 + np.log2(10) * np.ones(norms.shape[0]),
decimal=3,
err_msg="Smooth knn-dists does not give expected"
"norms for local_connectivity=1.75",
)
| bsd-3-clause | 3c8cabfe74ee7fb267e3c0ae52b82a37 | 31.023952 | 86 | 0.614622 | 3.372005 | false | true | false | false |
lmcinnes/umap | umap/tests/test_composite_models.py | 1 | 3681 | from umap import UMAP
import pytest
try:
# works for sklearn>=0.22
from sklearn.manifold import trustworthiness
except ImportError:
# this is to comply with requirements (scikit-learn>=0.20)
# More recent versions of sklearn have exposed trustworthiness
# in top level module API
# see: https://github.com/scikit-learn/scikit-learn/pull/15337
from sklearn.manifold.t_sne import trustworthiness
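The tests below lean on `trustworthiness`, which penalises embedding-space neighbours that were far away in the original space. A minimal pure-NumPy sketch of the quantity being checked (a hypothetical helper for illustration, not sklearn's implementation):

```python
import numpy as np

def trustworthiness_sketch(X, X_emb, n_neighbors):
    # squared pairwise distances in the original and embedded spaces
    dist_X = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dist_E = ((X_emb[:, None, :] - X_emb[None, :, :]) ** 2).sum(-1)
    n = X.shape[0]
    # rank of every point among each row's original-space neighbours
    # (the point itself sits at rank 0)
    ranks = dist_X.argsort(axis=1).argsort(axis=1)
    # the n_neighbors nearest neighbours in the embedding, skipping self
    emb_nn = dist_E.argsort(axis=1)[:, 1:n_neighbors + 1]
    # penalise embedding neighbours that were not original-space neighbours
    penalty = sum(
        ranks[i, j] - n_neighbors
        for i in range(n)
        for j in emb_nn[i]
        if ranks[i, j] > n_neighbors
    )
    return 1.0 - (2.0 / (n * n_neighbors * (2 * n - 3 * n_neighbors - 1))) * penalty

rng = np.random.RandomState(42)
X = rng.rand(10, 3)
# an identity "embedding" preserves all neighbourhoods, so the score is 1.0
perfect = trustworthiness_sketch(X, X.copy(), n_neighbors=2)
assert abs(perfect - 1.0) < 1e-12
```

A score near 1.0 (the thresholds in these tests are 0.75-0.82) means few embedding neighbours intruded from far away in the original space.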
def test_composite_trustworthiness(nn_data, iris_model):
data = nn_data[:50]
model1 = UMAP(n_neighbors=10, min_dist=0.01, random_state=42, n_epochs=50).fit(data)
model2 = UMAP(
n_neighbors=30,
min_dist=0.01,
random_state=42,
n_epochs=50,
init=model1.embedding_,
).fit(data)
model3 = model1 * model2
trust = trustworthiness(data, model3.embedding_, n_neighbors=10)
assert (
trust >= 0.82
), "Insufficiently trustworthy embedding for" "nn dataset: {}".format(trust)
model4 = model1 + model2
trust = trustworthiness(data, model4.embedding_, n_neighbors=10)
assert (
trust >= 0.82
), "Insufficiently trustworthy embedding for" "nn dataset: {}".format(trust)
with pytest.raises(ValueError):
_ = model1 + iris_model
with pytest.raises(ValueError):
_ = model1 * iris_model
with pytest.raises(ValueError):
_ = model1 - iris_model
@pytest.mark.skip(reason="Marked as Skipped test")
def test_composite_trustworthiness_random_init(nn_data): # pragma: no cover
data = nn_data[:50]
model1 = UMAP(
n_neighbors=10,
min_dist=0.01,
random_state=42,
n_epochs=50,
init="random",
).fit(data)
model2 = UMAP(
n_neighbors=30,
min_dist=0.01,
random_state=42,
n_epochs=50,
init="random",
).fit(data)
model3 = model1 * model2
trust = trustworthiness(data, model3.embedding_, n_neighbors=10)
assert (
trust >= 0.82
), "Insufficiently trustworthy embedding for" "nn dataset: {}".format(trust)
model4 = model1 + model2
trust = trustworthiness(data, model4.embedding_, n_neighbors=10)
assert (
trust >= 0.82
), "Insufficiently trustworthy embedding for" "nn dataset: {}".format(trust)
def test_composite_trustworthiness_on_iris(iris):
iris_model1 = UMAP(
n_neighbors=10,
min_dist=0.01,
random_state=42,
n_epochs=100,
).fit(iris.data[:, :2])
iris_model2 = UMAP(
n_neighbors=10,
min_dist=0.01,
random_state=42,
n_epochs=100,
).fit(iris.data[:, 2:])
embedding = (iris_model1 + iris_model2).embedding_
trust = trustworthiness(iris.data, embedding, n_neighbors=10)
assert (
trust >= 0.82
), "Insufficiently trustworthy embedding for" "iris dataset: {}".format(trust)
embedding = (iris_model1 * iris_model2).embedding_
trust = trustworthiness(iris.data, embedding, n_neighbors=10)
assert (
trust >= 0.82
), "Insufficiently trustworthy embedding for" "iris dataset: {}".format(trust)
def test_contrastive_trustworthiness_on_iris(iris):
iris_model1 = UMAP(
n_neighbors=10,
min_dist=0.01,
random_state=42,
n_epochs=100,
).fit(iris.data[:, :2])
iris_model2 = UMAP(
n_neighbors=10,
min_dist=0.01,
random_state=42,
n_epochs=100,
).fit(iris.data[:, 2:])
embedding = (iris_model1 - iris_model2).embedding_
trust = trustworthiness(iris.data, embedding, n_neighbors=10)
assert (
trust >= 0.75
), "Insufficiently trustworthy embedding for" "iris dataset: {}".format(trust)
| bsd-3-clause | 814e68f001cc5d9ce04b955023e159a9 | 30.461538 | 88 | 0.624559 | 3.295434 | false | true | false | false |
lmcinnes/umap | umap/utils.py | 1 | 6662 | # Author: Leland McInnes <leland.mcinnes@gmail.com>
#
# License: BSD 3 clause
import time
from warnings import warn
import numpy as np
import numba
from sklearn.utils.validation import check_is_fitted
import scipy.sparse
@numba.njit(parallel=True)
def fast_knn_indices(X, n_neighbors):
"""A fast computation of knn indices.
Parameters
----------
X: array of shape (n_samples, n_features)
The input data to compute the k-neighbor indices of.
n_neighbors: int
The number of nearest neighbors to compute for each sample in ``X``.
Returns
-------
knn_indices: array of shape (n_samples, n_neighbors)
The indices on the ``n_neighbors`` closest points in the dataset.
"""
knn_indices = np.empty((X.shape[0], n_neighbors), dtype=np.int32)
for row in numba.prange(X.shape[0]):
# v = np.argsort(X[row]) # Need to call argsort this way for numba
v = X[row].argsort(kind="quicksort")
v = v[:n_neighbors]
knn_indices[row] = v
return knn_indices
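For a precomputed distance matrix this reduces to a row-wise argsort truncated to the first `n_neighbors` columns. A plain-NumPy equivalent (without the numba parallel loop), shown on a tiny hand-checkable matrix:

```python
import numpy as np

def knn_indices_sketch(X, n_neighbors):
    # same result as fast_knn_indices, minus the numba parallelism
    return np.argsort(X, axis=1, kind="quicksort")[:, :n_neighbors]

D = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])
idx = knn_indices_sketch(D, 2)
# each point is its own nearest neighbour (distance 0 on the diagonal)
assert (idx[:, 0] == np.arange(3)).all()
assert (idx == np.array([[0, 2], [1, 0], [2, 0]])).all()
```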
@numba.njit("i4(i8[:])")
def tau_rand_int(state):
"""A fast (pseudo)-random number generator.
Parameters
----------
state: array of int64, shape (3,)
The internal state of the rng
Returns
-------
A (pseudo)-random int32 value
"""
state[0] = (((state[0] & 4294967294) << 12) & 0xFFFFFFFF) ^ (
(((state[0] << 13) & 0xFFFFFFFF) ^ state[0]) >> 19
)
state[1] = (((state[1] & 4294967288) << 4) & 0xFFFFFFFF) ^ (
(((state[1] << 2) & 0xFFFFFFFF) ^ state[1]) >> 25
)
state[2] = (((state[2] & 4294967280) << 17) & 0xFFFFFFFF) ^ (
(((state[2] << 3) & 0xFFFFFFFF) ^ state[2]) >> 11
)
return state[0] ^ state[1] ^ state[2]
@numba.njit("f4(i8[:])")
def tau_rand(state):
"""A fast (pseudo)-random number generator for floats in the range [0,1]
Parameters
----------
state: array of int64, shape (3,)
The internal state of the rng
Returns
-------
A (pseudo)-random float32 in the interval [0, 1]
"""
integer = tau_rand_int(state)
return abs(float(integer) / 0x7FFFFFFF)
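The same three-component Tausworthe step can be written in plain Python. The one subtlety is that numba's `i4` return type reinterprets the mixed word as a signed 32-bit integer, which the sketch below (hypothetical helper names) emulates explicitly:

```python
def tau_rand_int_sketch(state):
    # plain-Python port of the masked shift/xor updates above
    state[0] = (((state[0] & 4294967294) << 12) & 0xFFFFFFFF) ^ (
        (((state[0] << 13) & 0xFFFFFFFF) ^ state[0]) >> 19
    )
    state[1] = (((state[1] & 4294967288) << 4) & 0xFFFFFFFF) ^ (
        (((state[1] << 2) & 0xFFFFFFFF) ^ state[1]) >> 25
    )
    state[2] = (((state[2] & 4294967280) << 17) & 0xFFFFFFFF) ^ (
        (((state[2] << 3) & 0xFFFFFFFF) ^ state[2]) >> 11
    )
    mixed = state[0] ^ state[1] ^ state[2]
    if mixed >= 0x80000000:   # numba's i4 return reinterprets as signed 32-bit
        mixed -= 0x100000000
    return mixed

def tau_rand_sketch(state):
    return abs(tau_rand_int_sketch(state) / 0x7FFFFFFF)

s1 = [123456789, 362436069, 521288629]
s2 = list(s1)
seq1 = [tau_rand_sketch(s1) for _ in range(5)]
seq2 = [tau_rand_sketch(s2) for _ in range(5)]
assert seq1 == seq2                              # same seed, same stream
assert all(0.0 <= x <= 1.0000001 for x in seq1)  # values land in ~[0, 1]
```

The state is mutated in place, which is why callers pass the same array to successive draws.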
@numba.njit()
def norm(vec):
"""Compute the (standard l2) norm of a vector.
Parameters
----------
vec: array of shape (dim,)
Returns
-------
The l2 norm of vec.
"""
result = 0.0
for i in range(vec.shape[0]):
result += vec[i] ** 2
return np.sqrt(result)
@numba.njit(parallel=True)
def submatrix(dmat, indices_col, n_neighbors):
"""Return a submatrix given an orginal matrix and the indices to keep.
Parameters
----------
dmat: array, shape (n_samples, n_samples)
Original matrix.
indices_col: array, shape (n_samples, n_neighbors)
Indices to keep. Each row consists of the indices of the columns.
n_neighbors: int
Number of neighbors.
Returns
-------
submat: array, shape (n_samples, n_neighbors)
The corresponding submatrix.
"""
n_samples_transform, n_samples_fit = dmat.shape
submat = np.zeros((n_samples_transform, n_neighbors), dtype=dmat.dtype)
for i in numba.prange(n_samples_transform):
for j in numba.prange(n_neighbors):
submat[i, j] = dmat[i, indices_col[i, j]]
return submat
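With plain NumPy the same gather is a single advanced-indexing expression; a small sketch for comparison with the loop above:

```python
import numpy as np

dmat = np.arange(16, dtype=np.float64).reshape(4, 4)
indices_col = np.array([[0, 2], [1, 3], [2, 0], [3, 1]])
# per row i, pick the columns listed in indices_col[i]
sub = dmat[np.arange(dmat.shape[0])[:, None], indices_col]
assert sub.shape == (4, 2)
assert (sub == np.array([[0., 2.], [5., 7.], [10., 8.], [15., 13.]])).all()
```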
# Generates a timestamp for use in logging messages when verbose=True
def ts():
return time.ctime(time.time())
# I'm not enough of a numba ninja to numba this successfully.
# np.arrays of lists, which are objects...
def csr_unique(matrix, return_index=True, return_inverse=True, return_counts=True):
"""Find the unique elements of a sparse csr matrix.
We don't explicitly construct the unique matrix leaving that to the user
who may not want to duplicate a massive array in memory.
Returns the indices of the input array that give the unique values.
Returns the indices of the unique array that reconstructs the input array.
Returns the number of times each unique row appears in the input matrix.
matrix: a csr matrix
    return_index: bool, optional
If true, return the row indices of 'matrix'
return_inverse: bool, optional
If true, return the indices of the unique array that can be
used to reconstruct 'matrix'.
    return_counts: bool, optional
If true, returns the number of times each unique item appears in 'matrix'
    The unique matrix can be computed via
unique_matrix = matrix[index]
and the original matrix reconstructed via
unique_matrix[inverse]
"""
lil_matrix = matrix.tolil()
rows = [x + y for x, y in zip(lil_matrix.rows, lil_matrix.data)]
return_values = return_counts + return_inverse + return_index
return np.unique(
rows,
return_index=return_index,
return_inverse=return_inverse,
return_counts=return_counts,
)[1 : (return_values + 1)]
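For a dense stand-in, `np.unique(..., axis=0)` yields the same index/inverse/counts triple, and the reconstruction identities from the docstring hold; a small sketch:

```python
import numpy as np

rows = np.array([[0, 1, 0],
                 [2, 0, 0],
                 [0, 1, 0],
                 [2, 0, 0],
                 [0, 1, 0]])
_, index, inverse, counts = np.unique(
    rows, axis=0, return_index=True, return_inverse=True, return_counts=True
)
unique_rows = rows[index]                   # unique_matrix = matrix[index]
inverse = np.asarray(inverse).reshape(-1)   # guard against inverse-shape quirks across numpy versions
assert (unique_rows[inverse] == rows).all() # unique_matrix[inverse] rebuilds matrix
assert counts.sum() == rows.shape[0]
```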
def disconnected_vertices(model):
"""
Returns a boolean vector indicating which vertices are disconnected from the umap graph.
These vertices will often be scattered across the space and make it difficult to focus on the main
manifold. They can either be filtered and have UMAP re-run or simply filtered from the interactive plotting tool
via the subset_points parameter.
Use ~disconnected_vertices(model) to only plot the connected points.
Parameters
----------
model: a trained UMAP model
Returns
-------
A boolean vector indicating which points are disconnected
"""
check_is_fitted(model, "graph_")
if model.unique:
vertices_disconnected = (
np.array(model.graph_[model._unique_inverse_].sum(axis=1)).flatten() == 0
)
else:
vertices_disconnected = np.array(model.graph_.sum(axis=1)).flatten() == 0
return vertices_disconnected
def average_nn_distance(dist_matrix):
"""Calculate the average distance to each points nearest neighbors.
Parameters
----------
dist_matrix: a csr_matrix
A distance matrix (usually umap_model.graph_)
Returns
-------
    An array with the average distance to each point's nearest neighbors
"""
(row_idx, col_idx, val) = scipy.sparse.find(dist_matrix)
# Count/sum is done per row
count_non_zero_elems = np.bincount(row_idx)
sum_non_zero_elems = np.bincount(row_idx, weights=val)
averages = sum_non_zero_elems / count_non_zero_elems
if any(np.isnan(averages)):
warn(
"Embedding contains disconnected vertices which will be ignored."
"Use umap.utils.disconnected_vertices() to identify them."
)
return averages
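A dense miniature of the same computation, assuming a small graph where every row has at least one stored entry (so the NaN warning path is not hit):

```python
import numpy as np

# average only the stored (non-zero) entries of each row, which is
# exactly what the two bincounts above compute on the sparse triplets
graph = np.array([[0.0, 1.0, 3.0],
                  [2.0, 0.0, 0.0],
                  [4.0, 6.0, 0.0]])
row_idx, col_idx = np.nonzero(graph)
vals = graph[row_idx, col_idx]
counts = np.bincount(row_idx, minlength=graph.shape[0])
sums = np.bincount(row_idx, weights=vals, minlength=graph.shape[0])
averages = sums / counts
assert np.allclose(averages, [2.0, 2.0, 5.0])
```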
| bsd-3-clause | 2a0639b2de18bb39f13025c044bc11ae | 29.281818 | 117 | 0.638097 | 3.738496 | false | false | false | false |
machinalis/iepy | iepy/webui/corpus/admin.py | 2 | 3787 | from django.contrib import admin
from django.core import urlresolvers
from django.db.models import Q
from relatedwidget import RelatedWidgetWrapperBase
from corpus.models import (
IEDocument, IEDocumentMetadata, Entity, EntityKind, Relation,
EntityOccurrence, GazetteItem
)
admin.site.site_header = 'IEPY administration'
admin.site.site_title = 'IEPY'
admin.site.index_title = 'IEPY'
@admin.register(EntityKind)
class EntityKindAdmin(admin.ModelAdmin):
pass
@admin.register(EntityOccurrence)
class EntityOccurrenceAdmin(admin.ModelAdmin):
pass
@admin.register(Entity)
class EntityAdmin(admin.ModelAdmin):
list_per_page = 20
@admin.register(IEDocumentMetadata)
class IEDocumentMetadataAdmin(admin.ModelAdmin):
def has_delete_permission(self, request, obj=None):
return False
@admin.register(IEDocument)
class IEDocumentAdmin(RelatedWidgetWrapperBase, admin.ModelAdmin):
change_form_template = 'relatives/change_form.html'
list_display = ['id', 'human_identifier', 'link_to_document_navigation']
search_fields = ['text']
fieldsets = [
(None, {'fields': ['human_identifier', 'text', 'metadata']}),
('Preprocess output',
{'classes': ['collapse'],
'fields': ['tokens', 'offsets_to_text', 'tokenization_done_at',
'sentences', 'sentencer_done_at',
'lemmas', 'lemmatization_done_at',
'postags', 'tagging_done_at',
'ner_done_at', 'segmentation_done_at', 'syntactic_parsing_done_at'],
})]
def get_form(self, request, obj=None, **kwargs):
form = super().get_form(request, obj, **kwargs)
metadata_field = form.base_fields['metadata']
if obj is None:
metadata_field.queryset = metadata_field.queryset.filter(
document__isnull=True)
            # let's make this field not required during creation.
# This means that on save_model we'll create an empty metadata obj if needed
metadata_field.required = False
else:
metadata_field.queryset = metadata_field.queryset.filter(
Q(document__id=obj.id) | Q(document__isnull=True))
return form
def save_model(self, request, obj, form, change):
if obj.id is None and not change: # ie, creation
try:
obj.metadata
except IEDocumentMetadata.DoesNotExist:
obj.metadata = IEDocumentMetadata.objects.create()
return super().save_model(request, obj, form, change)
def link_to_document_navigation(self, obj):
return '<a href="{0}">Rich View</a>'.format(
urlresolvers.reverse('corpus:navigate_document', args=(obj.id,))
)
link_to_document_navigation.short_description = 'Rich View'
link_to_document_navigation.allow_tags = True
list_per_page = 20
@admin.register(Relation)
class RelationAdmin(admin.ModelAdmin):
list_display = ('name', 'left_entity_kind', 'right_entity_kind', 'link_to_label')
def link_to_label(self, obj):
return '<a href="{0}">Label evidence</a>'.format(
urlresolvers.reverse('corpus:next_document_to_label', args=(obj.id,))
)
link_to_label.short_description = 'Labeling'
link_to_label.allow_tags = True
def get_readonly_fields(self, request, obj=None):
if obj: # editing an existing object
return self.readonly_fields + ('left_entity_kind', 'right_entity_kind')
return self.readonly_fields
@admin.register(GazetteItem)
class GazetteAdmin(admin.ModelAdmin):
search_fields = ['text']
list_display = ('text', 'kind', 'from_freebase',)
list_filter = ('kind', 'from_freebase',)
readonly_fields = ('from_freebase', )
| bsd-3-clause | c16b3c29703fc2fdf57cb30320165117 | 33.743119 | 89 | 0.64827 | 3.723697 | false | false | false | false |
machinalis/iepy | tests/test_relations.py | 2 | 18064 | from unittest import mock
from iepy.data.models import EvidenceLabel
from .factories import (
RelationFactory, EntityFactory, EntityKindFactory,
TextSegmentFactory, EntityOccurrenceFactory,
IEDocFactory,
)
from .manager_case import ManagerTestCase
class TestRelations(ManagerTestCase):
def test_cant_change_kinds_after_creation(self):
r = RelationFactory()
new_ek = EntityKindFactory()
r.left_entity_kind = new_ek
self.assertRaises(ValueError, r.save)
class BaseTestReferenceBuilding(ManagerTestCase):
# Reference = a complete labeled Corpus
def setUp(self):
self.k_person = EntityKindFactory(name='person')
self.k_location = EntityKindFactory(name='location')
self.k_org = EntityKindFactory(name='organization')
self.john = EntityFactory(key='john', kind=self.k_person)
self.peter = EntityFactory(key='peter', kind=self.k_person)
self.london = EntityFactory(key='london', kind=self.k_location)
self.roma = EntityFactory(key='roma', kind=self.k_location)
self.UN = EntityFactory(key='United Nations', kind=self.k_org)
self.WHO = EntityFactory(key='World Health Organization', kind=self.k_org)
self.r_lives_in = RelationFactory(left_entity_kind=self.k_person,
right_entity_kind=self.k_location)
self.r_was_born_in = RelationFactory(left_entity_kind=self.k_person,
right_entity_kind=self.k_location)
self.r_father_of = RelationFactory(left_entity_kind=self.k_person,
right_entity_kind=self.k_person)
        self.weak_label = EvidenceLabel.SKIP  # means that it will need to be re-labeled
self.solid_label = EvidenceLabel.YESRELATION
def create_occurrence(self, doc, e, offset, end):
return EntityOccurrenceFactory(document=doc, entity=e,
offset=offset, offset_end=end)
def segment_with_occurrences_factory(self, occurrences=tuple(), **kwargs):
s = TextSegmentFactory(**kwargs)
for occurrence_data in occurrences:
if isinstance(occurrence_data, (list, tuple)):
e, start, end = occurrence_data
else:
e = occurrence_data
start, end = 0, 1 # just something, the simplest
eo = self.create_occurrence(s.document, e, start, end)
s.entity_occurrences.add(eo)
return s
class TestReferenceNextSegmentToLabel(BaseTestReferenceBuilding):
judge = "iepy"
    # the method to test, shortcut
def next(self, relation=None, **kwargs):
if relation is None:
relation = self.r_lives_in
if 'judge' not in kwargs:
kwargs['judge'] = self.judge
return relation.get_next_segment_to_label(**kwargs)
def test_if_no_segment_around_None_is_returned(self):
self.assertIsNone(self.next())
def test_if_segments_exists_but_with_no_matching_occurrences_None(self):
self.segment_with_occurrences_factory() # No occurrences at all
self.assertIsNone(self.next())
self.segment_with_occurrences_factory([self.john])
self.segment_with_occurrences_factory([self.roma])
self.assertIsNone(self.next())
self.segment_with_occurrences_factory([self.john, self.WHO])
self.segment_with_occurrences_factory([self.roma, self.WHO])
self.assertIsNone(self.next())
self.segment_with_occurrences_factory([self.john, self.peter])
self.segment_with_occurrences_factory([self.roma, self.london])
self.assertIsNone(self.next())
def test_if_matching_kinds_is_retrieved(self):
s = self.segment_with_occurrences_factory([self.john, self.roma])
self.assertEqual(s, self.next())
def test_if_segment_has_several_of_the_matching_kinds_is_still_found(self):
s = self.segment_with_occurrences_factory([self.john, self.peter, self.roma])
self.assertEqual(s, self.next())
def test_if_segment_has_matching_and_other_kinds_is_still_found(self):
s = self.segment_with_occurrences_factory([self.john, self.roma, self.UN])
self.assertEqual(s, self.next())
def test_segment_with_lowest_id_is_retrieved(self):
s1 = self.segment_with_occurrences_factory([self.john, self.roma])
self.segment_with_occurrences_factory([self.peter, self.london])
self.assertEqual(s1, self.next())
def test_relation_of_same_kind_expect_at_least_2_of_them(self):
self.segment_with_occurrences_factory([self.john])
self.segment_with_occurrences_factory([self.peter, self.london, self.WHO])
self.assertIsNone(self.next(relation=self.r_father_of))
s = self.segment_with_occurrences_factory([self.john, self.peter])
self.assertEqual(s, self.next(relation=self.r_father_of))
def test_relation_of_same_kind_accepts_2_occurrences_of_same_entity(self):
s = self.segment_with_occurrences_factory([self.john, (self.john, 2, 3)])
self.assertEqual(s, self.next(relation=self.r_father_of))
# until now, only Entity Kind matching. Let's check about existence and properties
# of questions - aka Labeled-Evidence
def test_if_segment_has_all_questions_answered_is_omitted(self):
s = self.segment_with_occurrences_factory([self.john, self.london])
self.assertIsNotNone(self.next())
for evidence in s.get_evidences_for_relation(self.r_lives_in):
evidence.set_label(self.r_lives_in, self.solid_label, self.judge)
self.assertIsNone(self.next())
def test_if_segment_has_all_questions_answered_for_other_relation_is_NOT_omitted(self):
s = self.segment_with_occurrences_factory([self.john, self.london])
self.assertIsNotNone(self.next())
for evidence in s.get_evidences_for_relation(self.r_was_born_in):
evidence.set_label(self.r_was_born_in, self.solid_label, self.judge)
self.assertEqual(s, self.next())
def test_if_segment_has_question_not_labeled_is_found(self):
s = self.segment_with_occurrences_factory([self.john, self.london])
self.assertIsNotNone(self.next())
for evidence in s.get_evidences_for_relation(self.r_lives_in):
evidence_label = evidence.labels.filter(judge=self.judge)
evidence_label.delete()
self.assertEqual(s, self.next())
def test_if_segment_has_question_with_label_None_is_found_by_same_judge(self):
s = self.segment_with_occurrences_factory([self.john, self.london])
s_2 = self.segment_with_occurrences_factory([self.john, self.roma])
self.assertIsNotNone(self.next())
for evidence in s.get_evidences_for_relation(self.r_lives_in):
evidence.labels.all().delete() # just to be sure, but shall be empty
evidence.set_label(self.r_lives_in, None, self.judge)
self.assertEqual(s, self.next())
# Now, for other judge, that segment is put last
other_judge = 'someone else'
self.assertEqual(s_2, self.next(judge=other_judge))
# But still foundable if it's the last one available
s_2.delete()
self.assertEqual(s, self.next(judge=other_judge))
def test_if_segment_has_question_labeled_with_dont_know_is_found(self):
s = self.segment_with_occurrences_factory([self.john, self.london])
self.assertIsNotNone(self.next())
for evidence in s.get_evidences_for_relation(self.r_lives_in):
evidence.set_label(self.r_lives_in, self.weak_label, self.judge)
self.assertEqual(s, self.next())
def test_if_segment_was_fully_labeled_but_some_empty_for_other_relation_is_omitted(self):
        # i.e., labeled evidences of the segment for some other relation don't matter here.
        # This test is more for ensuring we are not coding an undesired side-effect
s = self.segment_with_occurrences_factory([self.john, self.london])
for evidence in s.get_evidences_for_relation(self.r_lives_in):
evidence.set_label(self.r_lives_in, self.solid_label, self.judge)
self.assertIsNone(self.next())
def test_if_segment_has_some_questions_answered_but_other_dont_know_is_found(self):
s = self.segment_with_occurrences_factory([self.john, self.peter, self.london])
self.assertIsNotNone(self.next())
for evidence, lbl in zip(s.get_evidences_for_relation(self.r_lives_in),
[self.weak_label, self.solid_label]):
evidence.set_label(self.r_lives_in, lbl, self.judge)
self.assertEqual(s, self.next())
def test_if_segment_was_fully_labeled_but_some_dunno_for_other_relation_is_omitted(self):
        # i.e., labeled evidences of the segment for some other relation don't matter here.
        # This test is more for ensuring we are not coding an undesired side-effect
s = self.segment_with_occurrences_factory([self.john, self.london])
for evidence in s.get_evidences_for_relation(self.r_lives_in):
evidence.set_label(self.r_lives_in, self.solid_label, self.judge)
for evidence in s.get_evidences_for_relation(self.r_was_born_in):
evidence.set_label(self.r_was_born_in, self.weak_label, self.judge)
self.assertIsNone(self.next())
def test_segments_with_zero_evidence_labeled_are_prefered(self):
s = self.segment_with_occurrences_factory([self.john, self.london])
for evidence in s.get_evidences_for_relation(self.r_lives_in):
evidence.set_label(self.r_lives_in, self.weak_label, self.judge)
# so, this segment is found when searching...
self.assertEqual(s, self.next())
# But if a new one appears, pristine, with no evidences, is preferred
s2 = self.segment_with_occurrences_factory([self.peter, self.london])
self.assertEqual(s2, self.next())
def test_matching_text_segments_no_duplicates_no_extra(self):
a = self.segment_with_occurrences_factory([self.john, self.peter, self.london, self.roma])
b = self.segment_with_occurrences_factory([self.john, self.peter, self.london])
c = self.segment_with_occurrences_factory([self.john, self.london])
self.segment_with_occurrences_factory([self.roma, self.london])
real = list(self.r_lives_in._matching_text_segments())
expected = set([a, b, c])
self.assertEqual(len(real), len(expected))
self.assertEqual(set(real), expected)
class TestNavigateLabeledSegments(BaseTestReferenceBuilding):
judge = "iepy"
def create_labeled_segments_for_relation(self, relation, how_many):
result = []
for i in range(how_many):
s = self.segment_with_occurrences_factory([self.john, self.london, self.roma])
result.append(s)
for le in s.get_evidences_for_relation(relation):
le.set_label(relation, self.solid_label, self.judge)
return result
def test_asking_neighbor_when_nothing_is_labeled_returns_None(self):
segm = TextSegmentFactory()
self.assertIsNone(self.r_lives_in.labeled_neighbor(segm, self.judge))
def test_labeled_evidences_for_other_relations_doesnt_affect(self):
segm = TextSegmentFactory()
self.create_labeled_segments_for_relation(self.r_father_of, 5)
self.assertIsNone(self.r_lives_in.labeled_neighbor(segm, self.judge))
def test_asking_previous_returns_low_closest_segment_with_labeled_evidences(self):
r = self.r_lives_in
segments = self.create_labeled_segments_for_relation(r, 5)
reference = segments[2] # the one in the middle
prev_id = r.labeled_neighbor(reference, self.judge, back=True)
self.assertEqual(prev_id, segments[1].id)
# But if that had no labeled evidences...
segments[1].evidence_relations.all().delete()
prev_id = r.labeled_neighbor(reference, self.judge, back=True)
self.assertEqual(prev_id, segments[0].id)
def test_segments_with_all_empty_answers_are_excluded(self):
# Because they have zero actual labels
r = self.r_lives_in
segments = self.create_labeled_segments_for_relation(r, 5)
reference = segments[2] # the one in the middle
seg_1_evidences = list(segments[1].get_evidences_for_relation(r))
assert len(seg_1_evidences) > 1
seg_1_evidences[0].set_label(r, None, judge=self.judge)
# some none, not all, still found
self.assertEqual(
segments[1].id,
r.labeled_neighbor(reference, self.judge, back=True)
)
for le in seg_1_evidences:
le.set_label(r, None, judge=self.judge)
# all none, not found
self.assertNotEqual(
segments[1].id,
r.labeled_neighbor(reference, self.judge, back=True)
)
self.assertEqual(segments[0].id,
r.labeled_neighbor(reference, self.judge, back=True))
def test_all_labels_empty_for_this_relation_but_filled_for_other_still_omitted(self):
r = self.r_lives_in
segments = self.create_labeled_segments_for_relation(r, 5)
reference = segments[2] # the one in the middle
for le in segments[1].get_evidences_for_relation(r):
le.set_label(r, None, judge=self.judge)
        # all None for relation "r_lives_in"; shall not be found
for le in segments[1].get_evidences_for_relation(self.r_father_of):
le.set_label(r, self.solid_label, self.judge)
self.assertNotEqual(
segments[1].id,
r.labeled_neighbor(reference, self.judge, back=True)
)
def test_asking_next_returns_high_closest_segment_with_labeled_evidences(self):
r = self.r_lives_in
segments = self.create_labeled_segments_for_relation(r, 5)
reference = segments[2] # the one in the middle
next_id = r.labeled_neighbor(reference, self.judge, back=False)
self.assertEqual(next_id, segments[3].id)
# But if that had no labeled evidences...
segments[3].evidence_relations.all().delete()
next_id = r.labeled_neighbor(reference, self.judge, back=False)
self.assertEqual(next_id, segments[4].id)
def test_asking_for_neighbor_of_unlabeled_segment_returns_last_available(self):
r = self.r_lives_in
segments = self.create_labeled_segments_for_relation(r, 5)
s = self.segment_with_occurrences_factory()
expected = segments[-1].id
self.assertEqual(expected, r.labeled_neighbor(s, self.judge, back=True))
self.assertEqual(expected, r.labeled_neighbor(s, self.judge, back=False))
def test_delete_a_label_is_the_same_as_settings_as_none(self):
r = self.r_lives_in
segments = self.create_labeled_segments_for_relation(r, 5)
reference = segments[2] # the one in the middle
seg_1_evidences = list(segments[1].get_evidences_for_relation(r))
assert len(seg_1_evidences) > 1
label_obj = seg_1_evidences[0].labels.get(judge=self.judge)
label_obj.delete()
# deleted just one, not all, still found
self.assertEqual(
segments[1].id,
r.labeled_neighbor(reference, self.judge, back=True)
)
for le in seg_1_evidences[1:]:
label_obj = le.labels.get(judge=self.judge)
label_obj.delete()
# delete all, not found
self.assertNotEqual(
segments[1].id,
r.labeled_neighbor(reference, self.judge, back=True)
)
self.assertEqual(
segments[0].id,
r.labeled_neighbor(reference, self.judge, back=True)
)
class TestNavigateLabeledDocuments(BaseTestReferenceBuilding):
judge = "iepy"
def create_labeled_documents_for_relation(self, relation, how_many):
result = []
for i in range(how_many):
s = self.segment_with_occurrences_factory(
[self.john, self.london, self.roma],
document=IEDocFactory()
)
result.append(s)
for le in s.get_evidences_for_relation(relation):
le.set_label(relation, self.solid_label, self.judge)
return list(set([x.document for x in result]))
def test_asking_previous_returns_low_closest_document_with_labeled_evidences(self):
r = self.r_lives_in
documents = self.create_labeled_documents_for_relation(r, 5)
reference = documents[2] # the one in the middle
prev_id = r.labeled_neighbor(reference, self.judge, back=True)
self.assertEqual(prev_id, documents[1].id)
# But if that had no labeled evidences...
for segment in documents[1].segments.all():
segment.evidence_relations.all().delete()
prev_id = r.labeled_neighbor(reference, self.judge, back=True)
self.assertEqual(prev_id, documents[0].id)
class TestReferenceNextDocumentToLabel(BaseTestReferenceBuilding):
judge = 'someone'
def setUp(self):
super().setUp()
self.relation = self.r_lives_in
self.eo1, self.eo2 = self.john, self.roma
patcher = mock.patch.object(self.relation, 'get_next_segment_to_label')
self.mock_next_segment = patcher.start()
self.addCleanup(patcher.stop)
self.mock_next_segment.return_value = None
def test_if_no_segment_returned_then_no_document_returned(self):
self.assertEqual(self.relation.get_next_document_to_label(self.judge), None)
self.mock_next_segment.assert_called_once_with(self.judge)
def test_if_segment_returned_then_its_document_is_returned(self):
s = self.segment_with_occurrences_factory([self.eo1, self.eo2])
self.mock_next_segment.return_value = s
self.assertEqual(self.relation.get_next_document_to_label(self.judge), s.document)
self.mock_next_segment.assert_called_once_with(self.judge)
| bsd-3-clause | 649915c17297402510d045b076710ed6 | 46.915119 | 98 | 0.654728 | 3.3923 | false | true | false | false |
machinalis/iepy | iepy/webui/corpus/migrations/0011_data_migration_moving_relation_from_candiates_to_labels.py | 2 | 2630 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import logging
from django.db import models, migrations
logging.basicConfig(format="%(asctime)-15s %(message)s")
logger = logging.getLogger(__file__)
logger.setLevel(logging.INFO)
def get_key(evidence):
return (
evidence.left_entity_occurrence_id,
evidence.right_entity_occurrence_id,
evidence.segment_id,
)
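The migration below keeps the first candidate seen per key and marks later ones for deletion; a pure-Python miniature of that bookkeeping, with hypothetical stand-in records instead of Django model instances:

```python
# each record stands in for an EvidenceCandidate; the key mirrors get_key()
candidates = [
    {"id": 1, "left": 10, "right": 20, "segment": 5},
    {"id": 2, "left": 10, "right": 20, "segment": 5},   # duplicate of id 1
    {"id": 3, "left": 11, "right": 21, "segment": 5},
]
live, to_delete = {}, []
for c in candidates:
    key = (c["left"], c["right"], c["segment"])
    if key in live:
        to_delete.append(c["id"])   # labels would be re-pointed at live[key]
    else:
        live[key] = c
assert to_delete == [2]
assert sorted(v["id"] for v in live.values()) == [1, 3]
```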
def move_relation_to_labels(apps, schema_editor):
EvidenceCandidate = apps.get_model('corpus', 'EvidenceCandidate')
candidates_that_live = {}
labeled_evidences = EvidenceCandidate.objects.filter(labels__isnull=False)
labeled_evidences = labeled_evidences.prefetch_related('labels')
labeled_evidences = labeled_evidences.select_related('relation')
total = labeled_evidences.count()
candidates_to_delete = []
for i, candidate_to_check in enumerate(labeled_evidences):
if i % 1000 == 0:
logger.info("Checking {} out of {}".format(i, total))
key = get_key(candidate_to_check)
if key in candidates_that_live:
live_candidate = candidates_that_live.get(key)
candidate_to_check.labels.all().update(evidence_candidate=live_candidate,
relation=candidate_to_check.relation)
candidates_to_delete.append(candidate_to_check.id)
else:
candidates_that_live[key] = candidate_to_check
# Set the relation of every label of the candidate
candidate_to_check.labels.all().update(relation=candidate_to_check.relation)
not_labeled_evidences = EvidenceCandidate.objects.filter(labels__isnull=True)
not_labeled_evidences = not_labeled_evidences.values_list(
'left_entity_occurrence_id', 'right_entity_occurrence_id', 'segment_id', 'id')
total = not_labeled_evidences.count()
logger.info("Needing to check {} unlabeled candidate evidences".format(total))
keys_taken = set(candidates_that_live.keys())
for i, (leoid, reoid, sid, c_id) in enumerate(not_labeled_evidences.iterator()):
key = (leoid, reoid, sid)
if i % 1000 == 0:
logger.info("Checking {} out of {}".format(i, total))
if key in keys_taken:
candidates_to_delete.append(c_id)
else:
keys_taken.add(key)
EvidenceCandidate.objects.filter(pk__in=candidates_to_delete).all().delete()
class Migration(migrations.Migration):
dependencies = [
('corpus', '0010_auto_20150219_1752'),
]
operations = [
migrations.RunPython(move_relation_to_labels),
]
| bsd-3-clause | 6152d92bb4a27e9777f939d67ca68961 | 35.027397 | 88 | 0.651711 | 3.587995 | false | false | false | false |
machinalis/iepy | iepy/preprocess/corenlp.py | 1 | 8725 | import subprocess
import xmltodict
import os
import sys
import logging
import stat
from functools import lru_cache
import iepy
from iepy.utils import DIRS, unzip_from_url
logger = logging.getLogger(__name__)
def detect_java_version():
java_cmd = os.getenv('JAVAHOME')
if not java_cmd:
print('Environment variable JAVAHOME not defined.')
sys.exit(-1)
here = os.path.dirname(os.path.realpath(__file__))
jar = os.path.join(here, 'utils', 'get-java-version.jar')
    # discard stderr: stderr=PIPE with check_output can deadlock, since nothing reads the pipe
    jversion = subprocess.check_output([java_cmd, "-jar", jar], stderr=subprocess.DEVNULL)
    return int(jversion.strip())
JAVA_VERSION = detect_java_version()
_STANFORD_BASE_URL = "http://nlp.stanford.edu/software/"
if JAVA_VERSION < 8:
# Stanford Core NLP 3.4.1 - Last version to support Java 6 and Java 7
    # Unfortunately, the public release name ("version") of Stanford's builds
    # isn't used in their download urls. So, 3.4.1 is "stanford-corenlp-full-2014-08-27"
_CORENLP_VERSION = "stanford-corenlp-full-2014-08-27"
DOWNLOAD_URL = _STANFORD_BASE_URL + _CORENLP_VERSION + ".zip"
DOWNLOAD_URL_ES = _STANFORD_BASE_URL + 'stanford-spanish-corenlp-2014-08-26-models.jar'
DOWNLOAD_URL_DE = _STANFORD_BASE_URL + 'stanford-german-2016-01-19-models.jar'
_FOLDER_PATH = os.path.join(DIRS.user_data_dir, _CORENLP_VERSION)
COMMAND_PATH = os.path.join(_FOLDER_PATH, "corenlp.sh")
else:
# Stanford Core NLP 3.5.2
_CORENLP_VERSION = "stanford-corenlp-full-2015-04-20"
DOWNLOAD_URL_ES = _STANFORD_BASE_URL + 'stanford-spanish-corenlp-2015-01-08-models.jar'
DOWNLOAD_URL_DE = _STANFORD_BASE_URL + 'stanford-german-2016-01-19-models.jar'
DOWNLOAD_URL = _STANFORD_BASE_URL + _CORENLP_VERSION + ".zip"
_FOLDER_PATH = os.path.join(DIRS.user_data_dir, _CORENLP_VERSION)
COMMAND_PATH = os.path.join(_FOLDER_PATH, "corenlp.sh")
@lru_cache(maxsize=1)
def get_analizer(*args, **kwargs):
logger.info("Loading StanfordCoreNLP...")
return StanfordCoreNLP(*args, **kwargs)
class StanfordCoreNLP:
CMD_ARGS = "-outputFormat xml -threads 4"
PROMPT = b"\nNLP> "
def __init__(self, tokenize_with_whitespace=False, gazettes_filepath=None):
cmd_args = self.command_args(tokenize_with_whitespace, gazettes_filepath)
os.chdir(_FOLDER_PATH)
self.corenlp_cmd = [COMMAND_PATH] + cmd_args
self._start_proc()
def _start_proc(self):
self.proc = subprocess.Popen(
self.corenlp_cmd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=_FOLDER_PATH
)
self.output = self.iter_output_segments()
self.receive() # Wait until the prompt is ready
def command_args(self, tokenize_with_whitespace, gazettes_filepath):
annotators = ["tokenize", "ssplit", "pos", "lemma", "ner", "parse", "dcoref"]
cmd_args = self.CMD_ARGS[:]
if tokenize_with_whitespace:
cmd_args += " -tokenize.whitespace=true"
if gazettes_filepath:
annotators.insert(annotators.index("ner") + 1, "regexner")
cmd_args += " -regexner.mapping {}".format(gazettes_filepath)
tkn_opts = self._tokenizer_options()
if tkn_opts:
cmd_args += " " + tkn_opts
lang = iepy.instance.settings.IEPY_LANG
edu_mods = "edu/stanford/nlp/models"
if lang == 'es':
annotators.remove('dcoref') # not supported for spanish on Stanford 3.4.1
cmd_args += " -tokenize.language es"
cmd_args += " -pos.model %s/pos-tagger/spanish/spanish-distsim.tagger" % edu_mods
cmd_args += " -ner.model %s/ner/spanish.ancora.distsim.s512.crf.ser.gz" % edu_mods
cmd_args += " -parse.model %s/lexparser/spanishPCFG.ser.gz" % edu_mods
if lang == 'de':
annotators.remove('dcoref') # not supported for german on Stanford 3.4.1
cmd_args += " -tokenize.language de"
cmd_args += " -pos.model %s/pos-tagger/german/german-dewac.tagger" % edu_mods
cmd_args += " -ner.model %s/ner/german.dewac_175m_600.crf.ser.gz" % edu_mods
cmd_args += " -parse.model %s/lexparser/germanPCFG.ser.gz" % edu_mods
cmd_args += " -annotators {}".format(",".join(annotators))
return cmd_args.split()
def _tokenizer_options(self):
"""As stated in
http://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/process/PTBTokenizer.html
there are several tokenizer options that can be changed.
We'll only send to command line those that differ from the Stanford default.
"""
extra_keys = ['ptb3Escaping']
defaults = {
'invertible': False,
'tokenizeNLs': False,
'americanize': True,
'normalizeSpace': True,
'normalizeAmpersandEntity': True,
'normalizeCurrency': True,
'normalizeFractions': True,
'normalizeParentheses': True,
'normalizeOtherBrackets': True,
'asciiQuotes': False,
'latexQuotes': True,
'unicodeQuotes': False,
'ptb3Ellipsis': True,
'unicodeEllipsis': False,
'ptb3Dashes': True,
'keepAssimilations': True,
'escapeForwardSlashAsterisk': True,
'untokenizable': "firstDelete",
'strictTreebank3': False
}
allowed_keys = set(defaults.keys()).union(extra_keys)
customizations = getattr(iepy.instance.settings, 'CORENLP_TKN_OPTS', {})
opts = []
for k, v in customizations.items():
if k not in allowed_keys:
raise ValueError('Invalid key "%s". Valid options are %s' % (k, allowed_keys))
if k in defaults and defaults[k] == v:
# valid option, but it's the defaults, so no need to provide it.
continue
if isinstance(v, bool):
v = ("%s" % v).lower()
opts.append("%s=%s" % (k, v))
if opts:
return '-tokenize.options "{}"'.format(','.join(opts))
def iter_output_segments(self):
while True:
buf = b""
while self.PROMPT not in buf:
buf += self.proc.stdout.read1(1024)
if self.proc.poll() == 1:
logger.error("Error running '{}'".format(" ".join(self.corenlp_cmd)))
logger.error("Output was: '{}'".format(buf))
sys.exit(1)
segment, _, buf = buf.partition(self.PROMPT)
yield segment.decode("utf8")
def receive(self):
return next(self.output)
def send(self, data):
data = data.replace("\n", " ") + "\n"
self.proc.stdin.write(data.encode("utf8"))
self.proc.stdin.flush()
def quit(self):
self.proc.stdin.write("q\n".encode("utf8"))
self.proc.stdin.flush()
@lru_cache(maxsize=1)
def analyse(self, text):
self.send(text)
text = self.receive()
i = text.index("<?xml version")
text = text[i:]
return xmltodict.parse(text)["root"]["document"]
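# Minimal self-contained sketch of the option-diffing done in
# _tokenizer_options() above: only settings that differ from the defaults are
# rendered for the command line, and booleans become "true"/"false". The
# option names used in the test are real tokenizer keys; the dicts passed in
# here are illustrative.
def _diff_options(defaults, customizations):
    opts = []
    for key, value in customizations.items():
        if key in defaults and defaults[key] == value:
            continue  # same as the default, no need to pass it
        if isinstance(value, bool):
            value = str(value).lower()  # render booleans as true/false
        opts.append("%s=%s" % (key, value))
    return ",".join(opts)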
def download(lang='en'):
base = os.path.dirname(COMMAND_PATH)
if os.path.isfile(COMMAND_PATH):
print("Stanford CoreNLP is already downloaded at {}.".format(base))
else:
print("Downloading Stanford CoreNLP...")
unzip_from_url(DOWNLOAD_URL, DIRS.user_data_dir)
# Zip acquired. Make sure right Java is used, and file is executable
for directory in os.listdir(DIRS.user_data_dir):
if directory.startswith("stanford-corenlp-full"):
stanford_directory = os.path.join(DIRS.user_data_dir, directory)
if os.path.isdir(stanford_directory):
runner_path = os.path.join(stanford_directory, "corenlp.sh")
st = os.stat(runner_path)
_content = open(runner_path).read()
_content = _content.replace('java', '$JAVAHOME')
with open(runner_path, 'w') as runner_file:
runner_file.write(_content)
os.chmod(runner_path, st.st_mode | stat.S_IEXEC)
break
# Download extra data for specific language
download_urls = dict(es=DOWNLOAD_URL_ES, de=DOWNLOAD_URL_DE)
if lang.lower() in download_urls.keys():
print("Downloading Stanford CoreNLP extra data for lang '{}'...".format(lang))
unzip_from_url(download_urls[lang.lower()], _FOLDER_PATH)
elif lang.lower() != 'en':
print("There are no extra data to download for lang '{}'.".format(lang))
| bsd-3-clause | 0d5eb377392ede30dc4c62f977b30d40 | 38.659091 | 94 | 0.597479 | 3.41621 | false | false | false | false |
machinalis/iepy | iepy/__init__.py | 2 | 3647 | import os
import sys
from importlib import import_module
import django
from django.conf import settings
# Version number reading ...
fname = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'version.txt')
with open(fname, encoding='utf-8') as filehandler:
__version__ = filehandler.read().strip().replace("\n", "")
del fname
instance = None # instance reference will be stored here
def setup(fuzzy_path=None, _safe_mode=False):
"""
    Configure IEPY internals.
Reads IEPY instance configuration if any path provided.
Detects out of dated instances.
Returns the absolute path to the IEPY instance if provided, None if not.
"""
    # Prevent nosetests from messing with this
if not isinstance(fuzzy_path, (type(None), str)):
        # nosetests is grabbing this function because it's named "setup".
return
if not settings.configured:
if fuzzy_path is None:
if not os.getenv('DJANGO_SETTINGS_MODULE'):
os.environ['DJANGO_SETTINGS_MODULE'] = 'iepy.webui.webui.settings'
result = None
else:
path, project_name, old = _actual_path(fuzzy_path)
sys.path.insert(0, path)
if old:
django_settings_module = "{0}_settings".format(project_name)
sys.path.insert(0, os.path.join(path, project_name))
else:
django_settings_module = "{0}.settings".format(project_name)
os.environ['DJANGO_SETTINGS_MODULE'] = django_settings_module
result = os.path.join(path, project_name)
import_instance(project_name)
django.setup()
if not _safe_mode and settings.IEPY_VERSION != __version__:
sys.exit(
'Instance version is {} and current IEPY installation is {}.\n'
'Run iepy --upgrade on the instance.'.format(settings.IEPY_VERSION,
__version__)
)
return result
def import_instance(project_name):
"""
Imports the project_name instance and stores it
on the global variable `instance`.
"""
global instance
instance = import_module(project_name)
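# Self-contained sketch of the walk-up search performed by _actual_path()
# below: starting anywhere inside an instance, climb parent directories until
# a folder containing settings.py is found. The helper name is hypothetical
# and the demo uses only a throwaway temp tree.
def _walk_up_for_settings(start):
    import os
    path = os.path.abspath(start)
    while True:
        if os.path.exists(os.path.join(path, "settings.py")):
            return path
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root without a match
            return None
        path = parent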
def _actual_path(fuzzy_path):
"""
Given the fuzzy_path path, walks-up until it finds a folder containing a iepy-instance.
Returns the path where the folder is contained, the folder name and a boolean to indicate
if its an instance older than 0.9.2 where the settings file was different.
"""
def _find_settings_file(folder_path):
folder_name = os.path.basename(folder_path)
expected_file = os.path.join(folder_path, "settings.py")
old_settings_file = os.path.join(
folder_path, "{}_settings.py".format(folder_name)
)
if os.path.exists(expected_file):
return expected_file
elif os.path.exists(old_settings_file):
return old_settings_file
# first, make sure we are handling an absolute path
original = fuzzy_path # used for debug
fuzzy_path = os.path.abspath(fuzzy_path)
while True:
settings_filepath = _find_settings_file(fuzzy_path)
if settings_filepath is not None:
old = True if settings_filepath.endswith("_settings.py") else False
return os.path.dirname(fuzzy_path), os.path.basename(fuzzy_path), old
else:
parent = os.path.dirname(fuzzy_path)
if parent == fuzzy_path:
raise ValueError("There's no IEPY instance on the provided path {}".format(original))
fuzzy_path = parent
| bsd-3-clause | 229c070f99df3baee2a7f331ca3f1252 | 36.214286 | 101 | 0.618042 | 4.125566 | false | false | false | false |
machinalis/iepy | setup.py | 1 | 3267 | from setuptools import setup, find_packages # Always prefer setuptools over distutils
from os import path
import sys
assert sys.version_info >= (3, 4, 0), "Python 3.4 or newer is required"
HERE = path.abspath(path.dirname(__file__))
# Get the long description from the relevant file
with open(path.join(HERE, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
with open(path.join(HERE, 'iepy', 'version.txt'), encoding='utf-8') as f:
iepy_version = f.read().strip()
base_reqs = """nltk>=3.2.1
numpy>=1.8.0
scipy>=0.13.3
scikit-learn==0.15.2
REfO==0.13
docopt==0.6.1
future==0.11.4
appdirs==1.2.0
wget==2.0
colorama==0.2.7
featureforge>=0.1.5
Django==1.8.14
django-relatives==0.3.1
django-relatedadminwidget==0.0.3
six>=1.9.0
django-extra-views==0.7.1
jsonfield==1.0.0
django-angular==0.7.8
nose>=1.3.0
factory-boy==2.4.1
xmltodict==0.8.6""".splitlines()
setup(
name='iepy',
version=iepy_version,
zip_safe=False,
description='Information Extraction framework in Python',
long_description=long_description,
url='https://github.com/machinalis/iepy',
# Author details
    author=(
        "Rafael Carrascosa, Javier Mansilla, Gonzalo García Berrotarán, "
        "Daniel Moisset, Franco M. Luque"
    ),
# Choose your license
license='BSD',
classifiers=[
# How mature is this project? Common values are
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
'Development Status :: 5 - Production/Stable',
# Indicate who your project is intended for
'Intended Audience :: Developers',
'Intended Audience :: Science/Research',
'Intended Audience :: Information Technology',
# Pick your license as you wish (should match "license" above)
'License :: OSI Approved :: BSD License',
# Specify the Python versions you support here. In particular, ensure
# that you indicate whether you support Python 2, Python 3 or both.
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3 :: Only',
],
# What does your project relate to?
keywords='information extraction relation detection',
# You can just specify the packages manually here if your project is
# simple. Or you can use find_packages().
packages=find_packages(exclude=['docs', 'tests*', 'scripts']),
include_package_data=True,
# List run-time dependencies here. These will be installed by pip when your
# project is installed. For an analysis of "install_requires" vs pip's
# requirements files see:
# https://packaging.python.org/en/latest/technical.html#install-requires-vs-requirements-files
install_requires=base_reqs,
# To provide executable scripts, use entry points in preference to the
# "scripts" keyword. Entry points provide cross-platform support and allow
# pip to create the appropriate form of executable for the target platform.
entry_points={
'console_scripts': [
'iepy=iepy.instantiation.command_line:execute_from_command_line',
],
},
)
| bsd-3-clause | b64164c78822b1e7f9da6a72175a0ddb | 30.699029 | 98 | 0.662787 | 3.583974 | false | false | false | false |
machinalis/iepy | iepy/metrics.py | 2 | 1212 | # -*- coding: utf-8 -*-
import time
def result_dict_from_predictions(evidences, real_labels, predictions):
    """Build a confusion-matrix summary (counts, precision, recall, F1 and
    accuracy) for a set of evidence predictions."""
correct = []
incorrect = []
tp, fp, tn, fn = 0.0, 0.0, 0.0, 0.0
for evidence, real, predicted in zip(evidences, real_labels, predictions):
if real == predicted:
correct.append(evidence.id)
if real:
tp += 1
else:
tn += 1
else:
incorrect.append(evidence.id)
if predicted:
fp += 1
else:
fn += 1
# Make stats
try:
precision = tp / (tp + fp)
except ZeroDivisionError:
precision = 1.0
try:
recall = tp / (tp + fn)
except ZeroDivisionError:
recall = 1.0
try:
f1 = 2 * (precision * recall) / (precision + recall)
except ZeroDivisionError:
f1 = 0.0
result = {
"true_positives": tp,
"false_positives": fp,
"true_negatives": tn,
"false_negatives": fn,
"accuracy": (tp + tn) / len(evidences),
"precision": precision,
"recall": recall,
"f1": f1,
"end_time": time.time()
}
return result
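# Worked example of the bookkeeping above: with gold labels and predictions
# zipped pairwise, each pair lands in exactly one confusion-matrix cell. The
# helper below is a toy reduction of that logic (real callers pass evidence
# objects carrying an .id attribute, which is omitted here).
def _confusion_counts(real_labels, predictions):
    tp = fp = tn = fn = 0
    for real, predicted in zip(real_labels, predictions):
        if real and predicted:
            tp += 1
        elif not real and predicted:
            fp += 1
        elif real and not predicted:
            fn += 1
        else:
            tn += 1
    return tp, fp, tn, fn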
| bsd-3-clause | 3a62a7fa2ad3c0298836d8a4bc582c68 | 24.25 | 78 | 0.491749 | 3.775701 | false | false | false | false |
machinalis/iepy | iepy/webui/corpus/migrations/0003_auto_20140922_1547.py | 2 | 1063 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('corpus', '0002_auto_20140918_1733'),
]
operations = [
migrations.AlterModelOptions(
name='labeledrelationevidence',
options={'ordering': ['segment_id', 'relation_id', 'left_entity_occurrence', 'right_entity_occurrence']},
),
migrations.AlterField(
model_name='labeledrelationevidence',
name='label',
field=models.CharField(choices=[('NO', 'No relation present'), ('YE', 'Yes, relation is present'), ('DK', "Don't know if the relation is present"), ('SK', 'Skipped labeling of this evidence'), ('NS', 'Evidence is nonsense')], max_length=2, default='SK', null=True),
),
migrations.AlterUniqueTogether(
name='labeledrelationevidence',
unique_together=set([('left_entity_occurrence', 'right_entity_occurrence', 'relation', 'segment')]),
),
]
| bsd-3-clause | 31a170afae3b9c16f4e8519748a6226a | 38.37037 | 277 | 0.614299 | 4.072797 | false | false | false | false |
biolink/ontobio | bin/materialize.py | 1 | 6455 | import click
import json
import os
import yaml
import requests
import gzip
import urllib
import shutil
import re
import glob
import logging
import copy
import yamldown
from functools import wraps
# from ontobio.util.user_agent import get_user_agent
from ontobio.ontol_factory import OntologyFactory
from ontobio import ontol
from ontobio.io.gafparser import GafParser
from ontobio.io.gpadparser import GpadParser
from ontobio.io.assocwriter import GafWriter
from ontobio.io.assocwriter import GpadWriter
from ontobio.io import assocparser
from ontobio.io import gafgpibridge
from ontobio.io import entitywriter
from ontobio.rdfgen import relations
from typing import Dict, Set
logger = logging.getLogger("INFER")
logger.setLevel(logging.WARNING)
MF = "GO:0003674"
ENABLES = "enables"
HAS_PART = "BFO:0000051"
__ancestors_cache = dict()
@click.group()
@click.option("--log", "-L", type=click.Path(exists=False))
def cli(log):
global logger
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)
logger.addHandler(console_handler)
if log:
click.echo("Setting up logging to {}".format(log))
logfile_handler = logging.FileHandler(log, mode="w")
logfile_handler.setLevel(logging.INFO)
logger.addHandler(logfile_handler)
logger.setLevel(logging.INFO)
@cli.command()
@click.option("--ontology", "-o", "ontology_path", type=click.Path(exists=True), required=True)
@click.option("--target", "-t", type=click.File("w"), required=True)
@click.option("--gaf", "-g", type=click.File("r"), required=True)
def infer(ontology_path, target, gaf):
ontology_graph = ontology(ontology_path)
writer = GafWriter(file=target)
assoc_generator = gafparser_generator(ontology_graph, gaf)
line_count = 0
for association in assoc_generator:
        if association["relation"]["id"] != ENABLES:
            # Skip all non-enables annotations
            continue
inferred_associations = materialize_inferences(ontology_graph, association)
if len(inferred_associations) > 0:
click.echo("Materialized {} associations".format(len(inferred_associations)))
for inferred in inferred_associations:
writer.write_assoc(inferred)
line_count += 1
if line_count % 100 == 0:
click.echo("Processed {} lines".format(line_count))
@cli.command()
@click.option("--ontology", "-o", "ontology_path", type=click.Path(exists=True), required=True)
@click.option("--relation", "-r", required=True)
@click.option("--allowed-trees", multiple=True, default=["biological_process", "molecular_function", "cellular_component"])
def termable(ontology_path, relation, allowed_trees):
ontology_graph = ontology(ontology_path)
accum = dict()
for term in ontology_graph.nodes():
if term.split(":")[0] != "GO":
continue
go_tree = [d["val"] for d in ontology_graph.node(term)["meta"]["basicPropertyValues"] if d["pred"] == "OIO:hasOBONamespace"]
if len(go_tree) > 0 and go_tree[0] not in allowed_trees:
continue
ns = neighbor_by_relation(ontology_graph, term, relation)
if len(ns) > 0:
accum[term] = ns
click.echo(json.dumps(accum, indent=4))
desc = []
for term in accum.keys():
if ontology_graph.children(term, relations=["subClassOf"]):
desc.append(term)
click.echo(desc)
def ontology(path) -> ontol.Ontology:
click.echo("Loading ontology from {}...".format(path))
return OntologyFactory().create(path, ignore_cache=True)
def ancestors(term: str, ontology: ontol.Ontology, cache) -> Set[str]:
click.echo("Computing ancestors for {}".format(term))
if term == MF:
click.echo("Found 0")
return set()
if term not in cache:
anc = set(ontology.ancestors(term, relations=["subClassOf"], reflexive=True))
cache[term] = anc
click.echo("Found {} (from adding to cache: {} terms added)".format(len(anc), len(cache)))
else:
anc = cache[term]
click.echo("Found {} (from cache)".format(len(anc)))
return anc
def gafparser_generator(ontology_graph: ontol.Ontology, gaf_file):
config = assocparser.AssocParserConfig(
ontology=ontology_graph,
)
parser = GafParser(config=config)
return parser.association_generator(gaf_file, skipheader=True)
def neighbor_by_relation(ontology_graph: ontol.Ontology, term, relation):
return ontology_graph.parents(term, relations=[relation])
def transform_relation(mf_annotation, new_mf, ontology_graph):
new_annotation = copy.deepcopy(mf_annotation)
new_annotation["object"]["id"] = new_mf
return new_annotation
def materialize_inferences(ontology_graph: ontol.Ontology, annotation):
materialized_annotations = [] #(gp, new_mf)
mf = annotation["object"]["id"]
gp = annotation["subject"]["id"]
global __ancestors_cache
mf_ancestors = ancestors(mf, ontology_graph, __ancestors_cache)
# if mf_ancestors:
# logger.info("For {term} \"{termdef}\":".format(term=mf, termdef=ontology_graph.label(mf)))
messages = []
for mf_anc in mf_ancestors:
has_part_mfs = neighbor_by_relation(ontology_graph, mf_anc, HAS_PART)
# if has_part_mfs:
# logger.info("\tHas Parent --> {parent} \"{parentdef}\"".format(parent=mf_anc, parentdef=ontology_graph.label(mf_anc)))
if has_part_mfs:
messages.append((gp, mf, mf_anc, has_part_mfs))
for new_mf in has_part_mfs:
# logger.info("\t\thas_part --> {part} \"{partdef}\"".format(part=new_mf, partdef=ontology_graph.label(new_mf)))
new_annotation = transform_relation(annotation, new_mf, ontology_graph)
materialized_annotations.append(new_annotation)
messages = [ message for message in messages if message[3] ] # Filter out empty has_parts
for message in messages:
logger.info("\nFor {gp} -> {term} \"{termdef}\":".format(gp=message[0], term=message[1], termdef=ontology_graph.label(message[1])))
logger.info("\tHas Parent --> {parent} \"{parentdef}\"".format(parent=message[1], parentdef=ontology_graph.label(message[1])))
for part in message[3]:
logger.info("\t\t has_part --> {part} \"{partdef}\"".format(part=part, partdef=ontology_graph.label(part)))
return materialized_annotations
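# Toy, dependency-free sketch of the inference rule implemented above: for an
# annotation to a molecular function, every subClassOf ancestor that has_part
# another function yields a new inferred annotation to that part. Plain dicts
# stand in for the ontology graph, and the GO ids below are made up.
def _infer_parts(term, is_a, has_part):
    seen, stack, inferred = set(), [term], []
    while stack:  # reflexive subClassOf closure over the parent map
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        inferred.extend(has_part.get(current, []))
        stack.extend(is_a.get(current, []))
    return sorted(set(inferred))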
if __name__ == "__main__":
cli()
| bsd-3-clause | 90264a496bd3b81a07db2fcb892e2d2e | 33.704301 | 139 | 0.669404 | 3.476037 | false | false | false | false |
biolink/ontobio | ontobio/bin/phenolog.py | 1 | 4423 | #!/usr/bin/env python
"""
Command line wrapper to obographs library.
Example:
(venv) ~/repos/biolink-api(master) $ python obographs/bin/phenolog.py -vvv -r obo:mp -R go 'abnormal cardiovascular system physiology'
With background:
python obographs/bin/phenolog.py -b 'nervous system phenotype' -v -r cache/ontologies/monarch.json -R go 'abnormal nervous system morphology'
"""
import argparse
import networkx as nx
from networkx.algorithms.dag import ancestors, descendants
from ontobio.ontol_factory import OntologyFactory
from ontobio.assoc_factory import AssociationSetFactory
from ontobio.graph_io import GraphRenderer
from ontobio.slimmer import get_minimal_subgraph
import logging
import sys
def main():
"""
Phenologs
"""
    parser = argparse.ArgumentParser(
        description='Phenologs\n'
                    'By default, ontologies are cached locally and synced from a remote sparql endpoint',
        formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('-r', '--resource1', type=str, required=False,
help='Name of ontology1')
parser.add_argument('-R', '--resource2', type=str, required=False,
help='Name of ontology2')
parser.add_argument('-T', '--taxon', type=str, default='NCBITaxon:10090', required=False,
help='NCBITaxon ID')
parser.add_argument('-s', '--search', type=str, default='', required=False,
help='Search type. p=partial, r=regex')
parser.add_argument('-b', '--background', type=str, default=None, required=False,
help='Class to use for background')
parser.add_argument('-p', '--pthreshold', type=float, default=0.05, required=False,
help='P-value threshold')
parser.add_argument('-v', '--verbosity', default=0, action='count',
help='Increase output verbosity')
parser.add_argument('ids',nargs='*')
args = parser.parse_args()
if args.verbosity >= 2:
logging.basicConfig(level=logging.DEBUG)
if args.verbosity == 1:
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info("Welcome!")
ofactory = OntologyFactory()
afactory = AssociationSetFactory()
handle = args.resource1
ont1 = ofactory.create(args.resource1)
ont2 = ofactory.create(args.resource2)
logger.info("onts: {} {}".format(ont1, ont2))
searchp = args.search
category = 'gene'
aset1 = afactory.create(ontology=ont1,
subject_category=category,
object_category='phenotype',
taxon=args.taxon)
aset2 = afactory.create(ontology=ont2,
subject_category=category,
object_category='function',
taxon=args.taxon)
bg_cls = None
if args.background is not None:
bg_ids = resolve(ont1,[args.background],searchp)
if len(bg_ids) == 0:
            logger.error("Cannot resolve: '{}' using {} in {}".format(args.background, searchp, ont1))
sys.exit(1)
elif len(bg_ids) > 1:
logger.error("Multiple matches: '{}' using {} MATCHES={}".format(args.background, searchp,bg_ids))
sys.exit(1)
else:
            [bg_cls] = bg_ids
            logger.info("Background: {}".format(bg_cls))
for id in resolve(ont1,args.ids,searchp):
sample = aset1.query([id],[])
print("Gene set class:{} Gene set: {}".format(id, sample))
bg = None
if bg_cls is not None:
bg = aset1.query([bg_cls],[])
print("BACKGROUND SUBJECTS: {}".format(bg))
rs = aset2.enrichment_test(sample, bg, threshold=args.pthreshold, labels=True)
print("RESULTS: {} < {}".format(len(rs), args.pthreshold))
for r in rs:
print(str(r))
def resolve(ont, names, searchp):
return ont.resolve_names(names,
is_remote = searchp.find('x') > -1,
is_partial_match = searchp.find('p') > -1,
is_regex = searchp.find('r') > -1)
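# The searchp string consumed above is a tiny flag language: 'x' = remote
# search, 'p' = partial match, 'r' = regex. A self-contained sketch of the
# decoding (the helper name is illustrative; membership tests are equivalent
# to the .find(...) > -1 checks used in resolve()):
def _decode_search_flags(searchp):
    return {
        "is_remote": "x" in searchp,
        "is_partial_match": "p" in searchp,
        "is_regex": "r" in searchp,
    }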
if __name__ == "__main__":
main()
| bsd-3-clause | afcc4ca6646d5e7c96809286cd9238b4 | 36.168067 | 141 | 0.570653 | 3.883231 | false | false | false | false |
biolink/ontobio | tests/test_phenosim_engine.py | 1 | 5390 | from ontobio.sim.phenosim_engine import PhenoSimEngine
from ontobio.sim.api.owlsim2 import OwlSim2Api
from ontobio.vocabulary.similarity import SimAlgorithm
from ontobio.model.similarity import IcStatistic
from unittest.mock import patch
import os
import json
MONDO_0008199 = ['HP:0000751', 'HP:0000738', 'HP:0000726']
def mock_resolve_nodes(id_list):
"""
Mock phenosim_engine _resolve_nodes_to_phenotypes
Replaces calls to scigraph and solr
"""
if id_list == ['HP:0002367', 'HP:0031466', 'HP:0007123']:
ret_val = id_list
elif id_list == ['MONDO:0008199']:
ret_val = MONDO_0008199
return ret_val
def mock_get_scigraph_nodes(id_list):
"""
Mock scigraph_util get_scigraph_nodes
"""
scigraph_desc_fh = os.path.join(os.path.dirname(__file__),
'resources/owlsim2/mock-scigraph-nodes.json')
ids = [iri.replace("http://purl.obolibrary.org/obo/HP_", "HP:") for iri in id_list]
scigraph_res = json.load(open(scigraph_desc_fh))
for node in scigraph_res['nodes']:
if node['id'] in ids:
yield node
def mock_compare(url, set_a, set_b):
# Load fake output from owlsim2 and mock compare
mock_compare_fh = os.path.join(os.path.dirname(__file__),
'resources/owlsim2/mock-owlsim-compare.json')
mock_compare = json.load(open(mock_compare_fh))
individuals_a = frozenset(MONDO_0008199)
    individuals_b = frozenset(['HP:0002367', 'HP:0031466', 'HP:0007123'])
if set_a == individuals_a and set_b == individuals_b:
return mock_compare
else:
return False
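# Self-contained illustration of the side_effect mocking style used by the
# tests below: a callable is swapped in for the real method, so the test
# never touches the network-backed implementation. The api object here is a
# stand-in, not the real OwlSim2Api.
def _mocking_demo():
    from unittest.mock import MagicMock
    api = MagicMock()
    api.compare.side_effect = lambda a, b: {"match": a == b}
    return api.compare({"x"}, {"x"})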
class TestPhenoSimEngine():
"""
Functional test of ontobio.sim.phenosim_engine.PhenoSimEngine
using mock return values from owlsim2, scigraph, and solr assocs
"""
@classmethod
def setup_class(self):
patch('ontobio.sim.api.owlsim2.get_owlsim_stats', return_value=(None, None)).start()
self.resolve_mock = patch.object(PhenoSimEngine, '_resolve_nodes_to_phenotypes',
side_effect=mock_resolve_nodes)
self.mock_scigraph = patch('ontobio.util.scigraph_util.get_scigraph_nodes',
side_effect=mock_get_scigraph_nodes)
self.owlsim2_api = OwlSim2Api()
self.owlsim2_api.statistics = IcStatistic(
mean_mean_ic=6.82480,
mean_sum_ic=120.89767,
mean_cls=15.47425,
max_max_ic=16.16108,
max_sum_ic=6746.96160,
individual_count=65309,
mean_max_ic=9.51535
)
self.pheno_sim = PhenoSimEngine(self.owlsim2_api)
self.resolve_mock.start()
self.mock_scigraph.start()
@classmethod
def teardown_class(self):
self.owlsim2_api = None
self.pheno_sim = None
self.resolve_mock.stop()
self.mock_scigraph.stop()
def test_sim_search(self):
# Load fake output from owlsim2 and mock search_by_attribute_set
mock_search_fh = os.path.join(os.path.dirname(__file__),
'resources/owlsim2/mock-owlsim-search.json')
mock_search = json.load(open(mock_search_fh))
patch('ontobio.sim.api.owlsim2.search_by_attribute_set',
return_value=mock_search).start()
expected_fh = os.path.join(os.path.dirname(__file__),
'resources/owlsim2/mock-sim-search.json')
expected_sim_results = json.load(open(expected_fh))
classes = ['HP:0002367', 'HP:0031466', 'HP:0007123']
search_results = self.pheno_sim.search(classes)
results = json.loads(
json.dumps(search_results,
default=lambda obj: getattr(obj, '__dict__', str(obj))
)
)
assert expected_sim_results == results
def test_sim_compare(self):
patch('ontobio.sim.api.owlsim2.compare_attribute_sets',
side_effect=mock_compare).start()
expected_fh = os.path.join(os.path.dirname(__file__),
'resources/owlsim2/mock-sim-compare.json')
expected_sim_results = json.load(open(expected_fh))
individuals_a = ['MONDO:0008199']
individuals_b = [['HP:0002367', 'HP:0031466', 'HP:0007123']]
compare_results = self.pheno_sim.compare(
individuals_a, individuals_b, is_feature_set=False)
results = json.loads(
json.dumps(compare_results,
default=lambda obj: getattr(obj, '__dict__', str(obj))
)
)
assert expected_sim_results == results
def test_no_results(self):
"""
Make sure ontobio handles no results correctly
"""
# Load fake output from owlsim2 where no results are returned
mock_search_fh = os.path.join(os.path.dirname(__file__),
'resources/owlsim2/mock-owlsim-noresults.json')
mock_search = json.load(open(mock_search_fh))
patch('ontobio.sim.api.owlsim2.search_by_attribute_set',
return_value=mock_search).start()
classes = ['HP:0002367', 'HP:0031466', 'HP:0007123']
search_results = self.pheno_sim.search(classes, method=SimAlgorithm.SIM_GIC)
assert search_results.matches == []
| bsd-3-clause | b11d261d5555f15d70f329b7d0d580ec | 35.174497 | 93 | 0.598145 | 3.424396 | false | false | false | false |
biolink/ontobio | ontobio/slimmer.py | 1 | 2961 | import networkx as nx
import logging
logger = logging.getLogger(__name__)
def get_minimal_subgraph(g, nodes):
"""
given a set of nodes, extract a subgraph that excludes non-informative nodes - i.e.
those that are not MRCAs of pairs of existing nodes.
Note: no property chain reasoning is performed. As a result, edge labels are lost.
"""
logger.info("Slimming {} to {}".format(g,nodes))
# maps ancestor nodes to members of the focus node set they subsume
mm = {}
subnodes = set()
for n in nodes:
subnodes.add(n)
ancs = nx.ancestors(g, n)
ancs.add(n)
for a in ancs:
subnodes.add(a)
if a not in mm:
mm[a] = set()
mm[a].add(n)
# merge graph
egraph = nx.MultiDiGraph()
# TODO: ensure edge labels are preserved
for a, aset in mm.items():
for p in g.predecessors(a):
logger.info(" cmp {} -> {} // {} {}".format(len(aset),len(mm[p]), a, p))
if p in mm and len(aset) == len(mm[p]):
egraph.add_edge(p, a)
egraph.add_edge(a, p)
logger.info("will merge {} <-> {} (members identical)".format(p,a))
nmap = {}
leafmap = {}
disposable = set()
for cliq in nx.strongly_connected_components(egraph):
leaders = set()
leafs = set()
for n in cliq:
is_src = False
if n in nodes:
logger.info("Preserving: {} in {}".format(n,cliq))
leaders.add(n)
is_src = True
is_leaf = True
for p in g.successors(n):
if p in cliq:
is_leaf = False
if not(is_leaf or is_src):
disposable.add(n)
if is_leaf:
logger.info("Clique leaf: {} in {}".format(n,cliq))
leafs.add(n)
leader = None
if len(leaders) > 1:
logger.info("UHOH: {}".format(leaders))
if len(leaders) > 0:
leader = list(leaders)[0]
else:
leader = list(leafs)[0]
leafmap[n] = leafs
subg = g.subgraph(subnodes)
fg = remove_nodes(subg, disposable)
return fg
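# Dependency-free sketch of the first step above: map every ancestor to the
# set of focus nodes it subsumes; two connected nodes whose member sets
# coincide are candidates for merging. A plain parent-map dict stands in for
# the networkx graph, and the node names are made up.
def _members_by_ancestor(parents, focus):
    mm = {}
    for node in focus:
        stack = [node]
        while stack:  # reflexive ancestor closure over the parent map
            current = stack.pop()
            mm.setdefault(current, set()).add(node)
            stack.extend(parents.get(current, []))
    return mm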
def remove_nodes(g, rmnodes):
logger.info("Removing {} from {}".format(rmnodes,g))
newg = nx.MultiDiGraph()
for (n,nd) in g.nodes(data=True):
if n not in rmnodes:
newg.add_node(n, **nd)
parents = _traverse(g, set([n]), set(rmnodes), set())
for p in parents:
newg.add_edge(p,n,**{'pred':'subClassOf'})
return newg
def _traverse(g, nset, rmnodes, acc):
if len(nset) == 0:
return acc
n = nset.pop()
parents = set(g.predecessors(n))
acc = acc.union(parents - rmnodes)
nset = nset.union(parents.intersection(rmnodes))
return _traverse(g, nset, rmnodes, acc)
| bsd-3-clause | 9f4b36c01d61aedda6989c62e0753828 | 29.214286 | 87 | 0.515366 | 3.5 | false | false | false | false |
biolink/ontobio | ontobio/model/bbop_graph.py | 1 | 3211 | """
BBOP Graph class created in the original biolink-api
before ontobio was stripped out. It is still used
in scigraph-util.
TODO: Merge this with OBOGraph
"""
from typing import Dict
class BBOPGraph:
"""
BBOPGraph Graph object model
https://github.com/berkeleybop/bbop-graph
"""
    def __init__(self, obj: Dict = None):
        obj = obj or {}
        # per-instance index; a class-level dict would be shared by all graphs
        self.nodemap = {}
        self.nodes = []
        self.edges = []
if obj:
self.add_json_graph(obj)
def add_json_graph(self, obj):
for node in obj['nodes']:
self.add_node(Node(**node))
for edge in obj['edges']:
self.add_edge(Edge(edge))
def add_node(self, node):
self.nodemap[node.id] = node
self.nodes.append(node)
def add_edge(self, edge):
self.edges.append(edge)
def merge(self, graph):
for node in graph.nodes:
self.add_node(node)
for edge in graph.edges:
self.add_edge(edge)
def get_node(self, id):
return self.nodemap[id]
def get_lbl(self, id):
return self.nodemap[id].lbl
def get_root_nodes(self, relations):
roots = []
if relations is None:
relations = []
for node in self.nodes:
if len(self.get_outgoing_edges(node.id, relations)) == 0:
roots.append(node)
return roots
def get_leaf_nodes(self, relations):
    leaves = []
    if relations is None:
        relations = []
    for node in self.nodes:
        if len(self.get_incoming_edges(node.id, relations)) == 0:
            leaves.append(node)
    return leaves
def get_outgoing_edges(self, nid, relations):
el = []
if relations is None:
relations = []
for edge in self.edges:
if edge.sub == nid:
if len(relations) == 0 or edge.pred in relations:
el.append(edge)
return el
def get_incoming_edges(self, nid, relations=None):
    el = []
    if relations is None:
        relations = []
    for edge in self.edges:
if edge.obj == nid:
if len(relations) == 0 or edge.pred in relations:
el.append(edge)
return el
def as_dict(self):
return {
"nodes": [node.as_dict() for node in self.nodes],
"edges": self.edges
}
class Node:
def __init__(self, id, lbl=None, meta=None):
self.id = id
self.lbl = lbl
self.meta = Meta(meta)
def __str__(self):
return self.id + ' "' + str(self.lbl) + '"'
def as_dict(self):
return {
"id": self.id,
"lbl": self.lbl,
"meta": self.meta.pmap
}
class Edge:
def __init__(self, obj):
self.sub = obj['sub']
self.pred = obj['pred']
self.obj = obj['obj']
self.meta = obj['meta']
def __str__(self):
return self.sub + "-[" + self.pred + "]->" + self.obj
class Meta:
def __init__(self, obj):
    # meta may be absent; Node defaults it to None
    obj = obj or {}
    self.type_list = obj.get('types', [])
    self.category_list = obj.get('category', [])
    self.pmap = obj
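The bbop-graph JSON consumed by `BBOPGraph.add_json_graph` is a dict of `nodes` and `edges`; roots are nodes that never appear as an edge subject. A minimal stdlib sketch of that shape and the root test (illustrative only, not part of ontobio):

```python
graph = {
    "nodes": [{"id": "GO:1", "lbl": "root"}, {"id": "GO:2", "lbl": "child"}],
    "edges": [{"sub": "GO:2", "pred": "subClassOf", "obj": "GO:1", "meta": {}}],
}

def root_ids(g):
    """Roots have no outgoing edge, i.e. never occur as a 'sub'."""
    subjects = {e["sub"] for e in g["edges"]}
    return [n["id"] for n in g["nodes"] if n["id"] not in subjects]
```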
# ==== biolink/ontobio :: ontobio/sparql/sparql_ontology.py (bsd-3-clause) ====
"""
Classes for representing ontologies backed by a SPARQL endpoint
```
Ontology
RemoteSparqlOntology
EagerRemoteSparqlOntology
LazyRemoteSparqlOntology
```
"""
import networkx as nx
import logging
import ontobio.ontol
from ontobio.ontol import Ontology, Synonym, TextDefinition
from ontobio.sparql.sparql_ontol_utils import get_digraph, get_named_graph, get_xref_graph, run_sparql, fetchall_syns, fetchall_textdefs, fetchall_labels, fetchall_obs, OIO_SYNS
from prefixcommons.curie_util import contract_uri, expand_uri, get_prefixes
logger = logging.getLogger(__name__)
class RemoteSparqlOntology(Ontology):
"""
Local or remote ontology
"""
def extract_subset(self, subset):
"""
Find all nodes in a subset.
We assume the oboInOwl encoding of subsets, and subset IDs are IRIs
"""
# note subsets have an unusual encoding
query = """
prefix oboInOwl: <http://www.geneontology.org/formats/oboInOwl#>
SELECT ?c WHERE {{
GRAPH <{g}> {{
?c oboInOwl:inSubset ?s
FILTER regex(?s,'#{s}$','i')
}}
}}
""".format(s=subset, g=self.graph_name)
bindings = run_sparql(query)
return [r['c']['value'] for r in bindings]
def subsets(self):
"""
Find all subsets for an ontology
"""
# note subsets have an unusual encoding
query = """
prefix oboInOwl: <http://www.geneontology.org/formats/oboInOwl#>
SELECT DISTINCT ?s WHERE {{
GRAPH <{g}> {{
?c oboInOwl:inSubset ?s
}}
}}
""".format(g=self.graph_name)
bindings = run_sparql(query)
return [r['s']['value'] for r in bindings]
def text_definition(self, nid):
logger.info("lookup defs for {}".format(nid))
if self.all_text_definitions_cache is None:
self.all_text_definitions()
return super().text_definition(nid)
# Override
def all_text_definitions(self):
logger.debug("Fetching all textdefs...")
if self.all_text_definitions_cache is None:
vals = fetchall_textdefs(self.graph_name)
tds = [TextDefinition(c,v) for (c,v) in vals]
for td in tds:
self.add_text_definition(td)
self.all_text_definitions_cache = tds # TODO: check if still used
return self.all_text_definitions_cache
def is_obsolete(self, nid):
logger.info("lookup obs for {}".format(nid))
if self.all_obsoletes_cache is None:
self.all_obsoletes()
return super().is_obsolete(nid)
def all_obsoletes(self):
logger.debug("Fetching all obsoletes...")
if self.all_obsoletes_cache is None:
obsnodes = fetchall_obs(self.graph_name)
for n in obsnodes:
self.set_obsolete(n)
self.all_obsoletes_cache = obsnodes # TODO: check if still used
return self.all_obsoletes_cache
def synonyms(self, nid, **args):
logger.info("lookup syns for {}".format(nid))
if self.all_synonyms_cache is None:
self.all_synonyms()
return super().synonyms(nid, **args)
# Override
def all_synonyms(self, include_label=False):
logger.debug("Fetching all syns...")
# TODO: include_label in cache
if self.all_synonyms_cache is None:
syntups = fetchall_syns(self.graph_name)
syns = [Synonym(t[0],pred=t[1], val=t[2]) for t in syntups]
for syn in syns:
self.add_synonym(syn)
if include_label:
lsyns = [Synonym(x, pred='label', val=self.label(x)) for x in self.nodes()]
syns = syns + lsyns
self.all_synonyms_cache = syns # TODO: check if still used
return self.all_synonyms_cache
# Override
def subontology(self, nodes=None, **args):
# ensure caches populated
self.all_synonyms()
self.all_text_definitions()
return super().subontology(nodes, **args)
# Override
def resolve_names(self, names, is_remote=False, synonyms=False, **args):
logger.debug("resolving via {}".format(self))
if not is_remote:
# TODO: ensure synonyms present
return super().resolve_names(names, synonyms, **args)
else:
results = set()
for name in names:
results.update( self._search(name, 'rdfs:label', **args) )
if synonyms:
for pred in OIO_SYNS.values():
results.update( self._search(name, pred, **args) )
logger.info("REMOTE RESULTS="+str(results))
return list(results)
def _search(self, searchterm, pred, **args):
"""
Search for things using labels
"""
# TODO: DRY with sparql_ontol_utils
searchterm = searchterm.replace('%','.*')
namedGraph = get_named_graph(self.handle)
query = """
prefix oboInOwl: <http://www.geneontology.org/formats/oboInOwl#>
SELECT ?c WHERE {{
GRAPH <{g}> {{
?c {pred} ?l
FILTER regex(?l,'{s}','i')
}}
}}
""".format(pred=pred, s=searchterm, g=namedGraph)
bindings = run_sparql(query)
return [r['c']['value'] for r in bindings]
def sparql(self, select='*', body=None, inject_prefixes=None, single_column=False):
"""
Execute a SPARQL query.
The query is specified using `select` and `body` parameters.
The argument for the Named Graph is injected into the query.
The select parameter should be '*', a single var, or a list of vars (not prefixed with '?').
- If '*' is passed, then the result is a list of dicts, { $var: {value: $val } }
- If a list of vars is passed, then the result is a list of lists
- Unless single_column=True, in which case the results are a simple list of values from the first var
The inject_prefixes argument can be used to inject a list of prefixes - these are expanded
using the prefixcommons library
"""
if inject_prefixes is None:
inject_prefixes = []
namedGraph = get_named_graph(self.handle)
cols = []
select_val = None
if select is None or select=='*':
if not single_column:
cols=None
select_val='*'
else:
if isinstance(select, list):
    cols = select
else:
    cols = [select]
select_val = ", ".join(['?'+c for c in cols])
prefixes = ""
if inject_prefixes is not None:
plist = ["prefix {}: <{}> ".format(p,expand_uri(p+":")) for p in inject_prefixes if p != "" and p is not None]
prefixes = "\n".join(plist)
query = """
{prefixes}
SELECT {s} WHERE {{
GRAPH <{g}> {{
{b}
}}
}}
""".format(prefixes=prefixes, s=select_val, b=body, g=namedGraph)
bindings = run_sparql(query)
if len(bindings) == 0:
return []
if cols is None:
return bindings
else:
if single_column:
c = list(bindings[0].keys())[0]
return [r[c]['value'] for r in bindings]
else:
return [[r[c]['value'] for c in cols] for r in bindings]
class EagerRemoteSparqlOntology(RemoteSparqlOntology):
"""
Local or remote ontology
"""
def __init__(self, handle=None):
"""
initializes based on an ontology name
"""
self.id = get_named_graph(handle)
self.handle = handle
logger.info("Creating eager-remote-sparql from "+str(handle))
g = get_digraph(handle, None, True)
logger.info("Graph:"+str(g))
if len(g.nodes()) == 0 and len(g.edges()) == 0:
logger.error("Empty graph for '{}' - did you use the correct id?".
format(handle))
self.graph = g
self.graph_name = get_named_graph(handle)
self.xref_graph = get_xref_graph(handle)
self.all_logical_definitions = []
self.all_synonyms_cache = None
self.all_text_definitions_cache = None
self.all_obsoletes_cache = None
logger.info("Graph: {} LDs: {}".format(self.graph, self.all_logical_definitions))
def __str__(self):
return "h:{} g:{}".format(self.handle, self.graph)
class LazyRemoteSparqlOntology(RemoteSparqlOntology):
"""
Local or remote ontology
"""
def __init__(self):
self.all_logical_definitions = [] ## TODO
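The result shaping described in `RemoteSparqlOntology.sparql` (raw dicts for '*', lists of lists for named vars, a flat list for `single_column`) can be sketched independently of any endpoint (hypothetical helper, assumed binding shape):

```python
def shape_bindings(bindings, cols=None, single_column=False):
    """Shape SPARQL-style bindings as sparql()'s docstring describes."""
    if not bindings:
        return []
    if cols is None:
        return bindings  # '*' case: raw binding dicts
    if single_column:
        return [r[cols[0]]["value"] for r in bindings]
    return [[r[c]["value"] for c in cols] for r in bindings]

bindings = [{"c": {"value": "GO:1"}, "l": {"value": "root"}}]
```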
# ==== biolink/ontobio :: ontobio/golr/golr_query.py (bsd-3-clause) ====
"""
A query wrapper for a Golr instance
Intended to work with:
* Monarch golr instance
* AmiGO/GO golr instance (including both GO and Planteome)
Conventions
-----------
Documents follow either entity or association patterns.
Associations
------------
Connects some kind of *subject* to an *object* via a *relation*, this
should be read as any RDF triple.
The subject may be a molecular biological entity such as a gene, or an
ontology class. The distinction between these two may be malleable.
The object is typically an ontology class, but not
always. E.g. gene-gene interactions or homology for exceptions.
An association also has evidence plus various provenance metadata.
In Monarch, the evidence is modeled as a graph encoded as a JSON blob;
In AmiGO, we follow the GAF data model where it is assumed evidence is
simple as does not follow chains, there is assumed to be one evidence
object for the intermediate entity.
### Entities
TODO
"""
import json
import logging
import pysolr
import re
import requests
from dataclasses import asdict
from typing import Dict, List
import xml.etree.ElementTree as ET
from collections import OrderedDict
from ontobio.vocabulary.relations import HomologyTypes
from ontobio.model.GolrResults import SearchResults, AutocompleteResult, Highlight
from ontobio.util.user_agent import get_user_agent
from prefixcommons.curie_util import expand_uri
from ontobio.util.curie_map import get_curie_map
from ontobio import ecomap
INVOLVED_IN="involved_in"
ACTS_UPSTREAM_OF_OR_WITHIN="acts_upstream_of_or_within"
ISA_PARTOF_CLOSURE="isa_partof_closure"
REGULATES_CLOSURE="regulates_closure"
ecomapping = ecomap.EcoMap()
iea_eco = ecomapping.coderef_to_ecoclass("IEA")
logger = logging.getLogger(__name__)
class GolrFields:
"""
Enumeration of fields in Golr.
Note the Monarch golr schema is taken as canonical here
"""
ID='id'
ASSOCIATION_TYPE='association_type'
SOURCE='source'
OBJECT_CLOSURE='object_closure'
SOURCE_CLOSURE_MAP='source_closure_map'
SUBJECT_TAXON_CLOSURE_LABEL='subject_taxon_closure_label'
OBJECT_TAXON_CLOSURE_LABEL = 'object_taxon_closure_label'
SUBJECT_GENE_CLOSURE_MAP='subject_gene_closure_map'
SUBJECT_TAXON_LABEL_SEARCHABLE='subject_taxon_label_searchable'
OBJECT_TAXON_LABEL_SEARCHABLE = 'object_taxon_label_searchable'
IS_DEFINED_BY='is_defined_by'
SUBJECT_GENE_CLOSURE_LABEL='subject_gene_closure_label'
SUBJECT_TAXON_CLOSURE='subject_taxon_closure'
OBJECT_TAXON_CLOSURE = 'object_taxon_closure'
OBJECT_LABEL='object_label'
SUBJECT_CATEGORY='subject_category'
SUBJECT_GENE_LABEL='subject_gene_label'
SUBJECT_TAXON_CLOSURE_LABEL_SEARCHABLE='subject_taxon_closure_label_searchable'
OBJECT_TAXON_CLOSURE_LABEL_SEARCHABLE = 'object_taxon_closure_label_searchable'
SUBJECT_GENE_CLOSURE='subject_gene_closure'
SUBJECT_GENE_LABEL_SEARCHABLE='subject_gene_label_searchable'
OBJECT_GENE_LABEL_SEARCHABLE = 'object_gene_label_searchable'
SUBJECT='subject'
SUBJECT_LABEL='subject_label'
SUBJECT_CLOSURE_LABEL_SEARCHABLE='subject_closure_label_searchable'
OBJECT_CLOSURE_LABEL_SEARCHABLE='object_closure_label_searchable'
OBJECT_CLOSURE_LABEL='object_closure_label'
SUBJECT_CLOSURE_LABEL='subject_closure_label'
SUBJECT_GENE='subject_gene'
SUBJECT_TAXON='subject_taxon'
OBJECT_TAXON = 'object_taxon'
OBJECT_LABEL_SEARCHABLE='object_label_searchable'
OBJECT_CATEGORY='object_category'
SUBJECT_TAXON_CLOSURE_MAP='subject_taxon_closure_map'
OBJECT_TAXON_CLOSURE_MAP = 'object_taxon_closure_map'
QUALIFIER='qualifier'
SUBJECT_TAXON_LABEL='subject_taxon_label'
OBJECT_TAXON_LABEL = 'object_taxon_label'
SUBJECT_CLOSURE_MAP='subject_closure_map'
SUBJECT_ORTHOLOG_CLOSURE='subject_ortholog_closure'
SUBJECT_CLOSURE='subject_closure'
OBJECT='object'
OBJECT_CLOSURE_MAP='object_closure_map'
SUBJECT_LABEL_SEARCHABLE='subject_label_searchable'
EVIDENCE_OBJECT='evidence_object'
EVIDENCE_OBJECT_CLOSURE_MAP='evidence_object_closure_map'
EVIDENCE_OBJECT_LABEL='evidence_object_label'
EVIDENCE_OBJECT_CLOSURE='evidence_object_closure'
EVIDENCE_OBJECT_CLOSURE_LABEL='evidence_object_closure_label'
EVIDENCE='evidence'
EVIDENCE_LABEL='evidence_label'
EVIDENCE_CLOSURE_MAP = 'evidence_closure_map'
EVIDENCE_GRAPH = 'evidence_graph'
_VERSION_='_version_'
SUBJECT_GENE_CLOSURE_LABEL_SEARCHABLE='subject_gene_closure_label_searchable'
ASPECT='aspect'
RELATION='relation'
RELATION_LABEL='relation_label'
FREQUENCY='frequency'
FREQUENCY_LABEL='frequency_label'
ONSET='onset'
ONSET_LABEL='onset_label'
# This is a temporary fix until
# https://github.com/biolink/ontobio/issues/126 is resolved.
# AmiGO specific fields
AMIGO_SPECIFIC_FIELDS = [
'reference',
'qualifier',
'is_redundant_for',
'type',
'evidence',
'evidence_label',
'evidence_type',
'evidence_type_label',
'evidence_with',
'evidence_closure',
'evidence_closure_label',
'evidence_subset_closure',
'evidence_subset_closure_label',
'evidence_type_closure',
'evidence_type_closure_label',
'aspect'
]
# golr convention: for any entity FOO, the id is denoted 'foo'
# and the label FOO_label
def label_field(self, f):
return f + "_label"
# golr convention: for any class FOO, the id is denoted 'foo'
# and the closure FOO_closure. Other closures may exist
def closure_field(self, f):
return f + "_closure"
# create an instance
M=GolrFields()
# fields in the result docs that are to be inverted when 'invert_subject_object' is True
INVERT_FIELDS_MAP = {
M.SUBJECT: M.OBJECT,
M.SUBJECT_CLOSURE: M.OBJECT_CLOSURE,
M.SUBJECT_TAXON: M.OBJECT_TAXON,
M.SUBJECT_CLOSURE_LABEL: M.OBJECT_CLOSURE_LABEL,
M.SUBJECT_TAXON_CLOSURE_LABEL: M.OBJECT_TAXON_CLOSURE_LABEL,
M.SUBJECT_TAXON_LABEL_SEARCHABLE: M.OBJECT_TAXON_LABEL_SEARCHABLE,
M.SUBJECT_TAXON_CLOSURE: M.OBJECT_TAXON_CLOSURE,
M.SUBJECT_LABEL: M.OBJECT_LABEL,
M.SUBJECT_TAXON_CLOSURE_LABEL_SEARCHABLE: M.OBJECT_TAXON_CLOSURE_LABEL_SEARCHABLE,
M.SUBJECT_CLOSURE_LABEL_SEARCHABLE: M.OBJECT_CLOSURE_LABEL_SEARCHABLE,
M.SUBJECT_LABEL_SEARCHABLE: M.OBJECT_LABEL_SEARCHABLE,
M.SUBJECT_CATEGORY: M.OBJECT_CATEGORY,
M.SUBJECT_TAXON_CLOSURE_MAP: M.OBJECT_TAXON_CLOSURE_MAP,
M.SUBJECT_TAXON_LABEL: M.OBJECT_TAXON_LABEL,
M.SUBJECT_CLOSURE_MAP: M.OBJECT_CLOSURE_MAP,
}
ASPECT_MAP = {
'F': 'molecular_activity',
'P': 'biological_process',
'C': 'cellular_component'
}
# normalize to what Monarch uses
PREFIX_NORMALIZATION_MAP = {
'MGI:MGI' : 'MGI',
'FB' : 'FlyBase',
}
def flip(d, x, y):
dx = d.get(x)
dy = d.get(y)
d[x] = dy
d[y] = dx
def solr_quotify(v, operator="OR"):
if isinstance(v, list):
if len(v) == 1:
return solr_quotify(v[0], operator)
else:
return '({})'.format(" {} ".format(operator).join([solr_quotify(x) for x in v]))
else:
# TODO - escape quotes
return '"{}"'.format(v)
def translate_facet_field(fcs, invert_subject_object = False):
"""
Translates solr facet_fields results into something easier to manipulate
A solr facet field looks like this: [field1, count1, field2, count2, ..., fieldN, countN]
We translate this to a dict {f1: c1, ..., fn: cn}
This has slightly higher overhead for sending over the wire, but is easier to use
"""
if 'facet_fields' not in fcs:
return {}
ffs = fcs['facet_fields']
rs={}
for (facet, facetresults) in ffs.items():
if invert_subject_object:
for (k,v) in INVERT_FIELDS_MAP.items():
if facet == k:
facet = v
break
elif facet == v:
facet = k
break
pairs = {}
rs[facet] = pairs
for i in range(int(len(facetresults)/2)):
(fv,fc) = (facetresults[i*2],facetresults[i*2+1])
pairs[fv] = fc
return rs
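The pairing step inside `translate_facet_field` converts solr's flat `[term1, count1, term2, count2, ...]` facet list into a dict; isolated here as a hypothetical helper:

```python
def facet_pairs(facetresults):
    """[t1, c1, t2, c2, ...] -> {t1: c1, t2: c2, ...}"""
    return {facetresults[i * 2]: facetresults[i * 2 + 1]
            for i in range(len(facetresults) // 2)}
```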
### GO-SPECIFIC CODE
def goassoc_fieldmap(relationship_type=ACTS_UPSTREAM_OF_OR_WITHIN):
"""
Returns a mapping of canonical monarch fields to amigo-golr.
See: https://github.com/geneontology/amigo/blob/master/metadata/ann-config.yaml
"""
return {
M.SUBJECT: 'bioentity',
M.SUBJECT_CLOSURE: 'bioentity',
## In the GO AmiGO instance, the type field is not correctly populated
## See above in the code for hack that restores this for planteome instance
## M.SUBJECT_CATEGORY: 'type',
M.SUBJECT_CATEGORY: None,
M.SUBJECT_LABEL: 'bioentity_label',
M.SUBJECT_TAXON: 'taxon',
M.SUBJECT_TAXON_LABEL: 'taxon_label',
M.SUBJECT_TAXON_CLOSURE: 'taxon_closure',
M.RELATION: 'qualifier',
M.OBJECT: 'annotation_class',
M.OBJECT_CLOSURE: REGULATES_CLOSURE if relationship_type == ACTS_UPSTREAM_OF_OR_WITHIN else ISA_PARTOF_CLOSURE,
M.OBJECT_LABEL: 'annotation_class_label',
M.OBJECT_TAXON: 'taxon',
M.OBJECT_TAXON_LABEL: 'taxon_label',
M.OBJECT_TAXON_CLOSURE: 'taxon_closure',
M.OBJECT_CATEGORY: None,
M.EVIDENCE_OBJECT_CLOSURE: 'evidence_subset_closure',
M.IS_DEFINED_BY: 'assigned_by'
}
def map_field(fn, m) :
"""
Maps a field name, given a mapping file.
Returns input if fieldname is unmapped.
"""
if m is None:
return fn
if fn in m:
return m[fn]
else:
return fn
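`map_field` together with `goassoc_fieldmap` translates canonical Monarch field names to the AmiGO golr schema, passing unmapped names through unchanged; a compact restatement (the helper name and the mapping slice below are illustrative):

```python
def map_field_sketch(fn, m):
    """Translate a field name via mapping m; pass through when unmapped."""
    if m is None:
        return fn
    return m.get(fn, fn)

# a slice of the goassoc-style mapping shown above
fieldmap = {"subject": "bioentity", "object": "annotation_class"}
```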
### CLASSES
class GolrServer():
pass
class GolrAbstractQuery():
def get_config(self):
if self.config is None:
from ontobio.config import Config, get_config
self.config = get_config()
return self.config
def _set_solr(self, url, timeout=2):
self.solr = pysolr.Solr(url=url, timeout=timeout)
return self.solr
def _set_user_agent(self, user_agent):
self.solr.get_session().headers['User-Agent'] = user_agent
def _use_amigo_schema(self, object_category):
if object_category is not None and object_category == 'function':
return True
ds = self.get_config().default_solr_schema
if ds is not None and ds == 'amigo':
return True
return False
class GolrSearchQuery(GolrAbstractQuery):
"""
Controller for monarch and go solr search cores
Queries over a search document
"""
def __init__(self,
term=None,
category=None,
is_go=False,
url=None,
solr=None,
config=None,
fq=None,
fq_string=None,
hl=True,
facet_fields=None,
facet=True,
search_fields=None,
taxon_map=True,
rows=100,
start=None,
prefix=None,
boost_fx=None,
boost_q=None,
highlight_class=None,
taxon=None,
min_match=None,
minimal_tokenizer=False,
include_eqs=False,
exclude_groups=False,
user_agent=None):
self.term = term
self.category = category
self.is_go = is_go
self.url = url
self.solr = solr
self.config = config
self.hl = hl
self.facet = facet
self.facet_fields = facet_fields
self.search_fields = search_fields
self.taxon_map = taxon_map
self.rows = rows
self.start = start
# test if client explicitly passes a URL; do not override
self.is_explicit_url = url is not None
# Raw fq param string
self.fq_string = fq_string if fq_string is not None else []
# fq as dictionary where key:values get converted
# to fq="(key1:value1 OR key2:value2)"
self.fq = fq if fq is not None else {}
self.prefix = prefix
self.boost_fx = boost_fx
self.boost_q = boost_q
self.highlight_class = highlight_class
self.taxon = taxon
self.min_match = min_match
self.include_eqs = include_eqs
self.exclude_groups = exclude_groups
self.minimal_tokenizer = minimal_tokenizer
self.user_agent = get_user_agent(modules=[requests, pysolr], caller_name=__name__)
if user_agent is not None:
self.user_agent += " {}".format(user_agent)
if self.search_fields is None:
self.search_fields = dict(id=3,
label=2,
synonym=1,
definition=1,
taxon_label=1,
taxon_label_synonym=1,
equivalent_curie=1)
if self.is_go:
if self.url is None:
endpoint = self.get_config().amigo_solr_search
solr_config = {'url': endpoint.url, 'timeout': endpoint.timeout}
else:
solr_config = {'url': self.url, 'timeout': 2}
else:
if self.url is None:
endpoint = self.get_config().solr_search
solr_config = {'url': endpoint.url, 'timeout': endpoint.timeout}
else:
solr_config = {'url': self.url, 'timeout': 2}
self._set_solr(**solr_config)
self._set_user_agent(self.user_agent)
def update_solr_url(self, url, timeout=2):
self.url = url
solr_config = {'url': url, 'timeout': timeout}
self._set_solr(**solr_config)
self._set_user_agent(self.user_agent)
def solr_params(self, mode=None):
if self.facet_fields is None and self.facet:
self.facet_fields = ['category', 'taxon', 'taxon_label']
if self.category is not None:
self.fq['category'] = self.category
suffixes = ['std', 'kw', 'eng']
if self.is_go:
self.search_fields=dict(entity_label=3, general_blob=3)
self.hl = False
# TODO: formal mapping
if 'taxon_label' in self.facet_fields:
self.facet_fields.remove('taxon_label')
suffixes = ['searchable']
self.fq['document_category'] = "general"
qf = self._format_query_filter(self.search_fields, suffixes)
if mode == 'search':
# Decrease ngram weight and increase keyword and standard tokenizer
for field, weight in qf.items():
if '_kw' in field:
qf[field] += 2
elif '_std' in field:
qf[field] += 1
if self.term is not None and ":" in self.term:
qf["id_kw"] = 20
qf["equivalent_curie_kw"] = 20
if self.minimal_tokenizer:
# Split text using a minimal set of word boundaries
# useful for variants and genotypes where typical
# word boundaries are part of the nomenclature
tokens = re.split(r'[\s|\'\",]+', self.term)
if tokens[-1] == '':
del tokens[-1]
tokenized = "".join(['"{}"'.format(token) for token in tokens])
else:
# Solr will run through the Standard Tokenizer
tokenized = self.term
select_fields = ["*", "score"]
params = {
'q': '{0} "{1}"'.format(tokenized, self.term),
"qt": "standard",
'fl': ",".join(list(filter(None, select_fields))),
"defType": "edismax",
"qf": ["{}^{}".format(field, weight) for field, weight in qf.items()],
'rows': self.rows
}
if self.facet:
params['facet'] = 'on'
params['facet.field'] = self.facet_fields
params['facet.limit'] = 25
params['facet.mincount'] = 1
if self.taxon_map:
params["facet.pivot.mincount"] =1
params["facet.pivot"] = "taxon,taxon_label"
if self.start is not None:
params['start'] = self.start
if self.hl:
params['hl.simple.pre'] = "<em class=\"hilite\">"
params['hl.snippets'] = "1000"
params['hl'] = 'on'
if self.fq is not None:
filter_queries = ['{}:{}'.format(k,solr_quotify(v))
for (k,v) in self.fq.items()]
params['fq'] = filter_queries
else:
params['fq'] = []
for fq in self.fq_string:
params['fq'].append(fq)
if self.prefix is not None:
negative_filter = [p_filt[1:] for p_filt in self.prefix
if p_filt.startswith('-')]
positive_filter = [p_filt for p_filt in self.prefix
if not p_filt.startswith('-')]
if negative_filter:
if self.include_eqs:
single_filts = [
f'(-prefix:"{prefix}" OR -equivalent_curie:{prefix}\:*)'
for prefix in negative_filter
]
for filt in single_filts:
params['fq'].append(filt)
else:
    params['fq'].append('-prefix:{}'.format(solr_quotify(negative_filter)))
if positive_filter:
if self.include_eqs:
# fq=((prefix:HP OR equivalent_curie:HP) OR (prefix:MONDO OR equivalent_curie:MONDO))
single_filts = [
f'(prefix:"{prefix}" OR equivalent_curie:{prefix}\:*)'
for prefix in positive_filter
]
pos_filter = '({})'.format(" OR ".join([filt for filt in single_filts]))
params['fq'].append(pos_filter)
else:
params['fq'].append('prefix:{}'.format(solr_quotify(positive_filter)))
if self.boost_fx is not None:
params['bf'] = []
for boost in self.boost_fx:
params['bf'].append(boost)
if self.boost_q is not None:
params['bq'] = []
for boost in self.boost_q:
params['bq'].append(boost)
if self.taxon is not None:
for tax in self.taxon:
params['fq'].append('taxon:"{}"'.format(tax))
if self.exclude_groups:
params['fq'].append('leaf:1')
if self.min_match is not None:
params['mm'] = self.min_match
if self.highlight_class is not None:
params['hl.simple.pre'] = \
'<em class=\"{}\">'.format(self.highlight_class)
return params
def search(self):
"""
Execute solr search query
"""
params = self.solr_params(mode='search')
logger.info("PARAMS=" + str(params))
results = self.solr.search(**params)
logger.info("Docs found: {}".format(results.hits))
return self._process_search_results(results)
def autocomplete(self):
"""
Execute solr autocomplete
"""
self.facet = False
params = self.solr_params()
logger.info("PARAMS=" + str(params))
results = self.solr.search(**params)
logger.info("Docs found: {}".format(results.hits))
return self._process_autocomplete_results(results)
def _process_search_results(self,
results: pysolr.Results) -> SearchResults:
"""
Convert solr docs to biolink object
:param results: pysolr.Results
:return: model.GolrResults.SearchResults
"""
# map go-golr fields to standard
for doc in results.docs:
if 'entity' in doc:
doc['id'] = doc['entity']
doc['label'] = doc['entity_label']
translated_facets = translate_facet_field(results.facets)
# inject the taxon map (aka a facet pivot) into the returned facets
if self.taxon_map:
translated_facets['_taxon_map'] = [
{
'id': taxon['value'],
'label': taxon['pivot'][0]['value'],
'count': taxon['pivot'][0]['count']
}
for taxon in results.facets['facet_pivot']['taxon,taxon_label']
]
highlighting = {
doc['id']: asdict(self._process_highlight(results, doc))
for doc in results.docs if results.highlighting
}
payload = SearchResults(
facet_counts=translated_facets,
highlighting=highlighting,
docs=results.docs,
numFound=results.hits
)
logger.debug('Docs: {}'.format(len(results.docs)))
return payload
def _process_autocomplete_results(
self,
results: pysolr.Results) -> Dict[str, List[AutocompleteResult]]:
"""
Convert results to biolink autocomplete object
:param results: pysolr.Results
:return: {'docs': List[AutocompleteResult]}
"""
# map go-golr fields to standard
for doc in results.docs:
if 'entity' in doc:
doc['id'] = doc['entity']
doc['label'] = doc['entity_label']
docs = []
for doc in results.docs:
if results.highlighting:
hl = self._process_highlight(results, doc)
else:
hl = Highlight(None, None, None)
# In some cases a node does not have a category
category = doc['category'] if 'category' in doc else []
doc['taxon'] = doc['taxon'] if 'taxon' in doc else ""
doc['taxon_label'] = doc['taxon_label'] if 'taxon_label' in doc else ""
doc['equivalent_curie'] = doc['equivalent_curie'] if 'equivalent_curie' in doc else []
doc = AutocompleteResult(
id=doc['id'],
label=doc['label'],
match=hl.match,
category=category,
taxon=doc['taxon'],
taxon_label=doc['taxon_label'],
highlight=hl.highlight,
has_highlight=hl.has_highlight,
equivalent_ids=doc['equivalent_curie']
)
docs.append(doc)
payload = {
'docs': docs
}
logger.debug('Docs: {}'.format(len(results.docs)))
return payload
def _process_highlight(self, results: pysolr.Results, doc) -> Highlight:
hl = results.highlighting[doc['id']]
highlights = []
primary_label_matches = [] # Store all primary label
for field, hl_list in hl.items():
if field.startswith('label'):
primary_label_matches.extend(hl_list)
highlights.extend(hl_list)
# If we've matched on the primary label, get the longest
# from the list, else use other fields
if primary_label_matches:
highlights = primary_label_matches
try:
highlight = Highlight(
highlight=self._get_longest_hl(highlights),
match=self._hl_as_string(self._get_longest_hl(highlights)),
has_highlight=True
)
except ET.ParseError:
highlight = Highlight(
highlight=doc['label'][0],
match=doc['label'][0],
has_highlight=False
)
return highlight
@staticmethod
def _format_query_filter(search_fields, suffixes):
qf = {}
for (field, relevancy) in search_fields.items():
for suffix in suffixes:
field_filter = "{}_{}".format(field, suffix)
qf[field_filter] = relevancy
return qf
def _get_longest_hl(self, highlights):
"""
Given a list of highlighted text, returns the
longest highlight
For example:
[
"<em>Muscle</em> <em>atrophy</em>, generalized",
"Generalized <em>muscle</em> degeneration",
"Diffuse skeletal <em>">muscle</em> wasting"
]
and returns:
<em>Muscle</em> <em>atrophy</em>, generalized
If there are multiple matches of the same length, returns
the top (arbitrary) highlight
:return:
"""
len_dict = OrderedDict()
for hl in highlights:
# dummy tags to make it valid xml
dummy_xml = "<p>" + hl + "</p>"
try:
element_tree = ET.fromstring(dummy_xml)
hl_length = 0
for emph in element_tree.findall('em'):
hl_length += len(emph.text)
len_dict[hl] = hl_length
except ET.ParseError:
    raise
return max(len_dict, key=len_dict.get)
def _hl_as_string(self, highlight):
"""
Given a solr string of highlighted text, returns the
str representations
For example:
"Foo <em>Muscle</em> bar <em>atrophy</em>, generalized"
Returns:
"Foo Muscle bar atrophy, generalized"
:return: str
"""
# dummy tags to make it valid xml
dummy_xml = "<p>" + highlight + "</p>"
element_tree = ET.fromstring(dummy_xml)
return "".join(list(element_tree.itertext()))
class GolrLayPersonSearch(GolrSearchQuery):
"""
Controller for the HPO lay person index,
see https://github.com/monarch-initiative/hpo-plain-index
"""
def __init__(self, term=None, **kwargs):
super().__init__(term, **kwargs)
self.facet = False
endpoint = self.get_config().lay_person_search
self._set_solr(endpoint.url, endpoint.timeout)
self._set_user_agent(self.user_agent)
def set_lay_params(self):
params = self.solr_params()
suffixes = ['std', 'kw', 'eng']
qf = self._get_default_weights(suffixes)
params['qf'] = ["{}^{}".format(field, weight) for field, weight in qf.items()]
return params
def autocomplete(self):
"""
Execute solr query for autocomplete
"""
params = self.set_lay_params()
logger.info("PARAMS="+str(params))
results = self.solr.search(**params)
logger.info("Docs found: {}".format(results.hits))
return self._process_layperson_results(results)
def _process_layperson_results(self, results):
"""
Convert pysolr.Results to biolink object
:param results:
:return:
"""
payload = {
'results': []
}
for doc in results.docs:
hl = self._process_highlight(results, doc)
highlight = {
'id': doc['id'],
'highlight': hl.highlight,
'label': doc['label'],
'matched_synonym': hl.match
}
payload['results'].append(highlight)
logger.debug('Docs: {}'.format(len(results.docs)))
return payload
@staticmethod
def _get_default_weights(suffixes):
"""
Defaults for the plain language index
:param suffixes: list of suffixes (eng (ngram), std,)
:return:
"""
weights = {
"exact_synonym": "5",
"related_synonym": "2",
"broad_synonym": "1",
"narrow_synonym": "3"
}
qf = GolrLayPersonSearch._format_query_filter(weights, suffixes)
return qf
class GolrAssociationQuery(GolrAbstractQuery):
"""
A Query object providing a higher level of abstraction over either GO or Monarch Solr indexes
Fields
------
All of these can be set when creating a new object
fetch_objects : bool
we frequently want a list of distinct association objects (in
the RDF sense). for example, when querying for all phenotype
associations for a gene, it is convenient to get a list of
distinct phenotype terms. Although this can be obtained by
iterating over the list of associations, it can be expensive
to obtain all associations.
Results are in the 'objects' field
fetch_subjects : bool
This is the analog of the fetch_objects field. Note that due
to an inherent asymmetry by which the list of subjects can be
very large (e.g. all genes in all species for "metabolic
process" or "metabolic phenotype") it's necessary to combine
this with subject_category and subject_taxon filters
Results are in the 'subjects' field
slim : List
a list of either class ids (or in future subset ids), used to
map up (slim) objects in associations. This will populate
an additional 'slim' field in each association object corresponding
to the slimmed-up value(s) from the direct objects.
If fetch_objects is passed, this will be populated with slimmed IDs.
evidence: String
Evidence class from ECO. Inference is used.
exclude_automatic_assertions : bool
If true, then any annotations with ECO evidence code for IEA or
subclasses will be excluded.
    use_compact_associations : bool
        If true, then the associations list will be empty; instead,
        compact_associations contains a more compact representation
        consisting of objects with (subject, relation and objects)
config : Config
See :ref:`Config` for details. The config object can be used
to set values for the solr instance to be queried
TODO - Extract params into their own object
"""
def __init__(self,
subject_category=None,
object_category=None,
relation=None,
relationship_type=None,
subject_or_object_ids=None,
subject_or_object_category=None,
subject=None,
subjects=None,
object=None,
objects=None,
subject_direct=False,
object_direct=False,
subject_taxon=None,
subject_taxon_direct=False,
object_taxon=None,
object_taxon_direct=False,
invert_subject_object=None,
evidence=None,
exclude_automatic_assertions=False,
q=None,
id=None,
use_compact_associations=False,
include_raw=False,
field_mapping=None,
solr=None,
config=None,
url=None,
select_fields=None,
fetch_objects=False,
fetch_subjects=False,
fq=None,
slim=None,
json_facet=None,
iterate=False,
map_identifiers=None,
facet_fields=None,
facet_field_limits=None,
facet_limit=25,
facet_mincount=1,
facet_pivot_fields=None,
stats=False,
stats_field=None,
facet=True,
pivot_subject_object=False,
unselect_evidence=False,
rows=10,
start=None,
homology_type=None,
non_null_fields=None,
user_agent=None,
association_type=None,
sort=None,
**kwargs):
"""Fetch a set of association objects based on a query.
"""
self.subject_category = subject_category
self.object_category = object_category
self.relation = relation
self.relationship_type = relationship_type
self.subject_or_object_ids = subject_or_object_ids
self.subject_or_object_category = subject_or_object_category
self.subject = subject
self.subjects = subjects
self.subject_direct = subject_direct
self.object = object
self.objects = objects
self.object_direct = object_direct
self.subject_taxon = subject_taxon
self.subject_taxon_direct = subject_taxon_direct
self.object_taxon = object_taxon
self.object_taxon_direct = object_taxon_direct
self.invert_subject_object = invert_subject_object
self.evidence = evidence
self.exclude_automatic_assertions = exclude_automatic_assertions
self.id = id
self.q = q
self.use_compact_associations = use_compact_associations
self.include_raw = include_raw
self.field_mapping = field_mapping
self.solr = solr
self.config = config
self.select_fields = select_fields
self.fetch_objects = fetch_objects
self.fetch_subjects = fetch_subjects
self.fq = fq if fq is not None else {}
self.slim = slim if slim is not None else []
self.json_facet = json_facet
self.iterate = iterate
self.map_identifiers = map_identifiers
self.facet_fields = facet_fields
self.facet_field_limits = facet_field_limits
self.facet_limit = facet_limit
self.facet_mincount = facet_mincount
self.facet_pivot_fields = facet_pivot_fields
self.stats = stats
self.stats_field = stats_field
self.facet = facet
self.pivot_subject_object = pivot_subject_object
self.unselect_evidence = unselect_evidence
self.max_rows = 100000
self.rows = rows
self.start = start
self.homology_type = homology_type
self.url = url
# test if client explicitly passes a URL; do not override
self.is_explicit_url = url is not None
self.non_null_fields = non_null_fields
self.association_type = association_type
self.sort = sort
self.user_agent = get_user_agent(modules=[requests, pysolr], caller_name=__name__)
if user_agent is not None:
self.user_agent += " {}".format(user_agent)
if self.facet_pivot_fields is None:
self.facet_pivot_fields = []
if self.non_null_fields is None:
self.non_null_fields = []
if self.facet_fields is None:
if self.facet:
self.facet_fields = [
M.SUBJECT_TAXON,
M.SUBJECT_TAXON_LABEL,
M.OBJECT_CLOSURE
]
if self.sort is None and not self._use_amigo_schema(object_category):
# Make default descending by count of publications for monarch
self.sort = 'source_count desc'
if self.solr is None:
if self.url is None:
endpoint = self.get_config().solr_assocs
solr_config = {'url': endpoint.url, 'timeout': endpoint.timeout}
else:
solr_config = {'url': self.url, 'timeout': 5}
self.update_solr_url(**solr_config)
def update_solr_url(self, url, timeout=2):
self.url = url
solr_config = {'url': url, 'timeout': timeout}
self._set_solr(**solr_config)
self._set_user_agent(self.user_agent)
def adjust(self):
pass
def solr_params(self):
"""
Generate HTTP parameters for passing to Solr.
In general you should not need to call this directly, calling exec() on a query object
will transparently perform this step for you.
"""
## Main query params for solr
fq=self.fq
if fq is None:
fq = {}
        logger.debug("Initial fq: {}".format(fq))
# subject_or_object_ids is a list of identifiers that can be matched to either subjects or objects
subject_or_object_ids = self.subject_or_object_ids
if subject_or_object_ids is not None:
subject_or_object_ids = [self.make_canonical_identifier(c) for c in subject_or_object_ids]
# canonical form for MGI is a CURIE MGI:nnnn
#if subject is not None and subject.startswith('MGI:MGI:'):
# logger.info('Unhacking MGI ID presumably from GO:'+str(subject))
# subject = subject.replace("MGI:MGI:","MGI")
subject = self.subject
if subject is not None:
subject = self.make_canonical_identifier(subject)
subjects = self.subjects
if subjects is not None:
subjects = [self.make_canonical_identifier(s) for s in subjects]
subject_direct = self.subject_direct
# temporary: for querying go solr, map fields. TODO
object_category = self.object_category
logger.info("Object category: {}".format(object_category))
object = self.object
objects = self.objects
object_direct = self.object_direct
if object_category is None and object is not None and object.startswith('GO:'):
# Infer category
object_category = 'function'
logger.info("Inferring Object category: {} from {}".
format(object_category, object))
# URL to use for querying solr
if self._use_amigo_schema(object_category):
# Override solr config and use go solr
endpoint = self.get_config().amigo_solr_assocs
solr_config = {'url': endpoint.url, 'timeout': endpoint.timeout}
self.update_solr_url(**solr_config)
self.field_mapping=goassoc_fieldmap(self.relationship_type)
# awkward hack: we want to avoid typing on the amigo golr gene field,
# UNLESS this is a planteome golr
if "planteome" in self.get_config().amigo_solr_assocs.url:
self.field_mapping[M.SUBJECT_CATEGORY] = 'type'
fq['document_category'] = 'annotation'
if subject is not None:
subject = self.make_gostyle_identifier(subject)
if subjects is not None:
subjects = [self.make_gostyle_identifier(s) for s in subjects]
# the AmiGO schema lacks an object_category field;
# we could use the 'aspect' field but instead we use a mapping of
# the category to a root class
if object_category is not None:
cc = self.get_config().get_category_class(object_category)
if cc is not None and object is None:
object = cc
## subject params
subject_taxon = self.subject_taxon
subject_taxon_direct = self.subject_taxon_direct
subject_category = self.subject_category
# heuristic procedure to guess unspecified subject_category
if subject_category is None and subject is not None:
subject_category = self.infer_category(subject)
if subject_category is not None and subject_category == 'disease':
if subject_taxon is not None and subject_taxon=='NCBITaxon:9606':
logger.info("Unsetting taxon, until indexed correctly")
subject_taxon = None
if self.invert_subject_object is None:
# TODO: consider placing in a separate lookup
p = (subject_category, object_category)
if p == ('disease', 'gene'):
self.invert_subject_object = True
elif p == ('disease', 'model'):
self.invert_subject_object = True
else:
self.invert_subject_object = False
if self.invert_subject_object:
logger.info("Inferred that subject/object should be inverted for {}".format(p))
## taxon of object of triple
object_taxon=self.object_taxon
object_taxon_direct = self.object_taxon_direct
# typically information is stored one-way, e.g. model-disease;
# sometimes we want associations from perspective of object
if self.invert_subject_object:
(subject, object) = (object,subject)
(subject_category, object_category) = (object_category,subject_category)
(subject_taxon, object_taxon) = (object_taxon,subject_taxon)
(object_direct, subject_direct) = (subject_direct, object_direct)
(object_taxon_direct, subject_taxon_direct) = (subject_taxon_direct, object_taxon_direct)
## facet fields
facet_fields=self.facet_fields
facet=self.facet
facet_limit=self.facet_limit
select_fields=self.select_fields
if self.use_compact_associations:
facet_fields = []
facet = False
facet_limit = 0
select_fields = [
M.SUBJECT,
M.SUBJECT_LABEL,
M.RELATION,
M.OBJECT]
if subject_category is not None:
fq['subject_category'] = subject_category
if object_category is not None:
fq['object_category'] = object_category
if subject is not None:
# note: by including subject closure by default,
            # we automatically get equivalent nodes
if subject_direct:
fq['subject_eq'] = subject
else:
fq['subject_closure'] = subject
if subjects is not None:
# lists are assumed to be disjunctive
if subject_direct:
fq['subject'] = subjects
else:
fq['subject_closure'] = subjects
if object is not None:
if object_direct:
fq['object_eq'] = object
else:
fq['object_closure'] = object
        if objects is not None:
            # lists are assumed to be disjunctive
            if object_direct:
                fq['object_eq'] = objects
            else:
                fq['object_closure'] = objects
relation=self.relation
if relation is not None:
fq['relation_closure'] = relation
if subject_taxon is not None:
if subject_taxon_direct:
fq['subject_taxon'] = subject_taxon
else:
fq['subject_taxon_closure'] = subject_taxon
if object_taxon is not None:
if object_taxon_direct:
fq['object_taxon'] = object_taxon
else:
fq['object_taxon_closure'] = object_taxon
if self.id is not None:
fq['id'] = self.id
if self.evidence is not None:
e = self.evidence
if e.startswith("-"):
                fq['-evidence_object_closure'] = e[1:]  # strip the leading '-'
else:
fq['evidence_object_closure'] = e
if self.exclude_automatic_assertions:
fq['-evidence_object_closure'] = iea_eco
# Homolog service params
# TODO can we sync with argparse.choices?
if self.homology_type is not None:
if self.homology_type == 'O':
fq['relation_closure'] = HomologyTypes.Ortholog.value
elif self.homology_type == 'P':
fq['relation_closure'] = HomologyTypes.Paralog.value
elif self.homology_type == 'LDO':
fq['relation_closure'] = \
HomologyTypes.LeastDivergedOrtholog.value
## Association type, monarch only
if self.association_type is not None:
fq['association_type'] = self.association_type
## pivots
facet_pivot_fields=self.facet_pivot_fields
if self.pivot_subject_object:
facet_pivot_fields = [M.SUBJECT, M.OBJECT]
# Map solr field names for fq. The generic Monarch schema is
# canonical, GO schema is mapped to this using
# field_mapping dictionary
if self.field_mapping is not None:
for (k,v) in self.field_mapping.items():
# map fq[k] -> fq[k]
if k in fq:
if v is None:
del fq[k]
else:
fq[v] = fq[k]
del fq[k]
# in solr, the fq field can be
# a negated expression, e.g. -evidence_object_closure:"ECO:0000501"
# ideally we would have a higher level representation rather than
# relying on string munging...
negk = '-' + k
if negk in fq:
if v is None:
del fq[negk]
else:
negv = '-' + v
fq[negv] = fq[negk]
del fq[negk]
filter_queries = []
qstr = "*:*"
if self.q is not None:
qstr = self.q
filter_queries = [ '{}:{}'.format(k,solr_quotify(v)) for (k,v) in fq.items()]
# We want to match all associations that have either a subject or object
# with an ID that is contained in subject_or_object_ids.
if subject_or_object_ids is not None:
quotified_ids = solr_quotify(subject_or_object_ids)
subject_id_filter = '{}:{}'.format('subject_closure', quotified_ids)
object_id_filter = '{}:{}'.format('object_closure', quotified_ids)
# If subject_or_object_category is provided, we add it to the filter.
if self.subject_or_object_category is not None:
quotified_categories = solr_quotify(self.subject_or_object_category)
subject_category_filter = '{}:{}'.format('subject_category', quotified_categories)
object_category_filter = '{}:{}'.format('object_category', quotified_categories)
filter_queries.append(
'(' + subject_id_filter + ' AND ' + object_category_filter + ')' \
' OR ' \
'(' + object_id_filter + ' AND ' + subject_category_filter + ')'
)
else:
filter_queries.append(subject_id_filter + ' OR ' + object_id_filter)
# unless caller specifies a field list, use default
if select_fields is None:
select_fields = [
M.ID,
M.IS_DEFINED_BY,
M.SOURCE,
M.SUBJECT,
M.SUBJECT_LABEL,
M.SUBJECT_TAXON,
M.SUBJECT_TAXON_LABEL,
M.RELATION,
M.RELATION_LABEL,
M.OBJECT,
M.OBJECT_LABEL,
M.OBJECT_TAXON,
M.OBJECT_TAXON_LABEL,
M.EVIDENCE,
M.EVIDENCE_CLOSURE_MAP,
M.FREQUENCY,
M.FREQUENCY_LABEL,
M.ONSET,
M.ONSET_LABEL
]
if not self.unselect_evidence:
select_fields += [
M.EVIDENCE_GRAPH
]
if not self._use_amigo_schema(object_category):
select_fields.append(M.SUBJECT_CATEGORY)
select_fields.append(M.OBJECT_CATEGORY)
if self.map_identifiers is not None:
select_fields.append(M.SUBJECT_CLOSURE)
if self.slim is not None and len(self.slim) > 0:
select_fields.append(M.OBJECT_CLOSURE)
if self.field_mapping is not None:
logger.info("Applying field mapping to SELECT: {}".format(self.field_mapping))
select_fields = [ map_field(fn, self.field_mapping) for fn in select_fields ]
if facet_pivot_fields is not None:
logger.info("Applying field mapping to PIV: {}".format(facet_pivot_fields))
facet_pivot_fields = [ map_field(fn, self.field_mapping) for fn in facet_pivot_fields ]
logger.info("APPLIED field mapping to PIV: {}".format(facet_pivot_fields))
if facet_fields:
facet_fields = [ map_field(fn, self.field_mapping) for fn in facet_fields ]
if self._use_amigo_schema(object_category):
select_fields += [x for x in M.AMIGO_SPECIFIC_FIELDS if x not in select_fields]
## true if iterate in windows of max_size until all results found
iterate=self.iterate
#logger.info('FL'+str(select_fields))
is_unlimited = False
rows=self.rows
if rows < 0:
is_unlimited = True
iterate = True
rows = self.max_rows
for field in self.non_null_fields:
filter_queries.append(field + ":['' TO *]")
search_fields = None
if self.q is not None and not self._use_amigo_schema(object_category):
search_fields = [
M.SUBJECT_LABEL_SEARCHABLE,
M.OBJECT_LABEL_SEARCHABLE,
M.SUBJECT_TAXON_LABEL_SEARCHABLE,
M.OBJECT_TAXON_LABEL_SEARCHABLE,
M.SUBJECT_GENE_LABEL_SEARCHABLE,
M.OBJECT_GENE_LABEL_SEARCHABLE,
]
params = {
'q': qstr,
'fq': filter_queries,
'facet': 'on' if facet else 'off',
'facet.field': facet_fields if facet_fields else [],
'facet.limit': facet_limit,
'facet.mincount': self.facet_mincount,
'fl': ",".join(list(filter(None, select_fields))),
'rows': rows,
"defType": "edismax"
}
if self.start is not None:
params['start'] = self.start
json_facet = self.json_facet
if json_facet:
params['json.facet'] = json.dumps(json_facet)
facet_field_limits = self.facet_field_limits
if facet_field_limits is not None:
for (f,flim) in facet_field_limits.items():
params["f."+f+".facet.limit"] = flim
if len(facet_pivot_fields) > 0:
params['facet.pivot'] = ",".join(facet_pivot_fields)
params['facet.pivot.mincount'] = 1
if self.stats_field:
self.stats = True
params['stats.field'] = self.stats_field
params['stats'] = json.dumps(self.stats)
if self.sort is not None:
params['sort'] = self.sort
if search_fields:
params['qf'] = search_fields
return params
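`solr_quotify` is defined elsewhere in this module; the filter-query construction above assumes roughly the following behavior (a hedged sketch, not the actual implementation): a single value is quoted, a list becomes a parenthesized OR-group, and embedded quotes are escaped for Solr syntax.

```python
def quotify(v):
    # Single value -> "v"; list -> ("a" OR "b"); embedded quotes escaped
    if isinstance(v, (list, set, tuple)):
        return "(" + " OR ".join(quotify(x) for x in sorted(v)) + ")"
    return '"{}"'.format(str(v).replace('"', '\\"'))

# Hypothetical fq dict of the shape built in solr_params()
fq = {"subject_closure": "MGI:1342287", "object_category": ["phenotype", "disease"]}
filter_queries = ["{}:{}".format(k, quotify(v)) for k, v in fq.items()]
```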
def exec(self, **kwargs):
"""
Execute solr query
Result object is a dict with the following keys:
- raw
- associations : list
- compact_associations : list
- facet_counts
- facet_pivot
"""
params = self.solr_params()
logger.info("PARAMS="+str(params))
results = self.solr.search(**params)
n_docs = len(results.docs)
logger.info("Docs found: {}".format(results.hits))
if self.iterate:
docs = results.docs
start = n_docs
while n_docs >= self.rows:
logger.info("Iterating; start={}".format(start))
next_results = self.solr.search(start=start, **params)
next_docs = next_results.docs
n_docs = len(next_docs)
docs += next_docs
start += self.rows
results.docs = docs
fcs = results.facets
payload = {
'facet_counts': translate_facet_field(fcs, self.invert_subject_object),
'pagination': {},
'numFound': results.hits
}
include_raw = self.include_raw
if include_raw:
# note: this is not JSON serializable, do not send via REST
payload['raw'] = results
# TODO - check if truncated
logger.info("COMPACT={} INV={}".format(self.use_compact_associations, self.invert_subject_object))
if self.use_compact_associations:
payload['compact_associations'] = self.translate_docs_compact(results.docs, field_mapping=self.field_mapping,
slim=self.slim, invert_subject_object=self.invert_subject_object,
map_identifiers=self.map_identifiers, **kwargs)
else:
payload['associations'] = self.translate_docs(results.docs, field_mapping=self.field_mapping, map_identifiers=self.map_identifiers, **kwargs)
if 'facet_pivot' in fcs:
payload['facet_pivot'] = fcs['facet_pivot']
if 'facets' in results.raw_response:
payload['facets'] = results.raw_response['facets']
# For solr, we implement this by finding all facets
# TODO: no need to do 2nd query, see https://wiki.apache.org/solr/SimpleFacetParameters#Parameters
fetch_objects=self.fetch_objects
if fetch_objects:
core_object_field = M.OBJECT
if self.slim is not None and len(self.slim)>0:
core_object_field = M.OBJECT_CLOSURE
object_field = map_field(core_object_field, self.field_mapping)
if self.invert_subject_object:
object_field = map_field(M.SUBJECT, self.field_mapping)
oq_params = params.copy()
oq_params['fl'] = []
oq_params['facet.field'] = [object_field]
oq_params['facet.limit'] = -1
oq_params['rows'] = 0
oq_params['facet.mincount'] = 1
oq_results = self.solr.search(**oq_params)
if self.facet:
ff = oq_results.facets['facet_fields']
ofl = ff.get(object_field)
# solr returns facets counts as list, every 2nd element is number, we don't need the numbers here
payload['objects'] = ofl[0::2]
fetch_subjects=self.fetch_subjects
if fetch_subjects:
core_subject_field = M.SUBJECT
if self.slim is not None and len(self.slim)>0:
core_subject_field = M.SUBJECT_CLOSURE
subject_field = map_field(core_subject_field, self.field_mapping)
if self.invert_subject_object:
subject_field = map_field(M.SUBJECT, self.field_mapping)
oq_params = params.copy()
oq_params['fl'] = []
oq_params['facet.field'] = [subject_field]
oq_params['facet.limit'] = self.max_rows
oq_params['rows'] = 0
oq_params['facet.mincount'] = 1
oq_results = self.solr.search(**oq_params)
if self.facet:
ff = oq_results.facets['facet_fields']
ofl = ff.get(subject_field)
# solr returns facets counts as list, every 2nd element is number, we don't need the numbers here
payload['subjects'] = ofl[0::2]
if len(payload['subjects']) == self.max_rows:
payload['is_truncated'] = True
if self.slim is not None and len(self.slim)>0:
if 'objects' in payload:
payload['objects'] = [x for x in payload['objects'] if x in self.slim]
if 'associations' in payload:
for a in payload['associations']:
a['slim'] = [x for x in a['object_closure'] if x in self.slim]
del a['object_closure']
return payload
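Solr returns `facet_fields` as a flat interleaved list `[term1, count1, term2, count2, ...]`; the `ofl[0::2]` slices above keep only the terms. A standalone illustration with a hypothetical facet payload:

```python
# Interleaved facet list as returned under facets['facet_fields'][field]
facet_list = ["GO:0008150", 42, "GO:0003674", 17, "GO:0005575", 5]
terms = facet_list[0::2]    # every other element starting at 0 -> the terms
counts = facet_list[1::2]   # the interleaved counts
facet_counts = dict(zip(terms, counts))
```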
def infer_category(self, id):
"""
heuristic to infer a category from an id, e.g. DOID:nnn --> disease
"""
logger.info("Attempting category inference on id={}".format(id))
toks = id.split(":")
idspace = toks[0]
c = None
if idspace == 'DOID':
c='disease'
if c is not None:
logger.info("Inferred category: {} based on id={}".format(c, id))
return c
def make_canonical_identifier(self,id):
"""
E.g. MGI:MGI:nnnn --> MGI:nnnn
"""
if id is not None:
for (k,v) in PREFIX_NORMALIZATION_MAP.items():
s = k+':'
if id.startswith(s):
return id.replace(s,v+':')
return id
def make_gostyle_identifier(self,id):
"""
E.g. MGI:nnnn --> MGI:MGI:nnnn
"""
if id is not None:
for (k,v) in PREFIX_NORMALIZATION_MAP.items():
s = v+':'
if id.startswith(s):
return id.replace(s,k+':')
return id
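A self-contained restatement of the two normalizations above, assuming a `PREFIX_NORMALIZATION_MAP` of the form `{"MGI:MGI": "MGI"}` (the real map is defined elsewhere in this module):

```python
PREFIX_MAP = {"MGI:MGI": "MGI"}  # hypothetical subset of PREFIX_NORMALIZATION_MAP

def make_canonical(id):
    # MGI:MGI:nnnn -> MGI:nnnn (the canonical CURIE form)
    for k, v in PREFIX_MAP.items():
        if id.startswith(k + ":"):
            return id.replace(k + ":", v + ":", 1)
    return id

def make_gostyle(id):
    # MGI:nnnn -> MGI:MGI:nnnn (the double-prefixed form used by the GO golr schema)
    for k, v in PREFIX_MAP.items():
        if id.startswith(v + ":"):
            return id.replace(v + ":", k + ":", 1)
    return id
```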
def translate_objs(self, d, fname, default=None):
"""
Translate a field whose value is expected to be a list
"""
if fname not in d:
# TODO: consider adding arg for failure on null
return default
#lf = M.label_field(fname)
v = d[fname]
if not isinstance(v,list):
v = [v]
objs = [{'id': idval} for idval in v]
# todo - labels
return objs
def translate_obj(self,d,fname):
"""
Translate a field value from a solr document.
This includes special logic for when the field value
denotes an object, here we nest it
"""
if fname not in d:
# TODO: consider adding arg for failure on null
return None
lf = M.label_field(fname)
id = d[fname]
id = self.make_canonical_identifier(id)
#if id.startswith('MGI:MGI:'):
# id = id.replace('MGI:MGI:','MGI:')
obj = {'id': id}
if id:
if self._use_amigo_schema(self.object_category):
iri = expand_uri(id)
else:
iri = expand_uri(id, [get_curie_map('{}/cypher/curies'.format(self.config.scigraph_data.url))])
obj['iri'] = iri
if lf in d:
obj['label'] = d[lf]
cf = fname + "_category"
if cf in d:
obj['category'] = [d[cf]]
if 'aspect' in d and id.startswith('GO:'):
obj['category'] = [ASPECT_MAP[d['aspect']]]
del d['aspect']
return obj
def map_doc(self, d, field_mapping, invert_subject_object=False):
if field_mapping is not None:
for (k,v) in field_mapping.items():
if v is not None and k is not None:
#logger.debug("TESTING FOR:"+v+" IN "+str(d))
if v in d:
#logger.debug("Setting field {} to {} // was in {}".format(k,d[v],v))
d[k] = d[v]
if invert_subject_object:
for field in INVERT_FIELDS_MAP:
flip(d, field, INVERT_FIELDS_MAP[field])
return d
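`INVERT_FIELDS_MAP` and `flip` are defined elsewhere in this module; the inversion above presumably swaps paired subject/object fields in a document in place, roughly as follows (a sketch under that assumption):

```python
def flip(d, field, inverse_field):
    # Swap the values of a paired field, e.g. 'subject' <-> 'object'
    d[field], d[inverse_field] = d.get(inverse_field), d.get(field)

doc = {"subject": "MGI:97490", "object": "MP:0000266"}
flip(doc, "subject", "object")
```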
def translate_doc(self, d, field_mapping=None, map_identifiers=None, **kwargs):
"""
Translate a solr document (i.e. a single result row)
"""
if field_mapping is not None:
self.map_doc(d, field_mapping)
subject = self.translate_obj(d, M.SUBJECT)
obj = self.translate_obj(d, M.OBJECT)
# TODO: use a more robust method; we need equivalence as separate field in solr
if map_identifiers is not None:
if M.SUBJECT_CLOSURE in d:
subject['id'] = self.map_id(subject, map_identifiers, d[M.SUBJECT_CLOSURE])
else:
logger.info("NO SUBJECT CLOSURE IN: "+str(d))
if M.SUBJECT_TAXON in d:
subject['taxon'] = self.translate_obj(d,M.SUBJECT_TAXON)
if M.OBJECT_TAXON in d:
obj['taxon'] = self.translate_obj(d, M.OBJECT_TAXON)
qualifiers = []
if M.RELATION in d and isinstance(d[M.RELATION],list):
# GO overloads qualifiers and relation
relation = None
for rel in d[M.RELATION]:
if rel.lower() == 'not':
qualifiers.append(rel)
else:
relation = rel
if relation is not None:
d[M.RELATION] = relation
else:
d[M.RELATION] = None
negated = 'not' in qualifiers
assoc = {'id':d.get(M.ID),
'subject': subject,
'object': obj,
'negated': negated,
'relation': self.translate_obj(d,M.RELATION),
'publications': self.translate_objs(d, M.SOURCE, []), # note 'source' is used in the golr schema
}
if self.invert_subject_object and assoc['relation'] is not None:
assoc['relation']['inverse'] = True
if len(qualifiers) > 0:
assoc['qualifiers'] = qualifiers
evidence_types = []
if M.EVIDENCE in d:
evidence_label_map = json.loads(d[M.EVIDENCE_CLOSURE_MAP])
if self._use_amigo_schema(self.object_category):
evidence_codes = [d[M.EVIDENCE]]
else:
evidence_codes = d[M.EVIDENCE]
for evidence_code in evidence_codes:
evidence_label = None
if evidence_code in evidence_label_map:
evidence_label = evidence_label_map[evidence_code]
evidence_types.append({
'id': evidence_code,
'label': evidence_label
})
assoc['evidence_types'] = evidence_types
if M.OBJECT_CLOSURE in d:
assoc['object_closure'] = d.get(M.OBJECT_CLOSURE)
if M.IS_DEFINED_BY in d:
if isinstance(d[M.IS_DEFINED_BY],list):
assoc['provided_by'] = d[M.IS_DEFINED_BY]
else:
# hack for GO Golr instance
assoc['provided_by'] = [d[M.IS_DEFINED_BY]]
# solr does not allow nested objects, so evidence graph is json-encoded
if M.EVIDENCE_GRAPH in d:
assoc[M.EVIDENCE_GRAPH] = json.loads(d[M.EVIDENCE_GRAPH])
if M.FREQUENCY in d:
assoc[M.FREQUENCY] = {
'id': d[M.FREQUENCY]
}
if M.FREQUENCY_LABEL in d:
assoc[M.FREQUENCY]['label'] = d[M.FREQUENCY_LABEL]
if M.ONSET in d:
assoc[M.ONSET] = {
'id': d[M.ONSET]
}
if M.ONSET_LABEL in d:
assoc[M.ONSET]['label'] = d[M.ONSET_LABEL]
if M.ASSOCIATION_TYPE in d:
assoc['type'] = d[M.ASSOCIATION_TYPE]
if self._use_amigo_schema(self.object_category):
for f in M.AMIGO_SPECIFIC_FIELDS:
if f in d:
assoc[f] = d[f]
return assoc
def translate_docs(self, ds, **kwargs):
"""
Translate a set of solr results
"""
for d in ds:
self.map_doc(d, {}, self.invert_subject_object)
return [self.translate_doc(d, **kwargs) for d in ds]
def translate_docs_compact(self, ds, field_mapping=None, slim=None, map_identifiers=None, invert_subject_object=False, **kwargs):
"""
Translate golr association documents to a compact representation
"""
amap = {}
logger.info("Translating docs to compact form. Slim={}".format(slim))
for d in ds:
self.map_doc(d, field_mapping, invert_subject_object=invert_subject_object)
subject = d[M.SUBJECT]
subject_label = d[M.SUBJECT_LABEL]
# TODO: use a more robust method; we need equivalence as separate field in solr
if map_identifiers is not None:
if M.SUBJECT_CLOSURE in d:
subject = self.map_id(subject, map_identifiers, d[M.SUBJECT_CLOSURE])
else:
logger.debug("NO SUBJECT CLOSURE IN: "+str(d))
rel = d.get(M.RELATION)
skip = False
# TODO
if rel == 'not' or rel == 'NOT':
skip = True
# this is a list in GO
if isinstance(rel,list):
if 'not' in rel or 'NOT' in rel:
skip = True
if len(rel) > 1:
logger.warning(">1 relation: {}".format(rel))
rel = ";".join(rel)
if skip:
logger.debug("Skipping: {}".format(d))
continue
subject = self.make_canonical_identifier(subject)
#if subject.startswith('MGI:MGI:'):
# subject = subject.replace('MGI:MGI:','MGI:')
k = (subject,rel)
if k not in amap:
amap[k] = {'subject':subject,
'subject_label':subject_label,
'relation':rel,
'objects': []}
if slim is not None and len(slim)>0:
mapped_objects = [x for x in d[M.OBJECT_CLOSURE] if x in slim]
logger.debug("Mapped objects: {}".format(mapped_objects))
amap[k]['objects'] += mapped_objects
else:
amap[k]['objects'].append(d[M.OBJECT])
for k in amap.keys():
amap[k]['objects'] = list(set(amap[k]['objects']))
return list(amap.values())
def map_id(self,id, prefix, closure_list):
"""
Map identifiers based on an equivalence closure list.
"""
prefixc = prefix + ':'
ids = [eid for eid in closure_list if eid.startswith(prefixc)]
# TODO: add option to fail if no mapping, or if >1 mapping
if len(ids) == 0:
# default to input
return id
return ids[0]
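A worked illustration of the closure-based mapping above, restated standalone with a hypothetical equivalence closure list:

```python
def map_id(id, prefix, closure_list):
    prefixc = prefix + ':'
    ids = [eid for eid in closure_list if eid.startswith(prefixc)]
    return ids[0] if ids else id  # default to the input when no mapping exists

# Hypothetical subject_closure for a gene, mixing equivalent identifiers
closure = ["NCBIGene:22059", "ENSEMBL:ENSMUSG00000059552", "MGI:98834"]
mapped = map_id("MGI:98834", "NCBIGene", closure)
```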
### This may be temporary code, but it is a lot simpler than the above for building customizable Solr queries
import requests
from enum import Enum
## Should take those URLs from config.yaml
class ESOLR(Enum):
GOLR = "http://golr-aux.geneontology.io/solr/"
    MOLR = "https://solr.monarchinitiative.org/solr/search/"
class ESOLRDoc(Enum):
ONTOLOGY = "ontology_class"
ANNOTATION = "annotation"
BIOENTITY = "bioentity"
## Respect the method name for run_sparql_on with enums
def run_solr_on(solrInstance, category, id, fields):
"""
Return the result of a solr query on the given solrInstance (Enum ESOLR), for a certain document_category (ESOLRDoc) and id
"""
query = solrInstance.value + "select?q=*:*&fq=document_category:\"" + category.value + "\"&fq=id:\"" + id + "\"&fl=" + fields + "&wt=json&indent=on"
response = requests.get(query)
return response.json()['response']['docs'][0]
def run_solr_text_on(solrInstance, category, q, qf, fields, optionals):
    """
    Return the result of a solr text query on the given solrInstance (Enum ESOLR), for a certain document_category (ESOLRDoc)
    """
    if optionals is None:
        optionals = ""
query = solrInstance.value + "select?q=" + q + "&qf=" + qf + "&fq=document_category:\"" + category.value + "\"&fl=" + fields + "&wt=json&indent=on" + optionals
# print("QUERY: ", query)
response = requests.get(query)
return response.json()['response']['docs']
### These utility functions should find their place in a common utils.py if one exists
## Utility function to merge two field of a json
def merge(data, firstField, secondField):
    """
    Merge two fields of a JSON document into an array of { firstField : secondField }
    """
    merged = []
    for i in range(len(data[firstField])):
        merged.append({data[firstField][i]: data[secondField][i]})
    return merged
## Utility function to filter out two fields of a json and give it each a new label
def mergeWithLabels(data, firstField, firstFieldLabel, secondField, secondFieldLabel):
    """
    Merge two fields of a JSON document into an array of { firstFieldLabel : firstField, secondFieldLabel : secondField }
    """
    merged = []
    for i in range(len(data[firstField])):
        merged.append({firstFieldLabel: data[firstField][i],
                       secondFieldLabel: data[secondField][i]})
    return merged
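Usage of the two helpers above on a hypothetical Solr document (the helpers are restated here so the example is self-contained):

```python
def merge_fields(doc, first, second):
    # Pair up two parallel array fields: [{first[i]: second[i]}, ...]
    return [{doc[first][i]: doc[second][i]} for i in range(len(doc[first]))]

def merge_with_labels(doc, first, first_label, second, second_label):
    # Same pairing, but under fixed output keys
    return [{first_label: doc[first][i], second_label: doc[second][i]}
            for i in range(len(doc[first]))]

doc = {"synonym": ["beta-gal"], "synonym_type": ["exact"]}  # hypothetical doc
pairs = merge_fields(doc, "synonym", "synonym_type")
labeled = merge_with_labels(doc, "synonym", "name", "synonym_type", "kind")
```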
## Utility function to replace in a specific <field> an <old> string by a <new> string
def replace(data, field, old, new):
    for i in range(len(data)):
        if data[i][field]:
            data[i][field] = data[i][field].replace(old, new)
    return data
| bsd-3-clause | c092aee93625f6104c3356d5de4e7c0d | 34.907572 | 163 | 0.557393 | 3.906545 | false | false | false | false |
biolink/ontobio | ontobio/sparql/skos.py | 1 | 4073 | import logging
import requests
import rdflib
from rdflib import Namespace
from rdflib.namespace import RDF
from rdflib.namespace import SKOS
from prefixcommons.curie_util import contract_uri
from ontobio.ontol import Ontology, Synonym, TextDefinition
# TODO: make configurable
GEMET = Namespace('http://www.eionet.europa.eu/gemet/2004/06/gemet-schema.rdf#')
logger = logging.getLogger(__name__)
class Skos(object):
"""
SKOS is an RDF data model for representing thesauri and terminologies.
See https://www.w3.org/TR/skos-primer/ for more details
"""
def __init__(self, prefixmap=None, lang='en'):
self.prefixmap = prefixmap if prefixmap is not None else {}
self.lang = lang
self.context = None
def _uri2id(self, uri):
        s = str(uri)
for prefix,uribase in self.prefixmap.items():
if (s.startswith(uribase)):
s = s.replace(uribase,prefix+":")
return s
curies = contract_uri(uri)
if len(curies) > 0:
return curies[0]
return s
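The prefix-map branch of `_uri2id` above, restated standalone (the `contract_uri` fallback from prefixcommons is omitted; the GEMET mapping below is hypothetical):

```python
def uri2id(uri, prefixmap):
    # Contract a full URI to a CURIE using an explicit {prefix: uribase} map
    s = str(uri)
    for prefix, uribase in prefixmap.items():
        if s.startswith(uribase):
            return s.replace(uribase, prefix + ":", 1)
    return s  # no known prefix: fall through with the URI unchanged

pm = {"GEMET": "http://www.eionet.europa.eu/gemet/concept/"}
```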
def process_file(self,filename=None, format=None):
"""
Parse a file into an ontology object, using rdflib
"""
rdfgraph = rdflib.Graph()
if format is None:
if filename.endswith(".ttl"):
format='turtle'
elif filename.endswith(".rdf"):
format='xml'
rdfgraph.parse(filename, format=format)
return self.process_rdfgraph(rdfgraph)
def process_rdfgraph(self, rg, ont=None):
"""
Transform a skos terminology expressed in an rdf graph into an Ontology object
Arguments
---------
rg: rdflib.Graph
graph object
Returns
-------
Ontology
"""
# TODO: ontology metadata
if ont is None:
ont = Ontology()
subjs = list(rg.subjects(RDF.type, SKOS.ConceptScheme))
if len(subjs) == 0:
logger.warning("No ConceptScheme")
else:
ont.id = self._uri2id(subjs[0])
subset_map = {}
for concept in rg.subjects(RDF.type, SKOS.Concept):
for s in self._get_schemes(rg, concept):
subset_map[self._uri2id(s)] = s
for concept in sorted(list(rg.subjects(RDF.type, SKOS.Concept))):
concept_uri = str(concept)
id=self._uri2id(concept)
logger.info("ADDING: {}".format(id))
ont.add_node(id, self._get_label(rg,concept))
for defn in rg.objects(concept, SKOS.definition):
if (defn.language == self.lang):
td = TextDefinition(id, escape_value(defn.value))
ont.add_text_definition(td)
for s in rg.objects(concept, SKOS.broader):
ont.add_parent(id, self._uri2id(s))
for s in rg.objects(concept, SKOS.related):
ont.add_parent(id, self._uri2id(s), self._uri2id(SKOS.related))
for m in rg.objects(concept, SKOS.exactMatch):
ont.add_xref(id, self._uri2id(m))
for m in rg.objects(concept, SKOS.altLabel):
syn = Synonym(id, val=self._uri2id(m))
ont.add_synonym(syn)
for s in self._get_schemes(rg,concept):
ont.add_to_subset(id, self._uri2id(s))
return ont
def _get_schemes(self, rg, concept):
schemes = set(rg.objects(concept, SKOS.inScheme))
schemes.update(rg.objects(concept, GEMET.group))
return schemes
def _get_label(self, rg,concept):
labels = sorted(rg.preferredLabel(concept, lang=self.lang))
if len(labels) == 0:
return None
if len(labels) > 1:
logger.warning(">1 label for {} : {}".format(concept, labels))
return labels[0][1].value
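The prefix-map contraction performed by `_uri2id` above can be sketched with the stdlib alone. The prefix map below is illustrative, and the `contract_uri` fallback used for unmapped URIs is omitted:

```python
# Minimal stdlib-only sketch of _uri2id's prefix contraction; the
# prefixmap content is an assumption for illustration only.
def uri_to_curie(uri, prefixmap):
    s = str(uri)
    for prefix, uribase in prefixmap.items():
        if s.startswith(uribase):
            return s.replace(uribase, prefix + ":")
    return s

prefixmap = {"skos": "http://www.w3.org/2004/02/skos/core#"}
print(uri_to_curie("http://www.w3.org/2004/02/skos/core#related", prefixmap))
```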
# === biolink/ontobio: ontobio/util/go_utils.py (BSD-3-Clause) ===
from ontobio.ontol_factory import OntologyFactory
class GoAspector:
def __init__(self, go_ontology):
if go_ontology:
self.ontology = go_ontology
else:
self.ontology = OntologyFactory().create("go")
def get_ancestors_through_subont(self, go_term, relations):
"""
Returns the ancestors from the relation filtered GO subontology of go_term's ancestors.
subontology() primarily used here for speed when specifying relations to traverse. Point of this is to first get
a smaller graph (all ancestors of go_term regardless of relation) and then filter relations on that instead of
the whole GO.
"""
all_ancestors = self.ontology.ancestors(go_term, reflexive=True)
subont = self.ontology.subontology(all_ancestors)
return subont.ancestors(go_term, relations)
def get_isa_partof_closure(self, go_term):
return self.get_ancestors_through_subont(go_term, relations=["subClassOf", "BFO:0000050"])
def get_isa_closure(self, go_term):
return self.get_ancestors_through_subont(go_term, relations=["subClassOf"])
def is_biological_process(self, go_term):
"""
        Returns True if go_term has the biological process root GO:0008150 in its is_a closure
"""
bp_root = "GO:0008150"
if go_term == bp_root:
return True
ancestors = self.get_isa_closure(go_term)
if bp_root in ancestors:
return True
else:
return False
def is_molecular_function(self, go_term):
"""
        Returns True if go_term has the molecular function root GO:0003674 in its is_a closure
"""
mf_root = "GO:0003674"
if go_term == mf_root:
return True
ancestors = self.get_isa_closure(go_term)
if mf_root in ancestors:
return True
else:
return False
def is_cellular_component(self, go_term):
"""
        Returns True if go_term has the cellular component root GO:0005575 in its is_a closure
"""
cc_root = "GO:0005575"
if go_term == cc_root:
return True
ancestors = self.get_isa_closure(go_term)
if cc_root in ancestors:
return True
else:
return False
def go_aspect(self, go_term):
"""
For GO terms, returns F, C, or P corresponding to its aspect
"""
if not go_term.startswith("GO:"):
return None
else:
# Check ancestors for root terms
if self.is_molecular_function(go_term):
return 'F'
elif self.is_cellular_component(go_term):
return 'C'
elif self.is_biological_process(go_term):
return 'P'
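The transitive is_a walk that `GoAspector` delegates to ontobio's `ancestors()`/`subontology()` calls amounts to a reachability search over parent links. A stdlib-only sketch, using a toy parent map rather than real GO data:

```python
# Illustrative is_a closure over a toy parent map (not real GO content);
# mirrors the ancestor traversal GoAspector relies on.
def isa_closure(term, parents):
    seen, stack = set(), [term]
    while stack:
        node = stack.pop()
        for parent in parents.get(node, []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

toy = {"GO:0006915": ["GO:0012501"], "GO:0012501": ["GO:0008150"]}
print(sorted(isa_closure("GO:0006915", toy)))
```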
# === shadow-robot/sr_common: sr_description/test/test_compatibility.py (BSD-3-Clause) ===
#!/usr/bin/env python
# -*- coding: utf-8 -*-
######################################################################
# Software License Agreement (BSD License)
#
# Copyright (c) 2021, Bielefeld University
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of Bielefeld University nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
######################################################################
# Author: Robert Haschke <rhaschke@techfak.uni-bielefeld.de>
from __future__ import absolute_import, print_function
import ast
import re
import os
import unittest
import xml.dom
from xml.dom.minidom import parseString
import xacro
# regex to match whitespace
whitespace = re.compile(r'\s+')
def text_values_match(arg1, arg2):
# generic comparison
if whitespace.sub(' ', arg1).strip() == whitespace.sub(' ', arg2).strip():
return True
try: # special handling of dicts: ignore order
a_dict = ast.literal_eval(arg1)
        b_dict = ast.literal_eval(arg2)
if (isinstance(a_dict, dict) and isinstance(b_dict, dict) and a_dict == b_dict):
return True
except Exception: # Attribute values aren't dicts
pass
# on failure, try to split a and b at whitespace and compare snippets
def match_splits(arg1, arg2):
if len(arg1) != len(arg2):
return False
el1, el2 = 0, 0
for el1, el2 in zip(arg1, arg2):
if el1 == el2:
continue
try: # compare numeric values only up to some accuracy
if abs(float(el1) - float(el2)) > 1.0e-9:
return False
except ValueError: # values aren't numeric and not identical
return False
return True
return match_splits(arg1.split(), arg2.split())
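The numeric tolerance inside `match_splits` above treats two tokens as equal when they are either identical strings or parse to floats within 1e-9 of each other. A quick stdlib check of that idea:

```python
# Tolerant token comparison, mirroring the float check in match_splits;
# the sample values below are illustrative.
def tokens_equal(a, b, tol=1.0e-9):
    if a == b:
        return True
    try:
        return abs(float(a) - float(b)) <= tol
    except ValueError:
        return False

print(tokens_equal("0.30000000000000004", "0.3"))
```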
def all_attributes_match(arg1, arg2):
if len(arg1.attributes) != len(arg2.attributes):
raise AssertionError('Different number of attributes: [{}] != [{}]'.
format(', '.join(sorted(arg1.attributes.keys())),
', '.join(sorted(arg2.attributes.keys()))))
a_atts = arg1.attributes.items()
b_atts = arg2.attributes.items()
a_atts.sort()
b_atts.sort()
el1, el2 = 0, 0
for el1, el2 in zip(a_atts, b_atts):
if el1[0] != el2[0]:
raise AssertionError('Different attribute names: %s and %s' % (el1[0], el2[0]))
if not text_values_match(el1[1], el2[1]):
raise AssertionError('Different attribute values: {}={} and {}={}'.
format(el1[0], el1[1], el2[0], el2[1]))
return True
def text_matches(arg1, arg2):
if text_values_match(arg1, arg2):
return True
raise AssertionError("Different text values: '%s' and '%s'" % (arg1, arg2))
def nodes_match(arg1, arg2, ignore_nodes):
if not arg1 and not arg2:
return True
if not arg1 or not arg2:
return False
if arg1.nodeType != arg2.nodeType:
raise AssertionError('Different node types: %s and %s' % (arg1, arg2))
# compare text-valued nodes
if arg1.nodeType in [xml.dom.Node.TEXT_NODE,
xml.dom.Node.CDATA_SECTION_NODE,
xml.dom.Node.COMMENT_NODE]:
return text_matches(arg1.data, arg2.data)
# ignore all other nodes except ELEMENTs
if arg1.nodeType != xml.dom.Node.ELEMENT_NODE:
return True
# compare ELEMENT nodes
if arg1.nodeName != arg2.nodeName:
raise AssertionError('Different element names: %s and %s' % (arg1.nodeName, arg2.nodeName))
try:
all_attributes_match(arg1, arg2)
except AssertionError as error:
raise AssertionError('{err} in node <{node}>'.format(err=str(error), node=arg1.nodeName)) from error
arg1 = arg1.firstChild
arg2 = arg2.firstChild
while arg1 or arg2:
# ignore whitespace-only text nodes
# we could have several text nodes in a row, due to replacements
while (arg1 and
((arg1.nodeType in ignore_nodes) or
(arg1.nodeType == xml.dom.Node.TEXT_NODE and whitespace.sub('', arg1.data) == ""))):
arg1 = arg1.nextSibling
while (arg2 and
((arg2.nodeType in ignore_nodes) or
(arg2.nodeType == xml.dom.Node.TEXT_NODE and whitespace.sub('', arg2.data) == ""))):
arg2 = arg2.nextSibling
nodes_match(arg1, arg2, ignore_nodes)
if arg1:
arg1 = arg1.nextSibling
if arg2:
arg2 = arg2.nextSibling
return True
def xml_matches(arg1, arg2, ignore_nodes=None):
if ignore_nodes is None:
ignore_nodes = []
if isinstance(arg1, str):
return xml_matches(parseString(arg1).documentElement, arg2, ignore_nodes)
if isinstance(arg2, str):
return xml_matches(arg1, parseString(arg2).documentElement, ignore_nodes)
if arg1.nodeType == xml.dom.Node.DOCUMENT_NODE:
return xml_matches(arg1.documentElement, arg2, ignore_nodes)
if arg2.nodeType == xml.dom.Node.DOCUMENT_NODE:
return xml_matches(arg1, arg2.documentElement, ignore_nodes)
return nodes_match(arg1, arg2, ignore_nodes)
class TestEquality(unittest.TestCase):
def generate_test_params(self): # pylint: disable=R0201
path = os.path.dirname(__file__)
old_path = os.path.join(path, 'robots.old')
new_path = os.path.join(path, 'robots.new')
for name in os.listdir(old_path):
old_file = os.path.join(old_path, name)
new_file = os.path.join(new_path, name)
if name.endswith('.urdf.xacro') and os.path.isfile(old_file) and os.path.isfile(new_file):
yield name, old_file, new_file
def save_results(self, name, doc): # pylint: disable=R0201
with open(name, 'w', encoding="utf-8") as file:
file.write(doc.toprettyxml(indent=' '))
def test_files(self):
def process(filename):
return xacro.process_file(filename)
results_dir = None
for name, old_file, new_file in self.generate_test_params():
            with self.subTest(msg='Checking {}'.format(name)):
try:
old_doc = process(old_file)
new_doc = process(new_file)
xml_matches(old_doc, new_doc, ignore_nodes=[xml.dom.Node.COMMENT_NODE])
except AssertionError:
if results_dir is None:
import tempfile
results_dir = tempfile.mkdtemp(prefix='sr_compat')
print('Saving mismatching URDFs to:', results_dir)
for suffix, doc in zip(['.old', '.new'], [old_doc, new_doc]):
self.save_results(os.path.join(results_dir, name + suffix), doc)
raise
except Exception as error:
msg = str(error) or repr(error)
xacro.error(msg)
xacro.print_location()
raise
if __name__ == '__main__':
unittest.main()
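The attribute-order-insensitive comparison that `all_attributes_match` implements works because `xml.dom.minidom` exposes attributes as name/value pairs that can be sorted before comparing. A compact stdlib demonstration:

```python
# minidom attribute pairs sorted for order-insensitive comparison;
# the XML snippets below are illustrative only.
from xml.dom.minidom import parseString

def attrs(xml_text):
    root = parseString(xml_text).documentElement
    return sorted(root.attributes.items())

print(attrs('<link a="1" b="2"/>') == attrs('<link b="2" a="1"/>'))
```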
# === encode/uvicorn: tools/cli_usage.py (BSD-3-Clause) ===
"""
Look for a marker comment in docs pages, and place the output of
`$ uvicorn --help` there. Pass `--check` to ensure the content is in sync.
"""
import argparse
import subprocess
import sys
import typing
from pathlib import Path
def _get_usage_lines() -> typing.List[str]:
res = subprocess.run(["uvicorn", "--help"], stdout=subprocess.PIPE)
help_text = res.stdout.decode("utf-8")
return ["```", "$ uvicorn --help", *help_text.splitlines(), "```"]
def _find_next_codefence_lineno(lines: typing.List[str], after: int) -> int:
return next(
lineno for lineno, line in enumerate(lines[after:], after) if line == "```"
)
def _get_insert_location(lines: typing.List[str]) -> typing.Tuple[int, int]:
marker = lines.index("<!-- :cli_usage: -->")
start = marker + 1
if lines[start] == "```":
# Already generated.
# <!-- :cli_usage: -->
# ``` <- start
# [...]
# ``` <- end
next_codefence = _find_next_codefence_lineno(lines, after=start + 1)
end = next_codefence + 1
else:
# Not generated yet.
end = start
return start, end
def _generate_cli_usage(path: Path, check: bool = False) -> int:
content = path.read_text()
lines = content.splitlines()
usage_lines = _get_usage_lines()
start, end = _get_insert_location(lines)
lines = lines[:start] + usage_lines + lines[end:]
output = "\n".join(lines) + "\n"
if check:
if content == output:
return 0
print(f"ERROR: CLI usage in {path} is out of sync. Run scripts/lint to fix.")
return 1
path.write_text(output)
return 0
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--check", action="store_true")
args = parser.parse_args()
paths = [Path("docs", "index.md"), Path("docs", "deployment.md")]
rv = 0
for path in paths:
rv |= _generate_cli_usage(path, check=args.check)
sys.exit(rv)
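The marker-based splice performed by `_get_insert_location` and `_generate_cli_usage` above replaces the fenced block following the marker wholesale, creating it if absent. A stdlib-only sketch of that splice (document content is illustrative):

```python
# Marker splice sketch: swap the fenced block right after the marker.
MARKER = "<!-- :cli_usage: -->"
FENCE = "`" * 3  # spelled out to avoid a literal fence in this example

def splice(lines, new_block):
    start = lines.index(MARKER) + 1
    end = start
    if start < len(lines) and lines[start] == FENCE:
        end = lines.index(FENCE, start + 1) + 1  # past the closing fence
    return lines[:start] + new_block + lines[end:]

doc = ["# CLI", MARKER, FENCE, "$ old", FENCE, "tail"]
print(splice(doc, [FENCE, "$ uvicorn --help", FENCE]))
```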
# === encode/uvicorn: uvicorn/protocols/websockets/websockets_impl.py (BSD-3-Clause) ===
import asyncio
import http
import logging
import sys
from typing import TYPE_CHECKING, Any, List, Optional, Sequence, Tuple, Union, cast
from urllib.parse import unquote
import websockets
from websockets.datastructures import Headers
from websockets.exceptions import ConnectionClosed
from websockets.extensions.permessage_deflate import ServerPerMessageDeflateFactory
from websockets.legacy.server import HTTPResponse
from websockets.server import WebSocketServerProtocol
from websockets.typing import Subprotocol
from uvicorn.config import Config
from uvicorn.logging import TRACE_LOG_LEVEL
from uvicorn.protocols.utils import (
get_local_addr,
get_path_with_query_string,
get_remote_addr,
is_ssl,
)
from uvicorn.server import ServerState
if sys.version_info < (3, 8): # pragma: py-gte-38
from typing_extensions import Literal
else: # pragma: py-lt-38
from typing import Literal
if TYPE_CHECKING:
from asgiref.typing import (
ASGISendEvent,
WebSocketAcceptEvent,
WebSocketCloseEvent,
WebSocketConnectEvent,
WebSocketDisconnectEvent,
WebSocketReceiveEvent,
WebSocketScope,
WebSocketSendEvent,
)
class Server:
closing = False
def register(self, ws: WebSocketServerProtocol) -> None:
pass
def unregister(self, ws: WebSocketServerProtocol) -> None:
pass
def is_serving(self) -> bool:
return not self.closing
class WebSocketProtocol(WebSocketServerProtocol):
extra_headers: List[Tuple[str, str]]
def __init__(
self,
config: Config,
server_state: ServerState,
_loop: Optional[asyncio.AbstractEventLoop] = None,
):
if not config.loaded:
config.load()
self.config = config
self.app = config.loaded_app
self.loop = _loop or asyncio.get_event_loop()
self.root_path = config.root_path
# Shared server state
self.connections = server_state.connections
self.tasks = server_state.tasks
# Connection state
self.transport: asyncio.Transport = None # type: ignore[assignment]
self.server: Optional[Tuple[str, int]] = None
self.client: Optional[Tuple[str, int]] = None
self.scheme: Literal["wss", "ws"] = None # type: ignore[assignment]
# Connection events
self.scope: WebSocketScope = None # type: ignore[assignment]
self.handshake_started_event = asyncio.Event()
self.handshake_completed_event = asyncio.Event()
self.closed_event = asyncio.Event()
self.initial_response: Optional[HTTPResponse] = None
self.connect_sent = False
self.lost_connection_before_handshake = False
self.accepted_subprotocol: Optional[Subprotocol] = None
self.transfer_data_task: asyncio.Task = None # type: ignore[assignment]
self.ws_server: Server = Server() # type: ignore[assignment]
extensions = []
if self.config.ws_per_message_deflate:
extensions.append(ServerPerMessageDeflateFactory())
super().__init__(
ws_handler=self.ws_handler,
ws_server=self.ws_server, # type: ignore[arg-type]
max_size=self.config.ws_max_size,
ping_interval=self.config.ws_ping_interval,
ping_timeout=self.config.ws_ping_timeout,
extensions=extensions,
logger=logging.getLogger("uvicorn.error"),
)
self.server_header = None
self.extra_headers = [
(name.decode("latin-1"), value.decode("latin-1"))
for name, value in server_state.default_headers
]
def connection_made( # type: ignore[override]
self, transport: asyncio.Transport
) -> None:
self.connections.add(self)
self.transport = transport
self.server = get_local_addr(transport)
self.client = get_remote_addr(transport)
self.scheme = "wss" if is_ssl(transport) else "ws"
if self.logger.isEnabledFor(TRACE_LOG_LEVEL):
prefix = "%s:%d - " % self.client if self.client else ""
self.logger.log(TRACE_LOG_LEVEL, "%sWebSocket connection made", prefix)
super().connection_made(transport)
def connection_lost(self, exc: Optional[Exception]) -> None:
self.connections.remove(self)
if self.logger.isEnabledFor(TRACE_LOG_LEVEL):
prefix = "%s:%d - " % self.client if self.client else ""
self.logger.log(TRACE_LOG_LEVEL, "%sWebSocket connection lost", prefix)
self.lost_connection_before_handshake = (
not self.handshake_completed_event.is_set()
)
self.handshake_completed_event.set()
super().connection_lost(exc)
if exc is None:
self.transport.close()
def shutdown(self) -> None:
self.ws_server.closing = True
self.transport.close()
def on_task_complete(self, task: asyncio.Task) -> None:
self.tasks.discard(task)
async def process_request(
self, path: str, headers: Headers
) -> Optional[HTTPResponse]:
"""
This hook is called to determine if the websocket should return
an HTTP response and close.
Our behavior here is to start the ASGI application, and then wait
for either `accept` or `close` in order to determine if we should
close the connection.
"""
path_portion, _, query_string = path.partition("?")
websockets.legacy.handshake.check_request(headers)
subprotocols = []
for header in headers.get_all("Sec-WebSocket-Protocol"):
subprotocols.extend([token.strip() for token in header.split(",")])
asgi_headers = [
(name.encode("ascii"), value.encode("ascii"))
for name, value in headers.raw_items()
]
self.scope = { # type: ignore[typeddict-item]
"type": "websocket",
"asgi": {"version": self.config.asgi_version, "spec_version": "2.3"},
"http_version": "1.1",
"scheme": self.scheme,
"server": self.server,
"client": self.client,
"root_path": self.root_path,
"path": unquote(path_portion),
"raw_path": path_portion.encode("ascii"),
"query_string": query_string.encode("ascii"),
"headers": asgi_headers,
"subprotocols": subprotocols,
}
task = self.loop.create_task(self.run_asgi())
task.add_done_callback(self.on_task_complete)
self.tasks.add(task)
await self.handshake_started_event.wait()
return self.initial_response
def process_subprotocol(
self, headers: Headers, available_subprotocols: Optional[Sequence[Subprotocol]]
) -> Optional[Subprotocol]:
"""
We override the standard 'process_subprotocol' behavior here so that
we return whatever subprotocol is sent in the 'accept' message.
"""
return self.accepted_subprotocol
def send_500_response(self) -> None:
msg = b"Internal Server Error"
content = [
b"HTTP/1.1 500 Internal Server Error\r\n"
b"content-type: text/plain; charset=utf-8\r\n",
b"content-length: " + str(len(msg)).encode("ascii") + b"\r\n",
b"connection: close\r\n",
b"\r\n",
msg,
]
self.transport.write(b"".join(content))
# Allow handler task to terminate cleanly, as websockets doesn't cancel it by
# itself (see https://github.com/encode/uvicorn/issues/920)
self.handshake_started_event.set()
async def ws_handler( # type: ignore[override]
self, protocol: WebSocketServerProtocol, path: str
) -> Any:
"""
This is the main handler function for the 'websockets' implementation
to call into. We just wait for close then return, and instead allow
'send' and 'receive' events to drive the flow.
"""
self.handshake_completed_event.set()
await self.closed_event.wait()
async def run_asgi(self) -> None:
"""
Wrapper around the ASGI callable, handling exceptions and unexpected
termination states.
"""
try:
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
except BaseException as exc:
self.closed_event.set()
msg = "Exception in ASGI application\n"
self.logger.error(msg, exc_info=exc)
if not self.handshake_started_event.is_set():
self.send_500_response()
else:
await self.handshake_completed_event.wait()
self.transport.close()
else:
self.closed_event.set()
if not self.handshake_started_event.is_set():
msg = "ASGI callable returned without sending handshake."
self.logger.error(msg)
self.send_500_response()
self.transport.close()
elif result is not None:
msg = "ASGI callable should return None, but returned '%s'."
self.logger.error(msg, result)
await self.handshake_completed_event.wait()
self.transport.close()
async def asgi_send(self, message: "ASGISendEvent") -> None:
message_type = message["type"]
if not self.handshake_started_event.is_set():
if message_type == "websocket.accept":
message = cast("WebSocketAcceptEvent", message)
self.logger.info(
'%s - "WebSocket %s" [accepted]',
self.scope["client"],
get_path_with_query_string(self.scope),
)
self.initial_response = None
self.accepted_subprotocol = cast(
Optional[Subprotocol], message.get("subprotocol")
)
if "headers" in message:
self.extra_headers.extend(
# ASGI spec requires bytes
# But for compatibility we need to convert it to strings
(name.decode("latin-1"), value.decode("latin-1"))
for name, value in message["headers"]
)
self.handshake_started_event.set()
elif message_type == "websocket.close":
message = cast("WebSocketCloseEvent", message)
self.logger.info(
'%s - "WebSocket %s" 403',
self.scope["client"],
get_path_with_query_string(self.scope),
)
self.initial_response = (http.HTTPStatus.FORBIDDEN, [], b"")
self.handshake_started_event.set()
self.closed_event.set()
else:
msg = (
"Expected ASGI message 'websocket.accept' or 'websocket.close', "
"but got '%s'."
)
raise RuntimeError(msg % message_type)
elif not self.closed_event.is_set():
await self.handshake_completed_event.wait()
if message_type == "websocket.send":
message = cast("WebSocketSendEvent", message)
bytes_data = message.get("bytes")
text_data = message.get("text")
data = text_data if bytes_data is None else bytes_data
await self.send(data) # type: ignore[arg-type]
elif message_type == "websocket.close":
message = cast("WebSocketCloseEvent", message)
code = message.get("code", 1000)
reason = message.get("reason", "") or ""
await self.close(code, reason)
self.closed_event.set()
else:
msg = (
"Expected ASGI message 'websocket.send' or 'websocket.close',"
" but got '%s'."
)
raise RuntimeError(msg % message_type)
else:
msg = "Unexpected ASGI message '%s', after sending 'websocket.close'."
raise RuntimeError(msg % message_type)
async def asgi_receive(
self,
) -> Union[
"WebSocketDisconnectEvent", "WebSocketConnectEvent", "WebSocketReceiveEvent"
]:
if not self.connect_sent:
self.connect_sent = True
return {"type": "websocket.connect"}
await self.handshake_completed_event.wait()
if self.lost_connection_before_handshake:
# If the handshake failed or the app closed before handshake completion,
# use 1006 Abnormal Closure.
return {"type": "websocket.disconnect", "code": 1006}
if self.closed_event.is_set():
return {"type": "websocket.disconnect", "code": 1005}
try:
data = await self.recv()
except ConnectionClosed as exc:
self.closed_event.set()
if self.ws_server.closing:
return {"type": "websocket.disconnect", "code": 1012}
return {"type": "websocket.disconnect", "code": exc.code}
msg: WebSocketReceiveEvent = { # type: ignore[typeddict-item]
"type": "websocket.receive"
}
if isinstance(data, str):
msg["text"] = data
else:
msg["bytes"] = data
return msg
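The protocol above drives an ASGI app through the standard websocket event flow: `websocket.connect` answered by `accept`, `receive` events answered by `send`, then `disconnect`. A self-contained sketch of that flow with a toy echo app and queue-backed `receive`/`send` (both hypothetical stand-ins for the real transport):

```python
# Toy ASGI websocket echo app driven by asyncio queues; illustrates the
# connect/accept/receive/send/disconnect sequence only.
import asyncio

async def echo_app(scope, receive, send):
    while True:
        event = await receive()
        if event["type"] == "websocket.connect":
            await send({"type": "websocket.accept"})
        elif event["type"] == "websocket.receive":
            await send({"type": "websocket.send", "text": event["text"]})
        elif event["type"] == "websocket.disconnect":
            break

async def drive():
    inbox, outbox = asyncio.Queue(), asyncio.Queue()
    for ev in [{"type": "websocket.connect"},
               {"type": "websocket.receive", "text": "ping"},
               {"type": "websocket.disconnect", "code": 1000}]:
        inbox.put_nowait(ev)
    await echo_app({"type": "websocket"}, inbox.get, outbox.put)
    return [outbox.get_nowait() for _ in range(outbox.qsize())]

print(asyncio.run(drive()))
```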
# === encode/uvicorn: uvicorn/protocols/http/h11_impl.py (BSD-3-Clause) ===
import asyncio
import http
import logging
import sys
from typing import TYPE_CHECKING, Callable, List, Optional, Tuple, Union, cast
from urllib.parse import unquote
import h11
from uvicorn.config import Config
from uvicorn.logging import TRACE_LOG_LEVEL
from uvicorn.protocols.http.flow_control import (
CLOSE_HEADER,
HIGH_WATER_LIMIT,
FlowControl,
service_unavailable,
)
from uvicorn.protocols.utils import (
get_client_addr,
get_local_addr,
get_path_with_query_string,
get_remote_addr,
is_ssl,
)
from uvicorn.server import ServerState
if sys.version_info < (3, 8): # pragma: py-gte-38
from typing_extensions import Literal
else: # pragma: py-lt-38
from typing import Literal
if TYPE_CHECKING:
from asgiref.typing import (
ASGI3Application,
ASGIReceiveEvent,
ASGISendEvent,
HTTPDisconnectEvent,
HTTPRequestEvent,
HTTPResponseBodyEvent,
HTTPResponseStartEvent,
HTTPScope,
)
H11Event = Union[
h11.Request,
h11.InformationalResponse,
h11.Response,
h11.Data,
h11.EndOfMessage,
h11.ConnectionClosed,
]
def _get_status_phrase(status_code: int) -> bytes:
try:
return http.HTTPStatus(status_code).phrase.encode()
except ValueError:
return b""
STATUS_PHRASES = {
status_code: _get_status_phrase(status_code) for status_code in range(100, 600)
}
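The phrase table above leans on the stdlib's `http.HTTPStatus`, with unknown codes falling back to an empty reason. A quick check of that fallback behaviour:

```python
# http.HTTPStatus lookup with the same empty-bytes fallback as
# _get_status_phrase above.
import http

def phrase(code):
    try:
        return http.HTTPStatus(code).phrase.encode()
    except ValueError:
        return b""

print(phrase(404), phrase(599))
```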
class H11Protocol(asyncio.Protocol):
def __init__(
self,
config: Config,
server_state: ServerState,
_loop: Optional[asyncio.AbstractEventLoop] = None,
) -> None:
if not config.loaded:
config.load()
self.config = config
self.app = config.loaded_app
self.loop = _loop or asyncio.get_event_loop()
self.logger = logging.getLogger("uvicorn.error")
self.access_logger = logging.getLogger("uvicorn.access")
self.access_log = self.access_logger.hasHandlers()
self.conn = h11.Connection(h11.SERVER, config.h11_max_incomplete_event_size)
self.ws_protocol_class = config.ws_protocol_class
self.root_path = config.root_path
self.limit_concurrency = config.limit_concurrency
# Timeouts
self.timeout_keep_alive_task: Optional[asyncio.TimerHandle] = None
self.timeout_keep_alive = config.timeout_keep_alive
# Shared server state
self.server_state = server_state
self.connections = server_state.connections
self.tasks = server_state.tasks
# Per-connection state
self.transport: asyncio.Transport = None # type: ignore[assignment]
self.flow: FlowControl = None # type: ignore[assignment]
self.server: Optional[Tuple[str, int]] = None
self.client: Optional[Tuple[str, int]] = None
self.scheme: Optional[Literal["http", "https"]] = None
# Per-request state
self.scope: HTTPScope = None # type: ignore[assignment]
self.headers: List[Tuple[bytes, bytes]] = None # type: ignore[assignment]
self.cycle: RequestResponseCycle = None # type: ignore[assignment]
# Protocol interface
def connection_made( # type: ignore[override]
self, transport: asyncio.Transport
) -> None:
self.connections.add(self)
self.transport = transport
self.flow = FlowControl(transport)
self.server = get_local_addr(transport)
self.client = get_remote_addr(transport)
self.scheme = "https" if is_ssl(transport) else "http"
if self.logger.level <= TRACE_LOG_LEVEL:
prefix = "%s:%d - " % self.client if self.client else ""
self.logger.log(TRACE_LOG_LEVEL, "%sHTTP connection made", prefix)
def connection_lost(self, exc: Optional[Exception]) -> None:
self.connections.discard(self)
if self.logger.level <= TRACE_LOG_LEVEL:
prefix = "%s:%d - " % self.client if self.client else ""
self.logger.log(TRACE_LOG_LEVEL, "%sHTTP connection lost", prefix)
if self.cycle and not self.cycle.response_complete:
self.cycle.disconnected = True
if self.conn.our_state != h11.ERROR:
event = h11.ConnectionClosed()
try:
self.conn.send(event)
except h11.LocalProtocolError:
# Premature client disconnect
pass
if self.cycle is not None:
self.cycle.message_event.set()
if self.flow is not None:
self.flow.resume_writing()
if exc is None:
self.transport.close()
self._unset_keepalive_if_required()
def eof_received(self) -> None:
pass
def _unset_keepalive_if_required(self) -> None:
if self.timeout_keep_alive_task is not None:
self.timeout_keep_alive_task.cancel()
self.timeout_keep_alive_task = None
def _get_upgrade(self) -> Optional[bytes]:
connection = []
upgrade = None
for name, value in self.headers:
if name == b"connection":
connection = [token.lower().strip() for token in value.split(b",")]
if name == b"upgrade":
upgrade = value.lower()
if b"upgrade" in connection:
return upgrade
return None
def _should_upgrade_to_ws(self) -> bool:
if self.ws_protocol_class is None:
if self.config.ws == "auto":
msg = "Unsupported upgrade request."
self.logger.warning(msg)
msg = "No supported WebSocket library detected. Please use 'pip install uvicorn[standard]', or install 'websockets' or 'wsproto' manually." # noqa: E501
self.logger.warning(msg)
return False
return True
def data_received(self, data: bytes) -> None:
self._unset_keepalive_if_required()
self.conn.receive_data(data)
self.handle_events()
def handle_events(self) -> None:
while True:
try:
event = self.conn.next_event()
except h11.RemoteProtocolError:
msg = "Invalid HTTP request received."
self.logger.warning(msg)
self.send_400_response(msg)
return
event_type = type(event)
if event_type is h11.NEED_DATA:
break
elif event_type is h11.PAUSED:
# This case can occur in HTTP pipelining, so we need to
# stop reading any more data, and ensure that at the end
# of the active request/response cycle we handle any
# events that have been buffered up.
self.flow.pause_reading()
break
elif event_type is h11.Request:
self.headers = [(key.lower(), value) for key, value in event.headers]
raw_path, _, query_string = event.target.partition(b"?")
self.scope = { # type: ignore[typeddict-item]
"type": "http",
"asgi": {
"version": self.config.asgi_version,
"spec_version": "2.3",
},
"http_version": event.http_version.decode("ascii"),
"server": self.server,
"client": self.client,
"scheme": self.scheme,
"method": event.method.decode("ascii"),
"root_path": self.root_path,
"path": unquote(raw_path.decode("ascii")),
"raw_path": raw_path,
"query_string": query_string,
"headers": self.headers,
}
upgrade = self._get_upgrade()
if upgrade == b"websocket" and self._should_upgrade_to_ws():
self.handle_websocket_upgrade(event)
return
# Handle 503 responses when 'limit_concurrency' is exceeded.
if self.limit_concurrency is not None and (
len(self.connections) >= self.limit_concurrency
or len(self.tasks) >= self.limit_concurrency
):
app = service_unavailable
message = "Exceeded concurrency limit."
self.logger.warning(message)
else:
app = self.app
self.cycle = RequestResponseCycle(
scope=self.scope,
conn=self.conn,
transport=self.transport,
flow=self.flow,
logger=self.logger,
access_logger=self.access_logger,
access_log=self.access_log,
default_headers=self.server_state.default_headers,
message_event=asyncio.Event(),
on_response=self.on_response_complete,
)
task = self.loop.create_task(self.cycle.run_asgi(app))
task.add_done_callback(self.tasks.discard)
self.tasks.add(task)
elif event_type is h11.Data:
if self.conn.our_state is h11.DONE:
continue
self.cycle.body += event.data
if len(self.cycle.body) > HIGH_WATER_LIMIT:
self.flow.pause_reading()
self.cycle.message_event.set()
elif event_type is h11.EndOfMessage:
if self.conn.our_state is h11.DONE:
self.transport.resume_reading()
self.conn.start_next_cycle()
continue
self.cycle.more_body = False
self.cycle.message_event.set()
def handle_websocket_upgrade(self, event: H11Event) -> None:
if self.logger.level <= TRACE_LOG_LEVEL:
prefix = "%s:%d - " % self.client if self.client else ""
self.logger.log(TRACE_LOG_LEVEL, "%sUpgrading to WebSocket", prefix)
self.connections.discard(self)
output = [event.method, b" ", event.target, b" HTTP/1.1\r\n"]
for name, value in self.headers:
output += [name, b": ", value, b"\r\n"]
output.append(b"\r\n")
protocol = self.ws_protocol_class( # type: ignore[call-arg, misc]
config=self.config, server_state=self.server_state
)
protocol.connection_made(self.transport)
protocol.data_received(b"".join(output))
self.transport.set_protocol(protocol)
def send_400_response(self, msg: str) -> None:
reason = STATUS_PHRASES[400]
headers = [
(b"content-type", b"text/plain; charset=utf-8"),
(b"connection", b"close"),
]
event = h11.Response(status_code=400, headers=headers, reason=reason)
output = self.conn.send(event)
self.transport.write(output)
event = h11.Data(data=msg.encode("ascii"))
output = self.conn.send(event)
self.transport.write(output)
event = h11.EndOfMessage()
output = self.conn.send(event)
self.transport.write(output)
self.transport.close()
def on_response_complete(self) -> None:
self.server_state.total_requests += 1
if self.transport.is_closing():
return
# Set a short Keep-Alive timeout.
self._unset_keepalive_if_required()
self.timeout_keep_alive_task = self.loop.call_later(
self.timeout_keep_alive, self.timeout_keep_alive_handler
)
# Unpause data reads if needed.
self.flow.resume_reading()
# Unblock any pipelined events.
if self.conn.our_state is h11.DONE and self.conn.their_state is h11.DONE:
self.conn.start_next_cycle()
self.handle_events()
def shutdown(self) -> None:
"""
Called by the server to commence a graceful shutdown.
"""
if self.cycle is None or self.cycle.response_complete:
event = h11.ConnectionClosed()
self.conn.send(event)
self.transport.close()
else:
self.cycle.keep_alive = False
def pause_writing(self) -> None:
"""
Called by the transport when the write buffer exceeds the high water mark.
"""
self.flow.pause_writing()
def resume_writing(self) -> None:
"""
Called by the transport when the write buffer drops below the low water mark.
"""
self.flow.resume_writing()
def timeout_keep_alive_handler(self) -> None:
"""
Called on a keep-alive connection if no new data is received after a short
delay.
"""
if not self.transport.is_closing():
event = h11.ConnectionClosed()
self.conn.send(event)
self.transport.close()
class RequestResponseCycle:
def __init__(
self,
scope: "HTTPScope",
conn: h11.Connection,
transport: asyncio.Transport,
flow: FlowControl,
logger: logging.Logger,
access_logger: logging.Logger,
access_log: bool,
default_headers: List[Tuple[bytes, bytes]],
message_event: asyncio.Event,
on_response: Callable[..., None],
) -> None:
self.scope = scope
self.conn = conn
self.transport = transport
self.flow = flow
self.logger = logger
self.access_logger = access_logger
self.access_log = access_log
self.default_headers = default_headers
self.message_event = message_event
self.on_response = on_response
# Connection state
self.disconnected = False
self.keep_alive = True
self.waiting_for_100_continue = conn.they_are_waiting_for_100_continue
# Request state
self.body = b""
self.more_body = True
# Response state
self.response_started = False
self.response_complete = False
# ASGI exception wrapper
async def run_asgi(self, app: "ASGI3Application") -> None:
try:
result = await app( # type: ignore[func-returns-value]
self.scope, self.receive, self.send
)
except BaseException as exc:
msg = "Exception in ASGI application\n"
self.logger.error(msg, exc_info=exc)
if not self.response_started:
await self.send_500_response()
else:
self.transport.close()
else:
if result is not None:
msg = "ASGI callable should return None, but returned '%s'."
self.logger.error(msg, result)
self.transport.close()
elif not self.response_started and not self.disconnected:
msg = "ASGI callable returned without starting response."
self.logger.error(msg)
await self.send_500_response()
elif not self.response_complete and not self.disconnected:
msg = "ASGI callable returned without completing response."
self.logger.error(msg)
self.transport.close()
finally:
self.on_response = lambda: None
async def send_500_response(self) -> None:
response_start_event: "HTTPResponseStartEvent" = {
"type": "http.response.start",
"status": 500,
"headers": [
(b"content-type", b"text/plain; charset=utf-8"),
(b"connection", b"close"),
],
}
await self.send(response_start_event)
response_body_event: "HTTPResponseBodyEvent" = {
"type": "http.response.body",
"body": b"Internal Server Error",
"more_body": False,
}
await self.send(response_body_event)
# ASGI interface
async def send(self, message: "ASGISendEvent") -> None:
message_type = message["type"]
if self.flow.write_paused and not self.disconnected:
await self.flow.drain()
if self.disconnected:
return
if not self.response_started:
# Sending response status line and headers
if message_type != "http.response.start":
msg = "Expected ASGI message 'http.response.start', but got '%s'."
raise RuntimeError(msg % message_type)
message = cast("HTTPResponseStartEvent", message)
self.response_started = True
self.waiting_for_100_continue = False
status_code = message["status"]
message_headers = cast(
List[Tuple[bytes, bytes]], message.get("headers", [])
)
headers = self.default_headers + message_headers
if CLOSE_HEADER in self.scope["headers"] and CLOSE_HEADER not in headers:
headers = headers + [CLOSE_HEADER]
if self.access_log:
self.access_logger.info(
'%s - "%s %s HTTP/%s" %d',
get_client_addr(self.scope),
self.scope["method"],
get_path_with_query_string(self.scope),
self.scope["http_version"],
status_code,
)
# Write response status line and headers
reason = STATUS_PHRASES[status_code]
event = h11.Response(
status_code=status_code, headers=headers, reason=reason
)
output = self.conn.send(event)
self.transport.write(output)
elif not self.response_complete:
# Sending response body
if message_type != "http.response.body":
msg = "Expected ASGI message 'http.response.body', but got '%s'."
raise RuntimeError(msg % message_type)
message = cast("HTTPResponseBodyEvent", message)
body = message.get("body", b"")
more_body = message.get("more_body", False)
# Write response body
if self.scope["method"] == "HEAD":
event = h11.Data(data=b"")
else:
event = h11.Data(data=body)
output = self.conn.send(event)
self.transport.write(output)
# Handle response completion
if not more_body:
self.response_complete = True
self.message_event.set()
event = h11.EndOfMessage()
output = self.conn.send(event)
self.transport.write(output)
else:
# Response already sent
msg = "Unexpected ASGI message '%s' sent, after response already completed."
raise RuntimeError(msg % message_type)
if self.response_complete:
if self.conn.our_state is h11.MUST_CLOSE or not self.keep_alive:
event = h11.ConnectionClosed()
self.conn.send(event)
self.transport.close()
self.on_response()
async def receive(self) -> "ASGIReceiveEvent":
if self.waiting_for_100_continue and not self.transport.is_closing():
event = h11.InformationalResponse(
status_code=100, headers=[], reason="Continue"
)
output = self.conn.send(event)
self.transport.write(output)
self.waiting_for_100_continue = False
if not self.disconnected and not self.response_complete:
self.flow.resume_reading()
await self.message_event.wait()
self.message_event.clear()
message: "Union[HTTPDisconnectEvent, HTTPRequestEvent]"
if self.disconnected or self.response_complete:
message = {"type": "http.disconnect"}
else:
message = {
"type": "http.request",
"body": self.body,
"more_body": self.more_body,
}
self.body = b""
return message
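# --- Illustrative sketch, not part of the original module ---
# send() above drives the response by feeding h11.Response, h11.Data and
# h11.EndOfMessage events to conn.send() and writing the returned bytes to
# the transport. For a fixed-length response, the bytes h11 emits are
# equivalent to hand-framing the HTTP/1.1 message. The helper name below is
# an assumption made for this sketch only:
def _frame_response_sketch(status_code, reason, headers, body):
    # Status line, then headers (a content-length for the fixed body),
    # then a blank line, then the body.
    lines = [b"HTTP/1.1 %d %s" % (status_code, reason.encode("ascii"))]
    for name, value in headers + [(b"content-length", b"%d" % len(body))]:
        lines.append(name + b": " + value)
    return b"\r\n".join(lines) + b"\r\n\r\n" + body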
# File: uvicorn/supervisors/watchgodreload.py (encode/uvicorn, BSD-3-Clause)
import logging
import warnings
from pathlib import Path
from socket import socket
from typing import TYPE_CHECKING, Callable, Dict, List, Optional
from watchgod import DefaultWatcher
from uvicorn.config import Config
from uvicorn.supervisors.basereload import BaseReload
if TYPE_CHECKING:
import os
DirEntry = os.DirEntry[str]
logger = logging.getLogger("uvicorn.error")
class CustomWatcher(DefaultWatcher):
def __init__(self, root_path: Path, config: Config):
default_includes = ["*.py"]
self.includes = [
default
for default in default_includes
if default not in config.reload_excludes
]
self.includes.extend(config.reload_includes)
self.includes = list(set(self.includes))
default_excludes = [".*", ".py[cod]", ".sw.*", "~*"]
self.excludes = [
default
for default in default_excludes
if default not in config.reload_includes
]
self.excludes.extend(config.reload_excludes)
self.excludes = list(set(self.excludes))
self.watched_dirs: Dict[str, bool] = {}
self.watched_files: Dict[str, bool] = {}
self.dirs_includes = set(config.reload_dirs)
self.dirs_excludes = set(config.reload_dirs_excludes)
self.resolved_root = root_path
super().__init__(str(root_path))
def should_watch_file(self, entry: "DirEntry") -> bool:
cached_result = self.watched_files.get(entry.path)
if cached_result is not None:
return cached_result
entry_path = Path(entry)
# cwd is not verified through should_watch_dir, so we need to verify here
        if entry_path.parent == Path.cwd() and Path.cwd() not in self.dirs_includes:
self.watched_files[entry.path] = False
return False
for include_pattern in self.includes:
if entry_path.match(include_pattern):
for exclude_pattern in self.excludes:
if entry_path.match(exclude_pattern):
self.watched_files[entry.path] = False
return False
self.watched_files[entry.path] = True
return True
self.watched_files[entry.path] = False
return False
def should_watch_dir(self, entry: "DirEntry") -> bool:
cached_result = self.watched_dirs.get(entry.path)
if cached_result is not None:
return cached_result
entry_path = Path(entry)
if entry_path in self.dirs_excludes:
self.watched_dirs[entry.path] = False
return False
for exclude_pattern in self.excludes:
if entry_path.match(exclude_pattern):
is_watched = False
if entry_path in self.dirs_includes:
is_watched = True
for directory in self.dirs_includes:
if directory in entry_path.parents:
is_watched = True
if is_watched:
logger.debug(
"WatchGodReload detected a new excluded dir '%s' in '%s'; "
"Adding to exclude list.",
entry_path.relative_to(self.resolved_root),
str(self.resolved_root),
)
self.watched_dirs[entry.path] = False
self.dirs_excludes.add(entry_path)
return False
if entry_path in self.dirs_includes:
self.watched_dirs[entry.path] = True
return True
for directory in self.dirs_includes:
if directory in entry_path.parents:
self.watched_dirs[entry.path] = True
return True
for include_pattern in self.includes:
if entry_path.match(include_pattern):
logger.info(
"WatchGodReload detected a new reload dir '%s' in '%s'; "
"Adding to watch list.",
str(entry_path.relative_to(self.resolved_root)),
str(self.resolved_root),
)
self.dirs_includes.add(entry_path)
self.watched_dirs[entry.path] = True
return True
self.watched_dirs[entry.path] = False
return False
class WatchGodReload(BaseReload):
def __init__(
self,
config: Config,
target: Callable[[Optional[List[socket]]], None],
sockets: List[socket],
) -> None:
warnings.warn(
            '"watchgod" is deprecated, you should switch '
"to watchfiles (`pip install watchfiles`).",
DeprecationWarning,
)
super().__init__(config, target, sockets)
self.reloader_name = "WatchGod"
self.watchers = []
reload_dirs = []
for directory in config.reload_dirs:
if Path.cwd() not in directory.parents:
reload_dirs.append(directory)
if Path.cwd() not in reload_dirs:
reload_dirs.append(Path.cwd())
for w in reload_dirs:
self.watchers.append(CustomWatcher(w.resolve(), self.config))
def should_restart(self) -> Optional[List[Path]]:
self.pause()
for watcher in self.watchers:
change = watcher.check()
if change != set():
return list({Path(c[1]) for c in change})
return None
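# --- Illustrative sketch, not part of the original module ---
# CustomWatcher's include/exclude checks rely on pathlib.Path.match, which
# glob-matches a pattern against the tail of the path. A few concrete cases
# using the default patterns from above (the alias is for this sketch only):
from pathlib import Path as _SketchPath

assert _SketchPath("src/app.py").match("*.py")        # default include
assert _SketchPath("src/.env").match(".*")            # default exclude: dotfiles
assert not _SketchPath("src/README.md").match("*.py")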
# File: lux/ext/content/contents.py (quantmind/lux, BSD-3-Clause)
import os
import stat
from datetime import datetime, date
from collections.abc import Mapping
from itertools import chain
from dateutil.parser import parse as parse_date
from pulsar.api import Unsupported
from pulsar.utils.slugify import slugify
from pulsar.utils.structures import mapping_iterator
from lux.utils.date import iso8601
from .urlwrappers import (URLWrapper, Processor, MultiValue, Tag, Author,
Category)
try:
from markdown import Markdown
except ImportError: # pragma nocover
Markdown = False
def chain_meta(meta1, meta2):
return chain(mapping_iterator(meta1), mapping_iterator(meta2))
def guess(value):
return value if len(value) > 1 else value[-1]
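# --- Illustrative sketch, not part of the original module ---
# guess() unwraps single-value sequences and leaves multi-value ones alone.
# Its one-liner is duplicated below so the example is self-contained:
def _guess_sketch(value):
    return value if len(value) > 1 else value[-1]

assert _guess_sketch(["python"]) == "python"
assert _guess_sketch(["python", "web"]) == ["python", "web"]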
READERS = {}
# Meta attributes to contribute to html head tag
METADATA_PROCESSORS = dict(((p.name, p) for p in (
Processor('title'),
Processor('description'),
Processor('image'),
Processor('date', lambda x, cfg: get_date(x)),
Processor('modified', lambda x, cfg: get_date(x)),
Processor('status'),
Processor('priority', lambda x, cfg: int(x)),
Processor('order', lambda x, cfg: int(x)),
MultiValue('keywords', Tag),
MultiValue('category', Category),
MultiValue('author', Author),
Processor('template'),
Processor('template-engine'),
MultiValue('requirejs')
)))
def get_date(d):
if not isinstance(d, date):
d = parse_date(d)
return d
def modified_datetime(src):
stat_src = os.stat(src)
return datetime.fromtimestamp(stat_src[stat.ST_MTIME])
def register_reader(cls):
for extension in cls.file_extensions:
READERS[extension] = cls
return cls
def get_reader(app, src=None, ext=None):
if src:
bits = src.split('.')
ext = bits[-1] if len(bits) > 1 else None
reader = READERS.get(ext) or READERS['html']
if not reader or not reader.enabled:
name = reader.__name__ if reader else ext
raise Unsupported('Missing dependencies for %s' % name)
return reader(app.app, ext)
def render_data(app, value, render, context):
if isinstance(value, Mapping):
return dict(((k, render_data(app, v, render, context))
for k, v in value.items()))
elif isinstance(value, (list, tuple)):
return [render_data(app, v, render, context) for v in value]
elif isinstance(value, date):
return iso8601(value)
elif isinstance(value, URLWrapper):
return value.to_json(app)
elif isinstance(value, str):
return render(value, context)
else:
return value
def process_meta(meta, cfg):
as_list = MultiValue()
for key, values in mapping_iterator(meta):
key = slugify(key, separator='_')
if not isinstance(values, (list, tuple)):
values = (values,)
if key not in METADATA_PROCESSORS:
bits = key.split('_', 1)
values = guess(as_list(values, cfg))
if len(bits) > 1 and bits[0] == 'meta':
k = '_'.join(bits[1:])
yield k, values
else:
yield key, values
#
elif values:
process = METADATA_PROCESSORS[key]
yield key, process(values, cfg)
class HtmlContent:
def __init__(self, src, body, meta=None):
self.src = src
self.body = body
self.meta = meta
def __repr__(self):
return self.src
__str__ = __repr__
def tojson(self):
"""Convert the content into a JSON dictionary
"""
meta = self.meta or {}
if self.src and 'modified' not in meta:
meta['modified'] = modified_datetime(self.src)
meta['body_length'] = len(self.body)
meta['body'] = self.body
return dict(_flatten(meta))
@register_reader
class HtmlReader:
"""Base class to read files.
This class is used to process static files, and it can be inherited for
other types of file. A Reader class must have the following attributes:
- enabled: (boolean) tell if the Reader class is enabled. It
generally depends on the import of some dependency.
- file_extensions: a list of file extensions that the Reader will process.
- extensions: a list of extensions to use in the reader (typical use is
Markdown).
"""
content = HtmlContent
file_extensions = ['html']
suffix = 'html'
enabled = True
extensions = None
def __init__(self, app, ext=None):
self.app = app
self.ext = ext
self.logger = app.logger
self.config = app.config
def __str__(self):
return self.__class__.__name__
def read(self, src, meta=None):
"""Read content from a file"""
with open(src, 'rb') as text:
body = text.read()
return self.process(body.decode('utf-8'), src, meta=meta)
def process(self, body, src=None, meta=None):
"""Return the dict containing document metadata
"""
meta = dict(process_meta(meta, self.config)) if meta else {}
meta['type'] = self.file_extensions[0]
return self.content(src, body, meta)
@register_reader
class MarkdownReader(HtmlReader):
"""Reader for Markdown files"""
enabled = bool(Markdown)
file_extensions = ['markdown', 'mdown', 'mkd', 'md']
suffix = 'html'
@property
def md(self):
md = getattr(self.app, '_markdown', None)
if md is None:
extensions = list(self.config['MD_EXTENSIONS'])
if 'meta' not in extensions:
extensions.append('meta')
self.app._markdown = Markdown(extensions=extensions)
return self.app._markdown
def process(self, raw, src=None, meta=None):
raw = '%s\n\n%s' % (raw, self.links())
md = self.md
body = md.convert(raw)
meta = tuple(chain_meta(meta, md.Meta))
return super().process(body, src, meta=meta)
def links(self):
links = self.app.config.get('_MARKDOWN_LINKS_')
if links is None:
links = []
for name, href in self.app.config['CONTENT_LINKS'].items():
title = None
if isinstance(href, dict):
title = href.get('title')
href = href['href']
md = '[%s]: %s "%s"' % (name, href, title or name)
links.append(md)
links = '\n'.join(links)
self.app.config['_MARKDOWN_LINKS_'] = links
return links
# INTERNALS
def _flatten(meta):
for key, value in mapping_iterator(meta):
if isinstance(value, Mapping):
for child, v in _flatten(value):
yield '%s_%s' % (key, child), v
else:
yield key, _flatten_value(value)
def _flatten_value(value):
if isinstance(value, Mapping):
raise ValueError('A dictionary found when converting to string')
elif isinstance(value, (list, tuple)):
return ', '.join(str(_flatten_value(v)) for v in value)
elif isinstance(value, date):
return iso8601(value)
elif isinstance(value, URLWrapper):
return str(value)
else:
return value
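# --- Illustrative sketch, not part of the original module ---
# _flatten() joins nested mapping keys with "_" while _flatten_value()
# stringifies sequences. The standalone rerun below swaps pulsar's
# mapping_iterator for plain dict.items() and omits the date/URLWrapper
# branches:
def _flatten_sketch(meta):
    out = {}
    for key, value in meta.items():
        if isinstance(value, dict):
            for child, v in _flatten_sketch(value).items():
                out["%s_%s" % (key, child)] = v
        elif isinstance(value, (list, tuple)):
            out[key] = ", ".join(str(v) for v in value)
        else:
            out[key] = value
    return out

assert _flatten_sketch({"head": {"title": "Lux"}, "keywords": ["a", "b"]}) == {
    "head_title": "Lux", "keywords": "a, b"}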
# File: lux/ext/sessions/browser.py (quantmind/lux, BSD-3-Clause)
"""Backends for Browser based Authentication
"""
import time
from functools import wraps
from pulsar.api import (
Http401, PermissionDenied, Http404, HttpRedirect, BadRequest
)
from pulsar.apps.wsgi import Route
from lux.utils.date import to_timestamp, date_from_now, iso8601
from lux.utils.context import app_attribute
from lux.core import User
from .store import session_store
NotAuthorised = (Http401, PermissionDenied, BadRequest)
@app_attribute
def exclude_urls(app):
"""urls to exclude from browser sessions
"""
urls = []
for url in app.config['SESSION_EXCLUDE_URLS']:
urls.append(Route(url))
return tuple(urls)
def session_backend_action(method):
@wraps(method)
def _(self, r, *args, **kwargs):
if r.cache.get('skip_session_backend'):
return
return method(self, r, *args, **kwargs)
return _
class SessionBackend:
"""SessionBackend is used when the client is a web browser
    It maintains a session via a cookie key
"""
@session_backend_action
def login(self, request, **data):
api = request.api
seconds = request.config['SESSION_EXPIRY']
data['user_agent'] = self._user_agent(request)
data['ip_address'] = request.get_client_address()
data['expiry'] = iso8601(date_from_now(seconds))
response = api.authorizations.post(json=data, jwt=True)
token = response.json()
session = self._create_session(request, token)
request.cache.session = session
return token
@session_backend_action
def logout(self, request):
"""logout a user
"""
session = request.cache.session
try:
request.api.authorizations.delete(token=session.token)
except NotAuthorised:
pass
session_store(request).delete(session.id)
request.cache.user = request.cache.auth_backend.anonymous(request)
request.cache.session = self._create_session(request)
@session_backend_action
def get_permissions(self, request, resources, actions=None):
return self._get_permissions(request, resources, actions)
@session_backend_action
def has_permission(self, request, resource, action):
"""Implement :class:`~AuthBackend.has_permission` method
"""
data = self._get_permissions(request, resource, action)
resource = data.get(resource)
if resource:
return resource.get(action, False)
return False
def request(self, request):
path = request.path[1:]
for url in exclude_urls(request.app):
if url.match(path):
request.cache.skip_session_backend = True
return
key = request.config['SESSION_COOKIE_NAME']
session_key = request.cookies.get(key)
store = session_store(request)
session = None
if session_key:
session = store.get(session_key.value)
if (session and (
session.expiry is None or session.expiry < time.time())):
store.delete(session.id)
session = None
if not session:
session = self._create_session(request)
request.cache.session = session
token = session.token
if token:
try:
user = request.api.user.get(token=session.token).json()
except NotAuthorised:
request.app.auth.logout(request)
raise HttpRedirect(request.config['LOGIN_URL']) from None
except Exception:
request.app.auth.logout(request)
raise
request.cache.user = User(user)
@session_backend_action
def response(self, request, response):
session = request.cache.get('session')
if session:
if response.can_set_cookies():
key = request.config['SESSION_COOKIE_NAME']
session_key = request.cookies.get(key)
id = session.id
if not session_key or session_key.value != id:
response.set_cookie(key, value=str(id), httponly=True,
expires=session.expiry)
session_store(request).save(session)
return response
# INTERNALS
def _create_session(self, request, token=None):
"""Create a new Session object"""
expiry = None
if token:
expiry = to_timestamp(token.get('expiry'))
token = token['id']
if not expiry:
seconds = request.config['SESSION_EXPIRY']
expiry = time.time() + seconds
return session_store(request).create(expiry=expiry,
token=token)
def _get_permissions(self, request, resources, actions=None):
if not isinstance(resources, (list, tuple)):
resources = (resources,)
query = [('resource', resource) for resource in resources]
if actions:
if not isinstance(actions, (list, tuple)):
actions = (actions,)
query.extend((('action', action) for action in actions))
try:
response = request.api.user.permissions.get(params=query)
except NotAuthorised:
handle_401(request)
return response.json()
def _user_agent(self, request, max_len=256):
agent = request.get('HTTP_USER_AGENT')
return agent[:max_len] if agent else ''
def handle_401(request, user=None):
"""When the API respond with a 401 logout and redirect to login
"""
user = user or request.session.user
if user.is_authenticated():
request.app.auth.logout(request)
raise HttpRedirect(request.config['LOGIN_URL'])
else:
raise Http404
# File: tests/web/test_signup.py (quantmind/lux, BSD-3-Clause)
from tests import web
class AuthTest(web.WebsiteTest):
def _get_code(self, message):
url = '/reset-password/'
idx = message.find(url)
self.assertTrue(idx)
msg = message[idx+len(url):]
idx = msg.find(' ')
return msg[:idx]
async def test_html_signup(self):
request = await self.webclient.get('/signup')
html = self.html(request.response, 200)
self.assertTrue(html)
async def test_signup(self):
data = await self._signup()
self.assertTrue('email' in data)
async def test_signup_error(self):
data = {'username': 'djkhvbdf'}
request = await self.webclient.post('/signup', json=data)
self.json(request.response, 403)
async def test_signup_error_form(self):
data = {'username': 'djkhvbdf'}
request = await self.client.post('/registrations',
json=data,
jwt=self.admin_jwt)
self.assertValidationError(request.response, 'password')
async def test_signup_confirmation(self):
data = await self._signup()
reg = await self._get_registration(data['email'])
self.assertTrue(reg.id)
request = await self.webclient.get('/signup/%s' % reg.id)
doc = self.bs(request.response, 200)
body = doc.find('body')
self.assertTrue(body)
# await self._check_body(reg, body)
# PASSWORD RESET
async def test_reset_password_get(self):
request = await self.webclient.get('/reset-password')
bs = self.bs(request.response, 200)
form = bs.find('lux-form')
self.assertTrue(form)
async def test_reset_password_fail(self):
cookie, data = await self._cookie_csrf('/reset-password')
request = await self.webclient.post('/reset-password',
json=data,
cookie=cookie)
self.assertValidationError(request.response, 'email')
data['email'] = 'dvavf@sdvavadf.com'
request = await self.webclient.post('/reset-password',
json=data, cookie=cookie)
self.assertValidationError(request.response,
text="Can't find user, sorry")
async def test_reset_password_bad_key(self):
cookie, data = await self._cookie_csrf('/reset-password')
request = await self.webclient.get('/reset-password/sdhcvshc',
cookie=cookie)
self.assertEqual(request.response.status_code, 404)
async def test_reset_password_success(self):
cookie, data = await self._cookie_csrf('/reset-password')
#
# Post Reset password request
data['email'] = 'toni@test.com'
request = await self.webclient.post('/reset-password',
json=data, cookie=cookie)
data = self.json(request.response, 200)
self.assertTrue(data['user']['email'], 'toni@test.com')
mail = None
for msg in self.app._outbox:
if msg.to == 'toni@test.com':
mail = msg
break
self.assertTrue(mail)
self.assertEqual(mail.sender, 'admin@lux.com')
code = self._get_code(mail.message)
self.assertTrue(code)
self.assertEqual(code, data['id'])
request = await self.webclient.get('/reset-password/sdcsd',
cookie=cookie)
self.html(request.response, 404)
request = await self.webclient.get('/reset-password/%s' % code,
cookie=cookie)
bs = self.bs(request.response, 200)
form = bs.find('lux-form')
self.assertTrue(form)
#
# now lets post
password = 'new-pass-for-toni'
cookie, data = await self._cookie_csrf(
'/reset-password/%s' % code, cookie=cookie)
data.update({'password': password, 'password_repeat': password})
request = await self.webclient.post('/reset-password/%s' % code,
data=data,
cookie=cookie)
self.assertTrue(self.json(request.response, 200)['success'])
#
# the change password link should now raise 404
request = await self.webclient.get('/reset-password/%s' % code,
cookie=cookie)
self.assertEqual(request.response.status_code, 404)
    async def _check_body(self, reg, body):
login = body.find_all('a')
self.assertEqual(len(login), 1)
text = login[0].prettify()
self.assertTrue(self.app.config['LOGIN_URL'] in text)
text = body.get_text()
self.assertTrue('You have confirmed your email' in text)
request = await self.client.get('/signup/%s' % reg.id)
html = self.html(request.response, 410)
self.assertTrue(html)
async def __test_confirm_signup(self):
data = await self._signup()
reg = await self.app.green_pool.submit(self._get_registration,
data['email'])
api_url = '/authorizations/signup/%s' % reg.id
request = await self.client.options(api_url)
self.assertEqual(request.response.status_code, 200)
#
request = await self.client.post(api_url)
data = self.json(request.response, 200)
self.assertTrue(data['success'])
request = await self.client.post(api_url)
self.json(request.response, 410)
# File: lux/ext/smtp/log.py (quantmind/lux, BSD-3-Clause)
import logging
import json
from inspect import isawaitable
from pulsar.api import ensure_future
MESSAGE = 'Exception while posting message to Slack'
def context_text_formatter(context):
res = ""
maxlen = max([len(key) for key in context])
for key, val in context.items():
space = " " * (maxlen - len(key))
res += "%s:%s%s\n" % (key, space, val)
return res
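# --- Illustrative sketch, not part of the original module ---
# context_text_formatter() pads each key to the longest key so values line
# up in the emitted report. Duplicated below so the example runs on its own:
def _context_formatter_sketch(context):
    res = ""
    maxlen = max([len(key) for key in context])
    for key, val in context.items():
        space = " " * (maxlen - len(key))
        res += "%s:%s%s\n" % (key, space, val)
    return res

assert _context_formatter_sketch({"host": "web1", "app": "lux"}) == "host:web1\napp: lux\n"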
class SMTPHandler(logging.Handler):
def __init__(self, app, level):
super().__init__(logging._checkLevel(level))
self.app = app
def emit(self, record):
cfg = self.app.config
managers = cfg['SITE_MANAGERS']
if getattr(record, 'mail', False) or not managers:
return
backend = self.app.email_backend
msg = self.format(record)
first = record.message.split('\n')[0]
subject = '%s - %s - %s' % (cfg['APP_NAME'], record.levelname, first)
context_factory = cfg['LOG_CONTEXT_FACTORY']
if context_factory:
ctx = context_factory(self)
msg = context_text_formatter(ctx) + '\n' + msg
subject = ctx['host'] + ': ' + subject
backend.send_mail(to=managers,
subject=subject,
message=msg)
class SlackHandler(logging.Handler):
"""Handler that will emit every event to slack channel
"""
webhook_url = 'https://hooks.slack.com/services'
def __init__(self, app, level, token):
super().__init__(logging._checkLevel(level))
self.app = app
self.webhook_url = '%s/%s' % (self.webhook_url, token)
def emit(self, record):
"""Emit record to slack channel using pycurl to avoid recurrence
event logging (log logged record)
"""
        if record.message.startswith(MESSAGE):  # avoid circular emit
return
cfg = self.app.config
managers = cfg['SLACK_LINK_NAMES']
text = ''
data = {}
if managers:
text = ' '.join(('@%s' % m for m in managers))
text = '%s\n\n' % text
data['link_names'] = 1
context_factory = cfg['LOG_CONTEXT_FACTORY']
data['text'] = text
if context_factory:
ctx = context_factory(self)
data['text'] += "\n" + context_text_formatter(ctx)
data['text'] += "```\n%s\n```" % self.format(record)
sessions = self.app.http()
response = sessions.post(self.webhook_url,
data=json.dumps(data))
if isawaitable(response):
ensure_future(self._emit(response), loop=sessions._loop)
else:
sessions._loop.call_soon(self._raise_error, response)
async def _emit(self, response):
self._raise_error(await response)
def _raise_error(self, response):
try:
response.raise_for_status()
except Exception:
text = response.text()
self.app.logger.error('%s: %s' % (MESSAGE, text))
# File: tests/sockjs/test_rest.py (quantmind/lux, BSD-3-Clause)
import json
import asyncio
from lux.utils import test
from lux.utils.crypt import create_uuid
class TestSockJSRestApp(test.AppTestCase):
config_file = 'tests.sockjs'
@classmethod
async def beforeAll(cls):
cls.super_token, cls.pippo_token = await asyncio.gather(
cls.user_token('testuser', jwt=cls.admin_jwt),
cls.user_token('pippo', jwt=cls.admin_jwt)
)
async def ws(self):
request = await self.client.wsget('/testws/websocket')
return self.ws_upgrade(request.response)
def test_app(self):
app = self.app
self.assertEqual(app.config['WS_URL'], '/testws')
async def test_get(self):
request = await self.client.get('/testws')
response = request.response
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content_type,
'text/plain; charset=utf-8')
async def test_info(self):
request = await self.client.get('/testws/info')
response = request.response
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content_type,
'application/json; charset=utf-8')
async def test_websocket_400(self):
request = await self.client.get('/testws/websocket')
response = request.response
self.assertEqual(response.status_code, 400)
async def test_websocket_handler(self):
websocket = await self.ws()
handler = websocket.handler
self.assertEqual(len(handler.rpc_methods), 9)
self.assertTrue(handler.rpc_methods.get('authenticate'))
self.assertTrue(handler.rpc_methods.get('publish'))
self.assertTrue(handler.rpc_methods.get('subscribe'))
self.assertTrue(handler.rpc_methods.get('unsubscribe'))
self.assertTrue(handler.rpc_methods.get('model_metadata'))
self.assertTrue(handler.rpc_methods.get('model_data'))
self.assertTrue(handler.rpc_methods.get('model_create'))
self.assertTrue(handler.rpc_methods.get('add'))
self.assertTrue(handler.rpc_methods.get('echo'))
async def test_ws_protocol_error(self):
websocket = await self.ws()
logger = websocket.cache.wsclient.logger
msg = json.dumps(dict(method='add', params=dict(a=4, b=6)))
await websocket.handler.on_message(websocket, msg)
logger.error.assert_called_with(
'Protocol error: %s',
'Malformed message; expected list, got dict')
#
logger.reset_mock()
msg = json.dumps(['?'])
await websocket.handler.on_message(websocket, msg)
logger.error.assert_called_with(
'Protocol error: %s',
'Invalid JSON')
#
logger.reset_mock()
msg = json.dumps([json.dumps('?')])
await websocket.handler.on_message(websocket, msg)
logger.error.assert_called_with(
'Protocol error: %s',
'Malformed data; expected dict, got str')
async def test_ws_add_error(self):
websocket = await self.ws()
msg = self.ws_message(method='add', params=dict(a=4, b=6))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['error']['message'], 'Request ID not available')
async def test_ws_add(self):
websocket = await self.ws()
msg = self.ws_message(method='add', params=dict(a=4, b=6), id="57")
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['result'], 10)
async def test_ws_authenticate_error(self):
websocket = await self.ws()
msg = self.ws_message(method='authenticate', id=5)
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['error']['message'], 'missing authToken')
self.assertEqual(msg['id'], 5)
async def test_ws_authenticate_fails(self):
websocket = await self.ws()
msg = self.ws_message(method='authenticate', id="dfg",
params=dict(authToken='dsd'))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['error']['message'], 'bad authToken')
msg = self.ws_message(method='authenticate', id="dfg",
params=dict(authToken=create_uuid().hex))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['error']['message'], 'bad authToken')
async def test_ws_authenticate(self):
websocket = await self.ws()
msg = self.ws_message(method='authenticate', id="dfg",
params=dict(authToken=self.super_token))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertTrue(msg['result'])
self.assertEqual(msg['result']['username'], 'testuser')
return websocket
async def test_ws_publish_fails(self):
websocket = await self.ws()
#
msg = self.ws_message(method='publish', id=456)
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['error']['message'], 'missing channel')
#
msg = self.ws_message(method='publish', id=456,
params=dict(channel='foo'))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['error']['message'], 'missing event')
async def test_ws_publish(self):
websocket = await self.ws()
#
msg = self.ws_message(method='publish', id=456,
params=dict(channel='foo', event='myevent'))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertTrue('result' in msg)
async def test_ws_subscribe_fails(self):
websocket = await self.ws()
#
msg = self.ws_message(method='subscribe', id=456)
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['error']['message'], 'missing channel')
#
msg = self.ws_message(method='subscribe', id=456,
params=dict(channel='foo'))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['error']['message'], 'missing event')
async def test_ws_subscribe(self):
websocket = await self.ws()
#
msg = self.ws_message(method='subscribe', id=456,
params=dict(channel='pizza', event='myevent'))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['id'], 456)
#
msg = self.ws_message(method='subscribe', id=4556,
params=dict(channel='lux-foo', event='myevent'))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['id'], 4556)
async def test_ws_model_metadata_fails(self):
websocket = await self.ws()
#
msg = self.ws_message(method='model_metadata', id=456)
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['error']['message'], 'missing model')
#
msg = self.ws_message(method='model_metadata', id=456,
params=dict(model='foo'))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertEqual(msg['error']['message'], 'Model "foo" does not exist')
async def test_ws_model_metadata(self):
websocket = await self.ws()
#
msg = self.ws_message(method='model_metadata', id=456,
params=dict(model='user'))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
self.assertTrue(msg['result'])
self.assertTrue(msg['result']['columns'])
self.assertTrue(msg['result']['permissions'])
async def test_ws_model_data(self):
websocket = await self.ws()
#
msg = self.ws_message(method='model_data', id=456,
params=dict(model='user'))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
data = msg['result']
self.assertTrue(data)
self.assertTrue(data['total'])
self.assertTrue(data['result'])
async def test_model_create(self):
#
# Subscribe to create event
ws = await self.ws()
msg = self.ws_message(method='subscribe', id=456,
params=dict(channel='datamodel',
event='tasks.create'))
await ws.handler.on_message(ws, msg)
self.get_ws_message(ws)
future = asyncio.Future()
ws.connection.write = future.set_result
#
websocket = await self.test_ws_authenticate()
msg = self.ws_message(method='model_create', id=456,
params=dict(model='tasks',
subject='just a test'))
await websocket.handler.on_message(websocket, msg)
msg = self.get_ws_message(websocket)
data = msg['result']
self.assertTrue(data)
self.assertEqual(data['subject'], 'just a test')
#
frame = await asyncio.wait_for(future, 1.5)
ws.connection.reset_mock()
self.assertTrue(frame)
msg = self.parse_frame(ws, frame)
self.assertTrue(msg)
self.assertEqual(msg['event'], 'tasks.create')
self.assertEqual(msg['channel'], 'datamodel')
self.assertEqual(msg['data'], data)
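The protocol errors asserted in the tests above imply a two-level envelope: the websocket frame is a JSON list whose entries are themselves JSON-encoded dicts. A hypothetical validator sketch mirroring those error strings (`decode_ws_messages` is an invented name; the real handler's implementation is not shown in these tests):

```python
import json

def decode_ws_messages(raw):
    # Outer payload must decode to a JSON list...
    outer = json.loads(raw)
    if not isinstance(outer, list):
        raise ValueError(
            'Malformed message; expected list, got %s' % type(outer).__name__)
    messages = []
    for entry in outer:
        # ...and each entry must itself be a JSON-encoded dict
        try:
            data = json.loads(entry)
        except (TypeError, ValueError):
            raise ValueError('Invalid JSON')
        if not isinstance(data, dict):
            raise ValueError(
                'Malformed data; expected dict, got %s' % type(data).__name__)
        messages.append(data)
    return messages
```

With this shape, `json.dumps(dict(...))` fails the list check, `json.dumps(['?'])` fails the inner decode, and `json.dumps([json.dumps('?')])` decodes to a string rather than a dict, matching the three errors asserted above.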
# ---------------------------------------------------------------------------
# quantmind/lux -- lux/utils/crypt/arc4.py (bsd-3-clause)
# ---------------------------------------------------------------------------
'''RC4, ARC4, ARCFOUR algorithm for encryption.
Adapted from
#
# RC4, ARC4, ARCFOUR algorithm
#
# Copyright (c) 2009 joonis new media
# Author: Thimo Kraemer <thimo.kraemer@joonis.de>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301, USA.
#
'''
import base64
from os import urandom
__all__ = ['rc4crypt', 'encrypt', 'decrypt']
def _rc4crypt(data, box):
'''Return a generator over encrypted bytes'''
x = 0
y = 0
for o in data:
x = (x + 1) % 256
y = (y + box[x]) % 256
box[x], box[y] = box[y], box[x]
yield o ^ box[(box[x] + box[y]) % 256]
def rc4crypt(data, key):
    '''data and key must be byte strings'''
x = 0
box = list(range(256))
for i in range(256):
x = (x + box[i] + key[i % len(key)]) % 256
box[i], box[x] = box[x], box[i]
return bytes(_rc4crypt(data, box))
def encrypt(plaintext, key, salt_size=8):
if not plaintext:
return ''
salt = urandom(salt_size)
v = rc4crypt(plaintext, salt + key)
n = bytes((salt_size,))
rs = n+salt+v
return base64.b64encode(rs)
def decrypt(ciphertext, key):
if ciphertext:
rs = base64.b64decode(ciphertext)
sl = rs[0] + 1
salt = rs[1:sl]
ciphertext = rs[sl:]
return rc4crypt(ciphertext, salt+key)
else:
return ''
def verify(encrypted, raw, key, salt_size=8):
return raw == decrypt(encrypted, key)
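As a sanity check, the stream cipher above is its own inverse: XOR-ing the same keystream in twice restores the plaintext. A self-contained sketch of that round trip, with the key scheduling and output loop re-declared here so it runs on its own (`rc4` is a local stand-in for the `rc4crypt`/`_rc4crypt` pair above):

```python
def rc4(data, key):
    # Key-scheduling (KSA), as in rc4crypt above
    box = list(range(256))
    x = 0
    for i in range(256):
        x = (x + box[i] + key[i % len(key)]) % 256
        box[i], box[x] = box[x], box[i]
    # Output generation (PRGA), XOR-ing the keystream into the data
    out = bytearray()
    x = y = 0
    for o in data:
        x = (x + 1) % 256
        y = (y + box[x]) % 256
        box[x], box[y] = box[y], box[x]
        out.append(o ^ box[(box[x] + box[y]) % 256])
    return bytes(out)

key = b'secret-key'
message = b'attack at dawn'
cipher = rc4(message, key)
# Symmetric: applying the cipher again with the same key decrypts
assert rc4(cipher, key) == message
```

The `encrypt`/`decrypt` helpers build on this symmetry by prepending a one-byte salt length and a random salt before base64-encoding the result.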
# ---------------------------------------------------------------------------
# quantmind/lux -- lux/ext/auth/rest/registrations.py (bsd-3-clause)
# ---------------------------------------------------------------------------
from datetime import datetime
from pulsar.api import PermissionDenied, Http404
from lux.core import http_assert
from lux.ext.rest import RestRouter, route
from lux.models import Schema, fields, ValidationError
from lux.ext.odm import Model
from . import ensure_service_user, IdSchema
URI = 'registrations'
email_templates = {
"subject": {
1: "registration/activation_email_subject.txt",
2: "registration/password_email_subject.txt"
},
"message": {
1: "registration/activation_email.txt",
2: "registration/password_email.txt"
}
}
class RegistrationSchema(Schema):
user = fields.Nested('UserSchema')
class Meta:
model = URI
class PasswordSchema(Schema):
"""Schema for checking a password is input correctly
"""
password = fields.Password(required=True, minLength=5, maxLength=128)
password_repeat = fields.Password(required=True)
def post_load(self, data):
password = data['password']
password_repeat = data.pop('password_repeat', None)
if password != password_repeat:
raise ValidationError('Passwords did not match')
class UserCreateSchema(PasswordSchema):
username = fields.Slug(required=True, minLength=2, maxLength=30)
email = fields.Email(required=True)
class RegistrationModel(Model):
@property
def type(self):
return self.metadata.get('type', 1)
def create_instance(self, session, data):
data['active'] = False
user = self.app.auth.create_user(session, **data)
# send_email_confirmation(request, reg)
return user
def update_model(self, request, instance, data, session=None, **kw):
if not instance.id:
return super().update_model(request, instance, data,
session=session, **kw)
reg = self.instance(instance).obj
http_assert(reg.type == self.type, Http404)
self.update_registration(request, reg, data, session=session)
return {'success': True}
def update_registration(self, request, reg, data, session=None):
with self.session(request, session=session) as session:
user = reg.user
user.active = True
session.add(user)
session.delete(reg)
class RegistrationCRUD(RestRouter):
"""
---
summary: Registration to the API
tags:
- authentication
- registration
"""
model = RegistrationModel("registrations", RegistrationSchema)
@route(default_response_schema=[RegistrationSchema])
def get(self, request):
"""
---
summary: List registration objects
responses:
200:
description: List of registrations matching filters
"""
return self.model.get_list_response(request)
@route(default_response=201,
default_response_schema=RegistrationSchema,
body_schema=UserCreateSchema)
def post(self, request, **kw):
"""
---
summary: Create a new registration
"""
ensure_service_user(request)
return self.model.create_response(request, **kw)
@route('<id>/activate', path_schema=IdSchema)
def post_activate(self, request):
"""
---
summary: Activate a user from a registration ID
description: Clients should POST to this endpoint once they are
            happy the user has confirmed his/her identity.
This is a one time only operation.
responses:
204:
description: Activation was successful
400:
description: Bad Token
401:
description: Token missing or expired
404:
description: Activation id not found
"""
ensure_service_user(request)
model = self.get_model(request)
with model.session(request) as session:
reg = self.get_instance(request, session=session)
if reg.expiry < datetime.utcnow():
raise PermissionDenied('registration token expired')
reg.user.active = True
session.add(reg.user)
model.delete_model(request, reg, session=session)
request.response.status_code = 204
return request.response
def send_email_confirmation(request, reg, email_subject=None,
email_message=None):
"""Send an email to user
"""
user = reg.user
if not user.email:
return
app = request.app
token = ensure_service_user(request)
site = token.get('url')
reg_url = token.get('registration_url')
psw_url = token.get('password_reset_url')
ctx = {'auth_key': reg.id,
'register_url': reg_url,
'reset_password_url': psw_url,
'expiration': reg.expiry,
'email': user.email,
'site_uri': site}
email_subject = email_subject or email_templates['subject'][reg.type]
email_message = email_message or email_templates['message'][reg.type]
subject = app.render_template(email_subject, ctx)
# Email subject *must not* contain newlines
subject = ''.join(subject.splitlines())
body = app.render_template(email_message, ctx)
user.email_user(app, subject, body)
| bsd-3-clause | 6e0fd230fb81118194cab93d866c7737 | 29.786127 | 73 | 0.616786 | 4.277912 | false | false | false | false |
quantmind/lux | tests/auth/groups.py | 1 | 2785 |
class GroupsMixin:
"""Groups CRUD views
"""
async def test_group_validation(self):
payload = {'name': 'abc'}
request = await self.client.post(self.api_url('groups'),
json=payload,
token=self.super_token)
data = self.json(request.response, 201)
gid = data['id']
payload['name'] = 'abcd'
request = await self.client.patch(self.api_url('groups/abc'),
json=payload,
token=self.super_token)
data = self.json(request.response, 200)
self.assertEqual(data['name'], 'abcd')
self.assertEqual(data['id'], gid)
payload['name'] = 'ABCd'
request = await self.client.post(self.api_url('groups'),
json=payload,
token=self.super_token)
self.assertValidationError(request.response, 'name',
'Only lower case, alphanumeric characters '
'and hyphens are allowed')
async def test_add_user_to_group(self):
credentials = await self._new_credentials()
username = credentials['username']
request = await self.client.patch(
self.api_url('users/%s' % username),
json={'groups': ['users']},
token=self.super_token
)
data = self.json(request.response, 200)
self.assertTrue('groups[]' in data)
async def test_add_user_to_group_updated(self):
credentials = await self._new_credentials()
username = credentials['username']
request = await self.client.patch(
self.api_url('users/%s' % username),
json={'groups': ['users']},
token=self.super_token
)
data = self.json(request.response, 200)
self.assertTrue('groups[]' in data)
self.assertEqual(data['groups[]'], [{'id': 'users'}])
#
request = await self.client.patch(
self.api_url('users/%s' % username),
json={'groups': ['users', 'power-users']},
token=self.super_token
)
data = self.json(request.response, 200)
self.assertTrue('groups[]' in data)
self.assertEqual(data['groups[]'],
[{'id': 'users'}, {'id': 'power-users'}])
#
request = await self.client.patch(
self.api_url('users/%s' % username),
json={'groups': []},
token=self.super_token
)
data = self.json(request.response, 200)
self.assertTrue('groups[]' in data)
self.assertEqual(data['groups[]'], [])
| bsd-3-clause | 33a8e0613ca8faeddc7facde6b133a8c | 38.225352 | 78 | 0.504129 | 4.427663 | false | false | false | false |
quantmind/lux | lux/core/commands/stop.py | 1 | 1680 | import os
import time
import signal
from pulsar.utils.tools import Pidfile
from lux.core import LuxCommand, Setting, CommandError
class Command(LuxCommand):
help = "Stop a running server"
option_list = (
Setting('timeout', ('--timeout',),
default=10, type=int,
desc=('Timeout for waiting SIGTERM stop')),
)
pulsar_config_include = ('log_level', 'log_handlers', 'debug',
'config', 'pid_file')
def run(self, options, **params):
app = self.app
pid_file = options.pid_file
if pid_file:
if os.path.isfile(pid_file):
pid = Pidfile(pid_file).read()
if not pid:
raise CommandError('No pid in pid file %s' % pid_file)
else:
                raise CommandError('Could not locate pid file %s' % pid_file)
else:
raise CommandError('Pid file not available')
try:
self.kill(pid, signal.SIGTERM)
except ProcessLookupError:
raise CommandError('Process %d does not exist' % pid) from None
start = time.time()
while time.time() - start < options.timeout:
if os.path.isfile(pid_file):
time.sleep(0.2)
else:
app.write('Process %d terminated' % pid)
return 0
app.write_err('Could not terminate process after %d seconds' %
options.timeout)
self.kill(pid, signal.SIGKILL)
        app.write_err('Process %d killed' % pid)
return 1
def kill(self, pid, sig): # pragma nocover
os.kill(pid, sig)
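The `run` method above implements a common shutdown pattern: a polite signal, a polling loop for the grace period, then escalation. A testable sketch of the same flow with the process interaction injected (`kill` and `is_running` are hypothetical callables standing in for `os.kill` and the pid-file check, so no real process is needed):

```python
import time

def stop_process(kill, is_running, timeout=10.0, poll=0.2):
    # Polite stop first (SIGTERM in the command above)
    kill(False)
    start = time.time()
    while time.time() - start < timeout:
        if not is_running():
            return True           # exited within the grace period
        time.sleep(poll)
    kill(True)                    # escalate (SIGKILL)
    return False
```

The injected-callable shape also makes the grace-period behaviour easy to unit test, which the concrete command above can only exercise against a live process.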
| bsd-3-clause | 084051d9634903092153db1079fbafcb | 30.111111 | 78 | 0.542857 | 4.148148 | false | false | false | false |
quantmind/lux | tests/odm/test_postgresql.py | 1 | 12869 | from dateutil.parser import parse
from lux.utils import test
from tests.odm.utils import OdmUtils
class TestPostgreSql(OdmUtils, test.AppTestCase):
@classmethod
async def beforeAll(cls):
cls.super_token = await cls.user_token('testuser', jwt=cls.admin_jwt)
async def test_odm(self):
tables = await self.app.odm.tables()
self.assertTrue(tables)
self.assertEqual(len(tables), 1)
self.assertEqual(len(tables[0][1]), 11)
def test_rest_model(self):
from tests.odm import CRUDTask, CRUDPerson
model = self.app.models.register(CRUDTask().model)
self.assertEqual(model.name, 'task')
fields = model.fields()
self.assertTrue(fields)
model = self.app.models.register(CRUDPerson().model)
self.assertEqual(model,
self.app.models.register(CRUDPerson().model))
self.assertEqual(model.name, 'person')
self.assertEqual(model.identifier, 'people')
self.assertEqual(model.api_name, 'people_url')
fields = model.fields()
self.assertTrue(fields)
@test.green
def test_simple_session(self):
app = self.app
odm = app.odm()
with odm.begin() as session:
self.assertEqual(session.app, app)
user = odm.user(first_name='Luca')
session.add(user)
self.assertTrue(user.id)
self.assertEqual(user.first_name, 'Luca')
self.assertFalse(user.is_superuser())
async def test_get_tasks(self):
request = await self.client.get(self.api_url('tasks'))
response = request.response
self.assertEqual(response.status_code, 200)
data = self.json(response)
self.assertIsInstance(data, dict)
result = data['result']
self.assertIsInstance(result, list)
async def test_get_tasks_multi(self):
url = self.api_url('tasks?id=1&id=2&id=3')
request = await self.client.get(url)
response = request.response
self.assertEqual(response.status_code, 200)
data = self.json(response)
self.assertIsInstance(data, dict)
result = data['result']
self.assertIsInstance(result, list)
async def test_metadata(self):
request = await self.client.get(self.api_url('tasks/metadata'))
response = request.response
self.assertEqual(response.status_code, 200)
data = self.json(response)
self.assertIsInstance(data, dict)
columns = data['columns']
self.assertIsInstance(columns, list)
self.assertEqual(len(columns), 9)
async def test_create_task(self):
await self._create_task(self.super_token)
async def test_update_task(self):
task = await self._create_task(self.super_token,
'This is another task')
url = self.api_url('tasks/%d' % task['id'])
# Update task
request = await self.client.patch(
url,
json={'done': True}
)
self.json(request.response, 401)
#
# Update task
request = await self.client.patch(
url,
json={'done': True},
token=self.super_token
)
data = self.json(request.response, 200)
self.assertEqual(data['id'], task['id'])
self.assertEqual(data['done'], True)
#
request = await self.client.get(url)
data = self.json(request.response, 200)
self.assertEqual(data['id'], task['id'])
self.assertEqual(data['done'], True)
async def test_delete_task(self):
task = await self._create_task(self.super_token,
'A task to be deleted')
# Delete task
url = self.api_url('tasks/%d' % task['id'])
request = await self.client.delete(url)
self.json(request.response, 401)
# Delete task
request = await self.client.delete(url, token=self.super_token)
self.empty(request.response, 204)
self.assertEqual(len(request.cache.del_items), 1)
self.assertEqual(request.cache.del_items[0]['id'], task['id'])
#
request = await self.client.get(url)
self.json(request.response, 404)
async def test_sortby(self):
await self._create_task(self.super_token, 'We want to sort 1')
await self._create_task(self.super_token, 'We want to sort 2')
request = await self.client.get('/tasks?sortby=created')
data = self.json(request.response, 200)
self.assertIsInstance(data, dict)
result = data['result']
self.assertIsInstance(result, list)
for task1, task2 in zip(result, result[1:]):
dt1 = parse(task1['created'])
dt2 = parse(task2['created'])
self.assertTrue(dt2 > dt1)
#
request = await self.client.get('/tasks?sortby=created:desc')
response = request.response
self.assertEqual(response.status_code, 200)
data = self.json(response)
self.assertIsInstance(data, dict)
result = data['result']
self.assertIsInstance(result, list)
for task1, task2 in zip(result, result[1:]):
dt1 = parse(task1['created'])
dt2 = parse(task2['created'])
self.assertTrue(dt2 < dt1)
async def test_sortby_non_existent(self):
await self._create_task(self.super_token, 'a task')
await self._create_task(self.super_token, 'another task')
request = await self.client.get('/tasks?sortby=fgjsdgj')
data = self.json(request.response, 200)
result = data['result']
self.assertIsInstance(result, list)
async def test_relationship_field(self):
person = await self._create_person(self.super_token, 'spiderman')
task = await self._create_task(self.super_token,
'climb a wall a day',
person)
self.assertTrue('assigned' in task)
request = await self.client.get('/tasks/%s' % task['id'])
data = self.json(request.response, 200)
self.assertEqual(data['assigned'], task['assigned'])
async def test_relationship_field_failed(self):
data = {'subject': 'climb a wall a day',
'assigned': 6868897}
request = await self.client.post('/tasks',
json=data,
token=self.super_token)
self.assertValidationError(request.response, 'assigned',
'Invalid person')
async def test_unique_field(self):
await self._create_person(self.super_token, 'spiderman1', 'luca')
data = dict(username='spiderman1', name='john')
request = await self.client.post('/people',
json=data,
token=self.super_token)
self.assertValidationError(request.response, 'username',
'spiderman1 not available')
async def test_unique_field_update_fail(self):
"""
Tests that it's not possible to update a unique field of an
existing record to that of another
"""
await self._create_person(self.super_token, 'spiderfail1', 'luca')
data = await self._create_person(self.super_token,
'spiderfail2', 'pippo')
request = await self.client.patch(
self.api_url('people/%s' % data['id']),
json={'username': 'spiderfail1', 'name': 'pluto'},
token=self.super_token
)
self.assertValidationError(request.response, 'username',
'spiderfail1 not available')
async def test_unique_field_update_unchanged(self):
"""
        Tests that an update of an existing model instance works if the
unique field hasn't changed
"""
data = await self._create_person(self.super_token,
'spiderstale1', 'luca')
await self._update_person(self.super_token,
data['id'],
'spiderstale1',
'lucachanged')
async def test_enum_field(self):
data = await self._create_task(self.super_token, enum_field='opt1')
self.assertEqual(data['enum_field'], 'opt1')
data = await self._get_task(self.super_token, id=data['id'])
self.assertEqual(data['enum_field'], 'opt1')
async def test_enum_field_fail(self):
request = await self.client.post('/tasks',
json={'enum_field': 'opt3'},
token=self.super_token)
response = request.response
self.assertValidationError(response, 'enum_field',
'opt3 is not a valid choice')
async def test_metadata_users(self):
request = await self.client.get('/users/metadata')
data = self.json(request.response, 200)
columns = data['columns']
self.assertTrue(columns)
async def test_preflight_request(self):
request = await self.client.options('/users')
response = request.response
self.assertEqual(response.status_code, 200)
self.checkOptions(request.response, ['GET', 'POST', 'HEAD'])
task = await self._create_task(self.super_token,
'testing preflight on id')
request = await self.client.options('/tasks/%s' % task['id'])
response = request.response
self.assertEqual(response.status_code, 200)
self.checkOptions(request.response,
['GET', 'PATCH', 'HEAD', 'DELETE'])
async def test_head_request(self):
request = await self.client.head('/tasks/8676097')
self.empty(request.response, 404)
task = await self._create_task(self.super_token,
'testing head request')
request = await self.client.head('/tasks/%s' % task['id'])
response = request.response
self.assertEqual(response.status_code, 200)
self.assertFalse(response.content)
async def test_limit(self):
await self._create_task(self.super_token, 'whatever')
await self._create_task(self.super_token, 'do everything')
request = await self.client.get('/tasks?limit=-1')
data = self.json(request.response, 200)
result = data['result']
self.assertIsInstance(result, list)
self.assertTrue(len(result) >= 2)
request = await self.client.get('/tasks?limit=-1&offset=-89')
data = self.json(request.response, 200)
result = data['result']
self.assertIsInstance(result, list)
self.assertTrue(len(result) >= 2)
request = await self.client.get('/tasks?limit=sdc&offset=hhh')
data = self.json(request.response, 200)
result = data['result']
self.assertIsInstance(result, list)
self.assertTrue(len(result) >= 2)
async def test_update_related_field(self):
person1 = await self._create_person(self.super_token, 'abcdfg1')
person2 = await self._create_person(self.super_token, 'abcdfg2')
task = await self._create_task(self.super_token,
'climb a wall a day',
person1)
self.assertTrue('assigned' in task)
url = self.api_url('tasks/%s' % task['id'])
#
request = await self.client.get(url)
data = self.json(request.response, 200)
self.assertEqual(data['assigned']['id'], person1['id'])
#
# Updated with same assigned person
request = await self.client.patch(
url,
json={'assigned': person1['id']},
token=self.super_token
)
data = self.json(request.response, 200)
self.assertEqual(data['assigned']['id'], person1['id'])
#
# Change the assigned person
request = await self.client.patch(
url,
json={'assigned': person2['id']},
token=self.super_token
)
data = self.json(request.response, 200)
self.assertEqual(data['assigned']['id'], person2['id'])
#
# Change to non assigned
request = await self.client.patch(
url,
json={'assigned': ''},
token=self.super_token
)
data = self.json(request.response, 200)
self.assertTrue('assigned' not in data)
request = await self.client.get(url)
data = self.json(request.response, 200)
self.assertTrue('assigned' not in data)
| bsd-3-clause | 9707494daa9c10fd09c7c90a370ad54b | 39.724684 | 77 | 0.571062 | 4.132627 | false | true | false | false |
quantmind/lux | lux/ext/auth/rest/authorization.py | 1 | 4160 | from pulsar.api import Http401, BadRequest, UnprocessableEntity
from lux.models import Schema, fields, ValidationError
from lux.utils.date import date_from_now
from lux.ext.rest import RestRouter, route
from lux.ext.odm import Model
from ..backend import AuthenticationError
from .tokens import TokenSchema
from . import ensure_service_user
class LoginSchema(Schema):
"""The Standard login schema
"""
username = fields.Slug(
required=True,
minLength=2,
maxLength=30
)
password = fields.Password(
required=True,
minLength=2,
maxLength=128
)
class AuthorizeSchema(LoginSchema):
expiry = fields.Int()
user_agent = fields.String()
ip_address = fields.String()
def post_load(self, data):
"""Perform authentication by creating a session token if possible
"""
session = self.model.object_session(data)
maxexp = date_from_now(session.config['MAX_TOKEN_SESSION_EXPIRY'])
data['user'] = session.auth.authenticate(session, **data)
if not data['user']:
raise ValidationError('Invalid username or password')
data.pop('username')
data.pop('password')
data['session'] = True
data['expiry'] = min(data.get('expiry') or maxexp, maxexp)
# create the db token
tokens = session.models['tokens']
return tokens.create_one(session, data, tokens.model_schema)
class AuthorizationModel(Model):
def data_and_files(self, request):
data, files = super().data_and_files(request)
maxexp = request.config['MAX_TOKEN_SESSION_EXPIRY']
data['expiry'] = min(data.get('expiry') or maxexp, maxexp)
data['user_agent'] = request.get('HTTP_USER_AGENT')
return data, files
def create_instance(self, session, data):
try:
data['user'] = session.auth.authenticate(
session,
username=data.pop('username'),
password=data.pop('password')
)
except AuthenticationError as exc:
raise UnprocessableEntity(str(exc)) from None
if not data['user']:
raise ValidationError('Invalid username or password')
class Authorization(RestRouter):
"""
---
summary: Authentication path
description: provide operation for creating new authentication tokens
and check their validity
tags:
- authentication
"""
model = AuthorizationModel('authorizations', TokenSchema, db_name='token')
@route()
def head(self, request):
"""
---
summary: Check token validity
description: Check validity of the token in the
Authorization header. It works for both user and
application tokens.
responses:
200:
description: Token is valid
400:
description: Bad Token
401:
description: Token is expired or not available
"""
if not request.cache.get('token'):
raise Http401
return request.response
@route(body_schema=AuthorizeSchema,
default_response=201,
default_response_schema=TokenSchema,
responses=(400, 401, 403, 422))
def post(self, request, **kw):
"""
---
summary: Create a new user token
description: The headers must contain a valid token
signed by the application sending the request
"""
ensure_service_user(request)
return self.model.create_response(request, **kw)
@route(default_response=204,
responses=(401, 403))
def delete(self, request):
"""
---
summary: Delete the token used by the authenticated User
description: A valid bearer token must be available in the
Authorization header
"""
token = request.cache.get('token')
if not request.cache.user.is_authenticated():
if not token:
raise Http401
raise BadRequest
return self.model.delete_one_response(request, token)
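Both `data_and_files` and `AuthorizeSchema.post_load` above apply the same clamping rule to the requested token expiry. Isolated as a tiny helper for illustration (`clamp_expiry` is an invented name, and plain numbers are used here; the code above mixes config values and dates, but `min` behaves the same way on either):

```python
def clamp_expiry(requested, max_expiry):
    # Missing or falsy requests fall back to the maximum session expiry;
    # explicit requests are capped at it, mirroring the
    # min(data.get('expiry') or maxexp, maxexp) expression above.
    return min(requested or max_expiry, max_expiry)

assert clamp_expiry(None, 3600) == 3600    # no request: use the maximum
assert clamp_expiry(60, 3600) == 60        # modest request honoured
assert clamp_expiry(10**6, 3600) == 3600   # greedy request capped
```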
# ---------------------------------------------------------------------------
# quantmind/lux -- lux/ext/sockjs/socketio.py (bsd-3-clause)
# ---------------------------------------------------------------------------
import sys
import hashlib
from random import randint
from pulsar.api import HttpException
from pulsar.apps.wsgi import Router, route
from pulsar.utils.httpurl import CacheControl
from .transports.websocket import WebSocket
from .utils import IFRAME_TEXT
class SocketIO(Router):
"""A Router for sockjs requests
"""
info_cache = CacheControl(nostore=True)
home_cache = CacheControl(maxage=60*60*24*30)
def __init__(self, route, handle, **kwargs):
super().__init__(route, **kwargs)
self.handle = handle
self.add_child(WebSocket('/websocket', self.handle, **kwargs))
self.add_child(WebSocket('<server_id>/<session_id>/websocket',
self.handle, **kwargs))
def get(self, request):
response = request.response
self.home_cache(response.headers)
response.content_type = 'text/plain'
response.content = 'Welcome to SockJS!\n'
return response
@route(method=('options', 'get'))
def info(self, request):
response = request.response
self.info_cache(response.headers)
self.origin(request)
return request.json_response({
'websocket': request.config['WEBSOCKET_AVAILABLE'],
'origins': ['*:*'],
'entropy': randint(0, sys.maxsize)
})
@route('iframe[0-9-.a-z_]*.html', re=True,
response_content_types=('text/html',))
def iframe(self, request):
response = request.response
url = request.absolute_uri(self.full_route.path)
response.content = IFRAME_TEXT % url
hsh = hashlib.md5(response.content[0]).hexdigest()
value = request.get('HTTP_IF_NONE_MATCH')
if value and value.find(hsh) != -1:
raise HttpException(status=304)
self.home_cache(response.headers)
response['Etag'] = hsh
return response
def origin(self, request):
"""Handles request authentication
"""
response = request.response
origin = request.get('HTTP_ORIGIN', '*')
# Respond with '*' to 'null' origin
if origin == 'null':
origin = '*'
response['Access-Control-Allow-Origin'] = origin
headers = request.get('HTTP_ACCESS_CONTROL_REQUEST_HEADERS')
if headers:
response['Access-Control-Allow-Headers'] = headers
response['Access-Control-Allow-Credentials'] = 'true'
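The `iframe` handler above implements conditional GET by hand: hash the body, compare the digest against the `If-None-Match` header, and short-circuit with a 304 when they agree. A minimal sketch of that exchange (`conditional_body` is a hypothetical helper, and status codes are returned directly rather than raised, purely for illustration):

```python
import hashlib

def conditional_body(body, if_none_match=None):
    # body: page bytes; if_none_match: raw If-None-Match header value, if any
    etag = hashlib.md5(body).hexdigest()
    if if_none_match is not None and if_none_match.find(etag) != -1:
        return 304, etag, b''     # client copy is still fresh
    return 200, etag, body

status, etag, payload = conditional_body(b'<html>iframe</html>')
assert status == 200
# A revalidation request carrying the etag gets an empty 304
status, _, payload = conditional_body(b'<html>iframe</html>',
                                      if_none_match=etag)
assert status == 304 and payload == b''
```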
# ---------------------------------------------------------------------------
# morepath/morepath -- morepath/request.py (bsd-3-clause)
# ---------------------------------------------------------------------------
"""Morepath request implementation.
Entirely documented in :class:`morepath.Request` and
:class:`morepath.Response` in the public API.
"""
from webob import BaseRequest, Response as BaseResponse
from dectate import Sentinel
from .reify import reify
from .traject import create_path, parse_path
from .error import LinkError
from .authentication import NO_IDENTITY
SAME_APP = Sentinel("SAME_APP")
class Request(BaseRequest):
"""Request.
Extends :class:`webob.request.BaseRequest`
"""
path_code_info = None
view_code_info = None
def __init__(self, environ, app, **kw):
super().__init__(environ, **kw)
# parse path, normalizing dots away in
# in case the client didn't do the normalization
path_info = self.path_info
segments = parse_path(path_info)
# optimization: only if the normalized path is different from the
# original path do we set it to the webob request, as this is
# relatively expensive. Webob updates the environ as well
new_path_info = create_path(segments)
if new_path_info != path_info:
self.path_info = new_path_info
# reverse to get unconsumed
segments.reverse()
self.unconsumed = segments
"""Stack of path segments that have not yet been consumed.
See :mod:`morepath.publish`.
"""
self._root_app = app
self.app = app
""":class:`morepath.App` instance currently handling request.
"""
self._after = []
self._link_prefix_cache = {}
def reset(self):
"""Reset request.
This resets the request back to the state it had when request
processing started. This is used by ``more.transaction`` when it
retries a transaction.
"""
self.make_body_seekable()
segments = parse_path(self.path_info)
segments.reverse()
self.unconsumed = segments
self.app = self._root_app
self._after = []
@reify
def identity(self):
"""Self-proclaimed identity of the user.
The identity is established using the identity policy. Normally
this would be an instance of :class:`morepath.Identity`.
If no identity is claimed or established, or if the identity
        is not verified by the application, the identity is the
special value :attr:`morepath.NO_IDENTITY`.
The identity can be used for authentication/authorization of
the user, using Morepath permission directives.
"""
result = self.app._identify(self)
if result is None or result is NO_IDENTITY:
return NO_IDENTITY
if not self.app._verify_identity(result):
return NO_IDENTITY
return result
def link_prefix(self, app=None):
"""Prefix to all links created by this request.
:param app: Optionally use the given app to create the link.
This leads to use of the link prefix configured for the given app.
This parameter is mainly used internally for link creation.
"""
app = app or self.app
cached = self._link_prefix_cache.get(app.__class__)
if cached is not None:
return cached
prefix = self._link_prefix_cache[app.__class__] = app._link_prefix(self)
return prefix
def view(self, obj, default=None, app=SAME_APP, **predicates):
"""Call view for model instance.
This does not render the view, but calls the appropriate
view function and returns its result.
:param obj: the model instance to call the view on.
:param default: default value if view is not found.
:param app: If set, change the application in which to look up
the view. By default the view is looked up for the current
application. The ``defer_links`` directive can be used to change
the default app for all instances of a particular class.
:param predicates: extra predicates to modify view
lookup, such as ``name`` and ``request_method``. The default
``name`` is empty, so the default view is looked up,
and the default ``request_method`` is ``GET``. If you introduce
your own predicates you can specify your own default.
"""
if app is None:
raise LinkError("Cannot view: app is None")
if app is SAME_APP:
app = self.app
predicates["model"] = obj.__class__
def find(app, obj):
return app.get_view.by_predicates(**predicates).component
view, app = app._follow_defers(find, obj)
if view is None:
return default
old_app = self.app
self.app = app
        # need to call view.func directly, as the view is registered as a
        # plain function, not as a bound method
result = view.func(obj, self)
self.app = old_app
return result
def link(self, obj, name="", default=None, app=SAME_APP):
"""Create a link (URL) to a view on a model instance.
The resulting link is prefixed by the link prefix. By default
this is the full URL based on the Host header.
You can configure the link prefix for an application using the
:meth:`morepath.App.link_prefix` directive.
If no link can be constructed for the model instance, a
:exc:`morepath.error.LinkError` is raised. ``None`` is treated
specially: if ``None`` is passed in the default value is
returned.
The :meth:`morepath.App.defer_links` or
:meth:`morepath.App.defer_class_links` directives can be used
to defer link generation for all instances of a particular
class (if this app doesn't handle them) to another app.
:param obj: the model instance to link to, or ``None``.
:param name: the name of the view to link to. If omitted, the
        default view is looked up.
:param default: if ``None`` is passed in, the default value is
returned. By default this is ``None``.
:param app: If set, change the application to which the
link is made. By default the link is made to an object
in the current application.
"""
if obj is None:
return default
if app is None:
raise LinkError("Cannot link: app is None")
if app is SAME_APP:
app = self.app
info, app = app._get_deferred_mounted_path(obj)
if info is None:
raise LinkError("Cannot link to: %r" % obj)
return info.url(self.link_prefix(app), name)
def class_link(self, model, variables=None, name="", app=SAME_APP):
"""Create a link (URL) to a view on a class.
Given a model class and a variables dictionary, create a link
based on the path registered for the class and interpolate the
variables.
If you have an instance of the model available you'd link to the
model instead, but in some cases it is expensive to instantiate
the model just to create a link. In this case `class_link` can be
used as an optimization.
The :meth:`morepath.App.defer_class_links` directive can be
used to defer link generation for a particular class (if this
app doesn't handle them) to another app.
Note that the :meth:`morepath.App.defer_links` directive has
**no** effect on ``class_link``, as it needs an instance of the
model to work, which is not available.
If no link can be constructed for the model class, a
:exc:`morepath.error.LinkError` is raised. This error is
also raised if you don't supply enough variables. Additional
variables not used in the path are interpreted as URL
parameters.
:param model: the model class to link to.
:param variables: a dictionary with as keys the variable names,
and as values the variable values. These are used to construct
the link URL. If omitted, the dictionary is treated as containing
no variables.
:param name: the name of the view to link to. If omitted, the
        default view is looked up.
:param app: If set, change the application to which the
link is made. By default the link is made to an object
in the current application.
"""
if variables is None:
variables = {}
if app is None:
raise LinkError("Cannot link: app is None")
if app is SAME_APP:
app = self.app
info = app._get_deferred_mounted_class_path(model, variables)
if info is None:
raise LinkError("Cannot link to class: %r" % model)
return info.url(self.link_prefix(), name)
def resolve_path(self, path, app=SAME_APP):
"""Resolve a path to a model instance.
The resulting object is a model instance, or ``None`` if the
path could not be resolved.
:param path: URL path to resolve.
:param app: If set, change the application in which the
path is resolved. By default the path is resolved in the
current application.
:return: instance or ``None`` if no path could be resolved.
"""
if app is None:
raise LinkError("Cannot path: app is None")
if app is SAME_APP:
app = self.app
request = Request(self.environ.copy(), app, path_info=path)
        # imported here to avoid a circular import
from .publish import resolve_model
return resolve_model(request)
def after(self, func):
"""Call a function with the response after a successful request.
A request is considered *successful* if the HTTP status is a 2XX or a
3XX code (e.g. 200 OK, 204 No Content, 302 Found).
In this case ``after`` *is* called.
A request is considered *unsuccessful* if the HTTP status lies outside
the 2XX-3XX range (e.g. 403 Forbidden, 404 Not Found,
500 Internal Server Error). Usually this happens if an exception
occurs. In this case ``after`` is *not* called.
Some exceptions indicate a successful request however and their
occurrence still leads to a call to ``after``. These exceptions
inherit from either :class:`webob.exc.HTTPOk` or
:class:`webob.exc.HTTPRedirection`.
You use `request.after` inside a view function definition.
It can be used explicitly::
@App.view(model=SomeModel)
def some_model_default(self, request):
def myfunc(response):
response.headers.add('blah', 'something')
request.after(my_func)
or as a decorator::
@App.view(model=SomeModel)
def some_model_default(self, request):
@request.after
def myfunc(response):
response.headers.add('blah', 'something')
:param func: callable that is called with response
:return: func argument, not wrapped
"""
self._after.append(func)
return func
def _run_after(self, response):
"""Run callbacks registered with :meth:`morepath.Request.after`."""
# if we don't have anything to run, don't even check status
if not self._after:
return
        # only run the callbacks for 2XX and 3XX responses
if response.status[0] not in ("2", "3"):
return
for after in self._after:
after(response)
def clear_after(self):
self._after = []
class Response(BaseResponse):
"""Response.
Extends :class:`webob.response.Response`.
"""
| bsd-3-clause | 4b4aa869b42a40e22426bff6dc388484 | 34.421687 | 80 | 0.619133 | 4.379888 | false | false | false | false |
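The `after`/`_run_after` pair in `request.py` above registers response callbacks during a request and fires them only for successful (2XX/3XX) responses. The mechanism can be sketched in isolation — a minimal model with hypothetical `MiniRequest`/`MiniResponse` names, not morepath's actual classes:

```python
class MiniResponse:
    def __init__(self, status):
        self.status = status  # WebOb-style status string, e.g. "200 OK"


class MiniRequest:
    def __init__(self):
        self._after = []

    def after(self, func):
        # usable as a plain call or as a decorator, like request.after;
        # the function is returned unwrapped
        self._after.append(func)
        return func

    def run_after(self, response):
        # skip the status check entirely if nothing was registered
        if not self._after:
            return
        # only fire the callbacks for 2XX and 3XX responses
        if response.status[0] not in ("2", "3"):
            return
        for callback in self._after:
            callback(response)


request = MiniRequest()
seen = []

@request.after
def record(response):
    seen.append(response.status)

request.run_after(MiniResponse("404 Not Found"))  # unsuccessful: skipped
request.run_after(MiniResponse("200 OK"))         # successful: fires
assert seen == ["200 OK"]
```

As in morepath, returning `func` unwrapped from the decorator means the view code can still call the callback directly if it wants to.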
morepath/morepath | morepath/tests/test_mount_directive.py | 1 | 29067 | import morepath
from morepath.error import LinkError, ConflictError
from webtest import TestApp as Client
import pytest
def test_model_mount_conflict():
class app(morepath.App):
pass
class app2(morepath.App):
pass
class A:
pass
@app.path(model=A, path="a")
def get_a():
return A()
@app.mount(app=app2, path="a")
def get_mount():
return app2()
with pytest.raises(ConflictError):
app.commit()
def test_mount_basic():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, id):
self.id = id
@mounted.path(path="")
class MountedRoot:
pass
@mounted.view(model=MountedRoot)
def root_default(self, request):
return "The root"
@mounted.view(model=MountedRoot, name="link")
def root_link(self, request):
return request.link(self)
@app.mount(path="{id}", app=mounted)
def get_mounted(id):
return mounted(id=id)
c = Client(app())
response = c.get("/foo")
assert response.body == b"The root"
response = c.get("/foo/link")
assert response.body == b"http://localhost/foo"
def test_mounted_app_classes():
class App(morepath.App):
pass
class Mounted(morepath.App):
def __init__(self, id):
self.id = id
class Sub(morepath.App):
pass
@App.mount(path="{id}", app=Mounted)
def get_mounted(id):
return Mounted(id=id)
@Mounted.mount(path="sub", app=Sub)
def get_sub():
return Sub()
assert App.commit() == {App, Mounted, Sub}
assert App.mounted_app_classes() == {App, Mounted, Sub}
def test_mounted_app_classes_nothing_mounted():
class App(morepath.App):
pass
assert App.commit() == {App}
assert App.mounted_app_classes() == {App}
def test_mount_none_should_fail():
class app(morepath.App):
pass
class mounted(morepath.App):
pass
@mounted.path(path="")
class MountedRoot:
pass
@mounted.view(model=MountedRoot)
def root_default(self, request):
return "The root"
@mounted.view(model=MountedRoot, name="link")
def root_link(self, request):
return request.link(self)
@app.mount(path="{id}", app=mounted)
def mount_mounted(id):
return None
c = Client(app())
c.get("/foo", status=404)
c.get("/foo/link", status=404)
def test_mount_context():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@mounted.path(path="")
class MountedRoot:
def __init__(self, app):
self.mount_id = app.mount_id
@mounted.view(model=MountedRoot)
def root_default(self, request):
return "The root for mount id: %s" % self.mount_id
@app.mount(path="{id}", app=mounted)
def get_context(id):
return mounted(mount_id=id)
c = Client(app())
response = c.get("/foo")
assert response.body == b"The root for mount id: foo"
response = c.get("/bar")
assert response.body == b"The root for mount id: bar"
def test_mount_context_parameters():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@mounted.path(path="")
class MountedRoot:
def __init__(self, app):
assert isinstance(app.mount_id, int)
self.mount_id = app.mount_id
@mounted.view(model=MountedRoot)
def root_default(self, request):
return "The root for mount id: %s" % self.mount_id
@app.mount(path="mounts", app=mounted)
def get_context(mount_id=0):
return mounted(mount_id=mount_id)
c = Client(app())
response = c.get("/mounts?mount_id=1")
assert response.body == b"The root for mount id: 1"
response = c.get("/mounts")
assert response.body == b"The root for mount id: 0"
def test_mount_context_parameters_override_default():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@mounted.path(path="")
class MountedRoot:
def __init__(self, app, mount_id):
self.mount_id = mount_id
self.app_mount_id = app.mount_id
@mounted.view(model=MountedRoot)
def root_default(self, request):
return "mount_id: {} app_mount_id: {}".format(
self.mount_id,
self.app_mount_id,
)
@app.mount(path="{id}", app=mounted)
def get_context(id):
return mounted(mount_id=id)
c = Client(app())
response = c.get("/foo")
assert response.body == b"mount_id: None app_mount_id: foo"
# the URL parameter mount_id cannot interfere with the mounting
# process
response = c.get("/bar?mount_id=blah")
assert response.body == b"mount_id: blah app_mount_id: bar"
def test_mount_context_standalone():
class app(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@app.path(path="")
class Root:
def __init__(self, app):
self.mount_id = app.mount_id
@app.view(model=Root)
def root_default(self, request):
return "The root for mount id: %s" % self.mount_id
c = Client(app(mount_id="foo"))
response = c.get("/")
assert response.body == b"The root for mount id: foo"
def test_mount_parent_link():
class app(morepath.App):
pass
@app.path(path="models/{id}")
class Model:
def __init__(self, id):
self.id = id
class mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@mounted.path(path="")
class MountedRoot:
def __init__(self, mount_id):
self.mount_id = mount_id
@mounted.view(model=MountedRoot)
def root_default(self, request):
return request.link(Model("one"), app=request.app.parent)
@app.mount(path="{id}", app=mounted)
def get_context(id):
return mounted(mount_id=id)
c = Client(app())
response = c.get("/foo")
assert response.body == b"http://localhost/models/one"
def test_mount_child_link():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@mounted.path(path="models/{id}")
class Model:
def __init__(self, id):
self.id = id
@app.path(path="")
class Root:
pass
@app.view(model=Root)
def app_root_default(self, request):
child = request.app.child(mounted, id="foo")
return request.link(Model("one"), app=child)
@app.view(model=Root, name="inst")
def app_root_inst(self, request):
child = request.app.child(mounted(mount_id="foo"))
return request.link(Model("one"), app=child)
@app.mount(path="{id}", app=mounted, variables=lambda a: {"id": a.mount_id})
def get_context(id):
return mounted(mount_id=id)
c = Client(app())
response = c.get("/")
assert response.body == b"http://localhost/foo/models/one"
response = c.get("/+inst")
assert response.body == b"http://localhost/foo/models/one"
def test_mount_sibling_link():
class app(morepath.App):
pass
class first(morepath.App):
pass
class second(morepath.App):
pass
@first.path(path="models/{id}")
class FirstModel:
def __init__(self, id):
self.id = id
@first.view(model=FirstModel)
def first_model_default(self, request):
sibling = request.app.sibling("second")
return request.link(SecondModel(2), app=sibling)
@second.path(path="foos/{id}")
class SecondModel:
def __init__(self, id):
self.id = id
@app.path(path="")
class Root:
pass
@app.mount(path="first", app=first)
def get_context_first():
return first()
@app.mount(path="second", app=second)
def get_context_second():
return second()
c = Client(app())
response = c.get("/first/models/1")
assert response.body == b"http://localhost/second/foos/2"
def test_mount_sibling_link_at_root_app():
class app(morepath.App):
pass
@app.path(path="")
class Root:
pass
class Item:
def __init__(self, id):
self.id = id
@app.view(model=Root)
def root_default(self, request):
sibling = request.app.sibling("foo")
return request.link(Item(3), app=sibling)
c = Client(app())
with pytest.raises(LinkError):
c.get("/")
def test_mount_child_link_unknown_child():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@mounted.path(path="models/{id}")
class Model:
def __init__(self, id):
self.id = id
@app.path(path="")
class Root:
pass
@app.view(model=Root)
def app_root_default(self, request):
child = request.app.child(mounted, id="foo")
if child is None:
return "link error"
return request.link(Model("one"), app=child)
@app.view(model=Root, name="inst")
def app_root_inst(self, request):
child = request.app.child(mounted(mount_id="foo"))
if child is None:
return "link error"
return request.link(Model("one"), app=child)
# no mount directive so linking will fail
c = Client(app())
response = c.get("/")
assert response.body == b"link error"
response = c.get("/+inst")
assert response.body == b"link error"
def test_mount_child_link_unknown_parent():
class app(morepath.App):
pass
class Model:
def __init__(self, id):
self.id = id
@app.path(path="")
class Root:
pass
@app.view(model=Root)
def app_root_default(self, request):
parent = request.app.parent
if parent is None:
return "link error"
return request.link(Model("one"), app=parent)
c = Client(app())
response = c.get("/")
assert response.body == b"link error"
def test_mount_child_link_unknown_app():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@mounted.path(path="models/{id}")
class Model:
def __init__(self, id):
self.id = id
@app.path(path="")
class Root:
pass
@app.view(model=Root)
def app_root_default(self, request):
child = request.app.child(mounted, id="foo")
try:
return request.link(Model("one"), app=child)
except LinkError:
return "link error"
# no mounting, so mounted is unknown when making link
c = Client(app())
response = c.get("/")
assert response.body == b"link error"
def test_mount_link_prefix():
class App(morepath.App):
pass
class Mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@App.mount(
path="/mnt/{id}", app=Mounted, variables=lambda a: dict(id=a.mount_id)
)
def get_mounted(id):
return Mounted(mount_id=id)
@App.path(path="")
class AppRoot:
pass
@Mounted.path(path="")
class MountedRoot:
pass
@App.link_prefix()
def link_prefix(request):
return "http://app"
@Mounted.link_prefix()
def mounted_link_prefix(request):
return "http://mounted"
@App.view(model=AppRoot, name="get-root-link")
def get_root_link(self, request):
return request.link(self)
@Mounted.view(model=MountedRoot, name="get-mounted-root-link")
def get_mounted_root_link(self, request):
return request.link(self)
@Mounted.view(model=MountedRoot, name="get-root-link-through-mount")
def get_root_link_through_mount(self, request):
parent = request.app.parent
return request.view(AppRoot(), app=parent, name="get-root-link")
c = Client(App())
# response = c.get('/get-root-link')
# assert response.body == b'http://app/'
# response = c.get('/mnt/1/get-mounted-root-link')
# assert response.body == b'http://mounted/mnt/1'
response = c.get("/mnt/1/get-root-link-through-mount")
assert response.body == b"http://app/"
response = c.get("/get-root-link")
assert response.body == b"http://app/"
def test_request_view_in_mount():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@app.path(path="")
class Root:
pass
@mounted.path(path="models/{id}")
class Model:
def __init__(self, id):
self.id = id
@mounted.view(model=Model)
def model_default(self, request):
return {"hey": "Hey"}
@app.view(model=Root)
def root_default(self, request):
child = request.app.child(mounted, id="foo")
return request.view(Model("x"), app=child)["hey"]
@app.view(model=Root, name="inst")
def root_inst(self, request):
child = request.app.child(mounted(mount_id="foo"))
return request.view(Model("x"), app=child)["hey"]
@app.mount(
path="{id}", app=mounted, variables=lambda a: dict(id=a.mount_id)
)
def get_context(id):
return mounted(mount_id=id)
c = Client(app())
response = c.get("/")
assert response.body == b"Hey"
response = c.get("/+inst")
assert response.body == b"Hey"
def test_request_link_child_child():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
class submounted(morepath.App):
pass
@app.path(path="")
class Root:
pass
@app.view(model=Root)
def root_default(self, request):
child = request.app.child(mounted, id="foo").child(submounted)
return request.view(SubRoot(), app=child)
@app.view(model=Root, name="inst")
def root_inst(self, request):
child = request.app.child(mounted(mount_id="foo")).child(submounted())
return request.view(SubRoot(), app=child)
@app.view(model=Root, name="info")
def root_info(self, request):
return "info"
@app.mount(
path="{id}", app=mounted, variables=lambda a: dict(mount_id=a.mount_id)
)
def get_context(id):
return mounted(mount_id=id)
@mounted.mount(path="sub", app=submounted)
def get_context2():
return submounted()
@submounted.path(path="")
class SubRoot:
pass
@submounted.view(model=SubRoot)
def subroot_default(self, request):
return "SubRoot"
@submounted.view(model=SubRoot, name="parentage")
def subroot_parentage(self, request):
ancestor = request.app.parent.parent
return request.view(Root(), name="info", app=ancestor)
c = Client(app())
response = c.get("/")
assert response.body == b"SubRoot"
response = c.get("/+inst")
assert response.body == b"SubRoot"
response = c.get("/foo/sub/parentage")
assert response.body == b"info"
def test_request_view_in_mount_broken():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@app.path(path="")
class Root:
pass
@mounted.path(path="models/{id}")
class Model:
def __init__(self, id):
self.id = id
@mounted.view(model=Model)
def model_default(self, request):
return {"hey": "Hey"}
@app.view(model=Root)
def root_default(self, request):
child = request.app.child(mounted, id="foo")
try:
return request.view(Model("x"), app=child)["hey"]
except LinkError:
return "link error"
@app.view(model=Root, name="inst")
def root_inst(self, request):
child = request.app.child(mounted(mount_id="foo"))
try:
return request.view(Model("x"), app=child)["hey"]
except LinkError:
return "link error"
@app.view(model=Root, name="doublechild")
def doublechild(self, request):
try:
request.app.child(mounted, id="foo").child(mounted, id="bar")
except AttributeError:
return "link error"
@app.view(model=Root, name="childparent")
def childparent(self, request):
try:
request.app.child(mounted, id="foo").parent
except AttributeError:
return "link error"
# deliberately don't mount so using view is broken
c = Client(app())
response = c.get("/")
assert response.body == b"link error"
response = c.get("/+inst")
assert response.body == b"link error"
response = c.get("/doublechild")
assert response.body == b"link error"
response = c.get("/childparent")
assert response.body == b"link error"
def test_mount_implicit_converters():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, id):
self.id = id
class MountedRoot:
def __init__(self, id):
self.id = id
@mounted.path(path="", model=MountedRoot)
def get_root(app):
return MountedRoot(app.id)
@mounted.view(model=MountedRoot)
def root_default(self, request):
return f"The root for: {self.id} {type(self.id)}"
@app.mount(path="{id}", app=mounted)
def get_context(id=0):
return mounted(id=id)
c = Client(app())
response = c.get("/1")
assert response.body in (
b"The root for: 1 <type 'int'>",
b"The root for: 1 <class 'int'>",
)
def test_mount_explicit_converters():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, id):
self.id = id
class MountedRoot:
def __init__(self, id):
self.id = id
@mounted.path(path="", model=MountedRoot)
def get_root(app):
return MountedRoot(id=app.id)
@mounted.view(model=MountedRoot)
def root_default(self, request):
return f"The root for: {self.id} {type(self.id)}"
@app.mount(path="{id}", app=mounted, converters=dict(id=int))
def get_context(id):
return mounted(id=id)
c = Client(app())
response = c.get("/1")
assert response.body in (
b"The root for: 1 <type 'int'>",
b"The root for: 1 <class 'int'>",
)
def test_mount_view_in_child_view():
class app(morepath.App):
pass
class fooapp(morepath.App):
pass
@app.path(path="")
class Root:
pass
@app.view(model=Root)
def default_homepage(self, request):
return request.view(FooRoot(), app=request.app.child(fooapp))
@fooapp.path(path="")
class FooRoot:
pass
@fooapp.view(model=FooRoot, name="name")
def foo_name(self, request):
return "Foo"
@fooapp.view(model=FooRoot)
def foo_default(self, request):
return "Hello " + request.view(self, name="name")
@app.mount(path="foo", app=fooapp)
def mount_to_root():
return fooapp()
c = Client(app())
response = c.get("/foo")
assert response.body == b"Hello Foo"
response = c.get("/")
assert response.body == b"Hello Foo"
def test_mount_view_in_child_view_then_parent_view():
class app(morepath.App):
pass
class fooapp(morepath.App):
pass
@app.path(path="")
class Root:
pass
@app.view(model=Root)
def default_homepage(self, request):
other = request.app.child(fooapp)
return (
request.view(FooRoot(), app=other)
+ " "
+ request.view(self, name="other")
)
@app.view(model=Root, name="other")
def root_other(self, request):
return "other"
@fooapp.path(path="")
class FooRoot:
pass
@fooapp.view(model=FooRoot, name="name")
def foo_name(self, request):
return "Foo"
@fooapp.view(model=FooRoot)
def foo_default(self, request):
return "Hello " + request.view(self, name="name")
@app.mount(path="foo", app=fooapp)
def mount_to_root():
return fooapp()
c = Client(app())
response = c.get("/")
assert response.body == b"Hello Foo other"
def test_mount_directive_with_link_and_absorb():
class app1(morepath.App):
pass
@app1.path(path="")
class Model1:
pass
class app2(morepath.App):
pass
class Model2:
def __init__(self, absorb):
self.absorb = absorb
@app2.path(model=Model2, path="", absorb=True)
def get_model(absorb):
return Model2(absorb)
@app2.view(model=Model2)
def default(self, request):
return f"A:{self.absorb} L:{request.link(self)}"
@app1.mount(path="foo", app=app2)
def get_mount():
return app2()
c = Client(app1())
response = c.get("/foo")
assert response.body == b"A: L:http://localhost/foo"
response = c.get("/foo/bla")
assert response.body == b"A:bla L:http://localhost/foo/bla"
def test_mount_named_child_link_explicit_name():
class app(morepath.App):
pass
class mounted(morepath.App):
pass
@mounted.path(path="models/{id}")
class Model:
def __init__(self, id):
self.id = id
@app.path(path="")
class Root:
pass
@app.view(model=Root)
def app_root_default(self, request):
return request.link(Model("one"), app=request.app.child(mounted))
@app.view(model=Root, name="extra")
def app_root_default2(self, request):
return request.link(Model("one"), app=request.app.child("sub"))
@app.mount(path="subapp", app=mounted, name="sub")
def get_context():
return mounted()
c = Client(app())
response = c.get("/")
assert response.body == b"http://localhost/subapp/models/one"
response = c.get("/extra")
assert response.body == b"http://localhost/subapp/models/one"
def test_mount_named_child_link_name_defaults_to_path():
class app(morepath.App):
pass
class mounted(morepath.App):
pass
@mounted.path(path="models/{id}")
class Model:
def __init__(self, id):
self.id = id
@app.path(path="")
class Root:
pass
@app.view(model=Root)
def app_root_default(self, request):
return request.link(Model("one"), app=request.app.child(mounted))
@app.view(model=Root, name="extra")
def app_root_default2(self, request):
return request.link(Model("one"), app=request.app.child("subapp"))
@app.mount(path="subapp", app=mounted)
def get_context():
return mounted()
c = Client(app())
response = c.get("/")
assert response.body == b"http://localhost/subapp/models/one"
response = c.get("/extra")
assert response.body == b"http://localhost/subapp/models/one"
def test_named_mount_with_parameters():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@app.path(path="")
class Root:
pass
@mounted.path(path="")
class MountedRoot:
def __init__(self, mount_id):
assert isinstance(mount_id, int)
self.mount_id = mount_id
@mounted.view(model=MountedRoot)
def root_default(self, request):
return "The root for mount id: %s" % self.mount_id
@app.mount(path="mounts/{mount_id}", app=mounted)
def get_context(mount_id=0):
return mounted(mount_id=mount_id)
class Item:
def __init__(self, id):
self.id = id
@mounted.path(path="items/{id}", model=Item)
def get_item(id):
return Item(id)
@app.view(model=Root)
def root_default2(self, request):
child = request.app.child("mounts/{mount_id}", mount_id=3)
return request.link(Item(4), app=child)
c = Client(app())
response = c.get("/")
assert response.body == b"http://localhost/mounts/3/items/4"
def test_named_mount_with_url_parameters():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, mount_id):
self.mount_id = mount_id
@app.path(path="")
class Root:
pass
@mounted.path(path="")
class MountedRoot:
def __init__(self, mount_id):
assert isinstance(mount_id, int)
self.mount_id = mount_id
@mounted.view(model=MountedRoot)
def root_default(self, request):
return "The root for mount id: %s" % self.mount_id
@app.mount(path="mounts", app=mounted)
def get_context(mount_id=0):
return mounted(mount_id=mount_id)
class Item:
def __init__(self, id):
self.id = id
@mounted.path(path="items/{id}", model=Item)
def get_item(id):
return Item(id)
@app.view(model=Root)
def root_default2(self, request):
child = request.app.child("mounts", mount_id=3)
return request.link(Item(4), app=child)
c = Client(app())
response = c.get("/")
assert response.body == b"http://localhost/mounts/items/4?mount_id=3"
def test_access_app_through_request():
class root(morepath.App):
pass
class sub(morepath.App):
def __init__(self, name):
self.name = name
@root.path(path="")
class RootModel:
pass
@root.view(model=RootModel)
def root_model_default(self, request):
child = request.app.child(sub, mount_name="foo")
return request.link(SubModel("foo"), app=child)
class SubModel:
def __init__(self, name):
self.name = name
@sub.path(path="", model=SubModel)
def get_sub_model(request):
return SubModel(request.app.name)
@root.mount(
app=sub, path="{mount_name}", variables=lambda a: {"mount_name": a.name}
)
def mount_sub(mount_name):
return sub(name=mount_name)
c = Client(root())
response = c.get("/")
assert response.body == b"http://localhost/foo"
def test_mount_ancestors():
class app(morepath.App):
pass
class mounted(morepath.App):
def __init__(self, id):
self.id = id
@app.path(path="")
class AppRoot:
pass
@app.view(model=AppRoot)
def app_root_default(self, request):
ancestors = list(request.app.ancestors())
assert len(ancestors) == 1
assert ancestors[0] is request.app
assert request.app.root is request.app
@mounted.path(path="")
class MountedRoot:
pass
@mounted.view(model=MountedRoot)
def mounted_root_default(self, request):
ancestors = list(request.app.ancestors())
assert len(ancestors) == 2
assert ancestors[0] is request.app
assert ancestors[1] is request.app.parent
assert request.app.root is request.app.parent
@app.mount(path="{id}", app=mounted)
def get_mounted(id):
return mounted(id=id)
c = Client(app())
c.get("/")
c.get("/foo")
def test_breadthfirst_vs_inheritance_on_commit():
class Root(morepath.App):
pass
class App1(morepath.App):
pass
class ExtendedApp1(App1):
pass
class App2(morepath.App):
pass
class ExtendedApp2(App2):
pass
@App1.path(path="")
class Model1:
pass
@App2.path(path="")
class Model2:
pass
@App1.view(model=Model1)
def view1(self, request):
return type(request.app).__name__
@App2.view(model=Model2)
def view2(self, request):
return type(request.app).__name__
Root.mount(app=App1, path="a/")(App1)
App1.mount(app=App2, path="b/")(App2)
Root.mount(app=ExtendedApp2, path="x/")(ExtendedApp2)
ExtendedApp2.mount(app=ExtendedApp1, path="y/")(ExtendedApp1)
    # NB: ExtendedApp2 is mounted higher in the app tree than App2,
# from which it inherits. This means that ExtendedApp2 is
# discovered before App2. The purpose of this test is to ensure
# that this potentially problematic situation is in fact harmless,
# i.e., that the breadth-first order in which apps are discovered,
# which is not in general a valid traversal of the inheritance
# graph, does not lead to partial commits and hence to
# misconfigurations.
c = Client(Root())
response = c.get("/a")
assert response.body == b"App1"
response = c.get("/a/b")
assert response.body == b"App2"
response = c.get("/x")
assert response.body == b"ExtendedApp2"
response = c.get("/x/y")
assert response.body == b"ExtendedApp1"
| bsd-3-clause | 8914695fb497493536a78f40b404b24f | 23.262938 | 80 | 0.586885 | 3.487343 | false | false | false | false |
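The mount tests above repeatedly build links through `request.app.child(...)` and `request.app.parent`. The underlying idea — a link is the mount path prefixes joined from the root app down to the target app, followed by the model's own path — can be sketched independently of morepath. This is a hedged illustration with hypothetical `App.mount`/`App.link` names, not the real morepath API:

```python
class App:
    def __init__(self, name, path_segment=""):
        self.name = name
        self.path_segment = path_segment  # prefix this app is mounted under
        self.parent = None
        self._children = {}

    def mount(self, child, path_segment):
        # attach a child app under the given path prefix
        child.parent = self
        child.path_segment = path_segment
        self._children[path_segment] = child
        return child

    def child(self, path_segment):
        return self._children.get(path_segment)

    def ancestors(self):
        # yield this app, then each parent up to the root
        app = self
        while app is not None:
            yield app
            app = app.parent

    def link(self, tail):
        # join the mount prefixes from root to this app, then the tail
        segments = [a.path_segment for a in self.ancestors()]
        segments.reverse()
        return "/" + "/".join(s for s in segments + [tail] if s)


root = App("root")
sub = root.mount(App("sub"), "mnt")
assert sub.link("models/1") == "/mnt/models/1"
assert root.link("models/1") == "/models/1"
```

`sub.link("models/1")` yielding `/mnt/models/1` mirrors the kind of URL the child-link tests above check for.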
cornell-brg/pymtl | pclib/fl/QueuePortProxy.py | 8 | 3091 | #=========================================================================
# QueuePortProxy
#=========================================================================
# These classes provide part of a standard Python deque interface, but
# the implementation essentially turns the popleft() and append() methods
# into a val/rdy port-based interface. We use greenlets to enable us to
# wait until data is ready via the val/rdy interface before returning to
# the function calling the popleft() or append() method.
from greenlet import greenlet
#=========================================================================
# InQueuePortProxy
#=========================================================================
class InQueuePortProxy (object):
#-----------------------------------------------------------------------
# Constructor
#-----------------------------------------------------------------------
def __init__( s, in_ ):
s.in_ = in_
s.trace = " "
#-----------------------------------------------------------------------
# popleft
#-----------------------------------------------------------------------
def popleft( s ):
# Set the rdy signal
s.in_.rdy.next = 1
s.trace = "+"
# Yield so we wait at least one cycle for the response
greenlet.getcurrent().parent.switch(0)
# If input interface is not valid then yield
while not s.in_.val:
s.trace = ":"
greenlet.getcurrent().parent.switch(0)
# Input interface is valid so reset rdy signal and return message
s.trace = " "
s.in_.rdy.next = 0
return s.in_.msg
#-----------------------------------------------------------------------
# line_trace
#-----------------------------------------------------------------------
def line_trace( s ):
return s.trace
#=========================================================================
# OutQueuePortProxy
#=========================================================================
class OutQueuePortProxy (object):
#-----------------------------------------------------------------------
# Constructor
#-----------------------------------------------------------------------
def __init__( s, out ):
s.out = out
s.trace = " "
#-----------------------------------------------------------------------
# append
#-----------------------------------------------------------------------
def append( s, msg ):
# Set the val signal and message
s.out.msg.next = msg
s.out.val.next = 1
s.trace = "+"
# Yield so we wait at least one cycle for the rdy
greenlet.getcurrent().parent.switch(0)
# If output interface is not ready then yield
while not s.out.rdy:
s.trace = ":"
greenlet.getcurrent().parent.switch(0)
# Output interface is ready so reset val signal
s.trace = " "
s.out.val.next = 0
#-----------------------------------------------------------------------
# line_trace
#-----------------------------------------------------------------------
def line_trace( s ):
return s.trace
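# A minimal, framework-free sketch of the popleft() handshake: a plain Python
# generator stands in for the greenlet, and each yield models handing control
# back to the simulator for one cycle. FakeInPort and its fields are
# illustrative assumptions, not the real pclib val/rdy bundle.

```python
class FakeInPort(object):
    # toy val/rdy input: val is high whenever a message is buffered
    def __init__(self, msgs):
        self.msgs = list(msgs)
        self.rdy = 0
    @property
    def val(self):
        return 1 if self.msgs else 0

def popleft(in_):
    in_.rdy = 1
    yield None                 # wait at least one cycle for the response
    while not in_.val:
        yield None             # keep waiting until the producer asserts val
    in_.rdy = 0
    yield in_.msgs.pop(0)      # hand the dequeued message back

port = FakeInPort([7])
g = popleft(port)
next(g)                        # cycle 1: rdy goes high
print(next(g))                 # cycle 2: val already high -> prints 7
```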
| bsd-3-clause | 86fb7384171a13da08f3af452c8b3105 | 27.357798 | 74 | 0.359107 | 5.630237 | false | false | false | false |
cornell-brg/pymtl | pclib/cl/pipelines_test.py | 8 | 3436 | #=========================================================================
# pipelines_test.py
#=========================================================================
import pytest
from pymtl import *
from pclib.ifcs import InValRdyBundle, OutValRdyBundle
from pclib.test import TestSrcSinkSim
from pclib.cl import InValRdyQueue, OutValRdyQueue
from pipelines import Pipeline
#-------------------------------------------------------------------------
# test_Pipeline
#-------------------------------------------------------------------------
@pytest.mark.parametrize(
('stages'), [1, 3, 12]
)
def test_Pipeline( dump_vcd, stages ):
# Create the pipeline
pipeline = Pipeline( stages )
pipeline.vcd_file = dump_vcd
# Fill up the pipeline
  i = -1  # so i+1 == 0 below when stages == 1 and the fill loop is skipped
for i in range( stages-1 ):
pipeline.advance()
pipeline.insert( i )
assert not pipeline.ready()
# Insert one last item
pipeline.advance()
pipeline.insert( i+1 )
# Make sure there is something at the tail of the pipeline
assert pipeline.ready()
# Start removing items from the pipeline
for i in range( stages ):
assert pipeline.ready()
assert pipeline.remove() == i
pipeline.advance()
assert not pipeline.ready()
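# For reference, the behavior exercised above can be captured by a small
# deque-based model. This is an illustrative sketch of the Pipeline
# interface (insert/advance/ready/remove), not the pclib implementation.

```python
from collections import deque

class SimplePipeline(object):
    # index 0 is the head (insert side), index -1 the tail (remove side)
    def __init__(self, stages):
        self.stages = deque([None] * stages, maxlen=stages)
    def insert(self, item):
        self.stages[0] = item
    def advance(self):
        self.stages.appendleft(None)  # shift items one stage toward the tail
    def ready(self):
        return self.stages[-1] is not None
    def remove(self):
        item = self.stages[-1]
        self.stages[-1] = None
        return item

pipe, out = SimplePipeline(3), []
for i in range(3):
    pipe.advance()
    pipe.insert(i)
for _ in range(3):
    out.append(pipe.remove())
    pipe.advance()
print(out)   # items graduate in insertion order: [0, 1, 2]
```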
#-------------------------------------------------------------------------
# TestValRdyPipeline
#-------------------------------------------------------------------------
class ValRdyPipelineHarness( Model ):
def __init__( s, dtype, stages, pipeq, bypassq ):
s.in_ = InValRdyBundle ( dtype )
s.out = OutValRdyBundle( dtype )
s.in_q = InValRdyQueue ( dtype, pipe =pipeq )
s.out_q = OutValRdyQueue( dtype, bypass=bypassq )
s.pipe = Pipeline( stages )
s.connect( s.in_, s.in_q. in_ )
s.connect( s.out, s.out_q.out )
@s.tick
def logic():
# Automatically enq from input / deq from output
s.in_q.xtick()
s.out_q.xtick()
# No stall
if not s.out_q.is_full():
# Insert item into pipeline from input queue
if not s.in_q.is_empty():
s.pipe.insert( s.in_q.deq() )
# Items graduating from pipeline, add to output queue
if s.pipe.ready():
s.out_q.enq( s.pipe.remove() )
# Advance the pipeline
s.pipe.advance()
#-------------------------------------------------------------------------
# test_ValRdyPipeline
#-------------------------------------------------------------------------
@pytest.mark.parametrize(
('stages', 'pipeq', 'bypassq', 'src_delay', 'sink_delay'),
[
(1, 0, 0, 0, 0), (1, 0, 1, 0, 0), (1, 1, 0, 0, 0), (1, 1, 1, 0, 0),
(1, 0, 0, 3, 0), (1, 0, 1, 3, 0), (1, 1, 0, 3, 0), (1, 1, 1, 3, 0),
(1, 0, 0, 0, 3), (1, 0, 1, 0, 3), (1, 1, 0, 0, 3), (1, 1, 1, 0, 3),
(1, 0, 0, 3, 5), (1, 0, 1, 3, 5), (1, 1, 0, 3, 5), (1, 1, 1, 3, 5),
(5, 0, 0, 0, 0), (5, 0, 1, 0, 0), (5, 1, 0, 0, 0), (5, 1, 1, 0, 0),
(5, 0, 0, 3, 0), (5, 0, 1, 3, 0), (5, 1, 0, 3, 0), (5, 1, 1, 3, 0),
(5, 0, 0, 0, 3), (5, 0, 1, 0, 3), (5, 1, 0, 0, 3), (5, 1, 1, 0, 3),
(5, 0, 0, 3, 5), (5, 0, 1, 3, 5), (5, 1, 0, 3, 5), (5, 1, 1, 3, 5),
]
)
def test_ValRdyPipeline( dump_vcd, stages, pipeq, bypassq, src_delay, sink_delay ):
msgs = range( 20 )
model = ValRdyPipelineHarness( Bits( 8 ), stages, pipeq, bypassq )
model.vcd_file = dump_vcd
sim = TestSrcSinkSim( model, msgs, msgs, src_delay, sink_delay )
sim.run_test()
| bsd-3-clause | 3a67ad6da9ae9866fba8726b1dd76c87 | 31.415094 | 83 | 0.467113 | 3.152294 | false | true | false | false |
cornell-brg/pymtl | pclib/ifcs/XcelMsg.py | 8 | 2581 | #=========================================================================
# XcelMsg
#=========================================================================
# Accelerator request and response messages.
from pymtl import *
#-------------------------------------------------------------------------
# XcelReqMsg
#-------------------------------------------------------------------------
# Accelerator request messages can either be to read or write an
# accelerator register. Read requests include just a register specifier,
# while write requests include an accelerator register specifier and the
# actual data to write to the accelerator register.
#
# Message Format:
#
# 1b 5b 32b
# +------+-------+-----------+
# | type | raddr | data |
# +------+-------+-----------+
#
class XcelReqMsg( BitStructDefinition ):
TYPE_READ = 0
TYPE_WRITE = 1
def __init__( s ):
s.type_ = BitField( 1 )
s.raddr = BitField( 5 )
s.data = BitField( 32 )
def mk_rd( s, raddr ):
msg = s()
msg.type_ = XcelReqMsg.TYPE_READ
msg.raddr = raddr
msg.data = 0
return msg
def mk_wr( s, raddr, data ):
msg = s()
msg.type_ = XcelReqMsg.TYPE_WRITE
    msg.raddr = raddr
msg.data = data
return msg
def __str__( s ):
if s.type_ == XcelReqMsg.TYPE_READ:
return "rd:{}:{}".format( s.raddr, ' ' )
elif s.type_ == XcelReqMsg.TYPE_WRITE:
return "wr:{}:{}".format( s.raddr, s.data )
#-------------------------------------------------------------------------
# XcelRespMsg
#-------------------------------------------------------------------------
# Accelerator response messages can either be from a read or write of an
# accelerator register. Read requests include the actual value read from
# the accelerator register, while write requests currently include
# nothing other than the type.
#
# Message Format:
#
# 1b 32b
# +------+-----------+
# | type | data |
# +------+-----------+
#
class XcelRespMsg( BitStructDefinition ):
TYPE_READ = 0
TYPE_WRITE = 1
def __init__( s ):
s.type_ = BitField( 1 )
s.data = BitField( 32 )
def mk_rd( s, data ):
msg = s()
    msg.type_ = XcelRespMsg.TYPE_READ
msg.data = data
return msg
def mk_wr( s ):
msg = s()
    msg.type_ = XcelRespMsg.TYPE_WRITE
msg.data = 0
return msg
def __str__( s ):
    if s.type_ == XcelRespMsg.TYPE_READ:
return "rd:{}".format( s.data )
    elif s.type_ == XcelRespMsg.TYPE_WRITE:
return "wr:{}".format( ' ' )
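# The message layout documented above can be exercised with plain integers.
# This is an illustrative sketch of the bit packing (1b type, 5b raddr,
# 32b data), not pymtl's BitStructDefinition machinery.

```python
def pack_xcel_req(type_, raddr, data):
    # [type:1][raddr:5][data:32], type in the most significant position
    assert type_ in (0, 1) and 0 <= raddr < 2 ** 5 and 0 <= data < 2 ** 32
    return (type_ << 37) | (raddr << 32) | data

def unpack_xcel_req(bits):
    return (bits >> 37) & 0x1, (bits >> 32) & 0x1F, bits & 0xFFFFFFFF

msg = pack_xcel_req(1, 3, 0xDEADBEEF)   # write 0xdeadbeef to xcel register 3
print(unpack_xcel_req(msg))             # -> (1, 3, 3735928559)
```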
| bsd-3-clause | 9aa3bae2765d24eaf10cba5f64b325c5 | 22.898148 | 74 | 0.469198 | 3.762391 | false | false | false | false |
cornell-brg/pymtl | pymtl/tools/integration/verilog_parser_test_todo.py | 8 | 6172 | #=======================================================================
# verilog_parser
#=======================================================================
import pytest
import os
from verilog_parser import header_parser
#-----------------------------------------------------------------------
# test cases
#-----------------------------------------------------------------------
def a(): return """\
module tester;
endmodule
""", 'tester', 0, 0
def b(): return """\
module vc_QueueCtrl1
(
input clk, reset,
input enq_val,
output enq_rdy,
output deq_val,
input deq_rdy,
output wen,
output bypass_mux_sel
);
endmodule
""", 'vc_QueueCtrl1', 0, 8
def c(): return """\
module vc_QueueCtrl1
(
input clk, reset,
input enq_val,
output reg enq_rdy,
output reg deq_val,
input deq_rdy,
output reg wen,
output bypass_mux_sel
);
endmodule
""", 'vc_QueueCtrl1', 0, 8
def d(): return """\
module vc_QueueCtrl1
(
input [0 :0] clk, reset,
input [1 :0] enq_val,
output [0 :0] enq_rdy,
output [4 :0] deq_val,
input [0 :0] deq_rdy,
output [0 :0] wen,
output [2-1:0] bypass_mux_sel
);
endmodule
""", 'vc_QueueCtrl1', 0, 8
def e(): return """\
//module vc_QueueCtrl1 #( parameter TYPE = `VC_QUEUE_NORMAL )
module vc_QueueCtrl1 //#( parameter TYPE = `VC_QUEUE_NORMAL )
(
input clk, reset,
input enq_val, // Enqueue data is valid
output enq_rdy, // Ready for producer to do an enqueue
output deq_val, // Dequeue data is valid
input deq_rdy, // Consumer is ready to do a dequeue
output wen, // Write en signal to wire up to storage element
output bypass_mux_sel // Used to control bypass mux for bypass queues
);
endmodule
""", 'vc_QueueCtrl1', 0, 8
def f(): return """\
module vc_QueueCtrl1 #( parameter TYPE = `VC_QUEUE_NORMAL )
(
input clk, reset,
input enq_val, // Enqueue data is valid
output enq_rdy, // Ready for producer to do an enqueue
output deq_val, // Dequeue data is valid
input deq_rdy, // Consumer is ready to do a dequeue
output wen, // Write en signal to wire up to storage element
output bypass_mux_sel // Used to control bypass mux for bypass queues
);
endmodule
""", 'vc_QueueCtrl1', 1, 8
def g(): return """\
// comment
module simple /* comment */
(
input in, // comment
/* comment */
output out // comment
);
endmodule
""", 'simple', 0, 2
def h(): return """\
module vc_RoundRobinArbChain
#(
parameter NUM_REQS = 2,
parameter RESET_PRIORITY_VAL = 1 // (one-hot) 1 indicates which req
// has highest priority on reset
)(
input clk,
input reset,
input kin, // Kill in
input [NUM_REQS-1:0] reqs, // 1 = making a request, 0 = no request
output [NUM_REQS-1:0] grants, // (one-hot) 1 is req won grant
output kout // Kill out
);
endmodule
""", 'vc_RoundRobinArbChain', 2, 6
def i(): return """\
module mod_a #( parameter a = 2, parameter b = 3 )
(
input in,
output out
);
endmodule
""", 'mod_a', 2, 2
def j(): return """\
module mod_a
#( parameter a = 2 )
(
input in,
output out
);
endmodule
module mod_b
#( parameter a = 2, parameter b = 2)
(
input in,
output out
);
endmodule
""", 'mod_a', 1, 2
def k(): return """\
`ifdef SOMETHING
`define SOMETHING
module mod_a
#( parameter a = 2 )
(
input in,
output out
);
endmodule
//`endif
""", 'mod_a', 1, 2
def l(): return """\
`ifdef SOMETHING
`define SOMETHING
module mod_a
#( parameter a = 2 )
(
input in,
output out
);
endmodule
module mod_b
#( parameter a = 2 )
(
input in,
output out
);
endmodule
//`endif
""", 'mod_a', 1, 2
def m(): return """\
module vc_MemPortWidthAdapter
#(
parameter p_addr_sz = 32,
parameter p_proc_data_sz = 32,
parameter p_mem_data_sz = 128,
// Local constants not meant to be set from outside the module
parameter c_proc_req_msg_sz = `VC_MEM_REQ_MSG_SZ(p_addr_sz,p_proc_data_sz),
parameter c_proc_resp_msg_sz = `VC_MEM_RESP_MSG_SZ(p_proc_data_sz)
)(
input [c_proc_req_msg_sz-1:0] procreq_msg,
output [c_proc_resp_msg_sz-1:0] procresp_msg
);
endmodule
""", 'vc_MemPortWidthAdapter', 5, 2
def n(): return """\
module vc_TraceWithValRdy
#(
parameter integer NUMBITS = 1,
parameter integer FORMAT_CHARS = 2,
parameter [(FORMAT_CHARS<<3)-1:0] FORMAT = "%x"
)(
input [(NUMCHARS<<3)-1:0] istr,
input [NUMBITS-1:0] bits
);
endmodule
""", 'vc_TraceWithValRdy', 3, 2
def y(): return """\
module fulladder ( carry, sum, in1, in2, in3 );
endmodule
""", 'fulladder', 0, 5
def x(): return """\
module fulladder ( carry, sum, in1, in2, in3 );
input in1, in2, in3;
output carry, sum;
//xor U5 ( in1, n3, sum );
endmodule
""", 'fulladder', 0, 5
#-----------------------------------------------------------------------
# test_simple
#-----------------------------------------------------------------------
@pytest.mark.parametrize( 'src',
[a, b, c, d, e, f, g, h, i, j, k, l, m, n, y, x]
)
def test_simple( src ):
code, module_name, nparams, nports = src()
parser = header_parser()
tokens = parser.parseString( code, parseAll=True )
# x contains only tokens from first module encountered
x = tokens[0].asDict()
assert x['module_name'] == module_name
if nparams:
assert len(x['params']) == nparams
if nports:
assert len(x['ports']) == nports
#-----------------------------------------------------------------------
# test_simple
#-----------------------------------------------------------------------
home = os.path.expanduser('~')
path = '{}/vc/git-brg/pyparc/vc'.format(home)
if os.path.exists( path ):
files = os.listdir( path )
files = ( x for x in files if not x.endswith('.t.v' ) and x.endswith('.v') )
files = [ os.path.join( path, x ) for x in files ]
else:
files = []
@pytest.mark.parametrize( 'filename', files )
def test_files( filename ):
parser = header_parser()
print filename
x = parser.parseFile( filename )
| bsd-3-clause | 6a3e5d1a7e57bd0b8da2c5157081e3b4 | 22.290566 | 78 | 0.542126 | 3.217935 | false | false | false | false |
cornell-brg/pymtl | pymtl/datatypes/helpers.py | 8 | 3100 | #=======================================================================
# helpers.py
#=======================================================================
'Collection of built-in helpers functions for the PyMTL framework.'
import math
import operator
# NOTE: circular imports between Bits and helpers, using 'import'
# instead of 'from Bits import' ensures pydoc still works
import Bits
#-----------------------------------------------------------------------
# get_nbits
#-----------------------------------------------------------------------
def get_nbits( N ):
'Return the number of bits needed to represent a value "N".'
if N > 0:
return N.bit_length()
else:
return N.bit_length() + 1
#-----------------------------------------------------------------------
# clog2
#-----------------------------------------------------------------------
def clog2( N ):
'Return the number of bits needed to choose between "N" items.'
assert N > 0
return int( math.ceil( math.log( N, 2 ) ) )
#-----------------------------------------------------------------------
# zext
#-----------------------------------------------------------------------
def zext( value, new_width ):
'Return a zero extended version of the provided SignalValue object.'
return value._zext( new_width )
#-----------------------------------------------------------------------
# sext
#-----------------------------------------------------------------------
def sext( value, new_width ):
'Return a sign extended version of the provided SignalValue object.'
return value._sext( new_width )
#-----------------------------------------------------------------------
# concat
#-----------------------------------------------------------------------
def concat( *args ):
'Return a Bits which is the concatenation of the Bits in bits_list.'
assert isinstance( args[0], Bits.Bits )
# Calculate total new bitwidth
nbits = sum( [ x.nbits for x in args ] )
# Create new Bits and add each bits from bits_list to it
concat_bits = Bits.Bits( nbits )
begin = 0
for bits in reversed( args ):
concat_bits[ begin : begin+bits.nbits ] = bits
begin += bits.nbits
return concat_bits
#-----------------------------------------------------------------------
# reduce_and
#-----------------------------------------------------------------------
def reduce_and( signal ):
return reduce( operator.and_, (signal[x] for x in xrange( signal.nbits )) )
#-----------------------------------------------------------------------
# reduce_or
#-----------------------------------------------------------------------
def reduce_or( signal ):
return reduce( operator.or_, (signal[x] for x in xrange( signal.nbits )) )
#-----------------------------------------------------------------------
# reduce_xor
#-----------------------------------------------------------------------
# Verilog iterates through MSB to LSB, so we must reverse iteration.
def reduce_xor( signal ):
return reduce( operator.xor,
(signal[x] for x in reversed( xrange( signal.nbits ) ))
)
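# A quick sanity check of the two sizing helpers above; the function bodies
# are restated standalone so the example runs without pymtl installed.

```python
import math

def clog2(N):
    # number of bits needed to choose between N items
    assert N > 0
    return int(math.ceil(math.log(N, 2)))

def get_nbits(N):
    # number of bits needed to represent the value N (extra bit if N <= 0)
    return N.bit_length() if N > 0 else N.bit_length() + 1

print(clog2(32), clog2(33))          # -> 5 6
print(get_nbits(255), get_nbits(0))  # -> 8 1
```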
| bsd-3-clause | 809f8249273a05703ab01bd73a6d4b4a | 35.470588 | 77 | 0.389355 | 5.354059 | false | false | false | false |
cornell-brg/pymtl | pclib/rtl/arith.py | 8 | 6880 | #=======================================================================
# arith.py
#=======================================================================
'''Collection of translatable arithmetic components.'''
from pymtl import *
#-----------------------------------------------------------------------
# Adder
#-----------------------------------------------------------------------
class Adder( Model ):
def __init__( s, nbits = 1 ):
s.in0 = InPort ( nbits )
s.in1 = InPort ( nbits )
s.cin = InPort ( 1 )
s.out = OutPort ( nbits )
s.cout = OutPort ( 1 )
# Wires
twidth = nbits + 1
s.temp = Wire( twidth )
# Connections
s.connect( s.out, s.temp[0:nbits] )
s.connect( s.cout, s.temp[nbits] )
@s.combinational
def comb_logic():
# Zero extend the inputs by one bit so we can generate an extra
# carry out bit
t0 = zext( s.in0, twidth )
t1 = zext( s.in1, twidth )
s.temp.value = t0 + t1 + s.cin
def line_trace( s ):
return "{} {} {} () {} {}" \
.format( s.in0, s.in1, s.cin, s.out, s.cout )
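# The zero-extension trick in comb_logic above, restated with plain Python
# integers (an illustrative sketch, not the translatable model):

```python
def add_with_carry(in0, in1, cin, nbits):
    # compute the sum in nbits+1 bits: the low nbits are the result and the
    # extra top bit is the carry out, just as in the Adder's temp wire
    mask = (1 << nbits) - 1
    temp = (in0 & mask) + (in1 & mask) + cin
    return temp & mask, (temp >> nbits) & 1

print(add_with_carry(0xFF, 0x01, 0, 8))   # 8-bit wraparound -> (0, 1)
print(add_with_carry(0x05, 0x06, 1, 8))   # no overflow      -> (12, 0)
```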
#-----------------------------------------------------------------------
# Subtractor
#-----------------------------------------------------------------------
class Subtractor( Model ):
def __init__( s, nbits = 1 ):
s.in0 = InPort ( nbits )
s.in1 = InPort ( nbits )
s.out = OutPort ( nbits )
@s.combinational
def comb_logic():
s.out.value = s.in0 - s.in1
def line_trace( s ):
return "{} {} () {}".format( s.in0, s.in1, s.out )
#-----------------------------------------------------------------------
# Incrementer
#-----------------------------------------------------------------------
class Incrementer( Model ):
def __init__( s, nbits = 1, increment_amount = 1 ):
s.in_ = InPort ( nbits )
s.out = OutPort ( nbits )
@s.combinational
def comb_logic():
s.out.value = s.in_ + increment_amount
def line_trace( s ):
return "{} () {}".format( s.in_, s.out )
#-----------------------------------------------------------------------
# ZeroExtender
#-----------------------------------------------------------------------
class ZeroExtender( Model ):
def __init__( s, in_nbits = 1, out_nbits = 1 ):
s.in_ = InPort ( in_nbits )
s.out = OutPort ( out_nbits )
@s.combinational
def comb_logic():
s.out.value = zext( s.in_, out_nbits )
def line_trace( s ):
return "{} () {}".format( s.in_, s.out )
#-----------------------------------------------------------------------
# SignExtender
#-----------------------------------------------------------------------
class SignExtender( Model ):
def __init__( s, in_nbits = 1, out_nbits = 1 ):
assert in_nbits <= out_nbits
s.in_ = InPort ( in_nbits )
s.out = OutPort ( out_nbits )
@s.combinational
def comb_logic():
s.out.value = sext( s.in_, out_nbits )
def line_trace( s ):
return "{} () {}".format( s.in_, s.out )
#-----------------------------------------------------------------------
# Zero Comparator
#-----------------------------------------------------------------------
class ZeroComparator( Model ):
def __init__( s, nbits = 1 ):
s.in_ = InPort ( nbits )
s.out = OutPort ( 1 )
@s.combinational
def comb_logic():
s.out.value = s.in_ == 0
def line_trace( s ):
return "{} () {}".format( s.in_, s.out )
#-----------------------------------------------------------------------
# Equal Comparator
#-----------------------------------------------------------------------
class EqComparator( Model ):
def __init__( s, nbits = 1 ):
s.in0 = InPort ( nbits )
s.in1 = InPort ( nbits )
s.out = OutPort ( 1 )
@s.combinational
def comb_logic():
s.out.value = s.in0 == s.in1
def line_trace( s ):
return "{} {} () {}".format( s.in0, s.in1, s.out )
#-----------------------------------------------------------------------
# Less-Than Comparator
#-----------------------------------------------------------------------
class LtComparator( Model ):
def __init__( s, nbits = 1 ):
s.in0 = InPort ( nbits )
s.in1 = InPort ( nbits )
s.out = OutPort ( 1 )
@s.combinational
def comb_logic():
s.out.value = s.in0 < s.in1
def line_trace( s ):
return "{} {} () {}".format( s.in0, s.in1, s.out )
#-----------------------------------------------------------------------
# Greater-Than Comparator
#-----------------------------------------------------------------------
class GtComparator( Model ):
def __init__( s, nbits = 1 ):
s.in0 = InPort ( nbits )
s.in1 = InPort ( nbits )
s.out = OutPort ( 1 )
@s.combinational
def comb_logic():
s.out.value = s.in0 > s.in1
def line_trace( s ):
return "{} {} () {}".format( s.in0, s.in1, s.out )
#-----------------------------------------------------------------------
# SignUnit
#-----------------------------------------------------------------------
class SignUnit( Model ):
def __init__( s, nbits = 1 ):
s.in_ = InPort ( nbits )
s.out = OutPort ( nbits )
@s.combinational
def comb_logic():
s.out.value = ~s.in_ + 1
def line_trace( s ):
return "{} () {}".format( s.in_, s.out )
#-----------------------------------------------------------------------
# UnsignUnit
#-----------------------------------------------------------------------
class UnsignUnit( Model ):
def __init__( s, nbits ):
s.in_ = InPort ( nbits )
s.out = OutPort ( nbits )
@s.combinational
def comb_logic():
if s.in_[nbits-1]:
s.out.value = ~s.in_ + 1
else:
s.out.value = s.in_
def line_trace( s ):
return "{} () {}".format( s.in_, s.out )
#-----------------------------------------------------------------------
# LeftLogicalShifter
#-----------------------------------------------------------------------
class LeftLogicalShifter( Model ):
def __init__( s, inout_nbits = 1, shamt_nbits = 1 ):
s.in_ = InPort ( inout_nbits )
s.shamt = InPort ( shamt_nbits )
s.out = OutPort ( inout_nbits )
@s.combinational
def comb_logic():
s.out.value = s.in_ << s.shamt
def line_trace( s ):
return "{} {} () {}".format( s.in_, s.shamt, s.out )
#-----------------------------------------------------------------------
# RightLogicalShifter
#-----------------------------------------------------------------------
class RightLogicalShifter( Model ):
def __init__( s, inout_nbits = 1, shamt_nbits = 1 ):
s.in_ = InPort ( inout_nbits )
s.shamt = InPort ( shamt_nbits )
s.out = OutPort ( inout_nbits )
@s.combinational
def comb_logic():
s.out.value = s.in_ >> s.shamt
def line_trace( s ):
return "{} {} () {}".format( s.in_, s.shamt, s.out )
| bsd-3-clause | cf3bff154323127f2c114006484173f3 | 25.360153 | 72 | 0.376744 | 3.718919 | false | false | false | false |
cornell-brg/pymtl | pclib/rtl/RegisterFile.py | 1 | 3447 | #=======================================================================
# RegisterFile.py
#=======================================================================
from pymtl import *
#-----------------------------------------------------------------------
# RegisterFile
#-----------------------------------------------------------------------
class RegisterFile( Model ):
def __init__( s, dtype = Bits(32), nregs = 32, rd_ports = 1, wr_ports = 1,
const_zero=False ):
addr_nbits = clog2( nregs )
s.rd_addr = [ InPort ( addr_nbits ) for _ in range(rd_ports) ]
s.rd_data = [ OutPort( dtype ) for _ in range(rd_ports) ]
if wr_ports == 1:
s.wr_addr = InPort( addr_nbits )
s.wr_data = InPort( dtype )
s.wr_en = InPort( 1 )
else:
s.wr_addr = [ InPort( addr_nbits ) for _ in range(wr_ports) ]
s.wr_data = [ InPort( dtype ) for _ in range(wr_ports) ]
s.wr_en = [ InPort( 1 ) for _ in range(wr_ports) ]
s.regs = [ Wire( dtype ) for _ in range( nregs ) ]
#-------------------------------------------------------------------
# Combinational read logic
#-------------------------------------------------------------------
# constant zero
if const_zero:
@s.combinational
def comb_logic():
for i in range( rd_ports ):
assert s.rd_addr[i] < nregs
if s.rd_addr[i] == 0:
s.rd_data[i].value = 0
else:
s.rd_data[i].value = s.regs[ s.rd_addr[i] ]
else:
@s.combinational
def comb_logic():
for i in range( rd_ports ):
assert s.rd_addr[i] < nregs
s.rd_data[i].value = s.regs[ s.rd_addr[i] ]
    # Select the write logic depending on whether this register file
    # should have a constant zero register or not.
#-------------------------------------------------------------------
# Sequential write logic, single write port
#-------------------------------------------------------------------
if wr_ports == 1 and not const_zero:
@s.posedge_clk
def seq_logic():
if s.wr_en:
s.regs[ s.wr_addr ].next = s.wr_data
#-------------------------------------------------------------------
# Sequential write logic, single write port, constant zero
#-------------------------------------------------------------------
elif wr_ports == 1:
@s.posedge_clk
def seq_logic_const_zero():
if s.wr_en and s.wr_addr != 0:
s.regs[ s.wr_addr ].next = s.wr_data
#-------------------------------------------------------------------
# Sequential write logic, multiple write ports
#-------------------------------------------------------------------
elif not const_zero:
@s.posedge_clk
def seq_logic_multiple_wr():
for i in range( wr_ports ):
if s.wr_en[i]:
s.regs[ s.wr_addr[i] ].next = s.wr_data[i]
#-------------------------------------------------------------------
# Sequential write logic, multiple write ports, constant zero
#-------------------------------------------------------------------
else:
@s.posedge_clk
def seq_logic_multiple_wr():
for i in range( wr_ports ):
if s.wr_en[i] and s.wr_addr[i] != 0:
s.regs[ s.wr_addr[i] ].next = s.wr_data[i]
def line_trace( s ):
return [x.uint() for x in s.regs]
| bsd-3-clause | 4c8dd735680734692f3c90c0a267949a | 32.144231 | 76 | 0.380621 | 4.07929 | false | false | false | false |
cornell-brg/pymtl | pymtl/tools/deprecated/ast_typer.py | 8 | 9396 | #=========================================================================
# ast_typer.py
#=========================================================================
# Create a simplified representation of the Python AST for help with
# source to source translation.
from __future__ import print_function
import ast, _ast
import re
from ...datatypes.Bits import Bits
from ...model.signals import InPort, OutPort
#-------------------------------------------------------------------------
# TypeAST
#-------------------------------------------------------------------------
# ASTTransformer which uses type information to simplify the AST:
#
# - clears references to the module
# - clears the decorator, attaches relevant notation to func instead
# - removes Index nodes
# - replaces Name nodes with Self if they reference the self object
# - replaces Name nodes with Temp if they reference a local temporary
# - replaces Subscript nodes with BitSlice if they reference a Bits
# or BitStruct object
# - replaces Subscript nodes with ArrayIndex if they reference a list
# - attaches object references to each node
# - removes '.next', '.value', '.n', and '.v' Attribute nodes on Ports
#
# TODO: fix ctx references on newly created nodes
#
class TypeAST( ast.NodeTransformer ):
def __init__( self, model, func ):
self.model = model
self.func = func
self.closed_vars = get_closure_dict( func )
self.current_obj = None
#-----------------------------------------------------------------------
# visit_Module
#-----------------------------------------------------------------------
def visit_Module( self, node ):
# visit children
self.generic_visit( node )
# copy the function body, delete module references
return ast.copy_location( node.body[0], node )
#-----------------------------------------------------------------------
# visit_FunctionDef
#-----------------------------------------------------------------------
def visit_FunctionDef( self, node ):
# visit children
self.generic_visit( node )
# TODO: add annotation to self.func based on decorator type
#dec = node.decorator_list[0].attr
# create a new FunctionDef node that deletes the decorators
#new_node = ast.FunctionDef( name=node.name, args=node.args,
# body=node.body, decorator_list=)
#return ast.copy_location( new_node, node )
return node
#-----------------------------------------------------------------------
# visit_Attribute
#-----------------------------------------------------------------------
def visit_Attribute( self, node ):
self.generic_visit( node )
# TODO: handle self.current_obj == None. These are temporary
# locals that we should check to ensure their types don't
# change!
if self.current_obj:
try :
x = self.current_obj.getattr( node.attr )
self.current_obj.update( node.attr, x )
except AttributeError:
if node.attr in ['next', 'value', 'n', 'v']:
node.value.ctx = node.ctx # Update the Load/Store information
return node.value
else:
raise Exception("Error: Unknown attribute for this object: {}"
.format( node.attr ) )
node._object = self.current_obj.inst if self.current_obj else None
return node
#-----------------------------------------------------------------------
# visit_Name
#-----------------------------------------------------------------------
def visit_Name( self, node ):
# If the name is not in closed_vars, it is a local temporary
if node.id not in self.closed_vars:
new_node = Temp( id=node.id )
new_obj = None
# If the name points to the model, this is a reference to self (or s)
elif self.closed_vars[ node.id ] is self.model:
new_node = Self( id=node.id )
new_obj = PyObj( '', self.closed_vars[ node.id ] )
# Otherwise, we have some other variable captured by the closure...
# TODO: should we allow this?
else:
new_node = node
new_obj = PyObj( node.id, self.closed_vars[ node.id ] )
# Store the new_obj
self.current_obj = new_obj
node._object = self.current_obj.inst if self.current_obj else None
# Return the new_node
return ast.copy_location( new_node, node )
#-----------------------------------------------------------------------
# visit_Subscript
#-----------------------------------------------------------------------
def visit_Subscript( self, node ):
# Visit the object being sliced
new_value = self.visit( node.value )
# Visit the index of the slice; stash and restore the current_obj
stash = self.current_obj
self.current_obj = None
new_slice = self.visit( node.slice )
self.current_obj = stash
# If current_obj not initialized, it is a local temp. Don't replace.
if not self.current_obj:
new_node = _ast.Subscript( value=new_value, slice=new_slice, ctx=node.ctx )
# If current_obj is a Bits object, replace with a BitSlice node.
elif isinstance( self.current_obj.inst, (Bits, InPort, OutPort) ):
new_node = BitSlice( value=new_value, slice=new_slice, ctx=node.ctx )
# If current_obj is a list object, replace with an ArrayIndex node.
elif isinstance( self.current_obj.inst, list ):
new_node = ArrayIndex( value=new_value, slice=new_slice, ctx=node.ctx )
# TODO: Want to do this for lists, but can't add attribute
# handling in translation instead
#self.current_obj.inst.name = self.current_obj.inst[0].name.split('[')[0]
# Otherwise, throw an exception
else:
print( self.current_obj )
raise Exception("Unknown type being subscripted!")
# Update the current_obj to contain the obj returned by subscript
# TODO: check that type of all elements in item are identical
# TODO: won't work for lists that are initially empty
# TODO: what about lists that initially contain None?
if self.current_obj:
self.current_obj.update( '[]', self.current_obj.inst[0] )
node._object = self.current_obj.inst if self.current_obj else None
return ast.copy_location( new_node, node )
#-----------------------------------------------------------------------
# visit_Index
#-----------------------------------------------------------------------
def visit_Index( self, node ):
# Remove Index nodes, they seem pointless
child = self.visit( node.value )
return ast.copy_location( child, node )
#-----------------------------------------------------------------------
# visit_Call
#-----------------------------------------------------------------------
# Specially handle certain function calls
def visit_Call( self, node ):
# func, args, keywords, starargs, kwargs
# Check that this is just a normal function call, not something weird
self.generic_visit( node )
if node.func.id == 'range':
if len( node.args ) == 1:
start = _ast.Num( n=0 )
stop = node.args[0]
step = _ast.Num( n=1 )
elif len( node.args ) == 2:
start = node.args[0]
stop = node.args[1]
step = _ast.Num( n=1 ) # TODO: should be an expression
elif len( node.args ) == 3:
start = node.args[0]
stop = node.args[1]
step = node.args[2]
else:
raise Exception("Invalid # of arguments to range function!")
new_node = _ast.Slice( lower=start, upper=stop, step=step )
else:
new_node = node
return ast.copy_location( new_node, node )
#------------------------------------------------------------------------
# PyObj
#------------------------------------------------------------------------
class PyObj( object ):
def __init__( self, name, inst ):
self.name = name
self.inst = inst
def update( self, name, inst ):
self.name += name
self.inst = inst
def getattr( self, name ):
return getattr( self.inst, name )
def __repr__( self ):
return "PyObj( name={} inst={} )".format( self.name, type(self.inst) )
#------------------------------------------------------------------------
# get_closure_dict
#------------------------------------------------------------------------
# http://stackoverflow.com/a/19416942
def get_closure_dict( fn ):
# Python 3 spellings of the closure attributes (func_closure/func_code in py2).
closure_objects = [c.cell_contents for c in fn.__closure__]
return dict( zip( fn.__code__.co_freevars, closure_objects ))
#------------------------------------------------------------------------
# ArrayIndex
#------------------------------------------------------------------------
class ArrayIndex( _ast.Subscript ):
pass
#------------------------------------------------------------------------
# BitSlice
#------------------------------------------------------------------------
class BitSlice( _ast.Subscript ):
pass
#------------------------------------------------------------------------
# Self
#------------------------------------------------------------------------
# New AST Node for references to self. Based on Name node.
class Self( _ast.Name ):
pass
#------------------------------------------------------------------------
# Temp
#------------------------------------------------------------------------
# New AST Node for local temporaries. Based on Name node.
class Temp( _ast.Name ):
pass
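As a standalone illustration of the closure-inspection idiom used by `get_closure_dict` above — written with the Python 3 attribute names (`__closure__`/`__code__`); the `make_counter` helper is invented for the demo:

```python
def get_closure_dict(fn):
    # Map each free variable name to the object captured in its closure cell.
    closure_objects = [c.cell_contents for c in fn.__closure__]
    return dict(zip(fn.__code__.co_freevars, closure_objects))

def make_counter(start):
    count = start
    def tick():
        # Reading 'count' makes it a free variable captured from make_counter.
        return count
    return tick

tick = make_counter(41)
print(get_closure_dict(tick))  # {'count': 41}
```

This is how the translator above recovers the model object (`self`) that the decorated function closed over.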
# (file above: BSD-3-Clause)

# =============================================================================
# flutter/buildroot :: build/linux/unbundle/remove_bundled_libraries.py
# License: BSD-3-Clause
# =============================================================================
#!/usr/bin/env python3
#
# Copyright 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Removes bundled libraries to make sure they are not used.
See README for more details.
"""
import optparse
import os.path
import sys
def DoMain(argv):
my_dirname = os.path.abspath(os.path.dirname(__file__))
source_tree_root = os.path.abspath(
os.path.join(my_dirname, '..', '..', '..'))
if os.path.join(source_tree_root, 'build', 'linux', 'unbundle') != my_dirname:
print('Sanity check failed: please run this script from '
'build/linux/unbundle directory.')
return 1
parser = optparse.OptionParser()
parser.add_option('--do-remove', action='store_true')
options, args = parser.parse_args(argv)
exclusion_used = {}
for exclusion in args:
exclusion_used[exclusion] = False
for root, dirs, files in os.walk(source_tree_root, topdown=False):
# Only look at paths which contain a "third_party" component
# (note that e.g. third_party.png doesn't count).
root_relpath = os.path.relpath(root, source_tree_root)
if 'third_party' not in root_relpath.split(os.sep):
continue
for f in files:
path = os.path.join(root, f)
relpath = os.path.relpath(path, source_tree_root)
excluded = False
for exclusion in args:
# Require precise exclusions. Find the right-most third_party
# in the relative path, and if there is more than one ignore
# the exclusion if it's completely contained within the part
# before right-most third_party path component.
split = relpath.rsplit(os.sep + 'third_party' + os.sep, 1)
if len(split) > 1 and split[0].startswith(exclusion):
continue
if relpath.startswith(exclusion):
# Multiple exclusions can match the same path. Go through all of them
# and mark each one as used.
exclusion_used[exclusion] = True
excluded = True
if excluded:
continue
# Deleting gyp files almost always leads to gyp failures.
# These files come from Chromium project, and can be replaced if needed.
if f.endswith('.gyp') or f.endswith('.gypi'):
continue
# Deleting .isolate files leads to gyp failures. They are usually
# not used by a distro build anyway.
# See http://www.chromium.org/developers/testing/isolated-testing
# for more info.
if f.endswith('.isolate'):
continue
if options.do_remove:
# Delete the file - best way to ensure it's not used during build.
os.remove(path)
else:
# By default just print paths that would be removed.
print(path)
exit_code = 0
# Fail if exclusion list contains stale entries - this helps keep it
# up to date.
for exclusion, used in exclusion_used.items():
if not used:
print('%s does not exist' % exclusion)
exit_code = 1
if not options.do_remove:
print('To actually remove files printed above, please pass '
'--do-remove flag.')
return exit_code
if __name__ == '__main__':
sys.exit(DoMain(sys.argv[1:]))
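The right-most-`third_party` exclusion check above is easy to misread; here is that test reduced to a minimal sketch (the paths are invented for the example). An exclusion only exempts files directly under it — a bundled copy nested in a deeper `third_party` directory is still flagged:

```python
import os

def exclusion_applies(relpath, exclusion):
    # Find the right-most 'third_party' component. If the exclusion is
    # entirely contained in the part before it, the nested third_party
    # subtree is NOT covered by that exclusion.
    split = relpath.rsplit(os.sep + 'third_party' + os.sep, 1)
    if len(split) > 1 and split[0].startswith(exclusion):
        return False
    return relpath.startswith(exclusion)

# Excluding third_party/foo does not exempt a bundled copy nested below it.
print(exclusion_applies('third_party/foo/third_party/bar/x.c', 'third_party/foo'))  # False
print(exclusion_applies('third_party/foo/src/x.c', 'third_party/foo'))              # True
```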
# =============================================================================
# flutter/buildroot :: build/download_nacl_toolchains.py
# License: BSD-3-Clause
# =============================================================================
#!/usr/bin/env python3
#
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Shim to run nacl toolchain download script only if there is a nacl dir."""
import os
import shutil
import sys
def Main(args):
# Exit early if disable_nacl=1.
if 'disable_nacl=1' in os.environ.get('GYP_DEFINES', ''):
return 0
script_dir = os.path.dirname(os.path.abspath(__file__))
src_dir = os.path.dirname(script_dir)
nacl_dir = os.path.join(src_dir, 'native_client')
nacl_build_dir = os.path.join(nacl_dir, 'build')
package_version_dir = os.path.join(nacl_build_dir, 'package_version')
package_version = os.path.join(package_version_dir, 'package_version.py')
if not os.path.exists(package_version):
print("Can't find '%s'" % package_version)
print('Presumably you are intentionally building without NativeClient.')
print('Skipping NativeClient toolchain download.')
sys.exit(0)
sys.path.insert(0, package_version_dir)
import package_version
# BUG:
# We remove this --optional-pnacl argument, and instead replace it with
# --no-pnacl for most cases. However, if the bot name is an sdk
# bot then we will go ahead and download it. This prevents increasing the
# gclient sync time for developers, or standard Chrome bots.
if '--optional-pnacl' in args:
args.remove('--optional-pnacl')
use_pnacl = False
buildbot_name = os.environ.get('BUILDBOT_BUILDERNAME', '')
if 'pnacl' in buildbot_name and 'sdk' in buildbot_name:
use_pnacl = True
if use_pnacl:
print('\n*** DOWNLOADING PNACL TOOLCHAIN ***\n')
else:
args = ['--exclude', 'pnacl_newlib'] + args
# Only download the ARM gcc toolchain if we are building for ARM
# TODO(olonho): we need to invent more reliable way to get build
# configuration info, to know if we're building for ARM.
if 'target_arch=arm' not in os.environ.get('GYP_DEFINES', ''):
args = ['--exclude', 'nacl_arm_newlib'] + args
package_version.main(args)
return 0
if __name__ == '__main__':
sys.exit(Main(sys.argv[1:]))
# =============================================================================
# flutter/buildroot :: build/linux/install-chromeos-fonts.py
# License: BSD-3-Clause
# =============================================================================
#!/usr/bin/env python3
#
# Copyright 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Script to install the Chrome OS fonts on Linux.
# This script can be run manually (as root), but is also run as part
# install-build-deps.sh.
import os
import shutil
import subprocess
import sys
# Taken from the media-fonts/notofonts ebuild in chromiumos-overlay.
VERSION = '20140815'
URL = ('https://commondatastorage.googleapis.com/chromeos-localmirror/'
'distfiles/notofonts-%s.tar.bz2') % (VERSION)
FONTS_DIR = '/usr/local/share/fonts'
def main(args):
if not sys.platform.startswith('linux'):
print("Error: %s must be run on Linux." % __file__)
return 1
if os.getuid() != 0:
print("Error: %s must be run as root." % __file__)
return 1
if not os.path.isdir(FONTS_DIR):
print("Error: Destination directory does not exist: %s" % FONTS_DIR)
return 1
dest_dir = os.path.join(FONTS_DIR, 'chromeos')
stamp = os.path.join(dest_dir, ".stamp02")
if os.path.exists(stamp):
with open(stamp) as s:
if s.read() == URL:
print("Chrome OS fonts already up-to-date in %s." % dest_dir)
return 0
if os.path.isdir(dest_dir):
shutil.rmtree(dest_dir)
os.mkdir(dest_dir)
os.chmod(dest_dir, 0o755)
print("Installing Chrome OS fonts to %s." % dest_dir)
tarball = os.path.join(dest_dir, os.path.basename(URL))
subprocess.check_call(['curl', '-L', URL, '-o', tarball])
subprocess.check_call(['tar', '--no-same-owner', '--no-same-permissions',
'-xf', tarball, '-C', dest_dir])
os.remove(tarball)
readme = os.path.join(dest_dir, "README")
with open(readme, 'w') as s:
s.write("This directory and its contents are auto-generated.\n")
s.write("It may be deleted and recreated. Do not modify.\n")
s.write("Script: %s\n" % __file__)
with open(stamp, 'w') as s:
s.write(URL)
for base, dirs, files in os.walk(dest_dir):
for dir in dirs:
os.chmod(os.path.join(base, dir), 0o755)
for file in files:
os.chmod(os.path.join(base, file), 0o644)
return 0
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]))
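The `.stamp02` check above is a common idempotence pattern: record the URL of the last successful install, and skip the work when it matches. A minimal sketch of just that mechanism (paths and URLs are made up):

```python
import os
import tempfile

def is_up_to_date(stamp_path, url):
    # The stamp records the URL of the last successful install; a match
    # means the destination already holds exactly that version.
    if not os.path.exists(stamp_path):
        return False
    with open(stamp_path) as s:
        return s.read() == url

def write_stamp(stamp_path, url):
    with open(stamp_path, 'w') as s:
        s.write(url)

d = tempfile.mkdtemp()
stamp = os.path.join(d, '.stamp02')
print(is_up_to_date(stamp, 'https://example.com/fonts-1.tar.bz2'))  # False
write_stamp(stamp, 'https://example.com/fonts-1.tar.bz2')
print(is_up_to_date(stamp, 'https://example.com/fonts-1.tar.bz2'))  # True
print(is_up_to_date(stamp, 'https://example.com/fonts-2.tar.bz2'))  # False
```

Bumping the stamp file name (`.stamp02`) is how the script forces a reinstall even when the URL is unchanged.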
# =============================================================================
# flutter/buildroot :: build/toolchain/win/tool_wrapper.py
# License: BSD-3-Clause
# =============================================================================
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Utility functions for Windows builds.
This file is copied to the build directory as part of toolchain setup and
is used to set up calls to tools used by the build that need wrappers.
"""
from __future__ import print_function
import os
import re
import shutil
import subprocess
import stat
import sys
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
# A regex matching an argument corresponding to the output filename passed to
# link.exe.
_LINK_EXE_OUT_ARG = re.compile('/OUT:(?P<out>.+)$', re.IGNORECASE)
def main(args):
exit_code = WinTool().Dispatch(args)
if exit_code is not None:
sys.exit(exit_code)
class WinTool(object):
"""This class performs all the Windows tooling steps. The methods can either
be executed directly, or dispatched from an argument list."""
def _UseSeparateMspdbsrv(self, env, args):
"""Allows to use a unique instance of mspdbsrv.exe per linker instead of a
shared one."""
if len(args) < 1:
raise Exception("Not enough arguments")
if args[0] != 'link.exe':
return
# Use the output filename passed to the linker to generate an endpoint name
# for mspdbsrv.exe.
endpoint_name = None
for arg in args:
m = _LINK_EXE_OUT_ARG.match(arg)
if m:
endpoint_name = re.sub(r'\W+', '',
'%s_%d' % (m.group('out'), os.getpid()))
break
if endpoint_name is None:
return
# Adds the appropriate environment variable. This will be read by link.exe
# to know which instance of mspdbsrv.exe it should connect to (if it's
# not set then the default endpoint is used).
env['_MSPDBSRV_ENDPOINT_'] = endpoint_name
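A sketch of how the mspdbsrv endpoint name is derived from the linker's `/OUT:` argument — the pid is fixed here for reproducibility, and non-word characters are stripped to form a legal endpoint name:

```python
import re

_LINK_EXE_OUT_ARG = re.compile('/OUT:(?P<out>.+)$', re.IGNORECASE)

def mspdbsrv_endpoint(link_args, pid):
    # Find the /OUT: argument and squeeze '<output>_<pid>' down to
    # word characters only.
    for arg in link_args:
        m = _LINK_EXE_OUT_ARG.match(arg)
        if m:
            return re.sub(r'\W+', '', '%s_%d' % (m.group('out'), pid))
    return None

print(mspdbsrv_endpoint(['link.exe', '/OUT:chrome.dll', '/DEBUG'], 1234))
# chromedll_1234
```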
def Dispatch(self, args):
"""Dispatches a string command to a method."""
if len(args) < 1:
raise Exception("Not enough arguments")
method = "Exec%s" % self._CommandifyName(args[0])
return getattr(self, method)(*args[1:])
def _CommandifyName(self, name_string):
"""Transforms a tool name like recursive-mirror to RecursiveMirror."""
return name_string.title().replace('-', '')
def _GetEnv(self, arch):
"""Gets the saved environment from a file for a given architecture."""
# The environment is saved as an "environment block" (see CreateProcess
# and msvs_emulation for details). We convert to a dict here.
# Drop last 2 NULs, one for list terminator, one for trailing vs. separator.
pairs = open(arch).read()[:-2].split('\0')
kvs = [item.split('=', 1) for item in pairs]
return dict(kvs)
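The file parsed by `_GetEnv` holds a Windows "environment block": NUL-separated `KEY=value` pairs with a double-NUL terminator. The same decoding, sketched on an in-memory string rather than a file:

```python
def parse_environment_block(data):
    # Drop the two trailing NULs (list terminator + trailing separator),
    # then split the remaining NUL-separated KEY=value pairs.
    pairs = data[:-2].split('\0')
    return dict(item.split('=', 1) for item in pairs)

block = 'PATH=C:\\tools\\bin\x00TMP=C:\\tmp\x00\x00'
print(parse_environment_block(block))
# {'PATH': 'C:\\tools\\bin', 'TMP': 'C:\\tmp'}
```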
def ExecDeleteFile(self, path):
"""Simple file delete command."""
if os.path.exists(path):
os.unlink(path)
def ExecRecursiveMirror(self, source, dest):
"""Emulation of rm -rf out && cp -af in out."""
if os.path.exists(dest):
if os.path.isdir(dest):
def _on_error(fn, path, dummy_excinfo):
# The operation failed, possibly because the file is set to
# read-only. If that's why, make it writable and try the op again.
if not os.access(path, os.W_OK):
os.chmod(path, stat.S_IWRITE)
fn(path)
shutil.rmtree(dest, onerror=_on_error)
else:
if not os.access(dest, os.W_OK):
# Attempt to make the file writable before deleting it.
os.chmod(dest, stat.S_IWRITE)
os.unlink(dest)
if os.path.isdir(source):
shutil.copytree(source, dest)
else:
shutil.copy2(source, dest)
# Try to diagnose crbug.com/741603
if not os.path.exists(dest):
raise Exception("Copying of %s to %s failed" % (source, dest))
def ExecLinkWrapper(self, arch, use_separate_mspdbsrv, *args):
"""Filter diagnostic output from link that looks like:
' Creating library ui.dll.lib and object ui.dll.exp'
This happens when there are exports from the dll or exe.
"""
env = self._GetEnv(arch)
if use_separate_mspdbsrv == 'True':
self._UseSeparateMspdbsrv(env, args)
if sys.platform == 'win32':
args = list(args) # *args is a tuple by default, which is read-only.
args[0] = args[0].replace('/', '\\')
# https://docs.python.org/2/library/subprocess.html:
# "On Unix with shell=True [...] if args is a sequence, the first item
# specifies the command string, and any additional items will be treated as
# additional arguments to the shell itself. That is to say, Popen does the
# equivalent of:
# Popen(['/bin/sh', '-c', args[0], args[1], ...])"
# For that reason, since going through the shell doesn't seem necessary on
# non-Windows don't do that there.
pe_name = None
for arg in args:
m = _LINK_EXE_OUT_ARG.match(arg)
if m:
pe_name = m.group('out')
link = subprocess.Popen(args, shell=sys.platform == 'win32', env=env,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# Read output one line at a time as it shows up to avoid OOM failures when
# GBs of output is produced.
for line in link.stdout:
if (not line.startswith(b' Creating library ')
and not line.startswith(b'Generating code')
and not line.startswith(b'Finished generating code')):
print(line)
return link.wait()
def ExecAsmWrapper(self, arch, *args):
"""Filter logo banner from invocations of asm.exe."""
env = self._GetEnv(arch)
if sys.platform == 'win32':
# Windows ARM64 uses clang-cl as assembler which has '/' as path
# separator, convert it to '\\' when running on Windows.
args = list(args) # *args is a tuple by default, which is read-only
args[0] = args[0].replace('/', '\\')
popen = subprocess.Popen(args, shell=True, env=env,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, _ = popen.communicate()
for line in out.decode('utf8').splitlines():
if not line.startswith(' Assembling: '):
print(line)
return popen.returncode
def ExecRcWrapper(self, arch, *args):
"""Filter logo banner from invocations of rc.exe. Older versions of RC
don't support the /nologo flag."""
env = self._GetEnv(arch)
popen = subprocess.Popen(args, shell=True, env=env,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, _ = popen.communicate()
for line in out.splitlines():
if (not line.startswith(b'Microsoft (R) Windows (R) Resource Compiler') and
not line.startswith(b'Copyright (C) Microsoft Corporation') and line):
print(line)
return popen.returncode
def ExecActionWrapper(self, arch, rspfile, *dirname):
"""Runs an action command line from a response file using the environment
for |arch|. If |dirname| is supplied, use that as the working directory."""
env = self._GetEnv(arch)
# TODO(scottmg): This is a temporary hack to get some specific variables
# through to actions that are set after GN-time. http://crbug.com/333738.
for k, v in os.environ.items():
if k not in env:
env[k] = v
args = open(rspfile).read()
dirname = dirname[0] if dirname else None
return subprocess.call(args, shell=True, env=env, cwd=dirname)
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]))
# =============================================================================
# flutter/buildroot :: build/toolchain/wrapper_utils.py
# License: BSD-3-Clause
# =============================================================================
# Copyright (c) 2016 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Helper functions for gcc_toolchain.gni wrappers."""
import gzip
import os
import shutil
import subprocess
import threading
_BAT_PREFIX = 'cmd /c call '
def _GzipThenDelete(src_path, dest_path):
# Results for Android map file with GCC on a z620:
# Uncompressed: 207MB
# gzip -9: 16.4MB, takes 8.7 seconds.
# gzip -1: 21.8MB, takes 2.0 seconds.
# Piping directly from the linker via -print-map (or via -Map with a fifo)
# adds a whopping 30-45 seconds!
with open(src_path, 'rb') as f_in, gzip.GzipFile(dest_path, 'wb', 1) as f_out:
shutil.copyfileobj(f_in, f_out)
os.unlink(src_path)
def CommandToRun(command):
"""Generates commands compatible with Windows.
When running on a Windows host and using a toolchain whose tools are
actually wrapper scripts (i.e. .bat files on Windows) rather than binary
executables, the |command| to run has to be prefixed with this magic.
The GN toolchain definitions take care of that for when GN/Ninja is
running the tool directly. When that command is passed in to this
script, it appears as a unitary string but needs to be split up so that
just 'cmd' is the actual command given to Python's subprocess module.
Args:
command: List containing the UNIX style |command|.
Returns:
A list containing the Windows version of the |command|.
"""
if command[0].startswith(_BAT_PREFIX):
command = command[0].split(None, 3) + command[1:]
return command
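A quick demonstration of the splitting behavior described in the docstring (the tool path is hypothetical): only the `cmd /c call ` prefix is split apart, so `subprocess` receives `cmd` as the actual program; the remaining arguments pass through untouched.

```python
_BAT_PREFIX = 'cmd /c call '

def command_to_run(command):
    # On Windows, wrapper .bat tools arrive as one string prefixed with
    # 'cmd /c call '; split it so 'cmd' becomes the program to execute.
    if command[0].startswith(_BAT_PREFIX):
        command = command[0].split(None, 3) + command[1:]
    return command

print(command_to_run(['cmd /c call gcc.bat', '-c', 'a.c']))
# ['cmd', '/c', 'call', 'gcc.bat', '-c', 'a.c']
print(command_to_run(['gcc', '-c', 'a.c']))  # non-wrapped commands pass through
```

Note that `split(None, 3)` caps the split at three separators, so a `.bat` path containing spaces survives intact as the fourth element.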
def RunLinkWithOptionalMapFile(command, env=None, map_file=None):
"""Runs the given command, adding in -Wl,-Map when |map_file| is given.
Also takes care of gzipping when |map_file| ends with .gz.
Args:
command: List of arguments comprising the command.
env: Environment variables.
map_file: Path to output map_file.
Returns:
The exit code of running |command|.
"""
tmp_map_path = None
if map_file and map_file.endswith('.gz'):
tmp_map_path = map_file + '.tmp'
command.append('-Wl,-Map,' + tmp_map_path)
elif map_file:
command.append('-Wl,-Map,' + map_file)
result = subprocess.call(command, env=env)
if tmp_map_path and result == 0:
threading.Thread(
target=lambda: _GzipThenDelete(tmp_map_path, map_file)).start()
elif tmp_map_path and os.path.exists(tmp_map_path):
os.unlink(tmp_map_path)
return result
def CaptureCommandStderr(command, env=None):
"""Returns the stderr of a command.
Args:
command: A list containing the command and arguments.
env: Environment variables for the new process.
"""
child = subprocess.Popen(command, stderr=subprocess.PIPE, env=env)
_, stderr = child.communicate()
return child.returncode, stderr
# =============================================================================
# flutter/buildroot :: build/android/gyp/util/build_utils.py
# License: BSD-3-Clause
# =============================================================================
# Copyright 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import ast
import contextlib
import fnmatch
import json
import os
import pipes
import re
import shlex
import shutil
import subprocess
import sys
import tempfile
import zipfile
# Definition copied from pylib/constants/__init__.py to avoid adding
# a dependency on pylib.
DIR_SOURCE_ROOT = os.environ.get('CHECKOUT_SOURCE_ROOT',
os.path.abspath(os.path.join(os.path.dirname(__file__),
os.pardir, os.pardir, os.pardir, os.pardir)))
CHROMIUM_SRC = os.path.normpath(
os.path.join(os.path.dirname(__file__),
os.pardir, os.pardir, os.pardir, os.pardir))
COLORAMA_ROOT = os.path.join(CHROMIUM_SRC,
'third_party', 'colorama', 'src')
# aapt should ignore OWNERS files in addition the default ignore pattern.
AAPT_IGNORE_PATTERN = ('!OWNERS:!.svn:!.git:!.ds_store:!*.scc:.*:<dir>_*:' +
'!CVS:!thumbs.db:!picasa.ini:!*~:!*.d.stamp')
@contextlib.contextmanager
def TempDir():
dirname = tempfile.mkdtemp()
try:
yield dirname
finally:
shutil.rmtree(dirname)
def MakeDirectory(dir_path):
try:
os.makedirs(dir_path)
except OSError:
pass
def DeleteDirectory(dir_path):
if os.path.exists(dir_path):
shutil.rmtree(dir_path)
def Touch(path, fail_if_missing=False):
if fail_if_missing and not os.path.exists(path):
raise Exception(path + ' doesn\'t exist.')
MakeDirectory(os.path.dirname(path))
with open(path, 'a'):
os.utime(path, None)
def FindInDirectory(directory, filename_filter):
files = []
for root, _dirnames, filenames in os.walk(directory):
matched_files = fnmatch.filter(filenames, filename_filter)
files.extend((os.path.join(root, f) for f in matched_files))
return files
def FindInDirectories(directories, filename_filter):
all_files = []
for directory in directories:
all_files.extend(FindInDirectory(directory, filename_filter))
return all_files
def ParseGnList(gn_string):
return ast.literal_eval(gn_string)
def ParseGypList(gyp_string):
# The ninja generator doesn't support $ in strings, so use ## to
# represent $.
# TODO(cjhopman): Remove when
# https://code.google.com/p/gyp/issues/detail?id=327
# is addressed.
gyp_string = gyp_string.replace('##', '$')
if gyp_string.startswith('['):
return ParseGnList(gyp_string)
return shlex.split(gyp_string)
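The two accepted spellings side by side, plus the `##` escape that stands in for `$` (which the ninja generator cannot pass through) — a self-contained sketch of the same parsing:

```python
import ast
import shlex

def parse_gyp_list(gyp_string):
    gyp_string = gyp_string.replace('##', '$')
    if gyp_string.startswith('['):
        # GN-style list literal.
        return ast.literal_eval(gyp_string)
    # Plain shell-style whitespace-separated list.
    return shlex.split(gyp_string)

print(parse_gyp_list('["a.java", "b.java"]'))  # ['a.java', 'b.java']
print(parse_gyp_list('a.java b.java'))         # ['a.java', 'b.java']
print(parse_gyp_list('--path=##{root}/out'))   # ['--path=${root}/out']
```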
def CheckOptions(options, parser, required=None):
if not required:
return
for option_name in required:
if getattr(options, option_name) is None:
parser.error('--%s is required' % option_name.replace('_', '-'))
def WriteJson(obj, path, only_if_changed=False):
old_dump = None
if os.path.exists(path):
with open(path, 'r') as oldfile:
old_dump = oldfile.read()
new_dump = json.dumps(obj, sort_keys=True, indent=2, separators=(',', ': '))
if not only_if_changed or old_dump != new_dump:
with open(path, 'w') as outfile:
outfile.write(new_dump)
def ReadJson(path):
with open(path, 'r') as jsonfile:
return json.load(jsonfile)
class CalledProcessError(Exception):
"""This exception is raised when the process run by CheckOutput
exits with a non-zero exit code."""
def __init__(self, cwd, args, output):
super(CalledProcessError, self).__init__()
self.cwd = cwd
self.args = args
self.output = output
def __str__(self):
# A user should be able to simply copy and paste the command that failed
# into their shell.
copyable_command = '( cd {}; {} )'.format(os.path.abspath(self.cwd),
' '.join(map(pipes.quote, self.args)))
return 'Command failed: {}\n{}'.format(copyable_command, self.output)
# This can be used in most cases like subprocess.check_output(). The output,
# particularly when the command fails, better highlights the command's failure.
# If the command fails, raises a build_utils.CalledProcessError.
def CheckOutput(args, cwd=None,
print_stdout=False, print_stderr=True,
stdout_filter=None,
stderr_filter=None,
universal_newlines=True,
fail_func=lambda returncode, stderr: returncode != 0):
if not cwd:
cwd = os.getcwd()
child = subprocess.Popen(args,
universal_newlines=universal_newlines,
stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd)
stdout, stderr = child.communicate()
if stdout_filter is not None:
stdout = stdout_filter(stdout)
if stderr_filter is not None:
stderr = stderr_filter(stderr)
if fail_func(child.returncode, stderr):
raise CalledProcessError(cwd, args, stdout + stderr)
if print_stdout:
sys.stdout.write(stdout)
if print_stderr:
sys.stderr.write(stderr)
return stdout
def GetModifiedTime(path):
# For a symlink, the modified time should be the greater of the link's
# modified time and the modified time of the target.
return max(os.lstat(path).st_mtime, os.stat(path).st_mtime)
def IsTimeStale(output, inputs):
if not os.path.exists(output):
return True
output_time = GetModifiedTime(output)
for i in inputs:
if GetModifiedTime(i) > output_time:
return True
return False
def IsDeviceReady():
device_state = CheckOutput(['adb', 'get-state'])
return device_state.strip() == 'device'
def CheckZipPath(name):
if os.path.normpath(name) != name:
raise Exception('Non-canonical zip path: %s' % name)
if os.path.isabs(name):
raise Exception('Absolute zip path: %s' % name)
def ExtractAll(zip_path, path=None, no_clobber=True, pattern=None):
if path is None:
path = os.getcwd()
elif not os.path.exists(path):
MakeDirectory(path)
with zipfile.ZipFile(zip_path) as z:
for name in z.namelist():
if name.endswith('/'):
continue
if pattern is not None:
if not fnmatch.fnmatch(name, pattern):
continue
CheckZipPath(name)
if no_clobber:
output_path = os.path.join(path, name)
if os.path.exists(output_path):
raise Exception(
'Path already exists from zip: %s %s %s'
% (zip_path, name, output_path))
z.extractall(path=path)
def DoZip(inputs, output, base_dir):
with zipfile.ZipFile(output, 'w') as outfile:
for f in inputs:
CheckZipPath(os.path.relpath(f, base_dir))
outfile.write(f, os.path.relpath(f, base_dir))
def ZipDir(output, base_dir):
with zipfile.ZipFile(output, 'w') as outfile:
for root, _, files in os.walk(base_dir):
for f in files:
path = os.path.join(root, f)
archive_path = os.path.relpath(path, base_dir)
CheckZipPath(archive_path)
outfile.write(path, archive_path)
def MergeZips(output, inputs, exclude_patterns=None):
added_names = set()
def Allow(name):
if exclude_patterns is not None:
for p in exclude_patterns:
if fnmatch.fnmatch(name, p):
return False
return True
with zipfile.ZipFile(output, 'w') as out_zip:
for in_file in inputs:
with zipfile.ZipFile(in_file, 'r') as in_zip:
for name in in_zip.namelist():
if name not in added_names and Allow(name):
out_zip.writestr(name, in_zip.read(name))
added_names.add(name)
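An in-memory round trip of the merge, showing both first-input-wins de-duplication and pattern exclusion (`io.BytesIO` stands in for real zip files on disk):

```python
import fnmatch
import io
import zipfile

def merge_zips(output, inputs, exclude_patterns=None):
    added = set()
    def allow(name):
        return not any(fnmatch.fnmatch(name, p) for p in exclude_patterns or [])
    with zipfile.ZipFile(output, 'w') as out_zip:
        for in_file in inputs:
            with zipfile.ZipFile(in_file, 'r') as in_zip:
                for name in in_zip.namelist():
                    # First occurrence of a name wins; later copies are dropped.
                    if name not in added and allow(name):
                        out_zip.writestr(name, in_zip.read(name))
                        added.add(name)

def make_zip(entries):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, 'w') as z:
        for name, data in entries.items():
            z.writestr(name, data)
    buf.seek(0)
    return buf

a = make_zip({'x.txt': 'from a', 'junk.pyc': 'drop me'})
b = make_zip({'x.txt': 'from b', 'y.txt': 'keep'})
out = io.BytesIO()
merge_zips(out, [a, b], exclude_patterns=['*.pyc'])
with zipfile.ZipFile(out) as z:
    print(sorted(z.namelist()))      # ['x.txt', 'y.txt']
    print(z.read('x.txt').decode())  # from a  (first input wins)
```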
def PrintWarning(message):
print('WARNING: %s' % message)
def PrintBigWarning(message):
print('***** ' * 8)
PrintWarning(message)
print('***** ' * 8)
def GetSortedTransitiveDependencies(top, deps_func):
"""Gets the list of all transitive dependencies in sorted order.
There should be no cycles in the dependency graph.
Args:
top: a list of the top level nodes
deps_func: A function that takes a node and returns its direct dependencies.
Returns:
A list of all transitive dependencies of nodes in top, in order (a node will
appear in the list at a higher index than all of its dependencies).
"""
def Node(dep):
return (dep, deps_func(dep))
# First: find all deps
unchecked_deps = list(top)
all_deps = set(top)
while unchecked_deps:
dep = unchecked_deps.pop()
new_deps = deps_func(dep).difference(all_deps)
unchecked_deps.extend(new_deps)
all_deps = all_deps.union(new_deps)
# Then: simple, slow topological sort.
sorted_deps = []
unsorted_deps = dict(map(Node, all_deps))
while unsorted_deps:
# Snapshot the items: entries are deleted from the dict inside the loop,
# and mutating a dict while iterating it raises RuntimeError on Python 3.
for library, dependencies in list(unsorted_deps.items()):
if not dependencies.intersection(unsorted_deps.keys()):
sorted_deps.append(library)
del unsorted_deps[library]
return sorted_deps
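A self-contained run of the same two-phase algorithm — discover all transitive dependencies, then repeatedly peel off nodes whose dependencies are already sorted — on a toy graph; note the iteration over a snapshot of the dict, since entries are deleted mid-loop:

```python
def sorted_transitive_deps(top, deps_func):
    # Phase 1: collect every transitive dependency of the top-level nodes.
    unchecked, all_deps = list(top), set(top)
    while unchecked:
        new = deps_func(unchecked.pop()).difference(all_deps)
        unchecked.extend(new)
        all_deps |= new
    # Phase 2: simple topological sort (assumes no cycles).
    result, unsorted = [], {d: deps_func(d) for d in all_deps}
    while unsorted:
        for node, deps in list(unsorted.items()):
            if not deps.intersection(unsorted):
                result.append(node)
                del unsorted[node]
    return result

graph = {'app': {'ui', 'net'}, 'ui': {'base'}, 'net': {'base'}, 'base': set()}
order = sorted_transitive_deps(['app'], lambda n: graph[n])
print(order[0], order[-1])  # base app
```

Every node appears at a higher index than all of its dependencies, as the docstring above promises.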
def GetPythonDependencies():
"""Gets the paths of imported non-system python modules.
A path is assumed to be a "system" import if it is outside of chromium's
src/. The paths will be relative to the current directory.
"""
_ForceLazyModulesToLoad()
module_paths = (m.__file__ for m in sys.modules.values()
if m is not None and hasattr(m, '__file__'))
abs_module_paths = map(os.path.abspath, filter(lambda p: p is not None, module_paths))
assert os.path.isabs(DIR_SOURCE_ROOT)
non_system_module_paths = [
p for p in abs_module_paths if p.startswith(DIR_SOURCE_ROOT)]
def ConvertPycToPy(s):
if s.endswith('.pyc'):
return s[:-1]
return s
non_system_module_paths = map(ConvertPycToPy, non_system_module_paths)
non_system_module_paths = map(os.path.relpath, non_system_module_paths)
return sorted(set(non_system_module_paths))
def _ForceLazyModulesToLoad():
"""Forces any lazily imported modules to fully load themselves.
Inspecting the modules' __file__ attribute causes lazily imported modules
(e.g. from email) to get fully imported and update sys.modules. Iterate
over the values until sys.modules stabilizes so that no modules are missed.
"""
while True:
num_modules_before = len(sys.modules.keys())
for m in sys.modules.values():
if m is not None and hasattr(m, '__file__'):
_ = m.__file__
num_modules_after = len(sys.modules.keys())
if num_modules_before == num_modules_after:
break
def AddDepfileOption(parser):
parser.add_option('--depfile',
help='Path to depfile. This must be specified as the '
'action\'s first output.')
def WriteDepfile(path, dependencies):
with open(path, 'w') as depfile:
depfile.write(path)
depfile.write(': ')
depfile.write(' '.join(dependencies))
depfile.write('\n')
def ExpandFileArgs(args):
"""Replaces file-arg placeholders in args.
These placeholders have the form:
@FileArg(filename:key1:key2:...:keyn)
The value of such a placeholder is calculated by reading 'filename' as json.
And then extracting the value at [key1][key2]...[keyn].
Note: This intentionally does not return the list of files that appear in such
placeholders. An action that uses file-args *must* know the paths of those
files prior to the parsing of the arguments (typically by explicitly listing
them in the action's inputs in build files).
"""
new_args = list(args)
file_jsons = dict()
r = re.compile(r'@FileArg\((.*?)\)')
for i, arg in enumerate(args):
match = r.search(arg)
if not match:
continue
if match.end() != len(arg):
raise Exception('Unexpected characters after FileArg: ' + arg)
lookup_path = match.group(1).split(':')
file_path = lookup_path[0]
if not file_path in file_jsons:
file_jsons[file_path] = ReadJson(file_path)
expansion = file_jsons[file_path]
for k in lookup_path[1:]:
expansion = expansion[k]
new_args[i] = arg[:match.start()] + str(expansion)
return new_args
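An end-to-end example of the placeholder expansion, using a temporary JSON file (the file name and keys are invented for the demo). This mirrors the logic above — colon-separated lookup path, JSON cache, and a check that the placeholder ends the argument:

```python
import json
import os
import re
import tempfile

_FILE_ARG_RE = re.compile(r'@FileArg\((.*?)\)')

def expand_file_args(args):
    new_args = list(args)
    cache = {}
    for i, arg in enumerate(args):
        match = _FILE_ARG_RE.search(arg)
        if not match:
            continue
        if match.end() != len(arg):
            raise ValueError('Unexpected characters after FileArg: ' + arg)
        lookup = match.group(1).split(':')
        path = lookup[0]
        if path not in cache:
            with open(path) as f:
                cache[path] = json.load(f)
        value = cache[path]
        for key in lookup[1:]:
            value = value[key]
        new_args[i] = arg[:match.start()] + str(value)
    return new_args

with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    json.dump({'deps_info': {'package_name': 'org.example.app'}}, f)

expanded = expand_file_args(['--name=@FileArg(%s:deps_info:package_name)' % f.name])
print(expanded)  # ['--name=org.example.app']
os.unlink(f.name)
```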
# =============================================================================
# flutter/buildroot :: build/download_sdk_extras.py
# =============================================================================
#!/usr/bin/env python3
#
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Script to download sdk/extras packages on the bots from google storage.
The script expects arguments that specify zip files in the google storage
bucket named: <dir in SDK extras>_<package name>_<version>.zip. The file will
be extracted in the android_tools/sdk/extras directory on the test bots. This
script will not do anything for developers.
TODO(navabi): Move this script (crbug.com/459819).
"""
import json
import os
import shutil
import subprocess
import sys
import zipfile
SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
CHROME_SRC = os.path.abspath(os.path.join(SCRIPT_DIR, os.pardir))
sys.path.insert(0, os.path.join(SCRIPT_DIR, 'android'))
sys.path.insert(1, os.path.join(CHROME_SRC, 'tools'))
from pylib import constants
import find_depot_tools
DEPOT_PATH = find_depot_tools.add_depot_tools_to_path()
GSUTIL_PATH = os.path.join(DEPOT_PATH, 'gsutil.py')
SDK_EXTRAS_BUCKET = 'gs://chrome-sdk-extras'
SDK_EXTRAS_PATH = os.path.join(constants.ANDROID_SDK_ROOT, 'extras')
SDK_EXTRAS_JSON_FILE = os.path.join(os.path.dirname(__file__),
'android_sdk_extras.json')
def clean_and_extract(dir_name, package_name, zip_file):
local_dir = '%s/%s/%s' % (SDK_EXTRAS_PATH, dir_name, package_name)
if os.path.exists(local_dir):
shutil.rmtree(local_dir)
local_zip = '%s/%s' % (SDK_EXTRAS_PATH, zip_file)
with zipfile.ZipFile(local_zip) as z:
z.extractall(path=SDK_EXTRAS_PATH)
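A standalone sketch of the clean-then-extract pattern used above, with throwaway temp paths: removing the target directory before extraction guarantees stale files from a previous package version never survive an update.

```python
import os
import shutil
import tempfile
import zipfile

root = tempfile.mkdtemp()
zip_path = os.path.join(root, 'pkg.zip')
with zipfile.ZipFile(zip_path, 'w') as z:
    z.writestr('pkg/file.txt', 'hello')

# Simulate a stale previous extraction.
target = os.path.join(root, 'pkg')
os.makedirs(target)
open(os.path.join(target, 'stale.txt'), 'w').close()

# Clean the target dir first, then extract -- the same order
# clean_and_extract uses.
shutil.rmtree(target)
with zipfile.ZipFile(zip_path) as z:
    z.extractall(path=root)

assert os.path.exists(os.path.join(target, 'file.txt'))
assert not os.path.exists(os.path.join(target, 'stale.txt'))
```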
def main():
if not os.environ.get('CHROME_HEADLESS'):
# This is not a buildbot checkout.
return 0
# Update the android_sdk_extras.json file to update downloaded packages.
with open(SDK_EXTRAS_JSON_FILE) as json_file:
packages = json.load(json_file)
for package in packages:
local_zip = '%s/%s' % (SDK_EXTRAS_PATH, package['zip'])
if not os.path.exists(local_zip):
package_zip = '%s/%s' % (SDK_EXTRAS_BUCKET, package['zip'])
try:
subprocess.check_call(['python', GSUTIL_PATH, '--force-version', '4.7',
'cp', package_zip, local_zip])
except subprocess.CalledProcessError:
        print('WARNING: Failed to download SDK packages. If this bot compiles '
              'for Android, it may have errors.')
return 0
# Always clean dir and extract zip to ensure correct contents.
clean_and_extract(package['dir_name'], package['package'], package['zip'])
if __name__ == '__main__':
sys.exit(main())
| bsd-3-clause | e6201e5953f575de22883a2e407ff41a | 35.541667 | 80 | 0.68339 | 3.292866 | false | false | false | false |
matllubos/django-is-core | is_core/generic_views/inlines/inline_form_views.py | 1 | 10444 | from django.forms.formsets import DELETION_FIELD_NAME
from django.utils.translation import ugettext_lazy as _
from django.utils.functional import cached_property
from chamber.utils.forms import formset_has_file_field
from is_core.forms.models import BaseInlineFormSet, smartinlineformset_factory, SmartModelForm
from is_core.generic_views.inlines.base import RelatedInlineView
from is_core.forms.fields import SmartReadonlyField, EmptyReadonlyField
from is_core.utils import get_readonly_field_data, GetMethodFieldMixin
class InlineFormView(GetMethodFieldMixin, RelatedInlineView):
form_class = SmartModelForm
base_inline_formset_class = BaseInlineFormSet
fields = None
exclude = ()
inline_views = None
field_labels = None
template_name = None
fk_name = None
extra = 0
can_add = True
can_delete = True
is_readonly = False
max_num = None
min_num = 0
readonly_fields = ()
initial = []
no_items_text = _('There are no items')
class_names = ['inline-js']
add_inline_button_verbose_name = None
save_before_parent = False
def __init__(self, request, parent_view, parent_instance):
super().__init__(request, parent_view, parent_instance)
self.core = parent_view.core
self.parent_instance = parent_instance
self.parent_model = self.parent_instance.__class__
self.readonly = self._is_readonly()
@cached_property
def formset(self):
return self.get_formset()
def _get_field_labels(self):
return self.field_labels
def _is_readonly(self):
return self.is_readonly or self.parent_view.is_readonly()
def can_form_delete(self, form):
return (
not self.is_form_readonly(form)
and self.permission.has_permission('delete', self.request, self, obj=form.instance)
)
def is_form_readonly(self, form):
return self.readonly and self.permission.has_permission('update', self.request, self, obj=form.instance)
def get_context_data(self, **kwargs):
formset = self.formset
context_data = super().get_context_data(**kwargs)
context_data.update({
'formset': formset,
'fields': self.get_formset_fields(formset),
'name': self.get_name(),
'button_value': self.get_button_value(),
'class_names': self.get_class_names(formset, **kwargs),
'no_items_text': self.no_items_text
})
return context_data
def get_class_names(self, formset, **kwargs):
class_names = self.class_names + [self.get_name().lower()]
if formset.can_add:
class_names.append('can-add')
if formset.can_delete:
class_names.append('can-delete')
if kwargs.get('title'):
class_names.append('with-title')
else:
class_names.append('without-title')
return class_names
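The CSS class list built above is a plain flag-to-class mapping that the templates use to toggle add/delete controls. A standalone sketch (function name and arguments are hypothetical, extracted from the method's instance state):

```python
def build_class_names(base, name, can_add, can_delete, has_title):
    # Mirrors InlineFormView.get_class_names: static class names plus the
    # lowercased view name, followed by feature-flag classes.
    names = list(base) + [name.lower()]
    if can_add:
        names.append('can-add')
    if can_delete:
        names.append('can-delete')
    names.append('with-title' if has_title else 'without-title')
    return names

assert build_class_names(['inline-js'], 'Invoice', True, False, False) == \
    ['inline-js', 'invoice', 'can-add', 'without-title']
```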
def get_exclude(self):
return self.exclude
def generate_fields(self):
fields = self.get_fields()
if fields is None:
fields = (
list(self.get_form_class().base_fields.keys())
+ list(self.get_formset_factory().form.base_fields.keys())
)
return [field for field in fields if field not in self._get_disallowed_fields_from_permissions()]
def get_fields(self):
return self.fields
def get_extra(self):
return self.extra + len(self.get_initial())
def get_initial(self):
return self.initial[:]
def get_can_delete(self):
return (
(self.get_can_add() or (self.can_delete and self.permission.has_permission('delete', self.request, self)))
and not self.readonly
)
def get_can_add(self):
return self.can_add and not self.readonly and self.permission.has_permission('create', self.request, self)
def get_readonly_fields(self):
return self.readonly_fields
def generate_readonly_fields(self):
return list(self.get_readonly_fields()) + list(self._get_readonly_fields_from_permissions())
def get_prefix(self):
return '-'.join((self.parent_view.get_prefix(), 'inline', self.__class__.__name__)).lower()
def get_formset_fields(self, formset):
fields = list(self.generate_fields() or formset.form.base_fields.keys())
if formset.can_delete:
fields.append(DELETION_FIELD_NAME)
return fields
def formfield_for_dbfield(self, db_field, **kwargs):
return db_field.formfield(**kwargs)
def formfield_for_readonlyfield(self, name, **kwargs):
def _get_readonly_field_data(instance):
return get_readonly_field_data(
instance, name, self.request, view=self, field_labels=self._get_field_labels()
)
return SmartReadonlyField(_get_readonly_field_data)
def get_form_class(self):
return self.form_class
def get_max_num(self):
return self.max_num
def get_min_num(self):
return self.min_num
def get_formset_factory(self, fields=None, readonly_fields=()):
return smartinlineformset_factory(
self.parent_model, self.model, self.request, form=self.get_form_class(), fk_name=self.fk_name,
extra=self.get_extra(), formset=self.base_inline_formset_class, exclude=self.get_exclude(),
fields=fields, min_num=self.get_min_num(), max_num=self.get_max_num(), readonly_fields=readonly_fields,
readonly=self._is_readonly(), formreadonlyfield_callback=self.formfield_for_readonlyfield,
formfield_callback=self.formfield_for_dbfield, labels=self._get_field_labels(),
can_delete=self.get_can_delete()
)
def get_queryset(self):
return self.model.objects.all()
def get_formset(self):
fields = self.generate_fields()
readonly_fields = self.generate_readonly_fields()
if self.request.POST:
formset = self.get_formset_factory(fields, readonly_fields)(data=self.request.POST,
files=self.request.FILES,
instance=self.parent_instance,
queryset=self.get_queryset(),
prefix=self.get_prefix())
else:
formset = self.get_formset_factory(fields, readonly_fields)(instance=self.parent_instance,
queryset=self.get_queryset(),
initial=self.get_initial(),
prefix=self.get_prefix())
formset.can_add = self.get_can_add()
for form in formset.all_forms():
form.class_names = self.form_class_names(form)
form._is_readonly = self.is_form_readonly(form)
if not self.readonly and form._is_readonly:
if formset.can_delete:
form.readonly_fields = set(form.fields.keys()) - {'id', DELETION_FIELD_NAME}
else:
form.readonly_fields = set(form.fields.keys()) - {'id'}
if formset.can_delete and form.instance.pk and not self.can_form_delete(form):
form.fields[DELETION_FIELD_NAME] = EmptyReadonlyField(
required=form.fields[DELETION_FIELD_NAME].required,
label=form.fields[DELETION_FIELD_NAME].label
)
form.readonly_fields |= {DELETION_FIELD_NAME}
self.init_form(form)
for i in range(self.get_min_num()):
formset.forms[i].empty_permitted = False
return formset
def form_class_names(self, form):
if not form.instance.pk:
return ['empty']
return []
def init_form(self, form):
self.form_fields(form)
def form_fields(self, form):
for field_name, field in form.fields.items():
self.form_field(form, field_name, field)
def form_field(self, form, field_name, form_field):
        placeholder = self.model._ui_meta.placeholders.get(field_name, None)
        if placeholder:
            form_field.widget.placeholder = placeholder
return form_field
def get_name(self):
return self.model.__name__
def get_button_value(self):
return self.add_inline_button_verbose_name or self.model._ui_meta.add_inline_button_verbose_name % {
'verbose_name': self.model._meta.verbose_name,
'verbose_name_plural': self.model._meta.verbose_name_plural
}
def form_valid(self, request):
formset = self.formset
instances = formset.save(commit=False)
for obj in instances:
change = obj.pk is not None
self.save_obj(obj, change)
for obj in formset.deleted_objects:
self.delete_obj(obj)
formset.save_m2m()
def get_has_file_field(self):
return formset_has_file_field(self.formset.form)
def pre_save_obj(self, obj, change):
pass
def post_save_obj(self, obj, change):
pass
def save_obj(self, obj, change):
self.pre_save_obj(obj, change)
obj.save()
self.post_save_obj(obj, change)
def pre_delete_obj(self, obj):
pass
def post_delete_obj(self, obj):
pass
def delete_obj(self, obj):
self.pre_delete_obj(obj)
obj.delete()
self.post_delete_obj(obj)
def is_valid(self):
return self.formset.is_valid()
def has_changed(self):
return self.formset.has_changed()
def get_title(self):
return (
self.model._meta.verbose_name if self.max_num and self.max_num <= 1
else self.model._meta.verbose_name_plural
)
class TabularInlineFormView(InlineFormView):
template_name = 'is_core/forms/tabular_inline_formset.html'
class StackedInlineFormView(InlineFormView):
template_name = 'is_core/forms/stacked_inline_formset.html'
class ResponsiveInlineFormView(InlineFormView):
template_name = 'is_core/forms/responsive_inline_formset.html'
| bsd-3-clause | acade4552327122c27f1853ae356bc41 | 34.40339 | 118 | 0.599196 | 3.963567 | false | false | false | false |
matllubos/django-is-core | is_core/utils/__init__.py | 1 | 14970 | import re
import json
import types
import datetime
from django.core.exceptions import ImproperlyConfigured
from django.contrib.admin.utils import display_for_value as admin_display_for_value
from django.core.serializers.json import DjangoJSONEncoder
from django.db.models import QuerySet
from django.core.exceptions import FieldDoesNotExist
from django.utils.translation import ugettext
from django.utils.html import format_html, format_html_join
from django.utils.formats import get_format, date_format
from django.utils.timezone import template_localtime
from chamber.utils import call_function_with_unknown_input
from pyston.converters import get_converter
PK_PATTERN = r'(?P<pk>[^/]+)'
NUMBER_PK_PATTERN = r'(?P<pk>\d+)'
EMPTY_VALUE = '---'
LOOKUP_SEP = '__'
METHOD_OBJ_STR_NAME = '_obj_name'
def is_callable(val):
return hasattr(val, '__call__')
def get_new_class_name(prefix, klass):
prefix = prefix.replace('-', ' ').title()
prefix = re.sub(r'\s+', '', prefix)
return prefix + klass.__name__
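The helper turns a dashed prefix into CamelCase and prepends it to the class name; a self-contained check (the example class is hypothetical):

```python
import re

def get_new_class_name(prefix, klass):
    # 'order-item' -> 'Order Item' -> 'OrderItem', then append the
    # wrapped class's name.
    prefix = prefix.replace('-', ' ').title()
    prefix = re.sub(r'\s+', '', prefix)
    return prefix + klass.__name__

class DetailView:
    pass

assert get_new_class_name('order-item', DetailView) == 'OrderItemDetailView'
```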
def flatten_fieldsets(fieldsets):
"""Returns a list of field names from an admin fieldsets structure."""
field_names = []
for _, opts in fieldsets or ():
if 'fieldsets' in opts:
field_names += flatten_fieldsets(opts.get('fieldsets'))
else:
for field in opts.get('fields', ()):
if isinstance(field, (list, tuple)):
field_names.extend(field)
else:
field_names.append(field)
return field_names
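Run on a nested admin-style fieldsets structure, the function flattens grouped (tuple) fields and nested fieldsets in order:

```python
def flatten_fieldsets(fieldsets):
    """Returns a list of field names from an admin fieldsets structure."""
    field_names = []
    for _, opts in fieldsets or ():
        if 'fieldsets' in opts:
            field_names += flatten_fieldsets(opts.get('fieldsets'))
        else:
            for field in opts.get('fields', ()):
                if isinstance(field, (list, tuple)):
                    field_names.extend(field)
                else:
                    field_names.append(field)
    return field_names

fieldsets = (
    (None, {'fields': ('name', ('email', 'phone'))}),
    ('Advanced', {'fieldsets': ((None, {'fields': ('notes',)}),)}),
)
assert flatten_fieldsets(fieldsets) == ['name', 'email', 'phone', 'notes']
```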
def get_fieldsets_without_disallowed_fields(request, fieldsets, disallowed_fields):
generated_fieldsets = []
for title, fieldset_values in fieldsets:
fieldset_values = dict(fieldset_values)
if 'fields' in fieldset_values:
fieldset_values['fields'] = [
field for field in fieldset_values.pop('fields')
if field not in disallowed_fields
]
if 'fieldsets' in fieldset_values:
fieldsets = get_fieldsets_without_disallowed_fields(
request, fieldset_values.pop('fieldsets'), disallowed_fields
)
if fieldsets:
fieldset_values['fieldsets'] = fieldsets
if set(fieldset_values.keys()) & {'fields', 'fieldsets', 'inline_view_inst'}:
generated_fieldsets.append((title, fieldset_values))
return generated_fieldsets
def get_inline_views_from_fieldsets(fieldsets):
"""Returns a list of field names from an admin fieldsets structure."""
inline_views = []
for _, opts in fieldsets or ():
if 'fieldsets' in opts:
inline_views += get_inline_views_from_fieldsets(opts.get('fieldsets'))
elif 'inline_view_inst' in opts:
inline_views.append(opts.get('inline_view_inst'))
return inline_views
def get_inline_views_opts_from_fieldsets(fieldsets):
"""Returns a list of field names from an admin fieldsets structure."""
inline_views = []
for _, opts in fieldsets or ():
if 'fieldsets' in opts:
inline_views += get_inline_views_opts_from_fieldsets(opts.get('fieldsets'))
elif 'inline_view' in opts:
inline_views.append(opts)
return inline_views
def get_field_from_model_or_none(model, field_name):
"""
    Return a field from the model. If the field doesn't exist, None is returned instead of raising an exception.
"""
try:
return model._meta.get_field(field_name)
except (FieldDoesNotExist, AttributeError):
return None
def get_field_label_from_path(model, field_path, view=None, field_labels=None):
"""
    Return the field label of a model class for the input field path. For every field name in the field path the
    right label is resolved first, and these labels are then joined into one string with a " - " separator.
    The field_labels input parameter can affect the result value. Examples:
* field_path='user__email', field_labels={} => 'user email' # default values get from fields
* field_path='user__email', field_labels={'user__email': 'e-mail'} => 'e-mail' # full value is replaced
* field_path='user__email', field_labels={'user': 'customer'} => 'customer - email' # related field prefix is
changed
* field_path='user', field_labels={'user': 'customer'} => 'customer' # full value is replaced
* field_path='user', field_labels={'user__': 'customer'} => 'user' # has no effect
* field_path='user__email', field_labels={'user__': 'customer'} => 'customer email' # related field prefix is
changed
* field_path='user__email', field_labels={'user__': None} => 'email' # related field prefix is ignored
* field_path='user__email', field_labels={'email': 'e-mail'} => 'user email' # has no effect
:param model: Django model class
:param field_path: field names separated with "__"
:param view: view instance
:param field_labels: dict of field labels which can override result field name
:return: field label
"""
from .field_api import get_field_descriptors_from_path
field_labels = {} if field_labels is None else field_labels
field_descriptors = get_field_descriptors_from_path(model, field_path, view)
used_field_names = []
field_descriptor_labels = []
for field_descriptor in field_descriptors:
field_path_prefix = LOOKUP_SEP.join(used_field_names)
current_field_path = LOOKUP_SEP.join(used_field_names + [field_descriptor.field_name])
if field_descriptor_labels and field_path_prefix + LOOKUP_SEP in field_labels:
if field_labels[field_path_prefix + LOOKUP_SEP] is not None:
field_descriptor_labels = [field_labels[field_path_prefix + LOOKUP_SEP]]
else:
field_descriptor_labels = []
if current_field_path in field_labels:
if field_labels[current_field_path] is not None:
field_descriptor_labels = [field_labels[current_field_path]]
else:
field_descriptor_labels = []
elif field_descriptor.field_name != METHOD_OBJ_STR_NAME or not field_descriptor_labels:
if field_descriptor.get_label() is not None:
field_descriptor_labels.append(field_descriptor.get_label())
used_field_names.append(field_descriptor.field_name)
return ' - '.join([str(label) for label in field_descriptor_labels if label is not None])
def get_field_widget_from_path(model, field_path, view=None):
"""
Return form widget to show value get from model instance and field_path
"""
from .field_api import get_field_descriptors_from_path
return get_field_descriptors_from_path(model, field_path, view)[-1].get_widget()
def get_readonly_field_value_from_path(instance, field_path, request=None, view=None):
"""
Return ReadonlyValue instance which contains value and humanized value get from model instance and field_path
"""
from .field_api import get_field_value_from_path
return get_field_value_from_path(instance, field_path, request, view, return_readonly_value=True)
def get_readonly_field_data(instance, field_name, request, view=None, field_labels=None):
"""
    Returns the field humanized value, label and widget which are used to display readonly data of an instance or view.
Args:
field_name: name of the field which will be displayed
instance: model instance
view: view instance
field_labels: dict of field labels which rewrites the generated field label
Returns:
field humanized value, label and widget which are used to display readonly data
"""
return (
get_readonly_field_value_from_path(instance, field_name, request, view),
get_field_label_from_path(instance.__class__, field_name, view, field_labels),
get_field_widget_from_path(instance.__class__, field_name, view)
)
def display_object_data(obj, field_name, request, view=None):
"""
    Returns a humanized value of a model object that can be rendered to HTML or returned as part of a REST response.
examples:
boolean True/False ==> Yes/No
objects ==> object display name with link if current user has permissions to see the object
field with choices ==> string value of choice
field with humanize function ==> result of humanize function
"""
return display_for_value(get_readonly_field_value_from_path(obj, field_name, request, view), request=request)
def display_code(value):
"""
    Display the input value as code.
"""
return format_html(
'<pre style="max-height: 400px">{}</pre>',
value
) if value else display_for_value(value)
def display_json(value):
"""
    Display the input JSON as code.
"""
if value is None:
return display_for_value(value)
if isinstance(value, str):
value = json.loads(value)
return display_code(json.dumps(value, indent=2, ensure_ascii=False, cls=DjangoJSONEncoder))
def display_for_value(value, request=None):
"""
    Converts a value to its humanized form.
examples:
boolean True/False ==> Yes/No
objects ==> object display name with link if current user has permissions to see the object
datetime ==> in localized format
list ==> values separated with ","
dict ==> string formatted with HTML ul/li tags
"""
from is_core.forms.utils import ReadonlyValue
from is_core.site import registered_model_cores
if isinstance(value, ReadonlyValue):
value = value.value
if request and value.__class__ in registered_model_cores:
return render_model_object_with_link(request, value)
elif isinstance(value, (QuerySet, list, tuple, set, types.GeneratorType)):
return format_html(
'<ol class="field-list">{}</ol>',
format_html_join(
'\n',
'<li>{}</li>',
(
(display_for_value(v, request),) for v in value
)
)
)
elif isinstance(value, dict):
return format_html(
'<ul class="field-dict">{}</ul>',
format_html_join(
'\n',
'{}{}',
(
(
format_html('<li>{}</li>', k),
(
display_for_value(v, request) if isinstance(v, dict)
else format_html(
'<ul class="field-dict"><li>{}</li></ul>',
display_for_value(v, request)
)
)
)
for k, v in value.items()
)
)
)
elif isinstance(value, bool):
return ugettext('Yes') if value else ugettext('No')
elif isinstance(value, datetime.datetime):
return date_format(template_localtime(value), (
'DATETIME_FORMAT' if get_format('IS_CORE_VIEW_DATETIME_FORMAT') == 'IS_CORE_VIEW_DATETIME_FORMAT'
else 'IS_CORE_VIEW_DATETIME_FORMAT'
))
elif isinstance(value, datetime.date):
return date_format(value, (
'DATE_FORMAT' if get_format('IS_CORE_VIEW_DATE_FORMAT') == 'IS_CORE_VIEW_DATE_FORMAT'
else 'IS_CORE_VIEW_DATE_FORMAT'
))
else:
return admin_display_for_value(value, EMPTY_VALUE)
def get_url_from_model_core(request, obj):
"""
Returns object URL from model core.
"""
from is_core.site import get_model_core
model_core = get_model_core(obj.__class__)
if model_core and hasattr(model_core, 'ui_patterns'):
edit_pattern = model_core.ui_patterns.get('detail')
return (
edit_pattern.get_url_string(request, obj=obj)
if edit_pattern and edit_pattern.has_permission('get', request, obj=obj) else None
)
else:
return None
def get_obj_url(request, obj):
"""
Returns object URL if current logged user has permissions to see the object
"""
if (is_callable(getattr(obj, 'get_absolute_url', None)) and
(not hasattr(obj, 'can_see_edit_link') or
(is_callable(getattr(obj, 'can_see_edit_link', None)) and obj.can_see_edit_link(request)))):
return call_function_with_unknown_input(obj.get_absolute_url, request=request)
else:
return get_url_from_model_core(request, obj)
def render_model_object_with_link(request, obj, display_value=None):
if obj is None:
return '[{}]'.format(ugettext('missing object'))
obj_url = get_obj_url(request, obj)
display_value = str(obj) if display_value is None else str(display_value)
return format_html('<a href="{}">{}</a>', obj_url, display_value) if obj_url else display_value
def render_model_objects_with_link(request, objs):
return format_html_join(', ', '{}', ((render_model_object_with_link(request, obj),) for obj in objs))
def header_name_to_django(header_name):
return '_'.join(('HTTP', header_name.replace('-', '_').upper()))
def pretty_class_name(class_name):
return re.sub(r'(\w)([A-Z])', r'\1-\2', class_name).lower()
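Both string helpers above are pure functions and easy to check in isolation: one maps an HTTP header name to Django's META key convention, the other converts a CamelCase class name to a kebab-case slug.

```python
import re

def header_name_to_django(header_name):
    # 'Content-Type' -> 'HTTP_CONTENT_TYPE'
    return '_'.join(('HTTP', header_name.replace('-', '_').upper()))

def pretty_class_name(class_name):
    # Insert '-' before each interior capital, then lowercase.
    return re.sub(r'(\w)([A-Z])', r'\1-\2', class_name).lower()

assert header_name_to_django('Content-Type') == 'HTTP_CONTENT_TYPE'
assert pretty_class_name('TabularInlineFormView') == 'tabular-inline-form-view'
```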
def get_export_types_with_content_type(export_types):
generated_export_types = []
for title, type, serialization_format in export_types:
try:
generated_export_types.append(
(title, type, serialization_format, get_converter(type).media_type)
)
except KeyError:
raise ImproperlyConfigured('Missing converter for type {}'.format(type))
return generated_export_types
def get_link_or_none(pattern_name, request, view_kwargs=None):
"""
    Helper that generates a URL from a pattern name and kwargs and checks if the current request has permission to
    open the URL. If not, None is returned.
Args:
        pattern_name (str): slug which is used for view registration to a pattern
request (django.http.request.HttpRequest): Django request object
view_kwargs (dict): list of kwargs necessary for URL generator
Returns:
"""
from is_core.patterns import reverse_pattern
pattern = reverse_pattern(pattern_name)
assert pattern is not None, 'Invalid pattern name {}'.format(pattern_name)
if pattern.has_permission('get', request, view_kwargs=view_kwargs):
return pattern.get_url_string(request, view_kwargs=view_kwargs)
else:
return None
class GetMethodFieldMixin:
@classmethod
def get_method_returning_field_value(cls, field_name):
"""
        Should return the object method that can be used to get the field value.
Args:
field_name: name of the field
Returns: method for obtaining a field value
"""
method = getattr(cls, field_name, None)
return method if method and callable(method) else None
def get_model_name(model):
return str(model._meta.model_name)
| bsd-3-clause | 81ac81018e0d317b511e587e71d2e45e | 35.601467 | 118 | 0.633801 | 4.000534 | false | false | false | false |
matllubos/django-is-core | is_core/forms/generic.py | 1 | 2883 | from django.db import models
from django.contrib.contenttypes.models import ContentType
from django.forms.models import ModelForm
from django.contrib.contenttypes.forms import BaseGenericInlineFormSet as OriginBaseGenericInlineFormSet
from is_core.forms.models import smartmodelformset_factory
from is_core.forms.formsets import BaseFormSetMixin
class BaseGenericInlineFormSet(BaseFormSetMixin, OriginBaseGenericInlineFormSet):
pass
def smart_generic_inlineformset_factory(model, request, form=ModelForm, formset=BaseGenericInlineFormSet,
ct_field='content_type', fk_field='object_id', fields=None, exclude=None,
extra=3, can_order=False, can_delete=True, min_num=None, max_num=None,
formfield_callback=None, widgets=None, validate_min=False, validate_max=False,
localized_fields=None, labels=None, help_texts=None, error_messages=None,
formreadonlyfield_callback=None, readonly_fields=None, for_concrete_model=True,
readonly=False):
"""
Returns a ``GenericInlineFormSet`` for the given kwargs.
You must provide ``ct_field`` and ``fk_field`` if they are different from
the defaults ``content_type`` and ``object_id`` respectively.
"""
opts = model._meta
# if there is no field called `ct_field` let the exception propagate
ct_field = opts.get_field(ct_field)
if not isinstance(ct_field, models.ForeignKey) or ct_field.related_model != ContentType:
raise Exception("fk_name '%s' is not a ForeignKey to ContentType" % ct_field)
fk_field = opts.get_field(fk_field) # let the exception propagate
if exclude is not None:
exclude = list(exclude)
exclude.extend([ct_field.name, fk_field.name])
else:
exclude = [ct_field.name, fk_field.name]
kwargs = {
'form': form,
'formfield_callback': formfield_callback,
'formset': formset,
'extra': extra,
'can_delete': can_delete,
'can_order': can_order,
'fields': fields,
'exclude': exclude,
'max_num': max_num,
'min_num': min_num,
'widgets': widgets,
'validate_min': validate_min,
'validate_max': validate_max,
'localized_fields': localized_fields,
'formreadonlyfield_callback': formreadonlyfield_callback,
'readonly_fields': readonly_fields,
'readonly': readonly,
'labels': labels,
'help_texts': help_texts,
'error_messages': error_messages,
}
FormSet = smartmodelformset_factory(model, request, **kwargs)
FormSet.ct_field = ct_field
FormSet.ct_fk_field = fk_field
FormSet.for_concrete_model = for_concrete_model
return FormSet
| bsd-3-clause | 75548f85149b9644c1b209e97d9c4e33 | 43.353846 | 119 | 0.63753 | 4.154179 | false | false | false | false |
niwinz/django-jinja | django_jinja/views/generic/base.py | 1 | 1148 | from ...base import get_match_extension
class Jinja2TemplateResponseMixin:
jinja2_template_extension = None
def get_template_names(self):
"""
Return a list of template names to be used for the request.
        This calls the superclass's get_template_names and suffixes
        the Jinja2 match extension to the returned values.
If you specify jinja2_template_extension then that value will
be used. Otherwise it tries to detect the extension based on
values in settings.
If you would like to not have it append an extension, set
jinja2_template_extension to '' (empty string).
"""
vals = super().get_template_names()
ext = self.jinja2_template_extension
if ext is None:
ext = get_match_extension(using=getattr(self, 'template_engine', None))
# Exit early if the user has specified an empty match extension
if not ext:
return vals
names = []
for val in vals:
if not val.endswith(ext):
val += ext
names.append(val)
return names
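The suffixing loop above reduces to a small pure function once the extension has been resolved; a standalone sketch (function name hypothetical):

```python
def append_extension(names, ext):
    # Mirrors get_template_names: append the match extension only to
    # names that do not already end with it; an empty extension leaves
    # the list untouched.
    if not ext:
        return list(names)
    return [n if n.endswith(ext) else n + ext for n in names]

assert append_extension(['base.html', 'detail.jinja'], '.jinja') == \
    ['base.html.jinja', 'detail.jinja']
assert append_extension(['base.html'], '') == ['base.html']
```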
| bsd-3-clause | d5aec7d98198208156cac1db6006ad9a | 30.888889 | 83 | 0.622822 | 4.647773 | false | false | false | false |
dask/dask-ml | tests/test_utils.py | 1 | 6640 | from collections import namedtuple
import dask.array as da
import dask.dataframe as dd
import numpy as np
import pandas as pd
import pandas.testing as tm
import pytest
from dask.array.utils import assert_eq as assert_eq_ar
from dask.dataframe.utils import assert_eq as assert_eq_df
from dask_ml.datasets import make_classification
from dask_ml.utils import (
_num_samples,
assert_estimator_equal,
check_array,
check_chunks,
check_matching_blocks,
check_random_state,
handle_zeros_in_scale,
slice_columns,
)
df = dd.from_pandas(pd.DataFrame(5 * [range(42)]).T, npartitions=5)
s = dd.from_pandas(pd.Series([0, 1, 2, 3, 0]), npartitions=5)
a = da.from_array(np.array([0, 1, 2, 3, 0]), chunks=3)
X, y = make_classification(chunks=(2, 20))
Foo = namedtuple("Foo", "a_ b_ c_ d_")
Bar = namedtuple("Bar", "a_ b_ d_ e_")
def test_slice_columns():
columns = [2, 3]
df2 = slice_columns(df, columns)
X2 = slice_columns(X, columns)
assert list(df2.columns) == columns
assert_eq_df(df[columns].compute(), df2.compute())
assert_eq_ar(X.compute(), X2.compute())
def test_handle_zeros_in_scale():
s2 = handle_zeros_in_scale(s)
a2 = handle_zeros_in_scale(a)
assert list(s2.compute()) == [1, 1, 2, 3, 1]
assert list(a2.compute()) == [1, 1, 2, 3, 1]
x = np.array([1, 2, 3, 0], dtype="f8")
expected = np.array([1, 2, 3, 1], dtype="f8")
result = handle_zeros_in_scale(x)
np.testing.assert_array_equal(result, expected)
x = pd.Series(x)
expected = pd.Series(expected)
result = handle_zeros_in_scale(x)
tm.assert_series_equal(result, expected)
x = da.from_array(x.values, chunks=2)
expected = expected.values
result = handle_zeros_in_scale(x)
assert_eq_ar(result, expected)
x = dd.from_dask_array(x)
expected = pd.Series(expected)
result = handle_zeros_in_scale(x)
assert_eq_df(result, expected)
def test_assert_estimator_passes():
l = Foo(1, 2, 3, 4)
r = Foo(1, 2, 3, 4)
assert_estimator_equal(l, r) # it works!
def test_assert_estimator_different_attributes():
l = Foo(1, 2, 3, 4)
r = Bar(1, 2, 3, 4)
with pytest.raises(AssertionError):
assert_estimator_equal(l, r)
def test_assert_estimator_different_scalers():
l = Foo(1, 2, 3, 4)
r = Foo(1, 2, 3, 3)
with pytest.raises(AssertionError):
assert_estimator_equal(l, r)
@pytest.mark.parametrize(
"a", [np.array([1, 2]), da.from_array(np.array([1, 2]), chunks=1)]
)
def test_assert_estimator_different_arrays(a):
l = Foo(1, 2, 3, a)
r = Foo(1, 2, 3, np.array([1, 0]))
with pytest.raises(AssertionError):
assert_estimator_equal(l, r)
@pytest.mark.parametrize(
"a",
[
pd.DataFrame({"A": [1, 2]}),
dd.from_pandas(pd.DataFrame({"A": [1, 2]}), npartitions=2),
],
)
def test_assert_estimator_different_dataframes(a):
l = Foo(1, 2, 3, a)
r = Foo(1, 2, 3, pd.DataFrame({"A": [0, 1]}))
with pytest.raises(AssertionError):
assert_estimator_equal(l, r)
def test_check_random_state():
for rs in [None, 0]:
result = check_random_state(rs)
assert isinstance(result, da.random.RandomState)
rs = da.random.RandomState(0)
result = check_random_state(rs)
assert result is rs
with pytest.raises(TypeError):
check_random_state(np.random.RandomState(0))
@pytest.mark.parametrize("chunks", [None, 4, (2000, 4), [2000, 4]])
def test_get_chunks(chunks):
from unittest import mock
with mock.patch("dask_ml.utils.cpu_count", return_value=4):
result = check_chunks(n_samples=8000, n_features=4, chunks=chunks)
expected = (2000, 4)
assert result == expected
@pytest.mark.parametrize("chunks", [None, 8])
def test_get_chunks_min(chunks):
result = check_chunks(n_samples=8, n_features=4, chunks=chunks)
expected = (100, 4)
assert result == expected
def test_get_chunks_raises():
with pytest.raises(AssertionError):
check_chunks(1, 1, chunks=(1, 2, 3))
with pytest.raises(AssertionError):
check_chunks(1, 1, chunks=[1, 2, 3])
with pytest.raises(ValueError):
check_chunks(1, 1, chunks=object())
def test_check_array_raises():
X = da.random.uniform(size=(10, 5), chunks=2)
with pytest.raises(TypeError) as m:
check_array(X)
assert m.match("Chunking is only allowed on the first axis.")
@pytest.mark.parametrize(
"data",
[
np.random.uniform(size=10),
da.random.uniform(size=10, chunks=5),
da.random.uniform(size=(10, 4), chunks=5),
dd.from_pandas(pd.DataFrame({"A": range(10)}), npartitions=2),
dd.from_pandas(pd.Series(range(10)), npartitions=2),
],
)
def test_num_samples(data):
assert _num_samples(data) == 10
def test_check_array_1d():
arr = da.random.uniform(size=(10,), chunks=5)
check_array(arr, ensure_2d=False)
@pytest.mark.parametrize(
"arrays",
[
[],
[da.random.uniform(size=10, chunks=5)],
[da.random.uniform(size=10, chunks=5), da.random.uniform(size=10, chunks=5)],
[
dd.from_pandas(pd.Series([1, 2, 3]), 2),
dd.from_pandas(pd.Series([1, 2, 3]), 2),
],
[
dd.from_pandas(pd.Series([1, 2, 3]), 2),
dd.from_pandas(pd.DataFrame({"A": [1, 2, 3]}), 2),
],
[
dd.from_pandas(pd.Series([1, 2, 3]), 2).reset_index(),
dd.from_pandas(pd.Series([1, 2, 3]), 2).reset_index(),
],
# Allow known and unknown?
pytest.param(
[
dd.from_pandas(pd.Series([1, 2, 3]), 2),
dd.from_pandas(pd.Series([1, 2, 3]), 2).reset_index(),
],
marks=pytest.mark.xfail(reason="Known and unknown divisions."),
),
],
)
def test_matching_blocks_ok(arrays):
check_matching_blocks(*arrays)
@pytest.mark.parametrize(
"arrays",
[
[np.array([1, 2]), np.array([1, 2])],
[da.random.uniform(size=10, chunks=5), da.random.uniform(size=10, chunks=4)],
[
da.random.uniform(size=(10, 10), chunks=(5, 5)),
da.random.uniform(size=(10, 10), chunks=(5, 4)),
],
[
dd.from_pandas(pd.Series(range(100)), 50),
dd.from_pandas(pd.Series(range(100)), 25),
],
[
dd.from_pandas(pd.Series(range(100)), 50),
dd.from_pandas(pd.DataFrame({"A": range(100)}), 25),
],
],
)
def test_matching_blocks_raises(arrays):
with pytest.raises(ValueError):
check_matching_blocks(*arrays)
| bsd-3-clause | a9cd60224d6e75e0ccdaef61f650f10e | 27.135593 | 85 | 0.595633 | 3.079777 | false | true | false | false |
andialbrecht/sqlparse | sqlparse/filters/reindent.py | 1 | 9549 | #
# Copyright (C) 2009-2020 the sqlparse authors and contributors
# <see AUTHORS file>
#
# This module is part of python-sqlparse and is released under
# the BSD License: https://opensource.org/licenses/BSD-3-Clause
from sqlparse import sql, tokens as T
from sqlparse.utils import offset, indent
class ReindentFilter:
def __init__(self, width=2, char=' ', wrap_after=0, n='\n',
comma_first=False, indent_after_first=False,
indent_columns=False):
self.n = n
self.width = width
self.char = char
self.indent = 1 if indent_after_first else 0
self.offset = 0
self.wrap_after = wrap_after
self.comma_first = comma_first
self.indent_columns = indent_columns
self._curr_stmt = None
self._last_stmt = None
self._last_func = None
def _flatten_up_to_token(self, token):
"""Yields all tokens up to token but excluding current."""
if token.is_group:
token = next(token.flatten())
for t in self._curr_stmt.flatten():
if t == token:
break
yield t
@property
def leading_ws(self):
return self.offset + self.indent * self.width
def _get_offset(self, token):
raw = ''.join(map(str, self._flatten_up_to_token(token)))
line = (raw or '\n').splitlines()[-1]
# Now take current offset into account and return relative offset.
return len(line) - len(self.char * self.leading_ws)
def nl(self, offset=0):
return sql.Token(
T.Whitespace,
self.n + self.char * max(0, self.leading_ws + offset))
def _next_token(self, tlist, idx=-1):
split_words = ('FROM', 'STRAIGHT_JOIN$', 'JOIN$', 'AND', 'OR',
'GROUP BY', 'ORDER BY', 'UNION', 'VALUES',
'SET', 'BETWEEN', 'EXCEPT', 'HAVING', 'LIMIT')
m_split = T.Keyword, split_words, True
tidx, token = tlist.token_next_by(m=m_split, idx=idx)
if token and token.normalized == 'BETWEEN':
tidx, token = self._next_token(tlist, tidx)
if token and token.normalized == 'AND':
tidx, token = self._next_token(tlist, tidx)
return tidx, token
def _split_kwds(self, tlist):
tidx, token = self._next_token(tlist)
while token:
pidx, prev_ = tlist.token_prev(tidx, skip_ws=False)
uprev = str(prev_)
if prev_ and prev_.is_whitespace:
del tlist.tokens[pidx]
tidx -= 1
if not (uprev.endswith('\n') or uprev.endswith('\r')):
tlist.insert_before(tidx, self.nl())
tidx += 1
tidx, token = self._next_token(tlist, tidx)
def _split_statements(self, tlist):
ttypes = T.Keyword.DML, T.Keyword.DDL
tidx, token = tlist.token_next_by(t=ttypes)
while token:
pidx, prev_ = tlist.token_prev(tidx, skip_ws=False)
if prev_ and prev_.is_whitespace:
del tlist.tokens[pidx]
tidx -= 1
# only break if it's not the first token
if prev_:
tlist.insert_before(tidx, self.nl())
tidx += 1
tidx, token = tlist.token_next_by(t=ttypes, idx=tidx)
def _process(self, tlist):
func_name = '_process_{cls}'.format(cls=type(tlist).__name__)
func = getattr(self, func_name.lower(), self._process_default)
func(tlist)
def _process_where(self, tlist):
tidx, token = tlist.token_next_by(m=(T.Keyword, 'WHERE'))
if not token:
return
# issue121, errors in statement fixed??
tlist.insert_before(tidx, self.nl())
with indent(self):
self._process_default(tlist)
def _process_parenthesis(self, tlist):
ttypes = T.Keyword.DML, T.Keyword.DDL
_, is_dml_dll = tlist.token_next_by(t=ttypes)
fidx, first = tlist.token_next_by(m=sql.Parenthesis.M_OPEN)
if first is None:
return
with indent(self, 1 if is_dml_dll else 0):
tlist.tokens.insert(0, self.nl()) if is_dml_dll else None
with offset(self, self._get_offset(first) + 1):
self._process_default(tlist, not is_dml_dll)
def _process_function(self, tlist):
self._last_func = tlist[0]
self._process_default(tlist)
def _process_identifierlist(self, tlist):
identifiers = list(tlist.get_identifiers())
if self.indent_columns:
first = next(identifiers[0].flatten())
num_offset = 1 if self.char == '\t' else self.width
else:
first = next(identifiers.pop(0).flatten())
num_offset = 1 if self.char == '\t' else self._get_offset(first)
if not tlist.within(sql.Function) and not tlist.within(sql.Values):
with offset(self, num_offset):
position = 0
for token in identifiers:
# Add 1 for the "," separator
position += len(token.value) + 1
if position > (self.wrap_after - self.offset):
adjust = 0
if self.comma_first:
adjust = -2
_, comma = tlist.token_prev(
tlist.token_index(token))
if comma is None:
continue
token = comma
tlist.insert_before(token, self.nl(offset=adjust))
if self.comma_first:
_, ws = tlist.token_next(
tlist.token_index(token), skip_ws=False)
if (ws is not None
and ws.ttype is not T.Text.Whitespace):
tlist.insert_after(
token, sql.Token(T.Whitespace, ' '))
position = 0
else:
# ensure whitespace
for token in tlist:
_, next_ws = tlist.token_next(
tlist.token_index(token), skip_ws=False)
if token.value == ',' and not next_ws.is_whitespace:
tlist.insert_after(
token, sql.Token(T.Whitespace, ' '))
end_at = self.offset + sum(len(i.value) + 1 for i in identifiers)
adjusted_offset = 0
if (self.wrap_after > 0
and end_at > (self.wrap_after - self.offset)
and self._last_func):
adjusted_offset = -len(self._last_func.value) - 1
with offset(self, adjusted_offset), indent(self):
if adjusted_offset < 0:
tlist.insert_before(identifiers[0], self.nl())
position = 0
for token in identifiers:
# Add 1 for the "," separator
position += len(token.value) + 1
if (self.wrap_after > 0
and position > (self.wrap_after - self.offset)):
adjust = 0
tlist.insert_before(token, self.nl(offset=adjust))
position = 0
self._process_default(tlist)
def _process_case(self, tlist):
iterable = iter(tlist.get_cases())
cond, _ = next(iterable)
first = next(cond[0].flatten())
with offset(self, self._get_offset(tlist[0])):
with offset(self, self._get_offset(first)):
for cond, value in iterable:
token = value[0] if cond is None else cond[0]
tlist.insert_before(token, self.nl())
# Line breaks on group level are done. let's add an offset of
# len "when ", "then ", "else "
with offset(self, len("WHEN ")):
self._process_default(tlist)
end_idx, end = tlist.token_next_by(m=sql.Case.M_CLOSE)
if end_idx is not None:
tlist.insert_before(end_idx, self.nl())
def _process_values(self, tlist):
tlist.insert_before(0, self.nl())
tidx, token = tlist.token_next_by(i=sql.Parenthesis)
first_token = token
while token:
ptidx, ptoken = tlist.token_next_by(m=(T.Punctuation, ','),
idx=tidx)
if ptoken:
if self.comma_first:
adjust = -2
offset = self._get_offset(first_token) + adjust
tlist.insert_before(ptoken, self.nl(offset))
else:
tlist.insert_after(ptoken,
self.nl(self._get_offset(token)))
tidx, token = tlist.token_next_by(i=sql.Parenthesis, idx=tidx)
def _process_default(self, tlist, stmts=True):
self._split_statements(tlist) if stmts else None
self._split_kwds(tlist)
for sgroup in tlist.get_sublists():
self._process(sgroup)
def process(self, stmt):
self._curr_stmt = stmt
self._process(stmt)
if self._last_stmt is not None:
nl = '\n' if str(self._last_stmt).endswith('\n') else '\n\n'
stmt.tokens.insert(0, sql.Token(T.Whitespace, nl))
self._last_stmt = stmt
return stmt
| bsd-3-clause | ee489a4f7827f01d33bf625a5f518c7b | 38.458678 | 77 | 0.512619 | 3.919951 | false | false | false | false |
andialbrecht/sqlparse | sqlparse/filters/right_margin.py | 1 | 1543 | #
# Copyright (C) 2009-2020 the sqlparse authors and contributors
# <see AUTHORS file>
#
# This module is part of python-sqlparse and is released under
# the BSD License: https://opensource.org/licenses/BSD-3-Clause
import re
from sqlparse import sql, tokens as T
# FIXME: Doesn't work
class RightMarginFilter:
keep_together = (
# sql.TypeCast, sql.Identifier, sql.Alias,
)
def __init__(self, width=79):
self.width = width
self.line = ''
def _process(self, group, stream):
for token in stream:
if token.is_whitespace and '\n' in token.value:
if token.value.endswith('\n'):
self.line = ''
else:
self.line = token.value.splitlines()[-1]
elif token.is_group and type(token) not in self.keep_together:
token.tokens = self._process(token, token.tokens)
else:
val = str(token)
if len(self.line) + len(val) > self.width:
match = re.search(r'^ +', self.line)
if match is not None:
indent = match.group()
else:
indent = ''
yield sql.Token(T.Whitespace, '\n{}'.format(indent))
self.line = indent
self.line += val
yield token
def process(self, group):
# return
# group.tokens = self._process(group, group.tokens)
raise NotImplementedError
| bsd-3-clause | c57fa319940abe657f9447cefdcba6df | 31.145833 | 74 | 0.527544 | 4.262431 | false | false | false | false |