get_size()
get the dimensions of the Surface get_size() -> (width, height) Return the width and height of the Surface in pixels. | pygame.ref.surface#pygame.Surface.get_size |
get_view()
return a buffer view of the Surface's pixels. get_view(<kind>='2') -> BufferProxy Return an object which exports a surface's internal pixel buffer as a C level array struct, Python level array interface or a C level buffer interface. The pixel buffer is writeable. The new buffer protocol is supported for Python 2.6 and up in CPython. The old buffer protocol is also supported for Python 2.x. The old buffer data is in one segment for kind '0', multi-segment for other buffer view kinds. The kind argument is the length 1 string '0', '1', '2', '3', 'r', 'g', 'b', or 'a'. The letters are case insensitive; 'A' will work as well. The argument can be either a Unicode or byte (char) string. The default is '2'. '0' returns a contiguous unstructured bytes view. No surface shape information is given. A ValueError is raised if the surface's pixels are discontinuous. '1' returns a (surface-width * surface-height) array of continuous pixels. A ValueError is raised if the surface pixels are discontinuous. '2' returns a (surface-width, surface-height) array of raw pixels. The pixels are surface-bytesize-d unsigned integers. The pixel format is surface specific. The 3 byte unsigned integers of 24 bit surfaces are unlikely to be accepted by anything other than other pygame functions. '3' returns a (surface-width, surface-height, 3) array of RGB color components. Each of the red, green, and blue components is an unsigned byte. Only 24-bit and 32-bit surfaces are supported. The color components must be in either RGB or BGR order within the pixel. 'r' for red, 'g' for green, 'b' for blue, and 'a' for alpha return a (surface-width, surface-height) view of a single color component within a surface: a color plane. Color components are unsigned bytes. Both 24-bit and 32-bit surfaces support 'r', 'g', and 'b'. Only 32-bit surfaces with SRCALPHA support 'a'. The surface is locked only when an exposed interface is accessed. 
For new buffer interface accesses, the surface is unlocked once the last buffer view is released. For array interface and old buffer interface accesses, the surface remains locked until the BufferProxy object is released. New in pygame 1.9.2. | pygame.ref.surface#pygame.Surface.get_view |
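The view kinds above describe different shapes over the same pixel memory. As a rough illustration in plain Python (no pygame; the byte layout shown is one hypothetical little-endian 32-bit RGBA arrangement, not a guaranteed surface format), a '2'-style view treats the buffer as width x height unsigned integers, while a '3'-style view splits each pixel into color components:

```python
import struct

# Hypothetical 2x2 surface of 32-bit pixels, stored as raw bytes
# (little-endian, assumed component order R, G, B, A per pixel).
raw = bytes([255, 0, 0, 255,    0, 255, 0, 255,
             0, 0, 255, 255,    128, 128, 128, 255])
width, height = 2, 2

# '2'-style: one unsigned integer per pixel, shape (width, height).
pixels2 = [[struct.unpack_from("<I", raw, 4 * (y * width + x))[0]
            for y in range(height)] for x in range(width)]

# '3'-style: RGB components only, shape (width, height, 3).
pixels3 = [[list(raw[4 * (y * width + x): 4 * (y * width + x) + 3])
            for y in range(height)] for x in range(width)]

print(hex(pixels2[0][0]))   # mapped integer for the first pixel
print(pixels3[0][0])        # its RGB components
```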
get_width()
get the width of the Surface get_width() -> width Return the width of the Surface in pixels. | pygame.ref.surface#pygame.Surface.get_width |
lock()
lock the Surface memory for pixel access lock() -> None Lock the pixel data of a Surface for access. On accelerated Surfaces, the pixel data may be stored in volatile video memory or nonlinear compressed forms. When a Surface is locked the pixel memory becomes available to access by regular software. Code that reads or writes pixel values will need the Surface to be locked. Surfaces should not remain locked for more than necessary. A locked Surface can often not be displayed or managed by pygame. Not all Surfaces require locking. The mustlock() method can determine if it is actually required. There is no performance penalty for locking and unlocking a Surface that does not need it. All pygame functions will automatically lock and unlock the Surface data as needed. If a section of code is going to make calls that will repeatedly lock and unlock the Surface many times, it can be helpful to wrap the block inside a lock and unlock pair. It is safe to nest locking and unlocking calls. The surface will only be unlocked after the final lock is released. | pygame.ref.surface#pygame.Surface.lock |
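The nesting rule at the end can be pictured with a simple counter (a stand-in class for illustration, not the pygame implementation): the surface only becomes unlocked once every lock() has been matched by an unlock().

```python
class LockCounter:
    """Toy model of nested Surface locking: unlocked only at depth 0."""
    def __init__(self):
        self.depth = 0
    def lock(self):
        self.depth += 1
    def unlock(self):
        if self.depth > 0:
            self.depth -= 1
    @property
    def locked(self):
        return self.depth > 0

s = LockCounter()
s.lock()
s.lock()          # nested lock is safe
s.unlock()
print(s.locked)   # still locked: one lock remains
s.unlock()
print(s.locked)   # now unlocked
```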
map_rgb()
convert a color into a mapped color value map_rgb(Color) -> mapped_int Convert an RGBA color into the mapped integer value for this Surface. The returned integer will contain no more bits than the bit depth of the Surface. Mapped color values are not often used inside pygame, but can be passed to most functions that require a Surface and a color. See the Surface object documentation for more information about colors and pixel formats. | pygame.ref.surface#pygame.Surface.map_rgb |
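Mapping packs the color components into the surface's pixel layout. A sketch of the idea in plain Python (the shift values below are hypothetical, for one common 32-bit layout; a real Surface exposes its own masks and shifts):

```python
# Hypothetical 32-bit layout: 8 bits per channel, A high, then R, G, B.
SHIFTS = {"a": 24, "r": 16, "g": 8, "b": 0}

def map_rgba(r, g, b, a=255):
    """Pack components into one mapped integer (sketch of map_rgb)."""
    return ((a << SHIFTS["a"]) | (r << SHIFTS["r"]) |
            (g << SHIFTS["g"]) | (b << SHIFTS["b"]))

def unmap_rgba(mapped):
    """Recover components from a mapped integer (sketch of unmap_rgb)."""
    return tuple((mapped >> SHIFTS[c]) & 0xFF for c in ("r", "g", "b", "a"))

m = map_rgba(200, 100, 50)
print(hex(m))          # 0xffc86432 under the assumed layout
print(unmap_rgba(m))   # (200, 100, 50, 255)
```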
mustlock()
test if the Surface requires locking mustlock() -> bool Returns True if the Surface is required to be locked to access pixel data. Usually pure software Surfaces do not require locking. This method is rarely needed, since it is safe and quickest to just lock all Surfaces as needed. All pygame functions will automatically lock and unlock the Surface data as needed. If a section of code is going to make calls that will repeatedly lock and unlock the Surface many times, it can be helpful to wrap the block inside a lock and unlock pair. | pygame.ref.surface#pygame.Surface.mustlock |
scroll()
Shift the surface image in place scroll(dx=0, dy=0) -> None Move the image by dx pixels right and dy pixels down. dx and dy may be negative for left and up scrolls respectively. Areas of the surface that are not overwritten retain their original pixel values. Scrolling is contained by the Surface clip area. It is safe to have dx and dy values that exceed the surface size. New in pygame 1.9. | pygame.ref.surface#pygame.Surface.scroll |
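The in-place shift can be sketched on a plain 2D grid (toy code, not the pygame implementation): pixels move by (dx, dy) and the uncovered band keeps its old values.

```python
def scroll(grid, dx=0, dy=0):
    """Shift grid contents right by dx and down by dy, in place.
    Cells not overwritten keep their previous values (sketch of
    Surface.scroll, without clipping)."""
    h, w = len(grid), len(grid[0])
    snapshot = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            sx, sy = x - dx, y - dy
            if 0 <= sx < w and 0 <= sy < h:
                grid[y][x] = snapshot[sy][sx]

g = [[1, 2],
     [3, 4]]
scroll(g, dx=1)
print(g)   # [[1, 1], [3, 3]]: left column retains its old values
```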
set_alpha()
set the alpha value for the full Surface image set_alpha(value, flags=0) -> None set_alpha(None) -> None Set the current alpha value for the Surface. When blitting this Surface onto a destination, the pixels will be drawn slightly transparent. The alpha value is an integer from 0 to 255, 0 is fully transparent and 255 is fully opaque. If None is passed for the alpha value, then alpha blending will be disabled, including per-pixel alpha. This value is different than the per pixel Surface alpha. For a surface with per pixel alpha, blanket alpha is ignored and None is returned. Changed in pygame 2.0: per-surface alpha can be combined with per-pixel alpha. The optional flags argument can be set to pygame.RLEACCEL to provide better performance on non accelerated displays. An RLEACCEL Surface will be slower to modify, but quicker to blit as a source. | pygame.ref.surface#pygame.Surface.set_alpha |
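The effect of a blanket alpha during a blit follows the usual blend formula. A per-channel sketch in plain Python (integer arithmetic as an approximation of what the blitter does internally, not pygame code):

```python
def blend_channel(src, dst, alpha):
    """Blend one 0-255 channel of a source pixel over a destination
    pixel using a blanket alpha (0 = fully transparent, 255 = opaque)."""
    return (src * alpha + dst * (255 - alpha)) // 255

# Fully opaque: source wins; fully transparent: destination wins.
print(blend_channel(200, 40, 255))  # 200
print(blend_channel(200, 40, 0))    # 40
print(blend_channel(200, 40, 128))  # roughly halfway between the two
```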
set_at()
set the color value for a single pixel set_at((x, y), Color) -> None Set the RGBA or mapped integer color value for a single pixel. If the Surface does not have per pixel alphas, the alpha value is ignored. Setting pixels outside the Surface area or outside the Surface clipping will have no effect. Getting and setting pixels one at a time is generally too slow to be used in a game or realtime situation. This function will temporarily lock and unlock the Surface as needed. | pygame.ref.surface#pygame.Surface.set_at |
set_clip()
set the current clipping area of the Surface set_clip(rect) -> None set_clip(None) -> None Each Surface has an active clipping area. This is a rectangle that represents the only pixels on the Surface that can be modified. If None is passed for the rectangle the full Surface will be available for changes. The clipping area is always restricted to the area of the Surface itself. If the clip rectangle is too large it will be shrunk to fit inside the Surface. | pygame.ref.surface#pygame.Surface.set_clip |
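Shrinking an oversized clip rectangle to the Surface is plain rectangle intersection. A sketch of the arithmetic (illustrative helper, not the pygame implementation):

```python
def clamp_clip(surface_size, clip):
    """Intersect an (x, y, w, h) clip rect with a surface of size (w, h);
    returns the restricted clip area (sketch of set_clip's shrinking)."""
    sw, sh = surface_size
    x, y, w, h = clip
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(sw, x + w), min(sh, y + h)
    return (x0, y0, max(0, x1 - x0), max(0, y1 - y0))

# A clip hanging off a 100x100 surface is shrunk to fit.
print(clamp_clip((100, 100), (80, 80, 50, 50)))  # (80, 80, 20, 20)
```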
set_colorkey()
Set the transparent colorkey set_colorkey(Color, flags=0) -> None set_colorkey(None) -> None Set the current color key for the Surface. When blitting this Surface onto a destination, any pixels that have the same color as the colorkey will be transparent. The color can be an RGB color or a mapped color integer. If None is passed, the colorkey will be unset. The colorkey will be ignored if the Surface is formatted to use per pixel alpha values. The colorkey can be mixed with the full Surface alpha value. The optional flags argument can be set to pygame.RLEACCEL to provide better performance on non accelerated displays. An RLEACCEL Surface will be slower to modify, but quicker to blit as a source. | pygame.ref.surface#pygame.Surface.set_colorkey |
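During a blit, colorkeyed pixels are simply skipped. A sketch on plain 2D lists (a hypothetical helper for illustration, not pygame API):

```python
def blit_with_colorkey(dest, src, colorkey):
    """Copy src over dest, leaving dest untouched wherever the source
    pixel equals the colorkey (sketch of colorkey transparency)."""
    for y, row in enumerate(src):
        for x, pixel in enumerate(row):
            if pixel != colorkey:
                dest[y][x] = pixel

KEY = (255, 0, 255)               # magenta as the transparent color
dest = [[(0, 0, 0), (0, 0, 0)]]
src = [[KEY, (9, 9, 9)]]
blit_with_colorkey(dest, src, KEY)
print(dest)   # the keyed pixel left the destination untouched
```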
set_masks()
set the bitmasks needed to convert between a color and a mapped integer set_masks((r,g,b,a)) -> None This is not needed for normal pygame usage. Note In SDL2, the masks are read-only and accordingly this method will raise an AttributeError if called. New in pygame 1.8.1. | pygame.ref.surface#pygame.Surface.set_masks |
set_palette()
set the color palette for an 8-bit Surface set_palette([RGB, RGB, RGB, ...]) -> None Set the full palette for an 8-bit Surface. This will replace the colors in the existing palette. A partial palette can be passed and only the first colors in the original palette will be changed. This function has no effect on a Surface with more than 8-bits per pixel. | pygame.ref.surface#pygame.Surface.set_palette |
set_palette_at()
set the color for a single index in an 8-bit Surface palette set_palette_at(index, RGB) -> None Set the palette value for a single entry in a Surface palette. The index should be a value from 0 to 255. This function has no effect on a Surface with more than 8-bits per pixel. | pygame.ref.surface#pygame.Surface.set_palette_at |
set_shifts()
sets the bit shifts needed to convert between a color and a mapped integer set_shifts((r,g,b,a)) -> None This is not needed for normal pygame usage. Note In SDL2, the shifts are read-only and accordingly this method will raise an AttributeError if called. New in pygame 1.8.1. | pygame.ref.surface#pygame.Surface.set_shifts |
subsurface()
create a new surface that references its parent subsurface(Rect) -> Surface Returns a new Surface that shares its pixels with its new parent. The new Surface is considered a child of the original. Modifications to either Surface's pixels will affect each other. Surface information like clipping area and color keys are unique to each Surface. The new Surface will inherit the palette, color key, and alpha settings from its parent. It is possible to have any number of subsurfaces and subsubsurfaces on the parent. It is also possible to subsurface the display Surface if the display mode is not hardware accelerated. See get_offset() and get_parent() to learn more about the state of a subsurface. A subsurface will have the same class as the parent surface. | pygame.ref.surface#pygame.Surface.subsurface |
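The shared-pixels relationship can be pictured with a view into shared memory (a toy model; pygame does this at the C level with a single pixel buffer):

```python
# Parent "surface": two rows of four pixel values in writable bytearrays.
parent = [bytearray([0, 0, 0, 0]), bytearray([0, 0, 0, 0])]

# "Subsurface" covering columns 1-2 of row 0: a view into the same memory.
child = [memoryview(parent[0])[1:3]]

child[0][0] = 99        # write through the child view...
print(parent[0][1])     # ...and the parent sees it: 99
```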
unlock()
unlock the Surface memory from pixel access unlock() -> None Unlock the Surface pixel data after it has been locked. The unlocked Surface can once again be drawn and managed by pygame. See the lock() documentation for more details. All pygame functions will automatically lock and unlock the Surface data as needed. If a section of code is going to make calls that will repeatedly lock and unlock the Surface many times, it can be helpful to wrap the block inside a lock and unlock pair. It is safe to nest locking and unlocking calls. The surface will only be unlocked after the final lock is released. | pygame.ref.surface#pygame.Surface.unlock |
unmap_rgb()
convert a mapped integer color value into a Color unmap_rgb(mapped_int) -> Color Convert a mapped integer color into the RGB color components for this Surface. Mapped color values are not often used inside pygame, but can be passed to most functions that require a Surface and a color. See the Surface object documentation for more information about colors and pixel formats. | pygame.ref.surface#pygame.Surface.unmap_rgb |
pygame.surfarray.array2d()
Copy pixels into a 2d array array2d(Surface) -> array Copy the mapped (raw) pixels from a Surface into a 2D array. The bit depth of the surface will control the size of the integer values, and will work for any type of pixel format. This function will temporarily lock the Surface as pixels are copied (see the pygame.Surface.lock() - lock the Surface memory for pixel access method). | pygame.ref.surfarray#pygame.surfarray.array2d |
pygame.surfarray.array3d()
Copy pixels into a 3d array array3d(Surface) -> array Copy the pixels from a Surface into a 3D array. The bit depth of the surface will control the size of the integer values, and will work for any type of pixel format. This function will temporarily lock the Surface as pixels are copied (see the pygame.Surface.lock() - lock the Surface memory for pixel access method). | pygame.ref.surfarray#pygame.surfarray.array3d |
pygame.surfarray.array_alpha()
Copy pixel alphas into a 2d array array_alpha(Surface) -> array Copy the pixel alpha values (degree of transparency) from a Surface into a 2D array. This will work for any type of Surface format. Surfaces without a pixel alpha will return an array with all opaque values. This function will temporarily lock the Surface as pixels are copied (see the pygame.Surface.lock() - lock the Surface memory for pixel access method). | pygame.ref.surfarray#pygame.surfarray.array_alpha |
pygame.surfarray.array_colorkey()
Copy the colorkey values into a 2d array array_colorkey(Surface) -> array Create a new array with the colorkey transparency value from each pixel. If the pixel matches the colorkey it will be fully transparent; otherwise it will be fully opaque. This will work on any type of Surface format. If the image has no colorkey a solid opaque array will be returned. This function will temporarily lock the Surface as pixels are copied. | pygame.ref.surfarray#pygame.surfarray.array_colorkey |
pygame.surfarray.blit_array()
Blit directly from array values blit_array(Surface, array) -> None Directly copy values from an array into a Surface. This is faster than converting the array into a Surface and blitting. The array must be the same dimensions as the Surface and will completely replace all pixel values. Only integer, ASCII character and record arrays are accepted. This function will temporarily lock the Surface as the new values are copied. | pygame.ref.surfarray#pygame.surfarray.blit_array |
pygame.surfarray.get_arraytype()
Gets the currently active array type. get_arraytype () -> str DEPRECATED: Returns the currently active array type. This will be a value of the get_arraytypes() tuple and indicates which type of array module is used for the array creation. New in pygame 1.8. | pygame.ref.surfarray#pygame.surfarray.get_arraytype |
pygame.surfarray.get_arraytypes()
Gets the array system types currently supported. get_arraytypes () -> tuple DEPRECATED: Checks which array systems are available and returns them as a tuple of strings. The values of the tuple can be used directly in the pygame.surfarray.use_arraytype() method. If no supported array system could be found, None will be returned. New in pygame 1.8. | pygame.ref.surfarray#pygame.surfarray.get_arraytypes |
pygame.surfarray.make_surface()
Copy an array to a new surface make_surface(array) -> Surface Create a new Surface that best resembles the data and format on the array. The array can be 2D or 3D with any sized integer values. Function make_surface uses the array struct interface to acquire array properties, so is not limited to just NumPy arrays. See pygame.pixelcopy. New in pygame 1.9.2: array struct interface support. | pygame.ref.surfarray#pygame.surfarray.make_surface |
pygame.surfarray.map_array()
Map a 3d array into a 2d array map_array(Surface, array3d) -> array2d Convert a 3D array into a 2D array. This will use the given Surface format to control the conversion. Palette surface formats are supported for NumPy arrays. | pygame.ref.surfarray#pygame.surfarray.map_array |
pygame.surfarray.pixels2d()
Reference pixels into a 2d array pixels2d(Surface) -> array Create a new 2D array that directly references the pixel values in a Surface. Any changes to the array will affect the pixels in the Surface. This is a fast operation since no data is copied. Pixels from a 24-bit Surface cannot be referenced, but all other Surface bit depths can. The Surface this references will remain locked for the lifetime of the array (see the pygame.Surface.lock() - lock the Surface memory for pixel access method). | pygame.ref.surfarray#pygame.surfarray.pixels2d |
pygame.surfarray.pixels3d()
Reference pixels into a 3d array pixels3d(Surface) -> array Create a new 3D array that directly references the pixel values in a Surface. Any changes to the array will affect the pixels in the Surface. This is a fast operation since no data is copied. This will only work on Surfaces that have 24-bit or 32-bit formats. Lower pixel formats cannot be referenced. The Surface this references will remain locked for the lifetime of the array (see the pygame.Surface.lock() - lock the Surface memory for pixel access method). | pygame.ref.surfarray#pygame.surfarray.pixels3d |
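The copy-versus-reference distinction between surfarray.array3d and surfarray.pixels3d mirrors the general difference between copying data and taking a view over it (a plain-Python sketch, not the surfarray implementation):

```python
pixels = bytearray([10, 20, 30])   # stand-in for a surface's pixel memory

copied = bytes(pixels)             # like array3d: an independent copy
view = memoryview(pixels)          # like pixels3d: references the memory

pixels[0] = 99                     # change the "surface"
print(copied[0])   # 10 - the copy is unaffected
print(view[0])     # 99 - the view sees the change
```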
pygame.surfarray.pixels_alpha()
Reference pixel alphas into a 2d array pixels_alpha(Surface) -> array Create a new 2D array that directly references the alpha values (degree of transparency) in a Surface. Any changes to the array will affect the pixels in the Surface. This is a fast operation since no data is copied. This can only work on 32-bit Surfaces with a per-pixel alpha value. The Surface this array references will remain locked for the lifetime of the array. | pygame.ref.surfarray#pygame.surfarray.pixels_alpha |
pygame.surfarray.pixels_blue()
Reference pixel blue into a 2d array. pixels_blue (Surface) -> array Create a new 2D array that directly references the blue values in a Surface. Any changes to the array will affect the pixels in the Surface. This is a fast operation since no data is copied. This can only work on 24-bit or 32-bit Surfaces. The Surface this array references will remain locked for the lifetime of the array. | pygame.ref.surfarray#pygame.surfarray.pixels_blue |
pygame.surfarray.pixels_green()
Reference pixel green into a 2d array. pixels_green (Surface) -> array Create a new 2D array that directly references the green values in a Surface. Any changes to the array will affect the pixels in the Surface. This is a fast operation since no data is copied. This can only work on 24-bit or 32-bit Surfaces. The Surface this array references will remain locked for the lifetime of the array. | pygame.ref.surfarray#pygame.surfarray.pixels_green |
pygame.surfarray.pixels_red()
Reference pixel red into a 2d array. pixels_red (Surface) -> array Create a new 2D array that directly references the red values in a Surface. Any changes to the array will affect the pixels in the Surface. This is a fast operation since no data is copied. This can only work on 24-bit or 32-bit Surfaces. The Surface this array references will remain locked for the lifetime of the array. | pygame.ref.surfarray#pygame.surfarray.pixels_red |
pygame.surfarray.use_arraytype()
Sets the array system to be used for surface arrays use_arraytype (arraytype) -> None DEPRECATED: Uses the requested array type for the module functions. The only supported arraytype is 'numpy'. Other values will raise ValueError. | pygame.ref.surfarray#pygame.surfarray.use_arraytype |
pygame.tests.run()
Run the pygame unit test suite run(*args, **kwds) -> tuple Positional arguments (optional): The names of tests to include. If omitted then all tests are run. Test names need not include the trailing '_test'. Keyword arguments:
incomplete - fail incomplete tests (default False)
nosubprocess - run all test suites in the current process (default False, use separate subprocesses)
dump - dump failures/errors as dict ready to eval (default False)
file - if provided, the name of a file into which to dump failures/errors
timings - if provided, the number of times to run each individual test to get an average run time (default is run each test once)
exclude - a list of TAG names to exclude from the run
show_output - show silenced stderr/stdout on errors (default False)
all - dump all results, not just errors (default False)
randomize - randomize order of tests (default False)
seed - if provided, an integer seed for the randomizer
multi_thread - if provided, the number of THREADS in which to run subprocessed tests
time_out - if subprocess is True then the time limit in seconds before killing a test (default 30)
fake - if provided, the name of the fake tests package in the run_tests__tests subpackage to run instead of the normal pygame tests
python - the path to a python executable to run subprocessed tests (default sys.executable)
Return value: A tuple of total number of tests run, dictionary of error information.
The dictionary is empty if no errors were recorded. By default individual test modules are run in separate subprocesses. This recreates normal pygame usage where pygame.init() and pygame.quit() are called only once per program execution, and avoids unfortunate interactions between test modules. Also, a time limit is placed on test execution, so frozen tests are killed when their time allotment has expired. Use the single process option if threading is not working properly or if tests are taking too long. It is not guaranteed that all tests will pass in single process mode. Tests are run in a randomized order if the randomize argument is True or a seed argument is provided. If no seed integer is provided then the system time is used. Individual test modules may have a __tags__ attribute, a list of tag strings used to selectively omit modules from a run. By default only 'interactive' modules such as cdrom_test are ignored. An interactive module must be run from the console as a Python program. This function can only be called once per Python session. It is not reentrant. | pygame.ref.tests#pygame.tests.run |
pygame.time.Clock
create an object to help track time Clock() -> Clock Creates a new Clock object that can be used to track an amount of time. The clock also provides several functions to help control a game's framerate. tick()
update the clock tick(framerate=0) -> milliseconds This method should be called once per frame. It will compute how many milliseconds have passed since the previous call. If you pass the optional framerate argument the function will delay to keep the game running slower than the given ticks per second. This can be used to help limit the runtime speed of a game. By calling Clock.tick(40) once per frame, the program will never run at more than 40 frames per second. Note that this function uses the SDL_Delay function, which is not accurate on every platform but does not use much CPU. Use tick_busy_loop if you want an accurate timer, and don't mind chewing CPU.
tick_busy_loop()
update the clock tick_busy_loop(framerate=0) -> milliseconds This method should be called once per frame. It will compute how many milliseconds have passed since the previous call. If you pass the optional framerate argument the function will delay to keep the game running slower than the given ticks per second. This can be used to help limit the runtime speed of a game. By calling Clock.tick_busy_loop(40) once per frame, the program will never run at more than 40 frames per second. Note that this function uses pygame.time.delay(), which uses lots of CPU in a busy loop to make sure that timing is more accurate. New in pygame 1.8.
get_time()
time used in the previous tick get_time() -> milliseconds The number of milliseconds that passed between the previous two calls to Clock.tick().
get_rawtime()
actual time used in the previous tick get_rawtime() -> milliseconds Similar to Clock.get_time(), but does not include any time used while Clock.tick() was delaying to limit the framerate.
get_fps()
compute the clock framerate get_fps() -> float Compute your game's framerate (in frames per second). It is computed by averaging the last ten calls to Clock.tick(). | pygame.ref.time#pygame.time.Clock |
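A Clock-style frame limiter can be sketched with the standard library alone (a simplified stand-in for illustration, not pygame's implementation; the real Clock.tick delays via SDL_Delay):

```python
import time

class MiniClock:
    """Toy frame limiter: tick(fps) sleeps out the rest of the frame
    and returns the milliseconds elapsed since the previous tick."""
    def __init__(self):
        self._last = time.monotonic()

    def tick(self, framerate=0):
        now = time.monotonic()
        elapsed = now - self._last
        if framerate > 0:
            target = 1.0 / framerate        # desired seconds per frame
            if elapsed < target:
                time.sleep(target - elapsed)
                now = time.monotonic()
                elapsed = now - self._last
        self._last = now
        return int(elapsed * 1000)

clock = MiniClock()
ms = clock.tick(100)    # aim for at most 100 frames per second
print(ms >= 0)
```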
get_fps()
compute the clock framerate get_fps() -> float Compute your game's framerate (in frames per second). It is computed by averaging the last ten calls to Clock.tick(). | pygame.ref.time#pygame.time.Clock.get_fps |
get_rawtime()
actual time used in the previous tick get_rawtime() -> milliseconds Similar to Clock.get_time(), but does not include any time used while Clock.tick() was delaying to limit the framerate. | pygame.ref.time#pygame.time.Clock.get_rawtime |
get_time()
time used in the previous tick get_time() -> milliseconds The number of milliseconds that passed between the previous two calls to Clock.tick(). | pygame.ref.time#pygame.time.Clock.get_time |
tick()
update the clock tick(framerate=0) -> milliseconds This method should be called once per frame. It will compute how many milliseconds have passed since the previous call. If you pass the optional framerate argument the function will delay to keep the game running slower than the given ticks per second. This can be used to help limit the runtime speed of a game. By calling Clock.tick(40) once per frame, the program will never run at more than 40 frames per second. Note that this function uses the SDL_Delay function, which is not accurate on every platform but does not use much CPU. Use tick_busy_loop if you want an accurate timer, and don't mind chewing CPU. | pygame.ref.time#pygame.time.Clock.tick |
tick_busy_loop()
update the clock tick_busy_loop(framerate=0) -> milliseconds This method should be called once per frame. It will compute how many milliseconds have passed since the previous call. If you pass the optional framerate argument the function will delay to keep the game running slower than the given ticks per second. This can be used to help limit the runtime speed of a game. By calling Clock.tick_busy_loop(40) once per frame, the program will never run at more than 40 frames per second. Note that this function uses pygame.time.delay(), which uses lots of CPU in a busy loop to make sure that timing is more accurate. New in pygame 1.8. | pygame.ref.time#pygame.time.Clock.tick_busy_loop |
pygame.time.delay()
pause the program for an amount of time delay(milliseconds) -> time Will pause for a given number of milliseconds. This function will use the processor (rather than sleeping) in order to make the delay more accurate than pygame.time.wait(). This returns the actual number of milliseconds used. | pygame.ref.time#pygame.time.delay |
pygame.time.get_ticks()
get the time in milliseconds get_ticks() -> milliseconds Return the number of milliseconds since pygame.init() was called. Before pygame is initialized this will always be 0. | pygame.ref.time#pygame.time.get_ticks |
pygame.time.set_timer()
repeatedly create an event on the event queue set_timer(eventid, milliseconds) -> None set_timer(eventid, milliseconds, once) -> None Set an event type to appear on the event queue every given number of milliseconds. The first event will not appear until the amount of time has passed. Every event type can have a separate timer attached to it. It is best to use a value between pygame.USEREVENT and pygame.NUMEVENTS. To disable the timer for an event, set the milliseconds argument to 0. If the once argument is True, the event is sent only once. New in pygame 2.0.0.dev3: once argument added. | pygame.ref.time#pygame.time.set_timer |
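The repeating behavior can be modeled without real time (a toy scheduler, not pygame's event queue): every interval of simulated milliseconds, the event id is posted once, and an interval of 0 disables the timer.

```python
class TimerQueue:
    """Toy model of set_timer: advance simulated time and collect
    the event ids that fire along the way."""
    def __init__(self):
        self.timers = {}    # event id -> (interval_ms, next_fire_ms)
        self.now = 0

    def set_timer(self, eventid, ms):
        if ms == 0:
            self.timers.pop(eventid, None)   # 0 disables the timer
        else:
            self.timers[eventid] = (ms, self.now + ms)

    def advance(self, ms):
        self.now += ms
        fired = []
        for eid, (interval, nxt) in list(self.timers.items()):
            while nxt <= self.now:
                fired.append(eid)
                nxt += interval
            self.timers[eid] = (interval, nxt)
        return fired

q = TimerQueue()
q.set_timer(24, 250)     # post event 24 every 250 ms of simulated time
print(q.advance(600))    # fires at t=250 and t=500 -> [24, 24]
q.set_timer(24, 0)       # disable the timer
print(q.advance(1000))   # []
```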
pygame.time.wait()
pause the program for an amount of time wait(milliseconds) -> time Will pause for a given number of milliseconds. This function sleeps the process to share the processor with other programs. A program that waits for even a few milliseconds will consume very little processor time. It is slightly less accurate than the pygame.time.delay() function. This returns the actual number of milliseconds used. | pygame.ref.time#pygame.time.wait |
pygame.transform.average_color()
finds the average color of a surface average_color(Surface, Rect = None) -> Color Finds the average color of a Surface or a region of a surface specified by a Rect, and returns it as a Color. | pygame.ref.transform#pygame.transform.average_color |
pygame.transform.average_surfaces()
find the average surface from many surfaces. average_surfaces(Surfaces, DestSurface = None, palette_colors = 1) -> Surface Takes a sequence of surfaces and returns a surface with average colors from each of the surfaces. palette_colors - if true we average the colors in the palette, otherwise we average the pixel values. This is useful if the surface actually holds greyscale colors rather than palette colors. Note, this function currently does not handle palette-using surfaces correctly. New in pygame 1.8. New in pygame 1.9: palette_colors argument | pygame.ref.transform#pygame.transform.average_surfaces |
pygame.transform.chop()
gets a copy of an image with an interior area removed chop(Surface, rect) -> Surface Extracts a portion of an image. All vertical and horizontal pixels surrounding the given rectangle area are removed. The corner areas (diagonal to the rect) are then brought together. (The original image is not altered by this operation.) NOTE: If you want a "crop" that returns the part of an image within a rect, you can blit with a rect to a new surface or copy a subsurface. | pygame.ref.transform#pygame.transform.chop |
pygame.transform.flip()
flip vertically and horizontally flip(Surface, xbool, ybool) -> Surface This can flip a Surface either vertically, horizontally, or both. Flipping a Surface is non-destructive and returns a new Surface with the same dimensions. | pygame.ref.transform#pygame.transform.flip |
pygame.transform.get_smoothscale_backend()
return smoothscale filter version in use: 'GENERIC', 'MMX', or 'SSE' get_smoothscale_backend() -> String Shows whether or not smoothscale is using MMX or SSE acceleration. If no acceleration is available then "GENERIC" is returned. For an x86 processor the level of acceleration to use is determined at runtime. This function is provided for pygame testing and debugging. | pygame.ref.transform#pygame.transform.get_smoothscale_backend |
pygame.transform.laplacian()
find edges in a surface laplacian(Surface, DestSurface = None) -> Surface Finds the edges in a surface using the laplacian algorithm. New in pygame 1.8. | pygame.ref.transform#pygame.transform.laplacian |
pygame.transform.rotate()
rotate an image rotate(Surface, angle) -> Surface Unfiltered counterclockwise rotation. The angle argument represents degrees and can be any floating point value. Negative angle amounts will rotate clockwise. Unless rotating by 90 degree increments, the image will be padded larger to hold the new size. If the image has pixel alphas, the padded area will be transparent. Otherwise pygame will pick a color that matches the Surface colorkey or the topleft pixel value. | pygame.ref.transform#pygame.transform.rotate |
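The padded size after a rotation follows from the rotated bounding box. A sketch of the arithmetic (plain Python; pygame's exact rounding may differ slightly):

```python
import math

def rotated_size(w, h, angle_degrees):
    """Bounding-box size of a w x h image rotated counterclockwise
    by angle_degrees (sketch of the padding transform.rotate applies)."""
    a = math.radians(angle_degrees)
    new_w = abs(w * math.cos(a)) + abs(h * math.sin(a))
    new_h = abs(w * math.sin(a)) + abs(h * math.cos(a))
    return (round(new_w), round(new_h))

print(rotated_size(100, 50, 90))   # (50, 100): no padding at 90 degrees
print(rotated_size(100, 50, 45))   # larger than either input dimension
```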
pygame.transform.rotozoom()
filtered scale and rotation rotozoom(Surface, angle, scale) -> Surface This is a combined scale and rotation transform. The resulting Surface will be a filtered 32-bit Surface. The scale argument is a floating point value that will be multiplied by the current resolution. The angle argument is a floating point value that represents the counterclockwise degrees to rotate. A negative rotation angle will rotate clockwise. | pygame.ref.transform#pygame.transform.rotozoom |
pygame.transform.scale()
resize to new resolution scale(Surface, (width, height), DestSurface = None) -> Surface Resizes the Surface to a new resolution. This is a fast scale operation that does not sample the results. An optional destination surface can be used, rather than have it create a new one. This is quicker if you want to repeatedly scale something. However the destination must be the same size as the (width, height) passed in. Also the destination surface must be the same format. | pygame.ref.transform#pygame.transform.scale |
pygame.transform.scale2x()
specialized image doubler scale2x(Surface, DestSurface = None) -> Surface This will return a new image that is double the size of the original. It uses the AdvanceMAME Scale2X algorithm which does a 'jaggie-less' scale of bitmap graphics. This really only has an effect on simple images with solid colors. On photographic and antialiased images it will look like a regular unfiltered scale. An optional destination surface can be used, rather than have it create a new one. This is quicker if you want to repeatedly scale something. However the destination must be twice the size of the source surface passed in. Also the destination surface must be the same format. | pygame.ref.transform#pygame.transform.scale2x |
pygame.transform.set_smoothscale_backend()
set smoothscale filter version to one of: 'GENERIC', 'MMX', or 'SSE' set_smoothscale_backend(type) -> None Sets smoothscale acceleration. Takes a string argument. A value of 'GENERIC' turns off acceleration. 'MMX' uses MMX instructions only. 'SSE' allows SSE extensions as well. A ValueError is raised if type is not recognized or not supported by the current processor. This function is provided for pygame testing and debugging. If smoothscale causes an invalid instruction error then it is a pygame/SDL bug that should be reported. Use this function as a temporary fix only. | pygame.ref.transform#pygame.transform.set_smoothscale_backend |
pygame.transform.smoothscale()
scale a surface to an arbitrary size smoothly smoothscale(Surface, (width, height), DestSurface = None) -> Surface Uses one of two different algorithms for scaling each dimension of the input surface as required. For shrinkage, the output pixels are area averages of the colors they cover. For expansion, a bilinear filter is used. On the x86-64 and i686 architectures, optimized MMX routines are included and run much faster than on other machine types. The size is a 2 number sequence for (width, height). This function only works for 24-bit or 32-bit surfaces. An exception will be thrown if the input surface bit depth is less than 24. New in pygame 1.8. | pygame.ref.transform#pygame.transform.smoothscale |
pygame.transform.threshold()
finds which, and how many, pixels in a surface are within a threshold of a 'search_color' or a 'search_surf'. threshold(dest_surf, surf, search_color, threshold=(0,0,0,0), set_color=(0,0,0,0), set_behavior=1, search_surf=None, inverse_set=False) -> num_threshold_pixels This versatile function can be used to find colors in a 'surf' close to a 'search_color', or close to colors in a separate 'search_surf'. It can also be used to transfer pixels into a 'dest_surf' that match or don't match. By default, pixels in the 'dest_surf' that are NOT within the threshold are changed to set_color. If inverse_set is set to True, the pixels that ARE within the threshold are changed to set_color instead. If the optional 'search_surf' surface is given, it is thresholded against rather than the specified 'search_color'. That is, it will find each pixel in the 'surf' that is within the 'threshold' of the pixel at the same coordinates of the 'search_surf'.
Parameters:
dest_surf (pygame.Surface or None) -- Surface we are changing. See 'set_behavior'. Should be None if counting (set_behavior is 0).
surf (pygame.Surface) -- Surface we are looking at.
search_color (pygame.Color) -- Color we are searching for.
threshold (pygame.Color) -- Within this distance from search_color (or search_surf). You can use a threshold of (r,g,b,a) where the r,g,b can have different thresholds. So you could use an r threshold of 40 and a blue threshold of 2 if you like.
set_color (pygame.Color or None) -- Color we set in dest_surf.
set_behavior (int) -- set_behavior=1 (default): pixels in dest_surf are changed to 'set_color'. set_behavior=0: 'dest_surf' is not changed, pixels are only counted (pass dest_surf=None). set_behavior=2: pixels set in 'dest_surf' are taken from 'surf'.
search_surf (pygame.Surface or None) -- search_surf=None (default): threshold against 'search_color'. If a Surface is given, each pixel of 'surf' is compared to the pixel at the same coordinates in 'search_surf' rather than to 'search_color'.
inverse_set (bool) -- False (default): pixels outside of the threshold are changed. True: pixels within the threshold are changed.
Return type:
int
Returns:
The number of pixels that are within the 'threshold' in 'surf' compared to either 'search_color' or search_surf.
Examples:
See the threshold tests for a full set of examples: https://github.com/pygame/pygame/blob/master/test/transform_test.py def test_threshold_dest_surf_not_change(self):
""" the pixels within the threshold.
All pixels not within threshold are changed to set_color.
So there should be none changed in this test.
"""
(w, h) = size = (32, 32)
threshold = (20, 20, 20, 20)
original_color = (25, 25, 25, 25)
original_dest_color = (65, 65, 65, 55)
threshold_color = (10, 10, 10, 10)
set_color = (255, 10, 10, 10)
surf = pygame.Surface(size, pygame.SRCALPHA, 32)
dest_surf = pygame.Surface(size, pygame.SRCALPHA, 32)
search_surf = pygame.Surface(size, pygame.SRCALPHA, 32)
surf.fill(original_color)
search_surf.fill(threshold_color)
dest_surf.fill(original_dest_color)
# set_behavior=1, set dest_surface from set_color.
# all within threshold of third_surface, so no color is set.
THRESHOLD_BEHAVIOR_FROM_SEARCH_COLOR = 1
pixels_within_threshold = pygame.transform.threshold(
dest_surf=dest_surf,
surf=surf,
search_color=None,
threshold=threshold,
set_color=set_color,
set_behavior=THRESHOLD_BEHAVIOR_FROM_SEARCH_COLOR,
search_surf=search_surf,
)
# Returned count of pixels within threshold is correct
self.assertEqual(w * h, pixels_within_threshold)
# Size of dest surface is correct
dest_rect = dest_surf.get_rect()
dest_size = dest_rect.size
self.assertEqual(size, dest_size)
# The color is not the set_color specified, for any pixel, as all
# pixels are within threshold
for pt in test_utils.rect_area_pts(dest_rect):
self.assertNotEqual(dest_surf.get_at(pt), set_color)
self.assertEqual(dest_surf.get_at(pt), original_dest_color) New in pygame 1.8. Changed in pygame 1.9.4: Fixed a lot of bugs and added keyword arguments. Test your code. | pygame.ref.transform#pygame.transform.threshold |
pygame.version.rev
repository revision of the build rev = 'a6f89747b551+' The Mercurial node identifier of the repository checkout from which this package was built. If the identifier ends with a plus sign '+' then the package contains uncommitted changes. Please include this revision number in bug reports, especially for non-release pygame builds. Important note: pygame development has moved to GitHub, so this variable is now obsolete; it has returned an empty string "" since v1.9.5. Changed in pygame 1.9.5: Always returns an empty string "". | pygame.ref.pygame#pygame.version.rev |
pygame.version.SDL
tupled integers of the SDL library version SDL = '(2, 0, 12)' This is the SDL library version represented as an extended tuple. It also has attributes 'major', 'minor' & 'patch' that can be accessed like this: >>> pygame.version.SDL.major
2 printing the whole thing returns a string like this: >>> pygame.version.SDL
SDLVersion(major=2, minor=0, patch=12) New in pygame 2.0.0. | pygame.ref.pygame#pygame.version.SDL |
pygame.version.ver
version number as a string ver = '1.2' This is the version represented as a string. It can contain a micro release number as well, e.g. '1.5.2' | pygame.ref.pygame#pygame.version.ver |
pygame.version.vernum
tupled integers of the version vernum = (1, 5, 3) This version information can easily be compared with other version numbers of the same format. An example of checking pygame version numbers would look like this: if pygame.version.vernum < (1, 5):
print('Warning, older version of pygame (%s)' % pygame.version.ver)
disable_advanced_features = True New in pygame 1.9.6: Attributes major, minor, and patch. vernum.major == vernum[0]
vernum.minor == vernum[1]
vernum.patch == vernum[2] Changed in pygame 1.9.6: str(pygame.version.vernum) returns a string like "2.0.0" instead of "(2, 0, 0)". Changed in pygame 1.9.6: repr(pygame.version.vernum) returns a string like "PygameVersion(major=2, minor=0, patch=0)" instead of "(2, 0, 0)". | pygame.ref.pygame#pygame.version.vernum |
Cookbook This is a repository for short and sweet examples and links for useful pandas recipes. We encourage users to add to this documentation. Adding interesting links and/or inline examples to this section is a great First Pull Request. Simplified, condensed, new-user friendly, in-line examples have been inserted where possible to augment the Stack-Overflow and GitHub links. Many of the links contain expanded information, above what the in-line examples offer. pandas (pd) and NumPy (np) are the only two abbreviated imported modules. The rest are kept explicitly imported for newer users. Idioms These are some neat pandas idioms if-then/if-then-else on one column, and assignment to another one or more columns:
In [1]: df = pd.DataFrame(
...: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
...: )
...:
In [2]: df
Out[2]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
if-then… An if-then on one column
In [3]: df.loc[df.AAA >= 5, "BBB"] = -1
In [4]: df
Out[4]:
AAA BBB CCC
0 4 10 100
1 5 -1 50
2 6 -1 -30
3 7 -1 -50
An if-then with assignment to 2 columns:
In [5]: df.loc[df.AAA >= 5, ["BBB", "CCC"]] = 555
In [6]: df
Out[6]:
AAA BBB CCC
0 4 10 100
1 5 555 555
2 6 555 555
3 7 555 555
Add another line with different logic, to do the -else
In [7]: df.loc[df.AAA < 5, ["BBB", "CCC"]] = 2000
In [8]: df
Out[8]:
AAA BBB CCC
0 4 2000 2000
1 5 555 555
2 6 555 555
3 7 555 555
Or use pandas where after you’ve set up a mask
In [9]: df_mask = pd.DataFrame(
...: {"AAA": [True] * 4, "BBB": [False] * 4, "CCC": [True, False] * 2}
...: )
...:
In [10]: df.where(df_mask, -1000)
Out[10]:
AAA BBB CCC
0 4 -1000 2000
1 5 -1000 -1000
2 6 -1000 555
3 7 -1000 -1000
if-then-else using NumPy’s where()
In [11]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [12]: df
Out[12]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [13]: df["logic"] = np.where(df["AAA"] > 5, "high", "low")
In [14]: df
Out[14]:
AAA BBB CCC logic
0 4 10 100 low
1 5 20 50 low
2 6 30 -30 high
3 7 40 -50 high
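When more than two outcomes are needed, NumPy's select() generalizes where(); a minimal sketch (the 'band' column and its cut-offs are illustrative, not from the cookbook):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"AAA": [4, 5, 6, 7]})

# Conditions are checked in order; the first match wins,
# and rows matching nothing receive the default.
conditions = [df["AAA"] < 5, df["AAA"] < 7]
choices = ["low", "mid"]
df["band"] = np.select(conditions, choices, default="high")
```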
Splitting Split a frame with a boolean criterion
In [15]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [16]: df
Out[16]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [17]: df[df.AAA <= 5]
Out[17]:
AAA BBB CCC
0 4 10 100
1 5 20 50
In [18]: df[df.AAA > 5]
Out[18]:
AAA BBB CCC
2 6 30 -30
3 7 40 -50
Building criteria Select with multi-column criteria
In [19]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [20]: df
Out[20]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
…and (without assignment returns a Series)
In [21]: df.loc[(df["BBB"] < 25) & (df["CCC"] >= -40), "AAA"]
Out[21]:
0 4
1 5
Name: AAA, dtype: int64
…or (without assignment returns a Series)
In [22]: df.loc[(df["BBB"] > 25) | (df["CCC"] >= -40), "AAA"]
Out[22]:
0 4
1 5
2 6
3 7
Name: AAA, dtype: int64
…or (with assignment modifies the DataFrame.)
In [23]: df.loc[(df["BBB"] > 25) | (df["CCC"] >= 75), "AAA"] = 0.1
In [24]: df
Out[24]:
AAA BBB CCC
0 0.1 10 100
1 5.0 20 50
2 0.1 30 -30
3 0.1 40 -50
Select rows with data closest to certain value using argsort
In [25]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [26]: df
Out[26]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [27]: aValue = 43.0
In [28]: df.loc[(df.CCC - aValue).abs().argsort()]
Out[28]:
AAA BBB CCC
1 5 20 50
0 4 10 100
2 6 30 -30
3 7 40 -50
Dynamically reduce a list of criteria using binary operators
In [29]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [30]: df
Out[30]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [31]: Crit1 = df.AAA <= 5.5
In [32]: Crit2 = df.BBB == 10.0
In [33]: Crit3 = df.CCC > -40.0
One could hard code:
In [34]: AllCrit = Crit1 & Crit2 & Crit3
…Or it can be done with a list of dynamically built criteria
In [35]: import functools
In [36]: CritList = [Crit1, Crit2, Crit3]
In [37]: AllCrit = functools.reduce(lambda x, y: x & y, CritList)
In [38]: df[AllCrit]
Out[38]:
AAA BBB CCC
0 4 10 100
Selection Dataframes The indexing docs. Using both row labels and value conditionals
In [39]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [40]: df
Out[40]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [41]: df[(df.AAA <= 6) & (df.index.isin([0, 2, 4]))]
Out[41]:
AAA BBB CCC
0 4 10 100
2 6 30 -30
Use loc for label-oriented slicing and iloc for positional slicing GH2904
In [42]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]},
....: index=["foo", "bar", "boo", "kar"],
....: )
....:
There are 2 explicit slicing methods, with a third general case:
Positional-oriented (Python slicing style : exclusive of end)
Label-oriented (Non-Python slicing style : inclusive of end)
General (Either slicing style : depends on if the slice contains labels or positions)
In [43]: df.loc["bar":"kar"] # Label
Out[43]:
AAA BBB CCC
bar 5 20 50
boo 6 30 -30
kar 7 40 -50
# Generic
In [44]: df[0:3]
Out[44]:
AAA BBB CCC
foo 4 10 100
bar 5 20 50
boo 6 30 -30
In [45]: df["bar":"kar"]
Out[45]:
AAA BBB CCC
bar 5 20 50
boo 6 30 -30
kar 7 40 -50
Ambiguity arises when an index consists of integers with a non-zero start or non-unit increment.
In [46]: data = {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
In [47]: df2 = pd.DataFrame(data=data, index=[1, 2, 3, 4]) # Note index starts at 1.
In [48]: df2.iloc[1:3] # Position-oriented
Out[48]:
AAA BBB CCC
2 5 20 50
3 6 30 -30
In [49]: df2.loc[1:3] # Label-oriented
Out[49]:
AAA BBB CCC
1 4 10 100
2 5 20 50
3 6 30 -30
Using inverse operator (~) to take the complement of a mask
In [50]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [51]: df
Out[51]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [52]: df[~((df.AAA <= 6) & (df.index.isin([0, 2, 4])))]
Out[52]:
AAA BBB CCC
1 5 20 50
3 7 40 -50
New columns Efficiently and dynamically creating new columns using applymap
In [53]: df = pd.DataFrame({"AAA": [1, 2, 1, 3], "BBB": [1, 1, 2, 2], "CCC": [2, 1, 3, 1]})
In [54]: df
Out[54]:
AAA BBB CCC
0 1 1 2
1 2 1 1
2 1 2 3
3 3 2 1
In [55]: source_cols = df.columns # Or some subset would work too
In [56]: new_cols = [str(x) + "_cat" for x in source_cols]
In [57]: categories = {1: "Alpha", 2: "Beta", 3: "Charlie"}
In [58]: df[new_cols] = df[source_cols].applymap(categories.get)
In [59]: df
Out[59]:
AAA BBB CCC AAA_cat BBB_cat CCC_cat
0 1 1 2 Alpha Alpha Beta
1 2 1 1 Beta Alpha Alpha
2 1 2 3 Alpha Beta Charlie
3 3 2 1 Charlie Beta Alpha
Keep other columns when using min() with groupby
In [60]: df = pd.DataFrame(
....: {"AAA": [1, 1, 1, 2, 2, 2, 3, 3], "BBB": [2, 1, 3, 4, 5, 1, 2, 3]}
....: )
....:
In [61]: df
Out[61]:
AAA BBB
0 1 2
1 1 1
2 1 3
3 2 4
4 2 5
5 2 1
6 3 2
7 3 3
Method 1 : idxmin() to get the index of the minimums
In [62]: df.loc[df.groupby("AAA")["BBB"].idxmin()]
Out[62]:
AAA BBB
1 1 1
5 2 1
6 3 2
Method 2 : sort then take first of each
In [63]: df.sort_values(by="BBB").groupby("AAA", as_index=False).first()
Out[63]:
AAA BBB
0 1 1
1 2 1
2 3 2
Notice the same results, with the exception of the index. Multiindexing The multiindexing docs. Creating a MultiIndex from a labeled frame
In [64]: df = pd.DataFrame(
....: {
....: "row": [0, 1, 2],
....: "One_X": [1.1, 1.1, 1.1],
....: "One_Y": [1.2, 1.2, 1.2],
....: "Two_X": [1.11, 1.11, 1.11],
....: "Two_Y": [1.22, 1.22, 1.22],
....: }
....: )
....:
In [65]: df
Out[65]:
row One_X One_Y Two_X Two_Y
0 0 1.1 1.2 1.11 1.22
1 1 1.1 1.2 1.11 1.22
2 2 1.1 1.2 1.11 1.22
# As Labelled Index
In [66]: df = df.set_index("row")
In [67]: df
Out[67]:
One_X One_Y Two_X Two_Y
row
0 1.1 1.2 1.11 1.22
1 1.1 1.2 1.11 1.22
2 1.1 1.2 1.11 1.22
# With Hierarchical Columns
In [68]: df.columns = pd.MultiIndex.from_tuples([tuple(c.split("_")) for c in df.columns])
In [69]: df
Out[69]:
One Two
X Y X Y
row
0 1.1 1.2 1.11 1.22
1 1.1 1.2 1.11 1.22
2 1.1 1.2 1.11 1.22
# Now stack & Reset
In [70]: df = df.stack(0).reset_index(1)
In [71]: df
Out[71]:
level_1 X Y
row
0 One 1.10 1.20
0 Two 1.11 1.22
1 One 1.10 1.20
1 Two 1.11 1.22
2 One 1.10 1.20
2 Two 1.11 1.22
# And fix the labels (Notice the label 'level_1' got added automatically)
In [72]: df.columns = ["Sample", "All_X", "All_Y"]
In [73]: df
Out[73]:
Sample All_X All_Y
row
0 One 1.10 1.20
0 Two 1.11 1.22
1 One 1.10 1.20
1 Two 1.11 1.22
2 One 1.10 1.20
2 Two 1.11 1.22
Arithmetic Performing arithmetic with a MultiIndex that needs broadcasting
In [74]: cols = pd.MultiIndex.from_tuples(
....: [(x, y) for x in ["A", "B", "C"] for y in ["O", "I"]]
....: )
....:
In [75]: df = pd.DataFrame(np.random.randn(2, 6), index=["n", "m"], columns=cols)
In [76]: df
Out[76]:
A B C
O I O I O I
n 0.469112 -0.282863 -1.509059 -1.135632 1.212112 -0.173215
m 0.119209 -1.044236 -0.861849 -2.104569 -0.494929 1.071804
In [77]: df = df.div(df["C"], level=1)
In [78]: df
Out[78]:
A B C
O I O I O I
n 0.387021 1.633022 -1.244983 6.556214 1.0 1.0
m -0.240860 -0.974279 1.741358 -1.963577 1.0 1.0
Slicing Slicing a MultiIndex with xs
In [79]: coords = [("AA", "one"), ("AA", "six"), ("BB", "one"), ("BB", "two"), ("BB", "six")]
In [80]: index = pd.MultiIndex.from_tuples(coords)
In [81]: df = pd.DataFrame([11, 22, 33, 44, 55], index, ["MyData"])
In [82]: df
Out[82]:
MyData
AA one 11
six 22
BB one 33
two 44
six 55
To take the cross section of the 1st level and 1st axis of the index:
# Note : level and axis are optional, and default to zero
In [83]: df.xs("BB", level=0, axis=0)
Out[83]:
MyData
one 33
two 44
six 55
…and now the 2nd level of the 1st axis.
In [84]: df.xs("six", level=1, axis=0)
Out[84]:
MyData
AA 22
BB 55
Slicing a MultiIndex with xs, method #2
In [85]: import itertools
In [86]: index = list(itertools.product(["Ada", "Quinn", "Violet"], ["Comp", "Math", "Sci"]))
In [87]: headr = list(itertools.product(["Exams", "Labs"], ["I", "II"]))
In [88]: indx = pd.MultiIndex.from_tuples(index, names=["Student", "Course"])
In [89]: cols = pd.MultiIndex.from_tuples(headr) # Notice these are un-named
In [90]: data = [[70 + x + y + (x * y) % 3 for x in range(4)] for y in range(9)]
In [91]: df = pd.DataFrame(data, indx, cols)
In [92]: df
Out[92]:
Exams Labs
I II I II
Student Course
Ada Comp 70 71 72 73
Math 71 73 75 74
Sci 72 75 75 75
Quinn Comp 73 74 75 76
Math 74 76 78 77
Sci 75 78 78 78
Violet Comp 76 77 78 79
Math 77 79 81 80
Sci 78 81 81 81
In [93]: All = slice(None)
In [94]: df.loc["Violet"]
Out[94]:
Exams Labs
I II I II
Course
Comp 76 77 78 79
Math 77 79 81 80
Sci 78 81 81 81
In [95]: df.loc[(All, "Math"), All]
Out[95]:
Exams Labs
I II I II
Student Course
Ada Math 71 73 75 74
Quinn Math 74 76 78 77
Violet Math 77 79 81 80
In [96]: df.loc[(slice("Ada", "Quinn"), "Math"), All]
Out[96]:
Exams Labs
I II I II
Student Course
Ada Math 71 73 75 74
Quinn Math 74 76 78 77
In [97]: df.loc[(All, "Math"), ("Exams")]
Out[97]:
I II
Student Course
Ada Math 71 73
Quinn Math 74 76
Violet Math 77 79
In [98]: df.loc[(All, "Math"), (All, "II")]
Out[98]:
Exams Labs
II II
Student Course
Ada Math 73 74
Quinn Math 76 77
Violet Math 79 80
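Setting values in a portion of a MultiIndex (the recipe linked in the next section) is usually written with loc rather than xs in current pandas; a minimal sketch on a small, made-up frame:

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("Ada", "Math"), ("Ada", "Sci"), ("Quinn", "Math")],
    names=["Student", "Course"],
)
df = pd.DataFrame({"score": [71, 72, 74]}, index=idx)

# Assign to every row whose Course level equals "Math".
df.loc[(slice(None), "Math"), "score"] = 0
```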
Setting portions of a MultiIndex with xs Sorting Sort by specific column or an ordered list of columns, with a MultiIndex
In [99]: df.sort_values(by=("Labs", "II"), ascending=False)
Out[99]:
Exams Labs
I II I II
Student Course
Violet Sci 78 81 81 81
Math 77 79 81 80
Comp 76 77 78 79
Quinn Sci 75 78 78 78
Math 74 76 78 77
Comp 73 74 75 76
Ada Sci 72 75 75 75
Math 71 73 75 74
Comp 70 71 72 73
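The 'Flatten Hierarchical columns' recipe linked below can be sketched by joining each tuple of column labels into a single flat name:

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples([("Exams", "I"), ("Exams", "II"), ("Labs", "I")])
df = pd.DataFrame([[70, 71, 72]], columns=cols)

# Iterating a MultiIndex yields tuples; join the level values with "_".
df.columns = ["_".join(c) for c in df.columns]
```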
Partial selection, the need for sortedness GH2995 Levels Prepending a level to a multiindex Flatten Hierarchical columns Missing data The missing data docs. Fill forward a reversed timeseries
In [100]: df = pd.DataFrame(
.....: np.random.randn(6, 1),
.....: index=pd.date_range("2013-08-01", periods=6, freq="B"),
.....: columns=list("A"),
.....: )
.....:
In [101]: df.loc[df.index[3], "A"] = np.nan
In [102]: df
Out[102]:
A
2013-08-01 0.721555
2013-08-02 -0.706771
2013-08-05 -1.039575
2013-08-06 NaN
2013-08-07 -0.424972
2013-08-08 0.567020
In [103]: df.reindex(df.index[::-1]).ffill()
Out[103]:
A
2013-08-08 0.567020
2013-08-07 -0.424972
2013-08-06 -0.424972
2013-08-05 -1.039575
2013-08-02 -0.706771
2013-08-01 0.721555
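The 'cumsum reset at NaN values' recipe linked below can be sketched by numbering the NaN-delimited runs and summing within each:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan, 3.0, np.nan, 4.0, 5.0])

# Each NaN bumps the group id, so the cumulative sum restarts after it.
groups = s.isna().cumsum()
result = s.fillna(0).groupby(groups).cumsum()
```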
cumsum reset at NaN values Replace Using replace with backrefs Grouping The grouping docs. Basic grouping with apply Unlike agg, apply’s callable is passed a sub-DataFrame which gives you access to all the columns
In [104]: df = pd.DataFrame(
.....: {
.....: "animal": "cat dog cat fish dog cat cat".split(),
.....: "size": list("SSMMMLL"),
.....: "weight": [8, 10, 11, 1, 20, 12, 12],
.....: "adult": [False] * 5 + [True] * 2,
.....: }
.....: )
.....:
In [105]: df
Out[105]:
animal size weight adult
0 cat S 8 False
1 dog S 10 False
2 cat M 11 False
3 fish M 1 False
4 dog M 20 False
5 cat L 12 True
6 cat L 12 True
# List the size of the animals with the highest weight.
In [106]: df.groupby("animal").apply(lambda subf: subf["size"][subf["weight"].idxmax()])
Out[106]:
animal
cat L
dog M
fish M
dtype: object
Using get_group
In [107]: gb = df.groupby(["animal"])
In [108]: gb.get_group("cat")
Out[108]:
animal size weight adult
0 cat S 8 False
2 cat M 11 False
5 cat L 12 True
6 cat L 12 True
Apply to different items in a group
In [109]: def GrowUp(x):
.....: avg_weight = sum(x[x["size"] == "S"].weight * 1.5)
.....: avg_weight += sum(x[x["size"] == "M"].weight * 1.25)
.....: avg_weight += sum(x[x["size"] == "L"].weight)
.....: avg_weight /= len(x)
.....: return pd.Series(["L", avg_weight, True], index=["size", "weight", "adult"])
.....:
In [110]: expected_df = gb.apply(GrowUp)
In [111]: expected_df
Out[111]:
size weight adult
animal
cat L 12.4375 True
dog L 20.0000 True
fish L 1.2500 True
Expanding apply
In [112]: S = pd.Series([i / 100.0 for i in range(1, 11)])
In [113]: def cum_ret(x, y):
.....: return x * (1 + y)
.....:
In [114]: def red(x):
.....: return functools.reduce(cum_ret, x, 1.0)
.....:
In [115]: S.expanding().apply(red, raw=True)
Out[115]:
0 1.010000
1 1.030200
2 1.061106
3 1.103550
4 1.158728
5 1.228251
6 1.314229
7 1.419367
8 1.547110
9 1.701821
dtype: float64
Replacing some values with mean of the rest of a group
In [116]: df = pd.DataFrame({"A": [1, 1, 2, 2], "B": [1, -1, 1, 2]})
In [117]: gb = df.groupby("A")
In [118]: def replace(g):
.....: mask = g < 0
.....: return g.where(mask, g[~mask].mean())
.....:
In [119]: gb.transform(replace)
Out[119]:
B
0 1.0
1 -1.0
2 1.5
3 1.5
Sort groups by aggregated data
In [120]: df = pd.DataFrame(
.....: {
.....: "code": ["foo", "bar", "baz"] * 2,
.....: "data": [0.16, -0.21, 0.33, 0.45, -0.59, 0.62],
.....: "flag": [False, True] * 3,
.....: }
.....: )
.....:
In [121]: code_groups = df.groupby("code")
In [122]: agg_n_sort_order = code_groups[["data"]].transform(sum).sort_values(by="data")
In [123]: sorted_df = df.loc[agg_n_sort_order.index]
In [124]: sorted_df
Out[124]:
code data flag
1 bar -0.21 True
4 bar -0.59 False
0 foo 0.16 False
3 foo 0.45 True
2 baz 0.33 False
5 baz 0.62 True
Create multiple aggregated columns
In [125]: rng = pd.date_range(start="2014-10-07", periods=10, freq="2min")
In [126]: ts = pd.Series(data=list(range(10)), index=rng)
In [127]: def MyCust(x):
.....: if len(x) > 2:
.....: return x[1] * 1.234
.....: return pd.NaT
.....:
In [128]: mhc = {"Mean": np.mean, "Max": np.max, "Custom": MyCust}
In [129]: ts.resample("5min").apply(mhc)
Out[129]:
Mean Max Custom
2014-10-07 00:00:00 1.0 2 1.234
2014-10-07 00:05:00 3.5 4 NaT
2014-10-07 00:10:00 6.0 7 7.404
2014-10-07 00:15:00 8.5 9 NaT
In [130]: ts
Out[130]:
2014-10-07 00:00:00 0
2014-10-07 00:02:00 1
2014-10-07 00:04:00 2
2014-10-07 00:06:00 3
2014-10-07 00:08:00 4
2014-10-07 00:10:00 5
2014-10-07 00:12:00 6
2014-10-07 00:14:00 7
2014-10-07 00:16:00 8
2014-10-07 00:18:00 9
Freq: 2T, dtype: int64
Create a value counts column and reassign back to the DataFrame
In [131]: df = pd.DataFrame(
.....: {"Color": "Red Red Red Blue".split(), "Value": [100, 150, 50, 50]}
.....: )
.....:
In [132]: df
Out[132]:
Color Value
0 Red 100
1 Red 150
2 Red 50
3 Blue 50
In [133]: df["Counts"] = df.groupby(["Color"]).transform(len)
In [134]: df
Out[134]:
Color Value Counts
0 Red 100 3
1 Red 150 3
2 Red 50 3
3 Blue 50 1
Shift groups of the values in a column based on the index
In [135]: df = pd.DataFrame(
.....: {"line_race": [10, 10, 8, 10, 10, 8], "beyer": [99, 102, 103, 103, 88, 100]},
.....: index=[
.....: "Last Gunfighter",
.....: "Last Gunfighter",
.....: "Last Gunfighter",
.....: "Paynter",
.....: "Paynter",
.....: "Paynter",
.....: ],
.....: )
.....:
In [136]: df
Out[136]:
line_race beyer
Last Gunfighter 10 99
Last Gunfighter 10 102
Last Gunfighter 8 103
Paynter 10 103
Paynter 10 88
Paynter 8 100
In [137]: df["beyer_shifted"] = df.groupby(level=0)["beyer"].shift(1)
In [138]: df
Out[138]:
line_race beyer beyer_shifted
Last Gunfighter 10 99 NaN
Last Gunfighter 10 102 99.0
Last Gunfighter 8 103 102.0
Paynter 10 103 NaN
Paynter 10 88 103.0
Paynter 8 100 88.0
Select row with maximum value from each group
In [139]: df = pd.DataFrame(
.....: {
.....: "host": ["other", "other", "that", "this", "this"],
.....: "service": ["mail", "web", "mail", "mail", "web"],
.....: "no": [1, 2, 1, 2, 1],
.....: }
.....: ).set_index(["host", "service"])
.....:
In [140]: mask = df.groupby(level=0).agg("idxmax")
In [141]: df_count = df.loc[mask["no"]].reset_index()
In [142]: df_count
Out[142]:
host service no
0 other web 2
1 that mail 1
2 this mail 2
Grouping like Python’s itertools.groupby
In [143]: df = pd.DataFrame([0, 1, 0, 1, 1, 1, 0, 1, 1], columns=["A"])
In [144]: df["A"].groupby((df["A"] != df["A"].shift()).cumsum()).groups
Out[144]: {1: [0], 2: [1], 3: [2], 4: [3, 4, 5], 5: [6], 6: [7, 8]}
In [145]: df["A"].groupby((df["A"] != df["A"].shift()).cumsum()).cumsum()
Out[145]:
0 0
1 1
2 0
3 1
4 2
5 3
6 0
7 1
8 2
Name: A, dtype: int64
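The 'Rolling Mean by Time Interval' recipe linked below can be sketched with an offset-based window, which spans a fixed amount of time rather than a fixed number of rows:

```python
import pandas as pd

ts = pd.Series(
    [1.0, 2.0, 3.0, 4.0],
    index=pd.to_datetime(["2014-01-01", "2014-01-02", "2014-01-05", "2014-01-10"]),
)

# A "2D" window covers the preceding two days, however many rows that is.
result = ts.rolling("2D").mean()
```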
Expanding data Alignment and to-date Rolling Computation window based on values instead of counts Rolling Mean by Time Interval Splitting Splitting a frame Create a list of dataframes, split using a delineation based on logic included in rows.
In [146]: df = pd.DataFrame(
.....: data={
.....: "Case": ["A", "A", "A", "B", "A", "A", "B", "A", "A"],
.....: "Data": np.random.randn(9),
.....: }
.....: )
.....:
In [147]: dfs = list(
.....: zip(
.....: *df.groupby(
.....: (1 * (df["Case"] == "B"))
.....: .cumsum()
.....: .rolling(window=3, min_periods=1)
.....: .median()
.....: )
.....: )
.....: )[-1]
.....:
In [148]: dfs[0]
Out[148]:
Case Data
0 A 0.276232
1 A -1.087401
2 A -0.673690
3 B 0.113648
In [149]: dfs[1]
Out[149]:
Case Data
4 A -1.478427
5 A 0.524988
6 B 0.404705
In [150]: dfs[2]
Out[150]:
Case Data
7 A 0.577046
8 A -1.715002
Pivot The Pivot docs. Partial sums and subtotals
In [151]: df = pd.DataFrame(
.....: data={
.....: "Province": ["ON", "QC", "BC", "AL", "AL", "MN", "ON"],
.....: "City": [
.....: "Toronto",
.....: "Montreal",
.....: "Vancouver",
.....: "Calgary",
.....: "Edmonton",
.....: "Winnipeg",
.....: "Windsor",
.....: ],
.....: "Sales": [13, 6, 16, 8, 4, 3, 1],
.....: }
.....: )
.....:
In [152]: table = pd.pivot_table(
.....: df,
.....: values=["Sales"],
.....: index=["Province"],
.....: columns=["City"],
.....: aggfunc=np.sum,
.....: margins=True,
.....: )
.....:
In [153]: table.stack("City")
Out[153]:
Sales
Province City
AL All 12.0
Calgary 8.0
Edmonton 4.0
BC All 16.0
Vancouver 16.0
... ...
All Montreal 6.0
Toronto 13.0
Vancouver 16.0
Windsor 1.0
Winnipeg 3.0
[20 rows x 1 columns]
Frequency table like plyr in R
In [154]: grades = [48, 99, 75, 80, 42, 80, 72, 68, 36, 78]
In [155]: df = pd.DataFrame(
.....: {
.....: "ID": ["x%d" % r for r in range(10)],
.....: "Gender": ["F", "M", "F", "M", "F", "M", "F", "M", "M", "M"],
.....: "ExamYear": [
.....: "2007",
.....: "2007",
.....: "2007",
.....: "2008",
.....: "2008",
.....: "2008",
.....: "2008",
.....: "2009",
.....: "2009",
.....: "2009",
.....: ],
.....: "Class": [
.....: "algebra",
.....: "stats",
.....: "bio",
.....: "algebra",
.....: "algebra",
.....: "stats",
.....: "stats",
.....: "algebra",
.....: "bio",
.....: "bio",
.....: ],
.....: "Participated": [
.....: "yes",
.....: "yes",
.....: "yes",
.....: "yes",
.....: "no",
.....: "yes",
.....: "yes",
.....: "yes",
.....: "yes",
.....: "yes",
.....: ],
.....: "Passed": ["yes" if x > 50 else "no" for x in grades],
.....: "Employed": [
.....: True,
.....: True,
.....: True,
.....: False,
.....: False,
.....: False,
.....: False,
.....: True,
.....: True,
.....: False,
.....: ],
.....: "Grade": grades,
.....: }
.....: )
.....:
In [156]: df.groupby("ExamYear").agg(
.....: {
.....: "Participated": lambda x: x.value_counts()["yes"],
.....: "Passed": lambda x: sum(x == "yes"),
.....: "Employed": lambda x: sum(x),
.....: "Grade": lambda x: sum(x) / len(x),
.....: }
.....: )
.....:
Out[156]:
Participated Passed Employed Grade
ExamYear
2007 3 2 3 74.000000
2008 3 3 0 68.500000
2009 3 2 2 60.666667
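pd.crosstab offers a more direct route to such frequency tables; a minimal sketch on a smaller, made-up frame:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "ExamYear": ["2007", "2007", "2008", "2008"],
        "Passed": ["yes", "no", "yes", "yes"],
    }
)

# Rows are years, columns are outcomes, cells are counts.
table = pd.crosstab(df["ExamYear"], df["Passed"])
```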
Plot pandas DataFrame with year over year data To create year and month cross tabulation:
In [157]: df = pd.DataFrame(
.....: {"value": np.random.randn(36)},
.....: index=pd.date_range("2011-01-01", freq="M", periods=36),
.....: )
.....:
In [158]: pd.pivot_table(
.....: df, index=df.index.month, columns=df.index.year, values="value", aggfunc="sum"
.....: )
.....:
Out[158]:
2011 2012 2013
1 -1.039268 -0.968914 2.565646
2 -0.370647 -1.294524 1.431256
3 -1.157892 0.413738 1.340309
4 -1.344312 0.276662 -1.170299
5 0.844885 -0.472035 -0.226169
6 1.075770 -0.013960 0.410835
7 -0.109050 -0.362543 0.813850
8 1.643563 -0.006154 0.132003
9 -1.469388 -0.923061 -0.827317
10 0.357021 0.895717 -0.076467
11 -0.674600 0.805244 -1.187678
12 -1.776904 -1.206412 1.130127
Apply Rolling apply to organize - Turning embedded lists into a MultiIndex frame
In [159]: df = pd.DataFrame(
.....: data={
.....: "A": [[2, 4, 8, 16], [100, 200], [10, 20, 30]],
.....: "B": [["a", "b", "c"], ["jj", "kk"], ["ccc"]],
.....: },
.....: index=["I", "II", "III"],
.....: )
.....:
In [160]: def SeriesFromSubList(aList):
.....: return pd.Series(aList)
.....:
In [161]: df_orgz = pd.concat(
.....: {ind: row.apply(SeriesFromSubList) for ind, row in df.iterrows()}
.....: )
.....:
In [162]: df_orgz
Out[162]:
0 1 2 3
I A 2 4 8 16.0
B a b c NaN
II A 100 200 NaN NaN
B jj kk NaN NaN
III A 10 20.0 30.0 NaN
B ccc NaN NaN NaN
Rolling apply with a DataFrame returning a Series Rolling Apply to multiple columns where function calculates a Series before a Scalar from the Series is returned
In [163]: df = pd.DataFrame(
.....: data=np.random.randn(2000, 2) / 10000,
.....: index=pd.date_range("2001-01-01", periods=2000),
.....: columns=["A", "B"],
.....: )
.....:
In [164]: df
Out[164]:
A B
2001-01-01 -0.000144 -0.000141
2001-01-02 0.000161 0.000102
2001-01-03 0.000057 0.000088
2001-01-04 -0.000221 0.000097
2001-01-05 -0.000201 -0.000041
... ... ...
2006-06-19 0.000040 -0.000235
2006-06-20 -0.000123 -0.000021
2006-06-21 -0.000113 0.000114
2006-06-22 0.000136 0.000109
2006-06-23 0.000027 0.000030
[2000 rows x 2 columns]
In [165]: def gm(df, const):
.....: v = ((((df["A"] + df["B"]) + 1).cumprod()) - 1) * const
.....: return v.iloc[-1]
.....:
In [166]: s = pd.Series(
.....: {
.....: df.index[i]: gm(df.iloc[i: min(i + 51, len(df) - 1)], 5)
.....: for i in range(len(df) - 50)
.....: }
.....: )
.....:
In [167]: s
Out[167]:
2001-01-01 0.000930
2001-01-02 0.002615
2001-01-03 0.001281
2001-01-04 0.001117
2001-01-05 0.002772
...
2006-04-30 0.003296
2006-05-01 0.002629
2006-05-02 0.002081
2006-05-03 0.004247
2006-05-04 0.003928
Length: 1950, dtype: float64
Rolling apply with a DataFrame returning a Scalar Rolling Apply to multiple columns where function returns a Scalar (Volume Weighted Average Price)
In [168]: rng = pd.date_range(start="2014-01-01", periods=100)
In [169]: df = pd.DataFrame(
.....: {
.....: "Open": np.random.randn(len(rng)),
.....: "Close": np.random.randn(len(rng)),
.....: "Volume": np.random.randint(100, 2000, len(rng)),
.....: },
.....: index=rng,
.....: )
.....:
In [170]: df
Out[170]:
Open Close Volume
2014-01-01 -1.611353 -0.492885 1219
2014-01-02 -3.000951 0.445794 1054
2014-01-03 -0.138359 -0.076081 1381
2014-01-04 0.301568 1.198259 1253
2014-01-05 0.276381 -0.669831 1728
... ... ... ...
2014-04-06 -0.040338 0.937843 1188
2014-04-07 0.359661 -0.285908 1864
2014-04-08 0.060978 1.714814 941
2014-04-09 1.759055 -0.455942 1065
2014-04-10 0.138185 -1.147008 1453
[100 rows x 3 columns]
In [171]: def vwap(bars):
.....: return (bars.Close * bars.Volume).sum() / bars.Volume.sum()
.....:
In [172]: window = 5
In [173]: s = pd.concat(
.....: [
.....: (pd.Series(vwap(df.iloc[i: i + window]), index=[df.index[i + window]]))
.....: for i in range(len(df) - window)
.....: ]
.....: )
.....:
In [174]: s.round(2)
Out[174]:
2014-01-06 0.02
2014-01-07 0.11
2014-01-08 0.10
2014-01-09 0.07
2014-01-10 -0.29
...
2014-04-06 -0.63
2014-04-07 -0.02
2014-04-08 -0.03
2014-04-09 0.34
2014-04-10 0.29
Length: 95, dtype: float64
Timeseries
Between times
Using indexer between time
Constructing a datetime range that excludes weekends and includes only certain times
Vectorized Lookup
Aggregation and plotting time series
Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series (How to rearrange a Python pandas DataFrame?)
Dealing with duplicates when reindexing a timeseries to a specified frequency
Calculate the first day of the month for each entry in a DatetimeIndex:
In [175]: dates = pd.date_range("2000-01-01", periods=5)
In [176]: dates.to_period(freq="M").to_timestamp()
Out[176]:
DatetimeIndex(['2000-01-01', '2000-01-01', '2000-01-01', '2000-01-01',
'2000-01-01'],
dtype='datetime64[ns]', freq=None)
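The "Between times" recipe linked above can be sketched with DataFrame.between_time; the intraday frame here is illustrative:

```python
import numpy as np
import pandas as pd

# two full days of minute-frequency data (illustrative)
rng = pd.date_range("2014-01-01", periods=2880, freq="min")
ts = pd.DataFrame({"value": np.arange(len(rng))}, index=rng)

# keep only rows whose time of day falls between 09:00 and 09:30 (inclusive)
morning = ts.between_time("09:00", "09:30")
```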
Resampling
The Resample docs.
Using Grouper instead of TimeGrouper for time grouping of values
Time grouping with some missing values
Valid frequency arguments to Grouper
Timeseries Grouping using a MultiIndex
Using TimeGrouper and another grouping to create subgroups, then apply a custom function GH3791
Resampling with custom periods
Resample intraday frame without adding new days
Resample minute data
Resample with groupby
Merge
The Join docs.
Concatenate two dataframes with overlapping index (emulate R rbind)
In [177]: rng = pd.date_range("2000-01-01", periods=6)
In [178]: df1 = pd.DataFrame(np.random.randn(6, 3), index=rng, columns=["A", "B", "C"])
In [179]: df2 = df1.copy()
Depending on df construction, ignore_index may be needed
In [180]: df = pd.concat([df1, df2], ignore_index=True)
In [181]: df
Out[181]:
A B C
0 -0.870117 -0.479265 -0.790855
1 0.144817 1.726395 -0.464535
2 -0.821906 1.597605 0.187307
3 -0.128342 -1.511638 -0.289858
4 0.399194 -1.430030 -0.639760
5 1.115116 -2.012600 1.810662
6 -0.870117 -0.479265 -0.790855
7 0.144817 1.726395 -0.464535
8 -0.821906 1.597605 0.187307
9 -0.128342 -1.511638 -0.289858
10 0.399194 -1.430030 -0.639760
11 1.115116 -2.012600 1.810662
Self Join of a DataFrame GH2996
In [182]: df = pd.DataFrame(
.....: data={
.....: "Area": ["A"] * 5 + ["C"] * 2,
.....: "Bins": [110] * 2 + [160] * 3 + [40] * 2,
.....: "Test_0": [0, 1, 0, 1, 2, 0, 1],
.....: "Data": np.random.randn(7),
.....: }
.....: )
.....:
In [183]: df
Out[183]:
Area Bins Test_0 Data
0 A 110 0 -0.433937
1 A 110 1 -0.160552
2 A 160 0 0.744434
3 A 160 1 1.754213
4 A 160 2 0.000850
5 C 40 0 0.342243
6 C 40 1 1.070599
In [184]: df["Test_1"] = df["Test_0"] - 1
In [185]: pd.merge(
.....: df,
.....: df,
.....: left_on=["Bins", "Area", "Test_0"],
.....: right_on=["Bins", "Area", "Test_1"],
.....: suffixes=("_L", "_R"),
.....: )
.....:
Out[185]:
Area Bins Test_0_L Data_L Test_1_L Test_0_R Data_R Test_1_R
0 A 110 0 -0.433937 -1 1 -0.160552 0
1 A 160 0 0.744434 -1 1 1.754213 0
2 A 160 1 1.754213 0 2 0.000850 1
3 C 40 0 0.342243 -1 1 1.070599 0
How to set the index and join
KDB like asof join
Join with a criteria based on the values
Using searchsorted to merge based on values inside a range
Plotting
The Plotting docs.
Make Matplotlib look like R
Setting x-axis major and minor labels
Plotting multiple charts in an IPython Jupyter notebook
Creating a multi-line plot
Plotting a heatmap
Annotate a time-series plot
Annotate a time-series plot #2
Generate Embedded plots in excel files using Pandas, Vincent and xlsxwriter
Boxplot for each quartile of a stratifying variable
In [186]: df = pd.DataFrame(
.....: {
.....: "stratifying_var": np.random.uniform(0, 100, 20),
.....: "price": np.random.normal(100, 5, 20),
.....: }
.....: )
.....:
In [187]: df["quartiles"] = pd.qcut(
.....: df["stratifying_var"], 4, labels=["0-25%", "25-50%", "50-75%", "75-100%"]
.....: )
.....:
In [188]: df.boxplot(column="price", by="quartiles")
Out[188]: <AxesSubplot:title={'center':'price'}, xlabel='quartiles'>
Data in/out
Performance comparison of SQL vs HDF5
CSV
The CSV docs
read_csv in action
appending to a csv
Reading a csv chunk-by-chunk
Reading only certain rows of a csv chunk-by-chunk
Reading the first few lines of a frame
Reading a file that is compressed but not by gzip/bz2 (the native compressed formats which read_csv understands). This example shows a WinZipped file, but is a general application of opening the file within a context manager and using that handle to read. See here
Inferring dtypes from a file
Dealing with bad lines GH2886
Write a multi-row index CSV without writing duplicates
Reading multiple files to create a single DataFrame
The best way to combine multiple files into a single DataFrame is to read the individual frames one by one, put all of the individual frames into a list, and then combine the frames in the list using pd.concat():
In [189]: for i in range(3):
.....: data = pd.DataFrame(np.random.randn(10, 4))
.....: data.to_csv("file_{}.csv".format(i))
.....:
In [190]: files = ["file_0.csv", "file_1.csv", "file_2.csv"]
In [191]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
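The chunk-by-chunk reading linked above can be sketched with the chunksize argument of read_csv; the file name here is illustrative:

```python
import numpy as np
import pandas as pd

# write a small demo file (illustrative name)
pd.DataFrame(np.arange(20).reshape(10, 2), columns=["a", "b"]).to_csv(
    "chunk_demo.csv", index=False
)

# chunksize turns read_csv into an iterator of DataFrames
reader = pd.read_csv("chunk_demo.csv", chunksize=4)
pieces = [chunk for chunk in reader]
result = pd.concat(pieces, ignore_index=True)
```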
You can use the same approach to read all files matching a pattern. Here is an example using glob:
In [192]: import glob
In [193]: import os
In [194]: files = glob.glob("file_*.csv")
In [195]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
Finally, this strategy will work with the other pd.read_*(...) functions described in the io docs.
Parsing date components in multi-columns
Parsing date components in multiple columns is faster with a format:
In [196]: i = pd.date_range("20000101", periods=10000)
In [197]: df = pd.DataFrame({"year": i.year, "month": i.month, "day": i.day})
In [198]: df.head()
Out[198]:
year month day
0 2000 1 1
1 2000 1 2
2 2000 1 3
3 2000 1 4
4 2000 1 5
In [199]: %timeit pd.to_datetime(df.year * 10000 + df.month * 100 + df.day, format='%Y%m%d')
.....: ds = df.apply(lambda x: "%04d%02d%02d" % (x["year"], x["month"], x["day"]), axis=1)
.....: ds.head()
.....: %timeit pd.to_datetime(ds)
.....:
8.7 ms ± 765 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.1 ms ± 419 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Skip row between header and data
In [200]: data = """;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: date;Param1;Param2;Param4;Param5
.....: ;m²;°C;m²;m
.....: ;;;;
.....: 01.01.1990 00:00;1;1;2;3
.....: 01.01.1990 01:00;5;3;4;5
.....: 01.01.1990 02:00;9;5;6;7
.....: 01.01.1990 03:00;13;7;8;9
.....: 01.01.1990 04:00;17;9;10;11
.....: 01.01.1990 05:00;21;11;12;13
.....: """
.....:
Option 1: pass rows explicitly to skip rows
In [201]: from io import StringIO
In [202]: pd.read_csv(
.....: StringIO(data),
.....: sep=";",
.....: skiprows=[11, 12],
.....: index_col=0,
.....: parse_dates=True,
.....: header=10,
.....: )
.....:
Out[202]:
Param1 Param2 Param4 Param5
date
1990-01-01 00:00:00 1 1 2 3
1990-01-01 01:00:00 5 3 4 5
1990-01-01 02:00:00 9 5 6 7
1990-01-01 03:00:00 13 7 8 9
1990-01-01 04:00:00 17 9 10 11
1990-01-01 05:00:00 21 11 12 13
Option 2: read column names and then data
In [203]: pd.read_csv(StringIO(data), sep=";", header=10, nrows=10).columns
Out[203]: Index(['date', 'Param1', 'Param2', 'Param4', 'Param5'], dtype='object')
In [204]: columns = pd.read_csv(StringIO(data), sep=";", header=10, nrows=10).columns
In [205]: pd.read_csv(
.....: StringIO(data), sep=";", index_col=0, header=12, parse_dates=True, names=columns
.....: )
.....:
Out[205]:
Param1 Param2 Param4 Param5
date
1990-01-01 00:00:00 1 1 2 3
1990-01-01 01:00:00 5 3 4 5
1990-01-01 02:00:00 9 5 6 7
1990-01-01 03:00:00 13 7 8 9
1990-01-01 04:00:00 17 9 10 11
1990-01-01 05:00:00 21 11 12 13
SQL
The SQL docs
Reading from databases with SQL
Excel
The Excel docs
Reading from a filelike handle
Modifying formatting in XlsxWriter output
Loading only visible sheets GH19842#issuecomment-892150745
HTML
Reading HTML tables from a server that cannot handle the default request header
HDFStore
The HDFStores docs
Simple queries with a Timestamp Index
Managing heterogeneous data using a linked multiple table hierarchy GH3032
Merging on-disk tables with millions of rows
Avoiding inconsistencies when writing to a store from multiple processes/threads
De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from csv file and creating a store by chunks, with date parsing as well. See here
Creating a store chunk-by-chunk from a csv file
Appending to a store, while creating a unique index
Large Data work flows
Reading in a sequence of files, then providing a global unique index to a store while appending
Groupby on a HDFStore with low group density
Groupby on a HDFStore with high group density
Hierarchical queries on a HDFStore
Counting with a HDFStore
Troubleshoot HDFStore exceptions
Setting min_itemsize with strings
Using ptrepack to create a completely-sorted-index on a store
Storing Attributes to a group node
In [206]: df = pd.DataFrame(np.random.randn(8, 3))
In [207]: store = pd.HDFStore("test.h5")
In [208]: store.put("df", df)
# you can store an arbitrary Python object via pickle
In [209]: store.get_storer("df").attrs.my_attribute = {"A": 10}
In [210]: store.get_storer("df").attrs.my_attribute
Out[210]: {'A': 10}
You can create or load a HDFStore in-memory by passing the driver parameter to PyTables. Changes are only written to disk when the HDFStore is closed.
In [211]: store = pd.HDFStore("test.h5", "w", driver="H5FD_CORE")
In [212]: df = pd.DataFrame(np.random.randn(8, 3))
In [213]: store["test"] = df
# data is written to disk only when the store is closed:
In [214]: store.close()
Binary files
pandas readily accepts NumPy record arrays if you need to read in a binary file consisting of an array of C structs. For example, given this C program in a file called main.c, compiled with gcc main.c -std=gnu99 on a 64-bit machine,
#include <stdio.h>
#include <stdint.h>
typedef struct _Data
{
int32_t count;
double avg;
float scale;
} Data;
int main(int argc, const char *argv[])
{
size_t n = 10;
Data d[n];
for (int i = 0; i < n; ++i)
{
d[i].count = i;
d[i].avg = i + 1.0;
d[i].scale = (float) i + 2.0f;
}
FILE *file = fopen("binary.dat", "wb");
fwrite(&d, sizeof(Data), n, file);
fclose(file);
return 0;
}
the following Python code will read the binary file 'binary.dat' into a pandas DataFrame, where each element of the struct corresponds to a column in the frame:
names = "count", "avg", "scale"
# note that the offsets are larger than the size of the type because of
# struct padding
offsets = 0, 8, 16
formats = "i4", "f8", "f4"
dt = np.dtype({"names": names, "offsets": offsets, "formats": formats}, align=True)
df = pd.DataFrame(np.fromfile("binary.dat", dt))
Note: The offsets of the structure elements may be different depending on the architecture of the machine on which the file was created. Using a raw binary file format like this for general data storage is not recommended, as it is not cross platform. We recommend either HDF5 or parquet, both of which are supported by pandas’ IO facilities.
Computation
Numerical integration (sample-based) of a time series
Correlation
Often it’s useful to obtain the lower (or upper) triangular form of a correlation matrix calculated from DataFrame.corr(). This can be achieved by passing a boolean mask to where as follows:
In [215]: df = pd.DataFrame(np.random.random(size=(100, 5)))
In [216]: corr_mat = df.corr()
In [217]: mask = np.tril(np.ones_like(corr_mat, dtype=np.bool_), k=-1)
In [218]: corr_mat.where(mask)
Out[218]:
0 1 2 3 4
0 NaN NaN NaN NaN NaN
1 -0.079861 NaN NaN NaN NaN
2 -0.236573 0.183801 NaN NaN NaN
3 -0.013795 -0.051975 0.037235 NaN NaN
4 -0.031974 0.118342 -0.073499 -0.02063 NaN
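The upper triangular form mentioned above can be obtained the same way, with np.triu in place of np.tril:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.random(size=(100, 5)))
corr_mat = df.corr()

# k=1 excludes the diagonal, keeping strictly upper-triangular entries
mask = np.triu(np.ones_like(corr_mat, dtype=np.bool_), k=1)
upper = corr_mat.where(mask)
```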
The method argument within DataFrame.corr can accept a callable in addition to the named correlation types. Here we compute the distance correlation matrix for a DataFrame object.
In [219]: def distcorr(x, y):
.....: n = len(x)
.....: a = np.zeros(shape=(n, n))
.....: b = np.zeros(shape=(n, n))
.....: for i in range(n):
.....: for j in range(i + 1, n):
.....: a[i, j] = abs(x[i] - x[j])
.....: b[i, j] = abs(y[i] - y[j])
.....: a += a.T
.....: b += b.T
.....: a_bar = np.vstack([np.nanmean(a, axis=0)] * n)
.....: b_bar = np.vstack([np.nanmean(b, axis=0)] * n)
.....: A = a - a_bar - a_bar.T + np.full(shape=(n, n), fill_value=a_bar.mean())
.....: B = b - b_bar - b_bar.T + np.full(shape=(n, n), fill_value=b_bar.mean())
.....: cov_ab = np.sqrt(np.nansum(A * B)) / n
.....: std_a = np.sqrt(np.sqrt(np.nansum(A ** 2)) / n)
.....: std_b = np.sqrt(np.sqrt(np.nansum(B ** 2)) / n)
.....: return cov_ab / std_a / std_b
.....:
In [220]: df = pd.DataFrame(np.random.normal(size=(100, 3)))
In [221]: df.corr(method=distcorr)
Out[221]:
0 1 2
0 1.000000 0.197613 0.216328
1 0.197613 1.000000 0.208749
2 0.216328 0.208749 1.000000
Timedeltas
The Timedeltas docs.
Using timedeltas
In [222]: import datetime
In [223]: s = pd.Series(pd.date_range("2012-1-1", periods=3, freq="D"))
In [224]: s - s.max()
Out[224]:
0 -2 days
1 -1 days
2 0 days
dtype: timedelta64[ns]
In [225]: s.max() - s
Out[225]:
0 2 days
1 1 days
2 0 days
dtype: timedelta64[ns]
In [226]: s - datetime.datetime(2011, 1, 1, 3, 5)
Out[226]:
0 364 days 20:55:00
1 365 days 20:55:00
2 366 days 20:55:00
dtype: timedelta64[ns]
In [227]: s + datetime.timedelta(minutes=5)
Out[227]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
In [228]: datetime.datetime(2011, 1, 1, 3, 5) - s
Out[228]:
0 -365 days +03:05:00
1 -366 days +03:05:00
2 -367 days +03:05:00
dtype: timedelta64[ns]
In [229]: datetime.timedelta(minutes=5) + s
Out[229]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
Adding and subtracting deltas and dates
In [230]: deltas = pd.Series([datetime.timedelta(days=i) for i in range(3)])
In [231]: df = pd.DataFrame({"A": s, "B": deltas})
In [232]: df
Out[232]:
A B
0 2012-01-01 0 days
1 2012-01-02 1 days
2 2012-01-03 2 days
In [233]: df["New Dates"] = df["A"] + df["B"]
In [234]: df["Delta"] = df["A"] - df["New Dates"]
In [235]: df
Out[235]:
A B New Dates Delta
0 2012-01-01 0 days 2012-01-01 0 days
1 2012-01-02 1 days 2012-01-03 -1 days
2 2012-01-03 2 days 2012-01-05 -2 days
In [236]: df.dtypes
Out[236]:
A datetime64[ns]
B timedelta64[ns]
New Dates datetime64[ns]
Delta timedelta64[ns]
dtype: object
Another example
Values can be set to NaT using np.nan, similar to datetime:
In [237]: y = s - s.shift()
In [238]: y
Out[238]:
0 NaT
1 1 days
2 1 days
dtype: timedelta64[ns]
In [239]: y[1] = np.nan
In [240]: y
Out[240]:
0 NaT
1 NaT
2 1 days
dtype: timedelta64[ns]
Creating example data
To create a DataFrame from every combination of some given values, like R’s expand.grid() function, we can create a dict where the keys are column names and the values are lists of the data values (the helper below uses itertools.product, so itertools must be imported first):
In [241]: def expand_grid(data_dict):
.....: rows = itertools.product(*data_dict.values())
.....: return pd.DataFrame.from_records(rows, columns=data_dict.keys())
.....:
In [242]: df = expand_grid(
.....: {"height": [60, 70], "weight": [100, 140, 180], "sex": ["Male", "Female"]}
.....: )
.....:
In [243]: df
Out[243]:
height weight sex
0 60 100 Male
1 60 100 Female
2 60 140 Male
3 60 140 Female
4 60 180 Male
5 60 180 Female
6 70 100 Male
7 70 100 Female
8 70 140 Male
9 70 140 Female
10 70 180 Male
11 70 180 Female | pandas.user_guide.cookbook |
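An equivalent construction can be sketched with pd.MultiIndex.from_product, which yields the same combinations in the same order:

```python
import pandas as pd

values = {"height": [60, 70], "weight": [100, 140, 180], "sex": ["Male", "Female"]}

# the cartesian product of the value lists, flattened back into columns
idx = pd.MultiIndex.from_product(list(values.values()), names=list(values))
df = idx.to_frame(index=False)
```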
DataFrame
Constructor
DataFrame([data, index, columns, dtype, copy]) Two-dimensional, size-mutable, potentially heterogeneous tabular data.
Attributes and underlying data
Axes
DataFrame.index The index (row labels) of the DataFrame.
DataFrame.columns The column labels of the DataFrame.
DataFrame.dtypes Return the dtypes in the DataFrame.
DataFrame.info([verbose, buf, max_cols, ...]) Print a concise summary of a DataFrame.
DataFrame.select_dtypes([include, exclude]) Return a subset of the DataFrame's columns based on the column dtypes.
DataFrame.values Return a Numpy representation of the DataFrame.
DataFrame.axes Return a list representing the axes of the DataFrame.
DataFrame.ndim Return an int representing the number of axes / array dimensions.
DataFrame.size Return an int representing the number of elements in this object.
DataFrame.shape Return a tuple representing the dimensionality of the DataFrame.
DataFrame.memory_usage([index, deep]) Return the memory usage of each column in bytes.
DataFrame.empty Indicator whether Series/DataFrame is empty.
DataFrame.set_flags(*[, copy, ...]) Return a new object with updated flags.
Conversion
DataFrame.astype(dtype[, copy, errors]) Cast a pandas object to a specified dtype dtype.
DataFrame.convert_dtypes([infer_objects, ...]) Convert columns to best possible dtypes using dtypes supporting pd.NA.
DataFrame.infer_objects() Attempt to infer better dtypes for object columns.
DataFrame.copy([deep]) Make a copy of this object's indices and data.
DataFrame.bool() Return the bool of a single element Series or DataFrame.
Indexing, iteration
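A minimal sketch of a few of the indexing entries in this section (.loc, .iloc, query):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]}, index=["x", "y", "z"])

by_label = df.loc["y", "b"]     # label-based access
by_position = df.iloc[0, 1]     # integer-position access
filtered = df.query("a > 1")    # boolean expression over columns
```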
DataFrame.head([n]) Return the first n rows.
DataFrame.at Access a single value for a row/column label pair.
DataFrame.iat Access a single value for a row/column pair by integer position.
DataFrame.loc Access a group of rows and columns by label(s) or a boolean array.
DataFrame.iloc Purely integer-location based indexing for selection by position.
DataFrame.insert(loc, column, value[, ...]) Insert column into DataFrame at specified location.
DataFrame.__iter__() Iterate over info axis.
DataFrame.items() Iterate over (column name, Series) pairs.
DataFrame.iteritems() Iterate over (column name, Series) pairs.
DataFrame.keys() Get the 'info axis' (see Indexing for more).
DataFrame.iterrows() Iterate over DataFrame rows as (index, Series) pairs.
DataFrame.itertuples([index, name]) Iterate over DataFrame rows as namedtuples.
DataFrame.lookup(row_labels, col_labels) (DEPRECATED) Label-based "fancy indexing" function for DataFrame.
DataFrame.pop(item) Return item and drop from frame.
DataFrame.tail([n]) Return the last n rows.
DataFrame.xs(key[, axis, level, drop_level]) Return cross-section from the Series/DataFrame.
DataFrame.get(key[, default]) Get item from object for given key (ex: DataFrame column).
DataFrame.isin(values) Whether each element in the DataFrame is contained in values.
DataFrame.where(cond[, other, inplace, ...]) Replace values where the condition is False.
DataFrame.mask(cond[, other, inplace, axis, ...]) Replace values where the condition is True.
DataFrame.query(expr[, inplace]) Query the columns of a DataFrame with a boolean expression.
For more information on .at, .iat, .loc, and .iloc, see the indexing documentation.
Binary operator functions
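A minimal sketch of the fill_value behavior shared by the arithmetic methods in this section:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"a": [1.0, 2.0]}, index=[0, 1])
df2 = pd.DataFrame({"a": [10.0]}, index=[0])

plain = df1.add(df2)                 # unmatched labels produce NaN
filled = df1.add(df2, fill_value=0)  # the missing side is treated as 0
```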
DataFrame.add(other[, axis, level, fill_value]) Get Addition of dataframe and other, element-wise (binary operator add).
DataFrame.sub(other[, axis, level, fill_value]) Get Subtraction of dataframe and other, element-wise (binary operator sub).
DataFrame.mul(other[, axis, level, fill_value]) Get Multiplication of dataframe and other, element-wise (binary operator mul).
DataFrame.div(other[, axis, level, fill_value]) Get Floating division of dataframe and other, element-wise (binary operator truediv).
DataFrame.truediv(other[, axis, level, ...]) Get Floating division of dataframe and other, element-wise (binary operator truediv).
DataFrame.floordiv(other[, axis, level, ...]) Get Integer division of dataframe and other, element-wise (binary operator floordiv).
DataFrame.mod(other[, axis, level, fill_value]) Get Modulo of dataframe and other, element-wise (binary operator mod).
DataFrame.pow(other[, axis, level, fill_value]) Get Exponential power of dataframe and other, element-wise (binary operator pow).
DataFrame.dot(other) Compute the matrix multiplication between the DataFrame and other.
DataFrame.radd(other[, axis, level, fill_value]) Get Addition of dataframe and other, element-wise (binary operator radd).
DataFrame.rsub(other[, axis, level, fill_value]) Get Subtraction of dataframe and other, element-wise (binary operator rsub).
DataFrame.rmul(other[, axis, level, fill_value]) Get Multiplication of dataframe and other, element-wise (binary operator rmul).
DataFrame.rdiv(other[, axis, level, fill_value]) Get Floating division of dataframe and other, element-wise (binary operator rtruediv).
DataFrame.rtruediv(other[, axis, level, ...]) Get Floating division of dataframe and other, element-wise (binary operator rtruediv).
DataFrame.rfloordiv(other[, axis, level, ...]) Get Integer division of dataframe and other, element-wise (binary operator rfloordiv).
DataFrame.rmod(other[, axis, level, fill_value]) Get Modulo of dataframe and other, element-wise (binary operator rmod).
DataFrame.rpow(other[, axis, level, fill_value]) Get Exponential power of dataframe and other, element-wise (binary operator rpow).
DataFrame.lt(other[, axis, level]) Get Less than of dataframe and other, element-wise (binary operator lt).
DataFrame.gt(other[, axis, level]) Get Greater than of dataframe and other, element-wise (binary operator gt).
DataFrame.le(other[, axis, level]) Get Less than or equal to of dataframe and other, element-wise (binary operator le).
DataFrame.ge(other[, axis, level]) Get Greater than or equal to of dataframe and other, element-wise (binary operator ge).
DataFrame.ne(other[, axis, level]) Get Not equal to of dataframe and other, element-wise (binary operator ne).
DataFrame.eq(other[, axis, level]) Get Equal to of dataframe and other, element-wise (binary operator eq).
DataFrame.combine(other, func[, fill_value, ...]) Perform column-wise combine with another DataFrame.
DataFrame.combine_first(other) Update null elements with value in the same location in other.
Function application, GroupBy & window
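A minimal sketch contrasting groupby aggregation, transform, and a rolling window from this section:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [1.0, 3.0, 10.0, 30.0]})

group_means = df.groupby("key")["val"].mean()                      # one value per group
demeaned = df["val"] - df.groupby("key")["val"].transform("mean")  # same length as input
rolled = df["val"].rolling(window=2).sum()                         # sliding-window sum
```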
DataFrame.apply(func[, axis, raw, ...]) Apply a function along an axis of the DataFrame.
DataFrame.applymap(func[, na_action]) Apply a function to a Dataframe elementwise.
DataFrame.pipe(func, *args, **kwargs) Apply chainable functions that expect Series or DataFrames.
DataFrame.agg([func, axis]) Aggregate using one or more operations over the specified axis.
DataFrame.aggregate([func, axis]) Aggregate using one or more operations over the specified axis.
DataFrame.transform(func[, axis]) Call func on self producing a DataFrame with the same axis shape as self.
DataFrame.groupby([by, axis, level, ...]) Group DataFrame using a mapper or by a Series of columns.
DataFrame.rolling(window[, min_periods, ...]) Provide rolling window calculations.
DataFrame.expanding([min_periods, center, ...]) Provide expanding window calculations.
DataFrame.ewm([com, span, halflife, alpha, ...]) Provide exponentially weighted (EW) calculations.
Computations / descriptive stats
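A minimal sketch of a few of the descriptive statistics in this section:

```python
import pandas as pd

df = pd.DataFrame({"x": [1.0, 2.0, 2.0, 4.0]})

total = df["x"].sum()
avg = df["x"].mean()
n_unique = df["x"].nunique()
ranks = df["x"].rank()   # ties share an average rank
```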
DataFrame.abs() Return a Series/DataFrame with absolute numeric value of each element.
DataFrame.all([axis, bool_only, skipna, level]) Return whether all elements are True, potentially over an axis.
DataFrame.any([axis, bool_only, skipna, level]) Return whether any element is True, potentially over an axis.
DataFrame.clip([lower, upper, axis, inplace]) Trim values at input threshold(s).
DataFrame.corr([method, min_periods]) Compute pairwise correlation of columns, excluding NA/null values.
DataFrame.corrwith(other[, axis, drop, method]) Compute pairwise correlation.
DataFrame.count([axis, level, numeric_only]) Count non-NA cells for each column or row.
DataFrame.cov([min_periods, ddof]) Compute pairwise covariance of columns, excluding NA/null values.
DataFrame.cummax([axis, skipna]) Return cumulative maximum over a DataFrame or Series axis.
DataFrame.cummin([axis, skipna]) Return cumulative minimum over a DataFrame or Series axis.
DataFrame.cumprod([axis, skipna]) Return cumulative product over a DataFrame or Series axis.
DataFrame.cumsum([axis, skipna]) Return cumulative sum over a DataFrame or Series axis.
DataFrame.describe([percentiles, include, ...]) Generate descriptive statistics.
DataFrame.diff([periods, axis]) First discrete difference of element.
DataFrame.eval(expr[, inplace]) Evaluate a string describing operations on DataFrame columns.
DataFrame.kurt([axis, skipna, level, ...]) Return unbiased kurtosis over requested axis.
DataFrame.kurtosis([axis, skipna, level, ...]) Return unbiased kurtosis over requested axis.
DataFrame.mad([axis, skipna, level]) Return the mean absolute deviation of the values over the requested axis.
DataFrame.max([axis, skipna, level, ...]) Return the maximum of the values over the requested axis.
DataFrame.mean([axis, skipna, level, ...]) Return the mean of the values over the requested axis.
DataFrame.median([axis, skipna, level, ...]) Return the median of the values over the requested axis.
DataFrame.min([axis, skipna, level, ...]) Return the minimum of the values over the requested axis.
DataFrame.mode([axis, numeric_only, dropna]) Get the mode(s) of each element along the selected axis.
DataFrame.pct_change([periods, fill_method, ...]) Percentage change between the current and a prior element.
DataFrame.prod([axis, skipna, level, ...]) Return the product of the values over the requested axis.
DataFrame.product([axis, skipna, level, ...]) Return the product of the values over the requested axis.
DataFrame.quantile([q, axis, numeric_only, ...]) Return values at the given quantile over requested axis.
DataFrame.rank([axis, method, numeric_only, ...]) Compute numerical data ranks (1 through n) along axis.
DataFrame.round([decimals]) Round a DataFrame to a variable number of decimal places.
DataFrame.sem([axis, skipna, level, ddof, ...]) Return unbiased standard error of the mean over requested axis.
DataFrame.skew([axis, skipna, level, ...]) Return unbiased skew over requested axis.
DataFrame.sum([axis, skipna, level, ...]) Return the sum of the values over the requested axis.
DataFrame.std([axis, skipna, level, ddof, ...]) Return sample standard deviation over requested axis.
DataFrame.var([axis, skipna, level, ddof, ...]) Return unbiased variance over requested axis.
DataFrame.nunique([axis, dropna]) Count number of distinct elements in specified axis.
DataFrame.value_counts([subset, normalize, ...]) Return a Series containing counts of unique rows in the DataFrame.
Reindexing / selection / label manipulation
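A minimal sketch of a few relabeling entries from this section:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0, 3.0]}, index=["x", "y", "z"])

expanded = df.reindex(["x", "y", "z", "w"])   # a new label yields a NaN row
dropped = df.drop(index="y")                  # remove a row by label
renamed = df.rename(columns={"a": "alpha"})   # relabel a column
```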
DataFrame.add_prefix(prefix) Prefix labels with string prefix.
DataFrame.add_suffix(suffix) Suffix labels with string suffix.
DataFrame.align(other[, join, axis, level, ...]) Align two objects on their axes with the specified join method.
DataFrame.at_time(time[, asof, axis]) Select values at particular time of day (e.g., 9:30AM).
DataFrame.between_time(start_time, end_time) Select values between particular times of the day (e.g., 9:00-9:30 AM).
DataFrame.drop([labels, axis, index, ...]) Drop specified labels from rows or columns.
DataFrame.drop_duplicates([subset, keep, ...]) Return DataFrame with duplicate rows removed.
DataFrame.duplicated([subset, keep]) Return boolean Series denoting duplicate rows.
DataFrame.equals(other) Test whether two objects contain the same elements.
DataFrame.filter([items, like, regex, axis]) Subset the dataframe rows or columns according to the specified index labels.
DataFrame.first(offset) Select initial periods of time series data based on a date offset.
DataFrame.head([n]) Return the first n rows.
DataFrame.idxmax([axis, skipna]) Return index of first occurrence of maximum over requested axis.
DataFrame.idxmin([axis, skipna]) Return index of first occurrence of minimum over requested axis.
DataFrame.last(offset) Select final periods of time series data based on a date offset.
DataFrame.reindex([labels, index, columns, ...]) Conform Series/DataFrame to new index with optional filling logic.
DataFrame.reindex_like(other[, method, ...]) Return an object with matching indices as other object.
DataFrame.rename([mapper, index, columns, ...]) Alter axes labels.
DataFrame.rename_axis([mapper, index, ...]) Set the name of the axis for the index or columns.
DataFrame.reset_index([level, drop, ...]) Reset the index, or a level of it.
DataFrame.sample([n, frac, replace, ...]) Return a random sample of items from an axis of object.
DataFrame.set_axis(labels[, axis, inplace]) Assign desired index to given axis.
DataFrame.set_index(keys[, drop, append, ...]) Set the DataFrame index using existing columns.
DataFrame.tail([n]) Return the last n rows.
DataFrame.take(indices[, axis, is_copy]) Return the elements in the given positional indices along an axis.
DataFrame.truncate([before, after, axis, copy]) Truncate a Series or DataFrame before and after some index value.
Missing data handling
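A minimal sketch of the fill strategies in this section:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, 4.0])

mask = s.isna()            # boolean mask of missing values
forward = s.ffill()        # propagate the last valid observation
constant = s.fillna(0.0)   # replace with a fixed value
linear = s.interpolate()   # linear interpolation between valid points
```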
DataFrame.backfill([axis, inplace, limit, ...]) Synonym for DataFrame.fillna() with method='bfill'.
DataFrame.bfill([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='bfill'.
DataFrame.dropna([axis, how, thresh, ...]) Remove missing values.
DataFrame.ffill([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='ffill'.
DataFrame.fillna([value, method, axis, ...]) Fill NA/NaN values using the specified method.
DataFrame.interpolate([method, axis, limit, ...]) Fill NaN values using an interpolation method.
DataFrame.isna() Detect missing values.
DataFrame.isnull() DataFrame.isnull is an alias for DataFrame.isna.
DataFrame.notna() Detect existing (non-missing) values.
DataFrame.notnull() DataFrame.notnull is an alias for DataFrame.notna.
DataFrame.pad([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='ffill'.
DataFrame.replace([to_replace, value, ...]) Replace values given in to_replace with value.
Reshaping, sorting, transposing
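A minimal round trip between melt and pivot from this section:

```python
import pandas as pd

wide = pd.DataFrame({"id": [1, 2], "x": [10, 20], "y": [30, 40]})

long = wide.melt(id_vars="id", value_vars=["x", "y"])              # wide -> long
back = long.pivot(index="id", columns="variable", values="value")  # long -> wide
```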
DataFrame.droplevel(level[, axis]) Return Series/DataFrame with requested index / column level(s) removed.
DataFrame.pivot([index, columns, values]) Return reshaped DataFrame organized by given index / column values.
DataFrame.pivot_table([values, index, ...]) Create a spreadsheet-style pivot table as a DataFrame.
DataFrame.reorder_levels(order[, axis]) Rearrange index levels using input order.
DataFrame.sort_values(by[, axis, ascending, ...]) Sort by the values along either axis.
DataFrame.sort_index([axis, level, ...]) Sort object by labels (along an axis).
DataFrame.nlargest(n, columns[, keep]) Return the first n rows ordered by columns in descending order.
DataFrame.nsmallest(n, columns[, keep]) Return the first n rows ordered by columns in ascending order.
DataFrame.swaplevel([i, j, axis]) Swap levels i and j in a MultiIndex.
DataFrame.stack([level, dropna]) Stack the prescribed level(s) from columns to index.
DataFrame.unstack([level, fill_value]) Pivot a level of the (necessarily hierarchical) index labels.
DataFrame.swapaxes(axis1, axis2[, copy]) Interchange axes and swap values axes appropriately.
DataFrame.melt([id_vars, value_vars, ...]) Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
DataFrame.explode(column[, ignore_index]) Transform each element of a list-like to a row, replicating index values.
DataFrame.squeeze([axis]) Squeeze 1 dimensional axis objects into scalars.
DataFrame.to_xarray() Return an xarray object from the pandas object.
DataFrame.T The transpose of the DataFrame.
DataFrame.transpose(*args[, copy]) Transpose index and columns.
Combining / comparing / joining / merging
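A minimal sketch of merge with inner versus outer joins from this section:

```python
import pandas as pd

left = pd.DataFrame({"key": ["a", "b"], "lval": [1, 2]})
right = pd.DataFrame({"key": ["b", "c"], "rval": [3, 4]})

inner = left.merge(right, on="key")               # only matching keys survive
outer = left.merge(right, on="key", how="outer")  # all keys, NaN where absent
```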
DataFrame.append(other[, ignore_index, ...]) Append rows of other to the end of caller, returning a new object.
DataFrame.assign(**kwargs) Assign new columns to a DataFrame.
DataFrame.compare(other[, align_axis, ...]) Compare to another DataFrame and show the differences.
DataFrame.join(other[, on, how, lsuffix, ...]) Join columns of another DataFrame.
DataFrame.merge(right[, how, on, left_on, ...]) Merge DataFrame or named Series objects with a database-style join.
DataFrame.update(other[, join, overwrite, ...]) Modify in place using non-NA values from another DataFrame. Time Series-related
DataFrame.asfreq(freq[, method, how, ...]) Convert time series to specified frequency.
DataFrame.asof(where[, subset]) Return the last row(s) without any NaNs before where.
DataFrame.shift([periods, freq, axis, ...]) Shift index by desired number of periods with an optional time freq.
DataFrame.slice_shift([periods, axis]) (DEPRECATED) Equivalent to shift without copying data.
DataFrame.tshift([periods, freq, axis]) (DEPRECATED) Shift the time index, using the index's frequency if available.
DataFrame.first_valid_index() Return index for first non-NA value or None, if no NA value is found.
DataFrame.last_valid_index() Return index for last non-NA value or None, if no NA value is found.
DataFrame.resample(rule[, axis, closed, ...]) Resample time-series data.
DataFrame.to_period([freq, axis, copy]) Convert DataFrame from DatetimeIndex to PeriodIndex.
DataFrame.to_timestamp([freq, how, axis, copy]) Cast to DatetimeIndex of timestamps, at beginning of period.
DataFrame.tz_convert(tz[, axis, level, copy]) Convert tz-aware axis to target time zone.
DataFrame.tz_localize(tz[, axis, level, ...]) Localize tz-naive index of a Series or DataFrame to target time zone. Flags Flags refer to attributes of the pandas object. Properties of the dataset (like the date it was recorded, the URL it was accessed from, etc.) should be stored in DataFrame.attrs.
Flags(obj, *, allows_duplicate_labels) Flags that apply to pandas objects. Metadata DataFrame.attrs is a dictionary for storing global metadata for this DataFrame. Warning DataFrame.attrs is considered experimental and may change without warning.
DataFrame.attrs Dictionary of global attributes of this dataset. Plotting DataFrame.plot is both a callable method and a namespace attribute for specific plotting methods of the form DataFrame.plot.<kind>.
DataFrame.plot([x, y, kind, ax, ...]) DataFrame plotting accessor and method
DataFrame.plot.area([x, y]) Draw a stacked area plot.
DataFrame.plot.bar([x, y]) Vertical bar plot.
DataFrame.plot.barh([x, y]) Make a horizontal bar plot.
DataFrame.plot.box([by]) Make a box plot of the DataFrame columns.
DataFrame.plot.density([bw_method, ind]) Generate Kernel Density Estimate plot using Gaussian kernels.
DataFrame.plot.hexbin(x, y[, C, ...]) Generate a hexagonal binning plot.
DataFrame.plot.hist([by, bins]) Draw one histogram of the DataFrame's columns.
DataFrame.plot.kde([bw_method, ind]) Generate Kernel Density Estimate plot using Gaussian kernels.
DataFrame.plot.line([x, y]) Plot Series or DataFrame as lines.
DataFrame.plot.pie(**kwargs) Generate a pie plot.
DataFrame.plot.scatter(x, y[, s, c]) Create a scatter plot with varying marker point size and color.
DataFrame.boxplot([column, by, ax, ...]) Make a box plot from DataFrame columns.
DataFrame.hist([column, by, grid, ...]) Make a histogram of the DataFrame's columns. Sparse accessor Sparse-dtype specific methods and attributes are provided under the DataFrame.sparse accessor.
DataFrame.sparse.density Ratio of non-sparse points to total (dense) data points.
DataFrame.sparse.from_spmatrix(data[, ...]) Create a new DataFrame from a scipy sparse matrix.
DataFrame.sparse.to_coo() Return the contents of the frame as a sparse SciPy COO matrix.
DataFrame.sparse.to_dense() Convert a DataFrame with sparse values to dense. Serialization / IO / conversion
DataFrame.from_dict(data[, orient, dtype, ...]) Construct DataFrame from dict of array-like or dicts.
DataFrame.from_records(data[, index, ...]) Convert structured or record ndarray to DataFrame.
DataFrame.to_parquet([path, engine, ...]) Write a DataFrame to the binary parquet format.
DataFrame.to_pickle(path[, compression, ...]) Pickle (serialize) object to file.
DataFrame.to_csv([path_or_buf, sep, na_rep, ...]) Write object to a comma-separated values (csv) file.
DataFrame.to_hdf(path_or_buf, key[, mode, ...]) Write the contained data to an HDF5 file using HDFStore.
DataFrame.to_sql(name, con[, schema, ...]) Write records stored in a DataFrame to a SQL database.
DataFrame.to_dict([orient, into]) Convert the DataFrame to a dictionary.
DataFrame.to_excel(excel_writer[, ...]) Write object to an Excel sheet.
DataFrame.to_json([path_or_buf, orient, ...]) Convert the object to a JSON string.
DataFrame.to_html([buf, columns, col_space, ...]) Render a DataFrame as an HTML table.
DataFrame.to_feather(path, **kwargs) Write a DataFrame to the binary Feather format.
DataFrame.to_latex([buf, columns, ...]) Render object to a LaTeX tabular, longtable, or nested table.
DataFrame.to_stata(path[, convert_dates, ...]) Export DataFrame object to Stata dta format.
DataFrame.to_gbq(destination_table[, ...]) Write a DataFrame to a Google BigQuery table.
DataFrame.to_records([index, column_dtypes, ...]) Convert DataFrame to a NumPy record array.
DataFrame.to_string([buf, columns, ...]) Render a DataFrame to a console-friendly tabular output.
DataFrame.to_clipboard([excel, sep]) Copy object to the system clipboard.
DataFrame.to_markdown([buf, mode, index, ...]) Print DataFrame in Markdown-friendly format.
DataFrame.style Returns a Styler object. | pandas.reference.frame |
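A few of the sorting and reshaping entries above can be combined in a short sketch (the column names here are illustrative, not from the reference itself):

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b", "c"], "score": [3, 1, 2]})

top2 = df.nlargest(2, "score")       # the two rows with the largest scores
ordered = df.sort_values("score")    # ascending by default
long_df = df.melt(id_vars="name")    # wide -> long: adds 'variable'/'value' columns
```

`nlargest(n, columns)` is equivalent to `sort_values(columns, ascending=False).head(n)` but can be faster for small `n`.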
Extensions These are primarily intended for library authors looking to extend pandas objects.
api.extensions.register_extension_dtype(cls) Register an ExtensionType with pandas as class decorator.
api.extensions.register_dataframe_accessor(name) Register a custom accessor on DataFrame objects.
api.extensions.register_series_accessor(name) Register a custom accessor on Series objects.
api.extensions.register_index_accessor(name) Register a custom accessor on Index objects.
api.extensions.ExtensionDtype() A custom data type, to be paired with an ExtensionArray.
api.extensions.ExtensionArray() Abstract base class for custom 1-D array types.
arrays.PandasArray(values[, copy]) A pandas ExtensionArray for NumPy data. Additionally, we have some utility methods for ensuring your object behaves correctly.
api.indexers.check_array_indexer(array, indexer) Check if indexer is a valid array indexer for array. The sentinel pandas.api.extensions.no_default is used as the default value in some methods. Use an is comparison to check if the user provides a non-default value. | pandas.reference.extensions |
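The accessor registration decorators above can be sketched as follows; the accessor name "geo" and the lat/lon columns are purely illustrative:

```python
import pandas as pd

@pd.api.extensions.register_dataframe_accessor("geo")  # "geo" is an illustrative name
class GeoAccessor:
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    @property
    def center(self):
        # Midpoint of hypothetical latitude/longitude columns.
        return (self._obj["lat"].mean(), self._obj["lon"].mean())

df = pd.DataFrame({"lat": [0.0, 10.0], "lon": [0.0, 20.0]})
df.geo.center  # (5.0, 10.0)
```

Once registered, every DataFrame exposes the accessor under that attribute name; the accessor object is constructed lazily on first access.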
GroupBy GroupBy objects are returned by groupby calls: pandas.DataFrame.groupby(), pandas.Series.groupby(), etc. Indexing, iteration
GroupBy.__iter__() Groupby iterator.
GroupBy.groups Dict {group name -> group labels}.
GroupBy.indices Dict {group name -> group indices}.
GroupBy.get_group(name[, obj]) Construct DataFrame from group with provided name.
Grouper(*args, **kwargs) A Grouper allows the user to specify a groupby instruction for an object. Function application
GroupBy.apply(func, *args, **kwargs) Apply function func group-wise and combine the results together.
GroupBy.agg(func, *args, **kwargs) Aggregate using one or more operations over the specified axis.
SeriesGroupBy.aggregate([func, engine, ...]) Aggregate using one or more operations over the specified axis.
DataFrameGroupBy.aggregate([func, engine, ...]) Aggregate using one or more operations over the specified axis.
SeriesGroupBy.transform(func, *args[, ...]) Call function producing a like-indexed Series on each group and return a Series having the same indexes as the original object filled with the transformed values.
DataFrameGroupBy.transform(func, *args[, ...]) Call function producing a like-indexed DataFrame on each group and return a DataFrame having the same indexes as the original object filled with the transformed values.
GroupBy.pipe(func, *args, **kwargs) Apply a function func with arguments to this GroupBy object and return the function's result. Computations / descriptive stats
GroupBy.all([skipna]) Return True if all values in the group are truthful, else False.
GroupBy.any([skipna]) Return True if any value in the group is truthful, else False.
GroupBy.bfill([limit]) Backward fill the values.
GroupBy.backfill([limit]) Backward fill the values.
GroupBy.count() Compute count of group, excluding missing values.
GroupBy.cumcount([ascending]) Number each item in each group from 0 to the length of that group - 1.
GroupBy.cummax([axis]) Cumulative max for each group.
GroupBy.cummin([axis]) Cumulative min for each group.
GroupBy.cumprod([axis]) Cumulative product for each group.
GroupBy.cumsum([axis]) Cumulative sum for each group.
GroupBy.ffill([limit]) Forward fill the values.
GroupBy.first([numeric_only, min_count]) Compute first of group values.
GroupBy.head([n]) Return first n rows of each group.
GroupBy.last([numeric_only, min_count]) Compute last of group values.
GroupBy.max([numeric_only, min_count]) Compute max of group values.
GroupBy.mean([numeric_only, engine, ...]) Compute mean of groups, excluding missing values.
GroupBy.median([numeric_only]) Compute median of groups, excluding missing values.
GroupBy.min([numeric_only, min_count]) Compute min of group values.
GroupBy.ngroup([ascending]) Number each group from 0 to the number of groups - 1.
GroupBy.nth(n[, dropna]) Take the nth row from each group if n is an int, otherwise a subset of rows.
GroupBy.ohlc() Compute open, high, low and close values of a group, excluding missing values.
GroupBy.pad([limit]) Forward fill the values.
GroupBy.prod([numeric_only, min_count]) Compute prod of group values.
GroupBy.rank([method, ascending, na_option, ...]) Provide the rank of values within each group.
GroupBy.pct_change([periods, fill_method, ...]) Calculate pct_change of each value to previous entry in group.
GroupBy.size() Compute group sizes.
GroupBy.sem([ddof]) Compute standard error of the mean of groups, excluding missing values.
GroupBy.std([ddof, engine, engine_kwargs]) Compute standard deviation of groups, excluding missing values.
GroupBy.sum([numeric_only, min_count, ...]) Compute sum of group values.
GroupBy.var([ddof, engine, engine_kwargs]) Compute variance of groups, excluding missing values.
GroupBy.tail([n]) Return last n rows of each group. The following methods are available in both SeriesGroupBy and DataFrameGroupBy objects, but may differ slightly, usually in that the DataFrameGroupBy version permits the specification of an axis argument, and often an argument indicating whether to restrict application to columns of a specific data type.
DataFrameGroupBy.all([skipna]) Return True if all values in the group are truthful, else False.
DataFrameGroupBy.any([skipna]) Return True if any value in the group is truthful, else False.
DataFrameGroupBy.backfill([limit]) Backward fill the values.
DataFrameGroupBy.bfill([limit]) Backward fill the values.
DataFrameGroupBy.corr Compute pairwise correlation of columns, excluding NA/null values.
DataFrameGroupBy.count() Compute count of group, excluding missing values.
DataFrameGroupBy.cov Compute pairwise covariance of columns, excluding NA/null values.
DataFrameGroupBy.cumcount([ascending]) Number each item in each group from 0 to the length of that group - 1.
DataFrameGroupBy.cummax([axis]) Cumulative max for each group.
DataFrameGroupBy.cummin([axis]) Cumulative min for each group.
DataFrameGroupBy.cumprod([axis]) Cumulative product for each group.
DataFrameGroupBy.cumsum([axis]) Cumulative sum for each group.
DataFrameGroupBy.describe(**kwargs) Generate descriptive statistics.
DataFrameGroupBy.diff First discrete difference of element.
DataFrameGroupBy.ffill([limit]) Forward fill the values.
DataFrameGroupBy.fillna Fill NA/NaN values using the specified method.
DataFrameGroupBy.filter(func[, dropna]) Return a copy of a DataFrame excluding filtered elements.
DataFrameGroupBy.hist Make a histogram of the DataFrame's columns.
DataFrameGroupBy.idxmax([axis, skipna]) Return index of first occurrence of maximum over requested axis.
DataFrameGroupBy.idxmin([axis, skipna]) Return index of first occurrence of minimum over requested axis.
DataFrameGroupBy.mad Return the mean absolute deviation of the values over the requested axis.
DataFrameGroupBy.nunique([dropna]) Return DataFrame with counts of unique elements in each position.
DataFrameGroupBy.pad([limit]) Forward fill the values.
DataFrameGroupBy.pct_change([periods, ...]) Calculate pct_change of each value to previous entry in group.
DataFrameGroupBy.plot Class implementing the .plot attribute for groupby objects.
DataFrameGroupBy.quantile([q, interpolation]) Return group values at the given quantile, a la numpy.percentile.
DataFrameGroupBy.rank([method, ascending, ...]) Provide the rank of values within each group.
DataFrameGroupBy.resample(rule, *args, **kwargs) Provide resampling when using a TimeGrouper.
DataFrameGroupBy.sample([n, frac, replace, ...]) Return a random sample of items from each group.
DataFrameGroupBy.shift([periods, freq, ...]) Shift each group by periods observations.
DataFrameGroupBy.size() Compute group sizes.
DataFrameGroupBy.skew Return unbiased skew over requested axis.
DataFrameGroupBy.take Return the elements in the given positional indices along an axis.
DataFrameGroupBy.tshift (DEPRECATED) Shift the time index, using the index's frequency if available.
DataFrameGroupBy.value_counts([subset, ...]) Return a Series or DataFrame containing counts of unique rows. The following methods are available only for SeriesGroupBy objects.
SeriesGroupBy.hist Draw histogram of the input series using matplotlib.
SeriesGroupBy.nlargest([n, keep]) Return the largest n elements.
SeriesGroupBy.nsmallest([n, keep]) Return the smallest n elements.
SeriesGroupBy.nunique([dropna]) Return number of unique elements in the group.
SeriesGroupBy.unique Return unique values of Series object.
SeriesGroupBy.value_counts([normalize, ...]) Return a Series containing counts of unique values.
SeriesGroupBy.is_monotonic_increasing Alias for is_monotonic.
SeriesGroupBy.is_monotonic_decreasing Return boolean if values in the object are monotonic_decreasing. The following methods are available only for DataFrameGroupBy objects.
DataFrameGroupBy.corrwith Compute pairwise correlation.
DataFrameGroupBy.boxplot([subplots, column, ...]) Make box plots from DataFrameGroupBy data. | pandas.reference.groupby |
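The aggregation, transformation, and filtering entries listed above can be sketched together (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b", "a"], "val": [1, 2, 10, 3]})
g = df.groupby("key")

sums = g["val"].agg("sum")             # per-group reduction: a -> 6, b -> 10
centered = g["val"].transform("mean")  # like-indexed result, one value per original row
order = g.cumcount()                   # position of each row within its group
big = g.filter(lambda grp: grp["val"].sum() > 9)  # keep only groups passing the predicate
```

Note the shape contrast: `agg` returns one row per group, while `transform` and `cumcount` return results aligned to the original index, and `filter` returns a subset of the original rows.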
Input/output Pickling
read_pickle(filepath_or_buffer[, ...]) Load pickled pandas object (or any object) from file.
DataFrame.to_pickle(path[, compression, ...]) Pickle (serialize) object to file. Flat file
read_table(filepath_or_buffer[, sep, ...]) Read general delimited file into DataFrame.
read_csv(filepath_or_buffer[, sep, ...]) Read a comma-separated values (csv) file into DataFrame.
DataFrame.to_csv([path_or_buf, sep, na_rep, ...]) Write object to a comma-separated values (csv) file.
read_fwf(filepath_or_buffer[, colspecs, ...]) Read a table of fixed-width formatted lines into DataFrame. Clipboard
read_clipboard([sep]) Read text from clipboard and pass to read_csv.
DataFrame.to_clipboard([excel, sep]) Copy object to the system clipboard. Excel
read_excel(io[, sheet_name, header, names, ...]) Read an Excel file into a pandas DataFrame.
DataFrame.to_excel(excel_writer[, ...]) Write object to an Excel sheet.
ExcelFile.parse([sheet_name, header, names, ...]) Parse specified sheet(s) into a DataFrame.
Styler.to_excel(excel_writer[, sheet_name, ...]) Write Styler to an Excel sheet.
ExcelWriter(path[, engine, date_format, ...]) Class for writing DataFrame objects into excel sheets. JSON
read_json([path_or_buf, orient, typ, dtype, ...]) Convert a JSON string to pandas object.
json_normalize(data[, record_path, meta, ...]) Normalize semi-structured JSON data into a flat table.
DataFrame.to_json([path_or_buf, orient, ...]) Convert the object to a JSON string.
build_table_schema(data[, index, ...]) Create a Table schema from data. HTML
read_html(io[, match, flavor, header, ...]) Read HTML tables into a list of DataFrame objects.
DataFrame.to_html([buf, columns, col_space, ...]) Render a DataFrame as an HTML table.
Styler.to_html([buf, table_uuid, ...]) Write Styler to a file, buffer or string in HTML-CSS format. XML
read_xml(path_or_buffer[, xpath, ...]) Read XML document into a DataFrame object.
DataFrame.to_xml([path_or_buffer, index, ...]) Render a DataFrame to an XML document. Latex
DataFrame.to_latex([buf, columns, ...]) Render object to a LaTeX tabular, longtable, or nested table.
Styler.to_latex([buf, column_format, ...]) Write Styler to a file, buffer or string in LaTeX format. HDFStore: PyTables (HDF5)
read_hdf(path_or_buf[, key, mode, errors, ...]) Read from the store, close it if we opened it.
HDFStore.put(key, value[, format, index, ...]) Store object in HDFStore.
HDFStore.append(key, value[, format, axes, ...]) Append to Table in file.
HDFStore.get(key) Retrieve pandas object stored in file.
HDFStore.select(key[, where, start, stop, ...]) Retrieve pandas object stored in file, optionally based on where criteria.
HDFStore.info() Print detailed information on the store.
HDFStore.keys([include]) Return a list of keys corresponding to objects stored in HDFStore.
HDFStore.groups() Return a list of all the top-level nodes.
HDFStore.walk([where]) Walk the pytables group hierarchy for pandas objects. Warning One can store a subclass of DataFrame or Series to HDF5, but the type of the subclass is lost upon storing. Feather
read_feather(path[, columns, use_threads, ...]) Load a feather-format object from the file path.
DataFrame.to_feather(path, **kwargs) Write a DataFrame to the binary Feather format. Parquet
read_parquet(path[, engine, columns, ...]) Load a parquet object from the file path, returning a DataFrame.
DataFrame.to_parquet([path, engine, ...]) Write a DataFrame to the binary parquet format. ORC
read_orc(path[, columns]) Load an ORC object from the file path, returning a DataFrame. SAS
read_sas(filepath_or_buffer[, format, ...]) Read SAS files stored as either XPORT or SAS7BDAT format files. SPSS
read_spss(path[, usecols, convert_categoricals]) Load an SPSS file from the file path, returning a DataFrame. SQL
read_sql_table(table_name, con[, schema, ...]) Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...]) Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...]) Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, schema, ...]) Write records stored in a DataFrame to a SQL database. Google BigQuery
read_gbq(query[, project_id, index_col, ...]) Load data from Google BigQuery. STATA
read_stata(filepath_or_buffer[, ...]) Read Stata file into DataFrame.
DataFrame.to_stata(path[, convert_dates, ...]) Export DataFrame object to Stata dta format.
StataReader.data_label Return data label of Stata file.
StataReader.value_labels() Return a dict associating each variable name with a dict that maps each value to its corresponding label.
StataReader.variable_labels() Return variable labels as a dict, associating each variable name with its corresponding label.
StataWriter.write_file() Export DataFrame object to Stata dta format. | pandas.reference.io |
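As a minimal sketch of the flat-file readers and writers above, a CSV round trip through an in-memory buffer (standing in for a file path):

```python
import io
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3.5, 4.5]})

buf = io.StringIO()          # in-memory text buffer instead of a file on disk
df.to_csv(buf, index=False)  # index=False avoids writing the row labels as a column
buf.seek(0)
roundtripped = pd.read_csv(buf)
```

Most of the `read_*` / `to_*` pairs in this section accept either a path or a file-like object in the same way.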
pandas.api.extensions.ExtensionArray class pandas.api.extensions.ExtensionArray [source]
Abstract base class for custom 1-D array types. pandas will recognize instances of this class as proper arrays with a custom type and will not attempt to coerce them to objects. They may be stored directly inside a DataFrame or Series. Notes The interface includes the following abstract methods that must be implemented by subclasses: _from_sequence _from_factorized __getitem__ __len__ __eq__ dtype nbytes isna take copy _concat_same_type A default repr displaying the type, (truncated) data, length, and dtype is provided. It can be customized or replaced by overriding: __repr__ : A default repr for the ExtensionArray. _formatter : Print scalars inside a Series or DataFrame. Some methods require casting the ExtensionArray to an ndarray of Python objects with self.astype(object), which may be expensive. When performance is a concern, we highly recommend overriding the following methods: fillna dropna unique factorize / _values_for_factorize argsort / _values_for_argsort searchsorted The remaining methods implemented on this class should be performant, as they only compose abstract methods. Still, a more efficient implementation may be available, and these methods can be overridden. One can implement methods to handle array reductions. _reduce One can implement methods to handle parsing from strings that will be used in methods such as pandas.io.parsers.read_csv. _from_sequence_of_strings This class does not inherit from 'abc.ABCMeta' for performance reasons. Methods and properties required by the interface raise pandas.errors.AbstractMethodError and no register method is provided for registering virtual subclasses. ExtensionArrays are limited to 1 dimension. They may be backed by none, one, or many NumPy arrays. For example, pandas.Categorical is an extension array backed by two arrays, one for codes and one for categories. An array of IPv6 addresses may be backed by a NumPy structured array with two fields, one for the lower 64 bits and one for the upper 64 bits.
Or they may be backed by some other storage type, like Python lists. Pandas makes no assumptions on how the data are stored, just that it can be converted to a NumPy array. The ExtensionArray interface does not impose any rules on how this data is stored. However, currently, the backing data cannot be stored in attributes called .values or ._values to ensure full compatibility with pandas internals. But other names such as .data, ._data, ._items, … can be freely used. If implementing NumPy's __array_ufunc__ interface, pandas expects that: you defer by returning NotImplemented when any Series are present in inputs (pandas will extract the arrays and call the ufunc again), and you define a _HANDLED_TYPES tuple as an attribute on the class (pandas inspects this to determine whether the ufunc is valid for the types present). See NumPy universal functions for more. By default, ExtensionArrays are not hashable. Immutable subclasses may override this behavior. Attributes
dtype An instance of 'ExtensionDtype'.
nbytes The number of bytes needed to store this object in memory.
ndim Extension Arrays are only allowed to be 1-dimensional.
shape Return a tuple of the array dimensions. Methods
argsort([ascending, kind, na_position]) Return the indices that would sort this array.
astype(dtype[, copy]) Cast to a NumPy array or ExtensionArray with 'dtype'.
copy() Return a copy of the array.
dropna() Return ExtensionArray without NA values.
factorize([na_sentinel]) Encode the extension array as an enumerated type.
fillna([value, method, limit]) Fill NA/NaN values using the specified method.
equals(other) Return if another array is equivalent to this array.
insert(loc, item) Insert an item at the given position.
isin(values) Pointwise comparison for set containment in the given values.
isna() A 1-D array indicating if each value is missing.
ravel([order]) Return a flattened view on this array.
repeat(repeats[, axis]) Repeat elements of a ExtensionArray.
searchsorted(value[, side, sorter]) Find indices where elements should be inserted to maintain order.
shift([periods, fill_value]) Shift values by desired number.
take(indices, *[, allow_fill, fill_value]) Take elements from an array.
tolist() Return a list of the values.
unique() Compute the ExtensionArray of unique values.
view([dtype]) Return a view on the array.
_concat_same_type(to_concat) Concatenate multiple arrays of this dtype.
_formatter([boxed]) Formatting function for scalar values.
_from_factorized(values, original) Reconstruct an ExtensionArray after factorization.
_from_sequence(scalars, *[, dtype, copy]) Construct a new ExtensionArray from a sequence of scalars.
_from_sequence_of_strings(strings, *[, ...]) Construct a new ExtensionArray from a sequence of strings.
_reduce(name, *[, skipna]) Return a scalar result of performing the reduction operation.
_values_for_argsort() Return values for sorting.
_values_for_factorize() Return an array and missing value suitable for factorization. | pandas.reference.api.pandas.api.extensions.extensionarray |
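As the notes above mention, pandas.Categorical is itself a concrete ExtensionArray, backed by an integer codes array and a categories index; a quick check:

```python
import pandas as pd

cat = pd.Categorical(["a", "b", "a"])

# Categorical satisfies the ExtensionArray interface described above.
isinstance(cat, pd.api.extensions.ExtensionArray)  # True

cat.codes       # integer codes indexing into cat.categories
cat.categories  # the unique category values, as an Index
```

This two-array layout is exactly the "backed by many NumPy arrays" case the notes describe.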
pandas.api.extensions.ExtensionArray._concat_same_type classmethod ExtensionArray._concat_same_type(to_concat) [source]
Concatenate multiple arrays of this dtype. Parameters
to_concat:sequence of this type
Returns
ExtensionArray | pandas.reference.api.pandas.api.extensions.extensionarray._concat_same_type |
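_concat_same_type is part of the subclassing interface rather than an end-user API; as a quick illustration using the built-in Int64 extension type:

```python
import pandas as pd

a = pd.array([1, 2], dtype="Int64")
b = pd.array([3], dtype="Int64")

# Classmethod on the concrete array type; all inputs must share that type.
combined = type(a)._concat_same_type([a, b])
```

pandas calls this internally during operations like pd.concat when the pieces share an extension dtype.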
pandas.api.extensions.ExtensionArray._formatter ExtensionArray._formatter(boxed=False)[source]
Formatting function for scalar values. This is used in the default ‘__repr__’. The returned formatting function receives instances of your scalar type. Parameters
boxed:bool, default False
An indicator of whether your array is being printed within a Series, DataFrame, or Index (True), or just by itself (False). This may be useful if you want scalar values to appear differently within a Series versus on its own (e.g. quoted or not). Returns
Callable[[Any], str]
A callable that gets instances of the scalar type and returns a string. By default, repr() is used when boxed=False and str() is used when boxed=True. | pandas.reference.api.pandas.api.extensions.extensionarray._formatter |
pandas.api.extensions.ExtensionArray._from_factorized classmethod ExtensionArray._from_factorized(values, original) [source]
Reconstruct an ExtensionArray after factorization. Parameters
values:ndarray
An integer ndarray with the factorized values.
original:ExtensionArray
The original ExtensionArray that factorize was called on. See also factorize
Top-level factorize method that dispatches here. ExtensionArray.factorize
Encode the extension array as an enumerated type. | pandas.reference.api.pandas.api.extensions.extensionarray._from_factorized |
pandas.api.extensions.ExtensionArray._from_sequence classmethod ExtensionArray._from_sequence(scalars, *, dtype=None, copy=False) [source]
Construct a new ExtensionArray from a sequence of scalars. Parameters
scalars:Sequence
Each element will be an instance of the scalar type for this array, cls.dtype.type or be converted into this type in this method.
dtype:dtype, optional
Construct for this particular dtype. This should be a Dtype compatible with the ExtensionArray.
copy:bool, default False
If True, copy the underlying data. Returns
ExtensionArray | pandas.reference.api.pandas.api.extensions.extensionarray._from_sequence |
pandas.api.extensions.ExtensionArray._from_sequence_of_strings classmethod ExtensionArray._from_sequence_of_strings(strings, *, dtype=None, copy=False) [source]
Construct a new ExtensionArray from a sequence of strings. Parameters
strings:Sequence
Each element will be an instance of the scalar type for this array, cls.dtype.type.
dtype:dtype, optional
Construct for this particular dtype. This should be a Dtype compatible with the ExtensionArray.
copy:bool, default False
If True, copy the underlying data. Returns
ExtensionArray | pandas.reference.api.pandas.api.extensions.extensionarray._from_sequence_of_strings |
pandas.api.extensions.ExtensionArray._reduce ExtensionArray._reduce(name, *, skipna=True, **kwargs)[source]
Return a scalar result of performing the reduction operation. Parameters
name:str
Name of the function, supported values are: { any, all, min, max, sum, mean, median, prod, std, var, sem, kurt, skew }.
skipna:bool, default True
If True, skip NaN values. **kwargs
Additional keyword arguments passed to the reduction function. Currently, ddof is the only supported kwarg. Returns
scalar
Raises
TypeError:subclass does not define reductions | pandas.reference.api.pandas.api.extensions.extensionarray._reduce |
pandas.api.extensions.ExtensionArray._values_for_argsort ExtensionArray._values_for_argsort()[source]
Return values for sorting. Returns
ndarray
The transformed values should maintain the ordering between values within the array. See also ExtensionArray.argsort
Return the indices that would sort this array. | pandas.reference.api.pandas.api.extensions.extensionarray._values_for_argsort |
pandas.api.extensions.ExtensionArray._values_for_factorize ExtensionArray._values_for_factorize()[source]
Return an array and missing value suitable for factorization. Returns
values:ndarray
An array suitable for factorization. This should maintain order and be a supported dtype (Float64, Int64, UInt64, String, Object). By default, the extension array is cast to object dtype.
na_value:object
The value in values to consider missing. This will be treated as NA in the factorization routines, so it will be coded as na_sentinel and not included in uniques. By default, np.nan is used. Notes The values returned by this method are also used in pandas.util.hash_pandas_object(). | pandas.reference.api.pandas.api.extensions.extensionarray._values_for_factorize |
pandas.api.extensions.ExtensionArray.argsort ExtensionArray.argsort(ascending=True, kind='quicksort', na_position='last', *args, **kwargs)[source]
Return the indices that would sort this array. Parameters
ascending:bool, default True
Whether the indices should result in an ascending or descending sort.
kind:{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional
Sorting algorithm. *args, **kwargs:
Passed through to numpy.argsort(). Returns
np.ndarray[np.intp]
Array of indices that sort self. If NaN values are contained, NaN values are placed at the end. See also numpy.argsort
Sorting implementation used internally. | pandas.reference.api.pandas.api.extensions.extensionarray.argsort |
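A short sketch of argsort on the built-in Int64 extension type, pairing it with take to materialize the sorted array:

```python
import pandas as pd

arr = pd.array([3, 1, 2], dtype="Int64")

idx = arr.argsort()        # indices that would sort the array
sorted_arr = arr.take(idx) # apply the ordering
```

As documented above, the returned indices are a plain NumPy integer array, even though the source is an ExtensionArray.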
pandas.api.extensions.ExtensionArray.astype ExtensionArray.astype(dtype, copy=True)[source]
Cast to a NumPy array or ExtensionArray with ‘dtype’. Parameters
dtype:str or dtype
Typecode or data-type to which the array is cast.
copy:bool, default True
Whether to copy the data, even if not necessary. If False, a copy is made only if the old dtype does not match the new dtype. Returns
array:np.ndarray or ExtensionArray
An ExtensionArray if dtype is ExtensionDtype, otherwise a NumPy ndarray with 'dtype' for its dtype.
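The return-type split described above can be sketched with the built-in Int64 type (the nullable "Float64" dtype assumed here exists in pandas 1.2+):

```python
import numpy as np
import pandas as pd

arr = pd.array([1, 2], dtype="Int64")

as_ext = arr.astype("Float64")  # extension dtype in -> ExtensionArray out
as_np = arr.astype("int64")     # NumPy dtype in -> plain ndarray out
```

Casting to a NumPy dtype only succeeds directly when the array holds no missing values that the target dtype cannot represent.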
pandas.api.extensions.ExtensionArray.copy ExtensionArray.copy()[source]
Return a copy of the array. Returns
ExtensionArray | pandas.reference.api.pandas.api.extensions.extensionarray.copy |
pandas.api.extensions.ExtensionArray.dropna ExtensionArray.dropna()[source]
Return ExtensionArray without NA values. Returns
valid:ExtensionArray | pandas.reference.api.pandas.api.extensions.extensionarray.dropna |
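A minimal dropna sketch on the built-in Int64 extension type:

```python
import pandas as pd

arr = pd.array([1, None, 3], dtype="Int64")
valid = arr.dropna()  # missing values removed; the original array is unchanged
```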
pandas.api.extensions.ExtensionArray.dtype propertyExtensionArray.dtype
An instance of ‘ExtensionDtype’. | pandas.reference.api.pandas.api.extensions.extensionarray.dtype |
pandas.api.extensions.ExtensionArray.equals ExtensionArray.equals(other)[source]
Return whether another array is equivalent to this array. Equivalent means that both arrays have the same shape and dtype, and all values compare equal. Missing values in the same location are considered equal (in contrast with normal equality). Parameters
other:ExtensionArray
Array to compare to this Array. Returns
boolean
Whether the arrays are equivalent. | pandas.reference.api.pandas.api.extensions.extensionarray.equals |
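The NA-aware equality described above, sketched with the built-in Int64 type:

```python
import pandas as pd

a = pd.array([1, None], dtype="Int64")
b = pd.array([1, None], dtype="Int64")

a.equals(b)  # True: missing values in matching positions compare equal here
```

This contrasts with elementwise `a == b`, where an NA position yields NA rather than True.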
pandas.api.extensions.ExtensionArray.factorize ExtensionArray.factorize(na_sentinel=-1) [source]
Encode the extension array as an enumerated type. Parameters
na_sentinel:int, default -1
Value to use in the codes array to indicate missing values. Returns
codes:ndarray
An integer NumPy array that’s an indexer into the original ExtensionArray.
uniques:ExtensionArray
An ExtensionArray containing the unique values of self. Note uniques will not contain an entry for the NA value of the ExtensionArray if there are any missing values present in self. See also factorize
Top-level factorize method that dispatches here. Notes pandas.factorize() offers a sort keyword as well. | pandas.reference.api.pandas.api.extensions.extensionarray.factorize |
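A factorize sketch on the built-in Int64 extension type, showing the codes/uniques pair and the na_sentinel behavior described above:

```python
import pandas as pd

arr = pd.array([1, 1, 2, None], dtype="Int64")

codes, uniques = arr.factorize()
# codes mark each element's position in uniques; the missing value
# gets the na_sentinel (-1 by default) and is excluded from uniques.
```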
pandas.api.extensions.ExtensionArray.fillna ExtensionArray.fillna(value=None, method=None, limit=None)[source]
Fill NA/NaN values using the specified method. Parameters
value:scalar, array-like
If a scalar value is passed it is used to fill all missing values. Alternatively, an array-like ‘value’ can be given. It’s expected that the array-like have the same length as ‘self’.
method:{‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None
Method to use for filling holes in reindexed Series. pad / ffill: propagate the last valid observation forward to the next valid one. backfill / bfill: use the NEXT valid observation to fill the gap.
limit:int, default None
If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Returns
ExtensionArray
With NA/NaN filled. | pandas.reference.api.pandas.api.extensions.extensionarray.fillna |
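A minimal sketch of scalar filling with the nullable Int64 array:

```python
import pandas as pd

arr = pd.array([1, pd.NA, pd.NA, 4], dtype="Int64")

# Fill every missing value with a scalar; a new array is returned
# and the original keeps its NA entries.
filled = arr.fillna(0)
print(filled.tolist())  # [1, 0, 0, 4]
```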
pandas.api.extensions.ExtensionArray.insert ExtensionArray.insert(loc, item)[source]
Insert an item at the given position. Parameters
loc:int
item:scalar-like
Returns
same type as self
Notes This method should be both type and dtype-preserving. If the item cannot be held in an array of this type/dtype, either ValueError or TypeError should be raised. The default implementation relies on _from_sequence to raise on invalid items. | pandas.reference.api.pandas.api.extensions.extensionarray.insert |
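A sketch of the type-preserving contract, again with the nullable Int64 array:

```python
import pandas as pd

arr = pd.array([1, 2, 4], dtype="Int64")

# Insert 3 before position 2; the result keeps the Int64 dtype.
out = arr.insert(2, 3)
print(out.tolist())  # [1, 2, 3, 4]

# An item the dtype cannot hold should raise ValueError or TypeError.
try:
    arr.insert(0, "not an int")
except (TypeError, ValueError) as exc:
    print(type(exc).__name__)
```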
pandas.api.extensions.ExtensionArray.isin ExtensionArray.isin(values)[source]
Pointwise comparison for set containment in the given values. Roughly equivalent to np.array([x in values for x in self]). Parameters
values:Sequence
Returns
np.ndarray[bool] | pandas.reference.api.pandas.api.extensions.extensionarray.isin |
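A minimal sketch; note that a missing value is not considered "in" a list of plain values:

```python
import pandas as pd

arr = pd.array([1, 2, pd.NA], dtype="Int64")
mask = arr.isin([1, 5])
print([bool(x) for x in mask])  # [True, False, False]
```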
pandas.api.extensions.ExtensionArray.isna ExtensionArray.isna()[source]
A 1-D array indicating if each value is missing. Returns
na_values:Union[np.ndarray, ExtensionArray]
In most cases, this should return a NumPy ndarray. For exceptional cases like SparseArray, where returning an ndarray would be expensive, an ExtensionArray may be returned. Notes If returning an ExtensionArray, then na_values._is_boolean should be True na_values should implement ExtensionArray._reduce() na_values.any and na_values.all should be implemented | pandas.reference.api.pandas.api.extensions.extensionarray.isna |
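For the nullable Int64 array, isna() returns the typical NumPy boolean mask:

```python
import pandas as pd

arr = pd.array([1, pd.NA, 3], dtype="Int64")
mask = arr.isna()
print(mask.tolist())  # [False, True, False]
```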
pandas.api.extensions.ExtensionArray.nbytes propertyExtensionArray.nbytes
The number of bytes needed to store this object in memory. | pandas.reference.api.pandas.api.extensions.extensionarray.nbytes |
pandas.api.extensions.ExtensionArray.ndim propertyExtensionArray.ndim
Extension Arrays are only allowed to be 1-dimensional. | pandas.reference.api.pandas.api.extensions.extensionarray.ndim |
pandas.api.extensions.ExtensionArray.ravel ExtensionArray.ravel(order='C')[source]
Return a flattened view on this array. Parameters
order:{None, ‘C’, ‘F’, ‘A’, ‘K’}, default ‘C’
Returns
ExtensionArray
Notes Because ExtensionArrays are 1D-only, this is a no-op. The "order" argument is ignored and exists only for compatibility with NumPy. | pandas.reference.api.pandas.api.extensions.extensionarray.ravel |
pandas.api.extensions.ExtensionArray.repeat ExtensionArray.repeat(repeats, axis=None)[source]
Repeat elements of an ExtensionArray. Returns a new ExtensionArray where each element of the current ExtensionArray is repeated consecutively a given number of times. Parameters
repeats:int or array of ints
The number of repetitions for each element. This should be a non-negative integer. Repeating 0 times will return an empty ExtensionArray.
axis:None
Must be None. Has no effect but is accepted for compatibility with numpy. Returns
repeated_array:ExtensionArray
Newly created ExtensionArray with repeated elements. See also Series.repeat
Equivalent function for Series. Index.repeat
Equivalent function for Index. numpy.repeat
Similar method for numpy.ndarray. ExtensionArray.take
Take arbitrary positions. Examples
>>> cat = pd.Categorical(['a', 'b', 'c'])
>>> cat
['a', 'b', 'c']
Categories (3, object): ['a', 'b', 'c']
>>> cat.repeat(2)
['a', 'a', 'b', 'b', 'c', 'c']
Categories (3, object): ['a', 'b', 'c']
>>> cat.repeat([1, 2, 3])
['a', 'b', 'b', 'c', 'c', 'c']
Categories (3, object): ['a', 'b', 'c'] | pandas.reference.api.pandas.api.extensions.extensionarray.repeat |
pandas.api.extensions.ExtensionArray.searchsorted ExtensionArray.searchsorted(value, side='left', sorter=None)[source]
Find indices where elements should be inserted to maintain order. Find the indices into a sorted array self (a) such that, if the corresponding elements in value were inserted before the indices, the order of self would be preserved. Assuming that self is sorted:
side returned index i satisfies
left self[i-1] < value <= self[i]
right self[i-1] <= value < self[i] Parameters
value:array-like, list or scalar
Value(s) to insert into self.
side:{‘left’, ‘right’}, optional
If ‘left’, the index of the first suitable location found is given. If ‘right’, return the last such index. If there is no suitable index, return either 0 or N (where N is the length of self).
sorter:1-D array-like, optional
Optional array of integer indices that sort array a into ascending order. They are typically the result of argsort. Returns
array of ints or int
If value is array-like, array of insertion points. If value is scalar, a single integer. See also numpy.searchsorted
Similar method from NumPy. | pandas.reference.api.pandas.api.extensions.extensionarray.searchsorted |
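A sketch of the side semantics on an already-sorted nullable Int64 array:

```python
import pandas as pd

arr = pd.array([1, 3, 3, 5], dtype="Int64")  # must already be sorted

print(int(arr.searchsorted(3)))                 # 1: before the existing 3s
print(int(arr.searchsorted(3, side="right")))   # 3: after the existing 3s
print(list(arr.searchsorted([0, 6])))           # [0, 4]: array in, array out
```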
pandas.api.extensions.ExtensionArray.shape propertyExtensionArray.shape
Return a tuple of the array dimensions. | pandas.reference.api.pandas.api.extensions.extensionarray.shape |
pandas.api.extensions.ExtensionArray.shift ExtensionArray.shift(periods=1, fill_value=None)[source]
Shift values by the desired number of periods. Newly introduced missing values are filled with self.dtype.na_value. Parameters
periods:int, default 1
The number of periods to shift. Negative values are allowed for shifting backwards.
fill_value:object, optional
The scalar value to use for newly introduced missing values. The default is self.dtype.na_value. Returns
ExtensionArray
Shifted. Notes If self is empty or periods is 0, a copy of self is returned. If periods > len(self), then an array of size len(self) is returned, with all values filled with self.dtype.na_value. | pandas.reference.api.pandas.api.extensions.extensionarray.shift |
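A minimal sketch with the nullable Int64 array, whose dtype.na_value is pd.NA:

```python
import pandas as pd

arr = pd.array([1, 2, 3], dtype="Int64")

print(arr.shift(1).tolist())                 # [<NA>, 1, 2]
print(arr.shift(-1, fill_value=0).tolist())  # [2, 3, 0]

# Shifting past the length gives an all-NA array of the same size.
print(bool(arr.shift(10).isna().all()))      # True
```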
pandas.api.extensions.ExtensionArray.take ExtensionArray.take(indices, *, allow_fill=False, fill_value=None)[source]
Take elements from an array. Parameters
indices:sequence of int or one-dimensional np.ndarray of int
Indices to be taken.
allow_fill:bool, default False
How to handle negative values in indices. False: negative values in indices indicate positional indices from the right (the default). This is similar to numpy.take(). True: negative values in indices indicate missing values. These values are set to fill_value. Any other negative values raise a ValueError.
fill_value:any, optional
Fill value to use for NA-indices when allow_fill is True. This may be None, in which case the default NA value for the type, self.dtype.na_value, is used. For many ExtensionArrays, there will be two representations of fill_value: a user-facing “boxed” scalar, and a low-level physical NA value. fill_value should be the user-facing version, and the implementation should handle translating that to the physical version for processing the take if necessary. Returns
ExtensionArray
Raises
IndexError
When the indices are out of bounds for the array. ValueError
When indices contains negative values other than -1 and allow_fill is True. See also numpy.take
Take elements from an array along an axis. api.extensions.take
Take elements from an array. Notes ExtensionArray.take is called by Series.__getitem__, .loc, iloc, when indices is a sequence of values. Additionally, it’s called by Series.reindex(), or any other method that causes realignment, with a fill_value. Examples Here’s an example implementation, which relies on casting the extension array to object dtype. This uses the helper method pandas.api.extensions.take().
def take(self, indices, allow_fill=False, fill_value=None):
    from pandas.core.algorithms import take

    # If the ExtensionArray is backed by an ndarray, then
    # just pass that here instead of coercing to object.
    data = self.astype(object)

    if allow_fill and fill_value is None:
        fill_value = self.dtype.na_value

    # fill value should always be translated from the scalar
    # type for the array, to the physical storage type for
    # the data, before passing to take.
    result = take(data, indices, fill_value=fill_value,
                  allow_fill=allow_fill)
    return self._from_sequence(result, dtype=self.dtype) | pandas.reference.api.pandas.api.extensions.extensionarray.take |
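From the caller's side, a sketch of the two allow_fill modes with the nullable Int64 array:

```python
import pandas as pd

arr = pd.array([10, 20, 30], dtype="Int64")

# Default: negative indices count from the end, as with numpy.take.
print(arr.take([0, -1]).tolist())  # [10, 30]

# allow_fill=True: -1 marks a missing value instead, filled with
# self.dtype.na_value (pd.NA here) since no fill_value was given.
out = arr.take([0, -1], allow_fill=True)
print(out.isna().tolist())  # [False, True]
```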
pandas.api.extensions.ExtensionArray.tolist ExtensionArray.tolist()[source]
Return a list of the values. These are each a scalar type, which is a Python scalar (for str, int, float) or a pandas scalar (for Timestamp/Timedelta/Interval/Period). Returns
list | pandas.reference.api.pandas.api.extensions.extensionarray.tolist |
pandas.api.extensions.ExtensionArray.unique ExtensionArray.unique()[source]
Compute the ExtensionArray of unique values. Returns
uniques:ExtensionArray | pandas.reference.api.pandas.api.extensions.extensionarray.unique |
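A sketch with the nullable Int64 array; uniques keep first-appearance order and NA is kept as a single entry:

```python
import pandas as pd

arr = pd.array([1, 1, 2, pd.NA, 2], dtype="Int64")
uniq = arr.unique()

print(len(uniq))          # 3
print(uniq[:2].tolist())  # [1, 2]
```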
pandas.api.extensions.ExtensionArray.view ExtensionArray.view(dtype=None)[source]
Return a view on the array. Parameters
dtype:str, np.dtype, or ExtensionDtype, optional
Default None. Returns
ExtensionArray or np.ndarray
A view on the ExtensionArray’s data. | pandas.reference.api.pandas.api.extensions.extensionarray.view |
pandas.api.extensions.ExtensionDtype classpandas.api.extensions.ExtensionDtype[source]
A custom data type, to be paired with an ExtensionArray. See also extensions.register_extension_dtype
Register an ExtensionType with pandas as class decorator. extensions.ExtensionArray
Abstract base class for custom 1-D array types. Notes The interface includes the following abstract methods that must be implemented by subclasses: type, name, construct_array_type. The following attributes and methods influence the behavior of the dtype in pandas operations: _is_numeric, _is_boolean, _get_common_dtype. The na_value class attribute can be used to set the default NA value for this type. numpy.nan is used by default. ExtensionDtypes are required to be hashable. The base class provides a default implementation, which relies on the _metadata class attribute. _metadata should be a tuple containing the strings that define your data type. For example, with PeriodDtype that's the freq attribute. If you have a parametrized dtype you should set the _metadata class property. Ideally, the attributes in _metadata will match the parameters to your ExtensionDtype.__init__ (if any). If any of the attributes in _metadata don't implement the standard __eq__ or __hash__, the default implementations here will not work. For interaction with Apache Arrow (pyarrow), a __from_arrow__ method can be implemented: this method receives a pyarrow Array or ChunkedArray as its only argument and is expected to return the appropriate pandas ExtensionArray for this dtype and the passed values:
class ExtensionDtype:
    def __from_arrow__(
        self, array: Union[pyarrow.Array, pyarrow.ChunkedArray]
    ) -> ExtensionArray:
        ...
This class does not inherit from ‘abc.ABCMeta’ for performance reasons. Methods and properties required by the interface raise pandas.errors.AbstractMethodError and no register method is provided for registering virtual subclasses. Attributes
kind A character code (one of 'biufcmMOSUV'), default 'O'
na_value Default NA value to use for this type.
name A string identifying the data type.
names Ordered list of field names, or None if there are no fields.
type The scalar type for the array, e.g. Methods
construct_array_type() Return the array type associated with this dtype.
construct_from_string(string) Construct this type from a string.
empty(shape) Construct an ExtensionArray of this dtype with the given shape.
is_dtype(dtype) Check if we match 'dtype'. | pandas.reference.api.pandas.api.extensions.extensiondtype |
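The interface above can be inspected on a concrete ExtensionDtype shipped with pandas, such as Int64Dtype:

```python
import pandas as pd

dtype = pd.Int64Dtype()

print(dtype.name)                       # 'Int64'
print(dtype.kind)                       # 'i'
print(dtype.na_value is pd.NA)          # True: the default NA scalar
print(pd.Int64Dtype.is_dtype("Int64"))  # True: matches the string alias
print(dtype.construct_array_type())     # the paired ExtensionArray subclass
```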
pandas.api.extensions.ExtensionDtype.construct_array_type classmethodExtensionDtype.construct_array_type()[source]
Return the array type associated with this dtype. Returns
type | pandas.reference.api.pandas.api.extensions.extensiondtype.construct_array_type |