import timeit

from reg import dispatch
from reg import DictCachingKeyLookup


def get_key_lookup(r):
    return DictCachingKeyLookup(r)


@dispatch(get_key_lookup=get_key_lookup)
def args0():
    raise NotImplementedError()


@dispatch("a", get_key_lookup=get_key_lookup)
def args1(a):
    raise NotImplementedError()


@dispatch("a", "b", get_key_lookup=get_key_lookup)
def args2(a, b):
    raise NotImplementedError()


@dispatch("a", "b", "c", get_key_lookup=get_key_lookup)
def args3(a, b, c):
    raise NotImplementedError()


@dispatch("a", "b", "c", "d", get_key_lookup=get_key_lookup)
def args4(a, b, c, d):
    raise NotImplementedError()


class Foo(object):
    pass


def myargs0():
    return "args0"


def myargs1(a):
    return "args1"


def myargs2(a, b):
    return "args2"


def myargs3(a, b, c):
    return "args3"


def myargs4(a, b, c, d):
    return "args4"


args0.register(myargs0)
args1.register(myargs1, a=Foo)
args2.register(myargs2, a=Foo, b=Foo)
args3.register(myargs3, a=Foo, b=Foo, c=Foo)
args4.register(myargs4, a=Foo, b=Foo, c=Foo, d=Foo)


def docall0():
    args0()


def docall1():
    args1(Foo())


def docall2():
    args2(Foo(), Foo())


def docall3():
    args3(Foo(), Foo(), Foo())


def docall4():
    args4(Foo(), Foo(), Foo(), Foo())


def plain_docall0():
    myargs0()


def plain_docall4():
    myargs4(Foo(), Foo(), Foo(), Foo())
plain_zero_time = timeit.timeit(
    "plain_docall0()", setup="from __main__ import plain_docall0"
)

print("\nPerformance test")
print("================")

print("dispatch 0 args")
print(
    "{0:.2f}".format(
        timeit.timeit("docall0()", setup="from __main__ import docall0")
        / plain_zero_time
    )
    + "x"
)

print("dispatch 1 args")
print(
    "{0:.2f}".format(
        timeit.timeit("docall1()", setup="from __main__ import docall1")
        / plain_zero_time
    )
    + "x"
)

print("dispatch 2 args")
print(
    "{0:.2f}".format(
        timeit.timeit("docall2()", setup="from __main__ import docall2")
        / plain_zero_time
    )
    + "x"
)

print("dispatch 3 args")
print(
    "{0:.2f}".format(
        timeit.timeit("docall3()", setup="from __main__ import docall3")
        / plain_zero_time
    )
    + "x"
)

print("dispatch 4 args")
print(
    "{0:.2f}".format(
        timeit.timeit("docall4()", setup="from __main__ import docall4")
        / plain_zero_time
    )
    + "x"
)

print("Plain func 0 args")
print("1.00x (base duration)")

print("Plain func 4 args")
print(
    "{0:.2f}".format(
        timeit.timeit(
            "plain_docall4()", setup="from __main__ import plain_docall4"
        )
        / plain_zero_time
    )
    + "x"
)
Using Reg
=========

.. testsetup:: *

    pass

Introduction
------------

Reg implements *predicate dispatch* and *multiple registries*:

Predicate dispatch
  We all know about `dynamic dispatch`_: when you call a method on an
  instance it is dispatched to the implementation in its class, and
  the class is determined from the first argument (``self``). This is
  known as *single dispatch*.

  Reg implements `multiple dispatch`_. This is a generalization of single
  dispatch: multiple dispatch allows you to dispatch on the class of
  *other* arguments besides the first one.

  Reg actually implements `predicate dispatch`_, which is a further
  generalization that allows dispatch on *arbitrary properties* of
  arguments, instead of just their class.

  The Morepath_ web framework is built with Reg. It uses Reg's
  predicate dispatch system. Its full power can be seen in its view
  lookup system.

  This document explains how to use Reg. Various specific patterns are
  documented in :doc:`patterns`.

  .. _`dynamic dispatch`: https://en.wikipedia.org/wiki/Dynamic_dispatch

  .. _`multiple dispatch`: http://en.wikipedia.org/wiki/Multiple_dispatch

  .. _`predicate dispatch`: https://en.wikipedia.org/wiki/Predicate_dispatch

Multiple registries
  Reg supports an advanced application architecture pattern where you
  have multiple predicate dispatch registries in the same
  runtime. This means that dispatch can behave differently depending
  on runtime context. You do this by using dispatch *methods* that you
  associate with a class that represents the application context. When
  you switch the context class, you switch the behavior.

  Morepath_ uses context-based dispatch to support its application
  composition system, where one application can be mounted into
  another.

  See :doc:`context` for this advanced application pattern.

Reg is designed with a caching layer that allows it to support these
features efficiently.

.. _`Morepath`: http://morepath.readthedocs.io

Example
-------
Let's examine a short example. First we use the :meth:`reg.dispatch`
decorator to define a function that dispatches based on the
class of its ``obj`` argument:

.. testcode::

    import reg

    @reg.dispatch('obj')
    def title(obj):
        return "we don't know the title"

We want this function to return the title of its ``obj`` argument.

Now we create a few example classes for which we want to be able to use
the ``title`` function we defined above.

.. testcode::

    class TitledReport(object):
        def __init__(self, title):
            self.title = title

    class LabeledReport(object):
        def __init__(self, label):
            self.label = label

If we call ``title`` with a ``TitledReport`` instance, we want it to return
its ``title`` attribute:

.. testcode::

    @title.register(obj=TitledReport)
    def titled_report_title(obj):
        return obj.title

The ``title.register`` decorator registers the function
``titled_report_title`` as an implementation of ``title`` when ``obj``
is an instance of ``TitledReport``.

There is also a more programmatic way to register implementations.
Take, for example, the implementation of ``title`` for a ``LabeledReport``
instance, where we want it to return its ``label`` attribute:

.. testcode::

    def labeled_report_title(obj):
        return obj.label

We can register it by explicitly invoking ``title.register``:

.. testcode::

    title.register(labeled_report_title, obj=LabeledReport)

Now the generic ``title`` function works on both titled and labeled
objects:

.. doctest::

    >>> titled = TitledReport('This is a report')
    >>> labeled = LabeledReport('This is also a report')
    >>> title(titled)
    'This is a report'
    >>> title(labeled)
    'This is also a report'

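For the single-argument case the standard library already offers a
comparable mechanism, :func:`functools.singledispatch`; Reg's
``reg.dispatch`` generalizes this to multiple arguments and arbitrary
predicates. A minimal comparison sketch using only the standard
library (no Reg involved):

```python
from functools import singledispatch

# Single dispatch from the standard library: the implementation is
# chosen by the class of the first argument only.
@singledispatch
def title(obj):
    return "we don't know the title"

class TitledReport(object):
    def __init__(self, title):
        self.title = title

# Register an implementation for TitledReport instances.
@title.register(TitledReport)
def _(obj):
    return obj.title

print(title(TitledReport('This is a report')))  # 'This is a report'
print(title(42))                                # "we don't know the title"
```

Unlike Reg, ``singledispatch`` cannot look at any argument other than
the first, which is exactly the limitation the rest of this document
moves beyond.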
What is going on and why is this useful at all? We present a worked
out example next.
Dispatch functions
------------------
A Hypothetical CMS
~~~~~~~~~~~~~~~~~~
Let's look at how Reg works in the context of a hypothetical content
management system (CMS).
This hypothetical CMS has two kinds of content item (we'll add more
later):
* a ``Document`` which contains some text.
* a ``Folder`` which contains a bunch of content entries, for instance
``Document`` instances.
This is the implementation of our CMS:

.. testcode::

    class Document(object):
        def __init__(self, text):
            self.text = text

    class Folder(object):
        def __init__(self, entries):
            self.entries = entries

``size`` methods
~~~~~~~~~~~~~~~~
Now we want to add a feature to our CMS: we want the ability to
calculate the size (in bytes) of any content item. The size of the
document is defined as the length of its text, and the size of the
folder is defined as the sum of the size of everything in it.
.. sidebar:: ``len(text)`` is not in bytes!

    Yeah, we're lying here. ``len(text)`` is not in bytes if text is in
    unicode. Just pretend that text is in ASCII for the sake of this
    example.

If we have control over the implementation of ``Document`` and
``Folder`` we can implement this feature easily by adding a ``size``
method to both classes:

.. testcode::

    class Document(object):
        def __init__(self, text):
            self.text = text

        def size(self):
            return len(self.text)

    class Folder(object):
        def __init__(self, entries):
            self.entries = entries

        def size(self):
            return sum([entry.size() for entry in self.entries])

And then we can simply call the ``.size()`` method to get the size:

.. doctest::

    >>> doc = Document('Hello world!')
    >>> doc.size()
    12
    >>> doc2 = Document('Bye world!')
    >>> doc2.size()
    10
    >>> folder = Folder([doc, doc2])
    >>> folder.size()
    22

The ``Folder`` size code is generic; it doesn't care what the entries
inside it are; if they have a ``size`` method that gives the right
result, it will work. If a new content item ``Image`` is defined and
we provide a ``size`` method for this, a ``Folder`` instance that
contains ``Image`` instances will still be able to calculate its
size. Let's try this:

.. testcode::

    class Image(object):
        def __init__(self, bytes):
            self.bytes = bytes

        def size(self):
            return len(self.bytes)

When we add an ``Image`` instance to the folder, the size of the folder
can still be calculated:

.. doctest::

    >>> image = Image('abc')
    >>> folder.entries.append(image)
    >>> folder.size()
    25

Cool! So we're done, right?
Adding ``size`` from outside
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. sidebar:: Open/Closed Principle

    The `Open/Closed principle`_ states software entities should be open
    for extension, but closed for modification. The idea is that you may
    have a piece of software that you cannot or do not want to change,
    for instance because it's being developed by a third party, or
    because the feature you want to add is outside of the scope of that
    software (separation of concerns). By extending the software without
    modifying its source code, you can benefit from the stability of the
    core software and still add new functionality.

    .. _`Open/Closed principle`: https://en.wikipedia.org/wiki/Open/closed_principle

So far we didn't need Reg at all. But in a real world CMS we aren't
always in the position to change the content classes themselves. We
may be dealing with a content management system core where we *cannot*
control the implementation of ``Document`` and ``Folder``. Or perhaps
we can, but we want to keep our code modular, in independent
packages. So how would we add a size calculation feature in an
extension package?
We can fall back on good-old Python functions instead. We separate out
the size logic from our classes:

.. testcode::

    def document_size(item):
        return len(item.text)

    def folder_size(item):
        return sum([document_size(entry) for entry in item.entries])

Generic size
~~~~~~~~~~~~
.. sidebar:: What about monkey patching?

    We *could* `monkey patch`_ a ``size`` method into all our content
    classes. This would work. But doing this can be risky -- what if the
    original CMS's implementers change it so it *does* gain a size
    method or attribute, for instance? Multiple monkey patches
    interacting can also lead to trouble. In addition, monkey-patched
    classes become harder to read: where is this ``size`` method coming
    from? It isn't there in the ``class`` statement, or in any of its
    superclasses! And how would we document such a construction?

    In short, monkey patching does not make for very maintainable code.

    .. _`monkey patch`: https://en.wikipedia.org/wiki/Monkey_patch

There is a problem with the above function-based implementation
however: ``folder_size`` is not generic anymore, but now depends on
``document_size``. It fails when presented with a folder with an
``Image`` in it:

.. doctest::

    >>> folder_size(folder)
    Traceback (most recent call last):
      ...
    AttributeError: ...

To support ``Image`` we first need an ``image_size`` function:

.. testcode::

    def image_size(item):
        return len(item.bytes)

We can now write a generic ``size`` function to get the size for any
item we give it:

.. testcode::

    def size(item):
        if isinstance(item, Document):
            return document_size(item)
        elif isinstance(item, Image):
            return image_size(item)
        elif isinstance(item, Folder):
            return folder_size(item)
        assert False, "Unknown item: %s" % item

With this, we can rewrite ``folder_size`` to use the generic ``size``:

.. testcode::

    def folder_size(item):
        return sum([size(entry) for entry in item.entries])

Now our generic ``size`` function works:

.. doctest::

    >>> size(doc)
    12
    >>> size(image)
    3
    >>> size(folder)
    25

All a bit complicated and hard-coded, but it works!
New ``File`` content
~~~~~~~~~~~~~~~~~~~~
What if we want to write a new extension to our CMS that adds a new
kind of folder item, the ``File``, with a ``file_size`` function?

.. testcode::

    class File(object):
        def __init__(self, bytes):
            self.bytes = bytes

    def file_size(item):
        return len(item.bytes)

We need to remember to adjust the generic ``size`` function so we can
teach it about ``file_size`` as well. Annoying, tightly coupled, but
sometimes doable.
But what if we are actually another party, and we have control of
neither the basic CMS *nor* its size extension? We cannot adjust
``size`` to teach it about ``File`` now! Uh oh!
Perhaps the implementers of the size extension anticipated this use
case. They could have implemented ``size`` like this:

.. testcode::

    size_function_registry = {
        Document: document_size,
        Image: image_size,
        Folder: folder_size
    }

    def register_size(class_, function):
        size_function_registry[class_] = function

    def size(item):
        return size_function_registry[item.__class__](item)

We can now use ``register_size`` to teach ``size`` how to get
the size of a ``File`` instance:

.. testcode::

    register_size(File, file_size)

And it works:

.. doctest::

    >>> size(File('xyz'))
    3

This is quite a bit of custom work for the implementers, though, and
it involves a new API (``register_size``) to manipulate the
``size_function_registry``. But it can be done.
New ``HtmlDocument`` content
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
What if we introduce a new ``HtmlDocument`` item that is a subclass of
``Document``?

.. testcode::

    class HtmlDocument(Document):
        pass  # imagine new html functionality here

Let's try to get its size:

.. doctest::

    >>> htmldoc = HtmlDocument('<p>Hello world!</p>')
    >>> size(htmldoc)
    Traceback (most recent call last):
      ...
    KeyError: ...

That doesn't work! There's nothing registered for the ``HtmlDocument``
class.
We need to remember to also call ``register_size`` for
``HtmlDocument``. We can reuse ``document_size``:

.. doctest::

    >>> register_size(HtmlDocument, document_size)

Now ``size`` will work:

.. doctest::

    >>> size(htmldoc)
    19

This is getting rather complicated, requiring not only foresight and
extra implementation work for the developers of ``size`` but also
extra work for the person who wants to subclass a content item.
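The subclass problem can be solved in the registry itself by walking
the item class's method resolution order during lookup -- which is
roughly the idea that Reg automates for you. A minimal pure-Python
sketch of that idea (the names here are hypothetical, not part of the
CMS above or of Reg):

```python
# A registry that walks the class's MRO finds implementations
# registered for base classes automatically.
size_registry = {}

def register_size(class_, function):
    size_registry[class_] = function

def size(item):
    # Try the item's own class first, then each base class in order.
    for class_ in type(item).__mro__:
        if class_ in size_registry:
            return size_registry[class_](item)
    raise NotImplementedError("no size registered for %r" % type(item))

class Document(object):
    def __init__(self, text):
        self.text = text

class HtmlDocument(Document):
    pass

register_size(Document, lambda item: len(item.text))

# No registration for HtmlDocument needed: the Document rule is found.
print(size(HtmlDocument('<p>Hi</p>')))  # 9
```

With this scheme, subclassing a content item no longer requires an
extra registration call, at the cost of a slightly slower lookup.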
Hey, we should write a system that automates a lot of this, and gives
us a universal registration API, making our life easier! And what if
we want to switch behavior based on more than just one argument? Maybe
you even want different dispatch behavior depending on application
context? This is what Reg is for.
Doing this with Reg
~~~~~~~~~~~~~~~~~~~
Let's see how we can implement ``size`` using Reg.

First we need our generic ``size`` function:

.. testcode::

    def size(item):
        raise NotImplementedError

This function raises ``NotImplementedError`` as we don't know how to
get the size for an arbitrary Python object. Not very useful yet. We need
to be able to hook the actual implementations into it. To do this, we first
need to transform the ``size`` function to a generic one:

.. testcode::

    import reg

    size = reg.dispatch('item')(size)

We can actually spell these two steps in a single step, as
:func:`reg.dispatch` can be used as a decorator:

.. testcode::

    @reg.dispatch('item')
    def size(item):
        raise NotImplementedError

What this says is that when we call ``size``, we want to dispatch based
on the class of its ``item`` argument.
We can now register the various size functions for the various content
items as implementations of ``size``:

.. testcode::

    size.register(document_size, item=Document)
    size.register(folder_size, item=Folder)
    size.register(image_size, item=Image)
    size.register(file_size, item=File)

``size`` now works:

.. doctest::

    >>> size(doc)
    12

It works for folders too:

.. doctest::

    >>> size(folder)
    25

It works for subclasses too:

.. doctest::

    >>> size(htmldoc)
    19

Reg knows that ``HtmlDocument`` is a subclass of ``Document`` and will
find ``document_size`` automatically for you. We only have to register
something for ``HtmlDocument`` if we want to use a special, different
size function for ``HtmlDocument``.
Multiple and predicate dispatch
-------------------------------
Let's look at an example where dispatching on multiple arguments is
useful: a web view lookup system. Given a request object that
represents an HTTP request, and a model instance (document, icon,
etc.), we want to find a view function that knows how to make a
representation of the model given the request. Information in the
request can influence the representation. In this example we use a
``request_method`` attribute, which can be ``GET``, ``POST``, ``PUT``,
etc.
Let's first define a ``Request`` class with a ``request_method``
attribute:

.. testcode::

    class Request(object):
        def __init__(self, request_method, body=''):
            self.request_method = request_method
            self.body = body

We've also defined a ``body`` attribute which contains text in case
the request is a ``POST`` request.
We use the previously defined ``Document`` as the model class.
Now we define a view function that dispatches on the class of the
model instance, and the ``request_method`` attribute of the request:

.. testcode::

    @reg.dispatch(
        reg.match_instance('obj'),
        reg.match_key('request_method',
                      lambda obj, request: request.request_method))
    def view(obj, request):
        raise NotImplementedError

As you can see here we use ``match_instance`` and ``match_key``
instead of strings to specify how to dispatch.
If you use a string argument, this string names an argument and
dispatch is based on the class of the instance you pass in. Here we
use ``match_instance``, which is equivalent to this: we have an ``obj``
predicate which uses the class of the ``obj`` argument for dispatch.
We also use ``match_key``, which dispatches on the ``request_method``
attribute of the request; this attribute is a string, so dispatch is
on string matching, not ``isinstance`` as with ``match_instance``. You
can use any Python immutable with ``match_key``, not just strings.
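Conceptually, each call is reduced to a tuple of predicate values --
here ``(class of obj, request_method)`` -- and that tuple selects the
implementation. A stripped-down illustration of that idea (this is
not Reg's actual internals; all names here are made up):

```python
# A toy multi-predicate registry: implementations are keyed by a
# tuple of (model class, request method string).
registry = {}

def register(obj_class, request_method, func):
    registry[(obj_class, request_method)] = func

def view(obj, request_method):
    # Build the dispatch key from the call's arguments and look it up.
    return registry[(type(obj), request_method)](obj)

class Document(object):
    def __init__(self, text):
        self.text = text

register(Document, 'GET', lambda obj: 'text: ' + obj.text)

print(view(Document('hi'), 'GET'))  # 'text: hi'
```

Because the second key component is compared by equality rather than
by ``isinstance``, any immutable value works there -- which mirrors the
``match_key`` behavior described above. Reg additionally handles
subclass fallback and caching, which this sketch omits.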
We now define concrete views for ``Document`` and ``Image``:

.. testcode::

    @view.register(request_method='GET', obj=Document)
    def document_get(obj, request):
        return "Document text is: " + obj.text

    @view.register(request_method='POST', obj=Document)
    def document_post(obj, request):
        obj.text = request.body
        return "We changed the document"

Let's also define them for ``Image``:

.. testcode::

    @view.register(request_method='GET', obj=Image)
    def image_get(obj, request):
        return obj.bytes

    @view.register(request_method='POST', obj=Image)
    def image_post(obj, request):
        obj.bytes = request.body
        return "We changed the image"

Let's try it out:

.. doctest::

    >>> view(doc, Request('GET'))
    'Document text is: Hello world!'
    >>> view(doc, Request('POST', 'New content'))
    'We changed the document'
    >>> doc.text
    'New content'
    >>> view(image, Request('GET'))
    'abc'
    >>> view(image, Request('POST', "new data"))
    'We changed the image'
    >>> image.bytes
    'new data'

Dispatch methods
----------------
Rather than having a ``size`` function and a ``view`` function, we can
also have a context class with ``size`` and ``view`` as methods. We
need to use :class:`reg.dispatch_method` instead of
:class:`reg.dispatch` to do this.

.. testcode::

    class CMS(object):
        @reg.dispatch_method('item')
        def size(self, item):
            raise NotImplementedError

        @reg.dispatch_method(
            reg.match_instance('obj'),
            reg.match_key('request_method',
                          lambda self, obj, request: request.request_method))
        def view(self, obj, request):
            return "Generic content of {} bytes.".format(self.size(obj))

We can now register an implementation of ``CMS.size`` for a
``Document`` object:

.. testcode::

    @CMS.size.register(item=Document)
    def document_size_as_method(self, item):
        return len(item.text)

Note that this is almost the same as the function ``document_size`` we
defined before: the only difference is the signature, with the
additional ``self`` as the first argument. We can in fact use
:func:`reg.methodify` to reuse such functions without an initial
context argument:

.. testcode::

    from reg import methodify

    CMS.size.register(methodify(folder_size), item=Folder)
    CMS.size.register(methodify(image_size), item=Image)
    CMS.size.register(methodify(file_size), item=File)

``CMS.size`` now behaves as expected:

.. doctest::

    >>> cms = CMS()
    >>> cms.size(Image("123"))
    3
    >>> cms.size(Document("12345"))
    5

Similarly for the ``view`` method we can define:

.. testcode::

    @CMS.view.register(request_method='GET', obj=Document)
    def document_get(self, obj, request):
        return "{}-byte-long text is: {}".format(
            self.size(obj), obj.text)

This works as expected as well:

.. doctest::

    >>> cms.view(Document("12345"), Request("GET"))
    '5-byte-long text is: 12345'
    >>> cms.view(Image("123"), Request("GET"))
    'Generic content of 3 bytes.'

For more about how you can use dispatch methods and class-based context,
see :doc:`context`.
Lower level API
---------------
Component lookup
~~~~~~~~~~~~~~~~
You can look up the implementation that a generic function would
dispatch to without calling it, either by invocation arguments using
the :meth:`reg.Dispatch.by_args` method on the dispatch function, or
by predicate values using the :meth:`reg.Dispatch.by_predicates`
method:

>>> size.by_args(doc).component
<function document_size at 0x...>
>>> size.by_predicates(item=Document).component
<function document_size at 0x...>
Both methods return a :class:`reg.LookupEntry` instance whose
attributes, as we've just seen, include the dispatched implementation
under the name ``component``. Another interesting attribute is the
actual key used for dispatching:
>>> view.by_predicates(request_method='GET', obj=Document).key
(<class 'Document'>, 'GET')
>>> view.by_predicates(obj=Image, request_method='POST').key
(<class 'Image'>, 'POST')
Getting all compatible implementations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As Reg supports inheritance, if a function like ``size`` has an
implementation registered for a class, say ``Document``, the same
implementation will be available for any of its subclasses, like
``HtmlDocument``:

>>> size.by_args(doc).component is size.by_args(htmldoc).component
True
The ``matches`` and ``all_matches`` attributes of
:class:`reg.LookupEntry` are an iterator and a list, respectively,
of *all* the registered components that are compatible with a
particular instance, including those of base classes. Right now this
is pretty boring as there's only one of them:

>>> size.by_args(doc).all_matches
[<function document_size at ...>]
>>> size.by_args(htmldoc).all_matches
[<function document_size at ...>]
We can make this more interesting by registering a special
``htmldocument_size`` to handle ``HtmlDocument`` instances:

.. testcode::

    def htmldocument_size(doc):
        return len(doc.text) + 1  # 1 so we can see a difference

    size.register(htmldocument_size, item=HtmlDocument)

``all_matches`` for ``htmldoc`` now also gives back the more specific
``htmldocument_size``:

>>> size.by_args(htmldoc).all_matches
[<function htmldocument_size at ...>, <function document_size at ...>]
The implementations are listed in order of decreasing specificity,
with the first one being the one returned by the ``component`` attribute:

>>> size.by_args(htmldoc).component
<function htmldocument_size at ...>
from itertools import chain, islice
from pathlib import Path
from typing import List, Generator, Iterable, Optional

import orjson
import pyregf


def decode_data(data_type: int, data: bytes) -> str:
    enum = {
        0: lambda x: x.hex(),
        1: lambda x: x.decode('utf-16').rstrip('\u0000'),
        2: lambda x: x.decode('utf-16').rstrip('\u0000'),
        3: lambda x: x.hex(),
        4: lambda x: int.from_bytes(x, 'little'),
        5: lambda x: int.from_bytes(x, 'big'),
        6: lambda x: x.decode('utf-16').rstrip('\u0000'),
        7: lambda x: x.decode('utf-16').rstrip('\u0000').split(' '),
        8: lambda x: x.hex(),
        9: lambda x: x.hex(),
        10: lambda x: x.hex(),
        11: lambda x: int.from_bytes(x, 'little'),
    }
    try:
        result = enum[data_type](data)
    except Exception:
        if data:
            result = data.hex()
        else:
            result = None
    return result


def get_data_type_identifier(data_type: int) -> Optional[str]:
    enum = {
        0: 'REG_NONE',
        1: 'REG_SZ',
        2: 'REG_EXPAND_SZ',
        3: 'REG_BINARY',
        4: 'REG_DWORD',
        5: 'REG_DWORD_BIG_ENDIAN',
        6: 'REG_LINK',
        7: 'REG_MULTI_SZ',
        8: 'REG_RESOURCE_LIST',
        9: 'REG_FULL_RESOURCE_DESCRIPTOR',
        10: 'REG_RESOURCE_REQUIREMENTS_LIST',
        11: 'REG_QWORD',
    }
    return enum.get(data_type, None)


def get_all_hives(reg: pyregf.key) -> dict:
    subtree = dict()
    if reg.sub_keys:
        for r in reg.sub_keys:
            subtree[r.get_name().lstrip('.')] = {
                "meta": {"last_written_time": r.get_last_written_time().isoformat()},
                **get_all_hives(r),
            }
    values = {
        v.get_name().lstrip('.') if v.get_name() else "_": {
            "type": v.get_type(),
            "identifier": get_data_type_identifier(v.get_type()),
            "size": v.get_data_size(),
            "data": decode_data(v.get_type(), v.get_data()),
        } for v in reg.values
    }
    if subtree and values:
        return {**subtree, **values}
    elif subtree:
        return subtree
    elif values:
        return values
    else:
        return {}


class Reg2es(object):
    def __init__(self, input_path: Path) -> None:
        self.path = input_path
        self.reg_file = pyregf.file()
        self.reg_file.open_file_object(self.path.open('rb'))

    def gen_records(self) -> Generator:
        """Generates the formatted Registry records chunks.

        Yields:
            Generator: Yields dict.
        """
        root_key = self.reg_file.get_root_key()
        hives = {root_key.get_name(): get_all_hives(root_key)}
        yield hives
from io import BufferedWriter
from queue import LifoQueue
from struct import pack


def scanner(octree):
    def process(data):
        nonlocal lastlevel
        for i in range(lastlevel - data["level"]):
            counter.put(0)
            lastlevel -= 1
            yield Command.create_node()
        yield Command.fill_node(data["data"])
        lastlevel += increment_counter()

    def increment_counter():
        if counter.empty():
            return 0
        lastcount = counter.get()
        lastcount += 1
        if lastcount == 8:
            counter.task_done()
            return 1 + increment_counter()
        else:
            counter.put(lastcount)
            return 0

    lastlevel = octree.level
    counter = LifoQueue()
    yield Command.header()
    yield Command.seed(octree.level)
    command: Command = None
    for data in iter(octree):
        for nxtcmd in process(data):
            if command is None:
                command = nxtcmd
            elif nxtcmd == command:
                command.count += 1
            else:
                yield command
                command = nxtcmd
    yield command


class Command:
    singletons = {
        None: "nlnd",
        Ellipsis: "vdnd",
        False: "fsnd",
        True: "trnd"
    }

    def __init__(self, cmd, value=None):
        self.cmd = cmd
        self.count = 1
        self.value = value

    def __str__(self):
        out = self.cmd
        if self.value is not None:
            out += " " + str(self.value)
        return out

    def __eq__(self, other):
        if type(other) == type(self):
            if other.cmd == self.cmd:
                if other.value == self.value:
                    return True
        return False

    @classmethod
    def header(cls, version="0.0.2"):
        return cls("header", version)

    @classmethod
    def seed(cls, level):
        return cls("seed", level)

    @classmethod
    def fill_node(cls, value):
        if value in cls.singletons:
            return cls(cls.singletons[value])
        else:
            return cls("flnd", value)

    @classmethod
    def create_node(cls):
        return cls("crnd")


class SavingStream:
    types = {
        'i8': b'\x20', 'i16': b'\x21', 'i32': b'\x22', 'i64': b'\x23',
        'u8': b'\x24', 'u16': b'\x25', 'u32': b'\x26', 'u64': b'\x27',
        'f32': b'\x28', 'f64': b'\x29', 'c64': b'\x2a', 'c128': b'\x2b',
        'flse': b'\x2c', 'true': b'\x2d',
        'Str': b'\x40', 'List': b'\x41', 'Dict': b'\x42', 'Set': b'\x43'
    }
    py_standard = {
        int: "i32",
        float: "f64",
        complex: "c64",
        str: "Str",
        list: "List",
        dict: "Dict",
        set: "Set"
    }

    def __init__(self, io: BufferedWriter) -> None:
        self.io = io

    def convert(self, value):
        type_byte = self.py_standard[type(value)]
        self.io.write(self.types[type_byte])
        getattr(self, type_byte)(value)

    def write(self, value: bytes):
        self.io.write(value)

    def i8(self, value: int):
        self.io.write(value.to_bytes(1, byteorder="big", signed=True))

    def i16(self, value: int):
        self.io.write(value.to_bytes(2, byteorder="big", signed=True))

    def i32(self, value: int):
        self.io.write(value.to_bytes(4, byteorder="big", signed=True))

    def i64(self, value: int):
        self.io.write(value.to_bytes(8, byteorder="big", signed=True))

    def u8(self, value: int):
        self.io.write(value.to_bytes(1, byteorder="big", signed=False))

    def u16(self, value: int):
        self.io.write(value.to_bytes(2, byteorder="big", signed=False))

    def u32(self, value: int):
        self.io.write(value.to_bytes(4, byteorder="big", signed=False))

    def u64(self, value: int):
        self.io.write(value.to_bytes(8, byteorder="big", signed=False))

    def f32(self, value: float):
        self.io.write(pack("f", value))

    def f64(self, value: float):
        self.io.write(pack("d", value))

    def c64(self, value: complex):
        self.io.write(pack("ff", value.real, value.imag))

    def c128(self, value: complex):
        self.io.write(pack("dd", value.real, value.imag))

    def Str(self, value: str):
        self.u16(len(value))
        self.io.write(value.encode())

    def List(self, value: list):
        self.u16(len(value))
        for item in value:
            self.convert(item)

    def Dict(self, value: dict):
        self.u16(len(value))
        for key, item in value.items():
            self.convert(key)
            self.convert(item)

    def Set(self, value: set):
        self.u16(len(value))
        for item in value:
            self.convert(item)


class Saver:
    conversion = {
        'header': b'\x00', 'seed': b'\x01',
        'crnd': b'\x04', 'flnd': b'\x07',
        'nlnd': b'\x08', 'vdnd': b'\x09', 'fsnd': b'\x0a', 'trnd': b'\x0b',
    }

    def __init__(self, io: BufferedWriter, factory) -> None:
        self.factory = factory
        self.converter = SavingStream(io)

    def translate(self, command: Command):
        """Routes the incoming commands that an octree decomposes into
        towards the individual transcription commands"""
        getattr(self, command.cmd)(command)

    def header(self, command: Command):
        self.converter.write(self.conversion[command.cmd])
        self.converter.convert(command.value)

    def seed(self, command: Command):
        self.converter.write(self.conversion[command.cmd])
        self.converter.convert(command.value)

    def crnd(self, command: Command):
        self.converter.write(self.conversion[command.cmd])
        self.converter.u8(command.count)

    def flnd(self, command: Command):
        self.converter.write(self.conversion[command.cmd])
        self.converter.u8(command.count)
        self.factory.to_file(command.value, self.converter)

    def nlnd(self, command: Command):
        self.converter.write(self.conversion[command.cmd])
        self.converter.u8(command.count)

    def vdnd(self, command: Command):
        self.converter.write(self.conversion[command.cmd])
        self.converter.u8(command.count)

    def fsnd(self, command: Command):
        self.converter.write(self.conversion[command.cmd])
        self.converter.u8(command.count)

    def trnd(self, command: Command):
        self.converter.write(self.conversion[command.cmd])
        self.converter.u8(command.count)
from contextlib import contextmanager
from copy import deepcopy
import math
from .util import Geometry
class Leaf:
def __init__(self, level, data):
self.value = data
self.level = level
if self.level < 0:
raise ValueError(self.level)
def clone(self, level):
return Leaf(self.level, deepcopy(self.value))
def get(self, coords):
return self.value
def set(self, coords, level) -> "Leaf | Node":
if self.level == level:
return self
else:
self.subdivide()
return self.set(coords, level)
def subdivide(self):
self.__class__ = Node
self.contents = list(Leaf(self.level-1, deepcopy(self.value)) for i in range(8))
del self.value
def make_leaf(self, data):
"""sets value for leaf"""
self.value = data
def make_node(self, node):
if node.__class__ == Leaf:
self.value = deepcopy(node.value)
elif node.__class__ == Node:
self.__class__ = Node
self.contents = list(member.clone(self.level-1) for member in node.contents)
del self.value
def defragment(self):...
def __str__(self):
return str({"level":self.level, "data":str(self.value)})
def __bool__(self):
return bool(self.value)
def __iter__(self):
self.has_returned = False
return self
def __next__(self):
if self.has_returned:
raise StopIteration
else:
self.has_returned = True
return {"coords": (0, 0, 0), "level":self.level, "void":bool(self), "data":self.value}
def __len__(self):
return 1
def __eq__(self, o: "Leaf") -> bool:
return self.__class__ == o.__class__ and self.value == o.value
class Node:
def __init__(self, level):
self.contents = list(Leaf(level-1, None) for i in range(8))
self.level = level
if self.level < 0:
raise ValueError(self.level)
def clone(self, level):
obj = Node(level)
obj.contents = list(member.clone(obj.level-1) for member in self.contents)
return obj
# Navigation Methods
def get(self, coords):
next_coords = Geometry.coord_mod(coords, 2**self.level)
next_index = Geometry.index_from_coords(Geometry.coord_div(next_coords, 2**(self.level-1)))
return self.contents[next_index].get(next_coords)
def set(self, coords, level) -> "Leaf | Node":
if self.level == level:
return self
else:
next_coords = Geometry.coord_mod(coords, 2**self.level)
next_index = Geometry.index_from_coords(
Geometry.coord_div(
next_coords,
2**(self.level-1)
)
)
return self.contents[next_index].set(next_coords, level)
def subdivide(self):
self.contents = list(self.clone(self.level-1) for _ in range(8))
def make_leaf(self, data=None):
self.__class__ = Leaf
self.value = data
del self.contents
def make_node(self, node):
if node.__class__ == Leaf:
self.__class__ = Leaf
self.value = deepcopy(node.value)
elif node.__class__ == Node:
self.__class__ = Node
self.contents = list(member.clone(self.level-1) for member in node.contents)
def defragment(self):
for node in self.contents:
node.defragment()
if all(self.contents[i] == self.contents[(i+1)%8] for i in range(8)):
self.make_leaf(self.contents[0].value)
def __str__(self):
return "\n".join(str(i) for i in self.contents)
def __iter__(self):
self.n = 0
self.reading = iter(self.contents[0])
return self
def __next__(self):
try:
out = next(self.reading)
except StopIteration:
self.n += 1
if self.n == 8:
raise StopIteration
self.reading = iter(self.contents[self.n])
out = next(self.reading)
out["coords"] = tuple(a + b for a, b in zip(Geometry.coords_from_index(self.n, self.level - 1), out["coords"]))
return out
def __len__(self):
return sum(len(child) for child in self)
def __eq__(self, o: "Leaf | Node"):
return self.__class__ == o.__class__ and all(a == b for a, b in zip(self.contents, o.contents))
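The Geometry helper imported at the top of this module is not shown here. A minimal self-contained sketch of the coordinate arithmetic that Node.get and Node.set rely on could look like the following; the (x, y, z) -> bit ordering is an assumption, not necessarily the library's choice:

```python
def index_from_coords(coords):
    """Map a (0-or-1, 0-or-1, 0-or-1) octant coordinate to an index 0..7."""
    x, y, z = coords
    return (x << 2) | (y << 1) | z

def coords_from_index(index, level):
    """Inverse mapping, scaled to the side length of a child at `level`."""
    side = 1 << level
    return ((index >> 2 & 1) * side, (index >> 1 & 1) * side, (index & 1) * side)

def coord_mod(coords, m):
    """Component-wise modulo, used to localize coordinates within a node."""
    return tuple(value % m for value in coords)

def coord_div(coords, d):
    """Component-wise floor division, used to pick the child octant."""
    return tuple(value // d for value in coords)
```

With this bit ordering, index_from_coords and coords_from_index round-trip at level 0, which is the property Node.__next__ depends on when reassembling absolute coordinates.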
class Octree:
def __init__(self, level, data=None):
self.octree:"Leaf | Node" = Leaf(level, data)
self.level = level
def _get(self, coords):
        if any(value < 0 or value >= 2**self.level for value in coords):
raise IndexError()
else:
return self.octree.get(coords)
def _set(self, coords, level) -> "Leaf | Node":
        if any(value < 0 or value >= 2**self.level for value in coords):
print("out of range")
raise IndexError()
return self.octree.set(coords, level)
def _raise(self):
print("raising")
        new_octree = Node(self.level + 1)
        new_octree.contents[0] = self.octree
        self.octree = new_octree
        self.level += 1  # the tree now spans one more level
@contextmanager
def edit(self):
try:
yield self
finally:
self.octree.defragment()
# WIP
def get_sub_tree(self, coords: "tuple", level) -> "Octree":
"""-WIP- Clone a segment of the octree to a new object"""
node = self._set(coords, level)
obj = Octree(node.level)
obj.octree = node.clone(obj.level)
return obj
def set_sub_tree(self, coords: "tuple", level, src: "Octree"):
"""-WIP- Insert a seperate Octree to the designated location"""
self._set(coords, level).make_node(src.octree)
# spec methods
def __str__(self):
return str(self.octree)
def __iter__(self):
return iter(self.octree)
def __getitem__(self, index: "tuple"):
if len(index) != 3:
raise TypeError("octree key must be a 3-tuple (x, y, z)")
return self._get(index)
def __setitem__(self, index: "tuple", data):
if len(index) == 3:
index = (*index, 0)
if len(index) != 4:
raise TypeError("octree key must be a 3- or 4-tuple (x, y, z[, level])")
self._set(index[:3], index[3]).make_leaf(data)
def __len__(self):
return len(self.octree)
def __eq__(self, o: "Octree") -> bool:
return self.__class__ == o.__class__ and self.octree == o.octree
def __ne__(self, o: "Octree") -> bool:
return not self == o | /regOct-1.1.3-py3-none-any.whl/regoct/structures.py | 0.42931 | 0.204124 | structures.py | pypi |
from io import BufferedReader
from struct import unpack
from copy import deepcopy
from contextlib import contextmanager
from .structures import Octree, Node, Leaf
class Command:
def __init__(self, name, count=1, value=None):
self.command = name
self.count = count
self.value = value
def __str__(self):
out = self.command
if self.value is not None:
out += " " + str(self.value)
return out
class Builder:
def __init__(self, level):
self.level = level
self.creating_index = 0
self.end_of_line = True
self.artefact:"list[Builder]" = []
def crnd(self, *args):
if self.end_of_line == False:
self.artefact[self.creating_index].crnd()
else:
self.artefact.append(Builder(self.level-1))
self.end_of_line = False
    def flnd(self, data):
        if not self.end_of_line:
            if self.artefact[self.creating_index].flnd(data):
                self.creating_index += 1
                self.end_of_line = True
        else:
            self.artefact.append(Leaf(self.level - 1, data))
            self.creating_index += 1
        # the completion check must run on both paths, otherwise a builder
        # whose eighth child is a sub-builder never collapses into a Node
        if self.creating_index == 8:
            self.contents = self.artefact
            self.__class__ = Node
            del self.artefact
            del self.end_of_line
            del self.creating_index
            return True
class BuilderHelper(Builder):
class VersionError(Exception):
def __init__(self, given, required):
self.message = f"Version {required} was required, but {given} was given"
super().__init__(self.message)
def __init__(self):
pass
def header(self, value):
if value != "0.0.2":
raise self.VersionError(value, "0.0.2")
def seed(self, value):
self.octree = Octree(value)
super().__init__(value + 1)
def route(self, command:Command):
for _ in range(command.count):
getattr(self, command.command)(deepcopy(command.value))
@contextmanager
def build(self):
try:
yield self
finally:
self.octree.octree = self.artefact[0]
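BuilderHelper.route dispatches a Command to the method named by command.command via getattr and repeats it command.count times. A stripped-down sketch of that dispatch pattern, with Sink as an illustrative stand-in rather than a library class:

```python
class Sink:
    def __init__(self):
        self.log = []

    def flnd(self, value):
        # record each fill-leaf call so the dispatch can be observed
        self.log.append(("flnd", value))

def route(sink, name, count, value):
    """Look up the handler by name and invoke it `count` times."""
    for _ in range(count):
        getattr(sink, name)(value)
```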
class LoadingStream:
commands = {
b"\x00":"header", b"\x01":"seed",
b"\x04":"crnd", b"\x07":"flnd",
b"\x08":"nlnd", b"\x09":"vdnd", b"\x0a":"fsnd", b"\x0b":"trnd",
b"\x20": "i8", b"\x21": "i16", b"\x22": "i32", b"\x23": "i64",
b"\x24": "u8", b"\x25": "u16", b"\x26": "u32", b"\x27": "u64",
b"\x28": "f32", b"\x29": "f64", b"\x2a": "c64", b"\x2b":"c128",
b"\x40": "Str", b"\x41":"List", b"\x42":"Dict", b"\x43":"Set"
}
def __init__(self, io:BufferedReader) -> None:
self.io = io
def convert(self):
return getattr(self, self.commands[self.read()])()
def read(self) -> bytes:
if (next_byte := self.io.read(1)):
return next_byte
else:
raise EOFError
def i8(self):
return int.from_bytes(self.io.read(1), byteorder="big", signed=True)
def i16(self):
return int.from_bytes(self.io.read(2), byteorder="big", signed=True)
def i32(self):
return int.from_bytes(self.io.read(4), byteorder="big", signed=True)
def i64(self):
return int.from_bytes(self.io.read(8), byteorder="big", signed=True)
def u8(self):
return int.from_bytes(self.io.read(1), byteorder="big", signed=False)
def u16(self):
return int.from_bytes(self.io.read(2), byteorder="big", signed=False)
def u32(self):
return int.from_bytes(self.io.read(4), byteorder="big", signed=False)
def u64(self):
return int.from_bytes(self.io.read(8), byteorder="big", signed=False)
    def f32(self):
        # struct.unpack returns a tuple even for a single value
        return unpack("f", self.io.read(4))[0]
    def f64(self):
        return unpack("d", self.io.read(8))[0]
def c64(self):
return complex(*unpack("ff", self.io.read(8)))
def c128(self):
return complex(*unpack("dd", self.io.read(16)))
def Str(self):
return self.io.read(self.u16()).decode()
def List(self):
return list(self.convert() for _ in range(self.u16()))
def Dict(self):
return dict((self.convert(), self.convert()) for _ in range(self.u16()))
def Set(self):
return set(self.convert() for _ in range(self.u16()))
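LoadingStream frames variable-length values with a big-endian u16 prefix: Str reads the byte length first, while List, Dict and Set read an element count. A self-contained sketch of the Str framing:

```python
import io
import struct

def read_u16(buf):
    """Read a big-endian unsigned 16-bit length prefix."""
    return int.from_bytes(buf.read(2), byteorder="big", signed=False)

def read_str(buf):
    """Read a u16 length, then that many bytes, and decode them as UTF-8."""
    return buf.read(read_u16(buf)).decode()
```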
class Loader:
commands = {
b"\x00":"header", b"\x01":"seed",
b"\x04":"crnd", b"\x07":"flnd",
b"\x08":"nlnd", b"\x09":"vdnd", b"\x0a":"fsnd", b"\x0b":"trnd",
}
def __init__(self, io:BufferedReader, factory) -> None:
self.factory = factory
self.converter = LoadingStream(io)
def __iter__(self):
return self
def __next__(self):
try:
return getattr(self, self.commands[self.converter.read()])()
except EOFError:
raise StopIteration
def header(self):
return Command("header", value=self.converter.convert())
def seed(self):
return Command("seed", value=self.converter.convert())
def crnd(self):
return Command("crnd", self.converter.u8())
def flnd(self):
return Command("flnd", self.converter.u8(), self.factory.from_file(self.factory, self.converter)) # here custom from_file method should be inserted
    def nlnd(self):
        # nlnd/vdnd/fsnd/trnd are fixed-value leaf fills (None/Ellipsis/False/True),
        # so they all dispatch to the builder as "flnd" commands
        return Command("flnd", self.converter.u8())
def vdnd(self):
return Command("flnd", self.converter.u8(), value=Ellipsis)
def fsnd(self):
return Command("flnd", self.converter.u8(), value=False)
def trnd(self):
return Command("flnd", self.converter.u8(), value=True) | /regOct-1.1.3-py3-none-any.whl/regoct/reader.py | 0.516352 | 0.220825 | reader.py | pypi |
import io
import logging
_logger = logging.getLogger(__name__)
class BufferReadError(IOError):
"""
Internal kartothek error while attempting to read from buffer
"""
pass
class BlockBuffer(io.BufferedIOBase):
"""
Block-based buffer.
The input is split into fixed sizes blocks. Every block can be read independently.
"""
def __init__(self, raw, blocksize=1024):
self._raw = raw
self._blocksize = blocksize
self._size = None
self._cached_blocks = None
self._pos = 0
if self._raw_closed():
raise ValueError("Cannot use closed file object")
if not self._raw_readable():
raise ValueError("raw must be readable")
if not self._raw_seekable():
raise ValueError("raw must be seekable")
if blocksize < 1:
raise ValueError("blocksize must be at least 1")
def _raw_closed(self):
"""
If supported by ``raw``, return its closed state, otherwise return ``False``.
"""
if hasattr(self._raw, "closed"):
return self._raw.closed
else:
return False
def _raw_readable(self):
"""
If supported by ``raw``, return its readable state, otherwise return ``True``.
"""
if hasattr(self._raw, "readable"):
return self._raw.readable()
else:
return True
def _raw_seekable(self):
"""
If supported by ``raw``, return its seekable state, otherwise return ``True``.
"""
if hasattr(self._raw, "seekable"):
return self._raw.seekable()
else:
return True
def _setup_cache(self):
"""
Set up cache data structure and inspect underlying IO object.
        If the cache is already initialized, this is a no-op.
"""
if self._cached_blocks is not None:
# cache initialized, nothing to do
return
if hasattr(self._raw, "size"):
self._size = self._raw.size
elif hasattr(self._raw, "__len__"):
self._size = len(self._raw)
else:
self._raw.seek(0, 2)
self._size = self._raw.tell()
n_blocks = self._size // self._blocksize
if self._size % self._blocksize:
n_blocks += 1
self._cached_blocks = [None] * n_blocks
def _fetch_blocks(self, block, n):
"""
Fetch blocks from underlying IO object.
This will mark the fetched blocks as loaded.
Parameters
----------
block: int
First block to fetch.
n: int
Number of blocks to fetch.
"""
assert n > 0
# seek source
offset = self._blocksize * block
self._raw.seek(offset, 0)
# read data into temporary variable and dump it into cache
size = min(self._blocksize * n, self._size - offset)
data = self._raw.read(size)
if len(data) != size:
err = (
f"Expected raw read to return {size} bytes, but instead got {len(data)}"
)
_logger.error(err)
raise BufferReadError(err)
# fill blocks
for i in range(n):
begin = i * self._blocksize
end = min((i + 1) * self._blocksize, size)
self._cached_blocks[block + i] = data[begin:end]
def _ensure_range_loaded(self, start, size):
"""
Ensure that a given byte range is loaded into the cache.
This will scan for blocks that are not loaded yet and tries to load consecutive blocks as once.
Parameters
----------
start: int
First byte of the range.
size: int
Number of bytes in the range.
"""
if size < 0:
msg = f"Expected size >= 0, but got start={start}, size={size}"
_logger.error(msg)
raise BufferReadError(msg)
block = start // self._blocksize
offset = start % self._blocksize
# iterate over blocks in range and figure out long sub-ranges of blocks to fetch at once
done = -offset
to_fetch_start = None
to_fetch_n = None
while done < size:
if self._cached_blocks[block] is not None:
# current block is loaded
if to_fetch_start is not None:
# there was a block range to be loaded, do that now
self._fetch_blocks(to_fetch_start, to_fetch_n)
# no active block range anymore
to_fetch_start = None
to_fetch_n = None
else:
# current block is missing, do we already have a block range to append to?
if to_fetch_start is None:
# no block range open, create a new one
to_fetch_start = block
to_fetch_n = 1
else:
# current block range exists, append block
to_fetch_n += 1
done += self._blocksize
block += 1
if to_fetch_start is not None:
# this is the last active block range, fetch it
self._fetch_blocks(to_fetch_start, to_fetch_n)
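The scan in _ensure_range_loaded coalesces consecutive missing blocks so they can be fetched with a single read. The run-detection logic in isolation, simplified to a boolean cache:

```python
def plan_fetches(cached, first, last):
    """Return (start, n) runs of missing blocks in the inclusive block range."""
    runs = []
    run_start = None
    for block in range(first, last + 1):
        if cached[block]:
            # a loaded block terminates any open run of missing blocks
            if run_start is not None:
                runs.append((run_start, block - run_start))
                run_start = None
        else:
            # a missing block opens a run or extends the current one
            if run_start is None:
                run_start = block
    if run_start is not None:
        runs.append((run_start, last + 1 - run_start))
    return runs
```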
def _read_data_from_blocks(self, start, size):
"""
Read data from bytes.
Parameters
----------
start: int
First byte of the range.
size: int
Number of bytes in the range.
Returns
-------
data: bytes
Requested data
"""
block = start // self._blocksize
offset = start % self._blocksize
read_size = size + offset
n_blocks = read_size // self._blocksize
if read_size % self._blocksize != 0:
n_blocks += 1
return b"".join(self._cached_blocks[block : (block + n_blocks)])[
offset : (size + offset)
]
def _check_closed(self):
"""
Check that file object currently is not closed.
Raises
------
ValueError: in case file object is closed
"""
if self.closed:
raise ValueError("I/O operation on closed file.")
def read(self, size=None):
self._check_closed()
self._setup_cache()
if (size is None) or (size < 0) or (self._pos + size > self._size):
# read entire, remaining file
size = self._size - self._pos
self._ensure_range_loaded(self._pos, size)
result = self._read_data_from_blocks(self._pos, size)
self._pos += size
return result
def tell(self):
self._check_closed()
return self._pos
def seek(self, offset, whence=0):
self._check_closed()
self._setup_cache()
if whence == 0:
self._pos = max(0, min(offset, self._size))
elif whence == 1:
self._pos = max(0, min(self._pos + offset, self._size))
elif whence == 2:
self._pos = max(0, min(self._size + offset, self._size))
else:
raise ValueError("unsupported whence value")
return self._pos
@property
def size(self):
self._check_closed()
self._setup_cache()
return self._size
def seekable(self):
self._check_closed()
return True
def readable(self):
self._check_closed()
return True
def close(self):
if not self.closed:
self._raw.close()
super(BlockBuffer, self).close() | /regallager-0.0.1.tar.gz/regallager-0.0.1/kartothek/serialization/_io_buffer.py | 0.725746 | 0.340376 | _io_buffer.py | pypi |
from collections import defaultdict
from copy import copy
from functools import reduce
from typing import Any, Callable, Dict, Iterable, Set, Tuple, Union, cast
from kartothek.core.common_metadata import validate_shared_columns
from kartothek.core.cube.constants import KTK_CUBE_METADATA_VERSION
from kartothek.core.cube.cube import Cube
from kartothek.core.dataset import DatasetMetadata
from kartothek.core.index import ExplicitSecondaryIndex, IndexBase, PartitionIndex
from kartothek.io_components.metapartition import SINGLE_TABLE
from kartothek.utils.ktk_adapters import get_dataset_columns
__all__ = ("check_datasets", "get_cube_payload", "get_payload_subset")
def _check_datasets(
datasets: Dict[str, DatasetMetadata],
f: Callable[[DatasetMetadata], Any],
expected: Any,
what: str,
) -> None:
"""
Check datasets with given function and raise ``ValueError`` in case of an issue.
Parameters
----------
datasets
Datasets.
f
Transformer for dataset.
expected
Value that is expected to be returned by ``f``.
what
Description of what is currently checked.
Raises
------
ValueError: In case any issue was found.
"""
no = [name for name, ds in datasets.items() if f(ds) != expected]
if no:
def _fmt(obj):
if isinstance(obj, set):
return ", ".join(sorted(obj))
elif isinstance(obj, (list, tuple)):
return ", ".join(obj)
else:
return str(obj)
raise ValueError(
"Invalid datasets because {what} is wrong. Expected {expected}: {datasets}".format(
what=what,
expected=_fmt(expected),
datasets=", ".join(
"{name} ({actual})".format(
name=name, actual=_fmt(f(datasets[name]))
)
for name in sorted(no)
),
)
)
def _check_overlap(datasets: Dict[str, DatasetMetadata], cube: Cube) -> None:
"""
Check that datasets have not overlapping payload columns.
Parameters
----------
datasets
Datasets.
cube
Cube specification.
Raises
------
ValueError: In case of overlapping payload columns.
"""
payload_columns_defaultdct = defaultdict(list)
for ktk_cube_dataset_id, ds in datasets.items():
for col in get_payload_subset(get_dataset_columns(ds), cube):
payload_columns_defaultdct[col].append(ktk_cube_dataset_id)
payload_columns_dct = {
col: ktk_cube_dataset_ids
for col, ktk_cube_dataset_ids in payload_columns_defaultdct.items()
if len(ktk_cube_dataset_ids) > 1
}
if payload_columns_dct:
raise ValueError(
"Found columns present in multiple datasets:{}".format(
"\n".join(
" - {col}: {ktk_cube_dataset_ids}".format(
col=col,
ktk_cube_dataset_ids=", ".join(
sorted(payload_columns_dct[col])
),
)
for col in sorted(payload_columns_dct.keys())
)
)
)
def _check_dimension_columns(datasets: Dict[str, DatasetMetadata], cube: Cube) -> None:
"""
Check if required dimension are present in given datasets.
For the seed dataset all dimension columns must be present. For all other datasets at least 1 dimension column must
be present.
Parameters
----------
datasets
Datasets.
cube
Cube specification.
Raises
------
ValueError: In case dimension columns are broken.
"""
for ktk_cube_dataset_id in sorted(datasets.keys()):
ds = datasets[ktk_cube_dataset_id]
columns = get_dataset_columns(ds)
if ktk_cube_dataset_id == cube.seed_dataset:
missing = set(cube.dimension_columns) - columns
if missing:
raise ValueError(
'Seed dataset "{ktk_cube_dataset_id}" has missing dimension columns: {missing}'.format(
ktk_cube_dataset_id=ktk_cube_dataset_id,
missing=", ".join(sorted(missing)),
)
)
else:
present = set(cube.dimension_columns) & columns
if len(present) == 0:
raise ValueError(
(
'Dataset "{ktk_cube_dataset_id}" must have at least 1 of the following dimension columns: '
"{dims}"
).format(
ktk_cube_dataset_id=ktk_cube_dataset_id,
dims=", ".join(cube.dimension_columns),
)
)
def _check_partition_columns(datasets: Dict[str, DatasetMetadata], cube: Cube) -> None:
"""
Check if required partitions columns are present in given datasets.
For the seed dataset all partition columns must be present. For all other datasets at least 1 partition column must
be present.
Parameters
----------
datasets
Datasets.
cube
Cube specification.
Raises
------
ValueError: In case partition columns are broken.
"""
for ktk_cube_dataset_id in sorted(datasets.keys()):
ds = datasets[ktk_cube_dataset_id]
columns = set(ds.partition_keys)
if ktk_cube_dataset_id == cube.seed_dataset:
missing = set(cube.partition_columns) - columns
if missing:
raise ValueError(
'Seed dataset "{ktk_cube_dataset_id}" has missing partition columns: {missing}'.format(
ktk_cube_dataset_id=ktk_cube_dataset_id,
missing=", ".join(sorted(missing)),
)
)
unspecified_partition_columns = (
get_dataset_columns(ds) - set(ds.partition_keys)
) & set(cube.partition_columns)
if unspecified_partition_columns:
raise ValueError(
f"Unspecified but provided partition columns in {ktk_cube_dataset_id}: "
f"{', '.join(sorted(unspecified_partition_columns))}"
)
def _check_indices(datasets: Dict[str, DatasetMetadata], cube: Cube) -> None:
"""
Check if required indices are present in given datasets.
For all datasets the primary indices must be equal to ``ds.partition_keys``. For the seed dataset, secondary
indices for all dimension columns except ``cube.suppress_index_on`` are expected.
Additional indices are accepted and will not be reported as error.
Parameters
----------
datasets
Datasets.
cube
Cube specification.
Raises
------
ValueError: In case indices are broken.
"""
for ktk_cube_dataset_id in sorted(datasets.keys()):
ds = datasets[ktk_cube_dataset_id]
primary_indices = ds.partition_keys
columns = get_dataset_columns(ds)
secondary_indices = set()
any_indices = set(cube.index_columns) & columns
if ktk_cube_dataset_id == cube.seed_dataset:
secondary_indices |= set(cube.dimension_columns) - set(
cube.suppress_index_on
)
for types_untyped, elements in (
((PartitionIndex,), primary_indices),
((ExplicitSecondaryIndex,), secondary_indices),
((ExplicitSecondaryIndex, PartitionIndex), any_indices),
):
types = cast(Tuple[type, ...], types_untyped)
tname = " or ".join(t.__name__ for t in types)
# it seems that partition indices are not always present (e.g. for empty datasets), so add partition keys to
# the set
indices = cast(Dict[str, Union[IndexBase, str]], copy(ds.indices))
if PartitionIndex in types:
for pk in ds.partition_keys:
if pk not in indices:
indices[pk] = "dummy"
for e in sorted(elements):
if e not in indices:
raise ValueError(
'{tname} "{e}" is missing in dataset "{ktk_cube_dataset_id}".'.format(
tname=tname, e=e, ktk_cube_dataset_id=ktk_cube_dataset_id
)
)
idx = indices[e]
t2 = type(idx)
tname2 = t2.__name__
if (idx != "dummy") and (not isinstance(idx, types)):
raise ValueError(
'"{e}" in dataset "{ktk_cube_dataset_id}" is of type {tname2} but should be {tname}.'.format(
tname=tname,
tname2=tname2,
e=e,
ktk_cube_dataset_id=ktk_cube_dataset_id,
)
)
def check_datasets(
datasets: Dict[str, DatasetMetadata], cube: Cube
) -> Dict[str, DatasetMetadata]:
"""
    Apply sanity checks to persisted Kartothek datasets.
The following checks will be applied:
- seed dataset present
- metadata version correct
- only the cube-specific table is present
- partition keys are correct
- no overlapping payload columns exists
- datatypes are consistent
- dimension columns are present everywhere
- required index structures are present (more are allowed)
- ``PartitionIndex`` for every partition key
- for seed dataset, ``ExplicitSecondaryIndex`` for every dimension column
- for all datasets, ``ExplicitSecondaryIndex`` for every index column
Parameters
----------
datasets
Datasets.
cube
Cube specification.
Returns
-------
datasets: Dict[str, DatasetMetadata]
Same as input, but w/ partition indices loaded.
Raises
------
ValueError
If sanity check failed.
"""
if cube.seed_dataset not in datasets:
raise ValueError('Seed data ("{}") is missing.'.format(cube.seed_dataset))
_check_datasets(
datasets=datasets,
f=lambda ds: ds.metadata_version,
expected=KTK_CUBE_METADATA_VERSION,
what="metadata version",
)
datasets = {name: ds.load_partition_indices() for name, ds in datasets.items()}
_check_datasets(
datasets=datasets,
f=lambda ds: {ds.table_name},
expected={SINGLE_TABLE},
what="table",
)
_check_overlap(datasets, cube)
# check column types
validate_shared_columns([ds.schema for ds in datasets.values()])
_check_partition_columns(datasets, cube)
_check_dimension_columns(datasets, cube)
_check_indices(datasets, cube)
return datasets
def get_payload_subset(columns: Iterable[str], cube: Cube) -> Set[str]:
"""
Get payload column subset from a given set of columns.
Parameters
----------
columns
Columns.
cube
Cube specification.
Returns
-------
payload: Set[str]
Payload columns.
"""
return set(columns) - set(cube.dimension_columns) - set(cube.partition_columns)
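get_payload_subset is plain set subtraction: any column that is neither a dimension column nor a partition column counts as payload. With hypothetical column names:

```python
dimension_columns = {"x", "y"}
partition_columns = {"p"}
columns = {"x", "y", "p", "v1", "v2"}

# everything not used for dimensions or partitioning is payload
payload = set(columns) - dimension_columns - partition_columns
```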
def get_cube_payload(datasets: Dict[str, DatasetMetadata], cube: Cube) -> Set[str]:
"""
Get payload columns of the whole cube.
Parameters
----------
datasets
Datasets.
cube
Cube specification.
Returns
-------
payload: Set[str]
Payload columns.
"""
return reduce(
set.union,
(get_payload_subset(get_dataset_columns(ds), cube) for ds in datasets.values()),
set(),
) | /regallager-0.0.1.tar.gz/regallager-0.0.1/kartothek/api/consistency.py | 0.929128 | 0.369059 | consistency.py | pypi |
import collections
import inspect
import logging
from typing import Dict, Iterable, List, Optional, Union, overload
import decorator
import pandas as pd
from kartothek.core.dataset import DatasetMetadata, DatasetMetadataBase
from kartothek.core.factory import _ensure_factory
from kartothek.core.typing import StoreFactory, StoreInput
from kartothek.core.utils import ensure_store, lazy_store
try:
from typing_extensions import Literal # type: ignore
except ImportError:
from typing import Literal # type: ignore
signature = inspect.signature
LOGGER = logging.getLogger(__name__)
class InvalidObject:
"""
Sentinel to mark keys for removal
"""
pass
def combine_metadata(dataset_metadata: List[Dict], append_to_list: bool = True) -> Dict:
"""
Merge a list of dictionaries
    The merge is performed in such a way that keys whose values conflict and
    cannot be merged are dropped from the final result; keys present in only
    some of the dictionaries are kept (a missing key is treated as ``None``).
If lists are encountered, the values of the result will be the
concatenation of all list values in the order of the supplied dictionary list.
This behaviour may be changed by using append_to_list
Parameters
----------
dataset_metadata
The list of dictionaries (usually metadata) to be combined.
append_to_list
If True, all values are concatenated. If False, only unique values are kept
"""
meta = _combine_metadata(dataset_metadata, append_to_list)
return _remove_invalids(meta)
def _remove_invalids(dct):
if not isinstance(dct, dict):
return {}
new_dict = {}
for key, value in dct.items():
if isinstance(value, dict):
tmp = _remove_invalids(value)
# Do not propagate empty dicts
if tmp:
new_dict[key] = tmp
elif not isinstance(value, InvalidObject):
new_dict[key] = value
return new_dict
def _combine_metadata(dataset_metadata, append_to_list):
assert isinstance(dataset_metadata, list)
if len(dataset_metadata) == 1:
return dataset_metadata.pop()
# In case the input list has only two elements, we can do simple comparison
if len(dataset_metadata) > 2:
first = _combine_metadata(dataset_metadata[::2], append_to_list)
second = _combine_metadata(dataset_metadata[1::2], append_to_list)
final = _combine_metadata([first, second], append_to_list)
return final
else:
first = dataset_metadata.pop()
second = dataset_metadata.pop()
if first == second:
return first
# None is harmless and may occur if a key appears in one but not the other dict
elif first is None or second is None:
return first if first is not None else second
elif isinstance(first, dict) and isinstance(second, dict):
new_dict = {}
keys = set(first.keys())
keys.update(second.keys())
for key in keys:
new_dict[key] = _combine_metadata(
[first.get(key), second.get(key)], append_to_list
)
return new_dict
        elif isinstance(first, list) and isinstance(second, list):
            # list.extend returns None, so build the concatenation explicitly
            if append_to_list:
                return first + second
            else:
                return list(set(first + second))
else:
return InvalidObject()
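A condensed sketch of the pairwise merge rules above, restricted to scalars, lists and dicts. Invalid mirrors the InvalidObject sentinel that _remove_invalids later strips; the sorted dedupe branch is a simplification for determinism:

```python
class Invalid:
    """Sentinel marking keys whose values conflict and cannot be merged."""
    pass

def merge_two(first, second, append_to_list=True):
    if first == second:
        return first
    # a key missing from one dict arrives as None and is harmless
    if first is None or second is None:
        return first if first is not None else second
    if isinstance(first, dict) and isinstance(second, dict):
        return {key: merge_two(first.get(key), second.get(key), append_to_list)
                for key in set(first) | set(second)}
    if isinstance(first, list) and isinstance(second, list):
        return first + second if append_to_list else sorted(set(first + second))
    return Invalid()
```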
def _ensure_compatible_indices(
dataset: Optional[DatasetMetadataBase], secondary_indices: Iterable[str],
) -> List[str]:
if dataset:
ds_secondary_indices = sorted(dataset.secondary_indices.keys())
if secondary_indices and not set(secondary_indices).issubset(
ds_secondary_indices
):
raise ValueError(
f"Incorrect indices provided for dataset.\n"
f"Expected: {ds_secondary_indices}\n"
f"But got: {secondary_indices}"
)
return ds_secondary_indices
return sorted(secondary_indices)
def validate_partition_keys(
dataset_uuid, store, ds_factory, default_metadata_version, partition_on,
):
if ds_factory or DatasetMetadata.exists(dataset_uuid, ensure_store(store)):
ds_factory = _ensure_factory(
dataset_uuid=dataset_uuid, store=store, factory=ds_factory,
)
ds_metadata_version = ds_factory.metadata_version
if partition_on:
if not isinstance(partition_on, list):
partition_on = [partition_on]
if partition_on != ds_factory.partition_keys:
raise ValueError(
"Incompatible set of partition keys encountered. "
"Input partitioning was `{}` while actual dataset was `{}`".format(
partition_on, ds_factory.partition_keys
)
)
else:
partition_on = ds_factory.partition_keys
else:
ds_factory = None
ds_metadata_version = default_metadata_version
return ds_factory, ds_metadata_version, partition_on
_NORMALIZE_ARGS_LIST = [
"partition_on",
"delete_scope",
"secondary_indices",
"sort_partitions_by",
"bucket_by",
]
_NORMALIZE_ARGS = _NORMALIZE_ARGS_LIST + ["store", "dispatch_by"]
@overload
def normalize_arg(
arg_name: Literal[
"partition_on",
"delete_scope",
"secondary_indices",
"bucket_by",
"sort_partitions_by",
"dispatch_by",
],
old_value: None,
) -> None:
...
@overload
def normalize_arg(
arg_name: Literal[
"partition_on",
"delete_scope",
"secondary_indices",
"bucket_by",
"sort_partitions_by",
"dispatch_by",
],
old_value: Union[str, List[str]],
) -> List[str]:
...
@overload
def normalize_arg(
arg_name: Literal["store"], old_value: Optional[StoreInput]
) -> StoreFactory:
...
def normalize_arg(arg_name, old_value):
"""
Normalizes an argument according to pre-defined types
Type A:
* "partition_on"
* "delete_scope"
* "secondary_indices"
* "dispatch_by"
will be converted to a list. If it is None, an empty list will be created
Type B:
* "store"
Will be converted to a callable returning
:meta private:
"""
def _make_list(_args):
if isinstance(_args, (str, bytes, int, float)):
return [_args]
if _args is None:
return []
if isinstance(_args, (set, frozenset, dict)):
raise ValueError(
"{} is incompatible for normalisation.".format(type(_args))
)
return list(_args)
if arg_name in _NORMALIZE_ARGS_LIST:
if old_value is None:
return []
elif isinstance(old_value, list):
return old_value
else:
return _make_list(old_value)
elif arg_name == "dispatch_by":
if old_value is None:
return old_value
elif isinstance(old_value, list):
return old_value
else:
return _make_list(old_value)
elif arg_name == "store" and old_value is not None:
return lazy_store(old_value)
return old_value
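The list normalisation applied to partition_on, delete_scope, secondary_indices and friends, isolated from the dispatch logic: scalars become one-element lists, None becomes an empty list, and unordered containers are rejected outright:

```python
def make_list(args):
    """Normalise a scalar-or-iterable argument to a plain list."""
    if isinstance(args, (str, bytes, int, float)):
        return [args]
    if args is None:
        return []
    if isinstance(args, (set, frozenset, dict)):
        raise ValueError("{} is incompatible for normalisation.".format(type(args)))
    return list(args)
```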
@decorator.decorator
def normalize_args(function, *args, **kwargs):
sig = signature(function)
def _wrapper(*args, **kwargs):
for arg_name in _NORMALIZE_ARGS:
if arg_name in sig.parameters.keys():
ix = inspect.getfullargspec(function).args.index(arg_name)
if arg_name in kwargs:
kwargs[arg_name] = normalize_arg(arg_name, kwargs[arg_name])
elif len(args) > ix:
new_args = list(args)
new_args[ix] = normalize_arg(arg_name, args[ix])
args = tuple(new_args)
else:
kwargs[arg_name] = normalize_arg(arg_name, None)
return function(*args, **kwargs)
return _wrapper(*args, **kwargs)
def extract_duplicates(lst):
"""
Return all items of a list that occur more than once.
Parameters
----------
lst: List[Any]
Returns
-------
lst: List[Any]
"""
return [item for item, count in collections.Counter(lst).items() if count > 1]
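extract_duplicates in action on a throwaway list. Counter preserves first-seen order, so duplicates come back in order of first appearance:

```python
from collections import Counter

lst = ["a", "b", "a", "c", "b", "a"]
# keep only the items that occur more than once
dups = [item for item, count in Counter(lst).items() if count > 1]
```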
def align_categories(dfs, categoricals):
"""
Takes a list of dataframes with categorical columns and determines the superset
of categories. All specified columns will then be cast to the same `pd.CategoricalDtype`
Parameters
----------
dfs: List[pd.DataFrame]
A list of dataframes for which the categoricals should be aligned
categoricals: List[str]
Columns holding categoricals which should be aligned
Returns
-------
List[pd.DataFrame]
A list with aligned dataframes
"""
if len(categoricals) == 0:
return dfs
col_dtype = {}
for column in categoricals:
position_largest_df = None
categories = set()
largest_df_categories = set()
for ix, df in enumerate(dfs):
ser = df[column]
if not pd.api.types.is_categorical_dtype(ser):
cats = ser.dropna().unique()
LOGGER.info(
"Encountered non-categorical type where categorical was expected\n"
"Found at index position {ix} for column {col}\n"
"Dtypes: {dtypes}".format(ix=ix, col=column, dtypes=df.dtypes)
)
else:
cats = ser.cat.categories
length = len(df)
if position_largest_df is None or length > position_largest_df[0]:
position_largest_df = (length, ix)
if position_largest_df[1] == ix:
largest_df_categories = cats
categories.update(cats)
# use the categories of the largest DF as a baseline to avoid having
# to rewrite its codes. Append the remainder and sort it for reproducibility
categories = list(largest_df_categories) + sorted(
set(categories) - set(largest_df_categories)
)
cat_dtype = pd.api.types.CategoricalDtype(categories, ordered=False)
col_dtype[column] = cat_dtype
return_dfs = []
for df in dfs:
try:
new_df = df.astype(col_dtype, copy=False)
except ValueError as verr:
cat_types = {
col: dtype.categories.dtype for col, dtype in col_dtype.items()
}
# Should be fixed by pandas>=0.24.0
if "buffer source array is read-only" in str(verr):
new_df = df.astype(cat_types)
new_df = new_df.astype(col_dtype)
else:
raise verr
return_dfs.append(new_df)
return return_dfs
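A minimal pandas-only sketch of the alignment idea used above: build one shared `CategoricalDtype` seeded with the larger frame's categories (so its codes stay stable), then append the sorted remainder for reproducibility.

```python
import pandas as pd

# Build one shared dtype: categories of the larger frame first, then the
# sorted remainder contributed by the other frame.
df_a = pd.DataFrame({"col": pd.Categorical(["x", "y", "x"])})  # larger frame
df_b = pd.DataFrame({"col": pd.Categorical(["z", "y"])})
categories = list(df_a["col"].cat.categories) + sorted(
    set(df_b["col"].cat.categories) - set(df_a["col"].cat.categories)
)
shared = pd.CategoricalDtype(categories, ordered=False)
aligned = [df.astype({"col": shared}) for df in (df_a, df_b)]
print([list(df["col"].cat.categories) for df in aligned])  # [['x', 'y', 'z'], ['x', 'y', 'z']]
```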
def sort_values_categorical(
df: pd.DataFrame, columns: Union[List[str], str]
) -> pd.DataFrame:
"""
    Sort a dataframe lexicographically by the categories of the given `columns`
"""
if not isinstance(columns, list):
columns = [columns]
for col in columns:
if pd.api.types.is_categorical_dtype(df[col]):
            cat_accessor = df[col].cat
            df[col] = cat_accessor.reorder_categories(
                sorted(cat_accessor.categories), ordered=True
            )
return df.sort_values(by=columns).reset_index(drop=True)
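The effect of the lexicographic reordering above, sketched with a deliberately shuffled category order:

```python
import pandas as pd

# Categories start in non-lexicographic order; reordering them makes
# sort_values follow alphabetical order instead of the category order.
df = pd.DataFrame({"k": pd.Categorical(["b", "a", "c"], categories=["c", "b", "a"])})
cat_accessor = df["k"].cat
df["k"] = cat_accessor.reorder_categories(sorted(cat_accessor.categories), ordered=True)
out = df.sort_values(by=["k"]).reset_index(drop=True)
print(list(out["k"]))  # ['a', 'b', 'c']
```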
def raise_if_indices_overlap(partition_on, secondary_indices):
partition_secondary_overlap = set(partition_on) & set(secondary_indices)
if partition_secondary_overlap:
raise RuntimeError(
f"Cannot create secondary index on partition columns: {partition_secondary_overlap}"
        )

# --- kartothek/io_components/utils.py ---
from typing import Iterator, List, Optional, Set, Union, cast, overload
import pandas as pd
from kartothek.core.factory import DatasetFactory
from kartothek.core.typing import StoreInput
from kartothek.io_components.metapartition import MetaPartition
from kartothek.io_components.utils import normalize_args
from kartothek.serialization import (
PredicatesType,
check_predicates,
columns_in_predicates,
)
@overload
def dispatch_metapartitions_from_factory(
dataset_factory: DatasetFactory,
predicates: PredicatesType = None,
dispatch_by: None = None,
) -> Iterator[MetaPartition]:
...
@overload
def dispatch_metapartitions_from_factory(
dataset_factory: DatasetFactory, predicates: PredicatesType, dispatch_by: List[str],
) -> Iterator[List[MetaPartition]]:
...
@normalize_args
def dispatch_metapartitions_from_factory(
dataset_factory: DatasetFactory,
predicates: PredicatesType = None,
dispatch_by: Optional[List[str]] = None,
) -> Union[Iterator[MetaPartition], Iterator[List[MetaPartition]]]:
"""
:meta private:
"""
if dispatch_by is not None and not set(dispatch_by).issubset(
set(dataset_factory.index_columns)
):
raise RuntimeError(
f"Dispatch columns must be indexed.\nRequested index: {dispatch_by} but available index columns: {sorted(dataset_factory.index_columns)}"
)
check_predicates(predicates)
# Determine which indices need to be loaded.
index_cols: Set[str] = set()
if dispatch_by:
index_cols |= set(dispatch_by)
if predicates:
predicate_cols = set(columns_in_predicates(predicates))
predicate_index_cols = predicate_cols & set(dataset_factory.index_columns)
index_cols |= predicate_index_cols
for col in index_cols:
dataset_factory.load_index(col)
base_df = dataset_factory.get_indices_as_dataframe(
list(index_cols), predicates=predicates
)
if dispatch_by is not None:
base_df = cast(pd.DataFrame, base_df)
if len(dispatch_by) == 0:
merged_partitions = [((""), base_df)]
else:
            # Group the resulting MetaPartitions by partition keys or a subset of those keys
merged_partitions = base_df.groupby(
by=list(dispatch_by), sort=True, as_index=False
)
for group_name, group in merged_partitions:
if not isinstance(group_name, tuple):
group_name = (group_name,) # type: ignore
mps = []
logical_conjunction = list(
zip(dispatch_by, ["=="] * len(dispatch_by), group_name)
)
for label in group.index.unique():
mps.append(
MetaPartition.from_partition(
partition=dataset_factory.partitions[label],
metadata_version=dataset_factory.metadata_version,
schema=dataset_factory.schema,
partition_keys=dataset_factory.partition_keys,
logical_conjunction=logical_conjunction,
table_name=dataset_factory.table_name,
)
)
yield mps
else:
for part_label in base_df.index.unique():
part = dataset_factory.partitions[part_label]
yield MetaPartition.from_partition(
partition=part,
metadata_version=dataset_factory.metadata_version,
schema=dataset_factory.schema,
partition_keys=dataset_factory.partition_keys,
table_name=dataset_factory.table_name,
)
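The grouping step in the `dispatch_by` branch can be sketched with plain pandas; the tuple normalization makes the zip over `dispatch_by` work for both single- and multi-column grouping:

```python
import pandas as pd

# Group the index frame by the dispatch columns; labels sharing a group are
# dispatched together with a matching logical conjunction.
base_df = pd.DataFrame(
    {"p": ["a", "a", "b"], "label": ["l1", "l2", "l3"]}
).set_index("label")
dispatch_by = ["p"]
groups = []
for group_name, group in base_df.groupby(by=dispatch_by, sort=True):
    if not isinstance(group_name, tuple):
        group_name = (group_name,)  # single-column groups yield scalars
    conjunction = list(zip(dispatch_by, ["=="] * len(dispatch_by), group_name))
    groups.append((conjunction, sorted(group.index.unique())))
print(groups)
```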
def dispatch_metapartitions(
dataset_uuid: str,
store: StoreInput,
predicates: PredicatesType = None,
dispatch_by: Optional[List[str]] = None,
) -> Union[Iterator[MetaPartition], Iterator[List[MetaPartition]]]:
dataset_factory = DatasetFactory(
dataset_uuid=dataset_uuid,
store_factory=store,
load_schema=True,
load_all_indices=False,
)
return dispatch_metapartitions_from_factory(
dataset_factory=dataset_factory, predicates=predicates, dispatch_by=dispatch_by,
    )

# --- kartothek/io_components/read.py ---
from functools import partial
from typing import Dict, Iterable, List, Optional, cast
from simplekv import KeyValueStore
from kartothek.core import naming
from kartothek.core.common_metadata import (
SchemaWrapper,
read_schema_metadata,
store_schema_metadata,
validate_compatible,
)
from kartothek.core.dataset import DatasetMetadataBuilder
from kartothek.core.factory import DatasetFactory
from kartothek.core.index import ExplicitSecondaryIndex, IndexBase, PartitionIndex
from kartothek.core.typing import StoreFactory, StoreInput
from kartothek.core.utils import ensure_store
from kartothek.io_components.metapartition import (
SINGLE_TABLE,
MetaPartition,
MetaPartitionInput,
parse_input_to_metapartition,
partition_labels_from_mps,
)
from kartothek.io_components.utils import (
combine_metadata,
extract_duplicates,
sort_values_categorical,
)
from kartothek.serialization import DataFrameSerializer
SINGLE_CATEGORY = SINGLE_TABLE
def write_partition(
partition_df: MetaPartitionInput,
secondary_indices: List[str],
sort_partitions_by: List[str],
dataset_uuid: str,
partition_on: List[str],
store_factory: StoreFactory,
df_serializer: Optional[DataFrameSerializer],
metadata_version: int,
dataset_table_name: str = SINGLE_TABLE,
) -> MetaPartition:
"""
Write a dataframe to store, performing all necessary preprocessing tasks
like partitioning, bucketing (NotImplemented), indexing, etc. in the correct order.
"""
store = ensure_store(store_factory)
# I don't have access to the group values
mps = parse_input_to_metapartition(
partition_df, metadata_version=metadata_version, table_name=dataset_table_name,
)
if sort_partitions_by:
mps = mps.apply(partial(sort_values_categorical, columns=sort_partitions_by))
if partition_on:
mps = mps.partition_on(partition_on)
if secondary_indices:
mps = mps.build_indices(secondary_indices)
return mps.store_dataframes(
store=store, dataset_uuid=dataset_uuid, df_serializer=df_serializer
)
def persist_indices(
store: StoreInput, dataset_uuid: str, indices: Dict[str, IndexBase]
) -> Dict[str, str]:
store = ensure_store(store)
output_filenames = {}
for column, index in indices.items():
# backwards compat
if isinstance(index, dict):
legacy_storage_key = "{dataset_uuid}.{column}{suffix}".format(
dataset_uuid=dataset_uuid,
column=column,
suffix=naming.EXTERNAL_INDEX_SUFFIX,
)
index = ExplicitSecondaryIndex(
column=column, index_dct=index, index_storage_key=legacy_storage_key
)
elif isinstance(index, PartitionIndex):
continue
index = cast(ExplicitSecondaryIndex, index)
output_filenames[column] = index.store(store=store, dataset_uuid=dataset_uuid)
return output_filenames
def persist_common_metadata(
schemas: Iterable[SchemaWrapper],
update_dataset: Optional[DatasetFactory],
store: KeyValueStore,
dataset_uuid: str,
table_name: str,
):
if not schemas:
return None
schemas_set = set(schemas)
del schemas
if update_dataset:
schemas_set.add(
read_schema_metadata(
dataset_uuid=dataset_uuid, store=store, table=table_name
)
)
schemas_sorted = sorted(schemas_set, key=lambda s: sorted(s.origin))
try:
result = validate_compatible(schemas_sorted)
except ValueError as e:
raise ValueError(
"Schemas for dataset '{dataset_uuid}' are not compatible!\n\n{e}".format(
dataset_uuid=dataset_uuid, e=e
)
)
if result:
store_schema_metadata(
schema=result, dataset_uuid=dataset_uuid, store=store, table=table_name
)
return result
def store_dataset_from_partitions(
partition_list,
store: StoreInput,
dataset_uuid,
dataset_metadata=None,
metadata_merger=None,
update_dataset=None,
remove_partitions=None,
metadata_storage_format=naming.DEFAULT_METADATA_STORAGE_FORMAT,
):
store = ensure_store(store)
schemas = set()
if update_dataset:
dataset_builder = DatasetMetadataBuilder.from_dataset(update_dataset)
metadata_version = dataset_builder.metadata_version
table_name = update_dataset.table_name
schemas.add(update_dataset.schema)
else:
mp = next(iter(partition_list), None)
if mp is None:
raise ValueError(
"Cannot store empty datasets, partition_list must not be empty if in store mode."
)
table_name = mp.table_name
metadata_version = mp.metadata_version
dataset_builder = DatasetMetadataBuilder(
uuid=dataset_uuid,
metadata_version=metadata_version,
partition_keys=mp.partition_keys,
)
for mp in partition_list:
if mp.schema:
schemas.add(mp.schema)
dataset_builder.schema = persist_common_metadata(
schemas=schemas,
update_dataset=update_dataset,
store=store,
dataset_uuid=dataset_uuid,
table_name=table_name,
)
    # We can only check for non-unique partition labels here, and if they occur we
    # will fail hard. The resulting dataset may be corrupted, or files may be left
    # in the store without dataset metadata
partition_labels = partition_labels_from_mps(partition_list)
    # This could be safely removed since we no longer allow the user to set this.
    # It has implications on tests if mocks are used
non_unique_labels = extract_duplicates(partition_labels)
if non_unique_labels:
raise ValueError(
"The labels {} are duplicated. Dataset metadata was not written.".format(
", ".join(non_unique_labels)
)
)
if remove_partitions is None:
remove_partitions = []
if metadata_merger is None:
metadata_merger = combine_metadata
dataset_builder = update_metadata(
dataset_builder, metadata_merger, dataset_metadata
)
dataset_builder = update_partitions(
dataset_builder, partition_list, remove_partitions
)
dataset_builder = update_indices(
dataset_builder, store, partition_list, remove_partitions
)
if metadata_storage_format.lower() == "json":
store.put(*dataset_builder.to_json())
elif metadata_storage_format.lower() == "msgpack":
store.put(*dataset_builder.to_msgpack())
else:
raise ValueError(
"Unknown metadata storage format encountered: {}".format(
metadata_storage_format
)
)
dataset = dataset_builder.to_dataset()
return dataset
def update_metadata(dataset_builder, metadata_merger, dataset_metadata):
metadata_list = [dataset_builder.metadata]
new_dataset_metadata = metadata_merger(metadata_list)
dataset_metadata = dataset_metadata or {}
if callable(dataset_metadata):
dataset_metadata = dataset_metadata()
new_dataset_metadata.update(dataset_metadata)
for key, value in new_dataset_metadata.items():
dataset_builder.add_metadata(key, value)
return dataset_builder
def update_partitions(dataset_builder, add_partitions, remove_partitions):
for mp in add_partitions:
for mmp in mp:
if mmp.label is not None:
dataset_builder.explicit_partitions = True
dataset_builder.add_partition(mmp.label, mmp.partition)
for partition_name in remove_partitions:
del dataset_builder.partitions[partition_name]
return dataset_builder
def update_indices(dataset_builder, store, add_partitions, remove_partitions):
dataset_indices = dataset_builder.indices
partition_indices = MetaPartition.merge_indices(add_partitions)
if dataset_indices: # dataset already exists and will be updated
if remove_partitions:
for column, dataset_index in dataset_indices.items():
dataset_indices[column] = dataset_index.remove_partitions(
remove_partitions, inplace=True
)
for column, index in partition_indices.items():
dataset_indices[column] = dataset_indices[column].update(
index, inplace=True
)
else: # dataset index will be created first time from partitions
dataset_indices = partition_indices
# Store indices
index_filenames = persist_indices(
store=store, dataset_uuid=dataset_builder.uuid, indices=dataset_indices
)
for column, filename in index_filenames.items():
dataset_builder.add_external_index(column, filename)
return dataset_builder
def raise_if_dataset_exists(dataset_uuid, store):
try:
store_instance = ensure_store(store)
for form in ["msgpack", "json"]:
key = naming.metadata_key_from_uuid(uuid=dataset_uuid, format=form)
if key in store_instance:
                raise RuntimeError(
                    "Dataset `{}` already exists and overwrite is not permitted!".format(
                        dataset_uuid
                    )
                )
except KeyError:
        pass

# --- kartothek/io_components/write.py ---
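The existence probe in `raise_if_dataset_exists` boils down to checking both metadata key variants; a dict-backed sketch (the key naming scheme here is assumed for illustration, the real one lives in `kartothek.core.naming`):

```python
# Assumed naming helper for illustration only.
def metadata_key_from_uuid(uuid, format):
    return "{}.by-dataset-metadata.{}".format(uuid, format)

def dataset_exists(dataset_uuid, store):
    # Probe both supported metadata storage formats.
    return any(
        metadata_key_from_uuid(uuid=dataset_uuid, format=form) in store
        for form in ["msgpack", "json"]
    )

store = {metadata_key_from_uuid(uuid="ds1", format="json"): b"{}"}
print(dataset_exists("ds1", store), dataset_exists("ds2", store))  # True False
```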
import uuid
__all__ = ("assert_stores_different", "check_blocksize", "check_store_factory")
def check_store_factory(store):
"""
Check that given store is a factory.
Parameters
----------
store: Any
Store passed by the user.
Raises
------
TypeError: In case the store is not a factory.
"""
if not callable(store):
raise TypeError(
"store must be a factory but is {}".format(type(store).__name__)
)
def assert_stores_different(store1, store2, prefix):
"""
Check that given stores are different.
    This is a workaround for the fact that simplekv stores normally do not implement a sane equality check.
Parameters
----------
store1: Union[simplekv.KeyValueStore, Callable[[], simplekv.KeyValueStore]]
First store.
store2: Union[simplekv.KeyValueStore, Callable[[], simplekv.KeyValueStore]]
Second store, will be used to write a test key to.
prefix: str
Prefix to be used for the temporary key used for the equality check.
Raises
------
ValueError: If stores are considered to be identical.
"""
if callable(store1):
store1 = store1()
if callable(store2):
store2 = store2()
key = "{prefix}/.test_store_difference.{uuid}".format(
prefix=prefix, uuid=uuid.uuid4().hex
)
try:
store2.put(key, b"")
try:
store1.get(key)
raise ValueError("Stores are identical but should not be.")
except KeyError:
pass
finally:
try:
store2.delete(key)
except KeyError:
pass
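The probe above can be illustrated with dict-backed stand-ins for simplekv stores:

```python
import uuid

def stores_are_identical(store1, store2, prefix):
    # Write a throwaway key to store2 and see whether it shows up in store1.
    key = "{prefix}/.test_store_difference.{uuid}".format(
        prefix=prefix, uuid=uuid.uuid4().hex
    )
    try:
        store2[key] = b""
        return key in store1
    finally:
        store2.pop(key, None)

shared = {}
print(stores_are_identical(shared, shared, "cube"))  # True
print(stores_are_identical({}, {}, "cube"))  # False
```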
def check_blocksize(blocksize):
"""
Check that given blocksize is a positive integer.
Parameters
----------
blocksize: Any
Blocksize passed by the user.
Raises
------
TypeError: In case the blocksize is not an integer.
ValueError: In case the blocksize is < 0.
"""
if not isinstance(blocksize, int):
raise TypeError(
"blocksize must be an integer but is {}".format(type(blocksize).__name__)
)
if blocksize <= 0:
        raise ValueError("blocksize must be > 0 but is {}".format(blocksize))

# --- kartothek/io_components/cube/common.py ---
from __future__ import absolute_import
import copy
from kartothek.io_components.read import dispatch_metapartitions_from_factory
from kartothek.utils.ktk_adapters import (
get_physical_partition_stats,
metadata_factory_from_dataset,
)
__all__ = ("collect_stats_block", "get_metapartitions_for_stats", "reduce_stats")
def _fold_stats(result, stats, ktk_cube_dataset_id):
"""
Add stats together.
Parameters
----------
result: Dict[str, Dict[str, int]]
Result dictionary, may be empty or a result of a previous call to :meth:`_fold_stats`.
stats: Dict[str, int]
Statistics for a single dataset.
ktk_cube_dataset_id: str
Ktk_cube dataset ID for the given ``stats`` object.
Returns
-------
result: Dict[str, Dict[str, int]]
Result dictionary with ``stats`` added.
"""
result = copy.deepcopy(result)
if ktk_cube_dataset_id in result:
ref = result[ktk_cube_dataset_id]
for k, v in stats.items():
ref[k] += v
else:
result[ktk_cube_dataset_id] = stats
return result
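Usage sketch of `_fold_stats` as defined above: statistics dictionaries for the same dataset ID are summed key-wise, without mutating the input.

```python
import copy

def _fold_stats(result, stats, ktk_cube_dataset_id):
    # Add per-dataset statistics together on a deep copy of the accumulator.
    result = copy.deepcopy(result)
    if ktk_cube_dataset_id in result:
        ref = result[ktk_cube_dataset_id]
        for k, v in stats.items():
            ref[k] += v
    else:
        result[ktk_cube_dataset_id] = stats
    return result

acc = {}
acc = _fold_stats(acc, {"rows": 10, "files": 1}, "seed")
acc = _fold_stats(acc, {"rows": 5, "files": 2}, "seed")
print(acc)  # {'seed': {'rows': 15, 'files': 3}}
```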
def get_metapartitions_for_stats(datasets):
"""
Get all metapartitions that need to be scanned to gather cube stats.
Parameters
----------
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Datasets that are present.
Returns
-------
metapartitions: Tuple[Tuple[str, Tuple[kartothek.io_components.metapartition.MetaPartition, ...]], ...]
Pre-aligned metapartitions (by primary index / physical partitions) and the ktk_cube dataset ID belonging to them.
"""
all_metapartitions = []
for ktk_cube_dataset_id, ds in datasets.items():
dataset_factory = metadata_factory_from_dataset(ds)
for mp in dispatch_metapartitions_from_factory(
dataset_factory=dataset_factory, dispatch_by=dataset_factory.partition_keys
):
all_metapartitions.append((ktk_cube_dataset_id, mp))
return all_metapartitions
def collect_stats_block(metapartitions, store):
"""
Gather statistics data for multiple metapartitions.
Parameters
----------
metapartitions: Tuple[Tuple[str, Tuple[kartothek.io_components.metapartition.MetaPartition, ...]], ...]
Part of the result of :meth:`get_metapartitions_for_stats`.
store: Union[simplekv.KeyValueStore, Callable[[], simplekv.KeyValueStore]]
KV store.
Returns
-------
stats: Dict[str, Dict[str, int]]
Statistics per ktk_cube dataset ID.
"""
if callable(store):
store = store()
result = {}
for ktk_cube_dataset_id, mp in metapartitions:
stats = get_physical_partition_stats(mp, store)
result = _fold_stats(result, stats, ktk_cube_dataset_id)
return result
def reduce_stats(stats_iter):
"""
Sum-up stats data.
Parameters
----------
stats_iter: Iterable[Dict[str, Dict[str, int]]]
Iterable of stats objects, either resulting from :meth:`collect_stats_block` or previous :meth:`reduce_stats`
calls.
Returns
-------
stats: Dict[str, Dict[str, int]]
Statistics per ktk_cube dataset ID.
"""
result = {}
for sub in stats_iter:
for ktk_cube_dataset_id, stats in sub.items():
result = _fold_stats(result, stats, ktk_cube_dataset_id)
    return result

# --- kartothek/io_components/cube/stats.py ---
from __future__ import absolute_import
from copy import copy
from kartothek.api.discover import check_datasets, discover_datasets_unchecked
from kartothek.utils.ktk_adapters import get_dataset_keys
__all__ = ("get_copy_keys",)
def get_copy_keys(cube, src_store, tgt_store, overwrite, datasets=None):
"""
Get and check keys that should be copied from one store to another.
Parameters
----------
cube: kartothek.core.cube.cube.Cube
Cube specification.
src_store: Union[Callable[[], simplekv.KeyValueStore], simplekv.KeyValueStore]
Source KV store.
tgt_store: Union[Callable[[], simplekv.KeyValueStore], simplekv.KeyValueStore]
Target KV store.
overwrite: bool
If possibly existing datasets in the target store should be overwritten.
datasets: Union[None, Iterable[str], Dict[str, kartothek.core.dataset.DatasetMetadata]]
        Datasets to copy, must all be part of the cube. May be either the result of :func:`~kartothek.api.discover.discover_datasets`, an
        iterable of ktk_cube dataset IDs, or ``None`` (in which case the entire cube will be copied).
Returns
-------
keys: Set[str]
Set of keys to copy.
Raises
------
RuntimeError: In case the copy would not pass successfully or if there is no cube in ``src_store``.
"""
if not isinstance(datasets, dict):
new_datasets = discover_datasets_unchecked(
uuid_prefix=cube.uuid_prefix,
store=src_store,
filter_ktk_cube_dataset_ids=datasets,
)
else:
new_datasets = datasets
if datasets is None:
if not new_datasets:
raise RuntimeError("{} not found in source store".format(cube))
else:
unknown_datasets = set(datasets) - set(new_datasets)
if unknown_datasets:
raise RuntimeError(
"{cube}, datasets {datasets} do not exist in source store".format(
cube=cube, datasets=unknown_datasets
)
)
existing_datasets = discover_datasets_unchecked(cube.uuid_prefix, tgt_store)
if not overwrite:
for ktk_cube_dataset_id in sorted(new_datasets.keys()):
if ktk_cube_dataset_id in existing_datasets:
raise RuntimeError(
'Dataset "{uuid}" exists in target store but overwrite was set to False'.format(
uuid=new_datasets[ktk_cube_dataset_id].uuid
)
)
all_datasets = copy(existing_datasets)
all_datasets.update(new_datasets)
check_datasets(all_datasets, cube)
keys = set()
for ktk_cube_dataset_id in sorted(new_datasets.keys()):
ds = new_datasets[ktk_cube_dataset_id]
keys |= get_dataset_keys(ds)
    return keys

# --- kartothek/io_components/cube/copy.py ---
from functools import reduce
from kartothek.core.cube.conditions import Conjunction
from kartothek.core.cube.constants import KTK_CUBE_METADATA_VERSION
from kartothek.io_components.metapartition import MetaPartition
from kartothek.utils.converters import converter_str_set_optional
from kartothek.utils.ktk_adapters import get_partition_dataframe
__all__ = ("prepare_metapartitions_for_removal_action",)
def prepare_metapartitions_for_removal_action(
cube, store, conditions, ktk_cube_dataset_ids, existing_datasets
):
"""
Prepare MetaPartition to express removal of given data range from cube.
The MetaPartition must still be written using ``mp.store_dataframes(...)`` and added to the Dataset using a
kartothek update method.
Parameters
----------
cube: kartothek.core.cube.cube.Cube
Cube spec.
store: Union[simplekv.KeyValueStore, Callable[[], simplekv.KeyValueStore]]
Store.
conditions: Union[None, Condition, Iterable[Condition], Conjunction]
Conditions that should be applied, optional. Defaults to "entire cube".
ktk_cube_dataset_ids: Optional[Union[Iterable[str], str]]
        Ktk_cube dataset IDs to apply the remove action to, optional. Defaults to "all".
existing_datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Existing datasets.
Returns
-------
metapartitions: Dict[str, Tuple[kartothek.core.dataset.DatasetMetadata,
kartothek.io_components.metapartition.MetaPartition, List[Dict[str, Any]]]]
        MetaPartitions that should be written and updated in the kartothek datasets, as well as the ``delete_scope`` for
        kartothek.
"""
conditions = Conjunction(conditions)
conditions_split = conditions.split_by_column()
if set(conditions_split.keys()) - set(cube.partition_columns):
raise ValueError(
"Can only remove partitions with conditions concerning cubes physical partition columns."
)
ktk_cube_dataset_ids = converter_str_set_optional(ktk_cube_dataset_ids)
if ktk_cube_dataset_ids is not None:
unknown_dataset_ids = ktk_cube_dataset_ids - set(existing_datasets.keys())
if unknown_dataset_ids:
raise ValueError(
"Unknown ktk_cube_dataset_ids: {}".format(
", ".join(sorted(unknown_dataset_ids))
)
)
else:
ktk_cube_dataset_ids = set(existing_datasets.keys())
metapartitions = {}
for ktk_cube_dataset_id in ktk_cube_dataset_ids:
ds = existing_datasets[ktk_cube_dataset_id]
ds = ds.load_partition_indices()
mp = _prepare_mp_empty(ds)
if not ds.partition_keys:
# no partition keys --> delete all
delete_scope = [{}]
else:
df_partitions = get_partition_dataframe(dataset=ds, cube=cube)
df_partitions = df_partitions.drop_duplicates()
local_condition = reduce(
lambda a, b: a & b,
(
cond
for col, cond in conditions_split.items()
if col in df_partitions.columns
),
Conjunction([]),
)
df_partitions = local_condition.filter_df(df_partitions)
delete_scope = df_partitions.to_dict(orient="records")
metapartitions[ktk_cube_dataset_id] = (ds, mp, delete_scope)
return metapartitions
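The `delete_scope` derivation at the end of the function can be sketched with plain pandas; the boolean mask stands in for `Conjunction.filter_df`:

```python
import pandas as pd

# One physical-partition frame, filtered by the user conditions; each surviving
# row becomes one delete_scope entry.
df_partitions = pd.DataFrame({"p": ["2020", "2020", "2021"], "q": [1, 2, 1]})
df_partitions = df_partitions.drop_duplicates()
mask = df_partitions["p"] == "2020"  # stand-in for the condition filter
delete_scope = df_partitions[mask].to_dict(orient="records")
print(delete_scope)  # [{'p': '2020', 'q': 1}, {'p': '2020', 'q': 2}]
```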
def _prepare_mp_empty(dataset):
"""
    Generate an empty partition w/o any data for the given dataset.
Parameters
----------
dataset: kartothek.core.dataset.DatasetMetadata
Dataset to build empty MetaPartition for.
Returns
-------
mp: kartothek.io_components.metapartition.MetaPartition
MetaPartition, must still be added to the Dataset using a kartothek update method.
"""
return MetaPartition(
label=None,
metadata_version=KTK_CUBE_METADATA_VERSION,
partition_keys=dataset.partition_keys,
    )

# --- kartothek/io_components/cube/remove.py ---
import itertools
from copy import copy
from typing import Dict, Iterable, Optional, Sequence, Tuple
import dask.dataframe as dd
import pandas as pd
from pandas.api.types import is_sparse
from kartothek.api.consistency import check_datasets, get_payload_subset
from kartothek.core.common_metadata import store_schema_metadata
from kartothek.core.cube.constants import (
KTK_CUBE_METADATA_DIMENSION_COLUMNS,
KTK_CUBE_METADATA_KEY_IS_SEED,
KTK_CUBE_METADATA_PARTITION_COLUMNS,
KTK_CUBE_METADATA_SUPPRESS_INDEX_ON,
KTK_CUBE_METADATA_VERSION,
)
from kartothek.core.cube.cube import Cube
from kartothek.core.dataset import DatasetMetadataBuilder
from kartothek.core.naming import metadata_key_from_uuid
from kartothek.core.uuid import gen_uuid
from kartothek.io_components.metapartition import MetaPartition
from kartothek.utils.converters import converter_str
from kartothek.utils.pandas import mask_sorted_duplicates_keep_last, sort_dataframe
__all__ = (
"apply_postwrite_checks",
"check_datasets_prebuild",
"check_datasets_preextend",
"check_provided_metadata_dict",
"multiplex_user_input",
"prepare_data_for_ktk",
"prepare_ktk_metadata",
"prepare_ktk_partition_on",
)
def check_provided_metadata_dict(metadata, ktk_cube_dataset_ids):
"""
Check metadata dict provided by the user.
Parameters
----------
metadata: Optional[Dict[str, Dict[str, Any]]]
Optional metadata provided by the user.
ktk_cube_dataset_ids: Iterable[str]
ktk_cube_dataset_ids announced by the user.
Returns
-------
metadata: Dict[str, Dict[str, Any]]
Metadata provided by the user.
Raises
------
TypeError: If either the dict or one of the contained values has the wrong type.
ValueError: If a ktk_cube_dataset_id in the dict is not in ktk_cube_dataset_ids.
"""
if metadata is None:
metadata = {}
elif not isinstance(metadata, dict):
raise TypeError(
"Provided metadata should be a dict but is {}".format(
type(metadata).__name__
)
)
unknown_ids = set(metadata.keys()) - set(ktk_cube_dataset_ids)
if unknown_ids:
raise ValueError(
"Provided metadata for otherwise unspecified ktk_cube_dataset_ids: {}".format(
", ".join(sorted(unknown_ids))
)
)
# sorted iteration for deterministic error messages
for k in sorted(metadata.keys()):
v = metadata[k]
if not isinstance(v, dict):
raise TypeError(
"Provided metadata for dataset {} should be a dict but is {}".format(
k, type(v).__name__
)
)
return metadata
def prepare_ktk_metadata(cube, ktk_cube_dataset_id, metadata):
"""
Prepare metadata that should be passed to Kartothek.
This will add the following information:
- a flag indicating whether the dataset is considered a seed dataset
- dimension columns
- partition columns
- optional user-provided metadata
Parameters
----------
cube: kartothek.core.cube.cube.Cube
Cube specification.
ktk_cube_dataset_id: str
Ktk_cube dataset UUID (w/o cube prefix).
metadata: Optional[Dict[str, Dict[str, Any]]]
Optional metadata provided by the user. The first key is the ktk_cube dataset id,
the value is the user-level metadata for that dataset. Should be piped through
:meth:`check_provided_metadata_dict` beforehand.
Returns
-------
ktk_metadata: Dict[str, Any]
Metadata ready for Kartothek.
"""
if metadata is None:
metadata = {}
ds_metadata = metadata.get(ktk_cube_dataset_id, {})
ds_metadata[KTK_CUBE_METADATA_DIMENSION_COLUMNS] = list(cube.dimension_columns)
ds_metadata[KTK_CUBE_METADATA_KEY_IS_SEED] = (
ktk_cube_dataset_id == cube.seed_dataset
)
ds_metadata[KTK_CUBE_METADATA_PARTITION_COLUMNS] = list(cube.partition_columns)
ds_metadata[KTK_CUBE_METADATA_SUPPRESS_INDEX_ON] = list(cube.suppress_index_on)
return ds_metadata
# XXX: This is not consistent with plain kartothek datasets (indices are accepted on nullable columns there)
def assert_dimesion_index_cols_notnull(
df: pd.DataFrame, ktk_cube_dataset_id: str, cube: Cube, partition_on: Sequence[str]
) -> pd.DataFrame:
"""
    Assert that index and dimension columns are not NULL and raise an appropriate error if so.
.. note::
        Indices for plain non-cube datasets drop nulls during index build!
"""
df_columns_set = set(df.columns)
dcols_present = set(cube.dimension_columns) & df_columns_set
icols_present = set(cube.index_columns) & df_columns_set
for cols, what in (
(partition_on, "partition"),
(dcols_present, "dimension"),
(icols_present, "index"),
):
for col in sorted(cols):
if df[col].isnull().any():
raise ValueError(
'Found NULL-values in {what} column "{col}" of dataset "{ktk_cube_dataset_id}"'.format(
col=col, ktk_cube_dataset_id=ktk_cube_dataset_id, what=what
)
)
return df
def check_user_df(ktk_cube_dataset_id, df, cube, existing_payload, partition_on):
"""
Check user-provided DataFrame for sanity.
Parameters
----------
ktk_cube_dataset_id: str
Ktk_cube dataset UUID (w/o cube prefix).
df: Optional[pandas.DataFrame]
DataFrame to be passed to Kartothek.
cube: kartothek.core.cube.cube.Cube
Cube specification.
existing_payload: Set[str]
Existing payload columns.
partition_on: Iterable[str]
Partition-on attribute for given dataset.
Raises
------
ValueError
In case anything is fishy.
"""
if df is None:
return
if not (isinstance(df, pd.DataFrame) or isinstance(df, dd.DataFrame)):
raise TypeError(
            'Provided DataFrame is not a pandas or dask DataFrame, but is a "{t}"'.format(
t=type(df).__name__
)
)
if any(is_sparse(dtype) for dtype in df.dtypes):
raise TypeError("Sparse data is not supported.")
# call this once since `df.columns` can be quite slow
df_columns = list(df.columns)
df_columns_set = set(df_columns)
dcols_present = set(cube.dimension_columns) & df_columns_set
if len(df_columns) != len(df_columns_set):
raise ValueError(
'Duplicate columns found in dataset "{ktk_cube_dataset_id}": {df_columns}'.format(
ktk_cube_dataset_id=ktk_cube_dataset_id,
df_columns=", ".join(df_columns),
)
)
if ktk_cube_dataset_id == cube.seed_dataset:
missing_dimension_columns = set(cube.dimension_columns) - df_columns_set
if missing_dimension_columns:
raise ValueError(
'Missing dimension columns in seed data "{ktk_cube_dataset_id}": {missing_dimension_columns}'.format(
ktk_cube_dataset_id=ktk_cube_dataset_id,
missing_dimension_columns=", ".join(
sorted(missing_dimension_columns)
),
)
)
else:
if len(dcols_present) == 0:
raise ValueError(
'Dataset "{ktk_cube_dataset_id}" must have at least 1 of the following dimension columns: {dims}'.format(
ktk_cube_dataset_id=ktk_cube_dataset_id,
dims=", ".join(cube.dimension_columns),
)
)
missing_partition_columns = set(partition_on) - df_columns_set
if missing_partition_columns:
raise ValueError(
'Missing partition columns in dataset "{ktk_cube_dataset_id}": {missing_partition_columns}'.format(
ktk_cube_dataset_id=ktk_cube_dataset_id,
missing_partition_columns=", ".join(sorted(missing_partition_columns)),
)
)
    # Factor this check out. All others can be performed on the dask.DataFrame.
    # This one can only be executed on a pandas DataFrame.
if isinstance(df, pd.DataFrame):
assert_dimesion_index_cols_notnull(
ktk_cube_dataset_id=ktk_cube_dataset_id,
df=df,
cube=cube,
partition_on=partition_on,
)
payload = get_payload_subset(df.columns, cube)
payload_overlap = payload & existing_payload
if payload_overlap:
raise ValueError(
'Payload written in "{ktk_cube_dataset_id}" is already present in cube: {payload_overlap}'.format(
ktk_cube_dataset_id=ktk_cube_dataset_id,
payload_overlap=", ".join(sorted(payload_overlap)),
)
)
unspecified_partition_columns = (df_columns_set - set(partition_on)) & set(
cube.partition_columns
)
if unspecified_partition_columns:
raise ValueError(
f"Unspecified but provided partition columns in {ktk_cube_dataset_id}: "
f"{', '.join(sorted(unspecified_partition_columns))}"
)
def _check_duplicates(ktk_cube_dataset_id, df, sort_keys, cube):
dup_mask = mask_sorted_duplicates_keep_last(df, sort_keys)
if dup_mask.any():
df_with_dups = df.iloc[dup_mask]
example_row = df_with_dups.iloc[0]
df_dup = df.loc[(df.loc[:, sort_keys] == example_row[sort_keys]).all(axis=1)]
cols_id = set(df_dup.columns[df_dup.nunique() == 1])
cols_show_id = cols_id - set(sort_keys)
cols_show_nonid = set(df.columns) - cols_id
raise ValueError(
f'Found duplicate cells by [{", ".join(sorted(sort_keys))}] in dataset "{ktk_cube_dataset_id}", example:\n'
f"\n"
f"Keys:\n"
f"{example_row[sorted(sort_keys)].to_string()}\n"
f"\n"
f"Identical Payload:\n"
f'{example_row[sorted(cols_show_id)].to_string() if cols_show_id else "n/a"}\n'
f"\n"
            f'Non-Identical Payload:\n{df_dup[sorted(cols_show_nonid)].to_string() if cols_show_nonid else "n/a"}'
)
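The duplicate detection can be sketched with a pandas stand-in for `mask_sorted_duplicates_keep_last` (assumed to flag every row but the last one per sort key):

```python
import pandas as pd

# Rows 0 and 1 share the key (p, x) == ('a', 1); keep='last' flags row 0 only.
df = pd.DataFrame({"x": [1, 1, 2], "p": ["a", "a", "a"], "v": [10, 11, 12]})
sort_keys = ["p", "x"]
dup_mask = df.duplicated(subset=sort_keys, keep="last").to_numpy()
print(list(dup_mask))  # [True, False, False]
```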
def prepare_data_for_ktk(
df, ktk_cube_dataset_id, cube, existing_payload, partition_on, consume_df=False
):
"""
Prepare data so it can be handed over to Kartothek.
Some checks will be applied to the data to ensure it is sane.
Parameters
----------
df: pandas.DataFrame
DataFrame to be passed to Kartothek.
ktk_cube_dataset_id: str
Ktk_cube dataset UUID (w/o cube prefix).
cube: kartothek.core.cube.cube.Cube
Cube specification.
existing_payload: Set[str]
Existing payload columns.
partition_on: Iterable[str]
Partition-on attribute for given dataset.
consume_df: bool
Whether the incoming DataFrame can be destroyed while processing it.
Returns
-------
mp: kartothek.io_components.metapartition.MetaPartition
Kartothek-ready MetaPartition, may be sentinel (aka empty and w/o label).
Raises
------
ValueError
In case anything is fishy.
"""
check_user_df(ktk_cube_dataset_id, df, cube, existing_payload, partition_on)
if (df is None) or df.empty:
# fast-path for empty DF
return MetaPartition(
label=None,
metadata_version=KTK_CUBE_METADATA_VERSION,
partition_keys=list(partition_on),
)
# TODO: find a more elegant solution that works w/o copy
df_orig = df
df = df.copy()
if consume_df:
# the original df is still referenced in the parent scope, so drop it
df_orig.drop(columns=df_orig.columns, index=df_orig.index, inplace=True)
df_columns = list(df.columns)
df_columns_set = set(df_columns)
# normalize value order and reset index
sort_keys = [
col
for col in itertools.chain(cube.partition_columns, cube.dimension_columns)
if col in df_columns_set
]
df = sort_dataframe(df=df, columns=sort_keys)
# check duplicate cells
_check_duplicates(ktk_cube_dataset_id, df, sort_keys, cube)
# check+convert column names to unicode strings
df.rename(columns={c: converter_str(c) for c in df_columns}, inplace=True)
# create MetaPartition object for easier handling
mp = MetaPartition(
label=gen_uuid(), data=df, metadata_version=KTK_CUBE_METADATA_VERSION,
)
del df
# partition data
mp = mp.partition_on(list(partition_on))
# reset indices again (because partition_on breaks it)
for mp2 in mp:
mp2.data.reset_index(drop=True, inplace=True)
del mp2
# calculate indices
indices_to_build = set(cube.index_columns) & df_columns_set
if ktk_cube_dataset_id == cube.seed_dataset:
indices_to_build |= set(cube.dimension_columns) - set(cube.suppress_index_on)
indices_to_build -= set(partition_on)
mp = mp.build_indices(indices_to_build)
return mp
def multiplex_user_input(data, cube):
"""
Get input from the user and ensure it's a multi-dataset dict.
Parameters
----------
data: Union[pandas.DataFrame, Dict[str, pandas.DataFrame]]
User input.
cube: kartothek.core.cube.cube.Cube
Cube specification.
Returns
-------
pipeline_input: Dict[str, pandas.DataFrame]
Input for write pipelines.
"""
if not isinstance(data, dict):
data = {cube.seed_dataset: data}
return data
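``multiplex_user_input`` in miniature: anything that is not already a dict is treated as the seed dataset. A self-contained sketch, with a plain list standing in for a ``pandas.DataFrame``:

```python
def multiplex(data, seed_dataset):
    # A bare table is interpreted as the cube's seed dataset.
    if not isinstance(data, dict):
        return {seed_dataset: data}
    return data


table = [("x", 1)]
print(multiplex(table, "seed"))              # {'seed': [('x', 1)]}
print(multiplex({"enrich": table}, "seed"))  # dict input is passed through unchanged
```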
class MultiTableCommitAborted(RuntimeError):
"""An Error occured during the commit of a MultiTable dataset (Cube) causing a rollback."""
def apply_postwrite_checks(datasets, cube, store, existing_datasets):
"""
Apply sanity checks that can only be done after Kartothek has written its datasets.
Parameters
----------
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Datasets that just got written.
cube: kartothek.core.cube.cube.Cube
Cube specification.
store: Union[Callable[[], simplekv.KeyValueStore], simplekv.KeyValueStore]
KV store.
existing_datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Datasets that were present before the write procedure started.
Returns
-------
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Datasets that just got written.
Raises
------
ValueError
If sanity check failed.
"""
try:
empty_datasets = {
ktk_cube_dataset_id
for ktk_cube_dataset_id, ds in datasets.items()
if len(ds.partitions) == 0
}
if empty_datasets:
raise ValueError(
"Cannot write empty datasets: {empty_datasets}".format(
empty_datasets=", ".join(sorted(empty_datasets))
)
)
datasets_to_check = copy(existing_datasets)
datasets_to_check.update(datasets)
check_datasets(datasets_to_check, cube)
except Exception as e:
_rollback_transaction(
existing_datasets=existing_datasets, new_datasets=datasets, store=store
)
raise MultiTableCommitAborted(
"Post commit check failed. Operation rolled back."
) from e
return datasets
def check_datasets_prebuild(ktk_cube_dataset_ids, cube, existing_datasets):
"""
Check if given dataset UUIDs can be used to build a given cube, to be used before any write operation is performed.
The following checks will be applied:
- the seed dataset must be part of the data
- no leftovers (non-seed datasets) must be present that are not overwritten
Parameters
----------
ktk_cube_dataset_ids: Iterable[str]
Dataset IDs that should be written.
cube: kartothek.core.cube.cube.Cube
Cube specification.
existing_datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
        Datasets that existed before the write process started.
Raises
------
ValueError
In case of an error.
"""
if cube.seed_dataset not in ktk_cube_dataset_ids:
raise ValueError('Seed data ("{}") is missing.'.format(cube.seed_dataset))
missing_overwrites = set(existing_datasets.keys()) - set(ktk_cube_dataset_ids)
if missing_overwrites:
raise ValueError(
"Following datasets exists but are not overwritten (partial overwrite), this is not allowed: {}".format(
", ".join(sorted(missing_overwrites))
)
)
def check_datasets_preextend(ktk_cube_dataset_ids, cube):
"""
Check if given dataset UUIDs can be used to extend a given cube, to be used before any write operation is performed.
The following checks will be applied:
- the seed dataset of the cube must not be touched
    .. warning::
It is assumed that Kartothek checks if the ``overwrite`` flags are correct. Therefore, modifications of non-seed
datasets are NOT checked here.
Parameters
----------
ktk_cube_dataset_ids: Iterable[str]
Dataset IDs that should be written.
cube: kartothek.core.cube.cube.Cube
Cube specification.
Raises
------
ValueError
In case of an error.
"""
if cube.seed_dataset in ktk_cube_dataset_ids:
raise ValueError(
'Seed data ("{}") cannot be written during extension.'.format(
cube.seed_dataset
)
)
def _rollback_transaction(existing_datasets, new_datasets, store):
"""
    Rollback changes made during the write process.
Parameters
----------
existing_datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
        Datasets that existed before the write process started.
new_datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
        Datasets that were created / changed during the write process.
store: Union[Callable[[], simplekv.KeyValueStore], simplekv.KeyValueStore]
KV store.
"""
if callable(store):
store = store()
    # delete newly created datasets that were not present before the "transaction"
for ktk_cube_dataset_id in sorted(set(new_datasets) - set(existing_datasets)):
store.delete(metadata_key_from_uuid(new_datasets[ktk_cube_dataset_id].uuid))
# recover changes of old datasets
for ktk_cube_dataset_id in sorted(set(new_datasets) & set(existing_datasets)):
ds = existing_datasets[ktk_cube_dataset_id]
builder = DatasetMetadataBuilder.from_dataset(ds)
store.put(*builder.to_json())
store_schema_metadata(
schema=ds.schema, dataset_uuid=ds.uuid, store=store, table=ds.table_name
)
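The rollback above in miniature: keys created during the failed "transaction" are deleted, and keys that existed before are restored from their snapshot. A plain dict stands in for the simplekv store; names are illustrative, not the kartothek API:

```python
def rollback(store, existing, new):
    # delete newly created keys that were not present before
    for key in set(new) - set(existing):
        del store[key]
    # restore previously existing keys from the snapshot
    for key in set(new) & set(existing):
        store[key] = existing[key]


store = {"a": "a-modified", "b": "b-new"}
rollback(store, existing={"a": "a-old"}, new={"a": None, "b": None})
print(store)  # {'a': 'a-old'}
```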
def prepare_ktk_partition_on(
cube: Cube,
ktk_cube_dataset_ids: Iterable[str],
partition_on: Optional[Dict[str, Iterable[str]]],
) -> Dict[str, Tuple[str, ...]]:
"""
Prepare ``partition_on`` values for kartothek.
Parameters
----------
cube:
Cube specification.
ktk_cube_dataset_ids:
ktk_cube_dataset_ids announced by the user.
partition_on:
        Optional partition-on attributes for datasets.
Returns
-------
partition_on: Dict
Partition-on per dataset.
Raises
------
ValueError: In case user-provided values are invalid.
"""
if partition_on is None:
partition_on = {}
default = cube.partition_columns
result = {}
for ktk_cube_dataset_id in ktk_cube_dataset_ids:
po = tuple(partition_on.get(ktk_cube_dataset_id, default))
if ktk_cube_dataset_id == cube.seed_dataset:
if po != default:
raise ValueError(
f"Seed dataset {ktk_cube_dataset_id} must have the following, fixed partition-on attribute: "
f"{', '.join(default)}"
)
if len(set(po)) != len(po):
raise ValueError(
f"partition-on attribute of dataset {ktk_cube_dataset_id} contains duplicates: {', '.join(po)}"
)
result[ktk_cube_dataset_id] = po
    return result

# ---- end of kartothek/io_components/cube/write.py ----
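``prepare_ktk_partition_on`` in sketch form: every dataset defaults to the cube's partition columns, the seed dataset may not deviate from that default, and duplicate columns are rejected. Stdlib-only; function and parameter names are illustrative:

```python
def resolve_partition_on(seed, dataset_ids, cube_partition_columns, overrides=None):
    overrides = overrides or {}
    default = tuple(cube_partition_columns)
    result = {}
    for ds_id in dataset_ids:
        po = tuple(overrides.get(ds_id, default))
        if ds_id == seed and po != default:
            raise ValueError(f"seed dataset {ds_id!r} must be partitioned by {default}")
        if len(set(po)) != len(po):
            raise ValueError(f"duplicate partition-on columns for {ds_id!r}: {po}")
        result[ds_id] = po
    return result


print(resolve_partition_on("seed", ["seed", "enrich"], ["p"], {"enrich": ["q"]}))
# {'seed': ('p',), 'enrich': ('q',)}
```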
import typing
import attr
import pandas as pd
from kartothek.io_components.metapartition import MetaPartition
from kartothek.utils.converters import converter_str
from kartothek.utils.pandas import (
concat_dataframes,
drop_sorted_duplicates_keep_last,
sort_dataframe,
)
__all__ = ("QueryGroup", "load_group", "quick_concat")
@attr.s(frozen=True)
class QueryGroup:
"""
Query group, aka logical partition w/ all kartothek metapartition and information required to load the data.
Parameters
----------
    metapartitions: Dict[int, Dict[str, Tuple[kartothek.io_components.metapartition.MetaPartition, ...]]]
Mapping from partition ID to metapartitions per dataset ID.
    load_columns: Dict[str, Tuple[str, ...]]
Columns to load.
output_columns: Tuple[str, ...]
Tuple of columns that will be returned from the query API.
predicates: Dict[str, Tuple[Tuple[Tuple[str, str, Any], ...], ...]]
Predicates for each dataset ID.
empty_df: Dict[str, pandas.DataFrame]
Empty DataFrame for each dataset ID.
dimension_columns: Tuple[str, ...]
Dimension columns, used for de-duplication and to join data.
restrictive_dataset_ids: Set[str]
Datasets (by Ktk_cube dataset ID) that are restrictive during the join process.
"""
metapartitions = attr.ib(
type=typing.Dict[int, typing.Dict[str, typing.Tuple[MetaPartition, ...]]]
)
load_columns = attr.ib(type=typing.Dict[str, typing.Tuple[str, ...]])
output_columns = attr.ib(type=typing.Tuple[str, ...])
predicates = attr.ib(
type=typing.Dict[
str,
typing.Tuple[typing.Tuple[typing.Tuple[str, str, typing.Any], ...], ...],
]
)
empty_df = attr.ib(type=typing.Dict[str, pd.DataFrame])
dimension_columns = attr.ib(type=typing.Tuple[str, ...])
restrictive_dataset_ids = attr.ib(type=typing.Set[str])
def _load_all_mps(mps, store, load_columns, predicates, empty):
"""
Load kartothek_cube-relevant data from all given MetaPartitions.
    The result will be a concatenated DataFrame.
Parameters
----------
mps: Iterable[MetaPartition]
MetaPartitions to load.
store: simplekv.KeyValueStore
Store to load data from.
load_columns: List[str]
Columns to load.
predicates: Optional[List[List[Tuple[str, str, Any]]]]
Predicates to apply during load.
empty: pandas.DataFrame
        Empty DataFrame dummy.
Returns
-------
df: pandas.DataFrame
Concatenated data.
"""
dfs_mp = []
for mp in mps:
mp = mp.load_dataframes(
store=store,
predicate_pushdown_to_io=True,
columns=sorted(load_columns),
predicates=predicates,
)
df = mp.data
df.columns = df.columns.map(converter_str)
dfs_mp.append(df)
return concat_dataframes(dfs_mp, empty)
def _load_partition_dfs(cube, group, partition_mps, store):
"""
    Load partition DataFrames for seed, restrictive and other data.
The information about the merge strategy (seed, restricting, others) is taken from ``group``.
Parameters
----------
cube: Cube
Cube spec.
group: QueryGroup
Query group.
partition_mps: Dict[str, Iterable[MetaPartition]]
MetaPartitions for every dataset in this partition.
store: simplekv.KeyValueStore
Store to load data from.
Returns
-------
df_seed: pandas.DataFrame
Seed data.
dfs_restrict: List[pandas.DataFrame]
Restrictive data (for inner join).
dfs_other: List[pandas.DataFrame]
Other data (for left join).
"""
df_seed = None
dfs_restrict = []
dfs_other = []
for ktk_cube_dataset_id, empty in group.empty_df.items():
mps = partition_mps.get(ktk_cube_dataset_id, [])
df = _load_all_mps(
mps=mps,
store=store,
load_columns=list(group.load_columns[ktk_cube_dataset_id]),
predicates=group.predicates.get(ktk_cube_dataset_id, None),
empty=empty,
)
# de-duplicate and sort data
# PERF: keep order of dimensionality identical to group.dimension_columns
df_cols = set(df.columns)
dimensionality = [c for c in group.dimension_columns if c in df_cols]
df = sort_dataframe(df=df, columns=dimensionality)
df = drop_sorted_duplicates_keep_last(df, dimensionality)
if ktk_cube_dataset_id == cube.seed_dataset:
assert df_seed is None
df_seed = df
elif ktk_cube_dataset_id in group.restrictive_dataset_ids:
dfs_restrict.append(df)
else:
dfs_other.append(df)
assert df_seed is not None
return df_seed, dfs_restrict, dfs_other
def _load_partition(cube, group, partition_mps, store):
"""
Load partition and merge partition data within given QueryGroup.
The information about the merge strategy (seed, restricting, others) is taken from ``group``.
Parameters
----------
cube: Cube
Cube spec.
group: QueryGroup
Query group.
partition_mps: Dict[str, Iterable[MetaPartition]]
MetaPartitions for every dataset in this partition.
store: simplekv.KeyValueStore
Store to load data from.
Returns
-------
df: pandas.DataFrame
Merged data.
"""
# MEMORY: keep the DF references only as long as they are required:
# - use only 1 "intermediate result variable" called df_partition
# - consume the DFs lists (dfs_restrict, dfs_other) while iterating over them
df_partition, dfs_restrict, dfs_other = _load_partition_dfs(
cube=cube, group=group, partition_mps=partition_mps, store=store
)
while dfs_restrict:
df_partition = df_partition.merge(dfs_restrict.pop(0), how="inner")
while dfs_other:
df_partition = df_partition.merge(dfs_other.pop(0), how="left")
return df_partition.loc[:, list(group.output_columns)]
def load_group(group, store, cube):
"""
Load :py:class:`QueryGroup` and return DataFrame.
Parameters
----------
group: QueryGroup
Query group.
store: Union[Callable[[], simplekv.KeyValueStore], simplekv.KeyValueStore]
Store to load data from.
cube: kartothek.core.cube.cube.Cube
Cube specification.
Returns
-------
df: pandas.DataFrame
        DataFrame, may be empty.
"""
if callable(store):
store = store()
partition_results = []
for partition_id in sorted(group.metapartitions.keys()):
partition_results.append(
_load_partition(
cube=cube,
group=group,
partition_mps=group.metapartitions[partition_id],
store=store,
)
)
# concat all partitions
return quick_concat(
dfs=partition_results,
dimension_columns=group.dimension_columns,
partition_columns=cube.partition_columns,
)
def quick_concat(dfs, dimension_columns, partition_columns):
"""
Fast version of::
pd.concat(
dfs,
ignore_index=True,
sort=False,
).sort_values(dimension_columns + partition_columns).reset_index(drop=True)
if inputs are presorted.
Parameters
    ----------
dfs: Iterable[pandas.DataFrame]
DataFrames to concat.
dimension_columns: Iterable[str]
Dimension columns in correct order.
partition_columns: Iterable[str]
Partition columns in correct order.
Returns
-------
df: pandas.DataFrame
Concatenated result.
"""
return sort_dataframe(
df=concat_dataframes(dfs),
columns=list(dimension_columns) + list(partition_columns),
    )

# ---- end of kartothek/io_components/cube/query/_group.py ----
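``quick_concat``'s contract in miniature: concatenate blocks, then order by the combined dimension + partition key. Plain lists of dicts stand in for the presorted DataFrames; this is an illustration, not the kartothek implementation:

```python
def quick_concat_rows(blocks, dimension_columns, partition_columns):
    sort_cols = list(dimension_columns) + list(partition_columns)
    rows = [row for block in blocks for row in block]
    return sorted(rows, key=lambda row: tuple(row[c] for c in sort_cols))


a = [{"d": 1, "p": "x"}, {"d": 3, "p": "x"}]
b = [{"d": 2, "p": "x"}]
print(quick_concat_rows([a, b], ["d"], ["p"]))
# rows come back ordered by d: 1, 2, 3
```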
import itertools
import typing
from functools import reduce
import attr
import pandas as pd
import pyarrow as pa
from kartothek.core.cube.conditions import Conjunction
from kartothek.serialization._parquet import _normalize_value
from kartothek.utils.converters import converter_str_set, converter_str_tupleset
from kartothek.utils.ktk_adapters import get_dataset_columns
__all__ = ("QueryIntention", "determine_intention")
def _process_dimension_columns(dimension_columns, cube):
"""
Process and check given dimension columns.
Parameters
----------
dimension_columns: Optional[Iterable[str]]
Dimension columns of the query, may result in projection.
cube: Cube
Cube specification.
Returns
-------
dimension_columns: Tuple[str, ...]
Real dimension columns.
"""
if dimension_columns is None:
return cube.dimension_columns
else:
dimension_columns = converter_str_tupleset(dimension_columns)
missing = set(dimension_columns) - set(cube.dimension_columns)
if missing:
raise ValueError(
"Following dimension columns were requested but are missing from the cube: {missing}".format(
missing=", ".join(sorted(missing))
)
)
if len(dimension_columns) == 0:
raise ValueError("Dimension columns cannot be empty.")
return dimension_columns
def _process_partition_by(partition_by, cube, all_available_columns, indexed_columns):
"""
Process and check given partition-by columns.
Parameters
----------
partition_by: Optional[Iterable[str]]
By which column logical partitions should be formed.
cube: Cube
Cube specification.
all_available_columns: Set[str]
All columns that are available for query.
indexed_columns: Dict[str, Set[str]]
Indexed columns per ktk_cube dataset ID.
Returns
-------
partition_by: Tuple[str, ...]
Real partition-by columns, may be empty.
"""
if partition_by is None:
        return ()
else:
partition_by = converter_str_tupleset(partition_by)
partition_by_set = set(partition_by)
missing_available = partition_by_set - all_available_columns
if missing_available:
raise ValueError(
"Following partition-by columns were requested but are missing from the cube: {missing}".format(
missing=", ".join(sorted(missing_available))
)
)
missing_indexed = partition_by_set - reduce(
set.union, indexed_columns.values(), set()
)
if missing_indexed:
raise ValueError(
"Following partition-by columns are not indexed and cannot be used: {missing}".format(
missing=", ".join(sorted(missing_indexed))
)
)
return partition_by
def _test_condition_types(conditions, datasets):
"""
    Check the types of the given query conditions against the dataset schemas.
Parameters
----------
conditions: Conjunction
Conditions that should be applied.
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Datasets that are present.
Raises
-------
TypeError: In case of a wrong type.
"""
for single_condition in conditions.conditions:
test_predicate = single_condition.predicate_part
for literal in test_predicate:
col, op, val = literal
if op != "in":
val = [val]
for ktk_cube_dataset_id in sorted(datasets.keys()):
dataset = datasets[ktk_cube_dataset_id]
meta = dataset.schema
if col not in meta.names:
continue
pa_type = meta.field(col).type
if pa.types.is_null(pa_type):
# ignore all-NULL columns
# TODO: the query planner / regrouper could use that to emit 0 partitions
continue
for v in val:
try:
_normalize_value(v, pa_type)
# special check for numpy signed vs unsigned integers
if hasattr(v, "dtype"):
dtype = v.dtype
if (
pd.api.types.is_unsigned_integer_dtype(dtype)
and pa.types.is_signed_integer(pa_type)
) or (
pd.api.types.is_signed_integer_dtype(dtype)
and pa.types.is_unsigned_integer(pa_type)
):
# proper exception message will be constructed below
raise TypeError()
except Exception:
raise TypeError(
(
"Condition `{single_condition}` has wrong type. Expected `{pa_type} ({pd_type})` but "
"got `{value} ({py_type})`"
).format(
single_condition=single_condition,
pa_type=pa_type,
pd_type=pa_type.to_pandas_dtype().__name__,
value=v,
py_type=type(v).__name__,
)
)
def _process_conditions(
conditions, cube, datasets, all_available_columns, indexed_columns
):
"""
Process and check given query conditions.
Parameters
----------
conditions: Union[None, Condition, Iterable[Condition], Conjunction]
Conditions that should be applied.
cube: Cube
Cube specification.
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Datasets that are present.
all_available_columns: Set[str]
All columns that are available for query.
indexed_columns: Dict[str, Set[str]]
Indexed columns per ktk_cube dataset ID.
Returns
-------
conditions_pre: Dict[str, kartothek.core.cube.conditions.Conjunction]
Conditions to be applied based on the index data alone.
conditions_post: Dict[str, kartothek.core.cube.conditions.Conjunction]
Conditions to be applied during the load process.
Raises
-------
TypeError: In case of a wrong type.
"""
conditions = Conjunction(conditions)
condition_columns = conditions.columns
missing = condition_columns - all_available_columns
if missing:
raise ValueError(
"Following condition columns are required but are missing from the cube: {missing}".format(
missing=", ".join(sorted(missing))
)
)
_test_condition_types(conditions, datasets)
conditions_split = conditions.split_by_column()
conditions_pre = {}
for ktk_cube_dataset_id, ds in datasets.items():
candidate_cols = indexed_columns[ktk_cube_dataset_id]
if not candidate_cols:
continue
filtered = [
conj for col, conj in conditions_split.items() if col in candidate_cols
]
if not filtered:
continue
conditions_pre[ktk_cube_dataset_id] = reduce(Conjunction.from_two, filtered)
conditions_post = {}
for ktk_cube_dataset_id, ds in datasets.items():
candidate_cols = (get_dataset_columns(ds) & condition_columns) - set(
cube.partition_columns
)
if not candidate_cols:
continue
filtered = [
conj for col, conj in conditions_split.items() if col in candidate_cols
]
if not filtered:
continue
conditions_post[ktk_cube_dataset_id] = reduce(Conjunction.from_two, filtered)
return conditions_pre, conditions_post
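The routing above in sketch form: conditions are split per column, and each dataset receives the conjunction of the conditions whose column it can serve (here: whose column it indexes). A stdlib-only illustration in which predicate lists stand in for ``Conjunction`` objects:

```python
from functools import reduce


def route_conditions(conditions_by_col, indexed_cols_by_ds):
    routed = {}
    for ds_id, cols in indexed_cols_by_ds.items():
        picked = [conds for col, conds in conditions_by_col.items() if col in cols]
        if picked:
            # stand-in for Conjunction.from_two: concatenate predicate lists
            routed[ds_id] = reduce(lambda a, b: a + b, picked)
    return routed


conds = {"x": [("x", "==", 1)], "y": [("y", ">", 0)]}
print(route_conditions(conds, {"seed": {"x", "y"}, "enrich": {"y"}, "other": set()}))
# {'seed': [('x', '==', 1), ('y', '>', 0)], 'enrich': [('y', '>', 0)]}
```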
def _process_payload(payload_columns, all_available_columns, cube):
"""
Process and check given payload columns.
Parameters
----------
payload_columns: Optional[Iterable[str]]
Which columns apart from ``dimension_columns`` and ``partition_by`` should be returned from the query.
all_available_columns: Set[str]
All columns that are available for query.
cube: Cube
Cube specification.
Returns
-------
payload_columns: Set[str]
Payload columns to be returned from the query.
"""
if payload_columns is None:
return all_available_columns
else:
payload_columns = converter_str_set(payload_columns)
missing = payload_columns - all_available_columns
if missing:
raise ValueError(
"Cannot find the following requested payload columns: {missing}".format(
missing=", ".join(sorted(missing))
)
)
return payload_columns
def determine_intention(
cube,
datasets,
dimension_columns,
partition_by,
conditions,
payload_columns,
indexed_columns,
):
"""
    Determine and check user intention during the query process.
Parameters
----------
cube: Cube
Cube specification.
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Datasets that are present.
dimension_columns: Optional[Iterable[str]]
Dimension columns of the query, may result in projection.
partition_by: Optional[Iterable[str]]
By which column logical partitions should be formed.
conditions: Union[None, Condition, Iterable[Condition], Conjunction]
Conditions that should be applied.
payload_columns: Optional[Iterable[str]]
Which columns apart from ``dimension_columns`` and ``partition_by`` should be returned from the query.
indexed_columns: Dict[str, Set[str]]
Indexed columns per ktk_cube dataset ID.
Returns
-------
intention: QueryIntention
Checked and filled in intention of the user.
"""
all_available_columns = set(
itertools.chain.from_iterable(
[get_dataset_columns(ds) for ds in datasets.values()]
)
)
dimension_columns = _process_dimension_columns(
dimension_columns=dimension_columns, cube=cube
)
partition_by = _process_partition_by(
partition_by=partition_by,
cube=cube,
all_available_columns=all_available_columns,
indexed_columns=indexed_columns,
)
conditions_pre, conditions_post = _process_conditions(
conditions=conditions,
cube=cube,
datasets=datasets,
all_available_columns=all_available_columns,
indexed_columns=indexed_columns,
)
payload_columns = _process_payload(
payload_columns=payload_columns,
all_available_columns=all_available_columns,
cube=cube,
)
output_columns = tuple(
sorted(
set(partition_by)
| set(dimension_columns)
| set(payload_columns)
| set(cube.partition_columns)
)
)
return QueryIntention(
dimension_columns=dimension_columns,
partition_by=partition_by,
conditions_pre=conditions_pre,
conditions_post=conditions_post,
output_columns=output_columns,
)
@attr.s(frozen=True)
class QueryIntention:
"""
Checked user intention during the query process.
Parameters
----------
dimension_columns: Tuple[str, ...]
Real dimension columns.
partition_by: Tuple[str, ...]
Real partition-by columns, may be empty.
conditions_pre: Dict[str, kartothek.core.cube.conditions.Conjunction]
Conditions to be applied based on the index data alone.
conditions_post: Dict[str, kartothek.core.cube.conditions.Conjunction]
Conditions to be applied during the load process.
output_columns: Tuple[str, ...]
Output columns to be passed back to the user, in correct order.
"""
dimension_columns = attr.ib(type=typing.Tuple[str, ...])
partition_by = attr.ib(type=typing.Tuple[str, ...])
conditions_pre = attr.ib(type=typing.Dict[str, Conjunction])
conditions_post = attr.ib(type=typing.Dict[str, Conjunction])
    output_columns = attr.ib(type=typing.Tuple[str, ...])

# ---- end of kartothek/io_components/cube/query/_intention.py ----
import itertools
from functools import reduce
import numpy as np
from kartothek.api.consistency import check_datasets
from kartothek.api.discover import discover_datasets
from kartothek.core.common_metadata import empty_dataframe_from_schema
from kartothek.core.cube.conditions import Conjunction
from kartothek.core.index import ExplicitSecondaryIndex
from kartothek.io_components.cube.query._group import (
QueryGroup,
load_group,
quick_concat,
)
from kartothek.io_components.cube.query._intention import (
QueryIntention,
determine_intention,
)
from kartothek.io_components.cube.query._regroup import regroup
from kartothek.utils.ktk_adapters import get_dataset_columns
__all__ = ("QueryGroup", "QueryIntention", "load_group", "plan_query", "quick_concat")
def _get_indexed_columns(datasets):
"""
    Get columns that were indexed by Kartothek.
Parameters
----------
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Available datasets.
Returns
-------
indexed_columns: Dict[str, Set[str]]
Indexed columns per ktk_cube dataset ID.
"""
result = {}
for ktk_cube_dataset_id, ds in datasets.items():
result[ktk_cube_dataset_id] = set(ds.indices.keys())
return result
def _load_required_explicit_indices(datasets, intention, store):
"""
Load indices that are required for query evaluation.
.. important::
Primary/partition indices must already be loaded at this point!
Parameters
----------
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Available datasets.
intention: kartothek.io_components.cube.query._intention.QueryIntention
Query intention.
store: simplekv.KeyValueStore
Store to query from.
Returns
-------
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Available datasets, w/ indices loaded.
"""
# figure out which columns are required for query planning / regrouping
requires_columns = reduce(
set.union,
(
cond.columns
for cond in itertools.chain(
intention.conditions_pre.values(), intention.conditions_post.values()
)
),
set(),
) | set(intention.partition_by)
# load all indices that describe these columns
datasets_result = {}
for ktk_cube_dataset_id, ds in datasets.items():
indices = {
column: index.load(store)
if (
isinstance(index, ExplicitSecondaryIndex)
and (column in requires_columns)
)
else index
for column, index in ds.indices.items()
}
ds = ds.copy(indices=indices)
datasets_result[ktk_cube_dataset_id] = ds
return datasets_result
def _determine_restrictive_dataset_ids(cube, datasets, intention):
"""
Determine which datasets are restrictive.
    These are datasets which contain non-dimension columns and non-partition columns to which the user wishes to apply
restrictions (via conditions or via partition-by).
Parameters
----------
cube: Cube
Cube specification.
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Available datasets.
intention: kartothek.io_components.cube.query._intention.QueryIntention
Query intention.
Returns
-------
restrictive_dataset_ids: Set[str]
Set of restrictive datasets (by Ktk_cube dataset ID).
"""
result = set()
for ktk_cube_dataset_id, dataset in datasets.items():
if ktk_cube_dataset_id == cube.seed_dataset:
continue
mask = (
set(intention.partition_by)
| intention.conditions_pre.get(ktk_cube_dataset_id, Conjunction([])).columns
| intention.conditions_post.get(
ktk_cube_dataset_id, Conjunction([])
).columns
) - (set(cube.dimension_columns) | set(cube.partition_columns))
overlap = mask & get_dataset_columns(dataset)
if overlap:
result.add(ktk_cube_dataset_id)
return result
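The "restrictive dataset" rule above in sketch form: a non-seed dataset becomes restrictive when the user constrains (via conditions or partition-by) one of its genuine payload columns, i.e. a column that is neither a dimension nor a partition column. Stdlib-only; names are illustrative:

```python
def restrictive_ids(seed, ds_columns, constrained, dim_cols, part_cols):
    out = set()
    for ds_id, cols in ds_columns.items():
        if ds_id == seed:
            continue
        # constraints on payload columns this dataset actually carries
        payload_constraints = (constrained - dim_cols - part_cols) & cols
        if payload_constraints:
            out.add(ds_id)
    return out


print(restrictive_ids(
    "seed",
    {"seed": {"d", "p", "v0"}, "enrich": {"d", "v1"}},
    constrained={"v1"}, dim_cols={"d"}, part_cols={"p"},
))  # {'enrich'}
```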
def _dermine_load_columns(cube, datasets, intention):
"""
Determine which columns to load from given datasets.
Parameters
----------
cube: Cube
Cube specification.
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Available datasets.
intention: kartothek.io_components.cube.query._intention.QueryIntention
Query intention.
Returns
-------
load_columns: Dict[str, Set[str]]
Columns to load.
"""
result = {}
for ktk_cube_dataset_id, ds in datasets.items():
is_seed = ktk_cube_dataset_id == cube.seed_dataset
ds_cols = get_dataset_columns(ds)
dimensionality = ds_cols & set(cube.dimension_columns)
is_projection = not dimensionality.issubset(set(intention.dimension_columns))
mask = (
set(intention.output_columns)
| set(intention.dimension_columns)
| intention.conditions_post.get(
ktk_cube_dataset_id, Conjunction([])
).columns
)
if not is_seed:
            # optimize the load by restoring partition columns only for the seed dataset
mask -= set(cube.partition_columns)
candidates = ds_cols & mask
payload = candidates - set(cube.partition_columns) - set(cube.dimension_columns)
payload_requested = len(payload) > 0
if is_seed or payload_requested:
if is_projection and payload_requested:
raise ValueError(
(
'Cannot project dataset "{ktk_cube_dataset_id}" with dimensionality [{dimensionality}] to '
"[{dimension_columns}] while keeping the following payload intact: {payload}"
).format(
ktk_cube_dataset_id=ktk_cube_dataset_id,
dimensionality=", ".join(sorted(dimensionality)),
dimension_columns=", ".join(
sorted(intention.dimension_columns)
),
payload=", ".join(sorted(payload)),
)
)
result[ktk_cube_dataset_id] = candidates
return result
def _filter_relevant_datasets(datasets, load_columns):
"""
Filter datasets so only ones that actually load columns are left.
Parameters
----------
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Datasets to filter.
load_columns: Dict[str, Set[str]]
Columns to load.
Returns
-------
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
Filtered datasets.
"""
which = set(load_columns.keys())
return {
ktk_cube_dataset_id: ds
for ktk_cube_dataset_id, ds in datasets.items()
if ktk_cube_dataset_id in which
}
def _reduce_empty_dtype_sizes(df):
"""
Try to find smaller dtypes for empty DF.
Currently, the following conversions are implemented:
- all integers to ``int8``
- all floats to ``float32``
Parameters
----------
df: pandas.DataFrame
Empty DataFrame, will be modified.
Returns
-------
df: pandas.DataFrame
Empty DataFrame w/ smaller types.
"""
def _reduce_dtype(dtype):
if np.issubdtype(dtype, np.signedinteger):
return np.int8
elif np.issubdtype(dtype, np.unsignedinteger):
return np.uint8
elif np.issubdtype(dtype, np.floating):
return np.float32
else:
return dtype
return df.astype({col: _reduce_dtype(df[col].dtype) for col in df.columns})
def plan_query(
conditions, cube, datasets, dimension_columns, partition_by, payload_columns, store,
):
"""
Plan cube query execution.
.. important::
        If the intention does not contain a partition-by, the query is partitioned by the cube partition columns to
        speed it up on parallel backends. In that case, the backend must concatenate and check the resulting
        dataframes before passing them to the user.
Parameters
----------
conditions: Union[None, Condition, Iterable[Condition], Conjunction]
Conditions that should be applied.
cube: Cube
Cube specification.
datasets: Union[None, Iterable[str], Dict[str, kartothek.core.dataset.DatasetMetadata]]
Datasets to query, must all be part of the cube.
dimension_columns: Optional[Iterable[str]]
Dimension columns of the query, may result in projection.
partition_by: Optional[Iterable[str]]
By which column logical partitions should be formed.
payload_columns: Optional[Iterable[str]]
Which columns apart from ``dimension_columns`` and ``partition_by`` should be returned.
store: Union[simplekv.KeyValueStore, Callable[[], simplekv.KeyValueStore]]
Store to query from.
Returns
-------
intent: QueryIntention
Query intention.
empty_df: pandas.DataFrame
Empty DataFrame representing the output types.
groups: Tuple[QueryGroup]
Tuple of query groups. May be empty.
"""
if callable(store):
store = store()
if not isinstance(datasets, dict):
datasets = discover_datasets(
cube=cube, store=store, filter_ktk_cube_dataset_ids=datasets
)
else:
datasets = check_datasets(datasets, cube)
datasets = {
ktk_cube_dataset_id: ds.load_partition_indices()
for ktk_cube_dataset_id, ds in datasets.items()
}
indexed_columns = _get_indexed_columns(datasets)
intention = determine_intention(
cube=cube,
datasets=datasets,
dimension_columns=dimension_columns,
partition_by=partition_by,
conditions=conditions,
payload_columns=payload_columns,
indexed_columns=indexed_columns,
)
datasets = _load_required_explicit_indices(datasets, intention, store)
restrictive_dataset_ids = _determine_restrictive_dataset_ids(
cube=cube, datasets=datasets, intention=intention
)
load_columns = _dermine_load_columns(
cube=cube, datasets=datasets, intention=intention
)
datasets = _filter_relevant_datasets(datasets=datasets, load_columns=load_columns)
empty_df = {
ktk_cube_dataset_id: _reduce_empty_dtype_sizes(
empty_dataframe_from_schema(
schema=ds.schema,
columns=sorted(
get_dataset_columns(ds) & set(load_columns[ktk_cube_dataset_id])
),
)
)
for ktk_cube_dataset_id, ds in datasets.items()
}
empty_df_single = empty_df[cube.seed_dataset].copy()
for k, df in empty_df.items():
if k == cube.seed_dataset:
continue
if empty_df_single is None:
empty_df_single = df.copy()
else:
empty_df_single = empty_df_single.merge(df)
empty_df_single = empty_df_single[list(intention.output_columns)]
groups = regroup(
intention,
cube=cube,
datasets=datasets,
empty_df=empty_df,
indexed_columns=indexed_columns,
load_columns=load_columns,
restrictive_dataset_ids=restrictive_dataset_ids,
)
    return intention, empty_df_single, groups
# --- end of kartothek/io_components/cube/query/__init__.py ---
from functools import partial
import click
import numpy as np # noqa
import pandas as pd
from prompt_toolkit import prompt
from prompt_toolkit.completion import WordCompleter
from prompt_toolkit.history import InMemoryHistory
from prompt_toolkit.validation import ValidationError, Validator
from kartothek.core.cube.conditions import Conjunction
from kartothek.io.dask.bag_cube import query_cube_bag
from kartothek.utils.ktk_adapters import get_dataset_columns
__all__ = ("query",)
_history_conditions = InMemoryHistory()
_history_payload = InMemoryHistory()
@click.pass_context
def query(ctx):
"""
Interactive cube queries into IPython.
"""
cube = ctx.obj["cube"]
datasets = ctx.obj["datasets"]
store = ctx.obj["store"]
store_instance = store()
datasets = {
ktk_cube_dataset_id: ds.load_all_indices(store_instance)
for ktk_cube_dataset_id, ds in datasets.items()
}
all_columns = set()
all_types = {}
for ds in datasets.values():
cols = get_dataset_columns(ds)
all_columns |= cols
for col in cols:
all_types[col] = ds.schema.field(col).type
ipython = _get_ipython()
conditions = None
payload_columns = []
while True:
conditions = _ask_conditions(conditions, all_columns, all_types)
payload_columns = _ask_payload(payload_columns, all_columns)
result = query_cube_bag(
cube=cube,
store=store,
conditions=conditions,
datasets=datasets,
payload_columns=payload_columns,
).compute()
if not result:
click.secho("No data found.", bold=True, fg="red")
continue
df = result[0]
_shell(df, ipython)
def _get_ipython():
try:
import IPython # noqa
return IPython
except Exception as e:
raise click.UsageError("Could not load IPython: {e}".format(e=e))
def _shell(df, ipython):
pd.set_option("display.width", None)
pd.set_option("display.max_rows", 0)
try:
ipython.embed(banner1="Shell:")
except ipython.terminal.embed.KillEmbeded:
raise KeyboardInterrupt()
class _ValidatorFromParse(Validator):
def __init__(self, f_parse):
super(_ValidatorFromParse, self).__init__()
self.f_parse = f_parse
def validate(self, document):
txt = document.text
try:
self.f_parse(txt)
except ValueError as e:
raise ValidationError(message=str(e))
def _ask_payload(payload_columns, all_columns):
def _parse(txt):
if txt == "__all__":
return sorted(all_columns)
cleaned = {s.strip() for s in txt.split(",")}
cols = {s for s in cleaned if s}
missing = cols - set(all_columns)
if missing:
raise ValueError(
"Unknown: {missing}".format(missing=", ".join(sorted(missing)))
)
return sorted(cols)
if set(payload_columns) == all_columns:
default = "__all__"
else:
default = ",".join(payload_columns)
txt = prompt(
message="Payload Columns: ",
history=_history_payload,
default=default,
completer=WordCompleter(sorted(all_columns) + ["__all__"]),
validator=_ValidatorFromParse(_parse),
)
return _parse(txt)
def _ask_conditions(conditions, all_columns, all_types):
txt = prompt(
message="Conditions: ",
history=_history_conditions,
default=str(conditions) if conditions is not None else "",
completer=WordCompleter(sorted(all_columns)),
validator=_ValidatorFromParse(
partial(Conjunction.from_string, all_types=all_types)
),
)
    return Conjunction.from_string(txt, all_types)
# --- end of kartothek/cli/_query.py ---
import click
from kartothek.cli._utils import filter_items, get_cube, get_store
from kartothek.io.dask.bag_cube import copy_cube_bag, delete_cube_bag
__all__ = ("copy",)
@click.option("--tgt_store", required=True, help="Target store to use.")
@click.option(
"--overwrite/--no-overwrite",
default=False,
help=(
"Flags if potentially present cubes in ``tgt_store`` are overwritten. If ``--no-overwrite`` is given (default) "
"and a cube is already present, the operation will fail."
),
show_default=True,
)
@click.option(
"--cleanup/--no-cleanup",
default=True,
help=(
"Flags if in case of an overwrite operation, the cube in ``tgt_store`` will first be removed so no previously "
"tracked files will be present after the copy operation."
),
show_default=True,
)
@click.option(
"--include",
help="Comma separated list of dataset-id to be copied. e.g., ``--include enrich,enrich_cl`` "
"also supports glob patterns",
is_flag=False,
metavar="<include>",
type=click.STRING,
)
@click.option(
"--exclude",
help="Copy all datasets except items in this comma separated list. e.g., ``--exclude enrich,enrich_cl`` "
"also supports glob patterns",
is_flag=False,
metavar="<exclude>",
type=click.STRING,
)
@click.pass_context
def copy(ctx, tgt_store, overwrite, cleanup, include, exclude):
"""
Copy cube from one store to another.
"""
cube = ctx.obj["cube"]
skv = ctx.obj["skv"]
store = ctx.obj["store"]
store_name = ctx.obj["store_name"]
all_datasets = set(ctx.obj["datasets"].keys())
if store_name == tgt_store:
raise click.UsageError("Source and target store must be different.")
tgt_store = get_store(skv, tgt_store)
selected_datasets = filter_items("dataset", all_datasets, include, exclude)
if overwrite:
try:
cube2, _ = get_cube(tgt_store, cube.uuid_prefix)
if cleanup:
click.secho(
"Deleting existing datasets {selected_datasets} from target store...".format(
selected_datasets=",".join(selected_datasets)
)
)
delete_cube_bag(
cube=cube2, store=tgt_store, datasets=selected_datasets
).compute()
else:
click.secho("Skip cleanup, leave old existing cube.")
except click.UsageError:
# cube not found, nothing to cleanup
pass
click.secho(
"Copy datasets {selected_datasets} to target store...".format(
selected_datasets=",".join(selected_datasets)
)
)
try:
copy_cube_bag(
cube=cube,
src_store=store,
tgt_store=tgt_store,
overwrite=overwrite,
datasets=selected_datasets,
).compute()
except (RuntimeError, ValueError) as e:
        raise click.UsageError("Failed to copy cube: {e}".format(e=e))
# --- end of kartothek/cli/_copy.py ---
import json
import click
from kartothek.cli._utils import to_bold as b
from kartothek.cli._utils import to_header as h
from kartothek.utils.ktk_adapters import get_dataset_columns
__all__ = ("info",)
@click.pass_context
def info(ctx):
"""
Show certain infos about the cube.
"""
cube = ctx.obj["cube"]
datasets = ctx.obj["datasets"]
seed_ds = datasets[cube.seed_dataset]
seed_schema = seed_ds.schema
click.echo(h("Infos"))
click.echo(b("UUID Prefix:") + " {}".format(cube.uuid_prefix))
click.echo(
b("Dimension Columns:") + _collist_string(cube.dimension_columns, seed_schema)
)
click.echo(
b("Partition Columns:") + _collist_string(cube.partition_columns, seed_schema)
)
click.echo(b("Index Columns:") + _collist_string_index(cube, datasets))
click.echo(b("Seed Dataset:") + " {}".format(cube.seed_dataset))
for ktk_cube_dataset_id in sorted(datasets.keys()):
_info_dataset(ktk_cube_dataset_id, datasets[ktk_cube_dataset_id], cube)
def _info_dataset(ktk_cube_dataset_id, ds, cube):
click.echo("")
click.echo(h("Dataset: {}".format(ktk_cube_dataset_id)))
ds = ds.load_partition_indices()
schema = ds.schema
all_cols = get_dataset_columns(ds)
payload_cols = sorted(
all_cols - (set(cube.dimension_columns) | set(cube.partition_columns))
)
dim_cols = sorted(set(cube.dimension_columns) & all_cols)
click.echo(b("Partition Keys:") + _collist_string(ds.partition_keys, schema))
click.echo(b("Partitions:") + " {}".format(len(ds.partitions)))
click.echo(
b("Metadata:")
+ "\n{}".format(
"\n".join(
" {}".format(line)
for line in json.dumps(
ds.metadata, indent=2, sort_keys=True, separators=(",", ": ")
).split("\n")
)
)
)
click.echo(b("Dimension Columns:") + _collist_string(dim_cols, schema))
click.echo(b("Payload Columns:") + _collist_string(payload_cols, schema))
def _collist_string(cols, schema):
if cols:
return "\n" + "\n".join(
" - {c}: {t}".format(c=c, t=schema.field(c).type) for c in cols
)
else:
return ""
def _collist_string_index(cube, datasets):
lines = []
for col in sorted(cube.index_columns):
for ktk_cube_dataset_id in sorted(datasets.keys()):
ds = datasets[ktk_cube_dataset_id]
schema = ds.schema
if col in schema.names:
lines.append(" - {c}: {t}".format(c=col, t=schema.field(col).type))
break
    return "\n" + "\n".join(lines)
# --- end of kartothek/cli/_info.py ---
import fnmatch
from functools import partial
import click
import storefact
import yaml
from kartothek.api.discover import discover_cube
__all__ = ("filter_items", "get_cube", "get_store", "to_bold", "to_header")
def get_cube(store, uuid_prefix):
"""
Get cube from store.
Parameters
----------
uuid_prefix: str
Dataset UUID prefix.
store: Union[Callable[[], simplekv.KeyValueStore], simplekv.KeyValueStore]
KV store.
Returns
-------
cube: Cube
Cube specification.
datasets: Dict[str, kartothek.core.dataset.DatasetMetadata]
All discovered datasets.
Raises
------
click.UsageError
In case cube was not found.
"""
try:
return discover_cube(uuid_prefix, store)
except ValueError as e:
raise click.UsageError("Could not load cube: {e}".format(e=e))
def get_store(skv, store):
"""
Get simplekv store from storefact config file.
Parameters
----------
skv: str
Name of the storefact yaml. Normally ``'skv.yml'``.
store: str
ID of the store.
Returns
-------
store_factory: Callable[[], simplekv.KeyValueStore]
Store object.
Raises
------
click.UsageError
In case something went wrong.
"""
try:
with open(skv, "rb") as fp:
store_cfg = yaml.safe_load(fp)
except IOError as e:
        raise click.UsageError("Could not open store YAML: {e}".format(e=e))
except yaml.YAMLError as e:
raise click.UsageError("Could not parse provided YAML file: {e}".format(e=e))
if store not in store_cfg:
raise click.UsageError(
"Could not find store {store} in {skv}".format(store=store, skv=skv)
)
return partial(storefact.get_store, **store_cfg[store])
def _match_pattern(what, items, pattern):
    """
    Match given pattern against given items.
    Parameters
    ----------
    what: str
        Describes what is filtered.
    items: Iterable[str]
        Items to be filtered.
    pattern: str
        Comma separated patterns to match against. Can contain glob patterns.
    """
result = set()
for part in pattern.split(","):
found = set(fnmatch.filter(items, part.strip()))
if not found:
raise click.UsageError(
"Could not find {what} {part}".format(what=what, part=part)
)
result |= found
return result
def filter_items(what, items, include_pattern=None, exclude_pattern=None):
"""
Filter given string items based on include and exclude patterns
Parameters
----------
what: str
        Describes what is filtered.
items: Iterable[str]
Items to be filtered
include_pattern: str
Comma separated items which should be included. Can contain glob patterns.
exclude_pattern: str
Comma separated items which should be excluded. Can contain glob patterns.
Returns
-------
filtered_datasets: Set[str]
Filtered set of items after applying include and exclude patterns
"""
items = set(items)
if include_pattern is not None:
include_datasets = _match_pattern(what, items, include_pattern)
else:
include_datasets = items
if exclude_pattern is not None:
exclude_datasets = _match_pattern(what, items, exclude_pattern)
else:
exclude_datasets = set()
return include_datasets - exclude_datasets
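The include/exclude semantics can be illustrated with ``fnmatch`` directly (the dataset names below are hypothetical):

```python
import fnmatch

items = {"seed", "enrich", "enrich_cl", "stats"}

def match(items, pattern):
    # Union of all comma-separated glob patterns, as in _match_pattern.
    result = set()
    for part in pattern.split(","):
        result |= set(fnmatch.filter(items, part.strip()))
    return result

# Include everything matching "enrich*", then drop "enrich_cl".
selected = match(items, "enrich*") - match(items, "enrich_cl")
print(sorted(selected))  # ['enrich']
```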
def to_header(s):
"""
Create header.
Parameters
----------
s: str
Header content.
Returns
-------
s: str
        Header content including terminal escape sequences.
"""
return click.style(s, bold=True, underline=True, fg="yellow")
def to_bold(s):
"""
Create bold text.
Parameters
----------
s: str
Bold text content.
Returns
-------
s: str
        Given text including terminal escape sequences.
"""
    return click.style(s, bold=True)
# --- end of kartothek/cli/_utils.py ---
from typing import List, Tuple, Union
from urlquote import quote as urlquote_quote
from urlquote import unquote as urlquote_unquote
from urlquote.quoting import PYTHON_3_7_QUOTING
def quote(value):
"""
    Performs percent encoding on a sequence of bytes. If the given value is of string type, it will
    be encoded. If the value is neither of string type nor bytes type, it will be cast using the `str`
    constructor before being encoded in UTF-8.
"""
return urlquote_quote(value, quoting=PYTHON_3_7_QUOTING).decode("utf-8")
def unquote(value):
"""
    Decodes a urlencoded string and performs any additional decoding required by the Python version in use.
"""
return urlquote_unquote(value).decode("utf-8")
def decode_key(
key: str,
) -> Union[Tuple[str, str, List, str], Tuple[str, None, List, None]]:
"""
    Split a given key into its kartothek components `{dataset_uuid}/{table}/{key_indices}/{filename}`
Example:
`uuid/table/index_col=1/index_col=2/partition_label.parquet`
Returns
-------
dataset_uuid: str
table: str
key_indices: list
The already unquoted list of index pairs
file_: str
The file name
"""
key_components = key.split("/")
dataset_uuid = key_components[0]
if len(key_components) < 3:
return key, None, [], None
table = key_components[1]
file_ = key_components[-1]
key_indices = unquote_indices(key_components[2:-1])
return dataset_uuid, table, key_indices, file_
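The key layout that ``decode_key`` expects can be exercised with a plain string split (uuid, columns, and file name below are made-up examples):

```python
# Decode a kartothek-style key by hand: uuid/table/col=val/.../file.
key = "my_uuid/my_table/color=red/size=L/part_0.parquet"
parts = key.split("/")
dataset_uuid, table, file_ = parts[0], parts[1], parts[-1]
# The middle components carry the filename-encoded index pairs.
key_indices = [tuple(p.split("=")) for p in parts[2:-1]]
print(dataset_uuid, table, key_indices, file_)
```

The real helper additionally unquotes percent-encoded characters in the index pairs.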
def quote_indices(indices: List[Tuple[str, str]]) -> List[str]:
"""
Urlencode a list of column-value pairs and encode them as:
`quote(column)=quote(value)`
Parameters
----------
indices
A list of tuples where each list entry is (column, value)
Returns
-------
List[str]
List with urlencoded column=value strings
"""
quoted_pairs = []
for column, value in indices:
quoted_pairs.append(
"{column}={value}".format(column=quote(column), value=quote(value))
)
return quoted_pairs
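The encoding round-trip can be sketched with the standard library's ``urllib.parse`` as a stand-in for the urlquote-based ``quote``/``unquote`` helpers used here (sample values are made up):

```python
from urllib.parse import quote, unquote

pairs = [("country", "DE/AT"), ("city", "New York")]
# Encode each pair as quote(column)=quote(value), escaping "/" as well.
encoded = ["{}={}".format(quote(c, safe=""), quote(v, safe="")) for c, v in pairs]
# Decoding splits on "=" and unquotes both halves.
decoded = [tuple(unquote(s) for s in e.split("=")) for e in encoded]
print(encoded)  # ['country=DE%2FAT', 'city=New%20York']
```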
def unquote_indices(index_strings: List[str]) -> List[Tuple[str, str]]:
"""
Take a list of encoded column-value strings and decode them to tuples
input: `quote(column)=quote(value)`
output `(column, value)`
    Parameters
    ----------
    index_strings
        A list of urlencoded ``column=value`` strings.
    Returns
    -------
    List[Tuple[str, str]]
        List of decoded ``(column, value)`` pairs.
"""
indices = []
for index_string in index_strings:
split_string = index_string.split("=")
if len(split_string) == 2:
column, value = split_string
indices.append((unquote(column), unquote(value)))
    return indices
# --- end of kartothek/core/urlencode.py ---
import copy
import logging
import re
from collections import OrderedDict, defaultdict
from typing import Any, Dict, List, Optional, Set, Tuple, TypeVar, Union
import pandas as pd
import pyarrow as pa
import simplejson
import kartothek.core._time
from kartothek.core import naming
from kartothek.core._compat import load_json
from kartothek.core._mixins import CopyMixin
from kartothek.core._zmsgpack import packb, unpackb
from kartothek.core.common_metadata import SchemaWrapper, read_schema_metadata
from kartothek.core.docs import default_docs
from kartothek.core.index import (
ExplicitSecondaryIndex,
IndexBase,
PartitionIndex,
filter_indices,
)
from kartothek.core.naming import (
EXTERNAL_INDEX_SUFFIX,
PARQUET_FILE_SUFFIX,
SINGLE_TABLE,
)
from kartothek.core.partition import Partition
from kartothek.core.typing import StoreInput
from kartothek.core.urlencode import decode_key, quote_indices
from kartothek.core.utils import ensure_store, verify_metadata_version
from kartothek.serialization import PredicatesType, columns_in_predicates
_logger = logging.getLogger(__name__)
TableMetaType = Dict[str, SchemaWrapper]
__all__ = ("DatasetMetadata", "DatasetMetadataBase")
def _validate_uuid(uuid: str) -> bool:
return re.match(r"[a-zA-Z0-9+\-_]+$", uuid) is not None
def to_ordinary_dict(dct: Dict) -> Dict:
new_dct = {}
for key, value in dct.items():
if isinstance(value, dict):
new_dct[key] = to_ordinary_dict(value)
else:
new_dct[key] = value
return new_dct
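The same recursive downgrade of ``OrderedDict`` values to plain dicts can be written as a one-line comprehension (a sketch, not the library's implementation):

```python
from collections import OrderedDict

def to_plain(dct):
    # Recursively convert any (Ordered)Dict values into plain dicts.
    return {k: to_plain(v) if isinstance(v, dict) else v for k, v in dct.items()}

nested = OrderedDict([("a", OrderedDict([("b", 1)])), ("c", 2)])
plain = to_plain(nested)
print(plain)  # {'a': {'b': 1}, 'c': 2}
```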
T = TypeVar("T", bound="DatasetMetadataBase")
class DatasetMetadataBase(CopyMixin):
def __init__(
self,
uuid: str,
partitions: Optional[Dict[str, Partition]] = None,
metadata: Optional[Dict] = None,
indices: Optional[Dict[str, IndexBase]] = None,
metadata_version: int = naming.DEFAULT_METADATA_VERSION,
explicit_partitions: bool = True,
partition_keys: Optional[List[str]] = None,
schema: Optional[SchemaWrapper] = None,
table_name: Optional[str] = SINGLE_TABLE,
):
if not _validate_uuid(uuid):
raise ValueError("UUID contains illegal character")
self.metadata_version = metadata_version
self.uuid = uuid
self.partitions = partitions if partitions else {}
self.metadata = metadata if metadata else {}
self.indices = indices if indices else {}
# explicit partitions means that the partitions are defined in the
# metadata.json file (in contrast to implicit partitions that are
# derived from the partition key names)
self.explicit_partitions = explicit_partitions
self.partition_keys = partition_keys or []
self.schema = schema
self._table_name = table_name
_add_creation_time(self)
super(DatasetMetadataBase, self).__init__()
def __eq__(self, other: Any) -> bool:
# Enforce dict comparison at the places where we only
# care about content, not order.
if self.uuid != other.uuid:
return False
if to_ordinary_dict(self.partitions) != to_ordinary_dict(other.partitions):
return False
if to_ordinary_dict(self.metadata) != to_ordinary_dict(other.metadata):
return False
if self.indices != other.indices:
return False
if self.explicit_partitions != other.explicit_partitions:
return False
if self.partition_keys != other.partition_keys:
return False
if self.schema != other.schema:
return False
return True
@property
def primary_indices_loaded(self) -> bool:
if not self.partition_keys:
return False
for pkey in self.partition_keys:
if pkey not in self.indices:
return False
return True
@property
def table_name(self) -> str:
if self._table_name:
return self._table_name
elif self.partitions:
            tables = self.tables
            if tables:
                return tables[0]
return "<Unknown Table>"
@property
def tables(self) -> List[str]:
        tables = list(next(iter(self.partitions.values())).files.keys())
if len(tables) > 1:
raise RuntimeError(
f"Dataset {self.uuid} has tables {tables} but read support for multi tabled dataset was dropped with kartothek 4.0."
)
return tables
@property
def index_columns(self) -> Set[str]:
return set(self.indices.keys()).union(self.partition_keys)
@property
def secondary_indices(self) -> Dict[str, ExplicitSecondaryIndex]:
return {
col: ind
for col, ind in self.indices.items()
if isinstance(ind, ExplicitSecondaryIndex)
}
@staticmethod
def exists(uuid: str, store: StoreInput) -> bool:
"""
Check if a dataset exists in a storage
Parameters
----------
uuid
UUID of the dataset.
store
Object that implements the .get method for file/object loading.
"""
store = ensure_store(store)
key = naming.metadata_key_from_uuid(uuid)
if key in store:
return True
key = naming.metadata_key_from_uuid(uuid, format="msgpack")
return key in store
@staticmethod
def storage_keys(uuid: str, store: StoreInput) -> List[str]:
"""
Retrieve all keys that belong to the given dataset.
Parameters
----------
uuid
UUID of the dataset.
store
Object that implements the .iter_keys method for key retrieval loading.
"""
store = ensure_store(store)
start_markers = ["{}.".format(uuid), "{}/".format(uuid)]
return list(
sorted(
k
for k in store.iter_keys(uuid)
if any(k.startswith(marker) for marker in start_markers)
)
)
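The ``uuid.``/``uuid/`` start markers matter: a bare prefix check would also match other datasets that merely share a name prefix. A toy sketch (key names are hypothetical):

```python
uuid = "my_ds"
keys = [
    "my_ds.by-dataset-metadata.json",
    "my_ds/table/part_0.parquet",
    "my_ds_other/table/part_0.parquet",  # different dataset, shared prefix
]
start_markers = ["{}.".format(uuid), "{}/".format(uuid)]
belonging = sorted(
    k for k in keys if any(k.startswith(m) for m in start_markers)
)
print(belonging)
```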
def to_dict(self) -> Dict:
dct = OrderedDict(
[
(naming.METADATA_VERSION_KEY, self.metadata_version),
(naming.UUID_KEY, self.uuid),
]
)
if self.indices:
dct["indices"] = {
k: v.to_dict()
if v.loaded
else v.index_storage_key
if isinstance(v, ExplicitSecondaryIndex)
else {}
for k, v in self.indices.items()
}
if self.metadata:
dct["metadata"] = self.metadata
if self.partitions or self.explicit_partitions:
dct["partitions"] = {
label: partition.to_dict()
for label, partition in self.partitions.items()
}
if self.partition_keys is not None:
dct["partition_keys"] = self.partition_keys
return dct
def to_json(self) -> bytes:
return simplejson.dumps(self.to_dict()).encode("utf-8")
def to_msgpack(self) -> bytes:
return packb(self.to_dict())
def load_partition_indices(self: T) -> T:
"""
Load all filename encoded indices into RAM. File encoded indices can be extracted from datasets with partitions
stored in a format like
.. code::
`dataset_uuid/table/IndexCol=IndexValue/SecondIndexCol=Value/partition_label.parquet`
Which results in an in-memory index holding the information
.. code::
{
"IndexCol": {
IndexValue: ["partition_label"]
},
"SecondIndexCol": {
Value: ["partition_label"]
}
}
"""
if self.primary_indices_loaded:
return self
indices = _construct_dynamic_index_from_partitions(
partitions=self.partitions,
schema=self.schema,
default_dtype=pa.string() if self.metadata_version == 3 else None,
partition_keys=self.partition_keys,
)
combined_indices = self.indices.copy()
combined_indices.update(indices)
return self.copy(indices=combined_indices)
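The filename-encoded index construction described in the docstring can be sketched in plain Python (partition labels below are made up):

```python
from collections import defaultdict

partition_labels = [
    "country=DE/year=2020/part_0",
    "country=AT/year=2020/part_1",
]
# column -> value -> list of partition labels, as in the docstring example.
index = defaultdict(lambda: defaultdict(list))
for label in partition_labels:
    *pairs, name = label.split("/")
    for pair in pairs:
        col, val = pair.split("=")
        index[col][val].append(name)
print(dict(index["year"]))  # {'2020': ['part_0', 'part_1']}
```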
def load_index(self: T, column: str, store: StoreInput) -> T:
"""
Load an index into memory.
Note: External indices need to be preloaded before they can be queried.
Parameters
----------
column
Name of the column for which the index should be loaded.
store
Object that implements the .get method for file/object loading.
Returns
-------
dataset_metadata: :class:`~kartothek.core.dataset.DatasetMetadata`
Mutated metadata object with the loaded index.
"""
if self.partition_keys and column in self.partition_keys:
return self.load_partition_indices()
if column not in self.indices:
raise KeyError("No index specified for column '{}'".format(column))
index = self.indices[column]
if index.loaded or not isinstance(index, ExplicitSecondaryIndex):
return self
loaded_index = index.load(store=store)
if not self.explicit_partitions:
col_loaded_index = filter_indices(
{column: loaded_index}, self.partitions.keys()
)
else:
col_loaded_index = {column: loaded_index}
indices = dict(self.indices, **col_loaded_index)
return self.copy(indices=indices)
def load_all_indices(self: T, store: StoreInput) -> T:
"""
Load all registered indices into memory.
Note: External indices need to be preloaded before they can be queried.
Parameters
----------
store
Object that implements the .get method for file/object loading.
Returns
-------
dataset_metadata: :class:`~kartothek.core.dataset.DatasetMetadata`
Mutated metadata object with the loaded indices.
"""
indices = {
column: index.load(store)
if isinstance(index, ExplicitSecondaryIndex)
else index
for column, index in self.indices.items()
}
ds = self.copy(indices=indices)
return ds.load_partition_indices()
    def query(self, indices: Optional[List[IndexBase]] = None, **kwargs) -> List[str]:
"""
Query the dataset for partitions that contain specific values. Lookup is performed
using the embedded and loaded external indices. Additional indices need to operate
on the same partitions that the dataset contains, otherwise an empty list will be
returned (the query method only restricts the set of partition keys using the indices).
Parameters
----------
indices:
List of optional additional indices.
**kwargs:
Map of columns and values.
Returns
-------
List[str]
List of keys of partitions that contain the queries values in the respective columns.
"""
candidate_set = set(self.partitions.keys())
        additional_indices = indices if indices else []
combined_indices = dict(
self.indices, **{index.column: index for index in additional_indices}
)
for column, value in kwargs.items():
if column in combined_indices:
candidate_set &= set(combined_indices[column].query(value))
return list(candidate_set)
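``query`` narrows the candidate partition set by intersecting each index's hits; in outline (toy partitions and indices):

```python
partitions = {"p1", "p2", "p3"}
# Hypothetical pre-built indices: column -> value -> matching partitions.
toy_indices = {
    "color": {"red": {"p1", "p2"}},
    "size": {"L": {"p2", "p3"}},
}
candidates = set(partitions)
for column, value in [("color", "red"), ("size", "L")]:
    candidates &= toy_indices[column][value]
print(sorted(candidates))  # ['p2']
```

Only partitions matching every queried column survive the intersection.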
@default_docs
def get_indices_as_dataframe(
self,
columns: Optional[List[str]] = None,
date_as_object: bool = True,
predicates: PredicatesType = None,
):
"""
Converts the dataset indices to a pandas dataframe and filter relevant indices by `predicates`.
For a dataset with indices on columns `column_a` and `column_b` and three partitions,
the dataset output may look like
.. code::
column_a column_b
part_1 1 A
part_2 2 B
part_3 3 None
Parameters
----------
"""
if self.partition_keys and (
columns is None
or (
self.partition_keys is not None
and set(columns) & set(self.partition_keys)
)
):
# self.load_partition_indices is not inplace
dm = self.load_partition_indices()
else:
dm = self
if columns is None:
columns = sorted(dm.indices.keys())
if columns == []:
return pd.DataFrame(index=dm.partitions)
if predicates:
predicate_columns = columns_in_predicates(predicates)
columns_to_scan = sorted(
(predicate_columns & self.indices.keys()) | set(columns)
)
dfs = (
dm._evaluate_conjunction(
columns=columns_to_scan,
predicates=[conjunction],
date_as_object=date_as_object,
)
for conjunction in predicates
)
df = pd.concat(dfs)
index_name = df.index.name
df = (
df.loc[:, columns].reset_index().drop_duplicates().set_index(index_name)
)
else:
df = dm._evaluate_conjunction(
columns=columns, predicates=None, date_as_object=date_as_object,
)
return df
def _evaluate_conjunction(
self, columns: List[str], predicates: PredicatesType, date_as_object: bool
) -> pd.DataFrame:
"""
Evaluate all predicates related to `columns` to "AND".
Parameters
----------
columns:
A list of all columns, including query and index columns.
predicates:
            Optional list of predicates, like [[('x', '>', 0), ...]], that are used
to filter the resulting DataFrame, possibly using predicate pushdown,
if supported by the file format.
Predicates are expressed in disjunctive normal form (DNF). This means
that the innermost tuple describes a single column predicate. These
inner predicates are all combined with a conjunction (AND) into a
larger predicate. The most outer list then combines all predicates
with a disjunction (OR). By this, we should be able to express all
kinds of predicates that are possible using boolean logic.
Available operators are: `==`, `!=`, `<=`, `>=`, `<`, `>` and `in`.
        date_as_object: bool
Load pyarrow.date{32,64} columns as ``object`` columns in Pandas
instead of using ``np.datetime64`` to preserve their type. While
this improves type-safety, this comes at a performance cost.
Returns
-------
pd.DataFrame: df_result
A DataFrame containing all indices for which `predicates` holds true.
"""
non_index_columns = set(columns) - self.indices.keys()
if non_index_columns:
if non_index_columns & set(self.partition_keys):
raise RuntimeError(
"Partition indices not loaded. Please call `DatasetMetadata.load_partition_indices` first."
)
raise ValueError(
"Unknown index columns: {}".format(", ".join(sorted(non_index_columns)))
)
dfs = []
for col in columns:
df = pd.DataFrame(
self.indices[col].as_flat_series(
partitions_as_index=True,
date_as_object=date_as_object,
predicates=predicates,
)
)
dfs.append(df)
# dfs contains one df per index column. Each df stores indices filtered by `predicates` for each column.
# Performing an inner join on these dfs yields the resulting "AND" evaluation for all of these predicates.
# We start joining with the smallest dataframe, therefore the sorting.
dfs_sorted = sorted(dfs, key=len)
df_result = dfs_sorted.pop(0)
for df in dfs_sorted:
df_result = df_result.merge(
df, left_index=True, right_index=True, copy=False
)
return df_result
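The DNF predicate semantics spelled out in the docstring (outer list OR-ed, inner conjunctions AND-ed) can be sketched without pandas:

```python
import operator

OPS = {
    "==": operator.eq, "!=": operator.ne,
    "<": operator.lt, "<=": operator.le,
    ">": operator.gt, ">=": operator.ge,
    "in": lambda a, b: a in b,
}

def matches(row, predicates):
    # predicates is in DNF: [[(col, op, value), ...], ...]
    return any(
        all(OPS[op](row[col], value) for col, op, value in conjunction)
        for conjunction in predicates
    )

row = {"x": 3, "y": "a"}
print(matches(row, [[("x", ">", 0), ("y", "in", ["a", "b"])]]))  # True
print(matches(row, [[("x", "<", 0)], [("y", "==", "z")]]))       # False
```

The library evaluates the same logic per index column and joins the per-column hits; this sketch only shows the boolean structure.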
class DatasetMetadata(DatasetMetadataBase):
"""
    Container holding all metadata of the dataset.
"""
def __repr__(self):
return (
"DatasetMetadata(uuid={uuid}, "
"table_name={table_name}, "
"partition_keys={partition_keys}, "
"metadata_version={metadata_version}, "
"indices={indices}, "
"explicit_partitions={explicit_partitions})"
).format(
uuid=self.uuid,
table_name=self.table_name,
partition_keys=self.partition_keys,
metadata_version=self.metadata_version,
indices=list(self.indices.keys()),
explicit_partitions=self.explicit_partitions,
)
@staticmethod
def load_from_buffer(
buf, store: StoreInput, format: str = "json"
) -> "DatasetMetadata":
"""
Load a dataset from a (string) buffer.
Parameters
----------
buf:
Input to be parsed.
store:
Object that implements the .get method for file/object loading.
Returns
-------
DatasetMetadata:
Parsed metadata.
"""
        if format == "json":
            metadata = load_json(buf)
        elif format == "msgpack":
            metadata = unpackb(buf)
        else:
            raise ValueError("Unknown format: {}".format(format))
return DatasetMetadata.load_from_dict(metadata, store)
@staticmethod
def load_from_store(
uuid: str,
store: StoreInput,
load_schema: bool = True,
load_all_indices: bool = False,
) -> "DatasetMetadata":
"""
Load a dataset from a storage
Parameters
----------
uuid
UUID of the dataset.
store
Object that implements the .get method for file/object loading.
load_schema
Load table schema
load_all_indices
Load all registered indices into memory.
Returns
-------
dataset_metadata: :class:`~kartothek.core.dataset.DatasetMetadata`
Parsed metadata.
"""
key1 = naming.metadata_key_from_uuid(uuid)
store = ensure_store(store)
try:
value = store.get(key1)
metadata = load_json(value)
except KeyError:
key2 = naming.metadata_key_from_uuid(uuid, format="msgpack")
try:
value = store.get(key2)
metadata = unpackb(value)
except KeyError:
raise KeyError(
"Dataset does not exist. Tried {} and {}".format(key1, key2)
)
ds = DatasetMetadata.load_from_dict(metadata, store, load_schema=load_schema)
if load_all_indices:
ds = ds.load_all_indices(store)
return ds
@staticmethod
def load_from_dict(
dct: Dict, store: StoreInput, load_schema: bool = True
) -> "DatasetMetadata":
"""
Load dataset metadata from a dictionary and resolve any external includes.
Parameters
----------
dct
store
Object that implements the .get method for file/object loading.
load_schema
Load table schema
"""
# Use copy here to get an OrderedDict
metadata = copy.copy(dct)
if "metadata" not in metadata:
metadata["metadata"] = OrderedDict()
metadata_version = dct[naming.METADATA_VERSION_KEY]
dataset_uuid = dct[naming.UUID_KEY]
explicit_partitions = "partitions" in metadata
storage_keys = None
if not explicit_partitions:
storage_keys = DatasetMetadata.storage_keys(dataset_uuid, store)
partitions = _load_partitions_from_filenames(
store=store,
storage_keys=storage_keys,
metadata_version=metadata_version,
)
metadata["partitions"] = partitions
if metadata["partitions"]:
tables = [tab for tab in list(metadata["partitions"].values())[0]["files"]]
else:
table_set = set()
if storage_keys is None:
storage_keys = DatasetMetadata.storage_keys(dataset_uuid, store)
for key in storage_keys:
if key.endswith(naming.TABLE_METADATA_FILE):
table_set.add(key.split("/")[1])
tables = list(table_set)
schema = None
table_name = None
if tables:
table_name = tables[0]
if load_schema:
schema = read_schema_metadata(
dataset_uuid=dataset_uuid, store=store, table=table_name
)
metadata["schema"] = schema
if "partition_keys" not in metadata:
metadata["partition_keys"] = _get_partition_keys_from_partitions(
metadata["partitions"]
)
ds = DatasetMetadata.from_dict(
metadata, explicit_partitions=explicit_partitions
)
if table_name:
ds._table_name = table_name
return ds
@staticmethod
def from_buffer(buf: str, format: str = "json", explicit_partitions: bool = True):
if format == "json":
metadata = load_json(buf)
else:
metadata = unpackb(buf)
return DatasetMetadata.from_dict(
metadata, explicit_partitions=explicit_partitions
)
@staticmethod
def from_dict(dct: Dict, explicit_partitions: bool = True):
"""
Load dataset metadata from a dictionary.
This must have no external references. Otherwise use ``load_from_dict``
to have them resolved automatically.
"""
# Use the builder class for reconstruction to have a single point for metadata version changes
builder = DatasetMetadataBuilder(
uuid=dct[naming.UUID_KEY],
metadata_version=dct[naming.METADATA_VERSION_KEY],
explicit_partitions=explicit_partitions,
partition_keys=dct.get("partition_keys", None),
schema=dct.get("schema"),
)
for key, value in dct.get("metadata", {}).items():
builder.add_metadata(key, value)
for partition_label, part_dct in dct.get("partitions", {}).items():
builder.add_partition(
partition_label, Partition.from_dict(partition_label, part_dct)
)
for column, index_dct in dct.get("indices", {}).items():
if isinstance(index_dct, IndexBase):
builder.add_embedded_index(column, index_dct)
else:
builder.add_embedded_index(
column, ExplicitSecondaryIndex.from_v2(column, index_dct)
)
return builder.to_dataset()
def _get_type_from_meta(
schema: Optional[SchemaWrapper], column: str, default: Optional[pa.DataType],
) -> pa.DataType:
# use first schema that provides type information, since write path should ensure that types are normalized and
# equal
if schema is not None:
idx = schema.get_field_index(column)
return schema[idx].type
if default is not None:
return default
raise ValueError(
'Cannot find type information for partition column "{}"'.format(column)
)
def _empty_partition_indices(
partition_keys: List[str],
schema: Optional[SchemaWrapper],
default_dtype: pa.DataType,
):
indices = {}
for col in partition_keys:
arrow_type = _get_type_from_meta(schema, col, default_dtype)
indices[col] = PartitionIndex(column=col, index_dct={}, dtype=arrow_type)
return indices
def _construct_dynamic_index_from_partitions(
partitions: Dict[str, Partition],
schema: Optional[SchemaWrapper],
default_dtype: pa.DataType,
partition_keys: List[str],
) -> Dict[str, PartitionIndex]:
if len(partitions) == 0:
return _empty_partition_indices(partition_keys, schema, default_dtype)
def _get_files(part):
if isinstance(part, dict):
return part["files"]
else:
return part.files
# We exploit the fact that all tables are partitioned equally.
first_partition = next(
iter(partitions.values())
) # partitions is NOT empty here, see check above
first_partition_files = _get_files(first_partition)
if not first_partition_files:
return _empty_partition_indices(partition_keys, schema, default_dtype)
key_table = next(iter(first_partition_files.keys()))
storage_keys = (
(key, _get_files(part)[key_table]) for key, part in partitions.items()
)
_key_indices: Dict[str, Dict[str, Set[str]]] = defaultdict(_get_empty_index)
depth_indices = None
for partition_label, key in storage_keys:
_, _, indices, file_ = decode_key(key)
if (
file_ is not None
and key.endswith(PARQUET_FILE_SUFFIX)
and not key.endswith(EXTERNAL_INDEX_SUFFIX)
):
depth_indices = _check_index_depth(indices, depth_indices)
for column, value in indices:
_key_indices[column][value].add(partition_label)
new_indices = {}
for col, index_dct in _key_indices.items():
arrow_type = _get_type_from_meta(schema, col, default_dtype)
# convert defaultdicts into dicts
new_indices[col] = PartitionIndex(
column=col,
index_dct={k1: list(v1) for k1, v1 in index_dct.items()},
dtype=arrow_type,
)
return new_indices
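The function above builds an inverted index: for every partition, each `column=value` pair encoded in its file path is mapped back to the set of partition labels containing that value. An illustrative sketch of that core loop, with plain dicts standing in for the kartothek index classes:

```python
# Inverted-index construction in miniature: partition label -> index
# pairs in, column -> value -> sorted partition labels out.
from collections import defaultdict


def build_inverted_index(partition_indices):
    """partition_indices: mapping of partition label -> [(column, value), ...]."""
    index = defaultdict(lambda: defaultdict(set))
    for label, pairs in partition_indices.items():
        for column, value in pairs:
            index[column][value].add(label)
    # freeze the defaultdicts into plain dicts with sorted label lists
    return {
        col: {val: sorted(labels) for val, labels in vals.items()}
        for col, vals in index.items()
    }


keys = {
    "date=2021-01-01/part-0": [("date", "2021-01-01")],
    "date=2021-01-02/part-1": [("date", "2021-01-02")],
}
inverted = build_inverted_index(keys)
```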
def _get_partition_label(indices, filename, metadata_version):
return "/".join(
quote_indices(indices) + [filename.replace(PARQUET_FILE_SUFFIX, "")]
)
def _check_index_depth(indices, depth_indices):
if depth_indices is not None and len(indices) != depth_indices:
raise RuntimeError(
"Unknown file structure encountered. "
"Depth of filename indices is not equal for all partitions."
)
return len(indices)
def _get_partition_keys_from_partitions(partitions):
if len(partitions):
part = next(iter(partitions.values()))
files_dct = part["files"]
if files_dct:
key = next(iter(files_dct.values()))
_, _, indices, _ = decode_key(key)
if indices:
return [tup[0] for tup in indices]
return None
def _load_partitions_from_filenames(store, storage_keys, metadata_version):
partitions = defaultdict(_get_empty_partition)
depth_indices = None
for key in storage_keys:
dataset_uuid, table, indices, file_ = decode_key(key)
if file_ is not None and file_.endswith(PARQUET_FILE_SUFFIX):
# valid key example:
# <uuid>/<table>/<column_0>=<value_0>/.../<column_n>=<value_n>/part_label.parquet
depth_indices = _check_index_depth(indices, depth_indices)
partition_label = _get_partition_label(indices, file_, metadata_version)
partitions[partition_label]["files"][table] = key
return partitions
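`_load_partitions_from_filenames` relies on the `decode_key` contract: keys look like `<uuid>/<table>/<col_0>=<val_0>/.../<label>.parquet`. A simplified stand-in for that decoder (the real kartothek version also handles URL quoting and metadata versions, which this sketch ignores):

```python
# Toy version of the decode_key contract: split a storage key into
# (dataset_uuid, table, [(column, value), ...], filename).
def decode_key_sketch(key):
    parts = key.split("/")
    dataset_uuid, table, file_ = parts[0], parts[1], parts[-1]
    indices = [tuple(p.split("=", 1)) for p in parts[2:-1]]
    return dataset_uuid, table, indices, file_


uuid, table, indices, file_ = decode_key_sketch(
    "my-uuid/table/date=2021-01-01/part-0.parquet"
)
```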
def _get_empty_partition():
return {"files": {}, "metadata": {}}
def _get_empty_index():
return defaultdict(set)
def create_partition_key(
dataset_uuid: str,
table: str,
index_values: List[Tuple[str, str]],
filename: str = "data",
):
"""
Create partition key for a kartothek partition
Parameters
----------
dataset_uuid
table
index_values
filename
Example:
create_partition_key('my-uuid', 'testtable',
[('index1', 'value1'), ('index2', 'value2')])
returns 'my-uuid/testtable/index1=value1/index2=value2/data'
"""
key_components = [dataset_uuid, table]
index_path = quote_indices(index_values)
key_components.extend(index_path)
key_components.append(filename)
key = "/".join(key_components)
return key
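The key layout produced by `create_partition_key` can be reproduced in a few lines. The real `quote_indices` helper additionally URL-quotes values; this toy version assumes plain strings:

```python
# Self-contained sketch of the partition key layout:
# <uuid>/<table>/<col_0>=<val_0>/.../<filename>
def partition_key_sketch(dataset_uuid, table, index_values, filename="data"):
    index_path = ["{}={}".format(k, v) for k, v in index_values]
    return "/".join([dataset_uuid, table] + index_path + [filename])


key = partition_key_sketch(
    "my-uuid", "testtable", [("index1", "value1"), ("index2", "value2")]
)
```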
class DatasetMetadataBuilder(CopyMixin):
"""
Incrementally build up a dataset.
In contrast to a :class:`kartothek.core.dataset.DatasetMetadata` instance,
this object is mutable and may not be a full dataset (e.g. partitions don't
need to be fully materialised).
"""
def __init__(
self,
uuid: str,
metadata_version=naming.DEFAULT_METADATA_VERSION,
explicit_partitions=True,
partition_keys=None,
schema=None,
):
verify_metadata_version(metadata_version)
self.uuid = uuid
self.metadata: Dict = OrderedDict()
self.indices: Dict[str, IndexBase] = OrderedDict()
self.metadata_version = metadata_version
self.partitions: Dict[str, Partition] = OrderedDict()
self.partition_keys = partition_keys
self.schema = schema
self.explicit_partitions = explicit_partitions
_add_creation_time(self)
super(DatasetMetadataBuilder, self).__init__()
@staticmethod
def from_dataset(dataset):
dataset = copy.deepcopy(dataset)
ds_builder = DatasetMetadataBuilder(
uuid=dataset.uuid,
metadata_version=dataset.metadata_version,
explicit_partitions=dataset.explicit_partitions,
partition_keys=dataset.partition_keys,
schema=dataset.schema,
)
ds_builder.metadata = dataset.metadata
ds_builder.indices = dataset.indices
ds_builder.partitions = dataset.partitions
return ds_builder
def add_partition(self, name, partition):
"""
Add an (embedded) Partition.
Parameters
----------
name: str
Identifier of the partition.
partition: :class:`kartothek.core.partition.Partition`
The partition to add.
"""
if len(partition.files) > 1:
raise RuntimeError(
f"Dataset {self.uuid} has tables {sorted(partition.files.keys())} but read support for multi tabled dataset was dropped with kartothek 4.0."
)
self.partitions[name] = partition
return self
# TODO: maybe remove
def add_embedded_index(self, column, index):
"""
Embed an index into the metadata.
Parameters
----------
column: str
Name of the indexed column
index: kartothek.core.index.IndexBase
The actual index object
"""
if column != index.column:
# TODO Deprecate the column argument and take the column name directly from the index.
raise RuntimeError(
"The supplied index is not compatible with the supplied index."
)
self.indices[column] = index
def add_external_index(self, column, filename=None):
"""
Add a reference to an external index.
Parameters
----------
column: str
Name of the indexed column
Returns
-------
storage_key: str
The location where the external index should be stored.
"""
if filename is None:
filename = "{uuid}.{column_name}".format(uuid=self.uuid, column_name=column)
filename += naming.EXTERNAL_INDEX_SUFFIX
self.indices[column] = ExplicitSecondaryIndex(
column, index_storage_key=filename
)
return filename
def add_metadata(self, key, value):
"""
Add arbitrary key->value metadata.
Parameters
----------
key: str
value: str
"""
self.metadata[key] = value
def to_dict(self):
"""
Render the dataset to a dict.
Returns
-------
"""
factory = type(self.metadata)
dct = factory(
[
(naming.METADATA_VERSION_KEY, self.metadata_version),
(naming.UUID_KEY, self.uuid),
]
)
if self.indices:
dct["indices"] = {}
for column, index in self.indices.items():
if isinstance(index, str):
dct["indices"][column] = index
elif index.loaded:
dct["indices"][column] = index.to_dict()
else:
dct["indices"][column] = index.index_storage_key
if self.metadata:
dct["metadata"] = self.metadata
if self.explicit_partitions:
dct["partitions"] = factory()
for label, partition in self.partitions.items():
part_dict = partition.to_dict()
dct["partitions"][label] = part_dict
if self.partition_keys is not None:
dct["partition_keys"] = self.partition_keys
return dct
def to_json(self):
"""
Render the dataset to JSON.
Returns
-------
storage_key: str
The path where this metadata should be placed in the storage.
dataset_json: bytes
The rendered JSON for this dataset, encoded as UTF-8.
"""
return (
naming.metadata_key_from_uuid(self.uuid),
simplejson.dumps(self.to_dict()).encode("utf-8"),
)
def to_msgpack(self) -> Tuple[str, bytes]:
"""
Render the dataset to msgpack.
Returns
-------
storage_key: str
The path where this metadata should be placed in the storage.
dataset_msgpack: bytes
The rendered msgpack for this dataset.
"""
return (
naming.metadata_key_from_uuid(self.uuid, format="msgpack"),
packb(self.to_dict()),
)
def to_dataset(self) -> DatasetMetadata:
return DatasetMetadata(
uuid=self.uuid,
partitions=self.partitions,
metadata=self.metadata,
indices=self.indices,
metadata_version=self.metadata_version,
explicit_partitions=self.explicit_partitions,
partition_keys=self.partition_keys,
schema=self.schema,
)
def _add_creation_time(
dataset_object: Union[DatasetMetadataBase, DatasetMetadataBuilder]
):
if "creation_time" not in dataset_object.metadata:
creation_time = kartothek.core._time.datetime_utcnow().isoformat()
dataset_object.metadata["creation_time"] = creation_time | /regallager-0.0.1.tar.gz/regallager-0.0.1/kartothek/core/dataset.py | 0.850002 | 0.166506 | dataset.py | pypi |
import inspect
from io import StringIO
_PARAMETER_MAPPING = {
"store": """
store: Callable or str or simplekv.KeyValueStore
The store where we can find or store the dataset.
Can be either ``simplekv.KeyValueStore``, a storefact store url or a
generic Callable producing a ``simplekv.KeyValueStore``""",
"overwrite": """
overwrite: Optional[bool]
If True, allow overwrite of an existing dataset.""",
"label_merger": """
label_merger: Optional[Callable]
By default the shorter label of either the left or right partition is chosen
as the merged partition label. Supplying a callable here, allows you to override
the default behavior and create a new label from all input labels
(depending on the matches this might be more than two values)""",
"metadata_merger": """
metadata_merger: Optional[Callable]
By default partition metadata is combined using the :func:`~kartothek.io_components.utils.combine_metadata` function.
You can supply a callable here that implements a custom merge operation on the metadata dictionaries
(depending on the matches this might be more than two values).""",
"table": """
table: Optional[str]
The table to be loaded. If none is specified, the default 'table' is used.""",
"table_name": """
table_name:
The table name of the dataset to be loaded. This creates a namespace for
the partitioning like
`dataset_uuid/table_name/*`
This is to support legacy workflows. We recommend avoiding this option and using the default wherever possible.""",
"schema": """
schema: SchemaWrapper
The dataset table schema""",
"columns": """
columns
A subset of columns to be loaded.""",
"dispatch_by": """
dispatch_by: Optional[List[str]]
List of index columns to group and partition the jobs by.
There will be one job created for every observed index value
combination. This may result in either many very small partitions or in
few very large partitions, depending on the index you are using this on.
.. admonition:: Secondary indices
This is also useable in combination with secondary indices where
the physical file layout may not be aligned with the logically
requested layout. For optimal performance it is recommended to use
this for columns which can benefit from predicate pushdown since
the jobs will fetch their data individually and will *not* shuffle
data in memory / over network.""",
"df_serializer": """
df_serializer : Optional[kartothek.serialization.DataFrameSerializer]
A pandas DataFrame serialiser from `kartothek.serialization`""",
"output_dataset_uuid": """
output_dataset_uuid: Optional[str]
UUID of the newly created dataset""",
"output_dataset_metadata": """
output_dataset_metadata: Optional[Dict]
Metadata for the merged target dataset. Will be updated with a
`merge_datasets__pipeline` key that contains the source dataset uuids for
the merge.""",
"output_store": """
output_store : Union[Callable, str, simplekv.KeyValueStore]
If given, the resulting dataset is written to this store. By default
the input store.
Can be either `simplekv.KeyValueStore`, a storefact store url or a
generic Callable producing a ``simplekv.KeyValueStore``""",
"metadata": """
metadata : Optional[Dict]
A dictionary used to update the dataset metadata.""",
"dataset_uuid": """
dataset_uuid: str
The dataset UUID""",
"metadata_version": """
metadata_version: Optional[int]
The dataset metadata version""",
"partition_on": """
partition_on: List
Column names by which the dataset should be partitioned by physically.
These columns may later on be used as an Index to improve query performance.
Partition columns need to be present in all dataset tables.
Sensitive to ordering.""",
"predicate_pushdown_to_io": """
predicate_pushdown_to_io: bool
Push predicates through to the I/O layer, default True. Disable
this if you see problems with predicate pushdown for the given
file even if the file format supports it. Note that this option
only hides problems in the storage layer that need to be addressed
there.""",
"delete_scope": """
delete_scope: List[Dict]
This defines which partitions are replaced with the input and therefore
get deleted. It is a lists of query filters for the dataframe in the
form of a dictionary, e.g.: `[{'column_1': 'value_1'}, {'column_1': 'value_2'}]`.
Each query filter will be given to :func:`dataset.query` and the returned
partitions will be deleted. If no scope is given nothing will be deleted.
For `kartothek.io.dask.update.update_dataset.*` a delayed object resolving to
a list of dicts is also accepted.""",
"categoricals": """
categoricals
Load the provided subset of columns as a :class:`pandas.Categorical`.""",
"dates_as_object": """
dates_as_object: bool
Load pyarrow.date{32,64} columns as ``object`` columns in Pandas
instead of using ``np.datetime64`` to preserve their type. While
this improves type-safety, this comes at a performance cost.""",
"predicates": """
predicates: List[List[Tuple[str, str, Any]]]
Optional list of predicates, like `[[('x', '>', 0), ...]`, that are used
to filter the resulting DataFrame, possibly using predicate pushdown,
if supported by the file format.
This parameter is not compatible with filter_query.
Predicates are expressed in disjunctive normal form (DNF). This means
that the innermost tuple describes a single column predicate. These
inner predicates are all combined with a conjunction (AND) into a
larger predicate. The most outer list then combines all predicates
with a disjunction (OR). By this, we should be able to express all
kinds of predicates that are possible using boolean logic.
Available operators are: `==`, `!=`, `<=`, `>=`, `<`, `>` and `in`.
Filtering for missing values is supported with the operators `==`, `!=` and
`in` and values `np.nan` and `None` for float and string columns
respectively.
.. admonition:: Categorical data
When using order sensitive operators on categorical data we will
assume that the categories obey a lexicographical ordering.
This filtering may result in less than optimal performance and may
be slower than the evaluation on non-categorical data.
See also :ref:`predicate_pushdown` and :ref:`efficient_querying`""",
"secondary_indices": """
secondary_indices: List[str]
A list of columns for which a secondary index should be calculated.""",
"sort_partitions_by": """
sort_partitions_by: str
Provide a column after which the data should be sorted before storage to enable predicate pushdown.""",
"factory": """
factory: kartothek.core.factory.DatasetFactory
A DatasetFactory holding the store and UUID to the source dataset.""",
"partition_size": """
partition_size: Optional[int]
Dask bag partition size. Use larger numbers to decrease scheduler load and overhead, use smaller numbers for a
fine-grained scheduling and better resilience against worker errors.""",
"metadata_storage_format": """
metadata_storage_format: str
The data storage format to use. Currently supported are `.json` and `.msgpack.zstd`.""",
"df_generator": """
df_generator: Iterable[Union[pandas.DataFrame, Dict[str, pandas.DataFrame]]]
The dataframe(s) to be stored""",
"default_metadata_version": """
default_metadata_version: int
Default metadata version. (Note: only metadata versions greater than 3 are supported)""",
"delayed_tasks": """
delayed_tasks
A list of delayed objects where each element returns a :class:`pandas.DataFrame`.""",
"load_dataset_metadata": """
load_dataset_metadata: bool
Optional argument on whether to load the metadata or not""",
"dispatch_metadata": """
dispatch_metadata:
If True, attach dataset user metadata and dataset index information to
the MetaPartition instances generated.
Note: This feature is deprecated and this feature toggle is only
introduced to allow for easier transition.""",
}
def default_docs(func):
"""
A decorator which automatically takes care of default parameter
documentation for common pipeline factory parameters
"""
# TODO (Kshitij68) Bug: The parameters do not come in the same order as listed in the function signature. For example in `store_dataframes_as_dataset`
docs = func.__doc__
new_docs = ""
signature = inspect.signature(func)
try:
buf = StringIO(docs)
line = True
while line:
line = buf.readline()
if "Parameters" in line:
indentation_level = len(line) - len(line.lstrip())
artificial_param_docs = [line, buf.readline()]
# Include the `-----` line
for param in signature.parameters.keys():
doc = _PARAMETER_MAPPING.get(param, None)
if doc and param + ":" not in docs:
if not doc.endswith("\n"):
doc += "\n"
if doc.startswith("\n"):
doc = doc[1:]
doc_indentation_level = len(doc) - len(doc.lstrip())
whitespaces_to_add = indentation_level - doc_indentation_level
if whitespaces_to_add < 0:
raise RuntimeError(
f"Indentation detection went wrong for parameter {param}"
)
# Adjust the indentation dynamically
whitespaces = " " * whitespaces_to_add
doc = whitespaces + doc
doc = doc.replace("\n", "\n" + whitespaces).rstrip() + "\n"
# We are checking if the entire docstring associated with the function is present or not
if doc not in docs:
artificial_param_docs.append(doc)
new_docs += "".join(artificial_param_docs)
continue
new_docs = "".join([new_docs, line])
func.__doc__ = new_docs
except Exception as ex:
func.__doc__ = str(ex)
return func
import pickle
from functools import partial
from typing import Any, Union, cast
from simplekv import KeyValueStore
from storefact import get_store_from_url
from kartothek.core.naming import MAX_METADATA_VERSION, MIN_METADATA_VERSION
from kartothek.core.typing import StoreFactory, StoreInput
__all__ = ("ensure_store", "lazy_store")
def _verify_metadata_version(metadata_version):
"""
This is factored out to be an easier target for mocking
"""
if metadata_version < MIN_METADATA_VERSION:
raise NotImplementedError(
"Minimal supported metadata version is 4. You requested {metadata_version} instead.".format(
metadata_version=metadata_version
)
)
elif metadata_version > MAX_METADATA_VERSION:
raise NotImplementedError(
"Future metadata version `{}` encountered.".format(metadata_version)
)
def verify_metadata_version(*args, **kwargs):
return _verify_metadata_version(*args, **kwargs)
def ensure_string_type(obj: Union[bytes, str]) -> str:
"""
Parse object passed to the function to `str`.
If the object is of type `bytes`, it is decoded, otherwise a generic string representation of the object is
returned.
Parameters
----------
obj
object which is to be parsed to `str`
"""
if isinstance(obj, bytes):
return obj.decode()
else:
return str(obj)
def _is_simplekv_key_value_store(obj: Any) -> bool:
"""
Check whether ``obj`` is a ``simplekv.KeyValueStore``-like object.
simplekv uses duck-typing, e.g. for decorators. Therefore,
avoid `isinstance(store, KeyValueStore)`, as it would be unreliable. Instead,
only roughly verify that `store` looks like a KeyValueStore.
"""
return hasattr(obj, "iter_prefixes")
def ensure_store(store: StoreInput) -> KeyValueStore:
"""
Convert the ``store`` argument to a ``KeyValueStore``, without pickle test.
"""
# This function is often used in an eager context where we may allow
# non-serializable stores, so skip the pickle test.
if _is_simplekv_key_value_store(store):
return store
return lazy_store(store)()
def _identity(store: KeyValueStore) -> KeyValueStore:
"""
Helper function for `lazy_store`.
"""
return store
def lazy_store(store: StoreInput) -> StoreFactory:
"""
Create a store factory from the input. Acceptable inputs are
* Storefact store url
* Callable[[], KeyValueStore]
* KeyValueStore
If a KeyValueStore is provided, it is verified that the store is serializable
(i.e. that pickle.dumps does not raise an exception).
"""
if callable(store):
return cast(StoreFactory, store)
elif isinstance(store, str):
ret_val = partial(get_store_from_url, store)
ret_val = cast(StoreFactory, ret_val) # type: ignore
return ret_val
else:
if not _is_simplekv_key_value_store(store):
raise TypeError(
f"Provided incompatible store type. Got {type(store)} but expected {StoreInput}."
)
try:
pickle.dumps(store, pickle.HIGHEST_PROTOCOL)
except Exception as exc:
raise TypeError(
"""KeyValueStore not serializable.
Please consult https://kartothek.readthedocs.io/en/stable/spec/store_interface.html for more information."""
) from exc
return partial(_identity, store)
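`lazy_store` normalizes its input: callables pass through unchanged, while concrete stores are wrapped in a zero-argument factory after a pickle round-trip check. A standalone sketch of that normalization, with `DummyStore` as a stand-in for a real simplekv store:

```python
# Sketch of lazy_store's normalization: callable -> returned as-is,
# store object -> pickle-checked and wrapped in a factory.
import pickle
from functools import partial


class DummyStore(dict):
    def iter_prefixes(self, delimiter="/"):
        return iter(())


def lazy_store_sketch(store):
    if callable(store):
        return store
    # raises TypeError/PicklingError if the store is not serializable
    pickle.dumps(store, pickle.HIGHEST_PROTOCOL)
    return partial(lambda s: s, store)


factory = lazy_store_sketch(DummyStore())
assert factory() == {}  # the factory yields the original (empty) store
```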
import warnings
from functools import wraps
def deprecate_kwarg(old_arg_name, new_arg_name, mapping=None, stacklevel=2):
"""
Decorator to deprecate a keyword argument of a function.
Parameters
----------
old_arg_name : str
Name of argument in function to deprecate
new_arg_name : str or None
Name of preferred argument in function. Use None to raise warning that
``old_arg_name`` keyword is deprecated.
mapping : dict or callable
If mapping is present, use it to translate old arguments to
new arguments. A callable must do its own value checking;
values not found in a dict will be forwarded unchanged.
Examples
--------
The following deprecates 'cols', using 'columns' instead
>>> @deprecate_kwarg(old_arg_name='cols', new_arg_name='columns')
... def f(columns=''):
... print(columns)
...
>>> f(columns='should work ok')
should work ok
>>> f(cols='should raise warning')
FutureWarning: the 'cols' keyword is deprecated, use 'columns' instead
warnings.warn(msg, FutureWarning)
should raise warning
>>> f(cols='should error', columns="can't pass both")
TypeError: Can only specify 'cols' or 'columns', not both
>>> @deprecate_kwarg('old', 'new', {'yes': True, 'no': False})
... def f(new=False):
... print('yes!' if new else 'no!')
...
>>> f(old='yes')
FutureWarning: the old='yes' keyword is deprecated, use new=True instead
warnings.warn(msg, FutureWarning)
yes!
To raise a warning that a keyword will be removed entirely in the future
>>> @deprecate_kwarg(old_arg_name='cols', new_arg_name=None)
... def f(cols='', another_param=''):
... print(cols)
...
>>> f(cols='should raise warning')
FutureWarning: the 'cols' keyword is deprecated and will be removed in a
future version. Please take steps to stop the use of 'cols'
should raise warning
>>> f(another_param='should not raise warning')
should not raise warning
>>> f(cols='should raise warning', another_param='')
FutureWarning: the 'cols' keyword is deprecated and will be removed in a
future version. Please take steps to stop the use of 'cols'
should raise warning
"""
if mapping is not None and not hasattr(mapping, "get") and not callable(mapping):
raise TypeError(
"mapping from old to new argument values " "must be dict or callable!"
)
def _deprecate_kwarg(func):
@wraps(func)
def wrapper(*args, **kwargs):
old_arg_value = kwargs.pop(old_arg_name, None)
if new_arg_name is None and old_arg_value is not None:
msg = (
"the '{old_name}' keyword is deprecated and will be "
"removed in a future version. "
"Please take steps to stop the use of '{old_name}'"
).format(old_name=old_arg_name)
warnings.warn(msg, FutureWarning, stacklevel=stacklevel)
kwargs[old_arg_name] = old_arg_value
return func(*args, **kwargs)
if old_arg_value is not None:
if mapping is not None:
if hasattr(mapping, "get"):
new_arg_value = mapping.get(old_arg_value, old_arg_value)
else:
new_arg_value = mapping(old_arg_value)
msg = (
"the {old_name}={old_val!r} keyword is deprecated, "
"use {new_name}={new_val!r} instead"
).format(
old_name=old_arg_name,
old_val=old_arg_value,
new_name=new_arg_name,
new_val=new_arg_value,
)
else:
new_arg_value = old_arg_value
msg = (
"the '{old_name}' keyword is deprecated, "
"use '{new_name}' instead"
).format(old_name=old_arg_name, new_name=new_arg_name)
warnings.warn(msg, FutureWarning, stacklevel=stacklevel)
if kwargs.get(new_arg_name, None) is not None:
msg = (
"Can only specify '{old_name}' or '{new_name}', " "not both"
).format(old_name=old_arg_name, new_name=new_arg_name)
raise TypeError(msg)
else:
kwargs[new_arg_name] = new_arg_value
return func(*args, **kwargs)
return wrapper
return _deprecate_kwarg
import copy
from typing import TYPE_CHECKING, Any, Optional, TypeVar, cast
from kartothek.core.dataset import DatasetMetadata, DatasetMetadataBase
from kartothek.core.typing import StoreInput
from kartothek.core.utils import lazy_store
if TYPE_CHECKING:
from simplekv import KeyValueStore
__all__ = ("DatasetFactory",)
T = TypeVar("T", bound="DatasetFactory")
class DatasetFactory(DatasetMetadataBase):
"""
Container holding metadata caching storage access.
"""
_nullable_attributes = ["_cache_metadata", "_cache_store"]
def __init__(
self,
dataset_uuid: str,
store_factory: StoreInput,
load_schema: bool = True,
load_all_indices: bool = False,
) -> None:
"""
A dataset factory object which can be used to cache dataset load operations. This class should be the primary user entry point when
reading datasets.
Example using the eager backend:
.. code::
from functools import partial
from storefact import get_store_from_url
from kartothek.io.eager import read_table
ds_factory = DatasetFactory(
dataset_uuid="my_test_dataset",
store=partial(get_store_from_url, store_url)
)
df = read_table(factory=ds_factory)
Parameters
----------
dataset_uuid
The unique identifier for the dataset.
store_factory
A callable which creates a KeyValueStore object
load_schema
Load the schema information immediately.
load_all_indices
Load all indices immediately.
"""
self._cache_metadata: Optional[DatasetMetadata] = None
self._cache_store = None
self.store_factory = lazy_store(store_factory)
self.dataset_uuid = dataset_uuid
self.load_schema = load_schema
self._ds_callable = None
self.is_loaded = False
self.load_all_indices_flag = load_all_indices
def __repr__(self):
return "<DatasetFactory: uuid={} is_loaded={}>".format(
self.dataset_uuid, self.is_loaded
)
@property
def store(self) -> "KeyValueStore":
if self._cache_store is None:
self._cache_store = self.store_factory()
return self._cache_store
def _instantiate_metadata_cache(self: T) -> T:
if self._cache_metadata is None:
if self._ds_callable:
# backwards compat
self._cache_metadata = self._ds_callable()
else:
self._cache_metadata = DatasetMetadata.load_from_store(
uuid=self.dataset_uuid,
store=self.store,
load_schema=self.load_schema,
load_all_indices=self.load_all_indices_flag,
)
self.is_loaded = True
return self
@property
def dataset_metadata(self) -> DatasetMetadata:
self._instantiate_metadata_cache()
# The above line ensures non-None
return cast(DatasetMetadata, self._cache_metadata)
def invalidate(self) -> None:
self.is_loaded = False
self._cache_metadata = None
self._cache_store = None
def __getattr__(self, name):
# __getattr__ should only be called if the attribute cannot be found. if the
# attribute is None, it still falls back to this call
if name in self._nullable_attributes:
return object.__getattribute__(self, name)
self._instantiate_metadata_cache()
ds = getattr(self, "dataset_metadata")
return getattr(ds, name)
def __getstate__(self):
# remove cache
return {k: v for k, v in self.__dict__.items() if not k.startswith("_cache_")}
def __setstate__(self, state):
self.__init__(
dataset_uuid=state["dataset_uuid"],
store_factory=state["store_factory"],
load_schema=state["load_schema"],
load_all_indices=state["load_all_indices_flag"],
)
def __deepcopy__(self, memo) -> "DatasetFactory":
new_obj = DatasetFactory(
dataset_uuid=self.dataset_uuid,
store_factory=self.store_factory,
load_schema=self.load_schema,
load_all_indices=self.load_all_indices_flag,
)
if self._cache_metadata is not None:
new_obj._cache_metadata = copy.deepcopy(self._cache_metadata)
return new_obj
def load_index(self: T, column, store=None) -> T:
self._cache_metadata = self.dataset_metadata.load_index(column, self.store)
return self
def load_all_indices(self: T, store: Any = None) -> T:
self._cache_metadata = self.dataset_metadata.load_all_indices(self.store)
return self
def load_partition_indices(self: T) -> T:
self._cache_metadata = self.dataset_metadata.load_partition_indices()
return self
def _ensure_factory(
dataset_uuid: Optional[str],
store: Optional[StoreInput],
factory: Optional[DatasetFactory],
load_schema: bool = True,
) -> DatasetFactory:
if store is None and dataset_uuid is None and factory is not None:
return factory
elif store is not None and dataset_uuid is not None and factory is None:
return DatasetFactory(
dataset_uuid=dataset_uuid,
store_factory=lazy_store(store),
load_schema=load_schema,
)
else:
raise ValueError(
"Need to supply either a `factory` or `dataset_uuid` and `store`"
)
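The mutually-exclusive argument check in `_ensure_factory` boils down to "either a factory, or (uuid AND store), never both and never neither". A standalone sketch of that validation:

```python
# Either-or argument validation: exactly one of the two supply modes
# must be used, otherwise raise.
def resolve_inputs(dataset_uuid=None, store=None, factory=None):
    if factory is not None and dataset_uuid is None and store is None:
        return ("factory", factory)
    if factory is None and dataset_uuid is not None and store is not None:
        return ("pair", (dataset_uuid, store))
    raise ValueError(
        "Need to supply either a `factory` or `dataset_uuid` and `store`"
    )


assert resolve_inputs(factory="f") == ("factory", "f")
assert resolve_inputs(dataset_uuid="u", store="s") == ("pair", ("u", "s"))
```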
import typing
import attr
from kartothek.core.cube.constants import KTK_CUBE_UUID_SEPARATOR
from kartothek.core.dataset import _validate_uuid
from kartothek.utils.converters import (
converter_str,
converter_str_set,
converter_str_tupleset,
)
__all__ = ("Cube",)
def _validate_not_subset(of, allow_none=False):
"""
Create validator to check if an attribute is not a subset of ``of``.
Parameters
----------
of: str
Attribute name that the subject under validation should not be a subset of.
allow_none: bool
If True, a ``None`` value passes the check unchanged.
Returns
-------
validator: Callable
Validator that can be used for ``attr.ib``.
"""
def _v(instance, attribute, value):
if allow_none and value is None:
return
other_set = set(getattr(instance, of))
if isinstance(value, str):
my_set = {value}
else:
my_set = set(value)
share = my_set & other_set
if share:
raise ValueError(
"{attribute} cannot share columns with {of}, but share the following: {share}".format(
attribute=attribute.name, of=of, share=", ".join(sorted(share))
)
)
return _v
def _validate_subset(of, allow_none=False):
"""
Create validator to check that an attribute is a subset of ``of``.
Parameters
----------
of: str
Attribute name that the subject under validation should be a subset of.
allow_none: bool
If True, a ``None`` value passes the check unchanged.
Returns
-------
validator: Callable
Validator that can be used for ``attr.ib``.
"""
def _v(instance, attribute, value):
if allow_none and value is None:
return
other_set = set(getattr(instance, of))
if isinstance(value, str):
my_set = {value}
else:
my_set = set(value)
too_much = my_set - other_set
if too_much:
raise ValueError(
"{attribute} must be a subset of {of}, but it has additional values: {too_much}".format(
attribute=attribute.name,
of=of,
too_much=", ".join(sorted(too_much)),
)
)
return _v
def _validator_uuid(instance, attribute, value):
"""
Attr validator to validate if UUIDs are valid.
"""
_validator_uuid_freestanding(attribute.name, value)
def _validator_uuid_freestanding(name, value):
"""
Freestanding version of :meth:`_validator_uuid`.
"""
if not _validate_uuid(value):
raise ValueError(
'{name} ("{value}") is not compatible with kartothek'.format(
name=name, value=value
)
)
if value.find(KTK_CUBE_UUID_SEPARATOR) != -1:
raise ValueError(
'{name} ("{value}") must not contain UUID separator {sep}'.format(
name=name, value=value, sep=KTK_CUBE_UUID_SEPARATOR
)
)
def _validator_not_empty(instance, attribute, value):
"""
Attr validator to validate that a list is not empty.
"""
if len(value) == 0:
raise ValueError("{name} must not be empty".format(name=attribute.name))
@attr.s(frozen=True)
class Cube:
"""
OLAP-like cube that fuses multiple datasets.
Parameters
----------
dimension_columns: Tuple[str, ...]
Columns that span dimensions. This will imply index columns for the seed dataset, unless
the automatic index creation is suppressed via ``suppress_index_on``.
partition_columns: Tuple[str, ...]
Columns that are used to partition the data. They also create (implicit) primary indices.
uuid_prefix: str
All datasets that are part of the cube will have UUIDs of form ``'uuid_prefix++ktk_cube_dataset_id'``.
seed_dataset: str
Dataset that presents the ground truth regarding cells present in the cube.
index_columns: Tuple[str, ...]
Columns for which secondary indices will be created. They may also be part of non-seed datasets.
suppress_index_on: Tuple[str, ...]
Suppress auto-creation of an index on the given dimension columns. Must be a subset of ``dimension_columns``
(other columns are not subject to automatic index creation).
"""
dimension_columns = attr.ib(
converter=converter_str_tupleset,
type=typing.Tuple[str, ...],
validator=[_validator_not_empty],
)
partition_columns = attr.ib(
converter=converter_str_tupleset,
type=typing.Tuple[str, ...],
validator=[_validator_not_empty, _validate_not_subset("dimension_columns")],
)
uuid_prefix = attr.ib(
converter=converter_str, type=str, validator=[_validator_uuid]
)
seed_dataset = attr.ib(
converter=converter_str, default="seed", type=str, validator=[_validator_uuid]
)
index_columns = attr.ib(
converter=converter_str_set,
default=None,
type=typing.FrozenSet[str],
validator=[
_validate_not_subset("dimension_columns"),
_validate_not_subset("partition_columns"),
],
)
suppress_index_on = attr.ib(
converter=converter_str_set,
default=None,
type=typing.FrozenSet[str],
validator=[_validate_subset("dimension_columns", allow_none=True)],
)
def ktk_dataset_uuid(self, ktk_cube_dataset_id):
"""
Get the Kartothek dataset UUID for the given cube dataset ID, i.e. with the UUID prefix included.
Parameters
----------
ktk_cube_dataset_id: str
Dataset ID w/o prefix
Returns
-------
ktk_dataset_uuid: str
Prefixed dataset UUID for Kartothek.
Raises
------
ValueError
If ``ktk_cube_dataset_id`` is not a string or if it is not a valid UUID.
"""
ktk_cube_dataset_id = converter_str(ktk_cube_dataset_id)
_validator_uuid_freestanding("ktk_cube_dataset_id", ktk_cube_dataset_id)
return "{uuid_prefix}{sep}{ktk_cube_dataset_id}".format(
uuid_prefix=self.uuid_prefix,
sep=KTK_CUBE_UUID_SEPARATOR,
ktk_cube_dataset_id=ktk_cube_dataset_id,
)
@property
def ktk_index_columns(self):
"""
Set of all available index columns through Kartothek, primary and secondary.
"""
# FIXME: do not always add dimension columns. Also, check all users of this property!
return (
set(self.partition_columns)
| set(self.index_columns)
| (set(self.dimension_columns) - set(self.suppress_index_on))
)
def copy(self, **kwargs):
"""
Create a new cube specification w/ changed attributes.
This will not trigger any IO operation, but only affects the cube specification.
Parameters
----------
kwargs: Dict[str, Any]
Attributes that should be changed.
Returns
-------
cube: Cube
New abstract cube.
"""
return attr.evolve(self, **kwargs) | /regallager-0.0.1.tar.gz/regallager-0.0.1/kartothek/core/cube/cube.py | 0.79162 | 0.278904 | cube.py | pypi |
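The validators above are closures parameterized by the name of a sibling attribute, created once per `attr.ib`. The core of the not-subset check can be sketched without the attrs wiring (the function name and keyword defaults below are hypothetical, chosen for illustration):

```python
def validate_not_subset(other, value, *, name="value", of="other"):
    """Raise if any element of ``value`` also appears in ``other`` —
    the core of ``_validate_not_subset`` above, minus the attrs wiring."""
    # A bare string counts as a one-element set, as in the original.
    my_set = {value} if isinstance(value, str) else set(value)
    shared = my_set & set(other)
    if shared:
        raise ValueError(
            f"{name} cannot share columns with {of}, "
            f"but share the following: {', '.join(sorted(shared))}"
        )


validate_not_subset(["x", "y"], ["p", "q"])  # disjoint: passes silently
try:
    validate_not_subset(
        ["x", "y"], ["x", "q"], name="partition_columns", of="dimension_columns"
    )
except ValueError as exc:
    assert "share" in str(exc)
```

In the attrs version, `instance` and `attribute` are supplied by attrs at validation time, which is why the factory only needs the *name* of the other attribute, looked up via `getattr` when the validator runs.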
from functools import partial
from typing import cast
from kartothek.core.docs import default_docs
from kartothek.core.factory import _ensure_factory
from kartothek.core.naming import (
DEFAULT_METADATA_STORAGE_FORMAT,
DEFAULT_METADATA_VERSION,
SINGLE_TABLE,
)
from kartothek.core.uuid import gen_uuid
from kartothek.io_components.metapartition import (
MetaPartition,
parse_input_to_metapartition,
)
from kartothek.io_components.read import dispatch_metapartitions_from_factory
from kartothek.io_components.update import update_dataset_from_partitions
from kartothek.io_components.utils import (
_ensure_compatible_indices,
normalize_args,
raise_if_indices_overlap,
sort_values_categorical,
validate_partition_keys,
)
from kartothek.io_components.write import (
raise_if_dataset_exists,
store_dataset_from_partitions,
)
__all__ = (
"read_dataset_as_dataframes__iterator",
"update_dataset_from_dataframes__iter",
"store_dataframes_as_dataset__iter",
)
@default_docs
@normalize_args
def read_dataset_as_metapartitions__iterator(
dataset_uuid=None,
store=None,
columns=None,
predicate_pushdown_to_io=True,
categoricals=None,
dates_as_object: bool = True,
predicates=None,
factory=None,
dispatch_by=None,
):
"""
A Python iterator to retrieve a dataset from store where each
partition is loaded as a :class:`~kartothek.io_components.metapartition.MetaPartition`.
.. seealso:
:func:`~kartothek.io_components.read.read_dataset_as_dataframes__iterator`
Parameters
----------
"""
ds_factory = _ensure_factory(
dataset_uuid=dataset_uuid, store=store, factory=factory,
)
store = ds_factory.store
mps = dispatch_metapartitions_from_factory(
ds_factory, predicates=predicates, dispatch_by=dispatch_by,
)
for mp in mps:
if dispatch_by is not None:
mp = MetaPartition.concat_metapartitions(
[
mp_inner.load_dataframes(
store=store,
columns=columns,
categoricals=categoricals,
predicate_pushdown_to_io=predicate_pushdown_to_io,
predicates=predicates,
)
for mp_inner in mp
]
)
else:
mp = cast(MetaPartition, mp)
mp = mp.load_dataframes(
store=store,
columns=columns,
categoricals=categoricals,
predicate_pushdown_to_io=predicate_pushdown_to_io,
dates_as_object=dates_as_object,
predicates=predicates,
)
yield mp
@default_docs
@normalize_args
def read_dataset_as_dataframes__iterator(
dataset_uuid=None,
store=None,
columns=None,
predicate_pushdown_to_io=True,
categoricals=None,
dates_as_object: bool = True,
predicates=None,
factory=None,
dispatch_by=None,
):
"""
A Python iterator to retrieve a dataset from store where each
partition is loaded as a :class:`~pandas.DataFrame`.
Parameters
----------
Returns
-------
generator
A generator yielding a dictionary for each partition. The dictionary
keys are the in-partition file labels and the values are the
corresponding dataframes.
Examples
--------
Dataset in store contains two partitions with two files each
.. code ::
>>> import storefact
>>> from kartothek.io.iter import read_dataset_as_dataframes__iterator
>>> store = storefact.get_store_from_url('s3://bucket_with_dataset')
>>> dataframes = read_dataset_as_dataframes__iterator('dataset_uuid', store)
>>> next(dataframes)
[
# First partition
{'core': pd.DataFrame, 'lookup': pd.DataFrame}
]
>>> next(dataframes)
[
# Second partition
{'core': pd.DataFrame, 'lookup': pd.DataFrame}
]
"""
mp_iter = read_dataset_as_metapartitions__iterator(
dataset_uuid=dataset_uuid,
store=store,
columns=columns,
predicate_pushdown_to_io=predicate_pushdown_to_io,
categoricals=categoricals,
dates_as_object=dates_as_object,
predicates=predicates,
factory=factory,
dispatch_by=dispatch_by,
)
for mp in mp_iter:
yield mp.data
@default_docs
@normalize_args
def update_dataset_from_dataframes__iter(
df_generator,
store=None,
dataset_uuid=None,
delete_scope=None,
metadata=None,
df_serializer=None,
metadata_merger=None,
default_metadata_version=DEFAULT_METADATA_VERSION,
partition_on=None,
sort_partitions_by=None,
secondary_indices=None,
factory=None,
table_name: str = SINGLE_TABLE,
):
"""
Update a kartothek dataset in store iteratively, using a generator of dataframes.
Useful for datasets which do not fit into memory.
Parameters
----------
Returns
-------
The dataset metadata object (:class:`~kartothek.core.dataset.DatasetMetadata`).
See Also
--------
:ref:`mutating_datasets`
"""
ds_factory, metadata_version, partition_on = validate_partition_keys(
dataset_uuid=dataset_uuid,
store=store,
ds_factory=factory,
default_metadata_version=default_metadata_version,
partition_on=partition_on,
)
secondary_indices = _ensure_compatible_indices(ds_factory, secondary_indices)
if sort_partitions_by: # Define function which sorts each partition by column
sort_partitions_by_fn = partial(
sort_values_categorical, columns=sort_partitions_by
)
new_partitions = []
for df in df_generator:
mp = parse_input_to_metapartition(
df, metadata_version=metadata_version, table_name=table_name,
)
if sort_partitions_by:
mp = mp.apply(sort_partitions_by_fn)
if partition_on:
mp = mp.partition_on(partition_on=partition_on)
if secondary_indices:
mp = mp.build_indices(columns=secondary_indices)
# Store dataframe, thereby clearing up the dataframe from the `mp` metapartition
mp = mp.store_dataframes(
store=store, df_serializer=df_serializer, dataset_uuid=dataset_uuid
)
new_partitions.append(mp)
return update_dataset_from_partitions(
new_partitions,
store_factory=store,
dataset_uuid=dataset_uuid,
ds_factory=ds_factory,
delete_scope=delete_scope,
metadata=metadata,
metadata_merger=metadata_merger,
)
@default_docs
@normalize_args
def store_dataframes_as_dataset__iter(
df_generator,
store,
dataset_uuid=None,
metadata=None,
partition_on=None,
df_serializer=None,
overwrite=False,
metadata_storage_format=DEFAULT_METADATA_STORAGE_FORMAT,
metadata_version=DEFAULT_METADATA_VERSION,
secondary_indices=None,
table_name: str = SINGLE_TABLE,
):
"""
Store ``pd.DataFrame`` objects iteratively as a partitioned dataset with multiple tables (files).
Useful for datasets which do not fit into memory.
Parameters
----------
Returns
-------
dataset: kartothek.core.dataset.DatasetMetadata
The stored dataset.
"""
if dataset_uuid is None:
dataset_uuid = gen_uuid()
if not overwrite:
raise_if_dataset_exists(dataset_uuid=dataset_uuid, store=store)
raise_if_indices_overlap(partition_on, secondary_indices)
new_partitions = []
for df in df_generator:
mp = parse_input_to_metapartition(
df, metadata_version=metadata_version, table_name=table_name
)
if partition_on:
mp = mp.partition_on(partition_on)
if secondary_indices:
mp = mp.build_indices(secondary_indices)
# Store dataframe, thereby clearing up the dataframe from the `mp` metapartition
mp = mp.store_dataframes(
store=store, dataset_uuid=dataset_uuid, df_serializer=df_serializer
)
# Add `kartothek.io_components.metapartition.MetaPartition` object to list to track partitions
new_partitions.append(mp)
# Store metadata and return `kartothek.DatasetMetadata` object
return store_dataset_from_partitions(
partition_list=new_partitions,
dataset_uuid=dataset_uuid,
store=store,
dataset_metadata=metadata,
metadata_storage_format=metadata_storage_format,
) | /regallager-0.0.1.tar.gz/regallager-0.0.1/kartothek/io/iter.py | 0.890014 | 0.205117 | iter.py | pypi |
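Both `__iter` pipelines above follow the same streaming shape: consume the generator one dataframe at a time, store each partition immediately, and keep only the lightweight metapartition records until the final commit. A schematic of that pattern with plain Python stand-ins (the helper names `store_iteratively`, `process`, and `commit` are hypothetical, not kartothek APIs):

```python
def store_iteratively(chunks, process, commit):
    """Schematic of the __iter pipelines above: process each chunk as it
    arrives, retain only small bookkeeping records, then commit once."""
    records = []
    for chunk in chunks:
        # In kartothek this step is partition_on / build_indices /
        # store_dataframes, which drops the dataframe from the record.
        records.append(process(chunk))
    # In kartothek this is store_dataset_from_partitions /
    # update_dataset_from_partitions writing the metadata.
    return commit(records)


def chunk_source():
    # Stand-in for a generator of DataFrames too large to hold at once.
    for i in range(3):
        yield list(range(i * 2, i * 2 + 2))


result = store_iteratively(
    chunk_source(),
    process=lambda df: {"rows": len(df)},  # keep metadata, drop the data
    commit=lambda recs: sum(r["rows"] for r in recs),
)
assert result == 6
```

The key property is that at most one chunk's data is alive at a time, which is exactly why these functions are documented as "useful for datasets which do not fit into memory".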
from functools import partial
from typing import List, Optional, Sequence
import dask
from dask import delayed
from dask.delayed import Delayed
from kartothek.core import naming
from kartothek.core.docs import default_docs
from kartothek.core.factory import _ensure_factory
from kartothek.core.naming import DEFAULT_METADATA_VERSION
from kartothek.core.typing import StoreInput
from kartothek.core.utils import lazy_store
from kartothek.core.uuid import gen_uuid
from kartothek.io_components.delete import (
delete_common_metadata,
delete_indices,
delete_top_level_metadata,
)
from kartothek.io_components.gc import delete_files, dispatch_files_to_gc
from kartothek.io_components.metapartition import (
SINGLE_TABLE,
MetaPartition,
parse_input_to_metapartition,
)
from kartothek.io_components.read import dispatch_metapartitions_from_factory
from kartothek.io_components.update import update_dataset_from_partitions
from kartothek.io_components.utils import (
_ensure_compatible_indices,
normalize_arg,
normalize_args,
raise_if_indices_overlap,
validate_partition_keys,
)
from kartothek.io_components.write import (
raise_if_dataset_exists,
store_dataset_from_partitions,
write_partition,
)
from ._utils import (
_cast_categorical_to_index_cat,
_get_data,
_maybe_get_categoricals_from_index,
map_delayed,
)
__all__ = (
"delete_dataset__delayed",
"garbage_collect_dataset__delayed",
"read_dataset_as_delayed",
"update_dataset_from_delayed",
"store_delayed_as_dataset",
)
def _delete_all_additional_metadata(dataset_factory):
delete_indices(dataset_factory=dataset_factory)
delete_common_metadata(dataset_factory=dataset_factory)
def _delete_tl_metadata(dataset_factory, *args):
"""
This function serves as a collector for delayed objects and therefore
accepts additional arguments which are not used.
"""
delete_top_level_metadata(dataset_factory=dataset_factory)
@default_docs
@normalize_args
def delete_dataset__delayed(dataset_uuid=None, store=None, factory=None):
"""
Parameters
----------
"""
dataset_factory = _ensure_factory(
dataset_uuid=dataset_uuid, store=store, factory=factory, load_schema=False,
)
gc = garbage_collect_dataset__delayed(factory=dataset_factory)
mps = dispatch_metapartitions_from_factory(dataset_factory)
delayed_dataset_uuid = delayed(_delete_all_additional_metadata)(
dataset_factory=dataset_factory
)
mps = map_delayed(
MetaPartition.delete_from_store,
mps,
store=store,
dataset_uuid=dataset_factory.dataset_uuid,
)
return delayed(_delete_tl_metadata)(dataset_factory, mps, gc, delayed_dataset_uuid)
@default_docs
@normalize_args
def garbage_collect_dataset__delayed(
dataset_uuid: Optional[str] = None,
store: StoreInput = None,
chunk_size: int = 100,
factory=None,
) -> List[Delayed]:
"""
Remove auxiliary files that are no longer tracked by the dataset.
These files include indices that are no longer referenced by the metadata
as well as files in the directories of the tables that are no longer
referenced. The latter is only applied to static datasets.
Parameters
----------
chunk_size
Number of files that should be deleted in a single job.
"""
ds_factory = _ensure_factory(
dataset_uuid=dataset_uuid, store=store, factory=factory,
)
nested_files = dispatch_files_to_gc(
dataset_uuid=None, store_factory=None, chunk_size=chunk_size, factory=ds_factory
)
return list(
map_delayed(delete_files, nested_files, store_factory=ds_factory.store_factory)
)
def _load_and_concat_metapartitions_inner(mps, args, kwargs):
return MetaPartition.concat_metapartitions(
[mp.load_dataframes(*args, **kwargs) for mp in mps]
)
def _load_and_concat_metapartitions(list_of_mps, *args, **kwargs):
return map_delayed(
_load_and_concat_metapartitions_inner, list_of_mps, args=args, kwargs=kwargs
)
@default_docs
@normalize_args
def read_dataset_as_delayed_metapartitions(
dataset_uuid=None,
store=None,
columns=None,
predicate_pushdown_to_io=True,
categoricals: Optional[Sequence[str]] = None,
dates_as_object: bool = True,
predicates=None,
factory=None,
dispatch_by=None,
):
"""
A collection of dask.delayed objects to retrieve a dataset from store where each
partition is loaded as a :class:`~kartothek.io_components.metapartition.MetaPartition`.
.. seealso:
:func:`~kartothek.io.dask.read_dataset_as_delayed`
Parameters
----------
"""
ds_factory = _ensure_factory(
dataset_uuid=dataset_uuid, store=store, factory=factory,
)
store = ds_factory.store_factory
mps = dispatch_metapartitions_from_factory(
dataset_factory=ds_factory, predicates=predicates, dispatch_by=dispatch_by,
)
if dispatch_by is not None:
mps = _load_and_concat_metapartitions(
mps,
store=store,
columns=columns,
categoricals=categoricals,
predicate_pushdown_to_io=predicate_pushdown_to_io,
dates_as_object=dates_as_object,
predicates=predicates,
)
else:
mps = map_delayed(
MetaPartition.load_dataframes,
mps,
store=store,
columns=columns,
categoricals=categoricals,
predicate_pushdown_to_io=predicate_pushdown_to_io,
dates_as_object=dates_as_object,
predicates=predicates,
)
categoricals_from_index = _maybe_get_categoricals_from_index(
ds_factory, categoricals
)
if categoricals_from_index:
mps = map_delayed(
partial( # type: ignore
MetaPartition.apply,
func=partial( # type: ignore
_cast_categorical_to_index_cat, categories=categoricals_from_index
),
type_safe=True,
),
mps,
)
return list(mps)
@default_docs
def read_dataset_as_delayed(
dataset_uuid=None,
store=None,
columns=None,
predicate_pushdown_to_io=True,
categoricals=None,
dates_as_object: bool = True,
predicates=None,
factory=None,
dispatch_by=None,
):
"""
A collection of dask.delayed objects to retrieve a dataset from store
where each partition is loaded as a :class:`~pandas.DataFrame`.
Parameters
----------
"""
mps = read_dataset_as_delayed_metapartitions(
dataset_uuid=dataset_uuid,
store=store,
factory=factory,
columns=columns,
predicate_pushdown_to_io=predicate_pushdown_to_io,
categoricals=categoricals,
dates_as_object=dates_as_object,
predicates=predicates,
dispatch_by=dispatch_by,
)
return list(map_delayed(_get_data, mps))
@default_docs
def update_dataset_from_delayed(
delayed_tasks: List[Delayed],
store=None,
dataset_uuid=None,
delete_scope=None,
metadata=None,
df_serializer=None,
metadata_merger=None,
default_metadata_version=DEFAULT_METADATA_VERSION,
partition_on=None,
sort_partitions_by=None,
secondary_indices=None,
factory=None,
):
"""
A dask.delayed graph to add and store a list of dictionaries containing
dataframes to a kartothek dataset in store. The input should be a list
(or splitter pipeline) containing
:class:`~kartothek.io_components.metapartition.MetaPartition`. If you want to use this
pipeline step for just deleting partitions without adding new ones you
have to give an empty meta partition as input (``[MetaPartition(None)]``).
Parameters
----------
See Also
--------
:ref:`mutating_datasets`
"""
partition_on = normalize_arg("partition_on", partition_on)
store = normalize_arg("store", store)
secondary_indices = normalize_arg("secondary_indices", secondary_indices)
delete_scope = dask.delayed(normalize_arg)("delete_scope", delete_scope)
ds_factory, metadata_version, partition_on = validate_partition_keys(
dataset_uuid=dataset_uuid,
store=store,
default_metadata_version=default_metadata_version,
partition_on=partition_on,
ds_factory=factory,
)
secondary_indices = _ensure_compatible_indices(ds_factory, secondary_indices)
mps = map_delayed(
write_partition,
delayed_tasks,
secondary_indices=secondary_indices,
metadata_version=metadata_version,
partition_on=partition_on,
store_factory=store,
df_serializer=df_serializer,
dataset_uuid=dataset_uuid,
sort_partitions_by=sort_partitions_by,
)
return dask.delayed(update_dataset_from_partitions)(
mps,
store_factory=store,
dataset_uuid=dataset_uuid,
ds_factory=ds_factory,
delete_scope=delete_scope,
metadata=metadata,
metadata_merger=metadata_merger,
)
@default_docs
@normalize_args
def store_delayed_as_dataset(
delayed_tasks: List[Delayed],
store,
dataset_uuid=None,
metadata=None,
df_serializer=None,
overwrite=False,
metadata_merger=None,
metadata_version=naming.DEFAULT_METADATA_VERSION,
partition_on=None,
metadata_storage_format=naming.DEFAULT_METADATA_STORAGE_FORMAT,
table_name: str = SINGLE_TABLE,
secondary_indices=None,
) -> Delayed:
"""
Transform and store a list of dictionaries containing
dataframes to a kartothek dataset in store.
Parameters
----------
"""
store = lazy_store(store)
if dataset_uuid is None:
dataset_uuid = gen_uuid()
if not overwrite:
raise_if_dataset_exists(dataset_uuid=dataset_uuid, store=store)
raise_if_indices_overlap(partition_on, secondary_indices)
input_to_mps = partial(
parse_input_to_metapartition,
metadata_version=metadata_version,
table_name=table_name,
)
mps = map_delayed(input_to_mps, delayed_tasks)
if partition_on:
mps = map_delayed(MetaPartition.partition_on, mps, partition_on=partition_on)
if secondary_indices:
mps = map_delayed(MetaPartition.build_indices, mps, columns=secondary_indices)
mps = map_delayed(
MetaPartition.store_dataframes,
mps,
store=store,
df_serializer=df_serializer,
dataset_uuid=dataset_uuid,
)
return delayed(store_dataset_from_partitions)(
mps,
dataset_uuid=dataset_uuid,
store=store,
dataset_metadata=metadata,
metadata_merger=metadata_merger,
metadata_storage_format=metadata_storage_format,
) | /regallager-0.0.1.tar.gz/regallager-0.0.1/kartothek/io/dask/delayed.py | 0.887082 | 0.18881 | delayed.py | pypi |
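In `delete_dataset__delayed`, `_delete_tl_metadata` accepts and ignores `*args` purely so that, in the delayed graph, the top-level metadata delete depends on (and therefore runs after) every per-partition delete and the garbage collection. A minimal sketch of that barrier trick with plain callables instead of `dask.delayed` (all names below are hypothetical stand-ins):

```python
def barrier(final_step):
    """Wrap ``final_step`` so it runs only after its (ignored) dependencies
    have been produced — mirroring _delete_tl_metadata's *args trick."""

    def _run(target, *deps):  # deps exist only to enforce ordering
        return final_step(target)

    return _run


events = []
delete_partition = lambda p: events.append(f"del {p}") or p
delete_top_level = barrier(lambda ds: events.append(f"del metadata of {ds}"))

# Eagerly evaluated here; in a delayed graph the scheduler would only
# start delete_top_level once every dep in `deps` has completed.
deps = [delete_partition(p) for p in ("p0", "p1")]
delete_top_level("my-dataset", *deps)
assert events == ["del p0", "del p1", "del metadata of my-dataset"]
```

Ordering matters for crash safety: if the top-level metadata were removed first and a partition delete failed, the dataset's files would be orphaned with no metadata left to find them.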
import logging
from functools import partial
from typing import List, Union
import dask.dataframe as dd
import pandas as pd
_logger = logging.getLogger(__name__)
_PAYLOAD_COL = "__ktk_shuffle_payload"
try:
# Technically distributed is an optional dependency
from distributed.protocol import deserialize_bytes, serialize_bytes
HAS_DISTRIBUTED = True
except ImportError:
HAS_DISTRIBUTED = False
serialize_bytes = None
deserialize_bytes = None
__all__ = (
"pack_payload_pandas",
"pack_payload",
"unpack_payload_pandas",
"unpack_payload",
)
def pack_payload_pandas(partition: pd.DataFrame, group_key: List[str]) -> pd.DataFrame:
if not HAS_DISTRIBUTED:
_logger.warning(
"Shuffle payload columns cannot be compressed since distributed is not installed."
)
return partition
if partition.empty:
res = partition[group_key]
res[_PAYLOAD_COL] = b""
else:
res = partition.groupby(
group_key,
sort=False,
observed=True,
# Keep the as_index s.t. the group values are not dropped. With this
# the behaviour seems to be consistent along pandas versions
as_index=True,
).apply(lambda x: pd.Series({_PAYLOAD_COL: serialize_bytes(x)}))
res = res.reset_index()
return res
def pack_payload(df: dd.DataFrame, group_key: Union[List[str], str]) -> dd.DataFrame:
"""
Pack all payload columns (everything except of group_key) into a single
columns. This column will contain a single byte string containing the
serialized and compressed payload data. The payload data is just dead weight
when reshuffling. By compressing it once before the shuffle starts, this
saves a lot of memory and network/disk IO.
Example::
>>> import pandas as pd
... import dask.dataframe as dd
... from kartothek.io.dask.compression import pack_payload
...
... df = pd.DataFrame({"A": [1, 1] * 2 + [2, 2] * 2 + [3, 3] * 2, "B": range(12)})
... ddf = dd.from_pandas(df, npartitions=2)
>>> ddf.partitions[0].compute()
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 2 4
5 2 5
>>> pack_payload(ddf, "A").partitions[0].compute()
A __ktk_shuffle_payload
0 1 b'\x03\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x03...
1 2 b'\x03\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x03...
See also https://github.com/dask/dask/pull/6259
"""
if (
# https://github.com/pandas-dev/pandas/issues/34455
isinstance(df._meta.index, pd.Float64Index)
# TODO: Try to find out what's going on and file a bug report
# For datetime indices the apply seems to be corrupt
# s.t. apply(lambda x:x) returns different values
or isinstance(df._meta.index, pd.DatetimeIndex)
):
return df
if not HAS_DISTRIBUTED:
_logger.warning(
"Shuffle payload columns cannot be compressed since distributed is not installed."
)
return df
if not isinstance(group_key, list):
group_key = [group_key]
packed_meta = df._meta[group_key]
packed_meta[_PAYLOAD_COL] = b""
_pack_payload = partial(pack_payload_pandas, group_key=group_key)
return df.map_partitions(_pack_payload, meta=packed_meta)
def unpack_payload_pandas(
partition: pd.DataFrame, unpack_meta: pd.DataFrame
) -> pd.DataFrame:
"""
Revert ``pack_payload_pandas`` and restore the packed payload.
Parameters
----------
unpack_meta:
A dataframe indicating the schema of the unpacked data. It is returned
(truncated to zero rows) in case the input partition is empty.
"""
if not HAS_DISTRIBUTED:
_logger.warning(
"Shuffle payload columns cannot be compressed since distributed is not installed."
)
return partition
if partition.empty:
return unpack_meta.iloc[:0]
mapped = partition[_PAYLOAD_COL].map(deserialize_bytes)
return pd.concat(mapped.values, copy=False, ignore_index=True)
def unpack_payload(df: dd.DataFrame, unpack_meta: pd.DataFrame) -> dd.DataFrame:
"""Revert payload packing of ``pack_payload`` and restores full dataframe."""
if (
# https://github.com/pandas-dev/pandas/issues/34455
isinstance(df._meta.index, pd.Float64Index)
# TODO: Try to find out what's going on and file a bug report
# For datetime indices the apply seems to be corrupt
# s.t. apply(lambda x:x) returns different values
or isinstance(df._meta.index, pd.DatetimeIndex)
):
return df
if not HAS_DISTRIBUTED:
_logger.warning(
"Shuffle payload columns cannot be compressed since distributed is not installed."
)
return df
return df.map_partitions(
unpack_payload_pandas, unpack_meta=unpack_meta, meta=unpack_meta
) | /regallager-0.0.1.tar.gz/regallager-0.0.1/kartothek/io/dask/compression.py | 0.668015 | 0.27853 | compression.py | pypi |
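`pack_payload_pandas` collapses every non-key column into a single serialized byte string per group, so the shuffle only moves the small key columns plus one opaque blob. The round trip can be sketched with plain dicts and `pickle` instead of pandas and `distributed.protocol` (the `pack`/`unpack` helpers and the `__payload` column name are illustrative stand-ins):

```python
import pickle
from itertools import groupby
from operator import itemgetter


def pack(rows, key):
    """Group rows by ``key`` and serialize each group's payload into one
    byte string — a stand-in for pack_payload_pandas."""
    rows = sorted(rows, key=itemgetter(key))  # groupby needs sorted input
    return [
        {key: k, "__payload": pickle.dumps(list(grp))}
        for k, grp in groupby(rows, key=itemgetter(key))
    ]


def unpack(packed):
    """Inverse of ``pack``: restore the original rows — a stand-in for
    unpack_payload_pandas."""
    out = []
    for rec in packed:
        out.extend(pickle.loads(rec["__payload"]))
    return out


rows = [{"A": 1, "B": 0}, {"A": 2, "B": 1}, {"A": 1, "B": 2}]
packed = pack(rows, "A")
assert len(packed) == 2  # one record per group value of "A"
assert sorted(unpack(packed), key=itemgetter("B")) == sorted(rows, key=itemgetter("B"))
```

The real implementation additionally compresses the blob via `distributed.protocol.serialize_bytes`, which is where the memory and network savings during reshuffling come from.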
from functools import partial
from typing import Optional, Sequence
import dask.bag as db
from dask.delayed import Delayed
from kartothek.core import naming
from kartothek.core.docs import default_docs
from kartothek.core.factory import DatasetFactory, _ensure_factory
from kartothek.core.typing import StoreInput
from kartothek.core.utils import lazy_store
from kartothek.core.uuid import gen_uuid
from kartothek.io.dask._utils import (
_cast_categorical_to_index_cat,
_get_data,
_maybe_get_categoricals_from_index,
)
from kartothek.io_components.index import update_indices_from_partitions
from kartothek.io_components.metapartition import (
SINGLE_TABLE,
MetaPartition,
parse_input_to_metapartition,
)
from kartothek.io_components.read import dispatch_metapartitions_from_factory
from kartothek.io_components.utils import normalize_args, raise_if_indices_overlap
from kartothek.io_components.write import (
raise_if_dataset_exists,
store_dataset_from_partitions,
)
__all__ = (
"read_dataset_as_dataframe_bag",
"store_bag_as_dataset",
"build_dataset_indices__bag",
)
def _store_dataset_from_partitions_flat(mpss, *args, **kwargs):
return store_dataset_from_partitions(
[mp for sublist in mpss for mp in sublist], *args, **kwargs
)
def _load_and_concat_metapartitions_inner(mps, *args, **kwargs):
return MetaPartition.concat_metapartitions(
[mp.load_dataframes(*args, **kwargs) for mp in mps]
)
@default_docs
def read_dataset_as_metapartitions_bag(
dataset_uuid=None,
store=None,
columns=None,
predicate_pushdown_to_io=True,
categoricals=None,
dates_as_object: bool = True,
predicates=None,
factory=None,
dispatch_by=None,
partition_size=None,
):
"""
Retrieve dataset as `dask.bag.Bag` of `MetaPartition` objects.
Parameters
----------
Returns
-------
dask.bag.Bag:
A dask.bag object containing the metapartitions.
"""
ds_factory = _ensure_factory(
dataset_uuid=dataset_uuid, store=store, factory=factory,
)
store = ds_factory.store_factory
mps = dispatch_metapartitions_from_factory(
dataset_factory=ds_factory, predicates=predicates, dispatch_by=dispatch_by,
)
mp_bag = db.from_sequence(mps, partition_size=partition_size)
if dispatch_by is not None:
mp_bag = mp_bag.map(
_load_and_concat_metapartitions_inner,
store=store,
columns=columns,
categoricals=categoricals,
predicate_pushdown_to_io=predicate_pushdown_to_io,
dates_as_object=dates_as_object,
predicates=predicates,
)
else:
mp_bag = mp_bag.map(
MetaPartition.load_dataframes,
store=store,
columns=columns,
categoricals=categoricals,
predicate_pushdown_to_io=predicate_pushdown_to_io,
dates_as_object=dates_as_object,
predicates=predicates,
)
categoricals_from_index = _maybe_get_categoricals_from_index(
ds_factory, categoricals
)
if categoricals_from_index:
mp_bag = mp_bag.map(
MetaPartition.apply,
func=partial(
_cast_categorical_to_index_cat, categories=categoricals_from_index
),
type_safe=True,
)
return mp_bag
@default_docs
def read_dataset_as_dataframe_bag(
dataset_uuid=None,
store=None,
columns=None,
predicate_pushdown_to_io=True,
categoricals=None,
dates_as_object: bool = True,
predicates=None,
factory=None,
dispatch_by=None,
partition_size=None,
):
"""
Retrieve data as dataframe from a :class:`dask.bag.Bag` of `MetaPartition` objects
Parameters
----------
Returns
-------
dask.bag.Bag
A dask.bag.Bag whose items are the dataframes loaded from the
individual partitions.
"""
mps = read_dataset_as_metapartitions_bag(
dataset_uuid=dataset_uuid,
store=store,
factory=factory,
columns=columns,
predicate_pushdown_to_io=predicate_pushdown_to_io,
categoricals=categoricals,
dates_as_object=dates_as_object,
predicates=predicates,
dispatch_by=dispatch_by,
partition_size=partition_size,
)
return mps.map(_get_data)
@default_docs
@normalize_args
def store_bag_as_dataset(
bag,
store,
dataset_uuid=None,
metadata=None,
df_serializer=None,
overwrite=False,
metadata_merger=None,
metadata_version=naming.DEFAULT_METADATA_VERSION,
partition_on=None,
metadata_storage_format=naming.DEFAULT_METADATA_STORAGE_FORMAT,
secondary_indices=None,
table_name: str = SINGLE_TABLE,
):
"""
Transform and store a dask.bag of dictionaries containing
dataframes to a kartothek dataset in store.
This is the dask.bag-equivalent of
:func:`~kartothek.io.dask.delayed.store_delayed_as_dataset`. See there
for more detailed documentation on the different possible input types.
Parameters
----------
bag: dask.bag.Bag
A dask bag containing dictionaries of dataframes or dataframes.
"""
store = lazy_store(store)
if dataset_uuid is None:
dataset_uuid = gen_uuid()
if not overwrite:
raise_if_dataset_exists(dataset_uuid=dataset_uuid, store=store)
raise_if_indices_overlap(partition_on, secondary_indices)
input_to_mps = partial(
parse_input_to_metapartition,
metadata_version=metadata_version,
table_name=table_name,
)
mps = bag.map(input_to_mps)
if partition_on:
mps = mps.map(MetaPartition.partition_on, partition_on=partition_on)
if secondary_indices:
mps = mps.map(MetaPartition.build_indices, columns=secondary_indices)
mps = mps.map(
MetaPartition.store_dataframes,
store=store,
df_serializer=df_serializer,
dataset_uuid=dataset_uuid,
)
aggregate = partial(
_store_dataset_from_partitions_flat,
dataset_uuid=dataset_uuid,
store=store,
dataset_metadata=metadata,
metadata_merger=metadata_merger,
metadata_storage_format=metadata_storage_format,
)
return mps.reduction(perpartition=list, aggregate=aggregate, split_every=False)
@default_docs
def build_dataset_indices__bag(
store: Optional[StoreInput],
dataset_uuid: Optional[str],
columns: Sequence[str],
partition_size: Optional[int] = None,
factory: Optional[DatasetFactory] = None,
) -> Delayed:
"""
Function which builds a :class:`~kartothek.core.index.ExplicitSecondaryIndex`.
This function loads the dataset, computes the requested indices and writes
the indices to the dataset. The dataset partitions itself are not mutated.
Parameters
----------
"""
ds_factory = _ensure_factory(
dataset_uuid=dataset_uuid, store=store, factory=factory,
)
assert ds_factory.schema is not None
cols_to_load = set(columns) & set(ds_factory.schema.names)
mps = dispatch_metapartitions_from_factory(ds_factory)
return (
db.from_sequence(seq=mps, partition_size=partition_size)
.map(
MetaPartition.load_dataframes,
store=ds_factory.store_factory,
columns=cols_to_load,
)
.map(MetaPartition.build_indices, columns=columns)
.map(MetaPartition.remove_dataframes)
.reduction(list, list, split_every=False, out_type=db.Bag)
.flatten()
.map_partitions(list)
.map_partitions(
update_indices_from_partitions, dataset_metadata_factory=ds_factory
)
)
# end of file: kartothek/io/dask/bag.py
import random
from typing import (
Callable,
Iterable,
List,
Mapping,
Optional,
Sequence,
SupportsFloat,
Union,
cast,
)
import dask
import dask.dataframe as dd
import numpy as np
import pandas as pd
from kartothek.core.common_metadata import empty_dataframe_from_schema
from kartothek.core.docs import default_docs
from kartothek.core.factory import DatasetFactory, _ensure_factory
from kartothek.core.naming import DEFAULT_METADATA_VERSION
from kartothek.core.typing import StoreFactory, StoreInput
from kartothek.io.dask.compression import pack_payload, unpack_payload_pandas
from kartothek.io_components.metapartition import (
_METADATA_SCHEMA,
SINGLE_TABLE,
MetaPartition,
parse_input_to_metapartition,
)
from kartothek.io_components.read import dispatch_metapartitions_from_factory
from kartothek.io_components.update import update_dataset_from_partitions
from kartothek.io_components.utils import (
_ensure_compatible_indices,
normalize_args,
validate_partition_keys,
)
from kartothek.io_components.write import (
raise_if_dataset_exists,
store_dataset_from_partitions,
write_partition,
)
from kartothek.serialization import DataFrameSerializer, PredicatesType
from ._shuffle import shuffle_store_dask_partitions
from ._utils import _maybe_get_categoricals_from_index
from .delayed import read_dataset_as_delayed
__all__ = (
"read_dataset_as_ddf",
"store_dataset_from_ddf",
"update_dataset_from_ddf",
"collect_dataset_metadata",
"hash_dataset",
)
@default_docs
@normalize_args
def read_dataset_as_ddf(
dataset_uuid=None,
store=None,
table=SINGLE_TABLE,
columns=None,
predicate_pushdown_to_io=True,
categoricals: Optional[Sequence[str]] = None,
dates_as_object: bool = True,
predicates=None,
factory=None,
dask_index_on=None,
dispatch_by=None,
):
"""
Retrieve a single table from a dataset as a partition-individual :class:`~dask.dataframe.DataFrame` instance.
Please take care when using categoricals with Dask. For index columns, this function will construct dataset
wide categoricals. For all other columns, Dask will determine the categories on a partition level and will
need to merge them when shuffling data.
Parameters
----------
dask_index_on: str
Reconstruct (and set) a dask index on the provided index column. Cannot be used
in conjunction with `dispatch_by`.
For details on performance, see also `dispatch_by`
"""
if dask_index_on is not None and not isinstance(dask_index_on, str):
raise TypeError(
f"The parameter `dask_index_on` must be a string but got {type(dask_index_on)}"
)
if dask_index_on is not None and dispatch_by is not None and len(dispatch_by) > 0:
raise ValueError(
"`read_dataset_as_ddf` got parameters `dask_index_on` and `dispatch_by`. "
"Note that `dispatch_by` can only be used if `dask_index_on` is None."
)
ds_factory = _ensure_factory(
dataset_uuid=dataset_uuid, store=store, factory=factory,
)
if isinstance(columns, dict):
columns = columns[table]
meta = _get_dask_meta_for_dataset(
ds_factory, columns, categoricals, dates_as_object
)
if columns is None:
columns = list(meta.columns)
# pass the factory down so that we can use factories instead of dataset_uuids
delayed_partitions = read_dataset_as_delayed(
factory=ds_factory,
columns=columns,
predicate_pushdown_to_io=predicate_pushdown_to_io,
categoricals=categoricals,
dates_as_object=dates_as_object,
predicates=predicates,
dispatch_by=dask_index_on if dask_index_on else dispatch_by,
)
if dask_index_on:
divisions = ds_factory.indices[dask_index_on].observed_values()
divisions.sort()
divisions = list(divisions)
divisions.append(divisions[-1])
return dd.from_delayed(
delayed_partitions, meta=meta, divisions=divisions
).set_index(dask_index_on, divisions=divisions, sorted=True)
else:
return dd.from_delayed(delayed_partitions, meta=meta)
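The divisions handling above relies on a dask convention: for `n` partitions, `divisions` must contain `n + 1` boundary values, so the last observed index value is appended once more to close the final partition. A small sketch of that construction with hypothetical observed index values:

```python
# Hypothetical observed index values, one dask partition per value
observed_values = [3, 1, 2]

divisions = sorted(observed_values)  # partition start boundaries
divisions.append(divisions[-1])      # repeat the last value to close the final partition
print(divisions)  # [1, 2, 3, 3] -> 3 partitions need 4 boundaries
```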
def _get_dask_meta_for_dataset(ds_factory, columns, categoricals, dates_as_object):
"""
Calculate a schema suitable for the dask dataframe meta from the dataset.
"""
table_schema = ds_factory.schema
meta = empty_dataframe_from_schema(
table_schema, columns=columns, date_as_object=dates_as_object
)
if categoricals:
meta = meta.astype({col: "category" for col in categoricals})
meta = dd.utils.clear_known_categories(meta, categoricals)
categoricals_from_index = _maybe_get_categoricals_from_index(
ds_factory, categoricals
)
if categoricals_from_index:
meta = meta.astype(categoricals_from_index)
return meta
def _shuffle_docs(func):
func.__doc__ += """
.. admonition:: Behavior with ``shuffle==False``
In the case without ``partition_on``, every dask partition is mapped to a single kartothek partition.
In the case with ``partition_on``, every dask partition is mapped to N kartothek partitions, where N
depends on the content of the respective partition, such that every resulting kartothek partition has
only a single value in the respective ``partition_on`` columns.
.. admonition:: Behavior with ``shuffle==True``
``partition_on`` is mandatory.
Perform a data shuffle to ensure that every primary key will have at most ``num_buckets`` files.
.. note::
The number of allowed buckets will have an impact on the required resources and runtime.
Using a larger number of allowed buckets will usually reduce resource consumption and in some
cases also improves runtime performance.
:Example:
>>> partition_on="primary_key"
>>> num_buckets=2 # doctest: +SKIP
primary_key=1/bucket1.parquet
primary_key=1/bucket2.parquet
.. note:: This can only be used for datasets with a single table!
See also, :ref:`partitioning_dask`.
Parameters
----------
ddf: Union[dask.dataframe.DataFrame, None]
The dask.dataframe.DataFrame from which the new partitions are calculated. If this parameter is `None`, the update
pipeline will only delete partitions without creating new ones.
shuffle: bool
If `True` and `partition_on` is requested, shuffle the data to reduce the number of output partitions.
See also, :ref:`shuffling`.
.. warning::
Dask uses a heuristic to determine how data is shuffled and there are two options: `partd` for local disk
shuffling and `tasks` for distributed shuffling using a task graph. If there is no :class:`distributed.Client`
in the context and the option is not set explicitly, dask will choose `partd`, which may cause data loss when
the graph is executed on a distributed cluster.
Therefore, we recommend specifying the dask shuffle method explicitly, e.g. by using a context manager.
.. code::
with dask.config.set(shuffle='tasks'):
graph = update_dataset_from_ddf(...)
graph.compute()
repartition_ratio: Optional[Union[int, float]]
If provided, repartition the dataframe before calculation starts to ``ceil(ddf.npartitions / repartition_ratio)``
num_buckets: int
If provided, the output partitioning will have ``num_buckets`` files per primary key partitioning.
This effectively splits up the execution ``num_buckets`` times. Setting this parameter may be helpful when
scaling.
This only has an effect if ``shuffle==True``
bucket_by:
The subset of columns which should be considered for bucketing.
This parameter ensures that groups of the given subset are never split
across buckets within a given partition.
Without specifying this the buckets will be created randomly.
This only has an effect if ``shuffle==True``
.. admonition:: Secondary indices
This parameter has a strong effect on the performance of secondary
indices. Since it guarantees that a given tuple of the subset will
be entirely put into the same file you can build efficient indices
with this approach.
.. note::
Only columns with hashable data types can be used here.
"""
return func
def _id(x):
return x
def _commit_update_from_reduction(df_mps, **kwargs):
partitions = pd.Series(df_mps.values.flatten()).dropna()
return update_dataset_from_partitions(partition_list=partitions, **kwargs)
def _commit_store_from_reduction(df_mps, **kwargs):
partitions = pd.Series(df_mps.values.flatten()).dropna()
return store_dataset_from_partitions(partition_list=partitions, **kwargs)
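Both commit helpers above share the same trick: the reduction hands over a DataFrame whose cells hold `MetaPartition` objects (or `NaN` for empty slots), which is flattened into a `pandas.Series` and stripped of missing entries. A minimal sketch with placeholder strings standing in for `MetaPartition` objects:

```python
import numpy as np
import pandas as pd

# Stand-ins for MetaPartition objects; NaN marks an empty slot
df_mps = pd.DataFrame({"a": ["mp1", np.nan], "b": ["mp2", "mp3"]})

# Flatten row-major and drop the empty slots, as in the commit helpers
partitions = pd.Series(df_mps.values.flatten()).dropna()
print(list(partitions))  # ['mp1', 'mp2', 'mp3']
```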
@default_docs
@_shuffle_docs
@normalize_args
def store_dataset_from_ddf(
ddf: dd.DataFrame,
store: StoreInput,
dataset_uuid: str,
table: str = SINGLE_TABLE,
secondary_indices: Optional[List[str]] = None,
shuffle: bool = False,
repartition_ratio: Optional[SupportsFloat] = None,
num_buckets: int = 1,
sort_partitions_by: Optional[Union[List[str], str]] = None,
metadata: Optional[Mapping] = None,
df_serializer: Optional[DataFrameSerializer] = None,
metadata_merger: Optional[Callable] = None,
metadata_version: int = DEFAULT_METADATA_VERSION,
partition_on: Optional[List[str]] = None,
bucket_by: Optional[Union[List[str], str]] = None,
overwrite: bool = False,
):
"""
Store a dataset from a dask.dataframe.
"""
# normalization done by normalize_args but mypy doesn't recognize this
sort_partitions_by = cast(List[str], sort_partitions_by)
secondary_indices = cast(List[str], secondary_indices)
bucket_by = cast(List[str], bucket_by)
partition_on = cast(List[str], partition_on)
if table is None:
raise TypeError("The parameter `table` is not optional.")
ds_factory = _ensure_factory(dataset_uuid=dataset_uuid, store=store, factory=None)
if not overwrite:
raise_if_dataset_exists(dataset_uuid=dataset_uuid, store=store)
mp_ser = _write_dataframe_partitions(
ddf=ddf,
store=ds_factory.store_factory,
dataset_uuid=dataset_uuid,
table=table,
secondary_indices=secondary_indices,
shuffle=shuffle,
repartition_ratio=repartition_ratio,
num_buckets=num_buckets,
sort_partitions_by=sort_partitions_by,
df_serializer=df_serializer,
metadata_version=metadata_version,
partition_on=partition_on,
bucket_by=bucket_by,
)
return mp_ser.reduction(
chunk=_id,
aggregate=_commit_store_from_reduction,
split_every=False,
token="commit-dataset",
meta=object,
aggregate_kwargs={
"store": ds_factory.store_factory,
"dataset_uuid": ds_factory.dataset_uuid,
"dataset_metadata": metadata,
"metadata_merger": metadata_merger,
},
)
def _write_dataframe_partitions(
ddf: dd.DataFrame,
store: StoreFactory,
dataset_uuid: str,
table: str,
secondary_indices: List[str],
shuffle: bool,
repartition_ratio: Optional[SupportsFloat],
num_buckets: int,
sort_partitions_by: List[str],
df_serializer: Optional[DataFrameSerializer],
metadata_version: int,
partition_on: List[str],
bucket_by: List[str],
) -> dd.Series:
if repartition_ratio and ddf is not None:
ddf = ddf.repartition(
npartitions=int(np.ceil(ddf.npartitions / repartition_ratio))
)
if ddf is None:
mps = dd.from_pandas(
pd.Series(
[
parse_input_to_metapartition(
None, metadata_version=metadata_version, table_name=table,
)
]
),
npartitions=1,
)
else:
if shuffle:
mps = shuffle_store_dask_partitions(
ddf=ddf,
table=table,
secondary_indices=secondary_indices,
metadata_version=metadata_version,
partition_on=partition_on,
store_factory=store,
df_serializer=df_serializer,
dataset_uuid=dataset_uuid,
num_buckets=num_buckets,
sort_partitions_by=sort_partitions_by,
bucket_by=bucket_by,
)
else:
mps = ddf.map_partitions(
write_partition,
secondary_indices=secondary_indices,
metadata_version=metadata_version,
partition_on=partition_on,
store_factory=store,
df_serializer=df_serializer,
dataset_uuid=dataset_uuid,
sort_partitions_by=sort_partitions_by,
dataset_table_name=table,
meta=MetaPartition,
)
return mps
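The repartitioning at the top of this helper shrinks the task graph before any store I/O happens: a `repartition_ratio` of `r` reduces `npartitions` to `ceil(npartitions / r)`. A quick check of that arithmetic:

```python
import numpy as np

def target_npartitions(npartitions, repartition_ratio):
    # Same formula as in _write_dataframe_partitions above
    return int(np.ceil(npartitions / repartition_ratio))

print(target_npartitions(10, 4))  # 3
print(target_npartitions(10, 1))  # 10
```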
@default_docs
@_shuffle_docs
@normalize_args
def update_dataset_from_ddf(
ddf: dd.DataFrame,
store: Optional[StoreInput] = None,
dataset_uuid: Optional[str] = None,
table: str = SINGLE_TABLE,
secondary_indices: Optional[List[str]] = None,
shuffle: bool = False,
repartition_ratio: Optional[SupportsFloat] = None,
num_buckets: int = 1,
sort_partitions_by: Optional[Union[List[str], str]] = None,
delete_scope: Optional[Iterable[Mapping[str, str]]] = None,
metadata: Optional[Mapping] = None,
df_serializer: Optional[DataFrameSerializer] = None,
metadata_merger: Optional[Callable] = None,
default_metadata_version: int = DEFAULT_METADATA_VERSION,
partition_on: Optional[List[str]] = None,
factory: Optional[DatasetFactory] = None,
bucket_by: Optional[Union[List[str], str]] = None,
):
"""
Update a dataset from a dask.dataframe.
See Also
--------
:ref:`mutating_datasets`
"""
if table is None:
raise TypeError("The parameter `table` is not optional.")
# normalization done by normalize_args but mypy doesn't recognize this
sort_partitions_by = cast(List[str], sort_partitions_by)
secondary_indices = cast(List[str], secondary_indices)
bucket_by = cast(List[str], bucket_by)
partition_on = cast(List[str], partition_on)
ds_factory, metadata_version, partition_on = validate_partition_keys(
dataset_uuid=dataset_uuid,
store=store,
default_metadata_version=default_metadata_version,
partition_on=partition_on,
ds_factory=factory,
)
inferred_indices = _ensure_compatible_indices(ds_factory, secondary_indices)
del secondary_indices
mp_ser = _write_dataframe_partitions(
ddf=ddf,
store=ds_factory.store_factory if ds_factory else store,
dataset_uuid=dataset_uuid or ds_factory.dataset_uuid,
table=table,
secondary_indices=inferred_indices,
shuffle=shuffle,
repartition_ratio=repartition_ratio,
num_buckets=num_buckets,
sort_partitions_by=sort_partitions_by,
df_serializer=df_serializer,
metadata_version=metadata_version,
partition_on=cast(List[str], partition_on),
bucket_by=bucket_by,
)
return mp_ser.reduction(
chunk=_id,
aggregate=_commit_update_from_reduction,
split_every=False,
token="commit-dataset",
meta=object,
aggregate_kwargs={
"store_factory": store,
"dataset_uuid": dataset_uuid,
"ds_factory": ds_factory,
"delete_scope": delete_scope,
"metadata": metadata,
"metadata_merger": metadata_merger,
},
)
@default_docs
@normalize_args
def collect_dataset_metadata(
store: Optional[StoreInput] = None,
dataset_uuid: Optional[str] = None,
predicates: Optional[PredicatesType] = None,
frac: float = 1.0,
factory: Optional[DatasetFactory] = None,
) -> dd.DataFrame:
"""
Collect parquet metadata of the dataset. The `frac` parameter can be used to select a subset of the data.
.. warning::
If the size of the partitions is not evenly distributed, e.g. some partitions might be larger than others,
the metadata returned is not a good approximation for the whole dataset metadata.
.. warning::
Using the `frac` parameter is not encouraged for a small number of total partitions.
Parameters
----------
predicates
Kartothek predicates to apply filters on the data for which to gather statistics
.. warning::
Filtering will only be applied for predicates on indices.
The evaluation of the predicates will therefore only return an approximate result.
frac
Fraction of the total number of partitions to use for gathering statistics. `frac == 1.0` will use all partitions.
Returns
-------
dask.dataframe.DataFrame:
A dask.DataFrame containing the following information about dataset statistics:
* `partition_label`: File name of the parquet file, unique to each physical partition.
* `row_group_id`: Index of the row groups within one parquet file.
* `row_group_compressed_size`: Byte size of the data within one row group.
* `row_group_uncompressed_size`: Byte size (uncompressed) of the data within one row group.
* `number_rows_total`: Total number of rows in one parquet file.
* `number_row_groups`: Number of row groups in one parquet file.
* `serialized_size`: Serialized size of the parquet file.
* `number_rows_per_row_group`: Number of rows per row group.
Raises
------
ValueError
If no metadata could be retrieved, raise an error.
"""
if not 0.0 < frac <= 1.0:
raise ValueError(
f"Invalid value for parameter `frac`: {frac}. "
"Please make sure to provide a value larger than 0.0 and smaller than or equal to 1.0."
)
dataset_factory = _ensure_factory(
dataset_uuid=dataset_uuid, store=store, factory=factory,
)
mps = list(
dispatch_metapartitions_from_factory(dataset_factory, predicates=predicates)
)
if mps:
random.shuffle(mps)
# ensure that even with sampling at least one metapartition is returned
cutoff_index = max(1, int(len(mps) * frac))
mps = mps[:cutoff_index]
ddf = dd.from_delayed(
[
dask.delayed(MetaPartition.get_parquet_metadata)(
mp, store=dataset_factory.store_factory
)
for mp in mps
],
meta=_METADATA_SCHEMA,
)
else:
df = pd.DataFrame(columns=_METADATA_SCHEMA.keys())
df = df.astype(_METADATA_SCHEMA)
ddf = dd.from_pandas(df, npartitions=1)
return ddf
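The sampling logic in `collect_dataset_metadata` shuffles the metapartition list and then keeps a prefix of length `max(1, int(n * frac))`, so at least one partition always survives even for a tiny `frac`. A sketch of that cutoff (with a seeded shuffle added here for reproducibility):

```python
import random

def sample_prefix(items, frac, seed=0):
    items = list(items)
    random.Random(seed).shuffle(items)
    # At least one item is always kept, mirroring the cutoff above
    cutoff_index = max(1, int(len(items) * frac))
    return items[:cutoff_index]

print(len(sample_prefix(range(100), 0.25)))   # 25
print(len(sample_prefix(range(100), 0.001)))  # 1
```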
def _unpack_hash(df, unpack_meta, subset):
df = unpack_payload_pandas(df, unpack_meta)
if subset:
df = df[subset]
return _hash_partition(df)
def _hash_partition(part):
return pd.util.hash_pandas_object(part, index=False).sum()
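The per-partition hash above is insensitive to row order: `pd.util.hash_pandas_object` hashes each row individually (ignoring the index), and summing the row hashes makes the result commutative over rows. A runnable sketch:

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3], "y": ["a", "b", "c"]})
shuffled = df.sample(frac=1.0, random_state=42).reset_index(drop=True)

def hash_partition(part):
    # Row-wise hashes, summed: invariant under row reordering
    return pd.util.hash_pandas_object(part, index=False).sum()

# Reordering rows does not change the partition hash
assert hash_partition(df) == hash_partition(shuffled)
```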
@default_docs
@normalize_args
def hash_dataset(
store: Optional[StoreInput] = None,
dataset_uuid: Optional[str] = None,
subset=None,
group_key=None,
table: str = SINGLE_TABLE,
predicates: Optional[PredicatesType] = None,
factory: Optional[DatasetFactory] = None,
) -> dd.Series:
"""
Calculate a partition-wise, or group-wise, hash of the dataset.
.. note::
We do not guarantee the hash values to remain constant across versions.
Example output::
Assuming a dataset with two unique values in column `P` this gives
>>> hash_dataset(factory=dataset_with_index_factory,group_key=["P"]).compute()
... P
... 1 11462879952839863487
... 2 12568779102514529673
... dtype: uint64
Parameters
----------
subset
If provided, only take these columns into account when hashing the dataset
group_key
If provided, calculate hash per group instead of per partition
"""
dataset_factory = _ensure_factory(
dataset_uuid=dataset_uuid, store=store, factory=factory,
)
columns = subset
if subset and group_key:
columns = sorted(set(subset) | set(group_key))
ddf = read_dataset_as_ddf(
table=table,
predicates=predicates,
factory=dataset_factory,
columns=columns,
dates_as_object=True,
)
if not group_key:
return ddf.map_partitions(_hash_partition, meta="uint64").astype("uint64")
else:
ddf2 = pack_payload(ddf, group_key=group_key)
return (
ddf2.groupby(group_key)
.apply(_unpack_hash, unpack_meta=ddf._meta, subset=subset, meta="uint64")
.astype("uint64")
)
# end of file: kartothek/io/dask/dataframe.py
from typing import Any, Dict, Iterable, Optional, Union
import dask
import dask.bag as db
import dask.dataframe as dd
from dask.delayed import Delayed
from simplekv import KeyValueStore
from kartothek.api.discover import discover_datasets_unchecked
from kartothek.core.cube.cube import Cube
from kartothek.core.docs import default_docs
from kartothek.core.typing import StoreFactory
from kartothek.io.dask.common_cube import (
append_to_cube_from_bag_internal,
extend_cube_from_bag_internal,
query_cube_bag_internal,
)
from kartothek.io.dask.dataframe import store_dataset_from_ddf
from kartothek.io_components.cube.common import check_store_factory
from kartothek.io_components.cube.write import (
apply_postwrite_checks,
assert_dimesion_index_cols_notnull,
check_datasets_prebuild,
check_provided_metadata_dict,
check_user_df,
prepare_ktk_metadata,
prepare_ktk_partition_on,
)
from kartothek.serialization._parquet import ParquetSerializer
__all__ = (
"append_to_cube_from_dataframe",
"build_cube_from_dataframe",
"extend_cube_from_dataframe",
"query_cube_dataframe",
)
@default_docs
def build_cube_from_dataframe(
data: Union[dd.DataFrame, Dict[str, dd.DataFrame]],
cube: Cube,
store: StoreFactory,
metadata: Optional[Dict[str, Dict[str, Any]]] = None,
overwrite: bool = False,
partition_on: Optional[Dict[str, Iterable[str]]] = None,
shuffle: bool = False,
num_buckets: int = 1,
bucket_by: Optional[Iterable[str]] = None,
df_serializer: Optional[ParquetSerializer] = None,
) -> Delayed:
"""
Create dask computation graph that builds a cube with the data supplied from a dask dataframe.
Parameters
----------
data
Data that should be written to the cube. If only a single dataframe is given, it is assumed to be the seed
dataset.
cube
Cube specification.
store
Store to which the data should be written.
metadata
Metadata for every dataset.
overwrite
If possibly existing datasets should be overwritten.
partition_on
Optional partition-on attributes for datasets (dictionary mapping :term:`Dataset ID` -> columns).
df_serializer:
Optional DataFrame-to-Parquet serializer.
Returns
-------
metadata_dict: dask.delayed.Delayed
A dask delayed object containing the compute graph to build a cube returning the dict of dataset metadata
objects.
"""
check_store_factory(store)
if not isinstance(data, dict):
data = {cube.seed_dataset: data}
ktk_cube_dataset_ids = sorted(data.keys())
metadata = check_provided_metadata_dict(metadata, ktk_cube_dataset_ids)
existing_datasets = discover_datasets_unchecked(cube.uuid_prefix, store)
check_datasets_prebuild(ktk_cube_dataset_ids, cube, existing_datasets)
partition_on_checked = prepare_ktk_partition_on(
cube, ktk_cube_dataset_ids, partition_on
)
del partition_on
dct = {}
for table_name, ddf in data.items():
check_user_df(table_name, ddf, cube, set(), partition_on_checked[table_name])
indices_to_build = set(cube.index_columns) & set(ddf.columns)
if table_name == cube.seed_dataset:
indices_to_build |= set(cube.dimension_columns) - cube.suppress_index_on
indices_to_build -= set(partition_on_checked[table_name])
ddf = ddf.map_partitions(
assert_dimesion_index_cols_notnull,
ktk_cube_dataset_id=table_name,
cube=cube,
partition_on=partition_on_checked[table_name],
meta=ddf._meta,
)
graph = store_dataset_from_ddf(
ddf,
dataset_uuid=cube.ktk_dataset_uuid(table_name),
store=store,
metadata=prepare_ktk_metadata(cube, table_name, metadata),
partition_on=partition_on_checked[table_name],
secondary_indices=sorted(indices_to_build),
sort_partitions_by=sorted(
(set(cube.dimension_columns) - set(cube.partition_columns))
& set(ddf.columns)
),
overwrite=overwrite,
shuffle=shuffle,
num_buckets=num_buckets,
bucket_by=bucket_by,
df_serializer=df_serializer,
)
dct[table_name] = graph
return dask.delayed(apply_postwrite_checks)(
dct, cube=cube, store=store, existing_datasets=existing_datasets
)
def extend_cube_from_dataframe(
data: Union[dd.DataFrame, Dict[str, dd.DataFrame]],
cube: Cube,
store: KeyValueStore,
metadata: Optional[Dict[str, Dict[str, Any]]] = None,
overwrite: bool = False,
partition_on: Optional[Dict[str, Iterable[str]]] = None,
df_serializer: Optional[ParquetSerializer] = None,
):
"""
Create dask computation graph that extends a cube by the data supplied from a dask dataframe.
For details on ``data`` and ``metadata``, see :func:`~kartothek.io.eager_cube.build_cube`.
Parameters
----------
data
Data that should be written to the cube. If only a single dataframe is given, it is assumed to be the seed
dataset.
cube
Cube specification.
store
Store to which the data should be written.
metadata
Metadata for every dataset.
overwrite
If possibly existing datasets should be overwritten.
partition_on
Optional partition-on attributes for datasets (dictionary mapping :term:`Dataset ID` -> columns).
df_serializer
Optional DataFrame-to-Parquet serializer.
Returns
-------
metadata_dict: dask.bag.Bag
A dask bag object containing the compute graph to extend a cube returning the dict of dataset metadata objects.
The bag has a single partition with a single element.
"""
data, ktk_cube_dataset_ids = _ddfs_to_bag(data, cube)
return (
extend_cube_from_bag_internal(
data=data,
cube=cube,
store=store,
ktk_cube_dataset_ids=ktk_cube_dataset_ids,
metadata=metadata,
overwrite=overwrite,
partition_on=partition_on,
df_serializer=df_serializer,
)
.map_partitions(_unpack_list, default=None)
.to_delayed()[0]
)
def query_cube_dataframe(
cube,
store,
conditions=None,
datasets=None,
dimension_columns=None,
partition_by=None,
payload_columns=None,
):
"""
Query cube.
For detailed documentation, see :func:`~kartothek.io.eager_cube.query_cube`.
.. important::
In contrast to other backends, the Dask DataFrame may contain partitions with empty DataFrames!
Parameters
----------
cube: Cube
Cube specification.
store: simplekv.KeyValueStore
KV store that preserves the cube.
conditions: Union[None, Condition, Iterable[Condition], Conjunction]
Conditions that should be applied, optional.
datasets: Union[None, Iterable[str], Dict[str, kartothek.core.dataset.DatasetMetadata]]
Datasets to query; must all be part of the cube. May be either the result of :func:`~kartothek.api.discover.discover_datasets`, a list
of ktk_cube dataset IDs, or ``None`` (in which case auto-discovery will be used).
dimension_columns: Union[None, str, Iterable[str]]
Dimension columns of the query, may result in projection. If not provided, dimension columns from cube
specification will be used.
partition_by: Union[None, str, Iterable[str]]
By which column logical partitions should be formed. If not provided, a single partition will be generated.
payload_columns: Union[None, str, Iterable[str]]
Which columns apart from ``dimension_columns`` and ``partition_by`` should be returned.
Returns
-------
ddf: dask.dataframe.DataFrame
Dask DataFrame, partitioned and ordered by ``partition_by``. Columns of the DataFrames are alphabetically ordered. Data
types are provided on best effort (they are restored based on the preserved data, but may be different due to
Pandas NULL-handling, e.g. integer columns may be floats).
"""
empty, b = query_cube_bag_internal(
cube=cube,
store=store,
conditions=conditions,
datasets=datasets,
dimension_columns=dimension_columns,
partition_by=partition_by,
payload_columns=payload_columns,
blocksize=1,
)
dfs = b.map_partitions(_unpack_list, default=empty).to_delayed()
return dd.from_delayed(
dfs=dfs, meta=empty, divisions=None # TODO: figure out an API to support this
)
def append_to_cube_from_dataframe(
data: db.Bag,
cube: Cube,
store: KeyValueStore,
metadata: Optional[Dict[str, Dict[str, Any]]] = None,
df_serializer: Optional[ParquetSerializer] = None,
) -> db.Bag:
"""
Append data to existing cube.
For details on ``data`` and ``metadata``, see :func:`~kartothek.io.eager_cube.build_cube`.
.. important::
Physical partitions must be updated as a whole. If only single rows within a physical partition are updated, the
old data is treated as "removed".
.. hint::
To have better control over the overwrite "mask" (i.e. which partitions are overwritten), you should use
:func:`~kartothek.io.eager_cube.remove_partitions` beforehand.
Parameters
----------
data: dask.bag.Bag
Bag containing dataframes
cube:
Cube specification.
store:
Store to which the data should be written.
metadata:
Metadata for every dataset, optional. For every dataset, only given keys are updated/replaced. Deletion of
metadata keys is not possible.
df_serializer:
Optional DataFrame-to-Parquet serializer.
Returns
-------
metadata_dict: dask.bag.Bag
A dask bag object containing the compute graph to append to the cube returning the dict of dataset metadata
objects. The bag has a single partition with a single element.
"""
data, ktk_cube_dataset_ids = _ddfs_to_bag(data, cube)
return (
append_to_cube_from_bag_internal(
data=data,
cube=cube,
store=store,
ktk_cube_dataset_ids=ktk_cube_dataset_ids,
metadata=metadata,
df_serializer=df_serializer,
)
.map_partitions(_unpack_list, default=None)
.to_delayed()[0]
)
def _ddfs_to_bag(data, cube):
if not isinstance(data, dict):
data = {cube.seed_dataset: data}
ktk_cube_dataset_ids = sorted(data.keys())
bags = []
for ktk_cube_dataset_id in ktk_cube_dataset_ids:
bags.append(
db.from_delayed(data[ktk_cube_dataset_id].to_delayed()).map_partitions(
_convert_write_bag, ktk_cube_dataset_id=ktk_cube_dataset_id
)
)
return (db.concat(bags), ktk_cube_dataset_ids)
def _unpack_list(l, default): # noqa
l = list(l) # noqa
if l:
return l[0]
else:
return default
def _convert_write_bag(df, ktk_cube_dataset_id):
return [{ktk_cube_dataset_id: df}]
# end of file: kartothek/io/dask/dataframe_cube.py
from typing import Any, Dict, Iterable, Optional, Union
import dask.bag as db
from simplekv import KeyValueStore
from kartothek.api.discover import discover_datasets_unchecked
from kartothek.core.cube.cube import Cube
from kartothek.core.dataset import DatasetMetadata
from kartothek.core.typing import StoreFactory
from kartothek.io.dask.common_cube import (
append_to_cube_from_bag_internal,
build_cube_from_bag_internal,
extend_cube_from_bag_internal,
query_cube_bag_internal,
)
from kartothek.io_components.cube.cleanup import get_keys_to_clean
from kartothek.io_components.cube.common import (
assert_stores_different,
check_blocksize,
check_store_factory,
)
from kartothek.io_components.cube.copy import get_copy_keys
from kartothek.io_components.cube.stats import (
collect_stats_block,
get_metapartitions_for_stats,
reduce_stats,
)
from kartothek.serialization._parquet import ParquetSerializer
from kartothek.utils.ktk_adapters import get_dataset_keys
from kartothek.utils.store import copy_keys
__all__ = (
"append_to_cube_from_bag",
"update_cube_from_bag",
"build_cube_from_bag",
"cleanup_cube_bag",
"collect_stats_bag",
"copy_cube_bag",
"delete_cube_bag",
"extend_cube_from_bag",
"query_cube_bag",
)
def build_cube_from_bag(
data: db.Bag,
cube: Cube,
store: StoreFactory,
ktk_cube_dataset_ids: Optional[Iterable[str]] = None,
metadata: Optional[Dict[str, Dict[str, Any]]] = None,
overwrite: bool = False,
partition_on: Optional[Dict[str, Iterable[str]]] = None,
df_serializer: Optional[ParquetSerializer] = None,
) -> db.Bag:
"""
Create dask computation graph that builds a cube with the data supplied from a dask bag.
Parameters
----------
data: dask.bag.Bag
Bag containing dataframes
cube:
Cube specification.
store:
Store to which the data should be written.
ktk_cube_dataset_ids:
Datasets that will be written, must be specified in advance. If left unprovided, it is assumed that only the
seed dataset will be written.
metadata:
Metadata for every dataset.
overwrite:
If possibly existing datasets should be overwritten.
partition_on:
Optional partition-on attributes for datasets (dictionary mapping :term:`Dataset ID` -> columns).
df_serializer:
Optional DataFrame-to-Parquet serializer.
Returns
-------
metadata_dict: dask.bag.Bag
A dask bag object containing the compute graph to build a cube returning the dict of dataset metadata objects.
The bag has a single partition with a single element.
"""
return build_cube_from_bag_internal(
data=data,
cube=cube,
store=store,
ktk_cube_dataset_ids=ktk_cube_dataset_ids,
metadata=metadata,
overwrite=overwrite,
partition_on=partition_on,
df_serializer=df_serializer,
)
def extend_cube_from_bag(
data: db.Bag,
cube: Cube,
store: KeyValueStore,
ktk_cube_dataset_ids: Optional[Iterable[str]],
metadata: Optional[Dict[str, Dict[str, Any]]] = None,
overwrite: bool = False,
partition_on: Optional[Dict[str, Iterable[str]]] = None,
df_serializer: Optional[ParquetSerializer] = None,
) -> db.Bag:
"""
Create dask computation graph that extends a cube by the data supplied from a dask bag.
For details on ``data`` and ``metadata``, see :func:`~kartothek.io.eager_cube.build_cube`.
Parameters
----------
data: dask.bag.Bag
Bag containing dataframes (see :func:`~kartothek.io.eager_cube.build_cube` for possible format and types).
cube: kartothek.core.cube.cube.Cube
Cube specification.
store:
Store to which the data should be written.
ktk_cube_dataset_ids:
Datasets that will be written, must be specified in advance.
metadata:
Metadata for every dataset.
overwrite:
If possibly existing datasets should be overwritten.
partition_on:
Optional parition-on attributes for datasets (dictionary mapping :term:`Dataset ID` -> columns).
df_serializer:
Optional DataFrame-to-Parquet serializer.
Returns
-------
metadata_dict: dask.bag.Bag
A dask bag object containing the compute graph to extend a cube returning the dict of dataset metadata objects.
The bag has a single partition with a single element.
"""
return extend_cube_from_bag_internal(
data=data,
cube=cube,
store=store,
ktk_cube_dataset_ids=ktk_cube_dataset_ids,
metadata=metadata,
overwrite=overwrite,
partition_on=partition_on,
df_serializer=df_serializer,
)
def query_cube_bag(
cube,
store,
conditions=None,
datasets=None,
dimension_columns=None,
partition_by=None,
payload_columns=None,
blocksize=1,
):
"""
Query cube.
For detailed documentation, see :func:`~kartothek.io.eager_cube.query_cube`.
Parameters
----------
cube: Cube
Cube specification.
store: simplekv.KeyValueStore
KV store that preserves the cube.
conditions: Union[None, Condition, Iterable[Condition], Conjunction]
Conditions that should be applied, optional.
datasets: Union[None, Iterable[str], Dict[str, kartothek.core.dataset.DatasetMetadata]]
Datasets to query, must all be part of the cube. May be either the result of :func:`~kartothek.api.discover.discover_datasets`, an
iterable of Ktk_cube dataset IDs or ``None`` (in which case auto-discovery will be used).
dimension_columns: Union[None, str, Iterable[str]]
Dimension columns of the query, may result in projection. If not provided, dimension columns from cube
specification will be used.
partition_by: Union[None, str, Iterable[str]]
By which column logical partitions should be formed. If not provided, a single partition will be generated.
payload_columns: Union[None, str, Iterable[str]]
Which columns apart from ``dimension_columns`` and ``partition_by`` should be returned.
blocksize: int
Partition size of the bag.
Returns
-------
bag: dask.bag.Bag
Bag of 1-sized partitions of non-empty DataFrames, ordered by ``partition_by``. Columns of the DataFrames are
alphabetically ordered. Data types are provided on a best-effort basis (they are restored based on the preserved
data, but may differ due to Pandas NULL-handling, e.g. integer columns may become floats).
"""
_empty, b = query_cube_bag_internal(
cube=cube,
store=store,
conditions=conditions,
datasets=datasets,
dimension_columns=dimension_columns,
partition_by=partition_by,
payload_columns=payload_columns,
blocksize=blocksize,
)
return b
def delete_cube_bag(
cube: Cube,
store: StoreFactory,
blocksize: int = 100,
datasets: Optional[Union[Iterable[str], Dict[str, DatasetMetadata]]] = None,
):
"""
Delete cube from store.
.. important::
This routine only deletes tracked files. Garbage and leftovers from old cubes and failed operations are NOT
removed.
Parameters
----------
cube
Cube specification.
store
KV store.
blocksize
Number of keys to delete at once.
datasets
Datasets to delete, must all be part of the cube. May be either the result of :func:`~kartothek.api.discover.discover_datasets`, a list
of Ktk_cube dataset IDs or ``None`` (in which case the entire cube will be deleted).
Returns
-------
bag: dask.bag.Bag
A dask bag that performs the given operation. May contain multiple partitions.
"""
check_store_factory(store)
check_blocksize(blocksize)
if not isinstance(datasets, dict):
datasets = discover_datasets_unchecked(
uuid_prefix=cube.uuid_prefix,
store=store,
filter_ktk_cube_dataset_ids=datasets,
)
keys = set()
for ktk_cube_dataset_id in sorted(datasets.keys()):
ds = datasets[ktk_cube_dataset_id]
keys |= get_dataset_keys(ds)
return db.from_sequence(seq=sorted(keys), partition_size=blocksize).map_partitions(
_delete, store=store
)
def copy_cube_bag(
cube,
src_store: StoreFactory,
tgt_store: StoreFactory,
blocksize: int = 100,
overwrite: bool = False,
datasets: Optional[Union[Iterable[str], Dict[str, DatasetMetadata]]] = None,
):
"""
Copy cube from one store to another.
Parameters
----------
cube
Cube specification.
src_store
Source KV store.
tgt_store
Target KV store.
overwrite
If possibly existing datasets in the target store should be overwritten.
blocksize
Number of keys to copy at once.
datasets
Datasets to copy, must all be part of the cube. May be either the result of :func:`~kartothek.api.discover.discover_datasets`, a list
of Ktk_cube dataset IDs or ``None`` (in which case the entire cube will be copied).
Returns
-------
bag: dask.bag.Bag
A dask bag that performs the given operation. May contain multiple partitions.
"""
check_store_factory(src_store)
check_store_factory(tgt_store)
check_blocksize(blocksize)
assert_stores_different(
src_store, tgt_store, cube.ktk_dataset_uuid(cube.seed_dataset)
)
keys = get_copy_keys(
cube=cube,
src_store=src_store,
tgt_store=tgt_store,
overwrite=overwrite,
datasets=datasets,
)
return db.from_sequence(seq=sorted(keys), partition_size=blocksize).map_partitions(
copy_keys, src_store=src_store, tgt_store=tgt_store
)
def collect_stats_bag(
cube: Cube,
store: StoreFactory,
datasets: Optional[Union[Iterable[str], Dict[str, DatasetMetadata]]] = None,
blocksize: int = 100,
):
"""
Collect statistics for given cube.
Parameters
----------
cube
Cube specification.
store
KV store that preserves the cube.
datasets
Datasets to query, must all be part of the cube. May be either the result of :func:`~kartothek.api.discover.discover_datasets`, a list
of Ktk_cube dataset IDs or ``None`` (in which case auto-discovery will be used).
blocksize
Number of partitions to scan at once.
Returns
-------
bag: dask.bag.Bag
A dask bag that returns a single result of the form ``Dict[str, Dict[str, int]]`` and contains statistics per
ktk_cube dataset ID.
"""
check_store_factory(store)
check_blocksize(blocksize)
if not isinstance(datasets, dict):
datasets = discover_datasets_unchecked(
uuid_prefix=cube.uuid_prefix,
store=store,
filter_ktk_cube_dataset_ids=datasets,
)
all_metapartitions = get_metapartitions_for_stats(datasets)
return (
db.from_sequence(seq=all_metapartitions, partition_size=blocksize)
.map_partitions(collect_stats_block, store=store)
.reduction(
perpartition=_obj_to_list,
aggregate=_reduce_stats,
split_every=False,
out_type=db.Bag,
)
)
def cleanup_cube_bag(cube: Cube, store: StoreFactory, blocksize: int = 100) -> db.Bag:
"""
Remove unused keys from cube datasets.
.. important::
All untracked keys which start with the cube's `uuid_prefix` followed by the `KTK_CUBE_UUID_SEPERATOR`
(e.g. `my_cube_uuid++seed...`) will be deleted by this routine. These keys may be leftovers from past
overwrites or index updates.
Parameters
----------
cube
Cube specification.
store
KV store.
blocksize
Number of keys to delete at once.
Returns
-------
bag: dask.bag.Bag
A dask bag that performs the given operation. May contain multiple partitions.
"""
check_store_factory(store)
check_blocksize(blocksize)
store_obj = store()
datasets = discover_datasets_unchecked(uuid_prefix=cube.uuid_prefix, store=store)
keys = get_keys_to_clean(cube.uuid_prefix, datasets, store_obj)
return db.from_sequence(seq=sorted(keys), partition_size=blocksize).map_partitions(
_delete, store=store
)
def append_to_cube_from_bag(
data: db.Bag,
cube: Cube,
store: StoreFactory,
ktk_cube_dataset_ids: Optional[Iterable[str]],
metadata: Optional[Dict[str, Dict[str, Any]]] = None,
df_serializer: Optional[ParquetSerializer] = None,
) -> db.Bag:
"""
Append data to existing cube.
For details on ``data`` and ``metadata``, see :func:`~kartothek.io.eager_cube.build_cube`.
.. important::
Physical partitions must be updated as a whole. If only single rows within a physical partition are updated, the
old data is treated as "removed".
.. hint::
To have better control over the overwrite "mask" (i.e. which partitions are overwritten), you should use
:func:`~kartothek.io.eager_cube.remove_partitions` beforehand or use :func:`~kartothek.io.dask.bag_cube.update_cube_from_bag` instead.
Parameters
----------
data: dask.bag.Bag
Bag containing dataframes
cube:
Cube specification.
store:
Store to which the data should be written.
ktk_cube_dataset_ids:
Datasets that will be written; these must be specified in advance.
metadata:
Metadata for every dataset, optional. For every dataset, only given keys are updated/replaced. Deletion of
metadata keys is not possible.
df_serializer:
Optional DataFrame-to-Parquet serializer.
Returns
-------
metadata_dict: dask.bag.Bag
A dask bag object containing the compute graph to append to the cube returning the dict of dataset metadata
objects. The bag has a single partition with a single element.
"""
return append_to_cube_from_bag_internal(
data=data,
cube=cube,
store=store,
ktk_cube_dataset_ids=ktk_cube_dataset_ids,
metadata=metadata,
df_serializer=df_serializer,
)
def update_cube_from_bag(
data: db.Bag,
cube: Cube,
store: StoreFactory,
remove_conditions,
ktk_cube_dataset_ids: Optional[Iterable[str]],
metadata: Optional[Dict[str, Dict[str, Any]]] = None,
df_serializer: Optional[ParquetSerializer] = None,
) -> db.Bag:
"""
Remove partitions and append data to existing cube.
For details on ``data`` and ``metadata``, see :func:`~kartothek.io.eager_cube.build_cube`.
Only datasets in `ktk_cube_dataset_ids` will be affected.
Parameters
----------
data: dask.bag.Bag
Bag containing dataframes
cube:
Cube specification.
store:
Store to which the data should be written.
remove_conditions
Conditions that select the partitions to remove. Must be a condition that only uses
partition columns.
ktk_cube_dataset_ids:
Datasets that will be written; these must be specified in advance.
metadata:
Metadata for every dataset, optional. For every dataset, only given keys are updated/replaced. Deletion of
metadata keys is not possible.
df_serializer:
Optional DataFrame-to-Parquet serializer.
Returns
-------
metadata_dict: dask.bag.Bag
A dask bag object containing the compute graph to append to the cube returning the dict of dataset metadata
objects. The bag has a single partition with a single element.
See Also
--------
:ref:`mutating_datasets`
"""
return append_to_cube_from_bag_internal(
data=data,
cube=cube,
store=store,
remove_conditions=remove_conditions,
ktk_cube_dataset_ids=ktk_cube_dataset_ids,
metadata=metadata,
df_serializer=df_serializer,
)
def _delete(keys, store):
if callable(store):
store = store()
for k in keys:
store.delete(k)
def _obj_to_list(obj):
return [obj]
def _reduce_stats(nested_stats):
flat = [stats for sub in nested_stats for stats in sub]
return [reduce_stats(flat)] | /regallager-0.0.1.tar.gz/regallager-0.0.1/kartothek/io/dask/bag_cube.py | 0.947793 | 0.409988 | bag_cube.py | pypi |
from functools import partial
from typing import List, Optional, Sequence, cast
import dask.array as da
import dask.dataframe as dd
import numpy as np
import pandas as pd
from kartothek.core.typing import StoreFactory
from kartothek.io.dask.compression import pack_payload, unpack_payload_pandas
from kartothek.io_components.metapartition import MetaPartition
from kartothek.io_components.write import write_partition
from kartothek.serialization import DataFrameSerializer
_KTK_HASH_BUCKET = "__KTK_HASH_BUCKET"
def _hash_bucket(df: pd.DataFrame, subset: Optional[Sequence[str]], num_buckets: int):
"""
Categorize each row of `df` based on the data in the columns `subset`
into `num_buckets` values. This is based on `pandas.util.hash_pandas_object`
"""
if not subset:
subset = df.columns
hash_arr = pd.util.hash_pandas_object(df[subset], index=False)
buckets = hash_arr % num_buckets
available_bit_widths = np.array([8, 16, 32, 64])
mask = available_bit_widths > np.log2(num_buckets)
bit_width = min(available_bit_widths[mask])
return df.assign(**{_KTK_HASH_BUCKET: buckets.astype(f"uint{bit_width}")})
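A minimal, self-contained sketch of the bucketing logic above, using only pandas/numpy; the frame and the column names are made up for illustration:

```python
import numpy as np
import pandas as pd

# Made-up frame; "key" stands in for the ``bucket_by`` subset.
df = pd.DataFrame({"key": ["a", "b", "a", "c"], "value": [1, 2, 3, 4]})
num_buckets = 4

# Hash each row on the subset (index excluded) and fold into bucket IDs.
hash_arr = pd.util.hash_pandas_object(df[["key"]], index=False)
buckets = hash_arr % num_buckets

# Pick the smallest unsigned dtype that can represent every bucket ID.
available_bit_widths = np.array([8, 16, 32, 64])
bit_width = min(available_bit_widths[available_bit_widths > np.log2(num_buckets)])
df = df.assign(__KTK_HASH_BUCKET=buckets.astype(f"uint{bit_width}"))

# Rows with equal key values always land in the same bucket.
assert df.loc[0, "__KTK_HASH_BUCKET"] == df.loc[2, "__KTK_HASH_BUCKET"]
```

For four buckets, ``log2(4) == 2``, so the narrowest fitting width is ``uint8``.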
def shuffle_store_dask_partitions(
ddf: dd.DataFrame,
table: str,
secondary_indices: List[str],
metadata_version: int,
partition_on: List[str],
store_factory: StoreFactory,
df_serializer: Optional[DataFrameSerializer],
dataset_uuid: str,
num_buckets: int,
sort_partitions_by: List[str],
bucket_by: Sequence[str],
) -> da.Array:
"""
Perform a dataset update with dask reshuffling to control partitioning.
The shuffle operation will perform the following steps
1. Pack payload data
Payload data is serialized and compressed into a single byte value using
``distributed.protocol.serialize_bytes``, see also ``pack_payload``.
2. Apply bucketing
Hash the column subset ``bucket_by`` and distribute the hashes in
``num_buckets`` bins/buckets. Internally every bucket is identified by an
integer and we will create one physical file for every bucket ID. The
bucket ID is not exposed to the user and is dropped after the shuffle,
before the store. This is done since we do not want to guarantee at the
moment, that the hash function remains stable.
3. Perform shuffle (dask.DataFrame.groupby.apply)
The groupby key will be the combination of ``partition_on`` fields and the
hash bucket ID. This will create a physical file for every unique tuple
in ``partition_on + bucket_ID``. The function which is applied to the
dataframe will perform all necessary subtasks for storage of the dataset
(partition_on, index calc, etc.).
4. Unpack data (within the apply-function)
After the shuffle, the first step is to unpack the payload data since
the follow up tasks will require the full dataframe.
5. Pre storage processing and parquet serialization
We apply important pre-storage processing like sorting the data and applying
the final partitioning (at this point there should be only one group in the
payload data, but using ``MetaPartition.partition_on`` guarantees that the
data structures kartothek expects are created).
After the preprocessing is done, the data is serialized and stored as
parquet. The applied function will return an (empty) MetaPartition with
indices and metadata which will then be used to commit the dataset.
Returns
-------
A dask.Array holding relevant MetaPartition objects as values
"""
if ddf.npartitions == 0:
return ddf
group_cols = partition_on.copy()
if num_buckets is None:
raise ValueError("``num_buckets`` must not be None when shuffling data.")
meta = ddf._meta
meta[_KTK_HASH_BUCKET] = np.uint64(0)
ddf = ddf.map_partitions(_hash_bucket, bucket_by, num_buckets, meta=meta)
group_cols.append(_KTK_HASH_BUCKET)
unpacked_meta = ddf._meta
ddf = pack_payload(ddf, group_key=group_cols)
ddf_grouped = ddf.groupby(by=group_cols)
unpack = partial(
_unpack_store_partition,
secondary_indices=secondary_indices,
sort_partitions_by=sort_partitions_by,
table=table,
dataset_uuid=dataset_uuid,
partition_on=partition_on,
store_factory=store_factory,
df_serializer=df_serializer,
metadata_version=metadata_version,
unpacked_meta=unpacked_meta,
)
return cast(
da.Array, # Output type depends on meta but mypy cannot infer this easily.
ddf_grouped.apply(unpack, meta=("MetaPartition", "object")),
)
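A plain-pandas toy sketch of step 3 above, the groupby key being the ``partition_on`` fields plus the bucket ID; the data is made up, and the real code operates on a dask DataFrame:

```python
import pandas as pd

# Made-up frame: one partition column plus a hypothetical bucket column.
df = pd.DataFrame(
    {"date": ["d1", "d1", "d2"], "__KTK_HASH_BUCKET": [0, 1, 0], "x": [1, 2, 3]}
)

# One group (and hence one physical file) per unique (partition_on..., bucket) tuple.
groups = dict(list(df.groupby(["date", "__KTK_HASH_BUCKET"])))
assert sorted(groups) == [("d1", 0), ("d1", 1), ("d2", 0)]
```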
def _unpack_store_partition(
df: pd.DataFrame,
secondary_indices: List[str],
sort_partitions_by: List[str],
table: str,
dataset_uuid: str,
partition_on: List[str],
store_factory: StoreFactory,
df_serializer: DataFrameSerializer,
metadata_version: int,
unpacked_meta: pd.DataFrame,
) -> MetaPartition:
"""Unpack payload data and store partition"""
df = unpack_payload_pandas(df, unpacked_meta)
if _KTK_HASH_BUCKET in df:
df = df.drop(_KTK_HASH_BUCKET, axis=1)
return write_partition(
partition_df=df,
secondary_indices=secondary_indices,
sort_partitions_by=sort_partitions_by,
dataset_table_name=table,
dataset_uuid=dataset_uuid,
partition_on=partition_on,
store_factory=store_factory,
df_serializer=df_serializer,
metadata_version=metadata_version,
) | /regallager-0.0.1.tar.gz/regallager-0.0.1/kartothek/io/dask/_shuffle.py | 0.916507 | 0.511473 | _shuffle.py | pypi |
from __future__ import absolute_import
from typing import Iterable, Optional, Tuple, Union
import pandas as pd
import pyarrow as pa
__all__ = (
"converter_str",
"converter_str_set",
"converter_str_set_optional",
"converter_str_tupleset",
"converter_tuple",
"get_str_to_python_converter",
)
def converter_str_set(obj) -> frozenset:
"""
Convert input to a set of unicode strings. ``None`` will be converted to an empty set.
Parameters
----------
obj: Optional[Union[Iterable[str], str]]
Object to convert.
Returns
-------
obj: FrozenSet[str]
String set.
Raises
------
TypeError
If passed object is not string/byte-like.
"""
result = converter_tuple(obj)
result_set = {converter_str(x) for x in result}
return frozenset(result_set)
def converter_str_set_optional(obj):
"""
Convert input to a set of unicode strings. ``None`` will be preserved.
Parameters
----------
obj: Optional[Union[Iterable[str], str]]
Object to convert.
Returns
-------
obj: Optional[FrozenSet[str]]
String set.
Raises
------
TypeError
If an element in the passed object is not string/byte-like.
"""
if obj is None:
return None
return converter_str_set(obj)
def converter_str_tupleset(obj: Optional[Union[Iterable[str], str]]) -> Tuple[str, ...]:
"""
Convert input to a tuple of unique unicode strings. ``None`` will be converted to an empty tuple.
The input must not contain duplicate entries.
Parameters
----------
obj
Object to convert.
Raises
------
TypeError
If passed object is not string/byte-like, or if ``obj`` is known to have an unstable iteration order.
ValueError
If passed set contains duplicates.
"""
if isinstance(obj, (dict, frozenset, set)):
raise TypeError(
"{obj} which has type {tname} has an unstable iteration order".format(
obj=obj, tname=type(obj).__name__
)
)
result = converter_tuple(obj)
result = tuple(converter_str(x) for x in result)
if len(set(result)) != len(result):
raise ValueError("Tuple-set contains duplicates: {}".format(", ".join(result)))
return result
def converter_tuple(obj) -> tuple:
"""
Convert input to a tuple. ``None`` will be converted to an empty tuple.
Parameters
----------
obj: Any
Object to convert.
Returns
-------
obj: Tuple[Any]
Tuple.
"""
if obj is None:
return ()
elif hasattr(obj, "__iter__") and not isinstance(obj, (str, bytes)):
return tuple(x for x in obj)
else:
return (obj,)
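For illustration, here is a self-contained copy of the three-way dispatch in ``converter_tuple`` above, with its edge cases exercised:

```python
# Self-contained copy of ``converter_tuple``: None -> empty tuple,
# strings/bytes stay atomic, other iterables are unpacked.
def converter_tuple(obj) -> tuple:
    if obj is None:
        return ()
    elif hasattr(obj, "__iter__") and not isinstance(obj, (str, bytes)):
        return tuple(x for x in obj)
    else:
        return (obj,)


assert converter_tuple(None) == ()
assert converter_tuple("abc") == ("abc",)
assert converter_tuple([1, 2]) == (1, 2)
assert converter_tuple(5) == (5,)
```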
def converter_str(obj) -> str:
"""
Ensures input is a unicode string.
Parameters
----------
obj: str
Object to convert.
Returns
-------
obj: str
String.
Raises
------
TypeError
If passed object is not string/byte-like.
"""
if isinstance(obj, str):
return obj
elif isinstance(obj, bytes):
return obj.decode("utf-8")
else:
raise TypeError(
"Object of type {type} is not a string: {obj}".format(
obj=obj, type=type(obj).__name__
)
)
def get_str_to_python_converter(pa_type):
"""
Get converter to parse string into python object.
Parameters
----------
pa_type: pyarrow.DataType
Data type.
Returns
-------
converter: Callable[[str], Any]
Converter.
"""
if pa.types.is_boolean(pa_type):
def var_f(x):
if x.lower() in ("0", "f", "n", "false", "no"):
return False
elif x.lower() in ("1", "t", "y", "true", "yes"):
return True
else:
raise ValueError("Cannot parse bool: {}".format(x))
return var_f
elif pa.types.is_floating(pa_type):
return float
elif pa.types.is_integer(pa_type):
return int
elif pa.types.is_string(pa_type):
def var_f(x):
if len(x) > 1:
for char in ('"', "'"):
if x.startswith(char) and x.endswith(char):
return x[1:-1]
return x
return var_f
elif pa.types.is_timestamp(pa_type):
return pd.Timestamp
else:
raise ValueError("Cannot handle type {pa_type}".format(pa_type=pa_type)) | /regallager-0.0.1.tar.gz/regallager-0.0.1/kartothek/utils/converters.py | 0.929792 | 0.348396 | converters.py | pypi |
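The boolean branch above can be lifted out as a stand-alone parser (no pyarrow required); the accepted spellings are copied verbatim from the code:

```python
# Stand-alone version of the boolean string parser used above.
def parse_bool(x: str) -> bool:
    if x.lower() in ("0", "f", "n", "false", "no"):
        return False
    elif x.lower() in ("1", "t", "y", "true", "yes"):
        return True
    else:
        raise ValueError("Cannot parse bool: {}".format(x))


assert parse_bool("Yes") is True
assert parse_bool("F") is False
```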
from __future__ import absolute_import
import pandas as pd
import pyarrow.parquet as pq
from simplekv import KeyValueStore
from kartothek.core.factory import DatasetFactory
from kartothek.core.index import ExplicitSecondaryIndex
from kartothek.core.naming import (
METADATA_BASE_SUFFIX,
METADATA_FORMAT_JSON,
TABLE_METADATA_FILE,
)
from kartothek.serialization._io_buffer import BlockBuffer
from kartothek.utils.converters import converter_str
__all__ = (
"get_dataset_columns",
"get_dataset_keys",
"get_partition_dataframe",
"get_physical_partition_stats",
"metadata_factory_from_dataset",
)
def get_dataset_columns(dataset):
"""
Get columns present in a Kartothek_Cube-compatible Kartothek dataset.
Parameters
----------
dataset: kartothek.core.dataset.DatasetMetadata
Dataset to get the columns from.
Returns
-------
columns: Set[str]
Usable columns.
"""
return {
converter_str(col)
for col in dataset.schema.names
if not col.startswith("__") and col != "KLEE_TS"
}
def get_dataset_keys(dataset):
"""
Get store keys that belong to the given Kartothek dataset.
Parameters
----------
dataset: kartothek.core.dataset.DatasetMetadata
Datasets to scan for keys.
Returns
-------
keys: Set[str]
Storage keys.
"""
keys = set()
# central metadata
keys.add(dataset.uuid + METADATA_BASE_SUFFIX + METADATA_FORMAT_JSON)
# common metadata
for table in dataset.tables:
keys.add("{}/{}/{}".format(dataset.uuid, table, TABLE_METADATA_FILE))
# indices
for index in dataset.indices.values():
if isinstance(index, ExplicitSecondaryIndex):
keys.add(index.index_storage_key)
# partition files (usually .parquet files)
for partition in dataset.partitions.values():
for f in partition.files.values():
keys.add(f)
return keys
class _DummyStore(KeyValueStore):
"""
Dummy store that should not be used.
"""
pass
def _dummy_store_factory():
"""
Creates unusable dummy store.
"""
return _DummyStore()
def metadata_factory_from_dataset(dataset, with_schema=True, store=None):
"""
Create :class:`~kartothek.core.factory.DatasetFactory` from :class:`~kartothek.core.dataset.DatasetMetadata`.
Parameters
----------
dataset: kartothek.core.dataset.DatasetMetadata
Already loaded dataset.
with_schema: bool
If dataset was loaded with ``load_schema``.
store: Optional[Callable[[], simplekv.KeyValueStore]]
Optional store factory.
Returns
-------
factory: DatasetFactory
Metadata factory w/ caches pre-filled.
"""
factory = DatasetFactory(
dataset_uuid=dataset.uuid,
store_factory=store or _dummy_store_factory,
load_schema=with_schema,
)
factory._cache_metadata = dataset
factory.is_loaded = True
return factory
def get_physical_partition_stats(metapartitions, store):
"""
Get statistics for partition.
Parameters
----------
metapartitions: Iterable[kartothek.io_components.metapartition.MetaPartition]
Iterable of metapartitions belonging to the same physical partition.
store: Union[simplekv.KeyValueStore, Callable[[], simplekv.KeyValueStore]]
KV store.
Returns
-------
stats: Dict[str, int]
Statistics for the current partition.
"""
if callable(store):
store = store()
files = 0
blobsize = 0
rows = 0
for mp in metapartitions:
files += 1
fp = BlockBuffer(store.open(mp.file))
try:
fp_parquet = pq.ParquetFile(fp)
rows += fp_parquet.metadata.num_rows
blobsize += fp.size
finally:
fp.close()
return {"blobsize": blobsize, "files": files, "partitions": 1, "rows": rows}
def get_partition_dataframe(dataset, cube):
"""
Create a DataFrame that represents the partitioning of the dataset.
The row index named ``"partition"`` contains the partition labels; the columns are the physical partition columns.
Parameters
----------
dataset: kartothek.core.dataset.DatasetMetadata
Dataset to analyze, with partition indices pre-loaded.
cube: kartothek.core.cube.cube.Cube
Cube spec.
Returns
-------
df: pandas.DataFrame
DataFrame with partition data.
"""
cols = sorted(set(dataset.partition_keys) - {"KLEE_TS"})
if not cols:
return pd.DataFrame(
index=pd.Index(sorted(dataset.partitions.keys()), name="partition")
)
series_list = []
for pcol in cols:
series_list.append(
dataset.indices[pcol].as_flat_series(
partitions_as_index=True, compact=False
)
)
return (
pd.concat(series_list, axis=1, sort=False)
.sort_index()
.rename_axis(index="partition")
) | /regallager-0.0.1.tar.gz/regallager-0.0.1/kartothek/utils/ktk_adapters.py | 0.911233 | 0.339773 | ktk_adapters.py | pypi |
from __future__ import absolute_import
from collections import OrderedDict
import numpy as np
import pandas as pd
__all__ = (
"aggregate_to_lists",
"concat_dataframes",
"drop_sorted_duplicates_keep_last",
"is_dataframe_sorted",
"mask_sorted_duplicates_keep_last",
"merge_dataframes_robust",
"sort_dataframe",
)
def concat_dataframes(dfs, default=None):
"""
Concatenate given DataFrames.
For non-empty iterables, this is roughly equivalent to::
pd.concat(dfs, ignore_index=True, sort=False)
except that the resulting index is undefined.
.. important::
If ``dfs`` is a list, it gets emptied during the process.
.. warning::
This requires all DataFrames to have the very same set of columns!
Parameters
----------
dfs: Iterable[pandas.DataFrame]
Iterable of DataFrames w/ identical columns.
default: Optional[pandas.DataFrame]
Optional default if iterable is empty.
Returns
-------
df: pandas.DataFrame
Concatenated DataFrame or default value.
Raises
------
ValueError
If iterable is empty but no default was provided.
"""
# collect potential iterators
if not isinstance(dfs, list):
dfs = list(dfs)
if len(dfs) == 0:
if default is not None:
res = default
else:
raise ValueError("Cannot concatenate 0 dataframes.")
elif len(dfs) == 1:
# that's faster than pd.concat w/ a single DF
res = dfs[0]
else:
# pd.concat seems to hold the data in memory 3 times (not twice, as you might expect from naively copying the
# input blocks into the output DF). This is very unfortunate, especially for larger queries. This column-based
# approach effectively reduces the maximum memory consumption and, to our knowledge, is not measurably slower.
colset = set(dfs[0].columns)
if not all(colset == set(df.columns) for df in dfs):
raise ValueError("Not all DataFrames have the same set of columns!")
res = pd.DataFrame(index=pd.RangeIndex(sum(len(df) for df in dfs)))
for col in dfs[0].columns:
res[col] = pd.concat(
[df[col] for df in dfs], ignore_index=True, sort=False, copy=False
)
# ensure list (which is still referenced in parent scope) gets emptied
del dfs[:]
return res
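The column-by-column concatenation used above can be sketched in isolation; the two frames are made up for illustration:

```python
import pandas as pd

# Two made-up frames with identical columns.
dfs = [pd.DataFrame({"a": [1], "b": [2]}), pd.DataFrame({"a": [3], "b": [4]})]

# Concatenate column by column into a pre-sized frame, as done above.
res = pd.DataFrame(index=pd.RangeIndex(sum(len(df) for df in dfs)))
for col in dfs[0].columns:
    res[col] = pd.concat([df[col] for df in dfs], ignore_index=True)

assert res["a"].tolist() == [1, 3]
assert res["b"].tolist() == [2, 4]
```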
def is_dataframe_sorted(df, columns):
"""
Check that the given DataFrame is sorted as specified.
This is more efficient than sorting the DataFrame.
An empty DataFrame (no rows) is considered to be sorted.
.. warning::
This function does NOT handle NULL values correctly!
Parameters
----------
df: pd.DataFrame
DataFrame to check.
columns: Iterable[str]
Columns that the DataFrame should be sorted by.
Returns
-------
sorted: bool
``True`` if DataFrame is sorted, ``False`` otherwise.
Raises
------
ValueError: If ``columns`` is empty.
KeyError: If specified columns in ``by`` is missing.
"""
columns = list(columns)
if len(columns) == 0:
raise ValueError("`columns` must contain at least 1 column")
state = None
for col in columns[::-1]:
data = df[col].values
if isinstance(data, pd.Categorical):
data = np.asarray(data)
data0 = data[:-1]
data1 = data[1:]
with np.errstate(invalid="ignore"):
comp_le = data0 < data1
comp_eq = data0 == data1
if state is None:
# last column
state = comp_le | comp_eq
else:
state = comp_le | (comp_eq & state)
return state.all()
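The pairwise comparison trick above can be seen on a small made-up frame that is sorted by ``("x", "y")`` even though ``"y"`` alone is unsorted:

```python
import pandas as pd

# Made-up frame, lexicographically sorted by (x, y).
df = pd.DataFrame({"x": [1, 1, 2], "y": [1, 2, 0]})

state = None
for col in ["x", "y"][::-1]:
    data = df[col].values
    comp_le = data[:-1] < data[1:]
    comp_eq = data[:-1] == data[1:]
    # A row pair is ordered if the outer column strictly increases,
    # or it ties and the inner columns are already ordered.
    state = comp_le | comp_eq if state is None else comp_le | (comp_eq & state)

assert bool(state.all())
```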
def sort_dataframe(df, columns):
"""
Sort DataFrame by columns.
This is roughly equivalent to::
df.sort_values(columns).reset_index(drop=True)
.. warning::
This function does NOT handle NULL values correctly!
Parameters
----------
df: pandas.DataFrame
DataFrame to sort.
columns: Iterable[str]
Columns to sort by.
Returns
-------
df: pandas.DataFrame
Sorted DataFrame w/ reset index.
"""
columns = list(columns)
if is_dataframe_sorted(df, columns):
return df
data = [df[col].values for col in columns[::-1]]
df = df.iloc[np.lexsort(data)]
# reset inplace to reduce the memory usage
df.reset_index(drop=True, inplace=True)
return df
def mask_sorted_duplicates_keep_last(df, columns):
"""
Mask duplicates on sorted data, keep last occurrence as unique entry.
Roughly equivalent to::
df.duplicated(subset=columns, keep='last').values
.. warning:
NULL-values are not supported!
.. warning:
The behavior on unsorted data is undefined!
Parameters
----------
df: pandas.DataFrame
DataFrame in question.
columns: Iterable[str]
Column-subset for duplicate-check (remaining columns are ignored).
Returns
-------
mask: numpy.ndarray
1-dimensional boolean array, marking duplicates w/ ``True``
"""
columns = list(columns)
rows = len(df)
mask = np.zeros(rows, dtype=bool)
if (rows > 1) and columns:
sub = np.ones(rows - 1, dtype=bool)
for col in columns:
data = df[col].values
sub &= data[:-1] == data[1:]
mask[:-1] = sub
return mask
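The vectorized keep-last masking above reduces to a single shifted equality check; a made-up sorted frame illustrates it:

```python
import numpy as np
import pandas as pd

# Made-up sorted frame; the first two rows are duplicates on "k".
df = pd.DataFrame({"k": [1, 1, 2], "v": [10, 11, 12]})

mask = np.zeros(len(df), dtype=bool)
sub = np.ones(len(df) - 1, dtype=bool)
for col in ["k"]:
    data = df[col].values
    sub &= data[:-1] == data[1:]
mask[:-1] = sub

# The earlier of two equal rows is flagged; the last occurrence survives.
assert mask.tolist() == [True, False, False]
```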
def drop_sorted_duplicates_keep_last(df, columns):
"""
Drop duplicates on sorted data, keep last occurrence as unique entry.
Roughly equivalent to::
df.drop_duplicates(subset=columns, keep='last')
.. warning:
NULL-values are not supported!
.. warning:
The behavior on unsorted data is undefined!
Parameters
----------
df: pandas.DataFrame
DataFrame in question.
columns: Iterable[str]
Column-subset for duplicate-check (remaining columns are ignored).
Returns
-------
df: pandas.DataFrame
DataFrame w/o duplicates.
"""
columns = list(columns)
dup_mask = mask_sorted_duplicates_keep_last(df, columns)
if dup_mask.any():
# pandas is just slow, so try to avoid the indexing call
return df.iloc[~dup_mask]
else:
return df
def aggregate_to_lists(df, by, data_col):
"""
Do a group-by and collect the results as python lists.
Roughly equivalent to::
df = df.groupby(
by=by,
as_index=False,
)[data_col].agg(lambda series: list(series.values))
Parameters
----------
df: pandas.DataFrame
Dataframe.
by: Iterable[str]
Group-by columns, might be empty.
data_col: str
Column with values to be collected.
Returns
-------
df: pandas.DataFrame
DataFrame w/ operation applied.
"""
by = list(by)
if df.empty:
return df
if not by:
return pd.DataFrame({data_col: pd.Series([list(df[data_col].values)])})
# sort the DataFrame by `by`-values, so that rows of every group-by group are consecutive
df = sort_dataframe(df, by)
# collect the following data for every group:
# - by-values
# - list of values in `data_col`
result_idx_data = [[] for _ in by]
result_labels = []
# remember index (aka values in `by`) and list of data values for current group
group_idx = None # Tuple[Any, ...]
group_values = None # List[Any]
def _store_group():
"""
Store the current group from `group_idx` and `group_values` into the result lists.
"""
if group_idx is None:
# no group exists yet
return
for result_idx_part, idx_part in zip(result_idx_data, group_idx):
result_idx_part.append(idx_part)
result_labels.append(group_values)
# create iterator over row-tuples, where every tuple contains values of all by-columns
iterator_idx = zip(*(df[col].values for col in by))
# iterate over all rows in DataFrame and collect groups
for idx, label in zip(iterator_idx, df[data_col].values):
if (group_idx is None) or (idx != group_idx):
_store_group()
group_idx = idx
group_values = [label]
else:
group_values.append(label)
# store last group
_store_group()
# create result DataFrame out of lists
data = OrderedDict(zip(by, result_idx_data))
data[data_col] = result_labels
return pd.DataFrame(data)
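The "roughly equivalent" group-by from the docstring above can be checked on made-up data (using ``agg(list)`` in place of the lambda):

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "b", "a"], "label": [1, 2, 3]})

# Group by "g" and collect each group's values into a python list.
out = df.groupby("g")["label"].agg(list)

assert out.loc["a"] == [1, 3]
assert out.loc["b"] == [2]
```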
def merge_dataframes_robust(df1, df2, how):
"""
Merge two given DataFrames but also work if there are no columns to join on.
If no shared column between the given DataFrames is found, the join will be performed on a single, constant
column.
Parameters
----------
df1: pd.DataFrame
Left DataFrame.
df2: pd.DataFrame
Right DataFrame.
how: str
How to join the frames.
Returns
-------
df_joined: pd.DataFrame
Joined DataFrame.
"""
dummy_column = "__ktk_cube_join_dummy"
columns2 = set(df2.columns)
joined_columns = [c for c in df1.columns if c in columns2]
if len(joined_columns) == 0:
df1 = df1.copy()
df2 = df2.copy()
df1[dummy_column] = 1
df2[dummy_column] = 1
joined_columns = [dummy_column]
df_out = df1.merge(df2, on=joined_columns, how=how, sort=False)
df_out.drop(columns=dummy_column, inplace=True, errors="ignore")
return df_out | /regallager-0.0.1.tar.gz/regallager-0.0.1/kartothek/utils/pandas.py | 0.878978 | 0.462594 | pandas.py | pypi |
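The dummy-column fallback above amounts to a cross join; two made-up frames with no shared columns show the effect:

```python
import pandas as pd

# Two made-up frames sharing no columns.
df1 = pd.DataFrame({"a": [1, 2]})
df2 = pd.DataFrame({"b": ["x"]})

# Fall back to a constant dummy column, join on it, then drop it again.
df1["__dummy"] = 1
df2["__dummy"] = 1
out = df1.merge(df2, on="__dummy", how="left", sort=False).drop(columns="__dummy")

assert out.shape == (2, 2)  # effectively a cross join: 2 rows x 1 row
```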
this is a carbon copy of https://git.scc.kit.edu/feudal/feudalAdapterLdf/-/blob/master/ldf_adapter/backend/bwidm.py'''
# pylint
# vim: tw=100 foldmethod=indent
# pylint: disable=bad-continuation, invalid-name, superfluous-parens
# pylint: disable=bad-whitespace, mixed-indentation
# pylint: disable=redefined-outer-name, logging-not-lazy, logging-format-interpolation
# pylint: disable=missing-docstring, trailing-whitespace, trailing-newlines, too-few-public-methods
import logging
from sys import exit as s_exit
from urllib.parse import urljoin
from functools import reduce
import requests
import json
from .config import CONFIG
logger = logging.getLogger(__name__)
class BwIdmConnection:
"""Connection to the BWIDM API."""
def __init__(self, config=None):
self.session = requests.Session()
if config:
self.session.auth = (
config['backend.bwidm.auth']['http_user'],
config['backend.bwidm.auth']['http_pass']
)
def get(self, *url_fragments, **kwargs):
logger.debug(F"Url fragments: {url_fragments}")
return self._request('GET', url_fragments, **kwargs)
def post(self, *url_fragments, **kwargs):
return self._request('POST', url_fragments, **kwargs)
def _request(self, method, url_fragments, **kwargs):
"""
Arguments:
method -- HTTP Method (type: str)
url_fragments -- The components of the URL. Each is url-encoded separately and then they are
joined with '/'
fail=True -- Raise exception on non-200 HTTP status
**kwargs -- Passed to `requests.Request.__init__`
"""
fail = kwargs.pop('fail', True)
url_fragments = map(str, url_fragments)
url_fragments = map(lambda frag: requests.utils.quote(frag, safe=''), url_fragments)
url = reduce(lambda acc, frag: urljoin(acc, frag) if acc.endswith('/') else urljoin(acc+'/', frag),
url_fragments,
CONFIG['backend.bwidm']['url'])
logger.debug(url+"\n")
req = requests.Request(method, url, **kwargs)
rsp = self.session.send(self.session.prepare_request(req))
        try:
            import simplejson
            json_decode_errors = (json.JSONDecodeError,
                                  simplejson.errors.JSONDecodeError)
        except ImportError:  # simplejson is optional; requests may use either decoder
            json_decode_errors = (json.JSONDecodeError,)
        try:
            resp_json = rsp.json()
            resp = json.dumps(resp_json, sort_keys=True, indent=4, separators=(',', ': '))
        except json_decode_errors:
            resp = rsp.text
# logger.debug(F" => {resp}")
if fail:
if not rsp.ok:
logger.error("RegApp responded with: {}".format(rsp.content.decode('utf-8')))
s_exit(1)
rsp.raise_for_status()
        return rsp
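The fragment-joining logic inside `_request` can be exercised on its own; the helper name `build_url` below is illustrative, but the quoting and `urljoin` reduction mirror the code above:

```python
from functools import reduce
from urllib.parse import quote, urljoin

def build_url(base, *fragments):
    # quote each fragment fully, then fold them onto the base with '/'
    frags = (quote(str(f), safe='') for f in fragments)
    return reduce(
        lambda acc, frag: urljoin(acc, frag) if acc.endswith('/') else urljoin(acc + '/', frag),
        frags,
        base,
    )

# build_url('https://example.org/api', 'user', 'name@site')
# -> 'https://example.org/api/user/name%40site'
```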
_The current version of RegCensusAPI is only compatible with Python 3.6 and newer._
# RegCensus API
## Introduction
RegCensusAPI is an API client that connects to the RegData regulatory restrictions data by the Mercatus Center at George Mason University. RegData uses machine learning algorithms to quantify the number of regulatory restrictions in a jurisdiction. Currently, RegData is available for three countries - Australia, Canada, and the United States. In addition, there are regulatory restrictions data for jurisdictions (provinces in Canada and states in Australia and US) within these countries. You can find out more about RegData from http://www.quantgov.org.
This Python API client connects to the API located at the [QuantGov website][1]. More advanced users who want to interact with the API directly can use the link above to pull data from the RegData API. R users can access the same features provided in this package in the R package __regcensusAPI__.
We put together a short video tutorial, showing some of the basics of the API library. You can view that [here][4].
## Installing and Importing __RegCensus__
The RegCensus Python library is pip installable:
```
$ pip install regcensus
```
Once installed, import the library, using the following (use the `rc` alias to more easily use the library):
```
import regcensus as rc
```
## Structure of the API
The API organizes data around __document types__, which are then divided into __series__. Within each series are __values__, which are the ultimate values of interest. Values are available by three sub-groups: agency, industry, and occupation. Presently, there are no series with an occupation subgroup; however, these are available for future use. Document types broadly define the data available. For example, RegData for regulatory restrictions falls under the broad document type "Regulatory Restrictions." Within the Regulatory Restrictions document type, there are a number of series available. These include Total Restrictions, Total Wordcount, Total "Shall," etc.
A fundamental concept in RegData is the "document." In RegData, a set of documents represents a body of regulations for which we have produced regulatory restriction counts. For example, to produce data on regulatory restrictions imposed by the US Federal government, RegData uses the Code of Federal Regulations (CFR) as the source documents. Within the CFR, RegData identifies a unit of regulation as the title-part combination. The CFR is organized into 50 titles, and within each title are parts, which may or may not have subparts. Under the parts are sections. Determining this unit of analysis is critical to the context of the data produced by RegData. Producing regulatory restriction data for US states follows the same strategy but uses the state-specific regulatory code.
In requesting data through the API, you must specify the document type and indicate a preference for *summary* or *document-level* data. By default, RegCensus API returns summarized data for the date of interest. This means that if you do not specify the *summary* preference, you will receive the summarized data for a date. The __get_series__ helper function (described below) returns the dates available for each series.
RegCensus API defines a number of dates depending on the series. For example, the total restrictions series of Federal regulations uses two main dates: daily and annual. The daily data produces the number of regulatory restrictions issued on a particular date by the US Federal government. The same data are available on an annual basis.
There are five helper functions to retrieve information about these key components of RegData. These functions provide the following information: document types, jurisdictions, series, agencies, and dates with data. The list functions begin with __list__.
Each document type comprises one or more *series*. The __list_series__ function returns the list of all series.
```
rc.list_series()
```
Listing the jurisdictions is another great place to start. If you are looking for data for a specific jurisdiction(s), this function
will return the jurisdiction_id for all jurisdictions, which is key for retrieving data on any individual jurisdiction.
The __get_series__ function returns a list of all series and the years with data available for each jurisdiction.
The output from this function can serve as a reference for the valid values that can be passed to parameters in the __get_values__ function. The number of records returned is the unique combination of series and jurisdictions that are available in RegData. The function takes the optional argument jurisdiction id.
## Metadata
The __get_*__ functions return the details about RegData metadata. These metadata are not included in the __get_values__ functions that will be described later.
### Jurisdictions
Use the __get_jurisdiction__ function to return a data frame with all the jurisdictions. When you supply the jurisdiction ID parameter, the function returns the details of just that jurisdiction. Use the output from the __get_jurisdiction__ function to merge with data from the __get_values__ function.
```
rc.get_jurisdictions()
```
### DataFinder
Use the __get_datafinder__ function to return a data frame for a specific jurisdiction. It returns the following attributes in the data - `['jurisdiction', 'documentType', 'year', 'series', 'document_endpoints', 'summary_endpoints', 'label_endpoints']`
```
rc.get_datafinder(jurisdiction = 38)
```
### Agencies
The __get_agencies__ function returns a data frame of agencies with data in RegData. Either the `jurisdictionID` or `keyword` arguments must be supplied. If `jurisdictionID` is passed, the data frame will include information for all agencies in that jurisdiction. If `keyword` is supplied, the data frame will include information for all agencies whose name contains the keyword.
The following code snippet will return data for all agencies in the Federal United States:
```
rc.get_agencies(jurisdictionID = 38)
```
Likewise, this code snippet will return data for all agencies (in any jurisdiction) containing the word "education" (not case sensitive):
```
rc.get_agencies(keyword = 'education')
```
Use the value of the agency_id field when pulling values with the __get_values__ function.
### Industries
The __get_industries__ function returns a data frame of industries with data in the API. The available standards include the North American Industry Classification System (NAICS), the Bureau of Economic Analysis system (BEA), and the Standard Occupational Classification System (SOC). By default, the function only returns a data frame with 3-digit NAICS industries. The `codeLevel` and `standard` arguments can be used to select from other classifications.
The following line will get you industry information for all 4-digit NAICS industries:
```
rc.get_industries(labellevel = 4)
```
This line will get you information for the NAICS industries (this function is temporarily disabled as of 1.0.0):
```
rc.get_industries(labelsource = 'NAICS')
```
Like the __get_agencies__ function, the `keyword` argument may also be used. The following code snippet will return information for all 6-digit NAICS industries with the word "fishing" in the name (this function is temporarily disabled as of 1.0.0):
```
rc.get_industries(keyword = 'fishing', labellevel = 6)
```
## Values
The __get_values__ function is the primary function for obtaining RegData from the RegCensus API. The function takes the following parameters:
* jurisdiction (required) - value or list of jurisdiction IDs
* series (required) - value or list of series IDs
* year (required) - value or list of years
* agency (optional) - value or list of agencies
* industry (optional) - value or list of industries
* dateIsRange (optional) - specify if the list of years provided for the parameter years is a range. Default is True.
* filtered (optional) - specify if poorly-performing industry results should be excluded. Default is True.
* summary (optional) - specify if summary results should be returned, instead of document-level results. Default is True.
* country (optional) - specify if all values for a country's jurisdiction ID should be returned. Default is False.
* industryLevel (optional): level of NAICS industries to include. Default is 3.
* version (optional): Version ID for datasets with multiple versions; if no ID is given, the API returns the most recent version
* download (optional): if not False, a path location for a downloaded csv of the results.
* verbose (optional) - value specifying how much debugging information should be printed for each function call. Higher number specifies more information, default is 0.
In the example below, we are interested in the total number of restrictions and total number of words for the US (jurisdiction ID 38) for the dates 2010 to 2019.
```
rc.get_values(series = [1,2], jurisdiction = 38, year = [2010, 2019])
```
### Get all Values for a Country
The `country` argument can be used to get all values for one or multiple series for a specific national jurisdiction. The following line will get you a summary of the national and state-level restriction counts for the United States from 2016 to 2019 (this function is temporarily disabled as of 1.0.0):
```
rc.get_values(series = 1, jurisdiction = 38, year = [2016, 2019], country=True)
```
### Values by Subgroup
You can obtain data for any of the three subgroups for each series - agencies, industries, and occupations (when they become available).
#### Values by Agencies
To obtain the restrictions for a specific agency (or agencies), the series id supplied must be in the list of available series by agency. To recap, the list of available series for an agency is available via the __list_series__ function, and the list of agencies with data is available via __get_agencies__ function.
```
# Identify all agencies
rc.list_agencies(jurisdictionID = 38)
# Call the get_values() for two agencies and series 13
rc.get_values(series = 13, jurisdiction = 38, year = [2000, 2018], agency = [15918, 15921])
```
#### Values by Agency and Industry
Some agency series may also have data by industry. For example, under the Total Restrictions topic, RegData includes the industry-relevant restrictions, which estimates the number of restrictions that apply to a given industry. These are available in both the main series - Total Restrictions, and the sub-group Restrictions by Agency.
Valid values for industries include the industry codes specified in the classification system obtained by calling the __get_industries(jurisdiction)__ function.
In the example below, we can request data for the two industries 111 and 33 with the following code snippet.
```
rc.get_values(series = [1,28,33,36], jurisdiction = 38, year = [1990, 2000], label = 111, agency = 0)
```
### Document-Level Values
For most use-cases, our summary-level data will be enough. However, document-level data is also available, though most of these queries take much longer to return results. Multi-year and industry results for jurisdiction 38 will especially take a long time. If you want the full dataset for United States Federal, consider using our bulk downloads, available at the [QuantGov website][2].
We can request the same data from above, but at the document level, using the following code snippet.
```
rc.get_values(series = [1,2], jurisdiction = 38, year = 2020, summary=False)
```
Alternatively, we can use the __get_document_values__ function as in the following code snippet.
```
rc.get_document_values(series = [1,2], jurisdiction = 38, year = 2019)
```
See the __get_series__ function for specifics by jurisdiction.
### Version
_This currently applies to the RegData U.S. Annual project only._
As of version 0.2.4, a version parameter can be passed to the __get_values__ function to obtain data from past versions (currently only for the RegData U.S. Annual project). Available versions and their associated versionIDs can be obtained by using the __get_version__ function. If no version parameter is given, the most recent version will be returned. The following code snippet will return restrictions data for the 3.2 version of RegData U.S. Annual for the years 2010 to 2019.
```
rc.get_values(series = 1, jurisdiction = 38, year = [2010, 2019], version = 1)
```
### Merging with Metadata
To minimize the network bandwidth requirements of using RegCensusAPI, the data returned by the __get_values__ function contains very minimal metadata. Once you pull the values with __get_values__, you can use the Pandas library to include the metadata.
Suppose we want to attach the agency names and other agency characteristics to the data from the last code snippet. First be sure to pull the list of agencies into a separate data frame. Then merge with the values data frame. The key for matching the data will be the *agency_id* column.
We can merge the agency data with the values data as in the code snippet below.
```
agencies = rc.get_agencies(jurisdictionID = 38)
agency_by_industry = rc.get_values(
series = [1,28,33,36],
jurisdiction = 38,
year = [1990, 2000],
label = 111,
agency = 0)
agency_restrictions_ind = agency_by_industry.merge(
agencies, on='agency_id')
```
## Downloading Data
There are two different ways to download data retrieved from RegCensusAPI:
1. Use the pandas `df.to_csv(outpath)` function, which allows the user to download a csv of the data, with the given outpath. See the pandas [documentation][3] for more features.
2. As of version 0.2.0, the __get_values__ function includes a `download` argument, which allows the user to simply download a csv of the data in the same line as the API call. See below for an example of this call.
```
rc.get_values(series = [1,28,33,36], jurisdiction = 38, year = [2010, 2019], download='regdata2010to2019.csv')
```
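For the first option, a short sketch (the frame below is a stand-in for a real __get_values__ result):

```python
import io

import pandas as pd

# hypothetical values frame standing in for a get_values() result
df = pd.DataFrame({"series": [1, 1], "year": [2010, 2011], "restrictions": [104000, 105500]})
buffer = io.StringIO()
df.to_csv(buffer, index=False)  # pass a path instead of a buffer to write a file
csv_text = buffer.getvalue()
# first line of csv_text: "series,year,restrictions"
```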
[1]:https://api.quantgov.org/swagger-ui.html
[2]:https://www.quantgov.org/download-interactively
[3]:https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html
[4]:https://mercatus.wistia.com/medias/1hxnkfjnxa
regulations-core
================
[](https://travis-ci.org/eregs/regulations-core)
[](https://gemnasium.com/github.com/eregs/regulations-core)
[](https://coveralls.io/github/eregs/regulations-core?branch=master)
[](https://codeclimate.com/github/eregs/regulations-core)
An API library that provides an interface for storing and retrieving regulations,
layers, etc.
This repository is part of a larger project. To read about it, please see
[http://eregs.github.io/](http://eregs.github.io/).
## Features
* Search integration with Elastic Search or Django Haystack
* Support for storage via Elastic Search or Django Models
* Separation of API into a read and a write portion
* Deconstruction of regulations and layers into their components, allowing
paragraph-level access
* Schema checking for regulations
## Requirements
This library requires
* Python 2.7, 3.4, 3.5, or 3.6
* Django 1.8, 1.9, 1.10, or 1.11
## API Docs
[regulations-core on Read The Docs](http://regulations-core.readthedocs.org/en/latest/)
## Local development
### Tox
We use [tox](https://tox.readthedocs.io) to test across multiple versions of Python
and Django. To run our tests, linters, and build our docs, you'll need to
install `tox` *globally* (Tox handles virtualenvs for us).
```bash
pip install tox
# If using pyenv, consider also installing tox-pyenv
```
Then, run tests and linting across available Python versions:
```bash
tox
```
To build docs, run:
```bash
tox -e docs
```
The output will be in `docs/_build/dirhtml`.
### Running as an application
While this library is generally intended to be used within a larger project,
it can also be run as its own application via
[Docker](https://www.docker.com/) or a local Python install. In both cases,
we'll run in `DEBUG` mode using SQLite for data storage. We don't have a turn
key solution for integrating this with search (though it can be accomplished
via a custom settings file).
To run via Docker,
```bash
docker build . -t eregs/core # only needed after code changes
docker run -p 8080:8080 eregs/core
```
To run via local Python, run the following inside a
[virtualenv](https://virtualenv.pypa.io/en/stable/):
```bash
pip install .
python manage.py migrate
python manage.py runserver 0.0.0.0:8080
```
In both cases, you can find the site locally at
[http://0.0.0.0:8080/](http://0.0.0.0:8080/).
## Apps included
This repository contains four Django apps, *regcore*, *regcore_read*,
*regcore_write*, and *regcore_pgsql*. The first contains shared models and
libraries. The "read" app provides read-only end-points while the "write" app
provides write-only end-points (see the next section for security
implications.) We recommend using *regcore.urls* as your url router, in which
case turning on or off read/write capabilities is as simple as including the
appropriate applications in your Django settings file. The final app,
*regcore_pgsql* contains all of the modules related to running with a
Postgres-based search index. Note that you will always need *regcore*
installed.
## Security
Note that *regcore_write* is designed to only be active inside an
organization; the assumption is that data will be pushed to public facing,
read-only (i.e. without *regcore_write*) sites separately.
When using the Elastic Search backend, data is passed as JSON, preventing
SQL-like injections. When using haystack, data is stored via Django's model
framework, which escapes SQL before it hits the db.
All data types require JSON input (which is checked.) The regulation type
has an additional schema check, which is currently not present for other
data types. Again, this liability is limited by the segmentation of read and
write end points.
As all data is assumed to be publicly visible, data is not encrypted before
it is sent to the storage engine. Data may be compressed, however.
Be sure to override the default `SECRET_KEY` setting and to turn `DEBUG`
off in your `local_settings.py`.
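A minimal `local_settings.py` along those lines might look like the following (the key is a placeholder — generate your own random value):

```python
# local_settings.py -- placeholder values only
DEBUG = False
SECRET_KEY = 'replace-with-a-long-random-string'
```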
## Storage-Backends
This project allows multiple backends for storing, retrieving, and searching
data. The default settings file uses Django models for data storage and
Haystack for search, but Elastic Search (1.7) or Postgres can be used instead.
### Django Models For Data, Haystack For Search
This is the default configuration. You will need to have *haystack* installed
and one of their
[backends](http://django-haystack.readthedocs.io/en/master/backend_support.html).
In your settings file, use:
```python
BACKENDS = {
'regulations': 'regcore.db.django_models.DMRegulations',
'layers': 'regcore.db.django_models.DMLayers',
'notices': 'regcore.db.django_models.DMNotices',
'diffs': 'regcore.db.django_models.DMDiffs'
}
SEARCH_HANDLER = 'regcore_read.views.haystack_search.search'
```
You will need to migrate the database (`manage.py migrate`) to get started and
rebuild the search index (`manage.py rebuild_index`) after adding documents.
### Django Models For Data, Postgres For Search
If running Django 1.10 or greater, you may skip *haystack* and rely
exclusively on Postgres for search. The current search index only indexes at
the CFR section level. Install the `psycopg` package (e.g. through `pip install
regcore[backend-pgsql]`) and use the following settings:
```python
BACKENDS = {
'regulations': 'regcore.db.django_models.DMRegulations',
'layers': 'regcore.db.django_models.DMLayers',
'notices': 'regcore.db.django_models.DMNotices',
'diffs': 'regcore.db.django_models.DMDiffs'
}
SEARCH_HANDLER = 'regcore_pgsql.views.search'
APPS.append('regcore_pgsql')
```
You may wish to extend the `regcore.settings.pgsql` module for simplicity.
You will need to migrate the database (`manage.py migrate`) to get started and
rebuild the search index (`manage.py rebuild_pgsql_index`) after adding
documents.
### Elastic Search For Data and Search
If *pyelasticsearch* is installed (e.g. through `pip install
regcore[backend-elastic]`), you can use Elastic Search (1.7) for both data
storage and search. Add the following to your settings file:
```python
BACKENDS = {
'regulations': 'regcore.db.es.ESRegulations',
'layers': 'regcore.db.es.ESLayers',
'notices': 'regcore.db.es.ESNotices',
'diffs': 'regcore.db.es.ESDiffs'
}
SEARCH_HANDLER = 'regcore_read.views.es_search.search'
```
You may wish to extend the `regcore.settings.elastic` module for simplicity.
## Settings
While we provide sane default settings in `regcore/settings/base.py`, we
recommend these defaults be overridden as needed in a `local_settings.py` file.
If using Elastic Search, you will need to let the application know how to
connect to the search servers.
* `ELASTIC_SEARCH_URLS` - a list of strings which define how to connect
to your search server(s). This is passed along to pyelasticsearch.
* `ELASTIC_SEARCH_INDEX` - the index to be used by elastic search. This
defaults to 'eregs'
The `BACKENDS` setting (as described above) must be a dictionary of the
appropriate model names ('regulations', 'layers', etc.) to the associated
backend class. Backends can be mixed and matched, though I can't think of a
good use case for that desire.
All standard Django and haystack settings are also available; you will likely
want to override `DATABASES`, `HAYSTACK_CONNECTIONS`, `DEBUG` and certainly
`SECRET_KEY`.
## Importing Data
### Via the `eregs` parser
The `eregs` script (see
[regulations-parser](http://github.com/eregs/regulations-parser)) includes
subcommands which will write processed data to a running API. Notably, if
`write_to` (the last step of `pipeline`) is directed at a target beginning
with `http://` or `https://`, it will write the relevant data to that host.
Note that HTTP authentication can be encoded within these urls. For example,
if the API is running on the localhost, port 8000, you could run:
```bash
$ eregs write_to http://localhost:8000/
```
See the command line
[docs](https://eregs-parser.readthedocs.io/en/latest/commandline.html) for
more detail.
### Via the `import_docs` Django command
If you've already exported data from the parser, you may import it from the
command line with the `import_docs` Django management command. It should be
given the root directory of the data as its only parameter. Note that this
does not require a running API.
```bash
$ ls /path/to/data-root
diff layer notice regulation
$ python manage.py import_docs /path/to/data-root
```
### Via curl
You may also simulate sending data to a running API via curl, if you've
exported data from the parser. For example, if the API is running on the
localhost, port 8000, you could run:
```bash
$ cd /path/to/data-root
$ ls
diff layer notice regulation
$ for TAIL in $(find */* -type f | sort -r) \
do \
curl -X PUT http://localhost:8000/$TAIL -d @$TAIL \
done
```
from django.conf import settings
from django.contrib.postgres.search import SearchRank, SearchQuery
from django.db.models import F, Q
from regcore.models import Document
from regcore.responses import success
from regcore_read.views.search_utils import requires_search_args
def matching_sections(search_args):
"""Retrieve all Document sections that match the parsed search args."""
sections_query = Document.objects\
.annotate(rank=SearchRank(
F('documentindex__search_vector'), SearchQuery(search_args.q)))\
.filter(rank__gt=settings.PG_SEARCH_RANK_CUTOFF)\
.order_by('-rank')
if search_args.version:
sections_query = sections_query.filter(version=search_args.version)
if search_args.regulation:
sections_query = sections_query.filter(
documentindex__doc_root=search_args.regulation)
return sections_query
@requires_search_args
def search(request, doc_type, search_args):
sections = matching_sections(search_args)
start = search_args.page * search_args.page_size
end = start + search_args.page_size
return success({
'total_hits': sections.count(),
'results': transform_results(sections[start:end], search_args.q),
})
def transform_results(sections, search_terms):
"""Convert matching Section objects into the corresponding dict for
serialization."""
final_results = []
for section in sections:
# TODO: n+1 problem; hypothetically these could all be performed via
# subqueries and annotated on the sections queryset
match_node = section.get_descendants(include_self=True)\
.filter(Q(text__search=search_terms) |
Q(title__search=search_terms))\
.first() or section
text_node = match_node.get_descendants(include_self=True)\
.exclude(text='')\
.first()
final_results.append({
'text': text_node.text if text_node else '',
'label': match_node.label_string.split('-'),
'version': section.version,
'regulation': section.label_string.split('-')[0],
'label_string': match_node.label_string,
'match_title': match_node.title,
'paragraph_title': text_node.title if text_node else '',
'section_title': section.title,
'title': section.title,
})
    return final_results
from django.conf import settings
from pyelasticsearch import ElasticSearch
from regcore.db.es import ESLayers
from regcore.responses import success
from regcore_read.views.search_utils import requires_search_args
@requires_search_args
def search(request, doc_type, search_args):
"""Search elastic search for any matches in the node's text"""
query = {
'fields': ['text', 'label', 'version', 'regulation', 'title',
'label_string'],
'from': search_args.page * search_args.page_size,
'size': search_args.page_size,
}
text_match = {'match': {'text': search_args.q, 'doc_type': doc_type}}
if search_args.version or search_args.regulation:
term = {}
if search_args.version:
term['version'] = search_args.version
if search_args.regulation:
term['regulation'] = search_args.regulation
if search_args.is_root is not None:
term['is_root'] = search_args.is_root
if search_args.is_subpart is not None:
term['is_subpart'] = search_args.is_subpart
query['query'] = {'filtered': {
'query': text_match,
'filter': {'term': term}
}}
else:
query['query'] = text_match
es = ElasticSearch(settings.ELASTIC_SEARCH_URLS)
results = es.search(query, index=settings.ELASTIC_SEARCH_INDEX)
return success({
'total_hits': results['hits']['total'],
'results': transform_results([h['fields'] for h in
results['hits']['hits']])
})
def transform_results(results):
"""Pull out unused fields, add title field from layers if possible"""
regulations = {(r['regulation'], r['version']) for r in results}
layers = {}
for regulation, version in regulations:
terms = ESLayers().get('terms', regulation, version)
# We need the references, not the locations of defined terms
if terms:
defined = {}
for term_struct in terms['referenced'].values():
defined[term_struct['reference']] = term_struct['term']
terms = defined
layers[(regulation, version)] = {
'keyterms': ESLayers().get('keyterms', regulation, version),
'terms': terms
}
for result in results:
title = result.get('title', '')
ident = (result['regulation'], result['version'])
keyterms = layers[ident]['keyterms']
terms = layers[ident]['terms']
if not title and keyterms and result['label_string'] in keyterms:
title = keyterms[result['label_string']][0]['key_term']
if not title and terms and result['label_string'] in terms:
title = terms[result['label_string']]
if title:
result['title'] = title
    return results
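The inversion of the terms layer performed in `transform_results` can be sketched standalone; the layer fragment below is a hypothetical example of the shape the code assumes:

```python
# hypothetical 'referenced' section of a terms layer
terms_layer = {
    'referenced': {
        'state:1005-2-j': {'reference': '1005-2-j', 'term': 'state'},
        'act:1005-2-a': {'reference': '1005-2-a', 'term': 'act'},
    }
}

# invert to map each defined-term location to the term it defines
defined = {}
for term_struct in terms_layer['referenced'].values():
    defined[term_struct['reference']] = term_struct['term']
```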
from haystack.query import SearchQuerySet
from regcore.db.django_models import DMLayers
from regcore.models import Document
from regcore.responses import success
from regcore_read.views.search_utils import requires_search_args
@requires_search_args
def search(request, doc_type, search_args):
"""Use haystack to find search results"""
query = SearchQuerySet().models(Document).filter(
content=search_args.q, doc_type=doc_type)
if search_args.version:
query = query.filter(version=search_args.version)
if search_args.regulation:
query = query.filter(regulation=search_args.regulation)
if search_args.is_root is not None:
query = query.filter(is_root=search_args.is_root)
if search_args.is_subpart is not None:
query = query.filter(is_subpart=search_args.is_subpart)
start = search_args.page * search_args.page_size
end = start + search_args.page_size
return success({
'total_hits': len(query),
'results': transform_results(query[start:end]),
})
def transform_results(results):
"""Add title field from layers if possible"""
regulations = {(r.regulation, r.version) for r in results}
layers = {}
for regulation, version in regulations:
terms = DMLayers().get('terms', regulation, version)
# We need the references, not the locations of defined terms
if terms:
defined = {}
for term_struct in terms['referenced'].values():
defined[term_struct['reference']] = term_struct['term']
terms = defined
layers[(regulation, version)] = {
'keyterms': DMLayers().get('keyterms', regulation, version),
'terms': terms
}
final_results = []
for result in results:
transformed = {
'text': result.text,
'label': result.label_string.split('-'),
'version': result.version,
'regulation': result.regulation,
'label_string': result.label_string
}
if result.title:
title = result.title[0]
else:
title = None
ident = (result.regulation, result.version)
keyterms = layers[ident]['keyterms']
terms = layers[ident]['terms']
if not title and keyterms and result.label_string in keyterms:
title = keyterms[result.label_string][0]['key_term']
if not title and terms and result.label_string in terms:
title = terms[result.label_string]
if title:
transformed['title'] = title
final_results.append(transformed)
    return final_results
import logging
from regcore.db import storage
from regcore.layer import standardize_params
from regcore.responses import success, user_error
from regcore_write.views.security import json_body, secure_write
logger = logging.getLogger(__name__)
def child_label_of(lhs, rhs):
    """Is the lhs label a child of the rhs label?"""
# Interpretations have a slightly different hierarchy
if 'Interp' in lhs and 'Interp' in rhs:
lhs_reg, lhs_comment = lhs.split('Interp')
rhs_reg, rhs_comment = rhs.split('Interp')
if lhs_reg.startswith(rhs_reg):
return True
# Handle Interps with shared prefix as well as non-interps
if lhs.startswith(rhs):
return True
return False
@secure_write
@json_body
def add(request, name, doc_type, doc_id):
"""Add the layer node and all of its children to the db"""
layer = request.json_body
if not isinstance(layer, dict):
return user_error('invalid format')
params = standardize_params(doc_type, doc_id)
if params.doc_type not in ('preamble', 'cfr'):
return user_error('invalid doc type')
for key in layer.keys():
# terms layer has a special attribute
if not child_label_of(key, params.tree_id) and key != 'referenced':
return user_error('label mismatch: {0}, {1}'.format(
params.tree_id, key))
storage.for_layers.bulk_delete(name, params.doc_type, params.doc_id)
storage.for_layers.bulk_insert(child_layers(params, layer), name,
params.doc_type)
return success()
@secure_write
def delete(request, name, doc_type, doc_id):
"""Delete the layer node and all of its children from the db"""
params = standardize_params(doc_type, doc_id)
if params.doc_type not in ('preamble', 'cfr'):
return user_error('invalid doc type')
storage.for_layers.bulk_delete(name, params.doc_type, params.doc_id)
return success()
def child_layers(layer_params, layer_data):
"""We are generally given a layer corresponding to an entire regulation.
We need to split that layer up and store it per node within the
regulation. If a reg has 100 nodes, but the layer only has 3 entries, it
will still store 100 layer models -- many may be empty"""
doc_id_components = layer_params.doc_id.split('/')
if layer_params.doc_type == 'preamble':
doc_tree = storage.for_documents.get('preamble', layer_params.doc_id)
elif layer_params.doc_type == 'cfr':
version, label = doc_id_components
doc_tree = storage.for_documents.get('cfr', label, version)
else:
doc_tree = None
logger.error("Invalid doc type: %s", layer_params.doc_type)
if not doc_tree:
return []
to_save = []
def find_labels(node):
child_labels = []
for child in node['children']:
child_labels.extend(find_labels(child))
label_id = '-'.join(node['label'])
        # Treat "{version}/{cfr_part}" the same as "{preamble id}"
doc_id = '/'.join(doc_id_components[:-1] + [label_id])
sub_layer = {'doc_id': doc_id}
for key in layer_data:
# 'referenced' is a special case of the definitions layer
if key == label_id or key in child_labels or key == 'referenced':
sub_layer[key] = layer_data[key]
to_save.append(sub_layer)
return child_labels + [label_id]
find_labels(doc_tree)
    return to_save
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import regdata as rd
```
### Della Gatta Gene
```
rd.DellaGattaGene(scale_X=False, scale_y=False).plot();
```
### Heinonen 4
```
rd.Heinonen4(scale_X=False, scale_y=False).plot();
```
### Jump 1D
```
rd.Jump1D(scale_X=False, scale_y=False).plot();
```
### Motorcycle Helmet
```
rd.MotorcycleHelmet(scale_X=False, scale_y=False).plot();
```
### NonStat2D
```
rd.NonStat2D(scale_X=False, scale_y=False).plot();
```
### Olympic
```
rd.Olympic(scale_X=False, scale_y=False).plot();
```
### SineJump1D
```
rd.SineJump1D(scale_X=False, scale_y=False).plot();
```
### SineNoisy
```
rd.SineNoisy(scale_X=False, scale_y=False).plot();
```
### Smooth1D
```
rd.Smooth1D(scale_X=False, scale_y=False).plot();
```
### Step
```
rd.Step(scale_X=False, scale_y=False).plot();
```
import numpy as np
# define a function to calculate the slope and y-intercept of the line
def linear_regression(x, y):
n = len(x)
x_mean = np.mean(x)
y_mean = np.mean(y)
numerator = 0
denominator = 0
for i in range(n):
numerator += (x[i] - x_mean) * (y[i] - y_mean)
denominator += (x[i] - x_mean) ** 2
slope = numerator / denominator
y_intercept = y_mean - slope * x_mean
return slope, y_intercept
# define a function to make predictions using the calculated slope and y-intercept
def predict_linear(x, slope, y_intercept):
y_pred = slope * x + y_intercept
return y_pred
# polynomial regression
# define a function to create a polynomial feature matrix
def create_polynomial_features(x, degree):
x_poly = np.zeros((len(x), degree))
for i in range(degree):
x_poly[:, i] = x ** (i+1)
return x_poly
# define a function to perform polynomial regression
def polynomial_regression(x, y, degree):
x_poly = create_polynomial_features(x, degree)
model = np.linalg.lstsq(x_poly, y, rcond=None)[0]
return model
# define a function to make predictions using the polynomial model
def predict_polynomial(x, model):
y_pred = np.zeros_like(x)
for i in range(len(model)):
y_pred += model[i] * x ** (i+1)
return y_pred
# multiple linear regression
# define a function to perform multiple linear regression
def multiple_linear_regression(x, y):
X = np.column_stack((np.ones(len(x)), x)) # add a column of ones for the intercept term
model = np.linalg.lstsq(X, y, rcond=None)[0]
return model
# define a function to make predictions using the multiple linear regression model
def predict_multiple(x, model):
X = np.column_stack((np.ones(len(x)), x))
y_pred = np.dot(X, model)
return y_pred
def mean_absolute_error(y_actual, y_pred):
return np.mean(np.abs(y_actual - y_pred))
def root_mean_squared_error(y_actual, y_pred):
return np.sqrt(np.mean((y_actual - y_pred)**2))
def r_squared(y_actual, y_pred):
ssr = np.sum((y_actual - y_pred)**2)
sst = np.sum((y_actual - np.mean(y_actual))**2)
    return 1 - (ssr / sst)
## About REGENS :dna:
REGENS (REcombinatory Genome ENumeration of Subpopulations) is an open source Python package :package: that simulates whole genomes from real genomic segments.
REGENS recombines these segments in a way that simulates completely new individuals while simultaneously preserving the input genomes' linkage disequilibrium (LD) pattern with extremely high fidelity. REGENS can also simulate mono-allelic and epistatic single nucleotide variant (SNV) effects on a continuous or binary phenotype without perturbing the simulated LD pattern.
## :star2: IMPORTANT NOTICE (PLEASE READ) :star2:
REGENS's simulated genomes are comprised entirely of concatenated segments from the input dataset's real genomes. If your input genomes are not available for public use, then you may not be allowed to publicly release the simulated dataset. Please consult the institutions that provide you access to your input genotype dataset for more information about this matter.
## Instructions to Installing REGENS :hammer_and_wrench:
1. [Install conda](https://docs.conda.io/en/latest/miniconda.html) if you haven't already installed either Anaconda or Miniconda
2. Open your conda terminal. Type "Anaconda" or "Miniconda" into your search bar and open the terminal. It will look like this: <img src="images/conda_terminal.png" width="800" height="400"/>
3. Click the "Anaconda Prompt" app (left) to open the black terminal (right). The terminal's top must say "Anaconda prompt"
4. Enter ```conda create --name regens python=3.7``` in the terminal to create a new environment called regens with python version 3.7
5. Enter ```conda activate regens``` in the terminal to enter your new environment. If that doesn't work, enter ```source activate regens```
6. Once in your regens environment (repeat step 5 if you close and reopen the conda terminal), enter ```pip install regens```
7. Run [this command](https://github.com/EpistasisLab/regens/blob/main/README.md#simulate-genotype-data-computer) to allow regens to download the remaining files. It will write the simulated data into the `examples` folder that it downloads. If you experience permissions issues with this step, [try these remedies](https://github.com/EpistasisLab/regens/blob/main/README.md#remedies-to-known-permission-issues-adhesive_bandage):
## Input :turkey:
REGENS requires the following inputs:
- YOU MUST PROVIDE: real genotype data formatted as a standard (bed, bim, fam) plink _fileset_, ideally containing a minimum of 80 unrelated individuals.
- WE HAVE PROVIDED: a recombination map for every 1000 genomes population.
The provided recombination maps were created by the [pyrho algorithm](https://github.com/popgenmethods/pyrho) and modified by us to minimize required disk space. [Recombination maps between related populations are highly correlated](https://github.com/EpistasisLab/regens/blob/main/README.md#technical-details-robot), so you could pair your input dataset with the recombination map of the most genetically similar 1000 genomes population, as is usually done for SNP imputation. If you wish to make your own recombination map, then it must be formatted as described [here](https://github.com/EpistasisLab/regens/blob/main/README.md#simulate-genotype-data-with-custom-recombination-rate-dataframes-abacus).
## Output :poultry_leg:
REGENS outputs a standard (bed, bim, fam) plink fileset with the simulated genotype data (and optional phenotype information).
If plink is not available to you, please consider [bed-reader](https://pypi.org/project/bed-reader/0.1.1/), which reads (bed, bim, fam) plink filesets into the python environment quickly and efficiently.
In phenotype simulation, REGENS also outputs a file containing the R<sup>2</sup> value of the phenotype/genotype correlation and the *inferred* beta coefficients (see [example](https://github.com/EpistasisLab/regens/blob/main/correctness_testing_ACB/ACB_simulated_model_profile.txt)), which will most likely be close to but not equal to the input beta coefficients.
## Simulate genotype data :computer:
The following command uses `ACB.bed`, `ACB.bim`, and `ACB.fam` to simulate 10000 individuals without phenotypes. This command (or any other) will also complete the [final installation step](https://github.com/EpistasisLab/regens/blob/main/README.md#instructions-to-installing-regens-hammer_and_wrench). Windows users should replace all `\` linebreak characters with `^`.
```shell
python -m regens \
--in input_files/ACB \
--out ACB_simulated \
--simulate_nsamples 10000 \
--simulate_nbreakpoints 4 \
--population_code ACB \
--human_genome_version hg19
```
## Simulate genotype data with custom recombination rate dataframes :abacus:
The following command uses custom recombination rate files instead of the ones provided in the `hg19` and `hg38` folders (though the content in `input_files/hg19_ACB_renamed_as_custom` is just a copy of the content in `hg19/ACB`).
```shell
python -m regens \
--in input_files/ACB \
--out ACB_simulated \
--simulate_nsamples 10000 \
--simulate_nbreakpoints 4 \
--recombination_file_path_prefix input_files/hg19_ACB_renamed_as_custom/custom_chr_
```
Custom recombination rate files are to be named and organized as follows:
- The recombination map must be a single folder (named `hg19_ACB_renamed_as_custom` in the example above) with one gzipped tab-separated dataframe per chromosome.
- Every gzipped tab-separated dataframe must be named as `prefix_chr_1.txt.gz`, then `prefix_chr_2.txt.gz` all the way through `prefix_chr_22.txt.gz` (`prefix` is named `custom_chr` in the example above).
- the `.txt.gz` files must actually be gzipped (a plain-text file merely renamed with a `.txt.gz` extension will not work).
- Each chromosome's recombination map file must contain two tab separated columns named `Position(bp)` and `Map(cM)`.
The `Position(bp)` column in each chromosome's recombination map is to be formatted as follows:
- The i<sup>th</sup> row of "Position(bp)" contains the genomic position of the left boundary for the i<sup>th</sup> genomic interval.
- The i<sup>th</sup> row of "Position(bp)" is also the genomic position of the right boundary for the (i-1)<sup>th</sup> genomic interval.
- As such, the last row of "Position(bp)" is only a right boundary, and the first row is only a left boundary.
- Genomic positions must increase monotonically from top to bottom.
The `Map(cM)` column in each chromosome's recombination map is to be formatted as follows:
- The i<sup>th</sup> value of "Map(cM)" is the cumulative recombination rate from the first position to the i<sup>th</sup> position in CentiMorgans.
- In other words, the recombination rate of the interval between any two rows a and b (with b below a) must equal the Map(cM) value at row b minus the Map(cM) value at row a.
- As such, the cumulative Map(cM) values must increase monotonically from top to bottom.
- The value of `Map(cM)` in the first row must be 0.
An example of how this must be formatted is below (remember that there must be one per chromosome, and they must all be gzipped):
```shell
Position(bp) Map(cM)
16050114 0.0
16058757 0.01366
16071986 0.03912
16072580 0.04013
16073197 0.04079
```
## :apple: Simulate genotype data with phenotype associations :green_apple:
Given at least one set of one or more SNPs, REGENS can simulate a correlation between each set of SNPs and a binary or continuous phenotype.
Different genotype encodings can be applied:
- Normally, if A is the major allele and a is the minor allele, then (AA = 0, Aa = 1, and aa = 2). However, you can _Swap_ the genotype values so that (AA = 2, Aa = 1, and aa = 0).
- You can further transform the values so that they reflect no effect (I), a dominance effect (D), a recessive effect (R), a heterozygous only effect (He), or a homozygous only effect (Ho).
The table below shows how each combination of one step 1 function (columns) and one step 2 function (rows) transforms the original (AA = 0, Aa = 1, and aa = 2) values.
| Input = {0, 1, 2} |Identity (I) | Swap |
|----------------------|-------------|-----------|
|Identity (I) | {0, 1, 2} | {2, 1, 0} |
|Dominance (D) | {0, 2, 2} | {2, 2, 0} |
|Recessive (R) | {0, 0, 2} | {2, 0, 0} |
|Heterozygous only (He)| {0, 2, 0} | {0, 2, 0} |
|Homozygous only (Ho) | {2, 0, 2} | {2, 0, 2} |
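In code, the table amounts to composing an optional _Swap_ with one of five map functions. The following sketch is our own rendering of that table (not REGENS internals), using the map names that appear in the `SNP_phenotype_map.txt` files shown below:

```python
def swap(g):
    """Swap the major/minor assignment: (AA=0, Aa=1, aa=2) -> (AA=2, Aa=1, aa=0)."""
    return 2 - g

def snp_phenotype_map(g, kind):
    """Apply one of the five SNP-phenotype map functions to a genotype in {0, 1, 2}."""
    return {
        'regular':           g,                   # identity (I)
        'dominant':          2 if g >= 1 else 0,  # D: any minor allele counts fully
        'recessive':         2 if g == 2 else 0,  # R: only homozygous minor counts
        'heterozygous_only': 2 if g == 1 else 0,  # He
        'homozygous_only':   2 if g != 1 else 0,  # Ho
    }[kind]
```

For example, a swapped recessive SNP maps the genotypes (0, 1, 2) to (2, 0, 0), matching the R row of the Swap column above.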
### Example 1: a simple additive model :arrow_lower_right:
A full command for REGENS to simulate genomic data with correlated phenotypes would be formatted as follows:
```shell
python -m regens \
--in input_files/ACB --out ACB_simulated \
--simulate_nsamples 10000 --simulate_nbreakpoints 4 \
--phenotype continuous --mean_phenotype 5.75 \
--population_code ACB --human_genome_version hg19 \
--causal_SNP_IDs_path input_files/causal_SNP_IDs.txt \
--noise 0.5 --betas_path input_files/betas.txt
```
This command simulates genotype-phenotype correlations according to the following model.
Let _y_ be an individual's phenotype, let s<sub>i</sub> be the i<sup>th</sup> genotype to influence the value of _y_ such that (AA = 0, Aa = 1, and aa = 2), and let _B_ be the bias term. The goal is to simulate the following relationship between genotypes and phenotype:
y = 0.5s<sub>1</sub> + 0.5s<sub>2</sub> + 0.5s<sub>3</sub> + B + ε
where ε ~ N(μ = 0, σ<sub>ε</sub> = 0.5E[y]) and E[y] = 5.75.
Notice that the values of σ<sub>ε</sub> and E[y] are determined by the `--noise` and `--mean_phenotype` arguments.
<!-- h<sub>θ</sub>(x) = θ<sub>o</sub> x + θ<sub>1</sub>x -->
<!-- <img src="https://render.githubusercontent.com/render/math?math=y = 0.2s_1 %2B 0.2s_2 %2B 0.2s_3 %2B B %2B \epsilon"> -->
<!-- <img src="https://render.githubusercontent.com/render/math?math=\epsilon ~ N(\mu = 0, \sigma_{\epsilon} = 0.5E[y])"> -->
<!-- <img src="https://render.githubusercontent.com/render/math?math=E[y] = 5.75"> -->
The following files, formatted as displayed below, must exist in your working directory.
`input_files/causal_SNP_IDs.txt` contains newline seperated SNP IDs from the input bim file `input_files/ACB.bim`:
```
rs113633859
rs6757623
rs5836360
```
`input_files/betas.txt` contains one (real numbered) beta coefficient for each row in `input_files/causal_SNP_IDs.txt`:
```
0.5
0.5
0.5
```
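Under the hood, a model like this can be sketched in a few lines. The function below is our own illustration of how the pieces plausibly fit together, not REGENS source code: the bias term B is chosen so the average phenotype lands on `--mean_phenotype`, and the noise standard deviation is `--noise` times the mean phenotype.

```python
import random

def simulate_additive_phenotypes(genotypes, betas, mean_phenotype, noise, rng=None):
    """genotypes: one row of {0, 1, 2} values per individual (one per causal SNP)."""
    rng = rng or random.Random(0)
    # Weighted sum of genotype values per individual: 0.5*s1 + 0.5*s2 + 0.5*s3.
    genetic = [sum(b * g for b, g in zip(betas, row)) for row in genotypes]
    # Choose the bias B so that E[y] equals the requested mean phenotype.
    bias = mean_phenotype - sum(genetic) / len(genetic)
    sigma = noise * mean_phenotype  # noise is specified as a fraction of E[y]
    return [g + bias + rng.gauss(0.0, sigma) for g in genetic]
```

With `noise` set to 0, every simulated phenotype is exactly its genetic component plus B, so the sample mean equals the requested 5.75.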
### Example 2: inclusion of nonlinear single-SNP effects :arrow_heading_down:
```shell
python -m regens \
--in input_files/ACB --out ACB_simulated \
--simulate_nbreakpoints 4 --simulate_nsamples 10000 \
--phenotype continuous --mean_phenotype 5.75 \
--population_code ACB --human_genome_version hg19 --noise 0.5 \
--causal_SNP_IDs_path input_files/causal_SNP_IDs.txt \
--major_minor_assignments_path input_files/major_minor_assignments.txt \
--SNP_phenotype_map_path input_files/SNP_phenotype_map.txt \
--betas_path input_files/betas.txt
```
In addition to the notation from the first example, let S<sub>i</sub> = _swap_(s<sub>i</sub>) be the i<sup>th</sup> genotype to influence the value of _y_ such that (AA = 2, Aa = 1, and aa = 0). We also use the four nontrivial map functions (R, D, He, Ho) defined before the first example. The second example models phenotypes as follows:
y = 0.5s<sub>1</sub> + 0.5R(S<sub>2</sub>) + 0.5He(s<sub>3</sub>) + B + ε
where ε ~ N(μ = 0, σ<sub>ε</sub> = 0.5E[y]) and E[y] = 5.75.
<!-- <img src="https://render.githubusercontent.com/render/math?math=y = 0.2R(s_2) %2B 0.2D(s_3) %2B 0.2S_6 %2B B %2B \epsilon"> -->
<!-- <img src="https://render.githubusercontent.com/render/math?math=\epsilon ~ N(\mu = 0, \sigma_{\epsilon} = 0.5E[y])">
<img src="https://render.githubusercontent.com/render/math?math=E[y] = 5.75"> -->
Specifying the (AA = 2, Aa = 1, and aa = 0) encoding for one or more SNPs is optional and requires `input_files/major_minor_assignments.txt`.
```
0
1
0
```
Specifying the second genotype's recessive effect (AA = 0, Aa = 0, and aa = 2) and the third genotype's heterozygous-only effect (AA = 0, Aa = 2, and aa = 0) is optional and requires `input_files/SNP_phenotype_map.txt`.
```
regular
recessive
heterozygous_only
```
### Example 3: inclusion of epistatic effects :twisted_rightwards_arrows:
REGENS models epistasis between an arbitrary number of SNPs as the product of transformed genotype values in an individual.
```shell
python -m regens \
--in input_files/ACB --out ACB_simulated \
--simulate_nbreakpoints 4 --simulate_nsamples 10000 \
--phenotype continuous --mean_phenotype 5.75 \
--population_code ACB --human_genome_version hg19 --noise 0.5 \
--causal_SNP_IDs_path input_files/causal_SNP_IDs2.txt \
--major_minor_assignments_path input_files/major_minor_assignments2.txt \
--SNP_phenotype_map_path input_files/SNP_phenotype_map2.txt \
--betas_path input_files/betas.txt
```
y = 0.5s<sub>1</sub> + 0.5D(s<sub>2</sub>)s<sub>3</sub> + 0.5Ho(S<sub>4</sub>)s<sub>5</sub>s<sub>5</sub> + B + ε
where ε ~ N(μ = 0, σ<sub>ε</sub> = 0.5E[y]) and E[y] = 5.75.
<!-- <img src="https://render.githubusercontent.com/render/math?math=y = 0.2s_1 %2B 0.2s_2R(s_3) %2B 0.2Ho(S_4)s_5s_5 %2B B %2B \epsilon"> -->
<!-- <img src="https://render.githubusercontent.com/render/math?math=\epsilon ~ N(\mu = 0, \sigma_{\epsilon} = 0.5E[y])">
<img src="https://render.githubusercontent.com/render/math?math=E[y] = 5.75"> -->
Specifying that multiple SNPs interact (or that rs62240045 has a polynomial effect) requires placing all participating SNPs in the same tab-separated line of `input_files/causal_SNP_IDs2.txt`.
```
rs11852537
rs1867634 rs545673871
rs2066224 rs62240045 rs62240045
```
For each row containing one or more SNP IDs, `input_files/betas.txt` contains a corresponding beta coefficient. (Giving each SNP that participates in a multiplicative interaction its own beta coefficient would be pointless).
```
0.5
0.5
0.5
```
As before, `input_files/major_minor_assignments2.txt` and `input_files/SNP_phenotype_map2.txt` are both optional.
If they are not specified, then all genotypes of individual SNPs will have the standard (AA = 0, Aa = 1, and aa = 2) encoding.
For each SNP ID, `input_files/major_minor_assignments2.txt` specifies whether or not that SNP's untransformed genotypes follow the (AA = 2, Aa = 1, and aa = 0) encoding.
CAUTION: In general, if a SNP appears in two different effects, then it may safely have different major/minor assignments in different effects.
However, if a SNP appears twice in the same effect, then make sure it has the same major/minor assignment within that effect, or that effect may equal 0 depending on the map functions that are used on the SNP.
```
0
0 0
1 0 0
```
For each SNP ID, `input_files/SNP_phenotype_map2.txt` specifies whether the SNP's effect is regular, recessive, dominant, heterozygous_only, or homozygous_only.
```
regular
dominant regular
homozygous_only regular regular
```
In the context of an epistatic interaction, the _Swap_ function is first applied to the SNPs specified by `input_files/major_minor_assignments.txt`; then the map functions specified by `input_files/SNP_phenotype_map.txt` are applied to their respective SNPs. The transformed genotypes of SNPs in the same row of `input_files/causal_SNP_IDs.txt` are multiplied together and by the corresponding beta coefficient in `input_files/betas.txt`. Each individual's phenotype is then the sum of the row products, the bias, and the random noise term.
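Concretely, that combination rule can be sketched as follows. This is our own illustration (with the noise passed in as an already-realized value for clarity, not drawn inside the function); the map functions mirror the encoding table from earlier:

```python
MAPS = {
    'regular':           lambda g: g,
    'dominant':          lambda g: 2 if g >= 1 else 0,
    'recessive':         lambda g: 2 if g == 2 else 0,
    'heterozygous_only': lambda g: 2 if g == 1 else 0,
    'homozygous_only':   lambda g: 2 if g != 1 else 0,
}

def phenotype(rows, betas, bias=0.0, noise=0.0):
    """rows: one (genotypes, swaps, map_names) tuple per line of the causal SNP file."""
    total = bias + noise
    for (genos, swaps, names), beta in zip(rows, betas):
        product = beta
        for g, s, name in zip(genos, swaps, names):
            g = 2 - g if s else g       # Swap is applied first ...
            product *= MAPS[name](g)    # ... then the SNP-phenotype map
        total += product                # row products are summed
    return total
```

For instance, with the Example 3 files and genotypes (s<sub>1</sub>, …, s<sub>5</sub>) = (1, 2, 1, 0, 2), the three row products are 0.5·1, 0.5·D(2)·1, and 0.5·Ho(Swap(0))·2·2, i.e. 0.5 + 1.0 + 4.0.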
## REGENS simulates nearly flawless GWAS data :100:
The Triadsim algorithm simulates LD patterns that are almost indistinguishable from those of the input dataset. REGENS uses Triadsim's method of recombining genomic segments to simulate equally realistic data, and we measured it to be 88.5 times faster (95% CI: [75.1, 105.0]) and to require 6.2 times less peak RAM (95% CI: [6.04, 6.33]) than Triadsim on average. The following three figures show that REGENS nearly perfectly replicates the input dataset's LD pattern.
1. For the 1000 genomes project's ACB population, this figure compares (right) every SNP's real maf against its simulated maf and (left) every SNP pair's real genotype Pearson correlation coefficient against its simulated genotype Pearson correlation coefficient for SNP pairs less than 200 kilobases apart. 100000 simulated individuals were used (please note that the analogous figures in the correctness testing folders were made with only 10000 samples, so they have slightly more noticeable random noise than the figure displayed directly below).

2. For the 1000 genomes project's ACB population, this figure plots SNP pairs' absolute r values against their distance apart (up to 200 kilobases) for both real and simulated populations. More specifically, SNP pairs were sorted by their distance apart and separated into 4000 adjacent bins, so each datapoint plots one bin's average absolute r value against its average position. 100000 simulated individuals were used.

3. These figures compare t-SNE plots of the first 10 principal components for real and simulated 1000 genomes subpopulations that are members of the AFR superpopulation. Principal components were computed from all twenty-six 1000 genomes population datasets, and the loadings were used to project the simulated individuals onto the PC space. These results demonstrate that REGENS replicates the input data's overall population structure in simulated datasets. 10000 simulated individuals were used.

## Technical details :robot:
Details regarding the reason to use one of the 1000 genomes populations' recombination maps for your genotype data [(input that we provided)](https://github.com/EpistasisLab/regens/blob/main/README.md#input-turkey) are in the third paragraph.
REGENS repeats the following process for each chromosome. Each simulated chromosome begins as a set of SNPs without genotypes, which is demarcated into segments by breakpoints. Breakpoint positions are drawn from the distribution of recombination event positions, which is computed from the chromosome's recombination map for the selected population. Once an empty chromosome is segmented by breakpoints, the row indices of whole-genome bed file rows from a real dataset are duplicated so that 1) there is one real individual for each empty segment and 2) every real individual is selected an equal number of times (minus 1 for each remainder sample if the number of segments is not divisible by the number of individuals). Then, for each empty segment, a whole chromosome is randomly selected without replacement from the set of autosomal genotypes that correspond to the duplicated indices, and the empty simulated segment is filled with the homologous segment from the sampled real chromosome. These steps are repeated for every empty simulated segment in every chromosome so that all of the empty simulated genomes are filled with real SNP values. This quasirandom selection of individuals minimizes maf variation between the simulated and real datasets and also maintains normal population-level genetic variability by randomizing segment selection.
Even though the randomly selected segments are independent from one another, the simulated dataset will contain the input dataset's LD pattern because each breakpoint location is selected with the same probability that it would demarcate a real recombination event (i.e. a real biological concatenation of two independent genomic segments). The Triadsim algorithm has used this method to simulate LD patterns that are almost indistinguishable from those of the input dataset. REGENS simulates equally realistic data, and it was measured to be 88.5 times faster and to require 6.2 times less peak RAM than Triadsim on average. REGENS is designed to easily simulate GWAS data from any of the 26 populations in the [1000 genomes project](https://www.cog-genomics.org/plink/2.0/resources), and a filtered subset of these subpopulations' genotype data is provided in the github in corresponding plink filesets. In summary, I kept a random subset of 500000 quality-control-filtered, biallelic SNPs such that every subpopulation contains at least two instances of the minor allele. [Exact thinning methods are here](https://github.com/EpistasisLab/REGENS/blob/main/thinning_methods/get_1000_genomes_files.sh).
REGENS converts output recombination rate maps from [pyrho](https://github.com/popgenmethods/pyrho) (which correspond to the twenty-six 1000 genomes populations on a one-to-one basis) into probabilities of drawing each simulated breakpoint at a specific genomic location. It is also possible to simulate GWAS data from a custom plink (bed, bim, fam) fileset or a custom recombination rate map (or both files can be custom). Note that recombination rate maps between populations within a superpopulation (i.e. British and Italian) have Pearson correlation coefficients of roughly 0.9 [(see figure 2B of the pyrho paper)](https://advances.sciencemag.org/content/advances/5/10/eaaw9206.full.pdf), so if a genotype dataset has no recombination rate map for its exact population, then the map for a closely related population should suffice.
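The conversion from a cumulative Map(cM) column to breakpoint-drawing probabilities boils down to an inverse-CDF draw: a uniform point on the total genetic length falls inside an interval with probability equal to that interval's share of the map. Below is a minimal sketch of that idea (our own simplification, not the REGENS implementation; here breakpoints snap to map boundary positions):

```python
import bisect
import random

def sample_breakpoints(positions, cumulative_cm, n_breakpoints, rng=None):
    """Draw breakpoints with probability proportional to each interval's cM share."""
    rng = rng or random.Random(0)
    total_cm = cumulative_cm[-1]
    breakpoints = []
    for _ in range(n_breakpoints):
        u = rng.uniform(0.0, total_cm)             # uniform point on the genetic map
        i = bisect.bisect_right(cumulative_cm, u)  # index of the interval it lands in
        breakpoints.append(positions[min(i, len(positions) - 1)])
    return sorted(breakpoints)
```

Intervals with high recombination rates occupy more of the [0, total cM] range and are therefore hit more often, which is why simulated breakpoints cluster at recombination hotspots.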
## Repository structure
### Folders for REGENS to download :inbox_tray:
* `correctness_testing_ACB`: A directory containing bash scripts to test code correctness on the ACB subpopulation, as well as the output for those tests. Correctness testing part 2 is optional and requires plink version 1.90Beta.
* `correctness_testing_GBR`: A directory containing bash scripts to test code correctness on the GBR subpopulation, as well as the output for those tests. Correctness testing part 2 is optional and requires plink version 1.90Beta.
* `examples`: A directory containing bash scripts that run the data simulation examples in the README.
* `hg19`: for each 1000 genomes project population, contains a folder with one gzipped recombination rate dataframe per hg19 reference human autosome.
* `hg38`: for each 1000 genomes project population, contains a folder with one gzipped recombination rate dataframe per hg38 reference human autosome.
* `input_files`: contains examples of regens input that is meant to be provided by the user. The example custom recombination rate information is copied from that of the hg19 mapped ACB population. Also contains input for the Triadsim algorithm. The genetic input labeled as "not_trio" for Triadsim is comprised of ACB population duplicates and is only meant to compare Triadsim's runtime.
* `runtime_testing_files`: A directory containing files that were used to compute runtimes, max memory usage values, and improvement ratio bootstrapped confidence intervals.
* `unit_testing_files`: A directory containing bash scripts to unit test code correctness on the ACB subpopulation, as well as the output for those tests.
### Folders in the repository :file_cabinet:
* `images`: contains figures that are either displayed or linked to in this github README
* `paper`: A directory containing the paper's md file, bib file, and figure
* `thinning_methods`: All code that was used to select 500000 SNPs from the 1000 genomes project's genotype data
### Files :file_folder:
* `regens.py`: the main file that runs the regens algorithm
* `regens_library.py`: functions that the regens algorithm uses repeatedly.
* `regens_testers.py`: functions used exclusively for correctness testing and unit testing
* `setup.py` and `_init_.py`: allows regens to be installed with pip
* `requirements.txt`: lists REGENS' dependencies
## Remedies to known permission issues :adhesive_bandage:
Try these steps if you had permissions issues with the [final installation step](https://github.com/EpistasisLab/regens/blob/main/README.md#instructions-to-installing-regens-hammer_and_wrench):
1. Right click the "Anaconda Prompt" app (left), then click ```run as administrator```. Reinstalling conda in a different directory may fix this issue permanently.
2. Your antivirus software might block a file that Anaconda needs (Avast blocked Miniconda's python.exe for us). Try seeing if your antivirus software is blocking anything related to anaconda, and then allow it to stop blocking that file. You could also turn off your antivirus software, though we do not recommend this.
3. In the worst case, you can download all of the required files with these links:
1. [input_files](https://ndownloader.figshare.com/files/25515740)
    2. [correctness_testing_ACB, correctness_testing_GBR, examples, runtime_testing_files, unit_testing_files](https://ndownloader.figshare.com/files/25516322)
3. [hg19 and hg38](https://ndownloader.figshare.com/articles/13210796/versions/1)
Download the three folders containing the aforementioned 8 folders and unzip all folders (only the _folders_, keep the _recombination maps_ in hg19 and hg38 zipped). Then place everything in your working directory and run REGENS from your working directory. You should now be ready to use REGENS in your working directory if you have completed the installation steps.
## Contributing :thumbsup:
If you find any bugs or have any suggestions/questions, please feel free to [post an issue](https://github.com/EpistasisLab/regens/issues/new)!
Please refer to our [contribution guide](CONTRIBUTING.md) for more details.
Thanks for your support!
## License
MIT + file LICENSE
__version__ = '1.0.1'
class RegexBuild(object):
"""Build extremely complex one-line regex strings.
This can be used as a context manager, or in-line as part of other instances.
Example:
>>> original_regex = r'.*\\AppData\\Roaming\\(Microsoft|NVIDIA|Adobe\\.*(CT Font |FontFeature|Asset|Native)Cache)\\'
>>> with RegexBuild(r'.*\\AppData\\Roaming\\') as build_regex:
... build_regex('Microsoft', 'NVIDIA',
... RegexBuild(r'Adobe\\.*')('CT Font ', 'FontFeature', 'Asset', 'Native')('Cache'),
... )(r'\\')
>>> original_regex == str(build_regex)
True
"""
def __init__(self, *text, **kwargs):
        """Set up regex matches.
Parameters:
text (str or RegexBuild): Individual regex matches.
exit (str or RegexBuild): Add regex after all the matches.
                A common example is RegexBuild('(?i)', exit='$') for
case insensitive matching.
"""
self.parent = kwargs.get('_parent')
self.data = list(text)
self.exit = kwargs.get('exit', None)
self.count = len(self.data) - 1
# Add data to parent
if self.parent is not None:
try:
self.parent.data[kwargs.get('_count', 0)].append(self.data)
except IndexError:
self.parent.data.append(self.data)
# Add exit data
if self.exit is not None:
self.data.append([])
self.data.append(self.exit)
def __repr__(self):
return '{}({})'.format(self.__class__.__name__, self.data)
def __call__(self, *text, **kwargs):
"""Create a new match group.
This can work inline or as a context manager.
Example:
#>>> RegexBuild('a')('b', 'c')
'a(b|c)'
#>>> with RegexBuild('a') as build:
... build('b', 'c')
#>>> build
'a(b|c)'
"""
self.count += 1
# Force the exit regex to be placed at the end of the data
if self.exit is not None:
data = self.data.pop(-1)
while True:
try:
self.data[self.count]
except IndexError:
self.data.append([])
else:
break
self.data.append(data)
return self.__class__(*text, _parent=self, _count=self.count, **kwargs)
def __enter__(self):
return self
def __exit__(self, *args):
if any(args):
return False
def __str__(self, data=None):
"""Convert the data to valid regex."""
# If the parent exists, then get the full path
# Without this, RegexBuild('a')('b')('c') will only return 'c'
if self.parent is not None:
return str(self.parent)
if data is None:
data = self.data
# Split the data into 3 parts
prefix = []
body = []
suffix = []
current = prefix
for offset, item in enumerate(data):
if isinstance(item, list):
body.append(self.__str__(item))
current = suffix
else:
current.append(item)
if prefix:
if len(prefix) > 1:
prefix = '({})'.format('|'.join(map(str, prefix)))
else:
prefix = prefix[0]
else:
prefix = ''
if suffix:
if len(suffix) > 1:
suffix = '({})'.format('|'.join(map(str, suffix)))
else:
suffix = suffix[0]
else:
suffix = ''
if body:
if len(body) == 1:
return '{}{}{}'.format(prefix, body[0], suffix)
return '{}({}){}'.format(prefix, '|'.join(map(str, body)), suffix)
return prefix
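# Standalone sketch (not part of this module, for illustration only) of the
# grouping rule __str__ applies to each section: a single entry is emitted
# as-is, while multiple entries are joined with '|' and wrapped in parentheses.
def group(parts):
    # Empty sections contribute nothing to the final regex.
    if not parts:
        return ''
    if len(parts) > 1:
        return '({})'.format('|'.join(map(str, parts)))
    return parts[0]

prefix = group(['a'])
body = group(['b', 'c'])
print(prefix + body)  # expected: a(b|c)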
# regex_cleansing
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin https://git.clever-touch.com/data-and-insights/shared-tools/tools/regex_cleansing.git
git branch -M main
git push -uf origin main
```
## Integrate with your tools
- [ ] [Set up project integrations](https://git.clever-touch.com/data-and-insights/shared-tools/tools/regex_cleansing/-/settings/integrations)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Automatically merge when pipeline succeeds](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing(SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thank you to [makeareadme.com](https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people to using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Support
Tell people where they can go to for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
import curses
import curses.ascii
import math
import time
import typing
from .level import Level
from .utils import Coordinate, popup_message
class Game:
"""
Class initialized with a level and handles all the functionality of actually playing it.
"""
def __init__(self, level: Level):
self.level = level
self.matrix = (
level.create_matrix()
) # Create the matrix from the level so it'll be compatible.
self.matrix_cursor_pos = Coordinate(0, 0) # Store the current position on the matrix.
self.window_legend = None
self.window_legend_alt = None
self.window_game = None
self.window_title_bar = None
self.start_time = None
def _create_legend_window(self, legend_str: str, *, position: Coordinate = None):
"""
Create a window containing the given string at the given position.
:param legend_str: the string to be presented.
:type legend_str: str
:param position: position on the board, defaults to None
:type position: Coordinate, optional
:return: the created curses window.
"""
if not legend_str:
return
if position is None:
position = Coordinate(0, math.ceil(curses.COLS / 2))
legend_str_split = legend_str.splitlines()
legend_width = len(max(legend_str_split, key=len)) + 1
legend_height = len(legend_str_split)
window_legend = curses.newwin(legend_height, legend_width, position.row, position.col)
window_legend.addstr(legend_str)
window_legend.refresh()
return window_legend
def _create_title_bar(self):
"""
Create a titlebar window containing the title string.
:return: the created curses window.
"""
window_title_bar = curses.newwin(1, curses.COLS - 1)
window_title_bar.addstr(self.level.title)
window_title_bar.refresh()
return window_title_bar
def _clear_windows(self) -> None:
"""
Clear all the windows belonging to the game.
:return: none.
:rtype: None
"""
if self.window_game is not None:
self.window_game.clear()
self.window_game.noutrefresh()
if self.window_legend is not None:
self.window_legend.clear()
self.window_legend.noutrefresh()
if self.window_legend_alt is not None:
self.window_legend_alt.clear()
self.window_legend_alt.noutrefresh()
if self.window_title_bar is not None:
self.window_title_bar.clear()
self.window_title_bar.noutrefresh()
curses.doupdate()
def _init_windows(self) -> None:
"""
Initialize all the various game related windows, clearing any existing ones first.
:return: none.
:rtype: None
"""
self._clear_windows()
self.window_title_bar = self._create_title_bar()
offset_y = 1
legend_position_x = math.ceil(curses.COLS / 2)
self.window_legend = self._create_legend_window(
self.level.format_utd_ltr_regexes(),
position=Coordinate(offset_y, legend_position_x),
)
self.window_legend_alt = self._create_legend_window(
self.level.format_dtu_rtl_regexes(),
position=Coordinate(
offset_y, legend_position_x + self.window_legend.getmaxyx()[1]
),
)
self.window_game = curses.newwin(
self.matrix.str_height, self.matrix.str_width + 1, offset_y, 0
)
self.window_game.keypad(True)
self._redraw_game()
self.window_game.move(
2 + (2 * self.matrix_cursor_pos.row), 4 + (4 * self.matrix_cursor_pos.col)
)
def _redraw_game(self) -> None:
"""
Redraw the game window (aka the matrix).
:return: none.
:rtype: None
"""
cur_pos_y, cur_pos_x = self.window_game.getyx()
self.window_game.move(0, 0)
self.window_game.addstr(str(self.matrix))
self.window_game.refresh()
self.window_game.move(cur_pos_y, cur_pos_x)
def _handle_input(self, char: int) -> bool:
"""
Handle the given character input:
- If it's an arrow key, move the cursor position accordingly.
- If it's ENTER, try to validate the matrix against the level.
- If it's a screen resize, redraw all the windows in their new position.
- If it's any printable character, store them in the matrix.
:param char: the character (int value) to handle.
:type char: int
:return: True if the game is finished (matrix validated successfully), False otherwise.
:rtype: bool
"""
cur_pos_y, cur_pos_x = self.window_game.getyx()
try:
if char == curses.KEY_RIGHT:
if self.matrix_cursor_pos.col + 1 >= self.matrix.columns:
raise IndexError('Cursor got off the matrix')
self.window_game.move(cur_pos_y, cur_pos_x + 4)
self.matrix_cursor_pos.col += 1
elif char == curses.KEY_LEFT:
if self.matrix_cursor_pos.col - 1 < 0:
raise IndexError('Cursor got off the matrix')
self.window_game.move(cur_pos_y, cur_pos_x - 4)
self.matrix_cursor_pos.col -= 1
elif char == curses.KEY_DOWN:
if self.matrix_cursor_pos.row + 1 >= self.matrix.rows:
raise IndexError('Cursor got off the matrix')
self.window_game.move(cur_pos_y + 2, cur_pos_x)
self.matrix_cursor_pos.row += 1
elif char == curses.KEY_UP:
if self.matrix_cursor_pos.row - 1 < 0:
raise IndexError('Cursor got off the matrix')
self.window_game.move(cur_pos_y - 2, cur_pos_x)
self.matrix_cursor_pos.row -= 1
elif char in (curses.KEY_ENTER, curses.ascii.NL):
if self.level.check_matrix(self.matrix):
return True
elif char == curses.KEY_RESIZE:
curses.update_lines_cols()
self._init_windows()
elif curses.ascii.isprint(char):
self.matrix[self.matrix_cursor_pos.row][self.matrix_cursor_pos.col] = chr(
char
).upper()
self._redraw_game()
except Exception:
pass
return False
def _finished_level(self) -> None:
"""
Pop up a congratulations message for finishing the level.
:return: none.
:rtype: None
"""
success_text = f'Success! You\'ve finished "{self.level.title}" after {round(time.time() - self.start_time, 2)} seconds!\nPress {{ENTER}} to continue...'
success_offset_y = int(curses.LINES * (1 / 5))
success_offset_x = int(curses.COLS * (1 / 5))
success_position = Coordinate(success_offset_y, success_offset_x)
exit_keys = (curses.KEY_ENTER, curses.ascii.NL)
popup_message(success_text, success_position, exit_keys)
def play_level(self) -> int:
"""
Initialize the game and play the level.
Return 0 if the game should be terminated, 1 to go to the next stage and -1 to go to the previous one.
:return: signal whether to quit or go to the next or previous level.
:rtype: int
"""
self._init_windows()
self.start_time = time.time()
char = self.window_game.getch()
while char != curses.ascii.ESC:
if char == curses.KEY_NPAGE:
return -1
if char == curses.KEY_PPAGE:
return 1
if self._handle_input(char):
self._finished_level()
return 1
char = self.window_game.getch()
return 0
import curses
import curses.ascii
import string
from pathlib import Path
from .game import Game
from .level_pack import LevelPack
from .utils import Coordinate, popup_message
INTRO = '''Welcome to the Regex Crossword!
You can interact with any text on this screen by pressing the corresponding {KEY}.
If you need any help, please press {?}.
Choose your level pack:
''' # The first text on the selection screen.
HELP_TEXT = '''Hello and welcome to the Regex Crossword!
The game goes like this:
You will be presented with a grid for you to fill with characters.
There will also be definitions telling you how to fill the grid. Just like an ordinary crossword!
The twist? Each "definition" is actually a Regular Expression that will try to match against the respective row or column.
Your goal? Fill the entire grid so that every regex will match its corresponding string!
KEYS:
Navigate the grid using the {ARROW KEYS}. When you think you're done, press {ENTER} to validate yourself!
If your input was valid, you will move on to the next level in the level pack.
Also, you can navigate back and forth between levels in your chosen level pack by pressing {PAGE_DOWN} and {PAGE_UP} respectively.
You can go back from any screen (including this help or the main selection) by pressing {ESCAPE}.
On another note, all regexes are computed in real time when you try to validate your input.
There is no predefined answer so if you fail to validate a level be sure to check the regexes again!
Happy regexing!
''' # The help text, explaining the game and keys to the player.
class Crossword:
"""
Class unifying all the logic needed to fully play the game, from loading the levels to playing them.
"""
def __init__(self, level_packs_path: Path):
self.pack_id_pairs = {
pack_id: Path(pack_path)
for pack_id, pack_path in zip(
string.digits + string.ascii_letters, sorted(level_packs_path.iterdir())
)
} # Dict mapping between an arbitrary id (to allow easy selection for the user) and the actual pack path.
self.selection_screen_str = INTRO + '\n'.join(
f'{{{i}}} {path.stem}' for i, path in self.pack_id_pairs.items()
) # The entire selection screen as a concatenated string.
self.help_str = HELP_TEXT # The entire help text as a concatenated string.
self.stdscr = None
def _display_help(self) -> None:
"""
Pop up the help message to the screen.
:return: none.
:rtype: None
"""
help_offset_y = int(curses.LINES * (1 / 5))
help_offset_x = int(curses.COLS * (1 / 7))
help_position = Coordinate(help_offset_y, help_offset_x)
exit_keys = [curses.ascii.ESC]
popup_message(self.help_str, help_position, exit_keys)
def _handle_input(self, char: int) -> None:
"""
Handle the given character input:
- If it's a valid pack id character, initialize that pack and play each level.
- If it's the special character '?', display the help message.
:param char: the character (int value) to handle.
:type char: int
:return: none.
:rtype: None
"""
if chr(char) in self.pack_id_pairs:
pack = LevelPack(self.pack_id_pairs[chr(char)])
i = 0
while 0 <= i < len(pack):
self.stdscr.clear()
self.stdscr.refresh()
g = Game(pack[i])
ret_val = g.play_level()
if ret_val == 0:
break
i += ret_val
elif chr(char) == '?':
self._display_help()
def mainloop(self) -> None:
"""
Start a curses instance, display the main selection screen and handle each input until ESCAPE is pressed.
:return: none.
:rtype: None
"""
def _mainloop(stdscr) -> None:
self.stdscr = stdscr
self.stdscr.addstr(self.selection_screen_str)
self.stdscr.refresh()
curses.curs_set(0)
char = self.stdscr.getch()
while char != curses.ascii.ESC:
curses.curs_set(1)
self._handle_input(char)
self.stdscr.clear()
self.stdscr.addstr(self.selection_screen_str)
self.stdscr.refresh()
curses.curs_set(0)
char = self.stdscr.getch()
curses.wrapper(_mainloop)
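# Standalone sketch (not part of this module, for illustration only) of how
# Crossword.pack_id_pairs assigns single-character ids: zip pairs '0'-'9',
# then 'a'-'z' and 'A'-'Z', with the sorted pack entries, stopping at
# whichever sequence runs out first.
import string

packs = sorted(['expert', 'beginner', 'intermediate'])
pack_ids = dict(zip(string.digits + string.ascii_letters, packs))

print(pack_ids)  # expected: {'0': 'beginner', '1': 'expert', '2': 'intermediate'}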
import itertools
import re
import typing
from .matrix import Matrix
LevelDataType = typing.Dict[str, typing.Union[str, typing.List[str]]]
class Level:
"""
Class that manages the static data of a level.
"""
def __init__(self, level_data: LevelDataType):
self.title = level_data.get('title')
self.up_to_down_regexes = [
re.compile(regex) for regex in level_data.get('up_to_down', [])
]
self.down_to_up_regexes = [
re.compile(regex) for regex in level_data.get('down_to_up', [])
]
self.left_to_right_regexes = [
re.compile(regex) for regex in level_data.get('left_to_right', [])
]
self.right_to_left_regexes = [
re.compile(regex) for regex in level_data.get('right_to_left', [])
]
def create_matrix(self) -> Matrix:
"""
Create a matrix corresponding in height and width to the level.
:return: the created matrix
:rtype: Matrix
"""
return Matrix(
len(max([self.left_to_right_regexes, self.right_to_left_regexes], key=len)),
len(max([self.up_to_down_regexes, self.down_to_up_regexes], key=len)),
)
def check_matrix(self, mat: Matrix) -> bool:
"""
Check if a given matrix has been validated against all regexes.
:param mat: the matrix to validate.
:type mat: Matrix
:return: True if the matrix has been validated successfully, False otherwise.
:rtype: bool
"""
matrix_expected_row_len = len(
max([self.left_to_right_regexes, self.right_to_left_regexes], key=len)
)
matrix_row_strings = [
''.join(mat[i][j] for j in range(mat.columns)) for i in range(mat.rows)
]
if matrix_expected_row_len != len(matrix_row_strings):
raise ValueError(
f'Matrix with {len(matrix_row_strings)} rows is incompatible with level of {matrix_expected_row_len} rows.'
)
matrix_expected_column_len = len(
max([self.up_to_down_regexes, self.down_to_up_regexes], key=len)
)
matrix_column_strings = [
''.join(mat[j][i] for j in range(mat.rows)) for i in range(mat.columns)
]
if matrix_expected_column_len != len(matrix_column_strings):
raise ValueError(
f'Matrix with {len(matrix_column_strings)} columns is incompatible with level of {matrix_expected_column_len} columns.'
)
for row, utd_regex, dtu_regex in itertools.zip_longest(
matrix_column_strings,
self.up_to_down_regexes,
self.down_to_up_regexes,
fillvalue=re.compile(''),
):
if (utd_regex.pattern and re.fullmatch(utd_regex, row) is None) or (
dtu_regex.pattern and re.fullmatch(dtu_regex, row) is None
):
return False
for row, ltr_regex, rtl_regex in itertools.zip_longest(
matrix_row_strings,
self.left_to_right_regexes,
self.right_to_left_regexes,
fillvalue=re.compile(''),
):
if (ltr_regex.pattern and re.fullmatch(ltr_regex, row) is None) or (
rtl_regex.pattern and re.fullmatch(rtl_regex, row) is None
):
return False
return True
def format_up_to_down_regexes(self) -> str:
"""
Format a string listing all the regexes that will validate the columns from top to bottom.
:return: the formatted string.
:rtype: str
"""
ret_str = 'Up -> Down:\n'
ret_str += '\n'.join(
[f'{i}: {regex.pattern}' for i, regex in enumerate(self.up_to_down_regexes)]
)
return ret_str.strip()
def format_down_to_up_regexes(self) -> str:
"""
Format a string listing all the regexes that will validate the columns from bottom to top.
:return: the formatted string.
:rtype: str
"""
ret_str = 'Down -> Up:\n'
ret_str += '\n'.join(
[f'{i}: {regex.pattern}' for i, regex in enumerate(self.down_to_up_regexes)]
)
return ret_str.strip()
def format_left_to_right_regexes(self) -> str:
"""
Format a string listing all the regexes that will validate the rows from left to right.
:return: the formatted string.
:rtype: str
"""
ret_str = 'Left -> Right:\n'
ret_str += '\n'.join(
[f'{i}: {regex.pattern}' for i, regex in enumerate(self.left_to_right_regexes)]
)
return ret_str.strip()
def format_right_to_left_regexes(self) -> str:
"""
Format a string listing all the regexes that will validate the rows from right to left.
:return: the formatted string.
:rtype: str
"""
ret_str = 'Right -> Left:\n'
ret_str += '\n'.join(
[f'{i}: {regex.pattern}' for i, regex in enumerate(self.right_to_left_regexes)]
)
return ret_str.strip()
def format_utd_ltr_regexes(self) -> str:
"""
Format both `up_to_down` and `left_to_right` together (aka the standard checks).
:return: concatenation of `up_to_down` and `left_to_right`.
:rtype: str
"""
format_list = []
if any(regex.pattern for regex in self.up_to_down_regexes):
format_list.append(self.format_up_to_down_regexes())
if any(regex.pattern for regex in self.left_to_right_regexes):
format_list.append(self.format_left_to_right_regexes())
return '\n\n'.join(format_list).strip()
def format_dtu_rtl_regexes(self) -> str:
"""
Format both `down_to_up` and `right_to_left` together (aka the alternative checks).
:return: concatenation of `down_to_up` and `right_to_left`.
:rtype: str
"""
format_list = []
if any(regex.pattern for regex in self.down_to_up_regexes):
format_list.append(self.format_down_to_up_regexes())
if any(regex.pattern for regex in self.right_to_left_regexes):
format_list.append(self.format_right_to_left_regexes())
return '\n\n'.join(format_list).strip()
def __str__(self) -> str:
format_list = []
if any(regex.pattern for regex in self.up_to_down_regexes):
format_list.append(self.format_up_to_down_regexes())
if any(regex.pattern for regex in self.down_to_up_regexes):
format_list.append(self.format_down_to_up_regexes())
if any(regex.pattern for regex in self.left_to_right_regexes):
format_list.append(self.format_left_to_right_regexes())
if any(regex.pattern for regex in self.right_to_left_regexes):
format_list.append(self.format_right_to_left_regexes())
return '\n\n'.join(format_list).strip()
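# Standalone sketch (not part of this module, for illustration only) of the
# pairing strategy used in Level.check_matrix: itertools.zip_longest with an
# empty compiled regex as fillvalue lets the rows outnumber the regexes (or
# vice versa) without raising, and the `regex.pattern and ...` guard treats
# the empty fillers as "no constraint".
import itertools
import re

rows = ['AB', 'CD', 'EF']
patterns = [re.compile('A.'), re.compile('.D')]  # fewer regexes than rows

results = []
for row, regex in itertools.zip_longest(rows, patterns, fillvalue=re.compile('')):
    # An empty pattern imposes no constraint on this row.
    matched = not regex.pattern or re.fullmatch(regex, row) is not None
    results.append(matched)

print(results)  # expected: [True, True, True]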
import json
import typing
from pathlib import Path
import bs4
from loguru import logger
from selenium import webdriver
ROOT_SITE = 'https://regexcrossword.com' # Where to scrape from.
CHALLENGES_BLACKLIST = [
'hexagonal'
] # We just don't support some freaky challenge types. Sorry.
level_dict_type = typing.Dict[str, typing.Union[str, typing.List[str]]]
pack_dict_type = typing.Dict[str, typing.Union[str, level_dict_type]]
def parse_level(content: str) -> level_dict_type:
"""
Parse a level's page content into a level_dict.
:param content: the content of the level's page.
:type content: str
:return: dict containing the title and different regexes with their orientation.
:rtype: level_dict_type
"""
soup = bs4.BeautifulSoup(content, 'html.parser')
title = soup.title.string.split('|')[0].strip()
logger.debug(f'parsing level {title}')
logger.debug('parsing up_to_down')
up_to_down = [element.text for element in soup.find('thead').find_all('span')]
logger.debug('parsing down_to_up')
down_to_up = [element.text for element in soup.find('tfoot').find_all('span')]
left_to_right = []
right_to_left = []
logger.debug('parsing tbody')
for i, element in enumerate(soup.find('tbody').find_all('span')):
if i % 2 == 0:
left_to_right.append(element.text)
else:
right_to_left.append(element.text)
return {
'title': title,
'up_to_down': up_to_down,
'left_to_right': left_to_right,
'right_to_left': right_to_left,
'down_to_up': down_to_up,
}
def parse_pack(driver: webdriver.Chrome, pack_url: str) -> pack_dict_type:
"""
Parse the various levels in a pack into a pack_dict.
:param driver: the current session webdriver.
:type driver: webdriver.Chrome
:param pack_url: main url of the pack.
:type pack_url: str
:return: dict containing the title and various levels of a pack.
:rtype: pack_dict_type
"""
logger.info(f'parsing pack {pack_url}')
levels = []
i = 1
while True:
logger.debug(f'parsing level {i}')
level_url = f'{pack_url}/{i}'
driver.get(level_url)
try:
levels.append(parse_level(driver.page_source))
except Exception:
logger.warning(f'got exception, treating pack {pack_url} as finished')
break
i += 1
return {'title': pack_url.split('/')[-2], 'levels': levels}
def get_challenge_packs(content: str) -> typing.List[str]:
"""
Return all the various challenge packs on site right now.
:param content: the content of the page holding the packs.
:type content: str
:return: list of all pack urls.
:rtype: typing.List[str]
"""
logger.info('getting challenge packs')
soup = bs4.BeautifulSoup(content, 'html.parser')
return [
element.get('href')
for element in soup.find_all('a')
if element.get('href').startswith('/challenges')
and not element.get('href').split('/')[-1] in CHALLENGES_BLACKLIST
]
def scrape(output_path: Path) -> None:
"""
Scrape the main root site's challenge packs and save them into output path.
:param output_path: where to save the packs to.
:type output_path: Path
:return: none.
:rtype: None
"""
logger.info(f'start scraping on {ROOT_SITE}')
driver = webdriver.Chrome()
driver.get(ROOT_SITE)
challenge_packs = get_challenge_packs(driver.page_source)
for i, pack_route in enumerate(challenge_packs):
pack = parse_pack(driver, f'{ROOT_SITE}{pack_route}/puzzles')
path_to_pack = Path(output_path, f'{i}_{pack["title"]}').with_suffix('.json')
path_to_pack.parent.mkdir(parents=True, exist_ok=True)
path_to_pack.write_text(json.dumps(pack['levels'], indent=4))
logger.info('done!')
import argparse
import os
from pathlib import Path
from ..crossword import Crossword
try:
from .scraper import scrape
except ImportError:
# This means the user hasn't installed the `scraper` extra, which is fine.
scrape = None
DEFAULT_LEVEL_PACKS_PATH = Path(
'level_packs'
) # Default path where level packs will be looked for.
SUCCESS = 0
FAILURE = -1
def parse_args() -> argparse.Namespace:
"""
Parse the command line arguments.
:return: the parsed command-line arguments.
:rtype: argparse.Namespace
"""
parser = argparse.ArgumentParser(description='Start the game')
game_group = parser.add_argument_group(
'Game arguments', 'Arguments that affect the way the game is loaded and played'
)
game_group.add_argument(
'--level-packs',
metavar='PATH',
type=Path,
help='Path to a directory containing the level packs',
)
scraper_group = parser.add_argument_group(
'scraper arguments', 'Arguments given to the scraper'
)
scraper_group.add_argument(
'--scrape',
default=False,
action='store_true',
help='Activate the scraper function (will override game functionality)',
)
scraper_group.add_argument(
'--output',
type=Path,
default=DEFAULT_LEVEL_PACKS_PATH,
help='Where to output the scraper data',
)
return parser.parse_args()
def game_main(level_packs_path: Path) -> None:
"""
Create a new Crossword instance and start the game's mainloop.
:param level_packs_path: path to a directory containing level packs.
:type level_packs_path: Path
:return: none.
:rtype: None
"""
cw = Crossword(level_packs_path)
try:
cw.mainloop()
except KeyboardInterrupt:
pass
finally:
print('Thank you for playing!')
def cli() -> int:
"""
Main entry point for the CLI.
:return: exit code.
:rtype: int
"""
args = parse_args()
if args.scrape:
if scrape is None:
print(
'Scraper isn\'t available.\nTry to reinstall the package using the extra requirement `[scraper]`.'
)
return FAILURE
scrape(args.output)
return SUCCESS
level_packs: Path = (
args.level_packs
if args.level_packs
else Path(os.environ.get('REGEXCW_LEVEL_PACKS', DEFAULT_LEVEL_PACKS_PATH))
)
if not level_packs.exists():
print(f'Directory {level_packs} doesn\'t exist.')
print('If you don\'t have any level packs, consider using the `--scrape` flag.')
return FAILURE
game_main(level_packs)
return SUCCESS
<!-- PROJECT SHIELDS -->
[![Contributors][contributors-shield]][contributors-url]
[![Forks][forks-shield]][forks-url]
[![Stargazers][stars-shield]][stars-url]
[![Issues][issues-shield]][issues-url]
[![MIT License][license-shield]][license-url]
[![LinkedIn][linkedin-shield]][linkedin-url]
<!-- PROJECT LOGO -->
<br />
<p align="center">
<a href="https://github.com/raj-kiran-p/regex_engine">
<img src="https://static.wixstatic.com/media/03a041_a8d70333218e4f2691fb6a30d7219923~mv2.png/v1/fill/w_200,h_200/regex-engine.png" alt="Logo" width="160" height="160">
</a>
<h3 align="center">Regex Engine</h3>
<p align="center">
An awesome package to generate regex
<br>
<a href="https://regex-engine.readthedocs.io">Read Documentation</a>
<br>
<a href="https://github.com/raj-kiran-p/regex_engine/issues">Report Bug</a>
·
<a href="https://github.com/raj-kiran-p/regex_engine/issues">Request Feature</a>
</p>
</p>
<!-- TABLE OF CONTENTS -->
## Table of Contents
* [About the Project](#about-the-project)
* [Coded With Language](#coded-with-language)
* [Getting Started](#getting-started)
* [Prerequisites](#prerequisites)
* [Installation](#installation)
* [Usage](#usage)
* [Roadmap](#roadmap)
* [Contributing](#contributing)
* [License](#license)
* [Contact](#contact)
* [Acknowledgements](#acknowledgements)
<!-- ABOUT THE PROJECT -->
## About The Project
Generating regex can sometimes be complicated. That is why we are introducing this package to help you get things done.
Supported functionalities :
- Regex Generation for Numerical Range
### What does each functionalities do?
__1. Regex Generation for Numerical Range__
Given a numerical range, generate a regex that matches any new number within that range.
_The person who motivated me to start this repository is listed in the acknowledgements._
### Coded With Language
* [Python 3.6](https://python.org)
<!-- GETTING STARTED -->
## Getting Started
Simply install the package, import it in your Python code, and call the method you need.
Look at the docstrings or source code to understand what is happening.
### Prerequisites
Python 3.6 or greater
### Installation
```sh
pip install regex-engine
```
<!-- USAGE EXAMPLES -->
## Usage
### 1. Regex Generation for Numerical Range
__You get what you give:__ if the given bounds are integers, the generated regex matches only integers; if they are floating-point numbers, it matches only floating-point numbers.
Both integer and floating-point ranges are supported, including negative ranges.
```python
from regex_engine import generator
generate = generator()
regex1 = generate.numerical_range(5,89)
regex2 = generate.numerical_range(81.78,250.23)
regex3 = generate.numerical_range(-65,12)
```
Example regex generated for 25-53
```
^([3-4][0-9]|2[5-9]|5[0-3])$
```
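As a quick sanity check, the example pattern above can be exercised with Python's built-in `re` module (this snippet is illustrative and hard-codes the example regex; the generator's actual output may differ in form):

```python
import re

# Example regex shown above for the range 25-53
pattern = re.compile(r"^([3-4][0-9]|2[5-9]|5[0-3])$")

# Numbers inside the range match...
assert pattern.match("25")
assert pattern.match("40")
assert pattern.match("53")
# ...and numbers just outside it do not.
assert not pattern.match("24")
assert not pattern.match("54")
```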
The regex might not be optimal, but it will surely serve the purpose.
Of course, checking whether a number is within a range is trivial without regex:
`if a <= your_input_number <= b` solves the same problem directly.
We dedicate this method to the people who are pursuing a different path or thinking outside the box.
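For contrast, the plain-comparison approach mentioned above can be sketched as follows (the helper name `in_range` is hypothetical, not part of this package):

```python
def in_range(value: float, low: float, high: float) -> bool:
    """Plain-Python range check -- no regex needed."""
    return low <= value <= high

assert in_range(40, 25, 53)
assert not in_range(54, 25, 53)
```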
<!-- ROADMAP -->
## Roadmap
See the [open issues](https://github.com/raj-kiran-p/regex_engine/issues) for a list of proposed features (and known issues).
<!-- CONTRIBUTING -->
## Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
<!-- LICENSE -->
## License
Distributed under the MIT License. See `LICENSE` for more information.
<!-- CONTACT -->
## Contact
Raj Kiran P - [@raj_kiran_p](http://www.twitter.com/raj_kiran_p) - rajkiranjp@gmail.com
GitHub : [https://github.com/raj-kiran-p](https://github.com/raj-kiran-p)
Website : [https://rajkiranp.com](https://rajkiranp.com)
<!-- ACKNOWLEDGEMENTS -->
## Acknowledgements
* [Ashwin Rajeev](https://github.com/ashwin-rajeev)
<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[contributors-shield]: https://img.shields.io/github/contributors/raj-kiran-p/regex_engine?style=flat-square
[contributors-url]: https://github.com/raj-kiran-p/regex_engine/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/raj-kiran-p/regex_engine?style=flat-square
[forks-url]: https://github.com/raj-kiran-p/regex_engine/network/members
[stars-shield]: https://img.shields.io/github/stars/raj-kiran-p/regex_engine?style=flat-square
[stars-url]: https://github.com/raj-kiran-p/regex_engine/stargazers
[issues-shield]: https://img.shields.io/github/issues/raj-kiran-p/regex_engine?style=flat-square
[issues-url]: https://github.com/raj-kiran-p/regex_engine/issues
[license-shield]: https://img.shields.io/github/license/raj-kiran-p/regex_engine?style=flat-square
[license-url]: https://github.com/raj-kiran-p/regex_engine/blob/master/LICENSE.txt
[linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=flat-square&logo=linkedin&colorB=555
[linkedin-url]: https://linkedin.com/in/rajkiranjp
| /regex_engine-1.1.0.tar.gz/regex_engine-1.1.0/README.md | 0.47926 | 0.680038 | README.md | pypi |
from typing import List, Tuple
from .inference import Filter
from .inference import Engine
class Evaluator:
@staticmethod
def evaluate_regex_list(
regex_list: List[str], patterns: List[str]) -> Tuple[float, float, float]:
"""
Args:
- regex_list: regex to be evaluated
- patterns: patterns to be matched by the regex_list
Returns:
- precision: describes how well each regex in the regex list describes the patterns
- recall: describes how well the entire regex list matches the patterns
- f1: combined score of precision and recall
"""
recall = Evaluator.recall(regex_list, patterns)
precision = Evaluator.precision(regex_list, patterns)
f1 = 0.0 if precision == 0.0 or recall == 0.0 else 2. / (1. / precision + 1. / recall)  # guard against division by zero
return precision, recall, f1
@staticmethod
def precision(regex_list: List[str], patterns: List[str]) -> float:
divided_patterns = Engine._divide_patterns(regex_list, patterns)
precisions = []
for i in range(len(divided_patterns)):
negative_patterns = Evaluator._collect_negative_patterns(
i, divided_patterns)
precision = Evaluator.regex_precision(
regex_list[i], divided_patterns[i], negative_patterns)
precisions.append(precision)
precision = sum(precisions) / len(precisions)
return precision
@staticmethod
def recall(regex_list: List[str], patterns: List[str]) -> float:
"""
Recall evaluates how well the regex captures the presented patterns.
Args:
- regex_list: the whole regex, consisting of multiple sub-regexes
- patterns: the patterns that should be captured by the regex, including ones not yet presented.
"""
regex = Engine.merge_regex_sequence(regex_list)
return len(Filter.match(regex, patterns)) / len(patterns)
@staticmethod
def _collect_negative_patterns(
target_regex_index: int, divided_patterns: List[List[str]]) -> List[str]:
negative_patterns = []
for not_i in [j for j in range(
len(divided_patterns)) if j != target_regex_index]:
negative_patterns.extend(divided_patterns[not_i])
return negative_patterns
@staticmethod
def regex_precision(
sub_regex: str, positive_patterns: List[str], negative_patterns: List[str]) -> float:
"""
Precision evaluates how precise or explainable the regex is on the target patterns.
Because the goal is that each sub-regex should exactly match its target patterns,
the positive and negative patterns for the sub-regex are defined as follows:
* positive_patterns: patterns presented previously and matched by the sub-regex
* negative_patterns: patterns that should not be matched by the sub-regex.
"""
if positive_patterns:
return len(Filter.match(sub_regex, positive_patterns)) / \
len(Filter.match(sub_regex,
positive_patterns + negative_patterns))
else:
return 0.0 | /regex-inference-0.0.10.tar.gz/regex-inference-0.0.10/regex_inference/evaluator.py | 0.929336 | 0.560854 | evaluator.py | pypi |
import typing
from typing import List, Optional, Dict
import re
from .filter import Filter
from .chain import Chain
from ..utils import make_verbose
class Engine:
def __init__(self, openai_api_key: Optional[str] = None, temperature: float = 0.8,
mismatch_tolerance: float = 0.1, max_iteration: int = 3, simplify_regex: bool = True, verbose: bool = False):
self._chain = Chain(
openai_api_key=openai_api_key,
temperature=temperature)
self._mismatch_tolerance = mismatch_tolerance
self._max_iteration = max_iteration
self._simplify_regex = simplify_regex
if verbose:
self._make_verbose()
@typing.no_type_check
def _make_verbose(self):
self.run = make_verbose(self.run)
self._run_new_inference = make_verbose(self._run_new_inference)
self._fix_regex = make_verbose(self._fix_regex)
@staticmethod
def get_correction_data(
regex_list: List[str], patterns: List[str]) -> Dict[str, Dict[str, List[str]]]:
"""
Args:
- regex_list: the inferred list of regex
- patterns: the target patterns
Returns:
- correction_data (dict)
- key: regex
- value (dict)
- fields:
- correct
- incorrect
"""
divided_patterns = Engine._divide_patterns(regex_list, patterns)
result = dict()
for i, regex in enumerate(regex_list):
matched_patterns = Filter.match(regex_list[i], patterns)
correct_patterns = divided_patterns[i]
incorrect_patterns = list(
set(matched_patterns) -
set(correct_patterns))
result[regex] = {
'correct': correct_patterns,
'incorrect': incorrect_patterns
}
return result
def run(self, patterns: List[str]) -> str:
regex_list = self.get_regex_sequence(patterns)
return Engine.merge_regex_sequence(regex_list)
@staticmethod
def _divide_patterns(regex_list: List[str],
patterns: List[str]) -> List[List[str]]:
"""
Separate a list of patterns into groups, each matched by the corresponding regex in regex_list
"""
results = []
for regex in regex_list:
results.append(Filter.match(regex, patterns))
patterns = Filter.mismatch(regex, patterns)
return results
def fix_regex_list(
self, regex_list: List[str], correction_data: Dict[str, Dict[str, List[str]]]) -> List[str]:
for i, regex in enumerate(regex_list):
regex_list[i] = self.fix_regex(regex, correction_data)
return regex_list
def fix_regex(self, regex: str,
correction_data: Dict[str, Dict[str, List[str]]]) -> str:
result = regex  # fall back to the original regex if every attempt fails
for _ in range(self._max_iteration):
try:
result = self._fix_regex(regex, correction_data)
re.compile(result)
break
except KeyboardInterrupt as e:
raise e
except (ValueError, AssertionError):
pass
return result
def _fix_regex(self, regex: str,
correction_data: Dict[str, Dict[str, List[str]]]) -> str:
"""
Args:
- regex: the regex to be fixed
- correction_data: output of `get_correction_data`
Return
- fixed_regex: the corrected regex
"""
regex_list = [regex]
correct_patterns = [correction_data[regex]['correct']
for regex in regex_list]
incorrect_patterns = [correction_data[regex]
['incorrect'] for regex in regex_list]
cnt = len(regex_list)
fact_0_str = f"""
Fact 0:
A list of regex describing {cnt} types of patterns is double quoted and shown as the following bullet points:
"""
regex_list_str = "\n".join(
map(lambda x: f'{x[0]+1}. "{x[1]}"', enumerate(regex_list)))
facts = "\n\n".join(map(lambda i: f"""
Fact {i+1}
For regex number {i+1}, it correctly matches the patterns double quoted and shown as follows:
{Engine._convert_patterns_to_prompt(correct_patterns[i])}
However, it mistakenly matches the patterns double quoted and shown as follows:
{Engine._convert_patterns_to_prompt(incorrect_patterns[i])}
""", range(cnt)))
ans = self._chain.fix_regex.run(
facts=f"""
{fact_0_str}
{regex_list_str}
Now, I will provide to you the other {cnt} facts.
{facts}
"""
)
if ans.endswith('""'):
ans = ans[:-1]
try:
parsed_result = list(map(eval, ans.strip().split('\n')))
except SyntaxError as e:
raise ValueError(ans) from e
assert len(regex_list) == len(parsed_result)
for regex, result in zip(regex_list, parsed_result):
try:
assert regex == result[0], f'original regex is changed: {result[0]}!={regex}'
assert re.compile(result[1]), f'{result[1]} cannot be compiled'
except BaseException as e:
raise ValueError(f'Parsing result {result} failed') from e
try:
result = list(map(lambda x: x[1], parsed_result))
except IndexError as e:
raise ValueError(parsed_result) from e
return result[0]
def get_regex_sequence(self, patterns: List[str]) -> List[str]:
assert len(
patterns) > 0, '`patterns` input to `run` should not be an empty list'
regex_list = [self._run_new_inference(patterns)]
mismatched_patterns = Filter.mismatch(
Engine.merge_regex_sequence(regex_list),
patterns
)
while mismatched_patterns:
regex = self._run_new_inference(mismatched_patterns)
regex_list.append(regex)
mismatched_patterns = Filter.mismatch(
Engine.merge_regex_sequence(regex_list), patterns)
return regex_list
@staticmethod
def merge_regex_sequence(regex_list: List[str]) -> str:
return '|'.join(map(lambda x: f'({x})', regex_list))
@staticmethod
def _convert_patterns_to_prompt(patterns: List[str]) -> str:
return '\n'.join(map(lambda x: f'"{x}"', patterns))
def _run_alter_regex(self, regex: str, patterns: List[str]) -> str:
for _ in range(self._max_iteration):
result = self._chain.alter_regex.run(
regex=regex,
strings=Engine._convert_patterns_to_prompt(patterns)
).strip()
try:
re.compile(result)
break
except KeyboardInterrupt as e:
raise e
except BaseException:
pass
return result
def _run_simplify_regex(self, regex: str, patterns: List[str]) -> str:
for _ in range(self._max_iteration):
result = self._chain.simplify_regex.run(
regex=regex,
strings=Engine._convert_patterns_to_prompt(patterns)
).strip()
try:
re.compile(result)
break
except KeyboardInterrupt as e:
raise e
except BaseException:
pass
return result
def _run_new_inference(self, patterns: List[str]) -> str:
for _ in range(self._max_iteration):
result = self._chain.inference_regex.run(
Engine._convert_patterns_to_prompt(patterns)
).strip()
try:
re.compile(result)
break
except KeyboardInterrupt as e:
raise e
except BaseException:
pass
return result
def explain(self, regex: str) -> None:
result = self._chain.explain_regex.run(regex)
print(result) | /regex-inference-0.0.10.tar.gz/regex-inference-0.0.10/regex_inference/inference/engine.py | 0.8288 | 0.380615 | engine.py | pypi |
import os
from langchain.llms import OpenAI
from langchain import PromptTemplate
from langchain import LLMChain
__all__ = ['Chain']
class Chain:
def __init__(self, openai_api_key=None, temperature=0.8):
if openai_api_key is None:
openai_api_key = os.environ["OPENAI_API_KEY"]
self._openai_llm = OpenAI(
openai_api_key=openai_api_key,
temperature=temperature,
model='text-davinci-003', # https://platform.openai.com/docs/models/gpt-3-5
client='regex_inference'
)
self._setup_lang_chains()
def _setup_lang_chains(self):
self.inference_regex = LLMChain(
prompt=self.new_inference_prompt,
llm=self._openai_llm
)
self.alter_regex = LLMChain(
prompt=self.alter_regex_prompt,
llm=self._openai_llm
)
self.simplify_regex = LLMChain(
prompt=self.simplify_regex_prompt,
llm=self._openai_llm
)
self.explain_regex = LLMChain(
prompt=self.explain_regex_prompt,
llm=self._openai_llm
)
self.fix_regex = LLMChain(
prompt=self.fix_regex_prompt,
llm=self._openai_llm
)
@property
def new_inference_prompt(self) -> PromptTemplate:
template = """Question: Show me the best and shortest regex that can fully match the strings that I provide to you.
Note that:
*. The regex should be as short as possible.
*. Make sure the resulting regex does not have syntax errors.
*. The regex should fully match as many of the provided strings as possible.
*. The regex should not match strings that are not provided.
*. The number of string combinations matching the resulting regex should be kept as small as possible, ideally no larger than the number of target strings provided.
Now, each instance of the strings that should be fully matched is provided line-by-line and wrapped by double quotes as follows:
{strings}
Note that:
1. The double quote is not part of the string instance. Ignore the double quotes when inferring the regex.
2. Provide the resulting regex without wrapping it in quotes
The resulting regex is: """
prompt = PromptTemplate(
template=template,
input_variables=['strings']
)
return prompt
@property
def alter_regex_prompt(self) -> PromptTemplate:
template = """Question: Alter the regex "{regex}" such that the following requirements is matched:
*. Strings that fully match the original regex must still fully match the altered regex.
*. The regex should fully match as many of the provided strings as possible.
*. The regex should be as short as possible.
*. The regex should not match strings that are not provided, except those that fully match the original regex.
Now, each instance of the strings is provided line-by-line and wrapped by double quotes as follows:
{strings}
Note that:
1. The double quote is not part of the string instance. Ignore the double quotes when inferring the regex.
2. Provide the resulting regex without wrapping it in quotes
The resulting altered regex is: """
prompt = PromptTemplate(
template=template,
input_variables=['regex', 'strings']
)
return prompt
@property
def simplify_regex_prompt(self) -> PromptTemplate:
template = """
Please revise the regex "{regex}"
such that the following constraints, each starting with *., can be met:
*. The original regex consists of multiple sub-regexes separated by "|". Try to combine similar sub-regexes.
*. After combining, the resulting regex should be as short as possible.
*. The revised regex should still fully match all the strings that fully matched the original regex
*. The revised regex should still fully match each of the strings I provided to you.
Now, each instance of the strings is provided line-by-line and wrapped by double quotes as follows:
{strings}
Note that:
1. The double quote is not part of the string instance. Ignore the double quotes when inferring the regex.
2. Provide the resulting regex without wrapping it in quotes
The resulting revised regex is:
"""
prompt = PromptTemplate(
template=template,
input_variables=['regex', 'strings']
)
return prompt
@property
def explain_regex_prompt(self) -> PromptTemplate:
template = """Question: Explain the regex "{regex}" such that
1. The role of each character in the regex is elaborated.
2. Provide the 5 most representative example strings that fully match the regex.
The explanation is: """
prompt = PromptTemplate(
template=template,
input_variables=['regex']
)
return prompt
@property
def fix_regex_prompt(self) -> PromptTemplate:
template = """Question: I will provide you somes facts and demand you to think about them for generating the answer.
{facts}
I demand you to alter each regex and show each altered regex as answer.
The criteria for each altered regex are:
1. The altered regex should still correctly match the patterns that were correctly matched.
2. The altered regex should exclude the mistakenly matched patterns. That is, those mistakenly matched patterns should no longer match.
Note that:
1. The regex before and after the alteration should be double quoted.
2. The regex before and after the alteration should be shown line-by-line.
3. The regex before and after the alteration should be listed in the same line.
4. The regex before and after the alteration should be separated by "," mark.
5. The regex before and after the alteration together should be wrapped with parenthesis "()".
6. Only show the lines with regex.
7. In the answer, the regex before the alteration should not be different from those provided in Fact 0.
8. The number of lines in the answer should be equal to the number of regex provided.
An example to the answer is:
("original_regex_1", "altered_regex_1")
("original_regex_2", "altered_regex_2")
("original_regex_3", "altered_regex_3")
The answer is:
"""
prompt = PromptTemplate(
template=template,
input_variables=['facts']
)
return prompt | /regex-inference-0.0.10.tar.gz/regex-inference-0.0.10/regex_inference/inference/chain.py | 0.665628 | 0.193967 | chain.py | pypi |
from __future__ import annotations
from argparse import ArgumentParser, Namespace
from itertools import combinations
import math
import sys
import string
from dataclasses import dataclass
from dataclasses import field
from enum import Enum
from enum import auto
import re
from re import Match
from re import Pattern
from typing import Generator
from typing import Optional
class AsciiClass(Enum):
ALNUM = auto(), # Alphanumeric characters: ‘[:alpha:]’ and ‘[:digit:]’; in the ‘C’ locale and ASCII character encoding, this is the same as ‘[0-9A-Za-z]’.
ALPHA = auto(), # Alphabetic characters: ‘[:lower:]’ and ‘[:upper:]’; in the ‘C’ locale and ASCII character encoding, this is the same as ‘[A-Za-z]’.
BLANK = auto(), # Blank characters: space and tab.
CNTRL = auto(), # Control characters. In ASCII, these characters have octal codes 000 through 037, and 177 (DEL). In other character sets, these are the equivalent characters, if any.
DIGIT = auto(), # Digits: 0 1 2 3 4 5 6 7 8 9.
GRAPH = auto(), # Graphical characters: ‘[:alnum:]’ and ‘[:punct:]’.
LOWER = auto(), # Lower-case letters; in the ‘C’ locale and ASCII character encoding, this is a b c d e f g h i j k l m n o p q r s t u v w x y z.
PRINT = auto(), # Printable characters: ‘[:alnum:]’, ‘[:punct:]’, and space.
PUNCT = auto(), # Punctuation characters; in the ‘C’ locale and ASCII character encoding, this is ! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ ` { | } ~.
SPACE = auto(), # Space characters: in the ‘C’ locale, this is tab, newline, vertical tab, form feed, carriage return, and space. See Usage, for more discussion of matching newlines.
UPPER = auto(), # Upper-case letters: in the ‘C’ locale and ASCII character encoding, this is A B C D E F G H I J K L M N O P Q R S T U V W X Y Z.
XDIGIT = auto(), # Hexadecimal digits: 0 1 2 3 4 5 6 7 8 9 A B C D E F a b c d e f.
ANY = auto(),
@staticmethod
def get_parent(cls: AsciiClass) -> Optional[AsciiClass]:
if cls == AsciiClass.ALNUM:
return AsciiClass.GRAPH
if cls == AsciiClass.ALPHA:
return AsciiClass.ALNUM
if cls == AsciiClass.BLANK:
return AsciiClass.SPACE
if cls == AsciiClass.DIGIT:
return AsciiClass.ALNUM
if cls == AsciiClass.GRAPH:
return AsciiClass.PRINT
if cls == AsciiClass.LOWER:
return AsciiClass.ALPHA
if cls == AsciiClass.PRINT:
return AsciiClass.ANY
if cls == AsciiClass.PUNCT:
return AsciiClass.GRAPH
if cls == AsciiClass.SPACE:
return AsciiClass.PRINT
if cls == AsciiClass.UPPER:
return AsciiClass.ALPHA
if cls == AsciiClass.CNTRL:
return AsciiClass.ANY
if cls == AsciiClass.XDIGIT:
return AsciiClass.ALNUM
if cls == AsciiClass.ANY:
return None
raise ValueError(f"Unknown ASCII class {cls}")
@staticmethod
def get_ascii_class_pattern(cls: AsciiClass) -> str:
if cls == AsciiClass.ALNUM:
return r"[:alnum:]"
if cls == AsciiClass.ALPHA:
return r"[:alpha:]"
if cls == AsciiClass.BLANK:
return r"[:blank:]"
if cls == AsciiClass.CNTRL:
return r"[:cntrl:]"
if cls == AsciiClass.DIGIT:
return r"[0-9]"
if cls == AsciiClass.GRAPH:
return r"[:graph:]"
if cls == AsciiClass.LOWER:
return r"[:lower:]"
if cls == AsciiClass.PRINT:
return r"[:print:]"
if cls == AsciiClass.PUNCT:
return r"[:punct:]"
if cls == AsciiClass.SPACE:
return r"[:space:]"
if cls == AsciiClass.UPPER:
return r"[:upper:]"
if cls == AsciiClass.XDIGIT:
return r"[:xdigit:]"
if cls == AsciiClass.ANY:
return r"."
raise ValueError(f"Unsupported ASCII class {cls}")
@staticmethod
def get_class_characters(symbol_class: AsciiClass) -> set[str]:
if symbol_class == AsciiClass.ALNUM:
# union, not intersection: ALPHA and DIGIT are disjoint sets
return AsciiClass.get_class_characters(AsciiClass.ALPHA) | AsciiClass.get_class_characters(AsciiClass.DIGIT)
if symbol_class == AsciiClass.ALPHA:
return AsciiClass.get_class_characters(AsciiClass.UPPER) | AsciiClass.get_class_characters(AsciiClass.LOWER)
if symbol_class == AsciiClass.BLANK:
return set([" ", "\t"])
if symbol_class == AsciiClass.CNTRL:
# CNTRL = auto(), # Control characters. In ASCII, these characters have octal codes 000 through 037, and 177 (DEL). In other character sets, these are the equivalent characters, if any.
raise ValueError()
if symbol_class == AsciiClass.DIGIT:
return set(string.digits)
if symbol_class == AsciiClass.GRAPH:
# per the class definition above: '[:alnum:]' plus '[:punct:]'
return AsciiClass.get_class_characters(AsciiClass.ALNUM) | AsciiClass.get_class_characters(AsciiClass.PUNCT)
if symbol_class == AsciiClass.LOWER:
return set(string.ascii_lowercase)
if symbol_class == AsciiClass.PRINT:
return AsciiClass.get_class_characters(AsciiClass.ALNUM) | AsciiClass.get_class_characters(AsciiClass.PUNCT) | AsciiClass.get_class_characters(AsciiClass.SPACE)
if symbol_class == AsciiClass.PUNCT:
return set(string.punctuation)
if symbol_class == AsciiClass.UPPER:
return set(string.ascii_uppercase)
if symbol_class == AsciiClass.XDIGIT:
return set(string.hexdigits)
if symbol_class == AsciiClass.SPACE:
return set(
# "\t\n\x0B\x0C\x0D "
string.whitespace
)
raise ValueError()
@staticmethod
def get_ascii_class(s: str) -> AsciiClass:
if len(s) != 1:
raise ValueError("Expected a single character")
if s.isdigit():
return AsciiClass.DIGIT
if s.isalpha():
if s.islower():
return AsciiClass.LOWER
elif s.isupper():
return AsciiClass.UPPER
return AsciiClass.ALPHA
if s.isspace():
return AsciiClass.SPACE
if s.isprintable():
return AsciiClass.PUNCT
raise ValueError(f"{s} unknown")
@staticmethod
def find_common_ancestor(class1: AsciiClass, class2: AsciiClass) -> AsciiClass:
parent: Optional[AsciiClass] = class1
ancestors: set[AsciiClass] = {class1}
assert parent is not None
while True:
parent = AsciiClass.get_parent(parent)
if parent is None:
break
ancestors.add(parent)
parent = class2
while parent is not None:
if parent in ancestors:
return parent
else:
parent = AsciiClass.get_parent(parent)
if parent is None:
return AsciiClass.ANY
raise ValueError()
@dataclass
class Symbol:
a_class: AsciiClass
chars: set[str]
is_class: bool
is_optional: bool = False
def fit_score(self, s: str, alpha: float) -> float:
if AsciiClass.get_ascii_class(s) == self.a_class:
return 0
if not self.is_class and s in self.chars:
return alpha
return 1
def __str__(self) -> str:
if self.is_class:
return AsciiClass.get_ascii_class_pattern(self.a_class)
elif len(self.chars) == 1:
return self._sanitize(next(iter(self.chars))) + ("?" if self.is_optional else "")
else:
return "[" + "".join(Symbol._sanitize(c) for c in self.chars) + "]" + ("?" if self.is_optional else "")
def fit(self, other: Symbol) -> float:
if self.a_class == other.a_class:
return 0
if AsciiClass.find_common_ancestor(self.a_class, other.a_class) == other.a_class:
return 0
if AsciiClass.find_common_ancestor(self.a_class, other.a_class) == self.a_class:
return 0
common_chars = len(self.chars & other.chars)
if common_chars != 0:
return 1 - common_chars / len(self.chars)
else:
return 1
@staticmethod
def _sanitize(c: str) -> str:
if c in ".^$*+?()[{\\|":
return f"\\{c}"
return c
def merge(self, other: Symbol) -> Symbol:
if other.a_class != self.a_class:
na_class = AsciiClass.find_common_ancestor(other.a_class, self.a_class)
else:
na_class = self.a_class
chars = self.chars | other.chars
return Symbol(
na_class,
chars=chars,
is_class=len(chars) == len(AsciiClass.get_class_characters(na_class))
)
@staticmethod
def build(symbol: str) -> Symbol:
symbol_class = AsciiClass.get_ascii_class(symbol)
return Symbol(
a_class=symbol_class,
is_class=False,
chars=set(symbol)
)
@dataclass
class Token:
symbols: list[Symbol] = field(default_factory=list)
optional: bool = False
def fit_score(self, t: str, alpha: float) -> float:
return sum(
symbol.fit_score(tuple_element, alpha) for symbol, tuple_element in zip(self.symbols, Token.get_symbols_in_token(t))
) + abs(
len(t) - len(self.symbols)
)
def merge(self, other: Token) -> Token:
symbols = [symbol.merge(other_symbol) for symbol, other_symbol in zip(self.symbols, other.symbols)]
if len(self.symbols) == len(other.symbols):
return Token(symbols=symbols, optional=self.optional or other.optional)
elif len(self.symbols) > len(other.symbols):
missing = [Symbol(s.a_class, s.chars, s.is_class, True) for s in self.symbols[len(other.symbols):]]
else:
missing = [Symbol(s.a_class, s.chars, s.is_class, True) for s in other.symbols[len(self.symbols):]]
return Token(symbols=symbols + missing, optional=self.optional or other.optional)
def fit(self, other: Token) -> float:
return sum(
symbol.fit(other_symbol) for symbol, other_symbol in zip(self.symbols, other.symbols)
) + abs(len(self.symbols) - len(other.symbols))
@staticmethod
def get_symbols_in_token(t: str) -> Generator[str, None, None]:
for c in t:
yield c
def __str__(self) -> str:
return "(" + "".join(str(symbol) for symbol in self.symbols) + ")" + ("?" if self.optional else "")
@staticmethod
def build(word: str) -> Token:
return Token(
list(Symbol.build(symbol) for symbol in word)
)
class NullToken(Token):
def fit_score(self, t: str, alpha: float) -> float:
# a null token contributes one unit of cost per unmatched character
return 1.0 * len(t)
@dataclass
class Branch:
tokens: list[Token] = field(default_factory=list)
def fit_score(self, t: str, alpha: float) -> float:
tokens: list[Token] = [self.tokens[i] if i < len(self.tokens) else NullToken() for i, _ in enumerate(t)]
return sum(
token.fit_score(t_i, alpha) for token, t_i in zip(tokens, Branch.get_tokens_in_tuple(t))
)
def add(self, word: str) -> None:
self.tokens = [token.merge(nt) for nt, token in zip(Branch.build(word).tokens, self.tokens)]
def __str__(self) -> str:
return "".join(str(token) for token in self.tokens)
def fit(self, other: Branch) -> float:
return sum(
token.fit(other_token) for token, other_token in zip(self.tokens, other.tokens)
) + abs(len(self.tokens) - len(other.tokens))
def merge(self, other: Branch) -> Branch:
tokens = [
token.merge(other_token) for token, other_token in zip(self.tokens, other.tokens)
]
if len(self.tokens) == len(other.tokens):
return Branch(tokens)
elif len(self.tokens) > len(other.tokens):
missing = [
Token(token.symbols, True) for token in self.tokens[len(other.tokens):]
]
assert len(tokens) + len(missing) == len(self.tokens)
else:
missing = [
Token(token.symbols, True) for token in other.tokens[len(self.tokens):]
]
assert len(tokens) + len(missing) == len(other.tokens)
return Branch(tokens + missing)
@staticmethod
def get_tokens_in_tuple(t: str, delimiters: str = r"[-_/\\#., ]") -> Generator[str, None, None]:
pattern: Pattern[str] = re.compile(delimiters)
last_match: Optional[Match[str]] = None
for m in re.finditer(pattern, t):
if last_match is None:
yield t[:m.start()]
else:
yield t[last_match.end():m.start()]
yield t[m.start():m.end()]
last_match = m
if last_match is None:
yield t
else:
yield t[last_match.end():]
@staticmethod
def build(word: str) -> Branch:
return Branch(
tokens=[
Token.build(token) for token in Branch.get_tokens_in_tuple(word)
]
)
def __repr__(self) -> str:
return f"Branch[{str(self)}"
@dataclass
class XTructure:
alpha: float = 1 / 5
max_branches: int = 8
branching_threshold: float = 0.85
branches: list[Branch] = field(default_factory=list)
def fit_score(self, t: str) -> float:
return min(b.fit_score(t, self.alpha) for b in self.branches)
def learn_new_word(self, word: str) -> bool:
if len(word) == 0:
return False
if not len(self.branches):
self.branches.append(Branch.build(word))
else:
best_branch, score = self._best_branch(word)
if score < self.branching_threshold:
best_branch.add(word)
else:
self.branches.append(
Branch.build(word)
)
if len(self.branches) > self.max_branches:
self.branches = self.merge_most_similar()
return True
def _best_branch(self, word: str) -> tuple[Branch, float]:
assert len(self.branches)
best_score = math.inf
best_branch: Optional[Branch] = None
for branch in self.branches:
branch_score = branch.fit_score(word, self.alpha)
if branch_score < best_score:
best_branch = branch
best_score = branch_score
assert best_branch is not None
assert best_score != math.inf
return best_branch, best_score
def merge_most_similar(self) -> list[Branch]:
min_distance = math.inf
m_bi: Optional[Branch] = None
m_bj: Optional[Branch] = None
for bi, bj in combinations(self.branches, 2):
assert bi is not bj
distance = bi.fit(bj)
if distance < min_distance:
min_distance = distance
m_bi = bi
m_bj = bj
assert m_bi is not None
assert m_bj is not None
self.branches.remove(m_bi)
self.branches.remove(m_bj)
self.branches.append(m_bi.merge(m_bj))
return self.branches
def __str__(self) -> str:
return "|".join(str(branch) for branch in self.branches)
def parse_arguments() -> Namespace:
parser = ArgumentParser(
prog=sys.argv[0].split("/")[-1],
description="A simple tool to learn human readable a regular expression from examples",
)
parser.add_argument("-i", "--input", help="Path to the input source, defaults to stdin")
parser.add_argument("-o", "--output", help="Path to the output file, defaults to stdout")
parser.add_argument("--max-branch", type=int, default=8, help="Maximum number of branches allowed, defaults to 8")
parser.add_argument("--alpha", type=float, default=1 / 5, help="Weight for fitting tuples, defaults to 1/5")
parser.add_argument("--branch-threshold", type=float, default=.85, help="Branching threshold, defaults to 0.85, relative to the fitting score alpha")
return parser.parse_args()
def main() -> int:
cmd = parse_arguments()
x = XTructure(
cmd.alpha,
cmd.max_branch,
cmd.branch_threshold
)
data_source = open(cmd.input) if cmd.input else sys.stdin
for line in data_source:
x.learn_new_word(line.strip())
output = open(cmd.output, "w") if cmd.output else sys.stdout
print(str(x), file=output)
return 0
if __name__ == "__main__":
raise SystemExit(main()) | /regex-learner-0.0.4.tar.gz/regex-learner-0.0.4/xsystem.py | 0.743447 | 0.245854 | xsystem.py | pypi |
from functools import partial
from typing import Any, List, Union, Optional
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import regex
import torch
import torch.nn as nn
from transformers import (
GenerationMixin,
LogitsProcessor,
LogitsProcessorList,
PreTrainedTokenizer,
)
class AllowedTokenLogitsProcessor(LogitsProcessor):
def __init__(self, allowed_token_ids: List[int]) -> None:
super().__init__()
self.allowed_token_ids = allowed_token_ids
def __call__(self, input_ids: torch.Tensor, scores: torch.Tensor) -> torch.Tensor:
mask = torch.ones_like(scores, dtype=torch.bool)
mask[:, self.allowed_token_ids] = False
scores = scores.masked_fill(mask, -float("inf"))
return scores
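`AllowedTokenLogitsProcessor` sends every disallowed vocabulary position to negative infinity, so softmax assigns it zero probability. The effect, sketched without torch on a plain list of scores:

```python
def mask_scores(scores: list[float], allowed_token_ids: list[int]) -> list[float]:
    """Keep scores only at allowed positions; everything else becomes -inf."""
    allowed = set(allowed_token_ids)
    return [s if i in allowed else float("-inf") for i, s in enumerate(scores)]

mask_scores([0.2, 1.5, -0.3, 0.9], allowed_token_ids=[1, 3])
# positions 0 and 2 become -inf, so sampling can only pick token 1 or 3
```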
class RegexConstraint:
def __init__(
self,
tokenizer: PreTrainedTokenizer,
model: GenerationMixin,
max_workers: int = 64,
) -> None:
self.tokenizer = tokenizer
self.model = model
self.max_workers = max_workers
self._cached_vocab = self.tokenizer.get_vocab()
def _prepare_pattern(
self,
pattern: Union[str, regex.Pattern[str], List[str], List[regex.Pattern[str]]],
) -> List[regex.Pattern[str]]:
pattern_ = [pattern] if isinstance(pattern, (str, regex.Pattern)) else pattern
return [regex.compile(p) for p in pattern_]
def _check_token(
self, current_response: str, pattern: List[regex.Pattern[str]], token: str
) -> bool:
return any(p.fullmatch(current_response + token, partial=True) for p in pattern) # type: ignore
def _get_allowed_token_ids(
self, current_response: str, pattern: List[regex.Pattern[str]]
) -> List[int]:
with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
return [
token_id
for valid, token_id in zip(
executor.map(
partial(self._check_token, current_response, pattern),
self._cached_vocab.keys(),
),
self._cached_vocab.values(),
)
if valid
]
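The filter above relies on the third-party `regex` module's `fullmatch(..., partial=True)`, which accepts a string that could still grow into a full match; stdlib `re` has no equivalent. The filtering idea, sketched with a hand-rolled prefix check for the fixed pattern `\d{3}` (this helper is an illustration, not the library's API):

```python
def is_viable_prefix(text: str) -> bool:
    """Stand-in for regex's partial fullmatch against \\d{3}:
    a viable prefix is at most 3 characters, all digits."""
    return len(text) <= 3 and all(c.isdigit() for c in text)

def allowed_tokens(current: str, vocab: dict[str, int]) -> list[int]:
    """Keep only token ids whose text keeps the response a viable prefix."""
    return [tid for tok, tid in vocab.items() if is_viable_prefix(current + tok)]

vocab = {"1": 0, "23": 1, "a": 2, "4567": 3}
allowed_tokens("9", vocab)  # "91" and "923" stay viable; "9a" and "94567" do not
```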
def generate(
self,
prompt: str,
pattern: Union[str, regex.Pattern[str], List[str], List[regex.Pattern[str]]],
generated_response: str = "",
**gen_kwargs: Any,
) -> Optional[str]:
pattern = self._prepare_pattern(pattern)
input_ids: torch.Tensor = self.tokenizer.encode(prompt + generated_response, return_tensors="pt") # type: ignore
allowed_token_ids = self._get_allowed_token_ids(generated_response, pattern)
# stop if no tokens are allowed
if not allowed_token_ids:
return None
gen_kwargs.pop("max_new_tokens", None)
gen_kwargs.pop("output_scores", None)
gen_kwargs.pop("return_dict_in_generate", None)
model_kwargs = gen_kwargs.copy()
logits_processor = LogitsProcessorList(
[AllowedTokenLogitsProcessor(allowed_token_ids)]
)
if processors := model_kwargs.pop("logits_processor", None):
logits_processor.extend(processors)
output = self.model.generate(
input_ids,
logits_processor=logits_processor,
max_new_tokens=1,
output_scores=True,
return_dict_in_generate=True,
**model_kwargs,
)
probs = nn.functional.softmax(output.scores[0], dim=-1)
sorted_probs, indices = torch.sort(probs, descending=True, dim=-1)
output_token_ids = indices[sorted_probs > 0]
for token in output_token_ids:
output_text = self.tokenizer.decode(token, skip_special_tokens=True)
candidate = generated_response + output_text
print("Generated:", candidate)
# stop if the generated text matches the pattern
if any(p.fullmatch(candidate) for p in pattern):
return candidate
if result := self.generate(
prompt, pattern, candidate, **gen_kwargs
):
return result | /regex_llm-0.1.0-py3-none-any.whl/regex_llm/__init__.py | 0.923424 | 0.335841 | __init__.py | pypi |
import re
from pathlib import Path
from typing import Dict, Optional, List
from nuclear.sublog import log
from regex_rename.match import Match
def bulk_rename(
pattern: str,
replacement_pattern: Optional[str],
testing: bool = True,
full: bool = False,
recursive: bool = False,
padding: int = 0,
) -> List[Match]:
"""
Rename (or match) multiple files at once
:param pattern: regex pattern to match filenames
:param replacement_pattern: replacement regex pattern for renamed files.
Use \\1 syntax to make use of matched groups
:param testing: True - just testing replacement pattern, False - do actual renaming files
:param full: whether to enforce matching full filename against pattern
:param recursive: whether to search directories recursively
:param padding: applies padding with zeros with given length on matched numerical groups
"""
log.debug('matching regex pattern',
pattern=pattern, replacement=replacement_pattern, testing_mode=testing,
full_match=full, recursive=recursive, padding=padding)
matches: List[Match] = match_files(Path(), pattern, replacement_pattern,
recursive, full, padding)
for match in matches:
match.log_info(testing)
if replacement_pattern:
find_duplicates(matches)
if testing:
if matches:
log.info('files matched', count=len(matches))
else:
log.info('no files matched', count=len(matches))
elif replacement_pattern:
rename_matches(matches)
if matches:
log.info('files renamed', count=len(matches))
else:
log.info('no files renamed', count=len(matches))
else:
raise RuntimeError('replacement pattern is required for renaming')
return matches
def match_files(
path: Path,
pattern: str,
replacement_pattern: Optional[str],
recursive: bool,
full: bool,
padding: int,
) -> List[Match]:
files = list_files(path, recursive)
filenames = sorted([str(f) for f in files])
matches = [match_filename(filename, pattern, replacement_pattern, full, padding)
for filename in filenames]
return [m for m in matches if m is not None]
def list_files(
path: Path,
recursive: bool,
) -> List[Path]:
if recursive:
return [f.relative_to(path) for f in path.rglob("*") if f.is_file()]
else:
return [f for f in path.iterdir() if f.is_file()]
def match_filename(
filename: str,
pattern: str,
replacement_pattern: Optional[str],
full: bool = False,
padding: int = 0,
) -> Optional[Match]:
re_match = match_regex_string(pattern, filename, full)
if not re_match:
log.warn('no match', file=filename)
return None
group_dict: Dict[int, Optional[str]] = {
index + 1: group for index, group in enumerate(re_match.groups())
}
apply_numeric_padding(group_dict, padding)
if not replacement_pattern:
return Match(name_from=filename, name_to=None, groups=group_dict, re_match=re_match)
validate_replacement(re_match, replacement_pattern)
new_name = expand_replacement(replacement_pattern, group_dict)
return Match(name_from=filename, name_to=new_name, groups=group_dict, re_match=re_match)
def match_regex_string(pattern: str, filename: str, full: bool) -> Optional[re.Match]:
if full:
return re.fullmatch(pattern, filename)
else:
return re.search(pattern, filename)
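`match_regex_string` switches between anchored and unanchored matching depending on the `full` flag. The stdlib difference it relies on:

```python
import re

# search: the pattern may match anywhere inside the filename
assert re.search(r"\d+", "IMG_0042.jpg") is not None
# fullmatch: the entire filename must match the pattern
assert re.fullmatch(r"\d+", "IMG_0042.jpg") is None
assert re.fullmatch(r"IMG_(\d+)\.jpg", "IMG_0042.jpg").group(1) == "0042"
```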
def apply_numeric_padding(group_dict: Dict[int, Optional[str]], padding: int):
if padding:
for index, group in group_dict.items():
if isinstance(group, str) and group.isnumeric():
group_dict[index] = group.zfill(padding)
def expand_numeric_padding_prefix(
name: str,
group_dict: Dict[int, Optional[str]],
):
re_pattern = re.compile(r'\\P(\d+)\\(\d+)')
while True:
padding_match = re_pattern.search(name)
if not padding_match:
break
padding = int(padding_match.group(1))
index = int(padding_match.group(2))
assert index in group_dict, f'group index {index} not found'
group = group_dict[index]
if group is None:
group = ''
name = name.replace(f'\\P{padding}\\{index}', group)
else:
assert group.isnumeric(), f'can\'t apply padding to non-numeric group: {group}'
name = name.replace(f'\\P{padding}\\{index}', group.zfill(padding))
return name
def validate_replacement(re_match: re.Match, replacement_pattern: str):
"""Test if it's valid in terms of regex rules"""
simplified = replacement_pattern.replace('\\L', '').replace('\\U', '')
simplified = re.sub(r'\\P(\d+)', '', simplified)
re_match.expand(simplified)
def expand_replacement(
replacement_pattern: str,
group_dict: Dict[int, Optional[str]],
) -> str:
new_name = replacement_pattern
new_name = expand_numeric_padding_prefix(new_name, group_dict)
for index, group in group_dict.items():
if group is None or not isinstance(group, str):
group = ''
if '\\L' in new_name:
new_name = new_name.replace(f'\\L\\{index}', group.lower())
if '\\U' in new_name:
new_name = new_name.replace(f'\\U\\{index}', group.upper())
new_name = new_name.replace(f'\\{index}', group)
return new_name
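`expand_replacement` supports `\L`/`\U` prefixes that lowercase or uppercase the referenced group. A simplified stand-in for that idea using only stdlib `re.sub` with a callback (not the module's own implementation):

```python
import re

def expand_case_refs(replacement: str, groups: dict[int, str]) -> str:
    """Expand \\1-style references, honouring a \\U (upper) or \\L (lower) prefix."""
    def sub(m: re.Match) -> str:
        case, index = m.group(1), int(m.group(2))
        group = groups.get(index, "")
        if case == "\\U":
            return group.upper()
        if case == "\\L":
            return group.lower()
        return group
    # optional literal \U or \L, then a backslash and a group number
    return re.sub(r"(\\U|\\L)?\\(\d+)", sub, replacement)

expand_case_refs(r"\U\1_\2", {1: "img", 2: "0042"})  # "IMG_0042"
```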
def find_duplicates(matches: List[Match]):
names = [match.name_to for match in matches]
duplicates = set((name for name in names if names.count(name) > 1))
if duplicates:
raise RuntimeError(f'found duplicate replacement filenames: {list(duplicates)}')
def rename_matches(matches: List[Match]):
for match in matches:
assert match.name_to
Path(match.name_to).parent.mkdir(parents=True, exist_ok=True)
Path(match.name_from).rename(match.name_to) | /regex-rename-1.0.0.tar.gz/regex-rename-1.0.0/regex_rename/rename.py | 0.800458 | 0.371422 | rename.py | pypi |
__all__ = [
"iter_sort_by_len",
"sort_by_len",
"ord_to_codepoint",
"codepoint_to_ord",
"char_to_codepoint",
"char_as_exp",
"char_as_exp2",
"string_as_exp",
"string_as_exp2",
"strings_as_exp",
"strings_as_exp2",
"iter_char_range",
"mask_span",
"mask_spans",
"to_utf8",
"to_nfc",
]
import string
import unicodedata
from collections.abc import Iterable
_ALPHA_CHARS: set[str] = set(string.ascii_letters)
_DIGIT_CHARS: set[str] = set(string.digits)
_SAFE_CHARS: set[str] = _ALPHA_CHARS.union(_DIGIT_CHARS).union(set(string.whitespace))
_RE2_ESCAPABLE_CHARS: set[str] = set(string.punctuation)
def iter_sort_by_len(
texts: Iterable[str],
*,
reverse: bool = False,
) -> Iterable[str]:
"""Iterate Texts Sorted by Length
Args:
texts (Iterable[str]): Strings to sort.
reverse (bool, optional): Sort in descending order (longest to shortest). Defaults to False.
Yields:
str: Strings sorted by length.
"""
for text in sorted(texts, key=len, reverse=reverse):
yield text
def sort_by_len(
texts: Iterable[str],
*,
reverse: bool = False,
) -> tuple[str, ...]:
"""Strings Sorted by Length
Args:
texts (Iterable[str]): Strings to sort.
reverse (bool, optional): Sort in descending order (longest to shortest). Defaults to False.
Returns:
tuple[str]: Strings sorted by length.
"""
return tuple(iter_sort_by_len(texts, reverse=reverse))
def ord_to_codepoint(ordinal: int) -> str:
"""Character Codepoint from Character Ordinal
Args:
ordinal (int): Character ordinal.
Returns:
str: Character codepoint.
"""
return format(ordinal, "x").zfill(8)
def codepoint_to_ord(codepoint: str) -> int:
"""Character Ordinal from Character Codepoint
Args:
codepoint (str): Character codepoint.
Returns:
int: Character ordinal.
"""
return int(codepoint, 16)
def char_to_codepoint(char: str) -> str:
"""Character Codepoint from Character
Args:
char (str): Character.
Returns:
str: Character codepoint.
"""
return ord_to_codepoint(ord(char))
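These helpers convert between a character, its ordinal, and an 8-digit zero-padded hex codepoint. A quick round-trip using the same expressions:

```python
# 'a' has ordinal 97, hex 61, padded to 8 digits
codepoint = format(ord("a"), "x").zfill(8)
assert codepoint == "00000061"
# and back: parse the hex, then chr() recovers the character
assert chr(int(codepoint, 16)) == "a"
```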
def char_as_exp(char: str) -> str:
"""Create a RE Regex Expression that Exactly Matches a Character
Escape to avoid reserved character classes (i.e. \s, \S, \d, \D, \1, etc.).
Args:
char (str): Character to match.
Returns:
str: RE expression that exactly matches the original character.
"""
if char in _SAFE_CHARS:
# Safe as-is
return char
else:
# Safe to escape with backslash
return f"\\{char}"
def char_as_exp2(char: str) -> str:
"""Create a RE2 Regex Expression that Exactly Matches a Character
Args:
char (str): Character to match.
Returns:
str: RE2 expression that exactly matches the original character.
"""
if char in _SAFE_CHARS:
# Safe as-is
return char
elif char in _RE2_ESCAPABLE_CHARS:
# Safe to escape with backslash
return f"\\{char}"
else:
# Otherwise escape using the codepoint
return "\\x{" + char_to_codepoint(char) + "}"
def string_as_exp(text: str) -> str:
"""Create a RE Regex Expression that Exactly Matches a String
Args:
text (str): String to match.
Returns:
str: RE expression that exactly matches the original string.
"""
return r"".join(map(char_as_exp, text))
def string_as_exp2(text: str) -> str:
"""Create a RE2 Regex Expression that Exactly Matches a String
Args:
text (str): String to match.
Returns:
str: RE2 expression that exactly matches the original string.
"""
return r"".join(map(char_as_exp2, text))
def strings_as_exp(texts: Iterable[str]) -> str:
"""Create a RE Regex expression that Exactly Matches Any One String
Args:
texts (Iterable[str]): Strings to match.
Returns:
str: RE expression that exactly matches any one of the original strings.
"""
return r"|".join(
map(
string_as_exp,
iter_sort_by_len(texts, reverse=True),
)
)
def strings_as_exp2(texts: Iterable[str]) -> str:
"""Create a RE2 Regex expression that Exactly Matches Any One String
Args:
texts (Iterable[str]): Strings to match.
Returns:
str: RE2 expression that exactly matches any one of the original strings.
"""
return r"|".join(
map(
string_as_exp2,
iter_sort_by_len(texts, reverse=True),
)
)
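Both `strings_as_exp` variants sort alternatives longest-first before joining with `|`. That ordering matters because Python's `re` takes the first alternative that matches at a position:

```python
import re

words = ["foo", "foobar"]
# shortest-first: "foo" wins even though "foobar" is present
assert re.match("|".join(words), "foobar").group() == "foo"
# longest-first, as these helpers order them: the full word matches
ordered = sorted(words, key=len, reverse=True)
assert re.match("|".join(ordered), "foobar").group() == "foobar"
```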
def iter_char_range(first_char: str, last_char: str) -> Iterable[str]:
"""Iterate All Characters within a Range of Characters (Inclusive)
Args:
first_char (str): Starting (first) character.
last_char (str): Ending (last) character.
Yields:
str: Character from within the range.
"""
for i in range(ord(first_char), ord(last_char) + 1):
yield chr(i)
def char_range(first_char: str, last_char: str) -> tuple[str, ...]:
"""Tuple of All Characters within a Range of Characters (Inclusive)
Args:
first_char (str): Starting (first) character.
last_char (str): Ending (last) character.
Returns:
tuple[str, ...]: Characters within the range.
"""
return tuple(iter_char_range(first_char, last_char))
def mask_span(
text: str,
span: list[int] | tuple[int, int],
mask: str | None = None,
) -> str:
"""Slice and Mask a String using a Span
Args:
text (str): Text to slice.
span (list[int] | tuple[int, int]): Domain of index positions (start, end) to mask.
mask (str, optional): Mask to insert after slicing. Defaults to None.
Returns:
str: Text with span replaced with the mask text.
"""
if not 0 <= span[0] <= span[1] <= len(text):
raise ValueError(f"Invalid index positions for start and end: {span}")
if mask is None:
# No mask
return text[: span[0]] + text[span[1] :]
else:
# Use mask
return text[: span[0]] + mask + text[span[1] :]
def mask_spans(
text: str,
spans: Iterable[list[int] | tuple[int, int]],
masks: Iterable[str] | None = None,
) -> str:
"""Slice and Mask a String using Multiple Spans
Args:
text (str): Text to slice.
spans (Iterable[list[int] | tuple[int, int]]): Domains of index positions (x1, x2) to mask from the text.
masks (Iterable[str], optional): Masks to insert when slicing. Defaults to None.
Returns:
str: Text with all spans replaced with the mask text.
"""
if masks is None:
# No masks
for span in reversed(list(spans)):
text = mask_span(text, span, mask=None)
else:
# Has masks
for span, mask in zip(reversed(list(spans)), reversed(list(masks))):
text = mask_span(text, span, mask=mask)
return text
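`mask_spans` applies the spans right-to-left so that replacing a later span never shifts the index positions of an earlier one:

```python
def mask_one(text: str, span: tuple[int, int], mask: str) -> str:
    return text[: span[0]] + mask + text[span[1]:]

text = "2024-01-31"
# replace right-to-left so the first span's indices stay valid
for span, mask in zip(reversed([(0, 4), (5, 7)]), reversed(["YYYY", "MM"])):
    text = mask_one(text, span, mask)
# text is now "YYYY-MM-31"
```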
def to_utf8(text: str) -> str:
"""Round-Trip a String through UTF-8 Encoding and Decoding
Args:
text (str): String to encode and decode.
Returns:
str: The resulting string.
"""
return text.encode("utf-8").decode("utf-8")
def to_nfc(text: str) -> str:
"""Normalize a Unicode String to NFC Form C
Form C favors the use of a fully combined character.
Args:
text (str): String to normalize.
Returns:
str: Normalized string.
"""
return unicodedata.normalize("NFC", text) | /regex_toolkit-0.0.3-py3-none-any.whl/regex_toolkit/base.py | 0.825836 | 0.444203 | base.py | pypi |
Introduction
------------
This regex implementation is backwards-compatible with the standard 're' module, but offers additional functionality.
Note
----
The re module's behaviour with zero-width matches changed in Python 3.7, and this module will follow that behaviour when compiled for Python 3.7.
Old vs new behaviour
--------------------
In order to be compatible with the re module, this module has 2 behaviours:
* **Version 0** behaviour (old behaviour, compatible with the re module):
Please note that the re module's behaviour may change over time, and I'll endeavour to match that behaviour in version 0.
* Indicated by the ``VERSION0`` or ``V0`` flag, or ``(?V0)`` in the pattern.
* Zero-width matches are not handled correctly in the re module before Python 3.7. The behaviour in those earlier versions is:
* ``.split`` won't split a string at a zero-width match.
* ``.sub`` will advance by one character after a zero-width match.
* Inline flags apply to the entire pattern, and they can't be turned off.
* Only simple sets are supported.
* Case-insensitive matches in Unicode use simple case-folding by default.
* **Version 1** behaviour (new behaviour, possibly different from the re module):
* Indicated by the ``VERSION1`` or ``V1`` flag, or ``(?V1)`` in the pattern.
* Zero-width matches are handled correctly.
* Inline flags apply to the end of the group or pattern, and they can be turned off.
* Nested sets and set operations are supported.
* Case-insensitive matches in Unicode use full case-folding by default.
If no version is specified, the regex module will default to ``regex.DEFAULT_VERSION``.
Case-insensitive matches in Unicode
-----------------------------------
The regex module supports both simple and full case-folding for case-insensitive matches in Unicode. Use of full case-folding can be turned on using the ``FULLCASE`` or ``F`` flag, or ``(?f)`` in the pattern. Please note that this flag affects how the ``IGNORECASE`` flag works; the ``FULLCASE`` flag itself does not turn on case-insensitive matching.
In the version 0 behaviour, the flag is off by default.
In the version 1 behaviour, the flag is on by default.
Nested sets and set operations
------------------------------
It's not possible to support both simple sets, as used in the re module, and nested sets at the same time because of a difference in the meaning of an unescaped ``"["`` in a set.
For example, the pattern ``[[a-z]--[aeiou]]`` is treated in the version 0 behaviour (simple sets, compatible with the re module) as:
* Set containing "[" and the letters "a" to "z"
* Literal "--"
* Set containing letters "a", "e", "i", "o", "u"
* Literal "]"
but in the version 1 behaviour (nested sets, enhanced behaviour) as:
* Set which is:
* Set containing the letters "a" to "z"
* but excluding:
* Set containing the letters "a", "e", "i", "o", "u"
Version 0 behaviour: only simple sets are supported.
Version 1 behaviour: nested sets and set operations are supported.
Flags
-----
There are 2 kinds of flag: scoped and global. Scoped flags can apply to only part of a pattern and can be turned on or off; global flags apply to the entire pattern and can only be turned on.
The scoped flags are: ``FULLCASE``, ``IGNORECASE``, ``MULTILINE``, ``DOTALL``, ``VERBOSE``, ``WORD``.
The global flags are: ``ASCII``, ``BESTMATCH``, ``ENHANCEMATCH``, ``LOCALE``, ``POSIX``, ``REVERSE``, ``UNICODE``, ``VERSION0``, ``VERSION1``.
If neither the ``ASCII``, ``LOCALE`` nor ``UNICODE`` flag is specified, it will default to ``UNICODE`` if the regex pattern is a Unicode string and ``ASCII`` if it's a bytestring.
The ``ENHANCEMATCH`` flag makes fuzzy matching attempt to improve the fit of the next match that it finds.
The ``BESTMATCH`` flag makes fuzzy matching search for the best match instead of the next match.
Notes on named capture groups
-----------------------------
All capture groups have a group number, starting from 1.
Groups with the same group name will have the same group number, and groups with a different group name will have a different group number.
The same name can be used by more than one group, with later captures 'overwriting' earlier captures. All of the captures of the group will be available from the ``captures`` method of the match object.
Group numbers will be reused across different branches of a branch reset, eg. ``(?|(first)|(second))`` has only group 1. If capture groups have different group names then they will, of course, have different group numbers, eg. ``(?|(?P<foo>first)|(?P<bar>second))`` has group 1 ("foo") and group 2 ("bar").
In the regex ``(\s+)(?|(?P<foo>[A-Z]+)|(\w+) (?P<foo>[0-9]+))`` there are 2 groups:
* ``(\s+)`` is group 1.
* ``(?P<foo>[A-Z]+)`` is group 2, also called "foo".
* ``(\w+)`` is group 2 because of the branch reset.
* ``(?P<foo>[0-9]+)`` is group 2 because it's called "foo".
If you want to prevent ``(\w+)`` from being group 2, you need to name it (different name, different group number).
Multithreading
--------------
The regex module releases the GIL during matching on instances of the built-in (immutable) string classes, enabling other Python threads to run concurrently. It is also possible to force the regex module to release the GIL during matching by calling the matching methods with the keyword argument ``concurrent=True``. The behaviour is undefined if the string changes during matching, so use it *only* when it is guaranteed that that won't happen.
Unicode
-------
This module supports Unicode 13.0.0.
Full Unicode case-folding is supported.
Additional features
-------------------
The issue numbers relate to the Python bug tracker, except where listed as "Hg issue".
Added support for lookaround in conditional pattern (`Hg issue 163 <https://bitbucket.org/mrabarnett/mrab-regex/issues/163>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The test of a conditional pattern can now be a lookaround.
Examples:
.. sourcecode:: python
>>> regex.match(r'(?(?=\d)\d+|\w+)', '123abc')
<regex.Match object; span=(0, 3), match='123'>
>>> regex.match(r'(?(?=\d)\d+|\w+)', 'abc123')
<regex.Match object; span=(0, 6), match='abc123'>
This is not quite the same as putting a lookaround in the first branch of a pair of alternatives.
Examples:
.. sourcecode:: python
>>> print(regex.match(r'(?:(?=\d)\d+\b|\w+)', '123abc'))
<regex.Match object; span=(0, 6), match='123abc'>
>>> print(regex.match(r'(?(?=\d)\d+\b|\w+)', '123abc'))
None
In the first example, the lookaround matched, but the remainder of the first branch failed to match, and so the second branch was attempted, whereas in the second example, the lookaround matched, and the first branch failed to match, but the second branch was **not** attempted.
Added POSIX matching (leftmost longest) (`Hg issue 150 <https://bitbucket.org/mrabarnett/mrab-regex/issues/150>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The POSIX standard for regex is to return the leftmost longest match. This can be turned on using the ``POSIX`` flag (``(?p)``).
Examples:
.. sourcecode:: python
>>> # Normal matching.
>>> regex.search(r'Mr|Mrs', 'Mrs')
<regex.Match object; span=(0, 2), match='Mr'>
>>> regex.search(r'one(self)?(selfsufficient)?', 'oneselfsufficient')
<regex.Match object; span=(0, 7), match='oneself'>
>>> # POSIX matching.
>>> regex.search(r'(?p)Mr|Mrs', 'Mrs')
<regex.Match object; span=(0, 3), match='Mrs'>
>>> regex.search(r'(?p)one(self)?(selfsufficient)?', 'oneselfsufficient')
<regex.Match object; span=(0, 17), match='oneselfsufficient'>
Note that it will take longer to find matches because when it finds a match at a certain position, it won't return that immediately, but will keep looking to see if there's another longer match there.
Added ``(?(DEFINE)...)`` (`Hg issue 152 <https://bitbucket.org/mrabarnett/mrab-regex/issues/152>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If there's no group called "DEFINE", then ... will be ignored, but any group definitions within it will be available.
Examples:
.. sourcecode:: python
>>> regex.search(r'(?(DEFINE)(?P<quant>\d+)(?P<item>\w+))(?&quant) (?&item)', '5 elephants')
<regex.Match object; span=(0, 11), match='5 elephants'>
Added ``(*PRUNE)``, ``(*SKIP)`` and ``(*FAIL)`` (`Hg issue 153 <https://bitbucket.org/mrabarnett/mrab-regex/issues/153>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``(*PRUNE)`` discards the backtracking info up to that point. When used in an atomic group or a lookaround, it won't affect the enclosing pattern.
``(*SKIP)`` is similar to ``(*PRUNE)``, except that it also sets where in the text the next attempt to match will start. When used in an atomic group or a lookaround, it won't affect the enclosing pattern.
``(*FAIL)`` causes immediate backtracking. ``(*F)`` is a permitted abbreviation.
Added ``\K`` (`Hg issue 151 <https://bitbucket.org/mrabarnett/mrab-regex/issues/151>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Keeps the part of the entire match after the position where ``\K`` occurred; the part before it is discarded.
It does not affect what capture groups return.
Examples:
.. sourcecode:: python
>>> m = regex.search(r'(\w\w\K\w\w\w)', 'abcdef')
>>> m[0]
'cde'
>>> m[1]
'abcde'
>>>
>>> m = regex.search(r'(?r)(\w\w\K\w\w\w)', 'abcdef')
>>> m[0]
'bc'
>>> m[1]
'bcdef'
Added capture subscripting for ``expandf`` and ``subf``/``subfn`` (`Hg issue 133 <https://bitbucket.org/mrabarnett/mrab-regex/issues/133>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can now use subscripting to get the captures of a repeated capture group.
Examples:
.. sourcecode:: python
>>> m = regex.match(r"(\w)+", "abc")
>>> m.expandf("{1}")
'c'
>>> m.expandf("{1[0]} {1[1]} {1[2]}")
'a b c'
>>> m.expandf("{1[-1]} {1[-2]} {1[-3]}")
'c b a'
>>>
>>> m = regex.match(r"(?P<letter>\w)+", "abc")
>>> m.expandf("{letter}")
'c'
>>> m.expandf("{letter[0]} {letter[1]} {letter[2]}")
'a b c'
>>> m.expandf("{letter[-1]} {letter[-2]} {letter[-3]}")
'c b a'
Added support for referring to a group by number using ``(?P=...)``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is in addition to the existing ``\g<...>``.
Fixed the handling of locale-sensitive regexes.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``LOCALE`` flag is intended for legacy code and has limited support. You're still recommended to use Unicode instead.
Added partial matches (`Hg issue 102 <https://bitbucket.org/mrabarnett/mrab-regex/issues/102>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A partial match is one that matches up to the end of string, but that string has been truncated and you want to know whether a complete match could be possible if the string had not been truncated.
Partial matches are supported by ``match``, ``search``, ``fullmatch`` and ``finditer`` with the ``partial`` keyword argument.
Match objects have a ``partial`` attribute, which is ``True`` if it's a partial match.
For example, if you wanted a user to enter a 4-digit number and check it character by character as it was being entered:
.. sourcecode:: python
>>> pattern = regex.compile(r'\d{4}')
>>> # Initially, nothing has been entered:
>>> print(pattern.fullmatch('', partial=True))
<regex.Match object; span=(0, 0), match='', partial=True>
>>> # An empty string is OK, but it's only a partial match.
>>> # The user enters a letter:
>>> print(pattern.fullmatch('a', partial=True))
None
>>> # It'll never match.
>>> # The user deletes that and enters a digit:
>>> print(pattern.fullmatch('1', partial=True))
<regex.Match object; span=(0, 1), match='1', partial=True>
>>> # It matches this far, but it's only a partial match.
>>> # The user enters 2 more digits:
>>> print(pattern.fullmatch('123', partial=True))
<regex.Match object; span=(0, 3), match='123', partial=True>
>>> # It matches this far, but it's only a partial match.
>>> # The user enters another digit:
>>> print(pattern.fullmatch('1234', partial=True))
<regex.Match object; span=(0, 4), match='1234'>
>>> # It's a complete match.
>>> # If the user enters another digit:
>>> print(pattern.fullmatch('12345', partial=True))
None
>>> # It's no longer a match.
>>> # This is a partial match:
>>> pattern.match('123', partial=True).partial
True
>>> # This is a complete match:
>>> pattern.match('1233', partial=True).partial
False
``*`` operator not working correctly with sub() (`Hg issue 106 <https://bitbucket.org/mrabarnett/mrab-regex/issues/106>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sometimes it's not clear how zero-width matches should be handled. For example, should ``.*`` match 0 characters directly after matching >0 characters?
Examples:
.. sourcecode:: python
# Python 3.7 and later
>>> regex.sub('.*', 'x', 'test')
'xx'
>>> regex.sub('.*?', '|', 'test')
'|||||||||'
# Python 3.6 and earlier
>>> regex.sub('(?V0).*', 'x', 'test')
'x'
>>> regex.sub('(?V1).*', 'x', 'test')
'xx'
>>> regex.sub('(?V0).*?', '|', 'test')
'|t|e|s|t|'
>>> regex.sub('(?V1).*?', '|', 'test')
'|||||||||'
Added ``capturesdict`` (`Hg issue 86 <https://bitbucket.org/mrabarnett/mrab-regex/issues/86>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``capturesdict`` is a combination of ``groupdict`` and ``captures``:
``groupdict`` returns a dict of the named groups and the last capture of those groups.
``captures`` returns a list of all the captures of a group.
``capturesdict`` returns a dict of the named groups and lists of all the captures of those groups.
Examples:
.. sourcecode:: python
>>> m = regex.match(r"(?:(?P<word>\w+) (?P<digits>\d+)\n)+", "one 1\ntwo 2\nthree 3\n")
>>> m.groupdict()
{'word': 'three', 'digits': '3'}
>>> m.captures("word")
['one', 'two', 'three']
>>> m.captures("digits")
['1', '2', '3']
>>> m.capturesdict()
{'word': ['one', 'two', 'three'], 'digits': ['1', '2', '3']}
Allow duplicate names of groups (`Hg issue 87 <https://bitbucket.org/mrabarnett/mrab-regex/issues/87>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Group names can now be duplicated.
Examples:
.. sourcecode:: python
>>> # With optional groups:
>>>
>>> # Both groups capture, the second capture 'overwriting' the first.
>>> m = regex.match(r"(?P<item>\w+)? or (?P<item>\w+)?", "first or second")
>>> m.group("item")
'second'
>>> m.captures("item")
['first', 'second']
>>> # Only the second group captures.
>>> m = regex.match(r"(?P<item>\w+)? or (?P<item>\w+)?", " or second")
>>> m.group("item")
'second'
>>> m.captures("item")
['second']
>>> # Only the first group captures.
>>> m = regex.match(r"(?P<item>\w+)? or (?P<item>\w+)?", "first or ")
>>> m.group("item")
'first'
>>> m.captures("item")
['first']
>>>
>>> # With mandatory groups:
>>>
>>> # Both groups capture, the second capture 'overwriting' the first.
>>> m = regex.match(r"(?P<item>\w*) or (?P<item>\w*)?", "first or second")
>>> m.group("item")
'second'
>>> m.captures("item")
['first', 'second']
>>> # Again, both groups capture, the second capture 'overwriting' the first.
>>> m = regex.match(r"(?P<item>\w*) or (?P<item>\w*)", " or second")
>>> m.group("item")
'second'
>>> m.captures("item")
['', 'second']
>>> # And yet again, both groups capture, the second capture 'overwriting' the first.
>>> m = regex.match(r"(?P<item>\w*) or (?P<item>\w*)", "first or ")
>>> m.group("item")
''
>>> m.captures("item")
['first', '']
Added ``fullmatch`` (`issue #16203 <https://bugs.python.org/issue16203>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``fullmatch`` behaves like ``match``, except that it must match all of the string.
Examples:
.. sourcecode:: python
>>> print(regex.fullmatch(r"abc", "abc").span())
(0, 3)
>>> print(regex.fullmatch(r"abc", "abcx"))
None
>>> print(regex.fullmatch(r"abc", "abcx", endpos=3).span())
(0, 3)
>>> print(regex.fullmatch(r"abc", "xabcy", pos=1, endpos=4).span())
(1, 4)
>>>
>>> regex.match(r"a.*?", "abcd").group(0)
'a'
>>> regex.fullmatch(r"a.*?", "abcd").group(0)
'abcd'
Added ``subf`` and ``subfn``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``subf`` and ``subfn`` are alternatives to ``sub`` and ``subn`` respectively. When passed a replacement string, they treat it as a format string.
Examples:
.. sourcecode:: python
>>> regex.subf(r"(\w+) (\w+)", "{0} => {2} {1}", "foo bar")
'foo bar => bar foo'
>>> regex.subf(r"(?P<word1>\w+) (?P<word2>\w+)", "{word2} {word1}", "foo bar")
'bar foo'
Added ``expandf`` to match object
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``expandf`` is an alternative to ``expand``. When passed a replacement string, it treats it as a format string.
Examples:
.. sourcecode:: python
>>> m = regex.match(r"(\w+) (\w+)", "foo bar")
>>> m.expandf("{0} => {2} {1}")
'foo bar => bar foo'
>>>
>>> m = regex.match(r"(?P<word1>\w+) (?P<word2>\w+)", "foo bar")
>>> m.expandf("{word2} {word1}")
'bar foo'
Detach searched string
^^^^^^^^^^^^^^^^^^^^^^
A match object contains a reference to the string that was searched, via its ``string`` attribute. The ``detach_string`` method will 'detach' that string, making it available for garbage collection, which might save valuable memory if that string is very large.
Example:
.. sourcecode:: python
>>> m = regex.search(r"\w+", "Hello world")
>>> print(m.group())
Hello
>>> print(m.string)
Hello world
>>> m.detach_string()
>>> print(m.group())
Hello
>>> print(m.string)
None
Recursive patterns (`Hg issue 27 <https://bitbucket.org/mrabarnett/mrab-regex/issues/27>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Recursive and repeated patterns are supported.
``(?R)`` or ``(?0)`` tries to match the entire regex recursively. ``(?1)``, ``(?2)``, etc, try to match the relevant capture group.
``(?&name)`` tries to match the named capture group.
Examples:
.. sourcecode:: python
>>> regex.match(r"(Tarzan|Jane) loves (?1)", "Tarzan loves Jane").groups()
('Tarzan',)
>>> regex.match(r"(Tarzan|Jane) loves (?1)", "Jane loves Tarzan").groups()
('Jane',)
>>> m = regex.search(r"(\w)(?:(?R)|(\w?))\1", "kayak")
>>> m.group(0, 1, 2)
('kayak', 'k', None)
The first two examples show how the subpattern within the capture group is reused, but is *not* itself a capture group. In other words, ``"(Tarzan|Jane) loves (?1)"`` is equivalent to ``"(Tarzan|Jane) loves (?:Tarzan|Jane)"``.
It's possible to backtrack into a recursed or repeated group.
You can't call a group if there is more than one group with that group name or group number (``"ambiguous group reference"``).
The alternative forms ``(?P>name)`` and ``(?P&name)`` are also supported.
Full Unicode case-folding is supported.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In version 1 behaviour, the regex module uses full case-folding when performing case-insensitive matches in Unicode.
Examples (in Python 3):
.. sourcecode:: python
>>> regex.match(r"(?iV1)strasse", "stra\N{LATIN SMALL LETTER SHARP S}e").span()
(0, 6)
>>> regex.match(r"(?iV1)stra\N{LATIN SMALL LETTER SHARP S}e", "STRASSE").span()
(0, 7)
In version 0 behaviour, it uses simple case-folding for backward compatibility with the re module.
Approximate "fuzzy" matching (`Hg issue 12 <https://bitbucket.org/mrabarnett/mrab-regex/issues/12>`_, `Hg issue 41 <https://bitbucket.org/mrabarnett/mrab-regex/issues/41>`_, `Hg issue 109 <https://bitbucket.org/mrabarnett/mrab-regex/issues/109>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Regex usually attempts an exact match, but sometimes an approximate, or "fuzzy", match is needed, for those cases where the text being searched may contain errors in the form of inserted, deleted or substituted characters.
A fuzzy regex specifies which types of errors are permitted, and, optionally, either the minimum and maximum or only the maximum permitted number of each type. (You cannot specify only a minimum.)
The 3 types of error are:
* Insertion, indicated by "i"
* Deletion, indicated by "d"
* Substitution, indicated by "s"
In addition, "e" indicates any type of error.
The fuzziness of a regex item is specified between "{" and "}" after the item.
Examples:
* ``foo`` match "foo" exactly
* ``(?:foo){i}`` match "foo", permitting insertions
* ``(?:foo){d}`` match "foo", permitting deletions
* ``(?:foo){s}`` match "foo", permitting substitutions
* ``(?:foo){i,s}`` match "foo", permitting insertions and substitutions
* ``(?:foo){e}`` match "foo", permitting errors
If a certain type of error is specified, then any type not specified will **not** be permitted.
In the following examples I'll omit the item and write only the fuzziness:
* ``{d<=3}`` permit at most 3 deletions, but no other types
* ``{i<=1,s<=2}`` permit at most 1 insertion and at most 2 substitutions, but no deletions
* ``{1<=e<=3}`` permit at least 1 and at most 3 errors
* ``{i<=2,d<=2,e<=3}`` permit at most 2 insertions, at most 2 deletions, at most 3 errors in total, but no substitutions
It's also possible to state the costs of each type of error and the maximum permitted total cost.
Examples:
* ``{2i+2d+1s<=4}`` each insertion costs 2, each deletion costs 2, each substitution costs 1, the total cost must not exceed 4
* ``{i<=1,d<=1,s<=1,2i+2d+1s<=4}`` at most 1 insertion, at most 1 deletion, at most 1 substitution; each insertion costs 2, each deletion costs 2, each substitution costs 1, the total cost must not exceed 4
You can also use "<" instead of "<=" if you want an exclusive minimum or maximum.
You can add a test to perform on a character that's substituted or inserted.
Examples:
* ``{s<=2:[a-z]}`` at most 2 substitutions, which must be in the character set ``[a-z]``.
* ``{s<=2,i<=3:\d}`` at most 2 substitutions, at most 3 insertions, which must be digits.
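A quick runnable sketch of a substitution test (assuming the ``regex`` module is installed):

```python
import regex

# At most 1 substitution, and the substituted character must be in [a-z]
m = regex.fullmatch(r"(?:cat){s<=1:[a-z]}", "cot")
assert m is not None
assert m.fuzzy_counts == (1, 0, 0)  # 1 substitution, 0 insertions, 0 deletions

# 'A' is outside [a-z], so that substitution is rejected
assert regex.fullmatch(r"(?:cat){s<=1:[a-z]}", "cAt") is None
```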
By default, fuzzy matching searches for the first match that meets the given constraints. The ``ENHANCEMATCH`` flag will cause it to attempt to improve the fit (i.e. reduce the number of errors) of the match that it has found.
The ``BESTMATCH`` flag will make it search for the best match instead.
Further examples to note:
* ``regex.search("(dog){e}", "cat and dog")[1]`` returns ``"cat"`` because that matches ``"dog"`` with 3 errors (an unlimited number of errors is permitted).
* ``regex.search("(dog){e<=1}", "cat and dog")[1]`` returns ``" dog"`` (with a leading space) because that matches ``"dog"`` with 1 error, which is within the limit.
* ``regex.search("(?e)(dog){e<=1}", "cat and dog")[1]`` returns ``"dog"`` (without a leading space) because the fuzzy search matches ``" dog"`` with 1 error, which is within the limit, and then the ``(?e)`` makes it attempt a better fit.
In the first two examples there are perfect matches later in the string, but in neither case is it the first possible match.
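Those three results can be checked directly:

```python
import regex

# Unlimited errors: the first candidate, "cat", wins with 3 errors
assert regex.search("(dog){e}", "cat and dog")[1] == "cat"
# At most 1 error: " dog" (leading space = 1 insertion) is found first
assert regex.search("(dog){e<=1}", "cat and dog")[1] == " dog"
# (?e) improves the fit of the found match to an exact "dog"
assert regex.search("(?e)(dog){e<=1}", "cat and dog")[1] == "dog"
```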
The match object has an attribute ``fuzzy_counts`` which gives the total number of substitutions, insertions and deletions.
.. sourcecode:: python
>>> # A 'raw' fuzzy match:
>>> regex.fullmatch(r"(?:cats|cat){e<=1}", "cat").fuzzy_counts
(0, 0, 1)
>>> # 0 substitutions, 0 insertions, 1 deletion.
>>> # A better match might be possible if the ENHANCEMATCH flag were used:
>>> regex.fullmatch(r"(?e)(?:cats|cat){e<=1}", "cat").fuzzy_counts
(0, 0, 0)
>>> # 0 substitutions, 0 insertions, 0 deletions.
The match object also has an attribute ``fuzzy_changes`` which gives a tuple of the positions of the substitutions, insertions and deletions.
.. sourcecode:: python
>>> m = regex.search('(fuu){i<=2,d<=2,e<=5}', 'anaconda foo bar')
>>> m
<regex.Match object; span=(7, 10), match='a f', fuzzy_counts=(0, 2, 2)>
>>> m.fuzzy_changes
([], [7, 8], [10, 11])
What this means is that if the matched part of the string had been:
.. sourcecode:: python
'anacondfuuoo bar'
it would've been an exact match.
However, there were insertions at positions 7 and 8:
.. sourcecode:: python
'anaconda fuuoo bar'
^^
and deletions at positions 10 and 11:
.. sourcecode:: python
'anaconda f~~oo bar'
^^
So the actual string was:
.. sourcecode:: python
'anaconda foo bar'
Named lists (`Hg issue 11 <https://bitbucket.org/mrabarnett/mrab-regex/issues/11>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``\L<name>``
There are occasions where you may want to include a list (actually, a set) of options in a regex.
One way is to build the pattern like this:
.. sourcecode:: python
>>> p = regex.compile(r"first|second|third|fourth|fifth")
but if the list is large, parsing the resulting regex can take considerable time, and care must also be taken that the strings are properly escaped and properly ordered, for example, "cats" before "cat".
The new alternative is to use a named list:
.. sourcecode:: python
>>> option_set = ["first", "second", "third", "fourth", "fifth"]
>>> p = regex.compile(r"\L<options>", options=option_set)
The order of the items is irrelevant; they are treated as a set. The named lists are available as the ``.named_lists`` attribute of the pattern object:
.. sourcecode:: python
>>> print(p.named_lists)
# Python 3
{'options': frozenset({'fifth', 'first', 'fourth', 'second', 'third'})}
# Python 2
{'options': frozenset(['fifth', 'fourth', 'second', 'third', 'first'])}
If there are any unused keyword arguments, ``ValueError`` will be raised unless you tell it otherwise:
.. sourcecode:: python
>>> option_set = ["first", "second", "third", "fourth", "fifth"]
>>> p = regex.compile(r"\L<options>", options=option_set, other_options=[])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python37\lib\site-packages\regex\regex.py", line 348, in compile
return _compile(pattern, flags, ignore_unused, kwargs)
File "C:\Python37\lib\site-packages\regex\regex.py", line 585, in _compile
raise ValueError('unused keyword argument {!a}'.format(any_one))
ValueError: unused keyword argument 'other_options'
>>> p = regex.compile(r"\L<options>", options=option_set, other_options=[], ignore_unused=True)
>>>
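Named-list items are matched as literal strings, so regex metacharacters in them need no escaping. A small sketch (the ``ops`` keyword name here is just illustrative):

```python
import regex

# "+" and "*" are taken literally inside a named list
p = regex.compile(r"\L<ops>", ops=["a+b", "c*d"])
assert p.fullmatch("a+b") is not None
assert p.fullmatch("ab") is None  # "ab" is not one of the listed items
```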
Start and end of word
^^^^^^^^^^^^^^^^^^^^^
``\m`` matches at the start of a word.
``\M`` matches at the end of a word.
Compare with ``\b``, which matches at the start or end of a word.
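A quick illustration:

```python
import regex

# \m...\M brackets exactly one word
assert regex.findall(r"\m\w+\M", "one two three") == ["one", "two", "three"]
# \b gives the same result here, since it matches at both ends of a word
assert regex.findall(r"\b\w+\b", "one two three") == ["one", "two", "three"]
```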
Unicode line separators
^^^^^^^^^^^^^^^^^^^^^^^
Normally the only line separator is ``\n`` (``\x0A``), but if the ``WORD`` flag is turned on then the line separators are ``\x0D\x0A``, ``\x0A``, ``\x0B``, ``\x0C`` and ``\x0D``, plus ``\x85``, ``\u2028`` and ``\u2029`` when working with Unicode.
This affects the regex dot ``"."``, which, with the ``DOTALL`` flag turned off, matches any character except a line separator. It also affects the line anchors ``^`` and ``$`` (in multiline mode).
Set operators
^^^^^^^^^^^^^
**Version 1 behaviour only**
Set operators have been added, and a set ``[...]`` can include nested sets.
The operators, in order of increasing precedence, are:
* ``||`` for union ("x||y" means "x or y")
* ``~~`` (double tilde) for symmetric difference ("x~~y" means "x or y, but not both")
* ``&&`` for intersection ("x&&y" means "x and y")
* ``--`` (double dash) for difference ("x--y" means "x but not y")
Implicit union, i.e. simple juxtaposition as in ``[ab]``, has the highest precedence. Thus, ``[ab&&cd]`` is the same as ``[[a||b]&&[c||d]]``.
Examples:
* ``[ab]`` # Set containing 'a' and 'b'
* ``[a-z]`` # Set containing 'a' .. 'z'
* ``[[a-z]--[qw]]`` # Set containing 'a' .. 'z', but not 'q' or 'w'
* ``[a-z--qw]`` # Same as above
* ``[\p{L}--QW]`` # Set containing all letters except 'Q' and 'W'
* ``[\p{N}--[0-9]]`` # Set containing all numbers except '0' .. '9'
* ``[\p{ASCII}&&\p{Letter}]`` # Set containing all characters which are both ASCII and letters
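The operators can be tried out directly; note that version 1 behaviour must be enabled:

```python
import regex

# Difference: the consonants of "regex" are [a-z] minus the vowels
assert regex.findall(r"(?V1)[[a-z]--[aeiou]]", "regex") == ["r", "g", "x"]
# Intersection: only ASCII letters survive; U+00E9 is a letter but not ASCII
assert regex.findall(r"(?V1)[\p{ASCII}&&\p{Letter}]+", "abc\u00e9 123") == ["abc"]
```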
regex.escape (`issue #2650 <https://bugs.python.org/issue2650>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
regex.escape has an additional keyword parameter ``special_only``. When True, only 'special' regex characters, such as '?', are escaped.
Examples:
.. sourcecode:: python
>>> regex.escape("foo!?", special_only=False)
'foo\\!\\?'
>>> regex.escape("foo!?", special_only=True)
'foo!\\?'
regex.escape (`Hg issue 249 <https://bitbucket.org/mrabarnett/mrab-regex/issues/249>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
regex.escape has an additional keyword parameter ``literal_spaces``. When True, spaces are not escaped.
Examples:
.. sourcecode:: python
>>> regex.escape("foo bar!?", literal_spaces=False)
'foo\\ bar!\\?'
>>> regex.escape("foo bar!?", literal_spaces=True)
'foo bar!\\?'
Repeated captures (`issue #7132 <https://bugs.python.org/issue7132>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A match object has additional methods which return information on all the successful matches of a repeated capture group. These methods are:
* ``matchobject.captures([group1, ...])``
* Returns a list of the strings matched in a group or groups. Compare with ``matchobject.group([group1, ...])``.
* ``matchobject.starts([group])``
* Returns a list of the start positions. Compare with ``matchobject.start([group])``.
* ``matchobject.ends([group])``
* Returns a list of the end positions. Compare with ``matchobject.end([group])``.
* ``matchobject.spans([group])``
* Returns a list of the spans. Compare with ``matchobject.span([group])``.
Examples:
.. sourcecode:: python
>>> m = regex.search(r"(\w{3})+", "123456789")
>>> m.group(1)
'789'
>>> m.captures(1)
['123', '456', '789']
>>> m.start(1)
6
>>> m.starts(1)
[0, 3, 6]
>>> m.end(1)
9
>>> m.ends(1)
[3, 6, 9]
>>> m.span(1)
(6, 9)
>>> m.spans(1)
[(0, 3), (3, 6), (6, 9)]
Atomic grouping (`issue #433030 <https://bugs.python.org/issue433030>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``(?>...)``
If the following pattern subsequently fails, then the subpattern as a whole will fail.
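The classic demonstration (a minimal sketch):

```python
import regex

# Once (?>a+) has consumed "aaa", backtracking into it is forbidden,
# so the trailing "ab" can never match
assert regex.match(r"(?>a+)ab", "aaab") is None
# The non-atomic equivalent backtracks to "aa" and succeeds
assert regex.match(r"(?:a+)ab", "aaab") is not None
```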
Possessive quantifiers.
^^^^^^^^^^^^^^^^^^^^^^^
``(?:...)?+`` ; ``(?:...)*+`` ; ``(?:...)++`` ; ``(?:...){min,max}+``
The subpattern is matched up to 'max' times. If the following pattern subsequently fails, then all of the repeated subpatterns will fail as a whole. For example, ``(?:...)++`` is equivalent to ``(?>(?:...)+)``.
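For example:

```python
import regex

# a++ never gives back characters, so "ab" at the end can't match
assert regex.match(r"a++ab", "aaab") is None
# The ordinary greedy quantifier backtracks and succeeds
assert regex.match(r"a+ab", "aaab") is not None
```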
Scoped flags (`issue #433028 <https://bugs.python.org/issue433028>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``(?flags-flags:...)``
The flags will apply only to the subpattern. Flags can be turned on or off.
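For example (a quick sketch):

```python
import regex

# IGNORECASE is scoped to the group; the rest stays case-sensitive
assert regex.match(r"(?i:abc)def", "ABCdef") is not None
assert regex.match(r"(?i:abc)def", "ABCDEF") is None
```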
Definition of 'word' character (`issue #1693050 <https://bugs.python.org/issue1693050>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The definition of a 'word' character has been expanded for Unicode. It now conforms to the Unicode specification at ``http://www.unicode.org/reports/tr29/``.
Variable-length lookbehind
^^^^^^^^^^^^^^^^^^^^^^^^^^
A lookbehind can match a variable-length string.
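For example, a lookbehind containing ``\s*`` has no fixed length (a sketch; the sample text is illustrative):

```python
import regex

# "USD\s*" may be 3 or more characters long
m = regex.search(r"(?<=USD\s*)\d+", "Price: USD 100")
assert m is not None
assert m.group() == "100"
```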
Flags argument for regex.split, regex.sub and regex.subn (`issue #3482 <https://bugs.python.org/issue3482>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``regex.split``, ``regex.sub`` and ``regex.subn`` support a 'flags' argument.
Pos and endpos arguments for regex.sub and regex.subn
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``regex.sub`` and ``regex.subn`` support 'pos' and 'endpos' arguments.
'Overlapped' argument for regex.findall and regex.finditer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``regex.findall`` and ``regex.finditer`` support an 'overlapped' flag which permits overlapped matches.
Splititer
^^^^^^^^^
``regex.splititer`` has been added. It's a generator equivalent of ``regex.split``.
Subscripting for groups
^^^^^^^^^^^^^^^^^^^^^^^
A match object accepts access to the captured groups via subscripting and slicing:
.. sourcecode:: python
>>> m = regex.search(r"(?P<before>.*?)(?P<num>\d+)(?P<after>.*)", "pqr123stu")
>>> print(m["before"])
pqr
>>> print(len(m))
4
>>> print(m[:])
('pqr123stu', 'pqr', '123', 'stu')
Named groups
^^^^^^^^^^^^
Groups can be named with ``(?<name>...)`` as well as the current ``(?P<name>...)``.
Group references
^^^^^^^^^^^^^^^^
Groups can be referenced within a pattern with ``\g<name>``. This also allows there to be more than 99 groups.
Named characters
^^^^^^^^^^^^^^^^
``\N{name}``
Named characters are supported. (Note: only those known by Python's Unicode database are supported.)
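For example:

```python
import regex

# Standard Unicode character names from Python's Unicode database
assert regex.fullmatch(r"\N{GREEK SMALL LETTER ALPHA}", "\u03b1") is not None
```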
Unicode codepoint properties, including scripts and blocks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``\p{property=value}``; ``\P{property=value}``; ``\p{value}`` ; ``\P{value}``
Many Unicode properties are supported, including blocks and scripts. ``\p{property=value}`` or ``\p{property:value}`` matches a character whose property ``property`` has value ``value``. The inverse of ``\p{property=value}`` is ``\P{property=value}`` or ``\p{^property=value}``.
If the short form ``\p{value}`` is used, the properties are checked in the order: ``General_Category``, ``Script``, ``Block``, binary property:
* ``Latin``, the 'Latin' script (``Script=Latin``).
* ``BasicLatin``, the 'BasicLatin' block (``Block=BasicLatin``).
* ``Alphabetic``, the 'Alphabetic' binary property (``Alphabetic=Yes``).
A short form starting with ``Is`` indicates a script or binary property:
* ``IsLatin``, the 'Latin' script (``Script=Latin``).
* ``IsAlphabetic``, the 'Alphabetic' binary property (``Alphabetic=Yes``).
A short form starting with ``In`` indicates a block property:
* ``InBasicLatin``, the 'BasicLatin' block (``Block=BasicLatin``).
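The short forms above can be sketched as follows:

```python
import regex

assert regex.fullmatch(r"\p{IsGreek}+", "\u03b1\u03b2\u03b3") is not None  # script
assert regex.fullmatch(r"\p{InBasicLatin}+", "abc") is not None            # block
assert regex.match(r"\P{N}", "7") is None                                  # inverse of \p{N}
```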
POSIX character classes
^^^^^^^^^^^^^^^^^^^^^^^
``[[:alpha:]]``; ``[[:^alpha:]]``
POSIX character classes are supported. These are normally treated as an alternative form of ``\p{...}``.
The exceptions are ``alnum``, ``digit``, ``punct`` and ``xdigit``, whose definitions are different from those of Unicode.
``[[:alnum:]]`` is equivalent to ``\p{posix_alnum}``.
``[[:digit:]]`` is equivalent to ``\p{posix_digit}``.
``[[:punct:]]`` is equivalent to ``\p{posix_punct}``.
``[[:xdigit:]]`` is equivalent to ``\p{posix_xdigit}``.
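For example:

```python
import regex

assert regex.findall(r"[[:digit:]]+", "room 101, floor 7") == ["101", "7"]
# The negated form matches anything that is not alphabetic
assert regex.fullmatch(r"[[:^alpha:]]+", "123 456") is not None
```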
Search anchor
^^^^^^^^^^^^^
``\G``
A search anchor has been added. It matches at the position where each search started/continued and can be used for contiguous matches or in negative variable-length lookbehinds to limit how far back the lookbehind goes:
.. sourcecode:: python
>>> regex.findall(r"\w{2}", "abcd ef")
['ab', 'cd', 'ef']
>>> regex.findall(r"\G\w{2}", "abcd ef")
['ab', 'cd']
* The search starts at position 0 and matches 2 letters 'ab'.
* The search continues at position 2 and matches 2 letters 'cd'.
* The search continues at position 4 and fails to match any letters.
* The anchor stops the search start position from being advanced, so there are no more results.
Reverse searching
^^^^^^^^^^^^^^^^^
Searches can now work backwards:
.. sourcecode:: python
>>> regex.findall(r".", "abc")
['a', 'b', 'c']
>>> regex.findall(r"(?r).", "abc")
['c', 'b', 'a']
Note: the result of a reverse search is not necessarily the reverse of a forward search:
.. sourcecode:: python
>>> regex.findall(r"..", "abcde")
['ab', 'cd']
>>> regex.findall(r"(?r)..", "abcde")
['de', 'bc']
Matching a single grapheme
^^^^^^^^^^^^^^^^^^^^^^^^^^
``\X``
The grapheme matcher is supported. It now conforms to the Unicode specification at ``http://www.unicode.org/reports/tr29/``.
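For example:

```python
import regex

# 'e' followed by a combining acute accent is one grapheme but two codepoints
assert regex.findall(r"\X", "e\u0301a") == ["e\u0301", "a"]
```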
Branch reset
^^^^^^^^^^^^
``(?|...|...)``
Capture group numbers will be reused across the alternatives, but groups with different names will have different group numbers.
Examples:
.. sourcecode:: python
>>> regex.match(r"(?|(first)|(second))", "first").groups()
('first',)
>>> regex.match(r"(?|(first)|(second))", "second").groups()
('second',)
Note that there is only one group.
Default Unicode word boundary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``WORD`` flag changes the definition of a 'word boundary' to that of a default Unicode word boundary. This applies to ``\b`` and ``\B``.
Timeout (Python 3)
^^^^^^^^^^^^^^^^^^
The matching methods and functions support timeouts. The timeout (in seconds) applies to the entire operation:
.. sourcecode:: python
>>> from time import sleep
>>>
>>> def fast_replace(m):
... return 'X'
...
>>> def slow_replace(m):
... sleep(0.5)
... return 'X'
...
>>> regex.sub(r'[a-z]', fast_replace, 'abcde', timeout=2)
'XXXXX'
>>> regex.sub(r'[a-z]', slow_replace, 'abcde', timeout=2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python37\lib\site-packages\regex\regex.py", line 276, in sub
endpos, concurrent, timeout)
TimeoutError: regex timed out
Introduction
------------
This regex implementation is backwards-compatible with the standard 're' module, but offers additional functionality.
Note
----
The re module's behaviour with zero-width matches changed in Python 3.7, and this module will follow that behaviour when compiled for Python 3.7.
Old vs new behaviour
--------------------
In order to be compatible with the re module, this module has 2 behaviours:
* **Version 0** behaviour (old behaviour, compatible with the re module):
Please note that the re module's behaviour may change over time, and I'll endeavour to match that behaviour in version 0.
* Indicated by the ``VERSION0`` or ``V0`` flag, or ``(?V0)`` in the pattern.
* Zero-width matches are not handled correctly in the re module before Python 3.7. The behaviour in those earlier versions is:
* ``.split`` won't split a string at a zero-width match.
* ``.sub`` will advance by one character after a zero-width match.
* Inline flags apply to the entire pattern, and they can't be turned off.
* Only simple sets are supported.
* Case-insensitive matches in Unicode use simple case-folding by default.
* **Version 1** behaviour (new behaviour, possibly different from the re module):
* Indicated by the ``VERSION1`` or ``V1`` flag, or ``(?V1)`` in the pattern.
* Zero-width matches are handled correctly.
* Inline flags apply to the end of the group or pattern, and they can be turned off.
* Nested sets and set operations are supported.
* Case-insensitive matches in Unicode use full case-folding by default.
If no version is specified, the regex module will default to ``regex.DEFAULT_VERSION``.
Case-insensitive matches in Unicode
-----------------------------------
The regex module supports both simple and full case-folding for case-insensitive matches in Unicode. Use of full case-folding can be turned on using the ``FULLCASE`` or ``F`` flag, or ``(?f)`` in the pattern. Please note that this flag affects how the ``IGNORECASE`` flag works; the ``FULLCASE`` flag itself does not turn on case-insensitive matching.
In the version 0 behaviour, the flag is off by default.
In the version 1 behaviour, the flag is on by default.
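The difference can be seen with the sharp s example from the case-folding section:

```python
import regex

sharp_s = "stra\N{LATIN SMALL LETTER SHARP S}e"  # "straße"
# Version 1: full case-folding, so "ss" matches the sharp s
assert regex.fullmatch(r"(?iV1)strasse", sharp_s) is not None
# Version 0: simple case-folding, so it does not
assert regex.fullmatch(r"(?iV0)strasse", sharp_s) is None
```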
Nested sets and set operations
------------------------------
It's not possible to support both simple sets, as used in the re module, and nested sets at the same time because of a difference in the meaning of an unescaped ``"["`` in a set.
For example, the pattern ``[[a-z]--[aeiou]]`` is treated in the version 0 behaviour (simple sets, compatible with the re module) as:
* Set containing "[" and the letters "a" to "z"
* Literal "--"
* Set containing letters "a", "e", "i", "o", "u"
* Literal "]"
but in the version 1 behaviour (nested sets, enhanced behaviour) as:
* Set which is:
* Set containing the letters "a" to "z"
* but excluding:
* Set containing the letters "a", "e", "i", "o", "u"
Version 0 behaviour: only simple sets are supported.
Version 1 behaviour: nested sets and set operations are supported.
Flags
-----
There are 2 kinds of flag: scoped and global. Scoped flags can apply to only part of a pattern and can be turned on or off; global flags apply to the entire pattern and can only be turned on.
The scoped flags are: ``FULLCASE``, ``IGNORECASE``, ``MULTILINE``, ``DOTALL``, ``VERBOSE``, ``WORD``.
The global flags are: ``ASCII``, ``BESTMATCH``, ``ENHANCEMATCH``, ``LOCALE``, ``POSIX``, ``REVERSE``, ``UNICODE``, ``VERSION0``, ``VERSION1``.
If neither the ``ASCII``, ``LOCALE`` nor ``UNICODE`` flag is specified, it will default to ``UNICODE`` if the regex pattern is a Unicode string and ``ASCII`` if it's a bytestring.
The ``ENHANCEMATCH`` flag makes fuzzy matching attempt to improve the fit of the next match that it finds.
The ``BESTMATCH`` flag makes fuzzy matching search for the best match instead of the next match.
Notes on named capture groups
-----------------------------
All capture groups have a group number, starting from 1.
Groups with the same group name will have the same group number, and groups with a different group name will have a different group number.
The same name can be used by more than one group, with later captures 'overwriting' earlier captures. All of the captures of the group will be available from the ``captures`` method of the match object.
Group numbers will be reused across different branches of a branch reset, eg. ``(?|(first)|(second))`` has only group 1. If capture groups have different group names then they will, of course, have different group numbers, eg. ``(?|(?P<foo>first)|(?P<bar>second))`` has group 1 ("foo") and group 2 ("bar").
In the regex ``(\s+)(?|(?P<foo>[A-Z]+)|(\w+) (?P<foo>[0-9]+))`` there are 2 groups:
* ``(\s+)`` is group 1.
* ``(?P<foo>[A-Z]+)`` is group 2, also called "foo".
* ``(\w+)`` is group 2 because of the branch reset.
* ``(?P<foo>[0-9]+)`` is group 2 because it's called "foo".
If you want to prevent ``(\w+)`` from being group 2, you need to name it (different name, different group number).
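For example:

```python
import regex

m = regex.match(r"(\s+)(?|(?P<foo>[A-Z]+)|(\w+) (?P<foo>[0-9]+))", " abc 123")
assert m is not None
assert m.group(1) == " "
assert m.group("foo") == "123"  # "foo" is group 2 in both branches
assert m.group(2) == "123"
```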
Multithreading
--------------
The regex module releases the GIL during matching on instances of the built-in (immutable) string classes, enabling other Python threads to run concurrently. It is also possible to force the regex module to release the GIL during matching by calling the matching methods with the keyword argument ``concurrent=True``. The behaviour is undefined if the string changes during matching, so use it *only* when it is guaranteed that that won't happen.
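For example (the result is the same as an ordinary call; only the GIL handling differs):

```python
import regex

p = regex.compile(r"\w+")
# concurrent=True asks the module to release the GIL during this call
assert p.findall("one two three", concurrent=True) == ["one", "two", "three"]
```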
Unicode
-------
This module supports Unicode 13.0.0.
Full Unicode case-folding is supported.
Additional features
-------------------
The issue numbers relate to the Python bug tracker, except where listed as "Hg issue".
Added support for lookaround in conditional pattern (`Hg issue 163 <https://bitbucket.org/mrabarnett/mrab-regex/issues/163>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The test of a conditional pattern can now be a lookaround.
Examples:
.. sourcecode:: python
>>> regex.match(r'(?(?=\d)\d+|\w+)', '123abc')
<regex.Match object; span=(0, 3), match='123'>
>>> regex.match(r'(?(?=\d)\d+|\w+)', 'abc123')
<regex.Match object; span=(0, 6), match='abc123'>
This is not quite the same as putting a lookaround in the first branch of a pair of alternatives.
Examples:
.. sourcecode:: python
>>> print(regex.match(r'(?:(?=\d)\d+\b|\w+)', '123abc'))
<regex.Match object; span=(0, 6), match='123abc'>
>>> print(regex.match(r'(?(?=\d)\d+\b|\w+)', '123abc'))
None
In the first example, the lookaround matched, but the remainder of the first branch failed to match, and so the second branch was attempted, whereas in the second example, the lookaround matched, and the first branch failed to match, but the second branch was **not** attempted.
Added POSIX matching (leftmost longest) (`Hg issue 150 <https://bitbucket.org/mrabarnett/mrab-regex/issues/150>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The POSIX standard for regex is to return the leftmost longest match. This can be turned on using the ``POSIX`` flag (``(?p)``).
Examples:
.. sourcecode:: python
>>> # Normal matching.
>>> regex.search(r'Mr|Mrs', 'Mrs')
<regex.Match object; span=(0, 2), match='Mr'>
>>> regex.search(r'one(self)?(selfsufficient)?', 'oneselfsufficient')
<regex.Match object; span=(0, 7), match='oneself'>
>>> # POSIX matching.
>>> regex.search(r'(?p)Mr|Mrs', 'Mrs')
<regex.Match object; span=(0, 3), match='Mrs'>
>>> regex.search(r'(?p)one(self)?(selfsufficient)?', 'oneselfsufficient')
<regex.Match object; span=(0, 17), match='oneselfsufficient'>
Note that it will take longer to find matches because when it finds a match at a certain position, it won't return that immediately, but will keep looking to see if there's another longer match there.
Added ``(?(DEFINE)...)`` (`Hg issue 152 <https://bitbucket.org/mrabarnett/mrab-regex/issues/152>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As there's no group called "DEFINE", the ``...`` will never be matched directly, but any group definitions within it will be available for use elsewhere in the pattern.
Examples:
.. sourcecode:: python
>>> regex.search(r'(?(DEFINE)(?P<quant>\d+)(?P<item>\w+))(?&quant) (?&item)', '5 elephants')
<regex.Match object; span=(0, 11), match='5 elephants'>
Added ``(*PRUNE)``, ``(*SKIP)`` and ``(*FAIL)`` (`Hg issue 153 <https://bitbucket.org/mrabarnett/mrab-regex/issues/153>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``(*PRUNE)`` discards the backtracking info up to that point. When used in an atomic group or a lookaround, it won't affect the enclosing pattern.
``(*SKIP)`` is similar to ``(*PRUNE)``, except that it also sets where in the text the next attempt to match will start. When used in an atomic group or a lookaround, it won't affect the enclosing pattern.
``(*FAIL)`` causes immediate backtracking. ``(*F)`` is a permitted abbreviation.
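A small sketch of the two verbs (the ``(*SKIP)(*FAIL)`` idiom excludes text the first branch consumed):

```python
import regex

# (*FAIL) forces immediate backtracking, so "a" is rejected here
assert regex.search(r"a(*FAIL)|b", "ab").group() == "b"
# Exclude "cat" from the \w+ matches: (*SKIP) moves the next start past it
assert regex.findall(r"cat(*SKIP)(*FAIL)|\w+", "cat dog") == ["dog"]
```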
Added ``\K`` (`Hg issue 151 <https://bitbucket.org/mrabarnett/mrab-regex/issues/151>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Keeps the part of the entire match after the position where ``\K`` occurred; the part before it is discarded.
It does not affect what capture groups return.
Examples:
.. sourcecode:: python
>>> m = regex.search(r'(\w\w\K\w\w\w)', 'abcdef')
>>> m[0]
'cde'
>>> m[1]
'abcde'
>>>
>>> m = regex.search(r'(?r)(\w\w\K\w\w\w)', 'abcdef')
>>> m[0]
'bc'
>>> m[1]
'bcdef'
Added capture subscripting for ``expandf`` and ``subf``/``subfn`` (`Hg issue 133 <https://bitbucket.org/mrabarnett/mrab-regex/issues/133>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can now use subscripting to get the captures of a repeated capture group.
Examples:
.. sourcecode:: python
>>> m = regex.match(r"(\w)+", "abc")
>>> m.expandf("{1}")
'c'
>>> m.expandf("{1[0]} {1[1]} {1[2]}")
'a b c'
>>> m.expandf("{1[-1]} {1[-2]} {1[-3]}")
'c b a'
>>>
>>> m = regex.match(r"(?P<letter>\w)+", "abc")
>>> m.expandf("{letter}")
'c'
>>> m.expandf("{letter[0]} {letter[1]} {letter[2]}")
'a b c'
>>> m.expandf("{letter[-1]} {letter[-2]} {letter[-3]}")
'c b a'
Added support for referring to a group by number using ``(?P=...)``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is in addition to the existing ``\g<...>``.
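For example:

```python
import regex

# (?P=1) is a backreference to group 1 by number
assert regex.match(r"(\w+) (?P=1)", "abc abc") is not None
assert regex.match(r"(\w+) (?P=1)", "abc def") is None
```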
Fixed the handling of locale-sensitive regexes.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``LOCALE`` flag is intended for legacy code and has limited support. You're still recommended to use Unicode instead.
Added partial matches (`Hg issue 102 <https://bitbucket.org/mrabarnett/mrab-regex/issues/102>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A partial match is one that matches up to the end of a string, but that string has been truncated and you want to know whether a complete match could be possible if the string had not been truncated.
Partial matches are supported by ``match``, ``search``, ``fullmatch`` and ``finditer`` with the ``partial`` keyword argument.
Match objects have a ``partial`` attribute, which is ``True`` if it's a partial match.
For example, if you wanted a user to enter a 4-digit number and check it character by character as it was being entered:
.. sourcecode:: python
>>> pattern = regex.compile(r'\d{4}')
>>> # Initially, nothing has been entered:
>>> print(pattern.fullmatch('', partial=True))
<regex.Match object; span=(0, 0), match='', partial=True>
>>> # An empty string is OK, but it's only a partial match.
>>> # The user enters a letter:
>>> print(pattern.fullmatch('a', partial=True))
None
>>> # It'll never match.
>>> # The user deletes that and enters a digit:
>>> print(pattern.fullmatch('1', partial=True))
<regex.Match object; span=(0, 1), match='1', partial=True>
>>> # It matches this far, but it's only a partial match.
>>> # The user enters 2 more digits:
>>> print(pattern.fullmatch('123', partial=True))
<regex.Match object; span=(0, 3), match='123', partial=True>
>>> # It matches this far, but it's only a partial match.
>>> # The user enters another digit:
>>> print(pattern.fullmatch('1234', partial=True))
<regex.Match object; span=(0, 4), match='1234'>
>>> # It's a complete match.
>>> # If the user enters another digit:
>>> print(pattern.fullmatch('12345', partial=True))
None
>>> # It's no longer a match.
>>> # This is a partial match:
>>> pattern.match('123', partial=True).partial
True
>>> # This is a complete match:
>>> pattern.match('1233', partial=True).partial
False
``*`` operator not working correctly with sub() (`Hg issue 106 <https://bitbucket.org/mrabarnett/mrab-regex/issues/106>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sometimes it's not clear how zero-width matches should be handled. For example, should ``.*`` match 0 characters directly after matching >0 characters?
Examples:
.. sourcecode:: python
# Python 3.7 and later
>>> regex.sub('.*', 'x', 'test')
'xx'
>>> regex.sub('.*?', '|', 'test')
'|||||||||'
# Python 3.6 and earlier
>>> regex.sub('(?V0).*', 'x', 'test')
'x'
>>> regex.sub('(?V1).*', 'x', 'test')
'xx'
>>> regex.sub('(?V0).*?', '|', 'test')
'|t|e|s|t|'
>>> regex.sub('(?V1).*?', '|', 'test')
'|||||||||'
Added ``capturesdict`` (`Hg issue 86 <https://bitbucket.org/mrabarnett/mrab-regex/issues/86>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``capturesdict`` is a combination of ``groupdict`` and ``captures``:
``groupdict`` returns a dict of the named groups and the last capture of those groups.
``captures`` returns a list of all the captures of a group
``capturesdict`` returns a dict of the named groups and lists of all the captures of those groups.
Examples:
.. sourcecode:: python
>>> m = regex.match(r"(?:(?P<word>\w+) (?P<digits>\d+)\n)+", "one 1\ntwo 2\nthree 3\n")
>>> m.groupdict()
{'word': 'three', 'digits': '3'}
>>> m.captures("word")
['one', 'two', 'three']
>>> m.captures("digits")
['1', '2', '3']
>>> m.capturesdict()
{'word': ['one', 'two', 'three'], 'digits': ['1', '2', '3']}
Allow duplicate names of groups (`Hg issue 87 <https://bitbucket.org/mrabarnett/mrab-regex/issues/87>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Group names can now be duplicated.
Examples:
.. sourcecode:: python
>>> # With optional groups:
>>>
>>> # Both groups capture, the second capture 'overwriting' the first.
>>> m = regex.match(r"(?P<item>\w+)? or (?P<item>\w+)?", "first or second")
>>> m.group("item")
'second'
>>> m.captures("item")
['first', 'second']
>>> # Only the second group captures.
>>> m = regex.match(r"(?P<item>\w+)? or (?P<item>\w+)?", " or second")
>>> m.group("item")
'second'
>>> m.captures("item")
['second']
>>> # Only the first group captures.
>>> m = regex.match(r"(?P<item>\w+)? or (?P<item>\w+)?", "first or ")
>>> m.group("item")
'first'
>>> m.captures("item")
['first']
>>>
>>> # With mandatory groups:
>>>
>>> # Both groups capture, the second capture 'overwriting' the first.
>>> m = regex.match(r"(?P<item>\w*) or (?P<item>\w*)?", "first or second")
>>> m.group("item")
'second'
>>> m.captures("item")
['first', 'second']
>>> # Again, both groups capture, the second capture 'overwriting' the first.
>>> m = regex.match(r"(?P<item>\w*) or (?P<item>\w*)", " or second")
>>> m.group("item")
'second'
>>> m.captures("item")
['', 'second']
>>> # And yet again, both groups capture, the second capture 'overwriting' the first.
>>> m = regex.match(r"(?P<item>\w*) or (?P<item>\w*)", "first or ")
>>> m.group("item")
''
>>> m.captures("item")
['first', '']
Added ``fullmatch`` (`issue #16203 <https://bugs.python.org/issue16203>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``fullmatch`` behaves like ``match``, except that it must match all of the string.
Examples:
.. sourcecode:: python
>>> print(regex.fullmatch(r"abc", "abc").span())
(0, 3)
>>> print(regex.fullmatch(r"abc", "abcx"))
None
>>> print(regex.fullmatch(r"abc", "abcx", endpos=3).span())
(0, 3)
>>> print(regex.fullmatch(r"abc", "xabcy", pos=1, endpos=4).span())
(1, 4)
>>>
>>> regex.match(r"a.*?", "abcd").group(0)
'a'
>>> regex.fullmatch(r"a.*?", "abcd").group(0)
'abcd'
Added ``subf`` and ``subfn``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``subf`` and ``subfn`` are alternatives to ``sub`` and ``subn`` respectively. When passed a replacement string, they treat it as a format string.
Examples:
.. sourcecode:: python
>>> regex.subf(r"(\w+) (\w+)", "{0} => {2} {1}", "foo bar")
'foo bar => bar foo'
>>> regex.subf(r"(?P<word1>\w+) (?P<word2>\w+)", "{word2} {word1}", "foo bar")
'bar foo'
Added ``expandf`` to match object
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``expandf`` is an alternative to ``expand``. When passed a replacement string, it treats it as a format string.
Examples:
.. sourcecode:: python
>>> m = regex.match(r"(\w+) (\w+)", "foo bar")
>>> m.expandf("{0} => {2} {1}")
'foo bar => bar foo'
>>>
>>> m = regex.match(r"(?P<word1>\w+) (?P<word2>\w+)", "foo bar")
>>> m.expandf("{word2} {word1}")
'bar foo'
Detach searched string
^^^^^^^^^^^^^^^^^^^^^^
A match object contains a reference to the string that was searched, via its ``string`` attribute. The ``detach_string`` method will 'detach' that string, making it available for garbage collection, which might save valuable memory if that string is very large.
Example:
.. sourcecode:: python
>>> m = regex.search(r"\w+", "Hello world")
>>> print(m.group())
Hello
>>> print(m.string)
Hello world
>>> m.detach_string()
>>> print(m.group())
Hello
>>> print(m.string)
None
Recursive patterns (`Hg issue 27 <https://bitbucket.org/mrabarnett/mrab-regex/issues/27>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Recursive and repeated patterns are supported.
``(?R)`` or ``(?0)`` tries to match the entire regex recursively. ``(?1)``, ``(?2)``, etc, try to match the relevant capture group.
``(?&name)`` tries to match the named capture group.
Examples:
.. sourcecode:: python
>>> regex.match(r"(Tarzan|Jane) loves (?1)", "Tarzan loves Jane").groups()
('Tarzan',)
>>> regex.match(r"(Tarzan|Jane) loves (?1)", "Jane loves Tarzan").groups()
('Jane',)
>>> m = regex.search(r"(\w)(?:(?R)|(\w?))\1", "kayak")
>>> m.group(0, 1, 2)
('kayak', 'k', None)
The first two examples show how the subpattern within the capture group is reused, but is *not* itself a capture group. In other words, ``"(Tarzan|Jane) loves (?1)"`` is equivalent to ``"(Tarzan|Jane) loves (?:Tarzan|Jane)"``.
It's possible to backtrack into a recursed or repeated group.
You can't call a group if there is more than one group with that group name or group number (``"ambiguous group reference"``).
The alternative forms ``(?P>name)`` and ``(?P&name)`` are also supported.
Full Unicode case-folding is supported.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In version 1 behaviour, the regex module uses full case-folding when performing case-insensitive matches in Unicode.
Examples (in Python 3):
.. sourcecode:: python
>>> regex.match(r"(?iV1)strasse", "stra\N{LATIN SMALL LETTER SHARP S}e").span()
(0, 6)
>>> regex.match(r"(?iV1)stra\N{LATIN SMALL LETTER SHARP S}e", "STRASSE").span()
(0, 7)
In version 0 behaviour, it uses simple case-folding for backward compatibility with the re module.
Approximate "fuzzy" matching (`Hg issue 12 <https://bitbucket.org/mrabarnett/mrab-regex/issues/12>`_, `Hg issue 41 <https://bitbucket.org/mrabarnett/mrab-regex/issues/41>`_, `Hg issue 109 <https://bitbucket.org/mrabarnett/mrab-regex/issues/109>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Regex usually attempts an exact match, but sometimes an approximate, or "fuzzy", match is needed, for those cases where the text being searched may contain errors in the form of inserted, deleted or substituted characters.
A fuzzy regex specifies which types of errors are permitted, and, optionally, either the minimum and maximum or only the maximum permitted number of each type. (You cannot specify only a minimum.)
The 3 types of error are:
* Insertion, indicated by "i"
* Deletion, indicated by "d"
* Substitution, indicated by "s"
In addition, "e" indicates any type of error.
The fuzziness of a regex item is specified between "{" and "}" after the item.
Examples:
* ``foo`` match "foo" exactly
* ``(?:foo){i}`` match "foo", permitting insertions
* ``(?:foo){d}`` match "foo", permitting deletions
* ``(?:foo){s}`` match "foo", permitting substitutions
* ``(?:foo){i,s}`` match "foo", permitting insertions and substitutions
* ``(?:foo){e}`` match "foo", permitting errors
If a certain type of error is specified, then any type not specified will **not** be permitted.
In the following examples I'll omit the item and write only the fuzziness:
* ``{d<=3}`` permit at most 3 deletions, but no other types
* ``{i<=1,s<=2}`` permit at most 1 insertion and at most 2 substitutions, but no deletions
* ``{1<=e<=3}`` permit at least 1 and at most 3 errors
* ``{i<=2,d<=2,e<=3}`` permit at most 2 insertions, at most 2 deletions, at most 3 errors in total, but no substitutions
It's also possible to state the costs of each type of error and the maximum permitted total cost.
Examples:
* ``{2i+2d+1s<=4}`` each insertion costs 2, each deletion costs 2, each substitution costs 1, the total cost must not exceed 4
* ``{i<=1,d<=1,s<=1,2i+2d+1s<=4}`` at most 1 insertion, at most 1 deletion, at most 1 substitution; each insertion costs 2, each deletion costs 2, each substitution costs 1, the total cost must not exceed 4
You can also use "<" instead of "<=" if you want an exclusive minimum or maximum.
You can add a test to perform on a character that's substituted or inserted.
Examples:
* ``{s<=2:[a-z]}`` at most 2 substitutions, which must be in the character set ``[a-z]``.
* ``{s<=2,i<=3:\d}`` at most 2 substitutions, at most 3 insertions, which must be digits.
By default, fuzzy matching searches for the first match that meets the given constraints. The ``ENHANCEMATCH`` flag will cause it to attempt to improve the fit (i.e. reduce the number of errors) of the match that it has found.
The ``BESTMATCH`` flag will make it search for the best match instead.
Further examples to note:
* ``regex.search("(dog){e}", "cat and dog")[1]`` returns ``"cat"`` because that matches ``"dog"`` with 3 errors (an unlimited number of errors is permitted).
* ``regex.search("(dog){e<=1}", "cat and dog")[1]`` returns ``" dog"`` (with a leading space) because that matches ``"dog"`` with 1 error, which is within the limit.
* ``regex.search("(?e)(dog){e<=1}", "cat and dog")[1]`` returns ``"dog"`` (without a leading space) because the fuzzy search matches ``" dog"`` with 1 error, which is within the limit, and the ``(?e)`` then makes it attempt a better fit.
In the first two examples there are perfect matches later in the string, but in neither case is it the first possible match.
The match object has an attribute ``fuzzy_counts`` which gives the total number of substitutions, insertions and deletions.
.. sourcecode:: python
>>> # A 'raw' fuzzy match:
>>> regex.fullmatch(r"(?:cats|cat){e<=1}", "cat").fuzzy_counts
(0, 0, 1)
>>> # 0 substitutions, 0 insertions, 1 deletion.
>>> # A better match might be possible if the ENHANCEMATCH flag is used:
>>> regex.fullmatch(r"(?e)(?:cats|cat){e<=1}", "cat").fuzzy_counts
(0, 0, 0)
>>> # 0 substitutions, 0 insertions, 0 deletions.
The match object also has an attribute ``fuzzy_changes`` which gives a tuple of the positions of the substitutions, insertions and deletions.
.. sourcecode:: python
>>> m = regex.search('(fuu){i<=2,d<=2,e<=5}', 'anaconda foo bar')
>>> m
<regex.Match object; span=(7, 10), match='a f', fuzzy_counts=(0, 2, 2)>
>>> m.fuzzy_changes
([], [7, 8], [10, 11])
What this means is that if the matched part of the string had been:
.. sourcecode:: python
'anacondfuuoo bar'
it would've been an exact match.
However, there were insertions at positions 7 and 8:
.. sourcecode:: python
'anaconda fuuoo bar'
^^
and deletions at positions 10 and 11:
.. sourcecode:: python
'anaconda f~~oo bar'
^^
So the actual string was:
.. sourcecode:: python
'anaconda foo bar'
Named lists (`Hg issue 11 <https://bitbucket.org/mrabarnett/mrab-regex/issues/11>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``\L<name>``
There are occasions where you may want to include a list (actually, a set) of options in a regex.
One way is to build the pattern like this:
.. sourcecode:: python
>>> p = regex.compile(r"first|second|third|fourth|fifth")
but if the list is large, parsing the resulting regex can take considerable time, and care must also be taken that the strings are properly escaped and properly ordered, for example, "cats" before "cat".
The new alternative is to use a named list:
.. sourcecode:: python
>>> option_set = ["first", "second", "third", "fourth", "fifth"]
>>> p = regex.compile(r"\L<options>", options=option_set)
The order of the items is irrelevant; they are treated as a set. The named lists are available as the ``.named_lists`` attribute of the pattern object:
.. sourcecode:: python
>>> print(p.named_lists)
# Python 3
{'options': frozenset({'fifth', 'first', 'fourth', 'second', 'third'})}
# Python 2
{'options': frozenset(['fifth', 'fourth', 'second', 'third', 'first'])}
If there are any unused keyword arguments, ``ValueError`` will be raised unless you tell it otherwise:
.. sourcecode:: python
>>> option_set = ["first", "second", "third", "fourth", "fifth"]
>>> p = regex.compile(r"\L<options>", options=option_set, other_options=[])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python37\lib\site-packages\regex\regex.py", line 348, in compile
return _compile(pattern, flags, ignore_unused, kwargs)
File "C:\Python37\lib\site-packages\regex\regex.py", line 585, in _compile
raise ValueError('unused keyword argument {!a}'.format(any_one))
ValueError: unused keyword argument 'other_options'
>>> p = regex.compile(r"\L<options>", options=option_set, other_options=[], ignore_unused=True)
>>>
Start and end of word
^^^^^^^^^^^^^^^^^^^^^
``\m`` matches at the start of a word.
``\M`` matches at the end of a word.
Compare with ``\b``, which matches at the start or end of a word.
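For example, ``\mcat\M`` matches "cat" only as a whole word:
.. sourcecode:: python
>>> regex.findall(r"\mcat\M", "cat concat catalogue")
['cat']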
Unicode line separators
^^^^^^^^^^^^^^^^^^^^^^^
Normally the only line separator is ``\n`` (``\x0A``), but if the ``WORD`` flag is turned on then the line separators are ``\x0D\x0A``, ``\x0A``, ``\x0B``, ``\x0C`` and ``\x0D``, plus ``\x85``, ``\u2028`` and ``\u2029`` when working with Unicode.
This affects the regex dot ``"."``, which, with the ``DOTALL`` flag turned off, matches any character except a line separator. It also affects the line anchors ``^`` and ``$`` (in multiline mode).
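For example, with the ``WORD`` flag (inline form ``(?w)``), the dot no longer matches the line separator ``\x0B``:
.. sourcecode:: python
>>> regex.findall(r".", "a\x0Bb")
['a', '\x0b', 'b']
>>> regex.findall(r"(?w).", "a\x0Bb")
['a', 'b']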
Set operators
^^^^^^^^^^^^^
**Version 1 behaviour only**
Set operators have been added, and a set ``[...]`` can include nested sets.
The operators, in order of increasing precedence, are:
* ``||`` for union ("x||y" means "x or y")
* ``~~`` (double tilde) for symmetric difference ("x~~y" means "x or y, but not both")
* ``&&`` for intersection ("x&&y" means "x and y")
* ``--`` (double dash) for difference ("x--y" means "x but not y")
Implicit union, i.e. simple juxtaposition as in ``[ab]``, has the highest precedence. Thus, ``[ab&&cd]`` is the same as ``[[a||b]&&[c||d]]``.
Examples:
* ``[ab]`` # Set containing 'a' and 'b'
* ``[a-z]`` # Set containing 'a' .. 'z'
* ``[[a-z]--[qw]]`` # Set containing 'a' .. 'z', but not 'q' or 'w'
* ``[a-z--qw]`` # Same as above
* ``[\p{L}--QW]`` # Set containing all letters except 'Q' and 'W'
* ``[\p{N}--[0-9]]`` # Set containing all numbers except '0' .. '9'
* ``[\p{ASCII}&&\p{Letter}]`` # Set containing all characters which are both ASCII and letters
regex.escape (`issue #2650 <https://bugs.python.org/issue2650>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
regex.escape has an additional keyword parameter ``special_only``. When True, only 'special' regex characters, such as '?', are escaped.
Examples:
.. sourcecode:: python
>>> regex.escape("foo!?", special_only=False)
'foo\\!\\?'
>>> regex.escape("foo!?", special_only=True)
'foo!\\?'
regex.escape (`Hg issue 249 <https://bitbucket.org/mrabarnett/mrab-regex/issues/249>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
regex.escape has an additional keyword parameter ``literal_spaces``. When True, spaces are not escaped.
Examples:
.. sourcecode:: python
>>> regex.escape("foo bar!?", literal_spaces=False)
'foo\\ bar!\\?'
>>> regex.escape("foo bar!?", literal_spaces=True)
'foo bar!\\?'
Repeated captures (`issue #7132 <https://bugs.python.org/issue7132>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A match object has additional methods which return information on all the successful matches of a repeated capture group. These methods are:
* ``matchobject.captures([group1, ...])``
* Returns a list of the strings matched in a group or groups. Compare with ``matchobject.group([group1, ...])``.
* ``matchobject.starts([group])``
* Returns a list of the start positions. Compare with ``matchobject.start([group])``.
* ``matchobject.ends([group])``
* Returns a list of the end positions. Compare with ``matchobject.end([group])``.
* ``matchobject.spans([group])``
* Returns a list of the spans. Compare with ``matchobject.span([group])``.
Examples:
.. sourcecode:: python
>>> m = regex.search(r"(\w{3})+", "123456789")
>>> m.group(1)
'789'
>>> m.captures(1)
['123', '456', '789']
>>> m.start(1)
6
>>> m.starts(1)
[0, 3, 6]
>>> m.end(1)
9
>>> m.ends(1)
[3, 6, 9]
>>> m.span(1)
(6, 9)
>>> m.spans(1)
[(0, 3), (3, 6), (6, 9)]
Atomic grouping (`issue #433030 <https://bugs.python.org/issue433030>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``(?>...)``
If the following pattern subsequently fails, then the subpattern as a whole will fail.
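For example, ``(?>a+)`` won't give back any of the characters it has matched, even if that makes the rest of the pattern fail:
.. sourcecode:: python
>>> regex.search(r"a+ab", "aaab").group()
'aaab'
>>> print(regex.search(r"(?>a+)ab", "aaab"))
None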
Possessive quantifiers.
^^^^^^^^^^^^^^^^^^^^^^^
``(?:...)?+`` ; ``(?:...)*+`` ; ``(?:...)++`` ; ``(?:...){min,max}+``
The subpattern is matched up to 'max' times. If the following pattern subsequently fails, then all of the repeated subpatterns will fail as a whole. For example, ``(?:...)++`` is equivalent to ``(?>(?:...)+)``.
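For example, ``(?:a)++`` behaves like ``(?>(?:a)+)``:
.. sourcecode:: python
>>> regex.match(r"(?:a)+ab", "aaab").group()
'aaab'
>>> print(regex.match(r"(?:a)++ab", "aaab"))
None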
Scoped flags (`issue #433028 <https://bugs.python.org/issue433028>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``(?flags-flags:...)``
The flags will apply only to the subpattern. Flags can be turned on or off.
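For example, ``(?i:...)`` makes only the subpattern case-insensitive:
.. sourcecode:: python
>>> regex.match(r"(?i:abc)def", "ABCdef").group()
'ABCdef'
>>> print(regex.match(r"(?i:abc)def", "ABCDEF"))
None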
Definition of 'word' character (`issue #1693050 <https://bugs.python.org/issue1693050>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The definition of a 'word' character has been expanded for Unicode. It now conforms to the Unicode specification at ``http://www.unicode.org/reports/tr29/``.
Variable-length lookbehind
^^^^^^^^^^^^^^^^^^^^^^^^^^
A lookbehind can match a variable-length string.
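For example, the lookbehind here matches a variable number of digits:
.. sourcecode:: python
>>> regex.search(r"(?<=\d+)cm", "length: 25cm").group()
'cm'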
Flags argument for regex.split, regex.sub and regex.subn (`issue #3482 <https://bugs.python.org/issue3482>`_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``regex.split``, ``regex.sub`` and ``regex.subn`` support a 'flags' argument.
Pos and endpos arguments for regex.sub and regex.subn
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``regex.sub`` and ``regex.subn`` support 'pos' and 'endpos' arguments.
'Overlapped' argument for regex.findall and regex.finditer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``regex.findall`` and ``regex.finditer`` support an 'overlapped' flag which permits overlapped matches.
Splititer
^^^^^^^^^
``regex.splititer`` has been added. It's a generator equivalent of ``regex.split``.
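Example:
.. sourcecode:: python
>>> list(regex.splititer(r",\s*", "a, b,c"))
['a', 'b', 'c']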
Subscripting for groups
^^^^^^^^^^^^^^^^^^^^^^^
A match object accepts access to the captured groups via subscripting and slicing:
.. sourcecode:: python
>>> m = regex.search(r"(?P<before>.*?)(?P<num>\d+)(?P<after>.*)", "pqr123stu")
>>> print(m["before"])
pqr
>>> print(len(m))
4
>>> print(m[:])
('pqr123stu', 'pqr', '123', 'stu')
Named groups
^^^^^^^^^^^^
Groups can be named with ``(?<name>...)`` as well as the current ``(?P<name>...)``.
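Example:
.. sourcecode:: python
>>> regex.match(r"(?<word>\w+)", "hello").group("word")
'hello'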
Group references
^^^^^^^^^^^^^^^^
Groups can be referenced within a pattern with ``\g<name>``. This also allows there to be more than 99 groups.
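For example, ``\g<ch>`` here is a backreference to the group named 'ch':
.. sourcecode:: python
>>> regex.match(r"(?P<ch>\w)-\g<ch>", "a-a").group()
'a-a'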
Named characters
^^^^^^^^^^^^^^^^
``\N{name}``
Named characters are supported. (Note: only those known by Python's Unicode database are supported.)
Unicode codepoint properties, including scripts and blocks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``\p{property=value}``; ``\P{property=value}``; ``\p{value}`` ; ``\P{value}``
Many Unicode properties are supported, including blocks and scripts. ``\p{property=value}`` or ``\p{property:value}`` matches a character whose property ``property`` has value ``value``. The inverse of ``\p{property=value}`` is ``\P{property=value}`` or ``\p{^property=value}``.
If the short form ``\p{value}`` is used, the properties are checked in the order: ``General_Category``, ``Script``, ``Block``, binary property:
* ``Latin``, the 'Latin' script (``Script=Latin``).
* ``BasicLatin``, the 'BasicLatin' block (``Block=BasicLatin``).
* ``Alphabetic``, the 'Alphabetic' binary property (``Alphabetic=Yes``).
A short form starting with ``Is`` indicates a script or binary property:
* ``IsLatin``, the 'Latin' script (``Script=Latin``).
* ``IsAlphabetic``, the 'Alphabetic' binary property (``Alphabetic=Yes``).
A short form starting with ``In`` indicates a block property:
* ``InBasicLatin``, the 'BasicLatin' block (``Block=BasicLatin``).
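Example:
.. sourcecode:: python
>>> regex.findall(r"\p{Greek}", "abcαβγ")
['α', 'β', 'γ']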
POSIX character classes
^^^^^^^^^^^^^^^^^^^^^^^
``[[:alpha:]]``; ``[[:^alpha:]]``
POSIX character classes are supported. These are normally treated as an alternative form of ``\p{...}``.
The exceptions are ``alnum``, ``digit``, ``punct`` and ``xdigit``, whose definitions are different from those of Unicode.
``[[:alnum:]]`` is equivalent to ``\p{posix_alnum}``.
``[[:digit:]]`` is equivalent to ``\p{posix_digit}``.
``[[:punct:]]`` is equivalent to ``\p{posix_punct}``.
``[[:xdigit:]]`` is equivalent to ``\p{posix_xdigit}``.
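Examples:
.. sourcecode:: python
>>> regex.findall(r"[[:alpha:]]", "ab12")
['a', 'b']
>>> regex.findall(r"[[:^alpha:]]", "ab12")
['1', '2']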
Search anchor
^^^^^^^^^^^^^
``\G``
A search anchor has been added. It matches at the position where each search started/continued and can be used for contiguous matches or in negative variable-length lookbehinds to limit how far back the lookbehind goes:
.. sourcecode:: python
>>> regex.findall(r"\w{2}", "abcd ef")
['ab', 'cd', 'ef']
>>> regex.findall(r"\G\w{2}", "abcd ef")
['ab', 'cd']
* The search starts at position 0 and matches 2 letters 'ab'.
* The search continues at position 2 and matches 2 letters 'cd'.
* The search continues at position 4 and fails to match any letters.
* The anchor stops the search start position from being advanced, so there are no more results.
Reverse searching
^^^^^^^^^^^^^^^^^
Searches can now work backwards:
.. sourcecode:: python
>>> regex.findall(r".", "abc")
['a', 'b', 'c']
>>> regex.findall(r"(?r).", "abc")
['c', 'b', 'a']
Note: the result of a reverse search is not necessarily the reverse of a forward search:
.. sourcecode:: python
>>> regex.findall(r"..", "abcde")
['ab', 'cd']
>>> regex.findall(r"(?r)..", "abcde")
['de', 'bc']
Matching a single grapheme
^^^^^^^^^^^^^^^^^^^^^^^^^^
``\X``
The grapheme matcher is supported. It now conforms to the Unicode specification at ``http://www.unicode.org/reports/tr29/``.
Branch reset
^^^^^^^^^^^^
``(?|...|...)``
Capture group numbers will be reused across the alternatives, but groups with different names will have different group numbers.
Examples:
.. sourcecode:: python
>>> regex.match(r"(?|(first)|(second))", "first").groups()
('first',)
>>> regex.match(r"(?|(first)|(second))", "second").groups()
('second',)
Note that there is only one group.
Default Unicode word boundary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``WORD`` flag changes the definition of a 'word boundary' to that of a default Unicode word boundary. This applies to ``\b`` and ``\B``.
Timeout (Python 3)
^^^^^^^^^^^^^^^^^^
The matching methods and functions support timeouts. The timeout (in seconds) applies to the entire operation:
.. sourcecode:: python
>>> from time import sleep
>>>
>>> def fast_replace(m):
... return 'X'
...
>>> def slow_replace(m):
... sleep(0.5)
... return 'X'
...
>>> regex.sub(r'[a-z]', fast_replace, 'abcde', timeout=2)
'XXXXX'
>>> regex.sub(r'[a-z]', slow_replace, 'abcde', timeout=2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python37\lib\site-packages\regex\regex.py", line 276, in sub
endpos, concurrent, timeout)
TimeoutError: regex timed out
import os
import re
import urllib2
# Directory or URL where Unicode tables reside.
_UNICODE_DIR = "http://www.unicode.org/Public/6.3.0/ucd"
# Largest valid Unicode code value.
_RUNE_MAX = 0x10FFFF
class Error(Exception):
"""Unicode error base class."""
class InputError(Error):
"""Unicode input error class. Raised on invalid input."""
def _UInt(s):
"""Converts string to Unicode code point ('263A' => 0x263a).
Args:
s: string to convert
Returns:
Unicode code point
Raises:
InputError: the string is not a valid Unicode value.
"""
try:
v = int(s, 16)
except ValueError:
v = -1
if len(s) < 4 or len(s) > 6 or v < 0 or v > _RUNE_MAX:
raise InputError("invalid Unicode value %s" % (s,))
return v
def _URange(s):
"""Converts string to Unicode range.
'0001..0003' => [1, 2, 3].
'0001' => [1].
Args:
s: string to convert
Returns:
Unicode range
Raises:
InputError: the string is not a valid Unicode range.
"""
a = s.split("..")
if len(a) == 1:
return [_UInt(a[0])]
if len(a) == 2:
lo = _UInt(a[0])
hi = _UInt(a[1])
if lo < hi:
return range(lo, hi + 1)
raise InputError("invalid Unicode range %s" % (s,))
def _UStr(v):
"""Converts Unicode code point to hex string.
0x263a => '0x263A'.
Args:
v: code point to convert
Returns:
Unicode string
Raises:
InputError: the argument is not a valid Unicode value.
"""
if v < 0 or v > _RUNE_MAX:
raise InputError("invalid Unicode value %s" % (v,))
return "0x%04X" % (v,)
def _ParseContinue(s):
"""Parses a Unicode continuation field.
These are of the form '<Name, First>' or '<Name, Last>'.
Instead of giving an explicit range in a single table entry,
some Unicode tables use two entries, one for the first
code value in the range and one for the last.
The first entry's description is '<Name, First>' instead of 'Name'
and the second is '<Name, Last>'.
'<Name, First>' => ('Name', 'First')
'<Name, Last>' => ('Name', 'Last')
'Anything else' => ('Anything else', None)
Args:
s: continuation field string
Returns:
pair: name and ('First', 'Last', or None)
"""
match = re.match("<(.*), (First|Last)>", s)
if match is not None:
return match.groups()
return (s, None)
def ReadUnicodeTable(filename, nfields, doline):
"""Generic Unicode table text file reader.
The reader takes care of stripping out comments and also
parsing the two different ways that the Unicode tables specify
code ranges (using the .. notation and splitting the range across
multiple lines).
Each non-comment line in the table is expected to have the given
number of fields. The first field is known to be the Unicode value
and the second field its description.
The reader calls doline(codes, fields) for each entry in the table.
If doline raises an exception, the reader prints that exception,
prefixed with the file name and line number, and re-raises it.
Arguments:
filename: the Unicode data file to read, or a file-like object.
nfields: the number of expected fields per line in that file.
doline: the function to call for each table entry.
Raises:
InputError: nfields is invalid (must be >= 2).
"""
if nfields < 2:
raise InputError("invalid number of fields %d" % (nfields,))
if type(filename) == str:
if filename.startswith("http://"):
fil = urllib2.urlopen(filename)
else:
fil = open(filename, "r")
else:
fil = filename
first = None # first code in multiline range
expect_last = None # tag expected for "Last" line in multiline range
lineno = 0 # current line number
for line in fil:
lineno += 1
try:
# Chop # comments and white space; ignore empty lines.
sharp = line.find("#")
if sharp >= 0:
line = line[:sharp]
line = line.strip()
if not line:
continue
# Split fields on ";", chop more white space.
# Must have the expected number of fields.
fields = [s.strip() for s in line.split(";")]
if len(fields) != nfields:
raise InputError("wrong number of fields %d %d - %s" %
(len(fields), nfields, line))
# The Unicode text files have two different ways
# to list a Unicode range. Either the first field is
# itself a range (0000..FFFF), or the range is split
# across two lines, with the second field noting
# the continuation.
codes = _URange(fields[0])
(name, cont) = _ParseContinue(fields[1])
if expect_last is not None:
# If the last line gave the First code in a range,
# this one had better give the Last one.
if (len(codes) != 1 or codes[0] <= first or
cont != "Last" or name != expect_last):
raise InputError("expected Last line for %s" %
(expect_last,))
codes = range(first, codes[0] + 1)
first = None
expect_last = None
fields[0] = "%04X..%04X" % (codes[0], codes[-1])
fields[1] = name
elif cont == "First":
# Otherwise, if this is the First code in a range,
# remember it and go to the next line.
if len(codes) != 1:
raise InputError("bad First line: range given")
expect_last = name
first = codes[0]
continue
doline(codes, fields)
except Exception, e:
print "%s:%d: %s" % (filename, lineno, e)
raise
if expect_last is not None:
raise InputError("expected Last line for %s; got EOF" %
(expect_last,))
def CaseGroups(unicode_dir=_UNICODE_DIR):
"""Returns list of Unicode code groups equivalent under case folding.
Each group is a sorted list of code points,
and the list of groups is sorted by first code point
in the group.
Args:
unicode_dir: Unicode data directory
Returns:
list of Unicode code groups
"""
# Dict mapping lowercase code point to fold-equivalent group.
togroup = {}
def DoLine(codes, fields):
"""Process single CaseFolding.txt line, updating togroup."""
(_, foldtype, lower, _) = fields
if foldtype not in ("C", "S"):
return
lower = _UInt(lower)
togroup.setdefault(lower, [lower]).extend(codes)
ReadUnicodeTable(unicode_dir+"/CaseFolding.txt", 4, DoLine)
groups = togroup.values()
for g in groups:
g.sort()
groups.sort()
return groups
def Scripts(unicode_dir=_UNICODE_DIR):
"""Returns dict mapping script names to code lists.
Args:
unicode_dir: Unicode data directory
Returns:
dict mapping script names to code lists
"""
scripts = {}
def DoLine(codes, fields):
"""Process single Scripts.txt line, updating scripts."""
(_, name) = fields
scripts.setdefault(name, []).extend(codes)
ReadUnicodeTable(unicode_dir+"/Scripts.txt", 2, DoLine)
return scripts
def Categories(unicode_dir=_UNICODE_DIR):
"""Returns dict mapping category names to code lists.
Args:
unicode_dir: Unicode data directory
Returns:
dict mapping category names to code lists
"""
categories = {}
def DoLine(codes, fields):
"""Process single UnicodeData.txt line, updating categories."""
category = fields[2]
categories.setdefault(category, []).extend(codes)
# Add codes from Lu into L, etc.
if len(category) > 1:
short = category[0]
categories.setdefault(short, []).extend(codes)
ReadUnicodeTable(unicode_dir+"/UnicodeData.txt", 15, DoLine)
return categories
import re
class RegexerString(str):
"""Simplify working with regex.
Examples:
::
>>> RegexerString("Test01 string sentence02")[r"(?P<num>\\d+)", 'num']
['01', '02']
>>> RegexerString("Test01 string sentence02")[r"(?P<num>\\d+)", 1, 'num']
'02'
>>> RegexerString("Test01 string sentence02") / r"\\w\\d{2}"
['Tes', ' string sentenc']
>>> RegexerString("Test01 string sentence02") - r"\\d"
'Test string sentence'
Raises:
Exception: type not supported.
Exception: no item found by index.
"""
def __sub__(self, other):
if isinstance(other, str):
return re.sub(other, "", self)
else:
raise Exception("Not supported!")
def __truediv__(self, other):
if isinstance(other, str):
return list(filter(lambda item: item, re.split(other, self)))
else:
raise Exception("Not supported!")
def __getitem__(self, index):
if isinstance(index, str):
return [x.groupdict() for x in re.finditer(index, self)]
r = [x.groupdict() for x in re.finditer(index[0], self)]
if len(index) == 3:
if len(r) <= index[1]:
raise Exception(f"There is no item on index {index[1]}")
return r[index[1]][index[2]]
elif len(index) == 2 and len(r) == 1:
return r[0][index[1]]
else:
return [x[index[1]] for x in r]
class RegexerList(list):
    def __getitem__(self, index):
        if isinstance(index, str):
            return [x[index] for x in self]
        # Fall back to normal list behaviour (ints, slices, ...).
        return super().__getitem__(index)
class Regexer:
"""Simplify working with regex.
Examples:
::
        >>> Regexer(r"(?P<num>\\d+)")("Test01 string sentence02")['num']
        ['01', '02']
        >>> Regexer(r"(?P<num>\\d+)")("Test01 string sentence02")[1]['num']
        '02'
Raises:
Exception: type not supported.
Exception: no item found by index.
"""
def __init__(self, r):
self.r = r
def __call__(self, s):
        return RegexerList(RegexerString(s)[self.r])

# /regexer-0.0.1-py3-none-any.whl/regexer.py
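The operator overloads above map directly onto plain `re` calls; a minimal sanity check of the same behavior, mirroring the docstring examples (stdlib `re` only, not the classes themselves):

```python
import re

text = "Test01 string sentence02"

# __sub__: strip everything the pattern matches
assert re.sub(r"\d", "", text) == "Test string sentence"

# __truediv__: split on the pattern, dropping empty pieces
assert [p for p in re.split(r"\w\d{2}", text) if p] == ["Tes", " string sentenc"]

# __getitem__ with a named group: collect every match's group value
assert [m.groupdict()["num"] for m in re.finditer(r"(?P<num>\d+)", text)] == ["01", "02"]
```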
# Tutorial on regexmodel
## Setup and installation
If you haven't installed `regexmodel` yet, including the optional dependencies, do so now:
```
# %pip install git+https://github.com/sodascience/regexmodel.git[tutorial]
```
Normally we would already have data that we want to model and synthesize, but for this tutorial we will use the faker package to generate that data for us. We will use fake email addresses.
```
from faker import Faker
fake = Faker("en")
Faker.seed(12345)
email_addresses = [fake.ascii_email() for _ in range(1000)]
email_addresses[:10]
```
## Modeling the structured strings
Now we will use the regexmodel package to model the data:
```
from regexmodel import RegexModel
model = RegexModel.fit(email_addresses)
```
Let's first see how good the model is by synthesizing new email addresses:
```
[model.draw() for _ in range(10)]
```
While certainly not perfect, it isn't bad either, given that we have given the model only positive examples!
Now let's look at the serialization of the model:
```
model.serialize()
```
The serialization might seem overwhelming at first, but the first regex (`[a-z]{3,18}[0-9]{2,2}[@][a-z]{4,9}[\\.][c][o][m]`) is usually the most important one. We call this the main branch. On this main branch, there will be side branches, for example for ".info" and ".biz" email addresses.
## Modeling performance
There are also some modeling statistics that can be computed. Note that computing these can take a while depending on your computer.
```
model.fit_statistics(email_addresses)
```
What the `fit_statistics` method does is check, for each email address given to it (e.g. johndoe@example.com), whether it has a non-zero probability of being generated by the regex model. As we can see above, there were 18 email addresses in the list that have a probability of 0 of being generated by the model, while the overwhelming majority (982) can be generated with the fitted model.
The value `n_parameters` gives the number of nodes in the model, and is thus an indicator of the complexity of the model. This is also correlated with the fit taking longer. We can influence this parameter during fitting by setting the `count_thres` parameter. If we set that threshold higher, we generally have a lower number of parameters and better performance.
The statistic `avg_log_like_per_char` (average log-likelihood per character) shows how probable a value is on average per character. To understand this better, let's take a simpler example, where the regex is simply `\d{2,2}`. For this regex, the log-likelihood is simply log(1/10\*1/10) = -2\*log(10). Since all values have 2 characters, the average log-likelihood per character is -log(10) ≈ -2.30. For failed values (values that cannot be generated by the model), we use a penalty score of -log(1000) per character.
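That per-character figure is easy to verify directly (a quick sanity check with the standard library, not part of the regexmodel API):

```
import math

log_likelihood = math.log(1/10 * 1/10)  # two independent digits, probability 1/10 each
per_char = log_likelihood / 2           # \d{2,2} always yields 2 characters
print(round(per_char, 2))               # -2.3
```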
Ideally we want to have the lowest `n_parameters` (simplest model) with the highest `success` and the highest log-likelihood.
## Visualization
To understand more clearly what the graph looks like, we can plot the regex model using the `regex_model_to_pyvis` function. To retrace the paths that can be taken, first find the start node, then follow the main branch.
Note: PyVis doesn't work interactively in VSCode/Code OSS.
```
from regexmodel.visualization import regex_model_to_pyvis
net = regex_model_to_pyvis(model)
net.show("regex.html", notebook=True)
```
*Source: /regexmodel-0.1.0.tar.gz/regexmodel-0.1.0/examples/tutorial.ipynb*
from concurrent.futures import ProcessPoolExecutor
from collections import defaultdict
import time
import json
import sys
from faker import Faker
from regexmodel import RegexModel
def run_bench(faker_type, count_thres, n_fake, locale="nl"):
fake = Faker(locale=locale)
Faker.seed(12345)
fake_data = [getattr(fake, faker_type)() for _ in range(max(n_fake))]
fake_data_2 = [getattr(fake, faker_type)() for _ in range(max(n_fake))]
all_res = []
for cur_n_fake in n_fake:
for cur_count_thres in count_thres:
start_time = time.time()
model = RegexModel.fit(fake_data[:cur_n_fake], cur_count_thres)
mid_time = time.time()
stats = model.fit_statistics(fake_data_2[:cur_n_fake])
end_time = time.time()
success_rate = stats["success"]/(stats["success"] + stats["failed"])
stats.update({"n_fake": cur_n_fake, "threshold": cur_count_thres,
"success_rate": success_rate,
"fit_time": mid_time-start_time,
"statistics_time": end_time-mid_time})
all_res.append(stats)
return all_res
def standard_run(out_fp):
locales = ["nl", "fr", "en", "de", "da"]
faker_types = ["address", "phone_number", "pricetag", "timezone", "mime_type", "unix_partition",
"ascii_email", "isbn10", "job", "ssn", "user_agent", "color", "license_plate",
"iban", "company", "time", "ipv4", "uri", "name"]
n_fake = [100, 200, 400, 600, 1000]
count_thres = [2, 5, 10, 20]
executor = ProcessPoolExecutor()
future_results = defaultdict(dict)
for cur_faker_type in faker_types:
for cur_locale in locales:
future_results[cur_faker_type][cur_locale] = executor.submit(
run_bench, cur_faker_type, count_thres, n_fake, locale=cur_locale)
results = {
cur_faker_type: {locale: locale_data.result()
for locale, locale_data in cur_faker_data.items()}
for cur_faker_type, cur_faker_data in future_results.items()
}
with open(out_fp, "w") as handle:
json.dump(results, handle)
if __name__ == "__main__":
if len(sys.argv) != 2:
raise ValueError("Need one argument: output_fp")
    standard_run(sys.argv[1])

# /regexmodel-0.1.0.tar.gz/regexmodel-0.1.0/scripts/benchmark.py
def prefixes(s: str) -> iter:
"""
Lists all the prefixes of an arbitrary string
(including the empty word).
Example:
>>> from regexp_learner import prefixes
>>> sorted(prefixes("abcd"))
['', 'a', 'ab', 'abc', 'abcd']
Args:
s (str): A ``str`` instance.
Returns:
The prefixes of ``s``.
"""
return (s[:i] for i in range(len(s) + 1))
def is_prefix_closed(strings: set) -> bool:
"""
Tests whether a set of strings is prefix-closed (i.e., all the prefixes
of all the strings belong to this set).
Example:
>>> from regexp_learner import is_prefix_closed
>>> is_prefix_closed({"", "a", "ab", "abc"})
True
>>> is_prefix_closed({"xy", "xyz"})
False
Args:
strings (set): A set of strings.
Returns:
``True`` iff ``strings`` is prefix-closed, ``False`` otherwise.
"""
return not any(
prefix not in strings
for s in strings
for prefix in prefixes(s)
)
def suffixes(s: str):
"""
Lists all the suffixes of an arbitrary string
(including the empty word).
Example:
>>> from regexp_learner import suffixes
>>> sorted(suffixes("abcd"))
['', 'abcd', 'bcd', 'cd', 'd']
Args:
s (str): A ``str`` instance.
Returns:
The suffixes of ``s``.
"""
return (s[i:] for i in range(len(s) + 1))
def is_suffix_closed(str_set) -> bool:
"""
Tests whether a set of strings is suffix-closed (i.e., all the suffixes
of all the strings belong to this set).
Example:
>>> from regexp_learner import is_suffix_closed
>>> is_suffix_closed({"", "abc", "bc", "c"})
True
>>> is_suffix_closed({"xy", "xyz"})
False
Args:
        str_set (set): A set of strings.
Returns:
``True`` iff ``strings`` is suffix-closed, ``False`` otherwise.
"""
return not any(
suffix not in str_set
for s in str_set
for suffix in suffixes(s)
    )

# /regexp_learner-1.0.0.tar.gz/regexp_learner-1.0.0/src/regexp_learner/strings.py
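A quick property check of the helpers above (a stdlib-only sketch; the `prefixes`/`is_prefix_closed` bodies are restated so the snippet is self-contained):

```python
def prefixes(s):
    return (s[:i] for i in range(len(s) + 1))

def is_prefix_closed(strings):
    return all(p in strings for s in strings for p in prefixes(s))

# The prefix set of any word is prefix-closed by construction...
assert is_prefix_closed(set(prefixes("abcd")))
# ...while dropping inner prefixes (here "" and "a") breaks the property.
assert not is_prefix_closed({"ab", "abc"})
```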
__author__ = "Marc-Olivier Buob"
__maintainer__ = "Marc-Olivier Buob"
__email__ = "marc-olivier.buob@nokia-bell-labs.com"
__copyright__ = "Copyright (C) 2018, Nokia"
__license__ = "BSD-3"
import numpy as np
from operator import itemgetter
from collections import defaultdict
class LstarObservationTable:
"""
:py:class:`LstarObservationTable` implements the L* observation table
used by the :py:class:`Learner` in the Angluin algorithm.
"""
def __init__(self, a = "abcdefghijklmnopqrstuvwxyz"):
"""
Constructor.
        Args:
            a (str): The alphabet (one character per symbol).
        """
self.a = a
self.map_prefix = dict() # {str : int} maps prefixes with row indexes
self.map_suffix = dict() # {str : int} maps suffixes with column indexes
self.s = set() # {str} keeps track of prefixes
# {0,1}^(|map_prefixes|x|map_suffixes|) matrix (observation table)
self.t = np.zeros((1, 1), dtype = np.bool_)
# {0,1}^(|map_prefixes|x|map_suffixes|) indicated parts of T that have been probed
self.probed = np.zeros((1, 1), dtype = np.bool_)
@property
def e(self) -> set:
"""
Retrieves the observed suffixes.
Returns:
The set of suffixes observed in this :py:class:`LstarObservationTable`.
"""
return set(self.map_suffix.keys())
@staticmethod
def get_or_create_index(m: dict, k :str) -> int:
"""
Retrieves the index of a key in a dictionary. If the key
is not in the dictionary, the key is inserted and mapped
with ``len(m)``.
Args:
m (dict): The dictionary.
k (str): The key.
Returns:
The index assigned to ``k``.
"""
n = m.get(k)
if n is None:
n = len(m)
m[k] = n
return n
def add_row(self):
"""
Inserts a row in this :py:class:`LstarObservationTable`.
"""
self.t = np.insert(self.t, self.t.shape[0], values = 0, axis = 0)
self.probed = np.insert(self.probed, self.probed.shape[0], values = 0, axis = 0)
def add_col(self):
"""
Inserts a column in this :py:class:`LstarObservationTable`.
"""
self.t = np.insert(self.t, self.t.shape[1], values = 0, axis = 1)
self.probed = np.insert(self.probed, self.probed.shape[1], values = 0, axis = 1)
def add_prefix(self, s :str) -> tuple:
"""
Inserts a prefix in this :py:class:`LstarObservationTable`.
Args:
s (str): The inserted prefix.
Returns:
A ``(i, added)`` tuple where:
- ``i`` is the index of ``s`` in this :py:class:`LstarObservationTable`
- ``added`` equals ``True`` if ``s`` was not yet in this
:py:class:`LstarObservationTable`, ``False`` otherwise.
"""
i = LstarObservationTable.get_or_create_index(self.map_prefix, s)
(m, n) = self.t.shape
added = (i >= m)
if added:
self.add_row()
return (i, added)
def add_suffix(self, e :str) -> tuple:
"""
Inserts a suffix in this :py:class:`LstarObservationTable`.
Args:
e (str): The inserted suffix.
Returns:
A ``(i, added)`` tuple where:
- ``i`` is the index of ``e`` in this :py:class:`LstarObservationTable`
- ``added`` equals ``True`` if ``e`` was not yet in this
:py:class:`LstarObservationTable`, ``False`` otherwise.
"""
j = LstarObservationTable.get_or_create_index(self.map_suffix, e)
(m, n) = self.t.shape
added = (j >= n)
if added:
self.add_col()
return (j, added)
def set(self, s :str, e: str, accepted :bool = True):
"""
Fill this :py:class:`LstarObservationTable` according to a given
prefix, a given suffix, and a boolean indicating whether their
concatenation belongs to the :py:class:`Teacher`'s language.
Args:
s (str): The prefix.
e (str): The suffix.
accepted (bool): Pass ``True`` if ``s + e`` belongs to the
:py:class:`Teacher`'s language, ``False`` otherwise.
"""
(i, _) = self.add_prefix(s)
(j, _) = self.add_suffix(e)
self.t[i, j] = accepted
self.probed[i, j] = True
def get_row(self, s :str) -> int:
"""
Retrieves the row index related to a given prefix.
See also :py:meth:`LstarObservationTable.row`.
Args:
s (str): A prefix.
Returns:
The corresponding row if found, ``None`` otherwise.
"""
return self.map_prefix.get(s)
def get_col(self, e :str) -> int:
"""
Retrieves the column index related to a given prefix.
See also :py:meth:`LstarObservationTable.col`.
Args:
e (str): A suffix.
Returns:
The corresponding column if found, ``None`` otherwise.
"""
return self.map_suffix.get(e)
def get(self, s :str, e :str) -> bool:
"""
Probes this :py:class:`LstarObservationTable` for a given prefix
and a given suffix.
Args:
s (str): The prefix.
e (str): The suffix.
Returns:
The observation related to ``s + e``.
"""
i = self.get_row(s)
if i is None: return None
j = self.get_col(e)
if j is None: return None
if self.probed[i, j] == False: return None
ret = self.t[i, j]
return bool(ret)
def to_html(self) -> str:
"""
Exports this :py:class:`LstarObservationTable` to HTML.
Returns:
The corresponding HTML string.
"""
def bool_to_html(b) -> str:
return "?" if b is None else str(b)
def str_to_html(s) -> str:
return repr(s) if s else "ε"
def prefix_to_html(t, s) -> str:
return "<font color='red'>%s</font>" % str_to_html(s) if s in t.s \
else str_to_html(s)
sorted_prefixes = [
tup[0] for tup in sorted(self.map_prefix.items(), key=itemgetter(1))
]
sorted_suffixes = [
tup[0] for tup in sorted(self.map_suffix.items(), key=itemgetter(1))
]
return """
<table>
%(header)s
%(rows)s
</table>
""" % {
"header" : "<tr><th></th>%(ths)s</tr>" % {
"ths" : "".join([
"<td>%s</td>" % str_to_html(suffix) for suffix in sorted_suffixes
]),
},
"rows" : "".join([
"<tr><th>%(prefix)s</th>%(cells)s</tr>" % {
"cells" : "".join([
"<td>%s</td>" % bool_to_html(self.get(s, e)) for e in sorted_suffixes
]),
"prefix" : prefix_to_html(self, s),
} for s in sorted_prefixes
]),
}
def row(self, s :str) -> bytes:
"""
Retrieves the row in this :py:class:`LstarObservationTable`.
Args:
s (str): A prefix.
Returns:
The corresponding row.
"""
i = self.get_row(s)
# tobytes() is used to get something hashable
return self.t[i, :].tobytes() if i is not None else None
def find_mismatch_closeness(self) -> tuple:
"""
        Searches for a (prefix, symbol) pair that shows this
        :py:class:`LstarObservationTable` is not closed.
Returns:
A ``(s, a)`` pair (if found), ``None`` otherwise, where:
- ``s`` is a prefix of this :py:class:`LstarObservationTable`;
- ``a`` is a symbol of the alphabet of this :py:class:`LstarObservationTable`
(i.e., ``self.a``)
"""
assert self.probed.all()
rows = {self.row(s) for s in self.s}
for s in self.s:
for a in self.a:
row = self.row(s + a)
if row not in rows:
return (s, a)
return None
def is_closed(self, verbose :bool = False) -> bool:
"""
Checks whether this :py:class:`LstarObservationTable` is closed
(see Angluin's paper or `Angluin.pdf <https://github.com/nokia/regexp-learner/blob/master/Angluin.pdf>`__ in this repository).
Args:
verbose (bool): Pass ``True`` to print debug information.
Returns:
``True`` if this :py:class:`LstarObservationTable` is closed,
``False`` otherwise.
"""
ret = self.find_mismatch_closeness()
if verbose and ret is not None:
(s, a) = ret
print("Not closed: s = %s a = %s" % (s, a))
return ret is None
def find_mismatch_consistency(self) -> tuple:
"""
        Searches for a tuple that shows this :py:class:`LstarObservationTable`
        is not consistent.
        Returns:
            A ``(s1, s2, a, e)`` tuple (if found), ``None`` otherwise, where:
- ``s1`` and ``s2`` are two prefixes of this :py:class:`LstarObservationTable`;
- ``a`` is a symbol of the alphabet of this :py:class:`LstarObservationTable`
(i.e., ``self.a``)
- ``e`` is a contradicting suffix w.r.t. ``s1`` and ``s2``.
"""
assert self.probed.all()
for (i1, s1) in enumerate(self.s):
for (i2, s2) in enumerate(self.s):
if i2 <= i1:
continue
if self.row(s1) != self.row(s2):
continue
for a in self.a:
if self.row(s1 + a) != self.row(s2 + a):
for e in self.e:
if self.get(s1 + a, e) != self.get(s2 + a, e):
return (s1, s2, a, e)
return None
def is_consistent(self, verbose :bool = False) -> bool:
"""
Checks whether this :py:class:`LstarObservationTable` is consistent
(see Angluin's paper or `Angluin.pdf <https://github.com/nokia/regexp-learner/blob/master/Angluin.pdf>`__ in this repository).
Args:
verbose (bool): Pass ``True`` to print debug information.
Returns:
            ``True`` if this :py:class:`LstarObservationTable` is consistent,
            ``False`` otherwise.
"""
ret = self.find_mismatch_consistency()
if verbose and ret is not None:
(s1, s2, a, e) = ret
print("Not consistent: s1 = %s s2 = %s a = %s e = %s" % (s1, s2, a, e))
        return ret is None

# /regexp_learner-1.0.0.tar.gz/regexp_learner-1.0.0/src/regexp_learner/lstar/observation_table.py
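The closedness check above reduces to set membership over rows; a toy re-implementation with plain tuples (illustrative only, not the numpy-backed table, with made-up observation values):

```python
# Toy observation table over alphabet {"a"}: `rows` maps each prefix to its
# tuple of observations.
alphabet = "a"
prefix_set = {""}                  # the set S of red prefixes
rows = {"": (0,), "a": (1,)}       # row("") differs from row("a")

def find_mismatch_closeness(prefix_set, rows, alphabet):
    """Return (s, a) such that row(s + a) matches no red row, else None."""
    known = {rows[s] for s in prefix_set}
    for s in sorted(prefix_set):
        for a in alphabet:
            if rows[s + a] not in known:
                return (s, a)
    return None

# The table is not closed: "a" reaches a row no red prefix exhibits,
# so the learner would promote "a" into S.
print(find_mismatch_closeness(prefix_set, rows, alphabet))  # ('', 'a')
```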
from collections import defaultdict
from pybgl.property_map import make_assoc_property_map
from pybgl.automaton import Automaton, make_automaton
from pybgl.trie import Trie
from ..strings import is_prefix_closed, suffixes
class GoldObservationTable:
"""
The :py:class:`GoldObservationTable` class implements the observation
table used in the Gold algorithm.
"""
ZERO = 0
ONE = 1
STAR = '*'
def __init__(
self,
s_plus: set,
s_minus: set,
sigma :str = 'abcdefghijklmnopqrstuvwxyz0123456789 ',
red_states: set = {''},
fill_holes: bool = False,
blue_state_choice_func: callable = min,
red_state_choice_func: callable = min,
):
"""
Constructor.
Args:
s_plus (iter): An iterable of strings that are
present in the language to infer.
s_minus (iter): An iterable of strings that are
not present in the language to infer.
sigma (str): An iterable of chars, represents the alphabet.
red_states (set): An iterable of strings, should remain default
to run Gold algorithm.
fill_holes (bool):
If ``True``, the function uses the filling holes method.
If ``False``, the function won't fill holes in the table,
but search for compatible successors when building
the automaton.
blue_state_choice_func (callable): A ``Iterable[str] -> str``
function, used to choose which blue state to promote
among the candidates.
red_state_choice_func (callable): A ``Iterable[str] -> str``
function, used to choose which red state to choose
among the red_states which are compatible with a blue one.
"""
GoldObservationTable.check_input_consistency(s_plus, s_minus, sigma, red_states)
self.blue_state_choice_func = blue_state_choice_func
self.red_state_choice_func = red_state_choice_func
self.fill_holes = fill_holes
self.s_plus = set(s_plus)
self.s_minus = set(s_minus)
self.sigma = sigma
self.exp = sorted(
list( # Build suffix closed set EXP
set(
suffix
for string in s_plus
for suffix in suffixes(string)
) | set(
suffix
for string in s_minus
for suffix in suffixes(string)
)
),
key=lambda a: (len(a), a)
)
self.row_length = len(self.exp)
self.red_states = {
prefix: [
self.get_value_from_sample(prefix + suffix)
for suffix in self.exp
]
for prefix in red_states
}
self.blue_states = {
prefix + a: [
self.get_value_from_sample(prefix + a + suffix)
for suffix in self.exp
]
for prefix in red_states
for a in sigma
if prefix + a not in red_states
}
@staticmethod
def check_input_consistency(s_plus, s_minus, sigma, red_states):
"""
Checks that the input given to build an observation table is consistent.
Args:
s_plus (iter): An iterable of strings that are
present in the language to infer.
s_minus (iter): An iterable of strings that are
not present in the language to infer.
sigma (str): An iterable of chars, represents the alphabet.
red_states (set): An iterable of strings, should remain default
to run Gold algorithm.
Raises:
A ``RuntimeError`` exception if the input data is not consistent.
"""
# Check that all strings have their letters in sigma
for str_set in [s_plus, s_minus, red_states]:
for string in str_set:
if any(char not in sigma for char in string):
raise RuntimeError(
"Some characters are in the samples, "
"but not in the alphabet"
)
# Check that the red states are prefix closed
if not is_prefix_closed(red_states):
raise RuntimeError(
"The set of red states must be prefix-closed."
)
# Check that s_plus and s_minus are disjoint
if (
any(string in s_plus for string in s_minus) or
any(string in s_minus for string in s_plus)
):
raise RuntimeError(
"S+ and S- must not overlap"
)
def get_value_from_sample(self, w: str) -> int:
"""
Returns the value used to fill this :py:class:`GoldObservationTable`
for a given word.
Args:
w (str): An arbitray word.
Returns:
- :py:attr:`ONE` if ``w`` is in :py:attr:`self.s_plus`,
- :py:attr:`ZERO` if it is in :py:attr:`s_minus`,
- :py:attr:`STAR` otherwise
"""
return (
GoldObservationTable.ONE if w in self.s_plus else
GoldObservationTable.ZERO if w in self.s_minus else
GoldObservationTable.STAR
)
@staticmethod
def are_obviously_different(row1: list, row2: list) -> bool:
"""
Checks whether two rows are obviously different.
Args:
row1 (list): A row of this :py:class:`GoldObservationTable`.
row2 (list): A row of this :py:class:`GoldObservationTable`.
Returns:
``True`` iff one of these two row contains at least one ``ONE``
and the other row contains at least one ZERO at a given index.
"""
return any(
(
v1 == GoldObservationTable.ONE and
v2 == GoldObservationTable.ZERO
) or (
v1 == GoldObservationTable.ZERO and
v2 == GoldObservationTable.ONE
)
for (v1, v2) in zip(row1, row2)
)
def choose_obviously_different_blue_state(self) -> int:
"""
Finds a blue state (row) that is obviously different from all the
red states.
Returns:
A state (if found), ``None`` otherwise.
"""
blue_candidates = [
blue_state
for (blue_state, blue_state_val) in self.blue_states.items()
if all(
GoldObservationTable.are_obviously_different(blue_state_val, red_state_val)
for red_state_val in self.red_states.values()
)
]
if not blue_candidates:
return None
else:
return self.blue_state_choice_func(blue_candidates)
def try_and_promote_blue(self) -> bool:
"""
Tries to find a blue state to promote (cf Gold algorithm).
If such a state is found, the function promotes it and updates this
:py:class:`GoldObservationTable` accordingly.
Returns:
``True`` iff a state has been promoted, ``False`` otherwise.
"""
blue_to_promote = self.choose_obviously_different_blue_state()
if blue_to_promote is None:
return False
self.red_states[blue_to_promote] = self.blue_states.pop(blue_to_promote)
for a in self.sigma:
if blue_to_promote + a not in self.red_states:
self.blue_states[blue_to_promote + a] = [
self.get_value_from_sample(blue_to_promote + a + suffix)
for suffix in self.exp
]
return True
def choose_compatible_red_state(self, row):
"""
Finds a red state that is compatible according to a row.
Args:
row (list): A vector of values in ``{ONE, ZERO, STAR}``,
corresponding to a blue state.
Returns:
A red state that is compatible (not obviously different)
"""
candidates = [
red_state
for (red_state, red_state_val) in self.red_states.items()
if not GoldObservationTable.are_obviously_different(row, red_state_val)
]
if not candidates:
return None
return self.red_state_choice_func(candidates)
def try_and_fill_holes(self):
"""
Tries to fill all the holes (:py:attr:`STAR`) that are in the observation
table after the promoting phase.
Returns:
``True`` if it succeeds, ``False`` otherwise.
"""
if not self.fill_holes:
return True
for (blue_state, blue_state_val) in self.blue_states.items():
red_state = self.choose_compatible_red_state(blue_state_val)
if red_state is None: # This should never happen
return False
red_state_val = self.red_states[red_state]
self.red_states[red_state] = [
red_state_val[i] if red_state_val[i] != GoldObservationTable.STAR else
blue_state_val[i]
for i in range(self.row_length)
]
self.red_states = {
red_state: [
GoldObservationTable.ONE if red_state_val[i] != GoldObservationTable.ZERO else
GoldObservationTable.ZERO
for i in range(self.row_length)
] for red_state, red_state_val in self.red_states.items()
}
for (blue_state, blue_state_val) in self.blue_states.items():
red_state = self.choose_compatible_red_state(blue_state_val)
if red_state is None:
return False
self.blue_states[blue_state] = [
blue_state_val[i] if blue_state_val[i] != self.STAR
else self.red_states[red_state][i]
for i in range(self.row_length)
]
return True
def to_automaton(self) -> tuple:
"""
Builds an automaton from the observation table information.
Returns:
A tuple ``(g, success)`` where: ``g`` is the inferred
:py:class:`pybgl.Automaton`;
``success`` equals ``True`` iff the algorithm succeeded in building
an automaton.
If ``False``, ``g`` is the Prefix Tree Acceptor (PTA) accepting ``s_plus``.
"""
if self.fill_holes:
if not self.try_and_fill_holes():
                return (self.make_pta(), False)
epsilon_idx = self.exp.index("")
states = sorted(list(self.red_states.keys()), key=lambda s: (len(s), s))
transitions = []
if self.fill_holes:
for q in states:
for a in self.sigma:
qa_val = self.red_states.get(
q + a,
self.blue_states.get(q + a, None)
)
for (r, r_val) in self.red_states.items():
if qa_val == r_val:
transitions += [(q, r, a)]
break
else:
for q in states:
for a in self.sigma:
if q + a in states:
transitions += [(q, q + a, a)]
else:
qa_val = self.blue_states.get(q + a, None)
r = self.choose_compatible_red_state(qa_val)
transitions += [(q, r, a)]
transitions = [
(states.index(q), states.index(r), a)
for (q, r, a) in transitions
]
final_states = defaultdict(
bool,
{
states.index(state): (self.red_states[state][epsilon_idx] == self.ONE)
for state in states
}
)
g = make_automaton(
transitions,
states.index(""),
make_assoc_property_map(final_states)
)
if not self.fill_holes:
if not self.is_consistent_with_samples(g):
return (self.make_pta(), False)
return (g, True)
def make_pta(self) -> Trie:
"""
Builds the PTA (Prefix Tree Acceptor) corresponding to the positive
examples related to this :py:class:`GoldObservationTable` instance.
Returns:
The corresponding :py:class:`pybgl.Trie` instance.
"""
g = Trie()
for s in self.s_plus:
g.insert(s)
return g
def is_consistent_with_samples(self, g: Automaton) -> bool:
"""
Checks if a given automaton complies with the positive and negative
examples.
_Remarks:_
- Using the :py:class:`pybgl.Automaton` implementation, this method
always returns ``True``.
- If the automaton class supports rejecting states,
:py:meth:`GoldObservationTable.is_consistent_with_samples` should be
overloaded and check whether ``g`` is consistent
with :py:attr:`self.s_plus` (positive examples) and
with :py:attr:`self.s_minus` (negative examples).
Args:
g (Automaton): An automaton instance.
Returns:
``True`` if ``g`` accepts the positive examples and rejects the negative
examples, ``False`` otherwise.
"""
return True
def to_html(self) -> str:
"""
Exports to HTML this :py:class:`GoldObservationTable` instance.
Returns:
The corresponding HTML string.
"""
def str_to_html(s: str) -> str:
return repr(s) if s else "ε"
def str_to_red_html(s: str) -> str:
return "<font color='red'>%s</font>" % str_to_html(s)
def str_to_blue_html(s: str) -> str:
return "<font color='blue'>%s</font>" % str_to_html(s)
return "<table>{header}{rows}</table>".format(
header="<tr><th></th>%s</tr>" % (
"".join(
"<td>%s</td>" % str_to_html(suffix)
for suffix in self.exp
)
),
rows="".join(
"<tr><th>{prefix}</th>{values}</tr>".format(
prefix=str_to_red_html(red_state),
values="".join(
"<td>%s</td>" % self.red_states[red_state][i]
for i in range(self.row_length)
)
) for red_state in self.red_states
) + "".join(
"<tr><th>{prefix}</th>{values}</tr>".format(
prefix=str_to_blue_html(blue_state),
values=''.join(
"<td>%s</td>" % self.blue_states[blue_state][i]
for i in range(self.row_length)
)
) for blue_state in self.blue_states
)
        )

# /regexp_learner-1.0.0.tar.gz/regexp_learner-1.0.0/src/regexp_learner/gold/observation_table.py
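The "obviously different" test above can be written compactly with a set comparison; an equivalent stdlib-only sketch (same semantics, not the class method itself):

```python
ZERO, ONE, STAR = 0, 1, "*"

def are_obviously_different(row1, row2):
    # Two rows clash iff some column holds ONE in one row and ZERO in the
    # other; a STAR ("hole") is compatible with everything.
    return any({v1, v2} == {ZERO, ONE} for v1, v2 in zip(row1, row2))

assert are_obviously_different([ONE, STAR], [ZERO, STAR])      # clash in column 0
assert not are_obviously_different([ONE, STAR], [STAR, ZERO])  # holes absorb both
```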
from pybgl.automaton import Automaton
from pybgl.html import html
from .observation_table import GoldObservationTable
def gold(
s_plus: iter,
s_minus: iter,
sigma: str ="abcdefghijklmnopqrstuvwxyz0123456789 ",
red_states: set = {""},
fill_holes: bool = False,
blue_state_choice_func: callable = min,
red_state_choice_func: callable = min,
verbose: bool = False,
) -> tuple:
"""
Runs the GOLD algorithm.
Args:
s_plus (iter): An iterable of strings that are
present in the language to infer.
s_minus (iter): An iterable of strings that are
not present in the language to infer.
sigma (str): An iterable of chars, represents the alphabet.
red_states (set): An iterable of strings, should remain default
to run Gold algorithm.
fill_holes (bool):
If ``True``, the function uses the filling holes method.
If ``False``, the function won't fill holes in the table,
but search for compatible successors when building
the automaton.
blue_state_choice_func (callable): A ``Iterable[str] -> str``
function, used to choose which blue state to promote
among the candidates.
red_state_choice_func (callable): A ``Iterable[str] -> str``
function, used to choose which red state to choose
among the red_states which are compatible with a blue one.
verbose (bool): Pass ``True`` to output in HTML
the important steps of the algorithm.
Returns:
A tuple ``(g, success)`` where:
``g`` is the inferred :py:class:`Automaton`;
``success`` equals ``True`` iff the algorithm succeeded
If ``success`` equals ``False``, then ``g`` is the Prefix Tree
Acceptor (PTA) accepting ``s_plus``.
"""
obs_table = GoldObservationTable(
s_plus,
s_minus,
sigma,
red_states=red_states,
fill_holes=fill_holes,
blue_state_choice_func=blue_state_choice_func,
red_state_choice_func=red_state_choice_func,
)
if verbose:
html(obs_table.to_html())
while obs_table.try_and_promote_blue():
if verbose:
html(obs_table.to_html())
    return obs_table.to_automaton()

# /regexp_learner-1.0.0.tar.gz/regexp_learner-1.0.0/src/regexp_learner/gold/gold.py
import re
from types import MethodType
from typing import List, Any, Callable
class TextParser:
"""
TextParser class
Builds parsers based on regular expressions.
The regular expression, used to match the text pattern, is specified in the
method's __doc__ attribute.
The name of these *parser methods* must start with `parser`.
The parser class inherits TextParser and implements the parser methods
defining the regular expression in the method's __doc__.
The parser method has two arguments, the first is the given text that is
parsed and the second is one instance of the re.Match class.
The regular expression attributes can be accessed in this argument.
The parser method is called once the regular expression returns a valid
Match object.
If one regular expression matches the given text, the method associated
to that regular expression is executed, the given text is parsed according to its
implementation and the parsed value is returned.
Otherwise, the passed text is returned without changes.
Examples
--------
    Create a class to parse integers.
    >>> class IntParser(TextParser):
    ...     def parseInteger(self, text, match):
    ...         r'^-?\\s*\\d+$'
    ...         return eval(text)
    Instantiate and call the parse method to convert the given text.
    >>> parser = IntParser()
>>> parser.parse('1')
1
>>> parser.parse('a')
'a'
"""
def __init__(self):
self.parsers = self.__createMethodAnalyzers()
def __createMethodAnalyzers(self) -> List:
pairs = []
for methodName in dir(self):
method = getattr(self, methodName)
if methodName.startswith('parse') and type(method) is MethodType and method.__doc__:
pairs.append(buildparser(method.__doc__, method))
return pairs
def parse(self, text: str) -> Any:
for parser in self.parsers:
val = parser(text)
if val != text:
return val
return self.parseText(text)
def parseText(self, text: str) -> str:
return text
class BooleanParser(TextParser):
"""
BooleanParser class
Convert "TRUE" or "FALSE" (with any case combination) to boolean objects.
Examples
--------
>>> parser.parse('True')
True
>>> parser.parse('FALSE')
False
"""
def parseBoolean(self, text: str, match: re.Match) -> bool:
r'^([Tt][Rr][Uu][Ee]|[Ff][Aa][Ll][Ss][Ee])$'
return eval(text.lower().capitalize())
class NumberParser(TextParser):
"""
NumberParser class
Convert text with numbers to int and float objects.
Examples
--------
>>> parser.parse('1')
1
>>> parser.parse('- 1.1')
-1.1
>>> parser.parse('1,000.1')
1000.1
"""
def parseInteger(self, text: str, match: re.Match) -> int:
r'^-?\s*\d+$'
return eval(text)
def parse_number_decimal(self, text: str, match: re.Match) -> float:
r'^-?\s*\d+\.\d+?$'
return eval(text)
def parse_number_with_thousands(self, text: str, match: re.Match) -> float:
r'^-?\s*(\d+[,])+\d+[\.]\d+?$'
text = text.replace(',', '')
return eval(text)
class PortugueseRulesParser(TextParser):
"""
PortugueseRulesParser class
Convert text to float and boolean according to Brazilian Portuguese conventions.
Examples
--------
>>> parser.parse('1,1')
1.1
>>> parser.parse('- 1.000,1')
-1000.1
>>> parser.parse('Sim')
True
>>> parser.parse('Não')
False
"""
def parseBoolean_ptBR(self, text: str, match: re.Match) -> bool:
r'^(sim|Sim|SIM|n.o|N.o|N.O)$'
return text[0].lower() == 's'
def parseBoolean_ptBR2(self, text: str, match: re.Match) -> bool:
r'^(verdadeiro|VERDADEIRO|falso|FALSO|V|F|v|f)$'
return text[0].lower() == 'v'
def parse_number_with_thousands_ptBR(self, text: str, match: re.Match) -> float:
r'^-?\s*(\d+\.)+\d+,\d+?$'
text = text.replace('.', '')
text = text.replace(',', '.')
return eval(text)
def parse_number_decimal_ptBR(self, text: str, match: re.Match) -> float:
r'^-?\s*\d+,\d+?$'
text = text.replace(',', '.')
return eval(text)
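`PortugueseRulesParser` above relies on two regexes plus string surgery. A standalone sketch of the same rules (same patterns, but `float()` instead of `eval()`; the helper name `parse_ptbr_number` is mine, not part of the package):

```python
import re

def parse_ptbr_number(text):
    # Thousands use '.', the decimal separator is ',' (pt-BR convention).
    if re.match(r'^-?\s*(\d+\.)+\d+,\d+?$', text):
        return float(text.replace('.', '').replace(',', '.').replace(' ', ''))
    if re.match(r'^-?\s*\d+,\d+?$', text):
        return float(text.replace(',', '.').replace(' ', ''))
    return text  # unmatched text falls through unchanged, as in TextParser.parse
```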
def textparse(text: str, regex: str, func: Callable[[str, re.Match], Any]) -> Any:
"""
Parses the argument text with the function func once it matches the
regular expression regex.
Parameters
----------
text: str
Given text to be parsed.
regex: str
Regular expression to match the desired pattern.
func: function
Function that parses the given text once it matches the regular expression.
Returns
-------
Any
Returns the parsed object as result of the parsing.
Examples
--------
>>> textparser.textparse('TRUe', r'^([Tt][Rr][Uu][Ee]|[Ff][Aa][Ll][Ss][Ee])$', lambda t, m: eval(t.lower().capitalize()))
True
>>> textparser.textparse('1,1', r'^-?\\s*\\d+[\\.,]\\d+?$', lambda t, m: eval(t.replace(',', '.')))
1.1
"""
parser = buildparser(regex, func)
return parser(text)
def buildparser(regex: str, func: Callable[[str, re.Match], Any]) -> Callable[[str], Any]:
"""
Builds a parser that parses a given text according to regex and func.
Parameters
----------
regex: str
Regular expression to match the desired pattern.
func: function
Function that parses the given text once it matches the regular expression.
Returns
-------
Callable[[str], Any]
Returns a callable object that receives the text to be parsed and returns
the result of the parsing.
Examples
--------
>>> num_parser = textparser.buildparser(r'^-?\\s*\\d+[\\.,]\\d+?$', lambda t, m: eval(t.replace(',', '.')))
>>> num_parser('1.1')
1.1
"""
_regex = re.compile(regex)
def _func(text):
match = _regex.match(text)
return func(text, match) if match else text
return _func
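`buildparser` is the core of the module: everything else layers dispatch on top of it. Exercised end to end (the helper is reproduced verbatim so the snippet is self-contained; `float()` replaces `eval()`):

```python
import re

def buildparser(regex, func):
    _regex = re.compile(regex)
    def _func(text):
        match = _regex.match(text)
        # parse on a match, pass the text through otherwise
        return func(text, match) if match else text
    return _func

num_parser = buildparser(r'^-?\s*\d+[\.,]\d+?$',
                         lambda t, m: float(t.replace(',', '.')))
```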
class GenericParser(NumberParser, BooleanParser):
pass
import abc
import traceback
import warnings
# pylint: disable=missing-class-docstring, missing-function-docstring
class RegisterEntryAbstract(metaclass=abc.ABCMeta):
"""Abstract class handling dict-like access to Field of the register."""
def __init__(self, **kwargs):
"""Constructor - Mandatory kwargs: regfile and address offset."""
object.__setattr__(self, "_lock", False)
self.regfile = kwargs.pop("regfile")
self.addr = int(kwargs.pop("addr"))
self.write_mask = int(kwargs.pop("write_mask", -1))
self._fields = kwargs.pop("fields", {})
self._writable_fieldnames = kwargs.pop("_writable_fieldnames", None)
self._userattributes = tuple(kwargs.keys())
for attr, value in kwargs.items():
self.__setattr__(attr, value)
self._lock = True
@abc.abstractmethod
def _get_value(self):
"""Abstract method to get the value of the register."""
@abc.abstractmethod
def _set_value(self, value, mask):
"""Abstract method to set the value of the register."""
def __iter__(self):
"""Iterator over the (name, fieldvalue) mainly for dict() conversion."""
int_value = self._get_value()
for key, value in self._fields.items():
yield key, value.get_field(int_value)
def items(self):
"""Providing the tuple (fieldname, field[RegisterField]) for (self-)inspection."""
return self._fields.items()
def get_field_names(self):
"""Returns a copy of the field's dictionary's list of keys (fieldnames)."""
return self._fields.keys()
def __getitem__(self, key):
"""Dict-like access to read a value from a field."""
if key in self._fields:
return self._fields[key].get_field(self._get_value())
raise KeyError(
f"Field {key} does not exist. "
f"Available fields: {list(self._fields.keys())}"
) # pragma: nocover
def __setattr__(self, name, value):
if self._lock is True and name not in self.__dict__:
raise AttributeError(
f"Unable to allocate attribute {name} - Instance is locked."
)
super().__setattr__(name, value)
def __setitem__(self, key, value):
"""Dict-like access to write a value to a field."""
if key not in self._fields:
raise KeyError(
f"Field {key} does not exist. "
f"Available fields: {list(self.get_field_names())}"
) # pragma: nocover
field = self._fields[key]
truncval = self._fit_fieldvalue_for_write(field, value)
self._set_value(truncval << field.lsb, field.get_mask())
def _fit_fieldvalue(self, field, value):
"""Truncate a value to fit into the field is necessary and raise a warning."""
mask = field.get_mask()
fieldmask = mask >> field.lsb
truncval = value & fieldmask
if value != truncval:
_regfile_warn_user(
f"{field.name}: value 0x{value:x} is truncated to 0x{truncval:x} "
f"(mask: 0x{fieldmask})."
)
return truncval
def _fit_fieldvalue_for_write(self, field, value):
"""Additional to the truncation, check if field is writable."""
mask = field.get_mask()
if mask & self.write_mask != mask:
_regfile_warn_user(
f"Writing read-only field {field.name} (value: 0x{value:08x} -- "
f"mask: 0x{mask:08x} write_mask: 0x{self.write_mask:08x})."
)
return self._fit_fieldvalue(field, value)
def get_dict(self, int_value=None):
"""Get dictionary field view of the register.
If int_value is not specified a read will be executed,
otherwise the int_value is decomposed to the fields"""
if int_value is None:
int_value = self._get_value()
return {name: field.get_field(int_value) for name, field in self.items()}
def get_value(self, field_dict=None):
"""Return the integer view of the register.
If field_dict is not specified a read will be executed,
otherwise the dict is composed to get the integer value"""
if field_dict is None:
return self._get_value()
if isinstance(field_dict, dict):
value = 0
for fieldname, fieldvalue in field_dict.items():
field = self._fields[fieldname]
value |= self._fit_fieldvalue(field, fieldvalue) << field.lsb
return value
raise TypeError(
f"Unable to get_value for type {type(field_dict)} -- {str(field_dict)}."
) # pragma: nocover
def get_writable_fieldnames(self):
"""Return a copied list containing all writable fieldnames"""
return list(self._writable_fieldnames)
def writable_field_items(self):
"""Tuple of object over all writable fieldnames"""
writable_fields = []
for name, field in self.items():
field_mask = field.get_mask()
if field_mask & self.write_mask == field_mask:
writable_fields.append((name, field))
return writable_fields
def get_name(self):
"""Get the name of the register, if set otherwise return UNNAMED"""
name = "UNNAMED"
if hasattr(self, "name"):
name = self.name
return name
def __str__(self):
"""Read the register and format it decomposed as fields as well as integer value."""
int_value = self._get_value()
strfields = []
for name, field in self.items():
strfields.append(f"'{name}': 0x{field.get_field(int_value):x}")
return (
f"Register {self.get_name()}: {{{', '.join(strfields)}}} = 0x{int_value:x}"
)
def set_value(self, value, mask=None):
"""Set the value of register. The value can be an integer a dict or
a register object (e.g. obtained by 'read_entry()'."""
if mask is None:
mask = self.write_mask
if isinstance(value, int):
self._set_value(value, mask)
elif isinstance(value, dict):
writable_fieldnames = self.get_writable_fieldnames()
write_value = 0
for fieldname, fieldvalue in value.items():
if fieldname in writable_fieldnames:
writable_fieldnames.remove(fieldname)
field = self._fields[fieldname]
write_value |= (
self._fit_fieldvalue_for_write(field, fieldvalue) << field.lsb
)
elif fieldname not in self.get_field_names():
_regfile_warn_user(
f"Ignoring non existent Field {fieldname} for write."
)
if writable_fieldnames:
_regfile_warn_user(
f"Field(s) {', '.join(writable_fieldnames)} were not explicitly "
f"set during write of register {self.get_name()}!"
)
self._set_value(write_value, self.write_mask)
elif isinstance(value, RegisterEntry):
self._set_value(value.get_value(), self.write_mask)
else:
raise TypeError(
f"Unable to assign type {type(value)} " f"-- {str(value)}."
) # pragma: nocover
def __int__(self):
"""Integer conversion - executes a read"""
return self.get_value()
def __eq__(self, other):
"""Equal comparison with integer -executes a read"""
if isinstance(other, int):
return self.get_value() == other
return super().__eq__(other) # pragma: nocover
def __lt__(self, other):
"""Less-than comparison with integer -executes a read"""
if isinstance(other, int):
return self.get_value() < other
return super().__lt__(other) # pragma: nocover
def __le__(self, other):
"""Less-than/equal comparison with integer -executes a read"""
if isinstance(other, int):
return self.get_value() <= other
return super().__le__(other) # pragma: nocover
def __gt__(self, other):
"""Greater-than comparison with integer -executes a read"""
if isinstance(other, int):
return self.get_value() > other
return super().__gt__(other) # pragma: nocover
def __ge__(self, other):
"""Greater-than/equal comparison with integer -executes a read"""
if isinstance(other, int):
return self.get_value() >= other
return super().__ge__(other) # pragma: nocover
class RegisterEntry(RegisterEntryAbstract):
"""Register Entry class - adding .represent() initialization for fields
and UVM-like access"""
def __init__(self, **kwargs):
"""Constructor see also RegisterEntryAbstract"""
super().__init__(**kwargs)
self._lock = False
self._add_fields_mode = kwargs.pop("_add_fields_mode", False)
self.desired_value = kwargs.pop("desired_value", 0)
self.mirrored_value = kwargs.pop("mirrored_value", 0)
self._reset = kwargs.pop("_reset", 0)
self._lock = True
def _get_value(self):
"""Get value returns the mirrored value."""
return self.mirrored_value
def _set_value(self, value, mask):
"""Set value updates the desired and mirrored value"""
self.desired_value = (self.desired_value & ~mask) | (value & mask)
self.mirrored_value = self.desired_value
def __enter__(self):
"""The with statement allows to add fields to the register -
with reg as add_fields_register:
add_fields_register.represent(name="FIELDNAME", bits=(msb,lsb), reset=0x0, ...)
"""
self._add_fields_mode = True
return self
def __exit__(self, exception_type, exception_value, exception_traceback):
"""Lock the register fields - sort-out the writable_fieldnames
with the help of the write_mask"""
# TODO: sanity check write mask
self._add_fields_mode = False
self._writable_fieldnames = tuple(
name for name, _ in self.writable_field_items()
)
def __getitem__(self, key):
"""Add the represent() logic to the dict-like access method"""
if self._add_fields_mode:
def represent(**kwargs):
bits = kwargs["bits"].split(":")
msb = int(bits[0])
lsb = int(bits[1]) if len(bits) == 2 else msb
field = RegisterField(name=key, msb=msb, lsb=lsb, **kwargs)
self._fields[key] = field
if "reset" in kwargs:
reset = int(kwargs["reset"], 0) << lsb
truncreset = reset & field.get_mask()
if truncreset != reset:
_regfile_warn_user(
f"{key}: reset value 0x{reset >> lsb:x} "
f"is truncated to 0x{truncreset >> lsb:x}."
) # pragma: nocover
self._reset |= truncreset
self.desired_value = self._reset
self.mirrored_value = self._reset
return field
return SyntasticSugarRepresent(represent)
return super().__getitem__(key)
def get_reset_values(self):
"""Get iterator object of the tuple (fieldname, resetvalue) for writable fields only."""
return {
fieldname: field.get_field(self._reset)
for fieldname, field in self.writable_field_items()
}
def field(self, name):
"""Get the field by name and add callback for UVM-like set() method of fields"""
field = self._fields[name]
if not hasattr(field, "set"):
def setfunc(value):
self.desired_value &= ~field.get_mask()
self.desired_value |= (
self._fit_fieldvalue_for_write(field, value) << field.lsb
)
setattr(field, "set", setfunc)
return field
def __getattr__(self, name):
"""Allow member access of fields - must have '_f' as suffix (<FIELDNAME>_f)."""
if name[-2:] == "_f" and name[:-2] in self._fields:
return self.field(name[:-2])
raise AttributeError(
f"Attribute {name} does not exist nor is a valid fieldname. "
f"Available fields: {list(self._fields.keys())}"
)
def get_register_entry(self, value):
"""Return a new RegisterEntry (shallow copy)."""
userattr = {}
for attr in ("regfile", "addr", "write_mask", "_fields", "_reset"):
userattr[attr] = getattr(self, attr)
for attr in self._userattributes:
userattr[attr] = getattr(self, attr)
return RegisterEntry(mirrored_value=value, desired_value=value, **userattr)
def read_entry(self):
"""Reads the value and returns a new RegisterEntry."""
return self.get_register_entry(self._get_value())
def set(self, value):
"""UVM-like - Set the desired value for this register."""
self.desired_value = value
def get(self):
"""UVM-like - Return the desired value of the fields in the register."""
return self.desired_value
def get_mirrored_value(self):
"""UVM-like - Return the mirrored value of the fields in the register."""
return self.mirrored_value
def get_mirrored_dict(self):
"""UVM-like - Variation of get_mirrored_value() return a dict instead of an int"""
return self.get_dict(self.mirrored_value)
def get_mirrored_reg(self):
"""UVM-like - Variation of get_mirrored_value() return a reg instead of an int"""
return self.get_register_entry(self.mirrored_value)
def needs_update(self):
"""UVM-like - Returns True if any of the fields need updating"""
return self.desired_value != self.mirrored_value
def reset(self):
"""UVM-like - Reset the desired/mirrored value for this register."""
self.desired_value = self._reset
self.mirrored_value = self._reset
def get_reset(self):
"""UVM-like - Get the specified reset value for this register."""
return self._reset
def write(self, *args, **kwargs):
"""UVM-like - Write the specified value in this register."""
if len(args) == 1 and not kwargs:
self.set_value(args[0])
elif not args and kwargs:
self.set_value(kwargs)
else:
raise ValueError("write just takes one dict or kwargs as argument.")
def read(self):
"""UVM-like - Read the current value from this register."""
return self.get_value()
def update(self):
"""UVM-like - Updates the content of the register in the design
to match the desired value."""
self._set_value(self.desired_value, self.write_mask)
def get_field_by_name(self, name):
"""UVM-like - Return the fields in this register."""
return self.field(name)
def get_offset(self):
"""UVM-like - Returns the offset of this register."""
return self.addr
def get_address(self):
"""UVM-like - Returns the base external physical address of this register"""
return self.regfile.get_base_addr() + self.addr
def write_update(self, *args, **kwargs):
"""Wrapper function around set() fields defined by value[dict] and update()"""
if args:
if len(args) == 1 and isinstance(args[0], dict):
for field_name, field_value in args[0].items():
self.field(field_name).set(field_value)
else:
raise ValueError("write_update just takes one dict as argument.")
for field_name, field_value in kwargs.items():
self.field(field_name).set(field_value)
self.update()
class RegfileEntry(RegisterEntry):
def _get_value(self):
value = self.regfile.read(self)
if self.needs_update():
_regfile_warn_user(
f"Register {self.get_name()}: Desired value 0x{self.desired_value:x} "
f"has never been written via update() "
f" --> mirrored value is 0x{self.mirrored_value:x}.\n"
f"Reseting desired/mirrored value by readvalue 0x{value:x}"
)
self.desired_value = value
self.mirrored_value = value
return value
def _set_value(self, value, mask):
super()._set_value(value, mask)
self.regfile.write(self, value, mask)
class RegisterField:
def __init__(self, **kwargs):
self.name = kwargs.pop("name")
self.msb = kwargs.pop("msb")
self.lsb = kwargs.pop("lsb")
for key, value in kwargs.items():
self.__setattr__(key, value)
self.__mask = (1 << (self.msb + 1)) - (1 << self.lsb)
def get_field(self, intvalue):
return (intvalue & self.__mask) >> self.lsb
def get_mask(self):
return self.__mask
def __str__(self):
return f"{self.name}"
def _regfile_warn_user(msg):
fstacklevel = len(traceback.extract_stack()) + 1
for stacktrace in traceback.extract_stack():
if __file__ == stacktrace[0]:
break
fstacklevel -= 1
warnings.warn(msg, UserWarning, stacklevel=fstacklevel)
class Regfile:
def __init__(self, rfdev, base_addr=0x0, **kwargs):
object.__setattr__(self, "_lock", False)
self._dev = rfdev
self.__value_mask = (1 << (8 * self._dev.n_word_bytes)) - 1
self.__base_addr = base_addr
self._entries = {}
self.__add_entry_mode = False
self.name = kwargs.pop("name", f"{type(self).__name__}@0x{base_addr:x}")
self._lock = True
def __enter__(self):
self.__add_entry_mode = True
return self
def __exit__(self, exception_type, exception_value, exception_traceback):
self.__add_entry_mode = False
def read(self, entry):
return self._dev.read(self.get_base_addr(), entry)
def write(self, entry, value, mask):
regvalue = value & self.__value_mask
if value != regvalue:
_regfile_warn_user(
f"Value 0x{value:x} is too large to fit into "
f"the specified word size ({self._dev.n_word_bytes}), "
f"truncated to 0x{regvalue:x} / 0x{self.__value_mask:x}."
)
self._dev.write(self.get_base_addr(), entry, value, mask)
def get_base_addr(self):
return self.__base_addr
def get_rfdev(self):
return self._dev
def set_rfdev(self, dev):
self._dev = dev
def __iter__(self):
return iter(self._entries.values())
def items(self):
return self._entries.items()
def keys(self):
return self._entries.keys()
def values(self):
return self._entries.values()
def __getitem__(self, key):
if key not in self._entries:
if self.__add_entry_mode:
def represent(**kwargs):
kwargs.setdefault("name", key)
self._entries[key] = RegfileEntry(regfile=self, **kwargs)
return self._entries[key]
return SyntasticSugarRepresent(represent)
raise KeyError(f"Regfile has no entry named '{key}'.")
return self._entries[key]
def __setattr__(self, name, value):
if self._lock is True and name not in self.__dict__:
raise AttributeError(
f"Unable to allocate attribute {name} - Instance is locked."
)
super().__setattr__(name, value)
def __getattr__(self, name):
if name[-2:] == "_r" and name[:-2] in self._entries:
return self._entries[name[:-2]]
raise AttributeError(f"Attribute {name} does not exist")
def __setitem__(self, key, value):
if key not in self._entries:
raise KeyError(f"Regfile has no entry named '{key}'.")
self._entries[key].write(value)
def reset_all(self):
for regs in self._entries.values():
regs.reset()
class SyntasticSugarRepresent:
# pylint: disable=too-few-public-methods
__slots__ = ("represent",)
def __init__(self, represent):
self.represent = represent
class RegfileMemAccess:
def __init__(self, rfdev, base_addr, **kwargs):
self._dev = rfdev
if kwargs["size"]:
self.index_range = kwargs["size"] // self._dev.n_word_bytes
self.__check_idx_func = self.__check_idx
else:
self.__check_idx_func = lambda self, index: None
self.__base_addr = base_addr
def __check_idx(self, index):
if index >= self.index_range:
raise IndexError(f"Index {index} is out of bounds")
def __getitem__(self, index):
self.__check_idx_func(index)
return self._dev.rfdev_read(self.__base_addr + self._dev.n_word_bytes * index)
def __setitem__(self, index, value):
self.__check_idx_func(index)
self._dev.rfdev_write(
self.__base_addr + self._dev.n_word_bytes * index, value, -1, -1
)
def get_rfdev(self):
return self._dev
def set_rfdev(self, dev):
self._dev = dev
def get_base_addr(self):
return self.__base_addr
def read_image(self, addr, size):
image = size * [0]
self._dev.readwrite_block(self.__base_addr + addr, image, False)
return image
def write_image(self, addr, image):
self._dev.readwrite_block(self.__base_addr + addr, image, True)
# Simpler regular expressions in Python
[](https://travis-ci.org/romilly)
Regular Expressions are powerful but they can be hard to tame.
There’s an old programmer’s joke:
A programmer has a problem, and she decides to use regular expressions.
Now she has two problems.
Unless you use regular expressions regularly (sorry for the awful pun!), they can be
* Hard to read
* Hard to write
* Hard to debug
*reggie* makes regular expressions readable and easy to use.
I've used regular expressions for a number of projects over the years, but the idea for *reggie* came
about a decade ago.
I was working on a project where we had to parse CDRs (Call Detail Records). These are text files
that Telecommunication Service Providers use to identify calls and other services billable to
their customers.
The CDRs we received were CSV files with a complex format and we decided to use regexes
(Regular Expressions) to verify and interpret them.
We liked regexes, but they were less popular with our business analysts.
The BAs were happy with our Java code, but they found the regexes much less readable.
[Nat Pryce](http://www.natpryce.com/) and I came up with a simple DSL (Domain Specific Language) which we used to
describe and then analyse the CDRs. That made it much easier for our BAs and Testers to
understand our code, and developers joining the team mastered the code quickly.
The Java syntax was a bit messy, though. These days I do most of my development in
Python or APL, and a couple of years ago I realised that Python's syntax would allow a
much cleaner implementation, which I called *reggie*.
It's been useful for several projects, and I decided to share it. You can now use *reggie* for your own work; it's
[available on GitHub](https://github.com/romilly/reggie).
## A simple example
Here's the solution to a classic problem, converting variants of North American
telephone numbers to international format.
(The code is in the examples directory).
from reggie.core import *
d3 = digit + digit + digit
d4 = d3 + digit
international = name(optional(escape('+1')),'i')
area = optional(osp + lp + name(d3,'area') + rp)
local = osp + name(d3,'exchange') + dash + name(d4,'number')
number = international + area + local
def convert(text, area_default='123'):
matched = match(number, text)
if matched is None:
return None
default(matched, 'i','+1')
default(matched, 'area', area_default)
return '{i} {area} {exchange} {number}'.format(**matched)
print(convert('(123) 345-2192'))
print(convert('345-2192'))
print(convert('+1 (123) 345-2192'))
Here's the output:
+1 123 345 2192
+1 123 345 2192
+1 123 345 2192
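For comparison, here is roughly the plain `re` pattern that the reggie definition above stands in for (the exact pattern reggie generates may differ):

```python
import re

PATTERN = re.compile(
    r'^(?P<i>\+1)?\s*(?:\((?P<area>\d{3})\))?\s*'
    r'(?P<exchange>\d{3})-(?P<number>\d{4})$'
)

def convert(text, area_default='123'):
    m = PATTERN.match(text)
    if m is None:
        return None
    parts = m.groupdict()
    # same defaulting as in the reggie version
    parts['i'] = parts['i'] or '+1'
    parts['area'] = parts['area'] or area_default
    return '{i} {area} {exchange} {number}'.format(**parts)
```

The DSL version buys readability: each named piece (`international`, `area`, `local`) is a reusable object rather than a run of punctuation.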
## CDR Example
The next example is inspired by the original Java-based project.
Here's a **simplified** example of the format of CDRs (Call Data Records) for UK Land Lines and a raw regex
(ugh) that could be used to parse it.
Type, Caller, Called, Date, Time, Duration, Call Class
N, +448000077938, +441603761827, 09/08/2015, 07:00:12, 2,
N, +448450347920, +441383626902, 27/08/2015, 23:53:15, 146,
V, +441633614985, +441633435179, 27/08/2015, 18:36:14, 50,
V, +441733360100, +441733925173, 12/08/2015, 02:49:24, 78,
V, +442074958968, , 05/08/2015, 08:01:11, 9, CALLRETURN
D, +441392517158, +447917840223, 22/08/2015, 10:14:39, 2,
V, +441914801773, , 18/08/2015, 17:29:50, 7, ALARM CALL
Regex: (one long line, split for readability, not a pretty sight!)
(?P<type>(N|V|D)),(?P<caller>\+[0-9]{12,15}),(?P<called>(?:\+[0-9]{12,15})?)
(?P<date>[0-9]{2,2}/[0-9]{2,2}/[0-9]{4,4}),(?P<time>[0-9]{2,2}:[0-9]{2,2}:[0-9]{2,2}),
(?P<duration>[0-9]+),(?P<call_class>[A-Z]{0,50})
For educational purposes, I've adapted the CDR specification/sample data from the
official standard provided by [FCS](http://www.fcs.org.uk/member-groups/billing).
I've also omitted a lot of fields and inserted spaces to clarify the layout.
The spaces are not present in a real file and the regex examples correctly assume
there are no spaces in a real CDR.
Below it you can see the definition of the format using **reggie**.
from reggie.core import *
call_type = name(one_of('N','V','D'),'call_type')
number = plus + multiple(digit, 12, 15)
dd = multiple(digit, 2, 2)
year = multiple(digit, 4, 4)
date = name(dd + slash + dd + slash + year,'date')
time = name(dd + colon + dd + colon + dd,'time')
duration = name(multiple(digit),'duration')
cc = name(multiple(capital, 0, 50),'call_class')
cdr = csv(call_type, name(number,'caller'), name(optional(number),'called'),date, time, duration, cc)
Now you can try to parse some CDRs. The last one is wrongly formatted.
Parsing a well-formed record returns a Python dictionary.
Parsing an ill-formed record returns the value `None`.
So running
print(cdr.matches('N,+448000077938,+441603761827,09/08/2015,07:00:12,2,'))
print(cdr.matches('V,+442074958968,,05/08/2015,08:01:11,9,CALLRETURN'))
print(cdr.matches('Rubbish!'))
will print
{'caller': '+448000077938', 'call_type': 'N', 'date': '09/08/2015', 'called': '+441603761827', 'duration': '2', 'time': '07:00:12'}
{'caller': '+442074958968', 'call_type': 'V', 'date': '05/08/2015', 'duration': '9', 'call_class': 'CALLRETURN', 'time': '08:01:11'}
None
## Other Applications
I've used *reggie* to build parsers for simple languages. Regular Expressions have some limitations -
in particular, you can't use standard regular expressions to analyse recursive grammars.
But there are a surprising number of applications where a DSL without recursion does the trick.
I've recently been working on an emulator for the venerable (and wonderful) PDP-8 computer, and I needed to create a
Python assembler for PAL, the PDP-8's assembly language. The assembler uses *reggie* and it's about an
A4 page of fairly simple code.
I've also been working on [Breadboarder](https://github.com/romilly/breadboarder),
a DSL for documenting breadboard-based electronics projects. At present you have
to define your project using Python code but I think I can use *reggie* to create a natural-language version.
Watch this space!
Morten Kromberg and I have also been experimenting with an APL version of *reggie*. APL syntax is well suited to
*reggie*, and the experiments look very promising. We've included our current APL prototype in the main GitHub project
and I'll blog about that once the code has stabilised.
In the meantime, take a look at the Python version and leave a comment to let me know what you think of it!
# Learning and inference for a Regime-Switching Model based on the idea of
# hidden Markov models
# Parts of the code are from the Python package 'hmmlearn'
# Their source code can be found here: https://github.com/hmmlearn/hmmlearn
# Author: Liuyi Hu
from __future__ import print_function
import sys
import string
from collections import deque
import numpy as np
from numpy.linalg import multi_dot
from numpy.linalg import inv
from scipy.stats import multivariate_normal
from sklearn.base import BaseEstimator, _pprint
from sklearn.utils import check_array, check_random_state
from sklearn.utils.validation import check_is_fitted
from sklearn.cluster import KMeans
from .utils import normalize
class ConvergenceMonitor(object):
"""Monitors and reports convergence to :data:'sys.stderr'.
Parameters
----------
tol: double
Convergence threshold. EM is considered converged either when the
maximum number of iterations is reached or when the log probability
improvement between two consecutive iterations is less than the
threshold.
n_iter: int
Maximum number of iterations to perform
verbose: bool
If "True" then per-iteration convergence reports are printed,
otherwise the monitor is mute
Attributes
----------
history: deque
The log probability of the data for the last two training
iterations. If the values are not strictly increasing, the
model did not converge.
iter: int
Number of iterations performed while training the model
"""
_template = "{iter:>10d} {logprob:>16.4f} {delta:>16.4f}"
def __init__(self, tol, n_iter, verbose):
self.tol = tol
self.n_iter = n_iter
self.verbose = verbose
self.history = deque(maxlen=2)
self.iter = 0
def __repr__(self):
class_name = self.__class__.__name__
params = dict(vars(self), history=list(self.history))
return "{0}({1})".format(class_name, _pprint(params, offset=len(class_name)))
def report(self, logprob):
"""Reports convergence to :data:'sys.stderr'.
The output consists of three columns: iteration number, log probability
of the data at the current iteration and convergence rate. At the first
iteration, convergence rate is unknown and is thus denoted by NaN.
Parameters
----------
logprob: float
The log probability of the data as computed by EM algorithm in the
current iteration
"""
if self.verbose:
delta = logprob - self.history[-1] if self.history else np.nan
message = self._template.format(iter=self.iter+1, logprob=logprob, delta=delta)
print(message, file=sys.stderr)
self.history.append(logprob)
self.iter += 1
@property
def converged(self):
"""``True`` if the EM algorithm converged and ``False`` otherwise
"""
return (self.iter == self.n_iter
or (len(self.history) == 2
and 0 <= self.history[1] - self.history[0] <= self.tol))
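The stopping rule in `converged` can be exercised on its own; `history` mimics the two-element deque kept by the monitor:

```python
from collections import deque

# Stop once the maximum iteration count is hit, or once the gain between
# the last two log-likelihoods is non-negative and at most `tol`.
def converged(history, tol, it, n_iter):
    return (it == n_iter
            or (len(history) == 2
                and 0 <= history[1] - history[0] <= tol))

history = deque(maxlen=2)  # keeps only the last two log probabilities
for logprob in (-120.0, -100.5, -100.4999):
    history.append(logprob)
# history is now [-100.5, -100.4999]: a gain of 1e-4, within a tol of 1e-3
```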
class HMMRS(BaseEstimator):
"""Hidden Markov Models with Regime-Switch Model
Parameters
----------
random_state: RandomState or an int seed, optional
A random number generator instance.
n_components: int
Number of states in the model
n_iter: int, optional
Maximum number of iterations to perform
tol: float, optional
Convergence threshold for the EM algorithm
verbose: bool, optional
When ``True`` per-iteration convergence reports are printed
to :data:`sys.stderr`. You can diagnose convergence via the
:attr:`monitor_` attribute.
init_params : string, optional
Controls which parameters are initialized prior to
training. Can contain any combination of 's' for
startprob, 't' for transmat, and other characters for
subclass-specific emission parameters. Defaults to all
parameters.
Attributes
----------
monitor_ : ConvergenceMonitor
Monitor object used to check the convergence of EM.
startprob_ : array, shape (n_components, )
Initial state occupation distribution.
transmat_ : array, shape (n_components, n_components)
Matrix of transition probabilities between states.
loadingmat_: array, shape (n_components, # of stocks, # of risk factors + 1)
Loading matrix B for the Regime-Switch model under different states (with intercept)
covmat_: array, shape (n_components, # of stocks, # of stocks)
Covariance matrix in the Regime-Switch model
"""
def __init__(self, random_state=None, n_components=1, n_iter=1000, tol=1e-4, verbose=False, init_params=string.ascii_letters):
self.random_state = random_state
self.n_components = n_components
self.n_iter = n_iter
self.tol = tol
self.verbose = verbose
self.init_params = init_params
def _init(self, Y, F):
"""Initializes model parameters prior to fitting
Parameters
----------
Y: array-like, shape(n_samples, n_features)
Stock price matrix of individual samples
F: array-like, shape(n_samples, # of risk factors + 1))
Risk factor matrix of individual samples
"""
init = 1. / self.n_components
if 's' in self.init_params or not hasattr(self, "startprob_"):
self.startprob_ = np.full(self.n_components, init)
if 't' in self.init_params or not hasattr(self, "transmat_"):
self.transmat_ = np.full((self.n_components, self.n_components), init)
# apply the K-means algorithm to initialize the mean and covariance
kmeans = KMeans(n_clusters=self.n_components, random_state=0).fit(Y)
if 'l' in self.init_params or not hasattr(self, "loadingmat_"):
self.loadingmat_ = np.random.rand(self.n_components, Y.shape[1], F.shape[1])
for component in range(self.n_components):
self.loadingmat_[component,:,0] = np.mean(Y[kmeans.labels_ == component])
if 'c' in self.init_params or not hasattr(self, "covmat_"):
self.covmat_ = np.zeros((self.n_components, Y.shape[1], Y.shape[1]))
for component in range(self.n_components):
self.covmat_[component,:] = np.cov(Y[kmeans.labels_ == component], rowvar=False)
def _check(self):
"""Validates model parameters prior to fitting.
Raises
------
ValueError
If any of the parameters are invalid, e.g. if :attr:`startprob_` does not sum to 1.
"""
self.startprob_ = np.asarray(self.startprob_)
if len(self.startprob_) != self.n_components:
raise ValueError("startprob_ must have length n_components")
if not np.allclose(self.startprob_.sum(), 1.0):
raise ValueError("startprob_ must sum to 1.0 (got {0:.4f})".format(self.startprob_.sum()))
self.transmat_ = np.asarray(self.transmat_)
if self.transmat_.shape != (self.n_components, self.n_components):
raise ValueError("transmat_ must have shape (n_components, n_components)")
if not np.allclose(self.transmat_.sum(axis=1), 1.0):
raise ValueError("rows of transmat_ must sum to 1.0 (got {0})".format(self.transmat_.sum(axis=1)))
def _compute_likelihood(self, Y, F, log=False):
"""Computes per-component probability under the mdoel
P(Y_t∣h_t = i)
Parameters
----------
Y: array-like, shape(n_samples, n_features)
Stock price matrix of individual samples
F: array-like, shape(n_samples, # of risk factors + 1)
Risk factor matrix of individual samples
Returns
-------
prob: array, shape(n_samples, n_components)
Probability of each sample in ``Y, F`` for each of the hidden states
"""
n_samples, _ = Y.shape
prob = np.zeros((n_samples, self.n_components))
for t in range(n_samples):
for state in range(self.n_components):
m = np.dot(self.loadingmat_[state], F[t])
if log:
prob[t, state] = multivariate_normal(mean=m, cov=self.covmat_[state]).logpdf(Y[t])
else:
prob[t, state] = multivariate_normal(mean=m, cov=self.covmat_[state]).pdf(Y[t])
return prob
def _do_forward_pass(self, frameprob):
n_samples, n_components = frameprob.shape
logl = 0.0
fwdlattice = np.zeros((n_samples, n_components))
scaling_factor = np.zeros(n_samples)
for state in range(n_components):
fwdlattice[0, state] = frameprob[0, state] * self.startprob_[state]
scaling_factor[0] = fwdlattice[0].sum()
logl += np.log(scaling_factor[0])
normalize(fwdlattice[0])
for t in range(1, n_samples):
for state in range(n_components):
fwdlattice[t, state] = frameprob[t, state] * np.dot(fwdlattice[t-1], self.transmat_[:,state])
scaling_factor[t] = fwdlattice[t].sum()
logl += np.log(scaling_factor[t])
normalize(fwdlattice[t])
return fwdlattice, logl, scaling_factor
def _do_backward_pass(self, frameprob, scaling_factor):
n_samples, n_components = frameprob.shape
bwdlattice = np.zeros((n_samples, n_components))
bwdlattice[n_samples-1] = 1.0
for t in range(n_samples-2, -1, -1):
for state in range(n_components):
temp = 0.0
for l in range(n_components):
temp += bwdlattice[t+1, l] * frameprob[t+1, l] * self.transmat_[state, l]
bwdlattice[t, state] = temp / scaling_factor[t+1]
return bwdlattice
def _compute_posterior(self, fwdlattice, bwdlattice, scaling_factor, frameprob):
"""Compute P(h_t = i ∣ Y_1, ..., Y_T) and
P(h_t=i, h_t-1=j ∣ Y_1, ..., Y_T)
"""
n_samples, n_components = frameprob.shape
posterior_state = np.zeros((n_samples, n_components))
posterior_transmat = np.zeros((n_samples, n_components, n_components))
for t in range(n_samples):
for i in range(n_components):
posterior_state[t, i] = fwdlattice[t, i] * bwdlattice[t, i]
if t > 0:
for j in range(n_components):
posterior_transmat[t, i, j] = bwdlattice[t, i] * frameprob[t, i] * self.transmat_[j, i] * fwdlattice[t-1, j] / scaling_factor[t]
return posterior_state, posterior_transmat
def _do_M_step(self, posterior_state, posterior_transmat, Y, F):
n_samples, n_components = posterior_state.shape
self.startprob_ = posterior_state[0] / sum(posterior_state[0])
for i in range(n_components):
for j in range(n_components):
self.transmat_[j,i] = posterior_transmat[:, i, j].sum()
normalize(self.transmat_, axis=1)
for i in range(n_components):
self.loadingmat_[i] = multi_dot([Y.T, np.diag(posterior_state[:,i]), F, inv(multi_dot([F.T, np.diag(posterior_state[:,i]), F]))])
tmp = np.zeros((Y.shape[1], Y.shape[1]))
for t in range(n_samples):
v = np.atleast_2d(Y[t]).T - np.dot(self.loadingmat_[i], np.atleast_2d(F[t]).T)
tmp += posterior_state[t,i] * np.dot(v, v.T)
self.covmat_[i] = tmp / posterior_state[:,i].sum()
def fit(self, Y, F):
"""Estimate model parameters
Parameters
----------
Y: array-like, shape(n_samples, n_features)
Stock price matrix of individual samples
F: array-like, shape(n_samples, n_features)
Risk factor matrix of individual samples
Returns
-------
self: object
Returns self.
"""
Y = check_array(Y)
F = check_array(F)
if Y.shape[0] != F.shape[0]:
raise ValueError("rows of Y must match rows of F")
self._init(Y, F)
self._check()
self.monitor_ = ConvergenceMonitor(self.tol, self.n_iter, self.verbose)
for iter in range(self.n_iter):
frameprob = self._compute_likelihood(Y, F)
# do forward pass
fwdlattice, logl, scaling_factor = self._do_forward_pass(frameprob)
# do backward pass
bwdlattice = self._do_backward_pass(frameprob, scaling_factor)
# calculate g_{i,t} and h_{ji,t}
posterior_state, posterior_transmat = self._compute_posterior(fwdlattice, bwdlattice, scaling_factor, frameprob)
# do M-step
self._do_M_step(posterior_state, posterior_transmat, Y, F)
self.monitor_.report(logl)
if self.monitor_.converged:
break
return self
def predict(self, Y, F):
"""Find most likely state sequence corresponding to ``Y, F``
Parameters
----------
Y: array, shape(n_samples, # of stocks)
F: array, shape(n_samples, # of risk factors + 1)
Returns
-------
state_sequence: array, shape(n_samples, )
Labels for each sample from ``Y, F``
logprob: float, the log-likelihood of P(Y_1, ..., Y_T, h_1, ..., h_T)
where the hidden state sequence is the most likely sequence found by the Viterbi algorithm
viterbi_lattice: array, shape(n_samples, n_components)
the Viterbi scoring lattice of log probabilities
"""
framelogprob = self._compute_likelihood(Y, F, log=True)
n_samples, n_components = framelogprob.shape
state_sequence = np.empty(n_samples, dtype=np.int32)
viterbi_lattice = np.full((n_samples, n_components), -np.inf)
log_startprob = np.log(self.startprob_)
log_transmat = np.log(self.transmat_)
tmp = np.empty(n_components)
for i in range(n_components):
viterbi_lattice[0, i] = log_startprob[i] + framelogprob[0, i]
# Induction
for t in range(1, n_samples):
for i in range(n_components):
for j in range(n_components):
tmp[j] = viterbi_lattice[t-1, j] + log_transmat[j, i]
viterbi_lattice[t, i] = np.amax(tmp) + framelogprob[t, i]
# Backtracking
state_sequence[n_samples-1] = idx = np.argmax(viterbi_lattice[n_samples-1])
logprob = viterbi_lattice[n_samples-1, idx]
for t in range(n_samples-2, -1, -1):
for i in range(n_components):
tmp[i] = viterbi_lattice[t, i] + log_transmat[i, idx]
state_sequence[t] = idx = np.argmax(tmp)
return np.asarray(state_sequence), logprob, viterbi_lattice
def predict_streaming(self, Y, F, last_state):
"""Find most likely state when observations of ``Y, F`` becomes
available one by one.
Parameters
----------
Y: array, shape(n_samples, # of stocks)
F: array, shape(n_samples, # of risk factors + 1)
Returns
-------
state_sequence: array, shape(n_samples, )
Labels for each sample from ``Y, F``
"""
n_samples = Y.shape[0]
log_transmat = np.log(self.transmat_)
state_sequence = np.empty(n_samples, dtype=np.int32)
for t in range(n_samples):
y = np.atleast_2d(Y[t])
f = np.atleast_2d(F[t])
logprob = self._compute_likelihood(y, f, log=True)
_, n_components = logprob.shape
tmp = np.empty(n_components)
for i in range(n_components):
tmp[i] = log_transmat[last_state, i] + logprob[0, i]
state_sequence[t] = last_state = np.argmax(tmp)
return np.asarray(state_sequence)
def sample(self, F, random_state=None):
"""Generate random samples from the model.
Parameters
----------
F: array, shape(n_samples, # of risk factors + 1)
random_state : RandomState or an int seed
A random number generator instance. If ``None``, the object's
``random_state`` is used.
Returns
-------
Y : array, shape (n_samples, n_features)
Feature matrix.
state_sequence : array, shape (n_samples, )
State sequence produced by the model.
"""
check_is_fitted(self, "startprob_")
self._check()
if random_state is None:
random_state = self.random_state
random_state = check_random_state(random_state)
startprob_cdf = np.cumsum(self.startprob_)
transmat_cdf = np.cumsum(self.transmat_, axis=1)
currstate = (startprob_cdf > random_state.rand()).argmax()
state_sequence = [currstate]
Y = [self._generate_sample_from_state(currstate, F[0], random_state=random_state)]
n_samples = F.shape[0]
for t in range(n_samples-1):
currstate = (transmat_cdf[currstate] > random_state.rand()).argmax()
state_sequence.append(currstate)
Y.append(self._generate_sample_from_state(currstate, F[t+1], random_state=random_state))
return np.atleast_2d(Y), np.array(state_sequence, dtype=int)
def _generate_sample_from_state(self, state, f, random_state=None):
"""Generates a random sample from a given component
Parameters
----------
state: int
Index of the component to condition on
f: array, shape(# of risk factors + 1, )
random_state: RandomState or an int seed
A random number generator instance. If ``None``, the object's ``random_state`` is used
Returns
-------
y: array, shape(n_features, )
A random sample from the emission distribution corresponding to the given component
"""
if random_state is None:
random_state = self.random_state
random_state = check_random_state(random_state)
m = np.dot(self.loadingmat_[state], f)
y = random_state.multivariate_normal(mean=m, cov=self.covmat_[state])
return y | /regime_switch_model-0.1.1.tar.gz/regime_switch_model-0.1.1/regime_switch_model/rshmm.py | 0.844216 | 0.715213 | rshmm.py | pypi |
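The `_do_forward_pass` method above is the scaled forward algorithm: each step's normalising constant is accumulated into the total log-likelihood. A minimal, dependency-free sketch of the same recursion on a hypothetical two-state discrete-emission HMM (toy numbers, not part of the package), checked against brute-force enumeration of all hidden-state paths:

```python
import math
from itertools import product

def forward_scaled(startprob, transmat, frameprob):
    """Scaled forward pass; frameprob[t][i] = P(obs_t | state i).
    Returns log P(obs_1, ..., obs_T) via the per-step scaling constants."""
    n = len(startprob)
    alpha = [startprob[i] * frameprob[0][i] for i in range(n)]
    c = sum(alpha)
    logl = math.log(c)
    alpha = [a / c for a in alpha]  # normalize to avoid underflow
    for t in range(1, len(frameprob)):
        alpha = [frameprob[t][i] * sum(alpha[j] * transmat[j][i] for j in range(n))
                 for i in range(n)]
        c = sum(alpha)
        logl += math.log(c)
        alpha = [a / c for a in alpha]
    return logl

def brute_force(startprob, transmat, frameprob):
    """Sum P(path, obs) over every hidden-state path -- exponential, toy-sized only."""
    n, T = len(startprob), len(frameprob)
    total = 0.0
    for path in product(range(n), repeat=T):
        p = startprob[path[0]] * frameprob[0][path[0]]
        for t in range(1, T):
            p *= transmat[path[t-1]][path[t]] * frameprob[t][path[t]]
        total += p
    return math.log(total)

startprob = [0.6, 0.4]
transmat = [[0.7, 0.3], [0.2, 0.8]]
frameprob = [[0.9, 0.1], [0.4, 0.6], [0.5, 0.5]]
print(abs(forward_scaled(startprob, transmat, frameprob)
          - brute_force(startprob, transmat, frameprob)) < 1e-12)  # True
```

The scaled recursion returns exactly the same log-likelihood as summing over all paths, while keeping every intermediate `alpha` bounded in [0, 1].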
from abc import ABCMeta, abstractmethod
import pandas as pd
import multiprocessing
import json
import time
from region_estimators.estimation_data import EstimationData
def log_time(func):
"""Logs the time it took for func to execute"""
def wrapper(*args, **kwargs):
start = time.time()
val = func(*args, **kwargs)
end = time.time()
duration = end - start
print(f'{func.__name__} took {duration} seconds to run')
return val
return wrapper
class RegionEstimator(metaclass=ABCMeta):
"""
Abstract class, parent of region estimators (each implementing a different estimation method).
Requires GeoPandas and Pandas
"""
VERBOSE_DEFAULT = 0
VERBOSE_MAX = 2
MAX_NUM_PROCESSORS = 1
#@log_time
def __init__(self, estimation_data=None, verbose=VERBOSE_DEFAULT, max_processors=MAX_NUM_PROCESSORS,
progress_callback=None):
"""
Initialise instance of the RegionEstimator class.
Args:
estimation_data: (EstimationData) The data to be used in the estimations
verbose: (int) Verbosity of output level. zero or less => No debug output
max_processors: (int) The maximum number of processors to be used when calculating regions.
progress_callback: (callable) Handler function for delegating progress updates
Returns:
Initialised instance of subclass of RegionEstimator
"""
assert type(self) != RegionEstimator, 'RegionEstimator Cannot be instantiated directly'
# Check and set verbose
self.verbose = verbose
# Check and set max_processors
self.max_processors = min(max_processors, multiprocessing.cpu_count())
# Set EstimationData
self._estimation_data = estimation_data
# Set progress callback function, for publishing progress
assert progress_callback is None or callable(progress_callback) is True, \
"The progress_callback must be a callable function. {} is not callable".format(str(progress_callback))
self._progress_callback = progress_callback
@abstractmethod
def get_estimate(self, measurement, timestamp, region_id, ignore_site_ids=[]):
raise NotImplementedError("Must override get_estimate")
@property
def estimation_data(self):
return self._estimation_data
@estimation_data.setter
def estimation_data(self, estimation_data):
assert estimation_data is None or isinstance(estimation_data, EstimationData), \
"estimation_data must be an instance of type EstimationData"
self._estimation_data = estimation_data
@property
def sites(self):
return self._estimation_data.sites
@property
def regions(self):
return self._estimation_data.regions
@property
def actuals(self):
return self._estimation_data.actuals
@property
def verbose(self):
return self._verbose
@verbose.setter
def verbose(self, verbose=VERBOSE_DEFAULT):
assert isinstance(verbose, int), \
"Verbose level must be an integer not {}. (zero or less produces no debug output)".format(verbose.__class__)
if verbose < 0:
print('Warning: verbose input is less than zero so setting to zero')
verbose = 0
if verbose > RegionEstimator.VERBOSE_MAX:
print('Warning: verbose input is greater than {} so setting to {}'.format(
RegionEstimator.VERBOSE_MAX, RegionEstimator.VERBOSE_MAX))
verbose = RegionEstimator.VERBOSE_MAX
self._verbose = verbose
@property
def max_processors(self):
return self.__max_processors
@max_processors.setter
def max_processors(self, max_processors=MAX_NUM_PROCESSORS):
assert isinstance(max_processors, int), "max_processors must be an integer."
assert max_processors > 0, "max_processors must be greater than zero"
self.__max_processors = max_processors
def _get_estimate_process(self, region_result, measurement, region_id, timestamp, ignore_site_ids=[]):
""" Find estimation for a single region and single timestamp. Worker function for multi-processing.
:param region_result: shared list for results; because multiprocessing is used, the result is returned via this parameter
:param measurement: measurement to be estimated (string, required)
:param region_id: region identifier (string, required)
:param timestamp: timestamp identifier (string, required)
:param ignore_site_ids: site id(s) to be ignored during the estimations. Default=[]
:return: None. Appends a dict with items 'measurement', 'region_id', 'timestamp',
(estimated) 'value' and 'extra_data' to region_result
"""
if self._progress_callback is not None:
self._progress_callback(**{'status': 'Calculating estimate for region: {} and timestamp: {}'
.format(region_id, timestamp),
'percent_complete': None})
try:
region_result_estimate = self.get_estimate(measurement, timestamp, region_id, ignore_site_ids)
region_result.append({'measurement': measurement,
'region_id': region_id,
'value': region_result_estimate[0],
'extra_data': json.dumps(region_result_estimate[1]),
'timestamp': timestamp})
except Exception as err:
print('Error estimating for measurement: {}; region: {}; timestamp: {} and ignore_sites: {}.\nError: {}'
.format(measurement, region_id, timestamp, ignore_site_ids, err))
def _get_region_estimation(self, pool, region_result, measurement, region_id, timestamp=None, ignore_site_ids=[]):
""" Find estimations for a region and timestamp (or all timestamps (or all timestamps if timestamp==None)
:param pool: the multiprocessing pool object within which to run this task
:param region_result: shared list for results; because multiprocessing is used, results are returned via this parameter
:param measurement: measurement to be estimated (string, required)
:param region_id: region identifier (string, required)
:param timestamp: timestamp identifier (string or None)
:param ignore_site_ids: site id(s) to be ignored during the estimations
:return: region_result, extended with one dict per estimate containing 'measurement',
'region_id', 'timestamp', (estimated) 'value' and 'extra_data'
"""
if timestamp is not None:
if self.verbose > 0:
print('\n##### Calculating for region_id: {} and timestamp: {} #####'.format(region_id, timestamp))
pool.apply_async(self._get_estimate_process,
args=(region_result, measurement, region_id, timestamp, ignore_site_ids))
else:
timestamps = sorted(self.actuals['timestamp'].unique())
for _, timestamp in enumerate(timestamps):
if self.verbose > 1:
print(region_id, ' Calculating for timestamp:', timestamp)
pool.apply_async(self._get_estimate_process,
args=(region_result, measurement, region_id, timestamp, ignore_site_ids))
return region_result
#@log_time
def get_estimations(self, measurement, region_id=None, timestamp=None, ignore_site_ids=[]):
""" Find estimations for a region (or all regions if region_id==None) and
timestamp (or all timestamps (or all timestamps if timestamp==None)
:param measurement: measurement to be estimated (string - required)
:param region_id: region identifier (string or None)
:param timestamp: timestamp identifier (string or None)
:param ignore_site_ids: site id(s) to be ignored during the estimations (default: empty list [])
:return: pandas dataframe with columns:
'measurement'
'region_id'
'timestamp'
'value' (calculated estimate)
'extra_data' (json string)
"""
# Check inputs
assert measurement is not None, "measurement parameter cannot be None"
assert measurement in list(self.actuals.columns), "The measurement: '" + measurement \
+ "' does not exist in the actuals dataframe"
if region_id is not None:
df_reset = pd.DataFrame(self.regions.reset_index())
regions_temp = df_reset.loc[df_reset['region_id'] == region_id]
assert len(regions_temp.index) > 0, "The region_id does not exist in the regions dataframe"
if timestamp is not None:
df_actuals_reset = pd.DataFrame(self.actuals.reset_index())
actuals_temp = df_actuals_reset.loc[df_actuals_reset['timestamp'] == timestamp]
assert len(actuals_temp.index) > 0, "The timestamp does not exist in the actuals dataframe"
if ignore_site_ids is None:
ignore_site_ids = []
# Calculate estimates
with multiprocessing.Manager() as manager, multiprocessing.Pool(self.max_processors) as pool:
# Set up pool and result dict
region_result = manager.list()
if region_id:
if self.verbose > 0:
print('\n##### Calculating for region:', region_id, '#####')
self._get_region_estimation(pool, region_result, measurement, region_id, timestamp, ignore_site_ids)
else:
if self.verbose > 0:
print('No region_id submitted so calculating for all region ids...')
for index, _ in self.regions.iterrows():
if self.verbose > 1:
print('Calculating for region:', index)
self._get_region_estimation(pool, region_result, measurement, index, timestamp, ignore_site_ids)
pool.close()
pool.join()
# Put results into the results dataframe
df_result = pd.DataFrame.from_records(region_result)
if len(df_result.index) > 0:
df_result.set_index(['measurement', 'region_id', 'timestamp'], inplace=True)
else:
raise ValueError("Estimation process returned no results.")
return df_result | /region_estimators-1.2.0-py3-none-any.whl/region_estimators/region_estimator.py | 0.856542 | 0.401013 | region_estimator.py | pypi |
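The `log_time` decorator at the top of this module follows the standard wrapper pattern but loses the wrapped function's metadata. A hedged sketch of the same idea using `functools.wraps` (an addition not in the original, so `__name__` and the docstring survive decoration):

```python
import functools
import time

def log_time(func):
    """Log how long func took to execute, preserving its metadata."""
    @functools.wraps(func)  # keeps func.__name__, __doc__, etc.
    def wrapper(*args, **kwargs):
        start = time.time()
        val = func(*args, **kwargs)
        print(f'{func.__name__} took {time.time() - start:.4f} seconds to run')
        return val
    return wrapper

@log_time
def slow_add(a, b):
    time.sleep(0.01)
    return a + b

print(slow_add(2, 3))     # 5 (after the timing line)
print(slow_add.__name__)  # 'slow_add' thanks to functools.wraps
```

Without `functools.wraps`, `slow_add.__name__` would report `'wrapper'`, which makes stacked decorators and debugging output harder to read.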
import geopandas as gpd
import pandas as pd
import numpy as np
import multiprocessing
class EstimationData(object):
VERBOSE_DEFAULT = 0
VERBOSE_MAX = 2
def __init__(self, sites, regions, actuals, verbose=0):
"""
Initialise instance of the RegionEstimator class.
Args:
sites: list of sites as pandas.DataFrame
Required columns:
'site_id' (str or int): Unique identifier of the site (will be converted to str)
'latitude' (float): latitude of site location
'longitude' (float): longitude of site location
'name' (str, optional): Name of site
regions: list of regions as pandas.DataFrame
Required columns:
'region_id' (Unique INDEX)
'geometry' (shapely.wkt/geom.wkt)
actuals: list of site values as pandas.DataFrame
Required columns:
'timestamp' (str): timestamp of actual reading
'site_id': (str or int) ID of site which took actual reading - must match an index
value in sites. (will be converted to str)
[one or more value columns] (float): value of actual measurement readings.
each column name is the name of the measurement
e.g. 'NO2'
Returns:
Initialised instance of subclass of RegionEstimator
"""
### Check and set verbose
self.verbose = verbose
### Check sites:
assert sites.index.name == 'site_id', "sites dataframe index name must be 'site_id'"
# (Not checking site_id data as that forms the index)
assert 'latitude' in list(sites.columns), "There is no latitude column in sites dataframe"
assert pd.to_numeric(sites['latitude'], errors='coerce').notnull().all(), \
"latitude column contains non-numeric values."
assert 'longitude' in list(sites.columns), "There is no longitude column in sites dataframe"
assert pd.to_numeric(sites['longitude'], errors='coerce').notnull().all(), \
"longitude column contains non-numeric values."
### Check regions
# (Not checking region_id data as that forms the index)
assert regions.index.name == 'region_id', "regions dataframe index name must be 'region_id'"
assert 'geometry' in list(regions.columns), "There is no geometry column in regions dataframe"
### Check actuals
assert 'timestamp' in list(actuals.columns), "There is no timestamp column in actuals dataframe"
assert 'site_id' in list(actuals.columns), "There is no site_id column in actuals dataframe"
assert len(list(actuals.columns)) > 2, "There are no measurement value columns in the actuals dataframe."
# Check measurement columns have either numeric or null data
for column in list(actuals.columns):
if column not in ['timestamp', 'site_id']:
# Check that the measurement column contains only numeric values (nulls are OK)
try:
pd.to_numeric(actuals[column], errors='raise')
except (ValueError, TypeError):
raise AssertionError(
"actuals['" + column + "'] column contains non-numeric values (null values are accepted).")
# Check that each site_id value is present in the sites dataframe index.
# ... So site_id values must be a subset of allowed sites
error_sites = set(actuals['site_id'].unique()) - set(sites.index.values)
assert len(error_sites) == 0, \
"Each site ID must match a site_id in sites. Error site IDs: " + str(error_sites)
# Convert to geo dataframe
sites.index = sites.index.map(str)
try:
gdf_sites = gpd.GeoDataFrame(data=sites,
geometry=gpd.points_from_xy(sites.longitude, sites.latitude))
except Exception as err:
raise ValueError('Error converting sites DataFrame to a GeoDataFrame: ' + str(err))
gdf_sites = gdf_sites.drop(columns=['longitude', 'latitude'])
try:
gdf_regions = gpd.GeoDataFrame(data=regions, geometry='geometry')
except Exception as err:
raise ValueError('Error converting regions DataFrame to a GeoDataFrame: ' + str(err))
# actuals: Make sure value columns at the end of column list
cols = actuals.columns.tolist()
cols.insert(0, cols.pop(cols.index('site_id')))
cols.insert(0, cols.pop(cols.index('timestamp')))
actuals = actuals[cols]
# Convert site_id to string
actuals['site_id'] = actuals['site_id'].astype(str)
# Set data properties
self._sites = gdf_sites
self._regions = gdf_regions
self._actuals = actuals
# Set extra useful data for estimation calculations
self.__set_site_region()
self.__set_region_sites()
@property
def verbose(self):
return self._verbose
@verbose.setter
def verbose(self, verbose=VERBOSE_DEFAULT):
assert isinstance(verbose, int), \
"Verbose level must be an integer not {}. (zero or less produces no debug output)".format(verbose.__class__)
if verbose < 0:
print('Warning: verbose input is less than zero so setting to zero')
verbose = 0
if verbose > EstimationData.VERBOSE_MAX:
print('Warning: verbose input is greater than {} so setting to {}'.format(
EstimationData.VERBOSE_MAX, EstimationData.VERBOSE_MAX))
verbose = EstimationData.VERBOSE_MAX
self._verbose = verbose
@property
def sites(self):
return self._sites
@property
def regions(self):
return self._regions
@property
def actuals(self):
return self._actuals
@staticmethod
def is_valid_site_id(site_id):
'''
Check if site ID is valid (non empty string)
:param site_id: (str) a site id
:return: True if valid, False otherwise
'''
return site_id is not None and isinstance(site_id, str) and len(site_id) > 0
def get_adjacent_regions(self, region_ids, ignore_regions=[]):
""" Find all adjacent regions for list a of region ids
Uses the neighbouring regions found in set-up, using __get_all_region_neighbours
:param region_ids: list of region identifier (list of strings)
:param ignore_regions: list of region identifier (list of strings): list to be ignored
:return: a list of regions_ids (empty list if no adjacent regions)
"""
if self.verbose > 0:
print('\ngetting adjacent regions...')
# Create an empty list for adjacent regions
adjacent_regions = []
# Get all adjacent regions for each region
df_reset = self.regions.reset_index()
for region_id in region_ids:
if self.verbose > 1:
print('getting adjacent regions for {}'.format(region_id))
regions_temp = df_reset.loc[df_reset['region_id'] == region_id]
if len(regions_temp.index) > 0:
adjacent_regions.extend(regions_temp['neighbours'].iloc[0].split(','))
# Return all adjacent regions as a querySet and remove any that are in the completed/ignore list.
return [x for x in adjacent_regions if x not in ignore_regions and x.strip() != '']
def get_region_sites(self, region_id):
'''
Find all sites within the region identified by region_id.
:param region_id: (str) a region id (must be (an index) in self.regions)
:return: A list of site IDs (list of str)
'''
assert region_id in self.regions.index.tolist(), 'region_id is not in list of regions'
result = self.regions.loc[[region_id]]['sites'].iloc[0].strip().split(',')
return list(filter(self.is_valid_site_id, result))
def get_regions_sites(self, region_ids, ignore_site_ids=[]):
'''
Retrieve the number of sites (in self._sites) for the list of region_ids
:param region_ids: (list of str) list of region IDs
:param ignore_site_ids: (list of str) list of site_ids to be ignored
:return: list of site IDs
'''
# Create an empty queryset for sites found in regions
sites = []
if self.verbose > 0:
print('Finding sites in region_ids: {}'.format(region_ids))
# Find sites in region_ids
for region_id in region_ids:
if self.verbose > 1:
print('Finding sites in region {}'.format(region_id))
sites.extend(self.get_region_sites(region_id))
return list(set(sites) - set(ignore_site_ids))
def get_region_id(self, site_id):
'''
Retrieve the region_id that the site with site_id is in
:param site_id: (str) site ID
:return: (str) the region ID held in the 'region_id' column for the site object
'''
assert self.is_valid_site_id(site_id), 'Invalid site ID'
assert site_id in self._sites.index.tolist(), 'site_id not in list of available sites'
return self._sites.loc[[site_id]]['region_id'].iloc[0]
def __get_region_sites(self, region):
return self._sites[self._sites.geometry.within(region['geometry'])].index.tolist()
def __set_region_sites(self):
'''
Find all of the sites within each region and add to a 'sites' column in self.regions -
as comma-delimited string of site ids.
:return: No return value
'''
if self.verbose > 0:
print('\ngetting all sites for each region...')
for index, region in self.regions.iterrows():
sites = self.__get_region_sites(region)
sites_str = ",".join(str(x) for x in sites)
self.regions.at[index, "sites"] = sites_str
if self.verbose > 1:
print('set sites for region {}: {}'.format(index, sites_str))
def __set_site_region(self):
'''
Find all of the region ids for each site and add to a 'region_id' column in self._sites
Adds None if not found.
:return: No return value
'''
if self.verbose > 0:
print('\ngetting region for each site...')
# Create new column with empty string as values
self._sites["region_id"] = ""
for index, region in self.regions.iterrows():
self._sites = self._sites.assign(
**{'region_id': np.where(self._sites.within(region.geometry), index, self._sites['region_id'])}
)
if self.verbose > 0:
print('regions: \n {}'.format(self._sites['region_id']))
def site_datapoint_count(self, measurement, timestamp, region_ids=[], ignore_site_ids=[]):
'''
Find the number of site datapoints for this measurement, timestamp and (optional) regions combination
:param measurement: (str) the measurement being recorded in the site data-point
:param timestamp: (timestamp) the timestamp of the site datapoints being searched for
:param region_ids: (list of str) list of region IDs
:param ignore_site_ids list of site_ids to be ignored
:return: Number of sites
'''
if ignore_site_ids is None:
ignore_site_ids = []
sites = self.actuals.loc[(self.actuals['timestamp'] == timestamp) &
(self.actuals[measurement].notna()) &
(~self.actuals['site_id'].isin(ignore_site_ids))]
if sites is None or len(sites.index) == 0:
return 0
sites = sites['site_id'].tolist()
region_sites = []
for region_id in region_ids:
# extend with the region's site IDs ('sites' itself is a comma-delimited string,
# so extending with the raw column value would iterate over characters)
region_sites.extend(self.get_region_sites(region_id))
if len(region_ids) > 0:
return len(set(sites) & set(region_sites))
else:
return len(set(sites)) | /region_estimators-1.2.0-py3-none-any.whl/region_estimators/estimation_data.py | 0.776962 | 0.609786 | estimation_data.py | pypi |
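`__set_region_sites` and `__set_site_region` rely on GeoPandas' `within` predicate, which is conceptually a point-in-polygon test. A hedged, dependency-free sketch of the same idea using the classic ray-casting rule (illustrative only; the package itself delegates to shapely geometry, and the square and site coordinates below are invented):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test.
    polygon is a list of (x, y) vertices in order (closed implicitly)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) to +infinity cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside  # each crossing toggles in/out
    return inside

# Assign sites to a single square "region", as __set_region_sites does per region.
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
sites = {'s1': (1, 1), 's2': (5, 2), 's3': (3.5, 3.9)}
in_region = [sid for sid, (x, y) in sites.items() if point_in_polygon(x, y, square)]
print(in_region)  # ['s1', 's3']
```

The package then stores such per-region site lists as a comma-delimited string column, which is what `get_region_sites` splits back apart.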
import pandas as pd
import geopandas as gpd
from region_estimators.region_estimator import RegionEstimator
class DistanceSimpleEstimator(RegionEstimator):
def __init__(self, estimation_data=None, verbose=RegionEstimator.VERBOSE_DEFAULT,
max_processors=RegionEstimator.MAX_NUM_PROCESSORS,
progress_callback=None):
super(DistanceSimpleEstimator, self).__init__(estimation_data, verbose, max_processors, progress_callback)
class Factory:
def create(self, estimation_data=None, verbose=RegionEstimator.VERBOSE_DEFAULT,
max_processors=RegionEstimator.MAX_NUM_PROCESSORS, progress_callback=None):
return DistanceSimpleEstimator(estimation_data, verbose, max_processors, progress_callback)
def get_estimate(self, measurement, timestamp, region_id, ignore_site_ids=[]):
""" Find estimations for a region and timestamp using the simple distance method: value of closest actual site
:param measurement: measurement to be estimated (string, required)
:param timestamp: timestamp identifier (string)
:param region_id: region identifier (string)
:param ignore_site_ids: site id(s) to be ignored during the estimations
:return: tuple containing
i) estimate
ii) dict: {"closest_sites": [IDs of closest site(s)]}
"""
result = None, {'closest_sites': None}
# Get the actual values
df_actuals = self.actuals.loc[
(~self.actuals['site_id'].isin(ignore_site_ids)) &
(self.actuals['timestamp'] == timestamp) &
(self.actuals[measurement].notnull())
]
df_sites = self.sites.reset_index()
df_actuals = pd.merge(left=df_actuals,
right= df_sites,
on='site_id',
how='left')
gdf_actuals = gpd.GeoDataFrame(data=df_actuals, geometry='geometry')
# Get the closest site to the region
if len(gdf_actuals) > 0:
# Get region geometry
df_reset = pd.DataFrame(self.regions.reset_index())
regions_temp = df_reset.loc[df_reset['region_id'] == region_id]
if len(regions_temp.index) > 0:
region = regions_temp.iloc[0]
# Calculate distances
gdf_actuals['distance'] = pd.DataFrame(gdf_actuals['geometry'].distance(region.geometry))
# Get site(s) with shortest distance
top_result = gdf_actuals.sort_values(by=['distance'], ascending=True).iloc[0]  # returns the whole row as a Series
if top_result is not None:
# Take the average of all sites with the closest distance
closest_sites = gdf_actuals.loc[gdf_actuals['distance'] == top_result['distance']]
closest_values_mean = closest_sites[measurement].mean(axis=0)
# In extra data, return closest site name if it exists, otherwise closest site id
if 'name' in list(closest_sites.columns):
closest_sites_result = list(closest_sites['name'])
else:
closest_sites_result = list(closest_sites['site_id'])
result = closest_values_mean, {"closest_sites": closest_sites_result}
        return result

# --- end of region_estimators/distance_simple_estimator.py ---
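The tie-handling in `get_estimate` above (average every site that shares the minimum distance) can be sketched without pandas or geopandas; the site IDs, distances and values below are made-up illustrations, not data from the package:

```python
# Hypothetical (site_id, distance_to_region, measured_value) triples.
sites = [('s1', 2.0, 10.0), ('s2', 2.0, 14.0), ('s3', 5.0, 99.0)]

# Find the minimum distance, keep every site at that distance,
# and average their values -- mirroring the closest-site logic above.
best = min(d for _, d, _ in sites)
closest = [(sid, v) for sid, d, v in sites if d == best]
estimate = sum(v for _, v in closest) / len(closest)

print(estimate, [sid for sid, _ in closest])  # 12.0 ['s1', 's2']
```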
from region_estimators.region_estimator import RegionEstimator
import pandas as pd
class ConcentricRegionsEstimator(RegionEstimator):
MAX_RING_COUNT_DEFAULT = float("inf")
def __init__(self, estimation_data=None, verbose=RegionEstimator.VERBOSE_DEFAULT,
max_processors=RegionEstimator.MAX_NUM_PROCESSORS,
progress_callback=None):
super(ConcentricRegionsEstimator, self).__init__(estimation_data, verbose, max_processors, progress_callback)
self.__set_region_neighbours()
self._max_ring_count = ConcentricRegionsEstimator.MAX_RING_COUNT_DEFAULT
class Factory:
def create(self, estimation_data, verbose=RegionEstimator.VERBOSE_DEFAULT,
max_processors=RegionEstimator.MAX_NUM_PROCESSORS, progress_callback=None):
return ConcentricRegionsEstimator(estimation_data, verbose, max_processors, progress_callback)
@property
def max_ring_count(self):
return self._max_ring_count
    @max_ring_count.setter
    def max_ring_count(self, new_count):
        """ Set the maximum ring count of the concentric_regions estimation procedure.
        Property setters always receive exactly one value, so no default is needed here.
        :param new_count: the maximum number of rings allowed during estimation (integer)
        """
        self._max_ring_count = new_count
    def get_estimate(self, measurement, timestamp, region_id, ignore_site_ids=None):
        """ Find estimations for a region and timestamp using the concentric_regions rings method
        :param measurement: measurement to be estimated (string, required)
        :param timestamp: timestamp identifier (string)
        :param region_id: region identifier (string)
        :param ignore_site_ids: site id(s) to be ignored during the estimations
        :return: tuple containing the result and a dict: {"rings": <number of concentric rings required>}
        """
        # Avoid a shared mutable default argument
        ignore_site_ids = ignore_site_ids if ignore_site_ids is not None else []
if self.verbose > 0:
print('\n### Getting estimates for region {}, measurement {} at date {} ###\n'.format(
region_id, measurement, timestamp))
# Check sites exist (in any region) for this measurement/timestamp
if self.estimation_data.site_datapoint_count(measurement, timestamp, ignore_site_ids=ignore_site_ids) == 0:
if self.verbose > 0:
print('No sites exist for region {}, measurement {} at date {}'.format(
region_id, measurement, timestamp))
return None, {"rings": None}
if self.verbose > 1:
print('sites exist for region {}, measurement {} at date {}'.format(region_id, measurement, timestamp))
        # Check whether the region is an island (no touching adjacent regions)
        # that also has no sites within it; if so, return null
region_sites = set(self.regions.loc[region_id]['sites']) - set(ignore_site_ids)
if len(region_sites) == 0 and len(self.estimation_data.get_adjacent_regions([region_id])) == 0:
if self.verbose > 0:
print('Region {} is an island and does not have sites, so can\'t do concentric_regions'.format(region_id))
return None, {"rings": None}
if self.verbose > 1:
print('Region {} is not an island'.format(region_id))
# Create an empty list for storing completed regions
regions_completed = []
# Recursively find the sites in each concentric_regions ring (starting at 0)
if self.verbose > 0:
print('Beginning recursive region estimation for region {}, timestamp: {}'.format(region_id, timestamp))
return self.__get_concentric_regions_estimate_recursive(measurement, [region_id], timestamp, 0, regions_completed,
ignore_site_ids)
def __get_concentric_regions_estimate_recursive(self, measurement, region_ids, timestamp, diffuse_level, regions_completed,
ignore_site_ids=[]):
# Find all sites in regions
sites = self.estimation_data.get_regions_sites(region_ids, ignore_site_ids)
# Get values from sites
if self.verbose > 0:
print('obtaining values from sites')
actuals = self.actuals.loc[(self.actuals['timestamp'] == timestamp) & (self.actuals['site_id'].isin(sites))]
result = None
if len(actuals.index) > 0:
# If readings found for the sites, take the average
result = actuals[measurement].mean(axis=0)
if self.verbose > 0:
print('Found result (for regions: {}) from sites:\n {}, average: {}'.format(region_ids, actuals, result))
if result is None or pd.isna(result):
if self.verbose > 0:
print('No sites found. Current ring count: {}. Max concentric_regions level: {}'.format(
diffuse_level, self._max_ring_count))
# If no readings/sites found, go up a concentric_regions level (adding completed regions to ignore list)
if diffuse_level >= self.max_ring_count:
if self.verbose > 0:
print('Max concentric_regions level reached so returning null.')
return None, {"rings": diffuse_level}
regions_completed.extend(region_ids)
diffuse_level += 1
# Find the next set of regions
next_regions = self.estimation_data.get_adjacent_regions(region_ids, regions_completed)
if self.verbose > 0:
print('Found next set of regions: {}.'.format(next_regions))
# If regions are found, continue, if not exit from the process
if len(next_regions) > 0:
if self.verbose > 0:
print('Next set of regions non empty so recursively getting concentric_regions estimates for those: {}.'
.format(next_regions))
return self.__get_concentric_regions_estimate_recursive(measurement,
next_regions,
timestamp,
diffuse_level,
regions_completed,
ignore_site_ids)
else:
if self.verbose > 0:
print('No next set of regions found so returning null')
return None, {"rings": diffuse_level-1}
else:
if self.verbose > 0:
print('Returning the result')
return result, {"rings": diffuse_level}
    def __set_region_neighbours(self):
        '''
        Find all neighbours of each region and store them in a 'neighbours'
        column of self.regions as a comma-delimited string of region_ids.
        :return: No return value
        '''
if self.verbose > 0:
print('\ngetting all region neighbours')
for index, region in self.regions.iterrows():
neighbors = self.regions[self.regions.geometry.touches(region.geometry)].index.tolist()
            neighbors = [n for n in neighbors if n != index]
            neighbors_str = ",".join(str(n) for n in neighbors)
self.regions.at[index, "neighbours"] = neighbors_str
if self.verbose > 1:
                print('neighbours for {}: {}'.format(index, neighbors_str))

# --- end of region_estimators/concentric_regions_estimator.py ---
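The recursive ring expansion above amounts to a breadth-first search over region adjacency: take a reading from the current ring if one exists, otherwise expand to unvisited neighbours. A minimal sketch with a hypothetical three-region map (not the package's API):

```python
# Hypothetical adjacency map and site readings.
neighbours = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}
readings = {'C': 7.0}

def ring_estimate(start, max_rings=10):
    # Expand outward ring by ring until a region with a reading is found.
    ring, seen, level = [start], {start}, 0
    while ring and level <= max_rings:
        values = [readings[r] for r in ring if r in readings]
        if values:
            return sum(values) / len(values), level
        nxt = {n for r in ring for n in neighbours.get(r, [])} - seen
        seen |= nxt
        ring, level = sorted(nxt), level + 1
    return None, level - 1

print(ring_estimate('A'))  # (7.0, 2)
```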
# Libraries
import sys
import argparse
import tkinter
import logging
import tkinter.filedialog as fl
import numpy as np
import pandas as pd
# Local modules
from . import classifiers as classifiers
from . import functions as functions
from . import region as rg
def archivo_cargar(files: list):
    """
    Lets the user select the file to be loaded
    Parameters
    --------------
    files: list
        List of (file type, extension) tuples for the dialog filter
    Return
    --------------
    filename: str
        Absolute path of the file to be loaded
    """
    root.withdraw()
    filename = fl.askopenfilename(filetypes=files, defaultextension=files)
    return filename
def archivo_guardar(files: list):
    """
    Lets the user select the file to be saved
    Parameters
    --------------
    files: list
        List of (file type, extension) tuples for the dialog filter
    Return
    --------------
    filename: str
        Absolute path of the new file to be generated
    """
    root.withdraw()
    filename = fl.asksaveasfilename(filetypes=files, defaultextension=files)
    return filename
def execute(
points_path: str, raster_path: str, shape_path: str, classifier_tag: str = "ED"
):
"""
Execute the process to compute the growth of
the region
Parameters
--------------
points_path: str
Path to the .csv file with the coordinates of the points
raster_path: str
Path to .tif file with raster information
shape_path: str
Path to save the .shp file with the polygons
classifier_tag: str
Type of classifier to be used in the process
"""
puntos_csv = points_path
raster_all_bands = raster_path
    # Load the data
# ----------------------------------------
puntos_cords = pd.read_csv(puntos_csv, sep=";", decimal=",")
    # Load the raster
# ----------------------------------------
img_1d, img_array, raster_metadata = functions.cargar_raster(raster_all_bands)
    # Find the X,Y array indices for each coordinate tuple
# ----------------------------------------
puntos_cords = functions.append_xy_index(puntos_cords, raster_all_bands)
    # Extract the DataFrame indices to iterate over the data
# ----------------------------------------
pixels_indexes = puntos_cords[["X_Index", "Y_Index"]].to_numpy(copy=True)
pixels_data = np.empty(
(len(pixels_indexes), raster_metadata["count"]), raster_metadata["dtype"]
)
for idx in range(len(pixels_indexes)):
x_index, y_index = pixels_indexes[idx]
pixels_data[idx, :] = img_array[x_index, y_index, :]
    # Create the DataFrame and assign column names
# ----------------------------------------
columns_name = [f"Banda {idx + 1}" for idx in range(raster_metadata["count"])]
pixels_df = pd.DataFrame(pixels_data, columns=columns_name)
    # Create the classifier and the queue of seed points from which
    # the polygon (connected component) will be grown
# ----------------------------------------
classifier_class = classifiers.select_classifier(classifier_tag=classifier_tag)
if classifier_class is None:
raise NotImplementedError(f"El clasificador {classifier_tag} no existe.")
classifier = classifier_class(pixels_df)
classifier_rg = rg.Region_Grow(
pixels_indexes=pixels_indexes, img_array=img_array, classifier=classifier
)
pixels_selected = classifier_rg.grow()
created_polygon = functions.create_polygon(
pixels_selected=pixels_selected, raster_path=raster_all_bands
)
    # Output path
created_polygon.to_file(filename=shape_path, driver="ESRI Shapefile")
def execute_with_area(
points_path: str,
raster_path: str,
shape_path: str,
classifier_tag: str = "BD",
steps: int = 4,
):
"""
Execute the process to compute the growth of
the region by knowing the approximate area of the polygon that
is going to generate.
Parameters
--------------
points_path: str
Path to the .csv file with the coordinates of the points
raster_path: str
Path to .tif file with raster information
shape_path: str
Path to save the .shp file with the polygons
classifier_tag: str
Type of classifier to be used in the process
steps: int
        Maximum number of iterations the algorithm performs to find a
        polygon minimizing the difference between the given approximate
        area and the computed one
"""
puntos_csv = points_path
raster_all_bands = raster_path
    # Load the data
# ----------------------------------------
puntos_cords = pd.read_csv(puntos_csv, sep=";", decimal=",")
    # Load the raster
# ----------------------------------------
img_1d, img_array, raster_metadata = functions.cargar_raster(raster_all_bands)
    # Find the X,Y array indices for each coordinate tuple
# ----------------------------------------
puntos_cords = functions.append_xy_index(puntos_cords, raster_all_bands)
    # Extract the DataFrame indices to iterate over the data
# ----------------------------------------
pixels_indexes = puntos_cords[["X_Index", "Y_Index"]].to_numpy(copy=True)
pixels_data = np.empty(
(len(pixels_indexes), raster_metadata["count"]), raster_metadata["dtype"]
)
for idx in range(len(pixels_indexes)):
x_index, y_index = pixels_indexes[idx]
pixels_data[idx, :] = img_array[x_index, y_index, :]
    # Create the DataFrame and assign column names
# ----------------------------------------
columns_name = [f"Banda {idx + 1}" for idx in range(raster_metadata["count"])]
pixels_df = pd.DataFrame(pixels_data, columns=columns_name)
    # Create the classifier and the queue of seed points from which
    # the polygon (connected component) will be grown
# ----------------------------------------
pixels_selected, created_polygon = functions.grow_balanced_region(
classifier_tag=classifier_tag,
pixels_indexes=pixels_indexes,
pixels_df=pixels_df,
img_array=img_array,
raster_path=raster_path,
polygon_area=puntos_cords["HECTAREAS"][0],
steps=steps,
)
    # Output path
created_polygon.to_file(filename=shape_path, driver="ESRI Shapefile")
if __name__ == "__main__":
# Logger
logging.basicConfig(
format="%(asctime)s %(message)s", datefmt="%m/%d/%Y %I:%M:%S %p"
)
    # CLI execution
if len(sys.argv) > 1:
parser = argparse.ArgumentParser(
description="Crear un poligono utilizando Region Growing a partir de unos puntos iniciales"
)
parser.add_argument(
"points_path",
metavar="points_path",
type=str,
help="Ruta al archivo .csv con las cordenadas de los puntos, según el formato establecido",
)
parser.add_argument(
"raster_path",
metavar="raster_path",
type=str,
help="Ruta al archivo del raster en formato .tif",
)
parser.add_argument(
"shape_path",
metavar="shape_path",
type=str,
help="Ruta de almacenamiento del archivo con la informacion del poligono",
)
parser.add_argument(
"--classifier",
default="ED",
dest="classifier_tag",
type=str,
help="Clasificador a utilizar para realizar el proceso de region growing",
)
args = parser.parse_args()
execute(
points_path=args.points_path,
raster_path=args.raster_path,
shape_path=args.shape_path,
classifier_tag=args.classifier_tag,
)
logging.info(
f"El archivo Shapefile se ha creado con exito en: {args.shape_path}"
)
    # GUI execution
else:
        root = tkinter.Tk()  # Tk root window for the GUI
points_path = archivo_cargar([("CSV", "*.csv")])
raster_path = archivo_cargar([("GeoTIFF", "*.tif")])
shape_path = archivo_guardar([("Shapefile", "*.shp")])
        # Run the process
execute(points_path=points_path, raster_path=raster_path, shape_path=shape_path)
logging.info(f"El archivo Shapefile se ha creado con exito en: {shape_path}") | /region_grow-1.0.4.tar.gz/region_grow-1.0.4/region_grow/region_grow.py | 0.520009 | 0.443721 | region_grow.py | pypi |
import warnings
from region_profiler.utils import SeqStats, Timer
class RegionNode:
"""RegionNode represents a single entry in a region tree.
It contains a builtin timer for measuring the time, spent
in the corresponding region. It keeps track of
the count, sum, max and min measurements.
These statistics can be accessed through :py:attr:`stats` attribute.
In addition, RegionNode has a dictionary of its children.
They are expected to be retrieved using :py:meth:`get_child`,
that also creates a new child if necessary.
Attributes:
name (str): Node name.
stats (SeqStats): Measurement statistics.
"""
def __init__(self, name, timer_cls=Timer):
"""Create new instance of ``RegionNode`` with the given name.
Args:
name (str): node name
timer_cls (class): class, used for creating timers.
Default: ``region_profiler.utils.Timer``
"""
self.name = name
self.optimized_class = False
self.timer_cls = timer_cls
self.timer = self.timer_cls()
self.cancelled = False
self.stats = SeqStats()
self.children = dict()
self.recursion_depth = 0
self.last_event_time = 0
def enter_region(self):
"""Start timing current region.
"""
if self.recursion_depth == 0:
self.timer.start()
else:
self.timer.mark_aux_event()
self.cancelled = False
self.recursion_depth += 1
def cancel_region(self):
"""Cancel current region timing.
Stats will not be updated with the current measurement.
"""
self.cancelled = True
self.recursion_depth -= 1
if self.recursion_depth == 0:
self.timer.stop()
else:
self.timer.mark_aux_event()
def exit_region(self):
"""Stop current timing and update stats with the current measurement.
"""
if self.cancelled:
self.cancelled = False
self.timer.mark_aux_event()
else:
self.recursion_depth -= 1
if self.recursion_depth == 0:
self.timer.stop()
self.stats.add(self.timer.elapsed())
else:
self.timer.mark_aux_event()
def get_child(self, name, timer_cls=None):
"""Get node child with the given name.
This method creates a new child and stores it
in the inner dictionary if it has not been created yet.
Args:
name (str): child name
timer_cls (:obj:`class`, optional): override child timer class
Returns:
RegionNode: new or existing child node with the given name
"""
try:
return self.children[name]
except KeyError:
c = RegionNode(name, timer_cls or self.timer_cls)
self.children[name] = c
return c
def timer_is_active(self):
"""Return True if timer is currently running.
"""
        return self.recursion_depth > 1 or (self.recursion_depth == 1 and self.cancelled)
def __str__(self):
return self.name or '???'
def __repr__(self):
return 'RegionNode(name="{}", stats={}, timer_cls={})'. \
format(str(self), repr(self.stats), self.timer_cls)
class _RootNodeStats:
"""Proxy object that wraps timer in the
:py:class:`region_profiler.utils.SeqStats` interface.
Timer is expected to have the same interface as
:py:class:`region_profiler.utils.Timer` object.
Proxy properties return current timer values.
"""
def __init__(self, timer):
self.timer = timer
@property
def count(self):
return 1
@property
def total(self):
return self.timer.current_elapsed()
@property
def min(self):
return self.total
@property
def max(self):
return self.total
class RootNode(RegionNode):
"""An instance of :any:`RootNode` is intended to be used
as the root of a region node hierarchy.
:any:`RootNode` differs from :any:`RegionNode`
by making :py:attr:`RegionNode.stats` property
returning the current measurement values instead of
the real stats of previous measurements.
"""
def __init__(self, name='<root>', timer_cls=Timer):
super(RootNode, self).__init__(name, timer_cls)
self.enter_region()
self.stats = _RootNodeStats(self.timer)
def cancel_region(self):
"""Prevents root region from being cancelled.
"""
warnings.warn('Can\'t cancel root region timer', stacklevel=2)
def exit_region(self):
"""Instead of :py:meth:`RegionNode.exit_region` it does not reset
:py:attr:`timer` attribute thus allowing it to continue timing on reenter.
"""
        self.timer.stop()

# --- end of region_profiler/node.py ---
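`RegionNode.get_child` relies on the try/except-KeyError memoization idiom: return an existing child, or create and cache it on first request. A stripped-down sketch, independent of the classes above:

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.children = {}

    def get_child(self, name):
        # Return an existing child, creating and caching it on a miss.
        try:
            return self.children[name]
        except KeyError:
            c = Node(name)
            self.children[name] = c
            return c

root = Node('<root>')
a = root.get_child('a')
assert root.get_child('a') is a  # repeated lookups return the same node
print(len(root.children))  # 1
```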
from contextlib import contextmanager
from region_profiler.node import RootNode
from region_profiler.utils import Timer, get_name_by_callsite
class RegionProfiler:
""":py:class:`RegionProfiler` handles code regions profiling.
This is the central class in :py:mod:`region_profiler` package.
It is responsible for maintaining the hierarchy of timed regions
as well as providing facilities for marking regions for timing
using
- ``with``-statement (:py:meth:`RegionProfiler.region`)
- function decorator (:py:meth:`RegionProfiler.func`)
- an iterator proxy (:py:meth:`RegionProfiler.iter_proxy`)
Normally it is expected that the global instance of
:py:class:`RegionProfiler` is used for the profiling,
see package-level function :py:func:`region_profiler.install`,
:py:func:`region_profiler.region`, :py:func:`region_profiler.func`,
and :py:func:`region_profiler.iter_proxy`.
"""
ROOT_NODE_NAME = '<main>'
def __init__(self, timer_cls=None, listeners=None):
"""Construct new :py:class:`RegionProfiler`.
Args:
timer_cls (:obj:`class`, optional): class, used for creating timers.
Default: ``region_profiler.utils.Timer``
listeners (:py:class:`list` of
:py:class:`region_profiler.listener.RegionProfilerListener`, optional):
optional list of listeners, that can augment region enter and exit events.
"""
if timer_cls is None:
timer_cls = Timer
self.root = RootNode(name=self.ROOT_NODE_NAME, timer_cls=timer_cls)
self.node_stack = [self.root]
self.listeners = listeners or []
for l in self.listeners:
l.region_entered(self, self.root)
@contextmanager
def region(self, name=None, asglobal=False, indirect_call_depth=0):
"""Start new region in the current context.
This function implements context manager interface.
When used with ``with`` statement,
it enters a region with the specified name in the current context
on invocation and leaves it on ``with`` block exit.
Examples::
with rp.region('A'):
...
Args:
name (:py:class:`str`, optional): region name.
If None, the name is deducted from region location in source
asglobal (bool): enter the region from root context, not a current one.
May be used to merge stats from different call paths
indirect_call_depth (:py:class:`int`, optional): adjust call depth
to correctly identify the callsite position for automatic naming
Returns:
:py:class:`region_profiler.node.RegionNode`: node of the region.
"""
if name is None:
name = get_name_by_callsite(indirect_call_depth + 2)
parent = self.root if asglobal else self.current_node
self.node_stack.append(parent.get_child(name))
        self._enter_current_region()
        try:
            yield self.current_node
        finally:
            # Guarantee region exit and stack cleanup even if the with-body raises
            self._exit_current_region()
            self.node_stack.pop()
def func(self, name=None, asglobal=False):
"""Decorator for entering region on a function call.
Examples::
@rp.func()
def foo():
...
Args:
name (:py:class:`str`, optional): region name.
If None, the name is deducted from region location in source
asglobal (bool): enter the region from root context, not a current one.
May be used to merge stats from different call paths
Returns:
Callable: a decorator for wrapping a function
"""
def decorator(fn):
nonlocal name
if name is None:
name = fn.__name__
name += '()'
def wrapped(*args, **kwargs):
with self.region(name, asglobal):
return fn(*args, **kwargs)
return wrapped
return decorator
def iter_proxy(self, iterable, name=None, asglobal=False, indirect_call_depth=0):
"""Wraps an iterable and profiles :func:`next()` calls on this iterable.
This proxy may be useful, when the iterable is some data loader,
that performs data retrieval on each iteration.
For instance, it may pull data from an asynchronous process.
Such proxy was used to identify that when receiving a batch of
8 samples from a loader process, first 5 samples were loaded immediately
(because they were computed asynchronously during the loop body),
but then it stalled on the last 3 iterations meaning that loading had
bigger latency than the loop body.
Examples::
for batch in rp.iter_proxy(loader):
...
Args:
iterable (Iterable): an iterable to be wrapped
name (:py:class:`str`, optional): region name.
If None, the name is deducted from region location in source
asglobal (bool): enter the region from root context, not a current one.
May be used to merge stats from different call paths
indirect_call_depth (:py:class:`int`, optional): adjust call depth
to correctly identify the callsite position for automatic naming
Returns:
Iterable: an iterable, that yield same data as the passed one
"""
it = iter(iterable)
if name is None:
name = get_name_by_callsite(indirect_call_depth + 1)
parent = self.root if asglobal else self.current_node
node = parent.get_child(name)
while True:
self.node_stack.append(node)
self._enter_current_region()
try:
x = next(it)
except StopIteration:
self._cancel_current_region()
return
finally:
self._exit_current_region()
self.node_stack.pop()
yield x
def finalize(self):
"""Perform profiler finalization on application shutdown.
Finalize all associated listeners.
"""
self.root.exit_region()
for l in self.listeners:
l.region_exited(self, self.root)
l.finalize()
def _enter_current_region(self):
self.current_node.enter_region()
for l in self.listeners:
l.region_entered(self, self.current_node)
def _exit_current_region(self):
self.current_node.exit_region()
for l in self.listeners:
l.region_exited(self, self.current_node)
def _cancel_current_region(self):
self.current_node.cancel_region()
for l in self.listeners:
l.region_canceled(self, self.current_node)
@property
def current_node(self):
"""Return current region node.
Returns:
:py:class:`region_profiler.node.RegionNode`:
node of the region as defined above
"""
        return self.node_stack[-1]

# --- end of region_profiler/profiler.py ---
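A toy version of the `region()` context manager above shows the stack-plus-try/finally discipline without the node machinery. The names below are illustrative, not the package's API:

```python
from contextlib import contextmanager
import time

stack = []   # currently open region names
totals = {}  # accumulated elapsed time per region name

@contextmanager
def region(name):
    stack.append(name)
    start = time.perf_counter()
    try:
        yield
    finally:
        # Always record time and pop, even if the body raises.
        totals[name] = totals.get(name, 0.0) + time.perf_counter() - start
        stack.pop()

with region('outer'):
    with region('inner'):
        pass

print(sorted(totals))  # ['inner', 'outer']
```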
from region_profiler.utils import pretty_print_time
def as_column(print_name=None, name=None):
"""Mark a function as a column provider.
Args:
print_name (:py:class:`str`, optional): column name without underscores.
If None, name with underscores replaced is used
name (:py:class:`str`, optional): column name. If None, function name is used
"""
def decorate(func):
setattr(func, 'column_name', name or func.__name__)
setattr(func, 'column_print_name', print_name or func.column_name.replace('_', ' '))
setattr(func, '__doc__', 'Column provider. Retrieves {}.'.format(func.column_print_name))
return func
return decorate
@as_column()
def name(this_slice, all_slices):
return this_slice.name
@as_column(name='name')
def indented_name(this_slice, all_slices):
return '. ' * this_slice.call_depth + this_slice.name
@as_column(name='id')
def node_id(this_slice, all_slices):
return str(this_slice.id)
@as_column()
def parent_id(this_slice, all_slices):
return str(this_slice.parent.id) if this_slice.parent else ''
@as_column()
def parent_name(this_slice, all_slices):
return this_slice.parent.name if this_slice.parent else ''
@as_column('% of total')
def percents_of_total(this_slice, all_slices):
p = this_slice.total_time * 100. / all_slices[0].total_time
return '{:.2f}%'.format(p)
@as_column()
def count(this_slice, all_slices):
return str(this_slice.count)
@as_column()
def total_us(this_slice, all_slices):
return str(int(this_slice.total_time * 1000000))
@as_column()
def total(this_slice, all_slices):
return pretty_print_time(this_slice.total_time)
@as_column()
def total_inner_us(this_slice, all_slices):
return str(int(this_slice.total_inner_time * 1000000))
@as_column()
def total_inner(this_slice, all_slices):
return pretty_print_time(this_slice.total_inner_time)
@as_column()
def average_us(this_slice, all_slices):
return str(int(this_slice.avg_time * 1000000))
@as_column()
def average(this_slice, all_slices):
return pretty_print_time(this_slice.avg_time)
@as_column()
def min_us(this_slice, all_slices):
return str(int(this_slice.min_time * 1000000))
@as_column()
def min(this_slice, all_slices):
return pretty_print_time(this_slice.min_time)
@as_column()
def max_us(this_slice, all_slices):
return str(int(this_slice.max_time * 1000000))
@as_column()
def max(this_slice, all_slices):
    return pretty_print_time(this_slice.max_time)

# --- end of region_profiler/reporter_columns.py ---
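The `as_column` decorator above simply attaches metadata attributes to plain functions so a reporter can discover column names later. A standalone sketch of the pattern:

```python
def as_column(print_name=None, name=None):
    # Attach report-column metadata to a plain function.
    def decorate(func):
        func.column_name = name or func.__name__
        func.column_print_name = print_name or func.column_name.replace('_', ' ')
        return func
    return decorate

@as_column(print_name='% of total')
def percents_of_total(value):
    return '{:.2f}%'.format(value)

print(percents_of_total.column_name)        # percents_of_total
print(percents_of_total.column_print_name)  # % of total
print(percents_of_total(12.5))              # 12.50%
```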
import inspect
import os
import time
from collections import namedtuple
class SeqStats:
"""Helper class for calculating online stats of a number sequence.
:py:class:`SeqStats` records the following parameters of a number sequence:
- element count
- sum
- average
- min value
- max value
:py:class:`SeqStats` does not store the sequence itself,
statistics are calculated online.
"""
def __init__(self, count=0, total=0, min=0, max=0):
self.count = count
self.total = total
self.min = min
self.max = max
def add(self, x):
"""Update statistics with the next value of a sequence.
Args:
x (number): next value in the sequence
"""
self.count += 1
self.total += x
self.max = x if self.count == 1 else max(self.max, x)
self.min = x if self.count == 1 else min(self.min, x)
@property
def avg(self):
"""Calculate sequence average.
"""
return 0 if self.count == 0 else self.total / self.count
def __str__(self):
return 'SeqStats{{{}..{}..{}/{}}}'.format(self.min, self.avg,
self.max, self.count)
def __repr__(self):
return ('SeqStats(count={}, total={}, min={}, max={})'
.format(self.count, self.total, self.min, self.max))
def __eq__(self, other):
return (self.total == other.total and self.count == other.count and
self.min == other.min and self.max == other.max)
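A quick standalone check of the online-statistics idea documented above: track count, sum, min and max without storing the sequence. This is a trimmed re-implementation for illustration, not an import of `SeqStats`:

```python
class OnlineStats:
    # Minimal online min/avg/max tracker, mirroring SeqStats above.
    def __init__(self):
        self.count = 0
        self.total = 0
        self.min = 0
        self.max = 0

    def add(self, x):
        self.count += 1
        self.total += x
        self.max = x if self.count == 1 else max(self.max, x)
        self.min = x if self.count == 1 else min(self.min, x)

    @property
    def avg(self):
        return 0 if self.count == 0 else self.total / self.count

s = OnlineStats()
for v in (4, 1, 7):
    s.add(v)
print(s.count, s.total, s.min, s.max, s.avg)  # 3 12 1 7 4.0
```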
def default_clock():
"""Default clock provider for Timer class.
Returns:
float: value (in fractional seconds) of a performance counter
"""
return time.perf_counter()
class Timer:
"""Simple timer.
Allows to measure duration between `start` and `stop` events.
By default, measurement is done on a fractions of a second scale.
This can be changed by providing a different clock in constructor.
    The duration can be retrieved using
    :py:meth:`elapsed` or :py:meth:`current_elapsed`.
"""
def __init__(self, clock=default_clock):
"""
Args:
clock(function): functor, that returns current clock.
Measurements have the same precision as the clock
"""
self.clock = clock
self._begin_ts = 0
self._end_ts = 0
self._running = False
self.last_event_time = 0
def begin_ts(self):
"""Start event timestamp.
Returns:
int or float: timestamp
"""
return self._begin_ts
def end_ts(self):
"""Stop event timestamp.
Returns:
int or float: timestamp
"""
return self._end_ts
def start(self):
"""Start new timer measurement.
Call this function again to continue measurements.
"""
self._begin_ts = self.clock()
self.last_event_time = self._begin_ts
self._running = True
def stop(self):
"""Stop timer and add current measurement to total.
Returns:
int or float: duration of the last measurement
"""
self.last_event_time = self.clock()
if self._running:
self._end_ts = self.last_event_time
self._running = False
def mark_aux_event(self):
"""Update ``last_event_time``.
"""
self.last_event_time = self.clock()
def is_running(self):
"""Check if timer is currently running.
Returns:
bool:
"""
return self._running
def elapsed(self):
"""Return duration between `start` and `stop` events.
If timer is running (no :py:meth:`stop` has been called
after last :py:meth:`start` invocation), 0 is returned.
Returns:
int or float: duration
"""
return (self._end_ts - self._begin_ts) if not self._running else 0
def current_elapsed(self):
"""Return duration between `start` and `stop` events or
duration from last `start` event if no pairing `stop` event occurred.
Returns:
int or float: duration
"""
return (self._end_ts - self._begin_ts) if not self._running \
else (self.clock() - self._begin_ts)
CallerInfo = namedtuple('CallerInfo', ['file', 'line', 'name'])
def get_caller_info(stack_depth=1):
"""
Return caller function name and
call site filename and line number.
Args:
stack_depth (int): select caller frame to be inspected.
- 0 corresponds to the call site of
the :py:func:`get_caller_info` itself.
- 1 corresponds to the call site of
the parent function.
Returns:
CallerInfo: information about the caller
"""
frame = inspect.stack()[stack_depth + 1]
info = CallerInfo(frame[1], frame[2], frame[3])
del frame # prevents cycle reference
return info
def get_name_by_callsite(stack_depth=1):
"""Get string description of the call site
of the caller.
Args:
stack_depth: select caller frame to be inspected.
- 0 corresponds to the call site of
the :py:meth:`get_name_by_callsite` itself.
- 1 corresponds to the call site of
the parent function.
Returns:
str: string in the following format: ``'function<filename:line>'``
"""
info = get_caller_info(stack_depth + 1)
f = os.path.basename(info.file)
return '{}() <{}:{}>'.format(info.name, f, info.line)
class NullContext:
"""Empty context manager.
"""
def __enter__(self):
pass
def __exit__(self, exc_type, exc_val, exc_tb):
pass
def null_decorator():
"""Empty decorator.
"""
return lambda fn: fn
def pretty_print_time(sec):
"""Get duration as a human-readable string.
Examples:
- 10.044 => '10.04 s'
- 0.13244 => '132.4 ms'
- 0.0000013244 => '1.324 us'
Args:
sec (float): duration in fractional seconds scale
Returns:
str: human-readable string representation as shown above.
"""
for unit in ('s', 'ms', 'us'):
if sec >= 500:
return '{:.0f} {}'.format(sec, unit)
if sec >= 100:
return '{:.1f} {}'.format(sec, unit)
if sec >= 10:
return '{:.2f} {}'.format(sec, unit)
if sec >= 1:
return '{:.3f} {}'.format(sec, unit)
sec *= 1000
    return '{} ns'.format(int(sec))

# --- end of region_profiler/utils.py ---
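The docstring examples in `pretty_print_time` can be verified by running a standalone copy of the function (reproduced here so the block is self-contained):

```python
def pretty_print_time(sec):
    # Human-readable duration; copy of region_profiler.utils.pretty_print_time.
    for unit in ('s', 'ms', 'us'):
        if sec >= 500:
            return '{:.0f} {}'.format(sec, unit)
        if sec >= 100:
            return '{:.1f} {}'.format(sec, unit)
        if sec >= 10:
            return '{:.2f} {}'.format(sec, unit)
        if sec >= 1:
            return '{:.3f} {}'.format(sec, unit)
        sec *= 1000  # drop to the next-smaller unit
    return '{} ns'.format(int(sec))

print(pretty_print_time(10.044))        # 10.04 s
print(pretty_print_time(0.13244))       # 132.4 ms
print(pretty_print_time(0.0000013244))  # 1.324 us
```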
import atexit
import warnings
from region_profiler.chrome_trace_listener import ChromeTraceListener
from region_profiler.debug_listener import DebugListener
from region_profiler.profiler import RegionProfiler
from region_profiler.reporters import ConsoleReporter
from region_profiler.utils import NullContext
_profiler = None
"""Global :py:class:`RegionProfiler` instance.
This singleton is initialized using :py:func:`install`.
"""
def install(reporter=ConsoleReporter(), chrome_trace_file=None,
debug_mode=False, timer_cls=None):
"""Enable profiling.
Initialize a global profiler with user arguments
and register its finalization at application exit.
Args:
        reporter (:py:class:`region_profiler.reporters.ConsoleReporter`):
            The reporter used to print out the final summary.
            Provided reporters:
            - :py:class:`region_profiler.reporters.ConsoleReporter`
            - :py:class:`region_profiler.reporters.CsvReporter`
chrome_trace_file (:py:class:`str`, optional): path to the output trace file.
If provided, Chrome Trace generation is enable and the resulting trace is saved under this name.
debug_mode (:py:class:`bool`, default=False):
Enable verbose logging for profiler events.
See :py:class:`region_profiler.debug_listener.DebugListener`
timer_cls: (:py:obj:`region_profiler.utils.Timer`):
Pass custom timer constructor. Mainly useful for testing.
"""
global _profiler
if _profiler is None:
listeners = []
if chrome_trace_file:
listeners.append(ChromeTraceListener(chrome_trace_file))
if debug_mode:
listeners.append(DebugListener())
_profiler = RegionProfiler(listeners=listeners, timer_cls=timer_cls)
_profiler.root.enter_region()
atexit.register(lambda: reporter.dump_profiler(_profiler))
atexit.register(lambda: _profiler.finalize())
else:
warnings.warn("region_profiler.install() must be called only once", stacklevel=2)
return _profiler
def region(name=None, asglobal=False):
"""Start new region in the current context.
This function implements context manager interface.
When used with ``with`` statement,
it enters a region with the specified name in the current context
on invocation and leaves it on ``with`` block exit.
Examples::
with rp.region('A'):
...
Args:
name (:py:class:`str`, optional): region name.
If None, the name is deducted from region location in source
asglobal (bool): enter the region from root context, not a current one.
May be used to merge stats from different call paths
Returns:
:py:class:`region_profiler.node.RegionNode`: node of the region.
"""
if _profiler is not None:
return _profiler.region(name, asglobal, 0)
else:
return NullContext()
def func(name=None, asglobal=False):
"""Decorator for entering region on a function call.
Examples::
@rp.func()
def foo():
...
Args:
name (:py:class:`str`, optional): region name.
If None, the name is deducted from region location in source
asglobal (bool): enter the region from root context, not a current one.
May be used to merge stats from different call paths
Returns:
Callable: a decorator for wrapping a function
"""
def decorator(fn):
nonlocal name
if name is None:
name = fn.__name__
name += '()'
def wrapped(*args, **kwargs):
with region(name, asglobal=asglobal):
return fn(*args, **kwargs)
return wrapped
return decorator
def iter_proxy(iterable, name=None, asglobal=False):
"""Wraps an iterable and profiles :func:`next()` calls on this iterable.
This proxy may be useful, when the iterable is some data loader,
that performs data retrieval on each iteration.
For instance, it may pull data from an asynchronous process.
Such proxy was used to identify that when receiving a batch of
8 samples from a loader process, first 5 samples were loaded immediately
(because they were computed asynchronously during the loop body),
but then it stalled on the last 3 iterations meaning that loading had
bigger latency than the loop body.
Examples::
for batch in rp.iter_proxy(loader):
...
Args:
iterable (Iterable): an iterable to be wrapped
name (:py:class:`str`, optional): region name.
If None, the name is deducted from region location in source
asglobal (bool): enter the region from root context, not a current one.
May be used to merge stats from different call paths
Returns:
Iterable: an iterable, that yield same data as the passed one
"""
if _profiler is not None:
return _profiler.iter_proxy(iterable, name, asglobal, 0)
else:
return iterable | /region_profiler-0.9.3.tar.gz/region_profiler-0.9.3/region_profiler/global_instance.py | 0.770724 | 0.175856 | global_instance.py | pypi |
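The mechanism ``iter_proxy`` describes, wrapping an iterable and timing each ``next()`` call, can be sketched standalone. ``TimedIterProxy`` is an illustrative name, not part of the region_profiler API; the real work is delegated to ``_profiler.iter_proxy`` as shown above.

```python
import time

# Minimal sketch of the iter_proxy technique: wrap an iterable and measure
# how long each next() call blocks (e.g. waiting on an async data loader).
# TimedIterProxy is a hypothetical name used only for this illustration.
class TimedIterProxy:
    def __init__(self, iterable):
        self._it = iter(iterable)
        self.timings = []  # seconds spent inside each next() call

    def __iter__(self):
        return self

    def __next__(self):
        start = time.perf_counter()
        try:
            return next(self._it)  # may block on slow producers
        finally:
            # Record the duration even for the final StopIteration.
            self.timings.append(time.perf_counter() - start)

proxy = TimedIterProxy(range(4))
assert list(proxy) == [0, 1, 2, 3]  # yields the same data as the original
assert len(proxy.timings) == 5      # 4 items plus the final StopIteration
```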
import numpy as np
import xarray as xr
import xgcm

z_suffixes = {
    "zstr": "z",
    "rho2": "rho2"
}


def load_CM4p25(z_coord="zstr"):
    realm = "ocean"
    frequency = "annual"
    diag_path = "/archive/Raphael.Dussin/FMS2019.01.03_devgfdl_20221223/CM4_piControl_c192_OM4p25_v8/gfdl.ncrc4-intel18-prod-openmp/pp/"
    suffix = z_suffixes[z_coord]
    ds = xr.open_mfdataset(
        f"{diag_path}/{realm}_{frequency}_{suffix}/ts/{frequency}/10yr/*.0341*.nc",
        chunks={'time': 1},
        decode_times=False
    ).isel(time=[0])
    og = xr.open_dataset(f"{diag_path}/{realm}_{frequency}_{suffix}/{realm}_{frequency}_{suffix}.static.nc")
    sg = xr.open_dataset("/archive/Raphael.Dussin/datasets/OM4p25/c192_OM4_025_grid_No_mg_drag_v20160808_unpacked/ocean_hgrid.nc")
    ds = fix_grid_coords(ds, og, sg)
    return ds_to_grid(ds)


def fix_grid_coords(ds, og, sg):
    # Replace missing depths with 0 so land points have a defined value.
    og['deptho'] = (
        og['deptho'].where(~np.isnan(og['deptho']), 0.)
    )
    # The supergrid (sg) is twice the model resolution: odd indices fall on
    # tracer points (h), even indices on cell corners/faces (q).
    og = og.assign_coords({
        'geolon':   xr.DataArray(sg['x'][1::2, 1::2].data, dims=["yh", "xh"]),
        'geolat':   xr.DataArray(sg['y'][1::2, 1::2].data, dims=["yh", "xh"]),
        'geolon_u': xr.DataArray(sg['x'][1::2, 0::2].data, dims=["yh", "xq"]),
        'geolat_u': xr.DataArray(sg['y'][1::2, 0::2].data, dims=["yh", "xq"]),
        'geolon_v': xr.DataArray(sg['x'][0::2, 1::2].data, dims=["yq", "xh"]),
        'geolat_v': xr.DataArray(sg['y'][0::2, 1::2].data, dims=["yq", "xh"]),
        'geolon_c': xr.DataArray(sg['x'][0::2, 0::2].data, dims=["yq", "xq"]),
        'geolat_c': xr.DataArray(sg['y'][0::2, 0::2].data, dims=["yq", "xq"])
    })
    # Add velocity face widths to calculate distances along a section.
    ds = ds.assign_coords({
        'dxCv': xr.DataArray(
            og['dxCv'].transpose('xh', 'yq').values, dims=('xh', 'yq')
        ),
        'dyCu': xr.DataArray(
            og['dyCu'].transpose('xq', 'yh').values, dims=('xq', 'yh')
        )
    })
    ds = ds.assign_coords({
        'areacello': xr.DataArray(og['areacello'].values, dims=("yh", "xh")),
        'geolon':    xr.DataArray(og['geolon'].values, dims=("yh", "xh")),
        'geolat':    xr.DataArray(og['geolat'].values, dims=("yh", "xh")),
        'geolon_u':  xr.DataArray(og['geolon_u'].values, dims=("yh", "xq")),
        'geolat_u':  xr.DataArray(og['geolat_u'].values, dims=("yh", "xq")),
        'geolon_v':  xr.DataArray(og['geolon_v'].values, dims=("yq", "xh")),
        'geolat_v':  xr.DataArray(og['geolat_v'].values, dims=("yq", "xh")),
        'geolon_c':  xr.DataArray(og['geolon_c'].values, dims=("yq", "xq")),
        'geolat_c':  xr.DataArray(og['geolat_c'].values, dims=("yq", "xq")),
        'deptho':    xr.DataArray(og['deptho'].values, dims=("yh", "xh"))
    })
    return ds


def ds_to_grid(ds, z_coord="zstr"):
    coords = {
        'X': {'center': 'xh', 'outer': 'xq'},
        'Y': {'center': 'yh', 'outer': 'yq'}
    }
    # Pick the vertical coordinate pair that matches the dataset's dims.
    if "z_l" in ds.dims:
        coords['Z'] = {'center': 'z_l', 'outer': 'z_i'}
    elif "rho2_l" in ds.dims:
        coords['Z'] = {'center': 'rho2_l', 'outer': 'rho2_i'}
    return ds, xgcm.Grid(ds, coords=coords, periodic=["X"])
.. currentmodule:: regionmask
What's New
==========
.. ipython:: python
   :suppress:

   import regionmask
.. _whats-new.0.10.0:
v0.10.0 (31.05.2023)
--------------------
regionmask v0.10.0 brings support for `cf_xarray <https://cf-xarray.readthedocs.io>`__,
which allows auto-detecting coordinate names and handling region names in 2D
masks. It also supports shapely 2.0, and creating overlapping 3D masks has become faster.
Breaking Changes
~~~~~~~~~~~~~~~~
- Made more arguments keyword-only for several functions and methods, e.g., for
:py:meth:`Regions.mask` (:pull:`368`).
- Passing ``lon_name`` and ``lat_name`` to the masking methods and functions (e.g. :py:meth:`Regions.mask`)
is deprecated. Please pass the lon and lat coordinates directly, e.g., ``mask*(ds[lon_name], ds[lat_name])``
(:issue:`293` and :pull:`371`).
- Marked the ``method`` keyword to the masking methods and functions (e.g. :py:meth:`Regions.mask`)
as internal and flagged it for removal in a future version. Passing this argument should only
be necessary for testing (:pull:`417`).
Enhancements
~~~~~~~~~~~~
- Can now autodetect longitude and latitude coordinates from `cf metadata <http://cfconventions.org/>`__
if the optional dependency `cf_xarray <https://cf-xarray.readthedocs.io/en/latest/coord_axes.html>`__
is installed (:pull:`393`, :issue:`364`).
- 2D masks (e.g. :py:meth:`Regions.mask`) now contain `flag_values` and `flag_meanings` as
attributes (`attrs`). Together with `cf_xarray <https://cf-xarray.readthedocs.io/en/latest/flags.html>`__
these can be leveraged to select single (``mask.cf == "CNA"``) or multiple (``mask.cf.isin``)
regions (:pull:`361`, :issue:`346`).
- Added :py:func:`flatten_3D_mask` - a helper function to flatten 3D masks to 2D masks
(:issue:`399`).
- The masking functions (e.g. :py:meth:`Regions.mask`) now warn if the ``units`` of the
coordinates (``lat.attrs["units"]``) are given as "radians" (:issue:`279`).
- Better error when passing a single region without wrapping it into a list or tuple (:issue:`372`).
- Added :py:class:`set_options` to regionmask which can, currently, be used to control
the number of displayed rows of :py:class:`Regions` (:issue:`376`).
- Create faster masks with shapely 2.0, which replaces pygeos (:pull:`349`).
- Allow setting the cache location manually: ``regionmask.set_options(cache_dir="~/.rmask")``.
The default location is given by ``pooch.os_cache("regionmask")``, i.e. `~/.cache/regionmask/`
on unix-like operating systems (:pull:`403`).
- Add python 3.11 to list of supported versions (:pull:`390`).
- Added `pyogrio <https://pyogrio.readthedocs.io>`__ as optional dependency. Natural earth
data shapefiles are now loaded faster, if pyogrio is installed (:pull:`406`).
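As a sketch of what the new ``flag_values``/``flag_meanings`` attributes enable even without cf_xarray installed (the region names, numbers, and mask row below are invented for illustration, following the CF flag-attribute layout):

```python
# Hypothetical CF-style flag attributes on a 2D regionmask mask:
# one region number per flag meaning, blank-separated.
flag_values = [3, 4, 5]
flag_meanings = "WNA CNA ENA"

# Map region abbreviations to their mask numbers.
name_to_number = dict(zip(flag_meanings.split(), flag_values))
mask_row = [3, 3, 4, 5, 4]  # one invented row of a 2D integer mask

# Boolean selection of one region, the plain-Python analogue of
# selecting via cf_xarray (e.g. ``mask.cf == "CNA"``):
is_cna = [v == name_to_number["CNA"] for v in mask_row]
assert is_cna == [False, False, True, False, True]
```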
New regions
~~~~~~~~~~~
- Added :py:attr:`natural_earth.countries_10` regions from natural earth (:pull:`396`).
Docs
~~~~
- The version number should now be displayed correctly again on readthedocs. Formerly
regionmask was installed from a dirty and shallow git archive, thus setuptools_scm did not
report the correct version number (:pull:`348`, :pull:`421` see also `readthedocs/readthedocs.org#8201
<https://github.com/readthedocs/readthedocs.org/issues/8201>`_).
Internal Changes
~~~~~~~~~~~~~~~~
- Directly create 3D masks, relevant for overlapping regions as part of :issue:`228`:
- using shapely, pygeos (:pull:`343`), and rasterio (:pull:`345`)
- in the function to determine points at *exactly* -180°E (or 0°E) and -90°N (:pull:`341`)
- Use importlib.metadata if available (i.e. for python > 3.7) - should lead to a faster
import time for regionmask (:pull:`369`).
- Small changes to the repr of :py:class:`Regions` (:pull:`378`).
- Reduce the memory requirements of :py:func:`core.utils.unpackbits` (:issue:`379`).
- Speed up loading of `us_states_10` and `us_states_50` by defining a `bbox` which only
needs to load a subset of the data (:pull:`405`).
.. _whats-new.0.9.0:
v0.9.0 (02.03.2022)
-------------------
Version 0.9.0 contains some exciting improvements: overlapping regions and unstructured
grids can now be masked correctly. Further, :py:class:`Regions` can now be round-tripped
to :py:class:`geopandas.GeoDataFrame` objects. The new version also adds PRUDENCE
regions and a more stable handling of naturalearth regions.
Many thanks to the contributors to the v0.9.0 release: Aaron Spring, Mathias Hauser, and
Ruth Lorenz!
Breaking Changes
~~~~~~~~~~~~~~~~
- Removed support for Python 3.6 (:pull:`288`).
- The ``xarray.DataArray`` mask returned by all masking functions (e.g. :py:meth:`Regions.mask`)
was renamed from `region` to `mask`. The former was ambiguous with respect to the `region` dimension
of 3D masks (:pull:`318`).
- The minimum versions of some dependencies were changed (:pull:`311`, :pull:`312`):
============ ===== =====
Package      Old   New
============ ===== =====
geopandas    0.6   0.7
matplotlib   3.1   3.2
pooch        1.0   1.2
rasterio     1.0   1.1
shapely      1.6   1.7
============ ===== =====
- ``regionmask.defined_regions.natural_earth`` is deprecated. ``defined_regions.natural_earth`` used
cartopy to download natural_earth data and it was unclear which version of the regions
is available. This is problematic because some regions change between the versions.
Please use ``defined_regions.natural_earth_v4_1_0`` or ``defined_regions.natural_earth_v5_0_0``
instead (:issue:`306`, :pull:`311`).
- Passing coordinates with different type to :py:meth:`Regions.mask` and :py:meth:`Regions.mask_3D`
is no longer supported, i.e. can no longer pass lon as numpy array and lat as
DataArray (:pull:`294`).
- The mask no longer has dimension coordinates when 2D numpy arrays are passed as lat and
lon coords (:pull:`294`).
Enhancements
~~~~~~~~~~~~
- regionmask now correctly treats overlapping regions if ``overlap=True`` is set in
the constructor (:issue:`228`, :pull:`318`).
Per default regionmask assumes non-overlapping regions. In this case grid points of
overlapping polygons will silently be assigned to the region with the higher number.
This may change in a future version.
- :py:meth:`Regions.mask` and :py:meth:`Regions.mask_3D` now work with unstructured 1D
grids such as:
- `ICON <https://code.mpimet.mpg.de/projects/iconpublic>`_
- `FESOM <https://fesom.de/>`_
- `MPAS <https://mpas-dev.github.io/>`_
with 1-dimensional coordinates of the form ``lon(cell)`` and ``lat(cell)``.
Note that only xarray arrays can be detected as unstructured grids.
(:issue:`278`, :pull:`280`). By `Aaron Spring <https://github.com/aaronspring>`_.
- Add methods to convert :py:class:`Regions` to (geo)pandas objects, namely :py:meth:`Regions.to_geodataframe`,
:py:meth:`Regions.to_geoseries`, and :py:meth:`Regions.to_dataframe`. The geopandas.GeoDataFrame
can be converted back (round-tripped) using :py:meth:`Regions.from_geodataframe`
(:issue:`50`, :pull:`298`).
- The plotting methods (:py:meth:`Regions.plot` and :py:meth:`Regions.plot_regions`) now
use a more sophisticated logic to subsample lines on GeoAxes plots. The new method is
based on the euclidean distance of each segment. Per default the maximum distance of
each segment is 1 for lat/ lon coords - see the ``tolerance`` keyword of the plotting
methods. The ``subsample`` keyword is deprecated (:issue:`109`, :pull:`292`).
- The download of the natural_earth regions is now done in regionmask (using pooch) and no
longer relies on cartopy (:issue:`306`, :pull:`311`).
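The overlap semantics described above can be illustrated without regionmask (the membership data below is invented; real masks are of course built from polygons and coordinates):

```python
# memberships[i] is the set of region numbers whose polygons contain
# grid point i - invented data standing in for real point-in-polygon tests.
memberships = [{0}, {0, 1}, {1}, set()]

# A 2D integer mask stores one value per point, so an overlapping point
# can keep only one region - here the one with the higher number, as the
# changelog describes for the non-overlapping default.
mask_2d = [max(m) if m else None for m in memberships]
assert mask_2d == [0, 1, 1, None]

# A 3D boolean mask (one layer per region) loses no information.
regions = [0, 1]
mask_3d = [[r in m for m in memberships] for r in regions]
assert mask_3d == [[True, True, False, False],
                   [False, True, True, False]]
```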
Deprecations
~~~~~~~~~~~~
- The ``regionmask.defined_regions._ar6_pre_revisions`` regions are deprecated. The
``regionmask.defined_regions.ar6`` regions should be used instead (:issue:`314`, :pull:`320`).
New regions
~~~~~~~~~~~
- Added :py:attr:`prudence` regions for Europe from `Christensen and Christensen, 2007,
<https://link.springer.com/article/10.1007/s10584-006-9210-7>`_ (:pull:`283`).
By `Ruth Lorenz <https://github.com/ruthlorenz>`_.
Bug Fixes
~~~~~~~~~
- The name of lon and lat coordinates when passed as single elements is now respected when
creating masks, i.e. for ``region.mask(ds.longitude, ds.latitude)`` (:issue:`129`,
:pull:`294`).
- Ensure :py:meth:`Regions.plot` uses the current axes (``plt.gca()``) if possible and
error if a non-cartopy GeoAxes is passed (:issue:`316`, :pull:`321`).
Docs
~~~~
- Went over the documentation, improved some sections, unpinned some packages, modernized
some aspects (:pull:`313`).
Internal Changes
~~~~~~~~~~~~~~~~
- Fix compatibility with shapely 1.8 (:pull:`291`).
- Fix downloading naturalearth regions part 2 (see :pull:`261`): Monkeypatch the correct
download URL and catch all ``URLError``, not only timeouts (:pull:`289`).
- Rewrote the function to create the mask `DataArray` (:issue:`168`, :pull:`294`).
- Follow up to :pull:`294` - fix wrong dimension order for certain conditions (:issue:`295`).
- Refactor `test_mask` - make use of ``xr.testing.assert_equal`` and simplify some
elements (:pull:`297`).
- Add `packaging` as a dependency (:issue:`324`, :pull:`328`).
- Add python 3.10 to list of supported versions (:pull:`330`).
v0.8.0 (08.09.2021)
-------------------
Version 0.8.0 contains an important bugfix, improves the handling of wrapped longitudes,
can create masks for coordinates and regions that do not have a lat/ lon coordinate
reference system and masks for irregular and 2D grids are created faster if the optional
dependency `pygeos <https://pygeos.readthedocs.io/en/stable/>`__ is installed.
Breaking Changes
~~~~~~~~~~~~~~~~
- Points at *exactly* -180°E (or 0°E) and -90°N are no longer special cased if
``wrap_lon=False`` when creating a mask - see :doc:`methods<notebooks/method>` for
details (:issue:`151`).
- Updates to :py:meth:`Regions.plot` and :py:meth:`Regions.plot_regions` (:pull:`246`):
- Deprecated all positional arguments (keyword arguments only).
- The ``regions`` keyword was deprecated. Subset regions before plotting, i.e.
use ``r[regions].plot()`` instead of ``r.plot(regions=regions)``. This will allow
removing an argument from the methods.
- Updates to :py:meth:`Regions.plot` (:pull:`246`):
- Added ``lw=0`` to the default ``ocean_kws`` and ``land_kws`` to avoid overlap with
the coastlines.
- Renamed the ``proj`` keyword to ``projection`` for consistency with cartopy.
- Renamed the ``coastlines`` keyword to ``add_coastlines`` for consistency with other
keywords (e.g. ``add_land``).
Enhancements
~~~~~~~~~~~~
- Creating masks for irregular and 2D grids can be speed up considerably by installing
`pygeos <https://pygeos.readthedocs.io/en/stable/>`__. pygeos is an optional dependency
(:issue:`123`).
- Can now create masks for regions with arbitrary coordinates e.g. for coordinate reference
systems that are not lat/ lon based by setting ``wrap_lon=False`` (:issue:`151`).
- The extent of the longitude coordinates is no longer checked to determine the wrap,
now only the extent of the mask is considered (:issue:`249`). This should allow to
infer ``wrap_lon`` correctly for more cases (:issue:`213`).
Bug Fixes
~~~~~~~~~
- Fixed a bug that could silently lead to a wrong mask in certain cases. Three conditions are
required:
1. The longitude coordinates are not ordered (note that wrapping the longitudes can
also lead to unordered coordinates).
2. Rearranging the coordinates makes them equally spaced.
3. The split point is not in the middle of the array.
Thus, the issue would happen for the following example longitude coordinates: ``[3, 4, 5, 1, 2]``
(but not for ``[3, 4, 1, 2]``). Before the bugfix the mask would incorrectly be rearranged
in the following order ``[4, 5, 1, 2, 3]`` (:issue:`266`).
- :py:meth:`Regions.mask` (and all other ``mask`` methods and functions) no longer raise
an error for regions that exceed 360° latitude if ``wrap_lon=False``. This was most
likely a regression from :pull:`48` (:issue:`151`).
- Raise a ValueError if the input coordinates (lat and lon) have the wrong number of dimensions
or shape (:pull:`245`, :issue:`242`).
Docs
~~~~
- Updated the plotting tutorial (:pull:`246`).
- Install `regionmask` via `ci/requirements/docs.yml` on RTD using pip and update the
packages: don't require jupyter (but ipykernel, which leads to a smaller environment),
use new versions of sphinx and sphinx_rtd_theme (:pull:`248`).
- Pin cartopy to version 0.19 and matplotlib to version 3.4 and use a (temporary) fix for
:issue:`165`. This allows to make use of `conda-forge/cartopy-feedstock#116
<https://github.com/conda-forge/cartopy-feedstock/pull/116>`__ such that natural_earth
shapefiles can be downloaded again. Also added some other minor doc updates
(:pull:`269`).
Internal Changes
~~~~~~~~~~~~~~~~
- Updated setup configuration and automated version numbering:
- Moved contents of setup.py to setup.cfg (:pull:`240`).
- Use ``pyproject.toml`` to define the installation requirements (:pull:`240`, :pull:`247`).
- Use setuptools-scm for automatic versioning (:pull:`240`).
- Allow installing from git archives (:pull:`240`).
- Refactor ``test_defined_region`` and ``test_mask_equal_defined_regions`` - globally
define a list of all available `defined_regions` (:issue:`256`).
- In the tests: downloading naturalearth regions could run forever. Make sure this does
not happen and turn the timeout Error into a warning (:pull:`261`).
- Set ``regex=True`` in ``pd.Series.str.replace`` due to an upcoming change in pandas (:pull:`262`).
v0.7.0 (28.07.2021)
-------------------
Version 0.7.0 is mostly a maintenance version. It drops python 2.7 support, accompanies
the move of the repo to the regionmask organisation (`regionmask/regionmask <http://github.com/regionmask/regionmask>`__),
finalizes a number of deprecations, and restores compatibility with xarray 0.19.
Breaking Changes
~~~~~~~~~~~~~~~~
- Removed support for Python 2. This is the first version of regionmask that is Python 3 only!
- The minimum versions of some dependencies were changed (:pull:`220`):
============ ===== =====
Package      Old   New
============ ===== =====
numpy        1.15  1.17
xarray       0.13  0.15
============ ===== =====
- Moved regionmask to its own organisation on github. It can now be found under
`regionmask/regionmask <http://github.com/regionmask/regionmask>`__ (:issue:`204` and
:pull:`224`).
- matplotlib and cartopy are now optional dependencies. Note that cartopy is also
required to download and access the natural earth shapefiles (:issue:`169`).
Deprecations
~~~~~~~~~~~~
- Removed ``Regions_cls`` and ``Region_cls`` (deprecated in v0.5.0). Use
:py:class:`Regions` instead (:pull:`182`).
- Removed the ``create_mask_contains`` function (deprecated in v0.5.0). Use
``regionmask.Regions(coords).mask(lon, lat)`` instead (:pull:`181`).
- Removed the ``xarray`` keyword to all ``mask`` functions. This was deprecated in
version 0.5.0. To obtain a numpy mask use ``mask.values`` (:issue:`179`).
- Removed the ``"legacy"``-masking deprecated in v0.5.0 (:issue:`69`, :pull:`183`).
Enhancements
~~~~~~~~~~~~
- :py:attr:`Regions.plot()` and :py:attr:`Regions.plot_regions()` now take the
``label_multipolygon`` keyword to add text labels to all Polygons of
MultiPolygons (:issue:`185`).
- :py:attr:`Regions.plot()` and :py:attr:`Regions.plot_regions()` now warn on unused arguments,
e.g. ``plot(add_land=False, land_kws=dict(color="g"))`` (:issue:`192`).
New regions
~~~~~~~~~~~
- Added :py:attr:`natural_earth.land_10` and :py:attr:`natural_earth.land_50` regions from
natural earth (:pull:`195`) by `Martin van Driel <https://github.com/martinvandriel>`_.
Bug Fixes
~~~~~~~~~
- Text labels outside of the map area should now be correctly clipped in most cases
(:issue:`157`).
- Move ``_flatten_polygons`` to ``utils`` and raise an error when something else than
a ``Polygon`` or ``MultiPolygon`` is passed (:pull:`211`).
- Fix incompatibility with xarray >=0.19.0 (:pull:`234`). By `Julius Busecke <https://github.com/jbusecke>`_.
Docs
~~~~
- Unified the docstrings of all ``mask`` functions (:issue:`173`).
- Mentioned how to calculate regional medians (:issue:`170`).
- Mentioned how to open regions specified in a yaml file using intake and fsspec
(:issue:`93`, :pull:`205`). By `Aaron Spring <https://github.com/aaronspring>`_.
- Fixed the docstrings using `velin <https://github.com/Carreau/velin>`__ (:pull:`231`).
Internal Changes
~~~~~~~~~~~~~~~~
- Moved the CI from azure to github actions (after moving to the regionmask organisation)
(:pull:`232`).
- Update the CI: use mamba for faster installation, merge code coverage from all runs,
don't check the coverage of the tests (:pull:`197`).
- Fix doc creation for newest version of ``jupyter nbconvert`` (``template`` is now
``template-file``).
- Update ``ci/min_deps_check.py`` to the newest version on xarray (:pull:`218`).
- Add a test environment for python 3.9 (:issue:`215`).
- Enforce minimum versions in `requirements.txt` and clean up required dependencies
(:issue:`199` and :pull:`219`).
v0.6.2 (19.01.2021)
-------------------
This is a minor bugfix release that corrects a problem occurring only in python 2.7 which
could lead to wrong coordinates of 3D masks derived with :py:meth:`Regions.mask_3D` and
:py:func:`mask_3D_geopandas`.
Bug Fixes
~~~~~~~~~
- Make sure ``Regions`` is sorted by the number of the individual regions. This was
previously not always the case. Either when creating regions with unsorted numbers
in python 3.6 and higher (e.g. ``Regions([poly2, poly1], [2, 1])``) or when indexing
regions in python 2.7 (e.g. ``regionmask.defined_regions.ar6.land[[30, 31, 32]]`` sorts
the regions as 32, 30, 31). This can lead to problems for :py:meth:`Regions.mask_3D` and
:py:func:`mask_3D_geopandas` (:issue:`200`).
v0.6.1 (19.08.2020)
-------------------
There were some last updates to the AR6 regions (``regionmask.defined_regions.ar6``).
If you use the AR6 regions please update the package. There were no functional changes.
v0.6.0 (30.07.2020)
-------------------
.. warning::

   This is the last release of regionmask that will support Python 2.7. Future releases
   will be Python 3 only, but older versions of regionmask will always be available
   for Python 2.7 users. For more details, see:

   - `Python 3 Statement <http://www.python3statement.org/>`__
Version 0.6.0 offers better support for shapefiles (via `geopandas
<https://geopandas.readthedocs.io>`__) and can directly create 3D boolean masks
which play nicely with xarray's ``weighted.mean(...)`` function. It also includes
a number of optimizations and bug fixes.
Breaking Changes
~~~~~~~~~~~~~~~~
- Points at *exactly* -180°E (or 0°E) and -90°N are now treated separately; such that a global
mask includes all gridpoints - see :doc:`methods<notebooks/method>` for details (:issue:`159`).
- :py:attr:`Regions.plot()` no longer colors the ocean per default. Use
:py:attr:`Regions.plot(add_ocean=True)` to restore the previous behavior (:issue:`58`).
- Changed the default style of the coastlines in :py:attr:`Regions.plot()`. To restore
the previous behavior use :py:attr:`Regions.plot(coastline_kws=dict())` (:pull:`146`).
Enhancements
~~~~~~~~~~~~
- Create 3D boolean masks using :py:meth:`Regions.mask_3D` and :py:func:`mask_3D_geopandas`
- see the :doc:`tutorial on 3D masks<notebooks/mask_3D>` (:issue:`4`, :issue:`73`).
- Create regions from geopandas/ shapefiles :py:attr:`from_geopandas`
(:pull:`101` by `Aaron Spring <https://github.com/aaronspring>`_).
- Directly mask geopandas GeoDataFrame and GeoSeries :py:attr:`mask_geopandas` (:pull:`103`).
- Added a convenience function to plot flattened 3D masks: :py:func:`plot_3D_mask` (:issue:`161`).
- :py:attr:`Regions.plot` and :py:attr:`Regions.plot_regions` now also displays region interiors.
All lines are now added at once using a ``LineCollection`` which is faster than
a loop and ``plt.plot`` (:issue:`56` and :issue:`107`).
- :py:attr:`Regions.plot` can now fill land areas with ``add_land``. Further, there is more
control over the appearance over the land and ocean features as well as the coastlines
using the ``coastline_kws``, ``ocean_kws``, and ``land_kws`` arguments (:issue:`140`).
- Split longitude if this leads to two equally-spaced parts. This can considerably speed up
creating a mask. See :issue:`127` for details.
- Added test to ensure ``Polygons`` with z-coordinates work correctly (:issue:`36`).
- Better repr for :py:class:`Regions` (:issue:`108`).
- Towards enabling the download of region definitions using `pooch <https://www.fatiando.org/pooch/>`_
(:pull:`61`).
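Why 3D boolean masks play nicely with ``weighted.mean`` can be sketched with plain lists (the values, cell areas, and mask layer below are invented; real code would pass the mask-weighted areas to xarray's ``weighted``):

```python
# Sketch of the regional weighted mean that 3D boolean masks enable.
# All numbers here are made up for illustration.
data = [1.0, 2.0, 3.0, 4.0]              # field values on 4 grid cells
area = [1.0, 1.0, 2.0, 2.0]              # cell areas, used as weights
region_mask = [True, True, False, True]  # one layer of a 3D boolean mask

# Masked cells contribute zero weight; the rest keep their area.
w = [a if m else 0.0 for a, m in zip(area, region_mask)]
regional_mean = sum(d * wi for d, wi in zip(data, w)) / sum(w)
assert regional_mean == (1 * 1 + 2 * 1 + 4 * 2) / 4.0  # = 2.75
```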
New regions
~~~~~~~~~~~
- Added the AR6 reference regions described in `Iturbide et al. (2020)
<https://essd.copernicus.org/preprints/essd-2019-258/>`_ (:pull:`61`).
- New marine regions from natural earth added as :py:attr:`natural_earth.ocean_basins_50`
(:pull:`91` by `Julius Busecke <https://github.com/jbusecke>`_).
Bug Fixes
~~~~~~~~~
- The natural earth shapefiles are now loaded with ``encoding="utf8"`` (:issue:`95`).
- Explicitly check that the numbers are numeric and raise an informative error (:issue:`130`).
- Do not subset coords with more than 10 vertices when plotting regions as this
can be slow (:issue:`153`).
Internal Changes
~~~~~~~~~~~~~~~~
- Decouple ``_maybe_get_column`` from its usage for naturalearth - so it can be
used to read columns from geodataframes (:issue:`117`).
- Switch to azure pipelines for testing (:pull:`110`).
- Enable codecov on azure (:pull:`115`).
- Install ``matplotlib-base`` for testing instead of ``matplotlib`` for tests,
seems a bit faster (:issue:`112`).
- Replaced all ``assertion`` with ``if ...: ValueError`` outside of tests (:issue:`142`).
- Raise consistent warnings on empty mask (:issue:`141`).
- Use a context manager for the plotting tests (:issue:`145`).
Docs
~~~~
- Combine the masking tutorials (xarray, numpy, and multidimensional coordinates)
into one (:issue:`120`).
- Use ``sphinx.ext.napoleon`` which fixes the look of the API docs. Also some
small adjustments to the docs (:pull:`125`).
- Set ``mpl.rcParams["savefig.bbox"] = "tight"`` in ``docs/defined_*.rst`` to avoid
spurious borders in the map plots (:issue:`112`).
v0.5.0 (19.12.2019)
-------------------
Version 0.5.0 offers a better performance, a consistent point-on-border behavior,
and also unmasks region interiors (holes). It also introduces a number of
deprecations. Please check the notebook on :doc:`methods<notebooks/method>` and
the details below.
Breaking Changes
~~~~~~~~~~~~~~~~
- :doc:`New behavior<notebooks/method>` for 'point-on-border' and region interiors:
- New 'edge behaviour': points that fall on the border of a region are now
treated consistently (:pull:`63`). Previously the edge behaviour was
not well defined and depended on the orientation of the outline (clockwise
vs. counter clockwise; :issue:`69` and `matplotlib/matplotlib#9704 <https://github.com/matplotlib/matplotlib/issues/9704>`_).
- Holes in regions are now excluded from the mask; previously they were included.
For the :code:`defined_regions`, this is relevant for the Caspian Sea in the
:py:attr:`naturalearth.land110` region and also for some countries in
:py:attr:`naturalearth.countries_50` (closes :issue:`22`).
- Renamed :py:class:`Regions_cls` to :py:class:`Regions` and changed its call
signature. This allows to make all arguments except :code:`outlines` optional.
- Renamed :py:class:`Region_cls` to :py:class:`_OneRegion` for clarity.
- Deprecated the :code:`centroids` keyword for :py:class:`Regions` (:issue:`51`).
- `xarray <http://xarray.pydata.org>`_ is now a hard dependency (:issue:`64`).
- The function :py:func:`regionmask.create_mask_contains` is deprecated and will be
removed in a future version. Use ``regionmask.Regions(coords).mask(lon, lat)``
instead.
Enhancements
~~~~~~~~~~~~
- New faster and consistent methods to rasterize regions:
- New algorithm to rasterize regions for equally-spaced longitude/ latitude grids.
Uses ``rasterio.features.rasterize``: this offers a 50x to 100x speedup compared
to the old method, and also has consistent edge behavior (closes :issue:`22` and
:issue:`24`).
- New algorithm to rasterize regions for grids that are not equally-spaced.
Uses ``shapely.vectorized.contains``: this offers a 2x to 50x speedup compared
to the old method. To achieve the same edge-behavior a tiny (10 ** -9) offset
is subtracted from lon and lat (closes :issue:`22` and :issue:`62`).
- Added a :doc:`methods page<notebooks/method>` to the documentation, illustrating
the algorithms, the edge behavior and treatment of holes (closes :issue:`16`).
- Added a test to ensure that the two new algorithms ("rasterize", "shapely")
yield the same result. Currently for 1° and 2° grid spacing (:issue:`74`).
- Automatically detect whether the longitude of the grid needs to be wrapped,
depending on the extent of the grid and the regions (closes :issue:`34`).
- Make all arguments to :py:class:`Regions` optional (except :code:`outlines`)
this should make it easier to create your own region definitions (closes :issue:`37`).
- Allow to pass arbitrary iterables to :py:class:`Regions` - previously these had to be of
type :code:`dict` (closes :issue:`43`).
- Added a :py:meth:`Regions.plot_regions` method that only plots the region borders
and not a map, as :py:meth:`Regions.plot`. The :py:meth:`Regions.plot_regions`
method can be used to plot the regions on an existing :code:`cartopy` map or a
regular axes (closes :issue:`31`).
- Added :py:attr:`Regions.bounds` and :py:attr:`Regions.bounds_global`
indicating the minimum bounding region of each and all regions, respectively.
Added :py:attr:`_OneRegion.bounds` (closes :issue:`33`).
- Add possibility to create an example dataset containing lon, lat and their
bounds (closes :issue:`66`).
- Added code coverage with pytest-cov and codecov.
Bug Fixes
~~~~~~~~~
- Regions were missing a line when the coords were not closed and
:code:`subsample=False` (:issue:`46`).
- Fix a regression introduced by :pull:`47`: when plotting regions containing
multipolygons :code:`_draw_poly` closed the region again and introduced a spurious
line (closes :issue:`54`).
- For a region defined via :code:`MultiPolygon`: use the centroid of the largest
:code:`Polygon` to add the label on a map. Previously the label could be placed
outside of the region (closes :issue:`59`).
- Fix regression: the offset was subtracted in ``mask.lon`` and ``mask.lat``;
test ``np.all(np.equal(mask.lon, lon))``, instead of ``np.allclose`` (closes
:issue:`78`).
- Rasterizing with ``"rasterize"`` and ``"shapely"`` was not equal when gridpoints
exactly fall on a 45° border outline (:issue:`80`).
- Conda channel mixing breaks travis tests. Only use conda-forge, add strict
channel priority (:issue:`27`).
- Fix documentation compilation on readthedocs (aborted, did not display
figures).
- Fix wrong figure in docs: countries showed landmask (:issue:`39`).
v0.4.0 (02.03.2018)
-------------------
Enhancements
~~~~~~~~~~~~
- Add landmask/ land 110m from `Natural Earth <http://www.naturalearthdata.com/downloads/110m-physical-vectors/>`_ (:issue:`21`).
- Moved some imports to functions, so :code:`import regionmask` is faster.
- Adapted docs for python 3.6.
Bug Fixes
~~~~~~~~~
- Columns of geodataframes can be in lower ('name') or upper case ('NAME') (:issue:`25`).
- Links to github issues not working, due to missing sphinx.ext.extlinks (:issue:`26`).
- Docs: mask_xarray.ipynb: mask no longer needs a name (as of :pull:`5`).
v0.3.1 (4 October 2016)
-----------------------
This is a bugfix/ cleanup release.
Bug Fixes
~~~~~~~~~
- travis was configured wrong - it always tested on python 2.7, thus some
python3 issues went unnoticed (:issue:`14`).
- natural_earth was not properly imported (:issue:`10`).
- A numpy scalar of dtype integer is not :code:`int` - i.e. :code:`isinstance(np.int32, int)`
is False (:issue:`11`).
- In python 3 :code:`zip` is an iterator (and not a :code:`list`), thus it failed on
:code:`mask` (:issue:`15`).
- Removed unnecessary files (ne_downloader.py and naturalearth.py).
- Resolved conflicting region outlines in the Giorgi regions (:issue:`17`).
v0.3.0 (20 September 2016)
--------------------------
- Allow passing 2 dimensional latitude and longitude grids (:issue:`8`).
v0.2.0 (5 September 2016)
-------------------------
- Add name for xarray mask (:issue:`3`).
- overhaul of the documentation
- move rtd / matplotlib handling to background
v0.1.0 (15 August 2016)
-----------------------
- first release on pypi
```
%matplotlib inline
from matplotlib import rcParams
rcParams["figure.dpi"] = 300
rcParams["font.size"] = 8
import warnings
warnings.filterwarnings("ignore")
```
# Plotting
Every region has two plotting functions, which draw the outlines of all regions:
- `plot`: draws the region polygons on a cartopy GeoAxes (map)
- `plot_regions`: draws the region polygons only
Import regionmask and check the version:
```
import regionmask
regionmask.__version__
```
We use the srex regions to illustrate the plotting:
```
srex = regionmask.defined_regions.srex
srex
```
## Plot all regions
Calling `plot()` on any region without any arguments draws the default map with a `PlateCarree()` projection and includes the coastlines:
```
srex.plot();
```
## Plot options
The `plot` method has a large number of arguments to adjust the layout of the axes. For example, you can pass a custom projection, the labels can display the abbreviation instead of the region number, the ocean can be colored, etc. This example also shows how to use `matplotlib.patheffects` to ensure the labels are easily readable without covering too much of the map (compare to the map above):
```
import cartopy.crs as ccrs
import matplotlib.patheffects as pe
text_kws = dict(
bbox=dict(color="none"),
path_effects=[pe.withStroke(linewidth=2, foreground="w")],
color="#67000d",
fontsize=8,
)
ax = srex.plot(
projection=ccrs.Robinson(), label="abbrev", add_ocean=True, text_kws=text_kws
)
ax.set_global()
```
## Plot only a Subset of Regions
To plot a selection of regions subset them using indexing:
```
# regions can be selected by number, abbreviation or long name
regions = [11, "CEU", "S. Europe/Mediterranean"]
# choose a good projection for regional maps
proj = ccrs.LambertConformal(central_longitude=15)
ax = srex[regions].plot(
add_ocean=True,
resolution="50m",
proj=proj,
label="abbrev",
text_kws=text_kws,
)
# fine tune the extent
ax.set_extent([-15, 45, 28, 76], crs=ccrs.PlateCarree())
```
## Plotting the region polygons only (no map)
```
srex.plot_regions();
```
To plot the region polygons on a cartopy map you need to explicitly create the axes:
```
import matplotlib.pyplot as plt
f, ax = plt.subplots(subplot_kw=dict(projection=ccrs.Robinson()))
srex.plot_regions(ax=ax, line_kws=dict(lw=1), text_kws=text_kws)
ax.coastlines()
ax.set_global()
```
```
%matplotlib inline
from matplotlib import rcParams
rcParams["figure.dpi"] = 300
rcParams["font.size"] = 8
import warnings
warnings.filterwarnings("ignore")
```
# Edge behavior and interiors
This notebook illustrates the edge behavior (when a grid point falls on the edge of a polygon) and how polygon interiors are treated.
## Preparation
Import regionmask and check the version:
```
import regionmask
regionmask.__version__
```
Other imports
```
import numpy as np
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from matplotlib import colors as mplc
from shapely.geometry import Polygon
```
Define some colors:
```
color1 = "#9ecae1"
color2 = "#fc9272"
cmap1 = mplc.ListedColormap([color1])
cmap2 = mplc.ListedColormap([color2])
cmap12 = mplc.ListedColormap([color1, color2])
```
## Methods
Regionmask offers two backends (internally called "methods"*) to rasterize regions:
1. `rasterize`: fastest but only for equally-spaced grid, uses `rasterio.features.rasterize` internally.
2. `shapely`: for irregular grids, uses `shapely.STRtree` internally.
Note: regionmask offers a third option: `pygeos` (which is faster than shapely < 2.0). However, shapely 2.0 replaces pygeos, so with shapely 2.0 it is no longer advantageous to install pygeos.
All methods use the `lon` and `lat` coordinates to determine if a grid cell is in a region. `lon` and `lat` are assumed to indicate the *center* of the grid cell. All methods have the same edge behavior and consider 'holes' in the regions. `regionmask` automatically determines which `method` to use.
The `shapely` (and `pygeos`) backends subtract a tiny offset from `lon` and `lat` to achieve an edge behaviour consistent with `rasterize`. Due to [mapbox/rasterio#1844](https://github.com/mapbox/rasterio/issues/1844) this is unfortunately also necessary for `rasterize`.
\*Note that all "methods" yield the same results.
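The offset trick can be sketched with a plain-Python point-in-box test. `in_box` below is a hypothetical helper for an axis-aligned box, not part of regionmask:

```python
# Illustrative sketch of the offset trick; `in_box` is a hypothetical
# helper, not regionmask's implementation.
OFFSET = 1e-9

def in_box(lon, lat, lon0, lon1, lat0, lat1):
    # shift the query point slightly to the lower left: points on the
    # right/top edges then fall inside, left/bottom edges fall outside
    lon, lat = lon - OFFSET, lat - OFFSET
    return lon0 <= lon < lon1 and lat0 <= lat < lat1

# box spanning -100°E..-80°E and 28°N..44°N
print(in_box(-100.0, 36.0, -100.0, -80.0, 28.0, 44.0))  # left edge: False
print(in_box(-80.0, 36.0, -100.0, -80.0, 28.0, 44.0))   # right edge: True
print(in_box(-90.0, 44.0, -100.0, -80.0, 28.0, 44.0))   # top edge: True
```

This reproduces the edge behaviour described below: gridpoints on the right/top outline of a region are assigned to it, those on the left/bottom are not.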
## Edge behavior
The edge behavior determines how points that fall on the outline of a region are treated. It's easiest to see the edge behaviour in an example.
### Example
Define a region and a lon/ lat grid, such that some gridpoints lie exactly on the border:
```
outline = np.array([[-80.0, 44.0], [-80.0, 28.0], [-100.0, 28.0], [-100.0, 44.0]])
region = regionmask.Regions([outline])
ds_US = regionmask.core.utils.create_lon_lat_dataarray_from_bounds(
*(-161, -29, 2), *(75, 13, -2)
)
print(ds_US)
```
Let's create a mask with each of these methods:
```
mask_rasterize = region.mask(ds_US, method="rasterize")
mask_shapely = region.mask(ds_US, method="shapely")
```
Plot the masked regions:
```
f, axes = plt.subplots(1, 2, subplot_kw=dict(projection=ccrs.PlateCarree()))
opt = dict(add_colorbar=False, ec="0.5", lw=0.5, transform=ccrs.PlateCarree())
mask_rasterize.plot(ax=axes[0], cmap=cmap1, **opt)
mask_shapely.plot(ax=axes[1], cmap=cmap2, **opt)
for ax in axes:
ax = region.plot_regions(ax=ax, add_label=False)
ax.set_extent([-105, -75, 25, 49], ccrs.PlateCarree())
ax.coastlines(lw=0.5)
ax.plot(
ds_US.LON, ds_US.lat, "*", color="0.5", ms=0.5, transform=ccrs.PlateCarree()
)
axes[0].set_title("backend: rasterize")
axes[1].set_title("backend: shapely")
None
```
Points indicate the grid cell centers (`lon` and `lat`), lines the grid cell borders, colored grid cells are selected to be part of the region. The top and right grid cells now belong to the region while the left and bottom grid cells do not. This choice is arbitrary but follows what `rasterio.features.rasterize` does. This avoids spurious columns of unassigned grid points as the following example shows.
### SREX regions
Create a global dataset:
```
ds_GLOB = regionmask.core.utils.create_lon_lat_dataarray_from_bounds(
*(-180, 181, 2), *(90, -91, -2)
)
srex = regionmask.defined_regions.srex
srex_new = srex.mask(ds_GLOB)
f, ax = plt.subplots(1, 1, subplot_kw=dict(projection=ccrs.PlateCarree()))
opt = dict(add_colorbar=False, cmap="viridis_r")
srex_new.plot(ax=ax, ec="0.7", lw=0.25, **opt)
srex.plot_regions(ax=ax, add_label=False, line_kws=dict(lw=1))
ax.set_extent([-135, -50, 24, 51], ccrs.PlateCarree())
ax.coastlines(resolution="50m", lw=0.25)
ax.plot(
ds_GLOB.LON, ds_GLOB.lat, "*", color="0.5", ms=0.5, transform=ccrs.PlateCarree()
)
sel = ((ds_GLOB.LON == -105) | (ds_GLOB.LON == -85)) & (ds_GLOB.LAT > 28)
ax.plot(
ds_GLOB.LON.values[sel],
ds_GLOB.LAT.values[sel],
"*",
color="r",
ms=0.5,
transform=ccrs.PlateCarree(),
)
ax.set_title("edge points are assigned to the left polygon", fontsize=9);
```
Not assigning the grid cells falling exactly on the border of a region (red points) would leave vertical stripes of unassigned cells.
### Points at -180°E (0°E) and -90°N
The described edge behaviour leads to a consistent treatment of points on the border. However, gridpoints at -180°E (or 0°E) and -90°N would *never* fall in any region.
We exemplify this with a region spanning the whole globe and a coarse longitude/ latitude grid:
```
# almost 360 to avoid wrap-around for the plot
lon_max = 360.0 - 1e-10
outline_global = np.array([[0, 90], [0, -90], [lon_max, -90], [lon_max, 90]])
region_global = regionmask.Regions([outline_global])
lon = np.arange(0, 360, 30)
lat = np.arange(90, -91, -30)
LON, LAT = np.meshgrid(lon, lat)
```
Create the masks:
```
# setting `wrap_lon=False` turns this feature off
mask_global_nontreat = region_global.mask(LON, LAT, wrap_lon=False)
mask_global = region_global.mask(LON, LAT)
```
And illustrate the issue:
```
proj = ccrs.PlateCarree(central_longitude=180)
f, axes = plt.subplots(1, 2, subplot_kw=dict(projection=proj))
f.subplots_adjust(wspace=0.05)
opt = dict(add_colorbar=False, ec="0.2", lw=0.25, transform=ccrs.PlateCarree())
ax = axes[0]
mask_global_nontreat.plot(ax=ax, cmap=cmap1, x="lon", y="lat", **opt)
ax.set_title("Not treating points at 0°E and -90°N", size=6)
ax.set_title("(a)", loc="left", size=6)
ax = axes[1]
mask_global.plot(ax=ax, cmap=cmap1, x="lon", y="lat", **opt)
ax.set_title("Treating points at 0°E and -90°N", size=6)
ax.set_title("(b)", loc="left", size=6)
for ax in axes:
ax = region_global.plot(
ax=ax,
line_kws=dict(lw=2, color="#b15928"),
add_label=False,
)
ax.plot(LON, LAT, "o", color="0.3", ms=1, transform=ccrs.PlateCarree(), zorder=5)
ax.spines["geo"].set_visible(False)
```
In the example the region spans the whole globe and there are gridpoints at 0°E and -90°N. Just applying the approach above leads to gridpoints that are not assigned to any region even though the region is global (as shown in a). Therefore, points at -180°E (or 0°E) and -90°N are treated specially (b):
Points at -180°E (0°E) are mapped to 180°E (360°E). Points at -90°N are slightly shifted northwards (by 1 * 10 ** -10). Then it is tested whether the shifted points belong to any region.
This means that (i) a point at -180°E is part of the region that is present at 180°E and not the one at -180°E (this is consistent with assigning points to the polygon *left* from it) and (ii) only the points at -90°N get assigned to the region above.
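This special treatment can be sketched in a few lines of numpy, assuming a -180°E..180°E grid. The helper name is illustrative, not regionmask's API:

```python
import numpy as np

# Hypothetical helper illustrating the special treatment of points at
# -180°E and -90°N (assumes a -180°E..180°E grid).
def treat_edge_points(lon, lat, offset=1e-10):
    lon = np.where(lon == -180.0, 180.0, lon)        # -180°E -> 180°E
    lat = np.where(lat == -90.0, lat + offset, lat)  # nudge northwards
    return lon, lat

lon, lat = treat_edge_points(np.array([-180.0, 0.0]), np.array([-90.0, 45.0]))
print(lon)  # [180.   0.]
```

The shifted coordinates are only used for the region test; the returned mask keeps the original `lon` and `lat` values.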
This is illustrated in the figure below:
```
outline_global1 = np.array([[-180.0, 60.0], [-180.0, -60.0], [0.0, -60.0], [0.0, 60.0]])
outline_global2 = np.array([[0.0, 60.0], [0.0, -60.0], [180.0, -60.0], [180.0, 60.0]])
region_global_2 = regionmask.Regions([outline_global1, outline_global2])
mask_global_2regions = region_global_2.mask(lon, lat)
ax = region_global_2.plot(
line_kws=dict(color="#b15928", zorder=3, lw=1.5),
add_label=False,
)
ax.plot(
LON, LAT, "o", color="0.3", lw=0.25, ms=2, transform=ccrs.PlateCarree(), zorder=5
)
mask_global_2regions.plot(ax=ax, cmap=cmap12, **opt)
ax.set_title("Points at -180°E are mapped to 180°E", size=6)
ax.spines["geo"].set_lw(0.25)
ax.spines["geo"].set_zorder(1);
```
## Polygon interiors
`Polygons` can have interior boundaries ('holes'). regionmask unmasks these regions.
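The rule can be sketched with two nested boxes (illustrative helpers, not regionmask code): a point belongs to the region if it lies within the outline but not within the hole.

```python
# Illustrative helpers (not regionmask code); the edge-behaviour
# offset is ignored here for brevity.
def in_rect(point, rect):
    x, y = point
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

outline = (-100.0, 28.0, -80.0, 44.0)  # outer boundary (x0, y0, x1, y1)
hole = (-94.0, 32.0, -86.0, 40.0)      # interior ring ('hole')

def in_region(point):
    # inside the outline but not inside the hole
    return in_rect(point, outline) and not in_rect(point, hole)

print(in_region((-97.0, 30.0)))  # True: in outline, outside the hole
print(in_region((-90.0, 36.0)))  # False: in the hole -> unmasked
```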
### Example
Let's test this on an example and define a `region_with_hole`:
```
interior = np.array(
[
[-86.0, 40.0],
[-86.0, 32.0],
[-94.0, 32.0],
[-94.0, 40.0],
]
)
poly = Polygon(outline, holes=[interior])
region_with_hole = regionmask.Regions([poly])
mask_hole_rasterize = region_with_hole.mask(ds_US, method="rasterize")
mask_hole_shapely = region_with_hole.mask(ds_US, method="shapely")
f, axes = plt.subplots(1, 2, subplot_kw=dict(projection=ccrs.PlateCarree()))
opt = dict(add_colorbar=False, ec="0.5", lw=0.5)
mask_hole_rasterize.plot(ax=axes[0], cmap=cmap1, **opt)
mask_hole_shapely.plot(ax=axes[1], cmap=cmap2, **opt)
for ax in axes:
region_with_hole.plot_regions(ax=ax, add_label=False, line_kws=dict(lw=1))
ax.set_extent([-105, -75, 25, 49], ccrs.PlateCarree())
ax.coastlines(lw=0.25)
ax.plot(
ds_US.LON, ds_US.lat, "o", color="0.5", ms=0.5, transform=ccrs.PlateCarree()
)
axes[0].set_title("backend: rasterize")
axes[1].set_title("backend: shapely")
None
```
Note how the edge behavior of the interior is inverse to the edge behavior of the exterior.
### Caspian Sea
The Caspian Sea is defined as a polygon interior.
```
land110 = regionmask.defined_regions.natural_earth_v5_0_0.land_110
mask_land110 = land110.mask(ds_GLOB)
f, ax = plt.subplots(1, 1, subplot_kw=dict(projection=ccrs.PlateCarree()))
mask_land110.plot(ax=ax, cmap=cmap2, add_colorbar=False)
ax.set_extent([15, 75, 25, 50], ccrs.PlateCarree())
ax.coastlines(resolution="50m", lw=0.5)
ax.plot(
ds_GLOB.LON, ds_GLOB.lat, ".", color="0.5", ms=0.5, transform=ccrs.PlateCarree()
)
ax.text(52, 43.5, "Caspian Sea", transform=ccrs.PlateCarree())
ax.set_title("Polygon interiors are unmasked");
```
```
%matplotlib inline
from matplotlib import rcParams
rcParams["figure.dpi"] = 300
# rcParams["font.size"] = 8
import warnings
warnings.filterwarnings("ignore")
# turn off pandas html repr:
# does not gracefully survive the ipynb -> rst -> html conversion
import pandas as pd
pd.set_option("display.notebook_repr_html", False)
```
# Working with geopandas (shapefiles)
regionmask includes support for regions defined as geopandas GeoDataFrame. These are often shapefiles, which can be opened in the formats `.zip`, `.shp`, `.geojson` etc. with `geopandas.read_file(url_or_path)`.
There are two possibilities:
1. Directly create a mask from a geopandas GeoDataFrame or GeoSeries using `mask_geopandas` or `mask_3D_geopandas`.
2. Convert a GeoDataFrame to a `Regions` object (regionmask's internal data container) using `from_geopandas`.
As always, start with the imports:
```
import cartopy.crs as ccrs
import geopandas as gp
import matplotlib.pyplot as plt
import matplotlib.patheffects as pe
import numpy as np
import pandas as pd
import pooch
import regionmask
regionmask.__version__
```
## Opening an example shapefile
The U.S. Geological Survey (USGS) offers a shapefile containing the outlines of continents [1]. We use the library pooch to locally cache the file:
```
file = pooch.retrieve(
"https://pubs.usgs.gov/of/2006/1187/basemaps/continents/continents.zip", None
)
continents = gp.read_file("zip://" + file)
display(continents)
```
## Create a mask from a GeoDataFrame
`mask_geopandas` and `mask_3D_geopandas` allow you to create a mask directly from a GeoDataFrame or GeoSeries:
```
lon = np.arange(-180, 180)
lat = np.arange(-90, 90)
mask = regionmask.mask_geopandas(continents, lon, lat)
```
Let's plot the new mask:
```
f, ax = plt.subplots(subplot_kw=dict(projection=ccrs.PlateCarree()))
mask.plot(
ax=ax,
transform=ccrs.PlateCarree(),
add_colorbar=False,
)
ax.coastlines(color="0.1");
```
Similarly a 3D boolean mask can be created from a GeoDataFrame:
```
mask_3D = regionmask.mask_3D_geopandas(continents, lon, lat)
```
and plotted:
```
from matplotlib import colors as mplc
cmap1 = mplc.ListedColormap(["none", "#9ecae1"])
f, ax = plt.subplots(subplot_kw=dict(projection=ccrs.PlateCarree()))
mask_3D.sel(region=0).plot(
ax=ax,
transform=ccrs.PlateCarree(),
add_colorbar=False,
cmap=cmap1,
)
ax.coastlines(color="0.1");
```
## 2. Convert GeoDataFrame to a Regions object
Creating a `Regions` object with `regionmask.from_geopandas` requires a GeoDataFrame:
```
continents_regions = regionmask.from_geopandas(continents)
continents_regions
```
This creates default names (`"Region0"`, ..., `"RegionN"`) and abbreviations (`"r0"`, ..., `"rN"`).
However, it is often advantageous to use columns of the GeoDataFrame as names and abbrevs. If no column with abbreviations is available, you can use `abbrevs='_from_name'`, which creates unique abbreviations using the names column.
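A possible way such abbreviations could be built is sketched below — the helper is purely illustrative and is not regionmask's actual algorithm:

```python
# Purely illustrative sketch of deriving unique abbreviations from a
# names column -- NOT regionmask's actual algorithm.
def abbrevs_from_names(names, n=3):
    abbrevs, counts = [], {}
    for name in names:
        abbrev = "".join(c for c in name if c.isalnum())[:n]
        counts[abbrev] = counts.get(abbrev, 0) + 1
        if counts[abbrev] > 1:
            # disambiguate duplicates by appending a counter
            abbrev = f"{abbrev}{counts[abbrev] - 1}"
        abbrevs.append(abbrev)
    return abbrevs

print(abbrevs_from_names(["Asia", "Asia Minor", "Africa"]))
# ['Asi', 'Asi1', 'Afr']
```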
```
continents_regions = regionmask.from_geopandas(
continents, names="CONTINENT", abbrevs="_from_name", name="continent"
)
continents_regions
```
As usual the newly created `Regions` object can be plotted on a world map:
```
text_kws = dict(
bbox=dict(color="none"),
path_effects=[pe.withStroke(linewidth=2, foreground="w")],
color="#67000d",
fontsize=9,
)
continents_regions.plot(label="name", add_coastlines=False, text_kws=text_kws);
```
And to create a mask for arbitrary latitude/ longitude grids:
```
lon = np.arange(0, 360)
lat = np.arange(-90, 90)
mask = continents_regions.mask(lon, lat)
```
which can then be plotted
```
f, ax = plt.subplots(subplot_kw=dict(projection=ccrs.PlateCarree()))
h = mask.plot(
ax=ax,
transform=ccrs.PlateCarree(),
cmap="Reds",
add_colorbar=False,
levels=np.arange(-0.5, 8),
)
cbar = plt.colorbar(h, shrink=0.625, pad=0.025, aspect=12)
cbar.set_ticks(np.arange(8))
cbar.set_ticklabels(continents_regions.names)
ax.coastlines(color="0.2")
continents_regions.plot_regions(add_label=False);
```
## References
[1] Environmental Systems Research , Inc. (ESRI), 20020401, World Continents: ESRI Data & Maps 2002, Environmental Systems Research Institute, Inc. (ESRI), Redlands, California, USA.
```
%matplotlib inline
from matplotlib import rcParams
rcParams["figure.dpi"] = 300
rcParams["font.size"] = 8
import warnings
warnings.filterwarnings("ignore")
```
# Overlapping regions
Two or more regions can share the same area - they overlap, as for example regions 3 and 4 of the [PRUDENCE regions](../defined_scientific.html#prudence-regions). This notebook illustrates how overlapping regions can be treated in regionmask.
## In short
When creating your own `Regions` you need to tell regionmask if they are overlapping:
```python
region = regionmask.Regions(..., overlap=True)
region = regionmask.from_geopandas(..., overlap=True)
```
If you have two overlapping regions and `overlap=False` regionmask will _silently_ assign the gridpoints of the overlapping regions to the one with the higher number, e.g., region 4 for PRUDENCE (this may change in a future version).
Note that `overlap` is correctly defined in `regionmask.defined_regions`.
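The "higher number wins" behaviour can be sketched in plain numpy — regions are burned into a 2D number mask one after another, so a later region overwrites earlier ones where they overlap (illustrative code, not regionmask internals):

```python
import numpy as np

# Illustrative sketch (not regionmask internals): where two regions
# overlap, the later (higher-numbered) region silently wins.
mask = np.full((4, 4), np.nan)
region_0 = np.zeros((4, 4), dtype=bool)
region_0[:, :2] = True  # 'vertical' region: left two columns
region_1 = np.zeros((4, 4), dtype=bool)
region_1[:2, :] = True  # 'horizontal' region: top two rows
for number, region in enumerate([region_0, region_1]):
    mask[region] = number
print(mask)  # the top-left overlap ends up as 1., not 0.
```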
## Example
To illustrate the problem we construct two regions in North America that partially overlap. One is horizontal, the other vertical.
**Preparation**
Import regionmask and check the version:
```
import regionmask
regionmask.__version__
```
Other imports
```
import xarray as xr
import numpy as np
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from matplotlib import colors as mplc
from shapely.geometry import Polygon
import matplotlib.patheffects as pe
```
Define some colors:
```
cmap = mplc.ListedColormap(["none", "#9ecae1"])
```
Define helper function:
```
def plot_region_vh(mask):
fg = mask.plot(
subplot_kws=dict(projection=ccrs.PlateCarree()),
col="region",
cmap=cmap,
add_colorbar=False,
transform=ccrs.PlateCarree(),
ec="0.5",
lw=0.25,
)
for ax in fg.axes.flatten():
region_vh[[0]].plot(ax=ax, add_label=False, line_kws=dict(color="#6a3d9a"))
region_vh[[1]].plot(ax=ax, add_label=False, line_kws=dict(color="#ff7f00"))
ax.set_extent([-105, -75, 25, 55], ccrs.PlateCarree())
ax.plot(
ds_US.LON, ds_US.lat, "*", color="0.5", ms=0.5, transform=ccrs.PlateCarree()
)
```
Define the polygons:
```
coords_v = np.array([[-90.0, 50.0], [-90.0, 28.0], [-100.0, 28.0], [-100.0, 50.0]])
coords_h = np.array([[-80.0, 50.0], [-80.0, 40.0], [-100.0, 40.0], [-100.0, 50.0]])
```
### Default behavior (`overlap=False`)
We first test what happens if we keep the default value of `overlap=False`:
```
region_vh = regionmask.Regions([coords_v, coords_h])
```
Create a mask
```
ds_US = regionmask.core.utils.create_lon_lat_dataarray_from_bounds(
*(-160, -29, 2), *(76, 13, -2)
)
mask_vh = region_vh.mask_3D(ds_US)
```
Plot the masked regions:
```
plot_region_vh(mask_vh)
```
The small gray points show the gridpoint center and the vertical and horizontal lines are the gridpoint boundaries. The colored rectangles are the two regions. The vertical region has the number 1 and the horizontal region the number 2.
We can see that only the gridpoints in the lower part of the vertical (magenta) region were assigned to it. All gridpoints of the overlapping part are now assigned to the horizontal (orange) region. As mentioned, the gridpoints are assigned to the region with the higher number. By switching the order of the regions you could have the common gridpoints assigned to the vertical region instead.
### Setting `overlap=True`
As mentioned regionmask assumes regions are not overlapping, so you need to pass `overlap=True` to the constructor:
```
region_overlap = regionmask.Regions([coords_v, coords_h], overlap=True)
region_overlap
```
Now it says `overlap: True` - and we can again create a mask:
```
mask_overlap = region_overlap.mask_3D(ds_US)
```
and plot it
```
plot_region_vh(mask_overlap)
```
Now the gridpoints in the overlapping part are assigned to both regions.
## PRUDENCE regions
The PRUDENCE regions are a real-world example of overlapping areas; they already set `overlap=True`.
```
prudence = regionmask.defined_regions.prudence
prudence
```
Regions 3 and 4 overlap in Western France:
```
proj = ccrs.LambertConformal(central_longitude=10)
text_kws = dict(
bbox=dict(color="none"),
path_effects=[pe.withStroke(linewidth=3, foreground="w")],
color="#67000d",
)
ax = prudence.plot(
projection=proj, text_kws=text_kws, resolution="50m", line_kws=dict(lw=0.75)
)
ax.set_extent([-10.0, 30.0, 40.0, 55.0], ccrs.PlateCarree())
```
### Create mask of PRUDENCE regions
```
lon = np.arange(-12, 33, 0.5)
lat = np.arange(72, 33, -0.5)
mask_prudence = prudence.mask_3D(lon, lat)
proj = ccrs.LambertConformal(central_longitude=10)
fg = mask_prudence.sel(region=[3, 4]).plot(
subplot_kws=dict(projection=proj),
col="region",
cmap=cmap,
add_colorbar=False,
transform=ccrs.PlateCarree(),
)
for ax in fg.axes.flatten():
regionmask.defined_regions.prudence.plot(
ax=ax, add_label=False, resolution="50m", line_kws=dict(lw=0.75)
)
```
As above, the gridpoints in the overlapping part are now assigned to both regions.
```
%matplotlib inline
from matplotlib import rcParams
rcParams["figure.dpi"] = 300
rcParams["font.size"] = 8
import warnings
warnings.filterwarnings("ignore")
```
# Create 3D boolean masks
In this tutorial we will show how to create 3D boolean masks for arbitrary latitude and longitude grids. It uses the same algorithm to determine if a gridpoint is in a region as for the 2D mask. However, it returns an `xarray.DataArray` with shape `region x lat x lon`: gridpoints that do not fall in a region are `False` and gridpoints that fall in a region are `True`.
3D masks are convenient as they can be used to directly calculate weighted regional means (over all regions) using xarray v0.15.1 or later. Further, the mask includes the region names and abbreviations as non-dimension coordinates.
Import regionmask and check the version:
```
import regionmask
regionmask.__version__
```
Load xarray and numpy:
```
import xarray as xr
import numpy as np
# don't expand data
xr.set_options(display_style="text", display_expand_data=False, display_width=60)
```
## Creating a mask
Define a lon/ lat grid with a 1° grid spacing, where the points define the center of the grid:
```
lon = np.arange(-179.5, 180)
lat = np.arange(-89.5, 90)
```
We will create a mask with the SREX regions (Seneviratne et al., 2012).
```
regionmask.defined_regions.srex
```
The function `mask_3D` determines which gridpoints lie within the polygon making up each region:
```
mask = regionmask.defined_regions.srex.mask_3D(lon, lat)
mask
```
As mentioned, `mask` is a boolean `xarray.DataArray` with shape `region x lat x lon`. It contains `region` (=`numbers`) as dimension coordinate as well as `abbrevs` and `names` as non-dimension coordinates (see the xarray docs for the details on the [terminology](http://xarray.pydata.org/en/stable/terminology.html)).
## Plotting
### Plotting individual layers
The four first layers look as follows:
```
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from matplotlib import colors as mplc
cmap1 = mplc.ListedColormap(["none", "#9ecae1"])
fg = mask.isel(region=slice(4)).plot(
subplot_kws=dict(projection=ccrs.PlateCarree()),
col="region",
col_wrap=2,
transform=ccrs.PlateCarree(),
add_colorbar=False,
aspect=1.5,
cmap=cmap1,
)
for ax in fg.axes.flatten():
ax.coastlines()
fg.fig.subplots_adjust(hspace=0, wspace=0.1);
```
### Plotting flattened masks
A 3D mask cannot be directly plotted - it needs to be flattened first. To do this regionmask offers a convenience function: `regionmask.plot_3D_mask`. The function takes a 3D mask as argument, all other keyword arguments are passed through to `xr.plot.pcolormesh`.
```
regionmask.plot_3D_mask(mask, add_colorbar=False, cmap="plasma");
```
## Working with a 3D mask
Masks can be used to select data in a certain region and to calculate regional averages - let's illustrate this with a 'real' dataset:
```
airtemps = xr.tutorial.load_dataset("air_temperature")
```
The example data is a temperature field over North America. Let's plot the first time step:
```
# choose a good projection for regional maps
proj = ccrs.LambertConformal(central_longitude=-100)
ax = plt.subplot(111, projection=proj)
airtemps.isel(time=1).air.plot.pcolormesh(ax=ax, transform=ccrs.PlateCarree())
ax.coastlines();
```
An xarray object can be passed to the `mask_3D` function:
```
mask_3D = regionmask.defined_regions.srex.mask_3D(airtemps)
mask_3D
```
Per default this creates a `mask` containing one layer (slice) for each region containing (at least) one gridpoint. As the example data only has values over North America we only get 6 layers even though there are 26 SREX regions. To obtain all layers specify `drop=False`:
```
mask_full = regionmask.defined_regions.srex.mask_3D(airtemps, drop=False)
mask_full
```
Note `mask_full` now has 26 layers.
### Select a region
As `mask_3D` contains `region`, `abbrevs`, and `names` as (non-dimension) coordinates we can use each of those to select an individual region:
```
# 1) by the index of the region:
r1 = mask_3D.sel(region=3)
# 2) with the abbreviation
r2 = mask_3D.isel(region=(mask_3D.abbrevs == "WNA"))
# 3) with the long name:
r3 = mask_3D.isel(region=(mask_3D.names == "E. North America"))
```
This also applies to the regionally-averaged data below.
It is currently not possible to use `sel` with a non-dimension coordinate - to directly select `abbrev` or `name` you need to create a `MultiIndex`:
```
mask_3D.set_index(regions=["region", "abbrevs", "names"]);
```
### Mask out a region
Using `where` a specific region can be 'masked out' (i.e. all data points outside of the region become `NaN`):
```
airtemps_cna = airtemps.where(r1)
```
Which looks as follows:
```
proj = ccrs.LambertConformal(central_longitude=-100)
ax = plt.subplot(111, projection=proj)
airtemps_cna.isel(time=1).air.plot(ax=ax, transform=ccrs.PlateCarree())
ax.coastlines();
```
We could now use `airtemps_cna` to calculate the regional average for 'Central North America'. However, there is a more elegant way.
### Calculate weighted regional averages
Using the 3-dimensional mask it is possible to calculate weighted averages of all regions in one go, using the `weighted` method (requires xarray 0.15.1 or later). As proxy of the grid cell area we use `cos(lat)`.
```
weights = np.cos(np.deg2rad(airtemps.lat))
ts_airtemps_regional = airtemps.weighted(mask_3D * weights).mean(dim=("lat", "lon"))
```
Let's break down what happens here. By multiplying `mask_3D * weights` we get a DataArray where gridpoints not in the region get a weight of 0. Gridpoints within a region get a weight proportional to the gridcell area. `airtemps.weighted(mask_3D * weights)` creates an xarray object which can be used for weighted operations. From this we calculate the weighted `mean` over the lat and lon dimensions. The resulting `DataArray` has the dimensions `region x time`:
```
ts_airtemps_regional
```
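The same computation can be sketched in plain numpy with synthetic data — a boolean mask of shape `(region, lat, lon)` multiplied by `cos(lat)` weights gives zero weight outside each region:

```python
import numpy as np

# Synthetic sketch of the weighted regional mean: boolean mask of
# shape (region, lat, lon), 2D data and cos(lat) weights.
lat = np.array([10.0, 20.0, 30.0])
data = np.arange(12, dtype=float).reshape(3, 4)  # (lat, lon)
mask = np.zeros((2, 3, 4), dtype=bool)
mask[0, :, :2] = True                            # region 0: western half
mask[1, :, 2:] = True                            # region 1: eastern half
weights = np.cos(np.deg2rad(lat))[:, np.newaxis]  # (lat, 1)
w = mask * weights                               # zero outside each region
regional_mean = (w * data).sum(axis=(1, 2)) / w.sum(axis=(1, 2))
print(regional_mean)                             # one value per region
```

This mirrors what `weighted(...).mean(...)` does for each region layer, here reduced to a single time step.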
The regionally-averaged time series can be plotted:
```
ts_airtemps_regional.air.plot(col="region", col_wrap=3);
```
### Restrict the mask to land points
Combining the mask of the regions with a land-sea mask we can create a land-only mask using the `land_110` region from NaturalEarth. Note that this land outline is coarse (1:110m). With this caveat in mind we can create the land-sea mask:
```
land_110 = regionmask.defined_regions.natural_earth_v5_0_0.land_110
land_mask = land_110.mask_3D(airtemps)
```
and plot it
```
proj = ccrs.LambertConformal(central_longitude=-100)
ax = plt.subplot(111, projection=proj)
land_mask.squeeze().plot.pcolormesh(
ax=ax, transform=ccrs.PlateCarree(), cmap=cmap1, add_colorbar=False
)
ax.coastlines();
```
To create the combined mask we multiply the two:
```
mask_lsm = mask_3D * land_mask.squeeze(drop=True)
```
Note the `.squeeze(drop=True)`. This is required to remove the `region` dimension from `land_mask`.
Finally, we compare the original mask with the one restricted to land points:
```
f, axes = plt.subplots(1, 2, subplot_kw=dict(projection=proj))
ax = axes[0]
mask_3D.sel(region=2).plot(
ax=ax, transform=ccrs.PlateCarree(), add_colorbar=False, cmap=cmap1
)
ax.coastlines()
ax.set_title("Regional mask: all points")
ax = axes[1]
mask_lsm.sel(region=2).plot(
ax=ax, transform=ccrs.PlateCarree(), add_colorbar=False, cmap=cmap1
)
ax.coastlines()
ax.set_title("Regional mask: land only");
```
## References
* Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX, Seneviratne et al., [2012](https://www.ipcc.ch/site/assets/uploads/2018/03/SREX-Ch3-Supplement_FINAL-1.pdf))
```
%matplotlib inline
from matplotlib import rcParams
rcParams["figure.dpi"] = 300
rcParams["font.size"] = 8
import warnings
warnings.filterwarnings("ignore")
```
# Create your own region
Creating your own regions is straightforward. Import regionmask and check the version:
```
import cartopy.crs as ccrs
import numpy as np
import matplotlib.pyplot as plt
import regionmask
regionmask.__version__
```
Assume you have two custom regions in the US; you can easily use them to create a `Regions` object:
```
US1 = np.array([[-100.0, 30], [-100, 40], [-120, 35]])
US2 = np.array([[-100.0, 30], [-80, 30], [-80, 40], [-100, 40]])
regionmask.Regions([US1, US2])
```
If you want to set the `names` and `abbrevs` yourself you can still do that:
```
names = ["US_west", "US_east"]
abbrevs = ["USw", "USe"]
USregions = regionmask.Regions([US1, US2], names=names, abbrevs=abbrevs, name="US")
USregions
```
Again, we can plot the outlines of the defined regions:
```
ax = USregions.plot(label="abbrev")
# fine tune the extent
ax.set_extent([225, 300, 25, 45], crs=ccrs.PlateCarree())
```
and obtain a mask:
```
import numpy as np
# define lat/ lon grid
lon = np.arange(200.5, 330, 1)
lat = np.arange(74.5, 15, -1)
mask = USregions.mask(lon, lat)
ax = plt.subplot(111, projection=ccrs.PlateCarree())
h = mask.plot(
transform=ccrs.PlateCarree(),
cmap="Paired",
add_colorbar=False,
vmax=12,
)
ax.coastlines()
# add the outlines of the regions
USregions.plot_regions(ax=ax, add_label=False)
ax.set_extent([225, 300, 25, 45], crs=ccrs.PlateCarree())
```
## Use shapely Polygon
You can also define regions with shapely polygons (see the [geopandas tutorial](geopandas.html) for how to work with shapefiles).
```
from shapely.geometry import Polygon, MultiPolygon
US1_poly = Polygon(US1)
US2_poly = Polygon(US2)
US1_poly, US2_poly
USregions_poly = regionmask.Regions([US1_poly, US2_poly])
USregions_poly
```
## Create Regions with MultiPolygon and interiors
Create two discontiguous regions and combine them into one. Add a hole to one of the regions:
```
US1_shifted = US1 - (5, 0)
US2_hole = np.array([[-98.0, 33], [-92, 33], [-92, 37], [-98, 37], [-98.0, 33]])
```
Create the `Polygon`s, a `MultiPolygon`, and finally the `Regions`:
```
US1_poly = Polygon(US1_shifted)
US2_poly = Polygon(US2, holes=[US2_hole])
US_multipoly = MultiPolygon([US1_poly, US2_poly])
USregions_poly = regionmask.Regions([US_multipoly])
USregions_poly.plot();
```
Create a mask:
```
mask = USregions_poly.mask(lon, lat)
```
and plot it:
```
ax = plt.subplot(111, projection=ccrs.PlateCarree())
mask.plot(transform=ccrs.PlateCarree(), add_colorbar=False)
ax.coastlines()
# fine tune the extent
ax.set_extent([225, 300, 25, 45], crs=ccrs.PlateCarree())
```
import itertools
import sys
from datetime import datetime
from typing import Dict, Iterator, Optional, Tuple
import conda.api # type: ignore[import]
import yaml
from dateutil.relativedelta import relativedelta
CHANNELS = ["conda-forge", "defaults"]
IGNORE_DEPS = {
    "black",
    "coveralls",
    "flake8",
    "hypothesis",
    "isort",
    "mypy",
    "pip",
    "pytest",
    "pytest-cov",
    "pytest-env",
    "pytest-xdist",
}
POLICY_MONTHS = {"python": 24, "numpy": 18, "setuptools": 42}
POLICY_MONTHS_DEFAULT = 12
POLICY_OVERRIDE = {
    # setuptools-scm doesn't work with setuptools < 36.7 (Nov 2017).
    # The conda metadata is malformed for setuptools < 38.4 (Jan 2018)
    # (it's missing a timestamp which prevents this tool from working).
    # setuptools < 40.4 (Sep 2018) from conda-forge cannot be installed into a py37
    # environment.
    # TODO: remove this special case and the matching note in installing.rst
    # after March 2022.
    "setuptools": (40, 4),
}
has_errors = False

def error(msg: str) -> None:
    global has_errors
    has_errors = True
    print("ERROR:", msg)

def warning(msg: str) -> None:
    print("WARNING:", msg)

def parse_requirements(fname) -> Iterator[Tuple[str, int, int, Optional[int]]]:
    """Load requirements/py*-min-all-deps.yml

    Yield (package name, major version, minor version, [patch version])
    """
    global has_errors

    with open(fname) as fh:
        contents = yaml.safe_load(fh)
    for row in contents["dependencies"]:
        if isinstance(row, dict) and list(row) == ["pip"]:
            continue
        pkg, eq, version = row.partition("=")
        if pkg.rstrip("<>") in IGNORE_DEPS:
            continue
        if pkg.endswith("<") or pkg.endswith(">") or eq != "=":
            error("package should be pinned with exact version: " + row)
            continue
        try:
            version_tup = tuple(int(x) for x in version.split("."))
        except ValueError:
            raise ValueError("non-numerical version: " + row)
        if len(version_tup) == 2:
            yield (pkg, *version_tup, None)  # type: ignore[misc]
        elif len(version_tup) == 3:
            yield (pkg, *version_tup)  # type: ignore[misc]
        else:
            raise ValueError("expected major.minor or major.minor.patch: " + row)

def query_conda(pkg: str) -> Dict[Tuple[int, int], datetime]:
    """Query the conda repository for a specific package

    Return map of {(major version, minor version): publication date}
    """

    def metadata(entry):
        version = entry.version
        time = datetime.fromtimestamp(entry.timestamp)
        major, minor = map(int, version.split(".")[:2])
        return (major, minor), time

    raw_data = conda.api.SubdirData.query_all(pkg, channels=CHANNELS)
    data = sorted(metadata(entry) for entry in raw_data if entry.timestamp != 0)

    release_dates = {
        version: [time for _, time in group if time is not None]
        for version, group in itertools.groupby(data, key=lambda x: x[0])
    }
    out = {version: min(dates) for version, dates in release_dates.items() if dates}

    # Hardcoded fix to work around incorrect dates in conda
    if pkg == "python":
        out.update(
            {
                (2, 7): datetime(2010, 6, 3),
                (3, 5): datetime(2015, 9, 13),
                (3, 6): datetime(2016, 12, 23),
                (3, 7): datetime(2018, 6, 27),
                (3, 8): datetime(2019, 10, 14),
            }
        )
    return out

def process_pkg(
    pkg: str, req_major: int, req_minor: int, req_patch: Optional[int]
) -> Tuple[str, str, str, str, str, str]:
    """Compare package version from requirements file to available versions in conda.

    Return row to build pandas dataframe:
    - package name
    - major.minor.[patch] version in requirements file
    - publication date of version in requirements file (YYYY-MM-DD)
    - major.minor version suggested by policy
    - publication date of version suggested by policy (YYYY-MM-DD)
    - status ("<", "=", "> (!)")
    """
    print(f"Analyzing {pkg}...")
    versions = query_conda(pkg)

    try:
        req_published = versions[req_major, req_minor]
    except KeyError:
        error("not found in conda: " + pkg)
        return pkg, fmt_version(req_major, req_minor, req_patch), "-", "-", "-", "(!)"

    policy_months = POLICY_MONTHS.get(pkg, POLICY_MONTHS_DEFAULT)
    policy_published = datetime.now() - relativedelta(months=policy_months)

    filtered_versions = [
        version
        for version, published in versions.items()
        if published < policy_published
    ]
    policy_major, policy_minor = max(filtered_versions, default=(req_major, req_minor))

    try:
        policy_major, policy_minor = POLICY_OVERRIDE[pkg]
    except KeyError:
        pass
    policy_published_actual = versions[policy_major, policy_minor]

    if (req_major, req_minor) < (policy_major, policy_minor):
        status = "<"
    elif (req_major, req_minor) > (policy_major, policy_minor):
        status = "> (!)"
        delta = relativedelta(datetime.now(), policy_published_actual).normalized()
        n_months = delta.years * 12 + delta.months
        error(
            f"Package is too new: {pkg}={req_major}.{req_minor} was "
            f"published on {versions[req_major, req_minor]:%Y-%m-%d} "
            f"which was {n_months} months ago (policy is {policy_months} months)"
        )
    else:
        status = "="

    if req_patch is not None:
        warning("patch version should not appear in requirements file: " + pkg)
        status += " (w)"

    return (
        pkg,
        fmt_version(req_major, req_minor, req_patch),
        req_published.strftime("%Y-%m-%d"),
        fmt_version(policy_major, policy_minor),
        policy_published_actual.strftime("%Y-%m-%d"),
        status,
    )

def fmt_version(major: int, minor: int, patch: Optional[int] = None) -> str:
    if patch is None:
        return f"{major}.{minor}"
    else:
        return f"{major}.{minor}.{patch}"

def main() -> None:
    fnames = sys.argv[1:]
    for fname in fnames:
        print(fname)
        print("=" * len(fname))
        rows = [
            process_pkg(pkg, major, minor, patch)
            for pkg, major, minor, patch in parse_requirements(fname)
        ]
        print("Package           Required             Policy               Status")
        print("----------------- -------------------- -------------------- ------")
        fmt = "{:17} {:7} ({:10}) {:7} ({:10}) {}"
        for row in rows:
            print(fmt.format(*row))
        print()

    assert not has_errors

if __name__ == "__main__":
    main()
import importlib.util
import sys
from typing import Any, Dict, Iterable, Iterator, Tuple
from register_it.utils import formatter_for_dict
class Registry(Iterable[Tuple[str, Any]]):
    """
    The registry that provides a name -> object mapping, supporting both classes and functions.

    To create a registry (e.g. a class registry and a function registry):
    ::

        DATASETS = Registry(name="dataset")
        EVALUATE = Registry(name="evaluate")

    To register an object:
    ::

        @DATASETS.register(name='mymodule')
        class MyModule(*args, **kwargs):
            ...

        @EVALUATE.register(name='myfunc')
        def my_func(*args, **kwargs):
            ...

    Or:
    ::

        DATASETS.register(name='mymodule', obj=MyModule)
        EVALUATE.register(name='myfunc', obj=my_func)

    To construct an object of the class or the function:
    ::

        DATASETS = Registry(name="dataset")
        # The callers of DATASETS are from the module `data`; we need to import it manually.
        DATASETS.import_module_from_module_names(["data"])

        EVALUATE = Registry(name="evaluate")
        # The callers of EVALUATE are from the module `evaluate`; we need to import it manually.
        EVALUATE.import_module_from_module_names(["evaluate"])
    """

    def __init__(self, name: str) -> None:
        """
        Args:
            name (str): the name of this registry
        """
        self._name: str = name
        self._obj_map: Dict[str, Any] = {}

    def _do_register(self, name: str, obj: Any) -> None:
        assert (
            name not in self._obj_map
        ), f"An object named '{name}' was already registered in '{self._name}' registry!"
        self._obj_map[name] = obj

    def register(self, name: str = None, *, obj: Any = None) -> Any:
        """
        Register the given object under `name`, or under `obj.__name__` if no name is given.
        Can be used either as a decorator or as a plain function call.
        See the docstring of this class and the examples in the `examples` folder for usage.
        """
        if name is not None:
            assert isinstance(name, str), f"name must be a str, but got {type(name)}"

        if obj is None:
            # used as a decorator
            def deco(func_or_class: Any) -> Any:
                key = func_or_class.__name__ if name is None else name
                self._do_register(key, func_or_class)
                return func_or_class

            return deco

        # used as a function call
        name = obj.__name__ if name is None else name
        self._do_register(name, obj)

    def get(self, name: str) -> Any:
        ret = self._obj_map.get(name)
        if ret is None:
            raise KeyError(f"No object named '{name}' found in '{self._name}' registry!")
        return ret

    def __getattr__(self, name: str):
        return self.get(name=name)

    def __getitem__(self, name: str):
        return self.get(name=name)

    def __contains__(self, name: str) -> bool:
        return name in self._obj_map

    def __repr__(self) -> str:
        table_headers = ["Names", "Objects"]
        table = formatter_for_dict(self._obj_map, headers=table_headers)
        return f"Registry of {self._name}:\n" + table

    __str__ = __repr__

    def __iter__(self) -> Iterator[Tuple[str, Any]]:
        return iter(self._obj_map.items())

    def keys(self):
        return self._obj_map.keys()

    def values(self):
        return self._obj_map.values()

    @staticmethod
    def import_module_from_module_names(module_names, verbose=True):
        for name in module_names:
            name_top = name.split(".")[0]
            to_import = True
            for _existing_module in sys.modules.keys():
                if _existing_module.split(".")[0] == name_top:
                    if verbose:
                        print(f"Module:{name} is already present in sys.modules ({_existing_module}).")
                    to_import = False
                    break
            if not to_import:
                continue

            module_spec = importlib.util.find_spec(name)
            if module_spec is None:
                raise ModuleNotFoundError(f"Module:{name} not found")
            if verbose:
                print(f"Module:{name} is being imported!")
            module = importlib.util.module_from_spec(module_spec)
            module_spec.loader.exec_module(module)
            if verbose:
                print(f"Module:{name} has been imported!")